MATH REFRESHER FOR SCIENTISTS AND ENGINEERS Third Edition
JOHN R. FANCHI
A JOHN WILEY & SONS, INC., PUBLICATION
Copyright © 2006 by John Wiley & Sons, Inc. All rights reserved.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.

Library of Congress Cataloging-in-Publication Data:

Fanchi, John R.
Math refresher for scientists and engineers / John R. Fanchi.—3rd ed.
p. cm.
Includes bibliographical references and index.
ISBN-13: 978-0-471-75715-3
ISBN-10: 0-471-75715-2
1. Mathematics. I. Title.
QA37.2.F35 2006
512′.1—dc22
2005056262

Printed in the United States of America

10 9 8 7 6 5 4 3 2 1
CONTENTS

PREFACE / xi

1 ALGEBRA / 1
  1.1 Algebraic Axioms / 1
  1.2 Algebraic Operations / 6
  1.3 Exponents and Roots / 7
  1.4 Quadratic Equations / 9
  1.5 Logarithms / 10
  1.6 Factorials / 12
  1.7 Complex Numbers / 13
  1.8 Polynomials and Partial Fractions / 17

2 GEOMETRY, TRIGONOMETRY, AND HYPERBOLIC FUNCTIONS / 21
  2.1 Geometry / 21
  2.2 Trigonometry / 26
  2.3 Common Coordinate Systems / 32
  2.4 Euler's Equation and Hyperbolic Functions / 34
  2.5 Series Representations / 37

3 ANALYTIC GEOMETRY / 41
  3.1 Line / 41
  3.2 Conic Sections / 44
  3.3 Polar Form of Complex Numbers / 48

4 LINEAR ALGEBRA I / 51
  4.1 Rotation of Axes / 51
  4.2 Matrices / 53
  4.3 Determinants / 61

5 LINEAR ALGEBRA II / 65
  5.1 Vectors / 65
  5.2 Vector Spaces / 69
  5.3 Eigenvalues and Eigenvectors / 71
  5.4 Matrix Diagonalization / 74

6 DIFFERENTIAL CALCULUS / 79
  6.1 Limits / 79
  6.2 Derivatives / 82
  6.3 Finite Difference Concept / 87

7 PARTIAL DERIVATIVES / 93
  7.1 Partial Differentiation / 94
  7.2 Vector Analysis / 96
  7.3 Analyticity and the Cauchy–Riemann Equations / 103

8 INTEGRAL CALCULUS / 107
  8.1 Indefinite Integrals / 107
  8.2 Definite Integrals / 109
  8.3 Solving Integrals / 112
  8.4 Numerical Integration / 114

9 SPECIAL INTEGRALS / 117
  9.1 Line Integral / 117
  9.2 Double Integral / 119
  9.3 Fourier Analysis / 121
  9.4 Fourier Integral and Fourier Transform / 124
  9.5 Time Series and Z Transform / 127
  9.6 Laplace Transform / 130

10 ORDINARY DIFFERENTIAL EQUATIONS / 133
  10.1 First-Order ODE / 133
  10.2 Higher-Order ODE / 141
  10.3 Stability Analysis / 142
  10.4 Introduction to Nonlinear Dynamics and Chaos / 145

11 ODE SOLUTION TECHNIQUES / 151
  11.1 Higher-Order ODE with Constant Coefficients / 151
  11.2 Variation of Parameters / 155
  11.3 Cauchy Equation / 157
  11.4 Series Methods / 158
  11.5 Laplace Transform Method / 164

12 PARTIAL DIFFERENTIAL EQUATIONS / 167
  12.1 Boundary Conditions / 168
  12.2 PDE Classification Scheme / 168
  12.3 Analytical Solution Techniques / 169
  12.4 Numerical Solution Methods / 175

13 INTEGRAL EQUATIONS / 181
  13.1 Classification / 181
  13.2 Integral Equation Representation of a Second-Order ODE / 182
  13.3 Solving Integral Equations: Neumann Series Method / 185
  13.4 Solving Integral Equations with Separable Kernels / 187
  13.5 Solving Integral Equations with Laplace Transforms / 188

14 CALCULUS OF VARIATIONS / 191
  14.1 Calculus of Variations with One Dependent Variable / 191
  14.2 The Beltrami Identity and the Brachistochrone Problem / 195
  14.3 Calculus of Variations with Several Dependent Variables / 198
  14.4 Calculus of Variations with Constraints / 200

15 TENSOR ANALYSIS / 203
  15.1 Contravariant and Covariant Vectors / 203
  15.2 Tensors / 207
  15.3 The Metric Tensor / 210
  15.4 Tensor Properties / 213

16 PROBABILITY / 219
  16.1 Set Theory / 219
  16.2 Probability Defined / 222
  16.3 Properties of Probability / 222
  16.4 Probability Distribution Defined / 227

17 PROBABILITY DISTRIBUTIONS / 229
  17.1 Joint Probability Distribution / 229
  17.2 Expectation Values and Moments / 233
  17.3 Multivariate Distributions / 235
  17.4 Example Probability Distributions / 240

18 STATISTICS / 245
  18.1 Probability and Frequency / 245
  18.2 Ungrouped Data / 247
  18.3 Grouped Data / 249
  18.4 Statistical Coefficients / 250
  18.5 Curve Fitting, Regression, and Correlation / 252

SOLUTIONS TO EXERCISES / 257
  S1 Algebra / 257
  S2 Geometry, Trigonometry, and Hyperbolic Functions / 262
  S3 Analytic Geometry / 267
  S4 Linear Algebra I / 269
  S5 Linear Algebra II / 276
  S6 Differential Calculus / 280
  S7 Partial Derivatives / 286
  S8 Integral Calculus / 290
  S9 Special Integrals / 293
  S10 Ordinary Differential Equations / 299
  S11 ODE Solution Techniques / 306
  S12 Partial Differential Equations / 312
  S13 Integral Equations / 316
  S14 Calculus of Variations / 323
  S15 Tensor Analysis / 327
  S16 Probability / 331
  S17 Probability Distributions / 333
  S18 Statistics / 337

REFERENCES / 339

INDEX / 343
PREFACE
Math Refresher for Scientists and Engineers, Third Edition is intended for people with technical backgrounds who would like to refresh their math skills. This book is unique because it contains in one source an overview of the essential elements of a wide range of mathematical topics that are normally found in separate texts.

The first edition began with relatively simple concepts in college algebra and trigonometry and then proceeded to more advanced concepts ranging from calculus to linear algebra (including matrices) and differential equations. Numerical methods were interspersed throughout the presentation. The second edition added chapters that discussed probability and statistics. In this third edition, three new chapters with exercises and solutions have been added. The new material includes chapters on integral equations, the calculus of variations, and tensor analysis. Furthermore, the discussion of integral transforms has been expanded, a section on partial fractions has been added, and several new exercises have been included.

Math Refresher for Scientists and Engineers, Third Edition is designed for the adult learner and is suitable for reference, self-review, adult education, and college review. It is especially useful for the professional who wants to understand the latest technology, the engineer who is preparing to take a professional engineering exam, or students who wish to refresh their math background.

The focus of the book is on modern, practical applications and exercises rather than theory. Chapters are organized to include a review of important principles and methods. Interwoven with the review are examples, exercises, and applications. Examples are intended to clarify concepts. Exercises are designed to make you an active participant in the review process. Keeping in mind that your time is valuable, I have restricted the number of exercises to a number that should complement and supplement the text while minimizing nonessential repetition. Solutions to exercises are separated from the exercises to allow for a self-paced review. Applications provide an integration of concepts and methods from one or more sections. In many cases, they introduce you to modern topics. Applications may include material that has not yet been covered in the book, but should be familiar if you are using the book as a "refresher course." You may wish to return to the applications after a first reading of the rest of the text.

I developed much of the material in this book as course notes for continuing education courses in Denver and Houston. I would like to thank Kathy Fanchi, Cindee Calton, Chris Fanchi, and Stephanie Potter for their assistance in the development of this material. Any written comments or suggestions for improving the material are welcome.

JOHN R. FANCHI
CHAPTER 1
ALGEBRA
Many practical applications of advanced mathematics assume the practitioner is fluent in the language of algebra. This review of algebra includes a succinct discussion of sets and groups, as well as a presentation of basic operations with both real and complex numbers. Although much of this material may appear to be elementary to the reader, its presentation here establishes a common terminology and notation for later sections.
1.1 ALGEBRAIC AXIOMS
Sets

A set is a collection of objects. The objects are called elements or members of the set.

Example: The expression "a and b belong to the set A" can be written a, b ∈ A, where a, b are elements or members of set A.

Let iff be the abbreviation for "if and only if." Two sets A, B are equal iff they have the same members. In other words, sets A, B satisfy the equality A = B iff set A has the same members as set B.
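These set operations (membership, equality, union, and intersection, all introduced in this section) can be checked with Python's built-in set type; a minimal sketch using the same sets A and B as the examples below:

```python
# Sets are unordered collections of distinct elements.
A = {2, 3, 4}
B = {4, 5, 6}

# Membership and equality
print(3 in A)        # True: 3 is an element of A
print(3 in B)        # False: 3 does not belong to B
print(A == B)        # False: the sets do not have the same members

# Union and intersection (A ∪ B and A ∩ B in the text's notation)
print(A | B)         # {2, 3, 4, 5, 6}
print(A & B)         # {4}
```

Python's `|` and `&` operators on sets correspond directly to ∪ and ∩.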
Example: Let A = {2, 3, 4} and B = {4, 5, 6}; then 3 ∈ A but 3 ∉ B, where ∉ denotes "does not belong to" or "is not a member of." We conclude that set A and set B are not equal.

The union of two sets A and B (A ∪ B) is the set of all elements that belong to A or to B or to both. The intersection of two sets A and B (A ∩ B) is the set of all elements that belong to both A and B.

Example: Let A = {2, 3, 4} and B = {4, 5, 6}; then A ∪ B = {2, 3, 4, 5, 6} and A ∩ B = {4}.

Union and intersection may be depicted by a Venn diagram (Figure 1.1). All the elements of a set are enclosed in a circle. The union of the sets is the total area bounded by the circles in the Venn diagram. The intersection is the overlapping cross-hatched area.

The null set or empty set is the set with no elements and is denoted by ∅.

The set of real numbers ℝ includes rational numbers and irrational numbers. A rational number is a number that may be written as the quotient of two integers. All other real numbers are irrational numbers. A real number classification scheme is shown as follows:

Rational numbers {a/b | a, b ∈ integers, b ≠ 0}
    Noninteger rationals (e.g., 1/2, 1/3, ...)
    Integer rationals {..., −2, −1, 0, 1, 2, ...}
        Negative integers {..., −3, −2, −1}
        Zero {0}
        Natural numbers {1, 2, 3, ...}
Irrational numbers

A function f from a set A to a set B assigns to each element x ∈ A a unique element y ∈ B. This may be written f : A → B, and the function f is said to be a
Figure 1.1 Venn diagram.
map of elements from set A to set B. The element y is the value of f at x and is written y = f(x). The set A is the domain of f, and x is an element in the domain of f. The set B is the range of f, and the value of y is in the range of f.

Example: The equation y = x² is a function that maps the element x ∈ ℝ to the element y ∈ ℝ. The element y is a function of x, which is often written as y = f(x). By contrast, the equation y = x² does not uniquely define x as a function of y because x is not single valued; that is, y = 4 is associated with both x = 2 and x = −2.

For further discussion of functions, see texts such as Swokowski [1986] or Riddle [1992].

Operations

Square Root   For a ∈ ℝ, where a is positive or zero, the notation √a signifies the square root of a. It is the nonnegative real number whose product with itself gives a; thus √a · √a = a. The notation √a is sometimes called the principal square root [Stewart et al., 1992]. The negative square root of a is −√a.

Example: Let a = 169; then √a is 13 since 13 · 13 = 169. The number a = 169 has the negative square root −√169 = −13.
Absolute Value   For all a ∈ ℝ, if a is positive or zero, then |a| = a; if a is negative, then |a| = −a.

Example: Let a = {5, −1, 6, −14.3}; then |a| = {5, 1, 6, 14.3}.
Greater Than   Elements can be ordered using the concepts of greater than and less than. For all a, b ∈ ℝ, b is less than a if for some positive real number c we have a = b + c. For such a condition, a is said to be greater than b, which may be written as a > b. Alternatively, a is greater than b if a − b > 0. Similarly, we may say that b is less than a and write it as b < a if a − b > 0.

Example: Let a = 3 and b = −2. Is a greater than b? To satisfy b + c = a, we must have c = 5. Since c is a positive real number, we know that a > b.
Groups Many of the axioms presented below will seem obvious, particularly when thought of in terms of the set of real numbers. The validity of the axioms becomes much less obvious when applied to different sets, such as sets of matrices.
Axioms   A binary operation ab between a pair of elements a, b ∈ G exists if ab ∈ G. Let G be a nonempty set with a binary operation. G is a group if the following axioms hold:

[G1] Associative law: For any a, b, c ∈ G, (ab)c = a(bc).
[G2] Identity element: There exists an element e ∈ G such that ae = ea = a for every a ∈ G.
[G3] Inverse: For each a ∈ G there exists an element a⁻¹ ∈ G such that aa⁻¹ = a⁻¹a = e.

Example: Let a = 3, b = 2, and c = 5. We verify the associative law for multiplication by example:

(ab)c = a(bc)
(3 · 2) · 5 = 3 · (2 · 5)
6 · 5 = 3 · 10
30 = 30

The identity element for multiplication of real numbers is 1, and the inverse of a real number a is 1/a as long as a ≠ 0. The inverse of a = 0 is undefined.
Abelian Groups   A group G is an abelian group if ab = ba for every a, b ∈ G. The elements of the group are said to commute. The commutativity of the product of two elements a, b may be written

[a, b] ≡ ab − ba = 0

Commutativity is always satisfied for the set of real numbers but is often not satisfied for matrices.

Example: Let a and b be the matrices

a = | 0  1 |,    b = | 1   0 |
    | 1  0 |         | 0  −1 |

Then

ab = | 0  1 | | 1   0 |  =  | 0  −1 |
     | 1  0 | | 0  −1 |     | 1   0 |

ba = | 1   0 | | 0  1 |  =  |  0  1 |
     | 0  −1 | | 1  0 |     | −1  0 |

Therefore matrices a and b do not commute; that is, ab ≠ ba.

Hint: To perform the multiplication of the above matrices, recall that

| a11  a12 | | b11  b12 |  =  | a11 b11 + a12 b21    a11 b12 + a12 b22 |
| a21  a22 | | b21  b22 |     | a21 b11 + a22 b21    a21 b12 + a22 b22 |
where {aij} and {bij} are elements of the matrices a and b, respectively. An analogous relation applies when the order of multiplication changes from ab to ba. For more discussion of matrices, see Chapter 4.

Rings

Axioms   Let R be a nonempty set with two binary operations:
1. Addition (denoted by +), and
2. Multiplication (denoted by juxtaposition).

The nonempty set R is a ring if the following axioms hold:

[R1] Associative law of addition: For any a, b, c ∈ R, (a + b) + c = a + (b + c).
[R2] Zero element: There exists an element 0 ∈ R such that a + 0 = 0 + a = a for every a ∈ R.
[R3] Negative of a: For each a ∈ R there exists an element −a ∈ R such that a + (−a) = (−a) + a = 0.
[R4] Commutative law of addition: For any a, b ∈ R, a + b = b + a.
[R5] Associative law of multiplication: For any a, b, c ∈ R, (ab)c = a(bc).
[R6] Distributive law: For any a, b, c ∈ R, we have (i) a(b + c) = ab + ac, and (ii) (b + c)a = ba + ca.
Subtraction is defined in R by a − b ≡ a + (−b). R is a commutative ring if ab = ba for every a, b ∈ R. R is a ring with a unit element if there exists a nonzero element 1 ∈ R such that a · 1 = 1 · a = a for every a ∈ R.

Integral Domain and Field   A commutative ring R with a unit element is an integral domain if R has no zero divisors, that is, if ab = 0 implies a = 0 or b = 0. A commutative ring R with a unit element is a field if every nonzero a ∈ R has a multiplicative inverse; that is, there exists an element a⁻¹ ∈ R such that aa⁻¹ = a⁻¹a = 1.
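The non-commuting matrix example from the discussion of abelian groups can be checked numerically; a sketch using plain nested lists (no linear algebra library assumed):

```python
def matmul2(p, q):
    """Multiply two 2x2 matrices using the row-column rule from the hint."""
    return [[p[0][0]*q[0][0] + p[0][1]*q[1][0], p[0][0]*q[0][1] + p[0][1]*q[1][1]],
            [p[1][0]*q[0][0] + p[1][1]*q[1][0], p[1][0]*q[0][1] + p[1][1]*q[1][1]]]

a = [[0, 1], [1, 0]]
b = [[1, 0], [0, -1]]

ab = matmul2(a, b)   # [[0, -1], [1, 0]]
ba = matmul2(b, a)   # [[0, 1], [-1, 0]]
print(ab != ba)      # True: a and b do not commute
```

Note that ab = −ba here, so the commutator ab − ba is nonzero.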
1.2 ALGEBRAIC OPERATIONS

Algebraic operations may be presented as a collection of axioms. In all cases assume a, b, c, d ∈ ℝ. The following presents equality axioms:

EQUALITY AXIOMS
Reflexive law: a = a
Symmetric law: If a = b, then b = a
Transitive law: If a = b and b = c, then a = c
Substitution law: If a = b, then a may be substituted for b or b for a in any expression
Ordering relations obey the following axioms:

ORDER AXIOMS

Trichotomy law: Exactly one of the following is true: a < b, a = b, or a > b
Transitive law: If a < b and b < c, then a < c
Closure for positive numbers: If a, b > 0, then a + b > 0 and ab > 0
Axioms for addition and multiplication operations are summarized as follows:

ADDITION AXIOMS

Closure law for addition: a + b ∈ ℝ
Commutative law for addition: a + b = b + a
Associative law for addition: (a + b) + c = a + (b + c)
Identity law of addition: a + 0 = 0 + a = a
Additive inverse law: a + (−a) = (−a) + a = 0
MULTIPLICATION AXIOMS

Closure law for multiplication: ab ∈ ℝ
Commutative law for multiplication: ab = ba
Associative law for multiplication: (ab)c = a(bc)
Identity law of multiplication: a · 1 = 1 · a = a
Multiplication inverse law: a · (1/a) = (1/a) · a = 1 for a ≠ 0
Distributive law: a · (b + c) = a · b + a · c
Several algebraic properties follow from the axioms. Some of the most useful are as follows:

MISCELLANEOUS ALGEBRAIC PROPERTIES

1. a · 0 = 0.
2. −(−a) = a.
3. −a = −1 · a.
4. If a = b, then a + c = b + c.
5. If a + c = b + c, then a = b.
6. If a = b, then a · c = b · c.
7. If a · c = b · c, then a = b for c ≠ 0.
8. a − b = a + (−b).
9. a/b = c/d if and only if a · d = b · c for b, d ≠ 0.
10. a/b = (a · c)/(b · c) for b, c ≠ 0.
11. (a/c) + (b/c) = (a + b)/c for c ≠ 0.
12. (a/b) · (c/d) = (a · c)/(b · d) for b, d ≠ 0.
For a more detailed discussion of algebraic axioms and properties, see such references as Swokowski [1986], Breckenbach et al. [1985], Gustafson and Frisk [1980], and Rich [1973].
1.3 EXPONENTS AND ROOTS

Exponents

Exponents obey the three laws tabulated as follows:

EXPONENTS
Products: a^m · a^n = a^(m+n)
Quotient: a^m / a^n = a^(m−n) if m > n
          a^m / a^n = 1 if m = n
          a^m / a^n = 1 / a^(n−m) if m < n
Power: (a^m)^n = a^(mn)
The number a raised to a negative power is given by a^(−m) = 1/a^m. Any nonzero real number raised to the power 0 equals 1; thus a^0 = 1 if a ≠ 0. In the case of the number 0, we have the exponential relationship 0^x = 0 for all x > 0, while 0^0 is indeterminate.
8
MATH REFRESHER FOR SCIENTISTS AND ENGINEERS
Example: Scientific notation illustrates the use of exponentiation. In particular, numbers such as 86,400 and 0.00001 can be written in the form 8.64 × 10⁴ and 1 × 10⁻⁵, respectively. Properties of exponents are then used to perform calculations; thus

(86,400)(0.00001) = (8.64 × 10⁴)(1 × 10⁻⁵)
                  = (8.64 × 1)(10⁴ × 10⁻⁵)
                  = (8.64)(10⁴⁻⁵)
                  = 8.64 × 10⁻¹ = 0.864

Scientific notation is a means of compactly writing very large or very small numbers. It is also useful for making order-of-magnitude estimates. In this case, numbers are written so that they can be rounded off to approximately 1 × 10ⁿ, and then exponents are combined to estimate products or quotients. For example, the number 86,400 is approximated as 8.64 × 10⁴ = 0.864 × 10⁵ ≈ 1 × 10⁵ so that

(86,400)(0.00001) ≈ (1 × 10⁵)(1 × 10⁻⁵) = 1

as expected.

Roots

The solution of the equation bⁿ = a may be written formally as the nth root of a equals b, or

a^(1/n) = ⁿ√a = b

A real number raised to a fraction p/q may be written as the qth root of a^p, or

a^(p/q) = (a^p)^(1/q)

A summary of relations for roots is as follows:

ROOTS
Product: a^(1/x) = ˣ√a
         (ab)^(1/x) = a^(1/x) · b^(1/x)
Quotient: (a/b)^(1/x) = a^(1/x) / b^(1/x)
Power: (a^(1/y))^x = a^(x/y)
       (a^(1/y))^(1/x) = a^(1/(xy))
1.4 QUADRATIC EQUATIONS
The solutions of the quadratic equation

ax² + bx + c = 0

are

x± = (−b ± √(b² − 4ac)) / (2a)

where x₊ is the solution when the square root term is added and x₋ is the solution when the square root term is subtracted.

The quadratic equation is an example of a polynomial equation. Any polynomial equation with a term xⁿ, where the degree n is the largest positive integer, has n solutions. The quadratic equation has degree n = 2 and has two solutions. Cubic equations, which are often encountered in chemical equations of state, have degree n = 3 and have three solutions. The solutions are not necessarily real. Polynomial equations are discussed in more detail in Section 1.8.

Example: The solutions of 2x² − 6x + 4 = 0 are

x₊ = (6 + √((−6)² − 4(2)(4))) / (2(2)) = 2
x₋ = (6 − √((−6)² − 4(2)(4))) / (2(2)) = 1

These solutions can be used to factor the quadratic equation into a product of terms that have degree n = 1; thus

2x² − 6x + 4 = 2(x − 1)(x − 2) = 0
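The quadratic formula translates directly into code; a sketch using cmath so that the case b² − 4ac < 0, where the two solutions are complex, is handled automatically:

```python
import cmath

def quadratic_roots(a, b, c):
    """Return (x_plus, x_minus) solving a*x**2 + b*x + c = 0, with a != 0."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

# The example from the text: 2x^2 - 6x + 4 = 0 has roots x+ = 2 and x- = 1
print(quadratic_roots(2, -6, 4))   # ((2+0j), (1+0j))
```

With real coefficients and a nonnegative discriminant the imaginary parts are zero, as in the printed output.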
EXERCISE 1.1: Given ax² + bx + c = 0, find x in terms of a, b, c.

The following exercise may be solved using elementary algebraic operations from the preceding sections.

EXERCISE 1.2: (a) Expand and simplify (x + y)², (x − y)², (x + y)³, (x + y + z)², (ax + b)(cx + d), and (x + y)(x − y). (b) Factor 3x³ + 6x²y + 3xy².
1.5 LOGARITHMS
Definition   The logarithm of the real number x > 0 to the base a is written as loga x and is defined by the relationship: If x = a^y, then y = loga x.

Logarithms are the inverse operation of exponentiation and obey the following three laws:

LOGARITHMS
Product: loga(xy) = loga x + loga y
Quotient: loga(x/y) = loga x − loga y
Power: loga(xⁿ) = n loga x

In addition, loga(1) = 0 and logx(x) = 1.
Logarithms to the base e ≈ 2.71828 are called natural logarithms and are written as ln x ≡ loge x. The equation for changing the base of the logarithm from base a to base b is logb x = loga x / loga b.

EXERCISE 1.3: Let y = loga x. Change the base of the logarithm from base a to base b, then set a = 10 and b = 2.71828 ≈ e and simplify.

Application: Fractals. Benoit Mandelbrot [1967] introduced the concept of fractional dimension, or fractal, to describe the complexities of geographical curves. One of the motivating factors behind this work was an attempt to determine the lengths of coastlines. This problem is discussed here as an introduction to fractals.

We can express the length of a coastline Lc by writing

Lc = N ε        (1.5.1)

where N is the number of measurement intervals with fixed length ε. For example, ε could be the length of a meter stick. Geographers have long been aware that the length of a coastline depends on its regularity and the unit of length ε. They found that Lc for an irregular coastline, such as the coast of Australia (Figure 1.2), increases as the scale of measurement ε gets shorter. This behavior is caused by the ability of the smaller measurement interval to more accurately include the lengths of irregularities in the measurement of Lc.
Figure 1.2 Irregular boundary.
Richardson [see Mandelbrot, 1983] suggested that N has a power law dependence on ε such that

N ∝ ε^(−D)        (1.5.2)

or

N = F ε^(−D)        (1.5.3)

where F is a proportionality constant and D is an empirical parameter. Substituting Eq. (1.5.3) into Eq. (1.5.1) gives

L(ε) = F ε^(1−D)        (1.5.4)

Taking the log of Eq. (1.5.4) yields

log L(ε) = (1 − D) log ε + log F        (1.5.5)

If we define x = log ε and y = log L(ε), then Eq. (1.5.5) has the form of the equation of a straight line, y = mx + b, where m is the slope and b is the intercept of the straight line with the y axis (see Chapter 3). A plot of log L versus log ε should yield a straight line with slope 1 − D. Richardson applied Eq. (1.5.4) to several geographic examples and found that the power law did hold [Mandelbrot, 1983, p. 33]. Values of D are given below.

Mandelbrot [1967] reinterpreted Richardson's empirical parameter D as a fractional dimension, or fractal. He went on to say that Eq. (1.5.4) applies because a coastline is self-similar; that is, its shape does not depend on the size of the measurement
interval. For a more detailed discussion of self-similarity and fractals, see a text such as Devaney [1992].
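The slope-fitting procedure implied by Eq. (1.5.5) can be sketched in a few lines: generate length measurements L(ε) = F ε^(1−D) for an assumed D, take logs, fit a straight line by least squares, and recover D from the slope. The data here are synthetic, not Richardson's measurements:

```python
import math

# Synthetic coastline data: L(eps) = F * eps**(1 - D) with an assumed D = 1.25
D_true, F = 1.25, 10.0
eps = [1.0, 0.5, 0.25, 0.125, 0.0625]
L = [F * e**(1 - D_true) for e in eps]

# Least-squares slope of y = log L versus x = log eps
xs = [math.log10(e) for e in eps]
ys = [math.log10(l) for l in L]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar)**2 for x in xs)

# Eq. (1.5.5): slope = 1 - D, so D = 1 - slope
print(1 - slope)   # approximately 1.25: recovers the assumed dimension
```

With noisy real measurements the fit would only approximate D, which is how the empirical values in the table below were obtained.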
Example                              Fractional Dimension D
West coast of Britain                1.25
German land frontier, A.D. 1900      1.15
Australian coast                     1.13
South African coast                  1.02
Straight line                        1.00

1.6 FACTORIALS
The quantity n! is pronounced "n factorial" and is defined as the product

n! = n · (n − 1) · (n − 2) · · · (1)

By definition, 0! and 1! are equal to 1; thus

0! = 1! = 1

Example: 4! = 4 · 3 · 2 · 1 = 24.
Application: Permutations and Combinations. Factorials are used to calculate permutations and combinations. A permutation is an arrangement of distinct objects in a particular order. The number of permutations Pₖⁿ of n distinct objects taken k at a time is

Pₖⁿ = n! / (n − k)!

If we are not concerned about the order of arrangement of the distinct objects, then we can calculate the number of combinations Cₖⁿ of n distinct objects taken k at a time as

Cₖⁿ = n! / (k!(n − k)!)

The number of combinations Cₖⁿ is often written as the binomial coefficient (n over k). The relationship between permutation Pₖⁿ and combination Cₖⁿ is

Cₖⁿ = Pₖⁿ / k!
A more detailed discussion of permutations and combinations is presented in several sources, such as Kreyszig [1999], Larsen and Marx [1985], and Mendenhall and Beaver [1991].

EXERCISE 1.4: What are the odds of winning a lotto? To win a lotto, you must select 6 numbers without replacement from a set of 42. Order is not important. Note that the combination of n different things taken k at a time without replacement is

(n over k) = n! / (k!(n − k)!)
Stirling's Approximation

We can approximate the factorial of a large number with the expression

n! ≈ e⁻ⁿ nⁿ √(2πn)   for large n (n ≫ 1)

Alternatively, taking the natural logarithm of Stirling's approximation gives

ln n! ≈ n ln n − n + ½ ln(2πn)

where ln x = loge x. The series representation of Stirling's approximation is

n! = √(2πn) (n/e)ⁿ [1 + 1/(12n) + smaller terms]

EXERCISE 1.5: Calculate 42!/36! using Stirling's approximation for both 42! and 36!.
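A quick comparison of the leading-order Stirling approximation against the exact factorial (a sketch; the relative error shrinks as n grows):

```python
import math

def stirling(n):
    """Leading-order Stirling approximation: n! ~ e**(-n) * n**n * sqrt(2*pi*n)."""
    return math.exp(-n) * n**n * math.sqrt(2 * math.pi * n)

n = 10
exact = math.factorial(n)   # 3628800
approx = stirling(n)
print(approx / exact)       # about 0.992, i.e. within roughly 1% of exact
```

Keeping the 1/(12n) correction from the series representation would tighten the ratio further.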
1.7 COMPLEX NUMBERS

If we solve the simple quadratic equation

x² + 1 = 0

we find

x² = −1

which has the solution

x = ±√(−1) ≡ ±i
This solution requires an extension of the number system to include imaginary numbers, that is, numbers that contain the square root of a negative number as a factor. Another common notation, especially in electrical engineering, for √(−1) is j = √(−1). For consistency, we use i = √(−1) in this book. It follows from the solution for x that

i² = −1

In general, a complex number is an ordered pair of real numbers (x, y) such that

z = x + iy

Properties

The real part of a complex number z = x + iy is x, and the imaginary part is y. The real and imaginary parts of a complex number may be written using the notation

Re(z) = x,   Im(z) = y

Complex numbers are often graphed in a two-dimensional plane called the complex plane or Argand diagram (Figure 1.3). Two complex numbers

z₁ = x₁ + iy₁
z₂ = x₂ + iy₂
Figure 1.3 Argand diagram (complex plane).
satisfy the equality z₁ = z₂ if and only if

x₁ = x₂,   y₁ = y₂

The sum and difference of two complex numbers are

z₁ + z₂ = (x₁ + x₂) + i(y₁ + y₂)

and

z₁ − z₂ = (x₁ − x₂) + i(y₁ − y₂)

The product of two complex numbers is

z₁z₂ = (x₁ + iy₁)(x₂ + iy₂) = x₁x₂ + iy₁x₂ + iy₂x₁ + i²y₁y₂ = x₁x₂ + i(y₁x₂ + y₂x₁) − y₁y₂

The conjugate of a complex number z is denoted by z* and is found by changing i to −i wherever it occurs; thus

z₁* = x₁ − iy₁

The complex conjugate z* is the mirror image of z when the complex numbers z and z* are plotted in the complex plane illustrated in Figure 1.3. The real x axis acts as the mirror for reflecting one point into another.

Example:

z₁z₁* = (x₁ + iy₁)(x₁ − iy₁) = x₁² + y₁²

Division is the inverse of multiplication. The ratio z₁/z₂ of two complex numbers is simplified by calculating (z₁z₂*)/(z₂z₂*) so that the denominator is real. This operation is illustrated in Exercise 1.6.
EXERCISE 1.6: Factor z₁/z₂ by calculating

z₁/z₂ = (z₁z₂*)/(z₂z₂*)
Complex numbers satisfy the following laws:

COMPLEX NUMBERS

Commutative law: z₁ + z₂ = z₂ + z₁
                 z₁z₂ = z₂z₁
Associative law: (z₁ + z₂) + z₃ = z₁ + (z₂ + z₃)
                 (z₁z₂)z₃ = z₁(z₂z₃)
Distributive law: z₁(z₂ + z₃) = z₁z₂ + z₁z₃

The real and imaginary parts of a complex number z may be calculated from z and z* as

Re(z) = x = (z + z*)/2

and

Im(z) = y = (z − z*)/(2i)

respectively. Conjugation of complex numbers satisfies the following frequently encountered relations:

Sum: (z₁ + z₂)* = z₁* + z₂*
Difference: (z₁ − z₂)* = z₁* − z₂*
Product: (z₁z₂)* = z₁*z₂*
Quotient: (z₁/z₂)* = z₁*/z₂*
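Python's built-in complex type mirrors these definitions directly (Python writes i as j); a sketch verifying the conjugation relations and the Re/Im formulas on two arbitrary complex numbers:

```python
z1 = 3 + 4j
z2 = 1 - 2j

# Product rule for conjugation: (z1 z2)* = z1* z2*
assert (z1 * z2).conjugate() == z1.conjugate() * z2.conjugate()

# Re(z) = (z + z*)/2 and Im(z) = (z - z*)/(2i)
z = z1
assert (z + z.conjugate()) / 2 == z.real
assert (z - z.conjugate()) / (2j) == z.imag

# Division multiplies through by z2*: z1/z2 = (z1 z2*)/(z2 z2*)
print(z1 / z2)   # (-1+2j)
```

The printed quotient matches the hand calculation (3 + 4i)(1 + 2i)/5 = −1 + 2i.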
EXERCISE 1.7: Suppose z = x + iy and z* = x − iy. Find x, y in terms of z, z*.

Application: Particle Lifetimes. The lifetime of some elementary particles may be inferred from measurements of energy distributions. The energy representation may be written as

g(ε) = i / (ε + iλ/2)

where ε is particle energy and λ is particle lifetime. The marginal probability density of observing a particle with energy ε and lifetime λ is proportional to g*(ε)g(ε), where

g*(ε)g(ε) = 1 / (ε² + λ²/4)
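A numeric spot-check of the g*(ε)g(ε) expression (a sketch; the ε and λ values are arbitrary):

```python
eps, lam = 1.7, 0.4   # arbitrary energy and lifetime values

g = 1j / (eps + 1j * lam / 2)
lhs = (g.conjugate() * g).real        # g*(eps) g(eps); imaginary part is zero
rhs = 1 / (eps**2 + lam**2 / 4)

print(abs(lhs - rhs) < 1e-12)         # True: matches 1/(eps^2 + lam^2/4)
```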
EXERCISE 1.8: Given

g = i / (ε + iλ/2)

evaluate g*g.
1.8 POLYNOMIALS AND PARTIAL FRACTIONS

Monomials and Polynomials   A term of the form cxⁿ, where x is a variable, c is a constant, and n is a positive integer or zero, is called a monomial function of x. A function that is the sum of a finite number of monomial terms is called a polynomial in x and has the form

P(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ··· + a₁x + a₀,   aₙ ≠ 0        (1.8.1)
The power n is called the degree of the polynomial.

Algebraic and Transcendental Functions   An algebraic function y of x is a function of the form

P0(x) y^n + P1(x) y^(n−1) + ⋯ + P_(n−1)(x) y + P_n(x) = 0    (1.8.2)

where {Pi(x): i = 0, 1, …, n} are polynomials in x and n is a positive integer. A function that is not an algebraic function is called a transcendental function.

Example: The function y = √x with x > 0 is an algebraic function whose elements (x, y) satisfy the equation y² − x = 0. For this case, P0(x) = 1, P1(x) = 0, and P2(x) = −x in Eq. (1.8.2) with n = 2.

Rational Function   A rational function R(x) is the quotient of two polynomials N(x), D(x) such that

R(x) = N(x)/D(x)    (1.8.3)
provided that D(x) ≠ 0. The rational function R(x) is called proper if the degree of N(x) is less than the degree of D(x). If the degree of N(x) is greater than the degree of D(x), then R(x) is called improper.

Partial Fractions   Suppose the degree of D(x) in Eq. (1.8.3) is less than the degree of N(x), so that R(x) is an improper rational function. In this case, we can
divide N(x) by D(x) to obtain a quotient q(x) and a remainder r(x). The resulting ratio is

R(x) = N(x)/D(x) = q(x) + r(x)/D(x)    (1.8.4)
where the degree of r(x) is less than the degree of D(x). If the degree of N(x) is less than the degree of D(x), then R(x) is a proper rational function and the quotient q(x) = 0. The ratio r(x)/D(x) may be decomposed into a sum of simpler terms using the method of partial fractions. The method of partial fractions is applicable to ratios of polynomials where the degree of D(x) in Eq. (1.8.4) is greater than the degree of r(x). The method for decomposing r(x)/D(x) depends on the form of D(x). Techniques for determining partial fractions are presented in several sources, such as Zwillinger [2003] and Arfken and Weber [2001]. The following examples illustrate different partial fraction techniques.

Example: Consider a polynomial D(x) of degree k. Suppose D(x) may be written as a product of terms that are linear in x, so that

D(x) = ∏ (i = 1 to k) (ai x + bi) = (a1 x + b1)(a2 x + b2) ⋯ (ak x + bk)    (1.8.5)

where {ai, bi} are constants and none of the factors (ai x + bi) is duplicated. The ratio r(x)/D(x) can be written as the sum of partial fractions

r(x)/D(x) = Σ (i = 1 to k) Ai/(ai x + bi)    (1.8.6)
The constants {Ai} are determined by multiplying Eq. (1.8.6) by D(x) and expanding. The resulting equation is a polynomial identity; equating coefficients of the k powers x^0, …, x^(k−1) gives a system of k equations for the k unknown constants {Ai}.

Example: Consider a polynomial E(x) of degree k. Suppose E(x) may be written as a product of terms of the form

E(x) = [∏ (i = 1 to k−2) (ai x + bi)] (x² + ak x + bk)    (1.8.7)

where {ai, bi} are constants. If the factors of the quadratic term x² + ak x + bk are complex, which occurs when the coefficients {ak, bk} satisfy the inequality ak² < 4bk, then the partial fraction associated with the quadratic term can be
written in the form (Ak x + Bk)/(x² + ak x + bk). The ratio r(x)/E(x) can be written as the sum of partial fractions

r(x)/E(x) = Σ (i = 1 to k−2) Ai/(ai x + bi) + (Ak x + Bk)/(x² + ak x + bk)    (1.8.8)

An application of this technique is presented in Exercise 13.4.

EXERCISE 1.9: Write 1/(s² + ω²) in terms of partial fractions.

EXERCISE 1.10: Suppose r(x) = x + 3 and D(x) = x² + x − 2. Expand r(x)/D(x) by the method of partial fractions.
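The coefficient-matching procedure can be checked numerically. The sketch below uses a hypothetical ratio (2x + 1)/((x − 1)(x + 3)), not one from the text; the constants are worked out by evaluating the expanded identity at the roots of the factors:

```python
# Partial-fraction check for r(x)/D(x) = (2x + 1)/((x - 1)(x + 3)).
# Multiplying Eq. (1.8.6) by D(x) gives 2x + 1 = A1*(x + 3) + A2*(x - 1);
# evaluating at the roots x = 1 and x = -3 yields A1 = 3/4 and A2 = 5/4.
A1, A2 = 3.0 / 4.0, 5.0 / 4.0

def lhs(x):
    return (2 * x + 1) / ((x - 1) * (x + 3))

def rhs(x):
    return A1 / (x - 1) + A2 / (x + 3)

# Agreement at sample points away from the poles x = 1, x = -3
# confirms the expansion.
for x in [-10.0, -1.0, 0.5, 2.0, 7.0]:
    assert abs(lhs(x) - rhs(x)) < 1e-12
```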
CHAPTER 2
GEOMETRY, TRIGONOMETRY, AND HYPERBOLIC FUNCTIONS
A thorough review of mathematics for scientists and engineers would be incomplete without a discussion of the basic concepts of geometry, trigonometry, and hyperbolic functions. Trigonometric and hyperbolic functions are examples of transcendental functions (see Chapter 1, Section 1.8). Like algebra, these topics are essential building blocks for discussing more advanced topics. In addition to definitions, this chapter contains many identities and relations, including definitions of common coordinate systems, Euler's equation, and an introduction to series representations that are useful in subsequent chapters.

2.1 GEOMETRY
Angles An angle is determined by two rays having the same initial point O as in the sketch.
Math Refresher for Scientists and Engineers, Third Edition. By John R. Fanchi. Copyright © 2006 John Wiley & Sons, Inc.
The angle θ may be written as angle AOB, and the point O is often referred to as the vertex of the two lines OA and OB. Angles may be measured in degrees or in radians. Each degree may be subdivided into 60 minutes, and each minute may be further subdivided into 60 seconds. If the angle θ equals 360 degrees (denoted 360°), the line OB will lie on the line OA. There are 2π radians in an angle of 360°. The value of π is approximately 3.14159. The following presents terminology for some special angles:

TERMINOLOGY FOR SPECIAL ANGLES

θ = 90°           Right angle
0° < θ < 90°      Acute angle
90° < θ < 180°    Obtuse angle

Two acute angles α, β are complementary if α + β = 90°. Two positive angles u, v are supplementary if u + v = 180°. Some useful angular relations are presented as follows:

ANGULAR RELATIONS
Radians   0    π/4   π/2   π     3π/2   2π
Degrees   0°   45°   90°   180°  270°   360°
Shapes

Right Triangle and Oblique Triangle   The sum of the angles of any triangle in Euclidean geometry must equal 180°:

α + β + γ = 180°

A triangle that does not contain a right angle is called an oblique triangle. A right triangle is a triangle with one angle equal to 90°, as shown in Figure 2.1. Thus for a right triangle we have the relationship α + β = γ = 90° between the acute angles α, β. The Pythagorean relation between the lengths of the sides of the right triangle is

a² + b² = c²
Figure 2.1   Right triangle.
The area of the right triangle is

Area = ab/2

Equilateral Triangle   When the lengths of all three sides of a triangle are equal, we have an equilateral triangle. Since the lengths of each side are equal, we also have the angular equality

α = β = γ = 60°

and the usual sum

α + β + γ = 180°

The area of an equilateral triangle (Figure 2.2) is

Area = (√3/4) a²
Figure 2.2   Equilateral triangle.
Figure 2.3   Circle.

Circle and Sphere   A circle (Figure 2.3) and a sphere with radius r have the following properties:

Circle:  Diameter 2r;  Circumference 2πr;  Area πr²
Sphere:  Diameter 2r;  Surface Area 4πr²;  Volume 4πr³/3
The chord of a circle is a line segment intersecting two points on the circle. The products of the lengths of the segments of any two intersecting chords are related by ab = cd, where the lengths are defined by Figure 2.4. Suppose one chord is perpendicular to a diameter so that

x(2r − x) = h²
Figure 2.4   Intersecting chords.
If x ≪ 2r, we obtain the approximation

x ≈ h²/(2r)
which is useful in optics.
Right Circular Cylinder   A right circular cylinder with height h and radius r is shown in Figure 2.5. The right circular cylinder has the following properties:

Total Surface Area = 2πrh + 2πr² = 2πr(h + r)
Volume = πr²h

Right Circular Cone   A right circular cone with height h and radius r is shown in Figure 2.6. From the Pythagorean relation for a triangle, we find the slant height s is

s = (r² + h²)^(1/2)

The right circular cone has the following properties:

Total Surface Area = πrs + πr² = πr[(r² + h²)^(1/2) + r]
Volume = πr²h/3
Figure 2.5   Right circular cylinder.
Figure 2.6   Right circular cone.

2.2 TRIGONOMETRY
Trigonometric functions are defined as ratios of the lengths of sides of a right triangle. Using the notation of Figure 2.1, we make the following definitions for an acute angle α:

sine α = sin α = a/c = opposite/hypotenuse
cosecant α = csc α = c/a = 1/sin α
cosine α = cos α = b/c = adjacent/hypotenuse
secant α = sec α = c/b = 1/cos α
tangent α = tan α = a/b = opposite/adjacent
cotangent α = cot α = b/a = 1/tan α

Notice that the sides a, b, c are referred to as the opposite, adjacent, and hypotenuse, respectively. Several useful relations between trigonometric functions can be derived by recalling that the right triangle satisfies the Pythagorean relation a² + b² = c². A few of these relations are presented as follows:

TRIGONOMETRIC RELATIONS

Pythagorean
  sin²A + cos²A = 1

Angle-sum and angle-difference
  sin(A ± B) = sin A cos B ± cos A sin B
  cos(A ± B) = cos A cos B ∓ sin A sin B
  tan(A ± B) = (tan A ± tan B)/(1 ∓ tan A tan B)

Double-angle
  sin 2A = 2 sin A cos A
  cos 2A = cos²A − sin²A = 1 − 2 sin²A = 2 cos²A − 1
  tan 2A = 2 tan A/(1 − tan²A)

Product
  sin A cos B = ½ sin(A + B) + ½ sin(A − B)
  sin A sin B = ½ cos(A − B) − ½ cos(A + B)
  cos A cos B = ½ cos(A − B) + ½ cos(A + B)

Power
  sin²A = ½(1 − cos 2A)
  cos²A = ½(1 + cos 2A)
  tan²A = (1 − cos 2A)/(1 + cos 2A)

Half-angle
  sin²(A/2) = ½(1 − cos A)
  cos²(A/2) = ½(1 + cos A)
  tan²(A/2) = (1 − cos A)/(1 + cos A)

Euler
  e^{iA} = cos A + i sin A,  i = √(−1)
  sin A = (e^{iA} − e^{−iA})/(2i)
  cos A = ½(e^{iA} + e^{−iA})
  tan A = −i(e^{iA} − e^{−iA})/(e^{iA} + e^{−iA})
EXERCISE 2.1: Verify the following relations:

sin²α + cos²α = 1
1 + tan²α = sec²α
1 + cot²α = csc²α
sin α = tan α cos α
cos α = cot α sin α
tan α = sin α/cos α

by rearranging the trigonometric definitions and using the Pythagorean relation.

A positive angle corresponds to a counterclockwise rotation of the hypotenuse from the x axis, and a negative angle corresponds to a clockwise rotation of the hypotenuse from the x axis, as illustrated in Figure 2.7. Based on a geometric analysis and the definitions of trigonometric functions, we find the following relations between trigonometric functions with positive and negative angles:

sin(−α) = −a/c = −sin α
cos(−α) = b/c = cos α
tan(−α) = −a/b = −tan α
Figure 2.7   Angular rotations.
The numbers a, b in Figure 2.7 are the ordinate (y coordinate) and abscissa (x coordinate), respectively.

Example: We verify that tan(−α) = −tan α by using the above relations with the definitions of trigonometric functions. First write

tan(−α) = sin(−α)/cos(−α)

which, by the above relations, gives the expected result

tan(−α) = −sin α/cos α = −tan α
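Several of the identities above can be spot-checked numerically. A minimal Python sketch (the angles chosen are arbitrary):

```python
import math

# Spot-check trigonometric relations at arbitrary angles (radians).
A, B = 0.7, 0.3

# Pythagorean relation
assert abs(math.sin(A)**2 + math.cos(A)**2 - 1.0) < 1e-12
# Angle-sum relation
assert abs(math.sin(A + B)
           - (math.sin(A) * math.cos(B) + math.cos(A) * math.sin(B))) < 1e-12
# Double-angle relation
assert abs(math.cos(2 * A) - (1.0 - 2.0 * math.sin(A)**2)) < 1e-12
# Negative-angle relation tan(-A) = -tan(A)
assert abs(math.tan(-A) + math.tan(A)) < 1e-12
```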
Trigonometric functions are also known as circular functions because of the relationship between the definitions of trigonometric functions and a circle with radius r, as sketched in Figure 2.8. A right triangle is formed by the three sides a, b, r corresponding to the ordinate, abscissa, and radius, respectively. The general definitions of trigonometric functions for an arbitrary angle Q follow:

sin Q = ordinate/radius = a/r
cos Q = abscissa/radius = b/r
tan Q = ordinate/abscissa = a/b

The radius r is always positive, but the abscissa and ordinate can have positive or negative values. The sign of the trigonometric function depends on the sign of the ordinate and abscissa.

Figure 2.8   General definition of trigonometric functions.
Figure 2.9 Graph of trigonometric functions.
Figure 2.9 shows plots of the trigonometric functions sin x, cos x, and tan x for the angle x. The value of tan x is undefined at the angles x = 90°, 270°, where the abscissa b = 0.

EXERCISE 2.2: RLC Circuit. The alternating current I(t) for a resistor–inductor–capacitor circuit with an applied electromotive force is

I(t) = −[E0 S/(S² + R²)] cos(ωt) + [E0 R/(S² + R²)] sin(ωt)

Use trigonometric relations to write I(t) in the form I(t) = I0 sin(ωt − δ), where the maximum current I0 and phase δ are expressed in terms of E0, S, and R. A derivation of I(t) is given in Chapter 11, Exercise 11.1.

Law of Cosines   Suppose we want to find the distance C between two points A, B in Figure 2.10. The point A is at (xA, yA) = (b, 0) and point B is at (xB, yB) = (a cos θ, a sin θ), where the angle θ is the angle between line OA and line OB. The distance between points A and B is the length C. In general, the length C in Cartesian coordinates is given by

C² = (xB − xA)² + (yB − yA)²
Figure 2.10   Law of cosines.
Substituting the trigonometric values of the Cartesian coordinates gives

C² = (a cos θ − b)² + (a sin θ)²

Expanding the parenthetic terms

C² = a² cos²θ − 2ab cos θ + b² + a² sin²θ

and rearranging lets us write

C² = a²(cos²θ + sin²θ) + b² − 2ab cos θ

Using the identity cos²θ + sin²θ = 1 gives the law of cosines:

C² = a² + b² − 2ab cos θ

The Pythagorean theorem is obtained when θ = 90° (Figure 2.11).

Law of Sines   The sides (a, b, c) of the triangle in Figure 2.11 and the sines of the opposite angles (α, β, γ) are related by the law of sines, namely,

a/sin α = b/sin β = c/sin γ
Figure 2.11   Law of sines.
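The law of cosines can be checked against the Cartesian distance formula it was derived from, and the law of sines follows along. A small Python sketch with arbitrary side lengths and angle:

```python
import math

# Triangle with sides a = OB, b = OA enclosing the angle theta at the origin O.
a, b, theta = 3.0, 5.0, math.radians(40.0)

# Law of cosines for the side C opposite theta...
C = math.sqrt(a**2 + b**2 - 2 * a * b * math.cos(theta))

# ...agrees with the direct Cartesian distance between
# B = (a cos(theta), a sin(theta)) and A = (b, 0).
xB, yB = a * math.cos(theta), a * math.sin(theta)
C_cartesian = math.hypot(xB - b, yB)
assert abs(C - C_cartesian) < 1e-12

# Law of sines: the angle alpha opposite side a satisfies a/sin(alpha) = C/sin(theta).
alpha = math.acos((b**2 + C**2 - a**2) / (2 * b * C))
assert abs(a / math.sin(alpha) - C / math.sin(theta)) < 1e-9
```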
2.3 COMMON COORDINATE SYSTEMS
The location of a point P in three-dimensional space is depicted in Figure 2.12. The coordinates {x1, x2, x3} are orthogonal and may be written in terms of Cartesian coordinates {x, y, z} as

x1 = x,  x2 = y,  x3 = z

Cartesian coordinates are also referred to as rectangular coordinates. The range of coordinates in a Cartesian coordinate system is given by

−∞ < x < ∞,  −∞ < y < ∞,  −∞ < z < ∞

Trigonometric functions can be used to relate the Cartesian coordinate system to two other common coordinate systems: the circular cylindrical coordinate system and the spherical polar coordinate system [e.g., see Arfken and Weber, 2001; Zwillinger, 2003].

Circular Cylindrical Coordinates   The location of point P in circular cylindrical coordinates is depicted in Figure 2.13. The perpendicular distance from the x3 axis to point P is the cylindrical radius rc. The angle θc is measured in the x1–x2 plane from the positive x1 axis.
Figure 2.12   Cartesian coordinate system.

Figure 2.13   Circular cylindrical coordinate system.
The relationship between Cartesian coordinates and circular cylindrical coordinates is

x1 = rc cos θc,  x2 = rc sin θc,  x3 = z

The inverse relationship is

rc = √(x1² + x2²),  θc = tan⁻¹(x2/x1),  z = x3

The range of coordinates in the circular cylindrical coordinate system is

0 ≤ rc < ∞,  0 ≤ θc ≤ 2π,  −∞ < z < ∞

Spherical Polar Coordinates   The location of point P in spherical polar coordinates is depicted in Figure 2.14. The distance from the origin to point P is the spherical radius rs. The azimuth angle φ is measured in the x1–x2 plane from the positive x1 axis. The polar angle θs is measured from the positive x3 axis. The relationship between Cartesian coordinates and spherical polar coordinates is

x1 = rs sin θs cos φ,  x2 = rs sin θs sin φ,  x3 = rs cos θs
Figure 2.14   Spherical polar coordinate system.
The inverse relationship is

rs = √(x1² + x2² + x3²)
θs = cos⁻¹[x3/√(x1² + x2² + x3²)]
φ = tan⁻¹(x2/x1)

The range of coordinates in the spherical polar coordinate system is given by

0 ≤ rs < ∞,  0 ≤ θs ≤ π,  0 ≤ φ ≤ 2π
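The forward and inverse spherical relations can be checked as a round trip. A minimal sketch (note that `atan2` is used for tan⁻¹(x2/x1) so the quadrant is resolved correctly):

```python
import math

def cartesian_to_spherical(x1, x2, x3):
    """Invert (x1, x2, x3) -> (rs, theta_s, phi) per the relations above."""
    rs = math.sqrt(x1**2 + x2**2 + x3**2)
    theta_s = math.acos(x3 / rs)
    phi = math.atan2(x2, x1)  # quadrant-aware tan^-1(x2/x1)
    return rs, theta_s, phi

def spherical_to_cartesian(rs, theta_s, phi):
    return (rs * math.sin(theta_s) * math.cos(phi),
            rs * math.sin(theta_s) * math.sin(phi),
            rs * math.cos(theta_s))

# Round trip: converting a point both ways recovers the original coordinates.
p = (1.0, 2.0, 2.0)
q = spherical_to_cartesian(*cartesian_to_spherical(*p))
assert all(abs(pi - qi) < 1e-12 for pi, qi in zip(p, q))
```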
2.4 EULER'S EQUATION AND HYPERBOLIC FUNCTIONS

A fundamental relation between trigonometric and exponential functions is Euler's equation, which has the form

e^{iα} = cos α + i sin α,  i = √(−1)    (2.4.1)

Euler's equation can be derived by comparing the series expansions of each function in Eq. (2.4.1). A discussion of series expansions is presented in Section 2.5. Euler's equation for e^{−iα} is

e^{−iα} = cos(−α) + i sin(−α)

or

e^{−iα} = cos α − i sin α
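Euler's equation is easy to confirm with the complex exponential in Python's standard library. A minimal sketch:

```python
import cmath
import math

# Check Euler's equation e^{i*alpha} = cos(alpha) + i*sin(alpha) numerically.
alpha = 0.9
lhs = cmath.exp(1j * alpha)
rhs = complex(math.cos(alpha), math.sin(alpha))
assert abs(lhs - rhs) < 1e-12

# e^{-i*alpha} is the complex conjugate: cos(alpha) - i*sin(alpha).
assert abs(cmath.exp(-1j * alpha) - rhs.conjugate()) < 1e-12
```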
EXERCISE 2.3: Derive expressions for cos α, sin α, and tan α in terms of exponential functions.

Hyperbolic Functions   Euler's equation may be used to derive several useful relations between trigonometric functions and exponential functions with an imaginary exponent. A new set of functions, called hyperbolic functions, is constructed by using real exponents in relations that are similar to the exponential form of the trigonometric relations. The following presents the definitions and several properties of hyperbolic functions:

HYPERBOLIC FUNCTIONS

Definitions
  sinh u = ½(e^u − e^{−u}) = 1/csch u
  cosh u = ½(e^u + e^{−u}) = 1/sech u
  tanh u = (e^u − e^{−u})/(e^u + e^{−u}) = sinh u/cosh u = 1/coth u

Product
  sinh u = tanh u cosh u
  cosh u = coth u sinh u
  tanh u = sinh u sech u

Angle-sum and angle-difference
  sinh(u ± v) = sinh u cosh v ± cosh u sinh v
  cosh(u ± v) = cosh u cosh v ± sinh u sinh v
  tanh(u ± v) = (tanh u ± tanh v)/(1 ± tanh u tanh v)

Double-angle
  sinh 2u = 2 sinh u cosh u
  cosh 2u = cosh²u + sinh²u
  tanh 2u = 2 tanh u/(1 + tanh²u)

Half-angle
  sinh²(u/2) = ½(cosh u − 1)
  cosh²(u/2) = ½(cosh u + 1)
  tanh²(u/2) = (cosh u − 1)/(cosh u + 1)

Power
  cosh²u − sinh²u = 1
  tanh²u + sech²u = 1
Figure 2.15   Graph of hyperbolic functions.
Relations between hyperbolic functions with a negative argument are

sinh(−u) = −sinh u
cosh(−u) = cosh u
tanh(−u) = −tanh u

Figure 2.15 presents plots of the hyperbolic functions sinh u, cosh u, and tanh u. Notice that sinh u and cosh u are approximately equal when u > 2, so that tanh u ≈ 1 for u > 2. By contrast, sinh u ≈ −cosh u when u < −2, so that tanh u ≈ −1 when u < −2.
EXERCISE 2.4: Use the exponential forms of sinh u and cosh u to show that sinh(−u) = −sinh u and cosh(−u) = cosh u.

EXERCISE 2.5: Evaluate cosh²u − sinh²u.

Application: Burger's Equation. A nonlinear partial differential equation for the propagation of a shock wave is Burger's equation [Burger, 1948]:

∂C/∂t = −(a + bC) ∂C/∂x + D ∂²C/∂x²
If a, b, and D are positive constants, then the solution of Burger’s equation is
C(x, t) = ½ {1 − tanh[b(x − vt)/(4D)]}

where the velocity v is given by

v = a + b/2

and the boundary conditions at x → ±∞ are C(−∞, t) = 1 and C(+∞, t) = 0. If b = 0, then Burger's equation simplifies to the linear convection–dispersion equation. Partial derivatives and partial differential equations are discussed further in Chapters 7 and 12, respectively.

EXERCISE 2.6: Let z = x − vt, v = 1.0 foot/day, b = 0.1 foot/day, and D = 0.1 foot²/day. Plot C(z) in the interval 0 ≤ x ≤ 40 feet at times of 5.00 days, 6.11 days, and 7.22 days. Note that C(z) = ½[1 − tanh(bz/4D)] and 0.0 ≤ C(z) ≤ 1.0.
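A minimal sketch of the profile in Exercise 2.6, computing values of C(z) only (any plotting routine could graph them; the parameter values follow the exercise):

```python
import math

def C(z, b=0.1, D=0.1):
    """Shock profile C(z) = (1/2)[1 - tanh(b*z/(4*D))], with z = x - v*t."""
    return 0.5 * (1.0 - math.tanh(b * z / (4.0 * D)))

v = 1.0  # feet/day
t = 5.0  # days
# Sample the front at a few positions x (feet); the shock is centered at x = v*t.
profile = [(x, C(x - v * t)) for x in (0.0, 5.0, 10.0, 40.0)]

# The profile steps smoothly from 1 (far upstream) to 0 (far downstream),
# with C = 1/2 exactly at the shock center z = 0.
assert abs(C(-1e3) - 1.0) < 1e-12 and abs(C(1e3)) < 1e-12
assert abs(C(0.0) - 0.5) < 1e-12
```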
2.5 SERIES REPRESENTATIONS
A polynomial may be written in a power series expansion as

P(z) = Σ (m = 0 to ∞) Cm z^m

where {Cm, m = 0, …, ∞} are expansion coefficients of the power series. The power series is a polynomial of degree n if it has the form

Pn(z) = Σ (m = 0 to n) Cm z^m,  Cn ≠ 0
Example: The second-degree polynomial P2(z) = a + bz + cz² has the expansion coefficients C0 = a, C1 = b, C2 = c, and Cj = 0 for all j > 2.

Taylor Series and Maclaurin Series   A function f(z) of the complex variable z may be represented by the Taylor series

f(z) = Σ (m = 0 to ∞) [f^(m)(a)/m!] (z − a)^m

where f^(0)(a) = f(a). The mth derivative of f(z) with respect to z, evaluated at a, is

f^(m)(a) = d^m f(z)/dz^m evaluated at z = a,  m ≥ 1

The center of the Taylor series is at a. If a = 0, we obtain the Maclaurin series

fM(z) = Σ (m = 0 to ∞) [fM^(m)(0)/m!] z^m

For more discussion of derivatives, see Chapter 6.

EXERCISE 2.7: Find the Taylor series and Maclaurin series for f(z) = e^z.

Representing Elementary Functions Using Power Series   Examples of power series that are equivalent to several common functions are presented below. If we combine the series for cos z and i sin z, we obtain Euler's equation (see Section 2.4).

SERIES REPRESENTATION OF ELEMENTARY FUNCTIONS
Geometric series
  1/(1 − z) = Σ (n = 0 to ∞) z^n = 1 + z + z² + ⋯,  |z| < 1
  The inequality is needed to avoid the singularity (division by 0) at z = 1.

Exponential function
  e^z = Σ (n = 0 to ∞) z^n/n! = 1 + z + z²/2! + ⋯

Trigonometric functions
  cos z = Σ (n = 0 to ∞) (−1)^n z^(2n)/(2n)! = 1 − z²/2! + z⁴/4! − ⋯
  sin z = Σ (n = 0 to ∞) (−1)^n z^(2n+1)/(2n+1)! = z − z³/3! + z⁵/5! − ⋯

Hyperbolic functions
  cosh z = Σ (n = 0 to ∞) z^(2n)/(2n)! = 1 + z²/2! + z⁴/4! + ⋯
  sinh z = Σ (n = 0 to ∞) z^(2n+1)/(2n+1)! = z + z³/3! + z⁵/5! + ⋯

Logarithm
  ln(1 + z) = Σ (n = 0 to ∞) (−1)^n z^(n+1)/(n+1) = z − z²/2 + z³/3 − ⋯
Given a series representation of a function f(z), it is possible to estimate the asymptotically small values of f(z) as z → 0. This is done by keeping only first-order terms in z, that is, terms with either z⁰ or z¹, and neglecting all higher-order terms z^m with m ≥ 2.
EXERCISE 2.8: Keep only first-order terms in z for small z and estimate asymptotic functional forms for (1 − z)⁻¹, e^z, cos z, sin z, cosh z, sinh z, and ln(1 + z).

Application: Radius of Convergence of Power Series. Consider the power series

y(x) = Σ (n = 0 to ∞) cn x^n

where {cn} are expansion coefficients and x is real. The radius of convergence r of the power series y(x) is the limit

r = lim (n → ∞) |cn/c(n+1)|

where cn and c(n+1) are consecutive expansion coefficients. Limits are discussed more fully in Chapter 6, Section 6.1. The power series diverges for all values of x ≠ 0 when r = 0. If r = ∞, the power series converges for all values of x. For values of r in the interval 0 < r < ∞, the power series converges when |x| < r and diverges when |x| > r. Additional discussion of power series and convergence criteria can be found in many sources, for example, Lomen and Mark [1988], Edwards and Penney [1989], Kreyszig [1999], and Zwillinger [2003].

Example: All of the expansion coefficients equal 1 for the geometric series. The radius of convergence is therefore r = 1 for the geometric series, which means the geometric series converges and represents 1/(1 − x) when |x| < 1.

Example: The expansion coefficients for the exponential series are given by cn = 1/n!, so the radius of convergence is

r = lim (n → ∞) |(1/n!)/(1/(n + 1)!)| = lim (n → ∞) |n + 1| = ∞

We conclude that the exponential series converges for all values of x.
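Both the Maclaurin series for e^z and the ratio test behind its radius of convergence can be checked numerically. A minimal sketch:

```python
import math

def exp_partial_sum(z, terms=30):
    """Maclaurin series of e^z truncated after `terms` terms."""
    return sum(z**n / math.factorial(n) for n in range(terms))

# The truncated series matches math.exp for moderate arguments...
for z in (-2.0, 0.0, 1.0, 3.0):
    assert abs(exp_partial_sum(z) - math.exp(z)) < 1e-9

# ...and the ratio |c_n / c_{n+1}| = (n+1)!/n! = n + 1 grows without bound,
# consistent with an infinite radius of convergence.
ratios = [math.factorial(n + 1) // math.factorial(n) for n in range(5)]
assert ratios == [1, 2, 3, 4, 5]
```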
CHAPTER 3
ANALYTIC GEOMETRY
The review of descriptive geometry in Chapter 2 is extended here to include analytic geometry. The line and conic sections receive special attention because these topics lead naturally to a discussion of linear regression and polar coordinates, respectively. The polar form of complex numbers is then presented.
3.1 LINE
Figure 3.1 illustrates the algebraic representation of a straight line. A straight line is algebraically represented by the equation

y = mx + b

where m is the slope of the line and b is the intercept of the line with the y axis. We can find m and b if we know at least two points (x1, y1) and (x2, y2) on the straight line, as shown in Figure 3.2. The two unknowns m and b are found by simultaneously solving the two equations
(3:1:1)
y2 ¼ mx2 þ b
(3:1:2)
Figure 3.1   Equation of a straight line.
Subtracting Eq. (3.1.1) from Eq. (3.1.2) eliminates b and gives

y2 − y1 = m(x2 − x1)    (3.1.3)

Solving Eq. (3.1.3) for the slope gives

m = (y2 − y1)/(x2 − x1) = rise/run    (3.1.4)

where y2 − y1 is called the rise and x2 − x1 is called the run of the slope m of the straight line. The intercept b is found by rearranging either Eq. (3.1.1) or Eq. (3.1.2). For example, rearranging Eq. (3.1.1) gives

b = y1 − mx1    (3.1.5)
Example: Suppose (x1, y1) = (2, 2) and (x2, y2) = (−1, −7). Using the above equations, we calculate (b = −4, m = 3) and y = 3x − 4. Higher-order polynomial equations may be used to fit three or more data points. A well-defined system will have at least one data point for each unknown coefficient of the polynomial equation.
Figure 3.2   Two points of a straight line.
EXERCISE 3.1: Fit the quadratic equation y = a + bx + cx² to the three points (x1, y1), (x2, y2), (x3, y3).

Application: Least Squares Fit to a Straight Line. We saw above that a straight line could be fit to two points. Suppose we wish to draw the straight line

y = mx + b    (3.1.6)

through N data points. The method of least squares lets us draw a unique straight line through all N data points by minimizing the sum of the squares of the distances of the points from the straight line. The fit of a straight line to a set of data points using the least squares method is known as regression analysis in statistics, and the line in Eq. (3.1.6) is called the least squares regression line [Larson et al., 1990]. Any given point (xi, yi) is a distance yi − mxi − b from the line given by Eq. (3.1.6), as shown in Figure 3.3. The sum of the squares of the distances of the points from the line given by Eq. (3.1.6) is

Q = Σ (i = 1 to N) (yi − mxi − b)²    (3.1.7)
The necessary conditions for Q to be a minimum with respect to m and b are that the partial derivatives vanish; that is,

∂Q/∂m = −2 Σ (i = 1 to N) xi (yi − mxi − b) = 0    (3.1.8)

and

∂Q/∂b = −2 Σ (i = 1 to N) (yi − mxi − b) = 0    (3.1.9)

where we have used the derivative

d(ax^n)/dx = nax^(n−1)
Figure 3.3 Distance of datum from the least squares regression line.
for constants a and n. Derivatives are discussed in Chapter 6 and partial derivatives are discussed in Chapter 7. Simplifying the summations in Eqs. (3.1.8) and (3.1.9) gives

Σ xi yi − m Σ xi² − b Σ xi = 0    (3.1.10)

Σ yi − m Σ xi − Nb = 0    (3.1.11)

where all sums run from i = 1 to N. Solving Eq. (3.1.11) for b yields

b = (1/N)(Σ yi − m Σ xi)    (3.1.12)

The slope m is found by substituting Eq. (3.1.12) into Eq. (3.1.10):

m = [N Σ xi yi − (Σ yi)(Σ xi)] / [N Σ xi² − (Σ xi)²]    (3.1.13)
Once the slope m is known, Eq. (3.1.12) can be solved for the intercept b. The least squares analysis may be applied to higher-order polynomials, for example, quadratic or cubic equations. The following exercise shows that the least squares method is applicable to equations that may be transformed into polynomial form.

EXERCISE 3.2: Suppose w = au^b, where a, b are constants and u, w are variables. Take the logarithm of the equation and fit a least squares regression line to the resulting linear equation.
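Equations (3.1.12) and (3.1.13) translate directly into code. A minimal sketch; the sample points are hypothetical, generated from the line y = 3x − 4 of the earlier example, so the fit must recover that line exactly:

```python
def least_squares_line(points):
    """Slope m and intercept b from Eqs. (3.1.12)-(3.1.13)."""
    N = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxy = sum(x * y for x, y in points)
    sxx = sum(x * x for x, _ in points)
    m = (N * sxy - sy * sx) / (N * sxx - sx**2)   # Eq. (3.1.13)
    b = (sy - m * sx) / N                          # Eq. (3.1.12)
    return m, b

# Points generated from y = 3x - 4 must be recovered exactly.
pts = [(x, 3.0 * x - 4.0) for x in (0.0, 1.0, 2.0, 5.0)]
m, b = least_squares_line(pts)
assert abs(m - 3.0) < 1e-12 and abs(b + 4.0) < 1e-12
```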
3.2 CONIC SECTIONS

The most general second-order algebraic equation may be written

Ax² + Bxy + Cy² + Dx + Ey + F = 0    (3.2.1)
where the coefficients A, B, C, D, E, and F are constants. Different geometric figures, known as conic sections, correspond to different sets of values of the coefficients. The circle, ellipse, parabola, and hyperbola are called conic sections because each shape can be obtained when a plane intersects one or two right circular cones [Riddle, 1992]. Any conic section can be represented by Eq. (3.2.1) when A, B, and C are not all zero. Conversely, Eq. (3.2.1) is a second-order equation, and any second-order equation represents either a conic section or a degenerate case of a conic section. Degenerate cases are summarized as follows [Riddle, 1992] for different values of the product AC of coefficients A and C:

Conic Section   AC     Degenerate Cases
Parabola        = 0    One line (two coincident lines)
Ellipse         > 0    Circle; point
Hyperbola       < 0    Two intersecting lines

Except for degenerate cases, the graph of Eq. (3.2.1) can be determined by calculating the discriminant

Δ = B² − 4AC

The following specifies the relation between a discriminant and its corresponding graph:

Value of Discriminant   Conic Section
Δ = 0                   Parabola
Δ < 0                   Ellipse
Δ > 0                   Hyperbola

The straight line is a degenerate case of Eq. (3.2.1) with E = 1, D = −m, F = −b, and A = B = C = 0. Sets of coefficients for several common conic sections are summarized in the following discussions.
Circle   If A = C = 1, F = −r², and B = D = E = 0, we obtain

x² + y² = r²   (constant r)

which is the equation for a circle with radius r.
Figure 3.4   Ellipse.
Ellipse   If A = 1/a², C = 1/b², F = −1, and B = D = E = 0, we obtain

x²/a² + y²/b² = 1,  b > a

which is the equation for an ellipse. Figure 3.4 illustrates an ellipse with b > a. The semiminor axis is a and the semimajor axis is b.

Parabola   If C = 1, D = −k, and A = B = E = F = 0, we obtain

y² = kx

which is the equation for a parabola (Figure 3.5).
Figure 3.5   Parabola.
Figure 3.6   Hyperbola.
Hyperbola   If A = 1/a², C = −1/b², F = −1, and B = D = E = 0, we obtain

x²/a² − y²/b² = 1

which is the equation for a hyperbola (Figure 3.6). Notice that the only difference between an ellipse and a hyperbola is the sign of C. In general, a parabolic equation has the form y = x^n with n > 0, while a hyperbolic equation has the form y = x^n with n < 0.
Application: Polar Coordinate Representation of Conic Sections. Another representation of conic sections may be obtained using the polar coordinates sketched in Figure 3.7. The polar coordinate representation is well suited for solving orbit problems in fields such as astrophysics and space science. Let us first define some terms. In particular, the moving point P is the parameter, the fixed point F is the focus, and the fixed line D is the directrix. The horizontal distance d from the fixed line D to the intersection point I is related to the distance r from the fixed point I by r = ed, where e is a constant called the eccentricity.

Figure 3.7   Polar coordinates.

The
length from the focus to the directrix is

L_DF = d + r cos ν = (r/e)(1 + e cos ν)    (3.2.2)

The angle ν between the horizontal axis and the line connecting points F and I is called the true anomaly. The parameter P is the length r where ν = π/2; that is,

r(ν = π/2) ≡ P = eL_DF/[1 + e cos(π/2)] = eL_DF    (3.2.3)

from Eq. (3.2.2). Substituting Eq. (3.2.3) into Eq. (3.2.2) gives

r = eL_DF/(1 + e cos ν) = P/(1 + e cos ν)
Parameter P determines the extent of the orbit and e determines the shape. Shapes for different values of e are tabulated as follows:

Eccentricity e   Geometric Shape
0                Circle
0 < e < 1        Ellipse
1                Parabola
e > 1            Hyperbola

The closest approach of point P to the directrix D occurs at ν = 0 and is called the perihelion. The point P is at its greatest distance from D when ν = π. This distance is called the aphelion.
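The orbit equation r = P/(1 + e cos ν) is easy to explore numerically. A minimal sketch with hypothetical parameter values, checking the perihelion and aphelion distances and the circular case:

```python
import math

def orbit_radius(P, e, nu):
    """Conic-section orbit r = P/(1 + e*cos(nu)); nu is the true anomaly."""
    return P / (1.0 + e * math.cos(nu))

P, e = 1.0, 0.5                         # an elliptical orbit (0 < e < 1)
r_peri = orbit_radius(P, e, 0.0)        # closest approach, nu = 0
r_apo = orbit_radius(P, e, math.pi)     # greatest distance, nu = pi

assert abs(r_peri - P / (1 + e)) < 1e-12
assert abs(r_apo - P / (1 - e)) < 1e-12
assert r_peri < r_apo

# For e = 0 the radius is constant: a circle.
assert all(abs(orbit_radius(P, 0.0, nu) - P) < 1e-12 for nu in (0.0, 1.0, 2.0))
```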
3.3 POLAR FORM OF COMPLEX NUMBERS

In general, a complex number z may be written in terms of the ordered pair of real numbers (x, y) as

z = x + iy

If we let x = r cos θ and y = r sin θ, where r, θ are polar coordinates, we obtain

z = r(cos θ + i sin θ)

The various coordinates are illustrated in Figure 3.8.
Figure 3.8 Polar form of a complex number.
The length r is the absolute value or modulus of z; thus

|z| = r = √(x² + y²) = √(zz*) ≥ 0

where we have used the definition of the absolute value of z:

|z| ≡ √(zz*)

The angle θ is positive when rotated in the counterclockwise direction and is measured in radians. It is obtained as the argument of z such that

arg z = θ = sin⁻¹(y/r) = cos⁻¹(x/r) = tan⁻¹(y/x)

The value of θ in the interval −π < θ ≤ π is called the principal value of the argument of z.

Euler's Equation   As we saw in Chapter 2, combining the series expansions for cos θ and sin θ gives Euler's equation:

e^{iθ} ≡ cos θ + i sin θ,  0 ≤ θ ≤ 2π

Multiplying e^{iθ} by r yields

z = re^{iθ} = r(cos θ + i sin θ)

Comparing the polar coordinate expression of z with the Cartesian coordinate representation z = x + iy gives

z = re^{iθ} = x + iy
De Moivre's Equation   If we raise Euler's equation to the nth power, we obtain

(cos θ + i sin θ)^n = e^{inθ}

Repeated use of Euler's equation lets us write

e^{inθ} = cos(nθ) + i sin(nθ) = (cos θ + i sin θ)^n

Multiplying by r^n and using the polar coordinate representation of a complex number gives De Moivre's equation:

z^n = r^n e^{inθ}

EXERCISE 3.3: Express z = z1 z2 and z = z1/z2 in polar coordinates.
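The polar form and De Moivre's equation can be checked with the standard-library `cmath` module, whose `polar` function returns the pair (r, θ):

```python
import cmath

# Polar form: z = r e^{i*theta}, with r = |z| and theta the principal argument.
z = complex(1.0, 1.0)
r, theta = cmath.polar(z)

# De Moivre: z^n = r^n e^{i*n*theta}.
n = 5
assert abs(z**n - r**n * cmath.exp(1j * n * theta)) < 1e-9

# Product in polar form: z1*z2 has modulus r1*r2 and argument theta1 + theta2.
z2 = complex(2.0, -0.5)
r2, t2 = cmath.polar(z2)
assert abs(z * z2 - r * r2 * cmath.exp(1j * (theta + t2))) < 1e-9
```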
CHAPTER 4
LINEAR ALGEBRA I
The introduction of polar coordinates in Chapter 3 established the basis for discussing the rotation of coordinate axes in two dimensions. This coordinate rotation requires the solution of two equations for two unknowns. The need to solve the resulting set of equations leads us to a discussion of matrices and determinants.
4.1 ROTATION OF AXES
Figure 4.1 depicts a set of coordinates (x1, x2) undergoing a counterclockwise rotation through an angle θ to a new set of coordinates (y1, y2). We wish to describe the location of the point P relative to both sets of coordinates. In the (x1, x2) coordinate system, the point P is at

x1 = r cos φ
x2 = r sin φ    (4.1.1)

whereas in the (y1, y2) coordinate system, P is at

y1 = r cos(φ − θ)
y2 = r sin(φ − θ)    (4.1.2)
Math Refresher for Scientists and Engineers, Third Edition By John R. Fanchi Copyright # 2006 John Wiley & Sons, Inc.
Figure 4.1 Coordinate rotation.
Using the trigonometric identities

  sin(A ± B) = sin A cos B ± cos A sin B
  cos(A ± B) = cos A cos B ∓ sin A sin B    (4.1.3)

in Eq. (4.1.2) gives

  y1 = r(cos φ cos θ + sin φ sin θ) = (r cos φ) cos θ + (r sin φ) sin θ
  y2 = r(sin φ cos θ − cos φ sin θ) = (r sin φ) cos θ − (r cos φ) sin θ    (4.1.4)

Substituting Eq. (4.1.1) into Eq. (4.1.4) gives a system of linear equations relating the two coordinate systems in terms of the angle of rotation θ; thus

  y1 = x1 cos θ + x2 sin θ
  y2 = x2 cos θ − x1 sin θ    (4.1.5)
The preceding set of equations represents a transformation from the (x1, x2) coordinate system to the (y1, y2) coordinate system. The set of equations in Eq. (4.1.5) may be written in the more compact form

  [ y1 ; y2 ] = [ cos θ  sin θ ; −sin θ  cos θ ] [ x1 ; x2 ]

The equation may be written in an even more compact matrix notation as

  y = a x    (4.1.6)
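The rotation of Eq. (4.1.5) can be checked numerically. This is a hedged Python sketch, not from the book; the function name rotate is ours:

```python
import math

def rotate(x1, x2, theta):
    """Counterclockwise rotation of the axes by theta, per Eq. (4.1.5)."""
    y1 = x1 * math.cos(theta) + x2 * math.sin(theta)
    y2 = x2 * math.cos(theta) - x1 * math.sin(theta)
    return y1, y2

# Rotating the axes by 90 degrees: the point at (1, 0) in x-coordinates
# has coordinates (0, -1) relative to the rotated axes.
y1, y2 = rotate(1.0, 0.0, math.pi / 2)
```

A rotation also preserves the distance r of P from the origin, which is a quick sanity check on the formulas.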
where x, y are column vectors and a is a 2 × 2 matrix that transforms x into y. Matrices appear in numerous real-world applications. The rest of this chapter focuses on their properties and serves as an introduction to linear algebra. More detailed treatments of linear algebra can be found in such sources as Berberian [1992], Nicholson [1986], and Rudin [1991].
4.2 MATRICES
A matrix is an array of numbers with m rows and n columns. The (i, j)th element is the element found in row i and column j. Matrices may be classified by the relationship between m and n (see below). A matrix is often referred to by its order; for example, the matrix is an m × n matrix.

  m ≠ n   Rectangular matrix
  m = n   Square matrix
  m × n   Order of matrix

Example: The matrix

  A = [ a11 a12 a13 ; a21 a22 a23 ]

has m = 2 rows, n = 3 columns, and order 2 × 3. The (i, j)th element is aij.
A column vector is a matrix of order m × 1, whereas a row vector is a matrix of order 1 × n. A number may be thought of as a matrix of order 1 × 1 and is considered a scalar if its value is invariant (does not change) with respect to a coordinate transformation. Matrices may be classified further based on the properties of their elements. The transpose of matrix A is formed by interchanging element aij with element aji; thus

  Aᵀ = [aji] = [aij]ᵀ

where Aᵀ denotes the transpose of matrix A.

Example: The transpose of the 2 × 3 matrix A from the previous example is the 3 × 2 matrix

  Aᵀ = [ a11 a21 ; a12 a22 ; a13 a23 ]
Example: A 3 × 1 column vector is

  a = [ a1 ; a2 ; a3 ]

The transpose of a is the 1 × 3 row vector aᵀ = [a1 a2 a3].

The conjugate transpose or adjoint of matrix A is

  A⁺ = [A*]ᵀ

where A* is the complex conjugate of A, or

  [aij]⁺ = [aij*]ᵀ = [aji*]

A square matrix is symmetric if A = Aᵀ; that is, aij = aji. A square matrix is Hermitian if A = A⁺. In this case the matrix is said to be self-adjoint. An important, physically useful property of Hermitian matrices is that they have real eigenvalues (see Chapter 5).

EXERCISE 4.1: The Pauli matrices are the square matrices

  σx = [ 0 1 ; 1 0 ],  σy = [ 0 −i ; i 0 ],  σz = [ 1 0 ; 0 −1 ]

These matrices are used to calculate the spin angular momentum of particles such as electrons, protons, and neutrons. Show that the Pauli matrices are Hermitian.

The principal diagonal of a square matrix A with elements {aij} is the set of elements aij for which i = j. The trace of an n × n square matrix A is the sum of the elements along the principal diagonal; thus

  Trace(A) ≡ Tr(A) = Σᵢ₌₁ⁿ aii
If aij = 0 for i < j, then A is a lower triangular matrix. If aij = 0 for i > j, then A is an upper triangular matrix. A diagonal matrix is a square matrix with aij = 0 for i ≠ j and aii ≠ 0. The elements aij with i ≠ j are called off-diagonal elements.

Example: Consider the 3 × 3 square matrix

  A = [ a11 a12 a13 ; a21 a22 a23 ; a31 a32 a33 ]
The sum of the elements of the principal diagonal of A is the trace of A:

  Tr(A) = Σᵢ₌₁³ aii = a11 + a22 + a33

If all off-diagonal elements of A are 0, that is, aij = 0 for i ≠ j, then we have the diagonal matrix

  diag A = [ a11 0 0 ; 0 a22 0 ; 0 0 a33 ]
Upper and lower triangular matrices U, L may be formed from A and written as

  U = [ a11 a12 a13 ; 0 a22 a23 ; 0 0 a33 ],  L = [ a11 0 0 ; a21 a22 0 ; a31 a32 a33 ]
In general, a matrix is an upper triangular matrix when all the entries below the main diagonal are zero. Similarly, a matrix is a lower triangular matrix when all the entries above the main diagonal are zero.

Matrix Operations

Assume A, B, C are matrices with the same order m × n. The sum or difference of two matrices A, B is

  A ± B = C

which implies

  aij ± bij = cij  for i = 1, 2, ..., m and j = 1, 2, ..., n

The product of a matrix A with a scalar γ is

  B = γA

or, in terms of matrix elements, bij = γ aij.
Example: Define a matrix A with complex elements such that

  A = [ 0 i ; i 0 ]

Then 6A and 3iA are given by

  6A = [ 0 6i ; 6i 0 ],  3iA = [ 0 −3 ; −3 0 ]
Multiplication

Let A be of order m × n and B of order n × p. Then the product of the two matrices A and B is C = AB, or

  cij = Σₕ₌₁ⁿ aih bhj

where the resulting matrix C is of order m × p.

EXERCISE 4.2: Calculate
3 2
4 3
2 1 2 4 0 1 6
2 1 3
3 4 25 9
EXERCISE 4.3: Calculate 2
½2 3
3 4 1 4 2 5 9
Square matrices obey the following laws:

  Associative:  A(BC) = (AB)C
  Distributive: (A + B)C = AC + BC and C(A + B) = CA + CB

In general, matrix multiplication is not commutative; that is, AB ≠ BA. This is an important characteristic of quantum mechanics.
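The element formula c_ij = Σ a_ih b_hj and the failure of commutativity can both be demonstrated directly. This is an illustrative pure-Python sketch (the helper matmul is ours, not from the text), using two of the Pauli matrices from Exercise 4.1:

```python
def matmul(A, B):
    """Matrix product: c_ij = sum_h a_ih * b_hj for (m x n) times (n x p)."""
    m, n, p = len(A), len(B), len(B[0])
    return [[sum(A[i][h] * B[h][j] for h in range(n)) for j in range(p)]
            for i in range(m)]

sx = [[0, 1], [1, 0]]        # sigma_x
sy = [[0, -1j], [1j, 0]]     # sigma_y

AB = matmul(sx, sy)          # [[i, 0], [0, -i]]
BA = matmul(sy, sx)          # [[-i, 0], [0, i]]
```

Here AB ≠ BA, which is exactly the non-commuting behavior shown in the example that follows.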
Example: Compare σx σy and σy σx.

  σx σy = [ 0 1 ; 1 0 ][ 0 −i ; i 0 ] = [ i 0 ; 0 −i ]

  σy σx = [ 0 −i ; i 0 ][ 0 1 ; 1 0 ] = [ −i 0 ; 0 i ]
Notice that σx σy ≠ σy σx; that is, σx and σy do not commute.

The transpose of the product of two matrices is the product of the commuted, transposed matrices:

  (AB)ᵀ = Bᵀ Aᵀ

The adjoint of the product of two matrices is the product of the commuted, adjoint matrices:

  (AB)⁺ = B⁺ A⁺

If all of the nonzero elements of a square, diagonal matrix equal 1, then that matrix is called the identity matrix. It satisfies the equality

  AI = IA = A

where A is any square matrix with the same order as the square identity matrix I. The matrix I has the form

  I = [ 1 0 ... 0 ; 0 1 ... 0 ; ... ; 0 0 ... 1 ]
A null matrix (0) is a matrix in which all elements equal 0; thus A0 = 0A = 0. A matrix A is singular if

  Ax = 0,  x ≠ 0

where x is a column vector [Lipschutz, 1968]. Notice that x ≠ 0 if at least one element of x is nonzero. Alternatively, it is possible to show that a matrix A is singular if its determinant is zero. This is pointed out again in Section 4.3 after determinants have been defined.
Suppose A is a square matrix and there exists a square matrix B such that AB = I or BA = I. Then A is called a nonsingular matrix and B is the inverse of matrix A; that is, B = A⁻¹, where A⁻¹ denotes the inverse of matrix A. Inverse matrices are discussed in more detail below. As with scalars, we can define the power of a matrix by forming the products

  A² ≡ AA,  A³ ≡ AAA, etc.

In addition, a matrix may be used as an exponent by using the series expansion

  e^A ≡ Σₘ₌₀^∞ Aᵐ / m!
EXERCISE 4.4: Suppose we are given the Pauli matrices

  σ1 = [ 0 1 ; 1 0 ],  σ2 = [ 0 −i ; i 0 ],  σ3 = [ 1 0 ; 0 −1 ]

(a) Calculate σ3². (b) Calculate the trace of σ1 and the trace of σ3. (c) Verify the equality (σ2 σ3)ᵀ = σ3ᵀ σ2ᵀ.

Direct Product of Matrices

Suppose we have an m × m matrix A and an n × n matrix B. We can form an mn × mn matrix C by defining the direct product

  C = A ⊗ B = [ a11 B  a12 B  ...  a1m B ; a21 B  a22 B  ...  a2m B ; ... ; am1 B  am2 B  ...  amm B ]
To be more specific, let A and B be the 2 × 2 matrices

  A = [ a11 a12 ; a21 a22 ],  B = [ b11 b12 ; b21 b22 ]

The direct product matrix C is the 4 × 4 matrix

  C = A ⊗ B = [ a11 B  a12 B ; a21 B  a22 B ]
    = [ a11 b11  a11 b12  a12 b11  a12 b12 ;
        a11 b21  a11 b22  a12 b21  a12 b22 ;
        a21 b11  a21 b12  a22 b11  a22 b12 ;
        a21 b21  a21 b22  a22 b21  a22 b22 ]

In general, the direct product is not necessarily commutative; that is, A ⊗ B may not equal B ⊗ A.
Example: Let 0 be a 2 × 2 matrix with all of its elements equal to 0, and

  A = σx = [ 0 1 ; 1 0 ],  B = σy = [ 0 −i ; i 0 ]

The direct product is

  σx ⊗ σy = [ 0  σy ; σy  0 ] = [ 0 0 0 −i ; 0 0 i 0 ; 0 −i 0 0 ; i 0 0 0 ]

We show that the direct product does not necessarily commute by calculating

  σy ⊗ σx = [ 0  −i σx ; i σx  0 ] ≠ σx ⊗ σy
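The block structure of the direct product, where block (i, j) of C is a_ij B, translates directly into code. This is an illustrative sketch (the helper kron is ours, echoing the usual name for this operation, the Kronecker product):

```python
def kron(A, B):
    """Direct (Kronecker) product C = A (x) B: block (i, j) of C is a_ij * B."""
    m, n = len(A), len(B)
    return [[A[i // n][j // n] * B[i % n][j % n]
             for j in range(m * n)] for i in range(m * n)]

sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]

C1 = kron(sx, sy)   # sigma_x (x) sigma_y, a 4 x 4 matrix
C2 = kron(sy, sx)   # sigma_y (x) sigma_x
```

Comparing C1 and C2 element by element confirms that the direct product does not commute for this pair.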
Inverse Matrices

If A is a nonsingular square matrix of order n × n, then there exists a unique matrix A⁻¹ such that

  A⁻¹ A = A A⁻¹ = I

The matrix A⁻¹ is said to be the inverse of matrix A. The inverse of the product of two matrices is the product of the commuted, inverse matrices:

  (AB)⁻¹ = B⁻¹ A⁻¹

The transpose of the inverse matrix is the inverse of the transposed matrix:

  (A⁻¹)ᵀ = (Aᵀ)⁻¹

The inverse of a matrix multiplied by a scalar γ gives

  (γA)⁻¹ = (1/γ) A⁻¹

A square matrix A is said to be a unitary matrix if its adjoint equals its inverse; that is,

  A⁺ = A⁻¹
Two matrices P, Q are similar if

  P = A⁻¹ Q A

where A is a nonsingular matrix.

EXERCISE 4.5: Let

  A = [ 2 5 ; 1 3 ],  A⁻¹ = [ 3 −5 ; −1 2 ]

Verify AA⁻¹ = I.

EXERCISE 4.6: Let

  A = [ 1 0 2 ; 2 −1 3 ; 4 1 8 ],  A⁻¹ = [ −11 2 2 ; −4 0 1 ; 6 −1 −1 ]

Verify AA⁻¹ = I.

EXERCISE 4.7: Let

  A = [ a b ; c d ],  A⁻¹ = [ x y ; z w ]

Assume a, b, c, d are known. Find x, y, z, w from AA⁻¹ = I.
Solving a System of Linear Equations

A system of n linear equations with n unknowns can be written in the form

  Ax = b    (4.2.1)

where the column vector b is known and the column vector x is unknown. We assume the matrix A is a known, nonsingular square matrix of order n × n, so that there exists a unique inverse matrix A⁻¹. The formal solution for x is obtained by premultiplying Eq. (4.2.1) by A⁻¹ to get

  x = A⁻¹ b    (4.2.2)

Notice that multiplying a matrix equation by a matrix from the left (premultiplying) is not necessarily equivalent to multiplying the matrix equation by a matrix from the right (postmultiplying), because matrix multiplication does not have to commute, and often does not.
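For a 2 × 2 system, x = A⁻¹ b can be written out in closed form. The following sketch is ours, not the book's; the helper name solve2 and the right-hand side b = (1, 1) are arbitrary illustrative choices, while the matrix is the one from Exercise 4.5:

```python
def solve2(A, b):
    """Solve Ax = b for a nonsingular 2x2 matrix A via x = A^{-1} b."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    inv = [[ A[1][1] / det, -A[0][1] / det],
           [-A[1][0] / det,  A[0][0] / det]]
    return [inv[0][0] * b[0] + inv[0][1] * b[1],
            inv[1][0] * b[0] + inv[1][1] * b[1]]

# A is the matrix of Exercise 4.5; its inverse is [[3, -5], [-1, 2]].
x = solve2([[2.0, 5.0], [1.0, 3.0]], [1.0, 1.0])
```

Substituting the result back into the two equations 2x1 + 5x2 = 1 and x1 + 3x2 = 1 confirms the solution.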
EXERCISE 4.8: Assume b1, b2 are known constants. Write the system of equations

  2x1 + 5x2 = b1
  x1 + 3x2 = b2

in matrix form and solve for the unknowns x1, x2.

4.3 DETERMINANTS
The determinant of the square matrix A is denoted by det(A) or |A|. The determinant |A| is a scalar function of the square matrix A defined such that

  |A| |B| = |AB|

The determinant of the transpose of matrix A is the same as the determinant of A; thus

  |A| = |Aᵀ|

An explicit representation of the determinant of a 2 × 2 matrix is

  det [ a11 a12 ; a21 a22 ] = a11 a22 − a21 a12

Similarly, the determinant of a 3 × 3 matrix is

  det [ a11 a12 a13 ; a21 a22 a23 ; a31 a32 a33 ]
    = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 − a31 a22 a13 − a32 a23 a11 − a33 a21 a12

The determinant of a 3 × 3 matrix is found operationally by appending the first two columns on the right-hand side of the matrix and subtracting the products of the diagonal terms (multiply terms from lower left to upper right)
  a31 a22 a13 + a32 a23 a11 + a33 a21 a12
from the products of the diagonal terms (multiply terms from upper left to lower right)

  a11 a22 a33 + a12 a23 a31 + a13 a21 a32
EXERCISE 4.9: Evaluate

  det [ 2 4 3 ; 6 1 5 ; 2 1 3 ]
In general, the determinant of a square matrix can be calculated in terms of cofactors. The cofactor of the square matrix A is denoted by cof_ij(A) and has the definition:

cof_ij(A) ≡ the determinant of the submatrix of A found by deleting the ith row and jth column of A, with the positive (negative) sign if i + j is even (odd).
The integer i + j is even if it is part of the sequence of integers {2, 4, 6, ...}; otherwise i + j is odd.

EXERCISE 4.10: Use the definition of cofactor to evaluate

  cof_23 [ 2 4 3 ; 6 1 5 ; 2 1 3 ]
The determinant of an arbitrary square matrix A of order n × n can be evaluated using cofactors; thus

  |A| = Σⱼ₌₁ⁿ aij cof_ij(A)  for any i

or

  |A| = Σᵢ₌₁ⁿ aij cof_ij(A)  for any j
These expressions represent an expansion by cofactors.
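Expansion by cofactors lends itself to a short recursive implementation. This sketch is illustrative, not from the book (the helper name det and the expansion along the first row are our choices):

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]  # delete row 0, column j
        sign = 1 if j % 2 == 0 else -1                    # sign of cofactor (0, j)
        total += sign * A[0][j] * det(minor)
    return total

# The 3x3 matrix of Exercise 4.6 has determinant 1, consistent with the
# integer entries of its inverse.
d = det([[1, 0, 2], [2, -1, 3], [4, 1, 8]])
```

Recursive cofactor expansion costs O(n!) operations, so it is only practical for small matrices; for larger systems one would factorize the matrix instead.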
EXERCISE 4.11: Evaluate the following determinant using cofactors:

  |A| = det [ 2 4 3 ; 6 1 5 ; 2 1 3 ]
Determinants are useful for determining whether a square matrix A is singular or nonsingular. If det(A) ≠ 0, then there exists a unique inverse matrix A⁻¹. Furthermore, if square matrices A and B are nonsingular, then

  (AB)⁻¹ = B⁻¹ A⁻¹

Application: Cramer's Rule. Suppose that a linear system of equations can be written in terms of the nonsingular matrix A, the unknown column vector x, and a known column vector b such that

  Ax = b    (4.3.1)

Recall that det(A) ≠ 0 for a nonsingular matrix A. Cramer's rule says that Eq. (4.3.1) has the unique solution

  xj = det(Aj) / det(A)    (4.3.2)

where Aj is a matrix obtained from A by replacing the jth column of A by b. For example, suppose A is a 2 × 2 square matrix and b is a 2 × 1 column matrix; that is,

  A = [ a11 a12 ; a21 a22 ],  b = [ b1 ; b2 ]

In this case, there are two solutions of Eq. (4.3.2), namely,

  x1 = det [ b1 a12 ; b2 a22 ] / det(A),  x2 = det [ a11 b1 ; a21 b2 ] / det(A)
If A is singular, that is, det(A) = 0, then Eq. (4.3.1) has either no solution or an infinite number of solutions.

EXERCISE 4.12: Use Cramer's rule to find the unknowns {x1, x2, x3} that satisfy the system of equations

  x1 + 2x3 = 11
  2x1 − x2 + 3x3 = 17
  4x1 + x2 + 8x3 = 45
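Cramer's rule is straightforward to code for a 3 × 3 system. This sketch is ours, not the book's solution to the exercise; det3 uses the diagonal (Sarrus-style) rule from Section 4.3, and the data are the system of Exercise 4.12:

```python
def det3(M):
    """3x3 determinant via the diagonal-products rule."""
    return (M[0][0]*M[1][1]*M[2][2] + M[0][1]*M[1][2]*M[2][0]
            + M[0][2]*M[1][0]*M[2][1]
            - M[2][0]*M[1][1]*M[0][2] - M[2][1]*M[1][2]*M[0][0]
            - M[2][2]*M[1][0]*M[0][1])

def cramer3(A, b):
    """Solve Ax = b by Cramer's rule: x_j = det(A_j) / det(A)."""
    d = det3(A)
    xs = []
    for j in range(3):
        Aj = [row[:] for row in A]
        for i in range(3):
            Aj[i][j] = b[i]          # replace column j of A by b
        xs.append(det3(Aj) / d)
    return xs

A = [[1, 0, 2], [2, -1, 3], [4, 1, 8]]
b = [11, 17, 45]
x = cramer3(A, b)    # [3.0, 1.0, 4.0]
```

Substituting (x1, x2, x3) = (3, 1, 4) back into all three equations verifies the solution.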
CHAPTER 5
LINEAR ALGEBRA II
Matrices arise naturally in the treatment of systems of linear algebraic equations of the form Ax = B, where A is an m × n matrix, x is an n × 1 matrix, and B is an m × 1 matrix. Another perspective is to view x and B as vectors and A as an operator that transforms x into B. We review the mathematical framework associated with the vector perspective in this chapter and show how to calculate eigenvalues and eigenvectors. An important practical use of this material is the diagonalization of a matrix, which is presented in an application.
5.1 VECTORS
A vector A from the origin of Figure 5.1 to a point P in three dimensions has the form

  A = ax î + ay ĵ + az k̂    (5.1.1)

where {î, ĵ, k̂} are unit vectors along the {x, y, z} axes, respectively, and the vector components {ax, ay, az} are the corresponding distances along the {x, y, z} axes. The length or magnitude of vector A is

  |A| = √(ax² + ay² + az²)    (5.1.2)
Figure 5.1 Vector components.
A unit vector is a vector with magnitude one. The vector A may be written as an ordered triplet such that A = (ax, ay, az), where the elements of the ordered triplet are the vector components.

Example: The unit vectors are

  î = (1, 0, 0),  ĵ = (0, 1, 0),  k̂ = (0, 0, 1)

Addition and subtraction of two vectors A and B are given by

  A ± B = (ax ± bx) î + (ay ± by) ĵ + (az ± bz) k̂    (5.1.3)

Multiplying a vector A by a scalar α gives

  αA = α ax î + α ay ĵ + α az k̂

A zero vector is defined as

  ax î + ay ĵ + az k̂ = 0

which implies that ax = ay = az = 0.
Example: Let B = (3, 2, 0) and C = (3, −2, 0). Then |B| = |C|, but the orientation of B does not equal the orientation of C.

The dot product of two vectors is

  A · B = ax bx + ay by + az bz    (5.1.4)

or

  A · B = |A| |B| cos θ

where the angle θ is the angle between the vectors A, B as in Figure 5.2. Notice that the dot product is a maximum when the vectors are aligned (θ = 0), a minimum when the vectors point in opposite directions (θ = 180°), and zero when the vectors are perpendicular (θ = 90°). The dot product A · B is commutative; that is,

  A · B = B · A

The dot product of a vector with itself gives the square of the magnitude of the vector.

EXERCISE 5.1: Define C = B − A. Derive the law of cosines by calculating C · C for θ ≤ 90°.

Vectors may be written as matrices. For example, in matrix notation, the vectors A = (ax, ay, az) and B = (bx, by, bz) are the 3 × 1 matrices

  A = [ ax ; ay ; az ],  B = [ bx ; by ; bz ]
Figure 5.2 Angle between vectors.
The dot product in matrix notation is

  A · B = Aᵀ B = [ax ay az] [ bx ; by ; bz ] = ax bx + ay by + az bz    (5.1.5)

The dot product is an operation between two vectors that yields a scalar. An operation between two vectors that yields a vector is the cross product, which has the definition

  C = A × B = n̂ |A| |B| sin θ = −B × A    (5.1.6)

where the vector C and the unit vector n̂ are normal (perpendicular) to the plane containing both vectors A and B. The unit vector n̂ is parallel to the vector C in Figure 5.3. Both n̂ and C are oriented in the direction of advance of a right-handed screw if the screw was rotated through the angle θ shown in Figure 5.3 [Kreyszig, 1999]. In general, the cross product A × B is antisymmetric; that is, A × B = −B × A.

Example: The cross product of each pair of unit vectors gives

  î × ĵ = −ĵ × î = k̂
  ĵ × k̂ = −k̂ × ĵ = î
  k̂ × î = −î × k̂ = ĵ

An alternative expression for the cross product in terms of Cartesian unit vectors is
  A × B = det [ î ĵ k̂ ; ax ay az ; bx by bz ]
        = (ay bz − az by) î + (az bx − ax bz) ĵ + (ax by − ay bx) k̂
Figure 5.3 Orientation of vectors in cross product.
Example: Let A = (3, 2, 0) and B = (3, −2, 0). Then

  A × B = det [ î ĵ k̂ ; 3 2 0 ; 3 −2 0 ] = (−6 − 6) k̂ = −12 k̂

Notice that vectors A and B are in the x, y plane, but A × B projects in the negative z direction.
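The component formulas for the dot and cross products translate directly into code. This is an illustrative sketch (the helper names dot and cross are ours), reusing the vectors of the example above:

```python
def dot(a, b):
    """A . B = ax*bx + ay*by + az*bz, per Eq. (5.1.4)."""
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def cross(a, b):
    """Component form of A x B from the determinant expansion."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

A = (3, 2, 0)
B = (3, -2, 0)
C = cross(A, B)     # (0, 0, -12): in the negative z direction
```

The cross product C is perpendicular to both inputs, so dot(A, C) and dot(B, C) both vanish.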
5.2 VECTOR SPACES
The idea of associating a column matrix with a vector leads to the concept of vector spaces. Let Eⁿ denote the set of all n × 1 column matrices, or vectors with n elements. The nonempty set Eⁿ is a vector space because it satisfies the following axioms for vectors u, v, w ∈ Eⁿ and scalars α, β.

Closure: u + v ∈ Eⁿ, αu ∈ Eⁿ

Addition:
  Associative law: (u + v) + w = u + (v + w)
  Zero vector: u + 0 = u
  Existence of negatives: u + (−u) = 0
  Commutative law: u + v = v + u

Multiplication:
  Distributive laws: α(u + v) = αu + αv and (α + β)u = αu + βu
  Associative law: (αβ)u = α(βu)
  Unity law: 1u = u
A set of vectors {u⁽¹⁾, u⁽²⁾, ..., u⁽ⁿ⁾} is linearly dependent if and only if the sum

  Σᵢ₌₁ⁿ ci u⁽ⁱ⁾ = 0    (5.2.1)

is satisfied for a set of scalars {c1, ..., cn} that are not all zero. The set of vectors {u⁽¹⁾, u⁽²⁾, ..., u⁽ⁿ⁾} is linearly independent if Eq. (5.2.1) is satisfied only when ci = 0 for all values of i. A basis for the set Eⁿ is a set of n linearly independent vectors that belong to Eⁿ. If we write the basis as {u⁽¹⁾, u⁽²⁾, ..., u⁽ⁿ⁾}, then any vector u ∈ Eⁿ can be written as a linear combination, or superposition, of basis vectors such that

  u = Σᵢ₌₁ⁿ ci u⁽ⁱ⁾

where {ci} is a unique set of numbers.
The inner product of two n-dimensional vectors u, v is defined as

  (u, v) = Σᵢ₌₁ⁿ ui* vi = (u*)ᵀ v = u⁺ v

where * denotes complex conjugation. Two vectors u⁽ⁱ⁾, u⁽ʲ⁾ are orthogonal if their inner product is zero; that is, (u⁽ⁱ⁾, u⁽ʲ⁾) = 0. Vectors u⁽ⁱ⁾, u⁽ʲ⁾ are orthonormal if their inner product satisfies

  (u⁽ⁱ⁾, u⁽ʲ⁾) = δij

where the Kronecker delta is defined by

  δij = 0 if i ≠ j,  δij = 1 if i = j

The Euclidean length of a vector u with elements {ui : i = 1, ..., n} is the square root of the inner product of vector u with itself; thus

  ||u|| = (u, u)^(1/2) = [ Σᵢ₌₁ⁿ |ui|² ]^(1/2)

The Euclidean lengths of two vectors u, v satisfy the Schwarz inequality

  |u · v| ≤ ||u|| ||v||

Example: A set of basis vectors in E³ is

  e1 = [ 1 ; 0 ; 0 ],  e2 = [ 0 ; 1 ; 0 ],  e3 = [ 0 ; 0 ; 1 ]

The vector B = (3, 2, 0) may be written in terms of basis vectors as B = 3e1 + 2e2 + 0e3 with the coefficients {ci} = {3, 2, 0}. Vector B has the Euclidean length

  ||B|| = (3² + 2²)^(1/2) = √13
5.3 EIGENVALUES AND EIGENVECTORS
Suppose A is an n × n matrix. Then the determinant equation

  |A − λI| = 0    (5.3.1)

is the characteristic equation of A, where I is the identity matrix. The characteristic equation is an nth degree polynomial resulting from expansion of the determinant. The n values of λ are the characteristic roots of the characteristic equation. They are also the eigenvalues associated with matrix A.

EXERCISE 5.2: Let

  A = [ a11 a12 ; a21 a22 ]

Find the characteristic equation and the characteristic roots.

Eigenvalues are calculated from the eigenvalue equation

  Ax = λx

where x is a column vector with n rows and λ is an eigenvalue of A. The eigenvalue equation may be written in the form

  Ax = λx = λIx

or

  (A − λI) x = 0    (5.3.2)

Equation (5.3.2) has nonzero solutions if λ is a characteristic root of A; that is, λ is a solution of Eq. (5.3.1). The solution x is the characteristic vector. If x has unit length, that is,

  xᵀ x = 1

it becomes the eigenvector for the characteristic root λ of A. The roots or eigenvalues depend on the properties of the matrix A. For example, a real symmetric matrix has real roots, and a Hermitian matrix has real roots. If A is a square matrix of order n with eigenvalues {λ1, λ2, ..., λn}, then the set of n eigenvalues is called the spectrum of A. The spectral radius of A is the eigenvalue with the largest magnitude. The trace and the determinant of A are given in terms
of eigenvalues as

  det(A) = Πᵢ₌₁ⁿ λi,  Tr(A) = Σᵢ₌₁ⁿ λi
If matrices A, B are similar, then they have the same eigenvalues.

EXERCISE 5.3: Find eigenvalues for the Pauli matrices given in Exercise 4.1; that is, find λ from

  σi ψ = λIψ

where ψ is an eigenfunction and i = {x, y, z}.

Hamilton–Cayley Theorem

In general, the characteristic equation associated with a square matrix A of order n may be written as the polynomial

  f(λ) = Σᵢ₌₀ⁿ ai λ^(n−i) = 0

where the λ are the eigenvalues given by the characteristic determinant

  |A − λI| = 0

If we replace λ in f(λ) by the matrix A so that we obtain

  f(A) = Σᵢ₌₀ⁿ ai A^(n−i)

then the Hamilton–Cayley theorem says that

  f(A) = 0

In other words, the matrix A satisfies its characteristic equation; thus

  Σᵢ₌₀ⁿ ai A^(n−i) = 0

Example: We demonstrate the validity of the Hamilton–Cayley theorem by applying it to the matrix

  A = [ 5 4 ; 1 2 ]
The characteristic equation is

  det [ 5 − λ  4 ; 1  2 − λ ] = (5 − λ)(2 − λ) − 4 = λ² − 7λ + 6 = 0

so that the eigenvalues are

  λ± = (7 ± √(49 − 24)) / 2 = (7 ± 5) / 2

In polynomial form, f(λ) = λ² − 7λ + 6 = 0, so that

  f(A) = A² − 7A + 6I

Substituting in the explicit representation of A gives the expected result

  f(A) = [ 5 4 ; 1 2 ][ 5 4 ; 1 2 ] − 7 [ 5 4 ; 1 2 ] + 6 [ 1 0 ; 0 1 ]
       = [ 29 28 ; 7 8 ] − [ 35 28 ; 7 14 ] + [ 6 0 ; 0 6 ] = [ 0 0 ; 0 0 ]

The Hamilton–Cayley theorem is useful in evaluating the inverse of a square matrix. To show this, we formally multiply f(A) by A⁻¹ to find

  A⁻¹ f(A) = a0 A^(n−1) + a1 A^(n−2) + ... + a_(n−1) I + a_n A⁻¹ = 0

Solving for A⁻¹ gives

  A⁻¹ = −(1/a_n) Σᵢ₌₀^(n−1) ai A^(n−1−i)

The inverse of a square matrix can thus be found from its characteristic equation by forming the powers of A up to n − 1.

Example: Use the Hamilton–Cayley theorem to determine the inverse of

  A = [ 5 4 ; 1 2 ]
From the above example we know that the characteristic equation is λ² − 7λ + 6 = 0, so that a0 = 1, a1 = −7, a2 = 6, and n = 2. Thus

  A⁻¹ = −(1/a2) Σᵢ₌₀¹ ai A^(1−i) = −(1/a2) [a0 A + a1 I]

Substituting in the explicit representation of A gives

  A⁻¹ = −(1/6) [ [ 5 4 ; 1 2 ] − 7 [ 1 0 ; 0 1 ] ]

Simplifying gives

  A⁻¹ = −(1/6) [ −2 4 ; 1 −5 ] = (1/6) [ 2 −4 ; −1 5 ]

or

  A⁻¹ = [ 1/3 −2/3 ; −1/6 5/6 ]

To verify, we evaluate the matrix product

  AA⁻¹ = [ 5 4 ; 1 2 ] [ 1/3 −2/3 ; −1/6 5/6 ]
       = [ 10/6 − 4/6  −20/6 + 20/6 ; 2/6 − 2/6  −4/6 + 10/6 ] = [ 1 0 ; 0 1 ] = I
which is the expected result.

EXERCISE 5.4: Use the Hamilton–Cayley theorem to find the inverse of the complex Pauli matrix

  σy = [ 0 −i ; i 0 ]
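For a 2 × 2 matrix the Hamilton–Cayley inverse takes a particularly compact form, since the characteristic equation is λ² − Tr(A)λ + det(A) = 0. The following sketch is ours (the helper name is hypothetical), applied to the matrix of the example above:

```python
def inverse_2x2_cayley(A):
    """Invert a 2x2 matrix via Hamilton-Cayley:
    A^2 - Tr(A) A + det(A) I = 0  =>  A^{-1} = (Tr(A) I - A) / det(A)."""
    tr = A[0][0] + A[1][1]
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[(tr - A[0][0]) / d, -A[0][1] / d],
            [-A[1][0] / d, (tr - A[1][1]) / d]]

# For A = [[5, 4], [1, 2]]: Tr(A) = 7, det(A) = 6,
# giving A^{-1} = [[1/3, -2/3], [-1/6, 5/6]] as in the worked example.
Ainv = inverse_2x2_cayley([[5, 4], [1, 2]])
```

Note that (Tr(A) I − A)/det(A) is also the familiar adjugate formula for 2 × 2 matrices, so the two routes agree.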
5.4 MATRIX DIAGONALIZATION
It is often worthwhile to transform a matrix representation from a coordinate system x, in which an n × n square matrix D has nonzero off-diagonal elements, to a
coordinate system y where only the diagonal elements of a square matrix D′ are nonzero. This can be done by applying a similarity transformation

  D′ = A D A⁻¹    (5.4.1)

to the matrix D, where A is an invertible matrix. The coordinate systems x and y are related by the transformation matrix A such that

  y = Ax    (5.4.2)

The procedure for finding A is the subject of this section.

Similarity Transformations

Suppose two matrices a and b are related by the linear transformation

  a = Db    (5.4.3)
Such a relationship exists in many applications. For example, angular momentum is related to angular velocity by a linear transformation [Goldstein et al., 2002]. The transformation matrix D in this example is the moment of inertia. As noted previously, there are occasions when it is useful to diagonalize D while preserving the form of the relationship between a and b. It is easy to show that a similarity transformation preserves the relationship in Eq. (5.4.3). First, multiply Eq. (5.4.3) on the left by A to find

  Aa = ADb    (5.4.4)

where A is a nonsingular, n × n square matrix. Since A is nonsingular, it is invertible; that is, it satisfies the equality

  A⁻¹ A = A A⁻¹ = I    (5.4.5)

where I is the n × n identity matrix. Substituting Eq. (5.4.5) into Eq. (5.4.4) gives

  Aa = A D A⁻¹ A b    (5.4.6)

Defining the transformed matrices

  a′ = Aa,  b′ = Ab    (5.4.7)

and using the similarity transformation

  D′ = A D A⁻¹    (5.4.8)

in Eq. (5.4.6) yields

  a′ = D′ b′    (5.4.9)

Equation (5.4.9) is the same form as Eq. (5.4.3).
Diagonalization Algorithm

The matrix D is diagonalized by finding and applying a particular similarity transformation matrix A. The procedure for finding a matrix A that diagonalizes D is given by Nicholson [1986] and summarized as follows.

Algorithm for Diagonalizing a Square Matrix. Given an n × n matrix D:
1. Find the eigenvalues {λi : i = 1, ..., n} of D.
2. Find n linearly independent eigenvectors {ai : i = 1, ..., n}.
3. Form the similarity transformation matrix A with the eigenvectors as columns.
4. Calculate the diagonalized similar matrix D′. The diagonal entries of D′ are the eigenvalues corresponding to the eigenvectors {ai : i = 1, ..., n}.
The next application illustrates the diagonalization algorithm.

Application: Diagonalizing a Square Matrix. The ideas just presented are made more concrete by applying the matrix diagonalization algorithm to a specific example. Consider the 2 × 2 matrix

  D = [ d11 e12 ; e21 d22 ]    (5.4.10)

as viewed in coordinate system x. For this example, we arbitrarily require that the elements of D satisfy the relations [Fanchi, 1983]

  d11 ≠ d22,  e12 = e21    (5.4.11)

To find the diagonal matrix D′ corresponding to D, we must first solve the eigenvalue problem

  det[D − λI] = 0    (5.4.12)

EXERCISE 5.5: Calculate the eigenvalues of the matrix D.

The characteristic roots or eigenvalues {λ} of Eq. (5.4.12) are the diagonal elements of the diagonalized matrix D′; thus

  D′ = [ λ₊ 0 ; 0 λ₋ ]    (5.4.13)

where λ₊, λ₋ were calculated in Exercise 5.5.
Eigenvectors

The matrix A is composed of orthonormal eigenvectors {a} found from

  Da = λa    (5.4.14)

The basis vector a satisfies

  (D − λI) a = 0    (5.4.15)

where I is the identity matrix. Expanding Eq. (5.4.15) gives

  (d11 − λ₊) a1⁺ + e12 a2⁺ = 0
  e12 a1⁺ + (d22 − λ₊) a2⁺ = 0    (5.4.16)

for the eigenvalue λ₊, and

  (d11 − λ₋) a1⁻ + e12 a2⁻ = 0
  e12 a1⁻ + (d22 − λ₋) a2⁻ = 0

for the eigenvalue λ₋. Rearranging Eq. (5.4.16) gives

  a1⁺ = −e12 a2⁺ / (d11 − λ₊)    (5.4.17)

Equation (5.4.17) and the normalization condition

  (a1⁺)² + (a2⁺)² = 1    (5.4.18)

provide the two equations that are necessary for determining the components of a⁺; thus

  a2⁺ = [ 1 + e12² / (d11 − λ₊)² ]^(−1/2)

and

  a1⁺ = −[ e12 / (d11 − λ₊) ] [ 1 + e12² / (d11 − λ₊)² ]^(−1/2)

Similar calculations for a⁻ yield the results

  a1⁻ = (d11 − λ₊) / { (d11 − λ₊)² + e12² }^(1/2)

and

  a2⁻ = e12 / { (d11 − λ₊)² + e12² }^(1/2)
where the relation d11 − λ₊ = −(d22 − λ₋) has been used.

EXERCISE 5.6: Show that a⁺ and a⁻ are orthogonal.

Coordinate Transformation

We now use the orthonormal eigenvectors a⁺ and a⁻ to construct the transformation matrix A. According to the algorithm for diagonalizing a square matrix presented previously, we form A as

  A = [ a⁺, a⁻ ] = [ a1⁺ a1⁻ ; a2⁺ a2⁻ ]

or

  A = ( 1 / [ (d11 − λ₊)² + e12² ]^(1/2) ) [ −e12  (d11 − λ₊) ; (d11 − λ₊)  e12 ]    (5.4.19)

The transformed coordinate system y is given by y = Ax. Rewriting the matrix equation for coordinate transformations in algebraic form gives

  y1 = a1⁺ x1 + a1⁻ x2
  y2 = a2⁺ x1 + a2⁻ x2

or

  [ y1 ; y2 ] = [ a1⁺ a1⁻ ; a2⁺ a2⁻ ] [ x1 ; x2 ]    (5.4.20)

An angle θ is associated with the linear transformation by noting from Section 4.1 that

  [ y1 ; y2 ] = [ cos θ  sin θ ; −sin θ  cos θ ] [ x1 ; x2 ]    (5.4.21)

Equating elements of the transformation matrices in Eqs. (5.4.20) and (5.4.21) gives

  a1⁺ = a2⁻ = cos θ

and

  a1⁻ = −a2⁺ = sin θ

EXERCISE 5.7: Verify that a1⁺ = a2⁻ and a1⁻ = −a2⁺. Find θ.
CHAPTER 6
DIFFERENTIAL CALCULUS
Calculus provides a framework for studying change, one that has widespread practical utility. We begin a review of this framework by presenting the concept of limits and extending the concept to the calculation of derivatives.
6.1 LIMITS
Let f(x) be a real-valued function of a real variable x and let a be some fixed point in the domain of f. The number L is the limit of f(x) as x approaches a if, given any positive number ε, we can find a positive number δ such that f(x) is in the range

  |f(x) − L| < ε

whenever x is in the domain

  0 < |x − a| < δ

We write

  lim_{x→a} f(x) = L

and say "the limit of f(x) as x approaches a is L."
Limits may be evaluated graphically or analytically. One straightforward method for evaluating a limit is to first evaluate f(x) at f(a + ε) and f(a − ε), then let ε → 0. If f(a + ε) = f(a − ε) when ε → 0, then the limit exists and is equal to f(a).

Example:

  lim_{x→1} (4x − 2) = 2
  lim_{x→3} (x² + 3) = 12

Some limits are more difficult to find if they contain a denominator that goes to zero as x → a. These limits must be treated carefully using, for example, L'Hôpital's rule, discussed in Section 6.2. Proof that a limit exists requires manipulation of inequalities to determine the relationship between ε and δ. It is also necessary to verify that the inequalities |f(x) − L| < ε and 0 < |x − a| < δ are satisfied as required by the definition of a limit. Examples of proofs of limits can be found in standard calculus textbooks such as Stewart [1999], Larson et al. [1990], and Thomas [1972].

We now consider several useful properties of limits. Suppose we have the limits
(6:1:1)
lim g(x) ¼ M
(6:1:2)
x!a
and x!a
then lim ½ f (x) þ g(x) ¼ L þ M
x!a
that is, the limit of the sum of two functions is the sum of the limits. EXERCISE 6.1: Let f (x) ¼ 3x þ 7, g(x) ¼ 5x þ 1, so that f (x) þ g(x) ¼ 8x þ 8. Find lim ½ f (x) þ g(x). x!2
The limit of a constant c times a function f(x) is the constant c times the limit L of the function; that is,

  lim_{x→a} c f(x) = c [lim_{x→a} f(x)] = cL
x!a
EXERCISE 6.2: Let f(x) = 3x + 7 and c = 6. Find lim_{x→2} 6(3x + 7).

The limit of the ratio of two functions f(x), g(x) is the ratio of their respective limits if the limit of the denominator is not 0; that is,

  lim_{x→a} f(x)/g(x) = L/M  for M ≠ 0

where we have used Eqs. (6.1.1) and (6.1.2).
EXERCISE 6.3: Find

  lim_{x→2} (3x + 7)/(5x + 1)
If P(x) is a polynomial of the form

  P(x) = Σᵢ₌₀ⁿ ai x^(n−i)

with constant coefficients {ai}, then

  lim_{x→a} P(x) = P(a)

EXERCISE 6.4: Let P(x) = αx² + βx + γ. Find lim_{x→0} P(x).
If f(x) is any rational function, that is,

  f(x) = N(x)/D(x)

where N(x), D(x) are polynomials, and D(a) ≠ 0, then

  lim_{x→a} f(x) = N(a)/D(a)

EXERCISE 6.5: Find

  lim_{x→1} (x³ + 5x² + 3x) / (2x⁵ + 3x⁴ − 2x² − 1)
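The evaluate-at-a-plus-and-minus-ε method described at the start of this section can be automated. This sketch is ours, not the book's; the helper name limit and the tolerances are illustrative choices, and the test function is the rational function of Exercise 6.5 as reconstructed here:

```python
def limit(f, a, eps=1e-6, tol=1e-3):
    """Estimate lim_{x->a} f(x) by evaluating f(a + eps) and f(a - eps)."""
    left, right = f(a - eps), f(a + eps)
    if abs(left - right) > tol:
        raise ValueError("two-sided limit does not appear to exist")
    return 0.5 * (left + right)

# Rational function with D(1) = 2 != 0, so the limit is simply N(1)/D(1) = 9/2.
f = lambda x: (x**3 + 5 * x**2 + 3 * x) / (2 * x**5 + 3 * x**4 - 2 * x**2 - 1)
est = limit(f, 1.0)
```

A numerical probe like this is only a heuristic: it can be fooled by slowly varying functions, but it is a useful first check before applying the algebraic rules.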
6.2 DERIVATIVES
The definition of the derivative of a function f(x) with respect to the independent variable x is given in terms of the limit of the function evaluated at two points x and x + Δx; thus

  df(x)/dx = f′(x) = lim_{Δx→0} [f(x + Δx) − f(x)] / Δx    (6.2.1)
The derivative, which is often written f 0 (x), is the slope of the line that is tangent to the graph of f (x) at the point x. Slope is illustrated by the dashed line in Figure 6.1. Notice that, in the limit as Dx approaches 0, the dashed line in Figure 6.1 more accurately approximates the tangent to the graph of f (x) at the point x. Derivatives for several common functions are derived from the definition of a derivative in the following exercise. EXERCISE 6.6: Use the definition of the derivative to find the derivative of each of the following functions: (a) f (x) ¼ ax; (b) f (x) ¼ bx2 ; (c) f (x) ¼ ex ; (d) f (x) ¼ sin x: Several common derivatives are tabulated as follows: DERIVATIVES
Assume {u, v} are functions of x, and {a, n} are constants.

d/dx (a) = 0
d/dx (au) = a du/dx
d/dx (uⁿ) = n uⁿ⁻¹ du/dx
d/dx (u + v) = du/dx + dv/dx
d/dx (uv) = u dv/dx + v du/dx
d/dx (u/v) = (1/v) du/dx − (u/v²) dv/dx
d/dx (sin u) = cos u du/dx
d/dx (cos u) = −sin u du/dx
d/dx (tan u) = sec²u du/dx
d/dx (eᵘ) = eᵘ du/dx
d/dx (aᵘ) = aᵘ ln a du/dx
d/dx (sinh u) = cosh u du/dx
d/dx (cosh u) = sinh u du/dx
d/dx (tanh u) = sech²u du/dx
d/dx (ln u) = (1/u) du/dx
d/dx (logₐ u) = (logₐ e / u) du/dx,  a > 0, a ≠ 1
d/dx (sin⁻¹u) = [1/√(1 − u²)] du/dx
d/dx (cos⁻¹u) = −[1/√(1 − u²)] du/dx
d/dx (tan⁻¹u) = [1/(1 + u²)] du/dx
d/dx (sinh⁻¹u) = [1/√(1 + u²)] du/dx
d/dx (cosh⁻¹u) = [1/√(u² − 1)] du/dx
d/dx (tanh⁻¹u) = [1/(1 − u²)] du/dx,  |u| < 1
L'Hôpital's Rule

Let f(x), g(x) be two continuous functions in the interval a ≤ x ≤ b that are differentiable in the interval a < x < b. Suppose f(a) = g(a) = 0, as in Figure 6.2. If the derivative f′(x) ≠ 0 in the interval a < x < b, L'Hôpital's rule says that

lim_{x→a⁺} g(x)/f(x) = lim_{x→a⁺} g′(x)/f′(x)    (6.2.2)

whenever the right-hand side exists. The notation x → a⁺ signifies that x approaches a from above, which means we start with values of x > a and then reduce x to approach a.

Example: For the curves shown in Figure 6.2, an application of L'Hôpital's rule yields a limit of 0 as x approaches a from above. This can be verified by observing that g(x) has 0 slope at x = a, while f(x) has a positive, nonzero slope at x = a. Thus g′(x) = 0 at x = a so that g′(x)/f′(x) = 0 at x = a since f′(x) > 0 at x = a.

EXERCISE 6.7: Evaluate

lim_{x→0} (x² + bx)/x

where g(x) = x² + bx, and f(x) = x. In some cases, it is difficult to calculate the limit, even using L'Hôpital's rule. Exercise 6.8 illustrates this point.
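A 0/0 limit of this kind can also be probed numerically. The sketch below uses the functions of Exercise 6.7 with an assumed sample value b = 5; both the direct ratio near x = 0 and the ratio of derivatives at x = 0 approach b, as L'Hôpital's rule predicts.

```python
# Numerical illustration of L'Hopital's rule for g(x)/f(x) = (x**2 + b*x)/x.
# Both g(0) = 0 and f(0) = 0, so the limit equals g'(0)/f'(0).
b = 5.0                              # assumed sample value for the constant b

def g(x): return x**2 + b * x
def f(x): return x

ratio = g(1e-8) / f(1e-8)            # direct ratio near x = 0

def gp(x): return 2 * x + b          # g'(x)
def fp(x): return 1.0                # f'(x)

lhopital = gp(0.0) / fp(0.0)         # ratio of derivatives at x = 0
print(ratio, lhopital)
```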
Figure 6.1  Illustration of slope.
Figure 6.2  Illustration of continuous functions for L'Hôpital's rule.
EXERCISE 6.8: Evaluate

lim_{x→0} x/ln²(1 + x)

where g(x) = x, and f(x) = ln²(1 + x). In general, L'Hôpital's rule applies to limits of the indeterminate forms 0/0, ∞/∞, (−∞)/∞, ∞/(−∞), and (−∞)/(−∞). For further details and examples, see Larson et al. [1990].

Properties of Derivatives

The derivative of a product of two functions u(x), v(x) is

d(uv)/dx = u dv/dx + v du/dx

The derivative of the sum of two functions is the sum of their respective derivatives; thus

d(u + v)/dx = du/dx + dv/dx

Suppose we let c be a constant. The derivative of a constant is zero:

dc/dx = 0

The following derivatives can be derived from the preceding relations for constants c, n and functions u(x), v(x), w(x):

d(cv)/dx = c dv/dx
d(uⁿ)/dx = n uⁿ⁻¹ du/dx  for n ≠ 0
If n = 0, the derivative of u⁰ equals 0 because u⁰ = 1 is a constant. The derivative of the quotient u/w is

d(u/w)/dx = (1/w) du/dx − (u/w²) dw/dx

This equation can be derived from earlier expressions. Higher-order derivatives are given by

dᵐu/dxᵐ = d^{m−1}/dx^{m−1} (du/dx)

The derivative dᵐu/dxᵐ is the mth-order derivative of u with respect to x.

EXERCISE 6.9: Let u = ax² + bx + c. Find the second-order derivative.

Differentiation, df(x)/dx, is an operation d/dx on a function f(x). It is possible to construct infinitesimally small differentials dx, dy by defining the differential of x, dx, as any real number in the domain −∞ < dx < ∞, and the differential of y, dy, as the function of x and f(x) given by

dy = f′(x) dx    (6.2.3)

If dx = 0, then dy = 0. Suppose y is a differentiable function of x, y = f(x), and x is a differentiable function of an independent variable t, x = g(t). Then y is a differentiable function of t that satisfies the chain rule

dy/dt = (df/dx)(dx/dt)    (6.2.4)

The variable t is often said to parameterize x and y.
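The chain rule in Eq. (6.2.4) is easy to verify numerically. The parameterization below is an illustrative choice of my own (not from the text): y = sin x with x = t², for which the chain rule gives dy/dt = cos(t²) · 2t, compared here against a centered difference of y(t).

```python
import math

# Chain rule check: y = sin(x), x = t**2  =>  dy/dt = cos(t**2) * 2*t.
def y_of_t(t):
    return math.sin(t**2)

t = 0.7
chain_rule = math.cos(t**2) * 2 * t              # (df/dx)(dx/dt)
h = 1e-6
centered = (y_of_t(t + h) - y_of_t(t - h)) / (2 * h)
print(chain_rule, centered)
```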
EXERCISE 6.10: (a) Calculate the differential of y = x³. (b) Use the chain rule to calculate dy/dx, where x, y are given by the parametric equations x = 2t + 3, y = t² − 1.

Extrema

Recall that the derivative f′(x) is the slope of the curve f(x) at the point x. The slope corresponding to

df(x)/dx = 0    (6.2.5)
can be an extremum: that is, a minimum or maximum at x = x_ext, where x_ext is the value of x at which Eq. (6.2.5) is valid. The type of extremum is determined by calculating the second-order derivative. If

d²f(x)/dx² > 0 at x = x_ext

then f′(x) = 0 corresponds to a minimum of f(x) at x_ext. If

d²f(x)/dx² < 0 at x = x_ext

then f′(x) = 0 corresponds to a maximum of f(x) at x_ext.

EXERCISE 6.11: Find the extrema of y = ax² + bx + c.

Newton–Raphson Method

Roots of the nonlinear equation g(x) = a, where a is a constant, can be found using the Newton–Raphson method. We first write the nonlinear equation in the form f(x) = g(x) − a = 0. Our problem is to find the root x_r such that y = f(x_r) = 0. From Figure 6.3 we see that the slope of f(x) at x = x₀ is

slope = tan θ = f′(x₀) = [f(x₀) − 0]/(x₀ − x₁) = f(x₀)/(x₀ − x₁)    (6.2.6)

The slope intersects the x axis at x₁, and f′(x₀) denotes the derivative df/dx evaluated at x = x₀. Rearranging Eq. (6.2.6) and solving for x₁ gives

x₁ = x₀ − f(x₀)/f′(x₀)

The value x₁ is a first approximation of the root x_r. A better approximation can be obtained by repeating the process using x₁ as the new starting value as illustrated
Figure 6.3  Newton–Raphson method.
in Figure 6.3. This iterative process leads to an iterative formula for converging to the answer:

x_{n+1} = x_n − f(x_n)/f′(x_n),  n = 0, 1, . . .    (6.2.7)

EXERCISE 6.12: Find the square root of any given positive number c. Show that the Newton–Raphson method works for c = 2.

An alternative presentation of the iterative formula is

f′(x_n)(x_{n+1} − x_n) = −f(x_n)    (6.2.8)

The terms f′(x_n) and −f(x_n) are sometimes referred to as the acceleration term and the residual term, respectively. The form of Eq. (6.2.8) is suitable for extension to systems of equations.
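The iteration of Eq. (6.2.7) can be sketched in a few lines of code. Applied to f(x) = x² − c, it gives the classical square-root iteration x_{n+1} = x_n − (x_n² − c)/(2x_n); the starting guess and tolerance below are illustrative choices.

```python
def newton_sqrt(c, x0=1.0, tol=1e-12, max_iter=50):
    """Find sqrt(c) as the root of f(x) = x**2 - c via Eq. (6.2.7)."""
    x = x0
    for _ in range(max_iter):
        f, fp = x**2 - c, 2 * x          # residual f(x_n) and derivative f'(x_n)
        x_new = x - f / fp               # Newton-Raphson update
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = newton_sqrt(2.0)
print(root)
```

Convergence is quadratic near the root: the number of correct digits roughly doubles with each iteration.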
6.3 FINITE DIFFERENCE CONCEPT

It is often relatively easy to formulate a mathematical description of a real-world problem, but not possible to solve the problem analytically. When analytical techniques are intractable, numerical methods may be the only way to make progress in finding even an approximate solution to a problem of practical importance. A frequently used numerical method is the finite difference concept described below.

Taylor Series

The basis of finite difference numerical approximations is the Taylor series. Suppose we know the value of a function f(x) at x = x_i. We can use the Taylor series to calculate the value of f(x) at x = x_i + Δx by writing

f(x_i + Δx) = f(x_i) + Δx (df/dx)|_{x=x_i} + [(Δx)²/2!] (d²f/dx²)|_{x=x_i} + [(Δx)³/3!] (d³f/dx³)|_{x=x_i} + · · ·

where x_i and x_i + Δx are discretized points along the x axis. The notation for a discretized representation of f(x) is illustrated in Figure 6.4. An approximation for the derivative df/dx at x = x_i is obtained by rearranging the Taylor series to find

df/dx = [f(x_i + Δx) − f(x_i)]/Δx − E_T
Figure 6.4 Discretizing f(x).
where

E_T = [Δx/2!] (d²f/dx²)|_{x=x_i} + [(Δx)²/3!] (d³f/dx³)|_{x=x_i} + · · ·

Neglecting E_T gives the finite difference approximation of the first derivative as

df/dx ≈ [f(x_i + Δx) − f(x_i)]/Δx

The term E_T is called the truncation error because it contains all of the terms of the Taylor series that are neglected, or truncated, in forming an approximation of the derivative. The truncation error E_T is on the order of Δx, where the lowest order of (Δx)ᵐ determines the order of the finite difference approximation. The following application illustrates the use and accuracy of finite differences.
Application: Darcy's Law. The rate of fluid flow in a porous medium is given by Darcy's law [Peaceman, 1977; Aziz and Settari, 1979]; that is, flow rate is proportional to the pressure change, or

Q = −0.001127 (KA/μ) dP/dx

where the symbols are defined as follows:

Q  Flow rate (bbl/day)
K  Permeability (md)
A  Cross-sectional area (ft²)
P  Pressure (psi)
μ  Fluid viscosity (cp)
x  Length (ft)
For a porous medium, permeability is a measure of the connectedness of individual pores and is often measured in millidarcies (md). The unit of permeability is length squared. The geometry for Darcy flow is illustrated in Figure 6.5. Suppose the physical parameters have the following values: K = 150 md, μ = 0.5 cp, and A = 528 ft × 50 ft = 2.64 × 10⁴ ft². The pressure profile as a function of length is given as follows for three distances:

x (ft):     129   645   1161
P (psia):  3983  4007  4041

Notice that pressure is highest at the fluid inlet and lowest at the fluid outlet. This shows that the distance x in this example increases as we measure from left to right in Figure 6.5. Our problem is to find Q at the midpoint x = 645 ft.

Analytical Evaluation of dP/dx
A curve fit of the pressure profile gives the analytical expression

P(x) = (1.88 × 10⁻⁵)x² + 0.03196x + 3978

with the derivative

dP/dx = (3.76 × 10⁻⁵)x + 0.03196

Evaluating dP/dx at x = 645 ft gives the pressure gradient

dP/dx = 0.0562

in psia/ft. Substituting physical quantities into Darcy's law gives the flow rate

Q = −8926 dP/dx

Figure 6.5  Darcy flow.
in bbl/day. At x = 645 ft, we have

Q = −502 bbl/day

Finite Difference Evaluation of dP/dx

The Taylor series may be used to make the following numerical approximations:

(a) First-order correct backward difference:

[P(x) − P(x − Δx)]/Δx = (4007 − 3983)/516 = 0.0465

yields Q = −415 bbl/day.

(b) First-order correct forward difference:

[P(x + Δx) − P(x)]/Δx = (4041 − 4007)/516 = 0.0659

yields Q = −588 bbl/day.

(c) Second-order correct centered difference:

[P(x + Δx) − P(x − Δx)]/(2Δx) = (4041 − 3983)/[2(516)] = 0.0562

yields Q = −502 bbl/day.

Comparing the three numerical results with the analytical value shows the expected result that the second-order correct centered difference method is more accurate than the first-order correct methods. Notice that the centered difference method requires values on both sides of the desired value; that is, from x + Δx and x − Δx.

EXERCISE 6.13: Use the Taylor series to show that the centered difference [P(x + Δx) − P(x − Δx)]/(2Δx) is second-order correct.
Effect of Step Size Δx

If we let x → x + Δx in the analytical expression for P(x) we find

ΔP/Δx = (1.88 × 10⁻⁵)(2x + Δx) + 0.03196

The effect of step size Δx is found by estimating dP/dx at x = 645 ft for several values of Δx:

Δx:     516     258     129     0
ΔP/Δx:  0.0659  0.0611  0.0586  0.0562

As Δx → 0 the numerical result approaches the analytical solution.
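The step-size study can be reproduced with a few lines of code using the quadratic curve fit for P(x); the forward-difference estimate drifts toward the analytical gradient as the step shrinks.

```python
# Forward-difference estimate of dP/dx at x = 645 ft for shrinking step sizes,
# using the curve fit P(x) = 1.88e-5*x**2 + 0.03196*x + 3978.
def P(x):
    return 1.88e-5 * x**2 + 0.03196 * x + 3978.0

def dPdx_analytic(x):
    return 3.76e-5 * x + 0.03196

x = 645.0
analytic = dPdx_analytic(x)                      # ~0.0562 psia/ft
steps = [516.0, 258.0, 129.0]
forward = [(P(x + dx) - P(x)) / dx for dx in steps]
print(analytic, forward)
```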
CHAPTER 7
PARTIAL DERIVATIVES
The derivative is a way to find out how a function of a single variable will change when the variable changes. We often want to extend this idea to functions of several variables. This requires the introduction of methods for calculating the effect of changing one variable while the other variables are fixed. These ideas are illustrated below.

Consider the area enclosed in Figure 7.1. The solid box encloses the original area A₀. The lengths of the sides of the solid box are denoted by x, y. The dotted lines in the figure represent an expansion of the original area. We wish to find the change in the total area. Let the sides of the original box with area A₀ increase by increments Δx and Δy. First note that the original area is A₀ = xy and the total area is

A_T = (x + Δx)(y + Δy)

or A_T = xy + xΔy + yΔx + ΔxΔy. The change in area is

ΔA = A_T − A₀ = yΔx + xΔy + ΔxΔy = ΔA|_y + ΔA|_x + ΔxΔy

Figure 7.1  Expanding area.
where |_y and |_x signify that the sides of the solid box with lengths y and x are not changed. If the length y is fixed, then ΔA|_y = yΔx or y = (ΔA/Δx)|_y. If the length x is fixed, ΔA|_x = xΔy or x = (ΔA/Δy)|_x. Substituting these expressions into ΔA gives

ΔA = (ΔA/Δx)|_y Δx + (ΔA/Δy)|_x Δy + ΔxΔy

If we let Δx → 0 and Δy → 0, we obtain the differential of A(x, y) expanded in terms of partial derivatives (∂A/∂x, ∂A/∂y) and differentials (dx, dy) such that

dA = (∂A/∂x)|_y dx + (∂A/∂y)|_x dy

where the second-order term is negligible. A more formal treatment of these ideas follows.
7.1 PARTIAL DIFFERENTIATION

A function y is a function of several variables if it is a function of two or more independent variables. A function of n variables has the form

y = f(x₁, x₂, . . . , xₙ)

where x₁, x₂, . . . , xₙ are n independent variables and y is a dependent variable. The function f( ) is a mapping from Rⁿ → R, where R is the set of real numbers and Rⁿ is an n-dimensional set of real numbers. The partial derivative of y with respect to x_i is defined by

∂y/∂x_i = lim_{Δx_i→0} [f(x₁, . . . , x_i + Δx_i, . . . , xₙ) − f(x₁, . . . , x_i, . . . , xₙ)]/Δx_i

where all other {x_j} are held constant. Higher-order partial derivatives are found by successive applications of this definition; thus

∂/∂x (∂f/∂x) = ∂²f/∂x² = f_xx,  ∂/∂y (∂f/∂x) = ∂²f/∂x∂y = f_xy

The symbol f_xy denotes the partial differentiation of f with respect to x and then y. The order of differentiation is commutative: if f_xy and f_yx are continuous functions of x and y, then f_xy = f_yx.
Example: Let z = ax² + bxy + cy² + dx + ey + f.
First-Order Partial Derivatives:

∂z/∂x = 2ax + by + d,  ∂z/∂y = bx + 2cy + e

Second-Order Partial Derivatives:

∂²z/∂x² = ∂/∂x (∂z/∂x) = 2a
∂²z/∂y² = ∂/∂y (∂z/∂y) = 2c
∂²z/∂x∂y = ∂/∂x (∂z/∂y) = ∂/∂y (∂z/∂x) = b

EXERCISE 7.1: Let f(x, y) = e^{xy}. Find f_xy, f_yx.

The total differential of a function y of variables {x_i : i = 1, . . . , n} is

dy = Σ_{i=1}^{n} (∂y/∂x_i) dx_i

Example: Let y = x₁² + x₂³. In this case n = 2 because there are two independent variables and

dy = (∂y/∂x₁) dx₁ + (∂y/∂x₂) dx₂ = 2x₁ dx₁ + 3x₂² dx₂
EXERCISE 7.2: Let y = x₁² + x₁²x₂ + x₂³. Calculate ∂y/∂x₁, ∂y/∂x₂, and dy.

Suppose we have a function z of two parameterized variables such that

z = f(x(t), y(t))

The total derivative of the parameterized function z of two variables is

dz/dt = (∂f/∂x)(dx/dt) + (∂f/∂y)(dy/dt)

This expression can readily be generalized for a parameterized function of several variables.

EXERCISE 7.3: Let z = x² + y², x = at, y = be^t. Find dz/dt.
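The total-derivative formula can be checked numerically. The parameterization below is an illustrative choice of my own (not Exercise 7.3): z = xy with x = cos t, y = sin t, for which dz/dt = (∂z/∂x)(dx/dt) + (∂z/∂y)(dy/dt) = −sin²t + cos²t = cos 2t.

```python
import math

# Total derivative of z = x*y with x = cos(t), y = sin(t):
# dz/dt = y*(-sin t) + x*(cos t) = cos(2t).
def z_of_t(t):
    return math.cos(t) * math.sin(t)

t = 0.7
total = math.cos(2 * t)                          # analytic total derivative
h = 1e-6
centered = (z_of_t(t + h) - z_of_t(t - h)) / (2 * h)
print(total, centered)
```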
Application: Jacobian Transformation. The equation of a coordinate transformation may be written as

y = Ax

If we consider a two-dimensional system, the total derivative of the transformation equation is

dy₁ = (∂y₁/∂x₁) dx₁ + (∂y₁/∂x₂) dx₂
dy₂ = (∂y₂/∂x₁) dx₁ + (∂y₂/∂x₂) dx₂

or

dy₁ = J₁₁ dx₁ + J₁₂ dx₂
dy₂ = J₂₁ dx₁ + J₂₂ dx₂

where

J_ij = ∂y_i/∂x_j

Collecting the elements {J_ij} in the 2 × 2 square matrix J gives

[dy_i] = Σ_{j=1}^{2} [J_ij][dx_j]

or the matrix equation

dy = J dx

The matrix J is called the Jacobian and is given by

[J_ij] = ∂y_i/∂x_j

EXERCISE 7.4: Find the Jacobian of the coordinate rotation y = ax, where a is the matrix of coordinate rotations introduced in Chapter 4, Section 4.1.
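A Jacobian can also be built numerically from finite-difference approximations of ∂y_i/∂x_j. The map below is an illustration of my own (not Exercise 7.4): the polar-to-Cartesian transformation y₁ = r cos θ, y₂ = r sin θ, whose Jacobian determinant is r.

```python
import math

# Numerical Jacobian J_ij = dy_i/dx_j of the map (r, theta) -> (r cos t, r sin t).
def transform(r, theta):
    return (r * math.cos(theta), r * math.sin(theta))

def jacobian(r, theta, h=1e-6):
    """Centered-difference approximation of the 2x2 Jacobian matrix."""
    y1a, y2a = transform(r + h, theta)
    y1b, y2b = transform(r - h, theta)
    y1c, y2c = transform(r, theta + h)
    y1d, y2d = transform(r, theta - h)
    return [[(y1a - y1b) / (2 * h), (y1c - y1d) / (2 * h)],
            [(y2a - y2b) / (2 * h), (y2c - y2d) / (2 * h)]]

r, theta = 2.0, 0.6
J = jacobian(r, theta)
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]      # should be close to r
print(J, det)
```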
7.2 VECTOR ANALYSIS

Vector analysis is the study of vectors and their transformation properties. As we show in the following, it is a discipline in which partial differentiation plays a major role.
Scalar and Vector Fields

Let x₁, x₂, x₃ denote the Cartesian coordinates of a point X in a region of space R. The position vector x of X is

x = x₁i₁ + x₂i₂ + x₃i₃

where i₁, i₂, i₃ are unit vectors defined along the orthogonal axes of the coordinate system. If we can associate a scalar function f with every point in R, then f is the scalar field in R and may be written

f(x₁, x₂, x₃) = f(x)

An example of a scalar field is the temperature at each point in a region of space. Instead of a scalar function, suppose we associate a vector v with every point in R. The resulting vector field has the form

v(x₁, x₂, x₃) = v(x)

A vector field is exemplified by a velocity field or a magnetic field. The vector field is a function that assigns a vector to every point in a region. Scalar and vector fields commute, that is,

fu = uf

Vector fields u and v may be multiplied in the usual way as the dot product

u · v = (u₁i₁ + u₂i₂ + u₃i₃) · (v₁i₁ + v₂i₂ + v₃i₃) = Σ_{m=1}^{3} u_m v_m

and the cross product

        | i₁  i₂  i₃ |
u × v = | u₁  u₂  u₃ | = −v × u
        | v₁  v₂  v₃ |

Vector fields u, v, w satisfy the triple scalar product

u · (v × w) = v · (w × u) = w · (u × v)

and the triple vector product

u × (v × w) = v(u · w) − w(u · v)
Example: The expansion of the triple scalar product of the vector fields A, B, C begins by writing B × C as the determinant with rows (î, ĵ, k̂), (b_x, b_y, b_z), (c_x, c_y, c_z), which gives

A · (B × C) = (a_x î + a_y ĵ + a_z k̂) · [(b_y c_z − b_z c_y)î − (b_x c_z − b_z c_x)ĵ + (b_x c_y − b_y c_x)k̂]
            = a_x(b_y c_z − b_z c_y) + a_y(b_z c_x − b_x c_z) + a_z(b_x c_y − b_y c_x)
Gradient, Divergence, and Curl

We can determine the spatial variation of a scalar or vector field by introducing the del operator ∇ defined in Cartesian coordinates as

∇ ≡ i₁ ∂/∂x₁ + i₂ ∂/∂x₂ + i₃ ∂/∂x₃

Applying ∇ to the scalar field f gives a vector field called the gradient of f. It is denoted as

grad f = ∇f = i₁ ∂f/∂x₁ + i₂ ∂f/∂x₂ + i₃ ∂f/∂x₃

The gradient of f points in the direction in which the scalar field f changes the most with respect to a change in position. The vector field ∇f is perpendicular, or normal, to the surfaces corresponding to constant values of f. The arrows in Figure 7.2 illustrate the direction of the gradient at several points around the surface of constant f.

Figure 7.2  Direction of gradient.
Example: Let f(x, y) = x²y be a scalar field. The gradient of f(x, y) is

∇f(x, y) = î ∂f/∂x + ĵ ∂f/∂y = 2xy î + x² ĵ

The del operator can be applied to vector fields in two ways: (1) to create a scalar and (2) to create another vector. Suppose a is a vector field. We create a scalar by defining the divergence of a as the dot product of the operator ∇ and the vector a:

div a ≡ ∇ · a = (i₁ ∂/∂x₁ + i₂ ∂/∂x₂ + i₃ ∂/∂x₃) · (a₁i₁ + a₂i₂ + a₃i₃)
             = ∂a₁/∂x₁ + ∂a₂/∂x₂ + ∂a₃/∂x₃

because i_m · i_n = δ_mn, where δ_mn is the Kronecker delta. The cross product of the operator ∇ and the vector a is called the rotation of a or the curl of a. It is given by

                 | i₁      i₂      i₃     |
curl a ≡ ∇ × a = | ∂/∂x₁   ∂/∂x₂   ∂/∂x₃  |
                 | a₁      a₂      a₃     |

Note that a · ∇ and a × ∇ are not commutative; that is, a · ∇ ≠ ∇ · a and a × ∇ ≠ ∇ × a. The vector products a · ∇ and a × ∇ are given by

a · ∇ = (a₁i₁ + a₂i₂ + a₃i₃) · (i₁ ∂/∂x₁ + i₂ ∂/∂x₂ + i₃ ∂/∂x₃) = Σ_{m=1}^{3} a_m ∂/∂x_m

and

        | i₁      i₂      i₃     |
a × ∇ = | a₁      a₂      a₃     |
        | ∂/∂x₁   ∂/∂x₂   ∂/∂x₃  |

The divergence of the gradient of a scalar field f gives the Laplacian of f; thus in Cartesian coordinates we have

∇ · (∇f) = Σ_{m=1}^{3} ∂²f/∂x_m² ≡ ∇²f
By contrast, the Laplacian of a vector field u is

∇²u = ∇(∇ · u) − ∇ × (∇ × u)

The curl of the gradient of a scalar field vanishes,

∇ × (∇f) = 0

and the divergence of the curl of a vector field is zero,

∇ · (∇ × u) = 0

Several other useful relations are summarized as follows:

DEL OPERATOR RELATIONS

Let f, g be scalar fields and u, v be vector fields.

Sum of fields:
∇(f + g) = ∇f + ∇g
∇ · (u + v) = ∇ · u + ∇ · v
∇ × (u + v) = ∇ × u + ∇ × v

Product of fields:
∇(fg) = f(∇g) + g(∇f)
∇ · (fu) = f(∇ · u) + (∇f) · u
∇ × (fu) = f(∇ × u) + (∇f) × u
∇ · (u × v) = v · (∇ × u) − u · (∇ × v)
∇ × (u × v) = u(∇ · v) + (v · ∇)u − v(∇ · u) − (u · ∇)v
∇(u · v) = u × (∇ × v) + v × (∇ × u) + (v · ∇)u + (u · ∇)v

Laplacian:
∇ · (∇f) = ∇²f
∇ × (∇ × u) = ∇(∇ · u) − ∇²u
Example: Suppose A = 2xy î + x² ĵ = A_x î + A_y ĵ. Then the dot product is

∇ · A = ∂A_x/∂x + ∂A_y/∂y = 2y

The cross product is

        | î      ĵ      k̂    |
∇ × A = | ∂/∂x   ∂/∂y   ∂/∂z |
        | A_x    A_y    A_z  |

Expanding the determinant and simplifying gives

∇ × A = î(∂A_z/∂y − ∂A_y/∂z) + ĵ(∂A_x/∂z − ∂A_z/∂x) + k̂(∂A_y/∂x − ∂A_x/∂y) = (2x − 2x)k̂ = 0

The curl of A vanishes because A is the gradient of f(x, y) = x²y from a previous example, and the curl of a gradient is zero. The divergence of A is the Laplacian of f; thus

∇ · A = ∇ · (∇f) = ∇²f = ∂²f/∂x² + ∂²f/∂y² + ∂²f/∂z² = 2y
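The divergence and curl in this example can be verified with centered differences; a minimal numeric sketch for A = 2xy î + x² ĵ:

```python
# Numerical divergence and z-component of curl for A = (2xy, x**2, 0)
# at an arbitrary point, via centered differences.
def Ax(x, y): return 2 * x * y
def Ay(x, y): return x**2

def d_dx(f, x, y, h=1e-6): return (f(x + h, y) - f(x - h, y)) / (2 * h)
def d_dy(f, x, y, h=1e-6): return (f(x, y + h) - f(x, y - h)) / (2 * h)

x, y = 1.3, -0.4
div_A = d_dx(Ax, x, y) + d_dy(Ay, x, y)          # should be 2y
curl_z = d_dx(Ay, x, y) - d_dy(Ax, x, y)         # should be 0 since A = grad(f)
print(div_A, curl_z)
```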
A field vector v is called irrotational if the curl, or rotation, of v vanishes, that is,

∇ × v = 0

The vector field v is said to be solenoidal if

∇ · v = 0

A vector field V that is the gradient of a scalar field φ is irrotational because the curl of a gradient vanishes; thus

∇ × V = ∇ × (∇φ) = 0

Similarly, a vector field U that is the curl of a vector field u is solenoidal because the divergence of the curl vanishes; thus

∇ · U = ∇ · (∇ × u) = 0

EXERCISE 7.5: Let r = x î + y ĵ + z k̂. Evaluate (a) r = |r|; (b) ∇ · r; (c) ∇ × r; (d) ∇(r); and (e) ∇(1/r).

Application: Propagation of Seismic Waves. Seismic waves are vibrations, or displacements from an undisturbed position, that propagate from a source, such as an explosion, through the earth. Seismic wave propagation is an example of a displacement propagating through an elastic medium. The equation for a wave propagating through an elastic, homogeneous, isotropic medium is

ρ ∂²u/∂t² = (λ + 2μ) ∇(∇ · u) − μ ∇ × (∇ × u)    (7.2.1)

where ρ is the mass density of the medium, λ and μ are properties of the elastic medium called Lamé's constants, and u measures the displacement of the medium from its undisturbed state [Tatham and McCormack, 1991]. If the displacement u_I is irrotational, then u_I satisfies the constraint

∇ × u_I = 0
and Eq. (7.2.1) becomes

ρ ∂²u_I/∂t² = (λ + 2μ) ∇(∇ · u_I)    (7.2.2)

The vector identity

∇(∇ · u) = ∇ × (∇ × u) + ∇²u    (7.2.3)

for an irrotational vector is

∇(∇ · u_I) = ∇²u_I

so that Eq. (7.2.1) becomes the wave equation

∂²u_I/∂t² = v_I² ∇²u_I    (7.2.4)

The speed of wave propagation v_I for an irrotational displacement u_I is

v_I = √[(λ + 2μ)/ρ]

A solution of Eq. (7.2.4) that is irrotational is the solution for a longitudinal wave propagating in the z direction with amplitude u₀, frequency ω, and wavenumber k:

u_I = u₀ e^{i(kz−ωt)} k̂

If the displacement u_s is solenoidal, then u_s satisfies the constraint

∇ · u_s = 0

and Eq. (7.2.1) becomes

ρ ∂²u_s/∂t² = −μ ∇ × (∇ × u_s)    (7.2.5)

The vector identity in Eq. (7.2.3) for a solenoidal vector reduces to

∇ × (∇ × u_s) = −∇²u_s

so that Eq. (7.2.5) becomes the wave equation

∂²u_s/∂t² = v_s² ∇²u_s    (7.2.6)

The speed of wave propagation v_s for a solenoidal displacement u_s is

v_s = √(μ/ρ) < v_I

A solution of Eq. (7.2.6) that is solenoidal is the solution for a transverse wave with displacement along k̂ propagating in the x direction:

u_s = u₀ e^{i(kx−ωt)} k̂

The irrotational displacement u_I represents a longitudinal P (primary) wave, whereas the solenoidal displacement u_s represents a slower transverse S (secondary) wave. Both types of waves are associated with earthquakes and explosions on or below the earth's surface. The waves are useful for geophysical studies of the earth's interior.
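The two wave speeds are easy to compare numerically. The values of ρ, λ, and μ below are assumed sample values for illustration only, not data from the text; for any positive Lamé constants the S-wave speed is smaller than the P-wave speed.

```python
import math

# P- and S-wave speeds for an elastic medium (assumed sample properties).
rho = 2500.0        # mass density, kg/m^3 (assumed)
lam = 3.0e10        # Lame constant lambda, Pa (assumed)
mu = 2.0e10         # Lame constant mu (shear modulus), Pa (assumed)

v_p = math.sqrt((lam + 2 * mu) / rho)   # irrotational (P) wave speed
v_s = math.sqrt(mu / rho)               # solenoidal (S) wave speed
print(v_p, v_s)
```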
7.3 ANALYTICITY AND THE CAUCHY–RIEMANN EQUATIONS

A fundamental concept in complex analysis is the concept of analyticity. A function f(z) of a complex variable z is analytic in a domain D if f(z) exists and is differentiable at all points in D. In principle, the complex function f(z) of the complex variable z = x + iy can be written as the sum of a real function u(x, y) and an imaginary function iv(x, y); thus

f(z) = u(x, y) + iv(x, y)    (7.3.1)
where u and v are real functions of the real variables x, y. The criteria for establishing the analyticity of f(z) are obtained by extending the definition of partial derivative presented in Section 7.1 from real to complex functions. The derivative of f(z) with respect to z is

f′(z) = lim_{Δz→0} [f(z + Δz) − f(z)]/Δz    (7.3.2)

The complex increment Δz = Δx + iΔy may approach 0 along any path in a neighborhood of z. A neighborhood of z is an open set in the complex plane that encloses the point z. One example path for Δz → 0 is to approach the point z by letting Δx → 0 and then Δy → 0. In this case, Δz → iΔy and Eq. (7.3.2) becomes

f′(z) = lim_{Δy→0} [f(x, y + Δy) − f(x, y)]/(iΔy)    (7.3.3)

Expanding Eq. (7.3.3) in terms of u, v gives

f′(z) = lim_{Δy→0} [u(x, y + Δy) + iv(x, y + Δy) − u(x, y) − iv(x, y)]/(iΔy)    (7.3.4)
or

f′(z) = lim_{Δy→0} [v(x, y + Δy) − v(x, y)]/Δy − i lim_{Δy→0} [u(x, y + Δy) − u(x, y)]/Δy    (7.3.5)

Applying the definition of partial derivative in Section 7.1 to Eq. (7.3.5) yields

f′(z) = ∂v/∂y − i ∂u/∂y    (7.3.6)

An alternative route to an expression for f′(z) is provided by letting Δy → 0 before letting Δx → 0. In this case, Δz → Δx and Eq. (7.3.2) becomes

f′(z) = lim_{Δx→0} [f(x + Δx, y) − f(x, y)]/Δx    (7.3.7)

By analogy with the procedure leading from Eq. (7.3.3) to (7.3.6), we obtain

f′(z) = ∂u/∂x + i ∂v/∂x    (7.3.8)

Equations (7.3.6) and (7.3.8) are equivalent expressions for f′(z). Equating the real and imaginary parts of Eqs. (7.3.6) and (7.3.8) gives the Cauchy–Riemann equations

∂u/∂x = ∂v/∂y    (7.3.9)

and

∂u/∂y = −∂v/∂x    (7.3.10)
The validity of the Cauchy–Riemann equations depends on the existence of f′(z), which depends on the analyticity of f(z). If the function f(z) is analytic, then f′(z) exists and the Cauchy–Riemann equations are valid. Conversely, if the Cauchy–Riemann equations apply, then the function f(z) is analytic and f′(z) exists [Kreyszig, 1999]. If we differentiate Eq. (7.3.9) with respect to x and Eq. (7.3.10) with respect to y, we obtain

∂²u/∂x² = ∂²v/∂x∂y    (7.3.11)

and

−∂²u/∂y² = ∂²v/∂y∂x    (7.3.12)

Subtracting Eq. (7.3.12) from (7.3.11) gives Laplace's equation in two Cartesian dimensions:

∂²u/∂x² + ∂²u/∂y² = 0    (7.3.13)

Laplace's equation is an example of a partial differential equation. Partial differential equations are discussed in more detail in Chapter 12.
EXERCISE 7.6: Given f(z) = z², show that the Cauchy–Riemann equations are satisfied and evaluate f′(z).
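The Cauchy–Riemann equations can be checked numerically for any analytic function. The sketch below uses f(z) = e^z (a different function from Exercise 7.6, so the exercise is left to the reader), for which u = eˣ cos y and v = eˣ sin y.

```python
import math

# Numerical check of the Cauchy-Riemann equations for f(z) = exp(z),
# with u(x, y) = exp(x)*cos(y) and v(x, y) = exp(x)*sin(y).
def u(x, y): return math.exp(x) * math.cos(y)
def v(x, y): return math.exp(x) * math.sin(y)

def d_dx(f, x, y, h=1e-6): return (f(x + h, y) - f(x - h, y)) / (2 * h)
def d_dy(f, x, y, h=1e-6): return (f(x, y + h) - f(x, y - h)) / (2 * h)

x, y = 0.3, 1.1
cr1 = d_dx(u, x, y) - d_dy(v, x, y)     # du/dx - dv/dy, should be ~0
cr2 = d_dy(u, x, y) + d_dx(v, x, y)     # du/dy + dv/dx, should be ~0
print(cr1, cr2)
```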
CHAPTER 8
INTEGRAL CALCULUS
Differential calculus may be thought of as a reductionist view of change: we study a changing system by looking at infinitesimally small variations. The inverse process is to reconstruct the whole by summing the infinitesimal differences using the apparatus of integral calculus.
8.1 INDEFINITE INTEGRALS

The indefinite integral is the antiderivative, or inverse, of the derivative. Suppose the function g(x) is given and has the derivative

g′(x) = dg(x)/dx

Taking the inverse of g′(x) should recover g(x); thus

[g′(x)]⁻¹ = ∫ g′(x) dx = g(x) + C

Notice that an additive constant C has appeared on the right-hand side of the equation. This constant appears because the derivative of g(x) + C is the same as the derivative of g(x), so information may be lost upon differentiation. The constant
C must be determined from additional information, such as specifying g(x) at a particular value of x.
Example: Let g(x) = xⁿ so that

g′(x) = nxⁿ⁻¹

The integral of g′(x) is

∫ g′(x) dx = ∫ nxⁿ⁻¹ dx = xⁿ + C

The integration constant C must equal 0 in this case since g(x) = xⁿ.

For two continuous functions f(x), g(x) we have the properties

∫ [f(x) ± g(x)] dx = ∫ f(x) dx ± ∫ g(x) dx

and

∫ af(x) dx = a ∫ f(x) dx

where a is a constant. The variable of integration is a dummy variable, that is,

∫ f(x) dx = ∫ f(u) du

where x, u are dummy variables that may be freely interchanged. From the definition of the indefinite integral as the inverse of the derivative, we find that the derivative of an indefinite integral is the integrand; thus

(d/dx) ∫ f(x) dx = f(x)

Example: Suppose g(x) = x + b, where b is a constant. Then g′(x) = 1. The indefinite integral of g′(x) is

g(x) = ∫ g′(x) dx = ∫ 1 dx = x + C

where C is the integration constant. Comparing the integrated function with the original function shows that C = b. Alternatively, we can say that C is determined by specifying the condition g(0) = b.
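The role of the integration constant can be illustrated numerically: integrating g′(x) = 1 from 0 recovers x, and the condition g(0) = b fixes the constant C = b. A small sketch with an assumed sample value of b:

```python
# Recover g(x) = x + b from g'(x) = 1 by numerical integration plus
# an initial condition g(0) = b that fixes the integration constant.
b = 4.0                                  # assumed sample value

def gprime(x):
    return 1.0

def integrate(f, a, x, n=1000):
    """Midpoint-rule approximation of the integral of f from a to x."""
    h = (x - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

x = 2.5
C = b                                    # from the condition g(0) = b
g_x = integrate(gprime, 0.0, x) + C
print(g_x)
```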
Tables of Integrals

Integrals of well-known functions have been tabulated in handbooks. These integrals were evaluated using several different strategies. One useful strategy is to calculate the derivative of a function, then recognize that the inverse operation of differentiation is integration. A few of these integrals are given below.

INTEGRALS

Assume (u, v) are functions of x and (a, n) are constants. An integration constant C should be added to each indefinite integral.

∫ a dx = ax
∫ u dv = uv − ∫ v du
∫ axⁿ dx = a xⁿ⁺¹/(n + 1),  n ≠ −1
∫ (a/x) dx = a ln|x|
∫ sin ax dx = −(1/a) cos ax
∫ cos ax dx = (1/a) sin ax
∫ tan ax dx = −(1/a) ln|cos ax|
∫ sinh ax dx = (1/a) cosh ax
∫ cosh ax dx = (1/a) sinh ax
∫ tanh ax dx = (1/a) ln(cosh ax)
∫ e^{ax} dx = (1/a) e^{ax}
∫ ln ax dx = x ln ax − x
∫ x sin ax dx = (1/a²) sin ax − (x/a) cos ax
∫ x cos ax dx = (1/a²) cos ax + (x/a) sin ax
∫ sin² ax dx = x/2 − sin 2ax/(4a)
∫ cos² ax dx = x/2 + sin 2ax/(4a)
∫ sinh² ax dx = sinh 2ax/(4a) − x/2
∫ cosh² ax dx = sinh 2ax/(4a) + x/2
∫ xⁿ e^{ax} dx = (xⁿ/a) e^{ax} − (n/a) ∫ xⁿ⁻¹ e^{ax} dx
∫ xⁿ ln ax dx = xⁿ⁺¹ ln ax/(n + 1) − xⁿ⁺¹/(n + 1)²
∫ dx/(a² + x²) = (1/a) tan⁻¹(x/a)
∫ dx/(a² − x²) = (1/2a) ln|(a + x)/(a − x)|

8.2 DEFINITE INTEGRALS
Let f(x) be continuous and single valued for a ≤ x ≤ b. Divide the interval (a, b) of the x axis into n parts at the points (a ≡ x₀), x₁, x₂, . . . , (xₙ ≡ b) as in Figure 8.1. Define Δx_i = x_i − x_{i−1} and let ξ_i be a value of x in the interval x_{i−1} ≤ ξ_i ≤ x_i. Form the sum of the areas of each approximately rectangular area to get

Σ_{i=1}^{n} f(ξ_i) Δx_i

Figure 8.1  Discretizing the area under the curve.

If we take the limit as n → ∞ and Δx_i → 0, the approximation of the area under the curve as a rectangle becomes more accurate and we obtain the definition of the definite integral of f(x) between the limits a, b; thus

lim_{n→∞, Δx_i→0} Σ_{i=1}^{n} f(ξ_i) Δx_i ≡ ∫_a^b f(x) dx    (8.2.1)

The points a, b are referred to as the lower and upper limits of integration, respectively. The definite integral

∫_a^b f(x) dx
equals the area under the curve f(x) in the interval (a, b).

Fundamental Theorem of Calculus

The connection between differential and integral calculus is through the fundamental theorem of calculus [Larson et al., 1990]. In particular, if f(x) is continuous in the interval a ≤ x ≤ b and G(x) is a function such that dG/dx = f(x) for all values of x in (a, b), then

∫_a^b f(x) dx = G(b) − G(a)    (8.2.2)

The definite integral acts like an antiderivative of f(x), or the inverse operation to differentiation. The concept of inverse differentiation can be made more explicit by introducing the notation

[G(x)]_a^b = G(b) − G(a)    (8.2.3)

and using dG/dx = f(x) in the definite integral of Eq. (8.2.2) to give

∫_a^b (dG/dx) dx = [G(x)]_a^b    (8.2.4)
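Eq. (8.2.2) can be checked by comparing a Riemann sum against G(b) − G(a). The example below is an illustrative choice: f(x) = sin x on [0, π] with antiderivative G(x) = −cos x, so both sides equal 2.

```python
import math

# Fundamental theorem check: sum of f(xi)*dx vs G(b) - G(a)
# for f(x) = sin(x), G(x) = -cos(x) on [0, pi].
def f(x): return math.sin(x)
def G(x): return -math.cos(x)

a, b_up, n = 0.0, math.pi, 100000
dx = (b_up - a) / n
riemann = sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx   # midpoint sum
ftc = G(b_up) - G(a)
print(riemann, ftc)
```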
Relation of Definite and Indefinite Integrals

We can express the indefinite integral as a definite integral with a variable limit:

∫ f(x) dx = ∫_a^x f(u) du + C′

where C′ is a constant of integration.

Example: Let f(x) = sin x. The indefinite integral of the left-hand side is

∫ f(x) dx = −cos x + C

Consider the right-hand side:

∫_a^x f(u) du + C′ = [−cos u]_a^x + C′ = −cos x + cos a + C′

Comparing the left- and right-hand sides implies C = cos a + C′.
Derivative of Definite Integrals

Let φ(α) be the definite integral

φ(α) = ∫_{u(α)}^{v(α)} f(y, α) dy    (8.2.5)

where f(y, α) is a function of the independent variables y and α. The limits u(α), v(α) are differentiable functions of the independent variable α, and α is defined in the interval a ≤ α ≤ b, where a and b are constants. Furthermore, suppose f(y, α) and ∂f/∂α are continuous in the yα plane defined by the intervals u ≤ y ≤ v and a ≤ α ≤ b. Under these conditions, the derivative of φ(α) with respect to α is given by Leibnitz's rule [Arfken and Weber, 2001; Spiegel, 1963]:

dφ(α)/dα = ∫_{u(α)}^{v(α)} [∂f(y, α)/∂α] dy + f(v(α), α) dv(α)/dα − f(u(α), α) du(α)/dα    (8.2.6)

If u and v are equated to constants c and d, respectively, Eq. (8.2.6) becomes

(d/dα) ∫_c^d f(y, α) dy = ∫_c^d [∂f(y, α)/∂α] dy    (8.2.7)
If the function $f$ does not depend on $\alpha$, Eq. (8.2.6) becomes

$$\frac{d}{d\alpha}\int_{u(\alpha)}^{v(\alpha)} f(y)\,dy = f(v(\alpha))\frac{dv(\alpha)}{d\alpha} - f(u(\alpha))\frac{du(\alpha)}{d\alpha} \qquad (8.2.8)$$

Example: Let $f(y) = \ln y$, $u(\alpha) = a$, and $v(\alpha) = \alpha$. In this case, the function $f$ does not depend on $\alpha$. Consequently, we substitute $u$, $v$, and $f$ into Eq. (8.2.8) to find

$$\frac{d}{d\alpha}\int_a^\alpha \ln y\,dy = f(\alpha)\frac{d\alpha}{d\alpha} - f(a)\frac{da}{d\alpha} = \ln\alpha \qquad (8.2.9)$$

because $da/d\alpha = 0$ and $f(\alpha) = \ln\alpha$. This result is verified by first evaluating the integral

$$\int_a^\alpha \ln y\,dy = [\,y\ln y - y\,]_a^\alpha = \alpha\ln\alpha - \alpha - (a\ln a - a) \qquad (8.2.10)$$

Taking the derivative of Eq. (8.2.10) with respect to $\alpha$ gives

$$\frac{d}{d\alpha}\left[\alpha\ln\alpha - \alpha - (a\ln a - a)\right] = \frac{d}{d\alpha}\left[\alpha\ln\alpha - \alpha\right] = \ln\alpha \qquad (8.2.11)$$

Equation (8.2.11) agrees with Eq. (8.2.9) and verifies Eq. (8.2.8).
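The same verification can be sketched numerically (an added illustration, not from the text): approximate the derivative of the integral with respect to its upper limit by a central difference and compare with $\ln\alpha$. The helper name `integral_ln` is arbitrary:

```python
import math

def integral_ln(a, alpha, n=10000):
    """Trapezoidal approximation of the integral of ln(y) from a to alpha."""
    h = (alpha - a) / n
    s = 0.5 * (math.log(a) + math.log(alpha))
    for i in range(1, n):
        s += math.log(a + i * h)
    return s * h

a, alpha, eps = 1.0, 2.0, 1e-4
# Central-difference derivative with respect to the upper limit alpha
deriv = (integral_ln(a, alpha + eps) - integral_ln(a, alpha - eps)) / (2 * eps)
print(deriv, math.log(alpha))  # the two values agree to several decimals
```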
8.3 SOLVING INTEGRALS

Change of Variable or Variable Substitution

One of the most powerful methods for solving integrals is to make a change of variables until the original integral acquires the form of an integral with a known solution. This technique is also known as variable substitution because one variable is substituted for another. To substitute one variable for another, let $x = g(t)$. We can change the integral of a function with respect to $x$ into an integral with respect to the new variable $t$ by making a change of variables. For an indefinite integral we obtain

$$\int f(x)\,dx = \int f(g(t))\,\frac{dg(t)}{dt}\,dt$$

The definite integral is a little more complicated because we have to account for a change of integration limits. The result is

$$\int_a^b f(x)\,dx = \int_\alpha^\beta f(g(t))\,\frac{dg(t)}{dt}\,dt$$

where the integration limits satisfy $g(\alpha) = a$ and $g(\beta) = b$.

EXERCISE 8.1: Given

$$f(x) = \frac{1}{x}, \quad x = g(t) = t^n$$

find $\int_a^b f(x)\,dx$ by direct integration and by a change of variable.
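A quick numerical sketch of the substitution idea (added here as an illustration; the quadrature helper and the specific limits are arbitrary choices, not from the text): with $f(x) = 1/x$ and $x = t^n$, the integrand transforms to $n/t$ and both forms give $\ln(b/a)$:

```python
import math

def trapezoid(f, a, b, n=20000):
    """Simple composite trapezoidal quadrature of f on [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

a, b, n_exp = 1.0, 4.0, 3  # integrate 1/x from 1 to 4; substitute x = t**3
direct = trapezoid(lambda x: 1.0 / x, a, b)
# After x = t**n: dx = n t**(n-1) dt, so the integrand 1/x becomes n/t,
# with limits t = a**(1/n) to t = b**(1/n)
substituted = trapezoid(lambda t: n_exp / t, a ** (1 / n_exp), b ** (1 / n_exp))
print(direct, substituted, math.log(b / a))  # all three agree
```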
Integration by Parts

Consider the differential of a product of functions $u$, $v$:

$$d(uv) = u\,dv + v\,du \qquad (8.3.1)$$

Integrate Eq. (8.3.1) between the integration limits $A$ and $B$ to get

$$\int_A^B d(uv) = \int_A^B u\,dv + \int_A^B v\,du \qquad (8.3.2)$$

Evaluating the left-hand side gives

$$[uv]_A^B = \int_A^B u\,dv + \int_A^B v\,du \qquad (8.3.3)$$

Rearranging Eq. (8.3.3) gives the formula for an integration by parts, namely,

$$\int_A^B u\,dv = [uv]_A^B - \int_A^B v\,du \qquad (8.3.4)$$
EXERCISE 8.2: Evaluate the indefinite integral $I(x) = \int x\sin x\,dx$ using integration by parts.

Application: Length of a Curve. Consider the curve in Figure 8.2 between the end points $A$ and $B$. We approximate the curve with straight-line segments. The length of the curve is approximately the sum of the straight-line segments; that is, $L \approx \sum_i \Delta l_i$.

Figure 8.2 Length of a curve.

From the Pythagorean theorem, we find the length of a line segment is

$$\Delta l_i = \sqrt{\Delta x_i^2 + \Delta y_i^2} = \left[1 + \left(\frac{\Delta y_i}{\Delta x_i}\right)^2\right]^{1/2}\Delta x_i$$

The curve length $L$ is given by the sum over the length of all line segments:

$$L \approx \sum_i \left[1 + \left(\frac{\Delta y_i}{\Delta x_i}\right)^2\right]^{1/2}\Delta x_i$$

In the limit as $\Delta x_i \to 0$ we have

$$L = \int_A^B \left[1 + \left(\frac{dy}{dx}\right)^2\right]^{1/2} dx$$

where $A$, $B$ are (boundary) end points of the curve.

EXERCISE 8.3: Calculate the length of a straight line $y = mx + b$ between the end points $x_A$, $x_B$.

8.4 NUMERICAL INTEGRATION
Integrals occur in many applications, yet it is not always possible to solve them analytically. When the integral is intractable, it is necessary to use an approximation technique.

Definite Integrals

Several numerical techniques exist for solving definite integrals. One widely used technique is to approximate the integrand with an $n$th order polynomial. The definite integral is then approximated by the sum

$$\int_a^b f(x)\,dx \approx \sum_{i=0}^n w_i\,f(x_i) \qquad (8.4.1)$$

where the finite interval $[a, b]$ is divided into $n$ intervals bounded by $n + 1$ evenly spaced base points $\{x_0, x_1, \ldots, x_n\}$ with separation

$$h = x_{i+1} - x_i = \frac{b - a}{n}$$

A weight factor $w_i$ is associated with the functional value $f(x_i)$.

Trapezoidal Rule

The trapezoidal rule approximates the integrand $f(x)$ of Eq. (8.4.1) with a straight line between the limits $x_i$, $x_{i+1}$ of a subinterval. Since it takes two points on a graph to define a straight line, the trapezoidal rule is sometimes called the 2-point rule. The trapezoidal rule has the form

$$\int_a^b f(x)\,dx \approx h\left[\tfrac{1}{2}f(a) + f(x_1) + f(x_2) + \cdots + f(x_{n-1}) + \tfrac{1}{2}f(b)\right] \qquad (8.4.2)$$

where $x_0 = a$, $x_n = b$, $\{x_{i+1} = x_i + h$ for all $i = 0, 1, \ldots, n-1\}$, and $h = (b - a)/n$.

Example: The simplest application of the trapezoidal rule is the case in which only one interval is defined, that is, $n = 1$. Then Eq. (8.4.2) becomes

$$\int_a^b f(x)\,dx \approx h\left[\tfrac{1}{2}f(a) + \tfrac{1}{2}f(b)\right]$$

Simpson's Rule

A more accurate numerical integration algorithm than the trapezoidal rule is obtained by approximating the integrand $f(x)$ of Eq. (8.4.1) with a second-order polynomial (a quadratic equation) between the limits $x_i$, $x_{i+2}$ of a pair of subintervals. The equally spaced subintervals are paired beginning with $i = 0$. To assure that every subinterval has a pair, the number of subintervals $n$ must be even. Each double subinterval then contains three points on a graph. For this reason, Simpson's rule is sometimes called the 3-point rule. Simpson's rule has the form

$$\int_a^b f(x)\,dx \approx \frac{h}{3}\left[f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + \cdots + 4f(x_{n-1}) + f(x_n)\right] \qquad (8.4.3)$$

where $x_0 = a$, $x_n = b$, $\{x_{i+1} = x_i + h$ for all $i = 0, 1, \ldots, n-1\}$, and $h = (b - a)/n$.
Figure 8.3 Numerical integration.

Example: The simplest application of Simpson's rule is the case in which two subintervals are defined in the interval between $a$ and $b$. Thus $n = 2$ and Eq. (8.4.3) becomes

$$\int_a^b f(x)\,dx \approx \frac{h}{3}\left[f(a) + 4f(a + h) + f(b)\right]$$

where $h = (b - a)/2$. Notice that the number of subintervals $n = 2$ is even. The trapezoidal rule approximates the integrand with a straight line between the limits $a$, $b$. Simpson's rule uses a polynomial of order 2, a quadratic function, to approximate the integrand. Simpson's rule requires more work than the trapezoidal rule but is more accurate. This is illustrated in Figure 8.3 and Exercise 8.4.

EXERCISE 8.4: Let $y = f(x) = x^2$. Evaluate the definite integral $\int_0^b f(x)\,dx$ analytically and numerically using the trapezoidal rule and Simpson's rule. Compare the results.
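The two rules can be sketched in Python (an added illustration in the spirit of Exercise 8.4; the function names are arbitrary). For the quadratic integrand $f(x) = x^2$, Simpson's rule is exact, while the trapezoidal rule carries a small error:

```python
def f(x):
    return x * x

def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule, Eq. (8.4.2)."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def simpson(f, a, b, n):
    """Composite Simpson's rule, Eq. (8.4.3); n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return h * s / 3

b = 1.0
exact = b ** 3 / 3  # analytic value of the integral of x^2 from 0 to b
print(trapezoidal(f, 0.0, b, 4), simpson(f, 0.0, b, 4), exact)
```

Simpson's rule reproduces the exact value because a quadratic is fit exactly by a second-order polynomial.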
Improper Integrals

Methods exist for numerically integrating improper integrals in which the magnitude of one or both limits is $\infty$. The quadrature formulas are obtained by fitting the integrand to polynomials with unequal spacing between base points $\{x_i\}$. These methods are presented in references such as Carnahan et al. [1969] and Press et al. [1992].
CHAPTER 9
SPECIAL INTEGRALS
The fundamental concepts of integral calculus have been applied in many areas of mathematics. Some of these applications are considered here as special integrals. A few strategies for evaluating special integrals are illustrated below for the line integral and the double integral. These integrals provide an introduction to vector calculus integrals and multiple integrals, respectively. We then present three transforms: the Fourier transform, Z transform, and Laplace transform. The Fourier transform and Laplace transform are integral transforms, and the Z transform is related to the Fourier transform.
9.1 LINE INTEGRAL

The line integral may be viewed as an extension of the concept of the definite integral

$$I_d = \int_a^b f(x)\,dx$$

In the case of $I_d$, we integrate $f(x)$ along the $x$ axis between the points $x = a$ and $x = b$. Suppose that instead of integrating along the $x$ axis, we integrate along a piecewise smooth curve $C$. The curve $C$ is piecewise smooth in the interval $[a, b]$ if the interval can be partitioned into a finite number of subintervals on which $C$ is smooth, that is, continuously differentiable. The resulting integral of a function
$f(x)$ along the curve $C$ may be written as

$$\int_C f(x)\,ds$$

where $x$ and $f(x)$ are dependent on the parameter $s$. This is the concept of a line integral. To see how to solve a line integral, we must be more explicit in defining the summation process and the associated path of integration (Figure 9.1). The curve $C$ may be written as the vector

$$\mathbf{r}(s) = x(s)\,\hat{i} + y(s)\,\hat{j} + z(s)\,\hat{k}$$

when the arc length $s$ parameterizes the coordinates between the end points $A$, $B$ corresponding to $s = a$ and $s = b$. The curve $C$, the path of integration, is assumed to be a smooth curve so that $\mathbf{r}(s)$ is continuous and differentiable. The integrand $f(x, y, z)$ is a continuous function of $s$. As in the case of the definite integral, we form the sum

$$(I_L)_n = \sum_{m=1}^{n} f(x_m, y_m, z_m)\,\Delta s_m$$

where we have subdivided $C$ into $n$ segments of arc length

$$\Delta s_m = s_m - s_{m-1}$$

If we let $n \to \infty$ so that $\Delta s_m \to 0$, we obtain the line integral of $f$ along the path $C$ as

$$I_L = \int_C f(x, y, z)\,ds = \int_a^b f(x(s), y(s), z(s))\,ds$$

The value of $I_L$ depends on the path of integration $C$. If the path $C$ is closed, the integral is often written as $\oint_C f(x)\,dx$.
Figure 9.1 Path of integration.
Example: Evaluate the integral

$$\int_{(0,1)}^{(1,2)} (x^2 - y)\,dx + (y^2 + x)\,dy$$

along the parabola $x = t$, $y = t^2 + 1$. The integral is evaluated by replacing $x$, $y$ with their parametric representations. Since $t = 0$ at $(x, y) = (0, 1)$, and $t = 1$ at $(x, y) = (1, 2)$, the line integral becomes

$$\int_{t=0}^{1} \left[t^2 - (t^2 + 1)\right] dt + \left[(t^2 + 1)^2 + t\right] 2t\,dt = \int_0^1 \left(2t^5 + 4t^3 + 2t^2 + 2t - 1\right) dt$$

$$= \left[2\frac{t^6}{6} + 4\frac{t^4}{4} + 2\frac{t^3}{3} + 2\frac{t^2}{2} - t\right]_0^1 = \frac{1}{3} + 1 + \frac{2}{3} + 1 - 1 = 2$$
Ð C
A dr from (0, 0, 0) to
A ¼ (3x2 6yz)^i þ (2y þ 3xz)^j þ (1 4xyz2 )k^ 9.2
DOUBLE INTEGRAL
An integral of a function of two independent variables can be defined as a generalization of the definition of a definite integral. In particular, let $f(x, y)$ be defined in a closed region $R$ (Figure 9.2). Form the sum

$$\sum_{i=1}^{n} f(x_i, y_i)\,\Delta A_i$$

where $\Delta A_i$ is the area of the subregion at the point $(x_i, y_i)$. If we take the limit $n \to \infty$, we obtain the double integral of $f(x, y)$ over the region $R$:

$$I_D = \lim_{n\to\infty} \sum_{i=1}^{n} f(x_i, y_i)\,\Delta A_i = \iint_R f(x, y)\,dA$$

In Cartesian coordinates, we have

$$I_D = \iint_R f(x, y)\,dx\,dy$$

Figure 9.2 Double integral region R.

The solution of $I_D$ requires expressing the boundary of the region $R$ in terms of $x$, $y$. For example, suppose we write the curves $ACB$ and $ADB$ in Figure 9.2 as $y = g_1(x)$ and $y = g_2(x)$, respectively. Then $I_D$ becomes

$$I_D = \int_a^b \left[\int_{g_1(x)}^{g_2(x)} f(x, y)\,dy\right] dx$$

and is evaluated by first integrating $f(x, y)$ over $y$ with $x$ fixed, then integrating over $x$. Similarly, suppose the curves $DAC$ and $DBC$ are represented by $x = h_1(y)$ and $x = h_2(y)$, respectively. The corresponding expression for the double integral is

$$I_D = \int_c^d \left[\int_{h_1(y)}^{h_2(y)} f(x, y)\,dx\right] dy$$

In this case, the integral is solved by first integrating over $x$ with $y$ fixed, followed by the integration over $y$.
ð g2 (x)
ðb p(x) q(y) dx dy ¼
p(x) a
g1 (x)
ðd
ð h2 (x)
R
q(y) dy dx
q(y)
¼ c
p(x) dx dy
h1 (x)
Further simplification occurs if the limits of integration do not depend on the integration variables. Thus suppose R is defined by the inequalities a x b, c y d: Then ðb ðd
ðb f (x, y) dy dx ¼
a
c
ðd p(x) dx
a
q(y) dy c
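The factorization over a rectangular region can be sketched numerically (an added illustration with arbitrary choices $p(x) = x^2$, $q(y) = y^3$ on $0 \le x \le 1$, $0 \le y \le 2$, not from the text):

```python
def trapz(f, a, b, n=1000):
    """Composite trapezoidal quadrature of f on [a, b]."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

p = lambda x: x * x   # p(x) = x^2
q = lambda y: y ** 3  # q(y) = y^3

# Iterated integral over the rectangle versus the product of two 1D integrals
iterated = trapz(lambda x: p(x) * trapz(q, 0.0, 2.0), 0.0, 1.0)
product = trapz(p, 0.0, 1.0) * trapz(q, 0.0, 2.0)
print(iterated, product)  # both approach (1/3) * 4 = 4/3
```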
EXERCISE 9.2: Double Integral. Evaluate the double integral $\iint_R (x^2 + y^2)\,dx\,dy$ in the region $R$ bounded by $y = x^2$, $x = 2$, $y = 1$.

9.3 FOURIER ANALYSIS
Integral transforms have many applications in science and engineering. We begin our discussion of integral transforms with the Fourier transform. The Fourier transform is an extension of the concept of the Fourier series, which we consider in this section following the introduction of two related concepts: even functions and odd functions.

Even and Odd Functions

A function $p(x)$ is said to be even or symmetric if $p(-x) = p(x)$, whereas a function $q(x)$ is called odd or antisymmetric if $q(-x) = -q(x)$. The product $r(x) = p(x)\,q(x)$ of an even function and an odd function is odd, since

$$r(-x) = p(-x)\,q(-x) = p(x)\left[-q(x)\right] = -r(x)$$

Example: The function $f_1(x) = x$ is odd since $f_1(-x) = -x = -f_1(x)$. The function $f_2(x) = x^2$ is even since $f_2(-x) = (-x)^2 = x^2 = f_2(x)$. The sum $f_3(x) = f_1(x) + f_2(x)$ of the two functions $f_1(x)$ and $f_2(x)$ is $f_3(x) = x + x^2$. It is neither even nor odd since $f_3(-x) = -x + (-x)^2 = -x + x^2$ does not equal either $f_3(x)$ or $-f_3(x)$. The product $f_4(x) = f_1(x)\,f_2(x) = x^3$ is odd since $f_4(-x) = (-x)^3 = -x^3 = -f_4(x)$.

If an even function $p(x)$ is being integrated over a symmetric interval $-a/2 \le x \le a/2$, where $a > 0$, then we have

$$\int_{-a/2}^{a/2} p(x)\,dx = 2\int_0^{a/2} p(x)\,dx$$

since

$$\int_{-a/2}^{0} p(x)\,dx = \int_0^{a/2} p(x)\,dx$$

for an even function. The latter equality can be proved by making the change of variable $x \to -x$ in the integral on the left-hand side. By contrast, if an odd function $q(x)$ is being integrated over the interval $-a/2 \le x \le a/2$, then we have

$$\int_{-a/2}^{a/2} q(x)\,dx = \int_{-a/2}^{0} q(x)\,dx + \int_0^{a/2} q(x)\,dx = 0$$

since $q(-x) = -q(x)$.

Fourier Series

The function $f(x)$ is periodic if it is defined for all real numbers $x$ and satisfies the equality $f(x + nT) = f(x)$, where $n$ is an integer and the period $T$ is a positive
number. A periodic function may be represented by a superposition, or summation, of harmonic functions. This superposition of harmonic functions is called a Fourier expansion of the function.

Example: The trigonometric functions $\sin x$ and $\cos x$ have period $T = 2\pi$ since $\sin(x + 2n\pi) = \sin x$ and $\cos(x + 2n\pi) = \cos x$.

The function $f(x)$ with period $T$ and defined in the symmetric interval $-T/2 \le x \le T/2$ may be represented by the Fourier series

$$f(x) = a_0 + \sum_{n=1}^{\infty}\left[a_n \cos\frac{2\pi n}{T}x + b_n \sin\frac{2\pi n}{T}x\right] \qquad (9.3.1)$$

The Fourier coefficients $a_0$, $\{a_n, b_n\}$ are given by the Euler formulas:

$$a_0 = \frac{1}{T}\int_{-T/2}^{T/2} f(x)\,dx$$
$$a_n = \frac{2}{T}\int_{-T/2}^{T/2} f(x)\cos\frac{2\pi n}{T}x\,dx \qquad (9.3.2)$$
$$b_n = \frac{2}{T}\int_{-T/2}^{T/2} f(x)\sin\frac{2\pi n}{T}x\,dx$$

A perusal of the equation for the coefficient $a_0$ shows that $a_0$ is the mean or average value of $f(x)$ over the period $T$.

Example: The Fourier series of $f(x) = x$ in the interval $-\pi \le x \le \pi$ is

$$f(x) = \sum_{n=1}^{\infty} b_n \sin nx, \quad b_n = \frac{2}{n}(-1)^{n+1}$$

The Fourier coefficients $a_0$ and $\{a_n\}$ are zero because they are calculated by integrating an odd function over the symmetric interval $-\pi \le x \le \pi$. The $\{b_n\}$ terms are obtained by an integration by parts, that is,

$$b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} x\sin nx\,dx = \frac{1}{\pi}\left[-\frac{x\cos nx}{n}\right]_{-\pi}^{\pi} + \frac{1}{\pi}\int_{-\pi}^{\pi}\frac{\cos nx}{n}\,dx$$
$$= \frac{1}{\pi}\left[-\frac{2\pi}{n}\cos n\pi + \frac{1}{n^2}\sin nx\Big|_{-\pi}^{\pi}\right] = \frac{2}{n}(-1)^{n+1}$$
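The closed-form coefficients can be checked numerically (an added sketch, not from the text; the helper name `b_n` is arbitrary) by evaluating the Euler formula for $b_n$ with a trapezoidal sum:

```python
import math

def b_n(n, m=20000):
    """Trapezoidal estimate of (1/pi) * integral of x*sin(n x) over [-pi, pi]."""
    h = 2 * math.pi / m
    f = lambda x: x * math.sin(n * x)
    s = 0.5 * (f(-math.pi) + f(math.pi)) + sum(f(-math.pi + i * h) for i in range(1, m))
    return s * h / math.pi

for n in (1, 2, 3):
    closed_form = (2.0 / n) * (-1) ** (n + 1)
    print(n, b_n(n), closed_form)  # numerical and closed-form values agree
```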
Some frequently encountered integrals for a period $T = 2\pi$ are summarized below. Notice that the Kronecker delta $\delta_{mn} = 0$ when $m \ne n$ and $\delta_{mn} = 1$ when $m = n$:

$$\frac{1}{\pi}\int_{-\pi}^{\pi}\sin nt\,\sin mt\,dt = \delta_{mn}$$
$$\frac{1}{\pi}\int_{-\pi}^{\pi}\cos nt\,\cos mt\,dt = \delta_{mn}$$
$$\frac{1}{\pi}\int_{-\pi}^{\pi}\cos nt\,\sin mt\,dt = 0$$

A set of functions $\{\phi_n(t) : n = 1, 2, 3, \ldots\}$ that has the property

$$\int_a^b \phi_m(t)\,\phi_n(t)\,dt = \delta_{mn}$$

is called an orthonormal set. The function $\phi_m(t)$ is orthogonal to $\phi_n(t)$ if $m \ne n$, and the function is square integrable with unit normalization if $m = n$, that is,

$$\int_a^b \left[\phi_m(t)\right]^2 dt = 1$$
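A numerical sketch of orthonormality (added here as an illustration of Exercise 9.3, not from the text): the inner products of $\phi_n(x) = (\sin nx)/\sqrt{\pi}$ over $[-\pi, \pi]$ are approximately 1 on the diagonal and 0 off the diagonal:

```python
import math

def inner(m, n, pts=20000):
    """Trapezoidal estimate of the inner product of sin(mx)/sqrt(pi) and sin(nx)/sqrt(pi)."""
    h = 2 * math.pi / pts
    f = lambda x: math.sin(m * x) * math.sin(n * x) / math.pi
    s = 0.5 * (f(-math.pi) + f(math.pi)) + sum(f(-math.pi + i * h) for i in range(1, pts))
    return s * h

print(inner(2, 2), inner(2, 3))  # approximately 1 and 0
```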
EXERCISE 9.3: Show that $\phi_n(x) = (\sin nx)/\sqrt{\pi}$, $n = 1, 2, 3, \ldots$ is an orthonormal set of functions in the interval $-\pi \le x \le \pi$.

Notice that the integrals for the coefficients $a_0$, $\{a_n\}$ in Eq. (9.3.2) vanish when the function $f(x)$ with period $T$ is odd, and the coefficients $\{b_n\}$ in Eq. (9.3.2) are zero when $f(x)$ is even. Thus the Fourier series for an even function $p(x)$ with period $T$ is

$$p(x) = a_0 + \sum_{n=1}^{\infty} a_n \cos\frac{2\pi n}{T}x$$

with coefficients

$$a_0 = \frac{2}{T}\int_0^{T/2} p(x)\,dx, \quad a_n = \frac{4}{T}\int_0^{T/2} p(x)\cos\frac{2\pi n}{T}x\,dx$$

Similarly, the Fourier series for an odd function $q(x)$ with period $T$ is

$$q(x) = \sum_{n=1}^{\infty} b_n \sin\frac{2\pi n}{T}x$$

with coefficients

$$b_n = \frac{4}{T}\int_0^{T/2} q(x)\sin\frac{2\pi n}{T}x\,dx$$
The Fourier series of a function $f(x)$ of a real variable $x$ may be written in complex form as

$$f(x) = \sum_{n=-\infty}^{\infty} c_n e^{inx}$$

with complex coefficients

$$c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\,e^{-inx}\,dx \quad \text{for } n = 0, \pm 1, \pm 2, \ldots$$
The Fourier series of a function $f(z)$ of a complex variable $z$ can be derived using Euler's equation $e^{i\theta} = \cos\theta + i\sin\theta$.

9.4 FOURIER INTEGRAL AND FOURIER TRANSFORM

A Fourier series expansion of a function $f(x)$ of a real variable $x$ with period $T$ is defined over a finite interval $-T/2 \le x \le T/2$. If the interval becomes infinite and we sum over infinitesimals, we obtain the Fourier integral

$$f(x) = \frac{1}{(2\pi)^{1/2}}\int_{-\infty}^{\infty} A(k)\,e^{ikx}\,dk \qquad (9.4.1)$$

with the coefficients

$$A(k) = \frac{1}{(2\pi)^{1/2}}\int_{-\infty}^{\infty} f(x)\,e^{-ikx}\,dx \qquad (9.4.2)$$
Equation (9.4.2) is the Fourier transform of $f(x)$. The Fourier integral is also known as the inverse Fourier transform of $A(k)$.

Example: Suppose we have a square waveform with constant pulse height $H$ defined by

$$f(x) = \begin{cases} H, & -a \le x \le a \\ 0, & |x| > a \end{cases}$$

Then the Fourier transform is

$$A(k) = \frac{1}{(2\pi)^{1/2}}\left[\int_{-\infty}^{-a} 0\cdot e^{-ikx}\,dx + \int_{-a}^{a} H e^{-ikx}\,dx + \int_{a}^{\infty} 0\cdot e^{-ikx}\,dx\right]$$
$$= \frac{H}{\sqrt{2\pi}}\left[\frac{e^{-ikx}}{-ik}\right]_{-a}^{a} = \frac{H}{\sqrt{2\pi}}\,\frac{e^{+ika} - e^{-ika}}{ik} = \sqrt{\frac{2}{\pi}}\,Ha\,\frac{\sin ka}{ka}$$

The function $(\sin u)/u$ appears often enough in Fourier transform theory that it has been given the special name $\mathrm{sinc}\,u$ and is defined as

$$\mathrm{sinc}\,u = \frac{\sin u}{u}$$

In this notation, the Fourier transform of the square waveform is

$$A(k) = \sqrt{\frac{2}{\pi}}\,Ha\,\mathrm{sinc}\,ka$$

and its Fourier integral is

$$f(x) = \frac{Ha}{\pi}\int_{-\infty}^{\infty} \mathrm{sinc}(ka)\,e^{ikx}\,dk$$
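The key step above, $\int_{-a}^{a} H e^{-ikx}\,dx = 2H\sin(ka)/k$, can be checked numerically (an added sketch, not from the text; the odd imaginary part cancels over the symmetric interval):

```python
import cmath
import math

H, a, k = 1.0, 1.5, 2.0
n = 20000
h = 2 * a / n
f = lambda x: H * cmath.exp(-1j * k * x)
s = 0.5 * (f(-a) + f(a)) + sum(f(-a + i * h) for i in range(1, n))
numeric = s * h                       # trapezoidal value of the integral over [-a, a]
closed = 2 * H * math.sin(k * a) / k  # closed-form value 2H sin(ka)/k
print(numeric.real, closed, abs(numeric.imag))
```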
EXERCISE 9.4: Determine the Fourier transform of the harmonic function

$$g(t) = \begin{cases} \exp(i\omega_0 t), & -T \le t \le T \\ 0, & |t| > T \end{cases}$$
Dirac Delta Function

If we substitute the Fourier transform $A(k)$ into the Fourier integral for $f(x)$, we obtain

$$f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\left[\int_{-\infty}^{\infty} e^{-iky} f(y)\,dy\right] e^{ikx}\,dk$$

The double integral can be rearranged to give

$$f(x) = \int_{-\infty}^{\infty} \delta(x - y)\,f(y)\,dy \qquad (9.4.3)$$

where the mathematical distribution

$$\delta(x - y) \equiv \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{ik(x-y)}\,dk \qquad (9.4.4)$$
is called the Dirac delta function. The Dirac delta function is well defined only within the context of integration. It is widely used as a mathematical representation of a point source or sink. Alternative Forms of the Fourier Integral and Transform The notation in Eqs. (9.4.1) and (9.4.2) implies that the Fourier integral f (x) and Fourier transform A(k) are defined in the space (x) and wavenumber (k) domains, respectively. In the time (t) and frequency (v) domains, Eqs. (9.4.1) and (9.4.2) have the forms ð1 1 f (t) ¼ A(v)eivt dv (9:4:5) (2p)1=2 1 and A(v) ¼
1 (2p)1=2
ð1
f (t)eivt dt
(9:4:6)
1
Suppose the Fourier transform A(v) is written as the product A(v) ¼ F(v)eif(v)
(9:4:7)
The term F(v) is called the frequency spectrum of f (t) and f(v) is called the phase spectrum of f (t). The placement of the factor (2p)1=2 in Eqs. (9.4.1), (9.4.2), (9.4.5), and (9.4.6) is a matter of choice as long as the Fourier integral 1 f (t) ¼ (2p)
ð1 ð1 f (s)e 1
ivs
ds eivt dv
(9:4:8)
1
is satisfied. The variable s in Eq. (9.4.8) is a dummy variable of integration. An alternative definition of the Fourier integral is g(t) ¼
1 2p
ð1
B(v)eivt dv
(9:4:9)
1
with the corresponding Fourier transform ð1 B(v) ¼ g(t)eivt dt 1
(9:4:10)
SPECIAL INTEGRALS
127
Equation (9.4.10) is the inverse Fourier transform of B(v). The Fourier transform in Eq. (9.4.10) may be written in the notation ð1 F {g(t)} ¼ B(v) ¼ g(t)eivt dt (9:4:11) 1
In this notation, the inverse Fourier transform in Eq. (9.4.9) is ð 1 1 F 1 {B(v)} ¼ g(t) ¼ B(v)eivt dv 2p 1
(9:4:12)
EXERCISE 9.5: Show that Eq. (9.4.8) is valid. Hint: You should use the Dirac delta function in Eq. (9.4.4). Convolution Integral Consider a double integral of two functions whose variables of integration x,y are constrained by a delta function d(K 2 x 2 y) such that ðð Icon ¼ f (x) g(y) d(K x y) dx dy The double integral can be reduced to a single integral by integrating over the delta function to find ð ð Icon ¼ f (x) g(K x) dx ¼ f (K y) g(y) dy (9:4:13) The integrals in Eq. (9.4.13) are known as convolution integrals and are often written in the form Icon ¼ f (x) 8 g(K x) ¼ f (K y) 8 g(y)
(9:4:14)
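A discrete analog makes the delta-function constraint in the convolution concrete (an added sketch with arbitrary sample values, not from the text): summing over all index pairs with $x + y = K$ gives the same result as the single convolution sum:

```python
# Discrete analog of Eq. (9.4.13): the double sum restricted to x + y == K
# equals the single convolution sum
f = {0: 1.0, 1: 2.0, 2: -1.0}
g = {0: 0.5, 1: 3.0, 2: 1.0}
K = 2

double_sum = sum(f[x] * g[y] for x in f for y in g if x + y == K)
single_sum = sum(f[x] * g[K - x] for x in f if (K - x) in g)
print(double_sum, single_sum)  # both give the same value
```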
EXERCISE 9.6: Show that the Fourier transform of the product of two functions $g_1(t)$ and $g_2(t)$ can be written as a convolution integral. Hint: You should use Eqs. (9.4.11) and (9.4.12) and the Dirac delta function in Eq. (9.4.4).

9.5 TIME SERIES AND Z TRANSFORM
Time series and Z transforms are used in a variety of real-world applications, such as signal processing and seismic analysis [Collins, 1999; Yilmaz, 2001; Telford et al., 1990]. The concepts of time series and Z transforms are reviewed here, as is the relationship between the Z transform and the Fourier transform.

Time Series and Digital Functions

Consider a function $f(t)$ that is a continuous and single-valued function of time $t$. Time is viewed here as a continuous variable. A time series is a series of values that is obtained by sampling the function $f(t)$ at regular time intervals. The resulting time series is a sequence with each value in the sequence corresponding to a discrete value of $t$.

Z Transforms

Telford et al. [1990] call the sequence corresponding to a time series a digital function. For example, a digital function $h_t$ may be written as the sequence

$$h_t = \{h_n\} = \{\ldots, h_{-2}, h_{-1}, h_0, h_1, h_2, \ldots\} \qquad (9.5.1)$$

where $n$ is an integer in the range $-\infty < n < \infty$. The value of the continuous variable $t$ that corresponds to $n$ is

$$t = n\,\Delta t \qquad (9.5.2)$$

where $\Delta t$ is a constant increment. The digital function $h_t$ is a causal digital function if $\{h_n = 0\}$ for all $n < 0$.

Define the Z transform of a digital function $g_t = \{g_n\}$ for the sequence $\{g_n\} = \{\ldots, g_{-2}, g_{-1}, g_0, g_1, g_2, \ldots\}$ as

$$g(z) = \sum_{n=-\infty}^{\infty} g_n z^n \qquad (9.5.3)$$

If $h_t = \{h_n\}$ is a causal digital function, the elements of the sequence $h_t = \{h_n\}$ for all $n < 0$ are zero and the Z transform is

$$h(z) = \sum_{n=0}^{\infty} h_n z^n \qquad (9.5.4)$$

The Z transform can be used to represent a sequence that is obtained by sampling a continuous function.

EXERCISE 9.7: What are the Z transforms of the causal digital functions $h_1 = \{1, 2, 1, 0, 1\}$ and $h_2 = \{0.6, 1.2, 2.4, 0.7, 3.8\}$?
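Evaluating a causal Z transform is a polynomial evaluation, which can be sketched as follows (an added illustration related to Exercise 9.7, not from the text; the function name is arbitrary):

```python
def z_transform(seq, z):
    """Evaluate the causal Z transform h(z) = sum_n h_n z**n for a finite sequence."""
    return sum(h_n * z ** n for n, h_n in enumerate(seq))

h1 = [1, 2, 1, 0, 1]
# h1(z) = 1 + 2z + z^2 + z^4; at z = 1 this is just the sum of the samples
print(z_transform(h1, 1.0))  # 5.0
print(z_transform(h1, 2.0))  # 1 + 4 + 4 + 0 + 16 = 25.0
```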
Alternative Forms of the Z Transform

The definition of the Z transform given in Eq. (9.5.3) is one of two definitions in the literature. We can see the difference between alternative forms of the Z transform by considering the relationship between the Z transform and the Fourier transform. Consider a function of the form

$$g(t) = \sum_{n=-\infty}^{\infty} g_n\,\delta(t - n\,\Delta t) \qquad (9.5.5)$$

where $\delta(t - n\,\Delta t)$ is a Dirac delta function and $\Delta t$ is a constant increment. Substituting Eq. (9.5.5) into the Fourier transform given by Eq. (9.4.11) gives

$$B(\omega) = \int_{-\infty}^{\infty}\left[\sum_{n=-\infty}^{\infty} g_n\,\delta(t - n\,\Delta t)\right] e^{-i\omega t}\,dt \qquad (9.5.6)$$

Interchanging the sum and integral lets us write

$$B(\omega) = \sum_{n=-\infty}^{\infty}\int_{-\infty}^{\infty} g_n\,\delta(t - n\,\Delta t)\,e^{-i\omega t}\,dt \qquad (9.5.7)$$

Evaluating the integral gives

$$B(\omega) = \sum_{n=-\infty}^{\infty} g_n\,e^{-i\omega n\,\Delta t} \qquad (9.5.8)$$

Two forms of the Z transform can be obtained from Eq. (9.5.8). If we define the variable

$$z_1 = e^{-i\omega\,\Delta t} \qquad (9.5.9)$$

and substitute into Eq. (9.5.8), we get

$$B(z_1) = \sum_{n=-\infty}^{\infty} g_n z_1^n \qquad (9.5.10)$$

Equation (9.5.10) is the definition of the Z transform used in seismic analysis [Yilmaz, 2001; Telford et al., 1990]. By contrast, if we define the variable

$$z_2 = e^{i\omega\,\Delta t} \qquad (9.5.11)$$

and substitute into Eq. (9.5.8), we get

$$B(z_2) = \sum_{n=-\infty}^{\infty} g_n z_2^{-n} \qquad (9.5.12)$$

Equation (9.5.12) is the definition of the Z transform used by authors such as Zwillinger [2003] and Collins [1999]. If we make the variable substitution

$$s = z_2^{-1} \qquad (9.5.13)$$

in Eq. (9.5.12), we obtain

$$B(s) = \sum_{n=-\infty}^{\infty} g_n s^n \qquad (9.5.14)$$

Equation (9.5.14) is the generating function of the sequence $\{g_n\}$ [Zwillinger, 2003].
9.6 LAPLACE TRANSFORM

The Laplace transform of a function $f(x)$ of a variable $x$ is defined as the integral

$$F(s) = \mathcal{L}\{f(x)\} = \int_0^{\infty} e^{-sx} f(x)\,dx \qquad (9.6.1)$$

where $s$ is a real, positive parameter that serves as a supplementary variable. The Laplace transform of the sum or difference of two functions $f(x)$, $g(x)$ is

$$\mathcal{L}\{f(x) \pm g(x)\} = \int_0^{\infty} e^{-sx}\left[f(x) \pm g(x)\right] dx = \int_0^{\infty} e^{-sx} f(x)\,dx \pm \int_0^{\infty} e^{-sx} g(x)\,dx$$

Using the definition of the Laplace transform, we find

$$\mathcal{L}\{f(x) \pm g(x)\} = \mathcal{L}\{f(x)\} \pm \mathcal{L}\{g(x)\} \qquad (9.6.2)$$

The inverse of the Laplace transform is

$$\mathcal{L}^{-1}\{F(s)\} = f(x) \qquad (9.6.3)$$

Example: The Laplace transform of $f(x) = e^{ax}$ is

$$F(s) = \mathcal{L}\{e^{ax}\} = \int_0^{\infty} e^{-sx} e^{ax}\,dx = \int_0^{\infty} e^{-(s-a)x}\,dx$$

This integral is called an improper integral because one of the integration limits is $\infty$. To evaluate the improper integral we assume $s > a$ to find

$$\mathcal{L}\{e^{ax}\} = \left[\frac{-e^{-(s-a)x}}{s-a}\right]_0^{\infty} = \frac{1}{s-a}$$

Several examples of Laplace transforms are given below, where $a$ is a constant and $n$ is an integer.

LAPLACE TRANSFORMS
f(x) = L^{-1}{F(s)}     F(s) = L{f(x)}
a                       a/s
x^n                     n!/s^(n+1)
e^(ax)                  1/(s - a)
sin ax                  a/(s^2 + a^2)
cos ax                  s/(s^2 + a^2)
sinh ax                 a/(s^2 - a^2)
cosh ax                 s/(s^2 - a^2)
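Entries in the table can be checked by truncating the improper Laplace integral at a large upper limit (an added sketch with arbitrary helper names and parameter values, not from the text):

```python
import math

def laplace(f, s, upper=60.0, n=200000):
    """Trapezoidal approximation of the Laplace transform integral, truncated at x = upper."""
    h = upper / n
    g = lambda x: math.exp(-s * x) * f(x)
    total = 0.5 * (g(0.0) + g(upper)) + sum(g(i * h) for i in range(1, n))
    return total * h

s, a = 2.0, 1.0
lap_exp = laplace(lambda x: math.exp(a * x), s)  # compare with 1/(s - a) = 1.0
lap_sin = laplace(lambda x: math.sin(a * x), s)  # compare with a/(s^2 + a^2) = 0.2
print(lap_exp, lap_sin)
```

The truncation is harmless here because the integrand decays like $e^{-(s-a)x}$, so the tail beyond the cutoff is negligible.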
Laplace Transforms of Derivatives

The Laplace transform of the first derivative $f'(x)$ is

$$\mathcal{L}\left\{\frac{df(x)}{dx}\right\} = \int_0^{\infty} e^{-sx}\,\frac{df(x)}{dx}\,dx$$

Performing an integration by parts of the form $\int u\,dv = uv - \int v\,du$ gives

$$\mathcal{L}\{f'(x)\} = \left[e^{-sx} f(x)\right]_0^{\infty} + s\int_0^{\infty} e^{-sx} f(x)\,dx$$

We assume $e^{-sx} f(x) \to 0$ as $x \to \infty$ so that

$$\mathcal{L}\{f'(x)\} = -f(0) + s\,\mathcal{L}\{f(x)\} \qquad (9.6.4)$$

where we have used the definition of the Laplace transform. The Laplace transform of $f'(x)$ can be used to calculate higher-order derivatives. For example, to find the Laplace transform of the second-order derivative $\mathcal{L}\{f''(x)\}$, let $g(x) = f'(x)$ so that $g'(x) = f''(x)$. Then we have by repeated application of $\mathcal{L}\{f'(x)\}$ the result

$$\mathcal{L}\{g'(x)\} = s\,\mathcal{L}\{g(x)\} - g(0) = s\left[s\,\mathcal{L}\{f(x)\} - f(0)\right] - f'(0)$$

or

$$\mathcal{L}\{f''(x)\} = s^2\,\mathcal{L}\{f(x)\} - s\,f(0) - f'(0) \qquad (9.6.5)$$
Laplace Transform of an Integral

The Laplace transform of an integral

$$h(t) = \int_0^t i(\tau)\,d\tau \qquad (9.6.6)$$

is found by first writing the Laplace transforms of the functions $i(t)$ and $h(t)$ as

$$\mathcal{L}\{i(t)\} = I(s) \qquad (9.6.7)$$

and

$$\mathcal{L}\{h(t)\} = H(s) \qquad (9.6.8)$$

Equation (9.6.6) is equivalent to

$$\frac{dh(t)}{dt} = i(t) \qquad (9.6.9)$$

given the initial condition $h(0) = 0$ corresponding to the lower limit of the integral in Eq. (9.6.6). The Laplace transform of Eq. (9.6.9) given $h(0) = 0$ is found from Eq. (9.6.4) to be

$$\mathcal{L}\left\{\frac{dh(t)}{dt}\right\} = s\,H(s) \qquad (9.6.10)$$

We rearrange Eq. (9.6.10) and use Eqs. (9.6.7) and (9.6.8) to find

$$H(s) = \frac{\mathcal{L}\{dh(t)/dt\}}{s} = \frac{\mathcal{L}\{i(t)\}}{s} = \frac{I(s)}{s} \qquad (9.6.11)$$

Substituting Eq. (9.6.6) into Eq. (9.6.8) and using the result in Eq. (9.6.11) gives the Laplace transform of the integral in Eq. (9.6.6) as

$$\mathcal{L}\left\{\int_0^t i(\tau)\,d\tau\right\} = \frac{I(s)}{s} = \frac{\mathcal{L}\{i(t)\}}{s} \qquad (9.6.12)$$
EXERCISE 9.8: Verify the Laplace transform $\mathcal{L}\{a f(t)\} = a\,\mathcal{L}\{f(t)\}$, where $a$ is a complex constant and $f(t)$ is a function of the variable $t$.

EXERCISE 9.9: Verify the inverse Laplace transform $\mathcal{L}^{-1}\{a F(s)\} = a\,\mathcal{L}^{-1}\{F(s)\}$, where $a$ is a complex constant and $F(s) = \mathcal{L}\{f(t)\}$ is the Laplace transform of a function $f(t)$ of the variable $t$.

EXERCISE 9.10: What is the Laplace transform of the complex function $e^{i\omega t}$ with variable $t$ and constant $\omega$? Use the Laplace transform of $e^{i\omega t}$ and Euler's equation $e^{i\omega t} = \cos\omega t + i\sin\omega t$ to verify the Laplace transforms $\mathcal{L}\{\cos\omega t\} = s/(s^2 + \omega^2)$ and $\mathcal{L}\{\sin\omega t\} = \omega/(s^2 + \omega^2)$.
CHAPTER 10
ORDINARY DIFFERENTIAL EQUATIONS
Mathematical models of natural phenomena usually include terms for describing the change of one or more variables resulting from the change of one or more other variables. If the equation describing such a system has the form $F(x, y(x), dy(x)/dx, \ldots, d^m y(x)/dx^m) \equiv F(\cdot) = 0$, where $x$ is the single independent variable and $y$ is a function of $x$, then $F(\cdot) = 0$ is an $m$th order ordinary differential equation (ODE) with solution $y(x)$. In this chapter we review the solution of first-order ODEs and a procedure for converting higher-order ODEs to systems of first-order ODEs. The latter procedure makes it possible to study the stability of the original ODE without actually finding a solution. Several ODE solution techniques are presented in Chapter 11.
10.1 FIRST-ORDER ODE

A linear, first-order ODE has the form

$$\frac{dy}{dx} + f(x)\,y = r(x) \qquad (10.1.1)$$

where $x$ is an independent variable, $y$ is a dependent variable, and $f(x)$ and $r(x)$ are arbitrary functions of $x$. Equation (10.1.1) is linear because the variable $y$ appears only with a power of 1, and it is first-order because the highest-order derivative is
first-order. The order of the ODE is determined by the highest-order derivative. Two special cases depend on the value of $r(x)$. In particular, Eq. (10.1.1) is homogeneous if $r(x) = 0$, and Eq. (10.1.1) is nonhomogeneous if $r(x) \ne 0$.

Quadrature Solution

Consider the homogeneous equation

$$\frac{dy_h}{dx} + f(x)\,y_h = 0 \qquad (10.1.2)$$

We separate variables by collecting explicit functions of $x$ and $y$ on opposite sides of the equation to get

$$\frac{dy_h}{y_h} = -f(x)\,dx$$

Integrating gives

$$\ln y_h = -\int f(x)\,dx + c_1$$

where $c_1$ is the integration constant. The formal solution of the linear, first-order, homogeneous ODE is

$$y_h = c_2\,e^{-\int f(x)\,dx}, \quad c_2 = \pm e^{c_1} \qquad (10.1.3)$$

Equation (10.1.3) is called the quadrature solution because it is expressed in terms of the quadrature, or integral, of $f(x)$. The final solution of the ODE requires solving the integral in Eq. (10.1.3).

Example: Suppose $f(x) = x$ and we want to find the solution of the homogeneous ODE

$$\frac{dy_h}{dx} + x\,y_h = 0$$

By Eq. (10.1.3) we have

$$y_h = c_2\,e^{-\int f(x)\,dx} = c_2\,e^{-\int x\,dx}$$

Performing the integration gives

$$y_h = c_2\,e^{-(x^2/2) + c_3} = c_2\,e^{c_3}\,e^{-x^2/2}$$

where $c_3$ is the integration constant. The term $c_2 e^{c_3}$ is a constant term because $c_2$, $c_3$, and hence $e^{c_3}$, are constants. The term $c_2 e^{c_3}$ can be replaced by a new
constant, $c_4$. Using $c_4$ lets us simplify the solution; thus

$$y_h = c_4\,e^{-x^2/2}$$

To verify our solution, we calculate the derivative

$$\frac{dy_h}{dx} = c_4\,e^{-x^2/2}\,\frac{d(-x^2/2)}{dx} = y_h\cdot\frac{1}{2}(-2x) = -x\,y_h$$

This reproduces our original homogeneous ODE. Let us now consider the nonhomogeneous form of Eq. (10.1.1). To solve Eq. (10.1.1), we first multiply by $e^h$, where

$$h = \int f(x)\,dx$$

to get
$$e^h\frac{dy}{dx} + e^h f\,y = e^h r$$

The new ODE can be rearranged by noting that the left-hand side has the form

$$\frac{d}{dx}\left(e^h y\right) = y\,\frac{de^h}{dx} + e^h\frac{dy}{dx} = y\,e^h\frac{dh}{dx} + e^h\frac{dy}{dx} = y f e^h + e^h\frac{dy}{dx} \qquad (10.1.4)$$

Therefore Eq. (10.1.4) becomes

$$\frac{d}{dx}\left(e^h y\right) = r\,e^h$$

Separating variables gives a relationship between differentials:

$$d(e^h y) = r\,e^h\,dx$$

Upon integration we find

$$y\,e^h = \int e^h r\,dx + C$$

where $C$ is the constant of integration. Rearranging gives the general solution to the nonhomogeneous, linear, first-order ODE as

$$y(x) = e^{-h}\left[\int e^h r\,dx + C\right], \quad h = \int f(x)\,dx \qquad (10.1.5)$$
EXERCISE 10.1: Solve

$$\frac{dy}{dx} - y = e^{2x}$$

In general, an $m$th order ODE has $m$ unspecified constants in the solution. These constants are determined by specifying $m$ additional constraints on the solution. The constraint imposed on the solution of a first-order ODE is usually called the initial condition if the independent variable is time. If the independent variable is a space coordinate, the constraint is called a boundary condition. An example of an initial value problem is a first-order ODE with the specified initial condition $y(x_0) = y_0$.

EXERCISE 10.2: Solve

$$\frac{dy}{dx} + y\tan x = \sin 2x, \quad y(0) = 1$$
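As an added numerical sketch (not from the text), the quadrature solution of the earlier example can be verified directly: $y_h = e^{-x^2/2}$ (taking $c_4 = 1$) should make the residual of $dy_h/dx + x\,y_h = 0$ vanish:

```python
import math

y = lambda x: math.exp(-x * x / 2)  # quadrature solution of dy/dx + x*y = 0 with c4 = 1
eps = 1e-6
residuals = []
for x in (0.5, 1.0, 2.0):
    dydx = (y(x + eps) - y(x - eps)) / (2 * eps)  # central-difference derivative
    residuals.append(dydx + x * y(x))             # should be near zero
print(residuals)
```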
Systems of First-Order ODEs

Consider the following system of equations:

$$\frac{dx_1}{dt} = f_1(x_1, \ldots, x_n, t)$$
$$\frac{dx_2}{dt} = f_2(x_1, \ldots, x_n, t)$$
$$\vdots$$
$$\frac{dx_n}{dt} = f_n(x_1, \ldots, x_n, t)$$

where the variables $\{x_i : i = 1, \ldots, n\}$ depend on the independent variable $t$. This set of equations may be written in matrix form as

$$\frac{d}{dt}\mathbf{x} = \mathbf{f}(\mathbf{x}, t) \qquad (10.1.6)$$

Equation (10.1.6) represents a system of equations that are called nonautonomous because $t$ appears explicitly in $\mathbf{f}$. An autonomous system has the form

$$\frac{d}{dt}\mathbf{y} = \mathbf{f}(\mathbf{y}) \qquad (10.1.7)$$
If an initial vector $\mathbf{x}(t_0) = \mathbf{x}_0$ or $\mathbf{y}(t_0) = \mathbf{y}_0$ is specified, then we have an initial value problem. A nonautonomous, initial value problem can be made into an autonomous problem by adding the first-order ODE

$$\frac{dx_{n+1}}{dt} = 1, \quad x_{n+1}(t = 0) = t_0 \qquad (10.1.8)$$

with the initial condition as shown. Equation (10.1.8) has the solution $x_{n+1}(t) = t + t_0$ and Eq. (10.1.6) becomes

$$\frac{d}{dt}\mathbf{x}_A = \mathbf{f}_A(\mathbf{x}_A)$$

where

$$\mathbf{x}_A = \begin{bmatrix} \mathbf{x} \\ t \end{bmatrix}, \quad \mathbf{f}_A = \begin{bmatrix} \mathbf{f}(\mathbf{x}_A) \\ 1 \end{bmatrix}$$

are $n + 1$ column vectors.

EXERCISE 10.3: Convert the nonautonomous equation

$$\frac{dx_1}{dt} = 1 - t + 4x_1, \quad x_1(0) = 1$$

to an autonomous system.

Runge-Kutta Method

Systems of linear ODEs may be solved numerically using techniques such as the Runge-Kutta fourth-order numerical algorithm [Press et al., 1992, Sec. 16.1; Edwards and Penney, 1989, Sec. 6.4]. Suppose the initial conditions are $\mathbf{x}(t_0) = \mathbf{x}_0$ at $t = t_0$ for the system of equations

$$\frac{d}{dt}\mathbf{x} = \mathbf{f}(\mathbf{x}, t)$$

Values of $\mathbf{x}$ as functions of $t$ are found by incrementally stepping forward in $t$. The fourth-order Runge-Kutta method calculates new values $\mathbf{x}_{n+1}$ from old values $\mathbf{x}_n$ using the algorithm

$$\mathbf{x}_{n+1} = \mathbf{x}_n + \frac{h}{6}\left[\mathbf{w}_1 + 2\mathbf{w}_2 + 2\mathbf{w}_3 + \mathbf{w}_4\right] + O(h^5)$$
where $h$ is an incremental step size, $0 < h < 1$. The terms of the algorithm are

$$t_{n+1} = t_n + h$$
$$\mathbf{w}_1 = \mathbf{f}(\mathbf{x}_n, t_n)$$
$$\mathbf{w}_2 = \mathbf{f}\left(\mathbf{x}_n + \tfrac{1}{2}h\mathbf{w}_1,\; t_n + \tfrac{1}{2}h\right)$$
$$\mathbf{w}_3 = \mathbf{f}\left(\mathbf{x}_n + \tfrac{1}{2}h\mathbf{w}_2,\; t_n + \tfrac{1}{2}h\right)$$
$$\mathbf{w}_4 = \mathbf{f}(\mathbf{x}_n + h\mathbf{w}_3,\; t_n + h)$$

The calculation begins at $n = 0$ and proceeds iteratively. At the end of each step, the new values are defined as present values at the $n$th level and another iteration is performed.

Example: Suppose we want to solve a system of two first-order ODEs of the form

$$\frac{dx_1}{dt} = x_2, \quad \frac{dx_2}{dt} = -x_1 \quad (10.1.9)$$

with initial conditions

$$\mathbf{x}(t_0) = \begin{bmatrix} x_{10} \\ x_{20} \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \quad (10.1.10)$$

The column vectors $\mathbf{x}$ and $\mathbf{f}$ are given by

$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}, \quad \mathbf{f} = \begin{bmatrix} f_1 \\ f_2 \end{bmatrix} = \begin{bmatrix} x_2 \\ -x_1 \end{bmatrix}$$

The matrix equation to be solved for $\mathbf{x}_{n+1}$ given $\mathbf{x}_n$ and $t$ is

$$\mathbf{x}_{n+1} = \mathbf{x}_n + \frac{h}{6}(\mathbf{a} + 2\mathbf{b} + 2\mathbf{c} + \mathbf{d})$$

where $t_{n+1} = t_n + h$ and

$$\mathbf{a} = \mathbf{f}(\mathbf{x}_n, t_n)$$
$$\mathbf{b} = \mathbf{f}\left(\mathbf{x}_n + \tfrac{h}{2}\mathbf{a},\; t_n + \tfrac{h}{2}\right)$$
$$\mathbf{c} = \mathbf{f}\left(\mathbf{x}_n + \tfrac{h}{2}\mathbf{b},\; t_n + \tfrac{h}{2}\right)$$
$$\mathbf{d} = \mathbf{f}(\mathbf{x}_n + h\mathbf{c},\; t_n + h)$$
A FORTRAN 90/95 program for solving this problem is as follows:

!
!     RUNGE-KUTTA ALGORITHM IN FORTRAN 90/95
!
!     RK_4th - 4TH ORDER RUNGE-KUTTA ALGORITHM FOR A SYSTEM
!     WITH TWO 1ST ORDER DIFFERENTIAL EQUATIONS
!     J.R. FANCHI, MAY 2003
!
!     DEFINE RHS FUNCTIONS
!
      FN1(X1,X2,T)=X2
      FN2(X1,X2,T)=-X1
!
!     INITIAL CONDITIONS AND R-K ALGORITHM CONTROL
!     PARAMETERS
!
      DATA T0,X10,X20/0,0,1/
      DATA H,NS/0.1,20/
!
!     INITIALIZE VARIABLES
!
      X1=X10
      X2=X20
      T=T0
      OPEN(UNIT=6,FILE='RK_4th.OUT')
      WRITE(6,10)
   10 FORMAT(5X,18X,'NUMERICAL',10X,'ANALYTICAL', &
        /2X,'STEP     T        X1        X2      X1AN      X2AN')
!
!     R-K ITERATION
!
      DO I=1,NS
        A1=FN1(X1,X2,T)
        A2=FN2(X1,X2,T)
        B1=FN1(X1+(H*A1/2),X2+(H*A2/2),T+(H/2))
        B2=FN2(X1+(H*A1/2),X2+(H*A2/2),T+(H/2))
        C1=FN1(X1+(H*B1/2),X2+(H*B2/2),T+(H/2))
        C2=FN2(X1+(H*B1/2),X2+(H*B2/2),T+(H/2))
        D1=FN1(X1+(H*C1),X2+(H*C2),T+H)
        D2=FN2(X1+(H*C1),X2+(H*C2),T+H)
!       UPDATE X1, X2, T
        X1=X1+(H/6)*(A1+2*B1+2*C1+D1)
        X2=X2+(H/6)*(A2+2*B2+2*C2+D2)
        T=T+H
!       ANALYTICAL SOLUTION
        X1AN=SIN(T)
        X2AN=COS(T)
!       OUTPUT RESULTS
        WRITE(6,20) I,T,X1,X2,X1AN,X2AN
   20   FORMAT(1X,I5,5(2X,F8.6))
      ENDDO
!
      STOP
      END
Given the initial conditions in Eq. (10.1.10), the system in Eq. (10.1.9) has the analytic solution $x_1(t) = \sin t$ and $x_2(t) = \cos t$. A comparison of analytical and numerical results is shown as follows for $h = 0.1$. Notice the agreement between the analytical and numerical results. For more discussion of this example, see Edwards and Penney [1989, Sec. 6.4]. Additional examples are presented in texts such as Chapra and Canale [2002], Kreyszig [1999], and DeVries [1994].
                 Numerical                Analytical
     t          x1         x2           x1         x2
  0.100000   0.099833   0.995004     0.099833   0.995004
  0.200000   0.198669   0.980067     0.198669   0.980067
  0.300000   0.295520   0.955337     0.295520   0.955337
  0.400000   0.389418   0.921061     0.389418   0.921061
  0.500000   0.479425   0.877583     0.479426   0.877583
  0.600000   0.564642   0.825336     0.564642   0.825336
  0.700000   0.644217   0.764843     0.644218   0.764842
  0.800000   0.717356   0.696707     0.717356   0.696707
  0.900000   0.783326   0.621611     0.783327   0.621610
  1.000000   0.841470   0.540303     0.841471   0.540302
  1.100000   0.891207   0.453597     0.891207   0.453596
  1.200000   0.932039   0.362359     0.932039   0.362358
  1.300000   0.963558   0.267500     0.963558   0.267499
  1.400000   0.985449   0.169968     0.985450   0.169967
  1.500000   0.997495   0.070738     0.997495   0.070737
  1.600000   0.999574  -0.029198     0.999574  -0.029200
  1.700000   0.991665  -0.128843     0.991665  -0.128845
  1.800000   0.973848  -0.227201     0.973848  -0.227202
  1.900000   0.946301  -0.323288     0.946300  -0.323290
  2.000000   0.909298  -0.416145     0.909297  -0.416147
The fourth-order Runge-Kutta method is a robust method; that is, it is capable of solving a wide range of problems, but it is not necessarily the most accurate method for all applications. Predictor-corrector methods, for example, are often used in astrophysics. Specialized numerical methods references should be consulted for further details.

EXERCISE 10.4: Convert the FORTRAN 90/95 program for the Runge-Kutta algorithm in the above example to the C++ programming language. Show that the numerical and analytical results agree.
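Exercise 10.4 asks for a C++ translation; as an intermediate check, here is a sketch of the same fourth-order Runge-Kutta loop in Python (the step size $h = 0.1$, 20 steps, and initial conditions match the FORTRAN example; the helper names `f` and `rk4_step` are our own):

```python
# Fourth-order Runge-Kutta for dx1/dt = x2, dx2/dt = -x1,
# x1(0) = 0, x2(0) = 1, mirroring the FORTRAN example above.
import math

def f(x, t):
    """Right-hand side of the system; returns (f1, f2)."""
    x1, x2 = x
    return (x2, -x1)

def rk4_step(x, t, h):
    """Advance the state x from t to t + h with one RK4 step."""
    w1 = f(x, t)
    w2 = f(tuple(xi + 0.5 * h * wi for xi, wi in zip(x, w1)), t + 0.5 * h)
    w3 = f(tuple(xi + 0.5 * h * wi for xi, wi in zip(x, w2)), t + 0.5 * h)
    w4 = f(tuple(xi + h * wi for xi, wi in zip(x, w3)), t + h)
    return tuple(
        xi + (h / 6.0) * (a + 2 * b + 2 * c + d)
        for xi, a, b, c, d in zip(x, w1, w2, w3, w4)
    )

x, t, h = (0.0, 1.0), 0.0, 0.1
for _ in range(20):          # 20 steps, as in the FORTRAN program
    x = rk4_step(x, t, h)
    t += h

# Compare with the analytic solution x1 = sin t, x2 = cos t at t = 2.
print(abs(x[0] - math.sin(t)), abs(x[1] - math.cos(t)))
```

After 20 steps ($t = 2$) the numerical values agree with $\sin t$ and $\cos t$ to roughly the six decimal places shown in the table above.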
10.2 HIGHER-ORDER ODE

An $n$th order ODE may be written in the form

$$\frac{d^n y}{dt^n} = F\left(y, \frac{dy}{dt}, \ldots, \frac{d^{n-1} y}{dt^{n-1}}\right)$$

Suppose the initial conditions have the form

$$y(t_0), \quad \left.\frac{dy}{dt}\right|_{t_0}, \ldots, \quad \left.\frac{d^{n-1} y}{dt^{n-1}}\right|_{t_0}$$

If we introduce the variables

$$x_1(t) = y, \quad x_2(t) = \frac{dy}{dt}, \ldots, \quad x_n(t) = \frac{d^{n-1} y}{dt^{n-1}}$$

we can write the $n$th order ODE as a system of first-order ODEs such that

$$\frac{dx_1}{dt} = x_2, \quad \frac{dx_2}{dt} = x_3, \ldots, \quad \frac{dx_n}{dt} = F(x_1, \ldots, x_n)$$

The corresponding initial conditions are

$$x_1(t_0) = y(t_0), \quad x_2(t_0) = \left.\frac{dy}{dt}\right|_{t_0}, \ldots, \quad x_n(t_0) = \left.\frac{d^{n-1} y}{dt^{n-1}}\right|_{t_0}$$
EXERCISE 10.5: Write the third-order ODE

$$\frac{d^3 y}{dt^3} = 2y\left(\frac{dy}{dt}\right)^2 + 4y$$

as a first-order system.

EXERCISE 10.6: The linear harmonic oscillator is governed by

$$\frac{d^2 y}{dt^2} + y = 0, \quad y(0) = a, \quad \left.\frac{dy}{dt}\right|_0 = b$$

Convert this second-order ODE to a first-order system.

One practical advantage of transforming an $n$th order ODE into a system of first-order ODEs is that the methods for solving the first-order system become applicable to solving the $n$th order ODE. Thus a numerical method like the Runge-Kutta method can be used to solve an $n$th order ODE after it has been transformed into a system of first-order ODEs. The Runge-Kutta example presented in Section 10.1 illustrates this technique for the second-order ODE $y'' + y = 0$ and initial conditions $y(0) = 0$, $y'(0) = 1$. For more details, see Edwards and Penney [1989, Sec. 6.4].

10.3 STABILITY ANALYSIS
Many realistic models of physical systems require mathematics that are intractable, yet we still would like information about the system. One of the most important pieces of information of interest to us is the stability of a dynamical system. A dynamical system is a system that changes with time $t$. We show here how to analyze the stability of a dynamical system [Beltrami, 1987] described by the second-order differential equation

$$\frac{d^2 L}{dt^2} + g_1\frac{dL}{dt} + g_0 L = 0 \quad (10.3.1)$$

The coefficients $\{g_i\}$ may be arbitrary functions of $t$. The second-order differential equation is transformed to a set of first-order differential equations by defining

$$x_1 = L, \quad x_2 = \frac{dL}{dt}$$

The corresponding set of first-order differential equations is

$$\frac{dx_1}{dt} = x_2 \equiv f_1, \quad \frac{dx_2}{dt} = -g_0 x_1 - g_1 x_2 \equiv f_2 \quad (10.3.2)$$
Equation (10.3.2) can be cast in the matrix form

$$\dot{\mathbf{x}} = A\mathbf{x} \equiv \mathbf{f} \quad (10.3.3)$$

where

$$\dot{\mathbf{x}} = \frac{d}{dt}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}, \quad \mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}, \quad \mathbf{f} = \begin{bmatrix} f_1 \\ f_2 \end{bmatrix} = \begin{bmatrix} x_2 \\ -g_0 x_1 - g_1 x_2 \end{bmatrix}, \quad A = \begin{bmatrix} 0 & 1 \\ -g_0 & -g_1 \end{bmatrix} \quad (10.3.4)$$

The solution $\mathbf{x}$ of Eq. (10.3.3) represents the state of the dynamical system as $t$ changes. An equilibrium state $\mathbf{x}_e$ is a state of the system which satisfies the equation

$$\dot{\mathbf{x}}_e = 0 = \mathbf{f}$$

A plot that may reveal the qualitative behavior of the system is the phase diagram. It is a plot of $x$ versus $dx/dt$. Phase diagrams are especially graphic when $x$ is a one-dimensional variable so that the phase space composed of $\{x, dx/dt\}$ is a two-dimensional space. Phase space and phase diagrams are widely used in classical and statistical mechanics.

The stability of a dynamical system can be determined by calculating what happens to the system when it is slightly perturbed from an equilibrium state. Stability calculations are relatively straightforward for linear systems, but can be very difficult or intractable for nonlinear problems. Since many dynamical models are nonlinear, approximation techniques must be used to analyze their stability.

One way to analyze the stability of a nonlinear, dynamical model is to first linearize the problem. As a first approximation, the nonlinear problem is linearized by performing a Taylor series expansion of Eq. (10.3.3) about an equilibrium point. The result is

$$\dot{\mathbf{u}} = J\mathbf{u} + \mathbf{z}(\mathbf{u}), \quad \mathbf{u} \equiv \mathbf{x} - \mathbf{x}_e \quad (10.3.5)$$

where $\mathbf{u}$ is the displacement of the system from its equilibrium state and $\mathbf{z}(\mathbf{u})$ contains terms of second order or higher from the Taylor series expansion. The Jacobian matrix $J$ is evaluated at the equilibrium point $\mathbf{x}_e$; thus

$$J = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \dfrac{\partial f_1}{\partial x_2} \\ \dfrac{\partial f_2}{\partial x_1} & \dfrac{\partial f_2}{\partial x_2} \end{bmatrix}_{\mathbf{x}_e} = \begin{bmatrix} 0 & 1 \\ -g_0 & -g_1 \end{bmatrix}_{\mathbf{x}_e}$$

In this case, the matrices $A$ and $J$ are equal. Neglecting higher-order terms in Eq. (10.3.5) gives the linearized equation

$$\dot{\mathbf{u}} = J\mathbf{u} \quad (10.3.6)$$
We solve Eq. (10.3.6) by trying a solution with the exponential time dependence

$$\mathbf{u} = e^{\lambda t}\mathbf{g} \quad (10.3.7)$$

where $\mathbf{g}$ is a nonzero vector and $\lambda$ indicates whether or not the solution will return to equilibrium after a perturbation. Substituting Eq. (10.3.7) into Eq. (10.3.6) gives an eigenvalue problem of the form

$$(J - \lambda I)\mathbf{g} = 0 \quad (10.3.8)$$

The eigenvalues $\lambda$ are found from the characteristic equation

$$\det(J - \lambda I) = 0 \quad (10.3.9)$$
where det denotes the determinant. The following summarizes the interpretation of $\lambda$ if we assume that the independent variable $t$ is monotonically increasing:

STABILITY EIGENVALUE INTERPRETATION

Eigenvalue Condition    Interpretation
$\lambda > 0$           Diverges from equilibrium solution
$\lambda = 0$           Transition point
$\lambda < 0$           Converges to equilibrium solution

Eigenvalues from the characteristic equation for our particular case are

$$\lambda_{\pm} = \frac{1}{2}\left[-g_1 \pm \sqrt{g_1^2 - 4g_0}\right] \quad (10.3.10)$$
The linearized form of Eq. (10.3.3), namely, Eq. (10.3.6), exhibits stability when the product $\lambda t$ is less than zero because the difference $\mathbf{u} \to 0$ as $\lambda t \to -\infty$ in Eq. (10.3.7). If the product $\lambda t$ is greater than zero, the difference $\mathbf{u}$ diverges. This does not mean the solution of the nonlinear problem is globally divergent because of our linearization assumption. It does imply that a perturbation of the solution from its equilibrium value is locally divergent. Thus an estimate of the stability of the system is found by calculating the eigenvalues from the characteristic equation.

EXERCISE 10.7: Suppose we are given the equation

$$m\ddot{x} + c\dot{x} + kx = 0$$

as a model of a mechanical system, where $\{m, c, k\}$ are real constants. Under what conditions are the solutions stable? Hint: Rearrange the equation to look like Eq. (10.3.1) and then find the eigenvalues corresponding to the characteristic equation of the stability analysis.
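Eq. (10.3.10) is easy to check numerically. The sketch below (Python with numpy; the coefficient values $g_0 = 2$, $g_1 = 3$ are arbitrary sample choices) compares the eigenvalues of the matrix $A$ from Eq. (10.3.4) with the closed-form roots:

```python
import numpy as np

g0, g1 = 2.0, 3.0                       # arbitrary sample coefficients
A = np.array([[0.0, 1.0],
              [-g0, -g1]])              # matrix A from Eq. (10.3.4)

eigs = np.sort(np.linalg.eigvals(A))    # numerical eigenvalues

# Closed form (10.3.10): lambda = (-g1 +/- sqrt(g1^2 - 4 g0)) / 2
disc = np.sqrt(g1**2 - 4.0 * g0)
closed = np.sort(np.array([(-g1 - disc) / 2.0, (-g1 + disc) / 2.0]))

print(eigs)                        # approximately [-2, -1]
print(np.allclose(eigs, closed))   # True
```

Both eigenvalues are negative for these coefficients, so a perturbation decays back to the equilibrium state.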
10.4 INTRODUCTION TO NONLINEAR DYNAMICS AND CHAOS

Nonlinear difference equations such as

$$x_{n+1} = \lambda x_n (1 - x_n) \quad (10.4.1)$$

have arisen as simple models of turbulence in fluids, fluctuation of economic prices, and the evolution of biological populations. The mathematical significance of Eq. (10.4.1) was first recognized by biologist Robert May [1976], and Eq. (10.4.1) is now known as the May equation. May viewed Eq. (10.4.1) as a biological model representing the evolution of a population $x_n$. The May equation has the quadratic form

$$x_{n+1} \equiv F(x_n) = \lambda x_n - \lambda (x_n)^2 \quad (10.4.2)$$

where $F(x_n)$ is called the logistic function. The linear term $\lambda x_n$ represents linear growth ($\lambda > 1$) or death ($\lambda < 1$). The nonlinear term $-\lambda (x_n)^2$ represents a nonlinear death rate that dominates when the population becomes sufficiently large. Once we specify an initial state $x_0$, the evolution of the system is completely determined by the relationship $x_{n+1} = F(x_n)$.

The May equation is an example of a dynamical system. Each dynamical system consists of two parts: (1) a state and (2) a dynamic. The system is characterized by the state, and the dynamic is the relationship that describes the evolution of the state. The function $F(x_n)$ in Eq. (10.4.2) is the dynamic that maps the state $x_n$ to the state $x_{n+1}$. The quantity $\lambda$ in Eq. (10.4.1) is the parameter of the map $F$. Applying the map $F$ for $N$ iterations generates a sequence of states $\{x_0, x_1, x_2, \ldots, x_N\}$ known as the orbit of the iterative mapping as $N$ gets very large. The behavior of the dynamical system approaches steady-state behavior as $N \to \infty$. For large $N$, the steady state $x_N$ must be bounded.

There are several useful graphs that can be made to help visualize the behavior of a dynamical system. Three are summarized as follows:
HELPFUL NONLINEAR DYNAMICAL PLOTS

Assume a discrete map $x_{i+1} = f_\lambda(x_i)$ with parameter $\lambda$.

Type             Plot                      Comments
Time evolution   $i$ vs. $x_i$             Vary initial value $x_0$ and $\lambda$ to study sensitivity to initial conditions
Return map       $x_i$ vs. $x_{i+1}$       Vary initial value $x_0$ and $\lambda$ to look for instabilities
Logistic map     $x_{i+1}$ vs. $\lambda$   Search for order (patterns) in chaos (apparent randomness)
Example: The May Equation. May's logistic function $F(x_n)$ in Eq. (10.4.2) has a linear term and a nonlinear term. Suppose we normalize the population so that $0 \le x_n \le 1$. If the original population $x_0$ is much less than 1, then the nonlinear term is negligible initially and the population at the end of the first iteration is proportional to $x_0$. The parameter $\lambda$ is the proportionality constant. The population will increase if $\lambda > 1$, and it will decrease if $\lambda < 1$. If $\lambda > 1$, the population will eventually grow until the nonlinear term dominates, at which time the population will begin to decline because the nonlinear term is negative. The logistic map is the equation of an inverted parabola.

When $\lambda < 1$, all populations will eventually become extinct, that is, $x_{n+1}$ will go to zero. If $\lambda$ is a value between 1 and 3, almost all values of $x_0$ approach, or are attracted to, a fixed point. If $\lambda$ is larger than 3, the number of fixed points begins to increase until, for sufficiently large $\lambda$, the values of $x$ do not converge to any fixed point and the system becomes chaotic. To see the onset of chaos, we plot return maps. Figures 10.1 and 10.2 show return maps for an initial condition $x_0 = 0.5$ and the parameter $\lambda$ equals 3.5 and 3.9. There are only four fixed points when $\lambda = 3.5$. The number of fixed points increases dramatically when $\lambda = 3.9$.

EXERCISE 10.8: Another way to see the onset of chaos is to plot the number of fixed points versus the parameter $\lambda$. This plot is called the logistic map. Plot the logistic map for the May equation in the range $3.5 \le \lambda \le 4.0$ and initial condition $x_0 = 0.5$.

Application: Defining Chaos. Chaotic behavior of a dynamical system may be viewed graphically as two trajectories that begin with nearby initial conditions (Figure 10.3). If the trajectories are sensitive to initial conditions, so much so that they diverge at an exponential rate, then the dynamical system is exhibiting chaotic behavior.
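The fixed-point counts quoted above can be reproduced by direct iteration. The sketch below (Python; the transient and sample counts are arbitrary implementation choices) iterates the May equation and counts the distinct values the orbit visits after transients die out:

```python
# Iterate the May equation x_{n+1} = lam * x_n * (1 - x_n) and count the
# distinct values visited after transients die out. At lam = 3.5 the orbit
# settles onto a 4-cycle; at lam = 3.9 it does not settle at all.
def attractor_points(lam, x0=0.5, transients=1000, samples=200, digits=6):
    x = x0
    for _ in range(transients):          # discard transient behavior
        x = lam * x * (1 - x)
    seen = set()
    for _ in range(samples):             # record the steady-state orbit
        x = lam * x * (1 - x)
        seen.add(round(x, digits))
    return sorted(seen)

print(len(attractor_points(3.5)))   # 4 (a period-4 cycle)
print(len(attractor_points(3.9)))   # many distinct values: chaotic
```

This matches the return maps in Figures 10.1 and 10.2: four points at $\lambda = 3.5$ and a scatter of points at $\lambda = 3.9$.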
A quantitative characterization of this behavior is expressed in terms of the Lyapunov exponent [Hirsch and Smale, 1974; Lichtenberg and Lieberman, 1983; Jensen, 1987].
Figure 10.1 Return map of May equation at $\lambda = 3.5$.
Figure 10.2 Return map of May equation at $\lambda = 3.9$.
The definition of chaos as a dynamical system with positive Lyapunov exponents [Jensen, 1987] can be motivated by deriving its one-dimensional form from the first-order differential equation

$$\frac{dx_0}{dt} = V(x_0) \quad (10.4.3)$$
A solution of Eq. (10.4.3) for a given initial condition is called a trajectory, and the set of all trajectories is the flow of Eq. (10.4.3). The flow is generated by the mapping $V$. Further terminology and discussion of the classification of differential equations may be found in the literature, for example, Hirsch and Smale [1974], Gluckenheimer and Holmes [1986], and Devaney [1992].

The separation $\Delta x$ between two trajectories is calculated by slightly displacing the original trajectory. The difference between the original ($x_0$) and displaced ($x_0 + \Delta x$) trajectories is

$$\frac{d(x_0 + \Delta x)}{dt} - \frac{dx_0}{dt} = V(x_0 + \Delta x) - V(x_0) \quad (10.4.4)$$

Figure 10.3 Schematic of diverging trajectories.
Substituting a first-order Taylor series expansion of $V(x_0 + \Delta x)$ in Eq. (10.4.4) gives

$$\frac{d(\Delta x)}{dt} = \left.\frac{dV(x)}{dx}\right|_{x_0}\Delta x \quad (10.4.5)$$

Equation (10.4.5) has the solution

$$\Delta x = \Delta x_0 \exp\left\{\int_0^t \left.\frac{dV}{dx}\right|_{x_0} dt'\right\} \quad (10.4.6)$$

where $t'$ is a dummy integration variable and the separation $\Delta x_0$ is the value of $\Delta x$ at $t = 0$. A measure of the trajectory separation is obtained by calculating the magnitude of $\Delta x$. Thus the trajectory separation $d(x_0, t)$ is

$$d(x_0, t) = \sqrt{(\Delta x)^2} = |\Delta x| \quad (10.4.7)$$

Following Lichtenberg and Lieberman [1983], a chaotic system is a system that exhibits an exponential rate of divergence of two initially close trajectories. This concept is quantified by assuming the exponential relationship

$$d(x_0, t) = d(x_0, 0)\,e^{s(x_0, t)\,t} \quad (10.4.8)$$

where $d(x_0, 0)$ is the trajectory separation at $t = 0$ and $s$ is a factor to be determined. A positive value of $s$ in Eq. (10.4.8) implies a divergence of trajectories, whereas a negative value implies convergence. Solving Eq. (10.4.8) for $s$ and recognizing that $st$ is positive for diverging trajectories lets us write

$$s(x_0, t) = \frac{1}{t}\int_0^t \left.\frac{dV}{dx}\right|_{x_0} dt' \quad (10.4.9)$$

where Eqs. (10.4.6) and (10.4.7) have been used. The Lyapunov exponent $\sigma_L$ is the value of $s(x_0, t)$ in the limit as $t$ goes to infinity; thus

$$\sigma_L = \lim_{t\to\infty} s(x_0, t) = \lim_{t\to\infty}\left\{\frac{1}{t}\int_0^t \left.\frac{dV}{dx}\right|_{x_0} dt'\right\} \quad (10.4.10)$$

Equation (10.4.10) is an effective definition of chaos if the derivative $dV/dx$ is known at $x_0$, and both the integral and limit can be evaluated. An alternative definition of the Lyapunov exponent is given by Parker and Chua [1987].
Example: Calculation of the Lyapunov Exponent. Suppose $V(x) = \lambda x$ so that Eq. (10.4.3) can be written in the explicit form

$$\frac{dx}{dt} = \lambda x$$

The derivative of $V$ with respect to $x$ is

$$\frac{dV}{dx} = \lambda$$

Substituting this derivative into Eq. (10.4.9) gives

$$s(x_0, t) = \frac{1}{t}\int_0^t \left.\frac{dV}{dx}\right|_{x_0} dt' = \frac{1}{t}\int_0^t \lambda\,dt' = \frac{\lambda t}{t} = \lambda$$

The Lyapunov exponent is now calculated as the limit

$$\sigma_L = \lim_{t\to\infty} s(x_0, t) = \lim_{t\to\infty}\lambda = \lambda$$

In this case $\sigma_L$ and $s(x_0, t)$ are equal because $s(x_0, t)$ does not depend on time. The trajectory separation $\Delta x$ is given by Eq. (10.4.4) as

$$\frac{d(\Delta x)}{dt} = V(x_0 + \Delta x) - V(x_0) = \lambda(x_0 + \Delta x) - \lambda x_0 = \lambda\,\Delta x$$

with the condition that $\Delta x = \Delta x_0$ at $t = 0$. This equation has the solution

$$\Delta x = \Delta x_0\,e^{\lambda t}$$

Notice that a positive value of $\lambda$ implies $\sigma_L$ is positive and the trajectory separation is divergent. Similarly, a negative value of $\lambda$ implies $\sigma_L$ is negative and the trajectory separation is convergent.
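The worked example can also be checked numerically: integrate two nearby trajectories of $dx/dt = \lambda x$ and estimate $s(x_0, t)$ from Eq. (10.4.8). The sketch below (Python; the Euler integration scheme and the sample value $\lambda = -0.5$ are assumptions for illustration) recovers $s \approx \lambda$:

```python
# Numerical check of the worked example: for V(x) = lam * x the Lyapunov
# exponent equals lam. Integrate two nearby trajectories of dx/dt = lam*x
# with Euler steps and estimate s(x0, t) = (1/t) * ln(d(t)/d(0)).
import math

lam = -0.5                 # sample value; negative -> converging trajectories
x, xp = 1.0, 1.0 + 1e-6    # original and displaced trajectories
dt, steps = 1e-4, 100000   # integrate to t = 10

for _ in range(steps):
    x += dt * lam * x
    xp += dt * lam * xp

t = dt * steps
d0, dtau = 1e-6, abs(xp - x)          # separations at t = 0 and at t
s_est = math.log(dtau / d0) / t       # estimate of s(x0, t)

print(s_est)   # close to lam = -0.5
```

Since $s$ is negative here, the two trajectories converge, consistent with the discussion above.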
CHAPTER 11
ODE SOLUTION TECHNIQUES
There is no guarantee that an ODE can be solved. Several different techniques exist for trying to solve ODEs, and it is often necessary to apply more than one method to ODEs that are not in a familiar form. The solution techniques presented in this chapter have been used successfully to solve a wide range of ODE problems.
11.1 HIGHER-ORDER ODE WITH CONSTANT COEFFICIENTS

Suppose we are given the nonhomogeneous ODE

$$A\ddot{x} + B\dot{x} + Cx = D\cos\omega t, \quad D > 0, \quad \omega > 0 \quad (11.1.1)$$

where $A$, $B$, $C$, $D$, $\omega$ are constants, and the time derivatives are given by

$$\dot{x} = \frac{dx}{dt}, \quad \ddot{x} = \frac{d^2 x}{dt^2}$$

We seek to solve the ODE using the particular solution

$$x(t) = a\cos\omega t + b\sin\omega t \quad (11.1.2)$$
for initial conditions

$$x(t = 0) = x_0, \quad \dot{x}(t = 0) = \dot{x}_0 \quad (11.1.3)$$

The time derivatives of $x(t)$ are

$$\dot{x} = -a\omega\sin\omega t + b\omega\cos\omega t$$
$$\ddot{x} = -\omega^2 a\cos\omega t - \omega^2 b\sin\omega t$$

We substitute these expressions into the nonhomogeneous ODE to find

$$-A\omega^2(a\cos\omega t + b\sin\omega t) + B\omega(-a\sin\omega t + b\cos\omega t) + C(a\cos\omega t + b\sin\omega t) = D\cos\omega t$$

Collecting terms in $\cos\omega t$ and $\sin\omega t$ gives

$$\left[-A\omega^2 a + bB\omega + aC - D\right]\cos\omega t + \left[-A\omega^2 b - aB\omega + bC\right]\sin\omega t = 0$$

This equation is satisfied if the coefficients of each trigonometric term vanish. Thus we have two equations for the two unknowns $a$, $b$:

$$a(C - A\omega^2) + bB\omega = D$$
$$-a(B\omega) + b(C - A\omega^2) = 0$$

Solving the latter equation for $b$ yields

$$b = \frac{B\omega}{C - A\omega^2}\,a$$

Substituting the expression for $b$ into the equation containing $D$ gives

$$a(C - A\omega^2) + \frac{B^2\omega^2}{C - A\omega^2}\,a = D$$

We simplify these equations to obtain

$$a = \frac{D(C - A\omega^2)}{(C - A\omega^2)^2 + B^2\omega^2} \quad (11.1.4)$$

and

$$b = \frac{DB\omega}{(C - A\omega^2)^2 + B^2\omega^2} \quad (11.1.5)$$
At $t = 0$, the initial conditions are

$$x(0) = a = x_0, \quad \dot{x}(0) = b\omega = \dot{x}_0 \quad (11.1.6)$$
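The coefficients (11.1.4) and (11.1.5) can be verified symbolically. The sketch below (Python with sympy, an assumption; any computer algebra system would do) substitutes the particular solution back into the ODE and confirms that the residual vanishes:

```python
# Symbolic check that x(t) = a cos(wt) + b sin(wt), with a and b taken from
# Eqs. (11.1.4)-(11.1.5), satisfies A x'' + B x' + C x = D cos(wt).
import sympy as sp

t = sp.symbols("t")
A, B, C, D, w = sp.symbols("A B C D omega", positive=True)

den = (C - A * w**2)**2 + B**2 * w**2
a = D * (C - A * w**2) / den          # Eq. (11.1.4)
b = D * B * w / den                   # Eq. (11.1.5)

x = a * sp.cos(w * t) + b * sp.sin(w * t)
residual = sp.simplify(A * sp.diff(x, t, 2) + B * sp.diff(x, t) + C * x
                       - D * sp.cos(w * t))
print(residual)   # 0
```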
Example: Application to a Mechanical System, Forced Oscillations. Define the following:

A = m = mass of the body
B = c = damping constant
C = k = spring modulus
D = F0 = applied external force at t = 0 that corresponds to maximum driving force
x(t) = displacement of spring from equilibrium

Using this notation, the nonhomogeneous ODE for the mechanical system shown in Figure 11.1 is

$$m\ddot{x} + c\dot{x} + kx = F_0\cos\omega t$$

From the preceding discussion we have a particular solution

$$x(t) = a\cos\omega t + b\sin\omega t$$

where

$$a = F_0\,\frac{k - m\omega^2}{(k - m\omega^2)^2 + c^2\omega^2}, \quad b = F_0\,\frac{c\omega}{(k - m\omega^2)^2 + c^2\omega^2}$$

Figure 11.1 Mass on a damped spring.
If we define a natural mechanical frequency

$$\omega_0 = \sqrt{k/m} > 0$$

we obtain

$$a = F_0\,\frac{m(\omega_0^2 - \omega^2)}{m^2(\omega_0^2 - \omega^2)^2 + \omega^2 c^2}, \quad b = F_0\,\frac{\omega c}{m^2(\omega_0^2 - \omega^2)^2 + \omega^2 c^2}$$

Resonance occurs when $\omega_0 = \omega$. In a lightly damped system (as $c \to 0$), resonance can result in large oscillations because $a \to 0$ and $b \to \infty$. In the case of mechanical oscillations, physical values for the initial conditions are

$$x_0 = a, \quad \dot{x}_0 = b\omega$$

EXERCISE 11.1: Application to an Electrical System, RLC Circuit. Find a particular solution of the nonhomogeneous ODE

$$L\ddot{I} + R\dot{I} + \frac{I}{\hat{C}} = E_0\,\omega\cos\omega t$$

where $L$ = inductance of inductor, $R$ = resistance of resistor, $\hat{C}$ = capacitance of capacitor, $E_0\omega$ = maximum value of derivative of electromotive force ($E_0$ = voltage at $t = 0$), and $I(t)$ = current (see Figure 11.2).

Figure 11.2 RLC circuit.
11.2 VARIATION OF PARAMETERS

Consider the second-order ODE

$$y'' + a_1(x)y' + a_0(x)y = h(x) \quad (11.2.1)$$

where $h(x)$ is a known function for the nonhomogeneous equation and the prime denotes differentiation with respect to $x$. The double prime denotes the second-order derivative with respect to $x$. Equation (11.2.1) has the homogeneous ($h(x) \equiv 0$) solution

$$y_h = c_1 y_1(x) + c_2 y_2(x)$$

where $c_1$, $c_2$ are constant parameters, and $y_1$, $y_2$ are solutions of the homogeneous equation. To solve the nonhomogeneous equation, we seek a particular solution $y_p$ for the boundary conditions

$$y_p(x_0) = 0, \quad y_p'(x_0) = 0 \quad (11.2.2)$$
The method we use follows Section 3.4 of Kreider et al. [1968]. We try a solution of the form

$$y_p = c_1(x)y_1(x) + c_2(x)y_2(x) \quad (11.2.3)$$

where parameters of the homogeneous solution are now allowed to depend on $x$. Substitute Eq. (11.2.3) into Eq. (11.2.1) to get

$$c_1\left[y_1'' + a_1 y_1' + a_0 y_1\right] + c_2\left[y_2'' + a_1 y_2' + a_0 y_2\right] + \left(c_1' y_1 + c_2' y_2\right)' + a_1\left(c_1' y_1 + c_2' y_2\right) + \left(c_1' y_1' + c_2' y_2'\right) = h$$

The terms multiplying $c_1$ and $c_2$ must equal 0 since $y_1$ and $y_2$ solve the homogeneous equation with $h(x) \equiv 0$. Simplifying the above equation gives

$$\left(c_1' y_1 + c_2' y_2\right)' + a_1\left(c_1' y_1 + c_2' y_2\right) + \left(c_1' y_1' + c_2' y_2'\right) = h$$

Upon rearrangement we find

$$\frac{d}{dx}\left(c_1' y_1 + c_2' y_2\right) + a_1\left(c_1' y_1 + c_2' y_2\right) + \left(c_1' y_1' + c_2' y_2'\right) = h \quad (11.2.4)$$

Equation (11.2.4) is solved if the following conditions are satisfied by the arbitrary parameters $c_1$, $c_2$:

$$c_1' y_1 + c_2' y_2 = 0 \quad (11.2.5a)$$
$$c_1' y_1' + c_2' y_2' = h \quad (11.2.5b)$$
Rearranging Eq. (11.2.5a) gives

$$c_1' = -\frac{c_2' y_2}{y_1} \quad (11.2.6)$$

which may be used in Eq. (11.2.5b) to yield

$$c_1' y_1' + c_2' y_2' = c_2'\left(y_2' - \frac{y_2}{y_1}\,y_1'\right) = h$$

Solving for $c_2'$ and simplifying lets us write

$$c_2' = \frac{h y_1}{y_1 y_2' - y_2 y_1'} \quad (11.2.7)$$

Using Eq. (11.2.7) in Eq. (11.2.6) gives

$$c_1' = -\frac{y_2}{y_1}\,c_2' = -\frac{h y_2}{y_1 y_2' - y_2 y_1'} \quad (11.2.8)$$

Note that the denominator may be written as the determinant

$$y_1 y_2' - y_2 y_1' = \begin{vmatrix} y_1 & y_2 \\ y_1' & y_2' \end{vmatrix} \equiv W(y_1, y_2) \quad (11.2.9)$$

The determinant is called the Wronskian and is denoted by $W(y_1, y_2)$. The Wronskian notation lets us simplify Eqs. (11.2.7) and (11.2.8); thus

$$c_1' = \frac{dc_1}{dx} = -\frac{h y_2}{W(y_1, y_2)}, \quad c_2' = \frac{dc_2}{dx} = \frac{h y_1}{W(y_1, y_2)} \quad (11.2.10)$$

If we integrate $c_1'$ and $c_2'$ over a dummy variable $t$ from a fixed reference point $x_0$ to a variable upper limit $x$, the particular solution $y_p = c_1 y_1 + c_2 y_2$ becomes

$$y_p = -\left\{\int_{x_0}^{x} \frac{h(t)y_2(t)}{W[y_1(t), y_2(t)]}\,dt\right\}y_1(x) + \left\{\int_{x_0}^{x} \frac{h(t)y_1(t)}{W[y_1(t), y_2(t)]}\,dt\right\}y_2(x)$$

Combining terms gives

$$y_p(x) = \int_{x_0}^{x} \frac{y_1(t)y_2(x) - y_1(x)y_2(t)}{W[y_1(t), y_2(t)]}\,h(t)\,dt \quad (11.2.11)$$
since $y_1(x)$ and $y_2(x)$ are independent of the dummy integration variable $t$. Equation (11.2.11) can be written as

$$y_p(x) = \int_{x_0}^{x} K(x, t)h(t)\,dt \quad (11.2.12)$$

where

$$K(x, t) = \frac{y_1(t)y_2(x) - y_1(x)y_2(t)}{W[y_1(t), y_2(t)]} \quad (11.2.13)$$

The function $K(x, t)$ is called the Green's function for the nonhomogeneous equation $Ly = h(x)$ with the differential operator $L$ given by

$$L = \frac{d^2}{dx^2} + a_1(x)\frac{d}{dx} + a_0(x) \quad (11.2.14)$$

Inspecting Eqs. (11.2.9) and (11.2.13) shows that Green's function depends only on the solutions $y_1$, $y_2$ of the homogeneous equation $Ly = 0$. Green's function $K(x, t)$ does not depend on the function $h(x)$. Once $K(x, t)$ has been calculated, Eq. (11.2.12) gives a particular solution in terms of the function $h(x)$. In summary, the general solution of $Ly = h(x)$ is

$$y(x) = y_p(x) + y_h(x) \quad (11.2.15)$$

or

$$y(x) = \int_{x_0}^{x} K(x, t)h(t)\,dt + y_h(x) \quad (11.2.16)$$
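The Green's function machinery can be exercised on the simple case $y'' + y = h(x)$, for which $y_1 = \cos x$, $y_2 = \sin x$, and the Wronskian is 1, so Eq. (11.2.13) reduces to $K(x, t) = \sin(x - t)$. The sketch below (Python with sympy; the forcing term $h(t) = e^t$ and reference point $x_0 = 0$ are sample choices) verifies both facts:

```python
# Green's function check for y'' + y = h(x) (so a1 = 0, a0 = 1).
# Homogeneous solutions: y1 = cos x, y2 = sin x; Wronskian W = 1.
import sympy as sp

x, tt = sp.symbols("x t")
y1, y2 = sp.cos(x), sp.sin(x)

W = sp.simplify(y1 * sp.diff(y2, x) - y2 * sp.diff(y1, x))   # W = 1
K = (y1.subs(x, tt) * y2 - y1 * y2.subs(x, tt)) / W          # Eq. (11.2.13)

h = sp.exp(tt)                            # sample forcing term h(t) = e^t
yp = sp.integrate(K * h, (tt, 0, x))      # Eq. (11.2.12) with x0 = 0

residual = sp.simplify(sp.diff(yp, x, 2) + yp - sp.exp(x))
print(sp.simplify(K - sp.sin(x - tt)))    # 0  (K reduces to sin(x - t))
print(residual)                           # 0  (yp solves y'' + y = e^x)
```

The particular solution also satisfies the boundary conditions (11.2.2) at $x_0 = 0$, as the construction guarantees.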
EXERCISE 11.2: Find the general solution of the one-dimensional Schrödinger equation

$$\frac{\hbar^2}{2m}\frac{d^2\psi(x)}{dx^2} + E\psi(x) = V(x)\psi(x)$$

for the wavefunction $\psi(x)$. The two boundary conditions are $\psi(x_0) = 0$, $\psi'(x_0) = 0$.
11.3 CAUCHY EQUATION

Another ODE solution technique is illustrated by solving an equation of the form

$$x^2 y'' + axy' + by = 0 \quad (11.3.1)$$
where $a$, $b$ are constants. This equation is called the Cauchy (or Euler) equation. It is a second-order ODE and should therefore have two solutions. To find them, we begin by assuming

$$y = x^m \quad (11.3.2)$$

and substitute $y$ into the Cauchy equation to get

$$x^2\,m(m-1)x^{m-2} + ax\,mx^{m-1} + bx^m = 0$$

Combining factors of $x$ lets us simplify the equation to the form

$$m(m-1)x^m + amx^m + bx^m = 0$$

Factoring $x^m$ gives

$$\left[m(m-1) + am + b\right]x^m = 0 \quad (11.3.3)$$

For arbitrary values of $x$, the coefficient in brackets $[\ldots]$ must vanish, and we obtain the indicial equation

$$m(m-1) + am + b = 0$$

or

$$m^2 + (a-1)m + b = 0$$

The indicial equation is a quadratic equation with the roots

$$m_{\pm} = -\frac{(a-1)}{2} \pm \frac{\sqrt{(a-1)^2 - 4b}}{2} \quad (11.3.4)$$

Hence the solution of Eq. (11.3.1) is

$$y = c_1 x^{m_+} + c_2 x^{m_-} \quad (11.3.5)$$

for $m_+ \ne m_-$. For the special case of a double root ($m_+ = m_-$), only one solution is found and another procedure such as variation of parameters must be used to obtain the other solution.
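The indicial roots are easy to check by machine. The sketch below (Python with sympy; the constants $a = 4$, $b = 2$ are sample choices giving roots $m = -1$ and $m = -2$) confirms that each $x^m$ solves the Cauchy equation:

```python
# Check the Cauchy-equation solution y = x^m: for x^2 y'' + a x y' + b y = 0
# the exponents are the roots of the indicial equation m^2 + (a-1)m + b = 0.
import sympy as sp

x = sp.symbols("x", positive=True)
m = sp.symbols("m")
a, b = 4, 2                               # sample constants

roots = sorted(sp.solve(m**2 + (a - 1) * m + b, m))
print(roots)   # [-2, -1]

for mm in roots:
    y = x**mm
    residual = sp.simplify(x**2 * sp.diff(y, x, 2)
                           + a * x * sp.diff(y, x) + b * y)
    print(residual)   # 0 for each root
```

The general solution for this sample is therefore $y = c_1 x^{-1} + c_2 x^{-2}$.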
11.4 SERIES METHODS

Consider a linear, second-order ODE of the form

$$A(x)y'' + B(x)y' + C(x)y = 0 \quad (11.4.1)$$
where the prime denotes differentiation with respect to $x$. The functions $\{A(x), B(x), C(x)\}$ are analytic functions; that is, each function exists and is differentiable at all points in a domain $D$ (see Chapter 7, Section 7.3 for more discussion of analyticity). Equation (11.4.1) is a homogeneous ODE with the general solution

$$y(x) = a_1 y_1(x) + a_2 y_2(x) \quad (11.4.2)$$

where $\{a_1, a_2\}$ are constants, and $\{y_1, y_2\}$ are two linearly independent solutions of Eq. (11.4.1). Two powerful methods for finding solutions of Eq. (11.4.1) are discussed in this section: (1) the power series method and (2) the Frobenius series method. Both methods depend on series representations of the solution $y(x)$.

Power Series Method

The power series method is usually presented as a technique for solving Eq. (11.4.1) in the more traditional form

$$y'' + P(x)y' + Q(x)y = 0 \quad (11.4.3)$$
Equation (11.4.3) was obtained by dividing Eq. (11.4.1) by $A(x)$ so that

$$P(x) = \frac{B(x)}{A(x)}, \quad Q(x) = \frac{C(x)}{A(x)} \quad (11.4.4)$$

If the functions $P(x)$ and $Q(x)$ are both analytic at $x = 0$, then $x = 0$ is said to be an ordinary point. In this case, Eq. (11.4.3) has two linearly independent power series solutions of the form

$$y(x) = \sum_{n=0}^{\infty} c_n x^n \quad (11.4.5)$$
with expansion coefficients $\{c_n\}$. An example illustrating the procedure for calculating the expansion coefficients $\{c_n\}$ follows.

Example: Legendre's equation is

$$(1 - x^2)y'' - 2xy' + \mu y = 0$$

where $\mu$ is a real number and the prime denotes differentiation with respect to $x$. Legendre's equation is solved using the power series expansion

$$y(x) = \sum_{k=0}^{\infty} c_k x^k$$
The derivatives of $y$ are

$$y' = \sum_{k=1}^{\infty} k c_k x^{k-1}$$

and

$$y'' = \sum_{k=2}^{\infty} k(k-1)c_k x^{k-2}$$

Substituting the derivatives into Legendre's equation gives

$$(1 - x^2)\sum_{k=2}^{\infty} k(k-1)c_k x^{k-2} - 2x\sum_{k=1}^{\infty} k c_k x^{k-1} + \mu\sum_{k=0}^{\infty} c_k x^k = 0$$

Multiplying the factors in the first and second summations yields

$$\sum_{k=2}^{\infty} k(k-1)c_k x^{k-2} - \sum_{k=2}^{\infty} k(k-1)c_k x^k - 2\sum_{k=1}^{\infty} k c_k x^k + \mu\sum_{k=0}^{\infty} c_k x^k = 0$$

Writing out the terms and collecting like powers of $x$, this equation is satisfied for all values of $x$ when the coefficients of each power of $x$ vanish; thus

$$c_2 = -\frac{\mu}{2}\,c_0$$
$$c_3 = -\frac{\mu - 2}{3\cdot 2}\,c_1$$
$$c_4 = -\frac{\mu - 2\cdot 2 - 2}{4\cdot 3}\,c_2$$
$$c_5 = -\frac{\mu - 2\cdot 3 - 3\cdot 2}{5\cdot 4}\,c_3$$

etc. In general, the expansion coefficients satisfy the relation

$$c_{r+2} = -\frac{\mu - 2r - r(r-1)}{(r+2)(r+1)}\,c_r$$
This relationship between expansion coefficients is an example of a recurrence relation. The solution of Legendre's equation can be expressed as the sum of two polynomials using the recurrence relation. First write the power series solution in the form

$$y(x) = \sum_{k=0}^{\infty} c_k x^k = \left(c_0 + c_2 x^2 + c_4 x^4 + \cdots\right) + \left(c_1 x + c_3 x^3 + c_5 x^5 + \cdots\right)$$

The coefficients $\{c_k\}$ with $k > 1$ are now written in terms of $c_0$ and $c_1$ using the recurrence relations to give

$$y(x) = c_0\left[1 - \frac{\mu}{2}x^2 + \frac{\mu}{2}\,\frac{\mu - 4 - 2}{4\cdot 3}\,x^4 - \cdots\right] + c_1\left[x - \frac{\mu - 2}{3\cdot 2}\,x^3 + \frac{\mu - 2}{3\cdot 2}\,\frac{\mu - 2\cdot 3 - 3\cdot 2}{5\cdot 4}\,x^5 - \cdots\right]$$

or

$$y(x) = c_0 y_1(x) + c_1 y_2(x)$$

The polynomials $y_1(x)$ and $y_2(x)$ are linearly independent power series. The constants $c_0$ and $c_1$ can be determined when two boundary conditions are specified for Legendre's second-order ODE.
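The recurrence relation above can be iterated numerically. When $\mu = \ell(\ell+1)$ for an integer $\ell$, one of the two series terminates in a polynomial, which is how the Legendre polynomials arise. The sketch below (Python; exact rational arithmetic via `fractions` is an implementation choice) shows the even series terminating for $\mu = 6$ ($\ell = 2$):

```python
# Iterate the recurrence c_{r+2} = -(mu - 2r - r(r-1)) c_r / ((r+2)(r+1))
# from the Legendre example, using exact rational arithmetic.
from fractions import Fraction

def legendre_coeffs(mu, c0, c1, nmax):
    """Return series coefficients c_0..c_nmax from the recurrence."""
    c = [Fraction(0)] * (nmax + 1)
    c[0], c[1] = Fraction(c0), Fraction(c1)
    for r in range(nmax - 1):
        c[r + 2] = -Fraction(mu - 2 * r - r * (r - 1),
                             (r + 2) * (r + 1)) * c[r]
    return c

c = legendre_coeffs(mu=6, c0=1, c1=0, nmax=10)
print([int(v) for v in c[:5]])   # [1, 0, -3, 0, 0] -> series terminates
```

The surviving polynomial $1 - 3x^2$ is proportional to the Legendre polynomial $P_2(x) = (3x^2 - 1)/2$.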
Frobenius Series Method

The power series solution method just described is applicable to Eq. (11.4.3) when $P(x)$ and $Q(x)$ are analytic at $x = 0$. If $P(x)$ or $Q(x)$ is not analytic at $x = 0$, then $x = 0$ is called a singular point and Eq. (11.4.3) is rewritten in the form

$$y'' + \frac{p(x)}{x}\,y' + \frac{q(x)}{x^2}\,y = 0 \quad (11.4.6)$$

where

$$p(x) = xP(x), \quad q(x) = x^2 Q(x) \quad (11.4.7)$$

If $p(x)$ and $q(x)$ are both analytic at $x = 0$, then $x = 0$ is said to be a regular singular point. In this case, a solution of Eq. (11.4.6) is the Frobenius series

$$y(x) = x^r\sum_{n=0}^{\infty} a_n x^n \quad (11.4.8)$$
where {an } and r are constants. It is assumed that the variable x is restricted to the interval 0 , x , r, where r is the radius of convergence of the series (see Chapter 2, Section 2.5). The restriction on x positive values can be relaxed if we replace xr by jxjr in Eq. (11.4.8). In many applications, however, the ODE in Eq. (11.4.6) arises from physical models expressed in radial coordinates, and the variable x in Eq. (11.4.6) then represents the non negative radial coordinate. The constant r in Eq. (11.4.8) is found by substituting Eq. (11.4.8) into Eq. (11.4.6) or its equivalent x2 y00 þ xp(x)y0 þ q(x)y ¼ 0
(11:4:9)
The result is the indicial equation r(r 1) þ p0 r þ q0 ¼ 0
(11:4:10)
with the notation p0 ¼ p(0) and q0 ¼ q(0). An indicial equation was encountered in Section 11.3 as part of the solution of the Cauchy equation. The two roots of the indicial equation are h pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffii r ¼ 12 1 p0 + (1 p0 )2 4q0
(11:4:11)
Notice that the roots of the indicial equation are not necessarily integers. At least one Frobenius series solution of Eq. (11.4.9) exists when the roots of Eq. (11.4.10) are real. In this case, one solution of Eq. (11.4.9) is

$$y_1(x) = x^{r_1} \sum_{n=0}^{\infty} a_n x^n \qquad (11.4.12)$$

where $r_1 \ge r_2$ and $a_0 \ne 0$. A second linearly independent solution of Eq. (11.4.9) exists if $r_1 - r_2 \ne 0$ and $r_1 - r_2$ is not a positive integer. The second solution is then

$$y_2(x) = x^{r_2} \sum_{n=0}^{\infty} b_n x^n \qquad (11.4.13)$$

with $b_0 \ne 0$. The coefficients in Eqs. (11.4.12) and (11.4.13) are found by substituting the appropriate Frobenius series into Eq. (11.4.9) and expanding the resulting set of equations in equal powers of x until a recurrence relation is found. The Frobenius series solution method is illustrated in the following example and Exercise 11.3. Additional discussion of series solution methods can be found in many sources, for example, Lomen and Mark [1988], Edwards and Penney [1989], and Kreyszig [1999].
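As a small computational aside (ours, not the text's), the indicial roots of Eq. (11.4.11) are easy to evaluate directly from $p_0$ and $q_0$; the function name and the Bessel-equation example below are illustrative assumptions:

```python
import math

def indicial_roots(p0, q0):
    """Real roots of the indicial equation r(r - 1) + p0*r + q0 = 0, Eq. (11.4.10)."""
    disc = (1.0 - p0) ** 2 - 4.0 * q0
    if disc < 0:
        raise ValueError("indicial roots are complex")
    s = math.sqrt(disc)
    # Eq. (11.4.11): r = (1/2) [1 - p0 +/- sqrt((1 - p0)^2 - 4 q0)]
    return (0.5 * (1.0 - p0 + s), 0.5 * (1.0 - p0 - s))

# Bessel's equation x^2 y'' + x y' + (x^2 - nu^2) y = 0 has p(x) = 1 and
# q(x) = x^2 - nu^2, so p0 = 1 and q0 = -nu^2; the roots are r = +/- nu.
print(indicial_roots(1.0, -4.0))  # nu = 2 gives (2.0, -2.0)
```

The non-integer or non-distinct cases discussed above correspond to the discriminant structure of this quadratic.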
ODE SOLUTION TECHNIQUES
Example: Solve $y'' + y = 0$ using a power series solution of the form

$$y(x) = \sum_{m=0}^{\infty} c_m x^{m+r} \qquad (11.4.14)$$

The derivatives of y with respect to x are

$$y' = \sum_{m=0}^{\infty} c_m (m+r)\,x^{m+r-1}, \qquad y'' = \sum_{m=0}^{\infty} c_m (m+r)(m+r-1)\,x^{m+r-2}$$
Substituting these power series expansions of y, y'' into the ODE gives

$$y'' + y = \sum_{m=0}^{\infty} c_m (m+r)(m+r-1)\,x^{m+r-2} + \sum_{m=0}^{\infty} c_m x^{m+r} = 0 \qquad (11.4.15)$$

or

$$c_0\,r(r-1)\,x^{r-2} + c_1 (r+1)r\,x^{r-1} + c_2 (r+2)(r+1)\,x^{r} + c_3 (r+3)(r+2)\,x^{r+1} + \cdots + c_0 x^{r} + c_1 x^{r+1} + c_2 x^{r+2} + \cdots = 0 \qquad (11.4.16)$$

where we have expanded the sums of Eq. (11.4.15). Equating like powers of x in Eq. (11.4.16) yields the following set of equations:

$$r(r-1)\,c_0 = 0$$
$$(r+1)r\,c_1 = 0$$
$$c_0 + c_2 (r+2)(r+1) = 0$$
$$c_1 + c_3 (r+3)(r+2) = 0$$
$$\vdots$$

The first two equations imply r = 0 for nonzero values of $c_0$ and $c_1$. The latter equations have the form

$$c_i + c_{i+2}(r+i+2)(r+i+1) = 0 \qquad (11.4.17)$$

Solving Eq. (11.4.17) for $c_{i+2}$ in terms of $c_i$ leads to the recurrence relation

$$c_{i+2} = -\frac{c_i}{(r+i+2)(r+i+1)}$$

Recognizing that r = 0 gives

$$c_{i+2} = -\frac{c_i}{(i+2)(i+1)} \qquad (11.4.18)$$
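The recurrence in Eq. (11.4.18) can be iterated numerically; the short sketch below (our illustration, with hypothetical function names) generates the coefficients and checks the truncated series against the cosine it should reproduce:

```python
import math

def power_series_coeffs(c0, c1, nterms):
    """Iterate c_{i+2} = -c_i / ((i + 2)(i + 1)), Eq. (11.4.18) with r = 0."""
    c = [c0, c1]
    for i in range(nterms - 2):
        c.append(-c[i] / ((i + 2) * (i + 1)))
    return c

def series_sum(x, c):
    """Evaluate the truncated series y(x) = sum_n c_n x^n."""
    return sum(cn * x ** n for n, cn in enumerate(c))

# c0 = 1, c1 = 0 reproduces the Maclaurin series of cos x
c = power_series_coeffs(1.0, 0.0, 20)
print(abs(series_sum(0.5, c) - math.cos(0.5)))  # near machine precision
```

Choosing $c_0 = 0$, $c_1 = 1$ instead reproduces the sine series, in line with Eq. (11.4.20) below.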
Each coefficient $c_i$ for i > 1 can be expressed in terms of either $c_0$ or $c_1$; thus

$$c_2 = -\frac{c_0}{2\cdot 1}, \quad c_3 = -\frac{c_1}{3\cdot 2}, \quad c_4 = -\frac{c_2}{4\cdot 3} = \frac{c_0}{4!}, \quad c_5 = -\frac{c_3}{5\cdot 4} = \frac{c_1}{5!}, \ \ldots \qquad (11.4.19)$$

The constants $c_0$, $c_1$ are determined by boundary conditions. Substituting the expressions in Eq. (11.4.19) into the power series expansion of y gives

$$y(x) = c_0 + c_1 x - \frac{c_0}{2!}x^2 - \frac{c_1}{3!}x^3 + \frac{c_0}{4!}x^4 + \frac{c_1}{5!}x^5 + \cdots$$

or

$$y(x) = c_0\left(1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots\right) + c_1\left(x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots\right) = c_0\cos x + c_1\sin x \qquad (11.4.20)$$

where we have used the Maclaurin series expansions for sin x and cos x (see Chapter 2, Section 2.5). In this example, the power series method yielded a solution that we might have guessed by inspection of the ODE. Although the power series method was not needed to solve this problem, the example demonstrates its utility for problems where a trial solution is not so obvious.

EXERCISE 11.3: Use the Frobenius method to solve the equation

$$\left[\frac{d^2}{dr^2} + \frac{2}{r}\frac{d}{dr} + \frac{\ell}{r^2} + \omega^2\right] R(r) = 0$$

for R(r) when $\ell$ and $\omega$ are independent of the variable r, and r > 0.

11.5 LAPLACE TRANSFORM METHOD
The Laplace transforms of derivatives presented in Section 9.6 can be used to solve differential equations by performing the following procedure:

1. Take the Laplace transform of the differential equation.
2. Solve the resulting algebraic equation for the Laplace transform.
3. Invert the Laplace transformation.
4. Verify the results.
This procedure is demonstrated by the following example.
Example: Solve the ODE $y''(x) + y(x) = 0$ with initial conditions $y(0) = a$, $y'(0) = b$ using Laplace transforms.

Step 1: Write the Laplace transform of y(x) as $Y(s) = L\{y(x)\}$ and take the Laplace transform of $y''(x) + y(x) = 0$ to obtain

$$L\{y''(x)\} + L\{y(x)\} = 0$$

or

$$s^2 Y(s) - s\,y(0) - y'(0) + Y(s) = 0$$

Step 2: Solve for Y(s) and use the initial conditions to get

$$Y(s) = \frac{sa + b}{s^2 + 1} = \frac{sa}{s^2 + 1} + \frac{b}{s^2 + 1}$$

Step 3: To find y(x), we invert Y(s):

$$y(x) = L^{-1}\{Y(s)\} = a\cos x + b\sin x$$

Step 4: This solution is verified by substitution into the ODE:

$$y''(x) = \frac{d^2}{dx^2}\left(a\cos x + b\sin x\right) = -a\cos x - b\sin x = -y(x)$$
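The steps above can be checked with a computer algebra system; the sketch below (using SymPy, our choice of tool) confirms Step 2's transform by taking the forward Laplace transform of the candidate solution, and confirms Step 4 by substitution:

```python
import sympy as sp

x, s = sp.symbols('x s', positive=True)
a, b = sp.symbols('a b', real=True)

y = a * sp.cos(x) + b * sp.sin(x)                   # Step 3 result
Y = sp.laplace_transform(y, x, s, noconds=True)     # forward transform of y(x)
print(sp.simplify(Y - (s * a + b) / (s ** 2 + 1)))  # 0, matches Step 2
print(sp.simplify(sp.diff(y, x, 2) + y))            # 0, verifies Step 4
```

A matching transform in both directions is the practical test that the inversion in Step 3 was carried out correctly.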
EXERCISE 11.4: Use the Laplace transform to solve the ODE

$$y'' + (a+b)\,y' + ab\,y = 0$$

where the prime denotes differentiation with respect to x, and the boundary conditions are

$$y(0) = \alpha, \qquad y'(0) = \beta$$

The quantities a, b, $\alpha$, $\beta$ are constants.
CHAPTER 12

PARTIAL DIFFERENTIAL EQUATIONS

Solutions of ODEs depend on only one independent variable. Suppose a function $\Psi$ of two or more independent variables {x, y, ...} satisfies an equation of the form

$$F(x, y, \ldots, \Psi, \Psi_x, \Psi_y, \ldots, \Psi_{xx}, \Psi_{yy}, \ldots) = 0$$

where we are using the notation

$$\Psi_x = \frac{\partial \Psi}{\partial x}, \quad \Psi_y = \frac{\partial \Psi}{\partial y}, \ \ldots, \quad \Psi_{xx} = \frac{\partial^2 \Psi}{\partial x^2}, \quad \Psi_{xy} = \frac{\partial^2 \Psi}{\partial x\,\partial y}, \quad \Psi_{yy} = \frac{\partial^2 \Psi}{\partial y^2}, \ \ldots$$

to represent partial derivatives. We say the function $\Psi(x, y, \ldots)$ is a solution of a partial differential equation (PDE). The order of the PDE is the order of the highest derivative that appears in the equation F(...) = 0. A PDE is linear if (1) it is first order in the unknown functions and their derivatives, and (2) the coefficients of the derivatives are either constant or depend on the independent variables {x, y, ...}; that is, the PDE does not contain a product of derivatives. Several PDEs involving one or more unknown functions constitute a system of PDEs.

This chapter focuses on PDEs that are first or second order only. These PDEs occur frequently in practice. Before presenting a scheme for classifying different types of second-order PDEs, we first summarize the most common types of boundary conditions. Methods for solving PDEs are then described.

Math Refresher for Scientists and Engineers, Third Edition, by John R. Fanchi. Copyright 2006 John Wiley & Sons, Inc.
12.1 BOUNDARY CONDITIONS

Boundary conditions for commonly occurring second-order PDEs may be written in the form

$$\alpha(x,y)\,\psi(x,y) + \beta(x,y)\,\psi_n(x,y) = \gamma(x,y)$$

where $\psi(x,y)$ is the unknown function and its derivative normal to a boundary is

$$\psi_n = \frac{\partial \psi}{\partial n}$$

The functions $\alpha$, $\beta$, $\gamma$ are specified functions of x, y. A classification of boundary conditions is summarized below. The functions $\alpha$, $\beta$, $\gamma$, and $\psi(x,y)$ and all applicable derivatives are defined in a domain R bounded by a surface S.

Name                      Form                                                    Comment
Dirichlet (first kind)    beta = 0                                                psi specified at S
Neumann (second kind)     alpha = 0                                               psi_n specified at S
Cauchy                    Two equations: alpha = 0 in one, beta = 0 in the other  psi and psi_n specified at S
Robbins (third kind)      alpha and beta both nonzero

12.2 PDE CLASSIFICATION SCHEME
Consider the second-order PDE

$$A\psi_{xx} + 2B\psi_{xy} + C\psi_{yy} = G$$

The functions A, B, C, G are known functions of x, y and the first-order partial derivatives $\psi_x$, $\psi_y$ in a domain R bounded by a surface S. Many phenomena in the real world are modeled successfully using PDEs of this type. For example, see sources like Arfken and Weber [2001], John [1978], Kreyszig [1999], and Lomen and Mark [1988]. The mathematical properties and solutions of PDEs depend on their boundary conditions and the relationship between the functions A, B, C. Three examples are as follows:

PDE Type     Criterion      Example
Hyperbolic   B^2 > AC       psi_xx - psi_tt = 0   (wave equation)
Elliptic     B^2 < AC       psi_xx + psi_yy = 0   (Laplace equation)
Parabolic    B^2 = AC       psi_xx = psi_t        (heat equation)

The boundary conditions associated with the examples above are presented as follows:

PDE Type     Criterion      Boundary Condition
Hyperbolic   B^2 > AC       Cauchy
Elliptic     B^2 < AC       Dirichlet or Neumann
Parabolic    B^2 = AC       Dirichlet or Neumann
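The sign test on $B^2 - AC$ is simple enough to mechanize; a minimal sketch (the function name is ours):

```python
def classify_pde(A, B, C):
    """Classify A*psi_xx + 2B*psi_xy + C*psi_yy = G by the sign of B^2 - AC."""
    disc = B * B - A * C
    if disc > 0:
        return "hyperbolic"
    if disc < 0:
        return "elliptic"
    return "parabolic"

print(classify_pde(1, 0, -1))  # wave equation psi_xx - psi_tt = 0 -> hyperbolic
print(classify_pde(1, 0, 1))   # Laplace equation psi_xx + psi_yy = 0 -> elliptic
print(classify_pde(1, 0, 0))   # heat equation psi_xx = psi_t -> parabolic
```

When A, B, C depend on x and y, the same test classifies the PDE point by point, so a single equation can change type across its domain.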
12.3 ANALYTICAL SOLUTION TECHNIQUES
Separation of Variables

One of the most successful techniques for solving a PDE begins with the separation of variables to obtain a system of decoupled ODEs. We illustrate this method by solving the wave equation

$$\frac{\partial^2 \psi}{\partial x^2} - \frac{\partial^2 \psi}{\partial t^2} = 0$$

for the function $\psi(x,t)$. The wave equation is a hyperbolic PDE. We wish to solve it by using the separation of variables technique. This means that we let the solution $\psi(x,t)$ be a product of functions X(x) and T(t) such that

$$\psi(x,t) = X(x)\,T(t)$$

Substituting this form of $\psi(x,t)$ into the wave equation gives

$$\frac{\partial^2 X(x)T(t)}{\partial x^2} - \frac{\partial^2 X(x)T(t)}{\partial t^2} = 0$$

Factoring terms and rearranging lets us write

$$T(t)\frac{\partial^2 X(x)}{\partial x^2} = X(x)\frac{\partial^2 T(t)}{\partial t^2}$$

We now divide by X(x)T(t) to find

$$\frac{T(t)}{X(x)T(t)}\frac{\partial^2 X(x)}{\partial x^2} = \frac{X(x)}{X(x)T(t)}\frac{\partial^2 T(t)}{\partial t^2}$$

or

$$\frac{1}{X(x)}\frac{d^2 X(x)}{dx^2} = \frac{1}{T(t)}\frac{d^2 T(t)}{dt^2}$$

Notice that partial derivatives have been replaced by ordinary derivatives because X depends only on x and T depends only on t. Each side of the equation depends on arbitrary independent variables; hence they must equal a constant to assure equivalence. Calling the separation constant K, we have the following two equations:

$$\frac{1}{X(x)}\frac{d^2 X(x)}{dx^2} = K, \qquad \frac{1}{T(t)}\frac{d^2 T(t)}{dt^2} = K$$

These equations are rearranged to give

$$\frac{d^2 X(x)}{dx^2} = K\,X(x), \qquad \frac{d^2 T(t)}{dt^2} = K\,T(t)$$

which are two ODEs for the functions X(x) and T(t). Each equation depends on only one independent variable. If the ODEs can be solved, including the effects of boundary conditions, the product of the resulting solutions yields the solution $\psi(x,t)$ of the original PDE.

Example: A particular solution of Laplace's equation

$$\nabla^2\Phi = \left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}\right)\Phi(x,y,z) = 0$$

can be found using a separable solution of the form

$$\Phi(x,y,z) = X(x)\,Y(y)\,Z(z) \qquad (12.3.1)$$
Substituting Eq. (12.3.1) into $\nabla^2\Phi = 0$ and dividing by XYZ yields

$$\frac{1}{X}\frac{d^2X}{dx^2} + \frac{1}{Y}\frac{d^2Y}{dy^2} + \frac{1}{Z}\frac{d^2Z}{dz^2} = 0 \qquad (12.3.2)$$

The variables may now be separated to yield three ordinary differential equations:

$$\frac{d^2X}{dx^2} = -\alpha^2 X, \qquad \frac{d^2Y}{dy^2} = -\beta^2 Y, \qquad \frac{d^2Z}{dz^2} = \delta^2 Z \qquad (12.3.3)$$

The separation constants $\alpha$, $\beta$, $\delta$ must satisfy the relationship

$$\delta^2 = \alpha^2 + \beta^2 \qquad (12.3.4)$$
so that the ODEs in Eq. (12.3.3) satisfy Eq. (12.3.2).

The ODEs in Eq. (12.3.3) are eigenvalue equations, and the separation constants $-\alpha^2$, $-\beta^2$, $\delta^2$ are the eigenvalues of the differential operators $d^2/dx^2$, $d^2/dy^2$, $d^2/dz^2$, respectively. This is readily seen by writing the ODE for X(x) as the eigenvalue equation

$$L_x X = \lambda_x X \qquad (12.3.5)$$

where $L_x$ is the differential operator

$$L_x = \frac{d^2}{dx^2}$$

and the eigenvalue $\lambda_x$ is expressed in terms of a constant $\alpha$:

$$\lambda_x = -\alpha^2$$

The eigenvalues $\{\alpha, \beta, \delta\}$ are related by Eq. (12.3.4). The solutions of the ODEs in Eq. (12.3.3) can be written in terms of exponential functions as

$$X(x) = a_1 e^{i\alpha x} + a_2 e^{-i\alpha x}$$
$$Y(y) = b_1 e^{i\beta y} + b_2 e^{-i\beta y} \qquad (12.3.6)$$
$$Z(z) = c_1 e^{\delta z} + c_2 e^{-\delta z}$$

Alternatively, the solutions can be expressed in terms of trigonometric functions as

$$X(x) = a_1' \sin\alpha x + a_2' \cos\alpha x$$
$$Y(y) = b_1' \sin\beta y + b_2' \cos\beta y \qquad (12.3.7)$$
$$Z(z) = c_1' \sinh\delta z + c_2' \cosh\delta z$$
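It is worth verifying directly that a product of the factors in Eq. (12.3.7) satisfies Laplace's equation whenever the constants obey Eq. (12.3.4); a SymPy sketch (our illustration, with one particular choice of factors):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
alpha, beta = sp.symbols('alpha beta', positive=True)
delta = sp.sqrt(alpha ** 2 + beta ** 2)          # Eq. (12.3.4)

Phi = sp.sin(alpha * x) * sp.sin(beta * y) * sp.sinh(delta * z)
laplacian = sp.diff(Phi, x, 2) + sp.diff(Phi, y, 2) + sp.diff(Phi, z, 2)
print(sp.simplify(laplacian))  # 0
```

The x and y factors contribute $-\alpha^2$ and $-\beta^2$ while the z factor contributes $+\delta^2$, so the Laplacian vanishes exactly because of Eq. (12.3.4).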
A particular solution is found by forming the product in Eq. (12.3.1). The constants in Eq. (12.3.6) or (12.3.7) depend on the choice of boundary conditions.

EXERCISE 12.1: Assume the boundary conditions of Laplace's equation are

$$\Phi(0,y,z) = \Phi(x_B,y,z) = 0$$
$$\Phi(x,0,z) = \Phi(x,y_B,z) = 0$$
$$\Phi(x,y,0) = 0$$
$$\Phi(x,y,z_B) = \Phi_B(x,y)$$

in the intervals $0 \le x \le x_B$, $0 \le y \le y_B$, and $0 \le z \le z_B$. Find the solution of Laplace's equation using these boundary conditions and the trigonometric solutions in Eq. (12.3.7).

EXERCISE 12.2: Separation of Variables. Find the solution of

$$\frac{\partial^2 \psi}{\partial x^2} = \frac{\partial \psi}{\partial t}$$
by assuming

$$\psi(x,t) = X(x)\,T(t)$$

Variable Transformation

The solution of a differential equation often depends on insight into the type of problem being solved. For example, if the differential equation is a model of a wavelike phenomenon in one space dimension, then a trial solution of the form $f(x - vt)$ is a good starting point, where v is the velocity of the traveling wave. If we set $z = x - vt$, it may be possible to reduce the governing PDE to an ordinary differential equation. This variable transformation technique is illustrated here for a convection-diffusion (CD) equation of the form

$$D\frac{\partial^2 u}{\partial x^2} - v\frac{\partial u}{\partial x} = \frac{\partial u}{\partial t} \qquad (12.3.8)$$
The terms D and v are often identified as diffusion and velocity, and the unknown function u(x, t) of the two independent variables x, t is concentration. At this point, we have not restricted the functional form of D and v; that is, D and v can be functions of any of the variables {x, t, u}. This is important to note because the variable transformation technique used here can generate analytical solutions of a select class of nonlinear PDEs, as we now show.

Let us make the variable transformation u(x, t) = g(z), where the new variable $z = kx - \omega t$ represents a traveling wave with wavenumber k and frequency $\omega$. The partial derivatives in Eq. (12.3.8) become

$$\frac{\partial g}{\partial t} = \frac{\partial z}{\partial t}\frac{\partial g}{\partial z} = -\omega\frac{\partial g}{\partial z}, \qquad \frac{\partial g}{\partial x} = \frac{\partial z}{\partial x}\frac{\partial g}{\partial z} = k\frac{\partial g}{\partial z}$$

$$\frac{\partial^2 g}{\partial x^2} = \frac{\partial}{\partial x}\left(\frac{\partial g}{\partial x}\right) = k\frac{\partial}{\partial x}\left(\frac{\partial g}{\partial z}\right) = k^2\frac{\partial^2 g}{\partial z^2}$$

Substituting these derivatives into Eq. (12.3.8) gives

$$Dk^2\frac{\partial^2 g}{\partial z^2} - vk\frac{\partial g}{\partial z} = -\omega\frac{\partial g}{\partial z} \qquad (12.3.9)$$
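A quick symbolic check of this reduction (our illustration): substituting a concrete test profile $g(z) = e^{\lambda z}$ into the CD equation collapses it to an expression whose coefficients are exactly those of Eq. (12.3.9). The symbol $\lambda$ is our hypothetical parameter, not part of the text's derivation:

```python
import sympy as sp

x, t = sp.symbols('x t')
k, w, D, v, lam = sp.symbols('k omega D v lam')

u = sp.exp(lam * (k * x - w * t))        # g(z) = exp(lam*z) with z = k*x - omega*t
pde = D * sp.diff(u, x, 2) - v * sp.diff(u, x) - sp.diff(u, t)
reduced = sp.simplify(pde / u)           # coefficients of the reduced ODE in z
print(sp.simplify(reduced - (D * k**2 * lam**2 - v * k * lam + w * lam)))  # 0
```

The surviving expression $Dk^2\lambda^2 - vk\lambda + \omega\lambda$ is what Eq. (12.3.9) gives when $g'' \to \lambda^2 g$ and $g' \to \lambda g$.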
Let us now require that D and v depend only on z and g so that we can write Eq. (12.3.9) as the ODE

$$\frac{d^2 g}{dz^2} - \frac{v}{Dk}\frac{dg}{dz} + \frac{\omega}{Dk^2}\frac{dg}{dz} = 0 \qquad (12.3.10)$$
Equation (12.3.10) can be reduced to a quadrature if the two terms with first-order derivatives can be written as total differentials. We therefore restrict the form of D and v so that D is a constant and v has the functional form $v = a + bg^n$, so that Eq. (12.3.10) becomes

$$\frac{d}{dz}\left[\frac{dg}{dz} - \left(\frac{b\,g^{n+1}}{n+1} + a\,g\right)\frac{1}{Dk} + \frac{\omega g}{Dk^2}\right] = 0 \qquad (12.3.11)$$

Equation (12.3.11) is satisfied if the bracketed term [...] is a constant c; thus

$$\frac{dg}{dz} - \left(\frac{b\,g^{n+1}}{n+1} + a\,g\right)\frac{1}{Dk} + \frac{\omega g}{Dk^2} = c \qquad (12.3.12)$$
We now have a first-order, nonlinear ODE that, upon rearranging, yields

$$dz = \frac{dg}{c + \dfrac{a\,g}{Dk} - \dfrac{\omega g}{Dk^2} + \dfrac{b\,g^{n+1}}{Dk(n+1)}}$$

A quadrature is readily derived from this equation by integrating both sides:

$$z = \int \frac{dg}{c + \dfrac{a\,g}{Dk} - \dfrac{\omega g}{Dk^2} + \dfrac{b\,g^{n+1}}{Dk(n+1)}} \qquad (12.3.13)$$
The final solution of our original CD equation is obtained by solving the integral and expressing g(z) as a function $u(kx - \omega t)$.

EXERCISE 12.3: Find $u(kx - \omega t)$ using Eq. (12.3.13) and assuming n = 1. The nonlinear equation in this case is known as Burgers' equation.

Integral Transforms

In our discussion of ODE solution techniques, we showed how the Laplace integral could be used to solve an ODE. The Laplace integral is one type of integral transform. Another integral transform is the Fourier integral. Both the Laplace and Fourier integrals can be used to solve some types of PDEs. We demonstrate this technique here using the Fourier integral. Suppose we want to solve a PDE of the form

$$i\frac{\partial \psi(x,t,s)}{\partial s} = \left(\frac{\partial^2}{\partial t^2} - \frac{\partial^2}{\partial x^2}\right)\psi(x,t,s) \qquad (12.3.14)$$
174
MATH REFRESHER FOR SCIENTISTS AND ENGINEERS
The variables {x, t, s} correspond to one space coordinate, Einstein’s time coordinate, and Newton’s time parameter, respectively. Equation (12.3.14) can be solved by trying a solution of the form ððð cðx, t, sÞ ¼ hðk, v, kÞeiks eivt eikx dk dv dk where the expansion coefficients h(k, v, k) are determined by as yet unspecified normalization and boundary conditions. The signs of the arguments of the exponential terms were chosen to satisfy relativistic symmetry requirements (Lorentz covariance) so that the solution would have the form of a traveling wave, namely, ððð (12:3:15) cðx, t, sÞ ¼ hðk, v, kÞeiksþiðvtkxÞ dk dv dk Substituting Eq. (12.3.15) into Eq. (12.3.14) gives ððð
@ hðk, v, kÞi eiksþiðvtkxÞ dk dv dk ¼ @s
ððð
2 @ @2 hðk, v, kÞ 2 2 @t @x
eiksþiðvtkxÞ dk dv dk Calculating the derivatives gives ððð
hðk, v, kÞðkÞeiksþiðvtkxÞ dk dv dk ¼
ððð
hðk, v, kÞ v2 k 2
eiksþiðvtkxÞ dk dv dk Combining like terms in the preceding equation lets us write ððð
hðk, v, kÞ k v2 þ k 2 eiksþiðvtkxÞ dk dv dk ¼ 0 which is true for all values of k, v, k if k ¼ v2 k 2 : Thus the Fourier integral representation of the solution c leads to a solution of Eq. (12.3.14) with the associated dispersion relation k ¼ v2 k 2 : If we now specify normalization and boundary conditions, we can, at least in principle, find explicit expressions for h, v, k:
EXERCISE 12.4: Find the dispersion relation associated with the Klein-Gordon equation

$$-m^2\,\phi(x,t) = \left(\frac{\partial^2}{\partial t^2} - \frac{\partial^2}{\partial x^2}\right)\phi(x,t)$$

Hint: Use a solution of the form $\phi(x,t) = \iint h(k,\omega)\,e^{i(\omega t - kx)}\,dk\,d\omega$.
12.4 NUMERICAL SOLUTION METHODS
Most PDEs, like most ODEs of practical interest, do not lend themselves readily to solution in closed form. It is often necessary to use numerical methods to make progress in constructing a solution of a PDE. Two popular methods of solving PDEs are based on the finite difference method and the finite element method. These methods are discussed here.

Finite Difference Method

We previously showed that numerical solutions of equations with derivatives can be achieved by using a Taylor series to approximate a derivative with a finite difference. The replacement of derivatives with differences can be done in one dimension or in N dimensions. As an illustration, let us consider the one-dimensional CD equation introduced previously:

$$D\frac{\partial^2 u}{\partial x^2} - v\frac{\partial u}{\partial x} = \frac{\partial u}{\partial t} \qquad (12.4.1)$$
We assume here that D and v are real, scalar constants. The diffusion term is $D\,\partial^2 u/\partial x^2$ and the convection term is $v\,\partial u/\partial x$. When the diffusion term is much larger than the convection term, the CD equation behaves like the heat conduction equation, which is a parabolic PDE. If the diffusion term is much smaller than the convection term, the CD equation behaves like a first-order hyperbolic PDE. The CD equation is especially valuable for studying the implementation of numerical methods for two reasons: (1) the CD equation can be solved analytically and (2) the CD equation may be used to examine two important classes of PDEs (parabolic and hyperbolic). The analytic solution of the CD equation sets the standard for comparing the validity of a numerical method.

To solve the CD equation, we must specify two boundary conditions and an initial condition. We impose the boundary conditions $u(0,t) = 1$, $u(x \to \infty, t) = 0$ for all times t greater than 0, and the initial condition $u(x,0) = 0$ for all values of x greater than 0. The corresponding solution is [Peaceman, 1977]

$$u(x,t) = \frac{1}{2}\left[\operatorname{erfc}\!\left(\frac{x - vt}{2\sqrt{Dt}}\right) + e^{vx/D}\operatorname{erfc}\!\left(\frac{x + vt}{2\sqrt{Dt}}\right)\right]$$

where the complementary error function erfc(y) is

$$\operatorname{erfc}(y) = 1 - \frac{2}{\sqrt{\pi}}\int_0^y e^{-z^2}\,dz$$

Abramowitz and Stegun [1972] have presented an accurate numerical algorithm for calculating erfc(y). We now wish to compare the analytic solution of the CD equation with a finite difference representation of the CD equation. If we replace the time and space
derivatives with forward and centered differences, respectively, we obtain a discretized form of the CD equation:

$$D\frac{\Delta t}{2\,\Delta x^2}\left(u_{i+1}^{n+1} - 2u_i^{n+1} + u_{i-1}^{n+1} + u_{i+1}^{n} - 2u_i^{n} + u_{i-1}^{n}\right) - v\frac{\Delta t}{\Delta x}\cdot\frac{1}{2}\left(u_{i+1}^{n+1} - u_{i-1}^{n+1}\right) = u_i^{n+1} - u_i^{n}$$

where {i, n} denote the discrete levels in the space and time domains, and $\{\Delta x, \Delta t\}$ are step sizes in the respective space and time domains. The finite difference representation of the CD equation leads to a system of linear equations of the form

$$-\left(\frac{v\Delta t}{2\Delta x} + \frac{D\Delta t}{2\Delta x^2}\right)u_{i-1}^{n+1} + \left(1 + \frac{D\Delta t}{\Delta x^2}\right)u_i^{n+1} + \left(\frac{v\Delta t}{2\Delta x} - \frac{D\Delta t}{2\Delta x^2}\right)u_{i+1}^{n+1} = u_i^n + \frac{D\Delta t}{2\Delta x^2}\left(u_{i-1}^n - 2u_i^n + u_{i+1}^n\right)$$

The finite difference equations may be written in matrix form as $A\mathbf{u} = \mathbf{B}$, where $\mathbf{u}$ is the column vector of unknown values $\{u_i^{n+1}\}$, A is a square matrix of coefficients from the left-hand side of the equation, and $\mathbf{B}$ is a column vector of terms containing known values $\{u_i^n\}$ from the right-hand side of the equation.

Various methods exist for solving systems of linear equations, but generally these methods fall into one of two groups: (1) direct methods and (2) iterative methods. Direct methods are techniques for solving the system of linear equations in a fixed number of steps. They are capable of solving a wide range of equations, but tend to require large amounts of computer storage. Iterative methods are less robust, but require less storage and are valuable for solving large, sparse systems of equations.

Now that we have both an analytical and a numerical solution of the CD equation, we can make a direct comparison of the methods to demonstrate the validity of the numerical methods. The hyperbolic PDE comparison is shown in Figure 12.1, and the parabolic PDE comparison is shown in Figure 12.2. These figures show that the finite difference technique does a reasonably good job of reproducing the actual analytical solution. In this case, the parabolic solution is more accurately matched using finite difference methods than is the hyperbolic solution. A more detailed discussion of these methods is beyond the scope of our review. The reader should consult the references for further information, for example, Carnahan et al. [1969], Lapidus and Pinder [1982], Press et al. [1992], and DeVries [1994].
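A comparison of this kind can be reproduced in a few lines. The sketch below is our illustration (grid sizes and parameter values are our choices, and for brevity it treats both terms fully implicitly rather than time-averaging the diffusion term as the scheme above does); it solves the discretized CD equation with NumPy's dense solver and checks the result against the Peaceman solution:

```python
import numpy as np
from scipy.special import erfc

def analytic(x, t, D, v):
    """Peaceman's analytic solution of the CD equation on a semi-infinite domain."""
    s = 2.0 * np.sqrt(D * t)
    return 0.5 * (erfc((x - v * t) / s) + np.exp(v * x / D) * erfc((x + v * t) / s))

def fd_solve(D, v, dx, dt, nx, nsteps):
    """Fully implicit finite differences: backward in time, centered in space."""
    a = D * dt / dx ** 2          # diffusion number
    b = v * dt / (2.0 * dx)       # convection number
    A = np.zeros((nx, nx))
    A[0, 0] = A[-1, -1] = 1.0     # boundary rows: u[0] = 1, u[-1] = 0
    for i in range(1, nx - 1):
        A[i, i - 1] = -(a + b)
        A[i, i] = 1.0 + 2.0 * a
        A[i, i + 1] = -(a - b)
    u = np.zeros(nx)
    u[0] = 1.0
    for _ in range(nsteps):
        rhs = u.copy()
        rhs[0], rhs[-1] = 1.0, 0.0
        u = np.linalg.solve(A, rhs)
    return u

D, v, dx, dt, nx, nsteps = 0.01, 1.0, 0.02, 0.002, 101, 250
x = dx * np.arange(nx)
u = fd_solve(D, v, dx, dt, nx, nsteps)
err = np.max(np.abs(u - analytic(x, nsteps * dt, D, v)))
print(err)  # modest error: the implicit time step adds a little numerical diffusion
```

In production code the tridiagonal matrix would be solved with a banded or sparse solver rather than a dense one; the dense solve is used here only to keep the sketch short.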
An example of an industrial finite difference simulator is available in Fanchi [1997].

Figure 12.1 Comparison of hyperbolic PDE solutions.

Figure 12.2 Comparison of parabolic PDE solutions.

Finite Element Method

Our intent here is to introduce the finite element method by using it to solve a PDE of the form $Lu(\cdot) = f$ for the unknown function $u(\cdot) = u(x,y,z,t)$ given a differential operator L and a function f that is independent of $u(\cdot)$. The finite element method assumes the unknown function $u(\cdot)$ can be approximated by a finite series

$$u(\cdot) \approx \hat{u}(\cdot) = \sum_{j=1}^{N} U_j\,\phi_j(\cdot)$$

where $\{\phi_j(\cdot)\}$ is a set of basis functions, $\{U_j\}$ are expansion coefficients, and N is the number of terms in the series. The original PDE

$$Lu(\cdot) - f = 0$$
is replaced by the finite element approximation

$$L\hat{u}(\cdot) - f = R(\cdot)$$

where R is the residual arising from replacing $u(\cdot)$ with $\hat{u}(\cdot)$. In terms of expansion coefficients, the residual is

$$R(\cdot) = L\hat{u}(\cdot) - f = L\left[\sum_{j=1}^{N} U_j\,\phi_j(\cdot)\right] - f = \sum_{j=1}^{N} U_j\,L\phi_j(\cdot) - f$$
Our objective is to find the set of expansion coefficients $\{U_j\}$ that minimizes the residual R in the region of interest V consisting of the union of the nonoverlapping subdomains $\{V_j\,\tau_j\}$ for a given set of basis functions $\{\phi_j(\cdot)\}$. The subdomain $V_j\,\tau_j$ is the product of a spatial volume and time in physical applications. To achieve our objective, we invoke the method of weighted residuals.

Method of Weighted Residuals

In this method, N equations are formed by integrating the residual R times a weighting function $w_i$ over the domain of interest V; thus

$$\int_t\int_V R(\cdot)\,w_i(\cdot)\,dV\,dt = 0; \qquad i = 1,\ldots,N$$

where $\{w_i\}$ are N specified weighting functions. Inserting the expression for R in terms of expansion coefficients gives the set of integrals

$$\int_t\int_V \left\{\sum_{j=1}^{N} U_j\,L\phi_j(\cdot) - f\right\} w_i(\cdot)\,dV\,dt = 0; \qquad i = 1,\ldots,N$$
The effectiveness of the method depends to a large extent on selecting an effective set of weighting functions. Some examples are tabulated as follows:

FINITE ELEMENT WEIGHTING FUNCTIONS

Galerkin method:      w_i(.) = phi_i(.)
Subdomain method:     w_i(.) = 1 if (x, y, z, t) in V_i tau_i; w_i(.) = 0 otherwise
Collocation method:   w_i(.) = delta(x - x_i)
Example: The best way to breathe life into these ideas is to illustrate their use. Suppose we wish to solve the simple differential equation

$$-K\frac{dp}{dx} = Q$$
for the unknown function p(x) given constants K, Q. The differential operator $-K\,d/dx$ corresponds to L and Q corresponds to the function f. According to the finite element method, we approximate p(x) with the same finite series

$$p(x) \approx \hat{p}(x) = \sum_{j=1}^{N} u_j\,\phi_j(x)$$

where $\{\phi_j(x)\}$ are basis functions that we specify, and $\{u_j\}$ are the expansion coefficients that we need to compute. The residual is given by $R(\cdot) = L\hat{u}(\cdot) - f$, or

$$R(x) = -K\frac{d\hat{p}}{dx} - Q = -K\sum_{j=1}^{N} u_j\frac{d\phi_j}{dx} - Q$$
Our task is to find the expansion coefficients $\{u_j\}$ that minimize the residual in the region of interest V = {x : 0 <= x <= 1} using the method of weighted residuals.

Illustrative Basis Functions

Some typical basis functions are the straight-line segments

$$\phi_1(x) = \begin{cases} -2x + 1, & 0 \le x \le 0.5 \\ 0, & 0.5 \le x \le 1.0 \end{cases}$$

$$\phi_2(x) = \begin{cases} 2x, & 0 \le x \le 0.5 \\ -2x + 2, & 0.5 \le x \le 1.0 \end{cases}$$

$$\phi_3(x) = \begin{cases} 0, & 0 \le x \le 0.5 \\ 2x - 1, & 0.5 \le x \le 1.0 \end{cases}$$

in the domain of interest for a finite series with N = 3 basis functions. These functions are sketched in Figure 12.3. The function $\phi_1(x)$ provides a nonzero contribution for the interval between nodes 1 and 2. Function $\phi_3(x)$ serves a similar role in the interval between nodes 2 and 3.

Applying the Method of Weighted Residuals

To apply the method of weighted residuals, we must solve the set of N equations

$$\int_V R(x)\,w_i(x)\,dx = 0; \qquad i = 1,\ldots,N$$
for the expansion coefficients $\{u_j\}$. For our illustration, we have

$$\int_0^1 \left\{K\sum_{j=1}^{N} u_j\frac{d\phi_j}{dx} + Q\right\} w_i(x)\,dx = 0$$
Figure 12.3 Finite element functions.
where $\{w_i\}$ is the set of specified weighting functions. Several choices of weighting functions are possible. Two of the most common are the Galerkin method and the collocation method. The weighting functions for each method are the following:

Galerkin method:      w_i(x) = phi_i(x)   (basis function)
Collocation method:   w_i(x) = delta(x - x_i)   (delta function)
Actual solution of the problem from this point would require performing the integrals and then solving for the expansion coefficients. This is a tedious process best suited for computers. For further discussion of the finite element method, consult such references as Lapidus and Pinder [1982] or Press et al. [1992].
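Carrying out those integrals really is best left to a computer. The sketch below is our illustration (numerical quadrature stands in for the exact integrals, and the values of K, Q, and the boundary condition are our choices): it assembles the Galerkin system for $-K\,dp/dx = Q$ with the three straight-line basis functions, imposes $u_1 = p(0)$ in place of the (singular) first equation, and solves:

```python
import numpy as np

def phi(j, x):
    """Straight-line basis functions on [0, 1] with nodes at 0, 0.5, 1."""
    if j == 0:
        return np.where(x <= 0.5, 1.0 - 2.0 * x, 0.0)
    if j == 1:
        return np.where(x <= 0.5, 2.0 * x, 2.0 - 2.0 * x)
    return np.where(x <= 0.5, 0.0, 2.0 * x - 1.0)

def dphi(j, x):
    """Derivatives d(phi_j)/dx (piecewise constant)."""
    if j == 0:
        return np.where(x <= 0.5, -2.0, 0.0)
    if j == 1:
        return np.where(x <= 0.5, 2.0, -2.0)
    return np.where(x <= 0.5, 0.0, 2.0)

K, Q, p_left = 1.0, 1.0, 1.0       # -K dp/dx = Q with p(0) = 1 -> p(x) = 1 - x
n = 20000                          # midpoint-rule quadrature points
xm = (np.arange(n) + 0.5) / n
wq = 1.0 / n

# Galerkin: w_i = phi_i, so sum_j u_j Int(-K phi_j' phi_i) dx = Q Int(phi_i) dx
A = np.array([[np.sum(-K * dphi(j, xm) * phi(i, xm)) * wq for j in range(3)]
              for i in range(3)])
b = np.array([Q * np.sum(phi(i, xm)) * wq for i in range(3)])

A[0, :] = [1.0, 0.0, 0.0]          # replace one equation with u_1 = p(0)
b[0] = p_left
u = np.linalg.solve(A, b)
print(u)  # close to [1.0, 0.5, 0.0], the nodal values of p(x) = 1 - x
```

A first-order operator needs one boundary condition, which is why one weighted-residual equation must be traded for the constraint at x = 0.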
CHAPTER 13
INTEGRAL EQUATIONS
Equations in which the unknown function is a factor in the integrand and may be a term outside the integral are called integral equations. They arise in such disciplines as nuclear reactor engineering; the study of fluid flow in fractured, porous media; and the description of particle scattering. We present below a classification scheme for integral equations and three solution techniques. Arfken and Weber [2001], Corduneanu [1991], Collins [1999], and Chow [2000] present additional discussion of integral equations.

13.1 CLASSIFICATION

Integral equations are classified according to their integration limits and whether or not the unknown function appears outside the integrand. A summary of classification terms is as follows:

INTEGRAL EQUATION TERMINOLOGY

Fredholm equation:                       Both integration limits are fixed
Volterra equation:                       One limit of integration is variable
Integral equation of the first kind:     Unknown function appears only as part of the integrand
Integral equation of the second kind:    Unknown function appears inside and outside the integral
Example: Suppose the kernel K(x, t) and the function f(x) are known, and $\phi(t)$ is the unknown function. Examples of integral equations are as follows:

EXAMPLES OF INTEGRAL EQUATIONS

Fredholm equation of the first kind:    $f(x) = \int_a^b K(x,t)\,\phi(t)\,dt$
Fredholm equation of the second kind:   $\phi(x) = f(x) + \lambda\int_a^b K(x,t)\,\phi(t)\,dt$
Volterra equation of the first kind:    $f(x) = \int_a^x K(x,t)\,\phi(t)\,dt$
Volterra equation of the second kind:   $\phi(x) = f(x) + \lambda\int_a^x K(x,t)\,\phi(t)\,dt$
Note: If f (x) ¼ 0, the integral equation is homogeneous.
13.2 INTEGRAL EQUATION REPRESENTATION OF A SECOND-ORDER ODE

An important class of integral equations is obtained by transforming the second-order differential equation

$$y'' + A(x)\,y' + B(x)\,y = g(x) \qquad (13.2.1)$$

for the unknown function y(x) into an integral equation. The unknown function satisfies the initial conditions

$$y(a) = y_0, \qquad y'(a) = y_0' \qquad (13.2.2)$$

at the point x = a. We first integrate Eq. (13.2.1) to find

$$y' = -\int_a^x A(x)\,y'\,dx - \int_a^x B(x)\,y\,dx + \int_a^x g(x)\,dx + y_0' \qquad (13.2.3)$$

where $y_0'$ is an integration constant and we have used the relation

$$\int_a^x \frac{d^2y}{dx^2}\,dx = \int_a^x \frac{d}{dx}\left(\frac{dy}{dx}\right)dx = \int_a^x d\left(\frac{dy}{dx}\right) = y'(x) - y'(a) = y'(x) - y_0'$$

Now integrate the first integral of Eq. (13.2.3) by parts:

$$\int_a^x u\,dv = uv\Big|_a^x - \int_a^x v\,du$$

where

$$u = A(x), \qquad dv = y'\,dx = dy$$
The result is

$$\int_a^x A(x)\,y'\,dx = A(x)\,y(x)\Big|_a^x - \int_a^x y\,dA = A(x)\,y(x) - A(a)\,y(a) - \int_a^x y\,\frac{dA}{dx}\,dx \qquad (13.2.4)$$

Writing $A' = dA/dx$ and using Eq. (13.2.4) in Eq. (13.2.3) yields

$$y' = -A\,y - \int_a^x \left[B - A'\right]y\,dx + \int_a^x g(x)\,dx + A(a)\,y(a) + y_0' \qquad (13.2.5)$$

where $y(a) = y_0$. Integrating both sides of Eq. (13.2.5) lets us write

$$\int_a^x y'\,dx = -\int_a^x A\,y\,dx - \int_a^x\left\{\int_a^x \left[B(t) - A'(t)\right]y(t)\,dt\right\}dx + \int_a^x\left\{\int_a^x g(t)\,dt\right\}dx + \int_a^x A(a)\,y(a)\,dx + \int_a^x y_0'\,dx \qquad (13.2.6)$$

where t is a dummy index. The last integral on the right-hand side of Eq. (13.2.6) is

$$\int_a^x y_0'\,dx = y_0'\int_a^x dx = y_0'\,(x - a)$$

The integral on the left-hand side of Eq. (13.2.6) is evaluated using the initial condition $y(a) = y_0$ to yield

$$\int_a^x y'\,dx = y\Big|_a^x = y(x) - y(a) = y(x) - y_0$$

Substituting the above integrals into Eq. (13.2.6) gives

$$y(x) = -\int_a^x A\,y\,dx - \int_a^x\left\{\int_a^x \left[B(t) - A'(t)\right]y(t)\,dt\right\}dx + \int_a^x\left\{\int_a^x g(t)\,dt\right\}dx + \left[A(a)\,y_0 + y_0'\right](x - a) + y_0 \qquad (13.2.7)$$

The two integrals with curly brackets {...} are simplified using the relation [Arfken and Weber, 2001, p. 986]

$$\int_a^x\left\{\int_a^x f(t)\,dt\right\}dx = \int_a^x f(t)\left\{\int_t^x dx\right\}dt = \int_a^x (x - t)\,f(t)\,dt \qquad (13.2.8)$$

Equation (13.2.8) can be verified by differentiating both sides with respect to x. The equality of the resulting derivatives implies that the original expressions in Eq. (13.2.8) can only differ by a constant. Letting the limit $x \to a$, so that $\int_a^x \to \int_a^a$, implies the equality 0 = 0 holds and the constant must equal 0.

Substituting Eq. (13.2.8) into Eq. (13.2.7) gives

$$y(x) = -\int_a^x A\,y\,dx - \int_a^x (x - t)\left[B(t) - A'(t)\right]y(t)\,dt + \int_a^x (x - t)\,g(t)\,dt + \left[A(a)\,y_0 + y_0'\right](x - a) + y_0$$

Collecting terms and simplifying lets us write

$$y(x) = -\int_a^x \left\{A(t)\,y(t) + (x - t)\left[B(t) - A'(t)\right]y(t)\right\}dt + \int_a^x (x - t)\,g(t)\,dt + \left[A(a)\,y_0 + y_0'\right](x - a) + y_0$$

If we define the kernel K(x, t) as

$$K(x,t) = (t - x)\left[B(t) - A'(t)\right] - A(t) \qquad (13.2.9)$$

and the function

$$f(x) = \int_a^x (x - t)\,g(t)\,dt + \left[A(a)\,y_0 + y_0'\right](x - a) + y_0 \qquad (13.2.10)$$

then we can write the formal solution to the second-order ODE (13.2.1) in the form

$$y(x) = f(x) + \int_a^x K(x,t)\,y(t)\,dt \qquad (13.2.11)$$
Equation (13.2.11) is a Volterra equation of the second kind.

Application: Linear Oscillator. The linear oscillator equation $y'' + \omega^2 y = 0$ has many real-world applications and can be solved as an ordinary differential equation. It can also be represented as an integral equation by comparing $y'' + \omega^2 y = 0$ to Eq. (13.2.1) and recognizing that

$$A(x) = 0, \qquad B(x) = \omega^2, \qquad g(x) = 0 \qquad (13.2.12)$$

Substituting Eq. (13.2.12) into Eqs. (13.2.9) and (13.2.10) gives the kernel

$$K(x,t) = (t - x)\left[B(t) - A'(t)\right] - A(t) = (t - x)\,\omega^2 \qquad (13.2.13)$$

and the function

$$f(x) = y_0'\,(x - a) + y_0 \qquad (13.2.14)$$

The formal solution of the linear oscillator equation $y'' + \omega^2 y = 0$ is

$$y(x) = f(x) + \int_a^x K(x,t)\,y(t)\,dt = y_0'\,(x - a) + y_0 + \omega^2\int_a^x (t - x)\,y(t)\,dt \qquad (13.2.15)$$

where the initial conditions at the point x = a are $y(a) = y_0$, $y'(a) = y_0'$.

EXERCISE 13.1: Write the linear oscillator equation $y'' + \omega^2 y = 0$ with initial conditions y(0) = 0, y'(0) = 1 in the form of an integral equation.

EXERCISE 13.2: Solve the linear oscillator equation $y'' + \omega^2 y = 0$ with boundary conditions y(0) = 0, y(b) = 0. Hint: Modify the above procedure to account for the use of boundary conditions y(0) = 0, y(b) = 0 [see Arfken and Weber, 2001, pp. 987-989].

13.3 SOLVING INTEGRAL EQUATIONS: NEUMANN SERIES METHOD

We would like to find a solution of an integral equation such as the inhomogeneous Fredholm equation

$$\phi(x) = f(x) + \lambda\int_a^b K(x,t)\,\phi(t)\,dt \qquad (13.3.1)$$
where $f(x) \ne 0$. Assume the integral in Eq. (13.3.1) can be treated as a perturbation of f(x) so that we can try the solution

$$\phi(x) \approx \phi_0(x) = f(x) \qquad (13.3.2)$$

as a zero-order approximation. The first-order approximation of the solution $\phi(x)$ is obtained by substituting Eq. (13.3.2) into Eq. (13.3.1); thus

$$\phi_1(x) = f(x) + \lambda\int_a^b K(x,t)\,f(t)\,dt \qquad (13.3.3)$$

We now proceed iteratively and use $\phi(x) \approx \phi_1(x)$ in Eq. (13.3.1) to obtain the second-order approximation

$$\phi_2(x) = f(x) + \lambda\int_a^b K(x,t_1)\,f(t_1)\,dt_1 + \lambda^2\int_a^b\!\!\int_a^b K(x,t_1)\,K(t_1,t_2)\,f(t_2)\,dt_1\,dt_2 \qquad (13.3.4)$$
Rearranging Eq. (13.3.4) and combining terms gives

$$\phi_2(x) = f(x) + \lambda\int_a^b K(x,t_1)\left[f(t_1) + \lambda\int_a^b K(t_1,t_2)\,f(t_2)\,dt_2\right]dt_1 \qquad (13.3.5)$$
For the nth-order approximation we have

$$\phi_n(x) = \sum_{i=0}^{n} \lambda^i u_i(x) \qquad (13.3.6)$$

where

$$u_0(x) = f(x)$$
$$u_1(x) = \int_a^b K(x,t_1)\,f(t_1)\,dt_1$$
$$u_2(x) = \int_a^b\!\!\int_a^b K(x,t_1)\,K(t_1,t_2)\,f(t_2)\,dt_2\,dt_1 \qquad (13.3.7)$$
$$\vdots$$
$$u_n(x) = \int_a^b \cdots \int_a^b K(x,t_1)\,K(t_1,t_2)\cdots K(t_{n-1},t_n)\,f(t_n)\,dt_n\cdots dt_1$$
The solution of Eq. (13.3.1) is

$$\phi(x) = \lim_{n\to\infty}\phi_n(x) = \lim_{n\to\infty}\sum_{i=0}^{n}\lambda^i u_i(x) \qquad (13.3.8)$$
if the series in Eq. (13.3.8) converges. Equation (13.3.8) is the Neumann series solution of the inhomogeneous Fredholm equation given by Eq. (13.3.1).

An analogous procedure can be used to find a solution of the inhomogeneous Volterra equation

$$\phi(x) = f(x) + \lambda\int_a^x K(x,t)\,\phi(t)\,dt \qquad (13.3.9)$$
The zero-order approximation is

\phi(x) \approx \phi_0(x) = f(x)   (13.3.10)

Substituting Eq. (13.3.10) into Eq. (13.3.9) gives the first-order approximation

\phi_1(x) = f(x) + \lambda \int_a^x K(x, t)\, f(t)\, dt   (13.3.11)
For the second-order approximation, let \phi(x) \approx \phi_1(x) to find

\phi_2(x) = f(x) + \lambda \int_a^x K(x, t_1)\, \phi(t_1)\, dt_1
         = f(x) + \lambda \int_a^x K(x, t_1) \left[ f(t_1) + \lambda \int_a^{t_1} K(t_1, t_2)\, f(t_2)\, dt_2 \right] dt_1   (13.3.12)

where \phi(t_1) \approx \phi_1(t_1). Expanding the bracketed term gives

\phi_2(x) = f(x) + \lambda \int_a^x K(x, t_1)\, f(t_1)\, dt_1 + \lambda^2 \int_a^x \int_a^{t_1} K(x, t_1)\, K(t_1, t_2)\, f(t_2)\, dt_2\, dt_1   (13.3.13)
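The Neumann iteration is easy to carry out numerically. The sketch below (an illustration, not from the text; the test kernel, forcing function, grid size, and iteration count are all arbitrary choices) solves the Volterra equation of Eq. (13.3.9) by repeated substitution on a grid:

```python
import math

def neumann_volterra(f, K, lam, a, b, n=201, iters=20):
    """Solve phi(x) = f(x) + lam * int_a^x K(x,t) phi(t) dt by repeated substitution."""
    h = (b - a) / (n - 1)
    x = [a + i * h for i in range(n)]
    phi = [f(xi) for xi in x]                  # zero-order approximation, Eq. (13.3.10)
    for _ in range(iters):
        new = []
        for i, xi in enumerate(x):
            s = 0.0                            # trapezoid rule for the integral a..x_i
            for j in range(i):
                s += 0.5 * h * (K(xi, x[j]) * phi[j] + K(xi, x[j + 1]) * phi[j + 1])
            new.append(f(xi) + lam * s)
        phi = new
    return x, phi

# Sanity check: K = 1, f = 1, lam = 1 gives phi(x) = 1 + int_0^x phi dt, i.e. exp(x).
x, phi = neumann_volterra(lambda x: 1.0, lambda x, t: 1.0, lam=1.0, a=0.0, b=1.0)
err = max(abs(p - math.exp(xi)) for xi, p in zip(x, phi))
print(err)   # small: limited by the trapezoid discretization
```

Each pass of the loop adds one more term of the Neumann series, so the iterate converges whenever the series in Eq. (13.3.8) does.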
Higher-order approximations may be obtained iteratively by repeating the procedure outlined above.

13.4 SOLVING INTEGRAL EQUATIONS WITH SEPARABLE KERNELS

Integral equations with separable kernels may be solved using the following procedure. A kernel K(x, t) is separable if it can be written in the form

K(x, t) = \sum_{j=1}^{n} M_j(x)\, N_j(t)   (13.4.1)
where n is finite, and M_j(x), N_j(t) are polynomials or elementary transcendental functions. Substituting Eq. (13.4.1) into the Fredholm equation of the second kind given by Eq. (13.3.1) gives

\phi(x) = f(x) + \lambda \int_a^b K(x, t)\, \phi(t)\, dt = f(x) + \lambda \sum_{j=1}^{n} M_j(x) \int_a^b N_j(t)\, \phi(t)\, dt   (13.4.2)
Equation (13.4.2) can be rewritten as

\phi(x) = f(x) + \lambda \sum_{j=1}^{n} M_j(x)\, c_j   (13.4.3)

where

c_j = \int_a^b N_j(t)\, \phi(t)\, dt   (13.4.4)
are numbers. To determine c_j, multiply Eq. (13.4.3) by N_i(x) and integrate:

\int_a^b N_i(x)\, \phi(x)\, dx = \int_a^b N_i(x)\, f(x)\, dx + \lambda \sum_{j=1}^{n} \int_a^b N_i(x)\, M_j(x)\, c_j\, dx   (13.4.5)
Equation (13.4.5) has the form

\int_a^b N_i(x)\, \phi(x)\, dx = c_i = b_i + \lambda \sum_{j=1}^{n} a_{ij}\, c_j   (13.4.6)

where we have used Eq. (13.4.4) and the definitions

b_i = \int_a^b N_i(x)\, f(x)\, dx   (13.4.7)

and

a_{ij}\, c_j = \int_a^b N_i(x)\, M_j(x)\, c_j\, dx   (13.4.8)

The constants \{c_j\} can be factored out of Eq. (13.4.8) to yield

a_{ij} = \int_a^b N_i(x)\, M_j(x)\, dx   (13.4.9)
Equation (13.4.6) can be written in matrix form as

c - \lambda A c = b = (I - \lambda A)\, c   (13.4.10)

where I is the identity matrix. Equation (13.4.10) has the formal solution

c = (I - \lambda A)^{-1}\, b   (13.4.11)
If the index n in Eq. (13.4.1) is finite, the set of equations represented by Eqs. (13.4.10) and (13.4.11) is finite. Setting the determinant |I - \lambda A| = 0 gives a set of n eigenvalues \lambda_j. Substituting the resulting eigenvalues into Eq. (13.4.10) lets us calculate \{c_j\}. Given the set \{c_j\}, we solve Eq. (13.4.3) by using a trial-and-error procedure to determine the functions M_j(x), N_j(t).

EXERCISE 13.3: Solve the homogeneous Fredholm equation \phi(x) = \lambda \int_{-1}^{1} (t + x)\, \phi(t)\, dt [see Arfken and Weber, 2001, p. 1002].
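The separable-kernel recipe reduces the integral equation to linear algebra. A minimal sketch (not from the text; the kernel K(x, t) = xt on [0, 1] with f(x) = x and \lambda = 1 is an arbitrary one-term example whose exact solution is \phi(x) = 3x/(3 - \lambda)):

```python
def trapz(g, a, b, n=1001):
    """Trapezoid-rule quadrature, enough for the smooth integrands here."""
    h = (b - a) / (n - 1)
    return h * (sum(g(a + i * h) for i in range(1, n - 1)) + 0.5 * (g(a) + g(b)))

lam, a, b = 1.0, 0.0, 1.0
M = lambda x: x           # K(x, t) = M(x) * N(t), a one-term separable kernel
N = lambda t: t
f = lambda x: x

a11 = trapz(lambda x: N(x) * M(x), a, b)      # Eq. (13.4.9)
b1 = trapz(lambda x: N(x) * f(x), a, b)       # Eq. (13.4.7)
c1 = b1 / (1.0 - lam * a11)                   # Eq. (13.4.11) in the 1x1 case
phi = lambda x: f(x) + lam * M(x) * c1        # Eq. (13.4.3)

print(phi(1.0))   # exact value is 3/(3 - 1) = 1.5
```

With more terms in Eq. (13.4.1) the scalars a11, b1, c1 become the matrix A and vectors b, c of Eq. (13.4.10).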
13.5 SOLVING INTEGRAL EQUATIONS WITH LAPLACE TRANSFORMS

The Laplace transform of an integral presented in Section 9.6 can be used to solve some types of integral equations. Consider the Volterra equation of the second kind,

\phi(t) = f(t) + \lambda \int_0^t \phi(\tau)\, d\tau   (13.5.1)
where f(t) is an arbitrary function of t, the lower limit of the integral is set to zero, and the kernel is K(t, \tau) = 1. The Laplace transform of Eq. (13.5.1) is

L\{\phi(t)\} = L\left\{ f(t) + \lambda \int_0^t \phi(\tau)\, d\tau \right\}
            = L\{f(t)\} + L\left\{ \lambda \int_0^t \phi(\tau)\, d\tau \right\}
            = L\{f(t)\} + \lambda L\left\{ \int_0^t \phi(\tau)\, d\tau \right\}   (13.5.2)
The Laplace transform of the integral is given by Eq. (9.6.12), so that Eq. (13.5.2) becomes

L\{\phi(t)\} = L\{f(t)\} + \lambda \frac{L\{\phi(t)\}}{s}   (13.5.3)

where s is a real, positive parameter. Solving for L\{\phi(t)\} gives

L\{\phi(t)\} = \frac{L\{f(t)\}}{1 - \lambda/s} = \frac{s\, L\{f(t)\}}{s - \lambda}   (13.5.4)
Taking the inverse Laplace transform of Eq. (13.5.4) gives the formal solution

\phi(t) = L^{-1}\left\{ \frac{s}{s - \lambda}\, L\{f(t)\} \right\}   (13.5.5)

EXERCISE 13.4: Use the Laplace transform to solve the integral equation

V_0 \cos \omega t = R\, i(t) + \frac{1}{C} \int_0^t i(\tau)\, d\tau

for an RC circuit. An RC circuit consists of a resistor with resistance R and a capacitor with capacitance C in series with a time-varying voltage V(t) = V_0 \cos \omega t that has frequency \omega.
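Equation (13.5.5) is easy to spot-check. For f(t) = 1 the transform is L\{f\} = 1/s, so \phi(t) = L^{-1}\{1/(s - \lambda)\} = e^{\lambda t}. The sketch below (illustrative values of \lambda and t) confirms numerically that this \phi satisfies Eq. (13.5.1):

```python
import math

lam = 0.7
f = lambda t: 1.0                      # forcing term with L{f} = 1/s
phi = lambda t: math.exp(lam * t)      # candidate solution from Eq. (13.5.5)

def trapz(g, a, b, n=2001):
    h = (b - a) / (n - 1)
    return h * (sum(g(a + i * h) for i in range(1, n - 1)) + 0.5 * (g(a) + g(b)))

t = 1.3
lhs = phi(t)
rhs = f(t) + lam * trapz(phi, 0.0, t)  # right-hand side of Eq. (13.5.1)
print(abs(lhs - rhs))                  # ~0, up to quadrature error
```

The same check works for any f whose transform is known in closed form.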
CHAPTER 14
CALCULUS OF VARIATIONS
The calculus of variations provides a methodology for finding a path of integration that satisfies an extremum condition. In this chapter, we present the calculus of variations for a single dependent variable, for several dependent variables, and with constraints.
14.1 CALCULUS OF VARIATIONS WITH ONE DEPENDENT VARIABLE

The path of integration of a line integral was defined in Chapter 9 as a smooth curve C that may be written as the vector

\mathbf{r}(s) = x(s)\,\hat{i} + y(s)\,\hat{j} + z(s)\,\hat{k}   (14.1.1)

in three dimensions, where the arc length s parameterizes the coordinates between the end points \{s = a, s = b\}. The vector \mathbf{r}(s) is illustrated in Figure 9.1. It is continuous and differentiable when the path of integration is a smooth curve. The calculus of variations can be used to find a path of integration C(\alpha) that makes the line integral J(\alpha) an extremum with respect to a parameter \alpha. Let us choose the path of integration C(\alpha = 0) as the unknown path of integration that
minimizes J(\alpha). The condition for finding C(\alpha = 0) given J(\alpha) is

\left. \frac{\partial J(\alpha)}{\partial \alpha} \right|_{\alpha=0} = 0   (14.1.2)
where the line integral is considered to be a function of the parameter \alpha. We illustrate the calculus of variations by considering the line integral J_1 of a function f_1(x, \dot{x}, t) that depends on the one-dimensional path x and its derivative \dot{x} = dx/dt with respect to a variable t. The function f_1, the path x, the derivative of the path \dot{x}, and the line integral J_1 may all depend on a parameter \alpha that parameterizes the path of integration. The parameterized line integral may be written as

J_1(\alpha) = \int_{t_1}^{t_2} f_1(x(t, \alpha), \dot{x}(t, \alpha), t)\, dt   (14.1.3)
where the path x = x(t, \alpha) is defined between the end points \{t_1, t_2\}. The path of integration that corresponds to an extremum of J_1 is the path x = x(t, \alpha) that satisfies Eq. (14.1.2). For example, we may write the path of integration as

x(t, \alpha) = x(t, 0) + \epsilon(\alpha)\, \eta(t)   (14.1.4)

The path x(t, 0) is the path that satisfies Eq. (14.1.2), and the function \epsilon(\alpha)\, \eta(t) is the deformation of the path relative to the path x(t, 0). The paths are illustrated in Figure 14.1. The deformation \epsilon(\alpha)\, \eta(t) must equal zero at the end points \{t_1, t_2\} to ensure that all parameterized paths pass through the end points.
Figure 14.1 Paths of integration.
Applying the extremum condition in Eq. (14.1.2) to Eq. (14.1.3) gives

\frac{\partial J_1(\alpha)}{\partial \alpha} = \frac{\partial}{\partial \alpha} \int_{t_1}^{t_2} f_1(x(t, \alpha), \dot{x}(t, \alpha), t)\, dt = \int_{t_1}^{t_2} \left[ \frac{\partial f_1}{\partial x} \frac{\partial x}{\partial \alpha} + \frac{\partial f_1}{\partial \dot{x}} \frac{\partial \dot{x}}{\partial \alpha} \right] dt = 0   (14.1.5)

The second integral on the right-hand side of Eq. (14.1.5) can be integrated by parts to yield

\int_{t_1}^{t_2} \frac{\partial f_1}{\partial \dot{x}} \frac{\partial \dot{x}}{\partial \alpha}\, dt = \int_{t_1}^{t_2} \frac{\partial f_1}{\partial \dot{x}} \frac{\partial^2 x}{\partial t\, \partial \alpha}\, dt = \left. \frac{\partial f_1}{\partial \dot{x}} \frac{\partial x}{\partial \alpha} \right|_{t_1}^{t_2} - \int_{t_1}^{t_2} \frac{d}{dt}\left( \frac{\partial f_1}{\partial \dot{x}} \right) \frac{\partial x}{\partial \alpha}\, dt   (14.1.6)

The partial derivative \partial x/\partial \alpha is arbitrary except at the end points \{t_1, t_2\}, where \partial x/\partial \alpha must equal zero to guarantee that all of the parameterized paths of integration pass through the points (t_1, x_1), (t_2, x_2) for any value of the parameter \alpha. Consequently, the first term on the right-hand side of Eq. (14.1.6) vanishes (see Exercise 14.1) and Eq. (14.1.6) simplifies to

\int_{t_1}^{t_2} \frac{\partial f_1}{\partial \dot{x}} \frac{\partial \dot{x}}{\partial \alpha}\, dt = -\int_{t_1}^{t_2} \frac{d}{dt}\left( \frac{\partial f_1}{\partial \dot{x}} \right) \frac{\partial x}{\partial \alpha}\, dt   (14.1.7)
Substituting Eq. (14.1.7) into Eq. (14.1.5) gives

\frac{\partial J_1(\alpha)}{\partial \alpha} = \int_{t_1}^{t_2} \left[ \frac{\partial f_1}{\partial x} \frac{\partial x}{\partial \alpha} - \frac{d}{dt}\left( \frac{\partial f_1}{\partial \dot{x}} \right) \frac{\partial x}{\partial \alpha} \right] dt = \int_{t_1}^{t_2} \left[ \frac{\partial f_1}{\partial x} - \frac{d}{dt}\left( \frac{\partial f_1}{\partial \dot{x}} \right) \right] \frac{\partial x}{\partial \alpha}\, dt = 0   (14.1.8)

The extremum condition (\partial J_1(\alpha)/\partial \alpha)|_{\alpha=0} = 0 is satisfied when the integrand in Eq. (14.1.8) vanishes at \alpha = 0. The integrand vanishes for an arbitrary deformation \partial x/\partial \alpha when the term in brackets vanishes; therefore the extremum condition is satisfied when

\frac{\partial f_1}{\partial x} - \frac{d}{dt}\left( \frac{\partial f_1}{\partial \dot{x}} \right) = 0   (14.1.9)

Equation (14.1.9) is known as the Euler–Lagrange equation.
EXERCISE 14.1: Show that Eq. (14.1.4) implies that (\partial x/\partial \alpha)|_{t_1}^{t_2} = 0.

EXERCISE 14.2: What is the shortest distance between two points in a plane? Hint: The element of arc length in a plane is ds = \sqrt{dx^2 + dy^2} = \sqrt{1 + (dy/dx)^2}\, dx.
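The variational principle can also be explored directly on a computer. The sketch below (illustrative; the end points, grid size, and sweep count are ad hoc) discretizes the arc-length functional of Exercise 14.2 between (0, 0) and (1, 2) and minimizes it by coordinate descent. Holding its neighbors fixed, the two segments adjoining an interior point are shortest when that point is the midpoint of its neighbors, so repeated sweeps drive the discrete path toward the Euler–Lagrange solution:

```python
import math

N = 20
x = [i / N for i in range(N + 1)]
y = [2.0 * xi + 0.5 * math.sin(math.pi * xi) for xi in x]   # deliberately bent initial path

for _ in range(5000):
    for i in range(1, N):                      # end points stay fixed
        y[i] = 0.5 * (y[i - 1] + y[i + 1])     # exact coordinate-wise minimizer

dev = max(abs(y[i] - 2.0 * x[i]) for i in range(N + 1))
print(dev)   # ~0: the minimizing path is the straight line y = 2x
```

The straight line emerging here is the discrete counterpart of the answer Eq. (14.1.9) gives analytically.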
Application: Classical Dynamics. Joseph Louis Lagrange used the calculus of variations to formulate the dynamical laws of motion in terms of a function of energy. We illustrate Lagrange's formulation in one space dimension by introducing a function L that depends on the coordinate q and its velocity \dot{q} = dq/dt, where t is time. The function L is called the Lagrangian and is the difference between kinetic energy T and potential energy V. Kinetic energy T may depend on position and velocity. Potential energy V depends on position only for a conservative force, that is, a force that can be derived from the gradient of the potential energy. The resulting Lagrangian has the form

L = L(q, \dot{q}) = T(q, \dot{q}) - V(q)   (14.1.10)
Lagrange’s equations are derived from William R. Hamilton’s principle of least action. Action S is defined as the integral ð t2 L(q, q_ , t) dt (14:1:11) S¼ t1
between times ft1, t2g. The dependence of the Lagrangian on time is shown explicitly in Eq. (14.1.11). Hamilton’s principle of least action is a variational principle. It says that a classical particle will follow a path that makes the action integral an extremum. If we introduce the parameter a, Eq. (14.1.11) becomes ð t2 L(q(t, a), q_ (t, a), t) dt
S(a) ¼
(14:1:12)
t1
Equation (14.1.12) has the same form as the path integral in Eq. (14.1.3). The parameterized paths of integration corresponding to Eq. (14.1.4) may be written as

q(t, \alpha) = q(t, 0) + \epsilon(\alpha)\, \eta(t)   (14.1.13)

Following the derivation above, we obtain the Euler–Lagrange equation

\frac{\partial L}{\partial q} - \frac{d}{dt}\left( \frac{\partial L}{\partial \dot{q}} \right) = 0   (14.1.14)

Momentum p is calculated from the Lagrangian as

p = \frac{\partial L}{\partial \dot{q}}   (14.1.15)
and force F is

F = \frac{\partial L}{\partial q}   (14.1.16)
Example: We illustrate the range of applicability of the calculus of variations by considering an LC circuit. An LC circuit consists of a capacitor and an inductor in series. In this example, the capacitor has constant capacitance C and the inductor has constant inductance L. The Lagrangian for the LC circuit is

L = L(q, \dot{q}) = T(q, \dot{q}) - V(q) = \frac{1}{2} L \dot{q}^2 - \frac{q^2}{2C}   (14.1.17)

where charge q is a generalized coordinate that depends on the independent variable time t. Electric current I = \dot{q} is the time rate of change of charge. The energy L\dot{q}^2/2 associated with the inductor corresponds to the kinetic energy term, and the energy q^2/(2C) stored in the capacitor corresponds to the potential energy term. The voltage drop around the LC circuit can be determined by substituting Eq. (14.1.17) into the Euler–Lagrange equation. The resulting differential equation is

\frac{\partial L}{\partial q} - \frac{d}{dt}\left( \frac{\partial L}{\partial \dot{q}} \right) = -\frac{q}{C} - \frac{d}{dt}(L\dot{q}) = -\frac{q}{C} - L\ddot{q} = 0

or

L\ddot{q} + \frac{q}{C} = 0   (14.1.18)
Equation (14.1.18) is the voltage drop equation for an LC circuit.

EXERCISE 14.3: The kinetic energy and potential energy of a harmonic oscillator (HO) in one space dimension are T_{HO} = \frac{1}{2} m \dot{q}^2 and V_{HO} = \frac{1}{2} k q^2, respectively, where m is the constant mass of the oscillating object and k is the spring constant. Write the Lagrangian for the harmonic oscillator and use it in the Euler–Lagrange equation to determine the force equation for the harmonic oscillator.
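Equation (14.1.18) can be verified numerically: its solution is q(t) = q_0 \cos(t/\sqrt{LC}). The sketch below (unit values L = C = 1 chosen purely for simplicity) checks the differential equation with a finite-difference second derivative:

```python
import math

L_ind, C = 1.0, 1.0                # inductance and capacitance (illustrative units)
w = 1.0 / math.sqrt(L_ind * C)     # oscillation frequency of the circuit
q = lambda t: math.cos(w * t)      # charge on the capacitor

def second_derivative(f, t, h=1e-4):
    # central finite difference for f''(t)
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / h**2

t = 0.8
residual = L_ind * second_derivative(q, t) + q(t) / C   # left side of Eq. (14.1.18)
print(abs(residual))   # ~0, limited only by finite-difference error
```

The same check applies, with m and k in place of L and 1/C, to the harmonic oscillator of Exercise 14.3, since the two systems share the same Euler–Lagrange structure.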
14.2 THE BELTRAMI IDENTITY AND THE BRACHISTOCHRONE PROBLEM

A problem that motivated the development of the calculus of variations was the brachistochrone, or shortest-time, problem. The goal of the brachistochrone problem is to determine the shape of a frictionless wire that minimizes the time of descent of a bead sliding along the wire between two points in a vertical
plane under the influence of gravity. The bead has constant mass m and speed v. Its time of descent is given by the integral

t_{12} = \int_{s_1}^{s_2} \frac{ds}{v}   (14.2.1)

where s is the arc length along the wire in a two-dimensional \{x, z\} plane with end points \{s_1(x_1, z_1), s_2(x_2, z_2)\}. The element of arc length is

ds = \sqrt{dx^2 + dz^2} = \sqrt{1 + \left( \frac{dz}{dx} \right)^2}\, dx = \sqrt{1 + z'^2}\, dx   (14.2.2)

where we are using the notation z' = dz/dx. If we assume the elevation z = 0 at the point of release of the bead, we can determine the speed of the bead from conservation of energy, \frac{1}{2} m v^2 = mgz, to be

v = \sqrt{2gz}   (14.2.3)

We change the variable of integration in Eq. (14.2.1) using Eq. (14.2.2) and write the speed of the bead using Eq. (14.2.3) to find

t_{12} = \int_{z_1}^{z_2} \frac{\sqrt{1 + z'^2}}{\sqrt{2gz}}\, dx   (14.2.4)
Equation (14.2.4) has the form of Eq. (14.1.3) with

f_1(z(x, \alpha), z'(x, \alpha), x) = \frac{\sqrt{1 + z'^2}}{\sqrt{2gz}}   (14.2.5)

The corresponding Euler–Lagrange equation is

\frac{\partial f_1}{\partial z} - \frac{d}{dx}\left( \frac{\partial f_1}{\partial z'} \right) = 0   (14.2.6)
We could attempt to solve the brachistochrone problem by substituting Eq. (14.2.5) into Eq. (14.2.6) and solving the resulting differential equation. A more direct approach for this problem is to use an alternative form of the Euler–Lagrange equation known as the Beltrami identity. The Beltrami identity is obtained by considering the differential of a function of the form f(z(x, \alpha), z'(x, \alpha), x); thus

\frac{df}{dx} = \frac{\partial f}{\partial x} + \frac{\partial f}{\partial z} \frac{dz}{dx} + \frac{\partial f}{\partial z'} \frac{dz'}{dx}   (14.2.7)
We now solve Eq. (14.2.7) for the term containing \partial f/\partial z:

\frac{\partial f}{\partial z} \frac{dz}{dx} = \frac{df}{dx} - \frac{\partial f}{\partial x} - \frac{\partial f}{\partial z'} \frac{dz'}{dx}   (14.2.8)

If we multiply the Euler–Lagrange equation

\frac{\partial f}{\partial z} - \frac{d}{dx}\left( \frac{\partial f}{\partial z'} \right) = 0

for f(z(x, \alpha), z'(x, \alpha), x) by dz/dx, we obtain

\frac{\partial f}{\partial z} \frac{dz}{dx} - \frac{dz}{dx} \frac{d}{dx}\left( \frac{\partial f}{\partial z'} \right) = 0   (14.2.9)

Substituting Eq. (14.2.8) into Eq. (14.2.9) gives

\frac{df}{dx} - \frac{\partial f}{\partial x} - \frac{\partial f}{\partial z'} \frac{dz'}{dx} - \frac{dz}{dx} \frac{d}{dx}\left( \frac{\partial f}{\partial z'} \right) = 0   (14.2.10)

Combining terms in Eq. (14.2.10) gives the Beltrami identity

-\frac{\partial f}{\partial x} + \frac{d}{dx}\left( f - \frac{dz}{dx} \frac{\partial f}{\partial z'} \right) = 0   (14.2.11)

Equation (14.2.11) is especially useful when \partial f/\partial x = 0, since then

\frac{d}{dx}\left( f - \frac{dz}{dx} \frac{\partial f}{\partial z'} \right) = 0   (14.2.12)
which can be integrated to yield

f - \frac{dz}{dx} \frac{\partial f}{\partial z'} = C   (14.2.13)
where C is a constant of integration. The Beltrami identity, especially Eq. (14.2.13), can be used to solve the brachistochrone problem because f_1 in Eq. (14.2.5) satisfies \partial f_1/\partial x = 0. Substituting Eq. (14.2.5) into the Beltrami identity and observing that \partial f_1/\partial x = 0 gives

f_1 - \frac{dz}{dx} \frac{\partial f_1}{\partial z'} = C   (14.2.14)
Evaluating derivatives in Eq. (14.2.14) and simplifying lets us write

\sqrt{\frac{1 + z'^2}{2gz}} - z' \left[ z' \sqrt{\frac{1}{2gz(1 + z'^2)}} \right] = C   (14.2.15)
since

\frac{\partial f_1}{\partial z'} = z' \sqrt{\frac{1}{2gz(1 + z'^2)}}   (14.2.16)

Combining terms in Eq. (14.2.15) yields

\sqrt{\frac{1}{z(1 + z'^2)}} = \sqrt{2g}\, C = \frac{1}{k}   (14.2.17)

where we have redefined the constant of integration as k^{-1} = \sqrt{2g}\, C. Solving Eq. (14.2.17) for z' gives

z' = \frac{dz}{dx} = \sqrt{\frac{k^2 - z}{z}}

or

\frac{dz}{dx} = \sqrt{\frac{a - z}{z}}   (14.2.18)

where a = k^2. The solution to Eq. (14.2.18) that solves the brachistochrone problem is a cycloid.

EXERCISE 14.4: Solve Eq. (14.2.18). Hint: Perform a variable substitution by introducing a parameter \theta that lets you substitute z = a \sin^2 \theta = (a/2)(1 - \cos 2\theta) for z.
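A quick numerical comparison makes the cycloid's optimality concrete. In the parameterization z = (a/2)(1 - \cos\phi), x = (a/2)(\phi - \sin\phi), the integrand ds/v reduces to \sqrt{a/(2g)}\, d\phi, so the descent time along the cycloid is exact. The sketch below (illustrative values; a = 1 m and standard gravity) pits it against a straight ramp between the same end points:

```python
import math

g = 9.81
a = 1.0                       # cycloid parameter: z = (a/2)(1 - cos(phi))
phi_end = math.pi             # bead reaches the bottom of the cycloid arch
x2 = (a / 2.0) * (phi_end - math.sin(phi_end))
z2 = (a / 2.0) * (1.0 - math.cos(phi_end))

# Along the cycloid, ds/v = sqrt(a/(2g)) dphi, so the descent time is exact:
t_cycloid = math.sqrt(a / (2.0 * g)) * phi_end

# Straight ramp z = (z2/x2) x between the same end points:
# t = int_0^{x2} sqrt(1 + m^2) / sqrt(2 g m x) dx = 2 sqrt(x2 (1 + m^2) / (2 g m))
m = z2 / x2
t_line = 2.0 * math.sqrt(x2 * (1.0 + m * m) / (2.0 * g * m))

print(round(t_cycloid, 3), round(t_line, 3))   # the cycloid wins
```

The cycloid's descent time (about 0.709 s here) beats the straight line (about 0.841 s) even though the straight line is the shorter path.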
14.3 CALCULUS OF VARIATIONS WITH SEVERAL DEPENDENT VARIABLES

The calculus of variations introduced in Section 14.1 for one dependent variable is extended to several dependent variables in this section. We illustrate the calculus of variations with several dependent variables by considering the line integral J_n of a function f_n(x_1, x_2, \ldots, x_n, \dot{x}_1, \dot{x}_2, \ldots, \dot{x}_n, t) that depends on n dependent variables \{x_i = x_i(t),\ i = 1, \ldots, n\} and their derivatives \{\dot{x}_i = dx_i/dt,\ i = 1, \ldots, n\} with respect to an independent variable t. The function f_n, the path \{x_i(t)\}, the derivatives of the coordinates \{\dot{x}_i\}, and the line integral J_n may all depend on a parameter \alpha that parameterizes the path of integration. The parameterized line integral may be written as

J_n(\alpha) = \int_{t_1}^{t_2} f_n(x_1(t, \alpha), \dot{x}_1(t, \alpha), \ldots, x_n(t, \alpha), \dot{x}_n(t, \alpha), t)\, dt   (14.3.1)
and the condition for finding the path of integration that minimizes J_n(\alpha) is

\left. \frac{\partial J_n(\alpha)}{\partial \alpha} \right|_{\alpha=0} = 0   (14.3.2)

The procedure for deriving the Euler–Lagrange equations for several dependent variables is analogous to the procedure used in Section 14.1 for a single dependent variable. We derive the Euler–Lagrange equations for several dependent variables by first applying Eq. (14.3.2) to Eq. (14.3.1) to obtain

\left. \frac{\partial J_n}{\partial \alpha} \right|_{\alpha=0} = \int_{t_1}^{t_2} \sum_{i=1}^{n} \left( \frac{\partial f_n}{\partial x_i} \frac{\partial x_i}{\partial \alpha} + \frac{\partial f_n}{\partial \dot{x}_i} \frac{\partial \dot{x}_i}{\partial \alpha} \right) dt = 0   (14.3.3)
Integrating the second term in parentheses by parts gives

\int_{t_1}^{t_2} \frac{\partial f_n}{\partial \dot{x}_i} \frac{\partial \dot{x}_i}{\partial \alpha}\, dt = \int_{t_1}^{t_2} \frac{\partial f_n}{\partial \dot{x}_i} \frac{\partial^2 x_i}{\partial t\, \partial \alpha}\, dt = \left. \frac{\partial f_n}{\partial \dot{x}_i} \frac{\partial x_i}{\partial \alpha} \right|_{t_1}^{t_2} - \int_{t_1}^{t_2} \frac{d}{dt}\left( \frac{\partial f_n}{\partial \dot{x}_i} \right) \frac{\partial x_i}{\partial \alpha}\, dt   (14.3.4)
for each value of i. We observe that the partial derivative \partial x_i/\partial \alpha is arbitrary except at the end points \{t_1, t_2\}, where \partial x_i/\partial \alpha must equal zero to guarantee that all of the parameterized paths of integration pass through the end points for any value of the parameter \alpha. Consequently, the first term on the right-hand side of Eq. (14.3.4) vanishes and Eq. (14.3.4) simplifies to

\int_{t_1}^{t_2} \frac{\partial f_n}{\partial \dot{x}_i} \frac{\partial \dot{x}_i}{\partial \alpha}\, dt = -\int_{t_1}^{t_2} \frac{d}{dt}\left( \frac{\partial f_n}{\partial \dot{x}_i} \right) \frac{\partial x_i}{\partial \alpha}\, dt   (14.3.5)
Substituting Eq. (14.3.5) into Eq. (14.3.3) gives

\left. \frac{\partial J_n}{\partial \alpha} \right|_{\alpha=0} = \int_{t_1}^{t_2} \sum_{i=1}^{n} \left( \frac{\partial f_n}{\partial x_i} - \frac{d}{dt} \frac{\partial f_n}{\partial \dot{x}_i} \right) \frac{\partial x_i}{\partial \alpha}\, dt = 0   (14.3.6)

The extremum condition (\partial J_n(\alpha)/\partial \alpha)|_{\alpha=0} = 0 is satisfied when the integrand in Eq. (14.3.6) vanishes at \alpha = 0. The integrand vanishes for the set of arbitrary deformations \{\partial x_i/\partial \alpha\} when the term in parentheses vanishes; therefore the extremum condition is satisfied when

\frac{\partial f_n}{\partial x_i} - \frac{d}{dt} \frac{\partial f_n}{\partial \dot{x}_i} = 0 \quad \text{for all } i = 1, \ldots, n   (14.3.7)
The equations represented by Eq. (14.3.7) are Euler–Lagrange equations for n dependent variables.

EXERCISE 14.5: Solve Eq. (14.3.7) for the function of two dependent variables f_2(x_1, x_2, \dot{x}_1, \dot{x}_2, t) = (m/2)(\dot{x}_1^2 + \dot{x}_2^2) - (k/2)(x_1 + x_2)^2, where \{m, k\} are constants.

14.4 CALCULUS OF VARIATIONS WITH CONSTRAINTS
Suppose the line integral J_n in Eq. (14.3.1) is subject to the m constraints

g_j(x_1(t, \alpha), \dot{x}_1(t, \alpha), \ldots, x_n(t, \alpha), \dot{x}_n(t, \alpha), t) = 0, \quad j = 1, \ldots, m   (14.4.1)

The constraints can be included in the calculus of variations using the method of undetermined multipliers. Let us assume that the constraints in Eq. (14.4.1) are differentiable, so that

\frac{\partial g_j}{\partial \alpha} = \sum_{i=1}^{n} \frac{\partial g_j}{\partial x_i} \frac{\partial x_i}{\partial \alpha} = 0, \quad j = 1, \ldots, m   (14.4.2)
We include the constraints in the calculus of variations by multiplying \partial g_j/\partial \alpha by the unspecified multiplier \lambda_j and then integrating with respect to t; thus

\int_{t_1}^{t_2} \lambda_j \sum_{i=1}^{n} \frac{\partial g_j}{\partial x_i} \frac{\partial x_i}{\partial \alpha}\, dt = \sum_{i=1}^{n} \int_{t_1}^{t_2} \lambda_j \frac{\partial g_j}{\partial x_i} \frac{\partial x_i}{\partial \alpha}\, dt = 0, \quad j = 1, \ldots, m   (14.4.3)
Each of the m constraint equations in Eq. (14.4.3) is now added to Eq. (14.3.6) to give

\left. \frac{\partial J_n}{\partial \alpha} \right|_{\alpha=0} = \int_{t_1}^{t_2} \sum_{i=1}^{n} \left[ \frac{\partial f_n}{\partial x_i} - \frac{d}{dt} \frac{\partial f_n}{\partial \dot{x}_i} + \sum_{j=1}^{m} \lambda_j \frac{\partial g_j}{\partial x_i} \right] \frac{\partial x_i}{\partial \alpha}\, dt = 0   (14.4.4)
The n derivatives \{\partial x_i/\partial \alpha\} in Eq. (14.4.4) are not arbitrary because of the constraint equations in Eq. (14.4.2). We recognize this dependence by rewriting Eq. (14.4.4) as

\int_{t_1}^{t_2} \sum_{i=1}^{n-m} \left[ \frac{\partial f_n}{\partial x_i} - \frac{d}{dt} \frac{\partial f_n}{\partial \dot{x}_i} + \sum_{j=1}^{m} \lambda_j \frac{\partial g_j}{\partial x_i} \right] \frac{\partial x_i}{\partial \alpha}\, dt + \int_{t_1}^{t_2} \sum_{i=n-m+1}^{n} \left[ \frac{\partial f_n}{\partial x_i} - \frac{d}{dt} \frac{\partial f_n}{\partial \dot{x}_i} + \sum_{j=1}^{m} \lambda_j \frac{\partial g_j}{\partial x_i} \right] \frac{\partial x_i}{\partial \alpha}\, dt = 0   (14.4.5)
We choose the m undetermined multipliers \{\lambda_1, \ldots, \lambda_m\} so that the bracketed term in the second integral of Eq. (14.4.5) vanishes. The result is

\frac{\partial f_n}{\partial x_i} - \frac{d}{dt} \frac{\partial f_n}{\partial \dot{x}_i} + \sum_{j=1}^{m} \lambda_j \frac{\partial g_j}{\partial x_i} = 0 \quad \text{for all } i = n-m+1, \ldots, n   (14.4.6)

With this choice, the remaining n - m derivatives \{\partial x_1/\partial \alpha, \ldots, \partial x_{n-m}/\partial \alpha\} are arbitrary. Equation (14.4.5) is satisfied by recognizing that the second integral vanishes because of our choice of undetermined multipliers in Eq. (14.4.6), and by requiring that the bracketed term in the first integral vanish for arbitrary \{\partial x_1/\partial \alpha, \ldots, \partial x_{n-m}/\partial \alpha\}, so that

\frac{\partial f_n}{\partial x_i} - \frac{d}{dt} \frac{\partial f_n}{\partial \dot{x}_i} + \sum_{j=1}^{m} \lambda_j \frac{\partial g_j}{\partial x_i} = 0 \quad \text{for all } i = 1, \ldots, n-m   (14.4.7)
Equations (14.4.2), (14.4.6), and (14.4.7) are n + m equations for the n variables \{x_1, \ldots, x_n\} and the m undetermined multipliers \{\lambda_1, \ldots, \lambda_m\}. Equations (14.4.6) and (14.4.7) are Euler–Lagrange equations with constraints.

EXERCISE 14.6: Find the Euler–Lagrange equations with constraints for the function of three dependent variables f_3(r, \theta, z) = (m/2)[\dot{r}^2 + r^2 \dot{\theta}^2 + \dot{z}^2] - mgz subject to the constraint r^2 = cz, where \{m, g, c\} are constants. Note that f_3 is expressed in cylindrical coordinates \{r, \theta, z\}.
CHAPTER 15
TENSOR ANALYSIS
Tensor analysis is the study of tensors and their transformation properties. A few important physical quantities that may be classified as tensors include temperature, velocity, stress, permeability, and the electromagnetic tensor that contains both the electric and magnetic fields. The purpose of this chapter is to review essential concepts from tensor analysis. For additional discussion, see such references as Arfken and Weber [2001, Sec. 2.6], Chow [2000, Chap. 1], Zwillinger [2003, Sec. 5.10], Lebedev and Cloud [2003], Boas [2005], and Eddington [1960, Chap. II].
15.1 CONTRAVARIANT AND COVARIANT VECTORS
The concept of a vector is extended in tensor analysis to account for the different ways the components of a vector are expressed when the coordinate representation of a vector is transformed from one coordinate system to another. In this section we use two transformation relations to motivate the definition of two types of vectors: contravariant vectors and covariant vectors.

Contravariant Vector

Consider the representation of a three-dimensional vector in two coordinate systems: an unprimed coordinate system with components \{x^i\} = \{x^1, x^2, x^3\} and a primed coordinate system with components \{x'^i\} = \{x'^1, x'^2, x'^3\}. We
write the component index as a superscript in conformity with the modern notation for a contravariant vector. The definition of a contravariant vector depends on the transformation properties of the vector. In general, the primed components of a three-dimensional vector are related to the unprimed components by the coordinate transformation

x'^1 = f^1(x^1, x^2, x^3)
x'^2 = f^2(x^1, x^2, x^3)   (15.1.1)
x'^3 = f^3(x^1, x^2, x^3)

where the functions \{f^1, f^2, f^3\} relate the components in the unprimed coordinate system to the components in the primed coordinate system. The definition of a contravariant vector is motivated by considering the transformation properties of the displacement vector.

Example: Suppose the primed coordinate system is obtained by rotating the unprimed coordinate system counterclockwise by an angle \theta about the x^3 axis. From Section 4.1, we know that the components in the primed coordinate system are related to the components in the unprimed coordinate system by

x'^1 = x^1 \cos\theta + x^2 \sin\theta
x'^2 = -x^1 \sin\theta + x^2 \cos\theta   (15.1.2)
x'^3 = x^3
The relationships in Eq. (15.1.2) are the transformation equations that connect the unprimed and primed coordinate systems. The components of the displacement vector in the unprimed coordinate system \{x^i\} are the differentials \{dx^i\} = \{dx^1, dx^2, dx^3\}. The components of the displacement vector in the primed coordinate system are expressed in terms of the components of the displacement vector in the unprimed coordinate system by

dx'^i = \frac{\partial x'^i}{\partial x^1} dx^1 + \frac{\partial x'^i}{\partial x^2} dx^2 + \frac{\partial x'^i}{\partial x^3} dx^3, \quad i = 1, 2, 3

or

dx'^i = \sum_{j=1}^{3} \frac{\partial x'^i}{\partial x^j}\, dx^j, \quad i = 1, 2, 3   (15.1.3)
TENSOR ANALYSIS
205
the definition of a set of vectors known as contravariant vectors. A contravariant vector {Ai } in n-dimensional space is a vector with n components that transforms according to the relation A0i ¼
n X @x0 i j¼1
@x j
Aj , i ¼ 1, 2, . . . , n
(15:1:4)
EXERCISE 15.1: Compare the set of equations in Eq. (15.1.1) with the set of equations in Eq. (15.1.2). What are the functions { f 1 , f 2 , f 3 }? Use these functions in Eq. (15.1.3) to calculate the components of the displacement vector in the primed (rotated) coordinate system. Covariant Vector The transformation equation for contravariant vectors given by Eq. (15.1.4) is one way the components of a vector may transform. Another transformation equation is obtained by considering the transformation properties of a vector {bi } that is obtained as the gradient of a function f(x1 , x2 , x3 ) in three-dimensional space. Notice that the component index is written as a subscript to distinguish the type of vector that we are now considering from a contravariant vector. Vector {bi } is known as a covariant vector. Also notice that the function f depends on the components of a contravariant vector in the unprimed coordinate system. The components of {bi } are the components of the gradient of f, namely, bi ¼
@f , i ¼ 1, 2, 3 @xi
(15:1:5)
The components of the vector {b0i } in the primed coordinate system are b0i ¼
3 @f X @f @x j ¼ , i ¼ 1, 2, 3 @x0 i @xj @x0 i j¼1
(15:1:6)
Equation (15.1.6) is the transformation equation for the gradient of a function f(x1 , x2 , x3 ). It may be rewritten using Eq. (15.1.5) to obtain b0i ¼
3 X @x j j¼1
@x0 i
bj ; i ¼ 1, 2, 3
(15:1:7)
Equation (15.1.7) suggests a generalization analogous to Eq. (15.1.4). The generalization of Eq. (15.1.7) to n dimensions defines a set of vectors called covariant vectors. A covariant vector {Bi } in n-dimensional space is a vector with n components that transforms according to the relation B0i ¼
n X @x j j¼1
@x0 i
Bj , i ¼ 1, 2, . . . , n
(15:1:8)
Application: Galilean and Lorentz Transformations. The Galilean transformation is a set of rules for transforming the physical description of a moving object in one uniformly moving (nonaccelerating) coordinate system to another. Each coordinate system is referred to as a reference frame. A simple expression for the Galilean transformation can be obtained by considering the relative motion of the two reference frames shown in Figure 15.1. The primed reference frame F' is moving with a constant speed v in the x direction relative to the unprimed reference frame F. Using Cartesian coordinates, the origins of the two reference frames F and F' are at x' = x = 0 at time t' = t = 0. The Galilean transformation is

x' = x - vt
y' = y
z' = z   (15.1.9)
t' = t

The spatial coordinate x' depends on the values of x and t in the unprimed reference frame, while the time coordinates t and t' are equal. The equality of t and t' expresses Isaac Newton's view of time as absolute and immutable. The Galilean transformation does not change the form of Newton's laws, but it does change the form of Maxwell's equations. The change in the form of Maxwell's equations following a Galilean transformation implied that the speed of light should depend on the motion of the observer. Measurements by Michelson and Morley showed that the speed of light does not depend on the motion of observers whose relative motion is uniform. Einstein proposed a set of transformation rules that preserved the form of Maxwell's equations in reference frames that were moving uniformly with respect to each other. These transformation rules, known collectively as the Lorentz transformation, introduced a mutual dependence between space and time coordinates in the primed and unprimed reference frames. The relationship between the unprimed and primed space and time coordinates is given by the
Figure 15.1 Two reference frames in uniform relative motion.
Lorentz transformation

x' = \frac{x - vt}{\sqrt{1 - (v/c)^2}} = \gamma x - \gamma v t, \quad \gamma = \frac{1}{\sqrt{1 - (v/c)^2}}
y' = y
z' = z   (15.1.10)
t' = \frac{t - x(v/c^2)}{\sqrt{1 - (v/c)^2}} = \gamma t - \gamma \frac{v}{c^2} x

where \{v, c, \gamma\} are constants with respect to the space and time coordinates. Equation (15.1.10) shows that space and time depend on one another in both the primed and unprimed reference frames. Equations (15.1.9) and (15.1.10) are examples of linear coordinate transformations in four-dimensional space.
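The limiting behavior of Eq. (15.1.10) is easy to see numerically. In the sketch below (illustrative speed and coordinates), the gap between the Lorentz-transformed coordinates and the Galilean values of Eq. (15.1.9) shrinks as c is artificially increased:

```python
import math

v, x, t = 200.0, 1.0e4, 3.0   # illustrative speed (m/s), position (m), time (s)
for c in (3.0e8, 3.0e10, 3.0e12):
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    x_prime = gamma * (x - v * t)          # Eq. (15.1.10)
    t_prime = gamma * (t - x * v / c ** 2)
    # deviation from the Galilean values x - vt and t:
    print(c, x_prime - (x - v * t), t_prime - t)
```

Both deviations vanish as c grows, which is the content of the c → ∞ limit used in Exercise 15.2.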
EXERCISE 15.2: Show that the set of equations in Eq. (15.1.9) is a special case of Eq. (15.1.10). Hint: Apply the limit c → ∞ to the set of equations in Eq. (15.1.10).

15.2 TENSORS
A tensor in an n-dimensional space is a function whose value at a given point in space does not depend on the coordinate system used to cover the space. Examples of tensors include temperature, velocity, and stress. Temperature T is a scalar tensor, or tensor of rank zero, because it is defined by one number and does not depend on a component index. The contravariant vector \{A^i\} and covariant vector \{B_j\} are tensors of rank one; they depend on one component index. Tensors with higher ranks are obtained by generalizing the definitions of contravariant and covariant vectors. General definitions for the transformation equations of second-rank tensors and for tensors of arbitrary rank are presented in this section.

Summation Convention and Tensor Notation

The summation convention says that a sum is implied over any index that appears twice in a term. For example, the transformation equation in Eq. (15.1.4) for a contravariant vector \{A^i\} is

A'^i = \frac{\partial x'^i}{\partial x^j}\, A^j   (15.2.1)
and the transformation equation in Eq. (15.1.8) for a covariant vector \{B_j\} is

B'_i = \frac{\partial x^j}{\partial x'^i}\, B_j   (15.2.2)
Each index \{i, j\} in n-dimensional space is an integer that is understood to range from 1 to n. The summation convention simplifies the notation, but it may cause confusion if its use is not clearly stated. We have used curly brackets \{A^i\} and \{B_j\} to indicate a tensor. We now simplify the notation by omitting the brackets. In this widely used notation, the contravariant vector \{A^i\}, which is a tensor of rank one, is written as A^i. The number of indices indicates the rank of the tensor, and the placement of the index as a superscript or subscript indicates the transformation property of the tensor. A superscript index indicates the contravariant transformation property illustrated in Eq. (15.2.1) for a contravariant tensor of rank one, and a subscript index indicates the covariant transformation property illustrated in Eq. (15.2.2) for a covariant tensor of rank one.

Second-Rank Tensors

A tensor of rank two, or second-rank tensor, in an n-dimensional space may be formed by combining first-rank tensors. For example, the product of the first-rank tensors A^i and B_j is the tensor C^i_j with n^2 components. The ij-th component is

C^i_j = A^i B_j   (15.2.3)
(15:2:4)
Note that the summation convention is implied in Eq. (15.2.4). Two other second-rank tensors may be formed by the direct product of two first-rank tensors. The outer product of two contravariant tensors A i and D j is the contravariant second-rank tensor E ij with the n 2 components E ij ¼ Ai D j
(15:2:5)
The outer product of two covariant tensors Bi and Fj is the covariant second-rank tensor Gij with the n 2 components Gij ¼ Bi Fj
(15:2:6)
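As a concrete numerical illustration of Eqs. (15.2.3) and (15.2.4), the outer and inner products can be written with the implied summation as an explicit loop. The component values below are arbitrary and serve only as a sketch:

```python
# Outer and inner products of first-rank tensors in n = 3 dimensions.
# The component values are arbitrary illustrations.
n = 3
A = [1.0, 2.0, 3.0]   # contravariant components A^i
B = [4.0, 5.0, 6.0]   # covariant components B_j

# Direct (outer) product: C^i_j = A^i B_j, a mixed second-rank tensor
# with n^2 components.
C = [[A[i] * B[j] for j in range(n)] for i in range(n)]

# Inner product: the scalar C = A^i B_i; the summation over the
# repeated index is written out explicitly.
scalar = sum(A[i] * B[i] for i in range(n))

print(scalar)    # 1*4 + 2*5 + 3*6 = 32.0
print(C[0][1])   # the component A^1 B_2 = 1.0 * 5.0 = 5.0
```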
The second-rank tensors C^i_j, E^{ij}, G_{ij} have been formed by the direct product of two first-rank tensors. Second-rank tensors do not have to be formed as the outer product of first-rank tensors. In general, the transformation equation for a
TENSOR ANALYSIS

TABLE 15.1 Transformation Equations for Second-Rank Tensors

Second-Rank Tensor | Transformation Equation
Contravariant      | A'^{ij} = (∂x'^i/∂x^a)(∂x'^j/∂x^b) A^{ab}
Covariant          | A'_{ij} = (∂x^a/∂x'^i)(∂x^b/∂x'^j) A_{ab}
Mixed              | A'^i_j = (∂x'^i/∂x^a)(∂x^b/∂x'^j) A^a_b
contravariant second-rank tensor is

A'^{ij} = (∂x'^i/∂x^a)(∂x'^j/∂x^b) A^{ab}

(15.2.7)
where we have used the summation convention. In this case, the two indices {a, b} each appear twice in the term on the right-hand side of Eq. (15.2.7). A summation is implied for each index. A second-rank tensor is a quantity that has n × n components and satisfies one of the transformation equations in Table 15.1. Each transformation equation in Table 15.1 represents n × n equations for the n × n components of the second-rank tensor.
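For a linear coordinate change x'^i = L^i_a x^a the partial derivatives in Eq. (15.2.7) are the constants L^i_a, so the transformation law can be checked numerically. The sketch below (rotation angle and vector components chosen arbitrarily) confirms that transforming the outer product E^{ab} = A^a D^b as a second-rank tensor agrees with forming the outer product of the transformed vectors:

```python
import math

# Check of the transformation law (15.2.7) for a linear coordinate
# change x'^i = L^i_a x^a, so that dx'^i/dx^a = L^i_a. Here L is a
# rotation by an arbitrary angle; the vector components are arbitrary.
t = 0.3
L = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]
n = 2

A = [1.0, 2.0]    # contravariant vector A^a
D = [3.0, -1.0]   # contravariant vector D^b
E = [[A[a] * D[b] for b in range(n)] for a in range(n)]   # E^{ab} = A^a D^b

# E'^{ij} = (dx'^i/dx^a)(dx'^j/dx^b) E^{ab}, summing over a and b
Ep = [[sum(L[i][a] * L[j][b] * E[a][b] for a in range(n) for b in range(n))
       for j in range(n)] for i in range(n)]

# The same result follows by transforming the vectors first and then
# forming the outer product; this is what "transforms as a tensor" means.
Ap = [sum(L[i][a] * A[a] for a in range(n)) for i in range(n)]
Dp = [sum(L[j][b] * D[b] for b in range(n)) for j in range(n)]
for i in range(n):
    for j in range(n):
        assert abs(Ep[i][j] - Ap[i] * Dp[j]) < 1e-12
```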
EXERCISE 15.3: Write the transformation equations in Table 15.1 without the summation convention; that is, explicitly show the summations for each equation.

Tensors with Arbitrary Rank

The transformation equation for a tensor with arbitrary rank is

A'^{a1 a2 ... an}_{b1 b2 ... bm} = (∂x'^{a1}/∂x^{c1})(∂x'^{a2}/∂x^{c2}) ··· (∂x'^{an}/∂x^{cn}) (∂x^{d1}/∂x'^{b1})(∂x^{d2}/∂x'^{b2}) ··· (∂x^{dm}/∂x'^{bm}) A^{c1 c2 ... cn}_{d1 d2 ... dm}

(15.2.8)
where the summation convention has been used. The tensor in Eq. (15.2.8) has n contravariant indices and m covariant indices. Sums are implied over the n + m repeated indices {c1, c2, ..., cn, d1, d2, ..., dm} on the right-hand side of Eq. (15.2.8).

EXERCISE 15.4: Use Eq. (15.2.8) to write the transformation equation for a tensor with two contravariant indices and a single covariant index.
15.3 THE METRIC TENSOR
The length of a vector with components {x^i, i = 1, 2, 3} in a three-dimensional Euclidean space can be written in several equivalent ways; thus

x · x = \vec{x} · \vec{x} = (x^1)^2 + (x^2)^2 + (x^3)^2 = Σ_{j=1}^{3} x^j x_j = Σ_{i=1}^{3} Σ_{j=1}^{3} g_{ij} x^i x^j,   x^j = x_j

(15.3.1)

The elements of the matrix {g_ij} are

{g_ij} =
[ 1 0 0 ]
[ 0 1 0 ]
[ 0 0 1 ]

(15.3.2)
The matrix {g_ij} is called the metric tensor or the fundamental tensor because it determines the length of a vector in a particular geometric space. The tensor g_ij is a second-rank tensor. The determinant of the matrix {g_ij} representing the metric tensor must be nonzero. In our example, the tensor g_ij is defined for a three-dimensional space and is written as a 3 × 3 matrix in Eq. (15.3.2). Equation (15.3.2) illustrates the observation that a second-rank tensor in n-dimensional space can be written as a square matrix with n × n elements. The elements of the metric tensor depend on the coordinate system we are working in. In the case of the metric tensor for Euclidean space shown in Eq. (15.3.2), there is no difference between the contravariant vector x^j and the covariant vector x_j. This is not true in some geometric spaces, such as Riemann space. The relation

g^{ik} g_{kj} = δ^i_j

(15.3.3)
defines the contravariant tensor g^{ij} in terms of the covariant tensor g_{ij} and the Kronecker delta function

δ^i_j = 1 if i = j,  δ^i_j = 0 if i ≠ j

(15.3.4)
The Kronecker delta function δ^i_j is a mixed tensor of rank two, and δ^i_j is the same in every coordinate system. If we write the matrix {g_ij} as g and its inverse as g⁻¹, the inverse g⁻¹ is obtained from the relation

g g⁻¹ = g⁻¹ g = I

(15.3.5)
where I is the identity matrix. Equation (15.3.5) is equivalent to Eq. (15.3.3) and shows that the contravariant tensor g^{ij} is equal to g⁻¹, the inverse of matrix g.

Example: The inverse of matrix g in three-dimensional Euclidean space is found from the matrix equation

g g⁻¹ = g⁻¹ g =
[ 1 0 0 ]
[ 0 1 0 ]
[ 0 0 1 ]

and gives the result

g⁻¹ = g =
[ 1 0 0 ]
[ 0 1 0 ]
[ 0 0 1 ]

The contravariant tensor g^{ij} is equal to the covariant tensor g_{ij} in this case; thus

{g_{ij}} = {g^{ij}} =
[ 1 0 0 ]
[ 0 1 0 ]
[ 0 0 1 ]
EXERCISE 15.5: Write Eq. (15.3.1) using the summation convention.

EXERCISE 15.6: Show that the determinant of the metric tensor represented by the matrix in Eq. (15.3.2) is nonzero.
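The double sum in Eq. (15.3.1) can be verified with explicit loops; the vector components below are arbitrary:

```python
# Length of a vector computed with the Euclidean metric of Eq. (15.3.2).
# The components of x are arbitrary illustrations.
g = [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 1]]          # metric tensor g_ij
x = [1.0, 2.0, 2.0]      # components x^i

# g_ij x^i x^j, with both implied summations written out
length_sq = sum(g[i][j] * x[i] * x[j] for i in range(3) for j in range(3))
print(length_sq)   # 1 + 4 + 4 = 9.0, the same as (x^1)^2 + (x^2)^2 + (x^3)^2
```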
Transformation of the Metric Tensor

The transformation equation for the metric tensor g_{ij} is obtained from Table 15.1:

g'_{ij} = (∂x^a/∂x'^i)(∂x^b/∂x'^j) g_{ab}

(15.3.6)
Application: Energy-Momentum Four-Vector. The geometry of the special theory of relativity is defined in a four-dimensional space. The four components of the four-dimensional space are a time coordinate and three space coordinates. A widely used notation expresses the indices for the components of a space-time four-vector in the range 0 to 3. The 0th component of the space-time four-vector is the time coordinate x^0 = ct, where c is the speed of light in vacuum, and the three remaining components {x^1, x^2, x^3} represent the three space coordinates. The four-dimensional space of the special theory of relativity is called Minkowski space-time.
The length s of a four-vector with space-time components {x^0, x^1, x^2, x^3} in Minkowski space-time is

s^2 = Σ_{μ=0}^{3} Σ_{ν=0}^{3} g_{μν} x^μ x^ν for μ, ν = 0, 1, 2, 3

(15.3.7)
The fundamental metric for determining the length s of a four-vector in Minkowski space-time is

[g_{μν}] =
[ 1  0  0  0 ]
[ 0 -1  0  0 ]
[ 0  0 -1  0 ]
[ 0  0  0 -1 ]

(15.3.8)
Substituting the metric of Minkowski space-time into Eq. (15.3.7) gives

s^2 = (x^0)^2 - (x^1)^2 - (x^2)^2 - (x^3)^2 = c^2 t^2 - (x^1)^2 - (x^2)^2 - (x^3)^2

(15.3.9)
The length s of the space-time four-vector is a scalar. The four-vector is called timelike if s^2 > 0 and it is called spacelike if s^2 < 0. If an object is moving at the speed of light, then the space-time four-vector has length s^2 = 0.

Another four-vector that has special importance is the energy-momentum four-vector {p^μ} = {p^0, p^1, p^2, p^3} = {E/c, p^1, p^2, p^3}. The 0th component of the energy-momentum four-vector is the total energy E divided by the speed of light in vacuum. The remaining three spatial components are the components of the momentum vector. The length of the energy-momentum four-vector for a relativistic object with mass m is given by Eq. (15.3.7) with the space-time four-vector replaced by the energy-momentum four-vector; thus

m^2 c^2 = (p^0)^2 - (p^1)^2 - (p^2)^2 - (p^3)^2 = E^2/c^2 - (p^1)^2 - (p^2)^2 - (p^3)^2

(15.3.10)
The energy-momentum four-vector is timelike if m^2 c^2 > 0 and it is spacelike if m^2 c^2 < 0. An object moving slower than the speed of light is called a bradyon and has a positive mass. It is mathematically possible for an object to be moving faster than the speed of light. Faster-than-light objects are called tachyons. In the context of Eq. (15.3.10), the mass of a faster-than-light object would have to be imaginary. There are other ways to treat tachyons, but that discussion is beyond the scope of this book [e.g., see Fanchi, 1993]. If an object is moving at the speed of light, then the energy-momentum four-vector has length m^2 c^2 = 0.

EXERCISE 15.7: Show that Eq. (15.3.10) reduces to the famous result E ≈ mc^2 when the momentum of the object is small compared to the mass of the object.
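A short numerical sketch of Eq. (15.3.10), in units where c = 1 and with arbitrary (hypothetical) energy and momentum components:

```python
# Length of the energy-momentum four-vector in Minkowski space-time,
# Eq. (15.3.10), using the metric diag(1, -1, -1, -1) of Eq. (15.3.8).
# Units with c = 1; the component values are arbitrary.
g = [[1, 0, 0, 0],
     [0, -1, 0, 0],
     [0, 0, -1, 0],
     [0, 0, 0, -1]]
p = [5.0, 3.0, 0.0, 0.0]   # {E/c, p1, p2, p3}

m2c2 = sum(g[m][n] * p[m] * p[n] for m in range(4) for n in range(4))
print(m2c2)   # 25 - 9 = 16.0 > 0, so this four-vector is timelike
```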
15.4 TENSOR PROPERTIES
Tensors have many properties that facilitate their use in applications. Some tensor properties are summarized in this section.

Equality of Tensors

A tensor A^{c1 c2 ... cn}_{d1 d2 ... dm} with n contravariant indices and m covariant indices has contravariant rank n and covariant rank m. Two tensors A, B are equal if they have the same contravariant rank, the same covariant rank, and every component satisfies the equality A^{c1 c2 ... cn}_{d1 d2 ... dm} = B^{c1 c2 ... cn}_{d1 d2 ... dm}.
Contraction

The operation of contraction can be applied to a mixed tensor. A mixed tensor has both a contravariant index and a covariant index. For example, the tensor A^{a1 a2 ... ai ... an}_{b1 b2 ... bj ... bm} is a mixed tensor if the contravariant rank n > 0 and the covariant rank m > 0. Contraction is achieved by setting a contravariant index (such as a_i) equal to a covariant index (such as b_j) and summing over the index. The contraction operation reduces the rank of a tensor by two; the contravariant rank is reduced from n to n - 1, and the covariant rank is reduced from m to m - 1.

Example: The contraction of a mixed second-rank tensor A^i_j is given by A^i_i, where the summation convention is implied. The contracted second-rank tensor A = A^i_i is a tensor of rank zero (the scalar A).

Addition and Subtraction of Tensors

Tensors with the same contravariant rank and the same covariant rank may be added to and subtracted from each other by adding and subtracting their components. Let A^{c1 c2 ... cn}_{d1 d2 ... dm}, B^{c1 c2 ... cn}_{d1 d2 ... dm}, C^{c1 c2 ... cn}_{d1 d2 ... dm} represent tensors that have contravariant rank n and covariant rank m. The addition and subtraction of tensors is given by

A^{c1 c2 ... cn}_{d1 d2 ... dm} ± B^{c1 c2 ... cn}_{d1 d2 ... dm} = C^{c1 c2 ... cn}_{d1 d2 ... dm}

(15.4.1)
Example: The sum of two contravariant second-rank tensors is found from Eq. (15.4.1) to be A^{ij} ± B^{ij} = C^{ij}.

Products of Tensors

The outer product of two tensors is a tensor whose contravariant rank is the sum of the contravariant ranks of the two tensors, and whose covariant rank is the sum of the covariant ranks of the two tensors. Suppose A^{a1 a2 ... an}_{b1 b2 ... bm} is a tensor with contravariant rank n and covariant rank m, and B^{c1 c2 ... cp}_{d1 d2 ... dq} is a tensor with contravariant rank p and covariant rank q. The outer product forms a tensor with contravariant rank n + p and covariant rank m + q. The components of the new tensor are

A^{a1 a2 ... an}_{b1 b2 ... bm} B^{c1 c2 ... cp}_{d1 d2 ... dq} = C^{a1 a2 ... an, c1 c2 ... cp}_{b1 b2 ... bm, d1 d2 ... dq}

(15.4.2)
As noted previously, the outer product of two tensors is also known as the direct product. The inner product of two tensors A^{a1 a2 ... an}_{b1 b2 ... bm}, B^{c1 c2 ... cp}_{d1 d2 ... dq} is obtained by contracting the outer product of the two tensors with respect to one contravariant index and one covariant index.

Example: The outer product of two vectors is a second-rank tensor that is sometimes called the tensor product. If the vectors are the first-rank tensors a, b, the tensor product may be written as a ⊗ b. The components of the tensor product of two first-rank contravariant tensors are given by a^i b^j = c^{ij}. The tensor c is a second-rank contravariant tensor. The tensor product is used to define a mathematical operation called the wedge product [Goldstein et al., 2002, Sec. 7.5]. The wedge product is defined as a ∧ b = a ⊗ b - b ⊗ a with components given by (a ∧ b)^{ij} = a^i b^j - b^i a^j.

Example: The inner product of a contravariant vector a and a covariant vector b is a^i b_i = c. The tensor c is a tensor of rank 0; that is, c is a scalar. Alternatively, we can form the outer product a^i b_j = c^i_j and then form the contraction c^i_i = c, where the summation convention is implied.

Symmetry and Antisymmetry

A contravariant tensor A^{a1 ... ai ... aj ... an} with rank n ≥ 2 is symmetric if, for all values of two contravariant indices a_i, a_j, we have the equality A^{a1 ... ai ... aj ... an} = A^{a1 ... aj ... ai ... an} when the order of the indices a_i, a_j has been interchanged. The contravariant tensor B^{a1 ... ai ... aj ... an} with rank n ≥ 2 is antisymmetric if we have the equality B^{a1 ... ai ... aj ... an} = -B^{a1 ... aj ... ai ... an} when the indices a_i, a_j are interchanged. If a tensor is antisymmetric, the components with a_i = a_j must be zero, since B^{a1 ... ai ... ai ... an} = -B^{a1 ... ai ... ai ... an} = 0. Note that we are not using the summation convention in this case. Analogous definitions apply to covariant tensors.
The definitions of symmetry and antisymmetry apply to two contravariant indices or to two covariant indices; they do not apply when one index is contravariant and the other index is covariant.

Example: The metric tensor g_{ij} is symmetric; that is, g_{ij} = g_{ji}.

Example: A second-rank contravariant tensor C^{ij} is symmetric if C^{ij} = C^{ji}. It is antisymmetric if C^{ij} = -C^{ji}. The symmetric and antisymmetric properties can be used to resolve the second-rank tensor C^{ij} into symmetric and antisymmetric parts when we express the second-rank tensor C^{ij} as the identity

C^{ij} = (1/2)(C^{ij} + C^{ji}) + (1/2)(C^{ij} - C^{ji})

(15.4.3)
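Equation (15.4.3) can be checked component by component; the tensor values below are arbitrary:

```python
# Resolve a second-rank tensor C^{ij} into symmetric and antisymmetric
# parts, Eq. (15.4.3). The component values are arbitrary illustrations.
n = 2
C = [[1.0, 4.0],
     [2.0, 3.0]]

S = [[0.5 * (C[i][j] + C[j][i]) for j in range(n)] for i in range(n)]
A = [[0.5 * (C[i][j] - C[j][i]) for j in range(n)] for i in range(n)]

for i in range(n):
    for j in range(n):
        assert S[i][j] == S[j][i]            # S is symmetric
        assert A[i][j] == -A[j][i]           # A is antisymmetric
        assert S[i][j] + A[i][j] == C[i][j]  # the identity (15.4.3)
```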
The first term on the right-hand side of Eq. (15.4.3) is a symmetric tensor, and the second term is an antisymmetric tensor.

Associated Tensors

An associated tensor is formed by taking the inner product of a tensor with the metric tensor g_{ij} or g^{ij}. Suppose A^{a1 a2 ... j ... an}_{b1 b2 ... bm} is a tensor with contravariant rank n and covariant rank m. The inner product of A^{a1 a2 ... j ... an}_{b1 b2 ... bm} with the covariant metric tensor g_{ij} gives the associated tensor

A^{a1 a2 ... an-1}_{b1 b2 ... i ... bm+1} = g_{ij} A^{a1 a2 ... j ... an}_{b1 b2 ... bm}

(15.4.4)
The inner product with g_{ij} lowers the contravariant index j and introduces another covariant index i. If we use the contravariant metric tensor g^{ij} to form the inner product with A^{a1 a2 ... an}_{b1 b2 ... j ... bm}, we obtain a new associated tensor

A^{a1 a2 ... i ... an+1}_{b1 b2 ... bm-1} = g^{ij} A^{a1 a2 ... an}_{b1 b2 ... j ... bm}

(15.4.5)
The inner product with g^{ij} raises the covariant index j and introduces another contravariant index i.

Example: The tensor associated with the contravariant vector A^j is the covariant vector A_i = g_{ij} A^j. Conversely, the tensor associated with the covariant vector A_j is the contravariant vector A^i = g^{ij} A_j. The summation convention is implied in both cases.

Line Element

The scalar line element ds is determined from the metric tensor g_{ij} and the displacement vector dx^i as

ds^2 = dx_j dx^j = g_{ij} dx^i dx^j

(15.4.6)
where the summation convention is implied.

Christoffel Symbols

The metric tensor is used to form the Christoffel symbol of the first kind

[ij, k] = (1/2)(∂g_{ik}/∂x^j + ∂g_{jk}/∂x^i - ∂g_{ij}/∂x^k)

(15.4.7)
and the Christoffel symbol of the second kind

{ij, k} = g^{kl} [ij, l] = (1/2) g^{kl} (∂g_{il}/∂x^j + ∂g_{jl}/∂x^i - ∂g_{ij}/∂x^l)

(15.4.8)
The shape of the bracket in Eqs. (15.4.7) and (15.4.8) distinguishes the kind of Christoffel symbol. Christoffel symbols look like third-rank tensors, but Christoffel symbols are not tensors. The Christoffel symbols vanish when the elements of the metric tensor are constant, which is the case in Cartesian coordinates. Christoffel symbols satisfy the symmetry relations

[ij, k] = [ji, k],  {ij, k} = {ji, k}

(15.4.9)
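The Christoffel symbols can be checked numerically. The sketch below evaluates Eq. (15.4.7) by central finite differences for the plane polar metric g_11 = 1, g_22 = (x^1)^2, an assumed example metric not taken from the text; the function names are hypothetical:

```python
# Christoffel symbols of the first kind, Eq. (15.4.7), evaluated
# numerically for plane polar coordinates x^1 = r, x^2 = theta, where
# g_11 = 1 and g_22 = (x^1)^2 (assumed example). Indices are 0-based.
def g(x):
    return [[1.0, 0.0], [0.0, x[0] ** 2]]

def dg(x, i, j, k, h=1e-6):
    # central-difference partial of g_ij with respect to x^k
    xp, xm = list(x), list(x)
    xp[k] += h
    xm[k] -= h
    return (g(xp)[i][j] - g(xm)[i][j]) / (2 * h)

def christoffel_first(x, i, j, k):
    # [ij, k] = (1/2)(dg_ik/dx^j + dg_jk/dx^i - dg_ij/dx^k)
    return 0.5 * (dg(x, i, k, j) + dg(x, j, k, i) - dg(x, i, j, k))

x = [2.0, 0.5]   # (r, theta), arbitrary evaluation point
print(christoffel_first(x, 1, 1, 0))   # [22, 1] = -r, approximately -2
print(christoffel_first(x, 0, 1, 1))   # [12, 2] = +r, approximately 2
```

Because the off-diagonal metric elements vanish and g_22 depends only on x^1, the only nonzero symbols involve the index pair shown, in agreement with the symmetry relations (15.4.9).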
Covariant Derivatives

Differentiation in tensor calculus is defined so that the mathematical quantity obtained by differentiating a tensor also transforms as a tensor. The mathematical quantity formed by the differentiation process is a tensor that is called the covariant derivative. Covariant derivatives of vectors and second-rank tensors are given in Table 15.2. The semicolon in the notation A^i_{;j} indicates covariant differentiation with respect to x^j. The covariant derivative of a tensor of rank n includes the usual partial derivative term plus n terms that depend on Christoffel symbols of the second kind {ij, k}. The terms with a Christoffel symbol as a factor are needed to assure that the covariant derivative transforms as a tensor.

EXERCISE 15.8: Calculate the contraction of the Kronecker delta function δ^i_j in n dimensions.
TABLE 15.2 Covariant Derivatives of Vectors and Second-Rank Tensors

Tensor                      | Covariant Derivative
Contravariant vector A^i    | A^i_{;j} = ∂A^i/∂x^j + {kj, i} A^k
Covariant vector A_i        | A_{i;j} = ∂A_i/∂x^j - {ij, k} A_k
Contravariant tensor A^{ij} | A^{ij}_{;k} = ∂A^{ij}/∂x^k + {lk, i} A^{lj} + {lk, j} A^{il}
Covariant tensor A_{ij}     | A_{ij;k} = ∂A_{ij}/∂x^k - {ik, l} A_{lj} - {jk, l} A_{il}
Mixed tensor A^j_i          | A^j_{i;k} = ∂A^j_i/∂x^k - {ik, l} A^j_l + {lk, j} A^l_i
EXERCISE 15.9: Calculate the line element for the metric tensors

[g_ij] =
[ 1 0 ]
[ 0 1 ]

and

[g_ij] =
[ (1 - 2m/x^1)^(-1)       0      ]
[ 0                 1 - 2m/x^1   ]

Assume the parameter m is a constant and x^1 is component 1 in a two-dimensional space with the displacement vector dx^i.
EXERCISE 15.10: Show that [ij, k] + [kj, i] = ∂g_{ik}/∂x^j, where [ , ] is the Christoffel symbol of the first kind.
CHAPTER 16
PROBABILITY
Probability and statistics play a vital role in modern analysis. The concepts of probability and statistics are based on set theory, which is briefly reviewed in Section 16.1 as a prelude to our discussion of probability theory in the remainder of this chapter, probability distributions in Chapter 17, and descriptive statistics in Chapter 18. Further information can be found in a variety of references, such as Walpole and Myers [1985], Larsen and Marx [1985], Kurtz [1991], Blaisdell [1998], Miller and Miller [1999], and Kreyszig [1999].
16.1 SET THEORY
A set is a collection of points, or objects, that satisfy specified criteria. For example, suppose we flip a two-sided coin N times, and each toss outcome is either a head H or a tail T; each outcome is an element of the set. If the set of N elements represents all possible outcomes of the experiment, then the set is called the sample space S. There is only one element in the set S corresponding to each outcome of the experiment, such as the toss of a coin. An element in the sample space is called a sample point or a simple event. An event is a subset of the sample space S. A discrete sample space contains a finite number of elements, while a continuous sample space typically contains the outcomes of measurements of continuous physical properties such as speed and temperature.
Math Refresher for Scientists and Engineers, Third Edition By John R. Fanchi Copyright © 2006 John Wiley & Sons, Inc.
The intersection of two sets A, B is denoted as A ∩ B and defines those elements that are in both sets. If two sets do not have any elements in common, the events are said to be mutually exclusive and the sets are said to be disjoint. The union of two sets is written A ∪ B. It consists of those elements that belong to set A, or to set B, or to both sets. The collection of elements in the sample space S comprises a set called the universal set U. The universal set is the union of a set A with its complement A′ (read "not A"); thus U = A ∪ A′. The empty set Ø is a set with no elements. It is the intersection of A and A′; thus Ø = A ∩ A′. The empty set is a subset of every set, which we write as Ø ⊂ A. Suppose A, B, C are sets of objects in some sample space S. The union of the sets {A, B, C} comprises the universal set U covering the sample space S; that is, A ∪ B ∪ C = U. The sets A, B, C are said to belong to U, or to be subsets of U, which is written as A, B, C ⊂ U. Sets satisfy the following properties:

PROPERTIES OF SETS
Closure: For each pair of sets A, B there exists a unique set A ∪ B and a unique set A ∩ B in S.
Commutative: A ∪ B = B ∪ A and A ∩ B = B ∩ A.
Associative: (A ∪ B) ∪ C = A ∪ (B ∪ C) and (A ∩ B) ∩ C = A ∩ (B ∩ C).
Distributive: A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C) and A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C).
A graphical tool for studying sets is the Venn diagram. The sample space is represented by a rectangle while the subsets are represented by circles within the rectangle. An example of a Venn diagram is shown in Chapter 1.
Permutations

If k objects are selected from n distinct objects, any particular arrangement of these k objects is called a permutation. In general, the total number of permutations of n objects taken k at a time is

P_{n,k} = n!/(n - k)!

When the number of arranged objects k is equal to the number of distinct objects n, the number of permutations of n objects taken n at a time is P_{n,n} = n!
Example: Let the sample space S contain three objects so that S = {A, B, C}. The possible permutations are shown as follows:

PERMUTATIONS OF THREE OBJECTS

Taken 1 at a Time: A, B, C (P_{3,1} = 3!/(3 - 1)! = 3)
Taken 2 at a Time: AB, AC, BA, BC, CB, CA (P_{3,2} = 3!/(3 - 2)! = 6)
Taken 3 at a Time: ABC, ACB, BAC, BCA, CAB, CBA (P_{3,3} = 3!/(3 - 3)! = 6)
Notice that the order of the objects A, B, C matters; that is, ABC and CBA are two separate permutations. Also observe that there is no duplication, or replacement, of any of the objects; for example, AAC is not an acceptable permutation.

Combinations

If k objects are selected from n distinct objects, any particular group of k objects is called a combination. The number of combinations of n things taken k at a time equals the number of permutations divided by k!. The primary difference between permutation and combination is the importance of order. Order must be considered in calculating permutations, but order is not important when calculating combinations. Thus the familiar "combination" lock found in many American schools is actually a permutation lock because the order of the numbers has significance.

Example: The number of permutations of three things {a, b, c} taken three at a time is calculated by counting each possible sequence: {a b c}, {a c b}, {c a b}, {c b a}, {b a c}, and {b c a}. There are 3! = 6 permutations of three things {a, b, c} taken three at a time. If the order of the three things in the sequence does not matter, then there is only one combination of the three things {a, b, c}. The number of combinations of n things taken k at a time is

C_{n,k} = P_{n,k}/k! = n!/(k!(n - k)!)
The symbol \binom{n}{k} = C_{n,k} is often used to represent the number of combinations of n things taken k at a time. The symbol \binom{n}{k} is called the binomial coefficient because it appears in the binomial series

(x + y)^n = Σ_{k=0}^{n} \binom{n}{k} x^{n-k} y^k

where n is a positive integer.

EXERCISE 16.1: Calculate P_{5,4}, C_{5,2}, C_{5,3}, C_{6,6}, and C_{6,0}.
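The counting formulas can be checked directly; the helper names P and C below are local shorthand for P_{n,k} and C_{n,k}:

```python
from math import factorial

# Permutations and combinations of n distinct objects taken k at a time.
def P(n, k):
    # P_{n,k} = n! / (n - k)!
    return factorial(n) // factorial(n - k)

def C(n, k):
    # C_{n,k} = P_{n,k} / k!
    return P(n, k) // factorial(k)

print(P(3, 2))          # 6 permutations: AB, AC, BA, BC, CA, CB
print(C(3, 2))          # 3 combinations: {A,B}, {A,C}, {B,C}
print(P(5, 4), C(5, 2)) # 120 10 (cf. Exercise 16.1)
```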
16.2 PROBABILITY DEFINED
Probability may be defined as either an objective or a subjective probability. An objective probability is a probability that can be calculated using a repeatable, well-defined procedure. For example, the number of times that an event can occur in a set of possible outcomes is the frequency of occurrence of the event, and can be used as an estimate of probability. Thus if an experiment has n different, equally probable outcomes, and if m of these correspond to event A, then the probability of event A is P(A) = m/n.

Subjective probability is not as well defined as objective probability. Subjective probability is an estimate of probability based on prior knowledge, experience, or simple guessing. Subjective probabilities are justified when very little direct evidence is available; otherwise it is preferable to work with objective probabilities.

Probability Tree Diagrams

The preparation of a tree diagram is a systematic method for analyzing a sequence of events in which branching occurs. Tree diagrams make it possible to explicitly portray and then count all of the possible sequences of events. They can be used to calculate objective probabilities.

Example: A restaurant offers a choice of four different salads {S_i : 1 ≤ i ≤ 4}, three different main dishes {M_j : 1 ≤ j ≤ 3}, and two desserts {D_k : 1 ≤ k ≤ 2}. How many distinct meals are offered? Figure 16.1 displays each sequence {S_i M_j D_k}, where the indices have the ranges {1 ≤ i ≤ 4, 1 ≤ j ≤ 3, 1 ≤ k ≤ 2}. We must count each sequence to determine the number of distinct meals. The final number is 24, or 4 × 3 × 2 meals.

EXERCISE 16.2: Four fair coins are each flipped once. Use a probability tree diagram to determine the probability that at least two coins will show heads.
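The branch count in the restaurant example can be reproduced by enumerating the Cartesian product of the choices; the labels S1, M1, and so on are placeholders for the menu items:

```python
from itertools import product

# Count the restaurant's distinct meals by enumerating every branch of
# the tree: one salad, one main dish, one dessert per meal.
salads = ["S1", "S2", "S3", "S4"]
mains = ["M1", "M2", "M3"]
desserts = ["D1", "D2"]

meals = list(product(salads, mains, desserts))
print(len(meals))   # 4 * 3 * 2 = 24
```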
16.3 PROPERTIES OF PROBABILITY
Probability is expected to satisfy a few basic postulates. Let A be an event in the sample space S, and let Ø be the empty set. Then probability P has the following
Figure 16.1 A tree diagram.
fundamental properties:

P(A) ≥ 0
P(S) = 1
P(Ø) = 0

The first postulate requires that probability is a nonnegative number. The second postulate guarantees that at least one possibility in the sample space S will occur. The third postulate says that the probability of obtaining the empty set is zero. If events A and A′ are complementary, then P(A) + P(A′) = 1.

The probability that event A or event B (or both) will occur is given by the additive rule P(A ∪ B) = P(A) + P(B) - P(A ∩ B). The term P(A ∩ B) is zero if events A and B are mutually exclusive or disjoint. The additive rule can be extended to three or more events. In the case of three events A, B, and C, the additive rule is

P(A ∪ B ∪ C) = P(A) + P(B) + P(C) - P(A ∩ B) - P(A ∩ C) - P(B ∩ C) + P(A ∩ B ∩ C)
Conditional and Marginal Probabilities

The probability that event A occurred when it is known that event B occurred is called the conditional probability of A given B. The conditional probability of event A given event B is symbolized as P(A|B). If A and B are two subsets of a sample space S, then the conditional probability of A relative to B is

P(A|B) = P(A ∩ B)/P(B)

if P(B) > 0. The probability P(B) is called the marginal probability of event B. Multiplying the above equation by P(B) gives the multiplicative rule P(A ∩ B) = P(A|B)P(B). If A and B are any two subsets of a sample space S, then the multiplicative rule can be expressed in two symmetric forms: P(A ∩ B) = P(B)P(A|B) and P(A ∩ B) = P(A)P(B|A). The multiplicative rule for three events is P(A ∩ B ∩ C) = P(A)P(B|A)P(C|A ∩ B).

Example: Find the probability of drawing three aces from a deck of cards without replacement. Event A denotes an ace obtained on the first draw, event B denotes an ace obtained on the second draw, and event C denotes an ace obtained on the third draw. The probability of drawing an ace on the first draw from a 52-card deck with 4 aces is P(A) = 4/52. The probability of drawing an ace on the second draw given that an ace has already been drawn and not replaced is P(B|A) = 3/51.
The probability of drawing an ace on the third draw given that aces have already been drawn and not replaced on the first two draws is P(C|A ∩ B) = 2/50. The probability of drawing three aces without replacement is P(A ∩ B ∩ C) = P(A)P(B|A)P(C|A ∩ B) = (4/52)(3/51)(2/50).

Example: Suppose one hundred people qualify for teaching positions. The distribution of backgrounds is given in the following table and represents the sample space for this problem. A review of the table shows that some of the people are married, and some are single; some have three or more years of experience, and some have less than three years of experience. Suppose we must select an applicant at random from this sample space. Use the multiplicative rule P(A ∩ B) = P(B)P(A|B) to determine the probability that the applicant will have three or more years of experience given that the applicant is married.

Distribution of Backgrounds

                                       | Married (M) | Single (M′) | Total
Three or more years experience (E)     | 12          | 24          | 36
Less than three years experience (E′)  | 18          | 46          | 64
Total                                  | 30          | 70          | 100
Let E be the selection of an applicant with three or more years of experience, and M be the selection of an applicant who is married. The marginal probability that the applicant has three or more years of experience is P(E) = 36/100. The marginal probability that the applicant is married is P(M) = 30/100. The joint probability that the applicant is married and has three or more years of experience is P(E ∩ M) = 12/100. The conditional probability that the applicant will have three or more years of experience given that the applicant is married is P(E|M) = P(E ∩ M)/P(M) = 12/30.

Independent Events

Two events A and B are said to be statistically independent if the probability of occurrence of event A is not affected by the occurrence or nonoccurrence of event B, and vice versa. In terms of conditional probabilities, two events A and B are statistically independent if P(A|B) = P(A) and P(B|A) = P(B). The multiplication rule for two independent events is P(A ∩ B) = P(A)P(B). The general multiplication rule for N independent events is

P(A_1 ∩ A_2 ∩ ··· ∩ A_N) = Π_{i=1}^{N} P(A_i)
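The marginal and conditional probabilities from the applicant table can be reproduced with exact rational arithmetic:

```python
from fractions import Fraction

# Counts from the "Distribution of Backgrounds" table.
n_total = 100
n_married = 30
n_experienced_and_married = 12

P_M = Fraction(n_married, n_total)                    # marginal P(M) = 30/100
P_E_and_M = Fraction(n_experienced_and_married, n_total)  # joint = 12/100
P_E_given_M = P_E_and_M / P_M                         # conditional P(E|M)

print(P_E_given_M)   # 12/30 reduced to 2/5
```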
Random Variables

Suppose we are given a sample space S. We can map the elements of S onto the real number axis using a function X whose domain is S and whose range is a set of real numbers. The function X is called a random variable. A random variable X is a discrete random variable if its set of possible outcomes is a finite or countable number of values on the real number axis. The random variable X is a continuous random variable if its outcomes require a continuum of values on the real number axis.

Bayes' Theorem

Bayes' theorem relates conditional and unconditional probabilities for events A and B through the multiplicative rule P(A ∩ B) = P(A)P(B|A). The joint probability P(A ∩ B) is the probability that both events A and B will occur. It is the product of the probability that event B will occur given that event A has occurred and the probability that event A will occur. Thus the occurrence of event A gives us information about the likelihood that event B will occur. If we recall the symmetric forms of the multiplicative rule, namely, P(A ∩ B) = P(B)P(A|B) and P(A ∩ B) = P(A)P(B|A), we can derive the equality P(B)P(A|B) = P(A)P(B|A), or

P(A|B) = P(B|A)P(A)/P(B)
This expression provides a model for learning from observations by updating information with new evidence. From this perspective, the updated probability P(A|B) is proportional to the initial or prior probability P(A) that event A will occur times the likelihood P(B|A) of observing event B given event A. The probability P(B) normalizes the updated or posterior probability P(A|B). The prior probability P(A) refers to a probability that does not account for new information, whereas the posterior probability P(A|B) refers to a probability that does account for new information.

EXERCISE 16.3: Two unbiased six-sided dice, one red and one green, are tossed and the numbers of dots appearing on their upper faces are observed. The results of a dice roll can be expressed as the ordered pair (g, r), where g denotes the green die and r denotes the red die.

(a) What is the sample space S?
(b) What is the probability of throwing a seven?
(c) What is the probability of throwing a seven or a ten?
(d) What is the probability that the red die shows a number less than or equal to three and the green die shows a number greater than or equal to five?
(e) What is the probability that the green die shows a one, given that the sum of the numbers on the two dice is less than four?
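Bayes' theorem is easily exercised numerically. The sketch below uses a hypothetical diagnostic-test scenario (all of the probability values are invented for illustration); P(B) is expanded over the two complementary cases A and A′:

```python
from fractions import Fraction

# Bayes' theorem, P(A|B) = P(B|A) P(A) / P(B), for a hypothetical test:
# event A = "condition present", event B = "test positive".
P_A = Fraction(1, 100)             # prior P(A)
P_B_given_A = Fraction(9, 10)      # likelihood P(B|A)
P_B_given_notA = Fraction(5, 100)  # false-positive rate P(B|A')
P_notA = 1 - P_A

# P(B) = P(B|A)P(A) + P(B|A')P(A'), since A and A' are complementary
P_B = P_B_given_A * P_A + P_B_given_notA * P_notA

# Posterior: the updated probability of A after observing B
P_A_given_B = P_B_given_A * P_A / P_B
print(P_A_given_B)   # exact rational result 2/13, roughly 0.154
```

Note how the small prior keeps the posterior modest even though the likelihood P(B|A) is large; this is the "updating with new evidence" described above.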
Figure 16.2 Dart board.
Application: Dart Board. The meaning of conditional probabilities is illustrated using a simple example from Collins [1985]. Consider a "dart board" with area elements numbered and shaded as shown in Figure 16.2. There are 16 elements, all equally likely to be hit by the toss of a dart. The probability of the joint event of a number j and a color c is

P(j, c) = N(j, c)/N

where N(j, c) is the number of squares showing the number j and the color c, while N is the total number of squares. The unconditional, or marginal, probability for a color c is

P′(c) = Σ_j N(j, c)/N = N(c)/N

where N(c) is the total number of squares showing the color c without regard for the number contained in the square. Then the conditional probability for j to occur, given that the color c has been observed, is

P_c(j|c) = P(j, c)/P′(c) = N(j, c)/N(c)

from the definitions given above. Conditional probability applies to a reduction in the range of possible events. For example, if c is "dark," D, then the conditional probability P_c(j|c) refers to a reduced dart board as in Figure 16.3. Specific numerical results for this example are P(2, D) = 1/16 and P_c(2|D) = 1/6.
Figure 16.3 Reduced dart board.
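The stated dart-board results imply the counts N = 16 squares in all, N(D) = 6 dark squares, and N(2, D) = 1 dark square showing a 2; from those counts the joint, marginal, and conditional probabilities follow directly:

```python
from fractions import Fraction

# Dart-board counts implied by the stated results.
N = 16          # total squares
N_dark = 6      # squares with the dark color D
N_2_dark = 1    # dark squares showing the number 2

P_joint = Fraction(N_2_dark, N)   # P(2, D) = N(2, D)/N
P_dark = Fraction(N_dark, N)      # marginal P'(D) = N(D)/N
P_cond = P_joint / P_dark         # P_c(2|D) = N(2, D)/N(D)

print(P_joint, P_cond)   # 1/16 1/6, matching the text
```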
16.4 PROBABILITY DISTRIBUTION DEFINED
The concept of random variable is used here as the independent variable associated with probability distributions. The type of probability distribution depends on the system of interest and the associated random variable. For example, the most obvious distinction is between discrete and continuous systems. The probability distribution of a discrete random variable is discrete, and the probability distribution of a continuous random variable is continuous. Discrete and continuous probability distributions are discussed here.
Discrete Probability Distribution
Let us define a function f(x) as the probability of a random variable X associated with the outcome x; thus f(x) = P(X = x). The ordered pair {x, f(x)} is called the probability distribution or probability function of the discrete random variable X if

(a) f(x) ≥ 0
(b) Σ_x f(x) = 1
If we wish to calculate the probability that the outcome associated with a random variable X is less than or equal to the real number x, then we must work with the cumulative distribution of the random variable X. In the case of a discrete random variable X, the cumulative distribution F(x) with probability distribution f(x) is

F(x) = P(X ≤ x) = Σ_i f(x_i),  −∞ < x < ∞

where the summation extends over those values of i such that x_i ≤ x.
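The two defining conditions and the cumulative sum can be checked mechanically. The sketch below uses a fair six-sided die as an assumed example distribution:

```python
# Discrete probability distribution f(x) = P(X = x) for a fair die
# (a hypothetical example), plus its cumulative distribution F(x).
f = {x: 1 / 6 for x in range(1, 7)}

# Conditions (a) f(x) >= 0 and (b) sum over x of f(x) = 1:
assert all(p >= 0 for p in f.values())
assert abs(sum(f.values()) - 1.0) < 1e-12

def F(x):
    """F(x) = P(X <= x): sum of f(xi) over those xi with xi <= x."""
    return sum(p for xi, p in f.items() if xi <= x)

print(round(F(3), 4))  # 0.5
print(F(0.5))          # 0.0 (below the support)
```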
Continuous Probability Distribution
Continuous probability distributions are defined in terms of a probability density function ρ(x) for a continuous random variable X defined over a set of real numbers x. The probability density function, often called the probability density, must be nonnegative and its integral with respect to x over the entire sample space must be unity. The mathematical expressions for these requirements are ρ(x) ≥ 0 and the normalization condition

∫_{−∞}^{∞} ρ(x) dx = 1

for all values of x in the interval −∞ < x < ∞.
Figure 16.4  Cumulative distribution P(a < X < b).
The cumulative distribution function for a continuous random variable X with probability density ρ(x) is

F(x) = P(X ≤ x) = ∫_{−∞}^{x} ρ(t) dt,  −∞ < x < ∞

where t is a dummy variable of integration. If the cumulative distribution F(x) is known, the probability density can be found from the derivative

ρ(x) = dF(x)/dx

Using the above definitions and relationships lets us write the probability that X lies between a and b in the form

P(a < X < b) = F(b) − F(a) = ∫_a^b ρ(t) dt

The probability P(a < X < b) is the area under the curve shown in Figure 16.4.
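These relationships can be verified numerically. The sketch below assumes an exponential density ρ(x) = e^(−x) for x > 0 (any normalized density would serve) and checks that F(b) − F(a) matches the area under ρ between a and b:

```python
import math

def rho(x):
    """Assumed density: rho(x) = e^(-x) for x > 0, else 0."""
    return math.exp(-x) if x > 0 else 0.0

def F(x):
    """Its cumulative distribution, F(x) = 1 - e^(-x) for x > 0."""
    return 1.0 - math.exp(-x) if x > 0 else 0.0

a, b = 1.0, 2.0
prob = F(b) - F(a)  # P(a < X < b)

# The same probability as the area under rho, by the midpoint rule:
n = 10000
h = (b - a) / n
area = sum(rho(a + (i + 0.5) * h) for i in range(n)) * h

print(round(prob, 6))  # 0.232544
print(round(area, 6))  # 0.232544
```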
CHAPTER 17
PROBABILITY DISTRIBUTIONS
Fundamental concepts of probability theory were presented in Chapter 16 along with a list of references. Joint, conditional, and marginal probability distributions are defined here, followed by the presentation of examples of probability distributions and the parameters used to characterize them.
17.1  JOINT PROBABILITY DISTRIBUTION
In many cases the behavior of a system must be described by two or more random variables. The probability distribution associated with two or more random variables is called a joint probability distribution. It is illustrated below for the case of two random variables X and Y. The extension of these ideas to k random variables is given in Section 17.3.

Discrete Case
If the random variables X and Y are discrete, a function f(x, y) of the real numbers x, y is a joint probability distribution if f(x, y) ≥ 0 for all values of x and y, and the sum of f(x, y) over all values of x and y must be unity. The latter requirement
is the normalization condition

Σ_x Σ_y f(x, y) = 1
If these conditions are satisfied, the probability of X = x and Y = y is P(X = x, Y = y) = f(x, y). An important distribution known as the marginal distribution can be obtained from the joint distribution function. In particular, the marginal distributions of discrete random variables X and Y are

g(x) = Σ_y f(x, y)  and  h(y) = Σ_x f(x, y)

respectively. Marginal distributions provide information about a single random variable selected from a distribution with more than one random variable. Conditional distributions can be obtained from joint and marginal distributions by dividing the joint distribution by the marginal distribution. For example, the conditional distribution of the random variable Y given X = x is

f(y|x) = f(x, y)/g(x)  if g(x) ≠ 0

A similar relationship holds for f(x|y); thus

f(x|y) = f(x, y)/h(y)  if h(y) ≠ 0
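The marginal and conditional formulas translate directly into code. The joint table below is a small hypothetical example, not taken from the text:

```python
# Joint distribution f(x, y) for two binary random variables (assumed values).
f = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

xs = {x for x, _ in f}
ys = {y for _, y in f}

# Marginals: g(x) = sum over y of f(x, y); h(y) = sum over x of f(x, y).
g = {x: sum(f[(x, y)] for y in ys) for x in xs}
h = {y: sum(f[(x, y)] for x in xs) for y in ys}

def f_y_given_x(y, x):
    """Conditional distribution f(y|x) = f(x, y)/g(x), for g(x) != 0."""
    return f[(x, y)] / g[x]

print(round(g[0], 4))               # 0.3
print(round(f_y_given_x(1, 0), 4))  # 0.6667
# X and Y would be independent only if f(x, y) = g(x) h(y) everywhere:
print(all(abs(f[(x, y)] - g[x] * h[y]) < 1e-12 for x in xs for y in ys))  # False
```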
Rearranging the expressions for conditional probability gives

f(x, y) = g(x) f(y|x) = h(y) f(x|y)

The random variables X and Y are statistically independent if the joint distribution equals the product of the marginal distributions, or f(x, y) = g(x) h(y). Statistical independence implies that the conditional probability of a random variable is equal to the marginal probability of the random variable. In our case, the random variables X and Y are statistically independent if f(x|y) = g(x) and f(y|x) = h(y).

Continuous Case
If the random variables X and Y are continuous, a joint probability density function ρ(x, y) can be defined if the function ρ(x, y) satisfies the nonnegative requirement ρ(x, y) ≥ 0 for all values of (x, y) and the normalization condition

∫_{−∞}^{∞} ∫_{−∞}^{∞} ρ(x, y) dx dy = 1
The probability of obtaining the random variables X and Y in the region R of the xy plane is

P[(X, Y) ∈ R] = F(x, y) = ∫∫_R ρ(x, y) dx dy

The joint probability density can be obtained from the joint probability distribution function F(x, y) by calculating

ρ(x, y) = ∂²F(x, y)/∂x ∂y
The marginal distributions for continuous random variables X and Y are

ρ(x) = ∫_{−∞}^{∞} ρ(x, y) dy  and  ρ(y) = ∫_{−∞}^{∞} ρ(x, y) dx

Conditional probability distributions for the continuous random variables X and Y are given by

ρ(x|y) = ρ(x, y)/ρ(y)  and  ρ(y|x) = ρ(x, y)/ρ(x)
Rearranging these expressions gives the joint probability in the form

ρ(x, y) = ρ(x|y) ρ(y) = ρ(y|x) ρ(x)

As in the discrete case, statistical independence corresponds to the condition that ρ(x, y) = ρ(x) ρ(y), or ρ(x|y) = ρ(x) and ρ(y|x) = ρ(y).

Example: Suppose the joint probability distribution function is F(x, y) = (1 − e^{−x})(1 − e^{−y}) for x, y > 0. F(x, y) has the properties F(0, 0) = 0 and F(∞, ∞) = 1. The probability density is

ρ(x, y) = ∂²F(x, y)/∂x ∂y = e^{−x} e^{−y}
EXERCISE 17.1: What is the probability that x, y are in the intervals 1 ≤ x ≤ 2 and 1 ≤ y ≤ 3 for the joint probability distribution function F(x, y) = (1 − e^{−x})(1 − e^{−y}) for x, y > 0? F(x, y) has the properties F(0, 0) = 0 and F(∞, ∞) = 1, and ρ(x, y) = e^{−x} e^{−y}.
Example: Suppose we have a joint probability density ρ(x, y) = 2(x + 2y)/3 that is valid in the intervals 0 ≤ x ≤ 1 and 0 ≤ y ≤ 1. The conditional probability density ρ(x|y) is obtained by first calculating the marginal distribution function

ρ(y) = ∫₀¹ (2/3)(x + 2y) dx = (2/3)(1/2 + 2y)

and then using the definition of conditional probability to obtain

ρ(x|y) = ρ(x, y)/ρ(y) = (x + 2y)/(1/2 + 2y)
EXERCISE 17.2: Use the conditional probability ρ(x|y) = (x + 2y)/(1/2 + 2y) defined in the intervals 0 ≤ x ≤ 1 and 0 ≤ y ≤ 1 to determine the probability that x will have a value less than 1/2, given y = 1/2.

Example: The statistical independence of the variables x and y for the joint probability density ρ(x, y) = 2x²y²/81 is determined by calculating the marginal probability densities. The variables x and y are defined in the interval 0 < x < y < 3. The marginal density for x is

ρ(x) = ∫_x^3 (2x²y²/81) dy = (2/81) x² (9 − x³/3)

and the marginal density for y is

ρ(y) = ∫₀^y (2x²y²/81) dx = (2/81) y² (y³/3)

Taking the product of marginal densities gives

ρ(x) ρ(y) = (2/81)² x² y² (9 − x³/3)(y³/3) ≠ ρ(x, y)

which demonstrates that x and y are statistically dependent. Notice that the definition of the interval established a dependence of the random variables.

EXERCISE 17.3: Determine the statistical independence of the variables x and y for the joint probability density ρ(x, y) = x²y²/81 by calculating the marginal probability densities. The variables x and y are defined in the intervals 0 < x < 3 and 0 < y < 3. Note the difference between this exercise and the interval defined in the preceding example.
17.2  EXPECTATION VALUES AND MOMENTS
Probability distributions are characterized by expectation values and moments. These quantities are defined below.

Expectation Values
Much useful information about the expected behavior of a particular system can be obtained from a probability distribution. The quantities used to extract this information are referred to as expectation values. The average value of a set of measurements is an example of an expectation value. Let X be a random variable with probability distribution f(x). Then the expected value of X, E(X), is defined as

E(X) = Σ_x x f(x)

if X is a discrete random variable, and

E(X) = ∫_{−∞}^{∞} x ρ(x) dx

if X is a continuous random variable with probability density ρ(x). The expected value E(X) is usually referred to as the average value. In general, the expected value of a function g of a random variable X is defined as

E[g(X)] = Σ_x g(x) f(x)

if X is a discrete random variable and

E[g(X)] = ∫_{−∞}^{∞} g(x) ρ(x) dx

if X is a continuous random variable. Given the above definitions of expectation value in terms of sums and integrals, it is straightforward to derive the following useful properties:

1. E[aX + bY] = aE(X) + bE(Y).
2. E[XY] = E(X) E(Y) if X and Y are statistically independent.

Moments

Moments About the Origin  The moments about the origin of a probability distribution are the expected values of powers of the random variable that has the given
distribution. The rth moment of X, usually denoted by μ'_r, is defined as

μ'_r = E[X^r] = Σ_x x^r f(x)

if X is a discrete random variable and

μ'_r = E[X^r] = ∫_{−∞}^{∞} x^r ρ(x) dx

if X is a continuous random variable. The first moment, μ'₁, is called the mean of random variable X and is usually denoted by μ.
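Both definitions reduce to a one-line sum for a discrete distribution. The sketch below computes the first two moments about the origin of a fair six-sided die (an assumed example):

```python
# Moments about the origin: mu'_r = E[X^r] = sum over x of x^r f(x).
f = {x: 1 / 6 for x in range(1, 7)}  # fair six-sided die

def moment_about_origin(r):
    return sum(x ** r * p for x, p in f.items())

mean = moment_about_origin(1)     # first moment = the mean
second = moment_about_origin(2)   # E[X^2]
variance = second - mean ** 2     # second moment about the mean (next subsection)

print(round(mean, 4))      # 3.5
print(round(variance, 4))  # 2.9167, i.e. 35/12
```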
EXERCISE 17.4: Calculate the first moment, or mean, of the discrete distribution

f(x) = C(N, x) p^x (1 − p)^{N−x}

for 0 ≤ x ≤ N, with the binomial coefficient C(N, x) = N!/[x!(N − x)!] and p a constant.

Moments About the Mean  The rth moment about the mean, usually denoted by μ_r, is defined as

μ_r = E[(X − μ)^r] = Σ_x (x − μ)^r f(x)

if X is a discrete random variable and

μ_r = E[(X − μ)^r] = ∫_{−∞}^{∞} (x − μ)^r ρ(x) dx
if X is a continuous random variable. The second moment about the mean, μ₂, is

μ₂ = E[(X − μ)²] = μ'₂ − μ²

The moment μ₂ is called the variance of the random variable X and is denoted by σ². The square root of the variance, σ, is called the standard deviation.

Moment Generating Functions  The moment generating function of the random variable X is defined in terms of the expected value E(e^{tX}) of the exponential function e^{tX}, where t is a dummy parameter. In particular, the moment generating
PROBABILITY DISTRIBUTIONS
235
function is

m_X(t) = E(e^{tX}) = Σ_x e^{tx} f(x)

if X is a discrete random variable and

m_X(t) = E(e^{tX}) = ∫_{−∞}^{∞} e^{tx} ρ(x) dx

if X is a continuous random variable. The rth moment about the origin is

μ'_r = [d^r m_X(t)/dt^r]_{t=0}

for r = 0, 1, 2, … . The series expansion

m_X(t) = E(e^{tX}) = E[1 + Xt + (Xt)²/2! + ⋯] = 1 + μ'₁ t + μ'₂ t²/2! + ⋯

shows that the moments about the origin μ'_r appear as coefficients of t^r/r!. The function m_X(t) may be regarded as generating the moments. The moments about the mean μ_r may be generated by the generating function

M_X(t) = E[e^{t(X−μ)}] = e^{−μt} E(e^{tX}) = e^{−μt} m_X(t)

Example: The moment generating function of the normal distribution (described in Section 17.4) is

m_X(t) = exp(μt + σ²t²/2)

The mean is the first moment, given by

[d m_X(t)/dt]_{t=0} = [(μ + σ²t) exp(μt + σ²t²/2)]_{t=0} = μ
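The generating property can be checked numerically: differentiating m_X(t) at t = 0 should reproduce the moments. The sketch below does this for a fair six-sided die (an assumed example) using central finite differences:

```python
import math

f = {x: 1 / 6 for x in range(1, 7)}  # fair six-sided die

def mgf(t):
    """m_X(t) = E[e^(tX)] = sum over x of e^(tx) f(x)."""
    return sum(math.exp(t * x) * p for x, p in f.items())

h = 1e-4  # step for numerical differentiation
first_moment = (mgf(h) - mgf(-h)) / (2 * h)
second_moment = (mgf(h) - 2 * mgf(0.0) + mgf(-h)) / h ** 2

print(round(first_moment, 4))   # 3.5 (the mean)
print(round(second_moment, 3))  # 15.167 = E[X^2] = 91/6
```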
17.3  MULTIVARIATE DISTRIBUTIONS
Much of the discussion presented above for one or two random variables can readily be extended to distributions with k random variables. Distributions with more than one random variable are called multivariate distributions. Properties of multivariate distributions are presented here.
Discrete Case
The k-dimensional random variable (X1, X2, …, Xk) is a k-dimensional discrete random variable if it assumes values only at a finite or denumerable number of points (x1, x2, …, xk). The joint probability f(x1, x2, …, xk) of the k-dimensional random variable is defined as

P[X1 = x1, X2 = x2, …, Xk = xk] = f(x1, x2, …, xk)

for every value that the set of random variables can assume. If E is a subset of the set of values that the random variables can assume, then the probability of E is

P(E) = P[(X1, X2, …, Xk) is in E] = Σ_E f(x1, x2, …, xk)

where the sum is over all those points in E. The cumulative distribution is defined as

F(x1, x2, …, xk) = Σ_{x1} Σ_{x2} ⋯ Σ_{xk} f(x1, x2, …, xk)
Continuous Case
The k random variables X1, X2, …, Xk are said to be jointly distributed if there exists a function ρ such that ρ(x1, x2, …, xk) ≥ 0 for all −∞ < xi < ∞, i = 1, 2, …, k, and the probability of an event E in the sample space is given by

P(E) = P[(X1, X2, …, Xk) is in E] = ∫_E ρ(x1, x2, …, xk) dx1 dx2 ⋯ dxk

The function ρ(x1, x2, …, xk) is called the joint probability density of the random variables X1, X2, …, Xk. The cumulative distribution is defined as

F(x1, x2, …, xk) = ∫_{−∞}^{x1} ∫_{−∞}^{x2} ⋯ ∫_{−∞}^{xk} ρ(x1, x2, …, xk) dxk ⋯ dx2 dx1

Given the cumulative distribution, the joint probability density can be obtained by forming the derivative

ρ(x1, x2, …, xk) = ∂^k F(x1, x2, …, xk)/∂x1 ∂x2 ⋯ ∂xk
Expectation Values and Moments
The rth moment of the ith random variable Xi is defined in terms of expectation values as

E(Xi^r) = Σ_{x1} Σ_{x2} ⋯ Σ_{xk} xi^r f(x1, x2, …, xk)

if the Xi are discrete random variables and

E(Xi^r) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} ⋯ ∫_{−∞}^{∞} xi^r ρ(x1, x2, …, xk) dx1 dx2 ⋯ dxk

if the Xi are continuous random variables. Joint moments about the origin are defined as the expectation value

E(X1^{r1} X2^{r2} ⋯ Xk^{rk})

where r1 + r2 + ⋯ + rk is the order of the moment. Joint moments about the mean are defined as

E[(X1 − μ1)^{r1} (X2 − μ2)^{r2} ⋯ (Xk − μk)^{rk}]

Joint moment generating functions for k random variables may also be defined. The reader should consult the references for further details.
Marginal and Conditional Distributions
If the k discrete random variables X1, X2, …, Xk have the joint distribution f(x1, x2, …, xk), then the marginal distribution of the subset of random variables {Xi : i = 1, …, p with p < k} is

g(x1, x2, …, xp) = Σ_{x_{p+1}} Σ_{x_{p+2}} ⋯ Σ_{x_k} f(x1, x2, …, xk)

The conditional distribution of the subset of discrete random variables X1, X2, …, Xp given X_{p+1}, X_{p+2}, …, Xk is

h(x1, x2, …, xp | x_{p+1}, x_{p+2}, …, xk) = f(x1, x2, …, xk)/g(x_{p+1}, x_{p+2}, …, xk)

if g(x_{p+1}, x_{p+2}, …, xk) ≠ 0
Similarly, if the k random variables X1, X2, …, Xk are continuous with a joint density ρ(x1, x2, …, xk), then the marginal distribution of {Xi : i = 1, …, p with p < k} is

ρ(x1, x2, …, xp) = ∫_{−∞}^{∞} ⋯ ∫_{−∞}^{∞} ρ(x1, x2, …, xk) dx_{p+1} ⋯ dx_{k−1} dx_k

The conditional distribution of the subset of continuous random variables X1, X2, …, Xp given X_{p+1}, X_{p+2}, …, Xk is

ρ(x1, x2, …, xp | x_{p+1}, x_{p+2}, …, xk) = ρ(x1, x2, …, xk)/ρ(x_{p+1}, x_{p+2}, …, xk)

if ρ(x_{p+1}, x_{p+2}, …, xk) ≠ 0

Variance and Covariance
The variance σi² of a random variable Xi is defined as the expectation value

σi² = E[(Xi − μi)²]

The square root of the variance, σi, is the standard deviation. The covariance σij of random variables Xi and Xj is given by the expectation value

σij = ρij σi σj = E[(Xi − μi)(Xj − μj)]

where ρij is the correlation coefficient and σi and σj are the standard deviations of Xi and Xj. The variance is equal to the covariance σii in which i = j, and the corresponding correlation coefficient is ρii = 1.
EXERCISE 17.5: Suppose we are given the joint probability density ρ(x, y, z) = (x + y)e^{−z} for three random variables defined in the intervals 0 < x < 1, 0 < y < 1, and 0 < z. (a) Find the joint probability density ρ(x, y). (b) Find the marginal probability density ρ(y).

Application: Quantum Mechanics and Probability Theory. Max Born first introduced the idea that quantum mechanics and probability were linked in 1926. Since then, the physics community has debated the precise meaning of the quantum mechanical wavefunction. One interpretation relies on a direct relationship between quantum theory and probability theory. The relationship is based on a derivation of quantum mechanical wave equations from fundamental assumptions in probability. An example of this derivation is outlined here.
Construction of the nonrelativistic Schrödinger equation from fundamental probability concepts begins with the assumption that a conditional probability density ρ(y|t) exists. The symbol y denotes a set of three spatial coordinates in the volume D for which ρ(y|t) has nonzero values. The jth component of the position vector of a particle is written as y^j, where j = 1, 2, 3. Indices 1, 2, 3 signify space components. The symbol t plays the role of time in nonrelativistic theory. In this formalism, nonrelativistic time t is an evolution parameter that conditions the probability density ρ(y|t). According to probability theory, ρ(y|t) must be positive-definite and normalizable; thus ρ(y|t) ≥ 0 and

∫_D ρ(y|t) dy = 1,  dy = d³y = dy¹ dy² dy³

Conservation of probability implies the continuity equation

∂ρ/∂t + Σ_{j=1}^{3} ∂(ρV^j)/∂y^j = 0

where the term ρV^j represents the jth component of the probability flux, and V^j is a velocity vector. For ρ to be differentiable and nonnegative, its derivative must satisfy ∂ρ/∂t = 0 if ρ = 0 [Collins, 1977, 1979]. The positive-definite requirement for ρ(y|t) is satisfied by writing ρ(y|t) in the Born representation; thus

ρ(y|t) = ψ*(y, t) ψ(y, t)

The scalar eigenfunctions ψ can be written as

ψ(y, t) = [ρ(y|t)]^{1/2} exp[iξ(y, t)]

where ξ(y, t) is a real scalar function. The Schrödinger equation is derived by expressing the velocity {V^j} as

V^j(y, t) = (1/m)[ℏ ∂ξ(y, t)/∂y^j − (e/c) A^j(y, t)]

We assume the vector A^j and consequently V^j are real functions. Using these assumptions, we can derive the field equation

iℏ ∂ψ/∂t = (1/2m) Σ_{j=1}^{3} π^{j*} π^j ψ + Uψ
where U is Hermitian. The operators π^j and p^j are defined by

π^j ≡ p^j − (e/c) A^j,  p^j ≡ (ℏ/i) ∂/∂y^j

Given the form of the equations, the vector A^j is identified as the electromagnetic vector potential acting on the particle. Additional interactions may be incorporated via the potential function U. For more information, the reader should see Fröhner's [1998] relatively recent discussion of the relationship between probability theory and quantum mechanics. A similar derivation can be used to obtain equations that behave like relativistic wave equations [Fanchi, 1993]. An example of one such relativistic equation is discussed in Section 12.3.
17.4  EXAMPLE PROBABILITY DISTRIBUTIONS
Many probability distributions have been developed to describe a variety of systems ranging from social to scientific. A few probability distributions are outlined below to illustrate their form and function. For more details, the reader should consult the references, such as Walpole and Myers [1985], Larsen and Marx [1985], Kurtz [1991], Blaisdell [1998], Miller and Miller [1999], and Kreyszig [1999].

Discrete Case

Uniform Distribution  The discrete random variable X has a uniform distribution if its probability function is

P(X = x) = f(x) = 1/n,  x = x1, x2, …, xn

The mean and variance of the uniform distribution, for the case xi = i (i = 1, …, n), are

Mean = μ = (n + 1)/2
Variance = σ² = (n² − 1)/12

Example: The probability distribution for the roll of a die with 6 sides is a uniform distribution with n = 6, μ = 3.5, and σ = 1.7.

Binomial Distribution  The discrete random variable X has a binomial distribution if its probability function is

P(X = x) = f(x) = C(n, x) θ^x (1 − θ)^{n−x},  x = 0, 1, 2, …, n
with the binomial coefficient

C(n, x) = n!/[x!(n − x)!]

The mean and variance of the binomial distribution are

Mean = μ = nθ
Variance = σ² = nθ(1 − θ)

The binomial distribution arises when the outcomes of a trial can be classified as one of two possible events. In this case n is the number of independent trials and X is the number of times that a given event occurs. The binomial distribution gives the probability that x successes will occur out of n independent trials, given that θ is the probability of success for one trial.

Geometric Distribution  The discrete random variable X has a geometric distribution if its probability function is

P(X = x) = f(x) = θ(1 − θ)^{x−1},  x = 1, 2, 3, …

The mean and variance of the geometric distribution are

Mean = μ = 1/θ
Variance = σ² = (1 − θ)/θ²

The geometric distribution can be used to determine the probability that a given event will occur on a particular trial, assuming that a trial is performed repeatedly and that each trial is independent. In this case, θ is the probability that a given event will occur on a single trial, and X = x is the number of trials required to obtain the given event.

Example: The probability of rolling a 1 using a single die with six sides is 1/6. If you are rolling the die until you get a 1, the probability that you will have to roll five times is found from the geometric distribution to be

P(5) = (1/6)(1 − 1/6)^{5−1} ≈ 0.08
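The binomial and geometric probability functions are easy to code directly from their definitions; the die example above is reproduced at the end:

```python
from math import comb

def binomial_pmf(x, n, theta):
    """P(X = x) = C(n, x) theta^x (1 - theta)^(n - x)"""
    return comb(n, x) * theta ** x * (1 - theta) ** (n - x)

def geometric_pmf(x, theta):
    """P(X = x) = theta (1 - theta)^(x - 1), x = 1, 2, 3, ..."""
    return theta * (1 - theta) ** (x - 1)

# The binomial mean computed from the definition agrees with n*theta:
n, theta = 10, 0.3
mean = sum(x * binomial_pmf(x, n, theta) for x in range(n + 1))
print(round(mean, 6))  # 3.0

# Probability that the first 1 appears on the fifth roll of a fair die:
print(round(geometric_pmf(5, 1 / 6), 4))  # 0.0804
```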
Poisson Distribution  The discrete random variable X has a Poisson distribution if its probability function is

P(X = x) = f(x) = e^{−λ} λ^x / x!,  λ > 0,  x = 0, 1, …

The mean and variance of the Poisson distribution are

Mean = μ = λ
Variance = σ² = λ

The Poisson distribution is the limiting case of the binomial distribution when n → ∞ and θ → 0 such that the product nθ → constant λ.

Hypergeometric Distribution  The discrete random variable X has a hypergeometric distribution if its probability function is

P(X = x) = f(x) = C(k, x) C(N − k, n − x) / C(N, n),  x = 0, 1, 2, …, [n, k]

where [n, k] means the smaller of the two numbers n, k. The mean and variance of the hypergeometric distribution are

Mean = μ = kn/N
Variance = σ² = k(N − k) n (N − n) / [N²(N − 1)]

The hypergeometric distribution arises when elements are drawn from a set without replacement. The random variable X is the number of times that an element of a given type is drawn.

Example: Suppose a box contains 6 blue balls and 8 red balls. Ten balls will be taken from the box without replacement. If we want to know the probability that 5 of the balls will be blue, we use the hypergeometric function. In this case N is the number of balls in the original set (N = 14); n is the number of balls drawn (n = 10); k is the number of blue balls in the original set (k = 6); and X is the number of balls that will be drawn (X = x = 5). The probability of obtaining
5 blue balls when drawing 10 balls from the box is

P(5) = C(6, 5) C(14 − 6, 10 − 5) / C(14, 10) = (6)(56)/(1001) ≈ 0.336

Continuous Case

Uniform Distribution  The continuous random variable X has a uniform distribution if its probability density is

ρ(x) = 1/(b − a) for a < x < b,  ρ(x) = 0 otherwise

where a and b are parameters with a < b. The mean and variance of the uniform distribution are

Mean = μ = (a + b)/2
Variance = σ² = (b − a)²/12

Normal Distribution  The continuous random variable X has a normal or Gaussian distribution if its probability density is

ρ(x) = (1/(√(2π) σ)) e^{−(x−μ)²/2σ²},  −∞ < x < ∞

where μ and σ are parameters. For the normal distribution, the parameters μ and σ are also the mean and standard deviation of the random variable X, respectively:

Mean = μ
Variance = σ²

The normal distribution is useful for describing very large sample groups. According to the central limit theorem, many probability distributions approach the normal distribution as the number of elements in the sample group approaches infinity. The cumulative probability for the normal distribution is

F(x) = ∫_{−∞}^{x} (1/(√(2π) σ)) e^{−(t−μ)²/2σ²} dt

The standard normal distribution has a mean of zero and variance of one. If we set z = (x − μ)/σ in F(x), we obtain the cumulative standard normal distribution.
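The normal cumulative distribution has no elementary closed form, but it can be written in terms of the error function erf, which the Python standard library provides. A minimal sketch:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Normal density (1/(sqrt(2*pi)*sigma)) exp(-(x - mu)^2 / (2 sigma^2))."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Cumulative distribution via the error function: 0.5 (1 + erf(z/sqrt(2)))."""
    z = (x - mu) / sigma  # the standardizing substitution from the text
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# About 68.3% of the probability lies within one standard deviation:
print(round(normal_cdf(1.0) - normal_cdf(-1.0), 4))  # 0.6827
```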
EXERCISE 17.6: Determine the expectation value E(X) using the probability density for the normal distribution.

Gamma Distribution  The continuous random variable X has a gamma distribution if its probability density is

ρ(x) = x^a e^{−x/b} / [Γ(a + 1) b^{a+1}],  0 < x < ∞

where a and b are parameters with a > −1 and b > 0. The gamma function Γ(z) is defined as the integral

Γ(z) = ∫₀^∞ y^{z−1} e^{−y} dy

for z > 0. The mean and variance of the gamma distribution are

Mean = μ = b(a + 1)
Variance = σ² = b²(a + 1)

If a = 0, the gamma distribution simplifies to the exponential distribution.

Exponential Distribution  The continuous random variable X has an exponential distribution if its probability density is

ρ(x) = (1/θ) e^{−x/θ},  x > 0

where θ is a parameter and θ > 0. The mean and variance of the exponential distribution are

Mean = μ = θ
Variance = σ² = θ²

The exponential distribution is often used to describe the amount of a radioactive substance remaining at a given time.
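The stated mean and variance of the exponential distribution can be checked by direct numerical integration of the density (here with the assumed parameter value θ = 2):

```python
import math

theta = 2.0  # assumed parameter value

def rho(x):
    """Exponential density (1/theta) e^(-x/theta) for x > 0."""
    return math.exp(-x / theta) / theta if x > 0 else 0.0

# Midpoint-rule integrals over [0, 60]; the tail beyond is negligible.
n, upper = 100000, 60.0
h = upper / n
total = mean = second = 0.0
for i in range(n):
    x = (i + 0.5) * h
    w = rho(x) * h
    total += w           # normalization integral
    mean += x * w        # E[X]
    second += x * x * w  # E[X^2]

print(round(total, 4))               # 1.0
print(round(mean, 4))                # 2.0  (= theta)
print(round(second - mean ** 2, 3))  # 4.0  (= theta^2)
```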
CHAPTER 18
STATISTICS
Fundamental concepts used in descriptive statistics are reviewed here, beginning with the presentation of the relationship between probability and frequency. This is followed by a discussion of the statistics of grouped and ungrouped data. Several statistical coefficients for describing a distribution of data are defined. Commonly used formulas for curve fitting and regression are then summarized.
18.1  PROBABILITY AND FREQUENCY
The relationship between probability and frequency is based on the law of large numbers of probability theory. Consider an event E = Ej that is an element of a set of J events. Suppose that out of N trials the event Ej is observed to occur Nj times. The observed frequency fj of the event is

fj = Nj/N

The law of large numbers states that the probability Pj that the event Ej will occur is related to the frequency of occurrence fj by the limit

lim_{N→∞} |Pj − fj| = 0
In theory, the number of trials should approach infinity for the law of large numbers to apply. In practice, the law of large numbers means that the frequency fj can be used for the probability Pj that the event Ej will occur when the number of trials is sufficiently large. The term "sufficiently large" depends on the situation of interest.

Example: Suppose a six-sided die with numbers {1, 2, 3, 4, 5, 6} on the sides is rolled 100 times and the number 1 appears on the top of the die 18 times. The observation of the number 1 on the top of the rolled die is event E1 of the set of events {E1, …, E6} = {1, …, 6} with J = 6 events. The frequency of obtaining the number 1 is f1 = N1/N = 18/100, and the corresponding probability is P1 ≈ f1 = 0.18. As the number of die rolls increases, the frequency will approach 1/6 for an unbiased die.

Data Grouping
Suppose we have a set with n data elements {xi : i = 1, 2, …, n}. Each data element xi has a value, and the range of values is the difference between the largest and smallest values in the set of elements. The set of elements is considered ungrouped if each element in the set is treated as a separate element rather than as a member of a subset with two or more elements. The elements in the set can be grouped by collecting elements in classes. A class is a collection of elements having a value in a subinterval of the range of values. The number of classes is subjective, but is often somewhere between 5 classes and 20 classes [Kurtz, 1991]. Each class has the same interval or uniform width c. The lower value of the class is the lower class limit L. If the class does not have a lower or an upper limit, it is an open class. The number of data points in the class is the class frequency f, as illustrated in Table 18.1. Cumulative frequency in a class interval is the sum of the frequencies of all class intervals up to and including the current class interval.
The cumulative frequency for the last class is the total number of elements in the set.
TABLE 18.1  Example of Grouped Statistical Data

Class Interval    Number of Occurrences    Frequency    Cumulative Frequency
0.05 to 0.10                2               0.0317             0.0317
0.10 to 0.15                4               0.0635             0.0952
0.15 to 0.20               15               0.2381             0.3333
0.20 to 0.25               24               0.3810             0.7143
0.25 to 0.30               14               0.2223             0.9366
0.30 to 0.35                2               0.0317             0.9683
0.35 to 0.40                2               0.0317             1.0000
Total                      63               1.0000
The actual value of each element disappears in a class and is replaced by a value that is uniformly distributed. Thus the rth value in the class is

Xr = L + c (r/f)

The midpoint or mark of a class is an element with the value L + (c/2); that is, the midpoint of the class is halfway between the lower and upper limits of the class. If an element has the value L corresponding to the upper limit of one class and the lower limit L of an adjacent class, it is necessary to specify which class will be assigned elements with values equal to L.

Example: Suppose a class of measurements has an interval ranging from 0.20 to 0.25. The interval "0.20 to 0.25" in this case is actually the interval "0.20 up to but not including 0.25." The number of observations in the class is 10. The fourth value in the class is X4 = 0.20 + (4/10) × 0.05 = 0.22. The mark or midpoint of the class is 0.225. If a measurement has the value 0.25, it will be assigned to the class of measurements with an interval ranging from 0.25 to 0.30.

Frequency Diagrams
It is often useful to provide a visual representation of the frequency of occurrence of data. This can be done by plotting the value of a datum along the horizontal axis and its frequency of occurrence along the vertical axis. If the frequency of occurrence is divided by the total number of data points, the frequency diagram becomes a diagram showing a probability distribution, subject to the conditions associated with the law of large numbers described earlier in this section. Grouped data are represented by histograms. The histogram is a presentation of statistical data as rectangles. The width of the rectangle is the width of the class and the height of the rectangle is the frequency of the class. For further discussion of techniques for presenting statistical data, see such references as Kurtz [1991] and Kreyszig [1999].

EXERCISE 18.1: Prepare a histogram of the grouped data in Table 18.1.

18.2  UNGROUPED DATA
Several different means may be defined for a set of data elements. The arithmetic mean for a set of ungrouped data with n elements is

x̄ = (1/n) Σ_{i=1}^{n} xi = (x1 + x2 + ⋯ + xn)/n
where x_i denotes the value of the ith element. A weighted arithmetic mean can be defined by associating a weighting factor w_i \ge 0 with each value. The weighted arithmetic mean is then

\[ \bar{x}_w = \frac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} w_i} = \frac{w_1 x_1 + w_2 x_2 + \cdots + w_n x_n}{w_1 + w_2 + \cdots + w_n} \]

The geometric mean for ungrouped data is

\[ \bar{x}_G = \sqrt[n]{x_1 x_2 \cdots x_n} \]

The harmonic mean for ungrouped data is

\[ \bar{x}_H = \frac{n}{\sum_{i=1}^{n} 1/x_i} = \frac{n}{\dfrac{1}{x_1} + \dfrac{1}{x_2} + \cdots + \dfrac{1}{x_n}} \]

The relationship between arithmetic, geometric, and harmonic means is

\[ \bar{x}_H \le \bar{x}_G \le \bar{x} \]

where the equality signs apply only if all sample values are identical.

A mode Mo of a sample of size n is the value that occurs with greatest frequency; that is, it is the most common value. The class with the highest frequency is the modal class. A mode may not exist, and if it does exist it may not be unique. For example, if a distribution has two values that occur with the same maximum frequency, the distribution is said to be bimodal.

The median is often used as a representative value of a set of data. If a sample with n elements is arranged in ascending order of magnitude, then the median Md is given by the (n + 1)/2 value. When n is odd, the median is the middle value of the set of ordered data; when n is even, the median is the arithmetic mean of the two middle values of the set of ordered data.

The mean absolute deviation for ungrouped data is

\[ \mathrm{M.A.D.} = \frac{1}{n}\sum_{i=1}^{n} |x_i - \bar{x}| \]

where \bar{x} is the arithmetic mean. The standard deviation for ungrouped data is the root mean square of the deviations from the arithmetic mean; thus

\[ s = \sqrt{\frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n}} \]

The variance is the square of the standard deviation. The root mean square for ungrouped data is

\[ \mathrm{R.M.S.} = \left[\frac{1}{n}\sum_{i=1}^{n} x_i^2\right]^{1/2} \]
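The ungrouped-data measures above can be checked numerically. The following Python sketch (the data values are hypothetical, not from the book) computes the arithmetic, geometric, and harmonic means, the median, the mean absolute deviation, the standard deviation, and the root mean square, using the population forms (divide by n) given in this section.

```python
import math

def ungrouped_stats(x):
    """Descriptive statistics for ungrouped data, dividing by n as in the text."""
    n = len(x)
    mean = sum(x) / n
    geo = math.prod(x) ** (1.0 / n)                       # geometric mean
    har = n / sum(1.0 / v for v in x)                     # harmonic mean
    xs = sorted(x)
    mid = n // 2
    median = xs[mid] if n % 2 == 1 else 0.5 * (xs[mid - 1] + xs[mid])
    mad = sum(abs(v - mean) for v in x) / n               # mean absolute deviation
    std = math.sqrt(sum((v - mean) ** 2 for v in x) / n)  # population standard deviation
    rms = math.sqrt(sum(v * v for v in x) / n)            # root mean square
    return mean, geo, har, median, mad, std, rms

# hypothetical sample
mean, geo, har, median, mad, std, rms = ungrouped_stats([1.0, 2.0, 4.0, 8.0])
# the ordering harmonic <= geometric <= arithmetic always holds
assert har <= geo <= mean
```

The final assertion illustrates the mean inequality stated in the text.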
18.3 GROUPED DATA
The total number of observations in a frequency distribution having k classes with corresponding class frequencies \{f_i : i = 1, 2, \ldots, k\} is

\[ n = \sum_{i=1}^{k} f_i \]

The arithmetic mean for grouped data is

\[ \bar{x} = \frac{1}{n}\sum_{i=1}^{k} f_i x_i = \frac{f_1 x_1 + f_2 x_2 + \cdots + f_k x_k}{n} \]

where \{x_i : i = 1, 2, \ldots, k\} is the set of class marks. The geometric mean for grouped data is

\[ \bar{x}_G = \sqrt[n]{x_1^{f_1} x_2^{f_2} \cdots x_k^{f_k}} \]

The harmonic mean for grouped data is

\[ \bar{x}_H = \frac{n}{\sum_{i=1}^{k} f_i/x_i} = \frac{n}{\dfrac{f_1}{x_1} + \dfrac{f_2}{x_2} + \cdots + \dfrac{f_k}{x_k}} \]

The relationship between arithmetic, geometric, and harmonic means is

\[ \bar{x}_H \le \bar{x}_G \le \bar{x} \]

where the equality signs apply only if all sample values are identical.

EXERCISE 18.2: What is the mean of the grouped data in Table 18.1?

The modal class is the class with the highest frequency. The mode Mo for grouped data is

\[ Mo = L + C\,\frac{d_1}{d_1 + d_2} \]

where L is the lower limit of the modal class, C is the width of the modal class, d_1 is the difference between the frequency of the modal class and that of the preceding class, and d_2 is the difference between the frequency of the modal class and that of the succeeding class.

Example: The mode for the grouped data in Table 18.1 is 0.20 + 0.05(24 - 15)/[(24 - 15) + (24 - 14)] = 0.224. The modal class has the interval 0.20 to 0.25.
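The grouped-data mean and mode formulas can be sketched in Python as follows. The frequencies 15, 24, and 14 around the modal class are taken from the worked example; everything else is a hypothetical illustration, not code from the book.

```python
def grouped_mean(marks, freqs):
    """Arithmetic mean of grouped data from class marks and class frequencies."""
    n = sum(freqs)
    return sum(f * x for f, x in zip(freqs, marks)) / n

def grouped_mode(L, C, f_modal, f_prev, f_next):
    """Mode of grouped data: Mo = L + C*d1/(d1 + d2)."""
    d1 = f_modal - f_prev  # modal frequency minus preceding class frequency
    d2 = f_modal - f_next  # modal frequency minus succeeding class frequency
    return L + C * d1 / (d1 + d2)

# modal class 0.20-0.25 with neighboring frequencies 15 and 14, as in the example
mode = grouped_mode(L=0.20, C=0.05, f_modal=24, f_prev=15, f_next=14)
```

Rounding `mode` to three places reproduces the value 0.224 computed in the example.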
The median of grouped data is element n/2 of a set of n values. If n is even, the median is a real element; otherwise it is hypothetical. The median class is the class containing the median. The median for grouped data is

\[ Md = L + c\,\frac{(n/2) - F_c}{f_m} \]

where n is the total number of observations, L is the lower limit of the median class, c is the width of the median class, F_c is the sum of the frequencies of all classes lower than the median class, and f_m is the frequency of the median class. The median is always between the mode and the mean. If the mode is to the left of the mean on a frequency diagram, the distribution is skewed to the left. If the mode is to the right of the mean on a frequency diagram, the distribution is skewed to the right.

Example: The median for the grouped data in Table 18.1 is between elements 31 and 32. Using the above equation for the median gives 0.20 + 0.05(31.5 - 21)/24 = 0.222. The median class has the interval 0.20 to 0.25. The mean of the grouped data in Table 18.1 is calculated in Exercise 18.2 to be 0.221. Comparing the mode and the mean, we find the mode (0.224) is greater than the mean (0.221), which places the mode to the right of the mean on a frequency diagram. Therefore the distribution of grouped data in Table 18.1 is skewed to the right.

The mean absolute deviation for grouped data is

\[ \mathrm{M.A.D.} = \frac{1}{n}\sum_{i=1}^{k} f_i |x_i - \bar{x}| \]

where \bar{x} is the arithmetic mean. The standard deviation for grouped data is the root mean square of the deviations from the arithmetic mean; thus

\[ s = \sqrt{\frac{\sum_{i=1}^{k} f_i (x_i - \bar{x})^2}{n}} \]

The variance is the square of the standard deviation. The root mean square for grouped data is

\[ \mathrm{R.M.S.} = \left[\frac{1}{n}\sum_{i=1}^{k} f_i x_i^2\right]^{1/2} \]

18.4 STATISTICAL COEFFICIENTS
A number of coefficients are used to describe a statistical distribution. Some of the more common coefficients are defined here. The simplest coefficients are defined in terms of means, standard deviations, and variances. More complex coefficients are expressed in terms of moments of the distribution.

The coefficient of variation is defined as

\[ V = \frac{100\, s}{\bar{x}} \]

where \bar{x} is the mean and s the standard deviation of the sample. The standardized variable or standard score associated with observation x_i is defined as

\[ z = \frac{x_i - \bar{x}}{s} \]

where \bar{x} is the mean and s the standard deviation of the sample.

The concept of moments was introduced previously to describe probability distributions. A similar concept applies to statistical distributions. It is presented here in terms of ungrouped and grouped data. For ungrouped data, the rth moment about the origin is

\[ m_r' = \frac{1}{n}\sum_{i=1}^{n} x_i^r \]

and the rth moment about the mean \bar{x} is

\[ m_r = \frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x})^r \]

For grouped data, the rth moment about the origin is

\[ m_r' = \frac{1}{n}\sum_{i=1}^{k} f_i x_i^r \]

and the rth moment about the mean \bar{x} is

\[ m_r = \frac{1}{n}\sum_{i=1}^{k} f_i (x_i - \bar{x})^r \]
Example: The second moment about the mean is the variance of the statistical distribution for both grouped and ungrouped data. Consequently, the standard deviation of a statistical distribution is the square root of the second moment.

Given the concept of moments, we can calculate several coefficients describing the shape of the statistical distribution. The coefficient of skewness is a measure of the symmetry of the distribution. If the distribution is unsymmetric about the mode, it is said to be skewed, and the degree of asymmetry is referred to as skewness. The coefficient of skewness is

\[ a_3 = \frac{m_3}{(m_2)^{3/2}} \]

where m_2 and m_3 are the second and third moments about the mean of the sample, respectively. If the magnitude of a_3 is close to zero, the distribution displays a high degree of symmetry about the mode. If the magnitude of a_3 is greater than one, the distribution is highly skewed.

Another measure of the shape of a statistical distribution is the coefficient of kurtosis:

\[ a_4 = \frac{m_4}{(m_2)^2} \]

where m_2 and m_4 are the second and fourth moments about the mean of the sample, respectively. It provides an indication of the width of the statistical distribution relative to its height. If we note that the standard deviation of the statistical distribution is the square root of the second moment, we see that a relatively small standard deviation implies a relatively large coefficient of kurtosis if the fourth moment is constant. Thus a large coefficient of kurtosis implies a relatively narrow statistical distribution. For comparison purposes, the coefficient of kurtosis for the normal probability distribution is 3 [Kurtz, 1991, p. 830].

Example: A statistical distribution is symmetric about the mode (coefficient of skewness = 0) and has a coefficient of kurtosis of 2. It therefore has a narrower distribution than the normal probability distribution. It must also have a higher peak at the mode to ensure that the area under the curve is normalized.
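The moment-based coefficients above are straightforward to compute. A minimal Python sketch (with a hypothetical sample, not data from the book) follows the definitions a_3 = m_3/m_2^{3/2} and a_4 = m_4/m_2^2:

```python
def moments_stats(x):
    """Central moments plus the coefficients of skewness (a3) and kurtosis (a4)."""
    n = len(x)
    mean = sum(x) / n
    def m(r):
        # r-th moment about the mean, as defined for ungrouped data
        return sum((v - mean) ** r for v in x) / n
    m2, m3, m4 = m(2), m(3), m(4)
    a3 = m3 / m2 ** 1.5   # coefficient of skewness
    a4 = m4 / m2 ** 2     # coefficient of kurtosis
    return m2, a3, a4

# a symmetric sample has a coefficient of skewness of zero
m2, a3, a4 = moments_stats([1.0, 2.0, 3.0, 4.0, 5.0])
```

For this symmetric sample, a_3 = 0 and a_4 = 1.7, well below the normal-distribution value of 3.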
18.5 CURVE FITTING, REGRESSION, AND CORRELATION

Regression is the estimation of one variable y from another variable x. The relationships presented in this section apply to a set of n ordered pairs \{(x_i, y_i): i = 1, 2, \ldots, n\} of independent variable x and dependent variable y. The ordered pair (x_i, y_i) is assumed to be a random sample from a bivariate normal distribution. The variable x is the predictor or regressor variable, and the variable y is the predicted or regressed variable. Regression is achieved by minimizing the deviation of a curve fit to a set of data as outlined below.

Curve Fitting

It is often useful to fit a smooth curve through data. Several smooth curves may be used. The simplest is the straight line y = mx + b, where m is the slope and b is the y intercept. For a straight line fit by the method of least squares (Chapter 3), the values b and m are obtained by solving the algebraic equations

\[ nb + m \sum_i x_i = \sum_i y_i \]

and

\[ b \sum_i x_i + m \sum_i x_i^2 = \sum_i x_i y_i \]

The solutions of these equations are given in Chapter 3. Fitting a straight line to a set of data may appear to have limited utility; however, it is possible to recast a number of useful nonlinear functions in a form that is suitable for analysis by least squares.

Example: The exponential curve y = ab^x can be recast in linear form by taking the logarithm

\[ \log y = \log a + (\log b)\, x \]

For an exponential curve fit by the method of least squares, the values \log a and \log b are obtained by fitting a straight line to the set of ordered pairs \{(x_i, \log y_i)\}.

Similar regression procedures can be applied to polynomial functions of the form

\[ y = b_0 + b_1 x + b_2 x^2 + \cdots + b_m x^m \]

For a polynomial function fit by the method of least squares, the values of the coefficients b_0, b_1, \ldots, b_m are obtained by solving a system of m + 1 algebraic equations. The reader should consult the references for more details.

EXERCISE 18.3: Show that the power function y = ax^b can be recast in linear form.

Regression and Correlation

Least squares analysis is an example of a more general technique known as regression. Regression is used to determine causal relationships between observed variables. For example, simple linear regression seeks to find a relationship for variable y given variable x; thus

\[ E(y|x) = b + mx \]

where E(y|x) is the mean of the distribution of y for a given x. The term simple implies that y depends on only one variable. If y depended on more than one variable, the regression would be called a multiple regression. The form of the relationship between variables x and y is often referred to as a regression model. In this case, we have a linear regression model. The regression coefficients b and m are determined by minimizing the deviations of the observed y values from the straight line.

A measure of the validity of the regression model is the standard error of estimate. For simple linear regression, the standard error of estimate (SEE) is

\[ s_e = \sqrt{\frac{\sum_i [y_i - (b + m x_i)]^2}{n}} \]

A large value of SEE implies large deviations from the assumed linear relationship between ordered pairs. By contrast, a small value of SEE implies that the set of ordered pairs closely follows a linear relationship.

The total variation of the dependent variable y from the mean is given by the sum of squares total:

\[ SST = \sum_i (y_i - \bar{y})^2 \]

For comparison, the variation of the regressed value of the dependent variable y from the mean is given by the sum of squares regression:

\[ SSR = \sum_i [(b + m x_i) - \bar{y}]^2 \]

The ratio of the sum of squares regression to the sum of squares total is the goodness-of-fit:

\[ R^2 = \frac{SSR}{SST} \]

The square root of the goodness-of-fit variable R^2 gives the correlation coefficient r; thus

\[ r = \sqrt{R^2} \]

An alternate but equivalent form of r is the product-moment formula [Kurtz, 1991, p. 10.39; Miller and Miller, 1999, p. 453]

\[ r = \frac{\sum_i x_i y_i}{\sqrt{\left(\sum_i x_i^2\right)\left(\sum_i y_i^2\right)}} = \frac{s_{xy}}{s_x s_y} \]

where the sums are taken over the deviations of x_i and y_i from their respective means, s_x and s_y are the standard deviations for the x and y distributions, and s_{xy} is the covariance of x and y. The correlation coefficient r is a dimensionless measure of the validity of a linear relationship between two variables. It can range from +1 to -1. The sign of r is the same as the sign of the slope m of the linear regression model. The value |r| = 1 says that all points of the ordered pairs lie on a straight line. If r = 0, there is no linear relationship. A value of r = 0 is possible even if two variables are strongly correlated if the correlation is nonlinear. On the other hand, a value of |r| near 1 does not imply causality between two variables, but only statistical correlation [Mendenhall and Beaver, 1991, p. 424].

Application: Trend Surface Analysis. A technique for determining the spatial distribution of a property by computer is to fit a surface through the control points. Control points are values of the property at particular locations in space. This technique is referred to here as trend surface analysis. Linear trend surface analysis uses regression to fit a line through all control point values in a given direction. The regression model for linear trend surface analysis is

\[ V_{obs} = a_0 + a_1 x_{loc} + a_2 y_{loc} \]

where V_{obs} is the observed value at the control point, and \{x_{loc}, y_{loc}\} are the \{x axis, y axis\} locations of the control point. Quadratic trend surface analysis requires more control points than linear trend surface analysis to fit the larger set of regression coefficients \{a_0, a_1, a_2, a_3, a_4, a_5\}; at least six control points are needed to determine them. Quadratic trend surface analysis can fit a curved surface to data and is therefore useful in representing geologic structures such as anticlines or synclines. An example of an anticlinal surface obtained from trend surface analysis is illustrated in Figure 18.1.

Figure 18.1 Depth to top of anticlinal structure.
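The least-squares line and the goodness-of-fit quantities above can be sketched in Python. This is an illustration (with hypothetical data), not code from the book; it solves the two normal equations in closed form and computes R^2 = SSR/SST and r with the sign of the slope.

```python
import math

def linear_fit(xs, ys):
    """Least-squares slope m and intercept b from the normal equations,
    plus the goodness-of-fit R^2 = SSR/SST and the correlation coefficient r."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope from the normal equations
    b = (sy - m * sx) / n                          # intercept
    ybar = sy / n
    sst = sum((y - ybar) ** 2 for y in ys)         # sum of squares total
    ssr = sum((b + m * x - ybar) ** 2 for x in xs) # sum of squares regression
    r2 = ssr / sst
    r = math.copysign(math.sqrt(r2), m)            # sign of r matches the slope
    return m, b, r2, r

# hypothetical data lying exactly on y = 2x + 1, so R^2 = 1
m, b, r2, r = linear_fit([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

Because the sample points fall exactly on a line, the fit recovers the slope and intercept and |r| = 1.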
CHAPTER 19
SOLUTIONS TO EXERCISES
S1 ALGEBRA
EXERCISE 1.1: Given ax^2 + bx + c = 0, find x in terms of a, b, c.

Solution: We use the method of completing the square to derive the quadratic formula. Write

\[ ax^2 + bx = -c \]

Divide by a to get

\[ x^2 + \frac{b}{a}x = -\frac{c}{a} \]

The left-hand side is made a perfect square by adding (b/2a)^2 to both sides; thus

\[ x^2 + \frac{b}{a}x + \left(\frac{b}{2a}\right)^2 = \left(x + \frac{b}{2a}\right)^2 = -\frac{c}{a} + \frac{b^2}{4a^2} \]

Rearranging the right-hand side gives

\[ \left(x + \frac{b}{2a}\right)^2 = \frac{b^2 - 4ac}{4a^2} \]
Math Refresher for Scientists and Engineers, Third Edition By John R. Fanchi Copyright # 2006 John Wiley & Sons, Inc.
Taking the square root of both sides and rearranging yields

\[ x = -\frac{b}{2a} \pm \frac{\sqrt{b^2 - 4ac}}{2a} \]

EXERCISE 1.2: (a) Expand and simplify (x + y)^2, (x - y)^2, (x + y)^3, (x + y + z)^2, (ax + b)(cx + d), and (x + y)(x - y). (b) Factor 3x^3 + 6x^2 y + 3xy^2.

Solution (a):

\[ (x + y)^2 = x^2 + 2xy + y^2 \]
\[ (x - y)^2 = x^2 - 2xy + y^2 \]
\[ (x + y)^3 = (x + y)^2 (x + y) = x^3 + 3x^2 y + 3xy^2 + y^3 \]
\[ (x + y + z)^2 = x^2 + y^2 + z^2 + 2xy + 2xz + 2yz \]
\[ (ax + b)(cx + d) = acx^2 + (bc + ad)x + bd \]
\[ (x + y)(x - y) = x^2 - y^2 \]

Solution (b):

\[ 3x^3 + 6x^2 y + 3xy^2 = 3x(x^2 + 2xy + y^2) = 3x(x + y)^2 \]

EXERCISE 1.3: Let y = \log_a x. Change the base of the logarithm from base a to base b, then set a = 10 and b = 2.71828\ldots \approx e and simplify.

Solution: We note that

\[ \log_a x = y \quad (i) \]

and x = a^y. To change the base from a to b, we scale the base a by writing

\[ x = \left(\frac{ab}{b}\right)^y \]

Taking the logarithm to the base b of x gives

\[ \log_b x = y\,[\log_b a + \log_b b - \log_b b] = y \log_b a \]

Solving for y yields

\[ y = \frac{\log_b x}{\log_b a} = \log_a x \quad (ii) \]

where we have used Eq. (i). From Eq. (ii) we obtain the equality

\[ \log_b x = (\log_a x)(\log_b a) \]

which shows how to change the logarithm from base a to base b. Suppose a = 10 and b = 2.71828\ldots \approx e. Then

\[ \log_e x \equiv \ln x = (\log_{10} x)(\log_e 10) = (\log_{10} x)\,\ln 10 = 2.3026 \log_{10} x \]

EXERCISE 1.4: What are the odds of winning a lotto? To win a lotto, you must select 6 numbers without replacement from a set of 42. Order is not important. Note that the combination of n different things taken k at a time without replacement is

\[ \binom{n}{k} = \frac{n!}{k!\,(n-k)!} \]

Solution: For this lotto, n = 42 and k = 6; thus

\[ \binom{42}{6} = \frac{42!}{6!\,(36)!} = \frac{42 \cdot 41 \cdot 40 \cdot 39 \cdot 38 \cdot 37 \cdot 36!}{6!\,36!} = \frac{42 \cdot 41 \cdot 40 \cdot 39 \cdot 38 \cdot 37}{6!} = \frac{3.78 \times 10^9}{720} = 5.25 \times 10^6 \]
Notice that 42!/36! = 3.78 \times 10^9. The odds of winning this lotto are approximately 1 in 5.25 \times 10^6.

EXERCISE 1.5: Calculate 42!/36! using Stirling's approximation for both 42! and 36!.

Solution: From Stirling's approximation

\[ n! \approx e^{-n} n^n \sqrt{2\pi n} \]

we find

\[ 42! \approx e^{-42}\, 42^{42} \sqrt{2\pi(42)}, \qquad 36! \approx e^{-36}\, 36^{36} \sqrt{2\pi(36)} \]

Dividing 42! by 36! gives

\[ \frac{42!}{36!} \approx \frac{e^{-42}\, 42^{42} \sqrt{2\pi(42)}}{e^{-36}\, 36^{36} \sqrt{2\pi(36)}} = \frac{e^{-42}\, 42^{42}}{e^{-36}\, 36^{36}} \sqrt{\frac{42}{36}} \]

This is further simplified by performing the arithmetic

\[ \frac{42!}{36!} \approx e^{-6}\, 42^6 \left(\frac{42}{36}\right)^{36} (1.0801) = e^{-6}\, 42^6 (1.1667)^{36} (1.0801) \approx e^{-6}\, 42^6 (257.35)(1.0801) \approx 3.78 \times 10^9 \]

which agrees with the exact calculation in Exercise 1.4.
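Exercises 1.4 and 1.5 can be verified directly in Python. This sketch uses the standard library's exact `math.comb` and `math.factorial`, and compares the Stirling estimate of 42!/36! against the exact ratio.

```python
import math

# exact binomial coefficient and factorial ratio for the lotto example
exact_comb = math.comb(42, 6)                           # number of 6-of-42 tickets
exact_ratio = math.factorial(42) // math.factorial(36)  # 42*41*40*39*38*37

def stirling(n):
    """Stirling's approximation n! ~ e^-n * n^n * sqrt(2*pi*n)."""
    return math.exp(-n) * n ** n * math.sqrt(2 * math.pi * n)

approx_ratio = stirling(42) / stirling(36)
rel_err = abs(approx_ratio - exact_ratio) / exact_ratio  # small: errors largely cancel in the ratio
```

The relative error of the Stirling ratio is well under 0.1 percent, consistent with the agreement found in the text.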
EXERCISE 1.6: Factor z_1/z_2 by calculating

\[ \frac{z_1}{z_2} = \frac{z_1}{z_2}\cdot\frac{z_2^*}{z_2^*} \]

Solution:

\[ \frac{z_1}{z_2} = \frac{x_1 + iy_1}{x_2 + iy_2} = \frac{x_1 + iy_1}{x_2 + iy_2}\cdot\frac{x_2 - iy_2}{x_2 - iy_2} = \frac{x_1 x_2 + y_1 y_2}{x_2^2 + y_2^2} + i\,\frac{y_1 x_2 - x_1 y_2}{x_2^2 + y_2^2} \]

EXERCISE 1.7: Suppose z = x + iy and z^* = x - iy. Find x, y in terms of z, z^*.

Solution: We find x and y by taking the sum and difference of z, z^*. Thus

\[ z + z^* = x + iy + x - iy = 2x \]

which implies x = \tfrac{1}{2}(z + z^*), and

\[ z - z^* = x + iy - (x - iy) = 2iy \]

which implies

\[ y = \frac{1}{2i}(z - z^*) \]

EXERCISE 1.8: Given

\[ g = \frac{i}{1 + i\lambda/2} \]

evaluate g^* g.

Solution:

\[ g^* g = \frac{-i}{1 - \dfrac{i\lambda}{2}}\cdot\frac{i}{1 + \dfrac{i\lambda}{2}} = \frac{-i^2}{1 + \dfrac{\lambda^2}{4}} = \frac{1}{1 + \dfrac{\lambda^2}{4}} \]
EXERCISE 1.9: Write 1/(s^2 + \omega^2) in terms of partial fractions.

Solution: Use Eq. (1.8.6) to expand 1/(s^2 + \omega^2) as

\[ \frac{1}{s^2 + \omega^2} = \frac{A_1}{a_1 s + b_1} + \frac{A_2}{a_2 s + b_2} \quad (i) \]

where the numerator is r(s) = 1. The denominator factors into the terms

\[ D(s) = s^2 + \omega^2 = (s + i\omega)(s - i\omega) \quad (ii) \]

The coefficient values for Eq. (i) are

\[ a_1 = 1, \; b_1 = i\omega, \qquad a_2 = 1, \; b_2 = -i\omega \quad (iii) \]

Multiply Eq. (i) by the denominator D(s):

\[ 1 = A_1 \frac{s^2 + \omega^2}{s + i\omega} + A_2 \frac{s^2 + \omega^2}{s - i\omega} = A_1 (s - i\omega) + A_2 (s + i\omega) = (A_1 + A_2)s + i\omega(A_2 - A_1) \quad (iv) \]

The coefficients of s^n, n = 0, 1 in Eq. (iv) satisfy the equalities

\[ i\omega(A_2 - A_1) = 1, \qquad A_1 + A_2 = 0 \quad (v) \]

Solving for A_1, A_2 gives

\[ A_1 = -\frac{1}{2i\omega}, \qquad A_2 = \frac{1}{2i\omega} \quad (vi) \]

Substituting Eqs. (iii) and (vi) into (i) gives the partial fraction expansion

\[ \frac{1}{s^2 + \omega^2} = -\frac{1}{2i\omega}\,\frac{1}{s + i\omega} + \frac{1}{2i\omega}\,\frac{1}{s - i\omega} \quad (vii) \]
EXERCISE 1.10: Suppose r(x) = x + 3 and D(x) = x^2 + x - 2. Expand r(x)/D(x) by the method of partial fractions.
Solution: The denominator D(x) factors into the terms

\[ D(x) = x^2 + x - 2 = (x + 2)(x - 1) \quad (i) \]

Use Eq. (1.8.6) to expand r(x)/D(x) as

\[ \frac{x + 3}{x^2 + x - 2} = \frac{A_1}{a_1 x + b_1} + \frac{A_2}{a_2 x + b_2} = \frac{A_1}{x + 2} + \frac{A_2}{x - 1} \quad (ii) \]

The coefficient values for Eq. (ii) are

\[ a_1 = 1, \; b_1 = 2, \qquad a_2 = 1, \; b_2 = -1 \quad (iii) \]

Multiply Eq. (ii) by the denominator D(x) to obtain

\[ x + 3 = A_1 (x - 1) + A_2 (x + 2) = (A_1 + A_2)x - A_1 + 2A_2 \quad (iv) \]

The coefficients of x^n, n = 0, 1 in Eq. (iv) satisfy the equalities

\[ -A_1 + 2A_2 = 3, \qquad A_1 + A_2 = 1 \quad (v) \]

Solving for A_1, A_2 gives

\[ A_1 = -\frac{1}{3}, \qquad A_2 = \frac{4}{3} \quad (vi) \]

Substituting Eqs. (iii) and (vi) into (ii) gives the partial fraction expansion

\[ \frac{x + 3}{x^2 + x - 2} = -\frac{1}{3}\,\frac{1}{x + 2} + \frac{4}{3}\,\frac{1}{x - 1} \quad (vii) \]
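A partial fraction expansion like Eq. (vii) of Exercise 1.10 is easy to check numerically: both sides must agree at every point away from the poles. A small Python sketch (the sample points are arbitrary choices, not from the book):

```python
# Numerical check of the expansion in Exercise 1.10:
# (x + 3)/(x^2 + x - 2) = -(1/3)/(x + 2) + (4/3)/(x - 1)
def lhs(x):
    return (x + 3) / (x**2 + x - 2)

def rhs(x):
    return -(1.0 / 3) / (x + 2) + (4.0 / 3) / (x - 1)

# compare at a few points away from the poles x = -2 and x = 1
samples = [-5.0, -0.5, 0.5, 3.0, 10.0]
max_diff = max(abs(lhs(x) - rhs(x)) for x in samples)
```

The maximum difference is at the level of floating-point roundoff, confirming the expansion.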
S2 GEOMETRY, TRIGONOMETRY, AND HYPERBOLIC FUNCTIONS
EXERCISE 2.1: Verify the following relations:

\[ \sin^2\alpha + \cos^2\alpha = 1 \]
\[ 1 + \tan^2\alpha = \sec^2\alpha \]
\[ 1 + \cot^2\alpha = \csc^2\alpha \]
\[ \sin\alpha = \tan\alpha\,\cos\alpha \]
\[ \cos\alpha = \cot\alpha\,\sin\alpha \]
\[ \tan\alpha = \frac{\sin\alpha}{\cos\alpha} \]

by rearranging the trigonometric definitions and using the Pythagorean relation.

Solution: Recall the Pythagorean relation

\[ a^2 + b^2 = c^2 \quad (i) \]

Many trigonometric identities can be proved by simple manipulation of Eq. (i). The identities in this exercise illustrate this procedure. Divide Eq. (i) by c^2 to get

\[ \left(\frac{a}{c}\right)^2 + \left(\frac{b}{c}\right)^2 = \sin^2\alpha + \cos^2\alpha = 1 \]

Divide Eq. (i) by a^2 to get

\[ 1 + \left(\frac{b}{a}\right)^2 = \left(\frac{c}{a}\right)^2 \quad\text{or}\quad 1 + \cot^2\alpha = \csc^2\alpha \]

Divide Eq. (i) by b^2 to get

\[ \left(\frac{a}{b}\right)^2 + 1 = \left(\frac{c}{b}\right)^2 \quad\text{or}\quad \tan^2\alpha + 1 = \sec^2\alpha \]

From the definitions of the trigonometric functions we obtain

\[ \sin\alpha = \frac{a}{c} = \frac{a}{b}\,\frac{b}{c} = \tan\alpha\,\cos\alpha \]
\[ \cos\alpha = \frac{b}{c} = \frac{b}{a}\,\frac{a}{c} = \cot\alpha\,\sin\alpha \]
\[ \tan\alpha = \frac{a}{b} = \frac{a}{c}\,\frac{c}{b} = \sin\alpha\,\sec\alpha = \frac{\sin\alpha}{\cos\alpha} \]
EXERCISE 2.2: RLC Circuit. The alternating current I(t) for a resistor-inductor-capacitor circuit with an applied electromotive force is

\[ I(t) = -\frac{E_0 S}{S^2 + R^2}\cos(\omega t) + \frac{E_0 R}{S^2 + R^2}\sin(\omega t) \]

Use trigonometric relations to write I(t) in the form

\[ I(t) = I_0 \sin(\omega t - \delta) \]

where the maximum current I_0 and phase \delta are expressed in terms of E_0, S, and R. A derivation of I(t) is given in Chapter 11, Exercise 11.1.

Solution: Use the trigonometric identity \sin(A - B) = \sin A \cos B - \cos A \sin B to expand

\[ I_0 \sin(\omega t - \delta) = I_0(\sin\omega t\,\cos\delta - \cos\omega t\,\sin\delta) = -I_0 \sin\delta\,\cos\omega t + I_0 \cos\delta\,\sin\omega t \]

Equate coefficients of \cos\omega t, \sin\omega t to obtain the two equations

\[ I_0 \sin\delta = \frac{E_0 S}{S^2 + R^2}, \qquad I_0 \cos\delta = \frac{E_0 R}{S^2 + R^2} \]

Take the ratio of the equations to find \delta:

\[ \frac{\sin\delta}{\cos\delta} = \tan\delta = \frac{E_0 S}{E_0 R} \quad\text{or}\quad \tan\delta = \frac{S}{R} \]

Square both equations and sum to find I_0:

\[ I_0^2 \cos^2\delta + I_0^2 \sin^2\delta = \frac{E_0^2 R^2}{(S^2 + R^2)^2} + \frac{E_0^2 S^2}{(S^2 + R^2)^2} \]

Combining terms and simplifying gives

\[ I_0^2(\cos^2\delta + \sin^2\delta) = \frac{E_0^2 (R^2 + S^2)}{(S^2 + R^2)^2} \quad\text{or}\quad I_0^2 = \frac{E_0^2}{S^2 + R^2} \]

Rearranging gives

\[ E_0 = I_0 \sqrt{S^2 + R^2} \]

Comparing this equation with Ohm's law (V = IR) shows that the impedance \sqrt{S^2 + R^2} acts like resistance for an alternating current (AC) system.
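The amplitude-phase conversion of Exercise 2.2 can be checked numerically. The circuit values below are hypothetical; the code computes I_0 = E_0/sqrt(S^2 + R^2) and delta from tan(delta) = S/R, then verifies that the two forms of I(t) agree.

```python
import math

def rlc_amplitude_phase(E0, S, R):
    """Convert I(t) = -(E0*S/(S^2+R^2))*cos(wt) + (E0*R/(S^2+R^2))*sin(wt)
    into the form I0*sin(wt - delta), with tan(delta) = S/R."""
    I0 = E0 / math.sqrt(S**2 + R**2)
    delta = math.atan2(S, R)  # places delta in the correct quadrant
    return I0, delta

# hypothetical circuit values
E0, S, R, w = 10.0, 3.0, 4.0, 2.0
I0, delta = rlc_amplitude_phase(E0, S, R)

# check the two forms of I(t) agree at an arbitrary time
t = 0.7
denom = S**2 + R**2
direct = -(E0 * S / denom) * math.cos(w * t) + (E0 * R / denom) * math.sin(w * t)
compact = I0 * math.sin(w * t - delta)
```

With these values, I_0 = 10/sqrt(25) = 2 and tan(delta) = 3/4, and the two expressions match to roundoff.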
EXERCISE 2.3: Derive expressions for \cos\alpha, \sin\alpha, and \tan\alpha in terms of exponential functions.

Solution: Recall from Euler's equation that

\[ e^{i\alpha} = \cos\alpha + i\sin\alpha \]

and

\[ e^{-i\alpha} = \cos(-\alpha) + i\sin(-\alpha) = \cos\alpha - i\sin\alpha \]

From these relations we readily obtain

\[ \cos\alpha = \frac{e^{i\alpha} + e^{-i\alpha}}{2}, \qquad \sin\alpha = \frac{e^{i\alpha} - e^{-i\alpha}}{2i} \]

and

\[ \tan\alpha = \frac{\sin\alpha}{\cos\alpha} = -i\,\frac{e^{i\alpha} - e^{-i\alpha}}{e^{i\alpha} + e^{-i\alpha}} \]

EXERCISE 2.4: Use the exponential forms of \sinh u and \cosh u to show that \sinh(-u) = -\sinh u and \cosh(-u) = \cosh u.

Solution:

\[ \sinh(-u) = \tfrac{1}{2}(e^{-u} - e^{u}) = -\tfrac{1}{2}(e^{u} - e^{-u}) = -\sinh u \]
\[ \cosh(-u) = \tfrac{1}{2}(e^{-u} + e^{u}) = \tfrac{1}{2}(e^{u} + e^{-u}) = \cosh u \]

EXERCISE 2.5: Evaluate \cosh^2 u - \sinh^2 u.

Solution: Convert the relation to exponential form and then evaluate:

\[ \cosh^2 u - \sinh^2 u = \left[\tfrac{1}{2}(e^u + e^{-u})\right]^2 - \left[\tfrac{1}{2}(e^u - e^{-u})\right]^2 = \tfrac{1}{4}\{e^{2u} + 2 + e^{-2u} - (e^{2u} - 2 + e^{-2u})\} = \tfrac{1}{4}\{4\} = 1 \]
EXERCISE 2.6: Let z = x - vt, v = 1.0 foot/day, b = 0.1 foot/day, and D = 0.1 foot^2/day. Plot C(z) in the interval 0 \le x \le 40 feet at times of 5.00 days, 6.11 days, and 7.22 days. Note that C(z) = \tfrac{1}{2}[1 - \tanh(bz/4D)] and 0.0 \le C(z) \le 1.0.

Solution: The solution is a plot of the three C(z) profiles; the figure is not reproduced here.

EXERCISE 2.7: Find the Taylor series and Maclaurin series for f(z) = e^z.

Solution: The Taylor series derivatives are

\[ \frac{df(z)}{dz} = \frac{de^z}{dz} = e^z, \qquad f^{(m)}(a) = \left.\frac{d^m f(z)}{dz^m}\right|_{z=a} = e^a, \qquad f^{(0)}(a) = e^a \]

Substitute the derivative terms into the Taylor series to get

\[ f(z) = \sum_{m=0}^{\infty} \frac{f^{(m)}(a)}{m!}(z - a)^m = e^a + \frac{e^a}{1!}(z - a) + \frac{e^a}{2!}(z - a)^2 + \cdots = e^a\left[1 + (z - a) + \frac{(z - a)^2}{2!} + \cdots\right] \]

The Maclaurin series corresponds to a = 0; thus from the Taylor series we have

\[ e^z = 1 + z + \frac{z^2}{2!} + \cdots \]
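The Maclaurin series of Exercise 2.7 converges rapidly, which is easy to demonstrate with a short Python sketch (an illustration, not code from the book):

```python
import math

def exp_maclaurin(z, terms):
    """Partial sum of the Maclaurin series e^z = 1 + z + z^2/2! + ..."""
    return sum(z**m / math.factorial(m) for m in range(terms))

# twelve terms already approximate exp(1) to better than 1e-7
approx = exp_maclaurin(1.0, 12)
err = abs(approx - math.e)
```

The truncation error after N terms is roughly z^N/N!, so a dozen terms suffice for moderate z.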
EXERCISE 2.8: Keep only first-order terms in z for small z and estimate asymptotic functional forms for (1 - z)^{-1}, e^z, \cos z, \sin z, \cosh z, \sinh z, and \ln(1 + z).

Solution: The series representation of each function is used to find the following asymptotic approximations:

\[ \frac{1}{1 - z} \approx 1 + z \]
\[ e^z \approx 1 + z \]
\[ \cos z \approx 1 - \frac{z^2}{2} \approx 1 \quad\text{(drop second-order term)} \]
\[ \cosh z \approx 1 + \frac{z^2}{2} \approx 1 \quad\text{(drop second-order term)} \]
\[ \sin z \approx z \]
\[ \sinh z \approx z \]
\[ \ln(1 + z) \approx z \]

S3 ANALYTIC GEOMETRY
EXERCISE 3.1: Fit the quadratic equation y = a + bx + cx^2 to the three points (x_1, y_1), (x_2, y_2), (x_3, y_3).

Solution: We must simultaneously solve three equations for the three unknown constants a, b, c. The equations to be solved are

\[ y_1 = a + bx_1 + cx_1^2 \quad (i) \]
\[ y_2 = a + bx_2 + cx_2^2 \quad (ii) \]
\[ y_3 = a + bx_3 + cx_3^2 \quad (iii) \]

Rearrange Eq. (i) to find a in terms of b and c:

\[ a = y_1 - bx_1 - cx_1^2 \quad (iv) \]

Subtract Eq. (i) from Eq. (ii) to eliminate a:

\[ y_2 - y_1 = b(x_2 - x_1) + c(x_2^2 - x_1^2) \quad (v) \]

Solve Eq. (v) for b in terms of c and simplify:

\[ b = \frac{y_2 - y_1}{x_2 - x_1} - c\,\frac{x_2^2 - x_1^2}{x_2 - x_1} = \frac{y_2 - y_1}{x_2 - x_1} - c(x_1 + x_2) \quad (vi) \]

Subtract Eq. (i) from Eq. (iii) and use Eq. (vi) to eliminate b:

\[ y_3 - y_1 = b(x_3 - x_1) + c(x_3^2 - x_1^2) = \left[\frac{y_2 - y_1}{x_2 - x_1} - c(x_1 + x_2) + c(x_1 + x_3)\right](x_3 - x_1) \quad (vii) \]

Solve Eq. (vii) for c:

\[ c = \frac{1}{x_3 - x_2}\left[\frac{y_3 - y_1}{x_3 - x_1} - \frac{y_2 - y_1}{x_2 - x_1}\right] \quad (viii) \]

Find a, b, c by using Eq. (viii), then Eq. (vi) and, finally, Eq. (iv).
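The elimination order of Exercise 3.1 translates directly into code. The following Python sketch (the three sample points are a hypothetical parabola, not from the book) evaluates Eqs. (viii), (vi), and (iv) in sequence:

```python
def quadratic_through_points(p1, p2, p3):
    """Coefficients (a, b, c) of y = a + b*x + c*x**2 through three points,
    following the elimination order (viii) -> (vi) -> (iv) of Exercise 3.1."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    c = ((y3 - y1) / (x3 - x1) - (y2 - y1) / (x2 - x1)) / (x3 - x2)  # Eq. (viii)
    b = (y2 - y1) / (x2 - x1) - c * (x1 + x2)                        # Eq. (vi)
    a = y1 - b * x1 - c * x1 * x1                                    # Eq. (iv)
    return a, b, c

# points taken from the hypothetical parabola y = 2 - 3x + x^2
a, b, c = quadratic_through_points((0.0, 2.0), (1.0, 0.0), (3.0, 2.0))
```

Recovering a = 2, b = -3, c = 1 confirms the formulas.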
EXERCISE 3.2: Suppose w = au^b, where a, b are constants and u, w are variables. Take the logarithm of the equation and fit a least squares regression line to the resulting linear equation.

Solution: Rather than performing regression analysis on the original equation, a simpler way to approach the problem is to transform the equation to a new set of variables. If we take the logarithm of the equation, we have

\[ \log w = \log a + b \log u \]

By making the identifications

\[ y = \log w, \quad x = \log u, \quad m = b, \quad b = \log a \]

we express the original equation in the form of a straight line y = mx + b, and our previous regression analysis applies.

EXERCISE 3.3: Express z = z_1 z_2 and z = z_1/z_2 in polar coordinates.

Solution: Define the polar coordinate forms of z_1 and z_2 as

\[ z_1 = r_1 e^{i\theta_1}, \qquad z_2 = r_2 e^{i\theta_2} \]

so that the product of z_1 and z_2 is

\[ z = z_1 z_2 = r_1 r_2\, e^{i(\theta_1 + \theta_2)} \]

Therefore the magnitude of z is

\[ |z| = r_1 r_2 \]

and the argument of z is

\[ \arg(z) = \theta_1 + \theta_2 = \arg(z_1) + \arg(z_2) \]

up to multiples of 2\pi. Using these definitions we write the ratio of z_1 and z_2 as

\[ z = \frac{z_1}{z_2} = \frac{r_1}{r_2}\, e^{i(\theta_1 - \theta_2)} \]

Therefore the magnitude of z is

\[ |z| = \frac{r_1}{r_2} \]

and the argument of z is

\[ \arg(z) = \theta_1 - \theta_2 = \arg(z_1) - \arg(z_2) \]

S4 LINEAR ALGEBRA I
EXERCISE 4.1: The Pauli matrices are the square matrices

\[ \sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \]

These matrices are used to calculate the spin angular momentum of particles such as electrons, protons, and neutrons. Show that the Pauli matrices are Hermitian.

Solution: The matrix A is Hermitian if it is self-adjoint such that A = A^+; that is,

\[ [a_{ij}]^+ = \left([a_{ij}]^*\right)^T = [a_{ji}^*] \]

Find the conjugate transpose of each Pauli matrix:

\[ \sigma_x^+ = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}^T = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \sigma_x \]

\[ \sigma_y^+ = \left[\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}^*\right]^T = \begin{pmatrix} 0 & i \\ -i & 0 \end{pmatrix}^T = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} = \sigma_y \]

\[ \sigma_z^+ = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}^T = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} = \sigma_z \]

The Pauli matrices are self-adjoint; therefore they are Hermitian.
MATH REFRESHER FOR SCIENTISTS AND ENGINEERS
EXERCISE 4.2: Calculate 3 2
4 3
2 1 2 4 0 1 6
2 1 3
3 4 25 9
Solution "
2 3 # 1 2 4 2 6 7 4 0 1 2 5 2 3 1 6 3 9 " # 31þ40þ26 3(2) þ 4(1) þ 2(3) 3(4) þ 4 2 þ 2 9 ¼ 2 1 þ 3 0 þ (1)6 2(2) þ 3(1) þ (1)(3) 2(4) þ 3 2 þ (1)9 " # 15 16 14 ¼ 4 4 11 3 4
EXERCISE 4.3: Calculate
2
½2
3
3 4 14 2 5 9
Solution 2
½2
3
3 4 14 2 5 ¼ 2 (4) þ 3 2 þ (1) 9 ¼ 11 9
EXERCISE 4.4: Suppose we are given the Pauli matrices 0 1 0 i 1 0 s1 ¼ , s2 ¼ , s3 ¼ 1 0 i 0 0 1 (a) Calculate s23 . (b) Calculate the trace of s1 and the trace of s3 . (c) Verify the equality ½s2 s3 T ¼ sT3 sT2 . Solution: (a) The square of s3 is 1 0 1 2 s3 ¼ 0 1 0
0 1 0 ¼ ¼I 1 0 1
(b) The trace of si equals 0 for all i ¼ 1, 2, 3. For example, Tr(s3 ) ¼ (s3 )11 þ (s3 )22 ¼ 1 þ (1) ¼ 0.
271
SOLUTIONS TO EXERCISES
(c) Calculate the left-hand side 0 i 1 T ½s2 s3 ¼ i 0 0
0 1
T
0 ¼ i
i 0
T
0 ¼ i
i 0
and right-hand side sT3 sT2
1 0 ¼ 0 1
0 i
0 i ¼ 0 i
i 0
Comparing results verifies that ½s2 s3 T ¼ sT3 sT2 . EXERCISE 4.5: Let A¼
5 3 , A1 ¼ 3 1
2 1
5 2
Verify AA1 ¼ I. Solution: AA1 ¼
2 5 1 3
3 5 2 3 þ 5(1) 2(5) þ 5 2 1 ¼ ¼ 1 2 1 3 þ 3(1) 1(5) þ 3 2 0
0 1
EXERCISE 4.6: Let 2
1 A ¼ 42 4
2 3 3 2 11 2 2 3 5, A1 ¼ 4 4 0 1 5 6 1 1 8
0 1 1
Verify AA1 ¼ I. Solution: 32 11 76 6 1 AA ¼ 4 2 1 3 54 4 2
1 4
0 2 1 8
3
2
2
0
7 15
6 1 1
3 1(11) þ 0(4) þ 2 6 1 2 þ 0 0 þ 2(1) 1 2 þ 0 1 þ 2(1) 7 6 ¼ 4 2(11) 1(4) þ 3 6 2 2 1 0 þ 3(1) 2 2 1 1 þ 3(1) 5 2
4(11) þ 1(4) þ 8 6 4 2 þ 1 0 þ 8(1) 4 2 þ 1 1 þ 8(1) 3 1 0 0 6 7 ¼ 40 1 05 ¼ I 2
0 0 1
272
MATH REFRESHER FOR SCIENTISTS AND ENGINEERS
EXERCISE 4.7: Let A¼
b x , A1 ¼ d z
a c
y w
Assume a, b, c, d are known. Find x, y, z, w from AA1 ¼ I. Solution: AA
1
a ¼ c
b d
x z
y ax þ bz ay þ bw ¼ w cx þ dz cy þ bw
Note that {a, b, c, d} are known. To find {x, y, z, w} we recognize that I ¼ Equating AA1 and I gives us four equations in four unknowns; thus
1 0
0 1
.
ax þ bz ¼ 1, ay þ bw ¼ 0 cx þ dz ¼ 0, cy þ dw ¼ 1 Solving for x, y, z, w gives x¼
d b c a ,y¼ ,z¼ ,w¼ jAj jAj jAj jAj
where jAj ¼ ad bc.
EXERCISE 4.8: Assume b1 , b2 are known constants. Write the system of equations 2x1 þ 5x2 ¼ b1 x1 þ 3x2 ¼ b2 in matrix form and solve for the unknowns x1 , x2 . Solution: We want to write the system of equations as Ax ¼ b. In this case we have
2 A¼ 1
5 x1 b ,b¼ 1 ,x¼ x2 b2 3
(i)
The solution is x ¼ A1 b. We find the inverse of matrix A using the equation A1 A ¼ I, where I is the identity matrix; thus A1 A ¼
a g
b d
2 1
1 0 5 ¼I ¼ 0 1 3
(ii)
SOLUTIONS TO EXERCISES
273
This gives four equations for the four unknown elements a, b, g, d of A1 ; namely, 2a þ b ¼ 1 5a þ 3b ¼ 0 2g þ d ¼ 0
(iii)
5g þ 3d ¼ 1 These equations give a ¼ 3, b ¼ 5, g ¼ 1, d ¼ 2. These values can be checked by substituting into Eq. (iii) and verifying the equalities. The inverse matrix becomes 3 5 a b A1 ¼ ¼ (iv) g d 1 2 The relationship A1 A ¼ I is verified in Exercise 4.5. Use Eq. (iv) in x ¼ A1 b to find the unknowns x1 , x2 ; thus 3 5 b1 3b1 5b2 x1 ¼ ¼ (v) 1 2 b2 x2 b1 þ 2b2
EXERCISE 4.9: Evaluate 2
2 det4 6 2
4 1 1
3 3 55 3
Solution: 2
2 6 det4 6 2
4 1
3 3 7 55
1
3
¼ 2 1 3 þ 4 5(2) þ 3 6 1 (2) 1 3 1 5 2 3 6 4 ¼ 6 40 þ 18 (6) 10 72 ¼ 92
EXERCISE 4.10: Use the definition of cofactor to evaluate 2 3 2 4 3 cof 23 4 6 1 5 5 2 1 3
274
MATH REFRESHER FOR SCIENTISTS AND ENGINEERS
Solution: 2
2 cof 23 4 6 2
3 3 2 5 5 ¼ 2 3
4 1 1
4 ¼ 10 1
EXERCISE 4.11: Evaluate the following determinant using cofactors: 2 3 2 4 3 jAj ¼ det4 6 1 5 5 2 1 3 Solution: Expand the determinant in cofactors: 2 4 jAj ¼ 6 1 2 1
3 5 ¼ a21 cof 21 þ a22 cof 22 þ a23 cof 23 3
Evaluating cofactors gives cof 23 ¼ 10 from Exercise 4.10, and 4 3 ¼ 9 cof 21 ¼ 1 3 2 3 ¼ 12 cof 22 ¼ þ 2 3 Substituting the cofactors into the expression for the determinant gives jAj ¼ 6(9) þ 1(12) þ 5(10) ¼ 54 þ 12 50 ¼ 92 in agreement with Exercise 4.9. EXERCISE 4.12: Use Cramer’s rule to find the unknowns {x1 , x2 , x3 } that satisfy the system of equations x1 þ 2x3 ¼ 11 2x1 x2 þ 3x3 ¼ 17 4x1 þ x2 þ 8x3 ¼ 45
(i)
SOLUTIONS TO EXERCISES
275
Solution: Write Eq. (i) in matrix form Ax ¼ b, where 2
1 A ¼ 42 4
0 1 1
3 2 35 8
(ii)
and 2
3 11 b ¼ 4 17 5 45
(iii)
The determinant of A is given by 2
1
6 det A ¼ det4 2 4
0 2
3
7 1 3 5 1 8
(iv)
¼ 1 (1) 8 þ 0 3 4 þ 2 2 1 4 (1) 2 1 3 1 8 2 0 ¼ 8 þ 4 þ 8 3 ¼ 1 The determinant of Aj is found by replacing column j with b; thus 3 0 2 7 1 3 5 ¼ 3 45 1 8 2 3 1 11 2 6 7 det A2 ¼ det4 2 17 3 5 ¼ 1 4 45 8 2
11 6 det A1 ¼ det4 17
(v)
(vi)
and 2
1 det A3 ¼ det4 2 4
0 1 1
3 11 17 5 ¼ 4 45
(vii)
The unknowns are found from Cramer’s rule as x1 ¼
det A1 det A2 det A3 ¼ 3, x2 ¼ ¼ 1, x3 ¼4 det A det A det A
(viii)
276
MATH REFRESHER FOR SCIENTISTS AND ENGINEERS
As a check of our solution, we note that the matrix equation Ax ¼ b has the formal solution x ¼ A1 b. The inverse of matrix A is given in Exercise 4.6. Carrying out the matrix multiplication 2 32 3 2 3 11 2 2 11 3 x ¼ A1 b ¼ 4 4 0 1 54 17 5 ¼ 4 1 5 (ix) 6 1 1 45 4 which verifies the solution found by Cramer’s rule.
S5
LINEAR ALGEBRA II
EXERCISE 5.1: Define C ¼ B A. Derive the law of cosines by calculating C C for u 908. Solution: Observe that we may express C in terms of the angle u, which is the angle between A and B. Now jCj2 ¼ C C ¼ (B A) (B A) ¼ (B B A B B A þ A A) 2
(i)
2
¼ jBj þ jAj 2jAj jBj cos u where we have used the commutative property of the dot product and its definition in terms of the angle u. Equation (i) is the law of cosines. EXERCISE 5.2: Let A¼
a11 a21
a12 a22
Find the characteristic equation and the characteristic roots. Solution: The characteristic equation is a11 l jA l Ij ¼ a 21
¼ (a11 l) (a22 l) a21 a12 a22 l a12
¼ a11 a22 l(a22 þ a11 ) þ l2 a21 a12 ¼ l2 l(a11 þ a22 ) þ (a11 a22 a21 a12 ) ¼ 0
SOLUTIONS TO EXERCISES
277
Solution of the characteristic equation gives the characteristic roots. In this case, the quadratic formula gives pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi (a11 þ a22 ) 2 4(a11 a22 a21 a12 ) (a11 þ a22 ) + l+ ¼ 2 2 EXERCISE 5.3: Find eigenvalues for the Pauli matrices given in Exercise 4.1; that is, find l from si c ¼ l I c where c is an eigenfunction and i ¼ {x, y, z}. Solution: The eigenvalues are found by solving the equation jsi l Ij ¼ 0 for i ¼ 1, 2, 3. (a) i ¼ 1: l 1 2 1 l ¼ l 1 ¼ 0 implies the eigenvalues l+ ¼ +1. (b) i ¼ 2: l i ¼ l 2 þ i 2 ¼ l2 1 ¼ 0 i l implies l+ ¼ +1. (c) i ¼ 3: 1 l 0 ¼ (1 l) (1 þ l) ¼ (1 l2 ) ¼ l2 1 ¼ 0 0 1 l implies l+ ¼ +1. The eigenvalues of each Pauli matrix are +1. EXERCISE 5.4: Use the Hamilton– Cayley theorem to find the inverse of the complex Pauli matrix 0 i sy ¼ i 0 Solution: The characteristic equation is 0 l i ¼ l2 1 ¼ 0 i 0 l
278
MATH REFRESHER FOR SCIENTISTS AND ENGINEERS
so that the coefficients of the polynomial f (sy ) for the n ¼ 2 matrix sy are a0 ¼ 1, a1 ¼ 0, a2 ¼ 1 By the Hamilton – Cayley theorem s1 y ¼
1 (a0 sy þ a1 ) a2
Substituting in values of the coefficients gives s1 y ¼
1 (1 sy þ 0) ¼ sy (1)
In other words, the Pauli matrix sy is its own inverse, which is easily verified. EXERCISE 5.5: Calculate the eigenvalues of the matrix D. Solution: The eigenvalue problem is det ½D l I ¼ 0. Using the expression for D gives d11 l 121
112 ¼0 d22 l
or (d11 l) (d22 l) 112 121 ¼ 0 Recall that 112 ¼ 1 21 then expand the characteristic equation to get d11 d22 l(d11 þ d22 ) þ l2 1212 ¼ 0 The eigenvalues are found from the quadratic equation to be l+ ¼ 12 (d11 þ d22 ) + 12 ½(d11 þ d22 ) 2 4(d11 d22 1212 )1=2 EXERCISE 5.6: Show that aþ and a are orthogonal. Solution: To show that aþ and a are orthogonal, we must show that þ aþ a ¼ aþ 1 a1 þ a2 a 2 ¼ 0
Substituting in the expressions for a₊, a₋ and simplifying the algebra gives

a₊ · a₋ = ε12 [ −(d11 − λ₊)/[(d11 − λ₊)² + ε12²] + (d11 − λ₊)/[(d11 − λ₊)² + ε12²] ] = 0

as expected.

EXERCISE 5.7: Verify that a1₊ = −a2₋ and a2₊ = a1₋. Find θ.

Solution: Let B = d11 − λ₊; thus

a1₊ = (ε12/B)[1 + ε12²/B²]^{−1/2} = (ε12/B) · B/[B² + ε12²]^{1/2}

Further simplifying yields

a1₊ = ε12/[B² + ε12²]^{1/2} = −a2₋

A similar calculation for a2₊ gives

a2₊ = [1 + ε12²/B²]^{−1/2} = B/[B² + ε12²]^{1/2} = a1₋

The angle θ may be found from either θ = cos⁻¹ a1₊ or θ = sin⁻¹ a1₋. Note that

cos²θ + sin²θ = (a1₊)² + (a2₊)² = (a1₋)² + (a2₋)² = 1

which demonstrates that the eigenvectors are orthonormal.
S6
DIFFERENTIAL CALCULUS
EXERCISE 6.1: Let f(x) = 3x + 7, g(x) = 5x + 1, so that f(x) + g(x) = 8x + 8. Find lim_{x→2} [f(x) + g(x)].

Solution: Evaluate the limits of the functions separately:

lim_{x→2} f(x) = 13 and lim_{x→2} g(x) = 11

Also

lim_{x→2} [f(x) + g(x)] = lim_{x→2} [8x + 8] = 24

Therefore

lim_{x→2} f(x) + lim_{x→2} g(x) = 24 = lim_{x→2} [f(x) + g(x)]

as expected.

EXERCISE 6.2: Let f(x) = 3x + 7 and c = 6. Find lim_{x→2} 6(3x + 7).

Solution:

lim_{x→2} 6(3x + 7) = 6 lim_{x→2} (3x + 7) = 6 · 13 = 78

EXERCISE 6.3: Find

lim_{x→2} (3x + 7)/(5x + 1)

Solution:

lim_{x→2} (3x + 7)/(5x + 1) = [lim_{x→2} (3x + 7)]/[lim_{x→2} (5x + 1)] = 13/11

EXERCISE 6.4: Let P(x) = αx² + βx + γ. Find lim_{x→0} P(x).

Solution:

lim_{x→0} P(x) = lim_{x→0} [αx² + βx + γ] = P(0) = γ

EXERCISE 6.5: Find

lim_{x→−1} (x³ + 5x² + 3x)/(2x⁵ + 3x⁴ − 2x² − 1)
Solution:

lim_{x→−1} (x³ + 5x² + 3x)/(2x⁵ + 3x⁴ − 2x² − 1)
  = [lim_{x→−1} (x³ + 5x² + 3x)]/[lim_{x→−1} (2x⁵ + 3x⁴ − 2x² − 1)]
  = [(−1)³ + 5(−1)² + 3(−1)]/[2(−1)⁵ + 3(−1)⁴ − 2(−1)² − 1]
  = 1/(−2) = −1/2

Note that the limit also equals N(−1)/D(−1), where f(x) = N(x)/D(x) and we are evaluating lim_{x→a} f(x).
EXERCISE 6.6: Use the definition of the derivative to find the derivative of each of the following functions: (a) f(x) = ax; (b) f(x) = bx²; (c) f(x) = eˣ; (d) f(x) = sin x.

Solution: (a) Derivatives are found using the definition

df(x)/dx = lim_{Δx→0} [f(x + Δx) − f(x)]/Δx

We first evaluate f(x) and f(x + Δx):

f(x) = ax, f(x + Δx) = a(x + Δx)

Now substitute these expressions into the definition of the derivative:

df(x)/dx = lim_{Δx→0} [a(x + Δx) − ax]/Δx = lim_{Δx→0} aΔx/Δx = a

(b) Evaluate f(x) and f(x + Δx):

f(x) = bx², f(x + Δx) = b(x + Δx)² = b(x² + 2xΔx + Δx²)

Evaluate df(x)/dx:

df(x)/dx = lim_{Δx→0} [b(x² + 2xΔx + Δx²) − bx²]/Δx
         = lim_{Δx→0} b(2xΔx + Δx²)/Δx
         = lim_{Δx→0} b(2x + Δx) = 2bx

(c) Evaluate f(x) and f(x + Δx):

f(x) = eˣ, f(x + Δx) = eˣ e^{Δx}
Evaluate df(x)/dx:

df(x)/dx = lim_{Δx→0} [eˣ e^{Δx} − eˣ]/Δx = lim_{Δx→0} eˣ(e^{Δx} − 1)/Δx

From the series expansion of e^{Δx} for Δx much less than 1 we get e^{Δx} ≈ 1 + Δx, Δx ≪ 1, and

df(x)/dx = lim_{Δx→0} eˣ Δx/Δx = eˣ

(d) Evaluate f(x) and f(x + Δx):

f(x) = sin x, f(x + Δx) = sin(x + Δx) = sin x cos Δx + sin Δx cos x

where we have used the trigonometric identity sin(A + B) = sin A cos B + sin B cos A. Now evaluate df(x)/dx:

df(x)/dx = lim_{Δx→0} [(sin x cos Δx + sin Δx cos x) − sin x]/Δx

As Δx → 0, we have cos Δx ≈ 1 and sin Δx ≈ Δx. Therefore

df(x)/dx = lim_{Δx→0} [(sin x + Δx cos x) − sin x]/Δx = cos x

EXERCISE 6.7: Evaluate

lim_{x→0} (x² + bx)/x

where g(x) = x² + bx and f(x) = x.

Solution: Note that g(0) = 0 and f(0) = 0. Therefore L'Hôpital's rule must be used; thus

lim_{x→0} (x² + bx)/x = lim_{x→0} (2x + b)/1 = b

where dg(x)/dx = 2x + b and df(x)/dx = 1. An alternative approach is to use factoring, such that

lim_{x→0} (x² + bx)/x = lim_{x→0} (x + b) = b
EXERCISE 6.8: Evaluate

lim_{x→0} x/ln²(1 + x)

where g(x) = x and f(x) = ln²(1 + x).

Solution: At x = 0 we have g(0) = 0 and f(0) = 0. To find the limit, we must use L'Hôpital's rule. First determine the derivatives:

dg(x)/dx = 1; df(x)/dx = 2 ln(1 + x)/(1 + x)

By L'Hôpital's rule, we have

lim_{x→0} g(x)/f(x) = lim_{x→0} 1/[2 ln(1 + x)/(1 + x)] = lim_{x→0} (1 + x)/[2 ln(1 + x)]

In the limit as x → 0 we have the asymptotic approximation ln(1 + x) ≈ x; thus

lim_{x→0} g(x)/f(x) = lim_{x→0} (1 + x)/(2x)

This limit has two values that depend on how x → 0:

lim_{x→0⁺} (1 + x)/(2x) = +∞; lim_{x→0⁻} (1 + x)/(2x) = −∞

where x → 0⁺ implies x approaches 0 through positive values and x → 0⁻ implies x approaches 0 through negative values. Thus the function x/ln²(1 + x) is discontinuous at the point x = 0.

EXERCISE 6.9: Let u = ax² + bx + c. Find the second-order derivative.

Solution: Begin with the first-order derivative:

du/dx = d(ax² + bx + c)/dx = d(ax²)/dx + d(bx)/dx + dc/dx = 2ax + b
Find the second-order derivative by differentiating the result of the first-order derivative:

d²u/dx² = d(du/dx)/dx = d(2ax + b)/dx = 2a

EXERCISE 6.10: (a) Calculate the differential of y = x³. (b) Use the chain rule to calculate dy/dx, where x, y are given by the parametric equations

x = 2t + 3, y = t² − 1

Solution: (a) We calculate

dy = [dy(x)/dx] dx = 3x² dx

(b) By the chain rule,

dy/dx = (dy/dt)(dt/dx)

But

dt/dx = 1/(dx/dt)

Therefore calculate

dy/dx = (dy/dt)/(dx/dt) = 2t/2 = t

Note that t = (x − 3)/2 from the parametric equation for x, so that

dy/dx = (x − 3)/2

An alternative approach is to write

y = t² − 1 = [(x − 3)/2]² − 1

Then we get the expected result

dy/dx = 2 · [(x − 3)/2] · d/dx[(x − 3)/2] = 2 · [(x − 3)/2] · (1/2) = (x − 3)/2
EXERCISE 6.11: Find the extrema of y = ax² + bx + c.

Solution: Evaluate the first and second derivatives of y:

y′ = 2ax + b, y″ = 2a

Extrema are found by setting y′ = 0 and finding the corresponding values of x; thus

y′ = 2a x_ext + b = 0

or

x_ext = −b/(2a)

From the second derivative of y, we know that y″ = 2a > 0 implies x_ext is a minimum and y″ = 2a < 0 implies x_ext is a maximum. For example, suppose b = c = 0 and y = ax². The extremum occurs at x_ext = 0. Sketches of y = ax² are shown as follows:

EXERCISE 6.12: Find the square root of any given positive number c. Show that the Newton–Raphson method works for c = 2.

Solution: Let x = √c so that c = x² and f(x) = x² − c = 0. The derivative is f′(x) = 2x and the iterative formula becomes

x_{n+1} = x_n − (x_n² − c)/(2x_n) = 0.5 (x_n + c/x_n)

For c = 2 and x0 = 1, we find

x1 = 1.500 000
x2 = 1.416 667
x3 = 1.414 216
...
EXERCISE 6.13: Use the Taylor series to show that the centered difference [P(x + Δx) − P(x − Δx)]/(2Δx) is second-order correct.

Solution: Use the Taylor series to write

P(x + Δx) = P(x) + Δx dP/dx + (Δx²/2) d²P/dx² + (Δx³/6) d³P/dx³ + ···

and

P(x − Δx) = P(x) − Δx dP/dx + (Δx²/2) d²P/dx² − (Δx³/6) d³P/dx³ + ···

Calculate the centered difference by first calculating

P(x + Δx) − P(x − Δx) = 2Δx dP/dx + 2(Δx³/6) d³P/dx³ + ···

Solve for dP/dx:

dP/dx = [P(x + Δx) − P(x − Δx)]/(2Δx) + E_T′

where the truncation error E_T′ is

E_T′ = −(Δx²/6) d³P/dx³ + ···

The term E_T′ is second order because of the term involving Δx². Neglecting terms of second order and above means that E_T′ is considered negligible, particularly in the limit as Δx approaches zero.
S7
PARTIAL DERIVATIVES
EXERCISE 7.1: Let f(x, y) = e^{xy}. Find f_xy, f_yx.

Solution: Given f(x, y) = e^{xy}, calculate

f_xy = ∂/∂y (∂f/∂x) = ∂/∂y (y e^{xy}) = (1 + xy) e^{xy}
f_yx = ∂/∂x (∂f/∂y) = ∂/∂x (x e^{xy}) = (1 + xy) e^{xy}

Note that f_xy = f_yx, as expected.
EXERCISE 7.2: Let y = x1² + x1² x2 + x2³. Calculate ∂y/∂x1, ∂y/∂x2, and dy.

Solution: From the partial derivatives

∂y/∂x1 = 2x1 + 2x1 x2, ∂y/∂x2 = x1² + 3x2²

we can determine the total differential:

dy = Σ_{i=1}^{2} (∂y/∂xi) dxi = (∂y/∂x1) dx1 + (∂y/∂x2) dx2
   = (2x1 + 2x1 x2) dx1 + (x1² + 3x2²) dx2

EXERCISE 7.3: Let z = x² + y², x = at, y = be⁻ᵗ. Find dz/dt.

Solution: The total derivative is

dz/dt = (∂z/∂x)(dx/dt) + (∂z/∂y)(dy/dt) = 2x(a) + 2y(−be⁻ᵗ)

Replace x, y with their parametric representations to obtain

dz/dt = 2a²t − 2b²e⁻²ᵗ

Alternatively, write z in terms of the parameter t only:

z = x² + y² = a²t² + b²e⁻²ᵗ

Now evaluate the derivative:

dz/dt = 2a²t − 2b²e⁻²ᵗ

which agrees with the total derivative calculated above.
EXERCISE 7.4: Find the Jacobian of the coordinate rotation y ¼ ax, where a is the matrix of coordinate rotations introduced in Chapter 4, Section 4.1.
Solution: The matrix a of the coordinate rotation is given by

a = |  cos θ   sin θ |
    | −sin θ   cos θ | = [a_ij]

The Jacobian is found from dy = J dx, where

J_ij = ∂y_i/∂x_j

and

y_i = Σ_{j=1}^{2} a_ij x_j

Therefore

J_ij = ∂y_i/∂x_j = a_ij

or J = a for the two-dimensional coordinate rotation.

EXERCISE 7.5: Let r = x î + y ĵ + z k̂. Evaluate (a) r = |r|; (b) ∇ · r; (c) ∇ × r; (d) ∇(r); and (e) ∇(1/r).

Solution: (a) Magnitude of r:
r = |r| = √(r · r) = √(x² + y² + z²)

(b) Divergence of r:

∇ · r = ∂x/∂x + ∂y/∂y + ∂z/∂z = 3

(c) Curl of r:

∇ × r = | î      ĵ      k̂    |
        | ∂/∂x   ∂/∂y   ∂/∂z |
        | x      y      z    |
      = î(∂z/∂y − ∂y/∂z) − ĵ(∂z/∂x − ∂x/∂z) + k̂(∂y/∂x − ∂x/∂y)

Evaluating the derivatives resulting from expansion of the determinant shows that ∇ × r = 0.

(d) Gradient of r:

∇r = ∂/∂x (x² + y² + z²)^{1/2} î + ∂/∂y (x² + y² + z²)^{1/2} ĵ + ∂/∂z (x² + y² + z²)^{1/2} k̂
   = [1/(2(x² + y² + z²)^{1/2})] {2x î + 2y ĵ + 2z k̂} = r/r

(e) Gradient of 1/r:

∇(1/r) = ∇(r⁻¹) = −(1/r²) ∇r = −r/r³

EXERCISE 7.6: Given f(z) = z², show that the Cauchy–Riemann equations are satisfied and evaluate f′(z).

Solution: Expand f(z) in terms of x, y:

f(z) = z² = x² − y² + 2ixy

Since f(z) = u(x, y) + iv(x, y), we find

u(x, y) = x² − y², v(x, y) = 2xy

The Cauchy–Riemann equations give

∂u/∂x = 2x = ∂v/∂y
∂u/∂y = −2y = −∂v/∂x
The derivative f′(z) is

f′(z) = ∂u/∂x + i ∂v/∂x = 2x + i2y = 2(x + iy) = 2z

or

f′(z) = ∂v/∂y − i ∂u/∂y = 2x − i(−2y) = 2(x + iy) = 2z

S8

INTEGRAL CALCULUS
EXERCISE 8.1: Given

f(x) = 1/x, x = g(t) = tⁿ

Find ∫_a^b f(x) dx by direct integration and by a change of variable.

Solution: Method 1 uses direct integration of f(x) with respect to x:

∫_a^b f(x) dx = ∫_a^b dx/x = ln x |_a^b = ln(b/a)

Method 2 involves performing the integration with respect to t. First change the limits of integration:

g(α) = a = αⁿ, g(β) = b = βⁿ

This implies

α = a^{1/n}, β = b^{1/n}

Change the integrand from a function of x to a function of t:

f(x) dx = f(g(t)) [dg(t)/dt] dt = t⁻ⁿ [n t^{n−1}] dt

where f(x) = 1/x
implies

f(g(t)) = 1/g(t) = 1/tⁿ = t⁻ⁿ

Evaluate the definite integral:

∫_a^b f(x) dx = ∫_α^β f(g(t)) [dg(t)/dt] dt
             = ∫_{a^{1/n}}^{b^{1/n}} t⁻ⁿ n t^{n−1} dt = n ∫_{a^{1/n}}^{b^{1/n}} t⁻¹ dt
             = n ln t |_{a^{1/n}}^{b^{1/n}} = n ln(b^{1/n}/a^{1/n}) = ln(b/a)

The results of Methods 1 and 2 agree, as expected.

EXERCISE 8.2: Evaluate the indefinite integral I(x) = ∫ x sin x dx using integration by parts.

Solution: Recall that integration by parts lets us write

∫ u dv = uv − ∫ v du

Let x = u, du = dx, and sin x dx = dv, so that

v = ∫ dv = ∫ sin x dx = −cos x

Then integration by parts gives

∫ x sin x dx = −x cos x − ∫ (−cos x) dx = −x cos x + sin x + C = sin x − x cos x + C

where C is the integration constant. The integral solution is verified by evaluating the derivative

d/dx (sin x − x cos x + C) = cos x − [cos x + x(−sin x)] + 0 = x sin x
EXERCISE 8.3: Calculate the length of the straight line y = mx + b between the end points x_A, x_B.

Solution: For Method 1 the equation of a straight line is y = mx + b. Therefore the slope of the line is

dy/dx = m

and the curve length L between the end points (x_A, x_B) is the integral

L = ∫_{x_A}^{x_B} [1 + m²]^{1/2} dx = [1 + m²]^{1/2} ∫_{x_A}^{x_B} dx = [1 + m²]^{1/2} (x_B − x_A)

For Method 2 the length L of a straight line from analytic geometry is

L = [(x_B − x_A)² + (y_B − y_A)²]^{1/2}

where the points are sketched in the figure. Noting that the equation of a straight line lets us write

y_A = m x_A + b, y_B = m x_B + b

we find

y_B − y_A = (m x_B + b) − (m x_A + b) = m(x_B − x_A)

Substituting this result into L gives

L = [(x_B − x_A)² + m²(x_B − x_A)²]^{1/2} = [1 + m²]^{1/2} (x_B − x_A)

in agreement with Method 1.
EXERCISE 8.4: Let y = f(x) = x². Evaluate the definite integral ∫_0^b f(x) dx analytically and numerically using the trapezoidal rule and Simpson's rule. Compare the results.

Solution: The exact solution is

∫_0^b x² dx = x³/3 |_0^b = b³/3

The trapezoidal rule gives

∫_0^b x² dx ≈ (h/2) f(a) + (h/2) f(b) = b³/2

since h = b − 0 = b in this case. If we use Simpson's rule, we get

∫_0^b x² dx ≈ (h/3) f(a) + (4h/3) f((a + b)/2) + (h/3) f(b)
            = 0 + (4/3)(b/2) f(b/2) + (1/3)(b/2) f(b)

where h is now h = (b − a)/2 = b/2. Thus

∫_0^b x² dx ≈ (2b/3)(b/2)² + (b/6) b² = b³/6 + b³/6 = b³/3

In this case, Simpson's rule agrees exactly with the analytical solution.
S9
SPECIAL INTEGRALS
EXERCISE 9.1: Line Integral. Evaluate the integral ∫_C A · dr from (0, 0, 0) to (1, 1, 1) along the path x = t, y = t², z = t³ for

A = (3x² − 6yz) î + (2y + 3xz) ĵ + (1 − 4xyz²) k̂

Solution: The line integral has the expanded form

∫_C A · dr = ∫_C {(3x² − 6yz) î + (2y + 3xz) ĵ + (1 − 4xyz²) k̂} · [î dx + ĵ dy + k̂ dz]
           = ∫_{(0,0,0)}^{(1,1,1)} {(3x² − 6yz) dx + (2y + 3xz) dy + (1 − 4xyz²) dz}
The integral is evaluated by replacing the x, y, z coordinates with their parameterized values. The points (x, y, z) = (0, 0, 0) and (1, 1, 1) correspond to parameter values t = 0 and 1, respectively; thus

∫_C A · dr = ∫_{t=0}^{1} {[3t² − 6t²(t³)] dt + [2t² + 3t(t³)] d(t²)} + ∫_{t=0}^{1} [1 − 4t(t²)(t³)²] d(t³)
           = ∫_0^1 {(3t² − 6t⁵) dt + (4t³ + 6t⁵) dt + (3t² − 12t¹¹) dt}
           = 2

EXERCISE 9.2: Double Integral. Evaluate the double integral ∫∫_R (x² + y²) dx dy in the region R bounded by y = x², x = 2, y = 1.

Solution: The region R is shown in the sketch. The double integral becomes

∫∫_R (x² + y²) dx dy = ∫_{x=1}^{2} {∫_{y=1}^{x²} (x² + y²) dy} dx

The double integral is solved in this case by first performing the y integration and then the x integration; thus

∫∫_R (x² + y²) dx dy = ∫_{x=1}^{2} [x² y + y³/3]_{y=1}^{x²} dx
                     = ∫_{x=1}^{2} (x⁶/3 + x⁴ − x² − 1/3) dx
                     = 1006/105
EXERCISE 9.3: Show that {φn(x) = (sin nx)/√π : n = 1, 2, 3, . . .} is an orthonormal set of functions in the interval −π ≤ x ≤ π.
Solution: To be orthonormal, we must show that

(1/π) ∫_{−π}^{π} sin mx sin nx dx = δ_mn

The integral is evaluated by using the trigonometric identities

cos(A ± B) = cos A cos B ∓ sin A sin B

to derive the relation

2 sin A sin B = cos(A − B) − cos(A + B)

Substituting into the integral and solving for m ≠ n gives the orthogonality condition

∫_{−π}^{π} sin mx sin nx dx = ½ [∫_{−π}^{π} cos(m − n)x dx − ∫_{−π}^{π} cos(m + n)x dx]
  = ½ [sin(m − n)x/(m − n)|_{−π}^{π} − sin(m + n)x/(m + n)|_{−π}^{π}] = 0 if m ≠ n

The normalization condition is found by solving the integral for m = n, to obtain

(1/π) ∫_{−π}^{π} sin nx sin nx dx = (1/2π) [∫_{−π}^{π} dx − ∫_{−π}^{π} cos 2nx dx]
  = (1/2π) [x|_{−π}^{π} − sin 2nx/(2n)|_{−π}^{π}] = 1

Notice that the constant 1/√π was needed in the definition of φn(x) to normalize the set of functions.

EXERCISE 9.4: Determine the Fourier transform of the harmonic function

g(t) = exp(iω0 t),  −T ≤ t ≤ T
       0,           |t| > T

Solution: Calculate the integral

G(ω) = (1/√(2π)) ∫_{−T}^{T} e^{iω0 t} e^{−iωt} dt = (1/√(2π)) ∫_{−T}^{T} e^{i(ω0−ω)t} dt
Solve the integral to get

G(ω) = (1/√(2π)) [e^{i(ω0−ω)t}/(i(ω0 − ω))]_{−T}^{T}

and simplify using Euler's equation:

G(ω) = √(2/π) T sin[(ω0 − ω)T]/[(ω0 − ω)T]

A plot of G(ω) versus ω shows that frequencies ω near ω0 contribute most to the transform.

EXERCISE 9.5: Show that Eq. (9.4.8) is valid. Hint: You should use the Dirac delta function in Eq. (9.4.4).

Solution: Equation (9.4.8) is

f(t) = (1/2π) ∫_{−∞}^{∞} [∫_{−∞}^{∞} f(s) e^{−iωs} ds] e^{iωt} dω    (i)

Rearranging gives

f(t) = ∫_{−∞}^{∞} f(s) [(1/2π) ∫_{−∞}^{∞} e^{iω(t−s)} dω] ds    (ii)

The bracketed term in Eq. (ii) is the Dirac delta function in Eq. (9.4.4), so Eq. (ii) becomes

f(t) = ∫_{−∞}^{∞} f(s) δ(t − s) ds    (iii)
Equation (iii) is equivalent to Eq. (9.4.3), which demonstrates that Eq. (9.4.8) is valid.

EXERCISE 9.6: Show that the Fourier transform of the product of two functions g1(t) and g2(t) can be written as a convolution integral. Hint: You should use Eqs. (9.4.11) and (9.4.12) and the Dirac delta function in Eq. (9.4.4).

Solution: Following Collins [1999], we begin with Eq. (9.4.11) to form the Fourier transform of the product of two functions g1(t) and g2(t); namely,

F{g1(t) g2(t)} = ∫_{−∞}^{∞} g1(t) g2(t) e^{−iωt} dt    (i)
The Fourier integral representations of the two functions g1(t) and g2(t) are

F⁻¹{B1(ω)} = g1(t) = (1/2π) ∫_{−∞}^{∞} B1(ω) e^{iωt} dω    (ii)

and

F⁻¹{B2(ω)} = g2(t) = (1/2π) ∫_{−∞}^{∞} B2(ω) e^{iωt} dω    (iii)

from Eq. (9.4.12). Substituting Eqs. (ii) and (iii) into Eq. (i) gives

F{g1(t) g2(t)} = (1/4π²) ∫_{−∞}^{∞} ∫_{−∞}^{∞} ∫_{−∞}^{∞} B1(ω′) B2(ω″) e^{−i(ω−ω′−ω″)t} dω′ dω″ dt    (iv)

The integral over t is the Dirac delta function

(1/2π) ∫_{−∞}^{∞} e^{−i(ω−ω′−ω″)t} dt = δ(ω − ω′ − ω″)    (v)

Using Eq. (v) in Eq. (iv) lets us write

F{g1(t) g2(t)} = (1/2π) ∫_{−∞}^{∞} ∫_{−∞}^{∞} B1(ω′) B2(ω″) δ(ω − ω′ − ω″) dω′ dω″    (vi)

Two integrals are possible. The integral over ω′ is

F{g1(t) g2(t)} = (1/2π) ∫_{−∞}^{∞} B1(ω − ω″) B2(ω″) dω″    (vii)

The integral over ω″ is

F{g1(t) g2(t)} = (1/2π) ∫_{−∞}^{∞} B1(ω′) B2(ω − ω′) dω′    (viii)

Comparing Eqs. (vii) and (viii) with the form of the convolution integral in Eq. (9.4.14) shows that

F{g1(t) g2(t)} = (1/2π) B1(ω − ω″) ∗ B2(ω″) = (1/2π) B1(ω′) ∗ B2(ω − ω′)    (ix)

Thus the Fourier transform of the product of two functions g1(t) and g2(t) can be written as a convolution integral.
EXERCISE 9.7: What are the Z transforms of the causal digital functions h1 = {1, −2, −1, 0, 1} and h2 = {0.6, −1.2, −2.4, 0.7, 3.8}?
Solution: Substitute the coefficients of each causal digital function into the Z transform to obtain h1(z) = 1 − 2z − z² + z⁴ and h2(z) = 0.6 − 1.2z − 2.4z² + 0.7z³ + 3.8z⁴.

EXERCISE 9.8: Verify the Laplace transform L{a f(t)} = a L{f(t)}, where a is a complex constant and f(t) is a function of the variable t.

Solution: Use the definition of the Laplace transform to write

L{a f(t)} = ∫_0^∞ e^{−st} a f(t) dt = a ∫_0^∞ e^{−st} f(t) dt = a L{f(t)}
EXERCISE 9.9: Verify the inverse Laplace transform L⁻¹{aF(s)} = aL⁻¹{F(s)}, where a is a complex constant and F(s) = L{f(t)} is the Laplace transform of a function f(t) of the variable t.

Solution: Note from Exercise 9.8 that L{a f(t)} = a L{f(t)}. We can therefore write

L⁻¹{aF(s)} = L⁻¹{a L{f(t)}} = L⁻¹{L{a f(t)}} = a f(t)    (i)

We also have the relation

L⁻¹{L{f(t)}} = f(t)    (ii)

The ratio of Eqs. (i) and (ii) gives

L⁻¹{L{a f(t)}}/L⁻¹{L{f(t)}} = a    (iii)

so that

L⁻¹{L{a f(t)}} = a L⁻¹{L{f(t)}}    (iv)

Using L{a f(t)} = a L{f(t)} and F(s) = L{f(t)} in Eq. (iv) gives

L⁻¹{L{a f(t)}} = L⁻¹{a L{f(t)}} = L⁻¹{aF(s)} = a L⁻¹{F(s)}    (v)

which is the desired result.

EXERCISE 9.10: What is the Laplace transform of the complex function e^{iωt} with variable t and constant ω? Use the Laplace transform of e^{iωt} and Euler's equation e^{iωt} = cos ωt + i sin ωt to verify the Laplace transforms L{cos ωt} = s/(s² + ω²) and L{sin ωt} = ω/(s² + ω²).
Solution: The Laplace transform of e^{iωt} is

L{e^{iωt}} = ∫_0^∞ e^{−st} e^{iωt} dt = ∫_0^∞ e^{−(s−iω)t} dt    (i)

Evaluating the integral gives

∫_0^∞ e^{−(s−iω)t} dt = [−e^{−(s−iω)t}/(s − iω)]_0^∞ = −(0 − 1)/(s − iω) = 1/(s − iω)    (ii)

Equation (ii) can be rewritten in the form

∫_0^∞ e^{−(s−iω)t} dt = [1/(s − iω)] · [(s + iω)/(s + iω)] = (s + iω)/(s² + ω²)    (iii)

Combining Eqs. (i) and (iii) gives the Laplace transform of e^{iωt}:

L{e^{iωt}} = ∫_0^∞ e^{−st} e^{iωt} dt = (s + iω)/(s² + ω²)    (iv)

If we now use Euler's equation in Eq. (iv) we find

L{e^{iωt}} = L{cos ωt + i sin ωt} = L{cos ωt} + i L{sin ωt} = (s + iω)/(s² + ω²)    (v)

The real part of Eq. (v) gives

Re(L{e^{iωt}}) = L{cos ωt} = s/(s² + ω²)    (vi)

and the imaginary part gives

Im(L{e^{iωt}}) = L{sin ωt} = ω/(s² + ω²)    (vii)

Equations (vi) and (vii) verify the Laplace transforms L{cos ωt} = s/(s² + ω²) and L{sin ωt} = ω/(s² + ω²).
S10
ORDINARY DIFFERENTIAL EQUATIONS
EXERCISE 10.1: Solve

dy/dx − y = e^{2x}
Solution: In this case, we have f(x) = −1 and r(x) = e^{2x}. Using f(x) in h gives

h = ∫ f(x) dx = −x

Substituting this result into the equation for the general solution gives

y(x) = eˣ [∫ e⁻ˣ e^{2x} dx + C] = eˣ [eˣ + C] = Ceˣ + e^{2x}

where C is a constant of integration. To verify the solution, we calculate

dy/dx = Ceˣ + 2e^{2x}

and find the expected result

dy/dx − y = Ceˣ + 2e^{2x} − Ceˣ − e^{2x} = e^{2x}

EXERCISE 10.2: Solve

dy/dx + y tan x = sin 2x, y(0) = 1

Solution: We first observe that f(x) = tan x, r(x) = sin 2x, so that

h = ∫ f(x) dx = ∫ tan x dx = ln(sec x)

Using this result gives

e^h = exp[ln(sec x)] = sec x
e^{−h} = (e^h)⁻¹ = (sec x)⁻¹ = cos x

and

e^h r = sec x sin 2x = (1/cos x)(2 sin x cos x) = 2 sin x

The general solution becomes

y(x) = cos x [∫ 2 sin x dx + C] = C cos x − 2 cos²x
The boundary condition y(0) = 1 implies 1 = C − 2, or C = 3. Thus

y(x) = 3 cos x − 2 cos²x

The solution is verified by evaluating

dy/dx = 3(−sin x) − 4 cos x(−sin x) = −3 sin x + 4 cos x sin x

Using the trigonometric identity sin 2x = 2 sin x cos x we get

dy/dx = −3 sin x + 2 sin 2x

Now calculate the left-hand side of the ODE to get the expected result

dy/dx + y tan x = −3 sin x + 2 sin 2x + (3 cos x − 2 cos²x)(sin x/cos x)
               = −3 sin x + 2 sin 2x + 3 sin x − 2 cos x sin x
               = 2 sin 2x − sin 2x = sin 2x
EXERCISE 10.3: Convert the nonautonomous equation

dx1/dt = 1 − t + 4x1, x1(0) = 1

to an autonomous system.

Solution: The single-equation problem with an initial condition is equivalent to the two-equation problem

dx1/dt = 1 − x2 + 4x1
dx2/dt = 1

subject to

x1(0) = 1, x2(0) = 0
EXERCISE 10.4: Convert the FORTRAN 90/95 program for the Runge–Kutta algorithm in the above example to the C++ programming language. Show that the numerical and analytical results agree.

Solution: The following code illustrates the Runge–Kutta algorithm in C++. It may be necessary to modify the code to work with different C++ compilers. It reproduces both the numerical and analytical results obtained by the FORTRAN 90/95 program in Chapter 10.

RUNGE–KUTTA ALGORITHM IN C++ SOURCE: A. C. FANCHI

/*
 * 4th-Order Runge-Kutta Algorithm in C++
 */
#include <math.h>
#include <stdio.h>

float F1 (float, float, float);
float F2 (float, float, float);

int main (int argc, char* argv[])
{
  //Initial Conditions
  const float T0=0, X10=0, X20=1, H=0.1;
  float X1, X2, T, X1AN, X2AN;
  float A1, A2, B1, B2, C1, C2, D1, D2;
  int NS=20;
  int counter=1;
  //Initialize Variables
  X1=X10;
  X2=X20;
  T=T0;
  //Create and open output file
  FILE* outfile=fopen("RK_4th_C++.out", "w");
  //Output file header
  fprintf(outfile, "NUMERICAL ANALYTICAL\n");
  fprintf(outfile, "STEP T X1 X2 X1AN X2AN\n");
do { A1=F1 (X1,X2,T); A2=F2 (X1, X2, T); B1=F1 (X1+(H*A1/2),X2+(H*A2/2), T+(H/2)); B2=F2 (X1+(H*A1/2),X2+(H*A2/2), T+(H/2)); C1=F1 (X1+(H*B1/2),X2+(H*B2/2), T+(H/2)); C2=F2 (X1+(H*B1/2),X2+(H*B2/2), T+(H/2)); D1=F1 (X1+(H*C1),X2+(H*C2), T+(H)); D2=F2 (X1+(H*C1),X2+(H*C2), T+(H)); //Update X1, X2, T X1=X1+(H/6) * (A1+2*B1+2*C1+D1); X2=X2+(H/6) * (A2+2*B2+2*C2+D2); T=T+H; X1AN=sin(T); X2AN=cos(T); //Output data to file fprintf (outfile, "%4d %9f %9f %9f %9f %9f\n", counter, T, X1, X2, X1AN, X2AN); counter++; } while (counter <= NS); //Close output file fclose(outfile); return 0; } float F1 (float X1, float X2, float T) { float F1; F1=X2; return F1; } float F2 (float X1, float X2, float T) { float F2; F2=-X1; return F2; }
EXERCISE 10.5: Write the third-order ODE

d³y/dt³ = 2y (dy/dt)² + 4y

as a first-order system.

Solution: In this case n = 3 for the third-order ODE. Define the variables

x1 = y, x2 = dy/dt, x3 = d²y/dt²

The resulting system of equations is

dx1/dt = x2
dx2/dt = x3
dx3/dt = 2x1 x2² + 4x1

EXERCISE 10.6: The linear harmonic oscillator is governed by

d²y/dt² + y = 0, y(0) = a, (dy/dt)(0) = b

Convert this second-order ODE to a first-order system.

Solution: In this case n = 2 for the second-order ODE. Define the variables

x1 = y, x2 = dy/dt

so that

dx1/dt = x2
dx2/dt = −x1

and the initial conditions are represented as

x1(0) = a, x2(0) = b
EXERCISE 10.7: Suppose we are given the equation mẍ + cẋ + kx = 0 as a model of a mechanical system, where {m, c, k} are real constants. Under what conditions are the solutions stable? Hint: Rearrange the equation to look like Eq. (10.3.1) and then find the eigenvalues corresponding to the characteristic equation of the stability analysis.

Solution: Rewrite the ODE as

ẍ + (c/m)ẋ + (k/m)x = 0

so that g1 = c/m and g0 = k/m by comparison with Eq. (10.3.1) of the stability analysis presentation. The displacement of the solution from equilibrium is u = e^{λ± t} g, where the eigenvalues are given by

λ± = ½[−g1 ± √(g1² − 4g0)] = ½[−c/m ± √(c²/m² − 4k/m)]

The displacement converges as long as λ± < 0; thus the physical constants must satisfy the criterion

−c/m ± √(c²/m² − 4k/m) < 0

for stability. Notice that the stability criterion is not satisfied if c = 0. In this case we have a wave equation and the solution of the ODE is undamped. A detailed solution of this ODE is presented in Chapter 11. If k = 0, the stability criterion is

−c/m ± c/m < 0

The solutions are stable for λ = −c/m with c and m positive and k = 0.
EXERCISE 10.8: Another way to see the onset of chaos is to plot the number of fixed points versus the parameter λ. This plot is called the logistic map. Plot the logistic map for the May equation in the range 3.5 ≤ λ ≤ 4.0 and initial condition x0 = 0.5.
Solution: The logistic map for the May equation is shown in the accompanying figure. Fifty iterations were made for each value of λ. Values of λ were increased by an increment of 0.01.
S11
ODE SOLUTION TECHNIQUES
EXERCISE 11.1: Application to an Electrical System—RLC Circuit. Find a particular solution of the nonhomogeneous ODE

L Ï + R İ + I/Ĉ = E0 ω cos ωt

where L = inductance of inductor, R = resistance of resistor, Ĉ = capacitance of capacitor, E0 ω = maximum value of the derivative of the electromotive force (E0 = voltage at t = 0), and I(t) = current.

Solution: The RLC equation has the form

A ẍ + B ẋ + C x = D cos ωt

where A = L, B = R, C = 1/Ĉ, D = E0 ω, and x = I(t). The particular solution is

I(t) = a cos ωt + b sin ωt
with

a = E0 ω (1/Ĉ − Lω²)/[(1/Ĉ − Lω²)² + R²ω²]

and

b = E0 ω Rω/[(1/Ĉ − Lω²)² + R²ω²]

Factor 1/ω² to get

a = E0 (1/(ωĈ) − Lω)/[(1/(ωĈ) − Lω)² + R²]

and

b = E0 R/[(1/(ωĈ) − Lω)² + R²]

These expressions are transformed into a physically more interesting form by defining the reactance S = Lω − 1/(ωĈ) to obtain

a = −E0 S/(S² + R²)
b = E0 R/(S² + R²)

Impedance is defined as √(R² + S²) and corresponds to resistance for an AC system. See Exercise 2.2 of Chapter 2 for more details.
EXERCISE 11.2: Find the general solution of the one-dimensional Schrödinger equation

−(ħ²/2m) d²ψ(x)/dx² − Eψ(x) = V(x)

for the wavefunction ψ(x). The two boundary conditions are ψ(x0) = 0, ψ′(x0) = 0.
Solution: The one-dimensional Schrödinger equation may be put in a more familiar mathematical form by rearranging it and writing it as

d²ψ(x)/dx² + a0 ψ(x) = h(x)

where

a0 = 2mE/ħ², h(x) = −(2m/ħ²) V(x), a1(x) = 0

The homogeneous equation is

d²ψh(x)/dx² = −a0 ψh(x)

Using the trial solution ψh(x) = c1 sin kx + c2 cos kx in the homogeneous equation gives

d²ψh(x)/dx² = −k² [c1 sin kx + c2 cos kx] = −a0 ψh(x)

so that k² = a0, or

k = √(2mE/ħ²)

To solve the nonhomogeneous equation, we calculate the Green's function

K(x, t) = [sin kt cos kx − sin kx cos kt]/[−sin kt(sin kt) − cos kt(cos kt)]
        = [sin kt cos kx − sin kx cos kt]/(−[sin²kt + cos²kt])

or

K(x, t) = sin kx cos kt − sin kt cos kx = sin k(x − t)

where t is the dummy variable of integration. The particular solution of the nonhomogeneous equation is

ψp(x) = ∫_{x0}^{x} K(x, t) h(t) dt
      = −∫_{x0}^{x} [sin kx cos kt − sin kt cos kx] (2m V(t)/ħ²) dt
      = −∫_{x0}^{x} sin k(x − t) (2m V(t)/ħ²) dt

The general solution is the sum of the homogeneous and particular solutions, that is,

ψ(x) = ψp(x) + ψh(x)

An explicit solution may be obtained once V(x) is specified.

EXERCISE 11.3: Use the Frobenius method to solve the equation
[d²/dr² + (2/r) d/dr + ω² − ε/r²] R(r) = 0    (i)

for R(r) when ε and ω are independent of the variable r, and r > 0.

Solution: Multiply Eq. (i) by r² to find

r²R″ + 2rR′ + (ω²r² − ε)R = 0    (ii)

where the primes denote differentiation with respect to r. Assume a Frobenius series solution of the form

R(r) = Σ_{k=0}^{∞} a_k r^{k+λ}    (iii)

where the constant λ is now undetermined. The first and second derivatives are given by

R′(r) = Σ_{k=0}^{∞} a_k (k + λ) r^{k+λ−1}    (iv)

and

R″(r) = Σ_{k=0}^{∞} a_k (k + λ)(k + λ − 1) r^{k+λ−2}    (v)
Equation (ii) becomes

Σ_{k=0}^{∞} a_k (k + λ)(k + λ − 1) r^{k+λ} + 2 Σ_{k=0}^{∞} a_k (k + λ) r^{k+λ}
  + Σ_{k=0}^{∞} ω² a_k r^{k+λ+2} − ε Σ_{k=0}^{∞} a_k r^{k+λ} = 0    (vi)

or

Σ_{k=0}^{∞} a_k [(k + λ)(k + λ − 1) + 2(k + λ) − ε] r^{k+λ} + Σ_{k=0}^{∞} a_k ω² r^{k+λ+2} = 0    (vii)

Expanding the first few terms of Eq. (vii) gives

a0 [λ(λ − 1) + 2λ − ε] r^λ + a1 [(1 + λ)(λ) + 2(1 + λ) − ε] r^{λ+1}
  + {a2 [(2 + λ)(λ + 1) + 2(2 + λ) − ε] + a0 ω²} r^{λ+2}
  + {a3 [(3 + λ)(λ + 2) + 2(3 + λ) − ε] + a1 ω²} r^{λ+3} + ··· = 0    (viii)

The indicial equation is found by setting the coefficient of r^λ to zero in the k = 0 term; thus

a0 [λ(λ − 1) + 2λ − ε] = 0    (ix)

A nontrivial solution of Eq. (ix), that is, a solution in which a0 ≠ 0, gives the roots

λ± = [−1 ± √(1 + 4ε)]/2    (x)

The coefficient of the r^{λ+1} term in Eq. (viii) must satisfy

a1 [λ² + 3λ + 2 − ε] = 0    (xi)

Since λ was determined in Eq. (x), the only way to satisfy Eq. (xi) is to require a1 = 0.
The recurrence relation for the nonzero expansion coefficients is found from Eq. (viii) to be

a_{k+2} = −ω² a_k/β±(k)    (xii)

where

β±(k) = (k + 2 + λ±)(k + 1 + λ±) + 2(k + 2 + λ±) − ε    (xiii)

and k = 0, 2, 4, 6, . . . .

EXERCISE 11.4: Use the Laplace transform to solve the ODE

y″ + (a + b)y′ + ab y = 0    (i)

where the prime denotes differentiation with respect to x, and the boundary conditions are

y(0) = α, y′(0) = β    (ii)

The quantities a, b, α, β are constants.

Solution: The Laplace transform of Eq. (i) is

s²L(s) − sy(0) − y′(0) + (a + b)[sL(s) − y(0)] + abL(s) = 0    (iii)

where L(s) = L{y(x)}. Substituting Eq. (ii) into Eq. (iii) gives

s²L(s) − sα − β + (a + b)sL(s) − α(a + b) + abL(s) = 0

Rearranging lets us write

[s² + (a + b)s + ab] L(s) = αs + α(a + b) + β

or

L(s) = [αs + α(a + b) + β]/[s² + (a + b)s + ab] = [αs + α(a + b) + β]/[(s + a)(s + b)]    (iv)

Assume L(s) can be written in the form

L(s) = p/(s + a) − q/(s + b)    (v)
Equating Eqs. (iv) and (v) gives

q = (αa + β)/(b − a)
p = α + q = (αb + β)/(b − a)    (vi)

Inverting L(s) yields the solution

y(x) = L⁻¹{p/(s + a)} − L⁻¹{q/(s + b)}

or

y(x) = p e^{−ax} − q e^{−bx}    (vii)

Replacing p and q by the expressions in Eq. (vi) gives

y(x) = [(αb + β)/(b − a)] e^{−ax} − [(αa + β)/(b − a)] e^{−bx}    (viii)

Equation (viii) is validated by substitution into Eq. (i) and application of Eq. (ii).
S12
PARTIAL DIFFERENTIAL EQUATIONS
EXERCISE 12.1: Assume the boundary conditions of Laplace’s equation are F(0, y, z) ¼ F(xB , y, z) ¼ 0 F(x, 0, z) ¼ F(x, yB , z) ¼ 0 F(x, y, 0) ¼ 0 F(x, y, zB ) ¼ FB (x, y) in the intervals 0 x xB , 0 y yB and 0 z zB . Find the solution of Laplace’s equation using these boundary conditions and the trigonometric solutions in Eq. (12.3.7). Solution: The boundary condition at x ¼ 0 implies X(0) ¼ 0 ¼ a02 The boundary condition at x ¼ xB together with the above result implies X(xB ) ¼ 0 ¼ a01 sin axB
This relationship is satisfied for a nonzero value of $a_1'$ when $\alpha x_B = n\pi$ and $n = 1, 2, 3, \ldots$. The integers $0, \pm 1, \pm 2, \ldots$ also satisfy the condition at $X(x_B)$. The choice of positive integers has been made in anticipation of using a Fourier series expansion later in the solution. Similar results to the x coordinate solution X(x) are obtained for the y coordinate solution Y(y) by imposing the y coordinate boundary conditions. The results are
$$X(x) = a_1'\sin(\alpha_n x), \quad \alpha_n = \frac{n\pi}{x_B}; \qquad Y(y) = b_1'\sin(\beta_m y), \quad \beta_m = \frac{m\pi}{y_B}$$
where n and m are positive integers. Part of the z coordinate solution Z(z) is found by applying the boundary condition at z = 0 to get
$$Z(z) = c_1'\sinh(\delta_{nm}z), \qquad \delta_{nm}^2 = \alpha_n^2 + \beta_m^2$$
A partial solution of the Laplace equation is the product XYZ, or
$$\Phi_{nm}(x,y,z) = V_{nm}\sin\left(\frac{n\pi x}{x_B}\right)\sin\left(\frac{m\pi y}{y_B}\right)\sinh(\delta_{nm}z)$$
where the product of constants $a_1' b_1' c_1'$ has been replaced by the constant $V_{nm}$. The general solution of the Laplace equation is a superposition, or sum, of partial solutions
$$\Phi(x,y,z) = \sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\Phi_{nm}(x,y,z)$$
The last boundary condition at $z = z_B$ is satisfied by finding expansion coefficients $\{V_{nm}\}$ such that
$$\Phi_B(x,y) = \sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\Phi_{nm}(x,y,z_B) = \sum_{n=1}^{\infty}\sum_{m=1}^{\infty}V_{nm}\sin(\alpha_n x)\sin(\beta_m y)\sinh(\delta_{nm}z_B)$$
This is a Fourier series representation of $\Phi_B(x,y)$, and the coefficients $\{V_{nm}\}$ are given by [Jackson, 1962]
$$V_{nm} = \frac{4}{x_B y_B \sinh(\delta_{nm}z_B)}\int_0^{x_B}\!\!\int_0^{y_B}\Phi_B(x,y)\sin(\alpha_n x)\sin(\beta_m y)\,dx\,dy$$
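The coefficient formula above can be checked numerically. The sketch below is illustrative (not from the text): it takes the unit cube and the single-mode boundary function sin(πx)sin(πy), for which only V₁₁ should be nonzero by orthogonality.

```python
import math

# Cross-check of the V_nm formula on the unit cube (x_B = y_B = z_B = 1),
# for the illustrative boundary function Phi_B(x, y) = sin(pi x) sin(pi y).
xB = yB = zB = 1.0

def phi_B(x, y):
    return math.sin(math.pi*x/xB)*math.sin(math.pi*y/yB)

def V(n, m, N=200):
    # midpoint-rule approximation of the double integral for V_nm
    an, bm = n*math.pi/xB, m*math.pi/yB
    dnm = math.sqrt(an*an + bm*bm)
    hx, hy = xB/N, yB/N
    s = 0.0
    for i in range(N):
        x = (i + 0.5)*hx
        for j in range(N):
            y = (j + 0.5)*hy
            s += phi_B(x, y)*math.sin(an*x)*math.sin(bm*y)
    return 4.0/(xB*yB*math.sinh(dnm*zB))*s*hx*hy

# Only the (n, m) = (1, 1) mode is present; orthogonality removes the rest.
d11 = math.sqrt(2.0)*math.pi
assert abs(V(1, 1) - 1.0/math.sinh(d11*zB)) < 1e-6
assert abs(V(2, 1)) < 1e-9 and abs(V(1, 2)) < 1e-9
```

With this boundary function the double integral evaluates to x_B·y_B/4, so V₁₁ reduces to 1/sinh(δ₁₁z_B), which the quadrature reproduces.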
EXERCISE 12.2: Separation of Variables. Find the solution of
$$\frac{\partial^2\psi}{\partial x^2} = \frac{\partial\psi}{\partial t} \tag{i}$$
by assuming
$$\psi(x,t) = X(x)\,T(t) \tag{ii}$$
Solution: Substituting Eq. (ii) into Eq. (i) gives
$$T(t)\frac{\partial^2 X(x)}{\partial x^2} = X(x)\frac{\partial T(t)}{\partial t}$$
Divide by XT to get
$$\frac{1}{X}\frac{\partial^2 X}{\partial x^2} = \frac{1}{T}\frac{\partial T}{\partial t} = k^2$$
where $k^2$ is the separation constant. Note that each side of the above equation must be constant if the equation is satisfied for arbitrary values of the variables x, t. The resulting ODEs are
$$\frac{d^2X}{dx^2} = k^2 X \tag{iii}$$
$$\frac{dT}{dt} = k^2 T \tag{iv}$$
Equation (iii) has the solution
$$X(x) = a\,e^{kx} + b\,e^{-kx}$$
and Eq. (iv) has the solution
$$T(t) = c\,e^{k^2 t}$$
Combining these solutions with Eq. (ii) gives
$$\psi(x,t) = \left(a\,e^{kx} + b\,e^{-kx}\right)c\,e^{k^2 t}$$
or
$$\psi(x,t) = a'e^{kx+k^2 t} + b'e^{-kx+k^2 t}$$
where $a', b'$ are undetermined constants. The constants $a', b', k$ are determined by imposing two boundary conditions and one initial condition.
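As a quick numerical spot-check (not part of the text; the value of k is arbitrary), the separated solution can be substituted back into the PDE using central differences:

```python
import math

# Spot-check that psi(x, t) = exp(k x + k^2 t) satisfies psi_xx = psi_t.
k = 0.7

def psi(x, t):
    return math.exp(k*x + k*k*t)

h = 1e-5
for (x, t) in [(0.2, 0.1), (1.0, 0.5), (-0.5, 0.3)]:
    psi_xx = (psi(x+h, t) - 2*psi(x, t) + psi(x-h, t))/h**2
    psi_t  = (psi(x, t+h) - psi(x, t-h))/(2*h)
    assert abs(psi_xx - psi_t) < 1e-4*abs(psi_t)
```

The same check works for the e^(−kx+k²t) branch, since only k² enters the time dependence.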
EXERCISE 12.3: Find $u(kx - \omega t)$ using Eq. (12.3.13) and assuming n = 1. The nonlinear equation in this case is known as Burger's equation.

Solution: Equation (12.3.13) with n = 1 is
$$z = \int\frac{dg}{a + bg + \gamma g^2}$$
where b and γ are the coefficients of g and g², respectively, and a = c. The solution of the integral depends on the values of a, b, γ and the parameter $q = 4a\gamma - b^2$. If $q > 0$, then
$$z = \int\frac{dg}{a+bg+\gamma g^2} = \frac{2}{\sqrt{q}}\tan^{-1}\frac{2\gamma g + b}{\sqrt{q}}$$
or
$$g(z) = \frac{1}{2\gamma}\left[\sqrt{q}\,\tan\frac{\sqrt{q}\,z}{2} - b\right]$$
Recall that $u(kx - \omega t) = g(z)$ with the variable transformation $z = kx - \omega t$. If $q < 0$, then
$$z = \int\frac{dg}{a+bg+\gamma g^2} = -\frac{2}{\sqrt{-q}}\tanh^{-1}\frac{2\gamma g + b}{\sqrt{-q}}$$
or
$$g(z) = -\frac{1}{2\gamma}\left[\sqrt{-q}\,\tanh\frac{\sqrt{-q}\,z}{2} + b\right]$$
If $q = 0$, then
$$z = \int\frac{dg}{\gamma\left(g + \frac{b}{2\gamma}\right)^2} = -\frac{1}{\gamma}\,\frac{1}{g + \frac{b}{2\gamma}}$$
or
$$g(z) = -\left[\frac{b}{2\gamma} + \frac{1}{\gamma z}\right]$$
EXERCISE 12.4: Find the dispersion relation associated with the Klein–Gordon equation
$$-m^2\phi(x,t) = \left(\frac{\partial^2}{\partial t^2} - \frac{\partial^2}{\partial x^2}\right)\phi(x,t)$$
Hint: Use a solution of the form $\phi(x,t) = \iint h(k,\omega)\,e^{i(\omega t - kx)}\,dk\,d\omega$.

Solution: Substitute $\phi(x,t)$ into the Klein–Gordon equation and calculate the derivatives to get
$$\iint h(k,\omega)\left(-m^2\right)e^{i(\omega t - kx)}\,dk\,d\omega = \iint h(k,\omega)\left(-\omega^2 + k^2\right)e^{i(\omega t - kx)}\,dk\,d\omega$$
Combining like terms in this equation lets us write
$$\iint h(k,\omega)\left[m^2 - \omega^2 + k^2\right]e^{i(\omega t - kx)}\,dk\,d\omega = 0$$
which is satisfied by the dispersion relation $\omega^2 = m^2 + k^2$. In this case, ω and k are variables and m is a constant.
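The dispersion relation can be verified for a single plane wave. The sketch below (illustrative values of m and k, not from the text) checks that ω = √(m² + k²) makes the Klein–Gordon residual vanish, using complex arithmetic and finite differences:

```python
import cmath, math

# Check that a plane wave with omega^2 = m^2 + k^2 satisfies
# phi_tt - phi_xx = -m^2 phi.
m, k = 1.3, 0.8
omega = math.sqrt(m*m + k*k)

def phi(x, t):
    return cmath.exp(1j*(omega*t - k*x))

h = 1e-4
x, t = 0.4, 0.9
phi_tt = (phi(x, t+h) - 2*phi(x, t) + phi(x, t-h))/h**2
phi_xx = (phi(x+h, t) - 2*phi(x, t) + phi(x-h, t))/h**2
residual = phi_tt - phi_xx + m*m*phi(x, t)
assert abs(residual) < 1e-5
```

Any other (m, k) pair works the same way, since only the combination ω² − k² − m² enters the residual.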
S13
INTEGRAL EQUATIONS
EXERCISE 13.1: Write the linear oscillator equation $y'' + \omega^2 y = 0$ with initial conditions $y(0) = 0$, $y'(0) = 1$ in the form of an integral equation.

Solution: The initial conditions at the point x = a = 0 are $y(0) = y_0 = 0$, $y'(0) = y_0' = 1$. The kernel is
$$K(x,t) = (t-x)\left[B(t) - A'(t)\right] - A(t) = (t-x)\,\omega^2$$
from Eq. (13.2.13), and the function f(x) is
$$f(x) = y_0'(x-a) + y_0 = x$$
from the initial conditions and Eq. (13.2.14). The formal solution of the linear oscillator equation $y'' + \omega^2 y = 0$ is
$$y(x) = f(x) + \int_a^x K(x,t)\,y(t)\,dt = x + \omega^2\int_0^x (t-x)\,y(t)\,dt$$
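The known oscillator solution with these initial conditions is y(x) = sin(ωx)/ω, and it can be checked against the integral equation numerically. This sketch is illustrative (ω chosen arbitrarily, trapezoid rule for the integral):

```python
import math

# Verify that y(x) = sin(w x)/w satisfies
# y(x) = x + w^2 * Integral_0^x (t - x) y(t) dt.
w = 2.0

def y(x):
    return math.sin(w*x)/w

def rhs(x, N=2000):
    h = x/N
    vals = [((i*h) - x)*y(i*h) for i in range(N+1)]
    integral = h*(sum(vals) - 0.5*(vals[0] + vals[-1]))
    return x + w*w*integral

for x in [0.5, 1.0, 2.0]:
    assert abs(rhs(x) - y(x)) < 1e-5
```

The agreement is limited only by the quadrature error, confirming that the Volterra form reproduces the initial-value problem.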
EXERCISE 13.2: Solve the linear oscillator equation $y'' + \omega^2 y = 0$ with boundary conditions $y(0) = 0$, $y(b) = 0$. Hint: Modify the above procedure to account for the use of boundary conditions $y(0) = 0$, $y(b) = 0$ [see Arfken and Weber, 2001, pp. 987–989].

Solution: Integrate $y'' + \omega^2 y = 0$ with respect to x between the lower and upper limits 0 and x, respectively:
$$y'(x) - y'(0) = -\omega^2\int_0^x y\,dx \tag{i}$$
Integrate again:
$$y(x) - y(0) - y'(0)x = -\omega^2\int_0^x\left[\int_0^{x'} y(t)\,dt\right]dx' \tag{ii}$$
Use Eq. (13.2.8) to write
$$\int_0^x\left[\int_0^{x'} y(t)\,dt\right]dx' = \int_0^x\left[\int_t^x dx'\right]y(t)\,dt = \int_0^x (x-t)\,y(t)\,dt \tag{iii}$$
Using Eq. (iii) in Eq. (ii) gives
$$y(x) = y(0) + y'(0)x - \omega^2\int_0^x (x-t)\,y(t)\,dt \tag{iv}$$
Substituting the boundary conditions $y(0) = 0$, $y(b) = 0$ into Eq. (iv) lets us write
$$y(b) = 0 = y(0) + y'(0)b - \omega^2\int_0^b (b-t)\,y(t)\,dt = y'(0)b - \omega^2\int_0^b (b-t)\,y(t)\,dt \tag{v}$$
We solve Eq. (v) for $y'(0)$; thus
$$y'(0) = \frac{\omega^2}{b}\int_0^b (b-t)\,y(t)\,dt \tag{vi}$$
Substituting Eq. (vi) into Eq. (iv) yields
$$y(x) = \frac{\omega^2 x}{b}\int_0^b (b-t)\,y(t)\,dt - \omega^2\int_0^x (x-t)\,y(t)\,dt \tag{vii}$$
Break the interval [0, b] into the two intervals [0, x] and [x, b]:
$$y(x) = \frac{\omega^2 x}{b}\left[\int_0^x (b-t)\,y(t)\,dt + \int_x^b (b-t)\,y(t)\,dt\right] - \omega^2\int_0^x (x-t)\,y(t)\,dt \tag{viii}$$
Combine the first and third integrals on the right-hand side of Eq. (viii) and rearrange:
$$y(x) = \frac{\omega^2 x}{b}\int_x^b (b-t)\,y(t)\,dt + \omega^2\int_0^x\left[\frac{x}{b}(b-t) - (x-t)\right]y(t)\,dt \tag{ix}$$
Use
$$\frac{x}{b}(b-t) - (x-t) = x - \frac{xt}{b} - x + t = \frac{t}{b}(b-x)$$
to simplify Eq. (ix):
$$y(x) = \omega^2\int_x^b \frac{x}{b}(b-t)\,y(t)\,dt + \omega^2\int_0^x \frac{t}{b}(b-x)\,y(t)\,dt \tag{x}$$
The kernel in Eq. (x) can be written as
$$K(x,t) = \begin{cases}\dfrac{t}{b}(b-x), & t < x\\[1ex] \dfrac{x}{b}(b-t), & x < t \le b\end{cases} \tag{xi}$$
so that the solution y(x) has the form
$$y(x) = \omega^2\int_0^b K(x,t)\,y(t)\,dt \tag{xii}$$
Notice that the kernel is symmetric, $K(x,t) = K(t,x)$, and continuous:
$$\left.\frac{t}{b}(b-x)\right|_{t=x} = \frac{x}{b}(b-x) = \left.\frac{x}{b}(b-t)\right|_{t=x}$$
The kernel K(x,t) is a Green's function. The derivative $\partial K(x,t)/\partial t$ of the kernel with respect to t is discontinuous at t = x.

EXERCISE 13.3: Solve the homogeneous Fredholm equation $\phi(x) = \lambda\int_{-1}^{1}(t+x)\,\phi(t)\,dt$ [see Arfken and Weber, 2001, p. 1002].

Solution: Since the equation is homogeneous, we have f(x) = 0 in Eq. (13.3.1), so the resulting equation is $\phi(x) = \lambda\int_a^b K(x,t)\,\phi(t)\,dt$. We construct a separable
kernel by writing $M_1 = 1$, $M_2(x) = x$, $N_1(t) = t$, $N_2 = 1$ in Eq. (13.4.1) so that
$$K(x,t) = \sum_{j=1}^{2} M_j(x)\,N_j(t) = M_1 N_1 + M_2 N_2 = t + x$$
The coefficients $b_i = \int_a^b N_i(x)\,f(x)\,dx$ are $b_1 = 0$, $b_2 = 0$ since f(x) = 0. The set of coefficients $a_{ij} = \int_a^b N_i(x)\,M_j(x)\,dx$ are given by
$$a_{11} = a_{22} = \int_{-1}^{1} x\,dx = \left.\frac{x^2}{2}\right|_{-1}^{1} = 0$$
$$a_{12} = \int_{-1}^{1} x^2\,dx = \left.\frac{x^3}{3}\right|_{-1}^{1} = \frac{2}{3}$$
$$a_{21} = \int_{-1}^{1} dx = x\Big|_{-1}^{1} = 2$$
The determinant $|I - \lambda A| = 0$ for the eigenvalues is
$$\begin{vmatrix} 1-\lambda a_{11} & -\lambda a_{12}\\ -\lambda a_{21} & 1-\lambda a_{22}\end{vmatrix} = \begin{vmatrix} 1 & -\frac{2}{3}\lambda\\ -2\lambda & 1\end{vmatrix} = 1 - \frac{4}{3}\lambda^2 = 0$$
The resulting eigenvalues are
$$\lambda_{\pm} = \pm\sqrt{\frac{3}{4}}$$
Expanding the matrix equation $(I - \lambda A)\,c = b$ gives
$$(1-\lambda a_{11})c_1 - (\lambda a_{12})c_2 = 0$$
$$-(\lambda a_{21})c_1 + (1-\lambda a_{22})c_2 = 0$$
Solving for $c_1$ in terms of $c_2$ provides the following two relations:
$$c_1 - \frac{2}{3}\lambda c_2 = 0 \;\Rightarrow\; c_1 = \frac{2}{3}\lambda c_2$$
$$-2\lambda c_1 + c_2 = 0 \;\Rightarrow\; c_1 = \frac{c_2}{2\lambda}$$
Substituting in values of the eigenvalues gives
$$\lambda_+:\quad c_1 - \sqrt{\frac{3}{4}}\,\frac{2}{3}\,c_2 = 0 \;\Rightarrow\; c_1 = \frac{c_2}{\sqrt{3}}$$
$$\lambda_-:\quad c_1 + \sqrt{\frac{3}{4}}\,\frac{2}{3}\,c_2 = 0 \;\Rightarrow\; c_1 = -\frac{c_2}{\sqrt{3}}$$
Combining relations gives
$$c_1 = \pm\frac{c_2}{\sqrt{3}}$$
Alternatively, solving for $c_2$ in terms of $c_1$ results in the relations
$$c_2 = \pm\sqrt{3}\,c_1$$
If we set $c_1 = 1$, then we find $c_2 = \pm\sqrt{3}$, and the solutions for both eigenvalues are determined using Eq. (13.4.3):
$$\lambda_1 = \lambda_+ = \frac{\sqrt{3}}{2}:\quad \phi_1 = \lambda_1\sum_{j=1}^{2}c_j M_j(x) = \frac{\sqrt{3}}{2}\left(1 + \sqrt{3}\,x\right)$$
$$\lambda_2 = \lambda_- = -\frac{\sqrt{3}}{2}:\quad \phi_2 = \lambda_2\sum_{j=1}^{2}c_j M_j(x) = -\frac{\sqrt{3}}{2}\left(1 - \sqrt{3}\,x\right)$$
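The eigenfunctions can be verified directly against the integral equation. The sketch below (not from the text) checks φ₁ with a midpoint-rule quadrature; the φ₂ case works identically with the opposite signs.

```python
import math

# Check that phi(x) = lam*(1 + sqrt(3) x), lam = sqrt(3)/2, satisfies
# phi(x) = lam * Integral_{-1}^{1} (t + x) phi(t) dt.
lam = math.sqrt(3)/2

def phi(x):
    return lam*(1 + math.sqrt(3)*x)

def fredholm_rhs(x, N=20000):
    h = 2.0/N
    s = sum(((-1 + (i + 0.5)*h) + x)*phi(-1 + (i + 0.5)*h) for i in range(N))
    return lam*s*h

for x in [-0.7, 0.0, 0.4, 1.0]:
    assert abs(fredholm_rhs(x) - phi(x)) < 1e-6
```

Since the integrand is a quadratic polynomial in t, the quadrature error is negligible and the eigenfunction property holds at every sample point.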
EXERCISE 13.4: Use the Laplace transform to solve the integral equation
$$V_0\cos\omega t = R\,i(t) + \frac{1}{C}\int_0^t i(\tau)\,d\tau$$
for an RC circuit. An RC circuit consists of a resistor with resistance R and a capacitor with capacitance C in series with a time-varying voltage $V(t) = V_0\cos\omega t$ that has frequency ω.

Solution: Rearrange the integral equation to the form
$$i(t) = \frac{V_0}{R}\cos\omega t - \frac{1}{RC}\int_0^t i(\tau)\,d\tau \tag{i}$$
Comparing Eq. (i) to Eq. (13.5.1) lets us write
$$\phi(t) = i(t), \qquad f(t) = \frac{V_0}{R}\cos\omega t, \qquad \lambda = -\frac{1}{RC} \tag{ii}$$
To use Eq. (13.5.5), we evaluate the Laplace transform
$$\mathcal{L}\{f(t)\} = \frac{V_0}{R}\,\mathcal{L}\{\cos\omega t\} = \frac{V_0}{R}\,\frac{s}{s^2+\omega^2} \tag{iii}$$
Substituting Eqs. (ii) and (iii) into Eq. (13.5.5) gives
$$i(t) = \mathcal{L}^{-1}\left\{\frac{1}{1-\dfrac{\lambda}{s}}\,\frac{V_0}{R}\,\frac{s}{s^2+\omega^2}\right\} = \frac{V_0}{R}\,\mathcal{L}^{-1}\left\{\frac{s^2}{\left(s+\frac{1}{RC}\right)(s^2+\omega^2)}\right\} \tag{iv}$$
The partial fraction expansion of the s-dependent term is
$$\frac{s^2}{\left(s+\frac{1}{RC}\right)(s^2+\omega^2)} = \frac{a_1}{s+\frac{1}{RC}} + \frac{a_2 s + a_3}{s^2+\omega^2} = \frac{a_1(s^2+\omega^2) + (a_2 s + a_3)\left(s+\frac{1}{RC}\right)}{\left(s+\frac{1}{RC}\right)(s^2+\omega^2)} \tag{v}$$
where $\{a_1, a_2, a_3\}$ are expansion coefficients. Multiplying Eq. (v) by the denominator of the left-hand side lets us expand Eq. (v) in powers of s:
$$s^2 = a_1 s^2 + a_1\omega^2 + a_2 s^2 + \frac{a_2 s}{RC} + a_3 s + \frac{a_3}{RC} \tag{vi}$$
Equating the coefficients of the powers of s gives
$$s^2:\quad a_1 + a_2 = 1 \;\Rightarrow\; a_1 = 1 - a_2$$
$$s^1:\quad \frac{a_2}{RC} + a_3 = 0 \;\Rightarrow\; a_2 = -RC\,a_3$$
$$s^0:\quad a_1\omega^2 + \frac{a_3}{RC} = 0 \;\Rightarrow\; a_3 = -a_1 RC\,\omega^2 \tag{vii}$$
Solving Eq. (vii) for $\{a_1, a_2, a_3\}$ lets us write
$$a_1 = \frac{1}{1+(RC\omega)^2}, \qquad a_2 = 1 - a_1 = \frac{(RC\omega)^2}{1+(RC\omega)^2}, \qquad a_3 = \frac{-RC\,\omega^2}{1+(RC\omega)^2} \tag{viii}$$
Substituting $\{a_1, a_2, a_3\}$ from Eq. (viii) into Eq. (v) and combining the result with Eq. (iv) gives
$$i(t) = \frac{V_0}{R}\,\frac{1}{1+(RC\omega)^2}\,\mathcal{L}^{-1}\left\{\frac{1}{s+\frac{1}{RC}} + \frac{(RC\omega)^2 s - RC\,\omega^2}{s^2+\omega^2}\right\} \tag{ix}$$
The inverse Laplace transform on the right-hand side of Eq. (ix) can be factored to yield
$$i(t) = \frac{V_0}{R}\,\frac{1}{1+(RC\omega)^2}\left[\mathcal{L}^{-1}\left\{\frac{1}{s+\frac{1}{RC}}\right\} + \mathcal{L}^{-1}\left\{\frac{(RC\omega)^2 s}{s^2+\omega^2}\right\} - \mathcal{L}^{-1}\left\{\frac{RC\omega\,\omega}{s^2+\omega^2}\right\}\right] \tag{x}$$
Further factoring of constants gives
$$i(t) = \frac{V_0}{R}\,\frac{1}{1+(RC\omega)^2}\left[\mathcal{L}^{-1}\left\{\frac{1}{s+\frac{1}{RC}}\right\} + (RC\omega)^2\,\mathcal{L}^{-1}\left\{\frac{s}{s^2+\omega^2}\right\} - RC\omega\,\mathcal{L}^{-1}\left\{\frac{\omega}{s^2+\omega^2}\right\}\right] \tag{xi}$$
The inverse Laplace transforms in Eq. (xi) are
$$\mathcal{L}^{-1}\left\{\frac{1}{s+\frac{1}{RC}}\right\} = e^{-t/RC}, \qquad \mathcal{L}^{-1}\left\{\frac{s}{s^2+\omega^2}\right\} = \cos\omega t, \qquad \mathcal{L}^{-1}\left\{\frac{\omega}{s^2+\omega^2}\right\} = \sin\omega t \tag{xii}$$
Substituting Eq. (xii) into Eq. (xi) gives
$$i(t) = \frac{V_0}{R}\,\frac{1}{1+(RC\omega)^2}\left[e^{-t/RC} + (RC\omega)^2\cos\omega t - RC\omega\sin\omega t\right] \tag{xiii}$$
Equation (xiii) is the solution obtained by Jordan and Smith [1994, p. 392].
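Equation (xiii) can also be confirmed by substituting it back into the original integral equation. The sketch below is illustrative (the circuit values are arbitrary, and the integral is approximated by the trapezoid rule):

```python
import math

# Check Eq. (xiii) against V0 cos(w t) = R i(t) + (1/C) Integral_0^t i(tau) dtau.
R, C, V0, w = 2.0, 0.5, 1.0, 3.0

def i(t):
    rcw = R*C*w
    return (V0/R)/(1 + rcw**2)*(math.exp(-t/(R*C)) + rcw**2*math.cos(w*t) - rcw*math.sin(w*t))

def residual(t, N=4000):
    h = t/N
    vals = [i(k*h) for k in range(N+1)]
    integral = h*(sum(vals) - 0.5*(vals[0] + vals[-1]))
    return V0*math.cos(w*t) - (R*i(t) + integral/C)

for t in [0.5, 1.0, 3.0]:
    assert abs(residual(t)) < 1e-5
```

Note that i(0) = V₀/R, as expected physically: at t = 0 the capacitor is uncharged and the full source voltage appears across the resistor.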
S14
CALCULUS OF VARIATIONS
EXERCISE 14.1: Show that Eq. (14.1.4) implies that (@x=@a)jtt21 ¼ 0. Solution: All of the parameterized paths of integration x(t, a) ¼ x(t, 0) þ 1(a)h(t) must equal x(t, 0) at the end points {t1 , t2 }. This implies that xðt1 ; aÞ ¼ xðt1 ; 0Þ so that 1ðaÞ hðt1 Þ ¼ 0 xðt2 ; aÞ ¼ xðt2 ; 0Þ so that 1ðaÞ hðt2 Þ ¼ 0
(i)
Equation (i) shows that h(t1 ) ¼ h(t2 ) ¼ 0 for an arbitrary function 1(a) of the parameter a. The derivative (@x=@a)jtt21 is @x t2 @ 1ðaÞt2 @ 1ðaÞ @ 1 ð aÞ hðt1 Þ ¼0 ¼ h ð t Þ ¼ hðt2 Þ @at1 @a t1 @a @a because h(t1 ) ¼ h(t2 ) ¼ 0.
EXERCISE 14.2: What is the shortest distance between two points in a plane? Hint: The element of arc length in a plane is $ds = \sqrt{dx^2+dy^2} = \sqrt{1+(dy/dx)^2}\,dx$.

Solution: The shortest distance between two points in a plane can be determined using the calculus of variations. The element of arc length in a plane is
$$ds = \sqrt{dx^2+dy^2} = \sqrt{1+\left(\frac{dy}{dx}\right)^2}\,dx \tag{i}$$
The total length of the curve is the summation of arc lengths between end points of the curve. For infinitesimal arc length, the summation is given by the integral
$$L = \int_{s_1}^{s_2} ds = \int_{x_1}^{x_2}\sqrt{1+\left(\frac{dy}{dx}\right)^2}\,dx \tag{ii}$$
The shortest distance between two points in a plane is determined as the extremum of Eq. (ii). The function $f_1$ for use in Eq. (14.1.8) is determined from Eq. (ii) to be
$$f_1 = \sqrt{1+\left(\frac{dy}{dx}\right)^2} = \sqrt{1+\dot{y}^2}, \qquad \dot{y} = \frac{dy}{dx}$$
The condition for an extremum is determined using Eq. (14.1.8), which has the form
$$\frac{\partial f_1}{\partial y} - \frac{d}{dx}\frac{\partial f_1}{\partial\dot{y}} = 0$$
for the variables being used in this example. Calculating the derivatives gives
$$\frac{\partial f_1}{\partial y} = \frac{\partial\sqrt{1+\dot{y}^2}}{\partial y} = 0, \qquad \frac{d}{dx}\frac{\partial f_1}{\partial\dot{y}} = \frac{d}{dx}\left(\frac{\partial\sqrt{1+\dot{y}^2}}{\partial\dot{y}}\right) = \frac{d}{dx}\left(\frac{\dot{y}}{\sqrt{1+\dot{y}^2}}\right) \tag{iii}$$
Substituting Eq. (iii) into Eq. (14.1.8) gives
$$\frac{d}{dx}\left(\frac{\dot{y}}{\sqrt{1+\dot{y}^2}}\right) = 0 \qquad\text{or}\qquad \frac{\dot{y}}{\sqrt{1+\dot{y}^2}} = k_1 \tag{iv}$$
where $k_1$ is a constant. Equation (iv) is satisfied if $\dot{y} = k_2$, where $k_2$ is a constant that is related to $k_1$ by $k_2/\sqrt{1+k_2^2} = k_1$ from Eq. (iv). The equation $\dot{y} = k_2 = dy/dx$ can be integrated to give an equation for a straight line:
$$y = k_2 x + y_0 \tag{v}$$
where $y_0$ is the value of y at x = 0. The shortest distance between two points in a plane is a straight line.

EXERCISE 14.3: The kinetic energy and potential energy of a harmonic oscillator (HO) in one space dimension are $T_{HO} = \frac{1}{2}m\dot{q}^2$ and $V_{HO} = \frac{1}{2}\kappa q^2$, respectively, where m is the constant mass of the oscillating object and κ is the spring constant. Write the Lagrangian for the harmonic oscillator and use it in the Euler–Lagrange equation to determine the force equation for the harmonic oscillator.
Solution: The Lagrangian for the harmonic oscillator is the difference between kinetic energy and potential energy, or
$$L_{HO} = T_{HO} - V_{HO} = \frac{1}{2}m\dot{q}^2 - \frac{1}{2}\kappa q^2 \tag{i}$$
The force on the oscillating mass is
$$F_{HO} = \frac{\partial L_{HO}}{\partial q} = -\kappa q \tag{ii}$$
and the momentum is
$$p_{HO} = \frac{\partial L_{HO}}{\partial\dot{q}} = m\dot{q} \tag{iii}$$
Substituting the derivatives in Eqs. (ii) and (iii) into the Euler–Lagrange equation gives
$$\frac{\partial L_{HO}}{\partial q} - \frac{d}{dt}\frac{\partial L_{HO}}{\partial\dot{q}} = -\kappa q - \frac{d}{dt}(m\dot{q}) = -\kappa q - m\ddot{q} = 0 \tag{iv}$$
The resulting force equation for the harmonic oscillator is obtained by rearranging Eq. (iv) to yield
$$m\ddot{q} = m\frac{d^2q}{dt^2} = -\kappa q \tag{v}$$
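A known solution of the force equation is q(t) = cos(ω₀t) with ω₀ = √(κ/m), and it can be checked numerically. This sketch is illustrative (constants chosen arbitrarily), using a central-difference second derivative:

```python
import math

# Check that q(t) = cos(sqrt(kappa/m) t) satisfies m q'' = -kappa q.
m_mass, kappa = 2.0, 5.0
w0 = math.sqrt(kappa/m_mass)

def q(t):
    return math.cos(w0*t)

h = 1e-4
for t in [0.0, 0.7, 2.0]:
    qdd = (q(t+h) - 2*q(t) + q(t-h))/h**2
    assert abs(m_mass*qdd + kappa*q(t)) < 1e-4
```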
EXERCISE 14.4: Solve Eq. (14.2.18). Hint: Perform a variable substitution by introducing a parameter θ that lets you substitute $z = a\sin^2\theta = (a/2)(1-\cos 2\theta)$ for z.

Solution: We solve Eq. (14.2.18) by introducing a parameter θ that lets us write
$$z = a\sin^2\theta = \frac{a}{2}(1-\cos 2\theta) \tag{i}$$
Substituting Eq. (i) into Eq. (14.2.18) gives
$$\frac{dz}{dx} = \sqrt{\frac{a-z}{z}} = \cot\theta \tag{ii}$$
The integral of Eq. (ii) is
$$x = \int\frac{dz}{\cot\theta} = \int\tan\theta\,dz \tag{iii}$$
The differential dz can be written in terms of θ as
$$dz = 2a\sin\theta\cos\theta\,d\theta \tag{iv}$$
where we have taken the differential of Eq. (i). Substituting Eq. (iv) into Eq. (iii) gives
$$x = \int\tan\theta\,dz = \int 2a\tan\theta\sin\theta\cos\theta\,d\theta = 2a\int\sin^2\theta\,d\theta = \frac{a}{2}(2\theta-\sin 2\theta) + K \tag{v}$$
where K is an integration constant. Collecting Eqs. (i) and (v) lets us write the parameterized solution of the brachistochrone problem as
$$x = \frac{a}{2}(2\theta-\sin 2\theta) + K, \qquad z = \frac{a}{2}(1-\cos 2\theta) \tag{vi}$$
The equations in Eq. (vi) are the equations for a cycloid. The constants of integration {a, K} are determined by fitting Eq. (vi) to the end points.

EXERCISE 14.5: Solve Eq. (14.3.7) for the function of two dependent variables $f_2(x_1, x_2, \dot{x}_1, \dot{x}_2, t) = \frac{m}{2}(\dot{x}_1^2+\dot{x}_2^2) - \frac{k}{2}(x_1+x_2)^2$, where {m, k} are constants.

Solution: Equation (14.3.7) for two dependent variables is
$$\frac{\partial f_2}{\partial x_i} - \frac{d}{dt}\frac{\partial f_2}{\partial\dot{x}_i} = 0 \qquad\text{for } i = 1, 2$$
The derivatives are
$$\frac{\partial f_2}{\partial x_1} = -k(x_1+x_2), \qquad \frac{\partial f_2}{\partial x_2} = -k(x_1+x_2)$$
and
$$\frac{d}{dt}\frac{\partial f_2}{\partial\dot{x}_1} = m\ddot{x}_1, \qquad \frac{d}{dt}\frac{\partial f_2}{\partial\dot{x}_2} = m\ddot{x}_2$$
The two equations resulting from Eq. (14.3.7) are
$$m\ddot{x}_1 + k(x_1+x_2) = 0, \qquad m\ddot{x}_2 + k(x_1+x_2) = 0$$

EXERCISE 14.6: Find the Euler–Lagrange equations with constraints for the function of three dependent variables $f_3(r,\theta,z) = \frac{m}{2}\left[\dot{r}^2 + r^2\dot{\theta}^2 + \dot{z}^2\right] - mgz$ subject to the constraint $r^2 = cz$, where {m, g, c} are constants. Note that $f_3$ is expressed in cylindrical coordinates {r, θ, z}.
Solution: In this case we have n = 3 dependent variables and m = 1 constraint. The constraint may be written in the form
$$g(r,\theta,z) = r^2 - cz = 0 \tag{i}$$
The derivatives corresponding to Eq. (14.4.2) are
$$\frac{\partial g}{\partial r} = 2r, \qquad \frac{\partial g}{\partial\theta} = 0, \qquad \frac{\partial g}{\partial z} = -c \tag{ii}$$
The Euler–Lagrange equations with one undetermined multiplier are
$$\frac{\partial f_3}{\partial r} - \frac{d}{dt}\frac{\partial f_3}{\partial\dot{r}} + \lambda\frac{\partial g}{\partial r} = 0$$
$$\frac{\partial f_3}{\partial\theta} - \frac{d}{dt}\frac{\partial f_3}{\partial\dot{\theta}} + \lambda\frac{\partial g}{\partial\theta} = 0$$
$$\frac{\partial f_3}{\partial z} - \frac{d}{dt}\frac{\partial f_3}{\partial\dot{z}} + \lambda\frac{\partial g}{\partial z} = 0 \tag{iii}$$
Substituting $f_3(r,\theta,z) = \frac{m}{2}\left[\dot{r}^2 + r^2\dot{\theta}^2 + \dot{z}^2\right] - mgz$ and Eq. (ii) into Eq. (iii) gives
$$m\ddot{r} - mr\dot{\theta}^2 - 2r\lambda = 0$$
$$mr^2\ddot{\theta} + 2mr\dot{r}\dot{\theta} = 0$$
$$m\ddot{z} + mg + c\lambda = 0 \tag{iv}$$
Equation (iv) is the set of equations of motion for a constant mass m sliding in a gravitational field with constant acceleration g on the inner surface of a paraboloid $r^2 = cz$ [McCuskey, 1959, p. 65].

S15
TENSOR ANALYSIS
EXERCISE 15.1: Compare the set of equations in Eq. (15.1.1) with the set of equations in Eq. (15.1.2). What are the functions $\{f^1, f^2, f^3\}$? Use these functions in Eq. (15.1.3) to calculate the components of the displacement vector in the primed (rotated) coordinate system.

Solution: Comparing Eq. (15.1.1) with Eq. (15.1.2) shows that
$$x'^1 = f^1(x^1, x^2, x^3) = x^1\cos\theta + x^2\sin\theta$$
$$x'^2 = f^2(x^1, x^2, x^3) = -x^1\sin\theta + x^2\cos\theta$$
$$x'^3 = f^3(x^1, x^2, x^3) = x^3$$
The partial derivatives $\partial x'^i/\partial x^j$ are
$$\frac{\partial x'^1}{\partial x^1} = \cos\theta, \qquad \frac{\partial x'^1}{\partial x^2} = \sin\theta, \qquad \frac{\partial x'^1}{\partial x^3} = 0$$
$$\frac{\partial x'^2}{\partial x^1} = -\sin\theta, \qquad \frac{\partial x'^2}{\partial x^2} = \cos\theta, \qquad \frac{\partial x'^2}{\partial x^3} = 0$$
$$\frac{\partial x'^3}{\partial x^1} = 0, \qquad \frac{\partial x'^3}{\partial x^2} = 0, \qquad \frac{\partial x'^3}{\partial x^3} = 1$$
since the rotation angle θ is a constant with respect to the unprimed coordinates. The components of the displacement vector in the primed (rotated) coordinate system are
$$dx'^1 = \cos\theta\,dx^1 + \sin\theta\,dx^2, \qquad dx'^2 = -\sin\theta\,dx^1 + \cos\theta\,dx^2, \qquad dx'^3 = dx^3$$

EXERCISE 15.2: Show that the set of equations in Eq. (15.1.9) is a special case of Eq. (15.1.10). Hint: Apply the limit $c \to \infty$ to the set of equations in Eq. (15.1.10).

Solution: Derive Eq. (15.1.9) from Eq. (15.1.10) by taking the limit $c \to \infty$:
$$x' = \frac{x - vt}{\sqrt{1-(v/c)^2}} \to x - vt \qquad\text{because } v/c \to 0$$
$$y' = y, \qquad z' = z$$
$$t' = \frac{t - \dfrac{vx}{c^2}}{\sqrt{1-(v/c)^2}} \to t \qquad\text{because } v/c \to 0 \text{ and } v/c^2 \to 0$$

EXERCISE 15.3: Write the transformation equations in Table 15.1 without the summation convention, that is, explicitly show the summations for each equation.

Solution: Table 15.1 becomes
Second-Rank Tensor — Transformation Equation

Contravariant: $\displaystyle A'^{ij} = \sum_{a=1}^{n}\sum_{b=1}^{n}\frac{\partial x'^i}{\partial x^a}\frac{\partial x'^j}{\partial x^b}A^{ab}$

Covariant: $\displaystyle A'_{ij} = \sum_{a=1}^{n}\sum_{b=1}^{n}\frac{\partial x^a}{\partial x'^i}\frac{\partial x^b}{\partial x'^j}A_{ab}$

Mixed: $\displaystyle A'^{i}_{\;j} = \sum_{a=1}^{n}\sum_{b=1}^{n}\frac{\partial x'^i}{\partial x^a}\frac{\partial x^b}{\partial x'^j}A^{a}_{\;b}$
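For the rotation of Exercise 15.1, the contravariant rule is equivalent to the matrix statement A' = R A Rᵀ. The sketch below (arbitrary tensor components, illustrative angle) checks the explicit double sum against the matrix products:

```python
import math

# Contravariant transformation under the rotation of Exercise 15.1:
# A'^{ij} = sum_{a,b} (dx'^i/dx^a)(dx'^j/dx^b) A^{ab}, i.e. A' = R A R^T.
theta = 0.6
R = [[math.cos(theta),  math.sin(theta), 0.0],
     [-math.sin(theta), math.cos(theta), 0.0],
     [0.0,              0.0,             1.0]]
A = [[1.0, 2.0, 0.5],
     [0.0, 3.0, 1.0],
     [4.0, 1.0, 2.0]]   # arbitrary second-rank tensor components

n = 3
Aprime = [[sum(R[i][a]*R[j][b]*A[a][b] for a in range(n) for b in range(n))
           for j in range(n)] for i in range(n)]

# Same result via explicit matrix products A' = (R A) R^T
RA  = [[sum(R[i][k]*A[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
ARt = [[sum(RA[i][k]*R[j][k] for k in range(n)) for j in range(n)] for i in range(n)]
for i in range(n):
    for j in range(n):
        assert abs(Aprime[i][j] - ARt[i][j]) < 1e-12
```

Because R is orthogonal, the trace of A is unchanged by the transformation, which is a convenient extra sanity check.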
EXERCISE 15.4: Use Eq. (15.2.8) to write the transformation equation for a tensor with two contravariant indices and a single covariant index.

Solution: In this case n = 2 and m = 1. The corresponding transformation equation is
$$A'^{a_1 a_2}_{\;\;\;b_1} = \frac{\partial x'^{a_1}}{\partial x^{c_1}}\,\frac{\partial x'^{a_2}}{\partial x^{c_2}}\,\frac{\partial x^{d_1}}{\partial x'^{b_1}}\,A^{c_1 c_2}_{\;\;\;d_1}$$
EXERCISE 15.5: Write Eq. (15.3.1) using the summation convention.

Solution: Equation (15.3.1) is written as
$$\vec{x}\cdot\vec{x} = \tilde{x}\,\vec{x} = (x^1)^2 + (x^2)^2 + (x^3)^2 = x^i x_i = g_{ij}x^i x^j, \qquad x^j = x_j$$
using the summation convention.

EXERCISE 15.6: Show that the determinant of the metric tensor represented by the matrix in Eq. (15.3.2) is nonzero.

Solution: The determinant of the matrix
$$[g_{ij}] = \begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{bmatrix}$$
is
$$\left|g_{ij}\right| = \begin{vmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{vmatrix} = 1$$
which is nonzero as expected.

EXERCISE 15.7: Show that Eq. (15.3.10) reduces to the famous result $E \approx mc^2$ when the momentum of the mass is small compared to the mass of the object.

Solution: Rearrange Eq. (15.3.10) so that total energy can be calculated from mass and momentum:
$$E^2 = m^2c^4 + (p^1)^2c^2 + (p^2)^2c^2 + (p^3)^2c^2 \tag{i}$$
We assume the momentum is small enough to be considered negligible compared to the mass of the object. In this case Eq. (i) reduces to
$$E \approx mc^2 \tag{ii}$$
If our reference frame is moving with the object, the object is at rest and Eq. (ii) becomes the equality
$$E = mc^2 = m_0 c^2 \tag{iii}$$
where $m_0$ is called the rest mass of the object.

EXERCISE 15.8: Calculate the contraction of the Kronecker delta function $\delta^i_j$ in n dimensions.

Solution: The contraction d of the Kronecker delta function $\delta^i_j$ in n dimensions is obtained by summing the components of the Kronecker delta function when the contravariant and covariant indices are equal; thus $d = \sum_{i=1}^{n}\delta^i_i = n$. Notice that we have explicitly written the summation and are not using the summation convention.

EXERCISE 15.9: Calculate the line element for the metric tensors
$$[g_{ij}] = \begin{bmatrix}1 & 0\\ 0 & -1\end{bmatrix} \qquad\text{and}\qquad [g_{ij}] = \begin{bmatrix}1-\dfrac{2m}{x^1} & 0\\[1ex] 0 & -\left(1-\dfrac{2m}{x^1}\right)^{-1}\end{bmatrix}$$
Assume the parameter m is a constant and $x^1$ is component 1 in a two-dimensional space with the displacement vector $dx^i$.

Solution: The line element for the metric
$$[g_{ij}] = \begin{bmatrix}1 & 0\\ 0 & -1\end{bmatrix}$$
is
$$ds^2 = \sum_{i=1}^{2}\sum_{j=1}^{2} g_{ij}\,dx^i\,dx^j = (dx^1)^2 - (dx^2)^2$$
The line element for the metric
$$[g_{ij}] = \begin{bmatrix}1-\dfrac{2m}{x^1} & 0\\[1ex] 0 & -\left(1-\dfrac{2m}{x^1}\right)^{-1}\end{bmatrix}$$
is
$$ds^2 = \sum_{i=1}^{2}\sum_{j=1}^{2} g_{ij}\,dx^i\,dx^j = \left(1-\frac{2m}{x^1}\right)(dx^1)^2 - \left(1-\frac{2m}{x^1}\right)^{-1}(dx^2)^2$$
331
EXERCISE 15.10: Show that ½ij, k þ ½kj, i ¼ @gik =@x j , where ½ is the Christoffel symbol of the first kind. Solution: From the definition of the Christoffel symbol we have
1 @gik @g jk @gij 1 @gki @g ji @gkj þ i k þ þ k i ½ij, k þ ½kj, i ¼ 2 @x j 2 @xj @x @x @x @x
(i)
Applying the symmetry relation gab ¼ gba to the second term on the right-hand side of Eq. (i) gives
1 @gik @g jk @gij 1 @gik @gij @g jk ½ij, k þ ½kj, i ¼ (ii) þ i k þ þ 2 @x j 2 @x j @x k @x i @x @x Combining like terms in Eq. (ii) gives the desired result ½ij,k þ ½kj,i ¼ @gik =@xj .
S16
PROBABILITY
EXERCISE 16.1: Calculate $P_{5,4}$, $C_{5,2}$, $C_{5,3}$, $C_{6,6}$, and $C_{6,0}$.

Solution: $P_{5,4} = 120$, $C_{5,2} = 10$, $C_{5,3} = 10$, $C_{6,6} = 1$, and $C_{6,0} = 1$.

EXERCISE 16.2: Four fair coins are each flipped once. Use a probability tree diagram to determine the probability that at least two coins will show heads.

Solution: Prepare a probability tree that branches on heads (H) or tails (T) for coins #1 through #4, and mark each complete branch that contains at least two heads. (The probability tree figure is not reproduced here.) There are $2^4 = 16$ possible outcomes. The number of outcomes that have at least two heads is found by counting the marked branches. The probability is 11/16.

EXERCISE 16.3: Two unbiased six-sided dice, one red and one green, are tossed and the number of dots appearing on their upper faces are observed. The results of a dice roll can be expressed as the ordered pair (g, r), where g denotes the green die and r denotes the red die.

(a) What is the sample space S?
(b) What is the probability of throwing a seven?
(c) What is the probability of throwing a seven or a ten?
(d) What is the probability that the red die shows a number less than or equal to three and the green die shows a number greater than or equal to five?
(e) What is the probability that the green die shows a one, given that the sum of the numbers on the two dice is less than four?
Solution: (a) The sample space S consists of all possible dice rolls. In this case, the sample space has 36 elements given by S = {(1,1), (1,2), ..., (6,5), (6,6)}.
(b) Let A denote the set of all dice rolls that add up to seven. We express A as the union of ordered pairs {(1,6)} ∪ {(2,5)} ∪ {(3,4)} ∪ {(4,3)} ∪ {(5,2)} ∪ {(6,1)}. The probability of A is P(A) = (1/36) + (1/36) + (1/36) + (1/36) + (1/36) + (1/36) = 1/6.
(c) Let A denote the event "throwing a seven" and B denote the event "throwing a ten." Then the probability of A is P(A) = 1/6, and the probability of B is P(B) = 1/12. The probability of throwing a seven or a ten is P(A ∪ B) = P(A) + P(B) = (1/6) + (1/12) = 1/4.
(d) Let C denote the event "red die shows a number ≤ 3" and D denote the event "green die shows a number ≥ 5." The elements of the set C ∩ D are {(1,5), (2,5), (3,5), (1,6), (2,6), (3,6)}. The probability P(C ∩ D) is the number of elements in C ∩ D divided by the total number of elements; thus P(C ∩ D) = 6/36.
The independence of the events is evaluated as follows: The probability P(C) of event C and the probability P(D) of event D are P(C) = 18/36 and P(D) = 12/36, respectively. We have the product P(C)P(D) = (1/2)(1/3) = 1/6 = P(C ∩ D), which demonstrates that events C and D are independent.
(e) Let A denote the event "green die shows 1" and B denote the event "sum of numbers on dice < 4." The elements of event A are {(1,1), (1,2), (1,3), (1,4), (1,5), (1,6)}, and the elements of event B are {(1,1), (1,2), (2,1)}. The intersection of elements is A ∩ B = {(1,1), (1,2)}. Using the multiplicative rule, we find P(A|B) = P(A ∩ B)/P(B) = (2/36)/(3/36) = 2/3.
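The dice probabilities above can be confirmed by brute-force enumeration of the 36 equally likely rolls. The sketch below is illustrative and uses exact rational arithmetic:

```python
from fractions import Fraction
from itertools import product

# Enumerate the 36 equally likely rolls (g, r) = (green, red).
rolls = list(product(range(1, 7), repeat=2))
P = lambda event: Fraction(sum(1 for x in rolls if event(x)), len(rolls))

assert P(lambda gr: gr[0] + gr[1] == 7) == Fraction(1, 6)              # part (b)
assert P(lambda gr: gr[0] + gr[1] in (7, 10)) == Fraction(1, 4)        # part (c)
assert P(lambda gr: gr[1] <= 3 and gr[0] >= 5) == Fraction(6, 36)      # part (d)

# part (e): conditional probability P(green = 1 | sum < 4)
pB  = P(lambda gr: gr[0] + gr[1] < 4)
pAB = P(lambda gr: gr[0] == 1 and gr[0] + gr[1] < 4)
assert pAB/pB == Fraction(2, 3)
```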
S17
PROBABILITY DISTRIBUTIONS
EXERCISE 17.1: What is the probability that x, y are in the intervals 1 x 2 and 1 y 3 for the joint probability distribution function F(x, y) ¼ (1 ex )(1 ey ) for x, y . 0: F(x, y) has the properties F(0, 0) ¼ 0 and F(1, 1) ¼ 1, and r(x, y) ¼ ex ey . Solution: The probability is ð2 ð3
ð2 ð3 rðx, yÞdx dy ¼
P(1 x 2, 1 y 3) ¼ 1
1
ð2
x
ð3
e dx
¼ 1
¼e
5
1
ex ey dx dy
1
ey dy ¼ e2 þ e1 e3 þ e1
1
e
3
e4 þ e2
EXERCISE 17.2: Use the conditional probability r(xjy) ¼ (x þ 2y)=( 12 þ 2y) defined in the intervals 0 x 1 and 0 y 1 to determine the probability that x will have a value less than 12 given y ¼ 12. Solution: The probability that x will have a value less than 12 given y ¼ 12 is the cumulative probability ð 1=2 1 1 1 þ 2y dx F(xjy) ¼ F ¼ (x þ 2y) 2 2 2 0 ð 1=2 2(x þ 1) 5 ¼ dx ¼ 3 12 0 EXERCISE 17.3: Determine the statistical independence of the variables x and y for the joint probability density r(x, y) ¼ x2 y2 =81 by calculating the marginal
probability densities. The variables x and y are defined in the intervals 0 , x , 3 and 0 , y , 3. Note the difference between this exercise and the intervals defined in the preceding example. Solution: The marginal density for x is ð3
x2 y2 1 dy ¼ x2 9 0 81
r(x) ¼ and the marginal density for y is
ð3
x2 y2 1 dx ¼ y2 9 0 81
r( y) ¼
Taking the product of marginal densities gives 2 1 x2 y2 ¼ r(x, y) r(x) r( y) ¼ 9 The equality of the product of marginal probability densities and the joint probability density demonstrates that x and y are independent. EXERCISE 17.4: Calculate the first moment, or mean, of the discrete distribution f (x) ¼ with the binomial coefficient
N x
N p x (1 p) Nx x
for 0 x N, and p a constant.
Solution: The distribution f (x) is known as the binomial distribution. The first moment or mean of f (x) is given by the summation
x ¼
N X
x f (x) ¼
x¼0
N X N x p x (1 p) Nx x x¼0
To evaluate the mean, we first note that f (x) is normalized, that is,
1¼
N X x¼0
f (x) ¼
N X N x¼0
x
p x (1 p)Nx
The derivative of the normalization condition with respect to p gives N N X @ X N 0¼ ½xp x1 (1 p)Nx (N x) p x (1 p)Nx1 f (x) ¼ x @p x¼0 x¼0 Rearranging gives N X
N
x¼0
x
! ½xp
x1
(1 p)
Nx
¼
N N X x¼0
¼N
! ½(N x) p x (1 p) Nx1
x
N X
N
x¼0
x
N X
N
x¼0
x
! ½ p x (1 p) Nx1 ! ½xp x (1 p) Nx1
Multiplying by p(1 2 p) and rearranging further gives N N X X N N x Nx x Nx p x ð1 pÞ Nx ½(1 p) p (1 p) x þ pp (1 p) ¼ Np x x x¼0 x¼0 The left-hand side can be simplified by canceling terms in the square bracket to obtain N N X X N N ½ p x (1 p) Nx x ½ p x (1 p) Nx ¼ x ¼ Np x x x¼0 x¼0 where we have used the definition of the mean. If we now apply the normalization condition to the right-hand side, we find the simple result x ¼ Np as the mean of the binomial distribution. EXERCISE 17.5: Suppose we are given the joint probability density r(x, y, z) ¼ (x þ y)ez for three random variables defined in the intervals 0 , x , 1, 0 , y , 1, and 0 , z. (a) Find the joint probability density r(x, y). (b) Find the marginal probability density r( y).
336
MATH REFRESHER FOR SCIENTISTS AND ENGINEERS
Solution: (a) The joint probability density r(x; y) is found by integrating over the variable z; thus ð1
ð1 r(x, y, z) dz ¼
r(x, y) ¼ 0
(x þ y) ez dz ¼ (x þ y)
0
ð1
ez dz ¼ x þ y
0
(b) The marginal probability density r( y) is found by integrating the joint probability density r(x; y) over the variable x; thus ð1 r(y) ¼
ð1 p(x þ y) dx ¼
0
(x þ y) dx ¼ y þ 0
1 2
EXERCISE 17.6: Determine the expectation value E(x) using the probability density for the normal distribution. Solution: The expectation value is ð1 E(x) ¼
1 xr(x) dx ¼ pffiffiffiffiffiffi 2p s 1
(x m) 2 dx x exp 2s2 1
ð1
The integral is solved as follows. Define the new variable xm dx z ¼ pffiffiffi , dz ¼ pffiffiffi 2s 2s so that the integral becomes ð 1 pffiffiffi pffiffiffi 1 Eð xÞ ¼ pffiffiffiffiffiffi 2 sz þ m exp z2 2 sdz 2p s 1 pffiffiffi ð 1 ð 2s m 1 ¼ pffiffiffiffi z exp z2 dz þ pffiffiffiffi exp z2 dz p 1 p 1 The first integral on the right-hand side of the equation is an integral of an odd function evaluated over a symmetric interval and is therefore zero. The result is m E(x) ¼ pffiffiffiffi p
ð1
exp z2 dz ¼ m
1
since ð1 1
pffiffiffiffi exp z2 dz ¼ p
SOLUTIONS TO EXERCISES
S18
337
STATISTICS
EXERCISE 18.1: Prepare a histogram of the grouped data in Table 18.1. Solution: The histogram is shown in the following figure.
EXERCISE 18.2: What is the mean of the grouped data in Table 18.1? Solution: The mean for grouped data is x ¼ ¼
k 1X f i xi n i¼1
2(0:075) þ 4(0:125) þ 15(0:175) þ 24(0:225) þ 14(0:275) þ 2(0:325) þ 2(0:375) 63
¼ 0:221 EXERCISE 18.3: Show that the power function y ¼ ax b can be recast in linear form. Solution: The power function y ¼ ax b is recast in linear form by taking the logarithm log y ¼ log a þ b log x For a power function fit by the method of least squares, the values log a and b are obtained by fitting a straight line to the set or ordered pairs {( log xi , log yi )}. The slope of the straight line on a log – log plot is b and the corresponding intercept of the straight line is log a.
REFERENCES
Abramowitz, M.A. and I.A. Stegun (1972): Handbook of Mathematical Functions, 9th printing, Dover, New York. Arfken, G.B. and H.J. Weber (2001): Mathematical Methods for Physicists, 5th ed., ElsevierAcademic Press, New York. Aziz, K. and A. Settari (1979): Petroleum Reservoir Simulation, Applied Science Publishers, London. Beltrami, E. (1987): Mathematics for Dynamic Modeling, Academic Press, Boston. Berberian, S.K. (1992): Linear Algebra, Oxford University Press, New York. Blaisdell, E.A. (1998): Statistics in Practice, 2nd ed., Saunders College Publishing, Fort Worth, TX. Boas, M. (2005): Mathematical Methods in the Physical Sciences, 3rd ed., Wiley, Hoboken, NJ. Breckenbach, E.F., I. Drooyan, and M.D. Grady (1985): College Algebra, 6th ed., Wadsworth, Belmont, CA. Burger, J.M. (1948): A mathematical model illustrating the theory of turbulence, Adv. Appl. Mechanics 1: 171– 199. Carnahan, B., H.A. Luther, and J.O. Wilkes (1969): Applied Numerical Methods, Wiley, Hoboken, NJ. Chapra, S.C. and R.P. Carnale (2002): Numerical Methods for Engineers, McGraw-Hill, Boston. Chow, T.L. (2000): Mathematical Methods for Physicists, A Concise Introduction, Cambridge University Press, Cambridge, UK. Math Refresher for Scientists and Engineers, Third Edition By John R. Fanchi Copyright # 2006 John Wiley & Sons, Inc.
339
340
REFERENCES
Collins, R.E. (1977): The mathematical basis of quantum mechanics, Lett. Nuovo Cim. 18: 581 – 584. Collins, R.E. (1979): The mathematical basis of quantum mechanics: II, Lett. Nuovo Cim. 25: 473 – 475. Collins, R.E. (1985): Characterization of heterogeneous reservoirs, Research & Engineering Consultants, and private communication. Collins, R.E. (1999): Mathematical Methods for Physicists and Engineers, 2nd corrected ed., Dover, Mineola, NY. Corduneanu, C. (1991): Integral Equations and Applications, Cambridge University Press, Cambridge, UK. Devaney, R.L. (1992): A First Course in Chaotic Dynamical Systems, Addison-Wesley, Reading, MA. DeVries, P.L. (1994): A First Course in Computational Physics, Wiley, Hoboken, NJ. Eddington, A.S. (1960): The Mathematical Theory of Relativity, Cambridge University Press, Cambridge, UK. Edwards, C.H. Jr. and D.E. Penney (1989): Elementary Differential Equations with Boundary Value Problems, 2nd ed., Prentice-Hall, Englewood Cliffs, NJ. Fanchi, J.R. (1983): Multidimensional numerical dispersion, Soc. Pet. Eng. J. 143 – 151. Fanchi, J.R. (1993): Parametrized Relativistic Quantum Theory, Kluwer, Dordrecht. Fanchi, J.R. (1997): Principles of Applied Reservoir Simulation, Gulf Publishers, Houston, TX. ¨ Frohner, F.H. (1998): Missing link between probability theory and quantum mechanics: the Riesz – Fejer´ theorem, Z. Naturforsch. 53a: 637 –654. Gluckenheimer, J. and P. Holmes (1986): Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields, Springer, New York. Goldstein, H., C. Poole, and J. Safko (2002): Classical Mechanics, 3rd ed., Addison-Wesley, New York. Gustafson, R.D. and P.D. Frisk (1980): College Algebra, Brooks/Cole, Monterey, CA. Hirsch, M.W. and S. Smale (1974): Differential Equations, Dynamical Systems, and Linear Algebra, Academic Press, New York. Jackson, J.D. (1962): Classical Electrodynamics, Wiley, Hoboken, NJ. Jensen, R.V. (1987): Classical chaos, Am. Sci. 75: 168 – 181. John, F. 
(1978): Partial Differential Equations, 3rd ed., Springer-Verlag, New York. Jordan, D.W. and P. Smith (1994): Mathematical Techniques: An Introduction for the Engineering, Physical and Mathematical Sciences, Oxford University Press, Oxford, UK. Kreider, D.L., R.G. Kuller, and D.R. Ostberg (1968): Elementary Differential Equations, Addison-Wesley, Reading, MA. Kreyszig, E. (1999): Advanced Engineering Mathematics, 8th ed., Wiley, Hoboken, NJ. Kurtz, M. (1991): Handbook of Applied Mathematics for Engineers and Scientists, McGrawHill, New York. Lapidus, L. and G.F. Pinder (1982): Numerical Solution of Partial Differential Equations in Science and Engineering, Wiley, Hoboken, NJ. Larsen, R.J. and M.L. Marx (1985): An Introduction to Probability and Its Applications, Prentice-Hall, Englewood Cliffs, NJ.
Larson, R.E., R.P. Hostetler, and B.H. Edwards (1990): Calculus with Analytic Geometry, 4th ed., D.C. Heath, Lexington, MA.
Lebedev, L.P. and M.J. Cloud (2003): Tensor Analysis, World Scientific, Singapore.
Lichtenberg, A.J. and M.A. Lieberman (1983): Regular and Stochastic Motion, Springer-Verlag, New York.
Lipschutz, S. (1968): Linear Algebra, McGraw-Hill, New York.
Lomen, D. and J. Mark (1988): Differential Equations, Prentice-Hall, Englewood Cliffs, NJ.
Mandelbrot, B.B. (1967): How long is the coast of Britain? Statistical self-similarity and fractional dimension, Science 155: 636–638.
Mandelbrot, B.B. (1983): The Fractal Geometry of Nature, Freeman, New York.
May, R.M. (1976): Simple mathematical models with very complicated dynamics, Nature 261: 459ff.
McCuskey, S.W. (1959): An Introduction to Advanced Dynamics, Addison-Wesley, Reading, MA.
Mendenhall, W. and R.J. Beaver (1991): Introduction to Probability and Statistics, PWS-Kent Publishers, Boston, MA.
Miller, I. and M. Miller (1999): John E. Freund's Mathematical Statistics, 6th ed., Prentice-Hall, Upper Saddle River, NJ.
Nicholson, W.K. (1986): Elementary Linear Algebra with Applications, PWS-Kent Publishers, Boston, MA.
Parker, T.S. and L.O. Chua (1987): Chaos: a tutorial for engineers, Proc. IEEE 75: 982–1008.
Peaceman, D.W. (1977): Fundamentals of Numerical Reservoir Simulation, Elsevier, New York.
Press, W.H., S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery (1992): Numerical Recipes, 2nd ed., Cambridge University Press, New York.
Rich, B. (1973): Modern Elementary Algebra, McGraw-Hill, New York.
Riddle, D.F. (1992): Analytic Geometry, 5th ed., Wadsworth, Belmont, CA.
Rudin, W. (1991): Functional Analysis, 2nd ed., McGraw-Hill, New York.
Spiegel, M.R. (1963): Advanced Calculus, McGraw-Hill, New York.
Stewart, J., L. Redlin, and S. Watson (1992): College Algebra, Brooks/Cole, Pacific Grove, CA.
Stewart, J. (1999): Calculus, 4th ed., Brooks/Cole, Pacific Grove, CA.
Swokowski, E.W. (1986): Algebra and Trigonometry with Analytic Geometry, Prindle, Weber & Schmidt, Boston, MA.
Tatham, R.H. and M.D. McCormack (1991): Multicomponent Seismology in Petroleum Exploration, Society of Exploration Geophysicists, Tulsa, OK.
Telford, W.M., L.P. Geldart, and R.E. Sheriff (1990): Applied Geophysics, 2nd ed., Cambridge University Press, Cambridge, UK.
Thomas, G.B. (1972): Calculus and Analytic Geometry, alternate ed., Addison-Wesley, Reading, MA.
Walpole, R.E. and R.H. Myers (1985): Probability and Statistics for Engineers and Scientists, 3rd ed., Macmillan Publishing, New York.
Yilmaz, Ö. (2001): Seismic Data Analysis, Volume I, Society of Exploration Geophysicists, Tulsa, OK.
Zwillinger, D. (2003): CRC Standard Mathematical Tables and Formulae, 31st ed., Chapman & Hall/CRC, Boca Raton, FL.
INDEX
Abelian 4
abscissa 29, 30
absolute value 49
action 194
acute angle 22, 26
additive rule 223
algebra 1, 21, 53
algebraic function 17
analytic geometry 41
antisymmetric 68, 121, 214, 215
antisymmetry 214
arc length 118, 191, 194, 196
area 2, 23, 24, 88, 93, 109, 110, 117, 119, 226, 228
Argand diagram 14
associated tensor 215
associative 4–6, 16, 56, 69, 220
autonomous 136, 137
backward difference 90
basis vector 69, 70, 77
binomial coefficient 12, 221
binomial distribution 240
boundary conditions 37, 155, 157, 161, 164, 165, 167–171, 174, 175, 185
calculus 79, 80, 107, 110, 117, 191, 192, 194, 195, 198, 200, 216
calculus of variations 191, 192, 194, 195, 198, 200
capacitance 154, 189, 195
Cartesian coordinates 30–33, 97–99, 119, 206, 216
Cauchy equation 158, 162
Cauchy–Riemann 104, 105
centered difference 90, 176
chaos 145–148
characteristic equation 71–74, 144
characteristic root 71, 76
characteristic vector 71
chord 24
Christoffel 215–217
circle 2, 24, 29, 45, 48, 220
circumference 24
class 172, 182
classes 175
cofactor 62
collocation method 178, 180
column vector 53, 54, 57, 60, 63, 71, 137, 138, 176
combination 12, 13, 69, 221
commutative 5, 6, 16, 56, 58, 67, 69, 94, 99, 220
complementary error function 175
complex conjugate 15, 54
complex number 1, 14–16, 41, 48
complex plane 14, 15, 103
conditional distribution 230, 237, 238
conditional probability 223, 224, 226, 230, 231, 239
cone 25, 26, 45
conic section 41, 45, 47
continuous probability 227
continuous random variable 225, 227, 228, 231, 233–235, 237, 238
contraction 213, 214, 216
contravariant vector 203–205, 207, 208, 210, 214–216
convection 37, 172, 175
convergence 39, 148
convolution integral 127
coordinate system 210
coordinate transformation 53, 78, 96, 204, 207
correlation 238
covariance 174, 238
covariant derivative 216
covariant vector 203, 205, 207, 210, 214–216
cross product 68, 97, 99, 100
cumulative distribution 227, 228, 236
curl 99, 100, 101
cylinder 25
cylindrical coordinates 32, 33, 201
Darcy 88, 89
definite integral 110, 111, 113, 114, 116–119
definite integrals 114
determinant 57, 61–63, 71, 72, 101, 144, 156, 188, 210, 211
diagonal matrix 54, 55, 57, 76
diagonalization 65, 76
diameter 24
differential equation 142, 147, 164, 172, 178, 182, 195, 196
differential operator 157, 170, 171, 179
differentiation 85, 94, 107, 109, 110, 155, 159, 165, 216
diffusion 172, 175
digital function 128
Dirac delta function 126, 127, 129
direct product 58, 59, 208, 214
directrix 47, 48
discrete random variable 225, 227, 230, 233–237, 240
discriminant 45
disjoint 220, 223
dispersion 37, 174
distributive 5, 6, 16, 56, 69, 220
divergence 99, 100, 101, 148
domain 3, 5, 79, 85, 103, 126, 159, 168, 176, 178, 179, 225
dot product 67, 68, 97, 99, 100
double integral 117, 119, 120, 125, 127
eccentricity 47
eigenvalue 54, 65, 71–73, 76, 77, 144, 170, 171, 188
eigenvectors 65, 76–78
ellipse 45–47
elliptic 168, 169
empty set 2, 220, 222, 223
equilateral triangle 23
equilibrium 143, 144, 153
Euclidean length 70
Euclidean space 210
Euler 21, 27, 34, 35, 38, 49, 50, 122, 124, 132, 158
Euler–Lagrange 193–197, 199–201
even function 121, 123
event 219, 220, 222–226, 236
expectation value 233, 237, 238
exponent 7, 8, 35, 58, 146–149
extrema 86
factorial 12, 13
fields 47, 97–100, 203
finite difference 87, 88, 175, 176
finite element 175–180
focus 47, 48
forward difference 90
Fourier integral 124–126, 173, 174
Fourier series 121–124
Fourier transform 117, 121, 124–129
four-vector 211, 212
fractal 10–12
Fredholm equation 181, 182, 185–187
Frobenius series 159, 161, 162
fundamental theorem of calculus 110
Galerkin method 178, 180
Galilean transformation 206
gradient 89, 98–101, 194, 205
graph 30, 36, 45, 82, 115, 145
group 1, 4, 176, 221
Hamilton–Cayley theorem 72–74
harmonic function 122, 125
Hermitian 54, 71, 240
homogeneous equation 134, 155, 157
hyperbolic 21, 35, 36, 38, 47, 168, 169, 175–177
hyperbolic function 21, 35, 36, 38
hypotenuse 26, 28
identity element 4
identity matrix 57, 71, 77, 188, 211
imaginary number 14
improper integral 116, 130
indefinite integral 107–109, 111–113
indicial equation 158, 162
inductance 154, 195
inequality 18, 38, 70
initial value problem 136, 137
inner product 70, 208, 214, 215
integer 2, 9, 17, 62, 121, 128, 130, 162, 208, 221
integral domain 5
integral equation 181, 182, 184, 185, 187–189
integrand 108, 115, 116, 118, 181, 193, 199
integration by parts 113, 122, 131
integration constant 108, 109, 134, 182
intercept 11, 41, 42, 44
intersection 2, 47, 220
inverse matrix 59, 60, 63
irrotational 101–103
iterative 87, 145, 176
Jacobian 96, 143
joint distribution 230, 237
joint moment 237
joint probability 224, 225, 229–232, 236, 238
kernel 182, 184, 187, 189
kinetic energy 194, 195
Klein–Gordon 174
Kronecker delta 70, 99, 123, 210, 216
Lagrangian 194, 195
Laplace transform 117, 130–132, 164, 165, 188, 189
Laplacian 99, 100
law of cosines 31, 67
law of sines 31
least squares 43, 44
line element 215, 217
line integral 117–119, 191, 192, 198, 200
linear algebra 53, 65
linear oscillator 184, 185
linear regression 41
linear transformation 75, 78
logarithm 10, 38, 44
logistic map 145, 146
longitudinal 102, 103
Lorentz 174, 206, 207
Lorentz transformation 206, 207
Lyapunov 146–149
Maclaurin series 38, 164
mapping 94, 145, 147
marginal distribution 230, 231, 237, 238
marginal probability 16, 223, 224, 229, 230, 232, 238
Mark 39, 162, 168
matrix equation 60, 78, 96, 138, 211
matrix multiplication 56, 60
maximum 30, 67, 86, 153, 154
May equation 145, 146
method of least squares 43
method of undetermined multipliers 200
metric 210
metric tensor 210, 211, 214–217
minimum 43, 67, 86
moment generating function 234, 235
monomial 17
multiplication 4–6, 15, 56, 69, 224
multiplicative rule 223–225
mutually exclusive 220, 223
natural logarithm 10, 13
Neumann series 186
Newton–Raphson 86, 87
nonautonomous 136, 137
nonhomogeneous 134, 135, 151–155, 157
nonlinear equation 86, 173
nonsingular matrix 58, 60, 63
normal distribution 235
normalization 77, 123, 174, 227, 230
null matrix 57
oblique triangle 22
obtuse angle 22
odd function 121–123
orbit 47, 48, 145
order of matrix 53
ordinary differential equation 133, 170, 172, 184
ordinate 29
orthogonal 32, 70, 78, 97, 123
orthonormal 70, 77, 78, 123
outer product 208, 213, 214
parabola 45, 46, 48, 119, 146
parabolic 47, 168, 169, 175–177
parametric equations 85
partial derivative 37, 43, 44, 94, 103, 104, 167–169, 172, 193, 199, 204, 216
partial differential equation 36, 37, 105, 167
partial differentiation 94, 96
partial fraction 18, 19
path of integration 118, 191, 192, 198, 199
Pauli matrices 54, 58, 72
perihelion 48
periodic function 122
permutation 12, 220, 221
phase diagram 143
plane 14, 32, 33, 45, 68, 69, 101, 111, 194, 196, 231
polar coordinate 32–34, 41, 47–51
polynomial 9, 17, 18, 37, 42, 44, 71–73, 81, 114–116
potential energy 194, 195
power series 37–39, 159, 161, 163, 164
principal diagonal 54, 55
principal value 49
probability density 227, 228, 231, 233, 239
probability distribution 219, 227, 229, 231, 233, 240
probability tree 222
Pythagorean 22, 25, 26, 28, 31, 114
quadratic equation 9, 13, 43, 115, 158
quadrature 116, 134, 172, 173
quantum mechanics 56, 173, 238, 240
radius 24, 25, 29, 32, 33, 39, 45, 71, 162
radius of convergence 39, 162
random variable 225, 227, 229–238
range 3, 32–34, 79, 128, 141, 146, 151, 176, 195, 208, 211, 225, 226
rational function 17, 18, 81
rational number 2
real number 2–4, 7, 8, 10, 14, 48, 85, 94, 121, 159, 225, 227, 229
recurrence relation 161–163
regression line 43, 44
residual 87, 178, 179
resistance 154, 189
return map 145, 146
right angle 22
right triangle 22, 23, 26, 29
ring 5
root 3, 8, 9, 14, 86, 87, 158, 234, 238
rotation 28, 51, 52, 96, 99, 101
row vector 53, 54
Runge–Kutta 137, 141, 142
sample 219–225, 227, 236
sample space 219–225, 227, 236
scalar field 97–101
scientific notation 8
seismic 101, 127, 129
separation constant 170
separation of variables 169
series representation 13, 21, 39, 159
set theory 219
shock wave 36
similarity transformation 75, 76
solenoidal 101–103
sphere 24
square matrix 53, 54, 57–63, 71–75, 78, 96, 176, 210
stability 133, 142–144
standard deviation 234, 238
statistical independence 230–232
statistics 43, 219
subset 219, 220, 236–238
symmetric 6, 54, 71, 121, 122, 214, 215, 223, 225
symmetric matrix 71
symmetry 174, 214, 216
Taylor series 37, 38, 87, 88, 90, 143, 148, 175
tensor 203, 207–211, 213–216
time series 127, 128
trace 54, 55, 58, 71
trajectory 147–149
transformation properties 96, 203–205
transpose 53, 54, 57, 59
transverse 101, 103
trapezoidal rule 115, 116
triangle 22, 23, 25, 31
triangular matrix 54, 55
trigonometric function 26, 28–30, 32, 35, 38, 122, 171
trigonometry 21
triple scalar product 97, 98
triple vector product 97
uniform distribution 240
union 2, 178, 220
unit vector 65, 66, 68, 97
universal set 220
variance 234, 238, 240
variation of parameters 158
vector analysis 96
vector components 65, 66
vector product 99
vector space 69
velocity 37, 75, 97, 172, 194, 203, 207, 239
Venn diagram 2, 220
Volterra equation 181, 182, 184, 186, 188
volume 24, 178, 239
wave equation 102, 169, 238, 240
Wronskian 156
Z transform 117, 127–129