Universitext
Dorina Mitrea
Distributions, Partial Differential Equations, and Harmonic Analysis
Universitext

Series Editors:
Sheldon Axler, San Francisco State University
Vincenzo Capasso, Università degli Studi di Milano
Carles Casacuberta, Universitat de Barcelona
Angus J. MacIntyre, Queen Mary, University of London
Kenneth Ribet, University of California, Berkeley
Claude Sabbah, CNRS, École Polytechnique
Endre Süli, University of Oxford
Wojbor A. Woyczynski, Case Western Reserve University
Universitext is a series of textbooks that presents material from a wide variety of mathematical disciplines at master's level and beyond. The books, often well class-tested by their authors, may have an informal, personal, even experimental approach to their subject matter. Some of the most successful and established books in the series have evolved through several editions, always following the evolution of teaching curricula, into very polished texts. Thus, as research topics trickle down into graduate-level teaching, first textbooks written for new, cutting-edge courses may make their way into Universitext.
For further volumes: http://www.springer.com/series/223
Dorina Mitrea Department of Mathematics University of Missouri Columbia, MO, USA
ISSN 0172-5939; ISSN 2191-6675 (electronic)
ISBN 978-1-4614-8207-9; ISBN 978-1-4614-8208-6 (eBook)
DOI 10.1007/978-1-4614-8208-6
Springer New York Heidelberg Dordrecht London

Library of Congress Control Number: 2013944653

Mathematics Subject Classification (2010): 35A08, 35A09, 35A20, 35B05, 35B53, 35B65, 35C15, 35D30, 35E05, 35G05, 35G35, 35H10, 35J05, 35J30, 35J45, 35J47, 35J48

© Springer Science+Business Media New York 2013

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher's location, in its current version, and permission for use must always be obtained from Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material contained herein. Printed on acid-free paper Springer is part of Springer Science+Business Media (www.springer.com)
Dedicated with love to Diana and Adrian, Love, Mom
Preface
This book has been written from the personal perspective of a mathematician working at the interface between partial differential equations and harmonic analysis. Its aim is to offer, in a concise, rigorous, and largely self-contained form, a rapid introduction to the theory of distributions and its applications to partial differential equations and harmonic analysis. This is done in a format suitable for a graduate course spanning either one semester, when the focus is primarily on the foundational aspects, or two semesters, which allows the proper amount of time to cover all intended applications as well.

Throughout, a special effort has been made to develop the theory of distributions not as an abstract edifice but rather to give the reader a chance to see the rationale behind various seemingly technical definitions, as well as the opportunity to apply the newly developed tools (in the natural build-up of the theory) to concrete problems in partial differential equations and harmonic analysis, at the earliest opportunity. In addition to being suitable as a textbook for a graduate course, this book has been designed so that it may also be used for independent study, since the presentation is reader-friendly, mostly self-sufficient (e.g., all auxiliary results originating outside the scope of this book have been carefully collected and presented in the appendix), and a large number of the suggested exercises have complete solutions.

Columbia, MO, USA
Dorina Irena Rita Mitrea
Contents

Preface
Introduction
Common Notational Conventions

1 Weak Derivatives
   1.1 The Cauchy Problem for a Vibrating Infinite String
   1.2 Weak Derivatives
   1.3 The Spaces E(Ω) and D(Ω)
   1.4 Additional Exercises for Chap. 1

2 The Space D′(Ω) of Distributions
   2.1 The Definition of Distributions
   2.2 The Topological Vector Space D′(Ω)
   2.3 Multiplication of a Distribution with a C^∞ Function
   2.4 Distributional Derivatives
   2.5 The Support of a Distribution
   2.6 Compactly Supported Distributions and the Space E′(Ω)
   2.7 Tensor Product of Distributions
   2.8 The Convolution of Distributions in R^n
   2.9 Distributions with Higher-Order Gradients Continuous or Bounded
   2.10 Additional Exercises for Chap. 2

3 The Schwartz Space and the Fourier Transform
   3.1 The Schwartz Space of Rapidly Decreasing Functions
   3.2 The Action of the Fourier Transform on the Schwartz Class
   3.3 Additional Exercises for Chap. 3

4 The Space of Tempered Distributions
   4.1 Definition and Properties of Tempered Distributions
   4.2 The Fourier Transform Acting on Tempered Distributions
   4.3 Homogeneous Distributions
   4.4 Principal Value Tempered Distributions
   4.5 The Fourier Transform of Principal Value Distributions
   4.6 Tempered Distributions Associated with |x|^{-n}
   4.7 A General Jump-Formula in the Class of Tempered Distributions
   4.8 The Harmonic Poisson Kernel
   4.9 Singular Integral Operators
   4.10 Derivatives of Volume Potentials
   4.11 Additional Exercises for Chap. 4

5 The Concept of a Fundamental Solution
   5.1 Constant Coefficient Linear Differential Operators
   5.2 A First Look at Fundamental Solutions
   5.3 The Malgrange–Ehrenpreis Theorem
   5.4 Additional Exercises for Chap. 5

6 Hypoelliptic Operators
   6.1 Definition and Properties
   6.2 Hypoelliptic Operators with Constant Coefficients
   6.3 Integral Representation Formulas and Interior Estimates
   6.4 Additional Exercises for Chap. 6

7 The Laplacian and Related Operators
   7.1 Fundamental Solutions for the Laplace Operator
   7.2 The Poisson Equation and Layer Potential Representation Formulas
   7.3 Fundamental Solutions for the Bi-Laplacian
   7.4 The Poisson Equation for the Bi-Laplacian
   7.5 Fundamental Solutions for the Poly-Harmonic Operator
   7.6 Fundamental Solutions for the Cauchy–Riemann Operator
   7.7 Fundamental Solutions for the Dirac Operator
   7.8 Fundamental Solutions for General Second-Order Operators
   7.9 Layer Potential Representation Formulas Revisited
   7.10 Additional Exercises for Chap. 7

8 The Heat Operator and Related Versions
   8.1 Fundamental Solutions for the Heat Operator
   8.2 The Generalized Cauchy Problem for the Heat Operator
   8.3 Fundamental Solutions for General Second-Order Parabolic Operators
   8.4 Fundamental Solution for the Schrödinger Operator

9 The Wave Operator
   9.1 Fundamental Solution for the Wave Operator
   9.2 The Generalized Cauchy Problem for the Wave Operator
   9.3 Additional Exercises for Chap. 9

10 The Lamé and Stokes Operators
   10.1 General Remarks About Vector and Matrix Distributions
   10.2 Fundamental Solutions and Regularity for General Systems
   10.3 Fundamental Solutions for the Lamé Operator
   10.4 Mean Value Formulas and Interior Estimates for the Lamé Operator
   10.5 The Poisson Equation for the Lamé Operator
   10.6 Fundamental Solutions for the Stokes Operator
   10.7 Additional Exercises for Chap. 10

11 More on Fundamental Solutions for Systems
   11.1 Computing a Fundamental Solution for the Lamé Operator
   11.2 Computing a Fundamental Solution for the Stokes Operator
   11.3 Fundamental Solutions for Higher-Order Systems
   11.4 Interior Estimates and Real-Analyticity for Null-Solutions of Systems
   11.5 Reverse Hölder Estimates for Null-Solutions of Systems
   11.6 Layer Potentials and Jump Relations for Systems
   11.7 Additional Exercises for Chap. 11

12 Solutions to Selected Exercises
   12.1 Solutions to Exercises from Sect. 1.4
   12.2 Solutions to Exercises from Sect. 2.10
   12.3 Solutions to Exercises from Sect. 3.3
   12.4 Solutions to Exercises from Sect. 4.11
   12.5 Solutions to Exercises from Sect. 5.4
   12.6 Solutions to Exercises from Sect. 6.4
   12.7 Solutions to Exercises from Sect. 7.10
   12.8 Solutions to Exercises from Sect. 9.3
   12.9 Solutions to Exercises from Sect. 10.7
   12.10 Solutions to Exercises from Sect. 11.7

13 Appendix
   13.1 Summary of Topological and Functional Analytic Results
   13.2 Summary of Basic Results from Calculus, Measure Theory, and Topology
   13.3 Custom-Designing Smooth Cut-Off Functions
   13.4 Partition of Unity
   13.5 The Gamma and Beta Functions
   13.6 Surfaces in R^n and Surface Integrals
   13.7 Integration by Parts and Green's Formula
   13.8 Polar Coordinates and Integrals on Spheres
   13.9 Tables of Fourier Transforms

Bibliography
Subject Index
Symbol Index
Introduction
It has long been recognized that there is a large overlap and an intricate interplay among distribution theory (DT), partial differential equations (PDE), and harmonic analysis (HA). The purpose of this book is to guide a reader with a background in basic real analysis through the journey to the stage at which such connections become self-evident. Another goal of the present book is to convince the reader that traditional distinctions made among these branches of mathematics are largely artificial and often simply a matter of choice in focus. Indeed, given the manner in which they complement, motivate, and draw inspiration from one another, it is not a stretch to attempt to pursue their development virtually simultaneously.

Concerning the triumvirate DT, PDE, HA, while a number of good reference texts are available, they are by and large conceived in such a way that they either emphasize one of these topics, typically to the detriment of the others, or are simply not particularly well suited for a nonspecialist. By way of contrast, not only is the present text written in a way that brings together and blends the aforementioned topics into a unified, coherent body of results, but the resulting exposition is also sufficiently detailed and reader-friendly that it may be read independently, outside the formal classroom setting. Indeed, the book is essentially self-contained, presents a balanced treatment of the topics involved, and contains a large number of exercises (upwards of 200, more than half of which are accompanied by solutions), which have been carefully chosen to amplify the effect, and substantiate the power and scope, of the theory discussed here.
While the topics treated are classical, the material is not entirely standard, since a number of results are new even for a seasoned practitioner, and the overall architectural design of the monograph (including the way in which certain topics are covered) is original.

Regarding its inception, the present monograph is an expanded version of the notes prepared for a course on distribution theory that I taught in the Spring of 2007 and the Spring of 2011 at the University of Missouri. My intention was to present the theory of distributions not as an abstract edifice but rather to give the student a chance to instantaneously see the justification and practical benefits of the multitude of seemingly technical definitions and results, as well as give her/him the opportunity to immediately see how the newly introduced concepts (in the natural build-up of the theory) apply to concrete problems in PDE and harmonic analysis.

Special care has been paid to the pedagogical aspects of the presentation of the material in the book. For example, a notable feature of the present monograph is the fact that fundamental solutions for some of the most basic differential operators in mathematical physics and engineering, including the Laplace, heat, wave, poly-harmonic, Dirac, Lamé, Stokes, and Schrödinger operators, are systematically deduced starting from first principles. This stands in contrast with the more common practice in the literature, in which one starts with a certain distribution (the origins of which are fairly obscure) and simply checks that the distribution in question is a fundamental solution for a given differential operator.

Another feature is the emphasis placed on the interrelations between topics. For example, a clear picture is presented as to how DT vastly facilitates the computation of fundamental solutions and the development of singular integral operators, tools which, in turn, are used to solve PDE as well as to represent and estimate solutions of PDE. The presentation is also conceived in such a way as to avoid having to confront heavy-duty topology and functional analysis up front, in the main narrative. For example, the jargon associated with the multitude of topologies on various spaces of test functions and distributions is minimized by deferring the technical details to an appendix, while retaining in the main body of the discussion only those consequences that are most directly relevant to the fluency of the exposition.
While the core material I had in mind deals primarily with the theory of distributions, this book is ultimately devised in such a way as to make the present material a solid launching pad for a number of subsequent courses dealing with allied topics, including:

• Harmonic analysis
• Partial differential equations
• Boundary integral methods
• Sobolev spaces
• Pseudodifferential operators

For example, the theory of singular integral operators of convolution type in L^2(R^n) is essentially developed here, in full detail, up to the point where more specialized tools from harmonic analysis (such as the Hardy–Littlewood maximal operator and the Calderón–Zygmund lemma) are typically involved in order to further extend this theory (via a weak-(1,1) estimate, interpolation with L^2, and then duality) to L^p spaces with p ∈ (1, ∞), as in [6, 9, 16, 23, 47, 62–64], among others.
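As a concrete numerical aside (not part of the book's development), the simplest convolution-type singular integral operator bounded on L^2 is the Hilbert transform, which acts on the Fourier side as the multiplier −i sgn(ξ). The sketch below realizes its periodic analogue with the FFT; NumPy and the helper name hilbert_transform are assumptions of mine, for illustration only.

```python
import numpy as np

def hilbert_transform(samples):
    """Apply the Fourier multiplier -i*sgn(xi), the symbol of the
    Hilbert transform, to samples of a periodic function."""
    xi = np.fft.fftfreq(len(samples))   # discrete frequencies
    multiplier = -1j * np.sign(xi)      # -i * sgn(xi)
    return np.real(np.fft.ifft(multiplier * np.fft.fft(samples)))

# On the circle the Hilbert transform maps cos to sin.
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
error = np.max(np.abs(hilbert_transform(np.cos(x)) - np.sin(x)))
print(error)  # essentially machine precision
```

Since |−i sgn(ξ)| ≤ 1, Plancherel's theorem makes the L^2 boundedness of such a multiplier immediate; extending this boundedness to L^p is precisely where the harmonic analysis tools mentioned above enter.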
Regarding connections with PDE, the Poisson problem in the whole space,

$$Lu = f \quad \text{in } \mathcal{D}'(\mathbb{R}^n), \tag{0.0.1}$$

is systematically treated here for a variety of differential operators L, including the Laplacian, the bi-harmonic operator, and the Lamé system. While Sobolev spaces are not explicitly considered (the interested reader may consult in this regard [1, 2, 13, 40, 43, 44, 76], to cite just a small fraction of a large body of literature on this topic), they lurk just beneath the surface, as their presence is implicit in estimates of the form

$$\sum_{|\alpha|=m} \|\partial^{\alpha} u\|_{L^p(\mathbb{R}^n)} \leq C(L,p)\, \|f\|_{L^p(\mathbb{R}^n)}, \qquad 1 < p < \infty, \tag{0.0.2}$$

where m denotes the order of the elliptic differential operator L. Such estimates are deduced from an integral representation formula for u, involving a fundamental solution of L, together with estimates for singular integral operators of convolution type in R^n. In particular, this justifies two features of the present monograph: (1) the emphasis placed on finding explicit formulas for fundamental solutions for a large number of operators of basic importance in mathematics, physics, and engineering; (2) the focus on the theory of singular integral operators of convolution type, developed alongside the distribution theory. In addition, the analysis of the Cauchy problem formulated and studied for the heat and wave operators once more underscores the significance of the fundamental solutions for the named operators.

As a whole, this material is designed to initiate the reader into the field of PDE. At the same time, it complements, and works well in tandem with, the treatment of this subject in [3, 12, 21, 30–32, 46, 68, 69, 75]. Whenever circumstances permit, other types of problems are brought into play, such as the Dirichlet and Neumann problems in the upper half-space for the Laplacian, as well as for more general second-order systems. In turn, the latter genre of boundary value problems motivates introducing and developing boundary integral methods, and serves as an opportunity to highlight the basic role that layer potential operators play in this context. References dealing with the latter topic include [34, 36, 46, 48, 74]. Analysis of the structure of the boundary layer potential operators naturally intervening in this context also points to the possibility of considering larger classes of operators in which the latter may be composed, inverted, etc., in a stable fashion.
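To preview how inverting a constant coefficient operator amounts to dividing by its symbol on the Fourier side, the sketch below solves a one-dimensional periodic analogue of the Poisson problem (0.0.1). This is an illustrative surrogate of my own (assuming NumPy; the function name solve_poisson_periodic is not the book's), not the whole-space treatment carried out in the text.

```python
import numpy as np

def solve_poisson_periodic(f_samples, period=2 * np.pi):
    """Solve -u'' = f on a periodic grid: the symbol of -d^2/dx^2 is
    xi^2, so the equation is inverted by dividing by xi^2 on the
    Fourier side (the mean mode xi = 0 is skipped, so f should
    have zero mean)."""
    n = len(f_samples)
    xi = 2 * np.pi * np.fft.fftfreq(n, d=period / n)  # angular frequencies
    f_hat = np.fft.fft(f_samples)
    u_hat = np.zeros_like(f_hat)
    u_hat[1:] = f_hat[1:] / xi[1:] ** 2
    return np.real(np.fft.ifft(u_hat))

# With f = sin, the solution of -u'' = f is u = sin itself.
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = solve_poisson_periodic(np.sin(x))
print(np.max(np.abs(u - np.sin(x))))  # essentially machine precision
```

In the book, the analogous division by the symbol is carried out for tempered distributions on all of R^n, which is where fundamental solutions and the estimates (0.0.2) come in.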
This serves as an excellent motivation for the introduction of such algebras of operators as pseudodifferential and Fourier integral operators, a direction that the interested reader may then pursue in, for example, [24, 67, 69, 75], to name a few sources.
A brief description of this book's contents is as follows.
Chapters 1–2 are devoted to the development of the most basic aspects of the theory of distributions. Starting from the discussion of the Cauchy problem for a vibrating infinite string as a motivational example, the notion of a weak derivative is introduced as a means of extending the notion of solution to a more general setting, in which the functions involved may lack standard pointwise differentiability properties. After touching upon classes of test functions, the space of distributions is then introduced and studied from the perspective of a topological vector space with various additional features (such as the concept of support, and a partially defined convolution product).

Chapter 3 contains material pertaining to the Schwartz space of functions rapidly decaying at infinity and the Fourier transform in such a setting. In Chap. 4 the action of the Fourier transform is further extended to the setting of tempered distributions, and several distinguished subclasses of tempered distributions are introduced and studied (including homogeneous and principal value distributions). The foundational material developed up to this point already has significant applications to harmonic analysis and PDE. For example, a general, higher-dimensional jump-formula is deduced in this chapter for a certain class of tempered distributions (which includes the classical harmonic Poisson kernel); this is later used as the main tool in deriving information about the boundary behavior of layer potential operators associated with various partial differential operators and systems. Also, one witnesses here how singular integral operators of central importance to harmonic analysis (such as the Riesz transforms) naturally arise as an extension to L^2 of the convolution product of tempered distributions of principal value type with Schwartz functions.

The first explicit encounter with the notion of fundamental solution takes place in Chap.
5, where the classical Malgrange–Ehrenpreis theorem is presented. Subsequently, in Chap. 6, the concept of hypoelliptic operator is introduced and studied. In particular, a classical result due to L. Schwartz is proved here, to the effect that a necessary and sufficient condition for a linear, constant coefficient differential operator to be hypoelliptic in the entire ambient space is that the operator in question possess a fundamental solution whose singular support consists of the origin alone. In Chap. 6 we also prove an integral representation formula and interior estimates for a subclass of hypoelliptic operators, which are subsequently used to show that null-solutions of these operators are real-analytic.

One of the main goals in Chap. 7 is identifying (starting from first principles) all fundamental solutions that are tempered distributions for scalar elliptic operators. While the natural starting point is the Laplacian, this study encompasses a variety of related operators, such as the bi-Laplacian, the poly-harmonic operator, the Cauchy–Riemann operator, the Dirac operator, as well as general second-order constant coefficient strongly elliptic operators. Having accomplished this task then makes it possible to prove the well-posedness of the Poisson problem (0.0.1) (equipped with a boundary condition at infinity), and to derive qualitative and quantitative properties of the solution, such as (0.0.2). Along the way, Cauchy-like integral operators are also introduced, and their connection with Hardy spaces is brought to light in the setting of both complex and Clifford analysis.

Chapter 8 has a twofold aim: to determine all fundamental solutions that are tempered distributions for the heat operator and related versions (including the Schrödinger operator), and then to use this as a tool in the solution of the generalized Cauchy problem for the heat operator. The same type of program is then carried out in Chap. 9, this time in connection with the wave operator.

While the analysis up to this point has been largely confined to scalar operators, the final two chapters in the book are devoted to studying systems of differential operators. The material in Chap. 10 is centered around two such basic systems: the Lamé operator arising in the theory of elasticity, and the Stokes operator arising in hydrodynamics. Among other things, all their fundamental solutions that are tempered distributions are identified, and the well-posedness of the Poisson problem for the Lamé system is established. The former issue is then revisited in the first part of Chap. 11 from a different perspective, and subsequently generalized to the case of (homogeneous) constant coefficient systems of arbitrary order. In Chap. 11 we also show that integral representation formulas and interior estimates hold for null-solutions of homogeneous systems with non-vanishing full symbol. As a consequence, we prove that such null-solutions are real-analytic and satisfy reverse Hölder estimates. The final topic addressed in Chap.
11 pertains to layer potentials associated with arbitrary constant coefficient second-order systems in the upper half-space, and the relevance of these operators vis-à-vis the solvability of boundary value problems for such systems in this setting.

For completeness, a summary of topological and functional analysis results describing the topology of, and equivalent characterizations of convergence in, spaces of test functions and spaces of distributions is included in the appendix (which also contains a variety of foundational results from calculus, measure theory, and special functions originating outside the scope of this book). One aspect worth noting in this regard is that the exposition in the main body of the book may be followed even without full familiarity with all these details, by alternatively taking as definitions the characterizations of convergence in the various topologies considered here (summarized in the main text under the heading Fact). Such an approach makes the topics covered in the present monograph accessible to a larger audience while, at the same time, providing a full treatment of the topological and functional analysis background accompanying the theory of distributions for the reader interested in a more in-depth treatment.
Finally, each chapter of the book ends with bibliographical references tailored to its contents, under the heading Further Notes, as well as with a number of additional exercises, selectively solved in Chap. 12.

Acknowledgments. The work on this project has been supported in part by the Simons Foundation grant #200750 and by a University of Missouri Research Leave grant. The author wishes to express her gratitude to these institutions. In addition, the author thanks the referees for their useful comments, which have led to various improvements in the manuscript.
Common Notational Conventions
Throughout this book the set of natural numbers will be denoted by N, that is, N := {1, 2, ...}, while N_0 := N ∪ {0}. For each k ∈ N set k! := 1 · 2 ⋯ (k−1) · k, with the convention that 0! := 1. The letter C will denote the set of complex numbers, and z̄ denotes the complex conjugate of z ∈ C. The real and imaginary parts of a complex number z are denoted by Re z and Im z, respectively. The symbol i is reserved for the complex imaginary unit √−1 ∈ C. The letter R will denote the set of real numbers, and the n-fold Cartesian product of R with itself (where n ∈ N) is denoted by R^n. That is,

$$\mathbb{R}^n := \big\{ x = (x_1, \dots, x_n) : x_1, \dots, x_n \in \mathbb{R} \big\}, \tag{0.0.3}$$

considered with the usual vector space and inner product structure, that is,

$$x + y := (x_1 + y_1, \dots, x_n + y_n), \quad cx := (cx_1, \dots, cx_n), \quad x \cdot y := \sum_{j=1}^{n} x_j y_j, \tag{0.0.4}$$

for all x = (x_1, ..., x_n) ∈ R^n, y = (y_1, ..., y_n) ∈ R^n, and c ∈ R.

The standard orthonormal basis of vectors in R^n is denoted by {e_j}_{1≤j≤n}, where we have set e_j := (0, ..., 0, 1, 0, ..., 0) ∈ R^n, with the only nonzero component in the j-th slot. We shall also consider the two canonical (open) half-spaces of R^n, denoted by R^n_± := {x = (x_1, ..., x_n) ∈ R^n : ±x_n > 0}. Hence, R^n_+ = R^{n−1} × (0, ∞) and R^n_− = R^{n−1} × (−∞, 0).

Given a multi-index α = (α_1, ..., α_n) ∈ N_0^n, we set

$$\alpha! := \alpha_1! \alpha_2! \cdots \alpha_n! \quad \text{and} \quad |\alpha| := \sum_{j=1}^{n} \alpha_j, \tag{0.0.5}$$

$$\partial^{\alpha} := \partial_1^{\alpha_1} \cdots \partial_n^{\alpha_n}, \quad \text{where} \quad \partial_j := \frac{\partial}{\partial x_j} \ \text{ for } j = 1, \dots, n, \tag{0.0.6}$$

$$x^{\alpha} := \prod_{j=1}^{n} x_j^{\alpha_j} \quad \text{for every } x = (x_1, \dots, x_n) \in \mathbb{C}^n. \tag{0.0.7}$$

Also, if β = (β_1, ..., β_n) ∈ N_0^n is another multi-index, we shall write β ≤ α provided β_j ≤ α_j for each j ∈ {1, ..., n}, in which case we set α − β := (α_1 − β_1, ..., α_n − β_n). We shall also say that β < α if β ≤ α and β ≠ α. Recall that the Kronecker symbol is defined by δ_{jk} := 1 if j = k and δ_{jk} := 0 if j ≠ k.

All functions in this monograph are assumed to be complex-valued unless otherwise indicated. Derivatives of a function f defined on the real line will be denoted by f′, f″, etc., or f^{(k)}, or d^k f / dx^k.
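The multi-index conventions above lend themselves to a direct implementation. The following Python sketch is purely illustrative and not part of the book; the helper names (multi_factorial, order, monomial, leq) are my own.

```python
from functools import reduce
from math import factorial

def multi_factorial(alpha):
    """alpha! = alpha_1! * ... * alpha_n! for a multi-index alpha in N_0^n."""
    return reduce(lambda acc, a: acc * factorial(a), alpha, 1)

def order(alpha):
    """|alpha| = alpha_1 + ... + alpha_n, the order of the multi-index."""
    return sum(alpha)

def monomial(x, alpha):
    """x^alpha = x_1^alpha_1 * ... * x_n^alpha_n."""
    return reduce(lambda acc, p: acc * p[0] ** p[1], zip(x, alpha), 1)

def leq(beta, alpha):
    """beta <= alpha componentwise, the partial order on multi-indices."""
    return all(b <= a for b, a in zip(beta, alpha))

alpha = (2, 0, 3)
print(multi_factorial(alpha))            # 2! * 0! * 3! = 12
print(order(alpha))                      # 2 + 0 + 3 = 5
print(monomial((2.0, 5.0, 1.0), alpha))  # 2^2 * 5^0 * 1^3 = 4.0
print(leq((1, 0, 3), alpha))             # True
```

Conventions such as 0! = 1 are exactly what make formulas like the multinomial expansion and Leibniz's rule for ∂^α read uniformly over all multi-indices.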
Throughout the book, Ω denotes an arbitrary open subset of R^n. If A is an arbitrary subset of R^n, then Å, A̅, and ∂A denote its interior, its closure, and its boundary, respectively. In addition, if B is another arbitrary subset of R^n, then their set-theoretic difference is denoted by A \ B := {x ∈ A : x ∉ B}. In particular, the complement of A is A^c := R^n \ A. For any E ⊂ R^n we let χ_E stand for the characteristic function of the set E (i.e., χ_E(x) = 1 if x ∈ E and χ_E(x) = 0 if x ∈ R^n \ E).

For k ∈ N_0 ∪ {∞}, we will work with the following classes of functions, which are vector spaces over C:

$$C^k(\Omega) := \big\{ \varphi : \Omega \to \mathbb{C} : \partial^{\alpha}\varphi \text{ continuous } \forall\, \alpha \in \mathbb{N}_0^n,\ |\alpha| \le k \big\}, \tag{0.0.8}$$

$$C^k(\overline{\Omega}) := \big\{ \varphi|_{\overline{\Omega}} : \varphi \in C^k(U),\ U \subseteq \mathbb{R}^n \text{ open set containing } \overline{\Omega} \big\}, \tag{0.0.9}$$

$$C_0^k(\Omega) := \big\{ \varphi \in C^k(\Omega) : \operatorname{supp}\varphi \text{ compact subset of } \Omega \big\}. \tag{0.0.10}$$

As usual, for any Lebesgue-measurable (complex-valued) function f defined on a Lebesgue-measurable set E ⊆ R^n and any p ∈ [1, ∞] we write
$$\|f\|_{L^p(E)} := \begin{cases} \left( \int_E |f|^p \, dx \right)^{1/p} & \text{if } 1 \le p < \infty, \\[4pt] \operatorname{ess\,sup}_E |f| & \text{if } p = \infty, \end{cases} \tag{0.0.11}$$

and denote by L^p(E) the Banach space of (equivalence classes of) Lebesgue-measurable functions f on E satisfying ‖f‖_{L^p(E)} < ∞. Also, we will work with locally integrable functions and with compactly supported integrable functions. For p ∈ [1, ∞] these are defined, respectively, as

$$L^p_{loc}(\Omega) := \big\{ f : \Omega \to \mathbb{C} : f \text{ Lebesgue measurable with } \|f\|_{L^p(K)} < \infty \text{ for every compact } K \subset \Omega \big\}, \tag{0.0.12}$$

$$L^p_{comp}(\Omega) := \big\{ f \in L^p(\Omega) : \operatorname{supp} f \text{ compact subset of } \Omega \big\}, \tag{0.0.13}$$

where supp f is defined in (2.5.10).

Given a measure space (X, μ), a measurable set A ⊆ X with 0 < μ(A) < ∞, and a function f ∈ L^1(A, μ), we define the integral average of f over A by

$$\fint_A f \, d\mu := \frac{1}{\mu(A)} \int_A f \, d\mu. \tag{0.0.14}$$

If E is a Lebesgue-measurable subset of R^n, the Lebesgue measure of E is denoted by |E|. For x ∈ R^n and r > 0, set B(x, r) := {y ∈ R^n : |y − x| < r}, the ball with center x and radius r; its boundary is denoted by ∂B(x, r). The unit sphere in R^n centered at zero is S^{n−1} := {x ∈ R^n : |x| = 1} = ∂B(0, 1), and its surface measure is denoted by ω_{n−1}.
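For the record, ω_{n−1} admits the closed form 2π^{n/2}/Γ(n/2) via the Gamma function (reviewed in Sect. 13.5). The snippet below, an illustrative aside of my own, checks this formula in low dimensions.

```python
from math import gamma, pi

def sphere_surface_measure(n):
    """omega_{n-1}: surface measure of the unit sphere S^{n-1} in R^n,
    from the Gamma-function formula 2 * pi^(n/2) / Gamma(n/2)."""
    return 2 * pi ** (n / 2) / gamma(n / 2)

print(sphere_surface_measure(2))  # 2*pi, the length of the unit circle S^1
print(sphere_surface_measure(3))  # 4*pi, the area of the unit sphere S^2
```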
For n, m ∈ N and R an arbitrary commutative ring with multiplicative unit, we denote by M_{n×m}(R) the collection of all n × m matrices with entries from R. If B ∈ M_{n×m}(R), then B⊤ denotes its transpose; if n = m, then det B denotes the determinant of the matrix B, while I_{n×n} denotes the identity matrix. Regarding other general notational conventions, A := B stands for "A is defined as being B," while A =: B stands for "B is defined as being A." Also, the letter C, when used as a multiplicative constant in various inequalities, is allowed to vary from line to line. Whenever necessary, its dependence on the other parameters a, b, ... implicit in the estimate in question is stressed by writing C(a, b, ...) or C_{a,b,...} in place of just C.
Chapter 1

Weak Derivatives

1.1 The Cauchy Problem for a Vibrating Infinite String
The partial differential equation

  ∂₁²u − ∂₂²u = 0 in R²   (1.1.1)

was derived by Jean d'Alembert in 1747 to describe the displacement u(x₁, x₂) of a violin string as a function of time and distance along the string. Assuming that the string is infinite and that at time x₂ = 0 the displacement is given by some function ϕ₀ ∈ C²(R) leads to the following global Cauchy problem:

  u ∈ C²(R²),
  ∂₁²u − ∂₂²u = 0 in R²,
  u(·, 0) = ϕ₀ in R,
  (∂₂u)(·, 0) = 0 in R.   (1.1.2)

Thanks to the regularity assumption on ϕ₀, it may be checked without difficulty that the function

  u(x₁, x₂) := ½[ϕ₀(x₁ + x₂) + ϕ₀(x₁ − x₂)] for every (x₁, x₂) ∈ R²   (1.1.3)

is a solution of (1.1.2). This being said, the expression for u in (1.1.3) continues to be meaningful under much less restrictive assumptions on ϕ₀. For example, u is a well-defined continuous function in R² whenever ϕ₀ ∈ C⁰(R). While in this case (1.1.3) is no longer a classical solution of (1.1.2), it is natural to ask whether it is possible to identify a new (and possibly weaker) sense in which (1.1.3) would continue to satisfy ∂₁²u − ∂₂²u = 0.
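D'Alembert's formula (1.1.3) can also be experimented with numerically. The sketch below (an illustration only, not part of the surrounding argument; it assumes NumPy and makes the smooth choice ϕ₀ = sin) approximates ∂₁²u − ∂₂²u by centered second differences and confirms that the residual vanishes up to discretization error.

```python
import numpy as np

# d'Alembert's formula u(x1, x2) = (1/2)[phi0(x1 + x2) + phi0(x1 - x2)]
# for the smooth (illustrative) initial displacement phi0 = sin.
phi0 = np.sin
h = 1e-3  # finite-difference step


def u(x1, x2):
    return 0.5 * (phi0(x1 + x2) + phi0(x1 - x2))


def wave_residual(x1, x2):
    # Centered second differences approximating d1^2 u - d2^2 u.
    d11 = (u(x1 + h, x2) - 2 * u(x1, x2) + u(x1 - h, x2)) / h**2
    d22 = (u(x1, x2 + h) - 2 * u(x1, x2) + u(x1, x2 - h)) / h**2
    return d11 - d22


# The residual should vanish, up to O(h^2), at every sample point.
pts = np.random.default_rng(0).uniform(-3.0, 3.0, size=(100, 2))
max_res = max(abs(wave_residual(a, b)) for a, b in pts)
print(max_res)  # small, on the order of the discretization error
```

Running this with a non-smooth choice of ϕ₀ (say, the absolute value) produces large residuals near the characteristic lines, which is exactly why a weaker solution concept is needed.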
To answer this question, fix a function u ∈ C²(R²) satisfying ∂₁²u − ∂₂²u = 0 pointwise in R². If ϕ ∈ C₀∞(R²) is an arbitrary function and R ∈ (0, ∞) is a number such that supp ϕ ⊆ (−R, R) × (−R, R), then integrating by parts twice in each variable (the boundary terms vanish since ϕ vanishes near the boundary of the square) gives

  0 = ∫_{R²} (∂₁²u − ∂₂²u)ϕ dx
    = ∫_{−R}^{R} ∫_{−R}^{R} (∂₁²u)(x₁, x₂)ϕ(x₁, x₂) dx₁ dx₂ − ∫_{−R}^{R} ∫_{−R}^{R} (∂₂²u)(x₁, x₂)ϕ(x₁, x₂) dx₁ dx₂
    = ∫_{R²} (∂₁²ϕ − ∂₂²ϕ)u dx.   (1.1.4)

Note that the condition ∫_{R²} (∂₁²ϕ − ∂₂²ϕ)u dx = 0 for all ϕ ∈ C₀∞(R²) is meaningful even if merely u ∈ C⁰(R²), which suggests the following definition.

Definition 1.1. A function u ∈ C⁰(R²) is called a weak (generalized) solution of the equation ∂₁²u − ∂₂²u = 0 in R² if

  ∫_{R²} (∂₁²ϕ − ∂₂²ϕ)u dx = 0 for all ϕ ∈ C₀∞(R²).   (1.1.5)
Returning to (1.1.3), let us now check that, under the assumption ϕ₀ ∈ C⁰(R), the function u defined in (1.1.3) is a generalized solution of ∂₁²u − ∂₂²u = 0 in R². Concretely, fix ϕ ∈ C₀∞(R²) and write

  ∫_{R²} (∂₁²ϕ − ∂₂²ϕ)u dx
    = ½ ∫_R ∫_R [∂₁²ϕ(x₁, x₂) − ∂₂²ϕ(x₁, x₂)][ϕ₀(x₁ + x₂) + ϕ₀(x₁ − x₂)] dx₁ dx₂
    = ¼ ∫_R ∫_R (∂₁²ϕ)((y₁+y₂)/2, (y₁−y₂)/2)[ϕ₀(y₁) + ϕ₀(y₂)] dy₁ dy₂
      − ¼ ∫_R ∫_R (∂₂²ϕ)((y₁+y₂)/2, (y₁−y₂)/2)[ϕ₀(y₁) + ϕ₀(y₂)] dy₁ dy₂,   (1.1.6)

where for the last equality in (1.1.6) we have made the change of variables x₁ + x₂ = y₁, x₁ − x₂ = y₂. If we now let ψ(y₁, y₂) := ϕ((y₁+y₂)/2, (y₁−y₂)/2) for (y₁, y₂) ∈ R², then

  ∂₁ψ(y₁, y₂) = ½(∂₁ϕ)((y₁+y₂)/2, (y₁−y₂)/2) + ½(∂₂ϕ)((y₁+y₂)/2, (y₁−y₂)/2)   (1.1.7)

and

  ∂₂∂₁ψ(y₁, y₂) = ¼(∂₁²ϕ)((y₁+y₂)/2, (y₁−y₂)/2) − ¼(∂₂∂₁ϕ)((y₁+y₂)/2, (y₁−y₂)/2)
                  + ¼(∂₁∂₂ϕ)((y₁+y₂)/2, (y₁−y₂)/2) − ¼(∂₂²ϕ)((y₁+y₂)/2, (y₁−y₂)/2),   (1.1.8)

which, upon recalling that mixed partial derivatives of ϕ commute and using (1.1.6), gives

  ∫_{R²} (∂₁²ϕ − ∂₂²ϕ)u dx = ∫_R ∫_R ∂₁∂₂ψ(y₁, y₂)[ϕ₀(y₁) + ϕ₀(y₂)] dy₁ dy₂.   (1.1.9)
Let R ∈ (0, ∞) be such that supp ϕ ⊂ (−R, R) × (−R, R). Then the support of ψ is contained in the set of points (y₁, y₂) ∈ R² satisfying −2R ≤ y₁ + y₂ ≤ 2R and −2R ≤ y₁ − y₂ ≤ 2R. Hence, if R′ > 2R, we have supp ψ ⊂ (−R′, R′) × (−R′, R′), and integration by parts yields

  ∫_{R²} (∂₁²ϕ − ∂₂²ϕ)u dx
    = ∫_{−R′}^{R′} ϕ₀(y₁) (∫_{−R′}^{R′} ∂₂∂₁ψ(y₁, y₂) dy₂) dy₁ + ∫_{−R′}^{R′} ϕ₀(y₂) (∫_{−R′}^{R′} ∂₁∂₂ψ(y₁, y₂) dy₁) dy₂
    = ∫_{−R′}^{R′} ϕ₀(y₁)[∂₁ψ(y₁, R′) − ∂₁ψ(y₁, −R′)] dy₁
      + ∫_{−R′}^{R′} ϕ₀(y₂)[∂₂ψ(R′, y₂) − ∂₂ψ(−R′, y₂)] dy₂ = 0,   (1.1.10)

since ∂₁ψ and ∂₂ψ vanish on the boundary of (−R′, R′) × (−R′, R′). In summary, this proves that for each ϕ₀ ∈ C⁰(R) the function u defined as in (1.1.3) is a weak solution of the equation

  ∂₁²u − ∂₂²u = 0 in R².   (1.1.11)

We emphasize that, in general, there is no reason to expect that u has any pointwise differentiability properties if ϕ₀ is merely continuous.
1.2 Weak Derivatives

Convention. Unless otherwise specified, Ω denotes an arbitrary open subset of Rⁿ.

Before proceeding with the definition of weak (Sobolev) derivatives, we discuss a phenomenon that serves as motivation for the definition. Let f ∈ C^m(Ω),
where m ∈ N₀. Given an arbitrary ϕ ∈ C₀∞(Ω), consider a function F ∈ C₀^m(Rⁿ) that agrees with f in a neighborhood of supp ϕ. For example, we may take F = ψ f̃, where the tilde denotes extension by zero outside Ω and ψ ∈ C₀∞(Rⁿ) is identically one in a neighborhood of supp ϕ (see Proposition 13.26 for the construction of such a function). Also, pick R > 0 large enough so that supp ϕ ⊂ B(0, R). Then for each α ∈ N₀ⁿ with |α| ≤ m, integration by parts (cf. Theorem 13.41 for a precise formulation) and support considerations yield

  ∫_Ω (∂^α f)ϕ dx = ∫_{B(0,R)} (∂^α F)ϕ dx = (−1)^{|α|} ∫_{B(0,R)} F ∂^α ϕ dx = (−1)^{|α|} ∫_Ω f ∂^α ϕ dx.   (1.2.1)

This computation suggests the following definition.

Definition 1.2. If f ∈ L¹loc(Ω) and α ∈ N₀ⁿ, we say that ∂^α f belongs to L¹loc(Ω) in a weak (Sobolev) sense provided there exists some g ∈ L¹loc(Ω) with the property that

  ∫_Ω gϕ dx = (−1)^{|α|} ∫_Ω f ∂^α ϕ dx for every ϕ ∈ C₀∞(Ω).   (1.2.2)

Whenever this happens, we shall write ∂^α f = g and call g the weak derivative of order α of f.

The fact that the concept of weak derivative is unambiguously defined is ensured by the next theorem.
Theorem 1.3. If g ∈ L¹loc(Ω) and ∫_Ω gϕ dx = 0 for each ϕ ∈ C₀∞(Ω), then g = 0 almost everywhere on Ω.

Proof. From the start, we note that the real and imaginary parts of g enjoy the same type of property as g itself (as seen by integrating g against real-valued test functions ϕ). Thus, without any loss of generality, we may assume that g is real-valued.
We claim that ∫_Ω gu dx = 0 for all u ∈ L^∞_comp(Ω). To see that this is the case, pick an arbitrary u ∈ L^∞_comp(Ω). By working with the extension of u by zero outside Ω, we may assume from the start that u ∈ L^∞_comp(Rⁿ) and supp u ⊂ Ω. To proceed, consider a function φ satisfying (see (13.3.3) for a concrete example)

  φ ∈ C₀∞(Rⁿ), φ ≥ 0, supp φ ⊆ B(0, 1), and ∫_{Rⁿ} φ(x) dx = 1,   (1.2.3)

and for each ε > 0 define

  φ_ε(x) := ε^{−n} φ(x/ε) for each x ∈ Rⁿ.   (1.2.4)

Then

  φ_ε ∈ C₀∞(Rⁿ), φ_ε ≥ 0, supp φ_ε ⊆ B(0, ε), and ∫_{Rⁿ} φ_ε(x) dx = 1.   (1.2.5)

Let us now introduce

  u_ε(x) := ∫_{Rⁿ} u(y)φ_ε(x − y) dy, ∀ x ∈ Rⁿ,   (1.2.6)

and set K_ε := {x ∈ Rⁿ : dist(x, supp u) ≤ ε}. Since u has compact support contained in Ω, it follows that there exists ε₀ > 0 such that

  u_ε ∈ C∞(Rⁿ) and supp u_ε ⊆ K_{ε₀} ⊂ Ω if 0 < ε ≤ ε₀.   (1.2.7)

In particular, the restriction of u_ε to Ω satisfies u_ε|_Ω ∈ C₀∞(Ω) provided we select 0 < ε ≤ ε₀. Granted the current assumptions on g, it follows that

  ∫_Ω g u_ε dx = 0 whenever 0 < ε ≤ ε₀.   (1.2.8)
Next, using the properties of φ_ε and the definition of u_ε, we may write

  |u_ε(x) − u(x)| = |∫_{B(0,ε)} u(x − y)φ_ε(y) dy − u(x)|
    = |∫_{B(0,ε)} [u(x − y) − u(x)]φ_ε(y) dy|
    ≤ ε^{−n} ∫_{B(0,ε)} |u(x − y) − u(x)| φ(y/ε) dy
    ≤ (c/|B(0, ε)|) ∫_{B(0,ε)} |u(x − y) − u(x)| dy → 0 as ε → 0⁺ for a.e. x ∈ Rⁿ,   (1.2.9)

where c := (ω_{n−1}/n) ‖φ‖_{L∞(Rⁿ)}, and for the convergence in (1.2.9) we used Lebesgue's differentiation theorem (cf. Theorem 13.11). Hence,

  u_ε → u as ε → 0⁺ pointwise almost everywhere in Rⁿ.   (1.2.10)

In addition, since ∫_{Rⁿ} φ_ε(x − y) dy = 1 for every x ∈ Rⁿ, it follows that |u_ε(x)| ≤ ‖u‖_{L∞(Rⁿ)} for every x ∈ Rⁿ. Together with (1.2.7), this implies

  |g u_ε| = |u_ε||g| ≤ ‖u‖_{L∞(Rⁿ)} |g| χ_{K_{ε₀}} ∈ L¹(Ω).   (1.2.11)

Making use of (1.2.8) and Lebesgue's dominated convergence theorem (cf. Theorem 13.12, which applies thanks to (1.2.10) and (1.2.11)), we therefore obtain

  0 = lim_{ε→0⁺} ∫_Ω g u_ε dx = ∫_Ω gu dx,   (1.2.12)

finishing the proof of the claim made earlier.
To proceed, pick a sequence of compact sets {K_j}_{j∈N} such that Ω = ⋃_{j∈N} K_j. For example,

  K_j := {x ∈ Ω : dist(x, ∂Ω) ≥ 1/j} ∩ B(0, j), j ∈ N,   (1.2.13)

will do. Also, for each j ∈ N, define

  u_j := χ_{{x ∈ K_j : g(x) > 0}} − χ_{{x ∈ K_j : g(x) < 0}}.   (1.2.14)

Then for each j ∈ N we have u_j ∈ L^∞_comp(Ω); hence, based on what we have proved so far, 0 = ∫_Ω g u_j dx = ∫_{K_j} |g| dx. The latter condition implies that g = 0 almost everywhere in each K_j; hence g = 0 almost everywhere in ⋃_{j∈N} K_j = Ω. ∎
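The mollification scheme (1.2.3)-(1.2.6) used in the proof is easy to experiment with in one dimension. The sketch below (an illustration assuming NumPy; the discrete normalization of the bump is a numerical stand-in for the exact constant) builds φ_ε from the standard bump and checks the pointwise behavior (1.2.10) for u = χ_{[0,1]}: the mollification is close to 1 well inside the interval, 0 well outside, and 1/2 at the jump.

```python
import numpy as np


def bump(x):
    # Standard mollifier profile: exp(-1/(1 - x^2)) on (-1, 1), zero elsewhere.
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out


h = 1e-4
t = np.arange(-1, 1, h)
mass = np.sum(bump(t)) * h  # normalization, so that phi integrates to 1


def u(y):
    return ((y >= 0) & (y <= 1)).astype(float)  # u = characteristic of [0, 1]


def u_eps(x, eps):
    # u_eps(x) = int u(y) phi_eps(x - y) dy, phi_eps(z) = eps^{-1} phi(z/eps).
    y = np.arange(x - eps, x + eps, h * eps)
    w = bump((x - y) / eps) / (mass * eps)
    return np.sum(u(y) * w) * (h * eps)


for eps in (0.2, 0.1, 0.05):
    print(u_eps(0.5, eps), u_eps(-0.5, eps), u_eps(0.0, eps))
    # approx 1 (inside), approx 0 (outside), approx 0.5 (at the jump)
```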
Next we present a few examples related to the notion of weak (Sobolev) derivative.

Example 1.4. Consider the function

  f : R → R, f(x) := x if x > 0, and f(x) := 0 if x ≤ 0.   (1.2.15)

Note that f is continuous on R but not differentiable at 0. Nonetheless, f has a weak derivative of order one, equal to the Heaviside function

  H : R → R, H(x) := 1 if x > 0, and H(x) := 0 if x ≤ 0.   (1.2.16)

Indeed, if ϕ ∈ C₀∞(R), then integration by parts yields

  −∫_{−∞}^{∞} f(x)ϕ′(x) dx = −∫_0^{∞} xϕ′(x) dx = −xϕ(x)|_0^{∞} + ∫_0^{∞} ϕ(x) dx = ∫_{−∞}^{∞} H(x)ϕ(x) dx,   (1.2.17)

which shows that

  ∫_R Hϕ dx = −∫_R fϕ′ dx, ∀ ϕ ∈ C₀∞(R).   (1.2.18)

Since H ∈ L¹loc(R), it follows that H is the weak (or Sobolev) derivative of order one of the function f in R.

Example 1.5. Does there exist a function g ∈ L¹loc(R) that is a weak derivative (of order one) of the Heaviside function? To answer this question, first observe that for each ϕ ∈ C₀∞(R) we have

  −∫_{−∞}^{∞} H(x)ϕ′(x) dx = −∫_0^{∞} ϕ′(x) dx = ϕ(0).   (1.2.19)

Suppose that H has a weak derivative of order one, and call this g ∈ L¹loc(R). Then, by Definition 1.2 and (1.2.19), we have ∫_R gϕ dx = ϕ(0) for all ϕ ∈ C₀∞(R). This forces ∫_0^{∞} gϕ dx = 0 for all ϕ ∈ C₀∞((0, ∞)). In concert with Theorem 1.3, the latter yields g = 0 almost everywhere on (0, ∞). Similarly, we obtain that g = 0 almost everywhere on (−∞, 0); hence g = 0 almost everywhere on R. When combined with (1.2.19), this gives 0 = ∫_R gϕ dx = ϕ(0) for all ϕ ∈ C₀∞(R), leading to a contradiction (as there are functions ϕ ∈ C₀∞(R) with ϕ(0) ≠ 0). Thus, a weak (Sobolev) derivative of order one of H does not exist.
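Both integration-by-parts identities above can be checked by quadrature. The sketch below (illustrative only; it assumes NumPy and uses the rapidly decaying ϕ(x) = e^{−x²} as a stand-in for a compactly supported test function, an assumption that is harmless on the truncated grid) confirms (1.2.18) and (1.2.19).

```python
import numpy as np

# Quadrature check of (1.2.18) and (1.2.19), with phi(x) = exp(-x^2)
# standing in for a compactly supported test function.
h = 1e-4
x = np.arange(-10.0, 10.0, h)
phi = np.exp(-(x**2))
dphi = -2 * x * np.exp(-(x**2))  # phi'

f = np.where(x > 0, x, 0.0)    # f from (1.2.15)
H = np.where(x > 0, 1.0, 0.0)  # Heaviside function from (1.2.16)

# (1.2.18): int H phi dx = - int f phi' dx  (both equal sqrt(pi)/2 here).
lhs = np.sum(H * phi) * h
rhs = -np.sum(f * dphi) * h
print(lhs, rhs)  # both approx sqrt(pi)/2 ~ 0.8862

# (1.2.19): - int H phi' dx = phi(0) = 1.
val = -np.sum(H * dphi) * h
print(val)  # approx 1
```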
Having defined the notion of weak (Sobolev) derivative for locally integrable functions, we return to the notion of weak solution considered in Definition 1.1 in a particular case, and extend it to more general partial differential equations. To set the stage, let P(x, ∂) be a linear partial differential operator of order m ∈ N of the form

  P(x, ∂) := Σ_{|α|≤m} a_α(x)∂^α, a_α ∈ C^{|α|}(Ω), α ∈ N₀ⁿ, |α| ≤ m.   (1.2.20)

Also, suppose f ∈ C⁰(Ω) is a given function and that u is a classical solution of the partial differential equation P(x, ∂)u = f in Ω; that is, assume u ∈ C^m(Ω) and the equation holds pointwise in Ω. Then, for each ϕ ∈ C₀∞(Ω), integration by parts gives

  ∫_Ω fϕ dx = Σ_{|α|≤m} ∫_Ω ϕ a_α (∂^α u) dx = ∫_Ω [Σ_{|α|≤m} (−1)^{|α|} ∂^α(a_α ϕ)] u dx.   (1.2.21)

Hence, if we define

  P⊤(x, ∂)ϕ := Σ_{|α|≤m} (−1)^{|α|} ∂^α(a_α ϕ),   (1.2.22)

and call it the transpose of the operator P(x, ∂), the resulting equation from (1.2.21) becomes

  ∫_Ω fϕ dx = ∫_Ω [P⊤(x, ∂)ϕ]u dx, ∀ ϕ ∈ C₀∞(Ω).   (1.2.23)

Thus, any classical solution u of P(x, ∂)u = f in Ω satisfies (1.2.23). On the other hand, there may exist functions u ∈ L¹loc(Ω) that satisfy (1.2.23) but are not classical solutions of the given equation. Such a scenario has already been encountered in (1.1.11) (cf. also the subsequent comment). This motivates the following general definition (compare with Definition 1.1 in the case P(x, ∂) = ∂₁² − ∂₂²).

Definition 1.6. Let u, f ∈ L¹loc(Ω) be given and assume that P(x, ∂) is as in (1.2.20). Then P(x, ∂)u = f is said to hold in the weak (or Sobolev) sense if

  ∫_Ω fϕ dx = ∫_Ω [P⊤(x, ∂)ϕ]u dx, ∀ ϕ ∈ C₀∞(Ω).   (1.2.24)
From the comments in the preamble to Definition 1.6, we know that if u is a classical solution of P(x, ∂)u = f in Ω for some f ∈ C⁰(Ω), then u is also a weak (Sobolev) solution of the same equation. Conversely, if u ∈ C^m(Ω) is a weak solution of the partial differential equation P(x, ∂)u = f for some given function f ∈ C⁰(Ω), then by Definition 1.6 and integration by parts we obtain

  ∫_Ω fϕ dx = ∫_Ω [P⊤(x, ∂)ϕ]u dx = ∫_Ω ϕ P(x, ∂)u dx.   (1.2.25)

Since ϕ ∈ C₀∞(Ω) is arbitrary, Theorem 1.3 then forces f = P(x, ∂)u almost everywhere in Ω, hence ultimately everywhere in Ω, since the functions in question are continuous.

In summary, the above discussion shows that the notion of weak solution of a partial differential equation is a natural, unambiguous, and genuine generalization of the concept of classical solution, in the following precise sense:

• any classical solution is a weak solution,
• any sufficiently regular weak solution is classical,
• weak solutions may exist even in the absence of classical ones.
1.3 The Spaces E(Ω) and D(Ω)

A major drawback of Definition 1.2 is that, while the right-hand side of (1.2.2) is always meaningful, it cannot always be written in the form given by the left-hand side of (1.2.2). In addition, a locally integrable function in Ω may admit weak (Sobolev) derivatives of a certain order while failing to have some intermediate, lower-order, weak derivatives (see the example in Exercise 1.26). The remedy is to focus on the portion of (1.2.2) that always makes sense. Specifically, given f ∈ L¹loc(Ω) and α ∈ N₀ⁿ, define the mapping

  g_α : C₀∞(Ω) → C, g_α(ϕ) := (−1)^{|α|} ∫_Ω f (∂^α ϕ) dx, ∀ ϕ ∈ C₀∞(Ω).   (1.3.1)
The functional in (1.3.1) has the following properties.

1. g_α is linear, i.e., g_α(λ₁ϕ₁ + λ₂ϕ₂) = λ₁g_α(ϕ₁) + λ₂g_α(ϕ₂) for every λ₁, λ₂ ∈ C and every ϕ₁, ϕ₂ ∈ C₀∞(Ω).

2. For each ϕ ∈ C₀∞(Ω) we may estimate

  |g_α(ϕ)| ≤ ∫_Ω |f||∂^α ϕ| dx ≤ (∫_{supp ϕ} |f| dx) sup_{x ∈ supp ϕ} |∂^α ϕ(x)|.   (1.3.2)

The fact that the term ∫_{supp ϕ} |f| dx in (1.3.2) depends on ϕ is inconvenient if we want to consider the continuity of g_α in some sense. Nonetheless, if a compact set K ⊂ Ω is fixed a priori and the requirement supp ϕ ⊆ K is imposed, then (1.3.2) becomes

  |g_α(ϕ)| ≤ (∫_K |f| dx) sup_{x∈K} |∂^α ϕ(x)|   (1.3.3)

and, this time, ∫_K |f| dx is a constant independent of ϕ.

This observation motivates considering an appropriate topology τ on C∞(Ω). For the exact definition of this topology see Sect. 13.1. We will not elaborate more on this subject here, other than highlighting those key features of τ that are particularly important for our future investigations. To record the precise statements of these features, introduce

  E(Ω) := (C∞(Ω), τ),   (1.3.4)

a notation that emphasizes that E(Ω) is the vector space C∞(Ω) equipped with the topology τ. We then have:

Fact 1.7. A sequence {ϕ_j}_{j∈N} ⊂ C∞(Ω) converges in E(Ω) to some ϕ ∈ C∞(Ω) as j → ∞ if and only if

  for every compact K ⊂ Ω and every α ∈ N₀ⁿ we have lim_{j→∞} sup_{x∈K} |∂^α(ϕ_j − ϕ)(x)| = 0,   (1.3.5)

in which case we use the notation ϕ_j → ϕ in E(Ω) as j → ∞.
Fact 1.8. E(Ω) is a locally convex, metrizable, and complete topological vector space over C.

It is easy to see that, as a consequence of Fact 1.7, we have the following result.

Remark 1.9. A sequence {ϕ_j}_{j∈N} ⊂ C∞(Ω) converges in E(Ω) to ϕ ∈ C∞(Ω) as j → ∞ if and only if for any compact set K ⊂ Ω and any m ∈ N₀ one has

  lim_{j→∞} sup_{α∈N₀ⁿ, |α|≤m} sup_{x∈K} |∂^α(ϕ_j − ϕ)(x)| = 0.   (1.3.6)
Exercise 1.10. Prove that if ϕ_j → ϕ in E(Ω) as j → ∞ then the following also hold:

(1) ∂^α ϕ_j → ∂^α ϕ in E(Ω) for each α ∈ N₀ⁿ;
(2) a ϕ_j → a ϕ in E(Ω) for each a ∈ C₀∞(Ω).
A standard way of constructing a sequence of smooth functions in Rn that converges in E(Rn ) to a given f ∈ C ∞ (Rn ) is by taking the convolution of f with dilations of a function as in (1.2.3). This construction is discussed in detail next.
Example 1.11. Let f ∈ C∞(Rⁿ) be given. Then a sequence of functions from C∞(Rⁿ) that converges to f in E(Rⁿ) may be constructed as follows. Recall φ from (1.2.3) and define

  φ_j(x) := jⁿ φ(jx) for x ∈ Rⁿ and each j ∈ N.   (1.3.7)

Clearly, for each j ∈ N we have

  φ_j ∈ C₀∞(Rⁿ), supp φ_j ⊆ B(0, 1/j), and ∫_{Rⁿ} φ_j dx = 1.   (1.3.8)

Now if we further set, for each j ∈ N,

  f_j(x) := ∫_{Rⁿ} f(x − y)φ_j(y) dy = ∫_{B(0,1)} f(x − z/j)φ(z) dz for x ∈ Rⁿ,   (1.3.9)

then f_j ∈ C∞(Rⁿ). Also, if K is an arbitrary compact set in Rⁿ and α ∈ N₀ⁿ, then

  |∂^α f_j(x) − ∂^α f(x)| ≤ ∫_{B(0,1)} |∂^α f(x − z/j) − ∂^α f(x)| φ(z) dz ≤ (1/j) max_{|β|=|α|+1} ‖∂^β f‖_{L∞(K̃)} ∀ x ∈ K,   (1.3.10)

where K̃ := {x ∈ Rⁿ : dist(x, K) ≤ 1}. Hence f_j → f in E(Rⁿ) as j → ∞, as desired.
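The convergence in Example 1.11 is visible numerically. The sketch below (illustrative only; it assumes NumPy, takes f = sin on the sample compact set K = [−2, 2], and replaces the exact normalization of φ by a discrete one) computes the sup-norm error of the approximation (1.3.9) on K and shows that it decreases as j grows.

```python
import numpy as np


def bump(z):
    out = np.zeros_like(z)
    m = np.abs(z) < 1
    out[m] = np.exp(-1.0 / (1.0 - z[m] ** 2))
    return out


hz = 1e-3
z = np.arange(-1, 1, hz)
w = bump(z)
w /= w.sum()  # discrete weights of phi, normalized to total mass 1

f = np.sin
K = np.linspace(-2, 2, 201)  # sample points of a compact set


def sup_err(j):
    # f_j(x) = int f(x - z/j) phi(z) dz, approximated by the weighted sum.
    fj = np.array([np.sum(f(x - z / j) * w) for x in K])
    return np.max(np.abs(fj - f(K)))


errs = [sup_err(j) for j in (1, 2, 4, 8, 16)]
print(errs)  # decreasing with j
```

For the even kernel used here the observed decay is in fact faster than the 1/j guaranteed by (1.3.10), since the first-order term in the Taylor expansion integrates to zero.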
The previous approximation result may be further strengthened, as indicated in the next two exercises.

Exercise 1.12. Prove that C₀∞(Rⁿ) is sequentially dense in E(Rⁿ). That is, show that for each f ∈ C∞(Rⁿ) there exists a sequence {f_j}_{j∈N} ⊂ C₀∞(Rⁿ) with the property that f_j → f in E(Rⁿ) as j → ∞.

Hint: Let ψ ∈ C₀∞(Rⁿ) be such that ψ(x) = 1 whenever |x| < 1. Then, if f ∈ C∞(Rⁿ), define f_j(x) := ψ(x/j)f(x) for each x ∈ Rⁿ and each j ∈ N.

Exercise 1.13. Prove that C₀∞(Ω) is sequentially dense in E(Ω).

Hint: Recall the sequence of compact sets from (1.2.13). Then ⋃_{j∈N} K_j = Ω and K_j ⊂ K̊_{j+1} for each j ∈ N. For each j ∈ N, let ψ_j ∈ C₀∞(Ω) be such that ψ_j = 1 on a neighborhood of K_j and supp ψ_j ⊆ K_{j+1} (cf. Proposition 13.26). If f ∈ C∞(Ω), define f_j := ψ_j f for every j ∈ N.

Moving on, we focus on defining a topology on C₀∞(Ω) that suits the purposes we have in mind. Since C₀∞(Ω) ⊂ E(Ω), one option would be to consider the topology induced on C₀∞(Ω) by this larger ambient space. However, this topology has the distinct drawback of not preserving the property of being compactly supported under convergence. Here is an example to that effect.
Example 1.14. Consider the function

  ϕ(x) := e^{1/((x − 1/2)² − 1/4)} if 0 < x < 1, and ϕ(x) := 0 if x ≤ 0 or x ≥ 1,   (1.3.11)

which satisfies ϕ ∈ C∞(R), supp ϕ = [0, 1], and ϕ > 0 in (0, 1). For each j ∈ N define

  ϕ_j(x) := ϕ(x − 1) + ½ϕ(x − 2) + ··· + (1/j)ϕ(x − j), x ∈ R.   (1.3.12)

Then ϕ_j ∈ C∞(R), supp ϕ_j = [1, j + 1], and ϕ_j → ϕ̃ in E(R) as j → ∞, where

  ϕ̃(x) := Σ_{k=1}^{∞} (1/k)ϕ(x − k) for x ∈ R.   (1.3.13)

Clearly this limit function does not have compact support.

The flaw just highlighted is remedied by introducing a different topology on C₀∞(Ω), finer than the one inherited from E(Ω). First, for each compact K ⊂ Ω, denote by D_K(Ω) the vector space consisting of functions from C∞(Ω) supported in K, endowed with the topology induced by E(Ω). Second, consider on C₀∞(Ω) the inductive limit topology of the spaces {D_K(Ω)}_{K ⊂ Ω compact}, and denote the resulting topological vector space by D(Ω). For precise definitions see Sect. 13.1. Two features that are going to be particularly important for our analysis are singled out below (see Sect. 13.1 in this regard).

Fact 1.15. D(Ω) is a locally convex and complete topological vector space over C.

Fact 1.16. A sequence {ϕ_j}_{j∈N} ⊂ C₀∞(Ω) converges in D(Ω) to some function ϕ ∈ C₀∞(Ω) as j → ∞ if and only if the following two conditions are satisfied:

(1) there exists a compact set K ⊂ Ω such that supp ϕ_j ⊆ K for all j ∈ N and supp ϕ ⊆ K;
(2) for any α ∈ N₀ⁿ we have lim_{j→∞} sup_{x∈K} |∂^α(ϕ_j − ϕ)(x)| = 0.

We abbreviate (1)-(2) by simply writing ϕ_j → ϕ in D(Ω) as j → ∞.
In view of Fact 1.7, one obtains the following consequence of Fact 1.16.

Remark 1.17. ϕ_j → ϕ in D(Ω) as j → ∞ if and only if

(1) there exists a compact set K ⊂ Ω such that supp ϕ_j ⊆ K for all j ∈ N, and
(2) ϕ_j → ϕ in E(Ω) as j → ∞.

If one now considers the identity map from D(Ω) into E(Ω), a combination of Remark 1.17 and Theorem 13.5 yields that this map is continuous. Hence, if we also take into account Exercise 1.12, it follows that

  D(Ω) is continuously and densely embedded into E(Ω).   (1.3.14)

Exercise 1.18. Suppose ω is an open subset of Ω and consider the map ι : C₀∞(ω) → C₀∞(Ω) defined by

  ι(ϕ) := ϕ on ω and ι(ϕ) := 0 on Ω \ ω, ∀ ϕ ∈ C₀∞(ω).   (1.3.15)

Prove that if ϕ_j → ϕ in D(ω) then ι(ϕ_j) → ι(ϕ) in D(Ω) as j → ∞. Use Theorem 13.5 to conclude that ι : D(ω) → D(Ω) is continuous.

Exercise 1.19. Let x₀ ∈ Rⁿ and consider the translation by x₀, that is, the map defined as

  t_{x₀} : D(Rⁿ) → D(Rⁿ), t_{x₀}(ϕ) := ϕ(· − x₀), ∀ ϕ ∈ C₀∞(Rⁿ).   (1.3.16)

Prove that t_{x₀} is linear and continuous.

Hint: Use Theorem 13.5.

Exercise 1.20. Prove that if ϕ_j → ϕ in D(Ω) as j → ∞ then the following also hold:

(1) ∂^α ϕ_j → ∂^α ϕ in D(Ω) for each α ∈ N₀ⁿ;
(2) a ϕ_j → a ϕ in D(Ω) for each a ∈ C₀∞(Ω).

Exercise 1.21. Prove that if ϕ_j → ϕ in E(Ω) as j → ∞ then a ϕ_j → a ϕ in D(Ω) for each function a ∈ C₀∞(Ω). Also show that if ω is an open subset of Ω, a ∈ C₀∞(ω), and ϕ_j → ϕ in E(Ω), then a ϕ_j → a ϕ in D(ω).

We have already noted that the topology on D(Ω) is finer than the topology C₀∞(Ω) inherits from E(Ω), and an example of a sequence of smooth, compactly supported functions in Ω convergent in E(Ω) to a limit that does not belong to D(Ω) has been given in Example 1.14. The example below shows that even if the limit function is in D(Ω), one should still not expect that convergence in E(Ω) of a sequence of smooth, compactly supported functions in Ω implies convergence in D(Ω).

Example 1.22. Let ϕ be as in (1.3.11) and for each j ∈ N set ϕ_j(x) := ϕ(x − j) for x ∈ R. Clearly, ϕ_j ∈ C∞(R) and supp ϕ_j = [j, j + 1] for all j ∈ N. If K is some compact subset of R, then there exists j₀ ∈ N such that K ⊆ [−j₀, j₀]. Consequently, supp ϕ_j ∩ K = ∅ for j ≥ j₀. Thus, trivially, sup_{x∈K} |ϕ_j^{(k)}(x)| = 0 for j ≥ j₀ and every k ∈ N₀, which shows that ϕ_j → 0 in E(R) as j → ∞. Consider next the issue of whether {ϕ_j}_{j∈N} converges in D(R). If this were the case, there would exist r ∈ (0, ∞) such that supp ϕ_j ⊆ [−r, r] for every j. However, ⋃_{j=1}^{∞} supp ϕ_j = [1, ∞), which leads to a contradiction. Thus, {ϕ_j}_{j∈N} does not converge in D(R).

Convention. In what follows, we will often identify a function f ∈ C₀∞(Ω) with its extension by zero outside its support, which makes such an extension belong to C₀∞(Rⁿ).

Further Notes for Chap. 1. The concept of weak derivative goes back to the pioneering work of the Soviet mathematician Sergei Lvovich Sobolev (1908–1989). Although we shall later extend the scope of taking derivatives in a generalized sense to the larger class of distributions, a significant portion of partial differential equations may be developed solely on the basis of the notion of weak derivative. For example, this is the approach adopted in [12], where distributions are avoided altogether. A good reference for the topological aspects that are most pertinent to the spaces of test functions considered here is [70], though there are many other monographs dealing with these issues. The interested reader may also consult [10, 59, 71], and the references therein.
1.4 Additional Exercises for Chap. 1

Exercise 1.23. Given ϕ₀ ∈ C²(R), ϕ₁ ∈ C¹(R), and F ∈ C¹(R²), show that the function u : R² → R defined by

  u(x₁, x₂) := ½[ϕ₀(x₁ + x₂) + ϕ₀(x₁ − x₂)] + ½ ∫_{x₁−x₂}^{x₁+x₂} ϕ₁(t) dt
               − ½ ∫_0^{x₂} (∫_{x₁−(x₂−t)}^{x₁+(x₂−t)} F(ξ, t) dξ) dt, ∀ (x₁, x₂) ∈ R²,   (1.4.1)

is a classical solution of the problem

  u ∈ C²(R²),
  ∂₁²u − ∂₂²u = F in R²,
  u(·, 0) = ϕ₀ in R,
  (∂₂u)(·, 0) = ϕ₁ in R.   (1.4.2)

Exercise 1.24. Determine the values of a ∈ R for which the function f : R → R defined by f(x) := x if x ≥ a and f(x) := 0 if x < a, for each x ∈ R, has a weak derivative.
Exercise 1.25. Does the function f : R → R defined by f(x) := 1 if x ≥ 2 and f(x) := 0 if x < 2, for each x ∈ R, have a weak derivative?

Exercise 1.26. Let f : R² → R be defined by f(x, y) := H(x) + H(y) for each (x, y) ∈ R², where H is the Heaviside function from (1.2.16). Prove that for α = (1, 1) and β = (1, 0) the weak derivatives ∂^α f and ∂^{α+β} f exist, while the weak derivative ∂^β f does not exist.

Exercise 1.27. Compute the weak derivative of order one of f : (−1, 1) → R defined by f(x) := sgn(x)√|x| for every x ∈ (−1, 1), where

  sgn(x) := 1 if x > 0, sgn(x) := 0 if x = 0, and sgn(x) := −1 if x < 0.   (1.4.3)

Exercise 1.28. Let f : R² → R be the function defined by f(x, y) := x|y| for each (x, y) ∈ R². Prove that the weak derivative ∂₁²∂₂f exists, while the weak derivative ∂₁∂₂²f does not.

Exercise 1.29. Let f : R² → R be defined by f(x, y) := H(x) − sgn(y) for each (x, y) ∈ R², and let α = (α₁, α₂) ∈ N₀². Prove that ∂^α f exists in the weak sense if and only if α₁ ≥ 1 and α₂ ≥ 1.

Exercise 1.30. Let f : R → R be defined by f(x) := sin|x| for every x ∈ R. Does f′ exist in the weak sense? How about f″?

Exercise 1.31. Let ω, Ω be open subsets of Rⁿ with ω ⊆ Ω. Suppose f ∈ L¹loc(Ω) is such that ∂^α f exists in the weak sense in Ω for some α ∈ N₀ⁿ. Show that the weak derivative ∂^α(f|_ω) exists and equals (∂^α f)|_ω almost everywhere in ω.

Exercise 1.32. Let ε ∈ (0, 1) and consider the function f : Rⁿ → R defined by f(x) := |x|^{−ε} if x ∈ Rⁿ \ {0} and f(0) := 1. Prove that ∂_j f exists in the weak sense for each j ∈ {1, ..., n} if and only if n ≥ 2. Also compute the weak derivatives ∂_j f, j ∈ {1, ..., n}, in the case n ≥ 2.

Exercise 1.33. Assume that a, b ∈ R are such that a < b.

(a) Prove that if f ∈ L¹loc((a, b)) is such that the weak derivative f′ exists and is equal to zero almost everywhere on (a, b), then there exists some complex number c such that f = c almost everywhere on (a, b).

Hint: Fix ϕ₀ ∈ C₀∞((a, b)) with ∫_a^b ϕ₀(t) dt = 1. Then each ϕ ∈ C₀∞((a, b)) is of the form ϕ = ϕ₀ ∫_a^b ϕ(t) dt + ψ′ for some ψ ∈ C₀∞((a, b)).
(b) Assume that g ∈ L¹loc((a, b)) and x₀ ∈ (a, b). Prove that the function defined by f(x) := ∫_{x₀}^x g(t) dt, for x ∈ (a, b), belongs to L¹loc((a, b)) and has a weak derivative that is equal to g almost everywhere on (a, b).

(c) Let f ∈ L¹loc((a, b)) be such that the weak derivative f^{(k)} exists for some k ∈ N. Prove that the weak derivatives f^{(j)} exist for each j ∈ N with j < k.

Hint: Prove that if g(x) := ∫_{x₀}^x h(t) dt, where x₀ ∈ (a, b) is a fixed point and h := f^{(k)}, and if ϕ₀ is as in the hint for (a), then

  f^{(k−1)} = g − ∫_a^b g(t)ϕ₀(t) dt + (−1)^{k−1} ∫_a^b f(t)ϕ₀^{(k−1)}(t) dt.

(d) Let f ∈ L¹loc((a, b)) be such that f^{(k)} = 0 in the weak sense for some k ∈ N. Prove that there exist a₀, a₁, ..., a_{k−1} ∈ C such that f(x) = Σ_{j=0}^{k−1} a_j x^j for almost every x ∈ (a, b).

(e) If Ω is an open set in Rⁿ for n ≥ 2 and f ∈ L¹loc(Ω) is such that the weak derivative ∂^α f exists for some α ∈ N₀ⁿ, does it follow that ∂^β f exists in a weak sense for all β ≤ α?
x ∈ (a, b). (e) If Ω is an open set in Rn for n ≥ 2 and f ∈ L1loc (Ω) is such that the weak derivative ∂ α f exists for some α ∈ Nn0 , does it follow that ∂ β f exists in a weak sense for all β ≤ α? Exercise 1.34. Let θ ∈ C0∞ (Rn ) and m ∈ N. Prove that the sequence ϕj (x) := e−j j m θ(jx),
∀ x ∈ Rn ,
j ∈ N,
converges in D(Rn ). Exercise 1.35. Let θ ∈ C0∞ (Rn ), h ∈ Rn \ {0}, and set ϕj (x) := θ(x + jh),
∀ x ∈ Rn ,
E(Rn )
D(Rn )
j→∞
j→∞
j ∈ N.
Prove that ϕj −−−−→ 0. Is it true that ϕj −−−−→ 0? Exercise 1.36. Let θ ∈ C0∞ (Rn ) be not identically zero, and for each j ∈ N define 1 ϕj (x) := θ(jx), ∀ x ∈ Rn . j Prove that the sequence {ϕj }j∈N does not converge in D(Rn ). Exercise 1.37. Let θ ∈ C0∞ (Rn ), h ∈ Rn \ {0}. Prove that the sequence ϕj (x) := e−j θ(j(x − jh)),
∀ x ∈ Rn ,
j ∈ N,
converges in D(Rn ) if and only if θ(x) = 0 for all x ∈ Rn .
Exercise 1.38. Consider θ ∈ C₀∞(Rⁿ) not identically zero, and for each j ∈ N define

  ϕ_j(x) := (1/j) θ(x/j), ∀ x ∈ Rⁿ.

Does {ϕ_j}_{j∈N} converge in D(Rⁿ)? How about in E(Rⁿ)?

Exercise 1.39. Suppose that {ϕ_j}_{j∈N} is a sequence in C₀∞(Ω) with the property that lim_{j→∞} ∫_{Rⁿ} f(x)ϕ_j(x) dx = 0 for every f ∈ L¹loc(Ω). Is it true that ϕ_j → 0 in D(Rⁿ) as j → ∞?
Chapter 2
The Space D′(Ω) of Distributions

2.1 The Definition of Distributions
Building on the idea emerging in (1.3.1), we now make the following definition, which is central to all subsequent considerations.

Definition 2.1. A functional u : D(Ω) → C is called a distribution on Ω if u is linear and continuous.

By design, distributions are simply the elements of the dual space of the topological vector space D(Ω). Given a functional u : D(Ω) → C and a function ϕ ∈ C₀∞(Ω), we use the traditional notation ⟨u, ϕ⟩ in place of u(ϕ) (in particular, ⟨u, ϕ⟩ is a complex number).

While any linear and continuous functional is sequentially continuous, the converse is not always true. Nonetheless, for linear functionals on D(Ω), continuity is equivalent to sequential continuity. This remarkable property, itself a consequence of Theorem 13.5, is formally recorded below.

Fact 2.2. Let u : D(Ω) → C be a linear map. Then u is a distribution on Ω if and only if for every sequence {ϕ_j}_{j∈N} ⊂ C₀∞(Ω) with ϕ_j → ϕ in D(Ω) as j → ∞ for some ϕ ∈ C₀∞(Ω), we have lim_{j→∞} ⟨u, ϕ_j⟩ = ⟨u, ϕ⟩ (where the latter limit is considered in C).
Remark 2.3. In general, if X, Y are topological vector spaces and Λ : X → Y is a linear map, then Λ is sequentially continuous on X if and only if Λ is sequentially continuous at the zero vector 0 ∈ X. When combined with Fact 2.2, this shows that a linear map u : D(Ω) → C is a distribution on Ω if and only if lim_{j→∞} ⟨u, ϕ_j⟩ = 0 for every sequence {ϕ_j}_{j∈N} ⊂ C₀∞(Ω) with ϕ_j → 0 in D(Ω) as j → ∞.
Another important characterization of the continuity of complex-valued linear functionals defined on D(Ω) is given by the following proposition.

Proposition 2.4. Let u : D(Ω) → C be a linear map. Then u is a distribution if and only if for each compact set K ⊂ Ω there exist k ∈ N₀ and C ∈ (0, ∞) such that

  |⟨u, ϕ⟩| ≤ C sup_{x∈K, |α|≤k} |∂^α ϕ(x)| for all ϕ ∈ C₀∞(Ω) with supp ϕ ⊆ K.   (2.1.1)
Proof. Fix u : D(Ω) → C linear and suppose that for each compact set K ⊂ Ω there exist k ∈ N₀ and C ∈ (0, ∞) satisfying (2.1.1). To show that u is a distribution, let ϕ_j → 0 in D(Ω) as j → ∞. Then there exists a compact set K ⊆ Ω such that supp ϕ_j ⊆ K for all j ∈ N, and ∂^α ϕ_j → 0 uniformly on K as j → ∞ for every α ∈ N₀ⁿ. For this compact set K, by our hypotheses, there exist C > 0 and k ∈ N₀ such that (2.1.1) holds; hence

  |⟨u, ϕ_j⟩| ≤ C sup_{x∈K, |α|≤k} |∂^α ϕ_j(x)| → 0 as j → ∞,   (2.1.2)

which implies that ⟨u, ϕ_j⟩ → 0 as j → ∞. From this and Remark 2.3 it follows that u is a distribution on Ω.

To prove the converse implication, we reason by contradiction. Suppose that there exists a compact set K ⊆ Ω such that, for every j ∈ N, there exists a function ϕ_j ∈ C₀∞(Ω) with supp ϕ_j ⊆ K and

  |⟨u, ϕ_j⟩| > j sup_{x∈K, |α|≤j} |∂^α ϕ_j(x)|.   (2.1.3)

Define ψ_j := ϕ_j / ⟨u, ϕ_j⟩. Then for each j ∈ N we have ψ_j ∈ C₀∞(Ω), supp ψ_j ⊆ K, and ⟨u, ψ_j⟩ = 1. On the other hand, from (2.1.3) we see that

  sup_{x∈K, |α|≤j} |∂^α ψ_j(x)| < 1/j, ∀ j ∈ N.   (2.1.4)

Now let α ∈ N₀ⁿ be arbitrary. Then (2.1.4) implies that

  sup_{x∈K} |∂^α ψ_j| < 1/j whenever j ≥ |α|,   (2.1.5)

thus ψ_j → 0 in D(Ω) as j → ∞. Since u is a distribution on Ω, the latter implies lim_{j→∞} ⟨u, ψ_j⟩ = 0, contradicting the fact that ⟨u, ψ_j⟩ = 1 for each j ∈ N. This completes the proof of the proposition. ∎
Remark 2.5. Recall that for each compact set K ⊂ Ω we denote by D_K(Ω) the vector space of functions in C∞(Ω) with support contained in K, endowed with the topology inherited from E(Ω). A closer look at the topology on D_K(Ω) reveals that Proposition 2.4 may be rephrased as saying that a linear map u : D(Ω) → C is a distribution on Ω if and only if u|_{D_K(Ω)} is continuous for each compact set K ⊂ Ω. In fact, the topology on D(Ω) is the smallest topology on C₀∞(Ω) with this property.

Definition 2.6. Let u be a distribution on Ω. If the nonnegative integer k intervening in (2.1.1) may be taken to be independent of K, then u is called a distribution of finite order. If u is a distribution of finite order, then the order of u is, by definition, the smallest k ∈ N₀ satisfying condition (2.1.1) for every compact set K ⊂ Ω.

Here are a few important examples of distributions.

Example 2.7. For each f ∈ L¹loc(Ω), define the functional u_f : D(Ω) → C by

  u_f(ϕ) := ∫_Ω f(x)ϕ(x) dx, ∀ ϕ ∈ C₀∞(Ω).   (2.1.6)

Then clearly u_f is linear and, if K is an arbitrary compact set contained in Ω, then

  |u_f(ϕ)| ≤ (∫_K |f| dx) sup_{x∈K} |ϕ(x)| = C sup_{x∈K} |ϕ(x)| for all ϕ ∈ C₀∞(Ω) with supp ϕ ⊆ K.   (2.1.7)

Hence, by Proposition 2.4, u_f is a distribution on Ω. Moreover, (2.1.7) also shows that u_f is a distribution of order 0.

Remark 2.8. A distribution with an action defined as in (2.1.6) will be referred to as a distribution of function type. To simplify notation, if f ∈ L¹loc(Ω) we will often simply use f (in place of u_f) for the distribution of function type defined as in (2.1.6). This is justified by the fact that the injection

  ι : L¹loc(Ω) → {u : D(Ω) → C : u linear and continuous}, ι(f) := u_f for each f ∈ L¹loc(Ω),   (2.1.8)

is one-to-one. Indeed, if ι(f) = 0 for some f ∈ L¹loc(Ω), then ∫_Ω fϕ dx = 0 for all functions ϕ ∈ C₀∞(Ω), which in turn, based on Theorem 1.3, implies that f = 0 almost everywhere in Ω. Since ι is also linear, the desired conclusion follows.

Example 2.9. We have ln|x| ∈ L¹loc(Rⁿ); thus ln|x| is a distribution in Rⁿ. To see that ln|x| is indeed locally integrable in Rⁿ, observe first that

  |ln t| ≤ (1/ε) max{t^ε, t^{−ε}} for all t > 0 and ε > 0.   (2.1.9)
CHAPTER 2. THE SPACE D (Ω) OF DISTRIBUTIONS
20
This is justified by starting with the elementary inequality $\ln t \le t$ for all $t>0$, then replacing $t$ by $t^{\pm\varepsilon}$. In turn, for each $R>0$ and $\varepsilon\in(0,1)$, estimate (2.1.9) gives
$$\int_{B(0,R)} \big|\ln|x|\big|\,dx \le \varepsilon^{-1}\int_{B(0,R)} \max\big\{|x|^\varepsilon, |x|^{-\varepsilon}\big\}\,dx < \infty. \qquad (2.1.10)$$

Let us now revisit the functional from (1.3.1).

Example 2.10. For a given $f\in L^1_{loc}(\Omega)$ and multi-index $\alpha\in\mathbb{N}_0^n$, consider the functional $g_\alpha:\mathcal{D}(\Omega)\to\mathbb{C}$ defined by
$$g_\alpha(\varphi) := (-1)^{|\alpha|}\int_\Omega f\,\partial^\alpha\varphi\,dx \quad \text{for each } \varphi\in C_0^\infty(\Omega). \qquad (2.1.11)$$
Clearly this is a linear mapping. Moreover, if $\varphi_j \xrightarrow[j\to\infty]{\mathcal{D}(\Omega)} 0$, then there exists a compact subset $K$ of $\Omega$ such that $\operatorname{supp}\varphi_j\subseteq K$ for each $j\in\mathbb{N}$, hence
$$|g_\alpha(\varphi_j)| \le \int_K |f|\,|\partial^\alpha\varphi_j|\,dx \le \|\partial^\alpha\varphi_j\|_{L^\infty(K)}\, \|f\|_{L^1(K)} \to 0 \ \text{ as } j\to\infty. \qquad (2.1.12)$$
By invoking Remark 2.3, it follows that $g_\alpha$ is a distribution in $\Omega$.

Next we consider a set of examples of distributions that are not of function type. As a preamble, observe that $f(x) := \frac{1}{x}$ for $x\ne 0$ is not locally integrable on $\mathbb{R}$. Nonetheless, it is possible to associate to this function a certain distribution, not as in (2.1.6), but in the specific manner described below.

Example 2.11. Consider the mapping $\mathrm{P.V.}\frac{1}{x} : \mathcal{D}(\mathbb{R})\to\mathbb{C}$ defined by
$$\Big(\mathrm{P.V.}\frac{1}{x}\Big)(\varphi) := \lim_{\varepsilon\to 0^+}\int_{|x|\ge\varepsilon} \frac{\varphi(x)}{x}\,dx, \quad \forall\,\varphi\in C_0^\infty(\mathbb{R}). \qquad (2.1.13)$$
We claim that $\mathrm{P.V.}\frac{1}{x}$ is a distribution of order one in $\mathbb{R}$. First, we prove that the mapping (2.1.13) is well-defined. Let $\varphi\in C_0^\infty(\mathbb{R})$ and suppose $R>0$ is such that $\operatorname{supp}\varphi\subset(-R,R)$. Fix $\varepsilon\in(0,R)$ and observe that since $\frac{1}{x}$ is odd on $\mathbb{R}\setminus\{0\}$, we have
$$\int_{|x|\ge\varepsilon}\frac{\varphi(x)}{x}\,dx = \int_{\varepsilon\le|x|\le R}\frac{\varphi(x)}{x}\,dx = \int_{\varepsilon\le|x|\le R}\frac{\varphi(x)-\varphi(0)}{x}\,dx. \qquad (2.1.14)$$
In addition, $\big|\frac{\varphi(x)-\varphi(0)}{x}\big| \le \sup_{|y|\le R}|\varphi'(y)|$ for each $x\in\mathbb{R}\setminus\{0\}$. Thus, Lebesgue's dominated convergence theorem implies that $\lim_{\varepsilon\to 0^+}\int_{|x|\ge\varepsilon}\frac{\varphi(x)}{x}\,dx$ exists and is equal to $\int_{|x|\le R}\frac{\varphi(x)-\varphi(0)}{x}\,dx$. This shows that the mapping $\mathrm{P.V.}\frac{1}{x}$ is well-defined, and
from the definition it is clear that $\mathrm{P.V.}\frac{1}{x}$ is linear. Furthermore, it is implicit in the argument above that
$$\Big|\Big(\mathrm{P.V.}\frac{1}{x}\Big)(\varphi)\Big| \le 2R \sup_{|x|\le R}|\varphi'(x)|, \quad \forall\,\varphi\in C_0^\infty\big((-R,R)\big). \qquad (2.1.15)$$
In concert with Proposition 2.4 this shows that $\mathrm{P.V.}\frac{1}{x}$ is a distribution in $\mathbb{R}$ of order at most one. We are left with showing that $\mathrm{P.V.}\frac{1}{x}$ does not have order $0$. Consider the compact set $K=[0,1]$ and for each $j\in\mathbb{N}$ let $\varphi_j\in C_0^\infty\big((0,1)\big)$ be such that $0\le\varphi_j\le 1$ and $\varphi_j=1$ on $\big[\frac{1}{j+2},\,1-\frac{1}{j+2}\big]$. Then, from the very definition of $\mathrm{P.V.}\frac{1}{x}$ and the fact that $\varphi_j$ vanishes near zero,
$$\Big\langle \mathrm{P.V.}\frac{1}{x},\,\varphi_j\Big\rangle = \int_0^1 \frac{\varphi_j(x)}{x}\,dx \ge \int_{\frac{1}{j+2}}^{1-\frac{1}{j+2}} \frac{1}{x}\,dx = \ln(j+1), \quad \forall\,j\in\mathbb{N}. \qquad (2.1.16)$$
Since $\sup_{x\in K}|\varphi_j(x)|\le 1$ and $\lim_{j\to\infty}\ln(j+1)=\infty$, the inequality in (2.1.16) shows that there is no constant $C\in(0,\infty)$ such that $\big|\big\langle \mathrm{P.V.}\frac{1}{x},\varphi\big\rangle\big| \le C\sup_{x\in K}|\varphi(x)|$ for all $\varphi\in C_0^\infty(\mathbb{R})$ with $\operatorname{supp}\varphi\subseteq K$. This proves that the distribution $\mathrm{P.V.}\frac{1}{x}$ does not have order $0$.

Remark 2.12. An inspection of the proof in Example 2.11 shows that
$$\Big\langle \mathrm{P.V.}\frac{1}{x},\,\varphi\Big\rangle = \int_{|x|\le 1}\frac{\varphi(x)-\varphi(0)}{x}\,dx + \int_{|x|>1}\frac{\varphi(x)}{x}\,dx, \quad \forall\,\varphi\in C_0^\infty(\mathbb{R}). \qquad (2.1.17)$$

Example 2.13. An important distribution is the Dirac distribution $\delta$ defined by
$$\delta(\varphi) := \varphi(0), \quad \forall\,\varphi\in C_0^\infty(\mathbb{R}^n). \qquad (2.1.18)$$
It is not difficult to check that $\delta$ is a distribution in $\mathbb{R}^n$ of order $0$. A natural question to ask is whether $\delta$ is a distribution of function type. To answer this question, suppose there exists $f\in L^1_{loc}(\mathbb{R}^n)$ such that $\varphi(0) = \langle\delta,\varphi\rangle = \int_{\mathbb{R}^n} f\varphi\,dx$ for all $\varphi\in C_0^\infty(\mathbb{R}^n)$. The latter implies that $\int_{\mathbb{R}^n} f\varphi\,dx = 0$ for every $\varphi\in C_0^\infty(\mathbb{R}^n)$ satisfying $\operatorname{supp}\varphi\cap\{0\}=\emptyset$. Hence, by Theorem 1.3 we have $f=0$ almost everywhere on $\mathbb{R}^n\setminus\{0\}$, thus $f=0$ almost everywhere in $\mathbb{R}^n$. Consequently,
$$\varphi(0) = \langle\delta,\varphi\rangle = \int_{\mathbb{R}^n} f\varphi\,dx = 0, \quad \forall\,\varphi\in C_0^\infty(\mathbb{R}^n), \qquad (2.1.19)$$
which is false. This proves that the Dirac distribution is not of function type.

Example 2.14. The Dirac distribution $\delta$ is sometimes referred to as having "mass at zero" since, for each $x_0\in\mathbb{R}^n$, one may similarly define the map $\delta_{x_0}: C_0^\infty(\mathbb{R}^n)\to\mathbb{C}$ by setting $\delta_{x_0}(\varphi) := \varphi(x_0)$. Then it is easy to see that $\delta_{x_0}$ is a distribution in $\mathbb{R}^n$, called the Dirac distribution with mass at $x_0$, and the convention we make is to drop the subscript $x_0$ when $x_0 = 0\in\mathbb{R}^n$.
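Although no numerics appear in the text, the cancellation that makes the limit in (2.1.13) exist can be observed computationally. The following sketch (not from the book; the off-center bump test function and the midpoint quadrature are illustrative choices) compares the symmetric truncations of the principal value integral with the regularized form from (2.1.14).

```python
import math

def bump(t):
    """Smooth bump supported in (-1, 1)."""
    return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1.0 else 0.0

def psi(x):
    # Hypothetical test function: a bump centered at 0.3, so psi is not even
    # and the principal value below is not trivially zero; supp psi = [-0.7, 1.3].
    return bump(x - 0.3)

def midpoint(f, a, b, n=20000):
    """Midpoint-rule quadrature; adequate for these smooth integrands."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def pv_truncated(eps):
    """Symmetric truncation from (2.1.13): integrate psi(x)/x over eps <= |x|."""
    return (midpoint(lambda x: psi(x) / x, -0.7, -eps)
            + midpoint(lambda x: psi(x) / x, eps, 1.3))

def pv_regularized():
    """Equivalent form from (2.1.14) with R = 1.3: integrand (psi(x)-psi(0))/x."""
    return midpoint(lambda x: (psi(x) - psi(0.0)) / x, -1.3, 1.3)

# The truncations stabilize as eps -> 0+ and agree with the regularized form.
approximations = [pv_truncated(eps) for eps in (0.1, 0.01, 0.001)]
limit = pv_regularized()
```

Had the test function been even, every truncation would vanish by oddness of $1/x$; the off-center bump is chosen precisely so that the limit is a nontrivial number.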
Example 2.15. Let $\mu$ be either a complex Borel measure on $\Omega$, or a positive Borel measure on $\Omega$ that is locally finite (i.e., satisfies $\mu(K)<\infty$ for every compact $K\subset\Omega$). Consider
$$\mu:\mathcal{D}(\Omega)\to\mathbb{C}, \quad \mu(\varphi) := \int_\Omega \varphi\,d\mu, \quad \forall\,\varphi\in C_0^\infty(\Omega). \qquad (2.1.20)$$
The mapping in (2.1.20) is well-defined, linear, and if $K$ is an arbitrary compact set in $\Omega$, then
$$|\mu(\varphi)| \le |\mu|(K)\sup_{x\in K}|\varphi(x)|, \quad \forall\,\varphi\in C_0^\infty(\Omega) \text{ with } \operatorname{supp}\varphi\subseteq K. \qquad (2.1.21)$$
By Proposition 2.4 we conclude that $\mu$ induces a distribution in $\Omega$. The estimate in (2.1.21) also shows that $\mu$ is a distribution of order zero.

In the last part of this section we discuss the validity of a converse to the implication in Example 2.15.

Proposition 2.16. Let $u$ be a distribution in $\Omega$ of order zero. Then the distribution $u$ extends uniquely to a linear map $\Lambda_u: C_0^0(\Omega)\to\mathbb{C}$ that is locally bounded, in the following sense: for each compact set $K\subset\Omega$ there exists $C_K\in(0,\infty)$ with the property that
$$|\Lambda_u(\varphi)| \le C_K\sup_{x\in K}|\varphi(x)|, \quad \forall\,\varphi\in C_0^0(\Omega) \text{ with } \operatorname{supp}\varphi\subseteq K. \qquad (2.1.22)$$
In addition, the functional $\Lambda_u$ satisfies the following properties.

(i) Let $\{K_j\}_{j\in\mathbb{N}}$ be a sequence of compact subsets of $\Omega$ satisfying $K_j\subset \mathring{K}_{j+1}$ for $j\in\mathbb{N}$ and $\Omega=\bigcup_{j=1}^\infty K_j$. Then there exists a sequence of complex regular Borel measures $\mu_j$ on $K_j$, $j\in\mathbb{N}$, with the following properties:

(a) $\mu_j(E) = \mu_\ell(E)$ for each $\ell\in\mathbb{N}$, each Borel set $E\subset\mathring{K}_\ell$, and each $j\ge\ell$;

(b) for each $j\in\mathbb{N}$ one has
$$\Lambda_u(\varphi) = \int_{K_j}\varphi\,d\mu_j, \quad \forall\,\varphi\in C_0^0(\Omega) \text{ with } \operatorname{supp}\varphi\subset K_j. \qquad (2.1.23)$$

(ii) There exist two Radon measures $\mu_1,\mu_2$, taking Borel sets from $\Omega$ into $[0,\infty]$ (i.e., measures satisfying the regularity properties (ii)–(iv) in Theorem 13.17), such that
$$\operatorname{Re}\Lambda_u(\varphi) = \int_\Omega\varphi\,d\mu_1 - \int_\Omega\varphi\,d\mu_2, \quad \forall\,\varphi\in C_0^0(\Omega) \text{ real-valued}. \qquad (2.1.24)$$
Furthermore, a similar conclusion is valid for $\operatorname{Im}\Lambda_u$.
Proof. Let $u$ be a distribution in $\Omega$ of order zero and let $K$ be a compact set contained in $\Omega$. Fix $\varphi\in C_0^0(\Omega)$ such that $\operatorname{supp}\varphi\subseteq K$. We claim that there exists a sequence of functions $\phi_j\in C_0^\infty(\Omega)$, $j\in\mathbb{N}$, supported in a fixed compact neighborhood $K_0$ of $K$, such that
$$\lim_{j\to\infty}\sup_{x\in K_0}|\phi_j(x)-\varphi(x)| = 0. \qquad (2.1.25)$$
To justify this claim, consider $\phi_\varepsilon$ as in (1.2.3)–(1.2.5) and define
$$\varphi_\varepsilon(x) := \int_{\mathbb{R}^n}\varphi(y)\,\phi_\varepsilon(x-y)\,dy, \quad \forall\,x\in\mathbb{R}^n. \qquad (2.1.26)$$
The same type of reasoning as in (1.2.9) (using that a continuous function on a compact set is uniformly continuous, as a replacement for the use of Lebesgue's differentiation theorem) then shows that $\phi_j := \varphi_{1/j}$ do the job in (2.1.25).

Because $u$ is a distribution of order zero, there exists $C=C(K_0)\in(0,\infty)$ such that (2.1.1) holds with $k=0$. The latter combined with (2.1.25) implies
$$|\langle u,\phi_j\rangle - \langle u,\phi_k\rangle| \le C\sup_{x\in K_0}|\phi_j(x)-\phi_k(x)| \xrightarrow[j,k\to\infty]{} 0. \qquad (2.1.27)$$
Hence, the sequence of complex numbers $\{\langle u,\phi_j\rangle\}_{j\in\mathbb{N}}$ is Cauchy, thus convergent in $\mathbb{C}$, which allows us to define
$$\Lambda_u(\varphi) := \lim_{j\to\infty}\langle u,\phi_j\rangle. \qquad (2.1.28)$$
Proving that this definition is independent of the selection of $\{\phi_j\}_{j\in\mathbb{N}}$ is done by interlacing sequences. Specifically, if $\{\tilde\phi_j\}_{j\in\mathbb{N}}$ is another sequence of functions in $C_0^\infty(\Omega)$, supported in $K_0$ and satisfying (2.1.25), then by considering the sequence $\{\psi_j\}_{j\ge 2}$ defined by $\psi_{2j+1} := \phi_j$ and $\psi_{2j} := \tilde\phi_j$ for every $j\in\mathbb{N}$, we obtain that $\{\langle u,\psi_j\rangle\}_{j\ge 2}$ is convergent, thus its subsequences $\{\langle u,\phi_j\rangle\}_{j\in\mathbb{N}}$ and $\{\langle u,\tilde\phi_j\rangle\}_{j\in\mathbb{N}}$ are also convergent and have the same limit. In turn, the independence of the definition of $\Lambda_u(\varphi)$ of the approximating sequence $\{\phi_j\}_{j\in\mathbb{N}}$ readily implies that $\Lambda_u: C_0^0(\Omega)\to\mathbb{C}$ is a linear mapping. In addition, the fact that $u$ is a distribution of order zero implies the existence of a finite constant $C>0$ such that
$$|\langle u,\phi_j\rangle| \le C\sup_{x\in K_0}|\phi_j(x)|, \quad \forall\,j\in\mathbb{N}. \qquad (2.1.29)$$
Taking the limit as $j\to\infty$ in (2.1.29) gives $|\Lambda_u(\varphi)| \le C\sup_{x\in K}|\varphi(x)|$ on account of (2.1.28).

Let us now show that the linear extension $\Lambda_u$ of $u$ to $C_0^0(\Omega)$ is unique in the class of locally bounded mappings. By linearity, this comes down to proving that if $\Lambda: C_0^0(\Omega)\to\mathbb{C}$ is a linear, locally bounded mapping that vanishes on $C_0^\infty(\Omega)$, then $\Lambda$ is identically zero on $C_0^0(\Omega)$. To this end, pick an arbitrary $\varphi\in C_0^0(\Omega)$, set $K := \operatorname{supp}\varphi$ and, as before, let $\phi_j\in C_0^\infty(\Omega)$, $j\in\mathbb{N}$,
be a sequence of functions supported in a fixed compact neighborhood $K_0$ of $K$ such that (2.1.25) holds. Then
$$|\Lambda(\varphi)| = |\Lambda(\varphi-\phi_j)| \le C\sup_{x\in K_0}|\varphi(x)-\phi_j(x)| \xrightarrow[j\to\infty]{} 0, \qquad (2.1.30)$$
proving that $\Lambda(\varphi)=0$, as wanted.

Moving on, fix a sequence of compacts as in the hypothesis of (i) (e.g., the sequence from (1.2.13) will do). Based on what we have proved so far and Riesz's representation theorem for complex measures (see Theorem 13.18), it follows that there exists a sequence of complex regular Borel measures $\mu_j$ on $K_j$, $j\in\mathbb{N}$, such that (2.1.23) holds.

Now fix $\ell\in\mathbb{N}$ and a Borel set $E\subset\mathring{K}_\ell$. If $j\ge\ell$, since the measures $\mu_j,\mu_\ell$ are regular, in order to prove that $\mu_j(E)=\mu_\ell(E)$ it suffices to show that $\mu_j(F)=\mu_\ell(F)$ for every compact set $F\subseteq E$. Fix such a compact set and choose a sequence of compact sets $\{F_k\}_{k\in\mathbb{N}}$ contained in $\mathring{K}_\ell$, with the property that $F_{k+1}\subset\mathring{F}_k$ for $k\in\mathbb{N}$ and $\bigcap_{k=1}^\infty F_k = F$. Applying Urysohn's lemma (see Proposition 13.20) we obtain a sequence $\varphi_k\in C_0^0(\Omega)$ with $0\le\varphi_k\le 1$, $\operatorname{supp}\varphi_k\subseteq F_k$, and $\varphi_k=1$ on $F$, for each $k\in\mathbb{N}$. Then $\lim_{k\to\infty}\varphi_k(x)=\chi_F(x)$ for $x\in\Omega$ and we may write
$$\mu_j(F) = \int_{K_j}\chi_F\,d\mu_j = \lim_{k\to\infty}\int_{K_j}\varphi_k\,d\mu_j = \lim_{k\to\infty}\Lambda_u(\varphi_k) = \lim_{k\to\infty}\int_{K_\ell}\varphi_k\,d\mu_\ell = \int_{K_\ell}\chi_F\,d\mu_\ell = \mu_\ell(F). \qquad (2.1.31)$$
This completes the proof of (i). Finally, the claim in (ii) follows by invoking Riesz's representation theorem for locally bounded functionals (cf. Theorem 13.19). $\square$

Among other things, Proposition 2.16 is a useful ingredient in the following representation theorem for positive distributions.

Theorem 2.17. Let $u$ be a distribution in $\Omega$ such that $\langle u,\varphi\rangle\ge 0$ for every nonnegative function $\varphi\in C_0^\infty(\Omega)$. Then there exists a unique positive regular Borel measure $\mu$ on $\Omega$ such that
$$\langle u,\varphi\rangle = \int_\Omega\varphi\,d\mu, \quad \forall\,\varphi\in C_0^\infty(\Omega). \qquad (2.1.32)$$
Proof. First we prove that $u$ has order zero. To do so, let $K$ be a compact set in $\Omega$ and fix $\psi\in C_0^\infty(\Omega)$ with $\psi\ge 0$ and $\psi\equiv 1$ on $K$. Then, if $\varphi\in C_0^\infty(\Omega)$ has $\operatorname{supp}\varphi\subseteq K$ and is real-valued, it follows that
$$\Big(\sup_{x\in K}|\varphi(x)|\Big)\psi \pm \varphi \ge 0, \qquad (2.1.33)$$
hence, by the positivity of $u$, we have $\big(\sup_{x\in K}|\varphi(x)|\big)\langle u,\psi\rangle \pm \langle u,\varphi\rangle \ge 0$. Thus,
$$|\langle u,\varphi\rangle| \le \langle u,\psi\rangle\sup_{x\in K}|\varphi(x)|, \quad \text{for all real-valued } \varphi\in C_0^\infty(\Omega) \text{ with } \operatorname{supp}\varphi\subseteq K. \qquad (2.1.34)$$
If now $\varphi\in C_0^\infty(\Omega)$ with $\operatorname{supp}\varphi\subseteq K$ is complex-valued, say $\varphi=\varphi_1+i\varphi_2$ with $\varphi_1,\varphi_2$ real-valued, then by using (2.1.34) we may write
$$|\langle u,\varphi\rangle| = \sqrt{|\langle u,\varphi_1\rangle|^2 + |\langle u,\varphi_2\rangle|^2} \le \langle u,\psi\rangle\Big(\sup_{x\in K}|\varphi_1(x)| + \sup_{x\in K}|\varphi_2(x)|\Big) \le 2\,\langle u,\psi\rangle\sup_{x\in K}|\varphi(x)|. \qquad (2.1.35)$$
The fact that the distribution $u$ has order zero now follows from (2.1.35).

Having established this, we may apply Proposition 2.16 to conclude that $u$ may be uniquely extended to a linear map $\Lambda_u: C_0^0(\Omega)\to\mathbb{C}$ that is locally bounded. We next propose to show that this linear map is positive. In this respect, we note that if $\varphi\ge 0$ then the functions $\{\phi_j\}_{j\in\mathbb{N}}$ used in the proof of Proposition 2.16 (which are constructed by convolving $\varphi$ with a nonnegative mollifier) may also be taken to be nonnegative. When combined with (2.1.28) and the fact that $u$ is positive, this shows that the extension $\Lambda_u$ of $u$ to $C_0^0(\Omega)$ as defined in (2.1.28) is a positive functional. Consequently, Riesz's representation theorem for positive functionals (see Theorem 13.17) may be invoked to conclude that there exists a unique positive regular Borel measure $\mu$ on $\Omega$ such that
$$\Lambda_u(\varphi) = \int_\Omega\varphi\,d\mu, \quad \forall\,\varphi\in C_0^0(\Omega). \qquad (2.1.36)$$
Now (2.1.32) follows by specializing (2.1.36) to $\varphi\in C_0^\infty(\Omega)$. This finishes the proof of the theorem. $\square$
2.2 The Topological Vector Space $\mathcal{D}'(\Omega)$
The space of distributions in $\Omega$, endowed with the natural addition and scalar multiplication of linear mappings, becomes a vector space over $\mathbb{C}$. Indeed, if $u_1,u_2,u$ are distributions in $\Omega$ and $\lambda_1,\lambda_2,\lambda\in\mathbb{C}$, we define $u_1+u_2:\mathcal{D}(\Omega)\to\mathbb{C}$ and $\lambda u:\mathcal{D}(\Omega)\to\mathbb{C}$ by setting
$$(u_1+u_2)(\varphi) := \langle u_1,\varphi\rangle + \langle u_2,\varphi\rangle, \quad (\lambda u)(\varphi) := \lambda\langle u,\varphi\rangle, \quad \forall\,\varphi\in C_0^\infty(\Omega). \qquad (2.2.1)$$
It is not difficult to check that $u_1+u_2$ and $\lambda u$ are distributions in $\Omega$.

The topology we consider on the vector space of distributions in $\Omega$ is the weak$^*$-topology induced by $\mathcal{D}(\Omega)$ (for details see Sect. 13.1), which makes it a
topological vector space over $\mathbb{C}$; we denote this topological vector space by $\mathcal{D}'(\Omega)$. In particular, $\mathcal{D}'(\Omega)$ is a locally convex topological vector space over $\mathbb{C}$. We record next an important equivalent characterization of convergence in $\mathcal{D}'(\Omega)$.

Fact 2.18. A sequence $\{u_j\}_{j\in\mathbb{N}}$ in $\mathcal{D}'(\Omega)$ converges to some $u\in\mathcal{D}'(\Omega)$ as $j\to\infty$ in $\mathcal{D}'(\Omega)$ if and only if $\langle u_j,\varphi\rangle \xrightarrow[j\to\infty]{} \langle u,\varphi\rangle$ for every $\varphi\in C_0^\infty(\Omega)$, in which case we use the notation $u_j \xrightarrow[j\to\infty]{\mathcal{D}'(\Omega)} u$.

Moreover, the topological space $\mathcal{D}'(\Omega)$ is complete, in the following sense. If the sequence $\{u_j\}_{j\in\mathbb{N}}\subset\mathcal{D}'(\Omega)$ is such that $\lim_{j\to\infty}\langle u_j,\varphi\rangle$ exists (in $\mathbb{C}$) for every $\varphi\in C_0^\infty(\Omega)$, then the functional $u:\mathcal{D}(\Omega)\to\mathbb{C}$ defined by $u(\varphi) := \lim_{j\to\infty}\langle u_j,\varphi\rangle$ for every $\varphi\in C_0^\infty(\Omega)$ is a distribution in $\Omega$.

Note that from Fact 2.18 it is easy to see that if a sequence $\{u_j\}_{j\in\mathbb{N}}$ in $\mathcal{D}'(\Omega)$ is convergent, then its limit is unique. Indeed, if such a sequence had two limits, say $u,v\in\mathcal{D}'(\Omega)$, then for each $\varphi\in C_0^\infty(\Omega)$ the sequence of numbers $\{\langle u_j,\varphi\rangle\}_{j\in\mathbb{N}}$ would converge to both $\langle u,\varphi\rangle$ and $\langle v,\varphi\rangle$, thus $\langle u,\varphi\rangle = \langle v,\varphi\rangle$. Hence, $u=v$.

Remark 2.19. Assume that we are given $u\in\mathcal{D}'(\Omega)$ and $u_\varepsilon\in\mathcal{D}'(\Omega)$ for each $\varepsilon\in(0,\infty)$. We make the convention that $u_\varepsilon \xrightarrow[\varepsilon\to 0^+]{\mathcal{D}'(\Omega)} u$ is understood in the sense that for every sequence of positive numbers $\{\varepsilon_j\}_{j\in\mathbb{N}}$ satisfying $\lim_{j\to\infty}\varepsilon_j = 0$ we have $u_{\varepsilon_j} \xrightarrow[j\to\infty]{\mathcal{D}'(\Omega)} u$.
Example 2.20. Let $\phi$ be as in (1.2.3) and recall the sequence of functions $\{\phi_j\}_{j\in\mathbb{N}}$ from (1.3.7). Interpreting each $\phi_j\in L^1_{loc}(\mathbb{R}^n)$ as a distribution in $\mathbb{R}^n$, for each function $\varphi\in C_0^\infty(\mathbb{R}^n)$ we have
$$\langle\phi_j,\varphi\rangle = \int_{\mathbb{R}^n}\phi_j(x)\varphi(x)\,dx = \int_{\mathbb{R}^n}\phi(y)\,\varphi(y/j)\,dy, \quad \forall\,j\in\mathbb{N}. \qquad (2.2.2)$$
Thus, by Lebesgue's dominated convergence theorem (cf. Theorem 13.12),
$$\langle\phi_j,\varphi\rangle = \int_{\mathbb{R}^n}\phi(y)\,\varphi(y/j)\,dy \xrightarrow[j\to\infty]{} \varphi(0)\int_{\mathbb{R}^n}\phi(y)\,dy = \varphi(0) = \langle\delta,\varphi\rangle. \qquad (2.2.3)$$
This proves that $\phi_j \xrightarrow[j\to\infty]{\mathcal{D}'(\mathbb{R}^n)} \delta$.
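The convergence in Example 2.20 can also be observed numerically. The sketch below is not from the book: it assumes the one-dimensional scaling $\phi_j(x) = j\,\phi(jx)$ with a normalized bump standing in for $\phi$, and pairs $\phi_j$ against the arbitrary smooth test function $\varphi = \cos$, for which $\varphi(0)=1$.

```python
import math

def bump(t):
    """Smooth bump supported in (-1, 1), a stand-in for phi from (1.2.3)."""
    return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1.0 else 0.0

def midpoint(f, a, b, n=4000):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

mass = midpoint(bump, -1.0, 1.0)
phi = lambda t: bump(t) / mass          # normalized so that the integral of phi is 1

def pairing(j, g):
    """<phi_j, g> with phi_j(x) = j * phi(j x)  (the n = 1 case of the scaling)."""
    return midpoint(lambda x: j * phi(j * x) * g(x), -1.0 / j, 1.0 / j)

g = lambda x: math.cos(x)               # arbitrary smooth test function, g(0) = 1
values = [pairing(j, g) for j in (1, 4, 16, 64)]   # approaches g(0) = 1
```

The pairings concentrate the mass of $\phi_j$ at the origin, so the computed values drift toward $g(0)$ as $j$ grows, mirroring (2.2.3).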
Exercise 2.21. Prove that if $p\in[1,\infty]$ and a sequence of functions $\{f_j\}_{j\in\mathbb{N}}$ from $L^p(\Omega)$ converges in $L^p(\Omega)$ to some $f\in L^p(\Omega)$, then $f_j \xrightarrow[j\to\infty]{\mathcal{D}'(\Omega)} f$.

Hint: Use Hölder's inequality.
Exercise 2.22. (a) Assume that $f\in L^1(\mathbb{R}^n)$ and for each $\varepsilon>0$ define $f_\varepsilon(x) := \varepsilon^{-n}f(x/\varepsilon)$ for $x\in\mathbb{R}^n$. Prove that for each $g\in L^\infty(\mathbb{R}^n)$ that is continuous at $0\in\mathbb{R}^n$ we have
$$\int_{\mathbb{R}^n} f_\varepsilon(x)g(x)\,dx \xrightarrow[\varepsilon\to 0^+]{} \Big(\int_{\mathbb{R}^n} f(x)\,dx\Big)g(0). \qquad (2.2.4)$$
(b) Use part (a) to prove that if $f\in L^1(\mathbb{R}^n)$ is given and for each $j\in\mathbb{N}$ we define $f_j(x) := j^n f(jx)$ for each $x\in\mathbb{R}^n$, then each $f_j$ belongs to $L^1(\mathbb{R}^n)$ (hence $f_j\in\mathcal{D}'(\mathbb{R}^n)$) and
$$f_j \xrightarrow[j\to\infty]{\mathcal{D}'(\mathbb{R}^n)} c\,\delta \quad \text{where } c := \int_{\mathbb{R}^n} f(x)\,dx. \qquad (2.2.5)$$
Hint: To justify the claim in part (a), make a change of variables to write the integral $\int_{\mathbb{R}^n} f_\varepsilon(x)g(x)\,dx$ as $\int_{\mathbb{R}^n} f(y)g(\varepsilon y)\,dy$, then use Lebesgue's dominated convergence theorem.
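For a concrete instance of (2.2.4) (not taken from the text) one can even avoid quadrature: with $f$ the indicator of $[0,1]$ and $g(x) = 1/(1+x^2)$, the smeared integral has a closed form that visibly tends to $g(0)\int f = 1$.

```python
import math

# Worked instance of (2.2.4): take f = indicator of [0, 1] (so its integral is 1)
# and g(x) = 1/(1 + x^2), bounded and continuous at 0 with g(0) = 1.  Then
# f_eps(x) = 1/eps on [0, eps] and 0 elsewhere, so
#     integral of f_eps * g = (1/eps) * int_0^eps dx/(1+x^2) = arctan(eps)/eps.
def smeared(eps):
    return math.atan(eps) / eps

values = [smeared(eps) for eps in (1.0, 0.1, 0.01)]   # tends to g(0) = 1
```

The Taylor expansion $\arctan\varepsilon/\varepsilon = 1 - \varepsilon^2/3 + O(\varepsilon^4)$ makes the rate of convergence explicit for this particular pair $(f,g)$.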
2.3 Multiplication of a Distribution with a $C^\infty$ Function
The issue we discuss in this section is the definition of the multiplication of a distribution $u\in\mathcal{D}'(\Omega)$ with a smooth function $a\in C^\infty(\Omega)$. First we consider the case when $u$ is of function type, i.e., $u=u_f$ for some $f\in L^1_{loc}(\Omega)$. In this particular case $af\in L^1_{loc}(\Omega)$; thus it defines a distribution $u_{af}$ on $\Omega$ and
$$\langle u_{af},\varphi\rangle = \int_\Omega (af)\varphi\,dx = \int_\Omega f(a\varphi)\,dx = \langle f, a\varphi\rangle, \quad \forall\,\varphi\in C_0^\infty(\Omega). \qquad (2.3.1)$$
In the general case, this suggests defining $au$ as in the following proposition.

Proposition 2.23. Let $u\in\mathcal{D}'(\Omega)$ and $a\in C^\infty(\Omega)$. Then the mapping $au:\mathcal{D}(\Omega)\to\mathbb{C}$ defined by
$$(au)(\varphi) := \langle u, a\varphi\rangle, \quad \forall\,\varphi\in C_0^\infty(\Omega), \qquad (2.3.2)$$
is linear and continuous, hence a distribution on $\Omega$.

Proof. The fact that $au$ is linear is immediate from its definition. To show that $au$ is also continuous we make use of Remark 2.3. To this end, consider a sequence $\varphi_j \xrightarrow[j\to\infty]{\mathcal{D}(\Omega)} 0$. By (2) in Exercise 1.20 we have $a\varphi_j \xrightarrow[j\to\infty]{\mathcal{D}(\Omega)} 0$. Since $u$ is a distribution on $\Omega$, the latter implies $\lim_{j\to\infty}\langle u, a\varphi_j\rangle = 0$. Moreover, from (2.3.2) we have $(au)(\varphi_j) = \langle u, a\varphi_j\rangle$ for each $j\in\mathbb{N}$. Hence, $\lim_{j\to\infty}(au)(\varphi_j) = 0$, proving that $au$ is continuous. $\square$
Exercise 2.24. Let $f\in L^1_{loc}(\Omega)$ and $a\in C^\infty(\Omega)$. With the notation from (2.1.6), prove that $au_f = u_{af}$ in $\mathcal{D}'(\Omega)$.

Remark 2.25. (1) When more information about $u\in\mathcal{D}'(\Omega)$ is available, (2.3.2) may continue to yield a distribution under weaker regularity demands on $a$ than the current assumption that $a\in C^\infty(\Omega)$. In general, however, the condition $a\in C^\infty(\Omega)$ may not be weakened if (2.3.2) is to yield a distribution for arbitrary $u\in\mathcal{D}'(\Omega)$.

(2) As observed later (see Remark 2.30), one may not define the product of two arbitrary distributions in a way that ensures associativity.

(3) Based on (2.3.2) and (2.2.1), it follows that if $u,u_1,u_2\in\mathcal{D}'(\Omega)$ and $a,a_1,a_2\in C^\infty(\Omega)$, then $a(u_1+u_2) = au_1 + au_2$, $(a_1+a_2)u = a_1u + a_2u$, and $a_1(a_2u) = (a_1a_2)u$, where the equalities are considered in $\mathcal{D}'(\Omega)$.

Exercise 2.26. Prove the following properties.

1. If $u\in\mathcal{D}'(\Omega)$ and $a_j \xrightarrow[j\to\infty]{\mathcal{E}(\Omega)} a$, then $a_j u \xrightarrow[j\to\infty]{\mathcal{D}'(\Omega)} au$.

2. If $a\in C^\infty(\Omega)$ and $u_j \xrightarrow[j\to\infty]{\mathcal{D}'(\Omega)} u$, then $au_j \xrightarrow[j\to\infty]{\mathcal{D}'(\Omega)} au$.

Example 2.27. Recall the Dirac distribution defined in (2.1.18) and assume that some function $a\in C^\infty(\mathbb{R}^n)$ has been given. Then, for each $\varphi\in C_0^\infty(\mathbb{R}^n)$ we may write
$$\langle a\delta,\varphi\rangle = \langle\delta,a\varphi\rangle = (a\varphi)(0) = a(0)\varphi(0) = \langle a(0)\delta,\varphi\rangle.$$
This shows that
$$a\delta = a(0)\delta \quad \text{in } \mathcal{D}'(\mathbb{R}^n) \text{ for each } a\in C^\infty(\mathbb{R}^n). \qquad (2.3.3)$$
As a consequence,
$$x^m\delta = 0 \quad \text{in } \mathcal{D}'(\mathbb{R}), \quad \text{for every } m\in\mathbb{N}. \qquad (2.3.4)$$
Example 2.28. The goal is to solve the equation
$$xu = 1 \quad \text{in } \mathcal{D}'(\mathbb{R}). \qquad (2.3.5)$$
Clearly, this equation does not have a solution $u$ of function type. Recall the distribution $\mathrm{P.V.}\frac{1}{x}$ defined in Example 2.11. Then for every $\varphi\in C_0^\infty(\mathbb{R})$ we may write
$$\Big\langle x\,\mathrm{P.V.}\frac{1}{x},\,\varphi\Big\rangle = \Big\langle \mathrm{P.V.}\frac{1}{x},\,x\varphi\Big\rangle = \lim_{\varepsilon\to 0^+}\int_{|x|\ge\varepsilon}\frac{x\varphi(x)}{x}\,dx = \int_{\mathbb{R}}\varphi(x)\,dx = \langle 1,\varphi\rangle. \qquad (2.3.6)$$
Thus,
$$x\,\mathrm{P.V.}\frac{1}{x} = 1 \quad \text{in } \mathcal{D}'(\mathbb{R}). \qquad (2.3.7)$$
Given (2.3.4), it follows that $u := \mathrm{P.V.}\frac{1}{x} + c\,\delta$ will also be a solution of (2.3.5) for any $c\in\mathbb{C}$. We will see later (cf. Remark 2.70) that in fact any solution of (2.3.5) is of the form $\mathrm{P.V.}\frac{1}{x} + c\,\delta$ for some $c\in\mathbb{C}$.

Exercise 2.29. Let $\psi\in C^\infty(\mathbb{R})$. Determine a solution $u\in\mathcal{D}'(\mathbb{R})$ of the equation $xu = \psi$ in $\mathcal{D}'(\mathbb{R})$.

Remark 2.30. Suppose one could define the product of distributions as an associative operation, in a manner compatible with multiplication by a smooth function. Considering then $\delta$, $x$, and $\mathrm{P.V.}\frac{1}{x} \in \mathcal{D}'(\mathbb{R})$, one would then necessarily have
$$0 = (\delta\cdot x)\,\mathrm{P.V.}\frac{1}{x} = \delta\,\Big(x\cdot\mathrm{P.V.}\frac{1}{x}\Big) = \delta\cdot 1 = \delta \quad \text{in } \mathcal{D}'(\mathbb{R}), \qquad (2.3.8)$$
thanks to (2.3.4) and (2.3.7), leading to the false conclusion that $\delta = 0$ in $\mathcal{D}'(\mathbb{R})$.
2.4 Distributional Derivatives
We are now ready to define derivatives of distributions. One of the most basic attributes of the class of distributions, compared with other classes of locally integrable functions, is that distributions may be differentiated unrestrictedly within this environment (the resulting objects still being distributions), and that the operation of distributional differentiation retains some of the most basic properties of ordinary differentiation (such as a suitable product formula, symmetry of mixed derivatives, etc.). In addition, the differentiation of distributions turns out to be compatible with pointwise differentiation in the case when the distribution in question is of function type, given by a sufficiently regular function.

To develop some intuition, we start our investigation by looking first at a distribution of function type, and try to generalize the notion of weak (Sobolev) derivative from Definition 1.2. As noted earlier, if $f\in L^1_{loc}(\Omega)$, the mapping defined in (1.3.1) is a distribution on $\Omega$. This suggests making the following definition.

Definition 2.31. For each $\alpha\in\mathbb{N}_0^n$ and $u\in\mathcal{D}'(\Omega)$, the distributional derivative (or the derivative in the sense of distributions) of order $\alpha$ of the distribution $u$ is the mapping $\partial^\alpha u:\mathcal{D}(\Omega)\to\mathbb{C}$ defined by
$$\partial^\alpha u(\varphi) := (-1)^{|\alpha|}\langle u,\partial^\alpha\varphi\rangle, \quad \forall\,\varphi\in C_0^\infty(\Omega). \qquad (2.4.1)$$
Note that if $u\in\mathcal{D}'(\Omega)$ is of function type, say $u = u_f$ for some $f\in L^1_{loc}(\Omega)$, and if the weak derivative $\partial^\alpha f$ exists, that is, one can find $g\in L^1_{loc}(\Omega)$ such that (1.2.2) holds, then according to Definition 2.31 the distributional derivative $\partial^\alpha u$ is equal to the distribution $u_g$ in $\mathcal{D}'(\Omega)$. In short,
$\partial^\alpha u_f = u_{\partial^\alpha f}$ in this case. Thus, Definition 2.31 generalizes Definition 1.2. That the class of distributions in $\Omega$ is stable under taking distributional derivatives is proved in the next proposition.

Proposition 2.32. For each $\alpha\in\mathbb{N}_0^n$ and each $u\in\mathcal{D}'(\Omega)$ we have $\partial^\alpha u\in\mathcal{D}'(\Omega)$.

Proof. Fix $\alpha\in\mathbb{N}_0^n$ and a distribution $u\in\mathcal{D}'(\Omega)$. The linearity of the mapping $\partial^\alpha u:\mathcal{D}(\Omega)\to\mathbb{C}$ is immediate. Regarding its continuity, let $\varphi_j \xrightarrow[j\to\infty]{\mathcal{D}(\Omega)} 0$. Then, since $\partial^\alpha\varphi_j \xrightarrow[j\to\infty]{\mathcal{D}(\Omega)} 0$ (see (1) in Exercise 1.20) and $u\in\mathcal{D}'(\Omega)$, we may write
$$\partial^\alpha u(\varphi_j) = (-1)^{|\alpha|}\langle u,\partial^\alpha\varphi_j\rangle \xrightarrow[j\to\infty]{} 0. \qquad (2.4.2)$$
Remark 2.3 then shows that $\partial^\alpha u\in\mathcal{D}'(\Omega)$. $\square$

Exercise 2.33. Suppose that $m\in\mathbb{N}$ and $f\in C^m(\Omega)$. Prove that for any $\alpha\in\mathbb{N}_0^n$ satisfying $|\alpha|\le m$, the distributional derivative of order $\alpha$ of $u_f$ is the distribution of function type given by the classical derivative of order $\alpha$ of $f$, that is, $\partial^\alpha(u_f) = u_{\partial^\alpha f}$ in $\mathcal{D}'(\Omega)$.

Proposition 2.34. Let $m\in\mathbb{N}_0$ and assume that
$$P(x,\partial) := \sum_{|\alpha|\le m} a_\alpha(x)\,\partial^\alpha, \qquad a_\alpha\in C^\infty(\Omega), \ \alpha\in\mathbb{N}_0^n, \ |\alpha|\le m. \qquad (2.4.3)$$
Also, suppose that $u\in C^m(\Omega)$. Then $P(x,\partial)u$, computed in $\mathcal{D}'(\Omega)$, coincides as a distribution with the distribution induced by $P(x,\partial)u$ computed pointwise in $\Omega$.

Proof. This follows from (2.4.3) and Exercises 2.33 and 2.24. $\square$

Remark 2.35. Recall that a function $f:\mathbb{R}^n\to\mathbb{C}$ is called Lipschitz provided there exists $M\in[0,\infty)$ such that
$$|f(x)-f(y)| \le M|x-y| \quad \forall\,x,y\in\mathbb{R}^n. \qquad (2.4.4)$$
If $f$ is Lipschitz, then the constant $C := \inf\{M : \text{(2.4.4) holds}\}$ is called the Lipschitz constant of $f$. We will prove (see Theorem 2.104) that if $f$ is Lipschitz then the distributional derivatives $\partial_k f$, $k=1,\dots,n$, belong to $L^\infty(\mathbb{R}^n)$. Consequently,
$$f:\mathbb{R}^n\to\mathbb{R} \text{ Lipschitz} \implies \partial_k(u_f) = u_{\partial_k f} \ \text{ in } \mathcal{D}'(\mathbb{R}^n) \text{ for } k=1,\dots,n. \qquad (2.4.5)$$
Some basic properties of differentiation in the distributional sense are summarized below.

Proposition 2.36. The following properties of distributional differentiation hold.
(1) Any distribution is infinitely differentiable (i.e., $\mathcal{D}'(\Omega)$ is stable under the action of $\partial^\alpha$ for any $\alpha\in\mathbb{N}_0^n$).

(2) If $u\in\mathcal{D}'(\Omega)$ and $k,\ell\in\{1,\dots,n\}$, then $\partial_k\partial_\ell u = \partial_\ell\partial_k u$ in $\mathcal{D}'(\Omega)$.

(3) If $u_j \xrightarrow[j\to\infty]{\mathcal{D}'(\Omega)} u$ and $\alpha\in\mathbb{N}_0^n$, then $\partial^\alpha u_j \xrightarrow[j\to\infty]{\mathcal{D}'(\Omega)} \partial^\alpha u$.

(4) For any $u\in\mathcal{D}'(\Omega)$ and any $a\in C^\infty(\Omega)$ we have $\partial_j(au) = (\partial_j a)u + a(\partial_j u)$ in $\mathcal{D}'(\Omega)$.

Proof. The first property follows immediately from the definition of distributional derivatives. To prove the remaining properties, fix an arbitrary function $\varphi\in C_0^\infty(\Omega)$. Then, using (2.4.1) repeatedly and the symmetry of mixed partial derivatives for smooth functions (Schwarz's theorem), we have
$$\langle\partial_k\partial_\ell u,\varphi\rangle = -\langle\partial_\ell u,\partial_k\varphi\rangle = \langle u,\partial_\ell\partial_k\varphi\rangle = \langle u,\partial_k\partial_\ell\varphi\rangle = -\langle\partial_k u,\partial_\ell\varphi\rangle = \langle\partial_\ell\partial_k u,\varphi\rangle, \quad \forall\,k,\ell\in\{1,\dots,n\},$$
which implies (2).

Let now $\{u_j\}_{j\in\mathbb{N}}$, $u$, and $\alpha$ satisfy the hypotheses in (3). Based on (2.4.1) and Fact 2.18 we may write
$$\langle\partial^\alpha u_j,\varphi\rangle = (-1)^{|\alpha|}\langle u_j,\partial^\alpha\varphi\rangle \xrightarrow[j\to\infty]{} (-1)^{|\alpha|}\langle u,\partial^\alpha\varphi\rangle = \langle\partial^\alpha u,\varphi\rangle,$$
hence $\partial^\alpha u_j$ converges to $\partial^\alpha u$ in $\mathcal{D}'(\Omega)$ as $j\to\infty$, and the proof of (3) is complete.

Finally, for $u\in\mathcal{D}'(\Omega)$ and $a\in C^\infty(\Omega)$, using (2.4.1) and Leibniz's product formula for derivatives of smooth functions we can write
$$\langle\partial_j(au),\varphi\rangle = -\langle au,\partial_j\varphi\rangle = -\langle u,a(\partial_j\varphi)\rangle = -\langle u,\partial_j(a\varphi)\rangle + \langle u,(\partial_j a)\varphi\rangle = \langle\partial_j u,a\varphi\rangle + \langle(\partial_j a)u,\varphi\rangle = \langle a(\partial_j u) + (\partial_j a)u,\varphi\rangle, \qquad (2.4.6)$$
from which (4) follows. $\square$

Example 2.37. Recall the Heaviside function $H$ from (1.2.16). This is a locally integrable function, thus it defines a distribution on $\mathbb{R}$ that we also denote by $H$. Then the computation in (1.2.19) implies
$$\langle H',\varphi\rangle = -\langle H,\varphi'\rangle = \langle\delta,\varphi\rangle, \quad \forall\,\varphi\in C_0^\infty(\mathbb{R}). \qquad (2.4.7)$$
Hence,
$$H' = \delta \quad \text{in } \mathcal{D}'(\mathbb{R}). \qquad (2.4.8)$$

Exercise 2.38. Prove that for every function $a\in C^\infty(\mathbb{R}^n)$ and every $\alpha\in\mathbb{N}_0^n$ we have
$$a\,(\partial^\alpha\delta) = \sum_{\beta\le\alpha}(-1)^{|\beta|}\frac{\alpha!}{\beta!(\alpha-\beta)!}\,(\partial^\beta a)(0)\,\partial^{\alpha-\beta}\delta \quad \text{in } \mathcal{D}'(\mathbb{R}^n). \qquad (2.4.9)$$
Hint: Use formula (13.2.4) when computing $\partial^\alpha(a\varphi)$ for $\varphi\in C_0^\infty(\mathbb{R}^n)$.
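The identity $H' = \delta$ from (2.4.8) lends itself to a direct check that is not part of the text: pair $H$ against the exact derivative of a concrete bump-type test function and compare $-\int_0^\infty \varphi'\,dx$ with $\varphi(0)$. The bump and the quadrature below are illustrative choices.

```python
import math

def bump(t):
    """Smooth bump supported in (-1, 1)."""
    return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1.0 else 0.0

def bump_prime(t):
    """Exact derivative of the bump: bump(t) * (-2t / (1 - t^2)^2)."""
    if abs(t) >= 1.0:
        return 0.0
    return bump(t) * (-2.0 * t / (1.0 - t * t) ** 2)

phi = lambda x: bump(x / 2.0)                  # test function supported in (-2, 2)
phi_prime = lambda x: 0.5 * bump_prime(x / 2.0)

def midpoint(f, a, b, n=20000):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# <H', phi> = -<H, phi'> = -int_0^2 phi'(x) dx, which should equal phi(0) = e^{-1}.
lhs = -midpoint(phi_prime, 0.0, 2.0)
rhs = phi(0.0)
```

By the fundamental theorem of calculus the integral equals $\varphi(0)-\varphi(2) = \varphi(0)$ exactly, so the only discrepancy between `lhs` and `rhs` is quadrature error.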
Exercise 2.39. Prove that for every $c\in\mathbb{R}$ one has
$$\big(e^{-c|x|}\big)' = -c\,e^{-cx}H(x) + c\,e^{cx}H(-x) \quad \text{in } \mathcal{D}'(\mathbb{R}). \qquad (2.4.10)$$
Hint: Show $e^{-c|x|} = e^{-cx}H(x) + e^{cx}H(-x)$ in $\mathcal{D}'(\mathbb{R})$ and then use part (4) in Proposition 2.36 and (2.4.8).

Next, we look at the issue of existence of antiderivatives for distributions on open intervals.

Proposition 2.40. Let $I$ be an open interval in $\mathbb{R}$ and suppose $u_0\in\mathcal{D}'(I)$.

(1) The equation $u' = u_0$ in $\mathcal{D}'(I)$ admits at least one solution.

(2) If $u_1,u_2\in\mathcal{D}'(I)$ are such that $(u_1)' = u_0$ in $\mathcal{D}'(I)$ and $(u_2)' = u_0$ in $\mathcal{D}'(I)$, then there exists $c\in\mathbb{C}$ such that $u_1 - u_2 = c$ in $\mathcal{D}'(I)$.

Proof. Suppose $I=(a,b)$, where $a\in\mathbb{R}\cup\{-\infty\}$ and $b\in\mathbb{R}\cup\{+\infty\}$, and define the set $\mathcal{A}(I) := \{\varphi' : \varphi\in C_0^\infty(I)\}$. We claim that
$$\text{if } \varphi\in C_0^\infty(I), \text{ then } \varphi\in\mathcal{A}(I) \iff \int_I\varphi(x)\,dx = 0. \qquad (2.4.11)$$
The left-to-right implication in (2.4.11) is immediate from the fundamental theorem of calculus. To prove the converse implication, suppose $\varphi\in C_0^\infty(I)$ is such that $\int_I\varphi(x)\,dx = 0$. Then the function $\psi(x) := \int_a^x\varphi(t)\,dt$, $x\in I$, satisfies $\psi\in C^\infty(I)$, $\psi'=\varphi$ on $I$, and $\operatorname{supp}\psi$ is a compact subset of $I$ (since $\psi$ vanishes to the left of $\operatorname{supp}\varphi$ and, the integral of $\varphi$ over $I$ being zero, also to the right of it), thus $\varphi\in\mathcal{A}(I)$. This finishes the justification of (2.4.11).
Next, fix $\varphi_0\in C_0^\infty(I)$ with the property that $\int_I\varphi_0(x)\,dx = 1$ and consider the map $\Theta:\mathcal{D}(I)\to\mathcal{D}(I)$ defined for each $\varphi\in C_0^\infty(I)$ by
$$\Theta(\varphi) := \theta_\varphi \quad \text{where} \quad \theta_\varphi(x) := \int_a^x\Big[\varphi(t) - \Big(\int_I\varphi(s)\,ds\Big)\varphi_0(t)\Big]\,dt, \quad \forall\,x\in I. \qquad (2.4.12)$$
Since the integral of $\varphi - \big(\int_I\varphi(x)\,dx\big)\varphi_0$ over $I$ is zero, our earlier discussion shows that $\Theta$ is well-defined. Since $\Theta$ is linear, Theorem 13.5 and Fact 1.15 imply that $\Theta$ is continuous if and only if it is sequentially continuous. The latter property may be verified from definitions. Finally, from (2.4.12) and the fundamental theorem of calculus we have
$$\Theta(\varphi') = \varphi \quad \text{for every } \varphi\in C_0^\infty(I). \qquad (2.4.13)$$
Suppose now that an arbitrary distribution $u_0$ on $I$ has been fixed. Define $u := -u_0\circ\Theta$, which is a distribution on $I$ thanks to the properties of $\Theta$. In concert with (2.4.13), this definition implies that $\langle u',\varphi\rangle = -\langle u,\varphi'\rangle = \langle u_0,\Theta(\varphi')\rangle = \langle u_0,\varphi\rangle$ for every function $\varphi\in C_0^\infty(I)$, proving that $u' = u_0$ in $\mathcal{D}'(I)$. This finishes the proof of the statement in (1).
Moving on, suppose $u\in\mathcal{D}'(I)$ is such that $u' = 0$ in $\mathcal{D}'(I)$. Then, if $\varphi_0$ is as earlier in the proof, for any $\varphi\in C_0^\infty(I)$ we may write
$$\langle u,\varphi\rangle = \Big\langle u,\,\varphi - \Big(\int_I\varphi(x)\,dx\Big)\varphi_0\Big\rangle + \Big(\int_I\varphi(x)\,dx\Big)\langle u,\varphi_0\rangle = \big\langle u,(\theta_\varphi)'\big\rangle + \big\langle\langle u,\varphi_0\rangle,\varphi\big\rangle = \langle c,\varphi\rangle, \qquad (2.4.14)$$
where $c := \langle u,\varphi_0\rangle\in\mathbb{C}$ and where we used $\langle u,(\theta_\varphi)'\rangle = -\langle u',\theta_\varphi\rangle = 0$. Hence, $u = c$ in $\mathcal{D}'(I)$. By linearity, this readily implies the statement in (2). The proof of the proposition is now complete. $\square$

Proposition 2.41. Let $I$ be an open interval in $\mathbb{R}$, $g\in C^\infty(I)$, and $f\in C^k(I)$ for some $k\in\mathbb{N}_0$. If $u\in\mathcal{D}'(I)$ satisfies $u' + gu = f$ in $\mathcal{D}'(I)$, then $u\in C^{k+1}(I)$.

Proof. Fix $a\in I$ and define $F(x) := e^{\int_a^x g(t)\,dt}$ for $x\in I$. Then $F\in C^\infty(I)$ and we may use (4) in Proposition 2.36 and the equation satisfied by $u$ to write (keeping in mind that $f$ is continuous)
$$\Big(Fu - \int_a^x F(t)f(t)\,dt\Big)' = F'u + Fu' - Ff = 0 \quad \text{in } \mathcal{D}'(I). \qquad (2.4.15)$$
By Proposition 2.40, it follows that there exists some constant $c\in\mathbb{C}$ with the property that $Fu = \int_a^x F(t)f(t)\,dt + c$ in $\mathcal{D}'(I)$. Since $\int_a^x F(t)f(t)\,dt \in C^{k+1}(I)$ and $\frac{1}{F}\in C^\infty(I)$, we conclude that $u = \frac{1}{F(x)}\int_a^x F(t)f(t)\,dt + \frac{c}{F(x)} \in C^{k+1}(I)$, as desired. $\square$

We close this section by presenting a higher-order version of the product formula for differentiation from part (4) in Proposition 2.36.

Proposition 2.42 (Generalized Leibniz Formula). Suppose $f\in C^\infty(\Omega)$ and $u\in\mathcal{D}'(\Omega)$. Then for each $\alpha\in\mathbb{N}_0^n$ one has
$$\partial^\alpha(fu) = \sum_{\beta\le\alpha}\frac{\alpha!}{\beta!(\alpha-\beta)!}\,(\partial^\beta f)(\partial^{\alpha-\beta}u) \quad \text{in } \mathcal{D}'(\Omega). \qquad (2.4.16)$$
Proof. The first step is to observe that for each $j\in\{1,\dots,n\}$ and each $k\in\mathbb{N}_0$ we have
$$\partial_j^k(fu) = \sum_{0\le\ell\le k}\frac{k!}{\ell!(k-\ell)!}\,(\partial_j^\ell f)(\partial_j^{k-\ell}u) \quad \text{in } \mathcal{D}'(\Omega), \qquad (2.4.17)$$
which is proved by induction on $k$, making use of part (4) in Proposition 2.36. Hence, given any $\alpha=(\alpha_1,\alpha_2,\dots,\alpha_n)\in\mathbb{N}_0^n$, via repeated applications of (2.4.17) we obtain
$$\partial_1^{\alpha_1}(fu) = \sum_{0\le\beta_1\le\alpha_1}\frac{\alpha_1!}{\beta_1!(\alpha_1-\beta_1)!}\,(\partial_1^{\beta_1}f)(\partial_1^{\alpha_1-\beta_1}u), \qquad (2.4.18)$$
$$\partial_1^{\alpha_1}\partial_2^{\alpha_2}(fu) = \sum_{\substack{0\le\beta_1\le\alpha_1\\ 0\le\beta_2\le\alpha_2}}\frac{\alpha_1!}{\beta_1!(\alpha_1-\beta_1)!}\,\frac{\alpha_2!}{\beta_2!(\alpha_2-\beta_2)!}\,(\partial_1^{\beta_1}\partial_2^{\beta_2}f)(\partial_1^{\alpha_1-\beta_1}\partial_2^{\alpha_2-\beta_2}u), \qquad (2.4.19)$$
and, by induction,
$$\partial^\alpha(fu) = \sum_{0\le\beta_1\le\alpha_1}\cdots\sum_{0\le\beta_n\le\alpha_n}\frac{\alpha_1!}{\beta_1!(\alpha_1-\beta_1)!}\cdots\frac{\alpha_n!}{\beta_n!(\alpha_n-\beta_n)!}\,(\partial_1^{\beta_1}\cdots\partial_n^{\beta_n}f)(\partial_1^{\alpha_1-\beta_1}\cdots\partial_n^{\alpha_n-\beta_n}u) = \sum_{0\le\beta\le\alpha}\frac{\alpha!}{\beta!(\alpha-\beta)!}\,(\partial^\beta f)(\partial^{\alpha-\beta}u), \qquad (2.4.20)$$
as claimed. $\square$
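For smooth (e.g., polynomial) $u$ the generalized Leibniz formula reduces to the classical one, and the one-dimensional identity (2.4.17) can then be verified exactly with integer arithmetic. The following sketch (not from the text; the two polynomials are arbitrary choices) differentiates a product directly and via the Leibniz sum.

```python
from math import comb

# Polynomials as integer coefficient lists [c0, c1, ...] (c_k multiplies x^k).
def pmul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def padd(p, q):
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def pdiff(p, k=1):
    for _ in range(k):
        p = [i * c for i, c in enumerate(p)][1:] or [0]
    return p

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

f = [1, 2, 0, 3]       # f(x) = 1 + 2x + 3x^3
u = [5, 0, 1, 4]       # u(x) = 5 + x^2 + 4x^3   (a smooth stand-in for u)
k = 3

direct = pdiff(pmul(f, u), k)                  # k-th derivative of f*u, directly
leibniz = [0]                                  # sum of C(k,l) * f^(l) * u^(k-l)
for l in range(k + 1):
    term = pmul(pdiff(f, l), pdiff(u, k - l))
    leibniz = padd(leibniz, [comb(k, l) * c for c in term])
# trim(direct) == trim(leibniz), exactly as (2.4.17) predicts
```

Because all arithmetic is over the integers, the two coefficient lists agree exactly, not merely up to rounding.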
2.5 The Support of a Distribution
In preparation for discussing the notion of support of a distribution, we first define the restriction of a distribution to an open subset of the Euclidean domain on which the distribution is considered. Necessarily, such a definition should generalize restrictions at the level of locally integrable functions. We start from the observation that if $f\in L^1_{loc}(\Omega)$ and $\omega$ is a nonempty open subset of $\Omega$, then $f|_\omega\in L^1_{loc}(\omega)$. Thus,
$$\langle f|_\omega,\varphi\rangle = \int_\omega f\varphi\,dx = \int_\Omega f\,\iota(\varphi)\,dx, \quad \forall\,\varphi\in C_0^\infty(\omega), \qquad (2.5.1)$$
where $\iota$ is the map from (1.3.15).

Proposition 2.43. Let $\Omega$ be a nonempty open subset of $\mathbb{R}^n$ and suppose $\omega$ is a nonempty open subset of $\Omega$. Also, recall the map $\iota$ from (1.3.15). Then for each $u\in\mathcal{D}'(\Omega)$, the mapping arising as the restriction of the distribution $u$ to $\omega$, i.e., $u|_\omega:\mathcal{D}(\omega)\to\mathbb{C}$ defined by
$$u|_\omega(\varphi) := \langle u,\iota(\varphi)\rangle, \quad \forall\,\varphi\in C_0^\infty(\omega), \qquad (2.5.2)$$
is linear and continuous. Hence, $u|_\omega\in\mathcal{D}'(\omega)$.

Proof. It is immediate that the map in (2.5.2) is well-defined and linear. To see that it is also continuous we use Proposition 2.4. Let $K$ be a compact set contained in $\omega$. Then $K\subset\Omega$ and, since $u\in\mathcal{D}'(\Omega)$, Proposition 2.4 applies and gives $k\in\mathbb{N}_0$ and $C\in(0,\infty)$ such that (2.1.1) holds. In particular, for each $\varphi\in C_0^\infty(\omega)$ with $\operatorname{supp}\varphi\subseteq K$,
$$\big|u|_\omega(\varphi)\big| = |\langle u,\iota(\varphi)\rangle| \le C\sup_{x\in K,\,|\alpha|\le k}|\partial^\alpha\varphi(x)|. \qquad (2.5.3)$$
The conclusion that u|ω ∈ D′(ω) now follows. For an alternative proof of the continuity of u|ω one may use Fact 2.2 and Exercise 1.18.

Exercise 2.44. (1) Prove that the definition of the restriction of a distribution from (2.5.2) generalizes the usual restriction of functions. More specifically, using the notation introduced in (2.1.6), show that if ω is an open subset of Ω and f ∈ L1loc(Ω), then u_f|ω = u_{f|ω} in D′(ω).

(2) Prove that the operation of differentiation of a distribution commutes with the operation of restriction of a distribution to open sets, that is, if ω is an open subset of Ω, then

∂^α(u|ω) = (∂^α u)|ω, ∀ u ∈ D′(Ω), ∀ α ∈ N0^n.  (2.5.4)

The next proposition shows that a distribution is uniquely determined by its local behavior.

Proposition 2.45. If u1, u2 ∈ D′(Ω) are such that for each x0 ∈ Ω there exists an open subset ω of Ω with x0 ∈ ω and satisfying u1|ω = u2|ω in D′(ω), then u1 = u2 in D′(Ω).

Proof. Observe that this proposition may be viewed as a reconstruction problem; thus, it is meaningful to try to use a partition of unity. Let ϕ ∈ C0∞(Ω) be arbitrary, fixed, and set K := supp ϕ. The goal is to prove that ⟨u1, ϕ⟩ = ⟨u2, ϕ⟩. From the hypotheses it follows that for each x ∈ K there exists an open neighborhood ωx ⊆ Ω of x with the property that u1|ωx = u2|ωx. Since K is compact, the cover {ωx}x∈K of K may be refined to a finite one, consisting of, say, ω1, ..., ωN. These are open subsets of Ω and satisfy

K ⊆ ∪_{j=1}^N ωj and u1|ωj = u2|ωj for j = 1, ..., N.  (2.5.5)
Consider a partition of unity {ψj : j = 1, ..., N} subordinate to the cover {ωj}_{j=1}^N of K, as given by Theorem 13.29. Hence, we have ψj ∈ C0∞(Ω) with Σ_{j=1}^N ψj = 1 on K. Consequently, using supp ψj ⊂ ωj for each j = 1, ..., N, the linearity of distributions, and (2.5.5), we obtain

⟨u1, ϕ⟩ = ⟨u1, ϕ Σ_{j=1}^N ψj⟩ = Σ_{j=1}^N ⟨u1, ϕψj⟩ = Σ_{j=1}^N ⟨u2, ϕψj⟩
        = ⟨u2, Σ_{j=1}^N ϕψj⟩ = ⟨u2, ϕ⟩.  (2.5.6)

Thus, ⟨u1, ϕ⟩ = ⟨u2, ϕ⟩ and the proof of the proposition is complete.
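The commutation property (2.5.4) in Exercise 2.44 can be watched numerically. The sketch below is our own illustration (not part of the text): it takes u = u_H ∈ D′(R) induced by the Heaviside function H = 1_{(0,∞)} and ω = (0, 2). Since ∂u = δ and 0 ∉ ω, both ∂(u|ω) and (∂u)|ω vanish, so the pairing ⟨∂u, ϕ⟩ = −⟨u, ϕ′⟩ should return 0 for every ϕ ∈ C0∞(ω), while it returns ϕ(0) for a bump whose support contains 0.

```python
import math

def bump(c, r):
    """Smooth bump supported in (c - r, c + r), together with its derivative."""
    def phi(x):
        t = (x - c) / r
        return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1.0 else 0.0
    def dphi(x):
        t = (x - c) / r
        if abs(t) >= 1.0:
            return 0.0
        return phi(x) * (-2.0 * t / (1.0 - t * t) ** 2) / r
    return phi, dphi

def pair_dH(dphi, a=-5.0, b=5.0, n=100000):
    # <H', phi> = -<H, phi'> = -\int_0^infty phi'(x) dx, by the midpoint rule
    h = (b - a) / n
    s = 0.0
    for k in range(n):
        x = a + (k + 0.5) * h
        if x > 0.0:              # the Heaviside weight
            s += dphi(x)
    return -s * h

phi_in, dphi_in = bump(1.0, 0.9)   # supp phi_in = [0.1, 1.9], inside omega = (0, 2)
phi_0, dphi_0 = bump(0.0, 0.9)     # supp phi_0 contains 0

print(pair_dH(dphi_in))  # ~ 0, consistent with (H')|omega = delta|omega = 0
print(pair_dH(dphi_0))   # ~ phi_0(0) = e^{-1}, the mass of delta at 0
```

The bump, its derivative, and the quadrature parameters are hypothetical choices made for this sketch.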
CHAPTER 2. THE SPACE D′(Ω) OF DISTRIBUTIONS
Exercise 2.46. Let k ∈ N0 ∪ {∞} and suppose u ∈ D′(Ω) is such that for each x ∈ Ω there exist a number rx > 0 and a function fx ∈ C^k(B(x, rx)) such that B(x, rx) ⊂ Ω and u|B(x,rx) = fx in D′(B(x, rx)). Prove that u ∈ C^k(Ω).

Hint: Use Theorem 13.34 to obtain a partition of unity {ψj}j∈J subordinate to the cover {B(x, rx)}x∈Ω of Ω, then show that f := Σ_{j∈J} ψj fj is a function in C^k(Ω) satisfying u = f in D′(Ω).

Exercise 2.47. Let u, v ∈ D′(Ω) be such that supp u ∩ supp v = ∅ and u + v = 0 in D′(Ω). Prove that u = 0 and v = 0 in D′(Ω).

Hint: Note that v|_{Ω∖supp u} = (u + v)|_{Ω∖supp u} = 0 and v|_{Ω∖supp v} = 0. Combine this with the fact that (Ω ∖ supp u) ∪ (Ω ∖ supp v) = Ω and Proposition 2.45 to deduce that v = 0.

Now we are ready to define the notion of support of a distribution. Recall that if f ∈ C^0(Ω) then its support is defined to be the closure relative to Ω of the set {x ∈ Ω : f(x) ≠ 0}. However, the value of an arbitrary distribution at a point is not meaningful. The fact that Ω ∖ supp f is the largest open set contained in Ω on which f = 0 suggests the introduction of the following definition.

Definition 2.48. The support of a distribution u ∈ D′(Ω) is defined as

supp u := {x ∈ Ω : there is no ω open such that x ∈ ω ⊆ Ω and u|ω = 0}.  (2.5.7)

Based on (2.5.7), it follows that

Ω ∖ supp u = {x ∈ Ω : ∃ ω open set such that x ∈ ω ⊆ Ω and u|ω = 0},  (2.5.8)

which is an open set. Hence, supp u is relatively closed in Ω. Moreover, if we apply Proposition 2.45 to the distributions u|_{Ω∖supp u} and 0 in D′(Ω ∖ supp u) we obtain that

u|_{Ω∖supp u} = 0, ∀ u ∈ D′(Ω).  (2.5.9)

In other words, Ω ∖ supp u is the largest open subset of Ω on which the restriction of u is zero.

Example 2.49. Recall the Dirac distribution from (2.1.18). We claim that supp δ = {0}. Indeed, if ϕ ∈ C0∞(R^n ∖ {0}) it follows that ⟨δ, ϕ⟩ = ϕ(0) = 0. By Proposition 2.45, δ|_{R^n∖{0}} = 0, thus supp δ ⊆ {0}. To prove the opposite inclusion, consider an arbitrary open subset ω of R^n such that 0 ∈ ω. Then there exists ϕ ∈ C0∞(ω) such that ϕ(0) = 1, and hence ⟨δ, ϕ⟩ = 1 ≠ 0, which in turn implies that δ|ω ≠ 0. Consequently, 0 ∈ supp δ, as desired. Similarly, if x0 ∈ R^n, then supp δ_{x0} = {x0}, where δ_{x0} is as in Example 2.14.

Example 2.50. If f ∈ C^0(Ω) then supp u_f = supp f, where u_f is the distribution from (2.1.6).

Indeed, since f = 0 in Ω ∖ supp f, we have ∫_Ω f(x)ϕ(x) dx = 0 for every ϕ ∈ C0∞(Ω ∖ supp f), hence supp u_f ⊆ supp f. Also, if x ∈ Ω ∖ supp u_f then there exists an open neighborhood ω of x with ω ⊆ Ω and such that u_f|ω = 0. Thus, for every ϕ ∈ C0∞(ω) one has 0 = ⟨u_f, ϕ⟩ = ∫_Ω f(x)ϕ(x) dx.
Invoking Theorem 1.3 we arrive at the conclusion that f = 0 almost everywhere in ω, hence, ultimately, f = 0 in ω (since f is continuous in ω). Consequently, x ∉ supp f, and this proves that supp f ⊆ supp u_f.

We propose to extend the scope of the discussion in Example 2.50 so as to make it applicable to functions that are merely locally integrable (instead of continuous). This requires defining a suitable notion of support for functions that lack continuity, and we briefly address this issue first. Given an arbitrary set E ⊆ R^n and an arbitrary function f : E → C, we define the support of f as

supp f := {x ∈ E : there is no r > 0 so that f = 0 a.e. in B(x, r) ∩ E}.  (2.5.10)

From this definition one may check without difficulty that

E ∖ supp f = E ∩ ∪_{x∈E∖supp f} B(x, rx),  (2.5.11)

where for each x ∈ E ∖ supp f the number rx > 0 is such that f = 0 a.e. in B(x, rx) ∩ E. Moreover, since R^n has the Lindelöf property, the above union can be refined to a countable one. Based on these observations, the following basic properties of the support may be deduced:

supp f is a relatively closed subset of E,  (2.5.12)

f = 0 a.e. in E ∖ supp f,  (2.5.13)

supp f ⊆ F if F ⊆ E is relatively closed and f = 0 a.e. on E ∖ F,  (2.5.14)

supp f = supp g if g : E → C is such that f = g a.e. on E.  (2.5.15)

In addition, if the set E ⊆ R^n is open and the function f : E → C is continuous, then supp f may be described as the closure in E of the set {x ∈ E : f(x) ≠ 0}, which is precisely our earlier notion of support in this context.

Exercise 2.51. If f ∈ L1loc(Ω), then supp u_f = supp f, where u_f is the distribution from (2.1.6).

Hint: Use (2.5.10), (2.5.8), part (1) in Exercise 2.44, and the fact that the injection in (2.1.8) is one-to-one.
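Exercise 2.51 can be sanity-checked by quadrature. In this hypothetical setup (our own choice of data, not from the text), f is the hat function with supp f = [−1, 1]; the pairing ⟨u_f, ϕ⟩ vanishes for a bump supported away from supp f and is strictly positive for a bump sitting inside supp f.

```python
import math

def bump(c, r):
    def phi(x):
        t = (x - c) / r
        return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1.0 else 0.0
    return phi

def f(x):                       # hat function, supp f = [-1, 1]
    return max(0.0, 1.0 - abs(x))

def pair(w, phi, a=-6.0, b=6.0, n=60000):
    # <u_w, phi> = \int w(x) phi(x) dx, by the midpoint rule
    h = (b - a) / n
    return h * sum(w(a + (k + 0.5) * h) * phi(a + (k + 0.5) * h) for k in range(n))

inside = pair(f, bump(0.0, 0.5))    # bump inside supp f
outside = pair(f, bump(3.0, 1.5))   # supp = [1.5, 4.5], misses supp f
print(inside, outside)              # strictly positive value, exact 0.0
```

Since the integrand is identically zero for the second bump, the quadrature returns an exact zero, mirroring u_f|_{Ω∖supp f} = 0.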
2.6 Compactly Supported Distributions and the Space E′(Ω)
Next we discuss the issue of extending the action of a distribution u ∈ D′(Ω) to a subclass of C∞(Ω) that is possibly larger than C0∞(Ω). Observe that if f ∈ L1loc(Ω), the expression ∫_Ω fϕ dx is meaningful for functions ϕ ∈ C∞(Ω)
such that supp f ∩ supp ϕ is a compact subset of Ω. A particular case is when supp ϕ ∩ supp f = ∅, in which scenario ∫_Ω fϕ dx = 0. This observation is the motivation behind the following theorem.

Theorem 2.52. Let u ∈ D′(Ω) and consider a relatively closed subset F of Ω satisfying supp u ⊆ F. Set

M_F := {ϕ ∈ C∞(Ω) : supp ϕ ∩ F is a compact set in R^n}.  (2.6.1)

Then there exists a unique linear map ũ : M_F → C satisfying the following conditions:

(i) ⟨ũ, ϕ⟩ = ⟨u, ϕ⟩ for every ϕ ∈ C0∞(Ω), and

(ii) ⟨ũ, ϕ⟩ = 0 for every ϕ ∈ C∞(Ω) with supp ϕ ∩ F = ∅,

where, if ψ ∈ M_F, then ⟨ũ, ψ⟩ denotes ũ(ψ).

Moreover, extensions of u constructed with respect to different choices of F act in a compatible fashion. More precisely, if F1, F2 ⊆ Ω are two relatively closed sets in Ω with the property that supp u ⊆ Fj, j = 1, 2, then ⟨ũ1, ϕ⟩ = ⟨ũ2, ϕ⟩ for every ϕ ∈ M_{F1} ∩ M_{F2}, where ũ1, ũ2 are the extensions of u constructed as above relative to the sets F1 and F2, respectively.

Before presenting the proof of this theorem, a few comments are in order.

Remark 2.53. Retain the context of Theorem 2.52.

(a) One has C0∞(Ω) ⊆ M_F and {ϕ ∈ C∞(Ω) : supp ϕ ∩ F = ∅} ⊆ M_F.

(b) M_F is a vector subspace of C∞(Ω), albeit not a topological subspace of E(Ω).

(c) If F = supp u we are in the setting discussed prior to the statement of Theorem 2.52. Also, if F1 ⊆ F2 then M_{F2} ⊆ M_{F1}. In particular, the largest M_F corresponds to the case when F = supp u.

(d) If supp u is compact and we take F = supp u then M_F = C∞(Ω). In such a scenario, Theorem 2.52 gives an extension of u, originally defined as a linear functional on C0∞(Ω), to a linear functional defined on the larger space C∞(Ω). From a topological point of view, this extension turns out to be a continuous mapping of E(Ω) into C (as we will see later, in Theorem 2.59).

Proof of Theorem 2.52. Fix a relatively closed subset F of Ω satisfying the condition supp u ⊆ F. First we prove the uniqueness statement in the first part of the theorem. Suppose ũ1, ũ2 : M_F → C satisfy (i) and (ii). Fix ϕ ∈ M_F and consider ψ ∈ C0∞(Ω) such that ψ ≡ 1 in an open neighborhood W of the set F ∩ supp ϕ. That such a function ψ exists is guaranteed by Proposition 13.26. Decompose ϕ = ϕ0 + ϕ1, where ϕ0 := ψϕ ∈ C0∞(Ω) and ϕ1 := (1 − ψ)ϕ ∈ C∞(Ω).
In general, if A ⊆ R^n and f ∈ C^0(R^n), it may be readily verified that f = 0 on Å if and only if supp f ⊆ closure(A^c) = (Å)^c. Making use of this observation we obtain
that supp(1 − ψ) ⊆ closure(W^c) = (W̊)^c = W^c, the last equality since W is open. It follows that supp ϕ1 ⊆ W^c ∩ supp ϕ, hence supp ϕ1 ∩ F = ∅. Thus, by (i) and (ii) written for ũ1 and ũ2, we have

⟨ũ1, ϕ⟩ = ⟨ũ1, ϕ0⟩ + ⟨ũ1, ϕ1⟩ = ⟨u, ϕ0⟩ + 0 = ⟨ũ2, ϕ0⟩
        = ⟨ũ2, ϕ0⟩ + ⟨ũ2, ϕ1⟩ = ⟨ũ2, ϕ⟩,  (2.6.2)

which implies that ũ1 = ũ2.

To prove the existence of an extension satisfying properties (i) and (ii), we make use of the decomposition of ϕ already employed in the proof of uniqueness. The apparent problem is that such a decomposition is not unique. Let ϕ ∈ M_F and suppose that ϕ = ϕ0 + ϕ1 = ϕ0′ + ϕ1′, with ϕ0, ϕ0′ ∈ C0∞(Ω), ϕ1, ϕ1′ ∈ C∞(Ω), and supp ϕ1 ∩ F = ∅ = supp ϕ1′ ∩ F. Thus, ϕ0 − ϕ0′ = ϕ1′ − ϕ1, and since supp(ϕ1′ − ϕ1) ∩ F = ∅, we also have supp(ϕ0 − ϕ0′) ∩ F = ∅, which in turn implies supp(ϕ0 − ϕ0′) ⊆ Ω ∖ supp u. The latter condition entails 0 = ⟨u, ϕ0 − ϕ0′⟩ = ⟨u, ϕ0⟩ − ⟨u, ϕ0′⟩. This suggests defining the extension

ũ : M_F → C, ⟨ũ, ϕ⟩ := ⟨u, ψϕ⟩ for each ϕ ∈ M_F and
each ψ ∈ C0∞(Ω) with ψ ≡ 1 in a neighborhood of supp ϕ ∩ F.  (2.6.3)
Clearly ũ as in (2.6.3) is linear and, based on the previous reasoning, independent of the choice of ψ, thus well-defined. We claim that this extension also satisfies (i) and (ii). Indeed, if ϕ ∈ C0∞(Ω), we choose ψ ≡ 1 on supp ϕ. Then ⟨ũ, ϕ⟩ = ⟨u, ϕ⟩, so the extension in (2.6.3) satisfies (i). Also, if ϕ ∈ C∞(Ω) is such that supp ϕ ∩ F = ∅, we may choose ψ ∈ C0∞(Ω) such that supp ψ ∩ F = ∅, which forces ⟨ũ, ϕ⟩ = ⟨u, ψϕ⟩ = 0, hence our extension satisfies (ii) as well. This proves the claim.

We are left with proving the compatibility of extensions. Let F1, F2 ⊆ Ω be relatively closed sets in Ω, each containing supp u. Denote by ũ1 and ũ2 the linear extensions of u to M_{F1} and M_{F2}, respectively, constructed as above relative to the sets F1 and F2. For ϕ ∈ M_{F1} ∩ M_{F2} let ψ ∈ C0∞(Ω) be such that ψ = 1 on an open neighborhood of supp ϕ ∩ F1 and on an open neighborhood of supp ϕ ∩ F2. Then by (2.6.3), ⟨ũ1, ϕ⟩ = ⟨u, ψϕ⟩ = ⟨ũ2, ϕ⟩. The proof of the theorem is now complete.

Remark 2.54. In the context of Theorem 2.52 consider u ∈ D′(Ω), a ∈ C∞(Ω), and α ∈ N0^n. Then the extension given in Theorem 2.52 satisfies the following properties:
(1) ⟨(au)~, ϕ⟩ = ⟨ũ, aϕ⟩ for every ϕ ∈ M_F;

(2) ⟨(∂^α u)~, ϕ⟩ = (−1)^{|α|} ⟨ũ, ∂^α ϕ⟩ for every ϕ ∈ M_F.

Indeed, since by Theorem 2.52 an extension with properties (i) and (ii) is unique, the statement in (1) above will follow if one proves that the actions of the linear functionals considered in the left- and right-hand sides of the equality in (1) coincide on C0∞(Ω) and on C∞(Ω) functions with supports outside F, which are immediate from (2.6.3) and properties of distributions. A similar approach works for the proof of (2).

We introduce the following notation:

D′c(Ω) := {u ∈ D′(Ω) : supp u is a compact subset of Ω}.  (2.6.4)

By applying Theorem 2.52 to a distribution u ∈ D′c(Ω) and the set F := supp u, in which case we have M_F = C∞(Ω), it follows that there exists a linear map ũ : C∞(Ω) → C satisfying (i) and (ii) in the statement of this theorem. In fact, this extension turns out to be continuous with respect to the topology of E(Ω), an issue that we will address shortly. The dual of E(Ω) is the space

{v : E(Ω) → C : v linear and continuous}.  (2.6.5)
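For a concrete feel of the extension of Theorem 2.52 when supp u is compact, here is a small sketch (our own illustration, with hypothetical helper names): u = δ′, so supp u = {0}, and the recipe ⟨ũ, ϕ⟩ := ⟨u, ψϕ⟩ from (2.6.3) is applied to the smooth but non-compactly supported test function ϕ(x) = x. The value −ϕ′(0) = −1 comes out independently of the cutoff ψ chosen.

```python
def u(phi, h=1e-5):
    # u = delta' on R: <u, phi> = -phi'(0), here via a central difference
    return -(phi(h) - phi(-h)) / (2.0 * h)

def cutoff(R):
    # cutoff equal to 1 on [-R/2, R/2] and 0 outside [-R, R]
    # (a polynomial smoothstep stands in for a genuine C_0^inf function)
    def psi(x):
        t = (abs(x) - R / 2.0) / (R / 2.0)
        if t <= 0.0:
            return 1.0
        if t >= 1.0:
            return 0.0
        return 1.0 - t * t * (3.0 - 2.0 * t)
    return psi

def extend(dist, psi):
    # (2.6.3): pair the original distribution with the truncated test function
    return lambda phi: dist(lambda x: psi(x) * phi(x))

phi = lambda x: x                    # in C^inf(R), NOT compactly supported
v1 = extend(u, cutoff(1.0))(phi)
v2 = extend(u, cutoff(7.0))(phi)
print(v1, v2)                        # both -1.0 = -phi'(0)
```

Because ψ ≡ 1 near supp u = {0}, the sampled points ±h never see the cutoff, which is exactly why the result does not depend on ψ.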
Whenever v : E(Ω) → C is linear and continuous, and whenever ϕ ∈ C∞(Ω), we use the notation ⟨v, ϕ⟩ in place of v(ϕ). The following is an equivalent characterization of continuity for linear functionals on E(Ω) (see Sect. 13.1 for more details).

Fact 2.55. A linear functional v : E(Ω) → C is continuous (for details see (13.1.22)) if and only if there exist a compact K ⊂ Ω, a number m ∈ N0, and a constant C ∈ (0, ∞), such that

|v(ϕ)| ≤ C sup_{x∈K} sup_{α∈N0^n, |α|≤m} |∂^α ϕ(x)|, ∀ ϕ ∈ C∞(Ω).  (2.6.6)
In the current setting, functionals on E(Ω) are continuous if and only if they are sequentially continuous. This can be seen by combining the general result presented in Theorem 13.14 with Fact 1.8. A direct proof, applicable to the specific case of linear functionals on E(Ω), is given in the next proposition.

Proposition 2.56. Let v : E(Ω) → C be a linear map. Then v is continuous if and only if v is sequentially continuous.

Proof. The general fact that any linear and continuous functional on a topological vector space is sequentially continuous gives the left-to-right implication. To prove the converse implication, it suffices to check continuity at zero. This is done reasoning by contradiction. Assume that

v(ϕj) → 0 as j → ∞ whenever ϕj → 0 in E(Ω) as j → ∞,  (2.6.7)
but that v is not continuous at 0 ∈ E(Ω). Then for each compact subset K of Ω and every j ∈ N, there exists ϕj ∈ E(Ω) such that

|v(ϕj)| > j sup_{x∈K, |α|≤j} |∂^α ϕj(x)|.  (2.6.8)

Consider now a nested sequence of compact sets {Kj}j∈N such that ∪_{j=1}^∞ Kj = Ω. For each j ∈ N, let ϕj be as given by (2.6.8) corresponding to K := Kj, and define the function ψj := ϕj / v(ϕj), which belongs to E(Ω). Then v(ψj) = 1 and

sup_{x∈Kj, |α|≤j} |∂^α ψj(x)| ≤ 1/j for every j ∈ N.  (2.6.9)
Thus, for each fixed α ∈ N0^n and every compact subset K of Ω there exists some j0 ≥ |α| with the property that K ⊂ K_{j0} and sup_{x∈K} |∂^α ψj(x)| < 1/j for all j ≥ j0. The latter implies ψj → 0 in E(Ω) as j → ∞ which, in light of (2.6.7), further implies v(ψj) → 0 as j → ∞. Since this contradicts the fact that v(ψj) = 1 for every j ∈ N, the proof is finished.

The topology we consider on the dual of E(Ω) is the weak∗-topology, and we denote the resulting topological vector space by E′(Ω) (see Sect. 13.1 for more details). A significant byproduct of this setup is singled out next.

Fact 2.57. E′(Ω) is a locally convex topological vector space over C, which is not metrizable, but is complete.

In addition, we have the following important characterization of convergence in E′(Ω).

Fact 2.58. A sequence {uj}j∈N ⊂ E′(Ω) converges to u ∈ E′(Ω) as j → ∞ in E′(Ω), something we will indicate by writing uj → u in E′(Ω), if and only if ⟨uj, ϕ⟩ → ⟨u, ϕ⟩ as j → ∞ for every ϕ ∈ E(Ω).
We are now ready to state and prove a result that gives a complete characterization of the class of functionals that are extensions, as in Theorem 2.52, of distributions u ∈ D′(Ω) with compact support.

Theorem 2.59. The spaces D′c(Ω) and E′(Ω) are algebraically isomorphic.

Proof. Consider the mapping ι : D′c(Ω) → E′(Ω), ι(u) := ũ, where ũ is the extension of u given by Theorem 2.52 corresponding to F := supp u. Then M_{supp u} = C∞(Ω) and, to conclude that ι is well defined, there remains to show that the functional ũ is continuous on E(Ω). With this goal in mind, note that while in general the function ψ ∈ C0∞(Ω) used in the construction of ũ as in (2.6.3) depends on ϕ, given that we are currently assuming that supp u
is compact, we may take ψ = 1 on a neighborhood of supp u (originally we only needed ψ = 1 on a neighborhood of supp u ∩ supp ϕ ⊆ supp u). Then ⟨ũ, ϕ⟩ = ⟨u, ψϕ⟩ for all ϕ ∈ C∞(Ω). Now let K0 := supp ψ. It follows that ϕψ ∈ C0∞(Ω) and supp(ϕψ) ⊆ K0, for all ϕ ∈ C∞(Ω). Fix ϕ ∈ C∞(Ω). Since u ∈ D′(Ω), corresponding to the compact set K0 there exist k0 ∈ N0 and a finite constant C ≥ 0 such that

|⟨ũ, ϕ⟩| = |⟨u, ψϕ⟩| ≤ C sup_{x∈K0, |α|≤k0} |∂^α(ψϕ)(x)|.  (2.6.10)

Starting with Leibniz's formula (13.2.4) applied to ψϕ, we estimate

|∂^α(ψϕ)| = | Σ_{β≤α} (α!/(β!(α−β)!)) ∂^{α−β}ψ ∂^β ϕ | ≤ C′ sup_{x∈K0, |β|≤k0} |∂^β ϕ(x)|,  (2.6.11)

for some finite constant C′ = C′(α, ψ) > 0. Combining (2.6.10) and (2.6.11), we obtain

|⟨ũ, ϕ⟩| ≤ C·C′ sup_{x∈K0, |β|≤k0} |(∂^β ϕ)(x)|,  (2.6.12)
hence ũ ∈ E′(Ω), proving that ι is well-defined.

Moving on, it is clear that ι is linear, hence to conclude that it is injective it suffices to show that if ι(u) = 0 for some u ∈ D′c(Ω), then u = 0. Consider u ∈ D′c(Ω) such that ι(u) = 0. Then by (i) in Theorem 2.52, u = ι(u)|_{C0∞(Ω)} = 0, as desired.

Consider now the task of proving that ι is surjective. To get started, let v ∈ E′(Ω) be arbitrary, and set u := v|_{C0∞(Ω)}. Clearly u : D(Ω) → C is linear. Since v ∈ E′(Ω), Fact 2.55 ensures the existence of a compact set K ⊂ Ω, a nonnegative integer k, and a finite constant C > 0, such that

|⟨v, ϕ⟩| ≤ C sup_{x∈K, |α|≤k} |∂^α ϕ(x)|, ∀ ϕ ∈ C∞(Ω).  (2.6.13)
Then, for each compact subset A of Ω and ϕ ∈ C0∞(Ω) with supp ϕ ⊆ A, by regarding ϕ as being in E(Ω) we may use (2.6.13) to write

|⟨u, ϕ⟩| ≤ C sup_{x∈K, |α|≤k} |∂^α ϕ(x)| = C sup_{x∈K∩A, |α|≤k} |∂^α ϕ(x)| ≤ C sup_{x∈A, |α|≤k} |∂^α ϕ(x)|.  (2.6.14)

From (2.6.14) we may now conclude (invoking Proposition 2.4) that u ∈ D′(Ω). Next, we claim that supp u ⊆ K. Indeed, if the function ϕ ∈ C0∞(Ω) is such that supp ϕ ∩ K = ∅, then from (2.6.13) we obtain |⟨u, ϕ⟩| = 0, thus u = 0 on the set Ω ∖ K. Hence, the claim is proved which, in turn, shows that u ∈ D′c(Ω). To finish the proof of the surjectivity of ι, it suffices to show that ι(u) = v. Denote by ũ_K the extension of u given by Theorem 2.52 with F := K. Then
reasoning as in the proof of the fact that ι is well-defined, we obtain that ũ_K ∈ E′(Ω). By part (i) in Theorem 2.52 it follows that ũ_K|_{C0∞(Ω)} = u. Also, if ϕ ∈ C∞(Ω) satisfies the condition supp ϕ ∩ K = ∅, then by (ii) in Theorem 2.52 we have ⟨ũ_K, ϕ⟩ = 0, while (2.6.13) implies ⟨v, ϕ⟩ = 0. Hence, the uniqueness result in Theorem 2.52 yields ũ_K = v. On the other hand, since K and supp u are compact, we have M_K = C∞(Ω) = M_{supp u}. Now the last conclusion in Theorem 2.52 gives ũ_K = ι(u). Consequently, ι(u) = v, and the surjectivity of ι is proved. This finishes the proof of the theorem.

In light of the significance of D′c(Ω), Theorem 2.59 provides a natural algebraic identification

E′(Ω) = {u ∈ D′(Ω) : supp u is a compact subset of Ω}.  (2.6.15)

Remark 2.60. The spaces D′c(Ω) and E′(Ω) are not topologically isomorphic since there exist sequences of distributions with compact support that converge in D′(Ω) but not in E′(Ω). For example, take the sequence {δj}j∈N ⊂ D′(R) of Dirac distributions with mass at j ∈ N, that have been defined in Example 2.14. Then it is easy to check that the sequence {δj}j converges to 0 in D′(R) but not in E′(R).

Theorem 2.59 nonetheless proves that the identity mapping is well-defined from E′(Ω) into D′(Ω). Keeping this in mind and relying on (1.3.14) and Proposition 13.3, we see that

E′(Ω) is continuously embedded into D′(Ω).  (2.6.16)

This corresponds to the dual version of (1.3.14). In particular, the operation of restriction to an open subset ω of Ω is a well-defined linear mapping

E′(Ω) ∋ u ↦ u|ω ∈ D′(ω).  (2.6.17)

Moreover, u|ω ∈ E′(ω) whenever the support of u ∈ E′(Ω) is contained in ω.

Remark 2.61. (1) In the sequel, we will often drop the tilde from the notation of the extension (as defined in the proof of Theorem 2.59 or Proposition 2.63) of a compactly supported distribution. More precisely, if u ∈ D′c(Ω) we will simply use u for the extension of u to a functional in E′(Ω), as well as for its extension to a functional in E′(O), where O is an open subset of R^n containing Ω.

(2) Whenever necessary, if u ∈ E′(Ω), ϕ ∈ C0∞(Ω), and ψ ∈ C∞(Ω), we will use the notation D⟨u, ϕ⟩D for the action of u on ϕ as a functional in D′(Ω), and the notation E⟨u, ψ⟩E for the action of u on ψ as a functional in E′(Ω).

Proposition 2.62. Let u ∈ D′(Ω) and ψ ∈ C0∞(Ω). Then ψu ∈ E′(Ω) and

E⟨ψu, ϕ⟩E = D⟨u, ψϕ⟩D, ∀ ϕ ∈ E(Ω).  (2.6.18)
Proof. Since ψu ∈ D′(Ω) and supp(ψu) ⊆ supp ψ, we have ψu ∈ D′c(Ω), thus ψu ∈ E′(Ω) (i.e., ψu extends as an element in E′(Ω)). Let φ ∈ C0∞(Ω) be such that φ = 1 in a neighborhood of supp ψ. Then for every ϕ ∈ E(Ω),

E⟨ψu, ϕ⟩E = E⟨φψu, ϕ⟩E = E⟨ψu, φϕ⟩E = D⟨ψu, φϕ⟩D
         = D⟨u, ψφϕ⟩D = D⟨u, ψϕ⟩D,  (2.6.19)

proving (2.6.18).

Proposition 2.63. Let ω and Ω be open subsets of R^n such that ω ⊆ Ω. Then every u ∈ E′(ω) extends to a functional ũ ∈ E′(Ω) by setting

ũ : E(Ω) → C, ⟨ũ, ϕ⟩ := ⟨u, ψϕ⟩, ∀ ϕ ∈ C∞(Ω),  (2.6.20)
where ψ ∈ C0∞(ω) is such that ψ = 1 in a neighborhood of supp u.

Proof. We first claim that the mapping in (2.6.20) is well-defined. Indeed, if ψj ∈ C0∞(ω), ψj = 1 in a neighborhood of supp u, for j = 1, 2, then for each function ϕ ∈ C∞(Ω) we have (ψ1 − ψ2)ϕ ∈ C0∞(ω) and supp u ∩ supp[(ψ1 − ψ2)ϕ] = ∅. Consequently, ⟨u, (ψ1 − ψ2)ϕ⟩ = 0, so that the definition of ũ is independent of the choice of ψ with the given properties. The functional defined in (2.6.20) is also linear, and its continuity is a consequence of Proposition 2.56 and Exercise 1.21. In addition, if ϕ ∈ C∞(ω) is given and if ψ ∈ C0∞(ω) is such that ψ = 1 in a neighborhood of supp u, then supp[(1 − ψ)ϕ] ∩ supp u = ∅. Hence

⟨ũ, ϕ⟩ = ⟨u, ψϕ⟩ = ⟨u, ϕ⟩ − ⟨u, (1 − ψ)ϕ⟩ = ⟨u, ϕ⟩,  (2.6.21)

proving that ũ is an extension of u.

Exercise 2.64. In the context of Proposition 2.63 prove that

∂^α ũ = (∂^α u)~ in D′(Ω)  (2.6.22)

for every u ∈ E′(ω) and every α ∈ N0^n.
Exercise 2.65. Let uj → u in D′(Ω) as j → ∞ be such that there exists a compact K in R^n that is contained in Ω and with the property that supp uj ⊆ K for every j ∈ N. Prove that supp u ⊆ K and uj → u in E′(Ω) as j → ∞. Consequently, we also have uj → u in E′(R^n) as j → ∞.

Exercise 2.66. Let k ∈ N0 and assume that cα ∈ C for α ∈ N0^n with |α| ≤ k. Prove that

Σ_{|α|≤k} cα ∂^α δ = 0 in D′(R^n) ⟺ each cα = 0.  (2.6.23)
Hint: Use (13.2.5).
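In one dimension, the statement (2.6.23) can be made concrete by pairing against monomials: since δ^(j) ∈ E′(R), the pairings ⟨δ^(j), x^i⟩ = (−1)^j j! for i = j and 0 otherwise are meaningful, and testing Σ_j c_j δ^(j) = 0 against x^0, x^1, ... forces every c_j = 0. A sketch in exact integer arithmetic (our own illustration, not from the text):

```python
from math import factorial

def pair_delta_deriv(j, i):
    # <delta^{(j)}, x^i> = (-1)^j (d/dx)^j x^i |_{x=0} = (-1)^j j! if i == j, else 0
    return (-1) ** j * factorial(j) if i == j else 0

k = 4
# matrix of the pairings <delta^{(j)}, x^i>, 0 <= i, j <= k
M = [[pair_delta_deriv(j, i) for j in range(k + 1)] for i in range(k + 1)]

# If sum_j c_j delta^{(j)} = 0, testing against x^i yields (-1)^i i! c_i = 0,
# hence every c_i = 0: the matrix is diagonal with nonzero diagonal entries.
print([M[i][i] for i in range(k + 1)])  # [1, -1, 2, -6, 24]
print(all(M[i][j] == 0 for i in range(k + 1) for j in range(k + 1) if i != j))
```

The diagonal entries (−1)^i i! never vanish, which is the whole point of the hint.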
Exercise 2.67. Prove that if u ∈ D′(R^n) and supp u ⊆ {a} for some a ∈ R^n, then u has a unique representation of the form

u = Σ_{|α|≤k} cα ∂^α δ_a,  (2.6.24)

for some k ∈ N0 and coefficients cα ∈ C.

Sketch of proof:

(I) Via a translation, reduce matters to the case a = 0.

(II) Use Fact 2.55 to determine k ∈ N0.

(III) Fix ψ ∈ C0∞(B(0, 1)) such that ψ = 1 on B(0, 1/2) and for ε > 0 define the function ψε(x) := ψ(x/ε) for every x ∈ R^n. Prove that u = ψε u in D′(R^n).

(IV) For ϕ ∈ C0∞(R^n) consider the kth order Taylor polynomial for ϕ at 0, i.e.,

ϕk(x) := Σ_{|β|≤k} (1/β!) ∂^β ϕ(0) x^β, ∀ x ∈ R^n.  (2.6.25)

Prove that for α ∈ N0^n satisfying |α| ≤ k one has ∂^α(ϕ − ϕk) = ∂^α ϕ − (∂^α ϕ)_{k−|α|}.

(V) Show that for each ϕ ∈ C0∞(R^n) there exists a constant c ∈ (0, ∞) such that |⟨u, (ϕ − ϕk)ψε⟩| ≤ c ε.

(VI) Combine all the above to obtain that

⟨u, ϕ⟩ = ⟨ Σ_{|α|≤k} ((−1)^{|α|}/α!) ⟨u, x^α⟩ ∂^α δ, ϕ ⟩, ∀ ϕ ∈ C0∞(R^n).  (2.6.26)
(VII) Prove that the representation in (VI) is unique.

Example 2.68. Let m ∈ N. We are interested in solving the equation

x^m u = 0 in D′(R).  (2.6.27)

In this regard, assume that u ∈ D′(R) solves (2.6.27) and note that if we have ϕ ∈ C0∞(R ∖ {0}), then (1/x^m)ϕ ∈ C0∞(R ∖ {0}). This observation permits us to write

⟨u, ϕ⟩ = ⟨x^m u, (1/x^m)ϕ⟩ = 0, ∀ ϕ ∈ C0∞(R ∖ {0}),  (2.6.28)

which proves that supp u ⊆ {0}. In particular, u ∈ E′(R). Applying Exercise 2.67 we conclude that there exists N ∈ N0 such that u = Σ_{k=0}^N ck δ^(k) in D′(R), for some ck ∈ C, k = 0, 1, 2, ..., N. We claim that

c_ℓ = 0 whenever m ≤ ℓ ≤ N.  (2.6.29)
To see why this is true, observe that since u ∈ E′(R) it makes sense to apply u to any function in C∞(R). In particular, it is meaningful to apply u to any polynomial. Concerning (2.6.29), if N ≤ m − 1 there is nothing to prove, while in the case when N ≥ m, for each ℓ ∈ {m, ..., N} we may write

0 = ⟨x^m u, x^{ℓ−m}⟩ = ⟨u, x^ℓ⟩ = Σ_{k=0}^N ck ⟨δ^(k), x^ℓ⟩
  = Σ_{k=0}^N (−1)^k ck (d^k/dx^k)(x^ℓ)|_{x=0} = (−1)^ℓ ℓ! c_ℓ.  (2.6.30)

This proves (2.6.29) which, in turn, forces u to have the form

u = Σ_{k=0}^{m−1} ck δ^(k) for some ck ∈ C, k = 0, 1, ..., m − 1.  (2.6.31)
Conversely, one may readily verify that any distribution u as in (2.6.31) solves (2.6.27). In conclusion, any solution u of (2.6.27) is as in (2.6.31).

Exercise 2.69. Let m ∈ N, a ∈ R, and u ∈ D′(R). Prove that u is a solution of the equation (x − a)^m u = 0 in D′(R) if and only if u is of the form u = Σ_{k=0}^{m−1} ck δ_a^(k) for some ck ∈ C, k = 0, 1, ..., m − 1.
Remark 2.70. You have seen in Example 2.28 that P.V.(1/x) is a solution of the equation xu = 1 in D′(R). Hence, if v ∈ D′(R) is another solution of this equation, then the distribution v − P.V.(1/x) is a solution of the equation xu = 0 in D′(R). By Example 2.68, it follows that v − P.V.(1/x) = c δ, where c ∈ C. Thus, the general solution of the equation xu = 1 in D′(R) is u = P.V.(1/x) + c δ, for c ∈ C.

Example 2.71. Let N ∈ N and let aj ∈ R^n, j ∈ {1, ..., N}, be distinct points. If u ∈ D′(R^n) is such that supp u ⊆ {a1, a2, ..., aN}, then u has a unique representation of the form

u = Σ_{j=1}^N Σ_{|α|≤kj} cα,j ∂^α δ_{aj}, kj ∈ N0, cα,j ∈ C.  (2.6.32)
To justify formula (2.6.32), fix a family of pairwise disjoint balls Bj := B(aj, rj), j ∈ {1, ..., N}, and for each j select a function ψj ∈ C0∞(Bj) satisfying ψj = 1 in a neighborhood of B(aj, rj/2). Then for each j ∈ {1, ..., N} we have that ψj u ∈ E′(R^n) and supp(ψj u) ⊆ {aj}. Exercise 2.67 then gives that there exist kj ∈ N0 and cα,j ∈ C such that ψj u = Σ_{|α|≤kj} cα,j ∂^α δ_{aj} in D′(R^n). In addition, since Σ_{j=1}^N ψj = 1 in a neighborhood of supp u, we have

Σ_{j=1}^N ⟨uψj, ϕ⟩ = Σ_{j=1}^N ⟨ψj u, ϕ⟩ = Σ_{j=1}^N ⟨u, ψj ϕ⟩ = ⟨u, ϕ Σ_{j=1}^N ψj⟩ = ⟨u, ϕ⟩,  (2.6.33)

for each ϕ ∈ C0∞(R^n). Hence, u = Σ_{j=1}^N ψj u in D′(R^n) which, given (2.6.33),
proves (2.6.32).

Example 2.72. Let a, b ∈ R be such that a ≠ b. We are interested in solving the equation

(x − a)(x − b)u = 0 in D′(R).  (2.6.34)

The first observation is that any solution u of this equation satisfies the condition supp u ⊆ {a, b}. Indeed, if we take an arbitrary ϕ ∈ C0∞(R ∖ {a, b}), then the function ϕ/((x − a)(x − b)) belongs to the space C0∞(R ∖ {a, b}) and

⟨u, ϕ⟩ = ⟨(x − a)(x − b)u, ϕ/((x − a)(x − b))⟩ = 0.  (2.6.35)

Hence, we may apply Example 2.71 to conclude that

u = Σ_{j=0}^{N1} cj δ_a^(j) + Σ_{j=0}^{N2} dj δ_b^(j) in D′(R),  (2.6.36)

where N1, N2 ∈ N0, {cj}_{0≤j≤N1} ⊂ C and {dj}_{0≤j≤N2} ⊂ C. Moreover, by dropping terms with zero coefficients, there is no loss of generality in assuming that

c_{N1} ≠ 0 and d_{N2} ≠ 0.  (2.6.37)
In this scenario, we make the claim that N1 = N2 = 0. To prove this claim, suppose first that N1 ≥ 1. Then, using (2.6.36) and the hypotheses on u, we obtain

0 = ⟨(x − a)(x − b)u, (x − a)^{N1−1}(x − b)^{N2}⟩ = ⟨u, (x − a)^{N1}(x − b)^{N2+1}⟩
  = Σ_{j=0}^{N1} cj ⟨δ_a^(j), (x − a)^{N1}(x − b)^{N2+1}⟩ + Σ_{j=0}^{N2} dj ⟨δ_b^(j), (x − a)^{N1}(x − b)^{N2+1}⟩
  = Σ_{j=0}^{N1} (−1)^j cj (d^j/dx^j)[(x − a)^{N1}(x − b)^{N2+1}]|_{x=a}
    + Σ_{j=0}^{N2} (−1)^j dj (d^j/dx^j)[(x − a)^{N1}(x − b)^{N2+1}]|_{x=b}
  = (−1)^{N1} c_{N1} N1! (a − b)^{N2+1}.  (2.6.38)

Since by assumption a ≠ b, from (2.6.38) we obtain c_{N1} = 0. This contradicts (2.6.37) and shows that necessarily N1 = 0. Similarly, we obtain that N2 = 0, hence any solution of (2.6.34) has the form

u = c δ_a + d δ_b in D′(R), c, d ∈ C.  (2.6.39)

Conversely, it is clear that any distribution as in (2.6.39) solves (2.6.34). To sum up, (2.6.39) describes all solutions of (2.6.34).
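Remark 2.70 above identified P.V.(1/x) + cδ as the general solution of xu = 1 in D′(R). A quadrature sketch (our own setup, not from the text) implements ⟨P.V.(1/x), ψ⟩ = ∫_0^∞ (ψ(x) − ψ(−x))/x dx and checks that ⟨x(P.V.(1/x) + cδ), ϕ⟩ agrees with ⟨1, ϕ⟩ = ∫_R ϕ dx for a sample bump, for any value of c.

```python
import math

def bump(c0, r):
    def phi(x):
        t = (x - c0) / r
        return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1.0 else 0.0
    return phi

def pv_recip(psi, b=10.0, n=50000):
    # <P.V.(1/x), psi> = \int_0^inf (psi(x) - psi(-x)) / x dx, midpoint rule
    h = b / n
    s = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        s += (psi(x) - psi(-x)) / x
    return s * h

def integral(psi, a=-10.0, b=10.0, n=50000):
    h = (b - a) / n
    return h * sum(psi(a + (k + 0.5) * h) for k in range(n))

c = 4.2                        # any c works: <x delta, phi> = (x phi)(0) = 0
phi = bump(0.7, 1.5)           # supported in (-0.8, 2.2)
x_phi = lambda x: x * phi(x)

lhs = pv_recip(x_phi) + c * 0.0 * phi(0.0)   # <x (P.V.(1/x) + c delta), phi>
rhs = integral(phi)                          # <1, phi>
print(lhs, rhs)                              # agree up to quadrature error
```

The δ-term contributes (xϕ)(0) = 0 regardless of c, which is exactly why xu = 1 determines u only up to a multiple of δ.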
2.7 Tensor Product of Distributions
Let m, n ∈ N, let U be an open subset of R^m, let V be an open subset of R^n, and consider two complex-valued functions f ∈ L1loc(U) and g ∈ L1loc(V). Then the tensor product of the functions f and g is defined as

f ⊗ g : U × V → C, (f ⊗ g)(x, y) := f(x)g(y) for each (x, y) ∈ U × V.  (2.7.1)

In particular, it follows from (2.7.1) that f ⊗ g ∈ L1loc(U × V). When f, g, and f ⊗ g are regarded as distributions, for each ϕ ∈ C0∞(U × V) we obtain that

⟨f ⊗ g, ϕ⟩ = ∫_{U×V} f(x)g(y)ϕ(x, y) dx dy = ∫_U f(x) ( ∫_V g(y)ϕ(x, y) dy ) dx
           = ∫_V g(y) ( ∫_U f(x)ϕ(x, y) dx ) dy,  (2.7.2)

or, concisely,

⟨f ⊗ g, ϕ⟩ = ⟨f(x), ⟨g(y), ϕ(x, y)⟩⟩ = ⟨g(y), ⟨f(x), ϕ(x, y)⟩⟩.  (2.7.3)

If, in addition, the test function ϕ has the form ϕ1 ⊗ ϕ2, for some ϕ1 ∈ C0∞(U) and ϕ2 ∈ C0∞(V), then (2.7.3) becomes

⟨f ⊗ g, ϕ1 ⊗ ϕ2⟩ = ⟨f, ϕ1⟩ ⟨g, ϕ2⟩.  (2.7.4)
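Identity (2.7.4) can be verified numerically on sample data (our own choices of f, g, and bumps, not from the text): a two-dimensional midpoint-rule pairing of f ⊗ g against ϕ1 ⊗ ϕ2 factorizes, per Fubini, into the product of two one-dimensional pairings.

```python
import math

def bump(c, r):
    def phi(x):
        t = (x - c) / r
        return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1.0 else 0.0
    return phi

f = lambda x: abs(x)            # f in L1loc(R)
g = lambda y: math.exp(-y)      # g in L1loc(R)
phi1, phi2 = bump(0.0, 1.0), bump(2.0, 1.0)

A, B, N = -4.0, 4.0, 1000
H = (B - A) / N
grid = [A + (k + 0.5) * H for k in range(N)]

def pair(w, phi):
    # one-dimensional pairing <u_w, phi>, midpoint rule on the shared grid
    return H * sum(w(x) * phi(x) for x in grid)

# two-dimensional pairing <f (x) g (y), phi1(x) phi2(y)> as a genuine double sum
gp = [g(y) * phi2(y) for y in grid]
lhs = 0.0
for x in grid:
    wx = f(x) * phi1(x)
    if wx != 0.0:
        for v in gp:
            lhs += wx * v
lhs *= H * H

rhs = pair(f, phi1) * pair(g, phi2)
print(lhs, rhs)    # equal, illustrating (2.7.4)
```

Because the integrand is separable, the double sum collapses into a product of single sums, which is precisely the discrete shadow of (2.7.2).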
This suggests a natural way to define tensor products of general distributions granted the availability of the following density result for D(U × V).

Proposition 2.73. Let m, n ∈ N, let U be an open subset of R^m, and let V be an open subset of R^n. Then the set

C0∞(U) ⊗ C0∞(V) := { Σ_{j=1}^N ϕj ⊗ ψj : ϕj ∈ C0∞(U), ψj ∈ C0∞(V), N ∈ N }  (2.7.5)

is sequentially dense in D(U × V).

Before proceeding with the proof of Proposition 2.73 we state and prove two lemmas.

Lemma 2.74. Suppose that the sequence {fj}j∈N ⊂ E(R^n) and f ∈ E(R^n) are such that

‖∂^α fj − ∂^α f‖_{L∞(B(0,j))} < 1/j for all α ∈ N0^n satisfying |α| ≤ j.  (2.7.6)

Then fj → f in E(R^n) as j → ∞.
Proof. Suppose {fj}j∈N and f satisfy the current hypotheses, and let ε ∈ (0, ∞), α ∈ N0^n, and a compact subset K of R^n be fixed. Then there exists j0 ∈ N such that K ⊂ B(0, j0). If we now fix some j* ∈ N with the property that j* > max{1/ε, |α|, j0}, it follows that for each j ≥ j* we have |α| ≤ j* ≤ j and

‖∂^α fj − ∂^α f‖_{L∞(K)} ≤ ‖∂^α fj − ∂^α f‖_{L∞(B(0,j))} < 1/j ≤ 1/j* < ε.  (2.7.7)

Hence, ∂^α fj converges uniformly on K to ∂^α f. Since α and K are arbitrary, we conclude that fj → f in E(R^n) as j → ∞.
Lemma 2.75. For every f ∈ C0∞(R^n) there exists a sequence {Pj}j∈N of polynomials in R^n such that Pj → f in E(R^n) as j → ∞.

Proof. For each t > 0 define the function

f_t(x) := (4πt)^{−n/2} ∫_{R^n} e^{−|x−y|²/(4t)} f(y) dy, ∀ x ∈ R^n.  (2.7.8)
The first goal is to prove that

f_t → f in E(R^n) as t → 0+.  (2.7.9)

To this end, consider the function u defined by

u(x, t) := f_t(x) for x ∈ R^n, t > 0, and u(x, 0) := f(x) for x ∈ R^n, ∀ (x, t) ∈ R^n × [0, ∞).  (2.7.10)

From the definition it is clear that u is continuous on R^n × (0, ∞). We claim that, in fact, u is continuous on R^n × [0, ∞). Indeed, by making use of the change of variables x − y = 2√t z, we may write

u(x, t) = π^{−n/2} ∫_{R^n} e^{−|z|²} f(x − 2√t z) dz, ∀ x ∈ R^n, ∀ t > 0.  (2.7.11)

Hence, for each x* ∈ R^n, Lebesgue's dominated convergence theorem (cf. Theorem 13.12) gives

lim_{x→x*, t→0+} u(x, t) = f(x*) π^{−n/2} ∫_{R^n} e^{−|z|²} dz = f(x*) = u(x*, 0),  (2.7.12)

proving that u is continuous at points of the form (x*, 0).

Being continuous on R^n × [0, ∞), u is uniformly continuous on every compact subset of R^n × [0, ∞), thus uniformly continuous on sets of the form K × [0, 1], where K ⊂ R^n is compact. Fix such a compact K and fix ε ∈ (0, 1) arbitrary. Then there exists δ > 0 such that if (x1, t1), (x2, t2) ∈ K × [0, 1] satisfy
|(x1, t1) − (x2, t2)| ≤ δ then |u(x1, t1) − u(x2, t2)| ≤ ε. In particular, if x ∈ K and t ∈ (0, δ), then |u(x, t) − u(x, 0)| ≤ ε, that is, ‖f_t − f‖_{L∞(K)} ≤ ε for all 0 < t < δ. This proves that lim_{t→0+} f_t(x) = f(x) uniformly on compact sets in R^n.

The derivatives of f_t enjoy the same type of properties as f_t. More precisely, by a direct computation (involving also the change of variables x − y = 2√t z) we see that for each t > 0 and each α ∈ N0^n we have

∂_x^α f_t(x) = (4πt)^{−n/2} ∫_{R^n} e^{−|x−y|²/(4t)} (∂^α f)(y) dy, ∀ x ∈ R^n.  (2.7.13)

In addition, as before, we obtain that lim_{t→0+} (∂_x^α f_t) = ∂^α f uniformly on compact sets in R^n. This completes the proof of (2.7.9).

Next, recall that the Taylor expansion of the function e^x, x ∈ R, about the origin is e^x = Σ_{j=0}^∞ x^j/j!, with the series converging uniformly on compact subsets of R. In addition, for each N ∈ N, the remainder R_N(x) := e^x − Σ_{j=0}^N x^j/j! satisfies |R_N(x)| ≤ e^C C^{N+1}/(N+1)! whenever |x| ≤ C. Fix t > 0 and a compact subset K of R^n. Then there exists C > 0 such that |x−y|²/(4t) ≤ C for every x ∈ K and every y ∈ supp f, so

lim_{N→∞} |R_N(−|x − y|²/(4t))| ≤ lim_{N→∞} e^C C^{N+1}/(N + 1)! = 0.  (2.7.14)

Consequently,

∂_x^α f_t(x) = (4πt)^{−n/2} ∫_{R^n} Σ_{j=0}^∞ (1/j!) (−|x − y|²/(4t))^j ∂^α f(y) dy,  (2.7.15)

for each α ∈ N0^n and each t > 0, and the series in (2.7.15) converges uniformly for x in a compact set in R^n. In addition, integrating by parts, we may write

∂_x^α f_t(x) = (4πt)^{−n/2} ∫_{R^n} Σ_{j=0}^∞ ((−1)^{|α|}/j!) ∂_y^α[(−|x − y|²/(4t))^j] f(y) dy,  (2.7.16)

where, for each t > 0 fixed, the series in (2.7.16) converges uniformly on compact sets in R^n. Hence, if for each t > 0 we define the sequence of polynomials

P_{t,k}(x) := (4πt)^{−n/2} ∫_{R^n} Σ_{j=0}^k (1/j!) (−|x − y|²/(4t))^j f(y) dy, ∀ x ∈ R^n, ∀ k ∈ N,  (2.7.17)

then the above proof implies that for each t > 0 we have P_{t,k} → f_t in E(R^n) as k → ∞.
Next, we claim that there exists a sequence of positive numbers {tj}j∈N with the property that for each j ∈ N we have

‖∂^α f_{tj} − ∂^α f‖_{L∞(B(0,j))} < 1/(2j) for every α ∈ N0^n with |α| ≤ j.  (2.7.18)
To construct a sequence $\{t_j\}_{j\in\mathbb{N}}$ satisfying (2.7.18) we proceed by induction. First, consider the compact set $\overline{B(0,1)}$. For each $\alpha\in\mathbb{N}_0^n$ satisfying $|\alpha|\leq 1$, based on (2.7.9), there exists $1_\alpha\in\mathbb{N}$ with the property that
$$\|\partial^{\alpha}f_t-\partial^{\alpha}f\|_{L^{\infty}(B(0,1))}<\tfrac12\quad\text{for all }t\in\big(0,1/1_\alpha\big).\tag{2.7.19}$$
Define $t_1:=\min\big\{\tfrac{1}{1_\alpha}:\alpha\in\mathbb{N}_0^n,\ |\alpha|\leq 1\big\}$. Suppose that, for some $j\geq 2$, we have already selected $t_1,\dots,t_{j-1}$ satisfying (2.7.18). Let $\alpha\in\mathbb{N}_0^n$ be such that $|\alpha|\leq j$. Based on (2.7.9), there exists $j_\alpha\in\mathbb{N}$ with the property that $j_\alpha\geq (j-1)_\alpha$ whenever $|\alpha|\leq j-1$, and such that
$$\|\partial^{\alpha}f_t-\partial^{\alpha}f\|_{L^{\infty}(B(0,j))}<\tfrac{1}{2j}\quad\text{for all }t\in\big(0,1/j_\alpha\big).\tag{2.7.20}$$
Now define $t_j:=\min\big\{\tfrac{1}{j_\alpha}:\alpha\in\mathbb{N}_0^n,\ |\alpha|\leq j\big\}$. In particular, this choice ensures that $t_j\leq t_{j-1}$. Proceeding by induction it follows that the sequence $\{t_j\}_{j\in\mathbb{N}}$ constructed in this manner satisfies (2.7.18).

Our next claim is that for each $t>0$ and each $j\in\mathbb{N}$ there exists $k_{t,j}\in\mathbb{N}$ such that
$$\|\partial^{\alpha}P_{t,k_{t,j}}-\partial^{\alpha}f_t\|_{L^{\infty}(B(0,j))}<\tfrac{1}{2j}\quad\text{for every }\alpha\in\mathbb{N}_0^n\text{ with }|\alpha|\leq j.\tag{2.7.21}$$
To prove this, fix $t>0$ and $j\in\mathbb{N}$. Since $P_{t,k}\xrightarrow[k\to\infty]{\mathcal{E}(\mathbb{R}^n)}f_t$ and $\overline{B(0,j)}$ is a compact subset of $\mathbb{R}^n$, it follows that for each $\alpha\in\mathbb{N}_0^n$ satisfying $|\alpha|\leq j$ there exists $k_\alpha^*\in\mathbb{N}$ such that
$$\|\partial^{\alpha}P_{t,k}-\partial^{\alpha}f_t\|_{L^{\infty}(B(0,j))}<\tfrac{1}{2j}\quad\text{for }k\geq k_\alpha^*.\tag{2.7.22}$$
If we now define $k_{t,j}:=\max\{k_\alpha^*:\alpha\in\mathbb{N}_0^n,\ |\alpha|\leq j\}$, then estimate (2.7.21) holds for this $k_{t,j}$. This completes the proof of the claim.

Here is the end-game in the proof of the lemma. For each $j\in\mathbb{N}$, let $t_j>0$ be as constructed above so that (2.7.18) holds, for this $t_j$ let $k_{t_j,j}$ be as defined above so that (2.7.21) holds, and set $P_j:=P_{t_j,k_{t_j,j}}$. Hence, for each $j\in\mathbb{N}$ and each $\alpha\in\mathbb{N}_0^n$ satisfying $|\alpha|\leq j$ we have
$$\|\partial^{\alpha}P_j-\partial^{\alpha}f\|_{L^{\infty}(B(0,j))}=\|\partial^{\alpha}P_{t_j,k_{t_j,j}}-\partial^{\alpha}f\|_{L^{\infty}(B(0,j))}\leq\tfrac{1}{2j}+\tfrac{1}{2j}=\tfrac1j.\tag{2.7.23}$$
The fact that $P_j\xrightarrow[j\to\infty]{\mathcal{E}(\mathbb{R}^n)}f$ now follows from (2.7.23) by invoking Lemma 2.74.
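The construction above can be probed numerically. The following Python sketch (ours, not part of the book) mollifies a one-dimensional bump function with the heat kernel and compares the result with the truncated-series polynomial $P_{t,k}$ from (2.7.17); all grid sizes and parameter values are illustrative choices.

```python
import numpy as np
from math import factorial

# Numerical sketch (illustrative): compute the heat mollification f_t of a
# bump function f and compare it with the polynomial P_{t,k} obtained by
# truncating the Taylor series of the exponential, as in (2.7.17).

t = 1.0
y = np.linspace(-1.0, 1.0, 2001)          # supp f sits inside [-1, 1]
x = np.linspace(-1.0, 1.0, 101)           # compact set on which we compare
dy = y[1] - y[0]

# a standard smooth bump (guarded against division by zero at the endpoints)
f = np.where(np.abs(y) < 1.0,
             np.exp(-1.0 / (1.0 - np.minimum(y**2, 0.999999))), 0.0)

z = -(x[:, None] - y[None, :])**2 / (4.0 * t)   # the (signed) exponent -|x-y|^2/(4t)

# heat mollification f_t by quadrature
ft = (4.0 * np.pi * t)**-0.5 * (np.exp(z) * f[None, :]).sum(axis=1) * dy

# truncated series P_{t,k}; here |z| <= 1 on this grid, so the remainder
# bound e^C C^(k+1)/(k+1)! from the proof is astronomically small for k = 20
k = 20
series = sum(z**j / factorial(j) for j in range(k + 1))
Ptk = (4.0 * np.pi * t)**-0.5 * (series * f[None, :]).sum(axis=1) * dy

err = np.max(np.abs(Ptk - ft))
print(err)   # essentially machine precision
```

The factorial decay of the Taylor remainder is what makes the sup-norm error collapse well before $k$ reaches 20 on this compact set.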
Before turning to the proof of Proposition 2.73 we introduce some notation. For $m,n\in\mathbb{N}$, if $U$ is an open subset of $\mathbb{R}^m$, $V$ is an open subset of $\mathbb{R}^n$, and $A\subseteq U\times V$, the projections of $A$ on $U$ and $V$, respectively, are
$$\pi_U(A):=\{x\in U:\ \exists\,y\in V\ \text{such that}\ (x,y)\in A\},\qquad \pi_V(A):=\{y\in V:\ \exists\,x\in U\ \text{such that}\ (x,y)\in A\}.\tag{2.7.24}$$
We are ready to present the proof of the density result stated at the beginning of this section.
CHAPTER 2. THE SPACE D (Ω) OF DISTRIBUTIONS
Proof of Proposition 2.73. Let ϕ ∈ C0∞(U × V). By Lemma 2.75, there exists a sequence of polynomials $\{P_j\}_{j\in\mathbb{N}}$ in $\mathbb{R}^{n+m}$ with the property that $P_j\xrightarrow[j\to\infty]{\mathcal{E}(\mathbb{R}^{n+m})}\varphi$. Set K := supp ϕ, K1 := πU(K), and K2 := πV(K). Then K1 and K2 are compact sets in Rm and Rn, respectively. Fix a compact set L1 ⊂ U such that K1 ⊂ L̊1 and a compact set L2 ⊂ V such that K2 ⊂ L̊2. Then there exist a function ϕ1 ∈ C0∞(U) with supp ϕ1 ⊆ L1 and ϕ1 = 1 in a neighborhood of K1, and a function ϕ2 ∈ C0∞(V) satisfying supp ϕ2 ⊆ L2 and ϕ2 = 1 in a neighborhood of K2. Consequently,
$$\varphi_1\otimes\varphi_2\in C_0^\infty(\mathbb{R}^{n+m})\quad\text{and}\quad\operatorname{supp}(\varphi_1\otimes\varphi_2)\subseteq L_1\times L_2.\tag{2.7.25}$$
By (2) in Exercise 1.10 it follows that
$$(\varphi_1\otimes\varphi_2)P_j\xrightarrow[j\to\infty]{\mathcal{E}(\mathbb{R}^{n+m})}(\varphi_1\otimes\varphi_2)\varphi.\tag{2.7.26}$$
Hence, since supp[(ϕ1 ⊗ ϕ2)Pj] ⊆ L1 × L2 for every j ∈ N and since (ϕ1 ⊗ ϕ2)ϕ = ϕ, we obtain
$$(\varphi_1\otimes\varphi_2)P_j\xrightarrow[j\to\infty]{\mathcal{D}(U\times V)}\varphi.\tag{2.7.27}$$
Upon observing that (ϕ1 ⊗ ϕ2)Pj ∈ C0∞(U) ⊗ C0∞(V) for every j ∈ N, the desired conclusion follows.

The next proposition is another important ingredient used to define the tensor product of two distributions.

Proposition 2.76. Let m, n ∈ N, U be an open subset of Rm, and V be an open subset of Rn. For each u ∈ D′(U) the following properties hold.

(a) If for each ϕ ∈ C0∞(U × V) we define the mapping
$$\psi:V\to\mathbb{C},\qquad\psi(y):=\langle u(x),\varphi(x,y)\rangle\quad\forall\,y\in V,\tag{2.7.28}$$
then ψ ∈ C0∞(V).

(b) The mapping D(U × V) ∋ ϕ ↦ ψ ∈ D(V), with ψ as defined in (a), is linear and continuous.

Remark 2.77. In the definition of ψ in part (a) of Proposition 2.76, the use of the notation u(x) does NOT mean that the distribution u is evaluated at x, since the latter is not meaningful. The notation ⟨u(x), ϕ(x, y)⟩ should be understood in the following sense: for each fixed y ∈ V, the distribution u acts on the function ϕ(·, y).
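Before the formal proof, the content of Proposition 2.76(a) and of the differentiation formula proved below can be sanity-checked numerically. The sketch here (ours, not the book's) takes u of function type, so that the pairing becomes an ordinary integral; the choices of w, ϕ, the grid, and the point y0 are all illustrative.

```python
import numpy as np

# Numerical sketch (illustrative): when u = u_w is the function-type
# distribution given by a locally integrable w, psi(y) = <u(x), phi(x, y)>
# is an integral in x, and d/dy psi(y) = <u, d/dy phi(., y)> can be checked
# against a central finite difference.

x = np.linspace(-3.0, 3.0, 4001)
dx = x[1] - x[0]
w = np.cos(x)                                   # u = u_w

def bump(xv):                                   # smooth cutoff supported in [-2, 2]
    return np.where(np.abs(xv) < 2.0,
                    np.exp(-1.0 / (4.0 - np.minimum(xv**2, 3.999999))), 0.0)

def psi(yv):                                    # <u, phi(., y)> with phi = bump(x) sin(x y)
    return (w * bump(x) * np.sin(x * yv)).sum() * dx

y0, h = 0.7, 1e-5
fd = (psi(y0 + h) - psi(y0 - h)) / (2.0 * h)    # finite-difference psi'(y0)

paired = (w * bump(x) * x * np.cos(x * y0)).sum() * dx   # <u, d/dy phi(., y0)>
diff = abs(fd - paired)
print(diff)                                     # O(h^2), i.e. tiny
```

The agreement reflects exactly the mechanism of the proof: the difference quotient of ψ is the pairing of u against the difference quotient of ϕ in the y-variable.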
Proof of Proposition 2.76. Fix ϕ ∈ C0∞(U × V) and let K := supp ϕ, which is a compact subset of U × V. Also, consider ψ as in (2.7.28) and recall the projections πU, πV from (2.7.24). Then clearly supp ψ ⊆ πV(K), thus ψ has compact support. Next we prove that ψ is continuous on V. Let $\{y_j\}_{j\in\mathbb{N}}$ be a sequence in V such that $\lim_{j\to\infty}y_j=y_0$ for some y0 ∈ V. Since u ∈ D′(U), based on the definition of ψ and Fact 2.2, in order to conclude that $\lim_{j\to\infty}\psi(y_j)=\psi(y_0)$ it suffices to show that $\varphi(\cdot,y_j)\xrightarrow[j\to\infty]{\mathcal{D}(U)}\varphi(\cdot,y_0)$. It is clear that for every j ∈ N we have ϕ(·, yj) ∈ C0∞(U) and supp ϕ(·, yj) ⊆ πU(K). Moreover, since ϕ ∈ C∞(U × V) it follows that $\partial_x^\alpha\varphi$ is continuous on K for every $\alpha\in\mathbb{N}_0^m$, thus uniformly continuous on K. Consequently, $(\partial_x^\alpha\varphi)(\cdot,y_j)\to(\partial_x^\alpha\varphi)(\cdot,y_0)$ uniformly on πU(K) as j → ∞. This completes the proof of the fact that ψ is continuous on V.

To continue, we claim that ψ is of class C1 on V. Fix y ∈ V and some j ∈ {1, ..., n}. Recall that ej is the unit vector in Rn with the jth component equal to 1, and let h ∈ R \ {0}. Since V is open, there exists ε0 > 0 such that if |h| < ε0 then y + hej ∈ V. Make the standing assumption that |h| < ε0 and set
$$R_h(x,y):=\frac{\varphi(x,y+he_j)-\varphi(x,y)}{h}-\frac{\partial\varphi}{\partial y_j}(x,y),\quad\forall\,x\in U.\tag{2.7.29}$$
Then
$$\frac{\psi(y+he_j)-\psi(y)}{h}-\Big\langle u(x),\frac{\partial\varphi}{\partial y_j}(x,y)\Big\rangle=\big\langle u(x),R_h(x,y)\big\rangle.\tag{2.7.30}$$
Suppose
$$\lim_{h\to 0}R_h(\cdot,y)=0\ \text{in}\ \mathcal{D}(U).\tag{2.7.31}$$
Then $\lim_{h\to 0}\langle u,R_h(\cdot,y)\rangle=0$, which in view of (2.7.30) implies
$$\partial_j\psi(y)=\Big\langle u,\frac{\partial\varphi}{\partial y_j}(\cdot,y)\Big\rangle.\tag{2.7.32}$$
Moreover, since $\frac{\partial\varphi}{\partial y_j}\in C_0^\infty(U\times V)$, by reasoning as in the proof of the continuity of ψ on V we also obtain that ∂jψ is continuous on V. Hence, since j in {1, ..., n} is arbitrary, to complete the proof of the claim we are left with showing (2.7.31). Clearly supp[Rh(·, y)] ⊆ πU(K). Applying Taylor's formula to ϕ in the variable y for each fixed x ∈ U we obtain
$$\varphi(x,y+he_j)=\varphi(x,y)+h\frac{\partial\varphi}{\partial y_j}(x,y)+h^2\int_0^1(1-t)\frac{\partial^2\varphi}{\partial y_j^2}(x,y+the_j)\,dt.\tag{2.7.33}$$
Hence, (2.7.29) and (2.7.33) imply
$$R_h(x,y)=h\int_0^1(1-t)\frac{\partial^2\varphi}{\partial y_j^2}(x,y+the_j)\,dt.\tag{2.7.34}$$
Consequently, for every $\beta\in\mathbb{N}_0^m$ we have
$$\partial_x^\beta R_h(x,y)=h\int_0^1(1-t)\,\partial_x^\beta\frac{\partial^2\varphi}{\partial y_j^2}(x,y+the_j)\,dt,\quad\forall\,x\in U.\tag{2.7.35}$$
Since the integral in the right-hand side of (2.7.35) is bounded by a constant independent of h, x, and y, it follows that $\lim_{h\to 0}\partial_x^\beta R_h(\cdot,y)=0$ uniformly on πU(K). Combined with the support information on Rh(·, y), this implies (2.7.31) and completes the proof of the claim that ψ ∈ C1(V). By induction, we obtain ψ ∈ C∞(V), completing the proof of the statement in part (a) of the proposition.

The linearity of the mapping in part (b) is immediate since u is a linear mapping. To show that the mapping in (b) is also continuous, since D(V) is locally convex, by Theorem 13.5 it suffices to prove that it is sequentially continuous. To this end, let $\varphi_j\xrightarrow[j\to\infty]{\mathcal{D}(U\times V)}\varphi$. In particular, there exists a compact subset K of U × V such that supp ϕj ⊆ K for all j ∈ N and
$$\partial^\alpha\varphi_j\xrightarrow[j\to\infty]{}\partial^\alpha\varphi\ \text{uniformly on }K,\ \text{for each }\alpha\in\mathbb{N}_0^{m+n}.\tag{2.7.36}$$
To proceed, for each y ∈ V set ψ(y) := ⟨u, ϕ(·, y)⟩ and ψj(y) := ⟨u, ϕj(·, y)⟩ for each j ∈ N. The goal is to prove that $\psi_j\xrightarrow[j\to\infty]{\mathcal{D}(V)}\psi$. Applying Proposition 2.4 to the distribution u and the compact πU(K) yields k ∈ N0 and C > 0 for which (2.1.1) holds with K replaced by πU(K). Then, for each $\beta\in\mathbb{N}_0^n$, we have
$$\sup_{y\in\pi_V(K)}\big|\partial^\beta\psi_j(y)-\partial^\beta\psi(y)\big|=\sup_{y\in\pi_V(K)}\big|\big\langle u,\partial_y^\beta\varphi_j(\cdot,y)-\partial_y^\beta\varphi(\cdot,y)\big\rangle\big|$$
$$\leq C\sup_{y\in\pi_V(K)}\ \sup_{x\in\pi_U(K),\,|\gamma|\leq k}\big|\partial_x^\gamma\partial_y^\beta\varphi_j(x,y)-\partial_x^\gamma\partial_y^\beta\varphi(x,y)\big|$$
$$=C\sup_{(x,y)\in K,\,|\gamma|\leq k}\big|\partial_x^\gamma\partial_y^\beta\varphi_j(x,y)-\partial_x^\gamma\partial_y^\beta\varphi(x,y)\big|\xrightarrow[j\to\infty]{}0,\tag{2.7.37}$$
where $\gamma\in\mathbb{N}_0^m$ and the convergence to zero in (2.7.37) is due to (2.7.36) applied for α := (γ, β). Thus, $\psi_j\xrightarrow[j\to\infty]{\mathcal{D}(V)}\psi$ and the proof of the statement in part (b) is complete.

We are now ready to define the tensor product of distributions.

Theorem 2.78. Let m, n ∈ N, U be an open subset of Rm, and V be an open subset of Rn. Consider u ∈ D′(U) and v ∈ D′(V). Then the following statements are true.
(i) There exists a unique distribution u ⊗ v ∈ D′(U × V), called the tensor product of u and v, with the property that
$$\langle u\otimes v,\varphi_1\otimes\varphi_2\rangle=\langle u,\varphi_1\rangle\langle v,\varphi_2\rangle,\quad\forall\,\varphi_1\in C_0^\infty(U),\ \forall\,\varphi_2\in C_0^\infty(V).\tag{2.7.38}$$
(ii) The tensor product just defined satisfies u ⊗ v = v ⊗ u in D′(U × V).

Proof. For each ϕ ∈ C0∞(U × V) consider the function ψ(y) := ⟨u(x), ϕ(x, y)⟩ for y ∈ V. By Proposition 2.76, we have ψ ∈ C0∞(V) and the mapping
$$\mathcal{D}(U\times V)\ni\varphi\mapsto\psi\in\mathcal{D}(V)\quad\text{is linear and continuous.}\tag{2.7.39}$$
Hence, ⟨v, ψ⟩ is meaningful and we may define
$$u\otimes v:\mathcal{D}(U\times V)\to\mathbb{C},\qquad\langle u\otimes v,\varphi\rangle:=\big\langle v(y),\langle u(x),\varphi(x,y)\rangle\big\rangle\ \text{for every }\varphi\in C_0^\infty(U\times V).\tag{2.7.40}$$
As defined, the mapping u ⊗ v, being the composition of two linear and continuous mappings, is itself linear and continuous. In addition, if ϕ1 ∈ C0∞(U) and ϕ2 ∈ C0∞(V), then
$$\langle u\otimes v,\varphi_1\otimes\varphi_2\rangle=\big\langle v(y),\langle u(x),\varphi_1(x)\varphi_2(y)\rangle\big\rangle=\big\langle v(y),\varphi_2(y)\langle u(x),\varphi_1(x)\rangle\big\rangle=\langle v(y),\varphi_2(y)\rangle\langle u(x),\varphi_1(x)\rangle,\tag{2.7.41}$$
thus the mapping u ⊗ v defined in (2.7.40) satisfies (2.7.38).

To prove the uniqueness statement in part (i), suppose w1, w2 ∈ D′(U × V) are such that
$$\langle w_j,\varphi_1\otimes\varphi_2\rangle=\langle u,\varphi_1\rangle\langle v,\varphi_2\rangle,\quad j=1,2,\tag{2.7.42}$$
for every ϕ1 ∈ C0∞(U), ϕ2 ∈ C0∞(V). Then it follows that ⟨w1, ϕ⟩ = ⟨w2, ϕ⟩ for every ϕ ∈ C0∞(U) ⊗ C0∞(V), which in concert with Proposition 2.73 and the continuity of w1 and w2 implies w1 = w2 in D′(U × V). This completes the proof of the statement in (i).

As for the statement in (ii), observe that based on (2.7.40) we have v ⊗ u : D(U × V) → C and
$$\langle v\otimes u,\varphi\rangle=\big\langle u(x),\langle v(y),\varphi(x,y)\rangle\big\rangle\quad\text{for every }\varphi\in C_0^\infty(U\times V).\tag{2.7.43}$$
In particular, for every ϕ1 ∈ C0∞(U) and ϕ2 ∈ C0∞(V) we have
$$\langle v\otimes u,\varphi_1\otimes\varphi_2\rangle=\langle u,\varphi_1\rangle\langle v,\varphi_2\rangle=\langle u\otimes v,\varphi_1\otimes\varphi_2\rangle.\tag{2.7.44}$$
The uniqueness result from part (i) now implies u ⊗ v = v ⊗ u in D′(U × V).
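For function-type distributions the commutativity just proved is nothing other than Fubini's theorem, which is easy to see numerically. The sketch below is ours (not the book's); f, g, and ϕ are arbitrary smooth illustrative choices.

```python
import numpy as np

# Numerical sketch (illustrative): for u = u_f and v = u_g of function type,
# the two iterated pairings <v(y), <u(x), phi(x,y)>> and <u(x), <v(y), phi(x,y)>>
# from the proof of Theorem 2.78(ii) are the two orders of integration of a
# double integral, so their equality is simply Fubini's theorem.

x = np.linspace(-1.0, 1.0, 801)
y = np.linspace(-1.0, 1.0, 801)
dx, dy = x[1] - x[0], y[1] - y[0]
f, g = np.exp(-x**2), np.cos(y)
phi = np.sin(x[:, None] + 2.0 * y[None, :])     # a non-separable "test function"

inner_x = (f[:, None] * phi).sum(axis=0) * dx   # y -> <u(x), phi(x, y)>
lhs = (g * inner_x).sum() * dy                  # <v(y), <u(x), phi(x, y)>>

inner_y = (phi * g[None, :]).sum(axis=1) * dy   # x -> <v(y), phi(x, y)>
rhs = (f * inner_y).sum() * dx                  # <u(x), <v(y), phi(x, y)>>
print(abs(lhs - rhs))
```

Both quantities are the same finite double sum grouped in two different orders, so they agree up to floating-point rounding.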
Remark 2.79. If u ∈ D′(U) and v ∈ L1loc(V), then the statement in part (ii) in Theorem 2.78 becomes
$$\Big\langle u(x),\int_V v(y)\varphi(x,y)\,dy\Big\rangle=\int_V v(y)\big\langle u(x),\varphi(x,y)\big\rangle\,dy,\quad\forall\,\varphi\in C_0^\infty(U\times V).\tag{2.7.45}$$
The interpretation of (2.7.45) is that the distribution u commutes with the integral.

We next establish a number of basic properties for the tensor product of distributions.

Theorem 2.80. Let m, n ∈ N, U be an open subset of Rm, and V be an open subset of Rn. Assume that u ∈ D′(U) and v ∈ D′(V). Then the following properties hold.

(a) supp(u ⊗ v) = supp u × supp v.

(b) $\partial_x^\alpha\partial_y^\beta(u\otimes v)=(\partial_x^\alpha u)\otimes(\partial_y^\beta v)$ for every $\alpha\in\mathbb{N}_0^m$ and every $\beta\in\mathbb{N}_0^n$.

(c) (f ⊗ g)·(u ⊗ v) = (fu) ⊗ (gv) for every f ∈ C∞(U) and every g ∈ C∞(V).

(d) The mapping D′(U) × D′(V) ∋ (u, v) ↦ u ⊗ v ∈ D′(U × V) is bilinear and separately continuous.

(e) The tensor product of distributions is associative.

Proof. We start by proving the set-theoretic equality from (a). For the left-to-right inclusion, fix (x0, y0) ∈ supp u × supp v. If C ⊆ U × V is an open neighborhood of (x0, y0), then there exist an open set A ⊆ U containing x0 and an open set B ⊆ V containing y0 such that A × B ⊂ C. In particular, since x0 ∈ supp u and y0 ∈ supp v, there exist ϕ1 ∈ C0∞(A) and ϕ2 ∈ C0∞(B) with the property that ⟨u, ϕ1⟩ ≠ 0 and ⟨v, ϕ2⟩ ≠ 0. If we now set ϕ := ϕ1 ⊗ ϕ2, then ϕ ∈ C0∞(C) and ⟨u ⊗ v, ϕ⟩ = ⟨u, ϕ1⟩⟨v, ϕ2⟩ ≠ 0. Hence (x0, y0) ∈ supp(u ⊗ v), finishing the proof of the left-to-right inclusion in (a).

To prove the opposite inclusion, observe that supp(u ⊗ v) ⊆ supp u × supp v is equivalent to
$$(U\times V)\setminus(\operatorname{supp}u\times\operatorname{supp}v)\subseteq(U\times V)\setminus\operatorname{supp}(u\otimes v).\tag{2.7.46}$$
Write the left-hand side of (2.7.46) as D1 ∪ D2, where D1 := (U \ supp u) × V and D2 := U × (V \ supp v). Note that D1 and D2 are open sets in Rm × Rn. Since the support of a distribution is the smallest relatively closed set outside of which the distribution vanishes, for (2.7.46) to hold it suffices to show that ⟨u ⊗ v, ϕ⟩ = 0 for every ϕ ∈ C0∞(D1 ∪ D2). Fix such a function ϕ, set K := supp ϕ, and consider a partition of unity subordinate to the covering {D1, D2} of K, say
$$\psi_j\in C_0^\infty(D_j),\ j\in\{1,2\},\qquad\psi_1+\psi_2=1\ \text{in a neighborhood of }K.\tag{2.7.47}$$
Then ϕψ1 ∈ C0∞(D1), ϕψ2 ∈ C0∞(D2) (with the understanding that ψ1 and ψ2 have been extended by zero outside their supports), and ϕ = ϕψ1 + ϕψ2 on U × V. Since πU(D1) ∩ supp u = ∅ and πV(D2) ∩ supp v = ∅, we may write
$$\langle u\otimes v,\varphi\rangle=\big\langle v(y),\langle u(x),\varphi(x,y)\psi_1(x,y)\rangle\big\rangle+\big\langle u(x),\langle v(y),\varphi(x,y)\psi_2(x,y)\rangle\big\rangle=0.\tag{2.7.48}$$
This completes the proof of the equality of sets from part (a).

To prove the identity in (b), fix $\alpha\in\mathbb{N}_0^m$, $\beta\in\mathbb{N}_0^n$, and let ϕ1 ∈ C0∞(U), ϕ2 ∈ C0∞(V). Then starting with the definition of distributional derivatives and then using (2.7.38) we may write
$$\langle\partial_x^\alpha\partial_y^\beta(u\otimes v),\varphi_1\otimes\varphi_2\rangle=(-1)^{|\alpha|+|\beta|}\langle u\otimes v,\partial_x^\alpha\partial_y^\beta(\varphi_1\otimes\varphi_2)\rangle=(-1)^{|\alpha|+|\beta|}\langle u\otimes v,\partial_x^\alpha\varphi_1\otimes\partial_y^\beta\varphi_2\rangle$$
$$=(-1)^{|\alpha|+|\beta|}\langle u,\partial_x^\alpha\varphi_1\rangle\langle v,\partial_y^\beta\varphi_2\rangle=\langle\partial_x^\alpha u,\varphi_1\rangle\langle\partial_y^\beta v,\varphi_2\rangle=\langle(\partial_x^\alpha u)\otimes(\partial_y^\beta v),\varphi_1\otimes\varphi_2\rangle.\tag{2.7.49}$$
By the uniqueness statement in part (i) of Theorem 2.78 we deduce from (2.7.49) that $(\partial_x^\alpha u)\otimes(\partial_y^\beta v)=\partial_x^\alpha\partial_y^\beta(u\otimes v)$, completing the proof of the identity in (b).

Moving on to the proof of the statement in (c), note that if f ∈ C∞(U) and g ∈ C∞(V), then f ⊗ g ∈ C∞(U × V). The latter, combined with the definition of multiplication of a distribution with a smooth function and (2.7.38), permits us to write
$$\langle(f\otimes g)\cdot(u\otimes v),\varphi_1\otimes\varphi_2\rangle=\langle u\otimes v,(f\otimes g)\cdot(\varphi_1\otimes\varphi_2)\rangle=\langle u\otimes v,(f\varphi_1)\otimes(g\varphi_2)\rangle=\langle u,f\varphi_1\rangle\langle v,g\varphi_2\rangle$$
$$=\langle fu,\varphi_1\rangle\langle gv,\varphi_2\rangle=\langle(fu)\otimes(gv),\varphi_1\otimes\varphi_2\rangle,\tag{2.7.50}$$
for every ϕ1 ∈ C0∞(U), ϕ2 ∈ C0∞(V). The identity in (c) now follows from (2.7.50) by once again invoking the uniqueness result from part (i) in Theorem 2.78.

The bilinearity of the mapping (u, v) ↦ u ⊗ v is a consequence of the definition of u ⊗ v and (2.2.1). To prove that this mapping is also separately continuous, let $u_j\xrightarrow[j\to\infty]{\mathcal{D}'(U)}u$ and fix v ∈ D′(V). If ϕ ∈ C0∞(U × V), then ⟨v(y), ϕ(x, y)⟩ ∈ C0∞(U) by Proposition 2.76, and we may use Fact 2.18 to write
$$\langle u_j\otimes v,\varphi\rangle=\big\langle u_j(x),\langle v(y),\varphi(x,y)\rangle\big\rangle\xrightarrow[j\to\infty]{}\big\langle u(x),\langle v(y),\varphi(x,y)\rangle\big\rangle=\langle u\otimes v,\varphi\rangle.\tag{2.7.51}$$
Similarly, if u ∈ D′(U) is fixed and $v_j\xrightarrow[j\to\infty]{\mathcal{D}'(V)}v$, then $u\otimes v_j\xrightarrow[j\to\infty]{\mathcal{D}'(U\times V)}u\otimes v$. Upon recalling that the topological vector space D(U × V) is locally convex, Theorem 13.5 may now be invoked to finish the proof of (d).
Finally, we are left with proving the associativity of the tensor product of distributions. To this end, let k ∈ N, W be an open subset of Rk, and w ∈ D′(W) be arbitrary. By Theorem 2.78, u ⊗ v ∈ D′(U × V), v ⊗ w ∈ D′(V × W) and, furthermore,
$$(u\otimes v)\otimes w\in\mathcal{D}'(U\times V\times W),\qquad u\otimes(v\otimes w)\in\mathcal{D}'(U\times V\times W).\tag{2.7.52}$$
The goal is to prove that
$$(u\otimes v)\otimes w=u\otimes(v\otimes w)\quad\text{in}\quad\mathcal{D}'(U\times V\times W).\tag{2.7.53}$$
In this regard we first note that, for each ϕ ∈ C0∞(U), ψ ∈ C0∞(V), and η ∈ C0∞(W), we may write
$$\langle(u\otimes v)\otimes w,(\varphi\otimes\psi)\otimes\eta\rangle=\langle u,\varphi\rangle\langle v,\psi\rangle\langle w,\eta\rangle=\langle u\otimes(v\otimes w),\varphi\otimes(\psi\otimes\eta)\rangle.\tag{2.7.54}$$
Define C0∞(U) ⊗ C0∞(V) ⊗ C0∞(W) as
$$\Big\{\sum_{j=1}^{N}\varphi_j\otimes\psi_j\otimes\eta_j:\ \varphi_j\in C_0^\infty(U),\ \psi_j\in C_0^\infty(V),\ \eta_j\in C_0^\infty(W),\ N\in\mathbb{N}\Big\},\tag{2.7.55}$$
and note that this set is sequentially dense in D(U × V × W) (which can be proved by reasoning as in the proof of Proposition 2.73). Granted this, (2.7.53) is implied by (2.7.54), completing the proof of the theorem.

Exercise 2.81. Let n, m ∈ N and x0 ∈ Rn, y0 ∈ Rm. Prove that δx0 ⊗ δy0 = δ(x0,y0) in D′(Rn+m).

We close this section by revisiting the result proved in Proposition 2.76 and establishing a related version that is going to be useful later on.

Proposition 2.82. Let m, n ∈ N, U be an open subset of Rm, and V be an open subset of Rn. Assume that u ∈ E′(U), ϕ ∈ C∞(U × V), and define the function
$$\psi:V\to\mathbb{C},\qquad\psi(y):=\langle u(x),\varphi(x,y)\rangle,\quad\forall\,y\in V.\tag{2.7.56}$$
Then ψ ∈ C∞(V) and for every $\alpha\in\mathbb{N}_0^n$ we have
$$\partial^\alpha\psi(y)=\big\langle u(x),\partial_y^\alpha\varphi(x,y)\big\rangle,\quad\forall\,y\in V.\tag{2.7.57}$$
2.8. THE CONVOLUTION OF DISTRIBUTIONS IN RN
59
$ % $ % (θψ)(y) = (ηu)(x), θ(y)ϕ(x, y) = u(x), (η ⊗ θ)(x, y)ϕ(x, y) $ % = u(x), (η ⊗ θ)ϕ (x, y) , ∀ y ∈ V. (2.7.58) Given that (η ⊗ θ)ϕ ∈ C0∞ (U × V ), Proposition 2.76 applies and gives that the right-most side of (2.7.58) depends in a C ∞ manner on the variable y ∈ V . Hence, θψ ∈ C ∞ (V ) and since θ ∈ C0∞ (V ) has been arbitrarily chosen we deduce that ψ ∈ C ∞ (V ). This takes care of the first claim in the statement of the proposition. As regards (2.7.57), observe that it suffices to prove this formula in the case |α| = 1, since the general case then follows by iteration. With this in mind, fix j ∈ {1, . . . , n} and pick an arbitrary point y ∗ ∈ V . Also, select some function θ ∈ C0∞ (V ) such that θ = 1 near y ∗ . These properties of θ permit us to compute ∂yj (η ⊗ θ)ϕ (x, y ∗ ) = η(x) (∂j θ)(y ∗ )ϕ(x, y ∗ ) + θ(y ∗ )(∂yj ϕ)(x, y ∗ ) = η(x)(∂yj ϕ)(x, y ∗ ),
∀ x ∈ U.
Making use of (2.7.32), (2.7.58), and (2.7.59), we may then write $ % ∂j ψ(y ∗ ) = ∂j (θψ)(y ∗ ) = u(x), ∂yj (η ⊗ θ)ϕ (x, y ∗ ) $ % = u(x), η(x)(∂yj ϕ)(x, y ∗ ) $ % = u(x), (∂yj ϕ)(x, y ∗ ) .
(2.7.59)
(2.7.60)
This corresponds precisely to formula (2.7.57) written at the point y = y ∗ and for the multi-index α = (0, . . . , 0, 1, 0, . . . , 0) ∈ Nn0 with the nonzero component on the jth slot. As remarked earlier, this suffices to finish the proof.
2.8
The Convolution of Distributions in Rn
Recall that if f, g ∈ L1 (Rn ) then, as a consequence of Fubini’s theorem, the function h : Rn × Rn → C, defined by h(x, y) := f (x − y)g(y) for every (x, y) ∈ Rn × Rn , is absolutely integrable on Rn × Rn and |h(x, y)| dx dy = |f (x − y)g(y)| ddx dy Rn ×Rn
Rn
=
Rn
Rn
|g(y)|
Rn
|f (x − y)| dx dy
= f L1 (Rn ) gL1(Rn ) . Hence, the convolution of f and g defined as n f ∗ g : R → C, (f ∗ g)(x) := f (x − y)g(y) dy Rn
(2.8.1)
for each x ∈ Rn , (2.8.2)
60
CHAPTER 2. THE SPACE D (Ω) OF DISTRIBUTIONS
satisfies f ∗ g ∈ L1 (Rn ) (and a natural estimate). We would like to extend this definition to functions that are not necessarily in L1 (Rn ). Specifically, assume that f, g ∈ L1loc (Rn ) have the property that MB(0,r) := (x, y) ∈ supp f × supp g : x + y ∈ B(0, r) (2.8.3) is a compact set in Rn × Rn for every r ∈ (0, ∞). In this scenario, consider the function G : Rn → [0, ∞] defined by |f (x − y)| |g(y)| dy for each x ∈ Rn . G(x) :=
(2.8.4)
Rn
Note that for every r ∈ (0, ∞), by making a natural change of variables and using the fact that f ⊗ g ∈ L1loc (Rn × Rn ), we obtain G(x) dx = |f (z)||g(y)| dy dz < ∞. (2.8.5) |x|≤r
MB(0,r)
Thus, the function G is locally integrable, hence finite almost everywhere in Rn . In conclusion, whenever f, g ∈ L1loc (Rn ) satisfy (2.8.3), for almost every x ∈ Rn the integral (f ∗ g)(x) := Rn f (x − y)g(y) dy is absolutely convergent and f ∗ g ∈ L1loc (Rn ). Furthermore, having fixed an arbitrary ϕ ∈ C0∞ (Rn ) we may write f (x − y)g(y)ϕ(x) dy dx f ∗ g, ϕ = Rn
Rn
f (z)g(y)ϕ(z + y) dy dz.
= Rn
(2.8.6)
Rn
To proceed, observe that the function ϕΔ defined by ϕΔ : Rn × Rn → C,
ϕΔ (x, y) := ϕ(x + y) for every x, y ∈ Rn , (2.8.7)
satisfies ϕΔ ∈ C ∞ (Rn × Rn ) though, in general, the support of ϕΔ is not compact. Formally, the last double integral in (2.8.6) has the same expression as f ⊗ g, ϕΔ . However, under the current assumptions on ϕ, f , and g, it is not clear that this pairing may be interpreted in the standard distributional sense. Indeed, even though f ⊗ g is a well-defined distribution in D (Rn × Rn ) (cf. Theorem 2.78), the function ϕΔ does not belong to C0∞ (Rn × Rn ), as it lacks the compact support property. Nonetheless, (2.8.3) implies supp ϕΔ ∩ supp (f ⊗ g) is a compact set in Rn × Rn .
(2.8.8)
Theorem 2.52 applies with F := supp (f ⊗ g) and allows us to uniquely extend the action of the distribution f ⊗ g to the set of functions ψ ∈ C ∞ (Rn × Rn ) satisfying the property that supp ψ ∩ supp (f ⊗ g) is a compact set in Rn × Rn . Denote this unique extension by f ⊗ g. Then f ⊗ g, ϕΔ is well-defined, and it Δ is meaningful to set f ∗ g, ϕ := f ⊗ g, ϕ . This discussion justifies making the following definition.
2.8. THE CONVOLUTION OF DISTRIBUTIONS IN RN
61
Definition 2.83. Suppose u, v ∈ D (Rn ) are such that for each compact subset K of Rn the set MK := {(x, y) ∈ supp u × supp v : x + y ∈ K} is compact in Rn × Rn . (2.8.9) Granted this, define the convolution of the distributions u and v as the functional u ∗ v : D(Rn ) → C whose action on each ϕ ∈ C0∞ (Rn ) is given by u ∗ v, ϕ := u ⊗ v, ϕΔ
(2.8.10)
where ϕΔ (x, y) := ϕ(x+y) for every x, y ∈ Rn , and u ⊗ v is the unique extension of u ⊗ v obtained by applying Theorem 2.52 with F := supp (u ⊗ v). Remark 2.84. Retain the context of Definition 2.83. (1) If ϕ ∈ C0∞ (Rn ) and ψ ∈ C0∞ (Rn × Rn ) is such that ψ = 1 on a neighborhood of Msupp ϕ then % $ (2.8.11) u ∗ v, ϕ = u ⊗ v, ψϕΔ . (2) If (2.8.9) holds for the compacts Kj = B(0, j), j ∈ N, then (2.8.9) holds for arbitrary compact sets K ⊂ Rn . (3) Condition (2.8.9) is always satisfied if either u or v is compactly supported. The issue of continuity of the convolution map introduced in Definition 2.83 is discussed next. Theorem 2.85. If u, v ∈ D (Rn ) are such that (2.8.9) holds, then one has u ∗ v ∈ D (Rn ). In particular, the convolution between two distributions in Rn , one of which is compactly supported, is always well-defined and is a distribution in Rn . Proof. Let u, v ∈ D (Rn ) be such that (2.8.9) is satisfied. From Theorem 2.52, D(Rn )
we have that u ⊗ v is linear, hence u ∗ v is linear as well. Let ϕj −−−−→ 0. j→∞
Then there exists a compact subset K of Rn such that supp ϕj ⊆ K for each j ∈ N, and lim ∂ α ϕj = 0 uniformly on K, for every α ∈ Nn0 . In particular, j→∞
Msupp ϕj ⊆ MK for every j ∈ N. Hence, if we fix ψ ∈ C0∞ (Rn × Rn ) such that ψ = 1 in a neighborhood of MK , then part (1) in Remark 2.84 gives % $ ∀ j ∈ N. (2.8.12) u ∗ v, ϕj = u ⊗ v, ψϕΔ j , Moreover, we claim that
D(Rn ×Rn )
ψ ϕΔ −−−−−−→ 0. j − j→∞
(2.8.13)
To prove this claim, note that supp (ψ ϕΔ j ) ⊆ supp ψ for each j ∈ N and for each α1 , α2 , β1 , β2 ∈ Nn0 we have α1 β1 sup (∂x ∂y ψ)(x, y)(∂xα2 ∂yβ2 ϕΔ j )(x, y) (x,y)∈supp ψ
CHAPTER 2. THE SPACE D (Ω) OF DISTRIBUTIONS
62 ≤
α β ∂x 1 ∂y 1 ψ(x, y)∂ α2 +β2 ϕj L∞ (Rn )
sup (x,y)∈supp ψ
≤ ∂xα1 ∂yβ1 ψL∞ (supp ψ) ∂ α2 +β2 ϕj L∞ (K) −−−→ 0. j→∞
(2.8.14)
Hence, (2.8.13) follows by combining (2.8.14) with Leibniz’s formula (13.2.4). Since u ⊗ v ∈ D (Rn × Rn ), from (2.8.12) and (2.8.13) we deduce % $ −−−→ 0. (2.8.15) u ∗ v, ϕj = u ⊗ v, ψϕΔ j j→∞
On account of Remark 2.3, this proves that u ∗ v ∈ D (Rn ). Finally, the last statement of the theorem is a consequence of what we proved so far and part (3) in Remark 2.84. Remark 2.86. A combination of Theorem 2.85 and the discussion regarding (2.8.6) yields the following result. If f, g ∈ L1loc (Rn ) are such that for each compact K ⊂ Rn the set (x, y) : x ∈ supp f, y ∈ supp g, x + y ∈ K is compact, then uf ∗ ug = uf ∗g in D (Rn ). That is, f ∗ g ∈ L1loc (Rn ), the convolution between the distributions uf and ug [recall (2.1.6)] is well-defined, and uf ∗ ug is a distribution of function type that is equal to the distribution uf ∗g . In particular, ⎫ f ∈ L1loc (Rn ), g ∈ L1comp(Rn ) ⎪ ⎬ or =⇒ uf ∗ ug = uf ∗g . (2.8.16) ⎪ 1 n 1 n ⎭ f ∈ Lcomp(R ), g ∈ Lloc (R ) The main properties of the convolution of distributions, whenever meaningfully defined, are stated and proved in the next theorem. Recall that for any A, B ⊆ Rn the set A ± B is defined as {x ± y : x ∈ A, y ∈ B}. Theorem 2.87. The following statements are true. (a) If u, v ∈ D (Rn ) are such that (2.8.9) is satisfied, then supp (u ∗ v) ⊆ supp u + supp v.
(2.8.17)
(b) If u, v ∈ D (Rn ) are such that (2.8.9) is satisfied, then u ∗ v = v ∗ u. (c) If u, v, w ∈ D (Rn ) are such that ⎧ n ⎪ ⎨ for each compact subset K of R the set := {(x, y, z) ∈ supp u × supp v × supp w : x + y + z ∈ K} (2.8.18) MK ⎪ ⎩ is compact in Rn × Rn × Rn , then (u ∗ v) ∗ w and u ∗ (v ∗ w) are well-defined, belong to D (Rn ), and are equal.
2.8. THE CONVOLUTION OF DISTRIBUTIONS IN RN
63
(d) Let u ∈ D (Rn ), α ∈ Nn0 . Then (∂ α u) ∗ δ = ∂ α u = u ∗ ∂ α δ. In particular, u ∗ δ = u. (e) If the distributions u, v ∈ D (Rn ) are such that (2.8.9) is satisfied and α ∈ Nn0 , then ∂ α (u ∗ v) = (∂ α u) ∗ v = u ∗ (∂ α v). Proof. Let u, v ∈ D (Rn ) be such that (2.8.9) holds. Since supp u + supp v is closed, the inclusion in (a) will follow as soon as we show that u ∗ v Rn \(supp u+supp v) = 0. Pick some arbitrary function ϕ ∈ D(Rn \ (supp u + supp v)). Then necessarily supp ϕΔ ∩ supp (u ⊗ v) = ∅, hence u ⊗ v, ϕΔ = 0 which, in turn, implies u ∗ v, ϕ = 0, as wanted. The statement in (b) is an immediate consequence of Definition 2.83 and the fact that the tensor product is commutative (see (ii) in Theorem 2.78). To prove the statement in (c), suppose u, v, w ∈ D (Rn ) satisfy (2.8.18). Define the functional u ∗ v ∗ w : D(Rn ) → C by setting $ % v ⊗ w, ϕΔ , u ∗ v ∗ w, ϕ := u ⊗ (2.8.19) for each ϕ ∈ C0∞ (Rn ), v⊗w where ϕΔ (x, y, z) := ϕ(x + y + z) for each x, y, x ∈ Rn , and where u ⊗ is the unique extension of u ⊗ v ⊗ w obtained by applying Theorem 2.52 for the set F := supp (u ⊗ v ⊗ w). The mapping in (2.8.19) is well-defined since if ϕ ∈ C0∞ (Rn ) then ϕΔ ∈ C ∞ (Rn × Rn × Rn ) and, based on (2.8.18), the set
Δ Msupp ϕ = supp (u ⊗ v ⊗ w) ∩ supp ϕ
is compact in Rn × Rn × Rn . (2.8.20)
Reasoning as in the proof of Theorem 2.85, it follows that u ∗ v ∗ w ∈ D (Rn ) and $ % for ϕ ∈ C0∞ (Rn ) and for each u ∗ v ∗ w, ϕ = u ⊗ v ⊗ w, ψϕΔ (2.8.21) ψ ∈ C0∞ (Rn × Rn × Rn ) with ψ = 1 in a neighborhood of Msupp ϕ. Given the freedom in selecting ψ as in (2.8.21), we choose to take ψ as foln lows. Let πj : Rn × Rn × Rn → R , j = 1, 2, 3, be the projections defined by π1 (x, y, z) := x, π2 (x, y, z) := y, π3 (x, y, z) := z, for all x, y, z ∈ Rn . Given ϕ ∈ C0∞ (Rn ), fix ψj ∈ C0∞ (Rn ) with ψj = 1 near πj (Msupp ϕ ), j = 1, 2, 3,
then choose
(2.8.22)
(2.8.23) ψ := ψ1 ⊗ ψ2 ⊗ ψ3 ∈ C0∞ (Rn × Rn × Rn ). The next two claims are designed to proving that u ∗ (v ∗ w) exists.
Claim 1. For every compact K in Rn the set NK := {(y, z) ∈ supp v × supp w : y + z ∈ K} is compact in Rn × Rn .
CHAPTER 2. THE SPACE D (Ω) OF DISTRIBUTIONS
64
To see why this is true, start by observing that, for every x0 ∈ supp u, the set K + x0 is compact in Rn and B := {(x, y, z) ∈ {x0 } × supp v × supp w : x + y + z ∈ K + x0 }
(2.8.24)
. Since (2.8.18) ensures that MK+x is compact is closed and contained in MK+x 0 0 in Rn × Rn × Rn , it follows that B is compact in Rn × Rn × Rn . In addition, the mapping θ : Rn × Rn × Rn → Rn × Rn , defined by θ(x, y, z) := (y, z) for x, y, z ∈ Rn , is continuous and θ(B) = NK . Therefore, NK must be compact, and Claim 1 is proved. The latter ensures that v ∗ w exists.
Claim 2. For every K ⊂ Rn compact, the set PK := {(x, z) ∈ supp u × supp (v ∗ w) : x + z ∈ K} is compact in Rn × Rn . By part (a) in the theorem, we have supp (v ∗ w) ⊆ supp v + sup w. Thus PK ⊆ {(x, z) ∈ supp u × (supp v + supp w) : x + z ∈ K} = (x, y + t) : x ∈ supp u, y ∈ supp v, t ∈ supp w, x + y + t ∈ K (2.8.25) and the last set in (2.8.25) is closed in Rn × Rn . If we now set σ : Rn × Rn × Rn → Rn × Rn , σ(x, y, t) := (x, y + t) for every x, y, t ∈ Rn ,
(2.8.26)
then σ is continuous and PK ⊆ σ(MK ). By (2.8.18), the set MK is compact in n n n n n R × R × R , hence σ(MK ) is compact in R × R . The set PK being closed in Rn × Rn , we may conclude that PK is compact in Rn × Rn . This proves Claim 2 and, as a consequence, the fact that u ∗ (v ∗ w) exists.
With an eye toward proving u ∗ (v ∗ w) = u ∗ v ∗ w, we dispense with two more claims. Claim 3. For each K ⊂ Rn compact, A := (supp v + supp w) ∩ (K − supp u) is a compact set in Rn . Rewrite A as A = {t ∈ supp v + supp w : t = ω − x, for some ω ∈ K, x ∈ supp u}. Then if t ∈ A, it follows that there exist y ∈ supp v, z ∈ supp w, ω ∈ K and x ∈ supp u such that t = y+z = ω−x. Hence, (x, y, z) ∈ supp u×supp v×supp w and A ⊆ ν(MK ), where and x + y + z = ω, which implies that (x, y, z) ∈ MK ν : Rn × Rn × Rn → Rn , ν(x1 , x2 , x3 ) := x2 + x3
for every x1 , x2 , x3 ∈ Rn .
(2.8.27)
2.8. THE CONVOLUTION OF DISTRIBUTIONS IN RN
65
We may now conclude that A is compact since the map ν is continuous, MK is compact and A is closed. This proves Claim 3.
Claim 4. Fix ϕ ∈ C0∞ (Rn ) and set K := supp ϕ. Also, let A be as in Claim 3 corresponding to this K, and suppose η ∈ C0∞ (Rn ) is such that η = 1 in a neighborhood of A. Then, with ψ1 as in (2.8.22), we have ψ1 ⊗ η = 1
on
(supp u × supp (v ∗ w)) ∩ supp ϕΔ
(2.8.28)
where, as before, ϕΔ (x, y) = ϕ(x + y) for x, y ∈ Rn . ) and η = 1 on A, it suffices to show To prove this claim, since ψ1 = 1 on π1 (MK that supp u × supp (v ∗ w) ∩ supp ϕΔ ⊆ π1 (MK ) × A. (2.8.29)
To justify the latter inclusion, let x ∈ supp u, y ∈ supp v, and z ∈ supp w, be such that (x, y + z) ∈ supp ϕΔ . Then x + y + z ∈ K which forces x ∈ π1 (MK ) as well as y + z ∈ K − supp u ⊆ A. Thus, (x, y + z) ∈ π1 (MK ) × A and this completes the proof of Claim 4. Consider now an arbitrary function ϕ ∈ C0∞ (Rn ) and set K := supp ϕ. Also, assume that ψ1 , ψ2 , ψ3 are as in (2.8.22), and let η be as in Claim 4. Making use of the definition of the convolution and tensor products, and keeping in mind (2.8.28), we may write " $ % ! (2.8.30) u ∗ (v ∗ w), ϕ = u(x) ⊗ (v ∗ w)(t), ψ1 (x)η(t)ϕ(x + t) " ! $ % = u(x), (v ∗ w)(t), ψ1 (x)η(t)ϕ(x + t) ! $ %" = u(x), v(y) ⊗ w(z), ψ1 (x)η(y + z)ϕ(x + y + z)ψ2 (y)ψ3 (z) . A few words explaining the origin of the last equality are in order. According to the definition of the convolution, passing from v ∗ w to v ⊗ w requires that we consider the set C := (supp v × supp w) ∩ supp η Δ ∩ supp[ϕ(x + ·)]Δ .
(2.8.31)
) × π3 (MK ), it follows that C is Since C is closed and satisfies C ⊂ π2 (MK compact. Now, the fact that ψ2 ⊗ ψ3 = 1 in a neighborhood of C justifies the presence of ψ2 ⊗ ψ3 in the last term in (2.8.30). Going further, since ψ1 (x)ψ2 (y)ψ3 (z) = ψ1 (x)ψ2 (y)ψ3 (z)η(y + z) for (x, y, z) near MK , (2.8.32)
referring to (2.8.21) and (2.8.23) allows us to rewrite (2.8.30) in the form ! %" $ u ∗ (v ∗ w), ϕ = u(x), v(y) ⊗ w(z), ψ1 (x)ψ2 (y)ψ3 (z)ϕ(x + y + z) " ! = u ⊗ v ⊗ w, (ψ1 ⊗ ψ2 ⊗ ψ3 )ϕΔ = u ∗ v ∗ w, ϕ. (2.8.33)
66
CHAPTER 2. THE SPACE D (Ω) OF DISTRIBUTIONS
Since ϕ ∈ C0∞ (Rn ) was arbitrary, it follows that u ∗ (v ∗ w) = u ∗ v ∗ w. Similarly, it can be seen that (u ∗ v) ∗ w = u ∗ v ∗ w and this completes the proof of the statement in (c). Moving on to (d), fix u ∈ D (Rn ) and α ∈ Nn0 . Since the Dirac distribution δ has compact support, by Theorem 2.85 it follows that both δ ∗ u and (∂ α δ) ∗ u are well-defined and belong to D (Rn ). Let ϕ ∈ C0∞ (Rn ) be arbitrary and consider ψ ∈ C0∞ (Rn × Rn ) such that ψ = 1 on a neighborhood of the set ({0} × supp u) ∩ supp ϕΔ . Starting with Definition 2.83, then using (2.4.1) combined with Proposition 2.32, then Leibniz’s formula (13.2.4), then (2.1.18), the support condition for ψ and then (2.4.1) again, we may write $ α % $ % ∂ δ ∗ u, ϕ = (∂xα δ) ⊗ u(y), ψ(x, y)ϕ(x + y) ! %" $ = u(y), ∂ α δ(x), ψ(x, y)ϕ(x + y) ! $ %" = (−1)|α| u(y), δ(x), ∂xα (ψ(x, y)ϕ(x + y)) ! ! = (−1)|α| u(y), δ(x),
β α−β α! ϕ)(x β!(α−β)! ∂x ψ(x, y)(∂x
"" + y)
β≤α
! = (−1)|α| u(y) ,
"
β α−β α! ϕ)(y) β!(α−β)! (∂x ψ)(0, y)(∂x
β≤α
% $ = (−1)|α| u(y), ψ(0, y)∂ α ϕ(y) $ % $ % = (−1)|α| u, ∂ α ϕ = ∂ α u, ϕ .
(2.8.34)
In particular, if |α| = 0, the above implies δ ∗ u = u. When combined with (b), this finishes the proof of the statement in (d). Finally, by making use of the results from (d) and (c) we have ∂ α (u ∗ v) = ∂ α (δ ∗ (u ∗ v)) = ∂ α δ ∗ (u ∗ v) = (∂ α δ ∗ u) ∗ v = (∂ α u) ∗ v. (2.8.35) A similar argument also shows that ∂ α (u ∗ v) = u ∗ (∂ α v). The proof of the theorem is now complete. Exercise 2.88. Prove that for a distribution u ∈ D (Rn ), u = δ ⇐⇒ u ∗ f = f
for each
f ∈ C0∞ (Rn ).
(2.8.36)
Hint: For the right-to-left implication use f = φj , where φj is as in Example 2.20, and let j → ∞. Next, we extend the translation map (1.3.16) to distributions. Proposition 2.89. For each x0 ∈ Rn and each u ∈ D (Rn ) fixed, the translation mapping D(Rn ) ϕ → u, t−x0 (ϕ) ∈ C is linear and continuous. Denoting this map by tx0 u thus yields a distribution in Rn that satisfies tx0 u, ϕ = u, t−x0 (ϕ),
∀ ϕ ∈ C0∞ (Rn ).
(2.8.37)
2.8. THE CONVOLUTION OF DISTRIBUTIONS IN RN
67
Proof. This follows by observing that the mapping in question is the composition u ◦ t−x0 where the latter translation operator is consider in the sense of Exercise 1.19. Exercise 2.90. Fix x0 ∈ Rn and recall the distribution δx0 from Example 2.14. Prove that δx0 = tx0 δ in D (Rn ). Also show that δx0 ∗ u = tx0 u for every u ∈ D (Rn ). In particular, if x1 ∈ Rn is arbitrary, then δx0 ∗ δx1 = δx0 +x1 in D (Rn ). Remark 2.91. (1) If u, v ∈ E (Rn ) then u ∗ v ∈ E (Rn ). This is an immediate consequence of (2.8.17). (2) There exists a sequence {uj }j∈N of compactly supported distributions in R that converges to some u ∈ D (R) and such that uj ∗ v does not necessarily converge to u ∗ v in D (R) for each v ∈ D (R). To see an example in this regard, consider the sequence of compactly D (R)
supported distributions {δj }j∈N that satisfies δj −−−− → 0. Then if 1 denotes j→∞
the distribution on R given by the constant function 1, Exercise 2.90 gives δj ∗ 1 = 1 for each j, and the constant sequence {1}j∈N ⊂ D (R) does not converge in D (R) to 0 ∗ 1 = 0. This shows that sequential continuity for convolution of distributions cannot be expected in general. (3) Condition (2.8.18) is necessary for the operation of convolution of distributions to be associative. To see this, consider the distributions 1, δ , and H on R. Then we have supp δ = {0}, supp 1 = R, supp H = [0, ∞). If K is a compact set in Rn , the set = (x, 0, z) : x ∈ R, z ∈ [0, ∞), x + z ∈ K (2.8.38) MK is not compact in R × R × R, thus (2.8.18) does not hold. Furthermore, 1 ∗ δ = 1 ∗ δ = 0 ∗ δ = 0 so (1 ∗ δ ) ∗ H = 0, while 1 ∗ (δ ∗ H) = 1 ∗ (δ ∗ H ) = 1 ∗ (δ ∗ δ) = 1 ∗ δ = 1 and clearly (1 ∗ δ ) ∗ H = 1 ∗ (δ ∗ H) in D (R). Proposition 2.92. The following statements are true. D (Rn )
D (Rn )
j→∞
j→∞
(1) If u ∈ E (Rn ) and vj −−−−→ v, then u ∗ vj −−−−→ u ∗ v. D (Rn )
(2) If uj −−−−→ u and there exists K ⊂ Rn compact with supp uj ⊆ K for j→∞
D (Rn )
every j ∈ N, then uj ∗ v −−−−→ u ∗ v for every v ∈ D (Rn ). j→∞
Proof. To see why (1) is true, fix ϕ ∈ C0∞ (Rn ). Then by definition, for each j ∈ N we have u ∗ vj , ϕ = u ⊗ vj , ψj ϕΔ for any smooth compactly supported
CHAPTER 2. THE SPACE D (Ω) OF DISTRIBUTIONS
function ψj with ψj = 1 on a neighborhood of (supp u × supp vj) ∩ supp ϕ^Δ. Note that

(supp u × supp vj) ∩ supp ϕ^Δ ⊆ supp u × (supp ϕ − supp u)   (2.8.39)

and supp u × (supp ϕ − supp u) is compact since both supp u and supp ϕ are compact. Hence, if we fix ψ ∈ C0∞(Rn × Rn) such that ψ = 1 in a neighborhood of the set supp u × (supp ϕ − supp u), then ⟨u ∗ vj, ϕ⟩ = ⟨u ⊗ vj, ψϕ^Δ⟩ for every j ∈ N. Based on (d) in Theorem 2.80, we may write

⟨u ∗ vj, ϕ⟩ = ⟨u ⊗ vj, ψϕ^Δ⟩ → ⟨u ⊗ v, ψϕ^Δ⟩ = ⟨u ∗ v, ϕ⟩ as j → ∞,   (2.8.40)

and the desired conclusion follows.
Assume next the hypotheses in part (2) of the proposition. In particular, these entail supp u ⊆ K. Let ϕ ∈ C0∞(Rn) and note that

(supp uj × supp v) ∩ supp ϕ^Δ ⊆ K × (supp ϕ − K), ∀ j ∈ N.   (2.8.41)

Then if ψ ∈ C0∞(Rn × Rn) is a function with the property that ψ = 1 in a neighborhood of K × (supp ϕ − K), we have ⟨uj ∗ v, ϕ⟩ = ⟨uj ⊗ v, ψϕ^Δ⟩ for every j ∈ N. Hence,

⟨uj ∗ v, ϕ⟩ = ⟨uj ⊗ v, ψϕ^Δ⟩ → ⟨u ⊗ v, ψϕ^Δ⟩ = ⟨u ∗ v, ϕ⟩ as j → ∞,   (2.8.42)
where for the convergence in (2.8.42) we used part (d) in Theorem 2.80.
When convolving an arbitrary distribution with a distribution of function type given by a compactly supported smooth function, the resulting distribution is of function type. This fact is particularly useful in applications and we prove it next.

Proposition 2.93. If u ∈ D′(Rn) and g ∈ C0∞(Rn), then the distribution u ∗ g is of function type given by the function

f : Rn → C,  f(x) := ⟨u(y), g(x − y)⟩, ∀ x ∈ Rn,   (2.8.43)

which satisfies f ∈ C∞(Rn). Moreover, if u is compactly supported then so is f. In short,

D′(Rn) ∗ C0∞(Rn) ⊆ C∞(Rn) and E′(Rn) ∗ C0∞(Rn) ⊆ C0∞(Rn).   (2.8.44)
Proof. Let φ : Rn × Rn → C be defined by φ(x, y) := g(x − y) for each (x, y) ∈ Rn × Rn. Then φ ∈ C∞(Rn × Rn) and φ(x, ·) ∈ C0∞(Rn) for each x ∈ Rn. This shows that the function f in (2.8.43) is well defined. To prove that f is of class C∞ in Rn, pick an arbitrary point x∗ ∈ Rn and pick a function ψ ∈ C0∞(Rn) with the property that ψ = 1 in a neighborhood of B(x∗, 1). In addition, select η ∈ C0∞(Rn) such that η = 1 near B(x∗, 1) − supp g. Then

f(x) = ⟨u(y), g(x − y)⟩ = ⟨u(y), ψ(x)g(x − y)⟩ = ⟨(ηu)(y), ψ(x)g(x − y)⟩, ∀ x ∈ B(x∗, 1).   (2.8.45)
Since ηu ∈ E′(Rn) and the function Rn × Rn ∋ (x, y) ↦ ψ(x)g(x − y) ∈ C is of class C∞, we may further invoke Proposition 2.82 to conclude that f|_{B(x∗,1)} ∈ C∞(B(x∗, 1)). Given that x∗ ∈ Rn has been arbitrarily chosen, it follows that f ∈ C∞(Rn).
We now turn to the task of showing that the distribution u ∗ g is of function type and is given by f. To this end, fix ϕ ∈ C0∞(Rn) and consider a function ψ ∈ C0∞(Rn × Rn) such that ψ = 1 in a neighborhood of the set

{(x, y) : x ∈ supp u, y ∈ supp g, x + y ∈ supp ϕ}.   (2.8.46)

Then, starting with Definition 2.83 we write

⟨u ∗ g, ϕ⟩ = ⟨u ⊗ g, ψϕ^Δ⟩
  = ⟨u(x), ⟨g(y), ψ(x, y)ϕ(x + y)⟩⟩
  = ⟨u(x), ∫_{Rn} g(y)ψ(x, y)ϕ(x + y) dy⟩
  = ⟨u(x), ∫_{Rn} g(y)ϕ(x + y) dy⟩
  = ⟨u(x), ∫_{Rn} g(z − x)ϕ(z) dz⟩
  = ∫_{Rn} ⟨u(x), g(z − x)⟩ϕ(z) dz
  = ⟨f, ϕ⟩.   (2.8.47)

For the third equality in (2.8.47) we have used the fact that g is a distribution of function type, for the fourth equality we used condition (2.8.46), the fifth equality is based on a change of variables, the sixth equality follows from (2.7.45), while the last equality is a consequence of the definition of f. To complete the proof of the proposition there remains to notice that, by (2.8.17), we have supp f ⊆ supp u + supp g. In particular, if u is compactly supported, then so is f.

Exercise 2.94. Prove that if u ∈ E′(Rn) and g ∈ C∞(Rn), then the distribution u ∗ g is of function type given by the function

f : Rn → C,  f(x) := ⟨u(y), g(x − y)⟩, ∀ x ∈ Rn,   (2.8.48)

which satisfies f ∈ C∞(Rn). In short, E′(Rn) ∗ C∞(Rn) ⊆ C∞(Rn).
Hint: Use Proposition 2.82 to show that f ∈ C∞(Rn), then reason as in Proposition 2.93 to take care of the remaining claims.
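The formula f(x) = ⟨u(y), g(x − y)⟩ and the support containment supp f ⊆ supp u + supp g can be illustrated numerically in the simplest case where u itself is of function type. The sketch below is ours: the bump profiles, the supports [−1, 1] and [0, 2], and the grid are hypothetical choices used only for the check.

```python
import numpy as np

def bump(x, a, b):
    """Smooth bump supported in [a, b]."""
    x = np.asarray(x, dtype=float)
    t = (2 * (x - a) / (b - a)) - 1          # map [a, b] -> [-1, 1]
    inside = np.abs(t) < 1
    out = np.zeros_like(x)
    out[inside] = np.exp(-1.0 / (1 - t[inside] ** 2))
    return out

h = lambda x: bump(x, -1, 1)     # function-type "u", supp h = [-1, 1]
g = lambda x: bump(x, 0, 2)      # supp g = [0, 2]
y = np.linspace(-4, 4, 4001)
dy = y[1] - y[0]
# f(x) = <u(y), g(x - y)> = integral h(y) g(x - y) dy
f = lambda x: np.sum(h(y) * g(x - y)) * dy
# supp h + supp g = [-1, 3], so f should vanish outside that interval
assert abs(f(3.5)) < 1e-12 and abs(f(-1.5)) < 1e-12
assert f(1.0) > 0                # interior of the Minkowski sum
```

The same computation with u = δ_a would give f(x) = g(x − a), which is the extreme case of the smoothing statement in (2.8.44).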
Exercise 2.95. Let Ω be a bounded open set in Rn. Suppose u ∈ L1(Ω) and define ũ to be the extension of u by zero outside Ω. Also assume that a function g ∈ L1loc(Rn) is given. Prove that ũ ∈ D′(Rn), that it satisfies supp ũ ⊆ Ω̄ (hence ũ ∗ g is well defined in D′(Rn)), and that the distribution ũ ∗ g is of function type given by the function

(ũ ∗ g)(x) = ∫_Ω g(x − y)u(y) dy, ∀ x ∈ Rn.   (2.8.49)

Hint: Apply (2.8.16) with f = ũ.

Exercise 2.96. Prove that if u ∈ D′(Rn) and ϕj → ϕ in D(Rn) as j → ∞, then one has u ∗ ϕj → u ∗ ϕ in E(Rn) as j → ∞.
Next, we prove that distributions of function type given by smooth, compactly supported functions are dense in the class of distributions. First we treat the simpler case when the distributions are considered in Rn. The case when the distributions are considered on arbitrary open subsets of Rn needs to be handled with a bit more care.

Theorem 2.97. The set C0∞(Rn) is sequentially dense in D′(Rn).

Proof. First we will show that C∞(Rn) is sequentially dense in D′(Rn). Let φ be as in (1.2.3) and recall the sequence of functions {φj}j∈N from (1.3.7). In particular, we have that supp φj ⊆ B(0, 1) for every j ∈ N. Recall from Example 2.20 that φj → δ in D′(Rn) as j → ∞.
Let u ∈ D′(Rn) be arbitrary and define

uj := u ∗ φj in D′(Rn), ∀ j ∈ N.   (2.8.50)

By Proposition 2.93 we have uj ∈ C∞(Rn) for all j ∈ N. Also, by part (2) in Proposition 2.92 and part (d) in Theorem 2.87 we obtain

uj = u ∗ φj → u ∗ δ = u in D′(Rn) as j → ∞.   (2.8.51)

This completes the proof of the fact that C∞(Rn) is sequentially dense in D′(Rn).
Moving on, and keeping the notation introduced so far, fix ψ ∈ C0∞(Rn) satisfying ψ(x) = 1 if |x| < 1, and set

wj(x) := ψ(x/j)(u ∗ φj)(x), ∀ x ∈ Rn, ∀ j ∈ N.   (2.8.52)
It is immediate that wj ∈ C0∞(Rn). Moreover, if ϕ ∈ C0∞(Rn) is given, then there exists j0 ∈ N with the property that supp ϕ ⊂ B(0, j0). Therefore, for all j ≥ j0 we may write

⟨wj, ϕ⟩ = ∫_{Rn} ψ(x/j)(u ∗ φj)(x)ϕ(x) dx = ∫_{Rn} (u ∗ φj)(x)ϕ(x) dx
  = ⟨u ∗ φj, ϕ⟩ → ⟨u, ϕ⟩ as j → ∞.   (2.8.53)

This shows that wj → u in D′(Rn) as j → ∞. Hence, ultimately, C0∞(Rn) is sequentially dense in D′(Rn), finishing the proof of the theorem.

The same type of result as in Theorem 2.97 actually holds in arbitrary open subsets of the Euclidean ambient.

Theorem 2.98. The set C0∞(Ω) is sequentially dense in D′(Ω).

Proof. Fix u ∈ D′(Ω) and recall the sequence of compact sets introduced in (1.2.13). Then ∪_{j∈N} Kj = Ω and Kj ⊂ K̊_{j+1} for every j ∈ N. For each j ≥ 2
consider a function

ψj ∈ C0∞(Ω), ψj = 1 on a neighborhood of K_{j−1}, supp ψj ⊆ Kj,   (2.8.54)

and define uj := ψj u ∈ D′(Ω). Since supp uj ⊆ Kj, Proposition 2.63 states that each uj may be extended to a distribution in E′(Rn), which we continue to denote by uj. If we now set

εj := (1/4) dist(Kj, ∂K_{j+1}) > 0, ∀ j ∈ N,   (2.8.55)

then the definition of the compacts in (1.2.13) forces εj → 0 as j → ∞. Having fixed some φ ∈ C0∞(Rn) satisfying supp φ ⊆ B(0, 1) and ∫_{Rn} φ(x) dx = 1, define

φj(x) := εj^{−n} φ(x/εj), ∀ x ∈ Rn, ∀ j ∈ N.   (2.8.56)

Thus,

φj ∈ C0∞(Rn), supp φj ⊆ B(0, εj) ⊆ B(0, 1), ∀ j ∈ N,   (2.8.57)

and reasoning as in Example 2.20 we see that

φj → δ in D′(Rn) as j → ∞.   (2.8.58)

Finally, for each j ≥ 2 introduce wj := uj ∗ φj. By combining part (1) in Remark 2.91 with Proposition 2.93 we obtain that wj ∈ C0∞(Rn) for every j ≥ 2. In addition, (2.8.17) and (2.8.55) imply that supp wj ⊆ Kj + B(0, εj) ⊂ K_{j+1}, so in fact we have wj ∈ C0∞(Ω).
To complete the proof of the theorem it suffices to show that wj → u in D′(Ω) as j → ∞. To this end, fix ϕ ∈ C0∞(Ω) and observe that based on (1.2.13) there exists j0 ∈ N such that supp ϕ ⊆ K_{j0}. Then for j > j0 + 1 we may write

⟨wj, ϕ⟩ = ⟨uj ∗ φj, ϕ⟩ = ∫_Ω (uj ∗ φj)(x)ϕ(x) dx
  = ∫_Ω ⟨uj(y), φj(x − y)⟩ϕ(x) dx = ∫_Ω ⟨u(y), ψj(y)φj(x − y)⟩ϕ(x) dx
  = ∫_Ω ⟨u(y), ψ_{j0+2}(y)φj(x − y)⟩ϕ(x) dx = ⟨u_{j0+2} ∗ φj, ϕ⟩.   (2.8.59)

For the second equality in (2.8.59) we used the fact that uj ∗ φj ∈ C0∞(Rn) for j ≥ 2, for the third equality we used Proposition 2.93, while for the fourth we used the fact that uj = ψj u for every j ∈ N. These observations also give the last equality in (2.8.59). As for the penultimate equality in (2.8.59), observe that if j > j0 + 1, x ∈ supp ϕ ⊆ K_{j0} and x − y ∈ supp φj ⊆ B(0, εj), then y ∈ K_{j0} − B(0, εj) ⊆ K_{j0} − B(0, ε_{j0}) ⊂ K_{j0+1}, thus ψj(y) = 1 = ψ_{j0+2}(y).
If we now combine (2.8.58) with (2.8.57) and part (2) in Proposition 2.92, it follows that u_{j0+2} ∗ φj → u_{j0+2} ∗ δ = u_{j0+2} in D′(Rn) as j → ∞. Hence, if we return with this in (2.8.59), we may write

lim_{j→∞} ⟨wj, ϕ⟩ = lim_{j→∞} ⟨u_{j0+2} ∗ φj, ϕ⟩ = ⟨u_{j0+2}, ϕ⟩ = ⟨ψ_{j0+2} u, ϕ⟩ = ⟨u, ϕ⟩,   (2.8.60)

since ψ_{j0+2} = 1 in a neighborhood of the support of ϕ. This finishes the proof of the theorem.
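The mollification step driving both density proofs can be watched numerically. The sketch below takes u = H (the Heaviside function) and checks that the pairings of the smooth approximants H ∗ φj against a fixed test function converge to ⟨H, ϕ⟩; the mollifier is a stand-in for (1.3.7), and on a bounded grid the cutoff ψ(·/j) from (2.8.52) plays no role, so it is omitted. All concrete choices here are ours.

```python
import numpy as np

def bump(x):
    """Smooth even bump supported in [-1, 1]."""
    x = np.asarray(x, dtype=float)
    inside = np.abs(x) < 1
    out = np.zeros_like(x)
    out[inside] = np.exp(-1.0 / (1 - x[inside] ** 2))
    return out

x = np.linspace(-8, 8, 4001)
dx = x[1] - x[0]
H = (x > 0).astype(float)
test = bump(x - 0.3)                       # test function, supp in (-0.7, 1.3)
exact = np.sum(H * test) * dx              # <H, test> = integral_0^infty test

def err(j):
    phij = j * bump(j * x)                 # phi_j(x) = j * phi(j x)
    phij /= phij.sum() * dx                # normalize to unit mass on the grid
    Hj = np.convolve(H, phij, mode="same") * dx   # smooth H * phi_j
    return abs(np.sum(Hj * test) * dx - exact)

assert err(16) < err(2) and err(16) < 1e-2
```

The error decays because H ∗ φj differs from H only on an interval of length about 2/j around the origin, exactly the mechanism exploited in (2.8.53) and (2.8.59).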
2.9 Distributions with Higher-Order Gradients Continuous or Bounded
We have seen that if u ∈ C m (Rn ) for some m ∈ N, then its distributional derivatives up to order m are distributions of function type, each given by the corresponding pointwise derivative of u. A more subtle question pertains to the possibility of deducing regularity results for distributions with distributional derivatives of a certain order that are of function type, and these functions exhibit a certain amount of smoothness. In this section we prove two main results in this regard. In the first result (see Theorem 2.102), we show that if a distribution u ∈ D (Ω) has all distributional derivatives of order m ∈ N continuous, then in fact u ∈ C m (Ω). In the second main result (see Theorem 2.104) we prove that a distribution in D (Rn ) is of function type given by a Lipschitz function if and only if all its first-order distributional derivatives are bounded functions in Rn . We start by proving a weaker version of Theorem 2.102. Proposition 2.99. If u ∈ D (Rn ) and there exists m ∈ N0 such that for each multi-index α ∈ Nn0 satisfying |α| ≤ m the distributional derivative ∂ α u belongs to C 0 (Rn ), then u ∈ C m (Rn ). Proof. Recall the sequence of distributions {φj }j∈N from Example 2.20 and for u satisfying the hypothesis of the proposition let {uj }j∈N be as in (2.8.50). In particular, (2.8.51) holds, thus the distributional and classical derivatives of each uj coincide.
Next we make the following claim:

lim_{j→∞} ∂^α uj = ∂^α u uniformly on compact sets in Rn,
for every multi-index α ∈ Nn0 satisfying |α| ≤ m.   (2.9.1)

To prove this claim, observe that by part (e) in Theorem 2.87, for each j ∈ N we have ∂^α uj = (∂^α u) ∗ φj for every α ∈ Nn0. Fix α ∈ Nn0 satisfying |α| ≤ m. Since by the current hypotheses the distributional derivative ∂^α u is continuous, by invoking Proposition 2.93 we may write

∂^α uj(x) = ((∂^α u) ∗ φj)(x) = ⟨∂^α u(y), φj(x − y)⟩
  = ∫_{Rn} (∂^α u)(y)φj(x − y) dy = ∫_{Rn} (∂^α u)(x − z)φj(z) dz
  = ∫_{Rn} (∂^α u)(x − y/j)φ(y) dy, ∀ x ∈ Rn.   (2.9.2)

Fix a compact set K in Rn. Making use of (2.9.2) and the properties of φ (recall (1.2.3)) we estimate

sup_{x∈K} |∂^α uj(x) − ∂^α u(x)| = sup_{x∈K} |∫_{Rn} [(∂^α u)(x − y/j) − (∂^α u)(x)]φ(y) dy|
  ≤ sup_{x∈K, y∈B(0,1)} |(∂^α u)(x − y/j) − (∂^α u)(x)|.   (2.9.3)

Since ∂^α u is continuous in Rn it follows that ∂^α u is uniformly continuous on compact subsets of Rn, thus

lim_{j→∞} sup_{x∈K} |∂^α uj(x) − ∂^α u(x)| ≤ lim_{j→∞} sup_{x∈K, y∈B(0,1)} |(∂^α u)(x − y/j) − (∂^α u)(x)| = 0,   (2.9.4)

completing the proof of the claim. With (2.9.1) in hand, we may invoke Lemma 2.100 below and proceed by induction on |α| to conclude that, as desired, u ∈ C^m(Rn).

Lemma 2.100. Suppose the functions {uj}j∈N and u are such that:
(i) uj ∈ C1(Ω) for every j ∈ N,
(ii) lim_{j→∞} uj = u uniformly on compact subsets of Ω, and
(iii) for each k ∈ {1, . . . , n} there exists a function vk ∈ C0(Ω) with the property that lim_{j→∞} ∂k uj = vk uniformly on compact subsets of Ω.
Then u ∈ C1(Ω) and ∂k u = vk for each k ∈ {1, . . . , n}.

Proof. From the start, since uniform convergence on compact sets preserves continuity, we have that u ∈ C0(Ω). Fix x ∈ Ω and k ∈ {1, 2, . . . , n} and let t0 > 0 be such that x + tek ∈ Ω whenever t ∈ [−t0, t0] where, as before, ek is
the unit vector in Rn whose kth component is equal to 1. Then, for each t ∈ [−t0, t0], we may write

u(x + tek) − u(x) = lim_{j→∞} [uj(x + tek) − uj(x)] = lim_{j→∞} ∫_0^t (d/ds) uj(x + s ek) ds
  = lim_{j→∞} ∫_0^t (∂k uj)(x + s ek) ds = ∫_0^t vk(x + s ek) ds.   (2.9.5)

The first equality in (2.9.5) is based on (ii) while the last one is a consequence of (iii). Next, since vk is continuous on Ω, by the mean value theorem for integrals it follows that there exists some st, belonging to the interval with endpoints 0 and t, such that ∫_0^t vk(x + s ek) ds = t vk(x + st ek). Hence,

lim_{t→0} [u(x + tek) − u(x)]/t = lim_{t→0} vk(x + st ek) = vk(x),   (2.9.6)

which proves that (∂k u)(x) exists and is equal to vk(x). Thus, ∂k u = vk ∈ C0(Ω) for every k, which shows that u ∈ C1(Ω).

Lemma 2.101. Let u ∈ D′(Ω) be such that for each j ∈ {1, . . . , n} the distributional derivative ∂j u is of function type and belongs to C0(Ω). Then u ∈ C0(Ω).
Proof. Since ∇u ∈ [C0(Ω)]^n, the function v(x) := ∫_0^1 (∇u)(tx) · x dt for x ∈ Ω (where "·" denotes the dot product of vectors) is well defined and belongs to C0(Ω). Given the current goals, by Exercise 2.46 it suffices to show that for each x0 ∈ Ω there exists an open set ω ⊂ Ω such that x0 ∈ ω and u|_ω = v|_ω in D′(ω). To this end, fix x0 ∈ Ω and let r ∈ (0, dist(x0, ∂Ω)). Then B(x0, r) ⊂ Ω. Without loss of generality, in what follows we may assume that x0 = 0 (since translations interact favorably, in a reversible manner, with both hypotheses and conclusion). Let ϕ ∈ C0∞(Ω) be such that supp ϕ ⊂ B(0, r) and fix some j ∈ {1, . . . , n}. Then we have

⟨∂j v, ϕ⟩ = −⟨v, ∂j ϕ⟩ = −∫_Ω ∫_0^1 (∇u)(tx) · x ∂j ϕ(x) dt dx
  = −∫_0^1 ∫_Ω (∇u)(tx) · x ∂j ϕ(x) dx dt
  = −lim_{ε→0+} ∫_ε^1 ∫_Ω (∇u)(tx) · x ∂j ϕ(x) dx dt
  = −lim_{ε→0+} ∫_ε^1 ∫_Ω ∇u(y) · (y/t^{n+1}) (∂j ϕ)(y/t) dy dt
  = −lim_{ε→0+} ∑_{k=1}^n ∫_Ω ∂k u(y) ∫_ε^1 (yk/t^{n+1}) (∂j ϕ)(y/t) dt dy
  = lim_{ε→0+} ⟨u(y), ∑_{k=1}^n ∫_ε^1 ∂_{yk}[(yk/t^{n+1}) (∂j ϕ)(y/t)] dt⟩.   (2.9.7)

For the fourth equality in (2.9.7) we have used Lebesgue's dominated convergence theorem (cf. Theorem 13.12) and for the fifth one the change of variables tx = y (note that since supp ϕ ⊂ B(0, r) and t ∈ (0, 1], we have supp ϕ(·/t) ⊂ B(0, r) ⊂ Ω). Furthermore, for each ε > 0, differentiating with respect to y and then integrating by parts in t gives

∑_{k=1}^n ∫_ε^1 ∂_{yk}[(yk/t^{n+1}) (∂j ϕ)(y/t)] dt
  = ∫_ε^1 ∑_{k=1}^n [(1/t^{n+1}) (∂j ϕ)(y/t) + (yk/t^{n+2}) (∂k ∂j ϕ)(y/t)] dt
  = ∫_ε^1 [(n/t^{n+1}) (∂j ϕ)(y/t) − (1/t^n) (d/dt)(∂j ϕ)(y/t)] dt
  = [−(1/t^n) (∂j ϕ)(y/t)]_{t=ε}^{t=1} = −∂j ϕ(y) + (1/ε^n) (∂j ϕ)(y/ε).   (2.9.8)

By combining (2.9.7) and (2.9.8) we obtain

⟨∂j v, ϕ⟩ = −⟨u, ∂j ϕ⟩ + lim_{ε→0+} ⟨u(y), (1/ε^{n−1}) ∂_{yj}[ϕ(y/ε)]⟩
  = ⟨∂j u, ϕ⟩ − lim_{ε→0+} ⟨∂j u(y), (1/ε^{n−1}) ϕ(y/ε)⟩.   (2.9.9)

Since ∂j u ∈ C0(Ω), the pairing under the limit in the right-most term in (2.9.9) is given by an integral in which we make the change of variables x = y/ε to further compute

lim_{ε→0+} ⟨∂j u(y), (1/ε^{n−1}) ϕ(y/ε)⟩ = lim_{ε→0+} (1/ε^{n−1}) ∫_Ω (∂j u)(y)ϕ(y/ε) dy
  = lim_{ε→0+} ε ∫_Ω (∂j u)(εx)ϕ(x) dx = 0,   (2.9.10)

given that, by the continuity of ∂j u at 0 ∈ Ω,

lim_{ε→0+} ∫_Ω (∂j u)(εx)ϕ(x) dx = ∂j u(0) ∫_Ω ϕ(x) dx.   (2.9.11)

In summary, from (2.9.9)–(2.9.11), the fact that ϕ is an arbitrary element in D(B(0, r)), and that j is arbitrary in {1, . . . , n}, we conclude that necessarily ∇v|_{B(0,r)} = ∇u|_{B(0,r)} in D′(B(0, r)). By Exercise 2.140 there exists c ∈ C such that u|_{B(0,r)} − v|_{B(0,r)} = c in D′(B(0, r)). Since v ∈ C0(Ω), the latter implies that u|_{B(0,r)} belongs to C0(B(0, r)), as desired. This completes the proof of the lemma.
After these preparations, we are ready to state and prove our first main result.

Theorem 2.102. Let u ∈ D′(Ω) and suppose that there exists m ∈ N0 such that for each α ∈ Nn0 satisfying |α| = m the distributional derivative ∂^α u is continuous on Ω. Then u ∈ C^m(Ω).

Proof. We prove the theorem by induction on m. For m = 0 there is nothing to prove. Suppose m = 1. Applying Lemma 2.101, we obtain that u ∈ C0(Ω). To prove that u ∈ C1(Ω), it suffices to show that u is of class C1 in a neighborhood of any point x0 ∈ Ω. Fix x0 ∈ Ω, take r > 0 such that B(x0, r) ⊂ Ω, and a function ψ ∈ C0∞(Rn) such that supp ψ ⊂ B(x0, r) and ψ ≡ 1 on B(x0, r/2). Then, by Proposition 2.63, the distribution ψu extends to a distribution v ∈ E′(Rn). Also, v ∈ C0(Rn) since u ∈ C0(Ω) and ψ is compactly supported in Ω. Using (2.5.4) and part (4) in Proposition 2.36 we obtain

∂j v = ∂j(ψu) = (∂j ψ)u + ψ ∂j u ∈ C0(Rn),   (2.9.12)

with the products on the right understood as extended by zero outside Ω. Applying Proposition 2.99 with m = 1 then gives v ∈ C1(Rn). In addition, since by design v|_{B(x0,r/2)} = u|_{B(x0,r/2)}, we conclude that u|_{B(x0,r/2)} ∈ C1(B(x0, r/2)), as wanted.
Assume now that the theorem is true for all nonnegative integers up to, and including, some m ∈ N. Take u ∈ D′(Ω) with the property that ∂^α u ∈ C0(Ω) for all α ∈ Nn0 satisfying |α| = m + 1, and fix j ∈ {1, . . . , n}. Then ∂j u ∈ D′(Ω) satisfies ∂^α(∂j u) ∈ C0(Ω) for all α ∈ Nn0 with |α| = m. By the induction hypothesis, it follows that ∂j u ∈ C^m(Ω). In particular, ∂j u ∈ C0(Ω). This being true for all j ∈ {1, . . . , n}, what we already proved for m = 1 implies u ∈ C1(Ω). Thus, u is a C1 function in Ω with first-order partial derivatives that are of class C^m in Ω. Then necessarily u ∈ C^{m+1}(Ω), as wanted. The proof of the theorem is finished.
Exercise 2.103. Let u ∈ D′(Rn) be such that for some N ∈ N0 we have ∂^α u = 0 in D′(Rn) for each α ∈ Nn0 with |α| > N. Prove that u is a polynomial of degree less than or equal to N.
Hint: Use Theorem 2.102 to conclude that u ∈ C∞(Rn), then invoke Taylor's formula (13.2.6).

Moving on to the second issue discussed at the beginning of this section, we recall (cf. Remark 2.35) that a function f : Ω → C is called Lipschitz in Ω provided there exists some constant M ∈ [0, ∞) such that

|f(x) − f(y)| ≤ M|x − y|, ∀ x, y ∈ Ω,   (2.9.13)

and that the Lipschitz constant of f is the smallest M for which (2.9.13) holds. Our next task is to prove the following theorem.

Theorem 2.104. For f ∈ D′(Rn) and a number M ∈ [0, ∞) the following two statements are equivalent:
(i) f is given by a Lipschitz function in Rn with Lipschitz constant less than or equal to M;
(ii) for each k ∈ {1, . . . , n}, the distributional derivative ∂k f belongs to L∞(Rn) and ‖∂k f‖_{L∞(Rn)} ≤ M.

Proof. Fix a distribution f ∈ D′(Rn) and let φ be as in (1.2.3). Consider the sequence of functions {φj}j∈N from (1.3.7) that satisfies the properties listed in (1.3.8). Furthermore, set

fj := f ∗ φj in D′(Rn), ∀ j ∈ N.   (2.9.14)

By Proposition 2.93 we have fj ∈ C∞(Rn) and

fj(x) = ⟨f, φj(x − ·)⟩, ∀ x ∈ Rn, ∀ j ∈ N.   (2.9.15)

Also, since (as proved in Example 2.20) one has φj → δ in D′(Rn) as j → ∞, by part (2) in Proposition 2.92 and part (d) in Theorem 2.87 one obtains

fj = f ∗ φj → f ∗ δ = f in D′(Rn) as j → ∞.   (2.9.16)

Next we proceed with the proof of (i) =⇒ (ii). Suppose f is Lipschitz with Lipschitz constant ≤ M. In particular, the formula in (2.9.15) becomes

fj(x) = ∫_{Rn} f(y)φj(x − y) dy = ∫_{Rn} f(x − y)φj(y) dy, ∀ x ∈ Rn, ∀ j ∈ N.   (2.9.17)

We claim that for each j ∈ N one has

|fj(x) − fj(y)| ≤ M|x − y|, ∀ x, y ∈ Rn, and ‖∇fj‖_{L∞(Rn)} ≤ M.   (2.9.18)

Indeed, if j ∈ N is fixed, then from (2.9.17) we obtain

|fj(x) − fj(y)| ≤ ∫_{Rn} |f(x − z) − f(y − z)|φj(z) dz ≤ M|x − y|,   (2.9.19)

for every x, y ∈ Rn. In turn, since fj is smooth, we have

(∂k fj)(x) = lim_{h→0} [fj(x + hek) − fj(x)]/h, ∀ x ∈ Rn, ∀ k ∈ {1, . . . , n}.   (2.9.20)

In combination with (2.9.19) this implies ‖∇fj‖_{L∞(Rn)} ≤ M, completing the proof of the claims made in (2.9.18).
Next, fix k ∈ {1, . . . , n} and ϕ ∈ C0∞(Rn). Then based on (2.9.16) we may write

⟨∂k f, ϕ⟩ = −⟨f, ∂k ϕ⟩ = −lim_{j→∞} ⟨fj, ∂k ϕ⟩ = lim_{j→∞} ⟨∂k fj, ϕ⟩
  = lim_{j→∞} ∫_{Rn} (∂k fj)(x)ϕ(x) dx.   (2.9.21)

Using the second estimate in (2.9.18) we obtain

|∫_{Rn} (∂k fj)(x)ϕ(x) dx| ≤ ‖∇fj‖_{L∞(Rn)} ‖ϕ‖_{L1(Rn)} ≤ M‖ϕ‖_{L1(Rn)}.   (2.9.22)

Hence, from (2.9.22) and (2.9.21) it follows that

|⟨∂k f, ϕ⟩| ≤ lim sup_{j→∞} |∫_{Rn} (∂k fj)(x)ϕ(x) dx| ≤ M‖ϕ‖_{L1(Rn)}.   (2.9.23)

Consequently, the linear assignment

C0∞(Rn) ∋ ϕ ↦ ⟨∂k f, ϕ⟩ ∈ R   (2.9.24)

is continuous in the L1-norm and has norm less than or equal to M. Since C0∞(Rn) is dense in L1(Rn), the linear functional in (2.9.24) extends to a linear, bounded functional Λk : L1(Rn) → C with norm less than or equal to M. Thus, Λk ∈ (L1(Rn))∗ = L∞(Rn) has norm less than or equal to M, which implies that there exists a unique

gk ∈ L∞(Rn) with ‖gk‖_{L∞(Rn)} ≤ M   (2.9.25)

and such that

Λk(h) = ∫_{Rn} gk(x)h(x) dx, ∀ h ∈ L1(Rn).   (2.9.26)

Granted (2.9.26) and keeping in mind that Λk is an extension of the linear assignment in (2.9.24), we arrive at the conclusion that

⟨∂k f, ϕ⟩ = ∫_{Rn} gk(x)ϕ(x) dx, ∀ ϕ ∈ C0∞(Rn).   (2.9.27)

The identity in (2.9.27) yields ∂k f = gk in D′(Rn), which proves (ii) in view of (2.9.25).
Conversely, suppose (ii) is true. Fix j ∈ N and note that from (2.9.14) and part (e) in Theorem 2.87, for every k ∈ {1, . . . , n} one has ∂k fj = (∂k f) ∗ φj in D′(Rn). Thus, using Proposition 2.93, the current assumptions on f, and the properties of φj, we have

|(∂k fj)(x)| = |⟨∂k f, φj(x − ·)⟩| = |∫_{Rn} (∂k f)(y)φj(x − y) dy|
  ≤ ‖∂k f‖_{L∞(Rn)} ∫_{Rn} φj(x − y) dy ≤ M, ∀ x ∈ Rn,   (2.9.28)
for each k ∈ {1, . . . , n}. Now fix x0 ∈ Rn and consider the sequence of functions {gj}j∈N given by

gj(x) := fj(x) − fj(x0), for x ∈ Rn.   (2.9.29)

Then gj ∈ C∞(Rn) and, based on the mean value theorem, (2.9.29), and (2.9.28), we also obtain

|gj(x) − gj(y)| ≤ M|x − y|, ∀ x, y ∈ Rn, and   (2.9.30)
|gj(x)| ≤ M|x − x0|, ∀ x ∈ Rn.   (2.9.31)

By a corollary of the classical Arzelà–Ascoli theorem (see Theorem 13.16), there exists a subsequence {g_{jℓ}}ℓ∈N that converges uniformly on any compact subset of Rn to some function g ∈ C0(Rn). As such, for each ϕ ∈ C0∞(Rn) we may write

lim_{ℓ→∞} ⟨g_{jℓ}, ϕ⟩ = lim_{ℓ→∞} ∫_{supp ϕ} g_{jℓ}(x)ϕ(x) dx = ∫_{supp ϕ} g(x)ϕ(x) dx = ⟨g, ϕ⟩,   (2.9.32)

which goes to show that g_{jℓ} → g in D′(Rn) as ℓ → ∞. The latter, (2.9.16), and (2.9.29) imply that whenever ψ ∈ C0∞(Rn) is such that ∫_{Rn} ψ(x) dx = 0 we have

⟨g, ψ⟩ = lim_{ℓ→∞} ⟨g_{jℓ}, ψ⟩ = lim_{ℓ→∞} ∫_{Rn} g_{jℓ}(x)ψ(x) dx
  = lim_{ℓ→∞} ∫_{Rn} f_{jℓ}(x)ψ(x) dx = lim_{ℓ→∞} ⟨f_{jℓ}, ψ⟩ = ⟨f, ψ⟩.   (2.9.33)

Thus, we may apply Exercise 2.139 to conclude that there exists some constant c ∈ C with the property that f = g + c in D′(Rn). This proves that the distribution f is of function type and is given by the function g + c. Moreover, writing the estimate in (2.9.30) with j replaced by jℓ and then taking the limit as ℓ → ∞ (recall that {g_{jℓ}}ℓ∈N converges pointwise to g) leads to

|g(x) − g(y)| ≤ M|x − y|, ∀ x, y ∈ Rn,   (2.9.34)

which, in concert with the fact that f(x) − f(y) = g(x) − g(y), proves that f is a Lipschitz function with Lipschitz constant ≤ M. Now the proof of (ii) =⇒ (i) is finished.
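The mechanism behind (2.9.18) is easy to observe numerically: mollifying a Lipschitz function cannot increase its Lipschitz constant. The sketch below does this for f(x) = |x|, whose distributional derivative is sgn with L∞ norm 1; the mollifier is a stand-in for φj in (2.9.14), and the grid and index j = 10 are our own choices.

```python
import numpy as np

def bump(x):
    """Smooth even bump supported in [-1, 1]."""
    x = np.asarray(x, dtype=float)
    inside = np.abs(x) < 1
    out = np.zeros_like(x)
    out[inside] = np.exp(-1.0 / (1 - x[inside] ** 2))
    return out

x = np.linspace(-4, 4, 8001)
dx = x[1] - x[0]
j = 10
phij = j * bump(j * x)
phij /= phij.sum() * dx                    # unit mass, supp in [-1/j, 1/j]
fj = np.convolve(np.abs(x), phij, mode="same") * dx   # f * phi_j, smooth
# Difference quotients of f_j, away from the convolution's edge artifacts:
slopes = np.diff(fj[1000:-1000]) / dx
max_slope = np.max(np.abs(slopes))
assert max_slope <= 1 + 1e-6               # Lipschitz constant not increased
```

This is exactly estimate (2.9.19) in discrete form: each difference quotient of fj is an average of difference quotients of f against the unit-mass kernel.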
Further Notes for Chap. 2. The material in this chapter is at the very core of the theory of distributions since it provides a versatile calculus for distributions that naturally extends the scope of the standard calculus for ordinary functions. The definition of distributions used here is essentially that of the French mathematician Laurent Schwartz (1915–2002), cf. [60], though nowadays there are many books dealing
at length with the classical topics discussed here. These include [8, 15, 17–20, 24, 25, 30, 31, 59, 65, 68, 70–72], and the reader is referred to these sources for other angles of exposition. In particular, in [24, 31, 68], distributions are defined on smooth manifolds, while in [53] the notion of distributions is adapted to rough settings.
2.10 Additional Exercises for Chap. 2
Exercise 2.105. Consider the mapping u : D(R) → C defined by setting

u(ϕ) := ∑_{j=1}^∞ [ϕ(1/j²) − ϕ(0)], ∀ ϕ ∈ C0∞(R).   (2.10.1)

Prove that u is well defined. Is u a distribution? If yes, what is the order of u?

Exercise 2.106. Prove that there exists u ∈ D′(Ω) for which the following statement is false: For each compact K contained in Ω there exist C > 0 and k ∈ N0 such that

|⟨u, ϕ⟩| ≤ C sup_{x∈K, |α|≤k} |∂^α ϕ(x)|, ∀ ϕ ∈ C0∞(Ω).   (2.10.2)
Exercise 2.107. Prove that |x|^N ln |x| ∈ L1loc(Rn) whenever N is a real number satisfying N > −n. Thus, in particular, |x|^N ln |x| ∈ D′(Rn) when N > −n.

Exercise 2.108. Suppose n ≥ 2 and, given ξ ∈ S^{n−1}, define f(x) := ln |x · ξ| for each x ∈ Rn \ {0}. Prove that f ∈ L1loc(Rn). In particular, ln |x · ξ| ∈ D′(Rn).

Exercise 2.109. Prove that (ln |x|)′ = P.V. (1/x) in D′(R), where P.V. (1/x) is the distribution defined in (2.1.13).

Exercise 2.110. Let f : R → R be defined by f(x) = x ln |x| − x if x ∈ R \ {0} and f(0) = 0. Prove that f ∈ C0(R) and the distributional derivative of f in D′(R) equals ln |x|.

Exercise 2.111. Suppose n ≥ 2. Prove that ∂j(ln |x|) = xj/|x|² in D′(Rn) for every j ∈ {1, . . . , n}.
that R θ(t) dt = 1. For each j ∈ N, define
j
j θ(jx − jt) dt,
ψj (x) :=
∀ x ∈ R.
1/j D (R)
Prove that ψj −−−−→ H, where H is the Heaviside function. j→∞
Exercise 2.113. Let u : D(R) → C be defined by u(ϕ) := ∑_{j=1}^∞ ϕ^{(j)}(j) for each ϕ ∈ D(R). Prove that u ∈ D′(R) and that this distribution does not have finite order.

Exercise 2.114. For each j ∈ N define

fj(x) := 1/(j|x|^{n−1/j}) for x ∈ Rn \ {0}.

Prove that fj → ω_{n−1} δ in D′(Rn) as j → ∞, where ω_{n−1} is the area of the unit sphere in Rn.
Exercise 2.115. For each ε > 0 define

fε(x) := (1/π) · ε/(x² + ε²) for x ∈ R.

Prove that fε → δ in D′(R) as ε → 0+.
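Before proving such a statement it can be instructive to test it numerically: the pairing ⟨fε, ϕ⟩ should approach ϕ(0) as ε shrinks. The sketch below uses a rapidly decaying (not compactly supported) test profile of our own choosing, which is harmless on a truncated grid.

```python
import numpy as np

x = np.arange(-50, 50, 2.5e-4)
dx = 2.5e-4
phi = lambda t: np.exp(-t**2) * np.cos(t)     # smooth test profile, phi(0) = 1

def pair(eps):
    """Riemann-sum approximation of <f_eps, phi> for the Poisson kernel."""
    fe = (1 / np.pi) * eps / (x**2 + eps**2)
    return np.sum(fe * phi(x)) * dx

# The error |<f_eps, phi> - phi(0)| should shrink as eps -> 0+.
errs = [abs(pair(eps) - 1.0) for eps in (0.5, 0.05, 0.005)]
assert errs[2] < errs[1] < errs[0] and errs[2] < 0.05
```

The slow O(ε) decay of the error reflects the fat tails of the Poisson kernel: it has no finite second moment, unlike the Gaussian family in the next exercise.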
Exercise 2.116. For each ε > 0 define

fε(x) := (4πε)^{−n/2} e^{−|x|²/(4ε)} for x ∈ Rn.

Prove that fε → δ in D′(Rn) as ε → 0+.
Exercise 2.117. Recall that i = √−1 ∈ C and, for each ε ∈ (0, ∞), define

fε±(x) := 1/(x ± iε) for x ∈ R.

Also, recall the distribution P.V. (1/x) from (2.1.13). Prove that

1/(x ± iε) → ∓iπδ + P.V. (1/x) as ε → 0+ in D′(R).   (2.10.3)

This is the so-called Sokhotsky formula.

Exercise 2.118. Prove that the sequence {sin(jx)/(πx)}j∈N converges to δ in D′(R) as j → ∞.
Exercise 2.119. In each case determine if the given sequence of distributions in D′(R), indexed over j ∈ N, converges, and determine its limit whenever convergent. Below m ∈ N is fixed and i = √−1.

(a) fj(x) := (√j/√π) e^{−jx²} for x ∈ R;
(b) fj(x) := j^m cos(jx) for x ∈ R;
(c) fj(x) := 2j³x²/(π(1 + j²x²)²) for x ∈ R;
(d) uj := (−1)^j δ_{1/j};
(e) uj := (j/2)(δ_{1/j} − δ_{−1/j});
(f) fj(x) := (1/x) χ_{|x|≥1/j} for x ∈ R \ {0};
(g) fj(x) := (1/(jπ)) sin²(jx)/x² for x ∈ R \ {0};
(h) fj(x) := j^m e^{ijx} for x ∈ R;
(j) fj(x) := je^{ijx} if x > 0, and fj(x) := 0 if x ≤ 0, for x ∈ R.
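A conjectured answer for items like (a) can be probed numerically before attempting a proof. The sketch below checks that the Gaussian family in (a) pairs toward ϕ(0); the test profile and grid are our own choices (a smooth, integrable profile suffices on a truncated grid).

```python
import numpy as np

x = np.arange(-10, 10, 1e-4)
dx = 1e-4
phi = lambda t: 1.0 / (1 + t**2)       # smooth bounded test profile, phi(0) = 1

def pair(j):
    """Riemann-sum approximation of <f_j, phi> for item (a)."""
    fj = np.sqrt(j / np.pi) * np.exp(-j * x**2)
    return np.sum(fj * phi(x)) * dx

# If f_j -> delta in D'(R), then <f_j, phi> -> phi(0) = 1:
assert abs(pair(400) - 1.0) < abs(pair(4) - 1.0)
assert abs(pair(400) - 1.0) < 1e-2
```

For a Gaussian with variance 1/(2j) the error behaves like ϕ″(0)/(4j), so quadrupling j should roughly quarter the discrepancy; this kind of scaling is a useful cross-check on the conjectured limit.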
Exercise 2.120. Let a ∈ R. Compute (H(· − a))′ in D′(R).

Exercise 2.121. Consider the function f : R → R defined by

f(x) := x if x > a, and f(x) := 0 if x ≤ a,   (2.10.4)

where a ∈ R is fixed. Compute (uf)′ in D′(R), where uf is defined as in (2.1.6).

Exercise 2.122. Let f : R → R be defined by f(x) := sin |x| for every x ∈ R. Compute (uf)′ and (uf)″ in D′(R).

Exercise 2.123. Let I ⊆ R be an open interval, x0 ∈ I, and f ∈ C1(I \ {x0}) be such that f′ ∈ L1loc(I) (here f′ is the pointwise derivative of f in I \ {x0}). Prove that the one-sided limits lim_{x→x0+} f(x) and lim_{x→x0−} f(x) exist and are finite, that f ∈ L1loc(I), and that

(uf)′ = u_{f′} + [lim_{x→x0+} f(x) − lim_{x→x0−} f(x)] δ_{x0} in D′(I).
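The jump formula above can be verified numerically for a concrete example of one's own choosing. Below we take f(x) = e^{−x}H(x), which is C1 away from 0, has pointwise derivative −e^{−x} on (0, ∞), and a jump f(0+) − f(0−) = 1; the formula then predicts −⟨f, ϕ′⟩ = ⟨f′, ϕ⟩ + ϕ(0) for every test function ϕ. The test profile below is a hypothetical stand-in for ϕ.

```python
import numpy as np

x = np.arange(0, 60, 1e-4)            # f vanishes identically for x < 0
dx = 1e-4
phi = lambda t: np.exp(-((t - 1) ** 2))           # smooth test profile
dphi = lambda t: -2 * (t - 1) * np.exp(-((t - 1) ** 2))

# Left side: action of (u_f)' on phi, i.e. -<f, phi'>
lhs = -np.sum(np.exp(-x) * dphi(x)) * dx
# Right side: <f', phi> + (jump at 0) * phi(0) = -<e^{-x}, phi> + phi(0)
rhs = np.sum(-np.exp(-x) * phi(x)) * dx + phi(0)
assert abs(lhs - rhs) < 1e-3
```

Without the δ-term the two sides would disagree by exactly ϕ(0) ≈ 0.368 here, which is a quick way to convince oneself that the boundary term in the integration by parts is really needed.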
Remark 2.124. Prove that there exist pointwise differentiable functions on R for which the distributional derivative does not coincide with the classical derivative. You may consider the function f defined by f(x) := x² cos(1/x²) for x ≠ 0 and f(0) := 0, and show that f ∈ C1(R \ {0}) and also f is differentiable at the origin, while f′ ∉ L1loc(R).

Exercise 2.125. Let I ⊆ R be an open interval, x0 ∈ I, and let m ∈ N. Suppose that f ∈ C∞(I \ {x0}) is such that the pointwise derivatives f′, f″, . . . , f^{(m)} belong to L1loc(I). Prove that for each k ∈ {0, 1, . . . , m − 1} the limits lim_{x→x0+} f^{(k)}(x) and lim_{x→x0−} f^{(k)}(x) exist, are finite, and that

(uf)^{(m)} = u_{f^{(m)}} + [lim_{x→x0+} f(x) − lim_{x→x0−} f(x)] δ_{x0}^{(m−1)}
  + [lim_{x→x0+} f′(x) − lim_{x→x0−} f′(x)] δ_{x0}^{(m−2)} + · · ·
  + [lim_{x→x0+} f^{(m−1)}(x) − lim_{x→x0−} f^{(m−1)}(x)] δ_{x0} in D′(I).
Exercise 2.126. Let I ⊆ R be an open interval and consider a sequence {xk}k∈N ⊂ I with no accumulation point in I. Suppose f ∈ C1(I \ {xk : k ∈ N}) is such that the pointwise derivative f′ belongs to L1loc(I). Prove that for each k ∈ N the limits lim_{x→xk±} f(x) exist and are finite, that f ∈ L1loc(I), and that

(uf)′ = u_{f′} + ∑_{k=1}^∞ [lim_{x→xk+} f(x) − lim_{x→xk−} f(x)] δ_{xk} in D′(I).
Exercise 2.127. Let f : R → R be the function defined by f(x) := ⌊x⌋ for each x ∈ R, where ⌊x⌋ is the integer part of x. Determine (uf)′.

Exercise 2.128. Let Σ ⊂ Rn be a surface of class C1 as in Definition 13.36, and denote by σ the surface measure on Σ. Define the mapping δΣ : C0∞(Rn) → C by δΣ(ϕ) := ∫_Σ ϕ(x) dσ(x) for each ϕ ∈ C0∞(Rn). Prove that δΣ ∈ D′(Rn), that it has order zero, and that supp δΣ = Σ. Also show that if g ∈ L∞(K ∩ Σ) for each compact set K in Rn, and if we define

(gδΣ)(ϕ) := ∫_Σ g(x)ϕ(x) dσ(x), ∀ ϕ ∈ C0∞(Rn),   (2.10.5)

then gδΣ ∈ D′(Rn).
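The pairing ⟨δΣ, ϕ⟩ = ∫_Σ ϕ dσ is straightforward to approximate once Σ is parametrized by arc length. A minimal sketch, taking Σ to be the unit circle in R² (our choice of surface and test function):

```python
import numpy as np

# Discretize the unit circle by arc length (drop the duplicated endpoint).
n_pts = 100000
theta = np.linspace(0, 2 * np.pi, n_pts + 1)[:-1]
ds = 2 * np.pi / n_pts                     # arc-length element
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# A smooth test function; on the circle it is constantly e^{-1}.
phi = lambda p: np.exp(-(p[:, 0] ** 2 + p[:, 1] ** 2))

# <delta_Sigma, phi> = integral over Sigma of phi dsigma
pairing = np.sum(phi(pts)) * ds
assert abs(pairing - 2 * np.pi * np.exp(-1)) < 1e-6
```

Since ⟨δΣ, ϕ⟩ is controlled by sup|ϕ| alone (no derivatives of ϕ enter), the computation also makes the order-zero claim in the exercise plausible.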
Exercise 2.129. Let Ω ⊂ Rn be a domain of class C1 (recall Definition 13.40) and denote by ν = (ν1, . . . , νn) its outward unit normal. Denote by δ∂Ω the distribution defined as in Exercise 2.128 corresponding to Σ := ∂Ω. Set Ω+ := Ω and Ω− := Rn \ Ω̄. Suppose that f ∈ L1loc(Rn) has the property that the distributional derivative ∂k f belongs to L1loc(Rn) for each number k ∈ {1, 2, . . . , n}. In addition, assume that the restrictions f± := f|_{Ω±} satisfy f± ∈ C1(Ω±) and that they may be extended so that f± ∈ C0(Ω̄±). Prove that for each k ∈ {1, 2, . . . , n} the following equality holds:

∂k uf = u_{∂k f} + s_{∂Ω}(f) νk δ∂Ω in D′(Rn),   (2.10.6)

where s_{∂Ω}(f) : ∂Ω → C is defined by

s_{∂Ω}(f)(x) := f−(x) − f+(x) = lim_{Rn\Ω̄ ∋ y→x} f(y) − lim_{Ω ∋ y→x} f(y) for every x ∈ ∂Ω.
Exercise 2.130. Let Ω ⊂ R^n be a bounded domain of class C¹ with outward unit normal ν = (ν₁, ..., ν_n). Prove that for each k ∈ {1, 2, ..., n} one has ∂_k χ_Ω = −ν_k δ_∂Ω in D′(R^n).

Exercise 2.131. Suppose R ∈ (0, ∞) and let u ∈ D′(R^n) be such that
\[
(|x|^2 - R^2)\,u = 0 \quad\text{in } \mathcal{D}'(\mathbb{R}^n). \tag{2.10.7}
\]
Prove that u has compact support. Give an example of a distribution u satisfying condition (2.10.7).

Exercise 2.132. Let f ∈ C⁰(Ω) be such that u_f ∈ E′(Ω). Prove that f has compact support and supp u_f = supp f.

Exercise 2.133. Compute the derivatives of order m ∈ N of each distribution on R given below.
(a) |x|
(b) sgn x
(c) cos x · H
(d) sin x · H
(e) x² χ_[−1,1]
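As a consistency check for items (a) and (b) of Exercise 2.133 (our computation, not part of the exercise), in D′(R) one finds

```latex
(\operatorname{sgn} x)^{(m)} = 2\,\delta^{(m-1)}, \quad m \ge 1,
\qquad\text{and}\qquad
|x|^{(m)} = 2\,\delta^{(m-2)}, \quad m \ge 2,
\quad\text{with}\quad |x|' = \operatorname{sgn} x .
```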
Exercise 2.134. Consider the set A := {(x, y) ∈ R² : |x−2| + |y−1| < 1} ⊂ R². Compute (∂₁² − ∂₂²)χ_A in D′(R²).

Exercise 2.135. Let f : R² → R be defined by f(x, y) := χ_[0,1](x − y) for x, y ∈ R. Compute ∂₁(u_f), ∂₂(u_f) in D′(R²). Prove that ∂₁²(u_f) − ∂₂²(u_f) = 0 in D′(R²).

Exercise 2.136. Let ψ ∈ C^∞(Ω) be such that ψ(x) ≠ 0 for every x ∈ Ω. Prove that for each v ∈ D′(Ω) there exists a unique solution u ∈ D′(Ω) of the equation ψu = v in D′(Ω).

Exercise 2.137. Let ψ ∈ C^∞(Ω) and suppose u₁, u₂ ∈ D′(Ω) are such that u₁ ≠ u₂ and ψu₁ = ψu₂ in D′(Ω). Prove that the set {x ∈ Ω : ψ(x) = 0} is not empty.

Exercise 2.138. Suppose {Ω_j}_{j∈I} is an open cover of the open set Ω ⊆ R^n and there exists a family of distributions {u_j}_{j∈I} such that u_j ∈ D′(Ω_j) for each j ∈ I and u_j|_{Ω_j∩Ω_k} = u_k|_{Ω_j∩Ω_k} in D′(Ω_j ∩ Ω_k) for every j, k ∈ I such that Ω_j ∩ Ω_k ≠ ∅. Prove that there exists a unique distribution u ∈ D′(Ω) with the property that u|_{Ω_j} = u_j in D′(Ω_j) for every j ∈ I.

Exercise 2.139. Let u ∈ D′(R^n) be such that ⟨u, ϕ⟩ = 0 for every ϕ ∈ C_0^∞(R^n) satisfying ∫_{R^n} ϕ(x) dx = 0. Prove that there exists c ∈ C such that u = c in D′(R^n).

Exercise 2.140. Let Ω ⊆ R^n be open and connected and let u ∈ D′(Ω) be such that ∂₁u = ∂₂u = ··· = ∂_n u = 0 in D′(Ω). Prove that there exists c ∈ C such that u = c in D′(Ω).

Exercise 2.141. Let u ∈ D′(R^n) be such that x_n u = 0 in D′(R^n). Prove that there exists v ∈ D′(R^{n−1}) such that u(x′, x_n) = v(x′) ⊗ δ(x_n) in D′(R^n).
2.10. ADDITIONAL EXERCISES FOR CHAP. 2
Exercise 2.142. Let u ∈ D′(R^n) be such that x₁u = ··· = x_n u = 0 in D′(R^n). Determine the expression for u.

Exercise 2.143. Let u ∈ D′(R^n) be such that ∂_n u = 0 in D′(R^n). Prove that there exists v ∈ D′(R^{n−1}) such that u(x′, x_n) = v(x′) ⊗ 1 in D′(R^n), where 1 denotes the constant function (equal to 1) in R.

Exercise 2.144. Let v, w ∈ D′(R) and define the distribution
\[
u(x_1, x_2) := 1 \otimes v(x_2) + w(x_1) \otimes 1 \quad\text{in } \mathcal{D}'(\mathbb{R}^2),
\]
where 1 denotes the constant function (equal to 1) in R. Prove that ∂₁∂₂u = 0 in D′(R²).

Exercise 2.145. Let u(x₁, x₂, x₃) := H(x₁) ⊗ δ(x₂) ⊗ δ(x₃) in D′(R³), where H is the Heaviside function on the real line. Compute ∂₁u, ∂₂u, ∂₃u in D′(R³).

Exercise 2.146. Consider the sequence
\[
f_j(x) := (2\pi)^{-n} \int_{[-j,j]\times\cdots\times[-j,j]} e^{ix\cdot\xi}\,d\xi,
\qquad \forall\, x \in \mathbb{R}^n,\ \forall\, j \in \mathbb{N}. \tag{2.10.8}
\]
Prove that $f_j \xrightarrow[j\to\infty]{\mathcal{D}'(\mathbb{R}^n)} \delta$.

Exercise 2.147. Solve each equation in D′(R) for u.
(1) (x − 1)u = δ;
(2) xu = a, where a ∈ C^∞(R);
(3) xu = v, where v ∈ D′(R).

Exercise 2.148. Prove that the given convolutions exist and then compute them.
(a) H ∗ H
(b) H(−x) ∗ H(−x)
(c) x²H ∗ (sin x · H)
(d) χ_[0,1] ∗ (xH)
(e) |x|² ∗ δ_∂B(0,r), where r > 0 and δ_∂B(0,r) is as defined in Exercise 2.128 corresponding to the surface Σ := ∂B(0, r).

Exercise 2.149. Let a ∈ R^n \ {0} and set u_j := δ_{ja}, v_j := δ_{−ja}, for each j ∈ N. Determine lim_{j→∞} u_j, lim_{j→∞} v_j, and lim_{j→∞}(u_j ∗ v_j) in D′(R^n).
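For n = 1 the integral in (2.10.8) evaluates in closed form to f_j(x) = sin(jx)/(πx), and the claimed convergence to δ can be sanity-checked numerically against a test function (a rough sketch in plain Python; the Gaussian test function, grid step, and truncation are our choices, not part of the exercise):

```python
import math

def f_j(x, j):
    # n = 1: f_j(x) = (2*pi)^(-1) * integral_{-j}^{j} e^{i x xi} d xi = sin(j x)/(pi x)
    return j / math.pi if x == 0.0 else math.sin(j * x) / (math.pi * x)

def pair_with_gaussian(j, h=1e-3, half_width=20.0):
    # Riemann-sum approximation of <f_j, phi> with phi(x) = exp(-x^2);
    # phi decays so fast that truncating at |x| = 20 is harmless.
    n = int(2 * half_width / h)
    acc = 0.0
    for k in range(n + 1):
        x = -half_width + k * h
        acc += f_j(x, j) * math.exp(-x * x)
    return acc * h

# As j grows, <f_j, phi> should tend to phi(0) = 1.
print(pair_with_gaussian(5), pair_with_gaussian(40))
```

The printed values approach 1 as j grows, consistent with f_j → δ in D′(R).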
Exercise 2.150. For each j ∈ N, consider the functions defined by f_j(x) := (−1)^j (j/2) χ_{[−1/j, 1/j]}(x) and g_j(x) := (−1)^j, for every x ∈ R. Determine if the given limits exist in D′(R).
(a) lim_{j→∞} f_j
(b) lim_{j→∞} g_j
(c) lim_{j→∞} (f_j ∗ g_j)

Exercise 2.151. Let u ∈ D′(R^n) and consider the map Λ : D(R^n) → E(R^n) given by Λ(ϕ) := u ∗ ϕ, for every ϕ ∈ C_0^∞(R^n). Prove that Λ is a well-defined, linear, and continuous map. Also prove that Λ commutes with translations, that is, if x₀ ∈ R^n and ϕ ∈ C_0^∞(R^n), then t_{x₀}Λ(ϕ) = Λ(t_{x₀}ϕ), where t_{x₀} is the map from (1.3.16).

Exercise 2.152. Suppose Λ : D(R^n) → E(R^n) is a linear, continuous map that commutes with translations (in the sense explained in Exercise 2.151). Prove that there exists a unique u ∈ D′(R^n) such that Λ(ϕ) = u ∗ ϕ for every function ϕ ∈ C_0^∞(R^n).

Exercise 2.153. Let u ∈ E′(R^n) be such that ⟨u, x^α⟩ = 0 for every α ∈ N₀^n. Prove that u = 0 in E′(R^n).

Exercise 2.154. Let u : E(R) → C be the functional defined by
\[
u(\varphi) := \lim_{k\to\infty} \Bigl[\, \sum_{j=1}^{k} \varphi\Bigl(\frac{1}{j}\Bigr) - k\,\varphi(0) - \varphi'(0)\ln k \Bigr],
\qquad \forall\, \varphi \in C^\infty(\mathbb{R}).
\]
Prove that u ∈ E′(R) and determine supp u.

Exercise 2.155. For each j ∈ N consider the function f_j : R → R defined by f_j(x) := j/2 if |x| ≤ 1/j and f_j(x) := 0 if |x| > 1/j. Prove that $f_j \xrightarrow[j\to\infty]{\mathcal{E}'(\mathbb{R})} \delta$.

Exercise 2.156. For each j ∈ N consider the function f_j : R → R defined by f_j(x) := 1/j if |x| ≤ j and f_j(x) := 0 if |x| > j. Prove that the sequence {f_j}_{j∈N} converges in D′(R) but not in E′(R).

Exercise 2.157. Let ψ ∈ C_0^∞(R^n) be such that ∫_{R^n} ψ(x) dx = 1 and for each j ∈ N define f_j : R^n → C by f_j(x) := j^n ψ(jx) for each x ∈ R^n. Prove that $f_j \xrightarrow[j\to\infty]{\mathcal{E}'(\mathbb{R}^n)} \delta$.

Exercise 2.158. Let {x_j}_{j∈N} be a sequence of points in R^n. Prove that {x_j}_{j∈N} is convergent in R^n if and only if {δ_{x_j}}_{j∈N} is convergent in E′(R^n).
Exercise 2.159. Let a ∈ R and k ∈ N₀ be given. Prove that
\[
(x + a)\,\delta_a^{(k)} = 2a\,\delta_a^{(k)} - k\,\delta_a^{(k-1)} \quad\text{in } \mathcal{D}'(\mathbb{R}), \tag{2.10.9}
\]
\[
(x^2 - a^2)\,\delta_a^{(k)} = -2ka\,\delta_a^{(k-1)} + k(k-1)\,\delta_a^{(k-2)} \quad\text{in } \mathcal{D}'(\mathbb{R}), \tag{2.10.10}
\]
with the convention that δ_a^{(−m)} := 0 ∈ D′(R) for each m ∈ N.
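The first identity in Exercise 2.159 may be verified by pairing against a test function ϕ ∈ C_0^∞(R) and using the Leibniz rule (our sketch, not part of the book's text):

```latex
\bigl\langle (x+a)\,\delta_a^{(k)},\varphi \bigr\rangle
  = (-1)^k \bigl[(x+a)\varphi\bigr]^{(k)}(a)
  = (-1)^k \bigl[ 2a\,\varphi^{(k)}(a) + k\,\varphi^{(k-1)}(a) \bigr]
  = \bigl\langle 2a\,\delta_a^{(k)} - k\,\delta_a^{(k-1)},\varphi \bigr\rangle .
```

The second identity then follows from the factorization x² − a² = (x + a)(x − a) together with (x − a)δ_a^{(k)} = −k δ_a^{(k−1)}.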
Chapter 3

The Schwartz Space and the Fourier Transform

3.1 The Schwartz Space of Rapidly Decreasing Functions
Recall that if f ∈ L¹(R^n) then the Fourier transform of f is the mapping f̂ : R^n → C defined by
\[
\widehat{f}(\xi) := \int_{\mathbb{R}^n} e^{-ix\cdot\xi} f(x)\,dx \qquad\text{for each } \xi \in \mathbb{R}^n, \tag{3.1.1}
\]
where i := √−1 ∈ C. Note that under the current assumptions the integral in (3.1.1) is absolutely convergent (which means that f̂ is well defined pointwise in R^n) and one has
\[
\sup_{\xi\in\mathbb{R}^n} |\widehat{f}(\xi)| \le \|f\|_{L^1(\mathbb{R}^n)}
\quad\text{and}\quad \widehat{f} \in C^0(\mathbb{R}^n), \tag{3.1.2}
\]
where the second condition is seen by applying Lebesgue's dominated convergence theorem. Hence, the mapping
\[
\mathcal{F} : L^1(\mathbb{R}^n) \to \{g \in C^0(\mathbb{R}^n) : g \text{ is bounded}\},
\qquad \mathcal{F}f := \widehat{f}, \quad \forall\, f \in L^1(\mathbb{R}^n), \tag{3.1.3}
\]
called the Fourier transform, is well-defined. Besides being continuous, functions belonging to the image of F also vanish at infinity. This property is proved next.

Proposition 3.1. If f ∈ L¹(R^n), then lim_{|ξ|→∞} f̂(ξ) = 0.
D. Mitrea, Distributions, Partial Differential Equations, and Harmonic Analysis, Universitext, DOI 10.1007/978-1-4614-8208-6 3, © Springer Science+Business Media New York 2013
Proof. First, consider the case when f ∈ C_0^∞(R^n). In this scenario, integrating by parts gives |ξ|² f̂(ξ) = −\widehat{Δf}(ξ) for each ξ ∈ R^n \ {0}, where Δf := Σ_{j=1}^n ∂_j² f. Hence,
\[
|\widehat{f}(\xi)| \le \frac{|\widehat{\Delta f}(\xi)|}{|\xi|^2}
\le \frac{\|\Delta f\|_{L^1(\mathbb{R}^n)}}{|\xi|^2},
\qquad \forall\, \xi \in \mathbb{R}^n\setminus\{0\}, \tag{3.1.4}
\]
from which it is clear that lim_{|ξ|→∞} f̂(ξ) = 0 in this case.

Consider now the case when f is an arbitrary function in L¹(R^n). Since C_0^∞(R^n) is dense in the latter space, for each fixed ε > 0 there exists g ∈ C_0^∞(R^n) such that ‖f − g‖_{L¹(R^n)} ≤ ε/2. Also, from what we proved so far, there exists R > 0 with the property that |ĝ(ξ)| ≤ ε/2 whenever |ξ| > R. Keeping this in mind and using the linearity of the Fourier transform as well as the estimate in (3.1.2), we may write
\[
|\widehat{f}(\xi)| \le |\widehat{(f-g)}(\xi)| + |\widehat{g}(\xi)|
\le \|f-g\|_{L^1(\mathbb{R}^n)} + \frac{\varepsilon}{2} \le \varepsilon,
\qquad\text{if } |\xi| > R. \tag{3.1.5}
\]
From this, the desired conclusion follows. The proof of the proposition is therefore complete.
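Proposition 3.1 is easy to illustrate numerically for a concrete non-smooth example (our choice; plain Python, midpoint rule): for f = χ_[0,1] the transform has the closed form f̂(ξ) = (1 − e^{−iξ})/(iξ), whose modulus decays like 2/|ξ|.

```python
import cmath

def fhat_indicator(xi, n=20000):
    # midpoint-rule approximation of f_hat(xi) = integral_0^1 e^{-i x xi} dx,  f = chi_[0,1]
    h = 1.0 / n
    return sum(cmath.exp(-1j * (k + 0.5) * h * xi) for k in range(n)) * h

for xi in (1.0, 10.0, 100.0):
    exact = (1 - cmath.exp(-1j * xi)) / (1j * xi)  # closed form; modulus <= 2/|xi|
    print(xi, abs(fhat_indicator(xi)), abs(exact))
```

The moduli shrink as ξ grows, matching the Riemann–Lebesgue conclusion of the proposition.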
We are very much interested in the possibility of extending the action of the Fourier transform from functions to distributions, though this is going to be accomplished later. For now, we note the following consequence of Fubini's theorem:
\[
\int_{\mathbb{R}^n} \widehat{f}(\xi)\,g(\xi)\,d\xi
= \int_{\mathbb{R}^n} f(x)\,\widehat{g}(x)\,dx,
\qquad \forall\, f, g \in L^1(\mathbb{R}^n). \tag{3.1.6}
\]
Identity (3.1.6) might suggest defining the Fourier transform of a distribution based on duality. However, there is a serious impediment in doing so. Specifically, while for every ϕ ∈ C_0^∞(R^n) we have ϕ̂ ∈ C^∞(R^n) (as may be seen directly from (3.1.1)) and ϕ̂ decays at infinity (as proved in Proposition 3.1), we nonetheless have
\[
\mathcal{F}\bigl(C_0^\infty(\mathbb{R}^n)\bigr) \not\subset C_0^\infty(\mathbb{R}^n). \tag{3.1.7}
\]
In fact, we claim that
\[
\varphi \in C_0^\infty(\mathbb{R}^n) \ \text{and}\ \widehat{\varphi} \in C_0^\infty(\mathbb{R}^n)
\ \Longrightarrow\ \varphi = 0. \tag{3.1.8}
\]
To see that this is the case, suppose ϕ ∈ C_0^∞(R^n) is such that ϕ̂ has compact support in R^n, and pick an arbitrary point x* = (x₁*, ..., x_n*) ∈ R^n. Define the function Φ : C → C by setting
\[
\Phi(z) := \int_{\mathbb{R}^n} e^{-i\bigl(z x_1 + \sum_{j=2}^{n} x_j^* x_j\bigr)}\, \varphi(x_1, \ldots, x_n)\,dx_1\cdots dx_n,
\qquad\text{for } z \in \mathbb{C}. \tag{3.1.9}
\]
Then Φ is analytic in C and Φ(t) = ϕ̂(t, x₂*, ..., x_n*) for each t ∈ R. Given that ϕ̂ has compact support, this implies that Φ vanishes on R \ [−R, R] if
R > 0 is suitably large. The identity theorem for analytic functions of one complex variable then forces Φ = 0 everywhere in C. In particular, ϕ̂(x*) = Φ(x₁*) = 0. Since x* ∈ R^n has been chosen arbitrarily, we conclude that ϕ̂ = 0 in R^n. However, as we will see in the sequel, the Fourier transform is injective on C_0^∞(R^n), so (3.1.8) follows.

To overcome the difficulty highlighted in (3.1.7), we introduce a new (topological vector) space of functions that contains C_0^∞(R^n), is invariant under F, and has a dual that is a subspace of D′(R^n). This is the space of Schwartz functions, named after the French mathematician Laurent-Moïse Schwartz (1915–2002), who pioneered the theory of distributions and first considered this space in connection with the Fourier transform.

Before presenting the definition of Schwartz functions, we introduce some notation, motivated by the observation that each time a partial derivative of ϕ̂ is taken, the exponential introduces i as a multiplicative factor. To adjust for this factor, it is therefore natural to re-normalize the ordinary partial differentiation operators as follows:
\[
D_j := \tfrac{1}{i}\,\partial_j, \quad j = 1, \ldots, n,
\qquad D := (D_1, \ldots, D_n),
\qquad D^\alpha := D_1^{\alpha_1}\cdots D_n^{\alpha_n},
\quad \forall\, \alpha = (\alpha_1, \ldots, \alpha_n) \in \mathbb{N}_0^n. \tag{3.1.10}
\]
At times, we will also use subscripts to specify the variable with respect to which the differentiation is taken. For example, D_x^α stands for D^α with the additional specification that the differentiation is taken with respect to the variable x ∈ R^n. Fix now α, β ∈ N₀^n and observe that for each ϕ ∈ C_0^∞(R^n) integration by parts implies
\[
\xi^\beta D_\xi^\alpha \widehat{\varphi}(\xi)
= \int_{\mathbb{R}^n} \xi^\beta e^{-ix\cdot\xi}\,(-x)^\alpha \varphi(x)\,dx
= \int_{\mathbb{R}^n} (-D_x)^\beta\bigl(e^{-ix\cdot\xi}\bigr)\,(-x)^\alpha \varphi(x)\,dx
= \int_{\mathbb{R}^n} e^{-ix\cdot\xi}\, D_x^\beta\bigl[(-x)^\alpha \varphi(x)\bigr]\,dx. \tag{3.1.11}
\]
Hence,
\[
\sup_{\xi\in\mathbb{R}^n} \bigl|\xi^\beta D_\xi^\alpha \widehat{\varphi}(\xi)\bigr|
\le \int_{\mathbb{R}^n} \bigl|D_x^\beta\bigl(x^\alpha \varphi(x)\bigr)\bigr|\,dx < \infty. \tag{3.1.12}
\]
The conclusion from (3.1.12) is that derivatives of any order of ϕ̂ decrease at ∞ faster than any polynomial. This suggests making the following definition.

Definition 3.2. The Schwartz class of rapidly decreasing functions is defined as
\[
\mathcal{S}(\mathbb{R}^n) := \Bigl\{ \varphi \in C^\infty(\mathbb{R}^n) :
\sup_{x\in\mathbb{R}^n} |x^\beta \partial^\alpha \varphi(x)| < \infty,
\ \forall\, \alpha, \beta \in \mathbb{N}_0^n \Bigr\}. \tag{3.1.13}
\]
We shall simply say that ϕ is a Schwartz function if ϕ ∈ S(R^n). Obviously,
\[
C_0^\infty(\mathbb{R}^n) \subset \mathcal{S}(\mathbb{R}^n) \subset C^\infty(\mathbb{R}^n), \tag{3.1.14}
\]
though both inclusions are strict. An example of a Schwartz function that is not compactly supported is provided below.
Exercise 3.3. Prove that for each fixed number a ∈ (0, ∞), the function f, defined by f(x) := e^{−a|x|²} for each x ∈ R^n, belongs to S(R^n) and has the property that supp f = R^n.

Other elementary observations pertaining to the Schwartz class from Definition 3.2 are recorded below.

Remark 3.4. One has
\[
\mathcal{S}(\mathbb{R}^n) = \Bigl\{ \varphi \in C^\infty(\mathbb{R}^n) :
\sup_{x\in\mathbb{R}^n} |x^\beta D^\alpha \varphi(x)| < \infty,
\ \forall\, \alpha, \beta \in \mathbb{N}_0^n \Bigr\}, \tag{3.1.15}
\]
and if ϕ ∈ C^∞(R^n), then ϕ ∈ S(R^n) if and only if
\[
\sup_{x\in\mathbb{R}^n} (1 + |x|)^m |\partial^\alpha \varphi(x)| < \infty,
\qquad \forall\, m \in \mathbb{N}_0,\ \forall\, \alpha \in \mathbb{N}_0^n,\ |\alpha| \le m. \tag{3.1.16}
\]
Indeed, (3.1.15) is immediate from Definition 3.2. Also, the second claim in the remark readily follows from the observation that for each m ∈ N₀ there exists C ∈ [1, ∞) with the property that
\[
C^{-1} |x|^m \le \sum_{|\gamma| = m} |x^\gamma| \le C\, |x|^m, \qquad \forall\, x \in \mathbb{R}^n. \tag{3.1.17}
\]
In turn, the second inequality in (3.1.17) is seen by noting that for each α ∈ N₀^n and x = (x₁, ..., x_n) ∈ R^n, we have
\[
|x^\alpha| = |x_1|^{\alpha_1}\cdots|x_n|^{\alpha_n}
\le |x|^{\alpha_1}\cdots|x|^{\alpha_n} = |x|^{|\alpha|}. \tag{3.1.18}
\]
To justify the first inequality in (3.1.17), consider the function g(x) := Σ_{|γ|=m} |x^γ| for x ∈ R^n. Then its restriction to S^{n−1} attains a nonzero minimum, and the desired inequality follows by rescaling.

Exercise 3.5. Prove that if f ∈ S(R^n) then for each α, β ∈ N₀^n and N ∈ N there exists C = C_{f,N,α,β} ∈ (0, ∞) such that
\[
\bigl| x^\alpha \partial^\beta f(x) \bigr| \le \frac{C}{(1 + |x|)^N}
\qquad\text{for each } x \in \mathbb{R}^n. \tag{3.1.19}
\]
Use this to deduce that S(R^n) ⊂ L^p(R^n) for every p ∈ [1, ∞].

In particular, S(R^n) ⊂ L¹(R^n) which, in concert with (3.1.3), allows us to consider the Fourier transform on S(R^n). Also, F(C_0^∞(R^n)) ⊆ S(R^n), as seen from the computation in (3.1.12). Clearly, S(R^n) is a vector space when endowed with the canonical operations of addition of functions and multiplication by complex numbers. For a detailed discussion regarding the topology we consider on S(R^n) see Sect. 13.1. We continue to denote by S(R^n) the respective topological vector space, and we single out here a few important facts that are useful for our analysis.
Fact 3.6. S(R^n) is a Fréchet space, that is, S(R^n) is a locally convex, metrizable, complete topological vector space over C.

Fact 3.7. A sequence {ϕ_j}_{j∈N} ⊂ S(R^n) converges in S(R^n) to some ϕ ∈ S(R^n) provided
\[
\sup_{x\in\mathbb{R}^n} \bigl| x^\beta \partial^\alpha \bigl(\varphi_j(x) - \varphi(x)\bigr) \bigr|
\xrightarrow[j\to\infty]{} 0, \qquad \forall\, \alpha, \beta \in \mathbb{N}_0^n, \tag{3.1.20}
\]
in which case we use the notation $\varphi_j \xrightarrow[j\to\infty]{\mathcal{S}(\mathbb{R}^n)} \varphi$.

Exercise 3.8. Use (3.1.17) to prove that $\varphi_j \xrightarrow[j\to\infty]{\mathcal{S}(\mathbb{R}^n)} \varphi$ if and only if
\[
\sup_{\substack{x\in\mathbb{R}^n \\ \alpha\in\mathbb{N}_0^n,\ |\alpha|\le m}}
(1 + |x|)^m \bigl| \partial^\alpha [\varphi_j(x) - \varphi(x)] \bigr|
\xrightarrow[j\to\infty]{} 0, \qquad \forall\, m \in \mathbb{N}_0. \tag{3.1.21}
\]
It is useful to observe that the Schwartz class embeds continuously into Lebesgue spaces.

Exercise 3.9. Prove that if p ∈ [1, ∞] and a sequence of functions {ϕ_j}_{j∈N} in S(R^n) converges in S(R^n) to some ϕ ∈ S(R^n), then {ϕ_j}_{j∈N} also converges in L^p(R^n) to ϕ.
Hint: Use Exercise 3.8.

Definition 3.10. The space of slowly increasing functions in R^n is defined as
\[
\mathcal{L}(\mathbb{R}^n) := \Bigl\{ a \in C^\infty(\mathbb{R}^n) :
\forall\, \alpha \in \mathbb{N}_0^n\ \exists\, k \in \mathbb{N}_0 \ \text{such that}\
\sup_{x\in\mathbb{R}^n} (1 + |x|)^{-k} |\partial^\alpha a(x)| < \infty \Bigr\}. \tag{3.1.22}
\]
Note that an immediate consequence of Definition 3.10 is that L(R^n) is stable under differentiation (i.e., if a ∈ L(R^n) then ∂^α a ∈ L(R^n) for every α ∈ N₀^n). Also,
\[
\mathcal{S}(\mathbb{R}^n) \subset \mathcal{L}(\mathbb{R}^n), \tag{3.1.23}
\]
though L(R^n) contains many additional functions that lack decay as, for example, the class of polynomials (other examples are contained in the two exercises below).

Exercise 3.11. Prove that the function f(x) := e^{i|x|²}, x ∈ R^n, belongs to L(R^n).

Exercise 3.12. Prove that for each k ∈ R the function f(x) := (1 + |x|²)^k, x ∈ R^n, belongs to L(R^n).

Some other basic properties of the Schwartz class are collected in the next theorem.
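For Exercise 3.11, one may argue by induction on |α| (our sketch): every derivative of f(x) = e^{i|x|²} is a polynomial multiple of f, so each ∂^α f grows at most polynomially, which is exactly what membership in (3.1.22) requires:

```latex
\partial^\alpha \bigl( e^{i|x|^2} \bigr) = p_\alpha(x)\, e^{i|x|^2},
\qquad \deg p_\alpha \le |\alpha|,
\qquad\text{hence}\qquad
\sup_{x\in\mathbb{R}^n} (1+|x|)^{-|\alpha|}\,
  \bigl| \partial^\alpha e^{i|x|^2} \bigr| < \infty .
```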
Theorem 3.13. The following statements are true.
(a) For each a ∈ L(R^n), the mapping S(R^n) ∋ ϕ ↦ aϕ ∈ S(R^n) is well-defined, linear, and continuous.
(b) For each α ∈ N₀^n, the mapping S(R^n) ∋ ϕ ↦ ∂^α ϕ ∈ S(R^n) is well-defined, linear, and continuous.
(c) D(R^n) ↪ S(R^n) ↪ E(R^n) and the embeddings are continuous.
(d) C_0^∞(R^n) is sequentially dense in S(R^n). Also, the Schwartz class S(R^n) is sequentially dense in E(R^n).
(e) If m, n ∈ N and f ∈ S(R^m), g ∈ S(R^n), then f ⊗ g ∈ S(R^m × R^n) and the mapping
\[
\mathcal{S}(\mathbb{R}^m) \times \mathcal{S}(\mathbb{R}^n) \ni (f, g) \mapsto f \otimes g \in \mathcal{S}(\mathbb{R}^m \times \mathbb{R}^n) \tag{3.1.24}
\]
is bilinear and continuous.
(f) If f, g ∈ S(R^n) then f ∗ g ∈ S(R^n) and the mapping
\[
\mathcal{S}(\mathbb{R}^n) \times \mathcal{S}(\mathbb{R}^n) \ni (f, g) \mapsto f * g \in \mathcal{S}(\mathbb{R}^n) \tag{3.1.25}
\]
is bilinear and continuous.

Proof. Clearly, the mappings in (a) and (b) are linear. By Fact 3.6 and Theorem 13.14, their continuity is equivalent to sequential continuity at 0, something that can be easily checked using Fact 3.7. Moving on to the statement in (c), we first prove that D(R^n) embeds continuously into S(R^n). Let ι : D(R^n) → S(R^n) be defined by ι(ϕ) := ϕ for each ϕ ∈ C_0^∞(R^n). From (3.1.14), this is a well-defined and linear mapping. To see that ι is sequentially continuous at 0 ∈ D(R^n), consider $\varphi_j \xrightarrow[j\to\infty]{\mathcal{D}(\mathbb{R}^n)} 0$. Then there exists a compact set K ⊂ R^n with the property that supp ϕ_j ⊆ K for every j ∈ N, and lim_{j→∞} sup_{x∈K} |∂^α ϕ_j| = 0 for every α ∈ N₀^n. Hence, for any α, β ∈ N₀^n,
\[
\sup_{x\in\mathbb{R}^n} \bigl| x^\beta \partial^\alpha \varphi_j(x) \bigr|
= \sup_{x\in K} \bigl| x^\beta \partial^\alpha \varphi_j(x) \bigr|
\le C \sup_{x\in K} \bigl| \partial^\alpha \varphi_j(x) \bigr|
\xrightarrow[j\to\infty]{} 0, \tag{3.1.26}
\]
proving that ι is sequentially continuous at the origin. Recalling now Fact 3.6 and Theorem 13.5, we conclude that ι is continuous. Our next goal is to show that S(R^n) embeds continuously in E(R^n). From (3.1.14) we have that the identity ι : S(R^n) → E(R^n) given by ι(f) := f, for each f ∈ S(R^n), is a well-defined linear map. By Fact 3.6, Fact 1.8, and Theorem 13.14, ι is continuous if and only if it is sequentially continuous at zero. However, if $f_j \xrightarrow[j\to\infty]{\mathcal{S}(\mathbb{R}^n)} 0$, then for any compact set K ⊂ R^n and any α ∈ N₀^n,
\[
\lim_{j\to\infty} \sup_{x\in K} |\partial^\alpha f_j(x)|
\le \lim_{j\to\infty} \sup_{x\in\mathbb{R}^n} |\partial^\alpha f_j(x)| = 0. \tag{3.1.27}
\]
This shows that ι is sequentially continuous at zero, finishing the proof of (c).

Next, we prove the statement in (d). Let f ∈ S(R^n) be arbitrary and, for some fixed ψ ∈ C_0^∞(R^n) satisfying ψ = 1 in a neighborhood of B(0, 1), define the sequence of functions f_j : R^n → C by setting f_j(x) := ψ(x/j) f(x) for every x ∈ R^n and every j ∈ N. Then f_j ∈ C_0^∞(R^n) and f_j = f on B(0, j) for each j ∈ N. We claim that
\[
f_j \xrightarrow[j\to\infty]{\mathcal{S}(\mathbb{R}^n)} f. \tag{3.1.28}
\]
To see that this is the case, if α, β ∈ N₀^n are arbitrary, by making use of Leibniz's formula (13.2.4) and the fact that ψ(x/j) = 1 for each x ∈ B(0, j), we may write
\[
\begin{aligned}
\sup_{x\in\mathbb{R}^n} \Bigl| x^\beta \partial^\alpha \bigl(f_j(x) - f(x)\bigr) \Bigr|
&= \sup_{x\in\mathbb{R}^n} \Bigl| x^\beta \sum_{\gamma\le\alpha}
\frac{\alpha!}{\gamma!(\alpha-\gamma)!}\, \partial^\gamma f(x)\,
\partial^{\alpha-\gamma}\Bigl[\psi\Bigl(\frac{x}{j}\Bigr) - 1\Bigr] \Bigr| \\
&\le \sum_{\gamma<\alpha} \frac{\alpha!}{\gamma!(\alpha-\gamma)!}\,
\sup_{|x|\ge j} \Bigl| x^\beta \partial^\gamma f(x)\,
\partial^{\alpha-\gamma}\Bigl[\psi\Bigl(\frac{x}{j}\Bigr)\Bigr] \Bigr|
+ \sup_{|x|\ge j} \Bigl| x^\beta \partial^\alpha f(x)\,
\Bigl[\psi\Bigl(\frac{x}{j}\Bigr) - 1\Bigr] \Bigr|. 
\end{aligned} \tag{3.1.29}
\]
Since ψ ∈ C_0^∞(R^n), it follows that there exists a finite constant C > 0, depending only on ψ and α, such that
\[
\sup_{|x|\ge j} \Bigl| \partial^{\alpha-\gamma}_x\Bigl[\psi\Bigl(\frac{x}{j}\Bigr)\Bigr] \Bigr|
\le \frac{C}{j}, \qquad \forall\, \gamma \in \mathbb{N}_0^n,\ \gamma < \alpha,\ \forall\, j \in \mathbb{N}. \tag{3.1.30}
\]
Also, since f ∈ S(R^n), we may invoke (3.1.19) to conclude that there exists some C = C_{f,α,β} ∈ (0, ∞) such that
\[
\sup_{|x|\ge j} \Bigl| x^\beta \partial^\alpha f(x)\,
\Bigl[\psi\Bigl(\frac{x}{j}\Bigr) - 1\Bigr] \Bigr|
\le \frac{C}{j}\,\bigl(1 + \|\psi\|_{L^\infty(\mathbb{R}^n)}\bigr). \tag{3.1.31}
\]
Combining (3.1.29)–(3.1.31), and keeping in mind that f ∈ S(R^n), we obtain
\[
\sup_{x\in\mathbb{R}^n} \bigl| x^\beta \partial^\alpha (f_j(x) - f(x)) \bigr|
\le \frac{C}{j} \sum_{\gamma\le\alpha} \frac{\alpha!}{\gamma!(\alpha-\gamma)!}\,
\sup_{x\in\mathbb{R}^n} \bigl| x^\beta \partial^\gamma f(x) \bigr| + \frac{C}{j}
\xrightarrow[j\to\infty]{} 0. \tag{3.1.32}
\]
This shows that $f_j \xrightarrow[j\to\infty]{\mathcal{S}(\mathbb{R}^n)} f$ and completes the proof of the fact that C_0^∞(R^n) is sequentially dense in S(R^n). The sequential density of S(R^n) in E(R^n) is a consequence of Exercise 1.13 and (3.1.14). This completes the proof of (d).

The claims in part (e) follow using the observation that
\[
(x, y)^{(\alpha,\beta)}\, \partial_x^\gamma \partial_y^\mu (f \otimes g)(x, y)
= x^\alpha \partial^\gamma f(x)\; y^\beta \partial^\mu g(y), \tag{3.1.33}
\]
for every (x, y) ∈ R^m × R^n, every f ∈ S(R^m), g ∈ S(R^n), and every α, γ ∈ N₀^m, β, μ ∈ N₀^n.

Consider now the statement in (f). From Exercise 3.5 we know that S(R^n) ⊂ L¹(R^n), thus S(R^n) ∗ S(R^n) is meaningful. To see that S(R^n) ∗ S(R^n) ⊂ S(R^n), fix some arbitrary f, g ∈ S(R^n) and α, β ∈ N₀^n. Then, making use of the binomial theorem (cf. Theorem 13.7) as well as Exercise 3.5, we may estimate
\[
\begin{aligned}
\sup_{x\in\mathbb{R}^n} \bigl| x^\beta \partial_x^\alpha (f * g)(x) \bigr|
&= \sup_{x\in\mathbb{R}^n} \Bigl| \int_{\mathbb{R}^n}
\bigl((x-y)+y\bigr)^\beta\, (\partial^\alpha f)(x-y)\, g(y)\,dy \Bigr| \\
&\le \sup_{x\in\mathbb{R}^n} \sum_{\gamma\le\beta} \frac{\beta!}{\gamma!(\beta-\gamma)!}
\int_{\mathbb{R}^n} \bigl| (x-y)^{\beta-\gamma} (\partial^\alpha f)(x-y) \bigr|\,
\bigl| y^\gamma g(y) \bigr|\,dy \\
&\le C_{\alpha,\beta}\, \sup_{z\in\mathbb{R}^n} (1+|z|)^{|\beta|} \bigl|\partial^\alpha f(z)\bigr|
\int_{\mathbb{R}^n} (1+|y|)^{|\beta|} |g(y)|\,dy \\
&\le C_{\alpha,\beta}\, \sup_{z\in\mathbb{R}^n} (1+|z|)^{|\beta|} \bigl|\partial^\alpha f(z)\bigr|\,
\sup_{y\in\mathbb{R}^n} (1+|y|)^{|\beta|+n+1} |g(y)| < \infty.
\end{aligned} \tag{3.1.34}
\]
This implies f ∗ g ∈ S(R^n). The fact that the mapping in (f) is bilinear is immediate from definitions. As regards its continuity, we may invoke again Theorem 13.14 and Fact 3.6 to reduce matters to proving sequential continuity instead. However, the latter is apparent from the estimate in (3.1.34). This finishes the proof of the theorem.

Exercise 3.14. Assume that ψ ∈ S(R^n) is given and, for each j ∈ N, define the function ψ_j(x) := ψ(x/j) for every x ∈ R^n. Prove that
\[
\psi_j f \xrightarrow[j\to\infty]{\mathcal{S}(\mathbb{R}^n)} \psi(0)\, f
\qquad\text{for every } f \in \mathcal{S}(\mathbb{R}^n). \tag{3.1.35}
\]
Hint: Adapt estimates (3.1.29) and (3.1.30) to the current setting and, in place of (3.1.31), this time use the mean value theorem for the term ψ(x/j) − ψ(0) to get a decay factor of the order 1/j.

Proposition 3.15. Let m, n ∈ N. Then C_0^∞(R^m) ⊗ C_0^∞(R^n) is sequentially dense in S(R^m × R^n).

Proof. Since the topology on S(R^m × R^n) is metrizable (recall Fact 3.6), there exists a distance function d : S(R^m × R^n) × S(R^m × R^n) → [0, ∞) that induces its topology. Hence,
\[
f_j \xrightarrow[j\to\infty]{\mathcal{S}(\mathbb{R}^m\times\mathbb{R}^n)} f
\quad\text{if and only if}\quad \lim_{j\to\infty} d(f_j, f) = 0. \tag{3.1.36}
\]
Now let f ∈ S(R^m × R^n). Then by part (d) in Theorem 3.13, there exists a sequence {f_j}_{j∈N} ⊂ C_0^∞(R^m × R^n) with the property that d(f_j, f) < 1/j for every j ∈ N. Furthermore, by Proposition 2.73, for each fixed number j ∈ N, there
exists a sequence {g_{jk}}_{k∈N} ⊂ C_0^∞(R^m) ⊗ C_0^∞(R^n) such that $g_{jk} \xrightarrow[k\to\infty]{\mathcal{D}(\mathbb{R}^m\times\mathbb{R}^n)} f_j$. In particular, by (c) in Theorem 3.13,
\[
g_{jk} \xrightarrow[k\to\infty]{\mathcal{S}(\mathbb{R}^m\times\mathbb{R}^n)} f_j
\qquad\text{for each } j \in \mathbb{N}, \tag{3.1.37}
\]
thus
\[
\lim_{k\to\infty} d(g_{jk}, f_j) = 0 \qquad\text{for each } j \in \mathbb{N}. \tag{3.1.38}
\]
Condition (3.1.38) implies that for each j ∈ N there exists k_j ∈ N with the property that d(g_{jk_j}, f_j) < 1/j. Now the sequence {g_{jk_j}}_{j∈N} ⊂ C_0^∞(R^m) ⊗ C_0^∞(R^n) satisfies
\[
d(g_{jk_j}, f) \le d(g_{jk_j}, f_j) + d(f_j, f) < \frac{2}{j}
\qquad\text{for every } j \in \mathbb{N}. \tag{3.1.39}
\]
In turn, this forces $g_{jk_j} \xrightarrow[j\to\infty]{\mathcal{S}(\mathbb{R}^m\times\mathbb{R}^n)} f$, from which the desired conclusion follows.
For n, m ∈ N, denote by M_{n×m}(R) the collection of all n × m matrices with entries in R. Recall that a map L : R^m → R^n is linear if and only if there exists a matrix A ∈ M_{n×m}(R) such that L(x) = Ax for every x ∈ R^m, where Ax denotes the multiplication of the matrix A with the vector x viewed as an element in M_{m×1}(R). Moreover, such a matrix is unique. In the sequel, we follow the standard practice of denoting by A the linear map associated with a matrix A.

Exercise 3.16. Suppose A ∈ M_{n×n}(R) is such that det A ≠ 0. Prove that the composition mapping
\[
\mathcal{S}(\mathbb{R}^n) \ni \varphi \mapsto \varphi \circ A \in \mathcal{S}(\mathbb{R}^n)
\quad\text{is well-defined, linear, and continuous.} \tag{3.1.40}
\]
Hint: To prove continuity you may use the linearity of the map in (3.1.40), Fact 3.6, and Theorem 13.14, to reduce matters to proving sequential continuity at 0.

We conclude this section by proving that L(R^n) ∗ S(R^n) ⊆ C^∞(R^n).

Proposition 3.17. For every function f ∈ L(R^n) and every function g ∈ S(R^n) one has ∫_{R^n} |f(x−y)||g(y)| dy < ∞ for each x ∈ R^n, and the convolution f ∗ g defined by
\[
(f * g)(x) := \int_{\mathbb{R}^n} f(x-y)\, g(y)\,dy
\qquad\text{for each } x \in \mathbb{R}^n, \tag{3.1.41}
\]
has the property that f ∗ g ∈ C^∞(R^n).
Proof. If f, g are as in the statement, then from (3.1.22) and Exercise 3.5 it follows that there exists M ∈ N such that for every N ∈ N there exists a constant C ∈ (0, ∞) with
\[
\int_{\mathbb{R}^n} |f(x-y)|\,|g(y)|\,dy
\le C \int_{\mathbb{R}^n} (1+|x-y|)^M (1+|y|)^{-N}\,dy,
\qquad x \in \mathbb{R}^n. \tag{3.1.42}
\]
For each fixed point x ∈ R^n choose now N ∈ N such that N > M + n and note that this ensures ∫_{R^n} (1+|x−y|)^M (1+|y|)^{−N} dy < ∞, proving the first claim in the statement. The fact that f ∗ g ∈ C^∞(R^n) is now seen in a similar fashion, given that ∂^α f continues to be in L(R^n) for every α ∈ N₀^n.
Exercise 3.18. Prove that L^p(R^n) ∗ S(R^n) ⊆ C^∞(R^n) for every p ∈ [1, ∞].
Hint: Use the blueprint of the proof of Proposition 3.17, with Hölder's inequality in place of the estimates for slowly increasing functions, and arrange matters so that all derivatives fall on the Schwartz function.

We conclude this section with an integration by parts formula that will be useful shortly.

Lemma 3.19. If f ∈ L(R^n) and g ∈ S(R^n), then for every α ∈ N₀^n the following integration by parts formula holds:
\[
\int_{\mathbb{R}^n} (\partial^\alpha g)(x)\, f(x)\,dx
= (-1)^{|\alpha|} \int_{\mathbb{R}^n} g(x)\, (\partial^\alpha f)(x)\,dx. \tag{3.1.43}
\]

Proof. Fix f ∈ L(R^n) and g ∈ S(R^n). Since the classes L(R^n) and S(R^n) are stable under differentiation, it suffices to show that for each j ∈ {1, ..., n} we have
\[
\int_{\mathbb{R}^n} (\partial_j g)(x)\, f(x)\,dx
= - \int_{\mathbb{R}^n} g(x)\, (\partial_j f)(x)\,dx, \tag{3.1.44}
\]
since (3.1.43) then follows by iterating (3.1.44). To this end, fix some integer j ∈ {1, ..., n} along with some arbitrary R ∈ (0, ∞). The classical integration by parts formula in the bounded, smooth domain B(0, R) ⊂ R^n then reads (cf. (13.7.4))
\[
\int_{B(0,R)} (\partial_j g)(x)\, f(x)\,dx
= - \int_{B(0,R)} g(x)\, (\partial_j f)(x)\,dx
+ \int_{\partial B(0,R)} g(x)\, f(x)\,\frac{x_j}{R}\,d\sigma(x). \tag{3.1.45}
\]
From part (a) in Theorem 3.13 we know that fg ∈ S(R^n). Based on this and Exercise 3.5, it follows that
\[
\Bigl| \int_{\partial B(0,R)} g(x)\, f(x)\,\frac{x_j}{R}\,d\sigma(x) \Bigr|
\le \omega_{n-1}\, R^{n-1} \sup_{|x|=R} |(fg)(x)|
\le C R^{-1} \xrightarrow[R\to\infty]{} 0. \tag{3.1.46}
\]
On the other hand, (∂_j g)f, g∂_j f ∈ S(R^n) ⊂ L¹(R^n). As such, passing to the limit R → ∞ in (3.1.45) yields (3.1.44), on account of Lebesgue's dominated convergence theorem and (3.1.46).
3.2 The Action of the Fourier Transform on the Schwartz Class

Originally, we defined the Fourier transform in (3.1.3) as a mapping acting on functions from L¹(R^n). Since S(R^n) is contained in L¹(R^n), it makes sense to consider the Fourier transform acting on the Schwartz class. In this section we study the main properties of the Fourier transform in this setting. The reader is advised that we use the symbols F and ·̂ interchangeably to denote this Fourier transform. To state our first major result pertaining to the Fourier transform in this setting, recall the notation introduced in (3.1.10).

Theorem 3.20. The following statements are true.
(a) If f ∈ S(R^n) and α ∈ N₀^n are arbitrary, then $\widehat{D^\alpha f}(\xi) = \xi^\alpha \widehat{f}(\xi)$ for every ξ ∈ R^n.
(b) If f ∈ S(R^n) and α ∈ N₀^n are arbitrary, then $\widehat{x^\alpha f}(\xi) = (-D)^\alpha \widehat{f}(\xi)$ for every ξ ∈ R^n.
(c) The Fourier transform, originally introduced in the context of (3.1.3), induces a mapping F : S(R^n) → S(R^n) that is linear and continuous.
(d) If m, n ∈ N, f ∈ S(R^m) and g ∈ S(R^n), then $\widehat{f \otimes g} = \widehat{f} \otimes \widehat{g}$.

Proof. Fix f ∈ S(R^n) and α ∈ N₀^n. Then the decay of f (cf. (3.1.19)) ensures that we may differentiate under the integral sign in (3.1.1) and obtain
\[
D^\alpha \widehat{f}(\xi)
= \int_{\mathbb{R}^n} e^{-ix\cdot\xi}\,(-x)^\alpha f(x)\,dx
= \widehat{(-x)^\alpha f}(\xi), \qquad \forall\, \xi \in \mathbb{R}^n. \tag{3.2.1}
\]
From this, the statement in (b) readily follows. Also, if β ∈ N₀^n is arbitrary, then using the first identity in (3.2.1), the fact that ξ^β e^{−ix·ξ} = (−D_x)^β(e^{−ix·ξ}), and the integration by parts formula from Lemma 3.19, we obtain
\[
\xi^\beta D^\alpha \widehat{f}(\xi)
= \int_{\mathbb{R}^n} (-D_x)^\beta\bigl(e^{-ix\cdot\xi}\bigr)\,(-x)^\alpha f(x)\,dx
= \int_{\mathbb{R}^n} e^{-ix\cdot\xi}\, D_x^\beta\bigl[(-x)^\alpha f(x)\bigr]\,dx,
\qquad \forall\, \xi \in \mathbb{R}^n. \tag{3.2.2}
\]
The formula in (a) follows by specializing (3.2.2) to the case α = (0, ..., 0) ∈ N₀^n. In addition, starting with (3.2.2) we may estimate
\[
\sup_{\xi\in\mathbb{R}^n} |\xi^\beta D^\alpha \widehat{f}(\xi)|
\le \int_{\mathbb{R}^n} (1+|x|^2)^{-n}\,dx \times
\sup_{x\in\mathbb{R}^n} \Bigl| (1+|x|^2)^n\, D_x^\beta\bigl[x^\alpha f(x)\bigr] \Bigr| < \infty, \tag{3.2.3}
\]
where the finiteness condition is a consequence of the membership of f to S(R^n). Clearly, (3.2.1) also implies that f̂ ∈ C^∞(R^n) which, in combination with (3.2.3), shows that f̂ ∈ S(R^n). Hence, the mapping in (c) is well-defined. The fact that this mapping is linear is immediate from definition. In addition, if $f_j \xrightarrow[j\to\infty]{\mathcal{S}(\mathbb{R}^n)} 0$, then based on the first inequality in (3.2.3) we have that, for each m, k ∈ N₀,
\[
\sup_{\substack{\xi\in\mathbb{R}^n \\ |\alpha|\le m,\ |\beta|\le k}}
|\xi^\beta \partial^\alpha \widehat{f_j}(\xi)|
\le C \sup_{\substack{x\in\mathbb{R}^n \\ |\alpha|\le m,\ |\beta|\le k}}
\Bigl| (1+|x|^2)^n\, \partial^\beta\bigl[x^\alpha f_j(x)\bigr] \Bigr|
\to 0 \quad\text{as } j \to \infty. \tag{3.2.4}
\]
In view of Exercise 3.8, this proves $\mathcal{F}f_j \xrightarrow[j\to\infty]{\mathcal{S}(\mathbb{R}^n)} 0$. The latter combined with
Fact 3.6 and Theorem 13.14 then implies that F is continuous from S(R^n) into S(R^n). At this stage we are left with proving the statement in (d). To this end, fix f ∈ S(R^m) and g ∈ S(R^n). Then by part (e) in Theorem 3.13, we obtain that f ⊗ g ∈ S(R^m × R^n), so F(f ⊗ g) is well-defined. Furthermore, by applying Fubini's theorem, we may write
\[
\begin{aligned}
\widehat{f \otimes g}(\xi, \eta)
&= \int_{\mathbb{R}^m} \int_{\mathbb{R}^n} e^{-ix\cdot\xi - iy\cdot\eta}\, (f \otimes g)(x, y)\,dy\,dx \\
&= \int_{\mathbb{R}^m} e^{-ix\cdot\xi} f(x)\,dx \int_{\mathbb{R}^n} e^{-iy\cdot\eta} g(y)\,dy \\
&= \widehat{f}(\xi)\,\widehat{g}(\eta) = (\widehat{f} \otimes \widehat{g})(\xi, \eta),
\qquad \forall\, \xi \in \mathbb{R}^m,\ \forall\, \eta \in \mathbb{R}^n.
\end{aligned} \tag{3.2.5}
\]
This finishes the proof of the theorem.

Example 3.21. Suppose λ ∈ C satisfies Re λ > 0 and, writing λ = re^{iθ} with r > 0 and −π/2 < θ < π/2, set λ^{1/2} := √r e^{iθ/2}. Consider the function f(x) := e^{−λ|x|²} for x ∈ R^n. Then f ∈ S(R^n) and
\[
\widehat{f}(\xi) = \Bigl(\frac{\pi}{\lambda}\Bigr)^{\frac{n}{2}} e^{-\frac{|\xi|^2}{4\lambda}}
\qquad\text{for each } \xi \in \mathbb{R}^n. \tag{3.2.6}
\]
Proof. Fix λ ∈ C satisfying the given hypotheses. Then Exercise 3.3 ensures that f is a Schwartz function. Also, f(x) = e^{−λx₁²} ⊗ ··· ⊗ e^{−λx_n²} for each x = (x₁, ..., x_n) ∈ R^n. As such, part (d) in Theorem 3.20 shows that it suffices to prove (3.2.6) when n = 1, in which case f(x) = e^{−λx²} for every x ∈ R. Suppose that this is the case and observe that f satisfies f′ + 2λxf = 0 in R. By taking the Fourier transform of both sides of this differential equation, and using (a) and (b) in Theorem 3.20, we arrive at $(\widehat{f}\,)' + \frac{\xi}{2\lambda}\,\widehat{f} = 0$. The solution to this latter ordinary differential equation is $\widehat{f}(\xi) = \widehat{f}(0)\, e^{-\frac{\xi^2}{4\lambda}}$ for ξ ∈ R. There remains to show that $\widehat{f}(0) = \bigl(\frac{\pi}{\lambda}\bigr)^{\frac{1}{2}}$. Since, by definition, $\widehat{f}(0) = \int_{\mathbb{R}} f(x)\,dx = \int_{\mathbb{R}} e^{-\lambda x^2}\,dx$, we are left with showing that
\[
\int_{\mathbb{R}} e^{-\lambda x^2}\,dx = \Bigl(\frac{\pi}{\lambda}\Bigr)^{\frac{1}{2}}
\qquad\text{whenever } \lambda \in \mathbb{C} \text{ has } \operatorname{Re}\lambda > 0. \tag{3.2.7}
\]
Corresponding to the case when λ ∈ R₊, the identity $\int_{\mathbb{R}} e^{-\lambda x^2}\,dx = (\pi/\lambda)^{1/2}$ is a standard exercise in basic calculus. To extend this to complex λ's, observe that the function
\[
h(z) := \int_{\mathbb{R}} e^{-zx^2}\,dx - \Bigl(\frac{\pi}{z}\Bigr)^{\frac{1}{2}}
\qquad\text{for } z \in \mathbb{C} \text{ with } \operatorname{Re} z > 0,
\]
is analytic and equal to zero for every z ∈ R₊. This forces h(z) = 0 for all z ∈ C with Re z > 0. Thus, $\widehat{f}(0) = (\pi/\lambda)^{1/2}$, as desired.
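The case n = 1, λ = 1 of (3.2.6) is easy to confirm numerically (a sketch in plain Python; the quadrature parameters are our choices):

```python
import cmath
import math

def gauss_hat(xi, lam=1.0, half_width=10.0, n=40000):
    # midpoint rule for  integral_R e^{-i x xi} e^{-lam x^2} dx
    h = 2 * half_width / n
    total = 0j
    for k in range(n):
        x = -half_width + (k + 0.5) * h
        total += cmath.exp(-1j * x * xi - lam * x * x)
    return total * h

xi = 2.0
predicted = math.sqrt(math.pi) * math.exp(-xi ** 2 / 4)  # (pi/lam)^{1/2} e^{-xi^2/(4 lam)}
print(abs(gauss_hat(xi) - predicted))  # should be very small
```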
Exercise 3.22. Let a ∈ (0, ∞) and b ∈ R be fixed. Show that
\[
\mathcal{F}\bigl(e^{-ax^2 + ibx}\bigr)(\xi)
= \Bigl(\frac{\pi}{a}\Bigr)^{\frac{1}{2}} e^{-\frac{(\xi-b)^2}{4a}}
\qquad\text{for every } \xi \in \mathbb{R}. \tag{3.2.8}
\]
Hint: First prove that $\mathcal{F}\bigl(e^{-ax^2+ibx}\bigr)(\xi) = \bigl(\mathcal{F}(e^{-ax^2})\bigr)(\xi - b)$ for every ξ ∈ R, then use Example 3.21.
Exercise 3.23. Prove that if A ∈ M_{n×n}(R) is such that det A ≠ 0, then
\[
\widehat{\varphi \circ A^{-1}} = |\det A|\, \bigl(\widehat{\varphi} \circ A^{\top}\bigr),
\qquad \forall\, \varphi \in \mathcal{S}(\mathbb{R}^n), \tag{3.2.9}
\]
where A^{−1} and A^{⊤} denote, respectively, the inverse and the transpose of the matrix A.

Next we note a consequence of Theorem 3.20 of basic importance. As motivation, suppose P(D) = Σ_{|α|≤m} a_α D^α is a differential operator of order m ∈ N, with constant coefficients a_α ∈ C for every α ∈ N₀^n with |α| ≤ m. Furthermore, assume that f ∈ S(R^n) has been given. Then any solution u ∈ S(R^n) of the differential equation P(D)u = f in R^n also satisfies P(ξ)û(ξ) = f̂(ξ) for each ξ ∈ R^n, where we have set P(ξ) := Σ_{|α|≤m} a_α ξ^α. In particular, if P(ξ) has no zeros, then
\[
\widehat{u}(\xi) = \frac{\widehat{f}(\xi)}{P(\xi)} \qquad\text{for every } \xi \in \mathbb{R}^n.
\]
This gives a formula for the Fourier
transform of u. In order to find a formula for u itself, the natural question that arises is whether we can reconstruct u from û. The next theorem provides a positive answer to this question in the class of Schwartz functions.

Theorem 3.24. The mapping F : S(R^n) → S(R^n) is an algebraic and topological isomorphism, that is, it is bijective, continuous, and its inverse is also continuous. In addition, its inverse is the operator F^{−1} : S(R^n) → S(R^n) given by the formula
\[
(\mathcal{F}^{-1} g)(x) = (2\pi)^{-n} \int_{\mathbb{R}^n} e^{ix\cdot\xi}\, g(\xi)\,d\xi,
\qquad \forall\, x \in \mathbb{R}^n,\ \forall\, g \in \mathcal{S}(\mathbb{R}^n). \tag{3.2.10}
\]

Proof. The proof of the fact that the mapping F^{−1} : S(R^n) → S(R^n) defined as in (3.2.10) is well-defined, linear, and continuous is similar to the proof of part (c) in Theorem 3.20. There remains to show that F^{−1} ∘ F = I = F ∘ F^{−1} on S(R^n), where I is the identity operator on S(R^n). To proceed, observe that the identity F^{−1} ∘ F = I is equivalent to
\[
(2\pi)^{-n} \int_{\mathbb{R}^n} e^{ix\cdot\xi}\, \widehat{f}(\xi)\,d\xi = f(x),
\qquad \forall\, x \in \mathbb{R}^n,\ \forall\, f \in \mathcal{S}(\mathbb{R}^n). \tag{3.2.11}
\]
As regards (3.2.11), fix some function f ∈ S(R^n) along with some point x ∈ R^n. Recall (cf. (3.1.1)) that
\[
\widehat{f}(\xi) = \int_{\mathbb{R}^n} e^{-iy\cdot\xi} f(y)\,dy,
\qquad \forall\, \xi \in \mathbb{R}^n. \tag{3.2.12}
\]
As such, one is tempted to directly replace f̂(ξ) in (3.2.11) by the right-hand side of (3.2.12) and then use Fubini's theorem to reverse the order of integration in the variables ξ and y. There is, however, a problem in carrying out this approach, since the function e^{i(x−y)·ξ} f(y), considered jointly in the variable (ξ, y) ∈ R^n × R^n, does not belong to L¹(R^n × R^n), hence Fubini's theorem is not necessarily applicable. To remedy this problem, we introduce a "convergence factor" in the form of a suitable family of Schwartz functions ψ^ε, indexed by ε > 0 (to be specified shortly), designed to provide control in the variable ξ, thus ensuring the applicability of Fubini's theorem.
Hence, in place of $\int_{\mathbb{R}^n} e^{ix\cdot\xi}\,\widehat{f}(\xi)\,d\xi$ the idea is to consider $\int_{\mathbb{R}^n} e^{ix\cdot\xi}\,\psi^\varepsilon(\xi)\,\widehat{f}(\xi)\,d\xi$, for which we may write (granted that ψ^ε ∈ S(R^n))
\[
\begin{aligned}
\int_{\mathbb{R}^n} e^{ix\cdot\xi}\, \psi^\varepsilon(\xi)\, \widehat{f}(\xi)\,d\xi
&= \int_{\mathbb{R}^n} e^{ix\cdot\xi}\, \psi^\varepsilon(\xi)
\int_{\mathbb{R}^n} e^{-iy\cdot\xi} f(y)\,dy\,d\xi \\
&= \int_{\mathbb{R}^n\times\mathbb{R}^n} e^{-i(y-x)\cdot\xi}\, \psi^\varepsilon(\xi)\, f(y)\,dy\,d\xi \\
&= \int_{\mathbb{R}^n} \Bigl( \int_{\mathbb{R}^n} e^{-i(y-x)\cdot\xi}\, \psi^\varepsilon(\xi)\,d\xi \Bigr) f(y)\,dy \\
&= \int_{\mathbb{R}^n} \widehat{\psi^\varepsilon}(y - x)\, f(y)\,dy
= \int_{\mathbb{R}^n} \widehat{\psi^\varepsilon}(z)\, f(x + z)\,dz.
\end{aligned} \tag{3.2.13}
\]
3.2. THE ACTION OF THE FOURIER TRANSFORM ...
103
Given the goal we have in mind (cf. (3.2.11)), as well as the format of the current identity, we find it convenient to define $\psi_\varepsilon$ by setting
$$\psi_\varepsilon(\xi) := \varphi(\varepsilon\,\xi) \quad\text{for each }\xi\in\mathbb{R}^n\text{ and }\varepsilon>0, \tag{3.2.14}$$
where $\varphi\in\mathcal{S}(\mathbb{R}^n)$ is to be specified momentarily. The rationale behind this choice is that both $\lim_{\varepsilon\to 0^+}\psi_\varepsilon(\xi)$ and $\lim_{\varepsilon\to 0^+}\widehat{\psi_\varepsilon}(x)$ are reasonably easy to compute. This will eventually allow us to deduce (3.2.11) from (3.2.13) by letting $\varepsilon\to 0^+$. Concretely, from (3.2.14) we have $\lim_{\varepsilon\to 0^+}\psi_\varepsilon(\xi)=\varphi(0)$, while from the definition of the Fourier transform it is immediate that
$$\widehat{\psi_\varepsilon}(z) = \varepsilon^{-n}\,\widehat{\varphi}\Big(\frac{z}{\varepsilon}\Big) = \big(\widehat{\varphi}\,\big)_\varepsilon(z) \quad\text{for every } z\in\mathbb{R}^n. \tag{3.2.15}$$
Keeping this in mind and employing part (a) in Exercise 2.22 we obtain
$$\lim_{\varepsilon\to 0^+}\int_{\mathbb{R}^n}\widehat{\psi_\varepsilon}(z)f(x+z)\,dz = \lim_{\varepsilon\to 0^+}\int_{\mathbb{R}^n}\big(\widehat{\varphi}\,\big)_\varepsilon(z)f(x+z)\,dz = \Big(\int_{\mathbb{R}^n}\widehat{\varphi}(z)\,dz\Big) f(x). \tag{3.2.16}$$
Also, on account of (3.2.14) and the fact that $\widehat{f}\in\mathcal{S}(\mathbb{R}^n)\subset L^1(\mathbb{R}^n)$, Lebesgue's dominated convergence theorem gives
$$\lim_{\varepsilon\to 0^+}\int_{\mathbb{R}^n} e^{ix\cdot\xi}\psi_\varepsilon(\xi)\widehat{f}(\xi)\,d\xi = \lim_{\varepsilon\to 0^+}\int_{\mathbb{R}^n} e^{ix\cdot\xi}\varphi(\varepsilon\,\xi)\widehat{f}(\xi)\,d\xi = \varphi(0)\int_{\mathbb{R}^n} e^{ix\cdot\xi}\widehat{f}(\xi)\,d\xi. \tag{3.2.17}$$
In summary, (3.2.13), (3.2.16), and (3.2.17) show that whenever $\varphi\in\mathcal{S}(\mathbb{R}^n)$ is such that $\int_{\mathbb{R}^n}\widehat{\varphi}(z)\,dz \neq 0$ we have
$$C\int_{\mathbb{R}^n} e^{ix\cdot\xi}\widehat{f}(\xi)\,d\xi = f(x), \qquad \forall\, x\in\mathbb{R}^n,\ \forall\, f\in\mathcal{S}(\mathbb{R}^n), \tag{3.2.18}$$
where the normalization constant $C$ is given by
$$C := \frac{\varphi(0)}{\int_{\mathbb{R}^n}\widehat{\varphi}(z)\,dz}. \tag{3.2.19}$$
As such, (3.2.11) will follow as soon as we prove that $C=(2\pi)^{-n}$. For this task, we have the freedom of choosing the function $\varphi\in\mathcal{S}(\mathbb{R}^n)$, and a candidate that springs to mind is the Schwartz function from Example 3.21 (say, in the particular case when $\lambda=1$). Hence, if $\varphi(x):=e^{-|x|^2}$ for each $x\in\mathbb{R}^n$, formula (3.2.6) gives
$$\widehat{\varphi}(\xi) = \pi^{n/2}\, e^{-\frac{|\xi|^2}{4}} \quad\text{for each }\xi\in\mathbb{R}^n. \tag{3.2.20}$$
CHAPTER 3. THE SCHWARTZ SPACE AND THE FOURIER...
104
Consequently,
$$\int_{\mathbb{R}^n}\widehat{\varphi}(\xi)\,d\xi = \pi^{n/2}\int_{\mathbb{R}^n} e^{-\frac{|\xi|^2}{4}}\,d\xi = \pi^{n/2}\Big(\int_{\mathbb{R}} e^{-\frac{t^2}{4}}\,dt\Big)^{n} = \pi^{n/2}(4\pi)^{n/2} = (2\pi)^n, \tag{3.2.21}$$
where the second equality is simply Fubini's theorem, while the third equality is provided by (3.2.7) with $\lambda:=1/4$. Since in this case we also have $\varphi(0)=1$, it follows that $C=(2\pi)^{-n}$, as wanted. This finishes the justification of the identity $\mathcal{F}^{-1}\circ\mathcal{F}=I$ on $\mathcal{S}(\mathbb{R}^n)$. The same approach also works for the proof of $\mathcal{F}\circ\mathcal{F}^{-1}=I$, completing the proof of the theorem.

In what follows, for an arbitrary function $f:\mathbb{R}^n\to\mathbb{C}$ we define
$$f^{\vee}(x) := f(-x), \qquad \forall\, x\in\mathbb{R}^n. \tag{3.2.22}$$
Exercise 3.25. Prove that the mapping
$$\mathcal{S}(\mathbb{R}^n)\ni f \mapsto f^{\vee}\in\mathcal{S}(\mathbb{R}^n) \tag{3.2.23}$$
is well-defined, linear, and continuous.
Hint: Use Fact 3.6 and Theorem 13.14.

Recall that $\overline{z}$ denotes the complex conjugate of $z\in\mathbb{C}$.

Exercise 3.26. Let $f\in\mathcal{S}(\mathbb{R}^n)$. Then the following formulas hold:
(1) $\widehat{f^{\vee}} = \big(\widehat{f}\,\big)^{\vee}$.
(2) $\widehat{\overline{f}} = \overline{\big(\widehat{f}\,\big)^{\vee}}$.
(3) $\widehat{\widehat{f}} = (2\pi)^n f^{\vee}$.
(4) $\int_{\mathbb{R}^n} f(x)\,dx = \widehat{f}(0)$ and $\int_{\mathbb{R}^n}\widehat{f}(\xi)\,d\xi = (2\pi)^n f(0)$.

Proposition 3.27. Let $f,g\in\mathcal{S}(\mathbb{R}^n)$. Then the following identities hold:
(a) $\int_{\mathbb{R}^n} f(x)\widehat{g}(x)\,dx = \int_{\mathbb{R}^n}\widehat{f}(\xi)g(\xi)\,d\xi$.
(b) $\int_{\mathbb{R}^n} f(x)\overline{g(x)}\,dx = (2\pi)^{-n}\int_{\mathbb{R}^n}\widehat{f}(\xi)\,\overline{\widehat{g}(\xi)}\,d\xi$, an identity referred to in the literature as Parseval's identity.
(c) $\widehat{f*g} = \widehat{f}\cdot\widehat{g}$.
(d) $\widehat{f\cdot g} = (2\pi)^{-n}\,\widehat{f}*\widehat{g}$.

Proof. The identity in (a) follows via a direct computation using Fubini's theorem. Also, based on Exercise 3.26, we have $\widehat{\overline{\widehat{g}}} = \overline{\big(\widehat{\widehat{g}}\,\big)^{\vee}} = (2\pi)^n\,\overline{g}$ which, when
combined with (a), gives (b). The identity in (c) follows using Fubini's theorem. Specifically, for each $\xi\in\mathbb{R}^n$ we may write
$$\begin{aligned}
\widehat{f*g}(\xi) &= \int_{\mathbb{R}^n} e^{-ix\cdot\xi}(f*g)(x)\,dx = \int_{\mathbb{R}^n} e^{-ix\cdot\xi}\int_{\mathbb{R}^n} f(x-y)g(y)\,dy\,dx\\
&= \int_{\mathbb{R}^n} g(y)\Big(\int_{\mathbb{R}^n} e^{-ix\cdot\xi} f(x-y)\,dx\Big)dy\\
&= \int_{\mathbb{R}^n} e^{-iy\cdot\xi} g(y)\Big(\int_{\mathbb{R}^n} e^{-iz\cdot\xi} f(z)\,dz\Big)dy = \widehat{f}(\xi)\,\widehat{g}(\xi),
\end{aligned} \tag{3.2.24}$$
as wanted. Next, the identity from (c) combines with Exercise 3.26 to yield
$$\widehat{\widehat{f}*\widehat{g}} = \widehat{\widehat{f}}\cdot\widehat{\widehat{g}} = (2\pi)^{2n} f^{\vee}\cdot g^{\vee} = (2\pi)^{2n}(f\cdot g)^{\vee}. \tag{3.2.25}$$
Applying now the Fourier transform to the most extreme sides of (3.2.25) and once again invoking Exercise 3.26, we obtain
$$(2\pi)^{-n}\,\widehat{f}*\widehat{g} = (2\pi)^{-2n}\Big(\widehat{\widehat{\widehat{f}*\widehat{g}}}\,\Big)^{\!\vee} = \Big(\widehat{(f\cdot g)^{\vee}}\Big)^{\!\vee} = \widehat{f\cdot g}. \tag{3.2.26}$$
This completes the proof of the proposition.

Remark 3.28. (1) It is not difficult to see via a direct computation that we also have
$$\int_{\mathbb{R}^n} f(x)\widehat{g}(x)\,dx = \int_{\mathbb{R}^n}\widehat{f}(\xi)g(\xi)\,d\xi, \qquad \forall\, f\in L^1(\mathbb{R}^n),\ \forall\, g\in\mathcal{S}(\mathbb{R}^n). \tag{3.2.27}$$
(2) Parseval's identity written for $f=g\in\mathcal{S}(\mathbb{R}^n)$ becomes
$$\|\widehat{f}\,\|_{L^2(\mathbb{R}^n)} = (2\pi)^{n/2}\,\|f\|_{L^2(\mathbb{R}^n)}. \tag{3.2.28}$$
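Identity (3.2.28), with the non-unitary normalization of the Fourier transform used here, can be checked numerically. The sketch below is illustrative only (not from the text) and again uses the one-dimensional Gaussian, whose transform is known in closed form from (3.2.20).

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal rule along the last axis."""
    return np.sum((y[..., 1:] + y[..., :-1]) * np.diff(x) / 2.0, axis=-1)

# f(x) = e^{-x^2} in dimension n = 1; f_hat is given by (3.2.20).
x = np.linspace(-15.0, 15.0, 6001)
xi = np.linspace(-40.0, 40.0, 8001)
f = np.exp(-x**2)
f_hat = np.sqrt(np.pi) * np.exp(-xi**2 / 4)

lhs = np.sqrt(trapezoid(np.abs(f_hat)**2, xi))                   # ||f_hat||_{L^2}
rhs = np.sqrt(2 * np.pi) * np.sqrt(trapezoid(np.abs(f)**2, x))   # (2 pi)^{1/2} ||f||_{L^2}
assert abs(lhs - rhs) < 1e-10
print("Plancherel (3.2.28) for the Gaussian:", lhs, rhs)
```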
As a consequence, since $C_0^{\infty}(\mathbb{R}^n)$ is dense in $L^2(\mathbb{R}^n)$, the Fourier transform $\mathcal{F}$ may be extended to a linear operator from $L^2(\mathbb{R}^n)$ into itself, and the latter identity continues to hold for every $f\in L^2(\mathbb{R}^n)$. In summary, this extension of $\mathcal{F}$, originally considered as in part (c) of Theorem 3.20, satisfies
$$\mathcal{F}:L^2(\mathbb{R}^n)\to L^2(\mathbb{R}^n)\ \text{ is linear and continuous, and}\quad \|\mathcal{F}f\|_{L^2(\mathbb{R}^n)} = (2\pi)^{n/2}\|f\|_{L^2(\mathbb{R}^n)}, \quad \forall\, f\in L^2(\mathbb{R}^n). \tag{3.2.29}$$
Based on this, part (3) in Exercise 3.26, the continuity of the linear mapping $L^2(\mathbb{R}^n)\ni f\mapsto f^{\vee}\in L^2(\mathbb{R}^n)$, and the density of Schwartz functions in $L^2(\mathbb{R}^n)$, we further deduce that
$$\mathcal{F}\mathcal{F}f = (2\pi)^n f^{\vee}, \qquad \forall\, f\in L^2(\mathbb{R}^n). \tag{3.2.30}$$
Combined with (3.2.29), this proves that
$$\mathcal{F}:L^2(\mathbb{R}^n)\to L^2(\mathbb{R}^n)\ \text{ is a linear, continuous isomorphism, and}\quad \mathcal{F}^{-1}f = (2\pi)^{-n}\mathcal{F}\big(f^{\vee}\big) = (2\pi)^{-n}\big(\mathcal{F}f\big)^{\vee}, \quad \forall\, f\in L^2(\mathbb{R}^n). \tag{3.2.31}$$
We will continue to use the notation $\widehat{f}$ for $\mathcal{F}f$ whenever $f\in L^2(\mathbb{R}^n)$. The identity in (3.2.29) is called Plancherel's identity. The same type of density argument shows that the formula from part (b) of Proposition 3.27 extends to
$$\int_{\mathbb{R}^n} f(x)\overline{g(x)}\,dx = (2\pi)^{-n}\int_{\mathbb{R}^n}\widehat{f}(\xi)\,\overline{\widehat{g}(\xi)}\,d\xi, \qquad \forall\, f,g\in L^2(\mathbb{R}^n), \tag{3.2.32}$$
to which we continue to refer as Parseval's identity.
(3) An inspection of the computation in (3.2.24) reveals that the identity $\widehat{f*g} = \widehat{f}\cdot\widehat{g}$ remains valid if $f,g\in L^1(\mathbb{R}^n)$.
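The convolution identity just recalled is easy to probe numerically. The following sketch (illustrative, not from the text) convolves two Gaussians by quadrature and compares the transform of the convolution with the product of the transforms on a grid of frequencies.

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal rule along the last axis."""
    return np.sum((y[..., 1:] + y[..., :-1]) * np.diff(x) / 2.0, axis=-1)

x = np.linspace(-20.0, 20.0, 2001)
f = np.exp(-x**2)
g = np.exp(-2 * x**2)

# (f * g)(x) by direct quadrature in the y variable.
conv = trapezoid(np.exp(-x**2) * np.exp(-2 * (x[:, None] - x) ** 2), x)

xi = np.linspace(-5.0, 5.0, 41)
kernel = np.exp(-1j * np.outer(xi, x))
ft = lambda h: trapezoid(kernel * h, x)   # h -> h_hat sampled on the xi grid

# Part (c) of Proposition 3.27 / Remark 3.28(3): (f*g)^ = f_hat * g_hat.
assert np.max(np.abs(ft(conv) - ft(f) * ft(g))) < 1e-8
print("convolution theorem verified for a Gaussian pair")
```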
Exercise 3.29. Prove that $\int_{\mathbb{R}^n} f(x)\widehat{g}(x)\,dx = \int_{\mathbb{R}^n}\widehat{f}(\xi)g(\xi)\,d\xi$ for every $f,g\in L^2(\mathbb{R}^n)$.
Hint: Use part (a) in Proposition 3.27, (3.2.29), and the fact that $C_0^{\infty}(\mathbb{R}^n)$ is sequentially dense in $L^2(\mathbb{R}^n)$ to first prove the identity for $f\in L^2(\mathbb{R}^n)$ and $g\in\mathcal{S}(\mathbb{R}^n)$.

Further Notes for Chap. 3. The basic formalism associated with the Fourier transform goes back, in a more or less precise form, to the French mathematician and physicist Joseph Fourier (1768–1830). A distinguished attribute of this tool, of fundamental importance in the context of partial differential equations, is its ability to render the action of a constant coefficient differential operator simply as ordinary multiplication by its symbol on the Fourier transform side. As the name suggests, the Schwartz space of rapidly decreasing functions was formally introduced by Laurent Schwartz, who was the first to recognize its significance in the context of the Fourier transform. Much of the elegant theory presented here is due to him.
3.3 Additional Exercises for Chap. 3
Exercise 3.30. Prove that if $f\in L^1_{\rm comp}(\mathbb{R}^n)$ then $\widehat{f}\in C^{\infty}(\mathbb{R}^n)$.

Exercise 3.31. Prove that if $f\in L^1(\mathbb{R}^n)$ is real-valued and odd, then $\widehat{f}$ is odd and purely imaginary (equivalently, $i\widehat{f}$ is real-valued and odd).

Exercise 3.32. Let $\varphi\in C_0^{\infty}(\mathbb{R}^n)$ be such that $\varphi\not\equiv 0$ and for each $j\in\mathbb{N}$ set
$$\varphi_j(x) := e^{-j}\varphi(x/j), \qquad \forall\, x\in\mathbb{R}^n.$$
Prove that $\varphi_j\xrightarrow[j\to\infty]{\mathcal{S}(\mathbb{R}^n)} 0$ but the sequence $\{\varphi_j\}_{j\in\mathbb{N}}$ does not converge in $\mathcal{D}(\mathbb{R}^n)$.
Exercise 3.33. Let $\varphi\in C_0^{\infty}(\mathbb{R}^n)$ be such that $\varphi\not\equiv 0$ and for each $j\in\mathbb{N}$ set
$$\varphi_j(x) := \frac{1}{j}\,\varphi(x/j), \qquad \forall\, x\in\mathbb{R}^n.$$
Prove that $\varphi_j\xrightarrow[j\to\infty]{\mathcal{E}(\mathbb{R}^n)} 0$ but the sequence $\{\varphi_j\}_{j\in\mathbb{N}}$ does not converge in $\mathcal{S}(\mathbb{R}^n)$.

Exercise 3.34. Let $\theta\in C_0^{\infty}(\mathbb{R})$ be such that $\theta(x)=1$ for $|x|\le 1$, and let $\psi\in C^{\infty}(\mathbb{R})$ be such that $\psi(x)=0$ for $x\le -1$ and $\psi(x)=e^{-x}$ for $x\ge 0$. For each $j\in\mathbb{N}$ then set
$$\varphi_j(x) := \frac{1}{j}\,\psi(x)\theta(x/j), \qquad \forall\, x\in\mathbb{R}.$$
Prove that the sequence $\{\varphi_j\}_{j\in\mathbb{N}}$ converges in $\mathcal{S}(\mathbb{R})$.

Exercise 3.35. Determine which of the following functions belong to $\mathcal{S}(\mathbb{R}^n)$.
(a) $e^{-(x_1+x_2+\cdots+x_n)^2}$
(b) $(x_1^2+x_2^2+\cdots+x_n^2)^{n!}\, e^{-|x|^2}$
(c) $(1+|x|^2)^{-2n}$
(d) $\dfrac{\sin\big(e^{-|x|^2}\big)}{1+|x|^2}$
(e) $\dfrac{\cos\big(e^{-|x|^2}\big)}{(1+|x|^2)^n}$
(f) $e^{-|x|^2}\sin\big(e^{x_1^2}\big)$
(g) $e^{-(Ax)\cdot x}$, where $A\in M_{n\times n}(\mathbb{R})$ is symmetric and satisfies $(Ax)\cdot x>0$ for all $x\in\mathbb{R}^n\setminus\{0\}$ (as before, "$\cdot$" denotes the dot product of vectors in $\mathbb{R}^n$).

Exercise 3.36. Let $A\in M_{n\times n}(\mathbb{R})$ be symmetric and such that $(Ax)\cdot x>0$ for every $x\in\mathbb{R}^n\setminus\{0\}$. Prove that if $f(x):=e^{-(Ax)\cdot x}$ for $x\in\mathbb{R}^n$, then
$$\widehat{f}(\xi) = \frac{\pi^{n/2}}{\sqrt{\det A}}\; e^{-\frac{(A^{-1}\xi)\cdot\xi}{4}} \quad\text{for every }\xi\in\mathbb{R}^n.$$

Exercise 3.37. Prove that $f:\mathbb{R}^2\to\mathbb{R}$ defined by $f(x_1,x_2):=e^{-(x_1^2+x_1x_2+x_2^2)}$ for $(x_1,x_2)\in\mathbb{R}^2$ belongs to $\mathcal{S}(\mathbb{R}^2)$, then compute its Fourier transform.

Exercise 3.38. If $P(x)$ is a polynomial in $\mathbb{R}^n$, compute the Fourier transform of the function defined by $f(x):=P(x)e^{-|x|^2}$ for each $x\in\mathbb{R}^n$.

Exercise 3.39. If $a\in(0,\infty)$ and $x_0\in\mathbb{R}^n$ are fixed, compute the Fourier transform of the function defined by $f(x):=e^{-a|x|^2}\sin(x\cdot x_0)$ for each $x\in\mathbb{R}^n$.

Exercise 3.40. Let $\varphi\in\mathcal{S}(\mathbb{R})$. Prove that the equation $\psi'=\varphi$ has a solution $\psi\in\mathcal{S}(\mathbb{R})$ if and only if $\int_{\mathbb{R}}\varphi(x)\,dx = 0$.
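For a concrete instance of the formula in Exercise 3.36, the sketch below (illustrative only, not from the text) takes $n=2$ with a specific positive definite matrix $A$ and compares a two-dimensional quadrature of the defining integral against the closed form.

```python
import numpy as np

A = np.array([[2.0, 0.5], [0.5, 1.0]])   # symmetric, positive definite
Ainv = np.linalg.inv(A)

# Tensor-product grid; e^{-(Ax).x} decays fast enough that truncating
# the integral to [-8, 8]^2 is harmless at this accuracy.
t = np.linspace(-8.0, 8.0, 801)
h = t[1] - t[0]
X, Y = np.meshgrid(t, t, indexing="ij")
f = np.exp(-(A[0, 0] * X**2 + 2 * A[0, 1] * X * Y + A[1, 1] * Y**2))

def f_hat(xi):
    """Quadrature approximation of \\int e^{-i x.xi} f(x) dx over R^2."""
    return np.sum(f * np.exp(-1j * (X * xi[0] + Y * xi[1]))) * h * h

for xi in [np.array([0.0, 0.0]), np.array([1.0, -2.0]), np.array([0.5, 3.0])]:
    exact = np.pi / np.sqrt(np.linalg.det(A)) * np.exp(-(xi @ Ainv @ xi) / 4)
    assert abs(f_hat(xi) - exact) < 1e-8
print("Exercise 3.36 formula confirmed for a sample matrix A")
```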
Exercise 3.41. Does the equation $\psi' = e^{-x^2}$ have a solution in $\mathcal{S}(\mathbb{R})$?

Exercise 3.42. Fix $x_0\in\mathbb{R}^n$. Prove that the translation map $t_{x_0}$ from (1.3.16) extends linearly and continuously as a map from $\mathcal{S}(\mathbb{R}^n)$ into itself. More precisely, show that the translation map $t_{x_0}:\mathcal{S}(\mathbb{R}^n)\to\mathcal{S}(\mathbb{R}^n)$, defined by $t_{x_0}(\varphi):=\varphi(\cdot-x_0)$ for every $\varphi\in\mathcal{S}(\mathbb{R}^n)$, is linear and continuous. Also, prove that for every $\varphi\in\mathcal{S}(\mathbb{R}^n)$ the following identities hold in $\mathcal{S}(\mathbb{R}^n)$:
$$\mathcal{F}\big(t_{x_0}(\varphi)\big)(\xi) = e^{-ix_0\cdot\xi}\,\widehat{\varphi}(\xi) \quad\text{and}\quad t_{x_0}\widehat{\varphi} = \mathcal{F}\big(e^{ix_0\cdot x}\varphi\big). \tag{3.3.1}$$
Chapter 4

The Space of Tempered Distributions

4.1 Definition and Properties of Tempered Distributions
The topological dual of $\mathcal{S}(\mathbb{R}^n)$ is the vector space
$$\big\{u:\mathcal{S}(\mathbb{R}^n)\to\mathbb{C}:\ u\ \text{is linear and continuous}\big\}. \tag{4.1.1}$$
Functionals $u$ belonging to this space are called tempered distributions (a piece of terminology justified a little later). An important equivalent condition for a linear functional on $\mathcal{S}(\mathbb{R}^n)$ to be a tempered distribution is stated next (see (13.1.32)).

Fact 4.1. A linear functional $u:\mathcal{S}(\mathbb{R}^n)\to\mathbb{C}$ is continuous if and only if there exist $m,k\in\mathbb{N}_0$ and a finite constant $C>0$ such that
$$|u(\varphi)| \le C \sup_{x\in\mathbb{R}^n}\ \sup_{\substack{\alpha,\beta\in\mathbb{N}_0^n\\ |\alpha|\le m,\ |\beta|\le k}} \big|x^{\beta}\partial^{\alpha}\varphi(x)\big|, \qquad \forall\,\varphi\in\mathcal{S}(\mathbb{R}^n). \tag{4.1.2}$$
From Fact 3.6 and Theorem 13.14 we also know that any $f:\mathcal{S}(\mathbb{R}^n)\to\mathbb{C}$ (linear or not) is continuous if and only if it is sequentially continuous. As a consequence, we have the following characterization of tempered distributions.

Proposition 4.2. Let $u:\mathcal{S}(\mathbb{R}^n)\to\mathbb{C}$ be linear. Then $u$ is a tempered distribution if and only if $u(\varphi_j)\xrightarrow[j\to\infty]{} 0$ whenever $\varphi_j\xrightarrow[j\to\infty]{\mathcal{S}(\mathbb{R}^n)} 0$.

As discussed in Example 2.7, to any locally integrable function $f$ in $\mathbb{R}^n$ one can associate an "ordinary" distribution $u_f\in\mathcal{D}'(\mathbb{R}^n)$. This being said, it is not always the case that $u_f$ is actually a tempered distribution (for more on this, see Remark 4.15). This is, however, true if the locally integrable function $f$ becomes integrable at infinity after being tempered by a polynomial. We elaborate on this issue in the next example.

D. Mitrea, Distributions, Partial Differential Equations, and Harmonic Analysis, Universitext, DOI 10.1007/978-1-4614-8208-6_4, © Springer Science+Business Media New York 2013
Example 4.3. Let $f\in L^1_{\rm loc}(\mathbb{R}^n)$ be such that there exist $m\in[0,\infty)$ and $R\in(0,\infty)$ with the property that
$$\int_{|x|\ge R} |x|^{-m}|f(x)|\,dx < \infty. \tag{4.1.3}$$
We claim that the distribution of function type defined by $f$ is a tempered distribution, that is, the mapping
$$u_f:\mathcal{S}(\mathbb{R}^n)\to\mathbb{C}, \qquad u_f(\varphi) := \int_{\mathbb{R}^n} f\varphi\,dx, \quad \forall\,\varphi\in\mathcal{S}(\mathbb{R}^n), \tag{4.1.4}$$
is a well-defined tempered distribution. To see that this is the case, pick $N\in\mathbb{N}_0$ such that $N\ge m$ and, for an arbitrary $\varphi\in\mathcal{S}(\mathbb{R}^n)$, estimate
$$\begin{aligned}
\int_{\mathbb{R}^n}|f\varphi|\,dx &\le \int_{|x|<R}|f(x)\varphi(x)|\,dx + \int_{|x|\ge R}|x|^{-m}|f(x)|\,|x|^m|\varphi(x)|\,dx\\
&\le C\,\sup_{x\in\mathbb{R}^n,\ 0\le k\le N}\big(|x|^k|\varphi(x)|\big)\Big(\int_{|x|<R}|f(x)|\,dx + \int_{|x|\ge R}|x|^{-m}|f(x)|\,dx\Big)\\
&\le C\,\sup_{x\in\mathbb{R}^n,\ 0\le k\le N}\big(|x|^k|\varphi(x)|\big),
\end{aligned} \tag{4.1.5}$$
where we have used the easily checked fact that, since $N\ge m$, we have
$$\sup_{|x|\ge R}\big(|x|^m|\varphi(x)|\big) \le R^{m-N}\sup_{x\in\mathbb{R}^n}\big(|x|^N|\varphi(x)|\big). \tag{4.1.6}$$
Estimate (4.1.5) shows that the functional $u_f$ is well-defined on $\mathcal{S}(\mathbb{R}^n)$. Clearly, $u_f$ is linear, and (4.1.5) also implies (based on Fact 4.1) that $u_f$ is continuous on $\mathcal{S}(\mathbb{R}^n)$. Hence, $u_f$ is a tempered distribution.

In the sequel, we will often use the notation $f$ instead of $u_f$, whenever $f$ is such that the operator $u_f$ as in (4.1.4) is linear and continuous.

Exercise 4.4. Prove that if $a\in(-n,\infty)$ then $|x|^a$ is a tempered distribution in $\mathbb{R}^n$.

Exercise 4.5. Let $f:\mathbb{R}^n\to\mathbb{C}$ be a Lebesgue measurable function with the property that there exists $m\in\mathbb{R}$ such that
$$\int_{\mathbb{R}^n} (1+|x|)^m |f(x)|\,dx < \infty. \tag{4.1.7}$$
Then the mapping $u_f$ as in (4.1.4) is well-defined and is a tempered distribution.
Hint: Use Example 4.3.
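To make Exercise 4.5 concrete, the sketch below (illustrative, not from the text) takes $n=1$ and $f(x)=|x|$, which satisfies (4.1.7) with $m=-3$, and evaluates the pairing (4.1.4) against a Gaussian test function, for which the integral is known in closed form.

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal rule along the last axis."""
    return np.sum((y[..., 1:] + y[..., :-1]) * np.diff(x) / 2.0, axis=-1)

# n = 1, f(x) = |x|: not integrable on R, but (1 + |x|)^{-3} |x| is, so
# (4.1.7) holds with m = -3 and Exercise 4.5 makes u_f tempered.
x = np.linspace(-12.0, 12.0, 48001)
f = np.abs(x)
phi = np.exp(-x**2)          # Schwartz test function

# Pairing (4.1.4): \int |x| e^{-x^2} dx = 1 exactly.
pairing = trapezoid(f * phi, x)
assert abs(pairing - 1.0) < 1e-6
print("u_f(phi) =", pairing)
```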
Exercise 4.6. Prove that $\mathcal{L}(\mathbb{R}^n)$, the space of slowly increasing functions defined in (3.1.22), is contained in the space of tempered distributions.

Example 4.7. Let $p\in[1,\infty]$ and $f\in L^p(\mathbb{R}^n)$ be arbitrary. We claim that $u_f$ defined in (4.1.4) is a tempered distribution. To prove this claim, since $f$ is measurable, by Exercise 4.5 it suffices to show that $f$ satisfies (4.1.7). If $p=1$, then (4.1.7) holds for $m=0$, while if $p=\infty$, (4.1.7) holds for any $m<-n$. If $p\in(1,\infty)$, by applying Hölder's inequality we have
$$\int_{\mathbb{R}^n} (1+|x|)^m |f(x)|\,dx \le \|f\|_{L^p(\mathbb{R}^n)}\Big(\int_{\mathbb{R}^n} (1+|x|)^{\frac{mp}{p-1}}\,dx\Big)^{\frac{p-1}{p}} < \infty,$$
provided we choose $m<-\frac{n(p-1)}{p}$. In summary, we proved that
$$\text{for each } p\in[1,\infty] \text{ the space } L^p(\mathbb{R}^n) \text{ is a subspace of the space of tempered distributions.} \tag{4.1.8}$$
As an immediate consequence of Exercise 3.5 and (4.1.8) we therefore obtain
$$\mathcal{S}(\mathbb{R}^n) \text{ is a subspace of the space of tempered distributions.} \tag{4.1.9}$$

Example 4.8. From Example 2.9 we know that $\ln|x|\in L^1_{\rm loc}(\mathbb{R}^n)$ and, hence, defines a distribution in $\mathbb{R}^n$. We claim that the distribution $\ln|x|$ is, in fact, a tempered distribution. Indeed, this follows from Exercise 4.5, since (4.1.7) holds for any $m<-n$ (something seen by using estimate (2.1.9) with $0<\varepsilon<\min\{n,-m-n\}$).
The topology we consider on the space of tempered distributions is the weak-$*$ topology, and we denote this topological vector space by $\mathcal{S}'(\mathbb{R}^n)$. In particular, from the general discussion in Sect. 13.1 we have:

Fact 4.9. $\mathcal{S}'(\mathbb{R}^n)$ is a locally convex topological space.

Fact 4.10. A sequence $\{u_j\}_{j\in\mathbb{N}}\subset\mathcal{S}'(\mathbb{R}^n)$ converges to some $u\in\mathcal{S}'(\mathbb{R}^n)$ in $\mathcal{S}'(\mathbb{R}^n)$ if and only if $\langle u_j,\varphi\rangle\xrightarrow[j\to\infty]{}\langle u,\varphi\rangle$ for every $\varphi\in\mathcal{S}(\mathbb{R}^n)$, in which case we write $u_j\xrightarrow[j\to\infty]{\mathcal{S}'(\mathbb{R}^n)} u$.

It is useful to note that, for each $p\in[1,\infty]$, the space of $p$th-power integrable functions in $\mathbb{R}^n$ embeds continuously in the space of tempered distributions.

Exercise 4.11. Prove that if $p\in[1,\infty]$ and a sequence of functions $\{f_j\}_{j\in\mathbb{N}}$ in $L^p(\mathbb{R}^n)$ converges in $L^p(\mathbb{R}^n)$ to some $f\in L^p(\mathbb{R}^n)$, then $f_j\xrightarrow[j\to\infty]{\mathcal{S}'(\mathbb{R}^n)} f$.
Hint: Use Hölder's inequality and (4.1.8).

Exercise 4.12. Assume that $\phi$ is as in (1.2.3) and recall the sequence of functions $\{\phi_j\}_{j\in\mathbb{N}}$ from (1.3.7). Prove that $\phi_j\xrightarrow[j\to\infty]{\mathcal{S}'(\mathbb{R}^n)}\delta$.
Hint: Reason as in Example 2.20.
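The convergence in Exercise 4.12 can be watched numerically. The sketch below is illustrative only: it assumes the standard mollifier normalization $\phi_j(x)=j^n\phi(jx)$ with $\int\phi=1$ (the exact form of (1.2.3) and (1.3.7) is in Chapter 1) and checks that the pairings $\langle\phi_j,\varphi\rangle$ approach $\varphi(0)$.

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal rule along the last axis."""
    return np.sum((y[..., 1:] + y[..., :-1]) * np.diff(x) / 2.0, axis=-1)

x = np.linspace(-2.0, 2.0, 400001)

# A bump supported in [-1, 1], normalized to have integral 1 (assumed
# shape of the mollifier phi from (1.2.3)).
bump = np.where(np.abs(x) < 1, np.exp(-1.0 / np.maximum(1 - x**2, 1e-300)), 0.0)
bump /= trapezoid(bump, x)

test = np.cos(x) * np.exp(-x**2)      # a test function with test(0) = 1
errors = []
for j in [1, 2, 4, 8, 16, 32]:
    # phi_j(x) = j * phi(j x) in dimension n = 1, sampled by interpolation.
    phi_j = j * np.interp(j * x, x, bump, left=0.0, right=0.0)
    errors.append(abs(trapezoid(phi_j * test, x) - 1.0))

# The pairings <phi_j, test> converge to test(0) = 1 as j grows.
assert all(e2 < e1 for e1, e2 in zip(errors, errors[1:]))
assert errors[-1] < 1e-3
print("pairing errors:", errors)
```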
Theorem 4.13. For each $n,m\in\mathbb{N}$ the following statements are true.
(a) $\mathcal{E}'(\mathbb{R}^n)\hookrightarrow\mathcal{S}'(\mathbb{R}^n)\hookrightarrow\mathcal{D}'(\mathbb{R}^n)$, where all the embeddings are injective and continuous.
(b) For each $a\in\mathcal{L}(\mathbb{R}^n)$ and each $u\in\mathcal{S}'(\mathbb{R}^n)$, the distribution $au\in\mathcal{D}'(\mathbb{R}^n)$ extends uniquely to a tempered distribution, which will be denoted by $au$, and its action is given by $\langle au,\varphi\rangle = \langle u,a\varphi\rangle$ for every $\varphi\in\mathcal{S}(\mathbb{R}^n)$. Moreover, the mapping
$$\mathcal{S}'(\mathbb{R}^n)\ni u \mapsto au \in\mathcal{S}'(\mathbb{R}^n) \tag{4.1.10}$$
is linear and continuous.
(c) For every $\alpha\in\mathbb{N}_0^n$ and each $u\in\mathcal{S}'(\mathbb{R}^n)$, the distribution $\partial^\alpha u\in\mathcal{D}'(\mathbb{R}^n)$ extends uniquely to a tempered distribution, which will be denoted by $\partial^\alpha u$, and its action is given by
$$\langle\partial^\alpha u,\varphi\rangle = (-1)^{|\alpha|}\langle u,\partial^\alpha\varphi\rangle \quad\text{for every }\varphi\in\mathcal{S}(\mathbb{R}^n). \tag{4.1.11}$$
Moreover, the mapping
$$\mathcal{S}'(\mathbb{R}^n)\ni u \mapsto \partial^\alpha u \in\mathcal{S}'(\mathbb{R}^n) \tag{4.1.12}$$
is linear and continuous.
(d) If $u\in\mathcal{S}'(\mathbb{R}^m)$ and $v\in\mathcal{S}'(\mathbb{R}^n)$, then the distribution $u\otimes v\in\mathcal{D}'(\mathbb{R}^m\times\mathbb{R}^n)$ extends uniquely to a tempered distribution, which will be denoted by $u\otimes v$. In addition,
$$\langle u\otimes v,\varphi_1\otimes\varphi_2\rangle = \langle u,\varphi_1\rangle\langle v,\varphi_2\rangle, \qquad \forall\,\varphi_1\in\mathcal{S}(\mathbb{R}^m),\ \forall\,\varphi_2\in\mathcal{S}(\mathbb{R}^n), \tag{4.1.13}$$
and $u\otimes v = v\otimes u$ for every $u\in\mathcal{S}'(\mathbb{R}^m)$ and $v\in\mathcal{S}'(\mathbb{R}^n)$. Moreover, the mapping
$$\mathcal{S}'(\mathbb{R}^m)\times\mathcal{S}'(\mathbb{R}^n)\ni (u,v)\mapsto u\otimes v\in\mathcal{S}'(\mathbb{R}^m\times\mathbb{R}^n) \tag{4.1.14}$$
is bilinear and separately continuous.

Proof. The statement in (a) is a consequence of parts (c) and (d) in Theorem 3.13 and duality (cf. Proposition 13.3). Next, fix a function $a\in\mathcal{L}(\mathbb{R}^n)$ and let $u\in\mathcal{S}'(\mathbb{R}^n)$ be arbitrary. Then based on part (a) we have $u\in\mathcal{D}'(\mathbb{R}^n)$. Hence, by Proposition 2.23, $au$ exists and belongs to $\mathcal{D}'(\mathbb{R}^n)$. We will show that $au$ may be extended uniquely to a tempered distribution. Define
$$\widetilde{au}:\mathcal{S}(\mathbb{R}^n)\to\mathbb{C}, \qquad \widetilde{au}(\varphi) := \langle u,a\varphi\rangle, \quad \forall\,\varphi\in\mathcal{S}(\mathbb{R}^n). \tag{4.1.15}$$
Since $\widetilde{au}$ is the composition of $u\in\mathcal{S}'(\mathbb{R}^n)$ with the map in part (a) of Theorem 3.13, both of which are linear and continuous, it follows that $\widetilde{au}\in\mathcal{S}'(\mathbb{R}^n)$. In addition, $\widetilde{au}\big|_{C_0^\infty(\mathbb{R}^n)} = au$, so if we invoke (d) in Theorem 3.13, we obtain that $\widetilde{au}$ is the unique continuous extension of $au$ to $\mathcal{S}(\mathbb{R}^n)$.
The map in (4.1.10) is also continuous, as seen from Theorem 3.13 and the general fact that the transpose of any linear and continuous operator between two topological vector spaces is continuous at the level of dual spaces equipped with weak-$*$ topologies (cf. Proposition 13.1). Re-denoting $\widetilde{au}$ simply as $au$ now finishes the proof of the statement in (b). The proof of (c) is similar to the proof of (b).

Turning our attention to (d), let $u\in\mathcal{S}'(\mathbb{R}^m)$ and $v\in\mathcal{S}'(\mathbb{R}^n)$. By part (a) we have $u\in\mathcal{D}'(\mathbb{R}^m)$ and $v\in\mathcal{D}'(\mathbb{R}^n)$, hence (by Theorem 2.78) $u\otimes v\in\mathcal{D}'(\mathbb{R}^m\times\mathbb{R}^n)$. We construct an extension $\widetilde{u\otimes v}:\mathcal{S}(\mathbb{R}^{m+n})\to\mathbb{C}$ by setting
$$\langle\widetilde{u\otimes v},\varphi\rangle := \big\langle u(x),\langle v(y),\varphi(x,y)\rangle\big\rangle, \qquad \forall\,\varphi\in\mathcal{S}(\mathbb{R}^m\times\mathbb{R}^n). \tag{4.1.16}$$
To see that this mapping is in $\mathcal{S}'(\mathbb{R}^m\times\mathbb{R}^n)$, fix $\varphi\in\mathcal{S}(\mathbb{R}^m\times\mathbb{R}^n)$. Clearly, we have $\varphi(x,\cdot)\in\mathcal{S}(\mathbb{R}^n)$ for each $x\in\mathbb{R}^m$. We claim that
$$\psi(x) := \langle v(y),\varphi(x,y)\rangle \ \text{ for each } x\in\mathbb{R}^m \ \Longrightarrow\ \psi\in\mathcal{S}(\mathbb{R}^m). \tag{4.1.17}$$
Indeed, by reasoning as in the proof of Proposition 2.76, we obtain $\psi\in C^\infty(\mathbb{R}^m)$. Also, for any $\alpha,\beta\in\mathbb{N}_0^m$ we have $x^\beta\partial^\alpha\psi(x) = \langle v(y),x^\beta\partial_x^\alpha\varphi(x,y)\rangle$ and, since $v\in\mathcal{S}'(\mathbb{R}^n)$, there exist $C>0$ and $\ell,k\in\mathbb{N}_0$ such that $v$ satisfies (4.1.2). Thus, for every $x\in\mathbb{R}^m$, we have
$$\big|x^\beta\partial^\alpha\psi(x)\big| = \big|\langle v(y),x^\beta\partial_x^\alpha\varphi(x,y)\rangle\big| \le C\sup_{\substack{y\in\mathbb{R}^n\\ |\gamma|\le\ell,\,|\delta|\le k}}\big|y^\gamma x^\beta\partial_x^\alpha\partial_y^\delta\varphi(x,y)\big|. \tag{4.1.18}$$
Therefore, since $\varphi\in\mathcal{S}(\mathbb{R}^m\times\mathbb{R}^n)$, estimate (4.1.18) further yields
$$\sup_{x\in\mathbb{R}^m}\big|x^\beta\partial^\alpha\psi(x)\big| \le C\sup_{\substack{x\in\mathbb{R}^m,\,y\in\mathbb{R}^n\\ |\gamma|\le\ell,\,|\delta|\le k}}\big|y^\gamma x^\beta\partial_x^\alpha\partial_y^\delta\varphi(x,y)\big| < \infty. \tag{4.1.19}$$
This shows that $\psi\in\mathcal{S}(\mathbb{R}^m)$, finishing the proof of the claim. Consider next the mapping
$$\mathcal{S}(\mathbb{R}^m\times\mathbb{R}^n)\ni\varphi\mapsto\psi\in\mathcal{S}(\mathbb{R}^m). \tag{4.1.20}$$
This is linear by design, as well as continuous (as seen from (4.1.19), Fact 3.6, and Theorem 13.14). Thus, the composition between $u$ and the map in (4.1.20) gives rise to a linear and continuous map, which proves that $\widetilde{u\otimes v}$ belongs to $\mathcal{S}'(\mathbb{R}^m\times\mathbb{R}^n)$. In addition $\widetilde{u\otimes v}\big|_{C_0^\infty(\mathbb{R}^m\times\mathbb{R}^n)} = u\otimes v$, which in combination with (d) in Theorem 3.13 implies that $\widetilde{u\otimes v}$ is the unique continuous extension of $u\otimes v$ to $\mathcal{S}(\mathbb{R}^m\times\mathbb{R}^n)$. Moreover, from (4.1.16) we see that
$$\langle\widetilde{u\otimes v},\varphi_1\otimes\varphi_2\rangle = \langle u,\varphi_1\rangle\langle v,\varphi_2\rangle, \qquad \forall\,\varphi_1\in\mathcal{S}(\mathbb{R}^m),\ \forall\,\varphi_2\in\mathcal{S}(\mathbb{R}^n). \tag{4.1.21}$$
Similarly, we define $\widetilde{v\otimes u}\in\mathcal{S}'(\mathbb{R}^n\times\mathbb{R}^m)$. Recalling now part (ii) in Theorem 2.78 we may write
$$\widetilde{u\otimes v}\big|_{C_0^\infty(\mathbb{R}^m\times\mathbb{R}^n)} = u\otimes v = v\otimes u = \widetilde{v\otimes u}\big|_{C_0^\infty(\mathbb{R}^n\times\mathbb{R}^m)}. \tag{4.1.22}$$
Thus, using (d) in Theorem 3.13, we conclude $\widetilde{u\otimes v} = \widetilde{v\otimes u}$.

Clearly, $\mathcal{S}'(\mathbb{R}^m)\times\mathcal{S}'(\mathbb{R}^n)\ni(u,v)\mapsto\widetilde{u\otimes v}\in\mathcal{S}'(\mathbb{R}^m\times\mathbb{R}^n)$ is bilinear, and our goal is to show that this is also separately continuous. Since $\widetilde{u\otimes v} = \widetilde{v\otimes u}$ for every $u\in\mathcal{S}'(\mathbb{R}^m)$ and every $v\in\mathcal{S}'(\mathbb{R}^n)$, it suffices to prove that this map is continuous in the first variable. For this, we shall rely on the abstract description of open sets in the weak-$*$ topology from (13.1.9). Specifically, having fixed $v\in\mathcal{S}'(\mathbb{R}^n)$, pick an arbitrary finite set $A\subset\mathcal{S}(\mathbb{R}^m\times\mathbb{R}^n)$ along with some number $\varepsilon\in(0,\infty)$, and introduce
$$O_{A,\varepsilon} := \big\{w\in\mathcal{S}'(\mathbb{R}^m\times\mathbb{R}^n):\ |\langle w,\psi\rangle|<\varepsilon,\ \forall\,\psi\in A\big\}. \tag{4.1.23}$$
If we now define $\widetilde{A} := \big\{\langle v(y),\psi(\cdot,y)\rangle:\ \psi\in A\big\}$, then from what we proved earlier (cf. (4.1.17)) it follows that $\widetilde{A}$ is a subset of $\mathcal{S}(\mathbb{R}^m)$. Also, $\widetilde{A}$ is finite since $A$ is finite. Hence, if we now set
$$O_{\widetilde{A},\varepsilon} := \big\{u\in\mathcal{S}'(\mathbb{R}^m):\ |\langle u,\varphi\rangle|<\varepsilon,\ \forall\,\varphi\in\widetilde{A}\big\}, \tag{4.1.24}$$
using (4.1.16) we have $\widetilde{u\otimes v}\in O_{A,\varepsilon}$ for every $u\in O_{\widetilde{A},\varepsilon}$. In light of (13.1.9), this shows that $u\mapsto\widetilde{u\otimes v}$ is continuous. Finally, abbreviating matters by simply writing $u\otimes v$ in place of $\widetilde{u\otimes v}$, all claims in part (d) of the statement of the theorem now follow.

Remark 4.14. (i) In view of (a) in Theorem 4.13 and (d) in Theorem 3.13 we have:
$$\text{if } u,v\in\mathcal{S}'(\mathbb{R}^n) \text{ and } u=v \text{ in } \mathcal{D}'(\mathbb{R}^n), \text{ then } u=v \text{ in } \mathcal{S}'(\mathbb{R}^n). \tag{4.1.25}$$
(ii) Whenever $u\in\mathcal{S}'(\mathbb{R}^n)$ its support is understood as defined in (2.5.7).

Remark 4.15. The inclusion $\mathcal{S}'(\mathbb{R}^n)\hookrightarrow\mathcal{D}'(\mathbb{R}^n)$ (which goes to show that any tempered distribution is indeed an ordinary distribution) is actually strict. This follows from the observation that, in contrast to the case of ordinary distributions,
$$L^1_{\rm loc}(\mathbb{R}^n)\not\subset\mathcal{S}'(\mathbb{R}^n). \tag{4.1.26}$$
To justify (4.1.26), take $\lambda>0$ arbitrary, fixed, and consider the function
$$f:\mathbb{R}^n\to\mathbb{C}, \qquad f(x) := e^{|x|^{\lambda}} \quad\text{for every } x\in\mathbb{R}^n. \tag{4.1.27}$$
Clearly $f\in L^1_{\rm loc}(\mathbb{R}^n)$, hence $f\in\mathcal{D}'(\mathbb{R}^n)$. However, this distribution cannot be extended to a tempered distribution. To see why this is true, suppose there exists $u\in\mathcal{S}'(\mathbb{R}^n)$ such that $u\big|_{C_0^\infty(\mathbb{R}^n)} = f$ in $\mathcal{D}'(\mathbb{R}^n)$. Since $u$ is a tempered distribution, there exist some finite constant $C\ge 0$ and numbers $k,m\in\mathbb{N}_0$ such that
$$|\langle u,\varphi\rangle| \le C\sup_{x\in\mathbb{R}^n,\ |\alpha|\le k,\ |\beta|\le m}\big|x^\beta\partial^\alpha\varphi(x)\big|, \qquad \forall\,\varphi\in\mathcal{S}(\mathbb{R}^n). \tag{4.1.28}$$
Now fix a function
$$\psi\in C_0^\infty(\mathbb{R}^n), \qquad \psi\ge 0, \qquad \operatorname{supp}\psi\subseteq B(0,1), \qquad \int_{1/2\le|x|\le 1}\psi(x)\,dx = 1, \tag{4.1.29}$$
and for each $j\in\mathbb{N}$ define $\psi_j(x):=\psi(x/j)$, for $x\in\mathbb{R}^n$. Then by (4.1.28), for each $j\in\mathbb{N}$, we have
$$|\langle u,\psi_j\rangle| \le C\sup_{x\in B(0,j),\ |\alpha|\le k,\ |\beta|\le m}\big|x^\beta\partial^\alpha\psi_j(x)\big| \le C\, j^{m}, \tag{4.1.30}$$
for some $C\in[0,\infty)$ independent of $j$. On the other hand, (4.1.29) forces the lower bound
$$\langle u,\psi_j\rangle = \int_{|x|\le j} e^{|x|^\lambda}\psi_j(x)\,dx \ge \int_{j/2\le|x|\le j} e^{|x|^\lambda}\psi_j(x)\,dx \ge e^{(j/2)^\lambda} j^{-n}\int_{1/2\le|x|\le 1}\psi(x)\,dx = e^{(j/2)^\lambda} j^{-n}, \qquad \forall\, j\in\mathbb{N}. \tag{4.1.31}$$
Comparing (4.1.30) and (4.1.31) yields a contradiction upon choosing $j$ large enough. Hence, $f$ cannot be extended to a tempered distribution.

Remark 4.16. What prevents locally integrable functions of the form (4.1.27) from belonging to $\mathcal{S}'(\mathbb{R}^n)$ is the fact that their growth at infinity is not tempered enough and, in fact, this observation justifies the very name "tempered distribution."

Exercise 4.17. Let $\lambda\in\mathbb{R}$ be such that $\lambda<n-1$, and fix some $j\in\{1,\dots,n\}$. For these choices, consider the functions $f,g$ defined, respectively, by $f(x):=|x|^{-\lambda}$ and $g(x):=-\lambda x_j|x|^{-\lambda-2}$ for each $x\in\mathbb{R}^n\setminus\{0\}$. Prove that $f,g\in\mathcal{S}'(\mathbb{R}^n)$ and that $\partial_j f = g$ in $\mathcal{S}'(\mathbb{R}^n)$. In short,
$$\partial_j |x|^{-\lambda} = -\lambda\, x_j\,|x|^{-\lambda-2} \quad\text{in }\ \mathcal{S}'(\mathbb{R}^n) \tag{4.1.32}$$
whenever $\lambda<n-1$ and $j\in\{1,\dots,n\}$.
Hint: To prove (4.1.32) use (4.1.11), and integration by parts coupled with a limiting argument to extricate the singularity at the origin.

Given a tempered distribution, we may consider the convolution between its restriction to $C_0^\infty(\mathbb{R}^n)$ and any compactly supported distribution. A natural question, addressed in the next theorem, is whether the distribution obtained via such a convolution may be extended to a tempered distribution.

Theorem 4.18. The following statements are true.
(a) If $u\in\mathcal{E}'(\mathbb{R}^n)$ and $v\in\mathcal{S}'(\mathbb{R}^n)$, then the distribution $u*v\in\mathcal{D}'(\mathbb{R}^n)$ extends uniquely to a tempered distribution, which will be denoted by $u*v$. Also, $v*u\in\mathcal{D}'(\mathbb{R}^n)$ extends uniquely to a tempered distribution, and $u*v = v*u$ in $\mathcal{S}'(\mathbb{R}^n)$. Moreover, the mapping $\mathcal{E}'(\mathbb{R}^n)\times\mathcal{S}'(\mathbb{R}^n)\ni(u,v)\mapsto u*v\in\mathcal{S}'(\mathbb{R}^n)$ is bilinear, and for every $u\in\mathcal{E}'(\mathbb{R}^n)$, $v\in\mathcal{S}'(\mathbb{R}^n)$, we have
$$\partial^\alpha(u*v) = (\partial^\alpha u)*v = u*(\partial^\alpha v) \quad\text{in }\ \mathcal{S}'(\mathbb{R}^n), \qquad \forall\,\alpha\in\mathbb{N}_0^n. \tag{4.1.33}$$
(b) If $u_j\xrightarrow[j\to\infty]{\mathcal{D}'(\mathbb{R}^n)} u$ and there exists some compact set $K\subset\mathbb{R}^n$ such that $\operatorname{supp}u_j\subset K$ for every $j\in\mathbb{N}$, then $u_j*v\xrightarrow[j\to\infty]{\mathcal{S}'(\mathbb{R}^n)} u*v$ for each $v\in\mathcal{S}'(\mathbb{R}^n)$.
(c) If $v_j\xrightarrow[j\to\infty]{\mathcal{S}'(\mathbb{R}^n)} v$, then $u*v_j\xrightarrow[j\to\infty]{\mathcal{S}'(\mathbb{R}^n)} u*v$ for each $u\in\mathcal{E}'(\mathbb{R}^n)$.
(d) $\delta*u = u = u*\delta$ for each $u\in\mathcal{S}'(\mathbb{R}^n)$.
(e) Let $a\in\mathcal{S}(\mathbb{R}^n)$ and $u\in\mathcal{S}'(\mathbb{R}^n)$. Then the mapping
$$a*u:\mathcal{S}(\mathbb{R}^n)\to\mathbb{C}, \qquad (a*u)(\varphi) := \langle u,a^\vee*\varphi\rangle, \quad \forall\,\varphi\in\mathcal{S}(\mathbb{R}^n), \tag{4.1.34}$$
is well-defined, linear, and continuous, hence $a*u$ belongs to $\mathcal{S}'(\mathbb{R}^n)$. We also define $u*a := a*u$. If $a\in C_0^\infty(\mathbb{R}^n)$, then the map in (4.1.34) is the unique continuous extension of $a*u\in\mathcal{D}'(\mathbb{R}^n)$ to a tempered distribution. In addition, the mapping
$$\mathcal{S}(\mathbb{R}^n)\times\mathcal{S}'(\mathbb{R}^n)\ni(a,u)\mapsto a*u\in\mathcal{S}'(\mathbb{R}^n) \tag{4.1.35}$$
is bilinear and separately sequentially continuous. Also, for every function $a\in\mathcal{S}(\mathbb{R}^n)$ and every distribution $u\in\mathcal{S}'(\mathbb{R}^n)$ we have
$$\partial^\alpha(a*u) = (\partial^\alpha a)*u = a*(\partial^\alpha u) \quad\text{in }\ \mathcal{S}'(\mathbb{R}^n), \qquad \forall\,\alpha\in\mathbb{N}_0^n. \tag{4.1.36}$$
Proof. Fix $u\in\mathcal{E}'(\mathbb{R}^n)$ and $v\in\mathcal{S}'(\mathbb{R}^n)$. Because of (a) in Theorem 4.13 and Theorem 2.85, it follows that $u*v$ exists as an element in $\mathcal{D}'(\mathbb{R}^n)$. We will prove that $u*v$ extends uniquely to an element in $\mathcal{S}'(\mathbb{R}^n)$. Fix $\psi\in C_0^\infty(\mathbb{R}^n)$ such that $\psi=1$ in a neighborhood of $\operatorname{supp}u$. Recall the notation introduced in (2.8.7) and let $\mathbf{1}$ denote the constant function equal to $1$ in $\mathbb{R}^n$. We then claim that the map
$$\mathcal{S}(\mathbb{R}^n)\ni\varphi\mapsto(\psi\otimes\mathbf{1})\varphi_\Delta\in\mathcal{S}(\mathbb{R}^n\times\mathbb{R}^n) \ \text{ is linear and continuous.} \tag{4.1.37}$$
Indeed, if $\varphi\in\mathcal{S}(\mathbb{R}^n)$, then $(\psi\otimes\mathbf{1})\varphi_\Delta\in C^\infty(\mathbb{R}^n\times\mathbb{R}^n)$ and if $\alpha,\beta,\gamma,\delta\in\mathbb{N}_0^n$, then by Leibniz's formula (cf. Proposition 13.8), the binomial theorem (cf. Theorem 13.7), and the compactness of the support of $\psi$, we may write
$$\begin{aligned}
\sup_{x\in\mathbb{R}^n,\,y\in\mathbb{R}^n}\big|x^\alpha y^\beta\partial_x^\gamma[\psi(x)\,\partial_y^\delta\varphi(x+y)]\big|
&= \sup_{x\in\operatorname{supp}\psi,\,y\in\mathbb{R}^n}\big|x^\alpha y^\beta\partial_x^\gamma[\psi(x)(\partial^\delta\varphi)(x+y)]\big|\\
&\le C\sum_{\mu\le\gamma}\ \sup_{x\in\operatorname{supp}\psi,\,y\in\mathbb{R}^n}\big|(y+x-x)^\beta(\partial^\mu\psi)(x)(\partial^{\gamma-\mu+\delta}\varphi)(x+y)\big|\\
&\le C\sum_{\mu\le\gamma}\sum_{\eta\le\beta}\ \sup_{x\in\operatorname{supp}\psi,\,y\in\mathbb{R}^n}\big|(y+x)^\eta(\partial^{\gamma-\mu+\delta}\varphi)(x+y)\big|\\
&\le C\sum_{\mu\le\gamma}\sum_{\eta\le\beta}\ \sup_{z\in\mathbb{R}^n}\big|z^\eta\partial^{\gamma-\mu+\delta}\varphi(z)\big| < \infty,
\end{aligned} \tag{4.1.38}$$
where all constants are independent of $\varphi$. Hence, $(\psi\otimes\mathbf{1})\varphi_\Delta\in\mathcal{S}(\mathbb{R}^n\times\mathbb{R}^n)$, which proves that the map in (4.1.37) is well-defined. Since this map is also linear, it suffices to check its continuity at zero. This, in turn, follows from Fact 3.6, Theorem 13.14, and the fact that the final estimate in (4.1.38) implies that the map in (4.1.37) is sequentially continuous. If we now define
$$\widetilde{u*v}:\mathcal{S}(\mathbb{R}^n)\to\mathbb{C}, \qquad \widetilde{u*v}(\varphi) := \big\langle u(x)\otimes v(y),\psi(x)\varphi(x+y)\big\rangle, \quad \forall\,\varphi\in\mathcal{S}(\mathbb{R}^n), \tag{4.1.39}$$
then (d) in Theorem 4.13 combined with (4.1.37) gives that $\widetilde{u*v}$ belongs to $\mathcal{S}'(\mathbb{R}^n)$. In addition, this definition is independent of the choice of $\psi$ selected as above. Indeed, if $\psi_1,\psi_2\in C_0^\infty(\mathbb{R}^n)$ are such that each equals $1$ in some neighborhood of $\operatorname{supp}u$, then
$$\big\langle u(x)\otimes v(y),[\psi_1(x)-\psi_2(x)]\varphi(x+y)\big\rangle = 0 \quad\text{for every }\varphi\in\mathcal{S}(\mathbb{R}^n). \tag{4.1.40}$$
To see that the map in (4.1.39) is an extension of $u*v\in\mathcal{D}'(\mathbb{R}^n)$, take an arbitrary function $\varphi\in C_0^\infty(\mathbb{R}^n)$. If $\eta\in C_0^\infty(\mathbb{R}^n)$ is such that $\eta=1$ in a neighborhood of the set $\operatorname{supp}\varphi-\operatorname{supp}u$, then the function $\Psi(x,y):=\psi(x)\eta(y)$, for each $x,y\in\mathbb{R}^n$, satisfies $\Psi\in C_0^\infty(\mathbb{R}^n\times\mathbb{R}^n)$ and $\Psi=1$ in a neighborhood of the set $(\operatorname{supp}u\times\operatorname{supp}v)\cap\operatorname{supp}\varphi_\Delta$. Upon recalling (2.8.11), this permits us to write
$$\big\langle u(x)\otimes v(y),\psi(x)\varphi(x+y)\big\rangle = \big\langle u(x)\otimes v(y),\Psi(x,y)\varphi(x+y)\big\rangle = \langle u*v,\varphi\rangle. \tag{4.1.41}$$
This shows that the mapping in (4.1.39) is an extension of $u*v\in\mathcal{D}'(\mathbb{R}^n)$ which, together with (d) in Theorem 3.13, implies that $\widetilde{u*v}$ is the unique extension of $u*v\in\mathcal{D}'(\mathbb{R}^n)$ to a tempered distribution. A similar construction realizes $v*u$ as a tempered distribution. Moreover, since $u*v = v*u$ in $\mathcal{D}'(\mathbb{R}^n)$, from (4.1.25) we deduce that $\widetilde{u*v} = \widetilde{v*u}$ in $\mathcal{S}'(\mathbb{R}^n)$. It is also clear from (4.1.39) that the mapping $\mathcal{E}'(\mathbb{R}^n)\times\mathcal{S}'(\mathbb{R}^n)\ni(u,v)\mapsto\widetilde{u*v}\in\mathcal{S}'(\mathbb{R}^n)$ is bilinear. After dropping the tilde, all claims in statement (a) now follow, with the exception of (4.1.33). Regarding the latter, we first note that the equalities in (4.1.33) hold when interpreted in $\mathcal{D}'(\mathbb{R}^n)$, thanks to part (e) in Theorem 2.87. Given that all distributions involved are tempered, we may invoke (4.1.25) to conclude that the named equalities also hold in $\mathcal{S}'(\mathbb{R}^n)$.
Moving on, let $u_j\xrightarrow[j\to\infty]{\mathcal{D}'(\mathbb{R}^n)} u$ be such that there exists a compact set $K\subset\mathbb{R}^n$ with the property that $\operatorname{supp}u_j\subseteq K$ for each $j\in\mathbb{N}$. Then, by Exercise 2.65, we have $\operatorname{supp}u\subseteq K$ and $u_j\xrightarrow[j\to\infty]{\mathcal{E}'(\mathbb{R}^n)} u$. The latter combined with (a) in Theorem 4.13 implies $u_j\xrightarrow[j\to\infty]{\mathcal{S}'(\mathbb{R}^n)} u$. Fix now an arbitrary $v\in\mathcal{S}'(\mathbb{R}^n)$. From the current part (a) it follows that $u*v\in\mathcal{S}'(\mathbb{R}^n)$ and $u_j*v\in\mathcal{S}'(\mathbb{R}^n)$ for every $j\in\mathbb{N}$. If $\varphi\in\mathcal{S}(\mathbb{R}^n)$ then definition (4.1.39) implies that
$$\langle u_j*v,\varphi\rangle = \big\langle u_j(x)\otimes v(y),\psi(x)\varphi(x+y)\big\rangle, \qquad \forall\, j\in\mathbb{N}, \tag{4.1.42}$$
where $\psi\in C_0^\infty(\mathbb{R}^n)$ is a fixed function chosen so that $\psi=1$ in a neighborhood of $K$. The proof of (4.1.37) shows that $(\psi\otimes\mathbf{1})\varphi_\Delta\in\mathcal{S}(\mathbb{R}^n\times\mathbb{R}^n)$. Since the map in (4.1.14) is separately continuous, we may write
$$\langle u_j*v,\varphi\rangle = \big\langle u_j\otimes v,(\psi\otimes\mathbf{1})\varphi_\Delta\big\rangle \xrightarrow[j\to\infty]{} \big\langle u\otimes v,(\psi\otimes\mathbf{1})\varphi_\Delta\big\rangle = \langle u*v,\varphi\rangle. \tag{4.1.43}$$
This proves the statement in (b). The proof of the statement in (c) is similar. Also, the statement in (d) is an immediate consequence of part (d) in Theorem 2.87 and (4.1.25).

Turning our attention to the statement in (e), we first note that, as noted in Exercise 3.25, the mapping $\mathcal{S}(\mathbb{R}^n)\ni a\mapsto a^\vee\in\mathcal{S}(\mathbb{R}^n)$ is well-defined, linear, and continuous. Based on this and (f) in Theorem 3.13 we may then conclude that the map in (4.1.34) is well-defined, linear, and continuous, as a composition of linear and continuous maps. Assume next that $a\in C_0^\infty(\mathbb{R}^n)$. Then by Proposition 2.93, for each $u\in\mathcal{D}'(\mathbb{R}^n)$ we have $a*u\in C^\infty(\mathbb{R}^n)$ and $(a*u)(x) = \langle u(y),a(x-y)\rangle$ for every $x\in\mathbb{R}^n$, hence
$$\langle a*u,\varphi\rangle = \int_{\mathbb{R}^n}\langle u(y),a(x-y)\rangle\varphi(x)\,dx = \Big\langle u(y),\int_{\mathbb{R}^n} a(x-y)\varphi(x)\,dx\Big\rangle = \langle u,a^\vee*\varphi\rangle, \qquad \forall\,\varphi\in C_0^\infty(\mathbb{R}^n). \tag{4.1.44}$$
Regarding the map in (4.1.35), its bilinearity is immediate. If aj −−−−→ a j→∞
then
S(Rn ) −−−→ a∨ j − j→∞ n
∨
a , so by (f ) in Theorem 3.13,
a∨ j
S(Rn )
∨
∗ ϕ −−−−→ a ∗ ϕ for every j→∞
ϕ ∈ S(R ), hence ∨ u, a∨ j ∗ ϕ −−−→ u, a ∗ ϕ j→∞
for each u ∈ S (Rn ),
proving that the map in (4.1.35) is sequentially continuous in the first variable. S (Rn )
Moreover, if uj −−−−→ u and a ∈ S(Rn ), then j→∞
a∗ uj , ϕ = uj , a∨ ∗ ϕ −−−→ u, a∨ ∗ ϕ = a∗ u, ϕ j→∞
∀ ϕ ∈ S(Rn ). (4.1.45)
4.2. THE FOURIER TRANSFORM ACTING ON TEMPERED...
119
S (Rn )
Thus, a ∗ uj −−−−→ a ∗ u. This finishes the proof of the fact that the mapping j→∞
(4.1.35) is bilinear and separately sequentially continuous. Finally, (4.1.36) is a consequence of (4.1.33), part (d) in Theorem 3.13, the separate sequential continuity of (4.1.35), and part (c) in Theorem 4.13. This completes the proof of the theorem. In the last part of this section we present a density result. Proposition 4.19. The space C0∞ (Rn ) is sequentially dense in S (Rn ). In particular, the Schwartz class S(Rn ) is sequentially dense in S (Rn ). Proof. Pick an arbitrary u ∈ S (Rn ). Fix a function ψ ∈ C0∞ (Rn ) such that ψ = 1 on B(0, 1) and, for each j ∈ N, define ψj (x) := ψ(x/j) for every x ∈ Rn . We claim that S (Rn )
ψj u −−−−→ u.
(4.1.46)
j→∞
Indeed, if ϕ ∈ S(Rn ) is arbitrary, then by reasoning as in the proof of (3.1.28) S(Rn )
we deduce that ψj ϕ −−−−→ ϕ. Hence, j→∞
ψj u, ϕ = u, ψj ϕ −−−→ u, ϕ,
(4.1.47)
j→∞
from which (4.1.46) follows. In light of (4.1.46) it suffices to show that any tempered distribution with compact support is the limit in S (Rn ) of a sequence from C0∞ (Rn ). To this end, suppose that u ∈ S (Rn ) is compactly supported. Also, choose a function φ as in (1.2.3) and recall the sequence S (Rn )
{φj }j∈N ⊂ C0∞ (Rn ) from (1.3.7). Exercise 4.12 then gives φj −−−−→ δ which, j→∞
S (Rn )
in concert with parts (d)–(e) in Theorem 4.18, implies that φj ∗ u −−−−→ u. j→∞
At this stage there remains to observe that φj ∗ u ∈ C0∞ (Rn ) for every j ∈ N, thanks to (2.8.44).
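As a numerical illustration of the distributional derivative identity (4.1.32) from Exercise 4.17 (a sketch, not from the text), take $n=1$ and $\lambda=-1$, so $f(x)=|x|$ and $g(x)=\operatorname{sign}(x)$; the defining relation $\langle g,\varphi\rangle=-\langle f,\varphi'\rangle$ from (4.1.11) can then be checked by quadrature against an asymmetric Gaussian test function.

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal rule along the last axis."""
    return np.sum((y[..., 1:] + y[..., :-1]) * np.diff(x) / 2.0, axis=-1)

# n = 1, lambda = -1: (4.1.32) reads d/dx |x| = sign(x) in S'(R).
x = np.linspace(-15.0, 15.0, 120001)
phi = np.exp(-(x - 1.0) ** 2)          # asymmetric Schwartz test function
dphi = -2.0 * (x - 1.0) * phi          # its derivative

lhs = trapezoid(np.sign(x) * phi, x)   # <g, phi>
rhs = -trapezoid(np.abs(x) * dphi, x)  # -<f, phi'>, cf. (4.1.11)
assert abs(lhs - rhs) < 1e-6
print("<g, phi> =", lhs, " -<f, phi'> =", rhs)
```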
4.2 The Fourier Transform Acting on Tempered Distributions
The goal here is to extend the action of the Fourier transform, considered earlier on the Schwartz class of rapidly decreasing functions, to the space of tempered distributions. To get some understanding of how this can be done in a natural fashion, we begin by noting that if $f\in L^1(\mathbb{R}^n)$ is arbitrary then (3.2.27) gives
$$\int_{\mathbb{R}^n}\widehat{f}(\xi)\varphi(\xi)\,d\xi = \int_{\mathbb{R}^n} f(x)\widehat{\varphi}(x)\,dx, \qquad \forall\,\varphi\in\mathcal{S}(\mathbb{R}^n). \tag{4.2.1}$$
Since $f\in L^1(\mathbb{R}^n)$ and $\widehat{f}\in L^\infty(\mathbb{R}^n)$, based on (4.1.8) we have $f,\widehat{f}\in\mathcal{S}'(\mathbb{R}^n)$. Hence, (4.2.1) may be rewritten as $\langle\widehat{f},\varphi\rangle = \langle f,\widehat{\varphi}\rangle$ for every $\varphi\in\mathcal{S}(\mathbb{R}^n)$. This
CHAPTER 4. THE SPACE OF TEMPERED DISTRIBUTIONS
suggests the definition made below for the Fourier transform of a tempered distribution.

Proposition 4.20. Let $u\in\mathcal{S}'(\mathbb{R}^n)$. Then the mapping
$$\widehat u : \mathcal{S}(\mathbb{R}^n)\to\mathbb{C}, \qquad \widehat u(\varphi) := \langle u,\widehat\varphi\,\rangle, \quad \forall\,\varphi\in\mathcal{S}(\mathbb{R}^n), \tag{4.2.2}$$
is well defined, linear, and continuous. Hence, $\widehat u\in\mathcal{S}'(\mathbb{R}^n)$.

Proof. This is an immediate consequence of the fact that $u\in\mathcal{S}'(\mathbb{R}^n)$, part (c) in Theorem 3.20, and the identity $\widehat u = u\circ\mathcal{F}$, where $\mathcal{F}$ is the Fourier transform on $\mathcal{S}(\mathbb{R}^n)$. $\square$

Example 4.21. Since $\delta\in\mathcal{S}'(\mathbb{R}^n)$ we may write
$$\langle\widehat\delta,\varphi\rangle = \langle\delta,\widehat\varphi\,\rangle = \widehat\varphi(0) = \int_{\mathbb{R}^n}\varphi(x)\,dx = \langle 1,\varphi\rangle, \quad \forall\,\varphi\in\mathcal{S}(\mathbb{R}^n), \tag{4.2.3}$$
thus
$$\widehat\delta = 1 \quad\text{in } \mathcal{S}'(\mathbb{R}^n). \tag{4.2.4}$$
Also, if $x_0\in\mathbb{R}^n$, then for each $\varphi\in\mathcal{S}(\mathbb{R}^n)$ we may write
$$\langle\widehat{\delta_{x_0}},\varphi\rangle = \langle\delta_{x_0},\widehat\varphi\,\rangle = \widehat\varphi(x_0) = \int_{\mathbb{R}^n} e^{-ix_0\cdot x}\varphi(x)\,dx = \langle e^{-ix_0\cdot x},\varphi\rangle, \tag{4.2.5}$$
proving
$$\widehat{\delta_{x_0}} = e^{-ix_0\cdot x} \quad\text{in } \mathcal{S}'(\mathbb{R}^n). \tag{4.2.6}$$
Remark 4.22. The map $\mathcal{S}'(\mathbb{R}^n)\ni u\mapsto\widehat u\in\mathcal{S}'(\mathbb{R}^n)$ (which is further analyzed in part (a) of Theorem 4.25) is an extension of the map in (3.1.3). Indeed, if $f\in L^1(\mathbb{R}^n)$, then the map $u_f$ from (4.1.4) is a tempered distribution and we may write
$$\langle\widehat{u_f},\varphi\rangle = \langle u_f,\widehat\varphi\,\rangle = \int_{\mathbb{R}^n} f(x)\widehat\varphi(x)\,dx = \int_{\mathbb{R}^n}\widehat f(\xi)\varphi(\xi)\,d\xi = \langle\widehat f,\varphi\rangle, \quad \forall\,\varphi\in\mathcal{S}(\mathbb{R}^n), \tag{4.2.7}$$
where $\widehat f$ is as in (3.1.1) and for the third equality we used (3.2.27). This shows that
$$\widehat{u_f} = u_{\widehat f} \quad\text{in } \mathcal{S}'(\mathbb{R}^n), \quad \forall\,f\in L^1(\mathbb{R}^n). \tag{4.2.8}$$
In particular, (4.2.8) also holds if $f\in\mathcal{S}(\mathbb{R}^n)$ (since $\mathcal{S}(\mathbb{R}^n)\subset L^1(\mathbb{R}^n)$ by Exercise 3.5).

Example 4.23. Suppose $a\in(0,\infty)$ and consider the function $f(x):=\frac{1}{x^2+a^2}$ for every $x\in\mathbb{R}$. Since $f\in L^1(\mathbb{R})$, by (4.1.8) we may regard $f$ as a tempered distribution. The goal is to compute $\widehat f$ in $\mathcal{S}'(\mathbb{R})$. From (4.2.8) we know that this is the same as the Fourier transform of $f$ as a function in $L^1(\mathbb{R})$, that is,
$$\widehat f(\xi) = \int_{\mathbb{R}}\frac{e^{-ix\xi}}{x^2+a^2}\,dx, \quad \forall\,\xi\in\mathbb{R}. \tag{4.2.9}$$
To compute the integral in (4.2.9) we use the calculus of residues applied to the function
$$g(z) := \frac{e^{-iz\xi}}{z^2+a^2} \quad\text{for } z\in\mathbb{C}\setminus\{ia,-ia\}. \tag{4.2.10}$$
In this regard, denote the residue of $g$ at $z$ by $\mathrm{Res}_g(z)$. We separate the computation into two cases.

Case 1: Assume $\xi\in(-\infty,0)$. For each $R\in(a,\infty)$ consider the contour $\Gamma:=\Gamma_1\cup\Gamma_2$, where
$$\Gamma_1 := \{t : -R\le t\le R\}, \tag{4.2.11}$$
$$\Gamma_2 := \{Re^{i\theta} : 0\le\theta\le\pi\}. \tag{4.2.12}$$
The function $g$ has one pole, $z=ia$, enclosed by $\Gamma$, and the residue theorem gives
$$\int_{\Gamma_1} g(z)\,dz + \int_{\Gamma_2} g(z)\,dz = 2\pi i\,\mathrm{Res}_g(ia). \tag{4.2.13}$$
Let us analyze each term in (4.2.13). First,
$$\mathrm{Res}_g(ia) = \left.\frac{e^{-iz\xi}}{z+ia}\right|_{z=ia} = \frac{e^{a\xi}}{2ia} = \frac{e^{-a|\xi|}}{2ia}. \tag{4.2.14}$$
Second, given that for each fixed $\xi\in\mathbb{R}$ the function $\mathbb{R}\ni t\mapsto\frac{e^{-it\xi}}{t^2+a^2}\in\mathbb{C}$ is absolutely integrable, Lebesgue's dominated convergence theorem gives
$$\int_{\Gamma_1} g(z)\,dz = \int_{-R}^{R}\frac{e^{-it\xi}}{t^2+a^2}\,dt \xrightarrow[R\to\infty]{} \int_{\mathbb{R}}\frac{e^{-it\xi}}{t^2+a^2}\,dt = \widehat f(\xi). \tag{4.2.15}$$
Third,
$$\left|\int_{\Gamma_2} g(z)\,dz\right| = \left|\int_0^\pi \frac{e^{\xi R\sin\theta-i\xi R\cos\theta}}{R^2 e^{i2\theta}+a^2}\,iRe^{i\theta}\,d\theta\right| \le \frac{C}{R} \xrightarrow[R\to\infty]{} 0, \tag{4.2.16}$$
since $e^{\xi R\sin\theta}\le 1$ for $\theta\in[0,\pi]$ (recall that we are assuming $\xi<0$). Hence, letting $R\to\infty$ in (4.2.13) and making use of (4.2.14)–(4.2.16), we arrive at
$$\widehat f(\xi) = 2\pi i\,\frac{e^{-a|\xi|}}{2ia} = \frac{\pi}{a}\,e^{-a|\xi|} \quad\text{whenever } \xi\in(-\infty,0). \tag{4.2.17}$$

Case 2: Assume $\xi\in(0,\infty)$. This time we consider a contour that encloses the pole $z=-ia$. Specifically, we keep $\Gamma_1$ as before with $R\in(a,\infty)$ and set
$$\Gamma_3 := \{Re^{i\theta} : \pi\le\theta\le 2\pi\}. \tag{4.2.18}$$
Applying the residue theorem for the contour $\Gamma_1\cup\Gamma_3$ we then obtain
$$-\int_{\Gamma_1} g(z)\,dz + \int_{\Gamma_3} g(z)\,dz = 2\pi i\,\mathrm{Res}_g(-ia) = 2\pi i\left.\frac{e^{-iz\xi}}{z-ia}\right|_{z=-ia} = -2\pi i\,\frac{e^{-a|\xi|}}{2ia}. \tag{4.2.19}$$
A computation similar to that in (4.2.16) (note that $e^{\xi R\sin\theta}\le 1$ for $\theta\in[\pi,2\pi]$ and $\xi>0$) gives $\lim_{R\to\infty}\int_{\Gamma_3} g(z)\,dz = 0$, which when used in (4.2.19) yields
$$\widehat f(\xi) = \frac{\pi}{a}\,e^{-a|\xi|} \quad\text{for } \xi\in(0,\infty).$$

Case 3: Assume $\xi=0$. It is immediate from (4.2.9) that
$$\widehat f(0) = \int_{\mathbb{R}}\frac{1}{x^2+a^2}\,dx = \frac{\pi}{a}.$$
In summary, we have proved
$$\mathcal{F}\Big(\frac{1}{x^2+a^2}\Big)(\xi) = \frac{\pi}{a}\,e^{-a|\xi|} \quad\text{for every } \xi\in\mathbb{R}, \text{ as well as in } \mathcal{S}'(\mathbb{R}). \tag{4.2.20}$$
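Formula (4.2.20) lends itself to a direct numerical check. The snippet below is an illustrative aside (not part of the original argument; it assumes only NumPy): since the integrand in (4.2.9) is even, the transform reduces to a cosine integral, approximated here by the trapezoid rule on a large symmetric window.

```python
import numpy as np

# Sanity check of (4.2.20): F(1/(x^2+a^2))(xi) = (pi/a) * exp(-a|xi|).
# The integrand is even, so the transform is a cosine integral.
def hat_f(xi, a, L=4000.0, n=2_000_001):
    x = np.linspace(-L, L, n)
    y = np.cos(x * xi) / (x * x + a * a)
    dx = x[1] - x[0]
    return (y.sum() - 0.5 * (y[0] + y[-1])) * dx  # trapezoid rule

a = 1.0
for xi in (0.0, 0.7, 1.5):
    exact = (np.pi / a) * np.exp(-a * abs(xi))
    assert abs(hat_f(xi, a) - exact) < 5e-3
```

The window size $L$ controls the truncation error (for $\xi=0$ it behaves like $2/L$), which dictates the tolerance used above.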
In the next example we compute the Fourier transform of a tempered distribution induced by a slowly growing function that is not absolutely integrable in $\mathbb{R}^n$.
Example 4.24. Let $a\in(0,\infty)$ and consider the function $f(x):=e^{-ia|x|^2}$, $x\in\mathbb{R}^n$. It is not difficult to check that $f\in\mathcal{L}(\mathbb{R}^n)$; thus, by Exercise 4.6 the function $f$ may be regarded as a tempered distribution (identifying it with $u_f\in\mathcal{S}'(\mathbb{R}^n)$ associated with $f$ as in (4.1.4)). We claim that
$$\widehat{e^{-ia|x|^2}} = \Big(\frac{\pi}{ia}\Big)^{\frac{n}{2}}\,e^{i\frac{|\xi|^2}{4a}} \quad\text{in } \mathcal{S}'(\mathbb{R}^n). \tag{4.2.21}$$
To prove (4.2.21), fix $\varphi\in\mathcal{S}(\mathbb{R}^n)$ and, starting with the definition of the Fourier transform on $\mathcal{S}'(\mathbb{R}^n)$, write
$$\begin{aligned}
\big\langle\widehat{e^{-ia|x|^2}},\varphi\big\rangle &= \big\langle e^{-ia|x|^2},\widehat\varphi\,\big\rangle = \int_{\mathbb{R}^n} e^{-ia|\xi|^2}\widehat\varphi(\xi)\,d\xi = \lim_{\varepsilon\to 0^+}\int_{\mathbb{R}^n} e^{-(ia+\varepsilon)|\xi|^2}\widehat\varphi(\xi)\,d\xi\\[1mm]
&= \lim_{\varepsilon\to 0^+}\big\langle\widehat\varphi,\,e^{-(ia+\varepsilon)|\xi|^2}\big\rangle = \lim_{\varepsilon\to 0^+}\big\langle\varphi,\,\mathcal{F}\big(e^{-(ia+\varepsilon)|x|^2}\big)\big\rangle = \lim_{\varepsilon\to 0^+}\Big\langle\varphi,\,\big(\tfrac{\pi}{ia+\varepsilon}\big)^{\frac{n}{2}} e^{-\frac{|\xi|^2}{4(ia+\varepsilon)}}\Big\rangle\\[1mm]
&= \lim_{\varepsilon\to 0^+}\Big(\frac{\pi}{ia+\varepsilon}\Big)^{\frac{n}{2}}\int_{\mathbb{R}^n} e^{-\frac{|\xi|^2}{4(ia+\varepsilon)}}\varphi(\xi)\,d\xi = \Big(\frac{\pi}{ia}\Big)^{\frac{n}{2}}\int_{\mathbb{R}^n} e^{-\frac{|\xi|^2}{4ia}}\varphi(\xi)\,d\xi = \Big\langle\big(\tfrac{\pi}{ia}\big)^{\frac{n}{2}} e^{i\frac{|\xi|^2}{4a}},\,\varphi\Big\rangle.
\end{aligned} \tag{4.2.22}$$
Above, the first equality is based on (4.2.2), while the second equality is a consequence of the way in which the slowly growing function $e^{-ia|x|^2}$ is regarded as a tempered distribution. The third and second-to-last equalities are based on Lebesgue's dominated convergence theorem. In the fourth equality we interpret the Schwartz function $\widehat\varphi$ as being in $\mathcal{S}'(\mathbb{R}^n)$ and rely on the fact that
$e^{-(ia+\varepsilon)|x|^2}\in\mathcal{S}(\mathbb{R}^n)$ for each $\varepsilon>0$ (as noted in Example 3.21). The fifth equality uses Remark 4.22 and, once again, (4.2.2). The sixth equality is a consequence of formula (3.2.6), while the rest is routine.

Theorem 4.25. The following statements are true.
(a) The mapping $\mathcal{F}:\mathcal{S}'(\mathbb{R}^n)\to\mathcal{S}'(\mathbb{R}^n)$ defined by $\mathcal{F}(u):=\widehat u$, for every $u\in\mathcal{S}'(\mathbb{R}^n)$, is bijective and continuous, and its inverse is also continuous.
(b) $\widehat{D^\alpha u} = \xi^\alpha\,\widehat u$ for all $\alpha\in\mathbb{N}_0^n$ and all $u\in\mathcal{S}'(\mathbb{R}^n)$.
(c) $\widehat{x^\alpha u} = (-D)^\alpha\,\widehat u$ for all $\alpha\in\mathbb{N}_0^n$ and all $u\in\mathcal{S}'(\mathbb{R}^n)$.
(d) If $u\in\mathcal{S}'(\mathbb{R}^m)$ and $v\in\mathcal{S}'(\mathbb{R}^n)$, then $\widehat{u\otimes v} = \widehat u\otimes\widehat v$.

Proof. Recall from Theorem 3.24 that the map $\mathcal{F}:\mathcal{S}(\mathbb{R}^n)\to\mathcal{S}(\mathbb{R}^n)$ is linear, continuous, and bijective, and its inverse is also continuous. Since the transpose of this map [in the sense of (13.1.10)] is precisely the Fourier transform in the context of part (a) of the current statement, from Propositions 13.1 and 13.2 it follows that $\mathcal{F}:\mathcal{S}'(\mathbb{R}^n)\to\mathcal{S}'(\mathbb{R}^n)$ is also well-defined, linear, continuous, and bijective, and its inverse is also continuous.

Consider next $u\in\mathcal{S}'(\mathbb{R}^n)$ and $\alpha\in\mathbb{N}_0^n$. Then for every $\varphi\in\mathcal{S}(\mathbb{R}^n)$, using part (c) in Theorem 4.13, part (a) in Theorem 3.20, (4.2.2), and part (b) in Theorem 4.13, we may write
$$\langle\widehat{D^\alpha u},\varphi\rangle = \langle D^\alpha u,\widehat\varphi\,\rangle = (-1)^{|\alpha|}\langle u, D^\alpha\widehat\varphi\,\rangle = \langle u,\widehat{\xi^\alpha\varphi}\,\rangle = \langle\widehat u,\xi^\alpha\varphi\rangle = \langle\xi^\alpha\widehat u,\varphi\rangle, \tag{4.2.23}$$
and
$$\langle\widehat{x^\alpha u},\varphi\rangle = \langle x^\alpha u,\widehat\varphi\,\rangle = \langle u, x^\alpha\widehat\varphi\,\rangle = \langle u,\widehat{D^\alpha\varphi}\,\rangle = \langle\widehat u, D^\alpha\varphi\rangle = \langle(-D)^\alpha\widehat u,\varphi\rangle. \tag{4.2.24}$$
This proves the claims in (b) and (c).

We are left with the proof of the statement in (d). Fix $u\in\mathcal{S}'(\mathbb{R}^m)$ and $v\in\mathcal{S}'(\mathbb{R}^n)$. Then based on (d) in Theorem 4.13 we have $u\otimes v\in\mathcal{S}'(\mathbb{R}^m\times\mathbb{R}^n)$. Hence, starting with (4.2.2), then using (d) in Theorem 3.20 and (4.1.13), we may write
$$\langle\widehat{u\otimes v},\varphi\otimes\psi\rangle = \langle u\otimes v,\widehat{\varphi\otimes\psi}\,\rangle = \langle u\otimes v,\widehat\varphi\otimes\widehat\psi\,\rangle = \langle u,\widehat\varphi\,\rangle\langle v,\widehat\psi\,\rangle = \langle\widehat u,\varphi\rangle\langle\widehat v,\psi\rangle = \langle\widehat u\otimes\widehat v,\varphi\otimes\psi\rangle, \tag{4.2.25}$$
for all $\varphi\in\mathcal{S}(\mathbb{R}^m)$ and all $\psi\in\mathcal{S}(\mathbb{R}^n)$. Consequently, $\widehat{u\otimes v}\big|_{C_0^\infty(\mathbb{R}^m)\otimes C_0^\infty(\mathbb{R}^n)} = \widehat u\otimes\widehat v\big|_{C_0^\infty(\mathbb{R}^m)\otimes C_0^\infty(\mathbb{R}^n)}$, which in combination with Proposition 3.15 proves the statement in (d). $\square$
Exercise 4.26. Recall (4.1.4) and (4.1.8). Prove that $\widehat{u_f} = u_{\widehat f}$ in $\mathcal{S}'(\mathbb{R}^n)$ for each $f\in L^1(\mathbb{R}^n)$ and each $f\in L^2(\mathbb{R}^n)$.
Hint: Use (3.2.27) and Exercise 3.29.

Lemma 4.27 (Riemann–Lebesgue Lemma). If $f\in L^1(\mathbb{R}^n)$, then the tempered distribution $u_f$ satisfies $\widehat{u_f}\in C^0(\mathbb{R}^n)$.

Proof. This is a consequence of Exercise 4.26 and (3.1.3). $\square$

Exercise 4.28. Prove that for every $f,g\in L^2(\mathbb{R}^n)$ one has $\widehat{f*g} = \widehat f\cdot\widehat g$ in $\mathcal{S}'(\mathbb{R}^n)$.
Hint: Use a density argument based on the formula from part (c) of Proposition 3.27, Exercise 4.11, Young's inequality (cf. Theorem 13.13) in the particular case when $p=q=2$, and the fact that the Fourier transform is continuous both on $L^2(\mathbb{R}^n)$ and on $\mathcal{S}'(\mathbb{R}^n)$.

Example 4.29. Let $a\in(0,\infty)$. We are interested in computing the Fourier transform of the bounded function $\frac{x}{x^2+a^2}$, viewed as a distribution in $\mathcal{S}'(\mathbb{R})$. In this vein, consider the auxiliary function $f(x):=\frac{1}{x^2+a^2}$ for $x\in\mathbb{R}$, and recall from (4.2.20) that $\widehat f(\xi)=\frac{\pi}{a}e^{-a|\xi|}$ in $\mathcal{S}'(\mathbb{R})$. This formula, part (c) in Theorem 4.25, and (2.4.10) allow us to write
$$\mathcal{F}\Big(\frac{x}{x^2+a^2}\Big)(\xi) = \mathcal{F}\big(xf\big)(\xi) = i\,\frac{d}{d\xi}\widehat f(\xi) = i\,\frac{d}{d\xi}\Big(\frac{\pi}{a}e^{-a|\xi|}\Big) = \frac{\pi i}{a}\big(-a\,e^{-a\xi}H(\xi) + a\,e^{a\xi}H(-\xi)\big) = -\pi i\,(\mathrm{sgn}\,\xi)\,e^{-a|\xi|} \quad\text{in } \mathcal{S}'(\mathbb{R}). \tag{4.2.26}$$
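Formula (4.2.26) can be probed numerically as well. Since the defining integral is only conditionally convergent, the check below (an illustrative aside assuming only NumPy) uses a large symmetric window, which captures the oscillatory cancellation; by parity, only the imaginary part of the transform survives.

```python
import numpy as np

# Sanity check of (4.2.26): for f(x) = x/(x^2+a^2),
#   Im (F f)(xi) = -\int x*sin(x*xi)/(x^2+a^2) dx = -pi*sgn(xi)*exp(-a|xi|),
# while the real part vanishes by parity.
def im_hat(xi, a, L=4000.0, n=2_000_001):
    x = np.linspace(-L, L, n)
    y = x * np.sin(x * xi) / (x * x + a * a)
    dx = x[1] - x[0]
    return -((y.sum() - 0.5 * (y[0] + y[-1])) * dx)  # trapezoid rule

a = 1.0
for xi in (1.0, -2.0):
    exact = -np.pi * np.sign(xi) * np.exp(-a * abs(xi))
    assert abs(im_hat(xi, a) - exact) < 5e-3
```

The truncation error here decays only like $1/(L|\xi|)$, which is why the tolerance is looser than for absolutely convergent integrals.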
Proposition 4.30. For each $u\in\mathcal{S}'(\mathbb{R}^n)$ define the mapping
$$u^\vee : \mathcal{S}(\mathbb{R}^n)\to\mathbb{C}, \qquad u^\vee(\varphi) := \langle u,\varphi^\vee\rangle, \quad \forall\,\varphi\in\mathcal{S}(\mathbb{R}^n). \tag{4.2.27}$$
Then $u^\vee\in\mathcal{S}'(\mathbb{R}^n)$ and
$$\widehat{\widehat u\,} = (2\pi)^n\,u^\vee. \tag{4.2.28}$$

Proof. Fix $u\in\mathcal{S}'(\mathbb{R}^n)$. Then (4.2.27) is simply the composition of $u$ with the mapping from Exercise 3.25. Since both are linear and continuous, it follows that $u^\vee\in\mathcal{S}'(\mathbb{R}^n)$. Formula (4.2.28) then follows by combining (4.2.2) with the identity in part (3) of Exercise 3.26. $\square$
Exercise 4.31. Show that
$$\widehat 1 = (2\pi)^n\,\delta \quad\text{in } \mathcal{S}'(\mathbb{R}^n). \tag{4.2.29}$$

Exercise 4.32. Prove that for any tempered distribution $u$ the following equivalence holds:
$$u^\vee = -u \ \text{ in } \mathcal{S}'(\mathbb{R}^n) \iff (\widehat u\,)^\vee = -\widehat u \ \text{ in } \mathcal{S}'(\mathbb{R}^n). \tag{4.2.30}$$
One suggestive way of summarizing (4.2.30) is to say that a tempered distribution is odd if and only if its Fourier transform is odd.

Theorem 4.33. The following statements are true.
(a) If $a\in\mathcal{S}(\mathbb{R}^n)$ and $u\in\mathcal{S}'(\mathbb{R}^n)$, then $\widehat{u*a} = \widehat a\,\widehat u$ in $\mathcal{S}'(\mathbb{R}^n)$, where $\widehat a$ is viewed as an element from $\mathcal{S}(\mathbb{R}^n)$.
(b) If $u\in\mathcal{E}'(\mathbb{R}^n)$ then the tempered distribution $\widehat u$ is of function type, given by the formula $\widehat u(\xi) = \big\langle u(x), e^{-ix\cdot\xi}\big\rangle$ for every $\xi\in\mathbb{R}^n$, and $\widehat u\in\mathcal{L}(\mathbb{R}^n)$.
(c) If $u\in\mathcal{E}'(\mathbb{R}^n)$ and $v\in\mathcal{S}'(\mathbb{R}^n)$, then $\widehat{u*v} = \widehat u\,\widehat v$, where $\widehat u$ is viewed as an element in $\mathcal{L}(\mathbb{R}^n)$.

Proof. By (d) in Theorem 4.18 we have $u*a\in\mathcal{S}'(\mathbb{R}^n)$, hence $\widehat{u*a}$ exists and belongs to $\mathcal{S}'(\mathbb{R}^n)$. Also, since $\widehat a\in\mathcal{S}(\mathbb{R}^n)$, by (3.1.23) and (b) in Theorem 4.13 it follows that $\widehat a\,\widehat u\in\mathcal{S}'(\mathbb{R}^n)$. Then we may write
$$\langle\widehat{u*a},\varphi\rangle = \langle u*a,\widehat\varphi\,\rangle = \langle u, a^\vee*\widehat\varphi\,\rangle = (2\pi)^{-n}\big\langle u,\widehat{\widehat a\,}*\widehat\varphi\,\big\rangle = \big\langle u,\widehat{\widehat a\,\varphi}\,\big\rangle = \langle\widehat u,\widehat a\,\varphi\rangle = \langle\widehat a\,\widehat u,\varphi\rangle, \quad \forall\,\varphi\in\mathcal{S}(\mathbb{R}^n). \tag{4.2.31}$$
For the second equality in (4.2.31) we used (4.1.34), for the third we used part (3) in Exercise 3.26, while for the fourth we used (d) in Proposition 3.27. This proves the statement in (a).

Moving on to the proof of (b), fix some $u\in\mathcal{E}'(\mathbb{R}^n)$ and introduce the function $f(\xi):=\big\langle u(x), e^{-ix\cdot\xi}\big\rangle$ for every $\xi\in\mathbb{R}^n$. From Proposition 2.82 it follows that $f\in C^\infty(\mathbb{R}^n)$ and
$$\partial^\alpha f(\xi) = \big\langle u(x),\,\partial_\xi^\alpha\big[e^{-ix\cdot\xi}\big]\big\rangle, \quad \forall\,\xi\in\mathbb{R}^n, \ \text{ for every } \alpha\in\mathbb{N}_0^n. \tag{4.2.32}$$
In addition, since $u\in\mathcal{E}'(\mathbb{R}^n)$, there exist a compact subset $K$ of $\mathbb{R}^n$, along with a constant $C\in(0,\infty)$ and a number $k\in\mathbb{N}_0$, such that $u$ satisfies (2.6.6). Combining all these facts, for each $\alpha\in\mathbb{N}_0^n$ we may estimate
$$\big|\partial^\alpha f(\xi)\big| = \big|\big\langle u(x),\,\partial_\xi^\alpha e^{-ix\cdot\xi}\big\rangle\big| \le C\sup_{x\in K,\,|\beta|\le k}\big|\partial_x^\beta\partial_\xi^\alpha e^{-ix\cdot\xi}\big| \le C\sup_{x\in K,\,|\beta|\le k}\big|x^\alpha\xi^\beta\big| \le C(1+|\xi|)^k. \tag{4.2.33}$$
From (4.2.33) and the fact that $f$ is smooth we deduce that $f\in\mathcal{L}(\mathbb{R}^n)$. Hence, if we now recall Exercise 4.6, it follows that $f\in\mathcal{S}'(\mathbb{R}^n)$. We are left with proving that $\widehat u = f$ as tempered distributions. To this end, fix $\theta\in C_0^\infty(\mathbb{R}^n)$ such that $\theta=1$ in a neighborhood of $\mathrm{supp}\,u$. Then, for every $\varphi\in C_0^\infty(\mathbb{R}^n)$ one has
$$\langle\widehat u,\varphi\rangle = \langle u,\widehat\varphi\,\rangle = \big\langle u(\xi),\,\theta(\xi)\widehat\varphi(\xi)\big\rangle = \Big\langle u(\xi),\,\int_{\mathbb{R}^n} e^{-ix\cdot\xi}\theta(\xi)\varphi(x)\,dx\Big\rangle. \tag{4.2.34}$$
At this point recall Remark 2.79 (note that the function $e^{-ix\cdot\xi}\theta(\xi)\varphi(x)$ belongs to $C_0^\infty(\mathbb{R}^n\times\mathbb{R}^n)$ and one may take $v=1$ in (2.7.45)), which allows one to rewrite the last term in (4.2.34) and conclude that
$$\langle\widehat u,\varphi\rangle = \int_{\mathbb{R}^n}\big\langle u(\xi), e^{-ix\cdot\xi}\big\rangle\varphi(x)\,dx = \langle f,\varphi\rangle, \quad \forall\,\varphi\in C_0^\infty(\mathbb{R}^n). \tag{4.2.35}$$
Hence the tempered distributions $\widehat u$ and $f$ coincide on $C_0^\infty(\mathbb{R}^n)$. By (4.1.25), we therefore have $\widehat u = f$ in $\mathcal{S}'(\mathbb{R}^n)$, completing the proof of the statement in (b).

Regarding the formula in part (c), while informally this is similar to the formula proved in part (a), the computation in (4.2.31) through which the latter has been deduced no longer works in the current case, as various ingredients used to justify it break down (given that $\widehat u$ is now only known to belong to $\mathcal{L}(\mathbb{R}^n)$ and not necessarily to $\mathcal{S}(\mathbb{R}^n)$). This being said, we may employ what has been established in part (a) together with a limiting argument to get the desired result. Specifically, assume that $u\in\mathcal{E}'(\mathbb{R}^n)$ and $v\in\mathcal{S}'(\mathbb{R}^n)$. Then $u*v\in\mathcal{S}'(\mathbb{R}^n)$ (recall statement (a) in Theorem 4.18), hence $\widehat{u*v}$ is well defined in $\mathcal{S}'(\mathbb{R}^n)$. Also, from (b) we have $\widehat u\in\mathcal{L}(\mathbb{R}^n)$, thus $\widehat u\,\widehat v$ is meaningful and $\widehat u\,\widehat v\in\mathcal{S}'(\mathbb{R}^n)$. To proceed, recall the sequence $\{\phi_j\}_{j\in\mathbb{N}}$ from Example 1.11. In particular, (1.3.8) holds and $\phi_j\xrightarrow[j\to\infty]{\ \mathcal{D}'(\mathbb{R}^n)\ }\delta$ (see Example 2.20). Thus, the statement from part (b) in Theorem 4.18 applies and gives that $u*\phi_j\xrightarrow[j\to\infty]{\ \mathcal{S}'(\mathbb{R}^n)\ } u*\delta = u$. Moreover, since $u\in\mathcal{E}'(\mathbb{R}^n)$, by (2.8.17) it follows that $u*\phi_j\in\mathcal{E}'(\mathbb{R}^n)$ and $\mathrm{supp}\,(u*\phi_j)\subseteq\mathrm{supp}\,u + B(0,1)$ for every $j\in\mathbb{N}$ (by part (1) in Remark 2.91). Hence, one may apply (b) in Theorem 4.18 to further conclude that $(u*\phi_j)*v\xrightarrow[j\to\infty]{\ \mathcal{S}'(\mathbb{R}^n)\ } u*v$. Given (a) in Theorem 4.25, the latter implies (recall Fact 4.10)
$$\lim_{j\to\infty}\big\langle\mathcal{F}\big[(u*\phi_j)*v\big],\varphi\big\rangle = \langle\widehat{u*v},\varphi\rangle, \quad \forall\,\varphi\in\mathcal{S}(\mathbb{R}^n). \tag{4.2.36}$$
Note that (2.8.44) gives $u*\phi_j\in C_0^\infty(\mathbb{R}^n)$ for every $j\in\mathbb{N}$. Hence, based on what we have proved already in part (a), we obtain (keeping in mind that $\widehat u\in\mathcal{L}(\mathbb{R}^n)$)
$$\mathcal{F}\big[(u*\phi_j)*v\big] = \widehat{u*\phi_j}\,\widehat v = \widehat{\phi_j}\,\widehat u\,\widehat v = \widehat\phi(\cdot/j)\,\widehat u\,\widehat v \quad\text{in } \mathcal{S}'(\mathbb{R}^n), \quad \forall\,j\in\mathbb{N}. \tag{4.2.37}$$
In concert, (4.2.36) and (4.2.37) yield that for each $\varphi\in\mathcal{S}(\mathbb{R}^n)$
$$\langle\widehat{u*v},\varphi\rangle = \lim_{j\to\infty}\big\langle\widehat u\,\widehat v,\,\widehat\phi(\cdot/j)\,\varphi\big\rangle = \big\langle\widehat u\,\widehat v,\,\widehat\phi(0)\,\varphi\big\rangle = \langle\widehat u\,\widehat v,\varphi\rangle, \tag{4.2.38}$$
since $\widehat\phi(\cdot/j)\,\varphi\xrightarrow[j\to\infty]{\ \mathcal{S}(\mathbb{R}^n)\ }\widehat\phi(0)\,\varphi$ by Exercise 3.14, and $\widehat\phi(0)=1$. This proves the statement in (c) and finishes the proof of the theorem. $\square$

Example 4.34. If $a\in(0,\infty)$ then $\chi_{[-a,a]}$, the characteristic function of the interval $[-a,a]$, belongs to $\mathcal{E}'(\mathbb{R})$ and by statement (b) in Theorem 4.33 we have
$$\widehat{\chi_{[-a,a]}}(\xi) = \int_{-a}^{a} e^{-ix\xi}\,dx = \begin{cases}\dfrac{2\sin(a\xi)}{\xi} & \text{for } \xi\in\mathbb{R}\setminus\{0\},\\[1ex] 2a & \text{for } \xi=0.\end{cases} \tag{4.2.39}$$
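The closed form in (4.2.39) is easy to confirm by direct quadrature of the defining integral; the following snippet is an illustrative aside (it assumes only NumPy).

```python
import numpy as np

# Check of (4.2.39): \int_{-a}^{a} e^{-i*x*xi} dx = 2*sin(a*xi)/xi for xi != 0.
def hat_chi(xi, a, n=200_001):
    x = np.linspace(-a, a, n)
    y = np.exp(-1j * x * xi)
    dx = x[1] - x[0]
    return (y.sum() - 0.5 * (y[0] + y[-1])) * dx  # trapezoid rule

a = 1.5
for xi in (0.3, 2.0, -4.0):
    exact = 2 * np.sin(a * xi) / xi
    assert abs(hat_chi(xi, a) - exact) < 1e-8
```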
Exercise 4.35. Use Exercise 2.65 and statement (b) in Theorem 4.33 to prove that if $u\in\mathcal{S}'(\mathbb{R}^n)$ is such that $\mathrm{supp}\,\widehat u\subseteq\{a\}$ for some $a\in\mathbb{R}^n$, then there exist $k\in\mathbb{N}_0$ and constants $c_\alpha\in\mathbb{C}$, for $\alpha\in\mathbb{N}_0^n$ with $|\alpha|\le k$, such that
$$u = \sum_{|\alpha|\le k} c_\alpha\,x^\alpha e^{ix\cdot a} \quad\text{in } \mathcal{S}'(\mathbb{R}^n). \tag{4.2.40}$$
In particular, if $a=0$ then $u$ is a polynomial in $\mathbb{R}^n$.

Example 4.36. Let $a\in\mathbb{R}$ and consider the function $f(x):=\sin(ax)$ for $x\in\mathbb{R}$. Then $f\in C^\infty(\mathbb{R})\cap L^\infty(\mathbb{R})$, hence $f\in\mathcal{S}'(\mathbb{R})$. Suppose $a\neq 0$. We shall compute the Fourier transform of $f$ in $\mathcal{S}'(\mathbb{R})$ by making use of a technique relying on the ordinary differential equation that $f$ satisfies. More precisely, since $f''+a^2 f=0$ in $\mathbb{R}$, the same equation holds in $\mathcal{S}'(\mathbb{R})$, thus by (a) and (b) in Theorem 4.25 we have $(\xi^2-a^2)\widehat f = 0$ in $\mathcal{S}'(\mathbb{R})$. By Example 2.72 this implies $\widehat f = C_1\delta_a + C_2\delta_{-a}$ in $\mathcal{S}'(\mathbb{R})$ for some $C_1,C_2\in\mathbb{C}$. Applying the Fourier transform again to the last equation, using (4.2.28) and part (b) from Theorem 4.33, we obtain
$$2\pi\sin(-ax) = \widehat{\widehat{\sin(ax)}} = C_1\widehat{\delta_a} + C_2\widehat{\delta_{-a}} = C_1\big\langle\delta_a(\xi), e^{-ix\xi}\big\rangle + C_2\big\langle\delta_{-a}(\xi), e^{-ix\xi}\big\rangle$$
$$= C_1 e^{-iax} + C_2 e^{iax} = (C_1+C_2)\cos(ax) + i(C_1-C_2)\sin(-ax). \tag{4.2.41}$$
The resulting identity in (4.2.41) forces $C_1=-i\pi$ and $C_2=i\pi$. Plugging these constants back into the expression for $\widehat f$ yields
$$\widehat{\sin(ax)} = -i\pi\,\delta_a + i\pi\,\delta_{-a} \quad\text{in } \mathcal{S}'(\mathbb{R}). \tag{4.2.42}$$
It is immediate that (4.2.42) also holds if $a=0$.

Example 4.37. Let $a,b\in\mathbb{R}$. Then the function $g(x):=\sin(ax)\sin(bx)$ for $x\in\mathbb{R}$ satisfies $g\in L^\infty(\mathbb{R})$, thus $g\in\mathcal{S}'(\mathbb{R})$ (cf. (4.1.8)). Applying the Fourier transform to the identity in (4.2.42) and using (4.2.28) we obtain
$$\sin(ax) = \frac{i}{2}\,\widehat{\delta_a} - \frac{i}{2}\,\widehat{\delta_{-a}} \quad\text{in } \mathcal{S}'(\mathbb{R}). \tag{4.2.43}$$
Also, making use of (4.2.43), part (c) in Theorem 4.33, and then Exercise 2.90, we may write
$$\sin(ax)\sin(bx) = \Big(\frac{i}{2}\,\widehat{\delta_a}-\frac{i}{2}\,\widehat{\delta_{-a}}\Big)\Big(\frac{i}{2}\,\widehat{\delta_b}-\frac{i}{2}\,\widehat{\delta_{-b}}\Big) = -\frac{1}{4}\,\mathcal{F}\big(\delta_{a+b}-\delta_{a-b}-\delta_{b-a}+\delta_{-a-b}\big) \quad\text{in } \mathcal{S}'(\mathbb{R}). \tag{4.2.44}$$
Hence, another application of the Fourier transform gives (relying on (4.2.28))
$$\mathcal{F}\big(\sin(ax)\sin(bx)\big) = -\frac{\pi}{2}\big(\delta_{a+b}-\delta_{a-b}-\delta_{b-a}+\delta_{-a-b}\big) \quad\text{in } \mathcal{S}'(\mathbb{R}). \tag{4.2.45}$$
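Distributional identities such as (4.2.45) can be tested numerically by pairing both sides with a Schwartz function. Taking $\varphi(\xi)=e^{-\xi^2/2}$, whose Fourier transform is $\sqrt{2\pi}\,e^{-x^2/2}$, the left-hand side pairs as $\langle\mathcal{F}(\sin(ax)\sin(bx)),\varphi\rangle=\int\sin(ax)\sin(bx)\,\widehat\varphi(x)\,dx$, while the right-hand side simply evaluates the Dirac masses at $\varphi$. The snippet below is an illustrative aside assuming only NumPy.

```python
import numpy as np

def phi(t):
    return np.exp(-t * t / 2)

def lhs(a, b, L=20.0, n=400_001):
    # <F(sin(ax)sin(bx)), phi> = \int sin(ax)sin(bx) * sqrt(2*pi)*exp(-x^2/2) dx
    x = np.linspace(-L, L, n)
    y = np.sin(a * x) * np.sin(b * x) * np.sqrt(2 * np.pi) * phi(x)
    dx = x[1] - x[0]
    return (y.sum() - 0.5 * (y[0] + y[-1])) * dx  # trapezoid rule

def rhs(a, b):
    # pairing of -(pi/2)*(delta_{a+b}-delta_{a-b}-delta_{b-a}+delta_{-a-b}) with phi
    return -(np.pi / 2) * (phi(a + b) - phi(a - b) - phi(b - a) + phi(-a - b))

for a, b in ((1.0, 2.0), (0.5, 0.5), (3.0, -1.0)):
    assert abs(lhs(a, b) - rhs(a, b)) < 1e-8
```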
Example 4.38. Let $a\in(0,\infty)$ and consider the function $f(x):=e^{ia|x|^2}$ for $x\in\mathbb{R}^n$. Then $f\in L^\infty(\mathbb{R}^n)$; thus, $f\in\mathcal{S}'(\mathbb{R}^n)$ by (4.1.8). The goal is to compute the Fourier transform of $f$ in $\mathcal{S}'(\mathbb{R}^n)$. The starting point is the observation that
$$f(x) = e^{iax_1^2}\otimes e^{iax_2^2}\otimes\cdots\otimes e^{iax_n^2} \quad \forall\,x=(x_1,\dots,x_n)\in\mathbb{R}^n. \tag{4.2.46}$$
Invoking part (d) in Theorem 4.25 then reduces matters to computing the Fourier transform of $f$ in the case $n=1$.

Assume that $n=1$, in which case $f(x)=e^{iax^2}$ for $x\in\mathbb{R}$. Then $f$ satisfies the differential equation $f'-2iax\,f=0$ in $\mathcal{S}'(\mathbb{R})$. Taking the Fourier transform in $\mathcal{S}'(\mathbb{R})$ and using the formulas from (b)–(c) in Theorem 4.25 we obtain
$$(\widehat f\,)' + \frac{i}{2a}\,\xi\,\widehat f = 0 \quad\text{in } \mathcal{S}'(\mathbb{R}). \tag{4.2.47}$$
The format of (4.2.47) suggests that we consider the ordinary differential equation $y'+\frac{i}{2a}\,\xi\,y=0$ in $\mathbb{R}$, in the unknown $y=y(\xi)$. One particular solution of this o.d.e. is $y(\xi)=e^{-i\frac{\xi^2}{4a}}$. Note that both $y$ and $1/y$ belong to $\mathcal{L}(\mathbb{R})$. In particular, it makes sense to consider the tempered distribution $u:=(1/y)\widehat f$, whose derivative satisfies
$$u' = (1/y)'\,\widehat f + (1/y)(\widehat f\,)' = -(y'/y^2)\widehat f + (1/y)(\widehat f\,)' = \frac{i}{2a}\,\xi\,(1/y)\widehat f - \frac{i}{2a}\,\xi\,(1/y)\widehat f = 0 \quad\text{in } \mathcal{S}'(\mathbb{R}). \tag{4.2.48}$$
From this and part (2) in Proposition 2.40 we then deduce that there exists $c\in\mathbb{C}$ such that $u=c$ in $\mathcal{S}'(\mathbb{R})$ which, given the significance of $u$ and $y$, forces $\widehat f = c\,e^{-i\frac{\xi^2}{4a}}$ in $\mathcal{S}'(\mathbb{R})$ for some $c\in\mathbb{C}$. We are therefore left with determining the constant $c$. This may be done by choosing a suitable Schwartz function and then computing the action of $\widehat f$ on it. Take $\varphi(\xi):=e^{-\frac{\xi^2}{4a}}$ for $\xi\in\mathbb{R}$. Since $\widehat\varphi(x)=\sqrt{4a\pi}\,e^{-ax^2}$ for $x\in\mathbb{R}$ (recall Example 3.21), we may write
$$c\int_{\mathbb{R}} e^{-i\frac{\xi^2}{4a}}\,e^{-\frac{\xi^2}{4a}}\,d\xi = \langle\widehat f,\varphi\rangle = \langle f,\widehat\varphi\,\rangle = \sqrt{4a\pi}\int_{\mathbb{R}} e^{iax^2-ax^2}\,dx. \tag{4.2.49}$$
The two integrals in (4.2.49) may be computed using formula (3.2.7) with $\lambda:=\frac{1+i}{4a}$ and $\lambda:=a(1-i)$, respectively. After some routine algebra (i.e., computing these integrals, replacing their values in (4.2.49), then solving for $c$), we find $c=\sqrt{\frac{\pi}{a}}\,e^{i\frac{\pi}{4}}$. In summary, this analysis proves that
$$\widehat{e^{ia|x|^2}}(\xi) = \Big(\frac{\pi}{a}\Big)^{\frac{n}{2}}\,e^{i\frac{n\pi}{4}}\,e^{-i\frac{|\xi|^2}{4a}} \quad\text{in } \mathcal{S}'(\mathbb{R}^n). \tag{4.2.50}$$
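The value of the constant $c$ found above can be confirmed numerically: both integrals in (4.2.49) are absolutely convergent, so (taking $a=1$) they can be computed by quadrature and solved for $c$. The snippet below is an illustrative aside assuming only NumPy.

```python
import numpy as np

# For a = 1, (4.2.49) reads c*I1 = sqrt(4*pi)*I2, and c should equal
# sqrt(pi)*exp(i*pi/4).
def trap(y, x):
    dx = x[1] - x[0]
    return (y.sum() - 0.5 * (y[0] + y[-1])) * dx

x = np.linspace(-15.0, 15.0, 600_001)
I1 = trap(np.exp(-1j * x**2 / 4) * np.exp(-x**2 / 4), x)  # integral multiplying c
I2 = trap(np.exp(1j * x**2) * np.exp(-x**2), x)           # integral on the right
c = np.sqrt(4 * np.pi) * I2 / I1
expected = np.sqrt(np.pi) * np.exp(1j * np.pi / 4)
assert abs(c - expected) < 1e-6
```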
Partial Fourier Transforms

In the last part of this section we define partial Fourier transforms. To set the stage, fix $m,n\in\mathbb{N}$. We shall denote by $x,\xi$ generic variables in $\mathbb{R}^n$, and by $y,\eta$ generic variables in $\mathbb{R}^m$. The partial Fourier transform with respect to the variable $x$ of a function $\varphi\in\mathcal{S}(\mathbb{R}^{n+m})$, denoted by $\widehat\varphi_x$ or $\mathcal{F}_x\varphi$, is defined by
$$\widehat\varphi_x(\xi,y) := \int_{\mathbb{R}^n} e^{-ix\cdot\xi}\,\varphi(x,y)\,dx, \quad \forall\,\xi\in\mathbb{R}^n,\ \forall\,y\in\mathbb{R}^m. \tag{4.2.51}$$
Reasoning in a similar manner as in the proof of Theorem 3.24, it follows that
$$\mathcal{F}_x : \mathcal{S}(\mathbb{R}^{n+m})\to\mathcal{S}(\mathbb{R}^{n+m}) \ \text{ is bijective and continuous, with continuous inverse,} \tag{4.2.52}$$
and its inverse is given by
$$(\mathcal{F}_x^{-1}\psi)(x,\eta) := (2\pi)^{-n}\int_{\mathbb{R}^n} e^{ix\cdot\xi}\,\psi(\xi,\eta)\,d\xi, \quad \text{for all } (x,\eta)\in\mathbb{R}^{n+m} \text{ and all } \psi\in\mathcal{S}(\mathbb{R}^{n+m}). \tag{4.2.53}$$
Furthermore, analogously to Proposition 4.20, the partial Fourier transform $\mathcal{F}_x$ extends to $\mathcal{S}'(\mathbb{R}^{n+m})$ as a continuous map by setting
$$\langle\mathcal{F}_x u,\varphi\rangle := \langle u,\mathcal{F}_x\varphi\rangle, \quad \forall\,u\in\mathcal{S}'(\mathbb{R}^{n+m}),\ \forall\,\varphi\in\mathcal{S}(\mathbb{R}^{n+m}), \tag{4.2.54}$$
and this extension is an isomorphism from $\mathcal{S}'(\mathbb{R}^{n+m})$ into itself, with continuous inverse denoted by $\mathcal{F}_x^{-1}$. Moreover, the action of $\mathcal{F}_x$ enjoys properties analogous to those established for the "full" Fourier transform in Theorem 3.20, Exercise 3.26, Theorem 4.25, and Proposition 4.30.

Exercise 4.39. Let $\widehat{\,\cdot\,}$ denote the full Fourier transform in $\mathbb{R}^{n+m}$. Prove that for each function $\varphi\in\mathcal{S}(\mathbb{R}^n\times\mathbb{R}^m)$ we have
$$\mathcal{F}_x\big[\mathcal{F}_y\varphi(x,y)\big](\xi,\eta) = \mathcal{F}_y\big[\mathcal{F}_x\varphi(x,y)\big](\xi,\eta) = \widehat\varphi(\xi,\eta) \quad \forall\,(\xi,\eta)\in\mathbb{R}^n\times\mathbb{R}^m. \tag{4.2.55}$$
Also, show that
$$\mathcal{F}_x\mathcal{F}_y u = \mathcal{F}_y\mathcal{F}_x u = \widehat u \ \text{ in } \mathcal{S}'(\mathbb{R}^{n+m}), \quad \forall\,u\in\mathcal{S}'(\mathbb{R}^n\times\mathbb{R}^m). \tag{4.2.56}$$

Exercise 4.40. Prove that
$$\mathcal{F}_x\big[\delta(x)\otimes\delta(y)\big] = 1(\xi)\otimes\delta(y) \quad\text{in } \mathcal{S}'(\mathbb{R}^{n+m}). \tag{4.2.57}$$

4.3 Homogeneous Distributions
Let $A\in M_{n\times n}(\mathbb{R})$ be such that $\det A\neq 0$. Then for every $f\in L^1(\mathbb{R}^n)$ one has $f\circ A\in L^1(\mathbb{R}^n)$; thus, $f,\,f\circ A\in\mathcal{S}'(\mathbb{R}^n)$ by (4.1.8). Moreover,
$$\langle f\circ A,\varphi\rangle = \int_{\mathbb{R}^n} f(Ax)\varphi(x)\,dx = |\det A|^{-1}\int_{\mathbb{R}^n} f(y)\varphi(A^{-1}y)\,dy = |\det A|^{-1}\big\langle f,\varphi\circ A^{-1}\big\rangle, \quad \forall\,\varphi\in\mathcal{S}(\mathbb{R}^n). \tag{4.3.1}$$
This and Exercise 3.16 justify extending the operator of composition with linear maps to $\mathcal{S}'(\mathbb{R}^n)$ as follows.
Proposition 4.41. Let $A\in M_{n\times n}(\mathbb{R})$ be such that $\det A\neq 0$. For each $u\in\mathcal{S}'(\mathbb{R}^n)$, define the mapping $u\circ A:\mathcal{S}(\mathbb{R}^n)\to\mathbb{C}$ by setting
$$(u\circ A)(\varphi) := |\det A|^{-1}\big\langle u,\varphi\circ A^{-1}\big\rangle, \quad \forall\,\varphi\in\mathcal{S}(\mathbb{R}^n). \tag{4.3.2}$$
Then $u\circ A\in\mathcal{S}'(\mathbb{R}^n)$.

Proof. This is an immediate consequence of (3.1.40). $\square$

Exercise 4.42. Let $A,B\in M_{n\times n}(\mathbb{R})$ be such that $\det A\neq 0$ and $\det B\neq 0$. Then the following identities hold in $\mathcal{S}'(\mathbb{R}^n)$:
(1) $(u\circ A)\circ B = u\circ(AB)$ for every $u\in\mathcal{S}'(\mathbb{R}^n)$.
(2) $(\lambda u)\circ A = \lambda(u\circ A)$ for every $u\in\mathcal{S}'(\mathbb{R}^n)$ and every $\lambda\in\mathbb{R}$.
(3) $(u+v)\circ A = u\circ A + v\circ A$ for every $u,v\in\mathcal{S}'(\mathbb{R}^n)$.

In the next proposition we study how the Fourier transform interacts with the operator of composition by an invertible matrix. Recall that the transpose of a matrix $A$ is denoted by $A^\top$.

Proposition 4.43. Assume that $A\in M_{n\times n}(\mathbb{R})$ is such that $\det A\neq 0$. Then for each $u\in\mathcal{S}'(\mathbb{R}^n)$,
$$\widehat{u\circ A} = |\det A|^{-1}\,\widehat u\circ\big(A^{-1}\big)^\top. \tag{4.3.3}$$

Proof. For each $\varphi\in\mathcal{S}(\mathbb{R}^n)$, based on (4.2.2), (4.3.2), and (3.2.9), we may write
$$\langle\widehat{u\circ A},\varphi\rangle = \langle u\circ A,\widehat\varphi\,\rangle = |\det A|^{-1}\big\langle u,\widehat\varphi\circ A^{-1}\big\rangle = \big\langle u,\widehat{\varphi\circ A^\top}\big\rangle = \big\langle\widehat u,\varphi\circ A^\top\big\rangle = |\det A|^{-1}\big\langle\widehat u\circ(A^{-1})^\top,\varphi\big\rangle. \tag{4.3.4}$$
This proves (4.3.3). $\square$

The mappings in (3.1.40) and (4.3.2) corresponding to $A:=tI_{n\times n}$, for some number $t\in(0,\infty)$, are called dilations and will be denoted by $\tau_t$. More precisely, for each $t\in(0,\infty)$ we have
$$\tau_t : \mathcal{S}(\mathbb{R}^n)\to\mathcal{S}(\mathbb{R}^n), \qquad (\tau_t\varphi)(x) := \varphi(tx), \quad \forall\,\varphi\in\mathcal{S}(\mathbb{R}^n),\ \forall\,x\in\mathbb{R}^n, \tag{4.3.5}$$
and
$$\tau_t : \mathcal{S}'(\mathbb{R}^n)\to\mathcal{S}'(\mathbb{R}^n), \qquad \langle\tau_t u,\varphi\rangle := t^{-n}\big\langle u,\tau_{\frac{1}{t}}\varphi\big\rangle, \quad \forall\,u\in\mathcal{S}'(\mathbb{R}^n),\ \forall\,\varphi\in\mathcal{S}(\mathbb{R}^n). \tag{4.3.6}$$
Exercise 4.44. Prove that for each $t\in(0,\infty)$ the following are true:
$$\mathcal{F}(\tau_t\varphi) = t^{-n}\,\tau_{\frac{1}{t}}\mathcal{F}(\varphi) \ \text{ in } \mathcal{S}(\mathbb{R}^n), \quad \forall\,\varphi\in\mathcal{S}(\mathbb{R}^n), \tag{4.3.7}$$
$$\mathcal{F}(\tau_t u) = t^{-n}\,\tau_{\frac{1}{t}}\mathcal{F}(u) \ \text{ in } \mathcal{S}'(\mathbb{R}^n), \quad \forall\,u\in\mathcal{S}'(\mathbb{R}^n). \tag{4.3.8}$$
4.3. HOMOGENEOUS DISTRIBUTIONS
131
Hint: Use (3.2.9) with A = 1t In×n to prove (4.3.7), then use (4.3.7) and (4.3.6) to prove (4.3.8). To proceed, we make a couple of definitions. Definition 4.45. A linear transformation A ∈ Mn×n (R) is called orthogonal provided A is invertible and A−1 = A . Some of the most basic attributes of an orthogonal matrix A are (A )−1 = A,
|det A| = 1,
|Ax| = |x| for every x ∈ Rn .
(4.3.9)
Definition 4.46. A distribution u ∈ S (Rn ) is called invariant under orthogonal transformations provided u = u ◦ A in S (Rn ) for every orthogonal matrix A ∈ Mn×n (R). Proposition 4.47. Let u ∈ S (Rn ). Then u is invariant under orthogonal transformations if and only if u 3 is invariant under orthogonal transformations. Proof. This is a direct consequence of (4.3.3) and the fact that any orthogonal matrix A satisfies (4.3.9). Next we take a look at homogeneous functions to gain some insight into how this notion may be defined in the setting of distributions. Definition 4.48. (1) A nonempty open set O in Rn is called a cone-like region if tx ∈ O whenever x ∈ O and t ∈ (0, ∞). (2) Given a cone-like region O ⊆ Rn , call a function f : O → C positive homogeneous of degree k ∈ R if f (tx) = tk f (x) for every t > 0 and every x ∈ O. Exercise 4.49. Prove that if O ⊆ Rn is a cone-like region, N ∈ N, and f ∈ C N (O) is positive homogeneous of degree k ∈ R on O, then ∂ α f is positive homogeneous of degree k − N on O for every α ∈ Nn0 with |α| ≤ N . Exercise 4.50. Prove that if f ∈ C 0 (Rn \{0}) is positive homogeneous of degree 1 − n on Rn \ {0}, and g ∈ C 0 (S n−1 ), then g(x/R)f (x) dσ(x) = g(x)f (x) dσ(x), ∀ R ∈ (0, ∞). ∂B(0,R)
S n−1
(4.3.10) Exercise 4.51. Prove that if f ∈ C 0 (Rn \{0}) is positive homogeneous of degree k ∈ R, then |f (x)| ≤ f L∞ (S n−1 ) |x|k for every x ∈ Rn \ {0}. x k Hint: Write f (x) = f |x| |x| for every x ∈ Rn \ {0}.
132
CHAPTER 4. THE SPACE OF TEMPERED DISTRIBUTIONS
Exercise 4.52. Show that if f ∈ C 0 (Rn \{0}) is positive homogeneous of degree k ∈ R with k > −n, then f ∈ S (Rn ). Hint: Make use of Exercise 4.51 and the result discussed in Example 4.3. After this preamble, we are ready to extend the notion of positive homogeneity to tempered distributions. Definition 4.53. A distribution u ∈ S (Rn ) is called positive homogeneous of degree k ∈ R provided τt u = tk u in S (Rn ) for every t > 0. Exercise 4.54. Prove that δ ∈ S (Rn ) is positive homogeneous of degree −n. Exercise 4.55. Let f ∈ L1loc (Rn ) be such that (4.1.3) is satisfied for some m ∈ N and some R ∈ (0, ∞) and let k ∈ R. Show that the tempered distribution uf is positive homogeneous of degree k, if and only if f is positive homogeneous of degree k. Exercise 4.56. Prove that if u ∈ S (Rn ) is positive homogeneous of degree k ∈ R then for every α ∈ Nn0 the tempered distribution ∂ α u is positive homogeneous of degree k − |α|. Deduce from this that for every α ∈ Nn0 the tempered distribution ∂ α δ is positive homogeneous of degree −n − |α|. Proposition 4.57. Let k ∈ R. If u ∈ S (Rn ) is positive homogeneous of degree k, then u 3 is positive homogeneous of degree −n − k. Proof. Let u ∈ S (Rn ) be positive homogeneous of degree k, and fix t > 0. Then (4.3.8) and the assumption on u give 3 = t−n F τ 1t u = t−n F t−k u = t−n−k u 3 in S (Rn ), (4.3.11) τt u hence u 3 is positive homogeneous of degree −n − k. Proposition 4.58. If u ∈ S (Rn ), uRn \{0} ∈ C ∞ (Rn \ {0}), and uRn \{0} is positive homogeneous of degree k, for some k ∈ R, then u 3 n ∈ C ∞ (Rn \{0}). R \{0}
Proof. Fix u satisfying the hypotheses of the proposition. By (c) in Proposi α u in S (Rn ). Also, it is not tion 4.25, for each α ∈ Nn0 one has Dξα u 3 = (−x) α n difficult to check that (−x) u ∈ S (R ) continues to satisfy the hypotheses of the proposition with k replaced by k + |α|. Hence, the desired conclusion follows once we prove that u 3Rn \{0} is continuous on Rn \ {0}. To this end, assume first that k < −n and fix ψ ∈ C0∞ (Rn ) such that ψ = 1 on B(0, 1). Use this to decompose u = ψu + (1 − ψ)u. Since ψu ∈ E (Rn ) part 4 ∈ C ∞ (Rn ). Furthermore, (1 − ψ)u vanishes near (b) in Theorem 4.33 gives ψu x x = |x|k u |x| . Given the origin while outside supp ψ becomes u(x) = u |x| |x| the current assumption on k, this behavior implies (1 − ψ)u ∈ L1 (Rn ), hence (1 − ψ)u ∈ C 0 (Rn ) by Lemma 4.27. To summarize, this analysis shows that u 3 ∈ C 0 (Rn ) whenever k + n < 0.
(4.3.12)
4.3. HOMOGENEOUS DISTRIBUTIONS
133
To treat the case k+n ≥ 0, let α ∈ Nn0 be arbitrary and set vα := Dα u ∈ S (Rn ). Since uRn \{0} ∈ C ∞ (Rn \ {0}) differentiating u(tx) = tk u(x) yields t|α| (Dα u)(tx) = tk (Dα u)(x) for x ∈ Rn \ {0} and t > 0. Given that vα Rn \{0} ∈ C ∞ (R \ {0}), the latter translates into
(4.3.13)
vα (tx) = tk−|α| vα (x) for x ∈ Rn \ {0} and t > 0. (4.3.14) Hence, vα Rn \{0} is homogeneous of degree k − |α|. Based on what we proved 0 n earlier (cf. (4.3.12)), it follows that v4 α ∈ C (R ) whenever k − |α| < −n. In terms of the original distribution u this amounts to saying that ξα u 3 ∈ C 0 (Rn ),
∀ α ∈ Nn0
with
|α| > k + n.
(4.3.15)
The end-game in the proof is then as follows. Given an arbitrary k ∈ R, pick a natural number N with the property that 2N > k + n. Writing (cf. (13.2.1)) for each ξ ∈ Rn N! ξ 2α , |ξ|2N = (4.3.16) α! |α|=N
we obtain |ξ|2N u 3=
N! ξ 2α u 3 in S (Rn ). α!
(4.3.17)
|α|=N
Collectively, (4.3.17), (4.3.15), and the assumption on N imply |ξ|2N u 3 ∈ C 0 (Rn ).
(4.3.18)
1 2N
belongs to C ∞ (Rn \ {0}), condition (4.3.18) further implies |ξ| Rn \{0}0 n that u 3Rn \{0} ∈ C (R \ {0}). This completes the proof of the proposition. Since
An inspection of the proof of Proposition 4.58 shows that several other useful versions could be derived, two of which are recorded below. Exercise 4.59. If u ∈ S (Rn ), uRn \{0} ∈ C ∞ (Rn \ {0}) and uRn \{0} is positive
homogeneous of degree k, for some k ∈ R satisfying k < −n, then u 3 ∈ C k0 (Rn ), where k0 := max{j ∈ N0 , j + k < −n}. Exercise 4.60. Assume that u ∈ S (Rn ), uRn \{0} ∈ C N (Rn \{0}) where N ∈ N is even, and u n is positive homogeneous of degree k, for some k ∈ R R \{0}
satisfying k < N − n. Then u 3 ∈ C m (Rn \ {0}) for every m ∈ N0 satisfying m < N − n − k. Next we take on the task of computing the Fourier transform of certain homogeneous tempered distributions that will be particularly important later in applications. Recall the gamma function Γ from (13.5.1).
134
CHAPTER 4. THE SPACE OF TEMPERED DISTRIBUTIONS
−λ n Proposition 4.61. Let λ ∈ (0, n)∞andnset fλ (x) := |x| , for each x ∈ R \{0}. n 3 Then fλ ∈ S (R ), fλ Rn \{0} ∈ C (R \ {0}), and n−λ n Γ 2 |ξ|λ−n for every ξ ∈ Rn \ {0}. f3λ (ξ) = 2n−λ π 2 (4.3.19) Γ λ2
Proof. Fix λ ∈ (0, n). Exercise 4.4 then shows that fλ ∈ S (Rn ). Clearly, |x|−λ is invariant under orthogonal transformations and is positive homogeneous of degree −λ. Hence, by Proposition 4.47 and Proposition 4.57, it follows that and positive homogeneous of f4 λ is invariant under orthogonal transformations ∞ n degree −n + λ. In addition, f4 λ Rn \{0} ∈ C (R \ {0}) by Proposition 4.58. Fix ξ ∈ Rn \ {0} and choose an orthogonal matrix A ∈ Mn×n (R) with the property that Aξ = (0, 0, . . . , 0, |ξ|) (such a matrix may be obtained by ξ completing the vector vn := |ξ| to an orthonormal basis {v1 , . . . , vn } in Rn and then taking A to be the matrix mapping each vj into ej for j = 1, . . . , n). Then λ−n 4 4 , f4 λ (ξ) = fλ (Aξ) = fλ (0, . . . , 0, |ξ| ) = cλ |ξ|
(4.3.20)
where cλ := f4 λ (0, . . . , 0, 1) ∈ C. As such, we are left with determining the value |x|2 of cλ . We do so by apply f3λ to the particular Schwartz function ϕ(x) := e− 2 n
|ξ|2
for x ∈ Rn . From Example 3.21 we know that ϕ(ξ) 3 = (2π) 2 e− 2 for every n 3 may ξ ∈ R . Based on this formula and (4.3.20), the identity f4 λ , ϕ = fλ , ϕ be rewritten as |ξ|2 |x|2 n λ−n − 2 2 cλ |ξ| e dξ = (2π) |x|−λ e− 2 dx. (4.3.21) Rn
Rn
The two integrals in (4.3.21) may be computed simultaneously, by adopting a slightly more general point of view, as follows. For any p > −n use polar coordinates (cf. (13.8.9)) and a natural change of variables to write (for the definition and properties of the gamma function Γ see (13.5.1) and the subsequent comments) ∞ 2 ρ2 p − |ξ| 2 |ξ| e dξ = ωn−1 ρp+n−1 e− 2 dρ Rn
0
= ωn−1 2
p+n−2 2
∞
t
p+n−2 2
e−t dt
0
=2
p+n−2 2
ωn−1 Γ
p + n 2
.
(4.3.22)
When used with p := λ − n and p := −λ, formula (4.3.22) allows us to rewrite (4.3.21) as λ −λ + n λ−n+n−2 −λ+n−2 n 2 = (2π) 2 2 2 ωn−1 Γ . (4.3.23) ωn−1 Γ cλ 2 2 2 n
This gives cλ = 2n−λ π 2
Γ( n−λ 2 ) Γ( λ 2)
, finishing the proof of (4.3.19).
4.4. PRINCIPAL VALUE TEMPERED DISTRIBUTIONS
135
A remarkable consequence of Proposition 4.61 is singled out below. Corollary 4.62. Assume that n ∈ N, n ≥ 2, and fix λ ∈ [0, n − 1). Then for each j ∈ {1, . . . , n}, we have Γ n−λ ξj xj n−λ−1 n 2 n−λ π 2 λ in S (Rn ). (4.3.24) = −i 2 F λ+2 |x| Γ 2 + 1 |ξ| In particular, corresponding to the case when λ = n−2, formula (4.3.24) becomes xj ξj F (4.3.25) = −i ωn−1 2 in S (Rn ). |x|n |ξ| Proof. Fix an integer n ≥ 2, and suppose first that λ ∈ (0, n−1). In this regime, both (4.3.19) and (4.1.32) hold. In concert with part (b) in Theorem 4.25, these give xj −λ F = F (∂j fλ ) = iξj f3λ (ξ) |x|λ+2 Γ n−λ n−λ n 2 2 ξj |ξ|λ−n in S (Rn ). π (4.3.26) = i2 Γ λ2 Hence, whenever λ ∈ (0, n − 1), xj F = Cλ ξj |ξ|λ−n |x|λ+2
in S (Rn ),
(4.3.27)
where we have set Cλ := −i 2
n−λ−1
π
n 2
Γ
n−λ
2λ λ Γ 2 2
= −i 2
n−λ−1
Γ n−λ 2 π Γ λ2 + 1 n 2
(4.3.28)
and the last equality follows from (13.5.2). This proves formula (4.3.24) in the case when λ ∈ (0, n − 1). The case when λ = 0 then follows from what we have just proved, by passing to limit λ → 0+ in (4.3.24) and observing that all quantities involved depend continuously on λ, in an appropriate sense. Finally, (4.3.25) is a direct consequence of (4.3.24) and (13.5.6).
4.4
Principal Value Tempered Distributions
Recall the distribution P.V. x1 ∈ D (R) from Example 2.11. As seen from Exercise 4.103, we have P.V. x1 ∈ S (R). The issue we address in this section is the generalization of this distribution to higher dimensions. The key features of the function Θ(x) := x1 , x ∈ R \ {0}, that allowed us to define P.V. x1 as a tempered distribution on the real line are as follows: first, Θ ∈ C 0 (R \ {0}), second, Θ is positive homogeneous of degree −1, and third, Θ(1) + Θ(−1) = 0.
136
CHAPTER 4. THE SPACE OF TEMPERED DISTRIBUTIONS
Moving from one dimension to R^n, this suggests considering the class of functions Θ satisfying
\[
\Theta \in C^0(\mathbb{R}^n\setminus\{0\}),\quad
\Theta \text{ positive homogeneous of degree } -n,\quad
\int_{S^{n-1}} \Theta \, d\sigma = 0.
\tag{4.4.1}
\]
It is worth noting that a function Θ as above typically fails to be in L¹_loc(R^n) [though obviously Θ ∈ L¹_loc(R^n \ {0})]. As such, associating a distribution with Θ necessarily has to be a more elaborate process than the one identifying functions from L¹_loc(R^n) with distributions in R^n. This has already been the case when defining P.V. 1/x, and here we model the same type of definition in higher dimensions. Specifically, given Θ as in (4.4.1), consider the linear mapping
\[
\mathrm{P.V.}\,\Theta : S(\mathbb{R}^n) \to \mathbb{C},\qquad
\big(\mathrm{P.V.}\,\Theta\big)(\varphi) := \lim_{\varepsilon\to 0^+} \int_{|x|\ge\varepsilon} \Theta(x)\,\varphi(x)\,dx,
\quad \forall\,\varphi\in S(\mathbb{R}^n).
\tag{4.4.2}
\]
That this definition does the job is proved next.

Proposition 4.63. Let Θ be a function satisfying (4.4.1). Then the map P.V. Θ considered in (4.4.2) is well-defined and is a tempered distribution in R^n. In addition, (P.V. Θ)|_{R^n \ {0}} = Θ|_{R^n \ {0}} in D′(R^n \ {0}).

Before proceeding with the proof of Proposition 4.63 we recall a definition and introduce a class of functions that will be used in the proof.

Definition 4.64. A function ψ : R^n → C is called radial if ψ(x) depends only on |x| for every x ∈ R^n, that is, ψ(x) = f(|x|) for all x ∈ R^n, where f : R → C.

Next, consider the class of functions
\[
\begin{cases}
\psi : \mathbb{R}^n \to \mathbb{C},\quad \psi \in C^1(\mathbb{R}^n),\\[2pt]
\psi \text{ is radial and } \psi(0) = 1,\\[2pt]
\exists\,\varepsilon_o \in (0,\infty) \text{ such that } \psi \text{ decays like } |x|^{-\varepsilon_o} \text{ at } \infty
\end{cases}
\tag{4.4.3}
\]
(for example, ψ(x) = e^{−|x|²/2}, x ∈ R^n, satisfies (4.4.3)) and set
\[
\mathcal{Q} := \big\{\psi : \psi \text{ satisfies } (4.4.3)\big\}.
\tag{4.4.4}
\]
Now we are ready to return to Proposition 4.63.
Proof of Proposition 4.63. Fix an arbitrary ψ satisfying (4.4.3). Then, making use of formula (13.8.9) and the properties of Θ and ψ, for each ε ∈ (0, ∞) we have
\[
\int_{|x|\ge\varepsilon} \Theta(x)\,\psi(x)\,dx
= \int_{|x|\ge\varepsilon} \frac{\Theta(x/|x|)}{|x|^n}\,\psi(x)\,dx
= \int_\varepsilon^\infty \frac{\psi(\rho)}{\rho}\Big(\int_{S^{n-1}} \Theta(\omega)\,d\sigma(\omega)\Big)\,d\rho = 0.
\tag{4.4.5}
\]
Hence, Θ[ϕ − ϕ(0)ψ] ∈ L¹(R^n) for every ϕ ∈ S(R^n), which when combined with (4.4.5) and Lebesgue's dominated convergence theorem 13.12 yields
\[
\big(\mathrm{P.V.}\,\Theta\big)(\varphi) = \int_{\mathbb{R}^n} \Theta(x)\,\big[\varphi(x) - \varphi(0)\psi(x)\big]\,dx,
\quad \forall\,\varphi \in S(\mathbb{R}^n).
\tag{4.4.6}
\]
Note that because of (4.4.5), the right-hand side in (4.4.6) is independent of the choice of ψ. Estimating the right-hand side of (4.4.6) (using Exercise 4.51, the decay at infinity of functions from Q and the Schwartz class, and the mean value theorem near the origin) shows that there exists a constant C ∈ (0, ∞) independent of ϕ with the property that
\[
\big|\big(\mathrm{P.V.}\,\Theta\big)(\varphi)\big|
\le C \sup_{x\in\mathbb{R}^n,\ |\alpha|\le 1,\ |\beta|\le 1} \big|x^\alpha\,\partial^\beta\varphi(x)\big|,
\quad \forall\,\varphi \in S(\mathbb{R}^n).
\tag{4.4.7}
\]
Since from (4.4.6) we see that P.V. Θ is linear, in light of Fact 4.1 estimate (4.4.7) implies P.V. Θ ∈ S′(R^n), as wanted. The fact that the restriction in the distributional sense of P.V. Θ to R^n \ {0} is equal to the restriction of the function Θ to R^n \ {0} is immediate from definitions.

Remark 4.65. (1) As already alluded to, if n = 1 and Θ(x) := 1/x, x ∈ R \ {0}, then we have P.V. Θ = P.V. 1/x.
(2) Suppose Θ is as in (4.4.1). Since identity (4.4.6) holds for any ψ ∈ Q, we may select ψ ∈ Q that also satisfies ψ = 1 on S^{n−1} and observe that for this choice of ψ we have
\[
\int_{\mathbb{R}^n} \Theta(x)\,\big[\varphi(x)-\varphi(0)\psi(x)\big]\,dx
= \int_{|x|\le 1} \Theta(x)\,\big[\varphi(x)-\varphi(0)\big]\,dx
+ \int_{|x|>1} \Theta(x)\,\varphi(x)\,dx,
\quad \forall\,\varphi\in S(\mathbb{R}^n).
\tag{4.4.8}
\]
Hence,
\[
\big\langle \mathrm{P.V.}\,\Theta,\ \varphi\big\rangle
= \int_{|x|\le 1} \Theta(x)\,\big[\varphi(x)-\varphi(0)\big]\,dx
+ \int_{|x|>1} \Theta(x)\,\varphi(x)\,dx,
\quad \forall\,\varphi\in S(\mathbb{R}^n).
\tag{4.4.9}
\]
Example 4.66. If j ∈ {1, . . . , n}, the function Θ defined by Θ(x) := x_j/|x|^{n+1} for each x ∈ R^n \ {0} satisfies (4.4.1). By Proposition 4.63 we have P.V. (x_j/|x|^{n+1}) ∈ S′(R^n), and part (2) in Remark 4.65 gives that for every ϕ ∈ S(R^n)
\[
\Big\langle \mathrm{P.V.}\,\frac{x_j}{|x|^{n+1}},\ \varphi\Big\rangle
= \lim_{\varepsilon\to 0^+}\int_{|x|\ge\varepsilon} \frac{x_j}{|x|^{n+1}}\,\varphi(x)\,dx
= \int_{|x|\le 1} \frac{x_j}{|x|^{n+1}}\,\big(\varphi(x)-\varphi(0)\big)\,dx
+ \int_{|x|>1} \frac{x_j}{|x|^{n+1}}\,\varphi(x)\,dx.
\tag{4.4.10}
\]
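In the one-dimensional setting of Remark 4.65(1), the limit in (4.4.2) is easy to probe numerically. The sketch below assumes SciPy is available; the test function ϕ(x) = x e^{−x²} is chosen ad hoc, and for it ⟨P.V. 1/x, ϕ⟩ = ∫_R e^{−x²} dx = √π.

```python
import numpy as np
from scipy.integrate import quad

def pv_one_over_x(phi, eps=1e-8, R=50.0):
    # <P.V. 1/x, phi> via the folded symmetric truncation:
    # int_{eps <= |x| <= R} phi(x)/x dx = int_eps^R (phi(x) - phi(-x))/x dx.
    val, _ = quad(lambda x: (phi(x) - phi(-x)) / x, eps, R)
    return val

result = pv_one_over_x(lambda x: x * np.exp(-x ** 2))  # expect sqrt(pi)
```

Folding the integral over {ε ≤ |x| ≤ R} makes the cancellation built into the principal value explicit, which is why the truncated integrals converge as ε → 0⁺.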
The next proposition elaborates on the manner in which principal value tempered distributions convolve with Schwartz functions.

Proposition 4.67. Let Θ be a function satisfying the conditions in (4.4.1). Then for each ϕ ∈ S(R^n) one has that (P.V. Θ) ∗ ϕ ∈ S′(R^n) ∩ C^∞(R^n) and
\[
\big((\mathrm{P.V.}\,\Theta) * \varphi\big)(x)
= \lim_{\varepsilon\to 0^+} \int_{|x-y|\ge\varepsilon} \Theta(x-y)\,\varphi(y)\,dy,
\quad \forall\,x \in \mathbb{R}^n.
\tag{4.4.11}
\]
Proof. Fix an arbitrary ϕ ∈ S(R^n) and note that since P.V. Θ ∈ S′(R^n) (cf. Proposition 4.63), part (e) in Theorem 4.18 gives (P.V. Θ) ∗ ϕ ∈ S′(R^n). Let ψ ∈ C₀^∞(R^n) be such that ψ = 1 near the origin. Then 1 − ψ ∈ L(R^n) and it makes sense to consider (1 − ψ) P.V. Θ in S′(R^n) (cf. part (b) in Theorem 4.13). Hence, we may decompose P.V. Θ = u + v, where
\[
u := \psi\,\mathrm{P.V.}\,\Theta \in E'(\mathbb{R}^n)
\quad\text{and}\quad
v := (1-\psi)\,\mathrm{P.V.}\,\Theta \in S'(\mathbb{R}^n).
\tag{4.4.12}
\]
The last part in Proposition 4.63 also permits us to identify v = (1 − ψ)Θ in L¹_loc(R^n). By Exercise 4.51 we have v ∈ L^p(R^n) for every p ∈ (1, ∞) which, in combination with Exercise 3.18, allows us to conclude that v ∗ ϕ ∈ C^∞(R^n) and
\[
(v * \varphi)(x) = \int_{\mathbb{R}^n} \big(1-\psi(y)\big)\,\Theta(y)\,\varphi(x-y)\,dy,
\quad \forall\,x\in\mathbb{R}^n.
\tag{4.4.13}
\]
Since the above integral is absolutely convergent, by Lebesgue's dominated convergence theorem we may further express this as
\[
(v * \varphi)(x) = \lim_{\varepsilon\to 0^+} \int_{|y|\ge\varepsilon} \big(1-\psi(y)\big)\,\Theta(y)\,\varphi(x-y)\,dy,
\quad \forall\,x\in\mathbb{R}^n.
\tag{4.4.14}
\]
Thanks to Exercise 2.94 we also have u ∗ ϕ ∈ C^∞(R^n) and
\[
(u * \varphi)(x) = \big\langle \psi\,\mathrm{P.V.}\,\Theta,\ \varphi(x-\cdot)\big\rangle
\quad\text{for each } x\in\mathbb{R}^n.
\tag{4.4.15}
\]
On the other hand, the definition of the principal value gives that for each x ∈ R^n
\[
\big\langle \psi\,\mathrm{P.V.}\,\Theta,\ \varphi(x-\cdot)\big\rangle
= \big\langle \mathrm{P.V.}\,\Theta,\ \psi(\cdot)\,\varphi(x-\cdot)\big\rangle
= \lim_{\varepsilon\to 0^+} \int_{|y|\ge\varepsilon} \Theta(y)\,\psi(y)\,\varphi(x-y)\,dy.
\tag{4.4.16}
\]
Collectively, these arguments show that, for each x ∈ R^n,
\[
\big((\mathrm{P.V.}\,\Theta) * \varphi\big)(x) = (u*\varphi)(x) + (v*\varphi)(x)
= \lim_{\varepsilon\to 0^+} \int_{|y|\ge\varepsilon} \Theta(y)\,\psi(y)\,\varphi(x-y)\,dy
+ \lim_{\varepsilon\to 0^+} \int_{|y|\ge\varepsilon} \big(1-\psi(y)\big)\,\Theta(y)\,\varphi(x-y)\,dy
= \lim_{\varepsilon\to 0^+} \int_{|x-y|\ge\varepsilon} \Theta(x-y)\,\varphi(y)\,dy,
\tag{4.4.17}
\]
proving (4.4.11).

The next example discusses a basic class of principal value tempered distributions arising naturally in applications.

Example 4.68. Let Φ ∈ C¹(R^n \ {0}) be positive homogeneous of degree 1 − n. Then for each j ∈ {1, . . . , n} it follows that ∂_jΦ satisfies the conditions in (4.4.1). Consequently, P.V.(∂_jΦ) is a well-defined tempered distribution. To see why this is true, fix j ∈ {1, . . . , n} and note that ∂_jΦ ∈ C⁰(R^n \ {0}) and ∂_jΦ is positive homogeneous of degree −n (cf. Exercise 4.49). Moreover, using Exercise 4.50, then integrating by parts based on (13.7.4), and then using (13.8.5), we obtain
\[
0 = \int_{|x|=2} \Phi(x)\,\frac{x_j}{2}\,d\sigma(x) - \int_{|x|=1} \Phi(x)\,x_j\,d\sigma(x)
= \int_{1<|x|<2} \partial_j\Phi(x)\,dx
= \int_1^2 \int_{S^{n-1}} (\partial_j\Phi)(\rho\omega)\,\rho^{n-1}\,d\sigma(\omega)\,d\rho
\]
\[
= \int_1^2 \frac{d\rho}{\rho} \int_{S^{n-1}} (\partial_j\Phi)(\omega)\,d\sigma(\omega)
= (\ln 2) \int_{S^{n-1}} (\partial_j\Phi)(\omega)\,d\sigma(\omega).
\tag{4.4.18}
\]
This shows that ∫_{S^{n−1}} ∂_jΦ dσ = 0, hence ∂_jΦ satisfies all conditions in (4.4.1).
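Returning to Proposition 4.67, formula (4.4.11) can be sanity-checked numerically in one dimension with Θ(x) = 1/x and a Gaussian ϕ, using the classical Hilbert-transform identity p.v. ∫_R e^{−y²}/(x − y) dy = 2√π D(x), where D is the Dawson function. This identity is standard but not established in the text, so it serves here only as an external reference value; the sketch assumes SciPy.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import dawsn

def pv_conv_gaussian(x, eps=1e-10, R=40.0):
    # ((P.V. 1/x) * exp(-(.)^2))(x): substitute y = x - s and fold the
    # principal value around the singularity at s = 0.
    integrand = lambda s: (np.exp(-(x - s) ** 2) - np.exp(-(x + s) ** 2)) / s
    val, _ = quad(integrand, eps, R)
    return val

checks = [(pv_conv_gaussian(x), 2.0 * np.sqrt(np.pi) * dawsn(x)) for x in (0.5, 1.0, 2.0)]
```

In particular, the convolution is a smooth, bounded function of x, as predicted by Proposition 4.67.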
Principal value tempered distributions often arise when differentiating certain types of functions exhibiting a point singularity. Specifically, we have the following theorem.

Theorem 4.69. Let Φ ∈ C¹(R^n \ {0}) be a function that is positive homogeneous of degree 1 − n. Then for each j ∈ {1, . . . , n}, the distributional derivative ∂_jΦ satisfies
\[
\partial_j\Phi = \Big(\int_{S^{n-1}} \Phi(\omega)\,\omega_j\,d\sigma(\omega)\Big)\,\delta
+ \mathrm{P.V.}(\partial_j\Phi)
\quad\text{in } S'(\mathbb{R}^n).
\tag{4.4.19}
\]

Proof. From the properties of Φ and Exercises 4.51 and 4.5 it follows that Φ ∈ S′(R^n). Fix j ∈ {1, . . . , n}. By Example 4.68 we have that P.V.(∂_jΦ) ∈ S′(R^n). Hence, invoking (4.1.25), to conclude (4.4.19) it suffices to prove that the equality in (4.4.19) holds in D′(R^n). To this end, fix ϕ ∈ C₀^∞(R^n) and, using the Lebesgue dominated convergence theorem 13.12 and integration by parts (based on (13.7.4)), write
\[
\langle \partial_j\Phi, \varphi\rangle = -\langle \Phi, \partial_j\varphi\rangle
= -\int_{\mathbb{R}^n} \Phi(x)\,\partial_j\varphi(x)\,dx
= -\lim_{\varepsilon\to 0^+} \int_{|x|\ge\varepsilon} \Phi(x)\,\partial_j\varphi(x)\,dx
\]
\[
= \lim_{\varepsilon\to 0^+} \int_{|x|\ge\varepsilon} \partial_j\Phi(x)\,\varphi(x)\,dx
+ \lim_{\varepsilon\to 0^+} \int_{|x|=\varepsilon} \frac{x_j}{\varepsilon}\,\Phi(x)\,\varphi(x)\,d\sigma(x)
= \big\langle \mathrm{P.V.}(\partial_j\Phi), \varphi\big\rangle
+ \lim_{\varepsilon\to 0^+} \int_{|x|=\varepsilon} \frac{x_j}{\varepsilon}\,\Phi(x)\,\varphi(x)\,d\sigma(x).
\tag{4.4.20}
\]
Also, for each ε ∈ (0, ∞) we have
\[
\int_{|x|=\varepsilon} \frac{x_j}{\varepsilon}\,\Phi(x)\,\varphi(x)\,d\sigma(x)
= \int_{|x|=\varepsilon} \frac{x_j}{\varepsilon}\,\Phi(x)\,\big[\varphi(x)-\varphi(0)\big]\,d\sigma(x)
+ \varphi(0) \int_{|x|=\varepsilon} \frac{x_j}{\varepsilon}\,\Phi(x)\,d\sigma(x)
\]
\[
= \int_{|x|=\varepsilon} \frac{x_j}{\varepsilon}\,\Phi(x)\,\big[\varphi(x)-\varphi(0)\big]\,d\sigma(x)
+ \varphi(0) \int_{S^{n-1}} \omega_j\,\Phi(\omega)\,d\sigma(\omega),
\tag{4.4.21}
\]
where for the last equality in (4.4.21) we used Exercise 4.50. In addition, using the fact that ϕ ∈ C₀^∞(R^n) and Exercise 4.51 we may estimate
\[
\Big|\int_{|x|=\varepsilon} \frac{x_j}{\varepsilon}\,\Phi(x)\,\big[\varphi(x)-\varphi(0)\big]\,d\sigma(x)\Big|
\le \varepsilon\,\|\nabla\varphi\|_{L^\infty(\mathbb{R}^n)}\,\|\Phi\|_{L^\infty(S^{n-1})}
\int_{|x|=\varepsilon} \frac{d\sigma(x)}{|x|^{n-1}}
= \|\nabla\varphi\|_{L^\infty(\mathbb{R}^n)}\,\|\Phi\|_{L^\infty(S^{n-1})}\,\omega_{n-1}\,\varepsilon
\xrightarrow[\varepsilon\to 0^+]{} 0.
\tag{4.4.22}
\]
Combining (4.4.20), (4.4.21), and (4.4.22), we conclude
\[
\langle \partial_j\Phi, \varphi\rangle
= \big\langle \mathrm{P.V.}(\partial_j\Phi), \varphi\big\rangle
+ \Big(\int_{S^{n-1}} \omega_j\,\Phi(\omega)\,d\sigma(\omega)\Big)\,\langle\delta,\varphi\rangle.
\tag{4.4.23}
\]
This yields that ∂_jΦ = P.V.(∂_jΦ) + (∫_{S^{n−1}} ω_j Φ(ω) dσ(ω)) δ in D′(R^n) and completes the proof of the theorem.

4.5 The Fourier Transform of Principal Value Distributions
From Proposition 4.63 we know that whenever Θ is a function satisfying the conditions in (4.4.1), the principal value distribution P.V. Θ belongs to S′(R^n). As such, its Fourier transform makes sense as a tempered distribution. This being said, in many applications (cf. the discussion in Remark 4.92), it is of basic importance to actually identify this distribution. The general aim of this section is to do just that, though as a warm-up, we deal with the following particular (yet relevant) case.

Proposition 4.70. For each k ∈ {1, . . . , n} consider the function
\[
\Phi_k(x) := \frac{x_k}{|x|^n}, \quad \forall\,x\in\mathbb{R}^n\setminus\{0\}.
\tag{4.5.1}
\]
Then for each j ∈ {1, . . . , n} the function ∂_jΦ_k satisfies the conditions in (4.4.1) and
\[
F\big(\mathrm{P.V.}(\partial_j\Phi_k)\big)
= \omega_{n-1}\,\frac{\xi_j\,\xi_k}{|\xi|^2} - \frac{\omega_{n-1}}{n}\,\delta_{jk}
\quad\text{in } S'(\mathbb{R}^n).
\tag{4.5.2}
\]

Proof. Fix j, k ∈ {1, . . . , n}. From Example 4.68 it is clear that the function ∂_jΦ_k is as in (4.4.1). Moreover, Theorem 4.69 gives that
\[
\partial_j\Phi_k = \Big(\int_{S^{n-1}} \omega_k\,\omega_j\,d\sigma(\omega)\Big)\,\delta
+ \mathrm{P.V.}(\partial_j\Phi_k)
\quad\text{in } S'(\mathbb{R}^n).
\tag{4.5.3}
\]
Taking into account (13.8.44), and applying the Fourier transform to both sides, this yields
\[
F\big(\mathrm{P.V.}(\partial_j\Phi_k)\big)
= F(\partial_j\Phi_k) - \frac{\omega_{n-1}}{n}\,\delta_{jk}
\quad\text{in } S'(\mathbb{R}^n).
\tag{4.5.4}
\]
On the other hand, since by part (b) in Theorem 4.25 and Corollary 4.62 we have
\[
F(\partial_j\Phi_k) = i\,\xi_j\,F(\Phi_k) = \omega_{n-1}\,\frac{\xi_j\,\xi_k}{|\xi|^2}
\quad\text{in } S'(\mathbb{R}^n),
\tag{4.5.5}
\]
formula (4.5.2) follows from (4.5.4) and (4.5.5).

The next theorem shows that the Fourier transform of principal value distributions P.V. Θ is given by bounded functions. As we shall see in Sect. 4.9, such a result plays a key role in establishing the L² boundedness of singular integral operators (SIO).
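Before turning to that theorem, it may help to record the kernel of Proposition 4.70 explicitly; a direct computation (not displayed in the text) gives

```latex
\partial_j \Phi_k(x) \;=\; \partial_j\!\Big(\frac{x_k}{|x|^n}\Big)
\;=\; \frac{\delta_{jk}}{|x|^n} \;-\; n\,\frac{x_j\,x_k}{|x|^{n+2}},
\qquad x \in \mathbb{R}^n\setminus\{0\},
```

which is indeed positive homogeneous of degree −n; its mean over S^{n−1} vanishes since, by (13.8.44), ∫_{S^{n−1}} (δ_{jk} − n ω_j ω_k) dσ(ω) = δ_{jk} ω_{n−1} − n (ω_{n−1}/n) δ_{jk} = 0, in agreement with (4.4.1).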
Theorem 4.71. Let Θ be a function satisfying the conditions in (4.4.1). Then the function given by the formula
\[
m_\Theta(\xi) := -\int_{S^{n-1}} \Theta(\omega)\,\log\big(i(\xi\cdot\omega)\big)\,d\sigma(\omega)
\quad\text{for } \xi\in\mathbb{R}^n\setminus\{0\}
\tag{4.5.6}
\]
is well-defined, positive homogeneous of degree zero, satisfies
\[
\|m_\Theta\|_{L^\infty(\mathbb{R}^n)} \le C_n\,\|\Theta\|_{L^\infty(S^{n-1})},
\tag{4.5.7}
\]
where C_n ∈ (0, ∞) is defined as
\[
C_n := \frac{\pi\,\omega_{n-1}}{2} + \int_{S^{n-1}} \Big|\ln\Big|\frac{\xi}{|\xi|}\cdot\omega\Big|\Big|\,d\sigma(\omega),
\tag{4.5.8}
\]
and
\[
(m_\Theta)^\vee = m_{\Theta^\vee}.
\tag{4.5.9}
\]
Moreover, the Fourier transform of the tempered distribution P.V. Θ is of function type and
\[
F\big(\mathrm{P.V.}\,\Theta\big) = m_\Theta \quad\text{in } S'(\mathbb{R}^n),
\tag{4.5.10}
\]
and
\[
m_\Theta = \overline{m_\Theta}\ \text{ if } \Theta \text{ is an even function.}
\tag{4.5.11}
\]
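Before the proof, it is instructive to evaluate (4.5.6) in the simplest case n = 1 with Θ(x) = 1/x, where the integral over S⁰ = {−1, 1} reduces to a two-point sum (with Θ(1) = 1 and Θ(−1) = −1):

```latex
m_{\Theta}(\xi) \;=\; -\big[\log(i\xi) - \log(-i\xi)\big]
\;=\; -\Big[\,i\,\tfrac{\pi}{2}\,\mathrm{sgn}\,\xi \;-\;\big(\!-\,i\,\tfrac{\pi}{2}\,\mathrm{sgn}\,\xi\big)\Big]
\;=\; -\,i\,\pi\,\mathrm{sgn}\,\xi,
```

so (4.5.10) recovers the familiar formula F(P.V. 1/x) = −iπ sgn ξ in S′(R).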
Proof. First, we show that the integral in (4.5.6) is absolutely convergent for each vector ξ ∈ R^n \ {0}. To see this, fix ξ ∈ R^n \ {0} and observe that for each ω ∈ S^{n−1} we have
\[
\log\big(i(\xi\cdot\omega)\big) = \ln|\xi\cdot\omega| + i\,\frac{\pi}{2}\,\mathrm{sgn}(\xi\cdot\omega)
= \ln|\xi| + \ln\Big|\frac{\xi}{|\xi|}\cdot\omega\Big| + i\,\frac{\pi}{2}\,\mathrm{sgn}(\xi\cdot\omega).
\tag{4.5.12}
\]
Applying Proposition 13.46 with f(t) := ln|t|, t ∈ R, and v := ξ/|ξ|, yields
\[
\int_{S^{n-1}} \Big|\ln\Big|\frac{\xi}{|\xi|}\cdot\omega\Big|\Big|\,d\sigma(\omega)
= 2\,\omega_{n-2} \int_0^1 |\ln s|\,\big(\sqrt{1-s^2}\,\big)^{\,n-3}\,ds < \infty.
\tag{4.5.13}
\]
As a by-product, we note that this implies that the constant C_n from (4.5.8) is finite. Next, from (4.5.13) and (4.5.8) we obtain
\[
\int_{S^{n-1}} |\Theta(\omega)|\,\Big|\ln\Big|\frac{\xi}{|\xi|}\cdot\omega\Big| + i\,\frac{\pi}{2}\,\mathrm{sgn}(\xi\cdot\omega)\Big|\,d\sigma(\omega)
\le \|\Theta\|_{L^\infty(S^{n-1})}\Big(\frac{\pi\,\omega_{n-1}}{2}
+ \int_{S^{n-1}} \Big|\ln\Big|\frac{\xi}{|\xi|}\cdot\omega\Big|\Big|\,d\sigma(\omega)\Big)
= C_n\,\|\Theta\|_{L^\infty(S^{n-1})} < \infty.
\tag{4.5.14}
\]
From (4.5.12) and (4.5.14) it is now clear that the integral in (4.5.6) is absolutely convergent for each fixed ξ ∈ R^n \ {0}. This proves that m_Θ is well-defined in R^n \ {0}.
Going further, since ∫_{S^{n−1}} Θ dσ = 0, from (4.5.6) and (4.5.12) we see that, for each ξ ∈ R^n \ {0},
\[
m_\Theta(\xi) = -\int_{S^{n-1}} \Theta(\omega)\Big[\ln\Big|\frac{\xi}{|\xi|}\cdot\omega\Big| + i\,\frac{\pi}{2}\,\mathrm{sgn}(\xi\cdot\omega)\Big]\,d\sigma(\omega).
\tag{4.5.15}
\]
Having justified this, we see that m_Θ is positive homogeneous of degree zero and that (4.5.7) follows based on (4.5.14). Also, (4.5.9) is obtained directly from (4.5.6) by changing variables ω → −ω.

To prove (4.5.10), fix an arbitrary function ϕ ∈ S(R^n). Using (4.4.2), Lebesgue's dominated convergence theorem, Fubini's theorem, and (13.8.7), write
\[
\big\langle F(\mathrm{P.V.}\,\Theta), \varphi\big\rangle
= \big\langle \mathrm{P.V.}\,\Theta, \widehat{\varphi}\big\rangle
= \lim_{\varepsilon\to 0^+} \int_{|x|\ge\varepsilon} \Theta(x)\,\widehat{\varphi}(x)\,dx
= \lim_{\substack{\varepsilon\to 0^+\\ R\to\infty}} \int_{\varepsilon\le|x|\le R} \Theta(x)\,\widehat{\varphi}(x)\,dx
\]
\[
= \lim_{\substack{\varepsilon\to 0^+\\ R\to\infty}} \int_{\varepsilon\le|x|\le R} \Theta(x) \int_{\mathbb{R}^n} \varphi(\xi)\,e^{-i x\cdot\xi}\,d\xi\,dx
= \lim_{\substack{\varepsilon\to 0^+\\ R\to\infty}} \int_{\mathbb{R}^n} \varphi(\xi) \int_{S^{n-1}} \Theta(\omega) \int_\varepsilon^R e^{-i(\omega\cdot\xi)\rho}\,\frac{d\rho}{\rho}\,d\sigma(\omega)\,d\xi
\]
\[
= \lim_{\substack{\varepsilon\to 0^+\\ R\to\infty}} \int_{\mathbb{R}^n} \varphi(\xi) \int_{S^{n-1}} \Theta(\omega) \int_\varepsilon^R \big(e^{-i(\omega\cdot\xi)\rho} - \cos\rho\big)\,\frac{d\rho}{\rho}\,d\sigma(\omega)\,d\xi,
\tag{4.5.16}
\]
where the last equality uses
\[
\int_{S^{n-1}} \Theta(\omega)\Big(\int_\varepsilon^R \cos\rho\,\frac{d\rho}{\rho}\Big)\,d\sigma(\omega) = 0
\quad\text{for each } \varepsilon, R > 0
\]
(itself a consequence of the fact that Θ has mean value zero over S^{n−1}).

At this stage, we wish to invoke Lebesgue's dominated convergence theorem in order to absorb the limit inside the integral. To see that this theorem is applicable in the current context, we first note that for each ξ ∈ R^n \ {0} and each ω ∈ S^{n−1} such that ξ · ω ≠ 0, formulas (4.11.5) and (4.11.6) give
\[
\lim_{\substack{\varepsilon\to 0^+\\ R\to\infty}} \int_\varepsilon^R \big(e^{-i(\omega\cdot\xi)\rho} - \cos\rho\big)\,\frac{d\rho}{\rho}
= \lim_{\substack{\varepsilon\to 0^+\\ R\to\infty}} \Big[\int_\varepsilon^R \frac{\cos\big((\omega\cdot\xi)\rho\big) - \cos\rho}{\rho}\,d\rho
- i \int_\varepsilon^R \frac{\sin\big((\omega\cdot\xi)\rho\big)}{\rho}\,d\rho\Big]
= -\ln|\omega\cdot\xi| - i\,\frac{\pi}{2}\,\mathrm{sgn}(\omega\cdot\xi)
= -\log\big(i(\omega\cdot\xi)\big).
\tag{4.5.17}
\]
This takes care of the pointwise convergence aspect of Lebesgue's theorem. To verify the uniform domination aspect, based on (4.11.7) and (4.11.8) we first estimate
\[
\int_{\mathbb{R}^n} |\varphi(\xi)| \int_{S^{n-1}} |\Theta(\omega)|
\sup_{0<\varepsilon<R<\infty} \Big|\int_\varepsilon^R \big(e^{-i(\omega\cdot\xi)\rho} - \cos\rho\big)\,\frac{d\rho}{\rho}\Big|\,d\sigma(\omega)\,d\xi
\le \int_{\mathbb{R}^n} |\varphi(\xi)| \int_{S^{n-1}} |\Theta(\omega)|\,\big(2\,\big|\ln|\omega\cdot\xi|\big| + 4\big)\,d\sigma(\omega)\,d\xi
\]
\[
\le \|\Theta\|_{L^\infty(S^{n-1})}\Big(4\,\omega_{n-1}\,\|\varphi\|_{L^1(\mathbb{R}^n)}
+ 2 \int_{\mathbb{R}^n} |\varphi(\xi)| \int_{S^{n-1}} \big|\ln|\omega\cdot\xi|\big|\,d\sigma(\omega)\,d\xi\Big),
\tag{4.5.18}
\]
and note that, further,
\[
\int_{\mathbb{R}^n} |\varphi(\xi)| \int_{S^{n-1}} \big|\ln|\omega\cdot\xi|\big|\,d\sigma(\omega)\,d\xi
\le \int_{\mathbb{R}^n} |\varphi(\xi)| \int_{S^{n-1}} \Big|\ln\Big|\frac{\xi}{|\xi|}\cdot\omega\Big|\Big|\,d\sigma(\omega)\,d\xi
+ \omega_{n-1} \int_{\mathbb{R}^n} |\varphi(\xi)|\,\big|\ln|\xi|\big|\,d\xi.
\tag{4.5.19}
\]
From (4.5.18) and (4.5.19), the fact that ϕ ∈ S(R^n), and (4.5.13), we may therefore conclude that
\[
\int_{\mathbb{R}^n} |\varphi(\xi)| \int_{S^{n-1}} |\Theta(\omega)|
\sup_{0<\varepsilon<R<\infty} \Big|\int_\varepsilon^R \big(e^{-i(\omega\cdot\xi)\rho} - \cos\rho\big)\,\frac{d\rho}{\rho}\Big|\,d\sigma(\omega)\,d\xi < \infty.
\]
From this, the fact that m_Θ ∈ L^∞(R^n), and keeping in mind that ϕ ∈ S(R^n) was arbitrary, (4.5.10) follows.

Finally, to show (4.5.11), note that if Θ is assumed to be even, then
\[
\int_{S^{n-1}} \Theta(\omega)\,\mathrm{sgn}(\xi\cdot\omega)\,d\sigma(\omega)
= \int_{\substack{\omega\in S^{n-1}\\ \omega\cdot\xi>0}} \Theta(\omega)\,d\sigma(\omega)
- \int_{\substack{\omega\in S^{n-1}\\ \omega\cdot\xi<0}} \Theta(\omega)\,d\sigma(\omega)
= \int_{\substack{\omega\in S^{n-1}\\ \omega\cdot\xi>0}} \Theta(\omega)\,d\sigma(\omega)
- \int_{\substack{\omega\in S^{n-1}\\ (-\omega)\cdot\xi<0}} \Theta(-\omega)\,d\sigma(\omega)
= \int_{\substack{\omega\in S^{n-1}\\ \omega\cdot\xi>0}} \big[\Theta(\omega)-\Theta(-\omega)\big]\,d\sigma(\omega) = 0.
\tag{4.5.22}
\]
Consequently,
\[
\overline{m_\Theta(\xi)}
= -\int_{S^{n-1}} \Theta(\omega)\Big[\ln\Big|\frac{\xi}{|\xi|}\cdot\omega\Big| - i\,\frac{\pi}{2}\,\mathrm{sgn}(\xi\cdot\omega)\Big]\,d\sigma(\omega)
= -\int_{S^{n-1}} \Theta(\omega)\Big[\ln\Big|\frac{\xi}{|\xi|}\cdot\omega\Big| + i\,\frac{\pi}{2}\,\mathrm{sgn}(\xi\cdot\omega)\Big]\,d\sigma(\omega)
+ i\pi \int_{S^{n-1}} \Theta(\omega)\,\mathrm{sgn}(\xi\cdot\omega)\,d\sigma(\omega)
= m_\Theta(\xi),
\tag{4.5.23}
\]
proving (4.5.11). The proof of the theorem is complete.

Exercise 4.72. Complete the following outline aimed at extending the convolution product to the class of principal value distributions in R^n. Assume that Θ₁, Θ₂ are two given functions as in (4.4.1).

Step 1. Pick an arbitrary function ψ ∈ C₀^∞(R^n) with the property that ψ = 1 near the origin, and show that the following convolutions are meaningfully defined in S′(R^n):
\[
\begin{aligned}
u_{00} &:= \big(\psi\,\mathrm{P.V.}\,\Theta_1\big) * \big(\psi\,\mathrm{P.V.}\,\Theta_2\big),\\
u_{01} &:= \big(\psi\,\mathrm{P.V.}\,\Theta_1\big) * \big((1-\psi)\,\mathrm{P.V.}\,\Theta_2\big),\\
u_{10} &:= \big((1-\psi)\,\mathrm{P.V.}\,\Theta_1\big) * \big(\psi\,\mathrm{P.V.}\,\Theta_2\big),\\
u_{11} &:= \big((1-\psi)\,\mathrm{P.V.}\,\Theta_1\big) * \big((1-\psi)\,\mathrm{P.V.}\,\Theta_2\big).
\end{aligned}
\tag{4.5.24}
\]
For u₀₀, u₀₁, and u₁₀, use Proposition 4.63 and part (a) of Theorem 4.18. Show that u₁₁ = f₁ ∗ f₂, where f_j := (1 − ψ)Θ_j, j = 1, 2, are functions belonging to L²(R^n) (here the behavior of Θ₁, Θ₂ at infinity is relevant). Use Young's inequality to conclude that u₁₁ is meaningfully defined in L^∞(R^n).

Step 2. With the same cutoff function ψ as in Step 1, define
\[
\mathrm{P.V.}\,\Theta_1 * \mathrm{P.V.}\,\Theta_2 := u_{00} + u_{01} + u_{10} + u_{11} \quad\text{in } S'(\mathbb{R}^n),
\tag{4.5.25}
\]
and use this definition to show that
\[
F\big(\mathrm{P.V.}\,\Theta_1 * \mathrm{P.V.}\,\Theta_2\big) = m_{\Theta_1}\,m_{\Theta_2}
\quad\text{in } S'(\mathbb{R}^n),
\tag{4.5.26}
\]
where m_{Θ₁}, m_{Θ₂} are associated with Θ₁, Θ₂ as in (4.5.6). To do so, compute first û_{jk} for j, k ∈ {0, 1}, using part (c) in Theorem 4.33, Exercise 4.28, and Theorem 4.71.

Step 3. Use (4.5.26) to show that the definition in (4.5.25) is independent of the cutoff function ψ chosen at the beginning.
As an application, show that
\[
\mathrm{P.V.}\,\frac{1}{x} \,*\, \mathrm{P.V.}\,\frac{1}{x} = -\pi^2\,\delta \quad\text{in } S'(\mathbb{R}).
\tag{4.5.27}
\]
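A possible route to (4.5.27): with Θ(x) = 1/x, evaluating (4.5.6) over S⁰ = {±1} gives m_{1/x}(ξ) = −iπ sgn ξ, so (4.5.26) yields

```latex
F\Big(\mathrm{P.V.}\,\frac{1}{x} \,*\, \mathrm{P.V.}\,\frac{1}{x}\Big)
\;=\; m_{1/x}(\xi)^2 \;=\; \big(-\,i\,\pi\,\mathrm{sgn}\,\xi\big)^2 \;=\; -\,\pi^2
\quad\text{in } S'(\mathbb{R}),
```

and since F(δ) = 1, taking inverse Fourier transforms gives (4.5.27).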
4.6 Tempered Distributions Associated with |x|^{−n}
Let us consider the effect of dropping the cancellation condition in (4.4.1). Concretely, given a function Ψ ∈ C⁰(R^n \ {0}) that is positive homogeneous of degree −n, if one sets C := (1/ω_{n−1}) ∫_{S^{n−1}} Ψ dσ, then Ψ(x) = Ψ₀(x) + C|x|^{−n} for every x ∈ R^n \ {0}, where Ψ₀ satisfies (4.4.1). From Sect. 4.4 we know that one may associate to Ψ₀ the tempered distribution P.V. Ψ₀. As such, associating a tempered distribution to the original function Ψ hinges on how to meaningfully associate a tempered distribution to |x|^{−n}. In fact, we shall associate to |x|^{−n} a family of tempered distributions in the manner described below.

Recall the class of functions Q from (4.4.4), fix ψ ∈ Q, and define
\[
w_\psi(\varphi) := \int_{\mathbb{R}^n} \frac{1}{|x|^n}\,\big[\varphi(x) - \varphi(0)\psi(x)\big]\,dx,
\quad \forall\,\varphi\in S(\mathbb{R}^n).
\tag{4.6.1}
\]
This gives rise to a well-defined, linear mapping w_ψ : S(R^n) → C. In addition, much as in the proof of (4.4.7), there exists a constant C ∈ [0, ∞) with the property that
\[
|w_\psi(\varphi)| \le C \sup_{x\in\mathbb{R}^n,\ |\alpha|\le 1,\ |\beta|\le 1} \big|x^\alpha\,\partial^\beta\varphi(x)\big|,
\quad \forall\,\varphi\in S(\mathbb{R}^n).
\tag{4.6.2}
\]
In light of Fact 4.1 this shows that w_ψ is a tempered distribution. An inspection of (4.6.1) also reveals that w_ψ|_{R^n \ {0}} = |x|^{−n} in D′(R^n \ {0}). This is the sense in which we shall say that we have associated to |x|^{−n} the family of tempered distributions {w_ψ}_{ψ∈Q}.

Our next goal is to determine a formula for the Fourier transform of the tempered distribution in (4.6.1). As a preamble, note that the class Q is invariant under dilations (recall (4.3.5)); that is,
\[
\tau_t(\psi) \in \mathcal{Q}, \quad \forall\,t\in(0,\infty),\ \forall\,\psi\in\mathcal{Q}.
\tag{4.6.3}
\]
Theorem 4.73. For each ψ ∈ Q, the Fourier transform of the tempered distribution w_ψ defined in (4.6.1) is of function type and
\[
\widehat{w_\psi}(\xi) = -\omega_{n-1}\,\ln|\xi| + C_\psi, \quad \forall\,\xi\in\mathbb{R}^n\setminus\{0\},
\tag{4.6.4}
\]
for some constant C_ψ ∈ C.
Proof. Fix ψ ∈ Q. As noted earlier, w_ψ|_{R^n \ {0}} = 1/|x|^n, hence Proposition 4.58 applies and yields
\[
\widehat{w_\psi}\big|_{\mathbb{R}^n\setminus\{0\}} \in C^\infty(\mathbb{R}^n\setminus\{0\}).
\tag{4.6.5}
\]
To establish (4.6.4) we proceed by proving a series of claims.

Claim 1: ŵ_ψ is invariant under orthogonal transformations. Let A ∈ M_{n×n}(R) be an orthogonal matrix. Then
\[
\langle w_\psi\circ A, \varphi\rangle = \big\langle w_\psi, \varphi\circ A^{-1}\big\rangle
= \int_{\mathbb{R}^n} \frac{1}{|x|^n}\,\big[\varphi(A^{-1}x) - \varphi(0)\psi(A^{-1}x)\big]\,dx
= \langle w_\psi, \varphi\rangle, \quad \forall\,\varphi\in S(\mathbb{R}^n).
\tag{4.6.6}
\]
This shows that w_ψ is invariant under orthogonal transformations which, together with Proposition 4.47, implies that ŵ_ψ is also invariant under orthogonal transformations.

Claim 2: τ_t(w_ψ) = t^{−n} w_{τ_tψ} in S′(R^n). To see why this is true, for any t ∈ (0, ∞) and any ϕ ∈ S(R^n) write
\[
\langle \tau_t w_\psi, \varphi\rangle
= t^{-n}\,\big\langle w_\psi, \tau_{1/t}\varphi\big\rangle
= t^{-n}\int_{\mathbb{R}^n} \frac{1}{|x|^n}\Big[\varphi\Big(\frac{x}{t}\Big)-\varphi(0)\psi(x)\Big]\,dx
= \int_{\mathbb{R}^n} \frac{1}{t^n\,|y|^n}\,\big[\varphi(y)-\varphi(0)\psi(ty)\big]\,dy
= t^{-n}\,\big\langle w_{\tau_t\psi}, \varphi\big\rangle,
\tag{4.6.7}
\]
from which the desired identity follows.
Claim 3: For each t ∈ (0, ∞) we have
\[
\tau_t w_\psi - t^{-n} w_\psi
= t^{-n}\Big(\int_{\mathbb{R}^n} \frac{\psi(x)-\psi(tx)}{|x|^n}\,dx\Big)\,\delta
\quad\text{in } S'(\mathbb{R}^n).
\tag{4.6.8}
\]
To prove the current claim, consider ψ₁, ψ₂ ∈ Q and write
\[
\langle w_{\psi_1} - w_{\psi_2}, \varphi\rangle
= \int_{\mathbb{R}^n} \frac{\varphi(0)}{|x|^n}\,\big[\psi_2(x)-\psi_1(x)\big]\,dx
= \Big\langle \Big(\int_{\mathbb{R}^n} \frac{\psi_2(x)-\psi_1(x)}{|x|^n}\,dx\Big)\,\delta,\ \varphi\Big\rangle,
\quad \forall\,\varphi\in S(\mathbb{R}^n).
\tag{4.6.9}
\]
As a result,
\[
w_{\psi_1} - w_{\psi_2}
= \Big(\int_{\mathbb{R}^n} \frac{1}{|x|^n}\,\big[\psi_2(x)-\psi_1(x)\big]\,dx\Big)\,\delta
\quad\text{in } S'(\mathbb{R}^n).
\tag{4.6.10}
\]
Combining now the identity from Claim 2 and (4.6.10) yields
\[
\tau_t w_\psi - t^{-n} w_\psi
= t^{-n}\,\big[w_{\tau_t\psi} - w_\psi\big]
= t^{-n}\Big(\int_{\mathbb{R}^n} \frac{\psi(x)-\psi(tx)}{|x|^n}\,dx\Big)\,\delta
\quad\text{in } S'(\mathbb{R}^n)
\tag{4.6.11}
\]
for every t ∈ (0, ∞). This proves Claim 3.

Claim 4: The following identity is true:
\[
\int_{\mathbb{R}^n} \frac{\psi(x)-\psi(tx)}{|x|^n}\,dx = \omega_{n-1}\,\ln t,
\quad \forall\,t\in(0,\infty).
\tag{4.6.12}
\]
To set the stage for the proof of Claim 4, observe that there exists some C¹ function η : [0, ∞) → R, decaying at ∞ and satisfying η(0) = 1, such that ψ(x) = η(|x|) for every x ∈ R^n. In particular, (∇ψ)(x) = η′(|x|) x/|x| for every x ∈ R^n \ {0}. Focusing attention on (4.6.12), note that the intervening integral is absolutely convergent. In turn, this allows us to write, for each fixed t ∈ (0, ∞),
\[
\int_{\mathbb{R}^n} \frac{\psi(x)-\psi(tx)}{|x|^n}\,dx
= -\lim_{R\to\infty} \int_{|x|\le R} \frac{1}{|x|^n} \int_1^t \frac{d}{ds}\,\psi(sx)\,ds\,dx
= -\omega_{n-1} \lim_{R\to\infty} \int_1^t \int_0^R \frac{1}{\rho}\,\frac{d}{ds}\,\eta(s\rho)\,d\rho\,ds
\]
\[
= -\omega_{n-1} \lim_{R\to\infty} \int_1^t \frac{\eta(sR)-\eta(0)}{s}\,ds
= \omega_{n-1}\,\ln t,
\tag{4.6.13}
\]
as wanted.

Claim 5: The following identity holds:
\[
\tau_t\,\widehat{w_\psi} = \widehat{w_\psi} - \omega_{n-1}\,\ln t \ \text{ in } \mathbb{R}^n\setminus\{0\},
\quad \forall\,t\in(0,\infty).
\tag{4.6.14}
\]
To prove (4.6.14), we first combine (4.6.8) and (4.6.12) and obtain
\[
\tau_t w_\psi = t^{-n}\,w_\psi + \big(\omega_{n-1}\,t^{-n}\ln t\big)\,\delta
\quad\text{in } S'(\mathbb{R}^n),\quad \forall\,t\in(0,\infty).
\tag{4.6.15}
\]
Hence, for each t ∈ (0, ∞) and each ϕ ∈ S(R^n), we have
\[
\big\langle \tau_t\widehat{w_\psi}, \varphi\big\rangle
= t^{-n}\big\langle \widehat{w_\psi}, \tau_{1/t}\varphi\big\rangle
= \big\langle w_\psi, \tau_t\widehat{\varphi}\big\rangle
= t^{-n}\big\langle \tau_{1/t} w_\psi, \widehat{\varphi}\big\rangle
= t^{-n}\big\langle t^n w_\psi - (\omega_{n-1}\,t^n\ln t)\,\delta,\ \widehat{\varphi}\big\rangle
= \big\langle w_\psi - (\omega_{n-1}\ln t)\,\delta,\ \widehat{\varphi}\big\rangle
= \big\langle \widehat{w_\psi} - \omega_{n-1}\ln t,\ \varphi\big\rangle.
\tag{4.6.16}
\]
The first and third equalities in (4.6.16) are based on the definition of τ_t acting on S′(R^n), the second uses (4.3.7), while the fourth makes use of (4.6.15). Now (4.6.14) follows from (4.6.16) and the fact that ŵ_ψ|_{R^n \ {0}} ∈ C^∞(R^n \ {0}).

We are ready to complete the proof of Theorem 4.73. First, Claim 1 ensures that ŵ_ψ|_{R^n \ {0}} is constant on S^{n−1}, thus
\[
C_\psi := \widehat{w_\psi}\big|_{S^{n-1}}
\tag{4.6.17}
\]
is a well-defined complex number. Hence, given any ξ ∈ R^n \ {0}, taking t := 1/|ξ| in (4.6.14) yields
\[
C_\psi = \widehat{w_\psi}\Big(\frac{\xi}{|\xi|}\Big)
= \big(\tau_{1/|\xi|}\,\widehat{w_\psi}\big)(\xi)
= \widehat{w_\psi}(\xi) + \omega_{n-1}\,\ln|\xi|,
\tag{4.6.18}
\]
from which (4.6.4) follows.

Remark 4.74. It is possible to enlarge the scope of our earlier considerations by relaxing the hypotheses on the function ψ. Specifically, in place of (4.4.3) assume now that
\[
\psi : \mathbb{R}^n \to \mathbb{C} \text{ is radial, of class } C^1 \text{ near the origin, satisfies } \psi(0)=1,
\text{ and } \exists\,\varepsilon_o, C\in(0,\infty) \text{ such that } |\psi(x)|\le C\,(1+|x|)^{-\varepsilon_o}.
\tag{4.6.19}
\]
Such a function ψ still induces a tempered distribution as in (4.6.1), and we claim that formula (4.6.4) continues to be valid in this more general case as well. To justify this claim, observe that the proof of Theorem 4.73 goes through for ψ as in (4.6.19), albeit (4.6.12) requires more care, since we are now dropping the global C¹ requirement on η. However, this may be remedied via mollification, i.e., working with η_ε in place of η (constructed much as in (1.2.3)–(1.2.6)), then observing that if ψ_ε(x) := η_ε(|x|) then ψ_ε/ψ_ε(0) ∈ Q, hence
\[
\int_{\mathbb{R}^n} \frac{\psi_\varepsilon(x)-\psi_\varepsilon(tx)}{|x|^n}\,dx
= \omega_{n-1}\,\psi_\varepsilon(0)\,\ln t, \quad \forall\,t\in(0,\infty),
\tag{4.6.20}
\]
and finally passing to the limit ε → 0⁺ in (4.6.20).

Remark 4.74 applies, in particular, to the function ψ := χ_{B(0,1)}. This gives that
\[
\big\langle w_{\chi_{B(0,1)}}, \varphi\big\rangle
= \int_{|x|\le 1} \frac{\varphi(x)-\varphi(0)}{|x|^n}\,dx
+ \int_{|x|>1} \frac{\varphi(x)}{|x|^n}\,dx,
\quad \forall\,\varphi\in S(\mathbb{R}^n),
\tag{4.6.21}
\]
is a tempered distribution, F(w_{χ_{B(0,1)}}) ∈ S′(R^n) is of function type, and there exists a constant c ∈ C such that
\[
F\big(w_{\chi_{B(0,1)}}\big)(\xi) = -\omega_{n-1}\,\ln|\xi| + c, \quad \forall\,\xi\in\mathbb{R}^n\setminus\{0\}.
\tag{4.6.22}
\]
In the last part of this section we compute the constant c from (4.6.22) in the case n = 1. Before doing so we recall that Euler's constant γ is defined by
\[
\gamma := \lim_{k\to\infty} \Big[\sum_{j=1}^k \frac{1}{j} - \ln k\Big].
\tag{4.6.23}
\]
The number γ plays an important role in analysis, as it appears prominently in a number of basic formulas. Two such identities that are relevant for us here are as follows:
\[
\int_0^\infty e^{-x}\,\ln x\,dx = -\gamma,
\qquad
\int_0^\infty e^{-x^2}\,\ln x\,dx = -\frac{\sqrt{\pi}}{4}\,\big(\gamma + 2\ln 2\big).
\tag{4.6.24}
\]
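The two integrals in (4.6.24) are easy to confirm numerically; the sketch below assumes SciPy is available and uses numpy.euler_gamma for γ.

```python
import numpy as np
from scipy.integrate import quad

g = np.euler_gamma

# The two identities in (4.6.24), evaluated by adaptive quadrature.
i1, _ = quad(lambda x: np.exp(-x) * np.log(x), 0, np.inf)
i2, _ = quad(lambda x: np.exp(-x ** 2) * np.log(x), 0, np.inf)

expected1 = -g
expected2 = -np.sqrt(np.pi) / 4.0 * (g + 2.0 * np.log(2.0))
```

Both integrands have an integrable logarithmic singularity at the origin, which quad handles without special treatment.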
in S (R),
(4.6.25)
where wχ(−1,1) is defined in (4.6.21). To see why this is true, note that (4.6.22) written for n = 1 (in which case ωn−1 = 2) becomes F wχ(−1,1) (x) = −2 ln |x|+ c in S (R). Consequently, to obtain (4.6.25) it remains to prove that c = −2γ when n = 1. The idea for determining the value of c is to apply F wχ(−1,1) to the 2
Schwartz function e−x . First, from (4.6.22) we have " ! 2 2 F wχ(−1,1) , e−x = −2 ln |x| + c, e−x .
(4.6.26)
Second, we compute the term in the left hand-side of (4.6.26). Specifically, based on the definition of the Fourier transform of a tempered distribution, (3.2.6), and (4.6.21) we may write " $ ! % $ √ 2 2 % 2 F wχ(−1,1) , e−x = wχ(−1,1) , F (e−x ) = wχ(−1,1) , πe−x /4 =
√ π
2
|x|>1
√ =2 π
∞ 1/2
√ e−x /4 dx + π |x| 2
√ e−x dx + 2 π x
e−x
1/2
0
/4
−1
|x|
|x|≤1
2
dx
2
e−x − 1 dx =: I + II. x (4.6.27)
Going further, integrating by parts then changing variables gives ∞ √ √ 2 I = 2 πe−1/4 ln 2 + 4 π (ln x)e−x x dx √ √ = 2 πe−1/4 ln 2 + π
1/2 ∞
√ (ln x)e 1/ 2
−x
dx.
(4.6.28)
4.7. A GENERAL JUMP-FORMULA IN THE CLASS . . .
151
Regarding II, for each ε > 0 an integration by parts and then a change of variables yield
1/2
ε
2
2 e−x − 1 dx = −(e−1/4 − 1) ln 2 − (e−ε − 1) ln ε + 2 x
2
= −(e−1/4 − 1) ln 2 − (e−ε
1 − 1) ln ε + 2
1/2
2
(ln x)e−x x dx
ε
√ 1/ 2
√
(ln x)e−x dx.
ε
(4.6.29) Observe that since 2
lim+ (e−ε − 1) ln ε =
ε→0
ln ln y − ln(y − 1) 1 lim = 0, 2 y→∞ y
(4.6.30)
by taking the limit as ε → 0+ in (4.6.29) we may conclude that √ √ II = −2 π(e−1/4 − 1) ln 2 + π
√ 1/ 2
(ln x)e−x dx.
(4.6.31)
0
Hence, from (4.6.28), (4.6.31), and the first identity in (4.6.24), it follows that ∞ √ √ √ √ I + II = 2 π ln 2 + π (ln x)e−x dx = 2 π ln 2 − πγ. (4.6.32) 0
This takes care of the term in the left-hand side of (4.6.25). To compute the term in the right-hand side of (4.6.25), write 2 −x2 −x2 −2 ln |ξ| + c, e = −2 (ln |x|)e dx + c e−x dx R
= −4
∞
R
√ 2 (ln x)e−x dx + c π
0
√ √ = π(γ + 2 ln 2) + c π.
(4.6.33)
For the last equality in (4.6.33) we used the second identity in (4.6.24). A combination of (4.6.26), (4.6.27), (4.6.32), and (4.6.33), then implies c = −2γ, as desired.
4.7 A General Jump-Formula in the Class of Tempered Distributions
The aim in this section is to prove a very useful formula expressing the limits of certain sequences of tempered distributions {Φ_ε}_{ε≠0} as the index parameter ε ∈ R \ {0} approaches the origin from either side. The trademark feature (which also justifies the name) of this formula is the presence of a jump-term of the form ±Cδ (where the sign is correlated to sgn ε) in addition to a suitable principal value tempered distribution. A conceptually simple example of this phenomenon has been presented in Exercise 2.117, which our theorem contains as a simple special case (see Remark 4.79). With the notational convention that for points x ∈ R^n we write x = (x′, t), where x′ = (x₁, . . . , x_{n−1}) ∈ R^{n−1} and t ∈ R, our main result in this regard reads as follows.

Theorem 4.76. If Φ ∈ C⁴(R^n \ {0}) is odd and positive homogeneous of degree 1 − n, then
\[
\lim_{\varepsilon\to 0^\pm} \Phi(x',\varepsilon)
= \pm\,\frac{i}{2}\,\widehat{\Phi}(0',1)\,\delta(x') + \mathrm{P.V.}\,\Phi(x',0)
\quad\text{in } S'(\mathbb{R}^{n-1}).
\tag{4.7.1}
\]
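Before the comments and the proof, it is worth testing (4.7.1) on a classical example. Take n = 2 and Φ(x₁, x₂) = x₂/(x₁² + x₂²), which is odd and positive homogeneous of degree −1 = 1 − n. Here Φ(x′, 0) ≡ 0, while (4.3.25) gives Φ̂(ξ) = −2πi ξ₂/|ξ|², so Φ̂(0′, 1) = −2πi and the jump coefficient in (4.7.1) equals ±(i/2)(−2πi) = ±π. Thus (4.7.1) reduces to the classical statement that the Poisson-type kernel ε/(x² + ε²) tends to ±πδ as ε → 0^±, which is easy to probe numerically (a sketch assuming SciPy; the test function e^{−x²} is ad hoc):

```python
import numpy as np
from scipy.integrate import quad

def pair_with_kernel(eps, phi=lambda x: np.exp(-x ** 2)):
    # int_R Phi(x', eps) * phi(x') dx'  with  Phi(x', t) = t / (x'^2 + t^2)
    val, _ = quad(lambda x: eps / (x ** 2 + eps ** 2) * phi(x), -10.0, 10.0,
                  points=[0.0], limit=200)
    return val

plus = pair_with_kernel(1e-3)    # approaches +pi * phi(0') as eps -> 0+
minus = pair_with_kernel(-1e-3)  # approaches -pi * phi(0') as eps -> 0-
```

The sign flip between the two one-sided limits is exactly the jump phenomenon that Theorem 4.76 quantifies.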
A few comments before presenting the proof of this result are in order. First, above we have employed the earlier convention of writing u(x′) for a distribution u in R^{n−1} simply to stress that the test functions to which u is applied are considered in the variable x′ ∈ R^{n−1}. Second, for each fixed ε ∈ R \ {0}, applying Exercise 4.51 yields
\[
|\Phi(x',\varepsilon)| \le \|\Phi\|_{L^\infty(S^{n-1})}\,\big(|x'|^2+\varepsilon^2\big)^{-(n-1)/2}
\quad\text{for each } x'\in\mathbb{R}^{n-1}.
\tag{4.7.2}
\]
Having observed this, the discussion in Example 4.3 then shows Φ(·, ε) ∈ S′(R^{n−1}). Third, it is worth recalling an earlier convention to the effect that, given a family of tempered distributions u_ε indexed by ε ∈ I, with I = (a, b) ⊆ R an open interval, we say that u_ε → u ∈ D′(R^n) in S′(R^n) as I ∋ ε → a⁺ provided u_{ε_j} → u in S′(R^n) for every sequence {ε_j}_{j∈N} ⊆ I such that ε_j → a⁺ as j → ∞. In particular, this interpretation is in effect for (4.7.1). Fourth, from Exercise 4.52 we know that Φ ∈ S′(R^n). In addition, Exercise 4.60 applied to Φ shows that Φ̂|_{R^n \ {0}} ∈ C¹(R^n \ {0}). In particular, Φ̂(0′, 1) is meaningfully defined. Fifth, Φ(·, 0) (viewed as a function in R^{n−1} \ {0′}) is continuous and positive homogeneous of degree −(n − 1) in R^{n−1} \ {0′} and, being odd, satisfies ∫_{S^{n−2}} Φ(·, 0) dσ = 0. As such, the conditions in (4.4.1) are satisfied (with n − 1 in place of n), hence Proposition 4.63 ensures that P.V. Φ(·, 0) is a well-defined tempered distribution in R^{n−1}.

Proof of Theorem 4.76. Assume Φ ∈ C⁴(R^n \ {0}) is odd and positive homogeneous of degree 1 − n, and let ϕ ∈ S(R^{n−1}). Then, for any fixed ε > 0, write
\[
\lim_{t\to 0^+} \int_{\mathbb{R}^{n-1}} \Phi(x',t)\,\varphi(x')\,dx'
= \lim_{t\to 0^+} \int_{|x'|>\varepsilon} \Phi(x',t)\,\varphi(x')\,dx'
+ \lim_{t\to 0^+} \int_{|x'|<\varepsilon} \Phi(x',t)\,\big[\varphi(x')-\varphi(0')\big]\,dx'
+ \varphi(0') \lim_{t\to 0^+} \int_{|x'|<\varepsilon} \Phi(x',t)\,dx'
=: I_\varepsilon + II_\varepsilon + III_\varepsilon.
\tag{4.7.3}
\]
Note that, making the change of variable y′ := x′/t, we obtain
\[
III_\varepsilon = \varphi(0') \lim_{t\to 0^+} \int_{|y'|<\varepsilon/t} \Phi(y',1)\,dy'
= \varphi(0') \lim_{r\to\infty} \int_{|y'|<r} \Phi(y',1)\,dy',
\tag{4.7.4}
\]
which is independent of ε. Also, since for every x′ ∈ R^{n−1} and t ∈ R \ {0},
\[
|\Phi(x',t)| \le \frac{\|\Phi\|_{L^\infty(S^{n-1})}}{|(x',t)|^{n-1}} \le \frac{\|\Phi\|_{L^\infty(S^{n-1})}}{|x'|^{n-1}},
\tag{4.7.5}
\]
\[
|\varphi(x')-\varphi(0')| \le \|\nabla\varphi\|_{L^\infty(\mathbb{R}^{n-1})}\,|x'|,
\tag{4.7.6}
\]
it follows that |II_ε| ≤ Cε, hence
\[
\lim_{\varepsilon\to 0^+} II_\varepsilon = 0.
\tag{4.7.7}
\]
Finally, it is clear from Lebesgue's dominated convergence theorem that
\[
I_\varepsilon = \int_{|x'|>\varepsilon} \Phi(x',0)\,\varphi(x')\,dx'.
\tag{4.7.8}
\]
By passing to the limit ε → 0⁺ we therefore conclude that for any ϕ ∈ S(R^{n−1})
\[
\lim_{t\to 0^+} \int_{\mathbb{R}^{n-1}} \Phi(x',t)\,\varphi(x')\,dx'
= \lim_{\varepsilon\to 0^+} \int_{|x'|>\varepsilon} \Phi(x',0)\,\varphi(x')\,dx'
+ \varphi(0') \lim_{r\to\infty} \int_{|x'|<r} \Phi(x',1)\,dx'.
\tag{4.7.9}
\]
To proceed, make the change of variable x′ → −x′ in each integral of (4.7.9) and use the fact that Φ is odd. After re-denoting t by −t and ϕ∨ by ϕ, this yields the identity
\[
\lim_{t\to 0^-} \int_{\mathbb{R}^{n-1}} \Phi(x',t)\,\varphi(x')\,dx'
= \lim_{\varepsilon\to 0^+} \int_{|x'|>\varepsilon} \Phi(x',0)\,\varphi(x')\,dx'
+ \varphi(0') \lim_{r\to\infty} \int_{|x'|<r} \Phi(x',-1)\,dx',
\tag{4.7.10}
\]
for any ϕ ∈ S(R^{n−1}). Collectively, (4.7.9) and (4.7.10) may then be written as
\[
\lim_{t\to 0^\pm} \int_{\mathbb{R}^{n-1}} \Phi(x',t)\,\varphi(x')\,dx'
= a_\pm\,\varphi(0') + \lim_{\varepsilon\to 0^+} \int_{|x'|>\varepsilon} \Phi(x',0)\,\varphi(x')\,dx',
\quad \forall\,\varphi\in S(\mathbb{R}^{n-1}),
\tag{4.7.11}
\]
where
\[
a_\pm := \lim_{r\to\infty} \int_{|x'|<r} \Phi(x',\pm 1)\,dx'.
\tag{4.7.12}
\]
Regarding (4.7.12), we note that the limits
\[
\lim_{r\to\infty} \int_{|x'|<r} \Phi(x',\pm 1)\,dx' \quad\text{exist in } \mathbb{C}.
\tag{4.7.13}
\]
To see that this is the case, first observe that by Exercise 4.51,
\[
\int_{|x'|<r} |\Phi(x',t)|\,dx' < \infty, \quad \forall\,t\ne 0,\ \forall\,r>0.
\tag{4.7.14}
\]
Consider the case of the choice of the sign "plus" in (4.7.13) (the case of the sign "minus" is treated analogously). For this choice of sign we write for each fixed r > 0
\[
\int_{|x'|<r} \Phi(x',1)\,dx'
= \frac{1}{2}\int_{|x'|<r} \Phi(x',1)\,dx' + \frac{1}{2}\int_{|x'|<r} \Phi(x',1)\,dx'
= \frac{1}{2}\int_{|x'|<r} \Phi(x',1)\,dx' - \frac{1}{2}\int_{|x'|<r} \Phi(x',-1)\,dx'
= \frac{1}{2}\int_{|x'|<r} \big[\Phi(x',1)-\Phi(x',-1)\big]\,dx',
\tag{4.7.15}
\]
where we have used the fact that Φ is odd. Next, the mean value theorem and the fact that ∇Φ is positive homogeneous of degree −n allow us to estimate
\[
|\Phi(x',1)-\Phi(x',-1)| \le \frac{2\,\|\nabla\Phi\|_{L^\infty(S^{n-1})}}{|x'|^n}
\quad\text{for } |x'| \text{ large},
\tag{4.7.16}
\]
which implies that
\[
\int_{\mathbb{R}^{n-1}} \big|\Phi(x',1)-\Phi(x',-1)\big|\,dx' < \infty.
\tag{4.7.17}
\]
Consequently, by Lebesgue's dominated convergence theorem
\[
\lim_{r\to\infty} \int_{|x'|<r} \Phi(x',1)\,dx'
= \frac{1}{2}\int_{\mathbb{R}^{n-1}} \big[\Phi(x',1)-\Phi(x',-1)\big]\,dx' \in \mathbb{C},
\tag{4.7.18}
\]
proving (4.7.13).

There remains to identify the actual values of a± and we organize the remainder of the proof as a series of claims, starting with:

Claim 1. If u₀ ∈ E′(R^n) and u₁ ∈ L¹(R^n), then
\[
u := u_0 + u_1 \quad\text{has the property that}\quad \widehat{u} \in C^0(\mathbb{R}^n).
\tag{4.7.19}
\]
Proof of Claim 1. First note that û = û₀ + û₁. Then, since û₀ ∈ C^∞(R^n) by part (b) in Theorem 4.33, and û₁ ∈ C⁰(R^n) by Lemma 4.27, it follows that û ∈ C⁰(R^n).

Claim 2. Assume that ξ_n ≠ 0 and that ξ′ ∈ R^{n−1} is such that |ξ′| ≤ C′ for some C′ ∈ (0, ∞). Also, suppose that Θ ∈ C¹(R^n \ {0}) satisfies
\[
\Theta(\lambda x) = \lambda^{-1}\,\Theta(x), \quad \forall\,x\in\mathbb{R}^n\setminus\{0\},\ \forall\,\lambda\in\mathbb{R}\setminus\{0\}.
\tag{4.7.20}
\]
Then there exists C ∈ (0, ∞) independent of ξ_n such that
\[
|\Theta(\xi',-\xi_n) + \Theta(0',\xi_n)| \le \frac{C}{|\xi_n|^2}.
\tag{4.7.21}
\]
Proof of Claim 2. From (4.7.20) it follows that (∇Θ)(λx) = λ^{−2}(∇Θ)(x) for every x ∈ R^n \ {0} and every λ ∈ R \ {0}. Based on (4.7.20) and this observation, we may estimate
\[
|\Theta(\xi',-\xi_n) + \Theta(0',\xi_n)|
= \frac{1}{|\xi_n|}\,\big|\Theta(-\xi'/\xi_n, 1) - \Theta(0',1)\big|
\le \frac{1}{|\xi_n|}\cdot\Big|\frac{\xi'}{\xi_n}\Big|\cdot \sup_{t\in[0,1]} \big|(\nabla\Theta)(-t\xi'/\xi_n, 1)\big|
\le \frac{1}{|\xi_n|}\cdot\Big|\frac{\xi'}{\xi_n}\Big|\cdot \sup_{t\in[0,1]} \frac{\|\nabla\Theta\|_{L^\infty(S^{n-1})}}{|(-t\xi'/\xi_n, 1)|^2}
\le \frac{C'\,\|\nabla\Theta\|_{L^\infty(S^{n-1})}}{|\xi_n|^2},
\tag{4.7.22}
\]
where the last line uses the assumption that |ξ′| ≤ C′. Since ∇Θ is continuous, hence bounded, on S^{n−1}, the desired result follows by taking C := C′‖∇Θ‖_{L^∞(S^{n−1})} < ∞.
Claim 3. If ϕ ∈ S(R^{n−1}) is such that ϕ̂ has compact support, then

F(ξ_n) := −(2π)^{1−n} ∫_{R^{n−1}} [Φ̂(ξ′, −ξ_n) + Φ̂(0′, ξ_n)] ϕ̂(ξ′) dξ′  (4.7.23)

is well-defined and integrable on R \ [−1, 1].

Proof of Claim 3. From Proposition 4.57 it follows that Φ̂ is a positive homogeneous tempered distribution of degree −n − (1 − n) = −1. This readily implies that Φ̂|_{R^n\{0}}, viewed as a continuous function in R^n \ {0}, is also positive homogeneous of degree −1. Moreover, from (4.2.30) we deduce that Φ̂ is an odd function in R^n \ {0}. As a consequence, the function Θ := Φ̂|_{R^n\{0}} ∈ C^1(R^n \ {0}) satisfies (4.7.20). Hence, Claim 2 applies to this Θ and, granted the compact support condition on ϕ̂, it follows that there exists C ∈ (0, ∞) such that

|F(ξ_n)| ≤ C/|ξ_n|²,  ∀ ξ_n ≠ 0,  (4.7.24)

from which the desired conclusion follows.

Claim 4. Assume ϕ ∈ S(R^{n−1}) is such that ϕ̂ has compact support. Then the function

f(t) := ∫_{R^{n−1}} Φ(x′, t)ϕ(x′) dx′ + (1/2i)(sgn t) Φ̂(0′, 1)ϕ(0′),  (4.7.25)

originally defined for t ∈ R \ {0}, has a continuous extension to all of R.

Proof of Claim 4. Decompose f = f_1 + f_2 where f_1(t) := ∫_{R^{n−1}} Φ(x′, t)ϕ(x′) dx′ and f_2(t) := (1/2i)(sgn t) Φ̂(0′, 1)ϕ(0′) for each t ∈ R \ {0}. From (12.4.18) and (3.2.10) we know that

ŝgn(ξ_n) = −2i P.V.(1/ξ_n),  and  ϕ(0′) = (2π)^{1−n} ∫_{R^{n−1}} ϕ̂(ξ′) dξ′.  (4.7.26)

Also, Φ̂ is odd and positive homogeneous of degree −1 in R^n \ {0}, and since P.V.(1/ξ_n) restricted to R \ {0} is simply 1/ξ_n, it follows that (1/ξ_n) Φ̂(0′, 1) = Φ̂(0′, ξ_n) on R \ {0}. Keeping these in mind we conclude that

f̂_2(ξ_n) = −(2π)^{1−n} ∫_{R^{n−1}} Φ̂(0′, ξ_n) ϕ̂(ξ′) dξ′  for ξ_n ∈ R \ {0}.  (4.7.27)
As far as f_1 is concerned, observe first that f_1 ∈ S′(R). Indeed, for every ψ ∈ S(R) we have ⟨f_1, ψ⟩ = ⟨Φ, ϕ ⊗ ψ⟩. Since ϕ ⊗ ψ ∈ S(R^n) and, as already
4.7. A GENERAL JUMP-FORMULA IN THE CLASS . . .
noted, Φ ∈ S′(R^n), the desired conclusion follows. The next order of business is to compute the Fourier transform of f_1. With this goal in mind, pick an arbitrary ψ ∈ S(R) and write (keeping in mind Exercise 3.26)

⟨f̂_1, ψ⟩ = ⟨f_1, ψ̂⟩ = ⟨Φ, ϕ ⊗ ψ̂⟩ = (2π)^{1−n} ⟨Φ, F(ϕ̂^∨ ⊗ ψ)⟩ = (2π)^{1−n} ⟨Φ̂, ϕ̂^∨ ⊗ ψ⟩
  = ⟨(2π)^{1−n} ∫_{R^{n−1}} Φ̂(ξ′, ξ_n) ϕ̂(−ξ′) dξ′, ψ(ξ_n)⟩,  (4.7.28)

which proves that

f̂_1(ξ_n) = (2π)^{1−n} ∫_{R^{n−1}} Φ̂(ξ′, ξ_n) ϕ̂(−ξ′) dξ′  for ξ_n ∈ R \ {0}.  (4.7.29)
By combining (4.7.27) with (4.7.29) we arrive at the conclusion that, for ξ_n ∈ R \ {0},

f̂(ξ_n) = (2π)^{1−n} ∫_{R^{n−1}} Φ̂(ξ′, ξ_n) ϕ̂(−ξ′) dξ′ − (2π)^{1−n} ∫_{R^{n−1}} Φ̂(0′, ξ_n) ϕ̂(ξ′) dξ′
  = −(2π)^{1−n} ∫_{R^{n−1}} Φ̂(ξ′, −ξ_n) ϕ̂(ξ′) dξ′ − (2π)^{1−n} ∫_{R^{n−1}} Φ̂(0′, ξ_n) ϕ̂(ξ′) dξ′
  = F(ξ_n),  (4.7.30)
where the second equality uses the fact that Φ̂ is odd in R^n \ {0}. Hence, f̂ = F on R \ {0}, where F is defined in Claim 3. Now select θ ∈ C_0^∞(R) with θ ≡ 1 on [−1, 1] and write

f̂ = (1 − θ)f̂ + θf̂.  (4.7.31)

Since (1 − θ)f̂ = (1 − θ)F ∈ L^1(R) by Claim 3, and θf̂ ∈ E′(R), we may conclude from Claim 1 that the Fourier transform of f̂ belongs to C^0(R) hence, ultimately, that f itself belongs to C^0(R). This completes the proof of Claim 4.

Claim 5. Assume that ϕ ∈ S(R^{n−1}) is such that ϕ̂ has compact support. Then

lim_{t→0+} ∫_{R^{n−1}} Φ(x′, t)ϕ(x′) dx′ − lim_{t→0−} ∫_{R^{n−1}} Φ(x′, t)ϕ(x′) dx′ = i Φ̂(0′, 1)ϕ(0′).  (4.7.32)

Proof of Claim 5. This follows from Claim 4 by writing, for each t ∈ R \ {0},

∫_{R^{n−1}} Φ(x′, t)ϕ(x′) dx′ = −(1/2i)(sgn t) Φ̂(0′, 1)ϕ(0′) + f(t)  (4.7.33)

with f continuous on R.

Claim 6. For a± originally defined in (4.7.12) one has

a± = ±(i/2) Φ̂(0′, 1).  (4.7.34)
Proof of Claim 6. Let ψ ∈ C_0^∞(R^{n−1}) be such that

∫_{R^{n−1}} ψ(x′) dx′ = 1,  (4.7.35)

and set ϕ := ψ̂. Then ϕ ∈ S(R^{n−1}), ϕ̂ = (2π)^{n−1} ψ^∨ has compact support, and

ϕ(0′) = ψ̂(0′) = ∫_{R^{n−1}} ψ(x′) dx′ = 1.  (4.7.36)

From (4.7.11) and Claim 5, one obtains

i Φ̂(0′, 1) = a_+ − a_−.  (4.7.37)

However, since Φ is odd, from the definition of a± in (4.7.12) we see that a_− = −a_+. This forces the equalities in (4.7.34), and finishes the proof of Claim 6.

At this stage, we note that (4.7.1) is a consequence of (4.7.11) and (4.7.34). The proof of Theorem 4.76 is therefore complete.

The proof just completed offers a bit more and, below, we bring to the forefront one such by-product.

Proposition 4.77. Assume that Φ ∈ C^4(R^n \ {0}) is odd and positive homogeneous of degree 1 − n. Also, let ξ ∈ S^{n−1} be arbitrary and set H_ξ := {x ∈ R^n : x · ξ = 0}. Then

Φ̂(ξ) = −2i lim_{r→∞} ∫_{x∈H_ξ, |x|<r} Φ(x + ξ) dσ(x),  (4.7.38)

where σ denotes the surface measure on H_ξ (viewed as a surface in R^n).

Proof. We start by observing that (4.7.12) and (4.7.34) imply that

Ψ̂(e_n) = −2i lim_{r→∞} ∫_{x′∈R^{n−1}, |x′|<r} Ψ((x′, 0) + e_n) dx′  (4.7.39)

for each function Ψ of class C^4, odd, positive homogeneous of degree 1 − n, in R^n \ {0}. To proceed, fix a vector ξ ∈ S^{n−1} and denote by H_ξ the hyperplane in R^n orthogonal to ξ. Consider then an orthogonal matrix A ∈ M_{n×n}(R) satisfying

A H_ξ = R^{n−1} × {0}  and  Aξ = e_n.  (4.7.40)

For example, if v_1, ..., v_{n−1} is an orthonormal basis in H_ξ and we set v_n := ξ, then the conditions A v_j = e_j for j ∈ {1, ..., n} define an orthogonal matrix A such that the conditions in (4.7.40) hold. If we now introduce Ψ : R^n \ {0} → C by setting Ψ := Φ ∘ A^{−1} then Ψ ∈ C^4(R^n \ {0}) is odd, positive homogeneous of
degree 1 − n, and Ψ̂ = Φ̂ ∘ A^{−1} [as seen from (4.3.3)]. Using these observations and (4.7.39), we may write

Φ̂(ξ) = Ψ̂(Aξ) = Ψ̂(e_n) = −2i lim_{r→∞} ∫_{x′∈R^{n−1}, |x′|<r} Ψ((x′, 0) + e_n) dx′.  (4.7.41)

Given a number r ∈ (0, ∞), consider now the surface Σ := {x ∈ H_ξ : |x| < r} and note that if O := {x′ ∈ R^{n−1} : |x′| < r} then

P : O → Σ,  P(x′) := A^{−1}(x′, 0) for each x′ ∈ O,  (4.7.42)

is a global C^∞ parametrization of Σ, satisfying |∂_1 P × ··· × ∂_{n−1} P| = 1 at each point in the set O. Consequently, (13.6.6) gives

∫_{x∈H_ξ, |x|<r} Φ(x + ξ) dσ(x) = ∫_O Φ(P(x′) + ξ) dx′
  = ∫_O Φ(A^{−1}(x′, 0) + A^{−1}e_n) dx′
  = ∫_{x′∈R^{n−1}, |x′|<r} Ψ((x′, 0) + e_n) dx′.  (4.7.43)

At this stage, (4.7.38) is clear from (4.7.41) and (4.7.43).

The power of Theorem 4.76 is most apparent by considering the consequence discussed in Corollary 4.78 below, which sheds light on the boundary behavior, of the form lim_{t→0+} F^±(x′, t), of functions of the following type
F^±(x′, t) := ∫_{R^{n−1}} Φ(x′ − y′, ±t)ϕ(y′) dy′,  (x′, t) ∈ R^n_+,  (4.7.44)

where Φ is as in the statement of Theorem 4.76 and ϕ is a Schwartz function on R^{n−1} (which is canonically identified with ∂R^n_+). Functions of the form (4.7.44) play a prominent role in partial differential equations and harmonic analysis as they arise naturally in the treatment of boundary value problems via boundary integral methods. We shall return to this topic in Sect. 11.6.

Corollary 4.78. Let the function Φ ∈ C^4(R^n \ {0}) be odd and positive homogeneous of degree 1 − n, and assume that ϕ ∈ S(R^{n−1}). Then for every x′ ∈ R^{n−1} one has

lim_{t→0±} ∫_{R^{n−1}} Φ(x′ − y′, t)ϕ(y′) dy′ = ±(i/2) Φ̂(0′, 1)ϕ(x′) + lim_{ε→0+} ∫_{y′∈R^{n−1}, |x′−y′|>ε} Φ(x′ − y′, 0)ϕ(y′) dy′.  (4.7.45)
Proof. Given any ϕ ∈ S(R^{n−1}) and any x′ ∈ R^{n−1}, write

lim_{t→0±} ∫_{R^{n−1}} Φ(x′ − y′, t)ϕ(y′) dy′ = lim_{t→0±} ∫_{R^{n−1}} Φ(z′, t)ϕ(x′ − z′) dz′
  = lim_{t→0±} ⟨Φ(·, t), ϕ(x′ − ·)⟩
  = ±(i/2) Φ̂(0′, 1)ϕ(x′) + ⟨P.V. Φ(·, 0), ϕ(x′ − ·)⟩,  (4.7.46)

then notice that

⟨P.V. Φ(·, 0), ϕ(x′ − ·)⟩ = lim_{ε→0+} ∫_{z′∈R^{n−1}, |z′|>ε} Φ(z′, 0)ϕ(x′ − z′) dz′
  = lim_{ε→0+} ∫_{y′∈R^{n−1}, |x′−y′|>ε} Φ(x′ − y′, 0)ϕ(y′) dy′.  (4.7.47)
Now (4.7.45) follows from (4.7.46) and (4.7.47).

Remark 4.79. Consider the function Φ : R² \ {(0, 0)} → C given by Φ(x, y) := 1/(x + iy) for all (x, y) ∈ R² \ {(0, 0)}. Then Φ is odd and homogeneous of degree −1. Moreover, since under the canonical identification R² ≡ C the function Φ becomes Φ(z) = 1/z for z ∈ C \ {0}, it follows that Φ̂(ξ) = (2π/i)·(1/ξ) for all ξ ∈ C \ {0} (for details, see Proposition 7.33). In particular, this yields Φ̂(0, 1) = (2π/i)·(1/i) = −2π. Consequently, (4.7.1) becomes in this case

lim_{ε→0+} 1/(x ± iε) = ∓iπ δ + P.V.(1/x)  in S′(R),  (4.7.48)

which is in agreement with Sokhotsky's formula from (2.10.3).

Theorem 4.76 also suggests a natural procedure for computing the Fourier transform of certain principal value distributions. While the latter topic has been treated in Sect. 4.5, where the general formula (4.5.10) has been established, such a procedure remains of interest since the integral in (4.5.6) may not always be readily computed from scratch. Specifically, we have the following result.

Corollary 4.80. Assume that the function Φ ∈ C^4(R^n \ {0}) is odd and positive homogeneous of degree 1 − n. Then

F(P.V. Φ(·, 0)) = ∓(i/2) Φ̂(0′, 1) + lim_{ε→0±} F(Φ(·, ε))  in S′(R^{n−1}),  (4.7.49)

where F denotes the Fourier transform in R^{n−1}.
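The Sokhotsky-type formula (4.7.48) lends itself to a direct numerical sanity check (added here as an illustration, not part of the original text): pairing 1/(x + iε) with an even Schwartz function makes the principal value term vanish, so the result should be approximately −iπϕ(0) for small ε. The sketch below, assuming NumPy, uses ϕ(x) = e^{−x²}:

```python
import numpy as np

eps = 1e-3
x = np.linspace(-10.0, 10.0, 2_000_001)   # grid spacing 1e-5, much finer than eps
dx = x[1] - x[0]
phi = np.exp(-x**2)                        # even test function: P.V. term drops out

# Riemann sum for the pairing <1/(x + i*eps), phi>
lhs = (phi / (x + 1j * eps)).sum() * dx
print(lhs)                                 # expected near -i*pi*phi(0) = -i*pi
```

The real part vanishes by symmetry, while the imaginary part approaches −π as ε → 0+, in line with (4.7.48) with the "+" sign.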
Proof. Formula (4.7.49) is a direct consequence of Theorem 4.76, the continuity of F on S′(R^{n−1}), and the fact that Fδ(x′) = 1.

Here is an example implementing this procedure in a case that is going to be useful later on, when discussing the Riesz transforms in R^n.

Proposition 4.81. For each j ∈ {1, ..., n} we have (with F denoting the Fourier transform in R^n)

F(P.V. x_j/|x|^{n+1}) = −(iω_n/2)·(ξ_j/|ξ|)  in S′(R^n).  (4.7.50)

Proof. Fix some j ∈ {1, ..., n} and consider the function defined by

Φ(x, t) := x_j/(|x|² + t²)^{(n+1)/2},  ∀ (x, t) ∈ R^{n+1} \ {0}.  (4.7.51)

Note that Φ is C^∞, odd, and positive homogeneous of degree −n in R^{n+1} \ {0}. Moreover, if hat denotes the Fourier transform in R^{n+1}, from Corollary 4.62 we have

Φ̂(ξ, η) = −iω_n ξ_j/(|ξ|² + η²)  in S′(R^{n+1}),  (4.7.52)

from which we see that Φ̂(0, 1) = 0. Keeping this in mind, formula (4.7.49) (used with n + 1 in place of n) then yields

F(P.V. x_j/|x|^{n+1}) = lim_{t→0+} F_x(Φ(x, t))  in S′(R^n),  (4.7.53)

where F_x denotes the Fourier transform in the variable x in R^n. Next, recall the discussion at the end of Sect. 4.2 regarding partial Fourier transforms and compute

F_x(Φ(x, t))(ξ) = F_η^{−1}(Φ̂(ξ, η))(t) = −iω_n (2π)^{−1} ξ_j F_η(1/(|ξ|² + η²))(t)
  = −(iω_n/2)·(ξ_j/|ξ|) e^{−|ξ||t|}  in S′(R^n),  (4.7.54)

where the last equality makes use of (4.2.20). Since by Lebesgue's dominated convergence theorem

lim_{t→0} (ξ_j/|ξ|) e^{−|ξ||t|} = ξ_j/|ξ|  in S′(R^n),  (4.7.55)

formula (4.7.50) now follows from (4.7.53)–(4.7.55).
Remark 4.82. Alternatively, one may prove (4.7.50) by combining (4.3.19) and (4.4.19). Indeed, if j ∈ {1, ..., n} is fixed, identity (4.4.19) written for Φ(x) := (1/(1−n)) |x|^{−(n−1)}, x ∈ R^n \ {0}, becomes

(1/(1−n)) ∂_j |x|^{−(n−1)} = P.V. x_j/|x|^{n+1}  in S′(R^n).  (4.7.56)

Hence, taking the Fourier transform of (4.7.56), then using (b) in Theorem 4.25, then (4.3.19) with λ := n − 1, and formulas (13.5.2) and (13.5.6), we obtain

F(P.V. x_j/|x|^{n+1}) = (1/(1−n)) F(∂_j |x|^{−(n−1)}) = (i/(1−n)) ξ_j F(|x|^{−(n−1)})
  = −(i/(n−1))·(2π^{n/2} Γ(1/2)/Γ((n−1)/2))·(ξ_j/|ξ|) = −(iω_n/2)·(ξ_j/|ξ|)  in S′(R^n).  (4.7.57)

Corollary 4.80 may also be combined with Theorem 4.71 to obtain the following result pertaining to certain limits of Fourier transforms of tempered distributions.

Corollary 4.83. Suppose that Φ ∈ C^4(R^n \ {0}) is odd and positive homogeneous of degree 1 − n. Then, in S′(R^{n−1}),

lim_{ε→0±} F(Φ(·, ε))(ξ′) = ±(i/2) Φ̂(0′, 1) − ∫_{S^{n−2}} Φ(θ, 0) log(i(ξ′ · θ)) dσ(θ),  (4.7.58)

where F denotes the Fourier transform in R^{n−1} and S^{n−2} denotes the unit sphere centered at the origin in R^{n−1}.

Proof. This follows from Corollary 4.80, (4.5.10), and (4.5.6).
4.8
The Harmonic Poisson Kernel
The goal of this section is to introduce and discuss the harmonic Poisson kernel.

Definition 4.84. Define the harmonic Poisson kernel P : R^n \ {0} → R by setting

P(x′, x_n) := (2/ω_{n−1})·(x_n/|x|^n),  ∀ x = (x′, x_n) ∈ R^n \ {0}.  (4.8.1)

Furthermore, for each x′ ∈ R^{n−1} set p(x′) := P(x′, 1), that is, consider

p(x′) := (2/ω_{n−1})·1/(1 + |x′|²)^{n/2},  ∀ x′ ∈ R^{n−1}.  (4.8.2)
In our next result we discuss the boundary behavior of a mapping P defined below, taking Schwartz functions from Rn−1 into functions defined in Rn± , which in partial differential equations is referred to as the harmonic double layer potential operator.
Proposition 4.85. Given any ϕ ∈ S(R^{n−1}) and any x = (x′, x_n) ∈ R^n with x_n ≠ 0, define (where P denotes the harmonic Poisson kernel)

(Pϕ)(x) := ∫_{R^{n−1}} P(x′ − y′, x_n)ϕ(y′) dy′ = (2/ω_{n−1}) ∫_{R^{n−1}} x_n/(|x′ − y′|² + x_n²)^{n/2} ϕ(y′) dy′.  (4.8.3)

Then for each x′ ∈ R^{n−1} one has

lim_{x_n→0±} (Pϕ)(x′, x_n) = ±ϕ(x′).  (4.8.4)

Proof. The function P : R^n \ {0} → R from (4.8.1) is C^∞, odd, and positive homogeneous of degree 1 − n. Moreover, Corollary 4.62 gives

P̂(ξ) = −2i ξ_n/|ξ|²  in S′(R^n).  (4.8.5)
Consequently, P̂(0′, 1) = −2i. Also, since P(x′, 0) = 0 we have P.V. P(x′, 0) = 0. At this point, Corollary 4.78 applies (with P playing the role of Φ) and yields (4.8.4).

Remark 4.86. When specializing Theorem 4.76 to the case Φ := P, with P defined as in (4.8.1), we obtain (making use of (4.8.5))

lim_{x_n→0±} P(x′, x_n) = ±δ(x′)  in S′(R^{n−1}).
(4.8.6)
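For n = 2 the jump relation (4.8.4) can be illustrated numerically (this check is an addition, not part of the original text): the kernel becomes P(x′, t) = (1/π)·t/(x′² + t²), and convolving it with a Gaussian boundary datum should approach ±ϕ(x′) as t → 0±. A minimal sketch assuming NumPy, with ad hoc grid parameters:

```python
import numpy as np

y = np.linspace(-30.0, 30.0, 600_001)
dy = y[1] - y[0]
phi = np.exp(-y**2)

def P_phi(x, t):
    # (4.8.3) specialized to n = 2, where P(x', t) = (1/pi) * t/(x'^2 + t^2)
    kern = (1.0 / np.pi) * t / ((x - y)**2 + t**2)
    return (kern * phi).sum() * dy

x0, t = 0.5, 1e-2
up, down = P_phi(x0, t), P_phi(x0, -t)
print(up, down, np.exp(-x0**2))   # up ~ phi(x0), down ~ -phi(x0)
```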
This may be regarded as a higher dimensional generalization of the result presented in Exercise 2.115, to which (4.8.6) reduces in the case when n = 2 and x_n = ε > 0.

Among other things, the following proposition sheds light on the normalization of the harmonic Poisson kernel introduced in (4.8.1).

Proposition 4.87. The function p defined in (4.8.2) satisfies the following properties.

(1) One has p ∈ ⋂_{1≤q≤∞} L^q(R^{n−1}) and

∫_{R^{n−1}} p(x′) dx′ = 1.  (4.8.7)

(2) For each t > 0 set

p_t(x′) := t^{1−n} p(x′/t)  for x′ ∈ R^{n−1}.  (4.8.8)

Then for each t ∈ (0, ∞) we have p_t ∈ ⋂_{1≤q≤∞} L^q(R^{n−1}) and its Fourier transform is

p̂_t(ξ′) = e^{−t|ξ′|}  in S′(R^{n−1}).  (4.8.9)
(3) The family {p_t}_{t>0} has the semigroup property, that is,

p_{t_1} ∗ p_{t_2} = p_{t_1+t_2},  ∀ t_1, t_2 ∈ (0, ∞).  (4.8.10)
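Properties (4.8.7) and (4.8.10) are easy to probe numerically when n = 2, in which case p_t(x) = (1/π)·t/(t² + x²) is the Cauchy kernel. The sketch below (an added illustration assuming NumPy; the wide window compensates for the 1/x² tails) checks the unit mass and the semigroup identity at one sample point:

```python
import numpy as np

def p(t, x):
    # p_t for n = 2: the Cauchy kernel t/(pi*(t^2 + x^2))
    return (1.0 / np.pi) * t / (t**2 + x**2)

x = np.linspace(-2000.0, 2000.0, 2_000_001)
dx = x[1] - x[0]

mass = p(1.0, x).sum() * dx                 # (4.8.7): should be ~1

t1, t2, x0 = 0.4, 0.9, 0.3                  # (4.8.10), tested pointwise at x0
conv = (p(t1, x) * p(t2, x0 - x)).sum() * dx
print(mass, conv, p(t1 + t2, x0))
```

The semigroup identity here is the classical stability of the Cauchy distribution under convolution.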
Proof. That p ∈ L^q(R^{n−1}) for any q ∈ [1, ∞] is immediate from its expression. Fix t ∈ (0, ∞) and let p_t be as in part (2). Applying Exercise 2.22 (with n − 1 in place of n, p in place of f, and t in place of 1/j), we obtain

p_t(x′) → c δ(x′) in D′(R^{n−1}) as t → 0+, where c := ∫_{R^{n−1}} p(x′) dx′.  (4.8.11)

On the other hand, as seen from (4.8.2) and (4.8.1),

p_t(x′) = P(x′, t)  for each t > 0 and x′ ∈ R^{n−1},  (4.8.12)

so (4.8.6) gives

p_t(x′) → δ(x′) in S′(R^{n−1}) as t → 0+.  (4.8.13)

Now (4.8.7) follows by combining (4.8.13) with (4.8.11). Moving on to the proof of part (2), observe first that p_t ∈ L^q(R^{n−1}) for each t > 0, if q ∈ [1, ∞], thus p_t ∈ S′(R^{n−1}) (as a consequence of (4.1.8)). In particular, its Fourier transform in S′(R^{n−1}) is meaningfully defined, is equal to its Fourier transform as a function in L^1(R^{n−1}) (by Remark 4.22), and belongs to C^0(R^{n−1}) (by (3.1.3)). In what follows we will be using partial Fourier transforms (cf. the discussion at the end of Sect. 4.2) and the notation ξ′ ∈ R^{n−1} and η ∈ R. The idea we will pursue is to compute F_{x′}(p_t(x′))(ξ′) by making use of (4.8.12) and the fact that we know P̂(ξ′, η), the Fourier transform of P in S′(R^n) (cf. (4.8.5)). With this in mind, for each ξ′ ∈ R^{n−1} \ {0} and t ∈ R we write

F_{x′}(P(x′, t))(ξ′) = F_η^{−1}(P̂(ξ′, η))(t) = F_η^{−1}(−2i η/(|ξ′|² + η²))(t)
  = 2i(2π)^{−1} F_η(η/(|ξ′|² + η²))(t) = (i/π)(−πi)(sgn t)e^{−|t||ξ′|}
  = (sgn t)e^{−|t||ξ′|}.  (4.8.14)

For the second equality in (4.8.14) we have used (4.8.5), for the third the fact that F_η^{−1}g = (2π)^{−1}F_η g^∨ for every g ∈ S′(R^n), while for the fourth we have used (4.2.26) (applied with a := |ξ′|). In particular, (4.8.14) implies that

F_{x′}(p_t(x′))(ξ′) = e^{−t|ξ′|}  for ξ′ ∈ R^{n−1} \ {0} and t > 0.  (4.8.15)

Now (4.8.9) follows from (4.8.15) and the fact that p̂_t ∈ C^0(R^{n−1}) for every t > 0. Regarding (4.8.10), fix t_1, t_2 ∈ (0, ∞) and, by using the property of the Fourier transform singled out in part (3) of Remark 3.28 together with (4.8.9), we obtain

F_{x′}(p_{t_1} ∗ p_{t_2})(ξ′) = F_{x′}(p_{t_1})(ξ′) F_{x′}(p_{t_2})(ξ′)
  = e^{−t_1|ξ′|} e^{−t_2|ξ′|} = e^{−(t_1+t_2)|ξ′|} = F_{x′}(p_{t_1+t_2})(ξ′),  ∀ ξ′ ∈ R^{n−1}.  (4.8.16)

Now (4.8.10) follows from (4.8.16) by recalling that F_{x′} is an isomorphism on S′(R^{n−1}).

We wish to note that the harmonic Poisson kernel presented above plays a basic role in the treatment of boundary value problems for the Laplacian Δ := ∂_1² + ··· + ∂_n² in the upper-half space. More specifically, for any given function ϕ ∈ S(R^{n−1}) (called the boundary datum), the Dirichlet problem

u ∈ C^∞(R^n_+),
Δu = 0 in R^n_+,  (4.8.17)
u|^{ver}_{∂R^n_+} = ϕ on R^{n−1} ≡ ∂R^n_+,

has as a solution the function

u := Pϕ in R^n_+,  (4.8.18)

i.e., (cf. (4.8.3))

u(x) = (2/ω_{n−1}) ∫_{R^{n−1}} x_n/(|x′ − y′|² + x_n²)^{n/2} ϕ(y′) dy′,  x = (x′, x_n) ∈ R^n_+.  (4.8.19)

In the last line of (4.8.17), the symbol u|^{ver}_{∂R^n_+} stands for the "vertical limit" of u to the boundary of the upper-half space, understood at each point x′ ∈ R^{n−1} as

u|^{ver}_{∂R^n_+}(x′) := lim_{x_n→0+} u(x′, x_n).  (4.8.20)

To see why u as in (4.8.19) is a solution of (4.8.17) note that (4.8.1) implies u ∈ C^∞(R^n_+), that u has the limit to the boundary equal to ϕ based on (4.8.4), while the fact that Δu = 0 in R^n_+ follows from (4.8.19) by checking directly that, for each y′ ∈ R^{n−1} fixed, Δ[x_n(|x′ − y′|² + x_n²)^{−n/2}] = 0 for all (x′, x_n) ∈ R^n_+.

Definition 4.88. The mappings Q_j : R^n \ {0} → R, j ∈ {1, ..., n−1}, defined by

Q_j(x′, x_n) := (2/ω_{n−1})·(x_j/|x|^n),  ∀ x = (x′, x_n) ∈ R^n \ {0},  j ∈ {1, ..., n−1},  (4.8.21)

are called the conjugate harmonic Poisson kernels. Furthermore, for each x′ ∈ R^{n−1} we set q_j(x′) := Q_j(x′, 1), that is,

q_j(x′) := (2/ω_{n−1})·x_j/(1 + |x′|²)^{n/2},  ∀ x′ ∈ R^{n−1},  j ∈ {1, ..., n−1}.  (4.8.22)
Proposition 4.89. The functions defined in (4.8.22) satisfy the following properties.

(1) For each j ∈ {1, ..., n−1} and each p ∈ (1, ∞) one has q_j ∈ L^p(R^{n−1}).

(2) For each t > 0 set

(q_j)_t(x′) := t^{1−n} q_j(x′/t)  for x′ ∈ R^{n−1},  j ∈ {1, ..., n−1}.  (4.8.23)

Then for each j ∈ {1, ..., n−1}, each t ∈ (0, ∞), and each p ∈ (1, ∞), we have (q_j)_t ∈ L^p(R^{n−1}) and its Fourier transform is

(q_j)_t^(ξ′) = −i (ξ_j/|ξ′|) e^{−t|ξ′|}  in S′(R^{n−1}).  (4.8.24)

(3) The following identity holds:

(d/dt) p_t(x′) = − Σ_{j=1}^{n−1} ∂_j (q_j)_t(x′),  ∀ t ∈ (0, ∞), ∀ x′ ∈ R^{n−1},  (4.8.25)
where p_t is as in (4.8.8).

Proof. The claim in (1) is an immediate consequence of (4.8.22). To prove (2), fix j ∈ {1, ..., n−1}. Using (4.8.23) we obtain

(q_j)_t(x′) = (2/ω_{n−1})·x_j/(t² + |x′|²)^{n/2} = Q_j(x′, t),  ∀ x′ ∈ R^{n−1}, ∀ t ∈ (0, ∞).  (4.8.26)

As in the proof of Proposition 4.87, we will be using partial Fourier transforms (cf. the discussion at the end of Sect. 4.2) and the notation ξ′ ∈ R^{n−1} and η ∈ R. The idea we will pursue is to compute F_{x′}((q_j)_t(x′))(ξ′) by making use of (4.8.26) and the fact that, by (4.3.25), the Fourier transform of Q_j is

Q̂_j(ξ′, η) = −2i ξ_j/(|ξ′|² + η²)  in S′(R^n).  (4.8.27)

Hence, for each t ∈ (0, ∞) we may write

F_{x′}((q_j)_t(x′))(ξ′) = F_η^{−1}(Q̂_j(ξ′, η))(t) = F_η^{−1}(−2i ξ_j/(|ξ′|² + η²))(t)
  = −2i(2π)^{−1} ξ_j F_η(1/(|ξ′|² + η²))(t) = −(iξ_j/π)·(π/|ξ′|) e^{−t|ξ′|}
  = −i (ξ_j/|ξ′|) e^{−t|ξ′|}  in S′(R^{n−1}).  (4.8.28)

For the third equality in (4.8.28) we used the fact that F_η^{−1}g = (2π)^{−1}F_η g^∨ for every g ∈ S′(R^n), while for the fourth we used (4.2.20) (applied with a := |ξ′|). This completes the proof of (2). Finally, (4.8.25) follows by a direct computation based on the chain rule.
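For n = 2, identity (4.8.25) reduces to d/dt p_t = −∂_x (q_1)_t with p_t(x) = (1/π)·t/(t² + x²) and (q_1)_t(x) = (1/π)·x/(t² + x²), and can be verified by central finite differences (an added sketch assuming NumPy):

```python
import numpy as np

def p(t, x):
    return (1.0 / np.pi) * t / (t**2 + x**2)    # p_t for n = 2

def q(t, x):
    return (1.0 / np.pi) * x / (t**2 + x**2)    # (q_1)_t for n = 2

t = 0.7
x = np.linspace(-5.0, 5.0, 2001)
h = 1e-5

dp_dt = (p(t + h, x) - p(t - h, x)) / (2.0 * h)
dq_dx = (q(t, x + h) - q(t, x - h)) / (2.0 * h)
err = np.abs(dp_dt + dq_dx).max()               # (4.8.25) predicts ~0
print(err)
```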
4.9
Singular Integral Operators
In Example 4.66 we have already encountered the principal value tempered distributions P.V. x_j/|x|^{n+1}, for j ∈ {1, ..., n}. The operators R_j, j ∈ {1, ..., n}, defined by convolving with these distributions, that is,

R_j ϕ := (P.V. x_j/|x|^{n+1}) ∗ ϕ,  ∀ ϕ ∈ S(R^n),  (4.9.1)

are called the Riesz transforms in R^n. In the particular case when n = 1 the corresponding operator, that is,

Hϕ := (P.V. 1/x) ∗ ϕ,  ∀ ϕ ∈ S(R),  (4.9.2)

is called the Hilbert transform. These operators play a fundamental role in harmonic analysis and here the goal is to study a larger class of operators containing the aforementioned examples. We begin by introducing this class. The format of Proposition 4.67 suggests the following definition.

Definition 4.90. For each function Θ ∈ C^0(R^n \ {0}) that is positive homogeneous of degree −n and such that ∫_{S^{n−1}} Θ(ω) dσ(ω) = 0, define the SIO

(T_Θ ϕ)(x) := lim_{ε→0+} ∫_{|x−y|≥ε} Θ(x − y)ϕ(y) dy,  ∀ x ∈ R^n, ∀ ϕ ∈ S(R^n).  (4.9.3)

Proposition 4.67 ensures that the above definition is meaningful. Moreover, for the class of SIO just defined, the following result holds (further properties are deduced in Theorem 4.96).

Proposition 4.91. Let Θ be a function satisfying the conditions in (4.4.1) and consider the SIO T_Θ associated with Θ as in (4.9.3). Then

T_Θ : S(R^n) → S′(R^n) is linear and sequentially continuous.  (4.9.4)

Moreover, for each ϕ ∈ S(R^n) we have T_Θ ϕ ∈ C^∞(R^n) and

T_Θ ϕ = (P.V. Θ) ∗ ϕ  in S′(R^n),  (4.9.5)

as well as

(T_Θ ϕ)^ = F(P.V. Θ) ϕ̂  in S′(R^n).  (4.9.6)

Proof. All claims are consequences of Definition 4.90, Proposition 4.67, part (e) in Theorem 4.18, and part (a) in Theorem 4.33.

Remark 4.92. In the harmonic analysis parlance, a mapping T : S(R^n) → S′(R^n) with the property that there exists m ∈ S′(R^n) such that T̂ϕ = m ϕ̂ in S′(R^n) for every ϕ ∈ S(R^n) is called a multiplier (and the tempered distribution m
is referred to as the symbol of the multiplier). Note that any multiplier T is necessarily a linear and sequentially bounded mapping from S(R^n) into S′(R^n). Also, with this piece of terminology, for any function Θ as in (4.4.1),

the operator T_Θ is a multiplier with symbol F(P.V. Θ).  (4.9.7)

Regarding the nature of the symbol F(P.V. Θ), we have already seen that this Fourier transform may be computed explicitly in the case Θ(x) := x_j/|x|^{n+1}, j ∈ {1, ..., n} (see (4.7.50)). The fact that such functions appear in the definition of the Riesz transforms in R^n (cf. (4.9.1)) warrants revisiting these operators.

To set the stage for the subsequent discussion, we recall that given a linear and bounded map (also referred to as a bounded operator) T : L^2(R^n) → L^2(R^n), its adjoint is the unique map T* : L^2(R^n) → L^2(R^n) with the property that

∫_{R^n} (Tf)(x)g(x) dx = ∫_{R^n} f(x)(T*g)(x) dx,  ∀ f, g ∈ L^2(R^n).  (4.9.8)

All ingredients are now in place for proving the following important theorem.

Theorem 4.93. Consider the Riesz transforms, originally introduced as in (4.9.1). Then for each j ∈ {1, ..., n} the following properties hold.

(a) The operator

R_j : S(R^n) → S′(R^n) is well-defined, linear, and sequentially continuous.  (4.9.9)
(b) For each ϕ ∈ S(R^n) we have R_j ϕ ∈ C^∞(R^n) and R_j may be expressed as the SIO

(R_j ϕ)(x) = lim_{ε→0+} ∫_{|x−y|≥ε} (x_j − y_j)/|x − y|^{n+1} ϕ(y) dy,  ∀ x ∈ R^n.  (4.9.10)

(c) The jth Riesz transform is a multiplier with symbol m_j(ξ) := −(iω_n/2)·(ξ_j/|ξ|), which belongs to L^∞(R^n). That is, for each ϕ ∈ S(R^n),

(R_j ϕ)^(ξ) = −(iω_n/2)·(ξ_j/|ξ|) ϕ̂(ξ)  in S′(R^n).  (4.9.11)

(d) For each ϕ ∈ S(R^n), we have that R_j ϕ, originally viewed as a tempered distribution, belongs to the subspace L^2(R^n) of S′(R^n), and

‖R_j ϕ‖_{L²(R^n)} ≤ (ω_n/2) ‖ϕ‖_{L²(R^n)}.  (4.9.12)

(e) The jth Riesz transform R_j, originally considered as in (4.9.9), extends by density to a linear and bounded operator

R_j : L^2(R^n) → L^2(R^n)  (4.9.13)

and the operator R_j is skew-symmetric (i.e., R_j* = −R_j) in this context. In addition,

(R_j f)^(ξ) = −(iω_n/2)·(ξ_j/|ξ|) f̂(ξ)  a.e. in R^n,  ∀ f ∈ L^2(R^n).  (4.9.14)
(f) For every k ∈ {1, ..., n}, we have

R_j R_k = R_k R_j  as operators on L^2(R^n),  (4.9.15)

and

Σ_{k=1}^{n} R_k² = −(ω_n/2)² I  as operators on L^2(R^n),  (4.9.16)

where I denotes the identity operator on L^2(R^n).

Proof. The claims in (a) and (b) are immediate consequences of (4.9.1) and (4.9.2) and Proposition 4.67, upon recalling the discussion in Example 4.66. Also, formula (4.9.11) follows from (4.9.1), (4.9.6), and Proposition 4.81.

Turning our attention to (d), fix an index j ∈ {1, ..., n} along with two arbitrary functions ϕ, ψ ∈ S(R^n). Based on Exercise 3.26, (4.2.2), (4.9.11), Cauchy–Schwarz's inequality, and (3.2.28), we may then write

|⟨R_j ϕ, ψ⟩| = (2π)^{−n} |⟨(R_j ϕ)^, ψ̂^∨⟩| = (2π)^{−n} (ω_n/2) |⟨(ξ_j/|ξ|) ϕ̂, ψ̂^∨⟩|
  = (2π)^{−n} (ω_n/2) |∫_{R^n} (ξ_j/|ξ|) ϕ̂(ξ) ψ̂(−ξ) dξ|
  ≤ (2π)^{−n} (ω_n/2) ∫_{R^n} |ϕ̂(ξ) ψ̂(−ξ)| dξ
  ≤ (2π)^{−n} (ω_n/2) ‖ϕ̂‖_{L²(R^n)} ‖ψ̂‖_{L²(R^n)}
  = (ω_n/2) ‖ϕ‖_{L²(R^n)} ‖ψ‖_{L²(R^n)}.  (4.9.17)
This computation may be summarized by saying that, for each ϕ ∈ S(R^n) fixed, the linear functional

Λ_ϕ : S(R^n) → C,  Λ_ϕ(ψ) := ⟨R_j ϕ, ψ⟩,  ∀ ψ ∈ S(R^n),  (4.9.18)

satisfies |Λ_ϕ(ψ)| ≤ (ω_n/2) ‖ϕ‖_{L²(R^n)} ‖ψ‖_{L²(R^n)} for every ψ ∈ S(R^n). Given that S(R^n) is a dense subspace of L^2(R^n), it follows that the mapping Λ_ϕ from (4.9.18) has a unique extension Λ̃_ϕ : L^2(R^n) → C satisfying

|Λ̃_ϕ(g)| ≤ (ω_n/2) ‖ϕ‖_{L²(R^n)} ‖g‖_{L²(R^n)} for every g ∈ L^2(R^n).  (4.9.19)

According to Riesz's representation theorem for such functionals, there exists a unique f_ϕ ∈ L^2(R^n) satisfying the following two properties:

Λ̃_ϕ(g) = ∫_{R^n} f_ϕ(x) g(x) dx for every g ∈ L^2(R^n),  (4.9.20)
and
‖f_ϕ‖_{L²(R^n)} ≤ (ω_n/2) ‖ϕ‖_{L²(R^n)}.  (4.9.21)
Since L^2(R^n) ⊆ S′(R^n) (cf. (4.1.8)), it follows that f_ϕ may be regarded as a tempered distribution. At this stage we make the claim that R_j ϕ = f_ϕ as tempered distributions. Indeed, for every ψ ∈ S(R^n) we have

⟨R_j ϕ, ψ⟩ = Λ_ϕ(ψ) = Λ̃_ϕ(ψ) = ∫_{R^n} f_ϕ(x)ψ(x) dx = ⟨f_ϕ, ψ⟩,  (4.9.22)

from which the claim follows. With this in hand, (4.9.12) now follows from (4.9.21), finishing the proof of part (d). In turn, estimate (4.9.12) and a standard density argument give that R_j extends to a linear and bounded operator in the context of (4.9.13).

The next item in part (e) is showing that R_j in (4.9.13) is skew-symmetric. To this end, first note that, given any ϕ, ψ ∈ S(R^n), by reasoning much as in (4.9.17) we obtain

⟨R_j ϕ, ψ⟩ = (2π)^{−n} ⟨(R_j ϕ)^, ψ̂^∨⟩ = −i(2π)^{−n} (ω_n/2) ⟨(ξ_j/|ξ|) ϕ̂, ψ̂^∨⟩
  = −i(2π)^{−n} (ω_n/2) ∫_{R^n} (ξ_j/|ξ|) ϕ̂(ξ) ψ̂(−ξ) dξ
  = i(2π)^{−n} (ω_n/2) ∫_{R^n} (ξ_j/|ξ|) ψ̂(ξ) ϕ̂(−ξ) dξ
  = i(2π)^{−n} (ω_n/2) ⟨(ξ_j/|ξ|) ψ̂, ϕ̂^∨⟩ = −(2π)^{−n} ⟨(R_j ψ)^, ϕ̂^∨⟩
  = −⟨R_j ψ, ϕ⟩,  (4.9.23)

where the fourth equality is obtained via the change of variables ξ ↦ −ξ, using the oddness of ξ_j/|ξ|. Since all functions involved are (by part (d)) in L^2(R^n), the distributional pairings may be interpreted as pairings in L^2(R^n). With this in mind, the equality of the most extreme sides of (4.9.23) reads

∫_{R^n} (R_j ϕ)(x)ψ(x) dx = −∫_{R^n} ϕ(x)(R_j ψ)(x) dx.  (4.9.24)

By density and part (d), we deduce from (4.9.24) that

∫_{R^n} (R_j f)(x)g(x) dx = −∫_{R^n} f(x)(R_j g)(x) dx,  ∀ f, g ∈ L^2(R^n).  (4.9.25)

In light of (4.9.8), this shows that R_j* = −R_j as mappings on L^2(R^n), as wanted. To complete the treatment of part (e), there remains to observe that, collectively, the boundedness of R_j and of the Fourier transform on L^2(R^n), (4.9.11), and the density of the Schwartz class in L^2(R^n) readily yield (4.9.14) via a limiting argument.
In turn, given k ∈ {1, ..., n}, for each f ∈ L^2(R^n) formula (4.9.14) allows us to compute

F(R_j(R_k f)) = −(iω_n/2)·(ξ_j/|ξ|) (R_k f)^ = −(ω_n/2)²·(ξ_j ξ_k/|ξ|²) f̂  in L^2(R^n).  (4.9.26)

A similar computation also gives that F(R_k(R_j f)) = −(ω_n/2)²·(ξ_j ξ_k/|ξ|²) f̂ in L^2(R^n). From these, (4.9.15) follows upon recalling (cf. (3.2.31)) that the Fourier transform is an isomorphism of L^2(R^n). Finally, as regards (4.9.16), for any function f ∈ L^2(R^n), formula (4.9.26) (with j = k) permits us to write

F(Σ_{k=1}^{n} R_k² f) = Σ_{k=1}^{n} F(R_k(R_k f)) = −(ω_n/2)² Σ_{k=1}^{n} (ξ_k²/|ξ|²) f̂ = −(ω_n/2)² f̂  in L^2(R^n),  (4.9.27)

where the last equality uses the fact that Σ_{k=1}^{n} ξ_k²/|ξ|² = 1 a.e. in R^n. Now identity
(4.9.16) follows from (4.9.27) and (3.2.31).

Corollary 4.94. Let j ∈ {1, ..., n−1} and t ∈ (0, ∞), and recall the functions p_t and (q_j)_t from (4.8.8) and (4.8.23), respectively. Then

(q_j)_t = (2/ω_{n−1}) R_j p_t  in L^2(R^{n−1}),  (4.9.28)
where R_j in (4.9.28) is the jth Riesz transform from Theorem 4.93 corresponding to n replaced by n − 1.

Proof. By (3.2.31), it suffices to check identity (4.9.28) on the Fourier transform side. That the latter holds is an immediate consequence of part (2) in Proposition 4.89, (4.9.14), and part (2) in Proposition 4.87.

In the one-dimensional setting, Theorem 4.93 yields the following type of information about the Hilbert transform.

Corollary 4.95. The Hilbert transform defined in (4.9.2) satisfies the following properties.

(a) The operator
H : S(R) → S′(R) is well-defined, linear, and sequentially continuous.  (4.9.29)

(b) For each ϕ ∈ S(R) we have Hϕ ∈ C^∞(R) and H may be expressed as the SIO

(Hϕ)(x) = lim_{ε→0+} ∫_{|x−y|≥ε} ϕ(y)/(x − y) dy,  ∀ x ∈ R.  (4.9.30)
(c) The Hilbert transform is a multiplier with symbol m(ξ) := −iπ (sgn ξ), belonging to L^∞(R). In other words, for each ϕ ∈ S(R),

(Hϕ)^(ξ) = −iπ (sgn ξ) ϕ̂(ξ)  in S′(R).  (4.9.31)

(d) For each ϕ ∈ S(R), we have that Hϕ, originally viewed as a tempered distribution, belongs to the subspace L^2(R) of S′(R), and

‖Hϕ‖_{L²(R)} ≤ π ‖ϕ‖_{L²(R)}.  (4.9.32)
(4.9.33)
and the operator H is skew-symmetric (i.e., H ∗ = −H) in this context. In addition, 5(ξ) = −iπ (sgn ξ) f3(ξ) Hf
a.e. in
R,
∀ f ∈ L2 (R).
(4.9.34)
(f ) In the context of (4.9.33), H 2 = −π 2 I
in
L2 (R),
(4.9.35)
where I denotes the identity operator on L2 (R). Proof. This corresponds to Theorem 4.93 in the case when n = 1. The Riesz transforms are prototypes for the more general class of operators introduced in Definition 4.90 and most of the properties deduced for the Riesz transforms in Theorem 4.93 have natural counterparts in this more general setting. Theorem 4.96. Assume that Θ is a function satisfying the conditions in (4.4.1) and let TΘ be the SIO associated with Θ as in (4.9.3). Then the following statements are true. (a) The operator TΘ : S(Rn ) → S (Rn )
is well-defined,
linear, and sequentially continuous.
(4.9.36)
(b) For each ϕ ∈ S(Rn ) we have TΘ ϕ ∈ C ∞ (Rn ) and the SIO TΘ may be expressed as the convolution ∀ ϕ ∈ S(Rn ). (4.9.37) TΘ ϕ = P.V. Θ ∗ ϕ
4.9. SINGULAR INTEGRAL OPERATORS
173
(c) The function mΘ , associated with Θ as in (4.5.6), is the symbol of the multiplier TΘ . This means that for each ϕ ∈ S(Rn ), T5 3 Θ ϕ(ξ) = mΘ (ξ) ϕ(ξ)
in
S (Rn ).
(4.9.38)
(d) For each ϕ ∈ S(Rn ), we have that TΘ ϕ originally viewed as a tempered distribution belongs to the subspace L2 (Rn ) of S (Rn ), and TΘ ϕL2 (Rn ) ≤ Cn ΘL∞ (S n−1 ) ϕL2 (Rn ) ,
(4.9.39)
where the constant Cn ∈ (0, ∞) is as in (4.5.8). (e) The operator TΘ , originally considered as in (4.9.36) extends, by density to a linear and bounded operator TΘ : L2 (Rn ) −→ L2 (Rn ).
(4.9.40)
Moreover, in this context, 3 T5 Θ f (ξ) = mΘ (ξ) f (ξ)
a.e. in
and the operator TΘ satisfies ∗ TΘ = TΘ∨
Rn ,
in
∀ f ∈ L2 (Rn ),
L2 (Rn ).
(4.9.41)
(4.9.42)
In particular, if Θ is∗odd and real-valued, then the operator TΘ is skewsymmetric (i.e., TΘ = −TΘ ), while if Θ is even and real-valued then ∗ the operator TΘ is self-adjoint (i.e., TΘ = TΘ ). Proof. Parts (a) and (b) are contained in Proposition 4.91, while part (c) follows from (4.9.6) and Theorem 4.71. Part (d) is justified by reasoning as in the proof of part (d) in Theorem 4.93, keeping in mind (4.5.7). As for part (e), in a first stage we note that, for any ϕ, ψ ∈ S(Rn ), by reasoning much as in (4.9.23) while relying on (4.9.38) and (4.5.9), we obtain ! " $ 3 ∨ % = (2π)−n m ϕ 3∨ TΘ ϕ, ψ = (2π)−n T5 Θ ϕ, ψ Θ 3, ψ dξ mΘ (ξ) ϕ(ξ) 3 ψ(−ξ) = (2π)−n Rn
= (2π)−n
!
Rn
5 mΘ (−ξ) ψ(ξ)ϕ(−ξ) 3 dξ
3, ϕ 3∨ = (2π)−n mΘ∨ ψ
"
$ ∨% = (2π)−n T 3 Θ∨ ψ, ϕ
= TΘ∨ ψ, ϕ = TΘ∨ ψ, ϕ,
(4.9.43)
where the last step uses the fact that, as seen from (4.9.3), TΘ∨ ψ = TΘ∨ ψ. Passing from (4.9.43) to (4.9.42) may now be done by arguing as in the proof of part (e) of Theorem 4.93 (a pattern of reasoning which also gives all remaining claims in the current case).
174
CHAPTER 4. THE SPACE OF TEMPERED DISTRIBUTIONS
We conclude this section with a discussion of the following result of basic importance from Calder´ on–Zygmund theory (originally proved in [5]; for a more timely presentation see, e.g., [23, 62, 63]). Theorem 4.97. Let Θ be a function satisfying (4.4.1) and, in analogy with (4.9.3), given some p ∈ (1, ∞) set Θ(x − y)f (y) dy, x ∈ Rn , f ∈ Lp (Rn ). (TΘ f )(x) := lim+ ε→0
|x−y|≥ε
(4.9.44) Then for every f ∈ Lp (Rn ) we have that (TΘ f )(x) exists for almost every x ∈ Rn and TΘ f Lp(Rn ) ≤ CΘ f Lp(Rn ) ,
(4.9.45)
where CΘ is a finite positive constant independent of f . Compared with Theorem 4.96, a basic achievement of Calder´on and Zygmund is allowing functions from Lp (Rn ) in lieu of S(Rn ). While the estimate in (4.9.45) for p = 2 is essentially contained in part (d) of Theorem 4.96, the fact that for each f ∈ L2 (Rn ) the limit defining (TΘ f )(x) as in (4.9.44) exists for almost every x ∈ Rn is a bit more subtle. To elaborate on this issue, recall that we have already proved that the limit in question exists at every point when f is a Schwartz function (cf. Proposition 4.67). Starting from this, one can pass to arbitrary functions in L2 (Rn ) granted the availability of additional tools from harmonic analysis (more specifically, the boundedness in L2 (Rn ) of
the so-called maximal operator T∗ f (x) := supε>0 |x−y|≥ε Θ(x − y)f (y) dy , for x ∈ Rn ). Having dealt with (4.9.45) in the case p = 2, its proof for p ∈ (1, ∞) proceeds as follows. In a first step, a suitable version of (4.9.45) is established for p = 1 which, when combined with the case p = 2 already treated yields (via a technique called interpolation) (4.9.45) for p ∈ (1, 2). In a second step, one uses duality to handle the case p ∈ (2, ∞). There are two important factors at play p ∈ (2, ∞), here: that the dual of Lp (Rn ) with p ∈ (1, 2) is Lp (Rn ) with p = p−1 and the formula for the adjoint of TΘ we deduced in (4.9.42). In particular, Theorem 4.97 gives that, given p ∈ (1, ∞), the jth Riesz transform Rj , j ∈ {1, . . . , n}, originally considered as in (4.9.9) extends by density so that Rj : Lp (Rn ) −→ Lp (Rn )
is linear and bounded,
and for each f ∈ Lp (Rn ) we have xj − yj f (y) dy (Rj f )(x) = lim+ |x − y|n+1 ε→0 |x−y|≥ε
for a.e. x ∈ Rn .
(4.9.46)
(4.9.47)
Of course, the same type of result is true for the Hilbert transform on the real line.
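As a numerical illustration of the truncated integrals in (4.9.44) (my own aside, not from the text), one can evaluate the Hilbert transform $Hf(x)=\frac1\pi\,\mathrm{P.V.}\!\int\frac{f(y)}{x-y}\,dy$ for the classical pair $f(y)=\frac{1}{1+y^2}$, $Hf(x)=\frac{x}{1+x^2}$, and watch the truncations converge; the quadrature setup (cutoffs, grid) is entirely ad hoc.

```python
import numpy as np

def hilbert_truncated(f, x, eps=1e-6, R=200.0, n=200_000):
    # (1/pi) * int_{eps <= |y-x| <= R} f(y)/(x-y) dy.  Substituting t = x - y
    # and pairing the two sides of the singularity gives the non-singular form
    # (1/pi) * int_eps^R [f(x-t) - f(x+t)]/t dt, evaluated on a log-spaced grid.
    t = np.logspace(np.log10(eps), np.log10(R), n)
    g = (f(x - t) - f(x + t)) / t
    return np.sum((g[1:] + g[:-1]) / 2 * np.diff(t)) / np.pi

f = lambda y: 1.0 / (1.0 + y**2)   # classical pair: H f(x) = x/(1+x^2)
approx = hilbert_truncated(f, x=1.0)
exact = 0.5                        # H f(1) = 1/(1+1)
print(approx, exact)
```

The symmetric pairing is exactly the mechanism that makes the principal value finite: near the singularity the integrand tends to $-2f'(x)$, which is bounded.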
4.10 Derivatives of Volume Potentials
One basic integral operator in analysis is the so-called Newtonian potential, given by

$$(\mathcal{N}_\Omega f)(x):=\frac{-1}{(n-2)\omega_{n-1}}\int_\Omega\frac{1}{|x-y|^{n-2}}\,f(y)\,dy,\qquad x\in\mathbb{R}^n, \tag{4.10.1}$$

where Ω is an open set in $\mathbb{R}^n$, n ≥ 3, and $f\in L^\infty(\Omega)$ with bounded support. In this regard, an important issue is that of computing $\partial_j\partial_k\mathcal{N}_\Omega f$ in the sense of distributions in $\mathbb{R}^n$, where j, k ∈ {1, ..., n}. First, we will show (cf. Theorem 4.101) that $\mathcal{N}_\Omega f$ belongs to $C^1(\mathbb{R}^n)$ and we have

$$\partial_k(\mathcal{N}_\Omega f)(x)=\frac{1}{\omega_{n-1}}\int_\Omega\frac{x_k-y_k}{|x-y|^n}\,f(y)\,dy,\qquad x\in\mathbb{R}^n. \tag{4.10.2}$$

Hence, $\partial_k(\mathcal{N}_\Omega f)(x)=\int_\Omega\Phi(x-y)f(y)\,dy$ where $\Phi(z):=\frac{1}{\omega_{n-1}}\frac{z_k}{|z|^n}$ for $z\in\mathbb{R}^n\setminus\{0\}$. Note that for x fixed, $\Phi(x-\cdot)$ is locally integrable, though this is no longer the case for $(\partial_j\Phi)(x-\cdot)$. This makes the job of computing $\partial_j\partial_k(\mathcal{N}_\Omega f)$ considerably more subtle. There is a good reason for this since, as it turns out, the latter distributional derivative involves (as we shall see momentarily) SIO. We note that the function Φ considered above is $C^\infty$ and positive homogeneous of degree 1 − n in $\mathbb{R}^n\setminus\{0\}$. These are going to be key features in our subsequent analysis.

Our most general results encompassing the discussion about the Newtonian potential introduced above are contained in Theorems 4.100 and 4.101. We begin by first considering the case when $\Omega=\mathbb{R}^n$ and f is a Schwartz function.

Proposition 4.98. Let $\Phi\in C^1(\mathbb{R}^n\setminus\{0\})$ be a function that is positive homogeneous of degree 1 − n and let $f\in\mathcal{S}(\mathbb{R}^n)$. Then $\int_{\mathbb{R}^n}\Phi(x-y)f(y)\,dy$ is differentiable in $\mathbb{R}^n$ and for each j ∈ {1, ..., n} we have

$$\partial_{x_j}\int_{\mathbb{R}^n}\Phi(x-y)f(y)\,dy=\Big(\int_{S^{n-1}}\Phi(\omega)\omega_j\,d\sigma(\omega)\Big)f(x)+\lim_{\varepsilon\to0^+}\int_{|y-x|\ge\varepsilon}(\partial_j\Phi)(x-y)f(y)\,dy\qquad\forall x\in\mathbb{R}^n. \tag{4.10.3}$$
Proof. The key ingredient in the proof of (4.10.3) is formula (4.4.19). To see how (4.4.19) applies, first note that by using Exercise 4.51 we have for each R > 0,

$$\begin{aligned}
\Big|\int_{\mathbb{R}^n}\Phi(x-y)f(y)\,dy\Big|
&\le\|\Phi\|_{L^\infty(S^{n-1})}\int_{\mathbb{R}^n}\frac{|f(y)|}{|x-y|^{n-1}}\,dy\\
&\le\|\Phi\|_{L^\infty(S^{n-1})}\|f\|_{L^\infty(\mathbb{R}^n)}\int_{|y-x|\le R}\frac{dy}{|x-y|^{n-1}}\\
&\qquad+\|\Phi\|_{L^\infty(S^{n-1})}\|(1+|y|^2)f\|_{L^\infty(\mathbb{R}^n)}\int_{|y-x|>R}\frac{dy}{(1+|y|^2)|x-y|^{n-1}}<\infty,
\end{aligned}\tag{4.10.4}$$

thus $\int_{\mathbb{R}^n}\Phi(x-y)f(y)\,dy$ is well-defined. Second, since

$$\int_{\mathbb{R}^n}\Phi(x-y)f(y)\,dy=\int_{\mathbb{R}^n}\Phi(y)f(x-y)\,dy\qquad\forall x\in\mathbb{R}^n, \tag{4.10.5}$$

we have that $\int_{\mathbb{R}^n}\Phi(x-y)f(y)\,dy$ is differentiable and

$$\partial_{x_j}\int_{\mathbb{R}^n}\Phi(x-y)f(y)\,dy=\int_{\mathbb{R}^n}\Phi(y)(\partial_jf)(x-y)\,dy\qquad\forall x\in\mathbb{R}^n. \tag{4.10.6}$$
Third, for each $x\in\mathbb{R}^n$, with $t_x$ as in (2.8.37) (and recalling (3.2.22)), we may write

$$\int_{\mathbb{R}^n}\Phi(x-y)\varphi(y)\,dy=\langle t_x(\Phi^\vee),\varphi\rangle,\qquad\forall\varphi\in\mathcal{S}(\mathbb{R}^n). \tag{4.10.7}$$

Now fix j ∈ {1, ..., n}. Then for $x\in\mathbb{R}^n$ we have

$$\begin{aligned}
\partial_{x_j}\int_{\mathbb{R}^n}\Phi(x-y)f(y)\,dy
&=\langle t_x(\Phi^\vee),\partial_jf\rangle=\langle\Phi^\vee,t_{-x}(\partial_jf)\rangle=\langle\Phi^\vee,\partial_j[t_{-x}f]\rangle=-\langle\partial_j[\Phi^\vee],t_{-x}f\rangle\\
&=-\Big(\int_{S^{n-1}}\omega_j\Phi^\vee(\omega)\,d\sigma(\omega)\Big)\langle\delta,t_{-x}f\rangle-\big\langle\mathrm{P.V.}\,\partial_j(\Phi^\vee),t_{-x}f\big\rangle\\
&=f(x)\int_{S^{n-1}}\omega_j\Phi(\omega)\,d\sigma(\omega)+\lim_{\varepsilon\to0^+}\int_{|y|\ge\varepsilon}(\partial_j\Phi)(-y)f(x+y)\,dy\\
&=f(x)\int_{S^{n-1}}\omega_j\Phi(\omega)\,d\sigma(\omega)+\lim_{\varepsilon\to0^+}\int_{|y|\ge\varepsilon}(\partial_j\Phi)(x-y)f(y)\,dy.
\end{aligned}\tag{4.10.8}$$

The first equality in (4.10.8) uses (4.10.6) and (4.10.7), the fifth uses (4.4.19), the sixth the fact that $\partial_j(\Phi^\vee)=-(\partial_j\Phi)^\vee$, and the last one a suitable change of variables. This completes the proof of the proposition.

Next, we present a version of Proposition 4.98 in which the function f lacks any type of differentiability. Here we make use of the basic Calderón–Zygmund result recorded in Theorem 4.97.
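For a concrete feel for the averaged-kernel term in (4.10.3), consider the toy case n = 1 (my own illustration, not from the text). A kernel Φ positive homogeneous of degree 1 − n = 0 on $\mathbb{R}\setminus\{0\}$ is just a function taking constant values a for z > 0 and b for z < 0; then $\int\Phi(x-y)f(y)\,dy=a\int_{y<x}f+b\int_{y>x}f$, whose derivative is $(a-b)f(x)$. This matches (4.10.3): $\int_{S^0}\Phi(\omega)\omega\,d\sigma(\omega)=\Phi(1)\cdot1+\Phi(-1)\cdot(-1)=a-b$, while $\partial\Phi$ vanishes away from the origin, so the principal-value term drops out. A numerical check with a Gaussian f, using the error function to evaluate the convolution exactly:

```python
import math

a, b = 2.0, -1.0                       # Phi(z) = a for z > 0, b for z < 0
f = lambda x: math.exp(-x**2)          # Schwartz function f(y) = e^{-y^2}

def F(x):
    # F(x) = int Phi(x-y) f(y) dy = a*int_{-inf}^x f + b*int_x^{inf} f,
    # written in closed form via erf since int_{-inf}^x e^{-y^2} dy
    # = (sqrt(pi)/2)(1 + erf(x)).
    s = math.sqrt(math.pi) / 2
    return a * s * (1 + math.erf(x)) + b * s * (1 - math.erf(x))

x0, h = 0.7, 1e-5
deriv = (F(x0 + h) - F(x0 - h)) / (2 * h)   # central difference for F'(x0)
print(deriv, (a - b) * f(x0))               # both close to 3*exp(-0.49)
```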
Theorem 4.99. Let $\Phi\in C^1(\mathbb{R}^n\setminus\{0\})$ be a function that is positive homogeneous of degree 1 − n and let $f\in L^p(\mathbb{R}^n)$ for some p ∈ (1, n). Then $\Phi*f\in L^1_{loc}(\mathbb{R}^n)$ and for each j ∈ {1, ..., n} we have $T_{\partial_j\Phi}f\in L^p(\mathbb{R}^n)$ and

$$\partial_j(\Phi*f)=\Big(\int_{S^{n-1}}\Phi(\omega)\omega_j\,d\sigma(\omega)\Big)f+T_{\partial_j\Phi}f\quad\text{in }\mathcal{D}'(\mathbb{R}^n), \tag{4.10.9}$$

where $T_{\partial_j\Phi}$ is the operator from (4.9.44) with Θ replaced by $\partial_j\Phi$.

Proof. Fix p ∈ (1, n), $f\in L^p(\mathbb{R}^n)$, and R ∈ (0, ∞). Then we write

$$\int_{B(0,R)}\int_{\mathbb{R}^n}\frac{|f(y)|}{|x-y|^{n-1}}\,dy\,dx=\int_{|y|\le2R}|f(y)|\int_{B(0,R)}\frac{dx}{|x-y|^{n-1}}\,dy+\int_{|y|>2R}|f(y)|\int_{B(0,R)}\frac{dx}{|x-y|^{n-1}}\,dy=:I+II. \tag{4.10.10}$$

If $y\in B(0,2R)$ and $x\in B(0,R)$, then $|x-y|\le3R$, thus

$$I\le\int_{|y|\le2R}|f(y)|\int_{|x-y|\le3R}\frac{dx}{|x-y|^{n-1}}\,dy\le\Big(\int_{|z|\le3R}\frac{dz}{|z|^{n-1}}\Big)|B(0,2R)|^{1-\frac1p}\|f\|_{L^p(B(0,2R))}<\infty, \tag{4.10.11}$$

where for the second inequality in (4.10.11) Hölder's inequality has been used. Also, if $y\in\mathbb{R}^n\setminus B(0,2R)$ and $x\in B(0,R)$, then

$$|y|\le|y-x|+|x|\le|y-x|+R\le|y-x|+\frac{|y|}{2},$$

which implies $|y-x|\ge|y|/2$. Using this, II is estimated as

$$\begin{aligned}
II&\le2^{n-1}\int_{|y|>2R}|f(y)|\int_{|x|\le R}\frac{dx}{|y|^{n-1}}\,dy\le2^{n-1}|B(0,R)|\int_{|y|>2R}\frac{|f(y)|}{|y|^{n-1}}\,dy\\
&\le2^{n-1}|B(0,R)|\,\|f\|_{L^p(\mathbb{R}^n)}\Big(\int_{|y|>2R}\frac{dy}{|y|^{(n-1)p'}}\Big)^{1/p'}<\infty,
\end{aligned}\tag{4.10.12}$$

where $p':=\frac{p}{p-1}$ is the Hölder conjugate exponent of p. In the third inequality in (4.10.12) the assumption p ∈ (1, n) has been used to ensure that $(n-1)p'>n$. A combination of (4.10.10), (4.10.11), and (4.10.12) yields the following conclusion: for every R ∈ (0, ∞) there exists some finite positive constant C, depending on R, n, and p, such that

$$\int_{B(0,R)}\int_{\mathbb{R}^n}\frac{|f(y)|}{|x-y|^{n-1}}\,dy\,dx\le C\|f\|_{L^p(\mathbb{R}^n)},\qquad\forall f\in L^p(\mathbb{R}^n). \tag{4.10.13}$$
In turn, (4.10.13) entails several useful conclusions. Recall that the assumptions on Φ imply $|\Phi(x)|\le\|\Phi\|_{L^\infty(S^{n-1})}|x|^{1-n}$ for each $x\in\mathbb{R}^n\setminus\{0\}$ (cf. Exercise 4.51). The first conclusion is that for each $f\in L^p(\mathbb{R}^n)$ and each R ∈ (0, ∞),

$$\int_{\mathbb{R}^n}|\Phi(x-y)|\,|f(y)|\,dy<\infty\quad\text{for a.e. }x\in B(0,R). \tag{4.10.14}$$

This shows that $(\Phi*f)(x)$ is well-defined for a.e. $x\in\mathbb{R}^n$. Second, from what we have just proved and (4.10.13) we may conclude that $\Phi*f\in L^1_{loc}(\mathbb{R}^n)$.

Next, recall that $C_0^\infty(\mathbb{R}^n)$ is dense in $L^p(\mathbb{R}^n)$ for each p ∈ (1, ∞). As such, there exists a sequence $\{f_k\}_{k\in\mathbb{N}}$ of functions in $C_0^\infty(\mathbb{R}^n)$ with the property that $\lim_{k\to\infty}f_k=f$ in $L^p(\mathbb{R}^n)$. Based on (4.10.13), the sequence $\{\Phi*f_k\}_{k\in\mathbb{N}}$ converges to $\Phi*f$ in $L^1(B(0,R))$ for every R ∈ (0, ∞). Therefore, by Exercise 2.21,

$$\Phi*f_k\xrightarrow[k\to\infty]{\ \mathcal{D}'(\mathbb{R}^n)\ }\Phi*f. \tag{4.10.15}$$

Moving on, fix j ∈ {1, ..., n} and note that, by Exercise 4.49, $\partial_j\Phi$ is positive homogeneous of degree −n. The function $\Phi*f_k$ belongs to $C^\infty(\mathbb{R}^n)$, so Proposition 4.98 applies and implies that

$$\partial_j(\Phi*f_k)=\Big(\int_{S^{n-1}}\Phi(\omega)\omega_j\,d\sigma(\omega)\Big)f_k+T_{\partial_j\Phi}f_k,\qquad\forall k\in\mathbb{N}, \tag{4.10.16}$$

pointwise in $\mathbb{R}^n$. In particular, from (4.10.16) we infer that $T_{\partial_j\Phi}f_k\in C^\infty(\mathbb{R}^n)$ for all k ∈ N, and that the equality in (4.10.16) also holds in $\mathcal{D}'(\mathbb{R}^n)$. Since Theorem 4.97 gives that $T_{\partial_j\Phi}f_k\to T_{\partial_j\Phi}f$ in $L^p(\mathbb{R}^n)$ as k → ∞, based on Exercise 2.21 it follows that

$$T_{\partial_j\Phi}f_k\xrightarrow[k\to\infty]{\ \mathcal{D}'(\mathbb{R}^n)\ }T_{\partial_j\Phi}f\quad\text{and}\quad f_k\xrightarrow[k\to\infty]{\ \mathcal{D}'(\mathbb{R}^n)\ }f. \tag{4.10.17}$$

Since an immediate consequence of (4.10.15) is that $\partial_j(\Phi*f_k)\to\partial_j(\Phi*f)$ in $\mathcal{D}'(\mathbb{R}^n)$ as k → ∞, combining this, (4.10.17), and the fact that (4.10.16) holds in $\mathcal{D}'(\mathbb{R}^n)$, we obtain (4.10.9).

We are now prepared to state and prove our most general results regarding distributional derivatives of volume potentials. In particular, the next two theorems contain solutions to the questions formulated at the beginning of this section.

Theorem 4.100. Let $\Phi\in C^1(\mathbb{R}^n\setminus\{0\})$ be a function that is positive homogeneous of degree 1 − n. Consider a measurable set $\Omega\subseteq\mathbb{R}^n$ and assume that $f\in L^p(\Omega)$ for some p ∈ (1, n). Then $\int_\Omega\Phi(x-y)f(y)\,dy$ is absolutely convergent for a.e. $x\in\mathbb{R}^n$ and belongs to $L^1_{loc}(\mathbb{R}^n)$ as a function of the variable x. Moreover, if $\widetilde f$ denotes the extension of f by zero to $\mathbb{R}^n$, then for each j ∈ {1, ..., n} we have
$$\partial_{x_j}\int_\Omega\Phi(x-y)f(y)\,dy=\Big(\int_{S^{n-1}}\Phi(\omega)\omega_j\,d\sigma(\omega)\Big)\widetilde f(x)+\lim_{\varepsilon\to0^+}\int_{y\in\Omega\setminus B(x,\varepsilon)}(\partial_j\Phi)(x-y)f(y)\,dy, \tag{4.10.18}$$
where the derivative in the left-hand side is taken in $\mathcal{D}'(\mathbb{R}^n)$ and the equality is also understood in $\mathcal{D}'(\mathbb{R}^n)$. In particular, formula (4.10.18) holds for any function f belonging to some $L^q(\Omega)$, with q ∈ (1, ∞), that vanishes outside of a measurable subset of Ω of finite measure.

Proof. The main claim in the statement follows by applying Theorem 4.99 to the function

$$\widetilde f:=\begin{cases} f & \text{in }\Omega,\\ 0 & \text{in }\mathbb{R}^n\setminus\Omega,\end{cases} \tag{4.10.19}$$

upon observing that $\widetilde f\in L^p(\mathbb{R}^n)$. Moreover, if f is as in the last claim in the statement, then Hölder's inequality may be invoked to show that f belongs to $L^p(\Omega)$ for some p ∈ (1, n).

Recall that if a ∈ R, the integer part of a is denoted by $\lfloor a\rfloor$ and is by definition the largest integer less than or equal to a. To state the next theorem we introduce

$$\lfloor a\rfloor^-:=\begin{cases}\lfloor a\rfloor & \text{if }a\notin\mathbb{Z},\\ a-1 & \text{if }a\in\mathbb{Z}.\end{cases} \tag{4.10.20}$$

Hence, $\lfloor a\rfloor^-$ is the largest integer strictly less than a (thus, in particular, $\lfloor a\rfloor^-<a$ for every a ∈ R).

Theorem 4.101. Let $\Phi\in C^\infty(\mathbb{R}^n\setminus\{0\})$ be a function that is positive homogeneous of degree m ∈ R where m > −n. Define the generalized volume potential associated with Φ by setting, for each $f\in L^\infty_{comp}(\mathbb{R}^n)$,

$$(\Pi_\Phi f)(x):=\int_{\mathbb{R}^n}\Phi(x-y)f(y)\,dy,\qquad\forall x\in\mathbb{R}^n. \tag{4.10.21}$$

Then for each $f\in L^\infty_{comp}(\mathbb{R}^n)$ one has $\Pi_\Phi f\in C^{\lfloor m+n\rfloor^-}(\mathbb{R}^n)$ and

$$\partial^\alpha(\Pi_\Phi f)=\Pi_{\partial^\alpha\Phi}f\quad\text{pointwise in }\mathbb{R}^n,\qquad\forall\alpha\in\mathbb{N}_0^n\text{ with }|\alpha|\le\lfloor m+n\rfloor^-. \tag{4.10.22}$$

Moreover, if $\alpha\in\mathbb{N}_0^n$ is such that $|\alpha|=m+n$, then for each $f\in L^\infty_{comp}(\mathbb{R}^n)$ the distributional derivative $\partial^\alpha\Pi_\Phi f$ is of function type and satisfies

$$\partial^\alpha\Pi_\Phi f(x)=\Big(\int_{S^{n-1}}(\partial^\beta\Phi)(\omega)\omega_j\,d\sigma(\omega)\Big)f(x)+\lim_{\varepsilon\to0^+}\int_{\mathbb{R}^n\setminus B(x,\varepsilon)}(\partial^\alpha\Phi)(x-y)f(y)\,dy\quad\text{in }\mathcal{D}'(\mathbb{R}^n), \tag{4.10.23}$$

for any $\beta\in\mathbb{N}_0^n$ and j ∈ {1, ..., n} such that $\beta+e_j=\alpha$.

Proof. Fix $f\in L^\infty_{comp}(\mathbb{R}^n)$ and let K := supp f, which is a compact set in $\mathbb{R}^n$. By Exercises 4.49 and 4.51, for every $\alpha\in\mathbb{N}_0^n$ we have that

$$\partial^\alpha\Phi\in C^\infty(\mathbb{R}^n\setminus\{0\}),\quad\partial^\alpha\Phi\text{ is positive homogeneous of degree }m-|\alpha|,\text{ and}$$
$$|\partial^\alpha\Phi(x-y)|\le\|\partial^\alpha\Phi\|_{L^\infty(S^{n-1})}|x-y|^{m-|\alpha|}\qquad\forall x,y\in\mathbb{R}^n,\ x\ne y. \tag{4.10.24}$$
Fix now $\alpha\in\mathbb{N}_0^n$ such that $|\alpha|\le\lfloor m+n\rfloor^-$ (cf. (4.10.20)). Then $m-|\alpha|\ge m-\lfloor m+n\rfloor^->-n$, hence (4.10.24) further yields

$$\int_{\mathbb{R}^n}|(\partial^\alpha\Phi)(x-y)f(y)|\,dy\le C\|f\|_{L^\infty(\mathbb{R}^n)}\int_K|x-y|^{m-|\alpha|}\,dy<\infty. \tag{4.10.25}$$

This proves that $\Pi_{\partial^\alpha\Phi}f$ is well-defined. Next, we focus on proving

$$\Pi_{\partial^\alpha\Phi}f\in C^0(\mathbb{R}^n). \tag{4.10.26}$$

To see this, fix $x_0\in\mathbb{R}^n$ and pick an arbitrary sequence $\{x_k\}_{k\in\mathbb{N}}$ of points in $\mathbb{R}^n$ satisfying $\lim_{k\to\infty}x_k=x_0$. Consider the following functions defined a.e. in $\mathbb{R}^n$:

$$v_k:=(\partial^\alpha\Phi)(x_k-\cdot)f,\quad\forall k\in\mathbb{N},\qquad\text{and}\qquad v:=(\partial^\alpha\Phi)(x_0-\cdot)f. \tag{4.10.27}$$

To conclude that $\Pi_{\partial^\alpha\Phi}f$ is continuous at $x_0$, it suffices to show that

$$\lim_{k\to\infty}\int_K v_k(y)\,dy=\int_K v(y)\,dy. \tag{4.10.28}$$

The strategy for proving (4.10.28) is to apply Vitali's theorem (cf. Theorem 13.21) with X := K and μ being the restriction to K of the Lebesgue measure in $\mathbb{R}^n$. Since K is compact we have μ(X) < ∞, and from (4.10.25) we know that $v_k\in L^1(X,\mu)$ for all k ∈ N. Clearly, |v(x)| < ∞ for μ-a.e. x ∈ K. Also, $\lim_{k\to\infty}v_k(y)=v(y)$ for μ-almost every y ∈ K. Hence, in order to obtain (4.10.28), the only hypothesis left to verify in Vitali's theorem is that the sequence $\{v_k\}_{k\in\mathbb{N}}$ is uniformly integrable in (X, μ). With this goal in mind, let ε > 0 be fixed and consider a μ-measurable set A ⊂ K such that μ(A) is sufficiently small, to a degree to be specified later. Then for every k ∈ N, based on (4.10.24), we have
$$\int_A|v_k(y)|\,d\mu(y)\le\|\partial^\alpha\Phi\|_{L^\infty(S^{n-1})}\|f\|_{L^\infty(\mathbb{R}^n)}\int_A|x_k-y|^{m-|\alpha|}\,dy=C\int_{A-x_k}|x|^{m-|\alpha|}\,dx, \tag{4.10.29}$$

where $C:=\|\partial^\alpha\Phi\|_{L^\infty(S^{n-1})}\|f\|_{L^\infty(\mathbb{R}^n)}$. Note that $\mu(A-x_k)=\mu(A)$ for k ∈ N. Also, since the sequence $\{x_k\}_{k\in\mathbb{N}}$ and the set K are bounded,

$$\exists\,R\in(0,\infty)\ \text{such that}\ A-x_k\subset K-x_k\subset B(0,R)\quad\forall k\in\mathbb{N}. \tag{4.10.30}$$

Given that $m-|\alpha|>-n$, we have $|x|^{m-|\alpha|}\in L^1(B(0,R))$ and we may invoke Proposition 13.22 to conclude that there exists δ > 0 such that for every Lebesgue measurable set E ⊂ B(0, R) with Lebesgue measure less than δ we have $\int_E|x|^{m-|\alpha|}\,dx<\varepsilon/C$. At this point, return with the latter estimate to (4.10.29) to conclude that

$$\text{if }\mu(A)<\delta\text{ then }\int_A|v_k(y)|\,d\mu(y)<\varepsilon\quad\forall k\in\mathbb{N}. \tag{4.10.31}$$
This proves that the sequence $\{v_k\}_{k\in\mathbb{N}}$ is uniformly integrable in (X, μ) and finishes the proof of the fact that $\Pi_{\partial^\alpha\Phi}f$ is continuous at $x_0$. Since $x_0$ was arbitrary in $\mathbb{R}^n$, the membership in (4.10.26) follows.

Our next goal is to show (4.10.22). First note that, based on what we have proved so far, $\Pi_\Phi f,\Pi_{\partial^\alpha\Phi}f\in L^1_{loc}(\mathbb{R}^n)$, thus they define distributions in $\mathbb{R}^n$. We claim that the distribution $\Pi_{\partial^\alpha\Phi}f$ is equal to the distributional derivative $\partial^\alpha[\Pi_\Phi f]$. To see this, fix $\varphi\in C_0^\infty(\mathbb{R}^n)$ and, using the definition of distributional derivatives and the definition of $\Pi_\Phi f$, write

$$\langle\partial^\alpha\Pi_\Phi f,\varphi\rangle=(-1)^{|\alpha|}\langle\Pi_\Phi f,\partial^\alpha\varphi\rangle=(-1)^{|\alpha|}\int_{\mathbb{R}^n}\Big(\int_{\mathbb{R}^n}\Phi(x-y)f(y)\,dy\Big)\partial^\alpha\varphi(x)\,dx=(-1)^{|\alpha|}\int_{\mathbb{R}^n}f(y)\Big(\int_{\mathbb{R}^n}\Phi(x-y)\,\partial^\alpha\varphi(x)\,dx\Big)dy. \tag{4.10.32}$$

Based on (4.10.24), the assumptions on φ, and the fact that $m-|\alpha|>-n$, we may use Lebesgue's dominated convergence theorem and formula (13.7.4) repeatedly to further write

$$\begin{aligned}
\int_{\mathbb{R}^n}\Phi(x-y)\,\partial^\alpha\varphi(x)\,dx
&=\lim_{\varepsilon\to0^+}\int_{\mathbb{R}^n\setminus B(y,\varepsilon)}\Phi(x-y)\,\partial^\alpha\varphi(x)\,dx\\
&=\lim_{\varepsilon\to0^+}\Big\{(-1)^{|\alpha|}\int_{\mathbb{R}^n\setminus B(y,\varepsilon)}(\partial^\alpha\Phi)(x-y)\,\varphi(x)\,dx\\
&\qquad\quad+\sum_{\alpha=\beta+\gamma+e_j}c_{j\beta\gamma}\int_{\partial B(y,\varepsilon)}(\partial^\beta\Phi)(x-y)\,\frac{x_j-y_j}{\varepsilon}\,\partial^\gamma\varphi(x)\,d\sigma(x)\Big\},
\end{aligned}\tag{4.10.33}$$
where $c_{j\beta\gamma}$ are suitable constants independent of ε. Note that, for each β, γ, and j such that $\alpha=\beta+\gamma+e_j$, in light of (4.10.24) we have

$$\begin{aligned}
\Big|\int_{\partial B(y,\varepsilon)}(\partial^\beta\Phi)(x-y)\,\frac{x_j-y_j}{\varepsilon}\,\partial^\gamma\varphi(x)\,d\sigma(x)\Big|
&\le\int_{\partial B(y,\varepsilon)}|(\partial^\beta\Phi)(x-y)|\,|\partial^\gamma\varphi(x)|\,d\sigma(x)\\
&\le\|\partial^\gamma\varphi\|_{L^\infty(\mathbb{R}^n)}\|\partial^\beta\Phi\|_{L^\infty(S^{n-1})}\int_{\partial B(y,\varepsilon)}|x-y|^{m-|\beta|}\,d\sigma(x)\\
&=C\varepsilon^{m-|\beta|+n-1}\xrightarrow[\varepsilon\to0^+]{}0,
\end{aligned}\tag{4.10.34}$$

since $m-|\beta|+n-1\ge m+n-|\alpha|>0$. Returning with (4.10.34) to (4.10.33), and applying Lebesgue's dominated convergence theorem one more time, it follows that

$$\int_{\mathbb{R}^n}\Phi(x-y)\,\partial^\alpha\varphi(x)\,dx=(-1)^{|\alpha|}\int_{\mathbb{R}^n}(\partial^\alpha\Phi)(x-y)\,\varphi(x)\,dx. \tag{4.10.35}$$

This, when used in (4.10.32), further yields

$$\langle\partial^\alpha\Pi_\Phi f,\varphi\rangle=\int_{\mathbb{R}^n}f(y)\Big(\int_{\mathbb{R}^n}(\partial^\alpha\Phi)(x-y)\varphi(x)\,dx\Big)dy=\int_{\mathbb{R}^n}\Big(\int_{\mathbb{R}^n}(\partial^\alpha\Phi)(x-y)f(y)\,dy\Big)\varphi(x)\,dx=\langle\Pi_{\partial^\alpha\Phi}f,\varphi\rangle. \tag{4.10.36}$$

Since $\varphi\in C_0^\infty(\mathbb{R}^n)$ is arbitrary, from (4.10.36) we conclude that

$$\partial^\alpha\Pi_\Phi f=\Pi_{\partial^\alpha\Phi}f\quad\text{in }\mathcal{D}'(\mathbb{R}^n),\qquad\forall\alpha\in\mathbb{N}_0^n\text{ with }|\alpha|\le\lfloor m+n\rfloor^-\ \text{(cf. (4.10.20))}. \tag{4.10.37}$$
Upon observing that (4.10.26) and (4.10.37) hold for every $\alpha\in\mathbb{N}_0^n$ with the property that $|\alpha|\le\lfloor m+n\rfloor^-$, we may invoke Theorem 2.102 to infer that $\Pi_\Phi f\in C^{\lfloor m+n\rfloor^-}(\mathbb{R}^n)$ and that (4.10.22) is valid.

We are left with proving the very last statement in the theorem. To this end, suppose there exists $\alpha\in\mathbb{N}_0^n$ satisfying $|\alpha|=m+n$. In particular, we have m ∈ Z and |α| ≥ 1. Hence, there exists j ∈ {1, ..., n} with the property that $\alpha_j\ge1$. Set $\beta:=(\alpha_1,\dots,\alpha_{j-1},\alpha_j-1,\alpha_{j+1},\dots,\alpha_n)$, so that

$$\alpha=\beta+e_j\quad\text{and}\quad|\beta|=m+n-1=\lfloor m+n\rfloor^-, \tag{4.10.38}$$

the latter equality because m + n ∈ Z in the current case. Based on what we proved earlier, we have $\partial^\beta[\Pi_\Phi f]=\Pi_{\partial^\beta\Phi}f$ pointwise in $\mathbb{R}^n$. Also, $\partial^\beta\Phi$ is of class $C^\infty$ and positive homogeneous of degree 1 − n in $\mathbb{R}^n\setminus\{0\}$. Thus, we may apply Theorem 4.100 with Φ replaced by $\partial^\beta\Phi$ and Ω replaced by K and obtain

$$\begin{aligned}
\partial^\alpha\Pi_\Phi f(x)&=\partial_j\big(\partial^\beta[\Pi_\Phi f]\big)(x)=\partial_j\big(\Pi_{\partial^\beta\Phi}f\big)(x)=\partial_{x_j}\int_K(\partial^\beta\Phi)(x-y)f(y)\,dy\\
&=\Big(\int_{S^{n-1}}(\partial^\beta\Phi)(\omega)\omega_j\,d\sigma(\omega)\Big)f(x)+\lim_{\varepsilon\to0^+}\int_{\mathbb{R}^n\setminus B(x,\varepsilon)}(\partial^\alpha\Phi)(x-y)f(y)\,dy,
\end{aligned}\tag{4.10.39}$$

where the derivative in the left-hand side of (4.10.39) is taken in $\mathcal{D}'(\mathbb{R}^n)$ and the equality is understood in $\mathcal{D}'(\mathbb{R}^n)$. This proves (4.10.23) and finishes the proof of the theorem.

Exercise 4.102. In the context of Theorem 4.101 prove directly, without relying on Theorem 2.102, that whenever $f\in L^\infty_{comp}(\mathbb{R}^n)$ one has $\Pi_\Phi f\in C^{\lfloor m+n\rfloor^-}(\mathbb{R}^n)$.

Hint: Show that for each $x\in\mathbb{R}^n$ and each j ∈ {1, ..., n} fixed, one has

$$\lim_{h\to0}\frac{(\Pi_\Phi f)(x+he_j)-(\Pi_\Phi f)(x)}{h}=(\Pi_{\partial_j\Phi}f)(x). \tag{4.10.40}$$
The proof of (4.10.40) may be done by using Vitali's theorem (cf. Theorem 13.21) in a manner analogous to the proof of (4.10.26). Once (4.10.40) is established, iterate to allow higher-order partial derivatives.

Further Notes for Chap. 4. The significance of the class of tempered distributions stems from the fact that this class is stable under the action of the Fourier transform. The topics discussed in Sects. 4.1–4.6 are classical and a variety of expositions is present in the literature, though they differ in terms of length and depth, and the current presentation is no exception. For example, while the convolution product of distributions is often confined to the case in which one of the distributions in question is compactly supported, that is, $\mathcal{E}'(\mathbb{R}^n)*\mathcal{D}'(\mathbb{R}^n)$, in Theorem 4.18 we have seen that $\mathcal{S}(\mathbb{R}^n)*\mathcal{S}'(\mathbb{R}^n)$ continues to be meaningfully defined in $\mathcal{S}'(\mathbb{R}^n)$. For us, extending the action of the convolution product in this manner is motivated by the discussion in Sect. 4.9, indicating how SIO may be interpreted as multipliers.

The main result in Sect. 4.7, Theorem 4.76, appears to be new, at least in the formulation and the degree of generality in which it has been presented. This result may be regarded as a far-reaching generalization of Sokhotsky's formula (2.10.3); cf. Remark 4.79 for details. Theorem 4.76 has a number of remarkable consequences, and we use it to offer a new perspective on the treatment of the classical harmonic Poisson kernel in Sect. 4.8. Later on, in Sect. 11.6, Theorem 4.76 resurfaces as the key ingredient in the study of boundary behavior of layer potential operators in the upper half-space.

The treatment of the SIO in Sect. 4.9 highlights the interplay between distribution theory, harmonic analysis, and partial differential equations. Specifically, first the action of a SIO of the form $T_\Theta$ on a Schwartz function φ is interpreted as $\big(\mathrm{P.V.}\,\Theta\big)*\varphi$ (which, as pointed out earlier, is a well-defined object in $\mathcal{S}(\mathbb{R}^n)*\mathcal{S}'(\mathbb{R}^n)\subset\mathcal{S}'(\mathbb{R}^n)$). Second, the Fourier analysis of tempered distributions is invoked to conclude that $\widehat{T_\Theta\varphi}=\widehat\varphi\,\mathcal{F}\big(\mathrm{P.V.}\,\Theta\big)$, which shifts the focus of understanding the properties of the SIO $T_\Theta$ to clarifying the nature of the tempered distribution $m_\Theta:=\mathcal{F}\big(\mathrm{P.V.}\,\Theta\big)$. Third, it turns out that the distribution $m_\Theta$ is of function type and, in fact, a suitable pointwise formula may be deduced for it. In particular, it is apparent from this formula that $m_\Theta\in L^\infty(\mathbb{R}^n)$. Having established this, in a fourth step we may return to the original focus of our investigation, namely the SIO $T_\Theta$, and use the fact that $\widehat{T_\Theta\varphi}=m_\Theta\widehat\varphi$, along with Plancherel's formula and the boundedness of $m_\Theta$, to eventually conclude (via a density argument) that $T_\Theta$ extends to a linear and bounded operator on $L^2(\mathbb{R}^n)$. In turn, SIO naturally intervene in the derivatives of volume potentials discussed in Sect. 4.10, and the boundedness result just derived ultimately becomes the key tool in obtaining estimates for the solution of the Poisson problem, treated later.

The discussion in the previous paragraph also sheds light on some of the main aims of the theory of SIO, as a subbranch of harmonic analysis. For example, one would like to extend the action of SIO to $L^p(\mathbb{R}^n)$ for any p ∈ (1, ∞), rather than just $L^2(\mathbb{R}^n)$. Also, it is desirable to consider SIO which are not necessarily of convolution type, and/or in a setting in which $\mathbb{R}^n$ is replaced by a more general type of ambient (e.g., some type of surface in $\mathbb{R}^{n+1}$). The execution of this program, essentially originating in the work of A. P. Calderón, A. Zygmund, and S. G. Mikhlin in the 1950s, stretches all the way to the present day. The reader is referred to the exposition in the monographs [6, 7, 16, 23, 47, 62, 63], where references to specific research articles may be found. Here, we only wish to mention that, in turn, such progress in harmonic analysis has led to significant advances in those areas of partial differential equations where SIO play an important role. In this regard, the reader is referred to the discussion in [28, 36, 47, 50].
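Before turning to the exercises, a small numerical aside of mine (not from the text) on the jump constant in (4.10.23). For the Newtonian kernel, $\partial_k\Phi(z)=\frac{1}{\omega_{n-1}}\frac{z_k}{|z|^n}$ is homogeneous of degree 1 − n, and the coefficient of f(x) produced by a further derivative $\partial_j$ is $\frac{1}{\omega_{n-1}}\int_{S^{n-1}}\omega_k\omega_j\,d\sigma(\omega)=\frac{\delta_{jk}}{n}$, since the spherical average of $\omega_j\omega_k$ is $\delta_{jk}/n$. A Monte Carlo check of this constant for n = 3:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
# Uniform points on S^{n-1}: normalize standard Gaussian samples.
g = rng.standard_normal((1_000_000, n))
omega = g / np.linalg.norm(g, axis=1, keepdims=True)

# Empirical spherical average of omega_j * omega_k; should be delta_{jk}/n.
M = omega.T @ omega / omega.shape[0]
print(np.round(M, 3))   # approximately the identity matrix divided by 3
```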
4.11 Additional Exercises for Chap. 4

Exercise 4.103. Prove that $\mathrm{P.V.}\,\frac1x\in\mathcal{S}'(\mathbb{R})$.

Exercise 4.104. Prove that $|x|^N\ln|x|\in\mathcal{S}'(\mathbb{R}^n)$ if N is a real number satisfying N > −n.

Exercise 4.105. Prove that $(\ln|x|)'=\mathrm{P.V.}\,\frac1x$ in $\mathcal{S}'(\mathbb{R})$.

Exercise 4.106. Let a ∈ (0, ∞) and recall the Heaviside function H from (1.2.16). Prove that the function $e^{ax}H(x)$, x ∈ R, does not belong to $\mathcal{S}'(\mathbb{R})$. Do any of the functions $e^{-ax}H(-x)$, $e^{ax}H(-x)$, or $e^{-ax}H(x)$, defined for x ∈ R, belong to $\mathcal{S}'(\mathbb{R})$?

Exercise 4.107. In each case below determine if the given sequence of tempered distributions indexed over j ∈ N converges in $\mathcal{S}'(\mathbb{R})$. Whenever convergent, determine its limit.

(a) $f_j(x)=\frac{x}{x^2+j^{-2}}$, x ∈ R

(b) $f_j(x)=\frac1j\cdot\frac{1}{x^2+j^{-2}}$, x ∈ R

(c) $f_j(x)=\frac1\pi\,\frac{\sin(jx)}{x}$, x ∈ R \ {0}

(d) $f_j=e^{j^2}\delta_j$

Exercise 4.108. For each j ∈ N let $f_j(x):=j^n\theta(jx)$, $x\in\mathbb{R}^n$, where $\theta\in\mathcal{S}(\mathbb{R}^n)$ is fixed and satisfies $\int_{\mathbb{R}^n}\theta(x)\,dx=1$. Does the sequence $\{f_j\}_{j\in\mathbb{N}}$ converge in $\mathcal{S}'(\mathbb{R}^n)$? If yes, determine its limit.

Exercise 4.109. For each j ∈ N consider the function $f_j(x):=e^{x^2}\chi_{[-j,j]}(x)$ defined for x ∈ R. Prove that the sequence $\{f_j\}_{j\in\mathbb{N}}$ converges in $\mathcal{D}'(\mathbb{R})$ but not in $\mathcal{S}'(\mathbb{R})$.

Exercise 4.110. For each j ∈ N let $f_j(x):=\chi_{[j-1,j]}(x)$ for x ∈ R. Prove that the sequence $\{f_j\}_{j\in\mathbb{N}}$ does not converge in $L^p(\mathbb{R})$ for any p ≥ 1 but converges to zero in $\mathcal{S}'(\mathbb{R})$.

Exercise 4.111. For each j ∈ N consider the tempered distribution $u_j\in\mathcal{S}'(\mathbb{R})$ defined by $u_j:=\chi_{[-j,j]}\,e^x\sin(e^x)$. Prove that $u_j\xrightarrow[j\to\infty]{\ \mathcal{D}'(\mathbb{R})\ }e^x\sin(e^x)$. Does the sequence $\{u_j\}_{j\in\mathbb{N}}$ converge in $\mathcal{S}'(\mathbb{R})$? If yes, what is the limit?

Exercise 4.112. Let m ∈ N. Prove that any solution u of the equation $x^mu=0$ in $\mathcal{D}'(\mathbb{R})$ satisfies $u\in\mathcal{S}'(\mathbb{R})$. Use the Fourier transform to show that the general solution to this equation is $u=\sum_{k=0}^{m-1}c_k\delta^{(k)}$ in $\mathcal{S}'(\mathbb{R})$, where $c_k\in\mathbb{C}$ for every k = 0, 1, ..., m − 1.

Exercise 4.113. Prove that for any $f\in\mathcal{S}'(\mathbb{R})$ there exists $u\in\mathcal{S}'(\mathbb{R})$ such that $xu=f$ in $\mathcal{S}'(\mathbb{R})$.
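As a numerical sanity check for part (c) of Exercise 4.107 (my own illustration, not part of the text): pairing $f_j(x)=\frac{\sin(jx)}{\pi x}$ against the Gaussian $\varphi(x)=e^{-x^2}$ should produce values approaching φ(0) = 1, consistent with $f_j\to\delta$ in $\mathcal{S}'(\mathbb{R})$.

```python
import numpy as np

phi = lambda x: np.exp(-x**2)            # test function, phi(0) = 1

def pair(j, L=60.0, m=2_000_000):
    # <f_j, phi> with f_j(x) = sin(jx)/(pi x), via trapezoid rule on a fine
    # symmetric grid; even m keeps the point x = 0 off the grid.
    x = np.linspace(-L, L, m)
    g = np.sin(j * x) / (np.pi * x) * phi(x)
    return np.sum((g[1:] + g[:-1]) / 2) * (x[1] - x[0])

vals = [pair(j) for j in (1, 4, 16)]
print(vals)   # climbing toward phi(0) = 1 as j grows
```

One can check analytically that for this particular φ the pairing equals erf(j/2), which indeed tends to 1.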
Exercise 4.114. Does the equation $e^{-|x|^2}u=1$ in $\mathcal{S}'(\mathbb{R}^n)$ have a solution?

Exercise 4.115. Prove that the Heaviside function H belongs to $\mathcal{S}'(\mathbb{R})$ and compute its Fourier transform in $\mathcal{S}'(\mathbb{R})$.

Exercise 4.116. Compute the Fourier transform of $\mathrm{P.V.}\,\frac1x$ in $\mathcal{S}'(\mathbb{R})$.

Exercise 4.117. Compute the Fourier transform in $\mathcal{S}'(\mathbb{R})$ of each of the following tempered distributions (all are considered in $\mathcal{S}'(\mathbb{R})$; recall the definition of the sgn function from (1.4.3)):

(a) sgn x.

(b) $|x|^k$ for k ∈ N.

(c) $\frac{\sin(ax)}{x}$ for a ∈ R \ {0}.

(d) $\frac{\sin(ax)\sin(bx)}{x}$ for a, b ∈ R \ {0}.

(e) $\sin(x^2)$.

(f) $\ln|x|$.

Exercise 4.118. Let n = 3 and R ∈ (0, ∞). Compute the Fourier transform of the tempered distribution $\delta_{\partial B(0,R)}$ in $\mathcal{S}'(\mathbb{R}^3)$, where $\delta_{\partial B(0,R)}$ is the distribution defined as in Exercise 2.128 corresponding to Σ := ∂B(0, R).

Exercise 4.119. Let $k\in\mathbb{N}_0$ and suppose $f\in L^1(\mathbb{R}^n)$ has the property that $x^\alpha f\in L^1(\mathbb{R}^n)$ for every $\alpha\in\mathbb{N}_0^n$ satisfying |α| ≤ k. Prove that $\widehat f$, the Fourier transform of f in $\mathcal{S}'(\mathbb{R}^n)$, satisfies $\widehat f\in C^k(\mathbb{R}^n)$.

Exercise 4.120. Show that $\chi_{[-1,1]}\in\mathcal{S}'(\mathbb{R})$ and compute $\widehat{\chi_{[-1,1]}}$ in $\mathcal{S}'(\mathbb{R})$. Do we have $\widehat{\chi_{[-1,1]}}\in L^1(\mathbb{R})$?
Exercise 4.121. Fix $x_0\in\mathbb{R}^n$ and set

$$t_{x_0}:\mathcal{S}'(\mathbb{R}^n)\to\mathcal{S}'(\mathbb{R}^n),\qquad\langle t_{x_0}u,\varphi\rangle:=\langle u,t_{-x_0}(\varphi)\rangle,\quad\forall\varphi\in\mathcal{S}(\mathbb{R}^n), \tag{4.11.1}$$

where $t_{-x_0}(\varphi)$ is understood as in Exercise 3.42. Prove that the map in (4.11.1) is well-defined, linear, and continuous, and is an extension of the map from Exercise 3.42 in the sense that, if $f\in\mathcal{S}(\mathbb{R}^n)$, then $t_{x_0}u_f=u_{t_{x_0}(f)}$ in $\mathcal{S}'(\mathbb{R}^n)$. Also show that

$$\mathcal{F}(t_{x_0}u)=e^{-ix_0\cdot(\cdot)}\,\widehat u\quad\text{in }\mathcal{S}'(\mathbb{R}^n)\text{ for every }u\in\mathcal{S}'(\mathbb{R}^n). \tag{4.11.2}$$

Exercise 4.122. Let a ∈ R and consider the function g(x) := cos(ax) for every x ∈ R. Prove that $g\in\mathcal{S}'(\mathbb{R})$ and compute $\widehat g$ in $\mathcal{S}'(\mathbb{R})$.

Exercise 4.123. Let m ∈ N and suppose P is a polynomial in $\mathbb{R}^n$ of degree 2m that has no real roots. Prove that $\frac1P\in\mathcal{S}'(\mathbb{R}^n)$, and that if 2m > n then $\mathcal{F}\big(\frac1P\big)\in C^{2m-n-1}(\mathbb{R}^n)$.

Exercise 4.124. Let P be a polynomial in $\mathbb{R}^n$ and suppose k ∈ N. Prove that if P is homogeneous of degree −k then P ≡ 0.

Exercise 4.125. Let $\zeta,\eta\in\mathbb{R}^n$, n ≥ 2, be two unit vectors such that $\zeta\cdot\eta\ne-1$, and consider the linear mapping $R:\mathbb{R}^n\to\mathbb{R}^n$ defined by

$$R\xi:=\xi-\frac{\xi\cdot(\eta+\zeta)}{1+\eta\cdot\zeta}\,\zeta+\frac{\xi\cdot[(1+2\eta\cdot\zeta)\zeta-\eta]}{1+\eta\cdot\zeta}\,\eta,\qquad\forall\xi\in\mathbb{R}^n. \tag{4.11.3}$$

Show that this is an orthogonal transformation in $\mathbb{R}^n$ that satisfies

$$R\zeta=\eta\quad\text{and}\quad R^\top\eta=\zeta. \tag{4.11.4}$$
Exercise 4.126. Prove that for every c ∈ R \ {0} the following formulas hold:

$$\lim_{\substack{\varepsilon\to0^+\\ R\to\infty}}\int_\varepsilon^R\frac{\cos(c\rho)-\cos\rho}{\rho}\,d\rho=-\ln|c|, \tag{4.11.5}$$

$$\lim_{\substack{\varepsilon\to0^+\\ R\to\infty}}\int_\varepsilon^R\frac{\sin(c\rho)}{\rho}\,d\rho=\frac\pi2\,\mathrm{sgn}\,c, \tag{4.11.6}$$

$$\sup_{0<\varepsilon<R<\infty}\Big|\int_\varepsilon^R\frac{\cos(c\rho)-\cos\rho}{\rho}\,d\rho\Big|\le2\,\big|\ln|c|\big|, \tag{4.11.7}$$

$$\sup_{0<\varepsilon<R<\infty}\Big|\int_\varepsilon^R\frac{\sin(c\rho)}{\rho}\,d\rho\Big|<\infty. \tag{4.11.8}$$
Chapter 5

The Concept of a Fundamental Solution

In this chapter we consider constant coefficient linear differential operators and discuss the existence of a fundamental solution for such operators.

5.1 Constant Coefficient Linear Differential Operators
Recall (3.1.10). To fix notation, for $m\in\mathbb{N}_0$, let

$$P(D):=\sum_{|\alpha|\le m}a_\alpha D^\alpha,\qquad a_\alpha\in\mathbb{C},\quad\forall\alpha\in\mathbb{N}_0^n,\ |\alpha|\le m. \tag{5.1.1}$$

Also, whenever P(D) is as above, define $P(-D):=\sum_{|\alpha|\le m}(-1)^{|\alpha|}a_\alpha D^\alpha$, and set

$$P(\xi):=\sum_{|\alpha|\le m}a_\alpha\xi^\alpha,\qquad\forall\xi\in\mathbb{R}^n. \tag{5.1.2}$$

Typically, an object P(D) as in (5.1.1) is called a constant coefficient linear differential operator, while P(ξ) from (5.1.2) is referred to as its total (or full) symbol. Moreover, P(D) is said to have order m provided

$$\text{there exists }\alpha^*\in\mathbb{N}_0^n\text{ with }|\alpha^*|=m\text{ and }a_{\alpha^*}\ne0. \tag{5.1.3}$$

Whenever P(D) is a constant coefficient linear differential operator of order m we define its principal symbol to be

$$P_m(\xi):=\sum_{|\alpha|=m}a_\alpha\xi^\alpha,\qquad\text{for }\xi\in\mathbb{R}^n. \tag{5.1.4}$$
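The bookkeeping between the writings (5.1.1) and (5.1.5) below is easy to mechanize. As a small illustration of mine (not from the text), representing coefficients as a dict keyed by multi-indices, the full symbol of $Q(\partial)=\sum_{|\alpha|\le m}b_\alpha\partial^\alpha$ is $\sum_{|\alpha|\le m}i^{|\alpha|}b_\alpha\xi^\alpha$; for the Laplacian $\Delta=\partial_1^2+\partial_2^2$ in $\mathbb{R}^2$ this gives $-(\xi_1^2+\xi_2^2)$.

```python
def full_symbol(b, xi):
    # b: dict mapping multi-index tuples alpha -> coefficient b_alpha of d^alpha.
    # Full symbol of Q(d) = sum b_alpha d^alpha is sum i^{|alpha|} b_alpha xi^alpha.
    total = 0j
    for alpha, coeff in b.items():
        mono = 1.0
        for xij, aj in zip(xi, alpha):
            mono *= xij ** aj
        i_pow = (1, 1j, -1, -1j)[sum(alpha) % 4]   # exact powers of i
        total += i_pow * coeff * mono
    return total

# Laplacian in R^2: Q(d) = d_1^2 + d_2^2, i.e. b_{(2,0)} = b_{(0,2)} = 1.
laplacian = {(2, 0): 1.0, (0, 2): 1.0}
print(full_symbol(laplacian, (3.0, 4.0)))   # -(9 + 16) = -25, as a complex number
```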
D. Mitrea, Distributions, Partial Differential Equations, and Harmonic Analysis, Universitext, DOI 10.1007/978-1-4614-8208-6 5, © Springer Science+Business Media New York 2013
Obviously, any operator of the form

$$Q(\partial):=\sum_{|\alpha|\le m}b_\alpha\partial^\alpha \tag{5.1.5}$$

may be expressed as P(D) for the choice of coefficients $a_\alpha:=i^{|\alpha|}b_\alpha$ for each $\alpha\in\mathbb{N}_0^n$ with |α| ≤ m. Hence, we may move freely back and forth between the writings of a differential operator in (5.1.1) and (5.1.5). While we shall naturally define in this case $Q(\xi):=\sum_{|\alpha|\le m}b_\alpha\xi^\alpha$, the reader is advised that the full symbol of Q(∂) from (5.1.5) is the expression $\sum_{|\alpha|\le m}i^{|\alpha|}b_\alpha\xi^\alpha$, for $\xi\in\mathbb{R}^n$, while its principal symbol is $\sum_{|\alpha|=m}i^mb_\alpha\xi^\alpha$, for $\xi\in\mathbb{R}^n$. The terminology and formalism described above will play a significant role in our subsequent work.

In the last part of this section we prove a regularity result involving constant coefficient linear differential operators with nonvanishing total symbol outside the origin, which is further used to establish a rather general Liouville-type theorem for this class of operators.

Proposition 5.1. Suppose P(D) is a constant coefficient linear differential operator in $\mathbb{R}^n$ with the property that

$$P(\xi)\ne0\quad\text{for every }\xi\in\mathbb{R}^n\setminus\{0\}. \tag{5.1.6}$$

Then if $u\in\mathcal{S}'(\mathbb{R}^n)$ is such that P(D)u = 0 in $\mathcal{S}'(\mathbb{R}^n)$, it follows that u is a polynomial in $\mathbb{R}^n$ and P(D)u = 0 pointwise in $\mathbb{R}^n$.

Proof. If $u\in\mathcal{S}'(\mathbb{R}^n)$ is such that P(D)u = 0 in $\mathcal{S}'(\mathbb{R}^n)$, taking the Fourier transform we obtain $P(\xi)\widehat u=0$ in $\mathcal{S}'(\mathbb{R}^n)$. In view of (5.1.6), the latter identity implies $\mathrm{supp}\,\widehat u\subseteq\{0\}$. By Exercise 4.35, it follows that u is a polynomial in $\mathbb{R}^n$. Since $u\in C^\infty(\mathbb{R}^n)$, the condition P(D)u = 0 in $\mathcal{S}'(\mathbb{R}^n)$ further yields P(D)u = 0 pointwise in $\mathbb{R}^n$.

In preparation for the general Liouville-type theorem advertised earlier, we prove the following result relating the growth of a polynomial to its degree.

Lemma 5.2. Let P(x) be a polynomial in $\mathbb{R}^n$ with the property that there exist $N\in\mathbb{N}_0$ and C, R ∈ [0, ∞) such that

$$|P(x)|\le C|x|^N\quad\text{whenever }|x|\ge R. \tag{5.1.7}$$

Then P(x) has degree at most N. In particular, the only bounded polynomials in $\mathbb{R}^n$ are constants.

Proof. We begin by noting that the case n = 1 is easily handled. In the multidimensional setting, assume that $P(x)=\sum_{|\alpha|\le m}a_\alpha x^\alpha$ is a polynomial in $\mathbb{R}^n$ satisfying (5.1.7) for some $N\in\mathbb{N}_0$ and C, R ∈ [0, ∞). Suppose m ≥ N + 1,
since otherwise there is nothing to prove. Fix an arbitrary $\omega\in S^{n-1}$ and consider the one-dimensional polynomial $p_\omega(t):=P(t\omega)$ for t ∈ R, that is,

$$p_\omega(t)=\sum_{j=0}^m\Big(\sum_{|\alpha|=j}a_\alpha\omega^\alpha\Big)t^j,\qquad t\in\mathbb{R}. \tag{5.1.8}$$

Given that $|p_\omega(t)|=|P(t\omega)|\le C|t\omega|^N=C|t|^N$ whenever |t| ≥ R, the one-dimensional result applies and gives that the coefficient of $t^j$ in (5.1.8) vanishes if j ≥ N + 1. Thus,

$$P(t\omega)=p_\omega(t)=\sum_{j=0}^N\Big(\sum_{|\alpha|=j}a_\alpha\omega^\alpha\Big)t^j=\sum_{|\alpha|\le N}a_\alpha(t\omega)^\alpha,\qquad\forall t\in\mathbb{R}. \tag{5.1.9}$$

Given that $\omega\in S^{n-1}$ was arbitrary, it follows that $P(x)=\sum_{|\alpha|\le N}a_\alpha x^\alpha$ for every $x\in\mathbb{R}^n$.

All ingredients are now in place for dealing with the following result.

Theorem 5.3 (A General Liouville-Type Theorem). Assume P(D) is a constant coefficient linear differential operator in $\mathbb{R}^n$ such that (5.1.6) holds. Also, suppose $u\in L^1_{loc}(\mathbb{R}^n)$ satisfies P(D)u = 0 in $\mathcal{S}'(\mathbb{R}^n)$ and has the property that there exist $N\in\mathbb{N}_0$ and C, R ∈ [0, ∞) such that

$$|u(x)|\le C|x|^N\quad\text{whenever }|x|\ge R. \tag{5.1.10}$$

Then u is a polynomial in $\mathbb{R}^n$ of degree at most N. In particular, if $u\in L^\infty(\mathbb{R}^n)$ satisfies P(D)u = 0 in $\mathcal{S}'(\mathbb{R}^n)$ then u is necessarily a constant.

Proof. Since (5.1.10) implies that the locally integrable function u belongs to $\mathcal{S}'(\mathbb{R}^n)$ (cf. Example 4.3), Proposition 5.1 implies that u is a polynomial in $\mathbb{R}^n$. Moreover, Lemma 5.2 gives that the degree of this polynomial is at most N.
5.2 A First Look at Fundamental Solutions

Definition 5.4. Let P(D) be a constant coefficient linear differential operator in $\mathbb{R}^n$. Then $E\in\mathcal{D}'(\mathbb{R}^n)$ is called a fundamental solution for P(D) provided

$$P(D)E=\delta\quad\text{in }\mathcal{D}'(\mathbb{R}^n). \tag{5.2.1}$$

A simple, useful observation is that $E\in\mathcal{D}'(\mathbb{R}^n)$ is a fundamental solution for P(D) if and only if

$$\langle E,P(-D)\varphi\rangle=\varphi(0),\qquad\forall\varphi\in C_0^\infty(\mathbb{R}^n). \tag{5.2.2}$$

Indeed, this is immediate from the fact that $\langle P(D)E,\varphi\rangle=\langle E,P(-D)\varphi\rangle$ for each function $\varphi\in C_0^\infty(\mathbb{R}^n)$.
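To see (5.2.2) in action, take $P(D)=\frac{d^2}{dx^2}$ on R, for which it is classical that $E(x)=\frac{|x|}{2}$ satisfies $E''=\delta$ in $\mathcal{D}'(\mathbb{R})$. Since P(−D) = P(D) here, (5.2.2) reads $\int\frac{|x|}{2}\,\varphi''(x)\,dx=\varphi(0)$. A numerical check of mine (not from the text), using a rapidly decaying Gaussian in place of a compactly supported φ:

```python
import numpy as np

x = np.linspace(-30.0, 30.0, 1_200_001)
dx = x[1] - x[0]

E = np.abs(x) / 2.0                      # candidate fundamental solution of d^2/dx^2

x0 = 0.3                                 # center of the test function
phi = np.exp(-(x - x0) ** 2)             # phi decays fast enough to truncate safely
phi_pp = (4 * (x - x0) ** 2 - 2) * phi   # exact second derivative of phi

integrand = E * phi_pp                   # <E, P(-D) phi> via the trapezoid rule
lhs = np.sum((integrand[:-1] + integrand[1:]) / 2) * dx
rhs = np.exp(-x0 ** 2)                   # phi(0)
print(lhs, rhs)
```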
Remark 5.5. Suppose P(D) is a constant coefficient linear differential operator in $\mathbb{R}^n$. Then the equation

$$P(D)u=f \tag{5.2.3}$$

is called the Poisson equation for the operator P(D). If the operator P(D) has a fundamental solution $E\in\mathcal{D}'(\mathbb{R}^n)$, then for any $f\in\mathcal{D}'(\mathbb{R}^n)$ for which E ∗ f exists as an element of $\mathcal{D}'(\mathbb{R}^n)$ (for example, this is the case if $f\in\mathcal{E}'(\mathbb{R}^n)$) the Poisson equation (5.2.3) is solvable in $\mathcal{D}'(\mathbb{R}^n)$. Indeed, if we set u := E ∗ f, then

$$P(D)u=(P(D)E)*f=\delta*f=f\quad\text{in }\mathcal{D}'(\mathbb{R}^n). \tag{5.2.4}$$

This is one of the many reasons for which fundamental solutions are important when solving partial differential equations.

Remark 5.6. Let P(D) and Q(D) be two constant coefficient linear differential operators in $\mathbb{R}^n$ and let $E\in\mathcal{D}'(\mathbb{R}^n)$ be a fundamental solution for the operator Q(D)P(D) in $\mathcal{D}'(\mathbb{R}^n)$. Then P(D)E is a fundamental solution for Q(D) in $\mathcal{D}'(\mathbb{R}^n)$. Moreover, $P(D)E\in\mathcal{S}'(\mathbb{R}^n)$ if $E\in\mathcal{S}'(\mathbb{R}^n)$.

In general, there are two types of questions we are going to be concerned with in this regard, namely whether fundamental solutions exist and, if so, whether they can all be cataloged. The proposition below shows that for any differential operator with a nonvanishing total symbol on $\mathbb{R}^n\setminus\{0\}$, a fundamental solution which is a tempered distribution is unique up to polynomials.

Proposition 5.7. Suppose P(D) is a constant coefficient linear differential operator in $\mathbb{R}^n$ with the property that $P(\xi)\ne0$ for every $\xi\in\mathbb{R}^n\setminus\{0\}$. If P(D) has a fundamental solution E that is a tempered distribution, then any other fundamental solution of P(D) belonging to $\mathcal{S}'(\mathbb{R}^n)$ differs from E by a polynomial Q satisfying P(D)Q = 0 pointwise in $\mathbb{R}^n$.

Proof. Suppose $u\in\mathcal{S}'(\mathbb{R}^n)$ is such that P(D)u = δ in $\mathcal{S}'(\mathbb{R}^n)$. Then P(D)(u − E) = 0 in $\mathcal{S}'(\mathbb{R}^n)$ and the desired conclusion is provided by Proposition 5.1.

Consider now the task of finding a formula for a fundamental solution. In a first stage, suppose P(D) is as in (5.1.1) and make the additional assumption that

$$\text{there exists }c\in(0,\infty)\text{ such that }|P(\xi)|\ge c\text{ for every }\xi\in\mathbb{R}^n. \tag{5.2.5}$$

In such a scenario, $\frac{1}{P(\xi)}\in\mathcal{L}(\mathbb{R}^n)$, hence $\frac{1}{P(\xi)}\in\mathcal{S}'(\mathbb{R}^n)$ by Exercise 4.6. Thus, if some $E\in\mathcal{S}'(\mathbb{R}^n)$ happens to be a fundamental solution of P(D) (we will see later in Theorem 5.13 that such an E always exists), then P(D)E = δ in $\mathcal{S}'(\mathbb{R}^n)$. After applying the Fourier transform, this implies $P(\xi)\widehat E=1$ in $\mathcal{S}'(\mathbb{R}^n)$ (recall part (b) in Theorem 4.25 and Example 4.21). In particular, since $P(\xi)\frac{1}{P(\xi)}=1$ in $\mathcal{S}'(\mathbb{R}^n)$, we obtain

$$P(\xi)\Big(\widehat E(\xi)-\frac{1}{P(\xi)}\Big)=0\quad\text{in }\mathcal{S}'(\mathbb{R}^n). \tag{5.2.6}$$
Multiplying by $\frac{1}{P(\xi)} \in \mathcal{L}(\mathbb{R}^n)$ then gives $\widehat{E}(\xi) = \frac{1}{P(\xi)}$ in $\mathcal{S}'(\mathbb{R}^n)$, hence, further,
$$\langle E, \varphi\rangle = (2\pi)^{-n}\big\langle \widehat{E}, \widehat{\varphi^{\vee}}\,\big\rangle = (2\pi)^{-n}\int_{\mathbb{R}^n} \frac{\widehat{\varphi}(-\xi)}{P(\xi)}\,d\xi, \qquad \forall\,\varphi \in \mathcal{S}(\mathbb{R}^n). \tag{5.2.7}$$
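Formula (5.2.7) can be illustrated on a concrete operator satisfying (5.2.5). The check below is our own sketch (NumPy assumed, grid and tolerance ours): for $P(D) = 1 - \frac{d^2}{dx^2}$ on $\mathbb{R}$ one has $P(\xi) = 1 + \xi^2 \geq 1$, and the classical fundamental solution $E(x) = \frac{1}{2}e^{-|x|}$ should be recovered as the inverse Fourier transform $(2\pi)^{-1}\int e^{ix\xi}/(1+\xi^2)\,d\xi$.

```python
import numpy as np

# Symbol P(xi) = 1 + xi^2 of P(D) = 1 - d^2/dx^2 is bounded below by 1,
# so E = (2*pi)^{-1} * (inverse Fourier transform of 1/P).
dxi = 0.01
xi = np.arange(-500.0, 500.0, dxi)

x0 = 1.0
# Riemann-sum approximation of the inversion integral at the point x0
E_num = (np.sum(np.exp(1j * x0 * xi) / (1.0 + xi**2)) * dxi).real / (2 * np.pi)

E_exact = 0.5 * np.exp(-abs(x0))   # known fundamental solution e^{-|x|}/2
err = abs(E_num - E_exact)
print(err)
```

The truncation and discretization errors are well below the tolerance used, confirming $\widehat{E} = 1/P$ in this case.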
Formula (5.2.7), derived under the assumption that (5.2.5) holds, is the departure point for the construction of a fundamental solution for a constant coefficient linear differential operator $P(D)$ in the more general case when (5.2.5) is no longer assumed. Of course, we would have to assume that $P(D) \not\equiv 0$ (since otherwise (5.2.1) clearly does not have any solution). The following comments are designed to shed some light on how the issues created by the zeros of $P(\xi)$ [in the context of (5.2.7)] may be circumvented when condition (5.2.5) is dropped. The first observation is that if $\varphi \in C_0^\infty(\mathbb{R}^n)$ we may extend its Fourier transform $\widehat{\varphi}$ to $\mathbb{C}^n$ by setting
$$\widehat{\varphi}(\zeta) := \int_{\mathbb{R}^n} e^{-i\sum_{j=1}^n x_j\zeta_j}\varphi(x)\,dx \quad\text{for every } \zeta = (\zeta_1,\dots,\zeta_n) \in \mathbb{C}^n. \tag{5.2.8}$$
Note that the compact support condition on $\varphi$ is needed since, for each fixed $\zeta \in \mathbb{C}^n\setminus\mathbb{R}^n$, the function $\big|e^{-i\sum_{j=1}^n x_j\zeta_j}\big|$ grows rapidly as $|x| \to \infty$. Second, if (5.2.5) holds then $P(\xi) \neq 0$ for each $\xi \in \mathbb{R}^n$. Consequently, for each $\xi \in \mathbb{R}^n$ there exists a small number $r(\xi) > 0$ such that $P(z) \neq 0$ whenever $z \in \mathbb{C}^n$ satisfies $|z - \xi| < 2r(\xi)$. Hence, for every $\eta \in S^{n-1}$ the mapping
$$\big\{\tau \in \mathbb{C} : |\tau| < 2r(\xi)\big\} \ni \tau \mapsto \frac{\widehat{\varphi}(-\xi - \tau\eta)}{P(\xi + \tau\eta)} \in \mathbb{C} \tag{5.2.9}$$
is well defined and analytic. Using Cauchy's formula [and keeping in mind the convention (5.2.8)] we may therefore rewrite (5.2.7) in the form
$$\langle E, \varphi\rangle = (2\pi)^{-n}\int_{\mathbb{R}^n} \frac{\widehat{\varphi}(-\xi)}{P(\xi)}\,d\xi = (2\pi)^{-n}\int_{\mathbb{R}^n} \frac{1}{2\pi i}\oint_{|\tau|=r(\xi)} \frac{\widehat{\varphi}(-\xi - \tau\eta)}{P(\xi + \tau\eta)}\,\frac{d\tau}{\tau}\,d\xi, \qquad \forall\,\varphi \in C_0^\infty(\mathbb{R}^n). \tag{5.2.10}$$
The advantage of the expression in the last line of (5.2.10), compared to the expression in the last line of (5.2.7), is the added flexibility in the choice of the set over which $\frac{1}{P}$ is integrated. This discussion suggests that we try to construct a fundamental solution for $P(D)$ in a manner similar to the expression in the last line of (5.2.10), with the integration taking place on a set avoiding the zeros of $P$, even in the case when (5.2.5) is dropped. We shall implement this strategy in Sect. 5.3. For the time being, we discuss a preliminary result that will play a basic role in this endeavor. To state it, recall the notion of principal symbol from (5.1.4).
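The Cauchy-formula step behind (5.2.10) — replacing a pointwise evaluation by an average over a circle that avoids the zeros of $P$ — can be checked numerically. The sketch below uses hypothetical one-dimensional data of our own choosing: $n = 1$, $\eta = 1$, $P(z) = z^2 + 2$ (no zeros near the contour), and a Gaussian standing in for $\widehat{\varphi}$ (NumPy assumed).

```python
import numpy as np

# Verify (1/(2*pi*i)) \oint_{|tau|=r} g(tau)/tau d tau = g(0) for
# g(tau) = phihat(-xi - tau) / P(xi + tau), with phihat(s) = exp(-s^2)
# entire and P(z) = z^2 + 2 nonvanishing on a neighborhood of the contour.
P = lambda z: z**2 + 2.0
phihat = lambda s: np.exp(-s**2)

xi, r = 0.5, 0.3
theta = np.linspace(0.0, 2*np.pi, 4000, endpoint=False)
tau = r * np.exp(1j * theta)
g = phihat(-xi - tau) / P(xi + tau)

# With tau = r e^{i theta}, d tau = i tau d theta, so the contour integral
# reduces to the plain average of g over the circle
avg = g.mean()
target = phihat(-xi) / P(xi)
err = abs(avg - target)
print(err)
```

For an analytic integrand the equispaced average converges spectrally fast, so the agreement is essentially at machine precision.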
CHAPTER 5. THE CONCEPT OF A FUNDAMENTAL SOLUTION
Lemma 5.8. Let $m \in \mathbb{N}$ and let $P(D)$ be a differential operator of order $m$ as in (5.1.1). In addition, suppose that $P_m(\eta) \neq 0$ for some $\eta \in S^{n-1}$. Then there exist $\varepsilon > 0$, numbers $r_0, r_1, \dots, r_m > 0$, and closed sets $F_0, F_1, \dots, F_m$ in $\mathbb{R}^n$ such that

(a) $\mathbb{R}^n = \bigcup_{j=0}^m F_j$, and

(b) if $\xi \in F_j$, $\tau \in \mathbb{C}$, $|\tau| = r_j$, then $|P(\xi + \tau\eta)| \geq \varepsilon$, for each $j = 0, 1, \dots, m$.

Proof. Choose $r_j := \frac{j+1}{m}$ for $j \in \{0, 1, \dots, m\}$ and set $\varepsilon := (2m)^{-m}|P_m(\eta)| > 0$. Also, for each $j \in \{0, 1, \dots, m\}$ set
$$F_j := \big\{\xi \in \mathbb{R}^n : |P(\xi + \tau\eta)| \geq \varepsilon,\ \forall\,\tau \in \mathbb{C},\ |\tau| = r_j\big\}. \tag{5.2.11}$$
These sets are closed in $\mathbb{R}^n$ and clearly satisfy (b). It remains to show that (a) holds. Fix $\xi \in \mathbb{R}^n$ and consider the mapping $\mathbb{C} \ni \tau \mapsto P(\xi + \tau\eta) \in \mathbb{C}$. This is a polynomial in $\tau$ of degree $m$ with the coefficient of $\tau^m$ equal to $P_m(\eta)$. Let $\tau_j(\xi)$, $j = 1, \dots, m$, be the zeros of this polynomial. Then
$$P(\xi + \tau\eta) = P_m(\eta)\prod_{j=1}^m \big(\tau - \tau_j(\xi)\big), \qquad \forall\,\tau \in \mathbb{C}. \tag{5.2.12}$$
Consider
$$M_j := \Big\{\tau \in \mathbb{C} : \operatorname{dist}\big(\tau, \partial B(0, r_j)\big) < \frac{1}{2m}\Big\}, \qquad \text{for } j \in \{0, 1, \dots, m\}. \tag{5.2.13}$$
Given that $r_j - r_{j-1} = 1/m$ for $j = 1, \dots, m$, the sets in the collection $\{M_j\}_{j=0}^m$ are pairwise disjoint. Consequently, among these $m+1$ sets there is at least one that does not contain any of the numbers $\tau_j(\xi)$, $j = 1, \dots, m$ (a simple form of the pigeonhole principle). Hence, there exists $k \in \{0, 1, \dots, m\}$ such that
$$\operatorname{dist}\big(\tau_j(\xi), \partial B(0, r_k)\big) \geq \frac{1}{2m}, \qquad \forall\,j \in \{1, \dots, m\}. \tag{5.2.14}$$
Then $|\tau_j(\xi) - \tau| \geq \frac{1}{2m}$ for every $|\tau| = r_k$ and every $j = 1, \dots, m$, which when used in (5.2.12) implies
$$|P(\xi + \tau\eta)| = |P_m(\eta)|\prod_{j=1}^m \big|\tau - \tau_j(\xi)\big| \geq \frac{|P_m(\eta)|}{(2m)^m} = \varepsilon, \qquad \text{if } |\tau| = r_k. \tag{5.2.15}$$
This shows that $\xi \in F_k$, thus (a) is satisfied for the sets defined in (5.2.11).
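The pigeonhole step in the proof of Lemma 5.8 can be made concrete by a small experiment (the data are hypothetical choices of ours: $n = 1$, $\eta = 1$, $m = 2$, $P(z) = z^2 - 1$, NumPy assumed): for every sampled $\xi$, at least one of the $m+1$ circles $|\tau| = r_j$ stays at distance $\geq \frac{1}{2m}$ from every root of $\tau \mapsto P(\xi + \tau\eta)$.

```python
import numpy as np

m = 2                                        # order of the sample polynomial
eta = 1.0
rj = [(j + 1.0) / m for j in range(m + 1)]   # radii r_j = (j+1)/m

def good_radii(xi):
    # roots of tau -> P(xi + tau*eta) for P(z) = z^2 - 1, i.e. of
    # eta^2 tau^2 + 2 xi eta tau + (xi^2 - 1)
    roots = np.roots([eta**2, 2.0*xi*eta, xi**2 - 1.0])
    # distance from a root t to the circle |tau| = r is | |t| - r |
    return [r for r in rj
            if all(abs(abs(t) - r) >= 1.0/(2*m) for t in roots)]

# For every sample point xi, some circle avoids all roots by 1/(2m)
ok = all(len(good_radii(xi)) >= 1 for xi in np.linspace(-3.0, 3.0, 61))
print(ok)
```

Since there are only $m$ roots but $m+1$ pairwise disjoint "forbidden" annuli, at least one radius always survives, exactly as in (5.2.14).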
5.3 The Malgrange–Ehrenpreis Theorem
The Malgrange–Ehrenpreis theorem reads as follows.

Theorem 5.9 (Malgrange–Ehrenpreis). Let $P(D)$ be a linear constant coefficient differential operator that is not identically zero. Then there exists $E \in \mathcal{D}'(\mathbb{R}^n)$ which is a fundamental solution for $P(D)$.

Proof. Let $m \in \mathbb{N}_0$ and assume that $P(D)$ is as in (5.1.1), of order $m$. If $m = 0$ then $P(D) = a \in \mathbb{C}\setminus\{0\}$ and the distribution $E := \frac{1}{a}\delta$ is a fundamental solution for $P(D)$. Suppose now that $m \geq 1$. Since the principal symbol $P_m$ does not vanish identically in $\mathbb{R}^n\setminus\{0\}$ and since
$$P_m(\xi) = |\xi|^m P_m\Big(\frac{\xi}{|\xi|}\Big) \quad\text{for all } \xi \in \mathbb{R}^n\setminus\{0\}, \tag{5.3.1}$$
we conclude that there exists $\eta \in S^{n-1}$ such that $P_m(\eta) \neq 0$. Fix such an $\eta$ and retain the notation introduced in the proof of Lemma 5.8. Then the functions
$$\chi_j : \mathbb{R}^n \to \mathbb{R}, \quad \chi_j(\xi) := \begin{cases} 1, & \xi \in F_j, \\ 0, & \xi \in \mathbb{R}^n\setminus F_j, \end{cases} \qquad j \in \{0, 1, \dots, m\}, \tag{5.3.2}$$
and
$$f_j : \mathbb{R}^n \to \mathbb{R}, \quad f_j := \frac{\chi_j}{\sum_{k=0}^m \chi_k}, \qquad j \in \{0, 1, \dots, m\}, \tag{5.3.3}$$
are well defined, measurable (recall that the $F_j$'s are closed), and $\sum_{j=0}^m f_j = 1$ on $\mathbb{R}^n$. In addition, $0 \leq f_j \leq 1$ and $f_j = 0$ on $\mathbb{R}^n\setminus F_j$ for each $j \in \{0, 1, \dots, m\}$. Define now the linear functional $E : \mathcal{D}(\mathbb{R}^n) \to \mathbb{C}$ by setting, for each function $\varphi \in C_0^\infty(\mathbb{R}^n)$,
$$E(\varphi) := (2\pi)^{-n}\sum_{j=0}^m \int_{\mathbb{R}^n} f_j(\xi)\,\frac{1}{2\pi i}\oint_{|\tau|=r_j} \frac{\widehat{\varphi}(-\xi - \tau\eta)}{P(\xi + \tau\eta)}\,\frac{d\tau}{\tau}\,d\xi. \tag{5.3.4}$$
In order to prove that $E$ is well defined, fix an arbitrary compact set $K \subset \mathbb{R}^n$ and suppose $\varphi \in C_0^\infty(\mathbb{R}^n)$ is such that $\operatorname{supp}\varphi \subseteq K$. First, observe that for each $j = 0, 1, \dots, m$, if $\xi$ belongs to $F_j$ (which contains $\operatorname{supp} f_j$), then $|P(\xi + \tau\eta)| \geq \varepsilon$ for every $\tau \in \mathbb{C}$ with $|\tau| = r_j$, which makes the integral in (5.3.4) over $|\tau| = r_j$ finite. Second, we claim that the integrals over $\mathbb{R}^n$ in (5.3.4) are convergent. Indeed, since $\varphi$ is compactly supported and
$$\widehat{\varphi}(-\xi - \tau\eta) = \int_{\mathbb{R}^n} e^{ix\cdot\xi + i\tau x\cdot\eta}\varphi(x)\,dx, \tag{5.3.5}$$
it follows that $\widehat{\varphi}(-\xi - \tau\eta)$ is analytic in $\tau$. Then, if we fix $j \in \{0, 1, \dots, m\}$ and take $N \in \mathbb{N}_0$, using (5.3.5) and integration by parts, for each $|\tau| = r_j$ we may write
$$(1 + |\xi|^2)^N\,\widehat{\varphi}(-\xi - \tau\eta) = \int_{\mathbb{R}^n} \Big[\Big(1 - \sum_{\ell=1}^n \partial_{x_\ell}^2\Big)^N e^{ix\cdot\xi}\Big]\,e^{i\tau x\cdot\eta}\varphi(x)\,dx$$
$$= \int_{\mathbb{R}^n} e^{ix\cdot\xi}\Big(1 - \sum_{\ell=1}^n \partial_{x_\ell}^2\Big)^N\big[e^{i\tau x\cdot\eta}\varphi(x)\big]\,dx, \tag{5.3.6}$$
for each $\xi \in \mathbb{R}^n$. An inspection of the last integral in (5.3.6) then yields
$$\big|\widehat{\varphi}(-\xi - \tau\eta)\big| \leq C_j\,(1 + |\xi|^2)^{-N}\sup_{\substack{x\in K \\ |\alpha|\leq 2N}}\big|\partial_x^\alpha\varphi(x)\big| \qquad\text{if } |\tau| = r_j, \tag{5.3.7}$$
for some constant $C_j = C_j(K, r_j, \eta) \in (0,\infty)$. In turn, we may use (5.3.7) in (5.3.4) to obtain
$$|E(\varphi)| \leq (2\pi)^{-n}\sum_{j=0}^m \frac{1}{2\pi}\cdot\frac{2\pi r_j}{\varepsilon r_j}\,C_j\int_{\mathbb{R}^n} f_j(\xi)(1 + |\xi|^2)^{-N}\,d\xi\,\sup_{\substack{x\in K \\ |\alpha|\leq 2N}}\big|\partial^\alpha\varphi(x)\big|$$
$$\leq C\sup_{\substack{x\in K \\ |\alpha|\leq 2N}}\big|\partial^\alpha\varphi(x)\big|\int_{\mathbb{R}^n}(1 + |\xi|^2)^{-N}\,d\xi = C\sup_{\substack{x\in K \\ |\alpha|\leq 2N}}\big|\partial^\alpha\varphi(x)\big| \tag{5.3.8}$$
for some $C \in (0,\infty)$ independent of $\varphi$, by choosing $N > \frac{n}{2}$. This proves that $E$ is well defined. Clearly, $E$ is linear which, together with (5.3.8) and Proposition 2.4, implies that $E \in \mathcal{D}'(\mathbb{R}^n)$. To finish the proof of the theorem we are left with showing that $P(D)E = \delta$ in $\mathcal{D}'(\mathbb{R}^n)$, that is, that (5.2.2) holds. Since $\mathcal{F}\big(P(-D)\varphi\big) = P^{\vee}\widehat{\varphi}$, and hence $\widehat{P(-D)\varphi}(-\xi - \tau\eta) = P(\xi + \tau\eta)\widehat{\varphi}(-\xi - \tau\eta)$, recalling (5.3.4) we may write
$$\langle E, P(-D)\varphi\rangle = (2\pi)^{-n}\sum_{j=0}^m \int_{\mathbb{R}^n} f_j(\xi)\,\frac{1}{2\pi i}\oint_{|\tau|=r_j} \widehat{\varphi}(-\xi - \tau\eta)\,\frac{d\tau}{\tau}\,d\xi$$
$$= (2\pi)^{-n}\sum_{j=0}^m\int_{\mathbb{R}^n} f_j(\xi)\,\widehat{\varphi}(-\xi)\,d\xi = (2\pi)^{-n}\int_{\mathbb{R}^n}\widehat{\varphi}(\xi)\,d\xi = \varphi(0). \tag{5.3.9}$$
For the second equality in (5.3.9) we used the fact that $\widehat{\varphi}(-\xi - \tau\eta)$ is analytic in $\tau$, hence by Cauchy's formula we have $\frac{1}{2\pi i}\oint_{|\tau|=r_j}\widehat{\varphi}(-\xi - \tau\eta)\,\frac{d\tau}{\tau} = \widehat{\varphi}(-\xi)$. The third equality relies on $\sum_{j=0}^m f_j = 1$ on $\mathbb{R}^n$, while the last one follows from part (4) in Exercise 3.26. On account of (5.3.9) and (5.2.2) we may therefore conclude that $E$ is a fundamental solution for $P(D)$.
One of the consequences of Theorem 5.9 is the fact that the Poisson problem for nonzero constant coefficient linear differential operators is locally solvable. Specifically, we have the following result.

Corollary 5.10. Let $P(D)$ be a constant coefficient linear differential operator that is not identically zero. Suppose $\Omega$ is a nonempty, open subset of $\mathbb{R}^n$ and that some $f \in \mathcal{D}'(\Omega)$ is given. Then for each nonempty, open subset $\omega$ of $\Omega$ with the property that $\overline{\omega}$ is a compact subset of $\Omega$, there exists $u \in \mathcal{D}'(\Omega)$ such that $P(D)u = f$ in $\mathcal{D}'(\omega)$.

Proof. Let $E \in \mathcal{D}'(\mathbb{R}^n)$ be a fundamental solution of $P(D)$, which is known to exist by Theorem 5.9. Also, fix $\psi \in C_0^\infty(\Omega)$ such that $\psi = 1$ on $\omega$. Then $\psi f \in \mathcal{E}'(\Omega)$ hence, by Proposition 2.63, it has a unique extension $\widetilde{\psi f} \in \mathcal{E}'(\mathbb{R}^n)$ with $\operatorname{supp}\widetilde{\psi f} \subseteq \operatorname{supp}\psi$. Now let $v := E * \widetilde{\psi f} \in \mathcal{D}'(\mathbb{R}^n)$. By Proposition 2.43 we have $v|_\Omega \in \mathcal{D}'(\Omega)$ and we claim that $u := v|_\Omega$ is a solution of $P(D)u = f$ in $\mathcal{D}'(\omega)$ (in the sense that $P(D)(u|_\omega) = f|_\omega$ in $\mathcal{D}'(\omega)$). Indeed,
$$P(D)(u|_\omega) = \big(P(D)u\big)\big|_\omega = \big(P(D)(v|_\Omega)\big)\big|_\omega = \big[P(D)(E * \widetilde{\psi f})\big]\big|_\omega = \big[(P(D)E) * \widetilde{\psi f}\big]\big|_\omega$$
$$= \big[\delta * \widetilde{\psi f}\big]\big|_\omega = \big[\widetilde{\psi f}\big]\big|_\omega = (\psi f)\big|_\omega = f\big|_\omega \quad\text{in } \mathcal{D}'(\omega). \tag{5.3.10}$$
For the first and third equality in (5.3.10) we used (2.5.4), for the fourth equality we used (e) in Theorem 2.87, while for the sixth equality we used (d) in Theorem 2.87.

The next example gives an approach for computing fundamental solutions for linear constant coefficient operators on the real line.

Example 5.11. Let $m \in \mathbb{N}$ and let $P := \big(\frac{d}{dx}\big)^m + \sum_{j=0}^{m-1} a_j\big(\frac{d}{dx}\big)^j$ be a differential operator of order $m$ in $\mathbb{R}$ with constant coefficients. Suppose that $v$ is the solution to
$$\begin{cases} v \in C^\infty(\mathbb{R}), \\ Pv = 0 \ \text{in } \mathbb{R}, \\ v^{(j)}(0) = 0, \quad 0 \leq j \leq m-2, \\ v^{(m-1)}(0) = 1, \end{cases} \tag{5.3.11}$$
with the understanding that the third condition is void if $m = 1$. Recall the Heaviside function $H$ from (1.2.16) and define the distributions
$$u_+ := vH \quad\text{and}\quad u_- := -vH^{\vee} \quad\text{in } \mathcal{D}'(\mathbb{R}). \tag{5.3.12}$$
We claim that $u_+$ and $u_-$ are fundamental solutions for $P$ in $\mathcal{D}'(\mathbb{R})$. To see that this is the case, express $P$ as $P(D) = i^m\big(\frac{1}{i}\frac{d}{dx}\big)^m + \sum_{j=0}^{m-1} i^j a_j\big(\frac{1}{i}\frac{d}{dx}\big)^j$, in line with (5.1.1), and note that this forces
$$P(-D) = (-i)^m\Big(\frac{1}{i}\frac{d}{dx}\Big)^m + \sum_{j=0}^{m-1}(-i)^j a_j\Big(\frac{1}{i}\frac{d}{dx}\Big)^j = (-1)^m\Big(\frac{d}{dx}\Big)^m + \sum_{j=0}^{m-1}(-1)^j a_j\Big(\frac{d}{dx}\Big)^j. \tag{5.3.13}$$
As such, for every $\varphi \in C_0^\infty(\mathbb{R})$ we have
$$\langle P(D)u_+, \varphi\rangle = \langle u_+, P(-D)\varphi\rangle = \int_0^\infty v(x)\big(P(-D)\varphi\big)(x)\,dx$$
$$= v^{(m-1)}(0)\varphi(0) + \int_0^\infty (Pv)(x)\varphi(x)\,dx = \langle\delta, \varphi\rangle, \tag{5.3.14}$$
where the third equality uses (5.3.11) and repeated integrations by parts. This proves that $Pu_+ = \delta$ in $\mathcal{D}'(\mathbb{R})$. Similarly, one obtains that $Pu_- = \delta$ in $\mathcal{D}'(\mathbb{R})$. It is worth pointing out that the two fundamental solutions just described satisfy the support conditions $\operatorname{supp} u_+ \subseteq [0,\infty)$ and $\operatorname{supp} u_- \subseteq (-\infty, 0]$.

Example 5.12. Let $c \in \mathbb{C}$ and consider the differential operator $P := \frac{d}{dx} + c$ in $\mathbb{R}$. It is easy to see that $v(x) = e^{-cx}$, $x \in \mathbb{R}$, is the solution (in the classical sense) of the initial value problem $Pv = 0$ in $\mathbb{R}$, $v(0) = 1$. Hence, by Example 5.11, the distributions $u_+ := vH$ and $u_- := -vH^{\vee}$ in $\mathcal{D}'(\mathbb{R})$ are fundamental solutions for $P$. Note that instead of using Example 5.11, one may show that $Pu_+ = \delta$ (and similarly that $Pu_- = \delta$) in $\mathcal{D}'(\mathbb{R})$ by applying (4) in Proposition 2.36 and the fact that $H' = \delta$ in $\mathcal{D}'(\mathbb{R})$ to write
$$P(vH) = (vH)' + cvH = v'H + vH' + cvH = v\delta = \delta \quad\text{in } \mathcal{D}'(\mathbb{R}), \tag{5.3.15}$$
where in the next-to-last step we used that $v' + cv = Pv = 0$, and in the last step that $v\delta = v(0)\delta = \delta$.
We close this section by proving a strengthened version of Theorem 5.9 that addresses the issue of existence of fundamental solutions that are tempered distributions. Such a result is important in the study of partial differential equations as it opens the door for using the Fourier transform.

Theorem 5.13. Let $P(D)$ be a nonzero constant coefficient linear differential operator in $\mathbb{R}^n$. Then there exists $E \in \mathcal{S}'(\mathbb{R}^n)$ which is a fundamental solution for $P(D)$. In particular, the equality $P(D)E = \delta$ holds in $\mathcal{S}'(\mathbb{R}^n)$.

We prove this theorem by relying on a deep result due to L. Hörmander and S. Łojasiewicz. For a proof of Theorem 5.14 the interested reader is referred to [29, 41].
Theorem 5.14 (Hörmander–Łojasiewicz). Let $P$ be a polynomial in $\mathbb{R}^n$ that does not vanish identically and consider the pointwise multiplication mapping
$$M_P : \mathcal{S}(\mathbb{R}^n) \to \mathcal{S}(\mathbb{R}^n), \qquad M_P(\varphi) := P\varphi, \quad \forall\,\varphi \in \mathcal{S}(\mathbb{R}^n). \tag{5.3.16}$$
Then $M_P$ is linear, continuous, injective, with closed range $\mathcal{S}_P(\mathbb{R}^n)$ in $\mathcal{S}(\mathbb{R}^n)$. Moreover, the mapping $M_P : \mathcal{S}(\mathbb{R}^n) \to \mathcal{S}_P(\mathbb{R}^n)$ has a linear and continuous inverse $T : \mathcal{S}_P(\mathbb{R}^n) \to \mathcal{S}(\mathbb{R}^n)$.

Granted this, we can now turn to the proof of the theorem stated earlier.

Proof of Theorem 5.13. Let $P(D)$ be as in the statement and consider the polynomial $P(\xi)$ associated to it as in (5.1.2). Denote by $\mathcal{S}_P(\mathbb{R}^n)$ the range of the mapping $M_P$ from (5.3.16) corresponding to this polynomial and by $T$ the inverse of $M_P : \mathcal{S}(\mathbb{R}^n) \to \mathcal{S}_P(\mathbb{R}^n)$. Then Theorem 5.14 combined with Proposition 13.1 give that the transpose of $T$ is linear and continuous as a mapping
$$T^t : \mathcal{S}'(\mathbb{R}^n) \to \mathcal{S}_P'(\mathbb{R}^n), \tag{5.3.17}$$
where $\mathcal{S}_P'(\mathbb{R}^n)$ is the dual of $\mathcal{S}_P(\mathbb{R}^n)$ endowed with the weak-$*$ topology. Next, assume that some $f \in \mathcal{S}'(\mathbb{R}^n)$ has been fixed and define $u := T^t f$. Then $u \in \mathcal{S}_P'(\mathbb{R}^n)$ and, making use of (13.1.3), (13.1.7), and Theorem 13.4, it is possible to extend $u$ to an element in $\mathcal{S}'(\mathbb{R}^n)$, which we continue to denote by $u$. We claim that $Pu = f$ in $\mathcal{S}'(\mathbb{R}^n)$, where $Pu$ is the tempered distribution obtained by multiplying $u$ with the slowly increasing (polynomial) function $P(\xi)$. To see this, fix $\varphi \in \mathcal{S}(\mathbb{R}^n)$ and write
$$\langle Pu, \varphi\rangle = \langle u, P\varphi\rangle = \langle T^t f, M_P\varphi\rangle = \langle f, TM_P\varphi\rangle = \langle f, \varphi\rangle, \tag{5.3.18}$$
proving the claim. In particular, the above procedure may be implemented for the distribution $f := 1 \in \mathcal{S}'(\mathbb{R}^n)$. Corresponding to this choice, we obtain $u \in \mathcal{S}'(\mathbb{R}^n)$ with the property that $Pu = 1$ in $\mathcal{S}'(\mathbb{R}^n)$. Define $E := \mathcal{F}^{-1}u$. Then $E \in \mathcal{S}'(\mathbb{R}^n)$ and
$$\widehat{P(D)E} = P(\xi)\widehat{E} = Pu = 1 = \widehat{\delta} \quad\text{in } \mathcal{S}'(\mathbb{R}^n). \tag{5.3.19}$$
Since the Fourier transform is injective on $\mathcal{S}'(\mathbb{R}^n)$, we conclude that, as desired, $E$ is a tempered distribution fundamental solution for $P(D)$.
Further Notes for Chap. 5. The existence of fundamental solutions for arbitrary nonzero, constant coefficient, linear, partial differential operators was first established in this degree of generality by L. Ehrenpreis and B. Malgrange (cf. [11, 42]), via proofs relying on the Hahn–Banach theorem. This basic result served as the first compelling piece of evidence of the impact of the theory of distributions in the field of partial differential equations. The general discussion initiated here is going to be further augmented by considering in later chapters concrete examples of fundamental solutions associated with basic operators arising in mathematics, physics, and engineering.
5.4 Additional Exercises for Chap. 5
Exercise 5.15. Let $m \in \mathbb{N}$. Prove that
$$\big\{u \in \mathcal{S}'(\mathbb{R}) : u^{(m)} = \delta \ \text{in } \mathcal{S}'(\mathbb{R})\big\} = \Big\{\tfrac{1}{(m-1)!}\,x^{m-1}H + \sum_{k=0}^{m-1} c_k x^k : c_k \in \mathbb{C},\ k = 0, \dots, m-1\Big\}. \tag{5.4.1}$$
Exercise 5.16. Let $n, m \in \mathbb{N}$ and $k_1, k_2 \in \mathbb{N}_0$. Consider two scalar operators
$$P_1(D_x) := \sum_{|\alpha|\leq k_1} a_\alpha D_x^\alpha, \qquad a_\alpha \in \mathbb{C}, \quad \forall\,\alpha \in \mathbb{N}_0^n,\ |\alpha| \leq k_1, \tag{5.4.2}$$
$$P_2(D_y) := \sum_{|\beta|\leq k_2} b_\beta D_y^\beta, \qquad b_\beta \in \mathbb{C}, \quad \forall\,\beta \in \mathbb{N}_0^m,\ |\beta| \leq k_2, \tag{5.4.3}$$
and assume that $E_1 \in \mathcal{D}'(\mathbb{R}^n)$ and $E_2 \in \mathcal{D}'(\mathbb{R}^m)$ are fundamental solutions for $P_1$ in $\mathbb{R}^n$ and $P_2$ in $\mathbb{R}^m$, respectively. Prove that $E_1 \otimes E_2$ is a fundamental solution for $P_1(D_x) \otimes P_2(D_y) := \sum_{|\alpha|\leq k_1}\sum_{|\beta|\leq k_2} a_\alpha b_\beta\,D_x^\alpha D_y^\beta$ in $\mathbb{R}^n \times \mathbb{R}^m$.
Chapter 6

Hypoelliptic Operators

As usual, assume that $\Omega \subseteq \mathbb{R}^n$ is an open set, and fix $m \in \mathbb{N}$. From Theorem 2.102 we know that if $u \in \mathcal{D}'(\Omega)$ is such that for each $\alpha \in \mathbb{N}_0^n$ satisfying $|\alpha| = m$ the distributional derivative $\partial^\alpha u$ is continuous on $\Omega$, then $u \in C^m(\Omega)$. To rephrase this result in a manner that lends itself more naturally to generalizations, define $P^m u := \big(\partial^\alpha u\big)_{|\alpha|=m}$ for each $u \in \mathcal{D}'(\Omega)$. In this notation, the earlier result reads
$$u \in \mathcal{D}'(\Omega) \ \text{ and } \ P^m u \in C^0(\Omega) \implies u \in C^m(\Omega). \tag{6.0.1}$$
We are interested in proving results in the same spirit with the role of the mapping $P^m$ played by (certain) linear differential operators. Recall the notation from (3.1.10) and consider the linear differential operator
$$P(x, D) = \sum_{|\alpha|\leq m} a_\alpha(x)D^\alpha, \qquad a_\alpha \in C^\infty(\Omega) \ \text{for all } \alpha \in \mathbb{N}_0^n \ \text{with } |\alpha| \leq m. \tag{6.0.2}$$
By Proposition 2.34, if $f \in C^0(\Omega)$ and $u \in C^m(\Omega)$ is a solution of $P(x, D)u = f$ in $\mathcal{D}'(\Omega)$, then $u$ is a solution of $P(x, D)u = f$ in $\Omega$ in the classical sense. However, this reasoning requires knowing in advance that $u \in C^m(\Omega)$. It is desirable to have a more general statement to the effect that if $u \in \mathcal{D}'(\Omega)$ is a solution of $P(x, D)u = f$ in $\mathcal{D}'(\Omega)$ and $f$ has certain regularity properties, then the distribution $u$ also belongs to an appropriate smoothness class (e.g., if $f \in C^\infty(\Omega)$ then $u \in C^\infty(\Omega)$). While such a phenomenon is not to be expected in general, there exist classes of operators $P(x, D)$ for which there are a priori regularity results for the solution $u$ based on the regularity of the datum $f$. Such a class of operators is singled out in this chapter.
6.1 Definition and Properties

Definition 6.1. An operator $P(x, D)$ as in (6.0.2) is called hypoelliptic in $\Omega$ if for all open subsets $\omega$ of $\Omega$ the following property holds: whenever $u \in \mathcal{D}'(\omega)$ is such that $P(x, D)u \in C^\infty(\omega)$, then $u \in C^\infty(\omega)$.

D. Mitrea, Distributions, Partial Differential Equations, and Harmonic Analysis, Universitext, DOI 10.1007/978-1-4614-8208-6, © Springer Science+Business Media New York 2013
Remark 6.2. It is easy to see from Definition 6.1 that if an operator is hypoelliptic in $\Omega$, then it is also hypoelliptic in any other open subset of $\Omega$.

Definition 6.3. Let $u \in \mathcal{D}'(\Omega)$ be arbitrary. Then the singular support of $u$, denoted by $\operatorname{sing\,supp} u$, is defined by
$$\operatorname{sing\,supp} u := \big\{x \in \Omega : \text{there is no open set } \omega \text{ such that } x \in \omega \subseteq \Omega \text{ and } u|_\omega \in C^\infty(\omega)\big\}. \tag{6.1.1}$$
Example 6.4. Not surprisingly,
$$\operatorname{sing\,supp}\delta = \{0\}. \tag{6.1.2}$$
To see why this is true, note that clearly $\operatorname{sing\,supp}\delta \subseteq \{0\}$ and $\operatorname{sing\,supp}\delta \neq \emptyset$ since, by Example 2.13, the distribution $\delta$ is not of function type.

Remark 6.5.
(1) Since $\Omega\setminus\operatorname{sing\,supp} u$ is an open set, it follows that $\operatorname{sing\,supp} u$ is relatively closed in $\Omega$.
(2) By Exercise 2.46, we have that $u|_{\Omega\setminus\operatorname{sing\,supp} u} \in C^\infty(\Omega\setminus\operatorname{sing\,supp} u)$.
(3) The singular support of a distribution $u \in \mathcal{D}'(\Omega)$ is the smallest relatively closed set in $\Omega$ with the property that $u$ restricted to its complement in $\Omega$ is of class $C^\infty$.
(4) If the operator $P(x, D)$ is as in (6.0.2), then
$$\operatorname{sing\,supp}\big[P(x, D)u\big] \subseteq \operatorname{sing\,supp} u, \qquad \forall\,u \in \mathcal{D}'(\Omega). \tag{6.1.3}$$
Indeed, if $x_0 \in \Omega\setminus\operatorname{sing\,supp} u$, then there exists an open neighborhood $\omega$ of $x_0$ contained in $\Omega$ and such that $u|_\omega \in C^\infty(\omega)$. Hence,
$$\big[P(x, D)u\big]\big|_\omega = P(x, D)\big[u|_\omega\big] \in C^\infty(\omega), \tag{6.1.4}$$
which implies that $x_0 \in \Omega\setminus\operatorname{sing\,supp}\big[P(x, D)u\big]$.

The quality of being a hypoelliptic operator turns out to be equivalent to (6.1.3) being valid with equality.

Proposition 6.6. Let $P(x, D)$ be as in (6.0.2). Then $P(x, D)$ is hypoelliptic in $\Omega$ if and only if
$$\operatorname{sing\,supp}\big[P(x, D)u\big] = \operatorname{sing\,supp} u, \qquad \forall\,u \in \mathcal{D}'(\Omega). \tag{6.1.5}$$
Proof. Suppose $P(x, D)$ is hypoelliptic in $\Omega$. The left-to-right inclusion in (6.1.5) has been proved in part (4) of Remark 6.5. To prove the opposite inclusion, fix a point $x_0 \in \Omega\setminus\operatorname{sing\,supp}\big[P(x, D)u\big]$. Then there exists an open subset $\omega$ of $\Omega$ with $x_0 \in \omega$ and $\big[P(x, D)u\big]\big|_\omega \in C^\infty(\omega)$. Since $P(x, D)$ is hypoelliptic in $\Omega$ it is also hypoelliptic in $\omega$ and, as such, $u|_\omega \in C^\infty(\omega)$. This proves that $x_0 \in \Omega\setminus\operatorname{sing\,supp} u$, as wanted.
Now suppose (6.1.5) holds and let $\omega$ be an open subset of $\Omega$. Fix some $u \in \mathcal{D}'(\omega)$ such that $P(x, D)u \in C^\infty(\omega)$. The desired conclusion will follow as soon as we show that $u \in C^\infty(\omega)$. By Exercise 2.46, it suffices to prove that for each $x_0 \in \omega$ there exists an open neighborhood $\omega_0 \subseteq \omega$ of $x_0$ such that $u|_{\omega_0} \in C^\infty(\omega_0)$. To this end, fix $x_0 \in \omega$ along with some open neighborhood $\omega_0$ of $x_0$ with a closure that is a compact subset of $\omega$. Furthermore, pick a function $\psi \in C_0^\infty(\omega)$ with the property that $\psi = 1$ in a neighborhood of $\omega_0$, and note that $\psi u \in \mathcal{E}'(\omega)$. By Proposition 2.63, every $v \in \mathcal{E}'(\omega)$ has a unique extension to $\mathcal{E}'(\Omega)$ that we will denote by $\operatorname{Ext}(v)$. Since the operation of taking the extension $\operatorname{Ext}$ commutes with applying $P(x, D)$ (recall Exercise 2.64) we may write
$$P(x, D)\big[\operatorname{Ext}(\psi u)\big] = \operatorname{Ext}\big(P(x, D)(\psi u)\big) = \operatorname{Ext}\big(\psi P(x, D)u + g\big) = \operatorname{Ext}\big(\psi P(x, D)u\big) + \operatorname{Ext}(g) \quad\text{in } \mathcal{D}'(\Omega), \tag{6.1.6}$$
where $g := P(x, D)(\psi u) - \psi P(x, D)u \in \mathcal{E}'(\omega)$. On the other hand, from part (4) in Proposition 2.36 it follows that $g|_{\omega_0} = 0$, thus
$$\big(\operatorname{sing\,supp} g\big) \cap \omega_0 = \emptyset. \tag{6.1.7}$$
In concert, (6.1.7), the fact that $\psi\big(P(x, D)u\big) \in C_0^\infty(\omega)$ [recall that, by assumption, $P(x, D)u \in C^\infty(\omega)$], and (6.1.6) imply that
$$\operatorname{sing\,supp}\Big(P(x, D)\big[\operatorname{Ext}(\psi u)\big]\Big) \cap \omega_0 = \emptyset. \tag{6.1.8}$$
Invoking the hypothesis (6.1.5) then yields $\operatorname{sing\,supp}\big[\operatorname{Ext}(\psi u)\big] \cap \omega_0 = \emptyset$ which, in turn, forces $\big[\operatorname{Ext}(\psi u)\big]\big|_{\omega_0} \in C^\infty(\omega_0)$. The desired conclusion now follows upon observing that $\big[\operatorname{Ext}(\psi u)\big]\big|_{\omega_0} = (\psi u)|_{\omega_0} = u|_{\omega_0}$.

Remark 6.7. A noteworthy consequence of Proposition 6.6 is as follows: any two fundamental solutions of a hypoelliptic operator in $\mathbb{R}^n$ differ by a $C^\infty$ function. More specifically, if an operator $P(x, D)$ as in (6.0.2) is hypoelliptic in $\mathbb{R}^n$, and $E_1, E_2 \in \mathcal{D}'(\mathbb{R}^n)$ are such that $P(x, D)E_1 = \delta$ and $P(x, D)E_2 = \delta$ in $\mathcal{D}'(\mathbb{R}^n)$, then we have $E_1 - E_2 \in C^\infty(\mathbb{R}^n)$. In particular, if an operator $P(x, D)$ as in (6.0.2) is hypoelliptic in $\mathbb{R}^n$ and has a fundamental solution $E \in \mathcal{D}'(\mathbb{R}^n)$ with $\operatorname{sing\,supp} E = \{0\}$, then the singular support of any other fundamental solution of $P(x, D)$ is also $\{0\}$.
6.2 Hypoelliptic Operators with Constant Coefficients

In this section we further analyze hypoelliptic operators with constant coefficients. First we present a result due to L. Schwartz regarding a necessary and sufficient condition for a linear, constant coefficient operator to be hypoelliptic in the entire space.
Theorem 6.8. Let $P(D)$ be a constant coefficient linear differential operator in $\mathbb{R}^n$. Then $P(D)$ is hypoelliptic in $\mathbb{R}^n$ if and only if there exists $E \in \mathcal{D}'(\mathbb{R}^n)$, a fundamental solution of $P(D)$ in $\mathcal{D}'(\mathbb{R}^n)$, satisfying $\operatorname{sing\,supp} E = \{0\}$.

Proof. Suppose $P(D)$ is hypoelliptic. Then necessarily $P(D)$ is not identically zero (since the zero operator maps every distribution, smooth or not, to the $C^\infty$ function zero, which is incompatible with hypoellipticity). Granted this, Theorem 5.9 applies and gives that the operator $P(D)$ has a fundamental solution $E \in \mathcal{D}'(\mathbb{R}^n)$. In concert with the hypoellipticity of $P(D)$, Proposition 6.6, and (6.1.2), this allows us to write
$$\operatorname{sing\,supp} E = \operatorname{sing\,supp}\big[P(D)E\big] = \operatorname{sing\,supp}\delta = \{0\}, \tag{6.2.1}$$
as wanted.

Consider next the converse implication. Suppose that there exists some $E \in \mathcal{D}'(\mathbb{R}^n)$ with $P(D)E = \delta$ and $\operatorname{sing\,supp} E = \{0\}$. Fix $u \in \mathcal{D}'(\mathbb{R}^n)$ arbitrary. Combining (4) in Remark 6.5 with Proposition 6.6, in order to conclude that $P(D)$ is hypoelliptic, it suffices to prove that
$$\operatorname{sing\,supp} u \subseteq \operatorname{sing\,supp}\big[P(D)u\big]. \tag{6.2.2}$$
To this end, fix a point $x_0 \in \mathbb{R}^n\setminus\operatorname{sing\,supp}\big[P(D)u\big]$. Then there exists an open neighborhood $\omega$ of the point $x_0$ such that $\big[P(D)u\big]\big|_\omega \in C^\infty(\omega)$. The fact that $x_0 \notin \operatorname{sing\,supp} u$ will follow if we are able to show that there exists some number $r \in (0,\infty)$ with the property that $u|_{B(x_0,r)} \in C^\infty\big(B(x_0, r)\big)$. Pick a number $r_1 \in (0,\infty)$ with the property that $\overline{B(x_0, r_1)} \subset \omega$. Also, fix $r_0 \in (0, r_1)$ and select a function $\psi \in C_0^\infty(\omega)$ such that $\psi = 1$ on $B(x_0, r_1)$. Set $v := \psi u$. Then $v|_{B(x_0,r_1)} = u|_{B(x_0,r_1)}$ and $v \in \mathcal{E}'(\omega)$. By virtue of Proposition 2.63 we may extend $v$ to a distribution in $\mathcal{E}'(\mathbb{R}^n)$ (and we continue to denote this extension by $v$). Define $g := P(D)v - \psi\big[P(D)u\big]$. Since $\big[P(D)u\big]\big|_\omega \in C^\infty(\omega)$ and $\operatorname{supp}\psi \subset \omega$, we deduce that $\psi\big[P(D)u\big] \in C_0^\infty(\omega)$ and $\operatorname{supp}\big(\psi\big[P(D)u\big]\big) \subset \omega$. In particular, $\psi\big[P(D)u\big] \in C_0^\infty(\mathbb{R}^n)$. Consequently, $g \in \mathcal{E}'(\mathbb{R}^n)$. In addition, combining (4) in Proposition 2.36 with the fact that $\psi = 1$ on $B(x_0, r_1)$ shows that $g|_{B(x_0,r_1)} = 0$. Thus,
$$\operatorname{sing\,supp} g \cap B(x_0, r_1) = \emptyset. \tag{6.2.3}$$
On the other hand, invoking (d) and (e) in Theorem 2.87 we may write
$$v = \delta * v = \big(P(D)E\big) * v = E * \big(P(D)v\big) = E * g + E * \big(\psi P(D)u\big) \quad\text{in } \mathcal{D}'(\mathbb{R}^n). \tag{6.2.4}$$
We claim that
$$\operatorname{sing\,supp}(E * g) \cap B(x_0, r_0) = \emptyset. \tag{6.2.5}$$
To see why (6.2.5) is true, fix $\varepsilon \in (0, r_1 - r_0)$ and $\phi \in C_0^\infty\big(B(0,\varepsilon)\big)$ with the property that $\phi = 1$ on $B\big(0, \frac{\varepsilon}{2}\big)$. Then $E * g = \big[(1-\phi)E\big] * g + (\phi E) * g$. Hence, by (2.8.17), we may write
$$\operatorname{supp}\big[(\phi E) * g\big] \subseteq \operatorname{supp}(\phi E) + \operatorname{supp} g \subseteq \overline{B(0,\varepsilon)} + \big[\mathbb{R}^n\setminus B(x_0, r_1)\big] \subseteq \mathbb{R}^n\setminus B(x_0, r_0). \tag{6.2.6}$$
This shows that
$$\operatorname{sing\,supp}\big[(\phi E) * g\big] \cap B(x_0, r_0) = \emptyset. \tag{6.2.7}$$
Since by assumption $\operatorname{sing\,supp} E = \{0\}$, it follows that $(1-\phi)E \in C^\infty(\mathbb{R}^n)$ and, invoking Exercise 2.94, we obtain that $\big[(1-\phi)E\big] * g \in C^\infty(\mathbb{R}^n)$. The latter combined with (6.2.7) now yields (6.2.5). After using (6.2.5) back in (6.2.4) and observing that $E * \big(\psi P(D)u\big) \in C^\infty(\mathbb{R}^n)$ (itself a consequence of Proposition 2.93) we conclude that $v|_{B(x_0,r_0)} \in C^\infty\big(B(x_0, r_0)\big)$. Upon recalling that $\psi = 1$ on $B(x_0, r_0)$, we necessarily have $u|_{B(x_0,r_0)} \in C^\infty\big(B(x_0, r_0)\big)$. The proof of the theorem is now complete.

Exercise 6.9. Show that if $P(D)$ is a constant coefficient linear differential operator which is hypoelliptic in $\mathbb{R}^n$, then any fundamental solution $E \in \mathcal{D}'(\mathbb{R}^n)$ of $P(D)$ has the property that $\operatorname{sing\,supp} E = \{0\}$.

Definition 6.10. Let $P(D)$ be a constant coefficient linear differential operator in $\mathbb{R}^n$. Then $F \in \mathcal{D}'(\mathbb{R}^n)$ is called a parametrix of $P(D)$ if there exists some $w \in C^\infty(\mathbb{R}^n)$ with the property that $P(D)F = \delta + w$ in $\mathcal{D}'(\mathbb{R}^n)$.

It is natural to think of the concept of a parametrix as a relaxation of the notion of a fundamental solution since, clearly, any fundamental solution is a parametrix. We have seen that Theorem 6.8 is an important tool in determining if a given operator $P(D)$ is hypoelliptic. One apparent drawback of this particular result is the fact that constructing a fundamental solution for a given operator $P(D)$ may be a delicate task. It is therefore desirable to have a more flexible criterion for deciding whether a certain operator is hypoelliptic and, remarkably, Theorem 6.11 shows that one can use a parametrix in place of a fundamental solution to the same effect.

Theorem 6.11. Let $P(D)$ be a constant coefficient linear differential operator in $\mathbb{R}^n$. Then $P(D)$ is hypoelliptic in $\mathbb{R}^n$ if and only if there exists $F \in \mathcal{D}'(\mathbb{R}^n)$ which is a parametrix of $P(D)$ and satisfies $\operatorname{sing\,supp} F = \{0\}$.

Proof. In one direction, if $P(D)$ is hypoelliptic in $\mathbb{R}^n$, then by Theorem 6.8 there exists a fundamental solution $E$ of $P(D)$ with $\operatorname{sing\,supp} E = \{0\}$.
The desired conclusion follows by taking $F := E$. Conversely, suppose there exists a parametrix $F \in \mathcal{D}'(\mathbb{R}^n)$ of $P(D)$ satisfying $\operatorname{sing\,supp} F = \{0\}$. Let $w \in C^\infty(\mathbb{R}^n)$ be such that $P(D)F - \delta = w$ in $\mathcal{D}'(\mathbb{R}^n)$. To show that $P(D)$ is hypoelliptic in $\mathbb{R}^n$, we adjust the argument used in the proof of Theorem 6.8. Specifically, starting with $u \in \mathcal{D}'(\mathbb{R}^n)$ arbitrary, as before, matters are reduced to proving
that if $x_0 \in \mathbb{R}^n\setminus\operatorname{sing\,supp}\big[P(D)u\big]$, then there exists $r \in (0,\infty)$ such that $u|_{B(x_0,r)} \in C^\infty\big(B(x_0, r)\big)$. Retaining the notation and context from the proof of Theorem 6.8, the reasoning there applies and allows us to write (in place of (6.2.4))
$$v = \delta * v = \big(P(D)F - w\big) * v = F * \big(P(D)v\big) - w * v = F * g + F * \big(\psi P(D)u\big) - w * v \quad\text{in } \mathcal{D}'(\mathbb{R}^n). \tag{6.2.8}$$
Since $\operatorname{sing\,supp} F = \{0\}$, the same argument as in the proof of (6.2.5) shows that
$$\operatorname{sing\,supp}(F * g) \cap B(x_0, r_0) = \emptyset, \tag{6.2.9}$$
while since $\psi\big[P(D)u\big] \in C_0^\infty(\mathbb{R}^n)$, Proposition 2.93 gives that
$$F * \big(\psi P(D)u\big) \in C^\infty(\mathbb{R}^n). \tag{6.2.10}$$
The new observation is that from $v \in \mathcal{E}'(\mathbb{R}^n)$, $w \in C^\infty(\mathbb{R}^n)$, and Exercise 2.94, we have
$$w * v \in C^\infty(\mathbb{R}^n). \tag{6.2.11}$$
Combining (6.2.8)–(6.2.11) it follows that $v|_{B(x_0,r_0)} \in C^\infty\big(B(x_0, r_0)\big)$. The latter forces $u|_{B(x_0,r_0)} \in C^\infty\big(B(x_0, r_0)\big)$ (recall that $\psi = 1$ on $B(x_0, r_0)$). The proof of
the theorem is now complete.

In the class of constant coefficient linear differential operators, Theorem 6.8 and Theorem 6.11 provide useful characterizations of hypoellipticity. In order to offer a wider perspective on this matter, we state, without proof, another such characterization¹ due to L. Hörmander.

Theorem 6.12. Let $P(D)$ be a nonzero constant coefficient linear differential operator in $\mathbb{R}^n$. Then $P(D)$ is hypoelliptic if and only if there exist some constants $C, R, c \in (0,\infty)$ with the property that
$$\bigg|\frac{\partial^\beta P(\xi)}{P(\xi)}\bigg| \leq C|\xi|^{-c|\beta|}, \qquad \forall\,\xi \in \mathbb{R}^n \ \text{with } |\xi| \geq R, \tag{6.2.12}$$
for every $\beta \in \mathbb{N}_0^n$.

An important subclass of linear, constant coefficient operators is that of elliptic operators. Assume that an operator $P(D)$ of order $m \in \mathbb{N}_0$ as in (5.1.1) has been given and recall that its principal symbol is $P_m(\xi) = \sum_{|\alpha|=m} a_\alpha\xi^\alpha$, $\xi \in \mathbb{R}^n$.
Definition 6.13. Let $P(D)$ be a constant coefficient linear differential operator of order $m \in \mathbb{N}_0$ in $\mathbb{R}^n$. Then $P(D)$ is called elliptic if its principal symbol vanishes only at zero, that is, if $P_m(\xi) \neq 0$ for every $\xi \in \mathbb{R}^n\setminus\{0\}$.

¹This characterization is not going to play a significant role for us here. For a proof, the interested reader is referred to [32, Theorem 11.1.3, p. 62].
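The dichotomy behind Theorem 6.12 can be probed numerically. The sampling experiment below is a rough illustration of our own making, not a proof (NumPy assumed; symbols, normalizations, and tolerances are our choices): for a heat-type symbol $P(\xi_1, \xi_2) = \xi_1^2 + i\xi_2$ (hypoelliptic but not elliptic) the quotient $|\partial_{\xi_1}P|/|P|$ decays like $|\xi|^{-1/2}$, consistent with (6.2.12) for $c = \frac{1}{2}$, whereas the wave-type symbol $\xi_1^2 - \xi_2^2$ vanishes on the cone $\xi_1 = \pm\xi_2$, so its quotient is unbounded for arbitrarily large $|\xi|$.

```python
import numpy as np

theta = np.linspace(0.0, 2*np.pi, 20001)

def worst_quotient(P, dP, M):
    # max over sampled |xi| = M of |dP(xi)| / |P(xi)|
    a, b = M*np.cos(theta), M*np.sin(theta)
    return np.max(np.abs(dP(a, b)) / np.abs(P(a, b)))

# Heat-type symbol P(xi1, xi2) = xi1^2 + i*xi2 (hypoelliptic, not elliptic)
heat_P  = lambda a, b: a**2 + 1j*b
heat_d1 = lambda a, b: 2.0*a            # d P / d xi1

M = 1.0e4
q_heat = worst_quotient(heat_P, heat_d1, M) * np.sqrt(M)  # stays O(1)

# Wave-type symbol xi1^2 - xi2^2 vanishes on the cone xi1 = +-xi2:
# evaluate the quotient at a point of norm ~ M just off the cone
s = M / np.sqrt(2.0)
q_wave = abs(2.0*(s + 1e-3)) / abs((s + 1e-3)**2 - s**2)  # very large
print(q_heat, q_wave)
```

The weighted heat quotient stabilizes near a constant while the wave quotient blows up, mirroring hypoellipticity versus its failure.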
Remark 6.14. An example of a constant coefficient elliptic operator playing a significant role in partial differential equations is the poly-harmonic operator of order $2m$, where $m \in \mathbb{N}$, defined as
$$\Delta^m := \sum_{j_1,\dots,j_m=1}^n \partial_{j_1}^2\cdots\partial_{j_m}^2 \quad\text{in } \mathbb{R}^n. \tag{6.2.13}$$
Note that $\Delta^m$ is simply $\big(\sum_{j=1}^n \partial_j^2\big)^m$, hence the principal symbol of $\Delta^m$ is $(-1)^m|\xi|^{2m}$, which does not vanish for any $\xi \in \mathbb{R}^n\setminus\{0\}$. The operators obtained for $m = 1$ and $m = 2$ are called, respectively, the Laplacian and the bi-Laplacian operator. By the above discussion they are also elliptic.

The relevance of the class of elliptic operators is apparent from the following important consequence of Theorem 6.11.

Theorem 6.15. Linear, constant coefficient elliptic operators are hypoelliptic in $\mathbb{R}^n$.

Proof. Let $P(D)$ be a constant coefficient linear elliptic operator of order $m \in \mathbb{N}_0$. By Theorem 6.11, to conclude that $P(D)$ is hypoelliptic, matters reduce to showing that $P(D)$ has a parametrix $F$ with $\operatorname{sing\,supp} F = \{0\}$. Starting from the ellipticity condition $P_m(\xi) \neq 0$ for every $\xi \in \mathbb{R}^n\setminus\{0\}$, we note that if $C_0 := \inf_{|\xi|=1}|P_m(\xi)|$, then $C_0 > 0$. Keeping in mind that $P_m(\xi)$ is a
homogeneous polynomial of degree $m$, we may then estimate
$$|P_m(\xi)| = \Big||\xi|^m P_m\Big(\frac{\xi}{|\xi|}\Big)\Big| = |\xi|^m\Big|P_m\Big(\frac{\xi}{|\xi|}\Big)\Big| \geq C_0|\xi|^m, \qquad \forall\,\xi \in \mathbb{R}^n\setminus\{0\}. \tag{6.2.14}$$
In addition, there exists $C_1 \in (0,\infty)$ such that
$$|P(\xi) - P_m(\xi)| \leq C_1(1 + |\xi|)^{m-1}, \qquad \forall\,\xi \in \mathbb{R}^n. \tag{6.2.15}$$
Hence, from (6.2.14) and (6.2.15) it follows that for every $\xi \in \mathbb{R}^n\setminus\{0\}$ we have
$$|P(\xi)| \geq |P_m(\xi)| - |P(\xi) - P_m(\xi)| \geq C_0|\xi|^m - C_1(1 + |\xi|)^{m-1}. \tag{6.2.16}$$
If $|\xi|$ is sufficiently large, then $C_1(1 + |\xi|)^{m-1} \leq \frac{C_0}{2}|\xi|^m$, which implies that there exists $R \in (0,\infty)$ such that
$$|P(\xi)| \geq \frac{C_0}{2}|\xi|^m \qquad\text{whenever } |\xi| \geq R. \tag{6.2.17}$$
Hence, if we take $\psi \in C_0^\infty(\mathbb{R}^n)$ satisfying $\psi = 1$ on $B(0, R)$, then
$$\frac{1 - \psi(\xi)}{P(\xi)} \in C^\infty(\mathbb{R}^n) \cap L^\infty(\mathbb{R}^n). \tag{6.2.18}$$
Recalling (4.1.8) and (a) in Theorem 4.25, it follows that there exists $F \in \mathcal{S}'(\mathbb{R}^n)$ with the property that
$$\widehat{F} = \frac{1 - \psi(\xi)}{P(\xi)} \quad\text{in } \mathcal{S}'(\mathbb{R}^n). \tag{6.2.19}$$
Since $P(\xi) \in \mathcal{L}(\mathbb{R}^n)$, we further have $P(\xi)\widehat{F} = 1 - \psi(\xi)$ in $\mathcal{S}'(\mathbb{R}^n)$. Making use of (b) in Theorem 4.25 and (4.2.4), then taking the Fourier transform of the latter identity and invoking (4.2.28), we arrive at
$$P(D)F - \delta = -(2\pi)^{-n}\big(\widehat{\psi}\,\big)^{\vee} \quad\text{in } \mathcal{S}'(\mathbb{R}^n). \tag{6.2.20}$$
Since $\widehat{\psi} \in \mathcal{S}(\mathbb{R}^n)$, identity (6.2.20) shows that $F$ is a parametrix for $P(D)$. There remains to prove $\operatorname{sing\,supp} F = \{0\}$. The inclusion $\{0\} \subseteq \operatorname{sing\,supp} F$ is immediate from (6.2.20) given (6.1.2) and (6.1.3). The opposite inclusion will follow once we show
$$F\big|_{\mathbb{R}^n\setminus\{0\}} \in C^\infty\big(\mathbb{R}^n\setminus\{0\}\big). \tag{6.2.21}$$
To this end, we claim that if $\beta, \alpha \in \mathbb{N}_0^n$ then
$$\mathcal{F}\big(D^\beta(x^\alpha F)\big) \in L^1(\mathbb{R}^n) \quad\text{whenever } |\beta| - |\alpha| - m < -n. \tag{6.2.22}$$
Assume (6.2.22) for now. Then, recalling (3.1.3), it follows that
$$D^\beta(x^\alpha F) \in C^0(\mathbb{R}^n) \qquad\text{for } \beta, \alpha \in \mathbb{N}_0^n, \ |\beta| - |\alpha| - m < -n. \tag{6.2.23}$$
Next, fix $k \in \mathbb{N}_0$ and choose $N \in \mathbb{N}_0$ with the property that $2N > n + k - m$. Since $|x|^{2N} = \sum_{|\alpha|=N}\frac{N!}{\alpha!}\,x^{2\alpha}$ for every $x \in \mathbb{R}^n$ [cf. (13.2.1)], from (6.2.23) we may conclude that $D^\beta\big(|x|^{2N}F\big) \in C^0(\mathbb{R}^n)$ for every $\beta \in \mathbb{N}_0^n$ such that $|\beta| = k$. Hence, $|x|^{2N}F \in C^k(\mathbb{R}^n)$. Because $\frac{1}{|x|^{2N}}\big|_{\mathbb{R}^n\setminus\{0\}} \in C^\infty\big(\mathbb{R}^n\setminus\{0\}\big)$, the latter further gives $F\big|_{\mathbb{R}^n\setminus\{0\}} \in C^k\big(\mathbb{R}^n\setminus\{0\}\big)$. The membership in (6.2.21) now follows since $k \in \mathbb{N}_0$ is arbitrary.

Returning to the proof of (6.2.22), fix $\beta, \alpha \in \mathbb{N}_0^n$ with $|\beta| - |\alpha| - m < -n$. In light of (b)–(c) in Theorem 4.25, it suffices to show $\xi^\beta D^\alpha\widehat{F} \in L^1(\mathbb{R}^n)$. Note that based on (6.2.19) and (2.4.16) we may write
$$D^\alpha\widehat{F} = \sum_{\gamma\leq\alpha}\frac{\alpha!}{\gamma!(\alpha-\gamma)!}\,D^\gamma\Big(\frac{1}{P(\xi)}\Big)\,D^{\alpha-\gamma}\big(1 - \psi(\xi)\big) \quad\text{in } \mathcal{S}'(\mathbb{R}^n). \tag{6.2.24}$$
We now claim that for each $\mu, \nu \in \mathbb{N}_0^n$ and with $R$ as in (6.2.17), there exists a finite constant $C > 0$ independent of $\xi$ such that
$$\Big|\xi^\mu D^\nu\Big(\frac{1}{P(\xi)}\Big)\Big| \leq C|\xi|^{|\mu|-|\nu|-m} \qquad\text{for } |\xi| \geq R. \tag{6.2.25}$$
Indeed, since by induction one can see that $D^\nu\big(\frac{1}{P(\xi)}\big) = \frac{Q(\xi)}{P(\xi)^{|\nu|+1}}$ for some polynomial $Q$ of degree at most $(m-1)|\nu|$, by making also use of (6.2.17), for $|\xi| \geq R$ we may write
$$\Big|\xi^\mu D^\nu\Big(\frac{1}{P(\xi)}\Big)\Big| \leq |\xi|^{|\mu|}\,\frac{|Q(\xi)|}{|P(\xi)|^{|\nu|+1}} \leq C|\xi|^{|\mu|}\,\frac{|\xi|^{(m-1)|\nu|}}{|\xi|^{(|\nu|+1)m}} = C|\xi|^{|\mu|-|\nu|-m}, \tag{6.2.26}$$
where $C \in (0,\infty)$ is independent of $\xi$. This proves (6.2.25).
6.2. HYPOELLIPTIC OPERATORS WITH CONSTANT...
At this point, we combine (6.2.24), (6.2.25), (6.2.17), and the properties of ψ to estimate, for every ξ ∈ Rⁿ,

$$\big|\xi^{\beta}D^{\alpha}\widehat{F}\big| \le \sum_{\gamma<\alpha}\frac{\alpha!}{\gamma!\,(\alpha-\gamma)!}\,\Big|\xi^{\beta}D^{\gamma}\Big(\frac{1}{P(\xi)}\Big)\,D^{\alpha-\gamma}\big(1-\psi(\xi)\big)\Big| + \Big|\xi^{\beta}D^{\alpha}\Big(\frac{1}{P(\xi)}\Big)\big(1-\psi(\xi)\big)\Big|$$
$$\le \sum_{\gamma<\alpha} C_{\alpha,\gamma}\,|\xi|^{|\beta|-|\gamma|-m}\,\chi_{\operatorname{supp}\psi\setminus B(0,R)} + C\,|\xi|^{|\beta|-|\alpha|-m}\,\chi_{\mathbb{R}^n\setminus B(0,R)}. \tag{6.2.27}$$

Recalling that |β| − |α| − m < −n, from (6.2.27) we obtain that $\xi^{\beta}D^{\alpha}\widehat{F} \in L^{1}(\mathbb{R}^n)$, as desired. This completes the proof of (6.2.22), and with it the proof of the theorem.

Exercise 6.16. Use Theorem 6.12 to give an alternative proof of Theorem 6.15, that is, that linear, constant coefficient elliptic operators are hypoelliptic in Rⁿ.
Hint: Show that C₁ may be chosen sufficiently large so that, in addition to (6.2.15), one also has |∂^β P(ξ)| ≤ C₁(1 + |ξ|)^{m−|β|}. Then use the latter and (6.2.17) to show that (6.2.12) is verified by taking C := (2C₁/C₀)(1 + 1/R)^m and c := 1.

Proposition 6.17. The poly-harmonic operator is hypoelliptic. In particular, the operators Δ and Δ² are hypoelliptic.

Proof. This is a consequence of Theorem 6.15 and Remark 6.14.

Corollary 6.18. Let P(D) be a linear, constant coefficient elliptic operator in Rⁿ and let Ω be an open subset of Rⁿ. If u ∈ D′(Ω) is such that for some open subset ω of Ω we have $P(D)u\big|_{\omega} \in C^{\infty}(\omega)$, then $u\big|_{\omega} \in C^{\infty}(\omega)$.

Proof. From Theorem 6.15 and Remark 6.2 we know that the restriction of P(D) to ω is a hypoelliptic operator. Since $P(D)u\big|_{\omega} \in \mathcal{D}'(\omega)$ has an empty singular support, Proposition 6.6 gives that the singular support of $u\big|_{\omega}$ is also empty. Consequently, $u\big|_{\omega} \in C^{\infty}(\omega)$.

An example of a linear, constant coefficient, differential operator that is not hypoelliptic is the operator P = ∂₁² − ∂₂² used in (1.1.1) to describe the equation governing the displacement of a vibrating string in R². Indeed, as we shall see later (cf. (9.1.28)), the distribution u_f associated with the locally integrable function

$$f(x_1,x_2) := H(x_2-|x_1|), \quad \forall\,(x_1,x_2)\in\mathbb{R}^2 \tag{6.2.28}$$

(where H stands for the Heaviside function) satisfies

$$(\partial_1^2-\partial_2^2)u_f = 2\delta \ \text{ in } \mathcal{D}'(\mathbb{R}^2) \quad\text{and}\quad \operatorname{sing\,supp} u_f = \{(x_1,x_2)\in\mathbb{R}^2:\ x_2=|x_1|\}. \tag{6.2.29}$$

Hence, sing supp u_f ≠ sing supp P u_f which, in light of Proposition 6.6, shows that P is not hypoelliptic.
CHAPTER 6. HYPOELLIPTIC OPERATORS
6.3
Integral Representation Formulas and Interior Estimates
Consider the constant coefficient, linear, differential operator

$$P(\partial) = \sum_{|\alpha|\le m} a_{\alpha}\,\partial^{\alpha}, \qquad a_{\alpha}\in\mathbb{C} \ \text{ for all } \alpha\in\mathbb{N}_0^n \text{ with } |\alpha|\le m. \tag{6.3.1}$$
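Operators of the form (6.3.1) act coordinate-wise through iterated partial derivatives. The following short sketch (not from the book; a minimal numerical illustration, with the operator stored as a dictionary mapping multi-indices to coefficients) applies such a P(∂) to a gridded function via repeated central differences, using the Laplacian in R² as a test case:

```python
import numpy as np

def apply_P(coeffs, u, dx):
    """Apply P(d) = sum_alpha c_alpha d^alpha (constant coefficients) to a
    gridded function u via repeated central differences (np.gradient)."""
    out = np.zeros_like(u)
    for alpha, c in coeffs.items():
        v = u
        for axis, k in enumerate(alpha):
            for _ in range(k):
                v = np.gradient(v, dx, axis=axis)
        out = out + c * v
    return out

xs = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(xs, xs, indexing="ij")
u = X**2 - Y**2                       # a harmonic polynomial
lap = {(2, 0): 1.0, (0, 2): 1.0}      # P(d) = Laplacian in R^2
err = apply_P(lap, u, xs[1] - xs[0])
# away from the boundary (where one-sided differences are used),
# the discrete Laplacian of a harmonic polynomial vanishes
print(np.abs(err[5:-5, 5:-5]).max())
```

The trimming of boundary cells is needed only because `np.gradient` falls back to one-sided differences there.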
Theorem 6.19 (A general integral representation formula). Assume that the constant coefficient, linear, differential operator P(∂) is as in (6.3.1) and is hypoelliptic in Rⁿ. Let E ∈ D′(Rⁿ) be a fundamental solution for P(∂) as provided by Theorem 6.8 (hence, in particular, E ∈ C^∞(Rⁿ∖{0})). Let Ω be an open subset of Rⁿ and suppose u ∈ D′(Ω) satisfies P(∂)u = 0 in D′(Ω). Then u ∈ C^∞(Ω) and for each x₀ ∈ Ω, each r ∈ (0, dist(x₀, ∂Ω)), and each function ψ ∈ C₀^∞(B(x₀, r)) such that ψ = 1 near B(x₀, r/2), we have

$$u(x) = -\sum_{|\alpha|\le m}\sum_{\gamma<\alpha}(-1)^{|\alpha|+|\gamma|}a_{\alpha}\,\frac{\alpha!}{(\alpha-\gamma)!\,\gamma!}\int_{B(x_0,r)\setminus B(x_0,r/2)}(\partial^{\gamma}E)(x-y)\,(\partial^{\alpha-\gamma}\psi)(y)\,u(y)\,dy \tag{6.3.2}$$

for each x ∈ B(x₀, r/2). In particular, for every μ ∈ N₀ⁿ,

$$(\partial^{\mu}u)(x) = -\sum_{|\alpha|\le m}\sum_{\gamma<\alpha}(-1)^{|\alpha|+|\gamma|}a_{\alpha}\,\frac{\alpha!}{(\alpha-\gamma)!\,\gamma!}\int_{B(x_0,r)\setminus B(x_0,r/2)}(\partial^{\gamma+\mu}E)(x-y)\,(\partial^{\alpha-\gamma}\psi)(y)\,u(y)\,dy \tag{6.3.3}$$
for each x ∈ B(x₀, r/2).

Proof. The fact that u ∈ C^∞(Ω) is a consequence of the hypoellipticity of the operator P(∂) (cf. Definition 6.1 and Remark 6.2). As regards (6.3.2), pick an arbitrary x₀ ∈ Ω, fix r ∈ (0, dist(x₀, ∂Ω)), and let ψ ∈ C₀^∞(B(x₀, r)) be such that ψ = 1 near B(x₀, r/2). Then ψu ∈ C₀^∞(Rⁿ) and since P(∂)E = δ in D′(Rⁿ) we may write, for each point x ∈ B(x₀, r/2),

$$u(x) = (\psi u)(x) = \big(\delta*(\psi u)\big)(x) = \big((P(\partial)E)*(\psi u)\big)(x) = \big(E*(P(\partial)(\psi u))\big)(x) = \sum_{|\alpha|\le m}\sum_{0<\beta\le\alpha}a_{\alpha}\,\frac{\alpha!}{\beta!\,(\alpha-\beta)!}\,\big(E*(\partial^{\beta}\psi\,\partial^{\alpha-\beta}u)\big)(x), \tag{6.3.4}$$
where we have used (6.3.1), part (e) of Theorem 2.87, and that P (∂)u = 0 in Ω. Note that for each β > 0 the function ∂ β ψ is compactly supported in
B(x₀, r) ∖ B(x₀, r/2). Keeping this in mind, we may then integrate by parts in order to further write the last expression in (6.3.4) as

$$\sum_{|\alpha|\le m}\sum_{0<\beta\le\alpha}a_{\alpha}\,\frac{\alpha!}{\beta!\,(\alpha-\beta)!}\,\big(E*(\partial^{\beta}\psi\,\partial^{\alpha-\beta}u)\big)(x)$$
$$= \sum_{|\alpha|\le m}\sum_{0<\beta\le\alpha}a_{\alpha}\,\frac{\alpha!}{\beta!\,(\alpha-\beta)!}\int_{B(x_0,r)\setminus B(x_0,r/2)}E(x-y)\,\partial^{\beta}\psi(y)\,\partial^{\alpha-\beta}u(y)\,dy$$
$$= \sum_{|\alpha|\le m}\sum_{0<\beta\le\alpha}(-1)^{|\alpha|+|\beta|}a_{\alpha}\,\frac{\alpha!}{\beta!\,(\alpha-\beta)!}\int_{B(x_0,r)\setminus B(x_0,r/2)}\partial_y^{\alpha-\beta}\big(E(x-y)\,\partial^{\beta}\psi(y)\big)\,u(y)\,dy$$
$$= \sum_{|\alpha|\le m}\sum_{0<\beta\le\alpha}\sum_{\gamma\le\alpha-\beta}(-1)^{|\alpha|+|\beta|+|\gamma|}a_{\alpha}\,\frac{\alpha!}{\beta!\,(\alpha-\beta)!}\cdot\frac{(\alpha-\beta)!}{\gamma!\,(\alpha-\beta-\gamma)!}\int_{B(x_0,r)\setminus B(x_0,r/2)}(\partial^{\gamma}E)(x-y)\,\partial^{\alpha-\gamma}\psi(y)\,u(y)\,dy. \tag{6.3.5}$$
To proceed, observe that whenever γ < α formula (13.2.3) gives

$$\sum_{0<\beta\le\alpha-\gamma}(-1)^{|\beta|}\frac{(\alpha-\gamma)!}{\beta!\,(\alpha-\gamma-\beta)!} = \sum_{\beta\le\alpha-\gamma}(-1)^{|\beta|}\frac{(\alpha-\gamma)!}{\beta!\,(\alpha-\gamma-\beta)!} - 1 = 0 - 1 = -1. \tag{6.3.6}$$
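The vanishing of the full alternating sum in (6.3.6) factorizes over the coordinates of the multi-index, since each one-dimensional alternating binomial sum is (1 − 1)^{α_i} = 0 whenever α_i > 0. A quick brute-force check (an illustration, not part of the book):

```python
from itertools import product
from math import comb

def alternating_sum(alpha):
    """sum over beta <= alpha of (-1)^{|beta|} prod_i C(alpha_i, beta_i);
    this equals 0 whenever alpha != 0, and 1 for alpha = 0."""
    total = 0
    for beta in product(*[range(a + 1) for a in alpha]):
        term = 1
        for a, b in zip(alpha, beta):
            term *= comb(a, b)
        total += (-1) ** sum(beta) * term
    return total

print(alternating_sum((2, 3)), alternating_sum((1, 0, 4)), alternating_sum((0, 0)))
```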
Making use of (6.3.6) back in (6.3.5) yields

$$\sum_{|\alpha|\le m}\sum_{0<\beta\le\alpha}a_{\alpha}\,\frac{\alpha!}{\beta!\,(\alpha-\beta)!}\,\big(E*(\partial^{\beta}\psi\,\partial^{\alpha-\beta}u)\big)(x) = -\sum_{|\alpha|\le m}\sum_{\gamma<\alpha}(-1)^{|\alpha|+|\gamma|}a_{\alpha}\,\frac{\alpha!}{\gamma!\,(\alpha-\gamma)!}\int_{B(x_0,r)\setminus B(x_0,r/2)}(\partial^{\gamma}E)(x-y)\,\partial^{\alpha-\gamma}\psi(y)\,u(y)\,dy, \tag{6.3.7}$$
and (6.3.2) follows from (6.3.4) and (6.3.7). Finally, (6.3.3) is obtained by differentiating (6.3.2). To state our next result, recall that the operator P (∂) from (6.3.1) is said to be homogeneous (of degree m) whenever aα = 0 for all multi-indices α with |α| < m. Theorem 6.20 (Interior estimates). Let P (∂) be a constant coefficient, linear, differential operator, of order m ∈ N0 , which is hypoelliptic in Rn . Suppose E ∈ D (Rn ) is a fundamental solution for P (∂) as provided by Theorem 6.8 (thus, in particular, E ∈ C ∞ (Rn \ {0})). Then for each μ ∈ Nn0 there exists a constant Cμ ∈ (0, ∞) (which also depends on the coefficients of P (∂) and n) with the following significance.
If Ω is an open subset of Rⁿ and u ∈ D′(Ω) satisfies P(∂)u = 0 in D′(Ω), then u ∈ C^∞(Ω) and for each x₀ ∈ Ω and each r ∈ (0, dist(x₀, ∂Ω)) we have

$$\sup_{x\in B(x_0,r/2)}|(\partial^{\mu}u)(x)| \le C_{\mu}\,\max_{\substack{|\alpha|\le m\\ \gamma<\alpha}}\Big(r^{|\gamma|-|\alpha|}\cdot\sup_{r/4<|z|<3r/2}|(\partial^{\gamma+\mu}E)(z)|\Big)\int_{B(x_0,r)\setminus B(x_0,r/2)}|u(y)|\,dy. \tag{6.3.8}$$
In the particular case when P(∂) is homogeneous (of degree m) and one also assumes that there exists k ∈ N such that the fundamental solution E satisfies

$$|\partial^{\beta}E(x)| \le C_{\beta}\,|x|^{-n+m-|\beta|}, \quad \forall\,x\in\mathbb{R}^n\setminus\{0\},\ \forall\,\beta\in\mathbb{N}_0^n \text{ with } |\beta|\ge k, \tag{6.3.9}$$

then whenever |μ| ≥ k we have

$$|(\partial^{\mu}u)(x_0)| \le \frac{C_{\mu}}{r^{|\mu|}}\,\fint_{B(x_0,r)}|u(y)|\,dy. \tag{6.3.10}$$
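For the Laplacian in R², an estimate of the shape (6.3.10) with |μ| = 1 can be observed directly: for u harmonic, ∂₁u(x₀) equals the average of ∂₁u over a ball, and the divergence theorem converts this into a boundary integral of u, yielding |∂₁u(x₀)| ≤ (n/r)·(average of |u| on the sphere). The following sketch (an illustration with a hypothetical choice of harmonic u; not from the book) checks this for u(x, y) = eˣ cos y, for which |∂₁u(0, 0)| = 1:

```python
import numpy as np

u = lambda x, y: np.exp(x) * np.cos(y)   # harmonic in R^2
theta = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)

checks = []
for r in (0.25, 0.5, 1.0):
    # average of |u| over the circle of radius r centered at the origin
    circle_avg = np.abs(u(r * np.cos(theta), r * np.sin(theta))).mean()
    lhs = 1.0                      # |d_x u(0,0)| = |e^0 cos 0| = 1
    rhs = (2.0 / r) * circle_avg   # constant C = n = 2 in this normalization
    checks.append(lhs <= rhs)
print(checks)
```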
Proof. Pick a function φ ∈ C₀^∞(B(0, 1)) such that φ = 1 near B(0, 3/4), and note that if we set

$$\psi(x) := \varphi\big((x-x_0)/r\big), \quad \forall\,x\in\mathbb{R}^n, \tag{6.3.11}$$

then ψ ∈ C₀^∞(B(x₀, r)) and ψ = 1 near B(x₀, 3r/4). In particular, for each γ ∈ N₀ⁿ with |γ| > 0 there exists c_γ ∈ (0, ∞) independent of r such that

$$\operatorname{supp}(\partial^{\gamma}\psi) \subseteq B(x_0,r)\setminus B(x_0,3r/4) \quad\text{and}\quad \|\partial^{\gamma}\psi\|_{L^{\infty}(\mathbb{R}^n)} \le c_{\gamma}\,r^{-|\gamma|}. \tag{6.3.12}$$

For such a choice of ψ, estimate (6.3.8) then follows in a straightforward manner from the integral representation formula (6.3.3). In turn, (6.3.8) readily implies (6.3.10) in the case when P(∂) is homogeneous (of degree m), E satisfies (6.3.9) for some k ∈ N, and |μ| ≥ k.

The procedure described abstractly in the following lemma is useful for deriving higher-order interior estimates, with precise control of the constants involved, for various classes of functions.

Lemma 6.21. Let A ⊂ C^∞(Ω) be a family of functions satisfying the following two conditions:
(1) ∂^α u ∈ A for every u ∈ A and every α ∈ N₀ⁿ;
(2) There exists C ∈ (0, ∞) such that for every u ∈ A we have

$$|\partial_j u(x)| \le \frac{C}{r}\,\max_{y\in B(x,r)}|u(y)|, \quad \forall\,j\in\{1,\dots,n\}, \tag{6.3.13}$$

whenever x ∈ Ω and r ∈ (0, dist(x, ∂Ω)).
Then given any u ∈ A, for every x ∈ Ω, every r ∈ (0, dist(x, ∂Ω)), every k ∈ N, and every λ ∈ (0, 1), we have (with C as in (6.3.13))

$$\max_{y\in B(x,\lambda r)}|\partial^{\alpha}u(y)| \le \frac{C^{k}\,(1-\lambda)^{-k}\,e^{k-1}\,k!}{r^{k}}\,\max_{y\in B(x,r)}|u(y)| \tag{6.3.14}$$

for every multi-index α ∈ N₀ⁿ with |α| = k.
Proof. In a first stage, we propose to prove by induction over k that, given any u ∈ A, for every x ∈ Ω, every r ∈ (0, dist(x, ∂Ω)), and every k ∈ N we have (with C as in (6.3.13))

$$|\partial^{\alpha}u(x)| \le \frac{C^{k}\,e^{k-1}\,k!}{r^{k}}\,\max_{y\in B(x,r)}|u(y)|, \quad \forall\,\alpha\in\mathbb{N}_0^n \text{ with } |\alpha|=k. \tag{6.3.15}$$
Note that if k = 1, this is contained in (6.3.13). Suppose now that (6.3.15) holds for some k ∈ N and every u ∈ A, every x ∈ Ω, and every r ∈ (0, dist(x, ∂Ω)), and pick an arbitrary α ∈ N₀ⁿ with |α| = k + 1. Then there exist j ∈ {1, . . . , n} and β ∈ N₀ⁿ for which ∂^α = ∂_j ∂^β. Fix such j and β and note that, in particular, |β| = k. Next, take x ∈ Ω and r ∈ (0, dist(x, ∂Ω)) arbitrary, and pick some ε ∈ (0, 1) to be specified shortly. Since ∂^β u ∈ A we may use (6.3.13) with u replaced by ∂^β u and r replaced by (1 − ε)r to obtain

$$|\partial^{\alpha}u(x)| \le \frac{C}{(1-\varepsilon)r}\,\max_{y\in B(x,(1-\varepsilon)r)}|\partial^{\beta}u(y)|. \tag{6.3.16}$$

Note that if y ∈ B(x, (1 − ε)r) then B(y, εr) ⊂ B(x, r) ⊂ Ω. Thus, for each point y ∈ B(x, (1 − ε)r), we may use the induction hypothesis to estimate

$$|\partial^{\beta}u(y)| \le \frac{C^{k}\,e^{k-1}\,k!}{(\varepsilon r)^{k}}\,\max_{z\in B(y,\varepsilon r)}|u(z)| \le \frac{C^{k}\,e^{k-1}\,k!}{\varepsilon^{k}\,r^{k}}\,\max_{z\in B(x,r)}|u(z)|. \tag{6.3.17}$$

Combined, (6.3.16) and (6.3.17) yield

$$|\partial^{\alpha}u(x)| \le \frac{C^{k+1}\,e^{k-1}\,k!}{(1-\varepsilon)\,\varepsilon^{k}\,r^{k+1}}\,\max_{z\in B(x,r)}|u(z)|. \tag{6.3.18}$$

Set now ε := k/(k + 1) ∈ (0, 1) in (6.3.18) which, given that ε^{−k} = (1 + 1/k)^k < e and 1/(1 − ε) = k + 1, further implies

$$|\partial^{\alpha}u(x)| \le \frac{C^{k+1}\,e^{k}\,(k+1)!}{r^{k+1}}\,\max_{z\in B(x,r)}|u(z)|. \tag{6.3.19}$$
Hence, (6.3.15) holds with k + 1 in place of k, as desired.

Finally, as far as (6.3.14) is concerned, pick x₀ ∈ Ω, r ∈ (0, dist(x₀, ∂Ω)), k ∈ N, as well as λ ∈ (0, 1). Next, select an arbitrary point x ∈ B(x₀, λr) and note that this forces (1 − λ)r ∈ (0, dist(x, ∂Ω)). In concert with the fact that B(x, (1 − λ)r) ⊆ B(x₀, r), this permits us to invoke (6.3.15) in order to estimate

$$|\partial^{\alpha}u(x)| \le \frac{C^{k}\,e^{k-1}\,k!}{\big((1-\lambda)r\big)^{k}}\,\max_{y\in B(x,(1-\lambda)r)}|u(y)| \le \frac{C^{k}\,(1-\lambda)^{-k}\,e^{k-1}\,k!}{r^{k}}\,\max_{y\in B(x_0,r)}|u(y)|, \tag{6.3.20}$$
whenever α ∈ N₀ⁿ satisfies |α| = k. Taking the supremum over x ∈ B(x₀, λr) then yields (6.3.14) (written with x₀ in place of x).

Definition 6.22. A function u ∈ C^∞(Ω) is called real-analytic in Ω provided for every x ∈ Ω there exists r_x ∈ (0, ∞) with the property that B(x, r_x) ⊆ Ω and the Taylor series for u at x converges uniformly to u on B(x, r_x), that is,

$$u(y) = \sum_{\alpha\in\mathbb{N}_0^n}\frac{1}{\alpha!}\,(y-x)^{\alpha}\,(\partial^{\alpha}u)(x) \quad\text{uniformly for } y\in B(x,r_x). \tag{6.3.21}$$
The following observation justifies the term “real-analytic” used for the class of functions introduced above.

Remark 6.23. In the context of Definition 6.22, it is clear that, for each x ∈ Ω,

$$\widetilde{u}(z) := \sum_{\alpha\in\mathbb{N}_0^n}\frac{1}{\alpha!}\,(z-x)^{\alpha}\,(\partial^{\alpha}u)(x) \quad\text{for } z\in\mathbb{C}^n \text{ with } |z-x|<r_x, \tag{6.3.22}$$
is a holomorphic function in {z ∈ Cⁿ : |z − x| < r_x} which locally extends the original real-analytic function u. Such local extensions may be constructed near each point x ∈ Ω and Lemma 7.53 ensures that any two local extensions coincide on their common domain. The conclusion is that the real-analytic function u has a well-defined extension to a holomorphic function defined in a neighborhood of Ω in Cⁿ.

Our next lemma gives a sufficient condition (which is in the nature of best possible) ensuring real-analyticity.

Lemma 6.24. Suppose u ∈ C^∞(Ω) is a function with the property that for each x ∈ Ω there exist r = r(x) ∈ (0, dist(x, ∂Ω)), M = M(x) ∈ (0, ∞), and C = C(x) ∈ (0, ∞), such that for every k ∈ N the following estimate holds:

$$\max_{y\in B(x,r)}|\partial^{\alpha}u(y)| \le M\,C^{k}\,k!, \quad \forall\,\alpha\in\mathbb{N}_0^n \text{ with } |\alpha|=k. \tag{6.3.23}$$

Then u is real-analytic in Ω.
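As a one-dimensional illustration of the criterion (6.3.23) (this example is not from the book): for u(x) = 1/(2 − x) one has u^{(k)}(x) = k!/(2 − x)^{k+1}, so on a ball B(0, r) with r < 2 the bound (6.3.23) holds with M = C = 1/(2 − r), and indeed the Taylor series of u at 0 converges uniformly on, say, [−1/2, 1/2]:

```python
import numpy as np

# u(x) = 1/(2 - x);  u^{(k)}(x) = k!/(2 - x)^{k+1}, so (6.3.23) holds
# with M = C = 1/(2 - r) on B(0, r), r < 2
y = np.linspace(-0.5, 0.5, 101)
u_exact = 1.0 / (2.0 - y)

taylor = np.zeros_like(y)
term = np.ones_like(y) / 2.0          # k = 0 term: u(0) = 1/2
for k in range(60):
    taylor += term
    term = term * y / 2.0             # next term (1/2)(y/2)^{k+1}
print(np.abs(taylor - u_exact).max())
```

The geometric decay of the terms is exactly what the proof of Lemma 6.24 extracts from the factorial bound.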
Proof. Fix x ∈ Ω and let r ∈ (0, dist(x, ∂Ω)) be such that (6.3.23) holds for every k ∈ N. Write Taylor’s formula (13.2.7) for u at x to obtain that for each N ∈ N and each y ∈ B(x, r/2) there exists θ ∈ (0, 1) such that

$$u(y) = \sum_{|\alpha|\le N-1}\frac{1}{\alpha!}\,(y-x)^{\alpha}\,(\partial^{\alpha}u)(x) + R_{N,u}(y), \tag{6.3.24}$$

where

$$R_{N,u}(y) := \sum_{|\alpha|=N}\frac{1}{\alpha!}\,(y-x)^{\alpha}\,(\partial^{\alpha}u)\big(x+\theta(y-x)\big). \tag{6.3.25}$$
Using (6.3.23), for each α ∈ N₀ⁿ with |α| = N and y ∈ B(x, r/2), we may estimate

$$\big|(\partial^{\alpha}u)\big(x+\theta(y-x)\big)\big| \le \max_{z\in B(x,r)}|(\partial^{\alpha}u)(z)| \le M\,C^{N}\,N! \tag{6.3.26}$$

since x + θ(y − x) ∈ B(x, r/2) ⊂ B(x, r). Together, (6.3.25) and (6.3.26) imply that, for each y ∈ B(x, r/2),

$$|R_{N,u}(y)| \le M\,C^{N}\,|y-x|^{N}\sum_{|\alpha|=N}\frac{N!}{\alpha!} = M\,\big(nC\,|y-x|\big)^{N}. \tag{6.3.27}$$

For the equality in (6.3.27) we used formula (13.2.1) for x₁ = · · · = x_n = 1. Hence, if, say, |y − x| < min{1/(2nC), r/2}, then lim_{N→∞} R_{N,u}(y) = 0 uniformly with
respect to y. Consequently, the Taylor series for u converges uniformly to u in a neighborhood of x. Since x is arbitrary in Ω it follows that u is real-analytic in Ω.

Theorem 6.25 (Unique continuation). Suppose Ω ⊆ Rⁿ is an open and connected set. Then any real-analytic function u in Ω with the property that there exists x₀ ∈ Ω such that ∂^α u(x₀) = 0 for all α ∈ N₀ⁿ (which is the case if, e.g., u is zero in a neighborhood of x₀) vanishes identically in Ω.

Proof. Suppose u satisfies the hypotheses of the theorem and define the set

$$U := \big\{x\in\Omega:\ \partial^{\alpha}u(x)=0 \ \text{ for all } \alpha\in\mathbb{N}_0^n\big\}. \tag{6.3.28}$$

Since x₀ ∈ U we have U ≠ ∅. Also, U is relatively closed in Ω given that it is the intersection of the relatively closed sets (∂^α u)^{−1}({0}), α ∈ N₀ⁿ. In addition, if x ∈ U then, on the one hand, the Taylor series of u at x is identically zero, while on the other hand, this series converges to u in an open neighborhood of x. Thus, u is identically zero in that neighborhood of x, which ultimately proves that U is also open. Recalling that Ω is connected, it follows that U = Ω, as desired.

Theorem 6.25 highlights the much more restrictive nature of real-analyticity compared to indefinite differentiability, since obviously there are plenty of C^∞ functions that vanish in a neighborhood of a point without being identically zero.

Theorem 6.26. Let P(∂) be a constant coefficient, linear, differential operator in Rⁿ, which is homogeneous of degree m ∈ N₀. Assume that P(∂) has a fundamental solution E ∈ D′(Rⁿ) with the property that E ∈ C^∞(Rⁿ∖{0}) and

$$|\partial^{\beta}E(x)| \le C_{\beta}\,|x|^{-n+m-|\beta|}, \quad \forall\,x\in\mathbb{R}^n\setminus\{0\},\ \forall\,\beta\in\mathbb{N}_0^n \text{ with } |\beta|>0. \tag{6.3.29}$$

Finally, suppose that Ω ⊆ Rⁿ is open and u ∈ D′(Ω) satisfies P(∂)u = 0 in D′(Ω).
Then u ∈ C^∞(Ω) and there exists a constant C ∈ (0, ∞) with the property that if λ ∈ (0, 1) we have

$$\max_{y\in B(x,\lambda r)}|\partial^{\alpha}u(y)| \le \frac{C^{|\alpha|}\,(1-\lambda)^{-|\alpha|}\,|\alpha|!}{r^{|\alpha|}}\,\max_{y\in B(x,r)}|u(y)|, \quad \forall\,\alpha\in\mathbb{N}_0^n, \tag{6.3.30}$$

whenever x ∈ Ω and r ∈ (0, dist(x, ∂Ω)). In particular, u is real-analytic in Ω.

Proof. The fact that P(∂) has a fundamental solution E ∈ D′(Rⁿ) with the property that E ∈ C^∞(Rⁿ∖{0}) implies sing supp E = {0}, hence P(∂) is hypoelliptic in Rⁿ by Theorem 6.8. Granted the properties enjoyed by P(∂) and condition (6.3.29), estimate (6.3.30) follows from (6.3.10) (used with |μ| = 1) and Lemma 6.21 (in which we take A := {u ∈ C^∞(Ω) : P(∂)u = 0}). Finally, the last claim in the statement of the theorem is a consequence of (6.3.30) and Lemma 6.24.
Further Notes for Chap. 6. For a more extensive discussion of the notion of hypoellipticity the interested reader is referred to the excellent presentation in [32]. Further information about real-analytic functions may be found in, e.g., [37].
6.4
Additional Exercises for Chap. 6
Exercise 6.27. Let a > 0 be fixed.
(a) Compute a fundamental solution for the operator d²/dx² − a² in R.
(b) Use the result from part (a) to compute the Fourier transform in S′(R) of the tempered distribution associated with the function f(x) := 1/(x² + a²), x ∈ R (compare with Example 4.23).

Exercise 6.28. Prove that sing supp u ⊆ supp u for every u ∈ D′(Ω).

Exercise 6.29. Give an example of a distribution u for which the inclusion from Exercise 6.28 is strict. Give an example of a distribution u for which sing supp u = supp u.

Exercise 6.30. Let x₀ ∈ Rⁿ. Prove that sing supp(∂^α δ_{x₀}) = {x₀} for every α ∈ N₀ⁿ.

Exercise 6.31. Let a ∈ C₀^∞(Ω) and u ∈ D′(Ω). Prove that sing supp(au) ⊆ sing supp u.

Exercise 6.32. Recall the distribution from (2.1.13). Determine sing supp(P.V. 1/x).

Exercise 6.33. Let m ∈ N and let P be a polynomial in Rⁿ of degree 2m with no real roots. Recall from Exercise 4.123 that 1/P ∈ S′(Rⁿ). Prove that sing supp F(1/P) = ∅.

Exercise 6.34. Let u ∈ D′(R²) be the distribution defined by u := P.V. 1/x ⊗ δ(y), where P.V. 1/x ∈ D′(R) is as in (2.1.13). Determine sing supp u.
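Regarding Exercise 6.27(a), a natural candidate (an assumption here, not stated in the book) is E(x) = −e^{−a|x|}/(2a): away from the origin E″ = a²E, and E′ jumps by 1 across 0. The pairing ⟨E, φ″ − a²φ⟩ should then equal φ(0), which the following sketch checks by quadrature:

```python
import numpy as np

a = 1.0
x = np.linspace(-12.0, 12.0, 48001)
dx = x[1] - x[0]
E = -np.exp(-a * np.abs(x)) / (2.0 * a)    # candidate fundamental solution
phi = np.exp(-x**2)                         # test function (fast decay)
phi2 = (4.0 * x**2 - 2.0) * np.exp(-x**2)   # phi''
pairing = np.sum(E * (phi2 - a**2 * phi)) * dx
print(pairing)   # should be close to phi(0) = 1
```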
Chapter 7
The Laplacian and Related Operators

One of the most important operators in partial differential equations is the Laplace operator¹ Δ (also called the Laplacian) in Rⁿ, which is defined as $\Delta := \sum_{j=1}^{n}\partial_j^2$. Functions f satisfying Δf = 0 pointwise in Rⁿ are called harmonic in Rⁿ. The Laplace operator arises in many applications such as in the modeling of heat conduction, electrical conduction, and chemical concentration, to name a few. The focus for us will be on finding all fundamental solutions for Δ that are tempered distributions. Part of the motivation is that, as explained in Remark 5.5, this greatly facilitates the study of the Poisson equation Δu = f in D′(Rⁿ), a task we take up in Sect. 7.2. In addition to treating the Laplace operator, in this chapter we also consider a number of other related operators, such as the bi-Laplacian, the poly-harmonic operator, the Cauchy–Riemann operator, the Dirac operator, as well as general constant coefficient (homogeneous) second-order strongly elliptic operators.
7.1
Fundamental Solutions for the Laplace Operator
The goal in this section is to determine all fundamental solutions of Δ that are also in S′(Rⁿ). This is done by employing the properties of the Fourier transform we have proved so far. To get started, fix some E ∈ S′(Rⁿ) that is a fundamental solution for Δ. Note that the existence of such a tempered distribution is guaranteed by Theorem 5.13. Since ΔE = δ in D′(Rⁿ) and δ ∈ S′(Rⁿ), by (4.1.25) it follows that

$$\Delta E = \delta \quad\text{in }\mathcal{S}'(\mathbb{R}^n). \tag{7.1.1}$$

¹Named after the French mathematician and astronomer Pierre Simon de Laplace (1749–1827).

D. Mitrea, Distributions, Partial Differential Equations, and Harmonic Analysis, Universitext, DOI 10.1007/978-1-4614-8208-6_7, © Springer Science+Business Media New York 2013
Applying F to the equation in (7.1.1) (and using that $-\Delta = \sum_{j=1}^{n}D_{x_j}^2$, part (b) in Theorem 4.25, and (4.2.3)) we obtain

$$-|\xi|^2\,\widehat{E} = 1 \quad\text{in }\mathcal{S}'(\mathbb{R}^n). \tag{7.1.2}$$
To proceed with determining E, we discuss separately the cases n ≥ 3, n = 2, and n = 1.

Case n ≥ 3. From Exercise 4.4 we have that $\frac{1}{|\xi|^2} \in \mathcal{S}'(\mathbb{R}^n)$. In addition, |ξ|² ∈ L(Rⁿ), thus $|\xi|^2\cdot\frac{1}{|\xi|^2} \in \mathcal{S}'(\mathbb{R}^n)$ [recall (b) in Theorem 4.13], and it is not difficult to check that $|\xi|^2\cdot\frac{1}{|\xi|^2} = 1$ in S′(Rⁿ). Hence, $|\xi|^2\big(\widehat{E}+\frac{1}{|\xi|^2}\big) = 0$ in S′(Rⁿ). Thus, $\operatorname{supp}\big(\widehat{E}+\frac{1}{|\xi|^2}\big) \subseteq \{0\}$ and, by Exercise 4.35, it follows that

$$\widehat{E} + \frac{1}{|\xi|^2} = \widehat{P}(\xi) \quad\text{in }\mathcal{S}'(\mathbb{R}^n), \tag{7.1.3}$$

where P is a polynomial in Rⁿ satisfying $|\xi|^2\widehat{P} = 0$ in S′(Rⁿ). The latter implies that ΔP = 0 in S′(Rⁿ). Since P ∈ C^∞(Rⁿ), we may conclude that ΔP = 0 pointwise in Rⁿ, hence P is a harmonic polynomial in Rⁿ. To compute E (which is equal to $\mathcal{F}^{-1}(\widehat{E})$) apply $\mathcal{F}^{-1}$ to the identity in (7.1.3) and then recall Proposition 4.61 with λ = 2 to write

$$E(x) = -\mathcal{F}^{-1}\Big(\frac{1}{|\xi|^2}\Big)(x) + P(x) = -2^{-2}\pi^{-\frac{n}{2}}\,\frac{\Gamma\big(\frac{n-2}{2}\big)}{\Gamma(1)}\,|x|^{2-n} + P(x) = -\frac{1}{(n-2)\,\omega_{n-1}}\cdot\frac{1}{|x|^{n-2}} + P(x), \quad \forall\,x\in\mathbb{R}^n\setminus\{0\}, \tag{7.1.4}$$

where the last equality in (7.1.4) is also based on the fact that Γ(1) = 1 and $\Gamma\big(\frac{n}{2}\big) = \big(\frac{n}{2}-1\big)\Gamma\big(\frac{n}{2}-1\big)$, and on (13.5.6). Moreover,

$$\Delta\Big(-\frac{1}{(n-2)\,\omega_{n-1}}\cdot\frac{1}{|x|^{n-2}}\Big) = \delta \quad\text{in }\mathcal{S}'(\mathbb{R}^n), \tag{7.1.5}$$

since (7.1.5) is equivalent [via the Fourier transform, as a consequence of (a) and (b) in Theorem 4.25, as well as (4.2.3)] with $|\xi|^2\,\mathcal{F}\Big(\frac{1}{(n-2)\,\omega_{n-1}}\cdot\frac{1}{|x|^{n-2}}\Big) = 1$ in S′(Rⁿ), or equivalently, with $|\xi|^2\cdot\frac{1}{|\xi|^2} = 1$ in S′(Rⁿ), which we know to be true.
in S (R2 ).
(7.1.6)
7.1. FUNDAMENTAL SOLUTIONS FOR THE LAPLACE...
219
To see why this is true, first note that since |ξ|2 ∈ L(R2 ), by (b) in Theorem 4.13 it follows that |ξ|2 wψ ∈ S (R2 ) while by part (a) in Theorem 3.13 we have |ξ|2 ϕ ∈ S(R2 ) for every ϕ ∈ S(R2 ). Hence, we may write |ξ|2 ϕ(ξ) − (|ξ|2 ϕ(ξ)) ψ(ξ) ξ=0 2 2 |ξ| wψ , ϕ = wψ , |ξ| ϕ = dξ |ξ|2 Rn ϕ(ξ) dξ = 1, ϕ, ∀ ϕ ∈ S(R2 ), (7.1.7) = R2
finishing the proof of (7.1.6). 3 + wψ ) = 0 in S (R2 ). A combination of (7.1.2) and (7.1.6) yields |ξ|2 (E Hence, by reasoning as in the case n ≥ 3, the latter implies 3 = −wψ + P3 E
in S (Rn ),
(7.1.8)
for some harmonic polynomial P . After applying the Fourier transform to the equality in (7.1.8) and recalling (4.6.4), we arrive at E(x) =
1 ln |x| + P (x), 2π
∀ x ∈ R2 \ {0}.
(7.1.9)
In addition, by making use of the Fourier transform (recall (a) and (c) in Theorem 4.25, (4.2.3), and Proposition 4.30), Eq. (7.1.6) is equivalent with − Δ(w 4ψ ) = (2π)2 δ
in S (R2 ).
Combining (7.1.10) with (4.6.4), we arrive at 1 ln |x| = δ in S (R2 ). Δ 2π
(7.1.10)
(7.1.11)
Case n = 1. In this case we use Exercise 5.15 to conclude that E = xH +c1 x+c0 in S (R), for some c0 , c1 ∈ C. Hence, E is of function type and we also have E(x) = xH + c1 x + c0 , for every x ∈ R. Remark 7.1. From Example 5.11 we know that −xH ∨ is another fundamental solution for Δ in D (R). This is in agreement with what we proved above since H + H ∨ = 1 in S (R); thus −xH ∨ = xH + x in S (R). In summary, we have proved the following result. Theorem 7.2. The function E ∈ L1loc (Rn ) defined as ⎧ −1 1 ⎪ ⎪ if x ∈ Rn \ {0}, n ≥ 3, ⎪ n−2 ⎪ (n − 2)ω |x| ⎪ n−1 ⎪ ⎨ E(x) := 1 if x ∈ R2 \ {0}, n = 2, ⎪ ⎪ 2π ln |x| ⎪ ⎪ ⎪ ⎪ ⎩ xH(x) if x ∈ R, n = 1,
(7.1.12)
220
CHAPTER 7. THE LAPLACIAN AND RELATED OPERATORS
belongs to S (Rn ) and is a fundamental solution2 for the Laplace operator Δ in Rn . Moreover, u ∈ S (Rn ) : Δu = δ in S (Rn (7.1.13) = E + P : P harmonic polynomial in Rn . Remark 7.3. Note that, as a consequence of Proposition 5.1 and Example 4.3, for a harmonic function in Rn , the respective function is a polynomial if and only if it is a tempered distribution. The collection of harmonic functions is strictly larger that the collection of harmonic polynomials. For example, the function u(x, y) := ex cos y for (x, y) ∈ R2 , is harmonic in R2 without being a polynomial. In particular, we obtain that u ∈ S (R2 ). Moving on, we recall that if Ω ⊂ Rn is a domain of class C 1 (see Definition 13.40) with outward unit normal ν = (ν1 , . . . , νn ) and u is a function of class C 1 in an open neighborhood of ∂Ω, then the normal derivative of u on ∂Ω, denoted by ∂u ∂ν , is the directional derivative of the function u along the unit vector ν, that is, ∂u (y) := νj (y)∂j u(y), ∂ν j=1 n
∀ y ∈ ∂Ω.
(7.1.14)
Remark 7.4. Via a direct computation, one may check that E as in (7.1.12) defines a fundamental solution for the Laplacian. We outline the computation for n ≥ 3 and leave the cases n = 2 and n = 1 as an exercise. First, observe that E ∈ C ∞ (Rn \ {0}) and ΔE = 0 in Rn \ {0}. Next, let ϕ ∈ C0∞ (Rn ) be such that supp ϕ ⊂ B(0, R), for some R ∈ (0, ∞). Then, starting with the definition of distributional derivatives and then using Lebesgue’s dominated convergence theorem, we write ΔE, ϕ = E, Δϕ = lim+ E(x)Δϕ(x) dx ε→0
6 = lim+ ε→0
ε≤|x|≤R
ΔE(x)ϕ(x) dx + ε≤|x|≤R
E ∂B(0,R)
∂ϕ dσ ∂ν
7 ∂E ∂E ∂ϕ dσ − ϕ dσ + ϕ dσ − E ∂ν ∂B(0,ε) ∂B(0,R) ∂ν ∂B(0,ε) ∂ν 6 7 ∂ϕ ∂E = lim+ ϕ−E dσ = ϕ(0), (7.1.15) ∂ν ∂ν ε→0 ∂B(0,ε)
2 In the case n = 3, the expression for E was used in 1789 by Pierre Simon de Laplace to show that for f smooth, compactly supported, Δ(E ∗ f ) = 0 outside the support of f (cf. [38]).
7.1. FUNDAMENTAL SOLUTIONS FOR THE LAPLACE...
221
where for each integral considered over ∂B(0, r), for some r ∈ (0, ∞), ν denotes the outward unit normal to B(0, r). For the third equality in (7.1.15) we used (13.7.5) while for the fourth equality in (7.1.15) we used the fact that the support of ϕ is contained inside B(0, R). In addition, to see why the last equality in 1 · |x|xn for every x ∈ Rn \ {0} to (7.1.15) holds, use the fact that ∇E(x) = ωn−1 write ∂E 1 (x) ϕ(x) dσ(x) = ϕ(x)dσ(x) (7.1.16) ωn−1 εn−1 ∂B(0,ε) ∂B(0,ε) ∂ν 1 = [ϕ(x) − ϕ(0)]dσ(x) + ϕ(0) ωn−1 εn−1 ∂B(0,ε) 1 = [ϕ(εy) − ϕ(0)]dσ(y) + ϕ(0) −−−−→ ϕ(0) ωn−1 S n−1 ε→0+ where the convergence follows by invoking Lebesgue’s dominated convergence theorem. Also, ∇ϕL∞ 1 ∂ϕ E(x) (x) dσ(x) ≤ dσ(x) (n − 2)ωn−1 ∂B(0,ε) |x|n−2 ∂B(0,ε) ∂ν = ε C(n, ϕ) −−−−→ 0.
(7.1.17)
ε→0+
This computation shows that if n ≥ 3, the L1loc (Rn ) function E from (7.1.12) satisfies ΔE, ϕ = δ, ϕ for every ϕ ∈ C0∞ (Rn ); thus, E is a fundamental solution for the Laplacian. We conclude this section by presenting a result relating the composition of Riesz transforms to singular integral operators which have kernels that are two derivatives on the fundamental solution for the Laplacian. Proposition 7.5. Let E be the fundamental solution for the Laplacian from (7.1.12) and recall the Riesz transforms in Rn (cf. Theorem 4.93). Then for each j, k ∈ {1, . . . , n} and each ϕ ∈ S(Rn ), we have ω 2 n Rj Rk ϕ (x) = − lim (∂j ∂k E)(x − y)ϕ(y) dy 2 ε→0+ |x−y|>ε ω 2 δ n jk ϕ(x) in S (Rn ), − (7.1.18) 2 n with the left-hand side initially understood in L2 (Rn ) (cf. Theorem 4.93) and the right-hand side considered as a tempered distribution (cf. Proposition 4.67). Moreover, if T∂j ∂k E is the operator as in part (e) of Theorem 4.96 (for the choice Θ := ∂j ∂k E), then for every f ∈ L2 (Rn ) we have ω 2 ω 2 δ n n jk f T∂j ∂k E f − Rj Rk f = − 2 2 n
in
L2 (Rn ).
(7.1.19)
222
CHAPTER 7. THE LAPLACIAN AND RELATED OPERATORS
Proof. Fix j, k ∈ {1, . . . , n} along with some ϕ ∈ S(Rn ). Since the Fourier transform is an isomorphism of S (Rn ), it suffices to show that (7.1.18) holds on the Fourier transform side. With this in mind, note that on the one hand, ω 2 ξ ξ ωn ξj n j k F Rk ϕ = − F Rj Rk ϕ = −i ϕ 3 in S (Rn ), (7.1.20) 2 |ξ| 2 |ξ|2 by a twofold application of (4.9.14). On the other hand, since the function ∂j ∂k E is as in (4.4.1) (here Example 4.68 is also used), we may invoke Proposition 4.67, (4.9.6), and (4.5.2), to conclude that the Fourier transform of the right-hand side of (7.1.18) is ω 2 δjk n ϕ 3 F P.V. (∂j ∂k E) + ϕ 3 − 2 n ω 2 1 δjk n ϕ 3 =− ϕ 3 F P.V. (∂j Φk ) + 2 ωn−1 n ω 2 ξ ξ δjk δjk n j k ϕ 3 ϕ 3 + =− − 2 |ξ|2 n n ω 2 ξ ξ n j k ϕ 3 in S (Rn ). (7.1.21) =− 2 |ξ|2 That (7.1.18) holds now follows from (7.1.20) and (7.1.21). Finally, (7.1.19) is a consequence of what we have proved so far, part (e) in Theorem 4.93, part (e) in Theorem 4.96, and the fact that S(Rn ) is dense in L2 (Rn ). Making use of the full force of Theorem 4.97 we obtain the following Lp version, 1 < p < ∞, of Proposition 7.5. Proposition 7.6. Let E be the fundamental solution for the Laplacian from (7.1.12) and recall the Riesz transforms in Rn (cf. Theorem 4.93). Also, fix p ∈ (1, ∞). Then for each j, k ∈ {1, . . . , n} and f ∈ Lp (Rn ) we have ω 2 n lim (∂j ∂k E)(x − y)f (y) dy Rj Rk f (x) = − 2 ε→0+ |x−y|>ε ω 2 δ n jk f (x) for a.e. x ∈ Rn , − (7.1.22) 2 n where the Riesz transforms are understood to be bounded operators on Lp (Rn ) (cf. (4.9.46)–(4.9.47)) while the singular integral operator on the right-hand side is also considered as a bounded mapping on Lp (Rn ) (cf. Theorem 4.97). In particular, if T∂j ∂k E is interpreted in the sense of Theorem 4.97 (for the choice Θ := ∂j ∂k E), then for every f ∈ Lp (Rn ) we have ω 2 ω 2 δ n n jk f in Lp (Rn ). T∂j ∂k E f − (7.1.23) Rj Rk f = − 2 2 n Proof. 
All claims are consequences of Proposition 7.5, Theorem 4.97, and the density of S(Rn ) in Lp (Rn ).
7.2. THE POISSON EQUATION AND LAYER...
7.2
223
The Poisson Equation and Layer Potential Representation Formulas
In this section we use the fundamental solutions for the Laplacian determined in the previous section in a number of applications. The first application concerns the Poisson equation3 for the Laplacian in Rn : Δu = f
in Rn .
(7.2.1)
In (7.2.1) the unknown is u and f is a given function. As a consequence of Remark 5.5, this equation is always solvable in D (Rn ) if f ∈ C0∞ (Rn ). We will see that in fact the solution obtained via the convolution of such an f with the fundamental solution for Δ from (7.1.12) is smooth and that we can solve the Poisson equation in D (Rn ) for a less restrictive class of functions f . The second application is an integral representation formula, involving layer potentials, for functions of class C 2 on a neighborhood of the closure of a domain of class C 1 . Proposition 7.7. Let E be the fundamental solution for the Laplacian from (7.1.12). Suppose f ∈ C0∞ (Rn ). Then the function u(x) := E(x − y)f (y) dy, ∀ x ∈ Rn , (7.2.2) Rn
satisfies u ∈ C ∞ (Rn ) and is a classical solution of the Poisson equation (7.2.1) for the Laplacian in Rn . Proof. Fix f ∈ C0∞ (Rn ) and let E be as in (7.1.12). Define u := E ∗ f in D (Rn ). Then Remark 5.5 implies that u is a solution of the equation Δu = f in D (Rn ). In addition, by Proposition 2.93 and the fact that E ∈ L1loc (Rn ), we have u ∈ C ∞ (Rn ) and u(x) = E(x − y), f (y) = E(x − y)f (y) dy, ∀ x ∈ Rn , (7.2.3) Rn
or even more ⎧ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ u(x) = ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩
explicitly (based on the expressions in (7.1.12)), that −1 f (y) dy, if n ≥ 3, (n − 2)ωn−1 Rn |x − y|n−2 1 ∀ x ∈ Rn . ln |x − y|f (y) dy, if n = 2, 2π R2 (x − y)H(x − y)f (y) dy, if n = 1,
(7.2.4)
R
Furthermore, based on Proposition 2.34, we may conclude that Δu = f pointwise in Rn . This proves that u is a solution of (7.7) as desired. 3 In 1813, the French mathematician, geometer, and physicist Sim´ eon Denis Poisson (1781– 1 1840) proved that Δ(E ∗ f ) = f in dimension n = 3 (cf. [55]), where E(x) = − 4π|x| ,
x ∈ R3 \ {0}, and f ∈ C0∞ (R3 ).
224
CHAPTER 7. THE LAPLACIAN AND RELATED OPERATORS
An inspection of the right-hand side of (7.2.2) reveals that this expression continues to be meaningful under weaker assumptions on f . This observation may be used to prove that (7.7) is solvable in the class of distributions for f belonging to a larger class of functions than C0∞ (Rn ). Proposition 7.8. Let E be the fundamental solution for the Laplacian from (7.1.12). Suppose Ω is an open set in Rn , n ≥ 2, and f ∈ L∞ (Ω) vanishes outside a bounded measurable subset of Ω. Then the function E(x − y)f (y) dy, ∀ x ∈ Ω, (7.2.5) u(x) := Ω
is a distributional solution of the Poisson equation for the Laplacian in Ω, i.e., Δu = f in D (Ω). In addition, u ∈ C 1 (Ω) and for each j ∈ {1, . . . , n}, ∀ x ∈ Ω. (7.2.6) ∂j u(x) = (∂j E)(x − y)f (y) dy, Ω
Proof. Suppose first that n ≥ 3. Note that E ∈ C ∞ (Rn ) and is positive homogeneous of degree 2 − n. Also the extension of f by zero outside Ω, which n we continue to denote by f , satisfies f ∈ L∞ comp (R ). From Theorem 4.101 it 1 follows that u ∈ C (Ω) and (7.2.6) holds for each j ∈ {1, . . . , n}. If n = 2, a reasoning based on Vitali’s theorem, similar to the one used in the proof of Theorem 4.101, yields (∂j E) ∗ f ∈ C 0 (Ω) for each j ∈ {1, . . . , n}. Also, the computation in (4.10.33) may be adapted to give that the distributional derivative ∂j u is equal to (∂j E)∗f for each j ∈ {1, . . . , n}. Hence, Theorem 2.102 applies and yields u ∈ C 1 (Ω), as desired. To finish the proof of the proposition, there remains to show that Δu = f in D (Ω). Since f ∈ L1loc (Ω) we have f ∈ D (Ω). Thus, for each ϕ ∈ C0∞ (Ω) we may write Δu, ϕ = u, Δϕ = u(x)Δϕ(x) dx = E(x − y)f (y) dy Δϕ(x) dx = Ω
=
Ω
=
Ω
Ω
f (y) E(x − y)Δϕ(x) dx dy Ω
f (y)
E(x)Δϕ(x + y) dx dy
Ω−y
$ % f (y) E, Δϕ(· + y) dy =
Ω
Ω
$ % f (y) ΔE, ϕ(· + y) dy
Ω
f (y)ϕ(y) dy = f, ϕ,
f (y)δ, ϕ(· + y) dy =
=
Ω
(7.2.7)
Ω
where for each y ∈ Ω we set Ω − y := {x − y : x ∈ Ω}. Note that for each y ∈ Ω fixed one has ϕ(·+ y) ∈ C0∞ (Ω− y), 0 ∈ Ω− y, and E ∈ D (Ω− y), thus the sixth equality in (7.2.7) is justified. The eight equality in (7.2.7) is a consequence of the fact that E is a fundamental solution for the Laplacian and that 0 ∈ Ω − y. This completes the proof of the proposition.
7.2. THE POISSON EQUATION AND LAYER...
225
Exercise 7.9. Assume the hypothesis of Proposition 7.8 and recall the fundamental solution E for the Laplacian from (7.1.12). By using Vitali’s
Theorem 13.21, show that u(x) = Ω E(· − y)f (y)dy is differentiable in Ω (without making use of Theorem 2.102). As a by-product, show that, for each j ∈ {1, . . . , n}, ∂j E(x − y)f (y) dy, ∀ x ∈ Ω. (7.2.8) ∂j u(x) = Ω
We next establish mean value formulas for harmonic functions. A clarification of the terminology employed is in order. Traditionally, the name harmonic function in an open set $\Omega \subseteq \mathbb{R}^n$ has been used for functions $u \in C^2(\Omega)$ with the property that $\sum_{j=1}^n \partial_j^2 u = 0$ in a pointwise sense, everywhere in $\Omega$. Such a function is called a classical solution of the equation $\Delta u = 0$ in $\Omega$.

Theorem 7.10. For an open set $\Omega \subseteq \mathbb{R}^n$ the following are equivalent:
(i) $u$ is harmonic in $\Omega$;
(ii) $u \in \mathcal{D}'(\Omega)$ and $\Delta u = 0$ in $\mathcal{D}'(\Omega)$;
(iii) $u \in C^\infty(\Omega)$ and $\Delta u = 0$ in a pointwise sense in $\Omega$.

Proof. This is a direct consequence of Theorem 6.8 and Theorem 7.2 (or, alternatively, of Remark 6.14 and Corollary 6.18).

Recall the symbol for integral averages from (0.0.14).

Theorem 7.11. Let $\Omega$ be an open set in $\mathbb{R}^n$ and let $u$ be a harmonic function in $\Omega$. Then for every $x \in \Omega$ and every $r \in \big(0,\mathrm{dist}(x,\partial\Omega)\big)$ we have
$$ u(x) = \fint_{B(x,r)} u(y)\,dy \quad\text{and}\quad u(x) = \fint_{\partial B(x,r)} u(y)\,d\sigma(y). \qquad (7.2.9) $$
Proof. From Theorem 7.10 we know that $u \in C^\infty(\Omega)$ and $\Delta u = 0$ in a pointwise sense in $\Omega$. Fix $x \in \Omega$ and define the function $\phi : \big(0,\mathrm{dist}(x,\partial\Omega)\big) \to \mathbb{R}$,
$$ \phi(r) := \fint_{\partial B(x,r)} u(y)\,d\sigma(y), \qquad \forall\, r \in \big(0,\mathrm{dist}(x,\partial\Omega)\big). \qquad (7.2.10) $$
A change of variables gives that $\phi(r) = \frac{1}{\omega_{n-1}} \int_{S^{n-1}} u(x+r\omega)\,d\sigma(\omega)$. Taking the derivative of $\phi$ and then using the integration by parts formula from Theorem 13.41 gives that
$$ \phi'(r) = \frac{1}{\omega_{n-1}} \int_{S^{n-1}} (\nabla u)(x+r\omega)\cdot\omega\,d\sigma(\omega) = \frac{r}{\omega_{n-1}} \int_{B(0,1)} (\Delta u)(x+ry)\,dy = \frac{r}{n} \fint_{B(x,r)} \Delta u(z)\,dz. \qquad (7.2.11) $$
CHAPTER 7. THE LAPLACIAN AND RELATED OPERATORS
Since $\Delta u = 0$ in $\Omega$ we have $\phi'(r) = 0$, thus $\phi(r) = C$ for some constant $C$. We claim that $\lim_{r\to0^+}\phi(r) = u(x)$; this claim implies $C = u(x)$, and the first formula in (7.2.9) follows. To see the claim, fix $\varepsilon > 0$ and, by the continuity of $u$ at $x$, find $\delta \in \big(0,\mathrm{dist}(x,\partial\Omega)\big)$ such that $|u(y)-u(x)| < \varepsilon$ if $|y-x| < \delta$. Consequently,
$$ r \in (0,\delta) \implies \big|\phi(r) - u(x)\big| = \Big| \fint_{\partial B(x,r)} [u(y)-u(x)]\,d\sigma(y) \Big| \leq \varepsilon, \qquad (7.2.12) $$
as desired.

With the help of (13.5.6) and (13.8.5), the first formula in (7.2.9) may be rewritten as
$$ \frac{\omega_{n-1} r^n}{n}\,u(x) = \int_{B(x,r)} u(y)\,dy = \int_0^r \Big( \int_{\partial B(x,\rho)} u(\omega)\,d\sigma(\omega) \Big) d\rho \qquad (7.2.13) $$
for $r \in \big(0,\mathrm{dist}(x,\partial\Omega)\big)$. Differentiating the left- and rightmost terms in (7.2.13) with respect to $r$ and then dividing by $\omega_{n-1} r^{n-1}$ gives the second formula in (7.2.9).

Interior estimates for harmonic functions, as in (6.3.30), may be obtained as a particular case of Theorem 6.26 by observing that the fundamental solution for the Laplacian from (7.1.12) satisfies (6.3.29) (with $m = 2$). Below we take a slightly more direct route which also yields explicit constants.

Theorem 7.12 (Interior estimates for the Laplacian). Suppose $u$ is harmonic in $\Omega$. Then for each $x \in \Omega$, each $r \in \big(0,\mathrm{dist}(x,\partial\Omega)\big)$, and each $k \in \mathbb{N}$, we have
$$ \max_{y\in B(x,r/2)} |\partial^\alpha u(y)| \leq \frac{(2n)^k e^{k-1} k!}{r^k} \max_{y\in B(x,r)} |u(y)|, \qquad \forall\,\alpha \in \mathbb{N}_0^n \text{ with } |\alpha| = k. \qquad (7.2.14) $$
Proof. Let $j \in \{1,\dots,n\}$ be arbitrary and note that, by Theorem 7.10, we have $u \in C^\infty(\Omega)$ and hence $\partial_j u$ is harmonic in $\Omega$. Thus, by (7.2.9) and the integration by parts formula (13.7.4), for each $x \in \Omega$ and each $r \in \big(0,\mathrm{dist}(x,\partial\Omega)\big)$, we may write
$$ |\partial_j u(x)| = \Big| \frac{n}{\omega_{n-1} r^n} \int_{B(x,r)} \partial_j u(y)\,dy \Big| = \Big| \frac{n}{\omega_{n-1} r^n} \int_{\partial B(x,r)} \frac{y_j - x_j}{r}\,u(y)\,d\sigma(y) \Big| \leq \frac{n}{r} \max_{y\in B(x,r)} |u(y)|. \qquad (7.2.15) $$
With this in hand, Lemma 6.21 applies (with A the class of harmonic functions in Ω and C := n) and yields (7.2.14).
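The mean value formula (7.2.9) and the gradient bound in (7.2.15) lend themselves to a quick numerical sanity check. The sketch below (assuming numpy; the harmonic polynomial, center, and radius are illustrative choices, not taken from the text) averages $u(x,y) = x^3 - 3xy^2 = \mathrm{Re}\,(x+iy)^3$ over a circle:

```python
import numpy as np

def u(x, y):                      # Re((x + iy)^3), harmonic in R^2
    return x**3 - 3 * x * y**2

cx, cy, r = 0.3, 0.2, 0.5         # illustrative center and radius
theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
bx, by = cx + r * np.cos(theta), cy + r * np.sin(theta)
boundary_vals = u(bx, by)

# (7.2.9): the average over the circle reproduces the center value
assert abs(boundary_vals.mean() - u(cx, cy)) < 1e-12

# (7.2.15) with n = 2: |d_x u(center)| <= (n/r) max_B |u|; since |u| is
# subharmonic, its maximum over the closed ball is attained on the circle.
du_dx = 3 * cx**2 - 3 * cy**2
assert abs(du_dx) <= (2.0 / r) * np.abs(boundary_vals).max() + 1e-12
```

Because the integrand is a trigonometric polynomial, the equally spaced average is exact up to rounding, which is why so tight a tolerance can be used.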
Theorem 7.13. Any harmonic function in $\Omega$ is real-analytic in $\Omega$.

Proof. This is an immediate consequence of Theorem 7.12 and Lemma 6.24 (or, alternatively, Theorem 6.26).

Corollary 7.14. Suppose $u$ is harmonic in an open, connected set $\Omega \subseteq \mathbb{R}^n$ with the property that there exists $x_0 \in \Omega$ such that $\partial^\alpha u(x_0) = 0$ for all $\alpha \in \mathbb{N}_0^n$. Then $u$ vanishes identically in $\Omega$. As a consequence, if a harmonic function defined in an open connected set $\Omega \subseteq \mathbb{R}^n$ vanishes on an open subset of $\Omega$, then it vanishes identically in $\Omega$.

Proof. This is an immediate consequence of Theorem 6.25 and Theorem 7.13.

Theorem 7.15 (Liouville's theorem for the Laplacian). If $u$ is a bounded harmonic function in $\mathbb{R}^n$ then there exists $c \in \mathbb{C}$ such that $u = c$ in $\mathbb{R}^n$.

Proof. This may be justified in several ways. For example, it suffices to note that this is a particular case of Theorem 5.3. Another proof may be given based on interior estimates. Specifically, let $j \in \{1,\dots,n\}$ and, using (7.2.15), for each $x \in \mathbb{R}^n$ write
$$ \lim_{r\to\infty} |\partial_j u(x)| \leq \lim_{r\to\infty} \frac{n}{r}\,\|u\|_{L^\infty(\mathbb{R}^n)} = 0. \qquad (7.2.16) $$
Hence $\nabla u = 0$, proving that $u$ is locally constant in $\mathbb{R}^n$. Since $\mathbb{R}^n$ is connected, the desired conclusion follows.

All ingredients are now in place for proving the following basic well-posedness result for the Poisson problem for the Laplacian in $\mathbb{R}^n$.

Theorem 7.16. Assume $n \geq 3$. Then for each $f \in L^\infty_{\rm comp}(\mathbb{R}^n)$ and each $c \in \mathbb{C}$ the Poisson problem for the Laplacian in $\mathbb{R}^n$,
$$ \begin{cases} u \in C^0(\mathbb{R}^n), \\ \Delta u = f \ \text{in } \mathcal{D}'(\mathbb{R}^n), \\ \lim\limits_{|x|\to\infty} u(x) = c, \end{cases} \qquad (7.2.17) $$
has a unique solution. Moreover, the solution $u$ satisfies the following additional properties.

(1) The function $u$ belongs to $C^1(\mathbb{R}^n)$ and has the integral representation formula
$$ u(x) = c + \int_{\mathbb{R}^n} E(x-y)f(y)\,dy, \qquad x \in \mathbb{R}^n, \qquad (7.2.18) $$
where $E$ is the fundamental solution for the Laplacian in $\mathbb{R}^n$ from (7.1.12).

(2) If, in fact, $f \in C_0^\infty(\mathbb{R}^n)$, then actually $u \in C^\infty(\mathbb{R}^n)$.
(3) For every $j,k \in \{1,\dots,n\}$, we have
$$ \partial_j\partial_k u = -\frac{2^2}{\omega_n^2}\,R_j R_k f \quad \text{in } \mathcal{D}'(\mathbb{R}^n), \qquad (7.2.19) $$
where $R_j$ and $R_k$ are the ($j$th and $k$th) Riesz transforms in $\mathbb{R}^n$.

(4) For every $p \in (1,\infty)$, the solution $u$ of (7.2.17) satisfies $\partial_j\partial_k u \in L^p(\mathbb{R}^n)$ for each $j,k \in \{1,\dots,n\}$, where the derivatives are taken in $\mathcal{D}'(\mathbb{R}^n)$, and there exists a constant $C = C(p,n) \in (0,\infty)$ with the property that
$$ \sum_{j,k=1}^n \|\partial_j\partial_k u\|_{L^p(\mathbb{R}^n)} \leq C\,\|f\|_{L^p(\mathbb{R}^n)}. \qquad (7.2.20) $$
Proof. The fact that the function $u$ defined as in (7.2.18) is of class $C^1(\mathbb{R}^n)$ and solves $\Delta u = f$ in $\mathcal{D}'(\mathbb{R}^n)$ has been established in Proposition 7.8. To proceed, let $R \in (0,\infty)$ be such that $\mathrm{supp}\,f \subset B(0,R)$, and note that if $|x| \geq 2R$, then for every $y \in B(0,R)$ we have $|x-y| \geq |x| - |y| \geq R \geq |y|$. Hence, using (7.1.12) (recall that we are assuming $n \geq 3$) and Lebesgue's dominated convergence theorem, we obtain
$$ \Big| \int_{\mathbb{R}^n} E(x-y)f(y)\,dy \Big| \leq C\,\|f\|_{L^\infty(\mathbb{R}^n)} \int_{B(0,R)} \frac{dy}{|x-y|^{n-2}} \longrightarrow 0 \quad \text{as } |x|\to\infty. \qquad (7.2.21) $$
It is then clear from (7.2.21) that the function $u$ from (7.2.18) also satisfies the limit condition in (7.2.17). That (7.2.18) is the unique solution of (7.2.17) follows from linearity and Theorem 7.15. Next, that $u \in C^\infty(\mathbb{R}^n)$ if $f \in C_0^\infty(\mathbb{R}^n)$ is a consequence of Proposition 7.7 (or, alternatively, of Corollary 6.18 and the ellipticity of the Laplacian). This proves (1)–(2).

As regards (3), start by fixing $j,k \in \{1,\dots,n\}$. Then from (7.2.6), Theorem 4.99, (13.8.44), and Proposition 7.6, we deduce that
$$ \begin{aligned}
(\partial_j\partial_k u)(x) &= \partial_j\big(\partial_k u(x)\big) = \partial_j \int_{\mathbb{R}^n} (\partial_k E)(x-y)f(y)\,dy \\
&= \Big( \frac{1}{\omega_{n-1}} \int_{S^{n-1}} \omega_k\,\omega_j\,d\sigma(\omega) \Big) f(x) + \lim_{\varepsilon\to0^+} \int_{|x-y|>\varepsilon} (\partial_j\partial_k E)(x-y)f(y)\,dy \\
&= \frac{\delta_{jk}}{n}\,f(x) + \lim_{\varepsilon\to0^+} \int_{|x-y|>\varepsilon} (\partial_j\partial_k E)(x-y)f(y)\,dy \\
&= -\frac{2^2}{\omega_n^2}\,R_j R_k f(x) \quad \text{in } \mathcal{D}'(\mathbb{R}^n).
\end{aligned} \qquad (7.2.22) $$
Finally, the claim in (4) follows from (3) and the boundedness of the Riesz transforms on Lp (Rn ) (cf. (4.9.46)).
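As an independent numerical check of the representation formula (7.2.18) and of the decay computation (7.2.21), one may take $f = \mathbf{1}_{B(0,1)}$ in $\mathbb{R}^3$: outside the support, Newton's theorem predicts $(E*f)(x) = -\mathrm{vol}\,B(0,1)/(4\pi|x|)$, which equals $-1/6$ at $|x| = 2$. The following sketch (assuming numpy; the grid sizes and evaluation point are illustrative) approximates the volume potential by midpoint quadrature in spherical coordinates:

```python
import numpy as np

nr, nt, nphi = 60, 60, 60
r = (np.arange(nr) + 0.5) / nr                      # radii in (0, 1)
t = (np.arange(nt) + 0.5) * np.pi / nt              # polar angles
p = (np.arange(nphi) + 0.5) * 2 * np.pi / nphi      # azimuthal angles
R, T, P = np.meshgrid(r, t, p, indexing='ij')
# volume element r^2 sin(theta) dr dtheta dphi, midpoint weights
w = R**2 * np.sin(T) * (1.0/nr) * (np.pi/nt) * (2*np.pi/nphi)

x0 = np.array([0.0, 0.0, 2.0])                      # point outside supp f
Y = np.stack([R*np.sin(T)*np.cos(P), R*np.sin(T)*np.sin(P), R*np.cos(T)], axis=-1)
dist = np.linalg.norm(Y - x0, axis=-1)
u0 = np.sum(-1.0 / (4*np.pi*dist) * w)              # (E * f)(x0) with f = 1_{B(0,1)}

assert abs(u0 - (-1.0/6.0)) < 1e-3                  # Newton: -vol(B)/(4 pi |x0|) = -1/6
```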
In the last part of this section, we prove an integral representation formula involving layer potentials (a piece of terminology we elaborate on a little later) for arbitrary (as opposed to harmonic) functions $u \in C^2(\overline{\Omega})$, where $\Omega$ is a bounded domain of class $C^1$ in $\mathbb{R}^n$ (as in Definition 13.40). To state this, recall the definition of the normal derivative from (7.1.14).

Proposition 7.17. Suppose $n \geq 2$, let $\Omega \subset \mathbb{R}^n$ be a bounded domain of class $C^1$, and let $\nu$ denote its outward unit normal. Also let $E$ be the fundamental solution for the Laplacian from (7.1.12). If $u \in C^2(\overline{\Omega})$, then
$$ \int_\Omega \Delta u(y)\,E(x-y)\,dy - \int_{\partial\Omega} E(x-y)\,\frac{\partial u}{\partial\nu}(y)\,d\sigma(y) - \int_{\partial\Omega} [\nu(y)\cdot(\nabla E)(x-y)]\,u(y)\,d\sigma(y) = \begin{cases} u(x), & x \in \Omega, \\ 0, & x \in \mathbb{R}^n\setminus\overline{\Omega}. \end{cases} \qquad (7.2.23) $$

Several strategies may be employed to prove Proposition 7.17; here we choose an approach that highlights the role of distribution theory. Another approach (based on isolating the singularity in the fundamental solution and a limiting argument), more akin to the proof of (7.1.15), is going to be used in the proof of Theorem 7.57 (presented later), where a result of similar flavor to (7.2.23) is established for more general differential operators than the Laplacian.

Proof of Proposition 7.17. In this proof, for a function $v$ defined in $\Omega$, we let $\widetilde{v}$ denote the extension of $v$ by zero outside $\Omega$. Fix some function $u \in C^2(\overline{\Omega})$ and observe that $\widetilde{u} \in L^1_{\rm loc}(\mathbb{R}^n) \subset \mathcal{D}'(\mathbb{R}^n)$. In addition, for each $\varphi \in C_0^\infty(\mathbb{R}^n)$ one has
$$ \langle \Delta\widetilde{u}, \varphi\rangle = \langle \widetilde{u}, \Delta\varphi\rangle = \int_\Omega u\,\Delta\varphi\,dx = \int_\Omega \varphi\,\Delta u\,dx + \int_{\partial\Omega} \Big( u\,\frac{\partial\varphi}{\partial\nu} - \varphi\,\frac{\partial u}{\partial\nu} \Big)\,d\sigma, \qquad (7.2.24) $$
where the last equality in (7.2.24) is based on (13.7.5).

For $a \in C^0(\partial\Omega)$ define the mappings $a\delta_{\partial\Omega},\ \frac{\partial}{\partial\nu}(a\delta_{\partial\Omega}) : C_0^\infty(\mathbb{R}^n) \to \mathbb{C}$ by setting
$$ \langle a\delta_{\partial\Omega}, \varphi\rangle := \int_{\partial\Omega} a(x)\varphi(x)\,d\sigma(x), \qquad \forall\,\varphi \in C_0^\infty(\mathbb{R}^n), \qquad (7.2.25) $$
and
$$ \Big\langle \frac{\partial}{\partial\nu}(a\delta_{\partial\Omega}), \varphi \Big\rangle := -\int_{\partial\Omega} a(x)\,\frac{\partial\varphi}{\partial\nu}(x)\,d\sigma(x), \qquad \forall\,\varphi \in C_0^\infty(\mathbb{R}^n). \qquad (7.2.26) $$
By Exercise 2.128 corresponding to $\Sigma := \partial\Omega$ we have $a\delta_{\partial\Omega} \in \mathcal{D}'(\mathbb{R}^n)$. By a similar reasoning, one also has $\frac{\partial}{\partial\nu}(a\delta_{\partial\Omega}) \in \mathcal{D}'(\mathbb{R}^n)$. In addition, it is easy to see that the supports of the distributions in (7.2.25) and (7.2.26) are contained in $\partial\Omega$, hence $a\delta_{\partial\Omega},\ \frac{\partial}{\partial\nu}(a\delta_{\partial\Omega}) \in \mathcal{E}'(\mathbb{R}^n)$. In light of these definitions, (7.2.24) is equivalent with
$$ \Delta\widetilde{u} = \widetilde{\Delta u} - \frac{\partial u}{\partial\nu}\,\delta_{\partial\Omega} - \frac{\partial}{\partial\nu}\big(u\,\delta_{\partial\Omega}\big) \quad \text{in } \mathcal{D}'(\mathbb{R}^n). \qquad (7.2.27) $$
To proceed with identifying the restrictions to Rn \ ∂Ω of the other two convolutions in the right-hand side of (7.2.28), let ϕ ∈ C0∞ (Rn \ ∂Ω) and set K := supp ϕ. Then, the set MK := (x, y) ∈ Rn × ∂Ω : x + y ∈ K is compact in R2n and if we take ψ ∈ C0∞ (R2n ) such that ψ = 1 in a neighborhood of MK , we have ' " !& ∂u " ! ∂u (7.2.30) δ∂Ω ∗ E, ϕ = δ∂Ω (y), E(x), ψ(x, y)ϕ(x + y) ∂ν ∂ν ! ∂u " = δ∂Ω (y), E(x)ψ(x, y)ϕ(x + y) dx . ∂ν Rn Note that via a change of variables we have E(x)ψ(x, y)ϕ(x + y) dx = E(z − y)ψ(z − y, y)ϕ(z) dz, Rn
Rn
∀ y ∈ Rn . (7.2.31)
7.2. THE POISSON EQUATION AND LAYER... If we now consider the function h(y) := E(z − y)ψ(z − y, y)ϕ(z) dz, Rn
231
∀ y ∈ Rn ,
(7.2.32)
then h is well-defined. Moreover, since derivatives of order 2 or higher of E are not integrable near the origin, it follows that h does not belong to C ∞ (Rn ), hence (7.2.31) may not be used to rewrite the last term in (7.2.30). To fix this drawback, we impose an additional restriction on ψ. Specifically, since ∂Ω∩K = ∅, there exists ε > 0 sufficiently small such that (∂Ω+B(0, ε))∩(K+B(0, ε)) = ∅. Then, if U , the neighborhood of MK where ψ = 1, is such that whenever (x, y) ∈ U we have y ∈ ∂Ω + B(0, ε/2), we may further require that ψ(·, y) = 0 for y ∈ K + B(0, ε/2). Under this requirement, if z ∈ K = supp ϕ and y ∈ Rn is such that |z − y| < ε/2, then ψ(z − y, y) = 0. This ensures that derivatives in y of any order of the function E(z − y)ψ(z − y, y)ϕ(z) are integrable in z over Rn , thus h ∈ C ∞ (Rn ). Now (7.2.31) may be used in the last term in (7.2.30) to obtain ! ∂u " ! ∂u " δ∂Ω ∗E, ϕ = δ∂Ω (y), E(z−y)ψ(z−y, y)ϕ(z) dz . (7.2.33) ∂ν ∂ν Rn Since
∂u ∂ν
∈ C 0 (∂Ω), by (7.2.25) one has
" δ∂Ω (y), E(x − y)ψ(x − y, y)ϕ(x) dx ∂ν Rn ∂u (y) ϕ(x)E(x − y)ψ(x − y, y) dx dσ(y) = ∂Ω ∂ν Rn ∂u (y)E(x − y) dσ(y) ϕ(x) dx. (7.2.34) = Rn ∂Ω ∂ν ∂u Combining (7.2.33) and (7.2.34) it follows that ∂ν δ∂Ω ∗ E Rn \∂Ω is of function type and ' & ∂u ∂u (y)E(x − y) dσ(y), ∀ x ∈ Rn \ ∂Ω. δ∂Ω ∗ E (x) = ∂ν ∂ν ∂Ω (7.2.35) Similarly, with ϕ and ψ as previously specified, since u ∈ C 0 (∂Ω), definition (7.2.26) applies and we obtain 9 8 9 8 ∂ ∂ (uδ∂Ω ) ∗ E, ϕ = (uδ∂Ω )(y), E(x)ψ(x, y)ϕ(x + y) dx ∂ν ∂ν Rn 8 9 ∂ = (uδ∂Ω )(y), E(x − y)ψ(x − y, y)ϕ(x) dx ∂ν Rn ' & ∂ =− (E(x − y)) dx dσ(y) u(y) ϕ(x) ν(y) · ∂y ∂Ω Rn ! ∂u
CHAPTER 7. THE LAPLACIAN AND RELATED OPERATORS
232
= Hence
∂ ∂ν (uδ∂Ω )
u(y)[ν(y) · (∇E)(x − y)] dσ(y) dx.
ϕ(x) Rn
(7.2.36)
∂Ω
∗E
Rn \∂Ω
is of function type and
∂ (uδ∂Ω ) ∗ E (x) ∂ν = u(y)[ν(y) · (∇E)(x − y)] dσ(y),
(7.2.37) ∀ x ∈ Rn \ ∂Ω.
∂Ω
Combining (7.2.28), earlier comments about u Rn \∂Ω , (7.2.29), (7.2.35), and (7.2.37), we may conclude that ∂u E(x − y)Δu(y) dy − E(x − y) (y) dσ(y) u (x) = ∂ν Ω ∂Ω − [ν(y) · (∇E)(x − y)]u(y) dσ(y), ∀ x ∈ Rn \ ∂Ω, (7.2.38) ∂Ω
proving (7.2.23). As the observant reader has perhaps noticed, the solid integral appearing in (7.2.23) is simply the volume (or Newtonian) potential defined in (4.10.1) acting on f := Δu. Starting from this observation, formula (7.2.23) also suggests the consideration of two other types of integral operators associated with a given bounded domain Ω ⊂ Rn . Specifically, for each given complex-valued function ϕ ∈ C 0 (∂Ω) set Dϕ (x) := − ν(y) · (∇E)(x − y) ϕ(y) dσ(y) ∂Ω
ν(y) · (y − x) ϕ(y) dσ(y), x ∈ Rn \ ∂Ω, ωn−1 ∂Ω |x − y|n
and S ϕ (x) := ∂Ω E(x − y)ϕ(y) dσ(y), for x ∈ Rn \ ∂Ω. Thus,
⎧ −1 1 ⎨ (n−2)ωn−1 ∂Ω |x−y|n−2 ϕ(y) dσ(y) if n ≥ 3, S ϕ (x) = ⎩ 1
if n = 2. 2π ∂Ω ln |y − x|ϕ(y) dσ(y) =
1
(7.2.39)
(7.2.40)
The operators D and S are called, respectively, the double- and single-layer potentials for the Laplacian. In this notation, (7.2.23) reads ∂u u(x), x ∈ Ω, (7.2.41) (x) + D u ∂Ω (x) = NΩ (Δu) (x) − S ∂ν 0, x ∈ Rn \ Ω, for every u ∈ C 2 (Ω ). In particular, this formula shows that we may recover any function u ∈ C 2 (Ω ) knowing Δu in Ω as well as ∂u ∂ν and u on ∂Ω.
7.3. FUNDAMENTAL SOLUTIONS FOR THE BI-LAPLACIAN
233
The above layer potential operators play a crucial role in the treatment of boundary value problems. In this vein, we invite the reader to check that the version of the double-layer operator (7.2.39) corresponding to the case when Ω = Rn+ is precisely twice the operator P from (4.8.3), the relevance of which, in the context of boundary value problems for the Laplacian, has been highlighted in (4.8.17)–(4.8.18).
7.3
Fundamental Solutions for the Bi-Laplacian
The bi-Laplacian in Rn is the operator Δ2 = ΔΔ =
n j=1
2 ∂j2 , sometimes also
referred to as the bi-harmonic operator. Functions u of class C 4 in an open set Ω of Rn satisfying Δ2 u = 0 pointwise in Ω are called bi-harmonic in Ω. Theorem 5.13 guarantees the existence of E ∈ S (Rn ) such that Δ2 E = δ in S (Rn ). The goal in this section is to determine all such fundamental solution for Δ2 . In the case when n ≥ 4, these may be computed by following the approach from Sect. 7.1 (corresponding to n ≥ 2 there). Such a line of reasoning will then require treating the cases n = 1, 2, 3 via another method. In what follows, we will proceed differently by employing the fact that we have a complete description of all fundamental solutions in S (Rn ) for the Laplacian and that Δ2 = ΔΔ. This latter approach, will take care of the cases n ≥ 2, leaving n = 1 to be treated separately. Fix E ∈ S (Rn ) satisfying Δ2 E = δ in D (Rn ). By (4.1.25) we have Δ2 E = δ in S (Rn ). Keeping in mind that Δ2 = ΔΔ, it is natural to consider the following equation: (7.3.1) ΔE = EΔ in S (Rn ), where EΔ is the fundamental solution for the Laplacian from (7.1.12). Note that any E as in (7.3.1) is a fundamental solution for Δ2 . We proceed by analyzing three cases. 1 |x|2−n for Case n ≥ 3. Under the current assumptions, EΔ (x) = − (n−2)ω n−1 each x ∈ Rn \ {0}. The key observation is that the following identity holds Δ |x|m = m(m + n − 2)|x|m−2 , pointwise in Rn \ {0}, ∀ m ∈ R. (7.3.2)
This suggests investigating the validity of a similar identity when derivatives are taken in the distributional sense. As seen in the next lemma, the proof of which we postpone for later, a version of identity (7.3.2) holds in S (Rn ) for a suitable range of exponents that depends on the dimension n. Lemma 7.18. Let N be a real number such that N > 2 − n. Then Δ |x|N = N (N + n − 2)|x|N −2 in S (Rn ).
(7.3.3)
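Before putting Lemma 7.18 to work, the pointwise identity (7.3.2), along with the pointwise version of Lemma 7.19 stated below, can be corroborated symbolically. The following sketch (assuming the sympy library; the choices $n = 3$ and $m = 5$ are illustrative) verifies both away from the origin:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)                  # n = 3

def lap(f):
    return sum(sp.diff(f, v, 2) for v in (x, y, z))

m = 5                                            # illustrative exponent
# (7.3.2): Delta |x|^m = m (m + n - 2) |x|^{m-2}
assert sp.simplify(lap(r**m) - m * (m + 3 - 2) * r**(m - 2)) == 0

# pointwise version of Lemma 7.19 for n = 3: Delta ln|x| = (n - 2) |x|^{-2}
assert sp.simplify(lap(sp.log(r)) - (3 - 2) / r**2) == 0
```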
In view of Lemma 7.18 and the definition of $E_\Delta$, it is justified to look for a solution to (7.3.1) of the form $E(x) = c_n|x|^{4-n}$, $x \in \mathbb{R}^n\setminus\{0\}$, where $c_n$ is a constant to be determined. This candidate is in $\mathcal{S}'(\mathbb{R}^n)$ (recall Exercise 4.4) and satisfies (7.3.1) if $2c_n(4-n) = -\frac{1}{(n-2)\omega_{n-1}}$. Hence, we obtain $c_n = \frac{1}{2(n-2)(n-4)\omega_{n-1}}$ if $n \neq 4$. To handle the case $n = 4$, we use another result that we state next (for a proof see the last part of this section).

Lemma 7.19. Let $n \geq 3$. Then $\Delta\big(\ln|x|\big) = (n-2)\,|x|^{-2}$ in $\mathcal{S}'(\mathbb{R}^n)$.

Lemma 7.19 suggests taking, when $n = 4$, the candidate $E(x) = c\ln|x|$, $x \in \mathbb{R}^n\setminus\{0\}$, where $c$ is a constant to be determined. From Example 4.8 we know that $E \in \mathcal{S}'(\mathbb{R}^n)$. Also, Lemma 7.19 and the expression of $E_\Delta$ corresponding to $n = 4$ imply that $E(x) = c\ln|x|$ satisfies (7.3.1) if $2c = -\frac{1}{2\omega_3}$. Since $\omega_3 = 2\pi^2$ (recall (13.5.6) and (13.5.3)), the latter implies $c = -\frac{1}{8\pi^2}$. In summary, we have proved that
$$ E(x) = \begin{cases} \dfrac{1}{2(n-2)(n-4)\omega_{n-1}}\,|x|^{4-n} & \text{for } n \geq 3,\ n \neq 4, \\[2mm] -\dfrac{1}{8\pi^2}\ln|x| & \text{for } n = 4, \end{cases} \qquad x \in \mathbb{R}^n\setminus\{0\}, $$
satisfies
$$ E \in \mathcal{S}'(\mathbb{R}^n) \quad\text{and}\quad \Delta^2 E = \delta \ \text{in } \mathcal{S}'(\mathbb{R}^n), \qquad n \geq 3. \qquad (7.3.4) $$

The treatment of the case $n = 2$ is different from the above considerations since, in such a setting, $E_\Delta(x) = \frac{1}{2\pi}\ln|x|$ for $x \in \mathbb{R}^2\setminus\{0\}$. Given the format of $E_\Delta$, for some insight on how to choose a candidate $E$ we start with the readily verified identity
$$ \Delta\big(|x|^m\ln|x|\big) = |x|^{m-2}\big[m(m+n-2)\ln|x| + 2m+n-2\big], \quad \text{pointwise in } \mathbb{R}^n\setminus\{0\}, \qquad \forall\, m \in \mathbb{R}. \qquad (7.3.5) $$
The function $|x|^a$ belongs to $\mathcal{L}(\mathbb{R}^n)$ only when $a \in \mathbb{N}_0$ is even, in which case $|x|^a\ln|x| \in \mathcal{S}'(\mathbb{R}^n)$ (by Example 4.8 and (b) in Theorem 4.13). Hence, these restrictions should be taken into account when considering the analogue of identity (7.3.5) in the distributional sense in $\mathcal{S}'(\mathbb{R}^n)$. The latter is stated next; its proof, which is very much in the spirit of the proofs of Lemmas 7.18 and 7.19 (here Exercise 4.104 is also relevant), is left as an exercise.

Lemma 7.20. Let $N$ be a real number such that $N > 2 - n$. Then
$$ \Delta\big(|x|^N\ln|x|\big) = |x|^{N-2}\big[N(N+n-2)\ln|x| + 2N+n-2\big] \quad \text{in } \mathcal{S}'(\mathbb{R}^n). \qquad (7.3.6) $$

We are ready to proceed in earnest with considering:

Case $n = 2$.
In view of Lemma 7.20, it is natural to start with a candidate of the form $E(x) := c|x|^2\ln|x|$, $x \in \mathbb{R}^2\setminus\{0\}$, with $c$ a constant yet to be determined. Lemma 7.20 applied with $N = 2 = n$ implies $\Delta E = 4c\ln|x| + 4c$ in $\mathcal{S}'(\mathbb{R}^2)$, hence (7.3.1) is satisfied for this $E$ provided $4c = \frac{1}{2\pi}$, or equivalently, if $c = \frac{1}{8\pi}$. The
bottom line is that, for this value of $c$, we have $E \in \mathcal{S}'(\mathbb{R}^2)$ and $\Delta E = E_\Delta + \frac{1}{2\pi}$ in $\mathcal{S}'(\mathbb{R}^2)$. Hence, since $\Delta\frac{1}{2\pi} = 0$ in $\mathcal{S}'(\mathbb{R}^2)$, we conclude that
$$ E(x) = \frac{1}{8\pi}\,|x|^2\ln|x| \quad \text{for } x \in \mathbb{R}^2\setminus\{0\}, \quad \text{satisfies} \quad E \in \mathcal{S}'(\mathbb{R}^2) \ \text{and} \ \Delta^2 E = \delta \ \text{in } \mathcal{S}'(\mathbb{R}^2). $$
Finally, we are left with: Case n = 1. In this situation we use Exercise 5.15 to conclude that any fundamental solution for Δ2 that is a tempered distribution has the form 16 x3 H + P for some polynomial P in R of degree less than or equal to 3. The main result emerging from this analysis is summarized next. Theorem 7.21. The function E ∈ L1loc (Rn ) defined as ⎧ 1 4−n ⎪ ⎪ if x ∈ Rn \ {0}, ⎪ ⎪ 2(n−2)(n − 4)ωn−1 |x| ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ − 1 ln |x| if x ∈ R4 \ {0}, 8π 2 E(x) := ⎪ ⎪ 1 ⎪ ⎪ ⎪ |x|2 ln |x| if x ∈ R2 \ {0}, ⎪ ⎪ 8π ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ 1 3 ⎩ x H if x ∈ R, 6
n ≥ 3, n = 4,
n = 4, (7.3.8) n = 2, n = 1,
belongs to S (Rn ) and is a fundamental solution for the operator Δ2 in Rn . Moreover, u ∈ S (Rn ) : Δ2 u = δ in S (Rn ) (7.3.9) = E + P : P bi-harmonic polynomial in Rn . Proof. It is clear from (7.3.8) that E ∈ L1loc (Rn ) ∩ S (Rn ). That Δ2 E = δ in S (Rn ) has been checked in (7.3.4), (7.3.7), and Exercise 5.15. Finally, (7.3.9) is a consequence of Proposition 5.7. We now turn to the task of proving Lemmas 7.18 and 7.19. Proof of Lemma 7.18. Fix N > 2 − n. Based on Exercise 4.4 we may conclude that |x|N −2 , |x|N ∈ S (Rn ). Thus, invoking (4.1.25), there remains to show that $ N % ∀ ϕ ∈ C0∞ (Rn ). (7.3.10) Δ |x| , ϕ = N (N + n − 2)|x|N −2 , ϕ, To this end, fix ϕ ∈ C0∞ (Rn ) arbitrary and let R ∈ (0, ∞) be such that supp ϕ ⊂ B(0, R). Then, starting with the definition of distributional derivatives, then using the support condition for ϕ and Lebesgue’s dominated convergence theorem, and then (13.7.5) [keeping in mind that supp ϕ ⊂ B(0, R)], we have
236
CHAPTER 7. THE LAPLACIAN AND RELATED OPERATORS
$ N % Δ |x| , ϕ = =
|x| Δϕ(x) dx = lim+
|x|N Δϕ(x) dx
N
Rn
ε→0
ϕ(x)Δ|x| dx −
∂ϕ (x) dσ(x) ∂ν
|x|N
N
lim
ε→0+
ε<|x|
ε<|x|
∂B(0,ε)
+
ϕ(x) ∂B(0,ε)
∂|x|N dσ(x) , ∂ν
(7.3.11)
where $\nu(x) = \frac{x}{\varepsilon}$ for $x \in \partial B(0,\varepsilon)$. By (7.3.2) and the Lebesgue dominated convergence theorem 13.12 it follows that
$$ \lim_{\varepsilon\to0^+} \int_{\varepsilon<|x|} \varphi(x)\,\Delta|x|^N\,dx = N(N+n-2)\int_{\mathbb{R}^n} |x|^{N-2}\varphi(x)\,dx = N(N+n-2)\,\big\langle |x|^{N-2}, \varphi \big\rangle. \qquad (7.3.12) $$
Since $\nabla(|x|^N) = N|x|^{N-2}x$ for every $x \neq 0$, we have $\frac{\partial|x|^N}{\partial\nu} = N\varepsilon^{N-1}$ on $\partial B(0,\varepsilon)$. Hence,
$$ \Big| -\int_{\partial B(0,\varepsilon)} |x|^N \frac{\partial\varphi}{\partial\nu}(x)\,d\sigma(x) + \int_{\partial B(0,\varepsilon)} \varphi(x)\,\frac{\partial|x|^N}{\partial\nu}\,d\sigma(x) \Big| \leq \|\nabla\varphi\|_{L^\infty(\mathbb{R}^n)}\,\varepsilon^{N+n-1} + N\|\varphi\|_{L^\infty(\mathbb{R}^n)}\,\varepsilon^{N+n-2} \xrightarrow[\varepsilon\to0^+]{} 0, \qquad (7.3.13) $$
given that $N > 2-n$. Now (7.3.10) follows from (7.3.11)–(7.3.13). The proof of the lemma is complete.

Proof of Lemma 7.19. From Exercise 4.4 and the assumption $n \geq 3$ it follows that $|x|^{-2} \in \mathcal{S}'(\mathbb{R}^n)$, while from Example 4.8 we have $\ln|x| \in \mathcal{S}'(\mathbb{R}^n)$. Hence, by (4.1.25) matters reduce to showing
$$ \big\langle \Delta(\ln|x|), \varphi \big\rangle = (n-2)\,\big\langle |x|^{-2}, \varphi \big\rangle, \qquad \forall\,\varphi \in C_0^\infty(\mathbb{R}^n). \qquad (7.3.14) $$
Fix $\varphi \in C_0^\infty(\mathbb{R}^n)$ and let $R \in (0,\infty)$ be such that $\mathrm{supp}\,\varphi \subset B(0,R)$. Using in the current setting a reasoning similar to that applied when deriving (7.3.11), one obtains
$$ \begin{aligned}
\big\langle \Delta(\ln|x|), \varphi \big\rangle &= \int_{\mathbb{R}^n} \ln|x|\,\Delta\varphi(x)\,dx = \lim_{\varepsilon\to0^+} \int_{\varepsilon<|x|} \ln|x|\,\Delta\varphi(x)\,dx \\
&= \lim_{\varepsilon\to0^+} \Big( \int_{\varepsilon<|x|} \varphi(x)\,\Delta(\ln|x|)\,dx - \int_{\partial B(0,\varepsilon)} \ln|x|\,\frac{\partial\varphi}{\partial\nu}(x)\,d\sigma(x) + \int_{\partial B(0,\varepsilon)} \varphi(x)\,\frac{\partial\ln|x|}{\partial\nu}\,d\sigma(x) \Big). \qquad (7.3.15)
\end{aligned} $$
It is easy to see that $\Delta(\ln|x|) = (n-2)|x|^{-2}$ pointwise in $\mathbb{R}^n\setminus\{0\}$, thus invoking the Lebesgue dominated convergence theorem 13.12 we obtain
$$ \lim_{\varepsilon\to0^+} \int_{\varepsilon<|x|} \varphi(x)\,\Delta(\ln|x|)\,dx = (n-2)\int_{\mathbb{R}^n} |x|^{-2}\varphi(x)\,dx = (n-2)\,\big\langle |x|^{-2}, \varphi \big\rangle. \qquad (7.3.16) $$
Also, since $\nabla(\ln|x|) = \frac{x}{|x|^2}$ pointwise in $\mathbb{R}^n\setminus\{0\}$, we have $\frac{\partial\ln|x|}{\partial\nu} = \varepsilon^{-1}$ on $\partial B(0,\varepsilon)$. Hence,
$$ \Big| -\int_{\partial B(0,\varepsilon)} \ln|x|\,\frac{\partial\varphi}{\partial\nu}(x)\,d\sigma(x) + \int_{\partial B(0,\varepsilon)} \varphi(x)\,\frac{\partial\ln|x|}{\partial\nu}\,d\sigma(x) \Big| \leq \|\nabla\varphi\|_{L^\infty(\mathbb{R}^n)}\,\varepsilon^{n-1}|\ln\varepsilon| + \|\varphi\|_{L^\infty(\mathbb{R}^n)}\,\varepsilon^{n-2} \xrightarrow[\varepsilon\to0^+]{} 0, \qquad (7.3.17) $$
where for the convergence in (7.3.17) we used the fact that $n \geq 3$. To finish the proof of the lemma we combine (7.3.15)–(7.3.17).

Exercise 7.22. Let $E$ be the fundamental solution for the bi-Laplacian from (7.3.8). Prove that for each $\alpha \in \mathbb{N}_0^n$ with $|\alpha| = 3$ the function $\partial^\alpha E$ is $C^\infty$ and positive homogeneous of degree $1-n$ in $\mathbb{R}^n\setminus\{0\}$.
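The outcome of the computations in this section can be spot-checked symbolically: for $n = 3$, the expression $E(x) = -|x|/(8\pi)$ from (7.3.8) should satisfy (7.3.1), i.e. $\Delta E = E_\Delta$ away from the origin, and hence $\Delta^2 E = 0$ there. A sketch assuming the sympy library:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)

E_bilap = -r / (8 * sp.pi)        # candidate from (7.3.8) with n = 3
E_lap = -1 / (4 * sp.pi * r)      # fundamental solution of the Laplacian, n = 3

def lap(f):
    return sum(sp.diff(f, v, 2) for v in (x, y, z))

assert sp.simplify(lap(E_bilap) - E_lap) == 0   # (7.3.1): Delta E = E_Delta away from 0
assert sp.simplify(lap(lap(E_bilap))) == 0      # hence Delta^2 E = 0 in R^3 \ {0}
```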
7.4 The Poisson Equation for the Bi-Laplacian
Analogously to Theorem 7.10 we have the following regularity result for the bi-Laplacian.

Theorem 7.23. For an open set $\Omega \subseteq \mathbb{R}^n$ the following are equivalent:
(i) $u$ is bi-harmonic in $\Omega$;
(ii) $u \in \mathcal{D}'(\Omega)$ and $\Delta^2 u = 0$ in $\mathcal{D}'(\Omega)$;
(iii) $u \in C^\infty(\Omega)$ and $\Delta^2 u = 0$ in a pointwise sense in $\Omega$.

Proof. This is a direct consequence of Theorem 6.8 and Theorem 7.21 (or, alternatively, of Remark 6.14 and Corollary 6.18).

We next establish mean value formulas for bi-harmonic functions. Recall the notation from (0.0.4).

Theorem 7.24. Let $\Omega$ be an open set in $\mathbb{R}^n$ and let $u$ be a bi-harmonic function in $\Omega$. Then for each $x \in \Omega$ and each $r \in \big(0,\mathrm{dist}(x,\partial\Omega)\big)$ the following formulas hold:
$$ u(x) = \fint_{\partial B(x,r)} u(y)\,d\sigma(y) - \frac{r^2}{2n}\,(\Delta u)(x), \qquad (7.4.1) $$
$$ u(x) = \fint_{B(x,r)} u(y)\,dy - \frac{r^2}{2(n+2)}\,(\Delta u)(x). \qquad (7.4.2) $$
Proof. Fix $x \in \Omega$ and recall the function $\phi$ defined in (7.2.10); then (7.2.11) holds. Since $\Delta(\Delta u) = 0$ in $\Omega$, the function $\Delta u$ is harmonic in $\Omega$ and we may apply (7.2.9) to obtain that $\phi'(r) = \frac{r}{n}\Delta u(x)$. Consequently, $\phi(r) = \frac{r^2}{2n}\Delta u(x) + C$ for some constant $C$. To determine $C$ we use the fact that $\lim_{r\to0^+}\phi(r) = u(x)$ (for the latter see (7.2.12)). Hence formula (7.4.1) follows. Next, write (7.4.1) as
$$ \omega_{n-1}r^{n-1}\,u(x) = \int_{\partial B(x,r)} u(y)\,d\sigma(y) - \frac{\omega_{n-1}r^{n+1}}{2n}\,\Delta u(x). \qquad (7.4.3) $$
Integrating (7.4.3) with respect to $r$ for $r \in (0,R)$, with $R \in \big(0,\mathrm{dist}(x,\partial\Omega)\big)$, and then dividing by $\omega_{n-1}R^n/n$, gives (7.4.2) with $r$ replaced by $R$.

Theorem 7.25 (Liouville's theorem for $\Delta^2$). Any bounded bi-harmonic function in $\mathbb{R}^n$ is constant.

Proof. One way to justify this result is by observing that it is a particular case of Theorem 5.3. Another proof, based on interior estimates, goes as follows. Let $u$ be a bounded bi-harmonic function in $\mathbb{R}^n$. Formula (7.4.2) in the current setting gives that
$$ \Delta u(x) = \frac{2(n+2)}{r^2}\,\fint_{B(x,r)} u(y)\,dy - \frac{2(n+2)}{r^2}\,u(x) \qquad (7.4.4) $$
for every $x \in \mathbb{R}^n$ and every $r \in (0,\infty)$. Since $\big|\fint_{B(x,r)} u(y)\,dy\big| \leq \|u\|_{L^\infty(\mathbb{R}^n)}$, if we let $r\to\infty$ in (7.4.4) we see that $\Delta u(x) = 0$, thus $u$ is harmonic. By Liouville's Theorem 7.15 for harmonic functions it follows that there exists a constant $c$ such that $u(x) = c$ for all $x \in \mathbb{R}^n$.

We shall next show that bi-harmonic functions satisfy interior estimates and are real-analytic.

Theorem 7.26. Suppose $u \in \mathcal{D}'(\Omega)$ satisfies $\Delta^2 u = 0$ in $\mathcal{D}'(\Omega)$. Then $u$ is real-analytic in $\Omega$ and there exists a dimensional constant $C \in (0,\infty)$ such that
$$ \max_{y\in B(x,r/2)} |\partial^\alpha u(y)| \leq \frac{C^{|\alpha|}|\alpha|!}{r^{|\alpha|}} \max_{y\in B(x,r)} |u(y)|, \qquad \forall\,\alpha \in \mathbb{N}_0^n, \qquad (7.4.5) $$
for each $x \in \Omega$ and each $r \in \big(0,\mathrm{dist}(x,\partial\Omega)\big)$.
Proof. In the case when n = 1 or n ≥ 3, all claims are direct consequences of Theorem 6.26 and Theorem 7.21. To treat the case n = 2 we shall introduce a := Ω × R “dummy” variable. Specifically, in this setting consider the open set Ω 3 (x1 , x2 , x3 ) := u(x1 , x2 ) for (x1 , x2 , x3 ) ∈ Ω. in R and define the function u Observe that u is bi-harmonic in Ω. As such, the higher-dimensional theory applies and yields that u satisfies, for some universal constant C ∈ (0, ∞), max y∈B(x,r/2)
|∂ α u (y)| ≤
C |α| |α|! r|α|
max | u(y)|, y∈B(x,r)
∀ α ∈ N30 ,
(7.4.6)
7.4. THE POISSON EQUATION FOR THE BI-LAPLACIAN
239
and each r ∈ 0, dist (x, ∂ Ω) . Applying (7.4.6) for points of for each x ∈ Ω and for multi-indices α = (α1 , α2 , 0) then yields the form x = (x1 , x2 , 0) ∈ Ω (7.4.5) for the original function u. In turn, this and Lemma 6.24 imply that u is real-analytic in Ω. We are now prepared to discuss the well-posedness of the Poisson problem for the bi-Laplacian in Rn . Theorem 7.27. Assume n ∈ N satisfies n ≥ 3 and n = 4. Then for each n function f ∈ L∞ comp (R ) and each c ∈ C the Poisson problem for the bi-Laplacian n in R , ⎧ u ∈ C 0 (Rn ), ⎪ ⎪ ⎨ 2 Δ u = f in D (Rn ), (7.4.7) ⎪ ⎪ ⎩ lim u(x) = c, |x|→∞
has a unique solution. Moreover, the solution u satisfies the following additional properties. (1) The function u has the integral representation formula u(x) = c + E(x − y)f (y) dy, x ∈ Rn ,
(7.4.8)
Rn
where E is the fundamental solution for the bi-Laplacian in Rn from (7.3.8). Moreover, u belongs to C 3 (Rn ) and for each α ∈ Nn0 with the property that 0 < |α| ≤ 3 we have α (∂ α E)(x − y)f (y) dy, x ∈ Rn . (7.4.9) ∂ u(x) = Rn
(2) If in fact f ∈ C0∞ (Rn ) then u ∈ C ∞ (Rn ). (3) For every α ∈ Nn0 with |α| = 4 there exists a constant cα ∈ C such that ∂ α u = cα f + T∂ α E f in D (Rn ),
(7.4.10)
where T∂ α E is the singular integral operator associated with Θ := ∂ α E as in Definition 4.90. (4) For each p ∈ (1, ∞), the solution u of (7.4.7) satisfies ∂ α u ∈ Lp (Rn ) for each α ∈ Nn0 with |α| = 4, where the derivatives are taken in D (Rn ). Moreover, there exists a constant C = C(p, n) ∈ (0, ∞) with the property that n ∂ α uLp(Rn ) ≤ Cf Lp(Rn ) . (7.4.11) |α|=4
CHAPTER 7. THE LAPLACIAN AND RELATED OPERATORS
240
Proof. That the function u defined as in (7.4.8) is of class C 3 (Rn ) and formula (7.4.9) holds for each α ∈ Nn0 with |α| ≤ 3 can be established much as in the proof of Proposition 7.8 keeping in mind that ∂ α E ∈ L1loc (Rn for each α ∈ Nn0 with |α| ≤ 3. Also, this function solves Δ2 u = f in D (Rn ) by reasoning in a similar fashion to the computation in (7.2.7). Given the format of E from (7.3.8) under the current assumptions on n, and the conditions on f , it is clear that the function u from (7.4.8) satisfies the limit condition in (7.4.7) [see (7.2.21) in the case of the Laplace operator]. The fact that (7.4.8) is the unique solution of (7.4.7) is a consequence of Theorem 7.25. Moving on, that u ∈ C ∞ (Rn ) if f ∈ C0∞ (Rn ) follows from Corollary 6.18 and the ellipticity of Δ2 . The above arguments cover the claims in parts (1)-(2). Turning to part (3), start by fixing α ∈ Nn0 with |α| = 4. Then there exists β ∈ Nn0 with |β| = 3 and j ∈ {1, . . . , n} such that ∂ α = ∂j ∂ β . Then the function Φ := ∂ β E is C ∞ and positive homogeneous of degree 1 − n in Rn \ {0} (cf. Exercise 7.22). Consequently, the function Θ := ∂ α E is as in (4.4.1), thanks to the discussion in Example 4.68. Then from what we have proved in part (1) and Theorem 4.99 we deduce that the following formula holds in D (Rn ): β α (∂ β E)(x − y)f (y) dy (7.4.12) ∂ u(x) = ∂j ∂ u(x) = ∂j Rn
= S n−1
(∂ β E)(ω)ωj dσ(ω) f (x) + lim ε→0+
|x−y|>ε
(∂ α E)(x − y)f (y) dy.
Choosing cα := S n−1 (∂ β E)(ω)ωj dσ(ω) the above formula may be written as in (7.4.10), and this finishes the proof of part (3). Finally, the claim in (4) follows from (3) and the boundedness of the singular integral operators T∂ α E on Lp (Rn ) (cf. Theorem 4.97).
7.5
Fundamental Solutions for the Poly-Harmonic Operator
Let m ∈ N and consider the poly-harmonic operator ⎛ Δm := ⎝
n j=1
⎞m ∂j2 ⎠
=
m! ∂ 2γ , γ!
in Rn .
(7.5.1)
|γ|=m
The goal in this section is to identify all fundamental solutions for this operator in Rn that are tempered distributions. The case m = 1 has been treated in Theorem 7.2 and the case m = 2 is discussed in Theorem 7.21. We remark that the formulas from Lemma 7.18, Lemma 7.19, and Lemma 7.20, suggest that we should look for a fundamental solution for Δm that is a constant multiple of |x|2m−n or a constant multiple of |x|2m−n ln |x|. In precise terms, we will prove the following result (recall that H denotes the Heaviside function).
7.5. FUNDAMENTAL SOLUTIONS FOR THE POLY-HARMONIC...
241
Theorem 7.28. Let m, n ∈ N and consider the function
⎧ (−1)m Γ(n/2 − m) 2m−n ⎪ ⎪ if n > 2m or n is odd and n = 1, |x| ⎪ ⎪ π n/2 4m (m − 1)! ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ (−1)n/2+1 Fm,n (x) := |x|2m−n ln |x| if n is even and ≤ 2m, ⎪ n/2 m−1 ⎪ 2π 4 (m − n/2)!(m − 1)! ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ 1 ⎪ ⎪ x2m−1 H if n = 1, ⎩ (2m − 1)! (7.5.2)
defined for x ∈ Rn \ {0} if n ≥ 2 and for x ∈ R if n = 1. Then Fm,n belongs to L1loc (Rn ) ∩ S (Rn ) and is a fundamental solution for Δm in Rn . Moreover, u ∈ S (Rn ) : Δm u = δ in S (Rn ) (7.5.3) = Fm,n + P : P poly-harmonic polynomial in Rn . Proof. The case n = 1 is immediate from Exercise 5.15. Assume in what follows that n ≥ 2. Since n − 2m < n, we clearly have |x|2m−n ∈ L1loc (Rn ) and furthermore, by Exercise 4.4, that |x|2m−n ∈ S (Rn ). By Exercise 2.107 and Exercise 4.104 we also have |x|2m−n ln |x| ∈ L1loc (Rn )∩S (Rn ). This proves that Fm,n ∈ L1loc (Rn ) ∩ S (Rn ). Once we show that Fm,n is a fundamental solution for Δm in Rn , Proposition 5.7 readily yields (7.5.3). There remains to prove Δm Fm,n = δ in S (Rn ). If m = 1, then it is easy to see that F1,n is the same as the distribution in (7.1.12) (corresponding to n ≥ 2). Thus, by Theorem 7.2 we have ΔF1,n = δ
in
S (Rn ).
(7.5.4)
We proceed by breaking up the remaining part of our analysis in a few cases. The case n > 2m or n is odd. Because of (7.5.4) we may assume m ≥ 2. Applying Lemma 7.18 and (13.5.2) we obtain ΔFm,n =
(−1)m Γ(n/2 − m) Δ(|x|2m−n ) π n/2 4m (m − 1)!
=
(−1)m Γ(n/2 − m) (2m − n)(2m − 2)|x|2m−n−2 π n/2 4m (m − 1)!
=
(−1)m−1 Γ(n/2 − m)(n/2 − m)(m − 1) 2(m−1)−n |x| π n/2 4m−1 (m − 1)!
= Fm−1,n
in S (Rn ) and pointwise in Rn \ {0}.
(7.5.5)
Hence, inductively we have Δm−1 Fm,n = F1,n in S (Rn ), which when combined with (7.5.4) yields the desired conclusion in the current case.
CHAPTER 7. THE LAPLACIAN AND RELATED OPERATORS
The case n = 2m. In this scenario n is even, and the goal is to prove
\[
\Delta^{n/2}F_{n/2,\,n}=\delta \quad\text{in } \mathcal{S}'(\mathbb{R}^n).
\tag{7.5.6}
\]
Note that (7.5.6) holds for n = 2 as seen from (7.5.4). Consequently, we may assume that n ≥ 4. Inductively, using Lemma 7.18, it is not difficult to show that
\[
\Delta^{k}\bigl(|x|^{-2}\bigr)=\frac{(-1)^k\,4^k\,k!\,(n/2-2)!}{(n/2-k-2)!}\,|x|^{-2-2k}
\quad\text{in } \mathcal{S}'(\mathbb{R}^n)
\tag{7.5.7}
\]
if n is even, n ≥ 4, and k ∈ N₀ is such that k < n/2 − 1. In particular, (7.5.7) specialized to k = n/2 − 2 yields
\[
\Delta^{n/2-2}\bigl(|x|^{-2}\bigr)=(-1)^{n/2}\,4^{n/2-2}\,\bigl[(n/2-2)!\bigr]^2\,|x|^{2-n}
\quad\text{in } \mathcal{S}'(\mathbb{R}^n).
\tag{7.5.8}
\]
Starting with the expression from (7.5.2) we then obtain
\[
\begin{aligned}
\Delta^{n/2-1}F_{n/2,\,n}
&=\frac{(-1)^{n/2+1}}{2\pi^{n/2}\,4^{n/2-1}\,(n/2-1)!}\,\Delta^{n/2-1}\bigl(\ln|x|\bigr)\\[1mm]
&=\frac{(-1)^{n/2+1}\,(n-2)}{2\pi^{n/2}\,4^{n/2-1}\,(n/2-1)!}\,\Delta^{n/2-2}\bigl(|x|^{-2}\bigr)\\[1mm]
&=\frac{(-1)^{n/2+1}\,(n-2)\,(-1)^{n/2}\,4^{n/2-2}\,[(n/2-2)!]^2}{2\pi^{n/2}\,4^{n/2-1}\,(n/2-1)!}\,|x|^{2-n}\\[1mm]
&=-\frac{1}{(n-2)\,\omega_{n-1}}\,|x|^{2-n}
\quad\text{in } \mathcal{S}'(\mathbb{R}^n).
\end{aligned}
\tag{7.5.9}
\]
The second equality in (7.5.9) is based on Lemma 7.19, the third equality uses (7.5.8), while the last equality follows from straightforward calculations and (13.5.6). In summary, we proved that if n is even and n ≥ 4, then Δ^{n/2−1}F_{n/2,n} = E_Δ in S′(Rⁿ), where E_Δ is the fundamental solution for the Laplacian from (7.1.12). The latter combined with Theorem 7.2 now yields (7.5.6) for n ≥ 4, and finishes the proof of (7.5.6).

The case n ≤ 2m and n even. Fix an even number n ∈ N. We will prove that if m ≥ n/2 then
\[
\Delta^{m}F_{m,n}=\delta \quad\text{in } \mathcal{S}'(\mathbb{R}^n)
\tag{7.5.10}
\]
by induction on m. The case m = n/2 has been treated in the previous case. Suppose (7.5.10) holds for some m ≥ n/2. Then clearly 2(m+1) > n and invoking Lemma 7.20 we obtain
\[
\Delta F_{m+1,\,n}=F_{m,n}+C_{m+1,\,n}\,(4m-n+2)\,|x|^{2m-n}
\quad\text{in } \mathcal{S}'(\mathbb{R}^n),
\tag{7.5.11}
\]
where C_{m+1,n} is the coefficient of |x|^{2(m+1)−n} ln|x| in the expression for F_{m+1,n}. Note that since n is even the expression |x|^{2m−n} is a polynomial of degree 2m−n. Hence, Δ^m[|x|^{2m−n}] = 0 given that Δ^m is a homogeneous differential operator of degree 2m > 2m−n. Combined with (7.5.11) and the induction hypothesis on F_{m,n}, this gives
\[
\Delta^{m+1}F_{m+1,\,n}
=\Delta^{m}F_{m,n}+C_{m+1,\,n}\,(4m-n+2)\,\Delta^{m}\bigl[|x|^{2m-n}\bigr]=\delta
\quad\text{in } \mathcal{S}'(\mathbb{R}^n).
\tag{7.5.12}
\]
By induction, it follows that (7.5.10) holds in the current setting. This finishes the proof of the theorem.

We remark that Theorem 7.28 specialized to m = 2 gives another proof of the fact that when n ≥ 2 the expression from (7.3.8) is a fundamental solution for Δ² in Rⁿ.

Proposition 7.29. Let m, n ∈ N be such that n ≥ 2 and let F_{m,n} be the fundamental solution for Δ^m in Rⁿ as defined in (7.5.2). Then if N ∈ N₀ is such that 0 ≤ N < m, the following identity holds:
\[
\Delta^{N}F_{m,n}=F_{m-N,\,n}+P_{m,N}
\quad\text{in } \mathcal{S}'(\mathbb{R}^n) \text{ and pointwise in } \mathbb{R}^n\setminus\{0\},
\tag{7.5.13}
\]
where P_{m,N} is a homogeneous polynomial in Rⁿ of degree less than or equal to max{2m − n − 2N, 0}, which is identically zero if n > 2m or n is odd, or if n is even, n ≤ 2m, and N > m − n/2 (in short, P_{m,N} ≡ 0 if either n is odd or 2N + n > 2m).
Proof. We start by observing that, using an inductive reasoning and Lemma 7.20, we obtain that for each N ∈ N₀
\[
\Delta^{N}\bigl(|x|^{M}\ln|x|\bigr)
=C^{(1)}_{M,N}\,|x|^{M-2N}\ln|x|+C^{(2)}_{M,N}\,|x|^{M-2N}
\quad\text{in } \mathcal{S}'(\mathbb{R}^n)
\tag{7.5.14}
\]
if M is a real number such that M − 2N > −n, for some real constants C^{(1)}_{M,N}, C^{(2)}_{M,N} depending on n, M, and N. Now suppose n is even and n ≤ 2m. Specializing (7.5.14) to the case when M := 2m − n and taking N ∈ {0, 1, …, m − n/2}, we conclude
\[
\Delta^{N}F_{m,n}=c^{(1)}_{m,N}\,F_{m-N,\,n}+c^{(2)}_{m,N}\,|x|^{2m-n-2N}
\quad\text{in } \mathcal{S}'(\mathbb{R}^n),
\tag{7.5.15}
\]
for some real constants c^{(1)}_{m,N}, c^{(2)}_{m,N} depending only on n, m, and N. For each number N ∈ {0, 1, …, m − n/2} set
\[
P_{m,N}(x):=c^{(2)}_{m,N}\,|x|^{2m-n-2N},\qquad x\in\mathbb{R}^n.
\]
Note that since n is even and n ≤ 2m, each P_{m,N} is a homogeneous polynomial of degree at most 2m − n − 2N. Also, we have that for each N ∈ {0, 1, …, m − n/2}
the operator Δ^{m−N} is a homogeneous differential operator of degree 2m − 2N > 2m − n − 2N, thus Δ^{m−N}P_{m,N} = 0 both in S′(Rⁿ) and pointwise in Rⁿ. The latter, (7.5.15), and Theorem 7.28 imply
\[
\delta=\Delta^{m-N}\bigl(\Delta^{N}F_{m,n}\bigr)
=c^{(1)}_{m,N}\,\Delta^{m-N}F_{m-N,\,n}+\Delta^{m-N}P_{m,N}
=c^{(1)}_{m,N}\,\delta
\tag{7.5.16}
\]
in S′(Rⁿ) for each N ∈ {0, 1, …, m − n/2}. This implies that c^{(1)}_{m,N} = 1 for each N ∈ {0, 1, …, m − n/2}. Consequently, (7.5.15) becomes
\[
\Delta^{N}F_{m,n}=F_{m-N,\,n}+P_{m,N}
\quad\text{in } \mathcal{S}'(\mathbb{R}^n),\ \forall\,N\in\{0,1,\dots,m-n/2\}.
\tag{7.5.17}
\]
That the above equality also holds pointwise in Rⁿ\{0} is immediate, hence we proved (7.5.13) for n even, n ≤ 2m, and 0 ≤ N ≤ m − n/2. Note that if n = 2 then m − n/2 = m − 1, so we actually have proved (7.5.13) for all N ∈ {0, 1, …, m − 1}. To finish the proof of (7.5.13) when n is even and satisfies n ≤ 2m, there remains the case N ∈ {m − n/2 + 1, …, m − 1} under the assumption that n ≥ 4. For starters, observe that if we write (7.5.17) for N = m − n/2, then P_{m,N} is just a constant and, using the expression for F_{n/2,n}, we have
\[
\Delta^{m-n/2}F_{m,n}=F_{n/2,\,n}+c^{(2)}_{m,\,m-n/2}
=c_n\ln|x|+c^{(2)}_{m,\,m-n/2}
\quad\text{in } \mathcal{S}'(\mathbb{R}^n),
\tag{7.5.18}
\]
for some real constant c_n. Therefore, (7.5.18), Lemma 7.19, and Lemma 7.18 (which we may apply repeatedly since N < m keeps all exponents involved above −n) allow us to conclude that for each N ∈ {m − n/2 + 1, …, m − 1} there exists some constant c_{m,N} such that
\[
\Delta^{N}F_{m,n}=\Delta^{N-m+n/2}\bigl(\Delta^{m-n/2}F_{m,n}\bigr)
=c_{m,N}\,|x|^{2m-2N-n}
\quad\text{in } \mathcal{S}'(\mathbb{R}^n).
\tag{7.5.19}
\]
Upon observing that the condition N > m − n/2 is equivalent with n > 2(m−N), we may conclude that c_{m,N}|x|^{2m−2N−n} = c^{(1)}_{m,N}F_{m−N,n} for some constant c^{(1)}_{m,N}. Hence, for each N ∈ {m − n/2 + 1, …, m − 1} we have
\[
\Delta^{N}F_{m,n}=c^{(1)}_{m,N}\,F_{m-N,\,n}
\quad\text{in } \mathcal{S}'(\mathbb{R}^n).
\tag{7.5.20}
\]
Applying Δ^{m−N} to (7.5.20) and invoking Theorem 7.28 we obtain c^{(1)}_{m,N} = 1 for each N ∈ {m − n/2 + 1, …, m − 1}, so that
\[
\Delta^{N}F_{m,n}=F_{m-N,\,n}
\quad\text{in } \mathcal{S}'(\mathbb{R}^n),\ \forall\,N\in\{m-n/2+1,\dots,m-1\}.
\tag{7.5.21}
\]
This proves (7.5.13) when n is even, satisfies n ≤ 2m, and m − n/2 < N < m, if we take P_{m,N} := 0.
Suppose next that n > 2m or n is odd. Then, as seen in the proof of Theorem 7.28, identity (7.5.5) holds in S′(Rⁿ) and pointwise in Rⁿ\{0}. Iterating this identity we arrive at
\[
\Delta^{N}F_{m,n}=F_{m-N,\,n}
\quad\text{in } \mathcal{S}'(\mathbb{R}^n) \text{ and pointwise in } \mathbb{R}^n\setminus\{0\}.
\tag{7.5.22}
\]
Consequently, formula (7.5.13) holds in the current setting if we take P_{m,N} := 0 for all N ∈ {0, …, m − 1}. This completes the proof of the proposition.

We next address the issue of interior estimates and real-analyticity for polyharmonic functions.

Theorem 7.30. Whenever u ∈ D′(Ω) satisfies Δ^m u = 0 in D′(Ω), it follows that u is real-analytic in Ω and there exists a constant C = C(n, m) ∈ (0, ∞) with the property that
\[
\max_{y\in B(x,r/2)}|\partial^{\alpha}u(y)|
\le \frac{C^{|\alpha|}\,|\alpha|!}{r^{|\alpha|}}\,\max_{y\in B(x,r)}|u(y)|,
\qquad \forall\,\alpha\in\mathbb{N}_0^n,
\tag{7.5.23}
\]
for each x ∈ Ω and each r ∈ (0, dist(x, ∂Ω)).
Proof. In the case when either n is odd, or n ≥ 2m, all claims are consequences of Theorem 6.26 and Theorem 7.28. The remaining cases may be reduced to the ones just treated by introducing "dummy" variables, as in the proof of Theorem 7.26.

In the last part of this section we prove an integral representation formula for a fundamental solution for the poly-harmonic operator in Rⁿ involving a sufficiently large power of the Laplacian (i.e., Δ^m with m a positive integer bigger than or equal to n/2). The difference between this fundamental solution and that in (7.5.2) turns out to be a homogeneous polynomial that is a null solution of the respective poly-harmonic operator. In what follows, log denotes the principal branch of the complex logarithm, defined for points z ∈ C\(−∞, 0].

Lemma 7.31. Let n ∈ N, n ≥ 2, and suppose q ∈ N₀ is such that n + q is an even number. Then the function
\[
E_q(x):=-\frac{1}{(2\pi i)^n\,q!}\int_{S^{n-1}}(x\cdot\xi)^q
\log\Bigl(\frac{x\cdot\xi}{i}\Bigr)\,d\sigma(\xi),
\qquad \forall\,x\in\mathbb{R}^n\setminus\{0\},
\tag{7.5.24}
\]
is a fundamental solution for the poly-harmonic operator Δ^{(n+q)/2} in Rⁿ. Moreover, E_q ∈ S′(Rⁿ) and
\[
E_q(x)=F_{\frac{n+q}{2},\,n}(x)+C(n,q)\,|x|^{q},
\qquad \forall\,x\in\mathbb{R}^n\setminus\{0\},
\tag{7.5.25}
\]
where F_{(n+q)/2,n} is the fundamental solution for Δ^{(n+q)/2} in Rⁿ given in (7.5.2) and
\[
C(n,q):=
\begin{cases}
-\dfrac{2\omega_{n-2}}{(2\pi i)^n\,q!}\displaystyle\int_0^1 s^q(\ln s)\,(1-s^2)^{\frac{n-3}{2}}\,ds
& \text{if } n \text{ is even},\\[3mm]
0 & \text{if } n \text{ is odd}.
\end{cases}
\tag{7.5.26}
\]
Proof. Fix n and q as in the hypotheses of the lemma. Fix x ∈ Rⁿ\{0} and use the fact that log((x·ξ)/i) = ln|x·ξ| − i(π/2)sgn(x·ξ) for every ξ ∈ S^{n−1} with x·ξ ≠ 0 to write
\[
\begin{aligned}
\int_{S^{n-1}}(x\cdot\xi)^q\log\Bigl(\frac{x\cdot\xi}{i}\Bigr)\,d\sigma(\xi)
&=\int_{\substack{\xi\in S^{n-1}\\ x\cdot\xi>0}}(x\cdot\xi)^q\Bigl[\ln|x\cdot\xi|-\tfrac{i\pi}{2}\Bigr]\,d\sigma(\xi)\\
&\quad+(-1)^q\int_{\substack{\xi\in S^{n-1}\\ x\cdot\xi>0}}(x\cdot\xi)^q\Bigl[\ln|x\cdot\xi|+\tfrac{i\pi}{2}\Bigr]\,d\sigma(\xi)\\
&=\frac12\int_{S^{n-1}}|x\cdot\xi|^q\Bigl[\ln|x\cdot\xi|-\tfrac{i\pi}{2}\Bigr]\,d\sigma(\xi)\\
&\quad+\frac{(-1)^q}{2}\int_{S^{n-1}}|x\cdot\xi|^q\Bigl[\ln|x\cdot\xi|+\tfrac{i\pi}{2}\Bigr]\,d\sigma(\xi)\\
&=\frac{1+(-1)^q}{2}\int_{S^{n-1}}|x\cdot\xi|^q\ln|x\cdot\xi|\,d\sigma(\xi)\\
&\quad+\frac{\bigl((-1)^q-1\bigr)\pi i}{4}\int_{S^{n-1}}|x\cdot\xi|^q\,d\sigma(\xi).
\end{aligned}
\tag{7.5.27}
\]
To compute the integrals in the rightmost side of (7.5.27) we use Proposition 13.46. First, applying (13.8.15) with f(t) := |t|^q for t ∈ R\{0} and v := x, we obtain
\[
\begin{aligned}
\int_{S^{n-1}}|x\cdot\xi|^q\,d\sigma(\xi)
&=2\omega_{n-2}\,|x|^q\int_0^1 s^q\,(1-s^2)^{\frac{n-3}{2}}\,ds
=2\omega_{n-2}\,|x|^q\int_0^{\pi/2}(\sin\theta)^q(\cos\theta)^{n-2}\,d\theta\\[1mm]
&=\omega_{n-2}\,\frac{\Gamma\bigl(\frac{q+1}{2}\bigr)\,\Gamma\bigl(\frac{n-1}{2}\bigr)}{\Gamma\bigl(\frac{q+n}{2}\bigr)}\,|x|^q
=\frac{2\pi^{\frac{n-1}{2}}\,\Gamma\bigl(\frac{q+1}{2}\bigr)}{\Gamma\bigl(\frac{q+n}{2}\bigr)}\,|x|^q.
\end{aligned}
\tag{7.5.28}
\]
For the third equality in (7.5.28) we used (13.5.10), while the last one is based on (13.5.6). Second, formula (13.8.15) with f(t) := |t|^q ln|t| for t ∈ R and v := x, in concert with (7.5.28), further yields
\[
\begin{aligned}
\int_{S^{n-1}}|x\cdot\xi|^q\ln|x\cdot\xi|\,d\sigma(\xi)
&=2\omega_{n-2}\,|x|^q\int_0^1 s^q\bigl(\ln|x|+\ln s\bigr)(1-s^2)^{\frac{n-3}{2}}\,ds\\[1mm]
&=\frac{2\pi^{\frac{n-1}{2}}\,\Gamma\bigl(\frac{q+1}{2}\bigr)}{\Gamma\bigl(\frac{q+n}{2}\bigr)}\,|x|^q\ln|x|
+c(n,q)\,|x|^q,
\end{aligned}
\tag{7.5.29}
\]
where
\[
c(n,q):=2\omega_{n-2}\int_0^1 s^q(\ln s)\,(1-s^2)^{\frac{n-3}{2}}\,ds,
\qquad |c(n,q)|<\infty.
\tag{7.5.30}
\]
Hence, a combination of (7.5.27), (7.5.28), (7.5.29), and (7.5.24) implies
\[
E_q(x)=
\begin{cases}
\dfrac{(-1)^{n/2+1}\,\Gamma\bigl(\frac{q+1}{2}\bigr)}{2^{\,n-1}\pi^{\frac{n+1}{2}}\,q!\,\Gamma\bigl(\frac{q+n}{2}\bigr)}\,|x|^q\ln|x|
-\dfrac{c(n,q)}{(2\pi i)^n\,q!}\,|x|^q
& \text{if } n \text{ is even},\\[3mm]
\dfrac{(-1)^{\frac{n-1}{2}}\,\Gamma\bigl(\frac{q+1}{2}\bigr)}{2^{\,n}\pi^{\frac{n-1}{2}}\,q!\,\Gamma\bigl(\frac{q+n}{2}\bigr)}\,|x|^q
& \text{if } n \text{ is odd},
\end{cases}
\tag{7.5.31}
\]
for every x ∈ Rⁿ\{0}. That this expression belongs to S′(Rⁿ) follows from Exercise 4.104. Let F_{(n+q)/2,n} be as in (7.5.2). The goal is to prove that
\[
E_q-F_{\frac{n+q}{2},\,n}=-\frac{c(n,q)}{(2\pi i)^n\,q!}\,|x|^q
\quad\text{pointwise in } \mathbb{R}^n\setminus\{0\}.
\tag{7.5.32}
\]
Once such an identity is established, Theorem 7.28 may be used to conclude that E_q is a fundamental solution for Δ^{(n+q)/2} in Rⁿ (note that if n is even, then q is even and |x|^q is a homogeneous polynomial of degree q that is annihilated by Δ^{(n+q)/2}). Also, it is easy to see that C(n,q) = −c(n,q)/[(2πi)ⁿ q!].

With an eye toward proving (7.5.32), suppose first that n is even. Choosing m := (n+q)/2 in (7.5.2), we have n ≤ 2m, hence
\[
F_{\frac{n+q}{2},\,n}(x)
=\frac{(-1)^{n/2+1}}{2\pi^{n/2}\,4^{\frac{n+q}{2}-1}\,\bigl(\frac{q}{2}\bigr)!\,\bigl(\frac{n+q}{2}-1\bigr)!}\,|x|^q\ln|x|,
\qquad \forall\,x\in\mathbb{R}^n\setminus\{0\}.
\tag{7.5.33}
\]
Applying (13.5.4) with q/2 in place of n, and using (13.5.3) (recall that n + q is even), we have
\[
\Gamma\Bigl(\frac{q+1}{2}\Bigr)=\frac{q!\,\pi^{1/2}}{2^{\,q}\,(q/2)!}
\qquad\text{and}\qquad
\Gamma\Bigl(\frac{q+n}{2}\Bigr)=\Bigl(\frac{q+n}{2}-1\Bigr)!.
\tag{7.5.34}
\]
Using now (7.5.34) in (7.5.33) and the expression for E_q, it follows that (7.5.32) holds. If next we assume that n is odd, then
\[
F_{\frac{n+q}{2},\,n}(x)
=\frac{(-1)^{\frac{n+q}{2}}\,\Gamma\bigl(-\frac{q}{2}\bigr)}{\pi^{n/2}\,2^{\,n+q}\,\bigl(\frac{n+q}{2}-1\bigr)!}\,|x|^q,
\qquad \forall\,x\in\mathbb{R}^n\setminus\{0\}.
\tag{7.5.35}
\]
Applying (13.5.5) and (13.5.3) with (q+1)/2 in place of n, we see that
\[
\Gamma\Bigl(\frac{q+1}{2}\Bigr)=\Bigl(\frac{q-1}{2}\Bigr)!
\qquad\text{and}\qquad
\Gamma\Bigl(-\frac{q}{2}\Bigr)=\frac{(-1)^{\frac{q+1}{2}}\,2^{\,q+1}\,\bigl(\frac{q+1}{2}\bigr)!\,\pi^{1/2}}{(q+1)!},
\tag{7.5.36}
\]
which, in concert with the second equality in (7.5.34), the expression in (7.5.35), and the formula for E_q, implies E_q(x) = F_{(n+q)/2,n}(x) in Rⁿ\{0}. The proof of the lemma is now complete.
7.6 Fundamental Solutions for the Cauchy–Riemann Operator
In this section we determine all fundamental solutions for the Cauchy–Riemann operator ∂/∂z̄ := (1/2)(∂₁ + i∂₂) and its conjugate, ∂/∂z := (1/2)(∂₁ − i∂₂), that belong to S′(R²). It is immediate that (∂/∂z)(∂/∂z̄) = (1/4)Δ when acting on distributions from D′(R²). Given this, Remark 5.6 is particularly useful. Specifically, since by Theorem 7.2 we have that E(x) = (1/2π) ln|x|, x ∈ R²\{0}, is a fundamental solution for Δ in R² and E ∈ S′(R²), Remark 5.6 implies that
\[
\frac{\partial}{\partial\bar z}\Bigl[\frac{2}{\pi}\ln|x|\Bigr]\in\mathcal{S}'(\mathbb{R}^2)
\ \text{ is a fundamental solution for }\ \frac{\partial}{\partial z}
\ \text{ in } \mathcal{S}'(\mathbb{R}^2),
\tag{7.6.1}
\]
\[
\frac{\partial}{\partial z}\Bigl[\frac{2}{\pi}\ln|x|\Bigr]\in\mathcal{S}'(\mathbb{R}^2)
\ \text{ is a fundamental solution for }\ \frac{\partial}{\partial\bar z}
\ \text{ in } \mathcal{S}'(\mathbb{R}^2).
\tag{7.6.2}
\]
We proceed with computing ∂/∂z̄[(2/π)ln|x|] in the distributional sense in S′(R²). By (c) in Theorem 4.13 it suffices to compute the distributional derivative ∂/∂z̄[(2/π)ln|x|] in D′(R²). We will show that this distributional derivative is equal to the distribution given by the function obtained by taking ∂/∂z̄[(2/π)ln|x|] in the classical sense for x ≠ 0. To this end, fix a function ϕ ∈ C₀^∞(R²). Since ln|x| ∈ L¹_loc(R²) (recall Example 2.9) we may write
\[
\Bigl\langle \frac{\partial}{\partial\bar z}\Bigl[\frac{2}{\pi}\ln|x|\Bigr],\varphi\Bigr\rangle
=-\Bigl\langle \frac{2}{\pi}\ln|x|,\frac{\partial\varphi}{\partial\bar z}\Bigr\rangle
=-\frac{1}{\pi}\int_{\mathbb{R}^2}\ln|x|\,\bigl[(\partial_1+i\partial_2)\varphi(x)\bigr]\,dx.
\tag{7.6.3}
\]
Let R > 0 be such that supp ϕ ⊂ B(0, R). By the Lebesgue dominated convergence theorem 13.12 we further have
\[
\int_{\mathbb{R}^2}\ln|x|\,\bigl[(\partial_1+i\partial_2)\varphi(x)\bigr]\,dx
=\lim_{\varepsilon\to0^+}\int_{\varepsilon\le|x|\le R}\ln|x|\,\bigl[(\partial_1+i\partial_2)\varphi(x)\bigr]\,dx.
\tag{7.6.4}
\]
Fix ε ∈ (0, R/2) and, with x = (x₁, x₂), use (13.7.4) to write
\[
\int_{\varepsilon\le|x|\le R}\ln|x|\,(\partial_1+i\partial_2)\varphi(x)\,dx
=-\int_{|x|\ge\varepsilon}\varphi(x)\,(\partial_1+i\partial_2)\ln|x|\,dx
-\frac{\ln\varepsilon}{\varepsilon}\int_{|x|=\varepsilon}(x_1+ix_2)\,\varphi(x)\,d\sigma(x),
\tag{7.6.5}
\]
where we have used the fact that the outward unit normal to B(0, ε) at a point x on ∂B(0, ε) is (1/ε)(x₁, x₂) and that ϕ vanishes on ∂B(0, R). Using polar coordinates, we further write
\[
\Bigl|\frac{\ln\varepsilon}{\varepsilon}\int_{|x|=\varepsilon}(x_1+ix_2)\,\varphi(x)\,d\sigma(x)\Bigr|
=\Bigl|\varepsilon\ln\varepsilon\int_0^{2\pi}(\cos\theta+i\sin\theta)\,
\varphi(\varepsilon\cos\theta,\varepsilon\sin\theta)\,d\theta\Bigr|
\le C\varepsilon|\ln\varepsilon|\longrightarrow 0
\tag{7.6.6}
\]
as ε → 0⁺. Since (∂₁ + i∂₂)[ln|x|] ∈ L¹_loc(R²) and ϕ is compactly supported, by Lebesgue's dominated convergence theorem we have
\[
\lim_{\varepsilon\to0^+}\int_{|x|\ge\varepsilon}\varphi(x)\,(\partial_1+i\partial_2)\ln|x|\,dx
=\int_{\mathbb{R}^2}\varphi(x)\,(\partial_1+i\partial_2)\ln|x|\,dx.
\tag{7.6.7}
\]
By combining (7.6.3)–(7.6.7) we conclude that
\[
\frac{\partial}{\partial\bar z}\Bigl[\frac{2}{\pi}\ln|x|\Bigr]
=\frac{1}{\pi}(\partial_1+i\partial_2)[\ln|x|]
=\frac{1}{\pi}\cdot\frac{1}{x_1-ix_2}
\quad\text{in } \mathcal{S}'(\mathbb{R}^2).
\tag{7.6.8}
\]
Similarly,
\[
\frac{\partial}{\partial z}\Bigl[\frac{2}{\pi}\ln|x|\Bigr]
=\frac{1}{\pi}(\partial_1-i\partial_2)[\ln|x|]
=\frac{1}{\pi}\cdot\frac{1}{x_1+ix_2}
\quad\text{in } \mathcal{S}'(\mathbb{R}^2).
\tag{7.6.9}
\]
With this at hand, by invoking Proposition 5.7, we have a complete description of all fundamental solutions for ∂/∂z and ∂/∂z̄ that are also tempered distributions. In summary, we proved the following theorem.

Theorem 7.32. Consider the functions
\[
E(x_1,x_2):=\frac{1}{\pi}\cdot\frac{1}{x_1-ix_2},
\qquad
F(x_1,x_2):=\frac{1}{\pi}\cdot\frac{1}{x_1+ix_2},
\tag{7.6.10}
\]
defined for x = (x₁, x₂) ∈ R²\{0}. Then E ∈ S′(R²) and is a fundamental solution for the operator ∂/∂z. Also, F ∈ S′(R²) and is a fundamental solution for the operator ∂/∂z̄. Moreover,
\[
\Bigl\{u\in\mathcal{S}'(\mathbb{R}^2):\ \tfrac{\partial}{\partial z}u=\delta \text{ in } \mathcal{S}'(\mathbb{R}^2)\Bigr\}
=\Bigl\{E+P:\ P \text{ polynomial in } \mathbb{R}^2 \text{ satisfying } \tfrac{\partial}{\partial z}P=0 \text{ in } \mathbb{R}^2\Bigr\}
\tag{7.6.11}
\]
and
\[
\Bigl\{u\in\mathcal{S}'(\mathbb{R}^2):\ \tfrac{\partial}{\partial\bar z}u=\delta \text{ in } \mathcal{S}'(\mathbb{R}^2)\Bigr\}
=\Bigl\{F+P:\ P \text{ polynomial in } \mathbb{R}^2 \text{ satisfying } \tfrac{\partial}{\partial\bar z}P=0 \text{ in } \mathbb{R}^2\Bigr\}.
\tag{7.6.12}
\]
Having identified the fundamental solutions in R² for the Cauchy–Riemann operator ∂̄ = (1/2)(∂ₓ + i∂ᵧ), we shall now compute the Fourier transform of 1/z. Throughout, we shall repeatedly identify R² with C.

Proposition 7.33. Let u(z) := 1/z for all z ∈ C\{0}. Then u ∈ S′(R²) and
\[
\widehat{u}(\xi)=\frac{2\pi}{i\xi}
\quad\text{in } \mathcal{S}'(\mathbb{R}^2).
\tag{7.6.13}
\]
Proof. That u ∈ S′(R²) follows by observing that u is locally integrable near the origin, while |u(z)| decays like |z|^{−1} for |z| > 1. To prove (7.6.13), first note that, since ∂̄(1/(πz)) = δ in S′(R²) by Theorem 7.32, it follows that ∂̄u = πδ in S′(R²). Hence
\[
\pi=\widehat{\bar\partial u}(\xi)=\frac{i}{2}\,\xi\,\widehat{u}(\xi)
\quad\Longrightarrow\quad
\xi\,\widehat{u}(\xi)=\frac{2\pi}{i}.
\tag{7.6.14}
\]
Thus, ξ(2π/(iξ) − û(ξ)) = 0 in S′(R²), which, when combined with Example 2.68, implies
\[
\frac{2\pi}{i\xi}-\widehat{u}(\xi)=c\,\delta,\quad\text{for some } c\in\mathbb{C}.
\tag{7.6.15}
\]
Thus,
\[
\frac{2\pi}{i}\,u-\widehat{u}=c\,\delta
\quad\text{in } \mathcal{S}'(\mathbb{R}^2).
\tag{7.6.16}
\]
Taking another Fourier transform yields
\[
\frac{2\pi}{i}\,\widehat{u}+(2\pi)^2 u=c
\quad\text{in } \mathcal{S}'(\mathbb{R}^2),
\tag{7.6.17}
\]
given that the Fourier transform of û equals −(2π)²u (keeping in mind that u is odd). A linear combination of (7.6.16) and (7.6.17), which eliminates û, then leads us to the conclusion that (−2iπδ + 1)c = 0 in S′(R²). In turn, this forces c = 0, concluding the proof of the proposition.

Exercise 7.34. Consider u(z) := 1/z̄ for all z ∈ C\{0}. Show that u ∈ S′(R²) and
\[
\widehat{u}(\xi)=\frac{2\pi}{i\bar\xi}
\quad\text{in } \mathcal{S}'(\mathbb{R}^2).
\tag{7.6.18}
\]
Hint: Use Proposition 7.33 and Exercise 3.26.

Proposition 7.35. Let Ω be an open set in C that contains the origin. Suppose u ∈ C⁰(Ω) is such that ∂u/∂z̄ ∈ L¹_loc(Ω) and (1/z)(∂u/∂z̄) ∈ L¹_loc(Ω). Then u/z ∈ L¹_loc(Ω) and
\[
\frac{\partial}{\partial\bar z}\Bigl[\frac{u}{z}\Bigr]
=\pi u(0)\,\delta+\frac{1}{z}\,\frac{\partial u}{\partial\bar z}
\quad\text{in } \mathcal{D}'(\Omega).
\tag{7.6.19}
\]
Proof. Pick a function θ ∈ C^∞(C) with the property that θ = 0 on B(0, 1) and θ = 1 on C\B(0, 2). For each ε ∈ (0, 1) define θ_ε : C → C by setting θ_ε(z) := θ(z/ε) for each z ∈ C. Then
\[
\theta_\varepsilon\in C^\infty(\mathbb{C}),\qquad
\mathrm{supp}\,(\nabla\theta_\varepsilon)\subseteq B(0,2\varepsilon)\setminus B(0,\varepsilon),
\tag{7.6.20}
\]
\[
\lim_{\varepsilon\to0^+}\theta_\varepsilon(z)=1\ \ \forall\,z\in\mathbb{C}\setminus\{0\},
\qquad\text{and}\qquad
|\theta_\varepsilon(z)|\le 1\ \ \forall\,z\in\mathbb{C}.
\tag{7.6.21}
\]
Next, fix ϕ ∈ C₀^∞(Ω) and, with L² denoting the Lebesgue measure in R², write
\[
\Bigl\langle \frac{\partial}{\partial\bar z}\Bigl[\frac{u}{z}\Bigr],\varphi\Bigr\rangle
=-\Bigl\langle \frac{u}{z},\frac{\partial\varphi}{\partial\bar z}\Bigr\rangle
=-\int_\Omega \frac{u}{z}\,\frac{\partial\varphi}{\partial\bar z}\,d\mathcal{L}^2
=-\lim_{\varepsilon\to0^+}\int_\Omega \frac{u}{z}\,\frac{\partial\varphi}{\partial\bar z}\,\theta_\varepsilon\,d\mathcal{L}^2,
\tag{7.6.22}
\]
where for the last equality in (7.6.22) we used (7.6.21) and Lebesgue's dominated convergence theorem. Note that
\[
\frac{1}{z}\,\varphi\,\theta_\varepsilon\in C_0^\infty(\Omega)
\qquad\text{and}\qquad
\frac{\partial}{\partial\bar z}\Bigl[\frac{1}{z}\Bigr]=0\ \text{ in } \mathbb{C}\setminus\{0\}.
\tag{7.6.23}
\]
Therefore,
\[
\frac{\partial}{\partial\bar z}\Bigl[\frac{1}{z}\,\varphi\,\theta_\varepsilon\Bigr]
=\frac{1}{z}\,\frac{\partial\varphi}{\partial\bar z}\,\theta_\varepsilon
+\frac{1}{z}\,\varphi\,\frac{\partial\theta_\varepsilon}{\partial\bar z}
\quad\text{in } \Omega.
\tag{7.6.24}
\]
Combining (7.6.22) and (7.6.24) we obtain
\[
\begin{aligned}
\Bigl\langle \frac{\partial}{\partial\bar z}\Bigl[\frac{u}{z}\Bigr],\varphi\Bigr\rangle
&=-\lim_{\varepsilon\to0^+}\int_\Omega u\,\frac{\partial}{\partial\bar z}\Bigl[\frac{1}{z}\,\varphi\,\theta_\varepsilon\Bigr]\,d\mathcal{L}^2
+\lim_{\varepsilon\to0^+}\int_\Omega \frac{u}{z}\,\varphi\,\frac{\partial\theta_\varepsilon}{\partial\bar z}\,d\mathcal{L}^2\\[1mm]
&=\lim_{\varepsilon\to0^+}\Bigl\langle \frac{\partial u}{\partial\bar z},\frac{1}{z}\,\varphi\,\theta_\varepsilon\Bigr\rangle
+\lim_{\varepsilon\to0^+}\int_\Omega \frac{u}{z}\,\varphi\,\frac{\partial\theta_\varepsilon}{\partial\bar z}\,d\mathcal{L}^2
=:I+II.
\end{aligned}
\tag{7.6.25}
\]
Using the hypotheses on u, (7.6.21), and Lebesgue's dominated convergence theorem, we obtain
\[
I=\lim_{\varepsilon\to0^+}\int_\Omega \frac{\partial u}{\partial\bar z}\,\frac{1}{z}\,\varphi\,\theta_\varepsilon\,d\mathcal{L}^2
=\int_\Omega \frac{\partial u}{\partial\bar z}\,\frac{1}{z}\,\varphi\,d\mathcal{L}^2
=\Bigl\langle \frac{1}{z}\,\frac{\partial u}{\partial\bar z},\varphi\Bigr\rangle.
\tag{7.6.26}
\]
To compute II, first write
\[
\int_\Omega \frac{u}{z}\,\varphi\,\frac{\partial\theta_\varepsilon}{\partial\bar z}\,d\mathcal{L}^2
=\int_\Omega \frac{u-u(0)}{z}\,\varphi\,\frac{1}{\varepsilon}\,\frac{\partial\theta}{\partial\bar z}(\cdot/\varepsilon)\,d\mathcal{L}^2
+u(0)\int_\Omega \frac{1}{z}\,\varphi\,\frac{\partial\theta_\varepsilon}{\partial\bar z}\,d\mathcal{L}^2
=:III+IV.
\tag{7.6.27}
\]
Using the support condition from (7.6.20), the continuity of u, and the properties of ϕ, term III may be estimated by
\[
|III|\le C\,\|\varphi\|_{L^\infty(\Omega)}\,\|\nabla\theta\|_{L^\infty(\mathbb{C})}
\sup_{B(0,2\varepsilon)}|u-u(0)|
\xrightarrow[\ \varepsilon\to0^+\ ]{}0.
\tag{7.6.28}
\]
As for IV, we have
\[
\begin{aligned}
\lim_{\varepsilon\to0^+}IV
&=u(0)\lim_{\varepsilon\to0^+}\int_\Omega \frac{1}{z}\,\varphi\,\frac{\partial\theta_\varepsilon}{\partial\bar z}\,d\mathcal{L}^2
=-u(0)\lim_{\varepsilon\to0^+}\int_\Omega \frac{1}{z}\,\frac{\partial\varphi}{\partial\bar z}\,\theta_\varepsilon\,d\mathcal{L}^2\\[1mm]
&=-u(0)\int_\Omega \frac{1}{z}\,\frac{\partial\varphi}{\partial\bar z}\,d\mathcal{L}^2
=-u(0)\Bigl\langle \frac{1}{z},\frac{\partial\varphi}{\partial\bar z}\Bigr\rangle
=u(0)\Bigl\langle \frac{\partial}{\partial\bar z}\Bigl[\frac{1}{z}\Bigr],\varphi\Bigr\rangle
=\pi u(0)\,\langle\delta,\varphi\rangle.
\end{aligned}
\tag{7.6.29}
\]
For the second equality in (7.6.29), for each ε ∈ (0, 1) fixed, we used integration by parts (cf. (13.7.4)) on D\B(0, ε/2), where D is a bounded open subset of Ω with the property that supp ϕ ⊂ D. At this step it is also useful to recall (7.6.23). For the third equality in (7.6.29) we applied Lebesgue's dominated convergence theorem, while the last equality is based on Theorem 7.32. Now (7.6.19) follows by combining (7.6.25)–(7.6.29), since ϕ is arbitrary in C₀^∞(Ω).

Exercise 7.36. Let Ω be an open set in C and let z₀ ∈ Ω. Suppose u ∈ C⁰(Ω) is such that ∂u/∂z̄ ∈ L¹_loc(Ω) and (1/(z−z₀))(∂u/∂z̄) ∈ L¹_loc(Ω). Then u/(z−z₀) ∈ L¹_loc(Ω) and
\[
\frac{\partial}{\partial\bar z}\Bigl[\frac{u}{z-z_0}\Bigr]
=\pi u(z_0)\,\delta_{z_0}+\frac{1}{z-z_0}\,\frac{\partial u}{\partial\bar z}
\quad\text{in } \mathcal{D}'(\Omega).
\tag{7.6.30}
\]

Proposition 7.37. Given any complex-valued function ϕ ∈ S(R), define the Cauchy operator
\[
(\mathcal{C}\varphi)(z):=\frac{1}{2\pi i}\int_{\mathbb{R}}\frac{\varphi(x)}{x-z}\,dx,
\qquad \forall\,z\in\mathbb{C}\setminus\mathbb{R}.
\tag{7.6.31}
\]
Then the following Plemelj jump-formula holds at every x ∈ R:
\[
\lim_{y\to0^\pm}(\mathcal{C}\varphi)(x+iy)
=\pm\frac12\,\varphi(x)
+\frac{1}{2\pi i}\lim_{\varepsilon\to0^+}
\int_{\substack{y\in\mathbb{R}\\ |x-y|>\varepsilon}}\frac{\varphi(y)}{y-x}\,dy.
\tag{7.6.32}
\]
Proof. Apply Corollary 4.78 to the function Φ : R²\{(0,0)} → C given by Φ(x, y) := −1/(2πi(x+iy)) for all (x, y) ∈ R²\{(0,0)}. Note that Φ is C^∞, odd, and homogeneous of degree −1 and, under the canonical identification R² ≡ C, takes the form Φ(z) = −1/(2πiz) for z ∈ C\{0}. Proposition 7.33 then gives that Φ̂(ξ) = 1/ξ for all ξ ∈ C\{0} which, in particular, yields Φ̂(0, 1) = −i. Having established this, (7.6.32) follows directly from (4.7.45).

Remark 7.38. Upon recalling formula (4.9.30) for the Hilbert transform H on the real line, we may recast the version of the Plemelj jump-formula (7.6.32) corresponding to considering the Cauchy operator in the upper half-plane in the form
\[
\mathcal{C}\varphi\Big|^{\mathrm{ver}}_{\partial\mathbb{R}^2_+}
=\frac12\,\varphi+\frac{i}{2\pi}\,H\varphi
\quad\text{in } \mathbb{R},\ \forall\,\varphi\in\mathcal{S}(\mathbb{R}),
\tag{7.6.33}
\]
where the "vertical limit" of Cϕ to the boundary of the upper half-plane is understood as in (4.8.20). In turn, formula (7.6.33) suggests the consideration of the operator (with I denoting the identity)
\[
P:=\frac12\,I+\frac{i}{2\pi}\,H.
\tag{7.6.34}
\]
From Corollary 4.95 it follows that P is a well-defined, linear, and bounded operator on L²(R). Using the fact that H² = −π²I and H* = −H on L²(R) (again, see Corollary 4.95), we may then compute
\[
P^2=\Bigl(\frac12\,I+\frac{i}{2\pi}\,H\Bigr)^2
=\frac14\,I+\frac{i}{2\pi}\,H+\Bigl(\frac{i}{2\pi}\Bigr)^2H^2
=\frac14\,I+\frac{i}{2\pi}\,H+\frac14\,I
=\frac12\,I+\frac{i}{2\pi}\,H=P,
\tag{7.6.35}
\]
and
\[
P^*=\frac12\,I-\frac{i}{2\pi}\,H^*
=\frac12\,I+\frac{i}{2\pi}\,H=P.
\tag{7.6.36}
\]
Any linear and bounded operator on L²(R) satisfying these two properties (i.e., P² = P and P* = P) is called a projection. Then one may readily verify that I − P = (1/2)I − (i/2π)H is also a projection and, if we introduce (what are commonly referred to as Hardy spaces on the real line)
\[
H^2_\pm(\mathbb{R}):=\Bigl\{\Bigl(\frac12\,I\pm\frac{i}{2\pi}\,H\Bigr)f:\ f\in L^2(\mathbb{R})\Bigr\},
\tag{7.6.37}
\]
then any complex-valued function f ∈ L²(R) may be uniquely decomposed as f = f₊ + f₋ with f± ∈ H²_±(R) and, moreover, any two functions f± ∈ H²_±(R) are orthogonal, in the sense that
\[
\int_{\mathbb{R}}f_+(x)\,\overline{f_-(x)}\,dx=0.
\tag{7.6.38}
\]
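The identity P² = P in (7.6.35) is pure algebra in I and H once the relation H² = −π²I is granted. The following SymPy sketch, added here as an illustration and not part of the original text, verifies it by treating H as a commuting indeterminate and reducing modulo the polynomial relation H² + π² = 0.

```python
import sympy as sp

H = sp.symbols('H')  # stands for the Hilbert transform; the identity I is 1
P = sp.Rational(1, 2) + (sp.I/(2*sp.pi))*H

# Reduce P^2 - P modulo the relation H^2 = -pi^2 (i.e., H^2 + pi^2 = 0)
remainder = sp.rem(sp.expand(P**2 - P), H**2 + sp.pi**2, H)
assert sp.simplify(remainder) == 0  # hence P^2 = P, as in (7.6.35)
```

The same reduction with P replaced by 1 − P confirms that I − P is a projection as well.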
7.7 Fundamental Solutions for the Dirac Operator
In a nutshell, Dirac operators are first order differential operators factoring the Laplacian. When n = 1, Δ = d2 /dx2 , hence if we set D := i(d/dx), then D2 = −Δ. We seek a higher-dimensional generalization of the latter factorization formula. The natural context in which such a generalization may be carried out is the Clifford algebra setting.
The Clifford algebra with n generators, C_n, is the associative algebra with unit (C_n, ⊙, +, 1), freely generated over R by the family {e_j}_{1≤j≤n} of the standard orthonormal basis vectors in Rⁿ, now called imaginary units, subject to the axioms
\[
e_j\odot e_k+e_k\odot e_j=-2\delta_{jk},\qquad \forall\,j,k\in\{1,\dots,n\}.
\tag{7.7.1}
\]
Hence,
\[
e_j\odot e_k=-e_k\odot e_j\ \text{ if } 1\le j\neq k\le n,
\qquad\text{and}\qquad
e_j\odot e_j=-1\ \text{ for } j\in\{1,\dots,n\}.
\tag{7.7.2}
\]
The first condition above indicates that C_n is noncommutative if n > 1, while the second condition justifies calling {e_j}_{1≤j≤n} imaginary units. Elements in the Clifford algebra C_n can be uniquely written in the form
\[
a=\sum_{l=0}^{n}\sum_{|I|=l}a_I\,e_I
\tag{7.7.3}
\]
with a_I ∈ C, where e_I stands for the product e_{i₁} ⊙ e_{i₂} ⊙ ⋯ ⊙ e_{i_l} whenever I = (i₁, i₂, …, i_l) with 1 ≤ i₁ < i₂ < ⋯ < i_l ≤ n, e_∅ := 1 ∈ R (which plays the role of the multiplicative unit in C_n), and Σ_{|I|=l} indicates that the sum is performed over strictly increasingly ordered indexes I with l components (selected from the set {1, …, n}). In the writing (7.7.3) we shall refer to the numbers a_I ∈ C as the scalar components of a.
n
aI b I ,
a :=
n
l=0 |I|=l
aI e I ,
(7.7.4)
l=0 |I|=l
and abbreviate Ma b := a ! b. Prove that Ma b = Ma b
and
(Ma b , c) = (b , Ma c)
for every
a, b, c ∈ Cn .
(7.7.5)
Clifford algebra-valued functions defined in an open set Ω ⊆ Rⁿ may be defined naturally. Specifically, any function f : Ω → C_n is an object of the form
\[
f=\sum_{l=0}^{n}\sum_{|I|=l}f_I\,e_I,
\tag{7.7.6}
\]
where each f_I is a real-valued function defined in Ω. Given any k ∈ N₀ ∪ {∞}, we shall denote by C^k(Ω, C_n) the collection of all Clifford algebra-valued functions f whose scalar components f_I are of class C^k in Ω. In a similar manner we may define C₀^∞(Ω, C_n), L¹_loc(Ω, C_n), L^p(Ω, C_n), etc. In fact, we may also consider Clifford algebra-valued distributions in an open set Ω ⊆ Rⁿ. Specifically, write u ∈ D′(Ω, C_n) provided
\[
u=\sum_{l=0}^{n}\sum_{|I|=l}u_I\,e_I
\qquad\text{with } u_I\in\mathcal{D}'(\Omega) \text{ for each } I.
\tag{7.7.7}
\]
Much of the theory originally developed for scalar-valued distributions naturally extends to this setting. For example, if u ∈ D′(Ω, C_n) is as in (7.7.7), we may define
\[
\partial^\alpha u:=\sum_{l=0}^{n}\sum_{|I|=l}\bigl(\partial^\alpha u_I\bigr)e_I,
\qquad \forall\,\alpha\in\mathbb{N}_0^n,
\tag{7.7.8}
\]
and
\[
f\odot u:=\sum_{l=0}^{n}\sum_{|J|=l}\sum_{k=0}^{n}\sum_{|I|=k}
f_J\,u_I\;e_J\odot e_I,
\tag{7.7.9}
\]
for any f = Σ_{k=0}^{n}Σ_{|J|=k} f_J e_J ∈ C^∞(Ω, C_n).

We are ready to introduce the Dirac operator D associated with C_n. Specifically, given a Clifford algebra-valued distribution u ∈ D′(Ω, C_n) as in (7.7.7), we define Du ∈ D′(Ω, C_n) by setting
\[
Du:=\sum_{j=1}^{n}\sum_{l=0}^{n}\sum_{|I|=l}
\bigl(\partial_j u_I\bigr)\,e_j\odot e_I.
\tag{7.7.10}
\]
In other words,
\[
D=\sum_{j=1}^{n}M_{e_j}\partial_j,
\tag{7.7.11}
\]
where M_{e_j} denotes the operator of Clifford algebra multiplication by e_j from the left.

Proposition 7.40. Let Ω ⊆ Rⁿ be an open set. Then the Dirac operator D satisfies
\[
D^2=-\Delta \quad\text{in } \mathcal{D}'(\Omega, C_n),
\tag{7.7.12}
\]
where Δ is the Laplacian.

Proof. Pick an arbitrary u ∈ D′(Ω, C_n), say u = Σ_{l=0}^{n}Σ_{|I|=l}u_I e_I with u_I ∈ D′(Ω) for each I. Then using (7.7.10) twice yields
\[
D^2u=D(Du)=\sum_{j,k=1}^{n}\sum_{l=0}^{n}\sum_{|I|=l}
\partial_k\partial_j u_I\;e_k\odot e_j\odot e_I.
\tag{7.7.13}
\]
Observe that, on the one hand,
\[
\sum_{1\le j\neq k\le n}\sum_{l=0}^{n}\sum_{|I|=l}
\partial_k\partial_j u_I\;e_k\odot e_j\odot e_I=0,
\tag{7.7.14}
\]
since for each I we have ∂_k∂_j u_I = ∂_j∂_k u_I and e_k⊙e_j⊙e_I = −e_j⊙e_k⊙e_I whenever 1 ≤ j ≠ k ≤ n, by the first formula in (7.7.2). On the other hand, corresponding to the case when j = k,
\[
\sum_{k=1}^{n}\sum_{l=0}^{n}\sum_{|I|=l}\partial_k^2 u_I\;e_k\odot e_k\odot e_I
=-\sum_{k=1}^{n}\sum_{l=0}^{n}\sum_{|I|=l}\partial_k^2 u_I\;e_I=-\Delta u,
\tag{7.7.15}
\]
since e_k⊙e_k = −1 by the second formula in (7.7.2).

Exercise 7.41. Consider the embedding Rⁿ ↪ C_n,
\[
\mathbb{R}^n\ni x=(x_j)_{1\le j\le n}\equiv\sum_{j=1}^{n}x_j\,e_j\in C_n,
\tag{7.7.16}
\]
which identifies vectors from Rⁿ with elements in the Clifford algebra C_n. With this identification in mind, show that
\[
x\odot x=-|x|^2 \quad\text{for any } x\in\mathbb{R}^n,
\tag{7.7.17}
\]
\[
x\odot y+y\odot x=-2\,x\cdot y \quad\text{for any } x,y\in\mathbb{R}^n.
\tag{7.7.18}
\]
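To make the axioms above concrete, here is a small self-contained Python sketch, added to this text as an illustration (it is not the book's code), of the Clifford product on basis blades. It is used below to check the anticommutation axiom (7.7.1) and the identity (7.7.17) for n = 4.

```python
from itertools import product

def blade_mul(I, J):
    """Multiply basis blades e_I * e_J (I, J sorted tuples of indices).
    Returns (sign, K) with e_I * e_J = sign * e_K, using e_j * e_j = -1."""
    sign, K = 1, list(I)
    for j in J:
        # move e_j leftward past the elements of K that exceed j
        flips = sum(1 for k in K if k > j)
        sign *= (-1) ** flips
        if j in K:
            K.remove(j)
            sign *= -1          # contraction: e_j * e_j = -1
        else:
            K.append(j)
            K.sort()
    return sign, tuple(K)

def cl_mul(a, b):
    """Product of Clifford numbers given as dicts {blade tuple: coefficient}."""
    out = {}
    for (I, x_), (J, y_) in product(a.items(), b.items()):
        s, K = blade_mul(I, J)
        out[K] = out.get(K, 0) + s * x_ * y_
    return {K: c for K, c in out.items() if c != 0}

n = 4
e = [{(j,): 1} for j in range(1, n + 1)]

# axiom (7.7.1): e_j * e_k + e_k * e_j = -2 delta_jk
for j in range(n):
    for k in range(n):
        s, t = cl_mul(e[j], e[k]), cl_mul(e[k], e[j])
        anti = {K: s.get(K, 0) + t.get(K, 0) for K in set(s) | set(t)}
        anti = {K: c for K, c in anti.items() if c != 0}
        assert anti == ({(): -2} if j == k else {})

# (7.7.17): x * x = -|x|^2 for x = (1, 2, 3, 4)
x = {(j,): j for j in range(1, n + 1)}
assert cl_mul(x, x) == {(): -sum(j * j for j in range(1, n + 1))}
```

The cross terms in x ⊙ x cancel in pairs by anticommutativity, leaving only the scalar −Σ x_j², exactly as in the proof of (7.7.17).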
For example, in light of the embedding described in (7.7.16), we may regard the assignment Rⁿ\{0} ∋ x ↦ x/|x|ⁿ as a Clifford algebra-valued function. Proposition 7.40 combined with Remark 5.6 yields the following result.

Theorem 7.42. The Clifford algebra-valued function
\[
E(x):=-\frac{1}{\omega_{n-1}}\,\frac{x}{|x|^n}
\in L^1_{\mathrm{loc}}(\mathbb{R}^n, C_n)\cap\mathcal{S}'(\mathbb{R}^n, C_n)
\tag{7.7.19}
\]
is a fundamental solution for the Dirac operator D in Rⁿ, i.e.,
\[
DE=\delta \quad\text{in } \mathcal{S}'(\mathbb{R}^n, C_n).
\tag{7.7.20}
\]
Moreover, any u ∈ S′(Rⁿ, C_n) satisfying Du = δ in S′(Rⁿ, C_n) is of the form E + P, where E is as in (7.7.19) and P is a Clifford algebra-valued function whose components are polynomials and such that DP = 0 in Rⁿ.

Proof. Let E_Δ be the fundamental solution for Δ as described in (7.1.12) for n ≥ 2. Then
\[
-(DE_\Delta)(x)=-\frac{1}{\omega_{n-1}}\,\frac{x}{|x|^n},
\qquad x\in\mathbb{R}^n\setminus\{0\},
\tag{7.7.21}
\]
becomes, thanks to (7.7.12) and Remark 5.6, a fundamental solution for the Dirac operator D in Rⁿ. To justify the last claim in the statement of the theorem, let F ∈ S′(Rⁿ, C_n) be an arbitrary fundamental solution for the Dirac operator D in Rⁿ. Then the tempered distribution P := F − E satisfies DP = 0 in S′(Rⁿ, C_n) and, on the
Fourier transform side, we have ξ ⊙ P̂ = 0 in S′(Rⁿ, C_n). Multiplying (in the Clifford algebra sense) this equality with the Clifford algebra-valued function with polynomial growth ξ then yields −|ξ|²P̂ = 0 in S′(Rⁿ, C_n) (cf. (7.7.17)). The latter implies supp P̂ ⊆ {0}, hence the components of P are polynomials in Rⁿ (cf. Exercise 4.35).

There are other versions of the Dirac operator D from (7.7.11) that are more in line with the classical Cauchy–Riemann operator ∂/∂z̄ = (1/2)(∂₁ + i∂₂). Specifically, in R^{n+1} set x = (x₀, x₁, x₂, …, x_n) ∈ R^{n+1} and consider
\[
D^{\pm}:=\partial_0\pm D=\partial_0\pm\sum_{j=1}^{n}M_{e_j}\partial_j.
\tag{7.7.22}
\]
Note that in the case when n = 1 the Dirac operator D⁻ corresponds to a constant multiple of the Cauchy–Riemann operator ∂/∂z̄. A reasoning similar to that used above for D also yields fundamental solutions for D±. We leave this as an exercise for the interested reader.

Exercise 7.43. Let Ω ⊆ R^{n+1} be an open set and let x = (x₀, x₁, x₂, …, x_n) ∈ R^{n+1}. Then the Dirac operators D± satisfy
\[
D^{+}D^{-}=D^{-}D^{+}=\Delta_{n+1}\quad\text{in } \mathcal{D}'(\Omega, C_n),
\tag{7.7.23}
\]
where Δ_{n+1} is the Laplacian in R^{n+1}. Moreover, use a reasoning similar to the one used in the proof of Theorem 7.42 to show that the functions
\[
E^{+}(x):=\frac{1}{\omega_n}\,
\frac{x_0+\sum_{j=1}^{n}x_j\,e_j}{|x|^{n+1}}
\in L^1_{\mathrm{loc}}(\mathbb{R}^{n+1}, C_n)\hookrightarrow\mathcal{D}'(\mathbb{R}^{n+1}, C_n),
\tag{7.7.24}
\]
\[
E^{-}(x):=\frac{1}{\omega_n}\,
\frac{x_0-\sum_{j=1}^{n}x_j\,e_j}{|x|^{n+1}}
\in L^1_{\mathrm{loc}}(\mathbb{R}^{n+1}, C_n)\hookrightarrow\mathcal{D}'(\mathbb{R}^{n+1}, C_n),
\tag{7.7.25}
\]
are fundamental solutions for the Dirac operators D⁻ and D⁺, respectively, in R^{n+1}.

We next introduce the Cauchy–Clifford operator and discuss its jump formulas in the upper and lower half-spaces.
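Before moving on, note that the n = 1 case of the factorization (7.7.23) is just the classical factorization of the two-dimensional Laplacian by the Cauchy–Riemann operators. The SymPy check below is an added illustration, under the (assumed) identification of the single imaginary unit e₁ with −i, so that D⁻ becomes ∂₀ + i∂₁ and D⁺ becomes ∂₀ − i∂₁.

```python
import sympy as sp

x0, x1 = sp.symbols('x0 x1', real=True)
f = sp.Function('f')(x0, x1)

Dm = sp.diff(f, x0) + sp.I*sp.diff(f, x1)       # D^- acting on f (with e_1 ≅ -i)
DpDm = sp.diff(Dm, x0) - sp.I*sp.diff(Dm, x1)   # then apply D^+
laplacian = sp.diff(f, x0, 2) + sp.diff(f, x1, 2)

# D^+ D^- f = Laplacian of f, the n = 1 instance of (7.7.23)
assert sp.simplify(sp.expand(DpDm) - laplacian) == 0
```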
CHAPTER 7. THE LAPLACIAN AND RELATED OPERATORS
258
lim (C ϕ)(x , xn )
\[
\lim_{x_n\to0^\pm}(\mathcal{C}\varphi)(x',x_n)
=\pm\frac12\,\varphi(x')
+\lim_{\varepsilon\to0^+}\frac{1}{\omega_{n-1}}
\int_{\substack{y'\in\mathbb{R}^{n-1}\\ |x'-y'|>\varepsilon}}
\frac{\sum_{j=1}^{n-1}(y_j-x_j)\,e_j}{|x'-y'|^{n}}
\odot e_n\odot\varphi(y')\,dy'.
\tag{7.7.27}
\]
Proof. Consider the Clifford algebra-valued function Φ : Rⁿ\{0} → C_n given by
\[
\Phi(x):=-\frac{1}{\omega_{n-1}}\,
\frac{\sum_{j=1}^{n}x_j\,e_j}{|x|^{n}}\odot e_n
\qquad\text{for each } x=(x_1,\dots,x_n)\in\mathbb{R}^n\setminus\{0\}.
\tag{7.7.28}
\]
Then Φ is C^∞, odd, and positive homogeneous of degree 1 − n in Rⁿ\{0}. As such, Corollary 4.78 may be applied (to each component of Φ). In this regard, note from Corollary 4.62 that
\[
\widehat{\Phi}(\xi)
=-\frac{1}{\omega_{n-1}}\Bigl[\sum_{j=1}^{n}
\mathcal{F}\Bigl(\frac{x_j}{|x|^{n}}\Bigr)(\xi)\,e_j\Bigr]\odot e_n
=i\Bigl[\sum_{j=1}^{n}\frac{\xi_j}{|\xi|^{2}}\,e_j\Bigr]\odot e_n
\tag{7.7.29}
\]
in S′(Rⁿ). In particular, since e_n ⊙ e_n = −1, we obtain
\[
\widehat{\Phi}(0',1)=i\,e_n\odot e_n=-i.
\tag{7.7.30}
\]
Given that, as seen from (7.7.26) and (7.7.28), we have
\[
(\mathcal{C}\varphi)(x)=\int_{\mathbb{R}^{n-1}}\Phi(x'-y',x_n)\odot\varphi(y')\,dy'
\tag{7.7.31}
\]
for each x = (x′, x_n) ∈ Rⁿ with x_n ≠ 0, the jump formulas for the Cauchy–Clifford operator in (7.7.27) follow from (4.7.45) and (7.7.30).

In parallel with the discussion in Remark 7.38, in the higher-dimensional setting we have the following connection between the Cauchy–Clifford operator and Riesz transforms.

Remark 7.45. Let R_j, j ∈ {1, …, n−1}, be the Riesz transforms in R^{n−1} (i.e., singular integral operators defined as in (4.9.10) with n − 1 in place of n). Also, recall the definition of the "vertical limit" of a function defined in Rⁿ₊ to the boundary of the upper half-space from (4.8.20). Then we may express the version of the jump-formula (7.7.27) corresponding to considering the Cauchy–Clifford operator in the upper half-space as
\[
\mathcal{C}\varphi\Big|^{\mathrm{ver}}_{\partial\mathbb{R}^n_+}
=\frac12\,\varphi-\frac{1}{\omega_{n-1}}\sum_{j=1}^{n-1}
e_j\odot e_n\odot(R_j\varphi)
\quad\text{in } \mathbb{R}^{n-1},
\tag{7.7.32}
\]
for each C_n-valued function ϕ ∈ S(R^{n−1}), where the Riesz transforms act on ϕ componentwise.
The format of the jump formula displayed in (7.7.32) suggests considering the operator acting on C_n-valued functions according to
\[
P:=\frac12\,I-\frac{1}{\omega_{n-1}}\sum_{j=1}^{n-1}e_j\odot e_n\odot R_j
=\frac12\,I+\frac{1}{\omega_{n-1}}\sum_{j=1}^{n-1}e_n\odot e_j\odot R_j,
\tag{7.7.33}
\]
where I stands for the identity operator. In the second equality in (7.7.33) the change in sign is due to the formula (cf. (7.7.2))
\[
e_j\odot e_n=-e_n\odot e_j \qquad\text{for each } j\in\{1,\dots,n-1\}.
\tag{7.7.34}
\]
Theorem 4.93 then gives that P is a well-defined, linear, and bounded operator on L²(R^{n−1}, C_n). Moreover, since each R_j has a real-valued kernel, its action commutes with multiplication by elements from C_n (i.e., for each a ∈ C_n we have R_j M_a = M_a R_j, in the notation from Exercise 7.39). Keeping this in mind and relying on (7.7.34) and the fact that e_n ⊙ e_n = −1 (cf. (7.7.2)), we may then write
\[
P^2=\Bigl(\frac12\,I+\frac{1}{\omega_{n-1}}\,e_n\odot\sum_{j=1}^{n-1}e_j\odot R_j\Bigr)^2
=\frac14\,I+\frac{1}{\omega_{n-1}}\,e_n\odot\sum_{j=1}^{n-1}e_j\odot R_j
+\frac{1}{\omega_{n-1}^2}\Bigl(\sum_{j=1}^{n-1}e_j\odot R_j\Bigr)^2.
\tag{7.7.35}
\]
Furthermore, using the fact that, as proved in Theorem 4.93, the Riesz transforms commute with one another and satisfy Σ_{j=1}^{n−1} R_j² = −(ω_{n−1}/2)² I on L²(R^{n−1}),
we may expand
\[
\Bigl(\sum_{j=1}^{n-1}e_j\odot R_j\Bigr)^2
=\sum_{j,k=1}^{n-1}e_j\odot e_k\odot R_jR_k
=\sum_{j=1}^{n-1}e_j^2\,R_j^2
+\sum_{1\le j\neq k\le n-1}e_j\odot e_k\odot R_jR_k
=-\sum_{j=1}^{n-1}R_j^2
=\Bigl(\frac{\omega_{n-1}}{2}\Bigr)^2 I,
\tag{7.7.36}
\]
where the source of the cancellation taking place in the third equality above is the observation that e_j⊙e_k⊙R_jR_k = −e_k⊙e_j⊙R_kR_j whenever we have that 1 ≤ j ≠ k ≤ n−1. Combining (7.7.35)–(7.7.36) and recalling (7.7.33) then yields
\[
P^2=P \quad\text{on } L^2(\mathbb{R}^{n-1}, C_n).
\tag{7.7.37}
\]
Note that this is in agreement with the result obtained in (7.6.35) in the two-dimensional setting. Let us also consider the higher-dimensional analogue of (7.6.36). In this regard, we first observe that, based on Exercise 7.39, for any $f, g \in L^2(\mathbb{R}^{n-1},\mathcal{C}\ell_n)$ we may write

$$
\int_{\mathbb{R}^{n-1}} \Big\langle M_{e_n} \sum_{j=1}^{n-1} M_{e_j}(R_j f)(x'),\, g(x') \Big\rangle\, dx'
= \sum_{j=1}^{n-1} \int_{\mathbb{R}^{n-1}} \big\langle (R_j f)(x'),\, M_{e_j} M_{e_n}\, g(x') \big\rangle\, dx'
$$
$$
= -\sum_{j=1}^{n-1} \int_{\mathbb{R}^{n-1}} \big\langle f(x'),\, M_{e_j} M_{e_n}\, (R_j g)(x') \big\rangle\, dx'
= \int_{\mathbb{R}^{n-1}} \Big\langle f(x'),\, M_{e_n} \sum_{j=1}^{n-1} M_{e_j}(R_j g)(x') \Big\rangle\, dx'. \tag{7.7.38}
$$

From this and (7.7.33) it follows that for every $f, g \in L^2(\mathbb{R}^{n-1},\mathcal{C}\ell_n)$ we have

$$
\int_{\mathbb{R}^{n-1}} \langle (Pf)(x'),\, g(x') \rangle\, dx' = \int_{\mathbb{R}^{n-1}} \langle f(x'),\, (Pg)(x') \rangle\, dx', \tag{7.7.39}
$$

a condition that we shall interpret simply as

$$
P^* = P \quad \text{on } L^2(\mathbb{R}^{n-1},\mathcal{C}\ell_n). \tag{7.7.40}
$$

In summary, the above analysis shows that the operator $P$ defined as in (7.7.33) is a projection on $L^2(\mathbb{R}^{n-1},\mathcal{C}\ell_n)$. Starting from this result, a corresponding higher-dimensional Hardy space theory may be developed in the Clifford-algebra setting as well.
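To make the projection property concrete, here is a small numerical sketch (not from the text) of the scalar two-dimensional analogue from Sect. 7.6: on the Fourier side, the Hardy-space projection $\frac{1}{2}(I + iH)$ — with $H$ the Hilbert transform, assumed here to have symbol $-i\,\mathrm{sgn}\,\xi$ — acts as multiplication by $m(\xi) = \frac{1}{2}(1 + \mathrm{sgn}\,\xi)$, and the identities $P^2 = P$ and $P^* = P$ reduce to $m^2 = m$ and $m$ being real-valued.

```python
import numpy as np

# Scalar analogue of the projection P: on the Fourier side, P = (I + iH)/2
# acts as multiplication by m(xi) = (1 + sign(xi))/2, where the Hilbert
# transform H is taken with symbol -i*sign(xi) (a normalization assumption).
xi = np.linspace(-5.0, 5.0, 1001)
xi = xi[np.abs(xi) > 1e-12]          # avoid the ambiguous point xi = 0
m = 0.5 * (1.0 + np.sign(xi))        # multiplier of P

assert np.allclose(m * m, m)         # P^2 = P on the symbol side
assert set(np.unique(m)) <= {0.0, 1.0}   # m is a real 0/1-valued multiplier
```

Since $m$ takes only the values $0$ and $1$, the operator is an orthogonal projection onto the positive-frequency subspace, mirroring (7.7.37) and (7.7.40).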
7.8 Fundamental Solutions for General Second-Order Operators
Consider a constant, complex coefficient, homogeneous, second-order differential operator $L = \sum_{j,k=1}^{n} a_{jk}\,\partial_j\partial_k$ in $\mathbb{R}^n$. In the first stage, our goal is to find necessary and sufficient conditions, that can be expressed without reference to the theory of distributions, guaranteeing that a function $E \in L^1_{loc}(\mathbb{R}^n)$ is a fundamental solution for $L$. Several necessary conditions readily present themselves. First, it is clear that $LE$ is the zero distribution in $\mathbb{R}^n\setminus\{0\}$. In addition, if $L$ is elliptic then this necessarily implies that $E \in C^\infty(\mathbb{R}^n\setminus\{0\})$. In the absence of ellipticity as a hypothesis for $L$, we may wish to assume that $E$ is reasonably regular, say $E \in C^2(\mathbb{R}^n\setminus\{0\})$. Second, if the fundamental solution $E$ is a priori known to be a tempered distribution, then $-L(\xi)\widehat{E}(\xi) = 1$ in $\mathcal{S}'(\mathbb{R}^n)$. If $L$ is elliptic and $n \ge 3$, this forces $E = E_0 + P$, where $E_0 := -\mathcal{F}^{-1}\big[L(\xi)^{-1}\big]$ is homogeneous of degree $2-n$ and $P$ is a polynomial that is annihilated by $L$. Hence, working with $E_0$ in place of $E$, there is no loss of generality in assuming that $E$ is homogeneous of degree $2-n$. The case $n = 2$ may also be included in this discussion by demanding that $\nabla E$ is homogeneous of degree $1-n$. In summary, it is reasonable to restrict our search for a fundamental solution for $L$ to the class of functions $E \in C^2(\mathbb{R}^n\setminus\{0\}) \cap L^1_{loc}(\mathbb{R}^n)$ with the property that $\nabla E$ is positive homogeneous of degree $1-n$ in $\mathbb{R}^n\setminus\{0\}$. However, these conditions do not rule out such trivial candidates as the zero distribution. In Theorem 7.46 we identify the key nondegeneracy property (7.8.2) guaranteeing that $E$ is in fact a fundamental solution. Theorem 7.46 is then later used to find an explicit formula for such a fundamental solution, under a strong ellipticity assumption on $L$ (cf. Theorem 7.54).

Theorem 7.46. Assume that $n \ge 2$ and consider

$$
L = \sum_{j,k=1}^{n} a_{jk}\,\partial_j\partial_k, \qquad a_{jk} \in \mathbb{C}. \tag{7.8.1}
$$

Then for a function $E \in C^2(\mathbb{R}^n\setminus\{0\}) \cap L^1_{loc}(\mathbb{R}^n)$ with the property that $\nabla E$ is positive homogeneous of degree $1-n$ in $\mathbb{R}^n\setminus\{0\}$, the following statements are equivalent:

(1) When viewed in $L^1_{loc}(\mathbb{R}^n)$, the function $E$ is a fundamental solution for $L$ in $\mathbb{R}^n$;

(2) One has $LE = 0$ pointwise in $\mathbb{R}^n\setminus\{0\}$ and

$$
\sum_{j,k=1}^{n} \int_{S^{n-1}} a_{jk}\,\omega_j\,\partial_k E(\omega)\, d\sigma(\omega) = 1. \tag{7.8.2}
$$
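As an illustrative sanity check (a sketch, not part of the text), the normalization (7.8.2) can be verified directly for the Laplacian in $\mathbb{R}^3$: taking $a_{jk} = \delta_{jk}$ and $E_\Delta(x) = -1/(4\pi|x|)$, one has $\partial_k E_\Delta(x) = x_k/(4\pi|x|^3)$, so the conormal derivative is the constant $1/(4\pi)$ on $S^2$ and its surface integral is exactly $1$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random points on S^2; the integrand of (7.8.2) for the Laplacian is
# omega . grad E_Delta(omega) with E_Delta(x) = -1/(4*pi*|x|), i.e. the
# constant 1/(4*pi) on the unit sphere.
w = rng.normal(size=(100_000, 3))
w /= np.linalg.norm(w, axis=1, keepdims=True)

grad_E = w / (4.0 * np.pi * np.linalg.norm(w, axis=1, keepdims=True) ** 3)
integrand = np.sum(w * grad_E, axis=1)   # identically 1/(4*pi)

area_S2 = 4.0 * np.pi                    # sigma(S^2) = omega_2
integral = area_S2 * integrand.mean()    # Monte Carlo value of (7.8.2)
assert abs(integral - 1.0) < 1e-9
```

Because the integrand is constant on the sphere, the Monte Carlo average is exact up to floating-point error, so the check is deterministic.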
Remark 7.47. (i) In the partial differential equation parlance, the integrand in (7.8.2), that is, the expression $\sum_{j,k=1}^{n} a_{jk}\,\omega_j\,\partial_k E(\omega)$, is referred to as the conormal derivative of $E$ on $S^{n-1}$.

(ii) One remarkable aspect of Theorem 7.46 is that the description of a fundamental solution from part (2) is purely in terms of ordinary calculus (i.e., without any reference to the theory of distributions).

Proof of Theorem 7.46. Let $E \in C^2(\mathbb{R}^n\setminus\{0\}) \cap L^1_{loc}(\mathbb{R}^n)$ with the property that $\nabla E$ is positive homogeneous of degree $1-n$ in $\mathbb{R}^n\setminus\{0\}$. Exercise 4.51 then implies that $\nabla E \in L^1_{loc}(\mathbb{R}^n)$. Fix an arbitrary $f \in C_0^\infty(\mathbb{R}^n)$. Then making use of (4.4.19) and Proposition 4.67 (applied to each $\partial_k E$) we obtain
$$
(LE) * f = \sum_{j,k=1}^{n} a_{jk}\,\big[\partial_j(\partial_k E)\big] * f
$$
$$
= \Big( \sum_{j,k=1}^{n} \int_{S^{n-1}} a_{jk}\,\partial_k E(\omega)\,\omega_j\, d\sigma(\omega) \Big)(\delta * f)
+ \sum_{j,k=1}^{n} a_{jk}\,\mathrm{P.V.}(\partial_j\partial_k E) * f
$$
$$
= \Big( \sum_{j,k=1}^{n} \int_{S^{n-1}} a_{jk}\,\partial_k E(\omega)\,\omega_j\, d\sigma(\omega) \Big) f
+ \lim_{\varepsilon\to 0^+} \int_{|y-\cdot|>\varepsilon} \sum_{j,k=1}^{n} a_{jk}\,(\partial_j\partial_k E)(\cdot - y)\, f(y)\, dy
\quad \text{in } \mathcal{D}'(\mathbb{R}^n), \tag{7.8.3}
$$

where we have also used the fact that $\delta * f = f$. Having proved this, we now turn in earnest to the proof of the equivalence in the statement of the theorem. First, assume that $E$ is a fundamental solution for $L$ in $\mathbb{R}^n$. Then $LE = \delta$ in $\mathcal{D}'(\mathbb{R}^n)$ implies that $L\big(E\big|_{\mathbb{R}^n\setminus\{0\}}\big) = 0$ in $\mathcal{D}'(\mathbb{R}^n\setminus\{0\})$. Since by assumption $E\big|_{\mathbb{R}^n\setminus\{0\}}$ belongs to $C^2(\mathbb{R}^n\setminus\{0\})$, we arrive at the conclusion that $LE = 0$ pointwise in $\mathbb{R}^n\setminus\{0\}$. Explicitly,

$$
\sum_{j,k=1}^{n} a_{jk}\,\partial_j\partial_k E(x) = 0, \qquad \forall\, x \in \mathbb{R}^n\setminus\{0\}. \tag{7.8.4}
$$
This proves the first claim in (2). Next, for each function $f \in C_0^\infty(\mathbb{R}^n)$ we may write $f = \delta * f = (LE) * f$ in $\mathcal{D}'(\mathbb{R}^n)$ which, in light of (7.8.3) and (7.8.4), forces

$$
f = \Big( \sum_{j,k=1}^{n} \int_{S^{n-1}} a_{jk}\,\partial_k E(\omega)\,\omega_j\, d\sigma(\omega) \Big) f. \tag{7.8.5}
$$

Since $f \in C_0^\infty(\mathbb{R}^n)$ was arbitrary, (7.8.2) follows. This finishes the proof of $(1) \Rightarrow (2)$.

Conversely, suppose that $E \in C^2(\mathbb{R}^n\setminus\{0\}) \cap L^1_{loc}(\mathbb{R}^n)$ with the property that $\nabla E$ is positive homogeneous of degree $1-n$ in $\mathbb{R}^n\setminus\{0\}$, such that $LE = 0$ pointwise in $\mathbb{R}^n\setminus\{0\}$, and (7.8.2) holds. Then for each $f \in C_0^\infty(\mathbb{R}^n)$ formula (7.8.3) simply reduces to $(LE) * f = f$. Now Exercise 2.88 may be invoked to conclude that $LE = \delta$ in $\mathcal{D}'(\mathbb{R}^n)$, as wanted.

Next, we turn to the task of finding all fundamental solutions that are tempered distributions for general homogeneous, second-order, constant coefficient operators that are strongly elliptic. We begin by defining this stronger (than originally introduced in Definition 6.13) notion of ellipticity. Let $A = (a_{jk})_{1\le j,k\le n} \in M_{n\times n}(\mathbb{C})$ and, associated to such a matrix $A$, consider the operator

$$
L_A := L_A(\partial) := \sum_{j,k=1}^{n} a_{jk}\,\partial_j\partial_k. \tag{7.8.6}
$$
This is a homogeneous, second-order, constant coefficient operator for which the ellipticity condition reads

$$
L_A(\xi) := \sum_{j,k=1}^{n} a_{jk}\,\xi_j\xi_k \ne 0, \qquad \forall\, \xi = (\xi_1,\dots,\xi_n) \in \mathbb{R}^n\setminus\{0\}. \tag{7.8.7}
$$

As a trivial consequence of the Malgrange–Ehrenpreis theorem (cf. Theorem 5.9), any elliptic operator has a fundamental solution. The goal is to obtain explicit formulas for such fundamental solutions for a subclass of homogeneous, elliptic, second-order, constant coefficient operators satisfying a stronger condition than (7.8.7).

Definition 7.48. Call an operator $L_A$ as in (7.8.6) strongly elliptic if there exists a constant $C \in (0,\infty)$ such that

$$
\mathrm{Re}\Big( \sum_{j,k=1}^{n} a_{jk}\,\xi_j\xi_k \Big) \ge C|\xi|^2, \qquad \forall\, \xi = (\xi_1,\dots,\xi_n) \in \mathbb{R}^n. \tag{7.8.8}
$$

By extension, call a matrix $A = (a_{jk})_{1\le j,k\le n} \in M_{n\times n}(\mathbb{C})$ strongly elliptic provided there exists some $C \in (0,\infty)$ with the property that (7.8.8) holds.

Remark 7.49. (1) It is obvious that any operator $L_A$ as in (7.8.6) that is strongly elliptic is elliptic.

(2) Up to changing $L$ to $-L$, any elliptic, homogeneous, second-order, constant coefficient differential operator $L$ with real coefficients is strongly elliptic. To see why this is the case, let $A \in M_{n\times n}(\mathbb{R})$ and suppose that $L_A$ is elliptic. Consider the function $f : S^{n-1} \to \mathbb{R}$ defined by $f(\xi) := \sum_{j,k=1}^{n} a_{jk}\,\xi_j\xi_k$ for $\xi \in S^{n-1}$. Then $f$ is continuous and, since $L_A$ is elliptic, the number $0$ is not in the image of $f$. The unit sphere $S^{n-1}$ being compact and connected, it is mapped by $f$ onto a compact, connected subset of $\mathbb{R}$ not containing $0$. This forces the image of $f$ to be a compact interval that does not contain $0$. Hence, there exists $c \in (0,\infty)$ with the property that either $f(\xi) \ge c$ for every $\xi \in S^{n-1}$ or $-f(\xi) \ge c$ for every $\xi \in S^{n-1}$. This implies that either $f(\xi/|\xi|) \ge c$ for every $\xi \in \mathbb{R}^n\setminus\{0\}$ or $-f(\xi/|\xi|) \ge c$ for every $\xi \in \mathbb{R}^n\setminus\{0\}$, or equivalently, that either $L_A$ or $-L_A$ is strongly elliptic.

(3) Consider the operator $L = \partial_1^2 + i\partial_2^2$ in $\mathbb{R}^2$. Then $L$ is a homogeneous, second-order, constant coefficient differential operator with $L(\xi) = \xi_1^2 + i\xi_2^2$, $\xi = (\xi_1,\xi_2) \in \mathbb{R}^2$. Clearly $L(\xi) \ne 0$ if $\xi \ne 0$, so $L$ is elliptic. However, $L$ is not strongly elliptic since $\mathrm{Re}\,[L(\xi)] = \xi_1^2$, which cannot be bounded from below by a constant multiple of $|\xi|^2$, as may be seen by taking $\xi = (0,\xi_2)$ with $\xi_2 \ne 0$.
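A numerical sketch of Definition 7.48 and Remark 7.49 (illustrative only; the matrix below is an arbitrary choice): for a real symmetric $A$ the optimal constant in (7.8.8) is the least eigenvalue of $A$, while the symbol of $L = \partial_1^2 + i\partial_2^2$ never vanishes on $S^1$ yet has a real part that does, so (7.8.8) fails for every $C > 0$.

```python
import numpy as np

# (7.8.8) for a real symmetric A: the best constant C is the smallest
# eigenvalue of A, since (A xi).xi >= lambda_min |xi|^2 with equality
# attained along the corresponding eigenvector.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
theta = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
xi = np.stack([np.cos(theta), np.sin(theta)])     # points of S^1
quad = np.einsum('jk,jt,kt->t', A, xi, xi)        # (A xi).xi on S^1
assert abs(quad.min() - np.linalg.eigvalsh(A).min()) < 1e-4

# Remark 7.49(3): the symbol xi_1^2 + i xi_2^2 is nonzero on S^1
# (ellipticity) but its real part vanishes at xi = (0, +-1).
symbol = xi[0] ** 2 + 1j * xi[1] ** 2
assert np.abs(symbol).min() > 0.0                 # elliptic
assert symbol.real.min() < 1e-9                   # not strongly elliptic
```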
Fix $A = (a_{jk})_{1\le j,k\le n} \in M_{n\times n}(\mathbb{C})$ and consider the operator $L_A$ as in (7.8.6). Due to the symmetry of mixed partial derivatives in the sense of distributions, it is immediate that

$$
L_A = L_{A_{sym}}, \quad \text{where } A_{sym} := \frac{A + A^\top}{2}. \tag{7.8.9}
$$

As such, any fundamental solution for $L_{A_{sym}}$ is also a fundamental solution for $L_A$. Also, since $(A_{sym}\xi)\cdot\xi = (A\xi)\cdot\xi$ for each $\xi \in \mathbb{R}^n$, we have that

$$
L_A \text{ is strongly elliptic if and only if } L_{A_{sym}} \text{ is strongly elliptic}. \tag{7.8.10}
$$

Consequently, when computing the fundamental solution for $L_A$ we may assume without loss of generality that $A$ is symmetric, that is, $A = A^\top$. For further reference we summarize a few basic properties of symmetric matrices (throughout, the symbol dot denotes the real inner product of vectors with complex components):

$$
A \in M_{n\times n}(\mathbb{R}),\ A = A^\top \Longrightarrow (A\zeta)\cdot\bar\zeta \in \mathbb{R}, \qquad \forall\, \zeta \in \mathbb{C}^n, \tag{7.8.11}
$$
$$
A \in M_{n\times n}(\mathbb{C}),\ A = A^\top \Longrightarrow \mathrm{Re}\,A = (\mathrm{Re}\,A)^\top \ \text{and}\ \mathrm{Im}\,A = (\mathrm{Im}\,A)^\top, \tag{7.8.12}
$$
$$
A \in M_{n\times n}(\mathbb{C}),\ A = A^\top \Longrightarrow \mathrm{Re}\big[(A\zeta)\cdot\bar\zeta\,\big] = \big((\mathrm{Re}\,A)\zeta\big)\cdot\bar\zeta, \qquad \forall\, \zeta \in \mathbb{C}^n. \tag{7.8.13}
$$

It is easy to see that (7.8.11)–(7.8.12) hold, while (7.8.13) follows from (7.8.11)–(7.8.12) after writing $A = \mathrm{Re}\,A + i\,\mathrm{Im}\,A$. Also, recall that a matrix $A \in M_{n\times n}(\mathbb{C})$ is said to be

positive definite provided $(A\zeta)\cdot\bar\zeta$ is real and strictly positive for each $\zeta \in \mathbb{C}^n\setminus\{0\}$. (7.8.14)

It is easy to see that any positive definite matrix $A \in M_{n\times n}(\mathbb{C})$ satisfies $A^* = A$, and there exists $c \in (0,\infty)$ such that

$$
(A\zeta)\cdot\bar\zeta \ge c|\zeta|^2, \qquad \forall\, \zeta \in \mathbb{C}^n. \tag{7.8.15}
$$
Remark 7.50. Fix a matrix $A \in M_{n\times n}(\mathbb{C})$ that is symmetric and satisfies (7.8.8). Then, for each $\zeta \in \mathbb{C}^n$ we have (with $C \in (0,\infty)$ as in (7.8.8))

$$
\mathrm{Re}\big[(A\zeta)\cdot\bar\zeta\,\big]
= \mathrm{Re}\Big[\big(A(\mathrm{Re}\,\zeta + i\,\mathrm{Im}\,\zeta)\big)\cdot(\mathrm{Re}\,\zeta - i\,\mathrm{Im}\,\zeta)\Big]
= \mathrm{Re}\Big[(A\,\mathrm{Re}\,\zeta)\cdot\mathrm{Re}\,\zeta + (A\,\mathrm{Im}\,\zeta)\cdot\mathrm{Im}\,\zeta\Big]
\ge C|\mathrm{Re}\,\zeta|^2 + C|\mathrm{Im}\,\zeta|^2 = C|\zeta|^2. \tag{7.8.16}
$$

The second equality in (7.8.16) uses the fact that $A$ is symmetric, while (7.8.8) is used for the inequality in (7.8.16). Thus, combining (7.8.16) with the Cauchy–Schwarz inequality, we obtain

$$
|A\zeta| \ge C|\zeta| \quad \text{for every } \zeta \in \mathbb{C}^n, \tag{7.8.17}
$$

proving that the linear map $A : \mathbb{C}^n \to \mathbb{C}^n$ is injective, thus invertible. In particular, $\det A \ne 0$. Also, thanks to (7.8.13) and (7.8.16) we have that $\mathrm{Re}\,A$ is a positive definite matrix. More precisely, with $C$ as in (7.8.8), we have

$$
\big((\mathrm{Re}\,A)\zeta\big)\cdot\bar\zeta \ge C|\zeta|^2, \qquad \forall\, \zeta \in \mathbb{C}^n. \tag{7.8.18}
$$

From Remark 7.50 and definitions we see that, given any $A \in M_{n\times n}(\mathbb{C})$, the following implications hold:

$$
\mathrm{Re}\,A \text{ positive definite} \Longrightarrow A \text{ strongly elliptic}, \tag{7.8.19}
$$
$$
A \text{ strongly elliptic} \iff A_{sym} \text{ strongly elliptic}, \tag{7.8.20}
$$
$$
A \text{ strongly elliptic and symmetric} \Longrightarrow \mathrm{Re}\,A \text{ positive definite}. \tag{7.8.21}
$$
Remark 7.51. Assume that $A \in M_{n\times n}(\mathbb{C})$ is symmetric and satisfies (7.8.8). From Remark 7.50 it follows that $A$ is invertible. Moreover, if we define

$$
\|A\| := \sup\big\{ |A\zeta| : \zeta \in \mathbb{C}^n,\ |\zeta| = 1 \big\}, \tag{7.8.22}
$$

then (7.8.17) ensures that $\|A\| > 0$. We claim that

$$
\mathrm{Re}\big[(A^{-1}\zeta)\cdot\bar\zeta\,\big] \ge \frac{C}{\|A\|^2}\,|\zeta|^2, \qquad \forall\, \zeta \in \mathbb{C}^n, \tag{7.8.23}
$$

where $C$ is as in (7.8.8). To justify this, first note that $|\zeta|^2 = |AA^{-1}\zeta|^2 \le \|A\|^2\,|A^{-1}\zeta|^2$ for every $\zeta \in \mathbb{C}^n$. In turn, this and (7.8.16) permit us to estimate

$$
\mathrm{Re}\big[(A^{-1}\zeta)\cdot\bar\zeta\,\big]
= \mathrm{Re}\Big[\big(A(A^{-1}\zeta)\big)\cdot\overline{A^{-1}\zeta}\,\Big]
\ge C|A^{-1}\zeta|^2 \ge \frac{C}{\|A\|^2}\,|\zeta|^2, \qquad \forall\, \zeta \in \mathbb{C}^n. \tag{7.8.24}
$$

This proves (7.8.23). In particular, (7.8.23) yields

$$
\|A^{-1}\|\,|\xi|^2 \ge \big|(A^{-1}\xi)\cdot\xi\big| \ge \frac{C}{\|A\|^2}\,|\xi|^2, \qquad \forall\, \xi \in \mathbb{R}^n. \tag{7.8.25}
$$
Remark 7.52. Consider the set

$$
\mathcal{M} := \big\{ A \in M_{n\times n}(\mathbb{C}) : A = A^\top,\ \mathrm{Re}\,A \text{ is positive definite} \big\}. \tag{7.8.26}
$$

Since the $n\times n$ symmetric matrices $A = (a_{jk})_{1\le j,k\le n}$ with complex entries are uniquely determined by the elements $a_{jk}$ with $1 \le j \le k \le n$, we may naturally identify $\mathcal{M}$ with an open convex subset of $\mathbb{C}^{n(n+1)/2}$. Throughout, this identification is implicitly assumed. Moreover, every $A \in \mathcal{M}$ satisfies $\det A \ne 0$, since if $A\zeta = 0$ for some $\zeta \in \mathbb{C}^n$, then (7.8.13) and the fact that $\mathrm{Re}\,A$ is positive definite force $\zeta = 0$. The fact that $\mathcal{M}$ is convex implies that there is a unique analytic branch of the mapping $\mathcal{M} \ni A \mapsto (\det A)^{1/2} \in \mathbb{C}$ such that $(\det A)^{1/2} > 0$ when $A$ is real. Thus $(\det A)^{1/2}$ is unambiguously defined for $A \in \mathcal{M}$.
To proceed with the discussion regarding determining a fundamental solution for a strongly elliptic operator $L_A$, we first analyze the case when $A$ has real entries.

The case when $A$ is real, symmetric, and satisfies (7.8.8). Since the matrix $A$ is real, symmetric, and positive definite, $A$ is diagonalizable with $A = U^{-1}DU$ for some orthogonal matrix $U \in M_{n\times n}(\mathbb{R})$ and some diagonal $n\times n$ matrix $D$ whose entries on the main diagonal are strictly positive real numbers. Hence, $D^{1/2}$ is meaningfully defined as the $n\times n$ diagonal matrix whose diagonal entries are the square roots of the corresponding entries on the main diagonal of $D$. In addition, $A^{1/2} := U^{-1}D^{1/2}U$ is well defined, symmetric, invertible, $A^{1/2}A^{1/2} = A$, and $\det(A^{1/2}) = \sqrt{\det A}$.

Next, fix a function $u \in L^1_{loc}(\mathbb{R}^n)$ such that

$$
\partial_k u \in L^1_{loc}(\mathbb{R}^n) \quad \text{for each } k \in \{1,\dots,n\}, \tag{7.8.27}
$$

where the derivatives are taken in the sense of distributions. Also, consider $\varphi \in C_0^\infty(\mathbb{R}^n)$. Then, using the fact that $A$ is symmetric, we obtain

$$
\langle L_A u, \varphi \rangle = \sum_{j,k=1}^{n} a_{jk}\,\langle \partial_j\partial_k u, \varphi \rangle
= -\sum_{j,k=1}^{n} a_{jk}\,\langle \partial_k u, \partial_j\varphi \rangle
= -\int_{\mathbb{R}^n} \nabla u(x)\cdot A\nabla\varphi(x)\, dx. \tag{7.8.28}
$$

In the integral in (7.8.28) we make the change of variables $x = A^{1/2}y$. Since for every invertible matrix $B \in M_{n\times n}(\mathbb{R})$ and any function $f$ the chain rule gives

$$
(\nabla f)(By) = (B^\top)^{-1}\nabla\big[f(By)\big] \quad \text{for each } y \in \mathbb{R}^n \tag{7.8.29}
$$

with the property that $f$ is differentiable at $By$, we obtain

$$
\int_{\mathbb{R}^n} (\nabla u)(x)\cdot A(\nabla\varphi)(x)\, dx
= \big|\det A^{1/2}\big| \int_{\mathbb{R}^n} (\nabla u)(A^{1/2}y)\cdot A(\nabla\varphi)(A^{1/2}y)\, dy
$$
$$
= \sqrt{\det A} \int_{\mathbb{R}^n} \nabla\big[u(A^{1/2}y)\big]\cdot (A^{1/2})^{-1}A(A^{1/2})^{-1}\nabla\big[\varphi(A^{1/2}y)\big]\, dy
= \sqrt{\det A} \int_{\mathbb{R}^n} \nabla v(y)\cdot\nabla\psi(y)\, dy
= -\sqrt{\det A} \int_{\mathbb{R}^n} v(y)\,\Delta\psi(y)\, dy, \tag{7.8.30}
$$

where we have set

$$
v := u \circ A^{1/2} \quad \text{and} \quad \psi := \varphi \circ A^{1/2} \in C_0^\infty(\mathbb{R}^n), \tag{7.8.31}
$$
and for the last equality in (7.8.30) we integrated by parts one more time. Hence, (7.8.28) and (7.8.30) imply

$$
\langle L_A u, \varphi \rangle = \sqrt{\det A} \int_{\mathbb{R}^n} v(y)\,\Delta\psi(y)\, dy. \tag{7.8.32}
$$

In particular, if we now choose

$$
u(y) := \frac{1}{\sqrt{\det A}}\,\big(E_\Delta \circ A^{-1/2}\big)(y) \quad \text{for } y \in \mathbb{R}^n\setminus\{0\}, \tag{7.8.33}
$$

where $E_\Delta$ is the fundamental solution for the Laplacian from (7.1.12), then $u$ satisfies the conditions listed in (7.8.27), and the function $v$ from (7.8.31) becomes

$$
v(y) = \frac{1}{\sqrt{\det A}}\,E_\Delta(y) \quad \text{for } y \in \mathbb{R}^n\setminus\{0\}. \tag{7.8.34}
$$

Since $A^{1/2}(0) = 0$, we may write

$$
\varphi(0) = \big(\varphi\circ A^{1/2}\big)(0) = \psi(0) = \langle \delta, \psi \rangle = \langle \Delta E_\Delta, \psi \rangle = \langle E_\Delta, \Delta\psi \rangle
= \int_{\mathbb{R}^n} E_\Delta(y)\,\Delta\psi(y)\, dy
$$
$$
= \sqrt{\det A} \int_{\mathbb{R}^n} v(y)\,\Delta\psi(y)\, dy
= \langle L_A u, \varphi \rangle
= \frac{1}{\sqrt{\det A}}\,\Big\langle L_A\big(E_\Delta\circ A^{-1/2}\big),\, \varphi \Big\rangle, \tag{7.8.35}
$$

where for the second-to-last equality in (7.8.35) we used (7.8.32). Since (7.8.35) holds for every $\varphi \in C_0^\infty(\mathbb{R}^n)$, we may conclude that

$$
\frac{1}{\sqrt{\det A}}\,E_\Delta \circ A^{-1/2} \quad \text{is a fundamental solution for } L_A \text{ in } \mathbb{R}^n. \tag{7.8.36}
$$

Denoting this fundamental solution by $E_A$ and keeping in mind that $|A^{-1/2}x|^2 = (A^{-1/2}x)\cdot(A^{-1/2}x) = (A^{-1}x)\cdot x$ for every $x \in \mathbb{R}^n$, we obtain that the function defined by

$$
E_A(x) := \begin{cases}
\dfrac{-1}{(n-2)\,\omega_{n-1}\sqrt{\det A}}\cdot\dfrac{1}{\big[(A^{-1}x)\cdot x\big]^{\frac{n-2}{2}}} & \text{if } n \ge 3, \\[2mm]
\dfrac{1}{4\pi\sqrt{\det A}}\,\ln\big[(A^{-1}x)\cdot x\big] & \text{if } n = 2,
\end{cases} \tag{7.8.37}
$$

for every $x \in \mathbb{R}^n\setminus\{0\}$, is a fundamental solution for $L_A$ in the current case. Moreover, as is apparent from (7.8.37), (7.8.25), and Exercise 4.5, the function $E_A$ is locally integrable and a tempered distribution in $\mathbb{R}^n$.

In preparation for dealing with the case of matrices with complex entries, we state and prove the following useful complex analysis result.
Lemma 7.53. Let $N \in \mathbb{N}$ and assume $\mathcal{O}$ is an open and convex subset of $\mathbb{C}^N$ with the property that $\mathcal{O}\cap\mathbb{R}^N \ne \emptyset$ (where $\mathbb{R}^N$ is canonically embedded into $\mathbb{C}^N$). Also, suppose $f, g : \mathcal{O} \to \mathbb{C}$ are two functions that are separately holomorphic (i.e., in each scalar complex component in $\mathbb{C}^N$) such that

$$
f\big|_{\mathcal{O}\cap\mathbb{R}^N} = g\big|_{\mathcal{O}\cap\mathbb{R}^N}. \tag{7.8.38}
$$

Then $f = g$ in $\mathcal{O}$.

Proof. Fix an arbitrary point $(x_1,\dots,x_N) \in \mathcal{O}\cap\mathbb{R}^N$ and consider

$$
\mathcal{O}_1 := \big\{ z_1 \in \mathbb{C} : (z_1, x_2, \dots, x_N) \in \mathcal{O} \big\}. \tag{7.8.39}
$$

Then $\mathcal{O}_1$ is an open convex subset of $\mathbb{C}$ which contains $x_1$; hence $\mathcal{O}_1\cap\mathbb{R} \ne \emptyset$. Define the functions $f_1, g_1 : \mathcal{O}_1 \to \mathbb{C}$ by

$$
f_1(z) := f(z, x_2, \dots, x_N) \quad \text{and} \quad g_1(z) := g(z, x_2, \dots, x_N), \qquad \forall\, z \in \mathcal{O}_1. \tag{7.8.40}
$$

Then $f_1$ and $g_1$ are holomorphic functions in $\mathcal{O}_1$ that coincide on $\mathcal{O}_1\cap\mathbb{R}$. Since the latter contains an accumulation point in the convex (hence connected) set $\mathcal{O}_1$, it follows that $f_1 = g_1$ on $\mathcal{O}_1$ by the coincidence theorem for holomorphic functions of one complex variable. Since $(x_1,\dots,x_N) \in \mathcal{O}\cap\mathbb{R}^N$ was arbitrary, we may conclude that

$$
f\big|_{\mathcal{O}\cap(\mathbb{C}\times\mathbb{R}^{N-1})} = g\big|_{\mathcal{O}\cap(\mathbb{C}\times\mathbb{R}^{N-1})}. \tag{7.8.41}
$$

Next, fix $(z_1, x_2, \dots, x_N) \in \mathcal{O}\cap(\mathbb{C}\times\mathbb{R}^{N-1})$ and define

$$
\mathcal{O}_2 := \big\{ z_2 \in \mathbb{C} : (z_1, z_2, x_3, \dots, x_N) \in \mathcal{O} \big\}. \tag{7.8.42}
$$

Once again, $\mathcal{O}_2$ is an open convex subset of $\mathbb{C}$ which contains $x_2$; hence $\mathcal{O}_2\cap\mathbb{R} \ne \emptyset$. If we now define the functions $f_2, g_2 : \mathcal{O}_2 \to \mathbb{C}$ by

$$
f_2(z) := f(z_1, z, x_3, \dots, x_N) \quad \text{and} \quad g_2(z) := g(z_1, z, x_3, \dots, x_N), \qquad \forall\, z \in \mathcal{O}_2, \tag{7.8.43}
$$

it follows that $f_2, g_2$ are holomorphic in $\mathcal{O}_2$ and, by (7.8.41), coincide on $\mathcal{O}_2\cap\mathbb{R}$. Given that the latter set contains an accumulation point in the convex set $\mathcal{O}_2$, we deduce that $f_2 = g_2$ on $\mathcal{O}_2$ by once again invoking the coincidence theorem for holomorphic functions of one complex variable. Upon recalling that $(z_1, x_2, \dots, x_N) \in \mathcal{O}\cap(\mathbb{C}\times\mathbb{R}^{N-1})$ was arbitrary, we conclude that

$$
f\big|_{\mathcal{O}\cap(\mathbb{C}^2\times\mathbb{R}^{N-2})} = g\big|_{\mathcal{O}\cap(\mathbb{C}^2\times\mathbb{R}^{N-2})}. \tag{7.8.44}
$$

Continuing this process inductively, we arrive at the conclusion that $f = g$ in $\mathcal{O}$.

After this preamble, we are ready to consider the general case.

The case when $A$ has complex entries, is symmetric, and satisfies (7.8.8). As observed in Remark 7.50, under the current assumptions, $A$ continues to be
invertible. Also, (7.8.25) holds. In addition, under the current assumptions we have that $A \in \mathcal{M}$ and $(\det A)^{1/2}$ is unambiguously defined (see Remark 7.52). These comments show that the function $E_A$ from (7.8.37) continues to be well defined under the current assumption on $A$ if $\ln$ is replaced by the principal branch $\log$ of the complex logarithm (defined for points $z \in \mathbb{C}\setminus(-\infty,0]$ so that $z^a = e^{a\log z}$ for each $a \in \mathbb{R}$). In addition, $E_A$ continues to belong to $L^1_{loc}(\mathbb{R}^n)$ and we have $E_A \in C^\infty(\mathbb{R}^n\setminus\{0\})$. Furthermore, from (7.8.25) and Exercise 4.5 it follows that the function $E_A$ continues to be a tempered distribution in $\mathbb{R}^n$. The goal is to prove that this expression is a fundamental solution for $L_A$ in the current case.

First, observe that since $A^{-1}$ is symmetric, for each $j,k \in \{1,\dots,n\}$ we have

$$
\partial_k\big[(A^{-1}x)\cdot x\big] = 2(A^{-1}x)_k \quad \text{and} \quad \partial_j\big[(A^{-1}x)_k\big] = (A^{-1})_{kj}. \tag{7.8.45}
$$

Hence, for every $x \in \mathbb{R}^n\setminus\{0\}$, differentiating pointwise we obtain

$$
L_A\Big( \big[(A^{-1}x)\cdot x\big]^{\frac{2-n}{2}} \Big)
= \sum_{j,k=1}^{n} a_{jk}\,\partial_j\Big( (2-n)\big[(A^{-1}x)\cdot x\big]^{-\frac{n}{2}}(A^{-1}x)_k \Big)
$$
$$
= -n(2-n)\sum_{j,k=1}^{n} a_{jk}\big[(A^{-1}x)\cdot x\big]^{-\frac{n+2}{2}}(A^{-1}x)_j(A^{-1}x)_k
+ (2-n)\sum_{j,k=1}^{n} a_{jk}\big[(A^{-1}x)\cdot x\big]^{-\frac{n}{2}}(A^{-1})_{kj} = 0. \tag{7.8.46}
$$

Similarly, $L_A\big(\log\big[(A^{-1}x)\cdot x\big]\big) = 0$ for $x \in \mathbb{R}^2\setminus\{0\}$. Thus, we may conclude that

$$
L_A E_A(x) = 0 \qquad \forall\, x \in \mathbb{R}^n\setminus\{0\}. \tag{7.8.47}
$$
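The pointwise cancellation (7.8.46)–(7.8.47) can be probed by finite differences (a rough numerical sketch; the real positive definite matrix and the evaluation point below are arbitrary choices, with $n = 3$):

```python
import numpy as np

# Finite-difference check that E_A from (7.8.37) satisfies L_A E_A = 0
# away from the origin, for a sample real SPD matrix in n = 3.
A = np.array([[3.0, 1.0, 0.0],
              [1.0, 2.0, 0.5],
              [0.0, 0.5, 1.5]])
Ainv = np.linalg.inv(A)
omega2 = 4.0 * np.pi                    # omega_{n-1} = sigma(S^2) for n = 3

def E_A(x):
    q = x @ Ainv @ x                    # (A^{-1} x) . x
    return -1.0 / (omega2 * np.sqrt(np.linalg.det(A)) * np.sqrt(q))

x0 = np.array([0.7, -0.4, 1.1])         # generic point away from 0
h = 1e-4
lap = 0.0
for j in range(3):
    for k in range(3):
        ej, ek = np.eye(3)[j], np.eye(3)[k]
        # central second difference approximating d_j d_k E_A at x0
        d2 = (E_A(x0 + h*ej + h*ek) - E_A(x0 + h*ej - h*ek)
              - E_A(x0 - h*ej + h*ek) + E_A(x0 - h*ej - h*ek)) / (4*h*h)
        lap += A[j, k] * d2

assert abs(lap) < 1e-4                  # L_A E_A vanishes up to O(h^2)
```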
Second, by making use of (7.8.45) and the expression for $E_A$ we obtain

$$
\nabla E_A(x) = \frac{1}{\omega_{n-1}\sqrt{\det A}}\cdot\frac{A^{-1}x}{\big[(A^{-1}x)\cdot x\big]^{n/2}}, \qquad \forall\, x \in \mathbb{R}^n\setminus\{0\}, \tag{7.8.48}
$$

which, in particular, shows that $\nabla E_A$ is positive homogeneous of degree $1-n$ in $\mathbb{R}^n\setminus\{0\}$. Furthermore, for each $x \in S^{n-1}$ we have

$$
\sum_{j,k=1}^{n} a_{jk}\, x_j\, \partial_k E_A(x)
= \frac{1}{\omega_{n-1}\sqrt{\det A}}\cdot\frac{\displaystyle\sum_{j,k=1}^{n} a_{jk}\, x_j\, (A^{-1}x)_k}{\big[(A^{-1}x)\cdot x\big]^{n/2}}
= \frac{1}{\omega_{n-1}\sqrt{\det A}}\cdot\frac{(A^\top x)\cdot(A^{-1}x)}{\big[(A^{-1}x)\cdot x\big]^{n/2}}
= \frac{1}{\omega_{n-1}\sqrt{\det A}}\cdot\frac{1}{\big[(A^{-1}x)\cdot x\big]^{n/2}}, \tag{7.8.49}
$$

where the last equality uses the fact that $|x| = 1$.
Invoking Theorem 7.46 we conclude that $E_A$ is a fundamental solution for $L_A$ in $\mathbb{R}^n$ if and only if

$$
\int_{S^{n-1}} \frac{d\sigma(x)}{\big[(A^{-1}x)\cdot x\big]^{n/2}} = \omega_{n-1}\sqrt{\det A}. \tag{7.8.50}
$$

The fact that we already know that $E_A$ is a fundamental solution for $L_A$ in $\mathbb{R}^n$ in the case when $A \in M_{n\times n}(\mathbb{R})$ satisfies $A = A^\top$ and condition (7.8.8) implies that formula (7.8.50) holds for this class of matrices. We claim that (7.8.50) actually holds for the larger class of matrices $A \in M_{n\times n}(\mathbb{C})$ satisfying $A = A^\top$ and condition (7.8.8). To see why this is true, recall the open subset $\mathcal{M}$ of $\mathbb{C}^{n(n+1)/2}$ from (7.8.26) and consider the functions $f, g : \mathcal{M} \to \mathbb{C}$ defined for every $A = (a_{jk})_{1\le j,k\le n} \in \mathcal{M}$ by

$$
f\big((a_{jk})_{1\le j\le k\le n}\big) := \omega_{n-1}\sqrt{\det A}, \tag{7.8.51}
$$
$$
g\big((a_{jk})_{1\le j\le k\le n}\big) := \int_{S^{n-1}} \frac{d\sigma(x)}{\big[(A^{-1}x)\cdot x\big]^{n/2}}. \tag{7.8.52}
$$

Then $f$ and $g$ are analytic (as functions of several complex variables) on $\mathcal{M}$, which is an open convex set in $\mathbb{C}^{n(n+1)/2}$. If $A \in \mathcal{M}$ has real entries, then $A$ satisfies (7.8.8), and (7.8.50) holds for such $A$; hence $f = g$ on $\mathcal{M}\cap M_{n\times n}(\mathbb{R})$. Invoking Lemma 7.53 we may therefore conclude that $f = g$ on $\mathcal{M}$. Thus, (7.8.50) holds for every $A \in M_{n\times n}(\mathbb{C})$ satisfying $A = A^\top$ and condition (7.8.8).

Finally, we note that thanks to Proposition 5.7 and the current strong ellipticity assumption, any other fundamental solution of $L_A$ belonging to $\mathcal{S}'(\mathbb{R}^n)$ differs from $E_A$ by a polynomial that $L_A$ annihilates. In summary, the above analysis proves the following result.

Theorem 7.54. Suppose $A = (a_{jk})_{1\le j,k\le n} \in M_{n\times n}(\mathbb{C})$ and consider the operator $L_A$ associated with $A$ as in (7.8.6). If $L_A$ is strongly elliptic, then the function defined by

$$
E_A(x) := \begin{cases}
\dfrac{-1}{(n-2)\,\omega_{n-1}\sqrt{\det A_{sym}}}\cdot\dfrac{1}{\big[((A_{sym})^{-1}x)\cdot x\big]^{\frac{n-2}{2}}} & \text{if } n \ge 3, \\[2mm]
\dfrac{1}{4\pi\sqrt{\det A_{sym}}}\,\log\big[((A_{sym})^{-1}x)\cdot x\big] & \text{if } n = 2,
\end{cases} \tag{7.8.53}
$$

for $x \in \mathbb{R}^n\setminus\{0\}$, belongs to $L^1_{loc}(\mathbb{R}^n)\cap\mathcal{S}'(\mathbb{R}^n)\cap C^\infty(\mathbb{R}^n\setminus\{0\})$ and is a fundamental solution for $L_A$ in $\mathbb{R}^n$. Above, $A_{sym} := \frac{1}{2}(A + A^\top)$, $\sqrt{\det A_{sym}}$ is defined as in Remark 7.52, and $\log$ denotes the principal branch of the complex logarithm (defined for complex numbers $z \in \mathbb{C}\setminus(-\infty,0]$ so that $z^a = e^{a\log z}$ for each $a \in \mathbb{R}$). Moreover,

$$
\big\{ u \in \mathcal{S}'(\mathbb{R}^n) : L_A u = \delta \text{ in } \mathcal{S}'(\mathbb{R}^n) \big\}
= \big\{ E_A + P : P \text{ polynomial such that } L_A P = 0 \text{ in } \mathbb{R}^n \big\}. \tag{7.8.54}
$$
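Identity (7.8.50), which underpins Theorem 7.54, is easy to test by quadrature in the plane (a sketch with an arbitrarily chosen real symmetric positive definite $A$): for $n = 2$ it reads $\int_0^{2\pi} \big[(A^{-1}x_\theta)\cdot x_\theta\big]^{-1}\, d\theta = 2\pi\sqrt{\det A}$ with $x_\theta = (\cos\theta, \sin\theta)$.

```python
import numpy as np

# Quadrature check of (7.8.50) for n = 2:
#   int_0^{2pi} dtheta / ((A^{-1}x).x) = omega_1 * sqrt(det A) = 2*pi*sqrt(det A)
A = np.array([[2.0, 0.7], [0.7, 1.0]])
Ainv = np.linalg.inv(A)

N = 20000
t = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
x = np.stack([np.cos(t), np.sin(t)])
q = np.einsum('jk,jt,kt->t', Ainv, x, x)       # (A^{-1}x).x on S^1

lhs = (2.0 * np.pi / N) * np.sum(1.0 / q)      # periodic trapezoid rule
rhs = 2.0 * np.pi * np.sqrt(np.linalg.det(A))
assert abs(lhs - rhs) < 1e-9
```

The periodic trapezoid rule converges spectrally for this smooth integrand, so the two sides agree to machine precision.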
We conclude this section with a couple of related exercises about fundamental solutions for second-order, constant coefficient differential operators.

Exercise 7.55. Let $n \ge 2$ and consider a differential operator $L = \sum_{j,k=1}^{n} a_{jk}\partial_j\partial_k$ with complex coefficients. Assume that $E \in C^2(\mathbb{R}^n\setminus\{0\})\cap L^1_{loc}(\mathbb{R}^n)$ is a function with the property that $\nabla E$ is positive homogeneous of degree $1-n$ in $\mathbb{R}^n\setminus\{0\}$. In addition, suppose that

$$
\lambda := \sum_{j,k=1}^{n} \int_{S^{n-1}} a_{jk}\,\omega_j\,\partial_k E(\omega)\, d\sigma(\omega) \ne 0. \tag{7.8.55}
$$

Prove that $\lambda^{-1}E$ is a fundamental solution for $L$ in $\mathbb{R}^n$. Use this result to find the proper normalization for the standard fundamental solution for the Laplacian in $\mathbb{R}^n$, starting with $E(x) := |x|^{2-n}$ when $n \ge 3$, and with $E(x) := \ln|x|$ for $n = 2$.

Exercise 7.56. Let $n \ge 2$ and suppose that $E \in C^2(\mathbb{R}^n\setminus\{0\})\cap L^1_{loc}(\mathbb{R}^n)$ has the property that $\nabla E$ is odd and positive homogeneous of degree $1-n$ in $\mathbb{R}^n\setminus\{0\}$. In addition, assume that the function $E$ is a fundamental solution for the complex constant coefficient differential operator $L = \sum_{j,k=1}^{n} a_{jk}\partial_j\partial_k$ in $\mathbb{R}^n$. Prove that for every $\xi \in S^{n-1}$ one has

$$
\sum_{j,k=1}^{n} \int_{\substack{\omega\in S^{n-1}\\ \omega\cdot\xi>0}} a_{jk}\,\omega_j\,\partial_k E(\omega)\, d\sigma(\omega) = \frac{1}{2}. \tag{7.8.56}
$$
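For Exercise 7.55 with $L = \Delta$ in $n = 2$ and the starting candidate $E(x) = \ln|x|$, one finds $\nabla E(x) = x/|x|^2$, so the conormal derivative is identically $1$ on $S^1$ and $\lambda = 2\pi$; the exercise then recovers the normalization $(2\pi)^{-1}\ln|x|$. A quick numerical sketch:

```python
import numpy as np

# lambda from (7.8.55) for L = Delta, E(x) = ln|x| in the plane:
# grad E(x) = x/|x|^2, so omega . grad E(omega) = 1 on S^1 and lambda = 2*pi.
t = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
omega = np.stack([np.cos(t), np.sin(t)])
grad_E = omega / np.sum(omega**2, axis=0)       # equals omega on S^1
conormal = np.sum(omega * grad_E, axis=0)       # identically 1
lam = (2.0 * np.pi / t.size) * np.sum(conormal)

assert abs(lam - 2.0 * np.pi) < 1e-9            # lambda = 2*pi, so E_Delta = ln|x|/(2*pi)
```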
7.9 Layer Potential Representation Formulas Revisited
The goal here is to derive a layer potential representation formula generalizing the identity from Proposition 7.17 for the Laplacian. We begin by describing the setting in which we intend to work. Given a matrix $A = (a_{jk})_{1\le j,k\le n} \in M_{n\times n}(\mathbb{C})$, we associate the homogeneous second-order differential operator

$$
L_A = \sum_{j,k=1}^{n} a_{jk}\,\partial_j\partial_k \quad \text{in } \mathbb{R}^n, \tag{7.9.1}
$$

and for every unit vector $\nu = (\nu_1,\dots,\nu_n)$ and any complex-valued function $u$ of class $C^1$, define the conormal derivative of $u$ associated with the matrix $A$ (along $\nu$) as

$$
\partial_\nu^A u := \sum_{j,k=1}^{n} \nu_j\, a_{jk}\,\partial_k u. \tag{7.9.2}
$$
Theorem 7.57. Suppose $n \ge 2$ and let $\Omega \subset \mathbb{R}^n$ be a bounded domain of class $C^1$, with outward unit normal $\nu$ and surface measure $\sigma$. In addition, assume that the matrix $A = (a_{jk})_{1\le j,k\le n} \in M_{n\times n}(\mathbb{C})$ is such that the operator $L_A$ associated with $A$ as in (7.9.1) is strongly elliptic, and recall the fundamental solution $E_A$ for $L_A$ defined in (7.8.53). Then for every complex-valued function $u \in C^2(\overline{\Omega})$ one has

$$
\int_\Omega (L_A u)(y)\,E_A(x-y)\, dy
- \int_{\partial\Omega} E_A(x-y)\,(\partial_\nu^A u)(y)\, d\sigma(y)
- \int_{\partial\Omega} u(y)\,(\partial_\nu^A E_A)(x-y)\, d\sigma(y)
= \begin{cases} u(x), & x \in \Omega, \\ 0, & x \in \mathbb{R}^n\setminus\overline{\Omega}, \end{cases} \tag{7.9.3}
$$

where $\partial_\nu^A$ is the conormal derivative associated with the matrix $A$ (along $\nu$).

Proof. When $x \in \mathbb{R}^n\setminus\overline{\Omega}$, it is clear from (7.8.53) that $E_A(x-\cdot) \in C^\infty(\overline{\Omega})$. Also, since $E_A$ is a fundamental solution for $L_A$ in $\mathbb{R}^n$ we have that $L_A[E_A(x-\cdot)] = 0$ in $\mathbb{R}^n\setminus\{x\}$, hence $(L_A E_A)(x-\cdot) = 0$ in $\Omega$. Based on these and repeated use of (13.7.4) we may then write
$$
\int_\Omega (L_A u)(y)\,E_A(x-y)\, dy
= \sum_{j,k=1}^{n} \int_\Omega a_{jk}\,(\partial_j\partial_k u)(y)\,E_A(x-y)\, dy
$$
$$
= \sum_{j,k=1}^{n} \int_{\partial\Omega} a_{jk}\,\nu_j(y)\,(\partial_k u)(y)\,E_A(x-y)\, d\sigma(y)
+ \sum_{j,k=1}^{n} \int_\Omega a_{jk}\,(\partial_k u)(y)\,(\partial_j E_A)(x-y)\, dy
$$
$$
= \int_{\partial\Omega} E_A(x-y)\,(\partial_\nu^A u)(y)\, d\sigma(y)
+ \int_{\partial\Omega} u(y)\,(\partial_\nu^A E_A)(x-y)\, d\sigma(y)
+ \int_\Omega u(y)\,(L_A E_A)(x-y)\, dy. \tag{7.9.4}
$$
Upon recalling that $(L_A E_A)(x-\cdot) = 0$ in $\Omega$, the last solid integral drops out and the resulting identity is in agreement with (7.9.3).

Consider now the case when $x \in \Omega$. Since $\Omega$ is open, there exists $r > 0$ such that $B(x,r) \subseteq \Omega$. For each $\varepsilon \in (0,r)$ define $\Omega_\varepsilon := \Omega\setminus\overline{B(x,\varepsilon)}$, which is a bounded domain of class $C^1$. Since $E_A(x-\cdot) \in C^\infty(\overline{\Omega_\varepsilon})$ and $(L_A E_A)(x-\cdot) = 0$ in $\Omega_\varepsilon$, the same type of reasoning as above gives (keeping in mind that $\partial\Omega_\varepsilon = \partial\Omega \cup \partial B(x,\varepsilon)$)

$$
\int_{\Omega_\varepsilon} (L_A u)(y)\,E_A(x-y)\, dy
= \int_{\partial\Omega} u(y)\,(\partial_\nu^A E_A)(x-y)\, d\sigma(y)
- \int_{\partial B(x,\varepsilon)} u(y)\,(\partial_\nu^A E_A)(x-y)\, d\sigma(y)
$$
$$
+ \int_{\partial\Omega} E_A(x-y)\,(\partial_\nu^A u)(y)\, d\sigma(y)
- \int_{\partial B(x,\varepsilon)} E_A(x-y)\,(\partial_\nu^A u)(y)\, d\sigma(y)
=: I + II + III + IV. \tag{7.9.5}
$$
As seen from (7.8.53), we have $|IV| \le C_A\,\|\nabla u\|_{L^\infty(\Omega)}\,\varepsilon\,\max\{1, |\ln\varepsilon|\}$, from which we deduce that $\lim_{\varepsilon\to 0^+} IV = 0$. Next, split $II = II' + u(x)\,II''$, where

$$
II' := -\int_{\partial B(x,\varepsilon)} \big[u(y) - u(x)\big]\,(\partial_\nu^A E_A)(x-y)\, d\sigma(y), \tag{7.9.6}
$$

and observe that

$$
II'' := -\int_{\partial B(x,\varepsilon)} (\partial_\nu^A E_A)(x-y)\, d\sigma(y)
= -\sum_{j,k=1}^{n} \int_{\partial B(x,\varepsilon)} a_{jk}\,\frac{y_k - x_k}{\varepsilon}\,(\partial_j E_A)(x-y)\, d\sigma(y)
$$
$$
= \sum_{j,k=1}^{n} \int_{S^{n-1}} a_{jk}\,\omega_k\,(\partial_j E_A)(\omega)\, d\sigma(\omega)
= \sum_{j,k=1}^{n} \int_{S^{n-1}} (A^\top)_{jk}\,\omega_j\,(\partial_k E_A)(\omega)\, d\sigma(\omega) = 1. \tag{7.9.7}
$$
Above, the first equality defines $II''$, the second equality uses the definition of the conormal derivative and the outward unit normal to the ball, and the third equality is based on the change of variables $\omega = (x-y)/\varepsilon$ and the fact that $\nabla E_A$ is positive homogeneous of degree $1-n$. Finally, in the fourth equality we have interchanged $j$ and $k$ in the summation and used the identities $E_{A^\top} = E_A$, $L_{A^\top} = L_A$, while the last equality is due to (7.8.2). In addition, $|II'| \le C_A\,\|\nabla u\|_{L^\infty(\Omega)}\,\varepsilon$, hence $\lim_{\varepsilon\to 0^+} II' = 0$, and

$$
\lim_{\varepsilon\to 0^+} \int_{\Omega_\varepsilon} (L_A u)(y)\,E_A(x-y)\, dy = \int_\Omega (L_A u)(y)\,E_A(x-y)\, dy
$$

by Lebesgue's dominated convergence theorem (recall that $E_A(x-\cdot) \in L^1(\Omega)$ since $\Omega$ is bounded). Collectively, the results deduced in the above analysis yield (7.9.3) in the case when $x \in \Omega$, finishing the proof of the theorem.
Further Notes for Chap. 7. As evidenced by the treatment of the Poisson problem for the Laplacian and bi-Laplacian (from Sects. 7.2 and 7.4, respectively), fundamental solutions play a key role both for establishing integral representation formulas and for deriving estimates for the solution. This type of application to partial differential equations amply substantiates the utility of the tools from distribution theory and harmonic analysis derived in Sect. 4.10 (dealing with derivatives of volume potentials) and Sect. 4.9 (dealing with singular integral operators). The aforementioned Poisson problems serve as a prototype for other types of boundary value problems formulated for other differential operators and with the entire space $\mathbb{R}^n$ replaced by an open set $\Omega \subset \mathbb{R}^n$. In the latter scenario one specifies boundary conditions on $\partial\Omega$ in place of $\infty$, as in the case of $\mathbb{R}^n$ (note that $\infty$ plays the role of the topological boundary of $\mathbb{R}^n$ regarded as an open subset of its compactification $\mathbb{R}^n\cup\{\infty\}$). In Sect. 7.7 the Dirac operator has been considered in the natural setting of Clifford algebras. For more information pertaining to this topic, the interested reader is referred to the monographs [4, 22, 49]. The last two references also contain a discussion of Hardy spaces in the context of Clifford algebras (a topic touched upon in Sect. 7.7). A classical reference to Hardy spaces in the ordinary context of $\mathbb{C}$ (which appeared at the end of Sect. 7.6) is the book [27].
7.10 Additional Exercises for Chap. 7
Exercise 7.58. Prove that there exists a unique $E \in \mathcal{S}'(\mathbb{R}^n)$ with the property that $\Delta E - E = \delta$ in $\mathcal{S}'(\mathbb{R}^n)$.

Exercise 7.59. Does there exist $E \in L^1(\mathbb{R}^n)$ such that $\Delta E = \delta$ in $\mathcal{D}'(\mathbb{R}^n)$?

Exercise 7.60. Prove that for every $u \in \mathcal{D}'(\Omega)$ we have

$$
\mathrm{div}(\nabla u) = \Delta u \quad \text{in } \mathcal{D}'(\Omega). \tag{7.10.1}
$$

Exercise 7.61. Prove that if $f \in C^\infty(\Omega)$ and $u \in \mathcal{D}'(\Omega)$, then

$$
\Delta(f u) = (\Delta f)u + 2(\nabla f)\cdot(\nabla u) + f\,\Delta u \quad \text{in } \mathcal{D}'(\Omega). \tag{7.10.2}
$$
Exercise 7.62. Suppose $n \ge 2$ and denote by $E_\Delta$ the fundamental solution for the Laplacian operator $\Delta$ given in (7.1.12). Without making use of Corollary 4.62, prove that for each $j \in \{1,\dots,n\}$ one has

$$
\mathcal{F}(\partial_j E_\Delta) = -i\,\frac{\xi_j}{|\xi|^2} \quad \text{in } \mathcal{S}'(\mathbb{R}^n). \tag{7.10.3}
$$

In turn, use (7.10.3) to show that

$$
\mathcal{F}\Big(\frac{x_j}{|x|^n}\Big) = -i\,\omega_{n-1}\,\frac{\xi_j}{|\xi|^2} \quad \text{in } \mathcal{S}'(\mathbb{R}^n), \tag{7.10.4}
$$

and

$$
\mathcal{F}^{-1}\Big(\frac{\xi_j}{|\xi|^2}\Big) = \frac{i}{\omega_{n-1}}\cdot\frac{x_j}{|x|^n} \quad \text{in } \mathcal{S}'(\mathbb{R}^n). \tag{7.10.5}
$$
Exercise 7.63. Suppose $n \ge 3$ and denote by $E_{\Delta^2}$ the fundamental solution for the bi-Laplacian operator $\Delta^2$ given in (7.3.8). Prove that for each $j,k \in \{1,\dots,n\}$ one has

$$
\mathcal{F}(\partial_j\partial_k E_{\Delta^2}) = -\,\frac{\xi_j\xi_k}{|\xi|^4} \quad \text{in } \mathcal{S}'(\mathbb{R}^n). \tag{7.10.6}
$$

Consequently,

$$
\mathcal{F}^{-1}\Big(\frac{\xi_j\xi_k}{|\xi|^4}\Big)
= \frac{1}{2(n-2)\,\omega_{n-1}}\cdot\frac{\delta_{jk}}{|x|^{n-2}}
- \frac{1}{2\,\omega_{n-1}}\cdot\frac{x_j x_k}{|x|^n} \quad \text{in } \mathcal{S}'(\mathbb{R}^n), \tag{7.10.7}
$$

and

$$
\mathcal{F}\Big(\frac{x_j x_k}{|x|^n}\Big)
= \omega_{n-1}\,\frac{\delta_{jk}}{|\xi|^2} - 2\,\omega_{n-1}\,\frac{\xi_j\xi_k}{|\xi|^4} \quad \text{in } \mathcal{S}'(\mathbb{R}^n). \tag{7.10.8}
$$

Exercise 7.64. Let $P(D)$ be a nonzero linear constant coefficient operator of order $m \in \mathbb{N}_0$. Prove that $P(D)$ is elliptic if and only if there exist two constants $C, R \in (0,\infty)$ such that $|P(\xi)| \ge C|\xi|^m$ for every $\xi \in \mathbb{R}^n\setminus B(0,R)$.

Exercise 7.65. Reprove Theorem 7.54 without making any appeal to Theorem 7.46.
Chapter 8

The Heat Operator and Related Versions

Throughout this chapter we use the notation $(x,t) := (x_1,\dots,x_n,t) \in \mathbb{R}^{n+1}$. The heat operator$^1$ is then defined as $L := \partial_t - \Delta_x = \partial_t - \sum_{j=1}^{n}\partial_{x_j}^2$ and will be at the center of our investigations in this chapter. The focus is on determining all fundamental solutions for the heat operator that are tempered distributions. As an application of the latter, we also discuss the generalized Cauchy problem for the heat operator. In addition, at the end of this chapter we compute a fundamental solution for the Schrödinger operator $\frac{1}{i}\partial_t - \Delta_x$.
8.1 Fundamental Solutions for the Heat Operator
The starting point in determining all fundamental solutions for the heat operator $L$ that are tempered distributions is Theorem 5.13, which guarantees that such fundamental solutions do exist. As we have seen in the case of the Laplace operator, the Fourier transform is an important tool in determining explicit expressions for fundamental solutions that are tempered distributions. We will continue to make use of this tool in the case of the heat operator, with the adjustment that, this time, we work with the partial Fourier transform $\mathcal{F}_x$ discussed at the end of Sect. 4.2. Let $E \in \mathcal{S}'(\mathbb{R}^{n+1})$ be a fundamental solution for $L$. Thus, in view of Exercise 2.81 and (4.1.25), we have

$$
\partial_t E - \Delta_x E = \delta(x)\otimes\delta(t) \quad \text{in } \mathcal{S}'(\mathbb{R}^{n+1}). \tag{8.1.1}
$$

$^1$ First considered in 1809 for $n = 1$ by Laplace (cf. [39]) and then for higher dimensions by Poisson (cf. [56] for $n = 2$).
D. Mitrea, Distributions, Partial Differential Equations, and Harmonic Analysis, Universitext, DOI 10.1007/978-1-4614-8208-6 8, © Springer Science+Business Media New York 2013
Applying $\mathcal{F}_x$ to (8.1.1), denoting $\mathcal{F}_x(E)$ by $\widehat{E}_x$, and using Exercise 4.40, it follows that

$$
\partial_t\widehat{E}_x + |\xi|^2\widehat{E}_x = 1(\xi)\otimes\delta(t) \quad \text{in } \mathcal{S}'(\mathbb{R}^{n+1}). \tag{8.1.2}
$$

In particular, for each fixed $\xi \in \mathbb{R}^n$, we have

$$
\big(\partial_t + |\xi|^2\big)\widehat{E}_x = \delta(t) \quad \text{in } \mathcal{D}'(\mathbb{R}). \tag{8.1.3}
$$

Applying Example 5.12, we obtain that $\widehat{E}_x(\xi,t) := H(t)e^{-|\xi|^2 t}$ is a solution of (8.1.3), where, as before, $H$ denotes the Heaviside function from (1.2.16). Note that we have $\widehat{E}_x(\xi,t) \in \mathcal{S}'(\mathbb{R}^{n+1})$. Also, if $\varphi \in C_0^\infty(\mathbb{R}^{n+1})$ then integration by parts yields

$$
\big\langle \partial_t\widehat{E}_x + |\xi|^2\widehat{E}_x,\, \varphi \big\rangle
= -\big\langle H(t)e^{-|\xi|^2 t},\, \partial_t\varphi(\xi,t) \big\rangle + \big\langle H(t)e^{-|\xi|^2 t},\, |\xi|^2\varphi(\xi,t) \big\rangle
$$
$$
= -\int_{\mathbb{R}^n}\int_0^\infty e^{-|\xi|^2 t}\,\partial_t\varphi(\xi,t)\, dt\, d\xi
+ \int_0^\infty\int_{\mathbb{R}^n} |\xi|^2 e^{-|\xi|^2 t}\,\varphi(\xi,t)\, d\xi\, dt
= \int_{\mathbb{R}^n} \varphi(\xi,0)\, d\xi = \big\langle 1(\xi)\otimes\delta(t),\, \varphi \big\rangle, \tag{8.1.4}
$$

hence $\widehat{E}_x$ verifies $\partial_t\widehat{E}_x + |\xi|^2\widehat{E}_x = 1(\xi)\otimes\delta(t)$ in $\mathcal{D}'(\mathbb{R}^{n+1})$. Invoking (4.1.25), it follows that $\widehat{E}_x(\xi,t) = H(t)e^{-|\xi|^2 t}$ verifies (8.1.2). We remark here that while the distribution $-H(-t)e^{-|\xi|^2 t}$ satisfies (8.1.3), based on our earlier discussion pertaining to the nature of the function in (4.1.27), it does not belong to $\mathcal{S}'(\mathbb{R}^{n+1})$, thus we cannot apply $\mathcal{F}_x^{-1}$ to it.

Starting from the identity $\mathcal{F}_x(\mathcal{F}_x E(x,t)) = (2\pi)^n E(-x,t)$ (which is easy to check), we may write

$$
E(-x,t) = (2\pi)^{-n}\,\mathcal{F}_\xi\big(\widehat{E}_x(\xi)\big)(x)
= (2\pi)^{-n} H(t)\,\mathcal{F}_\xi\big(e^{-t|\xi|^2}\big)(x)
= (2\pi)^{-n} H(t)\Big(\frac{\pi}{t}\Big)^{n/2} e^{-\frac{|x|^2}{4t}}
= H(t)\,(4\pi t)^{-n/2}\, e^{-\frac{|x|^2}{4t}} \quad \text{in } \mathcal{S}'(\mathbb{R}^{n+1}), \tag{8.1.5}
$$

where for the third equality in (8.1.5) we used Remark 4.22 and (3.2.6). Hence, we may conclude that the tempered distribution from (8.1.5) is a fundamental solution for the heat operator.

Note that, with the notation from (3.1.10), we have $\partial_t - \Delta = iD_{n+1} + \sum_{j=1}^{n} D_j^2$, hence the heat operator satisfies the hypothesis of Proposition 5.7. Consequently, if $u \in \mathcal{S}'(\mathbb{R}^{n+1})$ is an arbitrary fundamental solution for the heat operator in $\mathbb{R}^{n+1}$, then $u - E = P(x,t)$ in $\mathcal{S}'(\mathbb{R}^{n+1})$ for some polynomial $P(x,t)$ in $\mathbb{R}^{n+1}$ satisfying $(\partial_t - \Delta_x)P(x,t) = 0$ in $\mathbb{R}^{n+1}$.

As a final remark, we claim that $E$ as in (8.1.5) satisfies $E \in L^1_{loc}(\mathbb{R}^{n+1})$. Indeed, if $K$ is a compact subset of $\mathbb{R}^n\times[-R,R]$ for some $R \in (0,\infty)$, then
8.1. FUNDAMENTAL SOLUTIONS FOR THE HEAT OPERATOR
0≤
R
E(x, t) dx dt ≤
Rn
0
K
n
(4πt)− 2 e−
n
= π− 2
R
dx dt
2
Rn
0
|x|2 4t
279
e−|y| dy dt = R < ∞.
(8.1.6)
In summary, we proved the following theorem.

Theorem 8.1. The function defined as
$$E(x,t) := H(t)(4\pi t)^{-n/2} e^{-\frac{|x|^2}{4t}}, \quad \forall\,(x,t) \in \mathbb{R}^{n+1}, \tag{8.1.7}$$
belongs to $\mathcal{S}'(\mathbb{R}^{n+1}) \cap L^1_{loc}(\mathbb{R}^{n+1}) \cap C^\infty(\mathbb{R}^{n+1}\setminus\{0\})$ and is a fundamental solution for the heat operator $\partial_t - \Delta_x$ in $\mathbb{R}^{n+1}$. Moreover,
$$\begin{aligned}
&\big\{u \in \mathcal{S}'(\mathbb{R}^{n+1}) : (\partial_t - \Delta_x)u = \delta(x)\otimes\delta(t) \text{ in } \mathcal{S}'(\mathbb{R}^{n+1})\big\} \\
&\qquad = \big\{E + P : P \text{ polynomial in } \mathbb{R}^{n+1} \text{ satisfying } (\partial_t - \Delta_x)P(x,t) = 0\big\}.
\end{aligned} \tag{8.1.8}$$

Corollary 8.2. The heat operator $\partial_t - \Delta_x$ is hypoelliptic in $\mathbb{R}^{n+1}$.

Proof. This is a consequence of Theorem 6.8 and Theorem 8.1.

Exercise 8.3. Let $E$ be the function defined in (8.1.7) and set $F(x,t) := -E(x,-t)$ for $(x,t) \in \mathbb{R}^{n+1}$. Prove that $F$ is a fundamental solution for the operator $\partial_t + \Delta$ in $\mathbb{R}^{n+1}$.
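Theorem 8.1 can also be sanity-checked numerically. The sketch below (an illustration added here, not part of the original argument) verifies, for n = 1, that the kernel in (8.1.7) has unit mass in x for each fixed t > 0 and satisfies the heat equation pointwise for t > 0; the grid sizes and tolerances are ad hoc choices.

```python
import math

def E(x, t):
    # heat kernel for n = 1: H(t) * (4*pi*t)^(-1/2) * exp(-x^2/(4t))
    if t <= 0:
        return 0.0
    return (4 * math.pi * t) ** -0.5 * math.exp(-x * x / (4 * t))

# unit mass in x for each fixed t > 0 (Riemann sum over [-20, 20])
t = 0.3
dx = 0.01
mass = sum(E(k * dx, t) for k in range(-2000, 2001)) * dx
assert abs(mass - 1.0) < 1e-6

# (d/dt - d^2/dx^2) E = 0 pointwise for t > 0 (central differences)
x0, h = 0.7, 1e-4
dt_E = (E(x0, t + h) - E(x0, t - h)) / (2 * h)
dxx_E = (E(x0 + h, t) - 2 * E(x0, t) + E(x0 - h, t)) / h ** 2
assert abs(dt_E - dxx_E) < 1e-4
```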
Remark 8.4. Given the expression $E(x,t) = H(t)(4\pi t)^{-n/2} e^{-\frac{|x|^2}{4t}}$ defined for each $(x,t) \in \mathbb{R}^{n+1}$, one may check via a direct computation that this is a fundamental solution for the heat operator $L = \partial_t - \Delta_x$. First, the computation in (8.1.6) gives that $E \in L^1_{loc}(\mathbb{R}^{n+1})$, which in turn implies $E \in \mathcal{D}'(\mathbb{R}^{n+1})$. Second, if $\varphi \in C_0^\infty(\mathbb{R}^{n+1})$ is arbitrary, then using integration by parts we may write
$$\begin{aligned}
\langle LE, \varphi \rangle &= -\langle E, \partial_t\varphi + \Delta_x\varphi \rangle
= -\lim_{\varepsilon\to 0^+} \int_\varepsilon^\infty\int_{\mathbb{R}^n} E(x,t)\big(\partial_t\varphi(x,t) + \Delta_x\varphi(x,t)\big)\,dx\,dt \\
&= \lim_{\varepsilon\to 0^+} \Big[\int_{\mathbb{R}^n} E(x,\varepsilon)\varphi(x,\varepsilon)\,dx + \int_\varepsilon^\infty\int_{\mathbb{R}^n} LE(x,t)\varphi(x,t)\,dx\,dt\Big] \\
&= \lim_{\varepsilon\to 0^+} \int_{\mathbb{R}^n} (4\pi\varepsilon)^{-n/2} e^{-\frac{|x|^2}{4\varepsilon}}\varphi(x,\varepsilon)\,dx
= \lim_{\varepsilon\to 0^+} \int_{\mathbb{R}^n} \pi^{-n/2} e^{-|y|^2}\varphi(2\sqrt{\varepsilon}\,y, \varepsilon)\,dy \\
&= \varphi(0) = \langle \delta, \varphi \rangle.
\end{aligned} \tag{8.1.9}$$
For the fourth equality in (8.1.9) we have used the fact that $LE = 0$ pointwise in $\mathbb{R}^n\times(0,\infty)$, for the fifth a suitable change of variables, while for the sixth equality we applied Lebesgue's dominated convergence theorem. This proves that $LE = \delta$ in $\mathcal{D}'(\mathbb{R}^{n+1})$, thus $E$ is a fundamental solution for $L$.

In closing we record a Liouville-type theorem for the operator $\partial_t - \Delta$, which is a particular case of Theorem 5.3.

Theorem 8.5 (Liouville's theorem for the heat operator). Any bounded function in $\mathbb{R}^{n+1}$ that is a null solution of the heat operator is constant.
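The crux of the computation in (8.1.9) is that the boundary term $\int_{\mathbb{R}^n}(4\pi\varepsilon)^{-n/2}e^{-|x|^2/(4\varepsilon)}\varphi(x,\varepsilon)\,dx$ tends to $\varphi(0)$ as $\varepsilon\to 0^+$. This can be observed numerically for n = 1; the test function below is a hypothetical choice, and the discretization parameters are ad hoc.

```python
import math

def phi(x, t):
    # a sample smooth, rapidly decaying test function (hypothetical choice)
    return math.exp(-x * x - t * t) * (1 + x)

def boundary_term(eps, dx=0.005, span=10.0):
    # Riemann sum of (4*pi*eps)^(-1/2) exp(-x^2/(4 eps)) phi(x, eps) over [-span, span]
    n = int(span / dx)
    return sum((4 * math.pi * eps) ** -0.5
               * math.exp(-(k * dx) ** 2 / (4 * eps))
               * phi(k * dx, eps) for k in range(-n, n + 1)) * dx

vals = [boundary_term(e) for e in (0.1, 0.01, 0.001)]
assert abs(vals[-1] - phi(0.0, 0.0)) < 1e-2       # tends to phi(0, 0) = 1
assert abs(vals[-1] - 1.0) < abs(vals[0] - 1.0)   # closer as eps shrinks
```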
8.2 The Generalized Cauchy Problem for the Heat Operator
Let $F \in C^0(\mathbb{R}^n\times[0,\infty))$, $f \in C^0(\mathbb{R}^n)$, and suppose $u$ is a solution of the Cauchy problem for the heat operator:
$$\begin{cases}
u \in C^0\big(\mathbb{R}^n\times[0,\infty)\big), \\
\partial_t u,\ \partial_{x_j}^2 u \in C^0\big(\mathbb{R}^n\times(0,\infty)\big), \quad j = 1,\dots,n, \\
\partial_t u - \Delta_x u = F \ \text{ in } \mathbb{R}^n\times(0,\infty), \\
u(\cdot,0) = f \ \text{ in } \mathbb{R}^n.
\end{cases} \tag{8.2.1}$$
Denote by $\widetilde{u}$ and $\widetilde{F}$ the extensions by zero of $u$ and $F$ to the entire space $\mathbb{R}^{n+1}$. Then, if $\varphi \in C_0^\infty(\mathbb{R}^{n+1})$, integrating by parts and using Lebesgue's dominated convergence theorem we obtain
$$\begin{aligned}
\big\langle (\partial_t - \Delta_x)\widetilde{u}, \varphi \big\rangle &= -\big\langle \widetilde{u}, \partial_t\varphi + \Delta_x\varphi \big\rangle = -\lim_{\varepsilon\to 0^+}\int_\varepsilon^\infty\int_{\mathbb{R}^n} u(\partial_t\varphi + \Delta_x\varphi)\,dx\,dt \\
&= \lim_{\varepsilon\to 0^+}\Big[\int_\varepsilon^\infty\int_{\mathbb{R}^n} (\partial_t u - \Delta_x u)\varphi\,dx\,dt + \int_{\mathbb{R}^n} u(x,\varepsilon)\varphi(x,\varepsilon)\,dx\Big] \\
&= \int_0^\infty\int_{\mathbb{R}^n} F(x,t)\varphi(x,t)\,dx\,dt + \int_{\mathbb{R}^n} f(x)\varphi(x,0)\,dx \\
&= \big\langle \widetilde{F}, \varphi \big\rangle + \big\langle f(x)\otimes\delta(t), \varphi(x,t) \big\rangle.
\end{aligned} \tag{8.2.2}$$
This proves that $(\partial_t - \Delta_x)\widetilde{u} = \widetilde{F} + f(x)\otimes\delta(t)$ in $\mathcal{D}'(\mathbb{R}^{n+1})$ and suggests the definition made below. As a preamble, we introduce the notation
$$\mathbb{R}^{n+1}_+ := \big\{(x,t) \in \mathbb{R}^{n+1} : t \ge 0\big\} \tag{8.2.3}$$
and the space
$$\mathcal{D}'_+(\mathbb{R}^{n+1}) := \big\{u \in \mathcal{D}'(\mathbb{R}^{n+1}) : \operatorname{supp} u \subseteq \mathbb{R}^{n+1}_+\big\}. \tag{8.2.4}$$

Definition 8.6. Given $F_0 \in \mathcal{D}'_+(\mathbb{R}^{n+1})$ and $f \in \mathcal{D}'(\mathbb{R}^n)$, a distribution $u \in \mathcal{D}'_+(\mathbb{R}^{n+1})$ is called a solution of the generalized Cauchy problem for the heat operator for the data $F_0$ and $f$ if $u$ verifies
$$(\partial_t - \Delta_x)u = F_0 + f(x)\otimes\delta(t) \quad \text{in } \mathcal{D}'(\mathbb{R}^{n+1}). \tag{8.2.5}$$
The issue of solvability of Eq. (8.2.5) fits into the framework presented in Remark 5.5. More precisely, let $E$ be the fundamental solution for the heat operator as given in (8.1.7). Then, if $F_0 \in \mathcal{D}'_+(\mathbb{R}^{n+1})$ and $f \in \mathcal{D}'(\mathbb{R}^n)$ are such that
$$u := E * [F_0 + f\otimes\delta] \ \text{ exists in } \mathcal{D}'(\mathbb{R}^{n+1}), \tag{8.2.6}$$
the distribution $u$ satisfies (8.2.5). For this $u$ to be a solution of the generalized Cauchy problem for the heat operator, we would also need $\operatorname{supp} u \subseteq \mathbb{R}^{n+1}_+$. Since it is not difficult to check that $\operatorname{supp} E \subseteq \mathbb{R}^{n+1}_+$ and $\operatorname{supp}(f\otimes\delta) \subseteq \mathbb{R}^{n+1}_+$, and since by assumption $\operatorname{supp} F_0 \subseteq \mathbb{R}^{n+1}_+$, it follows that whenever condition (8.2.6) is verified, the distribution $u$ defined in (8.2.6) also satisfies $\operatorname{supp} u \subseteq \mathbb{R}^{n+1}_+$ (by (2.8.17)). While the convolution of two arbitrary distributions in $\mathcal{D}'_+(\mathbb{R}^n)$ does not always exist (you may want to check that an exception is the case $n = 1$), under the additional assumptions $f \in \mathcal{E}'(\mathbb{R}^n)$ and $F_0 \in \mathcal{E}'(\mathbb{R}^{n+1})$ condition (8.2.6) is verified.

The above discussion is the reason why we analyze in detail the following setting. Retain the notation introduced at the beginning of the section, and assume that
$$\widetilde{F} \in C_0^\infty(\mathbb{R}^{n+1}) \quad \text{and} \quad f \in C_0^\infty(\mathbb{R}^n). \tag{8.2.7}$$
Then
$$u := E * \widetilde{F} + E * \big(f(x)\otimes\delta(t)\big) \ \text{ exists, belongs to } \mathcal{D}'_+(\mathbb{R}^{n+1}), \tag{8.2.8}$$
and is a solution of (8.2.5). We proceed by rewriting the expression for $u$ in a more explicit form. First, by Proposition 2.93 and the fact that $E \in L^1_{loc}(\mathbb{R}^{n+1})$, we have $E * \widetilde{F} \in C^\infty(\mathbb{R}^{n+1})$ and
$$\begin{aligned}
(E * \widetilde{F})(x,t) &= \big\langle E(y,\tau), \widetilde{F}(x-y, t-\tau) \big\rangle = \big\langle E(x-y, t-\tau), \widetilde{F}(y,\tau) \big\rangle \\
&= \int_0^\infty\int_{\mathbb{R}^n} H(t-\tau)\big(4\pi(t-\tau)\big)^{-n/2} e^{-\frac{|x-y|^2}{4(t-\tau)}}\,\widetilde{F}(y,\tau)\,dy\,d\tau \\
&= \int_0^t\int_{\mathbb{R}^n} E(x-y, t-\tau)\widetilde{F}(y,\tau)\,dy\,d\tau.
\end{aligned} \tag{8.2.9}$$
To compute $E*(f\otimes\delta)$, fix an arbitrary compact set $K \subset \mathbb{R}^{n+1}$, consider a function $\varphi \in C_0^\infty(\mathbb{R}^{n+1})$ such that $\operatorname{supp}\varphi \subseteq K$, and pick some $\psi \in C_0^\infty(\mathbb{R}^{2n+2})$ with the property that $\psi = 1$ in a neighborhood of the set
$$\big\{(x,t,y,0) \in \mathbb{R}^n\times\mathbb{R}\times\mathbb{R}^n\times\mathbb{R} : y \in \operatorname{supp} f \text{ and } (x+y,t) \in K\big\}. \tag{8.2.10}$$
Relying on the definition of convolution, we have
$$\begin{aligned}
\big\langle E*(f\otimes\delta), \varphi \big\rangle &= \Big\langle E(x,t), \big\langle f(y)\otimes\delta(\tau), \psi(x,t,y,\tau)\varphi(x+y, t+\tau) \big\rangle \Big\rangle \\
&= \Big\langle E(x,t), \int_{\mathbb{R}^n} f(y)\varphi(x+y,t)\,dy \Big\rangle \\
&= \int_{\mathbb{R}}\int_{\mathbb{R}^n}\int_{\mathbb{R}^n} E(x,t)f(y)\varphi(x+y,t)\,dx\,dy\,dt \\
&= \int_{\mathbb{R}}\int_{\mathbb{R}^n}\int_{\mathbb{R}^n} E(z-y,t)f(y)\varphi(z,t)\,dz\,dy\,dt.
\end{aligned} \tag{8.2.11}$$
Hence, $E*\big(f(x)\otimes\delta(t)\big)$ is given by the function
$$\mathbb{R}^n\times\mathbb{R} \ni (x,t) \mapsto \int_{\mathbb{R}^n} E(x-y,t)f(y)\,dy \in \mathbb{C}, \tag{8.2.12}$$
whose restriction to $\mathbb{R}^n\times(0,\infty)$ is of class $C^\infty$. In summary, this analysis proves the following result.

Proposition 8.7. Let $f \in C_0^\infty(\mathbb{R}^n)$ and assume that $F \in C^0(\mathbb{R}^n\times[0,\infty))$ is such that its extension $\widetilde{F}$ by zero to $\mathbb{R}^{n+1}$ satisfies $\widetilde{F} \in C_0^\infty(\mathbb{R}^{n+1})$. Also, let $E$ be the fundamental solution for the heat operator $\partial_t - \Delta_x$ as given in (8.1.7). Then the generalized Cauchy problem for the heat operator for the data $\widetilde{F}$ and $f$ has a solution $u \in \mathcal{D}'_+(\mathbb{R}^{n+1})$ that is of function type, whose restriction to $\mathbb{R}^n\times(0,\infty)$ is of class $C^\infty$, and has the expression
$$u(x,t) = \int_{\mathbb{R}^n} E(x-y,t)f(y)\,dy + \int_0^t\int_{\mathbb{R}^n} E(x-y,t-\tau)\widetilde{F}(y,\tau)\,dy\,d\tau \tag{8.2.13}$$
for every $x \in \mathbb{R}^n$ and $t \in (0,\infty)$.

Note that the integrals in (8.2.13) are meaningfully defined under weaker assumptions on $\widetilde{F}$ and $f$. In fact, starting with $u$ as in (8.2.13), one may prove that this is a solution to a version of (8.2.1) (corresponding to finite time, i.e., $t \in (0,T)$ for some $T > 0$) under suitable, yet less stringent, conditions on $\widetilde{F}$ and $f$.
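For $\widetilde{F} \equiv 0$, formula (8.2.13) reduces to $u(x,t) = \int E(x-y,t)f(y)\,dy$. As an illustrative check (added here, with a hypothetical Gaussian datum $f$ for which the convolution has a closed form), one can compare the numerical integral against the exact answer:

```python
import math

def E(x, t):
    return (4 * math.pi * t) ** -0.5 * math.exp(-x * x / (4 * t)) if t > 0 else 0.0

def f(y):
    return math.exp(-y * y)        # hypothetical initial datum

def u_numeric(x, t, dy=0.01, span=12.0):
    # first integral in (8.2.13) for n = 1, with F identically zero
    n = int(span / dy)
    return sum(E(x - k * dy, t) * f(k * dy) for k in range(-n, n + 1)) * dy

def u_exact(x, t):
    # Gaussian convolved with the Gaussian heat kernel, closed form for this f
    return (1 + 4 * t) ** -0.5 * math.exp(-x * x / (1 + 4 * t))

for x, t in [(0.0, 0.5), (1.3, 0.2), (-0.7, 1.0)]:
    assert abs(u_numeric(x, t) - u_exact(x, t)) < 1e-6
```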
8.3 Fundamental Solutions for General Second Order Parabolic Operators
Let $A = (a_{jk})_{1\le j,k\le n} \in M_{n\times n}(\mathbb{C})$ and, associated to such a matrix $A$, consider the parabolic operator
$$L_A := L_A(\partial) := \partial_t - \sum_{j,k=1}^n a_{jk}\,\partial_j\partial_k. \tag{8.3.1}$$
The goal is to obtain explicit formulas for all tempered distributions that are fundamental solutions for $L_A$, under the additional assumption that there exists a constant $C \in (0,\infty)$ such that the matrix $A$ satisfies the strict positiveness condition
$$\operatorname{Re}\sum_{j,k=1}^n a_{jk}\,\xi_j\xi_k \ge C|\xi|^2, \quad \forall\,\xi = (\xi_1,\dots,\xi_n) \in \mathbb{R}^n. \tag{8.3.2}$$
The approach is an adaptation to the parabolic setting of the ideas used in Sect. 7.8 for the derivation of (7.8.53). In a first stage, we note that, via the same reasoning as in Sect. 7.8, when computing the fundamental solution for $L_A$ we may assume without loss of generality that $A$ is symmetric, that is, $A = A^\top$. Also, we treat first the case when $A$ has real entries.

The case when $A$ is real, symmetric, and satisfies (8.3.2). Since $A$ is real, symmetric, and positive definite, $A^{1/2}$ is well defined, real, symmetric, invertible, $A^{1/2}A^{1/2} = A$, and $\det(A^{1/2}) = \sqrt{\det A}$. Throughout this discussion, we agree to denote by $\nabla_{x,t}$ the global gradient in $\mathbb{R}^{n+1}$, and reserve $\nabla$ for the gradient in the variable $x$ only. As in the past, $\Delta$ refers to the Laplacian in the variable $x$. Fix
$$u \in L^1_{loc}(\mathbb{R}^{n+1}) \ \text{ such that } \ \partial_k u \in L^1_{loc}(\mathbb{R}^{n+1}) \text{ for each } k \in \{1,\dots,n\}. \tag{8.3.3}$$
Also, fix some $\varphi \in C_0^\infty(\mathbb{R}^{n+1})$. Then, using the fact that $A$ is symmetric, we obtain
$$\begin{aligned}
\langle L_A u, \varphi \rangle &= \langle \partial_t u, \varphi \rangle - \sum_{j,k=1}^n a_{jk}\big\langle \partial_j\partial_k u, \varphi \big\rangle = -\langle u, \partial_t\varphi \rangle + \sum_{j,k=1}^n a_{jk}\big\langle \partial_k u, \partial_j\varphi \big\rangle \\
&= -\int_{\mathbb{R}^{n+1}} u(x,t)\,\partial_t\varphi(x,t)\,dx\,dt + \sum_{j,k=1}^n \int_{\mathbb{R}^{n+1}} \partial_k u(x,t)\,a_{jk}\,\partial_j\varphi(x,t)\,dx\,dt \\
&= -\int_{\mathbb{R}}\int_{\mathbb{R}^n} \big[u(x,t)\,\partial_t\varphi(x,t) - \nabla u(x,t)\cdot A\nabla\varphi(x,t)\big]\,dx\,dt.
\end{aligned} \tag{8.3.4}$$
In the last integral over $\mathbb{R}^{n+1}$ in (8.3.4) we make the change of variables $x = A^{1/2}y$. Since for every invertible matrix $B \in M_{n\times n}(\mathbb{R})$ and any function $f$ the chain rule gives
$$(\nabla f)(By) = (B^\top)^{-1}\nabla\big[f(By)\big] \tag{8.3.5}$$
for each $y \in \mathbb{R}^n$ such that $f$ is differentiable at $By$, we obtain (the analogue of (7.8.30))
$$\begin{aligned}
\int_{\mathbb{R}}\int_{\mathbb{R}^n} \big[u(x,t)\partial_t\varphi(x,t) &- \nabla u(x,t)\cdot A\nabla\varphi(x,t)\big]\,dx\,dt \\
&= \sqrt{\det A}\int_{\mathbb{R}}\int_{\mathbb{R}^n} \big[v(y,t)\partial_t\psi(y,t) - \nabla v(y,t)\cdot\nabla\psi(y,t)\big]\,dy\,dt \\
&= \sqrt{\det A}\int_{\mathbb{R}}\int_{\mathbb{R}^n} v(y,t)\big[\partial_t\psi(y,t) + \Delta\psi(y,t)\big]\,dy\,dt,
\end{aligned} \tag{8.3.6}$$
where we have set
$$v(x,t) := u(A^{1/2}x, t) \quad \text{and} \quad \psi(x,t) := \varphi(A^{1/2}x, t) \quad \text{for } (x,t) \in \mathbb{R}^{n+1}. \tag{8.3.7}$$
Note that $\psi \in C_0^\infty(\mathbb{R}^{n+1})$. Hence, (8.3.4) and (8.3.6) imply
$$\langle L_A u, \varphi \rangle = -\sqrt{\det A}\int_{\mathbb{R}^{n+1}} v(y,t)\big[\partial_t\psi(y,t) + \Delta\psi(y,t)\big]\,dy\,dt. \tag{8.3.8}$$
Choose $u$ to be the function
$$u(y,t) := \frac{1}{\sqrt{\det A}}\,E_{\partial_t-\Delta}(A^{-1/2}y, t) \quad \text{for } (y,t) \in \mathbb{R}^{n+1}, \tag{8.3.9}$$
where $E_{\partial_t-\Delta}$ is the fundamental solution for the heat operator from (8.1.7). In particular, $u$ satisfies (8.3.3). For this choice of $u$, the function $v$ from (8.3.7) becomes
$$v(y,t) = \frac{1}{\sqrt{\det A}}\,E_{\partial_t-\Delta}(y,t) \quad \text{for } (y,t) \in \mathbb{R}^{n+1}. \tag{8.3.10}$$
Since $A^{1/2}0 = 0$, we may write
$$\begin{aligned}
\varphi(0) &= \varphi(A^{1/2}0, 0) = \psi(0) = \big\langle \delta(x)\otimes\delta(t), \psi \big\rangle = \big\langle (\partial_t-\Delta)E_{\partial_t-\Delta}, \psi \big\rangle = \big\langle E_{\partial_t-\Delta}, -\partial_t\psi - \Delta\psi \big\rangle \\
&= -\sqrt{\det A}\,\frac{1}{\sqrt{\det A}}\int_{\mathbb{R}^{n+1}} E_{\partial_t-\Delta}(y,t)\big[\partial_t\psi(y,t) + \Delta\psi(y,t)\big]\,dy\,dt \\
&= -\sqrt{\det A}\int_{\mathbb{R}^{n+1}} v(y,t)\big[\partial_t\psi(y,t) + \Delta\psi(y,t)\big]\,dy\,dt \\
&= \langle L_A u, \varphi \rangle = \Big\langle \frac{1}{\sqrt{\det A}}\,L_A\big[E_{\partial_t-\Delta}(A^{-1/2}x, t)\big], \varphi(x,t) \Big\rangle,
\end{aligned} \tag{8.3.11}$$
where the penultimate equality in (8.3.11) uses (8.3.8). From (8.3.11) we may conclude that the function
$$E_A(x,t) := \frac{1}{\sqrt{\det A}}\,E_{\partial_t-\Delta}(A^{-1/2}x, t) \quad \text{for } (x,t) \in \mathbb{R}^{n+1} \tag{8.3.12}$$
is a fundamental solution for $L_A$ in $\mathbb{R}^{n+1}$. Keeping in mind that for each $x \in \mathbb{R}^n$ we have $|A^{-1/2}x|^2 = A^{-1/2}x\cdot A^{-1/2}x = (A^{-1}x)\cdot x$, the function $E_A$ may be rewritten as
$$E_A(x,t) := \frac{1}{\sqrt{\det A}}\,H(t)(4\pi t)^{-n/2}\,e^{-\frac{(A^{-1}x)\cdot x}{4t}} \quad \text{for } (x,t) \in \mathbb{R}^{n+1}. \tag{8.3.13}$$
Moreover, from (8.3.12), Theorem 8.1, and Proposition 4.41, it follows that $E_A$ belongs to $\mathcal{S}'(\mathbb{R}^{n+1}) \cap L^1_{loc}(\mathbb{R}^{n+1}) \cap C^\infty(\mathbb{R}^{n+1}\setminus\{0\})$.

The case when $A$ has complex entries, is symmetric, and satisfies (8.3.2). As observed in Remark 7.50, under the current assumptions $A$ continues to be invertible. Also, (7.8.25) holds. In addition, under the current assumptions $(\det A)^{1/2}$ is unambiguously defined (see Remark 7.52).
These comments show that the function $E_A$ from (8.3.13) continues to be well defined under the current assumption on $A$ if $\ln$ is replaced by the principal branch of the complex logarithm (defined for points $z \in \mathbb{C}\setminus(-\infty,0]$ so that $z^a = e^{a\log z}$ for each $a \in \mathbb{R}$). In addition, $E_A$ continues to belong to $L^1_{loc}(\mathbb{R}^{n+1})$ (this can be seen by a computation similar to that in (8.1.6), keeping in mind (7.8.23)), and $E_A \in C^\infty(\mathbb{R}^{n+1}\setminus\{0\})$. Moreover, from (7.8.23) and Exercise 4.5 it follows that $E_A$ belongs to $\mathcal{S}'(\mathbb{R}^{n+1})$. The goal is to prove that this expression is a fundamental solution for $L_A$ in the current case. Making use of (7.8.45), for every $(x,t) \in \mathbb{R}^{n+1}$ with $t \ne 0$, differentiating pointwise we obtain
$$\sum_{j,k=1}^n a_{jk}\,\partial_j\partial_k\,e^{-\frac{(A^{-1}x)\cdot x}{4t}} = e^{-\frac{(A^{-1}x)\cdot x}{4t}}\Big(\frac{(A^{-1}x)\cdot x}{4t^2} - \frac{n}{2t}\Big), \tag{8.3.14}$$
while
$$\partial_t\Big(t^{-n/2}e^{-\frac{(A^{-1}x)\cdot x}{4t}}\Big) = t^{-n/2}e^{-\frac{(A^{-1}x)\cdot x}{4t}}\Big(\frac{(A^{-1}x)\cdot x}{4t^2} - \frac{n}{2t}\Big). \tag{8.3.15}$$
From (8.3.14), (8.3.15), and the expression for $E_A$, we may conclude that
$$L_A E_A(x,t) = 0 \quad \forall\,x \in \mathbb{R}^n,\ \forall\,t \in \mathbb{R}\setminus\{0\}. \tag{8.3.16}$$
In addition, for each $\varphi \in C_0^\infty(\mathbb{R}^{n+1})$ we may compute
$$\begin{aligned}
\langle L_A E_A, \varphi \rangle &= -\Big\langle E_A,\ \partial_t\varphi + \sum_{j,k=1}^n a_{jk}\partial_j\partial_k\varphi \Big\rangle \\
&= -\lim_{\varepsilon\to 0^+}\int_\varepsilon^\infty\int_{\mathbb{R}^n} E_A(x,t)\Big[\partial_t\varphi(x,t) + \sum_{j,k=1}^n a_{jk}\partial_j\partial_k\varphi(x,t)\Big]\,dx\,dt \\
&= \lim_{\varepsilon\to 0^+}\Big[\int_{\mathbb{R}^n} E_A(x,\varepsilon)\varphi(x,\varepsilon)\,dx + \int_\varepsilon^\infty\int_{\mathbb{R}^n} (L_A E_A)(x,t)\varphi(x,t)\,dx\,dt\Big] \\
&= \lim_{\varepsilon\to 0^+}\frac{1}{\sqrt{\det A}}\int_{\mathbb{R}^n} (4\pi\varepsilon)^{-n/2}e^{-\frac{(A^{-1}x)\cdot x}{4\varepsilon}}\varphi(x,\varepsilon)\,dx \\
&= \lim_{\varepsilon\to 0^+}\frac{\pi^{-n/2}}{\sqrt{\det A}}\int_{\mathbb{R}^n} e^{-(A^{-1}y)\cdot y}\,\varphi(2\sqrt{\varepsilon}\,y, \varepsilon)\,dy \\
&= \varphi(0)\,\frac{\pi^{-n/2}}{\sqrt{\det A}}\int_{\mathbb{R}^n} e^{-(A^{-1}y)\cdot y}\,dy.
\end{aligned} \tag{8.3.17}$$
For the fourth equality in (8.3.17) we have used (8.3.16), for the fifth the change of variables $x = 2\sqrt{\varepsilon}\,y$, while for the sixth equality we applied Lebesgue's
dominated convergence theorem. From (8.3.17) we then conclude that $E_A$ is a fundamental solution for $L_A$ in $\mathbb{R}^{n+1}$ if and only if
$$\int_{\mathbb{R}^n} e^{-(A^{-1}y)\cdot y}\,dy = \pi^{n/2}\sqrt{\det A}. \tag{8.3.18}$$
The fact that we already know that $E_A$ is a fundamental solution for $L_A$ in $\mathbb{R}^{n+1}$ in the case when $A \in M_{n\times n}(\mathbb{R})$ satisfies $A = A^\top$ and condition (8.3.2) implies that formula (8.3.18) holds for this class of matrices. By using the same circle of ideas as the ones employed in proving (7.8.50) (based on Lemma 7.53), we conclude that (8.3.18) holds for the larger class of matrices $A \in M_{n\times n}(\mathbb{C})$ satisfying $A = A^\top$ and condition (8.3.2). Hence, $E_A$ is indeed a fundamental solution for $L_A$ under the current assumptions on $A$.

Next, we claim that the hypotheses of Proposition 5.7 hold in the case of the operator $P(D) := L_A$. To justify this, note that if $\xi = (\xi_1,\dots,\xi_{n+1}) \in \mathbb{R}^{n+1}$ is such that $P(\xi) = 0$, then
$$i\xi_{n+1} + \sum_{j,k=1}^n a_{jk}\,\xi_j\xi_k = 0. \tag{8.3.19}$$
Taking real parts, condition (8.3.2) implies $\xi_1 = \dots = \xi_n = 0$ which, in combination with (8.3.19), also forces $\xi_{n+1} = 0$. Hence $\xi = 0$, as wanted. Applying now Proposition 5.7 gives that if $u \in \mathcal{S}'(\mathbb{R}^{n+1})$ is an arbitrary fundamental solution for $L_A$ in $\mathbb{R}^{n+1}$, then $u = E_A + P$ in $\mathcal{S}'(\mathbb{R}^{n+1})$, for some polynomial $P$ in $\mathbb{R}^{n+1}$ satisfying $L_A P = 0$ pointwise in $\mathbb{R}^{n+1}$.

In summary, the above analysis proves the following result.

Theorem 8.8. Suppose $A = (a_{jk})_{1\le j,k\le n} \in M_{n\times n}(\mathbb{C})$ satisfies (8.3.2) and consider the operator $L_A$ associated to $A$ as in (8.3.1). Then the function defined by
$$E_A(x,t) := \frac{1}{\sqrt{\det A_{sym}}}\,H(t)(4\pi t)^{-n/2}\,e^{-\frac{(A_{sym}^{-1}x)\cdot x}{4t}} \quad \text{for all } (x,t) \in \mathbb{R}^{n+1} \tag{8.3.20}$$
belongs to $\mathcal{S}'(\mathbb{R}^{n+1}) \cap L^1_{loc}(\mathbb{R}^{n+1}) \cap C^\infty(\mathbb{R}^{n+1}\setminus\{0\})$ and is a fundamental solution for $L_A$ in $\mathbb{R}^{n+1}$. Above, $A_{sym} := \frac{1}{2}(A + A^\top)$, $\det A_{sym}$ is defined as in Remark 7.50, and $\log$ denotes the principal branch of the complex logarithm (defined for points $z \in \mathbb{C}\setminus(-\infty,0]$ so that $z^a = e^{a\log z}$ for each $a \in \mathbb{R}$). Moreover,
$$\begin{aligned}
&\big\{u \in \mathcal{S}'(\mathbb{R}^{n+1}) : L_A u = \delta \text{ in } \mathcal{S}'(\mathbb{R}^{n+1})\big\} \\
&\qquad = \big\{E_A + P : P \text{ polynomial in } \mathbb{R}^{n+1} \text{ satisfying } L_A P = 0\big\}.
\end{aligned} \tag{8.3.21}$$
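The Gaussian integral identity (8.3.18) lends itself to a quick numerical check for a concrete real symmetric positive definite matrix (the 2×2 matrix below is a hypothetical example; for n = 2 the right-hand side is $\pi\sqrt{\det A}$). This is an added illustration, not part of the original argument.

```python
import math

# a hypothetical 2x2 real symmetric positive definite matrix A
a11, a12, a22 = 2.0, 0.5, 1.0
detA = a11 * a22 - a12 * a12                           # = 1.75 > 0
i11, i12, i22 = a22 / detA, -a12 / detA, a11 / detA    # entries of A^{-1}

def integrand(y1, y2):
    # exp(-(A^{-1} y) . y)
    q = i11 * y1 * y1 + 2 * i12 * y1 * y2 + i22 * y2 * y2
    return math.exp(-q)

h, m = 0.05, 200                                       # grid over [-10, 10]^2
total = sum(integrand(j * h, k * h)
            for j in range(-m, m + 1) for k in range(-m, m + 1)) * h * h
assert abs(total - math.pi * math.sqrt(detA)) < 1e-3
```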
8.4 Fundamental Solution for the Schrödinger Operator
Let $x \in \mathbb{R}^n$ and $t \in \mathbb{R}$. The operator $\frac{1}{i}\partial_t - \Delta_x$ is called the (time-dependent) Schrödinger operator in $\mathbb{R}^{n+1}$ (with zero potential). In this section we determine a fundamental solution for this operator.

By Theorem 5.13, there exists $E \in \mathcal{S}'(\mathbb{R}^{n+1})$ such that
$$\Big(\frac{1}{i}\partial_t - \Delta_x\Big)E = \delta(x)\otimes\delta(t) \quad \text{in } \mathcal{S}'(\mathbb{R}^{n+1}). \tag{8.4.1}$$
Fix such a distribution $E$ and take the partial Fourier transform $F_x$ of (8.4.1) (recall the discussion of the partial Fourier transform at the end of Sect. 4.2) to obtain
$$\frac{1}{i}\partial_t\widehat{E}_x + |\xi|^2\widehat{E}_x = 1(\xi)\otimes\delta(t) \quad \text{in } \mathcal{S}'(\mathbb{R}^{n+1}). \tag{8.4.2}$$
Fix $\xi \in \mathbb{R}^n$ and consider the equation
$$\frac{1}{i}\partial_t u + |\xi|^2 u = \delta \quad \text{in } \mathcal{D}'(\mathbb{R}). \tag{8.4.3}$$
Using Example 5.11, we obtain that $iH(t)e^{-i|\xi|^2 t}$ and $-iH(-t)e^{-i|\xi|^2 t}$ are solutions of this equation. This suggests considering $F := iH(t)e^{-i|\xi|^2 t}$, which belongs to $\mathcal{S}'(\mathbb{R}^{n+1})$ and satisfies (based on a computation similar to that from (8.1.4))
$$\frac{1}{i}\partial_t F + |\xi|^2 F = 1(\xi)\otimes\delta(t) \quad \text{in } \mathcal{S}'(\mathbb{R}^{n+1}). \tag{8.4.4}$$
Then $F_x^{-1}(F) \in \mathcal{S}'(\mathbb{R}^{n+1})$ and
$$\Big(\frac{1}{i}\partial_t - \Delta_x\Big)F_x^{-1}(F) = \delta(x)\otimes\delta(t) \quad \text{in } \mathcal{S}'(\mathbb{R}^{n+1}). \tag{8.4.5}$$
To compute $F_x^{-1}(F)$, pick $\varphi \in \mathcal{S}(\mathbb{R}^{n+1})$ and, based on the properties of $F_x$ as well as the expression for $F$, write
$$\begin{aligned}
\big\langle F_x^{-1}(F), \varphi \big\rangle &= (2\pi)^{-n}\big\langle F_x F^\vee, \varphi \big\rangle = (2\pi)^{-n}\big\langle F^\vee, F_x\varphi \big\rangle \\
&= (2\pi)^{-n}\,i\int_{\mathbb{R}^{n+1}} H(t)e^{-i|\xi|^2 t}(F_x\varphi)(\xi,t)\,d\xi\,dt \\
&= (2\pi)^{-n}\,i\int_0^\infty H(t)\Big[\int_{\mathbb{R}^n} e^{-i|\xi|^2 t}(F_x\varphi)(\xi,t)\,d\xi\Big]dt.
\end{aligned} \tag{8.4.6}$$
For each $t > 0$, the integral over $\mathbb{R}^n$ in (8.4.6) equals $\big\langle F_x(e^{-it|x|^2}), \varphi(\cdot,t) \big\rangle$, which by Example 4.24 (with $a = t$) is equal to $\big(\frac{\pi}{it}\big)^{n/2}\int_{\mathbb{R}^n} e^{-\frac{|\xi|^2}{4it}}\varphi(\xi,t)\,d\xi$. Hence, for each $\varphi \in \mathcal{S}(\mathbb{R}^{n+1})$ we have
CHAPTER 8. THE HEAT OPERATOR AND RELATED VERSIONS $ −1 % Fx (F ), ϕ = (2π)−n i
∞
H(t)
π n2 & it
0 n
= i1− 2
∞
n
H(t)(4πt)− 2
&
e
−|ξ|2 4it
' ϕ(ξ, t) dξ dt
Rn
' |ξ|2 e− 4it ϕ(ξ, t) dξ dt.
(8.4.7)
Rn
0 |x|2
n
We remark here that H(t)(4πt)− 2 e− 4it ∈ L1loc (Rn+1 ) only if n = 1. Hence Fx−1 (F ) is of function type only if n = 1. In general, this distribution belongs to S (Rn+1 ) and its action on ϕ ∈ S(Rn+1 ) is given as in (8.4.7). In summary we proved the following result. Theorem 8.9. The distribution E ∈ S (Rn+1 ) defined by ' & ∞ 2 1− n −n − |ξ| 2 2 4it H(t)(4πt) e ϕ(ξ, t) dξ dt, E, ϕ := i 0
Rn
∀ ϕ ∈ S(Rn+1 ), (8.4.8)
1 i ∂t −Δx
is a fundamental solution for the Schr¨ odinger operator in R . In particular, if n = 1 then E is of function type and is given by the L1loc (R2 ) 1
E(x, t) = H(t)(4πt)− 2 e−
|x|2 4it
,
x ∈ R, t ∈ R.
n+1
(8.4.9)
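For n = 1, the kernel (8.4.9) can be checked to satisfy $(\frac{1}{i}\partial_t - \partial_x^2)E = 0$ pointwise for t > 0, using complex arithmetic and finite differences. This is an added illustrative check; the sample point and step sizes are ad hoc choices.

```python
import cmath
import math

def E(x, t):
    # n = 1 kernel from (8.4.9): (4*pi*t)^(-1/2) * exp(-x^2/(4it)), for t > 0
    return (4 * math.pi * t) ** -0.5 * cmath.exp(-x * x / (4j * t))

x0, t0, h = 0.9, 0.7, 1e-4
dt_E = (E(x0, t0 + h) - E(x0, t0 - h)) / (2 * h)
dxx_E = (E(x0 + h, t0) - 2 * E(x0, t0) + E(x0 - h, t0)) / h ** 2
residual = dt_E / 1j - dxx_E       # ((1/i) d_t - d_x^2) E at (x0, t0)
assert abs(residual) < 1e-4
```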
Further Notes for Chap. 8. The heat equation is one example of what is commonly referred to as linear evolution equations. Originally derived in physics from Fourier's law and conservation of energy (see, e.g., [71] for details), the heat equation has come to play a role of fundamental importance in mathematics and the applied sciences. In mathematics, the heat operator is the prototype for a larger class, called parabolic partial differential operators, that includes the operators studied in Sect. 8.3. The Schrödinger operator is named after the Austrian physicist Erwin Schrödinger, who first introduced it in 1926. It plays a fundamental role in quantum mechanics, where it describes how the quantum state of certain physical systems changes over time.
Chapter 9

The Wave Operator

The operator $\Box := \partial_t^2 - \Delta_x$, $x \in \mathbb{R}^n$, $t \in \mathbb{R}$, is called the wave operator in $\mathbb{R}^{n+1}$. The wave operator arises from modeling vibrations in a string, membrane, or elastic solid. The goal is to determine fundamental solutions for this operator that are tempered distributions. As an application we discuss the generalized Cauchy problem for the wave operator.
9.1 Fundamental Solution for the Wave Operator
By Theorem 5.13, we know that the wave operator admits a fundamental solution $E \in \mathcal{S}'(\mathbb{R}^{n+1})$. Hence, by (4.1.25) and Exercise 2.81, we have
$$\partial_t^2 E - \Delta_x E = \delta(x)\otimes\delta(t) \quad \text{in } \mathcal{S}'(\mathbb{R}^{n+1}). \tag{9.1.1}$$
In this section we determine explicit expressions for two such fundamental solutions. Fix $E \in \mathcal{S}'(\mathbb{R}^{n+1})$ satisfying (9.1.1) and apply the partial Fourier transform $F_x$ to this equation (recall the discussion about partial Fourier transforms from the last part of Sect. 4.2) to obtain
$$\partial_t^2\widehat{E}_x + |\xi|^2\widehat{E}_x = 1(\xi)\otimes\delta(t) \quad \text{in } \mathcal{S}'(\mathbb{R}^{n+1}), \tag{9.1.2}$$
where $\widehat{E}_x := F_x(E) \in \mathcal{S}'(\mathbb{R}^{n+1})$. For $\xi \in \mathbb{R}^n\setminus\{0\}$ fixed, consider the initial value problem (in the variable $t$)
$$\begin{cases}
\dfrac{d^2 v}{dt^2} + |\xi|^2 v = 0 \ \text{ in } \mathbb{R}, \\
v(0) = 0, \quad (\partial_t v)(0) = 1,
\end{cases} \tag{9.1.3}$$
which admits the solution $v(t) = \frac{\sin(t|\xi|)}{|\xi|}$ for $t \in \mathbb{R}$. By Example 5.11, it follows that the distributions $vH$ and $-vH^\vee$ are fundamental solutions for the operator

D. Mitrea, Distributions, Partial Differential Equations, and Harmonic Analysis, Universitext, DOI 10.1007/978-1-4614-8208-6_9, © Springer Science+Business Media New York 2013

$\frac{d^2}{dt^2} + |\xi|^2$ in $\mathcal{D}'(\mathbb{R})$. In addition, $vH$ and $-vH^\vee$ belong to $\mathcal{S}'(\mathbb{R})$ (based on (b) in Theorem 4.13 and Exercise 4.115), thus
$$\Big(\frac{d^2}{dt^2} + |\xi|^2\Big)(vH) = \delta \quad \text{and} \quad \Big(\frac{d^2}{dt^2} + |\xi|^2\Big)(-vH^\vee) = \delta \quad \text{in } \mathcal{S}'(\mathbb{R}). \tag{9.1.4}$$
Moreover, there exists $c \in (0,\infty)$ such that
$$\Big| H(\pm t)\,\frac{\sin(t|y|)}{|y|} \Big| \le c|t| \quad \text{for } (y,t) \in B(0,1)\setminus\{0\} \subset \mathbb{R}^{n+1}. \tag{9.1.5}$$
In particular, if we define the functions
$$f_\pm(y,t) := H(\pm t)\,\frac{\sin(t|y|)}{|y|} \quad \text{for } (y,t) \in \mathbb{R}^{n+1} \text{ with } t \ne 0, \tag{9.1.6}$$
then (9.1.5) implies $f_\pm \in L^1_{loc}(\mathbb{R}^{n+1})$. Furthermore,
$$(1 + |y|^2 + t^2)^{-n-2} f_\pm \in L^1(\mathbb{R}^{n+1}), \tag{9.1.7}$$
thus, by Example 4.3, we obtain $H(\pm t)\frac{\sin(t|y|)}{|y|} \in \mathcal{S}'(\mathbb{R}^{n+1})$. Based on all these facts, if we introduce
$$F^+ := H(t)\,\frac{\sin(t|\xi|)}{|\xi|}, \qquad F^- := -H(-t)\,\frac{\sin(t|\xi|)}{|\xi|}, \tag{9.1.8}$$
then
$$F^+, F^- \in \mathcal{S}'(\mathbb{R}^{n+1}) \tag{9.1.9}$$
and
$$\partial_t^2 F^\pm + |\xi|^2 F^\pm = 1(\xi)\otimes\delta(t) \quad \text{in } \mathcal{S}'(\mathbb{R}^{n+1}). \tag{9.1.10}$$
In particular, $F_x^{-1}(F^+)$ and $F_x^{-1}(F^-)$ are meaningfully defined in $\mathcal{S}'(\mathbb{R}^{n+1})$. Thus, if we set
$$E_+ := F_x^{-1}\Big(H(t)\,\frac{\sin(t|\xi|)}{|\xi|}\Big), \tag{9.1.11}$$
$$E_- := F_x^{-1}\Big(-H(-t)\,\frac{\sin(t|\xi|)}{|\xi|}\Big), \tag{9.1.12}$$
we have
$$E_\pm \in \mathcal{S}'(\mathbb{R}^{n+1}), \qquad (\partial_t^2 - \Delta_x)E_\pm = \delta \quad \text{in } \mathcal{S}'(\mathbb{R}^{n+1}), \tag{9.1.13}$$
$$\operatorname{supp} E_+ \subseteq \mathbb{R}^n\times[0,\infty), \qquad \operatorname{supp} E_- \subseteq \mathbb{R}^n\times(-\infty,0]. \tag{9.1.14}$$
The next task is to find explicit expressions for $F_x^{-1}(F^\pm)$. To this end, fix a function $\varphi \in \mathcal{S}(\mathbb{R}^{n+1})$ and, with the operation $\cdot^\vee$ considered only in the variable $x$, write
$$\langle E_+, \varphi\rangle = (2\pi)^{-n}\big\langle (F_x F_x(E_+))^\vee, \varphi \big\rangle = (2\pi)^{-n}\big\langle F_x(E_+), (F_x\varphi)^\vee \big\rangle = (2\pi)^{-n}\int_{\mathbb{R}^{n+1}} H(t)\,\frac{\sin(t|\xi|)}{|\xi|}\,\widehat{\varphi}_x(-\xi,t)\,d\xi\,dt. \tag{9.1.15}$$
Note that if one replaces $\widehat{\varphi}_x(-\xi,t)$ above with $\int_{\mathbb{R}^n} e^{ix\cdot\xi}\varphi(x,t)\,dx$, then the order of integration in the resulting iterated integral may not be switched, since $\frac{1}{|\xi|}$ is not integrable at infinity; thus Fubini's theorem does not apply. This is why we should proceed with more care and, based on Lebesgue's dominated convergence theorem, we introduce a convergence factor that enables us to eventually make use of Fubini's theorem. More precisely, we have
$$\begin{aligned}
\langle E_+, \varphi\rangle &= (2\pi)^{-n}\int_{\mathbb{R}^{n+1}} F^+(\xi,t)\,\widehat{\varphi}_x(-\xi,t)\,d\xi\,dt \\
&= \lim_{\varepsilon\to 0^+}(2\pi)^{-n}\int_{\mathbb{R}^{n+1}} F^+(\xi,t)\,e^{-\varepsilon|\xi|}\,\widehat{\varphi}_x(-\xi,t)\,d\xi\,dt \\
&= \lim_{\varepsilon\to 0^+}(2\pi)^{-n}\int_{\mathbb{R}^{n+1}} \varphi(x,t)\Big[\int_{\mathbb{R}^n} e^{ix\cdot\xi - \varepsilon|\xi|}\,F^+(\xi,t)\,d\xi\Big]\,dx\,dt \\
&= \lim_{\varepsilon\to 0^+}\langle E_\varepsilon, \varphi\rangle,
\end{aligned} \tag{9.1.16}$$
where
$$E_\varepsilon(x,t) := (2\pi)^{-n}H(t)\int_{\mathbb{R}^n} e^{ix\cdot\xi - \varepsilon|\xi|}\,\frac{\sin(t|\xi|)}{|\xi|}\,d\xi, \quad \forall\,x \in \mathbb{R}^n,\ \forall\,t \in \mathbb{R}. \tag{9.1.17}$$
To compute the last limit in (9.1.16), we separate our analysis into three cases: $n = 1$, $n = 2p+1$ with $p \ge 1$, and $n = 2p$ with $p \ge 1$.

The Case n = 1

Fix $\varepsilon > 0$ and $x \in \mathbb{R}$ and define the function
$$f_\varepsilon(t) := \int_{-\infty}^\infty e^{ix\xi - \varepsilon|\xi|}\,\frac{\sin(t|\xi|)}{|\xi|}\,d\xi \quad \text{for } t \in \mathbb{R}. \tag{9.1.18}$$
Then
$$\begin{aligned}
\partial_t f_\varepsilon(t) &= \int_{-\infty}^\infty e^{ix\xi - \varepsilon|\xi|}\cos(t|\xi|)\,d\xi = \int_{-\infty}^\infty e^{ix\xi - \varepsilon|\xi|}\,\frac{e^{it|\xi|} + e^{-it|\xi|}}{2}\,d\xi \\
&= \frac{1}{2}\Big[\int_0^\infty e^{\xi(ix+it-\varepsilon)}\,d\xi + \int_0^\infty e^{\xi(ix-it-\varepsilon)}\,d\xi + \int_{-\infty}^0 e^{\xi(ix-it+\varepsilon)}\,d\xi + \int_{-\infty}^0 e^{\xi(ix+it+\varepsilon)}\,d\xi\Big] \\
&= \frac{1}{2}\Big[-\frac{1}{ix+it-\varepsilon} - \frac{1}{ix-it-\varepsilon} + \frac{1}{ix-it+\varepsilon} + \frac{1}{ix+it+\varepsilon}\Big] \\
&= \frac{\varepsilon}{(x+t)^2+\varepsilon^2} + \frac{\varepsilon}{(x-t)^2+\varepsilon^2}, \quad \forall\,t \in \mathbb{R}.
\end{aligned} \tag{9.1.19}$$
Consequently, (9.1.19) and the fact that $f_\varepsilon(0) = 0$ imply
$$f_\varepsilon(t) = \arctan\Big(\frac{x+t}{\varepsilon}\Big) - \arctan\Big(\frac{x-t}{\varepsilon}\Big) \quad \text{for } t \in \mathbb{R}. \tag{9.1.20}$$
Making use of (9.1.20) back in (9.1.17) (written for $n = 1$) then gives
$$E_\varepsilon(x,t) = \frac{1}{2\pi}H(t)\Big[\arctan\Big(\frac{x+t}{\varepsilon}\Big) - \arctan\Big(\frac{x-t}{\varepsilon}\Big)\Big], \quad \forall\,x,t \in \mathbb{R}. \tag{9.1.21}$$
Hence, for $\varphi \in \mathcal{S}(\mathbb{R}^2)$ fixed, we may write
$$\lim_{\varepsilon\to 0^+}\langle E_\varepsilon(x,t), \varphi(x,t)\rangle = \lim_{\varepsilon\to 0^+}\frac{1}{2\pi}\int_0^\infty\int_{\mathbb{R}}\Big[\arctan\Big(\frac{x+t}{\varepsilon}\Big) - \arctan\Big(\frac{x-t}{\varepsilon}\Big)\Big]\varphi(x,t)\,dx\,dt. \tag{9.1.22}$$
To continue with our calculation, we further decompose
$$\begin{aligned}
\int_0^\infty\int_{\mathbb{R}} \arctan\Big(\frac{x+t}{\varepsilon}\Big)\varphi(x,t)\,dx\,dt
&= \int_0^\infty\int_{-\infty}^{-t} \arctan\Big(\frac{x+t}{\varepsilon}\Big)\varphi(x,t)\,dx\,dt \\
&\quad + \int_0^\infty\int_{-t}^{\infty} \arctan\Big(\frac{x+t}{\varepsilon}\Big)\varphi(x,t)\,dx\,dt =: I_\varepsilon + II_\varepsilon.
\end{aligned} \tag{9.1.23}$$
By Lebesgue's dominated convergence theorem,
$$\lim_{\varepsilon\to 0^+} I_\varepsilon = -\frac{\pi}{2}\int_0^\infty\int_{-\infty}^{-t}\varphi(x,t)\,dx\,dt \tag{9.1.24}$$
and
$$\lim_{\varepsilon\to 0^+} II_\varepsilon = \frac{\pi}{2}\int_0^\infty\int_{-t}^{\infty}\varphi(x,t)\,dx\,dt. \tag{9.1.25}$$
Similarly,
$$\lim_{\varepsilon\to 0^+}\int_0^\infty\int_{\mathbb{R}} \arctan\Big(\frac{x-t}{\varepsilon}\Big)\varphi(x,t)\,dx\,dt = -\frac{\pi}{2}\int_0^\infty\int_{-\infty}^{t}\varphi(x,t)\,dx\,dt + \frac{\pi}{2}\int_0^\infty\int_t^\infty\varphi(x,t)\,dx\,dt. \tag{9.1.26}$$
Combining (9.1.22)–(9.1.26) permits us to write
$$\begin{aligned}
\lim_{\varepsilon\to 0^+}\langle E_\varepsilon(x,t), \varphi(x,t)\rangle
&= \frac{1}{4}\int_0^\infty\Big[-\int_{-\infty}^{-t}\varphi(x,t)\,dx + \int_{-t}^{\infty}\varphi(x,t)\,dx\Big]dt \\
&\quad + \frac{1}{4}\int_0^\infty\Big[\int_{-\infty}^{t}\varphi(x,t)\,dx - \int_t^\infty\varphi(x,t)\,dx\Big]dt \\
&= \frac{1}{2}\int_0^\infty\int_{-t}^{t}\varphi(x,t)\,dx\,dt = \frac{1}{2}\iint_{\mathbb{R}^2} H(t-|x|)\varphi(x,t)\,dx\,dt.
\end{aligned} \tag{9.1.27}$$
From (9.1.27) and (9.1.16) we then conclude that¹ $E_+(x,t) = \frac{H(t-|x|)}{2}$ for $(x,t) \in \mathbb{R}^2$. In summary, we proved that
$$E_+(x,t) = \frac{H(t-|x|)}{2},\ \forall\,(x,t) \in \mathbb{R}^2, \ \text{ is a tempered distribution and satisfies } (\partial_t^2 - \partial_x^2)E_+ = \delta \ \text{in } \mathcal{S}'(\mathbb{R}^2). \tag{9.1.28}$$

Remark 9.1. (1) The reasoning used to obtain $E_+$ also yields that
$$E_-(x,t) := \frac{H(-t-|x|)}{2},\ \forall\,(x,t) \in \mathbb{R}^2, \ \text{ is a tempered distribution and satisfies } (\partial_t^2 - \partial_x^2)E_- = \delta \ \text{in } \mathcal{S}'(\mathbb{R}^2). \tag{9.1.29}$$
(2) An inspection of (9.1.28) and (9.1.29) reveals that
$$\operatorname{supp} E_+ = \{(x,t) \in \mathbb{R}^2 : |x| \le t\} \quad \text{and} \quad \operatorname{supp} E_- = \{(x,t) \in \mathbb{R}^2 : |x| \le -t\}. \tag{9.1.30}$$
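The distributional identity in (9.1.28) can be previewed numerically: since even-order derivatives transfer to the test function without sign changes, $\langle (\partial_t^2-\partial_x^2)E_+, \varphi\rangle = \frac{1}{2}\iint_{t>|x|}(\partial_t^2\varphi - \partial_x^2\varphi)\,dx\,dt$ should equal $\varphi(0,0)$. A sketch (added illustration; the Gaussian test function and grid are hypothetical choices):

```python
import math

def phi(x, t):
    return math.exp(-x * x - t * t)       # hypothetical test function

def box_phi(x, t):
    # (d_t^2 - d_x^2) phi, differentiated by hand for this Gaussian
    return ((4 * t * t - 2) - (4 * x * x - 2)) * phi(x, t)

h = 0.02
total = 0.0
for j in range(400):                      # t in (0, 8), midpoint rule
    t = (j + 0.5) * h
    for k in range(-400, 400):            # x in (-8, 8), midpoint rule
        x = (k + 0.5) * h
        if t > abs(x):                    # restrict to supp E_+ = {t >= |x|}
            total += box_phi(x, t)
total *= 0.5 * h * h                      # the pairing <E_+, box phi>
assert abs(total - phi(0.0, 0.0)) < 1e-2
```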
¹This expression for a fundamental solution for the wave operator when n = 1 was first used by Jean d'Alembert in 1747 in connection with a vibrating string.

The Case n = 2p + 1, p ≥ 1

It is immediate from (9.1.17) that $E_\varepsilon(x,t)$ is invariant under orthogonal transformations in the variable $x$. Hence, $E_\varepsilon(y,t) = E_\varepsilon((|y|,0,\dots,0),t)$ for every $y \in \mathbb{R}^n$ and $t \in \mathbb{R}$. As such, in what follows we may assume that $x = (|x|,0,\dots,0)$. Fix such an $x$ and set $r := |x|$. With $t \in \mathbb{R}$ fixed, using polar coordinates (see (13.8.1), (13.8.2), and (13.8.3)), rewrite the expression from (9.1.17) as
$$\begin{aligned}
E_\varepsilon(x,t) &= (2\pi)^{-n}H(t)\int_0^\infty\int_{(0,\pi)^{n-2}}\int_0^{2\pi} e^{i\rho r\cos\theta_1 - \varepsilon\rho}\,\frac{\sin(t\rho)}{\rho}\,\rho^{n-1}(\sin\theta_1)^{n-2}\cdots\sin(\theta_{n-2})\,d\theta_{n-1}\cdots d\theta_1\,d\rho \\
&= (2\pi)^{-n}H(t)\int_0^\infty\int_0^\pi \rho^{n-2}\sin(t\rho)\,e^{i\rho r\cos\theta_1 - \varepsilon\rho}(\sin\theta_1)^{n-2}\Big[\int_{(0,\pi)^{n-3}}\int_0^{2\pi}(\sin\theta_2)^{n-3}\cdots\sin(\theta_{n-2})\,d\theta_{n-1}\cdots d\theta_2\Big]\,d\theta_1\,d\rho \\
&= (2\pi)^{-n}\omega_{n-2}H(t)\int_0^\infty\Big[\int_0^\pi e^{i\rho r\cos\theta - \varepsilon\rho}\,\rho^{n-2}(\sin\theta)^{n-2}\sin(t\rho)\,d\theta\Big]\,d\rho,
\end{aligned} \tag{9.1.31}$$
where $\omega_{n-2}$ denotes the surface area of the unit ball in $\mathbb{R}^{n-1}$ (see also (13.6.6) for more details on why the bracketed expression in (9.1.31) equals $\omega_{n-2}$).

To proceed with the computation of the integrals in the rightmost term in (9.1.31), recall that $n = 2p+1$ and, for $\rho > 0$ fixed, set
$$I_p := \rho^{2p-1}\int_0^\pi e^{i\rho r\cos\theta}(\sin\theta)^{2p-1}\,d\theta, \quad \forall\,p \in \mathbb{N}. \tag{9.1.32}$$
We claim that
$$I_{p+1} = -\frac{2p}{r}\,\partial_r I_p \quad \text{for all } p \ge 1. \tag{9.1.33}$$
In order to prove (9.1.33), note that, for each $p \ge 1$, integration by parts yields
$$\begin{aligned}
I_{p+1} &= \rho^{2p+1}\int_0^\pi e^{i\rho r\cos\theta}(\sin\theta)^{2p+1}\,d\theta = \rho^{2p+1}\,\frac{-1}{i\rho r}\int_0^\pi \partial_\theta\big(e^{i\rho r\cos\theta}\big)(\sin\theta)^{2p}\,d\theta \\
&= \frac{i}{r}\,\rho^{2p}\Big\{\Big[e^{i\rho r\cos\theta}(\sin\theta)^{2p}\Big]_{\theta=0}^{\theta=\pi} - 2p\int_0^\pi e^{i\rho r\cos\theta}(\sin\theta)^{2p-1}\cos\theta\,d\theta\Big\} \\
&= -\frac{2pi}{r}\,\rho^{2p}\int_0^\pi e^{i\rho r\cos\theta}(\sin\theta)^{2p-1}\cos\theta\,d\theta \\
&= -\frac{2pi}{r}\,\rho^{2p}\,\frac{1}{i\rho}\,\partial_r\int_0^\pi e^{i\rho r\cos\theta}(\sin\theta)^{2p-1}\,d\theta = -\frac{2p}{r}\,\partial_r I_p,
\end{aligned} \tag{9.1.34}$$
as wanted. By induction, from the recurrence relation in (9.1.33) it follows that
$$I_p = (-2)^{p-1}(p-1)!\,\Big(\frac{1}{r}\partial_r\Big)^{p-1} I_1, \quad \forall\,p \in \mathbb{N}. \tag{9.1.35}$$
As for $I_1$, we have
$$I_1 = \rho\int_0^\pi e^{i\rho r\cos\theta}\sin\theta\,d\theta = \rho\Big[\frac{-1}{i\rho r}\,e^{i\rho r\cos\theta}\Big]_{\theta=0}^{\theta=\pi} = \frac{2\sin(\rho r)}{r}. \tag{9.1.36}$$
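The closed form (9.1.36) and the recurrence (9.1.33) (here for p = 1) lend themselves to a quick numerical check by direct quadrature of (9.1.32). This is an added illustration with ad hoc sample values.

```python
import cmath
import math

def I(p, rho, r, m=20000):
    # I_p = rho^(2p-1) * ∫_0^pi e^{i rho r cos(theta)} sin(theta)^(2p-1) dtheta
    d = math.pi / m
    s = sum(cmath.exp(1j * rho * r * math.cos((k + 0.5) * d))
            * math.sin((k + 0.5) * d) ** (2 * p - 1) for k in range(m))
    return rho ** (2 * p - 1) * s * d

rho, r = 2.0, 1.3

# (9.1.36): I_1 = 2 sin(rho r) / r
assert abs(I(1, rho, r) - 2 * math.sin(rho * r) / r) < 1e-6

# (9.1.33) with p = 1: I_2 = -(2/r) d_r I_1, using the closed form of I_1
h = 1e-5
dI1 = (2 * math.sin(rho * (r + h)) / (r + h)
       - 2 * math.sin(rho * (r - h)) / (r - h)) / (2 * h)
assert abs(I(2, rho, r) - (-2 / r) * dI1) < 1e-4
```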
Recalling (9.1.31) and the fact that $n = 2p+1$ for some $p \in \mathbb{N}$, formulas (9.1.35) and (9.1.36) yield
$$E_\varepsilon(x,t) = (2\pi)^{-n}\omega_{n-2}H(t)\int_0^\infty I_p\,e^{-\varepsilon\rho}\sin(t\rho)\,d\rho \tag{9.1.37}$$
295
= (2π)−n ωn−2 H(t)(−2)p−1 (p − 1)! 2 ×
∞
×
e−ερ sin(tρ)
0
1 ∂r r
p−1
sin(ρr) r
dρ.
Furthermore, using (13.5.6) we have n−1
ωn−2 =
2π p 2π 2 2π p n−1 = = Γ(p) (p − 1)! Γ 2
(9.1.38)
which further simplifies the expression in (9.1.37) to ' p−1 & ∞ 1 1 Eε (x, t) = 2(−2π)−p−1 H(t) ∂r e−ερ sin(tρ) sin(ρr) dρ . r r 0 (9.1.39) Our next claim is that ' & ∞ ε 1 ε e−ερ sin(tρ) sin(ρr) dρ = − . 2 ε2 + (t − r)2 ε2 + (t + r)2 0 Indeed, ∞ e 0
−ερ
1 sin(tρ) sin(ρr) dρ = 2
∞
(9.1.40)
e−ερ [cos(t − r)ρ − cos(t + r)ρ] dρ
0
1 = Re 2
∞
[e−ερ+i(t−r)ρ − e−ερ+i(t+r)ρ ] dρ
0
& ' 1 1 1 = Re − + 2 −ε + i(t − r) −ε + i(t + r) =
& ' ε + i(t − r) 1 −ε − i(t + r) Re 2 + 2 2 ε + (t − r)2 ε + (t + r)2
=
' & 1 ε ε − , 2 ε2 + (t − r)2 ε2 + (t + r)2
(9.1.41)
proving (9.1.40). The identity resulting from (9.1.41) further simplifies the expression in (9.1.39) as (p−1) & ' 1 1 ε ε ∂r − Eε (x, t) = (−2π)−p−1 H(t) r r ε2 +(t−r)2 ε2 +(t+r)2 (9.1.42) for every x ∈ Rn and every t ∈ R, where r = |x|. Recall from (9.1.16) that in order to determine E+ we further need to compute lim+ Eε (x, t) in S (Rn+1 ). With this goal in mind, fix ϕ ∈ S(Rn+1 ) and ε→0
use (9.1.42) in concert with (13.8.8) to write
$$\begin{aligned}
\langle E_\varepsilon, \varphi\rangle &= (-2\pi)^{-p-1}\int_0^\infty\int_0^\infty \Big\{\Big(\frac{1}{r}\partial_r\Big)^{p-1}\Big[\frac{1}{r}\Big(\frac{\varepsilon}{\varepsilon^2+(t-r)^2} - \frac{\varepsilon}{\varepsilon^2+(t+r)^2}\Big)\Big]\Big\}\int_{\partial B(0,r)}\varphi(\omega,t)\,d\sigma(\omega)\,dr\,dt \\
&= (-2\pi)^{-p-1}\,\varepsilon\int_0^\infty\int_0^\infty \Big\{\Big(\frac{1}{r}\partial_r\Big)^{p-1}\Big[\frac{1}{r}\Big(\frac{1}{\varepsilon^2+(t-r)^2} - \frac{1}{\varepsilon^2+(t+r)^2}\Big)\Big]\Big\}\int_{\partial B(0,r)}\varphi(\omega,t)\,d\sigma(\omega)\,dr\,dt.
\end{aligned} \tag{9.1.43}$$
A natural question to ask is whether, if $p \ge 2$, we may integrate by parts $(p-1)$ times in the $r$ variable in the rightmost expression in (9.1.43). Observe that, at least formally,
$$\int_0^\infty \frac{1}{r}\,\frac{df}{dr}(r)\,g(r)\,dr = -\int_0^\infty f(r)\,\frac{d}{dr}\Big[\frac{g(r)}{r}\Big]\,dr \tag{9.1.44}$$
if
$$\lim_{r\to 0^+} \frac{f(r)g(r)}{r} = 0 = \lim_{r\to\infty} \frac{f(r)g(r)}{r}. \tag{9.1.45}$$
In the setting of the last expression in (9.1.43), for each $\varepsilon, t > 0$ fixed, and assuming $p \ge 2$, if we let
$$f(r) := \Big(\frac{1}{r}\partial_r\Big)^{p-2}\Big[\frac{1}{r}\Big(\frac{1}{\varepsilon^2+(t-r)^2} - \frac{1}{\varepsilon^2+(t+r)^2}\Big)\Big], \quad r > 0, \tag{9.1.46}$$
$$g(r) := \int_{\partial B(0,r)}\varphi(\omega,t)\,d\sigma(\omega) = r^{n-1}\int_{S^{n-1}}\varphi(r\omega,t)\,d\sigma(\omega), \quad r > 0, \tag{9.1.47}$$
then these functions satisfy (9.1.45) and (9.1.44). Proceeding by induction (with $p \ge 2$), we apply formula (9.1.44) $(p-1)$ times, pick up a factor $(-1)^{p-1}$ in the process (the last factor $\frac{1}{r}$ being bundled up with the derivatives acting on $g$), and write
$$\langle E_\varepsilon, \varphi\rangle = (2\pi)^{-p-1}\int_0^\infty\int_0^\infty \Big(\frac{\varepsilon}{\varepsilon^2+(t-r)^2} - \frac{\varepsilon}{\varepsilon^2+(t+r)^2}\Big)\Big(\frac{1}{r}\partial_r\Big)^{p-1}\Big[\frac{1}{r}\int_{\partial B(0,r)}\varphi(\omega,t)\,d\sigma(\omega)\Big]\,dr\,dt. \tag{9.1.48}$$
¹Note that (9.1.48) is also valid if p = 1 without any need of integration by parts.
9.1. FUNDAMENTAL SOLUTION FOR THE WAVE OPERATOR
297
We are left with taking the limit as ε → 0+ in (9.1.48) a task we complete by using Lemma 9.3 (which is stated and proved at the end of this subsection). Specifically, we apply Lemma 9.3 with 7 p−1 6 1 1 ∂r ϕ(ω, t) dσ(ω) . (9.1.49) h(r) := r r ∂B(0,r) Note that the second equality in (9.1.47) and the fact that ϕ ∈ S(Rn+1 ) guarantee that h in (9.1.49) satisfies the hypothesis of Lemma 9.3. These facts combined with Lebesgue’s dominated convergence theorem yield 7 p−1 ∞ 6 1 1 ∂r lim Eε , ϕ = (2π)−p−1 π ϕ(ω, t) dσ(ω) dt. r r ∂B(0,r) ε→0+ 0 r=t
(9.1.50) In summary, we proved the following result: If n = 2p + 1, for some p ∈ N, then E+ ∈ S (Rn+1 ) defined by −p−1
E+ , ϕ = (2π)
∞
6
π 0
1 ∂r r
7 p−1 1 ϕ(ω, t) dσ(ω) dt r ∂B(0,r) r=t
(9.1.51) for ϕ ∈ S(Rn+1 ) is a fundamental solution for the wave operator in Rn+1 . The reasoning used to obtain (9.1.51) also yields an expression for E− . More precisely, similar to (9.1.16), we arrive at E− , ϕ = lim+ Eε , ϕ for ϕ ∈ S(Rn+1 ) where, this time, 0 ∞ Eε , ϕ = −(2π)−p−1 −∞
0
ε→0
ε ε − 2 2 2 ε + (t − r) ε + (t + r)2
×
7 6 p−1 1 ∂r ϕ(ω, t) dσ(ω) dr dt × r r ∂B(0,r) 1
−p−1
∞
= (2π)
0
0
∞
ε ε − 2 ε2 + (t − r)2 ε + (t + r)2
×
(9.1.52)
7 6 p−1 1 ∂r ϕ(ω, −t) dσ(ω) dr dt. × r r ∂B(0,r) 1
CHAPTER 9. THE WAVE OPERATOR
298
Then applying Lemma 9.3, we obtain: If n = 2p + 1, for some p ∈ N, then E− ∈ S (Rn+1 ), defined by E− , ϕ = (2π)−p−1 π
∞ 1 0
r ∂r
p−1 1
r
ϕ(ω, −t) dσ(ω) dt, (9.1.53) ∂B(0,r) r=t
for ϕ ∈ S(Rn+1 ), is a fundamental solution for the wave operator in Rn+1 . Remark 9.2. (1) The distributions E+ and E− from (9.1.51) and (9.1.53), respectively, satisfy supp E+ = {(x, t) ∈ R2p+1 × [0, ∞) : |x| = t},
(9.1.54)
supp E+ = {(x, t) ∈ R2p+1 × (−∞, 0] : |x| = −t}.
(9.1.55)
(2) In the case when n = 3 (thus, for p = 1), formulas (9.1.51) and (9.1.53) become ∞ 1 1 E+ , ϕ = ϕ(ω, t) dσ(ω) dt 4π 0 t ∂B(0,t) 9 8 H(t) δ∂B(0,t) , ϕ , (9.1.56) = 4πt ∞ 1 1 E− , ϕ = ϕ(ω, −t) dσ(ω) dt 4π 0 t ∂B(0,t) 9 8 H(−t) δ∂B(0,−t) , ϕ , = − 4πt
(9.1.57)
for every ϕ ∈ S(R4 ) where, for each R ∈ (0, ∞), the symbol δ∂B(0,R) stands for the distribution defined as in Exercise 2.128 corresponding to Σ := ∂B(0, R). (3) If n = 2p, p ∈ N, the approach used to obtain (9.1.51) works up to the point where the general formula for Ip was obtained. More precisely, with ρ > 0 fixed, if we define π n eiρr cos θ (sin θ)n dθ, ∀ n ≥ 2, (9.1.58) Jn := ρ 0
then the recurrence formula Jn = − n−1 r ∂r Jn−2 is valid for all n ≥ 2 (observe π that Ip = J2p−1 ). However, the integral J0 = 0 eiρr cos θ dθ cannot be computed explicitly (as opposed to the computation of I1 ), hence the recurrence formula for Jn may not be used inductively to obtain an explicit expression for Jn . This is why, in order to obtain explicit expressions for E± when n is even we resort to a proof different than the one used when n is odd.
9.1. FUNDAMENTAL SOLUTION FOR THE WAVE OPERATOR
299
Lemma 9.3. If h : [0, ∞) → R is continuous and bounded, then for each t ∈ (0, ∞) we have

lim_{ε→0⁺} ∫_0^∞ [ ε/(ε² + (t − r)²) − ε/(ε² + (t + r)²) ] h(r) dr = π h(t).   (9.1.59)

Proof. If t ≥ 0 is fixed, then via suitable changes of variables we obtain

lim_{ε→0⁺} ∫_0^∞ [ ε/(ε² + (t − r)²) − ε/(ε² + (t + r)²) ] h(r) dr
 = lim_{ε→0⁺} [ ∫_{−t/ε}^∞ h(t + ελ)/(1 + λ²) dλ − ∫_{t/ε}^∞ h(−t + ελ)/(1 + λ²) dλ ]   (9.1.60)
 = ∫_{−∞}^∞ h(t)/(1 + λ²) dλ = π h(t),

where for the second-to-last equality in (9.1.60) we applied Lebesgue's dominated convergence theorem. □

The Method of Descent

In order to treat the case when n is even, we use a procedure called the method of descent. The ultimate goal is to use the method of descent to deduce, from a fundamental solution for the wave operator in dimension n + 1 with n even, a fundamental solution for the wave operator in dimension n. To set the stage we make the following definition.

Definition 9.4. A sequence {ψ_j}_{j∈N} ⊂ C_0^∞(R) is said to converge in a dominated fashion to 1 if the following two conditions are satisfied:

(i) for each compact subset K of R there exists j_0 = j_0(K) ∈ N such that ψ_j(x) = 1 for all x ∈ K if j ≥ j_0;

(ii) for each q ∈ N_0, the sequence {ψ_j^{(q)}}_{j∈N} is uniformly bounded on R.
An example of a sequence {ψ_j}_{j∈N} converging in a dominated fashion to 1 is given by

ψ_j(x) := ψ(x/j),   ∀ x ∈ R, ∀ j ∈ N,   (9.1.61)

where

ψ ∈ C_0^∞(R) satisfies ψ(x) = 1 whenever |x| ≤ 1.   (9.1.62)
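Such a ψ can be built concretely from the standard bump g(s) = e^{−1/s}·H(s), and both properties in Definition 9.4 are easy to probe numerically. The following is an illustrative sketch (the construction and helper names are our own, not from the text); note that ψ_j^{(q)}(x) = j^{−q} ψ^{(q)}(x/j), so the derivative bounds in (ii) even improve as j grows.

```python
import numpy as np

def g(s):
    # Standard building block: g(s) = exp(-1/s) for s > 0, g(s) = 0 otherwise.
    return np.exp(-1.0 / s) if s > 0 else 0.0

def psi(x):
    # Smooth plateau function: psi = 1 on [-1, 1], psi = 0 outside (-2, 2).
    s = 2.0 - abs(x)                     # s >= 1 on the plateau, s <= 0 far out
    return g(s) / (g(s) + g(1.0 - s))

def psi_j(x, j):
    # The sequence (9.1.61): psi_j(x) = psi(x / j).
    return psi(x / j)

# Property (i): psi_j = 1 on the compact set [-K, K] once j >= K.
K = 4
assert all(psi_j(x, 5) == 1.0 for x in np.linspace(-K, K, 101))

# Property (ii): psi_j'(x) = psi'(x/j)/j, so sup |psi_j'| <= sup |psi'|
# uniformly in j; check via finite differences.
xs = np.linspace(-12.0, 12.0, 4001)
def max_abs_derivative(j):
    vals = np.array([psi_j(x, j) for x in xs])
    return np.max(np.abs(np.gradient(vals, xs)))

bounds = [max_abs_derivative(j) for j in (1, 2, 3, 4)]
assert all(b <= bounds[0] + 1e-8 for b in bounds)
```

The quotient g(s)/(g(s) + g(1 − s)) is a standard way to obtain a C^∞ function that is identically 1 where s ≥ 1 and identically 0 where s ≤ 0.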
In what follows we will use the notation

x = (x′, x_n) ∈ R^n, x′ ∈ R^{n−1}, x_n ∈ R,   ∂ = (∂′, ∂_n), ∂′ = (∂_1, . . . , ∂_{n−1}).   (9.1.63)
Definition 9.5. A distribution u ∈ D′(R^n) is called integrable with respect to x_n if for any ϕ ∈ C_0^∞(R^{n−1}) and any sequence {ψ_j}_{j∈N} ⊂ C_0^∞(R) converging in a dominated fashion to 1, the sequence {⟨u, ϕ ⊗ ψ_j⟩}_{j∈N} is convergent and its limit does not depend on the selection of the sequence {ψ_j}_{j∈N}.

Suppose u ∈ D′(R^n) is integrable with respect to x_n and consider a sequence {ψ_j}_{j∈N} ⊂ C_0^∞(R) that converges in a dominated fashion to 1. For each j ∈ N define the linear mapping u_j : D(R^{n−1}) → C,

u_j(ϕ) := ⟨u, ϕ ⊗ ψ_j⟩,   ∀ ϕ ∈ C_0^∞(R^{n−1}).   (9.1.64)

Then u_j ∈ D′(R^{n−1}) for every j ∈ N and lim_{j,k→∞} ⟨u_j − u_k, ϕ⟩ = 0 for every function ϕ ∈ C_0^∞(R^{n−1}), thus the sequence {u_j}_{j∈N} is Cauchy in D′(R^{n−1}) (see the definition after (13.1.27)). Since D′(R^{n−1}) is sequentially complete (recall Fact 2.18), it follows that there exists some u_0 ∈ D′(R^{n−1}) with the property that

u_j → u_0 in D′(R^{n−1}) as j → ∞, and ⟨u_0, ϕ⟩ = lim_{j→∞} ⟨u_j, ϕ⟩ ∀ ϕ ∈ C_0^∞(R^{n−1}).   (9.1.65)
Moreover, u_0 is independent of the choice of the sequence {ψ_j}_{j∈N} converging in a dominated fashion to 1 (prove this as an exercise). The distribution u_0 will be called the integral of u with respect to x_n and will be denoted by ∫_{−∞}^∞ u dx_n. The reason for using this terminology and notation for u_0 is evident from the next proposition. We denote by L^{n−1} the Lebesgue measure in R^{n−1}.

Proposition 9.6. If f ∈ L¹_loc(R^n) is a function with the property that

∫_R |f(x′, x_n)| dx_n < ∞ for L^{n−1}-almost every x′ ∈ R^{n−1},
and f ∈ L¹(K × R) for every compact set K ⊂ R^{n−1},   (9.1.66)

then the distribution u_f ∈ D′(R^n) determined by f (recall (2.1.6)) is integrable with respect to x_n. In addition, the distribution u_0 := ∫_{−∞}^∞ u_f dx_n is of function type and is given by the function

g(x′) := ∫_R f(x′, x_n) dx_n, defined for L^{n−1}-almost every x′ ∈ R^{n−1}.   (9.1.67)
Proof. Since f ∈ L¹_loc(R^n), we have that u_f ∈ D′(R^n) as in (2.1.6) is well-defined. Fix ϕ ∈ C_0^∞(R^{n−1}). Since f is absolutely integrable on supp ϕ × R, if {ψ_j}_{j∈N} ⊂ C_0^∞(R) is a sequence converging in a dominated fashion to 1, we may apply Fubini's theorem and then Lebesgue's dominated convergence theorem to write

lim_{j→∞} ⟨u_f, ϕ ⊗ ψ_j⟩ = lim_{j→∞} ∫_{R^{n−1}} ϕ(x′) ∫_R f(x′, x_n) ψ_j(x_n) dx_n dx′
 = ∫_{R^{n−1}} ϕ(x′) ∫_R f(x′, x_n) dx_n dx′,   (9.1.68)
and the desired conclusion follows. □

The proposition that is the engine in the method of descent is proved next.

Proposition 9.7. Let m ∈ N_0 and let P(∂) = P(∂′, ∂_n) be a constant coefficient, linear differential operator of order m in R^n. Define the differential operator P_0(∂′) := P(∂′, 0) in R^{n−1}. If f ∈ D′(R^{n−1}) and u ∈ D′(R^n) are such that

P(∂)u = f(x′) ⊗ δ(x_n) in D′(R^n)   (9.1.69)

and u is integrable with respect to x_n, then u_0 := ∫_{−∞}^∞ u dx_n is a solution of the equation P_0(∂′)u_0 = f in D′(R^{n−1}).

Proof. Fix a sequence {ψ_j}_{j∈N} ⊂ C_0^∞(R) that converges in a dominated fashion to 1 and let ϕ ∈ C_0^∞(R^{n−1}). Using the definition of u_0 we may write

⟨P_0(∂′)u_0, ϕ⟩ = ⟨u_0, P_0(−∂′)ϕ⟩ = lim_{j→∞} ⟨u, (P_0(−∂′)ϕ) ⊗ ψ_j⟩
 = lim_{j→∞} [ ⟨u, P(−∂)(ϕ ⊗ ψ_j)⟩ + ⟨u, (P_0(−∂′)ϕ) ⊗ ψ_j − P(−∂)(ϕ ⊗ ψ_j)⟩ ].   (9.1.70)

We claim that

lim_{j→∞} ⟨u, (P_0(−∂′)ϕ) ⊗ ψ_j − P(−∂)(ϕ ⊗ ψ_j)⟩ = 0.   (9.1.71)

Assume the claim for now. Then returning to (9.1.70) we have

⟨P_0(∂′)u_0, ϕ⟩ = lim_{j→∞} ⟨u, P(−∂)(ϕ ⊗ ψ_j)⟩ = lim_{j→∞} ⟨P(∂)u, ϕ ⊗ ψ_j⟩
 = lim_{j→∞} ⟨f ⊗ δ, ϕ ⊗ ψ_j⟩ = lim_{j→∞} ⟨f, ϕ⟩ ψ_j(0) = ⟨f, ϕ⟩,   (9.1.72)

where for the last equality in (9.1.72) we used property (i) in Definition 9.4 with K = {0}. Hence, the desired conclusion follows.

To prove (9.1.71), observe that

(P_0(−∂′)ϕ) ⊗ ψ_j − P(−∂)(ϕ ⊗ ψ_j) = Σ_{q=1}^m (P_q(∂′)ϕ) ⊗ ψ_j^{(q)},

where P_q is a differential operator of order ≤ m − q. Then for each q ∈ {1, . . . , m}, the sequence {ψ_j + ψ_j^{(q)}}_{j∈N} also converges in a dominated fashion to 1, which, combined with the fact that u is integrable with respect to x_n, further yields

lim_{j→∞} ⟨u, (P_q(∂′)ϕ) ⊗ ψ_j^{(q)}⟩
 = lim_{j→∞} ⟨u, (P_q(∂′)ϕ) ⊗ (ψ_j + ψ_j^{(q)})⟩ − lim_{j→∞} ⟨u, (P_q(∂′)ϕ) ⊗ ψ_j⟩
 = ⟨u_0, P_q(∂′)ϕ⟩ − ⟨u_0, P_q(∂′)ϕ⟩ = 0,   (9.1.73)

proving (9.1.71). The proof of the proposition is now complete. □
The Case n = 2p, p ≥ 1

We are now ready to proceed with determining a fundamental solution for the wave operator in the case when n = 2p, p ≥ 1. The main idea is to use Proposition 9.7 with P := ∂_t² − Σ_{j=1}^{2p+1} ∂_j², the wave operator in R^{n+2}, with f := δ(x_1, . . . , x_{2p}) ⊗ δ(t), and with u equal to E_{2p+1}, the fundamental solution from (9.1.51). Note that under these assumptions we have P_0 := ∂_t² − Σ_{j=1}^{2p} ∂_j², which is the wave operator in R^{n+1}. Thus, if E_{2p+1} is integrable with respect to x_{2p+1}, then by Proposition 9.7 it follows that u := ∫_{−∞}^∞ E_{2p+1} dx_{2p+1} satisfies

( ∂_t² − Σ_{j=1}^{2p} ∂_j² ) u = δ(x_1, . . . , x_{2p}) ⊗ δ(t) in D′(R^{2p+1}).   (9.1.74)

Therefore, u is a fundamental solution for the wave operator corresponding to n = 2p.

Let us first show that the distribution E_{2p+1} given by the formula in (9.1.51) is integrable with respect to x_{2p+1}. Fix an arbitrary function ϕ ∈ C_0^∞(R^{2p+1}) and let {ψ_j}_{j∈N} ⊂ C_0^∞(R) be a sequence that converges in a dominated fashion to 1. Then, using the notation x = (x′, x_{2p+1}) ∈ R^{2p} × R, we have

lim_{j→∞} ⟨E_{2p+1}, ϕ ⊗ ψ_j⟩
 = lim_{j→∞} 2^{−(p+1)} π^{−p} ∫_0^∞ [ ((1/r)∂_r)^{p−1} ( (1/r) ∫_{x∈R^{2p+1}, |x|=r} ϕ(x′, t) ψ_j(x_{2p+1}) dσ(x) ) ]|_{r=t} dt
 = 2^{−(p+1)} π^{−p} ∫_0^∞ [ ((1/r)∂_r)^{p−1} ( (1/r) ∫_{x∈R^{2p+1}, |x|=r} ϕ(x′, t) dσ(x) ) ]|_{r=t} dt,   (9.1.75)
where for the last equality in (9.1.75) we used Lebesgue's dominated convergence theorem (here we note that the second equality in (9.1.47) and the properties satisfied by {ψ_j}_{j∈N} play an important role). With (9.1.75) in hand, we may conclude that E_{2p+1} is integrable with respect to x_{2p+1}. Consequently, if one sets E_{2p} := ∫_{−∞}^∞ E_{2p+1} dx_{2p+1}, then E_{2p} ∈ D′(R^{2p+1}) and

⟨E_{2p}, ϕ⟩ = 2^{−(p+1)} π^{−p} ∫_0^∞ [ ((1/r)∂_r)^{p−1} ( (1/r) ∫_{x∈R^{2p+1}, |(x′,x_{2p+1})|=r} ϕ(x′, t) dσ(x) ) ]|_{r=t} dt   (9.1.76)
for every ϕ ∈ C_0^∞(R^{2p+1}). In addition, by Proposition 9.7, it follows that E_{2p} is a fundamental solution for the wave operator in R^{n+1}. Note that the function ϕ appearing under the second integral in (9.1.76) does not depend on the variable x_{2p+1}. Hence, it is natural to proceed further in order to rewrite (9.1.76) in a form that does not involve the variable x_{2p+1}.

Fix r > 0 and denote by B(0, r) the ball in R^n of radius r centered at 0 ∈ R^n. Define the mappings P± : B(0, r) → R^{2p+1} by setting

P±(x′) := (x_1, x_2, . . . , x_{2p}, ±√(r² − |x′|²)),   ∀ x′ = (x_1, . . . , x_{2p}) ∈ B(0, r).   (9.1.77)
Then P+ and P− are parametrizations of the (open) upper and lower, respectively, hemispheres of the surface ball in R^{2p+1} of radius r centered at 0. Hence (keeping in mind Definition 13.38 and Definition 13.39), we may write

∫_{x∈R^{2p+1}, |(x′,x_{2p+1})|=r} ϕ(x′, t) dσ(x)
 = ∫_{B(0,r)} ϕ(x′, t) |∂_1 P+ × · · · × ∂_{2p} P+| dx′ + ∫_{B(0,r)} ϕ(x′, t) |∂_1 P− × · · · × ∂_{2p} P−| dx′.   (9.1.78)

A direct computation based on (9.1.77) and (13.6.4) further yields

∂_1 P± × · · · × ∂_{2p} P± =
| 1    0    . . .  0      ∓ x_1/√(r² − |x′|²)    |
| 0    1    . . .  0      ∓ x_2/√(r² − |x′|²)    |
| . . . . . . . . . . . . . . . . . . . . . . . . |
| 0    0    . . .  1      ∓ x_{2p}/√(r² − |x′|²) |
| e_1  e_2  . . .  e_{2p}  e_{2p+1}               |
=: det A±,   (9.1.79)

where e_j is the unit vector in R^{2p+1} with 1 on the jth position, for each j ∈ {1, . . . , 2p + 1}. Hence, the components of the vector ∂_1 P± × · · · × ∂_{2p} P± are

(−1)^{k+1} det A±_k,   k ∈ {1, . . . , 2p, 2p + 1},   (9.1.80)

where A±_k is the 2p × 2p matrix obtained from A± by deleting column k and row 2p + 1. It is easy to see from (9.1.79) that

det A±_{2p+1} = 1,   det A±_k = (−1)^k (∓x_k)/√(r² − |x′|²),   ∀ k ∈ {1, . . . , 2p}.   (9.1.81)
Consequently,

|∂_1 P± × · · · × ∂_{2p} P±| = r/√(r² − |x′|²).   (9.1.82)
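As a sanity check of (9.1.82) in the lowest-dimensional case p = 1 (so the surface ball lies in R³): integrating the density r/√(r² − |x′|²) over B(0, r) ⊂ R² must reproduce the area 2πr² of a hemisphere of radius r. A quick numerical sketch (the function name is ours):

```python
import numpy as np
from scipy.integrate import quad

def hemisphere_area_via_jacobian(r):
    # Integrate the surface-measure density r / sqrt(r^2 - rho^2) of (9.1.82)
    # over the disk B(0, r) in polar coordinates: dx' = 2*pi*rho d(rho).
    integrand = lambda rho: 2.0 * np.pi * rho * r / np.sqrt(r**2 - rho**2)
    val, _ = quad(integrand, 0.0, r)
    return val

r = 1.7
expected = 2.0 * np.pi * r**2       # area of a hemisphere of radius r
assert abs(hemisphere_area_via_jacobian(r) - expected) < 1e-6
```

The singularity of the density at |x′| = r is of the integrable type 1/√(r − ρ), which adaptive quadrature handles without difficulty (one may also substitute ρ = r sin θ to remove it entirely).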
Formula (9.1.82) combined with (9.1.78) and (9.1.76) gives

⟨E_{2p}, ϕ⟩ = (2π)^{−p} ∫_0^∞ [ ((1/r)∂_r)^{p−1} ( ∫_{x′∈B(0,r)} ϕ(x′, t)/√(r² − |x′|²) dx′ ) ]|_{r=t} dt.   (9.1.83)
In summary, we proved:

If n = 2p, for some p ∈ N, then E+ ∈ S′(R^{n+1}), defined by

⟨E+, ϕ⟩ = (2π)^{−p} ∫_0^∞ [ ((1/r)∂_r)^{p−1} ( ∫_{x∈R^n, |x|<r} ϕ(x, t)/√(r² − |x|²) dx ) ]|_{r=t} dt,   (9.1.84)

for ϕ ∈ S(R^{n+1}), is a fundamental solution for the wave operator □ in R^{n+1}.

The reasoning used to obtain E+ also applies if one starts with E_{2p+1} being the distribution from (9.1.53). Under this scenario the conclusion is that

If n = 2p, for some p ∈ N, then E− ∈ S′(R^{n+1}), defined by
⟨E−, ϕ⟩ = (2π)^{−p} ∫_0^∞ [ ((1/r)∂_r)^{p−1} ( ∫_{x∈R^n, |x|<r} ϕ(x, −t)/√(r² − |x|²) dx ) ]|_{r=t} dt,   (9.1.85)

for ϕ ∈ S(R^{n+1}), is a fundamental solution for the wave operator □ in R^{n+1}.

Remark 9.8. (1) If E+ and E− are as in (9.1.84) and (9.1.85), respectively, then

supp E+ = {(x, t) ∈ R^{2p} × [0, ∞) : |x| ≤ t},   (9.1.86)

and

supp E− = {(x, t) ∈ R^{2p} × (−∞, 0] : |x| ≤ −t}.   (9.1.87)
(2) If p = 1, then for each ϕ ∈ S(R³) the expression in (9.1.84) becomes

⟨E+, ϕ⟩ = (1/(2π)) ∫_0^∞ ∫_{|x|<t} ϕ(x, t)/√(t² − |x|²) dx dt = ⟨ H(t − |x|)/(2π√(t² − |x|²)), ϕ ⟩.

(This expression was first found by Vito Volterra; cf. [73].)
Hence, if n = 2 then E+ is of function type and

E+(x, t) = H(t − |x|)/(2π√(t² − |x|²)) for every x ∈ R² and every t ∈ R.

Similarly, when n = 2 it follows that E− is of function type and

E−(x, t) = H(−t − |x|)/(2π√(t² − |x|²)) for every x ∈ R² and every t ∈ R.
Summary for Arbitrary n

Here we combine the results obtained in this section regarding fundamental solutions for the wave operator. These results have been summarized in (9.1.28), Remark 9.1, (9.1.51), (9.1.53), Remark 9.2, (9.1.84), (9.1.85), and Remark 9.8.

Theorem 9.9. Consider the wave operator □ = ∂_t² − Δ_x in R^{n+1}, where x ∈ R^n and t ∈ R. Then the following are true.

(1) Suppose n = 1. Then the L¹_loc(R²) functions

E+(x, t) := H(t − |x|)/2,   ∀ (x, t) ∈ R²,   (9.1.88)

E−(x, t) := H(−t − |x|)/2,   ∀ (x, t) ∈ R²,   (9.1.89)
satisfy E+, E− ∈ S′(R²) and are fundamental solutions for the wave operator □ in R². Moreover,

supp E+ = {(x, t) ∈ R² : |x| ≤ t},   (9.1.90)

supp E− = {(x, t) ∈ R² : |x| ≤ −t}.   (9.1.91)
(2) Suppose n = 2p + 1, for some p ∈ N. Then the distributions E+ ∈ S′(R^{n+1}) and E− ∈ S′(R^{n+1}) defined by

⟨E+, ϕ⟩ := (2π)^{−p−1} π ∫_0^∞ [ ((1/r)∂_r)^{p−1} ( (1/r) ∫_{∂B(0,r)} ϕ(ω, t) dσ(ω) ) ]|_{r=t} dt,   (9.1.92)

⟨E−, ϕ⟩ := (2π)^{−p−1} π ∫_0^∞ [ ((1/r)∂_r)^{p−1} ( (1/r) ∫_{∂B(0,r)} ϕ(ω, −t) dσ(ω) ) ]|_{r=t} dt,   (9.1.93)

for every ϕ ∈ S(R^{n+1}), are fundamental solutions for the wave operator □ in R^{n+1}. Moreover,

supp E+ = {(x, t) ∈ R^{2p+1} × [0, ∞) : |x| = t},   (9.1.94)

supp E− = {(x, t) ∈ R^{2p+1} × (−∞, 0] : |x| = −t}.   (9.1.95)
Corresponding to the case n = 3 (thus, for p = 1), formulas (9.1.92) and (9.1.93) become

⟨E+, ϕ⟩ = (1/(4π)) ∫_0^∞ (1/t) ∫_{∂B(0,t)} ϕ(ω, t) dσ(ω) dt = ⟨ (H(t)/(4πt)) δ_{∂B(0,t)}, ϕ ⟩,   (9.1.96)

⟨E−, ϕ⟩ = (1/(4π)) ∫_0^∞ (1/t) ∫_{∂B(0,t)} ϕ(ω, −t) dσ(ω) dt = ⟨ −(H(−t)/(4πt)) δ_{∂B(0,−t)}, ϕ ⟩,   (9.1.97)

for every ϕ ∈ S(R⁴).

(3) Suppose n = 2p, for some p ∈ N. Then the distributions E+ ∈ S′(R^{n+1}) and E− ∈ S′(R^{n+1}) defined by
⟨E+, ϕ⟩ := (2π)^{−p} ∫_0^∞ [ ((1/r)∂_r)^{p−1} ( ∫_{x∈R^n, |x|<r} ϕ(x, t)/√(r² − |x|²) dx ) ]|_{r=t} dt,   (9.1.98)

⟨E−, ϕ⟩ := (2π)^{−p} ∫_0^∞ [ ((1/r)∂_r)^{p−1} ( ∫_{x∈R^n, |x|<r} ϕ(x, −t)/√(r² − |x|²) dx ) ]|_{r=t} dt,   (9.1.99)

for every ϕ ∈ S(R^{n+1}), are fundamental solutions for the wave operator □ in R^{n+1}. Moreover,

supp E+ = {(x, t) ∈ R^{2p} × [0, ∞) : |x| ≤ t},   (9.1.100)

and

supp E− = {(x, t) ∈ R^{2p} × (−∞, 0] : |x| ≤ −t}.   (9.1.101)
Corresponding to the case n = 2 (thus, for p = 1), the distributions E+ and E− defined in (9.1.98) and (9.1.99) are of function type and are given by the functions

E+(x, t) = H(t − |x|)/(2π√(t² − |x|²)),   ∀ x ∈ R², ∀ t ∈ R,   (9.1.102)

E−(x, t) = H(−t − |x|)/(2π√(t² − |x|²)),   ∀ x ∈ R², ∀ t ∈ R.   (9.1.103)
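Part (1) of Theorem 9.9 can be sanity-checked numerically: pairing E+ = H(t − |x|)/2 against □ϕ for the concrete Schwartz function ϕ(x, t) = e^{−x²−t²} must return ϕ(0, 0) = 1. A sketch (the cutoff t ≤ 8 is our own truncation, justified by the Gaussian tail):

```python
import numpy as np
from scipy.integrate import dblquad

# Box phi = (d^2/dt^2 - d^2/dx^2) phi for phi(x, t) = exp(-x^2 - t^2),
# computed by hand: box phi = (4t^2 - 2)phi - (4x^2 - 2)phi = 4(t^2 - x^2)phi.
def box_phi(x, t):
    return 4.0 * (t**2 - x**2) * np.exp(-x**2 - t**2)

# <E+, box phi> = (1/2) * integral of box phi over {(x, t) : |x| < t, t > 0};
# for a fundamental solution this equals phi(0, 0) = 1.
val, _ = dblquad(lambda x, t: 0.5 * box_phi(x, t),
                 0.0, 8.0,                   # outer variable t
                 lambda t: -t, lambda t: t)  # inner variable x in (-t, t)
assert abs(val - 1.0) < 1e-6
```

Integrating by parts as in (9.2.2) below, one can also confirm this value in closed form: both boundary contributions equal ∫_0^∞ 2t e^{−2t²} dt = 1/2, summing to 1.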
9.2
The Generalized Cauchy Problem for the Wave Operator
Let F ∈ C⁰(R^{n+1}), f, g ∈ C⁰(R^n) and, as before, use the notation x ∈ R^n, t ∈ R. Suppose u ∈ C²(R^{n+1}) solves in the classical sense the Cauchy problem

(∂_t² − Δ_x)u = F in R^{n+1},   u|_{t=0} = f, (∂_t u)|_{t=0} = g in R^n.   (9.2.1)

Define ũ(x, t) := H(t)u(x, t) and F̃(x, t) := H(t)F(x, t) for every x ∈ R^n and every t ∈ R. Then ũ, F̃ ∈ L¹_loc(R^{n+1}) and for each ϕ ∈ C_0^∞(R^{n+1}) we have

⟨(∂_t² − Δ_x)ũ, ϕ⟩ = ⟨ũ, (∂_t² − Δ_x)ϕ⟩ = lim_{ε→0⁺} ∫_ε^∞ ∫_{R^n} u (∂_t² − Δ_x)ϕ dx dt
 = lim_{ε→0⁺} ∫_ε^∞ ∫_{R^n} ((∂_t² − Δ_x)u) ϕ dx dt
 + lim_{ε→0⁺} ∫_{R^n} ( ∂_t u(x, ε) ϕ(x, ε) − u(x, ε) ∂_t ϕ(x, ε) ) dx   (9.2.2)
 = ⟨F̃, ϕ⟩ + ⟨f(x) ⊗ δ′(t), ϕ(x, t)⟩ + ⟨g(x) ⊗ δ(t), ϕ(x, t)⟩.

This suggests the following definition [recall (8.2.4)].

Definition 9.10. Given F ∈ D′₊(R^{n+1}) and f, g ∈ C_0^∞(R^n), a distribution ũ in D′₊(R^{n+1}) is called a solution of the generalized Cauchy problem for the wave operator with data F, f, and g, if

(∂_t² − Δ_x)ũ = F + f(x) ⊗ δ′(t) + g(x) ⊗ δ(t) in D′(R^{n+1}).   (9.2.3)
Theorem 9.11. Let F ∈ D′₊(R^{n+1}) and f, g ∈ C_0^∞(R^n) be given. Then the generalized Cauchy problem (9.2.3) has the unique solution

ũ = E ∗ F + E ∗ (f ⊗ δ′) + E ∗ (g ⊗ δ),   (9.2.4)
where E is the fundamental solution for the wave operator in R^{n+1} as specified in (9.1.88) if n = 1, in (9.1.92) if n ≥ 3 is odd, and in (9.1.98) if n is even.

Proof. Let E be as in the statement of the theorem. Then, by (9.1.90), by (9.1.94), and by (9.1.100), for any G ∈ D′₊(R^{n+1}) we have that, whenever K is a compact subset of R^{n+1}, the set

M_K := {((x, t), (y, s)) : (x, t) ∈ supp E, (y, s) ∈ supp G, (x + y, t + s) ∈ K}   (9.2.5)

is compact in R^{n+1} × R^{n+1}. Hence, by Theorem 2.85, the convolution E ∗ G exists. This proves that ũ as in (9.2.4) is a well-defined element of D′(R^{n+1}). Moreover, since F, f ⊗ δ′, g ⊗ δ, E ∈ D′₊(R^{n+1}), by (2.8.17), we may conclude that ũ belongs to D′₊(R^{n+1}). In addition, by Remark 5.5, we obtain that ũ as in (9.2.4) is a solution of the generalized Cauchy problem (9.2.3).

To prove uniqueness, observe that if ũ ∈ D′₊(R^{n+1}) is such that (∂_t² − Δ_x)ũ = 0 in D′(R^{n+1}), then

ũ = ũ ∗ δ = ũ ∗ ((∂_t² − Δ_x)E) = ((∂_t² − Δ_x)ũ) ∗ E = 0.   (9.2.6)

This completes the proof of the theorem. □
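It is instructive to note that for n = 1 and F = 0 the formula (9.2.4) recovers d'Alembert's classical solution. Indeed, with E(x, t) = H(t − |x|)/2 as in (9.1.88), one computes for t > 0:

```latex
\big(E * (g \otimes \delta)\big)(x,t)
   = \int_{\mathbb{R}} E(x - y,\, t)\, g(y)\, dy
   = \frac{1}{2}\int_{x-t}^{x+t} g(y)\, dy,
\qquad
\big(E * (f \otimes \delta')\big)(x,t)
   = \partial_t \Big(\frac{1}{2}\int_{x-t}^{x+t} f(y)\, dy\Big)
   = \frac{f(x+t) + f(x-t)}{2},
```

so for t > 0 the distribution ũ agrees with u(x, t) = ½[f(x + t) + f(x − t)] + ½ ∫_{x−t}^{x+t} g(y) dy, the classical solution of (9.2.1) with F = 0.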
Further Notes for Chap. 9. The wave operator □ := ∂_t² − Δ_x, where x ∈ R^n and t ∈ R, was originally discovered by the French mathematician and physicist Jean le Rond d'Alembert in 1747 in the case n = 1 in his studies of vibrating strings. For this reason, □ is also called the d'Alembert operator or, simply, the d'Alembertian. Like the heat operator discussed in Chap. 8, the wave operator is another basic example of a partial differential operator governing a linear evolution equation (though, unlike the heat operator, the wave operator belongs to a class of operators called hyperbolic operators).
9.3
Additional Exercises for Chap. 9
Exercise 9.12. Use the method of descent to compute a fundamental solution for the Laplace operator Δ = Σ_{j=1}^n ∂_j² in R^n, n ≥ 3, by starting from a fundamental solution for the heat operator L = ∂_t − Σ_{j=1}^n ∂_j² in R^{n+1}.
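A sketch of the computation behind Exercise 9.12, assuming the usual normalization E(x, t) = H(t)(4πt)^{−n/2} e^{−|x|²/(4t)} for the fundamental solution of the heat operator: since P(∂) = ∂_t − Δ_x gives P_0 = P(∂, 0) = −Δ, Proposition 9.7 (with the role of x_n played by t) yields −Δu_0 = δ for u_0 = ∫_{−∞}^∞ E dt, and for n ≥ 3:

```latex
u_0(x) = \int_0^{\infty} (4\pi t)^{-n/2} e^{-|x|^2/(4t)}\, dt
       \;\overset{s = |x|^2/(4t)}{=}\;
       \frac{|x|^{2-n}}{4\pi^{n/2}} \int_0^{\infty} s^{\frac{n}{2}-2} e^{-s}\, ds
       = \frac{\Gamma\!\big(\tfrac{n}{2}-1\big)}{4\pi^{n/2}}\, |x|^{2-n},
```

so E_Δ := −u_0 = −Γ(n/2 − 1)/(4π^{n/2}) |x|^{2−n} is a fundamental solution for Δ in R^n. Writing ω_{n−1} = 2π^{n/2}/Γ(n/2) for the area of the unit sphere S^{n−1}, this is the familiar expression −[(n − 2)ω_{n−1}]^{−1} |x|^{2−n}.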
Chapter 10
The Lamé and Stokes Operators

With the exception of the Dirac operator, so far we have discussed only scalar differential operators. In this chapter we analyze two differential operators acting on vectors whose components are distributions in R^n: the Lamé operator arising in the theory of elasticity and the Stokes operator arising in hydrodynamics. Throughout this chapter, it is assumed that n ∈ N satisfies n ≥ 2.
10.1
General Remarks About Vector and Matrix Distributions
The material developed up to this point may be regarded as a theory for scalar distributions. Nonetheless, practical considerations dictate the necessity of considering vectors/matrices with components/entries that are themselves distributions. It is therefore natural to refer to such objects as vector and matrix distributions. A significant portion of the theory of scalar distributions then readily extends to this more general setting. The philosophy in the vector/matrix case is that we perform the same type of analysis as in the scalar case, at the level of individual components, while at the same time obeying the natural algebraic rules that are now in effect (e.g., keeping in mind the algebraic mechanism according to which two matrices are multiplied, etc.).

To offer some examples, fix an open set Ω ⊆ R^n and consider a matrix distribution

U = [u_{kℓ}]_{1≤k≤N, 1≤ℓ≤K} ∈ M_{N×K}(D′(Ω)),   (10.1.1)

that is, U is an N × K matrix with entries from D′(Ω). Naturally, equality of elements in M_{N×K}(D′(Ω)) is understood entry by entry in D′(Ω).

D. Mitrea, Distributions, Partial Differential Equations, and Harmonic Analysis, Universitext, DOI 10.1007/978-1-4614-8208-6_10, © Springer Science+Business Media New York 2013

We agree to
define the support of U as

supp U := ⋃_{k=1}^N ⋃_{ℓ=1}^K supp u_{kℓ}.   (10.1.2)

Note that supp U is the smallest relatively closed subset of Ω outside of which all entries of U vanish. Similarly, we define the singular support of U as

sing supp U := ⋃_{k=1}^N ⋃_{ℓ=1}^K sing supp u_{kℓ}.   (10.1.3)
Hence, sing supp U is the smallest relatively closed subset of Ω outside of which all entries of U are C^∞.

Next, if A = [a^{jk}]_{1≤j≤M, 1≤k≤N} ∈ M_{M×N}(C^∞(Ω)), then we define (with U as in (10.1.1))

AU := [ Σ_{k=1}^N a^{jk} u_{kℓ} ]_{1≤j≤M, 1≤ℓ≤K} ∈ M_{M×K}(D′(Ω)).   (10.1.4)
Some trivial, yet useful properties include:

if U, V ∈ M_{N×K}(D′(Ω)), A ∈ M_{J×M}(C^∞(Ω)), B ∈ M_{M×N}(C^∞(Ω)), then
A(BU) = (AB)U and U = V =⇒ AU = AV.   (10.1.5)

Given an operator of the form

L = Σ_{|α|≤m} A_α ∂^α,   A_α = [a^{jk}_α]_{1≤j≤M, 1≤k≤N} ∈ M_{M×N}(C^∞(Ω)),   (10.1.6)
referred to as an M × N system (of differential operators) of order m ∈ N, the natural way in which L acts on U as in (10.1.1) is to regard LU as the matrix distribution from M_{M×K}(D′(Ω)) with entries given by

(LU)_{jℓ} := Σ_{|α|≤m} Σ_{k=1}^N a^{jk}_α ∂^α u_{kℓ},   for all 1 ≤ j ≤ M, 1 ≤ ℓ ≤ K.   (10.1.7)
A matrix distribution with a single column is referred to as a vector distribution and is simply denoted by u = (u_k)_{1≤k≤N}, with components u_k ∈ D′(Ω), 1 ≤ k ≤ N. The collection of all such vector distributions is denoted by (D′(Ω))^N.

Regarding the convolution product for matrix distributions, if

U = [u_{jk}]_{1≤j≤M, 1≤k≤N} ∈ M_{M×N}(D′(R^n)) and V = [v_{kℓ}]_{1≤k≤N, 1≤ℓ≤K} ∈ M_{N×K}(E′(R^n))

(hence, V is an N × K matrix with entries from E′(R^n)), we define U ∗ V as the M × K matrix with entries the distributions from D′(R^n) given by

(U ∗ V)_{jℓ} := Σ_{k=1}^N u_{jk} ∗ v_{kℓ},   1 ≤ j ≤ M, 1 ≤ ℓ ≤ K.   (10.1.8)

Formula (10.1.8) is also taken as the definition for U ∗ V in the case when U = [u_{jk}]_{1≤j≤M, 1≤k≤N} ∈ M_{M×N}(E′(R^n)) and V = [v_{kℓ}]_{1≤k≤N, 1≤ℓ≤K} ∈ M_{N×K}(D′(R^n)).
The reader is advised that, as opposed to the scalar case, the commutativity property for the convolution product is lost in the setting of (say, square) matrix distributions. If for each M ∈ N we denote by δI_{M×M} the matrix distribution belonging to M_{M×M}(D′(R^n)) with entries δ_{jk}δ for all j, k ∈ {1, . . . , M}, where δ_{jk} (equal to 1 if j = k and 0 otherwise) is the Kronecker symbol, then one can check using (10.1.8) and part (d) in Theorem 2.87 that

(δI_{M×M}) ∗ U = U,   ∀ U ∈ M_{M×N}(D′(R^n)),   (10.1.9)
V ∗ (δI_{M×M}) = V,   ∀ V ∈ M_{N×M}(D′(R^n)).   (10.1.10)

Two important first-order systems of differential operators are the gradient ∇ and the divergence div. Specifically, the gradient ∇ := (∂_1, . . . , ∂_n) acts on a scalar distribution according to ∇u := (∂_1 u, . . . , ∂_n u) ∈ (D′(Ω))^n for every u ∈ D′(Ω), while the divergence operator div acts on a vector distribution v := (v_1, . . . , v_n) ∈ (D′(Ω))^n according to div v := Σ_{j=1}^n ∂_j v_j.
To give an example of a specific result from the theory of scalar distributions and scalar differential operators that naturally carries over to the vector/matrix case, we note here that if L is a J × M system with constant coefficients, then for all matrix distributions U ∈ M_{M×N}(D′(R^n)) and V ∈ M_{N×K}(E′(R^n)) one has

L(U ∗ V) = (LU) ∗ V = U ∗ (LV) in M_{J×K}(D′(R^n)).   (10.1.11)

Similar equalities hold if U ∈ M_{M×N}(E′(R^n)) and V ∈ M_{N×K}(D′(R^n)).

Definition 10.1. Given an M × M system of order m ∈ N,

L = Σ_{|α|≤m} A_α ∂^α,   A_α = [a^{jk}_α]_{1≤j,k≤M} ∈ M_{M×M}(C^∞(R^n)),   (10.1.12)

call a matrix distribution E ∈ M_{M×M}(D′(R^n)) a fundamental solution for L in R^n provided

LE = δI_{M×M} in M_{M×M}(D′(R^n)).   (10.1.13)

We record next the analogue of Proposition 5.1 and Proposition 5.7 in the current setting.
Proposition 10.2. Suppose L is a constant coefficient M × M system of order m ∈ N,

L = L(∂) := Σ_{|α|≤m} A_α ∂^α,   A_α = [a^{jk}_α]_{1≤j,k≤M} ∈ M_{M×M}(C),   (10.1.14)

with the property that

det(L(iξ)) ≠ 0 for every ξ ∈ R^n \ {0}.   (10.1.15)
Then the following hold.

(1) If U ∈ M_{M×K}(S′(R^n)) is such that LU = 0 in M_{M×K}(S′(R^n)), then the entries of U are polynomials in R^n and LU = 0 ∈ M_{M×K}(C) pointwise in R^n.

(2) If L has a fundamental solution E ∈ M_{M×M}(S′(R^n)), then any other fundamental solution of L belonging to M_{M×M}(S′(R^n)) differs from E by an M × M matrix P whose entries are polynomials in R^n satisfying LP = 0 ∈ M_{M×M}(C) pointwise in R^n.

Proof. If U ∈ M_{M×K}(S′(R^n)) is such that LU = 0 in M_{M×K}(S′(R^n)), taking the Fourier transform, we obtain L(iξ)Û = 0 in M_{M×K}(S′(R^n)). In particular,

L(iξ)Û = 0 in M_{M×K}(D′(R^n \ {0})).   (10.1.16)

Since (10.1.15) holds, the matrix L(iξ) is invertible for every ξ ∈ R^n \ {0}, hence [L(iξ)]^{−1} ∈ M_{M×M}(C^∞(R^n \ {0})). Based on this, (10.1.16), and (10.1.5), we may write

Û = [L(iξ)]^{−1} L(iξ)Û = [L(iξ)]^{−1} 0 = 0 in M_{M×K}(D′(R^n \ {0})).   (10.1.17)

Hence, Û = 0 in M_{M×K}(D′(R^n \ {0})), which implies supp Û ⊆ {0}. Exercise 4.35 (applied to each entry of U) then gives that each component of U is a polynomial in R^n. In addition, LU = 0 pointwise in R^n. This proves (1).

Suppose now U ∈ M_{M×M}(S′(R^n)) is such that LU = δI_{M×M} in M_{M×M}(S′(R^n)). Then L(U − E) = 0 in M_{M×M}(S′(R^n)) and the desired conclusion follows by applying part (1). □

We also have the general Liouville-type theorem for a system with total symbol invertible outside the origin, which we prove next.

Theorem 10.3 (A general Liouville-type theorem for systems). Assume L is an M × M system of order m ∈ N of the form (10.1.14) which satisfies (10.1.15). Also, suppose u = (u_1, . . . , u_M) ∈ (L¹_loc(R^n))^M satisfies Lu = 0 in (S′(R^n))^M and has the property that there exist N ∈ N_0 and C, R ∈ [0, ∞) such that

|u_j(x)| ≤ C|x|^N whenever |x| ≥ R,   ∀ j ∈ {1, . . . , M}.   (10.1.18)
Then u_j is a polynomial in R^n of degree at most N for each j ∈ {1, . . . , M}. In particular, if u ∈ (L^∞(R^n))^M satisfies Lu = 0 in (S′(R^n))^M, then the components of u are necessarily constants.

Proof. Since (5.1.10) implies that the locally integrable function u_j belongs to S′(R^n) (cf. Example 4.3) for each j ∈ {1, . . . , M}, Proposition 10.2 implies that all the entries of u are polynomials in R^n. Moreover, Lemma 5.2 gives that the degrees of these polynomials are at most N. □

We conclude this section by noting a couple of useful results.

Proposition 10.4. Assume that

L = Σ_{|α|≤m} A_α ∂^α,   A_α = [a^{jk}_α]_{1≤j,k≤M} ∈ M_{M×M}(C),   (10.1.19)

is an M × M system of order m ∈ N with constant complex coefficients and suppose that E ∈ M_{M×M}(D′(R^n)) is a fundamental solution for the system L in R^n. Then for any U ∈ M_{M×N}(E′(R^n)) one has
in
MM×M D (Rn ) .
(10.1.20)
Proof. This follows from (10.1.11), (10.1.13) and (10.1.9). Our last result in this section is the following counterpart of Theorem 7.46 at the level of systems. Theorem 10.5. Assume that n ≥ 2, M ∈ N, and consider the M × M system L=
n
Ajk ∂j ∂k ,
Ajk ∈ MM×M (C).
(10.1.21)
j,k=1
Then for an M × M matrix-valued function E whose components are contained in C 2 (Rn \{0})∩L1loc (Rn ) and have the property that their gradients are positive homogeneous of degree 1−n in Rn \{0}, the following statements are equivalent: (1) When viewed in L1loc (Rn ), the matrix-valued function E is a fundamental solution for the system L in Rn ; (2) One has LE = 0 pointwise in Rn \ {0} and n j,k=1
S n−1
ωj Ajk ∂k E(ω) dσ(ω) = IM×M .
(10.1.22)
Proof. This is established much as in the scalar case, following the argument in the proof of Theorem 7.46, keeping in mind the matrix algebra formalism.
314
10.2
´ AND STOKES OPERATORS CHAPTER 10. THE LAME
Fundamental Solutions and Regularity for General Systems
Definition 10.6. Let K be a field, and let (R, +, 0) be a vector space over K equipped with an additional binary operation from R × R to R, called product (or multiplication). Then R is said to be a commutative associative algebra over K if the following identities hold for any three elements a, b, c ∈ R, and any two scalars x, y ∈ K: commutativity :
ab = ba,
(10.2.1)
associativity :
(ab)c = a(bc),
(10.2.2)
distributivity :
(a + b)c = ac + bc,
(10.2.3)
(xa)(yb) = (xy)(ab).
(10.2.4)
compatibility with scalars :
Finally, an element e ∈ R is said to be multiplicative unit provided one has ea = ae = a for each a ∈ R. Let R be a commutative associative algebra over a field K, with multiplicative unit e and additive neutral element 0. We make the convention that a − b := a + (−b) if a, b ∈ R and −b is the inverse of b with respect to the addition operation in the algebra R. Also, for j ∈ N, we define (−1)j a simply a when j is even, and as −a when j is odd. Given M ∈ N, we shall let MM×M (R) stand for the collection of M × M matrices with entries from R. Then, if A = ajk 1≤j,k≤M ∈ MM×M (R) and c ∈ R, one defines in the usual fashion the operations A := akj 1≤j,k≤M ∈ MM×M (R), (10.2.5) c A := c ajk 1≤j,k≤M ∈ MM×M (R), det A :=
(10.2.6)
(−1)sgn σ a1σ(1) a2σ(2) · · · aMσ(M) ∈ R,
(10.2.7)
σ∈PM
where PM is the collection of all permutations of the set {1, 2, . . . , M }, and the matrix of cofactors adj (A) := (−1)j+k det Ajk 1≤j,k≤M , (10.2.8) where, for any given j0 , k0 ∈ {1, . . . , M }, the minor Aj0 k0 is defined by (10.2.9) Aj0 k0 := ajk 1≤j,k≤M, j=j0 , k=k0 ∈ M(M−1)×(M−1) (R). Furthermore, given another matrix B = bjk 1≤j,k≤M ∈ MM×M (R), we set A ± B := ajk ± bjk 1≤j,k≤M
and A · B :=
M r=1
ajr brk
. (10.2.10) 1≤j,k≤M
10.2. FUNDAMENTAL SOLUTIONS AND REGULARITY...
315
A number of basic properties satisfied by these operations are collected in the next proposition. Proposition 10.7. Let R be a commutative associative algebra over a field K with multiplicative unit e and additive neutral element 0. Also, let M ∈ N be arbitrary. Then the following statements are true: (i) (A · B) = B · A for all A, B ∈ MM×M (R); ⎛ ⎞ e 0 ... 0 ⎜ 0 e ... 0 ⎟ ⎟ (ii) The identity matrix IM×M := ⎜ ⎝ . . . . . . . . . . . . ⎠ ∈ MM×M (R) 0 0 ... e has the property that IM×M ·A = A·IM×M = A for each A ∈ MM×M (R); (iii) det (A ) = det A for each A ∈ MM×M (R); (iv) det (c A) = cM det A for each A ∈ MM×M (R) and each c ∈ K; (v) Given any A ∈ MM×M (R), for each j, k ∈ {1, . . . , M } we have det A =
M
j+
aj (−1)
det (Aj ) =
=1
M
ak (−1)+k det (Ak );
(10.2.11)
=1
(vi) det A = 0 whenever A ∈ MM×M (R) has either two identical columns, or two identical rows; (vii) adj (A ) · A = A · adj (A ) = det A IM×M for each A ∈ MM×M (R); (viii) det (A · B) = (det A) (det B) for all A, B ∈ MM×M (R); (ix) adj (A · B) = adj (A) · adj (B) for all A, B ∈ MM×M (R); (x) MM×M (R) is a (in general, noncommutative) ring, with multiplicative unit IM×M , and A ∈ MM×M (R) has a multiplicative inverse in MM×M (R) if and only if det (A) has a multiplicative inverse in R. Proof. All properties are established much as in the standard case R ≡ C. Here we only wish to mention that (vii) is a direct consequence of (v)–(vi) (complete proofs may be found in, e.g., [33]). An example that is relevant for our future considerations pertaining to the Lam´e system is as follows. Suppose R is a commutative associative algebra over a field K and that n ∈ N, λ, μ ∈ K, with μ = 0. Also, fix a ∈ Rn (i.e., a = (a1 , . . . , an ) with aj ∈ R for each j ∈ {1, . . . , n}), and consider A ∈ Mn×n (R) given by A := μ
n j=1
aj aj In×n + (λ + μ)a ⊗ a,
(10.2.12)
´ AND STOKES OPERATORS CHAPTER 10. THE LAME
316
where a ⊗ a ∈ Mn×n (R) is defined as a ⊗ a := aj ak 1≤j,k≤n . Then a direct calculation (compare with Exercise 10.13) shows that det A = μ
n−1
n n (λ + 2μ) aj aj ,
(10.2.13)
j=1
and adj (A) = μ
n−2
n
' n−2 & n aj aj aj aj In×n − (λ + μ)a ⊗ a . (λ + 2μ)
j=1
j=1
(10.2.14) Our main interest in the algebraic framework developed so far in this section lies with the particular case when, for some fixed n ∈ N, R :=
B m ∈ N0 ,
aα ∂ α : aα ∈ C,
(10.2.15)
α∈Nn 0 , |α|≤m
with the natural operations of addition and multiplication, and with the convention that ∂ (0,...,0) = 1. Then clearly R is a commutative associative algebra over C with multiplicative unit 1 and, for each M ∈ N, the set MM×M (R) consists of all M × M systems of constant, complex coefficient differential operators. Thus, if M ∈ N, m ∈ N0 , are fixed an M × M system of constant, complex coefficient differential operators of degree m has the form L(∂) ∈ MM×M (R) withalign α ajk ∂ . (10.2.16) L(∂) := α 1≤j,k≤M
|α|≤m
We shall call L(∂) a homogeneous (linear) system (of order m) if ajk α = 0 whenever |α| < m and j, k ∈ {1, . . . , M }. According to (10.2.5), the algebraic transpose of L(∂) from (10.2.16) in MM×M (R) is given by L(∂) =
|α|≤m
α akj α ∂
.
(10.2.17)
1≤j,k≤M
Next, for each L(∂) as in (10.2.16) set DL (∂) := det(L(∂))
(10.2.18)
and notice that DL (∂) is a scalar, constant (complex) coefficient, differential operator of order ≤ M m.
10.2. FUNDAMENTAL SOLUTIONS AND REGULARITY...
317
Consequently, statement (vii) in Proposition 10.7, (10.2.17), and (10.2.18) readily give that ∀ L(∂) ∈ MM×M (R). (10.2.19) L(∂) · adj L(∂) = DL (∂)IM×M , Let us also observe that, at the level of symbols, DL (ξ) = det(L(ξ)) for each ξ ∈ Rn
(10.2.20)
and that DL (∂) is a homogeneous scalar operator whenever L(∂) is a homogeneous system. In particular, from (10.2.20) and (10.2.21) we deduce that C if L(∂) is a homogeneous system with =⇒ DL (∂) is elliptic. det(L(ξ)) = 0 for each ξ ∈ Rn \ {0}
(10.2.21)
(10.2.22)
Our goal is to use (10.2.19) as a link between systems and scalar partial differential operators. We shall prove two results based on this scheme, the first of which is a procedure to reduce the task of finding a fundamental solution for a given system to the case of scalar operators. Specifically, we have the following proposition. Theorem 10.8. Let L(∂) be an M × M system with constant coefficients as in (10.2.16), and let DL (∂) be the scalar differential operator associated with L as in (10.2.18). Then if E ∈ D (Rn ) is a fundamental solution for DL (∂) in Rn , it follows that E := adj L(∂) EIM×M ∈ MM×M (D (Rn )) (10.2.23) is a fundamental solution for the system L(∂) in Rn . Moreover, E ∈ S (Rn ) =⇒ E ∈ MM×M (S (Rn )).
(10.2.24)
In particular, if DL (∂) is not identically zero, then L(∂) has a fundamental solution that is a tempered distribution. Proof. Let E be a fundamental solution for DL (∂) and define E as in (10.2.23). That E is a fundamental solution for L(∂) follows from (10.2.19). Also it is clear that E ∈ S (Rn ) forces E ∈ MM×M (S (Rn )). As for the last claim in the statement of the theorem, note that if DL (∂) is not identically zero, then Theorem 5.13 ensures the existence of a fundamental solution E ∈ S (Rn ) for DL (∂) which, in turn, yields a solution for L(∂) via the recipe (10.2.23). Next we record the following consequence of (10.2.19) and (10.2.22). As a preamble, recall the notion of singular support from (10.1.3).
CHAPTER 10. THE LAMÉ AND STOKES OPERATORS
Theorem 10.9. Assume that L = L(∂) is an M × M homogeneous differential system in Rⁿ, with constant complex coefficients, such that

  det L(ξ) ≠ 0, ∀ ξ ∈ Rⁿ \ {0}.  (10.2.25)

Then, for each open set Ω ⊆ Rⁿ and for each u ∈ [D′(Ω)]^M one has

  sing supp u ⊆ sing supp (Lu).  (10.2.26)

In particular, if u ∈ [D′(Ω)]^M is such that Lu ∈ [C^∞(Ω)]^M then u ∈ [C^∞(Ω)]^M.

Proof. Let L be a differential system as in the statement, and set D_L(∂) := det[L(∂)]. Then (10.2.22) ensures that D_L(∂) is an elliptic, scalar differential operator, with constant coefficients. Next, fix u = (u₁, ..., u_M) ∈ [D′(Ω)]^M and let ω ⊆ Ω be an open set such that (Lu)|_ω ∈ [C^∞(ω)]^M. Then, for each j ∈ {1, ..., M} there holds

  D_L(∂)u_j|_ω = (adj L(∂) L(∂)u)_j |_ω = (adj L(∂) (L(∂)u)|_ω)_j,  (10.2.27)

from which it follows that D_L(∂)u_j ∈ C^∞(ω) for each j ∈ {1, ..., M}. From this and Corollary 6.18 it follows that u_j|_ω ∈ C^∞(ω) for each j ∈ {1, ..., M}. Consequently, ω ⊆ Ω \ sing supp u, from which (10.2.26) immediately follows.

Corollary 10.10. Let L = L(∂) be an M × M homogeneous differential system in Rⁿ, with constant complex coefficients, such that det[L(ξ)] ≠ 0 for all ξ ∈ Rⁿ \ {0}. Then L has a fundamental solution E ∈ M_{M×M}(S′(Rⁿ)) with sing supp E = {0}.

Proof. This is an immediate consequence of Theorem 10.8 and Theorem 10.9.
Exercise 10.11. Suppose M ∈ N and consider an M × M homogeneous second-order system with constant complex coefficients

  L(∂) = ( Σ_{r,s=1}^n a_{rs}^{jk} ∂_r ∂_s )_{1≤j,k≤M}

in Rⁿ. Assume that there exists c ∈ (0, ∞) such that this system satisfies the Legendre–Hadamard ellipticity condition

  Re [L(ξ)η · η̄] ≥ c|ξ|²|η|², ∀ ξ ∈ Rⁿ, ∀ η ∈ C^M.  (10.2.28)

Then |det[L(ξ)]| ≥ c^M |ξ|^{2M} for every ξ ∈ Rⁿ. In particular, det[L(ξ)] ≠ 0 for every ξ ∈ Rⁿ \ {0}.

Sketch of proof: Show that

  A ∈ M_{M×M}(C), |Aη| ≥ c|η| ∀ η ∈ C^M ⟹ |det A| ≥ c^M.  (10.2.29)

This may be proved by considering the self-adjoint matrix B := A*A (where A* denotes the adjoint, i.e., conjugate transpose, of A), which satisfies (Bη) · η̄ = |Aη|² ≥ c²|η|² for every η ∈ C^M. Use the latter and the fact that B is diagonalizable to conclude that |det A|² = det B ∈ [c^{2M}, ∞). To finish the proof of the exercise, apply (10.2.29) with A := L(ξ), ξ ∈ Rⁿ.
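The implication (10.2.29) can also be probed numerically: the hypothesis |Aη| ≥ c|η| for all η is equivalent to the smallest singular value of A being at least c, and |det A| equals the product of all M singular values, hence is at least c^M. A quick sketch under these assumptions (numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 4
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))

sing = np.linalg.svd(A, compute_uv=False)  # singular values, in decreasing order
c = sing[-1]                               # largest c with |A eta| >= c |eta| for all eta
det_abs = abs(np.linalg.det(A))            # |det A| = product of all singular values
lower = c ** M                             # the bound claimed in (10.2.29)
```

Since |det A| = σ₁···σ_M ≥ σ_min^M, the computed `det_abs` always dominates `lower`, mirroring the diagonalization argument in the sketch.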
10.3 Fundamental Solutions for the Lamé Operator
The Lamé operator L in Rⁿ is a differential operator acting on vector distributions. Specifically, if u ∈ [D′(Rⁿ)]ⁿ, i.e., u = (u₁, ..., u_n) where u_k ∈ D′(Rⁿ) for k = 1, ..., n, then the action of the Lamé operator on u is defined by

  Lu := μΔu + (λ + μ)∇div u = ( μΔu_j + (λ + μ) Σ_{ℓ=1}^n ∂_j∂_ℓ u_ℓ )_{1≤j≤n} ∈ [D′(Rⁿ)]ⁿ,  (10.3.1)

where the constants λ, μ ∈ C (typically called Lamé moduli) are assumed to satisfy

  μ ≠ 0 and 2μ + λ ≠ 0.  (10.3.2)

To study in greater detail the structure of the Lamé operator, we need some useful algebraic formalism.

Definition 10.12. Given a = (a₁, ..., a_n) ∈ Cⁿ and b = (b₁, ..., b_n) ∈ Cⁿ, define a ⊗ b to be the matrix

  a ⊗ b := (a_j b_k)_{1≤j,k≤n} ∈ M_{n×n}(C).  (10.3.3)

Exercise 10.13. Prove that:
(1) (a ⊗ b)^⊤ = b ⊗ a for all a, b ∈ Cⁿ;
(2) Tr(a ⊗ b) = a · b for all a, b ∈ Cⁿ;
(3) (a ⊗ b)c = (b · c)a for all a, b, c ∈ Cⁿ;
(4) det(I_{n×n} + a ⊗ b) = 1 + a · b for all a, b ∈ Cⁿ;
(5) (a ⊗ b)(c ⊗ d) = (b · c) a ⊗ d for all a, b, c, d ∈ Cⁿ;
(6) for every a ∈ Rⁿ and all numbers μ, λ ∈ C, the matrix μI_{n×n} + λ a ⊗ a is invertible if and only if μ ≠ 0 and μ ≠ −λ|a|²; moreover, whenever μ ≠ 0 and μ ≠ −λ|a|²,

  (μI_{n×n} + λ a ⊗ a)^{−1} = (1/μ)[ I_{n×n} − (λ/(μ + λ|a|²)) a ⊗ a ].  (10.3.4)
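The identities in Exercise 10.13 are all finite-dimensional linear algebra and are easy to test numerically; a small sketch with random vectors (not from the book; numpy assumed, and note that a · b here is the bilinear, unconjugated dot product used in the text):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
outer = np.outer(a, b)                       # a (x) b = (a_j b_k)

# Item (2): trace, and item (4): determinant of I + a (x) b
dot = np.sum(a * b)                          # bilinear dot product a . b
tr_err = abs(np.trace(outer) - dot)
det_err = abs(np.linalg.det(np.eye(n) + outer) - (1 + dot))

# Item (6): explicit inverse (10.3.4) of mu I + lam a (x) a for real a
a = rng.standard_normal(n)                   # real vector, as required in item (6)
mu, lam = 2.0, -0.7                          # sample values with mu != 0, mu != -lam|a|^2
M = mu * np.eye(n) + lam * np.outer(a, a)
M_inv = (np.eye(n) - lam / (mu + lam * np.dot(a, a)) * np.outer(a, a)) / mu
inv_err = np.max(np.abs(M @ M_inv - np.eye(n)))
```

All three residuals vanish up to roundoff, confirming the rank-one formulas before they are applied to the characteristic matrix of the Lamé system below.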
Hint for (4): Fix a = (a₁, ..., a_n) ∈ Cⁿ and b = (b₁, ..., b_n) ∈ Cⁿ. By continuity, it suffices to prove the formula in (4) when a_j ≠ 0 and b_j ≠ 0 for every j ∈ {1, ..., n}. Assuming this is the case, factor a_j out of the j-th row and b_k out of the k-th column of I_{n×n} + a ⊗ b, and write

  det(I_{n×n} + a ⊗ b) = det( (δ_{jk} + a_j b_k)_{1≤j,k≤n} )
    = (Π_{j=1}^n a_j)(Π_{j=1}^n b_j) · det( (δ_{jk}/(a_j b_j) + 1)_{1≤j,k≤n} )
    = (Π_{j=1}^n a_j b_j)(Π_{j=1}^n 1/(a_j b_j)) (1 + Σ_{j=1}^n a_j b_j)
    = 1 + Σ_{j=1}^n a_j b_j = 1 + a · b,  (10.3.5)

where the third equality uses the fact that a diagonal matrix diag(d₁, ..., d_n) plus the all-ones matrix has determinant (Π_{j=1}^n d_j)(1 + Σ_{j=1}^n 1/d_j), applied here with d_j := 1/(a_j b_j).
The format of the Lamé system (10.3.1) suggests the following definition and result, shedding light on the conditions imposed on the Lamé moduli in (10.3.2).

Proposition 10.14. Given any Lamé moduli λ, μ ∈ C, define the characteristic matrix for the Lamé system (10.3.1) as

  L(ξ) := μ|ξ|² I_{n×n} + (λ + μ) ξ ⊗ ξ, ∀ ξ ∈ Rⁿ.  (10.3.6)

Then the following statements are equivalent:
(1) L(ξ) is invertible for every ξ ∈ Rⁿ \ {0};
(2) L(ξ) is invertible for some ξ ∈ Rⁿ \ {0};
(3) μ ≠ 0 and λ + 2μ ≠ 0.
Moreover, if μ ≠ 0 and λ + 2μ ≠ 0, then for each ξ ∈ Rⁿ \ {0} one has

  [L(ξ)]^{−1} = (1/(μ|ξ|²)) [ I_{n×n} − ((λ + μ)/(λ + 2μ)) (ξ/|ξ|) ⊗ (ξ/|ξ|) ].  (10.3.7)

Proof. This is a direct consequence of Exercise 10.13.
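Formula (10.3.7) can be checked against a direct numerical inversion of the characteristic matrix (10.3.6). A sketch with sample (non-physical, complex) Lamé moduli satisfying μ ≠ 0 and λ + 2μ ≠ 0 (numpy assumed):

```python
import numpy as np

n = 3
mu, lam = 1.5 + 0.5j, 0.8 - 0.2j        # sample moduli; mu != 0 and lam + 2*mu != 0
xi = np.array([0.3, -1.2, 0.7])
nx = np.linalg.norm(xi)
xh = xi / nx                            # the unit vector xi/|xi|

L = mu * nx**2 * np.eye(n) + (lam + mu) * np.outer(xi, xi)      # (10.3.6)
L_inv = (np.eye(n) - (lam + mu) / (lam + 2 * mu) * np.outer(xh, xh)) / (mu * nx**2)  # (10.3.7)
err = np.max(np.abs(L @ L_inv - np.eye(n)))
```

The residual `err` is at roundoff level, consistent with (10.3.7) being the specialization of (10.3.4) to μ|ξ|² and (λ + μ)ξ ⊗ ξ.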
Now we are ready to tackle the issue of finding all fundamental solutions for the Lamé operator (10.3.1) when the Lamé moduli satisfy (10.3.2). In this context, according to Definition 10.1, a fundamental solution for the Lamé operator is a matrix distribution E ∈ M_{n×n}(D′(Rⁿ)) with the property that LE = δ I_{n×n} in M_{n×n}(D′(Rⁿ)). It follows that the columns E_k, k = 1, ..., n, of E satisfy the equations

  L E_k = δ e_k in [D′(Rⁿ)]ⁿ for k = 1, ..., n,  (10.3.8)

where e_k denotes the unit vector in Rⁿ with 1 on the k-th entry, for each k ∈ {1, ..., n}. Our goal is to determine, under the standing assumption n ≥ 3, all the fundamental solutions for the Lamé operator with entries in S′(Rⁿ). We do so by relying on the tools developed for scalar operators.

To get started, suppose that there exists some matrix E = (E_{jk})_{1≤j,k≤n} ∈ M_{n×n}(S′(Rⁿ)) with columns that satisfy (10.3.8). Since δ ∈ S′(Rⁿ) and (4.1.25) is true, using (10.3.1), the latter is equivalent to

  μΔE_{jk} + (λ + μ) Σ_{ℓ=1}^n ∂_j∂_ℓ E_{ℓk} = δ_{jk} δ in S′(Rⁿ), j, k ∈ {1, ..., n}.  (10.3.9)

Applying the Fourier transform to each equation in (10.3.9), we further write

  −μ|ξ|² Ê_{jk}(ξ) − (λ + μ) ξ_j Σ_{ℓ=1}^n ξ_ℓ Ê_{ℓk}(ξ) = δ_{jk} in S′(Rⁿ),  (10.3.10)

for each j, k ∈ {1, ..., n}. For k arbitrary, fixed, multiply (10.3.10) by ξ_j and then sum the resulting identities over j = 1, ..., n to obtain

  −μ|ξ|² Σ_{j=1}^n ξ_j Ê_{jk}(ξ) − (λ + μ)|ξ|² Σ_{ℓ=1}^n ξ_ℓ Ê_{ℓk}(ξ) = ξ_k in S′(Rⁿ),  (10.3.11)

for all k ∈ {1, ..., n}, or equivalently,

  −(λ + 2μ)|ξ|² Σ_{j=1}^n ξ_j Ê_{jk}(ξ) = ξ_k in S′(Rⁿ), k ∈ {1, ..., n}.  (10.3.12)

If we now multiply (10.3.10) by (λ + 2μ)|ξ|² and make use of (10.3.12), we may conclude that, for each j, k ∈ {1, ..., n},

  −μ(λ + 2μ)|ξ|⁴ Ê_{jk}(ξ) = (λ + 2μ)|ξ|² δ_{jk} − (λ + μ) ξ_j ξ_k in S′(Rⁿ).  (10.3.13)
To proceed, fix j, k ∈ {1, ..., n} and note that since n ≥ 3, by Exercise 4.4 we have 1/|ξ|² ∈ S′(Rⁿ). Also, ξ_jξ_k/|ξ|⁴ ∈ L¹_loc(Rⁿ) and, in view of Example 4.3, one may infer that ξ_jξ_k/|ξ|⁴ ∈ S′(Rⁿ). In addition, |ξ|⁴ ∈ L(Rⁿ), thus |ξ|⁴ · (ξ_jξ_k/|ξ|⁴), |ξ|⁴ · (1/|ξ|²) ∈ S′(Rⁿ) (recall (b) in Theorem 4.13), and it is not difficult to check that |ξ|⁴ · (ξ_jξ_k/|ξ|⁴) = ξ_jξ_k and |ξ|⁴ · (1/|ξ|²) = |ξ|² in S′(Rⁿ). These conclusions combined with (10.3.13) imply (recall also (10.3.2))

  μ(λ + 2μ)|ξ|⁴ · [ Ê_{jk}(ξ) + (δ_{jk}/μ) · (1/|ξ|²) − ((λ + μ)/(μ(λ + 2μ))) · (ξ_jξ_k/|ξ|⁴) ] = 0 in S′(Rⁿ).  (10.3.14)

Thus, by Proposition 5.1 applied with P(D) := Δ², it follows that

  Ê_{jk}(ξ) = −(δ_{jk}/μ) · (1/|ξ|²) + ((λ + μ)/(μ(λ + 2μ))) · (ξ_jξ_k/|ξ|⁴) + P̂_{jk}(ξ) in S′(Rⁿ),  (10.3.15)
where P_{jk} is a polynomial in Rⁿ satisfying Δ²P_{jk} = 0 pointwise in Rⁿ. To continue with the computation of E_{jk} we apply the inverse Fourier transform to (10.3.15) and use Proposition 4.61 with λ = 2 as well as (7.10.7) to write

  E_{jk} = −(δ_{jk}/μ) F⁻¹(1/|ξ|²) + ((λ + μ)/(μ(λ + 2μ))) F⁻¹(ξ_jξ_k/|ξ|⁴) + P_{jk}
    = −(δ_{jk}/(μ(n − 2)ω_{n−1})) · 1/|x|^{n−2} + P_{jk}(x)
      + ((λ + μ)/(μ(λ + 2μ))) [ (1/(2(n − 2)ω_{n−1})) · δ_{jk}/|x|^{n−2} − (1/(2ω_{n−1})) · x_jx_k/|x|ⁿ ]
    = [ (λ + μ)/(2(n − 2)ω_{n−1}μ(λ + 2μ)) − 1/(μ(n − 2)ω_{n−1}) ] · δ_{jk}/|x|^{n−2}
      − ((λ + μ)/(2ω_{n−1}μ(λ + 2μ))) · x_jx_k/|x|ⁿ + P_{jk}(x)
    = −(1/(2μ(2μ + λ)ω_{n−1})) [ ((3μ + λ)/(n − 2)) · δ_{jk}/|x|^{n−2} + (μ + λ) x_jx_k/|x|ⁿ ] + P_{jk}(x)  (10.3.16)
in S′(Rⁿ) (hence, in particular, pointwise for all x ∈ Rⁿ \ {0}). Next, we claim that the matrix F = (F_{jk})_{1≤j,k≤n} ∈ M_{n×n}(S′(Rⁿ)) with entries defined by

  F_{jk} := −(1/(2μ(2μ + λ)ω_{n−1})) [ ((3μ + λ)/(n − 2)) · δ_{jk}/|x|^{n−2} + (μ + λ) x_jx_k/|x|ⁿ ],  (10.3.17)

for j, k ∈ {1, ..., n}, is a fundamental solution for the Lamé operator. Note that, based on the properties of the Fourier transform, this claim is equivalent to having the entries of F satisfy

  −μ|ξ|² F̂_{jk}(ξ) − (λ + μ) ξ_j Σ_{ℓ=1}^n ξ_ℓ F̂_{ℓk}(ξ) = δ_{jk} in S′(Rⁿ)  (10.3.18)
for each j, k ∈ {1, ..., n}. To check (10.3.18) we use (10.3.17), Proposition 4.61 (with λ = n − 2), and (7.10.8) to first write

  F̂_{jk}(ξ) = −(1/(2μ(2μ + λ)ω_{n−1})) [ ((3μ + λ)δ_{jk}/(n − 2)) F(1/|x|^{n−2}) + (μ + λ) F(x_jx_k/|x|ⁿ) ]
    = −((3μ + λ)δ_{jk}/(2μ(2μ + λ)ω_{n−1}(n − 2))) · (n − 2)ω_{n−1}/|ξ|²
      − ((μ + λ)/(2μ(2μ + λ)ω_{n−1})) ( ω_{n−1} δ_{jk}/|ξ|² − 2ω_{n−1} ξ_jξ_k/|ξ|⁴ )
    = −(δ_{jk}/μ) · (1/|ξ|²) + ((λ + μ)/(μ(λ + 2μ))) · (ξ_jξ_k/|ξ|⁴) in S′(Rⁿ).  (10.3.19)

Next, we use (10.3.19) to rewrite the term in the left-hand side of (10.3.18) as

  −μ|ξ|² F̂_{jk}(ξ) − (λ + μ) ξ_j Σ_{ℓ=1}^n ξ_ℓ F̂_{ℓk}(ξ)
    = μ|ξ|² [ δ_{jk}/(μ|ξ|²) − (λ + μ)ξ_jξ_k/(μ(λ + 2μ)|ξ|⁴) ]
      + (λ + μ) ξ_j [ ξ_k/(μ|ξ|²) − (λ + μ)ξ_k/(μ(λ + 2μ)|ξ|²) ]
    = δ_{jk} − (λ + μ)ξ_jξ_k/((λ + 2μ)|ξ|²) + (λ + μ)ξ_jξ_k/(μ|ξ|²) − (λ + μ)²ξ_jξ_k/(μ(λ + 2μ)|ξ|²)
    = δ_{jk} in S′(Rⁿ),  (10.3.20)
proving that the matrix F is a fundamental solution for the Lamé operator. The main result emerging from this analysis is summarized next.

Theorem 10.15. Assume n ≥ 3 and let L be the Lamé operator from (10.3.1), where the constants λ, μ ∈ C satisfy (10.3.2). Then the matrix E = (E_{jk})_{1≤j,k≤n}, whose entries are given by the L¹_loc(Rⁿ) functions

  E_{jk}(x) := −(1/(2μ(2μ + λ)ω_{n−1})) [ ((3μ + λ)/(n − 2)) · δ_{jk}/|x|^{n−2} + (μ + λ) x_jx_k/|x|ⁿ ],  (10.3.21)

where x ∈ Rⁿ \ {0}, for each j, k ∈ {1, ..., n}, belongs to M_{n×n}(S′(Rⁿ)) and is a fundamental solution for the Lamé operator in Rⁿ. In addition, any fundamental solution U ∈ M_{n×n}(S′(Rⁿ)) of the Lamé operator in Rⁿ is of the form U = E + P, for some matrix P := (P_{jk})_{1≤j,k≤n} whose entries are polynomials in Rⁿ and whose columns P_k, k = 1, ..., n, satisfy the pointwise equations LP_k = (0, ..., 0) ∈ Cⁿ in Rⁿ for k = 1, ..., n.

Proof. Since the entries of E as given in (10.3.21) are the same as the expressions from (10.3.17), the earlier analysis shows that E belongs to M_{n×n}(S′(Rⁿ)) and is a fundamental solution for the Lamé operator in Rⁿ. To justify the claim in the last paragraph of the statement of the theorem we shall invoke Proposition 10.2. Concretely, since condition (10.3.2) and Proposition 10.14 imply det(L(ξ)) ≠ 0 for ξ ∈ Rⁿ \ {0}, it follows that det(L(iξ)) = (−1)ⁿ det(L(ξ)) ≠ 0 for ξ ∈ Rⁿ \ {0}. This shows that (10.1.15) is satisfied, hence Proposition 10.2 applies and yields the desired conclusion.

Note that, as Lemma 10.17 shows, if P = (P_{jk})_{1≤j,k≤n} is as in Theorem 10.15, then Δ²P_{jk} = 0 pointwise in Rⁿ for every j, k ∈ {1, ..., n}. One may also check, without using the Fourier transform, that the matrix from Theorem 10.15 is a fundamental solution for the Lamé operator (much in the spirit of Remark 7.4).

Exercise 10.16. Follow the outline below to check, without the use of the Fourier transform, that the matrix E = (E_{jk})_{1≤j,k≤n} ∈ M_{n×n}(S′(Rⁿ)), with entries of function type defined by the functions from (10.3.21), is a fundamental solution for the Lamé operator in Rⁿ, n ≥ 3.

Step 1. Show that μΔE_{jk} + (λ + μ) Σ_{ℓ=1}^n ∂_j∂_ℓ E_{ℓk} = 0 pointwise in Rⁿ \ {0} for each j, k ∈ {1, ..., n}.

Step 2. Show that the desired conclusion (i.e., that the given E is a fundamental solution for the Lamé operator in Rⁿ, n ≥ 3) is equivalent to the condition that

  lim_{ε→0⁺} [ μ ∫_{|x|≥ε} E_{jk}(x) Δφ(x) dx + (λ + μ) Σ_{ℓ=1}^n ∫_{|x|≥ε} E_{ℓk}(x) ∂_j∂_ℓφ(x) dx ] = φ(0) δ_{jk}  (10.3.22)

for every j, k ∈ {1, ..., n} and every φ ∈ C₀^∞(Rⁿ).
Step 3. Fix j, k ∈ {1, ..., n} and a function φ ∈ C₀^∞(Rⁿ). Let R ∈ (0, ∞) be such that supp φ ⊆ B(0, R), so that one may replace the domain of integration for the integrals in the left-hand side of (10.3.22) with the set {x ∈ Rⁿ : ε < |x| < R}. Use this domain of integration, (13.7.5), (13.7.4), and the result from Step 1, to prove that (10.3.22) is equivalent to

  lim_{ε→0⁺} [ −μ ∫_{∂B(0,ε)} E_{jk}(x) (∂φ/∂ν)(x) dσ(x) + μ ∫_{∂B(0,ε)} (∂E_{jk}/∂ν)(x) φ(x) dσ(x)
    − (λ + μ) ∫_{∂B(0,ε)} ( Σ_{ℓ=1}^n E_{ℓk}(x) ∂_ℓφ(x) ) ν_j(x) dσ(x)
    + (λ + μ) ∫_{∂B(0,ε)} ( Σ_{ℓ=1}^n ν_ℓ(x) ∂_jE_{ℓk}(x) ) φ(x) dσ(x) ] = φ(0) δ_{jk},  (10.3.23)

where ν(x) = x/ε for each x ∈ ∂B(0, ε).
´ OPERATOR 10.3. FUNDAMENTAL SOLUTIONS FOR THE LAME
325
Step 4. Prove that there exists a constant C ∈ (0, ∞) independent of ε such that each of the quantities

  | ∫_{∂B(0,ε)} (∂E_{jk}/∂ν)(x) [φ(x) − φ(0)] dσ(x) |,  | ∫_{∂B(0,ε)} E_{jk}(x) (∂φ/∂ν)(x) dσ(x) |,
  | ∫_{∂B(0,ε)} ( Σ_{ℓ=1}^n E_{ℓk}(x) ∂_ℓφ(x) ) ν_j(x) dσ(x) |,  and  | ∫_{∂B(0,ε)} ( Σ_{ℓ=1}^n ν_ℓ(x) ∂_jE_{ℓk}(x) ) [φ(x) − φ(0)] dσ(x) |  (10.3.24)

is bounded by C‖∇φ‖_{L^∞(Rⁿ)} ε, thus convergent to zero as ε → 0⁺.

Step 5. Combine Steps 2–4 to reduce matters to proving that

  lim_{ε→0⁺} [ μ ∫_{∂B(0,ε)} Σ_{s=1}^n (x_s/ε)(∂_sE_{jk})(x) dσ(x) + (λ + μ) ∫_{∂B(0,ε)} Σ_{ℓ=1}^n (x_ℓ/ε)(∂_jE_{ℓk})(x) dσ(x) ] = δ_{jk}.  (10.3.25)

Step 6. Prove that for every x ∈ ∂B(0, ε) we have

  Σ_{s=1}^n (x_s/ε)(∂_sE_{jk})(x) = (3μ + λ)δ_{jk}/(2μ(2μ + λ)ω_{n−1}ε^{n−1}) − (μ + λ)(2 − n)x_jx_k/(2μ(2μ + λ)ω_{n−1}ε^{n+1})  (10.3.26)

and

  Σ_{ℓ=1}^n (x_ℓ/ε)(∂_jE_{ℓk})(x) = [(3μ + λ) − (μ + λ)(1 − n)] x_jx_k/(2μ(2μ + λ)ω_{n−1}ε^{n+1}) − (λ + μ)δ_{jk}/(2μ(2μ + λ)ω_{n−1}ε^{n−1}).  (10.3.27)

Step 7. Using the fact that (cf. (13.8.44))

  ∫_{∂B(0,ε)} x_jx_k dσ(x) = (ε^{n+1}ω_{n−1}/n) δ_{jk},  (10.3.28)
integrate the expressions in (10.3.26)–(10.3.27) to conclude that

  ∫_{∂B(0,ε)} Σ_{s=1}^n (x_s/ε)(∂_sE_{jk})(x) dσ(x) = [ (3μ + λ) − (μ + λ)(2 − n)/n ] · δ_{jk}/(2μ(2μ + λ))  (10.3.29)

and

  ∫_{∂B(0,ε)} Σ_{ℓ=1}^n (x_ℓ/ε)(∂_jE_{ℓk})(x) dσ(x) = δ_{jk}/(n(2μ + λ)),  (10.3.30)

then finish the proof of (10.3.25).
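Step 1 can be verified numerically in the case n = 3 (where ω_{n−1} is the area 4π of the unit sphere): evaluate the entries (10.3.21) and apply the Lamé operator at a point away from the origin via second-order central finite differences; the result should vanish up to discretization error. A sketch under these assumptions, with sample real moduli (not from the book; numpy assumed):

```python
import numpy as np

n = 3
mu, lam = 1.0, 2.0                   # sample moduli: mu != 0 and lam + 2*mu != 0
omega = 4.0 * np.pi                  # omega_{n-1} = area of the unit sphere S^2

def E(x):
    """Matrix (10.3.21) evaluated at x != 0."""
    r = np.linalg.norm(x)
    c = -1.0 / (2.0 * mu * (2.0 * mu + lam) * omega)
    return c * ((3.0 * mu + lam) / (n - 2) * np.eye(n) / r**(n - 2)
                + (mu + lam) * np.outer(x, x) / r**n)

def lame_of_column(k, x, h=1e-3):
    """(mu Delta + (lam+mu) grad div) applied to the k-th column of E at x,
    using second-order central finite differences with step h."""
    ei = np.eye(n)
    col = lambda y: E(y)[:, k]
    lap = sum((col(x + h * ei[s]) - 2 * col(x) + col(x - h * ei[s])) / h**2
              for s in range(n))
    grad_div = np.array([
        sum((col(x + h * (ei[j] + ei[s]))[s] - col(x + h * (ei[j] - ei[s]))[s]
             - col(x - h * (ei[j] - ei[s]))[s] + col(x - h * (ei[j] + ei[s]))[s])
            / (4 * h**2) for s in range(n))
        for j in range(n)])
    return mu * lap + (lam + mu) * grad_div

x0 = np.array([1.0, 0.5, -0.3])      # a point away from the singularity at 0
residual = max(np.max(np.abs(lame_of_column(k, x0))) for k in range(n))
```

The residual is of order h², far below the size of the individual terms, consistent with L E = 0 pointwise in Rⁿ \ {0}.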
10.4 Mean Value Formulas and Interior Estimates for the Lamé Operator

The goal here is to prove that solutions of the Lamé system satisfy certain mean value formulas similar in spirit to those for harmonic functions. We start by establishing a few properties of elastic vector fields.

Lemma 10.17. Let λ, μ ∈ C be such that μ ≠ 0 and λ + 2μ ≠ 0. Assume that the vector distribution u = (u₁, u₂, ..., u_n) ∈ [D′(Ω)]ⁿ satisfies the Lamé system

  μΔu + (λ + μ)∇div u = 0 in [D′(Ω)]ⁿ.  (10.4.1)

Then u ∈ [C^∞(Ω)]ⁿ and the following statements are true.
(i) The function div u is harmonic in Ω.
(ii) The function u_j is biharmonic in Ω for each j = 1, ..., n.
(iii) The function ∂_j(div u) is harmonic in Ω for each j = 1, ..., n.

Proof. The fact that u ∈ [C^∞(Ω)]ⁿ is a consequence of Theorem 10.9, Proposition 10.14, and our assumptions on λ, μ. To show (i), we apply div to (10.4.1). Since div ∇ = Δ and div Δ = Δ div, we obtain (λ + 2μ)Δ(div u) = 0 in Ω, so (i) follows since λ + 2μ ≠ 0. Now, if we return to (10.4.1) and apply Δ to it, the second term vanishes because of the harmonicity of div u. Consequently, μΔ²u = 0 in Ω, which proves (ii) since μ ≠ 0. Finally, applying Δ to (10.4.1) and using (ii) yields (λ + μ)Δ(∇div u) = 0, and (iii) follows from this in the case λ + μ ≠ 0. If the latter condition fails, then from (10.4.1) we have that u is harmonic, so (iii) also holds in this case.

As an auxiliary step in the direction of Theorem 10.19, which contains the mean value formulas alluded to earlier, we prove the following useful result.
Proposition 10.18. Let λ, μ ∈ C be such that μ ≠ 0 and λ + 2μ ≠ 0. Assume u = (u₁, u₂, ..., u_n) ∈ [D′(Ω)]ⁿ satisfies the Lamé system (10.4.1). Then u ∈ [C^∞(Ω)]ⁿ and, for every x ∈ Ω, every r ∈ (0, dist(x, ∂Ω)), and every j ∈ {1, ..., n}, the following formulas hold (here and below, ⨍ denotes the integral average):

  u_j(x) + ((μ − λ)r²/(2μ(n + 2))) ∂_j(div u)(x) = (n/r²) ⨍_{∂B(x,r)} (y_j − x_j)(y − x)·u(y) dσ(y)  (10.4.2)

and

  u_j(x) − ([(n + 3)μ + (n + 1)λ]r²/(2μ(n + 2)(n − 1))) ∂_j(div u)(x)
    = −(n/((n − 1)r²)) ⨍_{∂B(x,r)} [(y_j − x_j)(y − x)·u(y) − r²u_j(y)] dσ(y).  (10.4.3)

Proof. The fact that u ∈ [C^∞(Ω)]ⁿ follows from Lemma 10.17. To proceed, fix some x ∈ Ω, r ∈ (0, dist(x, ∂Ω)), and j ∈ {1, ..., n}. By (iii) in Lemma 10.17, the function ∂_j(div u) is harmonic in Ω. Thus, we may apply the mean value formula on solid balls for harmonic functions (cf. the first formula in (7.2.9)), followed by an application of the integration by parts formula (13.7.4), to write

  ∂_j(div u)(x) = (n/(ω_{n−1}rⁿ)) ∫_{B(x,r)} ∂_j(div u)(y) dy
    = (n/(ω_{n−1}rⁿ)) ∫_{∂B(x,r)} ((y_j − x_j)/r) div u(y) dσ(y)
    = (n/(ω_{n−1}r^{n+1})) ∫_{∂B(x,r)} div[(y_j − x_j)u(y)] dσ(y) − (n/(ω_{n−1}r^{n+1})) ∫_{∂B(x,r)} u_j(y) dσ(y)
    =: I + II.  (10.4.4)

Since u_j is biharmonic (recall (ii) in Lemma 10.17), we may write (7.4.1) for u_j, then use the latter formula to simplify II, and then replace Δu_j by −((λ + μ)/μ) ∂_j(div u) (recall that u satisfies the Lamé system) to obtain

  II = −(n/r²) u_j(x) − (1/2) Δu_j(x) = −(n/r²) u_j(x) + ((λ + μ)/(2μ)) ∂_j(div u)(x).  (10.4.5)

Combining (10.4.4) and (10.4.5) we see that

  ((μ − λ)/(2μ)) r^{n+1} ∂_j(div u)(x) + n r^{n−1} u_j(x) = (n/ω_{n−1}) ∫_{∂B(x,r)} div[(y_j − x_j)u(y)] dσ(y).  (10.4.6)

Fix R ∈ (0, dist(x, ∂Ω)). Integrating (10.4.6) with respect to r for r ∈ (0, R), applying (13.8.5) and then (13.7.4), we arrive at

  Rⁿ u_j(x) + ((μ − λ)/(2μ(n + 2))) R^{n+2} ∂_j(div u)(x)
    = (n/ω_{n−1}) ∫_{B(x,R)} div[(y_j − x_j)u(y)] dy
    = (n/ω_{n−1}) ∫_{∂B(x,R)} (y_j − x_j) ((y − x)/R)·u(y) dσ(y),  (10.4.7)

which in turn yields (10.4.2) (with R in place of r) after dividing by Rⁿ.

To prove (10.4.3) we start with the term in the right-hand side of formula (10.4.2), in which we add and subtract r²u_j(y) under the integral sign and then split it into two integrals, one involving (y_j − x_j)(y − x)·u(y) − r²u_j(y), call this expression I₁, and another one involving u_j(y), call it I₂. For I₂ we recall that u_j is biharmonic and use (7.4.1), after which we replace Δu_j(x) by −((λ + μ)/μ) ∂_j(div u)(x). Hence,

  (n/(ω_{n−1}r^{n+1})) ∫_{∂B(x,r)} (y_j − x_j)(y − x)·u(y) dσ(y)
    = (n/(ω_{n−1}r^{n+1})) ∫_{∂B(x,r)} [(y_j − x_j)(y − x)·u(y) − r²u_j(y)] dσ(y)
      + n u_j(x) − (r²/2)·((λ + μ)/μ)·∂_j(div u)(x).  (10.4.8)

Finally, (10.4.3) follows by adding (10.4.2) and (10.4.8).

As mentioned before, Proposition 10.18 is an important ingredient in proving the following mean value formulas for the Lamé system.

Theorem 10.19. Let λ, μ ∈ C be such that μ ≠ 0, λ + 2μ ≠ 0, and (n + 1)μ + λ ≠ 0. Assume u ∈ [D′(Ω)]ⁿ satisfies the Lamé system (10.4.1). Then u ∈ [C^∞(Ω)]ⁿ, and for every x ∈ Ω and every r ∈ (0, dist(x, ∂Ω)) the following formulas hold:

  u(x) = (n(λ + μ)(n + 2)/(2[(n + 1)μ + λ]r²)) ⨍_{∂B(x,r)} [(y − x)·u(y)](y − x) dσ(y)
         + (n(μ − λ)/(2[(n + 1)μ + λ])) ⨍_{∂B(x,r)} u(y) dσ(y)  (10.4.9)

and

  u(x) = (n(λ + μ)(n + 2)/(2[(n + 1)μ + λ])) ⨍_{B(x,r)} [((y − x)/|y − x|)·u(y)] ((y − x)/|y − x|) dy
         + (n(μ − λ)/(2[(n + 1)μ + λ])) ⨍_{B(x,r)} u(y) dy.  (10.4.10)

Proof. Once again, the membership u ∈ [C^∞(Ω)]ⁿ is contained in Lemma 10.17. Formula (10.4.9) follows by taking a suitable linear combination of (10.4.2) and (10.4.3), chosen so that ∂_j(div u) cancels (here (n + 1)μ + λ ≠ 0 is used). To prove (10.4.10), multiply (10.4.9) by ω_{n−1}r^{n−1}, then move 1/r² from the front of the first integral in the right-hand side of the resulting equality inside that integral as 1/|y − x|², then integrate with respect to r ∈ (0, R), where R ∈ (0, dist(x, ∂Ω)), and use (13.8.5). Finally, divide the resulting expression by ω_{n−1}Rⁿ/n, which is precisely the volume of B(x, R). This gives (10.4.10) written with R in place of r.

We shall now discuss how the mean value formulas from Theorem 10.19 can be used to obtain interior estimates for solutions of the Lamé system. Two such versions are proved in Theorems 10.20 and 10.22 (cf. also Exercise 10.21).

Theorem 10.20 (L¹-interior estimates for the Lamé operator). Let λ, μ ∈ C be such that μ ≠ 0, λ + 2μ ≠ 0, and (n + 1)μ + λ ≠ 0. Assume u ∈ [D′(Ω)]ⁿ is a vector distribution satisfying the Lamé system (10.4.1). Then u ∈ [C^∞(Ω)]ⁿ and there exists C ∈ (0, ∞), depending only on n, λ, and μ, such that for every x ∈ Ω, every r ∈ (0, dist(x, ∂Ω)), and each k ∈ {1, ..., n}, one has

  |∂_k u(x)| ≤ (C/r) ⨍_{B(x,r)} |u(y)| dy.  (10.4.11)

Proof. The fact that u ∈ [C^∞(Ω)]ⁿ has been established in Lemma 10.17. Fix k ∈ {1, ..., n} and observe that since u = (u₁, ..., u_n) satisfies the Lamé system in Ω, then so does ∂_k u. Fix x* ∈ Ω arbitrary and select R ∈ (0, dist(x*, ∂Ω)). Writing formula (10.4.10) with u replaced by ∂_k u and r replaced by R/2 shows that, for each j ∈ {1, ..., n},

  ∂_k u_j(x*) = (c₁2ⁿn/(ω_{n−1}Rⁿ)) ∫_{B(x*,R/2)} ((y_j − x*_j)/|y − x*|) ((y − x*)/|y − x*|)·∂_k u(y) dy
               + (c₂2ⁿn/(ω_{n−1}Rⁿ)) ∫_{B(x*,R/2)} ∂_k u_j(y) dy,  (10.4.12)

where

  c₁ := n(λ + μ)(n + 2)/(2[(n + 1)μ + λ]) and c₂ := n(μ − λ)/(2[(n + 1)μ + λ]).  (10.4.13)

Integrating by parts [using (13.7.4)] in both integrals in (10.4.12) yields

  ∂_k u_j(x*) = −(c₁2ⁿn/(ω_{n−1}Rⁿ)) ∫_{B(x*,R/2)} Σ_{ℓ=1}^n ∂_{y_k}[ (y_j − x*_j)(y_ℓ − x*_ℓ)/|y − x*|² ] u_ℓ(y) dy
    + (c₁2ⁿn/(ω_{n−1}Rⁿ)) ∫_{∂B(x*,R/2)} ((y_k − x*_k)/|y − x*|) Σ_{ℓ=1}^n [ (y_j − x*_j)(y_ℓ − x*_ℓ)/|y − x*|² ] u_ℓ(y) dσ(y)
    + (c₂2ⁿn/(ω_{n−1}Rⁿ)) ∫_{∂B(x*,R/2)} ((y_k − x*_k)/|y − x*|) u_j(y) dσ(y).  (10.4.14)

Thus,

  |∂_k u(x*)| ≤ CR^{−n} ∫_{B(x*,R/2)} (|u(y)|/|y − x*|) dy + CR^{−n} ∫_{∂B(x*,R/2)} |u(y)| dσ(y),  (10.4.15)

where C stands for a finite positive constant depending only on n, λ, and μ. Before continuing, let us note a useful consequence of (10.4.10). Specifically, for every x ∈ Ω and every r ∈ (0, dist(x, ∂Ω)) one has

  |u(x)| ≤ C ⨍_{B(x,r)} |u(z)| dz.  (10.4.16)

Taking y ∈ B(x*, R/2) forces B(y, R/2) ⊂ B(x*, R) which, when used in estimate (10.4.16) written with x replaced by y and r by R/2, gives

  |u(y)| ≤ C ⨍_{B(y,R/2)} |u(z)| dz ≤ C ⨍_{B(x*,R)} |u(z)| dz, ∀ y ∈ B(x*, R/2).  (10.4.17)

This, in turn, allows us to estimate the integrals in (10.4.15) as follows. For the boundary integral, (10.4.17) yields

  ∫_{∂B(x*,R/2)} |u(y)| dσ(y) ≤ CR^{n−1} ⨍_{B(x*,R)} |u(z)| dz = (C/R) ∫_{B(x*,R)} |u(z)| dz,  (10.4.18)

while for the solid integral

  ∫_{B(x*,R/2)} (|u(y)|/|y − x*|) dy ≤ C ( ⨍_{B(x*,R)} |u(z)| dz ) ∫_{B(x*,R/2)} dy/|y − x*| ≤ (C/R) ∫_{B(x*,R)} |u(z)| dz,  (10.4.19)

since ∫_{B(x*,R/2)} dy/|y − x*| = C ∫₀^{R/2} ρ^{n−2} dρ = CR^{n−1}. Now (10.4.11) (with r replaced by R) follows from (10.4.15), (10.4.18), and (10.4.19).

Exercise 10.21. Under the same background assumptions as in Theorem 10.20, prove that for every α ∈ N₀ⁿ there exists a finite constant C_α > 0, depending only on n, α, λ, μ, with the property that for every x ∈ Ω and every r ∈ (0, dist(x, ∂Ω)),

  |∂^α u(x)| ≤ (C_α/r^{|α|}) ⨍_{B(x,r)} |u(y)| dy.  (10.4.20)
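The spherical mean value formula (10.4.9) from Theorem 10.19 can be tested numerically on an explicit null-solution of the Lamé system: for instance u = ∇h with h(y) = y₁³ − 3y₁y₂² harmonic, so that Δu = 0 and div u = Δh = 0, and hence (10.4.1) holds for any moduli. A Monte Carlo sketch in n = 3 with sample moduli (not from the book; numpy assumed, tolerance reflecting sampling error):

```python
import numpy as np

n = 3
mu, lam = 1.0, 2.0                           # sample moduli with (n+1)*mu + lam != 0
c1 = n * (lam + mu) * (n + 2) / (2 * ((n + 1) * mu + lam))
c2 = n * (mu - lam) / (2 * ((n + 1) * mu + lam))

def u(y):
    # u = grad h with h = y1^3 - 3 y1 y2^2 harmonic  =>  Delta u = 0, div u = 0
    return np.stack([3 * y[..., 0]**2 - 3 * y[..., 1]**2,
                     -6 * y[..., 0] * y[..., 1],
                     np.zeros_like(y[..., 2])], axis=-1)

rng = np.random.default_rng(3)
x, r = np.array([0.3, -0.2, 0.1]), 0.5
w = rng.standard_normal((400_000, n))
w /= np.linalg.norm(w, axis=1, keepdims=True)   # uniform sample points on S^2
y = x + r * w                                    # points on the sphere dB(x, r)

uy = u(y)
dot = np.einsum('ij,ij->i', y - x, uy)           # (y - x) . u(y)
# Right-hand side of (10.4.9): c1/r^2 times the average of ((y-x).u(y))(y-x),
# plus c2 times the average of u over the sphere
rhs = (c1 / r**2) * np.mean(dot[:, None] * (y - x), axis=0) + c2 * np.mean(uy, axis=0)
mvp_err = np.max(np.abs(rhs - u(x)))
```

Up to Monte Carlo error of a few thousandths, the sphere averages reproduce u(x); note also that for constant u the two coefficients sum to 1, as they must.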
Hint: Use induction on |α|, (10.4.11) written for r/2, and the fact that ∂^α u continues to be a solution of the Lamé system (10.4.1).

Theorem 10.22 (L^∞-interior estimates for the Lamé operator). Let λ, μ ∈ C be such that μ ≠ 0, λ + 2μ ≠ 0, and (n + 1)μ + λ ≠ 0, and suppose u ∈ [D′(Ω)]ⁿ satisfies the Lamé system (10.4.1). Then u ∈ [C^∞(Ω)]ⁿ and, with C as in Theorem 10.20, for each x ∈ Ω, each r ∈ (0, dist(x, ∂Ω)), and each k ∈ N, we have

  |∂^α u(x)| ≤ (C^k e^{k−1} k!/r^k) max_{y∈B(x,r)} |u(y)|, ∀ α ∈ N₀ⁿ with |α| = k.  (10.4.21)

Proof. The fact that u ∈ [C^∞(Ω)]ⁿ follows from Lemma 10.17. In particular, this implies that ∂^α u satisfies (10.4.1) for every α ∈ N₀ⁿ. The case k = 1 is an immediate consequence of (10.4.11) since, clearly, ⨍_{B(x,r)} |u(y)| dy ≤ max_{y∈B(x,r)} |u(y)|. Having established this, the desired conclusion follows by invoking Lemma 6.21 (with A the class of null-solutions of the Lamé system in Ω).

Recall Definition 6.22.

Theorem 10.23. Suppose λ and μ are complex numbers such that μ ≠ 0, λ + 2μ ≠ 0, and (n + 1)μ + λ ≠ 0. Then any null-solution of the Lamé system (10.4.1) has components that are real-analytic in Ω.

Proof. This is an immediate consequence of Theorem 10.22 and Lemma 6.24.

Theorem 10.24. Suppose λ and μ are complex numbers such that μ ≠ 0, λ + 2μ ≠ 0, and (n + 1)μ + λ ≠ 0, and suppose Ω ⊆ Rⁿ is open and connected. If u is a null-solution of the Lamé system (10.4.1) such that for some x₀ ∈ Ω we have ∂^α u(x₀) = 0 for all α ∈ N₀ⁿ, then u = 0 in Ω.

Proof. This follows from Theorems 10.23 and 6.25.

Next we record the analogue of the classical Liouville theorem for the Laplacian (cf. Theorem 7.15) in the case of the Lamé system.

Theorem 10.25 (Liouville's theorem for the Lamé system). Let λ, μ ∈ C be such that μ ≠ 0 and λ + 2μ ≠ 0, and suppose u ∈ [L^∞(Rⁿ)]ⁿ satisfies the Lamé system

  μΔu + (λ + μ)∇div u = 0 in [D′(Rⁿ)]ⁿ.  (10.4.22)

Then there exists a constant vector c ∈ Cⁿ such that u(x) = c for all x ∈ Rⁿ.

Proof. This is a particular case of Theorem 10.3 since, based on the current assumptions on λ and μ, Proposition 10.14 ensures that condition (10.1.15) is satisfied.
Exercise 10.26. Assuming that λ, μ ∈ C are such that μ ≠ 0, λ + 2μ ≠ 0, and (n + 1)μ + λ ≠ 0, give an alternative proof of Theorem 10.25 by relying on the interior estimates from (10.4.11).
10.5 The Poisson Equation for the Lamé Operator
Let L be the operator from (10.3.1) with coefficients μ, λ ∈ C satisfying (10.3.2). Then the Poisson equation for the Lamé operator in an open subset Ω of Rⁿ reads

  Lu = f,  (10.5.1)

where the vector f is given and the vector u is the unknown. If u is a priori known to be of class C², then the equality in (10.5.1) is considered in the pointwise sense, everywhere in Ω. Such a solution is called classical. Often, one starts with u simply a vector distribution, in which scenario (10.5.1) is interpreted in [D′(Ω)]ⁿ. In this case, we shall refer to u as a distributional solution. In this vein, it is worth pointing out that if the datum f is of class C^∞ in Ω then any distributional solution of (10.5.1) is also of class C^∞ in Ω, as seen from Theorem 10.9, Proposition 10.14, and (10.3.2). A key ingredient in solving the Poisson equation (10.5.1) is going to be the fundamental solution for the Lamé system derived in (10.3.21).

Proposition 10.27. Let L be the Lamé operator from (10.3.1) such that (10.3.2) is satisfied. Assume n ≥ 2 and let E = (E_{jk})_{1≤j,k≤n} be the fundamental solution for L in Rⁿ with entries as in (10.3.21) for n ≥ 3 and as in (10.7.1) for n = 2. Suppose that Ω is an open set in Rⁿ and that f ∈ [L^∞(Ω)]ⁿ vanishes outside a bounded subset of Ω. Then

  u(x) := ∫_Ω E(x − y) f(y) dy, ∀ x ∈ Ω,  (10.5.2)

is a distributional solution of the Poisson equation for the Lamé system in Ω, that is, Lu = f in [D′(Ω)]ⁿ. In addition, u ∈ [C¹(Ω)]ⁿ.

Proof. This is established by arguing along the lines of the proof of Proposition 7.8.

The main result in this section is the following well-posedness result for the Poisson problem for the Lamé operator in Rⁿ.

Theorem 10.28. Assume n ≥ 3, and let L be the Lamé operator from (10.3.1) with λ, μ ∈ C satisfying μ ≠ 0 and λ + 2μ ≠ 0. Also, suppose f ∈ [L^∞_comp(Rⁿ)]ⁿ and c ∈ Cⁿ are given. Then the Poisson problem for the Lamé operator in Rⁿ,

  u ∈ [C⁰(Rⁿ)]ⁿ,
  Lu = f in [D′(Rⁿ)]ⁿ,  (10.5.3)
  lim_{|x|→∞} u(x) = c,
has a unique solution. Moreover, the solution u satisfies the following additional properties.

(1) The function u is of class C¹ in Rⁿ and admits the integral representation formula

  u(x) = c + ∫_{Rⁿ} E(x − y) f(y) dy, ∀ x ∈ Rⁿ,  (10.5.4)

where E = (E_{jk})_{1≤j,k≤n} is the fundamental solution for L in Rⁿ with entries as in (10.3.21). Moreover, for each j ∈ {1, ..., n} we have

  ∂_j u(x) = ∫_{Rⁿ} (∂_j E)(x − y) f(y) dy, ∀ x ∈ Rⁿ.  (10.5.5)

(2) If in fact f ∈ [C₀^∞(Rⁿ)]ⁿ then u ∈ [C^∞(Rⁿ)]ⁿ.

(3) For every j, k ∈ {1, ..., n} there exists a matrix C_{jk} ∈ M_{n×n}(C) such that

  ∂_j∂_k u = C_{jk} f + T_{∂_j∂_k E} f in [D′(Rⁿ)]ⁿ,  (10.5.6)

where T_{∂_j∂_k E} is the singular integral operator associated with the matrix-valued function Θ := ∂_j∂_k E (cf. Definition 4.90).

(4) For every p ∈ (1, ∞), the solution u of (10.5.3) satisfies ∂_j∂_k u ∈ [L^p(Rⁿ)]ⁿ for each j, k ∈ {1, ..., n}, where the derivatives are taken in [D′(Rⁿ)]ⁿ. Moreover, there exists a constant C = C(p, n) ∈ (0, ∞) with the property that

  Σ_{j,k=1}^n ‖∂_j∂_k u‖_{[L^p(Rⁿ)]ⁿ} ≤ C ‖f‖_{[L^p(Rⁿ)]ⁿ}.  (10.5.7)
Proof. From Proposition 10.27 we have that u defined as in (10.5.4) is of class C¹ in Rⁿ and satisfies Lu = f in [D′(Rⁿ)]ⁿ. In addition, by reasoning as in the proof of Proposition 7.8 we also see that formula (10.5.5) holds for each j ∈ {1, ..., n}. Furthermore, the same type of estimate as in (7.2.21) proves that the function u from (10.5.4) also satisfies the limit condition in (10.5.3). This concludes the treatment of existence. As for uniqueness, suppose u is a solution of (10.5.3) corresponding to f = 0 and c = 0 ∈ Cⁿ. Given that u satisfies (10.5.3), it follows that u is bounded in Rⁿ. As such, Liouville's theorem (cf. Theorem 10.25) applies and gives that u is constant; since u vanishes at infinity, u = 0. This proves that u defined as in (10.5.4) is the unique solution of (10.5.3). Next, the regularity result in part (2) may be seen either directly from (10.5.4), or by relying on Theorem 10.9, Proposition 10.14, and the assumptions on the Lamé moduli λ, μ. This concludes the proof of the claims made in parts (1)–(2).

Consider now the claim made in part (3). Fix j, k ∈ {1, ..., n} arbitrary. Then, as seen from (10.3.21), the function Φ := ∂_k E is of class C^∞ and positive homogeneous of degree 1 − n in Rⁿ \ {0}. In turn, this implies that the matrix-valued function Θ := ∂_j∂_k E has entries satisfying the conditions in (4.4.1) (here the discussion in Example 4.68 is relevant). From what we have proved in part (1) and Theorem 4.99 we then conclude that, in the sense of [D′(Rⁿ)]ⁿ,

  ∂_j∂_k u(x) = ∂_j [ ∫_{Rⁿ} (∂_k E)(x − y) f(y) dy ]
    = ( ∫_{S^{n−1}} (∂_k E)(ω) ω_j dσ(ω) ) f(x) + lim_{ε→0⁺} ∫_{|x−y|>ε} (∂_j∂_k E)(x − y) f(y) dy.  (10.5.8)

Upon taking C_{jk} := ∫_{S^{n−1}} (∂_k E)(ω) ω_j dσ(ω) ∈ M_{n×n}(C), formula (10.5.6) follows, finishing the proof of part (3). Lastly, the claim in part (4) is a consequence of part (3) and the boundedness of the singular integral operators T_{∂_j∂_k E} on [L^p(Rⁿ)]ⁿ (as seen by applying Theorem 4.97 componentwise).
10.6 Fundamental Solutions for the Stokes Operator
Let u = (u₁, ..., u_n) ∈ [D′(Rⁿ)]ⁿ and p ∈ D′(Rⁿ). Then the Stokes operator L_S acting on (u, p) = (u₁, ..., u_n, p) is defined by

  L_S(u, p) := ( Δu₁ − ∂₁p, ..., Δu_n − ∂_np, Σ_{s=1}^n ∂_s u_s ) ∈ [D′(Rⁿ)]^{n+1}.  (10.6.1)

In practice, u and p are referred to as the velocity field and the pressure function, respectively. A fundamental solution for the Stokes operator is a pair (E, p), where E = (E_{jk})_{1≤j,k≤n} ∈ M_{n×n}(D′(Rⁿ)) and p = (p₁, ..., p_n) ∈ [D′(Rⁿ)]ⁿ, satisfying the following conditions. If for each k ∈ {1, ..., n} we set Γ_k := (E_{1k}, ..., E_{nk}, p_k) and e_k^{n+1} denotes the unit vector in R^{n+1} with 1 on the k-th entry, then

  L_S Γ_k = δ e_k^{n+1} in [D′(Rⁿ)]^{n+1} for k = 1, ..., n.  (10.6.2)

We propose to determine all the fundamental solutions for the Stokes operator with entries in S′(Rⁿ) in the case when n ≥ 3, a condition assumed in this section unless otherwise specified. Suppose (E, p) is a fundamental solution for L_S with the property that E = (E_{jk})_{1≤j,k≤n} ∈ M_{n×n}(S′(Rⁿ)) and p = (p₁, ..., p_n) ∈ [S′(Rⁿ)]ⁿ. Then (10.6.2) implies

  ΔE_{jk} − ∂_j p_k = δ_{jk} δ in S′(Rⁿ), j, k ∈ {1, ..., n},  (10.6.3)

  Σ_{s=1}^n ∂_s E_{sk} = 0 in S′(Rⁿ), k ∈ {1, ..., n}.  (10.6.4)
Apply the Fourier transform to each of the equalities in (10.6.3) and (10.6.4) to obtain n 5 −|ξ|2 E jk − iξj p3k = δjk in S (R ), n
n 5 ξs E sk = 0 in S (R ),
j, k ∈ {1, . . . , n}, k ∈ {1, . . . , n}.
(10.6.5) (10.6.6)
s=1
Fix k ∈ {1, . . . , n} and, for each j = 1, . . . , n, multiply the identity in (10.6.5) corresponding to this j with ξj , then sum up over j and use (10.6.6) to arrive at |ξ|2 p3k = iξk
S (Rn ).
in
(10.6.7)
Reasoning as in the derivation of (10.3.15) from (10.3.13), the last identity implies
\[
\widehat{p_k} = i\,\frac{\xi_k}{|\xi|^2} + \widehat{r_k}(\xi) \quad\text{in } \mathcal{S}'(\mathbb{R}^n), \tag{10.6.8}
\]
for some harmonic polynomial $r_k$ in $\mathbb{R}^n$. An application of the inverse Fourier transform to (10.6.8) combined with (7.10.5) then yields
\[
p_k = i\,\mathcal{F}^{-1}\Big(\frac{\xi_k}{|\xi|^2}\Big) + r_k = -\frac{1}{\omega_{n-1}} \cdot \frac{x_k}{|x|^n} + r_k \quad\text{in } \mathcal{S}'(\mathbb{R}^n). \tag{10.6.9}
\]
In particular, if we take $p_k = -\frac{1}{\omega_{n-1}}\cdot\frac{x_k}{|x|^n}$ in (10.6.9) and use it to substitute for $\widehat{p_k}$ back in (10.6.5), we arrive at the condition
\[
|\xi|^2\,\widehat{E_{jk}} = \frac{\xi_j\xi_k}{|\xi|^2} - \delta_{jk} \quad\text{in } \mathcal{S}'(\mathbb{R}^n), \qquad j \in \{1,\ldots,n\}. \tag{10.6.10}
\]
Consequently, since $n \ge 3$, by reasoning as in the derivation of (10.3.15) from (10.3.13), identity (10.6.10) implies
\[
\widehat{E_{jk}} = \frac{\xi_j\xi_k}{|\xi|^4} - \frac{\delta_{jk}}{|\xi|^2} + \widehat{R_{jk}} \quad\text{in } \mathcal{S}'(\mathbb{R}^n), \qquad j \in \{1,\ldots,n\}, \tag{10.6.11}
\]
for some polynomials $R_{jk}$ in $\mathbb{R}^n$ satisfying $\Delta R_{jk} = 0$ in $\mathbb{R}^n$, $j = 1,\ldots,n$. Taking the inverse Fourier transform in (10.6.11), then using (7.10.7) and Proposition 4.61 with $\lambda = n-2$, we obtain
\[
E_{jk} = \mathcal{F}^{-1}\Big(\frac{\xi_j\xi_k}{|\xi|^4}\Big) - \mathcal{F}^{-1}\Big(\frac{1}{|\xi|^2}\Big)\delta_{jk} + R_{jk}
= -\frac{1}{2(n-2)\,\omega_{n-1}}\cdot\frac{\delta_{jk}}{|x|^{n-2}} - \frac{1}{2\omega_{n-1}}\cdot\frac{x_j x_k}{|x|^n} + R_{jk} \quad\text{in } \mathcal{S}'(\mathbb{R}^n). \tag{10.6.12}
\]
We are now ready to state our main result regarding fundamental solutions for the Stokes operator that are tempered distributions.
CHAPTER 10. THE LAMÉ AND STOKES OPERATORS
Theorem 10.29. Let $n \ge 3$ and let $L_S$ be the Stokes operator from (10.6.1). Consider the following functions in $L^1_{\mathrm{loc}}(\mathbb{R}^n)$, defined for $x \in \mathbb{R}^n \setminus \{0\}$:
\[
E_{jk}(x) := -\frac{1}{2(n-2)\,\omega_{n-1}}\cdot\frac{\delta_{jk}}{|x|^{n-2}} - \frac{1}{2\omega_{n-1}}\cdot\frac{x_j x_k}{|x|^n}, \qquad j,k \in \{1,\ldots,n\}, \tag{10.6.13}
\]
\[
p_k(x) := -\frac{1}{\omega_{n-1}}\cdot\frac{x_k}{|x|^n}, \qquad k \in \{1,\ldots,n\}. \tag{10.6.14}
\]
Then, if we set $E := \big(E_{jk}\big)_{1\le j,k\le n}$ and $p := (p_1,\ldots,p_n)$, we have $E \in M_{n\times n}\big(\mathcal{S}'(\mathbb{R}^n)\big)$, $p \in \big[\mathcal{S}'(\mathbb{R}^n)\big]^n$, and the pair $(E,p)$ is a fundamental solution for the Stokes operator in $\mathbb{R}^n$. Moreover, any fundamental solution $(U,q)$ for the Stokes operator with the property that $U \in M_{n\times n}\big(\mathcal{S}'(\mathbb{R}^n)\big)$, $q \in \big[\mathcal{S}'(\mathbb{R}^n)\big]^n$, is of the form $U = E + P$, $q = p + r$, for some matrix $P := \big(P_{jk}\big)_{1\le j,k\le n}$ whose entries are polynomials in $\mathbb{R}^n$ satisfying $\Delta^2 P_{jk} = 0$ pointwise in $\mathbb{R}^n$ for $j,k = 1,\ldots,n$, and some vector $r := (r_1,\ldots,r_n)$ whose entries are polynomials in $\mathbb{R}^n$ satisfying $\Delta r_k = 0$ pointwise in $\mathbb{R}^n$ for $k = 1,\ldots,n$.

Proof. From (10.6.12) and (10.6.9) we have $E \in M_{n\times n}\big(\mathcal{S}'(\mathbb{R}^n)\big)$, $p \in \big[\mathcal{S}'(\mathbb{R}^n)\big]^n$, and
\[
\widehat{E_{jk}} = \frac{\xi_j\xi_k}{|\xi|^4} - \frac{\delta_{jk}}{|\xi|^2}, \qquad \widehat{p_k} = i\,\frac{\xi_k}{|\xi|^2}, \quad\text{in } \mathcal{S}'(\mathbb{R}^n), \qquad j,k \in \{1,\ldots,n\}. \tag{10.6.15}
\]
Based on the properties of the Fourier transform, we have that $(E,p)$ is a fundamental solution for the Stokes operator if and only if its components satisfy (10.6.5) and (10.6.6). By making use of (10.6.15) we may write
\[
-|\xi|^2\,\widehat{E_{jk}} - i\xi_j\,\widehat{p_k} = -\frac{\xi_j\xi_k}{|\xi|^2} + \delta_{jk} + \frac{\xi_j\xi_k}{|\xi|^2} = \delta_{jk} \quad\text{in } \mathcal{S}'(\mathbb{R}^n), \qquad j,k \in \{1,\ldots,n\}, \tag{10.6.16}
\]
and
\[
\sum_{j=1}^n \xi_j\,\widehat{E_{jk}} = \sum_{j=1}^n \frac{\xi_j^2\,\xi_k}{|\xi|^4} - \sum_{j=1}^n \frac{\delta_{jk}\,\xi_j}{|\xi|^2} = 0 \quad\text{in } \mathcal{S}'(\mathbb{R}^n), \tag{10.6.17}
\]
proving that $(E,p)$ is a fundamental solution for the Stokes operator. Suppose $(U,q)$ is another fundamental solution for the Stokes operator such that $U = \big(U_{jk}\big)_{1\le j,k\le n} \in M_{n\times n}\big(\mathcal{S}'(\mathbb{R}^n)\big)$ and $q = (q_1,\ldots,q_n) \in \big[\mathcal{S}'(\mathbb{R}^n)\big]^n$. Then for each $k \in \{1,\ldots,n\}$ we have $q_k = p_k + r_k$ for some harmonic polynomial $r_k$ in $\mathbb{R}^n$ (recall the computation that led to (10.6.9)). From this fact and the equations corresponding to (10.6.3) written for the components of $(E,p)$ and $(U,q)$, we further obtain that for each $j,k \in \{1,\ldots,n\}$ there holds $\Delta(U_{jk} - E_{jk}) = \partial_j r_k$ in $\mathcal{S}'(\mathbb{R}^n)$, hence $\Delta^2(U_{jk} - E_{jk}) = 0$ in $\mathcal{S}'(\mathbb{R}^n)$. Consequently, $|\xi|^4\big(\widehat{U_{jk}} - \widehat{E_{jk}}\big) = 0$ in $\mathcal{S}'(\mathbb{R}^n)$ for $j,k \in \{1,\ldots,n\}$, which in turn implies $U_{jk} - E_{jk} = R_{jk}$ for some polynomials $R_{jk}$ in $\mathbb{R}^n$ satisfying $\Delta^2 R_{jk} = 0$ pointwise in $\mathbb{R}^n$ for all $j,k \in \{1,\ldots,n\}$. The proof of the theorem is now complete.
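As a quick numerical sanity check (ours, not part of the book), the purely algebraic Fourier-side identities (10.6.16)–(10.6.17) can be tested at sample frequencies; the helper names below are our own.

```python
# Verify (10.6.16)-(10.6.17) for the symbols from (10.6.15), here with n = 3.
import itertools

def E_hat(j, k, xi):
    """Entry (j,k) of the Fourier transform of E, per (10.6.15)."""
    s = sum(c * c for c in xi)  # |xi|^2
    return xi[j] * xi[k] / s**2 - (1.0 if j == k else 0.0) / s

def p_hat(k, xi):
    """Fourier transform of p_k, per (10.6.15)."""
    s = sum(c * c for c in xi)
    return 1j * xi[k] / s

xi = (0.7, -1.3, 2.1)
s = sum(c * c for c in xi)
for j, k in itertools.product(range(3), repeat=2):
    lhs = -s * E_hat(j, k, xi) - 1j * xi[j] * p_hat(k, xi)   # (10.6.16)
    assert abs(lhs - (1.0 if j == k else 0.0)) < 1e-12
for k in range(3):
    div = sum(xi[j] * E_hat(j, k, xi) for j in range(3))      # (10.6.17)
    assert abs(div) < 1e-12
print("symbol identities (10.6.16)-(10.6.17) verified")
```

The two assertions mirror exactly the cancellations displayed in (10.6.16) and (10.6.17).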
Exercise 10.30. Follow the outline below to check, without the use of the Fourier transform, that $(E,p)$ with entries $E = \big(E_{jk}\big)_{1\le j,k\le n}$ and $p = (p_1,\ldots,p_n)$ of function type, defined by the functions from (10.6.13) and (10.6.14), is a fundamental solution for the Stokes operator in $\mathbb{R}^n$, $n \ge 3$.

Step 1. Show that $\Delta E_{jk} - \partial_j p_k = 0$ and $\sum_{\ell=1}^n \partial_\ell E_{\ell k} = 0$ pointwise in $\mathbb{R}^n \setminus \{0\}$ for every $j,k \in \{1,\ldots,n\}$.

Step 2. Show that the desired conclusion (i.e., that the given $(E,p)$ is a fundamental solution for the Stokes operator in $\mathbb{R}^n$, $n \ge 3$) is equivalent with the conditions that
\[
\lim_{\varepsilon\to 0^+} \Big[ \int_{|x|\ge\varepsilon} E_{jk}(x)\,\Delta\varphi(x)\,dx + \int_{|x|\ge\varepsilon} p_k(x)\,\partial_j\varphi(x)\,dx \Big] = \varphi(0)\,\delta_{jk} \tag{10.6.18}
\]
and
\[
\lim_{\varepsilon\to 0^+} \Big[ \int_{|x|\ge\varepsilon} \sum_{\ell=1}^n E_{\ell k}(x)\,\partial_\ell\varphi(x)\,dx \Big] = 0 \tag{10.6.19}
\]
for every $j,k \in \{1,\ldots,n\}$ and every $\varphi \in C_0^\infty(\mathbb{R}^n)$.

Step 3. Fix $j,k \in \{1,\ldots,n\}$, $\varphi \in C_0^\infty(\mathbb{R}^n)$, and let $R \in (0,\infty)$ be such that $\operatorname{supp}\varphi \subseteq B(0,R)$, so that one may replace the domain of integration for the integrals in (10.6.18) and (10.6.19) with $\{x \in \mathbb{R}^n : \varepsilon < |x| < R\}$. Use this domain of integration, (13.7.5), (13.7.4), and the result from Step 1, to prove that (10.6.18) and (10.6.19) are equivalent with
\[
\lim_{\varepsilon\to 0^+} \Big[ -\int_{\partial B(0,\varepsilon)} E_{jk}(x)\,\frac{\partial\varphi}{\partial\nu}(x)\,d\sigma(x) + \int_{\partial B(0,\varepsilon)} \varphi(x)\,\frac{\partial E_{jk}}{\partial\nu}(x)\,d\sigma(x) - \int_{\partial B(0,\varepsilon)} p_k(x)\,\varphi(x)\,\nu_j(x)\,d\sigma(x) \Big] = \varphi(0)\,\delta_{jk} \tag{10.6.20}
\]
and
\[
\lim_{\varepsilon\to 0^+} \Big[ \int_{\partial B(0,\varepsilon)} \sum_{\ell=1}^n E_{\ell k}(x)\,\varphi(x)\,\nu_\ell(x)\,d\sigma(x) \Big] = 0, \tag{10.6.21}
\]
where $\nu(x) = \frac{x}{\varepsilon}$ for each $x \in \partial B(0,\varepsilon)$.
Step 4. Prove that there exists a constant $C \in (0,\infty)$ independent of $\varepsilon$ such that each of the quantities
\[
\Big|\int_{\partial B(0,\varepsilon)} E_{jk}(x)\,\frac{\partial\varphi}{\partial\nu}(x)\,d\sigma(x)\Big|, \quad
\Big|\int_{\partial B(0,\varepsilon)} \frac{\partial E_{jk}}{\partial\nu}(x)\,[\varphi(x)-\varphi(0)]\,d\sigma(x)\Big|, \quad
\Big|\int_{\partial B(0,\varepsilon)} p_k(x)\,[\varphi(x)-\varphi(0)]\,\nu_j(x)\,d\sigma(x)\Big| \tag{10.6.22}
\]
is bounded by $C\,\|\nabla\varphi\|_{L^\infty(\mathbb{R}^n)}\,\varepsilon$, thus convergent to zero as $\varepsilon \to 0^+$, and such that
\[
\Big|\int_{\partial B(0,\varepsilon)} \sum_{\ell=1}^n E_{\ell k}(x)\,\varphi(x)\,\nu_\ell(x)\,d\sigma(x)\Big| \le C\,\|\varphi\|_{L^\infty(\mathbb{R}^n)}\,\varepsilon \xrightarrow[\varepsilon\to 0^+]{} 0. \tag{10.6.23}
\]

Step 5. Combine Steps 2, 3, and 4, to reduce matters to proving that
\[
\lim_{\varepsilon\to 0^+} \Big[ \int_{\partial B(0,\varepsilon)} \sum_{s=1}^n \frac{x_s}{\varepsilon}\,(\partial_s E_{jk})(x)\,d\sigma(x) - \int_{\partial B(0,\varepsilon)} \frac{x_j}{\varepsilon}\,p_k(x)\,d\sigma(x) \Big] = \delta_{jk}. \tag{10.6.24}
\]

Step 6. Prove that for each $x \in \partial B(0,\varepsilon)$ we have
\[
\sum_{s=1}^n \frac{x_s}{\varepsilon}\,(\partial_s E_{jk})(x) - \frac{x_j}{\varepsilon}\,p_k(x) = \frac{\delta_{jk}}{2\,\omega_{n-1}\,\varepsilon^{n-1}} + \frac{n\,x_j x_k}{2\,\omega_{n-1}\,\varepsilon^{n+1}}. \tag{10.6.25}
\]
Step 7. Integrate the expression in (10.6.25) and use (10.3.28) to finish the proof of (10.6.24).
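Step 1 of the exercise can be probed numerically. The following check (ours, not part of the book) evaluates the $n = 3$ instance of (10.6.13)–(10.6.14) and confirms, via central finite differences, that $\Delta E_{jk} = \partial_j p_k$ and $\sum_\ell \partial_\ell E_{\ell k} = 0$ away from the origin; the tolerances reflect the finite-difference truncation error.

```python
import math

OMEGA2 = 4.0 * math.pi  # area of the unit sphere S^2, i.e. omega_{n-1} for n = 3

def E(j, k, x):
    """Entry (j,k) of (10.6.13) with n = 3."""
    r = math.sqrt(sum(c * c for c in x))
    d = 1.0 if j == k else 0.0
    return -d / (2.0 * OMEGA2 * r) - x[j] * x[k] / (2.0 * OMEGA2 * r**3)

def p(k, x):
    """Component k of (10.6.14) with n = 3."""
    r = math.sqrt(sum(c * c for c in x))
    return -x[k] / (OMEGA2 * r**3)

def partial(f, x, i, h=1e-4):
    xp = list(x); xm = list(x)
    xp[i] += h; xm[i] -= h
    return (f(tuple(xp)) - f(tuple(xm))) / (2.0 * h)

def laplacian(f, x, h=1e-4):
    fx = f(x)
    out = 0.0
    for i in range(3):
        xp = list(x); xm = list(x)
        xp[i] += h; xm[i] -= h
        out += (f(tuple(xp)) - 2.0 * fx + f(tuple(xm))) / h**2
    return out

x0 = (0.4, -0.3, 0.6)
for j in range(3):
    for k in range(3):
        lhs = laplacian(lambda y, j=j, k=k: E(j, k, y), x0)
        rhs = partial(lambda y, k=k: p(k, y), x0, j)
        assert abs(lhs - rhs) < 1e-4          # Delta E_jk = d_j p_k
for k in range(3):
    div = sum(partial(lambda y, l=l, k=k: E(l, k, y), x0, l) for l in range(3))
    assert abs(div) < 1e-6                    # divergence-free columns
print("Step 1 identities hold numerically at", x0)
```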
Further Notes for Chap. 10. The discussion in Chaps. 5 and 6 about the existence and nature of fundamental solutions has been limited to the case of scalar differential operators, and in Sect. 10.2 these scalar results have been extended to generic constant coefficient systems (of arbitrary order) via an approach that appears to be new. Moreover, this approach offers further options for finding an explicit form for a fundamental solution for a given system, such as the Lamé system. While we shall pursue this idea later, in Sect. 11.1, we felt it is natural and beneficial to first deal with the issue of computing a fundamental solution for the Lamé and Stokes systems via Fourier analysis (as done in Sects. 10.3 and 10.6, respectively). The inclusion of a section on mean value formulas for the Lamé operator is justified by the fact that such formulas directly yield interior estimates, without resorting to quantitative elliptic regularity, which is typically formulated in the language of Sobolev spaces. In turn, these interior estimates play a pivotal role in establishing uniqueness for the corresponding Poisson problem. A more systematic treatment of the issue of mean value formulas for the Lamé operator may be found in [51].
10.7 Additional Exercises for Chap. 10
Exercise 10.31. Let $L_S$ be the Stokes operator from (10.6.1). Suppose that $p \in \mathcal{D}'(\Omega)$ and $u = (u_1,\ldots,u_n) \in \big[\mathcal{D}'(\Omega)\big]^n$ satisfy $L_S(u,p) = 0$ in $\big[\mathcal{D}'(\Omega)\big]^{n+1}$. Under these conditions prove that $p, u_j \in C^\infty(\Omega)$ for $j \in \{1,\ldots,n\}$, and $\Delta p = 0$ pointwise in $\Omega$ while $\Delta^2 u_j = 0$ pointwise in $\Omega$ for $j \in \{1,\ldots,n\}$.
Exercise 10.32. For each $x \in \mathbb{R}^2 \setminus \{0\}$ consider the functions
\[
E_{jk}(x) := \frac{1}{4\pi\mu(2\mu+\lambda)}\Big[(3\mu+\lambda)\,\delta_{jk}\ln|x| - \frac{(\mu+\lambda)\,x_j x_k}{|x|^2}\Big], \qquad j,k \in \{1,2\}. \tag{10.7.1}
\]
Prove that $E := \big(E_{jk}\big)_{1\le j,k\le 2}$ is a fundamental solution for the Lamé operator in $\mathbb{R}^2$.

Exercise 10.33. Consider the functions
\[
E_{jk}(x) := \frac{1}{4\pi}\,\delta_{jk}\ln|x| - \frac{1}{4\pi}\cdot\frac{x_j x_k}{|x|^2}, \qquad p_k(x) := -\frac{1}{2\pi}\cdot\frac{x_k}{|x|^2}, \tag{10.7.2}
\]
defined for $x \in \mathbb{R}^2 \setminus \{0\}$ and $j,k \in \{1,2\}$. Prove that if $E := \big(E_{jk}\big)_{1\le j,k\le 2}$ and $p = (p_1,p_2)$, then $(E,p)$ is a fundamental solution for the Stokes operator in $\mathbb{R}^2$.

Exercise 10.34. Assume $n \ge 3$, let $L$ be the Lamé operator from (10.3.1) such that (10.3.2) is satisfied, and consider $f = (f_1,\ldots,f_n) \in \big[C_0^\infty(\mathbb{R}^n)\big]^n$. Prove that for each $A \in M_{n\times n}(\mathbb{R})$ and each $b \in \mathbb{R}^n$, there exists a unique solution for the problem
\[
\begin{cases}
u = (u_1,\ldots,u_n) \in [C^\infty(\mathbb{R}^n)]^n, \\
Lu = f \ \text{ pointwise in } \mathbb{R}^n, \\
\lim\limits_{|x|\to\infty} |u(x) - Ax - b| = 0.
\end{cases} \tag{10.7.3}
\]

Exercise 10.35. Assume $n \ge 3$ and let $L$ be the Lamé operator from (10.3.1) such that (10.3.2) is satisfied. Prove that if $m \in \mathbb{N}_0$ and $u$ is a solution of the problem
\[
\begin{cases}
u \in [C^\infty(\mathbb{R}^n)]^n, \\
Lu = 0 \ \text{ pointwise in } \mathbb{R}^n, \\
|u(x)| \le C(|x|+1)^m \ \text{ for some } C \in (0,\infty),
\end{cases} \tag{10.7.4}
\]
then the components of $u$ are polynomials of degree $\le m$.
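The two-dimensional pair in Exercise 10.33 can be spot-checked numerically as well. The sketch below (ours, not from the book) verifies, by central finite differences, that (10.7.2) satisfies the Stokes equations pointwise away from the origin in $\mathbb{R}^2$.

```python
import math

def E(j, k, x):
    """Entry (j,k) of (10.7.2)."""
    r2 = x[0]**2 + x[1]**2
    d = 1.0 if j == k else 0.0
    return d * math.log(math.sqrt(r2)) / (4*math.pi) - x[j]*x[k] / (4*math.pi*r2)

def p(k, x):
    """Component k of (10.7.2)."""
    r2 = x[0]**2 + x[1]**2
    return -x[k] / (2*math.pi*r2)

def partial(f, x, i, h=1e-5):
    xp = list(x); xm = list(x)
    xp[i] += h; xm[i] -= h
    return (f(tuple(xp)) - f(tuple(xm))) / (2*h)

def laplacian(f, x, h=1e-4):
    out = 0.0
    for i in range(2):
        xp = list(x); xm = list(x)
        xp[i] += h; xm[i] -= h
        out += (f(tuple(xp)) - 2*f(x) + f(tuple(xm))) / h**2
    return out

x0 = (0.8, -0.5)
for j in range(2):
    for k in range(2):
        res = laplacian(lambda y, j=j, k=k: E(j, k, y), x0) - \
              partial(lambda y, k=k: p(k, y), x0, j)
        assert abs(res) < 1e-4            # Delta E_jk - d_j p_k = 0
for k in range(2):
    assert abs(sum(partial(lambda y, l=l, k=k: E(l, k, y), x0, l)
                   for l in range(2))) < 1e-6   # divergence-free columns
print("(10.7.2) satisfies the Stokes equations away from 0")
```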
Chapter 11

More on Fundamental Solutions for Systems

11.1 Computing a Fundamental Solution for the Lamé Operator
In this section we use Theorem 10.8 in order to find a fundamental solution for the Lamé operator of elastostatics in $\mathbb{R}^n$, with $n \ge 2$. This is possible since, in the case when $L(\partial)$ is the Lamé operator, $\det(L(\partial))$ is a constant multiple of $\Delta^n$. Concretely, fix $n \in \mathbb{N}$, $n \ge 2$, and consider the algebra $\mathcal{R}$ as in (10.2.15). Fix constants $\lambda, \mu \in \mathbb{C}$ satisfying $\mu \ne 0$, $\lambda + 2\mu \ne 0$, and set
\[
L(\partial) := \mu\Delta + (\lambda+\mu)\,\nabla\,\mathrm{div} \in M_{n\times n}(\mathcal{R}), \tag{11.1.1}
\]
[recall that $\nabla = (\partial_1,\ldots,\partial_n)$ and that the operator $\mathrm{div}$ was defined in Exercise 7.60]. This is precisely the Lamé operator from (10.3.1), and $L(\partial)$ has the form (10.2.12) with $a := \nabla$. Thus, for this choice of $a$, formula (10.2.13) gives
\[
P(\partial) := \det L(\partial) = \mu^{n-1}(\lambda+2\mu)\,\Delta^n, \tag{11.1.2}
\]
while (10.2.14) implies
\[
\operatorname{adj}[L(\partial)] = \mu^{n-2}\big[(\lambda+2\mu)\,\Delta\,I_{n\times n} - (\lambda+\mu)\,\nabla\,\mathrm{div}\big]\,\Delta^{n-2}. \tag{11.1.3}
\]
Going further, for each $m \in \mathbb{N}$, denote by $E_{\Delta^m}$ the fundamental solution for the poly-harmonic differential operator $\Delta^m$ in $\mathbb{R}^n$ from (7.5.2) and apply Proposition 7.29 to conclude that
\[
\Delta^{n-2} E_{\Delta^n} = E_{\Delta^2} + c(n) \quad\text{in } \mathcal{S}'(\mathbb{R}^n) \text{ and pointwise in } \mathbb{R}^n \setminus \{0\}, \tag{11.1.4}
\]
where $c(n) = 0$ if $n \ne 4$ and $c(4) \in \mathbb{R}$. A direct computation starting with (7.5.2) corresponding to $n = 4 = m$ and using Lemma 7.20 yields $c(4) = -\frac{5}{24\pi^2}$. Thus, a combination of (11.1.4) and (7.3.8) yields
\[
\Delta^{n-2} E_{\Delta^n}(x) =
\begin{cases}
\dfrac{|x|^{4-n}}{2(n-2)(n-4)\,\omega_{n-1}} & \text{if } n \ge 3,\ n \ne 4, \\[2mm]
-\dfrac{1}{8\pi^2}\,\ln|x| - \dfrac{5}{24\pi^2} & \text{if } n = 4, \\[2mm]
\dfrac{1}{8\pi}\,|x|^2\ln|x| & \text{if } n = 2,
\end{cases} \tag{11.1.5}
\]
both for each $x \in \mathbb{R}^n \setminus \{0\}$ as well as in $\mathcal{S}'(\mathbb{R}^n)$. Also, by applying one more $\Delta$ to the identity in (11.1.4) and recalling (7.1.12), we have
\[
\Delta^{n-1} E_{\Delta^n}(x) = E_\Delta(x) =
\begin{cases}
-\dfrac{1}{(n-2)\,\omega_{n-1}}\,|x|^{2-n} & \text{if } n \ge 3, \\[2mm]
\dfrac{1}{2\pi}\,\ln|x| & \text{if } n = 2,
\end{cases} \tag{11.1.6}
\]
for every $x \in \mathbb{R}^n \setminus \{0\}$ and in $\mathcal{S}'(\mathbb{R}^n)$. Hence, based on (10.2.23) and (11.1.2), we obtain that a fundamental (matrix) solution $E_L$ for the operator $L(\partial)$ as in (11.1.1) is given by
\[
E_L(x) = \operatorname{adj}[L(\partial)]\,\big[E_{\mu^{n-1}(\lambda+2\mu)\Delta^n}(x)\,I_{n\times n}\big], \qquad x \in \mathbb{R}^n \setminus \{0\}, \tag{11.1.7}
\]
where $E_{\mu^{n-1}(\lambda+2\mu)\Delta^n}$ is a fundamental solution for $P(\partial) = \mu^{n-1}(\lambda+2\mu)\Delta^n$. Using (11.1.3) and (11.1.4) in the rightmost expression in (11.1.7) further gives that, for each $x \in \mathbb{R}^n \setminus \{0\}$,
\[
E_L(x) = \frac{1}{\mu^{n-1}(\lambda+2\mu)}\,\mu^{n-2}\big[(\lambda+2\mu)\Delta I_{n\times n} - (\lambda+\mu)\nabla\,\mathrm{div}\big]\,\Delta^{n-2}E_{\Delta^n}(x)\,I_{n\times n}
= \Big[\frac{1}{\mu}\,\delta_{jk}\,E_\Delta(x) - \frac{\lambda+\mu}{\mu(\lambda+2\mu)}\,(\partial_j\partial_k E_{\Delta^2})(x)\Big]_{1\le j,k\le n}. \tag{11.1.8}
\]
A quick inspection of (7.3.8) and the last equality in (11.1.6) shows that, both for each $x = (x_1,\ldots,x_n) \in \mathbb{R}^n \setminus \{0\}$ as well as in $\mathcal{S}'(\mathbb{R}^n)$, for each $j,k \in \{1,\ldots,n\}$ we have
\[
\partial_j\partial_k E_{\Delta^2}(x) =
\begin{cases}
\dfrac{\delta_{jk}}{2}\,E_\Delta(x) + \dfrac{1}{2\omega_{n-1}}\cdot\dfrac{x_j x_k}{|x|^n} & \text{if } n \ge 3, \\[2mm]
\dfrac{\delta_{jk}}{2}\,E_\Delta(x) + \dfrac{1}{2\omega_{n-1}}\cdot\dfrac{x_j x_k}{|x|^n} + \dfrac{\delta_{jk}}{8\pi} & \text{if } n = 2.
\end{cases} \tag{11.1.9}
\]
Note that, for the purpose of computing $E_L$, the additive constant $\frac{\delta_{jk}}{8\pi}$ from (11.1.9) may be dropped. Thus, based on (11.1.8) and (11.1.9) (with $\delta_{jk}/(8\pi)$ dropped), it follows that
\[
E_L(x) = \frac{3\mu+\lambda}{2\mu(2\mu+\lambda)}\,E_\Delta(x)\,I_{n\times n} - \frac{\lambda+\mu}{2\mu(2\mu+\lambda)\,\omega_{n-1}}\cdot\frac{x\otimes x}{|x|^n}, \qquad \forall\, x \in \mathbb{R}^n \setminus \{0\}, \tag{11.1.10}
\]
is a tempered distribution that is a fundamental solution for the Lamé operator (11.1.1) (with $\mu \ne 0$, $\lambda + 2\mu \ne 0$) in $\mathbb{R}^n$. Note that this formula coincides with the one from (10.3.21) if $n \ge 3$ and with the one from (10.7.1) if $n = 2$.
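Formula (11.1.10) can be sanity-checked numerically (this check is ours, not part of the book): away from the origin, each entry of $L(E_L)$ should vanish. The sketch below does this in $\mathbb{R}^3$ with sample moduli, approximating all second derivatives by central differences.

```python
import math

MU, LAM = 2.0, 3.0          # sample moduli with mu != 0, lam + 2 mu != 0
OMEGA2 = 4.0 * math.pi      # omega_{n-1} for n = 3

def EL(j, k, x):
    """Entry (j,k) of (11.1.10) with n = 3."""
    r = math.sqrt(sum(c*c for c in x))
    d = 1.0 if j == k else 0.0
    e_delta = -1.0 / (OMEGA2 * r)                 # E_Delta in R^3, cf. (11.1.6)
    c1 = (3*MU + LAM) / (2*MU*(2*MU + LAM))
    c2 = (LAM + MU) / (2*MU*(2*MU + LAM)*OMEGA2)
    return c1 * e_delta * d - c2 * x[j]*x[k] / r**3

def d2(f, x, a, b, h=1e-3):
    """Second partial d_a d_b f by central differences."""
    if a == b:
        xp = list(x); xm = list(x)
        xp[a] += h; xm[a] -= h
        return (f(tuple(xp)) - 2*f(x) + f(tuple(xm))) / h**2
    total = 0.0
    for sa in (1, -1):
        for sb in (1, -1):
            y = list(x); y[a] += sa*h; y[b] += sb*h
            total += sa * sb * f(tuple(y))
    return total / (4*h**2)

x0 = (0.5, 0.3, -0.4)
for j in range(3):
    for k in range(3):
        lap = sum(d2(lambda y, j=j, k=k: EL(j, k, y), x0, i, i) for i in range(3))
        graddiv = sum(d2(lambda y, s=s, k=k: EL(s, k, y), x0, j, s) for s in range(3))
        # mu * Delta E_jk + (lam + mu) * d_j sum_s d_s E_sk = 0 away from 0
        assert abs(MU*lap + (LAM + MU)*graddiv) < 1e-4
print("Lame system L E_L = 0 verified away from the origin")
```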
11.2 Computing a Fundamental Solution for the Stokes Operator
In this section, the goal is to compute a fundamental solution for the Stokes operator in $\mathbb{R}^n$ starting from the explicit expression (11.1.10) for the fundamental solution $E_L$ for the Lamé operator. To this end, fix $n \in \mathbb{N}$, $n \ge 2$, and $\lambda, \mu \in \mathbb{C}$ satisfying $\mu \ne 0$, $\lambda + 2\mu \ne 0$, and let $x = (x_1,\ldots,x_n)$ for $x \in \mathbb{R}^n$. For each $k \in \{1,\ldots,n\}$, denote by $E_L^k$ the $k$th column of the fundamental solution $E_L$ from (11.1.10). Then a straightforward calculation gives that, for each $x \in \mathbb{R}^n \setminus \{0\}$ and in $\mathcal{S}'(\mathbb{R}^n)$, there holds
\[
\mathrm{div}\,[E_L^k](x) = \frac{1}{(\lambda+2\mu)\,\omega_{n-1}}\cdot\frac{x_k}{|x|^n}, \qquad \forall\, k \in \{1,\ldots,n\}. \tag{11.2.1}
\]
Define
\[
p_k(x) := -\frac{1}{\omega_{n-1}}\cdot\frac{x_k}{|x|^n}, \qquad \forall\, x \in \mathbb{R}^n \setminus \{0\}, \quad \forall\, k \in \{1,\ldots,n\}. \tag{11.2.2}
\]
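The divergence identity (11.2.1) can be confirmed numerically (a check of ours, not from the book), here with $n = 3$ and sample moduli, using central first differences on the columns of $E_L$.

```python
import math

MU, LAM = 2.0, 3.0
OMEGA2 = 4.0 * math.pi

def EL(j, k, x):
    """Entry (j,k) of (11.1.10) with n = 3."""
    r = math.sqrt(sum(c*c for c in x))
    d = 1.0 if j == k else 0.0
    c1 = (3*MU + LAM) / (2*MU*(2*MU + LAM))
    c2 = (LAM + MU) / (2*MU*(2*MU + LAM)*OMEGA2)
    return -c1 * d / (OMEGA2 * r) - c2 * x[j]*x[k] / r**3

def partial(f, x, i, h=1e-5):
    xp = list(x); xm = list(x)
    xp[i] += h; xm[i] -= h
    return (f(tuple(xp)) - f(tuple(xm))) / (2*h)

x0 = (0.6, -0.2, 0.5)
r = math.sqrt(sum(c*c for c in x0))
for k in range(3):
    div = sum(partial(lambda y, j=j, k=k: EL(j, k, y), x0, j) for j in range(3))
    expected = x0[k] / ((LAM + 2*MU) * OMEGA2 * r**3)   # right side of (11.2.1)
    assert abs(div - expected) < 1e-8
print("divergence identity (11.2.1) verified")
```

Note that the right-hand side of (11.2.1) is $-\frac{1}{\lambda+2\mu}\,p_k(x)$, which is what makes the limits (11.2.3)–(11.2.4) below work.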
Then, from (11.2.1) it follows that for each $k \in \{1,\ldots,n\}$ we have
\[
\lim_{\lambda\to\infty} \mathrm{div}\,[E_L^k] = 0 \quad\text{pointwise in } \mathbb{R}^n \setminus \{0\} \text{ and in } \mathcal{S}'(\mathbb{R}^n), \tag{11.2.3}
\]
and
\[
\lim_{\lambda\to\infty} \big[-(\lambda+\mu)\,\mathrm{div}\,[E_L^k]\big] = p_k \quad\text{pointwise in } \mathbb{R}^n \setminus \{0\} \text{ and in } \mathcal{S}'(\mathbb{R}^n). \tag{11.2.4}
\]
In addition, from (11.1.10) we obtain
\[
\lim_{\lambda\to\infty} E_L = \frac{1}{\mu}\,E^S \quad\text{pointwise in } \mathbb{R}^n \setminus \{0\} \text{ and in } \mathcal{S}'(\mathbb{R}^n), \tag{11.2.5}
\]
where we have set
\[
E^S(x) := \Big[\frac{\delta_{jk}}{2}\,E_\Delta(x) - \frac{1}{2\omega_{n-1}}\cdot\frac{x_j x_k}{|x|^n}\Big]_{1\le j,k\le n} \quad\text{for } x \in \mathbb{R}^n \setminus \{0\}, \tag{11.2.6}
\]
with $E_\Delta$ as in (11.1.6). Consequently, a combination of (11.2.4) and (11.2.6) (in view of the definition (11.1.1) of the Lamé operator) implies
\[
\Delta E^S_{jk} - \partial_j p_k = \lim_{\lambda\to\infty}\Big[\mu\,\Delta E^L_{jk} + (\lambda+\mu)\,\partial_j\sum_{s=1}^n \partial_s E^L_{sk}\Big] = \delta_{jk}\,\delta \quad\text{in } \mathcal{S}'(\mathbb{R}^n), \quad \forall\, j,k \in \{1,\ldots,n\}. \tag{11.2.7}
\]
Also, (11.2.5) and (11.2.3), with the convention that $E^S_k$ stands for the $k$th column in the matrix $E^S$ from (11.2.6), give
\[
\mathrm{div}\,[E^S_k] = \mu \lim_{\lambda\to\infty} \mathrm{div}\,[E_L^k] = 0 \quad\text{in } \mathcal{S}'(\mathbb{R}^n), \quad \forall\, k \in \{1,\ldots,n\}. \tag{11.2.8}
\]
A quick inspection of (11.2.6) and (11.2.2) shows that the entries in the matrix $E^S$ and the components of the vector $p := (p_1,\ldots,p_n)$ belong to $L^1_{\mathrm{loc}}(\mathbb{R}^n)$, while (11.2.7) and (11.2.8) guarantee that
\[
\Delta E^S - \nabla p = \delta\, I_{n\times n} \ \text{ in } M_{n\times n}\big(\mathcal{S}'(\mathbb{R}^n)\big), \qquad \mathrm{div}\, E^S = 0 \ \text{ in } [\mathcal{S}'(\mathbb{R}^n)]^n. \tag{11.2.9}
\]
Recalling now the definition of the Stokes operator from (10.6.1), we may conclude that
\[
(E^S, p) \ \text{ is a fundamental solution for the Stokes operator in } \mathbb{R}^n. \tag{11.2.10}
\]
Note that the expressions we obtained for $(E^S, p)$, as given in (11.2.6) and (11.2.2), are the same as the ones from (10.6.13)–(10.6.14) if $n \ge 3$, and as the ones from (10.7.2) if $n = 2$.
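The limiting behavior (11.2.5) can be illustrated numerically (our own check, not from the book): at a fixed point, the entries of $E_L$ approach $E^S/\mu$ at rate $O(1/\lambda)$ as $\lambda$ grows.

```python
import math

MU = 2.0
OMEGA2 = 4.0 * math.pi

def EL(j, k, x, lam):
    """Entry (j,k) of (11.1.10) with n = 3 and variable lambda."""
    r = math.sqrt(sum(c*c for c in x))
    d = 1.0 if j == k else 0.0
    c1 = (3*MU + lam) / (2*MU*(2*MU + lam))
    c2 = (lam + MU) / (2*MU*(2*MU + lam)*OMEGA2)
    return -c1 * d / (OMEGA2 * r) - c2 * x[j]*x[k] / r**3

def ES(j, k, x):
    """Entry (j,k) of (11.2.6) with n = 3."""
    r = math.sqrt(sum(c*c for c in x))
    d = 1.0 if j == k else 0.0
    return -0.5 * d / (OMEGA2 * r) - x[j]*x[k] / (2*OMEGA2*r**3)

x0 = (0.3, 0.7, -0.2)
for j in range(3):
    for k in range(3):
        gap_small = abs(EL(j, k, x0, 1e3) - ES(j, k, x0)/MU)
        gap_large = abs(EL(j, k, x0, 1e6) - ES(j, k, x0)/MU)
        assert gap_large < gap_small       # the gap shrinks as lambda grows
        assert gap_large < 1e-6
print("E_L converges to E^S/mu as lambda grows")
```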
11.3 Fundamental Solutions for Higher-Order Systems
In this section we discuss an approach for computing all fundamental solutions that are tempered distributions for a certain subclass of systems of the form (10.2.16). To specify this class, let as before $n \in \mathbb{N}$ denote the Euclidean dimension, fix $M, m \in \mathbb{N}$, and consider $M \times M$ systems of homogeneous differential operators of order $2m$ with complex constant coefficients that have the form
\[
L = L(\partial) := \Big(\sum_{|\alpha|=2m} a^{jk}_\alpha\,\partial^\alpha\Big)_{1\le j,k\le M} =: \big(L_{jk}(\partial)\big)_{1\le j,k\le M}, \tag{11.3.1}
\]
where $a^{jk}_\alpha \in \mathbb{C}$, $j,k \in \{1,\ldots,M\}$. Define the characteristic matrix of $L$ to be
\[
L(\xi) := \Big(\sum_{|\alpha|=2m} a^{jk}_\alpha\,\xi^\alpha\Big)_{1\le j,k\le M} \quad\text{for } \xi \in \mathbb{R}^n. \tag{11.3.2}
\]
The standing assumption under which we will identify all fundamental solutions for $L$ that are tempered distributions is
\[
\det[L(\xi)] \ne 0, \qquad \forall\, \xi \in \mathbb{R}^n \setminus \{0\}. \tag{11.3.3}
\]
Several types of operators discussed earlier fall under the scope of these specifications, including the class of strictly elliptic, homogeneous, second-order, constant coefficient operators from Sect. 7.8, the poly-harmonic operator from Sect. 7.5, and the Lamé operator from Sect. 10.3. For any operator $L$ as in (11.3.1), its transpose and complex conjugate are, respectively, given by
\[
L^\top := \Big(\sum_{|\alpha|=2m} a^{jk}_\alpha\,\partial^\alpha\Big)_{1\le k,j\le M}, \qquad
\overline{L} := \Big(\sum_{|\alpha|=2m} \overline{a^{jk}_\alpha}\,\partial^\alpha\Big)_{1\le j,k\le M}, \tag{11.3.4}
\]
while the formal adjoint of $L$ is defined as
\[
L^* := \overline{L^\top}. \tag{11.3.5}
\]
The main result in this section is Theorem 11.1 below, describing the nature and properties of all fundamental solutions for a higher-order system $L$ as in (11.3.1)–(11.3.3) that are tempered distributions. Before presenting this basic result we first describe a strategy that points to a natural candidate for a fundamental solution for $L$. Suppose that $Q$ is a scalar constant coefficient differential operator (of an auxiliary nature) which, from other sources of information, is known to have a fundamental solution in $\mathbb{R}^n$ of the form
\[
E_Q(x) = \int_{S^{n-1}} F(x\cdot\xi)\,d\sigma(\xi), \tag{11.3.6}
\]
where $F$ is a sufficiently regular scalar-valued function on the real line. Granted this, we then proceed to define
\[
E_L(x) := Q\Big[\int_{S^{n-1}} G(x\cdot\xi)\,\big[L(\xi)\big]^{-1}\,d\sigma(\xi)\Big], \tag{11.3.7}
\]
where $G$ is a scalar-valued function on the real line, chosen in such a way that
\[
p(t) := G^{(2m)}(t) - F(t) \ \text{ is a polynomial in } t \in \mathbb{R} \text{ of degree} < \operatorname{order} Q. \tag{11.3.8}
\]
Then
\[
P(x) := \int_{S^{n-1}} p(x\cdot\xi)\,d\sigma(\xi), \qquad x \in \mathbb{R}^n, \tag{11.3.9}
\]
is a polynomial in $\mathbb{R}^n$ of degree $< \operatorname{order} Q$. Keeping this in mind and observing that
\[
LQ = QL \qquad\text{and}\qquad L\big[G(x\cdot\xi)\big] = G^{(2m)}(x\cdot\xi)\,L(\xi), \tag{11.3.10}
\]
we may compute
\[
\begin{aligned}
L\big[E_L(x)\big] &= Q\Big[\int_{S^{n-1}} G^{(2m)}(x\cdot\xi)\,L(\xi)\,\big[L(\xi)\big]^{-1}\,d\sigma(\xi)\Big]
= Q\Big[\int_{S^{n-1}} F(x\cdot\xi)\,d\sigma(\xi) + P(x)\Big]\,I_{M\times M} \\
&= Q\big[E_Q(x)\big]\,I_{M\times M} + (QP)(x)\,I_{M\times M} = \delta\,I_{M\times M} + 0\cdot I_{M\times M} = \delta\,I_{M\times M},
\end{aligned} \tag{11.3.11}
\]
which shows that $E_L$ is a fundamental solution for the system $L$ in $\mathbb{R}^n$.
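The second identity in (11.3.10) is a chain-rule computation; here is a quick numerical confirmation of it (ours, not from the book) in the simplest scalar case $M = 1$, $m = 1$, $L = \Delta$, where $L(\xi) = |\xi|^2$ and the identity reads $\Delta_x\big[G(x\cdot\xi)\big] = G''(x\cdot\xi)\,|\xi|^2$.

```python
import math

def G(t):           # sample smooth profile
    return t**5

def G2(t):          # its second derivative
    return 20.0 * t**3

def laplacian(f, x, h=1e-4):
    out = 0.0
    for i in range(len(x)):
        xp = list(x); xm = list(x)
        xp[i] += h; xm[i] -= h
        out += (f(tuple(xp)) - 2*f(x) + f(tuple(xm))) / h**2
    return out

xi = (0.6, -0.8, 0.0)   # need not be a unit vector for this identity
x0 = (1.1, 0.4, -0.7)
dot = sum(a*b for a, b in zip(x0, xi))
norm2 = sum(c*c for c in xi)
lhs = laplacian(lambda y: G(sum(a*b for a, b in zip(y, xi))), x0)
rhs = G2(dot) * norm2
assert abs(lhs - rhs) < 1e-5
print("Delta_x G(x.xi) = G''(x.xi)|xi|^2 verified")
```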
To implement the procedure just described, it is natural to take the poly-harmonic operator $\Delta^N$ to play the role of the auxiliary scalar differential operator $Q$. This is not just a matter of convenience: from Lemma 7.31 we already know that if $N \in \mathbb{N}$ satisfies $N \ge n/2$, then there exists a fundamental solution for this operator that has the form requested in (11.3.6). Specifically, in this case we have (compare with (7.5.24))
\[
F(t) := -\frac{1}{(2\pi i)^n\,(2N-n)!}\,t^{2N-n}\,\log\big(t/i\big). \tag{11.3.12}
\]
To find a function $G$ on the real line that satisfies (11.3.8), we may try
\[
G(t) := a\,t^A\,\log\big(t/i\big), \tag{11.3.13}
\]
where $a \in \mathbb{R}$, $A \in \mathbb{N}$ with $A \ge 2m$, and note there exists a constant $C_{A,a,m}$ such that
\[
\Big(\frac{d}{dt}\Big)^{2m} G(t) = a\,\frac{A!}{(A-2m)!}\,t^{A-2m}\,\log\big(t/i\big) + C_{A,a,m}\,t^{A-2m}, \tag{11.3.14}
\]
which means that (11.3.8) is going to hold if we take
\[
A := 2N - n + 2m \qquad\text{and}\qquad a := -\frac{1}{(2\pi i)^n\,(2N-n+2m)!}, \tag{11.3.15}
\]
for a polynomial $p$ on $\mathbb{R}$ of degree $2N-n$, which is strictly less than the order of $Q = \Delta^N$. To bring matters more in line with notation employed in the proof of Theorem 11.1, pick now a number $q \in \mathbb{N}_0$ with the same parity as $n$, and choose $N := (n+q)/2$ (which satisfies $N \in \mathbb{N}$ and $N \ge n/2$). For this choice of $N$, the operator $Q$ becomes
\[
\Delta^{\frac{n+q}{2}}, \tag{11.3.16}
\]
and if $A, a$ are as in (11.3.15), the function $G$ from (11.3.13) takes the form
\[
G(t) = -\frac{1}{(2\pi i)^n\,(2m+q)!}\,t^{2m+q}\,\log\big(t/i\big). \tag{11.3.17}
\]
For these specifications, the function $E_L$ from (11.3.7) then becomes precisely the function
\[
E(x) := \frac{-1}{(2\pi i)^n\,(2m+q)!}\,\Delta_x^{(n+q)/2}\Big[\int_{S^{n-1}} (x\cdot\xi)^{2m+q}\,\log\Big(\frac{x\cdot\xi}{i}\Big)\,\big[L(\xi)\big]^{-1}\,d\sigma(\xi)\Big]. \tag{11.3.18}
\]
Having explained the genesis of this formula, we now proceed to rigorously show that not only is this candidate for a fundamental solution natural but it also does the job.
Theorem 11.1. Fix $n, m, M \in \mathbb{N}$ with $n \ge 2$, and assume that $L$ is an $M \times M$ system in $\mathbb{R}^n$ of order $2m$ as in (11.3.1)–(11.3.3). Then the $M \times M$ matrix $E$ defined at each $x \in \mathbb{R}^n \setminus \{0\}$ by
\[
E(x) := \frac{1}{4(2\pi i)^{n-1}\,(2m-1)!}\,\Delta_x^{(n-1)/2}\Big[\int_{S^{n-1}} (x\cdot\xi)^{2m-1}\,\mathrm{sgn}(x\cdot\xi)\,\big[L(\xi)\big]^{-1}\,d\sigma(\xi)\Big] \tag{11.3.19}
\]
if $n$ is odd, and
\[
E(x) := \frac{-1}{(2\pi i)^n\,(2m)!}\,\Delta_x^{n/2}\Big[\int_{S^{n-1}} (x\cdot\xi)^{2m}\,\ln|x\cdot\xi|\,\big[L(\xi)\big]^{-1}\,d\sigma(\xi)\Big] \tag{11.3.20}
\]
if $n$ is even, satisfies the following properties.

1. One has $E \in M_{M\times M}\big(\mathcal{S}'(\mathbb{R}^n)\big) \cap M_{M\times M}\big(C^\infty(\mathbb{R}^n \setminus \{0\})\big)$ and
\[
E(-x) = E(x) \quad\text{for all } x \in \mathbb{R}^n \setminus \{0\}. \tag{11.3.21}
\]
Moreover, the entries in $E$ are real-analytic functions in $\mathbb{R}^n \setminus \{0\}$.

2. If $I_{M\times M}$ is the $M \times M$ identity matrix, then for each $y \in \mathbb{R}^n$
\[
L^x\big[E(x-y)\big] = \delta_y(x)\,I_{M\times M} \quad\text{in } M_{M\times M}\big(\mathcal{S}'(\mathbb{R}^n)\big), \tag{11.3.22}
\]
where the superscript $x$ denotes the fact that the operator $L$ in (11.3.22) is applied to each column of $E$ in the variable $x$.

3. Define the $M \times M$ matrix-valued function
\[
\mathcal{P}(x) := \frac{-1}{(2\pi i)^n\,(2m-n)!}\int_{S^{n-1}} (x\cdot\xi)^{2m-n}\,\big[L(\xi)\big]^{-1}\,d\sigma(\xi), \qquad \forall\, x \in \mathbb{R}^n. \tag{11.3.23}
\]
Then the entries of $\mathcal{P}$ are identically zero when either $n$ is odd or $n > 2m$, and are homogeneous polynomials of degree $2m-n$ when $n \le 2m$. Moreover, there exists a function $\Phi \in M_{M\times M}\big(C^\infty(\mathbb{R}^n \setminus \{0\})\big)$ that is positive homogeneous of degree $2m-n$ such that
\[
E(x) = \Phi(x) + \ln|x|\,\mathcal{P}(x), \qquad \forall\, x \in \mathbb{R}^n \setminus \{0\}. \tag{11.3.24}
\]

4. For each $\beta \in \mathbb{N}_0^n$ with $|\beta| \ge 2m-1$, the restriction to $\mathbb{R}^n \setminus \{0\}$ of the matrix distribution $\partial^\beta E$ is of class $C^\infty$ and positive homogeneous of degree $2m-n-|\beta|$.

5. For each $\beta \in \mathbb{N}_0^n$ there exists $C_\beta \in (0,\infty)$ such that the estimate
\[
|\partial^\beta E(x)| \le
\begin{cases}
\dfrac{C_\beta}{|x|^{n-2m+|\beta|}} & \text{if either $n$ is odd, or $n > 2m$, or $|\beta| > 2m-n$,} \\[2mm]
\dfrac{C_\beta\,(1 + |\ln|x||)}{|x|^{n-2m+|\beta|}} & \text{if } 0 \le |\beta| \le 2m-n,
\end{cases} \tag{11.3.25}
\]
holds for each $x \in \mathbb{R}^n \setminus \{0\}$.
6. When restricted to $\mathbb{R}^n \setminus \{0\}$, the entries of $\widehat{E}$ (with the "hat" denoting the Fourier transform) are $C^\infty$ functions and, moreover,
\[
\widehat{E}(\xi) = (-1)^m\,\big[L(\xi)\big]^{-1} \quad\text{for each } \xi \in \mathbb{R}^n \setminus \{0\}. \tag{11.3.26}
\]

7. Writing $E_L$ in place of $E$ to emphasize the dependence on $L$, the fundamental solution $E_L$ with entries as in (11.3.19)–(11.3.20) satisfies
\[
E_{L^\top} = \big(E_L\big)^\top, \qquad E_{\overline{L}} = \overline{E_L}, \qquad E_{L^*} = \big(E_L\big)^*, \tag{11.3.27}
\]
and $E_{\lambda L} = \lambda^{-1} E_L$ for all $\lambda \in \mathbb{C} \setminus \{0\}$.

8. Any fundamental solution $U \in M_{M\times M}\big(\mathcal{S}'(\mathbb{R}^n)\big)$ of the system $L$ in $\mathbb{R}^n$ is of the form $U = E + Q$, where $E$ is as in (11.3.19)–(11.3.20) and $Q$ is an $M \times M$ matrix with entries that are polynomials in $\mathbb{R}^n$ and with columns $Q_k$, $k \in \{1,\ldots,M\}$, that satisfy the pointwise equations $LQ_k = 0 \in \mathbb{C}^M$ in $\mathbb{R}^n$ for each $k \in \{1,\ldots,M\}$.

Proof. To facilitate the subsequent discussion, denote by $\big(P^{jk}(\xi)\big)_{1\le j,k\le M}$ the inverse of the characteristic matrix $L(\xi)$, that is,
\[
\big(P^{jk}(\xi)\big)_{1\le j,k\le M} := \big[L(\xi)\big]^{-1}, \qquad \forall\, \xi \in \mathbb{R}^n \setminus \{0\}. \tag{11.3.28}
\]
Then the entries $\big(E_{jk}\big)_{1\le j,k\le M}$ of the matrix $E$ are given at each point $x \in \mathbb{R}^n \setminus \{0\}$ by
\[
E_{jk}(x) = \frac{1}{4(2\pi i)^{n-1}(2m-1)!}\,\Delta_x^{(n-1)/2}\Big[\int_{S^{n-1}} (x\cdot\xi)^{2m-1}\,\mathrm{sgn}(x\cdot\xi)\,P^{jk}(\xi)\,d\sigma(\xi)\Big] \tag{11.3.29}
\]
if $n$ is odd, and
\[
E_{jk}(x) = \frac{-1}{(2\pi i)^n(2m)!}\,\Delta_x^{n/2}\Big[\int_{S^{n-1}} (x\cdot\xi)^{2m}\,\ln|x\cdot\xi|\,P^{jk}(\xi)\,d\sigma(\xi)\Big] \tag{11.3.30}
\]
if $n$ is even. The proof of the theorem is completed in eight steps.

Step I. We claim that if we set $q := 0$ if $n$ is even and $q := 1$ if $n$ is odd, then
\[
E_{jk}(x) = \frac{-1}{(2\pi i)^n(2m+q)!}\,\Delta_x^{(n+q)/2}\Big[\int_{S^{n-1}} (x\cdot\xi)^{2m+q}\,\log\Big(\frac{x\cdot\xi}{i}\Big)\,P^{jk}(\xi)\,d\sigma(\xi)\Big] \tag{11.3.31}
\]
for $x \in \mathbb{R}^n \setminus \{0\}$ and $j,k \in \{1,\ldots,M\}$, where $\log$ denotes the principal branch of the complex logarithm, defined for points $z \in \mathbb{C} \setminus \{x : x \in \mathbb{R},\ x \le 0\}$. Suppose first $q = 0$, and start from (11.3.30); then use that $n$ is even, thus the formula $\log\big(\frac{x\cdot\xi}{i}\big) = \ln|x\cdot\xi| - \frac{\pi i}{2}\,\mathrm{sgn}(x\cdot\xi)$ for the term under the integral sign,
and the fact that the integral over $S^{n-1}$ of the function $(x\cdot\xi)^{2m}\,\mathrm{sgn}(x\cdot\xi)\,P^{jk}(\xi)$ (which is odd in $\xi$) is zero, to obtain that (11.3.31) holds. Moving on to the case when $n$ is odd, consider the function
\[
F(t) :=
\begin{cases}
t^{2m+q}\,\log(t/i) & \text{if } t \in \mathbb{R} \setminus \{0\}, \\
0 & \text{if } t = 0.
\end{cases} \tag{11.3.32}
\]
It is not difficult to see that
\[
F \in C^\infty(\mathbb{R} \setminus \{0\}) \cap C^{2m+q-1}(\mathbb{R}), \qquad F^{(k)}(0) = 0 \ \text{ if } k = 1,\ldots,2m+q-1, \tag{11.3.33}
\]
and that for each $k \in \{1,\ldots,2m\}$ there exists a constant $C_{m,q,k}$ such that
\[
\Big(\frac{d}{dt}\Big)^{k} F(t) = \frac{(2m+q)!}{(2m+q-k)!}\,t^{2m+q-k}\,\log\Big(\frac{t}{i}\Big) + C_{m,q,k}\,t^{2m+q-k} \quad\text{in } \mathbb{R} \setminus \{0\}. \tag{11.3.34}
\]
In particular, by the chain rule, if $\beta \in \mathbb{N}_0^n$ is such that $|\beta| \le 2m+q-1$, then
\[
\partial_x^\beta\big[F(x\cdot\xi)\big] = F^{(|\beta|)}(x\cdot\xi)\,\xi^\beta, \qquad \forall\, x \in \mathbb{R}^n, \ \forall\, \xi \in S^{n-1}. \tag{11.3.35}
\]
Based on (11.3.35) and (11.3.34), for each $x \in \mathbb{R}^n \setminus \{0\}$ we may write
\[
\begin{aligned}
\Delta_x \int_{S^{n-1}} (x\cdot\xi)^{2m+1}\,\log\Big(\frac{x\cdot\xi}{i}\Big)\,P^{jk}(\xi)\,d\sigma(\xi)
&= \Delta_x \int_{S^{n-1}} F(x\cdot\xi)\,P^{jk}(\xi)\,d\sigma(\xi) = \int_{S^{n-1}} F''(x\cdot\xi)\,P^{jk}(\xi)\,d\sigma(\xi) \\
&= 2m(2m+1)\int_{S^{n-1}} (x\cdot\xi)^{2m-1}\,\log\Big(\frac{x\cdot\xi}{i}\Big)\,P^{jk}(\xi)\,d\sigma(\xi)
+ C_{m,q}\int_{S^{n-1}} (x\cdot\xi)^{2m-1}\,P^{jk}(\xi)\,d\sigma(\xi) \\
&= 2m(2m+1)\int_{S^{n-1}} (x\cdot\xi)^{2m-1}\,\Big[\ln|x\cdot\xi| - \frac{\pi i}{2}\,\mathrm{sgn}(x\cdot\xi)\Big]\,P^{jk}(\xi)\,d\sigma(\xi) \\
&= -\pi i\,m(2m+1)\int_{S^{n-1}} (x\cdot\xi)^{2m-1}\,\mathrm{sgn}(x\cdot\xi)\,P^{jk}(\xi)\,d\sigma(\xi). \tag{11.3.36}
\end{aligned}
\]
Hence, if one starts with the expression in the right-hand side of (11.3.31) and transfers $\Delta$ under the integral sign using (11.3.36), one arrives at the expression for $E_{jk}$ from (11.3.29). This completes the proof of Step I.

Step II. Proof of the fact that the entries of $E$ are $C^\infty$ and even in $\mathbb{R}^n \setminus \{0\}$. That the functions in (11.3.29) and (11.3.30) are even is immediate from their respective expressions. To show that the components of $E$ belong to $C^\infty(\mathbb{R}^n \setminus \{0\})$, for each $\ell \in \{1,\ldots,n\}$ let $e_\ell$ be the unit vector in $\mathbb{R}^n$ with one in the $\ell$th component, and consider the open set
\[
\mathcal{O}_\ell := \mathbb{R}^n \setminus \{\lambda e_\ell : \lambda \le 0\} \supseteq \big\{x = (x_1,\ldots,x_n) \in \mathbb{R}^n : x_\ell > 0\big\}. \tag{11.3.37}
\]
Then for each given $x \in \mathcal{O}_\ell$ define the linear map $R_{\ell,x} : \mathbb{R}^n \to \mathbb{R}^n$ by
\[
R_{\ell,x}(\xi) := \xi + \frac{\xi_\ell\,(|x|+2x_\ell) - \xi\cdot x}{|x|\,(|x|+x_\ell)}\,x - \frac{\xi_\ell\,|x| + \xi\cdot x}{|x|+x_\ell}\,e_\ell, \qquad \xi \in \mathbb{R}^n. \tag{11.3.38}
\]
By Exercise 4.125 (with $\zeta = \frac{x}{|x|}$ and $\eta := e_\ell$) we have that this is a unitary transformation and
\[
x \cdot R_{\ell,x}(\xi) = |x|\,\xi_\ell, \qquad \forall\, x \in \mathcal{O}_\ell, \quad \forall\, \xi \in \mathbb{R}^n. \tag{11.3.39}
\]
Also,
\[
R_{\ell,\lambda x} = R_{\ell,x} \quad\text{whenever } x \in \mathcal{O}_\ell \text{ and } \lambda > 0, \tag{11.3.40}
\]
and the joint application
\[
\mathcal{O}_\ell \times \mathbb{R}^n \ni (x,\xi) \mapsto R_{\ell,x}(\xi) \in \mathbb{R}^n \quad\text{is of class } C^\infty. \tag{11.3.41}
\]
Fix $j,k \in \{1,\ldots,M\}$. Using the invariance under unitary transformations of the operation of integration over $S^{n-1}$, for each $x \in \mathcal{O}_\ell$ we may then write
\[
\begin{aligned}
\int_{S^{n-1}} (x\cdot\xi)^{2m+q}\,\log\Big(\frac{x\cdot\xi}{i}\Big)\,P^{jk}(\xi)\,d\sigma(\xi)
&= \int_{S^{n-1}} \big(x\cdot R_{\ell,x}(\xi)\big)^{2m+q}\,\log\Big(\frac{x\cdot R_{\ell,x}(\xi)}{i}\Big)\,P^{jk}\big(R_{\ell,x}(\xi)\big)\,d\sigma(\xi) \\
&= \int_{S^{n-1}} \big(|x|\,\xi_\ell\big)^{2m+q}\,\log\Big(\frac{|x|\,\xi_\ell}{i}\Big)\,P^{jk}\big(R_{\ell,x}(\xi)\big)\,d\sigma(\xi) \\
&= |x|^{2m+q} \int_{S^{n-1}} \xi_\ell^{2m+q}\,\Big[\ln|x| + \log\Big(\frac{\xi_\ell}{i}\Big)\Big]\,P^{jk}\big(R_{\ell,x}(\xi)\big)\,d\sigma(\xi). \tag{11.3.42}
\end{aligned}
\]
From this representation, (11.3.41), and (11.3.31), it is clear that we have $E_{jk} \in C^\infty(\mathcal{O}_\ell)$. To complete the proof of Step II, it remains to observe that $\mathbb{R}^n \setminus \{0\} = \bigcup_{\ell=1}^n \mathcal{O}_\ell$.
Step III. Proof of part (3) in the statement of the theorem. To facilitate the discussion, fix $j,k \in \{1,\ldots,M\}$, introduce
\[
Q_{jk}(x) := \int_{S^{n-1}} (x\cdot\xi)^{2m+q}\,P^{jk}(\xi)\,d\sigma(\xi), \qquad \forall\, x \in \mathbb{R}^n, \tag{11.3.43}
\]
and define $\Psi_{jk} : \mathbb{R}^n \setminus \{0\} \to \mathbb{C}$ by setting
\[
\Psi_{jk}(x) := \int_{S^{n-1}} (x\cdot\xi)^{2m+q}\,\log\Big(\frac{x\cdot\xi}{i}\Big)\,P^{jk}(\xi)\,d\sigma(\xi) - \ln|x|\,Q_{jk}(x) \tag{11.3.44}
\]
for each $x \in \mathbb{R}^n \setminus \{0\}$. Observe that $Q_{jk}$ is a polynomial of degree $2m+q$ that vanishes when $n$ is odd [since in that case the integrand in (11.3.43) is odd]. Also, from our earlier analysis in (11.3.42), we know that the integral in the right-hand side of (11.3.44) depends in a $C^\infty$ fashion on the variable $x \in \mathbb{R}^n \setminus \{0\}$. These comments and (11.3.44) imply that $\Psi_{jk}$ is of class $C^\infty$ in $\mathbb{R}^n \setminus \{0\}$. Our next goal is to prove that
\[
\Psi_{jk} \ \text{ is positive homogeneous of degree } 2m+q \text{ in } \mathbb{R}^n \setminus \{0\}. \tag{11.3.45}
\]
To this end, first note that for each $\ell \in \{1,\ldots,n\}$ we may write
\[
Q_{jk}(x) = |x|^{2m+q} \int_{S^{n-1}} \xi_\ell^{2m+q}\,P^{jk}\big(R_{\ell,x}(\xi)\big)\,d\sigma(\xi), \qquad \forall\, x \in \mathcal{O}_\ell, \tag{11.3.46}
\]
by (11.3.43) and (11.3.39). Consequently, from (11.3.46), (11.3.42), and (11.3.44), we deduce that for each $\ell \in \{1,\ldots,n\}$,
\[
\Psi_{jk}(x) = |x|^{2m+q} \int_{S^{n-1}} \xi_\ell^{2m+q}\,\log\Big(\frac{\xi_\ell}{i}\Big)\,P^{jk}\big(R_{\ell,x}(\xi)\big)\,d\sigma(\xi), \qquad \forall\, x \in \mathcal{O}_\ell.
\]
In turn, this and (11.3.40) readily show that $\Psi_{jk}$ is positive homogeneous of degree $2m+q$ when restricted to the cone-like region $\mathcal{O}_\ell$. Since $\mathbb{R}^n \setminus \{0\} = \bigcup_{\ell=1}^n \mathcal{O}_\ell$, the claim in (11.3.45) follows.
Next, an induction argument shows that for each $\xi \in \mathbb{R}^n \setminus \{0\}$ and $k, N \in \mathbb{N}$ satisfying $N \ge 2k$, the following formulas hold:
\[
\Delta_x^k\big[(x\cdot\xi)^N\big] = \frac{N!}{(N-2k)!}\,(x\cdot\xi)^{N-2k}\,|\xi|^{2k} \tag{11.3.47}
\]
and
\[
\Delta_x^k\big[(x\cdot\xi)^N \ln|x|\big] = (\ln|x|)\,\Delta_x^k\big[(x\cdot\xi)^N\big] + \sum_{r=1}^k c(r,k,N,n)\,|x|^{-2r}\,(x\cdot\xi)^{N-2k+2r}\,|\xi|^{2(k-r)}. \tag{11.3.48}
\]
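The first nontrivial case of (11.3.48) can be made explicit and checked numerically. For $k = 1$ a direct computation (ours, not quoted from the text) gives the single coefficient $c(1,1,N,n) = 2N + n - 2$, i.e., for $|\xi| = 1$,
\[
\Delta_x\big[(x\cdot\xi)^N \ln|x|\big] = (\ln|x|)\,N(N-1)\,(x\cdot\xi)^{N-2} + (2N+n-2)\,|x|^{-2}\,(x\cdot\xi)^{N}.
\]

```python
import math

N, n = 4, 3
xi = (1/math.sqrt(3),) * 3          # unit vector in R^3
x0 = (0.9, -0.4, 0.6)

def f(y):
    dot = sum(a*b for a, b in zip(y, xi))
    return dot**N * math.log(math.sqrt(sum(c*c for c in y)))

def laplacian(g, x, h=1e-4):
    out = 0.0
    for i in range(n):
        xp = list(x); xm = list(x)
        xp[i] += h; xm[i] -= h
        out += (g(tuple(xp)) - 2*g(x) + g(tuple(xm))) / h**2
    return out

dot = sum(a*b for a, b in zip(x0, xi))
r2 = sum(c*c for c in x0)
lhs = laplacian(f, x0)
rhs = 0.5*math.log(r2) * N*(N-1) * dot**(N-2) + (2*N + n - 2) * dot**N / r2
assert abs(lhs - rhs) < 1e-5
print("(11.3.48) with k = 1 verified numerically")
```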
We now observe that if $P_{jk}$ is the $(j,k)$-entry in the matrix $\mathcal{P}$ from (11.3.23), then
\[
P_{jk}(x) = \frac{-1}{(2\pi i)^n\,(2m+q)!}\,\Delta_x^{(n+q)/2}\Big[\int_{S^{n-1}} (x\cdot\xi)^{2m+q}\,P^{jk}(\xi)\,d\sigma(\xi)\Big], \tag{11.3.49}
\]
as seen from identity (11.3.47) with $N := 2m+q$ and $k := (n+q)/2$. It is also immediate from (11.3.49) that $P_{jk}$ is identically zero when either $n$ is odd (due to parity considerations) or $n > 2m$ (due to degree considerations). In addition, formula (11.3.49) shows that $P_{jk}$ is a homogeneous polynomial of degree $2m-n$ whenever $n \le 2m$. Finally, we note that combining (11.3.43), (11.3.49), and (11.3.48), used with $N := 2m+q$ and $k := (n+q)/2$, yields
\[
\frac{-1}{(2\pi i)^n\,(2m+q)!}\,\Delta_x^{(n+q)/2}\big[(\ln|x|)\,Q_{jk}(x)\big] = (\ln|x|)\,P_{jk}(x) + \sum_{r=1}^{(n+q)/2} C_r\,|x|^{-2r}\int_{S^{n-1}} (x\cdot\xi)^{2m-n+2r}\,P^{jk}(\xi)\,d\sigma(\xi), \qquad \forall\, x \in \mathbb{R}^n \setminus \{0\}, \tag{11.3.50}
\]
for some constants $C_r$ depending only on $r$, $n$, $q$, and $m$. It is easy to see that the sum in the right-hand side of (11.3.50) gives rise to a function that belongs to $C^\infty(\mathbb{R}^n \setminus \{0\})$ and is positive homogeneous of degree $2m-n$. At this point, if we define
\[
\Phi_{jk}(x) := \frac{-1}{(2\pi i)^n\,(2m+q)!}\,\Delta_x^{(n+q)/2}\,\Psi_{jk}(x) + \sum_{r=1}^{(n+q)/2} C_r\,|x|^{-2r}\int_{S^{n-1}} (x\cdot\xi)^{2m-n+2r}\,P^{jk}(\xi)\,d\sigma(\xi), \qquad \forall\, x \in \mathbb{R}^n \setminus \{0\}, \tag{11.3.51}
\]
then (11.3.24) follows from (11.3.31), (11.3.44), (11.3.50), and (11.3.51). Moreover, from (11.3.45) and (11.3.51) it is clear that $\Phi_{jk}$ is positive homogeneous of degree $2m-n$, while the regularity of $\Psi_{jk}$ established earlier entails $\Phi_{jk} \in C^\infty(\mathbb{R}^n \setminus \{0\})$.

Step IV. Proof of the fact that $E \in M_{M\times M}\big(\mathcal{S}'(\mathbb{R}^n)\big)$. Fix $j,k \in \{1,\ldots,M\}$ and recall (11.3.24). By what we proved in Step III, $\Phi_{jk}$ is positive homogeneous of degree $2m-n$ in $\mathbb{R}^n \setminus \{0\}$, and since $m \ge 1$ we may invoke Exercise 4.51 to conclude that $\Phi_{jk} \in L^1_{\mathrm{loc}}(\mathbb{R}^n)$. Now the estimate from Exercise 4.51 and Example 4.3 give $\Phi_{jk} \in \mathcal{S}'(\mathbb{R}^n)$. In addition, by Exercise 4.104, it follows that $(\ln|x|)P_{jk} \in \mathcal{S}'(\mathbb{R}^n)$. In summary, we have $E_{jk} \in \mathcal{S}'(\mathbb{R}^n)$.

Step V. Proof of (11.3.22). Componentwise, (11.3.22) reads as follows: for each $j,k \in \{1,\ldots,M\}$,
\[
\sum_{r=1}^M L^x_{jr}\big[E_{rk}(x-y)\big] =
\begin{cases}
0 & \text{if } j \ne k, \\
\delta_y(x) & \text{if } j = k,
\end{cases}
\quad\text{in } \mathcal{S}'(\mathbb{R}^n), \tag{11.3.52}
\]
where the superscript $x$ denotes the fact that the operator $L_{jr}$ (defined as in (11.3.1)) is applied in the variable $x$. To justify this, fix some numbers $j,k,r \in \{1,\ldots,M\}$. By (11.3.34) and (11.3.35) we have
\[
L^x_{jr}\big[F(x\cdot\xi)\big] = \Big[\frac{(2m+q)!}{q!}\,(x\cdot\xi)^q\,\log\Big(\frac{x\cdot\xi}{i}\Big) + C_{m,q}\,(x\cdot\xi)^q\Big]\,L_{jr}(\xi) \tag{11.3.53}
\]
for every $x \in \mathbb{R}^n \setminus \{0\}$ and every $\xi \in S^{n-1}$ such that $x\cdot\xi \ne 0$. To continue, fix $\varphi \in C_0^\infty(\mathbb{R}^n)$ and write
\[
\begin{aligned}
\Big\langle L^x_{jr}\Big[\int_{S^{n-1}} F(x\cdot\xi)\,P^{rk}(\xi)\,d\sigma(\xi)\Big],\ \varphi(x)\Big\rangle
&= \Big\langle \int_{S^{n-1}} F(x\cdot\xi)\,P^{rk}(\xi)\,d\sigma(\xi),\ L_{jr}\varphi(x)\Big\rangle \\
&= \int_{\mathbb{R}^n} \int_{S^{n-1}} F(x\cdot\xi)\,P^{rk}(\xi)\,d\sigma(\xi)\,L_{jr}\varphi(x)\,dx \\
&= \int_{S^{n-1}} P^{rk}(\xi)\int_{\{x\in\mathbb{R}^n:\,x\cdot\xi>0\}} F(x\cdot\xi)\,L_{jr}\varphi(x)\,dx\,d\sigma(\xi) \\
&\quad + \int_{S^{n-1}} P^{rk}(\xi)\int_{\{x\in\mathbb{R}^n:\,x\cdot\xi<0\}} F(x\cdot\xi)\,L_{jr}\varphi(x)\,dx\,d\sigma(\xi). \tag{11.3.54}
\end{aligned}
\]
At this point, we integrate by parts repeatedly with respect to $x$ (according to formula (13.7.4)) in the innermost integrals in (11.3.54) until we transfer all the derivatives from $\varphi$ onto $F(x\cdot\xi)$. This is justified by (11.3.33), the fact that $\partial_x^\alpha\big[F(x\cdot\xi)\big] \in L^1_{\mathrm{loc}}(\mathbb{R}^n)$ whenever $\alpha \in \mathbb{N}_0^n$ has $|\alpha| \le 2m$ (cf. (11.3.35) and Exercise 2.108), and the fact that $\varphi$ has compact support. Note that in the process, the terms corresponding to boundary integrals (i.e., integrals over the set $\{x \in \mathbb{R}^n : x\cdot\xi = 0\} \cap \operatorname{supp}\varphi$) are zero thanks to the formulas in (11.3.33). Hence, summing up over $r$ in the resulting identity, then using (11.3.53) and the fact that $\sum_{r=1}^M L_{jr}(\xi)\,P^{rk}(\xi) = \delta_{jk}$ for every $\xi \in S^{n-1}$, we arrive at
\[
\begin{aligned}
\sum_{r=1}^M \Big\langle L^x_{jr}\Big[\int_{S^{n-1}} F(x\cdot\xi)\,P^{rk}(\xi)\,d\sigma(\xi)\Big],\ \varphi(x)\Big\rangle
&= \sum_{r=1}^M \int_{S^{n-1}} P^{rk}(\xi)\int_{\mathbb{R}^n} \big[L^x_{jr} F(x\cdot\xi)\big]\,\varphi(x)\,dx\,d\sigma(\xi) \\
&= \int_{\mathbb{R}^n} \int_{S^{n-1}} \Big[\frac{(2m+q)!}{q!}\,(x\cdot\xi)^q\,\log\Big(\frac{x\cdot\xi}{i}\Big) + C_{m,q}\,(x\cdot\xi)^q\Big]\,\delta_{jk}\,d\sigma(\xi)\,\varphi(x)\,dx. \tag{11.3.55}
\end{aligned}
\]
Consequently, since $\varphi \in C_0^\infty(\mathbb{R}^n)$ is arbitrary, a combination of (11.3.55), (11.3.32), and (11.3.31) yields
\[
\sum_{r=1}^M L_{jr} E_{rk} = \Delta^{(n+q)/2}\big[E_q + P_q\big]\,\delta_{jk} \quad\text{in } \mathcal{D}'(\mathbb{R}^n), \tag{11.3.56}
\]
where
\[
E_q(x) := \frac{-1}{(2\pi i)^n\,q!}\int_{S^{n-1}} (x\cdot\xi)^q\,\log\Big(\frac{x\cdot\xi}{i}\Big)\,d\sigma(\xi), \qquad \forall\, x \in \mathbb{R}^n \setminus \{0\}, \tag{11.3.57}
\]
and
\[
P_q(x) := \frac{-C_{m,q}}{(2\pi i)^n\,(2m+q)!}\int_{S^{n-1}} (x\cdot\xi)^q\,d\sigma(\xi), \qquad \forall\, x \in \mathbb{R}^n. \tag{11.3.58}
\]
Given our choice of $q$, from Lemma 7.31 we have that $E_q$ is a fundamental solution for $\Delta^{(n+q)/2}$, so $\Delta^{(n+q)/2} E_q = \delta$ in $\mathcal{D}'(\mathbb{R}^n)$. Also, $P_q$ is a homogeneous polynomial of degree $q$ in $\mathbb{R}^n$, thus $\Delta^{(n+q)/2} P_q = 0$ pointwise and in $\mathcal{D}'(\mathbb{R}^n)$. Therefore, (11.3.56) becomes
\[
\sum_{r=1}^M L_{jr} E_{rk} = \delta_{jk}\,\delta \quad\text{in } \mathcal{D}'(\mathbb{R}^n), \qquad \forall\, j,k \in \{1,\ldots,M\}. \tag{11.3.59}
\]
The statement in part (2) of the theorem follows from (11.3.59), (4.1.25), and the result from Step IV.

Step VI. Proof of claims in parts (4)–(7) in the statement of the theorem. The claim in part (4) follows from part (1), (11.3.24), the fact that each $P_{jk}$ is a homogeneous polynomial of degree at most $2m-n$, and the observation that when computing $\partial^\beta E_{jk}$ with $|\beta| \ge 2m-1$ at least one derivative falls on $\ln$. The estimates in (11.3.25) are a direct consequence of (11.3.24). This takes care of parts (4) and (5) in the statement of the theorem. Moving on to the proof of part (6), fix $j,k \in \{1,\ldots,M\}$ and recall (11.3.24). From the proof in Step IV, we have that $\Phi_{jk}$ and $(\ln|x|)P_{jk}$ are tempered distributions in $\mathbb{R}^n$. Since $\Phi_{jk} \in C^\infty(\mathbb{R}^n \setminus \{0\})$ and is positive homogeneous of degree $2m-n$ in $\mathbb{R}^n \setminus \{0\}$, Proposition 4.58 implies that its Fourier transform coincides with a $C^\infty$ function on $\mathbb{R}^n \setminus \{0\}$. To analyze the effect of taking the Fourier transform of $(\ln|x|)P_{jk}$, pick some $\theta \in C_0^\infty(\mathbb{R}^n)$ such that $\operatorname{supp}\theta \subset B(0,2)$ and $\theta \equiv 1$ on $B(0,1)$, and write
\[
(\ln|x|)P_{jk} = (1-\theta)(\ln|x|)P_{jk} + \theta(\ln|x|)P_{jk} \quad\text{in } \mathbb{R}^n \setminus \{0\}. \tag{11.3.60}
\]
The two terms in the right-hand side of (11.3.60) continue to belong to S′(R^n). From Example 2.9 and the fact that θ is compactly supported, we obtain that θ(ln |x|)P_jk belongs to L^1(R^n) and has compact support, thus F[θ(ln |x|)P_jk] ∈ C^∞(R^n) (recall Exercise 3.30). Regarding (1 − θ)(ln |x|)P_jk, note that this function is of class C^∞ in R^n. Also, for every β ∈ N_0^n, the function x^β ∂^α[(1 − θ)(ln |x|)P_jk] belongs to L^1(R^n) provided α ∈ N_0^n and |α| is sufficiently large. Since the Fourier transform of any L^1 function is continuous, this readily implies that for any r ∈ N there exists α ∈ N_0^n such that

F[∂^α((1 − θ)(ln |x|)P_jk)] = (iξ)^α F[(1 − θ)(ln |x|)P_jk] ∈ C^r(R^n).   (11.3.61)
11.3. FUNDAMENTAL SOLUTIONS FOR HIGHER-ORDER...
Thus, we necessarily have F[(1 − θ)(ln |x|)P_jk] ∈ C^∞(R^n \ {0}). The reasoning above shows that the Fourier transform of (the matrix-valued tempered distribution) E, when restricted to R^n \ {0}, is a function of class C^∞. Taking the Fourier transforms of both sides of (11.3.59) gives

(−1)^m Σ_{r=1}^{M} L_jr(ξ) Ê_rk(ξ) = δ_jk   in S′(R^n),   ∀ j, k ∈ {1, . . . , M}.   (11.3.62)

Restricting (11.3.62) to R^n \ {0} then readily implies (11.3.26) and finishes the proof of part (6). Finally, the identities in (11.3.27) can be seen directly from (11.3.29)–(11.3.30).

Step VII. Proof of the claim in part (8) in the statement of the theorem. Let U ∈ M_{M×M}(S′(R^n)) be an arbitrary fundamental solution of the system L in R^n and set Q := U − E. Then LQ = 0 in M_{M×M}(S′(R^n)) and, on the Fourier transform side, Q satisfies L(ξ)Q̂ = 0 in M_{M×M}(S′(R^n)). In light of (11.3.3), this implies supp Q̂ ⊆ {0}, hence Q is an M × M matrix with entries that are polynomials in R^n by Exercise 4.35 (applied to each entry).

Step VIII. Proof of the fact that the entries of E are real-analytic in R^n \ {0}. This is a direct consequence of the fact that (cf. (11.3.22))

L E = δ I_{M×M}   in M_{M×M}(S′(R^n)),   (11.3.63)

where the operator L is applied to each column of E, which implies that L E = 0 in M_{M×M}(D′(R^n \ {0})), and Theorem 11.4, established in the next section. The proof of Theorem 11.1 is now complete.

Theorem 11.1 describes all fundamental solutions, which are tempered distributions, for any homogeneous constant coefficient system with an invertible characteristic matrix, and elaborates on the properties of such fundamental solutions. In specific cases, it is possible to use formulas (11.3.19)–(11.3.20) to find an explicit expression for a specific fundamental solution. The case of the poly-harmonic operator is discussed in Exercise 11.21. Here we study in detail the case of the three-dimensional Lamé operator (cf. also Exercises 11.17 and 11.18). We remark that the argument in the proof of Proposition 11.2 is different from the one used to derive the fundamental solution for the Lamé operator in Sect. 11.1.

Proposition 11.2. Let λ, μ ∈ C be such that μ ≠ 0 and λ + 2μ ≠ 0. A fundamental solution for the Lamé operator (10.3.1) in R^3 is

E(x) = − (1/(8π)) ((λ + 3μ)/(μ(λ + 2μ))) (1/|x|) I_{3×3} − (1/(8π)) ((λ + μ)/(μ(λ + 2μ))) (x ⊗ x)/|x|^3,   x ∈ R^3 \ {0}.   (11.3.64)
Proof. We start by recalling formula (11.3.29), which gives an expression for the fundamental solution of a homogeneous differential operator for n odd. In our case, L = (L_jk)_{1≤j,k≤3} with L_jk = μ δ_jk Δ + (λ + μ) ∂_j ∂_k, thus L(ξ) = μ I_{3×3} + (λ + μ) ξ ⊗ ξ for ξ ∈ S^2. Observe that (x · ξ) sgn(x · ξ) = |x · ξ|. As such, for any x ∈ R^3 \ {0} we may compute

E(x) = − (1/(16π^2)) Δ_x ∫_{S^2} |x · ξ| (L(ξ))^{−1} dσ(ξ)   (11.3.65)

 = − (1/(16π^2)) Δ_x ∫_{S^2} |x · ξ| [ (1/μ) I_{3×3} − ((λ + μ)/(μ(λ + 2μ))) ξ ⊗ ξ ] dσ(ξ)

 = − (1/(16π^2)) Δ_x [ (2π/μ) |x| I_{3×3} − ((λ + μ)/(μ(λ + 2μ))) (π/2) ( |x| I_{3×3} + (x ⊗ x)/|x| ) ]

 = − (1/(16π)) [ ((4λ + 8μ − λ − μ)/(2μ(λ + 2μ))) Δ_x(|x| I_{3×3}) − ((λ + μ)/(2μ(λ + 2μ))) Δ_x((x ⊗ x)/|x|) ]

 = − (1/(16π)) [ ((3λ + 7μ)/(2μ(λ + 2μ))) (2/|x|) I_{3×3} − ((λ + μ)/(2μ(λ + 2μ))) ( (2/|x|) I_{3×3} − 4(x ⊗ x)/|x|^3 ) ]

 = − (1/(8π)) ((λ + 3μ)/(μ(λ + 2μ))) (1/|x|) I_{3×3} − (1/(8π)) ((λ + μ)/(μ(λ + 2μ))) (x ⊗ x)/|x|^3.

For the second equality in (11.3.65) we have used Proposition 10.14, for the third we have used Proposition 13.47 and Proposition 13.48, while for the fifth we have used (7.3.2) and the readily verified fact that if N is a nonzero integer then

Δ_x( (x ⊗ x)/|x|^N ) = (2/|x|^N) I_{n×n} + (N(N − n − 2)/|x|^{N+2}) x ⊗ x,   (11.3.66)

for each x ∈ R^n \ {0}.
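The algebra behind the proof above can be double-checked symbolically. The following sketch (assuming only sympy; `lam`, `mu` stand for λ, μ) verifies the identity (11.3.66) for the sample values n = 3, N = 1, and then confirms that applying the Lamé operator μΔ + (λ + μ)∇div to each column of E from (11.3.64) yields zero away from the origin.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
lam, mu = sp.symbols('lam mu', positive=True)
xs = [x1, x2, x3]
r = sp.sqrt(x1**2 + x2**2 + x3**2)
lap = lambda f: sum(sp.diff(f, v, 2) for v in xs)  # Laplacian in R^3

# Identity (11.3.66) with n = 3, N = 1:
#   Delta( x_i x_j / |x|^N ) = 2/|x|^N delta_ij + N(N-n-2)/|x|^{N+2} x_i x_j
n, N = 3, 1
for i in range(3):
    for j in range(3):
        lhs = lap(xs[i]*xs[j]/r**N)
        rhs = 2*sp.KroneckerDelta(i, j)/r**N + N*(N - n - 2)*xs[i]*xs[j]/r**(N + 2)
        assert sp.simplify(lhs - rhs) == 0

# Fundamental solution (11.3.64) for the Lame operator in R^3
c = 8*sp.pi*mu*(lam + 2*mu)
E = -sp.eye(3)*(lam + 3*mu)/(c*r) \
    - sp.Matrix(3, 3, lambda i, j: (lam + mu)*xs[i]*xs[j]/(c*r**3))

# Lame operator  mu*Delta + (lam + mu)*grad(div)  applied to a column vector
def lame(u):
    div = sum(sp.diff(u[i], xs[i]) for i in range(3))
    return [mu*lap(u[j]) + (lam + mu)*sp.diff(div, xs[j]) for j in range(3)]

for k in range(3):  # each column of E is annihilated pointwise for x != 0
    for entry in lame(list(E[:, k])):
        assert sp.simplify(entry) == 0
```

The passing assertions mirror the statement that (11.3.64) is a null-solution of the Lamé system pointwise in R^3 \ {0}; the delta at the origin is, of course, invisible to this pointwise check.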
11.4
Interior Estimates and Real-Analyticity for Null-Solutions of Systems
The aim in this section is to explore the extent to which results such as the integral representation formula and interior estimates, proved earlier in Sect. 6.3 in the scalar context, continue to hold for vector-valued functions that are null-solutions of a certain class of systems of differential operators.

Proposition 11.3. Fix n, m, M ∈ N with n ≥ 2, and assume that L is an M × M system in R^n of order 2m of the form

L = Σ_{|α|=2m} A_α ∂^α,   A_α = (a^{jk}_α)_{1≤j,k≤M} ∈ M_{M×M}(C),   (11.4.1)
with the property that det[L(ξ)] ≠ 0 for each ξ ∈ R^n \ {0}. In addition, suppose Ω ⊆ R^n is an open set and u = (u_1, . . . , u_M) ∈ (D′(Ω))^M is such that Lu = 0 in (D′(Ω))^M.

Then u ∈ (C^∞(Ω))^M and for each x_0 ∈ Ω, each r ∈ (0, dist(x_0, ∂Ω)), and each function ψ ∈ C_0^∞(B(x_0, r)) such that ψ ≡ 1 near B(x_0, r/2), we have

u_ℓ(x) = − Σ_{j,k=1}^{M} Σ_{|α|=2m} Σ_{γ<α} (−1)^{|γ|} a^{jk}_α (α!/(γ!(α−γ)!)) ∫_{B(x_0,r)\B(x_0,r/2)} (∂^γ E_jℓ)(x − y) [∂^{α−γ}ψ(y)] u_k(y) dy   (11.4.2)

for each ℓ ∈ {1, . . . , M} and each x ∈ B(x_0, r/2), where

E = (E_jk)_{1≤j,k≤M} ∈ M_{M×M}(S′(R^n)) ∩ M_{M×M}(C^∞(R^n \ {0}))   (11.4.3)

is the fundamental matrix for L^⊤ (the transpose of L) as given by Theorem 11.1. In particular, for every μ ∈ N_0^n,

∂^μ u_ℓ(x) = − Σ_{j,k=1}^{M} Σ_{|α|=2m} Σ_{γ<α} (−1)^{|γ|} a^{jk}_α (α!/(γ!(α−γ)!)) ∫_{B(x_0,r)\B(x_0,r/2)} (∂^{γ+μ} E_jℓ)(x − y) [∂^{α−γ}ψ(y)] u_k(y) dy   (11.4.4)
for each ℓ ∈ {1, . . . , M} and each x ∈ B(x_0, r/2). Also, if either n is odd, or n > 2m, or if |μ| > 2m − n, we have

|(∂^μ u)(x_0)| ≤ (C_μ / r^{|μ|}) ⨍_{B(x_0,r)} |u(y)| dy,   (11.4.5)

where C_μ ∈ (0, ∞) is independent of u, x_0, r, and Ω.

Proof. Let u = (u_ℓ)_{1≤ℓ≤M} and E = (E_jk)_{1≤j,k≤M} be as specified in the statement of the proposition. In particular, by Theorem 11.1, E ∈ M_{M×M}(S′(R^n)) ∩ M_{M×M}(C^∞(R^n \ {0})). Also, pick an arbitrary point x_0 ∈ Ω, a number r ∈ (0, dist(x_0, ∂Ω)), and fix some function ψ ∈ C_0^∞(B(x_0, r)) such that ψ ≡ 1 near B(x_0, r/2). Then, from Theorem 10.9 it follows that u ∈ (C^∞(Ω))^M, hence also ψu ∈ (C_0^∞(Ω))^M. Granted these, for each x ∈ B(x_0, r/2) we may then write (keeping in mind that E is a fundamental solution for L^⊤, the transpose of L)

u(x) = (ψu)(x) = ( (ψu_ℓ)(x) )_{1≤ℓ≤M} = ( Σ_{k=1}^{M} ⟨ δ_ℓk δ, (ψu_k)(x − ·) ⟩ )_{1≤ℓ≤M}
     = ( Σ_{k=1}^{M} ⟨ (L^⊤E)_kℓ, (ψu_k)(x − ·) ⟩ )_{1≤ℓ≤M}.   (11.4.6)

Note that

L^⊤ = Σ_{|α|=2m} (−1)^{|α|} A^⊤_α ∂^α = Σ_{|α|=2m} A^⊤_α ∂^α,   (11.4.7)

where A^⊤_α is the transpose of the matrix A_α, for each α. Thus, (11.4.6) implies

u(x) = ( Σ_{|α|=2m} Σ_{j,k=1}^{M} ⟨ (A^⊤_α)_kj ∂^α E_jℓ, (ψu_k)(x − ·) ⟩ )_{1≤ℓ≤M}
     = ( Σ_{|α|=2m} Σ_{j,k=1}^{M} a^{jk}_α ⟨ E_jℓ(x − ·), ∂^α(ψu_k) ⟩ )_{1≤ℓ≤M}
     = ( Σ_{|α|=2m} Σ_{j,k=1}^{M} a^{jk}_α ⟨ E_jℓ(x − ·), ψ ∂^α u_k + Σ_{0<β≤α} (α!/(β!(α−β)!)) ∂^β ψ ∂^{α−β} u_k ⟩ )_{1≤ℓ≤M}
     = ( Σ_{j,k=1}^{M} Σ_{|α|=2m} Σ_{0<β≤α} a^{jk}_α (α!/(β!(α−β)!)) ⟨ E_jℓ(x − ·), ∂^β ψ ∂^{α−β} u_k ⟩ )_{1≤ℓ≤M},   (11.4.8)

since for each j ∈ {1, . . . , M},

ψ Σ_{|α|=2m} Σ_{k=1}^{M} a^{jk}_α ∂^α u_k = ψ(Lu)_j = 0   in R^n.   (11.4.9)
Next, for each ℓ, j, k ∈ {1, . . . , M} and each α, β ∈ N_0^n with |α| = 2m and 0 < β ≤ α, we observe that

⟨ E_jℓ(x − ·), ∂^β ψ ∂^{α−β} u_k ⟩ = ∫_{B(x_0,r)\B(x_0,r/2)} E_jℓ(x − y) ∂^β ψ(y) ∂^{α−β} u_k(y) dy

 = (−1)^{|β|} ∫_{B(x_0,r)\B(x_0,r/2)} ∂_y^{α−β}[ E_jℓ(x − y) ∂^β ψ(y) ] u_k(y) dy

 = Σ_{γ≤α−β} (−1)^{|β|+|γ|} ((α−β)!/(γ!(α−β−γ)!)) ∫_{B(x_0,r)\B(x_0,r/2)} (∂^γ E_jℓ)(x − y) ∂^{α−γ} ψ(y) u_k(y) dy,   (11.4.10)

and that, whenever γ < α,

Σ_{0<β≤α−γ} (−1)^{|β|} ((α−γ)!/(β!(α−β−γ)!)) = Σ_{β≤α−γ} (−1)^{|β|} ((α−γ)!/(β!(α−β−γ)!)) − 1 = 0 − 1 = −1.   (11.4.11)
At this stage, (11.4.2) follows from (11.4.8), (11.4.10), and (11.4.11). In turn, (11.4.2) readily implies (11.4.4). Finally, (11.4.5) is a consequence of (11.4.4), the assumptions on n and μ, and (11.3.25), by choosing ψ as in (6.3.11)–(6.3.12).

An L^∞-version of the interior estimate (11.4.5), valid in all space dimensions, is established in the next theorem. In particular, this version allows us to prove the real-analyticity of null-solutions of the systems considered here.

Theorem 11.4. Let n, m, M ∈ N and suppose L is an M × M constant (complex) coefficient system in R^n, homogeneous of order 2m, and with the property that det[L(ξ)] ≠ 0 for each ξ ∈ R^n \ {0}. Assume also that Ω ⊆ R^n is an open set and u ∈ (D′(Ω))^M is such that Lu = 0 in (D′(Ω))^M.

Then u ∈ (C^∞(Ω))^M and there exists a constant C ∈ (0, ∞) such that

max_{y∈B(x,λr)} |∂^α u(y)| ≤ (C^{|α|} (1 − λ)^{−|α|} |α|! / r^{|α|}) max_{y∈B(x,r)} |u(y)|,   ∀ α ∈ N_0^n,   (11.4.12)

for each x ∈ Ω, each r ∈ (0, dist(x, ∂Ω)), and each λ ∈ (0, 1). In particular, each component of u is real-analytic in Ω.

Proof. Theorem 10.9 gives that u ∈ (C^∞(Ω))^M. As far as (11.4.12) is concerned, consider first the case when either n is odd, or n > 2m. In this scenario, (11.4.5) gives that there exists some C ∈ (0, ∞), independent of u and Ω, such that

|∂_j u(x)| ≤ (C/r) ⨍_{B(x,r)} |u(y)| dy,   ∀ j ∈ {1, . . . , n},   (11.4.13)

whenever x ∈ Ω and r ∈ (0, dist(x, ∂Ω)). A quick inspection reveals that the proof of Lemma 6.21 carries through in the case when the functions involved are vector-valued. When applied with

A := { u ∈ (C^∞(Ω))^M : Lu = 0 in Ω },   (11.4.14)

this lemma gives [thanks to (11.4.13)] that for every x ∈ Ω, every r ∈ (0, dist(x, ∂Ω)), every k ∈ N, and every λ ∈ (0, 1), we have (with C as in (11.4.13))

max_{y∈B(x,λr)} |∂^α u(y)| ≤ (C^k (1−λ)^{−k} e^{k−1} k! / r^k) max_{y∈B(x,r)} |u(y)|,   (11.4.15)

for every multi-index α ∈ N_0^n with |α| = k.
This proves (11.4.12) under the current assumptions on n. Fix now an arbitrary n ∈ N and pick some k ∈ N. With L as in (11.4.1) consider the constant coefficient, homogeneous, M × M system in R^n, of order 4m, given by

L♯(∂) := L^* L = Σ_{|α|=|β|=2m} A^*_α A_β ∂^{α+β},   (11.4.16)

and note that for each ξ ∈ R^n we have

L♯(ξ) = Σ_{|α|=|β|=2m} ξ^{α+β} A^*_α A_β = ( Σ_{|α|=2m} ξ^α A_α )^* ( Σ_{|β|=2m} ξ^β A_β ) = L(ξ)^* L(ξ).   (11.4.17)

In particular,

L♯(ξ)η · η = |L(ξ)η|^2,   ∀ ξ ∈ R^n,   ∀ η ∈ C^M.   (11.4.18)

Finally, define the constant coefficient, homogeneous, M × M system in R^{n+k}, of order 4m,

L̃(∂, ∂_{n+1}, . . . , ∂_{n+k}) := L♯(∂) + ( ∂_{n+1}^{4m} + · · · + ∂_{n+k}^{4m} ) I_{M×M}.   (11.4.19)

We claim that

det L̃(ξ, ξ_{n+1}, . . . , ξ_{n+k}) ≠ 0,   ∀ (ξ, ξ_{n+1}, . . . , ξ_{n+k}) ∈ R^{n+k} \ {0}.   (11.4.20)
To see that this is the case, fix some (ξ, ξ_{n+1}, . . . , ξ_{n+k}) ∈ R^{n+k} \ {0} and note that it suffices to show that the M × M complex matrix L̃(ξ, ξ_{n+1}, . . . , ξ_{n+k}) acts injectively on C^M. With this in mind, let η ∈ C^M be such that L̃(ξ, ξ_{n+1}, . . . , ξ_{n+k})η = 0 ∈ C^M. Then

0 = L̃(ξ, ξ_{n+1}, . . . , ξ_{n+k})η · η = |L(ξ)η|^2 + ( ξ_{n+1}^{4m} + · · · + ξ_{n+k}^{4m} ) |η|^2,   (11.4.21)

which forces

L(ξ)η = 0   and   ( ξ_{n+1}^{4m} + · · · + ξ_{n+k}^{4m} ) |η|^2 = 0.   (11.4.22)

If ξ ∈ R^n \ {0} then the first condition in (11.4.22) implies η = 0, given the assumptions on L. If ξ = 0 then necessarily there exists j ∈ {1, . . . , k} such that ξ_{n+j} ≠ 0, and the second condition in (11.4.22) now implies η = 0. Thus, η = 0 in all alternatives, which finishes the proof of (11.4.20).

To proceed, assume u ∈ (C^∞(Ω))^M satisfies Lu = 0 in Ω, and define

ũ(x, x_{n+1}, . . . , x_{n+k}) := u(x),   ∀ (x, x_{n+1}, . . . , x_{n+k}) ∈ Ω̃ := Ω × R^k.   (11.4.23)

Then Ω̃ is an open subset of R^{n+k} and ũ ∈ (C^∞(Ω̃))^M. Moreover, as is apparent from (11.4.16), (11.4.19), and (11.4.23), we have

L̃(∂, ∂_{n+1}, . . . , ∂_{n+k}) ũ = 0   in Ω̃.   (11.4.24)
Assume now that n ∈ N is an arbitrary given number and pick k ∈ N such that n + k is either odd or n + k > 4m. Then from the first part in the proof, applied to L̃ and ũ, we know that there exists a constant C ∈ (0, ∞) such that

max_{B(x̃,λr)} |∂^α ũ| ≤ (C^{|α|} (1 − λ)^{−|α|} |α|! / r^{|α|}) max_{B(x̃,r)} |ũ|,   ∀ α ∈ N_0^n,   (11.4.25)

for each x̃ ∈ Ω̃, each r ∈ (0, dist(x̃, ∂Ω̃)), and each λ ∈ (0, 1), where the balls appearing in (11.4.25) are considered in R^{n+k}. Now, given x ∈ Ω and r ∈ (0, dist(x, ∂Ω)), specializing (11.4.25) to the case when x̃ := (x, 0, . . . , 0) yields (11.4.12) for the current n. Thus, (11.4.12) holds for any n. Finally, the last claim in the statement of the theorem is a consequence of (11.4.12) and Lemma 6.24.
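The dimension-raising device used in this proof can be illustrated numerically. The sketch below (assuming numpy; the moduli λ = 1.3, μ = 0.7 are arbitrary sample values) takes the Lamé symbol L(ξ) = μ|ξ|^2 I + (λ + μ) ξ ⊗ ξ in R^3 (so 2m = 2 and the augmented system has order 4m = 4 with k = 1), forms L̃(ξ, ξ_4) = L(ξ)^* L(ξ) + ξ_4^4 I, and checks the quadratic-form identity (11.4.21) together with the invertibility claim (11.4.20) at random nonzero frequencies.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, mu = 1.3, 0.7  # sample Lame moduli

def L(xi):
    """Lame symbol in R^3: mu |xi|^2 I + (lam + mu) xi (x) xi."""
    return mu*np.dot(xi, xi)*np.eye(3) + (lam + mu)*np.outer(xi, xi)

for _ in range(100):
    zeta = rng.normal(size=4)          # (xi, xi4) in R^{3+1}, nonzero a.s.
    xi, xi4 = zeta[:3], zeta[3]
    Lt = L(xi).T @ L(xi) + xi4**4*np.eye(3)   # augmented symbol, cf. (11.4.19)
    eta = rng.normal(size=3) + 1j*rng.normal(size=3)
    # identity (11.4.21):  Lt eta . eta = |L(xi) eta|^2 + xi4^4 |eta|^2
    lhs = np.vdot(eta, Lt @ eta)
    rhs = np.linalg.norm(L(xi) @ eta)**2 + xi4**4*np.linalg.norm(eta)**2
    assert np.isclose(lhs.real, rhs) and np.isclose(lhs.imag, 0.0)
    # claim (11.4.20): the augmented symbol is invertible off the origin
    assert abs(np.linalg.det(Lt)) > 0
```

This is only a spot check at random frequencies, not a proof, but it makes visible why adding the even powers ξ_{n+j}^{4m} preserves ellipticity while freeing one to adjust the parity or size of the ambient dimension.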
11.5
Reverse H¨ older Estimates for Null-Solutions of Systems
The principal result in this section is the version of interior estimates for null-solutions of certain higher-order systems stated in Theorem 11.9. In particular, this contains the fact that such null-solutions satisfy reverse Hölder estimates (a topic worthy of investigation in its own right). We begin by giving the following definition.

Definition 11.5. A continuous (complex) vector-valued function u defined in Ω is said to be p-subaveraging for some p ∈ (0, ∞) if there exists a finite constant C > 0 with the property that

|u(x)| ≤ C ( ⨍_{B(x,r)} |u(y)|^p dy )^{1/p}   (11.5.1)

for every x ∈ Ω and every r ∈ (0, dist(x, ∂Ω)).

The class of p-subaveraging functions exhibits a number of self-improvement properties, the first of which is discussed below.

Lemma 11.6. Assume that u is a continuous (complex) vector-valued function defined in Ω, and suppose that 0 < p < ∞. Then u is p-subaveraging if and only if there exists a finite constant C > 0 such that

sup_{z∈B(x,λr)} |u(z)| ≤ C (1 − λ)^{−n/p} ( ⨍_{B(x,r)} |u(y)|^p dy )^{1/p}   (11.5.2)

for every x ∈ Ω, every r ∈ (0, dist(x, ∂Ω)), and every λ ∈ (0, 1).
Proof. The fact that (11.5.2) implies (11.5.1) is obvious. Conversely, suppose that u is p-subaveraging. Pick x ∈ Ω, r ∈ (0, dist(x, ∂Ω)), and z ∈ B(x, λr). Then, if R := (1 − λ)r, it follows that z ∈ Ω and 0 < R < dist(z, ∂Ω). Furthermore, B(z, R) ⊆ B(x, r). Consequently, with C as in (11.5.1),

|u(z)| ≤ C ( ⨍_{B(z,R)} |u(y)|^p dy )^{1/p} = C ( ((1 − λ)^{−n}/|B(z, r)|) ∫_{B(z,R)} |u(y)|^p dy )^{1/p}
 ≤ C (1 − λ)^{−n/p} ( ⨍_{B(x,r)} |u(y)|^p dy )^{1/p},   (11.5.3)
which readily implies (11.5.2) by taking the supremum over z ∈ B(x, λr).

The second self-improvement within the class of p-subaveraging functions is the fact that the value of the parameter p is immaterial.

Lemma 11.7. If there exists p_0 > 0 such that u is a p_0-subaveraging function, then u is p-subaveraging for every p ∈ (0, ∞).

In light of Lemma 11.7, it is unequivocal to refer to a function u as simply being subaveraging if it is p-subaveraging for some p ∈ (0, ∞). The optimal constant which can be used in (11.5.1) is referred to as the p-subaveraging constant of the function u.

Proof of Lemma 11.7. The proof is based on ideas used in the work of Hardy and Littlewood [26] (cf. also [14, Lemma 2, pp. 172–173]). The case p > p_0 can be handled directly utilizing Hölder's inequality with exponent q := p/p_0 > 1. Henceforth we shall focus on the case when p < p_0. Replacing u by a suitable power of |u|, there is no loss of generality in assuming that, in fact, p_0 = 1 and p < 1. We may also assume (by rescaling and making a translation) that B(0, 1) ⊆ Ω, x = 0, and ∫_{B(0,1)} |u(y)|^p dy = 1. The goal is then to prove the estimate |u(0)| ≤ C with a finite constant C > 0 independent of u. Continuing our series of reductions, we may assume that |u(0)| > 1. Next, for each r ∈ (0, 1] and q ∈ (0, ∞], introduce

m_q(r) := ( ∫_{B(0,r)} |u(y)|^q dy )^{1/q}   if q < ∞,   and   m_∞(r) := sup_{B(0,r)} |u(y)|.   (11.5.4)
Observe that for each r ∈ (0, 1),

m_1(r) ≤ m_p(r)^p m_∞(r)^{1−p} ≤ m_∞(r)^{1−p},   ∀ p ∈ (0, 1),   (11.5.5)

where the last inequality holds by virtue of the trivial estimate m_p(r) ≤ m_p(1), valid for every r ∈ (0, 1), and the assumption m_p(1) = 1. On the other hand, for every x ∈ Ω and every r ∈ (0, dist(x, ∂Ω)),

|u(x)| ≤ (C/r^n) ∫_{B(x,r)} |u(y)| dy,   (11.5.6)
and, consequently,

|u(z)| ≤ (C/(r − ρ)^n) ∫_{B(z,r−ρ)} |u(y)| dy ≤ (C/(r − ρ)^n) ∫_{B(0,r)} |u(y)| dy   (11.5.7)

whenever |z| = ρ ∈ (0, r). Then for any z* ∈ B(0, ρ) such that |z*| = ρ* < ρ we obtain

|u(z*)| ≤ (C/(r − ρ*)^n) ∫_{B(0,r)} |u(y)| dy ≤ (C/(r − ρ)^n) ∫_{B(0,r)} |u(y)| dy,   (11.5.8)

which, in concert with (11.5.5), yields the estimate

m_∞(ρ) ≤ (C/(r − ρ)^n) m_∞(r)^{1−p}.   (11.5.9)

To continue, set ρ := r^a for some a ∈ (1, ∞) to be specified momentarily. Then (11.5.9) entails

∫_{1/2}^{1} ln m_∞(r^a) (dr/r) ≤ C + n ∫_{1/2}^{1} ln(1/(r − r^a)) (dr/r) + (1 − p) ∫_{1/2}^{1} ln m_∞(r) (dr/r),   (11.5.10)

and for the first integral above the change of variables t := r^a gives

∫_{1/2}^{1} ln m_∞(r^a) (dr/r) = (1/a) ∫_{(1/2)^a}^{1} ln m_∞(t) (dt/t).   (11.5.11)
Since our assumption |u(0)| > 1 implies m_∞(t) ≥ 1, the right-hand side of (11.5.11) is bounded from below by

(1/a) ∫_{1/2}^{1} ln m_∞(r) (dr/r).   (11.5.12)

Therefore, (11.5.10)–(11.5.12) imply

(1/a − 1 + p) ∫_{1/2}^{1} ln m_∞(r) (dr/r) ≤ C + C ∫_{1/2}^{1} ln(1/(r − r^a)) (dr/r) ≤ C < ∞.   (11.5.13)

Choose now a > 1 such that 1/a − 1 + p > 0. Then (11.5.13) forces

∫_{1/2}^{1} ln m_∞(r) dr ≤ C,   (11.5.14)

and hence ln m_∞(1/2) ≤ C for some finite constant C > 0 independent of the initial function u. In concert with the inequality |u(0)| ≤ m_∞(1/2), this finishes the proof of the lemma.
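The elementary inequality (11.5.5) driving the proof, m_1(r) ≤ m_p(r)^p m_∞(r)^{1−p}, is just the splitting |u| = |u|^p |u|^{1−p} followed by pulling the supremum out of the integral. A quick discrete sanity check of this mechanism (assuming numpy; finite sums play the role of the integrals):

```python
import numpy as np

rng = np.random.default_rng(1)
for p in (0.1, 0.5, 0.9):
    for _ in range(50):
        u = rng.uniform(0, 10, size=1000)   # samples of |u| on a ball
        m1 = u.sum()                        # discrete analogue of m_1(r)
        mp = (u**p).sum()**(1/p)            # discrete analogue of m_p(r)
        minf = u.max()                      # discrete analogue of m_inf(r)
        # splitting u = u^p u^{1-p} and bounding u^{1-p} by its max:
        assert m1 <= mp**p * minf**(1 - p) + 1e-9
```

The small additive tolerance only guards against floating-point rounding; the inequality itself is exact for any nonnegative data.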
There are certain connections between the subaveraging property and reverse Hölder estimates, brought to light by our next two results.

Lemma 11.8. Let u be a subaveraging function. Then for every p, q ∈ (0, ∞) and λ ∈ (0, 1) the following reverse Hölder estimate holds:

( ⨍_{B(x,λr)} |u(y)|^q dy )^{1/q} ≤ C ( ⨍_{B(x,r)} |u(y)|^p dy )^{1/p},   (11.5.15)

for x ∈ Ω and 0 < r < dist(x, ∂Ω), where C > 0 is a finite constant depending only on p, q, λ, n and the p-subaveraging constant of u.

Proof. Given x ∈ Ω and 0 < r < dist(x, ∂Ω), we write

( ⨍_{B(x,λr)} |u(y)|^q dy )^{1/q} ≤ sup_{z∈B(x,λr)} |u(z)| ≤ C ( ⨍_{B(x,r)} |u(y)|^p dy )^{1/p},   (11.5.16)

by Lemma 11.6.

The main result in this section is the combination of interior estimates and reverse Hölder estimates contained in the following theorem.

Theorem 11.9. Let n, m, M ∈ N and suppose L is an M × M constant (complex) coefficient system in R^n, homogeneous of order 2m, and with the property that det[L(ξ)] ≠ 0 for each ξ ∈ R^n \ {0}. Assume also that Ω ⊆ R^n is an open set and u ∈ (D′(Ω))^M is such that Lu = 0 in (D′(Ω))^M.

Then u ∈ (C^∞(Ω))^M and u is subaveraging. Moreover, there exists a constant C = C(L, n) ∈ (0, ∞) such that given p ∈ (0, ∞) there exists c = c(L, n, p) ∈ (0, ∞) with the property that

max_{y∈B(x,λr)} |∂^α u(y)| ≤ c (C^{|α|} |α|! / r^{|α|}) (1 − λ)^{−|α|−n/p} ( ⨍_{B(x,r)} |u(y)|^p dy )^{1/p},   (11.5.17)

whenever x ∈ Ω, 0 < r < dist(x, ∂Ω), λ ∈ (0, 1), and α ∈ N_0^n.

Proof. As in the past, Theorem 10.9 gives that u ∈ (C^∞(Ω))^M. By working with the system L̃ defined as in (11.4.19) and the function ũ defined as in (11.4.23), the same type of reasoning as in the proof of Theorem 11.4 shows that estimate (11.4.5) actually holds without any restrictions on the dimension n. Keeping this in mind, the version of this estimate corresponding to μ = (0, . . . , 0) may be interpreted as saying that u is 1-subaveraging. Hence, by Lemma 11.7 and the ensuing remark, it follows that u is subaveraging. Based on this and (11.4.12), for each x ∈ Ω, each r ∈ (0, dist(x, ∂Ω)), each θ, η ∈ (0, 1) and each α ∈ N_0^n we may write

max_{y∈B(x,θηr)} |∂^α u(y)| ≤ (C^{|α|} (1 − θ)^{−|α|} |α|! / (ηr)^{|α|}) max_{y∈B(x,ηr)} |u(y)|   (11.5.18)
 ≤ c (1 − η)^{−n/p} (C^{|α|} (1 − θ)^{−|α|} |α|! / (ηr)^{|α|}) ( ⨍_{B(x,r)} |u(y)|^p dy )^{1/p},

where c = c(L, n, p) ∈ (0, ∞). Next, given any λ ∈ (0, 1), specialize (11.5.18) to the case when θ := 2λ/(λ + 1) and η := (λ + 1)/2. Note that θ, η ∈ (0, 1), θη = λ, 1 − η = (1 − λ)/2, and 1 − θ = (1 − λ)/(1 + λ). Based on these, we may transform (11.5.18) into

max_{y∈B(x,λr)} |∂^α u(y)| ≤ c 2^{n/p} (1 − λ)^{−|α|−n/p} ((2C)^{|α|} |α|! / r^{|α|}) ( ⨍_{B(x,r)} |u(y)|^p dy )^{1/p},   (11.5.19)

from which (11.5.17) follows after adjusting notation.
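The specific choice θ = 2λ/(λ + 1), η = (λ + 1)/2 made at the end of the proof can be verified in a few lines of computer algebra (assuming sympy):

```python
import sympy as sp

lmb = sp.symbols('lmb', positive=True)  # stands for lambda in (0, 1)
theta = 2*lmb/(lmb + 1)
eta = (lmb + 1)/2
assert sp.simplify(theta*eta - lmb) == 0                   # theta*eta = lambda
assert sp.simplify(1 - eta - (1 - lmb)/2) == 0             # 1 - eta = (1-lambda)/2
assert sp.simplify(1 - theta - (1 - lmb)/(1 + lmb)) == 0   # 1 - theta = (1-lambda)/(1+lambda)
```

These three identities are exactly what converts the two-parameter bound (11.5.18) into the one-parameter bound (11.5.19).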
11.6
Layer Potentials and Jump Relations for Systems
The goal of this section is to introduce and study certain types of integral operators, typically called layer potentials, that are particularly useful in the treatment of boundary value problems for systems. This is also an excellent opportunity to illustrate how a good understanding of the nature of fundamental solutions, coupled with a versatile command of distribution theory, yields powerful tools in the study of partial differential equations. To set the stage, we introduce some notation and make some background assumptions. Throughout this section we let

L = Σ_{j,k=1}^{n} A_jk ∂_j ∂_k,   A_jk = (a^{αβ}_jk)_{1≤α,β≤M} ∈ M_{M×M}(C),   (11.6.1)

be a second-order, homogeneous, complex constant coefficient, M × M system, where M ∈ N, with the property that its characteristic matrix L(ξ), defined for each vector ξ = (ξ_1, . . . , ξ_n) ∈ R^n by the formula

L(ξ) := Σ_{j,k=1}^{n} ξ_j ξ_k A_jk = ( Σ_{j,k=1}^{n} a^{αβ}_jk ξ_j ξ_k )_{1≤α,β≤M} ∈ M_{M×M}(C),   (11.6.2)

satisfies the ellipticity condition

det L(ξ) ≠ 0,   ∀ ξ ∈ R^n \ {0}.   (11.6.3)

In particular, since L(e_j) = A_jj for each j ∈ {1, . . . , n}, the ellipticity condition (11.6.3) entails

A_jj ∈ M_{M×M}(C) is an invertible matrix for each j ∈ {1, . . . , n}.   (11.6.4)

Thanks to (11.6.3), Theorem 11.1 is applicable (with m = 1) and we denote by

E = (E_αβ)_{1≤α,β≤M} ∈ M_{M×M}(S′(R^n))   (11.6.5)
the matrix-valued fundamental solution of L in R^n whose entries are constructed according to the recipe devised there. Thus, among other things, for each α, β ∈ {1, . . . , M} we have

E_αβ|_{R^n \ {0}} is even and belongs to C^∞(R^n \ {0}),   (11.6.6)

∇E_αβ is positive homogeneous of degree 1 − n.   (11.6.7)
First, by specializing Theorem 4.76 to the case when Φ is a first-order partial derivative of E, we obtain the following basic result.

Theorem 11.10. Assume that the system L is as in (11.6.1)–(11.6.3) and suppose that E is the fundamental solution for L in R^n constructed in Theorem 11.1. Then for each j ∈ {1, . . . , n} one has

lim_{ε→0±} (∂_j E)(x′, ε) = ± (1/2) δ_jn δ(x′) (A_nn)^{−1} + P.V. (∂_j E)(x′, 0)   (11.6.8)

in M_{M×M}(S′(R^{n−1})).

Proof. Fix j ∈ {1, . . . , n}. Then Φ := ∂_j E is C^∞, odd, and positive homogeneous of degree 1 − n in R^n \ {0} (cf. (11.6.6)–(11.6.7)). Moreover, by virtue of (11.3.26) used with m = 1, we have

Φ̂(ξ) = (∂_j E)^∧(ξ) = i ξ_j Ê(ξ) = −i ξ_j (L(ξ))^{−1}   in M_{M×M}(S′(R^n)).   (11.6.9)

This further implies that

Φ̂(0′, 1) = Φ̂(e_n) = −i δ_jn (L(e_n))^{−1} = −i δ_jn (A_nn)^{−1}.   (11.6.10)

With this in hand, (11.6.8) follows with the help of (4.7.1).

In the next result, we introduce the so-called single-layer potential operator associated with the system L and study the boundary behavior of its first-order derivatives. Here and elsewhere, the integral of a vector-valued function is applied to each individual component.

Theorem 11.11. Suppose that the system L is as in (11.6.1)–(11.6.3) and assume that E is the fundamental solution for L in R^n constructed in Theorem 11.1. Given any C^M-valued function ϕ ∈ S(R^{n−1}), define the C^M-valued function

(Sϕ)(x) := ∫_{R^{n−1}} E(x′ − y′, t) ϕ(y′) dy′,   ∀ x = (x′, t) ∈ R^n with t ≠ 0.   (11.6.11)
Then for any C^M-valued function ϕ ∈ S(R^{n−1}),

Sϕ ∈ C^∞(R^n_±),   L(Sϕ) = 0 pointwise in R^n_±,   (11.6.12)
and for each j ∈ {1, . . . , n} and each x′ ∈ R^{n−1} one has

lim_{t→0±} ∂_j(Sϕ)(x′, t) = ± (1/2) δ_jn (A_nn)^{−1} ϕ(x′) + lim_{ε→0+} ∫_{y′∈R^{n−1}, |x′−y′|>ε} (∂_j E)(x′ − y′, 0) ϕ(y′) dy′.   (11.6.13)
Proof. Let ϕ ∈ S(R^{n−1}) be an arbitrary C^M-valued function. That Sϕ is C^∞ in R^n \ {x_n = 0} is clear from (11.6.11), (11.6.6), and estimates for the derivatives of E (see (11.3.25)). In addition, for each x = (x′, x_n) ∈ R^n with x_n ≠ 0,

L(Sϕ)(x) = ∫_{R^{n−1}} (LE)(x′ − y′, x_n) ϕ(y′) dy′ = 0,   (11.6.14)

since LE = δ I_{M×M} in D′(R^n), which (upon recalling that E ∈ C^∞(R^n \ {0})) forces

LE = 0 pointwise in R^n_±.   (11.6.15)

Finally, (11.6.13) follows from Corollary 4.78 applied to the same function Φ used in the proof of Theorem 11.10.

Remark 11.12. The integral operator S from (11.6.11) is called the single-layer potential (or single layer operator) associated to L in the upper and lower half-spaces in R^n, while (11.6.13) may be naturally regarded as the jump formula for the gradient of the single-layer potential in this setting.

Our next result brings to the forefront another basic integral operator, typically referred to as the double layer potential.

Theorem 11.13. Let the system L be as in (11.6.1)–(11.6.3) and assume that E is the fundamental solution for L in R^n constructed in Theorem 11.1. Given any C^M-valued function ϕ ∈ S(R^{n−1}), define for each x = (x′, t) ∈ R^n with t ≠ 0

(Dϕ)(x) := ∫_{R^{n−1}} Σ_{k=1}^{n} (∂_k E)(x′ − y′, t) A_kn ϕ(y′) dy′.   (11.6.16)
Then, for any C^M-valued function ϕ ∈ S(R^{n−1}),

Dϕ ∈ C^∞(R^n_±),   L(Dϕ) = 0 pointwise in R^n_±,   (11.6.17)
and for every x′ ∈ R^{n−1} one has

lim_{t→0±} (Dϕ)(x′, t) = ± (1/2) ϕ(x′) + lim_{ε→0+} ∫_{y′∈R^{n−1}, |x′−y′|>ε} Σ_{k=1}^{n} (∂_k E)(x′ − y′, 0) A_kn ϕ(y′) dy′.   (11.6.18)
Proof. Fix some C^M-valued function ϕ ∈ S(R^{n−1}). Then the fact that Dϕ is C^∞ in R^n \ {x_n = 0} is seen from (11.6.16), (11.6.6), and estimates for the derivatives of E (cf. (11.3.25)). Moreover, for each x = (x′, x_n) ∈ R^n with x_n ≠ 0,

L(Dϕ)(x) = ∫_{R^{n−1}} Σ_{k=1}^{n} ∂_{x_k}[(LE)(x′ − y′, x_n)] A_kn ϕ(y′) dy′ = 0,   (11.6.19)
where the last equality uses (11.6.15). At this stage, there remains to prove (11.6.18). In this regard, if x′ ∈ R^{n−1} has been fixed, making use of Theorem 11.11 we may write

lim_{t→0±} (Dϕ)(x′, t) = Σ_{k=1}^{n} lim_{t→0±} ∫_{R^{n−1}} (∂_k E)(x′ − y′, t) A_kn ϕ(y′) dy′

 = Σ_{k=1}^{n} lim_{t→0±} ∂_k(S(A_kn ϕ))(x′, t)

 = ± (1/2) Σ_{k=1}^{n} δ_kn (A_nn)^{−1} A_kn ϕ(x′) + lim_{ε→0+} ∫_{y′∈R^{n−1}, |x′−y′|>ε} Σ_{k=1}^{n} (∂_k E)(x′ − y′, 0) A_kn ϕ(y′) dy′   (11.6.20)

 = ± (1/2) ϕ(x′) + lim_{ε→0+} ∫_{y′∈R^{n−1}, |x′−y′|>ε} Σ_{k=1}^{n} (∂_k E)(x′ − y′, 0) A_kn ϕ(y′) dy′,
as wanted.

Remark 11.14. The mapping D from (11.6.16) is called the double-layer potential (or double layer operator) associated to the system L in R^n_±, and (11.6.18) may be naturally regarded as the jump formula for the double-layer potential in this setting.

When dealing with the so-called Neumann boundary value problem for the system L [discussed later in (11.6.34)], as boundary condition one prescribes a certain combination of first-order derivatives of the unknown function, namely −Σ_{k=1}^{n} A_nk ∂_k, amounting to what is called the conormal derivative associated with the system L. Our next result elaborates on the nature of the boundary behavior of this conormal derivative of the single-layer potential S introduced earlier in (11.6.11).
with the system L. Our next result elaborates on the nature of boundary behavior of this conormal derivative of the single-layer potential S introduced earlier in (11.6.11).
11.6. LAYER POTENTIALS AND JUMP RELATIONS . . .
369
Corollary 11.15. Assume that the system L is as in (11.6.1)–(11.6.3) and let E be the fundamental solution for L in Rn constructed in Theorem 11.1. Then for every CM -valued function ϕ ∈ S(Rn−1 ), lim±
t→0
n
Ank ∂k (S ϕ)(x , t) = ± 12 ϕ(x )
k=1
(11.6.21)
n
+ lim
ε→0+
Ank (∂k E)(x − y , 0)ϕ(y ) dy .
k=1
y ∈Rn−1
|x −y |>ε
Proof. For each CM -valued function ϕ ∈ S(Rn−1 ), formula (11.6.21) follows using Corollary 11.11 by writing lim±
t→0
n
Ank ∂k (S ϕ)(x , t) =
k=1
n
Ank lim± ∂k (S ϕ)(x , t)
k=1
t→0
−1 1 Ank δkn Ann ϕ(x ) 2 n
=±
(11.6.22)
k=1
n
+ lim
ε→0+ y ∈Rn−1
Ank (∂k E)(x − y , 0)ϕ(y ) dy
k=1
|x −y |>ε
= ± 21 ϕ(x ) + lim
ε→0+ y ∈Rn−1
n
Ank (∂k E)(x − y , 0)ϕ(y ) dy ,
k=1
|x −y |>ε
at every point x′ ∈ R^{n−1}.

Exercise 11.16. State and prove the analogue of Theorem 4.96 in the scenario when Θ : R^n \ {0} → M_{M×M}(C), for some M ∈ N, is a matrix-valued function with scalar entries that satisfy the conditions in (4.4.1). In particular, pay attention to the fact that the format of (4.9.42) now becomes

(T_Θ)^* = T_{(Θ^⊤)^∨}   in (L^2(R^n))^M,   (11.6.23)

where, as before, Θ^⊤ denotes the transpose of the matrix Θ. Hint: Prove and use the fact that m_{Θ^⊤} = (m_Θ)^⊤.

The fact that for any ϕ ∈ (S(R^{n−1}))^M we have L(Dϕ) = 0 in R^n_± (cf. (11.6.17)) means that we may regard the double-layer potential operator D defined in (11.6.16) as a mechanism for generating an abundance of solutions of the partial differential equation Lu = 0 in R^n_±. In addition, the jump formula (11.6.18) fully clarifies the boundary behavior of the function

u := Dϕ.   (11.6.24)
This is particularly relevant in the context of the Dirichlet problem for the M × M system L (assumed to be as in (11.6.1)–(11.6.3)), the formulation of which in, say, the upper half-space reads (compare with (4.8.17) in the case L := Δ)

u ∈ (C^∞(R^n_+))^M,
Lu = 0 in R^n_+,   (11.6.25)
u|^{ver}_{∂R^n_+} = ψ on R^{n−1} ≡ ∂R^n_+.

Above, ψ ∈ (S(R^{n−1}))^M is a given function (called the boundary datum) and, much as in the case of (4.8.17), the symbol u|^{ver}_{∂R^n_+} stands for the "vertical limit" of u to the boundary of the upper half-space, understood at each point x′ ∈ R^{n−1} as lim_{x_n→0+} u(x′, x_n).

Indeed, focusing the search for a solution of (11.6.25) in the class of functions u defined as in (11.6.24) (with ϕ ∈ (S(R^{n−1}))^M yet to be determined) has the distinct advantage that the first two conditions in (11.6.25) are automatically satisfied (thanks to (11.6.17)), irrespective of the choice of ϕ. Keeping in mind (11.6.18), matters have been therefore reduced to solving the boundary integral equation

(1/2) ϕ(x′) + lim_{ε→0+} ∫_{y′∈R^{n−1}, |x′−y′|>ε} Σ_{k=1}^{n} (∂_k E)(x′ − y′, 0) A_kn ϕ(y′) dy′ = ψ(x′),   x′ ∈ R^{n−1},   (11.6.26)
so named since R^{n−1} may be regarded as the boundary of R^n_+. To streamline notation, introduce the singular integral operator

(Kϕ)(x′) := lim_{ε→0+} ∫_{y′∈R^{n−1}, |x′−y′|>ε} Σ_{k=1}^{n} (∂_k E)(x′ − y′, 0) A_kn ϕ(y′) dy′,   x′ ∈ R^{n−1},   (11.6.27)

typically referred to as the principal value (or boundary version) of the double layer associated with the system L in the upper half-space. Employing (11.6.27), we may write in place of (11.6.26)

( (1/2) I_{M×M} + K ) ϕ = ψ   in R^{n−1}.   (11.6.28)

Reducing the entire boundary value problem (11.6.25) to solving the boundary integral equation (11.6.28) is perhaps the most distinguished feature of the technology described thus far for dealing with (11.6.25), called the method of boundary layer potentials (in standard partial differential equations parlance). Regarding the boundary integral equation (11.6.28), involving the singular integral operator K introduced in (11.6.27), we wish to note that by taking the
Fourier transform (in R^{n−1}), then invoking Theorem 4.71 (the applicability of which in the present setting is justified by virtue of the properties of E from (11.6.6)–(11.6.7)), we arrive at

( (1/2) I_{M×M} + a_L ) ϕ̂ = ψ̂   in S′(R^{n−1}),   (11.6.29)

where the M × M matrix-valued function a_L is defined by the formula

a_L(ξ′) := − Σ_{k=1}^{n} ( ∫_{S^{n−2}} (∂_k E)(ω′, 0) log(i(ξ′ · ω′)) dσ(ω′) ) A_kn   (11.6.30)
for each ξ′ ∈ R^{n−1} \ {0′}. Thus, in the case when the matrix (1/2) I_{M×M} + a_L is invertible, at least at the formal level we may express the solution of (11.6.29) in the form

ϕ = F^{−1}[ ( (1/2) I_{M×M} + a_L )^{−1} ψ̂ ],   (11.6.31)

then ultimately conclude from (11.6.24) and (11.6.31) that

u = D( F^{−1}[ ( (1/2) I_{M×M} + a_L )^{−1} ψ̂ ] )   (11.6.32)

solves the Dirichlet problem (11.6.25) for the system L in R^n_+ with ψ as boundary datum. In certain concrete cases of practical importance, the "symbol" function a_L from (11.6.30) is simple enough in order for us to make sense of the expression appearing in the right-hand side of (11.6.31). For example, it is visible from (11.6.30) that a_L = 0 in R^{n−1} \ {0′} whenever

Σ_{k=1}^{n} (∂_k E)(x′, 0) A_kn = 0   for all x′ ∈ R^{n−1} \ {0′}.   (11.6.33)
This is indeed the case when L = Δ, the Laplacian in R^n. To see that this happens, note that in this scalar (M = 1) case, A_jk = δ_jk for each j, k ∈ {1, . . . , n}, and if E_Δ denotes the fundamental solution for the Laplacian from (7.1.12), then for each x = (x′, x_n) ∈ R^n \ {0} we have (∂_n E_Δ)(x′, x_n) = (1/ω_{n−1}) x_n/|x|^n, hence (∂_n E_Δ)(x′, 0) = 0 for each x′ ∈ R^{n−1} \ {0′}. The bottom line is that the symbol function a_Δ vanishes identically. In light of (11.6.31), this shows that for the Laplacian we must choose the (originally unspecified) function ϕ to be equal to the (given) boundary datum ψ. For this choice, the general formula (11.6.32) then reduces precisely to the classical expression (4.8.19), as the reader may verify without difficulty.

However, in general, the assignment ψ → F^{−1}[( (1/2) I_{M×M} + a_L )^{−1} ψ̂] can be of an intricate nature, leading to operators that are well beyond the class of singular integral operators introduced in Definition 4.90. This leads to the consideration of more exotic classes of operators, such as pseudodifferential operators, Fourier integral operators, etc., which are outside of the scope of the present monograph.
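The vanishing of a_Δ and the jump formula (11.6.18) can be illustrated numerically for the Laplacian. In the plane (n = 2) one has (∂_2 E_Δ)(x, t) = t/(2π(x^2 + t^2)), so the double layer reduces to the Poisson kernel of the half-plane and Dϕ(x′, t) → ±(1/2)ϕ(x′) as t → 0± with no principal-value contribution. A quadrature sketch (assuming numpy; the Gaussian density is an arbitrary sample datum):

```python
import numpy as np

phi = lambda y: np.exp(-y**2)     # sample boundary density
y = np.linspace(-30, 30, 200001)  # quadrature grid on the boundary line
dy = y[1] - y[0]

def D(x, t):
    """Double layer for the Laplacian in R^2 at the point (x, t), t != 0."""
    kernel = t/(2*np.pi*((x - y)**2 + t**2))  # (d/dt) E_Delta at (x - y, t)
    return np.sum(kernel*phi(y))*dy

for x in (0.0, 0.7):
    # limit from above is +phi(x)/2, consistent with a_Delta = 0 ...
    assert abs(D(x, 1e-3) - 0.5*phi(x)) < 2e-3
    # ... and the limit from below is -phi(x)/2: the jump across the boundary
    assert abs(D(x, -1e-3) + 0.5*phi(x)) < 2e-3
```

The jump of size ϕ(x′) between the two one-sided limits is exactly the ±(1/2)ϕ(x′) term in (11.6.18), while the absence of any residual principal-value contribution reflects (11.6.33) for L = Δ.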
CHAPTER 11. MORE ON FUNDAMENTAL SOLUTIONS . . .
This being said, the material developed here serves both as preparation and motivation for the reader interested in further pursuing such matters.

Similar considerations apply in the case of the Neumann problem for a system $L$ as in (11.6.1)–(11.6.3), where the formulation reads as

$$\begin{cases} u\in\big[C^\infty(\mathbb{R}^n_+)\big]^M,\\[2pt] Lu=0\ \text{in}\ \mathbb{R}^n_+,\\[2pt] -\displaystyle\sum_{k=1}^{n}A_{nk}\,\partial_k u\,\Big|^{\rm ver}_{\partial\mathbb{R}^n_+}=\psi\ \text{on}\ \mathbb{R}^{n-1}\equiv\partial\mathbb{R}^n_+, \end{cases}\qquad(11.6.34)$$

where $\psi\in\big[\mathcal{S}(\mathbb{R}^{n-1})\big]^M$ is the boundary datum. Given the results proved in Theorem 11.11 and Corollary 11.15, this time it is natural to seek a solution in the form (compare with (11.6.24))

$$u:=\mathcal{S}\varphi\quad\text{in}\ \mathbb{R}^n_+,\qquad(11.6.35)$$

where the function $\varphi\in\big[\mathcal{S}(\mathbb{R}^{n-1})\big]^M$ is subject to the boundary integral equation (implied by (11.6.21))

$$-\tfrac12\varphi(x')-\lim_{\varepsilon\to0^+}\int_{\substack{y'\in\mathbb{R}^{n-1}\\ |x'-y'|>\varepsilon}}\sum_{k=1}^{n}A_{nk}(\partial_k E)(x'-y',0)\,\varphi(y')\,dy'=\psi(x')\qquad(11.6.36)$$

for each $x'\in\mathbb{R}^{n-1}$. Once again, Theorem 4.71 may be used in order to rewrite (11.6.36) on the Fourier transform side as

$$\big(-\tfrac12 I_{M\times M}+\widetilde{a}_L\big)\widehat{\varphi}=\widehat{\psi}\quad\text{in}\ \mathcal{S}'(\mathbb{R}^{n-1}),\qquad(11.6.37)$$

where the $M\times M$ matrix-valued function $\widetilde{a}_L$ is now defined by the formula

$$\widetilde{a}_L(\xi'):=\sum_{k=1}^{n}A_{nk}\int_{S^{n-2}}(\partial_k E)(\omega,0)\,\log\big(i(\xi'\cdot\omega)\big)\,d\sigma(\omega)\qquad(11.6.38)$$
for each $\xi'\in\mathbb{R}^{n-1}\setminus\{0'\}$. As regards the symbol function $\widetilde{a}_L$ for the Neumann problem, one can see from (11.6.38), (11.6.30), and (11.3.27) that

$$\widetilde{a}_L=-\big(a_{L^\top}\big)^\top\qquad(11.6.39)$$

which shows that $\widetilde{a}_L$ has, up to transpositions, the same nature as the symbol function for the Dirichlet problem. Hence, the same type of considerations as in the latter case apply. For example, corresponding to the case when $L=\Delta$ we have $\widetilde{a}_\Delta=0$, which means that a solution to the Neumann problem for the Laplacian in the upper half-space, that is,

$$\begin{cases} u\in C^\infty(\mathbb{R}^n_+),\\[2pt] \Delta u=0\ \text{in}\ \mathbb{R}^n_+,\\[2pt] -\partial_n u\,\big|^{\rm ver}_{\partial\mathbb{R}^n_+}=\psi\ \text{on}\ \mathbb{R}^{n-1}\equiv\partial\mathbb{R}^n_+, \end{cases}\qquad(11.6.40)$$

where $\psi\in\mathcal{S}(\mathbb{R}^{n-1})$ is the boundary datum, is given by

$$u(x):=\int_{\mathbb{R}^{n-1}}E_\Delta(x'-y',x_n)\,\psi(y')\,dy'\quad\text{for}\ x=(x',x_n)\in\mathbb{R}^n_+,\qquad(11.6.41)$$

with $E_\Delta$ denoting the standard fundamental solution for the Laplacian (cf. (7.1.12)).

Further Notes for Chap. 11. As already mentioned, the approach developed in Sect. 10.2 contains a constructive procedure for reducing the task of finding a fundamental solution for the Lamé operator to the scalar case. This scheme has been implemented in Sect. 11.1 based on the knowledge of a fundamental solution for the poly-harmonic operator from Sect. 7.5. Subsequently, in Sect. 11.2 a fundamental solution for the Stokes operator is computed indirectly by rigorously making sense of the informal observation that this operator is a limiting case of the Lamé operator (taking μ = 1 and sending λ → ∞). From the considerations in Sect. 10.2 it was also already known that the higher-order homogeneous elliptic constant coefficient linear systems considered in Sect. 11.3 possess fundamental solutions that are tempered distributions with singular support at the origin. The new issue addressed in Theorem 11.1 is to find an explicit formula for, and to study other properties of, such fundamental solutions. This theorem refines and further builds on the results proved in [31, 35, 50, 54, 61], in various degrees of generality. Fundamental solutions for variable coefficient elliptic operators on manifolds have been studied in [52].
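As a concrete (and purely illustrative) sanity check related to the representation (11.6.41), one can verify numerically that a superposition of translates of a fundamental solution along the boundary is harmonic in the upper half-space. The sketch below works in $n=2$ with the assumed normalization $E_\Delta(x)=(2\pi)^{-1}\log|x|$ and a Gaussian boundary datum; since each kernel sample is harmonic away from its pole, the quadrature approximation itself is harmonic, which a five-point stencil confirms.

```python
import math

def E(x1, x2):
    # Assumed 2-D normalization: E(x) = log|x|/(2*pi), harmonic away from 0.
    return math.log(math.hypot(x1, x2)) / (2.0 * math.pi)

def psi(y):
    # Schwartz-type boundary datum (our choice for the experiment).
    return math.exp(-y * y)

def u(x1, x2, N=2000, R=15.0):
    # Midpoint-rule quadrature for u(x) = \int E(x1 - y, x2) psi(y) dy over [-R, R].
    h = 2.0 * R / N
    total = 0.0
    for j in range(N):
        y = -R + (j + 0.5) * h
        total += E(x1 - y, x2) * psi(y)
    return total * h

# Five-point discrete Laplacian at an interior point of the upper half-plane.
x1, x2, d = 0.3, 1.0, 1e-3
lap = (u(x1 + d, x2) + u(x1 - d, x2) + u(x1, x2 + d) + u(x1, x2 - d)
       - 4.0 * u(x1, x2)) / d ** 2
```

The discrete Laplacian is tiny because the quadrature sum is an exact linear combination of harmonic functions of $(x_1,x_2)$.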
11.7 Additional Exercises for Chap. 11
Exercise 11.17. Use Theorem 11.1 in a similar manner as in the proof of Proposition 11.2 in order to derive a formula for a fundamental solution for the Lamé operator in $\mathbb{R}^n$ when $n\geq3$ is arbitrary and odd.

Exercise 11.18. Similarly to Exercise 11.17, derive a formula for a fundamental solution for the Lamé operator in $\mathbb{R}^n$ when $n$ is even.

Exercise 11.19. Let $L$ be a homogeneous differential operator in $\mathbb{R}^n$, $n\geq2$, of order $2m$, $m\in\mathbb{N}$, with complex constant coefficients as in (11.3.1) that satisfies (11.3.3), and recall (11.3.28). Also, let $q\in\mathbb{N}_0$ be such that $n+q$ is even. Consider the matrix $E:=\big(E_{jk}\big)_{1\le j,k\le M}$ with entries given by

$$E_{jk}(x)=\frac{-1}{(2\pi i)^n(2m+q)!}\int_{S^{n-1}}(x\cdot\xi)^{2m+q}\,\log\Big(\frac{x\cdot\xi}{i}\Big)\,P_{jk}(\xi)\,d\sigma(\xi)\qquad(11.7.1)$$

for $x\in\mathbb{R}^n\setminus\{0\}$ and $j,k\in\{1,\dots,M\}$, where $\log$ denotes the principal branch of the complex logarithm defined for points $z\in\mathbb{C}\setminus\{x:x\in\mathbb{R},\,x\leq0\}$. Prove that $E\in\mathcal{M}_{M\times M}\big(\mathcal{S}'(\mathbb{R}^n)\big)$ is a fundamental solution for the system $\Delta_x^{(n+q)/2}L$ in $\mathbb{R}^n$.
Exercise 11.20. Let $L_1,L_2$ be two homogeneous $M\times M$ systems of differential operators in $\mathbb{R}^n$, $n\geq2$, of orders $2m_1$ and $2m_2$, respectively, where $m_1,m_2\in\mathbb{N}$, with complex constant coefficients, satisfying $\det L_1(\xi)\neq0$ and $\det L_2(\xi)\neq0$ for every $\xi\in\mathbb{R}^n\setminus\{0\}$. Prove that, with the notational convention employed in part (7) of Theorem 11.1, one has

$$L_2E_{L_1L_2}=E_{L_1}+P\quad\text{in}\ \mathcal{S}'(\mathbb{R}^n),\qquad(11.7.2)$$

where $P$ is zero if either $n$ is odd or $2m_1<n$, and $P$ is an $M\times M$ matrix whose entries are homogeneous polynomials of degree $2m_1-n$ if $2m_1\geq n$.

Exercise 11.21. Recall the notational convention employed in part (7) of Theorem 11.1 as well as $F_{m,n}$ from (7.5.2). Prove that for every $m\in\mathbb{N}$ one has

$$E_{\Delta^m}=F_{m,n}+P\quad\text{in}\ \mathcal{S}'(\mathbb{R}^n),\qquad(11.7.3)$$

where $P$ is zero if either $n$ is odd or $2m<n$, and $P$ is a homogeneous polynomial of degree $2m-n$ if $2m\geq n$.
Chapter 12

Solutions to Selected Exercises

12.1 Solutions to Exercises from Sect. 1.4
Exercise 1.24 a = 0; see Examples 1.4 and 1.5.

Exercise 1.25 No; see Examples 1.4 and 1.5.

Exercise 1.26 Let $\varphi\in C_0^\infty(\mathbb{R}^2)$. Based on integration by parts and the fact that $\varphi$ is compactly supported we have

$$\langle\partial_1\partial_2f,\varphi\rangle=\int_{\mathbb{R}^2}f(x,y)(\partial_1\partial_2\varphi)(x,y)\,dx\,dy=\int_{\mathbb{R}}\int_0^\infty\partial_2(\partial_1\varphi)(x,y)\,dy\,dx+\int_{\mathbb{R}}\int_0^\infty\partial_1(\partial_2\varphi)(x,y)\,dx\,dy=0.\qquad(12.1.1)$$

This shows that $\partial_1\partial_2f=0$ in the weak sense. Similarly, the other weak derivatives are $\partial_2(\partial_1\partial_2f)=0$ and $\partial_1^2\partial_2f=0$. Suppose now that $\partial_1f$ exists in the weak sense, i.e., there exists $g\in L^1_{loc}(\mathbb{R}^2)$ such that

$$\int_{\mathbb{R}^2}f(x,y)(\partial_1\varphi)(x,y)\,dx\,dy=-\int_{\mathbb{R}^2}g(x,y)\varphi(x,y)\,dx\,dy,\quad\forall\,\varphi\in C_0^\infty(\mathbb{R}^2).\qquad(12.1.2)$$

Since $\int_{\mathbb{R}^2}f(x,y)(\partial_1\varphi)(x,y)\,dx\,dy=-\int_{\mathbb{R}}\varphi(0,y)\,dy$ (as seen using the definition of $f$ and integration by parts), (12.1.2) becomes

$$\int_{\mathbb{R}}\varphi(0,y)\,dy=\int_{\mathbb{R}^2}g(x,y)\varphi(x,y)\,dx\,dy,\quad\forall\,\varphi\in C_0^\infty(\mathbb{R}^2).\qquad(12.1.3)$$
D. Mitrea, Distributions, Partial Differential Equations, and Harmonic Analysis, Universitext, DOI 10.1007/978-1-4614-8208-6 12, © Springer Science+Business Media New York 2013
Taking now $\varphi$ in (12.1.3) to be supported in $\{(x,y)\in\mathbb{R}^2:x\neq0\}$ and recalling Theorem 1.3, it follows that $g=0$ almost everywhere on $\{(x,y)\in\mathbb{R}^2:x\neq0\}$. Thus, $g=0$ almost everywhere in $\mathbb{R}^2$. The latter, when used in (12.1.3), implies $\int_{\mathbb{R}}\varphi(0,y)\,dy=0$ for every $\varphi\in C_0^\infty(\mathbb{R}^2)$. However this leads to a contradiction by taking $\varphi(x,y)=\varphi_1(x)\varphi_2(y)$ for $(x,y)\in\mathbb{R}^2$, where $\varphi_1,\varphi_2\in C_0^\infty(\mathbb{R})$ are such that $\varphi_1(0)=1$ and $\int_{\mathbb{R}}\varphi_2(y)\,dy=1$.

Exercise 1.27 Let $\varphi\in C_0^\infty\big((-1,1)\big)$ be arbitrary. Then Lebesgue's dominated convergence theorem and integration by parts give

$$-\int_{-1}^{1}f(x)\varphi'(x)\,dx=\lim_{\varepsilon\to0^+}\int_{-1}^{-\varepsilon}\sqrt{-x}\,\varphi'(x)\,dx-\lim_{\varepsilon\to0^+}\int_{\varepsilon}^{1}\sqrt{x}\,\varphi'(x)\,dx$$
$$=\lim_{\varepsilon\to0^+}\Big[\sqrt{-x}\,\varphi(x)\Big|_{-1}^{-\varepsilon}+\int_{-1}^{-\varepsilon}\frac{\varphi(x)}{2\sqrt{-x}}\,dx\Big]-\lim_{\varepsilon\to0^+}\Big[\sqrt{x}\,\varphi(x)\Big|_{\varepsilon}^{1}-\int_{\varepsilon}^{1}\frac{\varphi(x)}{2\sqrt{x}}\,dx\Big]$$
$$=\int_{-1}^{1}\frac{1}{2\sqrt{|x|}}\,\varphi(x)\,dx,\qquad(12.1.4)$$

where we have used the fact that $\frac{1}{\sqrt{|x|}}\in L^1\big((-1,1)\big)$. Hence the weak derivative of order one of $f$ is the function given by $g(x):=\frac{1}{2\sqrt{|x|}}$ for $x\in(-1,1)\setminus\{0\}$.
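The identity just derived can be probed numerically (an illustration of ours, not part of the book): with $f(x)=\operatorname{sgn}(x)\sqrt{|x|}$ and $g(x)=1/(2\sqrt{|x|})$, the defining relation $\int f\varphi'\,dx=-\int g\varphi\,dx$ of the weak derivative should hold for test functions. Below we use $\varphi(x)=(1-x^2)^2$ as a stand-in test function (it only vanishes to finite order at $\pm1$, which is enough for a quadrature check); the closed-form value $-64/45$ for this particular $\varphi$ was computed by hand.

```python
import math

def f(x):
    return math.copysign(math.sqrt(abs(x)), x)   # sgn(x) * sqrt(|x|)

def phi(x):
    return (1.0 - x * x) ** 2

def dphi(x):
    return -4.0 * x * (1.0 - x * x)

N = 20000
h = 2.0 / N
# left-hand side: midpoint rule for \int_{-1}^{1} f(x) phi'(x) dx
lhs = sum(f(-1.0 + (j + 0.5) * h) * dphi(-1.0 + (j + 0.5) * h) for j in range(N)) * h
# right-hand side: -\int phi(x)/(2 sqrt|x|) dx = -2 \int_0^1 phi(t^2) dt  (substitute x = t^2)
ht = 1.0 / N
rhs = -2.0 * sum(phi(((j + 0.5) * ht) ** 2) for j in range(N)) * ht
```

Both quadratures agree with each other and with the hand-computed value $-64/45\approx-1.4222$.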
Exercise 1.28 Consider a function $\varphi\in C_0^\infty(\mathbb{R}^2)$ and pick a number $R>0$ such that $\operatorname{supp}\varphi\subset(-R,R)\times(-R,R)$. Then

$$-\int_{\mathbb{R}^2}x|y|(\partial_1^2\partial_2\varphi)(x,y)\,dx\,dy=-\int_{\mathbb{R}}|y|\Big(\int_{\mathbb{R}}x(\partial_1^2\partial_2\varphi)(x,y)\,dx\Big)\,dy=0,\qquad(12.1.5)$$

where the last equality is obtained by integrating by parts twice with respect to $x$, keeping in mind that $\varphi$ has compact support. This proves that $\partial_1^2\partial_2f=0$ in the weak sense. On the other hand,

$$-\int_{\mathbb{R}^2}x|y|(\partial_1\partial_2^2\varphi)(x,y)\,dx\,dy=\int_{\mathbb{R}^2}|y|(\partial_2^2\varphi)(x,y)\,dx\,dy$$
$$=-\int_{\mathbb{R}}\Big[\int_{-R}^{0}y(\partial_2^2\varphi)(x,y)\,dy-\int_{0}^{R}y(\partial_2^2\varphi)(x,y)\,dy\Big]\,dx$$
$$=-\int_{\mathbb{R}}\int_{\mathbb{R}}\operatorname{sgn}(y)(\partial_2\varphi)(x,y)\,dy\,dx=2\int_{\mathbb{R}}\varphi(x,0)\,dx\qquad(12.1.6)$$

after integrating by parts, first with respect to $x$, then twice with respect to $y$. Now reasoning as in the proof of Exercise 1.26 one can show that the existence of $g\in L^1_{loc}(\mathbb{R}^2)$ such that

$$2\int_{\mathbb{R}}\varphi(x,0)\,dx=\int_{\mathbb{R}^2}g(x,y)\varphi(x,y)\,dx\,dy,\quad\forall\,\varphi\in C_0^\infty(\mathbb{R}^2),$$
leads to a contradiction. Hence, $\partial_1\partial_2^2f$ does not exist in the weak sense.

Exercise 1.29 Reasoning as in the proofs of Exercise 1.26 and Exercise 1.28, one can show that $\partial_1^kf$ and $\partial_2^kf$ do not exist in the weak sense for any $k\in\mathbb{N}$, and that $\partial^\alpha f=0$ in the weak sense whenever $\alpha=(\alpha_1,\alpha_2)\in\mathbb{N}_0^2$ with $\alpha_1\neq0$ and $\alpha_2\neq0$.

Exercise 1.30 Let $\varphi\in C_0^\infty(\mathbb{R})$ with $\operatorname{supp}\varphi\subset(-R,R)$ for some $R\in(0,\infty)$. Then,

$$-\int_{\mathbb{R}}f(x)\varphi'(x)\,dx=-\int_{-R}^{0}\sin(-x)\varphi'(x)\,dx-\int_{0}^{R}\sin(x)\varphi'(x)\,dx$$
$$=-\int_{-R}^{0}\cos(x)\varphi(x)\,dx+\int_{0}^{R}\cos(x)\varphi(x)\,dx=\int_{\mathbb{R}}\operatorname{sgn}(x)\cos(x)\varphi(x)\,dx.\qquad(12.1.7)$$

Since $\operatorname{sgn}(x)\cos(x)\in L^1_{loc}(\mathbb{R})$, we deduce from (12.1.7) that $f'$ exists in the weak sense and that $f'(x)=\operatorname{sgn}(x)\cos(x)$ for $x\in\mathbb{R}$. Also,

$$-\int_{\mathbb{R}}f'(x)\varphi'(x)\,dx=\int_{-R}^{0}\cos(x)\varphi'(x)\,dx-\int_{0}^{R}\cos(x)\varphi'(x)\,dx=2\varphi(0)-\int_{\mathbb{R}}\operatorname{sgn}(x)\sin(x)\varphi(x)\,dx.\qquad(12.1.8)$$

If the weak derivative $f''$ were to exist we could find $g\in L^1_{loc}(\mathbb{R})$ such that

$$\int_{\mathbb{R}}g(x)\varphi(x)\,dx=2\varphi(0)-\int_{\mathbb{R}}\operatorname{sgn}(x)\sin(x)\varphi(x)\,dx,\quad\forall\,\varphi\in C_0^\infty(\mathbb{R}).\qquad(12.1.9)$$

In particular, (12.1.9) would also hold for all $\varphi\in C_0^\infty(\mathbb{R})$ supported in $(0,\infty)$, which would give $g\big|_{(0,\infty)}=-\sin(x)$. Similarly, we would conclude that $g\big|_{(-\infty,0)}=\sin(x)$ and, hence, returning to (12.1.9), we would obtain $\varphi(0)=0$ for all $\varphi\in C_0^\infty(\mathbb{R})$. The latter is false, thus the weak derivative $f''$ does not exist.
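The identity (12.1.8) behind this argument admits a quick numerical illustration (ours, not the book's): pairing $f(x)=\sin|x|$ with $\varphi''$ must reproduce $2\varphi(0)-\int\operatorname{sgn}(x)\sin(x)\varphi(x)\,dx$. We use the (hypothetical) test function $\varphi(x)=e^{-x^2}$, which is not compactly supported but decays fast enough for a truncated quadrature.

```python
import math

def phi(x):
    return math.exp(-x * x)

def phi2(x):
    # second derivative of exp(-x^2)
    return (4.0 * x * x - 2.0) * math.exp(-x * x)

R, N = 10.0, 200000
h = 2.0 * R / N
lhs = 0.0                 # \int sin|x| * phi''(x) dx  ~  <f'', phi>
rhs = 2.0 * phi(0.0)      # 2*phi(0) - \int sgn(x) sin(x) phi(x) dx
for j in range(N):
    x = -R + (j + 0.5) * h
    lhs += math.sin(abs(x)) * phi2(x) * h
    rhs -= math.copysign(1.0, x) * math.sin(x) * phi(x) * h
```

The two sides agree to quadrature accuracy, exactly as (12.1.8) predicts.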
Exercise 1.32 Note that $f\in L^1_{loc}(\mathbb{R}^n)$ for every $n\in\mathbb{N}$. Suppose $n\geq2$, and fix some index $j\in\{1,\dots,n\}$ along with a function $\varphi\in C_0^\infty(\mathbb{R}^n)$. If $R>0$ is such that $\operatorname{supp}\varphi\subset B(0,R)$, then using Lebesgue's dominated convergence theorem and (13.7.4) we obtain

$$-\int_{\mathbb{R}^n}f(x)\,\partial_j\varphi(x)\,dx=-\lim_{r\to0^+}\int_{r<|x|}f(x)\,\partial_j\varphi(x)\,dx$$
$$=\lim_{r\to0^+}\Big[\int_{\partial B(0,r)}\frac{x_j}{r^{\varepsilon+1}}\,\varphi(x)\,d\sigma(x)-\varepsilon\int_{r<|x|}\frac{x_j}{|x|^{\varepsilon+2}}\,\varphi(x)\,dx\Big]=-\varepsilon\int_{\mathbb{R}^n}\frac{x_j}{|x|^{\varepsilon+2}}\,\varphi(x)\,dx,\qquad(12.1.10)$$

since, given that under the current assumptions $n-1-\varepsilon>0$, we have

$$\Big|\int_{\partial B(0,r)}\frac{x_j}{r^{\varepsilon+1}}\,\varphi(x)\,d\sigma(x)\Big|\leq\omega_{n-1}\|\varphi\|_{L^\infty(\mathbb{R}^n)}\,r^{n-1-\varepsilon}\xrightarrow[r\to0^+]{}0.\qquad(12.1.11)$$

This proves that if $n\geq2$ then the weak derivative $\partial_jf$ exists and is equal to the function $-\varepsilon\frac{x_j}{|x|^{\varepsilon+2}}\in L^1_{loc}(\mathbb{R}^n)$.

Assume next that $n=1$ and suppose that there exists some $g\in L^1_{loc}(\mathbb{R})$ with the property that $-\int_{\mathbb{R}}f\varphi'\,dx=\int_{\mathbb{R}}g\varphi\,dx$ for every $\varphi\in C_0^\infty(\mathbb{R})$. Then, with $R>0$ such that $\operatorname{supp}\varphi\subset(-R,R)$, using Lebesgue's dominated convergence theorem we may write

$$\int_{\mathbb{R}}f\varphi'\,dx=\lim_{r\to0^+}\Big[\int_{-R}^{-r}\frac{\varphi'(x)}{(-x)^{\varepsilon}}\,dx+\int_{r}^{R}\frac{\varphi'(x)}{x^{\varepsilon}}\,dx\Big]$$
$$=\lim_{r\to0^+}\Big[-\varepsilon\int_{-R}^{-r}\frac{\varphi(x)}{(-x)^{\varepsilon+1}}\,dx+\frac{\varphi(-r)}{r^{\varepsilon}}-\frac{\varphi(r)}{r^{\varepsilon}}+\varepsilon\int_{r}^{R}\frac{\varphi(x)}{x^{\varepsilon+1}}\,dx\Big].\qquad(12.1.12)$$

In particular, identity (12.1.12) holds if $\operatorname{supp}\varphi\subset(0,\infty)$, in which case (12.1.12) implies $g(x)=-\varepsilon x^{-\varepsilon-1}$ for $x>0$, and if $\operatorname{supp}\varphi\subset(-\infty,0)$, when we obtain $g(x)=\varepsilon(-x)^{-\varepsilon-1}$ for $x<0$ (recall Theorem 1.3). However, the function $g$ thus obtained does not belong to $L^1_{loc}(\mathbb{R})$. Consequently, $f$ does not have a weak derivative of order one in the case when $n=1$.

Exercise 1.33 (a) Let $f\in L^1_{loc}\big((a,b)\big)$ be such that the weak derivative $f'$ exists and equals 0. That is,

$$\int_a^bf(x)\varphi'(x)\,dx=0,\quad\forall\,\varphi\in C_0^\infty\big((a,b)\big).\qquad(12.1.13)$$
Select $\varphi_0\in C_0^\infty\big((a,b)\big)$ such that $\int_a^b\varphi_0(s)\,ds=1$. Then, each $\varphi\in C_0^\infty\big((a,b)\big)$ may be written as

$$\varphi(t)=\varphi_0(t)\int_a^b\varphi(s)\,ds+\psi'(t),\quad\forall\,t\in(a,b),\qquad(12.1.14)$$

where

$$\psi(x):=\int_a^x\Big[\varphi(t)-\varphi_0(t)\int_a^b\varphi(s)\,ds\Big]\,dt,\quad\forall\,x\in(a,b).\qquad(12.1.15)$$

An inspection of (12.1.15) reveals that $\psi\in C^\infty\big((a,b)\big)$, and if $[a_0,b_0]\subseteq(a,b)$ is an interval such that $\operatorname{supp}\varphi,\operatorname{supp}\varphi_0\subseteq[a_0,b_0]$, then we also have $\operatorname{supp}\psi\subseteq[a_0,b_0]$ (as seen by analyzing the cases $x\in(a,a_0)$ and $x\in(b_0,b)$). Now define the constant $c:=\int_a^bf(x)\varphi_0(x)\,dx$ and use (12.1.13) to obtain

$$\int_a^bf(x)\varphi(x)\,dx=c\int_a^b\varphi(s)\,ds,\quad\forall\,\varphi\in C_0^\infty\big((a,b)\big).\qquad(12.1.16)$$
Hence, by invoking Theorem 1.3 we obtain f = c almost everywhere in (a, b).
(b) Let $g\in L^1_{loc}\big((a,b)\big)$, $x_0\in(a,b)$, and set $f(x):=\int_{x_0}^xg(t)\,dt$ for every $x\in(a,b)$. By Lebesgue's dominated convergence theorem, it follows that $f$ is continuous on $(a,b)$, hence $f\in L^1_{loc}\big((a,b)\big)$. There remains to prove that $g$ is the weak derivative of order one of $f$.

Parenthetically, we note that under the current assumptions one may not expect $f$ to necessarily be pointwise differentiable in $(a,b)$. As an example, take the function $f:\mathbb{R}\to\mathbb{R}$ defined by $f(x):=x$ for $x\geq0$ and $f(x):=0$ for $x<0$. Then $f$ is not differentiable at zero but its weak derivative $f'$ exists and is equal to $\chi_{(0,\infty)}\in L^1_{loc}(\mathbb{R})$. Also, observe that the fundamental theorem of calculus does not apply in this case since, while $g$ is in $L^1_{loc}(\mathbb{R})$, it is not continuous.

Let $\varphi\in C_0^\infty\big((a,b)\big)$ and $c\in(a,b)$ be such that $\operatorname{supp}\varphi\subset(c,b)$. Consider two cases.

Case 1: Assume $c>x_0$. Then $\operatorname{supp}\varphi\subset(x_0,b)$ and using the expression for $f$ and Fubini's theorem we may write

$$\int_a^bf(x)\varphi'(x)\,dx=\int_{x_0}^b\Big(\int_{x_0}^xg(t)\,dt\Big)\varphi'(x)\,dx=\int_{x_0}^bg(t)\Big(\int_t^b\varphi'(x)\,dx\Big)\,dt$$
$$=-\int_{x_0}^b\varphi(t)g(t)\,dt=-\int_a^bg(t)\varphi(t)\,dt.\qquad(12.1.17)$$
Case 2: Assume $c<x_0$. Again, using the expression for $f$ and Fubini's theorem it follows that

$$\int_a^bf(x)\varphi'(x)\,dx=\int_a^{x_0}f(x)\varphi'(x)\,dx+\int_{x_0}^bf(x)\varphi'(x)\,dx$$
$$=-\int_a^{x_0}\Big(\int_x^{x_0}g(t)\,dt\Big)\varphi'(x)\,dx+\int_{x_0}^b\Big(\int_{x_0}^xg(t)\,dt\Big)\varphi'(x)\,dx$$
$$=-\int_a^{x_0}g(t)\Big(\int_a^t\varphi'(x)\,dx\Big)\,dt+\int_{x_0}^bg(t)\Big(\int_t^b\varphi'(x)\,dx\Big)\,dt$$
$$=-\int_a^bg(t)\varphi(t)\,dt.\qquad(12.1.18)$$
From (12.1.17) and (12.1.18) the desired conclusion follows.

(c) Denote by $h\in L^1_{loc}\big((a,b)\big)$ the weak derivative $f^{(k)}$ and fix an arbitrary point $x_0\in(a,b)$. If one sets $g(x):=\int_{x_0}^xh(s)\,ds$ for $x\in(a,b)$, then by Lebesgue's dominated convergence theorem it follows that $g$ is continuous. Thus, in particular, $g\in L^1_{loc}\big((a,b)\big)$. Fix $\varphi_0\in C_0^\infty\big((a,b)\big)$ with $\int_a^b\varphi_0(t)\,dt=1$. Using induction and what we have proved in part (b), the desired conclusion will follow if we show that

$$f^{(k-1)}=g-\int_a^bg(t)\varphi_0(t)\,dt+(-1)^{k-1}\int_a^bf(t)\varphi_0^{(k-1)}(t)\,dt\quad\text{a.e. on }(a,b).\qquad(12.1.19)$$

Let $\varphi\in C_0^\infty\big((a,b)\big)$ and write it as in (12.1.14)–(12.1.15). Then

$$\int_a^bg(x)\varphi(x)\,dx=\Big(\int_a^bg(x)\varphi_0(x)\,dx\Big)\int_a^b\varphi(t)\,dt+\int_a^bg(x)\psi'(x)\,dx$$
$$=\Big(\int_a^bg(x)\varphi_0(x)\,dx\Big)\int_a^b\varphi(t)\,dt-\int_a^bh(x)\psi(x)\,dx$$
$$=\Big(\int_a^bg(x)\varphi_0(x)\,dx\Big)\int_a^b\varphi(t)\,dt+(-1)^{k-1}\int_a^bf(x)\psi^{(k)}(x)\,dx$$
$$=\Big(\int_a^bg(x)\varphi_0(x)\,dx\Big)\int_a^b\varphi(t)\,dt+(-1)^{k-1}\int_a^bf(x)\varphi^{(k-1)}(x)\,dx+(-1)^{k}\Big(\int_a^bf(x)\varphi_0^{(k-1)}(x)\,dx\Big)\int_a^b\varphi(t)\,dt.\qquad(12.1.20)$$

Now (12.1.19) follows from (12.1.20) and Theorem 1.3.

(d) Using what we have proved in part (c), we obtain that the weak derivative $f^{(j)}$ exists for each $j\in\mathbb{N}$ satisfying $j\leq k$. From what we proved in part (a) we know that $f^{(k-1)}=a_{k-1}\in\mathbb{C}$, while from part (c) we know that $f^{(k-2)}=a_{k-1}x-a_{k-1}x_0-c$. The proof may now be completed by induction.

(e) No; see Exercise 1.26.

Exercise 1.34 Let $R>0$ be such that $\operatorname{supp}\theta\subseteq B(0,R)$. Then $\varphi_j\in C_0^\infty(\mathbb{R}^n)$ and $\operatorname{supp}\varphi_j\subseteq B(0,R)$ for each $j\in\mathbb{N}$. Also, given any $\alpha\in\mathbb{N}_0^n$ we have

$$\partial^\alpha\varphi_j=e^{-j}j^{m+|\alpha|}(\partial^\alpha\theta)(j\cdot)\quad\text{for every }j\in\mathbb{N}.\qquad(12.1.21)$$

Hence,

$$\sup_{x\in B(0,R)}|\partial^\alpha\varphi_j(x)|\leq e^{-j}j^{m+|\alpha|}\,\|\partial^\alpha\theta\|_{L^\infty(\mathbb{R}^n)}\xrightarrow[j\to\infty]{}0.$$

This shows that $\varphi_j\xrightarrow[j\to\infty]{\mathcal{D}(\mathbb{R}^n)}0$.
Exercise 1.35 Clearly, for every $j\in\mathbb{N}$, $\varphi_j\in C_0^\infty(\mathbb{R}^n)$ and $\operatorname{supp}\varphi_j=\operatorname{supp}\theta-jh$. Hence, for every $R>0$ there exists $j_0\in\mathbb{N}$ such that $(\operatorname{supp}\theta-jh)\cap B(0,R)=\emptyset$ for $j\geq j_0$. This shows that there is no compact set $K\subset\mathbb{R}^n$ such that $\operatorname{supp}\varphi_j\subseteq K$ for all $j\in\mathbb{N}$, which implies that $\{\varphi_j\}_{j\in\mathbb{N}}$ does not converge in $\mathcal{D}(\mathbb{R}^n)$. In addition, if $\alpha\in\mathbb{N}_0^n$, $K$ is a compact set in $\mathbb{R}^n$, and $R>0$ is such that $K\subseteq B(0,R)$, then $\sup_{x\in K}|\partial^\alpha\varphi_j(x)|=\sup_{x\in K}|(\partial^\alpha\theta)(x+jh)|=0$ for all $j\geq j_0$, where $j_0\in\mathbb{N}$ is such that $(\operatorname{supp}\theta-jh)\cap B(0,R)=\emptyset$. This proves that $\varphi_j\xrightarrow[j\to\infty]{\mathcal{E}(\mathbb{R}^n)}0$.

Exercise 1.36 Suppose there exists $\varphi\in C_0^\infty(\mathbb{R}^n)$ such that $\varphi_j\xrightarrow[j\to\infty]{\mathcal{E}(\mathbb{R}^n)}\varphi$. Then necessarily $\varphi_j\to\varphi$ pointwise as $j\to\infty$ (explain why). Since $\lim_{j\to\infty}\varphi_j(x)=0$ for every $x\in\mathbb{R}^n$, this forces $\varphi=0$. However, if $\alpha\in\mathbb{N}_0^n$ is such that $|\alpha|=1$, then $\partial^\alpha\varphi_j=(\partial^\alpha\theta)(j\cdot)$ for every $j\in\mathbb{N}$, hence for each compact $K$ we have $\sup_{x\in K}|\partial^\alpha\varphi_j|=\|\partial^\alpha\theta\|_{L^\infty(K)}$, which does not converge to zero as $j\to\infty$, leading to a contradiction.

Exercise 1.37 One implication is immediate. If $\theta\neq0$, then $\operatorname{supp}\varphi_j=jh+\frac1j\operatorname{supp}\theta$ for each $j\in\mathbb{N}$, and since $h\neq0$, there is no compact $K\subset\mathbb{R}^n$ such that $\operatorname{supp}\varphi_j\subseteq K$ for all $j\in\mathbb{N}$. This shows that $\{\varphi_j\}_{j\in\mathbb{N}}$ does not converge in $\mathcal{D}(\mathbb{R}^n)$.

Exercise 1.38 $\{\varphi_j\}_{j\in\mathbb{N}}$ does not converge in $\mathcal{D}(\mathbb{R}^n)$ since $\operatorname{supp}\varphi_j=j\operatorname{supp}\theta$ for every $j\in\mathbb{N}$, thus there is no compact $K\subset\mathbb{R}^n$ such that $\operatorname{supp}\varphi_j\subseteq K$ for all $j\in\mathbb{N}$. On the other hand, if $\alpha\in\mathbb{N}_0^n$ then $|\partial^\alpha\varphi_j(x)|\leq j^{-1-|\alpha|}\|\partial^\alpha\theta\|_{L^\infty(\mathbb{R}^n)}$ for every $x\in\mathbb{R}^n$. Thus, $\partial^\alpha\varphi_j$ converges to zero uniformly on $\mathbb{R}^n$, proving that $\varphi_j\xrightarrow[j\to\infty]{\mathcal{E}(\mathbb{R}^n)}0$.

Exercise 1.39 Let $x_0\in\Omega$ and $R>0$ be such that $B(x_0,R)\subset\Omega$. Consider a nonzero function $\varphi\in C_0^\infty(\mathbb{R}^n)$ with the property that $\operatorname{supp}\varphi\subset B(0,R)$ and define the sequence $\varphi_j(x):=\frac1j\varphi\big(j(x-x_0)\big)$, $x\in\Omega$, $j\in\mathbb{N}$. Show that the sequence $\{\varphi_j\}_{j\in\mathbb{N}}$ satisfies the hypotheses in the problem but it does not converge in $\mathcal{D}(\mathbb{R}^n)$.
12.2 Solutions to Exercises from Sect. 2.10
Exercise 2.105 Let K be a compact set in R and fix an arbitrary function ϕ ∈ C0∞ (R) with supp ϕ ⊆ K. By the mean value theorem we have 1 1 |ϕ 2 − ϕ(0)| ≤ 2 ϕ L∞ (K) . j j This shows that the series in (2.10.1) is absolutely convergent, hence u is well ∞ defined. Clearly, u is linear and since |u(ϕ)| ≤ j −2 sup |ϕ (x)| it follows j=1
x∈K
382
CHAPTER 12. SOLUTIONS TO SELECTED EXERCISES
that u ∈ D (R), it has finite order, and its order is at most 1. To see that u is not of order zero, consider the sequence of functions {ϕk }k∈N satisfying, for each k ∈ N, the conditions ϕk ∈ C0∞ (0, 2) , 0 ≤ ϕk ≤ k1 , (12.2.1) 1 1 1 , 1 . ϕk (x) = 0 if x ≤ (k+1) 2 , ϕk (x) = k if x ∈ 2 k Then u, ϕk =
k
ϕk
1
= 1. If we assume that u has order zero, then there
j2
j=1
exists a finite positive number C such that |u, ϕ| ≤ C supx∈[0,2] |ϕ(x)| for every ϕ ∈ C0∞ (R) with supp ϕ ⊆ [0, 2]. This implies 1 = |u, ϕk | ≤ Ck for every k ∈ N, which leads to a contradiction. Hence the order of u is 1. Exercise 2.106 Take u to be the distribution given by the constant function 1. Then this u does not satisfy (2.10.2). Indeed, for any compact K ⊂ Ω we can choose ϕ ∈ C0∞ (Ω) with supp ϕ ⊆ Ω \ K and Ω ϕ dx = 0 which would lead to a contradiction if (2.10.2) were true for u = 1. Exercise 2.107 Use a reasoning similar in spirit to the one from Example 2.9.
Exercise 2.108 Estimate B(0,R) |f (x)| dx by working in polar coordinates [based on formula (13.8.7)] and using Proposition 13.46, as well as (2.1.9). Exercise 2.109 Let ϕ ∈ C0∞ (R) with supp ϕ ⊂ (−R, R). Then using integration by parts we have % % $ $ (ln |x|) , ϕ = − ln |x|, ϕ ϕ (x) ln |x| dx = − ϕ (x) ln |x| dx = − lim+ R
= lim+ ε→0
ε→0
ε<|x|
ε<|x|
R −ε ϕ(x) − ϕ(x) ln |x| dx − ϕ(x) ln |x| x −R ε
! 1 " = P.V. , ϕ + lim+ ϕ(ε) − ϕ(−ε) ln ε x ε→0 " ! 1 = P.V. , ϕ . x For the last equality in (12.2.2) note that 0. ϕ(ε) − ϕ(−ε) ln ε ≤ 2ϕ L∞ (R) ε| ln ε| −−−−→ +
(12.2.2)
(12.2.3)
ε→0
Exercise 2.110 It is immediate that f is continuous on R \ {0}, while its continuity at x = 0 follows from the fact that lim x ln |x| = 0.
x→0
(12.2.4)
12.2. SOLUTIONS TO EXERCISES FROM SECT. 2.10
383
Fix ϕ ∈ C0∞ (R). Starting with the definition of distributional derivative, then applying Lebesgue’s dominated convergence theorem (in concert with the properties of ϕ), then integrating by parts and using (12.2.4), we obtain $ $ % % f , ϕ = − f, ϕ ∞ −ε = − lim (x ln(−x) − x)ϕ (x) dx + (x ln x − x)ϕ (x) dx ε→0+
−∞
−ε
ln(−x)ϕ(x) dx +
= lim+ ε→0
−∞
$
ε ∞
(ln x)ϕ(x) dx
ε
% = ln |x|, ϕ .
(12.2.5)
The last equality in (12.2.5) uses the fact that ln |x| ∈ L1loc (R). Exercise 2.111 Fix j ∈ {1, . . . , n} and ϕ ∈ C0∞ (Rn ). Then % $ % $ (ln |x|)∂j ϕ(x) dx ∂j (ln |x|), ϕ = ln |x|, ∂j ϕ = − Rn
= − lim
ε→0+
= lim+ ε→0
= Rn
|x|≥ε
|x|≥ε
(ln |x|)∂j ϕ(x) dx
xj ϕ(x) dx + lim+ |x|2 ε→0
(ln ε) |x|=ε
xj ϕ(x) dx + lim εn−1 (ln ε) |x|2 ε→0+
xj ϕ(x) dσ(x) ε
S n−1
ωj ϕ(εω) dσ(ω) (12.2.6)
For the third and last equality in (12.2.6) we used Lebesgue’s dominated x convergence theorem [note that |x|j2 , ln |x| ∈ L1loc (Rn ) and ϕ is compactly supported]. The fourth equality uses the integration by parts formula (13.7.4). Also, for the last equality, in the integral on ∂B(0, ε) we made the change of variables x = εω with ω ∈ S n−1 . One more application of Lebesgue’s dominated convergence theorem yields ωj ϕ(εω) dσ(ω) = ϕ(0) ωj dσ(ω) = 0, (12.2.7) lim ε→0+
S n−1
S n−1
where the last equality is due to the fact that the integral of an odd function over the unit sphere is zero. Moreover, (2.1.9) implies lim+ εn−1 (ln ε) = 0 (recall ε→0
that we are assuming n ≥ 2). Returning with all these comments to (12.2.6) we arrive at the conclusion that " % ! xj $ ,ϕ for every ϕ ∈ C0∞ (Rn ). ∂j (ln |x|), ϕ = 2 |x| Hence, if n ≥ 2 then ∂j (ln |x|) =
xj |x|2
in D (Rn ).
CHAPTER 12. SOLUTIONS TO SELECTED EXERCISES
384
Exercise 2.112 Fix j ∈ N. Using the change of variables y = jx − jt the expression for ψj becomes ψj (x) =
jx−1
θ(y) dy jx−j 2
for x ∈ R.
1 Hence ψj (x) = 0 if x ≤ 0, while for x > 0 we have ψj (x) = −1 θ(y) dy = 1 if j ≥ j0 , where j0 ∈ N is such that j0 x−j02 ≤ −1 and j0 x−1 ≥ 1. This shows that
1 ψj converges pointwise to H as j → ∞. In addition, |ψj (x)| ≤ −1 |θ(y)| dy < ∞ for every j ∈ N. Hence, by applying Lebesgue’s dominated convergence theorem, lim ψj (x)ϕ(x) dx = H(x)ϕ(x) dx for each ϕ ∈ C0∞ (R). j→∞
R
R
D (R)
This proves that ψj −−−− → H as wanted. j→∞
Exercise 2.113 If ϕ ∈ C0∞ (R) then the support condition for ϕ guarantees ∞ that only finitely many terms in the sum ϕ(j) (j) are nonzero, hence u is j=1
well-defined. Clearly u is linear. If K is a compact set in R and k ∈ N is such that K ⊆ [−k, k], then |u(ϕ)| ≤
k (j) ϕ (j) ≤ k j=1
sup
x∈K, j≤k
(j) ϕ (x)
for ϕ ∈ C0∞ (R) with supp ϕ ⊆ K.
This proves that u ∈ D (R). Suppose the distribution u has finite order. Then there exists a nonnegative integer k0 with the property that for each compact set K ⊂ R there is a finite constant CK ≥ 0 such that |u, ϕ| ≤ CK sup |ϕ(j) (x)| for every ϕ ∈ C0∞ (R) x∈K, j≤k0
with supp ϕ ⊆ K. In particular, from this and the definition of u it follows that there exists that whenever ϕ ∈ C0∞ (R) satisfies the condition C ∈1 [0, ∞) 3such supp ϕ ⊆ k0 + 2 , k0 + 2 we have |ϕ(k0 +1) (k0 + 1)| = |u, ϕ| ≤ C
sup
|ϕ() (x)|.
(12.2.8)
x∈ k0 + 12 ,k0 + 13 , ≤k0
Now consider θ ∈ C0∞ (−1/2, 1/2) satisfying θ(0) = 1 and construct the sequence of smooth functions ϕj (x) := θ(jx − j(k0 + 1)) for x ∈ R and j ∈ N. Then, for each j ∈ N, we have 1 3 supp ϕj ⊆ k0 + , k0 + , 2 2 ()
ϕj = j θ() (j · −j(k0 + 1)), (k0 +1)
and ϕj
(k0 + 1) = j k0 +1 .
∀ ∈ N0 ,
12.2. SOLUTIONS TO EXERCISES FROM SECT. 2.10
385
Combining all these facts with (12.2.8) it follows that for each j ∈ N, j k0 +1 ≤ C
j |θ() (x)|
sup
x∈ − 12 , 12 , ≤k0
≤ C max θ() L∞ ([−1/2,1/2]) : ≤ k0 j k0 .
(12.2.9)
Choosing now j sufficiently large in (12.2.9) leads to a contradiction. Hence u does not have finite order. Exercise 2.114 Note that for each j ∈ N we have fj ∈ L1loc (Rn ). Take some ϕ ∈ C0∞ (Rn ) and suppose R > 0 is such that supp ϕ ⊂ B(0, R). Then ϕ(x) − ϕ(0) 1 1 ϕ(0) dx + (12.2.10) fj , ϕ = 1 1 dx. n− n− j B(0,R) |x| j j j B(0,R) |x| Using the mean value theorem and then (13.8.6) we may further write 1 ϕ(x) − ϕ(0) ∇ϕL∞ (Rn ) 1 dx ≤ dx n−1− 1j j B(0,R) |x|n− 1j j B(0,R) |x| =
ωn−1 ∇ϕL∞ (Rn ) j
1
R
1
ρ j dρ = 0
ωn−1 ∇ϕL∞ (Rn ) R j +1 −−−→ 0. j→∞ j+1 (12.2.11)
One more use of (13.8.6) implies 1 j
B(0,R)
1 |x|
n− 1j
dx =
ωn−1 j
R
1
ρ j −1 dρ
0 1
= ωn−1 R j −−−→ ωn−1 . j→∞
(12.2.12)
By combining (12.2.10)–(12.2.12) we obtain lim fj , ϕ = ωn−1 ϕ(0) = ωn−1 δ, ϕ . j→∞
The desired conclusion now follows. Exercise 2.115 Note that fε ∈ L1loc (R) for every ε > 0, hence fε ∈ D (R). Also, lim fε (x) = 0 for every x ∈ R \ {0}. For a given function ϕ ∈ C0∞ (R) let ε→0+
R > 0 be such that supp ϕ ⊂ (−R, R) and for ε > 0 write
R
ϕ(0)ε xε ϕ(x) − ϕ(0) dx + · x2 + ε2 x π
R
dx =: I + II. 2 + ε2 x −R −R (12.2.13) xε Then x2 +ε2 ≤ 1/2 for each x ∈ R if ε > 0, so we may apply Lebesgue’s dominated convergence theorem to conclude that lim I = 0. Also, integrating the 1 fε , ϕ = π
ε→0+
CHAPTER 12. SOLUTIONS TO SELECTED EXERCISES
386
second term and then taking the limit yields lim+ II = ε→0
2ϕ(0) π
ϕ(0). In conclusion, lim+ fε , ϕ = ϕ(0) = δ, ϕ as desired.
lim arctan(R/ε) =
ε→0+
ε→0
∞ n Exercise √ 2.116 Let ϕ ∈ C0 (R ) and ε > 0. Using first the change of variables x = 2 εy and then Lebesgue’s dominated convergence theorem, we have |x|2 n fε , ϕ = (4πε)− 2 e− 4ε ϕ(x) dx (12.2.14)
=π
−n 2
Rn
e
−|y|2
√ n ϕ(2 εy) dy −−−−→ π− 2 +
ε→0
Rn
2
e−|y| ϕ(0) dy = ϕ(0). Rn
Exercise 2.117 Note that for every ε > 0 we have |fε± | ≤ 1ε , hence these are functions in L1loc (R) ⊂ D (R). Let ϕ ∈ C0∞ (R) and let R > 0 be such that supp ϕ ⊂ (−R, R). Then fε± (x), ϕ Since
x x2 +ε2
R
= ϕ(0) −R
x ∓ iε dx + x2 + ε2
R
−R
x ∓ iε [ϕ(x) − ϕ(0)] dx =: I + II. x2 + ε2 (12.2.15)
is odd, we further obtain
I = ∓2iϕ(0) arctan (R/ε) −−−−→ ∓iπϕ(0) = ∓iπδ, ϕ. ε→0+
x∓iε 2 2 ε→0+ x +ε
As for II, since lim
=
1 x
(12.2.16)
for every x = 0 and
x ∓ iε |ϕ(x) − ϕ(0)| ∈ L1 (−R, R) , x2 + ε2 ϕ(x) − ϕ(0) ≤ |x| we may apply Lebesgue’s dominated convergence theorem to obtain R ! 1 " ϕ(x) − ϕ(0) dx = P.V. , ϕ . (12.2.17) lim II = x x ε→0+ −R Exercise 2.118 You may use the following outline:
(a) Show that for f ∈ L1 (R) one has lim R f (x) sin(jx) dx = 0 by reducing j→∞
to the case f ∈ C0∞ (R) based on density arguments. $ % (b) For ϕ ∈ C0∞ (R) and R > 0, write the expression π1 sinxjx , ϕ(x) as the sum of two integrals, one over the region {x ∈ R : |x| ≤ R}, the other over {x ∈ R : |x| > R}, and use (a) to obtain the desired conclusion. Recall that R sinx x dx = π.
12.2. SOLUTIONS TO EXERCISES FROM SECT. 2.10
387
Exercise 2.119 Let ϕ ∈ C0∞ (R).
√ (a) In the expression for fj , ϕ use the change of variables x = y/ j and D (R)
then Lebesgue’s dominated convergence theorem to conclude fj −−−−→ δ. j→∞
(b) Integrate by parts m + 1 times to conclude that |fj , ϕ| = j m cos(jx)ϕ(x) dx ≤ j −1 |ϕ(m+1) (x)| dx −−−→ 0, j→∞ R
R
D (R)
hence fj −−−−→ 0. j→∞
(c) Use a reasoning similar to the one in the proof of (a) to conclude that D (R)
fj −−−− → δ, this time via the change of variables x = y/j. j→∞
(d) Not convergent since if the function ϕ ∈ C0∞ (R) is such that ϕ(0) = 0, then uj , ϕ = (−1)j ϕ(1/j) and the sequence {(−1)j ϕ(1/j)}j∈N is not convergent. (e) Use the mean value theorem to obtain that uj , ϕ −−−→ ϕ (0) = −δ , ϕ. j→∞
D (R)
→ P.V. x1 . (f) fj −−−− j→∞
(g) Use a reasoning similar to the one in the proof of (a) to conclude that D (R)
fj −−−− → δ, this time via the change of variables x = y/j and recalling j→∞
(sin y)2 that R y2 dy = π. (h) Integrate by parts m + 1 times and then use Lebesgue’s dominated conD (R)
→ 0. vergence theorem to conclude that fj −−−− j→∞
(j) Integrate by parts twice to obtain ∞ uj , ϕ = iϕ(0) + i eijx ϕ (x) dx 0
1 1 = iϕ(0) − ϕ (0) − j j
∞
eijx ϕ (x) dx −−−→ iϕ(0) = iδ, ϕ.
0
j→∞
(12.2.18)
Exercise 2.120 (H(· − a)) = δa in D (R). Exercise 2.121 (uf ) = aδa + H(· − a) in D (R).
CHAPTER 12. SOLUTIONS TO SELECTED EXERCISES
388
Exercise 2.122 Let ϕ ∈ C0∞ (R). Then integration by parts yields −
f (x)ϕ (x) dx =
0
−∞
R
=−
sin(x)ϕ (x) dx −
∞
sin(x)ϕ (x) dx
0
0
∞
cos(x)ϕ(x) dx + −∞
cos(x)ϕ(x) dx,
(12.2.19)
0
hence (uf ) = cos(x)H(x) − cos(x)H(−x) in D (R). Similarly,
f (x)ϕ (x) dx = 2ϕ(0) +
0
−∞
R
sin(x)ϕ(x) dx −
∞
sin(x)ϕ(x) dx, (12.2.20) 0
hence (uf ) = 2δ − sin(x)H(x) + sin(x)H(−x) in D (R). Exercise 2.123 Let a, b ∈ I be such that a < x0 < b. Since for each x ∈ [a, x0 ) x we have f (x) = f (a) + a f (t) dt, Lebesgue’s dominated convergence theorem
x gives that lim f (x) exists and equals f (a) + a 0 f (t) dt. A similar argument − x→x0
b proves that lim f (x) = f (b) − x0 f (t) dt. Note that f ∈ L1 [a, b] . Suppose x→x+ 0
now that ϕ ∈ C0∞ (I) satisfies supp ϕ ⊂ [c, d] for some c < x0 < d. Then for ε > 0 small enough we have (uf ) , ϕ = −uf , ϕ = −
d
f (t)ϕ (t) dt
c
x0 −ε
= −f (x0 − ε)ϕ(x0 − ε) +
f (t)ϕ(t) dt −
c
d
+ f (x0 + ε)ϕ(x0 + ε) +
f (t)ϕ(t) dt.
x0 +ε
f (t)ϕ (t) dt
x0 −ε
(12.2.21)
x0 +ε
Send ε → 0+ in (12.2.21) and observe that lim+ ε→0
x0 +ε x0 −ε
f (t)ϕ (t) dt = 0 by
Lebesgue’s dominated convergence theorem. The case when x0 is not in the interior of the support of ϕ is simpler, and can be handled via a direct integration by parts. Exercise 2.125 Use Exercise 2.123 and induction. Exercise 2.126 Use Exercise 2.123 and the fact that since {xk }k∈N has no accumulation point in I, for each R > 0 only finitely many terms of the sequence {xk }k∈N are contained in (−R, R). Exercise 2.127 Use Exercise 2.126.
12.2. SOLUTIONS TO EXERCISES FROM SECT. 2.10
389
Exercise 2.128 Clearly δΣ is well-defined and linear. Also, for each compact set K ⊂ Rn and each ϕ ∈ C0∞ (Rn ) satisfying supp ϕ ⊆ K we have |δΣ (ϕ)| ≤ σ(Σ ∩ K) sup |ϕ(x)|.
(12.2.22)
x∈K
This shows that δΣ ∈ D (Rn ) and has order zero. Also δΣ , ϕ = 0 if supp ϕ ∩ Σ = ∅, thus supp δΣ ⊆ Σ. To prove that Σ ⊆ supp δΣ , note that if x∗ ∈ Σ, then there exists a neighborhood U (x∗ ) of x∗ and a local parametrization P of class C 1 near x∗ as in (13.6.2)–(13.6.3). In particular, if u0 ∈ O is such that P (u0 ) = x∗ , then the vectors ∂1 P (u0 ), . . . , ∂n−1 P (u0 ), are linearly independent. Upon recalling the cross product from (13.6.4), this ensures that c0 := ∂1 P (u0 ) × · · · × ∂n−1 P (u0 ) = 0.
(12.2.23)
Since P is of class C 1 , it follows that ∂1 P (u) × · · · × ∂n−1 P (u) ≥ c0 /2 for ⊆ O of u0 in Rn−1 . Using each u belonging to some small open neighborhood O the fact that P : O → P (O) is a homeomorphism (see Proposition 13.37), it follows that there exists some open set V (x∗ ) in Rn contained in U (x∗ ) and = V (x∗ ) ∩ Σ. By further shrinking containing x∗ with the property that P (O) O if necessary, there is no loss of generality in assuming that V (x∗ ) is bounded. Now consider 0 < r1 < r2 < ∞ such that V (x∗ ) ⊆ B(x∗ , r1 ) ⊆ B(x∗ , r2 ). Pick a function ϕ ∈ C0∞ B(x∗ , r2 ) with ϕ ≥ 0, ϕ = 1 in a neighborhood of B(x∗ , r1 ) (see Proposition 13.26). Then ϕ(x) dσ(x) ≥ ϕ(x) dσ(x) (12.2.24) δΣ , ϕ = Σ
=
O
Σ∩V (x∗ )
> 0. ∂1 P (u) × · · · × ∂n−1 P (u) du ≥ c0 |O|
(12.2.25)
In a similar manner, for each function g satisfying g ∈ L∞ (K ∩ Σ) for any compact set K ⊆ Rn , we have gδΣ ∈ D (Rn ) and for each compact K one has |(gδΣ )(ϕ)| ≤ gL∞ (K) σ(Σ ∩ K) sup |ϕ(x)|, x∈K
∀ ϕ ∈ C0∞ (Rn ), supp ϕ ⊆ K.
Exercise 2.129 Use integration by parts (see Theorem 13.41) and Exercise 2.128. Exercise 2.130 Use the definition of distributional derivative, integration by parts (cf. Theorem 13.41), and Exercise 2.128. Exercise 2.131 Let ϕ ∈ C0∞ (Rn ) be such that supp ϕ ∩ ∂B(0, R) = ∅. In this scenario, we have |x|2ϕ−R2 ∈ C0∞ (Rn ), hence we may write ! u, ϕ = (|x|2 − R2 )u,
" ϕ = 0. |x|2 − R2
CHAPTER 12. SOLUTIONS TO SELECTED EXERCISES
390
This proves that supp u ⊆ ∂B(0, R), thus u is compactly supported. Examples of distributions satisfying the given equation include δ∂B(0,R) and δx0 , for any x0 ∈ Rn with |x0 | = R. Exercise 2.132 Use Example 2.50. Exercise 2.133 The derivative of order m is equal to: (a) sgn(x) if m = 1 and 2δ (m−2) if m ≥ 2; (b) 2δ (m−1) ; (c)
n
δ (2j) − sin x H if m = 2n + 1, n ∈ N, and
j=0
n
δ (2j−1) + cos x H if m = 2n,
j=1
n ∈ N;
(d) (sin x H) = cos x H and use (c); (e) δ1 − δ−1 − 2xχ[−1,1] if m = 1, (δ1 ) − (δ−1 ) + 2δ1 + 2δ−1 + 2χ[−1,1] if m = 2, (m−1)
and δ1
(m−1)
− δ−1
(m−2)
+ 2δ1
(m−2)
+ 2δ−1
(m−3)
− 2δ1
(m−3)
+ 2δ−1
if m ≥ 3.
Exercise 2.134 For ϕ ∈ C0∞ (R2 ) use the change of variables u = x+y, v = x−y u−v and the reasoning in (1.1.6)–(1.1.8) with ψ(u, v) := ϕ u+v , for u ∈ [2, 4], 2 2 v ∈ [0, 2], to write % $ 2 [(∂12 ϕ)(x, y) − (∂22 ϕ)(x, y)] dx dy (∂1 − ∂22 )χA , ϕ = A
4
2
=2
∂u ∂v ψ(u, v) dv du 2
0
= 2[ψ(4, 2) − ψ(2, 2) − ψ(4, 0) + ψ(2, 0)] = 2[ϕ(3, 1) − ϕ(2, 0) − ϕ(2, 2) + ϕ(1, 1)] $ % = 2[δ(3,1) − δ(2,0) − δ(2,2) + δ(1,1) ], ϕ .
(12.2.26)
Exercise 2.135 Fix ϕ ∈ C0∞ (R2 ). Then, using the change of variables x = t+ y we obtain $ % ∂1 (uf ), ϕ = − χ[0,1] (x − y)∂1 ϕ(x, y) dx dy R2
=− = R
R2
χ[0,1] (t)(∂1 ϕ)(t + y, y) dt dy
[ϕ(y, y) − ϕ(1 + y, y)] dy.
(12.2.27)
12.2. SOLUTIONS TO EXERCISES FROM SECT. 2.10
391
Similarly,

⟨∂₂(u_f), ϕ⟩ = ∫_R [ϕ(x, x − 1) − ϕ(x, x)] dx.   (12.2.28)

Combining (12.2.27)–(12.2.28) it follows that ∂₁(u_f) − ∂₂(u_f) = 0 in D′(R²) and, in turn, that ∂₁²(u_f) − ∂₂²(u_f) = (∂₁ + ∂₂)[∂₁(u_f) − ∂₂(u_f)] = 0 in D′(R²).

Exercise 2.138 The uniqueness statement is a consequence of Proposition 2.45. Let K be a compact set in Rⁿ such that K ⊂ Ω. Refine {Ω_j}_{j∈I} to a finite subcover {Ω_k}_{k=1}^{N} of K, with k ∈ I for k = 1, …, N. Consider a partition of unity {ϕ_k}_{1≤k≤N} subordinate to the cover {Ω_k}_{k=1}^{N} of K, as given by Theorem 13.29. Hence, for each k ∈ {1, …, N}, the function ϕ_k ∈ C₀^∞(Ω) satisfies supp ϕ_k ⊂ Ω_k, and Σ_{k=1}^{N} ϕ_k = 1 in a neighborhood of K. Next, for each function ϕ ∈ C₀^∞(Ω) with supp ϕ ⊆ K define u_K(ϕ) := Σ_{k=1}^{N} ⟨u_k, ϕ_k ϕ⟩. Show that Σ_{k=1}^{N} ⟨u_k, ϕ_k ϕ⟩ is independent of the chosen cover of K and of the partition of unity, thus u_K : D_K(Ω) → C is well defined. The map u_K is clearly linear. Now set u : D(Ω) → C to be the map given by u(ϕ) := u_K(ϕ) for each ϕ ∈ C₀^∞(Ω) such that supp ϕ ⊆ K. Show that this map is well defined, satisfies u ∈ D′(Ω), and u|_{Ω_j} = u_j in D′(Ω_j) for every j ∈ I.
Exercise 2.139 Fix ϕ₀ ∈ C₀^∞(Rⁿ) with the property that ∫_{Rⁿ} ϕ₀(x) dx = 1. Next, let ϕ ∈ C₀^∞(Rⁿ) be arbitrary and set λ := ∫_{Rⁿ} ϕ(x) dx. Hence, if we set ψ := ϕ − λϕ₀, then ψ ∈ C₀^∞(Rⁿ) and ∫_{Rⁿ} ψ(x) dx = 0, so

0 = ⟨u, ψ⟩ = ⟨u, ϕ⟩ − λ⟨u, ϕ₀⟩.   (12.2.29)

Consequently, ⟨u, ϕ⟩ = λ⟨u, ϕ₀⟩ = ⟨c, ϕ⟩, where c := ⟨u, ϕ₀⟩ ∈ C.

Exercise 2.140 You may proceed by completing the following steps. Fix some arbitrary ϕ ∈ C₀^∞(Rⁿ).
Step I. Prove that ∫_{Rⁿ} ϕ(x) dx = 0 if and only if there exist ϕ₁, …, ϕ_n ∈ C₀^∞(Rⁿ) such that ϕ = Σ_{j=1}^{n} ∂_j ϕ_j. Do so by induction over n. Corresponding to n = 1, show that if a, b ∈ R satisfy a < b and for ϕ ∈ C₀^∞((a, b)) one defines φ₁(x) := ∫_a^x ϕ(t) dt, x ∈ (a, b), then φ₁ ∈ C₀^∞((a, b)) if and only if ∫_a^b ϕ(x) dx = 0.
Step II. Show that the statement from Step I continues to hold if Rⁿ is replaced by (a₁, b₁) × ⋯ × (a_n, b_n), where a_j, b_j ∈ R, a_j < b_j for each j = 1, …, n.
Step III. Fix a_j, b_j ∈ R, a_j < b_j for j = 1, …, n, and consider the n-dimensional rectangle Q := (a₁, b₁) × ⋯ × (a_n, b_n). Let ϕ₀ ∈ C₀^∞(Q) be such that
CHAPTER 12. SOLUTIONS TO SELECTED EXERCISES
392
∫_Q ϕ₀ dx = 1. Then, if ϕ ∈ C₀^∞(Q) is arbitrary, the function defined by ψ := ϕ − (∫_Q ϕ dx)ϕ₀ belongs to C₀^∞(Q) and satisfies ∫_Q ψ dx = 0. As such, Step II applies and shows that there exist ϕ₁, …, ϕ_n ∈ C₀^∞(Q) such that ϕ = (∫_Q ϕ dx)ϕ₀ + Σ_{j=1}^{n} ∂_j ϕ_j. In turn, this permits us to write

⟨u, ϕ⟩ = ⟨u, ϕ₀⟩ ∫_Q ϕ dx − Σ_{j=1}^{n} ⟨∂_j u, ϕ_j⟩ = ⟨⟨u, ϕ₀⟩, ϕ⟩,   (12.2.30)

which shows that if c_Q := ⟨u, ϕ₀⟩ ∈ C, then u|_Q = c_Q in D′(Q).
Step IV. Since Ω is connected and open, it is path connected. Now combine this with the fact that u is locally constant (as proved in Step III) to finish the proof.

Exercise 2.141 By Proposition 2.73, it suffices to prove that there exists some v ∈ D′(Rⁿ⁻¹) such that ⟨u, ϕ ⊗ ψ⟩ = ⟨v, ϕ⟩⟨δ, ψ⟩ for every ϕ ∈ C₀^∞(Rⁿ⁻¹) and every ψ ∈ C₀^∞(R). Fix ϕ ∈ C₀^∞(Rⁿ⁻¹), ψ ∈ C₀^∞(R), and consider ψ₀ ∈ C₀^∞(R) with the property that ψ₀(0) = 1. Then there exists some h ∈ C₀^∞(R) satisfying ψ(x_n) − ψ(0)ψ₀(x_n) = x_n h(x_n) for every x_n ∈ R. This and the fact that x_n u = 0 allow us to write

⟨u, ϕψ⟩ = ⟨u, ϕ(ψ − ψ(0)ψ₀)⟩ + ⟨u, ϕψ₀⟩ψ(0) = ⟨x_n u, ϕh⟩ + ⟨u, ϕψ₀⟩ψ(0) = ⟨u, ϕψ₀⟩⟨δ, ψ⟩.   (12.2.31)
Now define v : D(Rⁿ⁻¹) → C by v(ϕ) := ⟨u, ϕψ₀⟩ for ϕ ∈ C₀^∞(Rⁿ⁻¹), and show that v ∈ D′(Rⁿ⁻¹). By (12.2.31) this v does the job.

Exercise 2.142 Fix ψ ∈ C₀^∞(R) with the property that ψ(0) = 1. Use Exercise 2.141 and induction to show that u = c δ(x₁) ⊗ ⋯ ⊗ δ(x_n), where c := ⟨u, ψ ⊗ ⋯ ⊗ ψ⟩ ∈ C.

Exercise 2.143 Fix ψ ∈ C₀^∞(R) with the property that ∫_R ψ(s) ds = 1. Given any function ϕ ∈ C₀^∞(Rⁿ), at each point x = (x′, x_n) ∈ Rⁿ⁻¹ × R we may write

ϕ(x) = ϕ(x) − ψ(x_n) ∫_R ϕ(x′, s) ds + ψ(x_n) ∫_R ϕ(x′, s) ds
= ∂_{x_n} ∫_{−∞}^{x_n} [ϕ(x′, t) − ψ(t) ∫_R ϕ(x′, s) ds] dt + ψ(x_n) ∫_R ϕ(x′, s) ds.

Since ∂_n u = 0 in D′(Rⁿ), this yields

⟨u, ϕ⟩ = ⟨u, ψ(x_n) ∫_R ϕ(x′, s) ds⟩.   (12.2.32)
In particular, if ϕ = ϕ₁ ⊗ ϕ₂ for some ϕ₁ ∈ C₀^∞(Rⁿ⁻¹) and some ϕ₂ ∈ C₀^∞(R), then

⟨u, ϕ⟩ = ⟨u, ϕ₁ ⊗ ψ⟩ ∫_R ϕ₂(s) ds = ⟨u, ϕ₁ ⊗ ψ⟩⟨1, ϕ₂⟩.   (12.2.33)

Define v : C₀^∞(Rⁿ⁻¹) → C by v(θ) := ⟨u, θ ⊗ ψ⟩ for every θ ∈ C₀^∞(Rⁿ⁻¹). Then v ∈ D′(Rⁿ⁻¹) and u(x′, x_n) = v(x′) ⊗ 1 when restricted to C₀^∞(Rⁿ⁻¹) ⊗ C₀^∞(R). The desired conclusion follows by recalling Proposition 2.73.

Exercise 2.146 Fix j ∈ N and note that for each x = (x₁, …, x_n) ∈ Rⁿ we may write

f_j(x) = ((1/2π) ∫_{−j}^{j} e^{ix₁ξ₁} dξ₁) ⊗ ⋯ ⊗ ((1/2π) ∫_{−j}^{j} e^{ix_nξ_n} dξ_n).   (12.2.34)

Also, for each j ∈ N and each k ∈ {1, …, n}, the fundamental theorem of calculus gives

∫_{−j}^{j} e^{ix_kξ_k} dξ_k = ∫_{−j}^{j} cos(x_kξ_k) dξ_k = 2 sin(jx_k)/x_k, assuming x_k ≠ 0.   (12.2.35)

Now use (12.2.34), (12.2.35), Exercise 2.118, and part (d) in Theorem 2.80 to finish the proof.

Exercise 2.147 Note that u = −δ is a solution for the equation in (1). Hence, if u is any other solution of the equation (x − 1)u = δ, then setting v := u + δ it follows that (x − 1)v = 0 in D′(R). Next use this and the reasoning from Example 2.68 to conclude that the general solution for the equation in (1) is −δ + c δ₁, with c ∈ C.
Fix ψ ∈ C₀^∞(R) with the property that ψ(0) = 1. Show that any solution u of the equation in (2) satisfies

⟨u, ϕ⟩ = ∫_R a(x) [ϕ(x) − ϕ(0)ψ(x)]/x dx + ⟨u, ψ⟩⟨δ, ϕ⟩, ∀ ϕ ∈ C₀^∞(R),   (12.2.36)

and use this to obtain that the general solution of the equation in (2) is v_a + c δ, c ∈ C, where v_a is the distribution given by

⟨v_a, ϕ⟩ := ∫_R a(x) [ϕ(x) − ϕ(0)ψ(x)]/x dx, ∀ ϕ ∈ C₀^∞(R).

Similarly, any solution u of the equation in (3) satisfies

⟨u, ϕ⟩ = ⟨v, (ϕ − ϕ(0)ψ)/x⟩ + ⟨u, ψ⟩⟨δ, ϕ⟩, ∀ ϕ ∈ C₀^∞(R),   (12.2.37)

so the general solution for (3) is w + c δ, where c ∈ C and w is the distribution given by

⟨w, ϕ⟩ := ⟨v, (ϕ − ϕ(0)ψ)/x⟩, ∀ ϕ ∈ C₀^∞(R).
Exercise 2.148 (a) Since H ∈ L¹_loc(R) and M_R := {(x, y) ∈ [0, ∞) × [0, ∞) : |x + y| ≤ R} is a compact set in R² for each R > 0, by Remark 2.84 and Theorem 2.85 it follows that H ∗ H is well defined and belongs to D′(R). Fix a compact set K in R and let R > 0 be such that K ⊂ (−R, R). Pick now ϕ ∈ C₀^∞(R) with supp ϕ ⊆ K, and suppose that ψ ∈ C₀^∞(R²) satisfies ψ = 1 on M_K := {(x, y) ∈ [0, ∞) × [0, ∞) : x + y ∈ K}. Then

⟨H ∗ H, ϕ⟩ = ∫_R ∫_R H(x)H(y)ϕ(x + y)ψ(x, y) dy dx = ∫₀^∞ ∫₀^∞ ϕ(x + y)ψ(x, y) dy dx.   (12.2.38)

Note that

{(x, y) ∈ [0, ∞) × [0, ∞) : |x + y| ≤ R} = {(x, y) : 0 ≤ x ≤ R, 0 ≤ y ≤ R − x},

hence

⟨H ∗ H, ϕ⟩ = ∫₀^R ∫₀^{R−x} ϕ(x + y)ψ(x, y) dy dx = ∫₀^R ∫₀^{R−x} ϕ(x + y) dy dx
= ∫₀^R ∫_x^R ϕ(z) dz dx = ∫₀^R ∫₀^z ϕ(z) dx dz = ∫₀^R zϕ(z) dz = ⟨xH, ϕ⟩.   (12.2.39)

Alternatively, one may use Remark 2.86 to observe that H ∗ H in the distributional sense is the distribution given by the function obtained by taking the convolution, in the sense of (2.8.2), of the function H with itself. Hence,

(H ∗ H)(x) = ∫₀^∞ χ_{[0,∞)}(x − y) dy = xH(x) for every x ∈ R.   (12.2.40)
(b) −xH(−x).
(c) (x² − 2 + 2 cos x)H(x).
(d) (x²/2)H(x) − ((x − 1)²/2)H(x − 1).
(e) Exercise 2.128 ensures that δ_{∂B(0,r)} is compactly supported, so the given convolution is well defined. Also, by Exercise 2.94, |x|² ∗ δ_{∂B(0,r)} ∈ C^∞(Rⁿ) and equals

⟨δ_{∂B(0,r)}, |x − y|²⟩ = ∫_{∂B(0,r)} |x − y|² dσ(y) = r^{n−1} ∫_{S^{n−1}} |x − rω|² dσ(ω)
= r^{n−1} ∫_{S^{n−1}} [r² + |x|² − 2r x·ω] dσ(ω) = (r^{n+1} + |x|² r^{n−1}) ω_{n−1}, ∀ x ∈ Rⁿ.   (12.2.41)

For the second equality in (12.2.41) we used the change of variables y = rω, ω ∈ S^{n−1}, while for the last equality we used the fact that, since x·ω as a function of ω is odd, its integral over S^{n−1} is zero.
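The identity (H ∗ H)(x) = xH(x) from part (a) lends itself to a quick numerical sanity check via a Riemann-sum convolution (a sketch, with an ad hoc grid spacing, not part of the book's argument):

```python
import numpy as np

# Discretize H on [0, 3) and convolve it with itself; on this range the
# convolution (H*H)(t) should approximate t, i.e. t*H(t) for t >= 0.
dx = 0.001
t = np.arange(0.0, 3.0, dx)
H = np.ones_like(t)                        # Heaviside samples on [0, 3)
conv = np.convolve(H, H)[: len(t)] * dx    # (H*H)(t_k) ~ sum_j H(t_j) H(t_k - t_j) dx

assert np.allclose(conv, t, atol=0.01)
```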
Exercise 2.149 We have u_j → 0 in D′(Rⁿ) and v_j → 0 in D′(Rⁿ) as j → ∞. Given that u_j, v_j ∈ E′(Rⁿ), we deduce that u_j ∗ v_j ∈ E′(Rⁿ) for each j ∈ N. Also, u_j ∗ v_j → δ in D′(Rⁿ) as j → ∞.

Exercise 2.150 The limits in (a) and (b) do not exist. Since for each j ∈ N we have f_j ∈ E′(R) and g_j ∈ C^∞(R), Exercise 2.94 may be used to conclude that f_j ∗ g_j ∈ C^∞(R) and that f_j ∗ g_j = 1 for every j. Thus, f_j ∗ g_j → 1 in D′(R) as j → ∞.
Exercise 2.151 Note that Λ is well defined based on Proposition 2.93. You may want to use Theorem 13.5 to prove the continuity of Λ.

Exercise 2.152 For f : Rⁿ → C set f^∨(x) := f(−x), x ∈ Rⁿ. If u ∈ D′(Rⁿ) is such that u ∗ ϕ = 0 for every ϕ ∈ C₀^∞(Rⁿ), then 0 = (u ∗ ϕ^∨)(0) = ⟨u, ϕ⟩ for every ϕ ∈ C₀^∞(Rⁿ), thus u = 0. This proves the uniqueness part in the statement. As for existence, given Λ as specified, define u₀ : D(Rⁿ) → C by u₀(ϕ) := ⟨δ, Λ(ϕ^∨)⟩ for ϕ ∈ C₀^∞(Rⁿ). Being a composition of linear and continuous maps, u₀ is linear and continuous, thus u₀ ∈ D′(Rⁿ). Also, if ϕ ∈ C₀^∞(Rⁿ) is fixed, we have

(u₀ ∗ ϕ)(x) = ⟨u₀, ϕ(x − ·)⟩ = ⟨u₀, (ϕ^∨)(· − x)⟩ = ⟨δ, Λ(ϕ(· + x))⟩ = ⟨δ, (Λϕ)(· + x)⟩ = (Λϕ)(x), ∀ x ∈ Rⁿ.   (12.2.42)

For the first equality in (12.2.42) we used Proposition 2.93, the third equality is based on the definition of u₀, while the fourth equality uses the fact that Λ commutes with translations.

Exercise 2.153 From the hypotheses we obtain ⟨u, P⟩ = 0 for every polynomial P in Rⁿ. Now use Lemma 2.75 to conclude that ⟨u, ϕ⟩ = 0 for every ϕ ∈ C₀^∞(Rⁿ). Let ψ ∈ C₀^∞(Rⁿ) be such that ψ = 1 in a neighborhood of supp u. Then for every function ϕ ∈ C^∞(Rⁿ) we have 0 = ⟨u, ψϕ⟩ = ⟨u, ϕ⟩, since the support condition on u implies ⟨u, (1 − ψ)ϕ⟩ = 0.
Exercise 2.154 Fix ϕ ∈ C^∞(R) and write

Σ_{j=1}^{k} ϕ(1/j) − kϕ(0) − ϕ′(0) ln k = Σ_{j=1}^{k} [ϕ(1/j) − ϕ(0) − (1/j)ϕ′(0)] + ϕ′(0)[Σ_{j=1}^{k} 1/j − ln k].   (12.2.43)

Since

|ϕ(1/j) − ϕ(0) − (1/j)ϕ′(0)| ≤ (1/j²) ‖ϕ″‖_{L^∞([0,1])}, ∀ j ∈ N,

taking the limit as k → ∞ in (12.2.43) [also recall Euler's constant γ from (4.6.23)] we obtain

u(ϕ) = Σ_{j=1}^{∞} [ϕ(1/j) − ϕ(0) − (1/j)ϕ′(0)] + γϕ′(0).

Now apply Fact 2.55 with K := [0, 1], m := 2, and C := Σ_{j=1}^{∞} 1/j² + γ to conclude u ∈ E′(R). Also, show that supp u = {0} ∪ {1/j : j ∈ N}.
≤
(12.2.44)
Exercise 2.156 Since fj ∈ L1comp(R) for each j ∈ N, we have {fj }j∈N ⊂ E (R). Let ϕ ∈ C0∞ (R) and suppose R ∈ (0, ∞) is such that supp ϕ ⊂ (−R, R). Then for j ∈ N with j ≥ R we have R j 2Rϕ ∞ ϕ(x) ϕ(x) L (R) dx = dx ≤ (12.2.45) |fj , ϕ| = j j j −j −R D (Rn )
which proves that fj −−−−→ 0. Suppose next that there exists a distribution
u ∈ E (R ) such that n
for every ϕ ∈
j→∞ E (Rn ) fj −−−−→ j→∞
C0∞ (R).
u. In particular, we have lim fj , ϕ = u, ϕ j→∞
Together with what we have proved before, this forces
12.3. SOLUTIONS TO EXERCISES FROM SECT. 3.3
397
u = 0. However, fj , 1 = 2 for every j ∈ N, which contradicts the fact that u = 0. Thus, {fj }j∈N does not converge in E (Rn ). Exercise 2.157 Since fj ∈ C0∞ (R) for each j ∈ N, we have {fj }j∈N ⊂ E (R). Let ϕ ∈ C ∞ (Rn ). Then using the change of variables jx = y we may write j n ψ(jx)ϕ(x) dx − ϕ(0) ψ(x) dx |fj , ϕ − δ, ϕ| = Rn
Rn
= ψ(y)ϕ(y/j) dy − ϕ(0) ψ(x) dx Rn Rn ≤ |ϕ(y/j) − ϕ(0)||ψ(y)| dy → 0 as j → ∞, (12.2.46) supp ψ
where the convergence is based on Lebesgue’s dominated convergence theorem. Exercise 2.158 For each k ∈ {1, . . . , n} consider the sequence {δxj , ϕk }j∈N where the function ϕk is defined by ϕk (x) := xk for each x = (x1 . . . , xn ) ∈ Rn .
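The scaling mechanism jⁿψ(jx) → δ in Exercise 2.157 can be illustrated numerically in one dimension. The sketch below (not from the book) uses ψ(x) = e^{−x²}/√π, a hypothetical stand-in for the C₀^∞ bump (the mechanism only needs ∫ψ = 1 and concentration of mass), and the test function ϕ(x) = cos x, so the pairing should approach ϕ(0) = 1.

```python
import math

# <j ψ(j·), φ> by midpoint quadrature, with ψ a normalized Gaussian.
def pairing(j, n=200000, L=10.0):
    dx = 2 * L / n
    total = 0.0
    for k in range(n):
        x = -L + (k + 0.5) * dx
        total += j * math.exp(-(j * x) ** 2) / math.sqrt(math.pi) * math.cos(x) * dx
    return total

val = pairing(50)          # should be close to φ(0) = 1 for large j
assert abs(val - 1.0) < 1e-3
```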
12.3
Solutions to Exercises from Sect. 3.3
Exercise 3.32 Let R > 0 be such that supp ϕ ⊂ B(0, R). Then ϕ_j ∈ C₀^∞(Rⁿ) and supp ϕ_j ⊂ B(0, jR) for each j ∈ N. Since there is no compact K ⊂ Rⁿ such that supp ϕ_j ⊂ K for all j ∈ N, the sequence {ϕ_j}_{j∈N} does not converge in D(Rⁿ). Also, for each α ∈ N₀ⁿ we have ∂^α ϕ_j(x) = j^{−|α|} e^{−j} (∂^α ϕ)(x/j). Hence, if we also take β ∈ N₀ⁿ arbitrary, then

sup_{x∈Rⁿ} |x^β ∂^α ϕ_j(x)| ≤ e^{−j} j^{−|α|} sup_{x∈B(0,jR)} |x^β (∂^α ϕ)(x/j)|
≤ e^{−j} j^{−|α|} sup_{x∈B(0,jR)} |x^β| ‖∂^α ϕ‖_{L^∞(B(0,R))}
≤ e^{−j} j^{|β|−|α|} R^{|β|} ‖∂^α ϕ‖_{L^∞(B(0,R))} → 0 as j → ∞,   (12.3.1)

which implies that {ϕ_j}_{j∈N} converges to zero in S(Rⁿ).

Exercise 3.33 For each α ∈ N₀ⁿ we have ∂^α ϕ_j(x) = j^{−|α|−1} (∂^α ϕ)(x/j), for each x ∈ Rⁿ. Consequently, for every compact subset K of Rⁿ we may write
sup_{x∈K} |∂^α ϕ_j(x)| ≤ j^{−|α|−1} sup_{x∈K} |(∂^α ϕ)(x/j)| ≤ j^{−|α|−1} ‖∂^α ϕ‖_{L^∞(Rⁿ)} → 0 as j → ∞,   (12.3.2)

which proves that ϕ_j → 0 in E(Rⁿ) as j → ∞. Moreover, if

O := {x = (x₁, …, x_n) ∈ Rⁿ : x_j ≠ 0 for j = 1, …, n},
we claim that there exists x∗ ∈ O with the property that ϕ(x∗) ≠ 0. Indeed, if this were not the case, we would have ϕ = 0 in O, hence ϕ = 0 in Rⁿ since ϕ is continuous and O is dense in Rⁿ. Having fixed such a point x∗ we then proceed to estimate

sup_{x∈Rⁿ} |x^β ϕ_j(x)| = j^{−1} sup_{x∈Rⁿ} |x^β ϕ(x/j)| = j^{|β|−1} sup_{x∈Rⁿ} |x^β ϕ(x)|
≥ j^{|β|−1} |x∗^β| |ϕ(x∗)| → ∞ as j → ∞,   (12.3.3)

for β ∈ N₀ⁿ satisfying |β| > 1. Thus, {ϕ_j}_{j∈N} does not converge in S(Rⁿ).

Exercise 3.34 Clearly {ϕ_j}_{j∈N} ⊂ C^∞(R). If m ∈ N then for each j ∈ N we have

ϕ_j^{(m)}(x) = (1/j) Σ_{k=0}^{m} [m!/(k!(m−k)!)] ψ^{(k)}(x) θ^{(m−k)}(x/j) j^{k−m}, ∀ x ∈ R.   (12.3.4)

Hence, if ℓ ∈ N₀, then using the properties of θ and ψ we may write

sup_{x∈R} |x^ℓ ϕ_j^{(m)}(x)| = (1/j) sup_{x∈R} |x^ℓ Σ_{k=0}^{m} [m!/(k!(m−k)!)] ψ^{(k)}(x) θ^{(m−k)}(x/j) j^{k−m}|
≤ (1/j) sup_{|x|≤1} |ψ^{(m)}(x)| + (1/j) Σ_{k=0}^{m} [m!/(k!(m−k)!)] sup_{x>1} |e^{−x} θ^{(m−k)}(x/j) x^ℓ| j^{k−m}
≤ C/j + (C/j) sup_{x>1} |e^{−x} x^ℓ| ≤ C/j → 0 as j → ∞.   (12.3.5)

In conclusion, ϕ_j → 0 in S(R) as j → ∞.
Exercise 3.35
(a) Not in S(Rⁿ) since it is not bounded, given that lim_{x₁→−∞} e^{−x₁} = ∞.
(b) Since e^{−|x|²} ∈ S(Rⁿ) and |x|^{2n!} ∈ L(Rⁿ), by (a) in Theorem 3.13, their product is in S(Rⁿ).
(c) (1 + |x|²)² is a polynomial function, and if (1 + |x|²)^{−2} ∈ S(Rⁿ) then their product, which is equal to 1, would belong to S(Rⁿ), which is not true.
(d) Show first that sin(e^{−|x|²}) ∈ S(Rⁿ). Then, given that 1/(1 + |x|²) ∈ L(Rⁿ), it follows that sin(e^{−|x|²})/(1 + |x|²) ∈ S(Rⁿ).
(e) Not in S(Rⁿ) since lim_{|x|→∞} (1 + |x|²)^{n+1} cos(e^{−|x|²})/(1 + |x|²)ⁿ = ∞.
(f) Set ϕ(x) := e^{−|x|²} sin(e^{x₁²}), x ∈ Rⁿ. If ϕ were to belong to S(Rⁿ) then ∂₁ϕ and x₁ϕ would be bounded. However, for every x = (x₁, …, x_n) ∈ Rⁿ,

∂₁ϕ(x) = 2e^{−x₂²−x₃²−⋯−x_n²} x₁ [cos(e^{x₁²}) − e^{−x₁²} sin(e^{x₁²})].   (12.3.6)

In particular,

(∂₁ϕ)(x₁, 0, …, 0) + 2x₁ϕ(x₁, 0, …, 0) = 2x₁ cos(e^{x₁²}),

which is not bounded.
(g) Since A is positive definite, there exists a real, symmetric, positive definite n × n matrix B such that B² = A. Hence, ϕ(x) = e^{−|Bx|²} for every x ∈ Rⁿ, which means that ϕ is the composition between the function e^{−|x|²} ∈ S(Rⁿ) (recall Exercise 3.3) and the linear transformation B that maps S(Rⁿ) into itself. Recalling Exercise 3.16 we may conclude that ϕ ∈ S(Rⁿ).

Exercise 3.36 From part (g) in Exercise 3.35 we have f ∈ S(Rⁿ). Let B ∈ M_{n×n}(R) be such that B² = A, so that f(x) = e^{−|Bx|²} for every x ∈ Rⁿ. Now use Exercise 3.23 and Example 3.21.
Exercise 3.37 Let A := [ 1 1/2 ; 1/2 1 ]. Then f(x) = e^{−(Ax)·x} for every x ∈ R², and Exercise 3.36 applies and yields

f̂(ξ) = (2π/√3) e^{−(ξ₁² − ξ₁ξ₂ + ξ₂²)/3} for ξ = (ξ₁, ξ₂) ∈ R².   (12.3.7)
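Formula (12.3.7) can be verified by brute-force quadrature of the Fourier integral (a numerical sketch, assuming the convention f̂(ξ) = ∫ f(x) e^{−ix·ξ} dx used in the book; the sample frequency ξ = (0.7, −0.3) is an arbitrary choice):

```python
import numpy as np

A = np.array([[1.0, 0.5], [0.5, 1.0]])
xs = np.linspace(-8.0, 8.0, 801)
dx = xs[1] - xs[0]
X1, X2 = np.meshgrid(xs, xs, indexing="ij")
f = np.exp(-(X1**2 + X1 * X2 + X2**2))        # (Ax)·x = x1^2 + x1*x2 + x2^2
xi1, xi2 = 0.7, -0.3                           # arbitrary sample frequency
num = np.sum(f * np.exp(-1j * (X1 * xi1 + X2 * xi2))) * dx**2
ref = 2 * np.pi / np.sqrt(3) * np.exp(-(xi1**2 - xi1 * xi2 + xi2**2) / 3)

assert abs(num - ref) < 1e-6
```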
Exercise 3.38 Use (b) in Theorem 3.20 and Example 3.21.

Exercise 3.39 Since sin(x·x₀) = (1/2i)(e^{ix·x₀} − e^{−ix·x₀}), matters reduce to the case n = 1, after which we may apply Exercise 3.22. Hence, if x = (x₁, …, x_n), x₀ = (x₀₁, …, x₀ₙ), and ξ = (ξ₁, …, ξₙ), we may write

F(e^{−a|x|²} sin(x·x₀))(ξ) = (1/2i)[Π_{j=1}^{n} F(e^{−ax_j² + ix_jx₀ⱼ}) − Π_{j=1}^{n} F(e^{−ax_j² − ix_jx₀ⱼ})]
= (1/2i)(π/a)^{n/2} [Π_{j=1}^{n} e^{−(ξ_j − x₀ⱼ)²/(4a)} − Π_{j=1}^{n} e^{−(ξ_j + x₀ⱼ)²/(4a)}]
= (1/2i)(π/a)^{n/2} [e^{−|ξ−x₀|²/(4a)} − e^{−|ξ+x₀|²/(4a)}].   (12.3.8)
Exercise 3.40 Fix ϕ ∈ S(R). Suppose ψ ∈ S(R) is such that ψ′ = ϕ. Then

∫_R ϕ(x) dx = lim_{R₁,R₂→∞} ∫_{−R₁}^{R₂} ψ′(x) dx = lim_{R₁,R₂→∞} [ψ(R₂) − ψ(−R₁)] = 0.

Conversely, if ∫_R ϕ(x) dx = 0, then the function ψ(x) := ∫_{−∞}^{x} ϕ(t) dt, for x ∈ R, belongs to S(R) and ψ′ = ϕ.

Exercise 3.41 Use Exercise 3.40.
12.4
Solutions to Exercises from Sect. 4.11
Exercise 4.103 Using the change of variables y = −x on the region corresponding to x < 0, we have

∫_{|x|≥ε} ϕ(x)/x dx = ∫_ε^∞ [ϕ(x) − ϕ(−x)]/x dx, ∀ ϕ ∈ S(R).   (12.4.1)

Moreover, for each ε > 0 and each ϕ ∈ S(R) we may write

|∫_{|x|≥ε} ϕ(x)/x dx| ≤ ∫_ε^1 |ϕ(x) − ϕ(−x)|/x dx + ∫_1^∞ [|ϕ(x) − ϕ(−x)| x]/x² dx
≤ 2 sup_{x∈R} |ϕ′(x)| + 2 sup_{x∈R} |xϕ(x)| ∫_1^∞ dx/x²,   (12.4.2)

so that P.V. 1/x is well defined and

|⟨P.V. 1/x, ϕ⟩| ≤ (1 + 2 ∫_{|x|>1} dx/|x|²) · sup_{x∈R, k=0,1, j=0,1} |x^k ϕ^{(j)}(x)|, ∀ ϕ ∈ S(R),   (12.4.3)

hence (4.1.2) holds for m = k = 1. Since P.V. 1/x is also linear, we necessarily have P.V. 1/x ∈ S′(R).

Exercise 4.104 Use Exercise 2.107, (2.1.9), and Exercise 4.5.

Exercise 4.105 Use Exercise 2.109 and (4.1.25).

Exercise 4.106 Use a reasoning similar to the proof of the fact that the function in (4.1.27) is not a tempered distribution to conclude that e^{ax}H(x) and e^{−ax}H(−x) do not belong to S′(R), while e^{−ax}H(x), e^{ax}H(−x) ∈ S′(R).
Exercise 4.107 Let ϕ ∈ S(R) and j ∈ N.
(a) We may write

⟨x/(x² + j⁻²), ϕ⟩ = ∫_{|x|>1} x ϕ(x)/(x² + j⁻²) dx + ∫_{|x|≤1} [x/(x² + j⁻²)] · [ϕ(x) − ϕ(0)] dx + ϕ(0) ∫_{|x|≤1} x/(x² + j⁻²) dx.   (12.4.4)

The last integral in (12.4.4) is equal to zero (the integrand is odd), while for the other two integrals in (12.4.4) apply Lebesgue's dominated convergence theorem to obtain that the first converges to ∫_{|x|>1} ϕ(x)/x dx and the second converges to ∫_{|x|≤1} [ϕ(x) − ϕ(0)]/x dx. In conclusion, x/(x² + j⁻²) → P.V. 1/x in S′(R) as j → ∞.
(b) Making the change of variables x = y/j and then applying Lebesgue's dominated convergence theorem, write

⟨1/(j(x² + j⁻²)), ϕ⟩ = ∫_R ϕ(x)/(j(x² + j⁻²)) dx = ∫_R ϕ(y/j)/(y² + 1) dy → πϕ(0) as j → ∞,   (12.4.5)

hence 1/(j(x² + j⁻²)) → πδ in S′(R) as j → ∞.
(c) See Exercise 2.118: sin(jx)/(πx) → δ in S′(R) as j → ∞.
(d) Prove first that e^{j²}δ_j → 0 in D′(R) as j → ∞. If {e^{j²}δ_j}_{j∈N} were to converge in S′(R), it would have to converge to 0. However, for the test function e^{−x²/2} ∈ S(R) one has

⟨e^{j²}δ_j, e^{−x²/2}⟩ = e^{j²/2} → ∞ as j → ∞.
j→∞
Exercise 4.108 Use the change of variables jx = y and Lebesgue's dominated convergence theorem to show that jⁿθ(jx) → δ in S′(Rⁿ) as j → ∞.
Exercise 4.109 Based on Lebesgue's dominated convergence theorem, it follows that f_j → e^{x²} in D′(R) as j → ∞. On the other hand, the sequence {f_j}_{j∈N} does not converge in S′(R) since

⟨f_j, e^{−x²/2}⟩ = ∫_{−j}^{j} e^{x²/2} dx → ∫_R e^{x²/2} dx = ∞ as j → ∞.
Exercise 4.110 If there were some f ∈ L^p(R) such that lim_{j→∞} ‖f_j − f‖_{L^p(R)} = 0, then lim_{j→∞} f_j(x) = f(x) for almost every x ∈ R (at least along a subsequence). Since lim_{j→∞} f_j(x) = 0 for every x ∈ R, we have that f = 0 almost everywhere on R. This leads to a contradiction given that we would have 0 = lim_{j→∞} ‖f_j‖^p_{L^p(R)} = lim_{j→∞} ∫_{j−1}^{j} 1 dx = 1. Hence, the sequence {f_j}_{j∈N} does not converge in L^p(R). If ϕ ∈ S(R) then

|⟨f_j, ϕ⟩| ≤ sup_{x∈R} |x²ϕ(x)| ∫_{j−1}^{j} dx/x² ≤ C/(j(j − 1)) → 0 as j → ∞,   (12.4.6)

which shows that f_j → 0 in S′(R) as j → ∞.

Exercise 4.111 Using Lebesgue's dominated convergence theorem it is not difficult to check that ⟨u_j, ϕ⟩ → ⟨e^x sin(e^x), ϕ⟩ as j → ∞ for every ϕ ∈ C₀^∞(R). If ϕ ∈ S(R), integration by parts yields

∫_{−j}^{j} e^x sin(e^x)ϕ(x) dx = [−cos(e^x)ϕ(x)]_{−j}^{j} + ∫_{−j}^{j} cos(e^x)ϕ′(x) dx.   (12.4.7)

Since ϕ ∈ S(R) one has cos(e^x)ϕ′ ∈ L¹(R) and

|[cos(e^x)ϕ(x)]_{−j}^{j}| = |[cos(e^x) xϕ(x)/x]_{−j}^{j}| ≤ C/j → 0 as j → ∞.

Thus, lim_{j→∞} ⟨u_j, ϕ⟩ = ∫_R cos(e^x)ϕ′(x) dx, and if u : S(R) → C is such that

⟨u, ϕ⟩ := ∫_R cos(e^x)ϕ′(x) dx for every ϕ ∈ S(R),   (12.4.8)

then u is a well-defined, linear mapping and

|⟨u, ϕ⟩| ≤ ∫_R dx/(1 + x²) · sup_{x∈R} |(1 + x²)ϕ′(x)| for every ϕ ∈ S(R).   (12.4.9)

Hence u ∈ S′(R) and u_j → u in S′(R) as j → ∞.
Exercise 4.113 Fix f ∈ S′(R) and let ψ ∈ S(R) be such that ψ(0) = 1. Then, if u is a solution of the equation xu = f in S′(R), for each ϕ ∈ S(R) we have

⟨u, ϕ⟩ = ⟨u, x·(ϕ − ϕ(0)ψ)/x⟩ + ⟨u, ϕ(0)ψ⟩ = ⟨f, (ϕ − ϕ(0)ψ)/x⟩ + ⟨u, ψ⟩⟨δ, ϕ⟩.   (12.4.10)

Set

g(x) := [ϕ(x) − ϕ(0)ψ(x)]/x if x ∈ R \ {0}, and g(0) := ϕ′(0) − ϕ(0)ψ′(0).   (12.4.11)

Note that g ∈ S(R), since g(x) = ∫₀¹ [ϕ′(tx) − ϕ(0)ψ′(tx)] dt for x ∈ R. Thus, if we define

ϕ̃(x) := [ϕ(x) − ϕ(0)ψ(x)]/x if |x| > 1, and ϕ̃(x) := g(x) if |x| ≤ 1,   (12.4.12)

then ϕ̃ ∈ S(R) and xϕ̃ = ϕ − ϕ(0)ψ. There remains to observe that the mapping

w_f : S(R) → C, ⟨w_f, ϕ⟩ := ⟨f, ϕ̃⟩ ∀ ϕ ∈ S(R),   (12.4.13)

where ϕ̃ is as in (12.4.12), satisfies w_f ∈ S′(R), in order to conclude that u = w_f + cδ for some c ∈ C.
Exercise 4.114 Suppose u ∈ S (Rn ) is a solution of the equation e−|x| u = 1 in S (Rn ). Then for ϕ ∈ C0∞ (Rn ) we have $ $ $ 2 % 2 2 % 2 % u, ϕ = u, e−|x| e|x| ϕ = 1, e|x| ϕ = e|x| , ϕ , (12.4.14) 2 which shows that uC ∞ (Rn ) = e|x| . Since C0∞ (Rn ) is sequentially dense in S(Rn ), 0
2
this would imply that u = e|x| . However, as proved following Remark 4.15, 2 e|x| ∈ S (Rn ). Exercise 4.115 Since H is Lebesgue measurable and (1 + x2 )−1 H ∈ L1 (R), by Exercise 4.5 it follows that H defines a tempered distribution. We have seen that H ∈ D (R) and H = δ in D (R) (recall Example 2.37). Since δ ∈ S (R), of by (4.1.25) it follows that H = δ in S (R). Taking the Fourier transform the 1 3 last equation we arrive at iξ H = 1 in S (R). On the other hand, ξ P.V. ξ = 1 3 − P.V. 1 = 0 in in S (R) by Exercise 4.103 and (2.3.6). Consequently, ξ iH ξ 3 − P.V. 1 = cδ, for S (R). Now Example 2.68 may be used to conclude that iH ξ some c ∈ C. Hence 3 = −iP.V. 1 − cδ H in S (R). (12.4.15) ξ 3 to the function ϕ(x) := e−x2 ∈ S(R). First, by ExamTo determine c apply H ple 3.21 write ∞ % $ % $ √ % √ $ 2 2 −x2 = H, πe−x /4 = 3 e−x2 = H, e5 π e−x /4 dx H, √
=
π 2
0 ∞
e−x
2
/4
−x2 /4 (0) = π. dx = e
(12.4.16)
−∞ −x2
Second, from (12.4.15) and the fact that e x is an odd function, we obtain 2 ! " 2 1 e−x − i P.V. − cδ, e−x = −i lim dx − c = −c. (12.4.17) ξ ε→0+ |x|>ε x 3 = −i P.V. 1 + πδ. Combining (12.4.15)–(12.4.17), we arrive at H ξ
CHAPTER 12. SOLUTIONS TO SELECTED EXERCISES
404
Exercise 4.116 You may use Exercise 4.115 to show that 1 (ξ) = 2πiH(ξ) − iπ = iπ sgn (ξ) F P.V. x
in S (R).
Exercise 4.117 (a) sgn = H − H ∨ , hence using Exercise 4.115 one may write 1 1 1 ∨ = πδ − i P.V. 3 −H 5 − πδ − i P.V. = −2i P.V. s5 gn = H ξ ξ ξ
in S (R).
(12.4.18) Alternatively, you may use Exercise 4.116 and take another Fourier transform. 5 5k = x kδ 3 = (−D)k 3 (b) If k is even, then |x|k = xk , so that |x| δ3 = 2πik δ (k) in k k S (R). If k is odd, then |x| = x sgn x, thus 1 (k) 5k = x k sgn x = (−D)k sgn x = −2ik+1 P.V. |x| ξ
in
S (R). (12.4.19)
(c) You may use Example 4.34. Alternatively, take the Fourier transform of = sin(ax) in S (R), then use (c) in Theorem 4.25 the identity x · sin(ax) x and (4.2.42) to conclude that & ' sin(ax) = πδ−a − πδa F x
in S (R).
Now invoke Exercise 2.120 and Proposition 2.40 to arrive at F
sin(ax) x
= π sgn(a)χ[−|a|,|a|] + c
in S (R),
and then show that c = 0 by applying the Fourier transform to the last identity and recalling (4.2.39). (d) Take the Fourier transform of the identity x·
sin(ax) sin(bx) = sin(ax) sin(bx) x
in S (R),
then use (c) in Theorem 4.25 and (4.2.45) to conclude that & ' iπ sin(ax) sin(bx) [δa+b − δa−b − δb−a + δ−a−b ] = F x 2 Now use Exercise 2.120 and Proposition 2.40 to obtain
in S (R).
12.4. SOLUTIONS TO EXERCISES FROM SECT. 4.11 F
405
sin(ax) sin(bx) x iπ [H(x−a−b)−H(x−a+b)−H(x−b+a)+H(x+a+b)] +c0 2 iπ sgn(b) χ[−a−|b|,−a+|b|] −χ[a−|b|,a+|b|] +c0 = (12.4.20) 2 =
in S (R), and then show that c0 = 0 by applying the Fourier transform to the last identity. 2 2 1 eix − e−ix , (4.2.21) and (4.2.50), we (e) Using the identity sin(x2 ) = 2i may write 1 5 2) = −ix2 eix2 − e sin(x 2i ' √ & 2 π i π −i ξ2 −i π i ξ4 4 4 4 −e e = e e 2i ' √ & 2 π i π−ξ2 −i π−ξ 4 4 −e = . e 2i
(12.4.21)
(f) Use (4.6.25), (4.2.3), and Proposition 4.30 to show ln |x| = −2πγδ − πwχ(−1,1)
in S (R).
Exercise 4.118 From Exercise 2.128 we know that δ∂B(0,R) ∈ E (R3 ), thus by (b) in Theorem 4.33 it follows that % $ (ξ) = δ∂B(0,R) , e−ix·ξ = e−ix·ξ dσ(x), ∀ ξ ∈ R3 . δ∂B(0,R) ∂B(0,R)
(12.4.22) Check that δ∂B(0,R) is invariant under orthogonal transformations, and conclude is invariant under orthogonal transformations. Fix ξ ∈ R3 \ {0} that δ∂B(0,R) and show that there exists an orthogonal transformation A ∈ M3×3 (R) such
(ξ) = that Aξ = (|ξ|, 0, 0), and furthermore that δ∂B(0,R) e−ix1 |ξ| dσ(x). ∂B(0,R) Now use spherical coordinates to compute the latter integral and conclude that (ξ) = 4πR sin(R|ξ|) . Treat separately the case ξ = (0, 0, 0). δ∂B(0,R) |ξ| Exercise 4.119 Use Lemma 4.27. Exercise 4.120 Since χ[−1,1] ∈ L1 (R), from (4.1.8) it follows that χ[−1,1] ∈ S (R). Also, by Exercise 4.26, we have that χ [−1,1] in S (R) is the tempered distribution given by the function
406
CHAPTER 12. SOLUTIONS TO SELECTED EXERCISES
1
e−ixξ dx =
−1
In particular, since
sin ξ ξ
2 sin ξ , ξ
ξ ∈ R \ {0}.
1 ∈ L1 (R) we have χ [−1,1] ∈ L (R).
Exercise 4.122 See Example 4.36 and deduce that g3 = πδa + πδ−a in S (R). Exercise 4.123 Use Exercise 4.119. Exercise 4.124 If P (x) = |α|≤m aα xα , then the condition P (tx) = t−k P (x) |α| α −k α n in Rn for each t > 0 implies |α|≤m t aα x = t |α|≤m aα x in R for each t > 0. Hence, for each α ∈ Nn0 we have t|α| aα = t−k aα for every t > 0, or equivalently that t|α|+k aα = aα for every t > 0. Now take t → 0+ in the last equality to obtain aα = 0. Exercise 4.125 Check, via a direct calculation, that R w := w −
w · (ζ + η) w · [ζ − (1 + 2η · ζ)η] ζ− η, 1+η·ζ 1+η·ζ
∀ w ∈ Rn , (12.4.23)
and then use (12.4.23) to prove that R R = RR = In×n . Now (4.11.4) is easy to verify. Exercise 4.126 Fix c ∈ R \ {0} and use the fact that cosine is an odd function, the fundamental theorem of calculus, and Fubini’s theorem to write R R cos(cρ) − cos ρ cos(|c|ρ) − cos ρ dρ = dρ ρ ρ ε ε R |c| =− sin(rρ) dr dρ ε
=− = 1
1
|c|
R
sin(rρ) dρ dr 1 |c|
ε
cos(Rr) cos(εr) − dr. r r
(12.4.24)
In particular, estimate (4.11.7) readily follows from this formula. Going further, we note that an integration by parts gives |c| sin(Rr) r=|c| cos(Rr) 1 |c| sin(Rr) dr = + dr. (12.4.25) r Rr R 1 r2 r=1 1 By combining (12.4.24) with (12.4.25) we arrive at
12.5. SOLUTIONS TO EXERCISES FROM SECT. 5.4
R
ε
sin(|c|R) sin R 1 cos(cρ) − cos ρ dρ = − + ρ |c|R R R |c| cos(εr) dr. − r 1
407
1
|c|
sin(Rr) dr r2 (12.4.26)
Passing to limit R → ∞ and ε → 0+ in (12.4.26) yields (4.11.5) after observing (e.g., by Lebesgue’s dominated convergence theorem) that |c| |c| cos(εr) dr dr = = ln |c|. (12.4.27) lim r r ε→0+ 1 1 Next, lim
ε→0+
R→∞
ε
R
sin(cρ) dρ = lim ρ ε→0+
R→∞
Rc εc
sin t dt t
= sgn c lim
ε→0+
R→∞
ε
R
π sin t dt = sgn c, t 2
(12.4.28)
by a suitable change of variables and the well-known fact (based on a residue
R calculation) that lim ε→0+ ε sint t dt = π2 . This proves (4.11.6). Finally, to R→∞
justify (4.11.8) use the fact that whenever 0 < a < b < ∞ an integration by parts gives b cos t t=b b cos t sin t dt = − − dt. (12.4.29) t t t2 t=a a a
12.5
Solutions to Exercises from Sect. 5.4
d m Exercise 5.15 The ordinary differential equation dx v = 0, with initial conditions v(0) = v (0) = · · · = v (m−2) (0) = 0, and v (m−1) (0) = 1, has the 1 xm−1 for x ∈ R. Hence, by Example 5.11 the unique solution v(x) = (m−1)! d m 1 function u := (m−1)! xm−1 H is a fundamental solution for dx in R. In addition, by Exercise 4.115 we have H ∈ S (R) which, when combined with the 1 xm−1 ∈ L(R), implies (as a consequence of (b) in Theorem 4.13) fact that (m−1)! 1 xm−1 H ∈ S (R). If v ∈ S (R) is an arbitrary fundamental solution that (m−1)! d m for dx , then by Proposition 5.7 we have that v − u = P for some polynomial d m in R satisfying dx P = 0 in R. This forces P to be a polynomial of degree less than or equal to m − 1. Exercise 5.16 First prove that P1 (Dx ) ⊗ P2 (Dy ), ϕ1 (x) ⊗ ϕ2 (y) = δx ⊗ δy , ϕ1 (x) ⊗ ϕ2 (y) for every ϕ1 ∈ C0∞ (Rn ) and every ϕ2 ∈ C0∞ (Rm ), and then use Proposition 2.73.
CHAPTER 12. SOLUTIONS TO SELECTED EXERCISES
408
12.6
Solutions to Exercises from Sect. 6.4
Exercise 6.27 (a) The general solution for the ordinary differential equation v − a2 v = 0 in R is v(x) := c1 eax + c2 e−ax for x ∈ R, where c1 , c2 ∈ C. Hence, if we 1 further impose the conditions v(0) = 0 and v (0) = 1 then c1 = 2a = −c2 , 1 ax 1 −ax H satisfies and by Example 5.11, we have that u := 2a e H − 2a e u − a2 u = δ in D (R). d2 2 (b) By Theorem 5.13 there exists E ∈ S (R) such that dx E = δ 2 − a in S (R). Fix such an E and apply the Fourier transform to the last 3 = 1 in S (R). Since 2 1 2 ∈ L(R) we may equation. Then −(ξ 2 + a2 )E ξ +a 3 = 2 1 2 = f , and, multiply the last equality by 2 1 2 to conclude that −E ξ +a
ξ +a
furthermore, that f3 = −2πE ∨ . Therefore we can determine f3 as soon as d2 2 we find a fundamental solution E ∈ S (R) for the operator dx 2 − a . 2
d 2 By Theorem 6.15, dx is a hypoelliptic operator in R. As a conse2 − a quence of Remark 6.7 we have [with u as in (a)] that u − E ∈ C ∞ (R) and is a classical solution of the ordinary differential equation v − a2 v = 0 in R. Thus,
E = u + c1 eax + c2 e−ax 1 ax 1 e H − e−ax H + c1 eax H + c1 eax H ∨ + c2 e−ax H + c2 e−ax H ∨ 2a 2a & & ' ' 1 1 ax + c1 e H + − + c2 e−ax H + c1 eax H ∨ + c2 e−ax H ∨ . = 2a 2a (12.6.1) =
1 The condition E ∈ S (R) implies (in view of Exercise 4.106) that 2a + 1 −a|x| . c1 = 0 and c2 = 0, which when used in (12.6.1) give E = − 2a e Consequently, f3(x) = πa e−a|x| for x ∈ R.
Exercise 6.29 Recall Exercise 6.4. Consider u := 1 + δ in D (Rn ). It follows that sing supp u = {0} and supp u = Rn , thus sing supp u ⊂ supp u. Exercise 6.30 Since ∂ α δx0 Rn \{x } = 0 we have sing supp ∂ α δx0 ⊆ {x0 }. To see 0 that the latter inclusion is in fact equality, use Example 2.13 if α = (0, . . . , 0) and if |α| > 0 use the fact that the order of the distribution ∂ α δx0 equals |α| while any distribution of function type is of order zero. Exercise 6.31 there exists an open set ω ⊆ Ω with If x ∈ Ω is such that x ∈ ω and uω ∈ C ∞ (ω), then (au)ω ∈ C ∞ (ω), which gives Ω \ sing supp u ⊆ Ω \ sing supp (au).
12.7. SOLUTIONS TO EXERCISES FROM SECT. 7.10
409
Exercise 6.32 Since P.V. x1 = x1 and x1 belongs to C ∞ (Rn \ {0}), it n \{0} R follows that sing supp P.V. x1 ⊆ {0}. Now using Example 2.11 we have that the distribution P.V. x1 is not of function type. 1 Exercise 6.33 First prove that P (x)· P (x) = 1 in S (Rn ). Then take the Fourier transform to arrive at 1 (12.6.2) P (−D)F (ξ) = (2π)n δ in S (Rn ). P
Use (12.6.2), (6.1.3), and Example 6.4, to show that it is not possible to have 1 sing supp F = ∅. P Exercise 6.34 sing supp u = R × {0}. To prove this you may want to use the fact that " ! ϕ(x, 0) ϕ(x, 0) − ϕ(0, 0) 1 dx + dx u, ϕ = P.V. , ϕ(x, 0) = x x x |x|≥1 |x|≤1 (12.6.3) for every ϕ ∈ C0∞ (R2 ).
12.7
Solutions to Exercises from Sect. 7.10
Exercise 7.58 If the equation has two solutions E1 , E2 ∈ S (Rn ), set E := E1 − E2 . Then ΔE − E = 0 in S (Rn ) implies (after an application of the 3 = 0 in S (Rn ). Since 1 2 ∈ L(Rn ) (recall Fourier transform) −(|ξ|2 + 1)E 1+|ξ| 1 n Exercise 3.12), by part (a) in Theorem 3.13 we have 1+|ξ| 2 ϕ ∈ S(R ) for every n ϕ ∈ S(R ). Thus, ! 3 ϕ = (1 + |ξ|2 )E, 3 E,
" 1 ϕ = 0, 2 1 + |ξ|
∀ ϕ ∈ S(Rn ).
3 = 0 in S (Rn ); thus, after applying the Fourier transform This proves that E we obtain E = 0 in S (Rn ). In turn, the latter implies E1 = E2 in S (Rn ) and the proof of the uniqueness statement is complete. Regarding the existence statement, suppose E ∈ S (Rn ) satisfies ΔE − E = δ in S (Rn ). This equation, 3 = 1 in S (Rn ). Since via the Fourier transform, is equivalent to −(|ξ|2 + 1)E 1 1 n 2 Exercise 4.6) and (|ξ| + 1) 1+|ξ|2 = 1 in S (Rn ), it follows 1+|ξ|2 ∈ S (R ) (use 1 is a solution of ΔE − E = δ in S (Rn ), which must that E := −F −1 1+|ξ| 2 be unique based on the earlier reasoning.
CHAPTER 12. SOLUTIONS TO SELECTED EXERCISES
410
3= Exercise 7.59 Suppose there exists E ∈ L1 (Rn ) with ΔE = δ. Then −|ξ|2 E 0 n 2 4 1 3 3 ∈ L ⊂ C (R ) [recall (3.1.3)], hence [|ξ| E] 1. We know that E = 0. ξ=(0,...,0) This leads to a contradiction. Exercise 7.62 Since EΔ ∈ S (Rn ) is a fundamental solution for Δ, we have ΔEΔ = δ in S (Rn ). Fix j ∈ {1, . . . , n}. Then, Δ(∂j EΔ ) = ∂j δ in S (Rn ). Take the Fourier transform of the last equation to arrive at in S (Rn ).
|ξ|2 F (∂j EΔ ) = −iξj Since n ≥ 2, by Example 4.3 we have |ξ|2 ·
ξj |ξ|2 2
ξj |ξ|2
(12.7.1)
∈ S (Rn ). Also, |ξ|2 ∈ L(Rn ), thus
∈ S (Rn ) (recall (b) in Theorem 4.13), and it is not difficult to check
that |ξ| ·
ξj |ξ|2
= ξj in S (Rn ). These facts combined with (12.7.1) imply
ξj (12.7.2) |ξ|2 F (∂j EΔ ) + i 2 = 0 in S (Rn ). |ξ| ξ Thus, supp F (∂j EΔ ) + i |ξ|j2 ⊆ {0} and by Exercise 4.35, it follows that F (∂j EΔ ) = −i
ξj 4j (ξ) +P |ξ|2
in S (Rn ),
(12.7.3)
where P_j is a polynomial in ℝⁿ. Now a direct computation gives that if n ≥ 2, then

    ∂_j E_Δ = (1/ω_{n−1}) · x_j/|x|^n   in S′(ℝⁿ).   (12.7.4)

Hence, ∂_j E_Δ is positive homogeneous of degree 1 − n, which in turn, when combined with Proposition 4.57, implies that F(∂_j E_Δ) is positive homogeneous of degree −1. Thus, the term in the right-hand side of (12.7.3) is positive homogeneous of degree −1. Since ξ_j/|ξ|^2 is positive homogeneous of degree −1, we conclude that P̂_j(ξ) is positive homogeneous of degree −1 and, furthermore, by Proposition 4.57 and Exercise 4.55, that the polynomial P_j is positive homogeneous of degree 1 − n ≤ −1. Now invoking Exercise 4.124 we obtain P_j ≡ 0. The latter, when used in (12.7.3), proves (7.10.3). Identities (7.10.4) and (7.10.5) follow from (7.10.3) and (12.7.4).

Exercise 7.63 Since E_{Δ²} ∈ S′(ℝⁿ) is a fundamental solution for Δ², we have Δ²E_{Δ²} = δ in S′(ℝⁿ). Fix j, k ∈ {1, ..., n}. Then Δ²(∂_j ∂_k E_{Δ²}) = ∂_j ∂_k δ in S′(ℝⁿ). Take the Fourier transform of the last equation to arrive at

    |ξ|^4 F(∂_j ∂_k E_{Δ²}) = −ξ_j ξ_k   in S′(ℝⁿ).   (12.7.5)
Under the current assumption on n, by Exercise 4.4 we have 1/|ξ|^2 ∈ L^1_loc(ℝⁿ); since |ξ_j ξ_k/|ξ|^4| ≤ 1/|ξ|^2, in view of Example 4.3 one may infer that ξ_j ξ_k/|ξ|^4 ∈ S′(ℝⁿ).
In addition, |ξ|^4 ∈ L(ℝⁿ), thus |ξ|^4 · ξ_j ξ_k/|ξ|^4 ∈ S′(ℝⁿ) (recall (b) in Theorem 4.13), and |ξ|^4 · ξ_j ξ_k/|ξ|^4 = ξ_j ξ_k, |ξ|^4 · 1/|ξ|^2 = |ξ|^2 in S′(ℝⁿ). These facts combined with (12.7.5) imply

    |ξ|^4 [ F(∂_j ∂_k E_{Δ²}) + ξ_j ξ_k/|ξ|^4 ] = 0   in S′(ℝⁿ).   (12.7.6)

Thus, supp( F(∂_j ∂_k E_{Δ²}) + ξ_j ξ_k/|ξ|^4 ) ⊆ {0} and by Exercise 4.35 it follows that

    F(∂_j ∂_k E_{Δ²}) = −ξ_j ξ_k/|ξ|^4 + R̂_{jk}(ξ)   in S′(ℝⁿ),   (12.7.7)

where R_{jk} is a polynomial in ℝⁿ. It is easy to check that

    ∂_j ∂_k E_{Δ²} = −(1/(2(n−2)ω_{n−1})) · δ_{jk}/|x|^{n−2} + (1/(2ω_{n−1})) · x_j x_k/|x|^n   in S′(ℝⁿ).   (12.7.8)
Hence, ∂_j ∂_k E_{Δ²} is positive homogeneous of degree 2 − n which, in turn, when combined with Proposition 4.57, implies that F(∂_j ∂_k E_{Δ²}) is positive homogeneous of degree −2. Thus, the term in the right-hand side of (12.7.7) is positive homogeneous of degree −2. Since ξ_j ξ_k/|ξ|^4 is positive homogeneous of degree −2, we may conclude that R̂_{jk}(ξ) is positive homogeneous of degree −2 and, furthermore (by Proposition 4.57 and Exercise 4.55), that the polynomial R_{jk} is positive homogeneous of degree 2 − n ≤ −1. Now invoking Exercise 4.124 we obtain R_{jk} ≡ 0. The latter, when used in (12.7.7), proves (7.10.6). Identity (7.10.7) follows from (7.10.6) and (12.7.8). As for identity (7.10.8), apply the Fourier transform to (7.10.7) and then use Proposition 4.61 with λ = n − 2.

Exercise 7.64 If P(D) is elliptic, first show that there exists C ∈ (0, ∞) such that |P_m(ξ)| ≥ C for ξ ∈ S^{n−1}; thus, conclude that |P_m(ξ)| ≥ C|ξ|^m for every ξ ∈ ℝⁿ \ {0}. Use the latter and the fact that

    |P(ξ)| ≥ |P_m(ξ)| − Σ_{|α|≤m−1} |a_α ξ^α|

to obtain the desired conclusion. Conversely, suppose that there exist C, R ∈ (0, ∞) such that |P(ξ)| ≥ C|ξ|^m for every ξ ∈ ℝⁿ \ B(0, R), and that P_m(ξ*) = 0 for some ξ* ∈ ℝⁿ \ {0}. Then for every λ > R/|ξ*| we have

    0 < C λ^m |ξ*|^m ≤ |P(λξ*)| = | Σ_{|α|<m} a_α λ^{|α|} ξ*^α | ≤ Σ_{j=0}^{m−1} c_j λ^j

and obtain a contradiction by dividing by λ^m and then letting λ → ∞.
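The explicit gradient formula (12.7.4) used in Exercise 7.62 is easy to check numerically. The Python sketch below (an illustration, not part of the text) compares a centered finite difference of E_Δ(x) = −1/((n−2)ω_{n−1}|x|^{n−2}) against x_j/(ω_{n−1}|x|^n) for n = 3, where ω_2 = 4π:

```python
import math

n = 3
omega = 4 * math.pi  # omega_{n-1}: surface area of the unit sphere S^2 when n = 3

def E_laplace(x):
    # fundamental solution of the Laplacian: E_Delta(x) = -1/((n-2) omega_{n-1} |x|^{n-2})
    r = math.sqrt(sum(t * t for t in x))
    return -1.0 / ((n - 2) * omega * r ** (n - 2))

def dj_exact(x, j):
    # right-hand side of (12.7.4): x_j / (omega_{n-1} |x|^n)
    r = math.sqrt(sum(t * t for t in x))
    return x[j] / (omega * r ** n)

def dj_numeric(x, j, h=1e-6):
    # centered finite difference in the j-th coordinate
    xp = list(x); xm = list(x)
    xp[j] += h; xm[j] -= h
    return (E_laplace(xp) - E_laplace(xm)) / (2 * h)

x = (0.3, -1.2, 0.7)
for j in range(n):
    assert abs(dj_numeric(x, j) - dj_exact(x, j)) < 1e-6
```

Away from the singularity at the origin the two expressions agree to within the accuracy of the finite difference.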
Exercise 7.65 Fix ϕ ∈ C_0^∞(ℝⁿ). Using the fact that ϕ is compactly supported, Lebesgue's dominated convergence theorem, formula (13.7.4) twice, and the fact that A is symmetric, we may write

    ∫_{ℝⁿ} E_A(x) L_A ϕ(x) dx = lim_{ε→0⁺} ∫_{ℝⁿ\B(0,ε)} E_A(x) L_A ϕ(x) dx
      = lim_{ε→0⁺} [ ∫_{ℝⁿ\B(0,ε)} (L_A E_A)(x) ϕ(x) dx − (1/ε) ∫_{∂B(0,ε)} E_A(x) (A∇ϕ(x))·x dσ(x)
        + (1/ε) ∫_{∂B(0,ε)} [ϕ(x) − ϕ(0)] A∇E_A(x)·x dσ(x) + (ϕ(0)/ε) ∫_{∂B(0,ε)} A∇E_A(x)·x dσ(x) ].   (12.7.9)

For the second equality in (12.7.9) we also used the fact that the outward unit normal to ∂B(0, ε) is ν(x) = x/ε, for x ∈ ∂B(0, ε). We now analyze each of the integrals in the rightmost side of (12.7.9). First, a direct computation shows that L_A E_A = 0 pointwise in ℝⁿ \ {0}. Second, (7.8.48), (7.8.25), and the mean value theorem for ϕ imply

    | (1/ε) ∫_{∂B(0,ε)} [ϕ(x) − ϕ(0)] A∇E_A(x)·x dσ(x) | ≤ Cε → 0 as ε → 0⁺,   (12.7.10)

where the constant C in (12.7.10) depends on A, ‖∇ϕ‖_{L^∞(ℝⁿ)}, and n, but not on ε. Third, thanks to (7.8.25) we have

    | (1/ε) ∫_{∂B(0,ε)} E_A(x) (A∇ϕ(x))·x dσ(x) | ≤ Cε if n ≥ 3, and ≤ C(|ln ε| + 1)ε if n = 2,   (12.7.11)

so in both cases the bound tends to 0 as ε → 0⁺. Fourth, (7.8.48) and a natural change of variables allow us to write

    (1/ε) ∫_{∂B(0,ε)} A∇E_A(x)·x dσ(x) = (ε/(ω_{n−1} √det A)) ∫_{∂B(0,ε)} dσ(x)/[(A^{−1}x)·x]^{n/2}
      = (1/(ω_{n−1} √det A)) ∫_{S^{n−1}} dσ(x)/[(A^{−1}x)·x]^{n/2}.   (12.7.12)

Combining (12.7.9)–(12.7.12) and (7.8.47), we conclude that, under the current assumptions on A, for every ϕ ∈ C_0^∞(ℝⁿ) we have

    ∫_{ℝⁿ} E_A(x) L_A ϕ(x) dx = (ϕ(0)/(ω_{n−1} √det A)) ∫_{S^{n−1}} dσ(x)/[(A^{−1}x)·x]^{n/2}.   (12.7.13)
In particular, (12.7.13) holds if A has real entries, is symmetric, and satisfies (7.8.8). In this latter situation we have already proved that E_A is a fundamental solution for L_A; thus the left-hand side of (12.7.13) is equal to ⟨E_A, L_A ϕ⟩ = ⟨L_A E_A, ϕ⟩ = ⟨δ, ϕ⟩ = ϕ(0), which means that (7.8.50) holds in the case when A has real entries, is symmetric, and satisfies (7.8.8). As in the proof of Theorem 7.54, the identity (7.8.50) may be extended to the class of complex symmetric matrices satisfying condition (7.8.8). Based on what we have just proved and (12.7.13), we deduce that

    ∫_{ℝⁿ} E_A(x) L_A ϕ(x) dx = ϕ(0)   (12.7.14)

holds for every ϕ ∈ C_0^∞(ℝⁿ) and every matrix A ∈ M_{n×n}(ℂ) satisfying A^⊤ = A as well as condition (7.8.8). The latter, combined with the fact that E_A ∈ L^1_loc(ℝⁿ), further implies that E_A is a fundamental solution for L_A whenever A ∈ M_{n×n}(ℂ) is symmetric and satisfies (7.8.8).
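Comparing (12.7.13) with (12.7.14) shows that the two displays together amount to the identity ∫_{S^{n−1}} [(A^{−1}x)·x]^{−n/2} dσ(x) = ω_{n−1} √det A. A quick numerical sanity check in Python (an illustration, not part of the text) for n = 2 and a diagonal positive definite A = diag(a, b), where ω_1 = 2π and S^1 is parametrized by the angle t:

```python
import math

def sphere_integral_2d(a, b, m=200000):
    # For n = 2 and A = diag(a, b): (A^{-1}x) . x = cos^2(t)/a + sin^2(t)/b on S^1,
    # so [(A^{-1}x) . x]^{-n/2} = 1/(cos^2(t)/a + sin^2(t)/b).
    # A rectangle rule on a smooth periodic integrand converges very fast.
    h = 2 * math.pi / m
    return sum(1.0 / (math.cos(t) ** 2 / a + math.sin(t) ** 2 / b)
               for t in (i * h for i in range(m))) * h

a, b = 2.0, 5.0
lhs = sphere_integral_2d(a, b)
rhs = 2 * math.pi * math.sqrt(a * b)   # omega_{n-1} sqrt(det A), with omega_1 = 2*pi
assert abs(lhs - rhs) < 1e-6
```

For a = b = 1 the integrand is identically 1 and both sides reduce to 2π, the length of S^1.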
12.8 Solutions to Exercises from Sect. 9.3
Exercise 9.12 Let E be the fundamental solution for the heat operator ∂_t − Δ_x in ℝ^{n+1} as in (8.1.7). Then E ∈ L^1_loc(ℝ^{n+1}) and we claim that for each x ∈ ℝⁿ \ {0} the function E(x, ·) is absolutely integrable with respect to t ∈ ℝ. Indeed, using the change of variables τ = |x|^2/(4t), (13.5.1), the fact that Γ(n/2) = (n/2 − 1)Γ(n/2 − 1), and (13.5.6), we obtain

    −∫_{−∞}^{∞} E(x, t) dt = −(1/(4π^{n/2}|x|^{n−2})) ∫_0^{∞} e^{−τ} τ^{n/2−2} dτ = −Γ(n/2 − 1)/(4π^{n/2}|x|^{n−2})
      = −(1/((n−2)ω_{n−1})) · 1/|x|^{n−2}   if x ∈ ℝⁿ \ {0}.   (12.8.1)

Hence, we may apply Proposition 9.6 and Proposition 9.7 to conclude that the distribution

    −∫_{−∞}^{∞} E(x, t) dt = −(1/((n−2)ω_{n−1})) · 1/|x|^{n−2},   x ∈ ℝⁿ \ {0},

is a fundamental solution for Δ in ℝⁿ, when n ≥ 3.
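For n = 3 the computation (12.8.1) says that ∫_{−∞}^{∞} E(x, t) dt = 1/(4π|x|). Assuming the Gaussian normalization E(x, t) = (4πt)^{−3/2} e^{−|x|^2/(4t)} for t > 0 (an assumption about the form of (8.1.7), which is not reproduced here), this can be checked by quadrature in Python (illustration only):

```python
import math

r = 0.8  # |x|; any positive value will do

def heat_E(t):
    # heat kernel in R^3 (assumed normalization): (4 pi t)^(-3/2) exp(-r^2/(4t)), t > 0
    return (4 * math.pi * t) ** (-1.5) * math.exp(-r * r / (4 * t))

# Integrate E(x, t) dt over (0, infinity) via the substitution t = e^s, dt = e^s ds,
# using the trapezoid rule on a generously truncated range of s.
lo, hi, m = -40.0, 40.0, 8000
h = (hi - lo) / m
total = 0.0
for i in range(m + 1):
    s = lo + i * h
    w = 0.5 if i in (0, m) else 1.0
    total += w * heat_E(math.exp(s)) * math.exp(s)
integral = total * h

expected = 1.0 / (4 * math.pi * r)  # = 1/((n-2) omega_{n-1} |x|^{n-2}) for n = 3
assert abs(integral - expected) < 1e-6
```

The result matches the Newtonian potential 1/(4π|x|), whose negative is the fundamental solution E_Δ in ℝ³.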
12.9 Solutions to Exercises from Sect. 10.7
Exercise 10.31 We have Δu + ∇p = 0 in D′(Ω) and div u = 0 in D′(Ω). Apply div to the first equation and conclude that Δp = 0 in D′(Ω). Next apply Δ to the first equation. Finally, use Theorem 7.10 and Theorem 7.23.

Exercise 10.32 Follow the outline from Exercise 10.16 with the adjustment that the integrals in (10.3.24) this time will be bounded by C‖∇ϕ‖_{L^∞(ℝⁿ)} ε(1 + |ln ε|).
Exercise 10.33 Follow the outline from Exercise 10.30 with the adjustment that the integrals in (10.6.22) this time will be bounded by C‖∇ϕ‖_{L^∞(ℝⁿ)} ε(1 + |ln ε|).

Exercise 10.34 Note that u is a solution of (10.7.3) if and only if u − Ax − b is a solution of (10.5.3).

Exercise 10.35 Show that Theorem 10.3 applies.
12.10 Solutions to Exercises from Sect. 11.7
Exercise 11.19 Follow the outline of the proof of Theorem 11.1.

Exercise 11.20 Compute L_2 E_{L_1 L_2} using the analogue of (11.3.31) written for the operator L := L_1 L_2, and make use of a suitable version of (11.3.53). Deduce that

    P(x) = c_{m_1,m_2,n,q} Δ^{(n+q)/2} ∫_{S^{n−1}} (x·ξ)^{2m_1+q} L_2(ξ)^{−1} dσ(ξ)   (12.10.1)

for every x ∈ ℝⁿ, from which all the desired properties of P follow.

Exercise 11.21 Start with (11.3.31) written for L = Δ^m and use (7.5.24), (7.5.25), and (7.5.13) in order to deduce that (11.7.3) holds with

    P(x) = P_{m+(n+q)/2, (n+q)/2}(x) + C(n, q) Δ^{(n+q)/2} |x|^q,   x ∈ ℝⁿ,   (12.10.2)

where C(n, q) is as in (7.5.26).
Chapter 13

Appendix

13.1 Summary of Topological and Functional Analytic Results
A topology on a set X is a family τ of subsets of X satisfying the following properties:

1. X, ∅ ∈ τ;
2. if {A_α}_{α∈I} is a family of sets contained in τ, then ∪_{α∈I} A_α ∈ τ;
3. if A_1, A_2 ∈ τ, then A_1 ∩ A_2 ∈ τ.

If X and τ are as above, the pair (X, τ) is called a topological space and the elements of τ are called open sets. If x is an element of X (sometimes referred to as a point), then any open set containing x is called an open neighborhood of x. Any set that contains an open neighborhood of x is called a neighborhood of x. A sequence {x_j}_{j∈ℕ} of elements of a topological space (X, τ) is said to converge to x ∈ X, and we write lim_{j→∞} x_j = x, if every neighborhood of x contains all but a finite number of the elements of the sequence.
A family B of subsets of X is a base for a topology τ on X if B is a subfamily of τ and for each x ∈ X and each neighborhood U of x there is V ∈ B such that x ∈ V ⊂ U. We say that the base B generates the topology τ. An equivalent characterization of bases that is useful in applications is as follows: a base for a topological space (X, τ) is a collection B of open sets in τ such that every open set in τ can be written as a union of elements of B. It is important to note that not every family of subsets of a given set is a base for some topology on the set. Given a set X and a family B of subsets of X, this family B is a base for some topology on X if and only if

    ∪_{B∈B} B = X, and
    ∀ B_1, B_2 ∈ B and ∀ x ∈ B_1 ∩ B_2, ∃ B ∈ B such that x ∈ B ⊆ B_1 ∩ B_2.   (13.1.1)
D. Mitrea, Distributions, Partial Differential Equations, and Harmonic Analysis, Universitext, DOI 10.1007/978-1-4614-8208-6 13, © Springer Science+Business Media New York 2013
If τ_1 and τ_2 are two topologies on a set X such that every member of τ_2 is also a member of τ_1, then we say that τ_1 is finer (or larger) than τ_2, and that τ_2 is coarser (or smaller) than τ_1. If (X, τ) is a topological space and E ⊆ X, then the topology induced by τ on E is the topology whose open sets are the intersections of open sets in τ with E. Let (X, τ) and (Y, τ′) be two topological spaces. The topology on X × Y with base the collection of products of open sets in X and open sets in Y [which satisfies (13.1.1)] is called the product topology. A function f : X → Y is called continuous at x ∈ X if the inverse image under f of every open neighborhood of f(x) is an open neighborhood of x. It is called continuous on X if it is continuous at every x ∈ X. In particular, it is easy to see that if f : X → Y is continuous, then lim_{j→∞} f(x_j) = f(x) for every sequence {x_j}_{j∈ℕ} in X converging to x ∈ X; that is, f is sequentially continuous. While in general the converse is false, in the case when X is a metric space (see below), if f : X → Y is sequentially continuous, then f is continuous.
A topological space (X, τ) is said to be separated, or Hausdorff, if for any two distinct elements x and y of X there exist U, a neighborhood of x, and V, a neighborhood of y, such that U ∩ V = ∅. In a Hausdorff space the limit of a convergent sequence is unique.
A metric space is a set X equipped with a distance function (also called a metric). This is a function d : X × X → [0, ∞) satisfying the following properties:

1. if x, y ∈ X, then d(x, y) = 0 if and only if x = y (nondegeneracy);
2. d(x, y) = d(y, x) for every x, y ∈ X (symmetry);
3. d(x, y) ≤ d(x, z) + d(z, y) for every x, y, z ∈ X (triangle inequality).

Let d be a metric on X. The open balls B(x, r) := {y ∈ X : d(x, y) < r}, x ∈ X, r ∈ (0, ∞), then form the base of a Hausdorff topology on X [since this collection satisfies (13.1.1)], denoted τ_d. For a sequence {x_j}_{j∈ℕ} of points in X and x ∈ X one has lim_{j→∞} x_j = x in the topology τ_d if and only if the sequence of numbers {d(x_j, x)}_{j∈ℕ} converges to 0 as j → ∞. A sequence {x_j}_{j∈ℕ} of points in (X, τ_d) is called Cauchy if d(x_j, x_k) → 0 as j, k → ∞. A metric space is called complete if every Cauchy sequence is convergent. A topological space X is called metrizable if there exists a distance function for which the open balls form a base for the topology (i.e., the topology on X coincides with τ_d).
A vector space X over ℂ becomes a topological vector space if equipped with a topology that is compatible with its vector structure, that is, the operation of vector addition, and also the operation of multiplication by a complex number, are continuous maps X × X → X and ℂ × X → X, respectively (where X × X and ℂ × X are each endowed with the corresponding product topology). Observe that in order to specify a topology on a vector space, it suffices to give the system of neighborhoods of the zero element 0 ∈ X, since the system of
neighborhoods of any other element of X is obtained from this via translation. In fact, it suffices to give a base of neighborhoods of 0. This is a family B of neighborhoods of 0 such that every neighborhood of 0 contains a member of B. A set E ⊆ X is then open if and only if, for every x ∈ E, there exists U ∈ B such that x + U ⊆ E, where x + U := {x + y : y ∈ U}. A topological vector space is called locally convex if it has a base of neighborhoods of 0 consisting of sets that are convex, balanced, and absorbing. A set U is called convex provided tx + (1 − t)y ∈ U whenever x, y ∈ U and t ∈ [0, 1]. A set U is called balanced if cx ∈ U whenever x ∈ U, c ∈ ℂ, and |c| ≤ 1. A set U is called absorbing provided for each x ∈ X there exists t > 0 such that x ∈ tU := {ty : y ∈ U}.
A seminorm on a vector space X is a function p : X → ℝ satisfying the properties

1. p(cx) = |c| p(x), for every c ∈ ℂ and every x ∈ X (absolute homogeneity);
2. p(x + y) ≤ p(x) + p(y), for every x, y ∈ X (subadditivity).

In particular, a seminorm p on X satisfies p(0) = 0 and p(x) ≥ 0 for every x ∈ X. A family P of seminorms on X is called separating if, for each x ∈ X with x ≠ 0, there exists p ∈ P such that p(x) ≠ 0. Given a separating family of seminorms P on X, let B be the collection of sets of the form

    {x ∈ X : p(x) < ε ∀ p ∈ P_0},   P_0 ⊆ P, P_0 finite, ε > 0.   (13.1.2)

Then B is a base of neighborhoods of 0 for a locally convex vector space topology τ_P on X, called the topology generated by the family of seminorms P. In this context, it may be readily verified that

    if Y is a linear subspace of X and P|_Y := {p|_Y : p ∈ P}, then the topology induced on Y by τ_P coincides with τ_{P|_Y}.   (13.1.3)
Conversely, if X is a locally convex topological vector space, then for each convex, balanced, and absorbing neighborhood U of 0, the mapping p_U : X → ℝ defined by p_U(x) := inf{t > 0 : t^{−1}x ∈ U}, x ∈ X (called the Minkowski functional associated with U), is a seminorm. It is then not hard to see that the topology on X is generated by this family of seminorms (in the manner described above).
Let P = {p_j}_{j∈ℕ} be a countable family of seminorms that is also separating (thus, if x ∈ X and p_j(x) = 0 for all j ∈ ℕ, then x = 0). The topology generated by this family is metrizable. Indeed, the function d : X × X → ℝ defined by

    d(x, y) := Σ_{j=1}^{∞} 2^{−j} p_j(x − y)/(1 + p_j(x − y)),   for each x, y ∈ X,   (13.1.4)

is a distance on X and the topology τ_d induced by the metric d coincides with the topology generated by P. In the converse direction, it can be shown that the topology of a locally convex space that is metrizable, endowed with a translation-invariant metric, can be generated by a countable family of seminorms. A locally convex topological vector space that is metrizable and complete is called a Fréchet space. Thus, if a family P = {p_j}_{j∈ℕ} of seminorms generates the topology of a Fréchet space X, then whenever p_j(x_k − x_l) → 0 as k, l → ∞ for every j ∈ ℕ, there exists x ∈ X such that p_j(x_k − x) → 0 as k → ∞ for every j ∈ ℕ.
Let X be a vector space and {X_j}_{j∈J} a family of vector subspaces of X such that X = ∪_{j∈J} X_j and, whenever j_1, j_2 ∈ J, there exists j_3 ∈ J with X_{j_1} ⊂ X_{j_3} and X_{j_2} ⊂ X_{j_3}. Also assume that there exist topologies τ_j on X_j such that if X_{j_1} ⊂ X_{j_2}, then the topology τ_{j_1} is finer than the topology τ_{j_2 j_1} induced on X_{j_1} by X_{j_2}. Let

    W := {W ⊂ X : W balanced, convex, and such that W ∩ X_j is a neighborhood of 0 in X_j, ∀ j ∈ J}.   (13.1.5)

Then W is a base of neighborhoods of 0 in a locally convex topology τ on X. Call this topology the inductive limit topology on X. If, in addition, whenever X_{j_1} ⊂ X_{j_2} we also have τ_{j_2 j_1} = τ_{j_1}, we call the inductive limit strict. If the topology τ on X is the strict inductive limit of the topologies of an increasing sequence {X_n}_{n∈ℕ}, then the topology induced on X_n by the topology τ on X coincides with the initial topology τ_n on X_n, for every n ∈ ℕ.
In general, if X is a topological vector space, its dual space is the collection of all linear mappings f : X → ℂ (also referred to as functionals) that are continuous with respect to the topology on X. In the case when X is a locally convex topological vector space and P is a family of seminorms generating the topology of X, a seminorm q on X is continuous if and only if there exist N ∈ ℕ, seminorms p_1, ..., p_N ∈ P, and a constant C ∈ (0, ∞), such that

    q(x) ≤ C max{p_1(x), ..., p_N(x)},   ∀ x ∈ X.   (13.1.6)

This fact then gives a criterion for the continuity of functionals on X, since if f : X → ℂ is a linear mapping, then q(x) := |f(x)|, x ∈ X, is a seminorm on X. In addition, if the family of seminorms P has the property that for any p_1, p_2 ∈ P there exists p_3 ∈ P with max{p_1(x), p_2(x)} ≤ p_3(x) for every x ∈ X, then a linear functional f : X → ℂ is continuous if and only if there exist a seminorm p ∈ P and a constant C ∈ (0, ∞) such that

    |f(x)| ≤ C p(x),   ∀ x ∈ X.   (13.1.7)
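Returning to the metric (13.1.4): the construction is easy to experiment with. The Python sketch below (an illustration, not part of the text) uses the hypothetical separating seminorms p_j(x) := |x_j| on tuples and checks symmetry, the triangle inequality, and d(x, x) = 0 on sample points:

```python
import itertools

def d(x, y, terms=50):
    # d(x, y) = sum_{j>=1} 2^{-j} p_j(x - y) / (1 + p_j(x - y)),
    # with the illustrative seminorms p_j(z) = |z_j| (zero beyond the tuple length)
    total = 0.0
    for j in range(1, terms + 1):
        pj = abs(x[j - 1] - y[j - 1]) if j <= min(len(x), len(y)) else 0.0
        total += 2.0 ** (-j) * pj / (1.0 + pj)
    return total

pts = [(0.0, 0.0), (1.0, -2.0), (3.5, 0.25)]
for x, y in itertools.product(pts, pts):
    assert abs(d(x, y) - d(y, x)) < 1e-15          # symmetry
for x, y, z in itertools.product(pts, pts, pts):
    assert d(x, y) <= d(x, z) + d(z, y) + 1e-15    # triangle inequality
assert d(pts[0], pts[0]) == 0.0                    # nondegeneracy at equal points
```

The triangle inequality survives the transformation t ↦ t/(1 + t) because that map is increasing and subadditive, which is exactly why (13.1.4) defines a genuine distance.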
Given a nonempty set X and a family F of mappings f : X → C, denote by τF the collection of all unions of finite intersections of sets f −1 (V ), for f ∈ F and V open set in C. Then τF is a topology on X, and is the weakest topology on X that makes every f ∈ F continuous. We will refer to it as the F -topology on X.
Let X be a vector space, F a separating vector space of linear functionals on X, and τ_F the F-topology on X. Then (X, τ_F) is a locally convex topological vector space and its dual is F. In particular, a sequence {x_j}_j in X satisfies x_j → 0 in τ_F as j → ∞ if and only if f(x_j) → 0 as j → ∞ for every f ∈ F.
If X is a topological vector space and X′ denotes its dual space, then every x ∈ X induces a linear functional F_x on X′ defined by F_x(Λ) := Λ(x) for every Λ ∈ X′, and if we set F := {F_x : x ∈ X}, then F separates points in X′. In particular, the F-topology on X′, called the weak∗-topology on X′, is locally convex, and every linear functional on X′ that is continuous with respect to the weak∗-topology is precisely of the form F_x, for some x ∈ X. Moreover, a sequence {Λ_j}_j ⊂ X′ converges to some Λ ∈ X′ in the weak∗-topology on X′ if and only if Λ_j(x) → Λ(x) as j → ∞ for every x ∈ X.
An inspection of the definition of the weak∗-topology yields a description of the open sets in this topology. More precisely, if X is a topological vector space and for A ⊆ X and ε ∈ (0, ∞) we set

    O_{A,ε} := {f ∈ X′ : |f(x)| < ε, ∀ x ∈ A},   (13.1.8)

then the following equivalence holds:

    O ⊆ X′ is a weak∗-open neighborhood of 0 ∈ X′ ⟺ there exist a set I, and for each j ∈ I a finite set A_j ⊆ X and a number ε_j > 0, such that O = ∪_{j∈I} O_{A_j,ε_j}.   (13.1.9)

The transpose of any linear and continuous operator between two topological vector spaces is always continuous at the level of dual spaces equipped with weak∗-topologies.

Proposition 13.1. Assume X and Y are two given topological vector spaces, and denote by X′, Y′ their duals, each endowed with the corresponding weak∗-topology. Also, suppose T : X → Y is a linear and continuous operator, and define its transpose T^t as the mapping

    T^t : Y′ → X′,   T^t(y′) := y′ ∘ T for each y′ ∈ Y′.   (13.1.10)

Then T^t is well-defined, linear, and continuous.

Proof. For each y′ ∈ Y′ it follows that y′ ∘ T is a composition of two linear and continuous mappings. Hence, y′ ∘ T ∈ X′, which proves that T^t : Y′ → X′ is well-defined. It is also clear from (13.1.10) that T^t is linear. There remains to prove that T^t is continuous. By linearity it suffices to check that T^t is continuous at 0. With this goal in mind, fix an arbitrary finite subset A of X along with an arbitrary number ε ∈ (0, ∞), and define

    O_{A,ε} := {x′ ∈ X′ : |x′(x)| < ε, ∀ x ∈ A}.   (13.1.11)

Furthermore, introduce Ã := {Tx : x ∈ A} and set

    Õ_{Ã,ε} := {y′ ∈ Y′ : |y′(y)| < ε, ∀ y ∈ Ã}.   (13.1.12)
Then T^t(Õ_{Ã,ε}) ⊆ O_{A,ε}. Invoking the description from (13.1.9) we may conclude that T^t is continuous at 0, and this finishes the proof of the proposition.

Proposition 13.2. Suppose that X, Y, Z are topological vector spaces, and denote by X′, Y′, Z′ their duals, each endowed with the corresponding weak∗-topology. In addition, assume that T : X → Y and R : Y → Z are two linear and continuous operators. Then

    (R ∘ T)^t = T^t ∘ R^t.   (13.1.13)

In particular, if T : X → Y is a linear, continuous, bijective map with continuous inverse T^{−1} : Y → X, then T^t : Y′ → X′ is also bijective and has a continuous inverse (T^t)^{−1} : X′ → Y′ that satisfies (T^t)^{−1} = (T^{−1})^t.

Proof. Formula (13.1.13) is immediate from the definitions, while the claims in the last part of the statement are direct consequences of (13.1.13) and the fact that the transpose of the identity is also the identity.

We also state and prove an embedding result at the level of dual spaces endowed with the weak∗-topology.

Proposition 13.3. Suppose X and Y are topological vector spaces such that X ⊆ Y densely and the inclusion map ι : X → Y, ι(x) := x for each x ∈ X, is continuous. Then Y′ endowed with the weak∗-topology embeds continuously into the space X′ endowed with the weak∗-topology, in the sense that the mapping

    ι^t : Y′ → X′   (13.1.14)

is well-defined, linear, injective, and continuous. Under the identification of Y′ with ι^t(Y′) ⊆ X′, we therefore have

    Y′ ↪ X′.   (13.1.15)
Proof. The fact that ι^t is well-defined, linear, and continuous follows directly from Proposition 13.1 and the assumptions. Assume now that y′ ∈ Y′ is such that ι^t(y′) = 0. Then from the fact that y′ ∘ ι : X → ℂ is zero we deduce that y′|_X = 0. Since y′ : Y → ℂ is continuous and X is dense in Y, we necessarily have that y′ vanishes on Y, forcing y′ = 0 in Y′. Keeping in mind that ι^t is linear, this implies that ι^t is injective.

Theorem 13.4 (Hahn–Banach Theorem). Let X be a vector space (over the complex numbers) and suppose p : X → [0, ∞) is a seminorm on X. Also assume that Y is a linear subspace of X and that ϕ : Y → ℂ is a linear functional dominated by p on Y, that is,

    |ϕ(y)| ≤ p(y),   ∀ y ∈ Y.   (13.1.16)

Then there exists a linear functional Φ : X → ℂ satisfying

    Φ(y) = ϕ(y) ∀ y ∈ Y,   and   |Φ(x)| ≤ p(x) ∀ x ∈ X.   (13.1.17)
In the next installment we shall specialize the above considerations to various specific settings used in the book. The reader is reminded that Ω ⊆ ℝⁿ denotes a fixed, nonempty, arbitrary open set.

The Topological Vector Space E(Ω). By τ we denote the topology on C^∞(Ω) generated by the following family of seminorms:

    p_{K,m} : C^∞(Ω) → ℝ,   p_{K,m}(ϕ) := sup_{x∈K, α∈ℕ_0^n, |α|≤m} |∂^α ϕ(x)|,   ∀ ϕ ∈ C^∞(Ω),   (13.1.18)

where K ⊂ Ω is a compact set and m ∈ ℕ_0. Consequently, a sequence ϕ_j ∈ C^∞(Ω), j ∈ ℕ, converges in τ to a function ϕ ∈ C^∞(Ω) as j → ∞ if and only if for any compact set K ⊂ Ω and any m ∈ ℕ_0 one has

    lim_{j→∞} sup_{α∈ℕ_0^n, |α|≤m} sup_{x∈K} |∂^α(ϕ_j − ϕ)(x)| = 0.   (13.1.19)

We will use the notation

    E(Ω) = (C^∞(Ω), τ).   (13.1.20)

The space E(Ω) is locally convex and metrizable since its topology is defined by the countable family of seminorms {p_{K_m,m}}_{m∈ℕ_0}, where

    K_m is compact for each m ∈ ℕ_0, K_m ⊂ K̊_{m+1} ⊂ Ω, and Ω = ∪_{m=0}^{∞} K_m.   (13.1.21)

In addition, τ is independent of the family {K_m}_{m∈ℕ_0} with the above properties, and E(Ω) is complete. Thus, E(Ω) is a Fréchet space.

The Topological Vector Space E′(Ω). Based on the discussion of dual spaces for locally convex topological vector spaces [see (13.1.7) and the remarks preceding it], it follows that the dual space of E(Ω) is the collection of all linear functionals u : E(Ω) → ℂ for which there exist m ∈ ℕ, a compact set K ⊂ Ω, and a constant C > 0 such that

    |u(ϕ)| ≤ C sup_{x∈K} sup_{α∈ℕ_0^n, |α|≤m} |∂^α ϕ(x)|,   ∀ ϕ ∈ C^∞(Ω).   (13.1.22)

This dual space will be endowed with the weak∗-topology induced by E(Ω), and we denote the resulting topological space by E′(Ω). Hence, if for each ϕ ∈ C^∞(Ω) we consider the evaluation mapping F_ϕ taking any functional u from the dual of E(Ω) into the number F_ϕ(u) := u(ϕ) ∈ ℂ, then the family F := {F_ϕ : ϕ ∈ E(Ω)} separates points in the dual of E(Ω), and the weak∗-topology on this dual is the F-topology on it. In particular, if {u_j}_{j∈ℕ} is a sequence in E′(Ω) and u ∈ E′(Ω), then

    u_j → u in E′(Ω) as j → ∞ ⟺ u_j(ϕ) → u(ϕ) as j → ∞, ∀ ϕ ∈ C^∞(Ω).   (13.1.23)
Moreover, a sequence {u_j}_{j∈ℕ} in E′(Ω) is Cauchy provided lim_{j,k→∞} (u_j − u_k)(ϕ) = 0 for every ϕ ∈ C^∞(Ω), and the weak∗-topology on the dual of E(Ω) is locally convex and complete.

The Topological Vector Space D_K(Ω). Let K ⊆ Ω be a compact set in ℝⁿ. Denote by D_K(Ω) the topological vector space of functions {f ∈ C^∞(Ω) : supp f ⊆ K} with the topology induced by τ, the topology on E(Ω). Then the topology on D_K(Ω) is generated by the family of seminorms

    p_m : D_K(Ω) → ℝ,   p_m(ϕ) := sup_{x∈K, α∈ℕ_0^n, |α|≤m} |∂^α ϕ(x)|,   ∀ ϕ ∈ C^∞(Ω) with supp ϕ ⊆ K,   (13.1.24)

where m ∈ ℕ_0. Hence, D_K(Ω) is a Fréchet space. In addition, a linear mapping u : D_K(Ω) → ℂ is continuous if and only if there exist m ∈ ℕ and a constant C > 0, both depending on K, such that

    |u(ϕ)| ≤ C sup_{x∈K, α∈ℕ_0^n, |α|≤m} |∂^α ϕ(x)|,   ∀ ϕ ∈ C^∞(Ω) with supp ϕ ⊆ K.   (13.1.25)

The Topological Vector Space D(Ω). The topological vector space C_0^∞(Ω) endowed with the inductive limit topology of the Fréchet spaces D_K(Ω) will be denoted by D(Ω). In this setting, we have

    ϕ_j → ϕ in D(Ω) as j → ∞ ⟺ ∃ K ⊆ Ω compact such that ϕ_j ∈ D_K(Ω) for all j, and ϕ_j → ϕ in D_K(Ω) as j → ∞.   (13.1.26)

The topology of D(Ω) is locally convex and complete but not metrizable (thus not Fréchet) and is the strict inductive limit of the topologies {D_{K_j}(Ω)}_{j∈ℕ}, where the family {K_j}_{j∈ℕ} is as in (13.1.21). We also record an important result that is proved in [59, Theorem 6.6, p. 155].

Theorem 13.5. Let X be a locally convex topological vector space and suppose the map Λ : D(Ω) → X is linear. Then Λ is continuous if and only if for every sequence {ϕ_j}_{j∈ℕ} in C_0^∞(Ω) satisfying ϕ_j → 0 in D(Ω) we have lim_{j→∞} Λ(ϕ_j) = 0 in X.

The Topological Vector Space D′(Ω). The dual space of D(Ω) endowed with the weak∗-topology is denoted by D′(Ω). Hence, if {u_j}_{j∈ℕ} is a sequence in D′(Ω) and u ∈ D′(Ω), then

    u_j → u in D′(Ω) as j → ∞ ⟺ u_j(ϕ) → u(ϕ) as j → ∞, ∀ ϕ ∈ C_0^∞(Ω).   (13.1.27)
In addition, a sequence {u_j}_{j∈ℕ} in D′(Ω) is called Cauchy provided lim_{j,k→∞} (u_j − u_k)(ϕ) = 0 for every ϕ ∈ C_0^∞(Ω). The weak∗-topology on the dual of D(Ω) is locally convex and complete, and an inspection of this topology reveals that it coincides with the topology defined by the family of seminorms

    D′(Ω) ∋ u ↦ max_{1≤j≤m} |u(ϕ_j)|,   m ∈ ℕ, ϕ_1, ..., ϕ_m ∈ D(Ω).   (13.1.28)

The Topological Vector Space S(ℝⁿ). The Schwartz class of rapidly decreasing functions is the vector space

    S(ℝⁿ) := {ϕ ∈ C^∞(ℝⁿ) : ∀ α, β ∈ ℕ_0^n, sup_{x∈ℝⁿ} |x^β ∂^α ϕ(x)| < ∞},   (13.1.29)

endowed with the topology generated by the family of seminorms {p_{k,m}}_{k,m∈ℕ_0} defined by

    p_{k,m} : S(ℝⁿ) → ℝ,   p_{k,m}(ϕ) := sup_{x∈ℝⁿ, |α|≤m, |β|≤k} |x^β ∂^α ϕ(x)|,   ∀ ϕ ∈ S(ℝⁿ).   (13.1.30)

Hence, the topology generated by the family of seminorms {p_{k,m}}_{k,m∈ℕ_0} on S(ℝⁿ) is locally convex and metrizable, and since it is also complete, the space S(ℝⁿ) is Fréchet. Moreover, a sequence ϕ_j ∈ S(ℝⁿ), j ∈ ℕ, converges in S(ℝⁿ) to a function ϕ ∈ S(ℝⁿ) as j → ∞ if and only if for every m, k ∈ ℕ_0 one has

    lim_{j→∞} sup_{x∈ℝⁿ, |α|≤m, |β|≤k} |x^β ∂^α(ϕ_j − ϕ)(x)| = 0.   (13.1.31)

The Topological Vector Space S′(ℝⁿ). By the discussion about dual spaces for locally convex topological vector spaces, it follows that the dual space of S(ℝⁿ) is the collection of all linear functionals u : S(ℝⁿ) → ℂ for which there exist m, k ∈ ℕ_0 and a finite constant C > 0 such that

    |u(ϕ)| ≤ C sup_{x∈ℝⁿ} sup_{α,β∈ℕ_0^n, |α|≤m, |β|≤k} |x^β ∂^α ϕ(x)|,   ∀ ϕ ∈ S(ℝⁿ).   (13.1.32)

We endow the dual of S(ℝⁿ) with the weak∗-topology and denote the resulting locally convex topological vector space by S′(ℝⁿ). Hence, if {u_j}_{j∈ℕ} is a sequence in S′(ℝⁿ) and u ∈ S′(ℝⁿ), then

    u_j → u in S′(ℝⁿ) as j → ∞ ⟺ u_j(ϕ) → u(ϕ) as j → ∞, ∀ ϕ ∈ S(ℝⁿ).   (13.1.33)

Also, a sequence {u_j}_{j∈ℕ} in S′(ℝⁿ) is called Cauchy provided lim_{j,k→∞} (u_j − u_k)(ϕ) = 0 for every ϕ ∈ S(ℝⁿ).
13.2 Summary of Basic Results from Calculus, Measure Theory, and Topology
Proposition 13.6 (Multinomial theorem). If x = (x_1, ..., x_n) ∈ ℝⁿ and N ∈ ℕ are arbitrary, then

    (Σ_{j=1}^{n} x_j)^N = Σ_{|α|=N} (N!/α!) x^α.   (13.2.1)

Theorem 13.7 (Binomial theorem). For any x, y ∈ ℂⁿ and any γ ∈ ℕ_0^n we have

    (x + y)^γ = Σ_{α+β=γ} (γ!/(α!β!)) x^α y^β   (13.2.2)

(with the convention that z^0 := 1 for each z ∈ ℂ). In the particular case when x = (1, ..., 1) ∈ ℂⁿ and y = (−1, ..., −1) ∈ ℂⁿ, formula (13.2.2) yields

    0 = Σ_{α+β=γ} (γ!/(α!β!)) (−1)^{|β|},   ∀ γ ∈ ℕ_0^n with |γ| > 0.   (13.2.3)
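Both (13.2.1) and (13.2.3) are easy to verify numerically. The short Python check below (an illustration, not part of the text) enumerates multi-indices with itertools for n = 3:

```python
import itertools, math

n, N = 3, 4
x = (1.3, -0.7, 2.1)

# Multinomial theorem (13.2.1): (x_1 + ... + x_n)^N = sum_{|alpha| = N} N!/alpha! x^alpha
lhs = sum(x) ** N
rhs = 0.0
for alpha in itertools.product(range(N + 1), repeat=n):
    if sum(alpha) == N:
        coeff = math.factorial(N)
        term = 1.0
        for xi, ai in zip(x, alpha):
            coeff //= math.factorial(ai)   # stays an exact integer at every step
            term *= xi ** ai
        rhs += coeff * term
assert abs(lhs - rhs) < 1e-9

# (13.2.3): sum_{alpha + beta = gamma} gamma!/(alpha! beta!) (-1)^{|beta|} = 0 when |gamma| > 0
gamma = (2, 1, 0)
s = 0
for alpha in itertools.product(*(range(g + 1) for g in gamma)):
    beta = tuple(g - a for g, a in zip(gamma, alpha))
    c = 1
    for g, a, b in zip(gamma, alpha, beta):
        c *= math.factorial(g) // (math.factorial(a) * math.factorial(b))
    s += c * (-1) ** sum(beta)
assert s == 0
```

The alternating sum vanishes coordinatewise, since it factors as a product of one-dimensional binomial sums (1 − 1)^{γ_i}.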
Proposition 13.8 (Leibniz's formula). Suppose that U ⊆ ℝⁿ is an open set, N ∈ ℕ, and f, g : U → ℂ are two functions of class C^N in U. Then

    ∂^α(fg) = Σ_{β≤α} (α!/(β!(α−β)!)) (∂^β f)(∂^{α−β} g)   in U,   (13.2.4)

for every multi-index α ∈ ℕ_0^n of length ≤ N. It is useful to note that for each α, β ∈ ℕ_0^n and x ∈ ℝⁿ,

    ∂^β(x^α) = (α!/(α−β)!) x^{α−β} if β ≤ α,   and   ∂^β(x^α) = 0 otherwise.   (13.2.5)
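A one-dimensional numerical check of (13.2.4) and (13.2.5) (an illustration, not from the text), with f(x) = x^3 and g(x) = x^2, so that fg = x^5 and the k-th derivative of x^m is m!/(m−k)! x^{m−k}:

```python
import math

def d_monomial(m, k, x):
    # (13.2.5) in one dimension: d^k/dx^k (x^m) = m!/(m-k)! x^(m-k) if k <= m, else 0
    if k > m:
        return 0.0
    return math.factorial(m) // math.factorial(m - k) * x ** (m - k)

x0, a, b, k = 1.7, 3, 2, 2   # f = x^a, g = x^b, differentiate k times

direct = d_monomial(a + b, k, x0)  # d^k (x^{a+b}) computed directly
leibniz = sum(math.comb(k, j) * d_monomial(a, j, x0) * d_monomial(b, k - j, x0)
              for j in range(k + 1))  # Leibniz sum, cf. (13.2.4)
assert abs(direct - leibniz) < 1e-9
```

In one variable the multi-index coefficient α!/(β!(α−β)!) is just the binomial coefficient, here supplied by `math.comb`.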
Theorem 13.9 (Taylor's formula). Assume U ⊆ ℝⁿ is an open convex set, and that N ∈ ℕ. Also, suppose that f : U → ℂ is a function of class C^{N+1} on U. Then for every x, y ∈ U one has

    f(x) = Σ_{|α|≤N} (1/α!) (x − y)^α (∂^α f)(y)
      + Σ_{|α|=N+1} ((N+1)/α!) ∫_0^1 (1 − t)^N (x − y)^α (∂^α f)(tx + (1 − t)y) dt.   (13.2.6)

In particular, for each x, y ∈ U there exists θ ∈ (0, 1) with the property that

    f(x) = Σ_{|α|≤N} (1/α!) (x − y)^α (∂^α f)(y) + Σ_{|α|=N+1} (1/α!) (x − y)^α (∂^α f)(θx + (1 − θ)y).   (13.2.7)
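In one dimension the remainder in (13.2.6) reduces to ((x − y)^{N+1}/N!) ∫_0^1 (1 − t)^N f^{(N+1)}(y + t(x − y)) dt, since the only multi-index with |α| = N + 1 is α = N + 1 and (N+1)/α! = 1/N!. A numerical check with f = exp (an illustration, not part of the text):

```python
import math

f = math.exp            # every derivative of exp is exp itself
x, y, N = 1.2, 0.3, 5

# degree-N Taylor polynomial of f about y, evaluated at x
poly = sum(f(y) * (x - y) ** k / math.factorial(k) for k in range(N + 1))

# integral remainder of (13.2.6), computed with a composite midpoint rule on [0, 1]
m = 20000
h = 1.0 / m
integral = sum((1 - (i + 0.5) * h) ** N * f(y + (i + 0.5) * h * (x - y))
               for i in range(m)) * h
remainder = (x - y) ** (N + 1) / math.factorial(N) * integral

assert abs(f(x) - (poly + remainder)) < 1e-9
```

The polynomial plus the integral remainder reproduces f(x) to quadrature accuracy.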
Theorem 13.10 (Rademacher's theorem). If f : ℝⁿ → ℝ is a Lipschitz function with Lipschitz constant less than or equal to M, for some M ∈ (0, ∞), then f is differentiable almost everywhere and ‖∂_k f‖_{L^∞(ℝⁿ)} ≤ M for each k = 1, ..., n.

Theorem 13.11 (Lebesgue's differentiation theorem). If f ∈ L^1_loc(ℝⁿ), then

    lim_{ε→0⁺} (1/|B(x, ε)|) ∫_{B(x,ε)} |f(y) − f(x)| dy = 0   for almost every x ∈ ℝⁿ.   (13.2.8)

In particular,

    lim_{ε→0⁺} (1/|B(x, ε)|) ∫_{B(x,ε)} f(y) dy = f(x)   for almost every x ∈ ℝⁿ.   (13.2.9)

Theorem 13.12 (Lebesgue's dominated convergence theorem). Let (X, μ) be a positive measure space and assume that g ∈ L^1(X, μ) is a nonnegative function. If {f_j}_{j∈ℕ} is a sequence of μ-measurable, complex-valued functions on X such that |f_j(x)| ≤ g(x) for μ-almost every x ∈ X and f(x) := lim_{j→∞} f_j(x) exists (in ℂ) for μ-almost every x ∈ X, then f ∈ L^1(X, μ) and lim_{j→∞} ∫_X |f_j − f| dμ = 0. In particular, lim_{j→∞} ∫_X f_j dμ = ∫_X f dμ.
Theorem 13.13 (Young's inequality). Assume that 1 ≤ p, q, r ≤ ∞ are such that 1/p + 1/q = 1/r + 1. Then for every f ∈ L^p(ℝⁿ) and g ∈ L^q(ℝⁿ) it follows that f ∗ g is well-defined almost everywhere in ℝⁿ, f ∗ g ∈ L^r(ℝⁿ), and ‖f ∗ g‖_{L^r(ℝⁿ)} ≤ ‖f‖_{L^p(ℝⁿ)} ‖g‖_{L^q(ℝⁿ)}.

Theorem 13.14. Let X and Y be two Hausdorff topological spaces.
(a) If Λ : X → Y is continuous, then Λ is sequentially continuous.
(b) If Λ : X → Y is sequentially continuous and X is metrizable, then Λ is continuous.
For a proof of Theorem 13.14 see [59, Theorem, p. 395].

Definition 13.15. A rigid transformation, or isometry, of the Euclidean space ℝⁿ is any distance preserving mapping of ℝⁿ, that is, any function T : ℝⁿ → ℝⁿ satisfying

    |T(x) − T(y)| = |x − y|,   ∀ x, y ∈ ℝⁿ.   (13.2.10)
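A rigid transformation in the sense of Definition 13.15 is easy to illustrate numerically. The Python sketch below (not part of the text) builds a planar rotation composed with a translation and checks the distance-preserving property (13.2.10) on a pair of points:

```python
import math

theta = 0.7
A = [[math.cos(theta), -math.sin(theta)],   # 2x2 orthogonal (rotation) matrix
     [math.sin(theta),  math.cos(theta)]]
x0 = (3.0, -1.0)                            # translation part

def T(p):
    # T(x) = x0 + A x: a translation composed with an orthogonal map
    return tuple(x0[i] + sum(A[i][j] * p[j] for j in range(2)) for i in range(2))

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

p, q = (1.0, 2.0), (-0.5, 4.0)
assert abs(dist(T(p), T(q)) - dist(p, q)) < 1e-12
```

This is exactly the structure singled out by the characterization that follows: every rigid transformation is of this translation-plus-orthogonal-matrix form.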
A rigid transformation of Rn is any distance preserving mapping of Rn , i.e., any function T : Rn → Rn satisfying |T x−T y| = |x−y| for every x, y ∈ Rn . The rigid transformations of the Euclidean space Rn are precisely those obtained by composing a translation with a mapping in Rn given by an orthogonal matrix. In other words, a mapping T : Rn → Rn is a rigid transformation of Rn if and only if there exist x0 ∈ Rn and an orthogonal matrix A : Rn → Rn with the property that (13.2.11) T (x) = x0 + Ax, ∀ x ∈ Rn . For a proof of the following version of the Arzel`a–Ascoli theorem see [57, Corollary 34, p. 179]. Theorem 13.16 (Arzel` a–Ascoli theorem). Let F be an equicontinuous family of real-valued functions on a separable space X. Then each sequence {fj }j∈N in F which is bounded at each point has a subsequence {fjk }k∈N that converges pointwise to a continuous function, the converges being uniform on each compact subset of X. Theorem 13.17 (Riesz’s representation theorem for positive functionals). Let X be a locally compact Hausdorff topological space and Λ a positive linear functional on the space of continuous, compactly supported functions on X (denoted by C00 (X)). Then there exists a unique σ-algebra M on X, which contains all Borel sets on X, and a unique measure μ : M → [0, ∞] that represents Λ, that is, the following hold:
(i) Λf = ∫_X f dμ for every continuous, compactly supported function f on X;
(ii) μ(K) < ∞ for every compact K ⊂ X;
(iii) For every E ∈ M we have μ(E) = inf{μ(V) : E ⊂ V, V open};
(iv) μ(E) = sup{μ(K) : K ⊂ E, K compact} for every open set E and for every E ∈ M with μ(E) < ∞;
(v) If E ∈ M, A ⊂ E, and μ(E) = 0, then μ(A) = 0.

Theorem 13.18 (Riesz's representation theorem for complex functionals). Let X be a locally compact Hausdorff topological space and consider the space of continuous functions on X vanishing at infinity, that is,

Coo(X) := {f ∈ C^0(X) : ∀ ε > 0, ∃ compact K ⊂ X such that |f(x)| < ε for x ∈ X \ K}.    (13.2.12)

Then Coo(X) is the closure in the uniform norm of C00(X) and for every bounded linear functional Λ : Coo(X) → C there exists a unique regular complex Borel measure μ on X such that Λf = ∫_X f dμ for every f ∈ Coo(X) and ‖Λ‖ = |μ|(X).
Theorem 13.19 (Riesz's representation theorem for locally bounded functionals). Let X be a locally compact Hausdorff topological space and assume that Λ : C00(X) → R is a linear functional that is locally bounded, in the sense that for each compact set K ⊂ X there exists a constant C_K ∈ (0, ∞) such that

|Λf| ≤ C_K sup_{x∈K} |f(x)|,  ∀ f ∈ C00(X) with supp f ⊆ K.    (13.2.13)

Then there exist two measures μ_1, μ_2, taking Borel sets from X into [0, ∞] and satisfying properties (ii)–(iv) in Theorem 13.17, such that

Λf = ∫_X f dμ_1 − ∫_X f dμ_2  for every f ∈ C00(X).    (13.2.14)
The reader is warned that since both μ_1 and μ_2 are allowed to take the value ∞, their difference μ_1 − μ_2 is not well-defined in general. This being said, μ_1 − μ_2 is a well-defined finite signed measure on each compact subset of X.

Proposition 13.20 (Urysohn's lemma). If X is a locally compact Hausdorff space and K ⊂ U ⊂ X are such that K is compact and U is open, then there exists a function f ∈ C00(U) that satisfies f = 1 on K and 0 ≤ f ≤ 1.

Theorem 13.21 (Vitali's convergence theorem). Let (X, μ) be a positive measure space with μ(X) < ∞. Suppose {f_k}_{k∈N} is a sequence of functions in L^1(X, μ) and that f is a function on X (all complex-valued) satisfying:

(i) f_k(x) → f(x) for μ-almost every x ∈ X as k → ∞;
(ii) |f| < ∞ μ-almost everywhere in X;
(iii) {f_k}_{k∈N} is uniformly integrable, in the sense that for every ε > 0 there exists δ > 0 such that for every k ∈ N we have |∫_E f_k dμ| < ε whenever E ⊆ X is a μ-measurable set with μ(E) < δ.

Then f ∈ L^1(X, μ) and

lim_{k→∞} ∫_X |f_k − f| dμ = 0.    (13.2.15)

In particular, lim_{k→∞} ∫_X f_k dμ = ∫_X f dμ. See, for example, [58, p. 133].

Proposition 13.22. Let (X, μ) be a positive measure space and suppose that f ∈ L^1(X, μ). Then for every ε > 0 there exists δ > 0 such that for every μ-measurable set A ⊆ X satisfying μ(A) < δ we have ∫_A |f| dμ < ε.

Proof. Consider the measure λ := |f|μ on X. Then λ is absolutely continuous with respect to μ and the (ε, δ) characterization of absolute continuity of measures (see, e.g., [58, Theorem 6.11, p. 124]) yields the desired conclusion.
13.3 Custom-Designing Smooth Cut-Off Functions
Lemma 13.23. Let f : R → R be the function defined by

f(x) := e^{−1/x} if x > 0,  and  f(x) := 0 if x ≤ 0,  ∀ x ∈ R.    (13.3.1)

Then f is of class C^∞ on R.

Proof. Denote by C the collection of functions g : R → R for which there exists a polynomial P such that

g(x) := e^{−1/x} P(1/x) if x > 0,  and  g(x) := 0 if x ≤ 0,  ∀ x ∈ R.    (13.3.2)

Recall that if h : R → R is a continuous function that is differentiable on R \ {0} and for which there exists L ∈ R such that lim_{x→0⁻} h′(x) = L = lim_{x→0⁺} h′(x), then h is also differentiable at the origin and h′(0) = L. An immediate consequence of this fact is that any g ∈ C is differentiable and g′ ∈ C. In turn, this readily gives that any g ∈ C is of class C^∞ on R. Since f defined in (13.3.1) clearly belongs to C, it follows that f is of class C^∞ on R.
Lemma 13.24. The function φ : R^n → R defined by

φ(x) := C e^{1/(|x|²−1)} if x ∈ B(0, 1),  and  φ(x) := 0 if x ∈ R^n \ B(0, 1),    (13.3.3)

where C := (ω_{n−1} ∫_0^1 e^{1/(ρ²−1)} ρ^{n−1} dρ)^{−1} ∈ (0, ∞), satisfies the following properties:

φ ∈ C^∞(R^n),  φ ≥ 0,  supp φ ⊆ B(0, 1),  and  ∫_{R^n} φ(x) dx = 1.    (13.3.4)

Proof. That φ ≥ 0 and supp φ ⊆ B(0, 1) is immediate from its definition. Also, since φ(x) = C f(1 − |x|²) for x ∈ R^n, where f is as in (13.3.1), invoking Lemma 13.23 it follows that φ ∈ C^∞(R^n). Finally, the condition ∫_{R^n} φ(x) dx = 1 follows upon observing that, based on (13.8.9), we have ∫_{B(0,1)} e^{1/(|x|²−1)} dx = 1/C.

Proposition 13.25. Let F_0, F_1 ⊂ R^n be two nonempty sets with the property that dist(F_0, F_1) > 0. Then there exists a function ψ : R^n → R with the following properties:

ψ ∈ C^∞(R^n),  0 ≤ ψ ≤ 1,  ψ = 0 on F_0,  ψ = 1 on F_1,  and
∀ α ∈ N_0^n ∃ C_α ∈ (0, ∞) such that |∂^α ψ(x)| ≤ C_α / dist(F_0, F_1)^{|α|},  ∀ x ∈ R^n.    (13.3.5)

Proof. Let r := dist(F_0, F_1) > 0 and set F̃_1 := {x ∈ R^n : dist(x, F_1) ≤ r/4}. Also, with φ as in Lemma 13.24, define the function θ(x) := (4/r)^n φ(4x/r) for x ∈ R^n. Then

θ ∈ C^∞(R^n),  θ ≥ 0,  supp θ ⊆ B(0, r/4),  and  ∫_{R^n} θ(x) dx = 1.    (13.3.6)

We claim that the function ψ : R^n → R defined by ψ := χ_{F̃_1} ∗ θ has the desired properties. To see why this is true, note that

ψ(x) = ∫_{F̃_1} θ(x − y) dy,  ∀ x ∈ R^n,    (13.3.7)

so from the properties of θ it is immediate that ψ ∈ C^∞(R^n) and 0 ≤ ψ ≤ 1. Furthermore, for α ∈ N_0^n and x ∈ R^n we may write

|∂^α ψ(x)| ≤ ∫_{F̃_1} (4/r)^{|α|} (4/r)^n |(∂^α φ)(4(x − y)/r)| dy ≤ (4/r)^{|α|} ∫_{R^n} |∂^α φ(y)| dy = C_α r^{−|α|} = C_α / dist(F_0, F_1)^{|α|},    (13.3.8)

where C_α := 4^{|α|} ∫_{R^n} |∂^α φ(y)| dy is a positive, finite number independent of r. We are left with checking the fact that ψ = j on F_j for j = 0, 1. First, if x ∈ F_0, then |x − y| ≥ 3r/4 for every y ∈ F̃_1, hence θ(x − y) = 0 for every y ∈ F̃_1, which when combined with (13.3.7) implies ψ(x) = 0. Second, if x ∈ F_1, then B(x, r/4) ⊆ F̃_1. Since supp θ(x − ·) ⊆ B(x, r/4), the latter implies ψ(x) = ∫_{R^n} θ(x − y) dy = 1. The proof of the proposition is complete.
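The construction in Proposition 13.25 (mollify the indicator of a slightly enlarged F_1 by the scaled bump θ) can be carried out concretely on a grid. The following one-dimensional sketch is illustrative only; the intervals chosen for F_0 and F_1 are arbitrary:

```python
import numpy as np

# 1-D illustration of Proposition 13.25: psi = chi_{enlarged F1} * theta is
# smooth, equals 1 on F1, equals 0 on F0, and takes values in [0, 1].
F1 = (0.0, 1.0)                 # F1 = [0, 1]
F0 = (2.0, 3.0)                 # F0 = [2, 3], so r = dist(F0, F1) = 1
r = 1.0

x = np.linspace(-1, 4, 5001)
dx = x[1] - x[0]

def bump(t):                    # the function phi of Lemma 13.24 (n = 1), unnormalized
    out = np.zeros_like(t)
    inside = np.abs(t) < 1
    out[inside] = np.exp(1.0/(t[inside]**2 - 1.0))
    return out

half = int(0.3 / dx)            # kernel grid, symmetric about 0
tk = np.arange(-half, half + 1) * dx
theta = bump(4*tk/r) * (4/r)    # theta(x) = (4/r)^n phi(4x/r), n = 1
theta /= theta.sum() * dx       # normalize so the discrete integral is 1

chi = ((x >= F1[0] - r/4) & (x <= F1[1] + r/4)).astype(float)  # enlarged F1
psi = np.convolve(chi, theta, mode="same") * dx

on_F1 = (x >= F1[0]) & (x <= F1[1])
on_F0 = (x >= F0[0]) & (x <= F0[1])
assert np.allclose(psi[on_F1], 1.0, atol=1e-8)
assert np.allclose(psi[on_F0], 0.0, atol=1e-8)
assert psi.min() >= 0.0 and psi.max() <= 1.0 + 1e-8
```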
A trivial yet useful consequence of the above result is as follows.

Proposition 13.26. If U ⊆ R^n is open and K ⊂ U is compact, then there exists a function ψ : R^n → R that is of class C^∞, satisfies 0 ≤ ψ(x) ≤ 1 for every x ∈ R^n and ψ(x) = 1 for every x ∈ K, and which has compact support contained in U.

Proof. Since K is a compact set contained in the open set U, we have r := dist(K, R^n \ U) ∈ (0, ∞). Proposition 13.25, applied with F_1 := K and F_0 := {x ∈ R^n : dist(x, K) > r/2}, then yields the desired function ψ.
13.4 Partition of Unity
Lemma 13.27. If C ⊂ R^n is compact and U ⊆ R^n is an open set such that C ⊂ U, then there exists a compact set D ⊆ R^n such that C ⊂ D̊ ⊂ D ⊂ U.

Proof. Let V := U^c ∩ B̄(0, R), where R > 0 is large enough so that U ⊂ B(0, R). Then V is compact and disjoint from C, so r := dist(V, C) = inf_{x∈C, y∈V} |x − y| > 0. Hence, if we set

D := ⋃_{x∈C} B̄(x, r/4),    (13.4.1)

then D is compact and C ⊂ D̊ ⊂ D ⊂ U.

Lemma 13.28. Suppose that K ⊆ R^n is a compact set and that {O_j}_{1≤j≤k} is a finite open cover of K. Then there are compact sets D_j ⊂ O_j, 1 ≤ j ≤ k, with the property that

K ⊂ ⋃_{j=1}^k D̊_j.    (13.4.2)
Proof. Set C_1 := K \ ⋃_{j=2}^k O_j ⊆ O_1. Since C_1 is compact, Lemma 13.27 shows that there exists a compact set D_1 with the property that C_1 ⊂ D̊_1 ⊂ D_1 ⊂ O_1. Next, proceeding inductively, suppose that m ∈ N is such that 1 ≤ m < k and that compact sets D_1, …, D_m have been constructed with the property that K ⊂ (⋃_{j=1}^m D̊_j) ∪ (⋃_{j=m+1}^k O_j). Introduce

C_{m+1} := K \ ((⋃_{j=1}^m D̊_j) ∪ (⋃_{j=m+2}^k O_j)).    (13.4.3)
Clearly, C_{m+1} is a compact subset of O_{m+1}. By once again invoking Lemma 13.27, there exists a compact set D_{m+1} ⊂ O_{m+1} such that C_{m+1} ⊂ D̊_{m+1}. After k iterations, this procedure yields a family of compact sets D_1, …, D_k which have all the desired properties.

Theorem 13.29 (Partition of unity for compact sets). Let K ⊂ R^n be a compact set, and let {O_j}_{1≤j≤N} be a finite open cover of K. Then there exists a finite collection of C^∞ functions ϕ_j : R^n → R, 1 ≤ j ≤ N, satisfying the following properties:

(i) For every 1 ≤ j ≤ N, the set supp(ϕ_j) is compact and contained in O_j;
(ii) For every 1 ≤ j ≤ N, one has 0 ≤ ϕ_j ≤ 1;
(iii) Σ_{j=1}^N ϕ_j(x) = 1 for every x ∈ K.

The family {ϕ_j : 1 ≤ j ≤ N} is called a partition of unity subordinate to the cover {O_j}_{1≤j≤N} of K.

Proof. Let {O_j}_{1≤j≤N} be any finite open cover for K. From Lemma 13.28 we know that there exist compact sets D_j ⊆ O_j, 1 ≤ j ≤ N, such that K ⊂ ⋃_{j=1}^N D̊_j.
By Proposition 13.26, for each 1 ≤ j ≤ N, choose a C^∞ function η_j : R^n → [0, ∞) that is positive on D_j and has compact support in O_j. It follows that Σ_{j=1}^N η_j(x) > 0 for all x ∈ ⋃_{j=1}^N D̊_j, so we can define for each j ∈ {1, …, N} the function

ψ_j : ⋃_{i=1}^N D̊_i → R,  ψ_j(x) := η_j(x) / Σ_{k=1}^N η_k(x),  ∀ x ∈ ⋃_{i=1}^N D̊_i.    (13.4.4)

By Lemma 13.28, there exists a compact set U ⊆ R^n with K ⊆ Ů ⊆ U ⊆ ⋃_{j=1}^N D̊_j. We apply Proposition 13.26 to obtain a C^∞ function f : R^n → [0, 1] that satisfies f(x) = 1 for x ∈ K and has compact support contained in Ů. Then for each j ∈ {1, …, N} we define the function ϕ_j := f ψ_j, acting from R^n into R. It is not hard to see that each ϕ_j is C^∞ in R^n, has compact support contained in O_j, 0 ≤ ϕ_j ≤ 1, and Σ_{1≤j≤N} ϕ_j(x) = 1 for all x ∈ K.
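The key normalization step (13.4.4) can be sketched numerically in one dimension: bump functions supported in the cover sets are divided by their sum, producing functions that sum to 1 on the compact set. The cover intervals below are arbitrary illustrative choices:

```python
import numpy as np

# 1-D sketch of (13.4.4): normalize bump functions to a partition of unity.
x = np.linspace(0, 1, 1001)            # sample K = [0, 1]
covers = [(-0.2, 0.6), (0.3, 1.2)]     # O_1, O_2: open intervals covering K

def eta(x, a, b):
    # smooth bump supported in (a, b), built from the function in (13.3.1)
    t = (2*x - (a + b)) / (b - a)      # map (a, b) onto (-1, 1)
    out = np.zeros_like(x)
    inside = np.abs(t) < 1
    out[inside] = np.exp(1.0/(t[inside]**2 - 1.0))
    return out

etas = np.array([eta(x, a, b) for a, b in covers])
total = etas.sum(axis=0)
assert np.all(total > 0)               # the bumps cover K, so the sum is positive
phis = etas / total                    # the partition of unity on K
assert np.allclose(phis.sum(axis=0), 1.0)
assert np.all((phis >= 0) & (phis <= 1))
```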
Definition 13.30.

(i) A family {F_j}_{j∈I} of subsets of R^n is said to be locally finite in E ⊆ R^n provided every x ∈ E has a neighborhood O ⊆ R^n with the property that the set {j ∈ I : F_j ∩ O ≠ ∅} is finite.
(ii) Given a collection of functions f_j : Ω → R, j ∈ I, defined in some fixed subset Ω of R^n, the sum Σ_{j∈I} f_j is called locally finite in E ⊆ R^n provided the family of sets {x ∈ Ω : f_j(x) ≠ 0}, indexed by j ∈ I, is locally finite in E.

Exercise 13.31. Show that a family {F_j}_{j∈I} of subsets of R^n is locally finite in the open set E ⊆ R^n if and only if for every compact K ⊆ E the collection {j ∈ I : F_j ∩ K ≠ ∅} is finite.

Exercise 13.32. Show that if the family {F_j}_{j∈I} of subsets of R^n is locally finite in E ⊆ R^n, then {F̄_j}_{j∈I} is also locally finite in E.

Exercise 13.33. Show that if the family {F_j}_{j∈I} of closed subsets of R^n is locally finite in E ⊆ R^n, then ⋃_{j∈I}(E ∩ F_j) is a relatively closed subset of E.

Proof. Let x_* ∈ E be such that x_* ∉ F_j for every j ∈ I. Then there exists r > 0 with the property that B(x_*, r) ∩ F_j = ∅ for every j ∈ I \ I_*, where I_* is a finite subset of I. Hence, by eventually further decreasing r, it can be assumed that B(x_*, r) ∩ F_j = ∅ for every j ∈ I_* as well. Thus, B(x_*, r) ∩ E is a relative neighborhood of x_* in E that is disjoint from ⋃_{j∈I} F_j. This proves that E \ ⋃_{j∈I} F_j is relatively open in E; hence ⋃_{j∈I}(E ∩ F_j) is a relatively closed subset of E.
Theorem 13.34 (Partition of unity for arbitrary open covers). Let {O_k}_{k∈I} be an arbitrary family of open sets in R^n and set Ω := ⋃_{k∈I} O_k. Then there exists an at most countable collection {ϕ_j}_{j∈J} of C^∞ functions ϕ_j : Ω → R satisfying the following properties:

(i) For every j ∈ J there exists k ∈ I such that ϕ_j is compactly supported in O_k;
(ii) For every j ∈ J, one has 0 ≤ ϕ_j ≤ 1 in Ω;
(iii) The family of sets {x ∈ Ω : ϕ_j(x) ≠ 0}, indexed by j ∈ J, is locally finite in Ω;
(iv) Σ_{j∈J} ϕ_j(x) = 1 for every x ∈ Ω.

The family {ϕ_j}_{j∈J} is called a partition of unity subordinate to the family {O_k}_{k∈I}.

Proof. Start by defining

Ω_j := {x ∈ Ω : |x| ≤ j and dist(x, ∂Ω) ≥ 1/j},  j ∈ N.    (13.4.5)

Then Ω = ⋃_{j=1}^∞ Ω_j and

Ω_j ⊆ R^n is compact,  Ω_j ⊂ Ω̊_{j+1}  for every j ∈ N.    (13.4.6)
Proceed now to define the compact sets

K_2 := Ω_2,  K_j := Ω_j \ Ω̊_{j−1}  for every j ≥ 3.    (13.4.7)

As such, from (13.4.7) and (13.4.6) we have

K_j = Ω_j ∩ (Ω̊_{j−1})^c ⊆ Ω̊_{j+1} ∩ (Ω_{j−2})^c  for every j ≥ 3.    (13.4.8)

Finally, we define the following families of open sets:

O_2 := {O_k ∩ Ω̊_3 : k ∈ I},  O_j := {O_k ∩ (Ω_{j+1} \ Ω_{j−2})° : k ∈ I}  ∀ j ≥ 3.    (13.4.9)

Making use of (13.4.6), for every j ≥ 3 we have that

(Ω_{j+1} \ Ω_{j−2})° = (Ω_{j+1} ∩ (Ω_{j−2})^c)° = Ω̊_{j+1} ∩ ((Ω_{j−2})^c)° = Ω̊_{j+1} ∩ (Ω̄_{j−2})^c = Ω̊_{j+1} ∩ (Ω_{j−2})^c,    (13.4.10)

where the last equality uses the fact that Ω_{j−2} is closed. Hence, (13.4.7), (13.4.9), and (13.4.10) imply that O_j is an open cover for K_j for every j ≥ 3. Also, from the definitions of K_2 and O_2 and (13.4.6), we obtain that O_2 is an open cover for K_2. Since the K_j's are compact, these open covers can be refined to finite subcovers in each case. As such, for each
j = 2, 3, …, we can apply Theorem 13.29 to obtain a finite partition of unity {ϕ : ϕ ∈ Φ_j} for K_j subordinate to O_j. Also note that, due to (13.4.10) and (13.4.9), we necessarily have that, for each j ∈ {2, 3, …}, Ω_j ∩ O = ∅ for every O ∈ O_k and every k ∈ N satisfying k ≥ j + 2. This ensures that the family {supp ϕ : ϕ ∈ Φ_j, j ≥ 2} is locally finite in Ω, so we can define

s(x) := Σ_{j≥2} Σ_{ϕ∈Φ_j} ϕ(x)  for every x ∈ Ω.    (13.4.11)
Given that differentiability is a local property, it follows that s is of class C^∞ in Ω. Moreover, note that s(x) > 0 for every x ∈ Ω, since 0 ≤ ϕ ≤ 1 for all ϕ ∈ Φ_j, j = 2, 3, …, and if x ∈ K_j for some j ∈ {2, 3, …}, then Σ_{ϕ∈Φ_j} ϕ(x) = 1. Consequently, 1/s is also a C^∞ function in Ω. It is then clear that the collection Φ := {ϕ/s : ϕ ∈ Φ_j, j = 2, 3, …} is a partition of unity subordinate to the family of open sets {O_k}_{k∈I} (in the sense described in the statement of the theorem).

Theorem 13.35 (Partition of unity with preservation of indexes). Let {O_k}_{k∈I} be an arbitrary family of open sets in R^n and set Ω := ⋃_{k∈I} O_k. Then there exists
a collection {ψ_k}_{k∈I} of C^∞ functions ψ_k : Ω → R satisfying the following properties:

(i) For every k ∈ I the function ψ_k vanishes outside of a relatively closed subset of O_k;
(ii) For every k ∈ I, one has 0 ≤ ψ_k ≤ 1 in Ω;
(iii) The family of sets {x ∈ Ω : ψ_k(x) ≠ 0}, indexed by k ∈ I, is locally finite in Ω;
(iv) Σ_{k∈I} ψ_k(x) = 1 for every x ∈ Ω.

Proof. Let {ϕ_j}_{j∈J} be a partition of unity subordinate to the family {O_k}_{k∈I}, and denote by f : J → I a function with the property that, for every j ∈ J, the function ϕ_j is compactly supported in O_{f(j)}. That this exists is guaranteed by Theorem 13.34. For every k ∈ I then define

ψ_k(x) := Σ_{j∈f^{−1}({k})} ϕ_j(x),  ∀ x ∈ Ω.    (13.4.12)

Note that the sum is locally finite in Ω, hence ψ_k : Ω → R is a well-defined nonnegative function, of class C^∞ in Ω, for every k ∈ I. In addition, the result from Exercise 13.33 shows that, for every k ∈ I, ψ_k vanishes outside a relatively
closed subset of O_k. Furthermore, since {f^{−1}({k})}_{k∈I} is a partition of J into mutually disjoint subsets, we may compute

Σ_{k∈I} ψ_k(x) = Σ_{k∈I} Σ_{j∈f^{−1}({k})} ϕ_j(x) = Σ_{j∈J} ϕ_j(x) = 1,  ∀ x ∈ Ω.    (13.4.13)

Incidentally, this also shows that, necessarily, 0 ≤ ψ_k(x) ≤ 1 for every k ∈ I and x ∈ Ω. Finally, the fact that the family of sets {x ∈ Ω : ψ_k(x) ≠ 0}, k ∈ I, is locally finite in Ω is inherited from the corresponding property of the ϕ_j's.
13.5 The Gamma and Beta Functions
The gamma function is defined as

Γ(z) := ∫_0^∞ t^{z−1} e^{−t} dt,  for z ∈ C, Re z > 0.    (13.5.1)

It is easy to check that Γ(1) = 1, Γ(1/2) = √π, and, via integration by parts, that Γ(z + 1) = zΓ(z) for z ∈ C, Re z > 0. By analytic continuation, the function Γ is extended to a meromorphic function defined for all complex numbers z except for z = −n, n ∈ N_0, where the extended function has simple poles with residue (−1)^n/n!, and this extension satisfies

Γ(z + 1) = zΓ(z)  for z ∈ C \ {−n : n ∈ N_0}.    (13.5.2)

By induction it follows that for every n ∈ N we have

Γ(n) = (n − 1)!,    (13.5.3)

Γ(1/2 + n) = [1·3·5 ⋯ (2n − 1)]/2^n · √π = [(2n)!/(2^{2n} n!)] √π,    (13.5.4)

Γ(1/2 − n) = (−1)^n 2^n/[1·3·5 ⋯ (2n − 1)] · √π = [(−1)^n 2^{2n} n!/(2n)!] √π.    (13.5.5)
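Identities (13.5.3)–(13.5.5) are easy to cross-check numerically with the standard library's gamma function; the following sketch is illustrative only:

```python
import math

# Quick numerical check of (13.5.3)-(13.5.5).
for n in range(1, 8):
    assert math.isclose(math.gamma(n), math.factorial(n - 1))
    # (13.5.4): Gamma(1/2 + n) = (2n)!/(2^(2n) n!) * sqrt(pi)
    lhs = math.gamma(0.5 + n)
    rhs = math.factorial(2*n) / (2**(2*n) * math.factorial(n)) * math.sqrt(math.pi)
    assert math.isclose(lhs, rhs)
    # (13.5.5): Gamma(1/2 - n) = (-1)^n 2^(2n) n!/(2n)! * sqrt(pi)
    lhs = math.gamma(0.5 - n)
    rhs = (-1)**n * 2**(2*n) * math.factorial(n) / math.factorial(2*n) * math.sqrt(math.pi)
    assert math.isclose(lhs, rhs)
```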
The volume of the unit ball B(0, 1) in R^n, which we denote by |B(0, 1)|, and the surface area of the unit sphere in R^n, denoted here by ω_{n−1}, have the following formulas:

|B(0, 1)| = π^{n/2}/Γ(n/2 + 1),  ω_{n−1} = n|B(0, 1)| = 2π^{n/2}/Γ(n/2).    (13.5.6)

Next, consider the so-called beta function

B(z, w) := ∫_0^1 t^{z−1}(1 − t)^{w−1} dt,  Re z, Re w > 0.    (13.5.7)
Clearly B(z, w) = B(w, z). Making the change of variables t = u/(u + 1), u ∈ (0, ∞), it follows that

B(z, w) = ∫_0^∞ u^{w−1}/(u + 1)^{z+w} du    (13.5.8)

whenever Re z, Re w > 0. The basic identity relating the gamma and beta functions reads

B(z, w) = Γ(z)Γ(w)/Γ(z + w),  Re z, Re w > 0.    (13.5.9)

This is easily proved starting with (13.5.8), writing Γ(z + w) = ∫_0^∞ t^{z+w−1} e^{−t} dt and expressing B(z, w)Γ(z + w) as a double integral, then making the change of variables s := t/(u + 1). A useful consequence of identity (13.5.9) is the following formula:

∫_0^{π/2} (sin θ)^a (cos θ)^b dθ = (1/2) B((a+1)/2, (b+1)/2) = (1/2) Γ((a+1)/2) Γ((b+1)/2)/Γ((a+b+2)/2),  if a, b > −1.    (13.5.10)

Indeed, making the change of variables u := (sin θ)², the integral in the leftmost side of (13.5.10) becomes (1/2) ∫_0^1 u^{(a−1)/2}(1 − u)^{(b−1)/2} du. For further reference, let us also note here that

∫_0^π (sin θ)^a (cos θ)^b dθ = [1 + (−1)^b]/2 · B((a+1)/2, (b+1)/2) = [1 + (−1)^b]/2 · Γ((a+1)/2) Γ((b+1)/2)/Γ((a+b+2)/2),    (13.5.11)

whenever a, b > −1. This is proved by splitting the domain of integration into (0, π/2) ∪ (π/2, π), making a change of variables θ → θ − π/2 in the second integral, and invoking (13.5.10).
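The identities (13.5.9), (13.5.10), and the low-dimensional cases of (13.5.6) can all be tested numerically with midpoint-rule quadrature; the exponent values below are arbitrary illustrative choices:

```python
import math

# Numerical checks of (13.5.9), (13.5.10), and (13.5.6).
def beta_quad(z, w, N=100000):
    h = 1.0 / N
    return sum(((k+0.5)*h)**(z-1) * (1-(k+0.5)*h)**(w-1) for k in range(N)) * h

for z, w in [(2.0, 3.0), (1.5, 2.5)]:
    assert math.isclose(beta_quad(z, w),
                        math.gamma(z)*math.gamma(w)/math.gamma(z + w), rel_tol=1e-4)

def trig_quad(a, b, N=100000):
    h = (math.pi/2) / N
    return sum(math.sin((k+0.5)*h)**a * math.cos((k+0.5)*h)**b for k in range(N)) * h

a, b = 3.0, 2.0
rhs = 0.5 * math.gamma((a+1)/2)*math.gamma((b+1)/2)/math.gamma((a+b+2)/2)
assert math.isclose(trig_quad(a, b), rhs, rel_tol=1e-6)

# (13.5.6) against the familiar values: disk area, ball volume, sphere area
ball = lambda n: math.pi**(n/2) / math.gamma(n/2 + 1)
assert math.isclose(ball(2), math.pi) and math.isclose(ball(3), 4*math.pi/3)
assert math.isclose(2*math.pi**1.5/math.gamma(1.5), 4*math.pi)   # omega_2 in R^3
```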
13.6 Surfaces in R^n and Surface Integrals
Definition 13.36. Given n ≥ 2 and k ∈ N ∪ {∞}, a C^k surface (or, surface of class C^k) in R^n is a subset Σ of R^n with the property that for every x_* ∈ Σ there exists an open neighborhood U(x_*) such that

Σ ∩ U(x_*) = P(O),    (13.6.1)

where O is an open subset of R^{n−1} and P : O → R^n is an injective function of class C^k satisfying

rank[DP(u)] = n − 1  for all u = (u_1, …, u_{n−1}) ∈ O,    (13.6.2)

where DP is the Jacobian matrix of P, that is,

DP(u) = [D(P_1, …, P_n)/D(u_1, …, u_{n−1})](u),  u ∈ O.    (13.6.3)

The function P in (13.6.2) is called a local parametrization of class C^k near x_* and Σ ∩ U(x_*) a parametrizable patch. In the case when (13.6.1) holds globally, that is, in the case when Σ = P(O), we call P a global parametrization of the surface Σ.

Proposition 13.37. If O is an open subset of R^{n−1} and P : O → R^n satisfies (13.6.2)–(13.6.3) for k = 1, then P : O → P(O) is a homeomorphism.

Definition 13.38. Assume n ≥ 3. If v_1 = (v_{11}, …, v_{1n}), …, v_{n−1} = (v_{(n−1)1}, …, v_{(n−1)n}) are n − 1 vectors in R^n, their cross product is defined as

v_1 × v_2 × ⋯ × v_{n−1} := det
⎡ v_{11}      v_{12}      …  v_{1n}      ⎤
⎢ v_{21}      v_{22}      …  v_{2n}      ⎥
⎢ ⋮           ⋮               ⋮          ⎥
⎢ v_{(n−1)1}  v_{(n−1)2}  …  v_{(n−1)n}  ⎥
⎣ e_1         e_2         …  e_n         ⎦ ,    (13.6.4)

where the determinant is understood as computed by formally expanding it with respect to the last row, the result being a vector in R^n. More precisely,

v_1 × ⋯ × v_{n−1} := Σ_{j=1}^n (−1)^{j+1} det
⎡ v_{11}      …  v_{1(j−1)}      v_{1(j+1)}      …  v_{1n}      ⎤
⎢ ⋮                ⋮                 ⋮                ⋮          ⎥
⎣ v_{(n−1)1}  …  v_{(n−1)(j−1)}  v_{(n−1)(j+1)}  …  v_{(n−1)n}  ⎦ e_j.    (13.6.5)
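The cofactor expansion in (13.6.5) is straightforward to implement, and the resulting vector is orthogonal to each input vector (the determinant with a duplicated row vanishes). The following sketch is illustrative only; sign conventions aside, orthogonality is unaffected:

```python
import numpy as np

def cross_general(vectors):
    """Cross product of n-1 vectors in R^n via cofactor expansion along the
    formal last row of unit vectors, in the spirit of (13.6.4)-(13.6.5)."""
    V = np.asarray(vectors, dtype=float)          # shape (n-1, n)
    n = V.shape[1]
    out = np.zeros(n)
    for j in range(n):
        minor = np.delete(V, j, axis=1)           # drop column j
        out[j] = (-1)**j * np.linalg.det(minor)   # (-1)^{j+1} with 1-based j
    return out

rng = np.random.default_rng(2)
for n in (3, 4, 5):
    V = rng.standard_normal((n-1, n))
    w = cross_general(V)
    # the cross product is orthogonal to each of the input vectors
    assert np.allclose(V @ w, 0.0, atol=1e-9)

# consistency with the usual 3-D cross product
a, b = np.array([1.0, 2.0, 3.0]), np.array([-1.0, 0.5, 2.0])
assert np.allclose(cross_general([a, b]), np.cross(a, b))
```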
Definition 13.39. Let Σ ⊂ R^n, n ≥ 2, be a C^1 surface and assume that P : O → R^n, with O an open subset of R^{n−1}, is a local parametrization of Σ of class C^1 near some point x_* ∈ Σ. Also, suppose that f : Σ → R is a continuous function on Σ that vanishes outside of a compact subset of P(O). We then define

∫_Σ f(x) dσ(x) := ∫_O (f ∘ P)(u) |∂_1 P(u) × ⋯ × ∂_{n−1} P(u)| du_1 … du_{n−1}  if n ≥ 3,    (13.6.6)

and

∫_Σ f(x) dσ(x) := ∫_O (f ∘ P)(u) |P′(u)| du  if n = 2.    (13.6.7)

In (13.6.6), dσ stands for the surface measure (or, surface area element), whereas in (13.6.7), dσ stands for the arc-length measure.
13.7 Integration by Parts and Green's Formula
Definition 13.40. We say that a nonempty open set Ω ⊆ R^n, where n ≥ 2, is a C^k domain (or, a domain of class C^k), for some k ∈ N_0 ∪ {∞}, provided the following holds. For every point x_* ∈ ∂Ω there exist R > 0, an open interval I ⊂ R with 0 ∈ I, a rigid transformation T : R^n → R^n with T(x_*) = 0, along with a function φ of class C^k that maps B(0, R) ⊆ R^{n−1} into I with the property that φ(0) = 0 and, if C denotes the (open) cylinder B(0, R) × I ⊆ R^{n−1} × R = R^n, then

C ∩ T(Ω) = {x = (x′, x_n) ∈ C : x_n > φ(x′)},    (13.7.1)

C ∩ ∂T(Ω) = {x = (x′, x_n) ∈ C : x_n = φ(x′)},    (13.7.2)

C ∩ (T(Ω))^c = {x = (x′, x_n) ∈ C : x_n < φ(x′)}.    (13.7.3)
If Ω ⊆ R^n is a C^k domain for some k ∈ N_0 ∪ {∞}, then it may be easily verified that ∂Ω is a C^k surface.

Theorem 13.41 (Integration by parts formula). Suppose Ω ⊂ R^n is a domain of class C^1 and ν = (ν_1, …, ν_n) denotes its outward unit normal. Let k ∈ {1, …, n} and assume f, g ∈ C^1(Ω) ∩ C^0(Ω̄) are such that ∂_k f, ∂_k g ∈ L^1(Ω) and there exists a compact set K ⊂ R^n with the property that f = 0 on Ω \ K. Then

∫_Ω (∂_k f) g dx = − ∫_Ω f (∂_k g) dx + ∫_{∂Ω} f g ν_k dσ,    (13.7.4)

where σ is the surface measure on ∂Ω. For the sense in which the last integral in (13.7.4) should be understood see Definition 13.39. An immediate corollary of Theorem 13.41 is Green's formula, stated next (recall (7.1.14)).

Theorem 13.42 (Green's formula). Suppose Ω ⊂ R^n is a bounded domain of class C^1 and ν denotes its outward unit normal. If f, g ∈ C^2(Ω̄), then

∫_Ω f Δg dx = ∫_Ω g Δf dx + ∫_{∂Ω} (f ∂g/∂ν − g ∂f/∂ν) dσ.    (13.7.5)
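Green's formula lends itself to direct numerical verification on a simple domain. The following sketch checks (13.7.5) on the unit disk in R² with the arbitrary choices f(x, y) = x² and g(x, y) = eˣ:

```python
import numpy as np

# Numerical verification of Green's formula (13.7.5) on the unit disk.
Nr, Nt = 800, 800
r = (np.arange(Nr) + 0.5) / Nr                 # midpoint radial nodes in (0, 1)
t = (np.arange(Nt) + 0.5) * 2*np.pi / Nt       # midpoint angular nodes
R, T = np.meshgrid(r, t)
X = R * np.cos(T)
dA = (1.0/Nr) * (2*np.pi/Nt) * R               # polar area element r dr dtheta

f, lap_f = X**2, 2.0                           # f = x^2, Delta f = 2
g, lap_g = np.exp(X), np.exp(X)                # g = e^x, Delta g = e^x
lhs = np.sum((f*lap_g - g*lap_f) * dA)         # volume side of (13.7.5)

xb = np.cos(t)                                 # boundary points; nu = (cos t, sin t)
dg_dnu = np.exp(xb) * xb                       # grad g = (e^x, 0)
df_dnu = 2*xb * xb                             # grad f = (2x, 0)
rhs = np.sum(xb**2 * dg_dnu - np.exp(xb) * df_dnu) * (2*np.pi/Nt)
assert abs(lhs - rhs) < 1e-3
```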
13.8 Polar Coordinates and Integrals on Spheres
Assume that n ≥ 3 and R > 0 are fixed. For ρ ∈ (0, R), θ_j ∈ (0, π), j ∈ {1, …, n − 2}, and θ_{n−1} ∈ (0, 2π), set

x_1 := ρ cos θ_1,
x_2 := ρ sin θ_1 cos θ_2,
x_3 := ρ sin θ_1 sin θ_2 cos θ_3,
⋮
x_{n−1} := ρ sin θ_1 sin θ_2 ⋯ sin θ_{n−2} cos θ_{n−1},
x_n := ρ sin θ_1 sin θ_2 ⋯ sin θ_{n−2} sin θ_{n−1}.    (13.8.1)

The variables θ_1, …, θ_{n−1}, ρ are called polar coordinates.

Definition 13.43. Assume that x_* ∈ R^n, n ≥ 3, is a fixed point, and R > 0 is arbitrary. The standard parametrization of the ball B(x_*, R) is defined as

P : (0, π)^{n−2} × (0, 2π) × (0, R) → R^n,  P(θ_1, θ_2, …, θ_{n−1}, ρ) := x_* + (x_1, x_2, …, x_n),    (13.8.2)

where x_1, …, x_n are as in (13.8.1). The function P in (13.8.2) is injective, of class C^∞, takes values in B(x_*, R), its image differs from B(x_*, R) by a set of measure zero, and

det(DP)(θ_1, θ_2, …, θ_{n−1}, ρ) = ρ^{n−1} (sin θ_1)^{n−2} (sin θ_2)^{n−3} ⋯ (sin θ_{n−2})    (13.8.3)

at every point in its domain, where DP denotes the Jacobian of P. Using this standard parametrization for the unit sphere in R^n, we see that

ω_{n−1} = ∫_0^π ⋯ ∫_0^π ∫_0^{2π} (sin φ_1)^{n−2} (sin φ_2)^{n−3} ⋯ (sin φ_{n−2}) dφ_{n−1} dφ_{n−2} … dφ_1.    (13.8.4)
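The Jacobian formula (13.8.3) can be checked by finite differences. The following sketch does so for n = 3 (spherical coordinates), where the formula reads |det DP| = ρ² sin θ_1; the evaluation point is arbitrary:

```python
import numpy as np

# Finite-difference check of (13.8.3) for n = 3.
def P(q):
    t1, t2, rho = q
    return np.array([rho*np.cos(t1),
                     rho*np.sin(t1)*np.cos(t2),
                     rho*np.sin(t1)*np.sin(t2)])

q0 = np.array([0.9, 2.0, 1.7])      # an interior point (theta1, theta2, rho)
h = 1e-6
J = np.column_stack([(P(q0 + h*e) - P(q0 - h*e)) / (2*h) for e in np.eye(3)])
expected = q0[2]**2 * np.sin(q0[0])  # rho^2 sin(theta_1)
assert abs(abs(np.linalg.det(J)) - expected) < 1e-6
```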
This parametrization of the ball B(x_*, R) may also be used to prove the following theorem.

Theorem 13.44 (Spherical Fubini and polar coordinates). Let f ∈ L^1_loc(R^n), n ≥ 2. Then for each x_* ∈ R^n and each R > 0 the following formulas hold:

∫_{B(x_*,R)} f dx = ∫_0^R (∫_{∂B(x_*,ρ)} f dσ) dρ,    (13.8.5)

∫_{B(x_*,R)} f dx = ∫_0^R ∫_{S^{n−1}} f(x_* + ρω) ρ^{n−1} dσ(ω) dρ    (13.8.6)

  = ∫_0^R ∫_{∂B(x_*,1)} f(x_* + ρ(ω − x_*)) ρ^{n−1} dσ(ω) dρ.    (13.8.7)

Moreover, if f ∈ L^1(R^n), then

∫_{R^n} f dx = ∫_0^∞ (∫_{∂B(0,ρ)} f dσ) dρ,    (13.8.8)

∫_{R^n} f dx = ∫_0^∞ ∫_{S^{n−1}} f(ρω) ρ^{n−1} dσ(ω) dρ.    (13.8.9)
Note that if P : O → S^{n−1} is a parametrization of the unit sphere in R^n and if R is a unitary transformation in R^n, then R ∘ P : O → S^{n−1} is also a parametrization of the unit sphere in R^n. Indeed, this function is injective, has C^1 components, its image is S^{n−1} (up to a negligible set), and

rank D(R ∘ P) = dim Im[R(DP)] = dim Im(DP) = rank(DP) = n − 1.    (13.8.10)

Hence,

∫_{S^{n−1}} f ∘ R dσ = ∫_O f(R ∘ P) |∂_{u_1} P × ⋯ × ∂_{u_{n−1}} P| du_1 … du_{n−1}
  = ∫_O f(R ∘ P) |∂_{u_1}(R ∘ P) × ⋯ × ∂_{u_{n−1}}(R ∘ P)| du_1 ⋯ du_{n−1}
  = ∫_{S^{n−1}} f dσ.    (13.8.11)
The same type of reasoning also yields the following result.

Proposition 13.45. For each x = (x_1, …, x_n) ∈ R^n, define

R_j(x) := (x_1, …, x_{j−1}, −x_j, x_{j+1}, …, x_n),  1 ≤ j ≤ n,

R_{jk}(x) := (x_1, …, x_{j−1}, x_k, x_{j+1}, …, x_{k−1}, x_j, x_{k+1}, …, x_n),  1 ≤ j ≤ k ≤ n.    (13.8.12)

Then

∫_{S^{n−1}} f ∘ R_j dσ = ∫_{S^{n−1}} f dσ,  j = 1, 2, …, n,    (13.8.13)

∫_{S^{n−1}} f ∘ R_{jk} dσ = ∫_{S^{n−1}} f dσ,  1 ≤ j ≤ k ≤ n.    (13.8.14)
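The invariances (13.8.13)–(13.8.14) can be illustrated by Monte Carlo on S²: reflecting or permuting the coordinates of a uniform sample leaves it uniform, so sphere averages agree up to sampling noise. The test integrand below is an arbitrary choice:

```python
import numpy as np

# Monte Carlo illustration of Proposition 13.45 on S^2.
rng = np.random.default_rng(7)
g = rng.standard_normal((1_000_000, 3))
z = g / np.linalg.norm(g, axis=1, keepdims=True)   # uniform sample of S^2

def f(p):                                          # arbitrary test integrand
    return np.exp(p[:, 0]) * (1 + p[:, 2]**2)

base = np.mean(f(z))
z_refl = z.copy()
z_refl[:, 0] = -z_refl[:, 0]                       # R_1: flip the first coordinate
z_swap = z[:, [2, 1, 0]]                           # R_{13}: swap coordinates 1 and 3
assert abs(np.mean(f(z_refl)) - base) < 0.02       # (13.8.13)
assert abs(np.mean(f(z_swap)) - base) < 0.02       # (13.8.14)
```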
Proposition 13.46. Let v ∈ R^n \ {0}, n ≥ 2, be fixed. Then for any real-valued function f defined on the real line there holds

∫_{S^{n−1}} f(v·θ) dσ(θ) = ω_{n−2} ∫_{−1}^1 f(s|v|) (√(1 − s²))^{n−3} ds.    (13.8.15)

Proof. Since integrals over the unit sphere are invariant under orthogonal transformations, we may assume that v/|v| = e_1 and, hence, using polar coordinates and (13.8.3), we have

∫_{S^{n−1}} f(v·θ) dσ(θ) = ∫_{S^{n−1}} f(|v|θ_1) dσ(θ)
  = ∫_0^{2π} ∫_0^π ⋯ ∫_0^π f(|v| cos φ_1) ∏_{j=1}^{n−2} (sin φ_j)^{n−1−j} dφ_1 ⋯ dφ_{n−2} dφ_{n−1}
  = ω_{n−2} ∫_0^π f(|v| cos φ_1)(sin φ_1)^{n−2} dφ_1.    (13.8.16)
Making the change of variables s := cos φ_1 in the last integral above shows that this matches the right-hand side of (13.8.15).

Proposition 13.47. Let f ∈ C^0(R) be positive homogeneous of degree m ∈ R and fix η = (η_1, …, η_n) ∈ R^n. Then, if n ∈ N with n ≥ 2, for j, k ∈ {1, …, n} one has

∫_{S^{n−1}} f(η·ξ) ξ_j ξ_k dσ(ξ) = α|η|^m δ_{jk} + β|η|^{m−2} η_j η_k,    (13.8.17)

where

α = (1/(n−1)) ∫_{S^{n−1}} f(ξ_1)(1 − ξ_1²) dσ(ξ),
β = (1/(n−1)) ∫_{S^{n−1}} f(ξ_1)(nξ_1² − 1) dσ(ξ).    (13.8.18)

Proof. For j, k ∈ {1, …, n} set

q_{jk}(η) := ∫_{S^{n−1}} f(η·ξ) ξ_j ξ_k dσ(ξ),  η ∈ R^n,    (13.8.19)

and define the quadratic form

Q(ζ, η) := Σ_{j,k=1}^n q_{jk}(η) ζ_j ζ_k,  ∀ ζ, η ∈ R^n.    (13.8.20)

Observe that we can write Q(ζ, η) = ∫_{S^{n−1}} f(η·ξ)(ζ·ξ)² dσ(ξ). By the invariance under rotations of integrals over S^{n−1} (see (13.8.11)), we have that for any rotation R in R^n

Q(ζ, η) = ∫_{S^{n−1}} f(η·R^⊤ξ)(ζ·R^⊤ξ)² dσ(ξ) = Q(Rζ, Rη),  ζ, η ∈ R^n.    (13.8.21)
A direct computation also gives that

Q(λ_1ζ, λ_2η) = λ_1² λ_2^m Q(ζ, η)  for all λ_1, λ_2 > 0, ζ, η ∈ R^n.    (13.8.22)

Next we claim that

Q(ζ, η) = α + β(η·ζ)²,  ∀ ζ, η ∈ S^{n−1}.    (13.8.23)

To show that (13.8.23) holds, we first observe that it suffices to prove (13.8.23) when η = e_1. Indeed, if we assume that (13.8.23) is true for η = e_1, then for an arbitrary η ∈ S^{n−1} let R be a rotation such that Rη = e_1. Then, if we also take into account (13.8.21), we have

Q(ζ, η) = Q(Rζ, Rη) = Q(Rζ, e_1) = α + β(Rζ·e_1)² = α + β(ζ·R^⊤e_1)² = α + β(η·ζ)².    (13.8.24)
Hence, (13.8.23) will follow if we prove that

∫_{S^{n−1}} f(ξ_1)(ζ·ξ)² dσ(ξ) = α + βζ_1²,  ∀ ζ ∈ S^{n−1}.    (13.8.25)

To see the latter, we let A_{jk} := ∫_{S^{n−1}} f(ξ_1) ξ_j ξ_k dσ(ξ) for j, k ∈ {1, …, n}. Assume j ≠ k. Then either j ≠ 1 or k ≠ 1. If, say, j ≠ 1, we use (13.8.13) to conclude that A_{jk} = 0 in this case. A similar reasoning applies if k ≠ 1. Clearly, A_{11} = ∫_{S^{n−1}} f(ξ_1) ξ_1² dσ(ξ). As for the case j = k ≠ 1, we first observe that A_{22} = ⋯ = A_{nn}, since A_{jj} is independent of j ∈ {2, …, n} due to (13.8.14). Moreover, Σ_{j=1}^n A_{jj} = ∫_{S^{n−1}} f(ξ_1) dσ(ξ), which in turn implies that

A_{jj} = (1/(n−1)) [∫_{S^{n−1}} f(ξ_1) dσ(ξ) − A_{11}] = α  for each j = 2, …, n.

Combining all these, we have that for ζ ∈ S^{n−1},

∫_{S^{n−1}} f(ξ_1)(ζ·ξ)² dσ(ξ) = Σ_{j,k=1}^n ζ_j ζ_k A_{jk} = ζ_1² ∫_{S^{n−1}} f(ξ_1) ξ_1² dσ(ξ) + α Σ_{j=2}^n ζ_j² = α + βζ_1².    (13.8.26)
This concludes the proof of (13.8.25) which, in turn, implies (13.8.23). Now, if ζ, η ∈ R^n \ {0}, we make use of (13.8.22) and (13.8.23) to write

Q(ζ, η) = |ζ|²|η|^m Q(ζ/|ζ|, η/|η|) = α|ζ|²|η|^m + β|η|^{m−2}(ζ·η)²
  = Σ_{j,k=1}^n [α|η|^m δ_{jk} + β|η|^{m−2} η_j η_k] ζ_j ζ_k,    (13.8.27)

which proves (13.8.17).

Proposition 13.48. Consider f(t) := |t| for t ∈ R, and let α, β be as in (13.8.17) for this choice of f. Then

α = β = 2ω_{n−2}/(n² − 1),    (13.8.28)

where ω_{n−2} denotes the surface measure of the unit sphere in R^{n−1}.

Proof. Using the standard parametrization of S^{n−1} (see (13.8.1) with R = 1) we have
α = (1/(n−1)) ∫_0^π ⋯ ∫_0^π ∫_0^{2π} |cos φ_1| (1 − (cos φ_1)²) (sin φ_1)^{n−2} (sin φ_2)^{n−3} ⋯ (sin φ_{n−2}) dφ_{n−1} dφ_{n−2} ⋯ dφ_1
  = (ω_{n−2}/(n−1)) ∫_0^π |cos φ_1| (sin φ_1)^n dφ_1.    (13.8.29)

The change of variables θ = π − y yields −∫_{π/2}^π cos θ (sin θ)^n dθ = ∫_0^{π/2} cos y (sin y)^n dy. Using this back in (13.8.29) then gives

α = (2ω_{n−2}/(n−1)) ∫_0^{π/2} cos θ (sin θ)^n dθ = (2ω_{n−2}/(n−1)) [(sin θ)^{n+1}/(n+1)]_0^{π/2} = 2ω_{n−2}/(n² − 1).    (13.8.30)

Similar arguments in computing β give that

β = (ω_{n−2}/(n−1)) ∫_0^π |cos θ| [n(cos θ)² − 1] (sin θ)^{n−2} dθ
  = ω_{n−2} ∫_0^π |cos θ| (sin θ)^{n−2} dθ − (n ω_{n−2}/(n−1)) ∫_0^π |cos θ| (sin θ)^n dθ
  = 2ω_{n−2} ∫_0^{π/2} cos θ (sin θ)^{n−2} dθ − (2n ω_{n−2}/(n−1)) ∫_0^{π/2} cos θ (sin θ)^n dθ
  = 2ω_{n−2} [(sin θ)^{n−1}/(n−1)]_0^{π/2} − (2n ω_{n−2}/(n−1)) [(sin θ)^{n+1}/(n+1)]_0^{π/2} = 2ω_{n−2}/(n² − 1).    (13.8.31)
(13.8.32)
The next proposition deals with the issue of integrating arbitrary monomials on the unit sphere centered at the origin. Proposition 13.49. For each multi-index α = (α1 , . . . , αn ) ∈ Nn0 , ⎧ ⎪ ⎪0 ⎪ ⎨ |α| z α dσ(z) = n−1 1+|α| ⎪ 2 ! α! 2π 2 Γ( 2 ) S n−1 ⎪ ⎪ · ⎩ α |α|! Γ( |α|+n ) 2 !
2
where Γ is the gamma function introduced in (13.5.1).
if α ∈ 2Nn0 , (13.8.33) if α ∈ 2Nn0 ,
Proof. Fix an arbitrary k ∈ N and set

q_α := ∫_{S^{n−1}} z^α dσ(z),  ∀ α ∈ N_0^n with |α| = k.    (13.8.34)

Also, with a "dot" standing for the standard inner product in R^n, introduce

Q_k(x) := Σ_{α∈N_0^n, |α|=k} (k!/α!) q_α x^α,  x = (x_1, x_2, …, x_n) ∈ R^n.    (13.8.35)

Then (13.2.1) implies that

Q_k(x) = ∫_{S^{n−1}} (z·x)^k dσ(z),  ∀ x ∈ R^n.    (13.8.36)

Let us also observe here that, if x ∈ S^{n−1} is arbitrary but fixed and if R is a rotation about the origin in R^n such that R^{−1}x = e_n := (0, …, 0, 1) ∈ R^n, then by (13.8.36) and the rotation invariance of integrals on S^{n−1} (cf. (13.8.11)), we have

Q_k(x) = ∫_{S^{n−1}} (Rz·x)^k dσ(z) = Q_k(e_n).    (13.8.37)
By the homogeneity of Q_k, (13.8.37) implies that Q_k(x) = |x|^k Q_k(e_n) for all x ∈ R^n and, hence,

Σ_{|α|=k} (k!/α!) q_α x^α = |x|^k Q_k(e_n)  for all x ∈ R^n.    (13.8.38)

We now consider two cases.

Case I: k is odd. In this scenario, the mapping S^{n−1} ∋ z ↦ z_n^k ∈ R is odd. In particular, Q_k(e_n) = ∫_{S^{n−1}} z_n^k dσ(z) = 0. This, in turn, along with (13.8.38), then forces Σ_{|α|=k} (k!/α!) q_α x^α = 0 for every x ∈ R^n. From (13.2.5) it is easy to deduce that for each γ ∈ N_0^n we have

∂_x^γ [x^α]|_{x=0} = 0 if α ≠ γ,  and  ∂_x^γ [x^α]|_{x=0} = α! if α = γ.    (13.8.39)

We may therefore conclude that q_γ = (1/k!) ∂_x^γ [Σ_{|α|=k} (k!/α!) q_α x^α]|_{x=0} = 0 for each γ ∈ N_0^n with |γ| = k, in agreement with (13.8.33).

Case II: k is even. Suppose k = 2m for some m ∈ N. Then |x|^k = Σ_{|β|=m} (m!/β!) x^{2β} and (13.8.38) becomes

Σ_{|β|=m} (m!/β!) x^{2β} Q_k(e_n) = Σ_{|α|=k} (k!/α!) q_α x^α  for all x ∈ R^n.    (13.8.40)
Fix γ ∈ N_0^n such that |γ| = k and observe that

∂_x^γ [Σ_{|β|=m} (m!/β!) x^{2β} Q_k(e_n)]|_{x=0} = (m!/β!)(2β)! Q_k(e_n) if γ = 2β for some β ∈ N_0^n with |β| = m,  and  = 0 otherwise.    (13.8.41)

This, (13.8.40), and (13.8.39) then imply that

q_γ = 0  if γ ∉ 2N_0^n,

q_γ = [(|γ|/2)! γ!]/[(γ/2)! |γ|!] · Q_{|γ|}(e_n)  if γ ∈ 2N_0^n.    (13.8.42)

We are now left with computing Q_{2m}(e_n) when m ∈ N. Using spherical coordinates, a direct computation gives that

Q_{2m}(e_n) = ∫_{S^{n−1}} (z·e_n)^{2m} dσ(z)
  = ∫_0^{2π} ∫_0^π ⋯ ∫_0^π (cos θ_1)^{2m} ∏_{j=1}^{n−2} (sin θ_j)^{n−1−j} dθ_1 ⋯ dθ_{n−2} dθ_{n−1}
  = ω_{n−2} ∫_0^π (cos θ)^{2m} (sin θ)^{n−2} dθ = 2π^{(n−1)/2} Γ(1/2 + m)/Γ(m + n/2),    (13.8.43)

by (13.8.4) and (13.5.6) (considered with n − 1 in place of n), and (13.5.11). This once again agrees with (13.8.33), and the proof of Proposition 13.49 is finished.

A simple useful consequence of the general formula (13.8.33) is the fact that

∫_{S^{n−1}} z_j z_k dσ(z) = (ω_{n−1}/n) δ_{jk}  whenever 1 ≤ j, k ≤ n.    (13.8.44)
13.9 Tables of Fourier Transforms
Fourier Transforms of Schwartz Functions
f | F(f) | Location
e^{−λ|x|²}, λ ∈ C, Re(λ) > 0 | (π/λ)^{n/2} e^{−|ξ|²/(4λ)} | Example 3.21
e^{−ax²+ibx}, a > 0 and b ∈ R fixed | √(π/a) e^{−(ξ−b)²/(4a)} | Exercise 3.22
e^{−(Ax)·x}, A ∈ M_{n×n}(R), A = A^⊤, (Ax)·x > 0 for x ∈ R^n \ {0} | (π^{n/2}/√(det A)) e^{−(A^{−1}ξ)·ξ/4} | Exercise 3.36
e^{−(x_1²+x_1x_2+x_2²)} for (x_1, x_2) ∈ R² | (2π/√3) e^{−(ξ_1²−ξ_1ξ_2+ξ_2²)/3} | Exercise 3.37
e^{−a|x|²} sin(x·x_0), for x ∈ R^n, if a > 0 and x_0 ∈ R^n are fixed | (1/(2i)) (π/a)^{n/2} [e^{−|ξ−x_0|²/(4a)} − e^{−|ξ+x_0|²/(4a)}] | Exercise 3.39
Fourier Transforms of Tempered Distributions in R

Below x, ξ ∈ R.

u | F(u) | Location
1/(x²+a²), a ∈ (0, ∞) given | (π/a) e^{−a|ξ|} | Example 4.23
x/(x²+a²), a ∈ (0, ∞) given | −πi (sgn ξ) e^{−a|ξ|} | Example 4.29
χ_{[−a,a]}, a ∈ (0, ∞) given | 2 sin(aξ)/ξ for ξ ∈ R \ {0}, and 2a for ξ = 0 | Example 4.34
sin(bx), b ∈ R fixed | −iπδ_b + iπδ_{−b} | Example 4.36
cos(bx), b ∈ R fixed | πδ_b + πδ_{−b} | Example 4.37
sin(bx) sin(cx), b, c ∈ R fixed | −(π/2)[δ_{b+c} − δ_{b−c} − δ_{c−b} + δ_{−b−c}] | Exercise 4.115
H (the Heaviside function) | πδ − i P.V.(1/ξ) | Exercise 4.116
sgn x | −2i P.V.(1/ξ) | Exercise 4.117
P.V.(1/x) | −iπ sgn(ξ) | Exercise 4.117
|x|^k, k ∈ N even | 2πi^k δ^{(k)} | Exercise 4.117
|x|^k, k ∈ N odd | −2i^{k+1} [P.V.(1/ξ)]^{(k)} | Exercise 4.117
sin(bx)/x, b ∈ R fixed | π sgn(b) χ_{[−|b|,|b|]} | Exercise 4.117
sin(bx) sin(cx)/x, b, c ∈ R fixed | (iπ/2) sgn(c) [χ_{[−b−|c|,−b+|c|]} − χ_{[b−|c|,b+|c|]}] | Exercise 4.117
sin(x²) | (√π/(2i)) [e^{i(π−ξ²)/4} − e^{−i(π−ξ²)/4}] | Exercise 4.122
ln |x| | −2πγδ − π w_{χ(−1,1)}, with w_{χ(−1,1)} from (4.6.21) and γ from (4.6.23) | Exercise 4.117
Fourier Transforms of Tempered Distributions. Below $x,\xi\in\mathbb{R}^n$ and $x',\xi'\in\mathbb{R}^{n-1}$.

$u$ | $\mathcal{F}(u)$ | Location
$\delta$ | $1$ | Example 4.21
$1$ | $(2\pi)^n\delta$ | Exercise 4.31
$e^{-ia|x|^2}$, $a\in(0,\infty)$ given | $\big(\tfrac{\pi}{a}\big)^{\frac n2}\,e^{-i\frac{n\pi}{4}}\,e^{i\frac{|\xi|^2}{4a}}$ | Example 4.24
$e^{ia|x|^2}$, $a\in(0,\infty)$ given | $\big(\tfrac{\pi}{a}\big)^{\frac n2}\,e^{i\frac{n\pi}{4}}\,e^{-i\frac{|\xi|^2}{4a}}$ | Example 4.38
$|x|^{-\lambda}$, $\lambda\in(0,n)$ fixed | $2^{n-\lambda}\pi^{\frac n2}\,\dfrac{\Gamma\big(\frac{n-\lambda}{2}\big)}{\Gamma\big(\frac\lambda2\big)}\,|\xi|^{\lambda-n}$ | Proposition 4.61
$\dfrac{x_j}{|x|^n}$, $n\ge2$, $j\in\{1,\dots,n\}$ | $-i\,\omega_{n-1}\,\dfrac{\xi_j}{|\xi|^2}$ | Corollary 4.62
$\dfrac{x_j}{|x|^{\lambda+2}}$, $n\ge2$, $\lambda\in[0,n-1)$, $j\in\{1,\dots,n\}$ | $-i\,2^{n-\lambda-1}\pi^{\frac n2}\,\dfrac{\Gamma\big(\frac{n-\lambda}{2}\big)}{\Gamma\big(\frac\lambda2+1\big)}\,\dfrac{\xi_j}{|\xi|^{n-\lambda}}$ | Corollary 4.62
$\dfrac{x_jx_k}{|x|^n}$, $n\ge3$, $j,k\in\{1,\dots,n\}$ | $\omega_{n-1}\,\dfrac{\delta_{jk}}{|\xi|^2}-2\,\omega_{n-1}\,\dfrac{\xi_j\xi_k}{|\xi|^4}$ | Exercise 7.63
$\mathrm{P.V.}\,\partial_j\Big(\dfrac{x_k}{|x|^n}\Big)$, $j,k\in\{1,\dots,n\}$ | $\omega_{n-1}\,\dfrac{\xi_j\xi_k}{|\xi|^2}-\dfrac{\omega_{n-1}}{n}\,\delta_{jk}$ | Proposition 4.70
$\mathrm{P.V.}\,\Theta$, with $\Theta$ as in (4.4.1) | $-\displaystyle\int_{S^{n-1}}\Theta(\omega)\log\big(i(\xi\cdot\omega)\big)\,d\sigma(\omega)$ | Theorem 4.71
$\mathrm{P.V.}\,\dfrac{x_j}{|x|^{n+1}}$, $j\in\{1,\dots,n\}$ | $-\dfrac{i\,\omega_n}{2}\,\dfrac{\xi_j}{|\xi|}$ | Proposition 4.81
$\dfrac{2}{\omega_{n-1}}\,\dfrac{t}{(t^2+|x'|^2)^{\frac n2}}$, with $t>0$ fixed | $e^{-t|\xi'|}$ | Proposition 4.87
$\dfrac{2}{\omega_{n-1}}\,\dfrac{x_j}{(t^2+|x'|^2)^{\frac n2}}$, with $t>0$ fixed, $j\in\{1,\dots,n-1\}$ | $-i\,\dfrac{\xi_j}{|\xi'|}\,e^{-t|\xi'|}$ | Proposition 4.89
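For radial data the last two entries reduce to one-dimensional Hankel-type integrals: in $\mathbb{R}^2$ the Fourier transform of a radial function $K$ is $2\pi\int_0^\infty J_0(r|\xi'|)\,K(r)\,r\,dr$. A sketch (not part of the text, assuming SciPy; $t$ and $\rho=|\xi'|$ are arbitrary choices) checking the Poisson-kernel entry for $n=3$, i.e. $x',\xi'\in\mathbb{R}^2$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

t, rho = 1.2, 0.8   # arbitrary: t > 0 fixed, ρ = |ξ'| with ξ' ∈ R^2

# K(r) = (2/ω_2) t/(t^2 + r^2)^{3/2} with ω_2 = 4π (the n = 3 Poisson kernel),
# and F(K)(ξ') = 2π ∫_0^∞ J_0(rρ) K(r) r dr for radial K on R^2.
K = lambda r: (2.0 / (4.0 * np.pi)) * t / (t * t + r * r) ** 1.5
val, _ = quad(lambda r: 2.0 * np.pi * j0(r * rho) * K(r) * r,
              0.0, np.inf, limit=200)
print(val, np.exp(-t * rho))   # both ≈ e^{-t|ξ'|}
```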
Bibliography

[1] R. A. Adams and J. J. F. Fournier, Sobolev Spaces, Pure and Applied Mathematics, 140, 2nd edition, Academic Press, 2003.
[2] D. R. Adams and L. I. Hedberg, Function Spaces and Potential Theory, Grundlehren der Mathematischen Wissenschaften, Vol. 314, Springer, Berlin, 1996.
[3] S. Axler, P. Bourdon, and W. Ramey, Harmonic Function Theory, 2nd edition, Graduate Texts in Mathematics, 137, Springer-Verlag, New York, 2001.
[4] F. Brackx, R. Delanghe, and F. Sommen, Clifford Analysis, Research Notes in Mathematics, 76, Pitman, Advanced Publishing Program, Boston, MA, 1982.
[5] A. P. Calderón and A. Zygmund, On the existence of certain singular integrals, Acta Math., 88 (1952), 85–139.
[6] M. Christ, Lectures on Singular Integral Operators, CBMS Regional Conference Series in Mathematics, No. 77, AMS, 1990.
[7] G. David and S. Semmes, Singular Integrals and Rectifiable Sets in Rⁿ: Beyond Lipschitz Graphs, Astérisque, No. 193, 1991.
[8] J. J. Duistermaat and J. A. C. Kolk, Distributions. Theory and Applications, translated from the Dutch by J. P. van Braam Houckgeest, Cornerstones, Birkhäuser Boston, Inc., Boston, MA, 2010.
[9] J. Duoandikoetxea, Fourier Analysis, Graduate Studies in Mathematics, AMS, 2000.
[10] R. E. Edwards, Functional Analysis. Theory and Applications, Dover Publications, Inc., New York, 1995.
[11] L. Ehrenpreis, Solution of some problems of division. I. Division by a polynomial of derivation, Amer. J. Math., 76 (1954).
[12] L. Evans, Partial Differential Equations, 2nd edition, Graduate Studies in Mathematics, 19, American Mathematical Society, Providence, RI, 2010.
[13] L. C. Evans and R. F. Gariepy, Measure Theory and Fine Properties of Functions, Studies in Advanced Mathematics, CRC Press, Boca Raton, FL, 1992.

D. Mitrea, Distributions, Partial Differential Equations, and Harmonic Analysis, Universitext, DOI 10.1007/978-1-4614-8208-6, © Springer Science+Business Media New York 2013
[14] C. Fefferman and E. M. Stein, H^p spaces of several variables, Acta Math., 129 (1972), no. 3-4, 137–193.
[15] G. Friedlander and M. Joshi, Introduction to the Theory of Distributions, Cambridge University Press, 2nd edition, 1998.
[16] J. Garcia-Cuerva and J. Rubio de Francia, Weighted Norm Inequalities and Related Topics, North Holland, Amsterdam, 1985.
[17] I. M. Gel'fand and G. E. Šilov, Generalized Functions, Vol. 1: Properties and Operations, Academic Press, New York and London, 1964.
[18] I. M. Gel'fand and G. E. Šilov, Generalized Functions, Vol. 2: Spaces of Fundamental and Generalized Functions, Academic Press, New York and London, 1968.
[19] I. M. Gel'fand and N. Y. Vilenkin, Generalized Functions, Vol. 4: Applications of Harmonic Analysis, Academic Press, New York and London, 1964.
[20] I. M. Gel'fand, M. I. Graev, and N. Y. Vilenkin, Generalized Functions, Vol. 5: Integral Geometry and Representation Theory, Academic Press, 1966.
[21] D. Gilbarg and N. S. Trudinger, Elliptic Partial Differential Equations of Second Order, Springer-Verlag, 2001.
[22] J. Gilbert and M. A. M. Murray, Clifford Algebras and Dirac Operators in Harmonic Analysis, Cambridge Studies in Advanced Mathematics, 26, Cambridge University Press, Cambridge, 1991.
[23] L. Grafakos, Classical and Modern Fourier Analysis, Pearson Education, Inc., Upper Saddle River, NJ, 2004.
[24] G. Grubb, Distributions and Operators, Springer-Verlag, 2009.
[25] M. A. Al-Gwaiz, Theory of Distributions, Pure and Applied Mathematics, CRC Press, 1992.
[26] G. Hardy and J. Littlewood, Some properties of conjugate functions, J. Reine Angew. Math., 167 (1932), 405–423.
[27] K. Hoffman, Banach Spaces of Analytic Functions, Dover Publications, 2007.
[28] S. Hofmann, M. Mitrea and M. Taylor, Singular integrals and elliptic boundary problems on regular Semmes-Kenig-Toro domains, International Math. Research Notices, 2010 (2010), 2567–2865.
[29] L. Hörmander, On the division of distributions by polynomials, Ark. Mat., 3 (1958), 555–568.
[30] L. Hörmander, Linear Partial Differential Operators, Die Grundlehren der Mathematischen Wissenschaften in Einzeldarstellungen, Vol. 116, Springer, 1969.
[31] L. Hörmander, The Analysis of Linear Partial Differential Operators I, Distribution Theory and Fourier Analysis, Springer-Verlag, 2003.
[32] L. Hörmander, The Analysis of Linear Partial Differential Operators II, Differential Operators with Constant Coefficients, Springer-Verlag, 2003.
[33] R. Howard, Rings, determinants, and the Smith normal form, preprint (2013). http://www.math.sc.edu/~howard/Classes/700c/notes2.pdf
[34] G. C. Hsiao and W. L. Wendland, Boundary Integral Equations, Applied Mathematical Sciences, 164, Springer-Verlag, Berlin, 2008.
[35] F. John, Plane Waves and Spherical Means Applied to Partial Differential Equations, Interscience Publishers, New York-London, 1955.
[36] C. E. Kenig, Harmonic Analysis Techniques for Second Order Elliptic Boundary Value Problems, CBMS Regional Conference Series in Mathematics, 83, American Mathematical Society, Providence, RI, 1994.
[37] S. G. Krantz and H. R. Parks, A Primer of Real Analytic Functions, 2nd edition, Birkhäuser, Boston, 2002.
[38] P. S. Laplace, Mémoire sur la théorie de l'anneau de Saturne, Mém. Acad. Roy. Sci. Paris (1787/1789), 201–234.
[39] P. S. Laplace, J. École Polytéch. cah., 15 (1809), p. 240 (quoted in Enc. Math. Wiss. Band II, 1. Teil, 2. Hälfte, p. 1198).
[40] G. Leoni, A First Course in Sobolev Spaces, Vol. 105, Graduate Studies in Mathematics, American Mathematical Soc., 2009.
[41] S. Łojasiewicz, Division d'une distribution par une fonction analytique de variables réelles, C. R. Acad. Sci. Paris, 246 (1958), 683–686.
[42] B. Malgrange, Existence et approximation des solutions des équations aux dérivées partielles et des équations de convolution (French), Ann. Inst. Fourier, Grenoble, 6 (1955–1956), 271–355.
[43] V. G. Maz'ya, Sobolev Spaces, Springer-Verlag, Berlin-New York, 1985.
[44] V. G. Maz'ya, Sobolev Spaces with Applications to Elliptic Partial Differential Equations, second, revised and augmented edition, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], 342, Springer, Heidelberg, 2011.
[45] V. G. Maz'ya and T. Shaposhnikova, Theory of Sobolev Multipliers. With Applications to Differential and Integral Operators, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], 337, Springer-Verlag, Berlin, 2009.
[46] W. McLean, Strongly Elliptic Systems and Boundary Integral Equations, Cambridge University Press, Cambridge, 2000.
[47] Y. Meyer, Ondelettes et Opérateurs II. Opérateurs de Calderón-Zygmund, Actualités Mathématiques, Hermann, Paris, 1990.
[48] Y. Meyer and R. R. Coifman, Ondelettes et Opérateurs III. Opérateurs Multilinéaires, Actualités Mathématiques, Hermann, Paris, 1991.
[49] M. Mitrea, Clifford Wavelets, Singular Integrals, and Hardy Spaces, Lecture Notes in Mathematics, 1575, Springer-Verlag, Berlin, 1994.
[50] I. Mitrea and M. Mitrea, Multi-Layer Potentials and Boundary Problems for Higher-Order Elliptic Systems in Lipschitz Domains, Lecture Notes in Mathematics, 2063, Springer, 2012.
[51] D. Mitrea and H. Rosenblatt, A general converse theorem for mean value theorems in linear elasticity, Mathematical Methods for the Applied Sciences, Vol. 29, No. 12 (2006), 1349–1361.
[52] D. Mitrea, M. Mitrea and M. Taylor, Layer potentials, the Hodge Laplacian and global boundary problems in nonsmooth Riemannian manifolds, Memoirs of the American Mathematical Society, Vol. 150, No. 713, 2001.
[53] D. Mitrea, I. Mitrea, M. Mitrea, and S. Monniaux, Groupoid Metrization Theory with Applications to Analysis on Quasi-Metric Spaces and Functional Analysis, Birkhäuser, Springer, New York-Heidelberg-Dordrecht-London, 2013.
[54] C. B. Morrey, Second order elliptic systems of differential equations, Contributions to the Theory of Partial Differential Equations, Ann. Math. Studies, 33 (1954), 101–159.
[55] S. D. Poisson, Remarques sur une équation qui se présente dans la théorie de l'attraction des sphéroïdes, Bulletin de la Société Philomathique de Paris, 3 (1813), 388–392.
[56] S. D. Poisson, Sur l'intégrale de l'équation relative aux vibrations des surfaces élastiques et au mouvement des ondes, Bulletin de la Société Philomathique de Paris, (1818), 125–128.
[57] H. L. Royden, Real Analysis, Macmillan Publishing Co., Inc., New York, 1968.
[58] W. Rudin, Real and Complex Analysis, 2nd edition, International Series in Pure and Applied Mathematics, McGraw-Hill, Inc., 1986.
[59] W. Rudin, Functional Analysis, 2nd edition, International Series in Pure and Applied Mathematics, McGraw-Hill, Inc., 1991.
[60] L. Schwartz, Théorie des Distributions, I, II, Hermann, Paris, 1950–51.
[61] Z. Shapiro, On elliptical systems of partial differential equations, C. R. (Doklady) Acad. Sci. URSS (N. S.), 46 (1945).
[62] E. M. Stein, Singular Integrals and Differentiability Properties of Functions, Princeton Mathematical Series, 30, Princeton University Press, Princeton, NJ, 1970.
[63] E. M. Stein, Harmonic Analysis: Real-Variable Methods, Orthogonality, and Oscillatory Integrals, Princeton Mathematical Series, 43, Monographs in Harmonic Analysis, III, Princeton University Press, Princeton, NJ, 1993.
[64] E. M. Stein and G. Weiss, Introduction to Fourier Analysis on Euclidean Spaces, Princeton University Press, Princeton, NJ, 1971.
[65] R. Strichartz, A Guide to Distribution Theory and Fourier Transforms, World Scientific Publishing Co., Inc., River Edge, NJ, 2003.
[66] H. Tanabe, Functional Analytic Methods for Partial Differential Equations, Monographs and Textbooks in Pure and Applied Mathematics, 204, Marcel Dekker, Inc., New York, 1997.
[67] M. E. Taylor, Pseudodifferential Operators, Princeton Mathematical Series, 34, Princeton University Press, Princeton, NJ, 1981.
[68] M. E. Taylor, Partial Differential Equations. I. Basic Theory, Applied Mathematical Sciences, 115, Springer-Verlag, New York, 1996.
[69] M. E. Taylor, Tools for PDE. Pseudodifferential Operators, Paradifferential Operators, and Layer Potentials, Mathematical Surveys and Monographs, 81, American Mathematical Society, Providence, RI, 2000.
[70] F. Trèves, Topological Vector Spaces, Distributions and Kernels, Dover Publications, 2006.
[71] V. S. Vladimirov, Equations of Mathematical Physics, Marcel Dekker, Inc., 1971.
[72] V. S. Vladimirov, Methods of the Theory of Generalized Functions, Analytical Methods and Special Functions, 6, Taylor & Francis, London, 2002.
[73] V. Volterra, Sur les vibrations des corps élastiques isotropes, Acta Math., 18 (1894), 161–232.
[74] W. Wendland, Integral Equation Methods for Boundary Value Problems, Springer-Verlag, 2002.
[75] J. T. Wloka, B. Rowley, and B. Lawruk, Boundary Value Problems for Elliptic Systems, Cambridge University Press, 1995.
[76] W. P. Ziemer, Weakly Differentiable Functions: Sobolev Spaces and Functions of Bounded Variation, Graduate Texts in Mathematics, 120, Springer-Verlag, New York, 1989.
Subject Index

absorbing set, 417
Arzelà-Ascoli's Theorem, 426
balanced set, 417
base for a topology, 415
Beta function, 434
bi-harmonic, 237
Binomial Theorem, 424
Cauchy problem: heat operator, 280; vibrating infinite string, 1; wave operator, 307
Cauchy-Clifford operator, 257
Cauchy operator, 252
Cauchy sequence, 416
Clifford algebra, 253
conjugate Poisson kernels, 165
conormal derivative, 271
continuous, 416
convergence in: D(Ω), 11; E(Ω), 9; S(Rⁿ), 93
convex set, 417
convolution of distributions, 61
convolution of functions, 59
convolution, 115, 116
cross product in Rⁿ, 436
cut-off functions, 428
differential operators: bi-Laplacian, 207, 233; Cauchy-Riemann, 248; Dirac, 253, 255; heat, 277; higher order systems, 344; Lamé, 319; Laplacian, 207, 217, 220; poly-harmonic, 207, 240; scalar second order, 260; Schrödinger, 287; second order parabolic, 282; Stokes, 334; wave, 289
dilations, 130
Dirac distribution δ, 21, 31, 36, 120
Dirichlet problem, 165, 370
distance, 416
distribution: compactly supported, 40, 41; definition of, 17; of finite order, 19; of function type, 19; tempered, 109
distributional derivative, 29
elliptic operators, 206, 207
Euler's constant, 150
Fourier transform, 89, 99, 105, 120, 123, 125, 142, 160; partial, 128
Fréchet space, 418
F-topology, 418
function spaces: D(Ω), 11, 422; D_K(Ω), 422; E(Ω), 9, 421; S(Rⁿ), 91, 423
fundamental solutions: definition of, 191; for higher order systems, 348; for scalar second order operators, 261; for second order parabolic operators, 286; for strongly elliptic operators, 270; for the bi-Laplacian, 235; for the Cauchy operator, 249; for the Dirac operator, 256; for the heat operator, 279; for the Lamé operator, 323, 341; for the Laplacian, 220; for the poly-harmonic operator, 241; for the Schrödinger operator, 288; for the Stokes operator, 336, 343; for the wave operator, 305
gamma function, 434
generalized volume potential, 179
Green's formula, 437
Hahn-Banach Theorem, 420
harmonic function, 217, 225
Heaviside function: definition of, 6; derivative, 31
Hilbert transform, 167, 171, 253
hypoelliptic operators, 201, 203, 207, 279
inductive limit topology, 418
integral representation formula, 210, 357
integration by parts formula, 437
integration on surfaces, 436
interior estimates: for elliptic homogeneous systems, 359; for hypoelliptic operators, 211; for the bi-Laplacian, 238; for the Lamé operator, 329, 331; for the Laplacian, 226; for the poly-harmonic operator, 245
invariant under orthogonal transformations, 131
inverse Fourier transform, 102
isometry, 425
layer potential operators: double layer, 232, 367; harmonic double-layer, 162; single layer, 232, 366
layer potential representation formula, 229, 272
Lebesgue's Differentiation Theorem, 425
Lebesgue's Dominated Convergence Theorem, 425
Leibniz's Formula, 424
Liouville's theorem: for bi-Laplacian, 238; for general operators, 191; for Lamé, 331; for Laplacian, 227; for systems, 312; for the heat operator, 280
Lipschitz function, 30, 77
locally finite, 431
Malgrange-Ehrenpreis theorem, 195
mean value formulas: for bi-Laplacian, 237; for Lamé, 328; for Laplacian, 225
metric space: complete, 416; definition of, 416
Multinomial Theorem, 424
multiplier, 168
Neumann problem, 372
Newtonian potential, 175, 177, 178, 232
open neighborhood, 415
open set, 415
order of a distribution, 19
orthogonal transformation, 131
parametrix, 205
parametrization, 436
Parseval's identity, 104
partition of unity: for arbitrary open covers, 432; for compact sets, 430; with preservation of indexes, 433
Plancherel's identity, 106
Plemelj formula, 252
Poisson equation, 192, 223, 332
Poisson kernel, 162
Poisson problem: for Lamé, 332; for the bi-Laplacian, 239; for the Laplacian, 227
polar coordinates, 438
positive definite matrix, 264
positive homogeneous: distribution, 132; function, 131
principal symbol, 189
principal value distribution, 20, 136
projection, 253, 259
p-subaveraging, 361
Rademacher's Theorem, 425
real-analytic function, 214, 216, 227, 238, 245, 331, 347, 359
restriction of a distribution, 34
reverse Hölder estimate, 364
Riesz's Representation Theorem: for complex functionals, 426; for locally bounded functionals, 427; for positive functionals, 426
Riesz transforms, 167, 168, 221, 222, 228, 258
rigid transformation, 425
seminorm, 417
separating seminorms, 417
sequentially continuous, 416
sign function, 14
singular integral operator, 167, 172
singular support, 202
slowly increasing functions, 93
strongly elliptic matrix, 263
strongly elliptic operators, 263
p-subaveraging, 362, 364
support of an arbitrary function, 37
support of a distribution, 36
surface, 435
Taylor's Formula, 424
tensor product: of distributions, 55, 112; of functions, 48
the method of descent, 299
topological space: definition of, 415; Hausdorff, 416; metrizable, 416; separated, 416
topological vector space: definition, 416; dual, 418; locally convex, 417
topology: coarser, 416; definition of, 415; finer, 416; induced, 416; product, 416
transpose of an operator, 7
unique continuation property, 215
Urysohn's Lemma, 427
Vitali's Convergence Theorem, 427
weak: derivative, 4; solution, 2
weak∗-topology, 419
Young's Inequality, 425
Symbol Index

α! defined on page xix
|α| defined on page xix
C Cauchy operator in ℂ, 252
C Cauchy-Clifford operator, 257
χ_E characteristic function of the set E, xx
f^∨, 104
u^∨, 124
C^k(Ω), xx
C^k(Ω̄), xx
C^k_0(Ω), xx
C^∞_0(Ω), xx
Cℓ_n Clifford algebra with n generators, 253
Clifford algebra multiplication, 253
∗ convolution: of S(Rⁿ) with S′(Rⁿ), 116; of distributions, 61; of functions, 59
v₁ × v₂ × ⋯ × v_{n−1} cross product in Rⁿ, 436
D Dirac operator, 255
D = (1/i ∂₁, …, 1/i ∂ₙ), 91
δ_{jk} Kronecker symbol, xix
δ Dirac distribution, 36
D(Ω) test functions, 11, 422
D_K(Ω) test functions supported in K, 11, 422
D′(Ω) general distributions, 26, 422
{e_j}_{1≤j≤n} orthonormal basis in Rⁿ, xix
E(Ω), 9, 421
E′(Ω) compactly supported distributions, 41, 421
E_A fundamental solution for L_A: elliptic, 270; parabolic, 286
γ Euler's constant, 150
! factorial, xix
F Fourier transform, 89, 123
ˆ· Fourier transform, 89, 120
F⁻¹ inverse Fourier transform, 102
F_{x′} partial Fourier transform, 128
F_{x′}⁻¹ inverse partial Fourier transform, 129
F_{m,n} fundamental solution for Δ^m in Rⁿ, 241
Γ gamma function, 434
H Heaviside function, 6
H Hilbert transform, 167
i complex imaginary unit, xix
⨍_A f dμ integral average, xx
Δ Laplace operator, 165
Δ² bi-Laplace operator, 207
Δ^m poly-harmonic operator, 207
∂_t − Δ heat operator, 277
∂²_t − Δ wave operator, 289
L^p_comp(Ω), xx
L^p_loc(Ω), xx
L_A: elliptic, 262; parabolic, 282
M_{n×m}(R) matrices with entries from the ring R, xxi, 314
M_{n×m}(D′(Ω)), 309
M_{n×m}(E′(Ω)), 311
m_Θ the Fourier transform of P.V. Θ, 142
N the Newtonian potential, 175
∂f/∂ν normal derivative of f, 220
∂^A_ν conormal derivative, 271
ω_{n−1} surface measure of unit ball in Rⁿ, xx, 434
∂^α partial derivative of order α, xix
φ_Δ defined on page 60
Π_Φ generalized volume potential, 179
P harmonic Poisson kernel, 162
projection, 253, 259
p_t defined on page 163
P harmonic double layer, 162
P(D) linear constant coefficient partial differential operator, 101, 189
P(x, ∂) linear partial differential operator, 7
P(ξ), 101, 189
P_m(ξ) principal symbol of P(D), 189
P.V. Θ principal value distribution associated with Θ, 136
P.V. 1/x principal value distribution associated with 1/x, 20
Re real part of a complex number, xix
Im imaginary part of a complex number, xix
u|_ω restriction of the distribution u to the open set ω, 34
R_j Riesz transform, 167
sgn x sign function, 14
sing supp u singular support of the distribution u, 202
L(Rⁿ) slowly increasing functions, 93
supp f support of an arbitrary function f, 37
supp u support of the distribution u, 36
S(Rⁿ) Schwartz functions, 91, 423
S′(Rⁿ) tempered distributions, 111, 423
τ_t dilation, 130
⊗ tensor product: of distributions, 55; of functions, 48; of tempered distributions, 112
t_{x₀} translation by x₀ map: of distributions, 66; of functions, 12
P^⊤ transpose of the operator P, 7
T_Θ defined on page 167
u_f distribution associated with f, 19, 110
u|^ver_{∂Rⁿ₊} vertical limit of u to ∂Rⁿ₊, 165
Q_j defined on page 165
(q_j)_t defined on page 166