de Gruyter Studies in Mathematics 22 Editors: Heinz Bauer • Jerry L. Kazdan • Eduard Zehnder
de Gruyter Studies in Mathematics

1 Riemannian Geometry, 2nd rev. ed., Wilhelm P. A. Klingenberg
2 Semimartingales, Michel Métivier
3 Holomorphic Functions of Several Variables, Ludger Kaup and Burchard Kaup
4 Spaces of Measures, Corneliu Constantinescu
5 Knots, Gerhard Burde and Heiner Zieschang
6 Ergodic Theorems, Ulrich Krengel
7 Mathematical Theory of Statistics, Helmut Strasser
8 Transformation Groups, Tammo tom Dieck
9 Gibbs Measures and Phase Transitions, Hans-Otto Georgii
10 Analyticity in Infinite Dimensional Spaces, Michel Hervé
11 Elementary Geometry in Hyperbolic Space, Werner Fenchel
12 Transcendental Numbers, Andrei B. Shidlovskii
13 Ordinary Differential Equations, Herbert Amann
14 Dirichlet Forms and Analysis on Wiener Space, Nicolas Bouleau and Francis Hirsch
15 Nevanlinna Theory and Complex Differential Equations, Ilpo Laine
16 Rational Iteration, Norbert Steinmetz
17 Korovkin-type Approximation Theory and its Applications, Francesco Altomare and Michele Campiti
18 Quantum Invariants of Knots and 3-Manifolds, Vladimir G. Turaev
19 Dirichlet Forms and Symmetric Markov Processes, Masatoshi Fukushima, Yoichi Oshima, and Masayoshi Takeda
20 Harmonic Analysis of Probability Measures on Hypergroups, Walter R. Bloom and Herbert Heyer
21 Potential Theory on Infinite-Dimensional Abelian Groups, Alexander Bendikov
Vladimir E. Nazaikinskii • Victor E. Shatalov • Boris Yu. Sternin
Methods of Noncommutative Analysis Theory and Applications
Walter de Gruyter Berlin • New York 1996
Authors

Vladimir E. Nazaikinskii
Moscow State Institute of Electronics and Mathematics (Technical University), 3/12 B. Vuzovskii per., Moscow 109028, Russia

Victor E. Shatalov, Boris Yu. Sternin
Department of Computational Mathematics and Cybernetics, Moscow State University, Vorob'evy Gory, Moscow 119899, Russia
Series Editors

Heinz Bauer
Mathematisches Institut der Universität, Bismarckstraße 1 1/2, D-91054 Erlangen, FRG

Jerry L. Kazdan
Department of Mathematics, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104-6395, USA

Eduard Zehnder
ETH-Zentrum/Mathematik, Rämistraße 101, CH-8092 Zürich, Switzerland
1991 Mathematics Subject Classification: 35-02; 22Exx, 44-XX, 47-XX, 81R50

Keywords: Functional analysis, partial differential equations, Fourier integral operators, asymptotic theory, representation theory, Yang–Baxter equations, quantum groups
Printed on acid-free paper which falls within the guidelines of the ANSI to ensure permanence and durability.

Library of Congress Cataloging-in-Publication Data
Methods of noncommutative analysis : theory and applications / Vladimir E. Nazaikinskii, Victor E. Shatalov, Boris Yu. Sternin. p. cm. — (De Gruyter studies in mathematics ; 22) Includes bibliographical references and index. 1. Geometry, Differential. 2. Noncommutative algebras. 3. Mathematical physics. I. Shatalov, V. E. (Viktor Evgen'evich). II. Sternin, B. Yu. III. Title. IV. Series. QC20.7.G44N39 1996 515'.72—dc20 95-39641 CIP
Die Deutsche Bibliothek — Cataloging-in-Publication Data

Nazajkinskij, Vladimir E.: Methods of noncommutative analysis : theory and applications / Vladimir E. Nazaikinskii ; Victor E. Shatalov ; Boris Yu. Sternin. — Berlin ; New York : de Gruyter, 1996 (De Gruyter studies in mathematics ; 22) ISBN 3-11-014632-0 NE: Shatalov, Viktor E.:; Sternin, Boris J.:; GT
© Copyright 1995 by Walter de Gruyter & Co., D-10785 Berlin. All rights reserved, including those of translation into foreign languages. No part of this book may be reproduced in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Printed in Germany. Typesetting using the authors' TeX files: I. Zimmermann, Freiburg. Printing: Gerike GmbH, Berlin. Binding: Dieter Mikolai, Berlin. Cover design: Rudolf Hübler, Berlin.
Preface
Noncommutative analysis, that is, the calculus of noncommuting operators, is one of the main tools of contemporary mathematics. Indeed, the theory of differential and pseudodifferential operators, various problems of algebra, functional analysis, and theoretical physics all deal with functions of noncommuting operators. This was clearly understood by such outstanding scientists as H. Weyl, I. Schur, R. Feynman, and many others. It is therefore not surprising that the development of mathematics required the creation of a noncommutative analysis that is clear and convenient in applications.

R. Feynman was clearly a pioneer in the field. As early as 1951, he noticed in his paper "An operator calculus having applications in quantum electrodynamics" that the noncommutativity of operators can be accounted for by introducing numbers or indices showing the order in which the operators act. This apparently simple remark served as a starting point for the noncommutative operational calculus created by V. Maslov in the early 70's. Maslov's ideas have been developed in numerous papers, and very deep and important results have been obtained. In particular, they show that noncommutative operator calculus is intimately related not only to traditional mathematical and physical theories but also to rapidly developing new ones. Thus, noncommutative analysis turns out to be useful in such mathematical fields as the theory of geometric and asymptotic quantization, representation theory, the theory of quantum groups, and so on. In this book the reader will find many examples from functional analysis, algebra, representation theory, and the theory of differential equations in which noncommutative analysis is involved.

Unfortunately, up to now there has been no sufficiently simple exposition of noncommutative analysis that might serve as an introduction to the subject for scientists who are just beginning to get acquainted with this area; that is why this book has been written.
It is primarily addressed to those who are not specialists in noncommutative analysis. At the same time, even the experienced mathematician can find in this book many new and interesting topics. Noncommutative analysis gives a new outlook both on quite traditional and modern mathematical topics such as representation theory, operator theory, the theory of (pseudo)differential operators, Yang—Baxter equations and others.
Acknowledgements. We express our sincere gratitude to Professor Victor P. Maslov, whose great influence inspired our work and whose advice was of much use to us. This book was written with the support of the Chair of Nonlinear Dynamic Systems and Control Processes, Moscow State University, and of the Max-Planck-Arbeitsgruppe "Partielle Differentialgleichungen und Komplexe Analysis", Institut für Mathematik, Universität Potsdam. We are thankful to the heads of these departments, Professor Stanislav V. Emel'yanov and Professor Bert-Wolfgang Schulze. We are also grateful to Dr. Manfred Karbe, whose advice was truly invaluable. Finally, we are cordially thankful to Mrs. Helena R. Shashurina, who prepared the manuscript for the publishers.

Moscow – Potsdam, 1994
The Authors
Contents

Preface

I Elementary Notions of Noncommutative Analysis
1 Some Situations where Functions of Noncommuting Operators Arise
  1.1 Nonautonomous Linear Differential Equations of First Order. T-Exponentials
  1.2 Operators of Quantum Mechanics. Creation and Annihilation Operators
  1.3 Differential and Integral Operators
  1.4 Problems of Perturbation Theory
  1.5 Multiplication Law in Lie Groups
  1.6 Eigenfunctions and Eigenvalues of the Quantum Oscillator
  1.7 T-Exponentials, Trotter Formulas, and Path Integrals
2 Functions of Noncommuting Operators: the Construction and Main Properties
  2.1 Motivations
  2.2 The Definition and the Uniqueness Theorem
  2.3 Basic Properties
  2.4 Tempered Symbols and Generators of Tempered Groups
  2.5 The Influence of the Symbol Classes on the Properties of Generators
  2.6 Weyl Quantization
3 Noncommutative Differential Calculus
  3.1 The Derivation Formula
  3.2 The Daletskii–Krein Formula
  3.3 Higher-Order Expansions
  3.4 Permutation of Feynman Indices
  3.5 The Composite Function Formula
4 The Campbell–Hausdorff Theorem and Dynkin's Formula
  4.1 Statement of the Problem
  4.2 The Commutation Operation
  4.3 A Closed Formula for ln(e^B e^A)
  4.4 A Closed Formula for the Logarithm of a T-Exponential
5 Summary: Rules of "Operator Arithmetic" and Some Standard Techniques
  5.1 Notation
  5.2 Rules
  5.3 Standard Techniques

II Method of Ordered Representation
1 Ordered Representation: Definition and Main Property
  1.1 Wick Normal Form
  1.2 Ordered Representation and Theorem on Products
  1.3 Reduction to Normal Form
2 Some Examples
  2.1 Functions of the Operators x and −iℏ∂/∂x
  2.2 Perturbed Heisenberg Relations
  2.3 Examples of Nonlinear Commutation Relations
  2.4 Lie Commutation Relations
  2.5 Graded Lie Algebras
3 Evaluation of the Ordered Representation Operators
  3.1 Equations for the Ordered Representation Operators
  3.2 How to Obtain the Solution
  3.3 Semilinear Commutation Relations
4 The Jacobi Condition and Poincaré–Birkhoff–Witt Theorem
  4.1 Ordered Representation of Relation Systems and the Jacobi Condition
  4.2 The Poincaré–Birkhoff–Witt Theorem
  4.3 Verification of the Jacobi Condition: Two Examples
5 The Ordered Representations, Jacobi Condition, and the Yang–Baxter Equation
6 Representations of Lie Groups and Functions of Their Generators
  6.1 Conditions on the Representation
  6.2 Hilbert Scales
  6.3 Symbol Spaces
  6.4 Symbol Classes: More Suitable for Asymptotic Problems

III Noncommutative Analysis and Differential Equations
1 Preliminaries
  1.1 Heaviside's Operator Method for Differential Equations with Constant Coefficients
  1.2 Nonstandard Characteristics and Asymptotic Expansions
  1.3 Asymptotic Expansions: Smoothness vs Parameter
  1.4 Asymptotic Expansions with Respect to an Ordered Tuple of Operators
  1.5 Reduction to Pseudodifferential Equations
  1.6 Commutation of an h⁻¹-Pseudodifferential Operator with an Exponential
  1.7 Summary: the General Scheme
2 Difference and Difference-Differential Equations
  2.1 Difference Approximations as Pseudodifferential Equations
  2.2 Difference Approximations as Functions of x and δx±
  2.3 Another Approach to Difference Approximations
3 Propagation of Electromagnetic Waves in Plasma
  3.1 Statement of the Problem
  3.2 The Construction of the Asymptotic Expansion
  3.3 Analysis of the Asymptotic Solution
4 Equations with Symbols Growing at Infinity
  4.1 Statement of the Problem and its Operator Interpretation
  4.2 Asymptotic Solution of the Symbolic Equation
  4.3 Equations with Fractional Powers of x in the Coefficients
5 Geostrophic Wind Equations
6 Degenerate Equations
  6.1 Statement of the Problem
  6.2 Localization of the Right-Hand Side
  6.3 Solving the Equation with Localized Right-Hand Side
  6.4 The Asymptotic Solution in the General Case
7 Microlocal Asymptotic Solutions for an Operator with Double Characteristics

IV Functional-Analytic Background of Noncommutative Analysis
1 Topics on Convergence
  1.1 What Is Actually Needed?
  1.2 Polynormed Spaces and Algebras
  1.3 Tensor Products
2 Symbol Spaces and Generators
  2.1 Definitions
  2.2 S" Is a Proper Symbol Space
  2.3 S"-Generators
3 Functions of Operators in Scales of Spaces
  3.1 Banach Scales
  3.2 S"-Generators in Banach Scales
  3.3 Functions of Feynman-Ordered Selfadjoint Operators

Appendix A. Representation of Lie Algebras and Lie Groups
1 Lie Algebras and Their Representations
  1.1 Lie Algebras, Bases, Structure Constants, Subalgebras
  1.2 Examples of Lie Algebras
  1.3 Homomorphisms, Ideals, Quotient Algebras
  1.4 Representations
  1.5 The Associated Representation ad. The Center of a Lie Algebra
  1.6 The Ado Theorem
  1.7 Nilpotent Lie Algebras
2 Lie Groups and Their Representations
  2.1 Lie Groups, Subgroups, the Gleason–Montgomery–Zippin Theorem
  2.2 Examples of Lie Groups
  2.3 Local Lie Groups
  2.4 Homomorphisms of Lie Groups, Normal Subgroups, Quotient Groups
3 Left and Right Translations. The Haar Measure
  3.1 Left and Right Regular Representations
  3.2 Representations of Lie Groups
4 The Relationship between Lie Groups and Lie Algebras
  4.1 The Lie Algebra of a Lie Group
  4.2 Examples
  4.3 The Exponential Mapping, One-Parameter Subgroups, Coordinates of I and II Genera
  4.4 Evaluating the Commutator with the Help of the Mapping exp
  4.5 Derived Homomorphisms
  4.6 Derived Representation
  4.7 The Lie Group Corresponding to a Lie Algebra
  4.8 The Krein–Shikhvatov Theorem

Appendix B. Pseudodifferential Operators
1 Elementary Introduction
2 Symbol Spaces and Generators
3 Pseudodifferential Operators

Glossary
Bibliographical Remarks
Bibliography
Index
Chapter I
Elementary Notions of Noncommutative Analysis
1 Some Situations where Functions of Noncommuting Operators Arise

In this section we consider a few examples of problems from different areas of mathematics and physics whose study requires the use of functions of noncommuting operators. In fact, each of these problems can be studied by its own inherent methods, but considered as a whole they suggest that there should be a universal apparatus permitting us to treat them all from a common point of view. Indeed, such an apparatus has already been developed. It is called noncommutative analysis; the aim of this section is to illustrate its main features by simple examples and, in particular, to introduce the so-called Feynman indices, which play the main role in the machinery of noncommutative analysis. It should be clearly noted that the examples given in the following were not chosen at random. In the course of our exposition we will return to them repeatedly and show how the ideas and methods of the theory developed here work in simple situations.
1.1 Nonautonomous Linear Differential Equations of First Order. T-Exponentials

The equation
$$\dot x = A(t)\,x$$
can easily be integrated in quadratures if $x \in \mathbb R^1$ and $A(t)$ is a given continuous function:
$$x(t) = \exp\Bigl(\int_0^t A(\tau)\,d\tau\Bigr)\, x(0). \tag{I.1}$$
No more complicated is the case in which $x \in \mathbb R^n$ but the $n \times n$ matrices $A(t)$ and $A(t')$ commute with each other:
$$[A(t), A(t')] \overset{\text{def}}{=} A(t)\,A(t') - A(t')\,A(t) = 0 \quad\text{for any } t, t'. \tag{I.2}$$
The solution is expressed by the same formula, where exp stands for the matrix exponential, defined as the sum of the convergent series
$$\exp(B) = \sum_{n=0}^{\infty} \frac{B^n}{n!}.$$
Indeed, under condition (I.2) we have
$$\exp\Bigl(\int_0^{t+\Delta t} A(\tau)\,d\tau\Bigr) = \exp\Bigl(\int_0^{t} A(\tau)\,d\tau\Bigr)\, \exp\Bigl(\int_t^{t+\Delta t} A(\tau)\,d\tau\Bigr)$$
(since the arguments of the exponentials commute with each other, one can expand the exponentials into Taylor series and repeat verbatim the proof of the identity $e^a e^b = e^{a+b}$, valid in the case in which $a$ and $b$ are numbers). Since
$$\exp\Bigl(\int_t^{t+\Delta t} A(\tau)\,d\tau\Bigr) = 1 + \int_t^{t+\Delta t} A(\tau)\,d\tau + \cdots,$$
it is easy to differentiate $x(t)$ with respect to $t$ and prove that $x(t)$ satisfies the original equation.
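For a commuting family the product of short-time exponentials collapses to the exponential of the integral, which is easy to check numerically. A minimal sketch, assuming NumPy; the homemade `expm` and all names are illustrative:

```python
import numpy as np

def expm(M, terms=30):
    """Matrix exponential via its Taylor series (adequate for small norms)."""
    out = term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Commuting family A(t) = a(t) J: [A(t), A(t')] = 0, so the product of
# short-time exponentials agrees with exp of the integral of A(t).
J = np.array([[0., 1.], [-1., 0.]])
a = lambda t: 1.0 + t
x0 = np.array([1.0, 0.0])

n, T = 2000, 1.0
dt = T / n
x = x0.copy()
for i in range(n):
    x = expm(a((i + 0.5) * dt) * dt * J) @ x   # step-by-step product

exact = expm((T + T**2 / 2) * J) @ x0          # exp(∫_0^1 (1+t) dt · J) x0
assert np.allclose(x, exact, atol=1e-6)
```

For noncommuting families the same product no longer collapses, which is precisely the point of the discussion that follows.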
However, the simple formula (I.1) fails if we do not require that the commutator (I.2) be equal to zero. In general, we can only write down the solution in the form of the following limit:
$$x(t) = \lim_{\substack{N \to \infty \\ \max_i \Delta t_i \to 0}} \exp(A_N \Delta t_N)\, \exp(A_{N-1}\Delta t_{N-1}) \cdots \exp(A_1 \Delta t_1)\, x(0),$$
where
$$0 = t_0 < t_1 < \cdots < t_N = t, \qquad \Delta t_i = t_i - t_{i-1},$$
and $A_i = A(\theta_i)$ for some $\theta_i \in [t_{i-1}, t_i]$. In the physics literature the limit on the right-hand side of the latter equality is called a T-exponential (see, e.g., [SS]) and is sometimes denoted by $T\text{-}\exp\bigl(\int_0^t A(\tau)\,d\tau\bigr)\, x(0)$. Generally speaking,
$$T\text{-}\exp\Bigl(\int_0^t A(\tau)\,d\tau\Bigr) \ne \exp\Bigl(\int_0^t A(\tau)\,d\tau\Bigr),$$
since the identity $\exp(A)\exp(B) = \exp(A+B)$, which could be used to represent the expression under the limit sign as the exponential of an integral sum, is not valid if $[A,B] \ne 0$, that is, if $A$ does not commute with $B$. However, Feynman invented a fairly simple notation enabling us to sidestep this
difficulty and, in a sense, pay no attention to the fact that the matrices $A(t)$ do not commute with each other for different values of $t$. Namely, we equip matrices with indices prescribing their arrangement in products: the greater the index, the further to the left stands the corresponding matrix. In writing, these indices will be placed over the letters denoting matrices; for example,
$$\overset{1}{A}\,\overset{2}{B} = BA, \qquad \bigl(\overset{1}{A} + \overset{2}{B}\bigr)^2 = \overset{1}{A}{}^2 + 2\,\overset{1}{A}\,\overset{2}{B} + \overset{2}{B}{}^2 = A^2 + B^2 + 2BA,$$
$$\bigl(\overset{1}{A} + \overset{2}{B}\bigr)^3 = A^3 + 3BA^2 + 3B^2A + B^3, \quad\text{etc.}$$
In this notation, we see that
$$\exp\bigl(\overset{1}{A} + \overset{2}{B}\bigr) = \sum_{n=0}^{\infty} \frac{\bigl(\overset{1}{A}+\overset{2}{B}\bigr)^n}{n!} = \exp\bigl(\overset{1}{A}\bigr)\,\exp\bigl(\overset{2}{B}\bigr) = \exp(B)\,\exp(A).$$
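The bookkeeping behind Feynman indices can be mimicked mechanically: sort the factors by index in decreasing order and only then multiply. A minimal sketch, assuming NumPy; `ordered_product` is an illustrative helper, not the book's notation:

```python
import numpy as np

def ordered_product(*factors):
    """Multiply (index, matrix) pairs in Feynman order: the factor with
    the greater index stands further to the left in the product."""
    ordered = sorted(factors, key=lambda f: f[0], reverse=True)
    result = np.eye(ordered[0][1].shape[0])
    for _, m in ordered:
        result = result @ m
    return result

A = np.array([[1., 2.], [0., 1.]])
B = np.array([[1., 0.], [3., 1.]])

# A with index 1, B with index 2: the product is BA, however it is written
assert np.allclose(ordered_product((1, A), (2, B)), B @ A)
assert np.allclose(ordered_product((2, B), (1, A)), B @ A)
# only the order relation between the indices matters, not their values
assert np.allclose(ordered_product((5, A), (7, B)), B @ A)
```

Note that $BA \ne AB$ for these matrices, so the ordering genuinely changes the result.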
Note that the order of the factors $\exp(A_i \Delta t_i)$ in the product coincides with the order of the points $\theta_i$ on the real axis. Hence we can rewrite $x(t)$ as
$$x(t) = \lim_{\substack{N \to \infty \\ \max_i \Delta t_i \to 0}} \exp\bigl(\overset{\theta_1}{A_1}\Delta t_1 + \cdots + \overset{\theta_N}{A_N}\Delta t_N\bigr)\, x(0).$$
Formally, the argument of the exponential is the integral sum of some integral; let us write down this integral explicitly:
$$x(t) = \exp\Bigl(\int_0^t \overset{\theta}{A}(\theta)\, d\theta\Bigr)\, x(0).$$
Note that equation (I.1) is a special case of the last equation. Indeed, if $\{A(t)\}$ is a family of commuting matrices, then their arrangement in products is irrelevant, and the index $\theta$ over $A(\theta)$ may as well be omitted. As a result, we obtain formula (I.1). At first it may seem that the expression on the right-hand side of the last equation is no more than yet another notation for the T-exponential. But this is a false impression: its meaning can be defined directly as the result of substituting the family $\{\overset{t}{A}(t)\}$ of matrices ordered by the parameter $t$ for the argument $y(t)$ into the functional
$$F[y] = \exp\Bigl(\int_0^t y(\tau)\,d\tau\Bigr).$$
Thus we have introduced indices over operators (in the example considered these operators are matrices, which, in fact, does not matter). They will be referred to as Feynman indices, in honor of their inventor.
Let us sum up their simplest (and evident) properties.

(i) Indices over operators indicate their arrangement in products; namely, an operator with a lower index always stands to the right of an operator with a greater index. Thus, it is the order relation between the indices that counts, not the values of the indices themselves (which may be changed without affecting the operator expression).

(ii) The mutual order of indices over commuting operators plays no role at all. In particular, if all operators involved in an expression commute with each other, then the indices can be omitted.
1.2 Operators of Quantum Mechanics. Creation and Annihilation Operators

In quantum mechanics, the state of a system of particles is described by the wave function $\Psi$, which is a normalized element of a Hilbert state space $\mathcal H$; observables correspond to Hermitian operators in $\mathcal H$. The quantity $\bar A = (\Psi, A\Psi)$, where $(\cdot\,,\cdot)$ is the inner product in $\mathcal H$, is called the expectation value of the observable $A$ in the state $\Psi$, and $(\Delta A)^2 = (\Psi, (A - \bar A)^2 \Psi)$ is called the variance of $A$ in the state $\Psi$. One says that $A$ has a definite value $a$ in a state $\Psi$ if $A\Psi = a\Psi$, that is to say, $\Psi$ is an eigenvector of $A$ with eigenvalue $a$. It is easy to see that in this case $\bar A = a$ and $(\Delta A)^2 = 0$. The converse is also true: if $(\Delta A)^2 = 0$, then
$$\|A\Psi - \bar A\Psi\|^2 = \bigl((A - \bar A)\Psi, (A - \bar A)\Psi\bigr) = \bigl(\Psi, (A - \bar A)^2\Psi\bigr) = 0$$
(recall that $A$ is a Hermitian operator), so that $A\Psi = \bar A\Psi$.

Heisenberg's Uncertainty Principle claims that the variances of two observables $A$ and $B$ corresponding to canonically conjugate variables in classical mechanics (such as the position $q_i$ and the dual momentum $p_i$) satisfy the uncertainty relation $(\Delta A)(\Delta B) \ge \hbar$, where $\hbar$ is Planck's constant. In particular, the observables $A$ and $B$ cannot be measured "simultaneously", that is to say, they cannot have definite values in some state $\Psi$ simultaneously. Indeed, the Uncertainty Principle implies that if $\Delta A = 0$, then $\Delta B = \infty$. Thus, $A$ and $B$ have no common eigenvectors. On the other hand, commuting Hermitian operators always have a complete system of common (generalized) eigenvectors. We conclude that $A$ and $B$ do not commute with each other, and the algebra of observables is not commutative.
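For the position and momentum operators of the next subsection, the noncommutativity is a one-line computation: $[X, P]f = i\hbar f$ for $X = x$ and $P = -i\hbar\, d/dx$. A symbolic sketch, assuming SymPy; the test state is arbitrary:

```python
import sympy as sp

x, hbar = sp.symbols('x hbar')
f = sp.sin(x) * sp.exp(-x**2)               # an arbitrary test state

X = lambda g: x * g                          # position operator
P = lambda g: -sp.I * hbar * sp.diff(g, x)   # momentum operator

commutator = X(P(f)) - P(X(f))               # [X, P] f
assert sp.simplify(commutator - sp.I * hbar * f) == 0
```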
In the Schrödinger representation the states $\Psi$ are functions of the physical coordinates $x = (x^1, \ldots, x^{3N})$, where $N$ is the number of particles in the system, $\Psi = \Psi(x) \in \mathcal H = L^2(\mathbb R^{3N})$ (we assume that the particles have no internal degrees of freedom). The observables $X^i$ associated with the coordinates $x^i$ are merely the operators of multiplication by $x^i$, $X^i = x^i$, whereas the observables $P_i$ associated with the momenta $p_i$ are differentiation operators, $P_i = -i\hbar\,\partial/\partial x^i$. Moreover, if some classical variable $F$ has the form of a function $F = F(x, p)$, then the associated quantum observable is given by
$$\hat F = F(\hat X, \hat P) \overset{\text{def}}{=} F\Bigl(x, -i\hbar\frac{\partial}{\partial x}\Bigr). \tag{I.3}$$
In fact, the quantization procedure $F \mapsto \hat F$ is ambiguous, and the last formula is only valid modulo lower-order terms with respect to $\hbar$. We point out that this formula itself is ambiguous: since the operators $x$ and $-i\hbar\,\partial/\partial x^i$ do not commute with each other, one should fix their ordering, e.g., by equipping them with Feynman indices. However, the said ambiguity can be "hidden" in some concrete problems. For example, if $H$ is a Hamiltonian of the form
$$H(x, p) = \sum_{i=1}^{N} \frac{p_i^2}{2m} + V(x),$$
then, due to its additive structure, the choice of Feynman indices of $x$ and $-i\hbar\,\partial/\partial x^i$ is unimportant, and the energy operator
$$\hat H = H(\hat x, \hat p) = -\sum_{i=1}^{N} \frac{\hbar^2}{2m}\,\Delta_i + V(x)$$
(where $\Delta_i$ is the Laplacian with respect to the coordinates of the $i$-th particle) is defined uniquely¹.

Relativistic quantum mechanics deals with systems with a variable number of particles. Here the so-called occupation number representation is convenient. Let us consider it in the simplest model version. Suppose that there is only one type of particle in the system considered, and each particle can occupy one of $n$ distinct basis states². Let us represent the state space $\mathcal H$ as the Hilbert sum of one-dimensional subspaces
$$\mathcal H = \bigoplus_{j_1,\ldots,j_n} \mathcal H_{j_1,\ldots,j_n},$$

¹ The picture would be much more complicated should we consider the quantization in "generalized" coordinates and momenta.
² In realistic systems the number of possible states is, as a rule, infinite.
where $\mathcal H_{j_1,\ldots,j_n}$ is the space of states such that exactly $j_k$ particles occupy the $k$-th basis state, $k = 1, \ldots, n$. The numbers $j_1, \ldots, j_n$ are referred to as occupation numbers. The space $\mathcal H_0 = \mathcal H_{0,\ldots,0}$ is called the vacuum subspace and corresponds to the state with no particles at all. The structure of the state space is known to depend on the spin of the particles. Namely, there are two possibilities: if the spin is integral, then the particles obey Bose–Einstein statistics, that is, each state can be occupied by an arbitrary number of particles; if the spin is half-integral, then we have Fermi–Dirac statistics, that is, at most one particle can occupy each given state. In the first case, the sum is taken over all nonnegative $j_1, \ldots, j_n$, whereas in the second case the sum is finite and extends over $j_i \in \{0, 1\}$. According to the type of statistics, the particles are referred to as bosons or fermions.

To each of the $n$ basis states there corresponds a creation operator $a_k^+$ and an annihilation operator $a_k^-$, the adjoint of $a_k^+$. These operators "create" and "destroy" particles in the $k$-th basis state, that is,
$$a_k^+\, \mathcal H_{j_1,\ldots,j_k,\ldots,j_n} \subset \mathcal H_{j_1,\ldots,j_k+1,\ldots,j_n}, \qquad a_k^-\, \mathcal H_{j_1,\ldots,j_k,\ldots,j_n} \subset \mathcal H_{j_1,\ldots,j_k-1,\ldots,j_n}$$
(it is assumed that $\mathcal H_{j_1,\ldots,j_n} = \{0\}$ provided that at least one of the indices $j_k$ is negative or, in the case of fermions, greater than 1). The operators $a_k^+$ and $a_k^-$ satisfy the following commutation relations:
$$[a_k^+, a_l^+]_\pm = [a_k^-, a_l^-]_\pm = 0, \qquad [a_k^-, a_l^+]_\pm = \delta_{kl}\, I.$$
Here $\delta_{kl}$ is the Kronecker delta, $I$ the identity operator, and
$$[A, B]_\pm = AB \pm BA$$
the (anti)commutator of the operators $A$ and $B$ (the upper sign is taken for fermions and the lower for bosons). The Wick normal form of an operator $A$ acting on the space $\mathcal H$ is its representation in the form
$$A = \sum_{(\alpha,\beta)} c_{\alpha_1,\ldots,\alpha_n,\beta_1,\ldots,\beta_n}\, (a_1^+)^{\alpha_1}\cdots(a_n^+)^{\alpha_n}\, (a_1^-)^{\beta_1}\cdots(a_n^-)^{\beta_n},$$
that is, the representation in which all creation operators in each monomial stand to the left of all annihilation operators. The Wick normal form is very convenient, e.g., for evaluating vacuum expectations (the expectations in the vacuum state $\Psi_0 \in \mathcal H_0$, $\|\Psi_0\| = 1$). Indeed, we have
$$(\Psi_0, A\Psi_0) = \sum_{(\alpha,\beta)} c_{\alpha_1,\ldots,\alpha_n,\beta_1,\ldots,\beta_n}\, \bigl((a_1^-)^{\alpha_1}\cdots(a_n^-)^{\alpha_n}\Psi_0,\ (a_1^-)^{\beta_1}\cdots(a_n^-)^{\beta_n}\Psi_0\bigr) = c_{0,\ldots,0},$$
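For a single fermionic mode the state space is two-dimensional (vacuum and occupied), and the anticommutation relations above can be verified with explicit $2 \times 2$ matrices. A minimal sketch, assuming NumPy; the basis ordering is our choice:

```python
import numpy as np

a_plus = np.array([[0., 0.], [1., 0.]])   # creation: vacuum -> occupied
a_minus = a_plus.T                        # annihilation = adjoint

anti = lambda A, B: A @ B + B @ A         # anticommutator [A, B]_+

assert np.allclose(anti(a_plus, a_plus), 0)            # [a+, a+]_+ = 0, so (a+)^2 = 0
assert np.allclose(anti(a_minus, a_minus), 0)          # [a-, a-]_+ = 0
assert np.allclose(anti(a_minus, a_plus), np.eye(2))   # [a-, a+]_+ = I
```

The first relation encodes the Pauli exclusion principle: no state can be occupied twice.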
since all terms with $(\alpha,\beta) \ne (0,0)$ vanish. Clearly, any polynomial (or series) in the creation and annihilation operators can be reduced to the Wick normal form by permuting all creation operators to the left with the help of the commutation relations.

From the viewpoint of noncommutative analysis, the Wick normal form of an operator $A$ is none other than its representation as
$$A = f\bigl(\overset{2}{a^+}, \overset{1}{a^-}\bigr)$$
(the Feynman indices were assigned taking into account that the creation operators commute with each other, as do the annihilation operators). The function
$$f(z, w) = f(z_1,\ldots,z_n, w_1,\ldots,w_n) = \sum_{(\alpha,\beta)} c_{\alpha_1,\ldots,\alpha_n,\beta_1,\ldots,\beta_n}\, z_1^{\alpha_1}\cdots z_n^{\alpha_n}\, w_1^{\beta_1}\cdots w_n^{\beta_n}$$
is called the Wick symbol of the operator $A$ [11]. It is easy to check that the Wick symbol is unique. In the case of fermions the creation operators (and the annihilation operators) no longer commute with each other, so they all should get different Feynman indices:
$$A = f\bigl(\overset{n+1}{a_1^+},\ldots,\overset{2n}{a_n^+},\ \overset{1}{a_1^-},\ldots,\overset{n}{a_n^-}\bigr).$$
The problem of calculating the Wick normal form of an operator acting on $\mathcal H$ will be considered in Subsection 1.1 of Chapter II.
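The reduction to Wick normal form can be mechanized. The sketch below, with illustrative names, handles a single bosonic mode: an operator is stored as a dictionary {(α, β): c} representing $\sum c\,(a^+)^\alpha (a^-)^\beta$, and products are normal-ordered via the closed-form consequence of $a^- a^+ = a^+ a^- + 1$:

```python
import math

def normal_product(p, q):
    """Wick-normal form of p*q for one bosonic mode.  Operators are stored
    as {(alpha, beta): c}, meaning sum of c * (a+)^alpha (a-)^beta.
    Uses (a-)^b (a+)^m = sum_k k! C(b,k) C(m,k) (a+)^(m-k) (a-)^(b-k),
    the iterated form of a- a+ = a+ a- + 1."""
    out = {}
    for (a1, b1), c1 in p.items():
        for (a2, b2), c2 in q.items():
            for k in range(min(b1, a2) + 1):
                c = (c1 * c2 * math.factorial(k)
                     * math.comb(b1, k) * math.comb(a2, k))
                key = (a1 + a2 - k, b1 + b2 - k)
                out[key] = out.get(key, 0) + c
    return out

a_plus, a_minus = {(1, 0): 1}, {(0, 1): 1}

# a- a+  =  a+ a- + 1
assert normal_product(a_minus, a_plus) == {(1, 1): 1, (0, 0): 1}
# (a-)^2 a+  =  a+ (a-)^2 + 2 a-
assert normal_product({(0, 2): 1}, a_plus) == {(1, 2): 1, (0, 1): 2}
```

The coefficients of the resulting dictionary are exactly the values of the Wick symbol of the product.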
1.3 Differential and Integral Operators

The theory of linear partial differential equations deals with differential operators of the form
$$P = \sum_{|\alpha| \le m} a_\alpha(x)\, \Bigl(\frac{\partial}{\partial x}\Bigr)^{\alpha},$$
where $x \in \mathbb R^n$, $\alpha = (\alpha_1, \ldots, \alpha_n)$ is a multi-index, and $m$ is the order of $P$. Obviously, the operator $P$ can be interpreted as a function of the operators $x_j$ and $\partial/\partial x_j$, $j = 1, \ldots, n$:
$$P = f\Bigl(\overset{2}{x}, \frac{\overset{1}{\partial}}{\partial x}\Bigr), \qquad\text{where}\qquad f(x, \xi) = \sum_{|\alpha| \le m} a_\alpha(x)\, \xi^\alpha.$$
For technical reasons (since $-i\,\partial/\partial x$ is a self-adjoint operator in $L^2(\mathbb R^n)$), the operator $P$ is usually represented in a somewhat different form, namely
$$P = g\Bigl(\overset{2}{x},\, -i\frac{\overset{1}{\partial}}{\partial x}\Bigr), \qquad\text{where}\qquad g(x, p) = \sum_{|\alpha| \le m} a_\alpha(x)\,(ip)^\alpha.$$
Thus, a differential operator is a function of the Feynman-ordered operators $\overset{2}{x}$ and $-i\,\overset{1}{\partial}/\partial x$ whose symbol is a polynomial in $p$. A solution (or almost-solution) of a differential equation
$$Pu = f$$
can often be represented in the form of some integral operator applied to $f$. For example, if $P$ is an elliptic operator, then the solution modulo smooth functions can be represented via a pseudodifferential (singular integral) operator,
$$u(x) = \int K(x,\, x - y)\, f(y)\, dy$$
(see, for example, [79]). If $P$ is a hyperbolic operator, then the solution can be represented via a Fourier integral operator (see [81], [137], and others). Let us show, formally for now, that the pseudodifferential operator can be represented as a function of the Feynman-ordered operators $\overset{2}{x}$ and $-i\,\overset{1}{\partial}/\partial x$. To this end, we make the change of variables $y = x + \tau$ in the integral and obtain
$$u(x) = \int K(x, -\tau)\, f(x + \tau)\, d\tau = \int K(x, -\tau)\, e^{\tau\,\partial/\partial x} f(x)\, d\tau,$$
where $e^{\tau\,\partial/\partial x}$ is the operator of translation by $\tau$, which can be defined as the solution $U$ of the operator Cauchy problem
$$\frac{\partial U}{\partial \tau} = \frac{\partial U}{\partial x}, \qquad U|_{\tau=0} = \mathrm{id}.$$
Set
$$H(x, p) = \int K(x, -\tau)\, e^{i\tau p}\, d\tau.$$
Then
$$u(x) = \int K\bigl(\overset{2}{x}, -\tau\bigr)\, e^{\tau\,\overset{1}{\partial}/\partial x}\, d\tau\, f(x) = H\Bigl(\overset{2}{x},\, -i\frac{\overset{1}{\partial}}{\partial x}\Bigr) f(x),$$
where $\widehat H = (2\pi)^n K(x, -\tau)$ is the Fourier transform of $H(x, p)$ with respect to the variable $p$. The properties of the kernel $K(x, x - y)$ imply that $H(x, p)$ is a homogeneous function of $p$, smooth for $p \ne 0$. We also have
$$H\Bigl(\overset{2}{x},\, -i\frac{\overset{1}{\partial}}{\partial x}\Bigr) f(x) = \frac{1}{(2\pi)^n}\int\!\!\int H(x, \xi)\, e^{i\xi(x-y)}\, f(y)\, dy\, d\xi.$$
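On a periodic grid, applying a function of the ordered pair $\overset{2}{x}$, $-i\,\overset{1}{\partial}/\partial x$ amounts to "act with the $p$-part via the Fourier transform first, then multiply by the $x$-part". A numerical sketch for a separable symbol $H(x, p) = V(x)\,h(p)$, assuming NumPy; all names are illustrative:

```python
import numpy as np

def apply_left_symbol(V, h, f, L=2*np.pi):
    """Apply H(x^2, -i d/dx^1) f for the separable symbol
    H(x, p) = V(x) h(p) on a periodic grid of length L:
    first h(-i d/dx) via FFT, then multiplication by V(x)."""
    n = len(f)
    x = np.arange(n) * L / n
    p = 2 * np.pi * np.fft.fftfreq(n, d=L/n)   # momentum grid
    g = np.fft.ifft(h(p) * np.fft.fft(f))      # h(-i d/dx) f
    return V(x) * g                            # then multiply by V(x)

# Check against the exact answer for H(x, p) = x * (ip), i.e. x d/dx.
n = 256
x = np.arange(n) * 2 * np.pi / n
f = np.sin(x)
out = apply_left_symbol(lambda x: x, lambda p: 1j * p, f)
assert np.allclose(out.real, x * np.cos(x), atol=1e-8)
```

Reversing the two steps would realize the opposite ordering and, for a nonconstant $V$, a genuinely different operator.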
Fourier integral operators admit a similar representation. Let us illustrate this fact with a simple example. Suppose that the Fourier integral operator has the form
$$\hat\Phi f = \int e^{iS(x,p)}\, a(x, p)\, \tilde f(p)\, dp, \tag{I.4}$$
where $\tilde f(p)$ is the Fourier transform of the function $f(x)$ (such operators give, in particular, an almost-solution of the Cauchy problem
$$-i\frac{\partial u}{\partial t} = H\Bigl(\overset{2}{x},\, -i\frac{\overset{1}{\partial}}{\partial x}\Bigr) u, \qquad u|_{t=0} = f(x),$$
for small values of $t$; here $H\bigl(\overset{2}{x}, -i\,\overset{1}{\partial}/\partial x\bigr)$ is a first-order pseudodifferential operator). The operator (I.4) can be represented in the form
$$\hat\Phi f = \int K(x,\, x - y)\, f(y)\, dy,$$
where
$$K(x, z) = F^{-1}_{p \to z}\bigl\{e^{i\Phi(x,p)}\, a(x, p)\bigr\}, \qquad \Phi(x, p) = S(x, p) - xp,$$
and $F^{-1}_{p \to z}$ is the inverse Fourier transform. Now computations similar to those above show that
$$\hat\Phi f = e^{i\Phi(\overset{2}{x},\, -i\overset{1}{\partial}/\partial x)}\, a\Bigl(\overset{2}{x},\, -i\frac{\overset{1}{\partial}}{\partial x}\Bigr) f.$$
1.4 Problems of Perturbation Theory

Let $A$ and $B$ be two noncommuting operators. We will consider a function $f(A + \varepsilon B)$ of the operator $A + \varepsilon B$, where $\varepsilon$ is a small parameter, and try to expand it in powers of $\varepsilon$. This is a typical problem of perturbation theory; the operator $A$ is considered as the operator of the unperturbed problem and $\varepsilon B$ as a small perturbation. For example, consider the operator Cauchy problem
$$\frac{du}{dt} = (A + \varepsilon B)\, u, \qquad u|_{t=0} = E \tag{I.5}$$
(here $E$ is the identity operator). Suppose that the solution of the unperturbed problem ($\varepsilon = 0$) is known. Then the "corrections" for small $\varepsilon$ can be found by expanding the function $f(A + \varepsilon B)$, where $f(x) = e^{tx}$, in powers of $\varepsilon$.

If the operators $A$ and $B$ commute, the solution of this problem is well known. It is given by the Taylor formula
$$f(A + \varepsilon B) = f(A) + \varepsilon B f'(A) + \frac{\varepsilon^2}{2}\, B^2 f''(A) + \cdots.$$
However, this equation fails if $[A, B] \ne 0$. Indeed, consider the simplest example, in which $f(x) = x^2$. We have
$$f(A + \varepsilon B) = A^2 + \varepsilon(AB + BA) + \varepsilon^2 B^2,$$
whereas
$$f(A) + \varepsilon B f'(A) + \frac{\varepsilon^2}{2}\, B^2 f''(A) = A^2 + 2\varepsilon BA + \varepsilon^2 B^2,$$
and these two expressions differ by $\varepsilon[A, B]$. Hence there arises the problem of giving a counterpart of the Taylor expansion for noncommuting $A$ and $B$. Computing the derivative
$$\frac{d}{d\varepsilon}\, f(A + \varepsilon B)$$
would be a first step in that direction (this provides the desired expansion with accuracy $O(\varepsilon^2)$); let us compute this derivative for the particular case in which $f(x) = (\lambda - x)^{-1}$ (that is, we consider perturbation theory for the resolvent $R_A(\lambda)$). The following resolvent formula is valid (see, e.g., [197]):
$$(\lambda - C)^{-1} - (\lambda - A)^{-1} = (\lambda - C)^{-1}(C - A)(\lambda - A)^{-1}$$
(this can easily be checked by multiplying both sides of the last equation by $(\lambda - C)$ on the left and by $(\lambda - A)$ on the right). By inserting $C = A + \varepsilon B$ in this formula, we obtain
$$f(A + \varepsilon B) - f(A) = (\lambda - A - \varepsilon B)^{-1}\, \varepsilon B\, (\lambda - A)^{-1}.$$
Hence, it is clear that
$$\frac{d}{d\varepsilon}\, f(A + \varepsilon B)\Big|_{\varepsilon=0} = (\lambda - A)^{-1}\, B\, (\lambda - A)^{-1} = \frac{\overset{2}{B}}{(\lambda - \overset{3}{A})(\lambda - \overset{1}{A})}.$$
Obviously, the same "naive" method permits us to calculate higher-order derivatives with respect to $\varepsilon$; however, we refrain from doing so, since our computation relies heavily on the particular form of the function $f$ and is by no means universal. The general formulas of perturbation theory for $f(A + \varepsilon B)$ will be given in Section 3; for now we consider yet another example. Let us compute the expansion in powers of $\varepsilon$ of the exponential $e^{A + \varepsilon B}$ (recall that this expansion is the perturbation theory series for the solution of the Cauchy problem (I.5) at $t = 1$). For simplicity, we assume that $A$ and $B$ are bounded operators on a normed space. Thus, we intend to find the coefficients $P_1, P_2, \ldots$ of the expansion
$$e^{A + \varepsilon B} = e^A + \varepsilon P_1 + \varepsilon^2 P_2 + \cdots. \tag{I.6}$$
We will use the extraction formula for T-exponentials [54]:
$$ T\text{-exp}\Bigl(\int_0^t \bigl(A(\tau) + B(\tau)\bigr)\, d\tau\Bigr) = T\text{-exp}\Bigl(\int_0^t A(\tau)\, d\tau\Bigr)\; T\text{-exp}\Bigl(\int_0^t C(\tau)\, d\tau\Bigr), $$
where C(t) is the operator family given by the formula
$$ C(t) = \Bigl[T\text{-exp}\Bigl(\int_0^t A(\tau)\, d\tau\Bigr)\Bigr]^{-1} B(t)\; T\text{-exp}\Bigl(\int_0^t A(\tau)\, d\tau\Bigr), $$
that is, C(t) is the conjugate of B(t) by the operator $T\text{-exp}\bigl(\int_0^t A(\tau)\, d\tau\bigr)$.
Proof. The proof is quite simple. For brevity, rewrite the desired formula as
$$ U_{A+B}(t) = U_A(t)\, U_C(t). $$
Both sides are equal to the identity operator for t = 0. We can differentiate both sides with respect to t, using the definition of a T-exponential (see Subsection 1.1). We obtain
$$ \frac{\partial}{\partial t}\bigl(U_{A+B}(t)\bigr) = \bigl(A(t) + B(t)\bigr)\, U_{A+B}(t) $$
and
$$ \frac{\partial}{\partial t}\bigl(U_A(t)\, U_C(t)\bigr) = A(t)\, U_A(t)\, U_C(t) + U_A(t)\, C(t)\, U_C(t) = \bigl(A(t) + B(t)\bigr)\, U_A(t)\, U_C(t), $$
since
$$ C(t) = U_A(t)^{-1} B(t)\, U_A(t). $$
This implies immediately that $U_{A+B}(t) = U_A(t)\, U_C(t)$ for all t. □

I. Elementary Notions of Noncommutative Analysis
We choose A(t) and B(t) to be constant families of operators, $A(t) \equiv A$ and $B(t) \equiv \varepsilon B$, and obtain the formula
$$ e^{A + \varepsilon B} = e^A\; T\text{-exp}\Bigl(\varepsilon \int_0^1 C(t)\, dt\Bigr), $$
where
$$ C(t) = e^{-tA} B\, e^{tA}. $$
As in Subsection 1.1, we rewrite the T-exponential using Feynman indices and expand it into the Taylor series in powers of ε:
$$ T\text{-exp}\Bigl(\varepsilon \int_0^1 C(\tau)\, d\tau\Bigr) = \exp\Bigl(\varepsilon \int_0^1 \overset{\tau}{C}(\tau)\, d\tau\Bigr) = 1 + \varepsilon \int_0^1 \overset{\tau}{C}(\tau)\, d\tau + \frac{\varepsilon^2}{2} \Bigl(\int_0^1 \overset{\tau}{C}(\tau)\, d\tau\Bigr)^2 + \cdots. $$
The coefficient of ε on the right-hand side is equal to
$$ \int_0^1 C(\tau)\, d\tau = \int_0^1 e^{-\tau A} B\, e^{\tau A}\, d\tau $$
(since in the coefficient of ε all the C(τ) enter additively and are not multiplied by one another, their Feynman indices play no role and can be omitted). Let us calculate the coefficient of ε². We have
$$ \frac{1}{2} \Bigl(\int_0^1 \overset{\tau}{C}(\tau)\, d\tau\Bigr)^2 = \frac{1}{2} \int_0^1 \overset{\tau}{C}(\tau)\, d\tau \int_0^1 \overset{\theta}{C}(\theta)\, d\theta = \frac{1}{2} \int_0^1\!\!\int_0^1 \overset{\tau}{C}(\tau)\, \overset{\theta}{C}(\theta)\, d\theta\, d\tau. $$
The order of the factors in the integrand is governed by the sign of the difference τ − θ: we have
$$ \overset{\tau}{C}(\tau)\, \overset{\theta}{C}(\theta) = C(\tau)\, C(\theta) \quad \text{for } \tau > \theta, \qquad \overset{\tau}{C}(\tau)\, \overset{\theta}{C}(\theta) = C(\theta)\, C(\tau) \quad \text{for } \tau < \theta, $$
that is, the factor with the smaller argument always stands to the right of the other factor. We divide the integration square [0, 1] × [0, 1] into the two triangles τ > θ and τ < θ and note that both of them give the same contribution to the integral. We retain only one of these triangles. Then the factor 1/2 cancels out, and we obtain the following expression for the coefficient of ε²:
$$ \int_0^1\!\!\int_0^{\tau} C(\tau)\, C(\theta)\, d\theta\, d\tau = \int_0^1\!\!\int_0^{\tau} e^{-\tau A} B\, e^{(\tau - \theta)A} B\, e^{\theta A}\, d\theta\, d\tau. $$
Finally, we find that
$$ e^{A + \varepsilon B} = e^A + \varepsilon \int_0^1 e^{(1-\tau)A} B\, e^{\tau A}\, d\tau + \varepsilon^2 \int_0^1\!\!\int_0^{\tau} e^{(1-\tau)A} B\, e^{(\tau - \theta)A} B\, e^{\theta A}\, d\theta\, d\tau + \cdots. $$
Thus, we have calculated the coefficients P₁ and P₂ of the expansion (I.6) for $e^{A + \varepsilon B}$. Using Feynman indices again, we can write
$$ P_1 = \overset{2}{B} \int_0^1 e^{(1-\tau)\overset{3}{A} + \tau \overset{1}{A}}\, d\tau, \qquad P_2 = \overset{4}{B}\, \overset{2}{B} \int_0^1\!\!\int_0^{\tau} e^{(1-\tau)\overset{5}{A} + (\tau - \theta)\overset{3}{A} + \theta \overset{1}{A}}\, d\theta\, d\tau. $$
It is easy to predict the formulas for P₃, P₄, etc., but we leave that to the reader. We remark that the choice of the two examples above is not accidental. Actually, the exponential $e^{At}$ plays an important role in the definition of functions of noncommuting operators (see Subsection 2.4 below) for tempered symbols. Similarly, as is well known, the resolvent $R_A(\lambda)$ of the operator A can be used to define f(A) for symbols f(λ) analytic in a neighborhood of the spectrum of the operator A. These examples are also of use when considering Weyl quantization (see Subsection 2.6 of this chapter).
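The first-order coefficient can be confirmed numerically by comparing a quadrature approximation of P₁ = ∫₀¹ e^{(1−τ)A} B e^{τA} dτ with the exact exponential; the remainder is then O(ε²). This is a sketch of ours; the series-based `expm` helper and the trapezoidal quadrature are illustrative choices, not the book's:

```python
import numpy as np

def expm(M, terms=40):
    # matrix exponential via its power series (adequate for small matrices)
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

rng = np.random.default_rng(2)
A = 0.2 * rng.standard_normal((3, 3))
B = 0.2 * rng.standard_normal((3, 3))
eps = 1e-2

# P1 = int_0^1 exp((1-tau)A) B exp(tau*A) dtau, composite trapezoidal rule
taus = np.linspace(0.0, 1.0, 201)
vals = np.array([expm((1 - t) * A) @ B @ expm(t * A) for t in taus])
h = taus[1] - taus[0]
P1 = (vals[:-1] + vals[1:]).sum(axis=0) * h / 2

remainder = expm(A + eps * B) - expm(A) - eps * P1
assert np.linalg.norm(remainder) < 10 * eps**2   # the remainder is O(eps^2)
```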
1.5 Multiplication Law in Lie Groups

The extraction formula can be read from right to left:
$$ T\text{-exp}\Bigl(\int_0^t A(\tau)\, d\tau\Bigr)\; T\text{-exp}\Bigl(\int_0^t B(\tau)\, d\tau\Bigr) = T\text{-exp}\Bigl(\int_0^t C(\tau)\, d\tau\Bigr), $$
where
$$ C(t) = A(t) + T\text{-exp}\Bigl(\int_0^t A(\tau)\, d\tau\Bigr)\, B(t)\, \Bigl[T\text{-exp}\Bigl(\int_0^t A(\tau)\, d\tau\Bigr)\Bigr]^{-1} $$
(we have changed the notation slightly so as to show some respect for the alphabetic ordering). In this form it has the following meaning: the product of two T-exponentials is again a T-exponential. Furthermore, if
$$ [A(t), B(t')] = A(t)\, B(t') - B(t')\, A(t) = 0 $$
for all t and t', then the exponent C(t) has the form C(t) = A(t) + B(t) (indeed, in this case we can permute B(t) and $T\text{-exp}\bigl(\int_0^t A(\tau)\, d\tau\bigr)$, which follows easily by considering the passage to the limit (see Subsection 1.1) or the corresponding Cauchy problem). Now let A(t) and B(t) be constant families of operators, $A(t) \equiv A$ and $B(t) \equiv B$. Then for t = 1 we obtain
$$ e^A e^B = T\text{-exp}\Bigl(\int_0^1 C(t)\, dt\Bigr), $$
where
$$ C(t) = A + e^{tA} B\, e^{-tA}. $$
Thus, the product of two "common" exponentials of operators is represented as a T-exponential. There arises the natural question whether this product can be represented as a "common" exponential, that is to say, whether one can find an operator C such that
$$ e^{\overset{2}{A} + \overset{1}{B}} = e^C. \tag{I.7} $$
This is an important problem in the theory of Lie groups. It was partially solved already in the XIX century (see [19]). Namely, it was shown that the element C can be expressed as the series
$$ C = A + B + \frac{1}{2}[A, B] + \cdots, $$
convergent for small ‖A‖ and ‖B‖, where the dots stand for the sum of commutators of the operators A and B of order ≥ 2 (the order of a commutator is the number of left (or right) commutation brackets it contains). Let A and B be elements of some matrix Lie algebra; then $e^A$ and $e^B$ are elements of the corresponding local Lie group realized by matrices, and the above relations mean that the multiplication law in a Lie group is uniquely determined by the commutation operation in its Lie algebra. This is just the statement of the famous Campbell-Hausdorff theorem. An explicit expression for all terms of the series was found about half a century later by E. B. Dynkin [43], who showed that
$$ C = \sum_{m=1}^{\infty} \frac{(-1)^{m-1}}{m} \sum_{\substack{k_i + l_i \ge 1 \\ k_i,\, l_i \ge 0}} \frac{[A^{k_1}, B^{l_1}, \ldots, A^{k_m}, B^{l_m}]}{k_1!\, l_1! \cdots k_m!\, l_m!}, $$
where
$$ [S_1, \ldots, S_N] \overset{\mathrm{def}}{=} \frac{1}{N}\, [S_1, [S_2, \ldots, [S_{N-1}, S_N] \ldots ]] $$
(here $A^{k_1}, B^{l_1}, \ldots$ in the multiple bracket stands for the sequence of arguments in which A is repeated $k_1$ times, then B is repeated $l_1$ times, and so on).
In Section 4 we obtain in an elementary way, in the framework of noncommutative analysis, a closed formula expressing the operator C via A, B, and the commutation operation.

Let us use the considered problem to make a comment on the notation used. We seek an element C such that "C = ln D, where $D = e^{\overset{2}{A} + \overset{1}{B}}$". It would be desirable to convert the phrase in quotes into a formula. Unfortunately, we cannot write
$$ C = \ln\bigl(e^{\overset{2}{A} + \overset{1}{B}}\bigr), $$
since this means the following: "take the function $f(x, y) = \ln(e^{x+y})$ and substitute $x \mapsto \overset{2}{A}$, $y \mapsto \overset{1}{B}$ into it". Taking into account that $\ln(e^{x+y}) = x + y$, we would obtain C = A + B. In order to solve the arising problem we extend our notation by introducing the so-called autonomous brackets [[ ]]. We write
$$ C = \ln\Bigl(\bigl[\bigl[\, e^{\overset{2}{A} + \overset{1}{B}}\, \bigr]\bigr]\Bigr); $$
the meaning of this notation is as follows: "calculate the operator in the brackets [[ ]] and forget about the Feynman indices occurring inside [[ ]]; instead, use the resultant operator in the subsequent computations as an indivisible entity". The introduction of autonomous brackets is motivated by the simple observation that operator expressions involve, in addition to the traditional arithmetic operations and function evaluation, only one new operation, namely, the substitution of operators (equipped with Feynman indices) for numerical arguments. And we do not have any convenient means to denote recursive application of this operation. Recall that the order of computation in arithmetic expressions is governed by brackets. We just introduce here a new type of brackets, responsible for substitution of operators.
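The distinction is visible numerically: taking the logarithm of the actual operator $e^A e^B$ (the autonomous-bracket reading) produces A + B plus commutator corrections, not the naive A + B. A sketch of ours, with hand-rolled series `expm`/`logm` helpers valid only for small norms:

```python
import numpy as np

def expm(M, terms=30):
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def logm(M, terms=60):
    # ln(M) = sum_{k>=1} (-1)^{k+1} (M - I)^k / k, convergent for ||M - I|| < 1
    X = M - np.eye(len(M))
    out, p = np.zeros_like(M), np.eye(len(M))
    for k in range(1, terms):
        p = p @ X
        out = out + (-1) ** (k + 1) * p / k
    return out

rng = np.random.default_rng(3)
A = 0.01 * rng.standard_normal((3, 3))
B = 0.01 * rng.standard_normal((3, 3))

C = logm(expm(A) @ expm(B))          # C = ln([[e^A e^B]])
comm = A @ B - B @ A
# the naive symbol identity ln(e^{x+y}) = x + y fails for operators:
assert not np.allclose(C, A + B, atol=1e-6)
# the Campbell-Hausdorff series matches through third-order commutators
bch = A + B + comm / 2 + (A @ comm - comm @ A) / 12 - (B @ comm - comm @ B) / 12
assert np.linalg.norm(C - bch) < 1e-5
```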
1.6 Eigenfunctions and Eigenvalues of the Quantum Oscillator

Let us now use a well-known problem of quantum mechanics to show how the above notation works. Let $\hat H$ be the energy operator of the one-dimensional quantum oscillator,
$$ \hat H = \frac{1}{2}\Bigl[\Bigl(-i\hbar\frac{\partial}{\partial x}\Bigr)^2 + \omega^2 x^2\Bigr], $$
where ℏ is the Planck constant, ω > 0 is the frequency of the oscillator, and x ∈ ℝ. Consider the eigenvalue problem for $\hat H$ in the space $L^2(\mathbb{R}^1)$:
$$ \hat H \Psi(x) = E\, \Psi(x), \qquad \Psi \in L^2(\mathbb{R}^1). $$
One should find the values of E for which this equation has nontrivial (nonzero) solutions Ψ. It is easy to check that $\hat H$ can be represented in the form
$$ \hat H = \hbar\omega\, (a^+ a^- + 1/2), $$
where
$$ a^{\pm} = (2\hbar\omega)^{-1/2}\Bigl(-i\hbar\frac{\partial}{\partial x} \pm i\omega x\Bigr) $$
are the creation and annihilation operators (it is not mere chance that this term coincides with the one already introduced in Subsection 1.2; this is related to the decomposition of a free electromagnetic field into oscillators, frequently used in quantum theory). Thus, our problem is reduced to the eigenvalue problem for the operator $a^+ a^- = \overset{2}{a^+}\, \overset{1}{a^-}$:
$$ \overset{2}{a^+}\, \overset{1}{a^-}\, \Psi(x) = \lambda\, \Psi(x). $$
We seek the solution in the form
$$ \Psi(x) = G(\overset{2}{a^+}, \overset{1}{a^-})\, v(x), \tag{I.8} $$
where v(x) is an arbitrary function and G(ξ, y) is an unknown symbol to be defined. It suffices to require that the product of the operators $\overset{2}{a^+}\, \overset{1}{a^-}$ and $G(\overset{2}{a^+}, \overset{1}{a^-})$ be equal to $\lambda\, G(\overset{2}{a^+}, \overset{1}{a^-})$. Using the introduced notation, we can rewrite the cited product in any of the following forms:
$$ \bigl[\bigl[\, \overset{2}{a^+}\, \overset{1}{a^-}\, \bigr]\bigr]\, \bigl[\bigl[\, G(\overset{2}{a^+}, \overset{1}{a^-})\, \bigr]\bigr] = \overset{4}{a^+}\, \overset{3}{a^-}\; G(\overset{2}{a^+}, \overset{1}{a^-}) = \overset{3}{a^+}\, \overset{2}{a^-}\; \overset{1}{\bigl[\bigl[}\, G(\overset{2}{a^+}, \overset{1}{a^-})\, \bigr]\bigr] = B A, $$
where $B = \overset{2}{a^+}\, \overset{1}{a^-}$ and $A = G(\overset{2}{a^+}, \overset{1}{a^-})$. (The index over the left autonomous bracket in the last case is the Feynman index to be assigned to the operator obtained by computing the expression inside the autonomous brackets.) We obtain the equation
$$ \overset{2}{a^+}\, \overset{1}{a^-}\, \bigl[\bigl[\, G(\overset{2}{a^+}, \overset{1}{a^-})\, \bigr]\bigr] = \lambda\, G(\overset{2}{a^+}, \overset{1}{a^-}) \tag{I.9} $$
for the operator $G(\overset{2}{a^+}, \overset{1}{a^-})$. Our plan is to reduce this equation to an equation with respect to the symbol G(ξ, y). To this end, we should represent the left-hand side as a function of $\overset{2}{a^+}$ and $\overset{1}{a^-}$. Were a⁺ and a⁻ arbitrary operators, such a representation would be impossible. However, a⁺ and a⁻ satisfy the commutation relation
$$ [a^+, a^-] \overset{\mathrm{def}}{=} a^+ a^- - a^- a^+ = -1, $$
which enables us to compute the symbol of the product on the left-hand side of (I.9). Compute first the product $W_- = a^-\, \bigl[\bigl[\, G(\overset{2}{a^+}, \overset{1}{a^-})\, \bigr]\bigr]$. Here we apply a trick standard in "operator arithmetic". Clearly, we have
$$ W_- = \overset{3}{a^-}\, G(\overset{2}{a^+}, \overset{1}{a^-}), $$
and the problem is to permute the factors so as to move a⁻ to the last place and thereby identify the arguments $\overset{3}{a^-}$ and $\overset{1}{a^-}$. Note that if G(ξ, y) were a linear function of ξ, this problem could be solved immediately by applying the commutation relation, which can be rewritten in the form
$$ \overset{3}{a^-}\, \bigl(\overset{2}{a^+} - \overset{4}{a^+}\bigr) = 1. $$
If G(ξ, y) is a polynomial in ξ, one could apply induction on the degree of G(ξ, y). In fact, we can avoid this cumbersome procedure and obtain the result for general symbols as follows:
$$ W_- = \overset{3}{a^-}\, G(\overset{4}{a^+}, \overset{1}{a^-}) + \overset{3}{a^-}\, \bigl(G(\overset{2}{a^+}, \overset{1}{a^-}) - G(\overset{4}{a^+}, \overset{1}{a^-})\bigr) = \overset{3}{a^-}\, G(\overset{4}{a^+}, \overset{1}{a^-}) + \overset{3}{a^-}\, \bigl(\overset{2}{a^+} - \overset{4}{a^+}\bigr)\, \frac{\delta G}{\delta \xi}\bigl(\overset{2}{a^+}, \overset{4}{a^+}, \overset{1}{a^-}\bigr), $$
where
$$ \frac{\delta G}{\delta \xi}(\xi, \eta, y) = \frac{G(\xi, y) - G(\eta, y)}{\xi - \eta} $$
is the difference derivative of G(ξ, y) with respect to ξ. The first term is already in the desired form, and we transform the second one by changing the Feynman indices:
$$ \overset{3}{a^-}\, \bigl(\overset{2}{a^+} - \overset{4}{a^+}\bigr)\, \frac{\delta G}{\delta \xi}\bigl(\overset{2}{a^+}, \overset{4}{a^+}, \overset{1}{a^-}\bigr) = \overset{3}{a^-}\, \bigl(\overset{2}{a^+} - \overset{4}{a^+}\bigr)\, \frac{\delta G}{\delta \xi}\bigl(\overset{5}{a^+}, \overset{5}{a^+}, \overset{0}{a^-}\bigr) = \Bigl[\Bigl[\, \overset{3}{a^-}\, \bigl(\overset{2}{a^+} - \overset{4}{a^+}\bigr)\, \Bigr]\Bigr]\, \frac{\delta G}{\delta \xi}\bigl(\overset{5}{a^+}, \overset{5}{a^+}, \overset{0}{a^-}\bigr) $$
(we can insert the autonomous brackets since none of the operators outside them has a Feynman index in the interval [2, 4]). The commutation relation says that the operator in the autonomous brackets is equal to 1, and the last expression takes the form
$$ \frac{\delta G}{\delta \xi}\bigl(\overset{5}{a^+}, \overset{5}{a^+}, \overset{0}{a^-}\bigr) = \frac{\delta G}{\delta \xi}\bigl(\overset{2}{a^+}, \overset{2}{a^+}, \overset{1}{a^-}\bigr) = \frac{\partial G}{\partial \xi}\bigl(\overset{2}{a^+}, \overset{1}{a^-}\bigr) $$
(the difference derivative becomes the usual derivative on the diagonal ξ = η). Finally, we obtain
$$ W_- = H(\overset{2}{a^+}, \overset{1}{a^-}), $$
where
$$ H(\xi, y) = y\, G(\xi, y) + \frac{\partial G}{\partial \xi}(\xi, y). $$
We can now compute the product
$$ W_+ = \overset{2}{a^+}\, \overset{1}{a^-}\, \bigl[\bigl[\, G(\overset{2}{a^+}, \overset{1}{a^-})\, \bigr]\bigr] = a^+ W_- = a^+\, \bigl[\bigl[\, H(\overset{2}{a^+}, \overset{1}{a^-})\, \bigr]\bigr]. $$
In fact, this is trivial:
$$ W_+ = \overset{3}{a^+}\, H(\overset{2}{a^+}, \overset{1}{a^-}) = \overset{2}{a^+}\, H(\overset{2}{a^+}, \overset{1}{a^-}) = H_+(\overset{2}{a^+}, \overset{1}{a^-}), $$
where
$$ H_+(\xi, y) = \xi\, H(\xi, y). $$
(The change of Feynman indices is valid, since their order over noncommuting operators was preserved.) Let us substitute the result of our computation into the left-hand side of (I.9) and equate the symbols of the operators on both sides of the resulting equation. For G(ξ, y) we obtain the equation
$$ \xi \Bigl(y + \frac{\partial}{\partial \xi}\Bigr) G(\xi, y) = \lambda\, G(\xi, y). $$
The remaining part of the solution is, in fact, purely technical. We have obtained an ordinary differential equation, whose general solution has the form
$$ G(\xi, y) = e^{-\xi y}\, \xi^{\lambda}\, c(y), $$
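That this symbol solves the equation is easy to confirm by a quick finite-difference check. This is a numerical sketch of ours, with the arbitrary choices c(y) ≡ 1 and λ = 2:

```python
import numpy as np

lam = 2.0
G = lambda xi, y: np.exp(-xi * y) * xi ** lam     # G(xi, y) = e^{-xi*y} xi^lam, c(y) = 1

xi, y, h = 1.3, 0.7, 1e-6
dG_dxi = (G(xi + h, y) - G(xi - h, y)) / (2 * h)  # central difference in xi
lhs = xi * (y * G(xi, y) + dG_dxi)                # xi*(y + d/dxi) G
rhs = lam * G(xi, y)
assert abs(lhs - rhs) < 1e-6
```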
where c(y) is an arbitrary function. We substitute $\overset{2}{a^+}$ for ξ and $\overset{1}{a^-}$ for y into G(ξ, y) and substitute the resulting operator into (I.8):
$$ \Psi(x) = e^{-\overset{2}{a^+} \overset{1}{a^-}}\, \bigl(\overset{2}{a^+}\bigr)^{\lambda}\, c(\overset{1}{a^-})\, v(x). $$
Extracting $(a^+)^{\lambda}$, we obtain
$$ \Psi(x) = (a^+)^{\lambda}\, \varphi(x), $$
where
$$ \varphi(x) = e^{-\overset{2}{a^+} \overset{1}{a^-}}\, c(\overset{1}{a^-})\, v(x). $$
Although v(x) is an arbitrary function, the same is not true of φ(x). In fact, we intend to show that the operator $e^{-\overset{2}{a^+} \overset{1}{a^-}}\, c(\overset{1}{a^-})$ is one-dimensional³. Indeed, apply a⁻ to φ(x):
$$ \varphi_-(x) = a^-\, \bigl[\bigl[\, e^{-\overset{2}{a^+} \overset{1}{a^-}}\, c(\overset{1}{a^-})\, \bigr]\bigr]\, v(x). $$
We compute the operator on the right-hand side in the same way as W₋ and obtain
$$ \varphi_-(x) = f(\overset{2}{a^+}, \overset{1}{a^-})\, v(x), $$
where
$$ f(\xi, y) = y\, e^{-\xi y} c(y) + \frac{\partial}{\partial \xi}\bigl(e^{-\xi y} c(y)\bigr) = y\, e^{-\xi y} c(y) - y\, e^{-\xi y} c(y) \equiv 0. $$
Hence,
$$ a^- \varphi(x) = (2\hbar\omega)^{-1/2}\Bigl(-i\hbar\frac{\partial}{\partial x} - i\omega x\Bigr)\varphi(x) = 0, $$
so that
$$ \varphi(x) = \mathrm{const} \cdot e^{-\omega x^2 / 2\hbar}. $$
This function decays, together with all its derivatives, more rapidly than any polynomial, so for each nonnegative integer λ = n the function Ψ(x) lies in $L^2(\mathbb{R}^1)$. Thus the function
$$ \Psi_n(x) = (a^+)^n\, e^{-\omega x^2 / 2\hbar} = (2\hbar\omega)^{-n/2}\Bigl(-i\hbar\frac{\partial}{\partial x} + i\omega x\Bigr)^n e^{-\omega x^2 / 2\hbar} $$
is an eigenfunction of $a^+ a^-$, and hence of $\hat H$, and the corresponding eigenvalue is
$$ E_n = \omega\hbar\, (n + 1/2). $$
This is a classical result in quantum mechanics.
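The spectrum ωℏ(n + 1/2) can be reproduced by direct numerical diagonalization of the discretized oscillator Hamiltonian. A sketch of ours, with ℏ = ω = 1 and second-order finite differences on a truncated interval:

```python
import numpy as np

# discretize H = (1/2)(-d^2/dx^2 + x^2), hbar = omega = 1, on [-8, 8]
N = 1000
x = np.linspace(-8.0, 8.0, N)
h = x[1] - x[0]

# second-order finite-difference Laplacian (Dirichlet boundary conditions)
lap = (np.diag(np.full(N, -2.0)) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)) / h**2
H = -0.5 * lap + 0.5 * np.diag(x**2)

eigvals = np.linalg.eigvalsh(H)[:4]
# lowest eigenvalues approximate E_n = n + 1/2
assert np.allclose(eigvals, [0.5, 1.5, 2.5, 3.5], atol=5e-3)
```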
Remark 1.1 How does one prove that there are no other eigenvalues? The simplest way is to prove that the system of functions obtained is complete in $L^2(\mathbb{R}^1)$; however, one can also prove directly that the function $(a^+)^{\lambda}\varphi(x)$ does not lie in $L^2(\mathbb{R}^1)$ for noninteger λ.

³ That is, its range is one-dimensional.
1.7 T-Exponentials, Trotter Formulas, and Path Integrals

Now that we are acquainted with autonomous brackets, let us return to the T-exponentials introduced in Subsection 1.1. Consider the Schrödinger equation
$$ -i\frac{\partial \psi}{\partial t} + H\Bigl(\overset{2}{x}, -i\overset{1}{\frac{\partial}{\partial x}}, t\Bigr)\psi = 0, \qquad x \in \mathbb{R}^n, \quad t \in \mathbb{R}^1, $$
with time-dependent energy operator (we assume a system of units in which Planck's constant ℏ equals 1). Its solutions can be expressed via the T-exponential,
$$ \psi(x, t) = \exp\Bigl(-i \int_0^t \overset{\tau}{\Bigl[\Bigl[}\, H\Bigl(\overset{2}{x}, -i\overset{1}{\frac{\partial}{\partial x}}, \tau\Bigr)\, \Bigr]\Bigr]\, d\tau\Bigr)\, \psi_0(x), $$
where ψ₀(x) is the initial value. The integrand in the exponent contains autonomous brackets. It would be interesting to remove them, so let us try it (as we shall see, the result is quite intriguing). To this end, consider first the T-exponential
$$ F(t) = \exp\Bigl(\int_0^t \overset{\tau}{\Bigl[\Bigl[}\, f(\overset{1}{A}, \overset{2}{B}, \tau)\, \Bigr]\Bigr]\, d\tau\Bigr), $$
where A and B are bounded operators (e.g., matrices). By definition,
$$ F(t) = \lim_{\substack{N \to \infty \\ \Delta t = \max_i \Delta t_i \to 0}} \exp\Bigl(\Bigl[\Bigl[\, f(\overset{1}{A}, \overset{2}{B}, \tau_N)\, \Bigr]\Bigr]\, \Delta t_N\Bigr) \cdots \exp\Bigl(\Bigl[\Bigl[\, f(\overset{1}{A}, \overset{2}{B}, \tau_1)\, \Bigr]\Bigr]\, \Delta t_1\Bigr), $$
where $0 = t_0 < t_1 < \cdots < t_N = t$, $\Delta t_i = t_i - t_{i-1}$, and $\tau_i \in [t_{i-1}, t_i]$. However, we have
$$ \exp\Bigl(\Bigl[\Bigl[\, f(\overset{1}{A}, \overset{2}{B}, \tau_i)\, \Bigr]\Bigr]\, \Delta t_i\Bigr) = 1 + f(\overset{1}{A}, \overset{2}{B}, \tau_i)\, \Delta t_i + O(\Delta t^2) = \exp\bigl(f(\overset{1}{A}, \overset{2}{B}, \tau_i)\, \Delta t_i\bigr) + O(\Delta t^2). $$
The remainder terms $O(\Delta t^2)$ result in $O(\Delta t)$ in the overall product, which vanishes as $\Delta t \to 0$. Hence the autonomous brackets can be removed,
$$ \exp\Bigl(\int_0^t \overset{\tau}{\Bigl[\Bigl[}\, f(\overset{1}{A}, \overset{2}{B}, \tau)\, \Bigr]\Bigr]\, d\tau\Bigr) = \lim_{\substack{N \to \infty \\ \Delta t = \max_i \Delta t_i \to 0}} \exp\bigl(f(\overset{2N-1}{A}, \overset{2N}{B}, \tau_N)\, \Delta t_N\bigr) \cdots \exp\bigl(f(\overset{1}{A}, \overset{2}{B}, \tau_1)\, \Delta t_1\bigr). $$
It is natural to denote the limit on the right-hand side of the last equation by
$$ \exp\Bigl(\int_0^t f(\overset{\tau}{A}, \overset{\tau+0}{B}, \tau)\, d\tau\Bigr), $$
that is, we have
$$ \exp\Bigl(\int_0^t \overset{\tau}{\Bigl[\Bigl[}\, f(\overset{1}{A}, \overset{2}{B}, \tau)\, \Bigr]\Bigr]\, d\tau\Bigr) = \exp\Bigl(\int_0^t f(\overset{\tau}{A}, \overset{\tau+0}{B}, \tau)\, d\tau\Bigr), $$
where the "+0" over B stands there to indicate that at each time τ the operator B acts after the operator A. Hence, the autonomous brackets can safely (and almost without trace) be removed in T-exponentials. The $O(\Delta t^2)$ argument is no longer usable with functions of unbounded operators. However, under appropriate functional-analytic conditions (not to be discussed here) the conclusion remains the same. Consider the simplest case, in which
$$ f(x, y, t) = x + y. $$
Then
$$ \exp\Bigl(\int_0^t \Bigl[\Bigl[\, f(\overset{1}{A}, \overset{2}{B}, \tau)\, \Bigr]\Bigr]\, d\tau\Bigr) = e^{\bigl[\bigl[\overset{1}{A} + \overset{2}{B}\bigr]\bigr]\, t} = e^{(A+B)t}, $$
and, by setting $t_i = it/N$, $i = 0, \ldots, N$, we obtain the Trotter product formula [180]:
$$ e^{(A+B)t} = \lim_{N \to \infty} \underbrace{e^{(t/N)B}\, e^{(t/N)A} \cdots e^{(t/N)B}\, e^{(t/N)A}}_{N \text{ pairs of factors}}. $$
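The Trotter limit can be observed numerically: the product error decays roughly like 1/N. A sketch of ours (the series `expm` helper is an illustrative choice):

```python
import numpy as np

def expm(M, terms=40):
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def trotter(A, B, t, N):
    # N pairs of factors e^{(t/N)B} e^{(t/N)A}
    step = expm((t / N) * B) @ expm((t / N) * A)
    out = np.eye(len(A))
    for _ in range(N):
        out = out @ step
    return out

rng = np.random.default_rng(4)
A = 0.5 * rng.standard_normal((3, 3))
B = 0.5 * rng.standard_normal((3, 3))
t = 1.0
exact = expm(t * (A + B))

err25, err100, err400 = (np.linalg.norm(trotter(A, B, t, N) - exact) for N in (25, 100, 400))
assert err400 < err100 < err25          # convergence as N grows
assert err100 / err400 > 2.5            # roughly first order: error ~ 1/N
assert err400 < 0.05
```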
We see that the rule of removing autonomous brackets in T-exponentials is merely a generalization of the Trotter formula. Let us return to the T-exponential solution of the Schrödinger equation. According to our reasoning, we get
$$ \psi(x, t) = \exp\Bigl(-i \int_0^t H\Bigl(\overset{\tau+0}{x}, -i\overset{\tau}{\frac{\partial}{\partial x}}, \tau\Bigr)\, d\tau\Bigr)\, \psi_0(x), $$
or
$$ \psi(x, t) = \lim_{\substack{N \to \infty \\ \Delta t = \max_i \Delta t_i \to 0}} \Bigl\{ \prod_{k=1}^{N} \exp\Bigl(-i H\Bigl(\overset{2k}{x}, -i\overset{2k-1}{\frac{\partial}{\partial x}}, \tau_k\Bigr)\, \Delta t_k\Bigr) \Bigr\}\, \psi_0(x). $$
The operators included in the product can be expressed by the formula
$$ \exp\Bigl(-i H\Bigl(\overset{2k}{x}, -i\overset{2k-1}{\frac{\partial}{\partial x}}, \tau_k\Bigr)\, \Delta t_k\Bigr) u(x) = \frac{1}{(2\pi)^n} \int_{\mathbb{R}^{2n}} e^{ip(x-y)}\, e^{-iH(x, p, \tau_k)\, \Delta t_k}\, u(y)\, dy\, dp $$
(this is one of the usual representations of a Fourier integral operator, see Subsection 1.3). Substituting this expression into the preceding equation, we obtain
$$ \psi(x, t) = \lim_{\substack{N \to \infty \\ \Delta t = \max_i \Delta t_i \to 0}} \int_{\mathbb{R}^{2Nn}} \exp\Bigl\{ i \sum_{k=0}^{N-1} \bigl[\, p_k (x_{k+1} - x_k) - H(x_{k+1}, p_k, \tau_k)\, \Delta t_k \,\bigr] \Bigr\}\, \psi_0(x_0) \prod_{k=0}^{N-1} dx_k\, dp_k, $$
where $x_k$ and $p_k$ are n-vectors, the expression $p_k(x_{k+1} - x_k)$ is the inner product of $p_k$ by $(x_{k+1} - x_k)$, and $x_N = x$. Let us interpret $(x_k, p_k)$ as the point at time $t = t_k$ of some trajectory $(q(t), p(t))$ in the phase space. Then, as $\Delta t_k \to 0$, we have
$$ x_{k+1} - x_k = \dot q(\tau_k)\, \Delta t_k + o(\Delta t_k), $$
and the expression in the exponent can be considered as a Riemann sum for the action integral
$$ \int_0^t \bigl[\, p(\tau)\, \dot q(\tau) - H(q(\tau), p(\tau), \tau) \,\bigr]\, d\tau. $$
Hence, the limit on the right-hand side of the obtained equation is denoted by
$$ \psi(x, t) = \int\!\!\int [dq] \Bigl[\frac{dp}{2\pi}\Bigr] \exp\Bigl\{ i \int_0^t \bigl[\, p(\tau)\, dq(\tau) - H(q(\tau), p(\tau), \tau)\, d\tau \,\bigr] \Bigr\}\, \psi_0(x_0)\, dx_0; $$
the inner integral is known as the Feynman path integral; it is taken over all phase trajectories $(p(\tau), q(\tau))_{\tau \in [0, t]}$ such that $q(0) = x_0$ and $q(t) = x$.

Thus, at the formal level, the path integral expression for the solution of the Schrödinger equation can be obtained directly from the T-exponential expression by removing the autonomous brackets.
2 Functions of Noncommuting Operators: the Construction and Main Properties

In the preceding section we considered several examples of functions of noncommuting operators and described convenient notation (Feynman indices and autonomous brackets) permitting one to define the arrangement of operators in operator expressions. However, we were acting rather intuitively, having no definition of functions of noncommuting operators. The meaning of operator expressions is clear as long as the functions involved are reasonably simple (e.g., polynomials or exponentials). However, the examples considered in Section 1 suggest that, in view of applications, noncommutative analysis cannot be limited to elementary functions. One is forced to deal with expressions of the form $f(\overset{1}{A_1}, \ldots, \overset{n}{A_n})$, where A₁, ..., Aₙ are given operators and the symbol f(x₁, ..., xₙ) is a function of quite general form. Hence, let us consider what meaning can be assigned to such expressions and study their basic properties.
2.1 Motivations

We intend to learn how to solve the following problem. Suppose that some operators A₁, ..., Aₙ are given (that is, elements of an operator algebra 𝒜, noncommutative in general) and ℱₙ is some class of functions of n arguments x₁, ..., xₙ. Let f ∈ ℱₙ. Then, what is $f(\overset{1}{A_1}, \ldots, \overset{n}{A_n})$?

Let us first study the trivial case in which all the considered symbols are polynomials, that is, ℱₙ = ℂ[x₁, ..., xₙ] is the algebra of polynomials in n variables. Here the definition is evident: if
$$ f(x_1, \ldots, x_n) = \sum_{\alpha_1 + \cdots + \alpha_n \le m} c_{\alpha_1 \ldots \alpha_n}\, x_1^{\alpha_1} \cdots x_n^{\alpha_n} $$
is the polynomial, then the corresponding operator has the form
$$ f(\overset{1}{A_1}, \ldots, \overset{n}{A_n}) = \sum_{\alpha_1 + \cdots + \alpha_n \le m} c_{\alpha_1 \ldots \alpha_n}\, A_n^{\alpha_n} \cdots A_1^{\alpha_1} $$
(the ordering of the factors in each term on the right-hand side is determined by their Feynman indices: the larger the index, the further to the left the factor occurs). Thus, a mapping
$$ \mu_{\overset{1}{A_1}, \ldots, \overset{n}{A_n}} : \mathbb{C}[x_1, \ldots, x_n] \to \mathcal{A} $$
is determined that takes each symbol into the corresponding operator. Note that this mapping has the following three properties and is uniquely defined by these properties:
1°. The mapping $\mu_{\overset{1}{A_1}, \ldots, \overset{n}{A_n}}$ is linear.

2°. For n = 1 the product of symbols is taken into the product of the corresponding operators, so that μ_A is a homomorphism of algebras with identity element. Moreover, if f(x) = x, then μ_A(f) = A.

3°. If $f(x_1, \ldots, x_n) = f_1(x_1)\, f_2(x_2) \cdots f_n(x_n)$, then
$$ \mu_{\overset{1}{A_1}, \ldots, \overset{n}{A_n}}(f) = \mu_{A_n}(f_n) \cdots \mu_{A_2}(f_2)\, \mu_{A_1}(f_1). $$
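Properties 1° and 3° are easy to realize concretely for matrices. The following sketch of ours implements the polynomial symbol map for n = 2: each monomial $x_1^a x_2^b$ goes to $A_2^b A_1^a$, the operator with the larger Feynman index standing further to the left:

```python
import numpy as np

def mu(coeffs, A1, A2):
    """Apply a polynomial symbol sum c[(a, b)] * x1^a * x2^b to the
    Feynman tuple (A1 with index 1, A2 with index 2): each monomial
    x1^a x2^b goes to A2^b A1^a (larger index further to the left)."""
    n = len(A1)
    out = np.zeros((n, n))
    for (a, b), c in coeffs.items():
        out = out + c * np.linalg.matrix_power(A2, b) @ np.linalg.matrix_power(A1, a)
    return out

rng = np.random.default_rng(5)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))

# f(x, y) = (x + y)^2 = x^2 + 2xy + y^2, applied to (A with index 1, B with index 2)
f = {(2, 0): 1.0, (1, 1): 2.0, (0, 2): 1.0}
result = mu(f, A, B)
expected = A @ A + 2 * B @ A + B @ B     # every mixed monomial is ordered as B A
assert np.allclose(result, expected)
# for noncommuting A, B this differs from the unordered square (A + B)^2
assert not np.allclose(result, (A + B) @ (A + B))
```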
Now suppose that a larger symbol space ℱₙ ⊃ ℂ[x₁, ..., xₙ] is considered. Then it is natural to try to extend the mapping $\mu_{\overset{1}{A_1}, \ldots, \overset{n}{A_n}}$ from ℂ[x₁, ..., xₙ] to ℱₙ with properties 1°-3° being preserved and then, if it is possible, to set
$$ f(\overset{1}{A_1}, \ldots, \overset{n}{A_n}) \overset{\mathrm{def}}{=} \mu_{\overset{1}{A_1}, \ldots, \overset{n}{A_n}}(f). $$
This approach can lead to valuable results if the desired extension not only exists (its existence is merely a restriction on the class of admissible operators A₁, ..., Aₙ) but is unique. Indeed, in the absence of uniqueness the equation A = B apparently would not imply that f(A) = f(B), and the theory would appear to be stillborn. The uniqueness problem resides in two different places.

a) Uniqueness for n = 1. Condition 2° requires that μ_A be an algebra homomorphism¹
$$ \mu_A : \mathcal{F}_1 \to \mathcal{A}. $$
Hence ℱ₁ should be an algebra (with respect to pointwise multiplication of functions), and we arrive at the following question: Is the homomorphism μ_A uniquely determined by the condition μ_A(x) = A, i.e., by the value it takes on the function f(x) = x? (With polynomial symbols, the answer is evident.) It turns out (see Subsection 2.2) that, under quite general conditions, the answer is "yes" if we consider only continuous homomorphisms. From now on we tacitly assume that the notion of convergence is defined in the symbol spaces ℱₙ and the operator algebra 𝒜 and that all the mappings in question are continuous². We say that an element A ∈ 𝒜 is a generator if some homomorphism μ_A satisfying the above condition is fixed.

¹ Throughout the exposition all algebras considered contain the identity element, and homomorphisms are assumed to take 1 into 1.
² The detailed and rigorous analysis of all questions pertaining to admissible symbol and operator classes, convergence, estimates in function spaces, etc., is postponed until Chapter IV.
b) Uniqueness for n > 1. If the operators $A_j$, j = 1, ..., n, are generators, that is, the homomorphisms $\mu_{A_j}$, j = 1, ..., n, are defined and fixed, then conditions 1° and 3° define the action of the mapping $\mu_{\overset{1}{A_1}, \ldots, \overset{n}{A_n}}$ uniquely on functions f(x₁, ..., xₙ) representable as finite linear combinations
$$ f(x_1, \ldots, x_n) = \sum_{s=1}^{N} f_{s1}(x_1) \cdots f_{sn}(x_n), \qquad f_{sk} \in \mathcal{F}_1 \tag{I.10} $$
(such linear combinations comprise the so-called tensor product $\mathcal{F}_1 \otimes \cdots \otimes \mathcal{F}_1 = \mathcal{F}_1^{\otimes n}$ of n copies of the space ℱ₁). In contrast to the case of polynomial symbols, we cannot hope that the linear combinations (I.10) fill the entire space ℱₙ, i.e., that $\mathcal{F}_n = \mathcal{F}_1^{\otimes n}$. Instead, let us require that each symbol g ∈ ℱₙ can be approximated by such linear combinations. Then the mapping $\mu_{\overset{1}{A_1}, \ldots, \overset{n}{A_n}}$, which is uniquely defined on $\mathcal{F}_1^{\otimes n}$, uniquely extends to the entire space ℱₙ by continuity.

Prior to formal definitions, let us consider two simple examples.
Example 1.1 (Entire functions of bounded operators) Let 𝒜 be the algebra of bounded operators in a Hilbert space ℋ. For the space ℱₙ of n-ary symbols we take the space 𝒪(ℂⁿ) of entire analytic functions on ℂⁿ equipped with the topology of uniform convergence on compact subsets of ℂⁿ. Let A ∈ 𝒜 be an arbitrary operator. If
$$ f(x) = \sum_{k=0}^{\infty} f_k x^k \in \mathcal{F}_1 $$
is an arbitrary function, then the series $\sum_{k=0}^{\infty} f_k A^k$ clearly converges in the operator norm and defines a continuous operator in ℋ, which will be denoted by f(A). The mapping
$$ \mu_A : f \mapsto f(A) $$
is obviously continuous. Furthermore, f(A) can be obtained as the limit
$$ f(A) = \lim_{N \to \infty} \sum_{k=0}^{N} f_k A^k $$
of polynomials in A, which implies that μ_A is an algebra homomorphism uniquely defined by the condition μ_A(x) = A. Thus, each element of 𝒜 is a generator.

The definition of functions of several operators is now evident. By passing to the limit we easily show that for each function
$$ f(x_1, \ldots, x_n) = \sum_{|\alpha| = 0}^{\infty} f_{\alpha}\, x_1^{\alpha_1} \cdots x_n^{\alpha_n} \in \mathcal{F}_n $$
and any A₁, ..., Aₙ ∈ 𝒜 we have
$$ f(\overset{1}{A_1}, \ldots, \overset{n}{A_n}) = \sum_{|\alpha| = 0}^{\infty} f_{\alpha}\, A_n^{\alpha_n} \cdots A_1^{\alpha_1}, $$
and the series on the right-hand side converges in the operator norm.
Example 1.2 (Continuous functions of bounded self-adjoint operators) Let ℱₙ = C(ℝⁿ) be the algebra of bounded continuous functions on ℝⁿ with uniform convergence on compact subsets of ℝⁿ, and let 𝒜 be the same algebra as in Example 1.1. If A ∈ 𝒜 is a self-adjoint operator, then for each function f(x) ∈ ℱ₁ the operator
$$ f(A) \overset{\mathrm{def}}{=} \int_{-\infty}^{\infty} f(\lambda)\, dE_{\lambda}(A) $$
is well defined (see [42]); here $E_{\lambda}(A)$ is the spectral function of the operator A, and the Stieltjes integral on the right-hand side is in fact taken over a finite interval. The mapping
$$ \mu_A : f \mapsto f(A) $$
is continuous and coincides with the "natural" one on polynomial symbols; since by the Weierstrass theorem a continuous function on a compactum can be approximated by polynomials, it is clear that μ_A is an algebra homomorphism uniquely determined by the condition μ_A(x) = A. Hence A is a generator. A function of several Feynman-ordered operators can be defined as follows:
$$ f(\overset{1}{A_1}, \ldots, \overset{n}{A_n}) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f(\lambda_1, \ldots, \lambda_n)\, dE_{\lambda_n}(A_n) \cdots dE_{\lambda_1}(A_1) $$
(the integral on the right-hand side is an iterated Stieltjes integral).
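In finite dimensions the spectral integral reduces to applying f to the eigenvalues, and it must agree with the power-series definition of Example 1.1. A sketch of ours comparing the two constructions for f = exp and a symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(6)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                       # self-adjoint

# spectral definition: f(A) = sum_k f(lambda_k) P_k (a discrete Stieltjes integral)
lam, V = np.linalg.eigh(A)
f_spectral = V @ np.diag(np.exp(lam)) @ V.T

# power-series definition (Example 1.1): f(A) = sum_k A^k / k!
f_series, term = np.eye(4), np.eye(4)
for k in range(1, 60):
    term = term @ A / k
    f_series = f_series + term

assert np.allclose(f_spectral, f_series)
```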
2.2 The Definition and the Uniqueness Theorem

Let us now make the construction of the preceding subsection into a precise definition. Let 𝒜 be some algebra, whose elements will be referred to as operators (in applications of noncommutative analysis, as a rule, 𝒜 is an algebra of operators in some linear space; however, within the framework of the general theory it is convenient to pay no attention to this fact and assume that 𝒜 is merely an associative algebra with 1). Furthermore, let a space ℱ₁ of unary symbols be given. The elements of ℱ₁ are functions of a real (or complex) variable x.³ We assume that ℱ₁ is an algebra with respect to pointwise multiplication of functions and contains the function f(x) = x (and, consequently, all polynomials).

Definition 1.1 An operator A ∈ 𝒜 is said to be a generator if there exists a continuous homomorphism
$$ \mu_A : \mathcal{F}_1 \to \mathcal{A} $$
such that μ_A(x) = A. If A is a generator, then we set
$$ f(A) = \mu_A(f) $$
for each f ∈ ℱ₁.

The property of being a generator obviously depends on the choice of the symbol class ℱ₁, so we mention ℱ₁ explicitly, by saying that A is an ℱ₁-generator, whenever ambiguity is possible.

Since the uniqueness of μ_A has not yet been proved (see Theorem 1.1 below in this subsection), we temporarily assume where necessary that for all the operators involved the corresponding μ-homomorphism is chosen and fixed. Let ℱₙ be the space of n-ary symbols f(x₁, ..., xₙ). According to our strategy, the elements of the tensor product $\mathcal{F}_1^{\otimes n}$ should be dense in ℱₙ. We assume that ℱₙ is obtained from $\mathcal{F}_1^{\otimes n}$ by completion, that is to say, by adding the limits of Cauchy sequences of elements (I.10); we denote this by
$$ \mathcal{F}_n = \mathcal{F}_1^{\hat\otimes n} = \underbrace{\mathcal{F}_1 \hat\otimes \cdots \hat\otimes \mathcal{F}_1}_{n \text{ copies}} $$
(the hat over ⊗ denotes completion). Instead, we could impose the weaker condition that the embedding $\mathcal{F}_1^{\hat\otimes n} \hookrightarrow \mathcal{F}_n$ be continuous and dense. However, we would then have to require additionally that the mapping (I.11) be continuous as a mapping from ℱₙ to 𝒜, which holds automatically in our case. This would lead to slight modifications in the subsequent statements.

Let A₁, ..., Aₙ be generators. The mapping
$$ \mu_{\overset{1}{A_1}, \ldots, \overset{n}{A_n}} : \mathcal{F}_1^{\hat\otimes n} \to \mathcal{A}, \tag{I.11} $$
$$ f_1(x_1)\, f_2(x_2) \cdots f_n(x_n) \mapsto f_n(A_n) \cdots f_2(A_2)\, f_1(A_1), \tag{I.12} $$
extends by continuity to the entire space ℱₙ.

³ We do not assume that x ranges over the entire space ℝ or ℂ.
I. Elementary Notions of Noncommutative Analysis
28
Definition 1.2 We set n
1
f (ili,
• •• ,
An) = A
n 1 111,...,An
(f);
1 n a tuple A = (Ai , . .. , An ) of generators equipped with Feynman indices will be referred to as a Feynman tuple. in
il
Functions of a Feynman tuple (A1, . . . , An ) with arbitrary pairwise distinct Feynman indices ji , . . . , jn are defined in a similar way: the factors on the right-hand side of (I.12) should be arranged so that their Feynman indices form a descending sequence.
Remark 1.2 In order to avoid clumsy notation, we assume that all the operators A 1 , . . . , An are Fi -generators with the same symbol class Fi. In principle, one may well consider operator expressions involving generators with respect to various symbol classes. In that case .Fn would be the tensor product of various spaces of unary symbols rather that .T16 n , each space associated with the corresponding operator argument. In the sequel we often make use of a particular case of this situation in which some of the operators occur only linearly or polynomially in the expression considered. If so, the corresponding symbol classes may be chosen to consist of polynomials, which eliminates the necessity of checking any additional conditions (with polynomial symbol, each operator is a generator). Remark 1.3 The presented construction of functions of several operators suggests the following general way of proving this or that identity, assertion, etc. in noncommutative analysis: first, they should be established for decomposable symbols of the form f = fi 0 • • • 0 In and then extended by linearity to .F? n and by continuity to the entire -Fn, cf. the proof of Propositions I.1 and 1.2 given below. Our definitions do not allow coinciding Feynman indices over arguments of an operator expression (later on, when it will be shown that the ordering of Feynman indices over commuting operators does not affect the value of an operator expression, commuting operators will be allowed to bear the same Feynman indices). However, il
the formula f (A, A) is not erroneous. It means the following: one should restrict the function f (x , y) to the diagonal x = y by setting g(x) = f (x, x) and substitute the I operator A into g(x) instead of x. Thus, il
f (A, A) = g(A). The same interpretation is used if f has additional operator arguments, e.g. 112 def
12
f (A, A, B) = g(A, B),
2. Functions of Noncommuting Operators: the Construction and Main Properties
29
where g(x, z) = f (x , x, z). It turns out that in this situation the Feynman indices over an operator A can be moved apart, i.e., set different.
Proposition I.1 (Moving indices apart) i)The operator of restriction to the diagonal [tfl(x) = f (x, x). is a continuous operator from .F2 to Fi. ii) Let A E A be a generator For each symbol f (x, y) E .F2 we have 12
21
1
1
f (A, A) = f (A, A) = f (A, A) (note that the right-hand side is nothing other than [tf](A).) Proof Let f (x, y) E .F2 have the form f = g 0 h, that is, f (x, y) = g(x) h(y), so
that t(g 0 h) = gh.
Since .Fi is an algebra and multiplication in Fi is continuous, the operator t is continuous from .F1(i3:)' F1 = F2 to Fi. Moreover, since ptA is a homomorphism, we have 12
(g 0 h)(A, A) = h(A) g (A) = hg (A) = c[g,® h](A), 2 1 and similarly for (g 0 h)(A, A). Thus the identity in ii) holds on F1 0 .F1 and, by 0 continuity, on the entire F2. The proposition is proved.
I. Elementary Notions of Noncommutative Analysis

Evidently, the statement of Proposition 1.1 remains valid also in the presence of additional operator arguments. The Feynman indices over $A$ can be moved apart, but only in such a way that the ordering relation between them and the Feynman indices of the other operators remains unchanged. More precisely,

$$
f(\overset{j}{A}, \overset{k}{A}, \overset{s_1}{B_1}, \ldots, \overset{s_m}{B_m}) = f(\overset{l}{A}, \overset{l}{A}, \overset{s_1}{B_1}, \ldots, \overset{s_m}{B_m})
$$

provided that $j$, $k$, and $l$ all lie in some interval $[a, b]$ that contains none of the numbers $s_1, \ldots, s_m$. Since our definitions make it evident that the value of an operator expression depends on the ordering relation between Feynman indices rather than on the indices themselves, we arrive at the following conclusion: Feynman indices in an operator expression can be changed arbitrarily without affecting its value, provided that the ordering relation between Feynman indices over different operators is preserved. For example,

$$
f(\overset{1}{A}, \overset{2}{A}, \overset{3}{B}) = f(\overset{2}{A}, \overset{1}{A}, \overset{3}{B}) = f(\overset{1}{A}, \overset{1}{A}, \overset{5}{B}),
$$

but

$$
f(\overset{1}{A}, \overset{2}{A}, \overset{3}{B}) \ne f(\overset{4}{A}, \overset{2}{A}, \overset{3}{B})
$$

in general, since the order of the Feynman indices of the first and the third argument has been changed.

The next proposition is stated in its general form (i.e., with an arbitrary number of operator arguments).

Proposition 1.2 (Extraction of a linear factor) Let $f(x_1, \ldots, x_n) \in \mathcal F_n$ be a symbol of the form

$$
f(x_1, \ldots, x_n) = g(x_1, \ldots, x_k)\,h(x_{k+1}, \ldots, x_n),
$$

and let $(\overset{j_1}{A_1}, \ldots, \overset{j_n}{A_n})$ be a Feynman tuple satisfying the following condition: the Feynman indices $j_1, \ldots, j_k$ lie in some interval $(a, b)$ that contains none of the indices $j_{k+1}, \ldots, j_n$. Then

$$
f(\overset{j_1}{A_1}, \ldots, \overset{j_n}{A_n}) = \overset{s}{[\![}\; g(\overset{j_1}{A_1}, \ldots, \overset{j_k}{A_k}) \;]\!]\; h(\overset{j_{k+1}}{A_{k+1}}, \ldots, \overset{j_n}{A_n}),
$$

where $s \in (a, b)$ can be chosen arbitrarily.
Remark 1.4 Recall that the autonomous brackets $[\![\,\cdot\,]\!]$ have the following meaning: one should evaluate the expression they enclose and use the result as a single new operator in the subsequent evaluation, no longer caring about the Feynman indices that occurred within the brackets. The index $s$ over the left bracket is just the Feynman index to be assigned to this new operator. The operator $g(\overset{j_1}{A_1}, \ldots, \overset{j_k}{A_k})$ occurs linearly in this identity, so, according to Remark 1.2, there is no need to impose any additional requirements on this operator.

Proof. Assume that $f(x_1, \ldots, x_n)$ is a decomposable symbol,

$$
f(x_1, \ldots, x_n) = f_1(x_1) \cdots f_n(x_n).
$$

Then $g(x_1, \ldots, x_k) = f_1(x_1) \cdots f_k(x_k)$ and $h(x_{k+1}, \ldots, x_n) = f_{k+1}(x_{k+1}) \cdots f_n(x_n)$. The factors $f_1(\overset{j_1}{A_1}), \ldots, f_k(\overset{j_k}{A_k})$ in the product $f_1(\overset{j_1}{A_1}) \cdots f_n(\overset{j_n}{A_n})$ stand next to each other due to the condition imposed on the Feynman indices. Hence, we can group these factors together, thus obtaining the factor

$$
f_1(\overset{j_1}{A_1}) \cdots f_k(\overset{j_k}{A_k}) = g(\overset{j_1}{A_1}, \ldots, \overset{j_k}{A_k})
$$

in the considered product. Hence, the proposition is proved for decomposable symbols. By linearity and continuity, it extends to arbitrary symbols $g \in \mathcal F_k$, $h \in \mathcal F_{n-k}$ (cf. Remark 1.3). The proof is now complete. $\square$

Let us consider two examples. We have

$$
g(\overset{1}{A}, \overset{5}{B})\,h(\overset{2}{C}, \overset{3}{D}) = \overset{2}{[\![}\; h(\overset{2}{C}, \overset{3}{D}) \;]\!]\; g(\overset{1}{A}, \overset{5}{B}) = \overset{2}{K}\, g(\overset{1}{A}, \overset{5}{B}),
$$

where $K = h(C, D)$. Similarly, we have

$$
(\overset{2}{A} - \overset{4}{A}) \sin(\overset{1}{B} + \overset{5}{B}) = \overset{3}{[\![}\; \overset{2}{A} - \overset{4}{A} \;]\!] \sin(\overset{1}{B} + \overset{5}{B}) = 0,
$$

since

$$
\overset{2}{A} - \overset{4}{A} = A - A = 0.
$$

However,

$$
(\overset{2}{A} - \overset{4}{A}) \sin(\overset{1}{B} + \overset{3}{B}) \ne 0
$$

in general, since the operator $\overset{3}{B}$ acts between $\overset{2}{A}$ and $\overset{4}{A}$.

Propositions 1.1 and 1.2 enable us to prove a theorem stating the uniqueness of the homomorphism $\mu_A$ for any generator $A$.
Theorem 1.1 Suppose that the symbol class $\mathcal F_1$ satisfies the following condition: the difference derivative

$$
\frac{\delta f}{\delta x}(x, y) =
\begin{cases}
\dfrac{f(x) - f(y)}{x - y}, & x \ne y,\\[1ex]
f'(x), & x = y,
\end{cases}
$$

is a continuous operator from $\mathcal F_1$ to $\mathcal F_2$. Then for any generator $A \in \mathcal A$ the homomorphism $\mu_A$ described in Definition 1.1 is uniquely defined.

Proof. Let $A$ be a generator in $\mathcal A$, and let $\mu_1$ and $\mu_2$ be two (possibly different) choices of the homomorphism $\mu_A$. It suffices to prove that $\mu_1 = \mu_2$. For notational convenience, we set $C = A$, $\mu_A = \mu_1$, and $\mu_C = \mu_2$; in other words, we denote the operator argument differently, according to which of the two homomorphisms is used:

$$
f(A) = \mu_A(f), \qquad f(C) = \mu_C(f).
$$

All that we need is to establish that $f(A) = f(C)$ for every $f \in \mathcal F_1$. By substituting $\overset{2}{A}$ for $x$ and $\overset{1}{C}$ for $y$ in the identity

$$
f(x) - f(y) = (x - y)\frac{\delta f}{\delta x}(x, y),
$$
we obtain

$$
f(A) - f(C) = f(\overset{2}{A}) - f(\overset{1}{C}) = (\overset{2}{A} - \overset{1}{C})\,\frac{\delta f}{\delta x}(\overset{2}{A}, \overset{1}{C}).
$$

By the conditions of the theorem, $\delta f/\delta x\,(x, y) \in \mathcal F_2$, and consequently $(z - w)\,\delta f/\delta x\,(x, y) \in \mathcal F_4$. Hence we can apply Proposition 1.1 so as to move the Feynman indices over $A$ apart, and the same thing can be done for $C$:

$$
f(A) - f(C) = (\overset{2}{A} - \overset{1}{C})\,\frac{\delta f}{\delta x}(\overset{3}{A}, \overset{0}{C}).
$$

Next, we use Proposition 1.2 to enclose the factor $(\overset{2}{A} - \overset{1}{C})$ in autonomous brackets:

$$
(\overset{2}{A} - \overset{1}{C})\,\frac{\delta f}{\delta x}(\overset{3}{A}, \overset{0}{C}) = \overset{2}{[\![}\; \overset{2}{A} - \overset{1}{C} \;]\!]\,\frac{\delta f}{\delta x}(\overset{3}{A}, \overset{0}{C}),
$$

and the last expression is equal to zero since

$$
\overset{2}{A} - \overset{1}{C} = A - C = 0
$$

(the reader should not be too suspicious about this conclusion, since the homomorphisms $\mu_A$ and $\mu_C$ coincide on the symbol $\varphi(x) = x$ by definition). Hence we have shown that $f(A) = f(C)$. The theorem is proved. $\square$

In what follows we assume that the condition stated in Theorem 1.1 is fulfilled.

Remark 1.5 This proof is so simple that it seems to be a mere trick. However, it is perfectly rigorous. Its simplicity is just due to the wizardry of our notation, which displays itself in quite a few places thereafter.

Remark 1.6 According to Proposition 1.1, the operator $t$ of restriction to the diagonal acts continuously from $\mathcal F_2$ to $\mathcal F_1$. Since $t \circ \delta/\delta x = d/dx$ (on the diagonal the difference derivative coincides with the usual one), we see that $d/dx$ is a continuous operator from $\mathcal F_1$ to $\mathcal F_1$, and consequently all symbols in $\mathcal F_1$ (and hence in each $\mathcal F_n$) are infinitely differentiable. We could choose another way and postulate that the operator

$$
\frac{d}{dx} : \mathcal F_1 \to \mathcal F_1
$$

is continuous. Then the formula

$$
\frac{\delta f}{\delta x}(x, y) = \int_0^1 \frac{df}{dx}\bigl(\tau x + (1 - \tau)y\bigr)\, d\tau
$$

would imply that the condition of Theorem 1.1 is satisfied. However, in that case we would have to require that the domain where the symbols are defined is arcwise connected and that the mapping $(\tau, f(x)) \mapsto f(\tau x + (1 - \tau)y)$ from $I \times \mathcal F_1$ to $\mathcal F_2$ is continuous.
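Both branches of the definition of the difference derivative and the integral representation above can be checked numerically for a concrete symbol. The sketch below is our own illustration (not part of the book's text), using $f(x) = x^3$, for which $\delta f/\delta x\,(x, y) = x^2 + xy + y^2$:

```python
import numpy as np

# Symbol f(x) = x^3 and its ordinary derivative.
f = lambda x: x**3
df = lambda x: 3 * x**2

def delta_f(x, y):
    """Difference derivative: (f(x) - f(y))/(x - y), or f'(x) on the diagonal."""
    return (f(x) - f(y)) / (x - y) if x != y else df(x)

def delta_f_integral(x, y, n=20001):
    """Integral formula: integral of f'(tau*x + (1 - tau)*y) over [0, 1]."""
    tau = np.linspace(0.0, 1.0, n)
    vals = df(tau * x + (1 - tau) * y)
    # Trapezoidal rule on a uniform grid.
    return float(np.sum((vals[1:] + vals[:-1]) / 2) * (tau[1] - tau[0]))

x, y = 1.3, -0.7
assert abs(delta_f(x, y) - (x**2 + x * y + y**2)) < 1e-12
assert abs(delta_f_integral(x, y) - delta_f(x, y)) < 1e-7
```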
2.3 Basic Properties

Let $\mathcal A$ be an algebra of operators, and let

$$
\mathcal F_1 = \mathcal F, \quad \mathcal F_2 = \mathcal F \mathbin{\hat\otimes} \mathcal F, \quad \ldots, \quad \mathcal F_n = \mathcal F^{\hat\otimes n}, \quad \ldots
$$

be algebras of symbols. Given a tuple $A = (A_1, \ldots, A_n)$ of generators in $\mathcal A$, we intend to study the mapping

$$
\mu_A = \mu_{A_1, \ldots, A_n} : \mathcal F_n \to \mathcal A.
$$

In doing so it is convenient to use the regular representations of the algebra $\mathcal A$ on itself. Let $A \in \mathcal A$. The mappings

$$
L_A : \mathcal A \to \mathcal A, \qquad B \mapsto L_A(B) = AB,
$$

and

$$
R_A : \mathcal A \to \mathcal A, \qquad B \mapsto R_A(B) = BA,
$$

defined as left and right multiplication by $A$, respectively, are continuous linear operators on $\mathcal A$. The set of continuous linear operators on $\mathcal A$ will be denoted by $\mathcal L(\mathcal A)$, so we can write $L_A \in \mathcal L(\mathcal A)$ and $R_A \in \mathcal L(\mathcal A)$. Consider the mapping

$$
L : \mathcal A \to \mathcal L(\mathcal A), \qquad A \mapsto L_A.
$$

It is a continuous linear mapping called the left regular representation of the algebra $\mathcal A$. Similarly, the continuous linear mapping

$$
R : \mathcal A \to \mathcal L(\mathcal A), \qquad A \mapsto R_A,
$$

is called the right regular representation of $\mathcal A$. Note that $L$ is a homomorphism of algebras,

$$
L_{AB} = L_A L_B,
$$

whereas $R$ is an antihomomorphism, that is, it reverses the order of factors:

$$
R_{AB} = R_B R_A.
$$
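With matrices standing in for the abstract algebra $\mathcal A$, the homomorphism and antihomomorphism properties, as well as the commutation of $L_A$ with $R_B$ used below, can be checked directly. The following is our own numerical sketch, not part of the book's text:

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

# Left and right regular representations: L_A(X) = A X, R_A(X) = X A.
def L(A):
    return lambda X: A @ X

def R(A):
    return lambda X: X @ A

# L is a homomorphism: L_{AB} = L_A composed with L_B ...
assert np.allclose(L(A @ B)(C), L(A)(L(B)(C)))
# ... whereas R is an antihomomorphism: R_{AB} = R_B composed with R_A.
assert np.allclose(R(A @ B)(C), R(B)(R(A)(C)))
# L_A and R_B always commute, by associativity: A(CB) = (AC)B.
assert np.allclose(L(A)(R(B)(C)), R(B)(L(A)(C)))
```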
Since $\mathcal L(\mathcal A)$ is an algebra of continuous linear operators on a linear space, we may well consider the notion of a generator in $\mathcal L(\mathcal A)$. It turns out that the supply of generators in $\mathcal L(\mathcal A)$ is at least as rich as that in $\mathcal A$, as shown by the following theorem:

Theorem 1.2 The following conditions are equivalent:

i) $A$ is a generator in $\mathcal A$.

ii) $L_A$ is a generator in $\mathcal L(\mathcal A)$.

iii) $R_A$ is a generator in $\mathcal L(\mathcal A)$.

Proof. i) $\Leftrightarrow$ ii). Let $A$ be a generator in $\mathcal A$ and $\mu_A : \mathcal F \to \mathcal A$ be the corresponding homomorphism. We define the mapping

$$
\mu_{L_A} : \mathcal F \to \mathcal L(\mathcal A)
$$

by setting

$$
\mu_{L_A}(f) = L_{\mu_A(f)}.
$$

Clearly, this mapping is a continuous homomorphism as the composition of two continuous homomorphisms, namely of $L$ and $\mu_A$. We also have

$$
\mu_{L_A}(x) = L_{\mu_A(x)} = L_A,
$$

and so $L_A$ matches the requirements of Definition 1.1.

Conversely, let $A \in \mathcal A$ be an operator such that $L_A$ is a generator; in other words, for each symbol $f(x) \in \mathcal F$ the operator $f(L_A) = \mu_{L_A}(f)$ is well-defined. To construct the corresponding mapping $\mu_A$, note that for an arbitrary operator $B \in \mathcal A$ we have

$$
[L_A, R_B] \overset{\mathrm{def}}{=} L_A R_B - R_B L_A = 0,
$$

that is, the operators $L_A$ and $R_B$ commute. Indeed, for any $C \in \mathcal A$ we have

$$
L_A(R_B C) = L_A(CB) = A(CB) = (AC)B = R_B(AC) = R_B L_A C,
$$

by the associativity of $\mathcal A$. We can now apply an argument similar to that used in the proof of Theorem 1.1 to obtain

$$
f(L_A) R_B = R_B f(L_A). \tag{1.13}
$$
Indeed, we have

$$
f(\overset{3}{L_A})\,\overset{2}{R_B} - \overset{2}{R_B}\, f(\overset{1}{L_A}) = \bigl(f(\overset{3}{L_A}) - f(\overset{1}{L_A})\bigr)\,\overset{2}{R_B} = (\overset{3}{L_A} - \overset{1}{L_A})\,\overset{2}{R_B}\,\frac{\delta f}{\delta x}(\overset{3}{L_A}, \overset{1}{L_A}).
$$

We can move apart the Feynman indices over $L_A$ in this expression (Proposition 1.1) and then extract a linear factor (Proposition 1.2), thus obtaining

$$
f(\overset{3}{L_A})\,\overset{2}{R_B} - \overset{2}{R_B}\, f(\overset{1}{L_A}) = \overset{2}{[\![}\; (\overset{3}{L_A} - \overset{1}{L_A})\,\overset{2}{R_B} \;]\!]\,\frac{\delta f}{\delta x}(\overset{4}{L_A}, \overset{0}{L_A}) = 0,
$$

since the operator in the autonomous brackets is equal to zero. Since $B$ is arbitrary, equation (1.13) means exactly that $f(L_A)$ is the left regular representation of some element of $\mathcal A$, which will be denoted by $f(A)$. Specifically, we have

$$
f(L_A)(X) = f(L_A)(1 \cdot X) = f(L_A)(R_X 1) = R_X f(L_A)(1) = \bigl(f(L_A)(1)\bigr)\, X,
$$

and consequently,

$$
f(A) = f(L_A)(1).
$$

The mapping $f \mapsto \mu_A(f) = f(A)$ is clearly continuous; moreover,

$$
f(A)\,g(A) = f(L_A)\bigl(g(L_A)(1)\bigr) = \bigl(f(L_A)\,g(L_A)\bigr)(1) = (fg)(L_A)(1) = (fg)(A),
$$

so that it is a homomorphism.

i) $\Leftrightarrow$ iii). This can be proved similarly; the only point worth mentioning is that $R$ is an antihomomorphism, and at first glance it may seem that the mapping $\mu_{R_A} = R \circ \mu_A$ fails to be a homomorphism. However, the image of $\mu_A$ is a commutative subalgebra of $\mathcal A$, and the restriction of $R$ to it is a homomorphism. The proof is complete. $\square$

Now there is an evident formula:

$$
f(\overset{1}{L_{A_1}}, \ldots, \overset{n}{L_{A_n}}) = L_{f(\overset{1}{A_1}, \ldots, \overset{n}{A_n})}
$$

for any generators $A_1, \ldots, A_n \in \mathcal A$ and any symbol $f \in \mathcal F_n$. The proof goes by linearity and continuity: if $f(x) = f_1(x_1) \cdots f_n(x_n)$ is a decomposable symbol, we have

$$
f(\overset{1}{L_{A_1}}, \ldots, \overset{n}{L_{A_n}}) = f_n(L_{A_n}) \cdots f_1(L_{A_1}) = L_{f_n(A_n)} \cdots L_{f_1(A_1)} = L_{f_n(A_n) \cdots f_1(A_1)} = L_{f(\overset{1}{A_1}, \ldots, \overset{n}{A_n})},
$$

and this equation extends to the entire $\mathcal F_n$ by continuity. Similarly,

$$
f(\overset{1}{R_{A_1}}, \ldots, \overset{n}{R_{A_n}}) = R_{f(\overset{n}{A_1}, \ldots, \overset{1}{A_n})}
$$
(the operators $A_1, \ldots, A_n$ on the right come in reversed Feynman ordering, since $R$ is an antihomomorphism).

Let us consider a slightly more general situation in which a function contains both $L$ and $R$ arguments. Let $f(x, y) \in \mathcal F_2$ be an arbitrary binary symbol. What is $f(\overset{2}{L_A}, \overset{1}{R_B})$? Note that $L_A$ and $R_B$ commute, and so we can choose their Feynman indices arbitrarily; let us choose them in such a way that the index of $L_A$ is greater than that of $R_B$; thus, we consider $f(\overset{3}{L_A}, \overset{1}{R_B})$. This operator is an element of $\mathcal L(\mathcal A)$, and to describe it means to describe its action on an arbitrary element $C \in \mathcal A$. If $f$ is decomposable, $f(x, y) = f_1(x) f_2(y)$, then we have

$$
f(\overset{3}{L_A}, \overset{1}{R_B})(C) = f_1(L_A)\, f_2(R_B)\, C = f_1(A)\, C\, f_2(B) = f(\overset{3}{A}, \overset{1}{B})\,\overset{2}{C}.
$$

By continuity, this identity extends to arbitrary symbols $f \in \mathcal F_2$. We now can state the result in a general form.
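For a decomposable symbol the identity above says that the operator $f(\overset{3}{L_A}, \overset{1}{R_B})$ is "multiply by $f_1(A)$ on the left and by $f_2(B)$ on the right". The matrix sketch below (our own illustration, with $f_1(x) = x^2$, $f_2(y) = y^3$) also shows that the order in which the left and right multiplications are applied is irrelevant, since $L_A$ and $R_B$ commute:

```python
import numpy as np

rng = np.random.default_rng(2)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

# Apply f2(R_B) then f1(L_A) to C, step by step.
X = C.copy()
for _ in range(3):
    X = X @ B                  # f2(R_B): multiply by B on the right, 3 times
for _ in range(2):
    X = A @ X                  # f1(L_A): multiply by A on the left, 2 times

# Compare with the "sandwich" f1(A) C f2(B).
sandwich = np.linalg.matrix_power(A, 2) @ C @ np.linalg.matrix_power(B, 3)
assert np.allclose(X, sandwich)

# Since L_A and R_B commute, the opposite application order agrees too.
Y = C.copy()
for _ in range(2):
    Y = A @ Y
for _ in range(3):
    Y = Y @ B
assert np.allclose(Y, sandwich)
```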
Theorem 1.3 Let $A_1, \ldots, A_s, B_1, \ldots, B_m \in \mathcal A$ be generators, and let $j_1, \ldots, j_s, k_1, \ldots, k_m$ be their (pairwise distinct) Feynman indices. Then

$$
f(\overset{j_1}{L_{A_1}}, \ldots, \overset{j_s}{L_{A_s}}, \overset{k_1}{R_{B_1}}, \ldots, \overset{k_m}{R_{B_m}})(C) = f(\overset{j_1}{A_1}, \ldots, \overset{j_s}{A_s}, \overset{k_0 - k_1}{B_1}, \ldots, \overset{k_0 - k_m}{B_m})\,\overset{l}{C},
$$

where $k_0$ and $l$ are chosen in such a way that $k_0 - k_t < l < j_i$ for any $i \in \{1, \ldots, s\}$ and $t \in \{1, \ldots, m\}$.

The proof is modelled on the above argument, and we omit the trivial details.

Let us now state and prove the main properties of the Feynman operator calculus. We fix some class $\mathcal F$ of unary symbols and some generator $A \in \mathcal A$.
Theorem 1.4 The Feynman operator calculus has the following naturality properties.

1°. Let $\varphi : \mathcal A \to \mathcal B$ be a continuous homomorphism of algebras. Then $\varphi(A) \in \mathcal B$ is a generator, and

$$
f(\varphi(A)) = \varphi(f(A))
$$

for any $f \in \mathcal F$.

2°. Let $B \in \mathcal A$ be an operator such that $AB = BA$. Then

$$
f(A)B = B f(A)
$$

for any $f \in \mathcal F$.

3°. Suppose that

$$
AB = BC, \tag{I.14}
$$

where $B, C \in \mathcal A$ and $C$ is a generator. Then

$$
f(A)B = B f(C)
$$

for any $f \in \mathcal F$.

4°. Assume that $\mathcal A$ is an algebra of operators in a linear space $V$, $\mathcal B$ an algebra of operators in a linear space $W$, and

$$
\chi : W \to V
$$

a continuous linear operator such that

$$
A\chi = \chi C, \tag{I.15}
$$

where $C \in \mathcal B$ is a generator. Then we have

$$
f(A)\chi = \chi f(C)
$$

for any $f \in \mathcal F$.

5°. As in 4°, assume that $\mathcal A$ is an algebra of operators in a linear space $V$. Let $\xi \in V$ be an eigenvector of $A$ with eigenvalue $\lambda$,

$$
A\xi = \lambda\xi.
$$

Suppose also that $f(\lambda)$ is defined for any $f \in \mathcal F$. Then

$$
f(A)\xi = f(\lambda)\xi
$$

for any $f \in \mathcal F$.

Remark 1.7 There are no Feynman indices in the statement of this theorem. However, the proof employs Feynman ordering heavily. See Theorem 1.5 for a generalization to multivariate symbols.
Proof. 1°. Let $\varphi : \mathcal A \to \mathcal B$ be a continuous homomorphism of algebras. Set $f(\varphi(A)) = \varphi(f(A))$, that is,

$$
\mu_{\varphi(A)} = \varphi \circ \mu_A.
$$

Evidently, $\mu_{\varphi(A)}$ is a continuous homomorphism (as a composition of such mappings). Hence, $\varphi(A)$ is a generator in $\mathcal B$, and we obtain the desired result (recall that $\mu_{\varphi(A)}$ is unique by Theorem 1.1).

2°. We proceed as in the proof of Theorem 1.2. We can write

$$
f(\overset{3}{A})\,\overset{2}{B} - \overset{2}{B}\,f(\overset{1}{A}) = \overset{2}{B}\,\bigl(f(\overset{3}{A}) - f(\overset{1}{A})\bigr) = \overset{2}{B}\,(\overset{3}{A} - \overset{1}{A})\,\frac{\delta f}{\delta x}(\overset{3}{A}, \overset{1}{A}).
$$

Next, by moving indices apart and extracting a linear factor, we obtain

$$
f(\overset{3}{A})\,\overset{2}{B} - \overset{2}{B}\,f(\overset{1}{A}) = \overset{2}{[\![}\; \overset{2}{B}\,(\overset{3}{A} - \overset{1}{A}) \;]\!]\,\frac{\delta f}{\delta x}(\overset{5}{A}, \overset{0}{A}) = 0,
$$

since the expression in autonomous brackets is zero:

$$
\overset{2}{B}\,(\overset{3}{A} - \overset{1}{A}) = AB - BA = 0.
$$

3°. In order to prove this, we repeat the above computation with $\overset{1}{A}$ replaced by $\overset{1}{C}$:

$$
f(\overset{3}{A})\,\overset{2}{B} - \overset{2}{B}\,f(\overset{1}{C}) = \overset{2}{B}\,\bigl(f(\overset{3}{A}) - f(\overset{1}{C})\bigr) = \overset{2}{B}\,(\overset{3}{A} - \overset{1}{C})\,\frac{\delta f}{\delta x}(\overset{3}{A}, \overset{1}{C}) = \overset{2}{[\![}\; \overset{2}{B}\,(\overset{3}{A} - \overset{1}{C}) \;]\!]\,\frac{\delta f}{\delta x}(\overset{5}{A}, \overset{0}{C}) = 0.
$$

Of course, 2° is a particular case of 3°.

4°. The condition of this item can be expressed by the commutative diagram

            C
       W ------> W
       |         |
     χ |         | χ
       v         v
       V ------> V
            A

We now make some preparations in order to reduce the proof of this item to that of item 3°. Consider the direct sum $E = V \oplus W$ of the linear spaces $V$ and $W$. Continuous linear operators in $E$ can be viewed as $2 \times 2$ matrices

$$
\begin{pmatrix} E & F \\ G & H \end{pmatrix}
$$

with continuous operator entries

$$
E : V \to V, \quad F : W \to V, \quad G : V \to W, \quad H : W \to W.
$$
Relation (I.15) can be rewritten in the form

$$
\begin{pmatrix} A & 0 \\ 0 & C \end{pmatrix}
\begin{pmatrix} 0 & \chi \\ 0 & 0 \end{pmatrix}
=
\begin{pmatrix} 0 & \chi \\ 0 & 0 \end{pmatrix}
\begin{pmatrix} A & 0 \\ 0 & C \end{pmatrix}.
$$

Let $\tilde{\mathcal A}$ be some algebra of continuous linear operators in $E$ containing $\mathcal A \oplus \mathcal B$ and the operator $\begin{pmatrix} 0 & \chi \\ 0 & 0 \end{pmatrix}$ (e.g., the minimal algebra generated by these objects). Then the last equation has the form (I.14) in the algebra $\tilde{\mathcal A}$. Furthermore, $\begin{pmatrix} A & 0 \\ 0 & C \end{pmatrix}$ is a generator in $\tilde{\mathcal A}$. Indeed, one can set

$$
f\!\left(\begin{pmatrix} A & 0 \\ 0 & C \end{pmatrix}\right) = \begin{pmatrix} f(A) & 0 \\ 0 & f(C) \end{pmatrix}.
$$

By applying item 3°, we obtain

$$
f\!\left(\begin{pmatrix} A & 0 \\ 0 & C \end{pmatrix}\right)\begin{pmatrix} 0 & \chi \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & \chi \\ 0 & 0 \end{pmatrix} f\!\left(\begin{pmatrix} A & 0 \\ 0 & C \end{pmatrix}\right),
$$

or

$$
\begin{pmatrix} f(A) & 0 \\ 0 & f(C) \end{pmatrix}\begin{pmatrix} 0 & \chi \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & \chi \\ 0 & 0 \end{pmatrix}\begin{pmatrix} f(A) & 0 \\ 0 & f(C) \end{pmatrix}.
$$

We perform the necessary multiplications and find that $f(A)\chi = \chi f(C)$, as desired.

5°. Strange as it may seem, this is a particular case of 4°. Specifically, let $H \subset V$ be the one-dimensional subspace generated by the eigenvector $\xi$, and let $\chi : \mathbb C \to V$ be the linear mapping such that $\chi(a) = a\xi$ for all $a \in \mathbb C$. It is evident that $\chi(\mathbb C) = H$. The commutativity of the diagram

            λ
       C ------> C
       |         |
     χ |         | χ
       v         v
       V ------> V
            A

expresses the fact that $H = \chi(\mathbb C)$ is an eigenspace of $A$ with eigenvalue $\lambda$. An application of item 4° yields the desired result. The theorem is proved. $\square$

We can now state the counterpart of Theorem 1.4 for multivariate symbols. In the following we assume that the class $\mathcal F$ of unary symbols is fixed, $\mathcal F_n = \mathcal F^{\hat\otimes n}$, and $A_1, \ldots, A_n \in \mathcal A$ are arbitrary generators.
Theorem 1.5 1°. Let $\varphi : \mathcal A \to \mathcal B$ be a continuous homomorphism of algebras. Then we have

$$
f(\overset{1}{\varphi(A_1)}, \ldots, \overset{n}{\varphi(A_n)}) = \varphi\bigl(f(\overset{1}{A_1}, \ldots, \overset{n}{A_n})\bigr)
$$

for any symbol $f \in \mathcal F_n$.

2°. Let $B \in \mathcal A$ be an operator commuting with all $A_j$,

$$
[A_j, B] \equiv A_j B - B A_j = 0.
$$

Then $B$ commutes with functions of $A_1, \ldots, A_n$ as well,

$$
\bigl[f(\overset{1}{A_1}, \ldots, \overset{n}{A_n}), B\bigr] = [\![\, f(\overset{1}{A_1}, \ldots, \overset{n}{A_n}) \,]\!]\, B - B\, [\![\, f(\overset{1}{A_1}, \ldots, \overset{n}{A_n}) \,]\!] = 0.
$$

3°. Let

$$
A_j B = B C_j, \qquad j = 1, \ldots, n,
$$

where $B \in \mathcal A$, $C_j \in \mathcal A$, $j = 1, \ldots, n$, and each $C_j$ is a generator. Then

$$
f(\overset{1}{A_1}, \ldots, \overset{n}{A_n})\, B = B\, f(\overset{1}{C_1}, \ldots, \overset{n}{C_n})
$$

for any symbol $f \in \mathcal F_n$.

4°. Let $A_1, \ldots, A_n \in \mathcal A$ be continuous linear operators in a linear space $V$, let $B_1, \ldots, B_n \in \mathcal B$ be continuous linear operators in a linear space $W$, and let

$$
\chi : W \to V
$$

be a continuous linear operator such that

$$
A_j \chi = \chi B_j, \qquad j = 1, \ldots, n
$$

(in this case one says that $\chi$ is an intertwining operator for the tuples $(A_1, \ldots, A_n)$ and $(B_1, \ldots, B_n)$). Assume that each $B_j$ is a generator in $\mathcal B$. Then

$$
f(\overset{1}{A_1}, \ldots, \overset{n}{A_n}) \circ \chi = \chi\, f(\overset{1}{B_1}, \ldots, \overset{n}{B_n}).
$$

In other words, if $\chi$ intertwines operator tuples, it also intertwines any function of these tuples.

5°. Let $A_1, \ldots, A_n \in \mathcal A$ be linear operators in a linear space $V$, and let $\xi \in V$ be a common eigenvector of $A_1, \ldots, A_n$ with eigenvalues $\lambda = (\lambda_1, \ldots, \lambda_n)$:

$$
A_j \xi = \lambda_j \xi, \qquad j = 1, \ldots, n.
$$

Then for each $f \in \mathcal F_n$ the vector $\xi$ is an eigenvector of $f(\overset{1}{A_1}, \ldots, \overset{n}{A_n})$ with eigenvalue $f(\lambda_1, \ldots, \lambda_n)$,

$$
f(\overset{1}{A_1}, \ldots, \overset{n}{A_n})\,\xi = f(\lambda_1, \ldots, \lambda_n)\,\xi.
$$
Proof. The proof of this theorem can be carried out in the usual way. Namely, we first consider decomposable symbols; for these symbols the statement of each item follows directly from the corresponding item of Theorem 1.4. Next we extend the result to arbitrary symbols by continuity. $\square$

Let us make some remarks on Theorems 1.4 and 1.5. These theorems express naturality properties of the Feynman operator calculus: item 1° says that it behaves naturally under homomorphisms of algebras; item 4° says the same about its behaviour under homomorphisms of the linear spaces in which the operators act; item 5° states that one may as well restrict consideration to common invariant subspaces where necessary; and finally, items 2° and 3° express quite natural behaviour with respect to the commutation operation, in the spirit of von Neumann's definition of functions of operators via bicommutants.

It should be pointed out, however, that item 5° is less important as soon as functions of several operators are considered because, typically, a tuple of noncommuting operators has no common eigenvectors at all, and in any case a system of such vectors cannot be complete (this is quite understandable, since the operators commute on each common eigenspace and hence on the sum of the common eigenspaces).

It is clear that the assertions of these theorems are far from being independent. In fact, the dependence between some of our assertions can be summarized as follows: the continuity of the mapping $\delta/\delta x : \mathcal F \to \mathcal F \mathbin{\hat\otimes} \mathcal F$ implies the uniqueness of $\mu_A$, which in turn yields 1°, 2°, and 4°; moreover, 2° $\Rightarrow$ 3° and 4° $\Rightarrow$ 3°.

We have already mentioned and used some of these implications in our proof. However, let us discuss those implications which remain not elucidated as yet. The implication 2° $\Rightarrow$ 3° can be proved in a standard way: we consider the algebra $\mathcal A \oplus \mathcal A$ and rewrite the commutation relation (I.14) in the form

$$
\begin{pmatrix} A & 0 \\ 0 & C \end{pmatrix}
\begin{pmatrix} 0 & B \\ 0 & 0 \end{pmatrix}
=
\begin{pmatrix} 0 & B \\ 0 & 0 \end{pmatrix}
\begin{pmatrix} A & 0 \\ 0 & C \end{pmatrix}.
$$

It remains to follow the lines of the proof of item 4°. The implication 4° $\Rightarrow$ 3° can be proved as follows. We consider $A$, $B$, and $C$ as linear operators in $\mathcal A$ acting by multiplication on the left, i.e., we pass to the left regular representation. We have

$$
L_A L_B = L_B L_C,
$$

and item 4° applies.

In conclusion, let us state the following commutation theorem.
Theorem 1.6 Let $A, C \in \mathcal A$ be arbitrary generators; let $B, D \in \mathcal A$; and suppose that

$$
AB = BC + D.
$$

Then for any symbol $f \in \mathcal F$ we have

$$
f(A)B = Bf(C) + \overset{2}{D}\,\frac{\delta f}{\delta x}(\overset{3}{A}, \overset{1}{C}).
$$

Proof. This can be proved by the following computation:

$$
\begin{aligned}
f(\overset{3}{A})\,\overset{2}{B} - \overset{2}{B}\,f(\overset{1}{C})
&= \overset{2}{B}\,\bigl(f(\overset{3}{A}) - f(\overset{1}{C})\bigr)
 = \overset{2}{B}\,(\overset{3}{A} - \overset{1}{C})\,\frac{\delta f}{\delta x}(\overset{3}{A}, \overset{1}{C})\\
&= \overset{2}{B}\,(\overset{3}{A} - \overset{1}{C})\,\frac{\delta f}{\delta x}(\overset{4}{A}, \overset{0}{C}) \quad \text{(moving indices apart)}\\
&= \overset{2}{[\![}\; \overset{2}{B}\,(\overset{3}{A} - \overset{1}{C}) \;]\!]\,\frac{\delta f}{\delta x}(\overset{4}{A}, \overset{0}{C}) \quad \text{(extraction of a linear factor)}\\
&= \overset{2}{D}\,\frac{\delta f}{\delta x}(\overset{3}{A}, \overset{1}{C}),
\end{aligned}
$$

as desired. $\square$
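Theorem 1.6 admits a direct numerical check with matrices: given random $A$, $B$, $C$, one can define $D = AB - BC$ so that the hypothesis holds by construction. The sketch below (our own illustration, not from the book) uses $f(x) = x^2$, for which $\delta f/\delta x\,(x, y) = x + y$, so the correction term $\overset{2}{D}\,(\overset{3}{A} + \overset{1}{C})$ unwinds to $AD + DC$:

```python
import numpy as np

rng = np.random.default_rng(3)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))
D = A @ B - B @ C              # enforce the hypothesis A B = B C + D

# f(x) = x^2: the commutation formula gives f(A) B = B f(C) + A D + D C,
# since the operator D carries the middle Feynman index.
lhs = np.linalg.matrix_power(A, 2) @ B
rhs = B @ np.linalg.matrix_power(C, 2) + A @ D + D @ C

assert np.allclose(lhs, rhs)
```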
2.4 Tempered Symbols and Generators of Tempered Groups

Though the discussion of functional-analytic questions has been postponed until Chapter IV, we consider here a particular realization of the construction presented in Subsection 2.2. This realization deals with symbols of tempered growth and the related class of generators and is particularly needed in applications to differential equations.

(a) The Symbol Classes. We introduce the space $\mathcal F_n = S^\infty(\mathbb R^n)$ of $n$-ary symbols. By definition, the space $S^\infty(\mathbb R^n)$ consists of all functions $f(y_1, \ldots, y_n)$, $(y_1, \ldots, y_n) = y \in \mathbb R^n$, satisfying the estimate

$$
\left| \left(\frac{\partial}{\partial y}\right)^{\!\alpha} f(y) \right| \le C_\alpha\,(1 + |y|)^r
$$

for any multi-index $\alpha = (\alpha_1, \ldots, \alpha_n)$, with some constant $r$ independent of $\alpha$ (but depending on $f$). A (generalized) sequence $f_\beta \in S^\infty(\mathbb R^n)$ is said to be convergent to an $f \in S^\infty(\mathbb R^n)$ if there exists an $r$ such that

$$
\sup_{y \in \mathbb R^n} \left| \frac{(\partial/\partial y)^{\alpha}\,\{f_\beta(y) - f(y)\}}{(1 + |y|)^r} \right| \to 0
$$

for any multi-index $\alpha$. The algebra $S^\infty(\mathbb R^n)$ is complete and the algebraic operations are continuous with respect to this convergence. Moreover, one has

$$
S^\infty(\mathbb R^n) = S^\infty(\mathbb R^1) \mathbin{\hat\otimes} \cdots \mathbin{\hat\otimes} S^\infty(\mathbb R^1),
$$

and the difference derivative is a continuous operator

$$
\frac{\delta}{\delta y} : S^\infty(\mathbb R^1) \to S^\infty(\mathbb R^2).
$$
These properties (whose proof we cannot carry out as yet, since we do not have the material of Chapter IV at our disposal) show that the symbol spaces $S^\infty(\mathbb R^n)$ satisfy all the requirements imposed in Subsection 2.2.

(b) The Operator Classes. We consider operators acting on a Hilbert scale. A Hilbert scale is a sequence of densely embedded Hilbert spaces

$$
\cdots \subset H_{-2} \subset H_{-1} \subset H_0 \subset H_1 \subset H_2 \subset \cdots
$$

indexed by $\mathbb Z$ or $\mathbb R$. To be definite, we assume that the indexing set is discrete and deal with $\mathbb Z$-indexed Hilbert scales. It is assumed that the set

$$
H_{-\infty} = \bigcap_{j \in \mathbb Z} H_j
$$

is dense in each $H_j$. A (generalized) sequence $h_\beta \in H_\infty = \bigcup_{j \in \mathbb Z} H_j$ is convergent to $h \in H_\infty$ if there exists a $k$ such that for $\beta > \beta_0$ all $h_\beta$ lie in $H_k$ and $h_\beta \to h$ in $H_k$.

A linear operator $A$ on $H_\infty$ is said to be bounded (of order $r$) in the scale $\{H_s\}$ if there exists an integer $r$ such that for any $s$ one has

$$
A H_s \subset H_{s+r}
$$

and the operator

$$
A|_{H_s} : H_s \to H_{s+r}
$$

is continuous. The minimal possible $r$ is called the order of $A$ and is denoted by $\operatorname{ord} A$. If the set of possible $r$ does not have a lower bound, we say that $\operatorname{ord} A = -\infty$.

A bounded operator $A$ in the scale $\{H_s\}$ is said to be the generator of a semigroup of tempered growth (or simply a tempered generator) if the Cauchy problem

$$
-i\,\frac{du}{dt} = Au, \qquad u|_{t=0} = u_0,
$$

has a unique solution for any $u_0 \in H_\infty$, and there exists an $l$ such that for any $s \in \mathbb Z$ the inclusion $u_0 \in H_s$ implies that $u(t) \in H_{s+l}$ for all $t$, is differentiable in $H_{s+l}$, and

$$
\| u(t) \|_{s+l} \le C\,(1 + |t|)^N\, \| u_0 \|_s,
$$

where $\|h\|_k$ is the norm of $h$ in $H_k$ and the constants $C$ and $N$ depend on $s$ but are independent of $u_0 \in H_s$. Thus, a tempered semigroup in a Hilbert scale grows at most polynomially as $t \to \infty$. We denote $u(t) = \exp(iAt)u_0$.
(c) The Functional Calculus. Let $\mathcal A$ be the algebra of bounded operators in the scale $\{H_s\}$, and let $A \in \mathcal A$ be a tempered generator in $\{H_s\}$. Let us define the mapping

$$
\mu_A : S^\infty(\mathbb R^1) \to \mathcal A
$$

by setting

$$
\mu_A(f) = \frac{(i + A)^m}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \exp(iAt)\,\tilde g(t)\,dt,
$$

where the integral is in the sense of strong convergence,

$$
g(y) = \frac{f(y)}{(y + i)^m}, \qquad \tilde g(t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-ity}\, g(y)\,dy
$$

is the Fourier transform of $g(y)$, and the number $m$ is chosen large enough to ensure that $\tilde g(t)$ is continuous. It is easy to prove, using our definitions and the properties of the Fourier transform, that any possible choice of $m$ gives the same result and that the resulting operator is bounded in $\{H_s\}$. Moreover, $\mu_A$ takes $y$ into $A$ and is an algebra homomorphism; let us give a formal calculation proving the latter statement: if

$$
\mu_A(h) = \frac{(i + A)^{m_1}}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \exp(iAt)\,\tilde k(t)\,dt,
$$

where $\tilde k(t)$ is the Fourier transform of $k(y) = (i + y)^{-m_1} h(y)$, then

$$
\mu_A(f)\,\mu_A(h) = \frac{(i + A)^{m + m_1}}{2\pi} \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} \exp\bigl(iA(t + \tau)\bigr)\,\tilde g(t)\,\tilde k(\tau)\,dt\,d\tau
= \frac{(i + A)^{m + m_1}}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \exp(i\eta A)\left\{ \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \tilde g(\eta - \tau)\,\tilde k(\tau)\,d\tau \right\} d\eta.
$$

By the properties of the Fourier transform of the convolution, the expression in braces in the integrand on the right-hand side of this equation is just the Fourier transform of the product $g(y)\,k(y)$, and so we obtain

$$
\mu_A(f)\,\mu_A(h) = \mu_A(fh),
$$

or

$$
f(A)\,h(A) = (fh)(A),
$$

as desired.
We can now define $f(\overset{1}{A_1}, \ldots, \overset{n}{A_n})$ for a function $f \in S^\infty(\mathbb R^n)$ and a Feynman tuple $(\overset{1}{A_1}, \ldots, \overset{n}{A_n})$ of tempered generators in the usual way (see Remark 1.3). Obviously, we get

$$
f(\overset{1}{A_1}, \ldots, \overset{n}{A_n}) = \left(\frac{1}{2\pi}\right)^{\!n/2} \int \tilde f(t_1, \ldots, t_n)\,\exp(iA_nt_n) \cdots \exp(iA_1t_1)\,dt_1 \cdots dt_n,
$$

where

$$
\tilde f(t_1, \ldots, t_n) = \left(\frac{1}{2\pi}\right)^{\!n/2} \int f(y_1, \ldots, y_n)\,\exp(-it_1y_1 - \cdots - it_ny_n)\,dy_1 \cdots dy_n
$$

is the Fourier transform of $f$, and the integral is understood in the sense of strong convergence. We have omitted here several important details, which will be clarified in Chapter IV.
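For a single self-adjoint matrix and a Gaussian symbol (which is its own Fourier transform, so the regularizing power can be taken $m = 0$), the Fourier-integral formula for $f(A)$ can be compared with the spectral calculus. The following numerical sketch is our own illustration, not part of the book's text:

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((3, 3))
A = (M + M.T) / 2                      # symmetric => self-adjoint generator
w, V = np.linalg.eigh(A)

f = lambda y: np.exp(-y**2 / 2)        # Gaussian symbol, its own transform

# Exact f(A) via the spectral theorem: V diag(f(w)) V^T.
fA_spectral = (V * f(w)) @ V.T

# f(A) = (2*pi)^(-1/2) * integral of exp(iAt) fhat(t) dt, on a finite grid.
t = np.linspace(-12.0, 12.0, 4001)
dt = t[1] - t[0]
fA_integral = np.zeros((3, 3), dtype=complex)
for tj in t:
    expiAt = (V * np.exp(1j * w * tj)) @ V.T   # exp(iAt) via eigenbasis
    fA_integral += expiAt * np.exp(-tj**2 / 2) * dt
fA_integral /= np.sqrt(2 * np.pi)

assert np.allclose(fA_integral, fA_spectral, atol=1e-6)
```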
2.5 The Influence of the Symbol Classes on the Properties of Generators

The choice of the class $\mathcal F$ of unary symbols in fact determines the possible properties of generators in a rather restrictive manner; this is evident in itself from general considerations, and it was confirmed by the example considered in the preceding subsection. Here we present some more simple examples clarifying the subject.

Example 1.3 We begin with the simple remark that the structure of $\mathcal F$ is closely related to the possible spectrum of a generator. For instance, if the symbol $f(x) = (x - a)^{-1}$ belongs to $\mathcal F$, then necessarily $a \notin \sigma(A)$ for any generator $A$ (here $\sigma(A)$ is the spectrum of $A$). Indeed, we have

$$
(A - a)\, f(A) = f(A)(A - a) = 1,
$$

and so $A - a$ is right and left invertible. Going back to Subsection 2.4, we see that $(x - a)^{-1} \in S^\infty(\mathbb R^1)$ for any nonreal $a$. Therefore, the spectrum of each generator lies completely on the real axis, so that the polynomial estimates for the growth of $\exp(iAt)$ become less surprising (in fact, the position of the spectrum of $A$ on the real axis is not sufficient for the polynomial growth of $\exp(iAt)$; some estimates of the resolvent of $A$ are also needed; however, these estimates can also be derived from the fact that $\mu_A : f \mapsto f(A)$ is defined and continuous on $S^\infty(\mathbb R^1)$).

Example 1.4 This example is somewhat less trivial. Suppose that the symbol class $\mathcal F_3$ contains the function $f(y_1, y_2, y_3) = e^{i(y_3 - y_1)y_2}$. Suppose also that the operators $A$ and $B$ satisfy

$$
[A, B] = -i
$$

(for example, $A = -i\,\partial/\partial x$ and $B = x$). Then at least one of the operators $A$ and $B$ is not an $\mathcal F$-generator. Indeed, we have, by Theorem 1.10,

$$
f(\overset{1}{A}, \overset{2}{B}, \overset{3}{A}) = f(\overset{1}{A}, \overset{3}{B}, \overset{2}{A}) + \overset{4}{[A, B]}\,\frac{\delta^2 f}{\delta y_2\,\delta y_3}(\overset{1}{A}, \overset{3}{B}, \overset{5}{B}, \overset{2}{A}, \overset{6}{A}) = f(\overset{1}{A}, \overset{3}{B}, \overset{2}{A}) - i\,\frac{\delta}{\delta y_3}\frac{\partial f}{\partial y_2}(\overset{1}{A}, \overset{3}{B}, \overset{2}{A}, \overset{4}{A}),
$$

since the commutator $[A, B] = -i$ commutes with $B$. Furthermore,

$$
\frac{\partial f}{\partial y_2}(y_1, y_2, y_3) = i\,(y_3 - y_1)\,e^{i(y_3 - y_1)y_2}.
$$

We use the Leibniz rule for the difference derivative,

$$
\frac{\delta}{\delta x}\bigl(f(x)g(x)\bigr)(x_1, x_2) = \frac{\delta f}{\delta x}(x_1, x_2)\,g(x_1) + f(x_2)\,\frac{\delta g}{\delta x}(x_1, x_2),
$$

and obtain

$$
\frac{\delta}{\delta y_3}\frac{\partial f}{\partial y_2}(y_1, y_2, y_3, y_4) = i\,e^{i(y_4 - y_1)y_2} + i\,(y_3 - y_1)\,\varphi(y_1, y_2, y_3, y_4),
$$

where

$$
\varphi(y_1, y_2, y_3, y_4) = \frac{e^{i(y_3 - y_1)y_2} - e^{i(y_4 - y_1)y_2}}{y_3 - y_4}.
$$

By substituting the last expression into the previously obtained expression for $f(\overset{1}{A}, \overset{2}{B}, \overset{3}{A})$, we find that

$$
e^{i(\overset{3}{A} - \overset{1}{A})\overset{2}{B}} = e^{i(\overset{2}{A} - \overset{1}{A})\overset{3}{B}} + e^{i(\overset{4}{A} - \overset{1}{A})\overset{3}{B}} + (\overset{2}{A} - \overset{1}{A})\,\varphi(\overset{1}{A}, \overset{3}{B}, \overset{2}{A}, \overset{4}{A}).
$$

However,

$$
e^{i(\overset{2}{A} - \overset{1}{A})\overset{3}{B}} = e^{i(\overset{2}{A} - \overset{2}{A})\overset{3}{B}} = 1
$$

by Proposition 1.1,

$$
e^{i(\overset{4}{A} - \overset{1}{A})\overset{3}{B}} = e^{i(\overset{3}{A} - \overset{1}{A})\overset{2}{B}},
$$

and

$$
(\overset{2}{A} - \overset{1}{A})\,\varphi(\overset{1}{A}, \overset{3}{B}, \overset{2}{A}, \overset{4}{A}) = \overset{1}{[\![}\; \overset{2}{A} - \overset{1}{A} \;]\!]\,\varphi(\overset{0}{A}, \overset{4}{B}, \overset{3}{A}, \overset{5}{A}) = 0.
$$

Thus we obtain

$$
e^{i(\overset{3}{A} - \overset{1}{A})\overset{2}{B}} = 1 + e^{i(\overset{3}{A} - \overset{1}{A})\overset{2}{B}},
$$
which is a contradiction. Thus, $A$ and $B$ cannot be generators simultaneously.

Example 1.5 Let $\Gamma$ be the Riemann surface of the function $\sqrt{z} = \sqrt{x + iy}$. It is a double covering of $\mathbb C \setminus \{0\}$. We will consider $\Gamma$ as a manifold with local coordinates $(x, y)$. Consider the operators

$$
A = -i\,\frac{\partial}{\partial x} \quad \text{and} \quad B = -i\,\frac{\partial}{\partial y}
$$

on the Hilbert space $L^2(\Gamma)$ (the inner product on $L^2(\Gamma)$ is defined by the integral over the Lebesgue measure $dx\,dy$). These operators are essentially self-adjoint on the dense subspace $C_0^\infty(\Gamma) \subset L^2(\Gamma)$ of smooth compactly supported functions and commute on $C_0^\infty(\Gamma)$,

$$
\frac{\partial^2 f}{\partial y\,\partial x} = \frac{\partial^2 f}{\partial x\,\partial y}
$$

for any $f \in C_0^\infty(\Gamma)$. Consequently, they generate unitary groups on $L^2(\Gamma)$. Thus, $[A, B] = 0$, and, according to the permutation formula, we should apparently have

$$
g(\overset{1}{A}, \overset{2}{B}) = g(\overset{2}{A}, \overset{1}{B})
$$

for any symbol $g \in S^\infty(\mathbb R^2)$. However, this is not the case. Indeed, consider the symbol

$$
g(z_1, z_2) = e^{itz_1}\, e^{i\tau z_2}
$$

(here $t$ and $\tau$ are parameters). The corresponding operators

$$
g(\overset{1}{A}, \overset{2}{B}) = e^{i\tau B}\, e^{itA} \quad \text{and} \quad g(\overset{2}{A}, \overset{1}{B}) = e^{itA}\, e^{i\tau B}
$$

are simply the products of the corresponding unitary groups, which are the translations by $t$ and $\tau$ along the $x$- and $y$-axes, respectively. But the translations along the two axes do not commute on $\Gamma$: starting from the point $(x, y) = (1, 1)$ and performing the translations by $-2$ along both axes, we move to different sheets of the Riemann surface depending on the order of the translations performed. Thus $A$ and $B$ cannot be generators simultaneously in any Hilbert scale including $L^2(\Gamma)$. The reason for this phenomenon is that the product of the groups $e^{itA}\, e^{i\tau B}$ is not strongly differentiable on $C_0^\infty(\Gamma)$, and $C_0^\infty(\Gamma)$ is not invariant under these semigroups: as soon as in the course of a translation the support of a function collides with the branching point $(0, 0)$, the function loses its differentiability.
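The sheet-swapping effect in Example 1.5 can be modelled with a toy data structure: a point of the double cover is a triple $(x, y, \text{sheet})$, and the sheet flips whenever a coordinate path crosses the branch cut, placed here along the negative $x$-axis. All names and conventions below are our own simplification, not the book's:

```python
# Toy model of the double cover of C \ {0}: a point is (x, y, sheet),
# with the branch cut along the negative x-axis.

def translate_y(p, dy):
    x, y, sheet = p
    # Crossing y = 0 while x < 0 means crossing the cut: swap sheets.
    if x < 0 and (y > 0) != (y + dy > 0):
        sheet = 1 - sheet
    return (x, y + dy, sheet)

def translate_x(p, dx):
    x, y, sheet = p            # horizontal moves never cross this cut
    return (x + dx, y, sheet)

start = (1.0, 1.0, 0)
path1 = translate_y(translate_x(start, -2.0), -2.0)  # first x, then y
path2 = translate_x(translate_y(start, -2.0), -2.0)  # first y, then x

# Same endpoint in the base, but different sheets: the two translation
# groups do not commute on the double cover.
assert path1[:2] == path2[:2]
assert path1[2] != path2[2]
```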
2.6 Weyl Quantization

We have given the definition of functions of Feynman tuples of noncommuting operators. The approach was to equip the operators with Feynman indices and arrange them in products according to the ordering of these indices. In the literature one can find another approach to the construction of the functional calculus of several noncommuting operators, known as the Weyl functional calculus or Weyl quantization. We neither study nor use the Weyl functional calculus in this book but only say a few words about it in this subsection.

In contrast to Feynman quantization, Weyl quantization is symmetric; this means that none of the operators acts first or last. In a sense, they all act "simultaneously". Let $A_1, \ldots, A_n$ be a tuple of operators and $f$ a polynomial of the form

$$
f(z_1, \ldots, z_n) = \varphi(\alpha_1 z_1 + \alpha_2 z_2 + \cdots + \alpha_n z_n),
$$

where $\alpha_1, \ldots, \alpha_n$ are numbers and $\varphi(y)$ is a univariate polynomial. Then the Weyl-quantized function of $A_1, \ldots, A_n$ with symbol $f$ is defined by

$$
f_W(A_1, \ldots, A_n) = \varphi\bigl([\![\, \alpha_1 A_1 + \cdots + \alpha_n A_n \,]\!]\bigr)
$$

(the autonomous brackets are not obligatory here, but we let them stand for clarity). The subscript $W$ stands for "Weyl". Now if $f(z_1, \ldots, z_n)$ is an arbitrary polynomial, it can be represented in the form

$$
f(z_1, \ldots, z_n) = \sum_{\varphi, \alpha_1, \ldots, \alpha_n} \varphi(\alpha_1 z_1 + \cdots + \alpha_n z_n),
$$

where the $\varphi$ are univariate polynomials, $\alpha_1, \ldots, \alpha_n$ are complex numbers, and the sum is finite. We set

$$
f_W(A_1, \ldots, A_n) = \sum_{\varphi, \alpha_1, \ldots, \alpha_n} \varphi\bigl([\![\, \alpha_1 A_1 + \cdots + \alpha_n A_n \,]\!]\bigr);
$$

this is well-defined.

The main property of Weyl quantization is its affine covariance. Let $M$ be an $n \times n$ matrix and let $f(z) = g(Mz)$; then $f_W(A) = g_W(MA)$, where $MA$ is the tuple of operators defined by

$$
(MA)_j = \sum_{k=1}^n m_{jk} A_k.
$$
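For the monomial $z_1 z_2$, the definition via symbols $\varphi(\alpha_1 z_1 + \alpha_2 z_2)$ forces the symmetrized product: the polarization identity $z_1 z_2 = \frac14\bigl((z_1 + z_2)^2 - (z_1 - z_2)^2\bigr)$ expresses $z_1 z_2$ through squares of linear forms, and quantizing each square gives $(A_1 A_2 + A_2 A_1)/2$. A minimal matrix check (our own sketch, not from the book):

```python
import numpy as np

rng = np.random.default_rng(5)
A1 = rng.standard_normal((3, 3))
A2 = rng.standard_normal((3, 3))

# z1*z2 = ((z1 + z2)^2 - (z1 - z2)^2) / 4 is a combination of symbols of
# the form phi(a1*z1 + a2*z2), so its Weyl quantization is forced to be:
sym = (np.linalg.matrix_power(A1 + A2, 2)
       - np.linalg.matrix_power(A1 - A2, 2)) / 4

# ... which is exactly the symmetrized (Weyl) product.
assert np.allclose(sym, (A1 @ A2 + A2 @ A1) / 2)
```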
To define Weyl quantization for general symbol classes, it is necessary to make some additional assumptions, namely: 1°. Any linear combination ai Ai ± • • • ± an A n is an ..F-generator, where .F is the class of univariate symbols. 2°. The set of functions f (ai z 1 + • • • + anzn) for various f c .T and a1 , . . . , an is dense in the class Tn of n-ary symbols. (The admissible values of the scalars a 1 , • .. , an can be taken either complex or real, depending on the specific case, but simultaneously in 1° and 2° Under these assumptions, the affine covariance property in conjunction with the requirement that f (A) is defined as usual for unary symbols determines the Weyl quantization uniquely. Let us consider two examples.
Example 1.6 (Weyl functions of self-adjoint operators) Let $A_1, \dots, A_n$ be self-adjoint operators on a Hilbert space $H$. We assume that there exists a dense linear subset $D \subset H$ such that for any $\alpha_1, \dots, \alpha_n \in \mathbf{R}$ the linear combination $\alpha A = \alpha_1 A_1 + \cdots + \alpha_n A_n$ is essentially self-adjoint on $D$ (that is, $\alpha A$ is closable and its closure is a self-adjoint operator on $H$). Then the exponential $U(\alpha) = \exp(i \alpha A)$ is well-defined for any $\alpha \in \mathbf{R}^n$ and is strongly continuous with respect to $\alpha$ (see [179]). We define $f_W(A_1, \dots, A_n)$ for $f(z) \in C_0^\infty(\mathbf{R}^n)$ by the formula
$$ f_W(A_1, \dots, A_n) = \Bigl(\frac{1}{2\pi}\Bigr)^{n/2} \int_{\mathbf{R}^n} \tilde{f}(\alpha)\, U(\alpha)\, d\alpha, $$
where $\tilde{f}(\alpha)$ is the Fourier transform of $f(z)$ and the integral is convergent in the strong sense, or else by the formula
$$ f_W(A_1, \dots, A_n) = \Bigl(\frac{1}{2\pi}\Bigr)^{n/2} \int_{\mathbf{R}^n} f(z)\, \tilde{U}(z)\, dz, $$
where the operator-valued distribution $\tilde{U}(z)$ is the Fourier transform (in the sense of strong convergence) of $U(\alpha)$ (see [2] and [3]).
If $D$ is invariant under $A_1, \dots, A_n$, then the above definition can be extended to symbols of tempered growth (i.e., symbols growing polynomially as $|z| \to \infty$ together with all derivatives). Weyl quantization has a minor advantage from the quantum-mechanical point of view. Namely, if $A_1, \dots, A_n$ are self-adjoint, then Weyl quantization takes real-valued functions (Hamiltonian functions in quantum mechanics) into self-adjoint operators. This is not at all essential in Cartesian coordinates, since the Hamiltonian function usually has the form
$$ H(x, p) = \frac{p^2}{2m} + V(x); $$
the position and momentum operators are not mixed, and the Feynman and Weyl quantizations give the same result. However, in curvilinear generalized coordinates the Hamiltonian function becomes
$$ H(q, p) = a^{ij}(q)\, p_i p_j + V(q), $$
and the choice of ordering is essential. It is Weyl quantization that takes $H(q, p)$ into a self-adjoint operator.
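The self-adjointness claim is easy to see in finite dimensions. In the sketch below (numpy assumed; the Hermitian matrices $Q, P$ are stand-ins for position and momentum, not the book's unbounded operators), the fully symmetrized (Weyl) ordering of the real symbol $q p^2$ is Hermitian, while a one-sided ordering is not:

```python
import numpy as np

rng = np.random.default_rng(1)
# random Hermitian "position" and "momentum" matrices standing in for q and p
Q = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
Q = Q + Q.conj().T
P = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
P = P + P.conj().T

naive = Q @ P @ P                                 # one particular ordering of q p^2
weyl = (Q @ P @ P + P @ Q @ P + P @ P @ Q) / 3    # symmetrized (Weyl) ordering

assert np.allclose(weyl, weyl.conj().T)           # Weyl ordering is self-adjoint
assert not np.allclose(naive, naive.conj().T)     # the one-sided ordering is not
```

For monomial symbols the Weyl quantization is exactly this full symmetrization over all orderings, which is why reality of the symbol translates into self-adjointness of the operator.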
Example 1.7 (Analytic Weyl functions of bounded operators) Let $A_1, \dots, A_n$ be bounded operators in a Banach space $B$ such that for any $(\alpha_1, \dots, \alpha_n) \in \mathbf{C}^n$ with $|\alpha_1|^2 + \cdots + |\alpha_n|^2 \le 1$ the spectrum $\sigma(\alpha_1 A_1 + \cdots + \alpha_n A_n)$ lies inside the unit disk. Let $f(z)$, $z \in \mathbf{C}^n$, be a holomorphic function in a neighbourhood of the unit ball $U$ in $\mathbf{C}^n$ centered at the origin. Then the Cauchy–Fantappiè formula
$$ f(z) = \frac{(n-1)!}{(2\pi i)^n} \int_{\{|\zeta| = 1\}} f(\zeta) \Bigl(1 - \sum_{\nu=1}^{n} \bar\zeta_\nu z_\nu\Bigr)^{-n} \sum_{\nu=1}^{n} (-1)^{\nu-1}\, \bar\zeta_\nu\; d\bar\zeta_1 \wedge \cdots \wedge d\bar\zeta_{\nu-1} \wedge d\bar\zeta_{\nu+1} \wedge \cdots \wedge d\bar\zeta_n \wedge d\zeta_1 \wedge \cdots \wedge d\zeta_n $$
is valid for $z \in U$. For $\zeta \in \partial U$ the spectrum of $\bar\zeta_1 A_1 + \cdots + \bar\zeta_n A_n$ lies in the unit disk, and, consequently, the resolvent $\bigl(1 - \sum_\nu \bar\zeta_\nu A_\nu\bigr)^{-1}$ is well-defined. We use the Cauchy–Fantappiè formula to define the Weyl-quantized function $f_W(A_1, \dots, A_n)$:
$$ f_W(A_1, \dots, A_n) = \frac{(n-1)!}{(2\pi i)^n} \int_{\{|\zeta| = 1\}} f(\zeta) \Bigl(1 - \sum_{\nu=1}^{n} \bar\zeta_\nu A_\nu\Bigr)^{-n} \sum_{\nu=1}^{n} (-1)^{\nu-1}\, \bar\zeta_\nu\; d\bar\zeta_1 \wedge \cdots \wedge d\bar\zeta_{\nu-1} \wedge d\bar\zeta_{\nu+1} \wedge \cdots \wedge d\bar\zeta_n \wedge d\zeta_1 \wedge \cdots \wedge d\zeta_n. $$
We considered here the simplest case, in which the domain is the unit ball. Of course, more complicated spectrum structures can be handled as well, by using the Cauchy–Fantappiè formula for more complicated domains; we leave this to the reader as an exercise.
3 Noncommutative Differential Calculus

The conventional differential calculus deals with the local behaviour of functions; given a function $f(x)$ and a point $x_0$, the main question is as follows: what can be said about $f(x_0 + \Delta x)$ as $\Delta x \to 0$? The ultimate answer is that, provided $f(x)$ is a "well-behaved" function, one has
$$ f(x_0 + \Delta x) = f(x_0) + f'(x_0)\,\Delta x + f''(x_0)\,\frac{(\Delta x)^2}{2} + \cdots + f^{(N)}(x_0)\,\frac{(\Delta x)^N}{N!} + O(\Delta x^{N+1}), $$
i.e., $f(x_0 + \Delta x)$ can be expanded into a Taylor series in powers of $\Delta x$. (Of course, all of differential calculus does not reduce to this; but this is the essence.) We will pose a similar question for functions of noncommuting operators, and the machinery developed to answer it will be called noncommutative differential calculus. We will see that the matter is rich in subtleties, and we shall try to explain them as clearly as possible.

Thus, we should consider $f(A + \Delta A)$, where $\Delta A$ is "small", $\Delta A \to 0$; but what does this requirement mean? We follow the usual practice of perturbation theory and take $\Delta A = \varepsilon B$, where $\varepsilon$ is a small numerical parameter. This permits us to keep track of infinitesimals of different orders easily, since the orders are indicated by the powers of $\varepsilon$. In Subsection 1.4 we have already considered two examples of computation of $f(A + \varepsilon B)$, namely, for
$$ f(x) = \frac{1}{\lambda - x} \quad \text{and} \quad f(x) = \exp(x). $$
Here we deal with the general setting. In the conventional differential calculus, the first-order term $f'(x_0)\,\Delta x$ of the Taylor expansion is of extreme importance; it bears the special name of the first differential of $f$, and teaching differential calculus usually begins with its intensive study. By analogy with that case, we are specifically interested in the term of order $\varepsilon$ in the expression
$$ f(A + \varepsilon B) = f(A) + \varepsilon C_1 + \varepsilon^2 C_2 + \cdots. $$
It would be wise to ask why such an expansion exists at all; however, as is the case for the conventional calculus, the computational method at the same time provides justification for the expansion. The coefficient of $\varepsilon$ can be obtained as
$$ C_1 = \frac{d}{d\varepsilon}\bigl[f(A + \varepsilon B)\bigr]\Big|_{\varepsilon = 0}, \tag{I.16} $$
i.e., by differentiating with respect to $\varepsilon$ followed by setting $\varepsilon = 0$. However, we will find an expression for a more general "derivative", of which this is a particular case.
3.1 The Derivation Formula

Let $\mathcal{A}$ be an operator algebra. A derivation of $\mathcal{A}$ is an arbitrary linear mapping
$$ D : \mathcal{A} \to \mathcal{A} \tag{I.17} $$
satisfying the Leibniz rule $D(uv) = (Du)v + u(Dv)$, $u, v \in \mathcal{A}$. Let $A \in \mathcal{A}$ be a generator and $f \in \mathcal{F}$ a symbol. We claim that
$$ D[f(A)] = \overset{2}{[[DA]]}\,\frac{\delta f}{\delta x}\bigl(\overset{1}{A}, \overset{3}{A}\bigr) \tag{I.18} $$
for an arbitrary derivation (I.17). The proof will be divided into several stages. First, we consider a special class of derivations.
Definition 1.3 Derivations of the form $D(A) = D_B(A) = BA - AB$, where $B$ is an arbitrary element of $\mathcal{A}$, are called inner derivations of $\mathcal{A}$.

It is easy to show that the commutator indeed defines a derivation; the corresponding computation is rather simple and is left to the reader.

Proposition 1.3 Formula (I.18) is valid for inner derivations.

Proof. We need to compute the commutator
$$ [B, f(A)] = B f(A) - f(A) B. $$
So far, there are no Feynman indices in this expression; let us introduce some. Note that they can be chosen independently for either summand on the right; we may prefer
$$ \overset{2}{B}\, f(\overset{1}{A}) - f(\overset{1}{A})\, \overset{2}{B}, \qquad \text{or} \qquad \overset{5}{B}\, f(\overset{2}{A}) - f(\overset{10}{A})\, \overset{7}{B}, $$
or whatever other choice, provided that the indices are consistent with the order of the factors. We would like to factor out $B$, and so we write
$$ [B, f(A)] = \overset{2}{B}\, f(\overset{1}{A}) - f(\overset{3}{A})\, \overset{2}{B} = \overset{2}{B}\,\bigl( f(\overset{1}{A}) - f(\overset{3}{A}) \bigr). $$
Of course, we cannot simply put $f(\overset{1}{A}) - f(\overset{3}{A}) = 0$, because the Feynman index of $B$ lies just between those of the first and the second $A$.
The subsequent computations are quite simple: we obtain, by multiplying and dividing by $\overset{1}{A} - \overset{3}{A}$,
$$ [B, f(A)] = \overset{2}{B}\,(\overset{1}{A} - \overset{3}{A})\,\frac{f(\overset{1}{A}) - f(\overset{3}{A})}{\overset{1}{A} - \overset{3}{A}} = \overset{2}{B}\,(\overset{1}{A} - \overset{3}{A})\,\frac{\delta f}{\delta x}\bigl(\overset{1}{A}, \overset{3}{A}\bigr) $$
(the division by $\overset{1}{A} - \overset{3}{A}$ seems to be very dangerous, but in fact it is not so; actually, we used the identity
$$ f(x) - f(y) = (x - y)\,\frac{\delta f}{\delta x}(x, y), $$
which is valid everywhere, including the diagonal $x = y$, where
$$ \frac{\delta f}{\delta x}(x, x) = f'(x)\ \bigr). $$
We now move apart the Feynman indices over the $A$'s, thus obtaining
$$ \overset{2}{B}\,(\overset{1}{A} - \overset{3}{A})\,\frac{\delta f}{\delta x}\bigl(\overset{1}{A}, \overset{3}{A}\bigr) = \overset{2}{B}\,(\overset{1}{A} - \overset{3}{A})\,\frac{\delta f}{\delta x}\bigl(\overset{0}{A}, \overset{4}{A}\bigr) = \Bigl[\Bigl[\overset{2}{B}\,(\overset{1}{A} - \overset{3}{A})\Bigr]\Bigr]\,\frac{\delta f}{\delta x}\bigl(\overset{1}{A}, \overset{3}{A}\bigr) $$
(it is easy to see that the introduction of autonomous brackets is valid). It remains to notice that the operator in brackets is equal to
$$ \overset{2}{B}\,(\overset{1}{A} - \overset{3}{A}) = BA - AB = [B, A], $$
and, consequently,
$$ [B, f(A)] = \overset{2}{[B, A]}\,\frac{\delta f}{\delta x}\bigl(\overset{1}{A}, \overset{3}{A}\bigr). $$
The proposition is proved. □

Theorem 1.7 Formula (I.18) is valid for an arbitrary derivation of $\mathcal{A}$.

In order to prove Theorem 1.7, we need the following lemma.
Lemma I.1 The left regular representation $L$ takes each derivation $D$ of $\mathcal{A}$ into the inner derivation $[D, \cdot\,]$ of $\mathcal{L}(\mathcal{A})$.

Proof. Let $D$ be a derivation of $\mathcal{A}$ and $A \in \mathcal{A}$. For any $C \in \mathcal{A}$,
$$ L_{D(A)}(C) = D(A)\,C = D(AC) - A\,D(C) $$
by the Leibniz rule; we can thus write
$$ L_{D(A)}(C) = D(L_A C) - L_A D(C) = [D, L_A](C), $$
i.e.,
$$ L_{D(A)} = [D, L_A], $$
that is, we have the commutative diagram
$$ \begin{array}{ccc} \mathcal{A} & \overset{D}{\longrightarrow} & \mathcal{A} \\ \downarrow L & & \downarrow L \\ \mathcal{L}(\mathcal{A}) & \overset{[D,\,\cdot\,]}{\longrightarrow} & \mathcal{L}(\mathcal{A}) \end{array} $$
which shows that any derivation $D$ is transformed by $L$ into the operator $[D, \cdot\,]$ of commutation with $D$ in $\mathcal{L}(\mathcal{A})$. The lemma is proved. □

Proof of Theorem 1.7. Let $D$ be an arbitrary derivation of $\mathcal{A}$. By Proposition 1.3,
$$ [D, f(L_A)] = \overset{2}{[D, L_A]}\,\frac{\delta f}{\delta x}\bigl(\overset{1}{L_A}, \overset{3}{L_A}\bigr). $$
By Theorem 1.2,
$$ f(L_A) = L_{f(A)}, $$
and so, by Lemma I.1, we obtain
$$ L_{D(f(A))} = \overset{2}{L_{D(A)}}\,\frac{\delta f}{\delta x}\bigl(\overset{1}{L_A}, \overset{3}{L_A}\bigr) = L_{\overset{2}{D(A)}\frac{\delta f}{\delta x}(\overset{1}{A}, \overset{3}{A})} $$
(we have applied Theorem 1.2 one more time). Since $L$ is a faithful representation, we obtain
$$ D(f(A)) = \overset{2}{D(A)}\,\frac{\delta f}{\delta x}\bigl(\overset{1}{A}, \overset{3}{A}\bigr), $$
as desired. The proof is complete. □
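For the inner derivation $D_B = [B, \cdot\,]$ and the monomial symbol $f(x) = x^n$, the derivation formula says that $[B, A^n]$ equals the sum of the products $A^j [B, A] A^i$ over $i + j = n - 1$ (this is what $\overset{2}{[B,A]}\,\delta f/\delta x(\overset{1}{A}, \overset{3}{A})$ unwinds to). A minimal numerical check (numpy assumed; not from the book):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
n = 5

def mpow(M, k):
    return np.linalg.matrix_power(M, k)

# inner derivation D_B applied to f(A) = A^n
lhs = B @ mpow(A, n) - mpow(A, n) @ B
# derivation formula: [B, A] sandwiched between the two "copies" of A,
# since delta f/delta x (x, y) = sum_{i+j=n-1} x^i y^j for f = x^n
K = B @ A - A @ B
rhs = sum(mpow(A, j) @ K @ mpow(A, n - 1 - j) for j in range(n))
assert np.allclose(lhs, rhs)
```

The identity is exact (a telescoping sum), so the assertion holds for any square matrices, not just "small" perturbations.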
3.2 The Daletskii–Krein Formula

Let us now return to our original problem of computing the coefficient $C_1$ defined in (I.16). To this end, consider the algebra $\mathcal{A}\{t\}$ whose elements are (infinitely differentiable) families of elements of $\mathcal{A}$ depending on a numerical parameter $t$. Clearly, the mapping
$$ \frac{d}{dt} : \mathcal{A}\{t\} \to \mathcal{A}\{t\} $$
taking each family $A(t) \in \mathcal{A}\{t\}$ into its $t$-derivative is a continuous derivation of the algebra $\mathcal{A}\{t\}$. By Theorem 1.7 we obtain
$$ \frac{d}{dt}\, f(A(t)) = \overset{2}{A'(t)}\,\frac{\delta f}{\delta x}\bigl(\overset{1}{A(t)}, \overset{3}{A(t)}\bigr). $$
This is the famous Daletskii–Krein formula, obtained by these authors in [29] for the case of self-adjoint unbounded operators on a Hilbert space (their technique involved spectral families). From this we easily derive a formula for the coefficient (I.16). Namely, take $A(\varepsilon) = A + \varepsilon B$; then $A'(\varepsilon) = B$, and we obtain
$$ C_1 = \overset{2}{B}\,\frac{\delta f}{\delta x}\bigl(\overset{1}{A}, \overset{3}{A}\bigr), $$
$$ f(A + \varepsilon B) = f(A) + \varepsilon\,\overset{2}{B}\,\frac{\delta f}{\delta x}\bigl(\overset{1}{A}, \overset{3}{A}\bigr) + O(\varepsilon^2). $$

Remark 1.8 If $[B, A] = 0$, one can take the same Feynman index for both $A$'s on the right-hand side of the last equation, thus obtaining the usual Taylor expression
$$ f(A + \varepsilon B) = f(A) + \varepsilon\, B\,\frac{\partial f}{\partial x}(A) + O(\varepsilon^2). $$
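The first-order coefficient $C_1$ can be compared with a finite-difference derivative. The sketch below (numpy assumed; not from the book) takes $f(x) = x^n$, for which $\overset{2}{B}\,\delta f/\delta x(\overset{1}{A},\overset{3}{A}) = \sum_{i+j=n-1} A^j B A^i$:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
n = 4            # f(x) = x^n
eps = 1e-6

def f(M):
    return np.linalg.matrix_power(M, n)

# Daletskii-Krein coefficient: C1 = B^2 (delta f/delta x)(A^1, A^3),
# which for f = x^n is the sum of A^j B A^i over i + j = n - 1
C1 = sum(np.linalg.matrix_power(A, j) @ B @ np.linalg.matrix_power(A, n - 1 - j)
         for j in range(n))
# compare with a symmetric difference quotient of f(A + eps*B) in eps
numeric = (f(A + eps * B) - f(A - eps * B)) / (2 * eps)
assert np.allclose(C1, numeric, atol=1e-4)
```

The tolerance absorbs the $O(\varepsilon^2)$ truncation error of the difference quotient together with floating-point cancellation.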
3.3 Higher-Order Expansions

What we discussed in the preceding subsection was, in fact, the first-order infinitesimal calculus in a noncommutative setting. We have evaluated the differential of $f(A)$, that is, the linear part of the increment $f(C) - f(A)$ with respect to the difference $B = C - A$. It is given by the Daletskii–Krein formula, which can be rewritten as
$$ df(A, B) \overset{\mathrm{def}}{=} \Bigl[\frac{d}{d\varepsilon}\, f(A + \varepsilon B)\Bigr]_{\varepsilon = 0} = \overset{2}{B}\,\frac{\delta f}{\delta x}\bigl(\overset{1}{A}, \overset{3}{A}\bigr) $$
and employs at least the notation of noncommutative analysis, although there was no indication of it being useful on the left-hand side. However, we should like to develop infinitesimal calculus to its full extent, which assumes deriving expansions of arbitrarily high order for the increment cited, with the remainders written out explicitly. Hence, let us consider the difference $f(C) - f(A)$ more comprehensively. Assume that $C - A = \varepsilon B$, where $\varepsilon$, as above, will be treated as a small parameter. Then it becomes much easier to keep track of the orders of various terms in our formulas and to explain convincingly the order of accuracy of our expansions. The simplest and clearest method to obtain the expansion of
$$ f(C) - f(A) = f(A + \varepsilon B) - f(A) $$
in powers of $\varepsilon$ is to apply the Daletskii–Krein formula successively. Namely, we can write down the usual MacLaurin expansion
$$ f(A + \varepsilon B) - f(A) \simeq \sum_{k=1}^{\infty} \frac{C_k\, \varepsilon^k}{k!}, $$
where the coefficient $C_k$ of $\varepsilon^k$ is the $k$th $\varepsilon$-derivative of $f(A + \varepsilon B)$ at $\varepsilon = 0$. By the way, the Daletskii–Krein formula holds for $\varepsilon \ne 0$ as well:
$$ \frac{d}{d\varepsilon}\, f(A + \varepsilon B) = \overset{2}{B}\,\frac{\delta f}{\delta x}\bigl(\overset{1}{[[A + \varepsilon B]]}, \overset{3}{[[A + \varepsilon B]]}\bigr). $$
Thus we can differentiate it with respect to $\varepsilon$ once more. There will clearly be two terms, arising from the differentiation in the first and in the second argument of $\delta f/\delta x$; and the formula itself applies to each of these terms. Hence we obtain
$$ \frac{d^2}{d\varepsilon^2}\, f(A + \varepsilon B) = 2\,\overset{2}{B}\,\overset{4}{B}\,\frac{\delta^2 f}{\delta x^2}\bigl(\overset{1}{[[A + \varepsilon B]]}, \overset{3}{[[A + \varepsilon B]]}, \overset{5}{[[A + \varepsilon B]]}\bigr) $$
(the factor 2 is due to the presence of two terms, which are equal to each other since $\delta f/\delta x\,(x, y)$ is symmetric,
$$ \frac{\delta f}{\delta x}(x, y) = \frac{\delta f}{\delta x}(y, x), $$
and so it makes no difference with respect to which argument the subsequent derivatives are taken). At this stage, it is easy to predict the general result (and to prove it, which we leave to the reader):
$$ \frac{d^k}{d\varepsilon^k}\, f(A + \varepsilon B) = k!\;\overset{2}{B}\,\overset{4}{B} \cdots \overset{2k}{B}\,\frac{\delta^k f}{\delta x^k}\bigl(\overset{1}{[[A + \varepsilon B]]}, \overset{3}{[[A + \varepsilon B]]}, \dots, \overset{2k+1}{[[A + \varepsilon B]]}\bigr). $$
Thus the factor $1/k!$ in the MacLaurin expansion cancels out, and we obtain the Newton formula
$$ f(A + \varepsilon B) - f(A) \simeq \sum_{k=1}^{\infty} \varepsilon^k\; \overset{2}{B}\,\overset{4}{B} \cdots \overset{2k}{B}\,\frac{\delta^k f}{\delta x^k}\bigl(\overset{1}{A}, \overset{3}{A}, \dots, \overset{2k+1}{A}\bigr), $$
or, forgetting about $\varepsilon$,
$$ f(C) - f(A) \simeq \sum_{k=1}^{\infty} \underbrace{\overset{2}{[[C - A]]} \cdots \overset{2k}{[[C - A]]}}_{k\ \text{copies}}\;\frac{\delta^k f}{\delta x^k}\bigl(\underbrace{\overset{1}{A}, \overset{3}{A}, \dots, \overset{2k+1}{A}}_{k+1\ \text{arguments}}\bigr). $$
However, these formulas are still of little practical importance, chiefly because of the mysterious sign "$\simeq$" in the middle. What does it mean exactly? This is not so easy to explain without going into functional-analytic peculiarities, but there is an alternative: we will always be on the safe side if we have an explicit formula for the remainder. Fortunately, such a formula is at hand.
Theorem 1.8 (Newton's formula with remainder) Let $A$ and $C$ be generators. Then for any symbol $f(x)$ and for any positive integer $N$ we have
$$ f(C) - f(A) = \sum_{k=1}^{N-1} \overset{2}{[[C - A]]} \cdots \overset{2k}{[[C - A]]}\;\frac{\delta^k f}{\delta x^k}\bigl(\overset{1}{A}, \overset{3}{A}, \dots, \overset{2k+1}{A}\bigr) + R_N, $$
where the remainder $R_N$ is given by the formula
$$ R_N = \overset{2}{[[C - A]]} \cdots \overset{2N}{[[C - A]]}\;\frac{\delta^N f}{\delta x^N}\bigl(\overset{1}{C}, \overset{3}{A}, \dots, \overset{2N+1}{A}\bigr). $$
Proof. We proceed by induction on $N$. Clearly,
$$ f(C) - f(A) = f(\overset{3}{C}) - f(\overset{1}{A}) = (\overset{3}{C} - \overset{1}{A})\,\frac{f(\overset{3}{C}) - f(\overset{1}{A})}{\overset{3}{C} - \overset{1}{A}}. $$
We can move the indices apart and then isolate the factor $C - A$:
$$ (\overset{3}{C} - \overset{1}{A})\,\frac{f(\overset{3}{C}) - f(\overset{1}{A})}{\overset{3}{C} - \overset{1}{A}} = (\overset{3}{C} - \overset{1}{A})\,\frac{f(\overset{4}{C}) - f(\overset{0}{A})}{\overset{4}{C} - \overset{0}{A}} = \Bigl[\Bigl[\overset{3}{C} - \overset{1}{A}\Bigr]\Bigr]\,\frac{f(\overset{4}{C}) - f(\overset{0}{A})}{\overset{4}{C} - \overset{0}{A}}. $$
Since the quotient is none other than the difference derivative, it follows that
$$ f(C) - f(A) = \overset{2}{[[C - A]]}\,\frac{\delta f}{\delta x}\bigl(\overset{1}{C}, \overset{3}{A}\bigr), \tag{I.19} $$
which is precisely Newton's formula for $N = 1$. Let us carry out the induction step $N = 1 \Rightarrow N = 2$. To this end, we apply the identity (I.19) to the difference$^4$
$$ \overset{2}{[[C - A]]}\,\frac{\delta f}{\delta x}\bigl(\overset{1}{C}, \overset{3}{A}\bigr) - \overset{2}{[[C - A]]}\,\frac{\delta f}{\delta x}\bigl(\overset{1}{A}, \overset{3}{A}\bigr). $$
We obtain
$$ \overset{2}{[[C - A]]}\,\frac{\delta f}{\delta x}\bigl(\overset{1}{C}, \overset{3}{A}\bigr) = \overset{2}{[[C - A]]}\,\frac{\delta f}{\delta x}\bigl(\overset{1}{A}, \overset{3}{A}\bigr) + \overset{2}{[[C - A]]}\,\overset{4}{[[C - A]]}\,\frac{\delta^2 f}{\delta x^2}\bigl(\overset{1}{C}, \overset{3}{A}, \overset{5}{A}\bigr). $$
The second term, up to an inessential change in indices, is just the remainder $R_2$. Proceeding in a similar way, we accomplish the proof. □

$^4$ This means that we apply the identity (I.19) to the function $[[C - A]]\,\delta f/\delta x\,(z, A)$, substituting the operators $C$ and $A$ for $z$.
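Newton's formula with remainder is an exact algebraic identity, and for polynomial symbols it can be verified directly. The sketch below (numpy assumed; not from the book) checks the case $N = 2$, $f(x) = x^3$: the first-order term unwinds to $\sum_{i+j=n-1} A^j (C-A) A^i$ and the remainder $R_2$ to $\sum_{i+j+k=n-2} A^k (C-A) A^j (C-A) C^i$ (note the surviving argument $C$ in the remainder):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
C = rng.standard_normal((4, 4))
n = 3            # f(x) = x^3
D = C - A

def mp(M, k):
    return np.linalg.matrix_power(M, k)

# first Newton term: [[C-A]] (delta f/delta x)(A, A) = sum_{i+j=n-1} A^j D A^i
term1 = sum(mp(A, j) @ D @ mp(A, n - 1 - j) for j in range(n))
# remainder R_2: two factors C-A, first argument of delta^2 f still equal to C
R2 = sum(mp(A, k) @ D @ mp(A, j) @ D @ mp(C, i)
         for i in range(n - 1) for j in range(n - 1) for k in range(n - 1)
         if i + j + k == n - 2)
assert np.allclose(mp(C, n) - mp(A, n), term1 + R2)
```

No smallness of $C - A$ is needed; the identity holds for arbitrary matrices.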
The remainder is small in the following sense: if $C - A = \varepsilon B$, then $\varepsilon^N$ factors out of $R_N$. Also, the distinguishing feature of Newton's formula is that all its terms, except for the remainder, are "multilinear forms in $\overset{2}{[[C - A]]}, \dots, \overset{2k}{[[C - A]]}$ with various Feynman indices". In the commuting case ($[A, C] = 0$) Newton's formula reduces to the conventional Taylor formula with operator arguments. However, the commuting case is a distinguished one; it can be viewed as the "maximally degenerate case" of noncommutativity. Therefore, it is not surprising that there is a large variety of formulas generalizing the Taylor formula to the noncommutative case. Let us get acquainted with one of these formulas.

Theorem 1.9 (Taylor's formula with remainder) Under the hypotheses of Theorem 1.8 one has
$$ f(C) - f(A) = \sum_{k=1}^{N} \frac{1}{k!}\, f^{(k)}(\overset{1}{A})\;\overset{2}{\bigl[\bigl[(C - A)^k\bigr]\bigr]} + Q_N, $$
where the remainder $Q_N$ is given by the formula
$$ Q_N = \overset{2}{\bigl[\bigl[(C - A)^{N+1}\bigr]\bigr]}\;\frac{\delta^{N+1} f}{\delta x^{N+1}}\bigl(\overset{1}{C}, \overset{3}{A}, \dots, \overset{3}{A}\bigr). $$
Proof. By the usual Taylor formula for functions of a numerical argument,
$$ f(y) - f(x) = \sum_{k=1}^{N} \frac{1}{k!}\, f^{(k)}(x)\,(y - x)^k + Q_N(x, y), $$
where $Q_N(x, y)$ is easily proved to be equal to
$$ Q_N(x, y) = (y - x)^{N+1}\,\frac{\delta^{N+1} f}{\delta x^{N+1}}(y, x, \dots, x). $$
Let us insert $y \to \overset{1}{C}$, $x \to \overset{2}{A}$ into the formula for $f(y) - f(x)$. We obtain
$$ f(\overset{1}{C}) - f(\overset{2}{A}) = \sum_{k=1}^{N} \frac{1}{k!}\, f^{(k)}(\overset{2}{A})\,(\overset{1}{C} - \overset{2}{A})^k + Q_N(\overset{2}{A}, \overset{1}{C}). $$
This yields, by moving indices apart and isolating appropriate factors, the desired formula. The proof is complete. □

Let us now make a comparative analysis of the Newton and Taylor formulas. Both formulas are quite easy to obtain; beyond this, there are more differences than similarities.
First of all, strange as it may seem, the Taylor formula does not generally provide an expansion in powers of $\varepsilon$ if $C = A + \varepsilon B$. We claim that, unless $A$ and $B$ satisfy some algebraic relations, the remainder
$$ Q_N = \overset{2}{\bigl[\bigl[(C - A)^{N+1}\bigr]\bigr]}\;\frac{\delta^{N+1} f}{\delta x^{N+1}}\bigl(\overset{1}{C}, \overset{3}{A}, \dots, \overset{3}{A}\bigr) $$
is of order $\varepsilon$ regardless of $N$. To understand this well, let us consider a few of the first terms of the expansion. The point is that although
$$ \overset{1}{C} - \overset{2}{A} = C - A = \varepsilon B = O(\varepsilon), $$
this does not mean that
$$ (\overset{1}{C} - \overset{2}{A})^N = O(\varepsilon^N). $$
Indeed, say,
$$ (\overset{1}{C} - \overset{2}{A})^2 \ne (C - A)^2. $$
Instead, we have
$$ (\overset{1}{C} - \overset{2}{A})^2 = C^2 + A^2 - 2AC, \qquad (C - A)^2 = C^2 + A^2 - AC - CA, $$
and these two operators differ by $CA - AC = [C, A]$. However, $C = A + \varepsilon B$ and hence $[C, A] = \varepsilon [B, A]$, because $A$ commutes with itself. Eventually, we obtain
$$ (\overset{1}{C} - \overset{2}{A})^2 = \varepsilon^2 B^2 + \varepsilon\,[B, A], $$
and this is not $O(\varepsilon^2)$ unless $[B, A] = 0$.

Next, let us compute $(\overset{1}{C} - \overset{2}{A})^3$. A similar computation leads to the relation
$$ (\overset{1}{C} - \overset{2}{A})^3 = \varepsilon^3 B^3 - \varepsilon^2\bigl(2\,[A, B]\,B + B\,[A, B]\bigr) + \varepsilon\,[A, [A, B]]. $$
We see that $(\overset{1}{C} - \overset{2}{A})^3 = O(\varepsilon)$ in general. However, if $[A, [A, B]] = 0$, then $(\overset{1}{C} - \overset{2}{A})^3 = O(\varepsilon^2)$. Similarly, if for some $m_0$
$$ \underbrace{[A, [A, \dots, [A}_{m_0\ \text{times}}, B] \dots ]] = 0, $$
then the order in $\varepsilon$ of the summands in the Taylor formula increases with $N$, though less rapidly than in the Newton formula. This enables us to compute the derivatives
$$ \Bigl(\frac{d}{d\varepsilon}\Bigr)^k f(A + \varepsilon B)\Big|_{\varepsilon = 0} $$
in terms of the derivatives of $f(x)$. Thus, if
$$ [A, [A, B]] = 0, $$
then we have
$$ \Bigl(\frac{d}{d\varepsilon}\, f(A + \varepsilon B)\Bigr)\Big|_{\varepsilon = 0} = f'(A)\, B - \frac{1}{2}\, f''(A)\,[A, B], $$
$$ \frac{d^2}{d\varepsilon^2}\, f(A + \varepsilon B)\Big|_{\varepsilon = 0} = f''(A)\, B^2 - \frac{2}{3}\, f'''(A)\,[A, B]\, B - \frac{1}{3}\, f'''(A)\, B\,[A, B] + \frac{1}{4}\, f^{(4)}(A)\,[A, B]^2. $$
We see that the derivative of order $k$ of $f(A + \varepsilon B)$ with respect to $\varepsilon$ can be expressed via higher-order derivatives of $f$ taken at $A$.
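The key identity $(\overset{1}{C} - \overset{2}{A})^2 = \varepsilon^2 B^2 + \varepsilon [B, A]$ is exact, and matrices with $[A,[A,B]] = 0$ but $[A,B] \ne 0$ are easy to produce. A minimal numpy sketch (the particular $A$, $B$ below are our own illustrative choice, not from the book):

```python
import numpy as np

eps = 0.3
# A nilpotent, B diagonal: here [A, B] is a multiple of A, so [A, [A, B]] = 0
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.diag([2.0, 5.0])
K = A @ B - B @ A                        # [A, B]
assert not np.allclose(K, 0)
assert np.allclose(A @ K - K @ A, 0)     # [A, [A, B]] = 0

C = A + eps * B
# (C^1 - A^2)^2 means C^2 - 2AC + A^2: all A's stand to the left of all C's
lhs = C @ C - 2 * A @ C + A @ A
rhs = eps**2 * B @ B + eps * (B @ A - A @ B)    # eps^2 B^2 + eps [B, A]
assert np.allclose(lhs, rhs)
```

The $O(\varepsilon)$ term $\varepsilon[B,A]$ is visibly present unless the commutator vanishes, which is exactly why the Taylor remainder fails to gain powers of $\varepsilon$ in general.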
Remark 1.9 The Daletskii–Krein formula can also be considered as the usual formula for the differential. Namely, a symbol $f$ can be viewed as a (partial) mapping from $\mathcal{A}$ to $\mathcal{A}$ that assigns $f(A)$ to each generator $A$. We may wish to consider the differential of this mapping. According to the general rule, the differential of a mapping $\varphi : \mathcal{A} \to \mathcal{A}$ at a point $A$ is a linear mapping $\varphi_{*A} : \mathcal{A} \to \mathcal{A}$, i.e., an element of $\mathcal{L}(\mathcal{A})$. Having this in mind, we can represent the Daletskii–Krein formula as follows:
$$ f_{*A} = \frac{\delta f}{\delta x}(L_A, R_A). $$
This formula is understood in the sense that
$$ df(A, B) = f_{*A}(B) = \frac{\delta f}{\delta x}(L_A, R_A)\, B = \overset{2}{B}\,\frac{\delta f}{\delta x}\bigl(\overset{1}{A}, \overset{3}{A}\bigr). $$

We have considered various expansions for $f(A + \Delta A)$. These expansions are closely related to the conventional Taylor formula in the analysis of functions of a numerical argument. However, there are also several topics specific to noncommutative differential calculus and having no counterpart in the usual calculus. We mean the index permutation formulas and the composite function formulas considered in the following two subsections.
3.4 Permutation of Feynman Indices

Given a binary symbol $f(x, y)$ and two generators $A$ and $B$, one can define $f(A, B)$ in two different ways using Feynman's approach: one can take $f(\overset{1}{A}, \overset{2}{B})$ or $f(\overset{2}{A}, \overset{1}{B})$. How different will the results be? The answer is given by the following theorem.
Theorem 1.10 (Index permutation formula)
$$ f(\overset{1}{A}, \overset{2}{B}) = f(\overset{2}{A}, \overset{1}{B}) + \overset{3}{[B, A]}\,\frac{\delta^2 f}{\delta x\,\delta y}\bigl(\overset{1}{A}, \overset{5}{A};\ \overset{2}{B}, \overset{4}{B}\bigr) = f(\overset{2}{A}, \overset{1}{B}) + \overset{3}{[B, A]}\,\frac{\delta^2 f}{\delta x\,\delta y}\bigl(\overset{2}{A}, \overset{4}{A};\ \overset{1}{B}, \overset{5}{B}\bigr). $$

Proof. Let us consider the difference $[[f(\overset{1}{A}, \overset{2}{B})]] - [[f(\overset{2}{A}, \overset{1}{B})]]$. We can change the indices in this difference arbitrarily provided that (i) the order of operators in each term remains unchanged; (ii) noncommuting operators are assigned different indices. This being done, we can omit the autonomous brackets. In particular, we can write
$$ [[f(\overset{1}{A}, \overset{2}{B})]] - [[f(\overset{2}{A}, \overset{1}{B})]] = f(\overset{1}{A}, \overset{2}{B}) - f(\overset{3}{A}, \overset{2}{B}). $$
Let us now recall that once all operators in an expression are equipped with indices, we can transform the expression according to the rules of commutative algebra. Hence we have
$$ f(\overset{1}{A}, \overset{2}{B}) - f(\overset{3}{A}, \overset{2}{B}) = (\overset{1}{A} - \overset{3}{A})\,\frac{f(\overset{1}{A}, \overset{2}{B}) - f(\overset{3}{A}, \overset{2}{B})}{\overset{1}{A} - \overset{3}{A}} = (\overset{1}{A} - \overset{3}{A})\,\frac{\delta f}{\delta x}\bigl(\overset{1}{A}, \overset{3}{A};\ \overset{2}{B}\bigr). $$
Strange as it may seem, this computation is perfectly rigorous; in fact, it means nothing other than that we take the formula
$$ f(x, y) - f(z, y) = (x - z)\,\frac{f(x, y) - f(z, y)}{x - z} = (x - z)\,\frac{\delta f}{\delta x}(x, z;\ y) $$
and substitute the operators $\overset{1}{A}, \overset{2}{B}, \overset{3}{A}$ for the variables $x, y, z$. In the following, we always use the shortened form for computations like this. The relations obtained give
$$ f(\overset{1}{A}, \overset{2}{B}) - f(\overset{2}{A}, \overset{1}{B}) = (\overset{1}{A} - \overset{3}{A})\,\frac{\delta f}{\delta x}\bigl(\overset{1}{A}, \overset{3}{A};\ \overset{2}{B}\bigr). $$
Note that we do not write autonomous brackets on the left-hand side of the last equation, although formally they ought to stand there; from now on we freely commit such abuses of notation provided that this cannot lead to misunderstanding. Changing the indices again according to (i) and (ii), we obtain
$$ f(\overset{1}{A}, \overset{2}{B}) - f(\overset{2}{A}, \overset{1}{B}) = \overset{1}{A}\,\frac{\delta f}{\delta x}\bigl(\overset{1}{A}, \overset{3}{A};\ \overset{2}{B}\bigr) - \overset{3}{A}\,\frac{\delta f}{\delta x}\bigl(\overset{1}{A}, \overset{3}{A};\ \overset{2}{B}\bigr) $$
$$ = \overset{3}{A}\,\frac{\delta f}{\delta x}\bigl(\overset{1}{A}, \overset{5}{A};\ \overset{4}{B}\bigr) - \overset{3}{A}\,\frac{\delta f}{\delta x}\bigl(\overset{1}{A}, \overset{5}{A};\ \overset{2}{B}\bigr) = \overset{3}{A}\,\Bigl[\frac{\delta f}{\delta x}\bigl(\overset{1}{A}, \overset{5}{A};\ \overset{4}{B}\bigr) - \frac{\delta f}{\delta x}\bigl(\overset{1}{A}, \overset{5}{A};\ \overset{2}{B}\bigr)\Bigr]. $$
We transform the right-hand side of this relation in a similar way, by introducing yet another difference derivative, and obtain
$$ f(\overset{1}{A}, \overset{2}{B}) - f(\overset{2}{A}, \overset{1}{B}) = \overset{3}{A}\,(\overset{4}{B} - \overset{2}{B})\,\frac{\delta^2 f}{\delta x\,\delta y}\bigl(\overset{1}{A}, \overset{5}{A};\ \overset{2}{B}, \overset{4}{B}\bigr), $$
or, by (i),
$$ f(\overset{1}{A}, \overset{2}{B}) - f(\overset{2}{A}, \overset{1}{B}) = \overset{3}{A}\,(\overset{3.5}{B} - \overset{2.5}{B})\,\frac{\delta^2 f}{\delta x\,\delta y}\bigl(\overset{1}{A}, \overset{5}{A};\ \overset{2}{B}, \overset{4}{B}\bigr). $$
We can now extract the linear factor $\overset{3}{A}\,(\overset{3.5}{B} - \overset{2.5}{B})$:
$$ f(\overset{1}{A}, \overset{2}{B}) - f(\overset{2}{A}, \overset{1}{B}) = \Bigl[\Bigl[\overset{3}{A}\,(\overset{3.5}{B} - \overset{2.5}{B})\Bigr]\Bigr]\,\frac{\delta^2 f}{\delta x\,\delta y}\bigl(\overset{1}{A}, \overset{5}{A};\ \overset{2}{B}, \overset{4}{B}\bigr). $$
This is just the first variant of the index permutation formula, since
$$ \overset{3}{A}\,(\overset{3.5}{B} - \overset{2.5}{B}) = BA - AB = [B, A]. $$
The proof of the second variant of the formula is left to the reader. The theorem is proved. □
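The index permutation formula can be tested concretely. For $f(x, y) = x^2 y^2$ one has $\delta^2 f/\delta x\,\delta y\,(x_1, x_2;\ y_1, y_2) = (x_1 + x_2)(y_1 + y_2)$, so the correction term is a sum of four products in which each factor is placed according to its Feynman index (smallest index acts first, i.e., stands rightmost). A numpy sketch (not from the book):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
K = B @ A - A @ B                        # [B, A]

# f(x, y) = x^2 y^2:  f(A^1, B^2) = B^2 A^2  and  f(A^2, B^1) = A^2 B^2
lhs = B @ B @ A @ A - A @ A @ B @ B
# correction [B,A]^3 (delta^2 f/dx dy)(A^1, A^5; B^2, B^4), with
# delta^2 f/dx dy = (x1 + x2)(y1 + y2); order each product by Feynman index
rhs = (K @ B @ A       # term A^1 B^2 K^3
       + B @ K @ A     # term A^1 K^3 B^4
       + A @ K @ B     # term B^2 K^3 A^5
       + A @ B @ K)    # term K^3 B^4 A^5
assert np.allclose(lhs, rhs)
```

Again the identity is exact, with no smallness assumptions on the commutator.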
Remark 1.10 Let us point out (though this is trivial) that expressions of the form
$$ (\overset{i_1}{A} - \overset{i_2}{A})\, f(\overset{j_1}{B_1}, \dots, \overset{j_n}{B_n}) $$
generally do not vanish if $[B_l, A] \ne 0$ for at least one $l$ such that $i_1 < j_l < i_2$.
Remark 1.11 By introducing additional operator arguments we find that
$$ f(\overset{1}{A}, \overset{2}{B}, \overset{i_1}{C_1}, \dots, \overset{i_k}{C_k}) = f(\overset{2}{A}, \overset{1}{B}, \overset{i_1}{C_1}, \dots, \overset{i_k}{C_k}) + \overset{1.5}{[B, A]}\,\delta_1 \delta_2 f\bigl(\overset{1}{A}, \overset{2}{A};\ \overset{1.25}{B}, \overset{1.75}{B};\ \overset{i_1}{C_1}, \dots, \overset{i_k}{C_k}\bigr), $$
where $i_s \notin [1, 2]$, $s = 1, \dots, k$. Note that the commutation formula stated in Proposition 1.3 is an easy consequence of Theorem 1.10.
Corollary 1.1 (Commutation formula) One has
$$ \overset{1}{A}\, f(\overset{2}{B}) = \overset{2}{A}\, f(\overset{1}{B}) + \overset{2}{[B, A]}\,\frac{\delta f}{\delta y}\bigl(\overset{1}{B}, \overset{3}{B}\bigr), $$
or, in another form,
$$ [A, f(B)] = \overset{2}{[A, B]}\,\frac{\delta f}{\delta y}\bigl(\overset{1}{B}, \overset{3}{B}\bigr). $$

Proof. This is just a particular case of the theorem, with $f(x, y)$ replaced by $x f(y)$. Indeed, we have
$$ \frac{\delta^2 [x f(y)]}{\delta x\,\delta y}(x_1, x_2;\ y_1, y_2) = \frac{\delta f}{\delta y}(y_1, y_2), $$
so that
$$ \overset{1}{A}\, f(\overset{2}{B}) - \overset{2}{A}\, f(\overset{1}{B}) = \overset{3}{[B, A]}\,\frac{\delta^2 [x f(y)]}{\delta x\,\delta y}\bigl(\overset{1}{A}, \overset{5}{A};\ \overset{2}{B}, \overset{4}{B}\bigr) = \overset{2}{[B, A]}\,\frac{\delta f}{\delta y}\bigl(\overset{1}{B}, \overset{3}{B}\bigr). \qquad \square $$
We can also easily compute the commutator $[A, f(\overset{1}{B_1}, \dots, \overset{n}{B_n})]$. For this purpose, we consecutively permute $A$ with $B_n, \dots, B_1$. After the first step we get
$$ [A, f(\overset{1}{B_1}, \dots, \overset{n}{B_n})] = \overset{n+1}{A}\, f(\overset{1}{B_1}, \dots, \overset{n}{B_n}) - \overset{0}{A}\, f(\overset{1}{B_1}, \dots, \overset{n}{B_n}) $$
$$ = \overset{n}{A}\, f(\overset{1}{B_1}, \dots, \overset{n-1}{B_{n-1}}, \overset{n+1}{B_n}) + \overset{n+1}{[A, B_n]}\,\delta_n f\bigl(\overset{1}{B_1}; \dots; \overset{n-1}{B_{n-1}};\ \overset{n}{B_n}, \overset{n+2}{B_n}\bigr) - \overset{0}{A}\, f(\overset{1}{B_1}, \dots, \overset{n}{B_n}). $$
Next we proceed to $B_{n-1}$, etc. (Here $\delta_n$ stands for the difference derivative with respect to the $n$th variable; similar notation is used below.) Finally, the last term on the right-hand side cancels, and we obtain
$$ [A, f(\overset{1}{B_1}, \dots, \overset{n}{B_n})] = \sum_{j=1}^{n} \overset{j+1}{[A, B_j]}\,\delta_j f\bigl(\overset{1}{B_1}, \dots, \overset{j-1}{B_{j-1}};\ \overset{j}{B_j}, \overset{j+2}{B_j};\ \overset{j+3}{B_{j+1}}, \dots, \overset{n+2}{B_n}\bigr). $$

Let us show how the formulas obtained can be used to derive some identities for pseudodifferential operators (that is, functions of $x$ and $-i\,\partial/\partial x$). For convenience, we consider $h^{-1}$-pseudodifferential operators, in which the operator $-i\,\partial/\partial x$ occurs with the factor $h$.
Example 1.8 (Permutation of $x$ and $-ih\,\partial/\partial x$) Let a symbol $P(x, p)$ be given. The usual correspondence
$$ x \mapsto \overset{2}{x}, \qquad p \mapsto -ih\,\frac{\overset{1}{\partial}}{\partial x} $$
gives the pseudodifferential operator $P\bigl(\overset{2}{x}, -ih\,\overset{1}{\partial}/\partial x\bigr)$. Let us rewrite this operator in the form $Q\bigl(\overset{1}{x}, -ih\,\overset{2}{\partial}/\partial x\bigr)$. By Theorem 1.10, we have
$$ P\Bigl(\overset{2}{x}, -ih\frac{\overset{1}{\partial}}{\partial x}\Bigr) = P\Bigl(\overset{1}{x}, -ih\frac{\overset{2}{\partial}}{\partial x}\Bigr) + \overset{3}{\Bigl[x,\, -ih\frac{\partial}{\partial x}\Bigr]}\,\frac{\delta^2 P}{\delta x\,\delta p}\Bigl(\overset{2}{x}, \overset{4}{x};\ -ih\frac{\overset{1}{\partial}}{\partial x}, -ih\frac{\overset{5}{\partial}}{\partial x}\Bigr). $$
Since
$$ \Bigl[x,\, -ih\frac{\partial}{\partial x}\Bigr] = ih $$
is a scalar, its Feynman index may be dropped; the indices over the two $x$-arguments, as well as those over the two derivative arguments, can then be set equal to each other, and we see that
$$ P\Bigl(\overset{2}{x}, -ih\frac{\overset{1}{\partial}}{\partial x}\Bigr) = P\Bigl(\overset{1}{x}, -ih\frac{\overset{2}{\partial}}{\partial x}\Bigr) + ih\,\frac{\partial^2 P}{\partial x\,\partial p}\Bigl(\overset{2}{x}, -ih\frac{\overset{1}{\partial}}{\partial x}\Bigr) $$
(the last equality is due to the fact that $\delta f/\delta x\,(x, x) = f'(x)$). Let us again permute the operators $-ih\,\partial/\partial x$ and $x$, this time in the second term. We get
$$ P\Bigl(\overset{2}{x}, -ih\frac{\overset{1}{\partial}}{\partial x}\Bigr) = P\Bigl(\overset{1}{x}, -ih\frac{\overset{2}{\partial}}{\partial x}\Bigr) + ih\,\frac{\partial^2 P}{\partial p\,\partial x}\Bigl(\overset{1}{x}, -ih\frac{\overset{2}{\partial}}{\partial x}\Bigr) + (ih)^2\,\frac{\delta^2}{\delta x\,\delta p}\Bigl[\frac{\partial^2 P}{\partial x\,\partial p}\Bigr]\Bigl(\overset{2}{x}, \overset{4}{x};\ -ih\frac{\overset{1}{\partial}}{\partial x}, -ih\frac{\overset{5}{\partial}}{\partial x}\Bigr). $$
One can proceed with these manipulations and obtain the order-changing formula modulo any power of $h$. Note that with this method we obtain an exact formula for the remainder.
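For symbols that are polynomial in $p$ the reordering series terminates, and the formula can be checked symbolically. The sketch below (sympy assumed; not from the book) takes $P(x, p) = x p^2$, for which $\partial^2 P/\partial x\,\partial p = 2p$ and all higher corrections vanish; the permutation formula relating the derivative-left and $x$-left orderings then reads $P(\overset{1}{x}, p^{\overset{2}{\vphantom{1}}}) = P(\overset{2}{x}, p^{\overset{1}{\vphantom{1}}}) - ih \cdot 2p$ as operators:

```python
import sympy as sp

x, h = sp.symbols("x h")
u = sp.Function("u")(x)
p = lambda v: -sp.I * h * sp.diff(v, x)     # the operator -ih d/dx

# symbol P(x, p) = x p^2
x_left = x * p(p(u))       # P(x^2, p^1): differentiate twice, then multiply by x
p_left = p(p(x * u))       # P(x^1, p^2): multiply by x first, then differentiate

# permutation formula with [p, x] = -ih and d^2P/dxdp = 2p;
# exact here, since the symbol is linear in x and no further terms survive
correction = -sp.I * h * 2 * p(u)
assert sp.simplify(p_left - (x_left + correction)) == 0
```

Applying both operators to a generic function $u(x)$ and simplifying confirms the identity exactly, with no $O(h^2)$ leftover.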
Example 1.9 (Product formula) Suppose that two $h^{-1}$-pseudodifferential operators $P\bigl(\overset{2}{x}, -ih\,\overset{1}{\partial}/\partial x\bigr)$ and $Q\bigl(\overset{2}{x}, -ih\,\overset{1}{\partial}/\partial x\bigr)$ are given. Let us compute their product. We have
$$ \Bigl[\Bigl[P\Bigl(\overset{2}{x}, -ih\frac{\overset{1}{\partial}}{\partial x}\Bigr)\Bigr]\Bigr]\,\Bigl[\Bigl[Q\Bigl(\overset{2}{x}, -ih\frac{\overset{1}{\partial}}{\partial x}\Bigr)\Bigr]\Bigr] = P\Bigl(\overset{4}{x}, -ih\frac{\overset{3}{\partial}}{\partial x}\Bigr)\, Q\Bigl(\overset{2}{x}, -ih\frac{\overset{1}{\partial}}{\partial x}\Bigr). $$
In order to reduce this expression to the standard ordering, we must permute the operators $-ih\,\overset{3}{\partial}/\partial x$ and $\overset{2}{x}$. By Theorem 1.10 (in the form of Remark 1.11, since additional operator arguments are present), we have
$$ P\Bigl(\overset{4}{x}, -ih\frac{\overset{3}{\partial}}{\partial x}\Bigr)\, Q\Bigl(\overset{2}{x}, -ih\frac{\overset{1}{\partial}}{\partial x}\Bigr) = P\Bigl(\overset{4}{x}, -ih\frac{\overset{2}{\partial}}{\partial x}\Bigr)\, Q\Bigl(\overset{3}{x}, -ih\frac{\overset{1}{\partial}}{\partial x}\Bigr) + \overset{4}{\Bigl[-ih\frac{\partial}{\partial x},\, x\Bigr]}\,\frac{\delta P}{\delta p}\Bigl(\overset{7}{x};\ -ih\frac{\overset{2}{\partial}}{\partial x}, -ih\frac{\overset{6}{\partial}}{\partial x}\Bigr)\,\frac{\delta Q}{\delta x}\Bigl(\overset{3}{x}, \overset{5}{x};\ -ih\frac{\overset{1}{\partial}}{\partial x}\Bigr). $$
Since
$$ \Bigl[-ih\frac{\partial}{\partial x},\, x\Bigr] = -ih $$
is a scalar, the paired arguments of the difference derivatives can be merged (recall that $\delta P/\delta p\,(x;\ p, p) = \partial P/\partial p\,(x, p)$), and we obtain
$$ \Bigl[\Bigl[P\Bigr]\Bigr]\,\Bigl[\Bigl[Q\Bigr]\Bigr] = P\Bigl(\overset{4}{x}, -ih\frac{\overset{2}{\partial}}{\partial x}\Bigr)\, Q\Bigl(\overset{3}{x}, -ih\frac{\overset{1}{\partial}}{\partial x}\Bigr) - ih\,\frac{\partial P}{\partial p}\Bigl(\overset{4}{x}, -ih\frac{\overset{2}{\partial}}{\partial x}\Bigr)\,\frac{\partial Q}{\partial x}\Bigl(\overset{3}{x}, -ih\frac{\overset{1}{\partial}}{\partial x}\Bigr) + \cdots, $$
where the dots denote terms of order $h^2$ coming from the non-merged arguments; these can be written out exactly in terms of second difference derivatives of $P$ and $Q$. In both displayed terms all the $x$'s now stand to the left of all the derivatives, so each term is the standard quantization of a product of symbols, and we arrive at the product formula
$$ \Bigl[\Bigl[P\Bigl(\overset{2}{x}, -ih\frac{\overset{1}{\partial}}{\partial x}\Bigr)\Bigr]\Bigr]\,\Bigl[\Bigl[Q\Bigl(\overset{2}{x}, -ih\frac{\overset{1}{\partial}}{\partial x}\Bigr)\Bigr]\Bigr] = (PQ)\Bigl(\overset{2}{x}, -ih\frac{\overset{1}{\partial}}{\partial x}\Bigr) - ih\,\Bigl(\frac{\partial P}{\partial p}\,\frac{\partial Q}{\partial x}\Bigr)\Bigl(\overset{2}{x}, -ih\frac{\overset{1}{\partial}}{\partial x}\Bigr) + (-ih)^2\,(\cdots). $$
Obviously, this process can be continued so as to obtain the subsequent terms of the expansion.
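For polynomial symbols the product formula terminates, so its first two terms can be checked exactly. The sketch below (sympy assumed; $P = Q = xp$ is our own illustrative choice) verifies that composing the standard quantizations of $P$ and $Q$ equals the quantization of $PQ - ih\,P_p Q_x$:

```python
import sympy as sp

x, h = sp.symbols("x h")
u = sp.Function("u")(x)
p = lambda v: -sp.I * h * sp.diff(v, x)       # -ih d/dx

opP = lambda v: x * p(v)                      # P(x, p) = x p, standard ordering
opQ = lambda v: x * p(v)                      # Q = P here, for simplicity

composed = opP(opQ(u))
# leading terms of the product formula:
# (PQ)(x, p) = x^2 p^2, and the O(h) correction symbol is (dP/dp)(dQ/dx) = x p
leading = x**2 * p(p(u)) - sp.I * h * x * p(u)
assert sp.simplify(composed - leading) == 0   # exact: higher terms vanish here
```

Since $\partial^2 P/\partial p^2 = 0$ for this symbol, the $(-ih)^2$-terms vanish and the two-term expansion is already exact.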
Remark 1.12 In both preceding examples a more elegant way to get the answer would be to use the left ordered representation of the tuple $(x, -ih\,\partial/\partial x)$ (see Chapter II).
3.5 The Composite Function Formula

In this subsection we consider the problem of rewriting the composite function $f([[g(\overset{1}{A}, \overset{2}{B})]])$ via functions of $A$, $B$, and their commutators. Recall that $f(g(\overset{1}{A}, \overset{2}{B}))$ means $(f \circ g)(\overset{1}{A}, \overset{2}{B})$, as distinct from $f([[g(\overset{1}{A}, \overset{2}{B})]])$, which means "$f(C)$ with $C = g(\overset{1}{A}, \overset{2}{B})$". We will derive a formula for $f([[g(\overset{1}{A}, \overset{2}{B})]])$ whose leading term coincides with $f(g(\overset{1}{A}, \overset{2}{B}))$. First let us consider the relatively simple case in which $g(x, y) = x + y$.

Theorem 1.11 A function of the sum of two operators admits the following expansion:
$$ f([[A + B]]) = f(\overset{1}{A} + \overset{2}{B}) + \overset{3}{[A, B]}\,\frac{\delta^2 f}{\delta y^2}\bigl(\overset{5}{[[A + B]]},\ \overset{2}{A} + \overset{4}{B},\ \overset{1}{[[A + B]]}\bigr) $$
$$ \phantom{f([[A + B]])} = f(\overset{1}{A} + \overset{2}{B}) + \overset{3}{[A, B]}\,\frac{\delta^2 f}{\delta y^2}\bigl(\overset{4}{[[A + B]]},\ \overset{2}{[[A + B]]},\ \overset{1}{A} + \overset{5}{B}\bigr). $$

Proof. We have
$$ f([[A + B]]) - f(\overset{1}{A} + \overset{2}{B}) = f(\overset{3}{[[A + B]]}) - f(\overset{1}{A} + \overset{2}{B}) = \bigl(\overset{3}{[[A + B]]} - \overset{1}{A} - \overset{2}{B}\bigr)\,\frac{\delta f}{\delta y}\bigl(\overset{3}{[[A + B]]},\ \overset{1}{A} + \overset{2}{B}\bigr) $$
$$ = \bigl(\overset{1.5}{[[A + B]]} - \overset{1}{A} - \overset{2}{B}\bigr)\,\frac{\delta f}{\delta y}\bigl(\overset{4}{[[A + B]]},\ \overset{0}{A} + \overset{0}{B}\bigr). $$
Should the operator $\overset{2}{B}$ not act between the operators $\overset{1}{A}$ and $\overset{1.5}{A}$ in the first factor, this expression would be zero. We now permute the operators $\overset{1}{A}$ and $\overset{1.5}{B}$ with the help of Theorem 1.10 and get
$$ f([[A + B]]) - f(\overset{1}{A} + \overset{2}{B}) = \bigl(\overset{1.5}{[[A + B]]} - \overset{2}{A} - \overset{2}{B}\bigr)\,\frac{\delta f}{\delta y}\bigl(\overset{4}{[[A + B]]},\ \overset{0}{A} + \overset{0}{B}\bigr) + \overset{3}{[A, B]}\,\frac{\delta^2 f}{\delta y^2}\bigl(\overset{5}{[[A + B]]},\ \overset{2}{A} + \overset{4}{B},\ \overset{1}{[[A + B]]}\bigr), $$
since
$$ \frac{\delta^2}{\delta x_2\,\delta x_3}\Bigl\{(x_5 - x_2 - x_4)\,\frac{\delta f}{\delta y}(x_6,\ x_1 + x_3)\Bigr\} = -\frac{\delta^2 f}{\delta y^2}(x_6,\ x_1 + x_3,\ x_1 + x_4). $$
The first term on the right-hand side of the penultimate formula vanishes, and we obtain the first version of the desired formula. The proof of its second version is left to the reader as an exercise. □
Theorem 1.11 can be used, in particular, in the case in which the commutator $[A, B]$ is in some sense "smaller" than the operators $A$ and $B$ (examples are given below). In order to formalize the situation, we introduce the notion of a degree for arbitrary products of $A$, $B$, and their commutators. Our definition is recursive. We say that the degree of $A$ and $B$ is equal to zero; for any term $C$, the degree of the commutators $[A, C]$ and $[B, C]$ is equal to the degree of $C$ plus one, and the degree of a product is the sum of the degrees of the factors. Thus, for example, the degree of $[A, [A, B]]$ is equal to 2 and the degree of $[A, B] \cdot [B, [A, B]]$ is equal to 3. Note that the degree is not additive but only semiadditive: the degree of a sum may exceed the degrees of the summands (for example, this is the case for the expression $AB - BA = [A, B]$). Using Theorem 1.11, we can expand $f([[A + B]])$ up to terms of arbitrarily high degree. Let us consider an example of such an expansion.

Corollary 1.2 The expansion
$$ f([[A + B]]) = f(\overset{1}{A} + \overset{2}{B}) + \frac{1}{2}\,\overset{3}{[A, B]}\, f''(\overset{2}{A} + \overset{4}{B}) + \frac{1}{6}\,\bigl[\bigl[[A, [A, B]] + [[A, B], B]\bigr]\bigr]\, f'''(\overset{1}{A} + \overset{2}{B}) + \frac{1}{8}\,\bigl[\bigl[[A, B]^2\bigr]\bigr]\, f^{(4)}(\overset{1}{A} + \overset{2}{B}) $$
is valid up to terms of degree $\ge 3$ (in the degree-2 terms, any admissible Feynman indices may be used).
1 2 3 62 f 5 61 21 4 f (A + B) + [A, B]6y2(A ±B,A+B,A+ B)
=
3 1 21 45 65 8 84f9 -F[A, B]- (I[A + B]], A+B,A+B,A+B,A+ B).
44
In view of Theorem 1.11, we can omit the autonomous brackets in the third term of the obtained expression and use any admissible indices over the operators, with the resultant remainder being of degree > 3. Thus, we have
62 f 5 61 21 1 2 3 4 f a[A + B]])'.-`-' f (A + B) + [A, B], (A + B, A + B, A + B) 44
2,,
1
31
31
31
3
+IRA, Bfiler f (A +B,A+B,A+B,A+ B), where -2-'-= can be translated as "is equal modulo terms of degree > 3". 5
In the second term on the right-hand side of the last equation we permute the operator A consecu4
3
tively with B and [A, B]: 3,
5
61
21
4
3
32 f 4
61
21
5
[A, B ] ' f (A + B, A + B, A + B) 2-1= [A, B]6y2(A +B,A+B,A+ B) 21 51 3 4 f4 91 7 98 3 6 -1- [A, B][A, B]- (A+ B, A + B, A + B, A + B, A + B)
6 y4
I. Elementary Notions of Noncommutative Analysis
68 4
62 f 3
61
21
5
= [A, B]— (5),2 (A +B,A+B,A+ B) 71 21 6 4 83 f 3 75 +[A , [A, B]]— (A + B, A + B, A + B, A + B) 43
51 21 7 84 f 4 98 91 3 6 A + B). (A + B, A±B,A+ B, A + B, +[A, B][A, 13]44
In the second and third terms of the last expression we can use any admissible order of operators modulo terms of order > 3. Hence, 1
2
4
21
61
5
2f 3 f ([[A + Bil) = f (A + B) + [A, B]6y2(A +B,A+B,A+ B) 33 f 1
2
31
31
3
+[A, [A, B]]— (A +B,A+B,A+ B) Sy 3 64 f 1 31 31 2 31 31 3 + B, A + B, A + B, A + B). (A+ B, A + 2 HA + M211 75-y4 2
3
4
In the second term on the right-hand side of this equation we permute B with A and [A, B]. Similar calculations yield 1
f (IA + Bp
2
4
82f 1
31
3
31
f (A + B) + [A, B]— (5y2 (A + B, A + B, A + B) 83f 1
2 -
31
31
31
3
FHA, [A, B]]+ [[A, B], B]]]—(A + B, A-FB,A+B,A+ B) By3
2, 64f 1
31
31
31
31
3
+3[[[A + Br]1— sy 4 (A + B, A-F B, A -I- B, A + B, A -I- B). The last formula proves the corollary, since Sk f/Bx k (x, ... , x) =
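When the commutator $K = [A, B]$ is central, all terms of degree $\ge 2$ involving iterated commutators vanish, and for $f = \exp$ the corollary's pattern $f(\overset{1}{A} + \overset{2}{B})\,(1 + K/2 + K^2/8 + \cdots)$ sums to $e^B e^A e^{K/2}$. The sketch below (numpy assumed; the Heisenberg-type nilpotent matrices are our own illustrative choice) checks this exactly:

```python
import numpy as np

def expm_poly(M, terms=6):
    """Matrix exponential via its power series (exact for nilpotent M)."""
    out = np.eye(M.shape[0])
    acc = np.eye(M.shape[0])
    for k in range(1, terms):
        acc = acc @ M / k
        out = out + acc
    return out

# Heisenberg-type matrices: K = [A, B] is central, higher commutators vanish
A = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
B = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
K = A @ B - B @ A
assert np.allclose(A @ K, K @ A) and np.allclose(B @ K, K @ B)

# exp([[A + B]]) versus the disentangled ordered product: for f = exp,
# f(A^1 + B^2) = e^B e^A, and the correction terms sum to the factor e^{K/2}
lhs = expm_poly(A + B)
rhs = expm_poly(B) @ expm_poly(A) @ expm_poly(K / 2)
assert np.allclose(lhs, rhs)
```

This is the familiar special case of the Campbell–Hausdorff circle of ideas taken up in the next section; with non-central commutators the degree-2 terms of the corollary contribute as well.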
Let us now proceed to the general case.
Theorem 1.12 The following composite function formula holds:
$$ f\bigl(\bigl[\bigl[g(\overset{1}{A}, \overset{2}{B})\bigr]\bigr]\bigr) = f\bigl(g(\overset{1}{A}, \overset{2}{B})\bigr) + \overset{5}{[A, B]}\;\frac{\delta g}{\delta x_2}\bigl(\overset{3}{A};\ \overset{4}{B}, \overset{6}{B}\bigr)\,\frac{\delta g}{\delta x_1}\bigl(\overset{2}{A}, \overset{7}{A};\ \overset{8}{B}\bigr)\,\frac{\delta^2 f}{\delta y^2}\bigl(\bigl[\bigl[g(\overset{1}{A}, \overset{2}{B})\bigr]\bigr],\ g(\overset{2}{A}, \overset{8}{B}),\ g(\overset{7}{A}, \overset{8}{B})\bigr). $$

Proof. We have
$$ f\bigl(\bigl[\bigl[g(\overset{1}{A}, \overset{2}{B})\bigr]\bigr]\bigr) - f\bigl(g(\overset{1}{A}, \overset{2}{B})\bigr) = \Bigl(\bigl[\bigl[g(\overset{1}{A}, \overset{2}{B})\bigr]\bigr] - g(\overset{1}{A}, \overset{2}{B})\Bigr)\,\frac{\delta f}{\delta y}\Bigl(\bigl[\bigl[g(\overset{1}{A}, \overset{2}{B})\bigr]\bigr],\ g(\overset{1}{A}, \overset{2}{B})\Bigr), $$
with appropriate Feynman indices over the factors, as in the proof of Theorem 1.11. By permuting the operator $A$ occurring in the second copy of $g$ with the autonomous bracket $[[g(\overset{1}{A}, \overset{2}{B})]]$, we obtain a term that vanishes (which can easily be proved by index manipulations; cf. Proposition 1.3) plus a correction proportional to the commutator
$$ \bigl[A,\ [[g(\overset{1}{A}, \overset{2}{B})]]\bigr] = \overset{3}{[A, B]}\,\frac{\delta g}{\delta x_2}\bigl(\overset{1}{A};\ \overset{2}{B}, \overset{4}{B}\bigr). $$
Substituting this expression into the correction term yields the formula of the theorem. □
Let us now apply the obtained formula to pseudodifferential operators.

Example 1.10 Let $P\big(\overset{2}{x},-ih\frac{\overset{1}{\partial}}{\partial x}\big)$ be an $h^{-1}$-pseudodifferential operator, and let a function f(z) of a single variable z be given. We will rewrite the operator $f\big(\big[\big[P\big(\overset{2}{x},-ih\frac{\overset{1}{\partial}}{\partial x}\big)\big]\big]\big)$ in the form of an $h^{-1}$-pseudodifferential operator modulo $O(h^{2})$. Due to Theorem 1.12 we have
$$f\Big(\Big[\Big[P\Big(\overset{2}{x},-ih\frac{\overset{1}{\partial}}{\partial x}\Big)\Big]\Big]\Big)=(f\circ P)\Big(\overset{2}{x},-ih\frac{\overset{1}{\partial}}{\partial x}\Big)+\overset{5}{\Big[\Big[}-ih\frac{\partial}{\partial x},\,x\Big]\Big]\,\frac{\delta P}{\delta x}\Big(\overset{6}{x},\overset{4}{x};\,-ih\frac{\partial}{\partial x}\Big)\,\frac{\delta P}{\delta p}\Big(x;\,-ih\frac{\overset{6}{\partial}}{\partial x},-ih\frac{\overset{4}{\partial}}{\partial x}\Big)\,\frac{\delta^{2}f}{\delta z^{2}}\Big(\Big[\Big[P\Big]\Big],\,P,\,P\Big).$$
Since $[-ih\,\partial/\partial x,\,x]=-ih$, we can omit the index 5. Consequently, the indices 4 and 6 can be set equal to each other. Taking into account that
$$\frac{\delta P}{\delta x}(x,x,p)=\frac{\partial P(x,p)}{\partial x},$$
we get an intermediate expression in which the correction term contains $\partial P/\partial x$, $\partial P/\partial p$, and $\delta^{2}f/\delta z^{2}\big(\big[\big[P\big]\big],P,P\big)$.
I. Elementary Notions of Noncommutative Analysis
Computing up to terms of order $h^{2}$, we can change arbitrarily the indices over the operators in the second term on the right-hand side of the last formula. Hence,
$$f\Big(\Big[\Big[P\Big(\overset{2}{x},-ih\frac{\overset{1}{\partial}}{\partial x}\Big)\Big]\Big]\Big)=(f\circ P)\Big(\overset{2}{x},-ih\frac{\overset{1}{\partial}}{\partial x}\Big)-\frac{ih}{2}\Big[\frac{\partial P}{\partial x}\,\frac{\partial P}{\partial p}\,(f''\circ P)\Big]\Big(\overset{2}{x},-ih\frac{\overset{1}{\partial}}{\partial x}\Big)+O(h^{2}).$$
Similarly, one can calculate the subsequent terms of this expansion.
4 The Campbell–Hausdorff Theorem and Dynkin's Formula

As a sample application of the techniques of noncommutative analysis, let us consider a famous old problem in the theory of Lie algebras and Lie groups. This problem was already mentioned in Subsection 1.5, but here we start from the very beginning and give a more detailed exposition. Those who are not familiar with Lie algebras and Lie groups at all may wish to consult the Appendix, where all the necessary information is provided; however, for better readability we reproduce here some of the definitions.
4.1 Statement of the Problem

Let L be a (finite-dimensional) Lie algebra. This means that L is a (finite-dimensional) linear space equipped with a bilinear operation [·, ·], referred to as the Lie bracket and satisfying the following conditions:

i) [a, b] = −[b, a] (antisymmetry);

ii) [a, [b, c]] = [[a, b], c] + [b, [a, c]] (Jacobi identity).

For our aims it suffices to assume that L is realized by square matrices of size m × m, with the usual matrix multiplication. This assumption is, in fact, unnecessary (by Ado's theorem the matrix case is the general case), and we make it only in order to simplify the exposition by avoiding the consideration of unbounded operators.
Next, let G be a Lie group, i.e., a manifold equipped with smooth group operations. Again, we assume that, at least locally (that is, in a neighbourhood of the neutral element), G is represented as a matrix group. Then the tangent space T_eG to G at the point e is naturally identified with a subspace of the space of matrices, and it can be shown to possess the structure of a Lie algebra. There arises a natural question: given a Lie algebra, can one reconstruct the multiplication law in the corresponding Lie group? Let G be a Lie group and L the corresponding Lie algebra. There is a mapping
exp : L —> G
taking each element X ∈ L into the element exp(X) ∈ G defined as follows: let {g(t)} be the one-parameter subgroup of G defined by the condition ġ(0) = X. Then we set exp(X) = g(1).
Thus we obtain a coordinate system in the vicinity of the neutral element e ∈ G; this coordinate system is referred to as the exponential coordinate system. Let us seek the multiplication law in the exponential coordinate system. If G is commutative, then the multiplication law has the form A · B = A + B. If G is not commutative, there appear correction terms on the right-hand side of the last equation, namely,
$$A\cdot B=A+B+\tfrac12[A,B]+\cdots.$$
The first correction is equal to ½[A, B] and is completely determined by the commutation law in L. The Campbell–Hausdorff theorem [19] asserts that the same is true of all subsequent terms of this expansion, which implies that the multiplication law in G can be reconstructed given the commutation law in L. E. B. Dynkin [43] found an explicit expression for all terms of the series. In this section, following Mosolova [139], we find a closed formula expressing ln(e^A e^B) in the form of an integral rather than a series. Then we use the conventional Fourier expansion so as to obtain the expression for ln(e^A e^B) via the commutators. First of all, let us consider functions of commutation operators in more detail; this proves to be useful not only in the particular problem considered, but also in the general framework of noncommutative analysis.
4.2 The Commutation Operation

We have already seen how important a role commutators play in noncommutative analysis. It is clear that this operation is worth studying. In this subsection we obtain a simple expression for functions of the operator of commutation.
Let 𝒜 be an algebra and B ∈ 𝒜 some element. Denote by ad_B : 𝒜 → 𝒜 the linear operator that takes each element A ∈ 𝒜 into its commutator with the element B:
$$\mathrm{ad}_B:\ \mathcal A\to\mathcal A,\qquad A\mapsto\mathrm{ad}_B(A)=[B,A].$$
Clearly, ad_B(A) = BA − AB = L_B(A) − R_B(A), that is, ad_B = L_B − R_B, where L_B and R_B are the operators of the left and right regular representations introduced in Subsection 2.3. The powers of ad_B give rise to multiple commutators with B:
$$(\mathrm{ad}_B)^{k}(A)=\underbrace{[B,\dots[B}_{k\ \text{times}},A]\dots].$$
Let us find an expression for such commutators using Feynman indices. Note that
$$[B,A]=BA-AB=(\overset{3}{B}-\overset{1}{B})\overset{2}{A}.$$
Similarly,
$$(\mathrm{ad}_B)^{k}(A)=(L_B-R_B)^{k}(A)=(\overset{3}{B}-\overset{1}{B})^{k}\,\overset{2}{A},$$
according to the description of functions of the operators L_B and R_B given in Subsection 2.3. Moreover, for any symbol f(x) we have
$$f(\mathrm{ad}_B)(A)=f(\overset{3}{B}-\overset{1}{B})\,\overset{2}{A}.\tag{1.20}$$
Although this follows directly from the formulas given in Theorem 1.3, let us present the proof for two particular cases, in which the functions of operators are defined via the Fourier transform or the Cauchy integral formula.
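Before turning to those proofs, formula (1.20) is easy to test numerically for a polynomial symbol by representing L_B and R_B as Kronecker-product matrices acting on vectorized operators. The following sketch is our illustration, not part of the book; the vectorization convention and the quadratic symbol are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

I = np.eye(n)
# With column-major vectorization: vec(BX) = (I (x) B) vec(X),
# vec(XB) = (B^T (x) I) vec(X), hence ad_B = L_B - R_B as a matrix.
L_B = np.kron(I, B)
R_B = np.kron(B.T, I)
ad_B = L_B - R_B

vec = lambda X: X.flatten(order="F")
unvec = lambda v: v.reshape((n, n), order="F")
comm = lambda X, Y: X @ Y - Y @ X

# (ad_B)^3 (A) should be the triple commutator [B,[B,[B,A]]]
lhs = unvec(np.linalg.matrix_power(ad_B, 3) @ vec(A))
rhs = comm(B, comm(B, comm(B, A)))
print(np.allclose(lhs, rhs))

# f(ad_B)(A) for the polynomial symbol f(x) = 2 + x + x^2, as in (1.20)
f_ad = 2 * np.eye(n * n) + ad_B + ad_B @ ad_B
lhs2 = unvec(f_ad @ vec(A))
rhs2 = 2 * A + comm(B, A) + comm(B, comm(B, A))
print(np.allclose(lhs2, rhs2))
```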
A. The Cauchy integral formula. Since the mapping f ↦ f(A) is continuous, it suffices to consider the case f(x) = (λ − x)⁻¹. Let f(x) = (λ − x)⁻¹, where λ ∉ σ(B) − σ(B); that is, λ cannot be represented in the form λ = λ₁ − λ₂, where λᵢ ∈ σ(B), i = 1, 2. Then the function (λ − x + y)⁻¹ is analytic in a neighborhood of
the product σ(B) × σ(B) ⊂ ℂ², and hence the operator
$$\frac{1}{\lambda-\overset{3}{B}+\overset{1}{B}}\,\overset{2}{A}$$
is well defined. We have
$$(\lambda-\mathrm{ad}_B)\Big[\Big[\frac{1}{\lambda-\overset{3}{B}+\overset{1}{B}}\,\overset{2}{A}\Big]\Big]=(\lambda-\overset{3}{B}+\overset{1}{B})\Big[\Big[\frac{1}{\lambda-\overset{3}{B}+\overset{1}{B}}\,\overset{2}{A}\Big]\Big]=\Big[\Big[\frac{\lambda-\overset{4}{B}+\overset{0}{B}}{\lambda-\overset{3}{B}+\overset{1}{B}}\Big]\Big]\,\overset{2}{A}=A$$
(we used extraction of a linear factor and moving indices apart). Hence the operator λ − ad_B is the left inverse of the operator
$$A\mapsto\frac{1}{\lambda-\overset{3}{B}+\overset{1}{B}}\,\overset{2}{A}.$$
Similarly, it can be proved that the operator λ − ad_B is the right inverse. Consequently,
$$R_\lambda(\mathrm{ad}_B)(A)=\frac{1}{\lambda-\overset{3}{B}+\overset{1}{B}}\,\overset{2}{A}.\qquad\square$$
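In concrete terms, applying the resolvent R_λ(ad_B) to A amounts to solving the Sylvester-type equation λX − BX + XB = A. A small numerical sketch of this reading (our illustration; the value of λ is an arbitrary choice placed outside σ(B) − σ(B)):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
lam = 10.0  # large enough to lie outside sigma(B) - sigma(B) here

I = np.eye(n)
# Superoperator lam - L_B + R_B on column-major vectorized operators
M = lam * np.eye(n * n) - np.kron(I, B) + np.kron(B.T, I)
X = np.linalg.solve(M, A.flatten(order="F")).reshape((n, n), order="F")

# X = R_lam(ad_B)(A) must satisfy (lam - ad_B)X = A
residual = lam * X - (B @ X - X @ B) - A
print(np.abs(residual).max())
```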
B. The Fourier transform formula. Let
$$U(t)A=e^{it(\overset{3}{B}-\overset{1}{B})}\,\overset{2}{A}.$$
We intend to show that $U(t)=e^{it\,\mathrm{ad}_B}$. Indeed, let us differentiate U(t)A with respect to t. We obtain
$$\frac{d}{dt}U(t)A=i(\overset{3}{B}-\overset{1}{B})e^{it(\overset{3}{B}-\overset{1}{B})}\,\overset{2}{A}=i(\overset{4}{B}-\overset{0}{B})\Big[\Big[e^{it(\overset{3}{B}-\overset{1}{B})}\,\overset{2}{A}\Big]\Big]=i(\overset{3}{B}-\overset{1}{B})\,\overset{2}{\big[U(t)A\big]}=i\,\mathrm{ad}_B\big(U(t)A\big).$$
Moreover, for t = 0 we have U(0)A = A, that is,
$$U(0)=\mathrm{id}=e^{it\,\mathrm{ad}_B}\big|_{t=0}.$$
We see that U(t) and $e^{it\,\mathrm{ad}_B}$ satisfy the same Cauchy problem and hence are equal,
which implies the desired result for the case in which functions of operators are defined via the Fourier transform. Let us now prove a technical lemma, which will be used in the next section.
Lemma 1.2 Suppose that A, B, C ∈ 𝒜 are elements such that $e^{C}=e^{B}e^{A}$. Then
$$e^{\mathrm{ad}_C}=e^{\mathrm{ad}_B}e^{\mathrm{ad}_A}.$$

Proof. We have
$$e^{\mathrm{ad}_B}e^{\mathrm{ad}_A}(D)=e^{\mathrm{ad}_B}\big(e^{\overset{3}{A}-\overset{1}{A}}\,\overset{2}{D}\big)=e^{\overset{4}{B}-\overset{0}{B}}e^{\overset{3}{A}-\overset{1}{A}}\,\overset{2}{D}=e^{\overset{3}{A}+\overset{4}{B}}\,\overset{2}{D}\,e^{-\overset{1}{A}-\overset{0}{B}}.$$
But
$$e^{\overset{3}{A}+\overset{4}{B}}=e^{B}e^{A}=e^{C},\qquad e^{-\overset{1}{A}-\overset{0}{B}}=e^{-A}e^{-B}=e^{-C}.$$
Thus,
$$e^{\mathrm{ad}_B}e^{\mathrm{ad}_A}(D)=e^{C}De^{-C}=e^{\mathrm{ad}_C}(D),$$
as desired. □
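Lemma 1.2 can be spot-checked with matrices by representing each ad operator as a superoperator matrix; this check is ours, not the book's, and the scaling of A and B is an arbitrary choice.

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(2)
n = 3
A = 0.3 * rng.standard_normal((n, n))
B = 0.3 * rng.standard_normal((n, n))
C = logm(expm(B) @ expm(A))              # so that e^C = e^B e^A

I = np.eye(n)
# ad_X as a matrix acting on column-major vectorized operators
ad = lambda X: np.kron(I, X) - np.kron(X.T, I)

lhs = expm(ad(C))                         # e^{ad_C}
rhs = expm(ad(B)) @ expm(ad(A))           # e^{ad_B} e^{ad_A}
ok = np.allclose(lhs, rhs)
print(ok)
```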
4.3 A Closed Formula for ln(e^B e^A)

For simplicity, assume that A and B are elements of a Banach algebra 𝒜 with identity; this means that 𝒜 is a complete normed linear space, and the multiplication defined on 𝒜 has an identity element 1 ∈ 𝒜 and satisfies the inequality
$$\|XY\|\le\|X\|\cdot\|Y\|$$
for any X and Y ∈ 𝒜. Throughout the following exposition we only use analytic symbols and define functions of operators via Taylor series or Cauchy integrals. Our intention is to find an explicit expression for the element C = ln(e^B e^A). Let us consider the family of elements D(t) = ln(e^{tB} e^A) with parameter t. Clearly, we have D(0) = A and D(1) = C. Moreover,
$$e^{D(t)}=e^{tB}e^{A}$$
by the definition of the logarithm.
We differentiate the last equation by t and multiply by $e^{-D(t)}$ on the right. This gives the following result:
$$B=\Big[\Big[\frac{d}{dt}e^{D(t)}\Big]\Big]\,e^{-D(t)}.$$
The latter equation can be transformed with the help of the Daletskii–Krein formula (see Subsection 3.2), which asserts that
$$\frac{d}{dt}e^{D(t)}=\frac{e^{\overset{3}{D}(t)}-e^{\overset{1}{D}(t)}}{\overset{3}{D}(t)-\overset{1}{D}(t)}\,\overset{2}{D'}(t).$$
Thus,
$$B=\frac{e^{\overset{3}{D}(t)}-e^{\overset{1}{D}(t)}}{\overset{3}{D}(t)-\overset{1}{D}(t)}\,\overset{2}{D'}(t)\,e^{-\overset{0}{D}(t)}.$$
There are no operators with Feynman indices between 0 and 1 on the right-hand side. Therefore, the "moving indices apart" rule applies to the operators $\overset{0}{D}(t)$ and $\overset{1}{D}(t)$, and their indices can be set equal to each other:
$$B=\frac{e^{\overset{3}{D}(t)-\overset{1}{D}(t)}-1}{\overset{3}{D}(t)-\overset{1}{D}(t)}\,\overset{2}{D'}(t).$$
According to the results of Subsection 4.2,
$$f(\overset{3}{D}-\overset{1}{D})\,\overset{2}{E}=f(\mathrm{ad}_D)(E),$$
where ad_D is the operator of commutation with D. By taking D = D(t), E = D′(t), and
$$f(x)=\frac{e^{x}-1}{x},$$
we obtain $B=f(\mathrm{ad}_{D(t)})(D'(t))$.
We apply the operator $f(\mathrm{ad}_{D(t)})^{-1}$ to both sides of this equation and obtain
$$D'(t)=\frac{1}{f(\mathrm{ad}_{D(t)})}(B).$$
However, by Lemma 1.2,
$$\mathrm{ad}_{D(t)}=\ln\big(e^{t\,\mathrm{ad}_B}e^{\mathrm{ad}_A}\big),$$
and so
$$D'(t)=\frac{1}{f\big(\ln(e^{t\,\mathrm{ad}_B}e^{\mathrm{ad}_A})\big)}(B)=\varphi\big(e^{t\,\mathrm{ad}_B}e^{\mathrm{ad}_A}\big)(B),$$
where
$$\varphi(z)=\frac{1}{f(\ln z)}=\frac{\ln z}{z-1}.$$
Finally, we use the Newton–Leibniz formula and obtain
$$C=A+\int_0^1 D'(t)\,dt=A+\int_0^1\varphi\big(e^{t\,\mathrm{ad}_B}e^{\mathrm{ad}_A}\big)\,dt\,(B).$$
We see that C is expressed as a linear combination of A, B, and their commutators. Indeed, we can expand φ(z) into a Taylor series in powers of z − 1:
$$\varphi(z)=\frac{\ln z}{z-1}=\sum_{k=0}^{\infty}(-1)^{k}\frac{(z-1)^{k}}{k+1};$$
thus,
$$\varphi\big(e^{t\,\mathrm{ad}_B}e^{\mathrm{ad}_A}\big)=\sum_{k=0}^{\infty}(-1)^{k}\frac{\big(e^{t\,\mathrm{ad}_B}e^{\mathrm{ad}_A}-1\big)^{k}}{k+1}.$$
Let us represent the exponentials in the form of Taylor series. We obtain
$$\varphi\big(e^{t\,\mathrm{ad}_B}e^{\mathrm{ad}_A}\big)=\sum_{k=0}^{\infty}\frac{(-1)^{k}}{k+1}\sum_{l_i+m_i>0}t^{\,l_1+\dots+l_k}\,\frac{(\mathrm{ad}_B)^{l_1}(\mathrm{ad}_A)^{m_1}\cdots(\mathrm{ad}_B)^{l_k}(\mathrm{ad}_A)^{m_k}}{l_1!\,m_1!\cdots l_k!\,m_k!}.$$
We replace $\varphi(e^{t\,\mathrm{ad}_B}e^{\mathrm{ad}_A})$ by the last expression and perform term-by-term integration:
$$C=A+B+\sum_{k=1}^{\infty}\frac{(-1)^{k}}{k+1}\sum_{l_i+m_i>0}\frac{(\mathrm{ad}_B)^{l_1}(\mathrm{ad}_A)^{m_1}\cdots(\mathrm{ad}_B)^{l_k}(\mathrm{ad}_A)^{m_k}(B)}{(l_1+\dots+l_k+1)\,l_1!\,m_1!\cdots l_k!\,m_k!}$$
$$\phantom{C}=A+B+\sum_{k=1}^{\infty}\frac{(-1)^{k}}{k+1}\sum_{l_i+m_i>0,\ l_i\ge0,\ m_i\ge0}\frac{[B^{l_1}A^{m_1}\cdots B^{l_k}A^{m_k}B]}{l_1!\,m_1!\cdots l_k!\,m_k!},$$
where
$$[B^{l_1}A^{m_1}\cdots B^{l_k}A^{m_k}B]=(l_1+\dots+l_k+1)^{-1}\underbrace{[B,\dots[B}_{l_1},\underbrace{[A,\dots[A}_{m_1},\dots,\underbrace{[B,\dots[B}_{l_k},\underbrace{[A,\dots[A}_{m_k},B]\cdots].$$
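For small ‖A‖ and ‖B‖ the first terms of this expansion reduce to the familiar Campbell–Hausdorff corrections, which can be checked directly against a matrix logarithm. This is a rough numerical sketch of ours; the scaling and the tolerance (which reflects the fourth-order remainder) are arbitrary choices.

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(3)
n = 3
A = 0.02 * rng.standard_normal((n, n))
B = 0.02 * rng.standard_normal((n, n))
comm = lambda X, Y: X @ Y - Y @ X

exact = logm(expm(B) @ expm(A))
# ln(e^B e^A) = B + A + (1/2)[B,A] + (1/12)([B,[B,A]] + [A,[A,B]]) + O(4)
approx = (B + A + comm(B, A) / 2
          + (comm(B, comm(B, A)) + comm(A, comm(A, B))) / 12)
err = np.abs(exact - approx).max()
print(err)  # dominated by the fourth-order remainder
```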
4. The Campbell—Hausdorff Theorem and Dynkin's Formula
77
This expansion is slightly different from that obtained by Dynkin. In fact, Dynkin's expansion is more "symmetric" and can easily be obtained by our method if we consider the family D(t) = ln(e^{tB} e^{tA}), as was originally done by Mosolova [139]. The only reason for considering our family is that this leads to even simpler calculations. Thus, we have proved that ln(e^B e^A) can be expressed as a linear combination of the operators A, B and their commutators. There arises a natural question: where is the obtained formula valid? We do not go into a thorough investigation of this question here; let us only mention that φ(z) is an analytic function in the disk |z − 1| < 1; therefore, $\varphi(e^{t\,\mathrm{ad}_B}e^{\mathrm{ad}_A})$ is well defined at least so long as
$$\big\|e^{t\,\mathrm{ad}_B}e^{\mathrm{ad}_A}-1\big\|<1.$$
The latter inequality is valid at least if ‖ad_B‖ and ‖ad_A‖ are small enough (one can easily write down the bounds for ‖ad_B‖ and ‖ad_A‖). Next, if this condition is violated, we can proceed by analytic continuation. The function φ(P(t)), where $P(t)=e^{t\,\mathrm{ad}_B}e^{\mathrm{ad}_A}$, can be defined via the Cauchy integral
$$\varphi(P(t))=\frac{1}{2\pi i}\oint_{C}\frac{\varphi(\lambda)}{\lambda-P(t)}\,d\lambda,$$
where C is a contour surrounding the spectrum of P(t) and such that φ(λ) is holomorphic in the closed domain bounded by C. The function φ(z) has the unique ramification point z = 0; it is easy to see that a sufficient condition for φ(P(t)) to be defined is that the spectrum of P(t) does not surround the point z = 0 and does not contain this point. If this is true, we can find a simply connected domain D_t such that σ(P(t)) ⊂ D_t and 0 ∉ D_t. We set C = ∂D_t. Note that one should define φ(P(t)) for each t ∈ [0, 1] in such a way that φ(P(t)) depends continuously on t. This imposes an additional condition on the behaviour of the domain D_t as t moves along the interval [0, 1]. For instance, if both A and B are positive definite self-adjoint operators, then the spectrum of P(t) never intersects the negative part of the real axis and everything is all right. We refer the reader to M. V. Mosolova [139] for a detailed discussion of the conditions under which the analytic continuation is possible. □
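Within the disk of convergence the closed formula itself lends itself to direct numerical verification: represent ad_A and ad_B as matrices on vectorized operators, evaluate φ by its Taylor series at z = 1, and integrate over t by Simpson's rule. This is our verification sketch; the norms, grid sizes, and truncation length are arbitrary choices kept small enough for the series to converge.

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(4)
n = 3
A = 0.03 * rng.standard_normal((n, n))
B = 0.03 * rng.standard_normal((n, n))

I = np.eye(n)
ad = lambda X: np.kron(I, X) - np.kron(X.T, I)   # ad_X on column-major vec
adA, adB = ad(A), ad(B)
Id = np.eye(n * n)

def phi(P, terms=60):
    # phi(z) = ln z / (z-1) = sum_k (-1)^k (z-1)^k / (k+1), valid for |z-1| < 1
    out = np.zeros_like(P)
    power = Id.copy()
    for k in range(terms):
        out += (-1) ** k * power / (k + 1)
        power = power @ (P - Id)
    return out

vecB = B.flatten(order="F")
ts = np.linspace(0.0, 1.0, 201)                  # odd point count for Simpson
vals = np.array([phi(expm(t * adB) @ expm(adA)) @ vecB for t in ts])
w = np.ones(len(ts)); w[1:-1:2] = 4.0; w[2:-1:2] = 2.0
integral = (ts[1] - ts[0]) / 3.0 * (w[:, None] * vals).sum(axis=0)

C_formula = A + integral.reshape((n, n), order="F")
C_direct = logm(expm(B) @ expm(A))
err = np.abs(C_formula - C_direct).max()
print(err)
```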
4.4 A Closed Formula for the Logarithm of a T-Exponential

The reasoning in the preceding subsection can easily be modified so as to solve the following problem.
Problem. Let a T-exponential
$$U=\exp\Big(\int_0^1\overset{\theta}{A}(\theta)\,d\theta\Big)$$
be given, where A(θ) is a family of elements of an operator algebra 𝒜. (Our argument is a bit formal in this subsection, since we do not dwell upon the conditions that would guarantee the existence of the T-exponential; this is a rather complicated topic in its own right.) Find an element C ∈ 𝒜 such that U = e^C.

This problem is an obvious continuous generalization of the discrete problem of finding ln(e^B e^A). Again, the method of solving this problem is to use noncommutative differential calculus. To this end, we introduce the family of operators
$$U(t)=\exp\Big(\int_0^t\overset{\theta}{A}(\theta)\,d\theta\Big),\qquad t\in[0,1],$$
and seek an operator D(t) such that
$$U(t)=e^{D(t)}.$$
Clearly, U(1) = U, and D(1) = C is the desired operator. Let us differentiate both sides of the last equation with respect to t. By the definition of T-exponentials, we have
$$\frac{d}{dt}U(t)=A(t)U(t)=A(t)e^{D(t)};$$
the differentiation on the right-hand side gives
$$\frac{d}{dt}e^{D(t)}=\frac{e^{\overset{3}{D}(t)}-e^{\overset{1}{D}(t)}}{\overset{3}{D}(t)-\overset{1}{D}(t)}\,\overset{2}{D'}(t)$$
by the Daletskii–Krein formula. Combining the last two equations, we obtain
$$A(t)=\frac{e^{\overset{3}{D}(t)-\overset{1}{D}(t)}-1}{\overset{3}{D}(t)-\overset{1}{D}(t)}\,\overset{2}{D'}(t),$$
or
$$D'(t)=f(\mathrm{ad}_{D(t)})(A(t)),$$
where
$$f(z)=\frac{z}{e^{z}-1}.$$
We will make use of the following lemma.

Lemma 1.3 The formula
$$e^{\mathrm{ad}_{D(t)}}=\exp\Big(\int_0^t\overset{\theta}{\mathrm{ad}}_{A(\theta)}\,d\theta\Big)$$
is valid.

Proof. We have
$$e^{\mathrm{ad}_{D(t)}}(B)=e^{\overset{3}{D}(t)-\overset{1}{D}(t)}\,\overset{2}{B}=e^{D(t)}Be^{-D(t)}=U(t)BU(t)^{-1}$$
for any B ∈ 𝒜 (cf. Subsection 4.2); in view of the definition of U(t),
$$\frac{d}{dt}e^{\mathrm{ad}_{D(t)}}(B)=A(t)U(t)BU(t)^{-1}-U(t)BU(t)^{-1}A(t)=\mathrm{ad}_{A(t)}\big(e^{\mathrm{ad}_{D(t)}}(B)\big).$$
On the other hand,
$$\frac{d}{dt}\exp\Big(\int_0^t\overset{\theta}{\mathrm{ad}}_{A(\theta)}\,d\theta\Big)(B)=\mathrm{ad}_{A(t)}\exp\Big(\int_0^t\overset{\theta}{\mathrm{ad}}_{A(\theta)}\,d\theta\Big)(B)$$
by the definition of the T-exponential. We see that both sides of the equation in question satisfy the same first-order equation
$$\frac{dX}{dt}=\mathrm{ad}_{A(t)}X$$
and the same initial condition X(0) = id (the identity operator). Thus, they coincide for all t. The lemma is proved. □
It follows from Lemma 1.3 that
$$\mathrm{ad}_{D(t)}=\ln\Big(\Big[\Big[\exp\Big(\int_0^t\overset{\theta}{\mathrm{ad}}_{A(\theta)}\,d\theta\Big)\Big]\Big]\Big).$$
Hence we obtain
$$D'(t)=\varphi\Big(\Big[\Big[\exp\Big(\int_0^t\overset{\theta}{\mathrm{ad}}_{A(\theta)}\,d\theta\Big)\Big]\Big]\Big)(A(t)),$$
where
$$\varphi(z)=f(\ln z)=\frac{\ln z}{z-1}=\sum_{k=0}^{\infty}\frac{(1-z)^{k}}{k+1}.$$
Since D(0) = 0, we obtain
$$D(t)=\int_0^t\varphi\Big(\Big[\Big[\exp\Big(\int_0^{\tau}\overset{\theta}{\mathrm{ad}}_{A(\theta)}\,d\theta\Big)\Big]\Big]\Big)A(\tau)\,d\tau$$
and, in particular,
$$C=D(1)=\int_0^1\varphi\Big(\Big[\Big[\exp\Big(\int_0^{t}\overset{\theta}{\mathrm{ad}}_{A(\theta)}\,d\theta\Big)\Big]\Big]\Big)A(t)\,dt.$$
This is a closed expression for
$$C=\ln\Big(\Big[\Big[\exp\Big(\int_0^1\overset{\theta}{A}(\theta)\,d\theta\Big)\Big]\Big]\Big)$$
via the operators A(t) and their commutators. Proceeding by analogy with the preceding subsection, we can obtain an expansion of C whose Nth term contains Nth-order commutators for any N. To this end, we expand φ in a Taylor series, thus obtaining
$$C=\int_0^1\sum_{k=0}^{\infty}\frac{1}{k+1}\Big(1-\Big[\Big[\exp\Big(\int_0^{t}\overset{\theta}{\mathrm{ad}}_{A(\theta)}\,d\theta\Big)\Big]\Big]\Big)^{k}A(t)\,dt.$$
Next, we use the Taylor series expansion of the T-exponential:
$$\exp\Big(\int_0^{t}\overset{\theta}{\mathrm{ad}}_{A(\theta)}\,d\theta\Big)=1+\int_0^{t}\overset{\theta}{\mathrm{ad}}_{A(\theta)}\,d\theta+\frac{1}{2!}\Big(\int_0^{t}\overset{\theta}{\mathrm{ad}}_{A(\theta)}\,d\theta\Big)^{2}+\cdots=1+\int_0^{t}\mathrm{ad}_{A(\theta_1)}\,d\theta_1+\int_0^{t}\!\!\int_0^{\theta_1}\mathrm{ad}_{A(\theta_1)}\,\mathrm{ad}_{A(\theta_2)}\,d\theta_2\,d\theta_1+\cdots$$
(in the Nth-order term we divide the integration domain into N! equal simplices on each of which the factors $\mathrm{ad}_{A(\theta_i)}$ appear in a fixed order; it is easy to see that all these simplices give the same contribution, and the factor 1/N! cancels out). On substituting this expansion of the T-exponential into the preceding formula, we obtain the expansion of C in iterated integrals of commutators.
4. The Campbell-Hausdorff Theorem and Dynkin's Formula
c
81
cc) =EE k ±1 (
s
--
o
k
s=0 k=0
6. i- 1
. . . f admoi ) adA(92) ... ado o 0 x
dOil .. . dOi
• • •
f f ad moo adA(92) . .. adoik ) Oh, . . . d61 A(t)dt. 0
0
The sth term of the outer sum is a finite sum of integrals; this sum involves all sth-order commutators involved in the expansion of C. Let us write out a few first terms of the expansion. We have 1
t
1
1 C = f A(t)dt - - f f adA(0)(A(t))d9dt 2
o
—
f f admoo adA(02) (A (t))&92dOidt
w
th
1
t
1
oo
t
000 t
+1 f f
admoo adA(g2 )(A(t))d92dO1dt ± • • • f 000
3 f f or, in terms of commutators,
1
i
t
C = f A(t)dt - f f [A(0), A(t)]dOdt o oo 1 1 t ei -- f f f [A(6i), [A(02), A(01]d62d9idt 2
1 +3
000
i t t
f f f [A(01), [A(92), A(t)11d02d91dt ± • • • , 000
where the dots stand for a sum of integrals of commutators whose order is greater than or equal to three. Let us consider a simple illustrative example.
L Elementary Notions of Noncommutative Analysis
82
Example 1.11 Consider the first-order differential equation
au at
au —
ax
± b(t)xu
a(t)—
with the initial condition
u(x, t)Ir=o = uo(x). The solution of this Cauchy problem can easily be found explicitly by straightforward computation. We consider the equation of characteristics .i = a(t), x(0) = xo;
its solution is
r x (xo, t) = xo ± f a(r)dr.
I Set U (xo, t) = u(x(xo, t), t);
then for U we obtain the equation r
dU
dt = b(t) xo ± f a(r)dr u, o
whence
t U(xo, t) = uo(x0)exp (f
e b(6) (x0 ± f a(r)dr) d6 )
o
o
Making the inverse substitution t xo = x — f a(r)dr, 0
we obtain r
r
r
u(x , t) = uo (x — f a(r)dr) exp o
(f
b(6) (x — f a(r)dr) dO)
o
e
Now, let us compute the same solution by means of the T-exponential. Let u(t) denote the solution u(x , t) considered as the function of t with values in functions of x. We have It
u(t) = exp (frA (z)dr) u(0), o
4. The Campbell—Hausdorff Theorem and Dynkin's Formula
where
83
a
A(r) = a(r)— +b(v)x.
ax
Let us compute the T-exponential for t = to (to is arbitrary) using the formula we have just obtained. We shall see that this leads to the same result. We have
a
a
( 0)— + b(9)x, [A(0), A(t)] =[a ax
± b(t)d= ( a 0)b(t) —
ax
and the commutators of second and higher orders are all zero. Therefore, only two terms are retained in the expansion of the logarithm of the T-exponential. Namely, we have to
log (exp
(f
( to
to ax
0
7
2
o
t
to
+.i1- f
1
—a + (f b(t)dt)x —
f a(t)dt)
0
2
o
o =
t
to
to
1 r A( t)d -r)) = f A(t)dt — — f f [A(0), A(t)]adt
oo to (
f
t
b (t) f a (0)&9 dt
o
o
a
def
a(t) f b(r)d -c dt = C1— ± C2x + A. ax 0
The value of the solution u(x,t) at t = to, say, coincides with the solution at t =1 of the Cauchy problem
{ay _ ,-,1 av at — -- D--i vit=o
= uØ (x )
with coefficients independent of t. We have
( 1 v(x, 1) =
f C2(x —(1 — t)Ci)dt ± X
uo(x — CO exp
o to
to
to
1 f t°a(t)dt) exp { x f b(t)dt — -2- f a(t)dt f b(1)dr
= uo (x
o
o
o
to
t
to
to
1 1 ±2 f a(t) f b(r)dr dt — -2- f a(t) f b(z)dr dt
o
o
o
I. Elementary Notions of Noncommutative Analysis
84
to to to = uo ( — if a(t)dt) exp I f b(t) ( — f a(r)dr) dtl , t o o which coincides with the value obtained by the straightforward method. The method shown in this example is quite useful if for each t the operator A (t) is a linear combination of generators of a nilpotent finite-dimensional Lie algebra,
m
A(t) =
E a., (0 Aj ,
where Ai are the generators. In this case the computation of the T-exponential t 0 exp ( f A(0)d9 is reduced to the computation of exp( E oti (t)Ai (t)), where the o coefficients cei can be obtained by appropriate integrations from the above expansion. Even if the Lie algebra in question is not nilpotent, the problem is greatly simto I plified since we reduce the computation of exp ( f A(0)d0) (operators in an infinite° / dimensional space) to the computation of 1
f
t
(i[exp
o
o (f ad(e)c/81) )
o
which in fact is carried out in the m-dimensional space of the Lie algebra.
5 Summary: Rules of "Operator Arithmetic" and Some Standard Techniques In the preceding sections we have considered a number of notions and statements pertaining to noncommutative analysis. We hope that it is now clear to the reader why noncommutative analysis proves so useful in applications. Here we summarize the main notions and notation introduced and also give a few common techniques, or even "tricks", that are usually employed in noncommutative analysis. So far we have been studying the "operator arithmetic", which comprises the rules used in treating operator expressions and the corresponding notation.
5. Summary: Rules of "Operator Arithmetic" and Some Standard Techniques
85
5.1 Notation The subject of noncommutative analysis is functions of operators, more precisely, numerical functions of numerical arguments substituted by linear operators (or elements of some associative algebra). The definition of such a substitution encounters certain difficulties. First, even for functions of a single argument it is not clear a priori what the substitution means if the function is not a polynomial; second, if there is more than one argument, then an additional ambiguity arises, caused by the fact that a function of commuting numerical arguments cannot in principle carry any information about the arrangement of the arguments. The first difficulty is overcome by the traditional method following from the familiar functional calculus of a single operator. The requirement that the mapping symbol * operator be a continuous homomorphism taking x into A uniquely determines the operator f (A) for each symbol f (x) for an appropriate class of symbol spaces (see Theorem I.1). The second difficulty can be removed by introducing new notation having no counterpart in classical analysis. This new notation includes Feynman indices and autonomous brackets. —
Feynman Indices. In order to determine the order of action for operators occurring in an operator expression, we write an index (a real number) over each of the operators. The factors in all products are arranged so that these indices decrease from left to right (that is, the leftmost operator has the highest Feynman index, and the rightmost, the lowest). For example, 1320
AABC = ABAC. If two operators commute, their indices are allowed to coincide; this does not lead to any ambiguity.
Autonomous Brackets. These are the special brackets [111 used to determine the order of computations in an operator expression. The expression within the brackets is computed first and then is used in the subsequent computations as a single operator (equipped where necessary with a Feynman index, which is written in this case over the left autonomous bracket). In particular, the indices outside and inside the brackets do not affect the action of each other. For example, 112
2
1012
AAB = BA, AffAB]] = ABA; 1
2 ,
(A ± 1
= A2 ± B2 ± 2B A, 2 ,
Br =
(A ± B)2 = A2 ± B2 ± A B B A .
86
I. Elementary Notions of Noncommutative Analysis Nested autonomous brackets are allowed. For example, the expression
1 431 2 AA[[(A + 1[A + B1)21 should be computed as follows. First we compute the inner bracket
C = 1[A + B]]= A ± B, and then the outer one: 1
2,
D = [[(A ± C)]] = A2 + C2 + 2CA
= A2 + (A + B) 2 + 2(A + B)A = 4A 2 ± B2 +3BA ± AB. Finally, the entire expression is computed: 143
AAD = A(4A2 + B2 + 3BA + AB)A = 4A4 + AB 2 A + 3ABA 2 + A2 BA.
5.2
Rules
It follows from the definition of Feynman ordering that it is the mutual order of the Feynman indices, not the indices themselves, that counts. Hence the indices can be changed without affecting the value of an operator expression. Let us now state a rule describing admissible changes of indices. A pair of arguments A, B of an operator expression is said to be critical if the following conditions are satisfied: (a) A and B do not commute, [A, B] 0 O. J
(b) The operator expression cannot be represented in the form E1 ± E2, where A / occurs only in El and B in E2. J
(c) The expression does not contain autonomous brackets such that A is inside the / brackets and B outside the brackets, or vice versa.
The Rule of Changing Feynman Indices. Feynman indices in an expression can be changed arbitrarily provided that for any criticalpair the ordering relation between the Feynman indices of the operators in the pair remains valid.
Examples. 1)
2)
1
2
2
1
12
A+B=A+B (the pair A, B is not critical since condition (b) is violated). 123 112 1 1 f (A, A, B) = f (A, A, B) = g(A, B), where g(x, y) = f (x, x, y). Thus, the changing indices rule includes changing the number of arguments of a symbol by restriction to the diagonal ("moving indices apart", Proposition I.1).
5. Summary: Rules of "Operator Arithmetic" and Some Standard Techniques 5,
321
3)
5
324
87
31
BEA ± BY]] = BEA ± B) 2 1 (the pair B, A is not critical since B and A are separated by an autonomous bracket).
The Rule of Deleting Autonomous Brackets. Autonomous brackets can be removed if for any operator argument within the brackets its Feynman index is in the same relation with the Feynman indices of operators outside the brackets as the index of the brackets themselves. Naturally, this rule also tells us whether we can introduce the autonomous brackets.
Examples. 4
232
1)
4
32
15
1[A(B — B)}B f 3x(B , B) = A(B — B)8 f /8x(B, B). 2
31
2)
15
1
EA ± B) 21I(A
2, —
1
2,1
2,
0 (A ± BY (A — BY
in general, since the Feynman index
1
2
of the A inside the brackets, which have the index 3, is less than that of B outside the brackets. Surprisingly, this is almost all the new notation and rules to be introduced in order to permit us to deal easily with operator expressions.
5.3 Standard Techniques When someone applies differential and integral calculus to solve a particular problem, he or she generally uses a lot of standard techniques such as integration by parts, L'Hôpital's rule, etc., being however scarcely aware of their presence, because they are used often enough to become fairly commonplace. Similarly, noncommutative analysis includes quite a few techniques of its own, which should always be at hand in noncommutative computations. Some of them were already presented in this chapter; and the others will be given in the sequel. Here we recall briefly what we have already seen above. The methods can be divided into a few groups. First, there are some simple transformations following directly from definitions and from the rules of changing Feynman indices and inserting (deleting) autonomous brackets, given in the preceding subsection. It is probably convenient to present them as a number of axioms, as was done in [129 ]. ( pi) (Homogeneity axiom). For any symbol f and number a we have
i[a f (All
nk
ni
A
= (YU (Al
nk
A011.
, nk and m i , . . . , mk be Feynman indices (p,2) (Index changing axiom). Let n 1 , such that ni < ni if and only if mi < mi. Then nk
f
Oh
• • • , Ak) = f
ml
mk
(Ai, • • • , AO.
I. Elementary Notions of Noncommutative Analysis
88
If ni = ni and Ai = Aj = A, then ni
fl
ni
ni
ni
nk
where g(xi, . .. , xi, — • , xi, • • • , xk) = f (xi, • • • , xidix .i =xi • ni
(p,3) (Correspondence axiom). If f (x i , .. • , xk ) = 1, then f (Ai, ni
tity operator); if f (x i , .. . , xk ) =
nk ... , Ak) =
1 (iden-
nk
then f (iii, • • • , Ak) = Aj.
(p,4) (The sum axiom). ni
mi
nk
mi
ni
mi
mi
nk
Ak)± g(B 1, — , BO = U(Ai, ••• , Ak)11+ ffg(B 1, • • • , B 01 (p,5) (Product axiom). If mi < n1 for any i and j, then ni
nk
mi
mi
ni
mi
nk
mi
f (Ai, • . . , Ak)g(M 1, . . . , Bi) = Ef (Ai, ... , Ak)Hg(B 1, • • . , B1)11. ni
nk
(A6) (Zero axiom). If f (Ai, ... , Ak) = 0 and the indices Pi , • • • , PI ,
ri , • - • , rm
satisfy pi < ni and ri > ni for any i and j, then rm Pi P1 ri f (Ai, • . . , Ak)g(B 1, • • • , Bi, Ci, . . . , C m ) = 0 ni
nk
for any generators B 1 , ... , B 1 and C1 , ... , Cm . (T1) (Continuity axiom I). The mapping ni
f (x i , . . . , x k ) i—
nk
f (Ai, . . . , Ak)
is continuous. ni nk (T2) (Continuity axiom II). If fc,(Ai , ... , Ak) —> 0 and the indices Pi, • • • , pi, ri, . .. , rm satisfy the conditions of axiom 04), then ni
nk Pl fa(Al, — , Alag(B1, • • • ,
pi Ti
T,,,
Br, Ci, ••• , C m ) —> 0
for any generators B1 , . . . , B1 and C1 , ... , Cm . Second, there is a group of techniques based on the naturality properties of the mapping symbol 1---> operator, as stated in Theorems 1.4 and 1.5. Actually, the main naturality property is that expressed by item 1 0 in the theorems cited, namely, that if ço:A—>8
5. Summary: Rules of "Operator Arithmetic" and Some Standard Techniques
89
is a homomorphism of operator algebras, then go takes generators into generators and preserves symbols when applied to functions of generators (i.e., go ( f (A)) is the function of B = g)(A) with symbol f, and a similar statement is true of multivariate symbols). Let us recall some of these techniques. (a) Passage to the left (or right) regular representation. The mapping
A 1-->- LA, where LA (B) = AB , B E A
is an algebra homomorphism (a representation of A in the linear space A) and we have f (LA) = Lf(A), 1
n
f (LA i , • • • ,LA n )= L
1 n f(A1,...,An)
(this was proved in Subsection 2.3 independently of Theorems 1.4 and 1.5). Since the representation is faithful (i.e., its kernel is zero), if suffices to reason in terms of the left regular representation operators, which is often much simpler. For example, in proving the derivation formula 23 3f 1
D[f (A)] = DA— (A, A) Sx
the passage to LA permits us to assume that D is an inner derivation (i.e., given by the commutator with some element), since LDA = [D, LAI
(b) The conjugation transformation. If U is an invertible element of an operator algebra A, then the mapping A i-->- UAU -1 !Ad(A) is an automorphism of A. Consequently, for any generator A a generator as well, and we have 1
n
1
E
A the operator UA U -1 is n
UU (Ai, • • • , An)}IU -1 = f (U AiU -1 , — ,U A n U -1 ) 1
n
for any symbol f (y i , ... , yn ) and any Feynman tuple (Al, . .. , An ) of generators. Further examples of using the naturality property can be found in Chapter II.
I. Elementary Notions of Noncommutative Analysis
90
Along with LA, RA, and Ads, it is often helpful to use two other classes of transformations on A; namely, we speak of adA = LA — RA
(Commutation operator) and allA = LA -I- RA
(Anticommutation operator). Suppose that A is a generator; then adA is a generator provided that f (x —y) E Y2 for any f E .F, and the same is true of anA if f (x+y) E .F2 for any f E F.
These operators satisfy numerous remarkable formulas. Here are some of them: e adA _ Ad L (e A ); [adA , adB] = [anA, anB1 = ad[A, /3];
[adA, anB] = an[a,b]; [LA, LB] = [LA, ac1B] = [LA, ani3 ] = L[a,b]•
Third, there is a very useful trick of considering matrix operators. It was widely used in the proofs of theorems in Section 2. Without loss of generality it can be assumed that all operators considered act in linear spaces (this can always be achieved by passing to the left regular representation). Let A be an algebra of operators on V, B an algebra of operators on W, g an (A, B)bimodule of operators from W into V, and D a (B, A)-bimodule of operators from V into W (we do not exclude the case in which V = W and A = B = g = D). We may consider the algebra U of matrix operators ( A D
C B)' A
E
A,
B E B, C E
g,
and D E D,
acting on V ED W. Computations in such an algebra are sometimes much simpler than in the original algebra. This is chiefly due to the following property.
Lemma 1.4 Let A E A and B E B be generators. Then for any C E g the operator (
A
0
c\
B
. i
s a generator in U, and
f ( () A6 CB
)
(f (A) 0
for any symbol f E F.
2 A f3 1 ) C t, (A , B)
f (B)
5. Summary: Rules of "Operator Arithmetic" and Some Standard Techniques
Proof The above formula clearly defines a continuous mapping $\mathcal{F} \to \mathcal{U}$. This mapping takes $1 \mapsto 1$ and $y \mapsto \begin{pmatrix} A & C \\ 0 & B \end{pmatrix}$. It remains to check that the mapping is a homomorphism. We have
$$f\left(\begin{pmatrix} A & C \\ 0 & B\end{pmatrix}\right) g\left(\begin{pmatrix} A & C \\ 0 & B\end{pmatrix}\right) = \begin{pmatrix} f(A) & \overset{2}{C}\,\dfrac{\delta f}{\delta y}(\overset{3}{A},\overset{1}{B}) \\ 0 & f(B)\end{pmatrix} \begin{pmatrix} g(A) & \overset{2}{C}\,\dfrac{\delta g}{\delta y}(\overset{3}{A},\overset{1}{B}) \\ 0 & g(B)\end{pmatrix}.$$
The diagonal entries of the product have the desired form, $f(A)g(A)$ and $f(B)g(B)$; the superdiagonal entry can be transformed as follows:
$$f(A)\,\Bigl[\Bigl[\overset{2}{C}\,\frac{\delta g}{\delta y}(\overset{3}{A},\overset{1}{B})\Bigr]\Bigr] + \Bigl[\Bigl[\overset{2}{C}\,\frac{\delta f}{\delta y}(\overset{3}{A},\overset{1}{B})\Bigr]\Bigr]\, g(B) = \overset{2}{C}\, F(\overset{3}{A},\overset{1}{B}),$$
where
$$F(x,y) = f(x)\,\frac{\delta g}{\delta y}(x,y) + g(y)\,\frac{\delta f}{\delta y}(x,y) = \frac{f(x)g(x) - f(x)g(y) + g(y)f(x) - g(y)f(y)}{x-y} = \frac{\delta(fg)}{\delta y}(x,y). \qquad \square$$
The lemma is proved. Let us consider two examples.

(a) Suppose that the identity
$$ac = cb + d$$
is valid for some elements $a, b, c, d \in \mathcal{A}$, where $a$ and $b$ are generators. In matrix form, we can write out this identity as
$$\begin{pmatrix} a & d \\ 0 & b \end{pmatrix}\begin{pmatrix} 1 & c \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 1 & c \\ 0 & 0 \end{pmatrix}\begin{pmatrix} a & d \\ 0 & b \end{pmatrix},$$
or
$$AC = CA, \qquad \text{where } A = \begin{pmatrix} a & d \\ 0 & b \end{pmatrix} \text{ and } C = \begin{pmatrix} 1 & c \\ 0 & 0 \end{pmatrix}.$$
By Theorem 1.4, 2°, we obtain
$$f(A)\,C = C\,f(A)$$
for any symbol $f$, or, according to Lemma 1.4,
$$\begin{pmatrix} f(a) & \overset{2}{d}\,\dfrac{\delta f}{\delta y}(\overset{3}{a},\overset{1}{b}) \\ 0 & f(b)\end{pmatrix}\begin{pmatrix} 1 & c \\ 0 & 0\end{pmatrix} = \begin{pmatrix} 1 & c \\ 0 & 0\end{pmatrix}\begin{pmatrix} f(a) & \overset{2}{d}\,\dfrac{\delta f}{\delta y}(\overset{3}{a},\overset{1}{b}) \\ 0 & f(b)\end{pmatrix}.$$
Multiplying the matrices on both sides of the last equation and comparing the superdiagonal entries, we get
$$f(a)\,c = c\,f(b) + \overset{2}{d}\,\frac{\delta f}{\delta y}\bigl(\overset{3}{a},\overset{1}{b}\bigr),$$
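This identity is easy to test numerically. The sketch below (our illustration with random matrices, not from the text) takes $f(y) = y^3$, so that $\delta f/\delta y(x,y) = x^2 + xy + y^2$, and places the $a$-powers to the left of $d$ and the $b$-powers to the right, as the Feynman indices $3, 2, 1$ prescribe:

```python
# Numerical check of f(a)c = c f(b) + d * (delta f/delta y)(a, b) for
# f(y) = y^3 on random 3x3 matrices; d is chosen so that ac = cb + d holds.
import numpy as np

rng = np.random.default_rng(1)
a, b, c = (rng.standard_normal((3, 3)) for _ in range(3))
d = a @ c - c @ b                      # enforce ac = cb + d

f = lambda m: m @ m @ m                # f(y) = y^3
# delta f/delta y (x, y) = x^2 + x y + y^2, with d inserted between the
# a-part (acting on the left) and the b-part (acting on the right):
rhs = c @ f(b) + (a @ a @ d + a @ d @ b + d @ b @ b)
assert np.allclose(f(a) @ c, rhs)
```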
that is, we have obtained yet another proof of Theorem 1.6.

(b) Our second example pertains to Lie algebras and employs $n \times n$ rather than $2 \times 2$ matrices. Operators $A_1, \ldots, A_n \in \mathcal{A}$ are said to form a representation of a Lie algebra if
$$[A_i, A_j] = \sum_{l=1}^{n} \lambda_{ijl}\, A_l,$$
where the $\lambda_{ijl}$ are numbers, called the structural constants of the Lie algebra. These commutation relations can be rewritten in a very simple form if we introduce the column
$$\mathbf{A} = \begin{pmatrix} A_1 \\ \vdots \\ A_n \end{pmatrix}$$
and the numerical matrices $\Lambda_j = \|\lambda_{kjl}\|_{k,l=1}^{n}$. With this notation, the Lie commutation relations take the form
$$\mathbf{A}\, A_j = (A_j + \Lambda_j)\,\mathbf{A},$$
which permits one to obtain various permutation formulas easily; for example, we have, by Theorem 1.4,
$$\mathbf{A}\, f(A_j) = f(A_j + \Lambda_j)\,\mathbf{A}.$$
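The matrix form of the commutation relations can be checked componentwise: the $k$-th entry of $\mathbf{A}A_j = (A_j + \Lambda_j)\mathbf{A}$ is just $A_kA_j = A_jA_k + \sum_l \lambda_{kjl}A_l$. A minimal numerical sketch, using the 3-dimensional Heisenberg algebra as an assumed illustrative realization (the structural constants `lam` and the elementary-matrix realization are our choices):

```python
# Componentwise check of  𝐀 A_j = (A_j + Λ_j) 𝐀  for the Heisenberg algebra
# realized by elementary matrices: A_1 = E_{12}, A_2 = E_{23}, A_3 = E_{13},
# so that [A_1, A_2] = A_3 and all other brackets vanish (illustrative).
import numpy as np

E = lambda i, j: np.eye(3)[:, [i]] @ np.eye(3)[[j], :]   # elementary matrix
Aops = [E(0, 1), E(1, 2), E(0, 2)]                       # A_1, A_2, A_3

lam = np.zeros((3, 3, 3))            # [A_i, A_j] = sum_l lam[i, j, l] A_l
lam[0, 1, 2], lam[1, 0, 2] = 1.0, -1.0

for j in range(3):
    Lam_j = lam[:, j, :]             # (Λ_j)_{kl} = λ_{kjl}
    for k in range(3):
        lhs = Aops[k] @ Aops[j]      # k-th entry of 𝐀 A_j
        rhs = Aops[j] @ Aops[k] + sum(Lam_j[k, l] * Aops[l] for l in range(3))
        assert np.allclose(lhs, rhs)
```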
Finally, let us note that the techniques of noncommutative analysis widely use the difference derivative in conjunction with moving indices apart; this can be seen throughout this chapter.
Of course, there are also numerous formulas such as those derived in Section 3. However, it would be redundant to list all these formulas here, and we refrain from doing so, with the exception of the following ones:
$$\frac{d}{dt}\, f(A(t)) = \overset{2}{A'(t)}\,\frac{\delta f}{\delta y}\Bigl(\overset{1}{A(t)},\,\overset{3}{A(t)}\Bigr)$$
(Daletskii–Krein formula);
$$f(C) = f(A) + \sum_{k=1}^{N-1} \overset{2}{[[C-A]]}\,\overset{4}{[[C-A]]}\cdots\overset{2k}{[[C-A]]}\;\frac{\delta^k f}{\delta y^k}\Bigl(\overset{1}{A},\overset{3}{A},\ldots,\overset{2k+1}{A}\Bigr) + \overset{2}{[[C-A]]}\cdots\overset{2N}{[[C-A]]}\;\frac{\delta^N f}{\delta y^N}\Bigl(\overset{2N+1}{C},\overset{1}{A},\overset{3}{A},\ldots,\overset{2N-1}{A}\Bigr)$$
(Newton formula);
$$f(C) = f(A) + \sum_{k=1}^{N-1} \frac{1}{k!}\, f^{(k)}(A)\,\bigl[\bigl[(C-A)^k\bigr]\bigr] + \bigl[\bigl[(C-A)^N\bigr]\bigr]\,\frac{\delta^N f}{\delta y^N}\bigl(C, A, \ldots, A\bigr)$$
(Taylor formula; here we do not reproduce the Feynman indices over the entries, which are arranged as in the corresponding formula of Section 3);
$$f(\overset{1}{A},\overset{2}{B}) = f(\overset{2}{A},\overset{1}{B}) + \overset{2}{[B,A]}\;\frac{\delta^2 f}{\delta y_1\,\delta y_2}\Bigl(\overset{1}{A},\overset{3}{A};\,\overset{1}{B},\overset{3}{B}\Bigr)$$
(index permutation formula);
$$f\Bigl(\bigl[\bigl[g(\overset{1}{A},\overset{2}{B})\bigr]\bigr]\Bigr) = f\bigl(g(\overset{1}{A},\overset{2}{B})\bigr) + [A,B]\,\frac{\delta g}{\delta y_1}(A,A,B)\,\frac{\delta g}{\delta y_2}(A,B,B)\,\frac{\delta^2 f}{\delta y^2}\bigl(\bigl[\bigl[g(A,B)\bigr]\bigr],\, g(A,B),\, g(A,B)\bigr);$$
$$f\bigl(\bigl[\bigl[\overset{1}{A} + \overset{2}{B}\bigr]\bigr]\bigr) = f(\overset{1}{A} + \overset{2}{B}) + [A,B]\,\frac{\delta^2 f}{\delta y^2}\bigl([[A+B]],\,[[A+B]],\, A+B\bigr) = f(\overset{1}{A} + \overset{2}{B}) + [A,B]\,\frac{\delta^2 f}{\delta y^2}\bigl([[A+B]],\, A+B,\, A+B\bigr)$$
(composite function formulas; the Feynman indices over the entries on the right-hand sides, which we do not reproduce here, are arranged as in Section 3);
$$f(\operatorname{ad}_B)(A) = f\bigl(\overset{3}{B} - \overset{1}{B}\bigr)\,\overset{2}{A}$$
(functions of $\operatorname{ad}_B$).
Chapter II
Method of Ordered Representation
1 Ordered Representation: Definition and Main Property

1.1 Wick Normal Form

We begin with an example. In Subsection 1.2 we have posed the problem of calculating the Wick normal form for operators acting on a Hilbert state space, and in fact the answer was given for a rather particular case in Subsection 1.6. Let us recall the matter and clarify the subject. For simplicity, suppose that we deal with the case in which there is only one basic state. So, we are given a Hilbert space $\mathcal{H}$ and two operators, $a^+$ and $a^-$, on $\mathcal{H}$ satisfying the commutation relation
$$[a^-, a^+] \equiv a^-a^+ - a^+a^- = I,$$
where $I$ is the identity operator on $\mathcal{H}$; the operators $a^+$ and $a^-$ will be referred to as the creation and the annihilation operator, respectively. For example, we can take $\mathcal{H} = L^2(\mathbb{R}^1)$ and
$$a^{\pm} = (2\hbar\omega)^{-1/2}\Bigl(\omega x \mp \hbar\frac{\partial}{\partial x}\Bigr),$$
where $\hbar$ and $\omega$ are positive parameters, as was done in Subsection 1.6 in considering the eigenvalue problem for the quantum-mechanical oscillator. The Wick normal form of an operator $A$ acting on $\mathcal{H}$ is a representation of $A$ in the form of a function of the Feynman tuple $(\overset{2}{a^+}, \overset{1}{a^-})$, namely,
$$A = f(\overset{2}{a^+}, \overset{1}{a^-}),$$
where
$$f(z,w) = \sum_{\alpha,\beta \ge 0} c_{\alpha\beta}\, z^{\alpha} w^{\beta}$$
is a function of two arguments, which we call the Wick symbol of $A$ (we avoid discussing exact requirements on the symbols and assume that $f(\overset{2}{a^+},\overset{1}{a^-})$ is defined as
$$f(\overset{2}{a^+},\overset{1}{a^-}) = \sum_{\alpha,\beta\ge 0} c_{\alpha\beta}\,(a^+)^{\alpha}(a^-)^{\beta},$$
where the series is assumed to converge, say in the weak sense, on some dense subset of $\mathcal{H}$).

Given an operator $A$ on $\mathcal{H}$, how can we reduce it to the Wick normal form? If $A$ is arbitrary, this question is rather difficult. Therefore, we will not consider this question in full generality, but will rather consider a particular class of operators $A$, namely, the operators which can be represented as functions of several appropriately numbered occurrences of $a^+$ and $a^-$. First of all, consider the problem of reducing to the Wick normal form the product $A = A_1A_2$, where $A_1$ and $A_2$ are already in normal form,
$$A_1 = f_1(\overset{2}{a^+},\overset{1}{a^-}), \qquad A_2 = f_2(\overset{2}{a^+},\overset{1}{a^-}).$$
In a particular case ($A_2$ is arbitrary and $A_1 = a^+a^-$) this problem has already been solved in Subsection 1.6. However, the general case is not much more complicated. Indeed, suppose for the moment that $f_1(z,w) = z$ or $f_1(z,w) = w$. For the former, we have
$$a^+\Bigl[\Bigl[f_2(\overset{2}{a^+},\overset{1}{a^-})\Bigr]\Bigr] = \overset{3}{a^+}\, f_2(\overset{2}{a^+},\overset{1}{a^-}) = \overset{2}{a^+}\, f_2(\overset{2}{a^+},\overset{1}{a^-}) = g(\overset{2}{a^+},\overset{1}{a^-}),$$
where $g(z,w) = z\,f_2(z,w)$. For the latter, the calculation is a little longer, since we have to move the operator $\overset{3}{a^-}$ to the first position. In doing so, we use the permutation formula:
$$a^-\Bigl[\Bigl[f_2(\overset{2}{a^+},\overset{1}{a^-})\Bigr]\Bigr] = \overset{3}{a^-}\, f_2(\overset{2}{a^+},\overset{1}{a^-}) = \overset{1}{a^-}\, f_2(\overset{2}{a^+},\overset{1}{a^-}) + \overset{3}{[a^-,a^+]}\,\frac{\delta f_2}{\delta z}\Bigl(\overset{2}{a^+},\overset{4}{a^+},\overset{1}{a^-}\Bigr).$$
However,
$$[a^-, a^+] = I,$$
and so we can continue this chain of equations by identifying the two $a^+$ arguments in $\delta f_2/\delta z$, which turns the difference derivative into the ordinary partial derivative:
$$a^-\Bigl[\Bigl[f_2(\overset{2}{a^+},\overset{1}{a^-})\Bigr]\Bigr] = \overset{1}{a^-}\, f_2(\overset{2}{a^+},\overset{1}{a^-}) + \frac{\partial f_2}{\partial z}(\overset{2}{a^+},\overset{1}{a^-}) = h(\overset{2}{a^+},\overset{1}{a^-}),$$
where
$$h(z,w) = w\, f_2(z,w) + \frac{\partial f_2}{\partial z}(z,w).$$
Denote by $l^+$ and $l^-$ the operators taking $f_2$ into $g$ and $h$, respectively, i.e.,
$$l^+ = z, \qquad l^- = w + \frac{\partial}{\partial z}.$$
Thus, we have
$$a^+\Bigl[\Bigl[f(\overset{2}{a^+},\overset{1}{a^-})\Bigr]\Bigr] = (l^+f)(\overset{2}{a^+},\overset{1}{a^-}), \qquad a^-\Bigl[\Bigl[f(\overset{2}{a^+},\overset{1}{a^-})\Bigr]\Bigr] = (l^-f)(\overset{2}{a^+},\overset{1}{a^-}).$$
The operators $l^+$ and $l^-$ characterized by this property are called the left ordered representation operators for the Feynman tuple $(\overset{2}{a^+}, \overset{1}{a^-})$.
As soon as these operators are evaluated, the problem of reducing the product of operators to the Wick normal form becomes trivial. Indeed, let
$$f_1(z,w) = \sum_{\alpha,\beta} b_{\alpha\beta}\, z^{\alpha}w^{\beta}.$$
Then
$$\Bigl[\Bigl[f_1(\overset{2}{a^+},\overset{1}{a^-})\Bigr]\Bigr]\,\Bigl[\Bigl[f_2(\overset{2}{a^+},\overset{1}{a^-})\Bigr]\Bigr] = \sum_{\alpha,\beta} b_{\alpha\beta}\,(a^+)^{\alpha}(a^-)^{\beta}\,\Bigl[\Bigl[f_2(\overset{2}{a^+},\overset{1}{a^-})\Bigr]\Bigr].$$
We can split off the operators $a^-$ and $a^+$ in the products $(a^+)^{\alpha}(a^-)^{\beta}$ one by one. As a result, each of them will be replaced by the corresponding operator $l^+$ or $l^-$ acting on $f_2$, and we obtain
$$\Bigl[\Bigl[f_1(\overset{2}{a^+},\overset{1}{a^-})\Bigr]\Bigr]\,\Bigl[\Bigl[f_2(\overset{2}{a^+},\overset{1}{a^-})\Bigr]\Bigr] = \sum_{\alpha,\beta} b_{\alpha\beta}\,\bigl((l^+)^{\alpha}(l^-)^{\beta} f_2\bigr)(\overset{2}{a^+},\overset{1}{a^-}) = \bigl[f_1(\overset{2}{l^+},\overset{1}{l^-})\,f_2\bigr](\overset{2}{a^+},\overset{1}{a^-}) \tag{II.1}$$
(we do not discuss the convergence of the series). Thus, the symbol of the product
$$\Bigl[\Bigl[f_1(\overset{2}{a^+},\overset{1}{a^-})\Bigr]\Bigr]\,\Bigl[\Bigl[f_2(\overset{2}{a^+},\overset{1}{a^-})\Bigr]\Bigr]$$
can be calculated according to the following recipe: take the left ordered representation operators $l^+$ and $l^-$ and substitute them into the symbol $f_1$; then apply the obtained operator to the symbol $f_2$.

We see that the introduction of left ordered representation operators in this example enables us to avoid direct calculations with the operators $a^+$ and $a^-$ and with functions of these operators. Instead, we consider symbols and operators acting on symbols. In fact, this is the basic idea of the method of ordered representations. Let us now discuss this idea and then proceed to general definitions.
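The recipe (II.1) can be tried out symbolically. The sketch below (our illustration) uses the realization $a^+ = $ multiplication by $x$, $a^- = d/dx$ on polynomials, for which $[a^-, a^+] = 1$ holds, and checks the twisted product $f_1(\overset{2}{l^+},\overset{1}{l^-})f_2$ against the actual operator product; the helper names `op`, `lplus`, `lminus`, and `star` are ours:

```python
# A sympy sketch of recipe (II.1) for the Wick normal form, in the assumed
# realization a+ = x*, a- = d/dx (so [a-, a+] = 1).
import sympy as sp

x, z, w = sp.symbols('x z w')

def op(f, u):
    """Apply f(a+^2, a-^1) = sum c_ab (a+)^a (a-)^b to a polynomial u(x)."""
    out = 0
    for (a, b), c in sp.Poly(f, z, w).terms():
        out += c * x**a * sp.diff(u, x, b)
    return sp.expand(out)

lplus = lambda g: z * g                      # l+ = z
lminus = lambda g: w * g + sp.diff(g, z)     # l- = w + d/dz

def star(f1, f2):
    """Twisted product f1(l+^2, l-^1) f2: l- powers act first, then l+."""
    out = 0
    for (a, b), c in sp.Poly(f1, z, w).terms():
        g = f2
        for _ in range(b):
            g = lminus(g)
        for _ in range(a):
            g = lplus(g)
        out += c * g
    return sp.expand(out)

f1 = z**2 * w + w**2
f2 = z * w + z**3
u = x**5 + 2 * x**2
lhs = op(f1, op(f2, u))          # operator product applied to u
rhs = op(star(f1, f2), u)        # operator with the composed Wick symbol
assert sp.simplify(lhs - rhs) == 0
```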
1.2 Ordered Representation and Theorem on Products

Let $A = (\overset{1}{A_1},\ldots,\overset{n}{A_n})$ be a Feynman tuple in some operator algebra $\mathcal{A}$. We are interested in the problem of reducing an operator $A \in \mathcal{A}$ to a "normal form". What is a normal form? By this we mean a representation
$$B = f(\overset{1}{A_1},\ldots,\overset{n}{A_n}),$$
i.e., an operator is in normal form if it is represented as a function of the Feynman tuple $A$. Let us point out that we make no attempt to reduce arbitrary operators to normal form. Our task is far less general. We start from some operators already in normal form, perform some algebraic operations, and try to express the result in normal form. Clearly, the main difficulty is to represent, in the form desired, the product of two operators and the superposition $f([[g(A)]])$. Once this is done, we will actually pass from analysis in the algebra $\mathcal{A}$ to analysis in symbol spaces.

Let us now give the precise definition. We assume that a class $\mathcal{F}$ of unary symbols is fixed and $\mathcal{F}_n = \mathcal{F}^{\hat\otimes n}$. Let $A = (\overset{1}{A_1},\ldots,\overset{n}{A_n})$ be a Feynman tuple in an algebra $\mathcal{A}$ and suppose that each $A_i$ is an $\mathcal{F}$-generator in $\mathcal{A}$.

Definition II.1 A Feynman tuple $l = (\overset{1}{l_1},\ldots,\overset{n}{l_n})$ of operators
$$l_j : \mathcal{F}_n \to \mathcal{F}_n, \qquad j = 1,\ldots,n,$$
is called the left ordered representation of the tuple $A = (\overset{1}{A_1},\ldots,\overset{n}{A_n})$ if the following two conditions hold:

i) for any $f \in \mathcal{F}_n$ and $j = 1,\ldots,n$ we have
$$A_j\,\Bigl[\Bigl[f(\overset{1}{A_1},\ldots,\overset{n}{A_n})\Bigr]\Bigr] = (l_j f)(\overset{1}{A_1},\ldots,\overset{n}{A_n});$$

ii) if a symbol $f \in \mathcal{F}_n$ does not depend on $y_{j+1},\ldots,y_n$, $f = f(y_1,\ldots,y_j)$, then
$$l_j f = y_j f. \tag{II.2}$$

Condition ii) will be referred to as the regularity condition. In view of i), this condition is quite natural; indeed,
$$A_j\,\Bigl[\Bigl[f(\overset{1}{A_1},\ldots,\overset{j}{A_j})\Bigr]\Bigr] = \overset{j+1}{A_j}\, f(\overset{1}{A_1},\ldots,\overset{j}{A_j}) = \overset{j}{A_j}\, f(\overset{1}{A_1},\ldots,\overset{j}{A_j}),$$
so that the symbol of this product can be chosen equal to $y_j f(y_1,\ldots,y_j)$, and condition ii) simply says that the operators of the left ordered representation are consistent with this natural choice.¹

We intend to establish a theorem generalizing (II.1) to the "abstract" situation of Definition II.1. Since we have no reason to assume that it is possible to insert the operators $l_i$ into arbitrary symbols $f \in \mathcal{F}_n$, we need to be extremely careful in our statement. Let $\tilde{\mathcal{F}} \subset \mathcal{F}$ be a continuously embedded subalgebra (we do not exclude the case $\tilde{\mathcal{F}} = \mathcal{F}$). Assume that the operators $l_i$ are $\tilde{\mathcal{F}}$-generators in $\mathcal{L}(\mathcal{F}_n)$. Such a subalgebra can always be found; in the worst case, it still contains all polynomials. Clearly, $\tilde{\mathcal{F}}_n$ is a continuously embedded subspace in the symbol space $\mathcal{F}_n$.

¹ It is essential to impose this requirement since the uniqueness of the symbol of a given operator is not assumed anywhere.
Theorem II.1 Under the above assumptions any product
$$\Bigl[\Bigl[f(\overset{1}{A_1},\ldots,\overset{n}{A_n})\Bigr]\Bigr]\,\Bigl[\Bigl[g(\overset{1}{A_1},\ldots,\overset{n}{A_n})\Bigr]\Bigr],$$
where $f \in \tilde{\mathcal{F}}_n$ and $g \in \mathcal{F}_n$, can be reduced to normal form. Namely,
$$\Bigl[\Bigl[f(\overset{1}{A_1},\ldots,\overset{n}{A_n})\Bigr]\Bigr]\,\Bigl[\Bigl[g(\overset{1}{A_1},\ldots,\overset{n}{A_n})\Bigr]\Bigr] = \bigl[f(\overset{1}{l_1},\ldots,\overset{n}{l_n})\,g\bigr](\overset{1}{A_1},\ldots,\overset{n}{A_n}).$$
Proof In the following we will widely use the more convenient notation $f(A)$ for $f(\overset{1}{A_1},\ldots,\overset{n}{A_n})$; with this notation, the preceding equation reads
$$[[f(A)]]\,[[g(A)]] = [f(l)\,g](A),$$
or even $f(A) \circ g(A) = [f(l)\,g](A)$ (the small circle on the left-hand side serves as a substitute for autonomous brackets, which look somewhat clumsy in combination with the abridged notation). Condition i) can be rewritten as the commutative diagram
$$\begin{array}{ccc} \mathcal{F}_n & \xrightarrow{\ \mu\ } & \mathcal{A} \\ \big\downarrow\, l_j & & \big\downarrow\, L_{A_j} \\ \mathcal{F}_n & \xrightarrow{\ \mu\ } & \mathcal{A}, \end{array}$$
where $\mu = \mu_A$ is the linear mapping sending each symbol $f(x_1,\ldots,x_n)$ into the corresponding operator $f(\overset{1}{A_1},\ldots,\overset{n}{A_n})$ and $L_{A_j}$ is the operator of left regular representation, $L_{A_j}B = A_jB$ for any $B \in \mathcal{A}$. The operators $l_j$ and $L_{A_j}$ are $\tilde{\mathcal{F}}$-generators in $\mathcal{L}(\mathcal{F}_n)$ and $\mathcal{L}(\mathcal{A})$, respectively, and the continuous mapping $\mu$ intertwines the tuples $l = (\overset{1}{l_1},\ldots,\overset{n}{l_n})$ and $L_A = (\overset{1}{L_{A_1}},\ldots,\overset{n}{L_{A_n}})$, as expressed by the diagram. By Theorem 1.4, we have
$$f(L_A)\,\mu = \mu\, f(l)$$
for any symbol $f \in \tilde{\mathcal{F}}_n$. Let us apply both sides of the last equation to an arbitrary symbol $g \in \mathcal{F}_n$. We obtain
$$f(L_A)\,(g(A)) = (f(l)\,g)(A),$$
whence the statement of the theorem follows immediately, since
$$f(L_A)\,(g(A)) = f(A)\,g(A)$$
(see Subsection 2.3). The proof is complete. $\square$
1.3 Reduction to Normal Form

We now consider the case $\tilde{\mathcal{F}} = \mathcal{F}$. In this case the statement of Theorem II.1 can be given the following interpretation. We introduce a bilinear operation $*$ on $\mathcal{F}_n$ by setting
$$f * g = f(l)\,g.$$
This operation will be referred to as the "twisted product". It makes $\mathcal{F}_n$ into an algebra (possibly nonassociative; the associativity conditions will be discussed later in this chapter). Then Theorem II.1 states that the mapping
$$\mu : (\mathcal{F}_n, *) \to \mathcal{A}$$
(where $\mathcal{A}$ is equipped with the usual operator multiplication) is an algebra homomorphism. Next, assuming that $\tilde{\mathcal{F}} = \mathcal{F}$, we can give a somewhat more elegant statement of condition ii) in Definition II.1.
Lemma II.1 If all $l_j$ are $\mathcal{F}$-generators in $\mathcal{F}_n$, then condition ii) in Definition II.1 is equivalent to the following condition:

ii') for any $g \in \mathcal{F}_n$ one has
$$g(\overset{1}{l_1},\ldots,\overset{n}{l_n})\,1 = g(y_1,\ldots,y_n),$$
where the operator on the left is applied to the symbol identically equal to 1.

Proof ii') $\Rightarrow$ ii). Let $f(y_1,\ldots,y_j) \in \mathcal{F}_n$; set
$$g(y_1,\ldots,y_j) = y_j\, f(y_1,\ldots,y_j).$$
By condition ii'), we have
$$y_j\, f(y_1,\ldots,y_j) = g(\overset{1}{l_1},\ldots,\overset{j}{l_j})\,1 = \overset{j+1}{l_j}\, f(\overset{1}{l_1},\ldots,\overset{j}{l_j})\,1 = l_j\bigl(f(\overset{1}{l_1},\ldots,\overset{j}{l_j})\,1\bigr) = l_j\, f(y_1,\ldots,y_j),$$
the last equality also being due to ii').

ii) $\Rightarrow$ ii'). It suffices to prove ii') for decomposable symbols of the form
$$g(y_1,\ldots,y_n) = g_1(y_1)\cdots g_n(y_n).$$
Condition ii) implies that the subspace $\mathcal{F}_j \subset \mathcal{F}_n$ consisting of symbols independent of $y_{j+1},\ldots,y_n$ is an invariant subspace of the operator $l_j$, and the restriction $l_j|_{\mathcal{F}_j}$ coincides with multiplication by $y_j$; that is, there is a commutative diagram
$$\begin{array}{ccc} \mathcal{F}_j & \longrightarrow & \mathcal{F}_n \\ \big\downarrow\, y_j & & \big\downarrow\, l_j \\ \mathcal{F}_j & \longrightarrow & \mathcal{F}_n, \end{array}$$
where the horizontal arrows are the embeddings. Furthermore, $l_j$ is an $\mathcal{F}$-generator in $\mathcal{F}_n$ by assumption, and $y_j$ is an $\mathcal{F}$-generator in $\mathcal{F}_j$ (for any $f \in \mathcal{F}$ the corresponding function of $y_j$ is merely the operator of multiplication by $f(y_j)$). Hence the embedding $\mathcal{F}_j \to \mathcal{F}_n$ intertwines the $\mathcal{F}$-generators $y_j$ and $l_j$, and by Theorem 1.4 we have
$$f(l_j)\big|_{\mathcal{F}_j} = f(y_j)$$
for any $f \in \mathcal{F}$. We now apply this identity successively for $j = 1, 2, \ldots, n$ and obtain
$$g(\overset{1}{l_1},\ldots,\overset{n}{l_n})\,1 = g_n(l_n)\cdots g_2(l_2)\,g_1(l_1)(1) = g_n(l_n)\cdots g_2(l_2)\,g_1(y_1) = g_n(l_n)\cdots g_3(l_3)\,g_2(y_2)\,g_1(y_1) = \cdots = g(y_1,\ldots,y_n).$$
This equation extends by continuity to the entire symbol space $\mathcal{F}_n$. The lemma is proved. $\square$

We have considered the problem of reduction to normal form for the product $C_1C_2$ of two operators in normal form. Let us now consider the general case. Let $i_1,\ldots,i_s$ and $j_1,\ldots,j_s$ be two sequences of indices such that $j_l \ne j_r$ whenever $[A_{i_l}, A_{i_r}] \ne 0$, and let $\varphi(y_1,\ldots,y_s)$ be a given symbol. We consider the operator
$$C = \varphi\bigl(\overset{j_1}{A_{i_1}},\ldots,\overset{j_s}{A_{i_s}}\bigr)$$
and try to reduce it to normal form.
Theorem II.2 Suppose that the left ordered representation $l = (\overset{1}{l_1},\ldots,\overset{n}{l_n})$ of the Feynman tuple $A = (\overset{1}{A_1},\ldots,\overset{n}{A_n})$ exists and that each $l_j$ is an $\mathcal{F}$-generator in $\mathcal{F}_n$. Assume that $\varphi(y_1,\ldots,y_s) \in \mathcal{F}_s$. Then the operator $C$ can be reduced to normal form. Namely,
$$\varphi\bigl(\overset{j_1}{A_{i_1}},\ldots,\overset{j_s}{A_{i_s}}\bigr) = \bigl\{\varphi\bigl(\overset{j_1}{l_{i_1}},\ldots,\overset{j_s}{l_{i_s}}\bigr)\,1\bigr\}(A).$$
Proof By analogy with the proof of Theorem II.1, we see that the mapping
$$\mu : \mathcal{F}_n \to \mathcal{A}, \qquad f \mapsto f(\overset{1}{A_1},\ldots,\overset{n}{A_n}),$$
intertwines the Feynman tuples $(\overset{j_1}{L_{A_{i_1}}},\ldots,\overset{j_s}{L_{A_{i_s}}})$ and $(\overset{j_1}{l_{i_1}},\ldots,\overset{j_s}{l_{i_s}})$. Therefore, we obtain
$$\varphi\bigl(\overset{j_1}{A_{i_1}},\ldots,\overset{j_s}{A_{i_s}}\bigr) = \varphi\bigl(\overset{j_1}{L_{A_{i_1}}},\ldots,\overset{j_s}{L_{A_{i_s}}}\bigr)(1) = \varphi\bigl(\overset{j_1}{L_{A_{i_1}}},\ldots,\overset{j_s}{L_{A_{i_s}}}\bigr)\,\mu(1) = \mu\bigl\{\varphi\bigl(\overset{j_1}{l_{i_1}},\ldots,\overset{j_s}{l_{i_s}}\bigr)(1)\bigr\} = f(\overset{1}{A_1},\ldots,\overset{n}{A_n}),$$
where
$$f(y_1,\ldots,y_n) = \varphi\bigl(\overset{j_1}{l_{i_1}},\ldots,\overset{j_s}{l_{i_s}}\bigr)(1).$$
The theorem is proved. $\square$
Remark II.1 We cannot guarantee that the symbol $f(y_1,\ldots,y_n)$ lies in $\tilde{\mathcal{F}}_n$, since there is no reason for $\tilde{\mathcal{F}}_n$ to be invariant under functions of $l$. However, the paradox is that we can still compute products of the form $f(A) \circ g(A)$ with such a symbol $f$; we need only bear in mind that $f(A)$ can be represented as $\varphi(\overset{j_1}{A_{i_1}},\ldots,\overset{j_s}{A_{i_s}})$, and so
$$f(A) \circ g(A) = \bigl\{\varphi\bigl(\overset{j_1}{l_{i_1}},\ldots,\overset{j_s}{l_{i_s}}\bigr)\,g\bigr\}(A).$$

Let us now consider how to perform reduction to normal form for composite functions. Let $f(z)$ be a function of a single variable $z$, let $g \in \mathcal{F}_n$ be an $n$-ary symbol, and suppose that the tuple $A = (\overset{1}{A_1},\ldots,\overset{n}{A_n})$ of $\mathcal{F}$-generators has a left ordered representation $l = (\overset{1}{l_1},\ldots,\overset{n}{l_n})$ consisting of $\mathcal{F}$-generators.

Theorem II.3 Let $f \in \hat{\mathcal{F}}$, where $\hat{\mathcal{F}}$ is some symbol class, and suppose that the operators $g(A)$ and $g(l)$ are $\hat{\mathcal{F}}$-generators in $\mathcal{A}$ and $\mathcal{L}(\mathcal{F}_n)$, respectively. Then
$$f\bigl([[g(A)]]\bigr) = \bigl(f\bigl([[g(l)]]\bigr)\,1\bigr)(A).$$
In other words, $f([[g(A)]])$ can be reduced to normal form, $f([[g(A)]]) = \varphi(A)$, where the symbol $\varphi(y) \in \mathcal{F}_n$ is given by the action of the operator $f([[g(l)]])$ on the constant function $1 \in \mathcal{F}_n$.
In other words, f (Eg(A)1) can be reduced to normal form, f (Eg(A)1) = q)(A), where the symbol q)(y) E Fn is given by the action of the operator f a[g(1)1) on the constant function 1 e F. Proof This is quite standard stuff. Indeed, the mapping p,A clearly intertwines g(1) and g(L A), that is, the diagram
AA
-Fn
A
1 AA
Fn
g(LA)
A
commutes. Next, g(1) and g(LA) are P-generators in £(Fn ) and L(A), respectively, so that we have, by Theorem 1.4, AA
f alg (MD = f (1Ig(L A)1) /1'A
1. Ordered Representation: Definition and Main Property
103
and consequently,
f ([[g(A)]]) =
Lf(ff g (A )]) U)
= f (L g (A) ) (1) = f (L g(A)),u,A(1)
= f (ffg(L A )JI) P, A (1) = Ail f (ffg(1)1) (1), whence the assertion of the theorem follows immediately. The proof is complete.
0
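In the Wick example of Subsection 1.1, Theorem II.3 can be checked directly for, say, $g(z,w) = z + w$ and $f(y) = y^2$: the symbol of $(a^+ + a^-)^2$ should be $(z + w + \partial/\partial z)^2\,1 = (z+w)^2 + 1$. A small sympy sketch (our illustration, with $a^+ = $ multiplication by $x$ and $a^- = d/dx$; helper names are ours):

```python
# Check of Theorem II.3 in the Wick example: f(y) = y^2, g(z,w) = z + w.
import sympy as sp

x, z, w = sp.symbols('x z w')

def op(f, u):
    """Apply f(a+^2, a-^1) to a polynomial u(x): a+ = x*, a- = d/dx."""
    out = 0
    for (a, b), c in sp.Poly(f, z, w).terms():
        out += c * x**a * sp.diff(u, x, b)
    return sp.expand(out)

gl = lambda s: sp.expand(z * s + w * s + sp.diff(s, z))  # [[g(l)]] = l+ + l-
phi = gl(gl(sp.Integer(1)))          # f([[g(l)]]) 1  for  f(y) = y^2
assert sp.expand(phi - (z + w)**2 - 1) == 0

ap_am = lambda v: sp.expand(x * v + sp.diff(v, x))       # (a+ + a-) v
u = x**4
lhs = ap_am(ap_am(u))                # (a+ + a-)^2 u
rhs = op(phi, u)                     # operator with Wick symbol phi
assert sp.expand(lhs - rhs) == 0
```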
We have defined the left ordered representation of a Feynman tuple $(\overset{1}{A_1},\ldots,\overset{n}{A_n})$. For symmetry reasons, it is clear that one can define the right ordered representation in a similar way.

Definition II.2 A Feynman tuple $r = (\overset{-1}{r_1},\ldots,\overset{-n}{r_n})$ of operators
$$r_j : \mathcal{F}_n \to \mathcal{F}_n, \qquad j = 1,\ldots,n,$$
is called the right ordered representation of the tuple $A = (\overset{1}{A_1},\ldots,\overset{n}{A_n})$ if the following two conditions hold:

i) for any $f \in \mathcal{F}_n$ and $j = 1,\ldots,n$ we have
$$\Bigl[\Bigl[f(\overset{1}{A_1},\ldots,\overset{n}{A_n})\Bigr]\Bigr]\,A_j = (r_j f)(\overset{1}{A_1},\ldots,\overset{n}{A_n});$$

ii) if a symbol $f = f(y_1,\ldots,y_n) \in \mathcal{F}_n$ does not depend on $y_1,\ldots,y_{j-1}$, then
$$r_j f = y_j f.$$

Remark II.2 Note that the Feynman indices of the tuple $r$ have the minus sign. This is because the natural ordering of these operators is the opposite of that of $A_1,\ldots,A_n$.

Let us state, without proof (which follows the lines of that for the left ordered representation), the main properties of the right ordered representation.

Theorem II.4 Let the operators $r_1,\ldots,r_n$ of the right ordered representation be $\tilde{\mathcal{F}}$-generators for some continuously embedded subalgebra $\tilde{\mathcal{F}} \subset \mathcal{F}$. Then

(i) for any $f \in \mathcal{F}_n$ and $g \in \tilde{\mathcal{F}}_n$ we have
$$\Bigl[\Bigl[f(\overset{1}{A_1},\ldots,\overset{n}{A_n})\Bigr]\Bigr]\,\Bigl[\Bigl[g(\overset{1}{A_1},\ldots,\overset{n}{A_n})\Bigr]\Bigr] = \bigl[g(\overset{-1}{r_1},\ldots,\overset{-n}{r_n})\,f\bigr](\overset{1}{A_1},\ldots,\overset{n}{A_n}),$$
or, in the shorthand notation, $f(A) \circ g(A) = [g(r)\,f](A)$;

(ii) for any $f \in \tilde{\mathcal{F}}_s$ and any sequences of indices $i_1,\ldots,i_s$ and $j_1,\ldots,j_s$ such that $j_k \ne j_l$ whenever $[A_{i_k}, A_{i_l}] \ne 0$, we have
$$f\bigl(\overset{j_1}{A_{i_1}},\ldots,\overset{j_s}{A_{i_s}}\bigr) = \bigl[f\bigl(\overset{-j_1}{r_{i_1}},\ldots,\overset{-j_s}{r_{i_s}}\bigr)\,1\bigr](A);$$

(iii) if $\tilde{\mathcal{F}} = \mathcal{F}$, then condition ii) of Definition II.2 is equivalent to the condition

ii') $f(r)\,1 = f(y)$ for any $f \in \mathcal{F}_n$;

(iv) let $r_1,\ldots,r_n$ be $\mathcal{F}$-generators in $\mathcal{L}(\mathcal{F}_n)$, and suppose that $g \in \mathcal{F}_n$ and $f \in \hat{\mathcal{F}}$, where $\hat{\mathcal{F}}$ is some symbol class, and moreover, $g(A)$ and $g(r)$ are $\hat{\mathcal{F}}$-generators in $\mathcal{A}$ and $\mathcal{L}(\mathcal{F}_n)$, respectively. Then
$$f\bigl([[g(A)]]\bigr) = \bigl(f\bigl([[g(r)]]\bigr)\,1\bigr)(A).$$

We remark that the existence of the right ordered representation is equivalent to the existence of the left ordered representation. More precisely, the following statement is valid.
Theorem II.5 Suppose that the tuple $A = (\overset{1}{A_1},\ldots,\overset{n}{A_n})$ has a left ordered representation $l = (\overset{1}{l_1},\ldots,\overset{n}{l_n})$ such that all the operators $l_j$ are $\mathcal{F}$-generators. Then there exists a right ordered representation $r = (\overset{-1}{r_1},\ldots,\overset{-n}{r_n})$ of this tuple. It is given by
$$(r_j f)(y) = f(\overset{1}{l_1},\ldots,\overset{n}{l_n})(y_j).$$

Proof The proof goes by straightforward verification. Indeed, we have
$$(r_j f)(A) = (f(l)\,y_j)(A) = f(A)\,A_j$$
(the last equality is due to Theorem II.1), and we see that condition i) of Definition II.2 is satisfied. Next, if $f$ does not depend on $y_1,\ldots,y_{j-1}$, then we have
$$r_j f = f(\overset{j}{l_j},\ldots,\overset{n}{l_n})(y_j) = f(\overset{j}{l_j},\ldots,\overset{n}{l_n})(l_j\,1) = (y_j f)(\overset{j}{l_j},\ldots,\overset{n}{l_n})(1) = y_j\, f(y_j,\ldots,y_n),$$
so that condition ii) of Definition II.2 is also satisfied. $\square$
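In the Wick example, the formula of Theorem II.5 gives $r^+f = f(l)(z) = (z + \partial/\partial w)f$ and $r^-f = f(l)(w) = wf$, and these indeed reproduce right multiplication by $a^+$ and $a^-$. A sympy sketch (our illustration; helper names are ours):

```python
# Check of Theorem II.5 in the Wick example: (r_j f)(y) = f(l)(y_j),
# with l+ = z and l- = w + d/dz; realization a+ = x*, a- = d/dx.
import sympy as sp

x, z, w = sp.symbols('x z w')

def op(f, u):
    """Apply f(a+^2, a-^1) to a polynomial u(x)."""
    out = 0
    for (a, b), c in sp.Poly(f, z, w).terms():
        out += c * x**a * sp.diff(u, x, b)
    return sp.expand(out)

def f_of_l(f, g):
    """Apply f(l+^2, l-^1) to a symbol g."""
    out = 0
    for (a, b), c in sp.Poly(f, z, w).terms():
        h = g
        for _ in range(b):                  # l- acts first (index 1)
            h = sp.expand(w * h + sp.diff(h, z))
        out += c * z**a * h                 # then (l+)^a = z^a
    return sp.expand(out)

f = z**2 * w + 3 * w**2 + z
rp = f_of_l(f, z)                           # r+ f = f(l)(z)
rm = f_of_l(f, w)                           # r- f = f(l)(w)
assert sp.expand(rp - z * f - sp.diff(f, w)) == 0   # r+ = z + d/dw
assert sp.expand(rm - w * f) == 0                   # r- = w

u = x**3                                    # [[f]] a^± u = (r^± f) u
assert sp.expand(op(f, x * u) - op(rp, u)) == 0
assert sp.expand(op(f, sp.diff(u, x)) - op(rm, u)) == 0
```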
2 Some Examples

Note that a given Feynman tuple of operators does not necessarily have a left ordered representation. Really, why should it exist at all? We have seen that the principal usefulness of the ordered representation is to move the operators in products (or even more complicated aggregates such as composite functions) to the places rigidly prescribed by the Feynman indices. All in all, such an operation requires (perhaps implicit) permutations of the operators involved, and so we may suggest that the existence of a sufficiently rich set of commutation relations is a necessary condition for the existence of the left ordered representation. This guess is supported by the observation that in the example considered in Section 1 (creation–annihilation operators) it was the commutation relation $[a^-, a^+] = I$ that enabled us to construct the left ordered representation. Here we consider some extremely simple, but at the same time useful, examples of calculating the ordered representation operators. In the next section we distinguish some classes of Feynman tuples (more precisely, of commutation relations) and present general methods for computing the ordered representation.
2.1 Functions of the Operators $x$ and $-i\,\partial/\partial x$

In Section 1 we have mentioned that differential, pseudodifferential, and Fourier integral operators can be considered as functions of the position and momentum operators $x_1,\ldots,x_n$ and $-i\,\partial/\partial x_1,\ldots,-i\,\partial/\partial x_n$. Since the main problems of the theory of differential equations involve computation of products of such operators, it would be useful to calculate the ordered representation for the tuples consisting of these operators. In order to simplify the notation, we make all the calculations for $n = 1$. Since the operators $x_j$ and $-i\,\partial/\partial x_j$ commute with $x_k$ and $-i\,\partial/\partial x_k$ whenever $j \ne k$, the result for $n > 1$ can easily be obtained from the result for $n = 1$ by attaching the indices. First of all, consider the Feynman tuple
$$\Bigl(\overset{2}{x},\ \overset{1}{-i\,\partial/\partial x}\Bigr).$$
Clearly, we have the commutation relation
$$\Bigl[-i\frac{\partial}{\partial x},\ x\Bigr] = -i$$
(we drop the identity operator $I$ in our notation); this relation differs from that for the creation–annihilation operators $a^+$ and $a^-$ only by the factor $i$. Hence it is not surprising that the computations and the result are quite similar to what we have for $a^+$ and $a^-$. Thus, we will just formulate the result, leaving the computation to the reader. Namely, the left ordered representation of the tuple $(\overset{2}{x}, \overset{1}{-i\,\partial/\partial x})$ has the form
$$l_x = q, \qquad l_{-i\,\partial/\partial x} = p - i\frac{\partial}{\partial q}.$$
A similar computation yields an expression for the right ordered representation operators:
$$r_x = q - i\frac{\partial}{\partial p}, \qquad r_{-i\,\partial/\partial x} = p.$$
One can also consider the reverse-ordered tuple
$$\Bigl(\overset{1}{x},\ \overset{2}{-i\,\partial/\partial x}\Bigr).$$
For this tuple the ordered representation operators have the form
$$l_x = q + i\frac{\partial}{\partial p}, \qquad l_{-i\,\partial/\partial x} = p; \qquad r_x = q, \qquad r_{-i\,\partial/\partial x} = p + i\frac{\partial}{\partial q}.$$
Let $P(\overset{2}{x}, \overset{1}{-i\,\partial/\partial x})$ and $Q(\overset{2}{x}, \overset{1}{-i\,\partial/\partial x})$ be pseudodifferential operators. By Theorem II.1, the product $[[P(\overset{2}{x},\overset{1}{-i\,\partial/\partial x})]]\,[[Q(\overset{2}{x},\overset{1}{-i\,\partial/\partial x})]]$ is again a pseudodifferential operator,
$$\Bigl[\Bigl[P\Bigl(\overset{2}{x},\ \overset{1}{-i\,\partial/\partial x}\Bigr)\Bigr]\Bigr]\,\Bigl[\Bigl[Q\Bigl(\overset{2}{x},\ \overset{1}{-i\,\partial/\partial x}\Bigr)\Bigr]\Bigr] = H\Bigl(\overset{2}{x},\ \overset{1}{-i\,\partial/\partial x}\Bigr),$$
where
$$H(q,p) = P\Bigl(\overset{2}{q},\ \overset{1}{p - i\,\partial/\partial q}\Bigr)\, Q(q,p).$$
The last formula immediately implies the classical formula [109], [84] for the symbol of the product of pseudodifferential operators. In order to obtain this formula, we expand the operator $P(\overset{2}{q},\, \overset{1}{p - i\,\partial/\partial q})$ into a Taylor series in powers of $-i\,\partial/\partial q$. This poses no difficulties, since $p$ and $-i\,\partial/\partial q$ commute, and we obtain
$$H(q,p) \simeq \sum_{k=0}^{\infty} \frac{(-i)^k}{k!}\,\frac{\partial^k P}{\partial p^k}(q,p)\,\frac{\partial^k Q}{\partial q^k}(q,p).$$
Of course, this expansion needs justification by estimating the decay as $|p| \to \infty$ of the remainder terms, but we omit these routine computations.
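For polynomial symbols the series above is finite, so the composition formula can be verified exactly. The following sympy sketch (our illustration; the symbols and helper `pdo` are our choices) compares the operator product with the operator whose symbol is given by the expansion:

```python
# Exact check of the pseudodifferential composition formula for polynomial
# symbols: H = sum_k (-i)^k/k! d^kP/dp^k d^kQ/dq^k, with x-powers on the
# left and derivatives on the right (the ordering (x^2, (-i d/dx)^1)).
import sympy as sp

x, q, p = sp.symbols('x q p')

def pdo(sym, u):
    """Apply the operator with symbol sym(q, p): q -> x*, p -> -i d/dx."""
    out = 0
    for (a, b), c in sp.Poly(sym, q, p).terms():
        out += c * x**a * (-sp.I)**b * sp.diff(u, x, b)
    return sp.expand(out)

P = q**2 * p + p**2
Q = q * p**2 + q**3

H = sp.expand(sum((-sp.I)**k / sp.factorial(k)
                  * sp.diff(P, p, k) * sp.diff(Q, q, k)
                  for k in range(5)))      # finite sum for polynomials

u = x**4 + x
assert sp.expand(pdo(P, pdo(Q, u)) - pdo(H, u)) == 0
```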
2.2 Perturbed Heisenberg Relations

The operators $A_0 = -i\,\partial/\partial x$ and $B_0 = x$ satisfy the commutation relation
$$B_0A_0 - A_0B_0 = i.$$
Consider the "perturbed" relation
$$BA - aAB = i,$$
where $a$ is a constant, $a = 1 + \varepsilon$, $\varepsilon \to 0$, valid for some "perturbed" operators $A$, $B$; we do not need to use below the concrete form of these operators. Let us compute the left ordered representation for the tuple $(\overset{2}{A}, \overset{1}{B})$. We have
$$A\,\Bigl[\Bigl[f(\overset{2}{A},\overset{1}{B})\Bigr]\Bigr] = \overset{3}{A}\, f(\overset{2}{A},\overset{1}{B}) = \overset{2}{A}\, f(\overset{2}{A},\overset{1}{B}),$$
so that $l_A = x$ (here $x$ and $y$ are the arguments of $f$, $f = f(x,y)$). Furthermore, by analogy with Subsection 2.1 we obtain
$$B\,\Bigl[\Bigl[f(\overset{2}{A},\overset{1}{B})\Bigr]\Bigr] = \overset{3}{B}\, f(\overset{2}{A},\overset{1}{B}) = \overset{1}{B}\, f(a\overset{2}{A},\overset{1}{B}) + \overset{3}{B}\bigl(\overset{2}{A} - a\overset{4}{A}\bigr)\,\frac{\delta f}{\delta x}\bigl(a\overset{4}{A}, \overset{2}{A}, \overset{1}{B}\bigr).$$
Moving the indices apart and introducing the autonomous brackets, we obtain
$$B\,\Bigl[\Bigl[f(\overset{2}{A},\overset{1}{B})\Bigr]\Bigr] = \overset{1}{B}\, f(a\overset{2}{A},\overset{1}{B}) + \overset{3}{\bigl[\bigl[B(A - aA)\bigr]\bigr]}\,\frac{\delta f}{\delta x}\bigl(a\overset{4}{A}, \overset{2}{A}, \overset{1}{B}\bigr).$$
But
$$\bigl[\bigl[B(A - aA)\bigr]\bigr] = BA - aAB = i,$$
so that we obtain
$$B\,\Bigl[\Bigl[f(\overset{2}{A},\overset{1}{B})\Bigr]\Bigr] = \overset{1}{B}\, f(a\overset{2}{A},\overset{1}{B}) + i\,\frac{\delta f}{\delta x}\bigl(a\overset{2}{A}, \overset{2}{A}, \overset{1}{B}\bigr).$$
In contrast to the preceding example, the arguments of the difference derivative do not coincide, so that it does not reduce to the usual derivative. Let $I_a$ denote the dilation operator
$$I_a\varphi(x,y) = \varphi(ax, y).$$
We obtain the left ordered representation operators in the form
$$l_A = x, \qquad l_B = y\,I_a + \frac{i}{(1-a)x}\,(1 - I_a).$$
Let us demonstrate how this ordered representation can be applied. Assume that we wish to invert an operator of the form $f(\overset{2}{A},\overset{1}{B})$, i.e., to solve the equation
$$\Bigl[\Bigl[f(\overset{2}{A},\overset{1}{B})\Bigr]\Bigr]\,\Bigl[\Bigl[\chi(\overset{2}{A},\overset{1}{B})\Bigr]\Bigr] = 1,$$
where $\chi(x,y)$ is an unknown symbol. By Theorem II.1, we obtain the following equation for the symbol $\chi$:
$$f\Bigl(\overset{2}{x},\ \overset{1}{\Bigl[\Bigl[y\,I_a + \frac{i}{(1-a)x}(1 - I_a)\Bigr]\Bigr]}\Bigr)\,\chi(x,y) = 1.$$
If $f(x,y)$ is a polynomial with respect to $y$, then this is a difference equation and can be solved explicitly; it may seem more convenient to reduce this equation to a difference equation on a uniform grid, which can be accomplished by the change of variables $x = e^{\theta}$. Under this change of variables, $I_a$ is carried into the operator $e^{\alpha\,\partial/\partial\theta}$ of translation by $\alpha = \ln a$, and the equation takes the form
$$f\Bigl(\overset{2}{e^{\theta}},\ \overset{1}{\Bigl[\Bigl[y\,e^{\alpha\,\partial/\partial\theta} + \frac{i\,e^{-\theta}}{1-a}\bigl(1 - e^{\alpha\,\partial/\partial\theta}\bigr)\Bigr]\Bigr]}\Bigr)\,\chi(e^{\theta}, y) = 1.$$
This is a difference equation on a uniform grid with mesh width $\alpha$.
2.3 Examples of Nonlinear Commutation Relations

Let $A$, $B$, and $C$ be operators satisfying the relations
$$[A,B] = 1, \qquad [A,C] = \alpha(C)A + \beta(C)B + \gamma(C), \qquad [B,C] = \varepsilon(C)A + \delta(C)B + \sigma(C),$$
where $\alpha(x)$, $\beta(x)$, $\gamma(x)$, $\delta(x)$, $\varepsilon(x)$, and $\sigma(x)$ are some symbols. Let us find the left ordered representation for the Feynman tuple $(\overset{1}{A},\overset{2}{B},\overset{3}{C})$. The arguments of symbols will be denoted by $(x,y,z)$, $x \mapsto \overset{1}{A}$, $y \mapsto \overset{2}{B}$, and $z \mapsto \overset{3}{C}$. The last two relations satisfied by $A$, $B$, and $C$ can be written in the matrix form
$$\begin{pmatrix} A \\ B \end{pmatrix} C = S(C)\begin{pmatrix} A \\ B\end{pmatrix} + \begin{pmatrix} \gamma(C) \\ \sigma(C)\end{pmatrix},$$
where $S(z)$ is the matrix symbol
$$S(z) = \begin{pmatrix} z + \alpha(z) & \beta(z) \\ \varepsilon(z) & z + \delta(z) \end{pmatrix}.$$
Let $F(x,y,z)$ be an arbitrary symbol. By Theorem 1.6,
$$\begin{pmatrix} A \\ B \end{pmatrix}\Bigl[\Bigl[F(\overset{1}{A},\overset{2}{B},\overset{3}{C})\Bigr]\Bigr] = F\bigl(\overset{1}{A},\overset{2}{B},S(\overset{4}{C})\bigr)\begin{pmatrix} \overset{3}{A} \\ \overset{3}{B} \end{pmatrix} + \frac{\delta F}{\delta z}\bigl(\overset{1}{A},\overset{2}{B},\overset{3}{C},S(\overset{3}{C})\bigr)\begin{pmatrix} \gamma(\overset{3}{C}) \\ \sigma(\overset{3}{C}) \end{pmatrix}.$$
In the second term of the last expression the ordering of operators is just the required one, whereas in the first term $\overset{3}{A}$ still needs to be moved to the first position. This, by analogy with the previous example, can be performed by applying the operator $x + \partial/\partial y$ to $F$. Finally, we obtain
$$\begin{pmatrix} l_A \\ l_B \end{pmatrix} = I_{S(z)}\begin{pmatrix} x + \partial/\partial y \\ y \end{pmatrix} + \frac{1 - I_{S(z)}}{z - S(z)}\begin{pmatrix} \gamma(z) \\ \sigma(z) \end{pmatrix},$$
where $I_{S(z)}$ is the operator of substituting the matrix $S(z)$ for $z$ into a function,
$$I_{S(z)}F(z) = \exp\Bigl[\bigl(S(\overset{1}{z}) - \overset{1}{z}\bigr)\,\overset{2}{\frac{\partial}{\partial z}}\Bigr]F(z) \overset{\mathrm{def}}{=} F(S(z)).$$
Since the operator $C$ acts last, it is clear that $l_C = z$.

Our next example is concerned with the phenomenon that the ordered representation may or may not exist depending on the chosen ordering in a Feynman tuple. Let $A$ and $B$ be operators satisfying the permutation relation
$$AB = B^nA,$$
where $n > 1$ is an integer. Let us first construct the left ordered representation for the tuple $(\overset{1}{A},\overset{2}{B})$. We have
$$B\,\Bigl[\Bigl[f(\overset{1}{A},\overset{2}{B})\Bigr]\Bigr] = \overset{2}{B}\, f(\overset{1}{A},\overset{2}{B})$$
and
$$A\,\Bigl[\Bigl[f(\overset{1}{A},\overset{2}{B})\Bigr]\Bigr] = \overset{3}{A}\, f(\overset{1}{A},\overset{2}{B}) = \overset{3}{A}\, f(\overset{1}{A},\overset{4}{B^n}) + \Bigl[\Bigl[\overset{3}{A}\bigl(\overset{2}{B} - \overset{4}{B^n}\bigr)\Bigr]\Bigr]\,\frac{\delta f}{\delta y}\bigl(\overset{1}{A},\overset{2}{B},\overset{4}{B^n}\bigr) = \overset{1}{A}\, f(\overset{1}{A},\overset{2}{B^n}),$$
since $[[\overset{3}{A}(\overset{2}{B} - \overset{4}{B^n})]] = AB - B^nA = 0$; and so
$$l_A f(x,y) = x\, f(x, y^n), \qquad l_B = y,$$
if we choose the ordering $(\overset{1}{A},\overset{2}{B})$. On the other hand, one can easily check that the ordered representation does not exist at all for the ordering $(\overset{2}{A},\overset{1}{B})$. Indeed, consider the quotient $\mathcal{A}$ of the free algebra with generators $A$ and $B$ modulo the ideal generated by $AB - B^nA$. Each element of $\mathcal{A}$ can be uniquely represented in the form $\sum a_{kl}\,B^kA^l$, where the $a_{kl}$ are complex numbers and the sum is finite. The existence of the representation follows from the existence of the left ordered representation operators $l_A$ and $l_B$ (see above). The uniqueness is also clear: since
$$l_A\,l_B = l_B^n\,l_A,$$
it follows that the algebra generated by $l_A$ and $l_B$ is a homomorphic (in fact, isomorphic) image of $\mathcal{A}$; now the coefficients $a_{kl}$ can be determined by applying the operator $\sum a_{kl}\,l_B^k\,l_A^l$ to the function 1:
$$\Bigl[\sum a_{kl}\,l_B^k\,l_A^l\Bigr](1) = \sum a_{kl}\,y^k x^l.$$
Now consider the element $BA \in \mathcal{A}$. It has no representation of the form $\sum c_{kl}\,A^kB^l$, for if such a representation could be found, we would have
$$BA = \sum c_{kl}\,A^kB^l = \sum c_{kl}\,B^{n^k l}A^k,$$
which is impossible, since matching the term $BA$ would require $k = 1$ and $n^kl = nl = 1$, and this system has no integer solutions. This implies that the left ordered representation simply does not exist for the ordering $(\overset{2}{A},\overset{1}{B})$. Let us consider two more general examples concerned with Lie algebras.
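The left ordered representation for the ordering $(\overset{1}{A},\overset{2}{B})$ can be tested in a concrete realization. One choice (ours, for illustration) is $(Au)(t) = u(t^n)$, $(Bu)(t) = t\,u(t)$ on polynomials, which satisfies $AB = B^nA$; the sketch checks $l_A f = x\,f(x,y^n)$ and $l_B = y$ against left multiplication:

```python
# Check of the left ordered representation for AB = B^n A (n = 2) in the
# illustrative realization (A u)(t) = u(t^n), (B u)(t) = t u(t).
# Normal form: mu(x^l y^k) = B^k A^l, i.e. u(t) -> t^k u(t^(n^l)).
import sympy as sp

t, x, y = sp.symbols('t x y')
n = 2

Aop = lambda u: u.subs(t, t**n)
Bop = lambda u: sp.expand(t * u)

def mu(f, u):
    """Apply f(A^1, B^2) = sum c_lk B^k A^l to a polynomial u(t)."""
    out = 0
    for (l, k), c in sp.Poly(f, x, y).terms():
        out += c * t**k * u.subs(t, t**(n**l))
    return sp.expand(out)

# the realization satisfies the permutation relation AB = B^2 A:
assert sp.expand(Aop(Bop(t**3)) - Bop(Bop(Aop(t**3)))) == 0

f = x**2 * y + 3 * y**2 + x
u = t**3 + t
lAf = sp.expand(x * f.subs(y, y**n))        # l_A f = x f(x, y^n)
lBf = sp.expand(y * f)                      # l_B f = y f
assert sp.expand(Aop(mu(f, u)) - mu(lAf, u)) == 0
assert sp.expand(Bop(mu(f, u)) - mu(lBf, u)) == 0
```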
2.4 Lie Commutation Relations

Generators of a Lie algebra, special case. Let $A = (\overset{1}{A_1},\ldots,\overset{m}{A_m})$ be a Feynman tuple satisfying the following conditions:

i) for any $j,k \in \{1,\ldots,m\}$ there exists an index $s = s(j,k) \in \{1,\ldots,m\}$ such that
$$[A_j, A_k] = -ic_sA_s,$$
where $c_s$ is some constant;

ii) there exists a positive integer $N$ such that any commutator of length $> N$ composed of $A_1,\ldots,A_m$ is equal to zero,
$$[A_{i_1},[A_{i_2},[\ldots[A_{i_{N-1}}, A_{i_N}]\ldots]]] = 0 \quad \text{for any } i_1, i_2, \ldots, i_N \in \{1,\ldots,m\}.$$

Condition i) means that the operators $A_1,\ldots,A_m$ define a representation of a Lie algebra of a rather special form (with a general Lie algebra, the right-hand sides would contain linear combinations of all $A_l$, $l \in \{1,\ldots,m\}$; see the next example). The factor $-i$ is introduced so as to ensure that $c_s$ is real in case $A_1,\ldots,A_m$ are self-adjoint operators. Condition ii) is simply the requirement that this Lie algebra is nilpotent; $N$ is referred to as the nilpotency rank. This class of commutation relations was considered in [129]. Note that any nilpotent Lie algebra can be described as a homomorphic image of a nilpotent algebra of the type described above.

Let us describe the construction of the left ordered representation for this case. The main (and the only) idea is the same as in Subsection 2.1. Assume that we need to compute the operator $l_{A_j}$, i.e., in fact, the product
$$A_j\, f(A) = \overset{m+1}{A_j}\, f(\overset{1}{A_1},\ldots,\overset{m}{A_m})$$
for an arbitrary symbol $f = f(x_1,\ldots,x_m)$. To this end, the product should be transformed in such a way that the Feynman indices of $A_{j+1},\ldots,A_m$ become greater than $m+1$. Using the permutation formula (see Corollary 1.1)
$$\overset{a}{A}\,\varphi(\overset{b}{B}) = \overset{b}{A}\,\varphi(\overset{a}{B}) + \overset{a}{[A,B]}\,\frac{\delta\varphi}{\delta x}\bigl(\overset{b}{B},\overset{c}{B}\bigr), \qquad c > a > b,$$
where $\varphi$ may have additional operator arguments, we obtain, by successive commutation,
$$A_j\, f(A) = (x_j f)(A) + \sum_{l=j+1}^{m} \overset{l+1}{[A_j, A_l]}\,\frac{\delta f}{\delta x_l}\Bigl(\overset{1}{A_1},\ldots,\overset{l}{A_l},\overset{l+2}{A_l},\ldots,\overset{m+2}{A_m}\Bigr).$$
We now use the expression for the commutator and obtain
$$A_j\, f(A) = (x_j f)(A) - i\sum_{l=j+1}^{m} c_{s(j,l)}\,\overset{l+1}{A_{s(j,l)}}\,\frac{\delta f}{\delta x_l}\Bigl(\overset{1}{A_1},\ldots,\overset{l}{A_l},\overset{l+2}{A_l},\ldots,\overset{m+2}{A_m}\Bigr).$$
The first summand already has the desired form; as for the other summands, let us move the operator $A_{s(j,l)}$ in each of them to its place, thus obtaining remainder terms with second-order difference derivatives and second-order commutators; then we proceed to the remainders, etc. By virtue of nilpotency, the process will terminate after the $N$th step. Clearly, at this stage all difference derivatives disappear and transform into the usual ones. Thus we have proved the following theorem.
II. Method of Ordered Representation
Theorem II.6 ([129]) Under conditions i) and ii) the Feynman tuple $(\overset{1}{A_1},\dots,\overset{m}{A_m})$ possesses the left ordered representation $l_1,\dots,l_m$, and each of the operators $l_j$ is a differential operator of order $\le N-1$ with linear homogeneous coefficients.

In fact, it still remains to prove that
$$f(\overset{1}{l_1},\dots,\overset{m}{l_m})\,1 = f(x_1,\dots,x_m)$$
for any symbol $f(x)$. By Lemma II.1, it suffices to prove that $l_{A_j}g(x) = x_jg(x)$ whenever $g(x)$ is independent of $x_{j+1},\dots,x_m$. However, this follows by construction.
Generators of a Lie algebra, general case. Let us now consider the case in which the operators $A_1,\dots,A_n$ are generators of a representation of an arbitrary Lie algebra. This means that
$$[A_i,A_j] = \sum_{k=1}^{n}c^k_{ij}A_k,\qquad i,j = 1,\dots,n,$$
where the $c^k_{ij}$ are some (complex) numbers called the structural constants of the Lie algebra. Let us calculate the left ordered representation $(l_1,\dots,l_n)$ for the tuple of operators $(\overset{1}{A_1},\dots,\overset{n}{A_n})$. In order to avoid nonalgebraic difficulties, we consider only polynomial symbols $f(x)$, $x = (x_1,\dots,x_n)$.

Remark II.3 As shown below, for polynomial symbols we can accomplish the construction by purely algebraic means. For more general symbols this is not the case; as is shown in the Appendix, in the latter situation it is more convenient to pass to Fourier transforms, which takes the left ordered representation operators into right-invariant vector fields on the corresponding Lie group.

Remark II.4 The constants $c^k_{ij}$ are structural constants of a Lie algebra, and therefore they must satisfy the antisymmetry condition and the Jacobi identity. However, our construction does not use these properties. These conditions are discussed in detail in Section 4 of this chapter.

Denote by $\mathcal A$ the associative subalgebra generated by $A_1,\dots,A_n$ and by $\mathcal L\subset\mathcal A$ the linear span of $A_1,\dots,A_n$. In the algebra $\mathcal A[[t_1,\dots,t_n]]$ of formal power series in $t = (t_1,\dots,t_n)$ with coefficients in $\mathcal A$ consider the element
$$U(t) = e^{t_nA_n}\cdots e^{t_1A_1}.$$
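The role of $U(t)$ as a generating element for Feynman-ordered products can be checked in a toy case. The sketch below is our own illustration: we pick nilpotent $3\times3$ matrices (so that each exponential reduces to a finite sum) and verify that $\partial^2U/\partial t_1\partial t_2\big|_{t=0}$ reproduces the ordered product $A_2A_1$, and that the opposite order would be wrong.

```python
import sympy as sp

# U(t) = exp(t2*A2) * exp(t1*A1); A1, A2 are nilpotent (A1^2 = A2^2 = 0),
# so each exponential is just I + t*A.
t1, t2 = sp.symbols('t1 t2')
A1 = sp.Matrix([[0, 0, 0], [0, 0, 1], [0, 0, 0]])   # E_23
A2 = sp.Matrix([[0, 1, 0], [0, 0, 0], [0, 0, 0]])   # E_12

U = (sp.eye(3) + t2*A2) * (sp.eye(3) + t1*A1)

# For the monomial p(x1, x2) = x1*x2, p(d/dt) is d^2/(dt1 dt2):
lhs = sp.diff(U, t1, t2).subs({t1: 0, t2: 0})
rhs = A2 * A1            # Feynman order: A2 (index 2) to the left
print(lhs == rhs)        # True
print(lhs == A1 * A2)    # False: the ordering matters
```

Here $A_2A_1 = E_{12}E_{23} = E_{13}$ while $A_1A_2 = 0$, so the test genuinely distinguishes the two orders.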
Clearly, for any polynomial $p(x)$, $x = (x_1,\dots,x_n)$, we have
$$p(\overset{1}{A_1},\dots,\overset{n}{A_n}) = \Bigl[p\Bigl(\frac{\partial}{\partial t}\Bigr)U(t)\Bigr]\Big|_{t=0}$$
(it suffices to check this identity for monomials, which is trivial). Let us find operators $L_j$ such that
$$L_jU(t) = A_jU(t),\qquad j = 1,\dots,n.$$
To this end, we compute the derivative $(\partial/\partial t_q)U(t)$ using the relationship
$$e^{tB}D = \bigl(e^{t\,\mathrm{ad}_B}(D)\bigr)\,e^{tB}.$$
Differentiating $U(t)$ and applying the last equation successively, we obtain
$$\frac{\partial}{\partial t_q}U(t) = e^{t_nA_n}\cdots e^{t_{q+1}A_{q+1}}A_qe^{t_qA_q}\cdots e^{t_1A_1} = \bigl[e^{t_n\,\mathrm{ad}_{A_n}}\cdots e^{t_{q+1}\,\mathrm{ad}_{A_{q+1}}}(A_q)\bigr]\,U(t). \tag{II.3}$$
For each $j\in\{1,\dots,n\}$ denote by $c_j$ the matrix with the elements $(c_j)_{kl} = c^k_{jl}$. Evidently, the operator $\mathrm{ad}_{A_j}$ acts from $\mathcal L$ into $\mathcal L$, and if $A = \sum_s\lambda_sA_s$, then $\mathrm{ad}_{A_j}(A) = \sum_s\tilde\lambda_sA_s$, where $\tilde\lambda = c_j\lambda$. Note that the last assertion is valid regardless of whether or not the operators $A_1,\dots,A_n$ are linearly independent. Therefore, the element in square brackets on the right-hand side of (II.3) belongs to $\mathcal L$, and moreover, we have
$$\frac{\partial}{\partial t_q}U(t) = \sum_j\Lambda_{qj}(t)\,A_jU(t), \tag{II.4}$$
where
$$\Lambda_{qj}(t) = \begin{cases}\bigl[e^{t_nc_n}\cdots e^{t_{q+1}c_{q+1}}\bigr]_{jq} & \text{if } q < n,\\[2pt] \delta_{jq} & \text{if } q = n.\end{cases}$$
In particular, $\Lambda_{qj}(0) = \delta_{qj}$, and we see that the matrix $\Lambda(t) = (\Lambda_{qj}(t))$ is invertible in the algebra of formal power series. Multiplying (II.4) by $\Lambda^{-1}(t)$ on the left, we obtain
$$A_jU(t) = \sum_q\Lambda^{-1}_{jq}(t)\,\frac{\partial}{\partial t_q}U(t),$$
¹ Recall that the substitution $t = 0$ is a well-defined operation in the algebra of formal power series; this operation assigns to each formal series its constant term.
² Here $[\,\cdot\,]_{jq}$ stands for the $(j,q)$th entry of the matrix inside the square brackets.
i.e., we can set
$$A_jU(t) = L_j\Bigl(\overset{2}{t},\overset{1}{\frac{\partial}{\partial t}}\Bigr)U(t),\qquad\text{where}\quad L_j(t,x) = \sum_q\Lambda^{-1}_{jq}(t)\,x_q.$$
Now, for any polynomial $p(x)$ we have
$$A_j\,\bigl[\!\bigl[p(\overset{1}{A_1},\dots,\overset{n}{A_n})\bigr]\!\bigr] = A_j\Bigl[p\Bigl(\frac{\partial}{\partial t}\Bigr)U(t)\Bigr]\Big|_{t=0} = \Bigl[p\Bigl(\frac{\partial}{\partial t}\Bigr)A_jU(t)\Bigr]\Big|_{t=0} = \Bigl[p\Bigl(\frac{\partial}{\partial t}\Bigr)\Bigl[\!\Bigl[L_j\Bigl(\overset{2}{t},\overset{1}{\frac{\partial}{\partial t}}\Bigr)\Bigr]\!\Bigr]U(t)\Bigr]\Big|_{t=0}. \tag{II.5}$$
To compute the composition of the two autonomous brackets in the latter expression we use the right ordered representation
$$r_t = t + \frac{\partial}{\partial x},\qquad r_{\partial/\partial t} = x$$
for the tuple $(\overset{2}{t},\overset{1}{\partial/\partial t})$ computed above in Subsection 2.1. With the help of this representation the above-mentioned composition can be written in the form
$$\Bigl[\!\Bigl[p\Bigl(\overset{1}{\frac{\partial}{\partial t}}\Bigr)\Bigr]\!\Bigr]\,\Bigl[\!\Bigl[L_j\Bigl(\overset{2}{t},\overset{1}{\frac{\partial}{\partial t}}\Bigr)\Bigr]\!\Bigr] = q_j\Bigl(\overset{2}{t},\overset{1}{\frac{\partial}{\partial t}}\Bigr),$$
where the function $q_j(t,x)$ equals
$$q_j(t,x) = L_j\Bigl(\overset{2}{t+\frac{\partial}{\partial x}},\overset{1}{x}\Bigr)p(x).$$
Substituting this equality in the right-hand side of formula (II.5), we represent the left-hand side of this formula in the form $h(\overset{1}{A_1},\dots,\overset{n}{A_n})$, where the function $h(x)$ is given by
$$h(x) = q_j(0,x) = \sum_q\overset{2}{x_q}\,\Lambda^{-1}_{jq}\Bigl(\overset{1}{\frac{\partial}{\partial x}}\Bigr)p(x)$$
(we used here the above expression of the functions $L_j(t,x)$ via the matrix $\Lambda^{-1}$). Thus, we have obtained the left ordered representation operators in the form
$$l_j = \sum_q\overset{2}{x_q}\,\Lambda^{-1}_{jq}\Bigl(\overset{1}{\frac{\partial}{\partial x}}\Bigr),\qquad j = 1,\dots,n.$$
It is easy to check that these operators are well-defined on polynomials and satisfy condition ii) of Definition II.1.
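As a sanity check of this construction, consider the two-dimensional Lie algebra with $[A_1,A_2] = A_2$ (our choice of example, not one from the book). The matrices $c_j$ then give $\Lambda^{-1}(p) = \bigl(\begin{smallmatrix}1 & p_2\\ 0 & 1\end{smallmatrix}\bigr)$, so the recipe yields $l_1 = x_1 + x_2\,\partial/\partial x_2$, $l_2 = x_2$; the sketch verifies that these operators reproduce the commutation relation on polynomial symbols.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# Candidate left ordered representation for [A1, A2] = A2
# (derivatives act first, multiplications by x_q act last):
def l1(f):
    return sp.expand(x1*f + x2*sp.diff(f, x2))

def l2(f):
    return sp.expand(x2*f)

f = x1**3 * x2**2 + 5*x2            # an arbitrary polynomial symbol
commutator = sp.expand(l1(l2(f)) - l2(l1(f)))
print(sp.simplify(commutator - l2(f)) == 0)   # [l1, l2] = l2 holds
```

The same check can be run for any polynomial `f`; the identity $[l_1,l_2] = l_2$ mirrors $[A_1,A_2] = A_2$, as condition i) of Definition II.1 demands.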
2.5 Graded Lie Algebras

We again consider the creation-annihilation operators, as in Subsection 1.1. This time we assume that there are different types of particles in the system, namely, bosons and fermions. Let $b_j^\pm$ be the creation-annihilation operators for bosons, and $f_j^\pm$ be the corresponding operators for fermions. Then the following relations must be valid:
$$[b_j^-,b_k^-] = [b_j^+,b_k^+] = 0,\qquad [b_j^-,b_k^+] = \delta_{jk}\,I,$$
$$f_j^-f_k^- + f_k^-f_j^- = f_j^+f_k^+ + f_k^+f_j^+ = 0,\qquad [f_j^-,f_k^+]_+ \equiv f_j^-f_k^+ + f_k^+f_j^- = \delta_{jk}\,I.$$
These relations are different from the usual Lie relations in that one uses anticommutators instead of commutators for fermion creation-annihilation operators. Thus, the model symmetric with respect to bosons and fermions leads to the consideration of a new mathematical notion, namely, of graded Lie algebras, which involve both commutators and anticommutators (see [26]).

Formally, the underlying linear space of a graded Lie algebra $L$ is the direct sum $L = L_0\oplus L_1$. The space $L$ is equipped with a bilinear operation $[\cdot,\cdot]$ that respects the $\mathbb Z_2$-gradation, that is,
$$[L_0,L_0]\subset L_0,\qquad [L_0,L_1]\subset L_1,\qquad [L_1,L_1]\subset L_0,$$
and if $a,b,c$ are graded elements of $L$ (that is, they each belong either to $L_0$ or to $L_1$), then the graded Jacobi identity holds:
$$(-1)^{|a||c|}[[a,b],c] + (-1)^{|b||a|}[[b,c],a] + (-1)^{|c||b|}[[c,a],b] = 0.$$
Furthermore,
$$[a,b] = -(-1)^{|a||b|}[b,a]$$
(here $|a|$ is the gradation of $a$: we write $|a| = j$ if $a\in L_j$).
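Both axioms can be verified numerically in the associative superalgebra of $2\times2$ matrices graded by diagonal (even) and antidiagonal (odd) parts, with the graded bracket $[a,b] = ab - (-1)^{|a||b|}ba$; this realization is our own illustration, not one from the book.

```python
import numpy as np

# Graded bracket on 2x2 matrices: L0 = diagonal, L1 = antidiagonal.
rng = np.random.default_rng(1)

def gbracket(a, da, b, db):
    # da, db in {0, 1} are the gradations |a|, |b|
    return a @ b - (-1)**(da*db) * b @ a

def rand_even():
    return np.diag(rng.normal(size=2))

def rand_odd():
    m = np.zeros((2, 2))
    m[0, 1], m[1, 0] = rng.normal(size=2)
    return m

ok = True
for da, db, dc in [(0,0,0), (1,1,0), (1,1,1), (0,1,1), (1,0,1), (0,0,1)]:
    a = rand_even() if da == 0 else rand_odd()
    b = rand_even() if db == 0 else rand_odd()
    c = rand_even() if dc == 0 else rand_odd()
    # graded Jacobi identity; [a, b] has gradation (da + db) mod 2
    j = ((-1)**(da*dc) * gbracket(gbracket(a, da, b, db), (da+db) % 2, c, dc)
       + (-1)**(db*da) * gbracket(gbracket(b, db, c, dc), (db+dc) % 2, a, da)
       + (-1)**(dc*db) * gbracket(gbracket(c, dc, a, da), (dc+da) % 2, b, db))
    ok = ok and np.allclose(j, 0)
print(ok)   # True
```

The identity holds for homogeneous elements of any associative superalgebra equipped with the super-commutator, so the test passes for every random draw.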
A representation of a graded Lie algebra $L$ is a linear mapping of $L$ into an associative algebra such that $[a,b]$ is carried into $ab - (-1)^{|a||b|}ba$; in particular, the bracket of two elements of $L_1$ is realized as the anticommutator, and the brackets of all other combinations of graded elements are taken into commutators.

Let us consider a simple example in which we construct the ordered representation for some operators forming a representation of a graded Lie algebra. Let $A$ and $B$ be operators satisfying the relation
$$AB + BA = 1.$$
This can be regarded as a representation of a graded Lie algebra with $A,B\in L_1$ and $1\in L_0$. Note that this graded algebra is likely to be infinite-dimensional, since the above relation does not form a closed system of graded Lie relations by itself: we have not supplied any data for the anticommutators $[A,A] = 2A^2$ and $[B,B] = 2B^2$. Formally, this can be considered as a special case (for $a = -1$) of the perturbed Heisenberg relations considered in Subsection 2.2.
Let us consider functions of the Feynman tuple $(\overset{1}{A},\overset{2}{B})$. Then for any symbol $f(x,y)$ we have
$$B\,\bigl[\!\bigl[f(\overset{2}{B},\overset{1}{A})\bigr]\!\bigr] = \overset{3}{B}f(\overset{2}{B},\overset{1}{A}) = \overset{2}{B}f(\overset{2}{B},\overset{1}{A}),$$
which implies that $l_B = x$. Proceeding as in the cited example, we obtain
$$A\,\bigl[\!\bigl[f(\overset{2}{B},\overset{1}{A})\bigr]\!\bigr] = \overset{3}{A}f(\overset{2}{B},\overset{1}{A}) = f(-\overset{2}{B},\overset{1}{A})\,\overset{3}{A} + \Bigl[\!\Bigl[\overset{3}{A}\bigl(\overset{2}{B}+\overset{4}{B}\bigr)\Bigr]\!\Bigr]\frac{\delta f}{\delta x}\bigl(\overset{2}{B},-\overset{4}{B},\overset{1}{A}\bigr).$$
Since $\overset{3}{A}(\overset{2}{B}+\overset{4}{B}) = AB + BA = 1$, it follows that
$$A\,\bigl[\!\bigl[f(\overset{2}{B},\overset{1}{A})\bigr]\!\bigr] = l_Af(\overset{2}{B},\overset{1}{A}),$$
where
$$l_Af(x,y) = y\,f(-x,y) + \frac{f(x,y)-f(-x,y)}{2x}.$$
Thus, the left ordered representation of the Feynman tuple $(\overset{1}{A},\overset{2}{B})$ is
$$l_B = x,\qquad l_A = y\,I_x + \frac{1-I_x}{2x},$$
where $I_x$ is the spatial inversion operator, $I_xf(x,y) = f(-x,y)$.
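A minimal numerical check of these formulas (our own, using the standard $2\times2$ fermionic realization of $AB + BA = 1$): for the monomial symbol $f(x,y) = xy$ one computes $l_Af = -xy^2 + y$, and the corresponding operator identity holds.

```python
import numpy as np

# 2x2 fermionic realization of AB + BA = 1:
A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0., 0.], [1., 0.]])
assert np.allclose(A @ B + B @ A, np.eye(2))

# Feynman tuple (A at index 1, B at index 2): x^a y^b maps to B^a A^b.
# Take the symbol f(x, y) = x*y, i.e. the operator B A:
lhs = A @ (B @ A)            # multiply by A on the left

# l_A f(x, y) = y f(-x, y) + (f(x, y) - f(-x, y)) / (2x)
#             = -x y^2 + y   for f = x*y,
# which corresponds to the operator -B A^2 + A:
rhs = -B @ A @ A + A
print(np.allclose(lhs, rhs))   # True
```

In this realization $A^2 = 0$, so the check reduces to $A(BA) = A$, which is exactly what the formula for $l_A$ predicts.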
3 Evaluation of the Ordered Representation Operators

In Section 2 we have considered several examples of computation for the operators of left and right ordered representation. Here we present a general method to compute these operators for some "distinguished" classes of commutation relations.
3.1 Equations for the Ordered Representation Operators

We consider a special class of commutation relations of the following form. Let $k\le n$ be a positive integer. Consider the relations
$$[A_j,A_i] = 0,\qquad j = k+1,\dots,n,\quad i = 1,\dots,n, \tag{II.6}$$
$$A_jA_i = \sum_{r=1}^{n}\varphi^r_{ij}(A_j,A_{k+1},\dots,A_n)\,A_r,\qquad i,j = 1,\dots,k, \tag{II.7}$$
where $\varphi^r_{ij}(y_j,y_{k+1},\dots,y_n)$ are some symbols; since, according to (II.6), the operators $A_j,A_{k+1},\dots,A_n$ are pairwise commuting, their Feynman indices are inessential and therefore omitted. The relations (II.6) and (II.7) can be rewritten in the matrix form
$$A_j\mathbf A = \varphi_j(A_j,A_{k+1},\dots,A_n)\,\mathbf A,\qquad j = 1,\dots,n, \tag{II.8}$$
where $\mathbf A$ is the vector operator $\mathbf A = {}^t(A_1,\dots,A_n)$, and $\varphi_j(A_j,A_{k+1},\dots,A_n)$ is the $n\times n$ matrix operator with entries
$$\bigl(\varphi_j(A_j,A_{k+1},\dots,A_n)\bigr)_{ir} = \begin{cases}\varphi^r_{ij}(A_j,A_{k+1},\dots,A_n) & \text{if } j\le k \text{ and } i\le k,\\[2pt] A_j\,\delta_{ir} & \text{otherwise.}\end{cases}$$
Example II.1 Let $k = n$ and
$$\varphi_j(y_j) = \begin{pmatrix}\sigma_{1j} & & 0\\ & \ddots & \\ 0 & & \sigma_{nj}\end{pmatrix}y_j + \Lambda_j, \tag{II.9}$$
where the $\Lambda_j$ are constant matrices. Then, if all $\sigma_{ij} = 1$, we obtain Lie commutation relations; if $\sigma_{ij} = \pm1$, we obtain graded Lie algebras; if the $\sigma_{ij}$ are close to $1$, we obtain "perturbations" of Lie algebras. One can also consider the more general case
$$\varphi_j(y_j) = M_jy_j + \Lambda_j, \tag{II.10}$$
where the matrix $M_j$ is not diagonal, or even the case in which $\varphi_j(y_j)$ is a nonlinear function (such relations will be referred to as strongly nonlinear).
First of all, note that it follows from (II.6) that $l_{k+1} = y_{k+1},\dots,l_n = y_n$; without loss of generality it can be assumed that the operators $l_j$, $j = k+1,\dots,n$, commute with all the other left ordered representation operators. Thus we can assume that $k = n$, which is done throughout the sequel.

Along with (II.7) one can also consider the "dual" relations
$$A_kA_j = \sum_{s=1}^{n}A_s\,\Omega^s_{kj}(A_k). \tag{II.11}$$
They transform into (II.7) and vice versa if we pass from the algebra $\mathcal A$ to the opposite algebra $\mathcal A^{\mathrm{op}}$, whose multiplication is given by the formula
$$A\cdot^{\mathrm{op}}B = BA.$$
It follows that the left ordered representation of $\mathcal A$ corresponds to the right ordered representation of $\mathcal A^{\mathrm{op}}$, and the same is true for the right ordered representation of $\mathcal A$ and the left ordered representation of $\mathcal A^{\mathrm{op}}$.

Let us first consider the commutation relations (II.7). Introduce the auxiliary operators $L_{ij}$, $i = 1,\dots,n$, $j = 0,\dots,n$, satisfying the relations
$$(L_{ij}f)(\overset{1}{A_1},\dots,\overset{n}{A_n}) = \overset{j+1}{A_i}f(\overset{1}{A_1},\dots,\overset{j}{A_j},\overset{j+2}{A_{j+1}},\dots,\overset{n+1}{A_n})$$
for any symbol $f$. In particular, $L_{in} = L_{A_i}$ and $L_{i0} = R_{A_i}$ are the operators of left and right ordered representation, respectively. From the moving-indices-apart rule it follows that
$$L_{jj} = y_j,\qquad j = 1,\dots,n.$$
We seek the operators $L_{ij}$ in the form
$$L_{ij} = L_{ij}\Bigl(\overset{2}{y},\overset{1}{\frac{\partial}{\partial y}}\Bigr).$$
Our aim is to derive equations for the symbols $L_{ij}(x,p)$. Introduce the vector operator (here ${}^t$ stands for the transposed matrix)
$$G_j = {}^t(L_{1j},\dots,L_{nj}),\qquad j = 0,\dots,n,$$
with symbol
$$G_j(x,p) = {}^t\bigl(L_{1j}(x,p),\dots,L_{nj}(x,p)\bigr).$$
By Theorem I.4 we conclude that
$$\overset{j+1}{A_i}f(\overset{1}{A_1},\dots,\overset{j}{A_j},\overset{j+2}{A_{j+1}},\dots,\overset{n+1}{A_n}) = \overset{j}{A_i}f(\overset{1}{A_1},\dots,\overset{j-1}{A_{j-1}},\varphi_j(\overset{j+1}{A_j}),\overset{j+2}{A_{j+1}},\dots,\overset{n+1}{A_n}) \tag{II.12}$$
(the operators $A_{j+1},\dots,A_n$ on the right-hand side of the latter equation are tensored by the identity matrix). Using the substitution operator (see Subsection 2.3 above) one can write
$$f(y_1,\dots,\varphi_j(y_j),\dots,y_n) = e^{(\varphi_j(y_j)-y_j)\,\partial/\partial y_j}f(y_1,\dots,y_n).$$
Thus it follows that the operators corresponding to the symbols
$$L_{ij}f(y_1,\dots,y_n)\qquad\text{and}\qquad\sum_{r=1}^{n}L_{r,j-1}\bigl[e^{(\varphi_j(y_j)-y_j)\,\partial/\partial y_j}\bigr]_{ir}f(y_1,\dots,y_n)$$
coincide. We require that the symbols of these operators coincide and use the right regular representation of the tuple $(\overset{2}{y},\overset{1}{\partial/\partial y})$ constructed in the preceding section. We obtain
$$G_j(x,p) = e^{\overset{2}{p_j}\,F_j(\overset{1}{x_j+\partial/\partial p_j})}\,G_{j-1}(x,p), \tag{II.13}$$
where
$$F_j(y) = \varphi_j(y) - y.$$
Using the relation $L_{jj} = y_j$, we can write out the following system of equations to define $L_{i0}(x,p)$:
$$\Bigl[e^{\overset{2}{p_j}\,F_j(\overset{1}{x_j+\partial/\partial p_j})}\cdots e^{\overset{2}{p_1}\,F_1(\overset{1}{x_1+\partial/\partial p_1})}\,G_0(x,p)\Bigr]_j = x_j,\qquad j = 1,\dots,n. \tag{II.14}$$
If this system is solvable, its solution gives the right ordered representation operators; then we can also obtain the left ordered representation operators. Of course, it remains to prove that the solution obtained indeed gives the ordered representation operators. This requires some additional assumptions, and we will return to this question later; the corresponding theorem will be stated only for relations (II.11), since
the statements are essentially the same.

Let us now consider relations (II.11) in more detail. We define matrix functions $\Delta_k(y)$ by the equations
$$(\Delta_k(y))_{sj} = \Omega^s_{kj}(y) - y\,\delta_{sj}, \tag{II.15}$$
where $\delta_{sj}$ is the Kronecker delta, and introduce the matrix operator
$$U_k = \exp\Bigl(\overset{2}{\Delta_k(y_k)}\,\overset{1}{\frac{\partial}{\partial y_k}}\Bigr).$$
The operator $U_k$ acts on scalar functions of $y_k$, and the result is an $n\times n$ matrix function of $y_k$:
$$[U_kf(y_k)]_{sj} = f\bigl(y_k+\Delta_k(y_k)\bigr)_{sj} = f\bigl(\Omega_k(y_k)\bigr)_{sj}, \tag{II.16}$$
where $\Omega_k(y_k)$ is the matrix with elements
$$(\Omega_k(y_k))_{sj} = \Omega^s_{kj}(y_k),$$
and $f(\Omega_k(y_k))_{sj}$ stands for the $(s,j)$th entry of $f(\Omega_k(y_k))$. Define operators $D_{sj}$, $s,j = 1,\dots,n$, acting on functions of the $n$ variables $y_1,\dots,y_n$, by the formula
$$D_{sj} = [U_n\times\cdots\times U_{j+1}]_{sj},\quad j < n,\qquad D_{sn} = \delta_{sn}.$$
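The operators $U_k$ act by substitution in the $y_k$ variable, in the same way as the elementary identity $e^{a\,d/dy}f(y) = f(y+a)$ for a constant shift $a$. The sketch below (our model case, scalar rather than matrix-valued) checks this for a polynomial, where the exponential series terminates.

```python
import sympy as sp

# exp(a * d/dy) acts as the shift f(y) -> f(y + a); for a cubic
# polynomial the Taylor series stops after the third derivative.
y, a = sp.symbols('y a')
f = y**3 - 2*y + 7

shifted = sum(a**k / sp.factorial(k) * sp.diff(f, y, k) for k in range(4))
print(sp.expand(shifted - f.subs(y, y + a)) == 0)   # True
```

In the text the shift $a$ is replaced by the $y_k$-dependent matrix $\Delta_k(y_k)$, with the Feynman indices arranged so that the derivatives act before the multiplications; the scalar case above is the simplest instance of the same mechanism.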
Theorem II.7 Suppose that the operators $l_1,\dots,l_n$ of the left ordered representation for system (II.11) exist and are uniquely defined. Suppose also that all the operators $U_j$ are invertible. Then the operators $l_1,\dots,l_n$ satisfy the system of equations
$$\sum_s l_s\,D_{sj} = y_j. \tag{II.17}$$
Proof. Let $\varphi(y_1,\dots,y_n)$ be a given symbol. Relations (II.11) can be rewritten in the form
$$A_k\mathbf A = \mathbf A\,\Omega_k(A_k),\qquad k = 1,\dots,n,$$
where, as above, $\mathbf A = (A_1,\dots,A_n)$. Thus, using successively relations (II.11) and (II.16), we obtain
$$\varphi(\overset{1}{A_1},\dots,\overset{k-1}{A_{k-1}},\overset{k+1}{A_k},\overset{k+2}{A_{k+1}},\dots,\overset{n+1}{A_n})\,\overset{k}{\mathbf A} = \varphi(\overset{1}{A_1},\dots,\overset{k-1}{A_{k-1}},\overset{k}{\Omega_k(A_k)},\overset{k+2}{A_{k+1}},\dots,\overset{n+1}{A_n})\,\overset{k+1}{\mathbf A}$$
$$= (U_k\varphi)(\overset{1}{A_1},\dots,\overset{k-1}{A_{k-1}},\overset{k}{A_k},\overset{k+2}{A_{k+1}},\dots,\overset{n+1}{A_n})\,\overset{k+1}{\mathbf A} = \cdots = (U_n\cdots U_k\varphi)(\overset{1}{A_1},\dots,\overset{n}{A_n})\,\overset{n+1}{\mathbf A} \overset{\mathrm{def}}{=} (l_{(k)}\varphi)(\overset{1}{A_1},\dots,\overset{n}{A_n}),$$
where
$$l = (l_1,\dots,l_n),\qquad l_{(k)} = (l_{(k)1},\dots,l_{(k)n}).$$
Since all the operators $U_j$ are invertible and $l$ is uniquely determined, it follows that $l_{(k)}$ is also uniquely determined by the property
$$\varphi(\overset{1}{A_1},\dots,\overset{k-1}{A_{k-1}},\overset{k+1}{A_k},\dots,\overset{n+1}{A_n})\,\overset{k}{\mathbf A} = (l_{(k)}\varphi)(\overset{1}{A_1},\dots,\overset{n}{A_n}),$$
that is, $l_{(k)j} = l_j\,U_n\cdots U_k$. From this it follows that
$$\sum_s l_s\,[U_n\cdots U_{j+1}]_{sj} = L_{jj} = y_j,\qquad 1\le j\le n-1.$$
Taking into account that $l_n = y_n$, we find that the operators $l_s$ satisfy system (II.17). The theorem is proved. $\square$
3.2 How to Obtain the Solution

Let us now solve the equations for the ordered representation operators. Equations (II.14) and (II.17) admit explicit solution in a variety of cases. We shall now analyze these cases and then state a general theorem concerning the solvability of system (II.14).

(a) Let the operators $A_1,\dots,A_n$ form a Lie algebra, i.e., satisfy the relations
$$[A_i,A_j] = \sum_k\lambda^k_{ij}A_k, \tag{II.18}$$
where the $\lambda^k_{ij}$ are structural constants. These relations can be rewritten in the form (II.8) by setting
$$\varphi_j(y_j) = y_j + \Lambda_j,\qquad F_j(y_j) = \varphi_j(y_j) - y_j = \Lambda_j,\qquad (\Lambda_j)_{ik} = \lambda^i_{jk},$$
and we obtain from (II.13)
$$G_j(x,p) = e^{p_j\Lambda_j}\,G_{j-1}(x,p).$$
Let us compute the left ordered representation $G_n$. We have
$$G_j(x,p) = e^{-p_{j+1}\Lambda_{j+1}}e^{-p_{j+2}\Lambda_{j+2}}\cdots e^{-p_n\Lambda_n}\,G_n(x,p).$$
Thus we can write out the system of equations
$$\bigl(e^{-p_{j+1}\Lambda_{j+1}}\cdots e^{-p_n\Lambda_n}\,G_n(x,p)\bigr)_j = x_j$$
and obtain the following familiar expression for $G_n$ (cf. Subsection 2.4):
$$G_n = \Lambda^{-1}(p)\,x,\qquad\text{where}\quad(\Lambda(p))_{qj} = \bigl(e^{-p_{q+1}\Lambda_{q+1}}\cdots e^{-p_n\Lambda_n}\bigr)_{jq}.$$
Note that $\Lambda(0) = E$, and consequently the operator obtained is well-defined as a series in powers of $\partial/\partial y$. Simple though cumbersome calculations show that the components of $G_n$ satisfy (II.18) if and only if the structural constants satisfy the Jacobi identity
$$\sum_k\bigl(\lambda^k_{ij}\lambda^s_{kl} + \lambda^k_{jl}\lambda^s_{ki} + \lambda^k_{li}\lambda^s_{kj}\bigr) = 0$$
for all tuples $(i,j,l,s)$ (cf. Section 4).

(b) Let us now analyze a much less trivial case, in which one manages to solve system (II.17) with nonlinear functions $\Omega^s_{kj}$.
Definition II.3 The system of commutation relations (II.11) is said to be solvable if

(a) all matrices $\Delta_k(y)$ in (II.15) are lower-triangular, that is, $(\Delta_k)_{sj} = 0$ for $s < j$;

(b) the functions $\Omega^s_{ks}(y)$ have inverses $(\Omega^s_{ks})^{-1}(y)$ for all $k$ and $s$.

Theorem II.8 If system (II.11) is solvable, then system (II.17) can be solved for $l_s$, $s = 1,\dots,n$ (the explicit form of the solution is given below in the proof of the theorem).

Proof. First of all, let us transform (II.17) to a more convenient form. Let us seek $l_s$ in the form
$$l_s = \sum_{k=1}^{n}y_kM_{ks},$$
where the $M_{ks}$ are now unknown operators. Inserting this into (II.17), we obtain
$$\sum_{k,s}y_kM_{ks}D_{sj} = y_j,\qquad j = 1,\dots,n,$$
so that it suffices to solve the system of equations
$$\sum_s M_{ks}D_{sj} = \delta_{kj},\qquad k = 1,\dots,n. \tag{II.19}$$
Let us introduce the operators $R_j$ by setting
$$(R_jf)(y_1,\dots,y_n) = f\bigl(y_1,\dots,y_j,(\Omega^j_{j+1,j})^{-1}(y_{j+1}),\dots,(\Omega^j_{nj})^{-1}(y_n)\bigr),\qquad j = 1,\dots,n-1,\quad R_n = 1.$$
Now set
$$M_{ss} = R_s,\quad s = 1,\dots,n;\qquad M_{sj} = 0,\quad s < j;\qquad M_{sj} = -R_s\sum_{l=j}^{s-1}D_{sl}M_{lj},\quad s > j.$$
These equations permit us to determine the operators $M_{sj}$ for all $s$ and $j$. We claim that they satisfy (II.19). To prove this, note that if $\Delta$ is a lower-triangular matrix, then so is $f(\Delta)$, and $f(\Delta)_{jj} = f(\Delta_{jj})$ for all $j = 1,\dots,n$. We conclude that $D_{sj} = 0$ for $s < j$ and $D_{jj}\circ R_j = 1$, $j = 1,\dots,n$. Inserting the $M_{sj}$ into (II.19), we obtain, by the above,
$$\sum_{s=1}^{n}M_{ks}D_{sj} = \sum_{s\ge j}M_{ks}D_{sj} = M_{kk}D_{kj} - \sum_{s=j}^{k-1}\sum_{l=s}^{k-1}R_kD_{kl}M_{ls}D_{sj} = M_{kk}D_{kj} - \sum_{l=j}^{k-1}R_kD_{kl}\Bigl\{\sum_{s=j}^{l}M_{ls}D_{sj}\Bigr\}.$$
If $j = k$, then only the term $M_{kk}D_{kk} = R_kD_{kk} = 1$ occurs on the right-hand side of the last equation. Let us show by induction over $k - j$ that
$$\sum_{s=j}^{k}M_{ks}D_{sj} = \delta_{kj}.$$
There is nothing to prove for $k < j$. If $k > j$, then, by the induction hypothesis,
$$M_{kk}D_{kj} - \sum_{l=j}^{k-1}R_kD_{kl}\Bigl\{\sum_{s=j}^{l}M_{ls}D_{sj}\Bigr\} = M_{kk}D_{kj} - \sum_{l=j}^{k-1}R_kD_{kl}\,\delta_{lj} = M_{kk}D_{kj} - R_kD_{kj} = R_kD_{kj} - R_kD_{kj} = 0.$$
The theorem is proved. $\square$
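In the purely numerical (scalar) analogue, where $D$ is an ordinary lower-triangular matrix and $R_k$ is simply $D_{kk}^{-1}$, the recursion of the proof is exactly forward substitution for the inverse matrix; the sketch below (our illustration) checks that it produces $MD = I$.

```python
import numpy as np

# Scalar analogue of the recursion:
#   M[k,k] = 1/D[k,k],
#   M[k,j] = -(1/D[k,k]) * sum_{l=j}^{k-1} D[k,l]*M[l,j]   for k > j.
rng = np.random.default_rng(2)
n = 5
D = np.tril(rng.normal(size=(n, n)))
# keep the "diagonal entries invertible" hypothesis satisfied
D[np.diag_indices(n)] = np.abs(D[np.diag_indices(n)]) + 1.0

M = np.zeros((n, n))
for k in range(n):
    M[k, k] = 1.0 / D[k, k]
    for j in range(k):
        M[k, j] = -sum(D[k, l] * M[l, j] for l in range(j, k)) / D[k, k]

print(np.allclose(M @ D, np.eye(n)))   # True
```

For triangular matrices the one-sided inverse is automatically two-sided, so $DM = I$ holds as well; in the operator setting the $R_k$ and $D_{kl}$ no longer commute, which is why the proof keeps careful track of their order.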
Let us now prove that system (II.14) is solvable in the class of formal power series provided that the functions $\varphi_j(y_j)$ have the form (II.10). We seek the solution of (II.14) in the form
$$L_{s0}(x,p) = \sum_{|\alpha|=0}^{\infty}C_{s,\alpha}(x)\,p^{\alpha}.$$
Then we obtain the following infinite system of equations for the functions $C_{s,\alpha}$:
$$\sum_{s=1}^{n}\sum_{|\alpha|=0}^{\infty}C_{s,\alpha}(x)\,\frac{v_n!}{(v_n-\alpha_n)!}\,x_n^{v_n-\alpha_n}\times\cdots\times\frac{v_{j+1}!}{(v_{j+1}-\alpha_{j+1})!}\,x_{j+1}^{v_{j+1}-\alpha_{j+1}}\Bigl[\frac{\partial^{\alpha_j}}{\partial x_j^{\alpha_j}}\bigl(\varphi_j(x_j)\bigr)^{v_j}\cdots\frac{\partial^{\alpha_1}}{\partial x_1^{\alpha_1}}\bigl(\varphi_1(x_1)\bigr)^{v_1}\Bigr]_{js}$$
$$= x_1^{v_1}\cdots x_j^{v_j+1}\cdots x_n^{v_n}\qquad(v_i = 0,1,\dots;\ i = 1,\dots,n). \tag{II.20}$$
Indeed, let us make the substitution $x\to y$, $p\to\partial/\partial y$ in (II.14) and apply both sides of this equation to the monomial $y_1^{v_1}\cdots y_n^{v_n}$. Then we will have $y_1^{v_1}\cdots y_j^{v_j+1}\cdots y_n^{v_n}$ on the right-hand side of this equation, and on the left-hand side we can use the fact that $(\overset{1}{p},\overset{2}{x+\partial/\partial p})$ is the right regular representation of the tuple $(\overset{1}{\partial/\partial y},\overset{2}{y})$. We obtain
$$\sum_{s=1}^{n}\sum_{|\alpha|=0}^{\infty}C_{s,\alpha}(y)\Bigl(\frac{\partial}{\partial y}\Bigr)^{\!\alpha}\Bigl[e^{F_j(y_j)\,\partial/\partial y_j}\cdots e^{F_1(y_1)\,\partial/\partial y_1}\,y_1^{v_1}\cdots y_n^{v_n}\Bigr]_{js},$$
and the exponentials act as substitution operators, taking each $y_l$, $l\le j$, into $\varphi_l(y_l)$:
$$\Bigl[e^{F_j(y_j)\,\partial/\partial y_j}\cdots e^{F_1(y_1)\,\partial/\partial y_1}\,y_1^{v_1}\cdots y_n^{v_n}\Bigr]_{js} = \Bigl[\bigl(\varphi_j(y_j)\bigr)^{v_j}\cdots\bigl(\varphi_1(y_1)\bigr)^{v_1}\Bigr]_{js}\,y_{j+1}^{v_{j+1}}\cdots y_n^{v_n}.$$
It remains to carry out the differentiations $(\partial/\partial y)^{\alpha}$ term by term,
and we arrive directly at (II.20).

Let $\varphi_j(y_j)$ have the form (II.10). Then the summation in (II.20) ranges over $\alpha\le v$, and (II.20) can be rewritten in the form
$$v_1!\cdots v_n!\sum_{s=1}^{n}\bigl[M_1^{v_1}\cdots M_n^{v_n}\bigr]_{js}\,C_{s,v}(x) = x_1^{v_1}\cdots x_j^{v_j+1}\cdots x_n^{v_n} + \sum_{s}\sum_{\alpha<v}P_{js\alpha v}(x)\,C_{s,\alpha}(x),$$
where $P_{js\alpha v}(x)$ is a known polynomial of degree $\le|v|-|\alpha|$. The last equation immediately implies the following theorem.

Theorem II.9 Let the functions $\varphi_j(x)$ have the form (II.10). Then system (II.14) has a solution provided that for any multi-index $v = (v_1,\dots,v_n)$ we have
$$\det\bigl\|\bigl(M_1^{v_1}\cdots M_n^{v_n}\bigr)_{js}\bigr\| \ne 0.$$
Under the conditions of Theorem II.9, $C_{s,\alpha}(x)$ is a polynomial of degree $\le|\alpha|+1$, and consequently the operator $L_0(\overset{2}{y},\overset{1}{\partial/\partial y})$ is well-defined on the space of polynomial symbols. In particular, the hypotheses of Theorem II.9 are necessarily satisfied if $\varphi$ is a function of the form (II.9) and $\sigma_{ij}\ne 0$ for all $i$ and $j$.
3.3 Semilinear Commutation Relations

Let us now consider a generalization of the commutation relations defining a Lie algebra. It turns out that the method used can be generalized to a wider class of relations, namely, to that of semilinear commutation relations. Consider a Feynman tuple
$$\overset{1}{A_1},\dots,\overset{n}{A_n},\overset{n+1}{B_1},\dots,\overset{n+1}{B_m}$$
of operators satisfying the following relations:
$$[A_j,A_k] = -i\sum_{l=1}^{n}c^l_{jk}(B)A_l,\qquad j,k = 1,\dots,n;$$
$$[A_j,B_s] = -i\,d_{js}(B),\qquad j = 1,\dots,n,\quad s = 1,\dots,m;$$
$$[B_s,B_r] = 0,\qquad s,r = 1,\dots,m. \tag{II.21}$$
Here $c^l_{jk}(z_1,\dots,z_m)$ and $d_{js}(z_1,\dots,z_m)$ are given $m$-ary symbols. We see that the operators $B_s$ commute with one another, so it is quite appropriate to assign the same Feynman index to all these operators. Relations (II.21) generalize the Lie commutation relations in the sense that the structural constants $c^l_{jk}$ are now allowed to depend on the additional operator arguments $B$. However, these arguments all commute with one another and satisfy rather special commutation relations with the operators $A_j$ (see the second line in (II.21)). Let us outline the method that we follow to construct the left ordered representation; the reader will see that in principle it differs only slightly from that used for Lie commutation relations.
(a) Consider the Feynman-ordered exponential
$$U(t,\tau) = e^{i\tau_mB_m}\cdots e^{i\tau_1B_1}\,e^{it_nA_n}\cdots e^{it_1A_1},$$
where $t\in\mathbb R^n$ and $\tau\in\mathbb R^m$. We will find operators
$$\tilde A_j = Q_j\Bigl(\overset{2}{t},\overset{2}{\tau},-i\overset{1}{\frac{\partial}{\partial t}},-i\overset{1}{\frac{\partial}{\partial\tau}}\Bigr)$$
such that
$$\tilde A_j\,U(t,\tau) = A_j\,U(t,\tau).$$

(b) For any symbol $f(y,z)$, $y\in\mathbb R^n$, $z\in\mathbb R^m$, we have
$$f(\overset{1}{A_1},\dots,\overset{n}{A_n},\overset{n+1}{B}) = \Bigl[f\Bigl(-i\frac{\partial}{\partial t},-i\frac{\partial}{\partial\tau}\Bigr)U(t,\tau)\Bigr]\Big|_{t=\tau=0}.$$
Combining this with the preceding formula, we obtain
$$A_j\,\bigl[\!\bigl[f(\overset{1}{A_1},\dots,\overset{n}{A_n},\overset{n+1}{B})\bigr]\!\bigr] = A_j\Bigl[f\Bigl(-i\frac{\partial}{\partial t},-i\frac{\partial}{\partial\tau}\Bigr)U(t,\tau)\Bigr]\Big|_{t=\tau=0} = \Bigl[f\Bigl(-i\frac{\partial}{\partial t},-i\frac{\partial}{\partial\tau}\Bigr)\tilde A_j\,U(t,\tau)\Bigr]\Big|_{t=\tau=0}. \tag{II.22}$$
The composition
$$\Bigl[\!\Bigl[f\Bigl(-i\overset{1}{\frac{\partial}{\partial t}},-i\overset{1}{\frac{\partial}{\partial\tau}}\Bigr)\Bigr]\!\Bigr]\,\Bigl[\!\Bigl[Q_j\Bigl(\overset{2}{t},\overset{2}{\tau},-i\overset{1}{\frac{\partial}{\partial t}},-i\overset{1}{\frac{\partial}{\partial\tau}}\Bigr)\Bigr]\!\Bigr]$$
can be computed with the help of the right ordered representation for the tuple $\bigl(-i\overset{1}{\partial/\partial t},-i\overset{1}{\partial/\partial\tau},\overset{2}{t},\overset{2}{\tau}\bigr)$. Computations similar to those carried out at the end of Subsection 2.4 lead us to the formula
$$l_{A_j} = Q_j\Bigl(-i\overset{1}{\frac{\partial}{\partial y}},-i\overset{1}{\frac{\partial}{\partial z}},\overset{2}{y},\overset{2}{z}\Bigr),\qquad j = 1,\dots,n;$$
since the operators $B_l$ act last and commute with one another, we have
$$B_l\,\bigl[\!\bigl[f(\overset{1}{A_1},\dots,\overset{n}{A_n},\overset{n+1}{B})\bigr]\!\bigr] = \overset{n+1}{B_l}f(\overset{1}{A_1},\dots,\overset{n}{A_n},\overset{n+1}{B}),$$
so that the corresponding representation operators are very simple,
$$l_{B_l} = z_l,\qquad l = 1,\dots,m.$$
Let us now proceed to the implementation of the outlined scheme. Since, in contrast to Example II.1, we do not assume that the symbol space $\mathcal F$ consists only of polynomials, we should make some other assumptions so as to ensure the rigorousness of our considerations. Specifically, we assume the following.
Condition II.1 The exponential $e^{itx}$ belongs to $\mathcal F$ for any $t\in\mathbb R$; moreover, there exists a subalgebra $\Phi$ of the algebra $C^\infty(\mathbb R;\mathcal F)$ of all infinitely differentiable mappings from $\mathbb R$ to $\mathcal F$ such that $e^{itx}$ (more precisely, the mapping $t\mapsto e^{itx}$) belongs to $\Phi$ and the operator $-i\partial/\partial t$ is an $\mathcal F$-generator in $\Phi$.
Lemma II.2 Under Condition II.1, for any tuple $C = (\overset{1}{C_1},\dots,\overset{s}{C_s})$ of $\mathcal F$-generators in an operator algebra $\mathcal A$ and for any symbol $f\in\mathcal F_s$ we have
$$f(C) = \Bigl[f\Bigl(-i\frac{\partial}{\partial t}\Bigr)e^{itC}\Bigr]\Big|_{t=0}, \tag{II.23}$$
where $e^{itC}$ is the Feynman-ordered exponential
$$e^{itC} = e^{it_sC_s}\cdots e^{it_1C_1}.$$

Proof. Since factorable symbols are dense in $\mathcal F_s$ and both sides of (II.23) depend on $f$ linearly and continuously, it suffices to prove the lemma for symbols of the form
$$f(y) = f_1(y_1)\cdots f_s(y_s).$$
Furthermore, we see that for such symbols the statement of the lemma can easily be reduced to the case $s = 1$. Thus, here is what we have to prove: if $C$ is an $\mathcal F$-generator and Condition II.1 holds, then equation (II.23) is valid. Let $\Phi_0\subset C^\infty(\mathbb R;\mathcal A)$ be the image of $\Phi$ under the mapping $\varphi(t,x)\mapsto\varphi(t,C)$, and let $\Phi_{00}\subset\Phi_0$ be the subalgebra of elements independent of $t$. Evidently, the diagram
$$\begin{array}{ccc}\Phi_{00} & \xrightarrow{\ L_{e^{itC}}\ } & \Phi_0\\[2pt] {\scriptstyle L_C}\Big\downarrow & & \Big\downarrow{\scriptstyle -i\,\partial/\partial t}\\[2pt] \Phi_{00} & \xrightarrow{\ L_{e^{itC}}\ } & \Phi_0\end{array}$$
commutes (here $L_C$ stands for the operator of left multiplication by $C$, etc.). Thus $L_{e^{itC}}$ is an intertwining operator for $L_C$ and $-i\partial/\partial t$ in the above-mentioned spaces, and, consequently,
$$L_{e^{itC}}\,f(L_C) = f\Bigl(-i\frac{\partial}{\partial t}\Bigr)L_{e^{itC}}.$$
Apply both sides of the last equation to the identity operator $1\in\Phi_{00}$. We obtain
$$e^{itC}f(C) = f\Bigl(-i\frac{\partial}{\partial t}\Bigr)e^{itC}.$$
The desired relation can now be derived from this one by setting $t = 0$. The lemma is proved. $\square$
Let us now try to find the operators $\tilde A_j$ (item (a) of the outline). The idea is quite simple. We have
$$-i\frac{\partial}{\partial t_j}U(t,\tau) = e^{i\tau_mB_m}\cdots e^{i\tau_1B_1}\,e^{it_nA_n}\cdots e^{it_{j+1}A_{j+1}}A_je^{it_jA_j}\cdots e^{it_1A_1},\qquad j = 1,\dots,n;$$
$$-i\frac{\partial}{\partial\tau_l}U(t,\tau) = B_l\,U(t,\tau),\qquad l = 1,\dots,m. \tag{II.24}$$
We would like to represent the first group of equations in the form
$$-i\frac{\partial}{\partial t_j}U(t,\tau) = \bigl[\!\bigl[\chi_j(\overset{1}{A_1},\dots,\overset{n}{A_n},\overset{n+1}{B},t,\tau)\bigr]\!\bigr]U(t,\tau)$$
and then construct the $\tilde A_j$ by choosing an appropriate "quasilinear combination" of the equations obtained. To transform (II.24) into the desired form, we should move $A_j$ to the last place in the product on the right-hand side, i.e., make it act after all the other operators in this expression.
Lemma II.3 We have
$$e^{it_nA_n}\cdots e^{it_{j+1}A_{j+1}}A_j = \Bigl(\sum_{r=1}^{n}\varphi^j_r(B,t)A_r\Bigr)e^{it_nA_n}\cdots e^{it_{j+1}A_{j+1}},$$
where the $\varphi^j_r(z,t)$ are symbols determined in the proof of the lemma (in fact, independent of $t_1,\dots,t_j$).

Proof. Let
$$X = \sum_{r=1}^{n}\omega_r(B)A_r;$$
let us compute
$$X(t_k) = e^{it_kA_k}Xe^{-it_kA_k} = e^{it_k\,\mathrm{ad}_{A_k}}(X),\qquad t_k\in\mathbb R. \tag{II.25}$$
One has
$$\frac{d}{dt_k}X(t_k) = i\,[A_k,X(t_k)],\qquad X(0) = X.$$
We seek $X(t_k)$ in the form
$$X(t_k) = \sum_{r=1}^{n}\omega_r(B,t_k)A_r,$$
where the $\omega_r(z,t_k)$ will be determined later. On substituting this expression into (II.25) we obtain in the commutator term
$$[A_k,X(t_k)] = \sum_{r=1}^{n}\Bigl(-i\sum_{s=1}^{m}d_{ks}(B)\frac{\partial\omega_r}{\partial z_s}(B,t_k)\,A_r - i\,\omega_r(B,t_k)\sum_{l=1}^{n}c^l_{kr}(B)A_l\Bigr)$$
(we have used relations (II.21) and the commutation formulas from Chapter I). On the whole, we get
$$\sum_{r=1}^{n}\frac{\partial\omega_r}{\partial t_k}(B,t_k)A_r = \sum_{r=1}^{n}\Bigl(\sum_{s=1}^{m}d_{ks}(B)\frac{\partial\omega_r}{\partial z_s}(B,t_k) + \sum_{l=1}^{n}c^r_{kl}(B)\,\omega_l(B,t_k)\Bigr)A_r$$
(in the second term on the right-hand side we have interchanged the notation for the dummy indices $r$ and $l$). Clearly, it suffices to require that the symbols $\omega_r(z,t_k)$, $r = 1,\dots,n$, satisfy the system
$$\frac{\partial\omega_r}{\partial t_k} - \sum_{s=1}^{m}d_{ks}(z)\frac{\partial\omega_r}{\partial z_s} = \sum_{l=1}^{n}c^r_{kl}(z)\,\omega_l,\qquad \omega_r(z,t_k)\big|_{t_k=0} = \omega_r(z),$$
which is a system of ordinary differential equations along the trajectories of the vector field (we assume this field to generate a global phase flow)
$$\frac{d}{dt_k} = \frac{\partial}{\partial t_k} - \sum_{s=1}^{m}d_{ks}(z)\frac{\partial}{\partial z_s}.$$
Let $C_k(z)$ be the matrix with the elements
$$(C_k)_{rl} = c^r_{kl}(z).$$
We have
$$\omega(z,t_k) = T\text{-}\!\exp\Bigl(\int_0^{t_k}C_k\bigl(z_{(k)}(z_0,\tau')\bigr)\,d\tau'\Bigr)\,\omega(z_0),$$
where the integral is taken along the trajectory $z = z_{(k)}(z_0,t)$ of the field such that $z_{(k)}(z_0,t_k) = z$, and $\omega = {}^t(\omega_1,\dots,\omega_n)$. We apply this formula successively for $k = j+1,j+2,\dots,n$ and obtain the following expression for the coefficients $\varphi^j(z,t)$:
$$\varphi^j(z,t) = T\text{-}\!\exp\Bigl(\int_0^{t_n}C_n\bigl(z_{(n)}(z_{0,n},\tau')\bigr)d\tau'\Bigr)\times T\text{-}\!\exp\Bigl(\int_0^{t_{n-1}}C_{n-1}\bigl(z_{(n-1)}(z_{0,n-1},\tau')\bigr)d\tau'\Bigr)\times\cdots\times T\text{-}\!\exp\Bigl(\int_0^{t_{j+1}}C_{j+1}\bigl(z_{(j+1)}(z_{0,j+1},\tau')\bigr)d\tau'\Bigr)\mathbf 1_j, \tag{II.26}$$
where $\varphi^j = {}^t(\varphi^j_1,\dots,\varphi^j_n)$, $\mathbf 1_j$ is the vector with $j$th component $1$ and the other components zero, and the integrals are taken along the trajectories of the related vector fields. The lemma is proved. $\square$
Note that (II.26) can also be rewritten in the form
$$\varphi^j(z,t) = \exp\Bigl[t_n\Bigl(C_n(z)+\sum_s d_{ns}(z)\frac{\partial}{\partial z_s}\Bigr)\Bigr]\cdots\exp\Bigl[t_{j+1}\Bigl(C_{j+1}(z)+\sum_s d_{j+1,s}(z)\frac{\partial}{\partial z_s}\Bigr)\Bigr]\mathbf 1_j. \tag{II.27}$$
We have obtained the equation
$$-i\frac{\partial}{\partial t_j}U(t,\tau) = \sum_{r=1}^{n}\varphi^j_r(B,t)\,A_r\,e^{i\tau B}e^{it_nA_n}\cdots e^{it_1A_1},$$
where the functions $\varphi^j_r(B,t)$ are given by (II.26) or (II.27). It remains to permute $A_r$ and the exponential $e^{i\tau B}$. We have
$$e^{i\tau_mB_m}\cdots e^{i\tau_1B_1}A_r = \bigl[e^{i\tau_m\,\mathrm{ad}_{B_m}}\cdots e^{i\tau_1\,\mathrm{ad}_{B_1}}\bigr](A_r)\cdot e^{i\tau_mB_m}\cdots e^{i\tau_1B_1},$$
so that our aim is to evaluate the first factor on the right-hand side.
However, this is quite simple. We have
$$\mathrm{ad}_{B_k}(A_r) = [B_k,A_r] = i\,d_{rk}(B),\qquad (\mathrm{ad}_B)^{\alpha}(A_r) = 0\ \text{ for }|\alpha| > 1;$$
here $\alpha = (\alpha_1,\dots,\alpha_m)$ is a multi-index, $|\alpha| = \alpha_1+\cdots+\alpha_m$, and $(\mathrm{ad}_B)^{\alpha} = (\mathrm{ad}_{B_m})^{\alpha_m}\cdots(\mathrm{ad}_{B_1})^{\alpha_1}$. Consequently, we can expand the exponential $e^{i\tau\,\mathrm{ad}_B}$ into the Taylor series, retaining only the first two terms:
$$e^{i\tau\,\mathrm{ad}_B}(A_r) = (1+i\tau\,\mathrm{ad}_B)(A_r) = A_r - \sum_{l=1}^{m}\tau_l\,d_{rl}(B).$$
Finally, we get
$$-i\frac{\partial}{\partial t_j}U(t,\tau) = \Bigl[\!\Bigl[\sum_{r=1}^{n}\varphi^j_r(B,t)\Bigl(A_r-\sum_{l=1}^{m}\tau_l\,d_{rl}(B)\Bigr)\Bigr]\!\Bigr]U(t,\tau) = \Bigl[\!\Bigl[\sum_{r=1}^{n}\varphi^j_r(B,t)A_r - \sum_{r,l}\tau_l\,\varphi^j_r(B,t)\,d_{rl}(B)\Bigr]\!\Bigr]U(t,\tau),$$
$$-i\frac{\partial}{\partial\tau_l}U(t,\tau) = B_l\,U(t,\tau). \tag{II.28}$$
(We tacitly assume that the functions $\varphi^j_r(z,t)$ are well-defined for all $t$ and belong to the symbol space $\mathcal F_m$ for each $t$.) Set
$$w_j(z,t,\tau) = \sum_{r,l}\tau_l\,\varphi^j_r(z,t)\,d_{rl}(z).$$
Then we have
$$-i\frac{\partial}{\partial t_j}U(t,\tau) = \Bigl[\!\Bigl[\sum_{r=1}^{n}\varphi^j_r(B,t)A_r - w_j(B,t,\tau)\Bigr]\!\Bigr]U(t,\tau).$$
Assume that the matrix $\|\varphi^j_r(z,t)\|$ is invertible for each $(z,t)$, and the entries of the inverse matrix $\|\psi^k_j(z,t)\|$ belong to the symbol space $\mathcal F_m$ for each $t$. Since
$$\chi\Bigl(-i\frac{\partial}{\partial\tau}\Bigr)U(t,\tau) = \chi(B)\,U(t,\tau)$$
for any symbol $\chi$ by virtue of the second equation in (II.28), it follows that
$$A_kU(t,\tau) = \sum_{j=1}^{n}\psi^k_j(B,t)\Bigl[-i\frac{\partial}{\partial t_j} + w_j(B,t,\tau)\Bigr]U(t,\tau).$$
Furthermore, permuting $A_r$ and $\psi^k_j(B,t)$, we obtain the relation
$$Q_k\Bigl(\overset{2}{t},\overset{2}{\tau},-i\overset{1}{\frac{\partial}{\partial t}},-i\overset{1}{\frac{\partial}{\partial\tau}}\Bigr)U(t,\tau) = A_kU(t,\tau),$$
where
$$Q_k(t,\tau,y,z) = \sum_{j=1}^{n}\psi^k_j(z,t)\,y_j + \mu_k(z,t,\tau),$$
and the functions $\mu_k(z,t,\tau)$ are given by
$$\mu_k(z,t,\tau) = \sum_{j=1}^{n}\psi^k_j(z,t)\,w_j(z,t,\tau) + i\sum_{j=1}^{n}\sum_{r=1}^{n}\sum_{l=1}^{m}\varphi^j_r(z,t)\,d_{rl}(z)\,\frac{\partial\psi^k_j}{\partial z_l}(z,t).$$
Thus we obtain the formulas for the left ordered representation of the relations (II.21):
$$l_{B_s} = z_s,\qquad s = 1,\dots,m,$$
$$l_{A_k} = \sum_{j=1}^{n}\psi^k_j\Bigl(\overset{2}{z},-i\overset{1}{\frac{\partial}{\partial y}}\Bigr)\overset{2}{y_j} + \mu_k\Bigl(\overset{2}{z},-i\overset{1}{\frac{\partial}{\partial y}},-i\overset{1}{\frac{\partial}{\partial z}}\Bigr),\qquad k = 1,\dots,n,$$
where the functions $\psi^k_j(z,t)$ and $\mu_k(z,t,\tau)$ were defined above. Note that, in analogy with the case of Lie algebras, the left ordered representation operators are linear in $y$, but no longer homogeneous: there appears a constant term. As with Lie algebras, the condition that the matrix $\|\psi^k_j(z,t)\|$ be "good" (i.e., everywhere defined and invertible) can be guaranteed if we assume that the matrices $C_k(z)$ are lower-triangular and the derivatives of the functions $d_{ks}(z)$ are bounded. We encourage the reader to check these facts for himself.
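The key step above, the two-term expansion of $e^{i\tau\,\mathrm{ad}_B}(A_r)$, can be checked directly in the simplest example $B = x$ (multiplication) and $A = -i\,d/dx$, where $[A,B] = -i$, so that $d\equiv1$ and conjugation should give $e^{i\tau x}Ae^{-i\tau x} = A - \tau$. This is our own illustration, not an example from the book.

```python
import sympy as sp

# Verify e^{i*tau*x} (-i d/dx) e^{-i*tau*x} g = (-i d/dx - tau) g
x, tau = sp.symbols('x tau')
g = sp.Function('g')(x)

conj = sp.exp(sp.I*tau*x) * (-sp.I) * sp.diff(sp.exp(-sp.I*tau*x) * g, x)
expected = -sp.I*sp.diff(g, x) - tau*g
print(sp.simplify(conj - expected) == 0)   # True
```

Here the expansion of $e^{i\tau\,\mathrm{ad}_B}$ is exact after two terms because $(\mathrm{ad}_B)^2(A) = [x,[x,A]] = 0$, mirroring the vanishing of $(\mathrm{ad}_B)^\alpha(A_r)$ for $|\alpha|>1$ in the text.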
4 The Jacobi Condition and the Poincaré–Birkhoff–Witt Theorem

The examples given in Section 2 and the somewhat more general computation methods presented in Section 3 show that it is not the specific form of the operators $A_1,\dots,A_n$ that is essential to constructing the ordered representation. In fact, we use only the relations satisfied by these operators, and it is quite evident that in the absence of such relations there would be no regular representation at all. Thus, the ordered representation actually "represents" the relations that exist between the operators, which motivates the definitions given in the following.
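For orientation, the Jacobi condition on structure constants $\lambda^k_{ij}$ discussed in this section is easy to test numerically; the sketch below checks it for $\mathfrak{so}(3)$, where $\lambda^k_{ij} = \varepsilon_{ijk}$ (our choice of example).

```python
import numpy as np

# Levi-Civita symbol: the structure constants of so(3).
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[j, i, k] = -1.0

# sum_k ( l^k_{ij} l^s_{kl} + l^k_{jl} l^s_{ki} + l^k_{li} l^s_{kj} ) = 0
jac = (np.einsum('ijk,kls->ijls', eps, eps)
     + np.einsum('jlk,kis->ijls', eps, eps)
     + np.einsum('lik,kjs->ijls', eps, eps))
print(np.allclose(jac, 0))   # True
```

The contraction pattern is exactly the cyclic sum from the Jacobi identity quoted in Subsection 3.2(a); an algebra whose candidate structure constants fail this test admits no consistent ordered representation of Lie type.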
4.1 Ordered Representation of Relation Systems and the Jacobi Condition

First, let us fix a class F of unary symbols and the corresponding classes F_k = F^{⊗̂ k} of k-ary symbols to work with. We assume that A_1, …, A_n are F-generators in some operator algebra A. A relation between A_1, …, A_n is an equation of the form

    ω(\overset{j_1}{A_{i_1}}, …, \overset{j_r}{A_{i_r}}) = 0,    (II.29)

where ω ∈ F_r, and the sequences (j_1, …, j_r) and (i_1, …, i_r) satisfy the conditions:

    i_s ∈ {1, …, n},  s = 1, …, r

(it is not prohibited that some of the j_s coincide);

    j_l ≠ j_k whenever [A_{i_l}, A_{i_k}] ≠ 0,

so that the left-hand side of (II.29) makes sense. It is easy to see that the relations considered in the preceding sections all fall under our definition. For example, the commutation relation

    [-i ∂/∂x, x] = -i

can be written in the form

    ω(-i \overset{1}{∂/∂x}, \overset{2}{x}, -i \overset{3}{∂/∂x}) = 0,

where ω(y_1, y_2, y_3) = y_2(y_3 - y_1) + i.
Let Ω be a (possibly infinite) set of relations of the form (II.29). Strictly speaking, such a relation is determined by a triple (ω, {i_s}, {j_s}), where ω is an r-ary symbol for some r and {i_s} and {j_s} are two sequences satisfying the cited conditions. However, for brevity we will simply write ω ∈ Ω; this cannot lead to misunderstanding, since the sequences {i_s} and {j_s} will be supplied by the context where necessary.
n
Definition 11.4 A tuple / = (ii .. . , i of continuous linear operators on the space Tq of n-ary symbols is called a left ordered representation of the system Q if the following conditions hold: ,)
1
i)
n
for any Feynman tuple A = (A1, . .. , A n ) of T-generators satisfying all the relations co E Q and any f E Tn we have A i f (A) = (li f)(A), j = 1, . . . , n
ii) if f ∈ F_n is a symbol independent of y_{j+1}, …, y_n, then

    (l_j f)(y) = y_j f(y_1, …, y_n).
In other words, we require that l be a left ordered representation for any tuple A = (A_1, …, A_n) of F-generators satisfying Ω (see Definition II.1 above). One can easily give the definition of the right ordered representation by himself or herself.
Definition 11.5 We say that Ω is a system of permutation relations if it has a left regular representation.

Thus, Ω is a system of permutation relations if one can always move \overset{n+1}{A_j} into the jth place in the expression \overset{n+1}{A_j} f(\overset{1}{A_1}, …, \overset{n}{A_n}).

Let Ω be a system of permutation relations and l a left ordered representation of Ω. Suppose that each of the operators l_j is an F-generator in F_n. Then, according to Section 1, we can make F_n into an algebra by defining the bilinear operation (twisted product)

    * : F_n × F_n → F_n,  (f, g) ↦ f * g = f(\overset{1}{l_1}, …, \overset{n}{l_n})(g).
The algebra (F_n, *) (which will be denoted by F_n for brevity) is not necessarily associative. However, F_n contains the identity. It is given by the function identically equal to one. Indeed, f * 1 = f for any f, and the equation 1 * f = 1(\overset{1}{l_1}, …, \overset{n}{l_n})(f) = f is trivial. We see that 1 is the two-sided identity in F_n.
If A = (A_1, …, A_n) is a Feynman tuple of F-generators in an arbitrary operator algebra A with left ordered representation L = (L_1, …, L_n), then, as shown in Section 1 (see Theorem II.1 above), the mapping

    μ_A : f ↦ μ_A(f) = f(\overset{1}{A_1}, …, \overset{n}{A_n})

is an algebra homomorphism,

    (f * g)(\overset{1}{A_1}, …, \overset{n}{A_n}) = f(\overset{1}{A_1}, …, \overset{n}{A_n}) g(\overset{1}{A_1}, …, \overset{n}{A_n}).

In practice, this implies that one can study the twisted multiplication in F_n instead of the operator multiplication in A as long as functions of A are considered. Clearly, the mapping μ_A determines an inclusion of the quotient algebra F_n / Ker μ_A (where Ker μ_A, the kernel of μ_A, is the set of symbols taken by μ_A into zero) in A. Since A
is an associative algebra, it follows that F_n / Ker μ_A is associative as well. Moreover, consider all possible tuples A of F-generators satisfying Ω and set

    J_Ω \overset{def}{=} \bigcap_A Ker μ_A,

where the intersection is taken over all such tuples. Since each Ker μ_A is a two-sided ideal in F_n, the same is true of J_Ω. By construction, f(\overset{1}{A_1}, …, \overset{n}{A_n}) = 0 for f ∈ J_Ω whenever A is a Feynman tuple of F-generators satisfying Ω. Moreover, the algebra F_n / J_Ω is associative. Indeed,

    (f * (g * h))(\overset{1}{A_1}, …, \overset{n}{A_n}) = ((f * g) * h)(\overset{1}{A_1}, …, \overset{n}{A_n})

for any tuple A satisfying Ω. Hence

    f * (g * h) - (f * g) * h ∈ J_Ω,

which implies the associativity of F_n / J_Ω. Actually, F_n / J_Ω is a "natural" symbol space for functions of operators satisfying Ω. The simplest (and most important) is the case in which J_Ω = {0}. It turns out that there is a simple criterion for the triviality of J_Ω in terms of the left ordered representation operators.
Theorem 11.10 The following conditions are equivalent:

(i) J_Ω = {0};

(ii) the left ordered representation operators l_1, …, l_n of the system Ω themselves satisfy the system Ω.

Under either of these conditions the operators l_1, …, l_n are uniquely determined and (F_n, *) is an associative algebra.

Recall that l_1, …, l_n are assumed to be F-generators. Condition (ii) will be referred to as the (generalized) Jacobi condition. It was originally introduced by V. Maslov [130], who also suggested calling the associative algebra (F_n, *) a hypergroup. Theorem II.10 was stated and proved in [130] in a somewhat different form.
Proof. (ii) ⇒ (i). If the tuple l = (l_1, …, l_n) satisfies Ω, then A = l is one of the tuples occurring in the intersection defining J_Ω, and so we have J_Ω ⊂ Ker μ_l. But Ker μ_l = 0 by Lemma II.1, and we arrive at (i).

(i) ⇒ (ii). We prove this implication by reductio ad absurdum. Suppose that

    φ \overset{def}{=} ω(\overset{j_1}{l_{i_1}}, …, \overset{j_r}{l_{i_r}}) ≠ 0

for some ω ∈ Ω. Then there exists a symbol f ∈ F_n such that

    φ⁰ \overset{def}{=} ω(l) f ≠ 0.
Now let A = (A_1, …, A_n) be an arbitrary Feynman tuple satisfying Ω. Then

    φ⁰(\overset{1}{A_1}, …, \overset{n}{A_n}) = (ω(l) f)(\overset{1}{A_1}, …, \overset{n}{A_n}) = ω(\overset{j_1}{A_{i_1}}, …, \overset{j_r}{A_{i_r}}) f(\overset{1}{A_1}, …, \overset{n}{A_n}) = 0,

since the first factor is zero. It follows that φ⁰ ∈ J_Ω, and hence J_Ω ≠ {0}, which contradicts (i).

The associativity of (F_n, *) under condition (i) is clear, since in that case we have F_n = F_n / J_Ω and the latter algebra is associative.

Let us prove the uniqueness of l. Suppose that there exist operators l'_1, …, l'_n with the same properties as l_1, …, l_n. For any Feynman tuple A = (A_1, …, A_n) satisfying Ω we have

    φ_j(\overset{1}{A_1}, …, \overset{n}{A_n}) \overset{def}{=} (l'_j f)(\overset{1}{A_1}, …, \overset{n}{A_n}) - (l_j f)(\overset{1}{A_1}, …, \overset{n}{A_n}) = A_j f(\overset{1}{A_1}, …, \overset{n}{A_n}) - A_j f(\overset{1}{A_1}, …, \overset{n}{A_n}) = 0

for any f ∈ F_n. Set A = l in the last equation. Then we obtain φ_j(l) = 0, and hence

    (l'_j f)(y) - (l_j f)(y) = φ_j(y) = φ_j(l) 1 = 0.

Since f ∈ F_n is arbitrary, it follows that l'_j = l_j. The theorem is proved. □
In fact, we can prove even more. The statement that follows can best be expressed in the language of category theory, and we do just that. The reader unfamiliar with this language can skip the following theorem without hesitation, since it is not used directly in the remaining part of the book.

Consider the category Alg(Ω) whose objects are tuples (A, A_1, …, A_n), where A is an operator algebra and A_1, …, A_n ∈ A are F-generators satisfying the system Ω. The morphisms in Alg(Ω) are continuous algebra homomorphisms taking the distinguished elements into the corresponding distinguished ones; that is, if (A, A_1, …, A_n) and (B, B_1, …, B_n) are elements of Ob(Alg(Ω)), then a morphism is a continuous algebra homomorphism

    f : A → B

such that

    f(A_i) = B_i,  i = 1, 2, …, n.

We can ask ourselves whether there exists a universal object in Alg(Ω), i.e., an algebra A^{(u)} with elements A_1^{(u)}, …, A_n^{(u)} ∈ A^{(u)} that are F-generators and satisfy Ω, such that, moreover, for any (A, A_1, …, A_n) ∈ Ob(Alg(Ω)) there exists a unique continuous morphism π : A^{(u)} → A taking each A_i^{(u)} into A_i. The answer is given by the following theorem.
Theorem 11.11 Suppose that the system Ω has a left ordered representation (l_1, …, l_n) and, moreover, that l_1, …, l_n are F-generators and satisfy Ω. Then the algebra (F_n, *), where * is the twisted product, with the distinguished elements A_1^{(u)} = y_1, …, A_n^{(u)} = y_n, is the universal object in the category Alg(Ω).

Proof. By Theorem II.10, the algebra (F_n, *) is associative. Next, let us assume (A, A_1, …, A_n) ∈ Ob(Alg(Ω)). The mapping

    μ_A : F_n → A,  f ↦ f(\overset{1}{A_1}, …, \overset{n}{A_n})

is an algebra homomorphism, and we have

    μ_A(y_i) = A_i,  i = 1, …, n.

Since all A_i are F-generators, it follows from Theorem I.1 that the mapping μ_A is uniquely determined by the last property. The theorem is thereby proved. □

We can give another interpretation of this theorem by noting that the algebra (F_n, *) is isomorphic to the subalgebra of L(F_n) consisting of elements of the form f(\overset{1}{l_1}, …, \overset{n}{l_n}). Denoting this subalgebra by L̂(F_n), we can state Theorem II.11 as follows: the algebra L̂(F_n) with distinguished elements l_1, …, l_n is the universal object in the category Alg(Ω).

In fact the last theorem deals with a far-reaching generalization of the notion of an algebra determined by generators and relations. Indeed, if F is the set of polynomials, then all relations in Ω are polynomial and the universal object U ∈ Alg(Ω) is just the algebra determined by the generators A_1, …, A_n and the relations ω ∈ Ω. In this situation Theorem II.11 says that each element of U can be represented as an ordered polynomial P(\overset{1}{A_1}, …, \overset{n}{A_n}) in the generators, and that such polynomials are linearly independent, i.e., P(\overset{1}{A_1}, …, \overset{n}{A_n}) = 0 implies P(y_1, …, y_n) ≡ 0. The latter property is known as the Poincaré–Birkhoff–Witt property, or PBW property for short; this name takes its origin from the Poincaré–Birkhoff–Witt theorem valid for Lie algebras. Let us state this theorem and prove it using the above-stated Theorem II.10. In the course of the proof it will become clear why we use the term "Jacobi condition" for condition (ii) in Theorem II.10.
4.2 The Poincaré–Birkhoff–Witt Theorem

Let us proceed to exact statements. Since the Poincaré–Birkhoff–Witt theorem is an assertion about the enveloping algebra of a Lie algebra, we begin by introducing the latter notion. Let L be a finite-dimensional Lie algebra determined by its structural constants c_{ij}^k in some linear basis a_1, …, a_n of L:

    [a_i, a_j] = \sum_{k=1}^{n} c_{ij}^k a_k,  i, j = 1, …, n;    (II.30)

here [·, ·] is the Lie bracket on L.
Definition 11.6 The enveloping algebra of L is the associative algebra U(L) with identity element determined by the generators a_1, …, a_n and the relations (II.30), where [a_i, a_j] now stands for the commutator [a_i, a_j] = a_i a_j - a_j a_i.

Thus, the enveloping algebra is the quotient of the free associative algebra generated by a_1, …, a_n with respect to the minimal ideal containing all elements of the form [a_i, a_j] - \sum_k c_{ij}^k a_k. In other words, the elements of U(L) are finite linear combinations of the products a_{j_1} a_{j_2} ⋯ a_{j_k}, and the factors in such products can be permuted according to (II.30). Clearly, the linear span of ordered monomials

    a^α = a_1^{α_1} ⋯ a_n^{α_n},  |α| = α_1 + ⋯ + α_n = 0, 1, 2, …

coincides with U(L). Indeed, any unordered monomial can be represented as

    a_{j_1} a_{j_2} ⋯ a_{j_N} = P(\overset{1}{a_1}, …, \overset{n}{a_n}),

where P(y_1, …, y_n) is the polynomial

    P(y_1, …, y_n) = l_{j_1} l_{j_2} ⋯ l_{j_N}(1)

and l_1, …, l_n are the operators of the left ordered representation constructed for the relations (II.30) in Section 2.
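The reduction of an unordered monomial to an ordered polynomial can be made concrete in the simplest case of the Weyl algebra, with a_1 = x (multiplication) and a_2 = d/dx, so that [a_2, a_1] = 1. The following sketch (our illustration, not part of the book's construction; plain sympy applied to a test symbol) verifies that the unordered monomial a_2 a_1 coincides with the ordered polynomial y_1 y_2 + 1:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)  # arbitrary test symbol

# a1 = multiplication by x, a2 = d/dx; they satisfy [a2, a1] = 1.
a1 = lambda g: x * g
a2 = lambda g: sp.diff(g, x)

# The unordered monomial a2 a1 equals the ordered polynomial
# P(y1, y2) = y1*y2 + 1 evaluated on the tuple, i.e. a1 a2 + 1.
lhs = a2(a1(f))          # (d/dx)(x f) = f + x f'
rhs = a1(a2(f)) + f      # x f' + f
assert sp.simplify(lhs - rhs) == 0
```

The same mechanism, iterated over a word a_{j_1} ⋯ a_{j_N}, is what the formula P = l_{j_1} ⋯ l_{j_N}(1) encodes.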
Theorem 11.12 (Poincaré–Birkhoff–Witt) The ordered monomials form a basis in the linear space U(L).

Remark 11.5 Here "basis" means "Hamel basis", i.e., we only consider finite linear combinations.
Proof. Clearly, we only have to show that these monomials are linearly independent. To do this we need the following assertion.

Lemma 11.4 The operators l_1, …, l_n satisfy the generalized Jacobi condition, i.e.,

    [l_i, l_j] = \sum_{k=1}^{n} c_{ij}^k l_k,  i, j = 1, …, n.
The proof will be given later in this section; assuming that the lemma is valid, we proceed with the proof of Theorem II.12.

Let V be an associative algebra and A_1, …, A_n ∈ V be elements satisfying the relations

    [A_i, A_j] = \sum_k c_{ij}^k A_k.    (II.31)

Consider the linear mapping τ : U(L) → V given by the formula

    τ(a_{j_1} a_{j_2} ⋯ a_{j_N}) = A_{j_1} A_{j_2} ⋯ A_{j_N}

for any sequence of indices j_1, …, j_N ∈ {1, …, n}. This is well defined since all relations between a_1, …, a_n are corollaries of (II.30) and the operators A_1, …, A_n satisfy the same relations¹. Furthermore, τ is an algebra homomorphism, as can be observed from the fact that τ(a_{j_1} ⋯ a_{j_s}) τ(a_{j_{s+1}} ⋯ a_{j_N}) = τ(a_{j_1} ⋯ a_{j_s} a_{j_{s+1}} ⋯ a_{j_N}).

Suppose that the ordered monomials are linearly dependent, i.e., there exists a nonzero polynomial P(y_1, …, y_n) such that P(\overset{1}{a_1}, …, \overset{n}{a_n}) = 0. But then

    P(\overset{1}{A_1}, …, \overset{n}{A_n}) = P(\overset{1}{τ(a_1)}, …, \overset{n}{τ(a_n)}) = τ(P(\overset{1}{a_1}, …, \overset{n}{a_n})) = 0.

We point out that this conclusion holds for an arbitrary Feynman tuple satisfying the relations (II.31). Hence we obtain

    P ∈ \bigcap_A Ker μ_A = J_Ω,

where Ω is the set of relations (II.31) and the intersection is taken over all tuples A satisfying Ω. Taking into account the result of Lemma II.4, we can apply Theorem II.10, which asserts that J_Ω = {0}. Thus P(y) ≡ 0, which contradicts our assumption. The theorem is thereby proved. □

¹This conclusion reflects the fact that U(L) is universal with respect to homomorphisms of L into associative algebras.
Proof of Lemma II.4. We proceed by straightforward computation. The operators l_j were computed in Section 2. Recall that they have the form

    l_j = \sum_{q=1}^{n} \overset{2}{y_q} B_{jq}\Bigl(-i\overset{1}{\frac{\partial}{\partial y}}\Bigr),  j = 1, …, n,

where B(t) = A^{-1}(t) and the matrix A(t) is given by

    A_{qj}(t) = (e^{t_n C_n} ⋯ e^{t_{q+1} C_{q+1}})_{jq} for q < n,  A_{nj}(t) = δ_{nj}.

Here δ_{kj} is the Kronecker delta and the C_i are the matrices of structural constants, (C_i)_{kj} = c_{ij}^k.
The structural constants c_{ij}^k cannot be arbitrary; they satisfy a number of relations [184]. Specifically,

    c_{ij}^k + c_{ji}^k = 0,  i, j, k = 1, …, n  (antisymmetry),

and

    \sum_k \{ c_{ij}^k c_{kl}^s + c_{jl}^k c_{ki}^s + c_{li}^k c_{kj}^s \} = 0,  i, j, l, s = 1, …, n

(the Jacobi identities for the structural constants). The latter equations can be rewritten in terms of the matrices C_q, namely,

    [C_i, C_j] = \sum_k c_{ij}^k C_k.    (II.32)

Thus we see that the matrices C_q themselves satisfy the relations defining L (in fact, these matrices define the so-called adjoint representation [184] of L in the basis (a_1, …, a_n)). To compute the commutator [l_i, l_j] we use the right ordered representation of the tuple (\overset{2}{y}, -i\overset{1}{∂/∂y}). It was computed in Section 2 and has the form

    r_{-i∂/∂y} = p,  r_y = q + i ∂/∂p.
We compute the products l_i l_j and l_j l_i according to Theorem II.1 and obtain

    [l_i, l_j] = H(\overset{2}{q}, \overset{1}{p}),

where

    H(q, p) = \sum_{k,s} q_k \Bigl( \frac{\partial B_{ik}}{\partial p_s} B_{js} - \frac{\partial B_{jk}}{\partial p_s} B_{is} \Bigr).

In order to prove the lemma it suffices to check that

    \sum_s \Bigl( \frac{\partial B_{ik}}{\partial p_s} B_{js} - \frac{\partial B_{jk}}{\partial p_s} B_{is} \Bigr) = \sum_l c_{ij}^l B_{lk}.    (II.33)
Multiplying the last identity by A_{kr} and summing over k, we obtain²

    c_{ij}^r = \sum_{s,k} ( B_{is} B_{jk} - B_{js} B_{ik} ) \frac{\partial A_{kr}}{\partial p_s}.

Let us now multiply this by A_{li} A_{mj} and sum over i and j. This yields

    \sum_{i,j} c_{ij}^r A_{li} A_{mj} = \frac{\partial A_{mr}}{\partial p_l} - \frac{\partial A_{lr}}{\partial p_m}.    (II.34)
To compute the derivatives on the right-hand side of this equation, we make use of (II.32). We obtain

    \frac{\partial}{\partial p_k} ( e^{p_n C_n} ⋯ e^{p_l C_l} ) = \sum_i A_{ki} C_i e^{p_n C_n} ⋯ e^{p_l C_l}.

Hence it follows that

    \frac{\partial A_{mr}}{\partial p_l} = \Bigl[ \sum_j A_{lj} C_j e^{p_n C_n} ⋯ e^{p_{m+1} C_{m+1}} \Bigr]_{rm} = \sum_{j,s} A_{lj} (C_j)_{rs} A_{ms}  if l > m,

while the derivative vanishes otherwise, or

    \frac{\partial A_{mr}}{\partial p_l} = \begin{cases} 0, & if l < m, \\ \sum_{i,j} c_{ij}^r A_{li} A_{mj}, & if l > m. \end{cases}
²In doing so we have used the relation

    \sum_k \Bigl( \frac{\partial B_{ik}}{\partial p_s} A_{kr} + B_{ik} \frac{\partial A_{kr}}{\partial p_s} \Bigr) = \frac{\partial}{\partial p_s} (δ_{ir}) = 0.
Thus we have proved (II.34) for l > m (and, by antisymmetry, for l < m). If m = l, then both sides of (II.34) are zero, so that everything is all right in this case, too. It remains to note that the passage from (II.33) to (II.34) was an equivalence. The lemma is proved. □
Remark 11.6 The proof of Lemma II.4 could be substantially shorter if we used some standard facts from the theory of Lie algebras. However, we have preferred to give a direct proof since it shows with full clarity that there is a direct algebraic relationship between the Jacobi condition for the operators l_1, …, l_n of the left regular representation and the Jacobi condition for the structural constants of the Lie algebra. It is because of this relationship that the term "generalized Jacobi condition" has been introduced.
4.3 Verification of the Jacobi Condition: Two Examples

In the preceding subsection we have checked the generalized Jacobi condition for commutation relations. Here we accomplish such a verification for the examples given in Subsections 2.2, 2.3, and 2.5.

The perturbed Heisenberg relations. The left ordered representation constructed in Subsection 2.2 for the Feynman tuple (A, B) satisfying the relation BA - aAB = i has the form

    l_A = x,  l_B = y I_a + \frac{i}{(1 - a)x} (1 - I_a),

where I_a is the dilatation operator, I_a f(x, y) = f(ax, y). Let us check that

    l_B l_A - a l_A l_B = i.
The variable y can be considered as a parameter. Therefore, we can assume without loss of generality that our symbols depend only on x. Let f(x) be an arbitrary symbol. We have

    l_B l_A f = l_B(x f) = \Bigl( y I_a + \frac{i}{(1 - a)x}(1 - I_a) \Bigr)(x f(x)) = a y x f(ax) + \frac{i x (f(x) - a f(ax))}{(1 - a)x},

or, on cancelling out the factor x,

    l_B l_A f(x) = a y x f(ax) + i \frac{f(x) - a f(ax)}{1 - a}.

Furthermore,

    l_A l_B f(x) = x l_B(f(x)) = x y f(ax) + i \frac{f(x) - f(ax)}{1 - a}.
We obtain, by combining the last two equalities,

    (l_B l_A - a l_A l_B) f(x) = a x y f(ax) + i \frac{f(x) - a f(ax)}{1 - a} - a \Bigl( x y f(ax) + i \frac{f(x) - f(ax)}{1 - a} \Bigr) = i f(x),

that is, the operators l_A and l_B satisfy the same perturbed Heisenberg relation as the operators A and B themselves. Thus we may conclude that the twisted product

    (f * g)(x, y) \overset{def}{=} f(\overset{1}{l_A}, \overset{2}{l_B})(g(x, y))

defines the structure of an associative algebra on the symbol space F_2, or, in other words, functions of A and B form a hypergroup.

Of course, this argument applies to the case a = -1 as well, so that for the simple example of a graded Lie algebra considered in Subsection 2.5 we can also claim that the functions of A and B form a hypergroup. However, the situation is quite different as soon as a closed system of graded Lie relations defining a finite-dimensional graded Lie algebra is considered. For example, consider the relations

    [A, B]_+ ≡ AB + BA = 1,  [A, A]_+ ≡ 2A² = 0,  [B, B]_+ ≡ 2B² = 0.
The last two relations are not needed in calculating the ordered representation operators; nor do l_A and l_B satisfy l_A² = l_B² = 0, and we see that the generalized Jacobi condition, together with the Poincaré–Birkhoff–Witt property, is lost. The remedy may be to consider symbols as functions of commuting and anticommuting arguments (see, e.g., [12] and [110]).

The nonlinear commutation relations of Subsection 2.3. Let us check that the operators l_A, l_B, and l_C constructed there satisfy the relations

    [l_A, l_B] = 1,
    [l_A, l_C] = α(l_C) l_A + β(l_C) l_B + γ(l_C),
    [l_B, l_C] = β*(l_C) l_A + δ(l_C) l_B + σ(l_C).
The commutators with l_C are computed directly from the explicit formulas for l_A, l_B, and l_C of Subsection 2.3, applied to an arbitrary symbol φ(x, y, z); the calculation involves the dilatation operator I_{S(z)} and difference quotients with denominator z - S(z).
By adding and subtracting S(z) in the numerator, we obtain

    [l_A, l_C] = α(l_C) l_A + β(l_C) l_B + γ(l_C)

(and, by a similar computation, the relation for [l_B, l_C]).
It remains to check that [l_A, l_B] = 1.

Exercise 11.1 Check that [l_A, l_B] = 1 if and only if tr(S(z) - z) ≡ 0.

Thus, the last condition guarantees that functions of (A, B, C) form a hypergroup.
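The verification carried out above for the perturbed Heisenberg relation is easily mechanized. A minimal sympy sketch (our illustration, not the book's; the symbol f is a generic function of x, with y treated as a parameter as in the text):

```python
import sympy as sp

x, y, a = sp.symbols('x y a')
f = sp.Function('f')

I_a = lambda g: g.subs(x, a*x)                              # dilatation I_a
l_A = lambda g: x*g                                          # l_A = x
l_B = lambda g: y*I_a(g) + sp.I*(g - I_a(g))/((1 - a)*x)     # l_B = y I_a + i(1 - I_a)/((1-a)x)

g = f(x)
lhs = l_B(l_A(g)) - a*l_A(l_B(g))
# The perturbed Heisenberg relation l_B l_A - a l_A l_B = i:
assert sp.simplify(lhs - sp.I*g) == 0
```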
5 The Ordered Representations, Jacobi Condition, and the Yang–Baxter Equation

In this section we clarify the relationship between the generalized Jacobi condition for the ordered representation operators introduced in the preceding section and the Yang–Baxter equation, which plays an important role, e.g., in the theory of quantum groups.

In Section 4 we studied the classical Poincaré–Birkhoff–Witt theorem. Let us recall its statement. The theorem claims that if L is a Lie algebra with basis A_1, …, A_n and U is the enveloping algebra of L, then the ordered monomials

    A^k = \overset{1}{A_1}{}^{k_1} ⋯ \overset{n}{A_n}{}^{k_n},  k = (k_1, …, k_n) ∈ (Z_+ ∪ {0})^n,
form a basis in U. This result can be viewed as follows. The algebra U is generated by the generators A_1, …, A_n and the Lie relations between these generators. The Poincaré–Birkhoff–Witt theorem provides a linear basis in U constructed from A_1, …, A_n. Here we discuss the same problem for general algebras defined by generators and relations.

As we have learned in the above exposition, the main problem which is solved by introducing the ordered representation operators is as follows. Suppose that we are given an algebra A generated by the operators A_1, …, A_n and the relations

    \{ ω(\overset{i_1}{A_{j_1}}, …, \overset{i_s}{A_{j_s}}) = 0 \}_{ω ∈ Ω}    (II.35)

(for simplicity, we assume that all symbols ω(y_1, …, y_s) are polynomials; the number s of arguments of this polynomial is allowed to depend on ω). We are interested in whether any element of A can be represented in the form P(\overset{1}{A_1}, …, \overset{n}{A_n}) for an appropriately chosen symbol P(y_1, …, y_n). It was shown that this is true if and only if there exist operators l_1, …, l_n in the space of n-ary symbols such that

    A_j [[ f(\overset{1}{A_1}, …, \overset{n}{A_n}) ]] = (l_j f)(\overset{1}{A_1}, …, \overset{n}{A_n})    (II.36)
for any j = 1, …, n and any symbol f(y_1, …, y_n). With polynomial (or power series) symbols it is not hard to show that, provided the operators l_1, …, l_n exist at all, they can be chosen so that

    f(\overset{1}{l_1}, …, \overset{n}{l_n})(1) = f(y_1, …, y_n)    (II.37)

for any symbol f(y_1, …, y_n). Using the operators l_1, …, l_n we represent any element

    Ψ = ψ(\overset{i_1}{A_{j_1}}, …, \overset{i_k}{A_{j_k}}) ∈ A

as an ordered polynomial,

    ψ(\overset{i_1}{A_{j_1}}, …, \overset{i_k}{A_{j_k}}) = f(\overset{1}{A_1}, …, \overset{n}{A_n}),    (II.38)

where

    f(y_1, …, y_n) = ψ(\overset{i_1}{l_{j_1}}, …, \overset{i_k}{l_{j_k}})(1),    (II.39)

i.e., we can bring the operator Ψ into normal form. From this viewpoint, formula (II.37) means that the normal form of an operator already in normal form is just the operator itself.

We can arrange things in a slightly different way. Let F_n be the space of n-ary symbols. We have the linear mapping

    μ = μ_{\overset{1}{A_1}, …, \overset{n}{A_n}} : F_n → A,  f ↦ f(\overset{1}{A_1}, …, \overset{n}{A_n}),    (II.40)
whose image consists of operators in normal form. Since any operator Ψ ∈ A can be brought into normal form (see (II.38)–(II.39)), we see that μ is an epimorphism. Moreover, by (II.38)–(II.39) we have

    [[ f(\overset{1}{A_1}, …, \overset{n}{A_n}) ]] [[ g(\overset{1}{A_1}, …, \overset{n}{A_n}) ]] = h(\overset{1}{A_1}, …, \overset{n}{A_n}),

where the symbol h(y_1, …, y_n) of the product can be expressed by the formula

    h(y_1, …, y_n) = [[ f(\overset{1}{l_1}, …, \overset{n}{l_n}) ]] [[ g(\overset{1}{l_1}, …, \overset{n}{l_n}) ]](1).    (II.41)

Since, according to (II.37), we have

    g(\overset{1}{l_1}, …, \overset{n}{l_n})(1) = g(y_1, …, y_n),    (II.42)

(II.41) reduces to

    h(y_1, …, y_n) = f(\overset{1}{l_1}, …, \overset{n}{l_n})(g(y)).    (II.43)

Denote

    f * g = f(\overset{1}{l_1}, …, \overset{n}{l_n})(g(y)).    (II.44)

This is a bilinear operation on F_n, and the element 1 ∈ F_n is the two-sided identity with respect to the operation *: indeed, 1 is the right identity by (II.37), and the left identity since

    1 * g = 1(\overset{1}{l_1}, …, \overset{n}{l_n})(g) = g.    (II.45)

The mapping (II.40) is an algebra homomorphism, and it is epimorphic. There are two important closely related questions.

(1) Is (F_n, *) an associative algebra (or, in Maslov's terminology [130], is (F_n, *) a hypergroup)?

(2) Is the mapping (II.40) a monomorphism?

The implication (2) ⇒ (1) is easy, since if (II.40) is a monomorphism, then it is an isomorphism taking (F_n, *) into an associative algebra. So we can concentrate on condition (2). It turns out (see Theorem II.10) that the mapping (II.40) is an isomorphism if and only if the left ordered representation operators l_1, …, l_n themselves satisfy the relations (II.35), i.e.,

    ω(\overset{i_1}{l_{j_1}}, …, \overset{i_s}{l_{j_s}}) = 0    (II.46)

for all ω ∈ Ω. This condition is called the generalized Jacobi condition. Hence, we have the equivalence

    Generalized Jacobi condition ⟺ μ is an isomorphism.    (II.47)
Let us study the right-hand side of (II.47) in more detail. With polynomial symbols, it can be restated as follows: the ordered monomials

    \overset{1}{A_1}{}^{k_1} ⋯ \overset{n}{A_n}{}^{k_n}    (II.48)

form a (linear) basis in the algebra A. This property is known as the Poincaré–Birkhoff–Witt (PBW) property, and the basis (II.48) as the PBW basis. One can even define a finer property as follows. Let A^{(j)} ⊂ A be the subspace spanned by the unordered monomials of order ≤ j. Clearly,

    A^{(0)} ⊂ A^{(1)} ⊂ ⋯ ⊂ A^{(j)} ⊂ A^{(j+1)} ⊂ ⋯    (II.49)

is an increasing filtration on A.

Definition 11.7 One says that the PBW property is satisfied in A up to order s if for any k ≤ s the monomials (II.48) with k_1 + ⋯ + k_n ≤ k form a basis in the space A^{(k)}.
The major advantage of the new definition is that the PBW property (= the PBW property up to order ∞ in terms of this definition) can be verified step by step, via, say, induction on k. However, this is not so easy, and in practice it is much simpler to invert the emphasis and to find some (necessary) conditions on the constants involved in the commutation relations (II.35) than it is to prove these conditions to be sufficient for the PBW property for any given order. Let us consider some examples.
Example 11.2 Enveloping algebra of a Lie algebra. Let A be the algebra generated by A_1, …, A_n and the relations

    [A_j, A_k] ≡ A_j A_k - A_k A_j = \sum_l c_{jk}^l A_l.    (II.50)

Let us assume that A satisfies the PBW property and infer some consequences for the constants c_{jk}^l. This is quite easy. First of all we have

    [A_j, A_k] + [A_k, A_j] = 0,    (II.51)

that is, according to (II.50),

    \sum_l (c_{jk}^l + c_{kj}^l) A_l = 0.    (II.52)

Assuming that the PBW property is valid for A up to order 1, that is, the A_l, l = 1, …, n, are linearly independent, we readily obtain

    c_{jk}^l = -c_{kj}^l.    (II.53)
Furthermore, consider the Jacobi identity

    [[A_j, A_k], A_l] + c.p. = 0,    (II.54)

where c.p. stands for the two summands obtained from [[A_j, A_k], A_l] by cyclic permutations. Using (II.50) twice, we obtain

    \sum_{r,s} ( c_{jk}^s c_{sl}^r + c.p. ) A_r = 0,    (II.55)

where c.p. denotes the terms obtained by cyclic permutations of the indices j, k, and l. Again using the linear independence of the A_r, we obtain

    \sum_s ( c_{jk}^s c_{sl}^r + c_{lj}^s c_{sk}^r + c_{kl}^s c_{sj}^r ) = 0.    (II.56)

The identities (II.53) and (II.56) are known, respectively, as the antisymmetry condition and the Jacobi condition for the structural constants of a Lie algebra. Let us
emphasize that they both follow from the assumed PBW property up to order 1 (!) in A (the algebra A is called the enveloping algebra of the Lie algebra determined by the structural constants c_{jk}^s). Let us also point out that it is not obvious at this stage how to infer the PBW property of A (at least for orders > 1) from these identities. Nevertheless, the PBW property holds in this situation to arbitrary order (the Poincaré–Birkhoff–Witt theorem). The standard "combinatorial" proof of this theorem can be found in any advanced textbook on Lie algebras. In this book we have given another proof. First, the left ordered representation of the relations (II.50) was constructed (see Section 4), and then it was proved that the identities (II.53) and (II.56) imply that the generalized Jacobi condition holds for the operators of the left ordered representation.
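For a concrete instance of (II.53) and (II.56) (our example, not from the book), take the structural constants of so(3), c_{jk}^s = ε_{jks}. A short numpy check also confirms that the adjoint matrices (C_i)_{kj} = c_{ij}^k reproduce the defining relations, as in (II.32):

```python
import numpy as np

# Structural constants of so(3): c[j, k, s] = c_{jk}^s = epsilon_{jks}
c = np.zeros((3, 3, 3))
c[0, 1, 2] = c[1, 2, 0] = c[2, 0, 1] = 1.0
c[1, 0, 2] = c[2, 1, 0] = c[0, 2, 1] = -1.0

# (II.53): antisymmetry c_{jk}^s = -c_{kj}^s
assert np.allclose(c, -c.transpose(1, 0, 2))

# (II.56): sum_s (c_{jk}^s c_{sl}^r + c_{lj}^s c_{sk}^r + c_{kl}^s c_{sj}^r) = 0
jac = (np.einsum('jks,slr->jklr', c, c)
       + np.einsum('ljs,skr->jklr', c, c)
       + np.einsum('kls,sjr->jklr', c, c))
assert np.allclose(jac, 0)

# Adjoint matrices (C_i)_{kj} = c_{ij}^k satisfy [C_i, C_j] = sum_k c_{ij}^k C_k
C = c.transpose(0, 2, 1)  # C[i][k, j] = c[i, j, k]
for i in range(3):
    for j in range(3):
        comm = C[i] @ C[j] - C[j] @ C[i]
        rhs = sum(c[i, j, k] * C[k] for k in range(3))
        assert np.allclose(comm, rhs)
```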
Example 11.3 Consider the algebra A defined by the generators A, B, and C and the relations

    BA = αAB + σC,
    BC = γCB + νB,    (II.57)
    CA = βAC + μA,

where α, β, γ, σ, ν, and μ are some prescribed constants. Let us assume the PBW property and infer some equations involving these constants. First of all, it is easy to see that αβγ ≠ 0. Indeed, assume the opposite, say, α = 0. Then

    BA - σC = 0,    (II.58)

that is, BA and C are linearly dependent and the PBW property up to order 2 fails. Next, consider the monomial ABC and transform it into CBA using the relations (II.57) in two different ways.

Method 1. Interchange first B and C. We have

    ABC = A(γCB + νB) = νAB + γACB.    (II.59)
In the last term we interchange A first with C and then with B, thus obtaining

    νAB + γACB = νAB + γ\Bigl( \frac{1}{β} CAB - \frac{μ}{β} AB \Bigr) = \Bigl( ν - \frac{γμ}{β} \Bigr) AB + \frac{γ}{αβ} CBA - \frac{γσ}{αβ} C².    (II.60)
Method 2. Let us first interchange B and A in the product ABC. Then we have, by analogy with the above,

    ABC = \frac{γ}{αβ} CBA + \frac{ν - μ}{αβ} BA - \frac{σ}{α} C².    (II.61)
Comparing (II.60) and (II.61) and substituting BA = αAB + σC into the latter, we see that

    \Bigl( ν - \frac{γμ}{β} \Bigr) AB - \frac{γσ}{αβ} C² = \frac{ν - μ}{β} AB + \frac{σ(ν - μ)}{αβ} C - \frac{σ}{α} C².    (II.62)
Assuming the PBW property up to order 2, we see that one should have

    \frac{σ(ν - μ)}{αβ} = 0,  \frac{σ}{α}\Bigl( 1 - \frac{γ}{β} \Bigr) = 0,    (II.63)

and

    ν - μ = νβ - μγ.

In other words, either

    σ = 0 and ν(1 - β) = μ(1 - γ),    (II.64)

or

    β = γ and ν = μ.    (II.65)
Note that we have only proved that conditions (II.64) or (II.65) are necessary for the PBW property to be satisfied, and not vice versa. To prove the converse statement, we again calculate the left ordered representation operators. This can easily be done by analogy with numerous other examples that can be found in the preceding sections.
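The three coefficient equations implicit in (II.62) can be checked mechanically. A small sympy sketch (our own bookkeeping; it assumes β ≠ 1 so that the second condition in (II.64) can be solved for ν):

```python
import sympy as sp

al, be, ga, si, nu, mu = sp.symbols('alpha beta gamma sigma nu mu', nonzero=True)

# Coefficient equations from comparing (II.60) with (II.61)
# after substituting BA = alpha*AB + sigma*C:
eq_AB = nu - ga*mu/be - (nu - mu)/be   # coefficient of AB
eq_C  = si*(nu - mu)/(al*be)           # coefficient of C
eq_C2 = (si/al)*(1 - ga/be)            # coefficient of C^2

conds_64 = {si: 0, nu: mu*(1 - ga)/(1 - be)}  # (II.64), assuming beta != 1
conds_65 = {ga: be, nu: mu}                   # (II.65)

for conds in (conds_64, conds_65):
    for eq in (eq_AB, eq_C, eq_C2):
        assert sp.simplify(eq.subs(conds)) == 0
```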
Choose the Feynman ordering \overset{1}{C}, \overset{2}{B}, \overset{3}{A}. We omit the calculations and only present the final result (II.66): l_A acts as multiplication by y_3, while l_B and l_C are expressed through the operators I^α_{y_3}, I^{αβ}_{y_3}, and I^{αβγ}_{y_3} introduced below, with coefficients depending on y_1, y_2, y_3 and on the constants α, β, γ, σ, ν, μ,
where the notation I^α_z, I^{αβ}_z, and I^{αβγ}_z stands for the following combinations of dilatations and difference derivatives:

    (I^α_z f)(z) = f(αz),
    (I^{αβ}_z f)(z) = \frac{δf}{δx}(αz, βz),    (II.67)
    (I^{αβγ}_z f)(z) = \frac{δ²f}{δx²}(αz, βz, γz).
It is a routine but useful exercise to check that the operators (11.66) satisfy the original commutation relations (11.57) if and only if one of conditions (11.64) and (11.65) is satisfied.
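The difference derivatives entering (II.67) are the usual divided differences. A minimal sympy sketch (our illustration, with hypothetical helper names dd1 and dd2):

```python
import sympy as sp

x, z, a, b, c = sp.symbols('x z alpha beta gamma')

def dd1(f, u, v):
    """First difference derivative (divided difference) df/dx(u, v)."""
    return (f.subs(x, u) - f.subs(x, v)) / (u - v)

def dd2(f, u, v, w):
    """Second difference derivative d^2 f / dx^2 (u, v, w)."""
    return (dd1(f, u, v) - dd1(f, v, w)) / (u - w)

f = x**2
# (I^{ab}_z f)(z) = df/dx(a z, b z); for f = x^2 this is (a + b) z
assert sp.simplify(dd1(f, a*z, b*z) - (a + b)*z) == 0
# (I^{abc}_z f)(z) = d^2 f/dx^2(a z, b z, c z); for f = x^2 this is 1
assert sp.simplify(dd2(f, a*z, b*z, c*z) - 1) == 0
```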
Example 11.4 (Graded algebras) Let G be a finite abelian group, and let

    A = ⊕_{g∈G} A_g    (II.68)

be a G-graded algebra with 1. Recall that this means that the underlying linear space A is the direct sum of the linear spaces A_g, g ∈ G, and that

    A_g A_h ⊂ A_{g+h}    (II.69)

for any g, h ∈ G. In particular, it is assumed that 1 ∈ A_0, where 0 ∈ G is the neutral element of G. Assume that A is generated (as an algebra) by a finite number of elements A_1, …, A_n. These elements are homogeneous, that is,

    A_i ∈ A_{g_i},  i = 1, …, n,

and the following permutation relations are valid:

    A_i A_j - ω_{ij} A_j A_i = \sum_k λ_{ij}^k A_k,    (II.70)
where the ω_{ij} and λ_{ij}^k are some prescribed constants (for homogeneity reasons, the sum on the right-hand side of (II.70) extends only over k such that

    g_k = g_i + g_j;    (II.71)

all the other λ_{ij}^k are zero). We shall assume that the ω_{ij} depend not on i and j themselves but rather on g_i and g_j:

    ω_{ij} = ω(g_i, g_j).    (II.72)

The numbers ω_{ij} and λ_{ij}^k will be referred to as the scale and structural constants of the algebra A, respectively. Assume that the PBW property is valid in A with respect to the tuple A_1, …, A_n, and let us deduce some equations involving the scale and structural constants. Let

    A^{(0)} ⊂ A^{(1)} ⊂ ⋯ ⊂ A^{(s)} ⊂ A^{(s+1)} ⊂ ⋯    (II.73)

be the filtration in A induced by the degree of unordered monomials in A_1, …, A_n. Clearly, one has

    A^{(i)} = ⊕_{g∈G} A_g^{(i)},    (II.74)

where

    A_g^{(i)} = A^{(i)} ∩ A_g.    (II.75)

Consider the associated graded algebra

    B = ⊕_{i=0}^{∞} A^{(i)} / A^{(i-1)} = ⊕_{i=0}^{∞} B^{(i)}    (II.76)
(here we set A^{(-1)} = {0}). Then, clearly, each B^{(i)} is G-graded, that is,

    B^{(i)} = ⊕_{g∈G} B_g^{(i)},    (II.77)

and, moreover, B has the PBW property with respect to the tuple B_1, …, B_n, where each B_i is the image of the corresponding A_i in A^{(1)} / A^{(0)}. The operators B_i satisfy the "homogeneous" permutation relations that can be obtained from (II.70) by omitting the linear term, lost in the course of factorization:

    B_i B_j = ω(g_i, g_j) B_j B_i.    (II.78)

Consider the product B_i B_j B_k and transform it into B_k B_j B_i using the relations (II.78). We have

    B_i B_j B_k = ω(g_i, g_j) B_j B_i B_k = ω(g_i, g_j) ω(g_i, g_k) B_j B_k B_i = ω(g_i, g_j) ω(g_i, g_k) ω(g_j, g_k) B_k B_j B_i,

and the same factor results if we first interchange B_j and B_k. Hence, the PBW property imposes no restrictions on the constants ω(g_i, g_j) alone, except for

    ω(g_i, g_j) = ω(g_j, g_i)^{-1}.    (II.79)

In particular, if all the constants λ_{ij}^k are zero, then B is isomorphic to A, and it is easy to see that the PBW property holds in A up to any order. Let us now proceed to the general case of nonzero constants λ. We assume that the scale constants ω(g, h) satisfy the following condition:

    ω(g, h + k) = ω(g, h) ω(g, k).    (II.80)
If we define a bracket in $A$ by setting
\[ [A_g, A_h] = A_g A_h - \omega(g, h)\, A_h A_g \quad \text{for } A_g \in A_g \text{ and } A_h \in A_h, \tag{II.81} \]
then the Jacobi identity
\[ [A_g, [A_h, A_k]] = [[A_g, A_h], A_k] + \omega(g, h)\, [A_h, [A_g, A_k]] \tag{II.82} \]
holds for any homogeneous elements $A_g \in A_g$, $A_h \in A_h$, and $A_k \in A_k$. Applying the identity (II.82) to $A_i$, $A_j$, and $A_k$ and taking into account that the third-order terms in this identity necessarily cancel out, we obtain
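The twisted Jacobi identity (II.82) is a formal consequence of conditions (II.79) and (II.80), so it can be machine-checked in the free algebra. The following sketch (an illustration, not part of the book) takes $G = \mathbb{Z}^2$, the bicharacter $\omega(g, h) = q^{g_1 h_2 - g_2 h_1}$ (which satisfies both (II.79) and (II.80)), and illustrative grade vectors for three noncommuting generators:

```python
import sympy as sp

q = sp.symbols('q', positive=True)
A, B, C = sp.symbols('A B C', commutative=False)

def omega(g, h):
    # bicharacter on G = Z^2: satisfies (II.79) and biadditivity (II.80)
    return q**(g[0]*h[1] - g[1]*h[0])

def bracket(x, y):
    # twisted bracket (II.81) on (expression, grade) pairs; grades add
    (ex, gx), (ey, gy) = x, y
    return (sp.expand(ex*ey - omega(gx, gy)*ey*ex),
            (gx[0] + gy[0], gx[1] + gy[1]))

ga, gb, gc = (1, 0), (0, 1), (1, 1)   # illustrative grades
a, b, c = (A, ga), (B, gb), (C, gc)

# Jacobi identity (II.82): [a,[b,c]] = [[a,b],c] + omega(ga,gb)*[b,[a,c]]
lhs = bracket(a, bracket(b, c))[0]
rhs = sp.expand(bracket(bracket(a, b), c)[0]
                + omega(ga, gb)*bracket(b, bracket(a, c))[0])
assert sp.expand(lhs - rhs) == 0
print("twisted Jacobi identity (II.82) holds")
```

Any other choice of grades gives the same cancellation, since the hand computation above never uses the particular values.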
\[ \sum_{l} \lambda^l_{jk}\,\bigl(A_i A_l - \omega(g_i, g_l)\, A_l A_i\bigr) = \sum_{l} \lambda^l_{ij}\,\bigl(A_l A_k - \omega(g_l, g_k)\, A_k A_l\bigr) + \omega(g_i, g_j) \sum_{l} \lambda^l_{ik}\,\bigl(A_j A_l - \omega(g_j, g_l)\, A_l A_j\bigr), \tag{II.83} \]
II. Method of Ordered Representation
or
\[ \sum_{l,m} \bigl(\lambda^l_{jk}\lambda^m_{il} - \lambda^l_{ij}\lambda^m_{lk} - \omega(g_i, g_j)\,\lambda^l_{ik}\lambda^m_{jl}\bigr)\, A_m = 0. \tag{II.84} \]
Assuming the PBW property of order 1, we obtain the following equation tying up the scale and structural constants of the graded algebra:
\[ \sum_{l} \bigl(\lambda^l_{jk}\lambda^m_{il} - \lambda^l_{ij}\lambda^m_{lk} - \omega(g_i, g_j)\,\lambda^l_{ik}\lambda^m_{jl}\bigr) = 0 \tag{II.85} \]
for all $i, j, k, m = 1, \dots, n$. It can be shown, by constructing the ordered representation and computing the commutators, that conditions (II.79), (II.80), and (II.85) guarantee that the PBW property holds in the algebra $A$.
Example II.5 (The Faddeev–Zamolodchikov algebra) In the quantum inverse scattering method the key role is played by the Faddeev–Zamolodchikov algebra, defined as follows. Let $R = (R^{\beta\beta'}_{\alpha\alpha'})$ be a tensor whose indices range from $1$ to $n$. Consider the algebra $A$ generated by the operators $A^{\beta}_{\alpha}$, where $\alpha, \beta = 1, \dots, n$, and the relations
\[ R^{\beta\beta'}_{\alpha\alpha'}\, A^{\gamma}_{\beta}\, A^{\gamma'}_{\beta'} = R^{\gamma\gamma'}_{\beta\beta'}\, A^{\beta'}_{\alpha'}\, A^{\beta}_{\alpha} \tag{II.86} \]
(summation over the repeated indices $\beta, \beta'$), where $\alpha, \alpha', \gamma, \gamma' = 1, \dots, n$. Such an algebra is called a Faddeev–Zamolodchikov algebra. It can be proved that the PBW property up to degree 3 holds in a Faddeev–Zamolodchikov algebra $A$ provided that the tensor $R$ satisfies a certain equation known as the Yang–Baxter equation. This equation can be written out as follows. The tensor $R$ can be viewed as an element of the space
\[ R \in \mathrm{Mat}(n, \mathbb{C}) \otimes \mathrm{Mat}(n, \mathbb{C}), \tag{II.87} \]
\[ R = \sum_s \sigma_s \otimes v_s, \tag{II.88} \]
where $\sigma_s$ and $v_s$ are $(n \times n)$ matrices. Define the elements $R^{12}, R^{13}, R^{23} \in \mathrm{Mat}(n, \mathbb{C}) \otimes \mathrm{Mat}(n, \mathbb{C}) \otimes \mathrm{Mat}(n, \mathbb{C})$ by setting
\[ R^{12} = \sum_s \sigma_s \otimes v_s \otimes I, \qquad R^{13} = \sum_s \sigma_s \otimes I \otimes v_s, \qquad R^{23} = \sum_s I \otimes \sigma_s \otimes v_s, \tag{II.89} \]
where $I$ is the identity matrix.
The Yang–Baxter equation reads
\[ R^{12} R^{13} R^{23} = R^{23} R^{13} R^{12}. \tag{II.90} \]
Lengthy but routine computations show that (II.90) guarantees the PBW property in the Faddeev–Zamolodchikov algebra up to degree 3. Unfortunately, the ordered representation operators cannot be written out in such a general setting, so it remains unknown whether the PBW property holds up to arbitrary degree. However, in some particular cases the ordered representation can be computed explicitly. From now on we use the term "Yang–Baxter equation" for any equations that the PBW property imposes on the constants in the commutation relations.
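A concrete instance of (II.90) can be checked numerically. The block below (illustrative; the specific matrix is the standard $R$-matrix attached to $U_q(\mathfrak{sl}_2)$ in the fundamental representation, a well-known solution of the Yang–Baxter equation, not an example from the book) builds $R^{12}$, $R^{13}$, $R^{23}$ with Kronecker products as in (II.89) and verifies the equation:

```python
import numpy as np

q = 1.7                               # generic parameter; any q != 0 works here
I2 = np.eye(2)

# standard U_q(sl2) R-matrix in Mat(2) (x) Mat(2), basis order (11, 12, 21, 22)
R = np.array([[q, 0,       0, 0],
              [0, 1, q - 1/q, 0],
              [0, 0,       1, 0],
              [0, 0,       0, q]])

P = np.eye(4)[[0, 2, 1, 3]]           # flip of the two tensor factors
R12 = np.kron(R, I2)                  # acts in legs 1,2 of C^2 (x) C^2 (x) C^2
R23 = np.kron(I2, R)                  # acts in legs 2,3
P23 = np.kron(I2, P)                  # swaps legs 2 and 3
R13 = P23 @ R12 @ P23                 # acts in legs 1,3, as in (II.89)

lhs = R12 @ R13 @ R23
rhs = R23 @ R13 @ R12
assert np.allclose(lhs, rhs)          # the Yang-Baxter equation (II.90)
print("max deviation:", np.abs(lhs - rhs).max())
```

The conjugation trick for $R^{13}$ avoids writing the triple Kronecker sum by hand: swapping legs 2 and 3, applying $R^{12}$, and swapping back places the action on legs 1 and 3.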
Example II.6 (The Sklyanin algebra) Consider the algebra $A$ with generators $A_0$, $A_1$, $A_2$, $A_3$ and the relations
\[ [A_0, A_i] = i\,J_{jk}\,(A_j A_k + A_k A_j), \qquad [A_j, A_k] = i\,(A_0 A_i + A_i A_0), \tag{II.91} \]
valid for any cyclic permutation $(i, j, k)$ of the indices $1, 2, 3$. The constants $J_{jk}$ occurring in equation (II.91) satisfy the Yang–Baxter equations provided that
\[ J_{jk} = \frac{J_k - J_j}{J_i}, \]
where all the $J_k$ are constants. Sklyanin [168] showed that the PBW property holds in $A$. However, for general $J_{jk}$ the ordered representation is not known. Let us consider the particular case in which $J_1 = J_2 = 1$ and $J_3 = 1 + \omega$, where for simplicity we assume $\omega > 0$ and $\omega \ne 1$. The relations (II.91) take the form
\[ [A_0, A_1] = i\omega\,(A_2 A_3 + A_3 A_2), \qquad [A_0, A_2] = -i\omega\,(A_1 A_3 + A_3 A_1), \qquad [A_0, A_3] = 0, \]
\[ [A_j, A_k] = i\,(A_0 A_l + A_l A_0), \tag{II.92} \]
where $(j, k, l)$ is an arbitrary cyclic permutation of the indices $(1, 2, 3)$. We choose in $A$ another set of generators given by the formulas
\[ a_\pm = A_1 \pm i A_2, \qquad b_\pm = \pm A_0 + \sqrt{\omega}\, A_3. \tag{II.93} \]
These generators are linear combinations of the original generators. We choose the Feynman ordering $(\overset{1}{a_-}, \overset{2}{a_+}, \overset{3}{b_-}, \overset{4}{b_+})$ and construct the regular representation for this ordering. For the new generators the relations (II.92) take the form
\[ [b_\pm, a_+] = \pm\sqrt{\omega}\,(b_\pm a_+ + a_+ b_\pm), \qquad [b_\pm, a_-] = \mp\sqrt{\omega}\,(b_\pm a_- + a_- b_\pm), \]
\[ [b_+, b_-] = 0, \qquad [a_+, a_-] = \frac{1}{\sqrt{\omega}}\,(b_+^2 - b_-^2). \tag{II.94} \]
In a slightly different form, these relations can be rewritten as
\[ b_+ a_+ = \alpha\, a_+ b_+, \qquad b_- a_+ = \alpha^{-1}\, a_+ b_-, \qquad b_+ a_- = \alpha^{-1}\, a_- b_+, \qquad b_- a_- = \alpha\, a_- b_-, \]
\[ [b_+, b_-] = 0, \qquad [a_+, a_-] = \frac{1}{\sqrt{\omega}}\,(b_+^2 - b_-^2), \tag{II.95} \]
where we denoted
\[ \alpha = \frac{1 + \sqrt{\omega}}{1 - \sqrt{\omega}}. \tag{II.96} \]
Let us proceed to computing the left ordered representation. First, note that $[b_-, b_+] = 0$, and these operators act last. Hence, no commutations are necessary in computing the composition
\[ b_\pm\,\bigl[\!\bigl[\, f(\overset{1}{a_-}, \overset{2}{a_+}, \overset{3}{b_-}, \overset{4}{b_+})\,\bigr]\!\bigr], \]
and consequently, we obtain
\[ l_{b_+} = y_4, \tag{II.97} \]
\[ l_{b_-} = y_3. \tag{II.98} \]
Furthermore, we have
\[ a_+\,\bigl[\!\bigl[\, f(\overset{1}{a_-}, \overset{2}{a_+}, \overset{3}{b_-}, \overset{4}{b_+})\,\bigr]\!\bigr] = \overset{5}{a_+}\, f(\overset{1}{a_-}, \overset{2}{a_+}, \overset{3}{b_-}, \overset{4}{b_+}) = \overset{2}{a_+}\, f\bigl(\overset{1}{a_-}, \overset{2}{a_+}, \alpha\,\overset{3}{b_-}, \alpha^{-1}\,\overset{4}{b_+}\bigr) \tag{II.99} \]
by virtue of (II.95) and the permutation formulas from Chapter I. We arrive at the conclusion that
\[ l_{a_+} = y_2\, \Pi^{\alpha}_{y_3}\, \Pi^{1/\alpha}_{y_4}, \tag{II.100} \]
where
\[ \bigl(\Pi^{\alpha}_{z} f\bigr)(z) = f(\alpha z), \tag{II.101} \]
that is, $\Pi^{\alpha}_{z}$ is the dilation with respect to the variable $z$.
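The mechanism behind (II.100) is the familiar interplay between dilation and multiplication: if $a$ is multiplication by $z$ and $\Pi^{\alpha}_{z}$ is the dilation (II.101), then $\Pi^{\alpha}_{z} \circ a = \alpha\,(a \circ \Pi^{\alpha}_{z})$, the function-space analogue of the relation $b_+ a_+ = \alpha\, a_+ b_+$. A minimal sympy sketch (an illustration, not from the book):

```python
import sympy as sp

z, alpha = sp.symbols('z alpha')
f = sp.Function('f')

def Pi(expr):
    # dilation operator: (Pi f)(z) = f(alpha*z), cf. (II.101)
    return expr.subs(z, alpha*z)

def mult(expr):
    # multiplication operator a: f(z) -> z*f(z)
    return z*expr

u = f(z)
lhs = Pi(mult(u))          # (Pi o a) f = alpha*z*f(alpha*z)
rhs = alpha*mult(Pi(u))    # alpha * (a o Pi) f
assert sp.simplify(lhs - rhs) == 0
print("Pi o mult = alpha * (mult o Pi)")
```

This is exactly the commutation that produces the scalar factors $\alpha^{k_3}$ and $\alpha^{-k_4}$ when $a_+$ is moved past the powers of $b_-$ and $b_+$.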
Similar but somewhat more complicated computations yield the following formula for the representation operator $l_{a_-}$. We have
\[ l_{a_-} = \Bigl[\, y_1 + \frac{1}{(\alpha - 1)^2}\,\frac{y_3^2}{y_2}\,\bigl(\Pi^{\alpha^2}_{y_2} - 1\bigr) + \frac{\alpha^2}{(\alpha - 1)^2}\,\frac{y_4^2}{y_2}\,\bigl(\Pi^{1/\alpha^2}_{y_2} - 1\bigr) \Bigr]\, \Pi^{\alpha}_{y_4}\, \Pi^{1/\alpha}_{y_3}. \tag{II.102} \]
The constructed operators are readily verified to satisfy the relations (II.95). For example, let us check the commutator $[l_{a_+}, l_{a_-}]$. Substituting (II.100) and (II.102), we note that the dilations $\Pi^{\alpha}_{y_3}\Pi^{1/\alpha}_{y_4}$ occurring in $l_{a_+}$ cancel the dilations $\Pi^{\alpha}_{y_4}\Pi^{1/\alpha}_{y_3}$ occurring in $l_{a_-}$ up to the numerical factors they produce on $y_3^2$ and $y_4^2$; performing elementary transformations, we find that
\[ l_{a_+}\, l_{a_-} - l_{a_-}\, l_{a_+} = \frac{1}{\sqrt{\omega}}\,\bigl(y_4^2 - y_3^2\bigr) = \frac{1}{\sqrt{\omega}}\,\bigl(l_{b_+}^2 - l_{b_-}^2\bigr), \]
in accordance with (II.95).
The other relations in (II.95) can be checked in a similar way.

Now let us proceed to the general scheme. Suppose that we are given an algebra $A$ determined by generators $A_1, \dots, A_n$ and relations (II.35) involving certain parameters $\lambda$. We suppose that there are sufficiently many such relations, so that any element $T \in A$ can be put into the normal form
\[ T = f(\overset{1}{A_1}, \dots, \overset{n}{A_n}). \]
How can we study the Poincaré–Birkhoff–Witt property in the algebra $A$? The usual technique is as follows. First, one considers triple products (cf. Examples II.2 and II.3) and uses permutations performed in various orders to infer the conditions on the parameters under which the PBW property holds up to degree 1, 2, or 3. This procedure is considerably simpler than computing the ordered representation, and it permits one to eliminate from consideration all parameter values for which the PBW property fails in low orders. The equations obtained at this stage are the Yang–Baxter equations. Then one assumes that the parameters take only the allowed values and computes the ordered representation (if possible). Finally, the operators of the ordered representation are substituted into the original commutation relations, and this makes it clear whether the PBW property holds in the algebra $A$.
Hence, the situation is as follows. The Yang—Baxter equation is in general only a necessary condition for the PBW property to hold up to a certain low order. The ordered representation, though often hard to obtain, always gives a precise answer (that is, a necessary and sufficient condition for the PBW property to be valid).
6 Representations of Lie Groups and Functions of Their Generators

As is clear from the preceding considerations, noncommutative analysis provides the following method of dealing with commutation relations:

— Given a set of commutation relations, find some reasonable class of representations and an appropriate symbol class such that the operators with admissible symbols
\[ f(A) = f(\overset{1}{A_1}, \dots, \overset{n}{A_n}) \]
are well defined and the composition formula
\[ \bigl[\, f(\overset{1}{A_1}, \dots, \overset{n}{A_n})\, g(\overset{1}{A_1}, \dots, \overset{n}{A_n})\,\bigr] = \bigl[\, f(\overset{1}{L_1}, \dots, \overset{n}{L_n})\, g\,\bigr](\overset{1}{A_1}, \dots, \overset{n}{A_n}) \]
is valid;

— furthermore, try to produce a reasonably explicit expression for the left regular representation operators $L_1, \dots, L_n$.

Then it becomes possible to use the above calculus for investigating such problems as the Cauchy problem and the inversion problem for the operator $f(A)$. Here we present a partial realization of this programme for Lie commutation relations, that is, the commutation relations which define finite-dimensional Lie algebras. Let us now proceed to precise formulations.
6.1 Conditions on the Representation

We intend to study functions of the operator tuple $A = (\overset{1}{A_1}, \dots, \overset{n}{A_n})$ whose components satisfy Lie commutation relations. What does this mean exactly? It means that the operators $A_1, \dots, A_n$ act in some linear space $E$ and that
\[ [A_j, A_k]\,u = -i \sum_{l=1}^{n} c^l_{jk}\, A_l\, u \]
for any $u \in E$, where the $c^l_{jk}$ are the structural constants of some Lie algebra $\mathfrak g$ in some basis $a_1, \dots, a_n \in \mathfrak g$. Using the notions and notation of representation theory, one may
write
\[ A_j = -i\, r(a_j), \qquad j = 1, \dots, n, \]
where
\[ r : \mathfrak g \to \operatorname{End}(E) \]
is a representation of $\mathfrak g$ in the algebra of endomorphisms of $E$ (see Section 1). The reasons for introducing the factor $(-i)$ will soon become clear. Thus, we are able to develop the calculus for polynomial symbols, or in terms of formal series. This might be interesting in itself, but it does not cover even the simplest applications to the theory of differential equations (see Chapter III below). Thus we have to impose some further conditions which make it possible to consider wider symbol classes. First, we require that $\mathfrak g$ be a real Lie algebra (that is, $\mathfrak g$ is an $n$-dimensional vector space over $\mathbb{R}$; in particular, $c^l_{jk} \in \mathbb{R}$), and that $\mathfrak g$ be a nilpotent Lie algebra, that is, any commutator
\[ [A_{j_1}, \dots, [A_{j_{s-1}}, A_{j_s}] \dots ] = 0 \quad \text{once } s > N, \]
where $N < \infty$ is the nilpotency rank of $\mathfrak g$. Violation of these requirements would lead to considering analytic symbols with certain restrictions on the supports of their Fourier transforms; these topics are beyond the framework of this book. Since the definition of $f(A)$ for nonpolynomial symbols uses some notion of convergence (see Chapter IV below), our model should necessarily involve conditions of a functional-analytic nature. We introduce the simplest version of conditions of this sort:

— $E$ is a dense linear subset in some Hilbert space $H$;

— the following Nelson condition is satisfied: the operators $A_j$, $j = 1, \dots, n$, and
\[ \Delta = 1 + \sum_{j=1}^{n} A_j^2 \]
are essentially self-adjoint on $E$.

These conditions are very important. Indeed, Nelson's theorem (see Nelson [149]) guarantees that under these conditions there exists a unitary representation
\[ T : G \to \operatorname{Aut}(H) \]
such that $r$ is the derived representation of $T$; more precisely, $r = T'|_E$.
The representation T plays a crucial role in what follows.
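As a minimal illustration of these conditions (a sketch, not an example from the book), take the Heisenberg algebra $[a_1, a_2] = a_3$ with $a_3$ central, represented on smooth functions by $r(a_1) = d/dx$, $r(a_2) = ix$, $r(a_3) = i\cdot\mathrm{id}$, so that $A_1 = -i\, d/dx$, $A_2 = x$, $A_3 = 1$. Here one may take $E$ to be the Schwartz space, on which $1 + A_1^2 + A_2^2 + A_3^2$ (the harmonic oscillator shifted by a constant) is essentially self-adjoint, so Nelson's condition holds. The commutation relation with the factor $-i$ can be checked symbolically:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)

A1 = lambda v: -sp.I*sp.diff(v, x)   # A1 = -i d/dx
A2 = lambda v: x*v                   # A2 = multiplication by x
A3 = lambda v: v                     # A3 = identity (central element)

# the Heisenberg relation [a1, a2] = a3 becomes [A1, A2]u = -i*A3 u
comm = sp.simplify(A1(A2(u)) - A2(A1(u)))
assert sp.simplify(comm - (-sp.I)*A3(u)) == 0
print("[A1, A2] = -i A3 on smooth functions")
```

The structural constant here is $c^3_{12} = 1$, and the computation reproduces $[A_1, A_2]u = -i\,c^3_{12}\,A_3 u$ exactly.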
6.2 Hilbert Scales

We have stated the conditions we impose on the operators $A_j$. Our next task is to describe the function space in which the operators $f(\overset{1}{A_1}, \dots, \overset{n}{A_n})$ act. It is not convenient to construct the calculus in the space $H$ itself, since unbounded symbols (which are of practical interest) give unbounded operators. To avoid this difficulty, we use special Hilbert scales associated with $H$, whose construction uses the operators $A_1, \dots, A_n$ (see the propositions given below). We postpone using the nilpotency requirement as far as possible; for now we only use the much weaker requirement that $G$ is unimodular, that is, its Haar measure $dg$ is biinvariant (see Appendix).
Proposition II.1 Let $\Delta$ be a strongly positive essentially self-adjoint operator, defined on a dense linear subset $E$ of a Hilbert space $H$, invariant with respect to $\Delta$ (that is, $\Delta E \subset E$). Consider the following family of norms on $E$:
\[ \|u\|_k = \bigl(u,\ \bar{\Delta}^k u\bigr)^{1/2}, \qquad k \in \mathbb{Z},\ u \in E \]
(here $\bar\Delta$ is the closure of $\Delta$; in the last formula one may take $\Delta$ instead of $\bar\Delta$ for $k \ge 0$). We denote by $H^k$ the completion of $E$ with respect to the norm $\|\cdot\|_k$. The following statements are valid:

(1) $\{H^k\}_{k \in \mathbb{Z}}$ is a Hilbert scale, $H^0 = H$.

(2) For $k > 0$, $H^k$ may be naturally identified with the domain of the operator $\bar{\Delta}^{k/2}$.

(3) $\bar{\Delta}^{1/2} : H^k \to H^{k-1}$ is an isomorphism for $k > 0$ and extends to such an isomorphism for $k \le 0$.

(4) The bilinear form $(u, v)$ may be uniquely extended by continuity to the pairing $H^k \times H^{-k} \to \mathbb{C}$, which induces an isomorphism $H^{-k} \simeq (H^k)^*$.

Proof. Since $\bar\Delta$ is strongly positive and self-adjoint, all powers $\bar{\Delta}^s$ are well defined (see Dunford, Schwartz [42]). Set $W^k = D(\bar{\Delta}^{k/2})$ for $k \ge 0$, $\|u\|_{W^k} = \|u\|_k$, and $W^k = (W^{-k})^*$ for $k < 0$, with the identification $W^0 = (W^0)^*$ obtained via the scalar product $(\cdot,\cdot)_H$ in the space $H$. Clearly, properties (1)–(4) are satisfied for the spaces $W^k$, and we must only prove that $H^k = W^k$ for $k > 0$ (the case $k < 0$ then follows automatically). It suffices to show that $E$ is dense in $W^k$ in the topology of $W^k$. Assume the opposite is true. Then there exists a nonzero vector $\xi \in W^k$ such that
\[ (\xi, u)_k = 0 \quad \text{for all } u \in E. \]
We have
\[ (\xi, u)_k = (\xi, \bar{\Delta}^k u) = (\bar{\Delta}^k \xi, u) = 0 \]
for any $u \in E$. Since $E$ is dense in $H$, $\bar{\Delta}^k \xi = 0$, and thus $\xi = 0$, because $\operatorname{Ker} \bar{\Delta}^k$ is trivial. The proposition is proved. $\Box$
Proposition II.2 Let $G$ be a Lie group with Lie algebra $\mathfrak g$, and let
\[ r : \mathfrak g \to \operatorname{End}(E) \]
be a representation of $\mathfrak g$ in a dense linear subset $E$ of a Hilbert space $H$ satisfying Nelson's conditions. Let $\mathfrak g_1 \subset \mathfrak g$ be an ideal, let $a_1, \dots, a_k$ be a (linear) basis of $\mathfrak g_1$, and let $A_j = -i\,r(a_j)$, $j = 1, \dots, k$. Denote by
\[ T : G \to \operatorname{Aut}(H) \]
the unitary representation of $G$ in $H$ associated to $r$ by Nelson's theorem, and set
\[ \Delta u = \sum_{j=1}^{k} A_j^2\, u + u, \qquad u \in E. \]
The following statements are valid:

(1) $\Delta$ is a strongly positive essentially self-adjoint operator on $E$; the corresponding Hilbert scale does not depend on the choice of the basis $(a_1, \dots, a_k)$ up to norm equivalence.

(2) The representation $T$ extends to a bounded strongly continuous representation in the space $H^s$ for any $s$; there is an estimate
\[ \|T(g)\, u\|_s \le C_s\,\|\operatorname{Ad} g\|^{|s|}_{\mathfrak g_1}\,\|u\|_s, \]
where $C_s$ is a constant independent of $u$, and $\|\operatorname{Ad} g\|_{\mathfrak g_1}$ is the norm of the restriction of the operator $\operatorname{Ad} g$ to the invariant subspace $\mathfrak g_1$.

Proof. 1) Since $\mathfrak g_1$ is a subalgebra, it generates the corresponding subgroup $G_1 \subset G$. The operators $A_1, \dots, A_k$ are generators of the unitary representation $T|_{G_1}$. By Nelson's theorem, the operator $\Delta$ is essentially self-adjoint. Positivity of $\Delta$ is clear. The fact that the corresponding Hilbert scale does not depend on the choice of the basis up to norm equivalence is a consequence of the following lemma.

Lemma II.5 Let $\{H^k\}$ be the Hilbert scale described above. For any integer $s$ the norm equivalence
\[ \|u\|_{s+1} \asymp \|u\|_s + \sum_{j=1}^{k} \|A_j u\|_s \]
holds for $u \in E$.

We postpone the proof of this lemma until the end of the proof of Proposition II.2.

2) It suffices to prove the estimate for positive $s$ and then use conjugation. We proceed by induction on $s$. The estimate is valid for $s = 0$, since $T$ is a unitary representation and therefore $\|T(g)\| = 1$ for all $g \in G$. To perform the induction step $s \to s + 1$, we use Lemma II.5.
By this lemma, it suffices to estimate $\|A_j\, T(g)\, u\|_s$. We have
\[ A_j\, T(g) = T(g)\,\bigl(T(g)^{-1} A_j\, T(g)\bigr) = -i\, T(g)\, r\bigl(\operatorname{Ad} g(a_j)\bigr), \]
thus
\[ \|A_j\, T(g)\, u\|_s = \|T(g)\, r(\operatorname{Ad} g(a_j))\, u\|_s \le \|T(g)\|_s \cdot \|\operatorname{Ad} g\|_{\mathfrak g_1} \cdot \|u\|_{s+1}. \]
The latter inequality together with Lemma II.5 immediately yields the desired estimate for the new value of $s$. The proposition is proved. $\Box$
Proof of Lemma II.5. Case 1: $s \ge 0$. For $s = 0$ we have
\[ \|u\|_1^2 = \bigl((1 + A_1^2 + \dots + A_k^2)\,u,\ u\bigr) = \|u\|^2 + \sum_{j=1}^{k} \|A_j u\|^2, \]
and the statement of the lemma holds. For $s > 0$ we use the identity
\[ [A_j, \Delta] = \Bigl[A_j,\ \sum_{k} A_k^2\Bigr] = -i \sum_{k,l} c^l_{jk}\,(A_l A_k + A_k A_l). \]
Using the latter relation, we obtain
\[ \|u\|_{s+1}^2 = \Bigl(\bigl(1 + \sum_j A_j^2\bigr)\,\Delta^s u,\ u\Bigr) = \|u\|_s^2 + \sum_{j=1}^{k} (A_j \Delta^s u,\ A_j u) \]
\[ = \|u\|_s^2 + \sum_{j=1}^{k} \|A_j u\|_s^2 + \bigl(P_{2s}(A_1, \dots, A_n)\,u,\ u\bigr) + \sum_{j,k,l} c^l_{jk}\,\bigl((A_l A_k + A_k A_l)\, A_j\, \Delta^{s-1} u,\ u\bigr), \]
where $P_{2s}(A_1, \dots, A_n)$ is an unordered polynomial of degree $\le 2s$. The last term equals zero due to antisymmetry, so that
\[ \|u\|_{s+1}^2 = \|u\|_s^2 + \sum_{j=1}^{k} \|A_j u\|_s^2 + \bigl(P_{2s}(A_1, \dots, A_n)\,u,\ u\bigr). \]
By induction on $s$ we prove that
\[ \|u\|_{k+1} \asymp \|u\|_k + \sum_{j} \|A_j u\|_k \qquad \text{for all } k \in \{0, \dots, s-1\} \]
and
\[ \bigl|\bigl(P_{2s}(A_1, \dots, A_n)\,u,\ u\bigr)\bigr| \le C\,\|u\|_s^2. \]
First of all, the first relation implies the second, since $(P_{2s}(A_1, \dots, A_n)u, u)$ is a sum of terms of the form $(A_{i_1} \cdots A_{i_{s_1}} u,\ A_{j_1} \cdots A_{j_{s_2}} u)$, where $s_1, s_2 \le s$. Thus, it suffices to verify the first relation. The statement is valid for $s = 1$ (see above). The induction step goes as follows. We have
\[ \|u\|_{s+1}^2 \le C\,\Bigl(\|u\|_s^2 + \sum_{j=1}^{k} \|A_j u\|_s^2\Bigr), \qquad \|u\|_{s+1}^2 \ge \sum_{j=1}^{k} \|A_j u\|_s^2 - C_1\,\|u\|_s^2. \]
Moreover, one has
\[ \|u\|_{s+1} \ge \|u\|_s. \]
Multiplying this by $(C_1 + 1)$ and adding the result to the penultimate inequality, we obtain
\[ (C_1 + 2)\,\|u\|_{s+1}^2 \ge \|u\|_s^2 + \sum_{j=1}^{k} \|A_j u\|_s^2, \]
as desired.

Case 2: $s < 0$. The operator $\bar{\Delta}^{1/2} : H^s \to H^{s-1}$ is an isomorphism for any $s$. The proof for $s < 0$ now goes quite similarly to that for $s \ge 0$ if we use the relations
\[ [A_j, \bar{\Delta}^{-1}] = -\bar{\Delta}^{-1}\,[A_j, \bar{\Delta}]\,\bar{\Delta}^{-1}; \]
instead of the polynomial $P_{2s}(A_1, \dots, A_n)$ we get a polynomial in $A_1, \dots, A_n$ and $\bar{\Delta}^{-1}$ of weight $\le 2s$, where the weight of $A_j$ is $1$ and the weight of $\bar{\Delta}^{-1}$ is $-2$. $\Box$

Thus, we can associate a scale $\{H^s\}$ to any ideal $\mathfrak g_1 \subset \mathfrak g$ (see Proposition II.2 above). Of course, $\mathfrak g_1 = \mathfrak g$ would be the simplest choice.
6.3 Symbol Spaces

Let us now define admissible symbols. For this purpose, we introduce a special function space on the Lie group $G$.

Definition II.8 Let $G$ be a Lie group with Lie algebra $\mathfrak g$, and let $\mathfrak g_1 \subset \mathfrak g$ be an ideal. We denote by $\mathcal{P}_+(G, \mathfrak g_1)$ the space of distributions $\varphi$ on $G$ representable both in the form
\[ \varphi = \sum_{|\alpha| \le m,\ \alpha_{k+1} = \dots = \alpha_n = 0} l^{\alpha}(\mu_\alpha) \]
and in the form
\[ \varphi = \sum_{|\alpha| \le m,\ \alpha_{k+1} = \dots = \alpha_n = 0} r^{\alpha}(\tilde\mu_\alpha), \]
where $l = (l_1, \dots, l_n)$ is a tuple of right-invariant vector fields on $G$ forming a basis in $\mathfrak g$, with $l_1, \dots, l_k$ being a basis in $\mathfrak g_1$; $m = m(\varphi)$; the $\mu_\alpha$ are measures on $G$ summable with the weight $\|\operatorname{Ad} g\|^s_{\mathfrak g_1}$ for any $s > 0$; $r = (r_1, \dots, r_n)$ is a basis in the space of left-invariant vector fields on $G$, with $r_1, \dots, r_k$ being a basis in $\mathfrak g_1$; and the $\tilde\mu_\alpha$ satisfy the same conditions as the $\mu_\alpha$.

If $\varphi \in \mathcal{P}_+(G, \mathfrak g_1)$, then the operator
\[ T(\varphi) = \int_G \varphi(g)\, T(g)\, dg \]
may be defined as follows: for
\[ \varphi = \sum_{|\alpha| \le m,\ \alpha_{k+1} = \dots = \alpha_n = 0} l^{\alpha}(\mu_\alpha) \]
we set
\[ T(\varphi) \overset{\text{def}}{=} \sum_{|\alpha| \le m,\ \alpha_{k+1} = \dots = \alpha_n = 0} (-1)^{|\alpha|} \int_G \mu_\alpha(g)\, \bigl[l^{\alpha}\, T(g)\bigr]\, dg. \tag{II.103} \]
Proposition II.3 The integral (II.103) converges strongly in the scale $\{H^s\}$ and defines a continuous operator
\[ T(\varphi) : H^s \to H^{s - m(\varphi)} \]
for any $s \in \mathbb{Z}$.

Proof. We have
\[ l^{\alpha}\, T(g)\, u = (iA)^{\alpha}\, T(g)\, u = T(g)\,\bigl(i \operatorname{Ad} g(A)\bigr)^{\alpha} u. \]
Thus,
\[ \|l^{\alpha}\, T(g)\, u\|_{s - m(\varphi)} \le C\,\|T(g)\|_{s - m(\varphi)}\,\|\operatorname{Ad} g\|^{|\alpha|}_{\mathfrak g_1}\,\|u\|_s \le C_1\,\|\operatorname{Ad} g\|^{s_1}_{\mathfrak g_1}\,\|u\|_s \]
for a suitable exponent $s_1$. Since the measure $\mu_\alpha$ is integrable with the weight $\|\operatorname{Ad} g\|^{s_1}_{\mathfrak g_1}$, the integral applied to $u$ converges in $H^{s - m(\varphi)}$ and defines a bounded operator. The proposition is proved. $\Box$

The following two assertions describe the product of operators of the form $T(\varphi)$.
Proposition II.4 The space $\mathcal{P}_+(G, \mathfrak g_1)$ is an algebra with respect to the group mollification
\[ (\varphi_1 * \varphi_2)(g) = \int_G \varphi_1(h)\, \varphi_2(h^{-1} g)\, dh. \]
Proposition II.5 Let $\varphi_1, \varphi_2 \in \mathcal{P}_+(G, \mathfrak g_1)$. Then $T(\varphi_1)\,T(\varphi_2) = T(\varphi_1 * \varphi_2)$.

Proof of Proposition II.4. Denote
\[ \zeta(g) = \|\operatorname{Ad} g\|_{\mathfrak g_1}; \]
since $\operatorname{Ad}(g_1 g_2) = \operatorname{Ad} g_1 \circ \operatorname{Ad} g_2$, we have $\zeta(gh) \le \zeta(g)\,\zeta(h)$. First of all, note that it suffices to prove the statement for the case where both $\varphi_1$ and $\varphi_2$ are measures on $G$. Indeed, if $\varphi_1 = l^{\alpha}\mu$, $\varphi_2 = r^{\beta}\tilde\mu$, then $\varphi_1 * \varphi_2 = l^{\alpha} r^{\beta}(\mu * \tilde\mu)$. Next, let us verify that $\varphi_1 * \varphi_2$ is a well-defined measure on $G$. Let $f \in C_0(G)$ be a test function. We have
\[ \langle \varphi_1 * \varphi_2,\ f \rangle = \int_{G \times G} \varphi_1(h)\, \varphi_2(h^{-1} g)\, f(g)\, dh\, dg = \int_{G \times G} \varphi_1(h)\, \varphi_2(k)\, f(hk)\, dh\, dk, \]
since the Haar measure is invariant. Since $f(hk)$ is bounded and $\varphi_1, \varphi_2$ are summable on $G$, the latter integral converges. Let us now prove that $\varphi_1 * \varphi_2$ is absolutely integrable with the weight $\zeta(g)^s$ for any $s$. We have
\[ \langle |\varphi_1 * \varphi_2|,\ \zeta^s \rangle = \int_{G \times G} |\varphi_1(h)\,\varphi_2(k)|\ \zeta^s(hk)\, dh\, dk \le \int_{G \times G} |\varphi_1(h)|\,|\varphi_2(k)|\ \zeta^s(h)\,\zeta^s(k)\, dh\, dk = \Bigl(\int_G |\varphi_1(h)|\,\zeta^s(h)\, dh\Bigr)\Bigl(\int_G |\varphi_2(k)|\,\zeta^s(k)\, dk\Bigr). \]
Both integrals on the last line converge. The proposition is therefore proved. $\Box$
Proof of Proposition II.5. This reduces to standard computations:
\[ T(\varphi_1)\, T(\varphi_2) = \int_G \varphi_1(g)\, T(g)\, dg \int_G \varphi_2(h)\, T(h)\, dh = \int_{G \times G} \varphi_1(g)\, \varphi_2(g^{-1} k)\, T(k)\, dg\, dk = \int_G (\varphi_1 * \varphi_2)(k)\, T(k)\, dk. \qquad \Box \]
The above propositions mean that $\mathcal{P}_+(G, \mathfrak g_1)$ is a group algebra, and the mapping $\varphi \mapsto T(\varphi)$ is a group algebra representation.

We use this representation so as to define functions $f(A) = f(\overset{1}{A_1}, \dots, \overset{n}{A_n})$ for a certain class of symbols. Recall that the mapping
\[ \exp_2 : \mathfrak g \to G, \qquad (x_1, \dots, x_n) \mapsto \exp(x_n a_n) \cdot \dots \cdot \exp(x_1 a_1) \]
(coordinates of the second kind) is a local diffeomorphism at least in a neighborhood of the point $0 \in \mathfrak g$. Suppose that a function $f(x)$, $x \in \mathfrak g^*$, is given such that the restriction of $\exp_2$ to some neighborhood of the support of its Fourier transform
\[ \tilde f(p) = \frac{1}{(2\pi)^{n/2}} \int_{\mathfrak g^*} e^{-ipx}\, f(x)\, dx, \qquad p \in \mathfrak g, \]
is a diffeomorphism. Then we may define the group Fourier transform
\[ \hat f(g) = \tilde f\bigl(\exp_2^{-1}(g)\bigr)\, \bigl|\det(\exp_{2*})\bigr|^{-1}. \]
Denote by $\tilde{\mathcal{P}}_+(G, \mathfrak g_1)$ the space of functions $f(x)$, $x \in \mathfrak g^*$, such that the group Fourier transform $\hat f(g)$ is defined and belongs to $\mathcal{P}_+(G, \mathfrak g_1)$.
Definition II.9 Let $f \in \tilde{\mathcal{P}}_+(G, \mathfrak g_1)$. Then
\[ f(A) = f(\overset{1}{A_1}, \dots, \overset{n}{A_n}) \overset{\text{def}}{=} \frac{1}{(2\pi)^{n/2}}\, T(\hat f). \tag{II.104} \]
The above considerations imply that the following theorem is true:

Theorem II.13 The operator $f(A)$ is well defined and bounded in the scale $\{H^s\}$; that is, there exists an $m \in \mathbb{Z}$ such that
\[ f(A) : H^s \to H^{s-m} \]
is a continuous operator for any $s \in \mathbb{Z}$.
Remark II.7 With the help of the above definitions one can easily rewrite (II.104) in the form
\[ f(A) = \frac{1}{(2\pi)^{n/2}} \int \tilde f(p)\ e^{i p_n A_n} \cdots e^{i p_1 A_1}\, dp. \]
(In the computations one has to use the fact that $T(\exp(p_j a_j)) = \exp(i p_j A_j)$, which is valid due to the commutative diagram whose lower arrow is defined via the solution of the Cauchy problem
\[ -i\,\frac{du}{dt} = A u, \qquad u|_{t=0} = u_0 \in E \]
with a self-adjoint operator $A$.)

Note that we did not use the nilpotency condition thus far. We now make use of this condition in order to achieve further results. First of all, in the nilpotent case the mapping $\exp_2$ is a global diffeomorphism with $|\det \exp_{2*}| \equiv 1$, provided that the matrices of the adjoint representation are triangular in the basis $a_1, \dots, a_n$. It follows that the group Fourier transform is defined for any tempered function $f$, and the group mollification induces a bilinear operation $\tilde{*}$ in $\tilde{\mathcal{P}}_+(G, \mathfrak g_1)$:
\[ \widehat{f\,\tilde{*}\,g} \overset{\text{def}}{=} \hat f * \hat g, \qquad f, g \in \tilde{\mathcal{P}}_+(G, \mathfrak g_1). \]
Clearly, we have $(f\,\tilde{*}\,g)(A) = f(A)\,g(A)$. We intend to show that the "twisted product" $(f\,\tilde{*}\,g)(x)$ may be interpreted as the result of the action of the operator $f(l) = f(\overset{1}{l_1}, \dots, \overset{n}{l_n})$ on $g$, where $l_1, \dots, l_n$ are the operators of the left ordered representation, whose explicit expression will be given below. We have
\[ [\hat f * \hat g](h) = \frac{1}{(2\pi)^{n/2}} \int_G \hat f(k)\, \hat g(k^{-1} h)\, dk = \frac{1}{(2\pi)^{n/2}} \int_G \hat f(k)\, \bigl[\hat L_k\, \hat g\bigr](h)\, dk = \Bigl(\frac{1}{(2\pi)^{n/2}} \int_G \hat f(k)\, \hat L_k\, dk\Bigr)\, \hat g(h), \]
where the operators $\hat L_k$ act in $\mathcal{P}_+(G, \mathfrak g_1)$ according to the formula
\[ [\hat L_k\, \varphi](h) = \varphi(k^{-1} h), \qquad k, h \in G. \]
The operators Lk form a representation of the group G,
L t = Lkt
Lk
The corresponding representation of g is given by the right-invariant vector fields on G, which have been denoted by Ti , j = 1, n. Thus, we have li
= L(ai).
If k = exp2 (p), then obviously we obtain Lk = L(eXP2(P)) = eXP(Pnin) eXP(Plï1)
Therefore, [ f* ] (k)
=
(27rir/2
f
fv (k) Lk dic)
G
1
= f
• • • , i- n)k•
Denote by P the group Fourier transformation, tgs' = Pg, and by P -1 the inverse transformation,
P = (exp l ) * F, where F is the usual Fourier transformation. We have
, nin ) g)
»74 = P-1 [1* k]= P-1 [P-1 f
• • • ,ïn)Pig = f (P— liiP • •
1
f(11, ..., 1 n ) g
f (1) g ,
where
=
= F-1 1°./ F,
0
and 1i is the expression for Ti in the coordinates of second genus. It is well known that 0
in the nilpotent case /i is a vector field with polynomial coefficients, so the operators /i of the left ordered representation are differential operators with linear coefficients.
6. Representations of Lie Groups and Functions of Their Generators
167
6.4 Symbol Classes: More Suitable for Asymptotic Problems - (G, go, described in the preceding subsection, seems rather The symbol class 73+ exotic from the practical viewpoint. It is well known that in applications to (pseudo) differential equations one mostly deals with symbols homogeneous with respect to a group of variables. Here we consider classical symbols.
Definition 11.10 By Sm = Sm (G,
go denote the space of symbols f (x)
E 15+ (G,
go
asymptotically homogeneous of degree < m with respect to (xi, . .. , xk). In other words, each symbol f (x) E Sm possesses an asymptotic expansion f (x) = fm(x) ± fm-1(x) ± • • •
where fi (x) E 75+ (G, go is a homogeneous function of degree j with respect to (xi, . .. , xk), that is, fj (A.xi , . . . ,
X-Xk, Xk+1, • • • , Xic) =
X i fj (Xi
, ... , Xk, Xk+i, . ..
,x)
for A. > 1, 'xi I + • • • ± lxk I large enough, and the remainder decays rapidly as 'xi I + • • • + ixk I —>. 00 . The function fm (X) will be called the principal symbol of f (A). From now on we assume that Ak+1, ... , A n commute with one another.
Theorem 11.14 Let f
E Sm . Then
f (A) : Hs —> 118—m
is a continuous operator for any s E Z.
For the general version of this theorem, see [134], Theorem II.4.F.2. To prove the theorem, we first consider the following Lemma.
Lemma 11.6 Let f
E Sm , g E Snil .
Then the principal symbol of the product
f (A)g (A) is equal to fni (x)g m , (x).
Clearly, w E P+ (G, go belongs to Sr if w Pat , . . . , A.xk, xk-Ei , • • • , xn) possesses an asymptotic expansion of the form
Proof
0%-xi , • • • , kxk, xk-}-1, • • - , xn)
= x r fr (x) + x r —1
fr_i (x) + . . .
as A. —)- oo. The symbol of f (A) g(A) is equal to [f (I) g](x). Let us perform the change of variables xi —>- XXI, Xk —). XXk, Xk-F1 —)- Xk+1, xn —> xn
168
II. Method of Ordered Representation
in the latter expression. Then each operator li takes the form
/i = A. (xi +
)
.- is/ Q.-1 )),
where Si (X -1 ) is a differential operator whose coefficients are polynomials in A.-1 . Moreover, Si (A. -1 ) commutes with xi, since the matrices of the adjoint representation are triangular in the basis (ai, . . . , ak) (recall that this basis is assumed to be chosen in a special way). Thus, we obtain the expansion
po g = X1 n +M I fm grn, + A, n1 +M I 'r l Fi + xm+m'-2 F2 ± ... by using the ordinary Taylor expansion of f (X (x ± A 1 8)) in powers of A.-1 • The lemma is proved.
0
Let us now prove the theorem. Clearly, it suffices to consider the case m = 0, s = 0; results for the other cases may be obtained via left and right multiplications by appropriate powers of A. Consider the operator (f (A))* (the adjoint operator is taken in H = H° ). It is clear that 1 (f (A)) * = f (;iti, . . . ,
An).
Therefore, we have
f (A)* f (A) = h(A), where
h(x) = f (7 1 , . . . ,ln) f (x). As was done in the proof of the lemma, we can show that the principal symbol of h(A) is equal to 1 f (x) 1 2 . Now let M be a constant such that M > max If(x)1 2 ,
and set
g (x) =.1M—
If (x)1 2 .
Then g (x) E S0 , and
1f1 2 +1g1 2 = M. Thus, we have
f (A)* f (A) + g(A)* g (A) = M + R (A), where R(x) E S-1 , and its principal symbol R_1 (x) is real-valued since R (A) is self-adjoint. 1 Here the bar over f stands for complex conjugation.
6. Representations of Lie Groups and Functions of Their Generators
169
Choose g_1 (x) = —R_1(x)12g(x) and set G(A) = g(A) g_i(A). have f (A)* f (A) ± G (A)* G (A)
= M R (A)
Then we
g (A)* g _1 (A) ± g _1(A)* g (A) = M (A),
where R' (x) E S -2 . Continuing this procedure, we finally obtain f* (A) f (A) + H* (A) H (A) = M R" (A), where R u (x) E s-N for N large enough. The Fourier transform R" (x) is now a measure on G and, so R" (A) is bounded in H° . We obtain
Ilf(A)u11 2
Ilf(A)u 11 2 + 1111(A)u11 2 = m11u 11 2 + (R" A u,u (
)
)
c11 u11 2 ,
for any u E H° . Therefore, f (A) : H° H° is bounded. Application of noncommutative analysis to the Cauchy problem and to the inversion problem with the operator f (A) goes as follows (for detail see Chapter III below). Consider the problem t
at -
F
F( A)B =o, k
(II.105)
B It=0 = or f (A)B = 1. Let us try to solve it by a substitution of the form B = g (A , t) and B = g (A). This leads to the equation for the symbol g(x , t),
or
I
—i Ph + f (L) g = 0,
(II.106)
git=o = go(x) f (L)g = 1,
respectively. The solutions of these equations usually do not belong to the space P+(G, so the problem arises to define operators whose symbols are of this sort and to prove their boundedness in the scale (Hs}.
go,
Definition 11.11 Let f (x) E C"(Rn ). We define the operator f (A) as a strong limit f (A) tf s- lim(p s f)(A), 6-0
where p (x) = p (8 x) , p (x) E C(1[1n), p (0) = 1, provided that the limit exists. The operator f (A) is a bounded operator in the scale {Hk }, and does not depend on the choice of p.
170
II. Method of Ordered Representation
Symbols arising as solutions of (II.105) and (II.106) possess a special structure, intimately related to the geometry of the cotangent bundle T*G. Thorough study of this structure eventually implies boundedness theorems (see [129], [101], and [134]). Since we have no opportunity to discuss these topics, which require from the reader a great amount of knowledge of symplectic geometry and the theory of Fourier integral operators (see, e.g., [142] and references cited therein), we present only the following result.
Theorem 11.15 Let f (x)
E S 1 be real-valued.
Suppose also that go(x) G S ° . Then
the solution B(t) of the Cauchy problem
i_:aB _,_ f(A)B J‘ Bit=o=go(A)
=o,
(II.107)
is bounded in the scale {Hs} for any t; more precisely, we have
Cs(t), where Cs (t)is a locally bounded function. Proof Since f is real, we have,
(f (A)) * = [(f (7 1,
l]t ri))
MA)
= f
(A) + R(A),
where h(x) E S° and therefore R(A) is bounded. The problem (II.107) can be rewritten in the form {_ i aB = KB + LB,
at
Blt=o= Bo, where K = K*, whereas L and Bo are bounded operators. It is well known that the solution of the last problem is a bounded operator. Indeed, d dt
= (Bu, LBu) — (LBu, Bu) <211L11 11 13 4 2
which implies the desired estimate.
Chapter III
Noncommutative Analysis and Differential Equations
1 Preliminaries Let us show how the technique developed in the preceding chapters can be applied to differential equations. We chiefly consider two sorts of problems: (a) Find the inverse of a differential operator P. (b) Find the resolvent operator for a Cauchy problem. Recall that the resolvent operator is defined as follows. Consider the Cauchy problem
I
aiu
where u = u(x, t), x E II', t e IV; P = P (x,t,alax,alat) is a partial differential operator containing t-derivatives of order at most 1 — 1; yi = vi(x) are given functions (the Cauchy data). Then the resolvent operator of this problem is the (unique) vector operator Q(t) = (Q0(t), ... , Q1—(t))
such that the solution is given by the formula u (t )
= E z( t ) v., J =0
for arbitrary initial data yo , ... , We shall illustrate the approach to these problems suggested by noncommutative analysis on an example of problem (a). One expresses the operator P as a function 1
m
of a Feynman tuple A1, ... , Am . It is assumed that the left ordered representation 1 m /1, . .. , / m of this tuple exists. If so, one can seek for the inverse of P (problem (a))
in the form
I
m
P -1 = Q(Ai, • • • , Am).
172
III. Noncommutative Analysis and Differential Equations
By definition, we should have 1
=
I'
1
in
in
U(14 1, • • • , AM )J] 1[Q(Ai, • • • , Am)1 = 1
(to be completely rigorous, we now speak of the right inverse, which is often not the same as the left inverse in the context of differential operators). We obtain an equation for the unknown symbol Q(yi , ... , y,n ) by passing to the left ordered representation, as is described in Chapter II: 1 f (li, • • • 9 17 m)(Q(Yi, • • • , y m )) =1.
It remains to solve the latter equation, and one obtains the answer. (Problem (b) can be treated in a similar way.)

The most evident (though useless) representation is given by the choice m = 1, A = P (we omit the subscript 1). We have

$$P = f(A), \quad f(y) = y, \quad l = y,^1$$

and obtain the equation

$$y\,Q(y) = 1$$

for the symbol Q(y) of the inverse. The solution

$$Q(y) = 1/y$$

is obvious, and we obtain P⁻¹ = P⁻¹, which gives no information at all.

Another representation (used very often) follows merely from the fact that P is a differential operator,

$$P = f\Bigl(\overset{2}{x}, -i\overset{1}{\frac{\partial}{\partial x}}\Bigr),$$

so it is automatically a function of the tuple

$$-i\overset{1}{\frac{\partial}{\partial x_1}}, \ldots, -i\overset{1}{\frac{\partial}{\partial x_n}}, \overset{2}{x_1}, \ldots, \overset{2}{x_n}.$$

Let y₁, …, y_n, y_{n+1}, …, y_{2n} be the corresponding arguments of symbols. Then the left ordered representation has the form

$$l_1 = y_1 - i\frac{\partial}{\partial y_{n+1}}, \quad \ldots, \quad l_n = y_n - i\frac{\partial}{\partial y_{2n}}, \qquad l_{n+1} = y_{n+1}, \quad \ldots, \quad l_{2n} = y_{2n},$$

¹ Note that the left ordered representation for a tuple consisting of a single operator A always has the form l = y.
and so the equation for the symbol Q of the inverse of P takes the form

$$f\Bigl(y_1 - i\frac{\partial}{\partial y_{n+1}}, \ldots, y_n - i\frac{\partial}{\partial y_{2n}}, y_{n+1}, \ldots, y_{2n}\Bigr)\, Q(y) = 1.$$
In particular problems, according to their specific structure, it is often useful to choose the operators A₁, …, A_m in some other way. In any case, the main idea is that an equation in the algebra of operators is reduced to an equation in the symbol space. Clearly, the choice of A₁, …, A_m is a matter of the investigator's skill and experience rather than of any standard procedure; indeed, this choice should meet two requirements: first, the operator P should be a function of \(\overset{1}{A_1}, \ldots, \overset{m}{A_m}\), and, second, the tuple A₁, …, A_m should satisfy a sufficiently rich system of commutation relations, so that functions of \(\overset{1}{A_1}, \ldots, \overset{m}{A_m}\) form an algebra. Also, one should take care that the passage to an equation for Q(y) be a simplification rather than a complication.

Moreover, the following phenomenon is frequently encountered: the symbol Q(y) of the operator that gives the solution of the problem considered lies in a function space larger than the original one. The operators with symbols in this larger space do not themselves form an algebra, but only a module over the original algebra. We point out that this phenomenon is not a disadvantage of the method but is essentially related to the nature of the problems considered. For example, if P is an lth order differential operator of principal type with real principal symbol, then its right almost-inverse P⁻¹ acts as follows:

$$P^{-1} : H^s_0 \to H^{s+l-1}_{\mathrm{loc}}, \qquad s \in \mathbb{R},$$

where H^s_0 is the subspace of the Sobolev space H^s consisting of compactly supported distributions and H^{s+l−1}_loc is the space of distributions belonging to H^{s+l−1} locally.² Products of such operators cannot be considered in general: an application of such an operator to a function with compact support yields a function whose support need not be compact, and the second application thus becomes impossible. Such operators, however, form a module over the algebra of (pseudo)differential operators with proper support.

The method cited applies to problems of various levels of difficulty, and in Section 1.1 we consider the simplest case of ordinary differential operators with constant coefficients. It turns out that in this case the above-described scheme leads to the conventional Heaviside method. Clearly, this case is trivial from the standpoint of noncommutative analysis: no noncommutativity is involved, and the details of the method, in particular, the extension of symbol classes, employ a technique completely different from the one used in the subsequent sections. However, this example displays some characteristic features of the method, and that is why it is included.

² For specialists, let us note that P⁻¹ is a Fourier integral operator on a manifold with boundary that is not the graph of a canonical transformation even locally (see [173]).
The table given below shows some differences between the cases of ordinary differential equations with constant coefficients and of partial differential equations.

| The main features of the method | ordinary differential equations with constant coefficients | partial differential equations with variable coefficients |
|---|---|---|
| Equation for the symbol of the inverse operator | algebraic | (pseudo)differential |
| The choice of the tuple A₁, …, A_m | m = 1, A₁ = D = d/dx | depends on a particular problem |
| The solution obtained | exact | asymptotic (in most cases) |
| The method of extending the class of symbols and justifying the correctness of definition for operators with symbols from this class | algebraic (passage to the ring of quotients) | function-theoretic |
1.1 Heaviside's Operator Method for Differential Equations with Constant Coefficients

Consider the equation

$$P(D)u(x) = f(x), \qquad x \in (a, b),$$

where f(x) ∈ C^∞(a, b) is a given function and

$$P(D) = \sum_{k=0}^{m} c_k D^k$$

is a differential operator of order m with constant coefficients c_k (here D = d/dx). The function

$$P(y) = \sum_{k=0}^{m} c_k y^k$$

will be called the symbol of P(D). It is a polynomial with complex coefficients. Thus, P(D) is already represented as a function of a tuple of operators. This "tuple", however, consists of a single operator D. According to the general scheme, we seek a right inverse P⁻¹ in the form P⁻¹ = Q(D), where Q(y) is some unknown function.
(Note that one cannot find the two-sided inverse of P(D) for the obvious reason that P(D) has a nontrivial kernel provided that m > 0.)

Whichever function f(y) we take, it is obviously true that D f(D) = f₁(D), where f₁(y) = y f(y). Thus, left multiplication by D in the algebra of operators corresponds to multiplication by y in the algebra of their symbols. In accordance with the general theorems provided in Chapter II, we have

$$P(D)\,Q(D) = (PQ)(D)$$

(which is obvious in itself). We have to find a function Q(y) such that

$$P(D)\,Q(D) = 1,$$

and so we should require that

$$P(y)\,Q(y) = 1.$$

The solution is quite obvious:

$$Q(y) = 1/P(y).$$

In order to calculate the operator Q(D) explicitly, let us expand Q(y) into partial fractions,

$$Q(y) = \frac{1}{P(y)} = \sum_{k=1}^{l} \sum_{j=1}^{r_k} \frac{a_{kj}}{(y - \lambda_k)^j},$$

where λ₁, …, λ_l are the roots of P(y), r₁, …, r_l are their multiplicities, and a_{kj} are constants, which can be found from a system of linear equations. We see that it suffices to compute operators of the form 1/(D − λ)^j. To this end, we use the following trick. Let U_λ denote the operator of multiplication by e^{λx}. Clearly, the conjugation mapping A ↦ U_λ⁻¹ A U_λ is an automorphism of the algebra of operators acting on functions on the real axis. Moreover, we have U_λ⁻¹(D − λ)U_λ = D. By Theorem I.4 we have

$$U_\lambda^{-1}\, f(D - \lambda)\, U_\lambda = f\bigl(U_\lambda^{-1}(D - \lambda)U_\lambda\bigr) = f(D)$$

for any symbol f(y). In particular,

$$\frac{1}{(D - \lambda)^j} = U_\lambda \circ \frac{1}{D^j} \circ U_\lambda^{-1},$$
and we have reduced our problem to the computation of 1/D^j. The operator 1/D^j should be the right inverse of D^j, that is,

$$D^j \cdot \frac{1}{D^j} = 1.$$

However, this condition does not uniquely determine 1/D^j; to get rid of the ambiguity, we assume, completely at random, that 1/D^j is the resolvent operator of the problem

$$\frac{d^j u}{dx^j} = f, \qquad u|_{x=0} = u'|_{x=0} = \cdots = u^{(j-1)}|_{x=0} = 0.$$

Then a simple calculation yields

$$\frac{1}{D^j} f = \frac{1}{(j-1)!} \int_0^x (x - \xi)^{j-1} f(\xi)\, d\xi,$$

and consequently,

$$\frac{1}{(D - \lambda)^j} f = e^{\lambda x}\, \frac{1}{D^j}\bigl(e^{-\lambda x} f\bigr) = \frac{1}{(j-1)!} \int_0^x e^{\lambda(x - \xi)} (x - \xi)^{j-1} f(\xi)\, d\xi.$$
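The computation above is easy to check symbolically. The following sketch is ours (sympy; the sample operator P(D) = D² − 3D + 2 and the helper name `inv_D_minus_lambda` are illustrative choices, not from the text); it applies the integral formula for 1/(D − λ)^j and verifies that the partial-fraction combination is a right inverse of P(D):

```python
import sympy as sp

x, xi = sp.symbols('x xi')

# (D - lam)^{-j} f(x) = 1/(j-1)! * int_0^x e^{lam(x-xi)} (x-xi)^{j-1} f(xi) dxi
def inv_D_minus_lambda(f, lam, j=1):
    integrand = sp.exp(lam*(x - xi)) * (x - xi)**(j - 1) * f.subs(x, xi)
    return sp.integrate(integrand, (xi, 0, x)) / sp.factorial(j - 1)

# Sample operator: P(D) = D^2 - 3D + 2 = (D - 1)(D - 2), D = d/dx.
# Partial fractions: 1/P(y) = -1/(y - 1) + 1/(y - 2).
f = sp.sin(x)
u = -inv_D_minus_lambda(f, 1) + inv_D_minus_lambda(f, 2)

# Check that u is a particular solution of P(D)u = f:
residual = sp.simplify(sp.diff(u, x, 2) - 3*sp.diff(u, x) + 2*u - f)
assert residual == 0
```

Note that u and all of its derivatives up to order m − 1 vanish at x = 0, in accordance with the zero Cauchy data chosen above.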
The formula for Q(D) follows immediately.

The above argument is extremely simple; however, there is still a point to be clarified. As was mentioned above, our definition of 1/D^j is not the only possible one (we have posed the zero Cauchy data at x = 0, which is by no means necessary). Therefore, the extension of the mapping symbol ↦ operator given here needs further investigation and verification. Let us show that our method gives a well-defined result for any rational functions taken as symbols.

Let P = ℂ[z] be the ring of polynomials in z with complex coefficients, and let Op = End(C^∞(a, b)) be the ring of continuous linear operators in the space C^∞(a, b) equipped with the natural topology. Consider the ring homomorphism

$$\mu : \mathcal{P} \to \mathrm{Op}, \qquad P(z) = \sum_{k=0}^{m} c_k z^k \;\longmapsto\; P(D) = \sum_{k=0}^{m} c_k \Bigl(\frac{d}{dx}\Bigr)^k.$$

We wish to solve the inversion problem for the operator P(D) = μ(P), and, at first glance, its solution from the algebraic point of view is an extension of the above homomorphism to a homomorphism

$$\mathcal{R} \longrightarrow \mathrm{Op},$$
where R is the ring of quotients³ of P. However, we know that such an extension is not possible, since the two-sided inverse of P(D) does not exist. We shall seek a right inverse, that is, an operator that assigns a particular solution u of the equation P(D)u = f to each right-hand side f. To this end, we consider R as a left P-module and Op as a left module over itself; our aim is to extend μ to a homomorphism

$$\tilde\mu : \mathcal{R} \to \mathrm{Op}$$

of left modules over μ.⁴

Let us realize the described program. To simplify the notation we assume that the origin is contained in the interval (a, b) under consideration; clearly, this assumption does not lead to loss of generality. First of all we present the definition of μ̃.

Definition III.1 Let R(z) = P(z)/Q(z) be a rational function. Then the operator

$$\tilde\mu(R) = \frac{P(D)}{Q(D)} : C^\infty(a, b) \to C^\infty(a, b)$$

is defined by the formula

$$\frac{P(D)}{Q(D)}\, f = P(D)v$$

for any function f ∈ C^∞(a, b), where v is the unique solution of the Cauchy problem

$$Q(D)v = f, \qquad \frac{d^j v}{dx^j}\Big|_{x=0} = 0, \quad j = 0, 1, \ldots, \deg Q - 1, \tag{III.1}$$

with zero Cauchy data.

³ The ring R is the set of pairs (P, Q) ∈ P × P factorized by the following congruence: (P, Q) ∼ (P₁, Q₁) if and only if P Q₁ = Q P₁. The operations on R are defined as follows: (P, Q) + (R, W) = (PW + QR, QW); (P, Q)(R, W) = (PR, QW). Since P is an integral domain, this definition is correct. Each element (P, Q) ∈ R is naturally interpreted as a rational function P/Q; P is embedded in R by the mapping P ↦ P/1.

⁴ A homomorphism of modules over a homomorphism μ of rings is a morphism μ̃ of abelian groups such that μ̃(pr) = μ(p)μ̃(r) for each p ∈ P and r ∈ R.
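Definition III.1 can be prototyped symbolically. In the sketch below (ours; the helper name `apply_rational` and the sample polynomials are illustrative assumptions), P(D)/Q(D) f is computed by solving the Cauchy problem (III.1) with zero data and then applying P(D); the example checks the congruence P(D)S(D)/(Q(D)S(D)) = P(D)/Q(D) on a sample right-hand side:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')

# mu-tilde(P/Q) f: solve Q(D)v = f with zero Cauchy data at x = 0, return P(D)v.
# Pcoeffs, Qcoeffs list the coefficients c_0, c_1, ... of c_0 + c_1 z + ...
def apply_rational(Pcoeffs, Qcoeffs, f):
    Qop = sum(c*sp.diff(u(x), x, k) for k, c in enumerate(Qcoeffs))
    ics = {sp.diff(u(x), x, j).subs(x, 0): 0 for j in range(len(Qcoeffs) - 1)}
    v = sp.dsolve(sp.Eq(Qop, f), u(x), ics=ics).rhs
    return sp.expand(sum(c*sp.diff(v, x, k) for k, c in enumerate(Pcoeffs)))

# Congruence check: 1/(z - 1) and (z - 2)/((z - 1)(z - 2)) act identically.
f = sp.exp(2*x)
r1 = apply_rational([1], [-1, 1], f)           # 1/(D - 1)
r2 = apply_rational([-2, 1], [2, -3, 1], f)    # (D - 2)/((D - 1)(D - 2))
assert sp.simplify(r1 - r2) == 0
```

This is exactly the well-definedness property verified rigorously in the text below.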
Since the Cauchy problem is uniquely solvable for an ordinary differential operator, we see that the operator P(D)/Q(D) is uniquely determined by the operators P(D) and Q(D). However, to verify the correctness of Definition III.1 one must check that the result of acting by the operator P(D)/Q(D) depends not on the polynomials P and Q themselves but only on the equivalence class of the pair (P, Q) in the quotient ring R. Therefore, it is necessary to check the relation

$$\frac{P(D)S(D)}{Q(D)S(D)} = \frac{P(D)}{Q(D)} \tag{III.2}$$

for any polynomials P, Q, and S. By definition, we obtain

$$\frac{P(D)S(D)}{Q(D)S(D)}\, f = P(D)S(D)v,$$

where v is the solution of the Cauchy problem

$$Q(D)S(D)v = f, \qquad \frac{d^j v}{dx^j}\Big|_{x=0} = 0, \quad j = 0, 1, \ldots, \deg Q + \deg S - 1.$$

Denote ṽ = S(D)v. Then we have

$$\frac{P(D)S(D)}{Q(D)S(D)}\, f = P(D)\tilde v,$$

and it is quite simple to check that the function ṽ satisfies the Cauchy problem (III.1). Thus, the relation (III.2) is valid and the correctness of Definition III.1 is proved.

Now we must verify that the defined mapping
$$\tilde\mu : \mathcal{R} \to \mathrm{Op}$$

is a homomorphism of left modules over the ring homomorphism μ. To do this we must verify the following three assertions.

1°. μ̃(P) = P(D) for P ∈ P, that is, μ̃ is an extension of μ.

2°. (R₁ + R₂)(D) = R₁(D) + R₂(D) for any R₁, R₂ ∈ R.

3°. (PR)(D) = P(D)R(D) for R ∈ R and P ∈ P.

The verification of these three assertions is quite simple and therefore is left to the reader.

Now we shall justify the naturality property

$$U_\lambda^{-1}\, f(A)\, U_\lambda = f\bigl(U_\lambda^{-1} A U_\lambda\bigr)$$

for U_λ = e^{λx} and A = D, used above in the description of the explicit computational procedure for the operator P(D)/Q(D).
Proposition III.1 For any rational function R ∈ R, we have

$$e^{-\lambda x}\, R(D)\, e^{\lambda x} = R(D + \lambda).$$

Proof. If the rational function R is the quotient of the polynomials P and Q, namely, R = P/Q, then the function

$$u(x) = R(D)\{e^{\lambda x} f(x)\}$$

is defined by the equality

$$u(x) = P(D)v(x),$$

where v(x) is the solution of the following Cauchy problem:

$$Q(D)v(x) = e^{\lambda x} f(x), \qquad \frac{d^j v(x)}{dx^j}\Big|_{x=0} = 0, \quad j = 0, 1, \ldots, \deg Q - 1. \tag{III.3}$$

Let us seek the solution v(x) of the latter problem in the form v(x) = e^{λx} ṽ(x). Substituting this expression into the Cauchy problem (III.3), we find that the function ṽ(x) is the solution of the problem

$$Q(D + \lambda)\tilde v(x) = f(x), \qquad \frac{d^j \tilde v}{dx^j}\Big|_{x=0} = 0, \quad j = 0, 1, \ldots, \deg Q - 1. \tag{III.4}$$

Here we used the obvious fact that

$$Q(D)\, e^{\lambda x}\, \tilde v = e^{\lambda x}\, Q(D + \lambda)\, \tilde v$$

for any differential operator Q(D). Thus, we obtain

$$R(D)\{e^{\lambda x} f(x)\} = P(D)\{e^{\lambda x} \tilde v(x)\} = e^{\lambda x} P(D + \lambda)\tilde v(x).$$

The latter relation together with the Cauchy problem (III.4) proves the required property of the operator R(D). □
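The statement of the proposition is also easy to test symbolically for first-order denominators. In this sketch (ours; the sample values λ = 3, μ = 2 and f = cos x are arbitrary illustrative choices), both sides are evaluated via the integral formula for (D − μ)⁻¹ with zero datum at x = 0:

```python
import sympy as sp

x, xi = sp.symbols('x xi')

# (D - mu)^{-1} f with zero Cauchy datum at x = 0
def inv(mu, f):
    return sp.integrate(sp.exp(mu*(x - xi)) * f.subs(x, xi), (xi, 0, x))

lam, f = 3, sp.cos(x)
# Compare e^{-lam x} R(D) e^{lam x} f with R(D + lam) f for R(z) = 1/(z - 2):
lhs = sp.exp(-lam*x) * inv(2, sp.exp(lam*x)*f)
rhs = inv(2 - lam, f)          # 1/((D + lam) - 2) = 1/(D - (2 - lam))
assert sp.simplify(lhs - rhs) == 0
```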
1.2 Nonstandard Characteristics and Asymptotic Expansions

We now proceed to considering partial differential equations with variable coefficients. As was mentioned above, the theory is asymptotic in this case.
In this subsection we consider the question of which Hamilton-Jacobi equations correspond to a given differential equation if various types of asymptotic expansions are considered. The first subsection is chiefly devoted to nonstandard characteristics (see [131], [134]). Our main topic, noncommuting operators, remains in the background. In the second subsection we compare asymptotic expansions with respect to a large parameter and with respect to smoothness and draw the conclusion that noncommuting operators play a leading role in the theory of arbitrary asymptotic expansions. In the following subsections we use noncommutative analysis to develop the main stages in the construction of asymptotic solutions to differential equations of various types, and here noncommuting operators exhibit their full strength.

The theory of characteristics of differential equations has been developing since the very origin of the study of partial differential equations. It gained a strong impetus with the appearance of physical problems requiring asymptotic expansions with respect to a small parameter. Among such problems, we note the WKB method for constructing asymptotic solutions of the Schrödinger equation in quantum mechanics and the method of geometric optics applied to the propagation of high-frequency electromagnetic waves. As soon as the investigation of such problems started, the theory of characteristics of differential equations split into two parallel branches developing almost independently. The first of these branches deals with singularities of solutions to differential equations (and with asymptotic expansions with respect to smoothness), whereas the second one treats asymptotic expansions with respect to a small parameter. It should be noted that the second branch was at first developed chiefly in treatises on mathematical physics.

Having both branches in mind, let us study the following question: what is the "correct" characteristic equation for, say, the Klein-Gordon equation

$$h^2 \frac{\partial^2 u}{\partial t^2} - h^2 \frac{\partial^2 u}{\partial x^2} + m^2 c^4 u = 0$$

(here m is the mass of the particle described by the equation, c the velocity of light, and h the Planck constant)?

A physicist would probably claim that the characteristic equation is the Hamilton-Jacobi equation for free motion of a relativistic particle:

$$\Bigl(\frac{\partial S}{\partial t}\Bigr)^2 - \Bigl(\frac{\partial S}{\partial x}\Bigr)^2 - m^2 c^4 = 0.$$

However, a mathematician specializing in hyperbolic equations could argue that the correct characteristic equation is another Hamilton-Jacobi equation, namely,

$$\Bigl(\frac{\partial S}{\partial t}\Bigr)^2 - \Bigl(\frac{\partial S}{\partial x}\Bigr)^2 = 0. \tag{III.5}$$
We postpone answering the question of which version is correct and note that the problem is even less trivial for the Helmholtz equation

$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + k^2 n^2(x, y)\, u = 0.$$

Indeed, in the previous example we only discussed the form of the corresponding Hamilton-Jacobi equation, whereas for the Helmholtz equation there would be different opinions as to whether there exist (real) characteristics at all. A mathematician could reasonably assert that the equation is elliptic, so that it makes little sense to speak about its real characteristics. A physicist, in turn, could argue that the equation describes propagation of electromagnetic waves in some (inhomogeneous) medium and hence is a wave equation. Moreover, the physicist would readily write out the Hamilton-Jacobi equation

$$\Bigl(\frac{\partial S}{\partial x}\Bigr)^2 + \Bigl(\frac{\partial S}{\partial y}\Bigr)^2 - n^2(x, y) = 0, \tag{III.6}$$
which is none other than the classical eikonal equation.

Which of the described standpoints is correct? So far, the answer seems evident: neither is correct (or both are correct, if you like that better). The point is that the source of the disagreement lies in the terminology. Each side has its own interpretation of the notion of characteristics, and so each point of view is valid. An attentive reader has surely noticed that the key question is what is called a wave. Let us give this question a somewhat different setting: what propagates along the characteristic rays corresponding to the Hamilton-Jacobi equation?

The mathematician specializing in hyperbolic equations gives the following answer: it is the discontinuities of solutions of the differential equation considered that propagate along the trajectories of the Hamiltonian system (more precisely, along the projections of these trajectories to the configuration space). Hence, an appropriate asymptotic solution to the Klein-Gordon equation may have the following form:

$$u(x, t) = \theta(\Phi(x, t))\,\varphi_0(x, t) + \Phi(x, t)\,\theta(\Phi(x, t))\,\varphi_1(x, t) + \cdots, \tag{III.7}$$

where Φ, φ₀, φ₁, … are smooth functions of the variables (x, t) and θ(z) is the Heaviside function

$$\theta(z) = \begin{cases} 1, & z > 0, \\ 0, & z < 0. \end{cases}$$

The solution is discontinuous on the surface Φ(x, t) = 0, and we monitor the evolution of this surface.

The physicist's answer is also evident. He would say that it is the wave, i.e., the oscillations of solutions, that propagates along the rays of geometric optics. In particular, the asymptotic expansion of the solution may have the form of an electromagnetic wave,
$$u(x, y) = e^{ikS(x, y)}\bigl(\varphi_0(x, y) + k^{-1}\varphi_1(x, y) + \cdots\bigr) \tag{III.8}$$
(here we consider the Helmholtz equation). The level surfaces S(x, y) = const of the phase S(x, y) play the role of wave fronts, and we should analyze the evolution of these surfaces.

It is now completely evident that the same differential equation can have different characteristic equations according to which asymptotic expansions we are interested in. Indeed, if we substitute the asymptotic expansion (III.7) into the Klein-Gordon equation and equate the coefficient of the leading singularity δ′(Φ) to zero, we naturally obtain the characteristic equation (III.5), whereas the substitution of the asymptotic expansion (III.8) into the Helmholtz equation yields the eikonal equation (III.6). We arrive at the following conclusion: there is no natural characteristic equation associated with a given differential equation; the characteristic equation (or, what is the same, the Hamiltonian function) is determined not only by the differential equation, but also by the form of the asymptotic expansion we intend to obtain.

Following the existing tradition, we say that the characteristics associated with expansions with respect to smoothness (e.g., (III.7)) are standard; all other types of characteristics (including those associated with oscillatory expansions of the form (III.8)) are said to be nonstandard (as regards this terminology, see the paper [131], which was extensively used in this section). One should note that there are numerous types of asymptotic expansions that could be constructed for a given differential equation: with respect to smoothness, a large (or small) parameter, growth at infinity, etc. We also mention the so-called synchronous asymptotic expansions involving two or more parameters (say, smoothness and a large parameter). The preceding discussion shows that each type of asymptotic expansion leads to a different characteristic equation (or Hamiltonian function) and to different equations for the amplitudes φ₀, φ₁, … occurring in asymptotic expansions of the form (III.7), (III.8), etc. It would be illogical if one had to develop the theory from the very beginning for each new asymptotic problem. Therefore, in the next subsection we try to find common features of asymptotic expansions with respect to smoothness and small (large) parameters and thus to guess the outlines of the general theory.
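The substitution of the ansatz (III.8) into the Helmholtz equation can be carried out symbolically. In the sketch below (ours; the variable and function names are illustrative), the coefficient of the leading power k² reproduces the eikonal equation (III.6):

```python
import sympy as sp

x, y, k = sp.symbols('x y k', positive=True)
S = sp.Function('S')(x, y)
phi0 = sp.Function('phi0')(x, y)
n = sp.Function('n')(x, y)

# Leading term of the ansatz u = e^{ikS} phi0 in the Helmholtz equation
u = sp.exp(sp.I*k*S) * phi0
helmholtz = sp.diff(u, x, 2) + sp.diff(u, y, 2) + k**2 * n**2 * u

# Coefficient of k^2 in e^{-ikS} (Helmholtz u) / phi0:
leading = sp.expand(helmholtz / u).coeff(k, 2)
# leading = n^2 - S_x^2 - S_y^2, i.e. setting it to zero gives (III.6)
eikonal = sp.simplify(leading - (n**2 - sp.diff(S, x)**2 - sp.diff(S, y)**2))
assert eikonal == 0
```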
1.3 Asymptotic Expansions: Smoothness vs Parameter

The problem of obtaining asymptotic expansions with respect to a small parameter can easily be reduced to the problem of constructing asymptotics with respect to smoothness. Here we use the Helmholtz equation to illustrate this reduction. Physically, the idea is quite obvious. The point is that the Helmholtz equation is the stationary equation corresponding to the wave equation

$$\frac{\partial^2 U}{\partial t^2} = \frac{c^2}{n^2(x, y)}\Bigl(\frac{\partial^2 U}{\partial x^2} + \frac{\partial^2 U}{\partial y^2}\Bigr).$$
In other words, the Helmholtz equation is an equation for the Fourier transform of U(x, t) with respect to t taken at frequency ω. The wave number k is equal to ω/c, and the ratio c/n(x, y) is the phase velocity of the wave in the medium (n(x, y) is known as the refraction coefficient in geometric optics). Since the decay of the Fourier transform ũ(x, y, k) in k is known to be equivalent to the smoothness of U(x, y, t) with respect to t, we see that the problem of finding asymptotic expansions as k → ∞ of solutions to the Helmholtz equation is reduced to the problem of finding asymptotic expansions with respect to smoothness for the wave equation.¹

The reverse transition (from smoothness asymptotics to parameter asymptotics) is not as simple as the direct one. First, asymptotic expansions with respect to smoothness are usually considered in all variables rather than in a single variable. Second, the transition to the Fourier transform in t in the wave equation is simplified by the fact that the coefficients of this equation do not depend on this variable. If we tried to pass to the Fourier transform in the variable x, we would not obtain a differential equation for it unless the coefficients of the equation are polynomials. To deal with each of these difficulties separately, we first consider the equation

$$\sum_{j + |\alpha| \le m} a_{j\alpha}(x, t)\Bigl(\frac{\partial}{\partial t}\Bigr)^j \Bigl(\frac{\partial}{\partial x}\Bigr)^\alpha u(x, t) = 0 \tag{III.9}$$
and construct asymptotic solutions with respect to smoothness in t alone. According to what was said above, we perform the Fourier transform t → k in equation (III.9) and obtain the equation

$$\sum_{j + |\alpha| \le m} a_{j\alpha}\Bigl(x, i\frac{\partial}{\partial k}\Bigr)(ik)^j \Bigl(\frac{\partial}{\partial x}\Bigr)^\alpha \tilde u(x, k) = 0.$$
We seek its solutions in the form

$$\tilde u(x, k) = e^{ikS(x)} \sum_{j=0}^{M} k^{-j}\varphi_j(x) = e^{ikS(x)}\varphi(x, k).$$

The asymptotic solution of the original equation has the form

$$u(x, t) = F^{-1}_{k \to t}\bigl[e^{ikS(x)}\varphi(x, k)\bigr],$$

where F^{-1}_{k→t} is the inverse Fourier transform. Note that the latter formula can be rewritten as

$$u(x, t) = e^{iS(x)(-i\partial/\partial t)}\,\varphi\Bigl(x, -i\frac{\partial}{\partial t}\Bigr)\,\delta(t). \tag{III.10}$$
It is somewhat difficult to substitute this expression into the equation directly, since the coefficients depend on t and thus do not commute with the operator −i∂/∂t

¹ Note that in spite of the evident relationship described here both theories were developed independently for a long time. The reason probably resides in the difficulties which we discuss below.
involved in the asymptotic expansion (in fact, −i∂/∂t is the "large parameter" of our expansion). This indicates that noncommuting operators may be useful here. Clearly, the fact that t does not commute with −i∂/∂t directly affects the form of the corresponding Hamiltonian function.

In conclusion, let us try to find the general form of asymptotic expansions with respect to smoothness of solutions to differential equations. We use formula (III.10) as a starting point. Clearly, one should use derivatives with respect to all variables. To simplify the notation, we consider independent variables (x₁, …, xₙ) (i.e., we do not distinguish t explicitly). Analyzing formula (III.10), we arrive at the conclusion that the general form of the asymptotic expansion with respect to smoothness is²

$$u(x) = e^{iS(\overset{2}{x},\, -i\overset{1}{\partial/\partial x})}\,\varphi\Bigl(\overset{2}{x},\, -i\overset{1}{\frac{\partial}{\partial x}}\Bigr) f(x),$$

where f(x) is an arbitrary function of x, S(x, p) is first-order homogeneous in p = (p₁, …, pₙ), and φ(x, p) is given by a formal series

$$\varphi(x, p) = \varphi_0(x, p) + \varphi_{-1}(x, p) + \cdots$$

of homogeneous functions (the subscript denotes the degree of homogeneity). We see that

i) the operators

$$\Bigl(-i\frac{\partial}{\partial x_1}, \ldots, -i\frac{\partial}{\partial x_n}\Bigr)$$

play the role of "large parameters" in the expansions with respect to smoothness;

ii) the terminology of noncommutative analysis is convenient when dealing with expansions of the form (III.10).

In more general cases, the operators defining the asymptotic expansion can be chosen in some other way. For example, if we deal with simultaneous asymptotic expansions with respect to smoothness and growth at infinity, we will usually introduce the set of operators

$$\bigl(-i\partial/\partial x_1, \ldots, -i\partial/\partial x_n, x_1, \ldots, x_n\bigr),$$

which itself is not commutative. In the next subsection we give a sketch of the general theory of asymptotic expansions with respect to a tuple of (noncommuting) operators.

² We ignore focal (caustic) points for the sake of simplicity.
1.4 Asymptotic Expansions with Respect to an Ordered Tuple of Operators

As was explained in the preceding section, in order to construct the general theory of asymptotic expansions, it is necessary, first of all, to choose and fix a tuple of operators A = (A₁, …, A_N), which will serve as large parameters for the asymptotic expansions in question. Clearly, this tuple cannot be chosen completely arbitrarily (but rather should satisfy some conditions we do not discuss here; see [133]). We assume that our operators are self-adjoint operators in some Hilbert space¹ H.

The choice of the tuple A determines the type of asymptotic expansion, and it is now appropriate to give a precise definition of this basic notion. For this purpose, we introduce a scale of spaces H^s(A) associated with the tuple A in the following way. Let D ⊂ H be a linear subset of the common domain of all possible products of powers of the operators A₁, …, A_N. The set D is assumed to be dense in H. We consider the following family of norms on D:

$$\|u\|_s^2 = \bigl\|(1 + A_1^2 + \cdots + A_N^2)^{s/2} u\bigr\|^2.$$

The space H^s(A) is defined as the completion of D with respect to the norm ‖·‖_s.

Definition III.2 A function ũ is called an asymptotic approximation of order s of a function u if u − ũ ∈ H^s(A).

Thus, if H = L²(ℝⁿ), N = n, and

$$A = \Bigl(-i\frac{\partial}{\partial x_1}, \ldots, -i\frac{\partial}{\partial x_n}\Bigr),$$

then we can take D = C₀^∞(ℝⁿ), and the scale H^s(−i∂/∂x) is none other than the usual Sobolev scale. Asymptotic expansions obtained in this way are those with respect to smoothness. If we choose another tuple of operators, say,

$$\Bigl(-i\frac{\partial}{\partial x_1}, \ldots, -i\frac{\partial}{\partial x_n}, x_1, \ldots, x_n\Bigr),$$

then the norms in the spaces H^s(−i∂/∂x, x) are defined by the formulas

$$\|u(x)\|_s^2 = \int \bigl|(1 + |x|^2 - \Delta)^{s/2} u(x)\bigr|^2\, dx.$$

An advantage of these norms is that they are Fourier-invariant. Along with smoothness, these norms allow for the decay at infinity and prove to be useful in studies dealing with pseudodifferential equations (see Shubin [163]).

¹ For example, one can often take H = L².
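Numerically, the norms of the scale H^s(−i∂/∂x) can be evaluated as Fourier multipliers. The sketch below (ours; a periodic one-dimensional discretization with names of our own choosing) computes ‖u‖_s = ‖(1 + A²)^{s/2}u‖ for A = −i d/dx:

```python
import numpy as np

# ||u||_s = ||(1 + A^2)^{s/2} u||_{L^2} for A = -i d/dx on a periodic grid.
# A acts on the m-th Fourier mode as multiplication by the frequency k_m.
def sobolev_norm(u, s, L=2*np.pi):
    m = len(u)
    k = 2*np.pi*np.fft.fftfreq(m, d=L/m)     # eigenvalues of -i d/dx
    uhat = np.fft.fft(u) / m                 # Fourier coefficients
    return np.sqrt(L * np.sum((1 + k**2)**s * np.abs(uhat)**2))

xs = np.linspace(0, 2*np.pi, 256, endpoint=False)
u = np.sin(xs)
# For u = sin x: ||u||_0 = sqrt(pi), ||u||_1 = sqrt(2*pi)
print(sobolev_norm(u, 0), sobolev_norm(u, 1))
```

For a single Fourier mode the multiplier (1 + k²)^{s/2} acts exactly, so the discretized norms agree with the analytic values to machine precision.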
The second stage in the construction of asymptotic expansions with respect to the tuple (A₁, …, A_N) is to represent the differential operator of the original problem as a function of the (generally noncommuting) tuple (A₁, …, A_N), that is, to represent the original equation in the form

$$H(\overset{1}{A_1}, \ldots, \overset{N}{A_N})\, u = f. \tag{III.11}$$

Of course, such a representation is generally ambiguous and sometimes impossible. In constructing asymptotic expansions with respect to smoothness, along with the operators (−i∂/∂x) defining the scale H^s, one needs to introduce the operators x₁, …, xₙ not involved in the definition of the asymptotic expansion. Even at this stage one can obtain different representations of the original equation by choosing different orderings of the operators. For simplicity, we do not dwell upon the fact that not all of the operators A₁, …, A_N need be involved in the definition of the scale H^s(A).

The first two stages described are not algorithmized as yet. The choice of the tuple A and the representation of the original equation must be done in a special way for each equation and is a matter of art rather than craft. However, we try to show in the following that as soon as the tuple A is chosen and the representation obtained, the remaining stages are automatically determined by the calculus of noncommuting operators.
1.5 Reduction to Pseudodifferential Equations

With the asymptotic expansion constructed in Section 2 as an example, let us try to find asymptotic solutions, in the sense of Definition III.2, of equation (III.11) in the form

$$u = F(\overset{1}{A_1}, \ldots, \overset{N}{A_N})\, f.$$

A sufficient condition that u be a solution of equation (III.11) is that the operator F(A₁, …, A_N) be a solution of the operator equation

$$\bigl[H(\overset{1}{A_1}, \ldots, \overset{N}{A_N})\bigr]\bigl[F(\overset{1}{A_1}, \ldots, \overset{N}{A_N})\bigr] = 1.$$
This is an equation for the symbol F(z₁, …, z_N) of the operator F(A₁, …, A_N). Here we consider the case, studied in Chapter II, in which the following condition is satisfied:

The product of any two functions of the ordered tuple A = (A₁, …, A_N) can be represented as a function of the same tuple, i.e., the functions of (\(\overset{1}{A_1}, \ldots, \overset{N}{A_N}\)) form an operator algebra.

As was shown in Chapter II, this is the case if the tuple A = (A₁, …, A_N) has a left ordered representation on the space of symbols. Let l = (l₁, …, l_N) be this
representation. Then

$$\bigl[H(\overset{1}{A_1}, \ldots, \overset{N}{A_N})\bigr]\bigl[F(\overset{1}{A_1}, \ldots, \overset{N}{A_N})\bigr] = \varphi(\overset{1}{A_1}, \ldots, \overset{N}{A_N}),$$

where

$$\varphi(y_1, \ldots, y_N) = H(\overset{1}{l_1}, \ldots, \overset{N}{l_N})(F).$$

Thus we obtain the equation

$$H(\overset{1}{l_1}, \ldots, \overset{N}{l_N})(F) = 1$$

for the symbol F(y₁, …, y_N). Of course, this equation has little practical value unless we can calculate the left ordered representation operator H(\(\overset{1}{l_1}, \ldots, \overset{N}{l_N}\)) explicitly; we have discussed these problems in Chapter II, and we shall also find numerous examples in the following sections. Here we only recall that "triangular" commutation relations lead to differential operators l₁, …, l_N of the left ordered representation; if the symbol H is a polynomial in the variables whose associated operators are nontrivial, then H(\(\overset{1}{l_1}, \ldots, \overset{N}{l_N}\)) is a differential operator as well.

So, what conclusion can we derive from our considerations? The problem of solving equation (III.11) has been reduced successively to the operator equation and then to the equation for the symbol. At first glance it seems that this "symbolic" problem is no simpler than the original one. This impression would be true if we considered exact solutions of equation (III.11) rather than asymptotic ones. But in order to obtain asymptotic solutions we should solve the operator equation modulo operators of sufficiently low negative order in the scale H^s(A). Accordingly, the "symbolic" equation should be solved up to functions of z₁, …, z_N that decay at infinity sufficiently rapidly (this clearly requires "good" estimates for the operators F(\(\overset{1}{A_1}, \ldots, \overset{N}{A_N}\)) in the scale H^s(A) and concerns the choice of appropriate symbol classes).

Thus, the operational calculus permits us to reduce the problem of constructing asymptotic solutions of equation (III.11) with respect to an arbitrary tuple of operators to the asymptotic expansion of a solution of the symbolic equation with respect to growth at infinity. On passing in the latter equation to the Fourier transform, we reduce an "arbitrary" asymptotic problem to a problem of constructing asymptotic expansions with respect to smoothness.

We can now return (at a new level of understanding) to the problem posed in Subsection 1.2: What is the "correct" characteristic equation (or, what is the same, the Hamiltonian) for a given differential equation if we consider asymptotic solutions with respect to a tuple A = (A₁, …, A_N)? With the above information in mind, we should perform the following steps in order to construct the Hamiltonian.

1) Compute the operators of the left ordered representation.
III. Noncommutative Analysis and Differential Equations
2) Compute the operator $H(l_1, \ldots, l_N)$ and pass to the Fourier transform with respect to the variables $(z_1, \ldots, z_N)$ (or only the part of these variables that corresponds to the type of asymptotic expansion considered).
The "standard" Hamiltonian function of the operator obtained is the Hamiltonian function corresponding to the original equation for the given type of asymptotic expansions.

Let us consider the construction of the characteristic equation for the Helmholtz equation. To represent this equation in the form (III.11), we introduce the following set of operators:

$$A_1 = -i\frac{\partial}{\partial x}, \quad A_2 = -i\frac{\partial}{\partial y}, \quad A_3 = k, \quad B_1 = x, \quad B_2 = y.$$

The Helmholtz equation now becomes

$$\Bigl[\overset{1}{A}{}_1^2 + \overset{1}{A}{}_2^2 - \overset{1}{A}{}_3^2\, n^2(\overset{2}{B}_1, \overset{2}{B}_2)\Bigr] u = 0$$

(the assignment of Feynman indices uses the fact that some of these operators commute with each other). Let us write out the left ordered representation operators, using the following correspondence between the operators and the arguments of symbols:

$$z_1 \leftrightarrow A_1, \quad z_2 \leftrightarrow A_2, \quad z_3 \leftrightarrow A_3, \quad w_1 \leftrightarrow B_1, \quad w_2 \leftrightarrow B_2.$$
We have

$$l_{B_1} = w_1, \quad l_{B_2} = w_2, \quad l_{A_3} = z_3, \quad l_{A_1} = z_1 - i\frac{\partial}{\partial w_1}, \quad l_{A_2} = z_2 - i\frac{\partial}{\partial w_2}.$$
Hence the operator $H(l_1, \ldots, l_N)$ has the following form:

$$H(l_1, \ldots, l_N) = \Bigl(z_1 - i\frac{\partial}{\partial w_1}\Bigr)^2 + \Bigl(z_2 - i\frac{\partial}{\partial w_2}\Bigr)^2 - z_3^2\, n^2(w_1, w_2).$$
The subsequent argument depends on which operators we include in the definition of the asymptotic expansions considered. If we consider asymptotic solutions with respect to smoothness, then the operators $A_1$ and $A_2$ serve as large parameters. Accordingly, the Hamiltonian function is equal to $(z_1 + p_1)^2 + (z_2 + p_2)^2$ (where $p_1$ and $p_2$ are the variables dual to $w_1$ and $w_2$, respectively), and there are no real characteristics.
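The identity behind the left ordered representation can be sanity-checked directly. The following sketch (our own verification, not from the book) takes $A_1 = -i\,\partial/\partial x$, $B_1 = x$ and the simple symbol $f(z_1, w_1) = z_1 w_1$, and confirms that composing $A_1$ with the ordered operator of $f$ gives the ordered operator of $l_{A_1} f = (z_1 - i\,\partial/\partial w_1) f = z_1^2 w_1 - i z_1$:

```python
# Symbolic check of A1 [[f]] = [[l_{A1} f]] for f(z1, w1) = z1*w1
# (our own verification; the function names here are not the book's).
import sympy as sp

x = sp.symbols('x')
u = sp.sin(x) * sp.exp(x**2 / 2)   # arbitrary test function

A1 = lambda g: -sp.I * sp.diff(g, x)   # A_1 = -i d/dx  (Feynman index 1, acts first)
B1 = lambda g: x * g                   # B_1 = multiplication by x (index 2, acts second)

# Ordered operator of f(z1, w1) = z1*w1: first A1, then B1.
f_op = lambda g: B1(A1(g))

# l_{A1} f = z1^2 w1 - i z1, whose ordered operator is B1∘A1∘A1 - i*A1.
lhs = sp.simplify(A1(f_op(u)))
rhs = sp.simplify(B1(A1(A1(u))) - sp.I * A1(u))
print(sp.simplify(lhs - rhs))  # 0
```

Both sides reduce to $-u' - x u''$, as the commutation relation $[-i\,\partial/\partial x,\, x] = -i$ predicts.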
1. Preliminaries
On the other hand, if we consider asymptotic expansions with respect to a parameter, then only the operator $A_3$ is to be included in the definition of the scale. Then the Hamiltonian function equals

$$p_1^2 + p_2^2 - z_3^2\, n^2(w_1, w_2),$$
which is in full agreement with the Hamilton-Jacobi equation (III.6).
1.6 Commutation of an h⁻¹-Pseudodifferential Operator with an Exponential

The WKB method of constructing asymptotic expansions of solutions to differential equations, as well as several related approaches, uses Ansätze of the form

$$u(x, h) = e^{iS(x)/h}\, \varphi(x, h)$$

for the solutions of the differential equation

$$P\Bigl(\overset{2}{x},\; -ih\,\overset{1}{\frac{\partial}{\partial x}}\Bigr)\, u(x, h) = 0.$$

Here $\varphi(x, h)$ depends regularly on the small parameter $h$, and the method is to substitute $u = \exp(iS/h)\varphi$ into the equation, permute $P$ and $\exp(iS/h)$, and then solve the obtained equation for $\varphi$ by means of regular perturbation theory with respect to $h$. Thus the point is to expand the operator

$$e^{-iS(x)/h}\, P\Bigl(\overset{2}{x},\; -ih\,\overset{1}{\frac{\partial}{\partial x}}\Bigr)\, e^{iS(x)/h} \tag{III.12}$$
in a series in powers of h. The conventional technique used in this situation is the stationary phase method. We present here a different technique that uses noncommutative functional calculus.

Denote $U = e^{iS(x)/h}$. The mapping $A \mapsto U^{-1} A U$ is an algebra homomorphism; it follows from Theorem 1.4, 1° that

$$U^{-1}\Bigl[\Bigl[P\Bigl(\overset{2}{x},\; -ih\,\overset{1}{\frac{\partial}{\partial x}}\Bigr)\Bigr]\Bigr] U = P\Bigl(\overset{2}{(U^{-1} x U)},\; \overset{1}{\bigl(U^{-1}(-ih\,\partial/\partial x)U\bigr)}\Bigr).$$

Substituting $U = e^{iS(x)/h}$ into the latter formula and taking into account that

$$e^{-iS(x)/h}\, x\, e^{iS(x)/h} = x$$

and

$$e^{-iS(x)/h}\Bigl(-ih\frac{\partial}{\partial x}\Bigr)e^{iS(x)/h} = -ih\frac{\partial}{\partial x} + \frac{\partial S}{\partial x},$$

we obtain the expression for the operator (III.12) in the form

$$e^{-iS(x)/h}\Bigl[\Bigl[P\Bigl(\overset{2}{x},\; -ih\,\overset{1}{\frac{\partial}{\partial x}}\Bigr)\Bigr]\Bigr] e^{iS(x)/h} = \Bigl[\Bigl[P\Bigl(\overset{2}{x},\; \overset{1}{\Bigl[\Bigl[-ih\frac{\partial}{\partial x} + \frac{\partial S}{\partial x}\Bigr]\Bigr]}\Bigr)\Bigr]\Bigr].$$

Thus, it suffices to calculate the operator

$$P\Bigl(\overset{2}{x},\; \overset{1}{\Bigl[\Bigl[-ih\frac{\partial}{\partial x} + \frac{\partial S}{\partial x}\Bigr]\Bigr]}\Bigr).$$
In doing so, we may pay no attention to the argument $x$ and consider simply

$$f\Bigl(\Bigl[\Bigl[-ih\frac{\partial}{\partial x} + \frac{\partial S}{\partial x}\Bigr]\Bigr]\Bigr)$$

with a given function $f(z)$ (the general case is no more complicated). In order to obtain the expansion up to a certain power of the small parameter $h$, we use the Taylor formula (see the discussion following Theorem 1.12). Using this formula for $f(B + \varepsilon C)$ with $B = \partial S/\partial x$, $C = -i\,\partial/\partial x$, $\varepsilon = h$, we obtain²

$$f\Bigl(\Bigl[\Bigl[-ih\frac{\partial}{\partial x} + \frac{\partial S}{\partial x}\Bigr]\Bigr]\Bigr) = f\Bigl(\frac{\partial S}{\partial x}\Bigr) + h\Bigl[f'\Bigl(\frac{\partial S}{\partial x}\Bigr)\Bigl(-i\frac{\partial}{\partial x}\Bigr) - \frac{1}{2}\,\mathrm{ad}_{\partial S/\partial x}\Bigl(-i\frac{\partial}{\partial x}\Bigr)\, f''\Bigl(\frac{\partial S}{\partial x}\Bigr)\Bigr] + O(h^2),$$

since $\mathrm{ad}^2_{\partial S/\partial x}(C) = 0$. Furthermore, we have

$$\mathrm{ad}_{\partial S/\partial x}\Bigl(-i\frac{\partial}{\partial x}\Bigr) = \Bigl[\frac{\partial S}{\partial x},\; -i\frac{\partial}{\partial x}\Bigr] = i\,\frac{\partial^2 S}{\partial x^2},$$

which implies that

$$f\Bigl(\Bigl[\Bigl[-ih\frac{\partial}{\partial x} + \frac{\partial S}{\partial x}\Bigr]\Bigr]\Bigr) = f\Bigl(\frac{\partial S}{\partial x}\Bigr) - ih\Bigl[f'\Bigl(\frac{\partial S}{\partial x}\Bigr)\frac{\partial}{\partial x} + \frac{1}{2}\,\frac{\partial^2 S}{\partial x^2}\, f''\Bigl(\frac{\partial S}{\partial x}\Bigr)\Bigr] + O(h^2).$$
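The two-term expansion can be verified symbolically for a concrete choice of $f$. The following sketch (our own check, not part of the text) takes $f(z) = z^2$, computes $e^{-iS/h}(-ih\,d/dx)^2 e^{iS/h}\varphi$ exactly, and subtracts $f(S') - ih[f'(S')\,\partial/\partial x + \tfrac12 S'' f''(S')]$ applied to $\varphi$; only an $O(h^2)$ remainder survives:

```python
# Check of the two-term expansion for f(z) = z^2 (our own verification).
import sympy as sp

x, h = sp.symbols('x h', real=True)
S = sp.Function('S')(x)
phi = sp.Function('phi')(x)

D = lambda g: -sp.I * h * sp.diff(g, x)                      # -ih d/dx
conj = sp.exp(-sp.I*S/h) * D(D(sp.exp(sp.I*S/h) * phi))      # exact conjugation
exact = sp.expand(sp.powsimp(sp.expand(conj)))               # cancel the exponentials

Sx, Sxx = sp.diff(S, x), sp.diff(S, x, 2)
# f(z) = z^2: f' = 2z, f'' = 2, so the expansion reads
# S'^2 phi - ih (2 S' phi' + S'' phi) + O(h^2).
approx = Sx**2 * phi - sp.I*h*(2*Sx*sp.diff(phi, x) + Sxx*phi)

remainder = sp.simplify(exact - sp.expand(approx))
print(remainder)   # only the O(h^2) piece, -h^2 * phi'', remains
```

The exact conjugation gives $S'^2\varphi - 2ihS'\varphi' - ihS''\varphi - h^2\varphi''$, so the remainder is exactly $-h^2\varphi''$.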
Let us also mention an elegant exact expression for $f([[-ih\,\partial/\partial x + \partial S/\partial x]])$ in terms of functions of noncommuting operators (see [98]):

$$f\Bigl(\Bigl[\Bigl[-ih\frac{\partial}{\partial x} + \frac{\partial S}{\partial x}\Bigr]\Bigr]\Bigr) = f\Bigl(-ih\,\overset{2}{\frac{\partial}{\partial x}} + \delta S(\overset{3}{x}, \overset{1}{x})\Bigr), \tag{III.13}$$

where $\delta S(x, y) = (S(x) - S(y))/(x - y)$ is the difference quotient of $S$.

² For simplicity, we write down only the two main terms of the expansion; the subsequent terms can easily be written down with the help of Taylor's formula, and we leave this computation to the reader.
We are going to prove this formula with the use of the definition of functions of operators via the Fourier transform (this is quite natural if one deals with h⁻¹-pseudodifferential operators). Therefore, we first consider $f(z) = e^{itz}$; the general case follows by integration. Taking into account the relation

$$\hat p + \frac{\partial S}{\partial x} = e^{-iS(x)/h}\,\hat p\, e^{iS(x)/h}, \qquad \text{where } \hat p = -ih\,\frac{\partial}{\partial x},$$

we obtain the formula

$$e^{it[[\hat p + \partial S/\partial x]]} = e^{-iS(x)/h}\, e^{it\hat p}\, e^{iS(x)/h} = e^{i\,[\,t\,\overset{2}{\hat p}\, +\, (S(\overset{1}{x}) - S(\overset{3}{x}))/h\,]}.$$

Using the obvious relation

$$e^{it\hat p}\,\varphi(x) = \varphi(x + th)\, e^{it\hat p}$$

and the identity $S(\overset{1}{x}) - S(\overset{3}{x}) = (\overset{1}{x} - \overset{3}{x})\,\delta S(\overset{3}{x}, \overset{1}{x})$, we can permute the operators $\hat p$ and $x$ and obtain

$$e^{it[[\hat p + \partial S/\partial x]]} = e^{i\,[\,t\,\overset{2}{\hat p}\, +\, ((\overset{3}{x} + th) - \overset{3}{x})\,\delta S(\overset{3}{x}, \overset{1}{x})/h\,]} = e^{it\,(\overset{2}{\hat p}\, +\, \delta S(\overset{3}{x}, \overset{1}{x}))}.$$

Representing now both sides of formula (III.13) via the Fourier transform of $f(z)$ and using the latter relation, we obtain the desired result.

This is indeed a remarkable formula. It gives a closed-form expression for

$$f\Bigl(\Bigl[\Bigl[\hat p + \frac{\partial S}{\partial x}\Bigr]\Bigr]\Bigr),$$

and the expansion in powers of $h$ can be obtained in a very simple way, just by expanding into the ordinary Taylor series in powers of $-ih\,\partial/\partial x$.
1.7 Summary: the General Scheme

Here we try to summarize the observations made above. Suppose that we intend to construct an asymptotic expansion of some type for the equation

$$P\Bigl(x,\; -i\frac{\partial}{\partial x}\Bigr)\, u(x) = f(x),$$

or for the Cauchy problem

$$\Bigl(-i\frac{\partial}{\partial t}\Bigr)^m u + P\Bigl(x, t,\; -i\frac{\partial}{\partial x}\Bigr)\, u = f,$$

with given Cauchy data $\partial^j u/\partial t^j|_{t=0}$, $j = 0, \ldots, m-1$,
or for any other well-posed problem for a differential equation. The first stage of analysis consists in introducing operators $A_1, \ldots, A_n$ (probably not commuting with each other), which would define the type of asymptotic expansion. This means that one constructs an asymptotic expansion of the solution to the original problem with respect to the scale $H^s(A)$ of function spaces with norms determined by the equation

$$\|u\|_s = \|(1 + A_1^2 + \cdots + A_n^2)^{s/2} u\|_{L_2}.$$

Next, it is necessary to represent the main problem in operator form. As we will see in concrete examples in the subsequent sections, this might require introducing some additional operators $B_1, \ldots, B_m$ that do not occur among $A_1, \ldots, A_n$. It may even happen that one and the same operator occurs twice in the sequence $A_1, \ldots, A_n, B_1, \ldots, B_m$: once among the $A_i$'s, defining the type of asymptotic expansions, and once more among $B_1, \ldots, B_m$; this looks as if such an operator splits into two on passing to the operator interpretation. At this stage we represent the problem under investigation in the form of an operator equation

$$\Bigl[\Bigl[H(\overset{i_1}{A}_1, \ldots, \overset{i_n}{A}_n, \overset{j_1}{B}_1, \ldots, \overset{j_m}{B}_m)\;\Phi(\overset{i_1}{A}_1, \ldots, \overset{i_n}{A}_n, \overset{j_1}{B}_1, \ldots, \overset{j_m}{B}_m)\Bigr]\Bigr] = 1$$

(for stationary equations such as the Helmholtz equation) or of a Cauchy problem

$$\Bigl[\Bigl[\Bigl(\Bigl(-i\frac{\partial}{\partial t}\Bigr)^m + H(\overset{i_1}{A}_1, \ldots, \overset{i_n}{A}_n, \overset{j_1}{B}_1, \ldots, \overset{j_m}{B}_m)\Bigr)\,\Phi(\overset{i_1}{A}_1, \ldots, \overset{i_n}{A}_n, \overset{j_1}{B}_1, \ldots, \overset{j_m}{B}_m)\Bigr]\Bigr] = 0,$$

$$\frac{\partial^j \Phi}{\partial t^j}\Big|_{t=0} = 0, \quad j = 0, \ldots, m-2; \qquad \frac{\partial^{m-1} \Phi}{\partial t^{m-1}}\Big|_{t=0} = 1,$$
or in some other operatorial form (as usual, 1 denotes the identity operator in the function space considered). The operators $A_1, \ldots, A_n$ and $B_1, \ldots, B_m$ occurring in these representations should be chosen in such a way that

1) the class of operators

$$\Phi(\overset{i_1}{A}_1, \ldots, \overset{i_n}{A}_n, \overset{j_1}{B}_1, \ldots, \overset{j_m}{B}_m)$$

for symbols $\Phi$ in the chosen symbol class $\mathcal{F}$ forms an operator algebra;

2) the symbol class $\mathcal{F}$ is large enough to contain asymptotic solutions of the equations for the symbols given below.

Using the first requirement, we pass from the operator equations to equations for the symbols of these operators by using the technique of left ordered representations. To this end, one computes the operators $l_{A_j}$, $j = 1, \ldots, n$, and $l_{B_j}$, $j = 1, \ldots, m$, of the left ordered representation associated with the tuple $(\overset{i_1}{A}_1, \ldots, \overset{i_n}{A}_n, \overset{j_1}{B}_1, \ldots, \overset{j_m}{B}_m)$ and writes out the equation for the symbol:

$$H(l_{A_1}, \ldots, l_{A_n}, l_{B_1}, \ldots, l_{B_m})\;\Phi(z_1, \ldots, z_n, w_1, \ldots, w_m) = 1$$
(or the corresponding symbolic Cauchy problem for the operator Cauchy problem). One should now construct the asymptotic solution of the resulting symbolic problem with respect to decay as $(z_1, \ldots, z_n) \to \infty$ (if desired, one can pass to asymptotic expansions with respect to smoothness by considering the Fourier transform with respect to $z$). This stage of constructing asymptotic expansions has nothing to do with noncommuting operators, except for the fact that it is at this stage that the minimal extent of the symbol space $\mathcal{F}$ to be considered becomes clear. Of course, the most appropriate variant would be that in which $\mathcal{F}$ is very large and so automatically contains the solutions of problems of this type. However, it should be taken into account that if $\mathcal{F}$ is too large, then unavoidable function-analytic difficulties may well occur. Thus one should be very careful.

If we succeed, then it is quite easy to construct the asymptotic solutions. For example, the asymptotic expansion of the solution to the equation $P(x, -i\partial/\partial x)u = f$ has the form

$$u(x) = \Phi(\overset{i_1}{A}_1, \ldots, \overset{i_n}{A}_n, \overset{j_1}{B}_1, \ldots, \overset{j_m}{B}_m)\, f(x),$$

where $\Phi(z, w)$ is an asymptotic solution of the corresponding symbolic problem with respect to negative powers of $z$, and the asymptotic solution of the Cauchy problem has the form

$$u(x, t) = \int_0^t \Phi\bigl((t - \tau), \overset{i_1}{A}_1, \ldots, \overset{i_n}{A}_n, \overset{j_1}{B}_1, \ldots, \overset{j_m}{B}_m\bigr)\, f(\tau, x)\, d\tau,$$

where $\Phi(t, z, w)$ is the asymptotic solution of the corresponding symbolic Cauchy problem. Thus, the operator calculus provides a method of reducing the problem of constructing asymptotic expansions of arbitrary type for (differential) equations to the problem of constructing asymptotic solutions for large $z$ for pseudodifferential equations.
2 Difference and Difference-Differential Equations

Noncommutative analysis offers convenient techniques to deal with difference and difference-differential approximations to differential equations. Such approximations are obtained by replacing the derivatives occurring in a differential equation by their finite-difference counterparts. For example, $\partial f/\partial x$ can be replaced on a uniform grid by either

$$\delta_x^+ f(x) = \frac{f(x + h) - f(x)}{h}$$

(the forward difference), or

$$\delta_x^- f(x) = \frac{f(x) - f(x - h)}{h}$$

(the backward difference), or, more rarely,

$$\delta_x f(x) = \frac{f(x + h/2) - f(x - h/2)}{h}$$

(the central difference); here $h$ is the mesh width. The second derivative $\partial^2 f/\partial x^2$ is usually replaced by its central-difference approximation

$$\delta_x^+ \delta_x^- f(x) = \frac{f(x + h) + f(x - h) - 2f(x)}{h^2},$$

and the mixed derivatives $\partial^2 f/\partial x_i \partial x_j$ turn into something like $\delta_{x_i}^{\pm}\delta_{x_j}^{\pm} f$ with various combinations of signs. Similar expressions can be written out for higher-order derivatives.

The resulting equation can be viewed as a finite linear algebraic system (or a system of ordinary differential equations if, say, the t-derivatives are retained in the equation) for the finite set of values of the unknown function(s) at the mesh points. It is to be supplemented with the related boundary and (or) initial conditions and then solved numerically. There is an advanced theory concerning the choice of grids and approximations, the convergence and stability of solutions as $h \to 0$, etc. How can noncommutative calculus be useful in these topics? Without going into detail, let us briefly outline three different approaches that can be applied here.
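The accuracy of these replacements is easy to observe numerically. The following standalone check (ours, not from the book) confirms the standard orders: one-sided differences are $O(h)$, the central difference is $O(h^2)$:

```python
# Orders of accuracy of the difference quotients (our own numerical check):
# shrinking h by a factor of 10 shrinks one-sided errors ~10x, central ~100x.
import math

f, fprime = math.sin, math.cos
x = 1.0

def errors(h):
    fwd = (f(x + h) - f(x)) / h              # forward difference
    bwd = (f(x) - f(x - h)) / h              # backward difference
    ctr = (f(x + h/2) - f(x - h/2)) / h      # central difference
    return (abs(fwd - fprime(x)), abs(bwd - fprime(x)), abs(ctr - fprime(x)))

e1, e2 = errors(1e-2), errors(1e-3)
print([round(a / b) for a, b in zip(e1, e2)])   # -> [10, 10, 100]
```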
2.1 Difference Approximations as Pseudodifferential Equations

Since the translation operator

$$T_h f(x) = f(x + h)$$

can be represented as a function of $-i\,\partial/\partial x$,

$$T_h = e^{h\,\partial/\partial x} = e^{ih(-i\,\partial/\partial x)},$$

the same is clearly true of difference approximations to derivatives. Specifically, we have

$$\delta_x^+ = \frac{e^{h\,\partial/\partial x} - 1}{h} = \frac{e^{ih(-i\,\partial/\partial x)} - 1}{h},$$

$$\delta_x^- = \frac{1 - e^{-ih(-i\,\partial/\partial x)}}{h},$$

$$\delta_x^+ \delta_x^- = \frac{e^{ih(-i\,\partial/\partial x)} + e^{-ih(-i\,\partial/\partial x)} - 2}{h^2} = \frac{2\cos(h(-i\,\partial/\partial x)) - 2}{h^2} = -\frac{4}{h^2}\,\sin^2\Bigl(-\frac{ih}{2}\frac{\partial}{\partial x}\Bigr),$$
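On a plane wave $f(x) = e^{i\xi x}$ the operator $-i\,\partial/\partial x$ acts as multiplication by $\xi$, so the second difference must act as multiplication by $-(4/h^2)\sin^2(\xi h/2)$. A quick numerical confirmation (our own, not from the book):

```python
# delta+ delta- on a plane wave versus its symbol -(4/h^2) sin^2(xi*h/2)
# (our own verification).
import cmath, math

xi, h, x = 0.7, 0.05, 0.3
f = lambda x: cmath.exp(1j * xi * x)

second_diff = (f(x + h) + f(x - h) - 2 * f(x)) / h**2
symbol = -(4 / h**2) * math.sin(xi * h / 2)**2 * f(x)

print(abs(second_diff - symbol) < 1e-10)   # True
```

The agreement is exact up to floating-point roundoff, since $(e^{i\xi h} + e^{-i\xi h} - 2)/h^2 = -(4/h^2)\sin^2(\xi h/2)$ identically.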
etc. With these expressions substituted for the derivatives, a differential operator turns into a pseudodifferential operator. For example, consider the wave equation in an inhomogeneous medium

$$\frac{\partial^2 u}{\partial t^2} - c^2(x)\,\frac{\partial^2 u}{\partial x^2} = 0, \qquad t \in \mathbb{R}, \quad x \in \mathbb{R}^1,$$

where $c(x)$ is the speed of sound depending on the position $x$. By replacing $\partial^2/\partial x^2$ with its central-difference approximation, we obtain the difference-differential equation

$$\frac{\partial^2 u}{\partial t^2} + \frac{4c^2(x)}{h^2}\,\sin^2\Bigl(-\frac{ih}{2}\frac{\partial}{\partial x}\Bigr)\, u = 0.$$

Denote by

$$u_n(t) = u(nh, t)$$

the values of $u$ at the mesh points. Then the equation takes the form

$$\ddot u_n = \frac{c_n^2}{h^2}\,(u_{n+1} - 2u_n + u_{n-1}), \qquad n = 0, \pm 1, \pm 2, \ldots,$$

where $c_n = c(nh)$. The latter system describes vibrations of a one-dimensional crystal lattice in which the atoms only interact with their nearest neighbours and the interaction is Hookean. It is interesting to study the relationship between the solutions of the two equations (with appropriately related initial data) as $h \to 0$. For $u$ sufficiently smooth, the difference-differential operator is a good approximation to the wave operator (one can expand $\sin^2(-(ih/2)\,\partial/\partial x)$ into the Maclaurin series, and the remainder will decay as $h \to 0$ since $u$ is smooth); but what can be said of arbitrary (say discontinuous) $u$? The advantage of the operator approach is that the difference-differential equation is considered as a pseudodifferential equation, and so the standard WKB and canonical operator machinery can be used to find its asymptotic solutions with respect to either smoothness, $h \to 0$, or both. Precise analysis (not reproduced here) shows that, for smooth initial data, the solutions of the difference-differential equation tend to the corresponding solutions of the wave equation, whereas the situation is a bit more complicated for discontinuous initial data. Specifically, the "leading" discontinuity of the solution to the difference-differential equation propagates along the characteristics of the wave equation, but it is accompanied by a "tail of vibrations" that occupies a certain region in the space $\mathbb{R}^2_{x,t}$. This region can be described in terms of the characteristics (we refer the reader to [129] for a detailed exposition). Thus, it is possible to study the behaviour, as the mesh width tends to zero, of solutions to difference and difference-differential equations with ill-behaved initial data or right-hand sides.
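The relationship between the two equations is visible already in their dispersion relations. Substituting a plane wave $u_n(t) = e^{i(\xi n h - \Omega t)}$ into the lattice system (with constant $c$) gives $\Omega^2 = (4c^2/h^2)\sin^2(\xi h/2)$, which converges to the wave-equation dispersion $\Omega = c|\xi|$ as $h \to 0$. A small check of our own:

```python
# Lattice dispersion Omega(xi) = (2c/h)|sin(xi*h/2)| vs. the wave equation's
# Omega = c*|xi| (our own illustration, constant sound speed).
import math

c, xi = 1.0, 2.0

def omega_lattice(h):
    return (2 * c / h) * abs(math.sin(xi * h / 2))

for h in (0.5, 0.05, 0.005):
    print(h, omega_lattice(h))
# The values approach c*xi = 2.0 as the mesh width h shrinks.
```

The deficit $c|\xi| - \Omega(\xi)$ is $O(h^2)$ for fixed $\xi$ but of order one for $\xi \sim 1/h$, which is exactly why rough (discontinuous) data, rich in high frequencies, behave differently from smooth data.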
2.2 Difference Approximations as Functions of x and δ_x^±

Another approach, which can also prove valuable in some applications, consists in representing difference operators as functions of $x$, $\delta_x^+$, and $\delta_x^-$. Then difference equations can be written as something like

$$F(\overset{3}{x}, \overset{2}{\delta}{}_x^+, \overset{1}{\delta}{}_x^-)\, u = v,$$

and the problem is easily reduced to finding the symbol $G(y_1, y_2, y_3)$ of the (asymptotic) inverse $G(\overset{3}{x}, \overset{2}{\delta}{}_x^+, \overset{1}{\delta}{}_x^-)$ of $F(\overset{3}{x}, \overset{2}{\delta}{}_x^+, \overset{1}{\delta}{}_x^-)$, where the equation for $G(y_1, y_2, y_3)$ is obtained via the ordered representation.

Therefore, let us calculate the left ordered representation for the Feynman tuple $(\overset{3}{x}, \overset{2}{\delta}{}_x^+, \overset{1}{\delta}{}_x^-)$; the corresponding variables will be denoted by

$$y_1 \leftrightarrow \overset{3}{x}, \qquad y_2 \leftrightarrow \overset{2}{\delta}{}_x^+, \qquad y_3 \leftrightarrow \overset{1}{\delta}{}_x^-$$

(for clarity, we assume that $x \in \mathbb{R}^1$, so that no additional indices appear). First of all, let us find the commutation relations satisfied by these operators. Denote $\delta_x^+ = \delta^+$ and $\delta_x^- = \delta^-$ (i.e., omit the index $x$). By the Leibniz rule for difference derivatives,

$$\delta^+(x f(x)) = f(x)\,\delta^+(x) + (x + h)\,\delta^+(f(x)) = f(x) + (x + h)\,\delta^+(f(x)),$$

that is,

$$\delta^+ x = (x + h)\,\delta^+ + 1.$$

Similarly,

$$\delta^- x = (x - h)\,\delta^- + 1.$$
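These commutation relations can be verified directly on a sample function (a check of our own, not part of the text):

```python
# Checking delta+ x = (x+h) delta+ + 1 and delta- x = (x-h) delta- + 1
# on a concrete function (our own verification).
h = 0.1
dplus  = lambda f: (lambda x: (f(x + h) - f(x)) / h)
dminus = lambda f: (lambda x: (f(x) - f(x - h)) / h)

f = lambda x: x**3 - 2*x + 1
x = 0.7

lhs_p = dplus(lambda s: s * f(s))(x)        # delta+ (x f)
rhs_p = (x + h) * dplus(f)(x) + f(x)        # ((x+h) delta+ + 1) f
lhs_m = dminus(lambda s: s * f(s))(x)       # delta- (x f)
rhs_m = (x - h) * dminus(f)(x) + f(x)       # ((x-h) delta- + 1) f
print(abs(lhs_p - rhs_p) < 1e-12, abs(lhs_m - rhs_m) < 1e-12)   # True True
```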
Finally, both $\delta^+$ and $\delta^-$ are linear combinations of translations and scalar operators. Since the translations commute, so do $\delta^+$ and $\delta^-$:

$$[\delta^+, \delta^-] = 0.$$

We can now evaluate the ordered representation operators. We have

$$x\,\bigl[\bigl[f(\overset{3}{x}, \overset{2}{\delta}{}^+, \overset{1}{\delta}{}^-)\bigr]\bigr] = \bigl[\bigl[x\, f(\overset{3}{x}, \overset{2}{\delta}{}^+, \overset{1}{\delta}{}^-)\bigr]\bigr],$$

so that

$$l_x = y_1.$$

Next, pushing $\delta^+$ through a function of $x$ with the help of the commutation relation $\delta^+ x = (x + h)\delta^+ + 1$, we obtain

$$\delta^+\bigl[\bigl[f(\overset{3}{x}, \overset{2}{\delta}{}^+, \overset{1}{\delta}{}^-)\bigr]\bigr] = \Bigl[\Bigl[f(\overset{3}{x} + h, \overset{2}{\delta}{}^+, \overset{1}{\delta}{}^-)\,\delta^+ + \frac{\delta f}{\delta y_1}(\overset{3}{x}, \overset{2}{\delta}{}^+, \overset{1}{\delta}{}^-)\Bigr]\Bigr],$$

and hence

$$l_{\delta^+} = y_2\, T^h_{y_1} + \delta^+_{y_1},$$

where the operator $T^h_{y_1}$ is the shift along the axis $y_1$:

$$T^h_{y_1} f(y_1) = f(y_1 + h).$$

Similarly,

$$l_{\delta^-} = y_3\, T^{-h}_{y_1} + \delta^-_{y_1}.$$

The tuple $(l_x, l_{\delta^+}, l_{\delta^-})$ satisfies the generalized Jacobi condition. Indeed, we have

$$[l_{\delta^+}, l_{\delta^-}] = 0,$$

since both operators involve translations with respect to $y_1$ and multiplications by $y_2$ or $y_3$. Furthermore,

$$l_{\delta^+}\, l_x = (y_2 T^h_{y_1} + \delta^+_{y_1})\, y_1 = y_2(y_1 + h)T^h_{y_1} + (y_1 + h)\delta^+_{y_1} + 1 = (l_x + h)\, l_{\delta^+} + 1,$$

and similarly

$$l_{\delta^-}\, l_x = (l_x - h)\, l_{\delta^-} + 1.$$

Now, to invert the difference operator $F(\overset{3}{x}, \overset{2}{\delta}{}_x^+, \overset{1}{\delta}{}_x^-)$, one should solve the equation

$$F\bigl(y_1,\; y_2 T^h_{y_1} + \delta^+_{y_1},\; y_3 T^{-h}_{y_1} + \delta^-_{y_1}\bigr)\bigl(G(y_1, y_2, y_3)\bigr) = 1,$$

which is again a difference equation, but with a special right-hand side.
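The generalized Jacobi relation $l_{\delta^+} l_x = (l_x + h)\, l_{\delta^+} + 1$ can be checked symbolically on symbols $g(y_1)$, with $y_2$ entering as a parameter (a verification of our own, not from the book):

```python
# Verifying l_{d+} l_x = (l_x + h) l_{d+} + 1 on symbols g(y1)
# (our own check; y2 is a commuting parameter here).
import sympy as sp

y1, y2, h = sp.symbols('y1 y2 h')
g = sp.Function('g')(y1)

l_x  = lambda s: y1 * s
l_dp = lambda s: y2 * s.subs(y1, y1 + h) + (s.subs(y1, y1 + h) - s) / h

lhs = l_dp(l_x(g))                          # l_{d+} l_x g
rhs = l_x(l_dp(g)) + h * l_dp(g) + g        # ((l_x + h) l_{d+} + 1) g
print(sp.simplify(sp.expand(lhs - rhs)))    # 0
```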
2.3 Another Approach to Difference Approximations

The commutation relations between $\delta^+$ (or $\delta^-$) and $x$ in the preceding subsection can be considered as small perturbations of those between $\partial/\partial x$ and $x$. So far, the logical sequence of reasoning was as follows: we replace derivatives by difference operators and study the arising operators, commutation relations, etc. Putting a different emphasis on it, we might wish to perturb the commutation relations directly, without being bound to any particular replacements in the equation itself. We have already considered an example of this sort in Subsection 2.2; let us present another one. Consider the Schrödinger equation

$$\Bigl[-\frac{h^2}{2}\frac{\partial^2}{\partial x^2} + V(x)\Bigr]\psi(x) - E\psi(x) = 0,$$

where $E$ is a constant and $V(x)$ a smooth potential. Set

$$A = e^x, \qquad B = ihe^{-x}\frac{\partial}{\partial x}.$$

Then $[A, B] = -ih$, and the Schrödinger equation takes the form

$$\Bigl[\frac{1}{2}(AB)^2 + V(\ln A)\Bigr]\psi(x) - E\psi(x) = 0.$$

We fix the operator $A = e^x$ and assume that the commutation relation undergoes a small perturbation,

$$e^a AB - BA = -ih,$$

where $a$ is small. What form will the equation take? To answer this question, we should find the perturbed operator $B$. We have $A^{-1}BA = e^a B + ihA^{-1}$, or

$$e^{-x}\, B\, e^x = e^a B + ihe^{-x}.$$

Let us seek $B$ in the form $B = B(\overset{2}{x}, \overset{1}{\partial/\partial x})$. We have

$$e^{-x}\Bigl[\Bigl[B\Bigl(\overset{2}{x}, \overset{1}{\frac{\partial}{\partial x}}\Bigr)\Bigr]\Bigr]e^x = B\Bigl(\overset{2}{(e^{-x} x\, e^x)},\; \overset{1}{\Bigl(e^{-x}\frac{\partial}{\partial x} e^x\Bigr)}\Bigr) = B\Bigl(\overset{2}{x},\; \overset{1}{\Bigl(\frac{\partial}{\partial x} + 1\Bigr)}\Bigr),$$

so that the equation for $B$ becomes

$$B\Bigl(\overset{2}{x},\; \overset{1}{\Bigl(\frac{\partial}{\partial x} + 1\Bigr)}\Bigr) = e^a B\Bigl(\overset{2}{x}, \overset{1}{\frac{\partial}{\partial x}}\Bigr) + ihe^{-x},$$

or, passing to the symbols on both sides,

$$B(x, p + 1) = e^a B(x, p) + ihe^{-x}.$$

Obviously the last equation has quite a few solutions. We take the particular solution

$$B(x, p) = ihe^{-x}\,\frac{1 - e^{ap}}{1 - e^a}.$$

Indeed, we have

$$B(x, p + 1) = ihe^{-x}\,\frac{1 - e^{a(p+1)}}{1 - e^a} = ihe^{-x}\,\frac{(1 - e^a) + (e^a - e^a e^{ap})}{1 - e^a} = ihe^{-x} + e^a B(x, p).$$

The corresponding operator has the form

$$B\Bigl(\overset{2}{x}, \overset{1}{\frac{\partial}{\partial x}}\Bigr) = ihe^{-x}\,\frac{1 - e^{a\,\partial/\partial x}}{1 - e^a},$$

and

$$B\Bigl(\overset{2}{x}, \overset{1}{\frac{\partial}{\partial x}}\Bigr) \to ihe^{-x}\frac{\partial}{\partial x} \quad \text{as } a \to 0.$$

With this choice of $B$, the perturbed equation can be rewritten in the form

$$-\frac{h^2}{2(1 - e^a)^2}\,\{\psi_{n-1} + \psi_{n+1} - 2\psi_n\} + V((n - 1)a)\,\psi_{n-1} - E\psi_{n-1} = 0, \qquad \psi_n = \psi(na).$$

Hence, we have arrived at a difference approximation of the Schrödinger equation on the uniform grid $\{x = na\}$; the example is of course trivial, but in less evident situations the idea may be helpful.
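Both the functional equation for the symbol $B(x, p)$ and the unperturbed limit can be confirmed symbolically (our own check, not part of the text):

```python
# Checking that B(x,p) = ih e^{-x} (1 - e^{ap})/(1 - e^a) solves
# B(x, p+1) = e^a B(x, p) + ih e^{-x}, and that B -> ih e^{-x} p as a -> 0
# (our own verification).
import sympy as sp

x, p, a, h = sp.symbols('x p a h')
B = sp.I * h * sp.exp(-x) * (1 - sp.exp(a * p)) / (1 - sp.exp(a))

eq = sp.simplify(B.subs(p, p + 1) - (sp.exp(a) * B + sp.I * h * sp.exp(-x)))
print(eq)                  # 0
print(sp.limit(B, a, 0))   # the unperturbed symbol ih e^{-x} p
```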
3 Propagation of Electromagnetic Waves in Plasma

Let us consider the construction of high-frequency asymptotic expansions for a wave generated in plasma by a point source. This problem is a classical one (see [120]) and is characterized by an interesting physical phenomenon. Aside from the usual rays predicted by geometric optics, there occur so-called transient rays; in the illuminated region their contribution is of an order less than that of the geometric optics rays, but they still give the leading term of the asymptotic expansion in the umbral region. Here we show that the appearance of transient rays is caused by turning on the point source at the initial moment of time (t = 0). Concurrently, we compute the so-called diffraction coefficient, which is in fact the ratio of the two amplitudes corresponding, respectively, to the rays of geometric optics and to the transient rays.

The cited phenomenon (the appearance of transient rays) was discovered very long ago, but until very recently the diffraction coefficient was computed by a semiheuristic method based on model problems. The uniform asymptotic expansion with respect to smoothness and parameter provides rigorous justification for both the diffraction coefficient and the appearance of the transient part of the asymptotic expansion. Since it is just the simultaneous asymptotic expansion (with respect to parameter and smoothness) that is needed, we are led to the use of noncommuting operators. Their usage provides uniform asymptotic expansions admitting analysis in physical terms.

Our exposition is arranged as follows. In the first subsection we describe the statement of the problem and choose a family of noncommuting operators adequate to the type of asymptotic expansion required. The second subsection deals with the construction of the asymptotic expansion for the considered problem from the viewpoint of noncommutative analysis. The third (and last) subsection is devoted to the analysis of the obtained asymptotic expansion in different zones of the physical space (the illuminated region, the umbral region, etc.).

Of necessity, the exposition in this section is more technical than the authors would like. However, we tried to avoid clumsy calculations by considering the asymptotic solution of the Cauchy problem only for small values of t, which permits us to use solely the WKB method and to sidestep the much more complicated language of Maslov's canonical operator. Of course, with the help of the latter all constructions of this section can be carried out for arbitrary values of t. We refer the reader to [145] and [146] for further details.
3.1 Statement of the Problem

We consider the Cauchy problem

$$\frac{\partial^2 u}{\partial t^2} - c^2\frac{\partial^2 u}{\partial x^2} + \lambda^2 b^2(x)\, u = \lambda\,\delta(x)\, r(t)\, e^{-i\lambda q(t)}, \qquad u|_{t=0} = \frac{\partial u}{\partial t}\Big|_{t=0} = 0,$$

describing propagation of electromagnetic waves in plasma (see, e.g., Lewis [120]). Here $\lambda b(x)$ is the plasma frequency and $\lambda$ the average plasma frequency considered as a large parameter. The function $b(x)$ hence describes spatial inhomogeneity of the plasma and is dimensionless; it is assumed to be everywhere positive. The right-hand side of the equation describes a point source at the origin with amplitude $r(t)$ and instantaneous frequency $\lambda q'(t)$. Our intention is to study the asymptotic behaviour of the solutions as $\lambda \to \infty$.

We begin with several remarks. First, we need to take smoothness into account when constructing asymptotic expansions with respect to $\lambda$, since the right-hand side of the equation contains a distribution. Otherwise each subsequent term of the asymptotic expansion would be less smooth than the preceding one, which makes the expansion physically unjustified. Second, it will be more convenient to consider the Cauchy problem for a homogeneous equation. This can be achieved with the help of the classical Duhamel principle. Specifically, it is easy to see that the solution can be represented in the form

$$u(x, t) = \int_0^t v(x, t, \tau)\, d\tau, \tag{III.14}$$

where $v(x, t, \tau)$ is the solution of the following problem:

$$\frac{\partial^2 v}{\partial t^2} = \hat H v, \qquad v|_{t=\tau} = 0, \quad \frac{\partial v}{\partial t}\Big|_{t=\tau} = F(x, \lambda, \tau). \tag{III.15}$$
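The Duhamel reduction used here is easy to illustrate on a scalar model (a toy example of our own, not the plasma equation): for $u'' + \omega^2 u = F(t)$ with zero data, the homogeneous solution with $v|_{t=\tau} = 0$, $v_t|_{t=\tau} = F(\tau)$ is $v(t, \tau) = F(\tau)\sin(\omega(t-\tau))/\omega$, and $u(t) = \int_0^t v(t, \tau)\, d\tau$:

```python
# Duhamel's principle for u'' + w^2 u = F(t), u(0) = u'(0) = 0
# (our own toy illustration of formula (III.14)-(III.15)).
import math

w = 3.0
F = lambda t: math.cos(t)

def u_duhamel(t, n=4000):
    dt = t / n
    # midpoint rule in tau over v(t, tau) = F(tau) sin(w (t - tau)) / w
    return sum(F((k + 0.5) * dt) * math.sin(w * (t - (k + 0.5) * dt)) / w
               for k in range(n)) * dt

# Closed-form solution of u'' + 9u = cos t with zero data: (cos t - cos 3t)/8.
t = 2.0
exact = (math.cos(t) - math.cos(3 * t)) / 8
print(abs(u_duhamel(t) - exact) < 1e-5)   # True
```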
Here we use the following notation. By $\hat H$ we denote the operator

$$\hat H = c^2\frac{\partial^2}{\partial x^2} - \lambda^2 b^2(x),$$

and

$$F = \lambda\,\delta(x)\, r(t)\, e^{-i\lambda q(t)}$$

is the right-hand side of the original equation. Having in mind that the solutions of (III.15) and the original problem are related, we see that it suffices to construct the simultaneous asymptotic expansion of a solution to (III.15) with respect to smoothness and the large parameter $\lambda$. This being done, the Duhamel formula yields the asymptotic expansion of the same type for the solution to the original problem.

We consider the following set of operators:

$$\overset{1}{A}_1 = -i\frac{\partial}{\partial x}, \qquad \overset{1}{A}_2 = \lambda, \qquad \overset{2}{B} = x,$$

acting on the Hilbert space $H_0 = L_2(\mathbb{R}^n_x \times [1, \infty)_\lambda)$. Only the operators $A_1$ and $A_2$ are "large parameters" of the asymptotic expansion; consequently, the scale $H^s(A)$ is
defined by the following sequence of norms:

$$\|u\|_s = \|(1 + A_1^2 + A_2^2)^{s/2} u\|_{L_2} = \|(1 - \Delta + \lambda^2)^{s/2} u\|_{L_2}.$$

In Subsection 3.2 we construct the asymptotic solution of problem (III.15) in the scale $H^s(A)$. The analysis of the asymptotic behaviour of the solution $u(x, t)$ to the original problem is carried out in Subsection 3.3.
3.2 The Construction of the Asymptotic Expansion

We begin with an operator interpretation of the Cauchy problem under investigation. First of all, we represent $\hat H$ as a function of the operator tuple $(A_1, A_2, B)$:

$$\hat H = -c^2\,\overset{1}{A}{}_1^2 - \overset{1}{A}{}_2^2\, b^2(\overset{2}{B}).$$

Next, we seek the solution to (III.15) in the form

$$v(x, t, \tau) = \Phi(\overset{1}{A}_1, \overset{1}{A}_2, \overset{2}{B}, t, \tau)\, F = \hat\Phi F, \tag{III.16}$$

where $\hat\Phi$ is the operator with unknown symbol $\Phi(p, \lambda, x, t, \tau)$. Here we use the correspondence $p \leftrightarrow A_1$, $\lambda \leftrightarrow A_2$, and $x \leftrightarrow B$ between the variables and the operators; it will always be clear whether, say, $x$ denotes the variable or the corresponding multiplication operator. The operator $\hat\Phi$ should satisfy the following operator Cauchy problem:

$$\frac{\partial^2 \hat\Phi}{\partial t^2} - \hat H\hat\Phi = 0, \qquad \hat\Phi|_{t=\tau} = 0, \quad \frac{\partial\hat\Phi}{\partial t}\Big|_{t=\tau} = 1.$$

Since $\hat H$ is independent of $t$, it suffices to obtain the solution for $\tau = 0$ and then substitute $t - \tau$ for $t$ (in fact, we here use the homogeneity in time of our system). The next step is to transform the operator Cauchy problem into a usual Cauchy problem for the symbol $\Phi(p, \lambda, x, t, \tau)$ of the operator $\hat\Phi$. As was explained in Chapter II, to this end we should use the operators of the left ordered representation for the tuple $(\overset{1}{A}_1, \overset{1}{A}_2, \overset{2}{B})$. These easy-to-compute operators have the following form:

$$l_{A_1} = p - i\frac{\partial}{\partial x}, \qquad l_{A_2} = \lambda, \qquad l_B = x.$$
We substitute these operators into the symbol of $\hat H$ and obtain a Cauchy problem for the symbol $\Phi$:

$$\frac{\partial^2\Phi}{\partial t^2} + c^2\Bigl(p - i\frac{\partial}{\partial x}\Bigr)^2\Phi + \lambda^2 b^2(x)\,\Phi = 0, \qquad \Phi|_{t=0} = 0, \quad \frac{\partial\Phi}{\partial t}\Big|_{t=0} = 1. \tag{III.17}$$
Of course, we are interested in an asymptotic rather than a precise solution. Taking into account the expression for the norms in the spaces $H^s(A)$, where our asymptotic expansions live, we see that it suffices to solve the latter problem to within functions decaying sufficiently rapidly as $(p, \lambda) \to \infty$. Operators corresponding to such symbols are of large negative order in the scale $H^s(A)$. It is therefore natural to consider $\mu = \sqrt{\lambda^2 + p^2}$ as a large parameter and to seek the asymptotic expansion of the solution as $\mu \to \infty$. We set $\omega_1 = p/\mu$ and $\omega_2 = \lambda/\mu$; the "angular parameters" $\omega_1$ and $\omega_2$ range over the unit sphere $\omega_1^2 + \omega_2^2 = 1$. To obtain the asymptotic solution of (III.17) as $\mu \to \infty$ for large (but finite) values of $t$ one should use Maslov's canonical operator method. We have no intention of describing this theory here (see, e.g., [137]) and limit ourselves to obtaining an asymptotic expansion of the solution for small $t$. For these $t$, the WKB approximation can be used:
$$\Phi(p, \lambda, x, t) = e^{i\mu S(\omega, x, t)}\bigl(a_0(\omega, x, t) + \mu^{-1} a_1(\omega, x, t) + \cdots\bigr).$$

Moreover, here we only find the leading term of the asymptotic expansion, via the functions $S(\omega, x, t)$ and $a_0(\omega, x, t)$. Following the standard procedure of the WKB method, we insert $\Phi$ into (III.17), collect similar terms, and equate the coefficients of powers of $\mu$ to zero. This procedure yields the Hamilton–Jacobi equation

$$\Bigl(\frac{\partial S}{\partial t}\Bigr)^2 = c^2\Bigl(\omega_1 + \frac{\partial S}{\partial x}\Bigr)^2 + \omega_2^2\, b^2(x)$$

for $S(\omega, x, t)$ and a transport equation for $a_0(\omega, x, t)$. The Cauchy data in (III.17) induce the initial data

$$S|_{t=0} = 0$$

for the Hamilton–Jacobi equation. Clearly, the latter splits into two equations:

$$\frac{\partial S_\pm}{\partial t} \pm \sqrt{c^2\Bigl(\omega_1 + \frac{\partial S_\pm}{\partial x}\Bigr)^2 + \omega_2^2\, b^2(x)} = 0,$$

and hence we obtain two solutions $S_\pm = S_\pm(\omega, x, t)$. These solutions can be evaluated explicitly by the standard method of characteristics. Specifically, denote by $\mathcal{H}(\omega, x, q)$ the Hamiltonian function

$$\mathcal{H}(\omega, x, q) = \sqrt{c^2(\omega_1 + q)^2 + \omega_2^2\, b^2(x)}$$
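The derivation of the Hamilton–Jacobi equation from (III.17) can be reproduced symbolically. The following sketch (our own check, not from the book) substitutes $\Phi = e^{i\mu S(x,t)} a(x,t)$ with $p = \mu\omega_1$, $\lambda = \mu\omega_2$ and extracts the coefficient of $\mu^2$:

```python
# Leading WKB order of (III.17): the mu^2 coefficient reproduces
# (S_t)^2 = c^2 (w1 + S_x)^2 + w2^2 b^2  (our own verification).
import sympy as sp

x, t, mu, w1, w2, c = sp.symbols('x t mu w1 w2 c')
S = sp.Function('S')(x, t)
a = sp.Function('a')(x, t)
b = sp.Function('b')(x)

Phi = sp.exp(sp.I * mu * S) * a
p, lam = mu * w1, mu * w2

# (p - i d/dx)^2 Phi expanded as p^2 Phi - 2ip Phi_x - Phi_xx (p is a scalar here)
expr = (sp.diff(Phi, t, 2)
        + c**2 * (p**2 * Phi - 2*sp.I*p*sp.diff(Phi, x) - sp.diff(Phi, x, 2))
        + lam**2 * b**2 * Phi)

poly = sp.expand(sp.powsimp(sp.expand(expr) * sp.exp(-sp.I * mu * S)))
mu2 = poly.coeff(mu, 2)
hj = -sp.diff(S, t)**2 + c**2 * (w1 + sp.diff(S, x))**2 + w2**2 * b**2
print(sp.simplify(mu2 - sp.expand(hj * a)))   # 0
```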
(we consider the "+" sign). The corresponding Hamiltonian system for the characteristics has the form

$$\dot x = \mathcal{H}_q(\omega, x, q) = c^2(\omega_1 + q)/\mathcal{H}, \qquad \dot q = -\mathcal{H}_x(\omega, x, q) = -\omega_2^2\, b(x) b'(x)/\mathcal{H},$$

with the initial data

$$x(0) = x_0, \qquad q(0) = 0.$$

Let

$$x = x^+(x_0, t), \qquad q = q^+(x_0, t)$$

be its solution. Then, for small $t$, the function $S_+(\omega, x, t)$ can be calculated according to the formula

$$S_+(\omega, x, t) = \Bigl[\int_0^t \frac{c^2\omega_1(\omega_1 + q) + b^2(x)\,\omega_2^2}{\mathcal{H}}\, dt\Bigr]\Big|_{x = x^+(x_0, t),\; q = q^+(x_0, t)},$$

where the integral is in fact taken along the trajectories of the Hamiltonian system and $x_0 = x_0^+(x, t)$ is the solution of the equation $x = x^+(x_0, t)$ for $x_0$. Similarly, the function $S_-(\omega, x, t)$ is given by the integral

$$S_-(\omega, x, t) = -\Bigl[\int_0^t \frac{c^2\omega_1(\omega_1 + q) + b^2(x)\,\omega_2^2}{\mathcal{H}}\, dt\Bigr]\Big|_{x = x^-(x_0, t),\; q = q^-(x_0, t)},$$
where the functions $x^-$, $q^-$, and $x_0^-$ are defined by analogy with $x^+$, $q^+$, and $x_0^+$ from the Hamiltonian system corresponding to the "−" sign in the Hamilton–Jacobi equation.

Since, on the one hand, there are two functions $S_\pm$, the solutions of the Hamilton–Jacobi equation, and, on the other hand, there are two initial conditions in (III.17), the asymptotics of the solution to problem (III.17) is represented as the sum

$$\Phi = e^{i\mu S_+(\omega, x, t)}\, a_0^+(\omega, x, t) + e^{i\mu S_-(\omega, x, t)}\, a_0^-(\omega, x, t)$$

(recall that we only construct the leading term of the asymptotic expansion). The initial conditions in (III.17) lead to the initial data for the functions $a_0^\pm$; each of these functions is found from the transport equation associated with the corresponding function $S_\pm(\omega, x, t)$. We shall not dwell upon the computational aspect and merely write out the leading term of the asymptotic expansion of the symbol $\Phi(x, p, \lambda, t)$ with respect to the parameter $\mu = \sqrt{\lambda^2 + p^2}$. It has the form

$$\Phi(x, p, \lambda, t) = \sum_{j \in (+,-)} \pm\frac{i}{2\mu}\; e^{i\mu S_j(\omega, x, t)}\,\Bigl[J_j\Bigl(c^2\Bigl(\omega_1 + \frac{\partial S_j}{\partial x}\Bigr)^2 + b^2(x)\,\omega_2^2\Bigr)\Bigr]^{-1/2}$$

(the upper sign corresponds to $j = +$ and the lower to $j = -$), where

$$J_j = \det\Bigl[\frac{\partial x^j}{\partial x_0}(x_0, t)\Bigr]\Big|_{x_0 = x_0^j(x, t)}$$

is the corresponding Jacobian. Now the simultaneous asymptotic expansion of the solution to the original problem with respect to smoothness and parameter is given by (III.14) and (III.16). On substituting the expression obtained for $\Phi$ into (III.16) we obtain an explicit expression
$$u(x, t, \lambda) \simeq u_+(x, t, \lambda) + u_-(x, t, \lambda)$$

for the solution $u(x, t, \lambda)$ of the original problem, where

$$u_\pm(x, t, \lambda) = \pm\frac{i\lambda}{2(2\pi)^n}\int_0^t\!\!\int_{\mathbb{R}^n_p} e^{i\varphi_\pm}\, a_\pm(p, x, \lambda, t, \tau)\, dp\, d\tau, \tag{III.18}$$

and the phase and the amplitude of the integrand are given by the formulas

$$\varphi_\pm = px + S_\pm(p, x, \lambda, t - \tau) - \lambda q(\tau),$$

$$a_\pm = \Bigl[J_\pm\Bigl(c^2\Bigl(p + \frac{\partial S_\pm}{\partial x}\Bigr)^2 + \lambda^2 b^2(x)\Bigr)\Bigr]^{-1/2} r(\tau).$$

Of course, the integral over $\mathbb{R}^n_p$ is in the sense of the theory of distributions.
3.3 Analysis of the Asymptotic Solution

First, consider the simplest case in which $b(x) = b = \mathrm{const}$. Then

$$J_+ = J_- = 1, \qquad S_\pm = \pm\sqrt{c^2 p^2 + \lambda^2 b^2}\; t.$$

Straightforward computations show that

$$\varphi_\pm = px \pm \sqrt{c^2 p^2 + \lambda^2 b^2}\,(t - \tau) - \lambda q(\tau) = \lambda\bigl(kx \pm \sqrt{c^2 k^2 + b^2}\,(t - \tau) - q(\tau)\bigr),$$

where $k = p/\lambda$, and

$$u_\pm = \pm\frac{i\lambda^{2-n}}{2(2\pi)^n}\int_0^t\!\!\int_{\mathbb{R}^n_k} \frac{r(\tau)\, e^{i\lambda(kx \pm \sqrt{c^2 k^2 + b^2}\,(t - \tau) - q(\tau))}}{\sqrt{c^2 k^2 + b^2}}\, d\tau\, dk$$

(see R. Lewis [120], p. 848). Note in particular that the method described gives a precise solution for constant coefficients, since the right-hand sides vanish for the higher transport equations.
Let us now study the general case b = b(x). The asymptotic expansion of the integral (III.18) can be computed with the help of the stationary phase method. The stationary point equations for the phase function have the form

$$x + \frac{\partial S}{\partial p}(p, \lambda, x, t-\tau) = 0,\qquad \frac{\partial S}{\partial t}(p, \lambda, x, t-\tau) + \lambda q'(\tau) = 0 \tag{III.19}$$
(here we consider only the function u₊; the function u₋ can be considered in a similar way).
However, the integration domain in (III.18) has boundary points, and hence we must allow for the contribution of boundary stationary points at τ = 0 and τ = t. The equations of these stationary points are as follows:

$$x + \frac{\partial S}{\partial p}(p, \lambda, x, t) = 0,\qquad \tau = 0 \tag{III.20}$$

for the boundary point τ = 0 and

$$x = 0,\qquad \tau = t \tag{III.21}$$

for the boundary point τ = t.
We shall not analyze asymptotic expansions in the neighborhood of the point x = 0 (i.e., of the source). Hence we do not have to take the stationary points (III.21) into consideration. By the change of variables p = λk we reduce (III.18) to the form

$$u_\pm = \pm\frac{i\lambda^{2-n}}{2(2\pi)^n}\int_0^t\!\!\int_{\mathbb{R}^n_k} e^{i\lambda\Phi_\pm(k,x,1,t,\tau)}\,\frac{r(\tau)}{\sqrt{c^2(\partial\Phi_\pm/\partial x)^2 + b^2(x)}}\;dk\,d\tau.$$

The stationary point equations can be obtained from (III.19) and (III.20) by the substitution p = k, λ = 1. In order to apply the stationary phase method correctly, we should require that the stationary points {k(x, t), τ(x, t)} of system (III.19) and k′(x, t) of (III.20) be located in a bounded domain |k| ≤ R < ∞ of the space ℝ^n_k. In that case we can use a partition of unity {e₁(|k|), e₂(|k|)} such that e₁(|k|) + e₂(|k|) = 1 and e₂(z) = 0 for z < R. The integral in question reduces to a sum of two terms, of which the second one can be shown to be O(λ^{−m}) uniformly with respect to smoothness. This can be performed by integration by parts with respect to k. The first term is an integral over a compact domain in ℝ^n_k, so one can apply the stationary phase method to obtain its asymptotic expansion. The boundedness of the set of stationary points is equivalent to the nontrivial solvability of systems (III.19) and (III.20) for p at λ = 0.
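The stationary phase method used here can be illustrated numerically: for ∫e^{iλφ(k)}a(k)dk with a single nondegenerate interior stationary point k₀, the leading term is √(2π/(λ|φ″(k₀)|))·a(k₀)·e^{iλφ(k₀)+i(π/4)sgn φ″(k₀)}. The phase and amplitude below are toy choices, not those of (III.18):

```python
import numpy as np

lam = 200.0
phi = lambda k: (k - 0.3)**2 / 2 + 0.1 * k**3   # toy phase with one stationary point
a = lambda k: np.exp(-k**2)                      # toy rapidly decaying amplitude

# locate the stationary point phi'(k0) = 0 by Newton iteration
dphi = lambda k: (k - 0.3) + 0.3 * k**2
d2phi = lambda k: 1.0 + 0.6 * k
k0 = 0.3
for _ in range(50):
    k0 -= dphi(k0) / d2phi(k0)

# brute-force evaluation of the oscillatory integral on a fine grid
k = np.linspace(-8.0, 8.0, 400001)
dk = k[1] - k[0]
integral = np.sum(np.exp(1j * lam * phi(k)) * a(k)) * dk

# leading stationary-phase term
leading = (np.sqrt(2 * np.pi / (lam * abs(d2phi(k0)))) * a(k0)
           * np.exp(1j * lam * phi(k0) + 1j * np.pi / 4 * np.sign(d2phi(k0))))

print(abs(integral - leading))   # small compared with abs(leading)
```

At λ = 200 the discrepancy is a fraction of a percent of the leading term, consistent with an O(λ⁻¹) relative correction.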
The relations
$$S(p, 0, x, t-\tau) = c|p|(t-\tau),\qquad \frac{\partial S}{\partial t}(p, 0, x, t-\tau) = c|p|,\qquad \frac{\partial S}{\partial p}(p, 0, x, t-\tau) = c\frac{p}{|p|}(t-\tau)$$

imply that for λ = 0 these systems can be written as

$$x + c\frac{p}{|p|}(t-\tau) = 0,\qquad c|p| = 0,$$

and

$$x + c\frac{p}{|p|}\,t = 0,\qquad \tau = 0, \tag{III.22}$$
respectively. Since the first system implies that p = 0, we see that system (III.19) does not have any "bad" solutions, and (III.22) shows that neither does system (III.20), unless the point (x, t) lies on the characteristic cone. Since we wish to obtain a uniform asymptotic expansion, we have to use the uniform asymptotic expansions given by the stationary phase method for the case in which the boundary stationary point (i.e., the solution of equation (III.20)) coincides with the interior stationary point (III.19). This case corresponds to the points (x, t) lying on the so-called shadow boundary.

We obtain the following results by using the stationary phase method:

A. If the integral has interior stationary points (the illuminated region), the leading term of the asymptotic expansion has the form

$$u \sim \lambda^{2-3n/2}\,e^{i\lambda S(x,t)}\,\varphi(x, t),$$

which coincides with the approximation given for the original problem by geometric optics.

B. If the integral has no interior stationary points, but rather has stationary points at the boundary, then the asymptotic expansion of the solution has the form

$$u \sim \frac{e^{i\lambda S(x,t)}\,\varphi(x, t)}{\dfrac{\partial\Phi}{\partial\tau}(\bar k, x, 1, t, 0)}$$

(see [48], p. 141). This formula corresponds to the so-called transient rays of geometric optics. The factor
$$\frac{\partial\Phi}{\partial\tau}(\bar k, x, 1, t, 0) = \sqrt{c^2\bar k^2 + b^2(0)} - q'(0) = \frac{\omega_s - \omega_0}{\omega_0}$$

is known as the diffraction coefficient. Here ω₀ is the frequency of the source, and ω_s is the instantaneous frequency on a given transient ray. We do not compute the asymptotic expansion for the case in which an interior stationary point coincides with a boundary stationary point. This can be done with the help of an appropriate version of the stationary phase method (see, e.g., [48]).
III. Noncommutative Analysis and Differential Equations
208
4 Equations with Symbols Growing at Infinity

In this section we consider the technique of noncommuting operators using the example of constructing asymptotic solutions to differential equations with respect to smoothness and growth at infinity. We only consider a model example that displays all the main features of the theory. We refer the reader to [134] for more detailed information.
4.1 Statement of the Problem and its Operator Interpretation

We consider the Cauchy problem

$$\frac{\partial^2 u}{\partial t^2} = \frac{\partial^2 u}{\partial x^2} - x^{2l}c(x)u + f(x, t),\qquad u|_{t=0} = u_0(x),\qquad \frac{\partial u}{\partial t}\Big|_{t=0} = u_1(x), \tag{III.23}$$

where t ∈ ℝ, x ∈ ℝ, and the function c(x) is bounded together with all its derivatives and satisfies the condition c(x) ≥ c₀ > 0. In order to construct the simultaneous asymptotic solution with respect to smoothness and growth as x → ∞ we define the scale H^s(A, B), where

$$A = -i\frac{\partial}{\partial x}\qquad\text{and}\qquad B = x.$$

The norm in H^s(A, B) is given by the expression

$$\|u(x)\|_s^2 = \big\|(1 + A^2 + B^2)^{s/2}u(x)\big\|^2_{L^2(\mathbb{R}^1)}.$$
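The weight operator 1 + A² + B² appearing in this norm is the quantum harmonic oscillator shifted by 1; its eigenfunctions are the Hermite functions, with eigenvalues 2n + 2. A quick symbolic check on the first two Hermite functions (our illustrative example, using sympy):

```python
import sympy as sp

x = sp.Symbol("x")

# (1 + A^2 + B^2) u = u - u'' + x^2 u,  with A = -i d/dx and B = x
weight = lambda u: u - sp.diff(u, x, 2) + x**2 * u

h0 = sp.exp(-x**2 / 2)          # ground state (normalization omitted)
h1 = x * sp.exp(-x**2 / 2)      # first excited state

print(sp.simplify(weight(h0) / h0))   # 2
print(sp.simplify(weight(h1) / h1))   # 4
```

Consequently the H^s norm can be evaluated termwise in the Hermite basis: for u = Σ cₙhₙ one has ‖u‖²_s = Σ (2n + 2)^s |cₙ|².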
However, even at first glance it is clear that the role of the variable x is quite different in the factor x^{2l} than in the coefficient c(x). Whereas x^{2l} increases at infinity, c(x) is uniformly bounded. Recall that a necessary condition for the success of our method is that the operator in question be homogeneous with respect to the operators defining the scale H^s(A, B). To this end, we introduce three rather than two operators, forming a Feynman tuple:

$$A = -i\frac{\partial}{\partial x},\qquad B_1 = x,\qquad B_2 = x,$$

and use B₁ for "homogeneous" factors and B₂ for "well-behaved" bounded factors. We can rewrite the Cauchy problem in the form

$$\frac{\partial^2 u}{\partial t^2} = -\big[A^2 + B_1^{2l}c(B_2)\big]u + f(x, t),\qquad u|_{t=0} = u_0(x),\qquad \frac{\partial u}{\partial t}\Big|_{t=0} = u_1(x). \tag{III.24}$$
Here only the operators A and B₁ occur in the definition of the scale H^s (i.e., one should replace B by B₁), and the operator B₂ is, in a sense, an "operator parameter". We can now present an operator setting for the Cauchy problem (III.24). Clearly, in order to find the solution of (III.24), it suffices to construct the Green operator of this problem, i.e., an operator ĥ(t) depending on the parameter t and satisfying the following operator Cauchy problem:

$$\frac{\partial^2\hat h(t)}{\partial t^2} = -\big[A^2 + B_1^{2l}c(B_2)\big]\hat h(t),\qquad \hat h(0) = 0,\qquad \frac{\partial\hat h}{\partial t}(0) = 1. \tag{III.25}$$
Indeed, by Duhamel's principle, the solution of (III.24) can be expressed via ĥ(t) by the formula

$$u(x, t) = \frac{\partial\hat h(t)}{\partial t}\,u_0 + \hat h(t)\,u_1 + \int_0^t \hat h(t-\tau)f(x, \tau)\,d\tau$$

(we take into account that the coefficients of the operator do not depend on t). Moreover, it is clear that for constructing the asymptotic expansion of the solution to (III.24) in the scale of spaces H^s(A, B) it suffices to construct the asymptotic solution of (III.25) modulo operators of low negative order in this scale. Taking all this into account, we seek the operator ĥ(t) (or, more precisely, its asymptotic expansion) in the form

$$\hat h(t) = \Phi\big(t, \overset{1}{A}, \overset{2}{B_1}, \overset{2}{B_2}\big)$$
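Duhamel's principle used here can be sanity-checked on a scalar model: for u″ = −ω²u + f(t) the Green function is h(t) = sin(ωt)/ω, which satisfies h(0) = 0, h′(0) = 1, and the formula u(t) = h′(t)u₀ + h(t)u₁ + ∫₀ᵗ h(t−τ)f(τ)dτ reproduces the solution. The concrete ω, f, and data below are toy choices:

```python
import numpy as np

w, u0, u1 = 2.0, 0.7, -0.3
f = lambda t: np.cos(3.0 * t)

h  = lambda t: np.sin(w * t) / w     # Green function: h(0) = 0, h'(0) = 1
dh = lambda t: np.cos(w * t)

def u_duhamel(t, n=20001):
    tau = np.linspace(0.0, t, n)
    conv = np.sum(h(t - tau) * f(tau)) * (tau[1] - tau[0])
    return dh(t) * u0 + h(t) * u1 + conv

# reference solution: integrate u'' = -w^2 u + f directly by RK4
def u_rk4(T, n=20000):
    y = np.array([u0, u1]); dt = T / n; t = 0.0
    rhs = lambda t, y: np.array([y[1], -w**2 * y[0] + f(t)])
    for _ in range(n):
        k1 = rhs(t, y); k2 = rhs(t + dt/2, y + dt/2*k1)
        k3 = rhs(t + dt/2, y + dt/2*k2); k4 = rhs(t + dt, y + dt*k3)
        y = y + dt/6*(k1 + 2*k2 + 2*k3 + k4); t += dt
    return y[0]

print(abs(u_duhamel(1.5) - u_rk4(1.5)))   # agreement up to quadrature error
```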
(the choice of Feynman indices reflects the fact that B₁ and B₂ commute with each other). Let us denote by z, w₁, and w₂ the arguments of the function Φ corresponding to the operators A, B₁, and B₂, respectively. The left ordered representation operators for the tuple (A¹, B₁², B₂²) can be computed easily. They are

$$l_A = z - i\frac{\partial}{\partial w_1} - i\frac{\partial}{\partial w_2},\qquad l_{B_1} = w_1,\qquad l_{B_2} = w_2.$$

(Numerous examples of similar computations are given in Chapter II; we leave the detailed computations to the reader.) On substituting ĥ(t) into the operator equation (III.25), we obtain the following Cauchy problem for the symbol Φ(t, z, w₁, w₂):

$$\frac{\partial^2\Phi}{\partial t^2} = -\Big[\Big(z - i\frac{\partial}{\partial w_1} - i\frac{\partial}{\partial w_2}\Big)^2 + w_1^{2l}c(w_2)\Big]\Phi,\qquad \Phi|_{t=0} = 0,\qquad \frac{\partial\Phi}{\partial t}\Big|_{t=0} = 1. \tag{III.26}$$
We should construct the asymptotic expansion of the solution Φ to the Cauchy problem (III.26) as (z, w₁) → ∞. This expansion should be uniform with respect to w₂. This difference between the roles played by w₁ and w₂ clarifies the reasons for "duplicating" the operators B₁ and B₂ in the expression for the Green operator ĥ(t).
4.2 Asymptotic Solution of the Symbolic Equation

We shall now construct the required asymptotic expansion of the solution to (III.26). As was mentioned above, the problem of constructing such asymptotic expansions is not an intrinsic problem of noncommutative analysis. However, the difficulties encountered in constructing asymptotic solutions to (III.26) are quite characteristic of the problems arising in the applications of the theory of noncommuting operators in the asymptotic theory of differential equations. Therefore, we devote some space to the construction of the asymptotic solution to problem (III.26). In order to spare the reader superfluous technical details, we restrict ourselves to constructing the asymptotic expansion of the solution for small values of t. The solution of (III.26) will be sought in the form

$$\Phi = \Phi_+ + \Phi_-,\qquad\text{where}\qquad \Phi_\pm = e^{iS_\pm(t,z,w)}\,a_\pm(t, z, w).$$
By substituting the components Φ_± of Φ into equation (III.26) we obtain the following equation (the ± sign is omitted):

$$\Big[-\Big(\frac{\partial S}{\partial t}\Big)^2 + \Big(z + \frac{\partial S}{\partial w_1} + \frac{\partial S}{\partial w_2}\Big)^2 + w_1^{2l}c(w_2)\Big]a(t, z, w) - i\hat B_1 a(t, z, w) + \hat B_2 a(t, z, w) = 0, \tag{III.27}$$

where B̂_j are operators of order j with coefficients depending on derivatives of S; the explicit form of these coefficients is inessential to us as yet. Let us focus our attention on the first term on the left-hand side of this equation; in the following we shall see that this term gives the Hamilton–Jacobi equation. The construction of asymptotic expansions of such a form usually employs homogeneous functions (thus, the action S is usually a homogeneous function of order 1 of the variables with respect to which the asymptotic expansion is constructed; in our case, these are z and w₁). However, since different powers of z and w₁ occur there, it is natural to use generalized homogeneity. Specifically, we assume that S is a homogeneous function of z and w₁ of the form

$$S(t, \lambda^l z, \lambda w_1, w_2) = \lambda^l S(t, z, w_1, w_2),$$
and we seek the amplitude a in the form of a sum of homogeneous functions of the same type and decreasing orders of homogeneity.

The orders of homogeneity of the functions occurring in the first term on the left-hand side are as follows: ord z = l, ord w₁ = 1. Hence, the terms of the highest homogeneity degree 2l give

$$\Big[-\Big(\frac{\partial S}{\partial t}\Big)^2 + \Big(z + \frac{\partial S}{\partial w_2}\Big)^2 + w_1^{2l}c(w_2)\Big]a_0,$$

where a₀ is the zero-order homogeneous component of a. The next order of homogeneity present is (2l − 1). The corresponding terms have the form

$$2\,\frac{\partial S}{\partial w_1}\Big(z + \frac{\partial S}{\partial w_2}\Big)a_0. \tag{III.28}$$
Evidently, following the standard method, we should equate both quantities to zero, thus obtaining two different Hamilton–Jacobi equations for S. This is because the functions occurring in the second summand in expression (III.27),

$$-i\hat B_1 a = -2i\Big[\Big(z + \frac{\partial S}{\partial w_1} + \frac{\partial S}{\partial w_2}\Big)\Big(\frac{\partial a}{\partial w_1} + \frac{\partial a}{\partial w_2}\Big) - \frac{\partial S}{\partial t}\frac{\partial a}{\partial t}\Big] - i\Big[\frac{\partial^2 S}{\partial w_1^2} + 2\frac{\partial^2 S}{\partial w_1\,\partial w_2} + \frac{\partial^2 S}{\partial w_2^2} - \frac{\partial^2 S}{\partial t^2}\Big]a,$$

all have homogeneity degrees not exceeding l, and therefore do not occur in the second term in (III.28) provided that l > 1. The remedy is to include all terms homogeneous of degrees from l + 1 to 2l into the leading term. This gives the Hamilton–Jacobi equation

$$\Big(\frac{\partial S}{\partial t}\Big)^2 = \Big(z + \frac{\partial S}{\partial w_1} + \frac{\partial S}{\partial w_2}\Big)^2 + w_1^{2l}c(w_2), \tag{III.29}$$

with the initial condition

$$S|_{t=0} = 0.$$

Note that the function S(t, z, w) thus determined is no longer a generalized homogeneous function of z and w₁. However, it can be shown that S is an asymptotically generalized homogeneous function of these variables, which is more than sufficient for proving the appropriate estimates. The same can be said of the components a₀, a₋₁, … of the amplitude a(t, z, w). For these components transport equations can be obtained in a standard way, and solving these equations leads to asymptotically generalized homogeneous functions of z and w₁.
Thus, the Hamiltonian of problem (III.23) corresponding to the asymptotic expansion with respect to smoothness and growth at infinity is the function

$$(z + p_1 + p_2)^2 + w_1^{2l}c(w_2). \tag{III.30}$$

However, it should be noted that the principal (generalized homogeneous) component of S(t, z, w) is not defined by the Hamiltonian function (III.30), but rather by the Hamiltonian

$$(z + p_2)^2 + w_1^{2l}c(w_2). \tag{III.31}$$

This is because the term ∂S/∂w₁ is of degree lower than that of any other term. It is now clear that all conditions on the behaviour of the trajectories of the Hamiltonian system (usually occurring in stationary rather than Cauchy problems) should be imposed on the Hamiltonian (III.31), whereas S is a solution of problem (III.29) with the Hamiltonian (III.30). Thus two Hamiltonian functions have arisen in the problem! The second one (III.31) is referred to merely as the Hamiltonian, whereas the first one (III.30) is called the essential Hamiltonian of problem (III.23). Such a situation is characteristic of problems related to generalized homogeneity (see [129]).

The subsequent steps are not difficult and can be carried out by analogy with their counterparts in the preceding section. The interested reader can either do this himself or herself or look through the literature recommended above.
4.3 Equations with Fractional Powers of x in the Coefficients

We have considered an example of an equation with coefficients growing at infinity as an integral power of x, so that the operator L occurring in the equation could be written in the form L = f(A¹, B₁², B₂²), where f(y₁, y₂, y₃) is a quasihomogeneous function of (y₁, y₂), smooth outside a neighborhood of the origin in these variables.
Let us now consider a more complicated example in which the coefficients behave as a fractional power of x. For simplicity, we restrict ourselves to the first-order equation

$$Lu \equiv -i\frac{\partial u}{\partial t} - i\frac{\partial u}{\partial x} + c(x)|x|^{\alpha}u + b\Big(\overset{2}{x},\, -i\overset{1}{\frac{\partial}{\partial x}}\Big)u = 0 \tag{III.32}$$

with the initial condition

$$u|_{t=0} = u_0(x), \tag{III.33}$$

where α > 0, c(x) is bounded together with all its derivatives, c(x) ≥ c₀ > 0 for |x| ≥ 1, c(x)|x|^α is everywhere smooth, and b(x, ξ) = b₁(x, x, ξ), where b₁(x, y, z) is an (α⁻¹, 1)-quasihomogeneous function of order 0 of (y, z) for |y|^α + |z| ≥ 1, that is,

$$b_1(x, \lambda^{1/\alpha}y, \lambda z) = b_1(x, y, z)$$

for λ ≥ 1 and |y|^α + |z| ≥ 1.
Introduce the operators

$$A_1 = -i\frac{\partial}{\partial x},\qquad A_2 = \psi(x),\qquad B = x, \tag{III.34}$$

where ψ(x) is a real-valued function such that ψ(x) ∈ C^∞(ℝ), ψ(x) = |x| for |x| ≥ 2, and ψ(x) ≥ 1 everywhere. The operators (III.34) satisfy the commutation relations

$$[A_1, A_2] = -i\chi(B),\qquad [A_2, B] = 0,\qquad [A_1, B] = -i, \tag{III.35}$$

where χ(x) = ψ′(x) is a smooth real-valued function, χ(x) = sign x for |x| ≥ 2. It is easy to compute the left ordered representation operators for the tuple (A₁¹, A₂², B²). They have the form

$$l_{A_1} = y_1 - i\chi(y_3)\frac{\partial}{\partial y_2} - i\frac{\partial}{\partial y_3},\qquad l_{A_2} = y_2,\qquad l_B = y_3$$

(we assume the correspondence y₁ ↔ A₁, y₂ ↔ A₂, y₃ ↔ B). Equation (III.32) takes the form
$$-i\frac{\partial u}{\partial t} + A_1u + c_1(B)A_2^{\alpha}u + b_1\big(\overset{2}{B}, \overset{2}{A_2}, \overset{1}{A_1}\big)u + b_2\big(\overset{2}{B}, \overset{1}{A_1}\big)u = 0,$$

where the function

$$c_1(x) = c(x)\,|x|^{\alpha}\,\psi(x)^{-\alpha}$$

is smooth and bounded together with all its derivatives, c₁(x) ≥ c₀ > 0, and b₂(x, ξ) is homogeneous of degree 0 in ξ for large |ξ| and is compactly supported with respect to x. Let us seek the solution of problem (III.32)–(III.33) in the form

$$u = u(t) = g\big(\overset{1}{A_1}, \overset{2}{A_2}, \overset{2}{B}, t\big)u_0;$$

then for g(y₁, y₂, y₃, t) we obtain the equation
$$\Big[-i\frac{\partial}{\partial t} + y_1 - i\chi(y_3)\frac{\partial}{\partial y_2} - i\frac{\partial}{\partial y_3} + c_1(y_3)y_2^{\alpha} + b_1\Big(y_3, y_2, \Big[\Big[y_1 - i\frac{\partial}{\partial y_3} - i\chi(y_3)\frac{\partial}{\partial y_2}\Big]\Big]\Big) + b_2\Big(y_3, \Big[\Big[y_1 - i\frac{\partial}{\partial y_3} - i\chi(y_3)\frac{\partial}{\partial y_2}\Big]\Big]\Big)\Big]g(y_1, y_2, y_3, t) = 0 \tag{III.36}$$

with the initial condition g(y₁, y₂, y₃, 0) = 1. The following assertion is valid.
Theorem III.1. For any N there exists an N₁ = N₁(N) such that the bounds

$$\Big|\frac{\partial^{\beta}T}{\partial y^{\beta}}(y_1, y_2, y_3)\Big| \le C\,(1 + |y_1| + |y_2|)^{-|\beta|},\qquad |\beta| = 1, \dots, N_1,\quad y_2 \ge 1/2,$$

imply that the operator

$$\Big(-i\frac{\partial}{\partial x}\Big)^{\gamma}x^{\delta}\,T\big(\overset{1}{A_1}, \overset{2}{A_2}, \overset{2}{B}\big)$$

is bounded in L² for γ + δ ≤ N.

Proof. The spectrum of the self-adjoint operator A₂ lies in the domain y₂ ≥ 1, and hence the operator T(A₁¹, A₂², B²) does not depend on the values of its symbol T(y) for y₂ < 1. Let φ(y₂) ∈ C^∞(ℝ), φ(y₂) = 1 for y₂ ≥ 1, φ(y₂) = 0 for y₂ ≤ 1/2. If T satisfies the estimates in the theorem for y₂ ≥ 1/2, then so does φ(y₂)T(y) for all y. On the other hand, as was mentioned above,

$$T\big(\overset{1}{A_1}, \overset{2}{A_2}, \overset{2}{B}\big) = (\varphi T)\big(\overset{1}{A_1}, \overset{2}{A_2}, \overset{2}{B}\big),$$

and the required assertion follows from the results of Section 3. The theorem is proved. □
Let us now make the change of variables

$$y_1 = \lambda x_1,\qquad y_2 = \lambda^{1/\alpha}x_2,\qquad y_3 = x_3,$$

where λ is a large parameter, in equation (III.36). We obtain the equation

$$\begin{aligned} \Big[-i\lambda^{-1}\frac{\partial}{\partial t} &+ x_1 - i\lambda^{-1}\frac{\partial}{\partial x_3} - i\varepsilon\chi(x_3)\lambda^{-1}\frac{\partial}{\partial x_2} + c_1(x_3)x_2^{\alpha}\\ &+ \lambda^{-1}b_1\Big(x_3, x_2, \Big[\Big[x_1 - i\lambda^{-1}\frac{\partial}{\partial x_3} - i\varepsilon\chi(x_3)\lambda^{-1}\frac{\partial}{\partial x_2}\Big]\Big]\Big)\\ &+ \lambda^{-1}b_2\Big(x_3, \Big[\Big[x_1 - i\lambda^{-1}\frac{\partial}{\partial x_3} - i\varepsilon\chi(x_3)\lambda^{-1}\frac{\partial}{\partial x_2}\Big]\Big]\Big)\Big]G(x, t, \varepsilon, \lambda) = 0, \end{aligned} \tag{III.38}$$

where ε = λ^{−1/α} and

$$G(x, t, \varepsilon, \lambda) = g(y, t)\big|_{y_1 = \lambda x_1,\; y_2 = \varepsilon^{-1}x_2,\; y_3 = x_3}.$$
It suffices to construct an asymptotic solution to equation (III.38) in the domain x₂ ≥ ε, i.e., to find a function G(x, t, ε, λ) such that the left-hand side in (III.38) is O(λ^{−s}) in x₂ ≥ ε uniformly with respect to ε. We seek G(x, t, ε, λ) in the form

$$G(x, t, \varepsilon, \lambda) = e^{i\lambda S(x,t,\varepsilon)}\sum_{k=0}^{s}\lambda^{-k}\varphi_k(x, t, \varepsilon). \tag{III.39}$$
We insert (III.39) into (III.38) and obtain the following equations for S and φ_k:

$$\frac{\partial S}{\partial t} + x_1 + \frac{\partial S}{\partial x_3} + \varepsilon\chi(x_3)\frac{\partial S}{\partial x_2} + c_1(x_3)x_2^{\alpha} = 0 \tag{III.40}$$
(the Hamilton–Jacobi equation) and the transport equations

$$\frac{\partial\varphi_k}{\partial t} + \frac{\partial\varphi_k}{\partial x_3} + \varepsilon\chi(x_3)\frac{\partial\varphi_k}{\partial x_2} + i\Big[b_1\Big(x_3, x_2, x_1 + \frac{\partial S}{\partial x_3} + \varepsilon\chi(x_3)\frac{\partial S}{\partial x_2}\Big) + b_2\Big(x_3, x_1 + \frac{\partial S}{\partial x_3} + \varepsilon\chi(x_3)\frac{\partial S}{\partial x_2}\Big)\Big]\varphi_k = \hat B\varphi_{k-1}, \tag{III.41}$$

where the right-hand side of the latter equations is equal to

$$\begin{aligned} \hat B\varphi_{k-1} = -i\lambda\Big\{\, & b_1\Big(x_3, x_2, \Big[\Big[x_1 + \frac{\partial S}{\partial x_3} + \varepsilon\chi(x_3)\frac{\partial S}{\partial x_2} - i\lambda^{-1}\frac{\partial}{\partial x_3} - i\varepsilon\chi(x_3)\lambda^{-1}\frac{\partial}{\partial x_2}\Big]\Big]\Big)\\ & - b_1\Big(x_3, x_2, x_1 + \frac{\partial S}{\partial x_3} + \varepsilon\chi(x_3)\frac{\partial S}{\partial x_2}\Big)\\ & + b_2\Big(x_3, \Big[\Big[x_1 + \frac{\partial S}{\partial x_3} + \varepsilon\chi(x_3)\frac{\partial S}{\partial x_2} - i\lambda^{-1}\frac{\partial}{\partial x_3} - i\varepsilon\chi(x_3)\lambda^{-1}\frac{\partial}{\partial x_2}\Big]\Big]\Big)\\ & - b_2\Big(x_3, x_1 + \frac{\partial S}{\partial x_3} + \varepsilon\chi(x_3)\frac{\partial S}{\partial x_2}\Big)\Big\}\,\varphi_{k-1} \end{aligned}$$

(here φ₋₁ ≡ 0 by convention). To be able to use equation (III.41) for constructing the asymptotic solution, one must verify that the right-hand side B̂φ_{k−1} of this equation does not increase as λ → +∞. This fact can be verified directly from the explicit formula for B̂φ_{k−1} with the help of the Newton series expansion. The corresponding simple calculation is left to the reader. It is now rather simple to write down the explicit formulas for the action S(x, t) and the amplitudes φ_k(x, t) via integrals along trajectories of the vector field
$$V = \frac{\partial}{\partial t} + \frac{\partial}{\partial x_3} + \varepsilon\chi(x_3)\frac{\partial}{\partial x_2},$$

occurring in the left-hand side of (III.41). These formulas give an asymptotic solution of equation (III.38) with respect to powers of λ⁻¹ in the domain

$$x_2 - \varepsilon\int_0^t\chi(x_3 - \tau)\,d\tau > 0,$$
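The integration of S along the trajectories of V can be illustrated numerically: S is obtained by integrating dS/dt = −x₁ − c₁(x₃)x₂^α backwards along the characteristics ẋ₃ = 1, ẋ₂ = εχ(x₃), and the resulting function satisfies (III.40) up to discretization error. All concrete choices below (χ = tanh as a smooth sign-like function, c₁, α, ε, and the sample point) are illustrative assumptions:

```python
import numpy as np

eps, alpha, x1 = 0.3, 0.5, 0.4
chi = np.tanh                                 # smooth stand-in for sign(x)
c1 = lambda s: 1.0 + 0.5 / (1.0 + s**2)       # bounded, c1 >= c0 > 0

def S(x2, x3, t, n=4000):
    """Integrate dS/ds = -x1 - c1(X3)*X2**alpha along the characteristic of
    V = d/dt + d/dx3 + eps*chi(x3)*d/dx2 arriving at (x2, x3) at time t."""
    s = np.linspace(0.0, t, n + 1)
    X3 = x3 - (t - s)                          # dX3/ds = 1
    X2 = np.empty_like(s); X2[-1] = x2
    ds = s[1] - s[0]
    for i in range(n, 0, -1):                  # dX2/ds = eps*chi(X3), backwards
        X2[i-1] = X2[i] - ds * eps * 0.5 * (chi(X3[i]) + chi(X3[i-1]))
    integrand = -x1 - c1(X3) * X2**alpha
    return np.sum(0.5 * (integrand[1:] + integrand[:-1])) * ds

# residual of (III.40): S_t + x1 + S_x3 + eps*chi(x3)*S_x2 + c1(x3)*x2^alpha
x2, x3, t, h = 2.0, 0.5, 0.8, 1e-4
St  = (S(x2, x3, t + h) - S(x2, x3, t - h)) / (2 * h)
Sx3 = (S(x2, x3 + h, t) - S(x2, x3 - h, t)) / (2 * h)
Sx2 = (S(x2 + h, x3, t) - S(x2 - h, x3, t)) / (2 * h)
res = St + x1 + Sx3 + eps * chi(x3) * Sx2 + c1(x3) * x2**alpha
print(abs(res))   # small (discretization error only)
```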
i.e., on substituting this solution into the equation we obtain a remainder R_s(x, t, ε, λ) on the right-hand side such that the estimates

$$\Big|\frac{\partial^{\alpha}R_s(x, t, \varepsilon, \lambda)}{\partial x^{\alpha}}\Big| \le C\,\lambda^{-s-1+|\alpha|}$$

hold for (x, t) ∈ K and x₂ − ε∫₀ᵗχ(x₃ − τ)dτ ≥ ε/2, and it can be shown that the constant C depends only on K and does not depend on ε. Hence, we obtain a simultaneous asymptotic expansion of the solution to the equation considered with respect to smoothness and growth at infinity. The condition
$$x_2 \ge \varepsilon\Big(\frac12 + \int_0^t\chi(x_3 - \tau)\,d\tau\Big)$$

imposes restrictions on the size of the interval [0, T]. Namely, since we seek the solution in the domain {x₂ ≥ ε}, T can be defined from the condition

$$\sup_{x_3}\int_0^T|\chi(x_3 - \tau)|\,d\tau \le 1/2,$$

e.g., one can set T = (2 sup_{x₃}|χ(x₃)|)⁻¹. To increase T, it suffices to choose ψ(x) so that inf ψ(x) ≥ M; then one can take

$$T = (M - 1/2)\big/\sup_{x_3}|\chi(x_3)|.$$
5 Geostrophic Wind Equations

Let us consider an example of an asymptotic solution, simultaneous with respect to smoothness and growth at infinity, for a model but still physically meaningful problem. We will consider the so-called geostrophic wind and study the evolution of small deviations of velocities and pressure from their geostrophic values in the equatorial zone.

According to [153], the geostrophic wind is described as follows. The equations of the gradient wind are derived from the equations of atmospheric dynamics under the assumption that the motion is stationary and the trajectories of particles are isobars. In considering large-scale atmospheric phenomena it is often possible to neglect the nonlinear terms describing the acceleration of particles due to the curvature of the trajectories in the equations of motion. The expression for the components of the flow velocity obtained under this assumption in conjunction with the hypothesis that the flow is stationary is known as the geostrophic wind. Obukhov showed that a deviation from the geostrophic state results in the appearance of rapidly propagating waves which "dissolve" the perturbation in a short period of time. Namely, the pressure field adapts to the velocity field. His conclusions were based on the following argument. He considered the approximate system
$$\begin{aligned} \frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} + w\frac{\partial u}{\partial z} - 2\omega_z v &= -\frac{1}{\rho}\frac{\partial p}{\partial x},\\ \frac{\partial v}{\partial t} + u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial y} + w\frac{\partial v}{\partial z} + 2\omega_z u &= -\frac{1}{\rho}\frac{\partial p}{\partial y},\\ \frac{1}{\rho}\frac{\partial p}{\partial z} &= -g \end{aligned}$$

for the flow velocity ū = (u, v, w) and pressure p, where ρ is the density and ω_z the vertical component of the Earth's angular velocity, ω_z = ω₀ sin θ (here ω₀ = 7.29 × 10⁻⁵ sec⁻¹ and θ is the latitude). Hence, ∂w/∂t, the component F_z of the Coriolis force, and quantities such as ω_x w, ω_y w, etc., are neglected. Further, additional simplifying assumptions are made:
1) the surface of the Earth is assumed to be flat (i.e., a Cartesian coordinate system (x, y) is used);
2) ω_z is assumed to be independent of (x, y);
3) the terms quadratic in ū and derivatives of ū are discarded;
4) averaging over z is performed;
5) the barotropic scheme for the dependence of p on ρ is used.
This results in the following averaged system:
$$\frac{\partial U}{\partial t} - 2\omega_z V = -gH_0\frac{\partial\chi}{\partial x},\qquad \frac{\partial V}{\partial t} + 2\omega_z U = -gH_0\frac{\partial\chi}{\partial y},\qquad \frac{\partial\chi}{\partial t} = -\Big(\frac{\partial U}{\partial x} + \frac{\partial V}{\partial y}\Big),$$

where

$$U = \frac{1}{p_0}\int_0^{\infty}\rho u\,dz,\qquad V = \frac{1}{p_0}\int_0^{\infty}\rho v\,dz,\qquad \chi = \frac{p - p_0}{p_0}$$

(p₀ is the pressure averaged over the surface and H₀ ≈ 8 km). The geostrophic wind takes place for

$$\frac{\partial U}{\partial x} + \frac{\partial V}{\partial y} = 0.$$
The adaptation of perturbations of the stationary solution goes as follows. There occurs a wave with velocity c = √(gH₀) ≈ 280 m/sec, which "carries the perturbation away" in time t ≈ 2R/c, where R is the radius of the perturbed region.

Here we study perturbations of the geostrophic state under different assumptions. We do not assume that ω_z is constant. Quite the opposite, we consider the flow in the equatorial zone. Since ω_z = ω₀ sin θ, we see that ω_z = 0 on the equator. We assume that the zone is small enough to take sin θ ≈ θ, but the characteristic length of the perturbation is relatively small with respect to the size of the zone, so that the distance y from the equator can be considered as a large parameter. With these assumptions in mind, we will state the problem of constructing simultaneous asymptotic expansions with respect to smoothness and large y. Hence, if we denote the longitude by x and the latitude by y, the asymptotic expansion of the solution will be sought with respect to the tuple of operators

$$A_1 = -i\frac{\partial}{\partial x},\qquad A_2 = -i\frac{\partial}{\partial y},\qquad A_3 = y.$$
Here we derive the geostrophic wind equations in the situation described, pass to the ordered representation, and construct the leading term of the asymptotic expansion.

We assume that the surface of the Earth is the unit sphere

$$X^2 + Y^2 + Z^2 = 1$$

in the space ℝ³ with coordinates (X, Y, Z) (not to be confused with the lower case letters x and y, whose meaning is quite different). The equator is given by the equation Z = 0 and the angular velocity of the Earth by the vector

$$\Omega = (0, 0, 1).$$

The Coriolis force is

$$F = 2m[V, \Omega],\qquad V = (\dot X, \dot Y, \dot Z);$$

in what follows we assume that 2m = 1. Let φ be the longitude and θ the latitude. Then we obtain

$$X = \sin\varphi\cos\theta,\qquad Y = \cos\varphi\cos\theta,\qquad Z = \sin\theta.$$

In the derivation of the equations, it suffices to consider an arbitrary value of φ, say φ = 0; then Ẋ = φ̇ cos θ, Ẏ = −θ̇ sin θ, Ż = θ̇ cos θ, and

$$[V, \Omega] = (-\dot\theta\sin\theta,\; -\dot\varphi\cos\theta,\; 0).$$
The tangent plane at any point (φ, θ), φ = 0, is determined by the outer normal n = (0, cos θ, sin θ). The projection of F on this plane has the form

$$\mathrm{Pr}(F) = [V, \Omega] - \langle n, [V, \Omega]\rangle\,n = (-\dot\theta\sin\theta,\; -\dot\varphi\cos\theta\sin^2\theta,\; \dot\varphi\cos^2\theta\sin\theta).$$

By neglecting second- and higher-order infinitesimals and denoting φ = x, θ = y, φ̇ = v, and θ̇ = w, we obtain

$$F_x = -yw,\qquad F_y = yv.$$

The hydrodynamic equations for v̄ = (v, w) have the form
$$\frac{\partial\bar v}{\partial t} = -\frac{1}{\rho}\nabla p + F,\qquad \frac{\partial\rho}{\partial t} + \mathrm{div}(\rho\bar v) = 0.$$

We linearize this system assuming that ρ = 1 + p′ and that the quantities p′ and v̄ are small. The linearized system can be written in the form

$$\frac{\partial v}{\partial t} - yw + \frac{\partial p'}{\partial x} = 0,\qquad \frac{\partial w}{\partial t} + yv + \frac{\partial p'}{\partial y} = 0,\qquad \frac{\partial p'}{\partial t} + \frac{\partial v}{\partial x} + \frac{\partial w}{\partial y} = 0. \tag{III.42}$$

We assume the correspondence

$$-i\frac{\partial}{\partial t}\longleftrightarrow p_0,\qquad -i\frac{\partial}{\partial x}\longleftrightarrow p_1,\qquad -i\frac{\partial}{\partial y}\longleftrightarrow p_2$$
between the derivation operators and the corresponding arguments of symbols; we then obtain the following symbol matrix for this system:

$$\begin{pmatrix} p_0 & iy & p_1\\ -iy & p_0 & p_2\\ p_1 & p_2 & p_0 \end{pmatrix}.$$

Its determinant is equal to

$$\det\begin{pmatrix} p_0 & iy & p_1\\ -iy & p_0 & p_2\\ p_1 & p_2 & p_0 \end{pmatrix} = p_0\big(p_0^2 - p_1^2 - p_2^2 - y^2\big),$$
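The determinant computation can be double-checked symbolically (sympy usage is ours):

```python
import sympy as sp

p0, p1, p2, y = sp.symbols("p0 p1 p2 y")

M = sp.Matrix([[p0,      sp.I*y, p1],
               [-sp.I*y, p0,     p2],
               [p1,      p2,     p0]])

det = sp.expand(M.det())
print(sp.factor(det - p0*(p0**2 - p1**2 - p2**2 - y**2)))  # 0
```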
and it is clear intuitively that the system is "hyperbolic" with respect to the desired type of asymptotic expansions. We equip equation (III.42) with the initial data

$$\begin{pmatrix} v\\ w\\ p' \end{pmatrix}\bigg|_{t=0} = \begin{pmatrix} v_0\\ w_0\\ p'_0 \end{pmatrix} \tag{III.43}$$

and seek the solution of the Cauchy problem (III.42)–(III.43) in the form

$$\begin{pmatrix} v\\ w\\ p' \end{pmatrix} = G\Big(\overset{2}{x}, \overset{2}{y}, -i\overset{1}{\frac{\partial}{\partial x}}, -i\overset{1}{\frac{\partial}{\partial y}}, t\Big)\begin{pmatrix} v_0\\ w_0\\ p'_0 \end{pmatrix}, \tag{III.44}$$

where G is a 2π-periodic matrix function with respect to the variable x and G|_{t=0} = 1.
The equation for G is readily obtained with the help of the left ordered representation of the tuple (−i∂/∂x¹, −i∂/∂y¹, x², y²). Recall that this representation has the form

$$l_{-i\partial/\partial x} = p_1 - i\frac{\partial}{\partial x},\qquad l_{-i\partial/\partial y} = p_2 - i\frac{\partial}{\partial y},\qquad l_x = x,\qquad l_y = y,$$
so that for G(x, y, p₁, p₂, t) we obtain the system

$$\frac{\partial G}{\partial t} + i\begin{pmatrix} 0 & iy & p_1 - i\,\partial/\partial x\\ -iy & 0 & p_2 - i\,\partial/\partial y\\ p_1 - i\,\partial/\partial x & p_2 - i\,\partial/\partial y & 0 \end{pmatrix}G = 0$$

with the initial condition G|_{t=0} = 1. To visualize the orders of the terms in the expansion, let us make the change of variables
$$x = x_1,\qquad y = \lambda x_2,\qquad p_1 = \lambda\eta_1,\qquad p_2 = \lambda\eta_2,$$

where λ is a large parameter (note that we have assigned λ to exactly those variables that correspond to the operators −i∂/∂x, −i∂/∂y, and y, which define the type of the asymptotic expansion). We denote

$$g(x, \eta, t, \lambda) = G(x_1, \lambda x_2, \lambda\eta_1, \lambda\eta_2, t).$$
The problem for g reads
$$\Big[-i\lambda^{-1}\frac{\partial}{\partial t} + \begin{pmatrix} 0 & ix_2 & \eta_1 - i\lambda^{-1}\,\partial/\partial x_1\\ -ix_2 & 0 & \eta_2 - i\lambda^{-2}\,\partial/\partial x_2\\ \eta_1 - i\lambda^{-1}\,\partial/\partial x_1 & \eta_2 - i\lambda^{-2}\,\partial/\partial x_2 & 0 \end{pmatrix}\Big]g = 0,\qquad g|_{t=0} = 1. \tag{III.45}$$
Clearly, g is independent of x₁. We seek a particular solution to the equation in this problem of the form g = e^{iλS(x,η,t)}φ(x, η, t, λ⁻¹), where S and φ are smooth functions of their arguments and S is real-valued. Substituting g into the system and collecting the coefficients of the powers of λ⁻¹, we obtain the following system of equations:

$$\Big[\frac{\partial S}{\partial t} + \begin{pmatrix} 0 & ix_2 & \eta_1\\ -ix_2 & 0 & \eta_2\\ \eta_1 & \eta_2 & 0 \end{pmatrix} + \lambda^{-1}\Big(-i\frac{\partial}{\partial t} + \Big(\frac{\partial S}{\partial x_2} - i\lambda^{-1}\frac{\partial}{\partial x_2}\Big)\begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & 1\\ 0 & 1 & 0 \end{pmatrix}\Big)\Big]\varphi(x, \eta, t, \lambda^{-1}) = 0$$
(we omit the derivatives ∂/∂x₁, since φ and S are independent of x₁: ∂S/∂x₁ = ∂φ/∂x₁ = 0). Next, we equate to zero the coefficients of powers of λ⁻¹ in the equation obtained. First of all, we should have

$$\Big[\frac{\partial S}{\partial t} + \begin{pmatrix} 0 & ix_2 & \eta_1\\ -ix_2 & 0 & \eta_2\\ \eta_1 & \eta_2 & 0 \end{pmatrix}\Big]\varphi(x, \eta, t, 0) = 0,$$

but this equation can be satisfied for φ(x, η, t, 0) ≢ 0 if and only if
$$\det\begin{pmatrix} \dfrac{\partial S}{\partial t} & ix_2 & \eta_1\\ -ix_2 & \dfrac{\partial S}{\partial t} & \eta_2\\ \eta_1 & \eta_2 & \dfrac{\partial S}{\partial t} \end{pmatrix} = 0,$$

that is,

$$\frac{\partial S}{\partial t}\Big[\Big(\frac{\partial S}{\partial t}\Big)^2 - \eta_1^2 - \eta_2^2 - x_2^2\Big] = 0.$$

Also, we should have S|_{t=0} = 0 (this follows from the initial data). There are three solutions satisfying these conditions:

$$S_0(x, \eta, t) \equiv 0,\qquad S_\pm(x, \eta, t) = \mp t\sqrt{\eta_1^2 + \eta_2^2 + x_2^2}.$$
Let us find the corresponding eigenvectors of the symbol matrix, of which φ(x, η, t, 0) is a multiple. Let ᵗ(a, b, c) be an eigenvector with eigenvalue zero. We have

$$\begin{pmatrix} 0 & ix_2 & \eta_1\\ -ix_2 & 0 & \eta_2\\ \eta_1 & \eta_2 & 0 \end{pmatrix}\begin{pmatrix} a\\ b\\ c \end{pmatrix} = 0,\qquad\text{i.e.}\qquad ibx_2 + c\eta_1 = 0,\quad -iax_2 + c\eta_2 = 0,\quad a\eta_1 + b\eta_2 = 0.$$

We normalize the eigenvector by the condition c = 1; then we obtain

$$a = -i\eta_2/x_2,\qquad b = i\eta_1/x_2,$$

and the eigenvector normalized to unity has the form

$$X_0 = \frac{{}^t(-i\eta_2,\; i\eta_1,\; x_2)}{\sqrt{\eta_1^2 + \eta_2^2 + x_2^2}}.$$
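A quick numerical confirmation of the kernel vector and of the spectrum {0, ±√(η₁²+η₂²+x₂²)} is possible (numpy check; the sample values are arbitrary, and the sign convention is the one with +ix₂ in the upper right 2×2 block, as above; with the opposite convention the first two components of X₀ change sign):

```python
import numpy as np

e1, e2, x2 = 0.7, -1.2, 0.9                    # arbitrary sample values of eta1, eta2, x2
R = np.sqrt(e1**2 + e2**2 + x2**2)

M = np.array([[0,      1j*x2, e1],
              [-1j*x2, 0,     e2],
              [e1,     e2,    0 ]])

# X0 spans the kernel of M
X0 = np.array([-1j*e2, 1j*e1, x2]) / R
print(np.linalg.norm(M @ X0))                  # ~0

# the spectrum is {-R, 0, +R}; the spectral projections sum to the identity
vals, vecs = np.linalg.eigh(M)
print(np.sort(vals), np.array([-R, 0.0, R]))   # matching triples
P = [np.outer(vecs[:, k], vecs[:, k].conj()) for k in range(3)]
print(np.linalg.norm(sum(P) - np.eye(3)))      # ~0
```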
The eigenvectors corresponding to S_±(x, η, t) can be obtained in a similar way. Specifically, the system of equations for these vectors has the form

$$\begin{pmatrix} \mp\sqrt{\eta_1^2+\eta_2^2+x_2^2} & ix_2 & \eta_1\\ -ix_2 & \mp\sqrt{\eta_1^2+\eta_2^2+x_2^2} & \eta_2\\ \eta_1 & \eta_2 & \mp\sqrt{\eta_1^2+\eta_2^2+x_2^2} \end{pmatrix}\begin{pmatrix} a\\ b\\ c \end{pmatrix} = 0$$

(one takes the lower or the upper sign in all three rows simultaneously). The normalized eigenvectors have the form

$$X_\pm = \frac{{}^t\big(\pm\eta_1\sqrt{\eta_1^2+\eta_2^2+x_2^2} + i\eta_2x_2,\;\; \pm\eta_2\sqrt{\eta_1^2+\eta_2^2+x_2^2} - i\eta_1x_2,\;\; \eta_1^2+\eta_2^2\big)}{\sqrt{2}\,\sqrt{\eta_1^2+\eta_2^2}\,\sqrt{\eta_1^2+\eta_2^2+x_2^2}}.$$
The orthogonal projections onto the one-dimensional eigenspaces have the form

$$P_0 = \frac{1}{\eta_1^2+\eta_2^2+x_2^2}\begin{pmatrix} \eta_2^2 & -\eta_1\eta_2 & -i\eta_2x_2\\ -\eta_1\eta_2 & \eta_1^2 & i\eta_1x_2\\ i\eta_2x_2 & -i\eta_1x_2 & x_2^2 \end{pmatrix},\qquad P_\pm = \frac{1}{2(\eta_1^2+\eta_2^2+x_2^2)}\begin{pmatrix} A_{11} & A_{12} & A_{13}\\ A_{21} & A_{22} & A_{23}\\ A_{31} & A_{32} & A_{33} \end{pmatrix},$$

where

$$\begin{aligned}
A_{11} &= \eta_1^2 + x_2^2,\qquad A_{22} = \eta_2^2 + x_2^2,\qquad A_{33} = \eta_1^2 + \eta_2^2,\\
A_{12} &= \eta_1\eta_2 \pm ix_2\sqrt{\eta_1^2+\eta_2^2+x_2^2},\qquad A_{21} = \eta_1\eta_2 \mp ix_2\sqrt{\eta_1^2+\eta_2^2+x_2^2},\\
A_{13} &= \pm\eta_1\sqrt{\eta_1^2+\eta_2^2+x_2^2} + i\eta_2x_2,\qquad A_{31} = \pm\eta_1\sqrt{\eta_1^2+\eta_2^2+x_2^2} - i\eta_2x_2,\\
A_{23} &= \pm\eta_2\sqrt{\eta_1^2+\eta_2^2+x_2^2} - i\eta_1x_2,\qquad A_{32} = \pm\eta_2\sqrt{\eta_1^2+\eta_2^2+x_2^2} + i\eta_1x_2.
\end{aligned}$$

Here the superscript t stands for transposition (so that ᵗ(·) denotes a column vector),
and the matrix ‖A_ij‖ is Hermitian, that is, A_ij = \overline{A_{ji}} (the bar stands for complex conjugation). Clearly, we have P₀ + P₊ + P₋ = 1. We now seek the asymptotic solution to problem (III.45) (more precisely, we seek its leading term) in the form

$$g(x, \eta, \lambda, t) = e^{i\lambda S_0}\varphi_0 + e^{i\lambda S_+}\varphi_+ + e^{i\lambda S_-}\varphi_-,$$

with

$$\varphi_j = c_j(x, \eta, t)P_j,\qquad \varphi_j|_{t=0} = P_j,\qquad j = 0, +, -.$$

Substituting this representation of the solution into the equation, we obtain
$$\Big[\frac{\partial}{\partial t} + i\frac{\partial S_j}{\partial x_2}\begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & 1\\ 0 & 1 & 0 \end{pmatrix}\Big]\varphi_j = \hat A(S_j)\varphi_{j,1},\qquad j = 0, +, -,$$

where φ_{j,1} is the next term of the asymptotic expansion and Â(S_j) is the matrix of the leading-order (eikonal) system.
A necessary and sufficient condition for the solvability of this equation is that

$$P_j\Big[\frac{\partial c_j}{\partial t} + i\frac{\partial S_j}{\partial x_2}\,c_j\begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & 1\\ 0 & 1 & 0 \end{pmatrix}\Big]P_j = 0.$$
Pi
(
00 0 0 0 1 ) Pi = 0, 0 1 0
j = 0, ±, —.
Henceforth, c_j ≡ 1, and we obtain, in the first approximation, the following expression for the symbol of the resolvent operator:

$$\begin{aligned} G(y, p_x, p_y, t) ={}& \frac{1}{p_x^2+p_y^2+y^2}\begin{pmatrix} p_y^2 & -p_xp_y & -iyp_y\\ -p_xp_y & p_x^2 & iyp_x\\ iyp_y & -iyp_x & y^2 \end{pmatrix}\\[2pt] &+ \frac{\cos\big[t\sqrt{p_x^2+p_y^2+y^2}\,\big]}{p_x^2+p_y^2+y^2}\begin{pmatrix} p_x^2+y^2 & p_xp_y & iyp_y\\ p_xp_y & p_y^2+y^2 & -iyp_x\\ -iyp_y & iyp_x & p_x^2+p_y^2 \end{pmatrix}\\[2pt] &+ \frac{\sin\big[t\sqrt{p_x^2+p_y^2+y^2}\,\big]}{\sqrt{p_x^2+p_y^2+y^2}}\begin{pmatrix} 0 & y & -ip_x\\ -y & 0 & -ip_y\\ -ip_x & -ip_y & 0 \end{pmatrix} + \text{lower-order terms}. \end{aligned}$$
This expression provides the leading term of the asymptotic solution to the model geostrophic-wind problem (III.42)–(III.43) by formula (III.44).
6 Degenerate Equations

In this section we consider a simple model example showing the application of noncommutative analysis to the problem of constructing the asymptotic expansion of the solution to a degenerate equation. The problem itself is probably of little interest, mainly because it can be solved by applying an appropriate "quantized canonical transformation" that takes the operator in the equation into the Hamiltonian operator of the harmonic oscillator. However, it is the technique of noncommutative analysis that we intend to illustrate by considering this example; specifically, we first obtain a "weakened" asymptotic solution for localized right-hand sides. Then, using the known exact solution for some "standard" operator (in this particular example, this is the energy operator of the harmonic oscillator), we obtain asymptotic solutions for arbitrary right-hand sides.
6.1 Statement of the Problem

Consider the ordinary differential operator

$$\hat H = -a(x)\frac{\partial^2}{\partial x^2} + b(x)x^2\Omega^2,\qquad x \in \mathbb{R},$$

where Ω ∈ ℝ is a large parameter. We assume that a(x) and b(x) are smooth real-valued functions bounded from below by a positive constant. Moreover, we assume that a(x) and b(x) are bounded together with all their derivatives. Without loss of generality it can be assumed that a(0) = b(0) = 1, which can always be achieved by a linear change of variables followed by the multiplication of Ĥ by an appropriate constant. Under these conditions Ĥ can be defined by closure from C₀^∞(ℝ) as a closed operator on L²(ℝ). Consider the following stationary problem for Ĥ: find the solution u ∈ L²(ℝ) of the equation

$$\hat Hu(x) \equiv -a(x)\frac{\partial^2 u(x)}{\partial x^2} + b(x)x^2\Omega^2 u(x) = v(x), \tag{III.46}$$
where I) E L2(R) is a given element.
oo of problem (111.46) is a sequence Definition 111.3 An asymptotic solution as Q of operators 6N, N =1, 2, . . . , on L2 (R) such that HHGN — 1 .L2(R) _, Hk(N) (R)
C N•
—k(N)
Q>
oo and Hk (R) W (R) is the Sobolev space of order k.
where k(N) --+ oo as N
The construction of the asymptotic solution to (111.46) is carried out in three stages. Namely, at the first stage we reduce the problem to a similar one with localized righthand side. At the second stage we present the solution of this problem. Finally, at the third stage we construct the asymptotic solution for a general case.
6.2 Localization of the Right-Hand Side Near the degeneracy point x = 0 the operator LI is close to the operator a2
Q2 x2 ,
8X 2
which coincides with the energy operator of the quantum mechanical harmonic oscillator up to a constant factor. In this subsection we construct an operator 6 1 N such that the substitution u = 61 Ny yields an equation whose right-hand side is sufficiently smooth and, in some sense, is localized near the point x = O. This enables us to solve the resulting equation with the help of perturbation theory, using the known inverse We seek G 1 N in the form of a function of the following self-adjoint operators in
L2(R): Ai
=
a ax
,
A2 = S-2x, A3 =
1 22 2 61 N = GN 1 (A1, A2, A3, B).
B = x;
III. Noncommutative Analysis and Differential Equations
226
The operators A1, A2, A3 and B generate the Lie algebra with the commutation relations [A1, A2] = - i A3, [A1, B] = - i (all the other commutators are zero). According to the general theory presented in 1 2 2 2 product of two operators ii fi(Ai, A2, A3, B), i = 1, 2, can be Chapter II, the written as 22 2 1 0 = f3(A1, A2, A3, B), 12 /1 where the symbol f3(y , a), y e R 3 , a E JR, of the product
ii /2 is given by the formula
1 2 2 2 f3( )' , a) = fl(lili, 1 A2, /A3, 1 B)(f2(Y, a)),
where /A i , /A2 , /A 3 and /B are the left ordered representation operators,
a
a
1A 1 = yi - 1Y3, - 1, , 0)12 oa
1A 2 = Y2, /A3 = Y3,
13 =
CY.
In particular, since 212
212
H = a(B)A 1 ± b (B) A 2 , it follows that fidiN
=
1 + iI N -- 1 + R bi / (Ai , /-12, I13, L. ),
where 2 R NI (y , a) = a(a) (yi - iy3- - i - ± b(a)y3: G NI (y , a) - 1. ay2 ace a
a
)
(
Being solved exactly, the equation R N1 (y, a) = 0 for G NI (y , a) provides the exact solution of the original problem after the substitution of A1, A2, A3 and B for y 1 , yz, y3, and a. We will construct an asymptotic solution of that equation assuming that 1 /\/y 12: ± y3. and y3 I (y 12. ± y ) are small. Clearly,
a
a 2
a(a) (yi - 1y3- - i -) + b(a )yi = Po + P1 + P2, dY2
da
where P3 , j = 1, 2, 3, are differential operators of order j; more precisely, Po = a(a)Y? + b(a)A, Pi
= -2ia(a)yi
a
,
a
\
(y3yy 1- Tx ) '
6. Degenerate Equations
227
and
a
a \2
( Y3W2
)
Set
1
0 g (y ce)
g 1 (y a)
fa(a)y
b(a)y}
and define the functions g (y, a), k = 1, 2, . , by the iterative formulas 1
gk (Y, a) =
a (a) yf b (a) yi
(Pi gic -
a) ± P2g1c--2(Y , a))
(note that a (a)y? b(a)yq 0 0 for y i2. + y3. 0 0 by virtue of the conditions imposed on a (a) and b(a)). Next, choose a function V/ E Cceic (R) such that iii (Z) =- 1 in the neighborhood of zero. Set' GN I (y , a)
= *
)7 2 43_ ( - • - 2 k=0
v
Proposition 111.2 For this choice of the symbol G N1(y, a) the remainder R N1 (y, a) is bounded along with all its derivatives and satisfies the estimates
a lpi+y Pl
(y , a)
)62
ayi aY2 aaY
101, Y
= 0, 1,
Proof First let us show that the following assertion is valid.
l
Lemma MA The functions g (y, a), k = —1, 0, 1, 2, ..., have the form
6k
k/
=
Ecrik(yi, Y2, 003'3 ,
i=1 where the functions crik(yi,y2,a) are smooth for (Yi, y2) of degree (k j + bounded uniformly in a E R.
y2i
y3,'
0 and homogeneous with respect to
2). On the sphere y? + y = 1 all derivatives of these functions are
This lemma can easily be proved by induction on k. 1 The function G 1 thus defined has a singularity. However, we are only interested in the domain
y3> 1, where no singularities occur (y3 is replaced by 0)). Since in this case the function lit vanishes for yi2 ± )1, sufficiently small.
228
III. Noncommutative Analysis and Differential Equations
Let us now proceed to the proof of Proposition 111.2. The formula for R N1 (y, a) can be rewritten in the form R N ' a) (P0 + P1 + P2) * ,2
2_, gk(y , a) — 1
,2)
k=0
N =[P1 + P2, ( 2 Y3 2 Yi + Y2 )] k=0
g k (Y , a ) N
-Flk ( , Y3 ,) (P0gP(Y , a) —1) +
-FVF ( 9 y3 9 (PlgN i (Y
Evogl (Y, a) + Pigl_i (Y, a)P2glic _2(Y, a))
P2 g N 1 -1(Y , a ) + P2 g N1 (Y'' "
yr + Yi
The expression in curly brackets is equal to zero, which follows from the construction of g l (y, a). Let us estimate the remaining terms. (a) The derivatives of * (y3 /(y? + y3')) of order II > 1 satisfy the bound
a II
/
0 1 /32 aYi aY2
)
2
2
(
Y3
5 -
Yi 4- Y2
C
)10112
1
m13 ( 2 2 YI + Y2 )
+
for any m, which can be checked by straightforward derivation (one should take into account that the supports of all derivatives of *(z) lie in the domain 1/C < z < C for some C > 0). (b) The commutator [P1, VT] has the form 2
y3
Y3
Y3
P1 ' ( 2 2) YI + Y2
a (a ) Y1 Y2
(
2
2) YI + Y2
( 2
2)
YI + Y2
It follows immediately from Lemma MA on glic and from the bounds for 1î that [P1,
E g satisfy k=0
the needed estimates. Similarly we can estimate the term containing [P2, In (c) Let us estimate Pi g N1 (y , a): g Al , (y , a)
—2ia (a)
E ( y3
i +i
yl
J.°
a:iN o
ao-iN
+ Y 3 yi — •
0Y2
The derivative alPHY la4laylPae of this expression is a sum of terms of the form 3
Y3 6a(Y1, Y2, a)
where 0 s N We have, for y?' +
1 and o- (yi , y2 , a) is a function homogeneous of order r < — (N + fil I +s + 1). > c > 0, y3 > 1, Y'3a(Y1, Y2, a) < const Y(Y?' + YD- 1/31/2(y? ( Y3 2 2 Yl + Y2
s
) (N+1)/2 2 —I/3I/2 (YI + Y2 )
Y3
2
2
2)
YI + Y2
'
In analogy with the above, we can estimate P2gN 1 (y, a) and P2g h1r1 (y, a). In conjunction with the estimates (a), these estimates imply Proposition 111.2, since y3 Ay?
yD and 4y? + y are
bounded on supp Vf (owing to the condition y3 > 1). Proposition 111.2 is proved.
6. Degenerate Equations
229
Now let v(x, Q) E L2(R) be a function such that the norm I IvIlL2 (R) is bounded uniformly in Q. In the equation flu = v we replace u by uN = 6 N 1 v and obtain the following equation for the difference (u — 14N): fi(u — UN) = — 14v -=- rN(x, S2). As was already mentioned at the beginning of this section, the right-hand side r N (x Q) of this equation is localized in a neighbourhood of the degeneracy point x = O. We will now give a precise meaning to this assertion. Introduce the new variable 4 by setting x = 4. 0 -1 /2 , Q) f (x, S2) for any function f E L2 (Rx ) depending on the parameter Q.
Q) 14 -f
Proposition 111.3 The function 7-N( , S2) lies in the domain of (4'2 — 82/n2) (N+ 1)/2 and
(4.2
82 ) (N+1)/2 < c s2 1/4 11v( x,
r N(',Ç2)
a4.2
iL2 (R)
L2 (1ik)
where the constant C is independent of Q.
This proposition can be proved by the standard technique of L2-estimates. We omit this proof here since such estimates are not the topic of the present book. The reader can carry them out for himself or herself.
Let us now proceed to the second stage of the solution.
6.3 Solving the Equation with Localized Right-Hand Side Let
E
denote the small parameter s = Q -1 /2 • We again use the change of variables
X
For brevity, we denote the difference u — UN by the same letter u. The equation for u reads ,9 2 u( 8)
Hu
where
a(8)
E) = E2 FN (4, 6 -2 )
a •
2
b(s)4 2u(, s) =
Thus, the right-hand side
0 2 )(N+1)/2 11(,
3;2
s), s) satisfies the estimate
< c E 3/2
s)
(111.47)
L2(R4 )
where the constant is independent of E. The function u( 9 s) will be calculated with the help of perturbation theory, by expanding in E. To this end, let us choose a positive integer 1 (the precise value of 1 will be fixed later) and rewrite the equation for u in the following form using the Taylor expansions of a and b:
/ + E e k iik ± 8/±1 hi±i (8) u = 1), ( fiose k=1
III. Noncommutative Analysis and Differential Equations
230 where
82 fiosc — 2 82
is the energy operator of the quantum oscillator, a2
1 (bk e+2 _ fik = _ k!
02 /+1 _ b/+1 (eM 1+3 — cii-Fi (EM n2 ) '
1 = — h/+1 ( 6) /!
ak, bk are the MacLaurin coefficients of a(x) and b(x), and
1 a i+l a ai+i (x) = f ax1+1 (rx)(1 o
f1 a i+lb -t- )1 dr, bi±i(x) = o
dx 1+1
(rx)(1 —
Let us seek u(x, E) in the form 1 14(X, s) =
i \ ES k /4kkX, s).
k=0
For uk (x, s) we obtain the system of equations
floscuo(x, E) = v(x,
£),
k
floscUk(X , e)
= — E fli uk_i (x, 8 ) , i=1
Assuming this system to be satisfied we obtain the following remainders on the right-hand side: [fiu(x, s) — v(x, s)]
= E1+1 k=1
j=0
j=0
Let us solve the system for uk (x, E). The operator fiosc is boundedly invertible in L2 (TR) . The inverse has the form
.
fio—s ci =
n=0
where A.„ = 2n -I- 1 is the nth eigenvalue of !lose and Pn the orthogonal projector on the corresponding (one-dimensional) eigenspace. Thus, formally we obtain k
u0 = Its cl v ,
Uk = — fio-sci
E fiiuk_i, i=1
k = 1, 2, ... , d.
6. Degenerate Equations
231
In order to prove that the above formulas indeed define an asymptotic solution of the problem flu = y as s ---> 0, we should show that the functions uk successively obtained lie in the domains of flow , iii, i = 1, . .. , 1 and hi± i (s). Also, it is necessary to estimate the remainder. We shall prove the following assertion.
Proposition 111.4 Under the above assumptions on .v(, 8) the formulas constructed define correctly for eachl < N +1 an element u of L2 (1R) such that u lies in the domain of fI. For each k = 0, . . . , 1 the element u lies in the domain of (flosc) (N +3—k) /2 ; and the remainder satisfies the following estimates for 1 < 7r/2:
_ Cs I 1 fioNsZ2-1 (fiu — 01 1L 2 (Re ) <
,(N+1)/2 „ v 1 1L 2 (Re )• II ilsc H0
(Recall that y satisfies (111.47), and, consequently, the right-hand side of the last equation does not exceed c 81 +3/2).
Thus, for 1 < N/2 we obtain a true asymptotic solution modulo 0 (si+ 1 ), and the greater N, the more derivations and multiplications by 4 can be applied to this asymptotic expansion. We omit the proof of this proposition due to its purely technical nature. The constructed solution of the equation fili
=V
has the form u = G-2iv, "
where G21 is the unbounded operator in L2 (IR) determined by the formulas
/ 62, = E e k 62 1k; k=0
6 2 10 = 11 0— sci ; k
62 1k = — ilk 0—sci
E fi/ 62/
_„
k = 1, . . . , 1.
i=1
The operators 621k are (1(10 ) 1+1 /2 -bounded in L2(11) uniformly in E (recall that an operator A is said to be B-bounded if I 'Au' I < const i I Bul I for all u in the domain of the operator B). The operator 67 written in the original coordinates (x, S2) will also be denoted by 621. These operators together with the operators G NI determined in Subsection 6.2 will be used in construction of an asymptotic solution in the next subsection.
III. Noncommutative Analysis and Differential Equations
232
6.4 The Asymptotic Solution in the General Case The following theorem describes the asymptotic solution in the general case. Theorem 111.2 The sequence of operators 6 N =61 4N — 62 NR-1 4N
oc to (111.46) in the sense of Definition 111.3.
is an asymptotic solution as Proof We have
fl6N = f1 6 1 4N — fid2Ni14N = 1+ (1— f/620i14N. Thus, we should prove that there exists a function K (N) such that lirn
K(N) = +pa
and > 1.
4 N11L2(K(N)(R x ) < CN2 K(N)
11( 1 — f162
By Proposition 111.4,
e
11(flosc) k ( 1 — 12 N)14111,2(R)
8N+1 11(flosc)2N+112_"1 I lL2(Rd
for k < N . By Proposition 111.3, IrliN VI IL2(Rt ) < C 21/4 11VIIL2(Rx )• I I (flosc)2N+1/2 /14
Returning to the variables x, we obtain
2 —k
n 2, )k ( s.-22 x 2
—
'.7c2
<
6 2N )R 7 —4N, V
2 —(N+1)1 2 11!IV 11
ilL2(Rx)•
L2 (Ri )
To continue the proof we need the following statement.
Lemma 111.2 For any natural k there exists a constant Ck independent of S -2 > 1 and such that for any y E S(R) we have 2
(1
—
aax2
\
k
V
< Ck L2(R x
)
(22 x 2
02 )k .X 2
L2 (1R)
7. Microlocal Asymptotic Solutions for an Operator with Double Characteristics
233
Proof Let us make the change of variable x = I ,s,/r2, then the inequality in question becomes
/1 F-2
2
a \k a2
)
< Ck
I)
(e _ n2 32 j
) k
v
L2 (Re )
L2 (Re )
Since 1/ S-2 < 1, it suffices to prove that
al l)
av
< Ck
( 22 a2± )) ' .
L2 (Re )
L2 (R
)
for / < k, which follows from the fact that the operator
r (4) s (J)—(r+1)/2 is bounded for any r, s. The last assertion can be verified with the help of the standard technique of L2-estimates. The lemma is proved. El Hence, with the help of Lemma 111.2 we obtain HO — 1162N)h4N I IL2(Rx)—>Hk(Rx) _<
c Qk-(N+1)/2 .
Putting K (N) = (N ± 1)/4, we get the desired estimate. The theorem is proved.
I=1
7 Microlocal Asymptotic Solutions for an Operator with Double Characteristics In this section we consider (pseudo)differential equations with degeneration of a type somewhat similar to that discussed in the preceding example. The equations in question are a quite special case of equations with double characteristics. The latter have been considered in a number of papers (e.g., see Boutet de Monvel [15], Kohn [108], Boutet de Monvel and Treves [16], H6rmander [80], Radkevich [157], Gnishin [74], and Taylor [175]). We do not obtain any new results; our only purpose is to show how the operator calculus machinery can be used in combination with conventional techniques in order to solve problems in the theory of differential equations. Accordingly, we put the main emphasis on the operator calculus and are rather brief on the other topics arising in this example. We consider the equation
Lu = f,
(111.48)
III. Noncommutative Analysis and Differential Equations
234 2.
1
where L = L(x, --talax) is a (pseudo)differential operator satisfying several conditions to be given below. In what follows we deal with micro local asymptotic solutions of (111.48). One says that a function w is an asymptotic of a function y microlocally at a point (xo, po) of the phase space T*Illn if the wavefront W F(w — y) of their difference does not contain the point (xo, po)'. An operator h is called a right microlocal regularizer of L if for any right-hand side f the function Li? f is microlocally an asymptotic of f at (xo, po). Equivalently, h is a right microlocal regularizer of L if
(111.49) where S' is a pseudodifferential operator whose symbol is equal to 1 in some neighbourhood of the point (xo, po) and '6 is an operator of order —oo in the Sobolev scale w soRn\)1, . We shall only construct microlocal regularizers of finite order; that is, the operator '6 in (111.49) will be of arbitrarily large but finite negative order. The passage to infinitely smoothing '6 is standard and can be accomplished by using the Borel lemma. Let us state the conditions that we impose on L. We assume that L is an operator of order m with total symbol -00
L(x, P) =
E (-i)m -kLax, 13), k=m
where Lk(x , p) is kth-order homogeneous in p. It is assumed that (x0, po) is a characteristic point of L (that is, L,n (xo, AO = 0) and that the following conditions are satisfied in a homogeneous neighbourhood of the point (xo , Po): (i) Lm (x, p) > 0 for p 0 0; (ii) the characteristic set S = {(x, p)I L,„(x, p) = 0} of the operator L is a smooth manifold of dimension 2n — 2; (iii) for any (x, p) E S and any vector at the point (x, p) the inequality (d 2 L m (x, p), 0 > 0
holds provided that is not tangent to S; (iv) we have rank i *co = 2n — 2 = dim S, n
where i : S --> R n (131 R n is the natural embedding, co =
E dpi
A dxi is the
standard symplectic form in the symplectic space Rn ED iRn with the coordinates (x, p). Thus, L is (microlocally, in a neighbourhood of (xo, po)) an elliptic operator degenerating on the surface S. Mishchenko, Stemin, and Shatalov [137] for a comprehensive discussion of wavefronts and microlocalization. 1 See
7. Microlocal Asymptotic Solutions for an Operator with Double Characteristics
235
Note that in view of condition (i), we have dL ni (x, p) = 0 for (x, p) E S. Hence condition (iii) is invariant with respect to changes of variables. Before solving equation (111.48) we bring it into a simpler form by transformations that clearly do not affect the possibility of constructing microlocal asymptotic solutions. The transformations are as follows: 1. Right and left multiplications by a microlocally elliptic pseudodifferenti al operator A:
L F-›• 2.
LA,
il i—
AL.
Conjugation
L by a microlocally unitary Fourier operator associated with a homogeneous symplectic transformation g (here cio; is the adjoint of Cig g in L2, and microlocal
unitarity means that the total symbol of ('IS) *g 3g is equal to 1 in a neighbourhood of (xo, Po)).
Remark 111.1 Without loss of generality it can be assumed (and is assumed throughout the remaining part of this section) that m = 2. Indeed, otherwise, one can always apply transformation 1 with ord A = 2 — m. The following theorem shows that the operator L can be reduced to a quite simple "normal form."
Theorem 111.3 Let L be an operator satisfying the above conditions. Then there exist elliptic pseudodifferential operators  and h and a symplectic transformation g such that: (a) g -1 takes the surface S to the surface {xi = pi = 0 } and the point (xo, po) to a point (0, 4; 0, 130, pot 0 0, where x' = (x2, . .. , x n ), p' = (p2, .. . , pn ); (b) the composition of the corresponding transformations 1 and 2 takes L to the operator
il = H
= Ack*Lcbg h
with total symbol -00
11(x, p) = p? + 411112 — i 111(x' , p') + E(-i)i Hi (4 p').
i=o
Here ord Hi (x' , p') = j, j = 1, 0, —1,..., and Hi (x' , p')
= g* 121,sub Idet[co-1 d2 L211 1 1
113'1. Xl=p1=0
III. Noncommutative Analysis and Differential Equations
236
In the last formula 1 n 82L 2 (x, p)
L Alb (X , p) = Li(x, p) —
IE oXi n nupi i=1
is the subprincipal symbol of L,
is the mapping induced by the standard symplectic form a) = dp A dx (and denoted by the same letter); d2 L2 : TR2n ___>. T * 11 2n
is the mapping taking any vector to the form d2 L2(, .); 0) -1 0 d2 L2 is restricted to the skew-orthogonal complement of TS prior to evaluating the determinant. Note that the function H1 (x', p') is determined uniquely up to a symplectic transformation in the space lexr,' p,. The proof is based on the technique of microlocal reduction to a normal form (e.g., see [172], [124], [143], [144]). This technique has nothing to do with noncommutative analysis, and so we omit the proof and deal directly with the reduced operator in what follows. Hence we consider the equation
flu = f, where
il is the pseudodifferential operator with symbol —00 H(x, p) = pi' ± xiip'1 2
—
i Hi (x' , p') + E (_i )iiii(x' , p'). j=0
Suppose for the moment that x' does not occur in the equation, that is, the operator H does not depend on x'. Then, by applying the Fourier transform with respect to x', we arrive at the equation a2
[
—
ax?
—oo ± IP/ 1 2 X? — D — i) i Hi (P) 171- = f
.J =1
where the tilde stands for the Fourier transform. The task is to obtain the asymptotic expansion of the solution as I pl i —> oc. This is pretty easy. Denoting = A.,
...fi.xi = Y,
w=
13 '
Ip'l
7. Microlocal Asymptotic Solutions for an Operator with Double Characteristics
237
we rewrite the equation as [
—oo
a2
-
ay 2
+ y2 - Hi (w) + z xi - ' ( - i) i Hi (w) 1'4 = x -1 f. j=0
asymptotics as A. ---> oo of the solution to the last equation can be obtained by regular perturbation theory provided that H1 (co) is not an eigenvalue of the operator The
— 02/8 y 2 ± y2 for any co E
Hence, the idea was to consider p' as parameter thus reducing the problem to the familiar quantum-mechanical oscillator. Let us now return to the general case, in which x' does occur in the equation. Then the Fourier transform does not help us make the primed variables into parameters. However, still there is an appropriate technique, namely, that of operator-valued
symbols2 . Let us say a few words as to what an operator-valued symbol is. Let A be an operator algebra and .F a proper symbol space. The usual definition of the spaces of n-ary symbols reads
.Fn = .F6 • - • 6.F . ,.._...„.,_., n copies
We consider the space
frn = A6.F6 • • • 6.F = A® T n . The elements of
.frn are called operator-valued symbols. If A 1 , . . . , An are .F-
generators in A and f E frn an operator-valued symbol, then we can substitute An into f. However, in this case not only A1 , . . . , An , but also f itself should be equipped with Feynman indices. If f = a 0 g, a E A, a E Tn , then we set
su in in s il f (Ai, . . . , An ) = ag(Ai, ... , A n ). The definition is extended to the entire .fro by linearity and continuity. The calculus of operator-valued symbols includes many useful counterparts of formulas valid for ordinary symbols; they are usually easy to prove by considering factorable elements first. Let us give a few examples: (a) The conjugation formula , s ll
in
3
.11
in
U — ifff (Ai, . . . , A n AU = U -1 fU (U -1 AiU , . . . , U -1 AnU), 2 This technique, which originally arose in context of differential equations, is now well-integrated into noncommutative analysis.
238
III. Noncommutative Analysis and Differential Equations
(b) The left multiplication formula s
L s :11 in f(A1 ,...,An)
in
il
= Lf(LAi, • • • , L An)•
Returning to our example, the idea we have in mind is to represent fi as a function with operator-valued symbol that can be considered as a perturbation of the quantum oscillator. Namely, we write 2
1a ' H=O (x', —i—) ,
ax/
where
0 (x' , p') = '41p 12 —
00
a
+ —i Hi (x' , p') +E(-0— i Ili (x' , p') I,
494
i=0
(I is the identity operator). Note that the symbol 0(x', p') satisfies the inequalities aa+P (ax')°€ (ap')fi 0 (x' , p')u
< Cap(1 +1111 2—IP l ) (1 + Xf L2
2 (
a.0 2 )8 u
L2
Naturally, the regularizer will also be sought in the form 2 a f? = R (x' , —i l— ) ax/
with operator-valued symbol R(x' , p'). We should have where
"6 is a smoothing operator in the Sobolev scale; if so, then the operator i?'i = cb g iihiiii■ g*,
where 3g, Â, and h are the operators constructed in Theorem 111.3, is a microlocal right regularizer of L. In order to compute the product II h, we use the left ordered representation of the 1 2 tuple (x, —i a/ax'); this representation has the form (cf. Chapter II)
= x",
Lia/ax' = P' — ia lax' .
since both symbols il and h are operator-valued, care should be taken so as to guarantee that the usual product formula be valid. In our case, the symbol 0(x' , p') commutes
239
7. Microlocal Asymptotic Solutions for an Operator with Double Characteristics
with each of the operators x' and —id/ax', and we shall require that the same be true of the symbol R(x' , p'). Then we have (1 2 (-
smbl(ilf?) = 0 x' , p —i
R(x' , p') = E
ax/
OM a 0 (x' , p') a R(x l , p')
a!
ce>o
(ax')a .
(ap')a
(III.50) We intend to solve the equation
smbl(iii?) = 1 asymptotically. For convenience, we make a certain transformation of the symbols 0 and R of the operators il and R. Our aim is to transform the principal part of 0 (x' , p') into the operator of the quantum oscillator. Let
Ul : L2(10)
L2(10),
be the operator that acts according to the formula Uil (Y) = f (Y
IX).
Let
U =_-- U(p) = U.,/71 . Clearly, we have 0 (x l , p') = U
1 [111 1{flosc — iG (x' , p')1 ± N (x' ,
p')]U,
where _00
G(x' , p') = 111(x' , 11 )111/1,
N (x' , p')
=E(- 0 -iHi(x',
p'),
i=o and
ti
OSC
a )2 (_ i_ axi
is the operator of the quantum oscillator. We seek R in the form R=U-1 CU. We equate the right-hand side of (III.50) to 1 term-by-term and solve the resulting infinite system of equations recursively. The difficulty is that U does not commute with a/ap', and this leads to additional terms in the transformed equation (III.50).
III. Noncommutative Analysis and Differential Equations
240
To this end, we should be able to compute the derivatives — i a I api(U-1 AU) and —ia/axi (U -1 AU) for a given symbol A. Clearly, we have
au f ] (y) = a fJ H , api
y af 1 1 „II -312 Pi y )_ Gr pTi - - II P I . - ( Y - ) (— ,-- ) 114 ay -VIM
ape
1 pi
y
2 Ip i i 2
af (
y ) _ 1 pi
( ±) f ( y ) .
aY j/71 — 2 1p'1 2
Hence
au
1 pi
8Y )
VT'l
a
api = 21p1 12Y ay u. On the other hand,
au-1 ay _ 1= 1 , = _rj— l —ri 2 opi
a pi
pi
F19, 12 U
_1
a
3' Ty .
These formulas yield
1 pi r a (u-1 Au) = (J-1 I a A 21p'12 Ly ,, , A1 l u. api tapi +
a
The formula for (a/axi)(U-1 AU) is evident,
a
axi
,
(U — ' AU) = U—1 aA y
axi .
We obtain the following equation for the symbol C: smbl(fift) = U -1 [i/Il Posc — iG(x / , 11)1 ± N(x' , ',
+E Icell
( -
i)°` u-1
a!
( a +1 p'
ap'
2 1 p'1 2
ICU
(111.51)
la CU = 1.
ady alay ) " (— ax
Let C = Co — iCi — C2 + • • •
be the expansion of C by homogeneity degree. Equation (111.5 1) results in the following system of equations:
—iG(x' , p')) Co = 1, —iG (x' , p')) C1 = —iG (x' , p')) C2 = -F2(Co, CO,
7. Microlocal Asymptotic Solutions for an Operator with Double Characteristics
241
where
F(C0), F(Co, C1), . . . are expressions that depend only on the symbols determinable from the preceding equations. The symbols Ci (x', p') can be computed from this system if the operator /lose — iG(x' , p') is invertible, that is, if iG(x / , p') does not assume values in the spectrum of !lose . Let us write down the expressions for the Co, C1: C1: Co (x i , /1 )
1 { flow — iG(x / , 11)1 II
Ci(x' , p') =
114
{fl ow — iG(x i , Pi)1 1 I N (xi 13') {flow — iG(x i , 11)1 IP'l {,
+
ip I
i=2
aG(X l ?
PI
)
Pi tio_se _L
_L 1
where
1Pi IPf l °sc1
aPi
111 osc — iG(x / 191)1
Ip'l
fi
—2
aG
P')1,
axi
= y2 ± 82/42 .
fiosc We have taken into account the fact that 2 7 y2 y a , y2 ± 32
L
a2
ay
ay 2 0 y2] Theorem 111.4 If the function iG(x' , p') does not assume values belonging to the spectrum of the operator flow , the microlocal regularizer of the operator H exists and has the form
it
= (ucu) -1
1
2
a
(x' —i— •
'
axi)
'
where the symbol C is the solution of equation (III.5 1 ). This equation possesses an (asymptotically) unique solution, whose first two components are given by the above formulas. In closing, let us point out that this is the simplest example, which indicates two main features of the approach: 1) The operator-valued symbol, in conjunction with operator transformations of the symbol, is used in such a manner that reduces the principal part of the equation to a simple (though infinite-dimensional) operator. 2) Ordered representations are used to compute products of operators with operatorvalued symbols. In that case certain commutativity conditions should be imposed in order that the usual product formulas be valid.
Chapter IV
Functional-Analytic Background of Noncommutative Analysis
1 Topics on Convergence Functions of several Feynman-ordered operators were introduced in Chapter I. Their definition and the proofs of theorems concerning their properties in fact rely heavily upon the notions of continuity and convergence, completion of tensor products, etc. However, we were not too rigorous about all these concepts and abandoned precise statements in favour of brief and clear exposition based primarily on common sense. The present chapter is intended to justify the definitions and assertions made earlier in this book; we begin with locating the bottlenecks of our argument.
1.1 What Is Actually Needed? Let us briefly look through Section 2 of Chapter! and find out exactly in which places the convergence and related notions are used and in what context (that section is a crucial one, since the sections following it are based on its results rather on the properties of convergence itself). There are quite a few such places where some mapping is required or proved to be continuous, but the most important are as follows: (a) In Definition 1.2 the Feynman quantization mapping
AA f (yi, • • • ,
: Tn.
—>
A
Yn)
1--
f (A)
1
n
for some Feynman tuple of operators A = (A1, .. . , A n is described as the unique extension by continuity from the subspace .7. 0 • • • 0 .F, where .T. is the space of unary symbols; moreover, the space .Fn of n-ary symbols is defined as the completion .').F of the tensor product T 0 • • • 0 T. However, nothing was said Fn = .F6 • • • (g about the topology chosen on .F 0 • • • 0 .F, so that this point needs clarification. )
1. Topics on Convergence
243
(b) The main hypothesis in the uniqueness theorem (Theorem 1.1) is that the difference derivative is a continuous mapping
s
3
: .7. --->- .F6.F;
-3
the same type of objection applies here. (c) In the statements and proofs of the main theorems in Section 2 the passage from an operator algebra A to the algebra G(A) of continuous linear operators on A and to (at least 2 x 2) matrix algebras over A is freely used, and, for a linear space E, algebras of continuous operators on E are considered. Hence, we should equip the newly arising spaces and algebras with a natural notion of convergence. Let us examine the question about tensor products more closely. Given a space ,F of unary symbols with some topology, what topology is actually needed on .T. 0 ,F? The answer becomes clear if we look at how this tensor product is used in our definitions. Suppose we are given two generators A, B e A and, hence, two continuous mappings
AA : Y . —>- A
and
,uB : ..T —> A.
We define the mapping Ii 2 : A,B
TOT-±A
by setting
1-t 1 2 ( f
0
g) = AB (g) p, A( f ) = g(B) f (A)
A,B
the mapping is extended to the whole TOT by linearity and to .F(g).F, the completion of .F 0 F, by continuity. We see that our definitions must ensure the continuity of P' 1 2 on .F 0 T, otherwise the construction fails. A,B
Suppose momentarily that .F and A are Banach algebras; we can define a mapping
v:TOT ---> A by setting
v(f, g) = g(B) f (A); clearly, y is a continuous bilinear mapping, that is,
II v(f, g )II
clIf I 110,
where C = II ILA II • II AB ll . Consider the following projective norm on .F 0 .F:
II(PII = infi tII fill II gill, i.i
IV. Functional-Analytic Background of Noncommutative Analysis
244
where the infimum is taken over all representations N (P
=
E fi 0 gi i=1
(where N finite is not fixed). We claim that u extends by linearity to a continuous mapping .F 0 .F --> A. Indeed, denote the extension by T) and let ço ETOT. For each E > 0 there exists a positive integer N and elements f, gi E .T such that n
cp =EfiOgiand i=1
t II.fi I Ilgill i=1
11Ç911 +E.
Then IF) ( q)) 11 =
0 Ê gi(B) fi(A)0
C l'
i=1
Ilfi I Ilgi I
calsoll +0.
i=1
Since E is arbitrary, we obtain
Ilf( (p )I I
c (ho l l).
The completion FF of T 0 Y with respect to the chosen norm is called the projective tensor product of .F and .F and is characterized by the following property: for any continuous bilinear mapping u : F (8) ..T . --÷ A into a Banach space A there exists a unique continuous mapping 13: FF. 6 .—> T A such that the diagram v
YxY
A
.T.6.F commutes, where the left arrow is the natural embedding (a, b) 1—> a 0 b. Let us consider a simple example. Suppose that .7- is the Banach space of continuous functions on the unit circle, T = C (S 1 ). Then the space .F2 = FFof binary symbols has a rather odd-looking structure. It is continuously embeddedl into C (S 1 x 5 1 ) = C(T 2 ) since if ço = E fi 0 gi E C(S 1 ) 0 C(S 1 ), then
II w 11 c(T 2)
=
max I x, y ET2
fi (x) max gi (y) E fi (x)gi wi ..-.. E max .x
= E lifi Ilc(si)llgi 11c(si), 1 We omit the proof of injectivity.
Y
1. Topics on Convergence
245
and we can pass to the infimum over all possible representations of $\varphi$. However, the norm on $\mathcal{F}\hat\otimes\mathcal{F}$ is not the one induced from $C(T^2)$. Indeed, let $\Psi(x)$ be a continuous function with $C$-norm $1$ and support in $[0, 2\pi]$. Set

$$\varphi_N(x, y) = \sum_{k=1}^{N} \Psi(Nx - 2\pi k)\,\Psi(Ny - 2\pi k);$$

then

$$\|\varphi_N(x, y)\|_{C(T^2)} = 1 \qquad\text{and}\qquad \|\varphi_N(x, y)\|_{C(S^1)\hat\otimes C(S^1)} \ge N$$

(the latter inequality is easy to prove, since the supports of the terms in the sum defining $\varphi_N(x, y)$ do not intersect one another). Hence,

$$C(S^1\times S^1) \ne C(S^1)\hat\otimes C(S^1).$$

Fortunately, the situation is quite different for infinitely differentiable symbols, which, by Remark 1.6, are of primary interest to us. Let us take $\mathcal{F} = C^\infty(S^1)$ and prove that

$$\mathcal{F}\hat\otimes\mathcal{F} = C^\infty(T^2)$$

(the symbol space $C^\infty(S^1)$ is not used anywhere in this book; however, we think it is useful to consider this example, because the idea of the proof, which is quite obvious here, remains essentially the same for the symbol spaces $S^\infty(\mathbb{R})$, but the proof itself becomes rather cumbersome). We should first equip $C^\infty(S^1)$ and $C^\infty(T^2)$ with some topology. We consider $C^\infty(S^1)$ as the Fréchet space with topology defined by the system of seminorms

$$\|f\|_k = \max_{x\in S^1}\sum_{j=0}^{k} |f^{(j)}(x)|, \qquad k = 0, 1, 2, \ldots
$$
Similarly, the system of seminorms

$$\|\|f\|\|_k = \max_{(x,y)\in T^2}\sum_{i+j\le k} |f^{(i,j)}(x, y)|, \qquad k = 0, 1, 2, \ldots$$

makes $C^\infty(T^2)$ into a Fréchet space. Consider the diagram

$$\begin{array}{ccc} C^\infty(S^1)\hat\otimes C^\infty(S^1) & \overset{\tilde\omega}{\dashrightarrow} & C^\infty(T^2)\\ \uparrow & & \uparrow\\ C^\infty(S^1)\otimes C^\infty(S^1) & \overset{\omega}{\longrightarrow} & C^\infty(S^1)\otimes C^\infty(S^1), \end{array}$$
IV. Functional-Analytic Background of Noncommutative Analysis
where the vertical arrows are embeddings and $\omega$ is the identity mapping on $C^\infty(S^1)\otimes C^\infty(S^1)$ considered with two different topologies: on the left, this is the topology of the projective tensor product determined by the system of seminorms

$$\|\varphi\|_{\hat\otimes, k} = \inf \sum_i \|f_i\|_k\|g_i\|_k,$$

where the infimum is taken over all representations $\varphi = \sum_i f_i\otimes g_i$, whereas on the right, this is the topology induced from $C^\infty(T^2)$. We claim that the dashed arrow $\tilde\omega$ is well-defined and is a continuous isomorphism. The proof is in several steps.

1) Let us prove that $\omega$ is continuous. In fact, this is obvious, since if $\varphi = \sum_i f_i\otimes g_i$, then

$$\|\|\varphi\|\|_k = \max_{(x,y)\in T^2}\sum_{s+j\le k}\Bigl|\sum_i f_i^{(s)}(x)\,g_i^{(j)}(y)\Bigr| \le \mathrm{const}\cdot\sum_i \|f_i\|_k\|g_i\|_k,$$

where const is independent of the number of summands, and, by passing to the infimum over all possible representations, we obtain

$$\|\|\varphi\|\|_k \le \mathrm{const}\cdot\|\varphi\|_{\hat\otimes, k}$$

for any $k$. Thus, the continuity of $\omega$ is proved, and we can uniquely complete the dashed arrow by continuity.

2) Next, let us prove that $C^\infty(S^1)\otimes C^\infty(S^1)$ is dense in $C^\infty(T^2)$. This is, however, trivial, too, since for any $\varphi\in C^\infty(T^2)$ its Fourier series

$$\sum_{k,l=-\infty}^{\infty} \varphi_{kl}\, e^{ikx} e^{ily}, \qquad\text{where}\qquad \varphi_{kl} = \frac{1}{4\pi^2}\int_{T^2} e^{-ikx-ily}\varphi(x, y)\,dx\,dy,$$

converges to $\varphi$ uniformly with all derivatives, and its partial sums

$$\sum_{k,l=-N}^{N} \varphi_{kl}\, e^{ikx} e^{ily}$$
belong to $C^\infty(S^1)\otimes C^\infty(S^1)$.

3) Finally, let us prove that $\omega^{-1}$ is continuous. In conjunction with the above, this clearly implies the existence and continuity of $\tilde\omega^{-1}$. We will show that there exist a constant $C$ and a positive integer $m$ such that

$$\|\varphi\|_{\hat\otimes, 0} \le C\,\|\|\varphi\|\|_m \tag{IV.1}$$

for any $\varphi\in C^\infty(S^1)\otimes C^\infty(S^1)$; then, by induction over $s$, it can easily be shown that

$$\|\varphi\|_{\hat\otimes, s} \le C_s\,\|\|\varphi\|\|_{m+s}$$

(we omit the induction step). Thus, let

$$\varphi = \sum_{i=1}^{N} f_i\otimes g_i \in C^\infty(S^1)\otimes C^\infty(S^1).$$
No matter how large $m$ is, we can represent $f_i$ and $g_i$ as

$$f_i = F_i + \tilde f_i, \qquad g_i = G_i + \tilde g_i,$$

where $F_i$ and $G_i$ are trigonometric polynomials and the remainder terms $\tilde f_i$ and $\tilde g_i$ satisfy the estimates

$$\|\tilde f_i\|_0 \le \|\tilde f_i\|_m \le \varepsilon, \qquad \|\tilde g_i\|_0 \le \|\tilde g_i\|_m \le \varepsilon,$$

where $\varepsilon > 0$ is arbitrarily small. Set

$$\varphi_0 = \sum_{i=1}^{N} F_i\otimes G_i.$$

Then

$$\varphi - \varphi_0 = \sum_{i=1}^{N}\bigl(\tilde f_i\otimes g_i + f_i\otimes\tilde g_i - \tilde f_i\otimes\tilde g_i\bigr),$$

and we have

$$\|\|\varphi - \varphi_0\|\|_m \le M_1\varepsilon, \qquad \|\varphi - \varphi_0\|_{\hat\otimes, 0} \le M_2\varepsilon,$$

where $M_1$ and $M_2$ depend only on $N$, $\max_i\|f_i\|_m$, and $\max_i\|g_i\|_m$. Since $\varepsilon > 0$ is arbitrary, we see that it suffices to prove (IV.1) for trigonometric polynomials. Thus, we can assume that

$$\varphi(x, y) = \sum_{k,l=-R}^{R} \varphi_{kl}\, e^{ikx} e^{ily},$$
and, consequently,

$$\|\varphi\|_{\hat\otimes, 0} \le \sum_{k,l=-R}^{R} |\varphi_{kl}|.$$

Take $m = 2$. We have

$$\sum_{k,l=-R}^{R}|\varphi_{kl}| = \sum_{k,l=-R}^{R}|\varphi_{kl}|\sqrt{(1+k^2)(1+l^2)}\cdot\frac{1}{\sqrt{(1+k^2)(1+l^2)}} \le \Biggl\{\sum_{k,l=-R}^{R}|\varphi_{kl}|^2(1+k^2)(1+l^2)\Biggr\}^{1/2}\Biggl\{\sum_{k,l=-R}^{R}\frac{1}{(1+k^2)(1+l^2)}\Biggr\}^{1/2}$$

by the Cauchy–Schwarz–Bunyakovskii inequality. The second factor in the braces is bounded uniformly with respect to $R$, since the series

$$\sum_{k=-\infty}^{\infty}\frac{1}{1+k^2}$$
is convergent. The first factor can be rewritten as

$$\sum_{k,l=-R}^{R}|\varphi_{kl}|^2(1+k^2)(1+l^2) = \frac{1}{4\pi^2}\int_{T^2}\Bigl[\Bigl(1-\frac{\partial^2}{\partial x^2}\Bigr)\varphi(x, y)\Bigr]\,\overline{\Bigl(1-\frac{\partial^2}{\partial y^2}\Bigr)\varphi(x, y)}\,dx\,dy \le \|\|\varphi\|\|_2^2,$$
for the same reason. Hence, we arrive at inequality (IV.1), and the desired assertion is proved.

Let us now outline the requirements on the class of spaces and algebras to be used in the constructions of noncommutative analysis. Partially, these requirements follow from the preceding considerations, and partially they are implied by general mathematical principles. The spaces and algebras should be endowed with a notion of convergence, and notions such as "Cauchy sequence," "completion," and "dense subspace" should be well-defined. If $E_1$ and $E_2$ belong to the considered category, then the same should be true of $E_1\oplus E_2$ and of the space $L(E_1, E_2)$ of continuous linear mappings from $E_1$ to $E_2$. Moreover, the projective tensor product should be well-defined (as the universal object for polylinear mappings). From the practical viewpoint, we should have the possibility to deal with unbounded operators (such as differential operators), either by involving the technique of scales of spaces or otherwise. However, it seems impossible to devise a consistent approach to the construction of symbol spaces and operator algebras within the framework of topological linear spaces. The most adequate apparatus is probably supplied by the theory of polynormed spaces and algebras, some elements of which are presented below.
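The Cauchy–Schwarz step used above to prove inequality (IV.1) is easy to check numerically. The sketch below (pure Python; the helper names are our own) draws random Fourier coefficients $\varphi_{kl}$, compares $\sum|\varphi_{kl}|$ with the product of the two bracketed factors, and confirms that the second factor stays bounded as $R$ grows:

```python
import math
import random

def coeff_l1(coeffs):
    # sum_{k,l} |phi_kl|, which dominates the projective seminorm of the polynomial
    return sum(abs(c) for c in coeffs.values())

def weighted_l2(coeffs):
    # first Cauchy-Schwarz factor: (sum |phi_kl|^2 (1+k^2)(1+l^2))^(1/2)
    return math.sqrt(sum(abs(c) ** 2 * (1 + k * k) * (1 + l * l)
                         for (k, l), c in coeffs.items()))

def weight_factor(R):
    # second factor: (sum_{|k|,|l|<=R} 1/((1+k^2)(1+l^2)))^(1/2); since the double
    # sum is a product of two identical single sums, this equals sum_{|k|<=R} 1/(1+k^2)
    return sum(1.0 / (1 + k * k) for k in range(-R, R + 1))

rng = random.Random(0)
R = 8
coeffs = {(k, l): complex(rng.uniform(-1, 1), rng.uniform(-1, 1))
          for k in range(-R, R + 1) for l in range(-R, R + 1)}

print(coeff_l1(coeffs) <= weighted_l2(coeffs) * weight_factor(R) + 1e-9)
# the second factor is bounded uniformly in R:
print(weight_factor(10 ** 4) < math.pi / math.tanh(math.pi))
```

The second factor converges to $\pi\coth\pi\approx 3.153$, which is why the constant in (IV.1) can be chosen independently of $R$.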
1.2 Polynormed Spaces and Algebras

The exposition in this and the following section is neither complete nor self-contained owing to space limitations. We only give the necessary definitions and provide examples making these definitions understandable; most of the proofs are omitted and can be found elsewhere.

Following Moore and Smith, we describe convergence in terms of generalized sequences, and so it is appropriate to say a few words on that topic. Let $I$ be a poset (a partially ordered set). The set $I$ is said to be directed if for any $\alpha\in I$ and $\beta\in I$ there exists a $\gamma\in I$ such that $\gamma > \alpha$ and $\gamma > \beta$.

Definition IV.1 A generalized sequence in a set $X$ is a mapping $I\to X$, $\alpha\mapsto x_\alpha$, where $I$ is a directed set. One says that $\{x_\alpha\}$ is a (generalized) sequence indexed by (elements of) $I$; the words in parentheses may be omitted.

Suppose that $X$ is a topological space. A generalized sequence $\{x_\alpha\}_{\alpha\in I}$ in $X$ is said to be convergent to an element $x\in X$ if for any neighborhood $U\subset X$ of the point $x$ there exists an $\alpha_0\in I$ such that $x_\alpha\in U$ for all $\alpha > \alpha_0$. A topology on $X$ is uniquely determined by the class of convergent generalized sequences in $X$.

Next, we need the notion of a filter.

Definition IV.2 Let $X$ be a nonempty set. A family $F$ of subsets of $X$ is called a filter in $X$ if the following conditions are satisfied:
i) $\varnothing\notin F$;
ii) if $A\in F$ and $B\in F$, then $A\cap B\in F$;
iii) if $A\in F$, then for any subset $B\supset A$ of $X$ we have $B\in F$.

Let $F$ be a filter in $X$. A subset $F_0\subset F$ is called a filter base if each $A\in F$ includes some $A_0\in F_0$. Note that if $F_0$ is a filter base, then it necessarily has the following property: for any $A, B\in F_0$ there exists a $C\in F_0$ such that $C\subset A\cap B$. Indeed, $A\cap B\in F$ since $F$ is a filter, and the existence of $C$ follows since $F_0$ is a base of $F$.

Definition IV.3 A bidirected set is a poset $I$ such that both $I$ and $I'$ are directed sets (here $I'$ is the poset obtained from $I$ by inverting the ordering relation).
Hence, a poset $I$ is a bidirected set if for any $\alpha, \beta\in I$ there exist $\gamma, \delta\in I$ such that $\gamma > \alpha$, $\gamma > \beta$ and $\delta < \alpha$, $\delta < \beta$.

Definition IV.4 Let $I$ be a (bi)directed set. A section of $I$ is a nonempty subset $S\subset I$ such that if $\alpha\in S$, then each $\delta\in I$ satisfying $\delta > \alpha$ belongs to $S$.

We can now give the definition of a polynormed space. Let $H$ be a linear space over the field $\mathbb{C}$ (spaces over $\mathbb{R}$ are considered in the same way).
Definition IV.5 A polynorm on $H$ is a mapping

$$p\colon H\times I\to\overline{\mathbb{R}}_+ = [0, \infty)\cup\{\infty\},$$

where $I$ is a bidirected set, such that the following properties hold:
1) $p(\lambda x, \alpha) = |\lambda|\,p(x, \alpha)$ for any $\lambda\in\mathbb{C}$, $x\in H$, and $\alpha\in I$;
2) $p(x + y, \alpha)\le p(x, \alpha) + p(y, \alpha)$ for any $x, y\in H$ and $\alpha\in I$;
3) $p(x, \alpha)\le p(x, \beta)$ whenever $\alpha > \beta$, for any $x\in H$;
4) for any $x\in H$ there exists an $\alpha\in I$ such that $p(x, \alpha)$ is finite.

Consider a linear space $H$ equipped with a polynorm $H\times I\to\overline{\mathbb{R}}_+$. Note that property 1) yields $p(0, \alpha) = 0$ for any $\alpha$; property 3) means that $p(x, \alpha)$ is a nonincreasing function of $\alpha$ for any fixed $x$. Properties 1) and 2) imply that for each $\alpha$ the set $H_\alpha\subset H$ of elements $x$ with finite $p(x, \alpha)$ is a linear space and that $p(\cdot, \alpha)$ is a seminorm on $H_\alpha$. Hence, $H_\alpha$ can be considered as a topological linear space with topology determined by the seminorm $p(\cdot, \alpha)$. As follows from property 3), there is a continuous embedding $H_\beta\subset H_\alpha$ for any $\alpha > \beta$.

Let $x\in H$. It follows from property 4) that the subset $I(x)\subset I$ of indices $\alpha$ such that $p(x, \alpha)$ is finite is nonempty. By property 3), if $\alpha_0\in I(x)$, then $\alpha\in I(x)$ for any $\alpha > \alpha_0$, so that $I(x)$ is a section of $I$ in the sense of Definition IV.4. $I(x)$ will be referred to as the finiteness set for $x$.

Let $\{x_1, \ldots, x_n\}$ be a finite subset of $H$. The intersection $I(x_1)\cap\cdots\cap I(x_n)$ is nonempty; indeed, if $\alpha_1\in I(x_1)$ and $\alpha_2\in I(x_2)$, then there exists a $\gamma\in I$ such that $\gamma > \alpha_1$ and $\gamma > \alpha_2$; all such $\gamma$ belong to $I(x_1)\cap I(x_2)$, since $I(x_1)$ and $I(x_2)$ are sections of $I$; induction over $n$ accomplishes the proof. Moreover, we see that $I(x_1)\cap\cdots\cap I(x_n)$ is a section of $I$. The finite intersections of the sets $I(x)$, $x\in H$, thus clearly form a filter base for some filter $\mathcal{A}_0$ in $I$; namely, $\mathcal{A}_0$ contains a subset $A$ if and only if $A\supset I(x_1)\cap\cdots\cap I(x_n)$ for some $x_1, \ldots, x_n\in H$. The filter $\mathcal{A}_0$ is said to be a filter of sections of $I$, since it has a base consisting of sections.
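A toy model may help to digest Definition IV.5. In the sketch below (our own illustration, not from the text) the index set is $I = \mathbb{Z}$ and $p(f, m) = \sup_{y\ge 0}|f(y)|(1+y)^{-m}$, approximated on a finite grid; this anticipates the symbol spaces of Example IV.1. Properties 1)–4) are spot-checked:

```python
def polynorm(f, m, Y=200.0, steps=4000):
    # grid approximation of p(f, m) = sup_{0 <= y <= Y} |f(y)| / (1+y)^m
    return max(abs(f(i * Y / steps)) / (1.0 + i * Y / steps) ** m
               for i in range(steps + 1))

f = lambda y: y * y          # grows like y^2
g = lambda y: 3.0 * y + 1.0

# property 1): absolute homogeneity
assert abs(polynorm(lambda y: -2.0 * f(y), 2) - 2.0 * polynorm(f, 2)) < 1e-9
# property 2): triangle inequality at each fixed index
assert polynorm(lambda y: f(y) + g(y), 2) <= polynorm(f, 2) + polynorm(g, 2) + 1e-9
# property 3): p(f, m) is nonincreasing in the index m
assert polynorm(f, 3) <= polynorm(f, 2)
# property 4): p(f, m) is finite for m >= 2, while for m = 1 the truncated sup
# keeps growing with the window, so the index 1 lies outside the finiteness set I(f)
print(polynorm(f, 1, Y=10000.0) > polynorm(f, 1, Y=100.0))
```

Here the finiteness set of $f(y) = y^2$ is the section $\{m\ge 2\}$ of $\mathbb{Z}$, exactly as in the discussion above.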
Let $\mathcal{A}$ be an arbitrary filter of sections of $I$ such that $\mathcal{A}\supset\mathcal{A}_0$ (in this case one says that $\mathcal{A}$ majorizes $\mathcal{A}_0$; we do not exclude the case $\mathcal{A} = \mathcal{A}_0$). Then the following representation is valid:

$$H = \bigcup_{A\in\mathcal{A}}\Bigl(\bigcap_{\alpha\in A} H_\alpha\Bigr). \tag{IV.2}$$

Indeed, to verify this equation we need only prove that for any $x\in H$ one can find a set $A\in\mathcal{A}$ such that $x\in H_\alpha$ for all $\alpha\in A$, or, equivalently, $p(x, \alpha)$ is finite for any $\alpha\in A$. Since $\mathcal{A}\supset\mathcal{A}_0$, it suffices to find such a set in $\mathcal{A}_0$. This, however, is trivial, because $\mathcal{A}_0$ contains $I(x)$ and we can set $A = I(x)$. Denote

$$H_A = \bigcap_{\alpha\in A} H_\alpha.$$
Each $H_A$ is a locally convex space with respect to the topology determined by the system of seminorms $\{p(\cdot, \alpha)\}_{\alpha\in A}$. We introduce a convergence in $H$ as follows. A generalized sequence $\{x_\mu\}_{\mu\in\Gamma}$ in $H$ is said to be convergent to zero in $H$ if there exist an $A\in\mathcal{A}$ and a $\mu_0\in\Gamma$ such that $x_\mu\in H_A$ for all $\mu > \mu_0$ and $x_\mu$ converges to zero in $H_A$ (in other words, $p(x_\mu, \alpha)\to 0$ for each $\alpha\in A$). Accordingly, we say that a generalized sequence $\{x_\mu\}_{\mu\in\Gamma}$ in $H$ converges to $x\in H$ if $x_\mu - x\to 0$; that is, for some $A\in\mathcal{A}$ and $\mu_0\in\Gamma$ one has $x\in H_A$, $x_\mu\in H_A$ for $\mu > \mu_0$, and $x_\mu\to x$ in $H_A$.
Definition IV.6 The space $H$ equipped with the convergence described above is called a polynormed space over the filter $\mathcal{A}$.

Clearly, the linear operations on $H$ are continuous with respect to the convergence introduced.

The usual method of constructing polynormed spaces in applications is as follows. Suppose that $I$ is a bidirected set, and let a family $\{V_\alpha\}_{\alpha\in I}$ of linear spaces be given such that $V_\alpha\subset V_\beta$ whenever $\alpha < \beta$. Moreover, suppose that each $V_\alpha$ is equipped with a seminorm $p_\alpha(\cdot)$ in such a way that all these embeddings are continuous with norm $\le 1$, i.e., $p_\beta(x)\le p_\alpha(x)$ for $\alpha < \beta$ and $x\in V_\alpha$. Now let $\mathcal{A}$ be an arbitrary filter of sections of $I$. Set

$$H = \bigcup_{A\in\mathcal{A}}\Bigl(\bigcap_{\alpha\in A} V_\alpha\Bigr)$$

(note that this formula, though it is quite like (IV.2), is, in fact, essentially different, since $V_\alpha\ne H_\alpha$ in general). For any $x\in H$ we define the finiteness set $I(x)$ by the formula

$$I(x) = \Bigl\{\alpha\in I \ \Big|\ \alpha\in A \ \text{and}\ x\in\bigcap_{\gamma\in A} V_\gamma \ \text{for some}\ A\in\mathcal{A}\Bigr\}$$

and the polynorm

$$p(x, \alpha) = \begin{cases} p_\alpha(x), & \alpha\in I(x),\\ +\infty, & \text{otherwise} \end{cases}$$

(one should resist the temptation to define $p(x, \alpha) = p_\alpha(x)$ for all $x\in V_\alpha\cap H$, since then it could not be guaranteed that $\mathcal{A}$ majorizes the filter $\mathcal{A}_0$ generated by the finiteness sets). Such a definition makes $H$ into a polynormed space over $\mathcal{A}$, and representation (IV.2) is valid with

$$H_\alpha = \{x\in V_\alpha\cap H \mid \alpha\in I(x)\}.$$

Indeed, $H_\alpha$ is clearly a linear space; properties 1)–4) in Definition IV.5 are obviously satisfied; and $H_\alpha$ is the space of elements $x\in H$ for which $p(x, \alpha)$ is finite. It remains to show that $\mathcal{A}$ majorizes $\mathcal{A}_0$. By the definition of a filter, it suffices to prove
that $I(x)\in\mathcal{A}$ for each $x\in H$. But this is evident, since the definition of $I(x)$ can be rewritten as follows:

$$I(x) = \bigcup_{A\in\mathcal{A}\,:\; x\in\bigcap_{\alpha\in A} V_\alpha} A.$$

Thus, $I(x)$ contains a subset belonging to $\mathcal{A}$ and therefore belongs to $\mathcal{A}$ (recall that $\mathcal{A}$ is a filter).

Let us now consider two examples of polynormed spaces. These examples play a crucial role in noncommutative analysis and its applications to differential equations.
Example IV.1 Let $S_k^m(\mathbb{R}^n)$ denote the space of $C^k$-functions

$$f(y) = f(y_1, \ldots, y_n)$$

on $\mathbb{R}^n$ satisfying the estimate

$$\|f\|_{m,k} = \sup_{y\in\mathbb{R}^n} (1+|y|)^{-m}\sum_{|\alpha|=0}^{k} |f^{(\alpha)}(y)| < \infty$$

(here $\alpha = (\alpha_1, \ldots, \alpha_n)$ is a multi-index, $f^{(\alpha)}(y) = \partial^{|\alpha|} f(y)/\partial y^\alpha$). Consider the following partial order on the set $\mathbb{Z}\times\mathbb{Z}_+\ni (m, k)$:

$$(m, k)\le (m', k') \quad\text{if and only if}\quad m\le m' \ \text{and}\ k\ge k'.$$

Clearly, if $(m, k)\le (m', k')$, then

$$S_k^m(\mathbb{R}^n)\subset S_{k'}^{m'}(\mathbb{R}^n) \quad\text{and}\quad \|f\|_{m,k}\ge \|f\|_{m',k'}, \qquad f\in S_k^m(\mathbb{R}^n),$$

so that the system of spaces $S_k^m(\mathbb{R}^n)$ equipped with the seminorms $\|\cdot\|_{m,k}$ satisfies the assumptions used in the construction above. Next, let $\mathcal{A}$ be the filter with filter base formed by the subsets

$$I_{m_0} = \{(m, k)\mid m\ge m_0,\ k\in\mathbb{Z}_+\}\subset\mathbb{Z}\times\mathbb{Z}_+,$$

where $m_0$ ranges over $\mathbb{Z}$. Clearly, each $I_{m_0}$ is a section of $\mathbb{Z}\times\mathbb{Z}_+$, so that $\mathcal{A}$ is a filter of sections. The corresponding polynormed space is denoted by $S^\infty(\mathbb{R}^n)$ and defined by

$$S^\infty(\mathbb{R}^n) = \bigcup_{A\in\mathcal{A}}\Bigl(\bigcap_{(m,k)\in A} S_k^m(\mathbb{R}^n)\Bigr) = \bigcup_{m_0\in\mathbb{Z}}\Bigl(\bigcap_{(m,k)\in I_{m_0}} S_k^m(\mathbb{R}^n)\Bigr) = \bigcup_{m_0\in\mathbb{Z}}\Bigl(\bigcap_{k\in\mathbb{Z}_+} S_k^{m_0}(\mathbb{R}^n)\Bigr)$$
(the union over all elements of the filter can clearly be replaced by the union over the filter base, and the intersection over all $(m, k)$ with $m\ge m_0$ can be replaced by the intersection over all $k$ with $m = m_0$, owing to the cited embeddings of the spaces $S_k^m(\mathbb{R}^n)$). Furthermore, the spaces $S^m(\mathbb{R}^n) = \bigcap_{k\in\mathbb{Z}_+} S_k^m(\mathbb{R}^n)$ form an increasing sequence, and it is easy to see that for any $f\in S^m(\mathbb{R}^n)$ the finiteness set has the form

$$I(f) = \{(l, k)\mid l\ge m,\ k\in\mathbb{Z}_+\}.$$

Hence, the convergence on $S^\infty(\mathbb{R}^n)$ can be described as follows. A generalized sequence $\{f_\mu\}_{\mu\in\Gamma}$ converges to $f\in S^\infty(\mathbb{R}^n)$ if there exist a $\mu_0\in\Gamma$ and an $m\in\mathbb{Z}$ such that $f_\mu\in S^m(\mathbb{R}^n)$ for $\mu > \mu_0$ and $f - f_\mu\to 0$ in $S^m(\mathbb{R}^n)$, that is,

$$\|f - f_\mu\|_{m,k} = \sup_{y\in\mathbb{R}^n}(1+|y|)^{-m}\sum_{|\alpha|=0}^{k}|f^{(\alpha)}(y) - f_\mu^{(\alpha)}(y)|\to 0, \qquad k = 0, 1, 2, \ldots$$

In other words, starting from some index $\mu_0$, the functions $f_\mu$ and all their derivatives grow at infinity no faster than $(1+|y|)^m$, and $f_\mu/(1+|y|)^m\to f/(1+|y|)^m$ with all derivatives in the uniform metric.
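The seminorms of Example IV.1 can be approximated on a grid. A minimal sketch for $n = 1$ (our own illustration; the derivatives of $f(y) = y\sin y$ are hard-coded):

```python
import math

def seminorm(derivs, m, k, Y=300.0, steps=6000):
    # grid approximation of ||f||_{m,k} = sup_y (1+|y|)^{-m} sum_{j<=k} |f^(j)(y)|;
    # derivs[j] is a callable for the j-th derivative of f
    best = 0.0
    for i in range(steps + 1):
        y = -Y + 2.0 * Y * i / steps
        total = sum(abs(derivs[j](y)) for j in range(k + 1))
        best = max(best, total / (1.0 + abs(y)) ** m)
    return best

# f(y) = y*sin(y): f and all its derivatives grow no faster than (1+|y|)^1
f_derivs = [
    lambda y: y * math.sin(y),
    lambda y: math.sin(y) + y * math.cos(y),
    lambda y: 2.0 * math.cos(y) - y * math.sin(y),
]

# (m, k) = (1, 2) lies in the finiteness set I(f): the seminorm is finite (< 4)
print(seminorm(f_derivs, m=1, k=2) < 4.0)
# (0, 0) does not: the truncated sup grows without bound as the window widens
print(seminorm(f_derivs, m=0, k=0, Y=3000.0) > seminorm(f_derivs, m=0, k=0, Y=300.0))
```

Thus $f\in S^1(\mathbb{R})$ but $f\notin S^0(\mathbb{R})$, matching the description of $I(f)$ above.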
Example IV.2 Let $K\subset\mathbb{C}^n$ be a compact set. For any neighborhood $U$ of $K$ in $\mathbb{C}^n$ denote by $\mathcal{O}_U$ the locally convex space of holomorphic functions on $U$ with the topology of uniform convergence on compact subsets of $U$. Next, denote by $\mathcal{O}_K$ the space

$$\mathcal{O}_K = \bigcup_U \mathcal{O}_U,$$

where the union is taken over all open sets $U$ containing $K$. The space $\mathcal{O}_K$ can be interpreted as a polynormed space in the following manner. Denote by $I$ the set of pairs $(U, L)$, where $U$ is an open neighborhood of $K$ and $L$ is a compact subset of $U$. We write $(U, L)\le (U', L')$ if $U'\subset U$ and $L'\subset L$. Clearly, $I$ is a poset. We denote by $\mathcal{O}_{(U, L)}$ the space $\mathcal{O}_U$ equipped with the seminorm

$$\|f\|_{(U, L)} = \sup_{z\in L}|f(z)|;$$

then, for $(U, L)\le (U', L')$ we have $\mathcal{O}_{(U, L)}\subset\mathcal{O}_{(U', L')}$ (the embedding is given by the restriction from $U$ to $U'$) and

$$\|f\|_{(U', L')} = \sup_{z\in L'}|f(z)|\le \sup_{z\in L}|f(z)| = \|f\|_{(U, L)}.$$

Next, we take the filter $\mathcal{A}$ with base formed by the sets $\{(U, L)\}$ with $U$ fixed and $L$ arbitrary. We may write

$$\mathcal{O}_K = \bigcup_U\Bigl(\bigcap_{L\subset U} \mathcal{O}_{(U, L)}\Bigr)$$
and define the convergence in $\mathcal{O}_K$ by the following condition: a generalized sequence $f_\mu$ converges to zero in $\mathcal{O}_K$ if there exist a $\mu_0$ and a neighborhood $U$ of $K$ such that for all $\mu > \mu_0$ we have $f_\mu\in\mathcal{O}_U$ and $f_\mu$ converges to zero uniformly on compact subsets of $U$.

A polynormed space $H$ is called a Hausdorff space if each generalized sequence in $H$ has at most one limit.

Let $\{x_\mu\}_{\mu\in\Gamma}$ be a generalized sequence in a polynormed space $H$. Consider the set $\Gamma\times\Gamma$ with the partial order given by

$$(\mu, \nu)\le (\mu', \nu') \iff \mu\le\mu' \ \text{and}\ \nu\le\nu'.$$

Clearly, $\Gamma\times\Gamma$ is a directed set. Consider the generalized sequence $\{x_\mu - x_\nu\}_{(\mu,\nu)\in\Gamma\times\Gamma}$. One says that $\{x_\mu\}$ is a Cauchy sequence in $H$ if the sequence $x_\mu - x_\nu$ converges to zero.
Definition IV.7 A polynormed space $H$ is called a complete polynormed space (or a poly-Banach space) if $H$ is a Hausdorff space and each Cauchy sequence in $H$ is convergent.

In particular, if all $H_A$, $A\in\mathcal{A}$, are complete Hausdorff locally convex spaces, as is the case in Examples IV.1 and IV.2, then $H$ is a poly-Banach space.

Suppose that a polynormed space $H$ is not complete. Under certain conditions it is possible to define the completion of $H$, which is a poly-Banach space. These conditions are summarized in the following statement.

Lemma IV.1 Let $H$ be a Hausdorff polynormed space with the following property: if $A, B\in\mathcal{A}$, $A\subset B$, $\{x_\mu\}_{\mu\in\Gamma}$ is a Cauchy sequence in $H_B$, and $x_\mu\to 0$ in $H_A$, then $x_\mu\to 0$ in $H_B$. Then the completion of $H$ exists and is a poly-Banach space.

We omit the proof but mention the important particular case in which the polynormed space

$$H = \bigcup_{A\in\mathcal{A}}\Bigl(\bigcap_{\alpha\in A} H_\alpha\Bigr)$$

satisfies the following property: for each $\alpha\in I$ the seminorm $p(\cdot, \alpha)$ is a norm on $H_\alpha$, and for any $\alpha < \beta$ in $I$ the embedding $H_\alpha\subset H_\beta$ is consistent, i.e., if a generalized Cauchy sequence $x_\mu\in H_\alpha$ converges to zero in $H_\beta$, then it converges to zero in $H_\alpha$.

In this case the completion $\hat H_\alpha$ of each $H_\alpha$ is a Banach space, there are continuous embeddings $\hat H_\alpha\subset\hat H_\beta$ for any $\alpha < \beta$, and the completion of $H$ is

$$\hat H = \bigcup_{A\in\mathcal{A}}\Bigl(\bigcap_{\alpha\in A}\hat H_\alpha\Bigr)$$
with the natural convergence: $x_\mu\to x$ in $\hat H$ if, for some $A\in\mathcal{A}$ and some $\mu_0$, we have $x_\mu\in\bigcap_{\alpha\in A}\hat H_\alpha$ for $\mu > \mu_0$ and $x_\mu\to x$ in each $\hat H_\alpha$, $\alpha\in A$.

We now pass to the consideration of continuous linear mappings of polynormed spaces. Let

$$H = \bigcup_{A\in\mathcal{A}}\Bigl(\bigcap_{\alpha\in A} H_\alpha\Bigr), \qquad G = \bigcup_{B\in\mathcal{B}}\Bigl(\bigcap_{\beta\in B} G_\beta\Bigr)$$

be polynormed spaces indexed by $I$ and $J$ and equipped with polynorms $p(\cdot, \alpha)$ and $q(\cdot, \beta)$, respectively.
Definition IV.8 A linear mapping $\Psi\colon H\to G$ is said to be continuous if it takes convergent generalized sequences into convergent generalized sequences.

Clearly, $\Psi$ is continuous if for any $A\in\mathcal{A}$ it maps the locally convex space $H_A$ continuously into some $G_B$. Furthermore, the mapping $\Psi\colon H_A\to G_B$ is continuous if and only if for any seminorm $q(\cdot, \beta)$, $\beta\in B$, on $G_B$ there exist a seminorm $p(\cdot, \alpha)$, $\alpha\in A$, on $H_A$ and a constant $C$ such that

$$q(\Psi(x), \beta)\le C\,p(x, \alpha), \qquad x\in H_A.$$

Hence, the set $\mathcal{L}(H, G)$ of all continuous mappings from $H$ to $G$ can be described as

$$\mathcal{L}(H, G) = \bigcap_{A\in\mathcal{A}}\bigcup_{B\in\mathcal{B}}\bigcap_{\beta\in B}\bigcup_{\alpha\in A} L(H_\alpha, G_\beta), \tag{IV.3}$$

where $L(H_\alpha, G_\beta)$ is the space of continuous mappings from $H_\alpha$ to $G_\beta$. We introduce a seminorm in $L(H_\alpha, G_\beta)$ by setting

$$p_{\alpha\beta}(\Psi) = \sup_{x\in H_\alpha,\ p(x, \alpha)\le 1} q(\Psi(x), \beta).$$

It turns out that $\mathcal{L}(H, G)$ possesses a natural structure of polynormed space. Consider the indexing set $I'\times J$ (where, as above, $I'$ is the set $I$ equipped with the reverse order). By definition, $(\alpha, \beta)\le(\alpha_1, \beta_1)$ if and only if $\alpha\ge\alpha_1$ and $\beta\le\beta_1$. Clearly, $p_{\alpha\beta}(\Psi)\ge p_{\alpha_1\beta_1}(\Psi)$ whenever $(\alpha, \beta)\le(\alpha_1, \beta_1)$, so that $p_{\alpha\beta}$ defines a polynorm on $\mathcal{L}(H, G)$. It remains to reduce the number of intersection and union signs in (IV.3) so as to give the "canonical" representation of $\mathcal{L}(H, G)$ as a polynormed space. Let $\Omega$ be the
filter of sections of $I'\times J$ defined as follows. For any $A\in\mathcal{A}$ and $B\in\mathcal{B}$, denote by $K(A, B)$ the set of all sections $C$ of $I'\times J$ such that for any $\beta\in B$ there exists an $\alpha\in A$ with $(\alpha, \beta)\in C$. The filter $\Omega$ is generated by the family

$$\bigcup_{A\in\mathcal{A},\ B\in\mathcal{B}} K(A, B).$$

Since the filter $\Omega$ majorizes the filter in $I'\times J$ generated by the finiteness sets for the polynorm $p_{\alpha\beta}$, we have

$$\mathcal{L}(H, G) = \bigcup_{K\in\Omega}\ \bigcap_{(\alpha, \beta)\in K} L(H_\alpha, G_\beta),$$

and $\mathcal{L}(H, G)$ is a polynormed space over the filter $\Omega$. It can be proved that $\mathcal{L}(H, G)$ is a Hausdorff space whenever $G$ is so, and that if $G$ is a poly-Banach space, then so is $\mathcal{L}(H, G)$. In what follows we always consider $\mathcal{L}(H, G)$ endowed with the structure of a poly-Banach space defined above.

Let us now consider the particular case in which $H = G$. We will denote $\mathcal{L}(H, H) = \mathcal{L}(H)$. The polynormed space $\mathcal{L}(H)$ is equipped with the natural multiplication defined by the composition of mappings. We have

$$p_{\alpha\beta}(\Psi\circ\Phi)\le p_{\alpha\gamma}(\Phi)\,p_{\gamma\beta}(\Psi)$$

whenever the right-hand side is finite. This suggests that the multiplication is continuous on $\mathcal{L}(H)\times\mathcal{L}(H)$, which is indeed true. In fact, for any $K_1, K_2\in\Omega$ we can find a $K\in\Omega$ such that the multiplication is continuous from $\mathcal{L}(H)_{K_1}\times\mathcal{L}(H)_{K_2}$ to $\mathcal{L}(H)_K$; that is, for any $\delta = (\alpha, \beta)\in K$ there exist $\delta_1 = (\alpha_1, \beta_1)\in K_1$ and $\delta_2 = (\alpha_2, \beta_2)\in K_2$ such that

$$p_\delta(\Psi\circ\Phi)\le \mathrm{const}\cdot p_{\delta_1}(\Phi)\,p_{\delta_2}(\Psi)$$

(the constant, of course, can be taken equal to $1$). Furthermore, $\mathcal{L}(H)$ contains the identity element, represented by the identity mapping $1$ of $H$, and $1$ is separated from $0$ in the sense that $p_\gamma(1)\ne 0$ for some $\gamma\in I'\times J$ (it suffices to choose $\gamma$ of the form $\gamma = (\alpha, \alpha)$). We obtain the definition of a polynormed algebra by taking the abstract version of these properties.
Definition IV.9 An (associative) polynormed algebra (with $1$) is a polynormed space $\mathcal{A}$ over a filter $\Lambda$, equipped with a bilinear associative operation (multiplication) such that

(a) the multiplication is continuous, i.e., for any $A_1, A_2\in\Lambda$ there exists an $A\in\Lambda$ such that the multiplication is continuous as a mapping

$$\mathcal{A}_{A_1}\times\mathcal{A}_{A_2}\to\mathcal{A}_A;$$

in other words, for any $\alpha\in A$ there exist $\alpha_i\in A_i$, $i = 1, 2$, such that

$$p(a_1 a_2, \alpha)\le p(a_1, \alpha_1)\,p(a_2, \alpha_2)$$
for any $a_1$ and $a_2$;

(b) $\mathcal{A}$ is an algebra with $1$;

(c) $1$ is separated from $0$ by the polynorm, i.e., $p(1, \alpha) > 0$ for some $\alpha$.

If a polynormed algebra is Hausdorff and complete, then it is called a poly-Banach algebra. Thus, if $H$ is a poly-Banach space, then $\mathcal{L}(H)$ is a poly-Banach algebra. The following lemma is obvious.
Lemma IV.2 Let $\mathcal{A}$ be a polynormed algebra. The mappings

$$L\colon \mathcal{A}\to\mathcal{L}(\mathcal{A}),\quad A\mapsto L_A, \qquad\text{and}\qquad R\colon \mathcal{A}\to\mathcal{L}(\mathcal{A}),\quad A\mapsto R_A,$$

where $L_A$ and $R_A$ are the operators of left and right multiplication by $A$ in $\mathcal{A}$, are a continuous homomorphism and a continuous antihomomorphism of polynormed algebras, respectively.

In particular, each polynormed algebra can be realized as an algebra of continuous endomorphisms of a polynormed space; such a realization is given by the left regular representation $A\mapsto L_A$, which is faithful, since $L_A(1) = A\cdot 1 = A\ne 0$ if $A\ne 0$.
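The left regular representation of Lemma IV.2 can be made completely concrete in the simplest finite-dimensional situation. The sketch below (our own toy example) realizes the truncated polynomial algebra $\mathbb{R}[t]/(t^N)$ by left-multiplication operators and checks that $A\mapsto L_A$ is a homomorphism that separates elements:

```python
N = 4  # work in the truncated polynomial algebra R[t]/(t^N)

def mul(a, b):
    # product in R[t]/(t^N): truncated convolution of coefficient lists
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

basis = [[1 if k == j else 0 for k in range(N)] for j in range(N)]

a = [1, 2, 0, 3]
b = [2, 0, 1, 0]

# homomorphism property L_{ab} = L_a L_b, checked column by column:
# L_{ab} e_j = (ab) e_j must equal L_a(L_b e_j) = a (b e_j)
print(all(mul(mul(a, b), e) == mul(a, mul(b, e)) for e in basis))
# faithfulness: L_a(1) = a, so L_a determines (and separates) a
print(mul(a, basis[0]) == a)
```

The columns of the matrix of $L_a$ in the basis $1, t, \ldots, t^{N-1}$ are exactly the products $a\cdot e_j$ computed here.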
Example IV.3 The polynormed spaces $S^\infty(\mathbb{R}^n)$ and $\mathcal{O}_K$ described in Examples IV.1 and IV.2 are poly-Banach algebras with respect to pointwise multiplication.

Remark IV.1 The convergence of a polynormed space $H$ is defined by the triple $(I, p, \mathcal{A})$, where $I$ is the indexing set, $p$ the polynorm, and $\mathcal{A}$ the filter involved in Definition IV.6. However, different triples may define the same convergence on $H$ (in this case the identity mapping of $H$ is two-sided continuous between these convergences). More generally, we can consider two polynormed spaces $H_1$ and $H_2$ with a continuous, continuously invertible linear mapping $i\colon H_1\to H_2$. In this case we say that $H_1$ and $H_2$ are equivalent and do not distinguish between them.
1.3 Tensor Products

Let $H_1$ and $H_2$ be poly-Banach spaces over filters $\mathcal{A}_1$ and $\mathcal{A}_2$, with polynorms $p_1$ and $p_2$ and indexing sets $I_1$ and $I_2$, respectively. We intend to define the tensor product of $H_1$ and $H_2$ in the category of poly-Banach spaces. Following the usual practice, we define the tensor product by the universal mapping property.
Definition IV.10 The tensor product of the poly-Banach spaces $H_1$ and $H_2$ is the poly-Banach space $H_1\hat\otimes H_2$ such that for any poly-Banach space $H$ and any continuous bilinear mapping $r\colon H_1\times H_2\to H$ there exists a unique mapping $\tilde r\colon H_1\hat\otimes H_2\to H$ such that the diagram

$$\begin{array}{ccc} H_1\times H_2 & \overset{j}{\longrightarrow} & H_1\hat\otimes H_2\\ & \underset{r}{\searrow} & \big\downarrow\tilde r\\ & & H \end{array}$$

commutes. Here $j$ is a predefined continuous bilinear mapping.²

Theorem IV.1 The tensor product of poly-Banach spaces is well-defined; that is, it always exists and is unique up to a uniquely determined equivalence (see Remark IV.1). (The explicit structure of $H_1\hat\otimes H_2$ will be given below.)

Remark IV.2 The tensor product thus defined is usually referred to as the projective tensor product of $H_1$ and $H_2$.

Proof. Uniqueness. Suppose that $H_1\hat\otimes H_2$ and $H_1\tilde\otimes H_2$ are two different tensor products of $H_1$ and $H_2$. According to Definition IV.10, there are unique morphisms $f$ and $f'$ such that the diagrams

$$\begin{array}{ccc} H_1\times H_2 & \longrightarrow & H_1\hat\otimes H_2\\ & \searrow & \big\downarrow f\\ & & H_1\tilde\otimes H_2 \end{array} \qquad\text{and}\qquad \begin{array}{ccc} H_1\times H_2 & \longrightarrow & H_1\tilde\otimes H_2\\ & \searrow & \big\downarrow f'\\ & & H_1\hat\otimes H_2 \end{array}$$

² Purists would probably object to our definition by saying that $j$ should be included in the notion of the tensor product, i.e., that the tensor product of $H_1$ and $H_2$ is the pair $(H_1\hat\otimes H_2,\ j\colon H_1\times H_2\to H_1\hat\otimes H_2)$.
commute. Then so does the diagram

$$\begin{array}{ccc} H_1\times H_2 & \longrightarrow & H_1\hat\otimes H_2\\ & \searrow & \big\downarrow f'\circ f\\ & & H_1\hat\otimes H_2 \end{array}$$

Since the mapping completing this diagram is unique, we must have $f'\circ f = \mathrm{id}$. Similarly, $f\circ f' = \mathrm{id}$; that is, $f$ is an isomorphism.

Existence. Consider the algebraic tensor product $H_1\otimes H_2$. Clearly,

$$H_1\otimes H_2 = \bigcup_{A\in\mathcal{A}_1,\ B\in\mathcal{A}_2}\Bigl\{\Bigl(\bigcap_{\alpha\in A} H_{1\alpha}\Bigr)\otimes\Bigl(\bigcap_{\beta\in B} H_{2\beta}\Bigr)\Bigr\} = \bigcup_{(A, B)\in\mathcal{A}_1\times\mathcal{A}_2}\ \bigcap_{(\alpha, \beta)\in A\times B} H_{1\alpha}\otimes H_{2\beta}.$$

We equip $I_1\times I_2$ with the product partial order and introduce a polynorm $p_{\alpha\beta}(\varphi)$, $(\alpha, \beta)\in I_1\times I_2$, on $H_1\otimes H_2$ by setting

$$p_{\alpha\beta}(\varphi) = \inf\sum_i \|f_i\|_\alpha\|g_i\|_\beta,$$

where $\varphi\in H_{1\alpha}\otimes H_{2\beta}$ and the infimum is taken over all finite representations

$$\varphi = \sum_{i=1}^{N} f_i\otimes g_i$$

with $f_i\in H_{1\alpha}$ and $g_i\in H_{2\beta}$ (here $N$ is not fixed). The space $H_1\otimes H_2$ is polynormed over the filter $\mathcal{A}_1\times\mathcal{A}_2$; its completion is a poly-Banach space, which will be denoted by $H_1\hat\otimes H_2$. Let us prove that this space is the tensor product of $H_1$ and $H_2$.

Suppose that $H = \bigcup_{A\in\mathcal{A}}\bigl(\bigcap_{\alpha\in A} H_\alpha\bigr)$ is a poly-Banach space and

$$\mu\colon H_1\times H_2\to H$$

is a continuous bilinear mapping. By the properties of the algebraic tensor product, there is a unique mapping $\tilde\mu\colon H_1\otimes H_2\to H$ such that $\mu = \tilde\mu\circ j$, where $j\colon H_1\times H_2\to H_1\otimes H_2$ is the natural embedding,

$$j(x, y) = x\otimes y.$$

Since $H_1\hat\otimes H_2$ is the completion of $H_1\otimes H_2$, the mapping $\tilde\mu$ can be extended to $H_1\hat\otimes H_2$ uniquely, provided that $\tilde\mu$ is continuous. Thus, it remains to prove the continuity of $\tilde\mu$.
Since $\mu$ is continuous, it is continuous in the spaces

$$\mu\colon H_{1\alpha}\times H_{2\beta}\to H_\gamma$$

for certain triples $(\alpha, \beta, \gamma)$; the structure of the set of such triples agrees with the filters $\mathcal{A}_1$, $\mathcal{A}_2$, and $\mathcal{A}$. Hence it suffices to prove that $\tilde\mu$ is continuous in the normed spaces

$$\tilde\mu\colon H_{1\alpha}\otimes H_{2\beta}\to H_\gamma$$

for the same triples $(\alpha, \beta, \gamma)$. But the proof of this fact is essentially that used for Banach spaces (see the discussion at the beginning of Subsection 1.1). Thus, the theorem is proved. □

In a similar way one can define tensor products $H_1\hat\otimes H_2\hat\otimes\cdots\hat\otimes H_n$ of $n > 2$ poly-Banach spaces. This definition involves no new ideas and is therefore omitted.

Remark IV.3 If $H_1$ and $H_2$ are poly-Banach algebras, then $H_1\hat\otimes H_2$ is also a poly-Banach algebra.
2 Symbol Spaces and Generators

Symbol spaces and their properties are a topic of central interest in noncommutative analysis. They were widely used throughout the exposition, but at an intuitive level: we appealed to the notion of convergence in symbol spaces without stating exactly what was meant. Now that we are aware of the notion of poly-Banach algebras, we are in a position to give precise definitions.
2.1 Definitions

The main properties of symbol spaces are those used in Definitions 1.1 and 1.2 and in Theorems 1.1, 1.4, and 1.5. Here we collect these properties and express them in the language of poly-Banach spaces, which yields the desired definition.

Let $Z\subset\mathbb{C}$ be a subset without isolated points, and let $\mathcal{F}$ be a linear subspace (over $\mathbb{C}$) of the space of $\mathbb{C}$-valued continuous functions on $Z$. We impose some conditions on $\mathcal{F}$.

Condition A. The space $\mathcal{F}$ is a poly-Banach algebra with $1$ with respect to pointwise multiplication. That is, $\mathcal{F}$ is equipped with a polynorm and is Hausdorff and complete with respect to this polynorm; moreover, the product of any two functions in $\mathcal{F}$ belongs to $\mathcal{F}$, the multiplication is continuous, and the function identically equal to one on $Z$ belongs to $\mathcal{F}$.
Condition B. The function $f(x) = x$, $x\in Z$, belongs to $\mathcal{F}$.

Condition C. The poly-Banach convergence in $\mathcal{F}$ implies (i.e., is stronger than) locally uniform convergence on $Z$.

Consider the space $F_Z$ of all continuous functions $f\colon Z\to\mathbb{C}$ (we assume that $Z$ is equipped with the topology induced from $\mathbb{C}$). The space $F_Z$ can be equipped with the structure of a poly-Banach space as follows. Let $I$ be the set of all subsets of $Z$ (the powerset of $Z$), partially ordered by reverse inclusion:

$$\alpha > \beta \iff \alpha\subset\beta, \qquad \alpha, \beta\in I.$$

We set

$$p(f, \alpha) = \sup_{x\in\alpha}|f(x)|, \qquad f\in F_Z.$$

Clearly, $p$ is a polynorm on $F_Z$, and $p(f, \alpha)$ is finite whenever $\alpha$ is a compact subset of $Z$. Let $\mathcal{A}$ be the filter in $I$ determined by the condition that each $A\in\mathcal{A}$ contains all compact subsets of $Z$. Then, since each finiteness set satisfies the same property, it follows that $\mathcal{A}$ majorizes the filter $\mathcal{A}_0$ generated by the finiteness sets, and we can equip $F_Z$ with the structure of a polynormed space over the filter $\mathcal{A}$. It is easy to see that $F_Z$ with this convergence is a poly-Banach space and that the convergence in $F_Z$ is just the locally uniform convergence. Condition C means that the embedding $\mathcal{F}\subset F_Z$ is a continuous mapping.

Let $F_{Z\times Z}$ be the poly-Banach space of all continuous functions on $Z\times Z$ constructed by analogy with $F_Z$. There is a sequence of continuous mappings

$$\mathcal{F}\times\mathcal{F}\to F_Z\times F_Z\to F_{Z\times Z}, \qquad (f, g)\mapsto (f, g)\mapsto f(x)g(y).$$

The composite mapping is bilinear and, passing to the projective tensor product, we obtain the natural mapping $\mathcal{F}\hat\otimes\mathcal{F}\to F_{Z\times Z}$.

Condition D. The natural mapping $\mathcal{F}\hat\otimes\mathcal{F}\to F_{Z\times Z}$ is an embedding.

The meaning of Condition D is that the elements of $\mathcal{F}\hat\otimes\mathcal{F}$ can be unambiguously identified with certain continuous functions on $Z\times Z$. Technically, no extension of this condition is really necessary to construct functions of operators. However, in all practical examples the following condition is satisfied:

Condition D'. The natural mapping

$$\underbrace{\mathcal{F}\hat\otimes\cdots\hat\otimes\mathcal{F}}_{n\ \text{factors}}\to F_{Z\times\cdots\times Z}$$

($n$ copies of $Z$) is an embedding.

Conditions A–D are routine ones. Finally, we introduce the most important condition.
Condition E. For any $f\in\mathcal{F}$ the difference derivative

$$\frac{\delta f}{\delta x}(x, y) = \begin{cases} \dfrac{f(x) - f(y)}{x - y}, & x, y\in Z,\ x\ne y,\\[1.5ex] \lim\limits_{\substack{v\to x\\ v\in Z}}\dfrac{f(v) - f(x)}{v - x}, & x = y\in Z, \end{cases}$$

exists and is continuous. Moreover, $\delta f/\delta x\in\mathcal{F}\hat\otimes\mathcal{F}$ (this assertion makes sense in view of Condition D), and

$$\frac{\delta}{\delta x}\colon \mathcal{F}\to\mathcal{F}\hat\otimes\mathcal{F}$$

is a continuous mapping of poly-Banach algebras.
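Condition E is easy to probe numerically for smooth symbols on $Z = \mathbb{R}$. The sketch below (our own helpers; the diagonal value is approximated by a central difference) also verifies one convenient splitting of the difference quotient of a product, $\delta(fg)(x, y) = \delta f(x, y)\,g(y) + f(x)\,\delta g(x, y)$:

```python
import math

def delta(f, x, y, h=1e-6):
    # difference derivative: (f(x) - f(y))/(x - y) off the diagonal,
    # extended by the derivative f'(x) on the diagonal (central difference here)
    if x != y:
        return (f(x) - f(y)) / (x - y)
    return (f(x + h) - f(x - h)) / (2.0 * h)

# off-diagonal values are plain difference quotients
print(delta(math.sin, 1.0, 0.0))
# on the diagonal, the difference derivative extends continuously to f'(x)
print(abs(delta(math.sin, 1.0, 1.0) - math.cos(1.0)) < 1e-6)

# splitting of the difference quotient of a product:
# delta(fg)(x, y) = delta(f)(x, y) g(y) + f(x) delta(g)(x, y)
f, g = math.sin, math.exp
x, y = 0.7, 0.2
lhs = delta(lambda t: f(t) * g(t), x, y)
rhs = delta(f, x, y) * g(y) + f(x) * delta(g, x, y)
print(abs(lhs - rhs) < 1e-12)
```

The splitting identity follows by adding and subtracting $f(x)g(y)$ in the numerator; it is an elementary algebraic fact, checked here only up to floating-point rounding.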
Definition IV.11 The space $\mathcal{F}$ is called a (proper unary) symbol space if it satisfies Conditions A–E.³

Given a symbol space $\mathcal{F}$, we define the spaces of $n$-ary symbols as the projective tensor products

$$\mathcal{F}_n = \underbrace{\mathcal{F}\hat\otimes\mathcal{F}\hat\otimes\cdots\hat\otimes\mathcal{F}}_{n\ \text{copies}} = \mathcal{F}^{\hat\otimes n}.$$

Each $\mathcal{F}_n$ is a poly-Banach algebra. For $n = 2$, elements of $\mathcal{F}_2$ are naturally interpreted as functions of two arguments $y_1, y_2\in Z$, and the same is true for arbitrary $n$ provided that Condition D' is satisfied.

Let $\mathcal{A}$ be a poly-Banach algebra and $\mathcal{F}$ a symbol space.
Definition IV.12 An element $A\in\mathcal{A}$ is called an $\mathcal{F}$-generator if there exists a continuous homomorphism

$$\mu_A\colon \mathcal{F}\to\mathcal{A}$$

of poly-Banach algebras such that

$$\mu_A(x) = A$$

(here $x$ is the function taking each point $x\in Z$ into $x\in\mathbb{C}$). We denote $\mu_A(f) = f(A)$.
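For a diagonalizable matrix, the homomorphism $\mu_A$ of Definition IV.12 is realized by the usual spectral calculus $f(A) = P\,\mathrm{diag}(f(\lambda_1), f(\lambda_2))\,P^{-1}$. A minimal sketch (the example matrix is our own choice):

```python
import math

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# A = [[2, 1], [0, 3]] has eigenvalues 2, 3 with eigenvectors (1,0), (1,1),
# so A = P diag(2, 3) P^{-1} with:
P = [[1.0, 1.0], [0.0, 1.0]]
P_inv = [[1.0, -1.0], [0.0, 1.0]]

def f_of_A(f):
    # mu_A(f) = f(A) = P diag(f(2), f(3)) P^{-1}
    D = [[f(2.0), 0.0], [0.0, f(3.0)]]
    return mat_mul(mat_mul(P, D), P_inv)

def close(X, Y, tol=1e-9):
    return all(abs(X[i][j] - Y[i][j]) < tol for i in range(2) for j in range(2))

# mu_A(x) = A, as Definition IV.12 requires
print(close(f_of_A(lambda x: x), [[2.0, 1.0], [0.0, 3.0]]))
# mu_A is a homomorphism: (f*g)(A) = f(A) g(A)
print(close(f_of_A(lambda x: math.exp(x) * math.sin(x)),
            mat_mul(f_of_A(math.exp), f_of_A(math.sin))))
```

Note that this toy calculus uses only the values of $f$ on $\sigma(A) = \{2, 3\}$; Remark IV.4 below explains what changes when $A$ has nontrivial Jordan blocks.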
Let $A_1, \ldots, A_n\in\mathcal{A}$ be $\mathcal{F}$-generators. The mapping

$$\mathcal{F}\times\cdots\times\mathcal{F}\to\mathcal{A}, \qquad (f_1, \ldots, f_n)\mapsto f_n(A_n)\cdots f_1(A_1),$$

factors through $\mathcal{F}_n = \mathcal{F}^{\hat\otimes n}$, and we denote the associated mapping by

$$\mu_A\colon \mathcal{F}_n\to\mathcal{A},$$

where $A = (\overset{1}{A}_1, \ldots, \overset{n}{A}_n)$.

³ The words "proper" and "unary" will usually be omitted.
Definition IV.13 A tuple $A = (\overset{1}{A}_1, \ldots, \overset{n}{A}_n)$ of $\mathcal{F}$-generators equipped with Feynman indices is called a Feynman tuple. For any $f\in\mathcal{F}_n$ we define the function of $A$ with symbol $f$ by setting

$$f(A) = f(\overset{1}{A}_1, \ldots, \overset{n}{A}_n) = \mu_A(f).$$

We have reproduced all definitions given in Subsection 1.2 in the rigorous context of polynormed spaces. As to the propositions, lemmas, theorems, corollaries, etc. concerning functions of Feynman tuples, we point out that no one can object to the rigor of the arguments given in their proofs, with the understanding that convergence, continuity, etc. are considered in poly-Banach spaces. For this reason, we do not dwell on these assertions here and leave the subject.
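The ordering convention behind Definition IV.13 (factors with the higher Feynman index are written farther to the left) can be seen already on $2\times 2$ matrices. In the sketch below (our own toy matrices), for an elementary symbol $f(x_1, x_2) = f_1(x_1)f_2(x_2)$ the operator $f(\overset{1}{A}, \overset{2}{B})$ is $f_2(B)f_1(A)$:

```python
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def poly_of(M, coeffs):
    # evaluate the polynomial sum_j c_j x^j at a 2x2 matrix (Horner's scheme)
    R, I = [[0, 0], [0, 0]], [[1, 0], [0, 1]]
    for c in reversed(coeffs):
        R = mat_mul(R, M)
        R = [[R[i][j] + c * I[i][j] for j in range(2)] for i in range(2)]
    return R

A = [[0, 1], [0, 0]]
B = [[1, 0], [1, 1]]           # A and B do not commute

f1_A = poly_of(A, [1, 1])      # f1(x) = 1 + x
f2_B = poly_of(B, [0, 0, 1])   # f2(x) = x^2

# the Feynman-ordered operator for f(x1, x2) = f1(x1) f2(x2):
feynman = mat_mul(f2_B, f1_A)
print(feynman)                          # [[1, 1], [2, 3]]
print(feynman != mat_mul(f1_A, f2_B))   # reversing the order changes the result
```

Since $A$ and $B$ do not commute, the Feynman indices genuinely matter: the same symbol with indices swapped yields a different operator.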
Remark IV.4 Our definitions are flexible enough to cover most of the applications. Nevertheless, there is an important case that does not match the definitions literally. We mean functions of matrices. Indeed, given a matrix $A$, the natural symbol space for functions of $A$ consists of functions on the spectrum $\sigma(A)$ together with their derivatives up to a certain order (depending on the size of the Jordan blocks of $A$). The set $\sigma(A)$ is discrete, so that the derivatives of a symbol cannot be reconstructed from its values but must be specified separately. Hence we can take $Z = \sigma(A)$, but in this case jets should be considered instead of functions (the order of the jet may vary from point to point).

Having finished with the general theory, we proceed to the consideration of concrete symbol classes. The symbol classes most frequently used are $S^\infty$ and $\mathcal{O}_K$. The former is used to define functions of tempered generators and the latter to define functions of bounded operators in a Banach space. We consider $S^\infty$ in some detail and leave the consideration of $\mathcal{O}_K$ as an exercise.
2.2 S^∞ Is a Proper Symbol Space

Theorem IV.2 The poly-Banach space S^∞(R^1) satisfies Conditions A–E and D′ of the preceding subsection. Moreover,

[S^∞(R^1)]^{⊗̂n} ≅ S^∞(R^n),

where ≅ stands for the isomorphism of poly-Banach spaces and is given by the natural interpretation of elements of [S^∞(R^1)]^{⊗̂n} as functions of n variables.

Proof The validity of Conditions A, B, C, and E follows directly from the definition of S^∞(R^1). The main difficulty is, of course, to prove D and D′. We carry out the proof only for n = 2 (Condition D), since the argument for n > 2 (Condition D′) involves no additional ideas. Following the argument used in Subsection 1.3 to prove the equality
IV Functional-Analytic Background of Noncommutative Analysis
264
C"(R 1 )6C" (R 1 ) = Cœ) (T 2 ), we subdivide the proof into three stages expressed by the following lemmas.
Lemma IV.3 There is a natural embedding

j : S^∞(R^1) ⊗ S^∞(R^1) ⊂ S^∞(R^2),

and this embedding is continuous with respect to the projective inf-seminorm on S^∞(R^1) ⊗ S^∞(R^1).

Lemma IV.4 The space S^∞(R^1) ⊗ S^∞(R^1) is dense in S^∞(R^2).

Lemma IV.5 The embedding j has a continuous inverse

j^{-1} : j(S^∞(R^1) ⊗ S^∞(R^1)) → S^∞(R^1) ⊗ S^∞(R^1)

on its range (the range j(S^∞(R^1) ⊗ S^∞(R^1)) is equipped with the polynorm inherited from S^∞(R^2)).

Combining these lemmas, we readily obtain the assertion of the theorem. Indeed, Lemmas IV.4 and IV.5 imply that j^{-1} can be extended by continuity to the entire space S^∞(R^2) (of course, the range of the extension j̄^{-1} lies in the completion S^∞(R^1) ⊗̂ S^∞(R^1) of S^∞(R^1) ⊗ S^∞(R^1)). Furthermore, j extends by continuity to a mapping j̄ : S^∞(R^1) ⊗̂ S^∞(R^1) → S^∞(R^2). Since

j^{-1} j = j j^{-1} = id,

the same is true of their extensions by continuity:

j̄ j̄^{-1} = j̄^{-1} j̄ = id,

so that j̄^{-1} = (j̄)^{-1}. Thus,

j̄ : S^∞(R^1) ⊗̂ S^∞(R^1) → S^∞(R^2)

is an isomorphism of polynormed spaces, and the theorem is proved. It remains to prove Lemmas IV.3–IV.5.

Proof of Lemma IV.3. Obvious, since if the functions f(x) and g(y) grow, together with all their derivatives, not faster than some polynomial, then the same is true of f(x)g(y). □

Proof of Lemma IV.4. As with C^∞(S^1), trigonometric polynomials are dense in S^∞(R^k), k = 1, 2 (and for k > 2 as well). However, the periods of the exponentials are not exhausted by multiples of 2π. Let us prove this assertion. Recall that S^∞(R^k) is the union

S^∞(R^k) = ∪_{m∈Z} S^m(R^k)
of the Fréchet spaces S^m(R^k), with the locally convex topology determined by the system of seminorms

||f||_{m,l} = sup_{y∈R^k} (1+|y|)^{-m} Σ_{|α|≤l} |f^{(α)}(y)|.
We need to prove that any f ∈ S^m(R^k) can be approximated by trigonometric polynomials in the convergence of S^∞(R^k) (not in the topology of S^m(R^k); this is exactly the point where the apparatus of polynormed spaces works). In fact, such an approximation already exists in S^r(R^k), where r = max{m + 1, 1}. Indeed, let φ(y) be a smooth compactly supported function of y ∈ R^k equal to 1 in a neighborhood of the origin. Then the sequence

f_n(y) = φ(y/n) f(y)

of smooth compactly supported functions converges to f(y) in S^r(R^k) as n → ∞, since the norm of the "tails" is killed by the extra factor (1+|y|)^{-1} ≤ n^{-1} in the definition of the norm. Hence, it suffices to obtain trigonometric approximations for smooth compactly supported functions. However, this is trivial: any f ∈ C_0^∞(R^k) can be expanded into the Fourier integral

f(y) = (1/2π)^{k/2} ∫_{R^k} e^{ipy} f̃(p) dp

with smooth rapidly decaying Fourier transform f̃(p); this integral converges absolutely in any S^r(R^k) with r > 0, and the desired approximations can be taken in the form of finite Riemann sums

f(y) ≈ Σ_α c_α e^{i p_α y}.

Take k = 2. Then

e^{i p_α y} = e^{i p_{α1} y_1} e^{i p_{α2} y_2} ∈ S^∞(R^1) ⊗ S^∞(R^1).

The lemma is proved. □
Proof of Lemma IV.5. As was shown just a few lines above, smooth compactly supported functions are dense in S^∞(R^k) for any k and hence in S^∞(R^2) and in S^∞(R^1) ⊗ S^∞(R^1). Consequently, it suffices to check the boundedness of j^{-1} on the subset consisting of finite sums

φ(x, y) = Σ_s f_s(x) g_s(y),   (IV.4)

where f_s, g_s ∈ C_0^∞(R^1). Let φ be such a function. Let us prove that for any m′ > m > 0,

||φ|| := ||φ||_{S_0^{m′}(R^1) ⊗ S_0^{m′}(R^1)} ≤ const ||φ||_{S_2^{m-2}(R^2)} =: const |||φ|||.   (IV.5)

Assuming that (IV.5) is valid, it is easy to prove the inequalities

||φ||_{S_r^{m′}(R^1) ⊗ S_t^{m′}(R^1)} ≤ const ||φ||_{S_{r+t+2}^{m-2}(R^2)}

for all r, t ≥ 0 by induction on r + t. The collection of these inequalities implies that for any m′ > m > 0 the mapping

j^{-1} : S^{m-2}(R^2) → S^{m′}(R^1) ⊗ S^{m′}(R^1)

is continuous on the range of j, as desired. We will prove inequality (IV.5) and omit the trivial induction step. We have
|||φ||| = ||φ||_{S_2^{m-2}(R^2)} = sup_{(x,y)∈R^2} (1+|x|+|y|)^{2-m} Σ_{l+k≤2} |∂^{l+k}φ(x, y)/∂x^l ∂y^k|,

||φ|| = ||φ||_{S_0^{m′}(R^1) ⊗ S_0^{m′}(R^1)} = inf Σ_l ||φ_l||_{S_0^{m′}(R^1)} ||ψ_l||_{S_0^{m′}(R^1)},

where

||ψ||_{S_0^m(R^1)} = sup_{x∈R^1} (1+|x|)^{-m} |ψ(x)|

and the infimum is taken over all finite representations φ = Σ_l φ_l ⊗ ψ_l (without the assumption that the φ_l and ψ_l are compactly supported). Let us represent φ(x, y) defined in (IV.4) by the Fourier integral
φ(x, y) = (1/2π) ∫∫_{R^2} e^{i(px+qy)} Σ_s f̃_s(p) g̃_s(q) dp dq,

where f̃_s and g̃_s are the Fourier transforms of f_s and g_s, respectively. Denote

f_{sm}(x) = (1+x²)^{-m/2} f_s(x),  g_{sm}(y) = (1+y²)^{-m/2} g_s(y).

Then we can write

φ(x, y) = (1/2π) ∫∫_{R^2} (1+x²)^{m/2} (1+y²)^{m/2} Σ_s f̃_{sm}(p) g̃_{sm}(q) e^{i(px+qy)} dp dq.
The last integral can be approximated by Riemann sums in S_0^{m′}(R^1) ⊗ S_0^{m′}(R^1) for any m′ > m:

2π φ(x, y) = Σ_s [∫ (1+x²)^{m/2} f̃_{sm}(p) e^{ipx} dp] [∫ (1+y²)^{m/2} g̃_{sm}(q) e^{iqy} dq]

= Σ_s lim_{λ→0, R→∞} λ² [Σ_{|k|≤R/λ} (1+x²)^{m/2} e^{iλkx} f̃_{sm}(λk)] [Σ_{|l|≤R/λ} (1+y²)^{m/2} e^{iλly} g̃_{sm}(λl)]

= lim_{λ→0, R→∞} λ² Σ_{|k|,|l|≤R/λ} θ̃_m(λk, λl) (1+x²)^{m/2} e^{iλkx} ⊗ (1+y²)^{m/2} e^{iλly},

where the limits as λ → 0 and R → ∞ are taken in S_0^{m′}(R^1) on the second line and in S_0^{m′}(R^1) ⊗ S_0^{m′}(R^1) on the third line (we have used the fact that the embedding S_0^m(R^1) ⊗ S_0^m(R^1) → S_0^{m′}(R^1) ⊗ S_0^{m′}(R^1) is continuous). Here we used the notation

θ̃_m(p, q) = Σ_s f̃_{sm}(p) g̃_{sm}(q).
It follows that

||φ|| = (1/2π) lim_{λ→0} λ² ||Σ_{|k|,|l|} θ̃_m(λk, λl) (1+x²)^{m/2} (1+y²)^{m/2} e^{iλ(kx+ly)}||.

However, we can write out the following estimate:

||Σ_{|k|,|l|} (1+x²)^{m/2} (1+y²)^{m/2} e^{iλ(kx+ly)} θ̃_m(λk, λl)|| ≤ Σ_{|k|,|l|} |θ̃_m(λk, λl)|,

since

||(1+x²)^{m/2} e^{iλkx}||_{S_0^m(R^1)} = 1.
Furthermore,

λ² Σ_{|k|,|l|} |θ̃_m(λk, λl)|
≤ {Σ_{|k|,|l|} λ² |θ̃_m(λk, λl)|² (1+λ²k²)(1+λ²l²)}^{1/2} {Σ_{|k|,|l|} λ² / [(1+λ²k²)(1+λ²l²)]}^{1/2}

by the Cauchy–Schwarz–Bunyakovskii inequality. The sum

Σ_{|k|,|l|} λ² / [(1+λ²k²)(1+λ²l²)] = (Σ_{|k|} λ / (1+λ²k²))²
is uniformly bounded, and

lim_{λ→0} Σ_{|k|,|l|} λ² |θ̃_m(λk, λl)|² (1+λ²k²)(1+λ²l²)
= ∫ (1+p²)(1+q²) |θ̃_m(p, q)|² dp dq
= ∫∫ (1 - ∂²/∂x²)[(1+x²)^{-m/2} (1+y²)^{-m/2} φ(x, y)]
  × (1 - ∂²/∂y²)[(1+x²)^{-m/2} (1+y²)^{-m/2} φ(x, y)] dx dy

by the Parseval identity. Since

|φ^{(α)}(x, y)| ≤ const |||φ||| (1+x²+y²)^{(m-2)/2},  |α| = 0, 1, 2,

the last integral converges and is bounded by const |||φ|||². Thus we obtain

||φ|| ≤ const |||φ|||.

The lemma is proved. □
2.3 S^∞-Generators

Now let us describe the structure of S^∞-generators in poly-Banach algebras.
Theorem IV.3 Let A be an S^∞-generator in a poly-Banach algebra

𝒜 = ∪_α 𝒜_α.

Then A is a tempered generator in 𝒜; that is, A generates a one-parameter group {exp(iAt)}, t ∈ R, satisfying the polynomial growth estimates: there exists an α_K such that for any α ∈ K the estimate

||exp(iAt)||_α ≤ C_α (1+|t|)^{m_α}

is valid with nonnegative constants C_α and m_α independent of t. Similar estimates are valid for all t-derivatives of exp(iAt).

Proof The function e^{ity} belongs to S^0(R^1) for any t ∈ R and is continuously differentiable with respect to t as a function with values in S^1(R^1). Hence the mapping U(·, t) : t ↦ e^{ity} is a continuously differentiable mapping of R^1 into S^∞(R^1) and satisfies the Cauchy problem

dU(y, t)/dt = iy U(y, t),  U(y, 0) = 1.
Applying the continuous mapping

μ_A : S^∞(R^1) → 𝒜

to both sides of the equation and the initial condition, we obtain

dU(A, t)/dt = iA U(A, t),  U(A, 0) = 1,

that is, U(A, t) = exp(iAt) is the one-parameter subgroup generated by A in 𝒜. Let us now prove the polynomial estimates. The mapping μ_A : S^∞(R^1) → 𝒜 is continuous; in particular, there exists an α_K such that μ_A is continuous in the spaces

μ_A : S^0(R^1) → 𝒜_{α_K}.
This implies that for any α ∈ K there exists an l such that μ_A is continuous in the spaces S_l^0(R^1) → 𝒜_α. But

||e^{iyt}||_{S_l^0(R^1)} = sup_{y∈R} Σ_{j=0}^{l} |∂^j e^{iyt}/∂y^j| = sup_{y∈R} Σ_{j=0}^{l} |t|^j ≤ C_l (1+|t|)^l.

The estimates for (d/dt)^r exp(iAt) can be proved in a similar way. The theorem is proved. □
Corollary IV.1 Functions of S^∞-generators can be defined via the Fourier transform: if A_1, …, A_n are S^∞-generators in a poly-Banach algebra 𝒜 and f ∈ S^∞(R^n), then

f(A_1^1, …, A_n^n) = ∫_{R^n} f̃(p_1, …, p_n) exp(ip_n A_n) ⋯ exp(ip_1 A_1) dp_1 ⋯ dp_n,

where

f̃(p_1, …, p_n) = (1/2π)^n ∫_{R^n} f(y_1, …, y_n) e^{-i(y_1 p_1 + ⋯ + y_n p_n)} dy_1 ⋯ dy_n

is the Fourier transform of f.

The proof is obvious. Theorem IV.3 and Corollary IV.1 give a convincing example of how the structure of the symbol space determines the properties of generators and the method of defining the mapping symbol ↦ operator.
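The Fourier-transform formula can be checked numerically in the simplest situation (our illustration; the Gaussian symbol and the diagonal matrices are assumptions, not taken from the text). For f(y_1, y_2) = e^{-(y_1²+y_2²)/2} the transform in the normalization above is f̃(p_1, p_2) = (2π)^{-1} e^{-(p_1²+p_2²)/2}, and for commuting self-adjoint A_1, A_2 the ordered integral must reproduce f applied spectrally:

```python
import numpy as np

# f(y1, y2) = exp(-(y1^2 + y2^2)/2); with f~(p) = (2*pi)^(-2) * Int f(y) e^{-i y.p} dy
# one gets f~(p1, p2) = exp(-(p1^2 + p2^2)/2) / (2*pi).
def f_tilde(p1, p2):
    return np.exp(-(p1**2 + p2**2) / 2) / (2 * np.pi)

# Commuting self-adjoint "operators": diagonal matrices (illustrative choice).
A1 = np.diag([0.0, 1.0, -0.5])
A2 = np.diag([2.0, 0.3, 1.0])

def group(A, t):
    """exp(i t A) for a diagonal matrix A."""
    return np.diag(np.exp(1j * t * np.diag(A)))

# Riemann sum for  f(A1, A2) = Int f~(p) exp(i p2 A2) exp(i p1 A1) dp1 dp2.
p = np.linspace(-8, 8, 161)
h = p[1] - p[0]
F = np.zeros_like(A1, dtype=complex)
for p1 in p:
    for p2 in p:
        F += f_tilde(p1, p2) * (group(A2, p2) @ group(A1, p1)) * h * h

# Spectral answer for commuting diagonal operators: f applied entrywise.
F_exact = np.diag(np.exp(-(np.diag(A1)**2 + np.diag(A2)**2) / 2))
```

For commuting generators the order of the exponentials is immaterial; the point of the Feynman indices is precisely that for noncommuting A_1, A_2 the formula fixes one definite ordering.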
3 Functions of Operators in Scales of Spaces

Functions of noncommuting operators were defined in Sections 1 and 2 in the setting of abstract poly-Banach algebras. However, in applications (particularly to differential equations) one usually deals with algebras of operators acting on a scale of spaces (say, on the Sobolev scale W_2^s). Algebras of continuous operators on Banach scales are a special case of general poly-Banach algebras, and in this section we consider such algebras in some detail.
3.1 Banach Scales

Let I be a partially ordered set and {B_i}_{i∈I} a family of Banach spaces with continuous embeddings B_i ⊂ B_j defined whenever i ≤ j. Such a family {B_i}_{i∈I} is called a Banach scale. If I is a directed set, then the scale is said to be inductive; and if I equipped with the opposite order is a directed set, then the scale is said to be projective. Thus, in an inductive scale, for any two spaces B_i and B_j there always exists a space B_k such that B_i ⊂ B_k and B_j ⊂ B_k, whereas in a projective scale, for any two spaces B_i and B_j there always exists a space B_k such that B_k ⊂ B_i ∩ B_j. In applications one most frequently has I = Z or I = R, and we restrict ourselves to one of these two situations.

Let {B_i}_{i∈I} be a Banach scale. According to Definition IV.6, to any filter of sections of I there corresponds a polynormed space, namely, ∪_Λ ∩_{i∈Λ} B_i. Among these spaces, there is a maximal space

B^∞ = ∪_{i∈I} B_i

and a minimal space

B^{-∞} = ∩_{i∈I} B_i.

Lemma IV.6 One has

L(B^∞) = ∩_i ∪_j L(B_i, B_j),  L(B^{-∞}) = ∩_j ∪_i L(B_i, B_j).

The elements of the former algebra are called operators continuous to the left, and the elements of the latter algebra are called operators continuous to the right in the scale {B_i}.
Proof Trivial. □
One of the usual methods of constructing a scale is to take some Banach space B and a tuple of unbounded operators in B; the scale is then constructed with the help of graph or similar norms. The case in which B is a Hilbert space and B_j = D(A^{j/2}), ||u||_j = ||A^{j/2} u||_B, where A is a positive definite unbounded self-adjoint operator in B, is considered in Chapter II in the context of representations of Lie algebras and groups (see also the following subsection). Here we deal with the case in which B is a Banach space and the norms in B_j are defined according to a tuple of unbounded operators in B.

Let B be a Banach space and D ⊂ B a dense linear subset. Let also A_1, …, A_m be unbounded closed operators on B such that for each j = 1, …, m the set D is invariant under A_j and, moreover, D is a core of A_j (that is, A_j is the closure of its restriction to D). There are numerous possible ways to define a scale using these data. Here we consider the simplest one. Let ||·|| denote the norm of B. For any positive integer k set

||x||_k = Σ_{0≤s≤k} Σ ||A_{j_1} A_{j_2} ⋯ A_{j_s} x||,

where the inner sum is taken over all sequences j_1, …, j_s of integers satisfying the condition 1 ≤ j_l ≤ m, l = 1, …, s. Denote by B_k the completion of D with respect to the norm ||·||_k.
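As a finite-dimensional toy model of this definition (ours, not from the text; with matrices the completion step is trivial, so only the norm itself is illustrated):

```python
import numpy as np
from itertools import product

def scale_norm(x, ops, k):
    """||x||_k = sum over all products A_{j1}...A_{js} x, 0 <= s <= k,
    of the Euclidean norms, for a tuple ops of matrices (the order of the
    factors matters when the operators do not commute)."""
    m = len(ops)
    total = 0.0
    for s in range(k + 1):
        for js in product(range(m), repeat=s):   # all sequences of length s
            v = x.copy()
            for j in js:
                v = ops[j] @ v
            total += np.linalg.norm(v)
    return total

A1 = np.diag([1.0, 2.0, 3.0])
x = np.array([0.0, 0.0, 1.0])       # concentrated on the eigenvalue 3
# For this x, ||x||_k = 1 + 3 + 9 + ... = sum of 3**s for s <= k,
# so the norms grow rapidly with k: the scale "sees" the unboundedness of A1.
```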
Proposition IV.1 The collection {B_k}_{k≥0} is a Banach scale. Each of the operators A_j, j = 1, …, m, is continuous in the spaces A_j : B_k → B_{k-1}, k = 1, 2, …

Proof Clearly,

||x||_k ∼ ||x||_{k-1} + Σ_{j=1}^{m} ||A_j x||_{k-1},   (IV.6)

where ∼ stands for equivalence of norms. In particular, the identity operator on D is continuous in the pair of norms (||·||_k, ||·||_{k-1}), and so it extends by continuity to a bounded operator ε_{k,k-1} : B_k → B_{k-1}. Let us prove by induction on k that ε_{k,k-1} has trivial kernel and each A_j is closable from D in B_k. The induction hypothesis (k = 0) is obviously valid, since there is no ε_{0,-1} at all (and so nothing to check), whereas the A_j are closed in B = B_0 by our assumption. Let us carry out the inductive step. Assume that the statement is proved for k < k_0 and set k = k_0. First we prove that ε_{k,k-1} has trivial kernel. Let x ∈ Ker ε_{k,k-1}. Then there exists a sequence x_l ∈ D, l = 1, 2, …, such that x_l →_k x (we write x_l →_k x for convergence in B_k for short) and x_l →_{k-1} 0. It follows from (IV.6) that for each j the sequence A_j x_l, l = 1, 2, …, is convergent in B_{k-1}. Since, by the induction assumption, A_j is closable in B_{k-1}, it follows that A_j x_l →_{k-1} 0. But now, again by (IV.6), we see that ||x_l||_k → 0, that is, x = 0. Let us now prove that each A_j is closable from D in B_k. Let x_l ∈ D, l = 1, 2, …, be a sequence such that x_l →_k 0 and A_j x_l →_k y. Then A_j x_l →_{k-1} ε_{k,k-1}(y) and x_l →_{k-1} 0; since A_j is closable in B_{k-1}, we necessarily have ε_{k,k-1}(y) = 0 and hence y = 0 due to the triviality of Ker ε_{k,k-1}. The proposition is proved. □
Now let

A : D → D

be a linear operator. We are interested in conditions that would guarantee that A extends to a bounded linear operator in the scale {B_k}. To state the following theorem conveniently, we introduce some terminology. We will assign length s to any product A_{i_1} A_{i_2} ⋯ A_{i_s} as well as to the commutator

K_i(A) = [A_{i_1}, [A_{i_2}, [⋯, [A_{i_s}, A] ⋯]]],  i = (i_1, …, i_s).

Theorem IV.4 Suppose that the operator A satisfies the following condition: there exists a function φ : Z_+ ∪ {0} → Z_+ ∪ {0} such that

||K_i(A)x|| ≤ C_r ||x||_{φ(r)},  x ∈ D,

for any commutator K_i(A) of length r, where C_r depends only on r. Then A extends to a bounded operator in the scale {B_k}. Namely,

||Ax||_k ≤ C_k′ ||x||_{ψ(k)},  x ∈ D,  k = 0, 1, 2, …

(with some other constants C_k′), where

ψ(k) = max_{0≤r≤k} (k + φ(r) - r).
Proof One has

||Ax||_k = Σ_{0≤s≤k} Σ ||A_{i_1} A_{i_2} ⋯ A_{i_s} A x||.

However, A_{i_1} A_{i_2} ⋯ A_{i_s} A x is a linear combination of terms of the form K(A) A_{σ_1} ⋯ A_{σ_r} x, where K(A) is a commutator of length l with l + r = s (this can readily be proved by induction). Each such term can be estimated by

||K(A) A_{σ_1} ⋯ A_{σ_r} x|| ≤ C_l ||A_{σ_1} ⋯ A_{σ_r} x||_{φ(l)} ≤ C ||x||_{φ(l)+r}.

But φ(l) + r = φ(l) + s - l, and the desired estimate follows immediately. The theorem is proved. □
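To see what the function ψ does, here is a worked instance under two illustrative choices of φ (our example, not from the text):

```latex
% Worked instance of Theorem IV.4 (illustrative choices of phi).
% If every commutator of length r satisfies ||K(A)x|| <= C_r ||x||_{r+1},
% i.e. phi(r) = r + 1, then
%   psi(k) = max_{0 <= r <= k} (k + phi(r) - r) = k + 1,
% so A acts boundedly as A : B_{k+1} -> B_k ("order one" in the scale).
% If instead phi(r) = r (commutation loses no smoothness), then psi(k) = k
% and A is bounded on each B_k.
\[
  \psi(k) = \max_{0 \le r \le k}\bigl(k + \varphi(r) - r\bigr),
  \qquad
  \varphi(r) = r + 1 \;\Longrightarrow\; \psi(k) = k + 1 .
\]
```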
3.2 S^∞-Generators in Banach Scales

In Theorem IV.3 we used S^∞-generators in a poly-Banach space to produce one-parameter groups of tempered growth in this space. In applications (at least to differential equations) one mostly deals with scales obtained from a Banach (or even Hilbert) space; the commonest way to construct groups of tempered growth in scales is to consider such a group in the "parent" Banach space and then to use some additional information in order to prove that the group extends appropriately to the entire scale. Thus, we start with one-parameter groups in Banach spaces and their generators.
Definition IV.14 Let X be a Banach space. A family U(t), t ∈ R, of bounded linear operators on X is called a strongly continuous one-parameter group if

(i) U(t) is strongly continuous with respect to t;
(ii) U(0) = 1 and U(t)U(τ) = U(t + τ) for any t, τ ∈ R.

One says that U(t) is a one-parameter group of tempered growth if

||U(t)|| ≤ K (1+|t|)^m

for some K and m (note that an exponential bound for ||U(t)|| is always guaranteed). The strong limit

C = lim_{ε→0} (U(ε) - U(0)) / (iε)

exists on a dense subset D_C and is a closed operator. It is easy to show that for x ∈ D_C the family u(t) = U(t)x can be defined as the unique solution to the Cauchy problem

du/dt = iCu,  u|_{t=0} = x.
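A minimal finite-dimensional illustration of tempered growth (our example, not from the text): on X = C², take C = -iN with the nilpotent matrix N below; then U(t) = e^{iCt} = I + tN is a strongly continuous group whose norm grows like 1 + |t|, i.e. tempered growth with m = 1, although U(t) admits no uniform bound.

```python
import numpy as np

N = np.array([[0.0, 1.0], [0.0, 0.0]])   # nilpotent: N @ N = 0

def U(t):
    # e^{iCt} with C = -iN; since N^2 = 0 the exponential series terminates.
    return np.eye(2) + t * N

# Group law U(t) U(tau) = U(t + tau) and polynomial (not uniform) growth.
t, tau = 1.7, -0.4
law_ok = np.allclose(U(t) @ U(tau), U(t + tau))
# Operator 2-norm satisfies ||U(t)|| <= 1 + |t|  (tempered, order m = 1).
bounds_ok = all(np.linalg.norm(U(s), 2) <= 1 + abs(s) for s in (1.0, 10.0, 100.0))
norm_large_t = np.linalg.norm(U(100.0), 2)
```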
Let B be a Banach space, D ⊂ B a dense linear subset, and {B_s} the scale generated by the tuple (A_1, …, A_n) of closed operators with invariant core D on B (recall that D is called an invariant core of a closed operator A if AD ⊂ D and A is the closure of the restriction A|_D). Thus, the spaces B_s are defined for all integers s ≥ 0, B_0 = B, and B_s is the completion of D with respect to the norm

||u||_s = ||u||_{s-1} + Σ_{j=1}^{n} ||A_j u||_{s-1},  s = 1, 2, …

We intend to consider generators of tempered one-parameter groups in {B_s}. The main idea is as follows. Suppose that C is the generator of a group e^{iCt} in B. In view of the definition of the norm in B_s, the estimation of e^{iCt} in B_1, B_2, …, B_s, … requires permuting the A_j and e^{iCt}. In doing so, the following lemma can prove useful.
Lemma IV.7 Let X_1 and X_2 be Banach spaces, and let C_1 and C_2 be generators of strongly continuous one-parameter groups in X_1 and X_2, respectively. Also, let A : X_1 → X_2 be a closed operator with dense domain D_A. Suppose that there exists a core D ⊂ D_A of the operator A such that, whenever x ∈ D_0, that is, x ∈ D ∩ D_{C_1} and C_1 x ∈ D, we have Ax ∈ D_{C_2} and

C_2 A x = A C_1 x.

Finally, suppose that one of the following conditions is satisfied:

(i) D_0 is a core of A, D is e^{iC_1 t}-invariant, and A e^{iC_1 t} x is continuous in X_2 for each x ∈ D.

(ii) D is invariant under the resolvent R_λ(C_1) for |Im λ| large enough.

Then for any x ∈ D_A we have e^{iC_1 t} x ∈ D_A and

A e^{iC_1 t} x = e^{iC_2 t} A x,  t ∈ R.
Remark IV.5 We cannot derive Lemma IV.7 from Theorem I.4 because C_1 and C_2 are not known to be included in an operator algebra as F-generators with some symbol class F containing the exponentials e^{ity}.

Proof We omit the consideration of case (i) (see [134]). Suppose that (ii) is satisfied. For |Im λ| large enough we have

(C_2 - λ) A R_λ(C_1) x = A (C_1 - λ) R_λ(C_1) x = A x,  x ∈ D,

whence it follows that

A R_λ(C_1) x = R_λ(C_2) A x,  x ∈ D.

Fix x ∈ D and t > 0 and set x_n = (-in R_{-in}(C_1))^{[nt]} x, where [nt] is the integral part of nt. Obviously, x_n ∈ D and

A x_n = (-in R_{-in}(C_2))^{[nt]} A x.

Passing to the limit as n → ∞, we obtain x_n → e^{iC_1 t} x, A x_n → e^{iC_2 t} A x (see [197]). Since A is closed, it follows that A e^{iC_1 t} x = e^{iC_2 t} A x. This identity extends to D_A by closure. The case t < 0 is treated similarly. The lemma is proved. □

Now let C be the generator of a strongly continuous one-parameter group in B. We will prove that C is a generator in each of the B_s provided that the commutator of C with each of the A_j can be expressed linearly via the operators A_j.
Proposition IV.2 Let C be the generator of a strongly continuous one-parameter group in B. Suppose that D ⊂ D_C, D is invariant under C, and there exist numbers λ_{jk}, j, k = 1, …, n, such that

[C, A_j]x = Σ_{k=1}^{n} λ_{jk} A_k x,  x ∈ D.

Suppose also that either D is e^{iCt}-invariant and A_j e^{iCt} x is a strongly continuous function of t for x ∈ D, or D is invariant under the resolvent R_λ(C) for |Im λ| sufficiently large. Then C is the generator of a strongly continuous one-parameter group in B_s for each s.

Proof In order to apply Lemma IV.7, let us represent the cited commutation relations in the form

A C x = (C ⊗ 1 - 1 ⊗ Λ) A x,  x ∈ D,

where

A = (A_1, …, A_n)^t : B → B ⊕ ⋯ ⊕ B = B ⊗ C^n  (n copies of B),

1 is the identity operator in either factor, and Λ is the matrix with entries λ_{jk}. Let us check that Lemma IV.7 applies with C_1 = C and C_2 = C ⊗ 1 - 1 ⊗ Λ. The operator A is closable from D, since if x_n ∈ D, x_n → 0, and A x_n → y = (y_1, …, y_n), i.e., A_j x_n → y_j, j = 1, …, n, then y_j = 0 because A_j is closed. Furthermore, C_2 generates a strongly continuous one-parameter group in B ⊗ C^n, namely,

e^{iC_2 t} = e^{i(C⊗1 - 1⊗Λ)t} = e^{iCt} ⊗ e^{-iΛt}

(the reader is advised to check this identity by direct computation). Thus we may conclude that

A e^{iCt} x = e^{iC_2 t} A x,  x ∈ D,

or, in more detail,

A_j e^{iCt} x = e^{iCt} Σ_{k=1}^{n} [exp(-iΛt)]_{jk} A_k x.

Let us now estimate e^{iCt} in B_1. For x ∈ D, we have

||e^{iCt} x||_1 = ||e^{iCt} x||_0 + Σ_{j=1}^{n} ||A_j e^{iCt} x||_0
≤ ||e^{iCt} x||_0 + Σ_{k,j=1}^{n} |[exp(-iΛt)]_{jk}| ||e^{iCt} A_k x||_0.
Let R(t) be a bound for ||e^{iCt}||_{B→B}; then

||e^{iCt} x||_1 ≤ R(t) (||x||_0 + Σ_{j,k=1}^{n} |[exp(-iΛt)]_{jk}| ||x||_1) ≤ M R(t) ||e^{-iΛt}|| ||x||_1

for some constant M, where ||e^{-iΛt}|| is any matrix norm of e^{-iΛt} (clearly, only M depends on the particular choice of the norm). Proceeding by induction on s, we easily prove that

||e^{iCt} x||_s ≤ M_s R(t) ||e^{-iΛt}||^s ||x||_s,

that is, e^{iCt} extends to a continuous operator in each B_s. Next, A e^{iCt} x = (e^{iCt} ⊗ e^{-iΛt})(A x) is strongly continuous for x ∈ D, whence it easily follows that e^{iCt} is strongly continuous in B_1 and, by induction, on each B_s. The proposition is proved. □
Corollary IV.2 If, under the conditions of Proposition IV.2, C is a tempered generator in B (that is, R(t) ≤ C(1+|t|)^m for some m ≥ 0), the matrix Λ has purely imaginary eigenvalues, and the operator C satisfies the conditions of Theorem IV.4, then C is a tempered generator in L(B_∞) = L(∩_s B_s).

Proof If the spectrum of Λ is purely imaginary, then

||e^{-Λt}|| ≤ const (1+|t|)^{l-1},

where l is the largest size of the Jordan blocks in the Jordan normal form of Λ. Hence,

||e^{iCt}||_{B_s→B_s} ≤ const_s (1+|t|)^{m+(l-1)s}.

By Theorem IV.4, for any s there exists an s_1 such that ||C||_{B_{s_1}→B_s} < ∞. Since

e^{iCt} x - e^{iCτ} x = ∫_τ^t i e^{iCθ} C x dθ,

we see that e^{iCt} is a continuous function of t with values in L(B_{s_1}, B_s), and its norm grows polynomially as |t| → ∞. □
Remark IV.6 We cannot make C a generator in L(B_{-∞}) = L(∪_s B_s): since the scale is bounded from below, the spaces B_s with s < 0 are not defined. In particular, the operator C cannot be realized as a bounded operator acting from B into any space of the scale. In the situation of Hilbert spaces one usually deals with the case in which A_1, …, A_n are self-adjoint operators in B, and then the "negative spaces" B_{-s} can be defined by duality, B_{-s} = B_s^* (Chapter II above). Then the operator C (under certain additional assumptions) can be proved to be a generator in L(B_{-∞}).
Example IV.4 Let B = L²(R^n) with n = 1, A_1 = -i∂/∂x. Then the scale B_s is (up to norm equivalence) the usual Sobolev scale B_s = W_2^s(R), and A_1 is a tempered generator in L(W_2^∞(R)) (as well as in L(W_2^s(R))); in fact, A_1 is self-adjoint in each of the spaces W_2^s(R). Let C be the operator of multiplication by x in L²(R). It generates the strongly continuous group of multiplication by e^{ixt} in L²(R). We have

[C, A_1] = i,

and the hypotheses of Proposition IV.2 can be satisfied by the following trick: set n = 2 and add the operator A_2 = 1. The norm ||·||_s is thus replaced by an equivalent one (since A_2 is bounded in L²(R)) and the scale remains unchanged. We have

[C, A_1] = iA_2,  [C, A_2] = 0,

that is,

Λ = ( 0  i
      0  0 )

is a Jordan block of size 2 with eigenvalue zero. Thus,

||e^{-Λt}|| ≤ C(1+|t|),

and the group e^{ixt} grows as (1+|t|)^s in W_2^s(R). However, C is not a generator in L(W_2^∞) (in particular, it does not satisfy the conditions of Theorem IV.4). Therefore, it is not very surprising that S^∞-functions of C are not well-defined in L(B_∞) (in particular, the operator C itself does not belong to L(B_∞): multiplication by x is an unbounded operator in any pair of spaces W_2^s(R), W_2^{s′}(R^1)). However, we can devise another scale of spaces in which x is a generator. Let

A_1 = -i∂/∂x,  A_2 = C = x,  A_3 = 1,

so that

||u||_s = 2||u||_{s-1} + ||xu||_{s-1} + ||∂u/∂x||_{s-1}.

The matrix Λ has the form

Λ = ( 0  0  i
      0  0  0
      0  0  0 );

the size of the Jordan block is still 2, so the growth estimates remain the same. However, C is now a bounded operator in the scale B_s and hence is proved to be a generator in L(B_∞) (and, in fact, in L(B_{-∞})).
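The (1+|t|) growth of the multiplication group e^{ixt} in the W_2^1-norm can be observed numerically (a finite-difference sketch of ours; the Gaussian u and the grid are illustrative assumptions): since ∂/∂x(e^{ixt}u) = e^{ixt}(u′ + itu), doubling t roughly doubles the norm for large t.

```python
import numpy as np

# Discretize u(x) = exp(-x^2) and measure the W^1_2-type norm
# ||v|| + ||dv/dx|| of v = e^{ixt} u for several t.
x = np.linspace(-10, 10, 4001)
h = x[1] - x[0]
u = np.exp(-x**2)

def sobolev1_norm(v):
    dv = np.gradient(v, h)                      # central finite differences
    return np.sqrt(h * np.sum(np.abs(v)**2)) + np.sqrt(h * np.sum(np.abs(dv)**2))

norms = {t: sobolev1_norm(np.exp(1j * x * t) * u) for t in (0.0, 10.0, 20.0, 40.0)}
# The L^2 part is t-independent, while the derivative part behaves like
# sqrt(||u'||^2 + t^2 ||u||^2), i.e. linear growth in |t|.
```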
In some applications (for example, to representations of stratified Lie algebras; see [159], [61], [62], etc.), a slightly more complicated situation occurs. Namely, operators A_1, …, A_n, A_{n+1}, …, A_{m+n} are given, and each of the A_j, j = n+1, …, m+n, can be represented as a linear combination of commutators of length ≤ r of A_1, …, A_n. An operator C is given satisfying the conditions of Proposition IV.2 except for the fact that the commutation relations have the form

[C, A_j]x = Σ_{k=1}^{m+n} λ_{jk} A_k x,  x ∈ D,  j = 1, …, m+n.

Proposition IV.3 Under the above conditions, the following estimate is valid:

||e^{iCt} x||_k ≤ const R(t) ||e^{-Λt}||^{rk} ||x||_{rk}.

The proof, which can be obtained from that of Proposition IV.2 by purely technical manipulations, is omitted. Using this proposition, one can prove various results concerning generators in Banach scales.
3.3 Functions of Feynman-Ordered Selfadjoint Operators

In this subsection we prove some estimates for functions of a Feynman tuple of selfadjoint operators in the scale generated by (part of) these operators. These estimates use a technique specific to Hilbert spaces and are more precise than the general estimates that can be derived from the results of the preceding subsection. Our main goal is to consider the case in which the Feynman tuple in question has a left ordered representation in S^∞. However, we begin by proving some auxiliary results.

Let H be a Hilbert space, and let A_1, …, A_n be unbounded selfadjoint operators on H. We assume that the intersection D_{A_1} ∩ ⋯ ∩ D_{A_n} of their domains contains a linear manifold D dense in H and invariant with respect to the A_j and the corresponding groups exp(itA_j), j = 1, …, n, of unitary operators. Also, we assume that for an arbitrary sequence j_1, …, j_m ∈ {1, …, n} the product

exp(i t_m A_{j_m}) ⋯ exp(i t_1 A_{j_1})
is strongly infinitely differentiable with respect to t ∈ R^m on D, and all its derivatives are of tempered growth:

||(∂^{|α|}/∂t^α)(exp(i t_m A_{j_m}) ⋯ exp(i t_1 A_{j_1})) u|| ≤ C (1+|t|)^N,  u ∈ D,

where N may depend only on |α|, whereas C depends on |α| and on u ∈ D. Under these conditions, for any f ∈ S^∞(R^n) the formula

f(A) = f(A_1^1, …, A_n^n) = ⟨f̂, Ω_A⟩

defines a linear operator on a dense linear manifold D_∞ ⊃ D invariant with respect to all operators f(A), f ∈ S^∞(R^n). Here

f̂(t) = (2π)^{-n} ∫ f(ξ) e^{-itξ} dξ,  t, ξ ∈ R^n,

is the Fourier transform of f,

Ω_A = Ω_A(t) = exp(i t_n A_n) ⋯ exp(i t_1 A_1),

and the brackets ⟨·, ·⟩ denote the pairing of distributions and test functions of t in the weak topology on D. The operator f(A) is necessarily bounded if the Fourier transform of f is a limit of compactly supported continuous functionals over the space C(R^n) of bounded continuous functions on R^n with the norm

||f||_C = sup_{y∈R^n} |f(y)|.

Under our assumptions, f(A) can also be represented by the iterated Stieltjes integral

f(A) = ∫ ⋯ ∫ f(λ_1, …, λ_n) dE_{λ_n}(A_n) ⋯ dE_{λ_1}(A_1),

where dE_λ(A_j) is the spectral measure of A_j and the integral is understood in the sense of the weak topology in H. This formula can also be used for symbols f ∉ S^∞(R^n), but the resultant operator need not be densely defined if f ∉ S^∞(R^n).
Theorem IV.5 Assume that the operators A_{k+1}, …, A_n are pairwise commuting (i.e., their spectral families commute), and suppose that the function f(y_1, …, y_n) satisfies the estimates

|∂^{α_1+⋯+α_k} f(y_1, …, y_n) / ∂y_1^{α_1} ⋯ ∂y_k^{α_k}| ≤ C (1 + |y_1| + ⋯ + |y_k|)^{-k-ε}

for some ε > 0 and every α = (α_1, …, α_k) with |α| = α_1 + ⋯ + α_k ≤ k+1. Then the operator f(A_1^1, …, A_n^n) is bounded in H (and, consequently, extends by continuity to the entire space H).

The proof is divided into several lemmas.
Lemma IV.8 Let B_1, …, B_m be pairwise commuting selfadjoint operators on a Hilbert space H, and let the sequence g_n(x) of bounded continuous functions of x = (x_1, …, x_m) ∈ R^m be uniformly bounded (that is, |g_n(x)| ≤ M for all n and x) and convergent as n → ∞ to some function g(x) locally uniformly with respect to x ∈ R^m. Then the sequence of operators g_n(B_1, …, B_m) strongly converges as n → ∞ to g(B_1, …, B_m).
Proof Since the spectral measures dE_{x_1}(B_1), …, dE_{x_m}(B_m) commute, we have the bound

||g_n(B_1, …, B_m)|| ≤ sup_{x∈R^m} |g_n(x)| ≤ M,

and it suffices to verify the convergence on the dense subset of H consisting of the vectors

u = E_{Δ_1}(B_1) E_{Δ_2}(B_2) ⋯ E_{Δ_m}(B_m) v,

where Δ_1, …, Δ_m are compact Borel subsets of the real axis. For any such u we have

||g_n(B_1, …, B_m)u - g(B_1, …, B_m)u|| ≤ ||v|| sup_{x∈Δ_1×⋯×Δ_m} |g_n(x) - g(x)|,

and, consequently, g_n(B_1, …, B_m)u → g(B_1, …, B_m)u. The lemma is proved. □

Lemma IV.9 Suppose that the hypotheses of Theorem IV.5 are satisfied. Then the Fourier transform φ̂_v(t_1, …, t_k) of the H-valued function

φ_v(y_1, …, y_k) = f(y_1, …, y_k, A_{k+1}, …, A_n) v,  v ∈ H,

is continuous, and

∫_{R^k} ||φ̂_v(t_1, …, t_k)|| dt_1 ⋯ dt_k ≤ C ||v||,

where C is independent of v.
Proof Since f(y_1, …, y_n) is jointly continuous and bounded, it follows from Lemma IV.8 and from the estimate in Theorem IV.5 that φ_v(y_1, …, y_k) is a continuous integrable function. Hence the Fourier transform

φ̂_v(t_1, …, t_k) = (2π)^{-k} ∫ φ_v(y_1, …, y_k) e^{-ity} dy_1 ⋯ dy_k

exists and is a bounded continuous H-valued function. Furthermore, we have

t^α φ̂_v(t_1, …, t_k) = (2π)^{-k} ∫ [(-i∂/∂y)^α φ_v(y_1, …, y_k)] e^{-ity} dy_1 ⋯ dy_k

for any multi-index α with |α| ≤ k+1 (the differentiability of φ_v(y_1, …, y_k) also follows from Lemma IV.8), and, consequently, the norm of t^α φ̂_v is bounded for |α| ≤ k+1 by const ||v||. We now have

||(1+t²)^{(k+1)/2} φ̂_v(t)|| ≤ const Σ_{|α|≤k+1} ||t^α φ̂_v(t)|| ≤ C′ ||v||,

whence the desired estimate follows immediately. □
Lemma IV.10 Under the conditions of the theorem, we have

(f(A_1^1, …, A_n^n) u, v) = ∫_{R^k} (exp(i t_k A_k) ⋯ exp(i t_1 A_1) u, φ̂_v(t_1, …, t_k)) dt_1 ⋯ dt_k

(here (·, ·) is the inner product on H).

Proof For any function f ∈ S^∞(R^n) the required formula can be proved by direct computation. Further, if f ∉ S^∞(R^n) still satisfies the estimates in Theorem IV.5, then we can approximate f by a sequence f_s ∈ S^∞(R^n), s = 1, 2, …, in such a way that f - f_s satisfies the same estimates with C_s → 0. Passing to the limit as s → ∞, we complete the proof. □

Theorem IV.5 follows from Lemmas IV.9 and IV.10.

Now let A_1, …, A_n be a tuple of selfadjoint operators on a Hilbert space H. We assume that these operators satisfy the conditions stated at the beginning of this subsection and also the following conditions:

(a) The operators A_{k+1}, …, A_n are pairwise commuting (k ≤ n is fixed).

(b) The left ordered representation l_1, …, l_n of the Feynman tuple (A_1, …, A_n) in S^∞(R^n) exists, and the operators l_1, …, l_n are S^∞-generators in S^∞(R^n). In other words, for any two symbols f, g ∈ S^∞(R^n) we have

[f(A_1^1, …, A_n^n)] [g(A_1^1, …, A_n^n)] = h(A_1^1, …, A_n^n)

on D, where

h(y) = f(l_1^1, …, l_n^n)(g(y)),

l_1, …, l_n are linear operators on S^∞(R^n), and the operator f(l_1^1, …, l_n^n) is defined on S^∞ by the formula

f(l_1^1, …, l_n^n) = ∫ f̂(t_1, …, t_n) exp(i l_n t_n) ⋯ exp(i l_1 t_1) dt_1 ⋯ dt_n.

Condition (b) is satisfied, e.g., if A_1, …, A_n form a representation of a nilpotent Lie algebra and satisfy the conditions of the Krein–Shikhvatov theorem.

Denote

y = (y′, y″),  y′ = (y_1, …, y_k),  y″ = (y_{k+1}, …, y_n),

and introduce similar notation for multi-indices α = (α_1, …, α_n). Let

S_p^m = S_p^m(R^k, R^{n-k}),  m ∈ R,  p > 0,
IV. Functional-Analytic Background of Noncommutative Analysis
be the space of functions $f(y) \in C^\infty(\mathbf R^n)$ such that

$$ \Bigl|\frac{\partial^{|\alpha|}f}{\partial y^\alpha}(y)\Bigr| \le C_\alpha(1+|y'|)^{m-p|\alpha'|}, \quad |\alpha| = 0, 1, 2, \dots, $$

and let $G_p^m$ be the set of operators $f(\overset{1}{A_1},\dots,\overset{n}{A_n})$ with symbols $f \in S_p^m$. We introduce the following condition on the representation operators.
Condition IV.1 For any $m \in \mathbf R$, $f \in S_p^m$, and any permutation $\pi = (\pi_1,\dots,\pi_n)$ of $(1,\dots,n)$ the operator $f(\overset{\pi_1}{l_1},\dots,\overset{\pi_n}{l_n})$ is a pseudodifferential operator on $S^\infty(\mathbf R^n)$,

$$ f(\overset{\pi_1}{l_1},\dots,\overset{\pi_n}{l_n}) = H\Bigl(\overset{2}{y},\ -i\,\overset{1}{\partial/\partial y}\Bigr), $$

whose symbol $H(y, \xi)$ satisfies the following estimates:

$$ (H(y,\xi) - f(y)) \in S_p^{m-p}, \qquad \Bigl|\frac{\partial^{|\alpha+\beta|}H}{\partial y^\alpha\,\partial\xi^\beta}(y,\xi)\Bigr| \le C_{\alpha\beta}\,(\Phi(y,\xi))^{m-p(|\alpha'|+|\beta''|)}, $$

where $\Phi(y,\xi)$ is a function on $\mathbf R^{2n} \ni (y,\xi)$ such that for some $N_0 > 0$ the estimates

$$ 1 \le \Phi(y,\xi) \le C(1+|y'|)(1+|\xi|)^{N_0}, \qquad (\Phi(y,\xi))^{-1} \le C(1+|y'|)^{-1}(1+|\xi|)^{N_0} $$

are valid with some constant $C$ independent of $(y,\xi)$.

Theorem IV.6 The following holds under the above conditions.

(a) The union $\bigcup_{m\in\mathbf R}G_p^m$ is a filtered algebra. More precisely, if $f_i \in S_p^{m_i}$, $i = 1, 2$, then

$$ [f_1(\overset{1}{A_1},\dots,\overset{n}{A_n})][f_2(\overset{1}{A_1},\dots,\overset{n}{A_n})] = g(\overset{1}{A_1},\dots,\overset{n}{A_n}), $$

where $g \in S_p^{m_1+m_2}$. Furthermore,

$$ (f_1(y)f_2(y) - g(y)) \in S_p^{m_1+m_2-p}. $$

(b) For positive integer $m$ let $\mathcal H^m$ be the completion of $D$ with respect to the norm $\|\cdot\|_m$ defined by $\|u\|_0 = \|u\|$, where $\|\cdot\|$ is the Hilbert norm on $\mathcal H$, and

$$ \|u\|_m = \|u\|_{m-1} + \sum_{j=1}^k\|A_ju\|_{m-1}, \quad m = 1, 2, \dots. $$
Then each operator $f(\overset{1}{A_1},\dots,\overset{n}{A_n}) \in G_p^m$ extends by continuity to a bounded operator from $\mathcal H^s$ to $\mathcal H^{s+[-m]}$ (here $[-m]$ is the integral part of $-m$) for $s \ge \max(-[-m], 0)$. In particular, the operators with symbols in $S_p^0$ are bounded in $\mathcal H$.

Remark IV.7 One can define $\mathcal H^s$ for $s < 0$ as the dual of $\mathcal H^{-s}$ with respect to the inner product on $\mathcal H$. Then any operator $f \in G_p^m$ has order $[-m]$ in the entire scale $\{\mathcal H^s\}$.

Proof of Theorem IV.6. To begin with, we formulate and prove the following auxiliary assertion.
Proposition IV.4 Let $f_i \in S_p^{m_i}$, $i = 1, 2$, and let $(\pi_1,\dots,\pi_n)$ be a permutation of $(1,\dots,n)$. Then

$$ g \overset{\mathrm{def}}{=} f_1(\overset{\pi_1}{l_1},\dots,\overset{\pi_n}{l_n})(f_2(y_1,\dots,y_n)) \in S_p^{m_1+m_2} $$

and

$$ f_1(y)f_2(y) - g(y) \in S_p^{m_1+m_2-p}. $$
Proof. It suffices to prove the latter inclusion. By Condition IV.1, we have

$$ g(y) = f_1(y)f_2(y) - i\int_0^1 d\tau\,\Bigl\langle H_\xi\Bigl(\overset{2}{y},\ -i\tau\,\overset{1}{\partial/\partial y}\Bigr),\ f_{2y}(y)\Bigr\rangle + \psi(y) \overset{\mathrm{def}}{=} f_1(y)f_2(y) - iG(y) + \psi(y), $$

where $\psi(y) \in S_p^{m_1+m_2-p}$ and the angle brackets $\langle\ ,\ \rangle$ denote summation over the coordinate indices from 1 to $n$. Let us estimate the derivatives of $G(y)$. Set $\varepsilon_j = 0$ for $j \in \{1,\dots,k\}$ and $\varepsilon_j = 1$ for $j \in \{k+1,\dots,n\}$. We have

$$ \Bigl|\frac{\partial^{|\alpha+\beta|}}{\partial y^\alpha\,\partial\xi^\beta}H_{\xi_j}(y,\tau\xi)\Bigr| \le C(\Phi(y,\xi))^{m_1-p(|\alpha'|+|\beta''|+\varepsilon_j)} $$

uniformly with respect to $\tau \in [0, 1]$. (Here and below $C$ denotes different constants.) Using the inequalities for $\Phi$ and the consequent inequality $(\Phi(y,\xi))^{-1} \le 1$, we obtain

$$ \Bigl|\frac{\partial^{|\alpha+\beta|}}{\partial y^\alpha\,\partial\xi^\beta}H_{\xi_j}(y,\tau\xi)\Bigr| \le C(1+|y'|)^{m_1-p\varepsilon_j-p|\alpha'|}(1+|\xi|)^{N_1} $$

(for some $N_1 > 0$) with some constant $C$ independent of $\tau \in [0, 1]$. Set

$$ F_{\alpha,N,j}(y,\xi) = \Bigl[\int_0^1\frac{\partial^{|\alpha|}}{\partial y^\alpha}H_{\xi_j}(y,\tau\xi)\,d\tau\Bigr](1+\xi^2)^{-N}. $$

If $N$ is sufficiently large, then for all $\alpha \le \alpha_0$, where $\alpha_0$ is an arbitrary fixed multi-index, and for any $|\beta| = 0, 1, 2, \dots$, we have

$$ \Bigl|\frac{\partial^{|\beta|}}{\partial\xi^\beta}F_{\alpha,N,j}(y,\xi)\Bigr| \le C(1+|y'|)^{m_1-p\varepsilon_j-p|\alpha'|}(1+|\xi|)^{-N-1}. $$
These estimates imply that the Fourier transform $\widehat F_{\alpha,N,j}(y,\eta)$ of $F_{\alpha,N,j}$ with respect to $\xi$ is integrable and

$$ \int\bigl|\widehat F_{\alpha,N,j}(y,\eta)\bigr|\,d\eta \le C(1+|y'|)^{m_1-p\varepsilon_j-p|\alpha'|}. $$

We have

$$ G^{(\alpha_0)}(y) = \sum_{\beta+\gamma=\alpha_0}\sum_{j=1}^n\frac{\alpha_0!}{\beta!\,\gamma!}\,F_{\beta,N,j}\Bigl(\overset{2}{y},\ -i\,\overset{1}{\partial/\partial y}\Bigr)(1-\Delta)^Nf_{2y_j}^{(\gamma)}(y) = \frac{1}{(2\pi)^n}\sum_{\beta+\gamma=\alpha_0}\sum_{j=1}^n\frac{\alpha_0!}{\beta!\,\gamma!}\int\widehat F_{\beta,N,j}(y,\eta)\,\widehat{(1-\Delta)^Nf_{2y_j}^{(\gamma)}}(\eta)\,e^{iy\eta}\,d\eta. $$

By the preceding inequality and the fact that $f_2 \in S_p^{m_2}$, each term in this sum can be estimated by

$$ C(1+|y'|)^{m_1+m_2-p-p|\beta'|-p|\gamma'|} = C(1+|y'|)^{m_1+m_2-p-p|\alpha_0'|}. $$

Hence we have shown that $G(y) \in S_p^{m_1+m_2-p}$. Proposition IV.4 is proved. $\square$
Now we remark that statement (a) of Theorem IV.6 is a particular case of Proposition IV.4. Let us now prove assertion (b) of the theorem. First, we establish the boundedness of operators with symbols in $S_p^0$. Let $f \in S_p^0$. We will find a symbol $g \in S_p^0$ such that

$$ (f(A))^*f(A) + (g(A))^*g(A) = M^2 + \psi(A), \qquad \text{(IV.7)} $$

where $M = 1 + \sup_{y\in\mathbf R^n}|f(y)|$, $\psi(y) \in S_p^{-p}$, and the asterisk denotes the adjoint operator on $\mathcal H$. Equation (IV.7) implies the boundedness of $f(A)$ in $\mathcal H$ immediately,
since $\psi(A)$ is bounded by Theorem IV.5. Since $A_1,\dots,A_n$ are selfadjoint operators, the adjoint of $f(A)$ is given on $D$ by

$$ (f(A))^* = \bar f(\overset{n}{A_1},\dots,\overset{1}{A_n}), $$

where $\bar f$ is the complex conjugate of $f$ and the Feynman indices occur in the inverse order. If $g \in S_p^0$, then, by Proposition IV.4, we obtain

$$ (f(A))^*(f(A)) + (g(A))^*(g(A)) = (|f|^2+|g|^2)(A) + \psi(A), \quad \psi \in S_p^{-p}. $$
Let us solve equation (IV.7) by the method of successive approximations. Set

$$ g_0(y) = \sqrt{M^2 - |f(y)|^2}. $$

Clearly, $g_0 \in S_p^0$, and we have (the uppercase letters stand for operators and the lowercase letters for the corresponding symbols)

$$ F^*F + G_0^*G_0 = M^2 + \Psi_0, \quad \psi_0 \in S_p^{-p}. $$

We now set

$$ g_j(y) = -\frac{\operatorname{Re}\psi_{j-1}(y)}{2g_{j-1}(y)} + g_{j-1}(y) \overset{\mathrm{def}}{=} r_j(y) + g_{j-1}(y) $$

for $j = 1, 2, \dots$, where $\psi_{j-1}$ is determined from the conditions

$$ F^*F + G_{j-1}^*G_{j-1} = M^2 + \Psi_{j-1}, \quad \psi_{j-1} \in S_p^{-pj}, \quad \Psi_{j-1}^* = \Psi_{j-1}. \qquad \text{(IV.8)} $$

It turns out that such a choice of $\psi_j$ is always possible. Indeed, it follows from (IV.8) that $\Psi_j = \psi_j(A)$ must be symmetric on $D$. We put $\tilde\psi_j(y_1,\dots,y_n) = \bar\psi_j(y_{\pi_1},\dots,y_{\pi_n})$, $\pi_l = n+1-l$, and derive from Proposition IV.4 the equation

$$ \bar\psi_j(\overset{n}{A_1},\dots,\overset{1}{A_n}) = \tilde\psi_j(\overset{1}{A_1},\dots,\overset{n}{A_n}) + \chi(\overset{1}{A_1},\dots,\overset{n}{A_n}), $$

where $\chi \in S_p^{-p(j+2)}$ provided that $\psi_j \in S_p^{-p(j+1)}$. Set

$$ \psi_j'(y) = (\psi_j(y) + \tilde\psi_j(y) + \chi(y))/2; $$

then $\Psi_j' = \psi_j'(A)$ satisfies both conditions in (IV.8). Thus, it suffices to find symbols $\psi_j$ satisfying the first condition in (IV.8); the second condition can then also be guaranteed. The argument goes by induction over $j$. Suppose that the symbols $g_{j-1}$, $\psi_{j-1}$ satisfying the induction hypothesis are given.
Note that $g_{j-1}$ and $r_j$ are real-valued. Due to the definition of $g_j(y)$, we have

$$ F^*F + G_j^*G_j = F^*F + G_{j-1}^*G_{j-1} + G_{j-1}^*R_j + R_j^*G_{j-1} + R_j^*R_j = M^2 + \Psi_{j-1} + G_{j-1}^*R_j + R_j^*G_{j-1} + R_j^*R_j, $$

where $r_j \in S_p^{-pj}$, $g_{j-1} \in S_p^0$ and

$$ \bar g_{j-1}r_j + \bar r_jg_{j-1} = 2g_{j-1}r_j = -\operatorname{Re}\psi_{j-1}. $$

By virtue of Proposition IV.4,

$$ F^*F + G_j^*G_j = M^2 + \Psi_j, \quad \psi_j \in S_p^{-p(j+1)}, $$

and the inductive step is complete. Thus we have shown that operators with symbols in $S_p^0$ are bounded.

Let us now prove that if $F \in G_p^m$ ($m$ a positive integer), then $F : \mathcal H^s \to \mathcal H^{s-m}$ is continuous. The following facts are evident:

(a) $A_j \in G_p^1$, $j = 1,\dots,k$;

(b) $A_j : \mathcal H^s \to \mathcal H^{s-1}$ are continuous for $j = 1,\dots,k$;

(c) there is an implication $F \in G_p^m \Rightarrow [A_j, F] \overset{\mathrm{def}}{=} A_jF - FA_j \in G_p^{m-p+1}$, $j = 1,\dots,k$;

(d) if $F \in G_p^0$, then $F$ is bounded on $\mathcal H$;

(e) $G_p^m\cdot G_p^{m'} \subset G_p^{m+m'}$.

The boundedness of $F \in G_p^m$ from $\mathcal H^s$ to $\mathcal H^{s-m}$ follows from (a)-(e) by a standard argument, which we do not reproduce here. The theorem is proved. $\square$
Remark IV.8 It may be difficult to check the validity of Condition IV.1. However, it is valid for the usual pseudodifferential operators (here $n = 2k$, $A_{j+k} = x_j$, $A_j = -i\partial/\partial x_j$ in $L^2(\mathbf R^k)$; we have $l_j = y_j - i\partial/\partial y_{k+j}$ and $l_{k+j} = y_{k+j}$, $j = 1,\dots,k$, so that

$$ H(y,\xi) = f(y_1+\xi_{k+1},\dots,y_k+\xi_{2k},\ y_{k+1},\dots,y_n), $$

and one can set $\Phi(y,\xi) = 1 + |y'| + |\xi|$, $N_0 = 1$).
Corollary IV.3 Let a function $f(y^{(1)}, y^{(2)}, y^{(3)})$, $y^{(i)} \in \mathbf R^n$, $i = 1, 2, 3$, satisfy the estimates

$$ \Bigl|\frac{\partial^{|\beta|+|\gamma|+|\delta|}f}{\partial y^{(1)\beta}\,\partial y^{(2)\gamma}\,\partial y^{(3)\delta}}\Bigr| \le C_{\beta\gamma\delta}\Bigl(1+\sqrt{(y^{(1)})^2+(y^{(2)})^2}\Bigr)^{-|\beta|-|\gamma|}, \quad |\beta|+|\gamma|+|\delta| = 0, 1, 2, \dots. $$

Then the operator $f\bigl(-i\,\overset{1}{\partial/\partial x},\ \overset{2}{x},\ \overset{2}{x}\bigr)$ is bounded on $L^2(\mathbf R^n)$, and the upper bound of its norm is completely determined by the constants $C_{\beta\gamma\delta}$.
Appendix A
Representation of Lie Algebras and Lie Groups
Here we present some well-known elementary facts concerning Lie algebras, Lie groups and their representations. For further information on these topics, see e.g.
[162], [184], [86].
1 Lie Algebras and Their Representations

1.1 Lie Algebras, Bases, Structure Constants, Subalgebras

A Lie algebra over R (or C) is a linear space $L$ over the corresponding field endowed with a bilinear operation (the Lie bracket or commutator)

$$ [\ ,\ ] : L \times L \to L, \quad (x, y) \mapsto [x, y], $$

such that

$$ [x, y] = -[y, x], \quad x, y \in L \qquad \text{(A.1)} $$

(antisymmetry) and

$$ [[x, y], z] + [[y, z], x] + [[z, x], y] = 0, \quad x, y, z \in L \qquad \text{(A.2)} $$

(Jacobi identity).

Let $L$ be a Lie algebra of finite dimension $n$, and let $\{a_1,\dots,a_n\}$ be a basis in $L$. The commutator of any two elements of the basis may be expanded in this basis:$^1$

$$ [a_j, a_k] = \lambda_{jk}^la_l. $$

The numbers $\lambda_{jk}^l$ are called the structure constants of the algebra $L$ with respect to the basis $\{a_1,\dots,a_n\}$. Clearly, for any elements $x = x^ja_j$, $y = y^ka_k \in L$, we have

$$ [x, y] = \lambda_{jk}^lx^jy^ka_l. \qquad \text{(A.3)} $$
$^1$ Here and below we use the summation convention: if a superscript coincides with a subscript in a product, then summation is performed over this index. In particular, $\lambda_{jk}^la_l$ means $\sum_l\lambda_{jk}^la_l$.
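As a quick computational check of (A.1)-(A.3), the sketch below evaluates the bracket from a table of structure constants. The constants $\lambda^l_{jk} = \varepsilon_{jkl}$ of $so(3)$ are used purely as an illustrative choice (they are not part of the text here):

```python
import numpy as np

# Structure constants of so(3): [a_j, a_k] = eps_{jkl} a_l (illustrative choice)
eps = np.zeros((3, 3, 3))
for j, k, l in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[j, k, l] = 1.0
    eps[k, j, l] = -1.0

def bracket(x, y):
    # [x, y]^l = lambda^l_{jk} x^j y^k, as in (A.3) (summation convention)
    return np.einsum('jkl,j,k->l', eps, x, y)

rng = np.random.default_rng(0)
x, y, z = rng.standard_normal((3, 3))

# Antisymmetry (A.1)
assert np.allclose(bracket(x, y), -bracket(y, x))
# Jacobi identity (A.2)
jac = (bracket(bracket(x, y), z) + bracket(bracket(y, z), x)
       + bracket(bracket(z, x), y))
assert np.allclose(jac, 0)
```

Any other valid table of structure constants would pass the same checks; the Jacobi identity is precisely the constraint the $\lambda^l_{jk}$ must satisfy.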
A Lie subalgebra in L is a linear subspace W c L such that [W, W] c W. In particular, W itself is a Lie algebra.
1.2 Examples of Lie Algebras The simplest example of a Lie algebra is an abelian algebra. One takes any linear space L and defines the Lie bracket on L by the equality [x, y] = 0, x,y E L.
However, Lie algebras induced by associative algebras are of particular interest for us. Let A be an associative algebra. We define the Lie bracket on A by setting [x, y] = xy — yx, x, y E A
(A.4)
that is, $[x, y]$ is the conventional commutator of elements of $A$. The antisymmetry and the Jacobi identity are easily verified for this definition of the Lie bracket. Thus the commutator (A.4) defines the structure of a Lie algebra on $A$. By taking Lie subalgebras in $A$, one can provide a large number of examples.

Let $A = \mathrm{Mat}_n(\mathbf R)$ be the algebra of real $(n \times n)$ matrices. The corresponding Lie algebra is denoted by $gl(n, \mathbf R)$. There is a Lie subalgebra $so(n, \mathbf R)$ in $gl(n, \mathbf R)$ consisting of the skew-symmetric matrices, that is, matrices $A$ such that

$$ {}^tA = -A $$

(here, as above, ${}^tA$ denotes the transpose of the matrix $A$). Indeed, we have

$$ {}^t[A, B] = {}^t(AB - BA) = {}^tB\,{}^tA - {}^tA\,{}^tB = [{}^tB, {}^tA] = [B, A] = -[A, B], \qquad \text{(A.5)} $$
so that $so(n, \mathbf R)$ is closed under commutation.

Let $A$ denote the algebra of all differential operators in $\mathbf R^n$ with smooth real coefficients. Its elements have the form

$$ D = \sum_{|\alpha|=0}^{m(D)}a_\alpha(x)\Bigl(\frac{\partial}{\partial x}\Bigr)^{\!\alpha}, $$

where $m(D)$ is the order of the operator $D$. It is clear that

$$ m([D_1, D_2]) \le m(D_1) + m(D_2) - 1, \qquad \text{(A.6)} $$

so that the set $A_1 \subset A$ of differential operators of order $m(D) \le 1$ is a Lie subalgebra of $A$.
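Both the closure property (A.5) for skew-symmetric matrices and the Jacobi identity for the matrix commutator (A.4) can be verified numerically; the following sketch (an illustration, not part of the text) checks them on random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_skew(n):
    m = rng.standard_normal((n, n))
    return m - m.T          # tA = -A, an element of so(n, R)

def comm(a, b):
    return a @ b - b @ a    # the commutator (A.4)

A, B, D = random_skew(4), random_skew(4), random_skew(4)
C = comm(A, B)

# (A.5): t[A, B] = -[A, B], so so(n, R) is closed under commutation
assert np.allclose(C.T, -C)

# Jacobi identity (A.2) for the matrix commutator
jac = comm(comm(A, B), D) + comm(comm(B, D), A) + comm(comm(D, A), B)
assert np.allclose(jac, 0)
```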
In turn, one may consider the subalgebra of $A_1$ consisting of operators of the form

$$ D = a_jx^j + b^j\frac{\partial}{\partial x^j} + c. \qquad \text{(A.7)} $$

This is a Lie algebra of dimension $2n + 1$. It is called the Heisenberg algebra and denoted by $hb(n)$. The elements

$$ x^1,\dots,x^n, \quad \frac{\partial}{\partial x^1},\dots,\frac{\partial}{\partial x^n}, \quad 1 $$

form a basis in $hb(n)$, with the commutation relations given by

$$ [x^j, x^k] = \Bigl[\frac{\partial}{\partial x^j},\frac{\partial}{\partial x^k}\Bigr] = [x^j, 1] = \Bigl[\frac{\partial}{\partial x^j}, 1\Bigr] = 0, \quad \Bigl[\frac{\partial}{\partial x^j}, x^k\Bigr] = \delta_j^k, \quad j, k = 1,\dots,n. \qquad \text{(A.8)} $$

Here $\delta_j^k$ is the Kronecker symbol.

Let now $A$ be some algebra. Consider the space $\mathrm{Der}(A)$ of linear mappings $D : A \to A$ such that

$$ D(uv) = uD(v) + D(u)v, \quad u, v \in A $$

(the "Leibniz rule"). A mapping $D$ satisfying the Leibniz rule is called a derivation of the algebra $A$. If $D_1$ and $D_2$ are derivations of $A$, then, as one can easily verify, their commutator is also a derivation. Thus, $\mathrm{Der}(A)$ is a Lie algebra.

Take now $A = C^\infty(M)$, the algebra of smooth functions on a manifold $M$, with the usual pointwise multiplication. Its derivations are nothing other than vector fields on $M$, so that $\mathrm{Der}(A) = \mathrm{Vect}(M)$ is the algebra of vector fields of the manifold $M$, with the Lie bracket given by the commutator of vector fields.
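The commutation relations (A.8) can be checked symbolically for $hb(1)$; this sketch realizes the basis elements as operators on smooth functions of one variable (sympy is an assumption of the illustration):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

# hb(1) acting on smooth functions: X = multiplication by x,
# D = d/dx, E = the identity operator (the basis element "1")
X = lambda u: x * u
D = lambda u: sp.diff(u, x)
E = lambda u: u

def comm(a, b, u):
    # commutator of two operators applied to a test function u
    return sp.simplify(a(b(u)) - b(a(u)))

assert comm(D, X, f) == f    # [d/dx, x] = 1, the nontrivial relation in (A.8)
assert comm(X, E, f) == 0    # x commutes with 1
assert comm(D, E, f) == 0    # d/dx commutes with 1
```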
1.3 Homomorphisms, Ideals, Quotient Algebras

Let $L_1$ and $L_2$ be Lie algebras. A linear mapping

$$ \varphi : L_1 \to L_2 $$

is called a homomorphism (of Lie algebras) if

$$ \varphi([x, y]) = [\varphi(x), \varphi(y)] \qquad \text{(A.9)} $$
for any $x, y \in L_1$. The kernel

$$ \mathfrak g = \operatorname{Ker}\varphi = \{x \in L_1 \mid \varphi(x) = 0\} \qquad \text{(A.10)} $$

of a homomorphism $\varphi$ is an ideal of $L_1$, that is, a linear subspace with the property

$$ [x, y] \in \mathfrak g \quad \text{whenever } x \in L_1 \text{ and } y \in \mathfrak g. \qquad \text{(A.11)} $$

This is a direct consequence of formula (A.9). If $J \subset L$ is an ideal of a Lie algebra $L$, the quotient space $L/J$ is naturally endowed with the structure of a Lie algebra. Indeed, it follows from (A.11) that the equivalence class of $[x, y]$ depends only on those of $x$ and $y$. The Lie algebra $L/J$ is called the quotient algebra of the Lie algebra $L$ with respect to the ideal $J$.
1.4 Representations

A representation is a homomorphism of a special type. Namely, if $L$ is a Lie algebra and $V$ is a linear space, any homomorphism

$$ \varphi : L \to \operatorname{End}(V) $$

(where $\operatorname{End}(V)$ is the algebra of linear operators on $V$), that is, a linear mapping such that

$$ \varphi([x, y]) = \varphi(x)\varphi(y) - \varphi(y)\varphi(x), $$

is called a representation of the Lie algebra $L$ in the linear space $V$. The representation is said to be faithful if $\operatorname{Ker}\varphi = \{0\}$.

Examples of Representations. The algebras $gl(n, \mathbf R)$ and $so(n, \mathbf R) \subset gl(n, \mathbf R)$ obviously act on the space $\mathbf R^n$ by multiplication:

$$ \varphi(a)x = ax, \quad a \in gl(n, \mathbf R), \quad x \in \mathbf R^n, $$

which gives us natural representations of these algebras in the linear space $\mathbf R^n$. The algebra $\operatorname{Vect}(M)$ of vector fields on a manifold $M$ possesses a natural representation $\varphi$ in the space $C^\infty(M)$ of smooth functions on $M$; this representation is given by $\varphi(X)(f(x)) = Xf(x)$. The Heisenberg algebra $hb(n)$ possesses a natural representation in the space $C^\infty(\mathbf R^n)$; its elements act as the corresponding differential operators:

$$ \varphi\Bigl(\frac{\partial}{\partial x^j}\Bigr)f(x) = \frac{\partial f}{\partial x^j}, $$
$$ \varphi(x^j)f(x) = x^jf(x), \quad j = 1,\dots,n, \qquad \varphi(1)f(x) = f(x). $$
1.5 The Associated Representation ad. The Center of a Lie Algebra

Let $L$ be a Lie algebra. Its associated representation is the representation in the linear space $L$ given by the formula

$$ \operatorname{ad} : L \to \operatorname{End}(L), \quad x \mapsto \operatorname{ad}_x, \qquad \text{(A.12)} $$

where $\operatorname{ad}_x(y) = [x, y]$. The mapping (A.12) is indeed a representation, since by the Jacobi identity we have

$$ \operatorname{ad}_{[x,y]}(z) = [[x, y], z] = -[[y, z], x] - [[z, x], y] = [x, [y, z]] - [y, [x, z]] = [\operatorname{ad}_x, \operatorname{ad}_y](z). \qquad \text{(A.13)} $$

The kernel $Z \subset L$ of the representation ad consists of all $x \in L$ such that $[x, y] = 0$ for all $y \in L$; $Z$ is called the center of the Lie algebra $L$.

Let $L$ be a Lie algebra of finite dimension $n$, and let us fix some basis $\{a_1,\dots,a_n\}$ in $L$. Let $x = x^sa_s$. The operator $\operatorname{ad}_x$ in this basis is given by the matrix $x^s\Lambda_s$, where

$$ (\Lambda_s)_k^l = \lambda_{sk}^l, $$

and $\lambda_{sk}^l$ are the structure constants of $L$ with respect to the basis $\{a_1,\dots,a_n\}$.
1.6 The Ado Theorem The representation ad of a finite-dimensional Lie algebra L is a representation of L in a finite-dimensional space. Generally, this representation is not faithful (unless the center of L is trivial). However, the following theorem is valid. Theorem A.1 Any finite-dimensional Lie algebra possesses faithful finite-dimensional representations.
Thus we may consider any finite-dimensional Lie algebra as a matrix Lie algebra (though the size of the matrices may be very large). The proof of the Ado theorem is very complicated, and we do not consider it here.
1.7 Nilpotent Lie Algebras

A Lie algebra $L$ is said to be nilpotent if for any $x \in L$ the operator $\operatorname{ad}_x$ is nilpotent (that is, $(\operatorname{ad}_x)^N = 0$ for some $N$). The Engel theorem says that a Lie algebra is nilpotent if and only if there exists a number $N$ such that for any $x_1, x_2, \dots, x_N \in L$ one has

$$ [\dots[[x_1, x_2], x_3], \dots, x_N] = 0. $$

The minimal $N$ with this property is called the nilpotency rank of $L$.

Let $L$ be a nilpotent Lie algebra. Let us define a decreasing sequence of ideals in $L$ by setting

$$ L^{(0)} = L, \quad L^{(1)} = [L, L^{(0)}], \quad L^{(2)} = [L, L^{(1)}], \quad \dots, \quad L^{(k)} = [L, L^{(k-1)}], \dots. $$

We have

$$ L = L^{(0)} \supset L^{(1)} \supset L^{(2)} \supset \dots \supset L^{(k-1)} \supset L^{(k)} \supset \dots. $$

It is easy to see that $L^{(N)} = \{0\}$, where $N$ is the nilpotency rank of $L$. Let $\dim L^{(k)} = r_k$, $r_0 = n$. Let us choose a basis in $L$ by the following procedure: choose some basis $\{a_1,\dots,a_{r_{N-1}}\}$ in $L^{(N-1)}$, extend it to a basis $\{a_1,\dots,a_{r_{N-2}}\}$ in $L^{(N-2)}$, and so on, until a basis $\{a_1,\dots,a_n\}$ in $L$ is obtained. Now set $b_i = a_{n+1-i}$, $i = 1,\dots,n$. It is clear that

$$ [b_j, b_k] = \sum_{l>\max(j,k)}\lambda_{jk}^lb_l; $$

thus, $\lambda_{jk}^l = 0$ for $l \le \max(j, k)$. In other words, all matrices $\Lambda_j$, $j = 1,\dots,n$, of the associated representation are strictly upper-triangular in the basis $\{b_1,\dots,b_n\}$.
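For $hb(1)$, the nilpotency of every $\operatorname{ad}_x$ and the vanishing of triple brackets can be checked directly from the structure constants; the small numerical sketch below uses an illustrative basis labelling (not fixed by the text):

```python
import numpy as np

# hb(1): basis a1 = x, a2 = d/dx, a3 = 1, with [a2, a1] = a3
# and all other basis brackets zero.
lam = np.zeros((3, 3, 3))
lam[1, 0, 2] = 1.0    # [a2, a1] = a3
lam[0, 1, 2] = -1.0   # [a1, a2] = -a3

def br(u, v):
    # bracket via structure constants, [u, v]^l = lambda^l_{jk} u^j v^k
    return np.einsum('jkl,j,k->l', lam, u, v)

def ad(u):
    # matrix of ad_u in this basis: (ad_u)^l_k = lambda^l_{jk} u^j
    return np.einsum('jkl,j->lk', lam, u)

rng = np.random.default_rng(1)
u, v, w = rng.standard_normal((3, 3))

assert np.allclose(ad(u) @ ad(u), 0)     # (ad_u)^2 = 0: every ad_u is nilpotent
assert np.allclose(br(br(u, v), w), 0)   # all brackets of length 3 vanish
```

The second assertion reflects the Engel criterion: $hb(1)$ has nilpotency rank $N = 3$.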
2 Lie Groups and Their Representations

2.1 Lie Groups, Subgroups, the Gleason-Montgomery-Zippin Theorem

A Lie group is a smooth manifold $G$ together with a group structure such that the group operations

$$ G \times G \to G, \quad (x, y) \mapsto xy; \qquad G \to G, \quad x \mapsto x^{-1} $$
are smooth mappings of manifolds. The Gleason-Montgomery-Zippin theorem states that it suffices to require in the above definition that the manifold and the group operations be of class $C^0$. Under this condition, one may define the structure of a real-analytic manifold on $G$ such that the group operations are real-analytic mappings. Let $G$ be a Lie group. A Lie subgroup in $G$ is an embedded submanifold $N \subset G$ which is at the same time a subgroup of $G$.
2.2 Examples of Lie Groups

The simplest example of a Lie group is the real line R with the multiplication $(x, y) \mapsto x + y$ and the inverse element $x \mapsto -x$. It is easy to check all the conditions of the definition. Another example is the group $T$ of unimodular complex numbers with the usual multiplication. The mapping

$$ \varphi \mapsto e^{i\varphi} \qquad \text{(A.14)} $$

introduces the (local) coordinate $\varphi$ on $T$, and the group operation may be written in the form

$$ (\varphi_1, \varphi_2) \mapsto \varphi_1 + \varphi_2 \ (\mathrm{mod}\ 2\pi). $$

Let us now consider the space $\mathbf R^{2n+1}$ with the coordinates $y^1,\dots,y^n$, $z_1,\dots,z_n$, $c$ and define the multiplication law by setting

$$ (y^{(1)}, z^{(1)}, c^{(1)})(y^{(2)}, z^{(2)}, c^{(2)}) = (y^{(1)}+y^{(2)},\ z^{(1)}+z^{(2)},\ c^{(1)}+c^{(2)}+z^{(1)}y^{(2)}), \qquad \text{(A.15)} $$

where $zy = z_iy^i$. We leave to the reader the easy verification of the fact that the multiplication law (A.15) is associative. Further, this law possesses the neutral element $(0, 0, 0)$; the inverse element always exists and is given by

$$ (y, z, c)^{-1} = (-y, -z, -c + zy). $$

Thus, $\mathbf R^{2n+1}$ with this operation is a Lie group. It will be denoted by $HB(n, \mathbf R)$ and called the Heisenberg group.

Consider now the set of nondegenerate $n \times n$ matrices with real elements. (Of course, they can be identified with nondegenerate linear operators in $\mathbf R^n$.) It is clear that this set is in fact a Lie group (the product is the conventional matrix product, whereas the smooth structure is inherited from the space $\mathbf R^{n^2}$, in which nondegenerate matrices form an open subset). This Lie group is denoted by $Gl(n, \mathbf R)$ and called the full linear group. The group $Gl(n, \mathbf R)$ and its subgroups are called matrix Lie groups. As an example, consider the group $SO(n, \mathbf R) \subset Gl(n, \mathbf R)$ of matrices $A$ such that

$$ \det A = 1, \qquad {}^tA\,A = E $$
(E is the identity matrix). Thus, SO(n, R) consists of orthogonal matrices with unit determinant. The group SO(n, R) is called the special orthogonal group.
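Returning to the Heisenberg group, the group axioms for the law (A.15) with $n = 1$ can be checked mechanically; the sample elements below are arbitrary, and the inverse uses the formula $(-y, -z, -c+zy)$:

```python
# HB(1, R) with the law (A.15): the c-component picks up z1*y2
def mul(g, h):
    (y1, z1, c1), (y2, z2, c2) = g, h
    return (y1 + y2, z1 + z2, c1 + c2 + z1 * y2)

def inv(g):
    y, z, c = g
    return (-y, -z, -c + z * y)

e = (0.0, 0.0, 0.0)
g, h, k = (1.0, 2.0, 3.0), (-0.5, 4.0, 1.0), (2.0, -1.0, 0.5)

assert mul(mul(g, h), k) == mul(g, mul(h, k))     # associativity
assert mul(g, e) == g and mul(e, g) == g          # neutral element
assert mul(g, inv(g)) == e and mul(inv(g), g) == e  # two-sided inverse
```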
2.3 Local Lie Groups

Let $(U, \varphi : U \to \mathbf R^n)$ be a coordinate chart on a Lie group $G$ in a neighborhood of the unit element $e \in G$. We assume that $\varphi(e) = 0$. Then the group operations are represented by the mappings

$$ \psi : V \times V \to \mathbf R^n, \quad \varepsilon : V \to \mathbf R^n \qquad \text{(A.16)} $$

in some neighborhood $V \subset \varphi(U)$ of the point $0 \in \mathbf R^n$, and

$$ \psi(0, x) = \psi(x, 0) = x, \quad \varepsilon(0) = 0, \quad \psi(x, \psi(y, z)) = \psi(\psi(x, y), z), \quad \psi(x, \varepsilon(x)) = \psi(\varepsilon(x), x) = 0 \qquad \text{(A.17)} $$

(these equalities are satisfied whenever all the expressions involved are defined). A neighborhood of zero $V \subset \mathbf R^n$ together with a pair of mappings (A.16) satisfying (A.17) is called a local Lie group. Thus, any Lie group gives a local Lie group via the above construction. It turns out that, vice versa, any local Lie group gives rise to a Lie group. We do not prove the latter statement here.
2.4 Homomorphisms of Lie Groups, Normal Subgroups, Quotient Groups

Let $G_1$ and $G_2$ be Lie groups. A mapping

$$ \varphi : G_1 \to G_2 $$

is called a homomorphism of Lie groups if $\varphi$ is a smooth mapping of manifolds and a group homomorphism at the same time. If $\varphi : G_1 \to G_2$ is a Lie group homomorphism, its kernel

$$ N = \operatorname{Ker}\varphi = \{x \in G_1 \mid \varphi(x) = e\} $$

is a Lie subgroup in $G_1$. It is a normal subgroup, that is, $x \in N$ implies $yxy^{-1} \in N$ for any $y \in G_1$. Indeed, if $x \in N$, then

$$ \varphi(yxy^{-1}) = \varphi(y)\varphi(x)\varphi(y^{-1}) = \varphi(y)\,e\,\varphi(y^{-1}) = \varphi(y)\varphi(y^{-1}) = \varphi(yy^{-1}) = \varphi(e) = e. $$

The quotient group $G/N$ clearly possesses the structure of a smooth manifold concordant with the group structure, so that $G/N$ is a Lie group.
3 Left and Right Translations. The Haar Measure

Let $G$ be a Lie group. Then for any $g \in G$ the following mappings are defined:

$$ L_g : G \to G, \quad h \mapsto gh $$

(left translation), and

$$ R_g : G \to G, \quad h \mapsto hg $$

(right translation). Both mappings are diffeomorphisms. The mappings $L_g$ and $R_g$ induce the corresponding mappings of tangent and cotangent spaces,

$$ L_{g*} : T_hG \to T_{gh}G, \quad R_{g*} : T_hG \to T_{hg}G $$

and

$$ L_g^* : T_{gh}^*G \to T_h^*G, \quad R_g^* : T_{hg}^*G \to T_h^*G $$

for any $h \in G$. A nondegenerate volume form $d\omega$ on $G$ is called a right (left) Haar measure on $G$ if it is right- (left-)invariant, that is,

$$ R_g^*(d\omega) = d\omega \quad \text{or} \quad L_g^*(d\omega) = d\omega, $$

respectively, for any $g \in G$. Right and left Haar measures on a Lie group $G$ always exist and are defined up to a nonzero factor. Indeed, we have

$$ (dr_G)_h = (R_h^*)^{-1}(dr_G)_e \quad \text{or} \quad (dl_G)_h = (L_h^*)^{-1}(dl_G)_e. $$

Here $dr_G$ is a right Haar measure, $dl_G$ is a left Haar measure, and $e$ is the unit element of the group.
3.1 Left and Right Regular Representations

Let $G$ be a Lie group and $C^\infty(G)$ the space of smooth functions on $G$. For any $g \in G$ let us define the operators $\mathcal L_g$ and $\mathcal R_g$ in $C^\infty(G)$ by setting

$$ (\mathcal L_gf)(h) = f(g^{-1}h), \qquad (\mathcal R_gf)(h) = f(hg) $$

for any $h \in G$. The introduced operators possess the following properties:

$$ \mathcal L_e = \mathcal R_e = \mathrm{id}; $$
$$ (\mathcal L_{g_1}\mathcal L_{g_2}f)(h) = (\mathcal L_{g_2}f)(g_1^{-1}h) = f(g_2^{-1}g_1^{-1}h) = f((g_1g_2)^{-1}h) = (\mathcal L_{g_1g_2}f)(h), $$
$$ (\mathcal R_{g_1}\mathcal R_{g_2}f)(h) = (\mathcal R_{g_2}f)(hg_1) = f(hg_1g_2) = (\mathcal R_{g_1g_2}f)(h). $$

Thus, the mappings

$$ g \mapsto \mathcal L_g, \qquad g \mapsto \mathcal R_g $$

are homomorphisms of $G$ into the group $\operatorname{Aut}(C^\infty(G))$ of automorphisms of the linear space $C^\infty(G)$. These homomorphisms are called the left and the right regular representation, respectively.
3.2 Representations of Lie Groups

The homomorphisms considered in the preceding subsection are special examples of representations of Lie groups. A representation of a Lie group $G$ in a linear space $V$ is a homomorphism

$$ T : G \to \operatorname{Aut}(V) $$

from the group $G$ to the group of automorphisms of $V$. We assume that $V$ is a topological linear space and that $\operatorname{Aut}(V)$ is the set of linear continuous operators on $V$ with continuous inverses. Further, we introduce a certain topology in the set $\operatorname{Aut}(V)$, and the mapping $T$ is assumed to be continuous in this topology. The following situation is of particular interest for us. Let $V$ be a Banach space and $\operatorname{Aut}(V)$ the set of bounded operators in $V$ with bounded inverses, with the topology of strong convergence. The corresponding representations are called strongly continuous representations of the group $G$ in the Banach space $V$. In particular,

(1) the operators $T(g)$ are bounded operators in $V$ for all $g \in G$;

(2) $T(e) = \mathrm{id}_V$ and $T(g_1g_2) = T(g_1)T(g_2)$ for any $g_1, g_2 \in G$;

(3) for any $v \in V$ the $V$-valued vector function $g \mapsto T(g)v$ on $G$ is continuous with respect to the norm in $V$.

A representation $T$ is said to be faithful if $T(g) \ne \mathrm{id}_V$ for any $g \ne e$.
Examples of Representations. We consider only two examples here.

A. Any matrix group acts in $\mathbf R^n$ by left multiplications.

B. Consider the Heisenberg group $HB(n, \mathbf R)$. We define the mapping

$$ T_\lambda : HB(n, \mathbf R) \to \operatorname{Aut}(L^2(\mathbf R^n)) \qquad \text{(A.18)} $$

by the formula

$$ (T_\lambda(y, z, c)f)(x) = e^{i\lambda(yx+c)}f(x+z), \qquad \text{(A.19)} $$

where $\lambda \in \mathbf R$ is a fixed number. The mapping $T_\lambda$ is a strongly continuous representation of the group $HB(n, \mathbf R)$ in the space $L^2(\mathbf R^n)$. Indeed, we have

$$ \|T_\lambda(y, z, c)\|_{L^2(\mathbf R^n)\to L^2(\mathbf R^n)} = 1 \qquad \text{(A.20)} $$

for any $(y, z, c) \in HB(n, \mathbf R)$. Further, if $f(x)$ is a smooth compactly supported function, it is obvious that formula (A.19) defines an element of the space $L^2(\mathbf R^n)$ which depends on $(y, z, c) \in HB(n, \mathbf R)$ continuously. Since $C_0^\infty(\mathbf R^n)$ is dense in $L^2(\mathbf R^n)$, and the norms of the operators (A.18) are uniformly bounded by (A.20), one may conclude that the element (A.19) of $L^2(\mathbf R^n)$ depends on $(y, z, c) \in HB(n, \mathbf R)$ continuously for any $f \in L^2(\mathbf R^n)$. Finally, $T_\lambda(0, 0, 0)$ is the identity operator in $L^2(\mathbf R^n)$, and

$$ T_\lambda(g)T_\lambda(h) = T_\lambda(gh), $$

as one can easily verify. Thus our statement is proved.
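The homomorphism property $T_\lambda(g)T_\lambda(h) = T_\lambda(gh)$ for the operators (A.19) can be confirmed symbolically in the one-dimensional case; sympy and the chosen sample values are assumptions of this illustration:

```python
import sympy as sp

x, lam = sp.symbols('x lam', real=True)
y1, z1, c1, y2, z2, c2 = sp.symbols('y1 z1 c1 y2 z2 c2', real=True)
f = sp.Function('f')

def T(y, z, c, u):
    # (T_lam(y, z, c) u)(x) = exp(i*lam*(y*x + c)) * u(x + z), formula (A.19)
    return sp.exp(sp.I * lam * (y * x + c)) * u.subs(x, x + z)

lhs = T(y1, z1, c1, T(y2, z2, c2, f(x)))
# group law (A.15): gh = (y1 + y2, z1 + z2, c1 + c2 + z1*y2)
rhs = T(y1 + y2, z1 + z2, c1 + c2 + z1 * y2, f(x))

# the abstract factor f(x + z1 + z2) cancels in the ratio; the phases must agree
vals = {x: 0.7, lam: 1.3, y1: 0.2, z1: -0.5, c1: 1.1, y2: 0.9, z2: 0.4, c2: -0.3}
assert abs(complex((lhs / rhs).subs(vals)) - 1) < 1e-12
```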
4 The Relationship between Lie Groups and Lie Algebras

4.1 The Lie Algebra of a Lie Group

Let $G$ be a Lie group. A vector field $X \in \operatorname{Vect}(G)$ is said to be right-invariant if

$$ R_{g*}X_h = X_{hg} \qquad \text{(A.21)} $$

for any $g, h \in G$. It is easy to see that right-invariant vector fields form a finite-dimensional linear space naturally isomorphic to the tangent space $T_eG$ of the group $G$ at the point $e$. Indeed, (A.21) implies $X_g = R_{g*}X_e$ for any $g \in G$, thus proving the claimed isomorphism.
The commutator of right-invariant vector fields is itself a right-invariant vector field. Indeed, let us consider any field $X$ as an operator $X : C^\infty(G) \to C^\infty(G)$. The fact that $X$ is right-invariant means exactly that

$$ X\mathcal R_g = \mathcal R_gX $$

for any $g \in G$. Now let $X_1, X_2$ be right-invariant vector fields. We have

$$ [X_1, X_2]\mathcal R_g = X_1X_2\mathcal R_g - X_2X_1\mathcal R_g = \mathcal R_gX_1X_2 - \mathcal R_gX_2X_1 = \mathcal R_g[X_1, X_2], $$

so that $[X_1, X_2]$ is also right-invariant. Thus, right-invariant vector fields on $G$ form an $n$-dimensional Lie algebra $\mathfrak g$. It will be called the Lie algebra of the Lie group $G$. We use the above isomorphism to identify $\mathfrak g$ with the tangent space $T_eG$.
4.2 Examples

A. Let us construct the Lie algebra of the group R. Any vector field on R has the form

$$ X = a(x)\frac{d}{dx}. $$

Right translations act in R according to the formula $R_yx = x + y$. Hence

$$ (R_{y*}X)_{x+y} = a(x)\frac{d}{dx}, $$

and the right-invariance condition means that $a(x+y) = a(x)$, that is, $a(x)$ is a constant. We see that any element of the considered Lie algebra has the form

$$ X = a\frac{d}{dx}, \quad a = \mathrm{const}, $$

and the Lie algebra is one-dimensional, with the trivial Lie bracket.

B. Let us now consider the Lie algebra of the group $HB(1, \mathbf R)$. Recall that the standard coordinates on $HB(1, \mathbf R)$ were denoted by $(y, z, c)$. Any vector field on $HB(1, \mathbf R)$ has the form

$$ X = a_1(y, z, c)\frac{\partial}{\partial y} + a_2(y, z, c)\frac{\partial}{\partial z} + a_3(y, z, c)\frac{\partial}{\partial c}. $$
Right translations on $HB(1, \mathbf R)$ are given by

$$ R_{(\tilde y,\tilde z,\tilde c)}(y, z, c) = (y+\tilde y,\ z+\tilde z,\ c+\tilde c+z\tilde y), $$

so that the matrix of the operator $R_{(\tilde y,\tilde z,\tilde c)*}$ has the form

$$ R_{(\tilde y,\tilde z,\tilde c)*} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & \tilde y & 1 \end{pmatrix}. $$

Thus,

$$ (R_{(\tilde y,\tilde z,\tilde c)*}X)_{(y+\tilde y,\ z+\tilde z,\ c+\tilde c+z\tilde y)} = a_1(y, z, c)\frac{\partial}{\partial y} + a_2(y, z, c)\frac{\partial}{\partial z} + \bigl(a_3(y, z, c)+\tilde ya_2(y, z, c)\bigr)\frac{\partial}{\partial c}. $$

The right-invariance conditions imply

$$ a_1(y+\tilde y,\ z+\tilde z,\ c+\tilde c+z\tilde y) = a_1(y, z, c), $$
$$ a_2(y+\tilde y,\ z+\tilde z,\ c+\tilde c+z\tilde y) = a_2(y, z, c), $$
$$ a_3(y+\tilde y,\ z+\tilde z,\ c+\tilde c+z\tilde y) = a_3(y, z, c)+\tilde ya_2(y, z, c), $$

and so $a_1 = \mathrm{const}$, $a_2 = \mathrm{const}$, $a_3 = ya_2 + b$, $b = \mathrm{const}$. Thus, any right-invariant vector field $X$ on $HB(1, \mathbf R)$ has the form

$$ X = a_1\frac{\partial}{\partial y} + a_2\Bigl(\frac{\partial}{\partial z} + y\frac{\partial}{\partial c}\Bigr) + b\frac{\partial}{\partial c} $$

with some constants $a_1$, $a_2$ and $b$.

Example A.1 The Lie algebra of $HB(1, \mathbf R)$ is isomorphic to $hb(1)$.
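The right-invariance of the field derived above can be verified symbolically: pushing a function forward along a right translation and then applying $X$ must agree with applying $X$ first. The sketch below uses sympy and one concrete test function (both assumptions of the illustration):

```python
import sympy as sp

y, z, c, Y, Z, C = sp.symbols('y z c Y Z C')   # (Y, Z, C) = fixed group element
a1, a2, b = sp.symbols('a1 a2 b')

# right translation on HB(1, R): (y, z, c) -> (y + Y, z + Z, c + C + z*Y)
sub = {y: y + Y, z: z + Z, c: c + C + z * Y}

def Xop(F):
    # the right-invariant field X = a1 d/dy + a2 (d/dz + y d/dc) + b d/dc
    return (a1 * sp.diff(F, y)
            + a2 * (sp.diff(F, z) + y * sp.diff(F, c))
            + b * sp.diff(F, c))

F = sp.sin(y * c) + z**3 * y        # an arbitrary concrete test function
Fq = F.subs(sub, simultaneous=True) # F composed with the right translation

lhs = Xop(Fq)                          # X applied to the translated function
rhs = Xop(F).subs(sub, simultaneous=True)  # (XF) evaluated at the translated point
assert sp.expand(lhs - rhs) == 0
```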
4.3 The Exponential Mapping, One-Parameter Subgroups, Coordinates of I and II Genera

Let $G$ be a Lie group, $\mathfrak g$ its Lie algebra, and $X \in \mathfrak g$ an arbitrary element, considered as a right-invariant vector field on $G$. Consider an ordinary differential equation on $G$,

$$ \dot g = X_g, \qquad \text{(A.22)} $$

with the initial data

$$ g(0) = e. \qquad \text{(A.23)} $$

Lemma A.1 The Cauchy problem (A.22)-(A.23) possesses a unique solution $g(\tau)$ defined for all $\tau \in \mathbf R$ and satisfying

$$ g(\tau)g(t) = g(t+\tau) \qquad \text{(A.24)} $$

for all $t, \tau \in \mathbf R$.

Proof. The existence theorem for ordinary differential equations guarantees that the solution of (A.22)-(A.23) exists on an interval $\tau \in (-\varepsilon, \varepsilon)$. The curve $g_1(\tau) = g(t+\tau)$ satisfies the equation (A.22) with the initial condition $g_1(0) = g(t)$. Since $X$ is right-invariant, the curve $g_2(\tau) = g(\tau)g(t) = R_{g(t)}g(\tau)$ satisfies the same equation and the same initial condition. By the uniqueness theorem, $g_1(\tau) = g_2(\tau)$ on their common domain. Thus, (A.24) is valid for $|t|, |\tau|, |t+\tau| < \varepsilon$. The identity (A.24) allows us to extend the definition of $g(\tau)$ to all $\tau \in \mathbf R$ by setting

$$ g(\tau) = \bigl(g(\tau/N)\bigr)^N $$

for sufficiently large integer $N$. It is easy to verify that (A.22) remains valid, and the lemma is thereby proved. $\square$

A curve $g : \mathbf R \to G$ satisfying the conditions (A.23)-(A.24) is called a one-parameter subgroup of $G$; the one-parameter subgroup constructed in the above lemma is said to correspond to the element $X \in \mathfrak g$ and will be denoted by $U_X(t)$. Let us now define the exponential mapping
$$ \exp : \mathfrak g \to G $$

by setting

$$ \exp(X) = U_X(1). $$

It is clear that $\exp$ is a smooth mapping. Let us show that this mapping is nondegenerate in a neighborhood of zero. To do this, let us compute the derivative

$$ \exp_*(0) : T_0\mathfrak g \to T_eG. $$

The tangent space $T_0\mathfrak g$ may be identified with $\mathfrak g = T_eG$. Then, for any $X \in \mathfrak g$, we have

$$ \exp_*(0)X = \frac{d}{dt}\exp(tX)\Bigr|_{t=0} = \frac{d}{dt}U_{tX}(1)\Bigr|_{t=0} = \frac{d}{dt}U_X(t)\Bigr|_{t=0} = \dot U_X(0) = X, $$
so that $\exp_*(0)$ is the identity operator. Thus, $\exp$ is a diffeomorphism from a neighborhood of zero in $\mathfrak g$ to a neighborhood of $e$ in $G$.

Special coordinate systems in $G$ are related to the mapping $\exp$. Let $\{a_1,\dots,a_n\}$ be a basis in $\mathfrak g$. The coordinates of the I (first) genus are defined in the following way: the element $\exp(x^ia_i) \in G$ is considered as having the coordinates $x = (x^1,\dots,x^n)$. Since $\exp$ is a nondegenerate mapping, we obtain a coordinate system in a neighborhood of $e \in G$. Next we define the coordinates of the II (second) genus. Consider the mapping

$$ \exp_2 : \mathbf R^n \to G, \quad x \mapsto \exp(x^na_n)\cdots\exp(x^1a_1). \qquad \text{(A.25)} $$

The derivative $\exp_{2*}(0)$ is nondegenerate, since it takes the vectors $\partial/\partial x^1,\dots,\partial/\partial x^n$ to the vectors $a_1,\dots,a_n$, which are linearly independent. Consequently, the mapping (A.25) defines a coordinate system in a neighborhood of the point $e = \exp_2(0)$. We point out that the coordinates of both I and II genera on the group $G$ depend on the choice of the basis $\{a_1,\dots,a_n\}$ in its Lie algebra $\mathfrak g$.
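For a matrix Lie group, $U_X(t) = \exp(tX)$ and the defining properties (A.23)-(A.24) can be checked concretely; the sketch below takes $X \in so(2)$, where the exponential has a closed form (rotation by angle $t$), and compares it with the power series:

```python
import numpy as np

X = np.array([[0.0, -1.0], [1.0, 0.0]])   # generator of so(2)

def U(t):
    # exp(tX) = rotation by angle t
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def expm_series(A, terms=30):
    # matrix exponential via its power series sum_k A^k / k!
    out, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

t, s = 0.7, -1.3
assert np.allclose(U(t), expm_series(t * X))   # closed form matches the series
assert np.allclose(U(t) @ U(s), U(t + s))      # one-parameter subgroup law (A.24)
assert np.allclose(U(0), np.eye(2))            # initial condition (A.23)
```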
4.4 Evaluating the Commutator with the Help of the Mapping exp

Let $G$ be a Lie group and let $X, Y \in \mathfrak g$ be elements of the corresponding Lie algebra. Then the following formula for the commutator is valid:

$$ [X, Y] = \frac{d}{dt}\Bigl(\exp(\sqrt tX)\exp(\sqrt tY)\exp(-\sqrt tX)\exp(-\sqrt tY)\Bigr)\Bigr|_{t=0}. \qquad \text{(A.26)} $$

Indeed, let us consider $X, Y$ as vector fields on $G$ and consider the following linear operator in $C^\infty(G)$:

$$ B(\tau) = e^{\tau X}e^{\tau Y}e^{-\tau X}e^{-\tau Y}. $$

Here $e^{\tau X}$ is the translation operator in $C^\infty(G)$,

$$ (e^{\tau X}f)(g) = f(\exp(\tau X)g), \quad g \in G, $$

and consequently we have

$$ \frac{d}{d\tau}e^{\tau X} = Xe^{\tau X}. $$

Furthermore, we have

$$ \frac{d}{d\tau}B(\tau) = Xe^{\tau X}e^{\tau Y}e^{-\tau X}e^{-\tau Y} + e^{\tau X}Ye^{\tau Y}e^{-\tau X}e^{-\tau Y} - e^{\tau X}e^{\tau Y}Xe^{-\tau X}e^{-\tau Y} - e^{\tau X}e^{\tau Y}e^{-\tau X}Ye^{-\tau Y}, $$

and hence

$$ \frac{d}{d\tau}B(\tau)\Bigr|_{\tau=0} = 0. $$

Differentiating once more with respect to $\tau$, we obtain in a similar way

$$ \frac{d^2}{d\tau^2}B(\tau)\Bigr|_{\tau=0} = 2[X, Y]. $$

Finally,

$$ \frac{d}{dt}B(\sqrt t)\Bigr|_{t=0} = \frac12\,\frac{d^2}{d\tau^2}B(\tau)\Bigr|_{\tau=0} = [X, Y], $$

which implies (A.26).
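Formula (A.26) can be tested numerically for matrices, where $B(\sqrt t) = I + t[X, Y] + o(t)$; the difference quotient below should approach the commutator as $t \to 0$ (the matrix sizes, scaling and tolerance are choices of this illustration):

```python
import numpy as np

def expm_series(A, terms=40):
    # matrix exponential via its power series (adequate for the tiny norms here)
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

rng = np.random.default_rng(2)
X, Y = rng.standard_normal((2, 3, 3)) * 0.3

def B(tau):
    return (expm_series(tau * X) @ expm_series(tau * Y)
            @ expm_series(-tau * X) @ expm_series(-tau * Y))

comm = X @ Y - Y @ X
t = 1e-6
# (A.26): B(sqrt(t)) = I + t[X, Y] + o(t)
approx = (B(np.sqrt(t)) - np.eye(3)) / t
assert np.allclose(approx, comm, atol=1e-2)
```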
4.5 Derived Homomorphisms

Let

$$ \varphi : G_1 \to G_2 $$

be a homomorphism of Lie groups. Consider the corresponding mapping

$$ \varphi_* \overset{\mathrm{def}}{=} \varphi_*(e) : \mathfrak g_1 \to \mathfrak g_2 \qquad \text{(A.27)} $$

of Lie algebras. We claim that the mapping (A.27) is a homomorphism. To prove this, consider the following diagram:

$$ \begin{array}{ccc} G_1 & \overset{\varphi}{\longrightarrow} & G_2 \\ \uparrow\exp & & \uparrow\exp \\ \mathfrak g_1 & \overset{\varphi_*}{\longrightarrow} & \mathfrak g_2 \end{array} $$

It is commutative. Indeed, if $X \in \mathfrak g_1$ and $U_X(t) = \exp(tX)$ is the corresponding one-parameter subgroup, then $\varphi(U_X(t))$ is a one-parameter subgroup, and

$$ \varphi(\exp X) = \varphi(U_X(1)) = \exp\Bigl(\frac{d}{dt}\varphi(U_X(t))\Bigr|_{t=0}\Bigr) = \exp(\varphi_*X), $$

as desired. Let us now use formula (A.26). We have, for $X, Y \in \mathfrak g_1$,

$$ \varphi_*([X, Y]) = \varphi_*\Bigl(\frac{d}{dt}\exp(\sqrt tX)\exp(\sqrt tY)\exp(-\sqrt tX)\exp(-\sqrt tY)\Bigr|_{t=0}\Bigr) $$
$$ = \frac{d}{dt}\varphi\Bigl(\exp(\sqrt tX)\exp(\sqrt tY)\exp(-\sqrt tX)\exp(-\sqrt tY)\Bigr)\Bigr|_{t=0} $$
$$ = \frac{d}{dt}\exp(\sqrt t\varphi_*X)\exp(\sqrt t\varphi_*Y)\exp(-\sqrt t\varphi_*X)\exp(-\sqrt t\varphi_*Y)\Bigr|_{t=0} $$
4.6 Derived Representation

Let $G$ be a Lie group and $T : G \to \operatorname{Aut}(V)$ a representation of $G$ in a vector space $V$. We wish to construct the corresponding representation of the Lie algebra $\mathfrak{g}$. We consider two cases.

A. The space $V$ is finite-dimensional, $\dim V = n$. In this case $\operatorname{Aut}(V) \simeq \mathrm{GL}(n, \mathbb{R})$ is a Lie group, and $T$ is a homomorphism of Lie groups, $T : G \to \mathrm{GL}(n, \mathbb{R})$. The corresponding representation of the Lie algebra is defined as
$$T_* : \mathfrak{g} \to \mathfrak{gl}(n, \mathbb{R}) \simeq \operatorname{End}(V)$$
(see the preceding subsection to verify that $T_*$ is a representation of the Lie algebra).

B. $V$ is a Banach space, and $T : G \to \operatorname{Aut}(V)$ is a strongly continuous representation of the group $G$. Consider the subspace $V^\infty \subset V$ consisting of all vectors $v \in V$ such that the vector-function $g \mapsto T(g)v$ on $G$ is infinitely smooth. The space $V^\infty$ is called the Gårding space of the representation $T$. The subspace $V^\infty$ is dense in $V$. Indeed, let $\varphi(g)$ be a smooth compactly supported function on $G$, and let $v \in V$ be an arbitrary element. Consider the element
$$\tilde{v} = \int_G \varphi(h)\, T(h) v\, d_l h;$$
then
$$T(g)\tilde{v} = \int_G \varphi(h)\, T(g) T(h) v\, d_l h = \int_G \varphi(h)\, T(gh) v\, d_l h = \int_G \varphi(g^{-1}h)\, T(h) v\, d_l h$$
(here $d_l h$ is a left Haar measure; see Section 3 above), and since $T(h)v$ is continuous, $T(g)\tilde{v}$ is a smooth function of $g$ on $G$. Let us now take $\varphi$ converging to the $\delta$-function. Under these conditions $\tilde{v} \to v$, so we see that $V^\infty$ is dense in $V$. Define the mapping $T_* : \mathfrak{g} \to \operatorname{End}(V^\infty)$ by setting
$$(T_* X)v = \frac{d}{dt}\, T(\exp tX)v\Big|_{t=0}$$
for $v \in V^\infty$, $X \in \mathfrak{g}$. As in Subsection 4.5, one can verify that $T_*$ is a homomorphism of Lie algebras; $T_*$ is called the derived representation of $\mathfrak{g}$ (associated with $T$). Let $X \in \mathfrak{g}$, $A = T_* X$. It is evident that $A$ may be considered as an (unbounded) operator in $V$ with domain $V^\infty$. Consider the representation
$$U(t) = T(\exp tX)$$
of the one-parameter group corresponding to the element $X$. By the properties of $T$, $U(t)$ is a strongly continuous one-parameter group of bounded linear operators in $V$ (that is, the $U(t)$ are bounded operators, $U(t)v$ is a continuous (with respect to the norm) function of $t \in \mathbb{R}$ for any $v \in V$, and $U(t)U(\tau) = U(t + \tau)$, $t, \tau \in \mathbb{R}$). By the Gelfand theorem, the generator $\hat{A}$ of the group $U(t)$ is an unbounded operator on $V$ of the form
$$\hat{A}v = \frac{d}{dt}\,(U(t)v)\Big|_{t=0}$$
with the domain $D_{\hat{A}}$ consisting of the vectors $v \in V$ for which $U(t)v$ is differentiable with respect to $t$. The operator $\hat{A}$ is a closed densely defined operator whose resolvent $R_\lambda(\hat{A})$ satisfies, for some $M > 0$, $\omega > 0$, the estimates
$$\|R_\lambda(\hat{A})^m\| \le \frac{M}{(\operatorname{Re}\lambda - \omega)^m}, \quad m = 1, 2, \ldots,\ \operatorname{Re}\lambda > \omega.$$
(And, vice versa, each closed densely defined operator satisfying such estimates is a generator of a strongly continuous group in $V$.) It is clear that $\hat{A} \supset A$ (that is, $D_{\hat{A}} \supset D_A$ and $\hat{A}v = Av$ for $v \in D_A$). In fact, $\hat{A} = \bar{A}$ (the closure of the operator $A$). This is a consequence of the following lemma.
Lemma A.2 Let $A$ be a generator of a strongly continuous one-parameter group of bounded operators $U(t) = e^{At}$ in a Banach space $V$, and let $D \subset D_A$ be a dense subspace of $V$ such that $e^{At} D \subset D$ for all $t$. Then $A = \overline{(A|_D)}$.
Proof. Denote $D_\lambda = (A - \lambda E) D$. Let us show that $D_\lambda$ is dense in $V$ for $\operatorname{Re}\lambda > \omega$. (This implies the desired result immediately, since then $R_\lambda(A)$ is determined by its restriction $R_\lambda(A)|_{D_\lambda}$ and, consequently,
$$A - \lambda E = [R_\lambda(A)]^{-1} = \overline{\bigl(R_\lambda(A)|_{D_\lambda}\bigr)^{-1}} = \overline{(A|_D - \lambda E)}.)$$
Let $v \in D$. Consider the integral
$$w = \int_0^\infty e^{-\lambda t} e^{At} v\, dt.$$
Since $\operatorname{Re}\lambda > \omega$ and $\|e^{At}\| \le M e^{\omega t}$, this integral converges; furthermore, we have
$$(A - \lambda)w = \int_0^\infty (A - \lambda)\, e^{-\lambda t} e^{At} v\, dt = \int_0^\infty \frac{d}{dt}\bigl(e^{-\lambda t} e^{At} v\bigr)\, dt = -v.$$
Thus, the element $-v = (A - \lambda)w$ may be approximated by integral sums of the form
$$\sum_i (A - \lambda)\, e^{-\lambda t_i} e^{A t_i}\, \Delta t_i\, v \in D_\lambda.$$
Since $D$ is dense in $V$, the same is true of $D_\lambda$. The lemma is proved. □
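The integral used in the proof is just the Laplace transform of the group: $(\lambda E - A)\int_0^\infty e^{-\lambda t} e^{At} v\, dt = v$ for $\operatorname{Re}\lambda > \omega$. A numerical sketch, taking as an assumed example the rotation generator on $\mathbb{R}^2$ (where $\omega = 0$, so any $\lambda > 0$ works):

```python
import numpy as np

# generator of the rotation group on R^2: e^{At} = [[cos t, sin t], [-sin t, cos t]]
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
lam = 1.0                       # any Re(lam) > omega = 0 works here
v = np.array([1.0, 2.0])

# w = int_0^infty e^{-lam t} e^{At} v dt, truncated and discretized
ts = np.linspace(0.0, 40.0, 400001)
dt = ts[1] - ts[0]
decay = np.exp(-lam * ts)
f1 = decay * (np.cos(ts) * v[0] + np.sin(ts) * v[1])
f2 = decay * (-np.sin(ts) * v[0] + np.cos(ts) * v[1])
trap = lambda f: (np.sum(f) - 0.5 * (f[0] + f[-1])) * dt   # trapezoid rule
w = np.array([trap(f1), trap(f2)])

# (lam*E - A) w recovers v, i.e. w = R_lam(A) v
print(np.allclose((lam * np.eye(2) - A) @ w, v, atol=1e-6))  # True
```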
4.7 The Lie Group Corresponding to a Lie Algebra

Let $\mathfrak{g}$ be an $n$-dimensional Lie algebra with basis $a_1, \ldots, a_n$, so that
$$[a_i, a_j] = \sum_{k=1}^n \lambda_{ij}^k\, a_k, \quad i, j = 1, \ldots, n,$$
where the $\lambda_{ij}^k$ are the structure constants of $\mathfrak{g}$ with respect to the basis $a_1, \ldots, a_n$. By the Campbell–Hausdorff theorem, there is a unique, up to isomorphism, local Lie group with Lie algebra $\mathfrak{g}$, and by the Cartan–Levi–Maltsev theorem, there is a unique connected simply connected Lie group with Lie algebra $\mathfrak{g}$. Let $G$ be a Lie group with Lie algebra $\mathfrak{g}$. We shall use the canonical coordinates of the second kind in a neighborhood of the neutral element $e \in G$: if $x = (x_1, \ldots, x_n)$ lies in a neighborhood of zero in $\mathbb{R}^n$, then we set
$$g(x) = g_n(x_n)\, g_{n-1}(x_{n-1}) \cdots g_1(x_1) \in G,$$
where $g_j(t)$ is the one-parameter subgroup of $G$ corresponding to $a_j \in \mathfrak{g}$. The composition law in the coordinate system $(x_1, \ldots, x_n)$ has the form
$$g(x)\, g(y) = g(\psi(x, y)), \qquad (A.28)$$
where $\psi$ is a smooth mapping of a neighborhood of the origin in $\mathbb{R}^n \times \mathbb{R}^n$ into $\mathbb{R}^n$, $\psi(y, 0) = \psi(0, y) = y$. The mapping $\psi$ can be expressed explicitly via the Campbell–Hausdorff–Dynkin formula. However, this expression is not needed for our aims, and we omit it.

It is easy to calculate the derivative $(\partial\psi/\partial x)(x, y)$. Since our considerations are local, we may assume, by the Ado theorem, that $\mathfrak{g}$ is realized as a matrix Lie algebra and the neighborhood of $e$ in $G$ is that in a matrix Lie group. Then $g(x)$ has the form
$$g(x) = e^{x_n a_n}\, e^{x_{n-1} a_{n-1}} \cdots e^{x_1 a_1},$$
where $e^{xa}$ is the usual matrix exponential. We now calculate the derivative $(\partial/\partial\psi_j)\, g(\psi)$. We have
$$\frac{\partial}{\partial\psi_j}\, g(\psi) = e^{\psi_n a_n} \cdots e^{\psi_j a_j}\, a_j\, e^{\psi_{j-1} a_{j-1}} \cdots e^{\psi_1 a_1}.$$
Since
$$e^{tb}\, a\, e^{-tb} = e^{t\,\mathrm{ad}_b}(a),$$
where $\mathrm{ad}_b = [b, \cdot\,]$, we obtain
$$\frac{\partial}{\partial\psi_j}\, g(\psi) = \bigl[e^{\psi_n\,\mathrm{ad}_{a_n}} \circ \cdots \circ e^{\psi_j\,\mathrm{ad}_{a_j}}(a_j)\bigr]\, g(\psi).$$
To simplify the expression in the square brackets, we note that in the basis $(a_1, \ldots, a_n)$ the operator $\mathrm{ad}_{a_s}$ has the matrix $\Lambda_s$ with entries $(\Lambda_s)_{pq} = \lambda_{sq}^p$ and, consequently, the operator $\exp(\psi_n\,\mathrm{ad}_{a_n}) \circ \cdots \circ \exp(\psi_1\,\mathrm{ad}_{a_1})$ is represented by the matrix $\exp(\psi_n\Lambda_n) \cdot \ldots \cdot \exp(\psi_1\Lambda_1)$. Thus
$$\frac{\partial}{\partial\psi_j}\, g(\psi) = \sum_{p=1}^n B_{pj}(\psi)\, a_p\, g(\psi),$$
the matrix $B(\psi) = B(\psi_1, \ldots, \psi_n) = (B_{pq}(\psi))$ being given by
$$B_{pq}(\psi) = \bigl[\exp(\psi_n\Lambda_n) \cdot \ldots \cdot \exp(\psi_q\Lambda_q)\bigr]_{pq}.$$
In particular, $B(0) = I$ (the identity matrix), so that the inverse matrix
$$C(\psi) = B^{-1}(\psi)$$
is defined when $\psi$ is close to zero.
Calculating the derivative with respect to $x_j$ on both sides of (A.28), we obtain
$$\frac{\partial}{\partial x_j}\, g(\psi(x, y)) = \sum_{p,i=1}^n \frac{\partial\psi_i}{\partial x_j}\, B_{pi}(\psi)\, a_p\, g(\psi(x, y)),$$
$$\frac{\partial}{\partial x_j}\bigl(g(x)\, g(y)\bigr) = \sum_{p=1}^n B_{pj}(x)\, a_p\, g(x)\, g(y).$$
The matrices $a_p$, $p = 1, \ldots, n$, are linearly independent, and so are the products $a_p\, g(\psi(x, y))$, since $g(\psi)$ is invertible. Thus we obtain
$$B_{pj}(x) = \sum_{i=1}^n B_{pi}(\psi)\, \frac{\partial\psi_i}{\partial x_j}, \qquad (A.29)$$
whence it follows that
$$\frac{\partial\psi_i}{\partial x_j}(x, y) = \sum_{k=1}^n C_{ik}(\psi(x, y))\, B_{kj}(x),$$
or simply
$$\frac{\partial\psi}{\partial x} = C(\psi)\, B(x) = B^{-1}(\psi)\, B(x).$$
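As an illustration, for the Heisenberg algebra ($[a_1, a_2] = a_3$, all other brackets zero) the whole construction can be carried out with $3\times 3$ matrices; in coordinates of the second kind the composition law comes out as $\psi(x, y) = (x_1 + y_1,\ x_2 + y_2,\ x_3 + y_3 + x_1 y_2)$. A sketch (the basis matrices and helper names are our choices, not from the text):

```python
import numpy as np

def E(i, j):
    m = np.zeros((3, 3)); m[i, j] = 1.0
    return m

# Heisenberg basis: a1 = E_12, a2 = E_23, a3 = E_13, with [a1, a2] = a3
a1, a2, a3 = E(0, 1), E(1, 2), E(0, 2)

def expm3(m):
    # exact for strictly upper triangular 3x3 matrices (m^3 = 0)
    return np.eye(3) + m + m @ m / 2.0

def g(x):
    # canonical coordinates of the second kind: g(x) = e^{x3 a3} e^{x2 a2} e^{x1 a1}
    return expm3(x[2] * a3) @ expm3(x[1] * a2) @ expm3(x[0] * a1)

def psi(x, y):
    # read the second-kind coordinates back off the product g(x) g(y)
    m = g(x) @ g(y)
    return np.array([m[0, 1], m[1, 2], m[0, 2]])

x = np.array([0.3, -0.2, 0.5])
y = np.array([1.0, 0.7, -0.1])
z = np.array([0.2, 0.4, 0.6])

print(np.allclose(psi(x, np.zeros(3)), x))                # psi(x, 0) = x -> True
print(np.allclose(g(psi(x, y)), g(x) @ g(y)))             # (A.28) -> True
print(np.allclose(psi(psi(x, y), z), psi(x, psi(y, z))))  # associativity -> True
```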
4.8 The Krein–Shikhvatov Theorem

Assume now that a representation of the Lie algebra $\mathfrak{g}$ is given in a Banach space. Under what conditions is it integrable, i.e., does it give rise to a strongly continuous representation of the Lie group $G$? The answer is given by the Krein–Shikhvatov theorem for strongly continuous representations in Banach spaces and by the Nelson theorem for unitary representations in Hilbert spaces. Let us prove a version of the Krein–Shikhvatov theorem suitable for our purposes.
Theorem A.2 Assume that $A_1, \ldots, A_n$ are the generators of strongly continuous groups of bounded linear operators in a Banach space $B$. Let $D \subset B$ be a dense linear subset such that:

(a) $D \subset D_{A_1} \cap \cdots \cap D_{A_n}$, and $D$ is invariant under the operators $A_j$, their resolvents $R_\lambda(A_j)$, and the groups $e^{itA_j}$, $j = 1, \ldots, n$.

(b) The operators $iA_j$ form a representation of $\mathfrak{g}$ in $D$, i.e.,
$$[A_j, A_k]h = -i\sum_{s=1}^n \lambda_{jk}^s\, A_s h, \quad h \in D,\ j, k = 1, \ldots, n.$$

Then there exists a representation $T$ of $G$ in $B$ such that the $A_j$ are its generators, $A_j = T_*(a_j)$.
Proof. Since $G$ is connected and simply connected, it suffices to construct $T(g)$ for $g$ in a neighborhood of $e$, namely in the neighborhood covered by a canonical coordinate system. For such $g$ we set
$$T(g(x)) \overset{\mathrm{def}}{=} T(x) = e^{iA_n x_n} \cdots e^{iA_1 x_1}.$$
Then $T(x)$ is a bounded, strongly continuous operator function, and moreover, $T_{a_j}(t) = \exp(iA_j t)$ is a strongly continuous group with generator $A_j$. So, to prove that the operators $T(x)$ satisfy the group law in a neighborhood of zero, i.e.,
$$T(x)\, T(y) = T(\psi(x, y))$$
for $x, y$ small enough, it suffices to check this identity on the dense subset $D \subset B$. Note that $D$ is a core of each $A_j$.

Lemma A.3 For $h \in D$ we have
$$e^{iA_s t} A_k h = \sum_{p=1}^n [\exp(t\Lambda_s)]_{pk}\, A_p\, e^{iA_s t} h, \quad s, k = 1, \ldots, n,$$
where $\Lambda_s$ is the matrix of $\mathrm{ad}_{a_s}$, $(\Lambda_s)_{pq} = \lambda_{sq}^p$, introduced in Subsection 4.7.
Proof. Consider the Banach space
$$\mathcal{Y} = B \otimes \mathbb{C}^n = \underbrace{B \oplus B \oplus \cdots \oplus B}_{n\ \text{summands}}.$$
The operators in $\mathcal{Y}$ may be represented as matrices with operators in $B$ as their entries. We introduce the operator $\mathcal{B} : B \to \mathcal{Y}$ which is the closure from $D$ of the operator $h \mapsto (A_1 h, \ldots, A_n h) \in \mathcal{Y}$ (this operator is closable: if $h_m \to 0$ and $\mathcal{B}h_m \to (\tilde{h}_1, \ldots, \tilde{h}_n)$, then $\tilde{h}_i = \lim A_i h_m = 0$, $i = 1, \ldots, n$, since the $A_i$ are closed operators), and the operator $C_s : \mathcal{Y} \to \mathcal{Y}$ of the form
$$C_s = A_s \otimes I + i\, I \otimes \Lambda_s',$$
with the domain $D_{C_s} = D_{A_s} \oplus \cdots \oplus D_{A_s}$, where $\Lambda_s'$ is the transpose of $\Lambda_s$. Then $C_s$ is a generator of the strongly continuous group $\exp(iC_s t)$ in the space $\mathcal{Y}$. Indeed, direct computation shows that
$$\exp(iC_s t) = \exp(iA_s t) \otimes \exp(-\Lambda_s' t).$$
Since the $\lambda_{jk}^s$ are antisymmetric with respect to the subscripts $j, k$, it follows that
$$\mathcal{B} A_s h = (A_1 A_s h, \ldots, A_n A_s h) = (A_s A_1 h, \ldots, A_s A_n h) - i\Bigl(\sum_{l=1}^n \lambda_{1s}^l A_l h, \ldots, \sum_{l=1}^n \lambda_{ns}^l A_l h\Bigr) = (A_s \otimes I)\mathcal{B}h + i(I \otimes \Lambda_s')\mathcal{B}h = C_s \mathcal{B}h, \quad h \in D.$$
Since $D$ is invariant under the resolvents $R_\lambda(A_s)$, we can apply Lemma IV.7 and obtain
$$\mathcal{B} \exp(iA_s t)\, h = \exp(iC_s t)\, \mathcal{B} h, \quad h \in D_{\mathcal{B}} \supset D.$$
This identity can be written in the form
$$A_p \exp(iA_s t)\, h = \exp(iA_s t) \sum_{k=1}^n [\exp(-\Lambda_s' t)]_{pk}\, A_k h = \exp(iA_s t) \sum_{k=1}^n [\exp(-\Lambda_s t)]_{kp}\, A_k h.$$
Since $\exp(-\Lambda_s t)$ is the inverse of $\exp(\Lambda_s t)$, the lemma is proved. □
Lemma A.4 For any $h \in D$ the function $T(\lambda)h$ is differentiable with respect to $\lambda \in \mathbb{R}^n$, and
$$-i\frac{\partial}{\partial\lambda_j}\, T(\lambda)h = \sum_{p=1}^n B_{pj}(\lambda)\, A_p\, T(\lambda)h.$$
Proof. Set $h(\lambda) = T(\lambda)h$. For any $\varepsilon = (\varepsilon_1, \ldots, \varepsilon_n) \in \mathbb{R}^n$, writing the difference $h(\lambda + \varepsilon) - h(\lambda)$ as a telescoping sum over the coordinates, we obtain
$$h(\lambda+\varepsilon) - h(\lambda) = \sum_{j=1}^n \bigl[h(\lambda_1, \ldots, \lambda_{j-1}, \lambda_j + \varepsilon_j, \ldots, \lambda_n + \varepsilon_n) - h(\lambda_1, \ldots, \lambda_j, \lambda_{j+1} + \varepsilon_{j+1}, \ldots, \lambda_n + \varepsilon_n)\bigr]$$
$$= i\sum_{j=1}^n \varepsilon_j\, e^{iA_n(\lambda_n+\varepsilon_n)} \cdots e^{iA_{j+1}(\lambda_{j+1}+\varepsilon_{j+1})} \Bigl(\int_0^1 e^{iA_j(\lambda_j+\tau\varepsilon_j)}\, d\tau\Bigr) A_j\, e^{iA_{j-1}\lambda_{j-1}} \cdots e^{iA_1\lambda_1} h$$
$$= i\sum_{j=1}^n \varepsilon_j\, e^{iA_n\lambda_n} \cdots e^{iA_j\lambda_j} A_j\, e^{iA_{j-1}\lambda_{j-1}} \cdots e^{iA_1\lambda_1} h + o(\|\varepsilon\|)$$
as $\|\varepsilon\| = (\varepsilon_1^2 + \cdots + \varepsilon_n^2)^{1/2} \to 0$. Here we used the strong continuity of $\exp(iA_n\lambda_n) \cdots \exp(iA_1\lambda_1)$ and the invariance of $D$ under $\exp(iA_jt)$. Thus $h(\lambda)$ is differentiable, and
$$-i\frac{\partial}{\partial\lambda_j}\, h(\lambda) = e^{iA_n\lambda_n} \cdots e^{iA_j\lambda_j} A_j\, e^{iA_{j-1}\lambda_{j-1}} \cdots e^{iA_1\lambda_1} h.$$
Successive applications of Lemma A.3 yield
$$-i\frac{\partial}{\partial\lambda_j}\, h(\lambda) = \sum_{p=1}^n \bigl[\exp(\lambda_n\Lambda_n) \cdot \ldots \cdot \exp(\lambda_j\Lambda_j)\bigr]_{pj}\, A_p\, h(\lambda) = \sum_{p=1}^n B_{pj}(\lambda)\, A_p\, h(\lambda).$$
The lemma is proved. □

For $h \in D$, set
$$h_1(x, y) = T(x)\, T(y)\, h, \qquad h_2(x, y) = T(\psi(x, y))\, h.$$
We have $h_1(0, y) = h_2(0, y) = T(y)h$. By Lemma A.4,
$$-i\frac{\partial h_1}{\partial x_j}(x, y) = \sum_{p=1}^n B_{pj}(x)\, A_p\, h_1(x, y),$$
$$-i\frac{\partial h_2}{\partial x_j}(x, y) = \sum_{s=1}^n \frac{\partial\psi_s}{\partial x_j}(x, y) \sum_{p=1}^n B_{ps}(\psi(x, y))\, A_p\, h_2(x, y) = \sum_{p=1}^n B_{pj}(x)\, A_p\, h_2(x, y)$$
by (A.29). We note that
$$B_{pj}(x_1, \ldots, x_j, 0, \ldots, 0) = [\exp(x_j\Lambda_j)]_{pj} = \delta_{pj},$$
since the $j$-th column of $\Lambda_j$ consists of zeros ($\lambda_{jj}^p = 0$), and so we have
$$-i\frac{\partial h_i}{\partial x_j}(x_1, \ldots, x_j, 0, \ldots, 0, y) = A_j\, h_i(x_1, \ldots, x_j, 0, \ldots, 0, y), \quad i = 1, 2,\ j = 1, \ldots, n.$$
Now we can prove that
$$h_1(x_1, \ldots, x_j, 0, \ldots, 0, y) = h_2(x_1, \ldots, x_j, 0, \ldots, 0, y)$$
by induction on $j$. For $j = 0$ this is the coincidence of the Cauchy data noted above; if it is valid for $j = j_0$, then for $j = j_0 + 1$ both $h_1$ and $h_2$ satisfy the same evolution equation in the variable $x_{j_0+1}$ with the same Cauchy data at $x_{j_0+1} = 0$. Since $A_{j_0+1}$ is the generator of a strongly continuous group, the solution of this Cauchy problem is unique, and we obtain $T(x)T(y)h = T(\psi(x, y))h$. The theorem is proved. □
Appendix B
Pseudodifferential Operators
Pseudodifferential operators, that is, functions of $x$ and $-i\partial/\partial x$, are widely used throughout this book. However, we do not assume that the reader is familiar with the topic, and this appendix is intended to make the book self-contained by presenting the definitions and main properties of pseudodifferential operators. Pseudodifferential operators, originally known as "singular integral operators" in the theory of elliptic equations, have a long and intricate history. Accordingly, there are quite a few expositions of this theory in the literature, starting with [109] and followed by [84], [175], [178], and others. However, we follow none of the cited papers here; after an elementary introduction, where we consider the classical Kohn–Nirenberg pseudodifferential operators (ψDO), we present an exposition, in the spirit of noncommutative analysis, of the theory of ψDO's which may have rapidly oscillating symbols. Here we mainly follow the paper [103].
1 Elementary Introduction

The aim of the present section is to give an elementary introduction to the theory of pseudodifferential operators (ψDO's in the sequel). Thus, in this section the reader will find the motivation for the notion of pseudodifferential operator and the main definitions and theorems of the theory of ψDO's, rather than an accurate description of function spaces and precise statements and proofs of the theorems. We hope, however, that this section will be of use to the beginner in ψDO theory. The reader who is interested in precise statements and proofs can find them in the subsequent sections.

One of the themes that lead to the notion of pseudodifferential operator is the problem of constructing a parametrix for an elliptic differential operator. For simplicity we shall carry out our considerations in the Cartesian space $\mathbb{R}^n$. To recall the statement of this problem, let us consider a differential equation of the form
$$\hat{H} u = f, \qquad (B.30)$$
where
$$\hat{H} = H\Bigl(\overset{2}{x},\, -i\overset{1}{\frac{\partial}{\partial x}}\Bigr) = \sum_{|\alpha| \le m} a_\alpha(x)\Bigl(-i\frac{\partial}{\partial x}\Bigr)^\alpha \qquad (B.31)$$
is an elliptic differential operator in $\mathbb{R}^n$ with smooth coefficients $a_\alpha(x)$, and $f = f(x)$ is a (in general nonsmooth⁵) function of the variables $x = (x^1, \ldots, x^n) \in \mathbb{R}^n$. The indices over the operators in (B.31) determine the order in which they act. The ellipticity of the operator (B.31) means that for any nonzero vector
$$p = (p_1, \ldots, p_n) \in \mathbb{R}_n$$
of the Cartesian space $\mathbb{R}_n$ (which is dual to the space $\mathbb{R}^n$) the principal symbol of the operator $\hat{H}$,
$$H_m(x, p) = \sum_{|\alpha| = m} a_\alpha(x)\, p^\alpha,$$
is not equal to zero:
$$H_m(x, p) \neq 0. \qquad (B.32)$$
The problem we shall now consider is to solve equation (B.30) up to sufficiently smooth terms. To state the meaning of the latter phrase precisely, one must have some tools for measuring the smoothness of functions. This can be done, for example, with the help of Sobolev spaces. The idea of introducing these spaces is based on the fact that, due to the formula
$$f(x) = \Bigl(\frac{1}{2\pi}\Bigr)^{n/2} \int e^{ipx} \tilde{f}(p)\, dp,$$
representing the function $f(x)$ via its Fourier transform $\tilde{f}(p)$, the smoothness of the function $f(x)$ is determined by the decay of $\tilde{f}(p)$ as $|p| \to \infty$. Actually, the faster the function $\tilde{f}(p)$ decreases at infinity, the smoother is the function $f(x)$. Hence, it makes sense to introduce the space $H^s(\mathbb{R}^n)$ with the norm
$$\|f\|_s^2 = \int (1 + |p|^2)^s\, |\tilde{f}(p)|^2\, dp.$$
Clearly, smoother functions belong to the spaces $H^s(\mathbb{R}^n)$ with larger values of $s$. The spaces $H^s(\mathbb{R}^n)$ are called Sobolev spaces. We remark that the family of Sobolev spaces parametrized by $s$ forms a decreasing scale of spaces, that is,
$$H^s(\mathbb{R}^n) \subset H^{s'}(\mathbb{R}^n) \quad \text{for any } s' < s.$$

⁵The reader familiar with the theory of L. Schwartz's distributions can view $f(x)$ as a distribution, that is, as an element of the space $\mathcal{S}'(\mathbb{R}^n)$.
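The correspondence between the decay of $\tilde f$ and the smoothness of $f$ is easy to observe numerically with a discrete analogue of the Sobolev norm built from the FFT. In the sketch below the grid, the normalization, and the test functions are arbitrary illustrative choices:

```python
import numpy as np

# periodic grid on [-pi, pi); a discrete stand-in for the Fourier transform
n = 256
x = np.linspace(-np.pi, np.pi, n, endpoint=False)
p = np.fft.fftfreq(n, d=x[1] - x[0]) * 2 * np.pi   # integer frequencies

def sobolev_norm(f_vals, s):
    # discrete analogue of ||f||_s^2 = int (1 + |p|^2)^s |f~(p)|^2 dp
    fhat = np.fft.fft(f_vals) / n
    return float(np.sqrt(np.sum((1 + p**2)**s * np.abs(fhat)**2)))

smooth = np.exp(-x**2)   # rapidly decaying Fourier coefficients
kink = np.abs(x)         # coefficients decay only like |p|^{-2}

# the scale is decreasing in s: H^s norms grow with s
print(sobolev_norm(smooth, 1) <= sobolev_norm(smooth, 2))  # True

# the smooth function pays far less for four extra derivatives
r_smooth = sobolev_norm(smooth, 4) / sobolev_norm(smooth, 0)
r_kink = sobolev_norm(kink, 4) / sobolev_norm(kink, 0)
print(r_smooth < r_kink)  # True
```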
It can easily be shown that the operator (B.31) acts in the Sobolev spaces as follows:
$$\hat{H} : H^s(\mathbb{R}^n) \to H^{s-m}(\mathbb{R}^n)$$
for any real value of $s$. Thus, the problem of finding a solution to equation (B.30) up to sufficiently smooth terms can be formulated as follows: find an operator
$$\hat{R}_N : H^{s-m}(\mathbb{R}^n) \to H^s(\mathbb{R}^n)$$
such that the following relation holds:
$$\hat{H}\hat{R}_N = 1 + \hat{G}_{-N}, \qquad (B.33)$$
where $\hat{G}_{-N}$ is a smoothing operator of order $-N$ for sufficiently large values of $N$, that is, the operator
$$\hat{G}_{-N} : H^s(\mathbb{R}^n) \to H^{s+N}(\mathbb{R}^n)$$
is continuous. The operator $\hat{R}_N$ satisfying relation (B.33) is called a parametrix of the operator $\hat{H}$. If a parametrix for the operator $\hat{H}$ is constructed, then the solution to equation (B.30) up to elements of the space $H^{s+N}(\mathbb{R}^n)$ is given by $u = \hat{R}_N f$; we note that the larger the value of $N$ we choose, the smoother are the functions from $H^{s+N}(\mathbb{R}^n)$.

Let us begin the construction of a parametrix. To do this, we represent equation (B.30) in the form
$$\hat{H} u(x) = \Bigl(\frac{1}{2\pi}\Bigr)^{n/2} \int e^{ipx} \tilde{f}(p)\, dp.$$
The solution of this equation (a parametrix) will be sought in the form
$$u(x) = \int G(x, p)\, \tilde{f}(p)\, dp, \qquad (B.34)$$
which solves the problem if the function $G(x, p)$ satisfies the equation
$$\hat{H} G(x, p) = \Bigl(\frac{1}{2\pi}\Bigr)^{n/2} e^{ipx}. \qquad (B.35)$$
Moreover, to construct a solution to equation (B.30) up to sufficiently smooth terms, one has to solve equation (B.35) up to functions with sufficiently strong decay at infinity. Thus, we have introduced in our problem a large numerical parameter $|p|$, which corresponds to asymptotic expansions with respect to smoothness. As we shall see below, the asymptotic expansion for solutions of equation (B.35) can be constructed in terms of homogeneous functions of the variables $p$ of decreasing order of homogeneity. Let us try to find a solution to equation (B.35) in the form
$$G(x, p) = e^{ipx} \sum_{j=k_0}^{-\infty} g_j(x, p), \qquad (B.36)$$
where the $g_j(x, p)$ are homogeneous functions of order $j$ with respect to the variables $p$ (we do not yet know the value of $k_0$, that is, the order of homogeneity of the principal term of the expansion (B.36)). Substituting the latter equality into equation (B.35), we obtain
$$\hat{H}\Bigl[e^{ipx}\sum_{j=k_0}^{-\infty} g_j(x, p)\Bigr] = \Bigl(\frac{1}{2\pi}\Bigr)^{n/2} e^{ipx}. \qquad (B.37)$$
Now we must commute the factor $e^{ipx}$ with the differential operator $\hat{H}$. To do this, we note that, due to the relation
$$\Bigl(-i\frac{\partial}{\partial x^k}\Bigr)\bigl[e^{ipx} g(x, p)\bigr] = e^{ipx}\Bigl(-i\frac{\partial}{\partial x^k} + p_k\Bigr) g(x, p),$$
the following commutation formula is valid:
$$H\Bigl(\overset{2}{x},\, -i\overset{1}{\frac{\partial}{\partial x}}\Bigr)\bigl[e^{ipx} g(x, p)\bigr] = e^{ipx}\, H\Bigl(\overset{2}{x},\, p - i\overset{1}{\frac{\partial}{\partial x}}\Bigr) g(x, p).$$
Using this relation, we rewrite (B.37) in the form
$$e^{ipx}\, H\Bigl(\overset{2}{x},\, p - i\overset{1}{\frac{\partial}{\partial x}}\Bigr) \sum_{j=k_0}^{-\infty} g_j(x, p) = \Bigl(\frac{1}{2\pi}\Bigr)^{n/2} e^{ipx}.$$
Cancelling out the factor $e^{ipx}$, we arrive at the equality
$$\Bigl[H_m(x, p) + \sum_{k=0}^{m-1} H_k\Bigl(\overset{2}{x},\, p,\, -i\overset{1}{\frac{\partial}{\partial x}}\Bigr)\Bigr] \sum_{j=k_0}^{-\infty} g_j(x, p) = \Bigl(\frac{1}{2\pi}\Bigr)^{n/2}, \qquad (B.38)$$
where the $H_k(\overset{2}{x}, p, -i\overset{1}{\partial/\partial x})$ are differential operators of order $m - k$ with coefficients homogeneous in $p$ of order $k$. Equating terms of equal order of homogeneity on the right- and left-hand sides of equality (B.38), we arrive at the following recurrence system for the unknown functions $g_j(x, p)$ (this equality also shows that the order $k_0$ of the leading term of the expansion (B.36) equals $-m$):
$$H_m(x, p)\, g_{-m}(x, p) = \Bigl(\frac{1}{2\pi}\Bigr)^{n/2},$$
$$H_m(x, p)\, g_j(x, p) = -\sum_{l=1}^{-m-j} H_{m-l}\Bigl(\overset{2}{x},\, p,\, -i\overset{1}{\frac{\partial}{\partial x}}\Bigr) g_{j+l}(x, p), \quad j \le -m - 1. \qquad (B.39)$$
In the latter formula we set $g_j(x, p) = 0$ for $j > -m$.
Due to the ellipticity condition (B.32), the latter system of equations has the unique solution
$$g_{-m}(x, p) = \Bigl(\frac{1}{2\pi}\Bigr)^{n/2} \frac{1}{H_m(x, p)},$$
$$g_{-m-1}(x, p) = -\frac{1}{H_m(x, p)}\, H_{m-1}\Bigl(\overset{2}{x},\, p,\, -i\overset{1}{\frac{\partial}{\partial x}}\Bigr) g_{-m}(x, p), \qquad (B.40)$$
and so on. The functions $g_j(x, p)$ are homogeneous functions in $p$ of order $j$. Thus, to construct a parametrix for equation (B.30), one should truncate the expansion (B.36) at a sufficiently high level. Actually, substituting the truncated expansion into (B.34), one has
$$u(x) = \hat{R}_N f(x) = \sum_{j=-m}^{-N-m} \int e^{ipx} g_j(x, p)\, \tilde{f}(p)\, dp, \qquad (B.41)$$
where the $g_j(x, p)$ are determined from (B.40) and $N$ is a sufficiently large number. Unfortunately, the functions $g_j(x, p)$, being homogeneous functions of large negative order, may have a nonintegrable singularity at the origin, and hence the integrals involved on the right-hand side of (B.41) are, in general, divergent. To overcome this difficulty, we use a smooth cut-off function $\chi(p)$ which is equal to zero in some neighbourhood of the origin and equals unity outside a compact subset of $\mathbb{R}^n$. Thus, we obtain the formula for the required parametrix:
$$\hat{R}_N f(x) = \sum_{j=-m}^{-N-m} \int e^{ipx} g_j(x, p)\, \chi(p)\, \tilde{f}(p)\, dp. \qquad (B.42)$$
It is easy to see that the latter expression does not depend on the choice of the cut-off function $\chi(p)$ up to arbitrarily smooth terms, that is, up to terms from the Sobolev spaces $H^s(\mathbb{R}^n)$ for arbitrary values of $s$.

Remark B.1 If the homogeneity order of the function $g_j(x, p)$ in (B.42) is positive, the use of the cut-off function in this formula is not necessary.
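The recurrence (B.39)–(B.40) is mechanical and can be checked symbolically. A one-dimensional sketch with the assumed sample operator $\hat H = (-i\,d/dx)^2 + b(x)(-i\,d/dx) + c(x)$ (the coefficients $b, c$ are arbitrary choices and the normalization constants are dropped): the recurrence gives $g_{-2} = 1/p^2$ and $g_{-3} = -b(x)/p^3$, and then $\hat H\bigl[e^{ipx}(g_{-2} + g_{-3})\bigr] = e^{ipx}\bigl(1 + O(|p|^{-2})\bigr)$:

```python
import sympy as sp

x = sp.symbols('x', real=True)
p = sp.symbols('p', positive=True)
b, c = sp.sin(x), sp.cos(2 * x)   # sample lower-order coefficients

def H(u):
    # the operator (-i d/dx)^2 + b(x)(-i d/dx) + c(x), with symbol p^2 + b p + c
    return -sp.diff(u, x, 2) - sp.I * b * sp.diff(u, x) + c * u

g2 = 1 / p**2        # g_{-2} = 1 / (principal symbol), constants dropped
g3 = -b / p**3       # g_{-3} obtained from the recurrence (B.39)
G = sp.exp(sp.I * p * x) * (g2 + g3)

# residual of H G = e^{ipx} after cancelling the exponential
E = sp.simplify(H(G) * sp.exp(-sp.I * p * x)) - 1

# two terms of the parametrix leave an error of order p^{-2}
print(sp.limit(p * E, p, sp.oo))  # 0
```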
Remark B.2 Any differential operator $H(\overset{2}{x}, -i\overset{1}{\partial/\partial x})$ can be written in a form similar to the right-hand side of (B.42):
$$H\Bigl(\overset{2}{x},\, -i\overset{1}{\frac{\partial}{\partial x}}\Bigr) f(x) = \Bigl(\frac{1}{2\pi}\Bigr)^{n/2}\sum_{j=0}^{m} \int e^{ipx} H_j(x, p)\, \tilde{f}(p)\, dp, \qquad (B.43)$$
where
$$H_j(x, p) = \sum_{|\alpha| = j} a_\alpha(x)\, p^\alpha, \quad j = 0, \ldots, m,$$
are the components of the complete symbol $H(x, p)$ of the operator $\hat{H}$ that are homogeneous of order $j$. Here we have omitted the cut-off function $\chi(p)$ in accordance with the previous remark.

Thus, we arrive at the notion of pseudodifferential operator, using the operator on the right in (B.42) as a model.

Definition B.1 The operator
$$P\Bigl(\overset{2}{x},\, -i\overset{1}{\frac{\partial}{\partial x}}\Bigr) f(x) = \Bigl(\frac{1}{2\pi}\Bigr)^{n/2}\int e^{ipx} P(x, p)\, \chi(p)\, \tilde{f}(p)\, dp \qquad (B.44)$$
is called a pseudodifferential operator with symbol $P(x, p)$.
is called a pseudodifferential operator with symbol P(x , p). The only question which we must discuss to complete the definition of a pseudodifferential operator is the class of symbols used in Definition B.1. If we consider in (B.44) only polynomial (with respect to the variables p) symbols, then the class of pseudodifferential operators will coincide (due to (B.43)) with the class of differential operators. Certainly, this symbol class is too small; it does not even contain parametrices of elliptic differential operators. The smallest suitable class of symbols, which was in essence introduced in the original paper by J. J. Kohn and L. Nirenberg [109], is the class of so-called classical symbols. We say that the smooth function P(x, p) is a classical symbol of order m if for any multi-indices a, p the estimate
( a a— ( aX)a
Y P(alx
p)
Cap (1 ± 1191 2) 1146
"
with some positive constant C. We denote by Sm the set of classical symbols6 of order m. The set of classical symbols is suitable for constructing parametrices for elliptic differential operators. However, for other problems this class can turn out to be too small. A rather general symbol class will be considered in the subsequent sections; here we restrict ourselves to consideration of the class Sm. Let us formulate the two main theorems of the theory of pseudodifferential operators. The first of them is concerned with the action of a pseudodifferential operator in the Sobolev spaces Hs (Rn), and the second treats the composition of pseudodifferential operators. 6 For
a rigorous definition one should impose some conditions on the behavior of symbols as 1 x1 -oc. Exact conditions of this kind are presented, for example, in the cited paper. For simplicity, the reader can assume that all symbols considered have compact support with respect to the variable x, or that these symbols do not depend on x outside a compact set (which can itself depend on the symbol).
Theorem B.1 Let $P(x, p)$ be a classical symbol of order $m$. Then the corresponding ψDO
$$P\Bigl(\overset{2}{x},\, -i\overset{1}{\frac{\partial}{\partial x}}\Bigr) : H^s(\mathbb{R}^n) \to H^{s-m}(\mathbb{R}^n)$$
is a continuous operator for any $s \in \mathbb{R}$.
Theorem B.2 Let $P(x, p)$ and $Q(x, p)$ be classical symbols of orders $m_P$ and $m_Q$, respectively. Then the relation
$$P\Bigl(\overset{2}{x},\, -i\overset{1}{\frac{\partial}{\partial x}}\Bigr) \circ Q\Bigl(\overset{2}{x},\, -i\overset{1}{\frac{\partial}{\partial x}}\Bigr) = R\Bigl(\overset{2}{x},\, -i\overset{1}{\frac{\partial}{\partial x}}\Bigr)$$
holds modulo operators of order $-\infty$ in the Sobolev scale. The symbol $R(x, p)$ of the operator $R(\overset{2}{x}, -i\overset{1}{\partial/\partial x})$ has the asymptotic expansion
$$R(x, p) \sim \sum_{\alpha} \frac{(-i)^{|\alpha|}}{\alpha!}\, \frac{\partial^{|\alpha|} P(x, p)}{\partial p^\alpha}\, \frac{\partial^{|\alpha|} Q(x, p)}{\partial x^\alpha}$$
as $|p| \to \infty$.

We shall not present the proofs of these theorems here. The reader can find them in many books and papers concerned with this topic (see, for example, [84], [175], and others). We also remark that we have left outside the framework of this short presentation such important questions as the behavior of pseudodifferential operators under changes of variables, the theory of pseudodifferential operators on smooth manifolds, and so on. These topics can also be found in the literature cited above.
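For differential operators the expansion in Theorem B.2 is a finite sum and is exact, which makes it easy to verify symbolically. A one-dimensional sketch with $P = p^2$ and $Q = a(x)\,p$, where $a$ is an arbitrary sample coefficient:

```python
import sympy as sp

x, p = sp.symbols('x p', real=True)
a = sp.exp(-x**2)        # sample coefficient
P = p**2                 # symbol of (-i d/dx)^2
Q = a * p                # symbol of a(x)(-i d/dx)

def op(sym, u):
    # apply the differential operator with symbol sym = sum_k c_k(x) p^k to u(x)
    return sum(c * (-sp.I)**k * sp.diff(u, x, k)
               for (k,), c in sp.Poly(sym, p).terms())

# exact symbol of the composition, read off by applying it to e^{ipx}
e = sp.exp(sp.I * p * x)
R_exact = sp.simplify(op(P, op(Q, e)) / e)

# Theorem B.2: R = sum_alpha (-i)^|alpha|/alpha! d_p^alpha P d_x^alpha Q (finite here)
R_series = sum((-sp.I)**k / sp.factorial(k)
               * sp.diff(P, p, k) * sp.diff(Q, x, k) for k in range(3))

print(sp.simplify(R_exact - R_series))  # 0
```

Both sides come out as $a p^3 - 2i a' p^2 - a'' p$, so the expansion reproduces the composition exactly in this case.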
2 Symbol Spaces and Generators

The pseudodifferential operators used in this book usually act in symbol spaces. Technically, various symbol spaces may be considered; however, the most important in applications is $S^\infty(\mathbb{R}^n)$, the poly-Banach algebra of symbols of tempered growth introduced in Chapter III. For this reason, we shall define and study pseudodifferential operators in $S^\infty(\mathbb{R}^n)$. In what follows we use the abbreviation ψDO for "pseudodifferential operator".

Recall that $S^\infty(\mathbb{R}^n)$ is defined as the union of intersections
$$S^\infty(\mathbb{R}^n) = \bigcup_l \bigcap_k S_l^k(\mathbb{R}^n), \qquad (B.45)$$
where $S_l^k(\mathbb{R}^n)$ is the space of $C^k$ functions on $\mathbb{R}^n$ with finite norm
$$\|f\|_{S_l^k(\mathbb{R}^n)} = \sup_x\, (1 + |x|^2)^{-l/2} \sum_{|\alpha| \le k} |f^{(\alpha)}(x)|,$$
endowed with the corresponding convergence (see Chapter III). The space $S_l^k(\mathbb{R}^n)$ is not a Hilbert space, and in order to make our exposition as elementary as possible (and almost independent of Chapter III), we consider the following representation of $S^\infty(\mathbb{R}^n)$ in terms of Hilbert spaces:
$$S^\infty(\mathbb{R}^n) = \bigcup_l \bigcap_k H_l^k(\mathbb{R}^n), \qquad (B.46)$$
where $H_l^k(\mathbb{R}^n)$ is the completion of $C_0^\infty$ with respect to the Hilbert norm
$$\|u\|_{H_l^k(\mathbb{R}^n)} = \Bigl(\int (1 + |x|^2)^{-l}\, \bigl|(1 - \Delta)^{k/2} u(x)\bigr|^2\, dx\Bigr)^{1/2}.$$
(As an exercise, one can easily check that (B.45) and (B.46) give the same result by using Sobolev's embedding theorems.) The convergence in $S^\infty(\mathbb{R}^n)$ can be described as follows: a generalized sequence $\{f_\alpha\} \subset S^\infty(\mathbb{R}^n)$ is said to converge to zero if there exists an $l$ such that for any $k$ we have $f_\alpha \in H_l^k$ for sufficiently large $\alpha$ and $\|f_\alpha\|_{H_l^k} \to 0$.

Pseudodifferential operators are elements of the algebra $\mathcal{L}(S^\infty, S^\infty)$ of all continuous linear operators on $S^\infty(\mathbb{R}^n)$. For Hilbert spaces $B_1, B_2$, let $\mathcal{L}(B_1, B_2)$ denote the Banach space of all continuous linear operators $A : B_1 \to B_2$ equipped with the operator norm
$$\|A\|_{\mathcal{L}(B_1, B_2)} = \sup_{x \in B_1,\ x \ne 0} \frac{\|Ax\|_{B_2}}{\|x\|_{B_1}}.$$
The algebra $\mathcal{L}(S^\infty, S^\infty)$ has the form
$$\mathcal{L}(S^\infty, S^\infty) = \bigcap_l \bigcup_r \bigcap_s \bigcup_k \mathcal{L}(H_l^k, H_r^s), \qquad (B.47)$$
that is, each operator $T \in \mathcal{L}(S^\infty, S^\infty)$ has the following property: $\forall l\ \exists r\ \forall s\ \exists k$ such that $T$ extends by continuity to a bounded linear operator from $H_l^k$ to $H_r^s$. The convergence on $\mathcal{L}(S^\infty, S^\infty)$ is introduced as follows: a generalized sequence $\{T_\alpha\} \subset \mathcal{L}(S^\infty, S^\infty)$ is said to converge to zero if $\forall l\ \exists r\ \forall s\ \exists k$ such that $T_\alpha \in \mathcal{L}(H_l^k, H_r^s)$ for sufficiently large $\alpha$ and $\|T_\alpha\|_{\mathcal{L}(H_l^k, H_r^s)} \to 0$.

Both $S^\infty$ and $\mathcal{L}(S^\infty, S^\infty)$ are poly-Banach algebras with respect to the convergence described above.
We are interested in $S^\infty$-generators in $\mathcal{L}(S^\infty, S^\infty)$. Recall that an operator $A \in \mathcal{L}(S^\infty, S^\infty)$ is an $S^\infty$-generator if there exists a (necessarily unique) continuous homomorphism
$$\mu : S^\infty(\mathbb{R}^1) \to \mathcal{L}(S^\infty, S^\infty)$$
such that $\mu(y) = A$, where $y$ is a coordinate on $\mathbb{R}^1$.

Lemma B.1 An operator $A$ is an $S^\infty$-generator if and only if there exists a one-parameter subgroup $\{e_t,\ t \in \mathbb{R}\} \subset \mathcal{L}(S^\infty, S^\infty)$ such that

i) $e_t$ is differentiable with respect to $t$ and
$$-i\frac{de_t}{dt}\Big|_{t=0} = A;$$

ii) the subgroup $\{e_t\}$ is of tempered growth in $\mathcal{L}(S^\infty, S^\infty)$ as $t \to \infty$, that is,
$$\forall l\ \exists r\ \forall s\ \exists k\ \exists\beta : \quad \|e_t\|_{H_l^k \to H_r^s} \le C\, (1 + |t|)^\beta. \qquad (B.48)$$
Proof. We shall not prove the "only if" part of the lemma, since it is not used in our subsequent considerations. Let us prove the "if" part, that is, the sufficiency of conditions i) and ii). Let $f \in S^\infty(\mathbb{R}^1)$. We intend to define $\mu(f) = f(A)$. This can be done as follows. For some $l$ we have
$$f \in \bigcap_k H_l^k(\mathbb{R}^1),$$
whence it follows that the Fourier transform of $\varphi(y) = (1 + y^2)^{-N} f(y)$ is continuous and decays rapidly as $|t| \to \infty$ (it suffices to take $N > l + 1/2$). Consider the integral
$$I = \frac{1}{\sqrt{2\pi}} \int \tilde{\varphi}(t)\, e_t\, dt, \qquad (B.49)$$
where $\tilde{\varphi}(t)$ is the Fourier transform of $\varphi(y)$. Since $\tilde{\varphi}(t)$ decays faster than any negative power of $t$ as $t \to \infty$, it follows from (B.48) that $\forall l\ \exists r\ \forall s\ \exists k$ such that the integral in (B.49) converges in $\mathcal{L}(H_l^k, H_r^s)$. By the definition of convergence in $\mathcal{L}(S^\infty, S^\infty)$, this integral converges in $\mathcal{L}(S^\infty, S^\infty)$. Hence, we can set
$$f(A) = (1 + A^2)^N\, \frac{1}{\sqrt{2\pi}} \int \tilde{\varphi}(t)\, e_t\, dt.$$
Trivial computations show that the result does not depend on the choice of admissible $N$ and that the mapping $f \mapsto f(A)$ is a homomorphism, obviously continuous.
Furthermore, for $f(y) = y$ we have
$$f(A) = (1 + A^2)\, \frac{1}{\sqrt{2\pi}} \int F_{y\to t}\Bigl(\frac{y}{1+y^2}\Bigr)\, e_t\, dt = \frac{1}{\sqrt{2\pi}} \int F_{y\to t}\Bigl(\frac{y}{1+y^2}\Bigr)\Bigl(1 - \frac{\partial^2}{\partial t^2}\Bigr) e_t\, dt,$$
since it follows easily from condition i) that $-i\,\partial e_t/\partial t = A e_t$ for all $t \in \mathbb{R}$. Here $F_{y\to t}$ is the Fourier transform. Continuing the computations (integrating by parts and using elementary properties of the Fourier transform, in particular $F_{y\to t}(y) = i\sqrt{2\pi}\,\delta'(t)$), we obtain
$$f(A) = \frac{1}{\sqrt{2\pi}} \int e_t\,\Bigl(1 - \frac{\partial^2}{\partial t^2}\Bigr) F_{y\to t}\Bigl(\frac{y}{1+y^2}\Bigr)\, dt = \frac{1}{\sqrt{2\pi}} \int e_t\, F_{y\to t}(y)\, dt = -i\frac{\partial e_t}{\partial t}\Big|_{t=0} = A,$$
as desired. The proof is complete. □

Using the lemma, we can easily show that the following operators are $S^\infty$-generators in $\mathcal{L}(S^\infty, S^\infty)$:
a) The operator of multiplication by the coordinate $x_j$ for each $j = 1, \ldots, n$ (this operator will be denoted by the same symbol $x_j$).

b) The differentiation operator $p_j = -i\partial/\partial x_j$, $j = 1, \ldots, n$.

c) The operator $p_j + \varphi(x)$, where $\varphi(x)$ is a smooth real function on $\mathbb{R}^n$ all of whose derivatives are bounded.

Indeed, we only have to construct the corresponding one-parameter groups and to check that they are of tempered growth. In case a) we have
$$e_t = \exp(ix_j t) = e^{ix_j t}$$
(the operator of multiplication by $e^{ix_j t}$); we obviously have
$$\|e_t\|_{H_l^k \to H_l^k} \le C\, (1 + |t|)^{k}.$$
In case b) we have
$$e_t = \exp\Bigl(t\frac{\partial}{\partial x_j}\Bigr)$$
and
$$e_t \Psi(x) = \Psi(x_1, \ldots, x_{j-1}, x_j + t, x_{j+1}, \ldots, x_n);$$
it is easy to observe that $\|e_t\|_{H_l^k \to H_l^k} \le C\, (1 + |t|)^{|l|}$.

Finally, in case c) we have
$$e_t = \exp\Bigl\{i\int_0^t \varphi(x_1, \ldots, x_j + \tau, \ldots, x_n)\, d\tau\Bigr\} \exp\Bigl\{t\frac{\partial}{\partial x_j}\Bigr\},$$
which is a one-parameter group of tempered growth in $\mathcal{L}(S^\infty, S^\infty)$.
3 Pseudodifferential Operators

Pseudodifferential operators are functions of the multiplication and differentiation operators
$$x = (x_1, \ldots, x_n) \quad \text{and} \quad -i\frac{\partial}{\partial x} = \Bigl(-i\frac{\partial}{\partial x_1}, \ldots, -i\frac{\partial}{\partial x_n}\Bigr),$$
which satisfy the commutation relations
$$\Bigl[x_k,\, -i\frac{\partial}{\partial x_j}\Bigr] = i\delta_{kj}.$$
Now let $f \in S^\infty(\mathbb{R}^n_x \times \mathbb{R}^n_p)$.

Definition B.2 The operator
$$f\Bigl(\overset{2}{x},\, -i\overset{1}{\frac{\partial}{\partial x}}\Bigr) = f\Bigl(\overset{2}{x_1}, \ldots, \overset{2}{x_n},\, -i\overset{1}{\frac{\partial}{\partial x_1}}, \ldots, -i\overset{1}{\frac{\partial}{\partial x_n}}\Bigr) \qquad (B.50)$$
is called the pseudodifferential operator with symbol $f$.
It is easy to show that the operator (B.50) acts on an arbitrary function $u(x) \in C_0^\infty(\mathbb{R}^n)$ according to the formula
$$f\Bigl(\overset{2}{x},\, -i\overset{1}{\frac{\partial}{\partial x}}\Bigr) u(x) = F^{-1}_{p\to x}\bigl[f(x, p)\, F_{y\to p}[u(y)]\bigr] = \Bigl(\frac{1}{2\pi}\Bigr)^{n/2}\int e^{ipx} f(x, p)\, \tilde{u}(p)\, dp, \qquad (B.51)$$
where
$$F_{y\to p}[\varphi(y)] = \tilde{\varphi}(p) = \Bigl(\frac{1}{2\pi}\Bigr)^{n/2}\int e^{-ipy} \varphi(y)\, dy, \qquad F^{-1}_{p\to x}[\Psi(p)] = \Bigl(\frac{1}{2\pi}\Bigr)^{n/2}\int e^{ipx} \Psi(p)\, dp$$
are the direct and the inverse Fourier transforms, respectively.
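Formula (B.51) translates directly into a discrete sketch: transform, multiply by $f(x, p)$ on the grid, and sum back. The periodic grid, the FFT normalization, and the test symbols below are illustrative choices (on such a grid the momenta $p$ run over the integers):

```python
import numpy as np

n = 256
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
p = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi   # integer momenta

def pdo(symbol, u):
    # discrete (B.51): the derivative (index 1) acts first via the FFT,
    # multiplication by x (index 2) acts second via the x-dependence of f
    uhat = np.fft.fft(u) / n
    phase = np.exp(1j * np.outer(x, p))
    return np.sum(phase * symbol(x[:, None], p[None, :]) * uhat[None, :], axis=1)

u = np.sin(x)

# symbol f(x, p) = p reproduces -i u'
out1 = pdo(lambda X, P: P, u)
print(np.allclose(out1, -1j * np.cos(x)))        # True

# symbol f(x, p) = x p: multiplication acts after differentiation
out2 = pdo(lambda X, P: X * P, u)
print(np.allclose(out2, x * (-1j * np.cos(x))))  # True
```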
smb {T} (x, p) = e_(x)Te(x).
(B.52)
Definition B.3 The function smb {T} defined in (B.52) is called the symbol of the operator T. This definition is justified by the following lemma.
Lemma B.2 If 1
a
2
T = f (x, —i— , f E S(Rrxi X Rnp )
aX
is a *DO, then
smb {T} = f.
(B.53)
Proof. To prove (B.53) we use (B.51). We have
$$T e_p(x) = f\Bigl(\overset{2}{x},\, -i\overset{1}{\frac{\partial}{\partial x}}\Bigr) e^{ipx} = F^{-1}_{\eta\to x}\bigl[f(x, \eta)\, F_{y\to\eta}[e^{ipy}]\bigr] = F^{-1}_{\eta\to x}\bigl[f(x, \eta)\, (2\pi)^{n/2}\delta(\eta - p)\bigr] = f(x, p)\, e^{ipx} = f(x, p)\, e_p(x). \qquad (B.54)$$
On multiplying both sides of (B.54) by $e_{-p}(x)$, we obtain the desired identity. □
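Identity (B.53) is easy to check for a differential operator, where conjugation by $e^{ipx}$ recovers the symbol pointwise. A sympy sketch for the assumed sample operator $T = (-i\,d/dx)^2 + x(-i\,d/dx)$, whose symbol should be $p^2 + xp$:

```python
import sympy as sp

x, p = sp.symbols('x p', real=True)

def T(u):
    # the operator (-i d/dx)^2 + x (-i d/dx)
    return -sp.diff(u, x, 2) - sp.I * x * sp.diff(u, x)

e_p = sp.exp(sp.I * p * x)
smb = sp.simplify(sp.exp(-sp.I * p * x) * T(e_p))   # (B.52): e_{-p} T e_p

print(sp.simplify(smb - (p**2 + x * p)))  # 0
```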
Our next step is to widen the class of symbols for which pseudodifferential operators are defined, so as to make an arbitrary element of $\mathcal{L}(S^\infty, S^\infty)$ a "pseudodifferential operator" in the sense that
$$T = \operatorname{smb}\{T\}\Bigl(\overset{2}{x},\, -i\overset{1}{\frac{\partial}{\partial x}}\Bigr). \qquad (B.55)$$
Let $H^{s_1,s_2}_{l_1,l_2}(\mathbb{R}^n_x \times \mathbb{R}^n_p)$ denote the completion of $C_0^\infty(\mathbb{R}^n_x \times \mathbb{R}^n_p)$ with respect to the norm
$$\|f\|_{H^{s_1,s_2}_{l_1,l_2}} = \Bigl[\iint (1 + |x|^2)^{-l_1} (1 + |p|^2)^{-l_2}\, \bigl|(1 - \Delta_p)^{s_2/2} (1 - \Delta_x)^{s_1/2} f(x, p)\bigr|^2\, dx\, dp\Bigr]^{1/2}.$$
Consider the poly-Banach space
$$\mathcal{L}^\infty(\mathbb{R}^n_x \times \mathbb{R}^n_p) = \bigcap_{s_2} \bigcup_{r_1} \bigcap_{s_1} \bigcup_{r_2} H^{s_1,s_2}_{r_1,r_2}(\mathbb{R}^n_x \times \mathbb{R}^n_p).$$
This is a poly-Banach algebra with respect to pointwise multiplication. Obviously, we have
$$S^\infty(\mathbb{R}^n_x \times \mathbb{R}^n_p) \subset \mathcal{L}^\infty(\mathbb{R}^n_x \times \mathbb{R}^n_p),$$
and the embedding is continuous. Let $T \in \mathcal{L}(S^\infty, S^\infty)$. Then the function $\operatorname{smb}\{T\}$ defined in (B.52) belongs to $\mathcal{L}^\infty(\mathbb{R}^n_x \times \mathbb{R}^n_p)$.

Theorem B.3 The mapping
$$f \mapsto f\Bigl(\overset{2}{x},\, -i\overset{1}{\frac{\partial}{\partial x}}\Bigr)$$
can be extended to a linear homeomorphism
$$\mu : \mathcal{L}^\infty(\mathbb{R}^n_x \times \mathbb{R}^n_p) \to \mathcal{L}(S^\infty, S^\infty).$$
The inverse mapping is given by the formula
$$\mu^{-1}(T) = \operatorname{smb}\{T\}.$$
Proof. It is well known that for any $f \in L^2(\mathbb{R}^n_x \times \mathbb{R}^n_p)$ the operator

$$\hat f = f\Bigl(\overset{2}{x},\, -i\,\overset{1}{\frac{\partial}{\partial x}}\Bigr)$$

is a Hilbert–Schmidt operator in $L^2(\mathbb{R}^n)$, and hence we have

$$\|\hat f\|_{L^2(\mathbb{R}^n)\to L^2(\mathbb{R}^n)} \le \bigl[\operatorname{Tr}(\hat f^{\,*}\hat f)\bigr]^{1/2} = (2\pi)^{-n/2}\,\|f\|_{L^2(\mathbb{R}^n_x\times\mathbb{R}^n_p)}. \tag{B.56}$$

From the last estimate, by multiplying $f$ by $(1-\Delta)^s$ and $(1+x^2)^k$ with appropriate $k$ and $s$, we obtain the estimate

$$\|\hat f\|_{H^{r+k}(\mathbb{R}^n)\to H^{r+s}(\mathbb{R}^n)} \le C_{rsk}\,\|f\|_{H^{s,k}_{r,l}(\mathbb{R}^n_x\times\mathbb{R}^n_p)} \tag{B.57}$$

for any $f \in H^{s,k}_{r,l}(\mathbb{R}^n_x \times \mathbb{R}^n_p)$.

In $\mathcal{L}^\infty(\mathbb{R}^n_x \times \mathbb{R}^n_p)$ consider the convergence inherited from the spaces $H^{s_1,s_2}_{r_1,r_2}$. Then $\mathcal{L}^\infty$ is a completion of $S^\infty$ in this convergence; it follows from the estimate (B.57) that the mapping

$$\mu : S^\infty(\mathbb{R}^n_x \times \mathbb{R}^n_p) \longrightarrow \mathcal{L}(S^\infty, S^\infty)$$

takes generalized Cauchy sequences into generalized Cauchy sequences and hence extends to a continuous linear mapping from $\mathcal{L}^\infty(\mathbb{R}^n_x \times \mathbb{R}^n_p)$ to $\mathcal{L}(S^\infty, S^\infty)$.

Let us show that $\mu$ is a homeomorphism, that is, the inverse mapping

$$\mu^{-1} = \operatorname{smb} : \mathcal{L}(S^\infty, S^\infty) \longrightarrow \mathcal{L}^\infty(\mathbb{R}^n_x \times \mathbb{R}^n_p)$$

is continuous. To this end we establish the following estimate on the symbol of an operator $T \in \mathcal{L}(S^\infty, S^\infty)$: $\forall l \in 2\mathbb{Z}_+\ \exists s \in \mathbb{R}\ \forall r \in 2\mathbb{Z}_+\ \exists k \in \mathbb{R}$ such that

$$\|\operatorname{smb}\{T\}\|_{H^{l,r}_{-s+l,\,r-k}(\mathbb{R}^n_x\times\mathbb{R}^n_p)} \le C_{lrsk}\,\|T\|_{H^{k+n}_{r+n}(\mathbb{R}^n)\to H^{s}_{r}(\mathbb{R}^n)}. \tag{B.58}$$
Here $2\mathbb{Z}_+$ is the set of nonnegative even integers. The estimate (B.58) is equivalent to the continuity of the mapping

$$\operatorname{smb} : \mathcal{L}(S^\infty, S^\infty) \longrightarrow \mathcal{L}^\infty(\mathbb{R}^n_x \times \mathbb{R}^n_p).$$

Let us now prove (B.58). Since $T \in \mathcal{L}(S^\infty, S^\infty)$, it follows that $\forall l \in 2\mathbb{Z}_+\ \exists s \in \mathbb{R}\ \forall r \in 2\mathbb{Z}_+\ \exists k \in \mathbb{R}$ such that

$$\|T\|_{H^{k+n}_{r+n}(\mathbb{R}^n)\to H^{s}_{r}(\mathbb{R}^n)} < \infty.$$

Set $\operatorname{smb}\{T\} = f$; then

$$T = f\Bigl(\overset{2}{x},\, -i\,\overset{1}{\frac{\partial}{\partial x}}\Bigr) = \hat f.$$

It follows from (B.56) that

$$\|f\|^2_{H^{l,r}_{-s+l,\,r-k}} = (2\pi)^n\,\operatorname{Tr}\bigl(\hat f_1 \hat f_1^{\,*}\bigr),$$

where

$$f_1(x,p) = (1+|x|^2)^{(-s+l)/2}\,(1+|p|^2)^{(r-k)/2}\,(1-\Delta_x)^{l/2}(1-\Delta_p)^{r/2} f(x,p).$$

Consequently, it remains to estimate the Hilbert–Schmidt norm of $\hat f_1$ via the norm of $\hat f = T$ in the scale $\{H^s_r\}$. This is easy to carry out by using the formula that relates $\hat f_1$ to $\hat f$ (we have purposely chosen $l, r \in 2\mathbb{Z}_+$, so that $f_1$ is expressed in terms of the derivatives of $f$ of integral order). As a result, we obtain

$$\|f\|_{H^{l,r}_{-s+l,\,r-k}} \le C_{lrsk}\,\|\hat f\|_{H^{k+n}_{r+n}(\mathbb{R}^n)\to H^{s}_{r}(\mathbb{R}^n)}.$$

The theorem is proved. ∎
Corollary B.1 An operator $f\bigl(\overset{2}{x},\, -i\,\overset{1}{\partial/\partial x}\bigr)$ is continuous in $S^\infty(\mathbb{R}^n)$ if and only if $f \in \mathcal{L}^\infty(\mathbb{R}^n_x \times \mathbb{R}^n_p)$.

We observe that the linear structure and the convergence structure of the spaces $\mathcal{L}^\infty(\mathbb{R}^n_x \times \mathbb{R}^n_p)$ and $\mathcal{L}(S^\infty(\mathbb{R}^n), S^\infty(\mathbb{R}^n))$ are the same. However, these spaces have different algebraic structures. By using the homeomorphism $\mu$, we can transfer the multiplication from $\mathcal{L}(S^\infty, S^\infty)$ to $\mathcal{L}^\infty(\mathbb{R}^n_x \times \mathbb{R}^n_p)$, thus obtaining a noncommutative multiplication in the symbol space $\mathcal{L}^\infty(\mathbb{R}^n_x \times \mathbb{R}^n_p)$. This multiplication is given by the formula

$$f * g = \operatorname{smb}\Bigl\{\Bigl[\Bigl[f\Bigl(\overset{2}{x},\, -i\,\overset{1}{\frac{\partial}{\partial x}}\Bigr)\Bigr]\Bigr]\,\Bigl[\Bigl[g\Bigl(\overset{2}{x},\, -i\,\overset{1}{\frac{\partial}{\partial x}}\Bigr)\Bigr]\Bigr]\Bigr\}, \tag{B.59}$$

where $[[\,\cdot\,]]$ are autonomous brackets (see Chapter I). An explicit formula for the symbol of the product in (B.59) is given by the following theorem.
Theorem B.4 The following relation holds:

$$f * g = f\Bigl(\overset{2}{x},\; \overset{1}{p} - i\,\overset{1}{\frac{\partial}{\partial x}}\Bigr)\,g(x,p)$$

whenever $g \in S^\infty(\mathbb{R}^n_x \times \mathbb{R}^n_p)$.

The proof follows from the results of Chapter II for the case in which both $f$ and $g$ lie in $S^\infty(\mathbb{R}^n_x \times \mathbb{R}^n_p)$. The general result ($f \in \mathcal{L}^\infty(\mathbb{R}^n_x \times \mathbb{R}^n_p)$, $g \in S^\infty(\mathbb{R}^n_x \times \mathbb{R}^n_p)$) then follows by continuity.
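Theorem B.4 can be illustrated numerically. Take $f(x,p) = p$ and $g(x,p) = \sin(x)\,p$; then $f * g = f\bigl(\overset{2}{x}, \overset{1}{p} - i\,\overset{1}{\partial/\partial x}\bigr)g = \sin(x)\,p^2 - i\cos(x)\,p$, and the operator with symbol $f * g$ must coincide with the composition of the two original operators. A sketch (Python with NumPy; the symbols and test function are illustrative choices):

```python
import numpy as np

N = 256
x = 2 * np.pi * np.arange(N) / N
p = np.fft.fftfreq(N, 1.0 / N)            # integer momenta on the grid

def D(u):                                  # the operator -i d/dx via FFT
    return np.fft.ifft(p * np.fft.fft(u))

u = np.exp(np.sin(x))                      # smooth periodic test function

# T_g = sin(x) * (-i d/dx)   (symbol g(x,p) = sin(x) p in the (2,1) ordering)
# T_f = -i d/dx              (symbol f(x,p) = p)
composed = D(np.sin(x) * D(u))             # T_f T_g u

# Theorem B.4: f * g = sin(x) p^2 - i cos(x) p, i.e. the operator
# sin(x) (-i d/dx)^2 - i cos(x) (-i d/dx)
via_symbol = np.sin(x) * D(D(u)) - 1j * np.cos(x) * D(u)

print(np.allclose(composed, via_symbol))   # True
```

Both sides agree to machine precision; the correction $-i\cos(x)\,p$ is exactly the derivative term produced by the shift $p \mapsto p - i\,\partial/\partial x$.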
Glossary
Ado's theorem A theorem stating that each finite-dimensional Lie algebra can be represented as a subalgebra of a matrix Lie algebra.

asymptotic expansion If $H$ is a linear space equipped with a decreasing filtration,

$$H = H_0 \supset H_1 \supset H_2 \supset \cdots,$$

then an asymptotic expansion of an element $h \in H$ with respect to the filtration is a sequence $\{h_n\}$ of elements of $H$ such that

$$h - h_n \in H_n, \qquad n = 1, 2, 3, \dots.$$

Depending on the choice of the space and the filtration, various particular types of asymptotic expansions can be obtained.

asymptotic expansion with respect to parameter A type of asymptotic expansion in which $H$ is a space of bounded mappings $f : (0, \infty) \to W$, $\lambda \mapsto f(\lambda)$, where $W$ is a Banach space, and the filtration is determined by the condition

$$f \in H_n \iff \lambda^n f \in H \quad\text{(asymptotic expansion as } \lambda \to \infty\text{)}$$

or

$$f \in H_n \iff \lambda^{-n} f \in H \quad\text{(asymptotic expansion as } \lambda \to 0\text{)}.$$

asymptotic expansion with respect to smoothness An asymptotic expansion in which $H$ is a function space and the filtration is determined by the condition $f \in H_n \iff$ the derivatives of $f$ up to the $n$th order exist and belong to $H$.

asymptotic expansion with respect to growth at infinity An expansion in which $H$ is a space of functions of the variable(s) $x$, and the greater $n$ is, the less rapidly the elements $f \in H_n$ grow at infinity (and, for $n$ large, the more rapidly they decay at infinity).

autonomous brackets A kind of brackets $[[\,\cdot\,]]$ used to limit the scope of Feynman indices in operator expressions. A subexpression in autonomous brackets has to be computed separately and used in subsequent computations as a single operator; Feynman indices in the subexpression are valid only within the brackets. Examples:

$$\bigl(\overset{1}{A} + \overset{2}{C}\bigr)^2 = A^2 + 2CA + C^2,$$

but

$$\bigl[\bigl[A + C\bigr]\bigr]^2 = (A + C)^2 = A^2 + AC + CA + C^2;$$

$$e^{\overset{2}{B}} e^{\overset{1}{A}} = e^{\overset{1}{A} + \overset{2}{B}} \ne \bigl[\bigl[e^{A+B}\bigr]\bigr] = e^{A+B} \quad\text{if } [A, B] \ne 0.$$

The autonomous brackets themselves may bear Feynman indices, which are assigned to the operator resulting from the computation of the expression in brackets. Thus,

$$\Bigl(\overset{1}{\bigl[\bigl[\overset{1}{A}\overset{2}{B}\bigr]\bigr]} + \overset{3}{C}\Bigr)^2 = BABA + C^2 + 2CBA,$$

whereas

$$\bigl(\overset{1}{A}\overset{3}{B} + \overset{2}{C}\bigr)^2 = B^2A^2 + C^2 + 2BCA.$$

If we use Feynman indices only in a part of an operator expression, it is preferable to enclose parts of operator expressions in autonomous brackets so as to avoid misunderstanding; e.g., for

$$C e^{\overset{1}{A} + \overset{2}{B}} \qquad (e^{\overset{1}{A} + \overset{2}{B}} \text{ multiplied by } C \text{ on the left})$$

it is better to write $\overset{3}{C} e^{\overset{1}{A} + \overset{2}{B}}$ or $C\bigl[\bigl[e^{\overset{1}{A} + \overset{2}{B}}\bigr]\bigr]$.

Often autonomous brackets prove useful even if there are no Feynman indices at all, or at least inside the brackets, as in the formula

$$e^{-iS(x)}\,H\Bigl(\overset{2}{x},\, -i\,\overset{1}{\frac{\partial}{\partial x}}\Bigr)\,e^{iS(x)} = H\Bigl(\overset{2}{x},\, \Bigl[\Bigl[\overset{1}{\frac{\partial S}{\partial x} - i\frac{\partial}{\partial x}}\Bigr]\Bigr]\Bigr);$$

we could also write $H\bigl(\overset{2}{x},\, \overset{1}{\partial S/\partial x} - i\,\overset{1}{\partial/\partial x}\bigr)$ for the right-hand side.
Banach scale A collection of Banach spaces {Ba } ad , indexed by a poset I, such that there is a continuous embedding Ba C Bfi for a < p. Given a Banach scale, one can obtain various poly-Banach spaces by choosing an arbitrary filter A of sections of I and by setting
13A= u
AEA
B, fl aEA
329
Glossary
with the convergence defined as follows: a generalized sequence ix 4 1 is convergent if it is convergent, for some A e A, in all Ba with a E A. Campbell—Hausdorff—Dynkin formula An important formula of Lie theory expressing ln(e B eA ) via A, B, and their commutators: c ln(e B e A ) = A + B ± t
(_1)c
E
[Bu Am' . . . Blk Amk 13]
k=1 k ± 1 li+mi>0,1;>0,mi>0
Willi! • • • lk!mk!
'
where the bracket in the numerator stands for the commutator (adA)ml ... (adB) 4 (adA)mk (B) [Bu iAtm' . . . B ikA ni kBi = (adB) I '
11 ± • • • ± lk + 1
Can be obtained by developing into a usual Taylor series in powers of t from the closed formula in(e tB e A ) = A +
t f ln(et adB eadA)
et adB e adA _ 1 (B) dt
o provided by noncommutative analysis. Campbell—Hausdorff theorem A theorem stating that in a local Lie group G the multiplication law is completely defined by the Lie bracket in its Lie algebra Ç. In the exponential coordinates of first genus,
1 A•B=A-1-B-F—[A,B1+• 2
where A, B c g, the neighborhood of zero in g is identified with that in G via the exponential mapping, and the dots stand for commutators of order > 2. 1
n
characteristics Let A be an operator algebra, and let A = (A1, . . . , An ) be a Feynman I n tuple possessing a left ordered representation /1, . . . , ln . If B = f (Ai, . . . , A n ) is a function of A, then one may consider the pseudodifferential operator
1 a iB = f((li, . • . , inn ) —= H (y, —i —) . ay The zeroes of its principal symbol (Hamiltonian) are called the characteristics of B. In fact, the definition depends on the type of homogeneity considered, which, in turn, is closely related to which asymptotic expansions are to be obtained for solutions to the equation Bu = u.
Glossary
330 commutation formula The formula
sf 1 3 2 [A, f (B)] = [A, B]—(B, B). Sy commutator The commutator [A, B] of two elements A, B of an associative algebra A is defined as [A, B] = AB — BA. commutation relations Let A be an algebra determined by a set of generators A1, .. . , An and relations il
ilc
[co(Ai i , ... , Ai k ) = 010)E E.
The system of relations E is called a system of commutation relations if it possesses a left ordered representation, that is, there exist operators
L i , ... , L n on the space of n-ary symbols such that
1 (Idj f)(Al , • • • ,
An)
n
1
n = Ajfff
(Al,
•••,
An)]
for any n-ary symbol f. In other words, the system of relations is rich enough to guarantee that in any product we can rearrange the operators A 1 , ... , An in the order prescribed by their indices. composite function formula A formula expressing 12
fief
f a[g(A, B)]]) 12
where C = g(A, B). It reads 12
12
f ([[(g (A , B)JI) = f (g (A , B)) 28 78 sg 2 7 8 8 2 f 1 12 A, B)—([[g(A, BA, g(A, B), g(A, B)). ±[A, B]—(A, B, B)—(A, 3),2 5
Sg
3 4 6
8 Y2 8 Y1 For the particular case in which eyi, y2) = yi + y2 one has, say, 1 2
1
8 2 f1
1
21
4
f IA ± B]]) = f (A, B) ± [A, B]-, ([[A ± B]], A ± B, A ± B), Sy L and there is a variety of similar formulas.
Glossary
331
Daletskii-Krein formula A formula expressing the derivative with respect to a parameter of a function of an operator depending on this parameter. It reads 3 — f (A(t)) = A' (t)-(A(t), A(t)), dt Sx 2
d
3f1
where 8f, f (x) - f (y) , tx, y) = x-y
— 8x
is the difference derivative of f (x). It can also be arranged as 8 df (A) =—f (RA, LA). Sx Here d f (A) is the differential of the function f, considered as a mapping f : A -> A A }--> f (A) at the point A, and RA and L A are the operators of right and left multiplication by A, respectively. However, the latter formula seems less justified in case f is only partially defined on A (recall that f (A) is meaningless if A is not a generator). derivation formula The formula
_ L 6ff 1 3A), D(f (A)) = D A (A, Sy
where D is any derivation of the operator algebra. difference derivatives The difference derivative of a function f (y), y E Ilk, is a function of two variables defined by 6f (Yi , Y2) = f Sy
f (Yi) - f (Y2)
Y 1 - Y2 .r (Y1),
'
Y1 0 Y2, Y1 = Y2.
Difference derivatives of higher order are defined inductively; to find S k f 1Sy k one should fix all but one argument of 8k-1 My" and apply 818y with respect to the remaining argument. Here are some useful properties of difference derivatives. a) S k f 1Sy k is a symmetric function of its k + 1 arguments; if f is smooth, then so is S k f 1 Sy k for any k.
Glossary
332 b) 8k f
sy k (Y1 , • • • , Yk+i )
Jdi
f
0
0
—
1.1 1 —• — 114- I
dp,2
f (k) (,U1Y1
+ • • + tikyk + (1— E kti) Yk+i) (here f (k) is the kth derivative of f); c) k+1
sk f syk (Yi, • • • Yk+i) =
if yi
yi for i
ji (y.; - yi) -1
f (Y.; ) j=1
i=i,i0i
j;
d)
Sk f
sy k (Y
1 Y ) = -/j f ("(y),
7•••7
e) 1
skf
sy
...,
k
.7C)
(
=
f
1
k — 1)!
.t.)k—lf(k)( ty
±
_ r)x)dr;
o
0 (
Ra y ) (
a
) ai
•••
al! • • • Œk+1!
(k
lai)!
a )
Œk+1 s k
f
Sy (Yi , • • • Yk+i)
ayk+i
YI ="'=Yk+1
=Y
f (k+Ial) (Y).
difference-differential equations An equation containing both derivatives and finite differences. Such an equation can always be considered as a pseudodifferential equation since the difference derivatives can be expressed via the derivation operator; e.g. f(x h) — f (x) f(x) 8h h is expressed as e1 '
Sh =
etc.
—
h
1
Glossary
333
differential of an operator function The linear part of the increment f (A ± B) — f (A)
considered as a function of B. The differential is the element of L(A), the algebra of linear operators in the operator algebra A, and is given by the formula d[f (A)] =
f
(L A, RA),
Sy
where L A and RA are the operators of left and right multiplication by A, respectively. intertwining operators Let A = (A 1 , . . . , An ) and B = (B1 , . . . , Bn ) be linear operators on the spaces E and F, respectively, and let p, : E --)- F be a linear operator. Then say that p, is an intertwining operator for the tuples A and B, or merely that p, intertwines A and B, if the diagram A•i
E
A
E
i
I Bj
F
F
commutes for any j = 1, . . . , n.
factor extracting rule The rule of noncommutative calculus stating that in the operator expression J! f
il
(Ai,
••,
km
k1
in
An) g(B1, • • • , B m)
in
the factor f (Ai, ... , An ) can be isolated by enclosing it into autonomous provided that the indices k 1 , . .. , km all fall within two categories: either ki > max( j 1 , ... ,in),
or ki < min( ji , ... , jn ), i = 1, ... , m. This being true, we have Ji f (Ai, • • • ,
in
k1
km
il
il
in
k1
km
An) g( 13 1, • • • , B rn) = Lf Mi, • • • , A0118'(B1, • • • , B m).
Glossary
334
Feynman indices Indices in operator expressions showing the order in which operators act (the arrangement of operators), the greater is an index, the closer to the left the corresponding operator stands. In writing Feynman indices they are placed just over the corresponding operator argument or over the left autonomous bracket, e.g., 1
2
1
2
2
(A ± B) (A — B) = A— BA + BA ± B 2 = A 2 ± B 2 , 7 2 11 3 1[AB — ClD = D(B A — C), etc. The value of an operator expressions depends on the order relation between Feynman indices rather than on the indices themselves. Also indices over operator arguments commuting with all the other arguments in an operator expression are irrelevant and may be omitted by convention.
Feynman quantization Here quantization is understood as a rule taking each symbol f (y 1 , . .. , yn ) into the operator f (A i , . .. , A n ) (for a prescribed tuple A1,. A n ). Different quantizations use different ways to define
f(A i , ..., An ). The term "Feynman quantization" refers to the case in which one sets
1 n (A An), 1, .. . , f (Y 1, • • • , Y„) = f i.e., Feynman indices are used to remove the ambiguity in the definition of f (A i , . . . , A n ). il
in
Feynman tuple A collection A = (Ai, . . . , A n ) of generators in an algebra A, equipped with Feynman indices j, . .. , jn . If f (y 1 , . .. , yn ) is an n-ary symbol then we can define the function of A by setting f (A)
=
in
h
def f
(Ai, • • • , An).
Feynman's extraction formula The formula
t t t T- exp [ f (A(r) ± B (r)) dr] = T- exp ( f A(r) dr) T- exp ( f C (r) dr), 0
0
0
where
t t C(t) = [T-exp ( f A(r) dr)] 1 B(t) T- exp ( f A(r) dr), 0
0
335
Glossary t
allowing one to extract the factor T- exp ( J A (v) dr) from the T-exponential of
0
A(r) + B(r).
2.
1
Fourier integral operators Functions of (x, —t 0/8x) with symbols given by oscillatory integrals and naturally occurring as parametrices of the Cauchy problem (pseudo)differential equations of hyperbolic type.
generators A generator is an element of an operator algebraon which functions can be defined. Specifically, let Y be a symbol space. An element A of an operator algebra A is called an Y-generator (or simply a generator, if Y is clear from the context) if there exists a continuous algebra morphism
such that ILA (Y) = A. The morphism it,A, if it exists at all, is always unique. Notation: p, A( f) = f (A). The element f (A) of the algebra A is called the function of A with symbol f (y).
graded Lie algebras Let G be an abelian group, and let x:GxG-÷C\{0} be a mapping such that i) x (g, .) is a homomorphism of groups for each fixed g
E
G;
ii) x (g , h)x(h, g) --_a 1. A G-graded Lie algebra of colour x is a g-graded vector space L equipped with the bilinear graded Lie bracket [., .] such that [A, 13 ] =
=
EDg EGL g
-x ( I A I, I B 1)[B, A ]
(graded antisymmetry) [A, [B, C ] l = [[A, Bl, C] ± X GAL 1/3 1)[B, [A, C ] l (graded Jacobi identity) (here I A I is the gradation of A, etc.). Any G-graded associative algebra A = ED gEG Ag can be made into a graded Lie algebra by setting [A, B] = AB — x(IAI, IBDB A. In applications to particle physics (supersymmetries) one mostly deals with the case
x x, y) = ( -1 ) xY . (
Hamiltonian See characteristics.
Glossary
336 Hamilton—Jacobi equation The equation
H x, aS(x) ) = 0,
ax
(
satisfied by the phase S(x) in the WKB-approximation
ifr(x) = e (i/h)s(x) g)(x) to the solution of the Schr6dinger equation 1 2
a
H x, ( —ih— * = 0 ax with Hamiltonian H(x, p), and various similar equations arising in the context of Maslov's canonical operator, Fourier integral operators, etc.
Heisenberg algebra The Lie algebra R2n+1 (x,
y, ) = (xi, • • • , xn,
yi, • • • ,
Yn, )
with the bracket (commutation relations) [xi, yk] = iOik, [xi, xk] =[Yi, Yk] = [xi, 0 = [Yi , 0 =
0,
where Sik is the Kronecker delta. It has the representation x 1. I--> X .J
(multiplication operators),
a
1--> —i—, ax;
Yi 4-
in L2 (Rn ), widely used in quantum mechanics and the theory of differential, pseudodifferential, and Fourier integral operators. The left ordered representa2 21 i 3 tion for the tuple (xi , x,, yi, . . . , yn , ) is ,
...
ly;
= yj — i
a
—. axi
Heisenberg commutation relations See Heisenberg algebra. 12
index permutation formula Any of the formulas permitting one to pass from f (A, B) 21 to
f (A, B), such as 1 2 3 12 82 f 1 5 2 4 f (A, B) = f (A, B) ± [B, A]— (A, A, B, B). Bx8y
337
Glossary
Jacobi condition The Jacobi condition is a condition on the operators of the left or1 n dered representation. Let (A1, .. . , A n ) be a Feynman tuple, and suppose that the operators A 1 , . . . , A n satisfy a system of alglebraic relations, say, {
ii is o)(Ai i , ..., AO = 01
, co€S2
where the sequences i l , . . . , is and j1 , . . . , is are, in general, chosen differently for different a) E Q. Suppose also that there exists a left ordered representation (i v . . . , In ) of the tuple (A i , . . . , A n ). Then the Jacobi condition requires that fi
for all a) E
,
is
l is ) = 0
a.
In fact, the Jacobi condition guarantees monomorphy of the mapping 1 f (x i , . . . , xn ) ---> f (A, . . . ,
n
A)
provided that the system 0 and its consequences exhaust all possible relations between the operators A 1 , .. . , A n . In the context of Lie algebras, this leads directly to the Poincaré—Birkhoff—Witt theorem stating that the ordered monomials Aan n , . . . , Ar form a basis in the enveloping algebra of the Lie algebra with basis A 1 , ... , A n . The system of equations given by the Jacobi condition can be viewed as a system for finding / 1 , . . . , in . Being equipped with the additional regularity condition 1
n
f( 1 1,•••,ln)( 1 ) = f(Yi, • • •, Yn) ,
it may well serve as such. Jacobi identity The identity [A, [B, C]] = [[A, B], C] ± [B, [A, C]] for the commutation in a Lie algebra, or the identity [A, [B, C ] l = RA, Bl, Cl + X(1A1, 1B D[B , [A, C ] l for the graded x -commutator on a graded Lie algebra.
338
Glossary
Leibniz rule The identity D(AB) = D(A)B ± AD(B)
to be satisfied by any derivation D of an algebra.
Nelson's condition A condition on the generators A 1 , .. . , An of a strongly continuous unitary representation of an n-dimensional Lie group G. Nelson's condition states that A 1 , .. . , A„ must be essentially self-adjoint on some dense subset D in the representation space H together with the operator A = —(Ai ± Ai ± • • • ± A 2n ).
Newton formula The expansion of f (C) N1 2 f (C) — f
(A) =
—
f (A) 3
sk f 1
2k
E ffc - Al... [[c - A]1— esxk (A, A, ... ,
2k+1
A
) + RN
k=1
with the remainder 2
2N
RN = IC — Al . . . E[ C —
esk f
1 3 2k+1 A). — . , (C, A, Al — sxk
Lie algebra A Lie algebra L is a linear space L equipped with a bilinear operation [. , .1 (Lie bracket) such that [A, B]±[B, A] = 0 for any A, B e L,
(skew-symmetry)
[[A, B], C] ± [[B, C], A] + [[C, A], B] = 0 (Jacobi identity). 1
n
normal form Let A be an operator algebra and A = (A 1 , . . . , A n ) a Feynman tuple of Y-generators on A (here Y is some fixed symbol class). A normal form of an operator B E A is its representation as 1 n B = f (Ai, • • • , An)
with some f E F,1 . The normal form neither exists nor is unique in general. However, if the tuple A has the left ordered representation (h . , . . . , In ) consisting of Y-generators in Y, then any operator of the form s 1 B = q)(Aj i , . . . , Ajs ), çoeFs
can be reduced to a normal form by setting s I f (Y 1 , • • • , Yn) = q)(lii, • • • , lis)(1).
The normal form is unique if the left ordered representation operators satisfy the generalized Jacobi condition.
Glossary
339
operator algebras In this book, by an "operator algebra" we mean an associative algebra with identity element equipped with an appropriate convergence (see Chapter IV for details on this point). That is, an operator algebra A is a linear space A over C on which a bilinear operation (multiplication) A, B 1----> AB is defined such that (i) (AB)C = A(BC) for any A, B, C E A;
(ii) there exists an element 1
E
A (the two-sided identity element) such that
A1= lA = A for any A E A;
(iii) the linear operations and the multiplication are continuous, e.g., if Ace and Ba are two (generalized) sequences in A converging, respectively, to A and B, then A a Ba is convergent to AB. Typical examples of operator algebras are supplied by algebras of continuous linear operators in some linear space. What is more, any operator algebra can be realized in this way by using the actual embedding A ---÷ L (A) , where £ (A) is the algebra of linear operators in A (see regular representations).
operator-valued symbol Let W and V be operator algebras, let T be a symbol class, and let f be a W-valued function of (yi, • • • , yn) such that f A1, .. . , A n are T-generators in V, then the operator 1 B = f (Ai, . . . ,
E Tn 6W .
If
itz ) E 17 (W
I n is well-defined (the mapping f 1-4- f (A 1 , .. . , A n ) extends by continuity from Fn 0 W, where its definition is obvious, to Fn W). The operator B is called a 1
n
function of A1, . . . , An with operator-valued symbol f. A slightly more complicated situation occurs if f takes values in the algebra itself where A1, . . . , An lie, f e .Tn 6VV. The values of f need not commute with A1, . .. , An , so that one must be careful. We define k
1
n
B = f (Ai, • • • , An),
where k, the Feynman index of the symbol, is distinct from 1, . . . , n, as follows. Let f =EfiOvi ETn OV. We set k
1
n
1
It
k
f (Ai, .
(this is well-defined since vi occur only linearly, and we need not assume that vi is a generator). This definition extends to the entire .T.n 6V by continuity.
Glossary
340 1
n
ordered representations Let F be a fixed symbol space, A = (Ai, ... , A n ) a Feynman tuple in an operator algebra A. The left ordered representation of A on Fn 1
n
is the Feynman tuple 1 = (lie ... ,1 n ) of operators determined by the following properties: (i) (li f )(A) = Ailf (A)]I for any i = 1, . . . , n and f (ii) (li f)(y) = yi f (y) whenever f is independent of
E
Fn;
Under the assumption that / 1 , ... , in are F-generators in portant property holds for these operators:
Fn , the following im-
[[ f (A)]1E g (A)] = [ f (Og](A)
for any f, g
E
Fn .
The right ordered representation is defined similarly; we replace the conditions (i) and (ii) by the conditions (i') (ri f)(A) = [f (A)1A for any i = 1, . . . ,n and f
E Fn;
(ii') (ri f)(y) = yi f (y) whenever f is independent of y 1 , ... , yi _ 1 .
Then If (A)1[g(A)]] = [g(r) f](A)
for any f, g
E
.F, .
path integral Also Feynman's path integral. The expression
t i f (pq — H) dr ifro(xo)dx0
f f [dx] [— di ] exp
=
27r
0 NI
def
=
iE ( Pk (xk+i —x0-1-1(xk+1,Pk,ric)Ark
f Ilin N--*oo, max Ati--0
e
k° =
)
R2N
--r dxkdpk I I 27
N-1 1 X
ifro(xo)
k=0
for the solution to the Schrtidinger equation
1
)
.8* a 2 —1— + H (x, —i — ifr = 0 ax at with the initial data
Ifr(x, 0) = *0(x).
Glossary
341
Poincaré---Birkhoff—Witt theorem A theorem stating that in the enveloping algelbra U(L) of a given Lie algebra L the elements 41 , ... , aai n form a basis, where (a1 , .. . , an ) is a basis of L.
Poisson algebra A space (13 of smooth functions on a manifold M, equipped with a Poisson bracket. A Poisson bracket is a bilinear operation {•, •} on 4) such that rsa
if
'
gl
af ag _ Ewik(x)_____ = j,k 8x, a Xk
in local coordinates. For this operation the skew-symmetry property
tf, 81 = —fg, fl, and the Jacobi identity (f, fg,
10) ± VI, (f, gll +
( g, fh, fl) = 0
are satisfied.
poly-Banach space A linear space B equipped with a polynorm p:BxI-->R + U(oo)
satisfying the axioms given below and with convergence defined as follows: a generalized sequence {x4 ) is convergent in B if for some A E A (where A is a given filter of sections of the bidirected set I) and for any a E Ap (xp,, a) < co and xt, is convergent in the seminorm X., a). Axioms of polynorm: 1) For any x E B there exists an a such that p(x , a) < 00. 2) p (. , a) is a seminorm on 13„ = fx E B I p(x , a) < oo). 3) p(x, a) < p(x, 13) whenever a > p. 1
n
product theorem A theorem stating that if a Feynman tuple A = (A1, . .. , An ) possesses a left ordered representation, then for any two symbols f, g E Fn the 1
n
1
n
product If (Ai, . .. , A n )r[g(iii, ... , A n )] can be reduced to a normal form,
1
n 1 Li. (Al, • • • , An)i1i[g(Al, • • • ,
n
1
n
1
n
An)] = U(11, • • • ,ln)gliAi, • • • , An)
provided that /1, . . . , in can be substituted into f (e.g., f is a polynomial or /1, . .. , in are .F-generators in Fn ). Similarly, 1 1 1 n n n —n —1 [[ f (Ai, . . . , A n )Hg(Ai, .. . , A n )]] = [g(ri , . . . , rn )f](211, ... where ri, .. . , rn are the operators of the right ordered representation. We use here the negative indices over the operators ri, . . . , rn.
342
Glossary
pseudodifferential operators Functions of the operators 2
1
—ia/aX)
with symbols f (x, p) that are sums of homogeneous functions with respect to p (the classical pseudodifferential operators) or with symbol f (x, p) satisfying the estimates aa+fi f
< co ± 1 p wn —pla1+81/31,
axaapfi (x, 13)
lai, L81
= 0, 1, 2,
when x lies in a compact set; here 0 < 8 < p < 1 (Htirmander's classes of 1 pseudodifferential operators),In the broad sense, any function of —i8/8x) may be referred to as a pseudodifferential operator. quantization This word has quite a few different meanings. In this book it refers to the choice of ordering of operators in substituting them for (commuting) numerical arguments of a function. See Feynman quantization, Weyl quantization. quantum oscillator The quantum system described by the Schrtidinger equation
ax
=
with quadratic Hamiltonian H(x, p) = P2 2m
(02x2.
quantum groups Quantum groups were introduced in [114], [166], and [47] as a formal object useful in the quantum method of the inverse scattering problem. The theory of quantum groups has been developing intensively since then; we refer the reader to [38], [170], [32], and the paper cited therein for detailed information (one should also mention the very important paper [152]). Here we only give some brief remark, following [38] and [170]. First, let us recall the notion of a Poisson algebra. Let M be a smooth manifold and A = F (M) a space of functions on M. Suppose that A ia equipped with a bilinear operation (a, b) {a, b) that is antisymmetric and satisfies the Jacobi identity. Then we say that A is a Poisson algebra and refer to { , ) as the Poisson bracket. A quantization of a Poisson algebra A on M is an algebra Ah over the algebra C[[h]] of formal power series such that i) there is an isomorphism Ah/ hAh
A;
Glossary
343
ii) [a, b] mod h = (a, b) under this isomorphism; iii) Ah is a topologically free Cah11-module.
Now let G be a Lie group and H = F(G) a space of functions on G. Suppose that H is equipped with a Poisson bracket such that the multiplication map
is a morphism of Poisson manifolds. Note that H is a Hopf algebra; namely, the mappings A :H---+HOH
(comultiplication),
S: H
H (antipode), and
e:H
C (counit)
are defined by duality to the multiplication, inversion x 1--> x -1 , and unit ( e) ---> G, respectively, and these mappings satisfy the axioms of a Hopf algebra. A quantum group is a quantization Hq of H in the class of Poisson Hopf algebras (here q is the tuple of quantization parameters). Thus, in the spirit of algebraic geometry, we consider a ring H of functions on G as the primary object and quantize H rather that G itself. Note that G can be reconstructed from H by applying the functor Spec; in contrast, Hq is usually not commutative, so applying Spec to Hq would make little sense. Here by Spec H we denote the space of maximal ideals in the ring H. It is convenient to describe quantum groups in dual terms. Let H be the Hopf algebra of regular functions on G. Then the universal enveloping algebra U(Ç) of the corresponding Lie algebra g is contained in H*, and H is known to be isomorphic to a Hopf algebra consisting of matrix elements of finite-dimentional representations of u (g). The quantization u (g) of U(Ç) is easy to describe (see the example below). Then Hq can be defined as the Hopf algebra generated by matrix elements of finite-dimensional representations of u (g). Consider a simple example (quantization of SL2(C)). Let
e
1 01 0)
01 1
o
)
'
h
=
f _(
0 )
0
be the standard Chevalley basis of the Lie algebra s12 = s12 (C). We have the commutation relations
[e, f]= h, [h, e] =2e, [h, f]= —2f.
(B.60)
The universal enveloping algebra U(s12) is the algebra with generators e, f, h, determined by the relations (B.60).
Glossary
344
0, ±1 be a complex number. Consider the associative algebra Now let q Ug (s/2) with generators e, f, a, a -1 and defining relations
aa-1 = = 1, ae=q2ea, af = 47 -2 fa, [e, f]= (a — a 1 )/(q — q -1 ). The algebra U(s12) can be obtained as the limit of Uq (s 1 2) as q —›- 1 in the following way. Let a = 1 + Eh + • • • , E
q = 1 + E,
0.
Then we obtain
e + Ehe = (1 +28)(e + eeh) + • • • , or
Elh,e]=2se + • • , and [h, e] 2e as E ---> 0. Similarly, [h, f] —2f and [e, f] h. We omit the verification of the fact that Ug (s12) has the required Hopf algebra structure. Hence, we have described the quantization of the enveloping algebra U(s12)• regular representations Representations of an algebra on itself by left and right multiplications. Let A be an associative algebra with identity element. For any A E A denote by LA and RA two linear operators acting on the linear space A by the formula LAB = AB, RAB = BA, B E A. The linear mappings L: A
R: A
G(A)
A(A)
and AI-4 LA
A— RA,
where L(A) is the algebra of linear operators on A, are called the left and the right regular representations of A. In fact, L is a representation, but R is an anti representation since it reverses the order of factors, RARB = RBA.
Both L and R are faithful representations, and they commute with each other, LARB = LBRA
for any A, B
E
A.
345
Glossary
representations A representation of an associative algebra A on a linear space E is a homomorphism of A into the algebra End(()E) of linear operators on E. A representation of a Lie algebra is defined in the same way, but the algebra End(OE) should be considered as a Lie algebra with Lie bracket [A, B]tf AB — BA, A, B E End(E). Sometimes a representation is understood as a morphism into an arbitrary algeEnd(E) is said to be faithful if Ker ço = 0, i.e. bra. A representation ço : A q)(a) = 0 implies a = O.
resolvent formulas The identites for the resolvent of a linear operator A: R(A) — R A (A) = Rx(A)R 1,(A) — Â)
(first resolvent identity);
R(A)
—
R(B) = Rx(A)(A — B)Rx(B)
(second resolvent identity).
semigroup property A family of linear operators T (t), t
E
114, is said to satisfy the
semigroup property if T (t
r) = T (t)T (r), t, r
E
R.
stationary phase method A method to obtain the asymptotic expansion as $h \to 0$ of the integral
$$I(h) = \left(\frac{1}{2\pi i h}\right)^{n/2} \int_{\mathbb{R}^n} e^{iS(x)/h}\,\varphi(x)\,dx,$$
where $h \in (0, 1]$, $S(x)$ is smooth and real-valued, and $\varphi(x)$ is smooth and compactly supported. Suppose that $S(x)$ has a unique stationary point $x_0$ on $\operatorname{supp}\varphi(x)$, that is,
$$\frac{\partial S}{\partial x}(x_0) = 0, \qquad \frac{\partial S}{\partial x}(x) \ne 0 \quad \text{for } x \ne x_0, \; x \in \operatorname{supp}\varphi.$$
Furthermore, suppose that the stationary point is nondegenerate in the sense that
$$J = \det\left(\frac{\partial^2 S(x_0)}{\partial x_i\,\partial x_j}\right) \ne 0.$$
Then the integral $I(h)$ can be expanded into an asymptotic series in powers of $h$,
$$I(h) \simeq I_0 + I_1 h + I_2 h^2 + \cdots,$$
where
$$I_0 = \frac{e^{iS(x_0)/h}\,\varphi(x_0)}{\sqrt{J}},$$
and the argument of $J$ is determined by
$$\arg J = \sum_{k=1}^{n} \arg \lambda_k,$$
where $\lambda_k$ are the eigenvalues of the matrix $\bigl(\partial^2 S(x_0)/\partial x_i\,\partial x_j\bigr)$ and $\arg \lambda_k$ are chosen in the interval
$$-\frac{\pi}{2} < \arg \lambda_k < \frac{3\pi}{2}.$$
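The leading term can be tested numerically in dimension $n = 1$. The sketch below takes $S(x) = x^2/2$ and $\varphi(x) = e^{-x^2}$ (rapidly decaying rather than compactly supported, a harmless simplification for illustration); then $x_0 = 0$, $J = 1$, the leading term equals $1$, and the error is $O(h)$:

```python
import numpy as np
from scipy.integrate import quad

# n = 1, S(x) = x^2/2 (stationary point x0 = 0, J = S''(0) = 1),
# amplitude phi(x) = exp(-x^2).
h = 0.05
re, _ = quad(lambda x: np.cos(x**2 / (2 * h)) * np.exp(-x**2), -6, 6, limit=500)
im, _ = quad(lambda x: np.sin(x**2 / (2 * h)) * np.exp(-x**2), -6, 6, limit=500)

I_h = (re + 1j * im) / np.sqrt(2 * np.pi * 1j * h)  # prefactor (1/(2*pi*i*h))^(1/2)
I_0 = 1.0  # leading term e^{iS(x0)/h} * phi(x0) / sqrt(J)

assert abs(I_h - I_0) < 2 * h  # the leading term is accurate to O(h)
print(f"I(h) = {I_h:.5f}, |I(h) - I_0| = {abs(I_h - I_0):.4f}")
```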
There is also a version of the stationary phase method for the case in which the function $S(x)$ is complex-valued.

symbol spaces Intuitively, symbols are functions whose arguments we can replace by operators, thus obtaining functions of operators and operator expressions. In order to obtain a calculus, however, some assumptions should be made. They are listed below, and throughout the book we require them to be valid for all symbol spaces considered.

(a) Unary symbols. A space $\mathcal{F}$ of unary symbols is a space consisting of functions $f(y)$ depending on a single variable $y$ (complex or real). We require that:

(a1) The functions $f \in \mathcal{F}$ are defined either on some domain in $\mathbb{R}$ or $\mathbb{C}$ or in a neighborhood of some closed subset in $\mathbb{R}$ or $\mathbb{C}$.

(a2) $\mathcal{F}$ is an algebra with respect to pointwise multiplication of functions. $\mathcal{F}$ contains the constant functions and the coordinate function $f(y) = y$.

(a3) $\mathcal{F}$ is equipped with some convergence (cf. Chapter IV), and the algebraic operations are continuous.

(a4) The difference derivative $\delta/\delta y$ acts continuously from $\mathcal{F}$ into $\mathcal{F} \mathbin{\hat\otimes} \mathcal{F}$, where $\hat\otimes$ denotes the completed tensor product as defined in Chapter IV.

(b) Multivariate symbols. The space $\mathcal{F}_n$ of $n$-ary symbols is defined as the product
$$\mathcal{F}_n = \underbrace{\mathcal{F} \mathbin{\hat\otimes} \cdots \mathbin{\hat\otimes} \mathcal{F}}_{n \text{ copies}},$$
where $\mathcal{F}$ is the given space of unary symbols. Sometimes a more complicated construction is used, in which different spaces of unary symbols are allowed for different variables. Specifically,
$$\mathcal{F}_n = \mathcal{F}_{(1)} \mathbin{\hat\otimes} \cdots \mathbin{\hat\otimes} \mathcal{F}_{(n)},$$
where $\mathcal{F}_{(1)}, \ldots, \mathcal{F}_{(n)}$ are the given spaces of unary symbols. The latter construction is mostly used in the case in which some of $\mathcal{F}_{(1)}, \ldots, \mathcal{F}_{(n)}$ are the algebra of polynomials, and the others are equal to some $\mathcal{F}$.
synchronous asymptotic expansion A particular type of asymptotic expansion in which the chosen filtration provides two or more simple types of asymptotic expansions simultaneously. For example, if
$$H = H_0 \supset H_1 \supset H_2 \supset \cdots$$
is the filtration associated with asymptotic expansions with respect to $h \to 0$, and
$$H = G_0 \supset G_1 \supset G_2 \supset \cdots$$
is the filtration (in the same space) associated with asymptotic expansions with respect to smoothness, then the filtration
$$H = H_0 \cap G_0 \supset H_1 \cap G_1 \supset H_2 \cap G_2 \supset \cdots$$
is said to provide simultaneous asymptotic expansions with respect to the parameter $h$ and differentiability.

Taylor formula The formula expressing $f(C) - f(A)$ in the form
$$f(C) - f(A) = \sum_{k=1}^{N-1} \frac{1}{k!}\,\frac{\partial^k f}{\partial y^k}\Bigl(\overset{1}{A}\Bigr)\Bigl(\overset{2}{C} - \overset{2}{A}\Bigr)^{k} + Q_N,$$
with the remainder
$$Q_N = \Bigl(\overset{2}{C} - \overset{2}{A}\Bigr)^{N}\,\frac{\delta^N f}{\delta y^N}\Bigl(\overset{1}{A}, \ldots, \overset{1}{A}, \overset{3}{C}\Bigr),$$
and having the disadvantage that it cannot provide the expansion of $f(C) - f(A)$ in powers of $\varepsilon$ for $C = A + \varepsilon B$ unless the commutators of $A$ and $B$ satisfy some additional conditions such as nilpotency, etc.
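The role of the difference derivative can be seen already at first order. For $f(y) = y^2$ one has $(\delta f/\delta y)(y_1, y_2) = y_1 + y_2$, and with the Feynman ordering ($A$ to the left of $C - A$, $C$ to the right) the first-order formula is exact; a matrix check (sample data arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))

# f(y) = y^2, difference derivative (delta f / delta y)(y1, y2) = y1 + y2.
# Substituting y1 <- A (acting on the left of C - A) and y2 <- C (on the
# right) makes the first-order noncommutative Taylor identity exact:
lhs = C @ C - A @ A
rhs = A @ (C - A) + (C - A) @ C
assert np.allclose(lhs, rhs)
print("noncommutative first-order Taylor identity verified for f(y) = y^2")
```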
tensor product The algebraic tensor product of two linear spaces $A$ and $B$ over $\mathbb{C}$ is the space generated by finite formal sums
$$\sum_i \alpha_i\, a_i \otimes b_i, \qquad a_i \in A, \quad b_i \in B, \quad \alpha_i \in \mathbb{C},$$
and factorized by the relations
$$a \otimes (b + c) = a \otimes b + a \otimes c, \qquad (a + b) \otimes c = a \otimes c + b \otimes c, \qquad \lambda(a \otimes b) = \lambda a \otimes b = a \otimes \lambda b.$$
The tensor product of $A$ and $B$ is denoted by $A \otimes B$. The projective tensor product of Banach spaces $A$ and $B$ is the completion of $A \otimes B$ with respect to the projective norm
$$\|f\| = \inf \sum_i \|a_i\|\,\|b_i\|,$$
where the infimum is taken over all (finite) representations $f = \sum_i a_i \otimes b_i$. The projective tensor product is denoted by $A \mathbin{\hat\otimes} B$. The projective tensor product of poly-Banach spaces is defined as the completion of $A \otimes B$ with respect to a certain projective polynorm on $A \otimes B$. In any case, the tensor product satisfies the following universal mapping property: if $\Psi \colon A \times B \to C$ is a (continuous) bilinear mapping, then there exists a unique (continuous) linear mapping $\psi \colon A \otimes B \to C$ such that $\Psi = \psi \circ i$, where
$$i \colon A \times B \to A \otimes B, \qquad (a, b) \mapsto a \otimes b$$
is the natural bilinear mapping.
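On coordinate spaces the elementary tensors $a \otimes b$ can be realized as Kronecker products, and the defining relations then hold identically (a numpy sketch with arbitrary sample vectors):

```python
import numpy as np

rng = np.random.default_rng(4)
a, a2 = rng.standard_normal(3), rng.standard_normal(3)
b, c = rng.standard_normal(4), rng.standard_normal(4)
lam = 2.5

# Elementary tensors a (x) b realized as Kronecker products in a 12-dim space:
assert np.allclose(np.kron(a, b + c), np.kron(a, b) + np.kron(a, c))
assert np.allclose(np.kron(a + a2, b), np.kron(a, b) + np.kron(a2, b))
assert np.allclose(lam * np.kron(a, b), np.kron(lam * a, b))
assert np.allclose(lam * np.kron(a, b), np.kron(a, lam * b))
print("defining relations of the tensor product verified")
```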
T-exponentials Let $A(t)$ be a family of operators with parameter $t \in \mathbb{R}$. Consider the Cauchy problem
$$\frac{dU(t)}{dt} = A(t)\,U(t), \qquad U(0) = I \quad \text{(the identity operator)}.$$
Were the operators $A(t)$ commuting with one another, we would have
$$U(t) = \exp\left(\int_0^t A(\tau)\,d\tau\right).$$
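When the operators $A(t)$ commute, this formula can be confirmed numerically by comparing the ordered product of short-time propagators with the exponential of the integral (a sketch; the commuting family $A(t) = (1 + t^2)A_0$ is an arbitrary illustration):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
A0 = 0.5 * rng.standard_normal((3, 3))

def A(t):
    # A commuting family: A(t) = (1 + t^2) A0, so [A(t), A(s)] = 0 for all t, s.
    return (1 + t**2) * A0

# Ordered product of short-time propagators (later factors act on the left)
t_final, N = 1.0, 1000
dt = t_final / N
U = np.eye(3)
for k in range(N):
    U = expm(A((k + 0.5) * dt) * dt) @ U

# Since the A(t) commute, the result equals exp of the integral of A(t):
integral = (t_final + t_final**3 / 3) * A0  # = (4/3) A0
assert np.allclose(U, expm(integral))
print("commuting family: ordered product equals exp of the integral")
```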
In the absence of commutativity, we denote
$$U(t) = T\text{-}\exp\left(\int_0^t A(\tau)\,d\tau\right).$$
The prefix $T$ indicates that the operators $A(\tau)$ in this expression are ordered by their arguments $\tau$. Using Feynman indices, we can write
$$U(t) = \exp\left(\int_0^t \overset{\tau}{A}(\tau)\,d\tau\right),$$
i.e., $U(t)$ is a function of the "infinite Feynman tuple" $\{\overset{\tau}{A}(\tau)\}$. Accordingly, the symbol also depends on infinitely many arguments, i.e., it is a functional:
$$U(t) = f(\{y(\tau)\}), \qquad \text{where}$$
$$f(\{y(\tau)\}) = \exp\left(\int_0^t y(\tau)\,d\tau\right).$$
Either of these expressions is referred to as the T-exponential.

Trotter formula Any of the formulas like
$$e^{A+B} = \lim_{n \to \infty} \underbrace{e^{A/n}\,e^{B/n} \cdots e^{A/n}\,e^{B/n}}_{2n \text{ factors}}$$
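The limit is easy to observe numerically; for matrices, the error of the $n$-step product decays like $O(1/n)$ (a scipy sketch with arbitrary sample matrices):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(6)
A = 0.3 * rng.standard_normal((3, 3))
B = 0.3 * rng.standard_normal((3, 3))

def trotter(n):
    # (e^{A/n} e^{B/n})^n : 2n exponential factors in total
    step = expm(A / n) @ expm(B / n)
    return np.linalg.matrix_power(step, n)

exact = expm(A + B)
err = [np.linalg.norm(trotter(n) - exact) for n in (10, 100, 1000)]
# The error decreases like O(1/n)
assert err[0] > err[1] > err[2]
assert err[2] < 1e-2
print("Trotter errors for n = 10, 100, 1000:", err)
```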
These formulas express the exponential of a sum via the exponentials of the summands.

Trotter-type formulas Any of the formulas like
$$T\text{-}\exp\left(\int_0^t \Bigl[\Bigl[\,f(\overset{1}{A}, \overset{2}{B}, \tau)\,\Bigr]\Bigr]\,d\tau\right) = T\text{-}\exp\left(\int_0^t f(\overset{\tau}{A}, \overset{\tau}{B}, \tau)\,d\tau\right),$$
which allow one to remove the autonomous brackets in a T-exponential and further to pass from the T-exponential to the Feynman path integral.

twisted product The product $*$ in the symbol space introduced via the operators of the left ordered representation:
$$f * g = f(l)(g).$$
With this product, the mapping symbol $\to$ operator is a homomorphism of algebras. The associativity of the $*$-product is guaranteed by the Jacobi condition.
Weyl quantization A method of quantization in which all operators are assumed to act "simultaneously." More precisely, if $f(y_1, \ldots, y_n)$ is a power of a linear function and $A = (A_1, \ldots, A_n)$ is a tuple of operators, then the Weyl-quantized function $f_W(A_1, \ldots, A_n)$ is defined as the power of the given linear function of $A_1, \ldots, A_n$: if
$$f(y_1, \ldots, y_n) = (\alpha_1 y_1 + \cdots + \alpha_n y_n)^m,$$
then
$$f_W(A_1, \ldots, A_n) = (\alpha_1 A_1 + \cdots + \alpha_n A_n)^m.$$
By linearity, this definition extends to arbitrary polynomials. As for wider symbol classes, the definition for, say, Fourier-representable functions can be given as follows:
$$f_W(A_1, \ldots, A_n) = \left(\frac{1}{2\pi}\right)^{n/2} \int_{\mathbb{R}^n} \tilde{f}(t_1, \ldots, t_n)\,\exp\{i(t_1 A_1 + \cdots + t_n A_n)\}\,dt,$$
where $\tilde{f}(t)$ is the Fourier transform of $f(y)$. The Weyl quantization is a more delicate issue than the Feynman quantization, both in that it imposes stronger analytic requirements and in that the calculus is less evident with the Weyl quantization.
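For polynomials the definition amounts to symmetrization. For instance, expanding $(\alpha_1 A_1 + \alpha_2 A_2)^2$ and matching coefficients shows that the monomial $y_1 y_2$ is Weyl-quantized as $(A_1 A_2 + A_2 A_1)/2$; a quick matrix check (sample data arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
A1 = rng.standard_normal((3, 3))
A2 = rng.standard_normal((3, 3))
a1, a2 = 1.7, -0.6

# Weyl quantization of f(y1, y2) = (a1*y1 + a2*y2)^2 is (a1*A1 + a2*A2)^2.
# Matching coefficients: y1^2 -> A1^2, y2^2 -> A2^2, and the cross term
# y1*y2 (with coefficient 2*a1*a2) -> symmetrized product (A1 A2 + A2 A1)/2.
L = a1 * A1 + a2 * A2
lhs = L @ L
rhs = (a1**2 * A1 @ A1
       + 2 * a1 * a2 * (A1 @ A2 + A2 @ A1) / 2
       + a2**2 * A2 @ A2)
assert np.allclose(lhs, rhs)
print("Weyl quantization of (a1*y1 + a2*y2)^2 matches the symmetrized expansion")
```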
Wick normal form The representation of an operator $A$ in the quantum state space as a function of the creation operators $a^+$ and annihilation operators $a^-$ in which all $a^-$ act first and all $a^+$ act last:
$$A = \sum_{m,k} c_{mk}\,(a^+)^m (a^-)^k.$$
Bibliographical Remarks
Chapter I Ordering operators by indices, one of the main ideas in noncommutative analysis, was first introduced by Feynman [54], but its systematic use in conjunction with other tools such as autonomous brackets was inspired only by Maslov's book [129]. Attempts to define functions of several noncommuting operators necessarily involve some kind of ordering. The Feynman ordering is apparently the most convenient one and is the only ordering used in this book. However, there exist several other kinds of ordering, of which the most widespread and familiar is undoubtedly the Weyl ordering (Subsection 2.6). It originally arose in connexion with problems of quantum mechanics. We refer the reader to [191], [2], [3], [4], [151] and [101] for details concerning the Weyl ordering. Almost not touched upon in this book are problems pertaining to functions of several commuting operators. Not that we consider them too simple; they merely lie off the mainstream of our exposition, and we give only the information absolutely necessary to maintain the desired level of rigor in dealing with the noncommutative case. The reader interested in problems specific to commutative functional calculus may refer to either textbooks on functional analysis such as [78], [105], and [197] or to more special treatments such as [14] and [174]. T-exponentials (Subsection 1.1) were introduced by Feynman [54], who also discovered a striking relationship between T-exponentials and path integrals (cf. [55], [150]). We describe this relationship in Subsection 1.7 following the paper [135]. Various functional-analytic subtleties, which are beyond the framework of an elementary textbook, arise in connexion with the T-exponentials; e.g., see [105], [135], [179], and [180]. The idea of representing operators of quantum mechanics as functions of creation and annihilation operators (Subsection 1.2) is very old. 
Perhaps the most rigorous treatment of the subject from the mathematical viewpoint can be found in [10] and [11]. The fact that (pseudo)differential operators are functions of operators of multiplication and differentiation by the coordinates was clearly understood very long ago. For example, the treatment of pseudodifferential operators in [109] in fact uses the Feynman ordering (however, this use is implicit since Feynman indices never occur in the paper). From the very beginning of the systematic development of noncommutative analysis it was realized that the emphasis should be put on common algebraic properties rather
than on specific details of the definition of functions of noncommuting operators. At first glance, the most attractive idea is to present these properties in the form of a set of axioms; then for each specific definition one will only have to check these axioms. This was the approach adopted in [129]. However, there is the disadvantage that serious technical difficulties occur whenever one tries to consider morphisms of operator algebras and find what happens to functions of noncommuting operators under these morphisms. That is why in the subsequent publications [99] and [101] the set of axioms was replaced by another set, which deals with the properties of the symbol classes and the mapping symbol → operator rather than with the purely algebraic properties and their behavior under transformations. We hope that in the present book the definitions (see Section 2) have assumed their final form. In any case they have become simple, universal, and more convenient than before. The fundamental formulas of noncommutative differential calculus presented in Section 3 come from a number of sources. The Daletskii–Krein formula expressing the derivative with respect to a parameter of a function of an operator depending on the parameter was first obtained in [29] and [30]. It was extended to a general derivation formula in [96] and [97]. Permutation formulas (Subsection 3.4) and the composite function formula (Subsection 3.5) can be found in [96], [97], [98], and [129]. We should point out that only a small part of a broad variety of formulas of noncommutative differential calculus is presented in the book. A great many of these formulas were obtained by Karasev. Here we cite only a few of his main papers; the complete list of references can be found in [98] and [101]. The Campbell–Hausdorff theorem and Dynkin's formula (Section 4) have a long and intricate history.
Surprisingly, they have a number of important practical applications not limited to the reconstruction of the multiplication law in a Lie group [19], [53], [104], [117], [190]. There have been different approaches to the proof of Dynkin's formula [43]; e.g., see [21], [34], [53], [56], and [190]. The proof presented in Section 4 was obtained in [139], and the generalization to T-exponentials in [102].
Chapter II The method of ordered representation is the most powerful tool in special noncommutative analysis (that is, noncommutative analysis that deals with functions of a fixed Feynman tuple of operators). There is a substantial body of literature devoted, in essence, to functions of a tuple of operators satisfying a special class of relations. Primarily, these are Lie relations or graded Lie relations (see [12], [26], [28], [61], [62], [110], [119], [140], etc.). The ordered representation (Section 1) was introduced in [129], and in [130] the essential role of the generalized Jacobi condition (Section 4) (which consists in the requirement that the operators of ordered representation should satisfy the same relations as the original operators) was announced, and was later studied in [133], [141], and
[134]. The ordered representation permits one to compute products in terms of symbols and leads (under the condition that the generalized Jacobi condition be satisfied) to the notion of a hypergroup (see [130] and cf. [122], [119]). The Jacobi condition for the ordered representation leads immediately to the Poincaré-Birkhoff-Witt (PBW) theorem (see [86], [162]), which states the PBW property for enveloping algebras of Lie algebras. Far more interesting results are obtained if the Jacobi condition is applied to non-Lie commutation relations involving parameters. This results in a certain finite number of equations in these parameters. (Numerous examples of this sort were considered in [130], and some of them are included in Section 2 and Section 4.) These equations are closely related to the Yang-Baxter equations [7], [196], heavily used in the theory of quantum groups and quadratic algebras, which has recently been under intensive development (e.g., see [32], [35], [36], [37], [38], [47], [46], [70], [71],
[76], [90], [91], [92], [113], [114], [115], [116], [121], [122], [152], [158], [161], [166], [167], [168], [169], [170], [181], [182], [192], [193], [194], [195], and [198]). The situation is as follows: the Yang-Baxter equations are easier to verify but only guarantee that the PBW property is valid on polynomials of degree < 3, whereas the generalized Jacobi condition is much harder to check but ensures the PBW property for polynomials of arbitrary degree. The difficulty lies, of course, in how to compute the ordered representation operators. Some methods for this computation for certain classes of relations not restricted to Lie relations were proposed in [133], [134], [141], and [101] (see also references cited therein). Note that there is a very important branch of special noncommutative analysis not included in the present textbook, since it goes far beyond the elementary level. We mean asymptotic noncommutative analysis, which deals with commutation relations involving a small parameter and provides ordered representations, composition formulas, etc., in the form of asymptotic expansions with respect to this parameter. The small parameter being interpreted as Planck's constant, we arrive at the old, familiar quantization problem. Its solution leads to rich geometric constructions involving symplectic geometry (cf. [75]). We refer the reader to [99], [100], and [101], where a relatively complete bibliography on the topic can also be found. However, there is a case in which asymptotic and exact noncommutative analysis coincide, i.e., they give the same results. We mean the case in which the Feynman tuple in question satisfies Lie commutation relations. In this case the noncommutative analysis of functions of this tuple is closely related to the analysis on the corresponding Lie group. The relationship is considered in Section 6. 
For the reader's convenience, simple basic facts concerning Lie algebras, Lie groups, and their representations are collected in the Appendix at the end of the book. They can also be found in any standard textbook on the topic, e.g. [86], [162], [184] and others. The functional-analytic conditions on the generators of a representation of a Lie group such as the Krein-Shikhvatov and Nelson theorems are discussed in numerous works, e.g., [42], [58], [69], [78], [93], [95], [105], [107], [138], [149], [164], and [197]. Analysis in Hilbert (or Banach) scales (e.g., see [111]) is a natural framework
for our considerations in Section 6. The symbol classes introduced there are suitable for solving asymptotic problems arising in the theory of differential equations.
Chapter III This chapter comprises several examples in which noncommutative analysis is applied to obtain some kind of asymptotic solution to a problem for a differential equation. For standard settings of asymptotic problems for differential equations and related topics we refer the reader to [41], [79], [80], [81], [82], [84], [108], [109], [137], [142], [143], [165], [175], and [178]. However, the applications we cover also include nonstandard settings of asymptotic problems and our exposition chiefly follows [131], [132], and
[134]. The approaches to difference and differential-difference equations in Section 2 are taken from [129] and [131]. The problem on propagation of electromagnetic waves in plasma (Section 3) was first considered by Lewis [120]. The rigorous mathematical treatment of this problem was carried out in [145], [146], [147], and [148] (cf. also [134]). The problem of obtaining synchronous asymptotics for equations with coefficients growing at infinity (Section 4) has been considered by many authors (let us mention, e.g., the papers [52], [163], and the literature cited therein). Here we include the results concerning coefficients of polynomial growth obtained in [141]; these results were published in [133] and [134]. In Section 5 we consider a model example based on the geostrophic wind equations introduced in [153]. In Section 6 we demonstrate the method of re-expansions using the exact solution of some standard operator [129] on the example of a degenerate equation taken from [141]. Here the standard operator is just the operator of the quantum oscillator. Finally, in Section 7 we consider a problem involving an operator with double characteristics. This problem is well studied (e.g., see [15], [16], [74], [80], [154], [155], and [157]) and we present no new results. Our aim is merely to demonstrate the use of operator-valued symbols [129] in problems of this sort in the context of noncommutative analysis. To this end, we first reduce the problem to a simple normal form in the spirit of [172], [124], and [1].
Chapter IV The general definitions of noncommutative analysis given in Section 2 require an appropriate functional-analytic framework. This is provided by the language of polynormed
spaces. The theory of polynormed spaces (and, more generally, of spaces with convergence) has been developed intensively for a long time, see [13], [22], [23], [24], [31], [68], [72], [77], [87], [88], [89], [98], [126], and [138]. We reproduce the results necessary for our purposes in Section 1; the results of Section 2 are apparently new; at least we could not find them elsewhere in the literature. The exposition in Section 3 chiefly follows [134] and [141].
Bibliography
[1] S. Alinhac, On the reduction of pseudo-differential operators to canonical forms. J. Differential Equations 31, No. 2, 1979, 165-182. [2] R. F. V. Anderson, The Weyl functional calculus. J. Funct. Anal. 4, No. 2, 1969,
240-267. [3] —, On the Weyl functional calculus. J. Funct. Anal. 6, No. 1, 1970, 110-115. [4] —, Laplace transform methods in multivariate spectral theory. Pacific J. Math. 51, No. 2, 1974, 339-348. [5] V. I. Arnol'd, Mathematical Methods of Classical Mechanics, 2nd ed., Graduate Texts in Math. 60, Springer-Verlag, Berlin–Heidelberg–New York 1989.
[6] —, and A. B. Givental', Symplectic geometry. In: Itogi Nauki i Tekhniki, Sovrem. Probl. Mat., No. 4, pp. 7-139. VINITI, Moscow 1985. [Russian]. [7] R. J. Baxter, Partition function of the eight-vertex lattice model. Ann. Physics 70, 1972, 193-228. [8] R. Beals, A general calculus of pseudodifferential operators. Duke Math. J. 42,
1979, 1-42. [9] —, and C. Fefferman, Spatially inhomogeneous pseudodifferential operators. Comm. Pure Appl. Math. 27, 1974, 1-24. [10] F. A. Berezin, Methods of Second Quantization. Pure and Applied Physics 24, Academic Press, New York 1966.
[11] —, Wick and anti-Wick operator symbols. Math. USSR-Sb. 15, No. 4, 1971, 577-606. [12] —, and G. I. Kac, Lie groups with commuting and anticommuting parameters. Math. USSR-Sb. 11, No. 3, 1970, 311-325. [13] G. Birkhoff, Moore–Smith convergence in general topology. Ann. of Math. 38,
1937, 39-56. [14] N. Bourbaki, Théories Spectrales. Hermann, Paris 1967.
[15] L. Boutet de Monvel, Hypoelliptic operators with double characteristics and related pseudodifferential operators. Comm. Pure Appl. Math. 27, 1974, 585-639. [16] —, and F. Trèves, On a class of pseudodifferential operators with double characteristics. Invent. Math. 24, 1974, 1-34. [17] V. S. Buslaev, The generating integral and the canonical Maslov operator in the WKB method. Functional Anal. Appl. 3, 1969, 181-193.
[18] —, Quantization and the WKB method. In Proc. Steklov Inst. Math. 110, 1972, 1-27. [19] J. E. Campbell, Introductory Treatise on Lie's Theory of Finite Continuous Transformation Groups. Reprint. Chelsea, New York 1966. [20] W. B. Campbell, P. Finkler, C. E. Jones, and M. N. Misheloff, Path integrals with arbitrary generators and the eigenfunction problem. Ann. Physics 96, 1976,
286-302. [21] P. Cartier, Démonstration algébrique de la formule de Hausdorff. Bull. Soc. Math. France 84, 1956, 241-249.
[22] C. H. Cook, Compact pseudo-convergences. Math. Ann. 202, No. 3, 1973,
193-202. [23] —, and H. R. Fisher, On equicontinuity and continuous convergence. Math. Ann. 159, 1965, 94-104. [24] —, and H. R. Fisher, Uniform convergence structures. Math. Ann. 173, 1967, 290-306. [25] A. Cordoba and C. Fefferman, Wave packets and Fourier integral operators. Comm. Partial Differential Equations 3, No. 11, 1978, 979-1005. [26] L. Corwin, Y. Ne'eman, and S. Sternberg, Graded Lie algebras in mathematics and physics. (Bose–Fermi symmetry). Rev. Modern Phys. 47, No. 3, 1975,
573-603. [27] —, and L. Rothschild, Necessary conditions for local solvability of homogeneous left-invariant differential operators on nilpotent Lie groups. Acta Math. 147, 1981, 265-288. [28] Yu. L. Daletskii, Lie superalgebras in a Hamiltonian operator theory. In: Nonlinear and turbulent processes in physics, Vol. 3 (Kiev 1983), pp. 1289-1295, Harwood Academic Publ., Chur 1984.
[29] —, and S. G. Krein, A formula for differentiating functions of Hermitian operators with respect to a parameter. Dokl. Akad. Nauk SSSR 76, No. 1, 1951, 13-66. [Russian]. [30] —, and S. G. Krein, Integration and differentiation of operators depending on a parameter. Uspekhi Mat. Nauk 12, No. 1, 1957, 182-186. [Russian]. [31] G. F. C. De Bruyn, Concepts in vector spaces with convergence structures. Canad. Math. Bull. 18, No. 4, 1975, 499-502. [32] E. E. Demidov, On some aspects of the theory of quantum groups. Uspekhi Mat. Nauk 48, No. 6, 1993, 39-74. [Russian]. [33] P. A. M. Dirac, Lectures on quantum mechanics. Belfer Graduate School of Science, Yeshiva Univ., New York 1964. [34] D. Ž. Djoković, An elementary proof of the Baker–Campbell–Hausdorff–Dynkin formula. Math. Z. 143, 1975, 209-211. [35] V. G. Drinfeld, Hamiltonian structures on Lie groups, Lie bialgebras and the geometric meaning of the classical Yang–Baxter equations. Soviet Math. Dokl. 27, No. 1, 1983, 68-71. [36] —, On constant, quasiclassical solutions of the Yang–Baxter quantum equation. Soviet Math. Dokl. 28, No. 3, 1983, 667-671. [37] —, On quadratic commutation relations in the quasiclassical case. In: Mat. Fiz. i Funkts. Anal., pp. 25-34. Naukova Dumka, Kiev 1986. [Russian]. [38] —, Quantum groups. J. Soviet Math. 41, No. 2, 1988, 898-915. [39] B. A. Dubrovin, A. T. Fomenko, and S. P. Novikov, Modern Geometry–Methods and Applications, Part I, II, III. Graduate Texts in Math. 93, 104, 124, Springer-Verlag, Berlin–New York 1984-1990. [40] J. Duistermaat and V. Guillemin, The spectrum of positive elliptic operators and periodic bicharacteristics. Invent. Math. 29, 1975, 39-79. [41] —, and L. Hörmander, Fourier integral operators II. Acta Math. 128, 1972, 183-269. [42] N. Dunford and J. T. Schwartz, Linear Operators, Vol. I, II. Wiley Interscience 1963. [43] E. B. Dynkin, On the representation of the series log(e^x e^y) in noncommuting x and y via their commutators. Mat. Sb. 25(67), No.
1, 1949, 155-162. [Russian]. [44] Yu. V. Egorov, Canonical transformations and pseudodifferential operators. Trans. Moscow Math. Soc. 24, 1974, 1-28.
[45] —, Linear Differential Equations of Principal Type. Plenum Publishing Corp., New York–London 1987. [46] L. D. Faddeev, N. Yu. Reshetikhin, and L. A. Takhtajan, Quantization of Lie groups and Lie algebras. In: Algebraic analysis. Papers dedicated to Prof. Mikio Sato on the occasion of his sixtieth birthday, Vol. 1, pp. 129-139, Academic Press, Boston 1989. [47] —, and L. A. Takhtajan, A Liouville model on the lattice. In: Field Theory, quantum gravity and strings (Meudon/Paris, 1984/1985), pp. 166-179, Lecture Notes in Phys. 246, Springer-Verlag, Berlin–Heidelberg–New York 1986. [48] M. V. Fedoryuk, The Saddle-Point Method. Nauka, Moscow 1977. [Russian]. [49] C. Fefferman and D. Phong, On positivity of pseudodifferential operators. Proc. Nat. Acad. Sci. USA 75, 1978, 4673-4674. [50] —, —, On the eigenvalue distribution of a pseudodifferential operator. Proc. Nat. Acad. Sci. USA 77, 1980, 5622-5625. [51] —, —, The uncertainty principle and sharp Gårding inequalities. Comm. Pure Appl. Math. 34, 1981, 285-331. [52] V. I. Feigin, New classes of pseudodifferential operators in R^n and some applications. Trans. Moscow Math. Soc. 1979, issue 2, 153-195. [53] F. Fer, Résolution de l'équation matricielle dU/dt = pU par produit infini d'exponentielles matricielles. Acad. Roy. Belg. Bull. Cl. Sci. (5) 44, 1958, 818-829. [54] R. P. Feynman, An operator calculus having applications in quantum electrodynamics. Phys. Rev. 84, No. 2, 1951, 108-128. [55] —, and A. R. Hibbs, Quantum Mechanics and Path Integrals. McGraw-Hill, New York 1965. [56] D. Finkelstein, On relations between commutators. Comm. Pure Appl. Math. 8, 1955, 245-250. [57] H. R. Fisher, Limesräume. Math. Ann. 137, 1959, 269-303. [58] M. Flato, J. Simon, H. Snellman, and D. Sternheimer, Simple facts about analytic vectors and integrability. Ann. Sci. École Norm. Sup. 5, 1972, 423-434. [59] V. A. Fok, On the canonical transformation in classical and quantum mechanics. Vestn. Leningrad Univ. Math. 16, 1959, 67.
[Russian]. [60] —, Fundamentals of Quantum Mechanics, 2nd ed., Mir, Moscow 1978.
[61] G. B. Folland, Subelliptic estimates and function spaces on nilpotent Lie groups. Ark. Mat. 13, No. 2, 1975, 161-207. [62] —, and E. M. Stein, Estimates for the $\bar\partial_b$ complex and analysis on the Heisenberg group. Comm. Pure Appl. Math. 27, No. 4, 1974, 429-522. [63] A. T. Fomenko, Differential Geometry and Topology. Plenum Publ. Corporation, New York–London 1987. [64] —, Integrability and Nonintegrability in Geometry and Mechanics. Kluwer Acad. Publ., Dordrecht–Boston–London 1988. [65] —, Symplectic Geometry. Gordon & Breach, London 1989. [66] —, Visual Geometry and Topology. Springer-Verlag, Berlin–Heidelberg–New York 1993. [67] —, and V. V. Trofimov, Integrable Systems on Lie Algebras and Symmetric Spaces. Gordon & Breach, London 1987. [68] A. Frölicher and W. Bucher, Calculus in Vector Spaces without Norm. Lecture Notes in Math. 30, Springer-Verlag, Berlin–New York–Heidelberg 1966. [69] I. M. Gel'fand, One-parameter groups of operators in a normed space. Dokl. Akad. Nauk SSSR 25, No. 9, 1939, 713-718. [Russian]. [70] —, and I. V. Cherednik, The abstract Hamiltonian formalism for the classical Yang–Baxter bundles. Russian Math. Surveys 38, No. 3, 1983, 1-22. [71] —, and I. Ya. Dorfman, Hamiltonian operators and the classical Yang–Baxter equation. Functional Anal. Appl. 16, No. 4, 1982, 241-248. [72] K. K. Golovkin, Parametric-normed Spaces and Normed Massives. Proc. Steklov Inst. Math. 106 (1969), American Math. Society, Providence 1972. [73] R. W. Goodman, Nilpotent Lie Groups: Structure and Applications to Analysis. Lecture Notes in Math. 562, Springer-Verlag, Berlin–Heidelberg–New York 1976. [74] V. V. Grushin, On a class of elliptic pseudodifferential operators degenerate on a submanifold. Math. USSR-Sb. 13, No. 2, 1971, 155-185. [75] V. Guillemin and S. Sternberg, Geometric Asymptotics. Math. Surveys Monographs 14, revised edition, Amer. Math. Soc., Providence, Rhode Island 1990. [76] D. I.
Gurevich, Poisson brackets associated with the classical Yang–Baxter equation. Functional Anal. Appl. 23, No. 1, 1989, 57-59.
[77] A. Ya. Helemskii, Banach and Locally Convex Algebras. Clarendon Press, Oxford 1993.
[78] E. Hille and R. S. Phillips, Functional Analysis and Semigroups. Amer. Math. Soc. Colloq. Publ. 31, Amer. Math. Soc., Providence, Rhode Island 1957. [79] L. Hörmander, Pseudo-differential operators. Commun. Pure Appl. Math. 18,
1965, 501-517. [80] —, Hypoelliptic second order differential equations. Acta Math. 119, 1967, 147-171. [81] —, The calculus of Fourier integral operators. In: Prospects in mathematics (Proc. Symp., Princeton 1970), pp. 33-57, Ann. of Math. Stud. 70, Princeton Univ. Press, Princeton 1971. [82] —, Fourier integral operators I. Acta Math. 127, 1971, 79-183. [83] —, The Weyl calculus of pseudodifferential operators. Comm. Pure and Appl. Math. 32, 1979, 355-443. [84] —, The Analysis of Linear Partial Differential Operators I, II, III, IV. Grundlehren Math. Wiss. 256, 257, 274, 275, Springer-Verlag, Berlin–Heidelberg–New York 1983-1985. [85] R. Howe, Quantum mechanics and partial differential equations. J. Funct. Anal. 38, 1980, 188-254. [86] N. Jacobson, Lie Algebras. Wiley Interscience 1962. [87] H. Jarchow, Dualität und Marinescu-Räume. Math. Ann. 182, No. 2, 1969,
134-144. [88] —, Marinescu-Räume. Comment. Math. Helv. 44, No. 2, 1969, 138-163. [89] —, On tensor algebras of Marinescu spaces. Math. Ann. 187, No. 3, 1970, 163-174. [90] M. Jimbo, A q-difference analogue of U(g) and the Yang–Baxter equation. Lett. Math. Phys. 10, 1985, 63-69.
[91] —, A q-analogue of U(gl(n + 1)), Hecke algebras and the Yang–Baxter equation. Lett. Math. Phys. 11, 1986, 247-252. [92] —, Quantum R-matrix for the generalized Toda system. Comm. Math. Phys. 102, 1986, 537-547. [93] P. T. Jorgensen, Perturbation and analytic continuation of group representations. Bull. Amer. Math. Soc. 82, 1976, 921-928.
[94] V. G. Kac, Infinite-Dimensional Lie Algebras, 3rd ed., Cambridge University Press, Cambridge 1990. [95] S. D. Karakozov, Representations of Lie semigroups in a locally compact space. PhD Dissertation, Mathematical Institute, Siberian Division of the USSR Academy of Sciences, Novosibirsk 1985. [Russian]. [96] M. V. Karasev, Expansions of functions of noncommuting operators. Soviet Math. Dokl. 15, No. 1, 1974, 346-350.
[97] —, Formulas for functions of ordered operators. Math. Notes 18, 1975, 746-751. [98] —, Exercises in Operator Calculus. Moscow Inst. Electr. Mach., Moscow 1979. [Russian]. [99] —, and V. P. Maslov, Algebras with general commutation relations and their applications. II. Unitary-nonlinear operator equations. J. Soviet Math. 15, No.
3, 1981, 273-368. [100] —, and V. P. Maslov, Global asymptotic operators of the regular representation. Soviet Math. Dokl. 23, No. 2, 1981, 228-232. [101] —, and V. P. Maslov, Nonlinear Poisson Brackets. Geometry and Quantization. Transl. Math. Monographs 119, American Mathematical Society, Providence, Rhode Island 1993. [102] —, and M. V. Mosolova, Infinite products and T-products of exponentials. Theoret. and Math. Phys. 28, No. 2, 1976, 721-729. [103] —, and V. E. Nazaikinskii, On the quantization of rapidly oscillating symbols. Math. USSR-Sb. 34, No. 6, 1978, 737-764. [104] M. Kashiwara and M. Vergne, The Campbell–Hausdorff formula and invariant hyperfunctions. Invent. Math. 47, No. 3, 1978, 249-272. [105] T. Kato, Perturbation Theory for Linear Operators, 2nd ed., Grundlehren Math. Wiss. 132, Springer-Verlag, Berlin–Heidelberg–New York 1976. [106] A. A. Kirillov, The characters of unitary representations of Lie groups. Functional Anal. Appl. 2, No. 2, 1968, 133-146.
[107] —, Elements of the Theory of Representations. Grundlehren Math. Wiss. 220, Springer-Verlag, Berlin–Heidelberg–New York 1976.
[108] J. J. Kohn, Pseudodifferential operators and hypoellipticity. In: Partial Differential Equations, pp. 61-69, Proc. Symp. Pure Math. 23, American Math. Society, Providence 1973.
[109] J. J. Kohn and L. Nirenberg, An algebra of pseudo-differential operators. Comm. Pure Appl. Math. 18, 1965, 269-305.
[110] B. Kostant, Graded manifolds, graded Lie theory, and prequantization. In: Differential Geometrical Methods in Mathematical Physics, pp. 177-306, Lecture Notes in Math. 570, Springer-Verlag, Berlin–Heidelberg–New York 1975.
[111] S. G. Krein and Yu. I. Petunin, Scales of Banach spaces. Russian Math. Surveys 21, No. 2, 1966, 85-159.
[112] —, and A. M. Shikhvatov, Linear differential equations on a Lie group. Functional Anal. Appl. 4, No. 1, 1970, 46-54.
[113] I. M. Krichever, Baxter's equations and algebraic geometry. Functional Anal. Appl. 15, No. 2, 1981, 92-103.
[114] P. P. Kulish and N. Yu. Reshetikhin, Quantum linear problem for the Sine–Gordon equation and higher representations. Zap. Nauch. Sem. LOMI 101, 1981, 101-110. [Russian].
[115] —, and E. K. Sklyanin, Solutions of the Yang–Baxter equation. J. Soviet Math. 19, 1982, 1596-1620.
[116] —, N. Yu. Reshetikhin, and E. K. Sklyanin, Yang–Baxter equations and representation theory. I. Lett. Math. Phys. 5, 1981, 393-403.
[117] Chen Kuo-Tsai, Integration of paths, geometric invariants and generalized Baker–Hausdorff formula. Ann. of Math. 65, No. 1, 1957, 163-178.
[118] P. Lax and L. Nirenberg, On stability for difference schemes; a sharp form of Gårding's inequality. Comm. Pure Appl. Math. 19, 1966, 473-492.
[119] B. M. Levitan, The Theory of Generalized Shift Operators. Nauka, Moscow 1973. [Russian].
[120] R. M. Lewis, Asymptotic Theory of Transients. Pergamon Press, New York 1967.
[121] G. L. Litvinov, On double topological algebras and Hopf topological algebras. Trudy Sem. Vektor. Tenzor. Anal. 18, 1978, 372-375. [Russian].
[122] —, Hypergroups and hypergroup algebras. In: Itogi Nauki i Tekhniki, Sovrem. Probl. Mat. 26, pp. 57-106. VINITI, Moscow 1985. [Russian].
[123] G. Lusztig, Quantum deformations of certain simple modules over enveloping algebras. Adv. Math. 70, 1988, 237-249.
[124] V. V. Lychagin and B. Yu. Sternin, On the microlocal structure of pseudodifferential operators. Math. USSR-Sb. 56, 1987, 515-527.
[125] Yu. I. Manin, Some remarks on Koszul algebras and quantum groups. Ann. Inst. Fourier 38, 1988, 191-206.
[126] G. Marinescu, Opérations linéaires dans les espaces vectoriels pseudotopologiques et produits tensoriels. Bull. Math. Soc. Sci. Math. R. S. Roumanie (N.S.) 2, No. 1, 1958, 49-54.
[127] V. P. Maslov, Théorie des Perturbations et Méthodes Asymptotiques. Dunod, Paris 1972.
[128] —, Complex Markov Chains and Feynman's Path Integral. Nauka, Moscow 1976. [Russian].
[129] —, Operational Methods. Mir, Moscow 1976.
[130] —, Application of the method of ordered operators to obtain exact solutions. Theoret. and Math. Phys. 33, No. 2, 1977, 960-976. [Russian].
[131] —, Nonstandard characteristics in asymptotic problems. Russian Math. Surveys 38:6, 1983, 1-42.
[132] —, Asymptotic Methods for Solving Pseudodifferential Equations. Nauka, Moscow 1987. [Russian].
[133] —, and V. E. Nazaikinskii, Algebras with general commutation relations and their applications. I. Pseudodifferential equations with increasing coefficients. J. Soviet Math. 15, No. 3, 1981, 176-273.
[134] —, and V. E. Nazaikinskii, Asymptotics of Operator and Pseudo-Differential Equations. Plenum Publishing Corp., New York 1988.
[135] —, and I. A. Shishmarev, On the T-product of hypoelliptic operators. In: Itogi Nauki i Tekhniki, Sovrem. Probl. Mat. 8, pp. 137-198. VINITI, Moscow 1977.
[136] A. Melin, Parametrix constructions for right invariant differential operators on nilpotent groups. Comm. Partial Differential Equations 6, 1981, 1363-1405.
[137] A. S. Mishchenko, V. E. Shatalov, and B. Yu. Sternin, Lagrangian Manifolds and the Maslov Operator. Springer-Verlag, Berlin–Heidelberg–New York 1990.
[138] R. T. Moore, Exponentiation of operator Lie algebras on Banach spaces. Bull. Amer. Math. Soc. 71, 1965, 903-908.
[139] M. V. Mosolova, New formula for log(e^B e^A) in terms of commutators of A and B. Math. Notes 23, No. 5-6, 1978, 448-452.
[140] —, Functions of non-commuting operators that generate a graded Lie algebra. Math. Notes 29, No. 1-2, 1981, 17-22.
[141] V. E. Nazaikinskii, Ordering and Regular Representations of Noncommuting Operators. PhD Dissertation, MIEM, Moscow 1981. [Russian].
[142] —, V. G. Oshmyan, B. Yu. Sternin, and V. E. Shatalov, Fourier integral operators and the canonical operator. Russian Math. Surveys 36:2, 1981, 93-161.
[143] —, V. E. Shatalov, and B. Yu. Sternin, Contact Geometry and Linear Differential Equations. De Gruyter Exp. Math. 6, Walter de Gruyter, Berlin–New York 1992.
[144] —, B. Yu. Sternin, and V. E. Shatalov, Contact geometry and linear differential equations. Russian Math. Surveys 48, 1993, 97-134.
[145] —, —, —, Applications of noncommuting operators to diffraction problems. In: Waves and Diffraction, pp. 33-40. Nauka, Moscow 1990. [Russian].
[146] —, —, —, On an application of the Maslov operator method to a diffraction problem. Soviet Math. Dokl. 43, No. 2, 1991, 547-549.
[147] —, —, —, Introduction to Maslov's operational method (noncommutative analysis and differential equations). In: Global Analysis – Studies and Applications V, pp. 81-91, Lecture Notes in Math. 1520, Springer-Verlag, Berlin–Heidelberg–New York 1992.
[148] —, —, —, Maslov operational calculus and noncommutative analysis. In: Operator Calculus and Spectral Theory, pp. 225-243, Operator Theory: Advances and Applications 57, Birkhäuser Verlag, Basel–Boston 1992.
[149] E. Nelson, Analytic vectors. Ann. of Math. 70, No. 2, 1959, 572-615.
[150] —, Feynman integrals and the Schrödinger equation. J. Math. Phys. 5, No. 3, 1964, 332-343.
[151] —, Operants: A functional calculus for non-commuting operators. In: Functional Analysis and Related Fields, ed. F. E. Browder, pp. 172-187, Springer-Verlag, Berlin–Heidelberg–New York 1970.
[152] S. P. Novikov, Various doublings of Hopf algebras. Operator algebras on quantum groups, complex cobordisms. Russian Math. Surveys 47:5, 1992, 198-199.
[153] M. A. Obukhov, Concerning the geostrophic wind. Izv. Akad. Nauk SSSR, Ser. Geograf. Geofiz. 13, 1949, 281-306. [Russian].
[154] O. A. Oleynik, Linear equations of second order with nonnegative characteristic form. Mat. Sb. 69, No. 1, 1966, 111-140. [Russian].
[155] —, and E. V. Radkevich, Second Order Equations with Nonnegative Characteristic Form. Plenum Press, New York–London 1973.
[156] S. B. Priddy, Koszul resolutions. Trans. Amer. Math. Soc. 152, 1970, 39-60.
[157] E. V. Radkevich, Hypoelliptic operators with multiple characteristics. Math. USSR-Sb. 8, No. 2, 1969, 181-205.
[158] M. Rosso, Comparaison des groupes SU(2) quantiques de Drinfeld et Woronowicz. C. R. Acad. Sci. Paris Ser. I Math. 304, 1987, 323-326.
[159] L. P. Rothschild and E. M. Stein, Hypoelliptic differential operators and nilpotent groups. Acta Math. 137, No. 3-4, 1976, 247-320.
[160] M. Sato, T. Kawai, and M. Kashiwara, Microfunctions and pseudodifferential equations. In: Hyperfunctions and Pseudodifferential Equations, pp. 265-529, Lecture Notes in Math. 287, Springer-Verlag, Berlin–Heidelberg–New York 1973.
[161] M. A. Semenov-Tyan-Shanskii, Classical r-matrices and quantization. J. Soviet Math. 31, 1985, 3411-3431.
[162] J.-P. Serre, Lie Algebras and Lie Groups. Benjamin, New York 1965.
[163] M. A. Shubin, Pseudodifferential Operators and Spectral Theory. Springer-Verlag, Berlin–Heidelberg–New York 1987.
[164] J. Simon, On the integrability of finite-dimensional real Lie algebras. Comm. Math. Phys. 28, 1972, 39-46.
[165] J. Sjöstrand, Parametrices for pseudodifferential operators with multiple characteristics. Ark. Mat. 12, No. 1, 1974, 85-130.
[166] E. K. Sklyanin, Some algebraic structures connected with the Yang–Baxter equation. Funct. Anal. Appl. 16, No. 4, 1983, 263-270.
[167] —, Some algebraic structures connected with the Yang–Baxter equation. Representations of quantum algebras. Funct. Anal. Appl. 17, No. 4, 1983, 273-284.
[168] —, On an algebra generated by quadratic relations. Uspekhi Mat. Nauk 40, 1985, 214. [Russian].
[169] —, L. A. Takhtadzhyan, and L. D. Faddeev, Quantum inverse problem method. Theoret. and Math. Phys. 40, No. 2, 1979, 688-706.
[170] Ya. S. Soibel'man and L. L. Vaksman, On some problems in the theory of quantum groups. In: Representation Theory and Dynamical Systems, ed. A. M. Vershik, pp. 3-55, Adv. in Soviet Math. 9, American Math. Society, Providence 1992.
[171] E. Stein, Invariant pseudo-differential operators on a Lie group. Ann. Scuola Norm. Sup. Pisa Cl. Sci. 26, 1972, 587-611.
[172] B. Yu. Sternin, Differential equations of subprincipal type. Math. USSR-Sb. 53, No. 1, 1986, 37-67.
[173] —, and V. E. Shatalov, On a method of solving equations with simple characteristics. Math. USSR-Sb. 44, No. 1, 1983, 23-59.
[174] M. E. Taylor, Functions of several self-adjoint operators. Proc. Amer. Math. Soc. 19, 1968, 91-98.
[175] —, Pseudodifferential Operators. Princeton Univ. Press, Princeton 1981.
[176] —, Pseudodifferential operators on contact manifolds. Graduate lecture notes. Stony Brook, Spring 1981.
[177] —, Noncommutative microlocal analysis. Mem. Amer. Math. Soc. 52, No. 313, 1984, 1-182.
[178] F. Trèves, Introduction to Pseudodifferential and Fourier Integral Operators, Vol. 1, 2. Plenum Publishing Corp., New York–London 1980.
[179] H. F. Trotter, Approximation of semigroups of operators. Pacific J. Math. 8, No. 4, 1958, 887-920.
[180] —, On the product of semi-groups of operators. Proc. Amer. Math. Soc. 10, 1959, 545-551.
[181] L. L. Vaksman and Ya. S. Soibel'man, Algebra of functions on the quantum group SU(2). Functional Anal. Appl. 22, 1988, 170-181.
[182] A. M. Vershik, Algebras with quadratic relations. In: Spectral Theory of Operators and Infinite-Dimensional Analysis, pp. 32-57, Akad. Nauk Ukrain. SSR, Inst. Mat., Kiev 1984. [Russian].
[183] A. Voros, An algebra of pseudodifferential operators and the asymptotics of quantum mechanics. J. Funct. Anal. 29, 1978, 104-132.
[184] F. W. Warner, Foundations of Differentiable Manifolds and Lie Groups. Graduate Texts in Math. 94, Springer-Verlag, Berlin–Heidelberg–New York 1983.
[185] Alan Weinstein, Symplectic manifolds and their Lagrangian submanifolds. Adv. Math. 6, 1971, 329-346.
[186] —, Lagrangian submanifolds and Hamiltonian systems. Ann. of Math. 98, No. 2, 1973, 377-410.
[187] —, On Maslov's quantization conditions. In: Fourier Integral Operators and Partial Differential Equations, pp. 341-372, Lecture Notes in Math. 459, Springer-Verlag, Berlin–Heidelberg–New York 1975.
[188] —, Lectures on symplectic manifolds. CBMS Regional Conf. Ser. in Math. 29, American Math. Society, Providence 1979.
[189] —, Noncommutative geometry and geometric quantization. In: Symplectic Geometry and Mathematical Physics, pp. 446-462, Progr. Math. 99, Birkhäuser Verlag, Basel–Boston 1991.
[190] G. H. Weiss and A. A. Maradudin, The Baker–Hausdorff formula and a problem in crystal physics. J. Math. Phys. 3, No. 4, 1962, 771-777.
[191] H. Weyl, Gruppentheorie und Quantenmechanik. Leipzig 1928.
[192] S. L. Woronowicz, Compact matrix pseudogroups. Comm. Math. Phys. 111, 1987, 613-655.
[193] —, Twisted SU(2) group. An example of a non-commutative differential calculus. Publ. Res. Inst. Math. Sci. 23, 1987, 117-181.
[194] —, Tannaka–Krein duality for compact matrix pseudogroups. Twisted SU(n) groups. Invent. Math. 93, 1988, 35-76.
[195] —, Differential calculus on compact matrix pseudogroups (quantum groups). Comm. Math. Phys. 122, 1989, 125-170.
[196] C. N. Yang, Some exact results for the many-body problem in one dimension with repulsive delta-function interaction. Phys. Rev. Lett. 19, 1967, 1312-1314.
[197] K. Yosida, Functional Analysis. Grundlehren Math. Wiss. 123, Springer-Verlag, Berlin–Heidelberg 1965.
[198] A. B. Zamolodchikov and Al. B. Zamolodchikov, Two-dimensional factorizable S-matrices as exact solutions of some quantum field theory models. Ann. Physics 120, 1979, 253-291.
Index
abelian algebra, 288
Ado theorem, 291
asymptotic expansion, 179, 327
— w.r.t. growth at infinity, 327
— w.r.t. ordered tuple, 185
— w.r.t. parameter, 182, 327
— w.r.t. smoothness, 182, 327
—, synchronous, 347
autonomous brackets, 15, 85
Banach scale, 328
Bose–Einstein statistics, 6
boundedness theorems, 170
Campbell–Hausdorff theorem, 70, 329
Cauchy integral formula, 72
characteristics, 179, 329
commutation relations
—, linear, 156
—, semilinear, 125
commutator, 287, 330
coordinates of I and II genus, 301
Daletskii–Krein formula, 60
derivation, 289
derived
—homomorphism, 302
—representation, 304
difference
—approximation, 194, 196
—derivative, 18
Dynkin formula, 70
Engel theorem, 292
enveloping algebra, 138
equation
—, degenerate, 224
—, difference, 193
—, difference-differential, 193, 332
—of geostrophic wind, 216
—, (pseudo)differential, 167
—with fractional powers of x, 212
—with growing symbols, 208
expectation value, 4
exponential mapping, 300
Faddeev–Zamolodchikov algebra, 152
Fermi–Dirac statistics, 6
Feynman indices, 3, 28, 85
Feynman tuple, 28
filter, 249
formula
—, commutation, 62, 330
—for ln(e^B e^A), 74
—, composite function, 66, 330
—, derivation, 52, 331
—, extraction, 334
—, index permutation, 61, 336
—, product, 64, 341
Fourier transform, 73, 166
—, group, 164, 166
—integral operator, 170, 335
functions of operators, 25, 26, 269
Gårding space, 303
Gel'fand theorem, 304
generator, 24, 260, 268, 273, 317
Gleason–Montgomery–Zippin theorem, 293
group
—algebra, 164
—representation, 164
—, full linear, 293
—mollification, 163
—, special orthogonal, 294
Haar measure, 158, 163, 295
Hamilton–Jacobi equation, 336
Heaviside operator method, 174
Heisenberg
—algebra, 289, 336
—, representation, 290
—commutation relations, 336
—group, 293
—uncertainty principle, 4
Hilbert scale, 158
inversion problem, 169
Jacobi conditions, 133
—, generalized, 139
Jacobi identity, 287, 337
Krein–Shikhvatov theorem, 307
Leibniz rule, 289, 338
Lie algebra, 287
—, examples, 288
—, generators, 110, 112
—, graded, 335
—, homomorphism, 289
—, nilpotent, 157, 291
—of Lie group, 298
—, quotient, 289
—, representation, 116, 290
Lie bracket, 287
Lie commutation relations, 156
Lie group, 292
—corresponding to Lie algebra, 305
—, examples, 293
—, homomorphism, 294
—, local, 294
—, matrix, 293
—, multiplication law, 14
Lie subalgebra, 288
Lie subgroup, 293
Nelson condition, 157
Newton formula, 57, 338
nilpotency rank, 157, 292
normal form, 338
observable, 4
one-parameter subgroup, 300
operator
—, annihilation, 6, 16, 94
—, creation, 6, 16, 94
—with double characteristics, 233
parametrix, 315
path integral, 340
Poincaré–Birkhoff–Witt
—basis, 146
—property, 146
—theorem, 138
Poisson algebra, 341
poly-Banach space, 254, 341
polynorm, 250
polynormed
—algebra, 249, 256
—space, 249
—space over filter, 251
product theorem, 341
pseudodifferential operators, 316, 321, 342
—, commutation with an exponent, 189
—, composition, 317, 325
—, continuous, 317, 324
quantization procedure, 5
quantum oscillator, 16, 342
representation, 290
—, associated, 291
—, derived, 304
—, examples, 290
—of Heisenberg algebra, 290
—, left-ordered, 97
—of Lie group, 296
—, regular (left, right), 33, 165, 295
—, right-ordered, 103
—, strongly continuous, 296
right-invariant vector field, 166
rule
—of changing Feynman indices, 86
—of deleting autonomous brackets, 87
semigroup property, 345
Sklyanin algebra, 153
Sobolev space, 312
state space, 4
stationary phase method, 345
structure constants, 287
symbol
—classes, 42, 167
—, classical, 167
—of differential operator, 167, 174
—of pseudodifferential operator, 321
—, operator-valued, 339
—, principal, 167
—, subprincipal, 236
—space, 161, 260
symplectic
—form, 234
—space, 234
T-exponential, 20
—, extraction formula, 11
Taylor formula, 58, 347
tensor product, 257, 348
translation
—, left, 295
—, right, 295
Trotter formula, 20, 349
Trotter-type formula, 349
twisted product, 165
uncertainty relation, 4
vacuum subspace, 6
wave function, 4
Weyl
—functions of operators, 48
—quantization, 48
Wick
—normal form, 6, 94, 350
—symbol, 7, 95
Yang–Baxter equation, 153, 156