Fermionic Functional Integrals and the Renormalization Group

Joel Feldman
Department of Mathematics, University of British Columbia, Vancouver, B.C., CANADA V6T 1Z2
[email protected]  http://www.math.ubc.ca/~feldman

Horst Knörrer, Eugene Trubowitz
Mathematik, ETH-Zentrum, CH-8092 Zürich, SWITZERLAND
Abstract
The Renormalization Group is the name given to a technique for analyzing
the qualitative behaviour of a class of physical systems by iterating a map on the vector space of interactions for the class. In a typical non-rigorous application of this technique one assumes, based on one’s physical intuition, that only a certain finite dimensional subspace (usually of dimension three or less) is important. These notes concern a technique for justifying this approximation in a broad class of Fermionic models used in condensed matter and high energy physics.
These notes expand upon the Aisenstadt Lectures given by J. F. at the Centre de Recherches Mathématiques, Université de Montréal, in August 1999.
Table of Contents

Preface

§I Fermionic Functional Integrals
  §I.1 Grassmann Algebras
  §I.2 Grassmann Integrals
  §I.3 Differentiation and Integration by Parts
  §I.4 Grassmann Gaussian Integrals
  §I.5 Grassmann Integrals and Fermionic Quantum Field Theories
  §I.6 The Renormalization Group
  §I.7 Wick Ordering
  §I.8 Bounds on Grassmann Gaussian Integrals

§II Fermionic Expansions
  §II.1 Notation and Definitions
  §II.2 Expansion – Algebra
  §II.3 Expansion – Bounds
  §II.4 Sample Applications
    Gross–Neveu₂
    Naive Many-fermion₂
    Many-fermion₂ – with sectorization

Appendices
  §A Infinite Dimensional Grassmann Algebras
  §B Pfaffians
  §C Propagator Bounds
  §D Problem Solutions

References
Preface
The Renormalization Group is the name given to a technique for analyzing the qualitative behaviour of a class of physical systems by iterating a map on the vector space of interactions for the class. In a typical non-rigorous application of this technique one assumes, based on one's physical intuition, that only a certain finite dimensional subspace (usually of dimension three or less) is important. These notes concern a technique for justifying this approximation in a broad class of fermionic models used in condensed matter and high energy physics.

The first chapter provides the necessary mathematical background. Most of it is easy algebra – primarily the definition of Grassmann algebra and the definition and basic properties of a family of linear functionals on Grassmann algebras known as Grassmann Gaussian integrals. To make §I really trivial, we consider only finite dimensional Grassmann algebras. A simple–minded method for handling the infinite dimensional case is presented in Appendix A. There is also one piece of analysis in §I – the Gram bound on Grassmann Gaussian integrals – and a brief discussion of how Grassmann integrals arise in quantum field theories.

The second chapter introduces an expansion that can be used to establish analytic control over the Grassmann integrals used in fermionic quantum field theory models, when the covariance (propagator) is "really nice". It is also used as one ingredient in a renormalization group procedure that controls the Grassmann integrals when the covariance is not so nice. To illustrate the latter, we look at the Gross–Neveu₂ model and at many-fermion models in two space dimensions.
I. Fermionic Functional Integrals
This chapter just provides some mathematical background. Most of it is easy algebra – primarily the definition of Grassmann algebra and the definition and basic properties of a class of linear functionals on Grassmann algebras known as Grassmann Gaussian integrals. There is also one piece of analysis – the Gram bound on Grassmann Gaussian integrals – and a brief discussion of how Grassmann integrals arise in quantum field theories. To make this chapter really trivial, we consider only finite dimensional Grassmann algebras. A simple–minded method for handling the infinite dimensional case is presented in Appendix A.
I.1 Grassmann Algebras

Definition I.1 (Grassmann algebra with coefficients in $\mathbb{C}$) Let $V$ be a finite dimensional vector space over $\mathbb{C}$. The Grassmann algebra generated by $V$ is
$$\bigwedge V = \bigoplus_{n=0}^{\infty} \bigwedge^{n} V$$
where $\bigwedge^{0} V = \mathbb{C}$ and $\bigwedge^{n} V$ is the $n$–fold antisymmetric tensor product of $V$ with itself. Thus, if $\{a_1,\dots,a_D\}$ is a basis for $V$, then $\bigwedge V$ is a vector space with elements of the form
$$f(a) = \sum_{n=0}^{D}\ \sum_{1\le i_1<\cdots<i_n\le D} \beta_{i_1,\dots,i_n}\, a_{i_1}\cdots a_{i_n}$$
with the coefficients $\beta_{i_1,\dots,i_n}\in\mathbb{C}$. (In differential geometry, the traditional notation uses $a_{i_1}\wedge\cdots\wedge a_{i_n}$ in place of $a_{i_1}\cdots a_{i_n}$.) Addition and scalar multiplication are done componentwise. That is,
$$\alpha \sum_{n=0}^{D}\sum_{1\le i_1<\cdots<i_n\le D} \beta_{i_1,\dots,i_n}\, a_{i_1}\cdots a_{i_n} \;+\; \gamma \sum_{n=0}^{D}\sum_{1\le i_1<\cdots<i_n\le D} \delta_{i_1,\dots,i_n}\, a_{i_1}\cdots a_{i_n} = \sum_{n=0}^{D}\sum_{1\le i_1<\cdots<i_n\le D} \bigl(\alpha\beta_{i_1,\dots,i_n}+\gamma\delta_{i_1,\dots,i_n}\bigr)\, a_{i_1}\cdots a_{i_n}$$
The multiplication in $\bigwedge V$ is determined by distributivity and
$$(a_{i_1}\cdots a_{i_m})\,(a_{j_1}\cdots a_{j_n}) = \begin{cases} 0 & \text{if } \{i_1,\dots,i_m\}\cap\{j_1,\dots,j_n\}\ne\emptyset\\ \operatorname{sgn}(k)\, a_{k_1}\cdots a_{k_{m+n}} & \text{otherwise} \end{cases}$$
where $(k_1,\dots,k_{m+n})$ is the permutation of $(i_1,\dots,i_m,j_1,\dots,j_n)$ with $k_1<k_2<\cdots<k_{m+n}$ and $\operatorname{sgn}(k)$ is the sign of that permutation. In particular $a_ia_j = -a_ja_i$ and $a_ia_i = 0$ for all $i$. For example
$$\bigl(a_2+3a_3\bigr)\bigl(a_1+2a_1a_2\bigr) = a_2a_1 + 3a_3a_1 + 2a_2a_1a_2 + 6a_3a_1a_2 = -a_1a_2 - 3a_1a_3 - 2a_1a_2a_2 - 6a_1a_3a_2 = -a_1a_2 - 3a_1a_3 + 6a_1a_2a_3$$
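The sign rule above is purely combinatorial, so it can be checked mechanically. The following Python sketch (all function names are ours, not from the text) represents an element of $\bigwedge V$ as a dictionary mapping tuples of generator indices to coefficients, and computes products by counting the adjacent transpositions needed to sort the indices:

```python
def normalize(indices):
    """Sort a tuple of generator indices, returning (sign, sorted_tuple),
    or (0, ()) if an index repeats (since a_i a_i = 0)."""
    idx, sign = list(indices), 1
    # bubble sort, counting transpositions: each swap of adjacent
    # generators contributes a factor -1
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    if any(idx[k] == idx[k + 1] for k in range(len(idx) - 1)):
        return 0, ()
    return sign, tuple(idx)

def gr_mul(f, g):
    """Multiply two Grassmann elements given as {index_tuple: coefficient}."""
    result = {}
    for I, a in f.items():
        for J, b in g.items():
            sign, K = normalize(I + J)
            if sign:
                result[K] = result.get(K, 0) + sign * a * b
    return {K: c for K, c in result.items() if c != 0}

# the worked example: (a2 + 3 a3)(a1 + 2 a1 a2)
f = {(2,): 1, (3,): 3}
g = {(1,): 1, (1, 2): 2}
print(gr_mul(f, g))   # {(1, 2): -1, (1, 3): -3, (1, 2, 3): 6}
```

The printed dictionary reproduces the worked example, $-a_1a_2-3a_1a_3+6a_1a_2a_3$.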
Remark I.2 If $\{a_1,\dots,a_D\}$ is a basis for $V$, then $\bigwedge^{n} V$ has basis
$$\bigl\{\, a_{i_1}\cdots a_{i_n} \;\big|\; n\ge0,\ 1\le i_1<\cdots<i_n\le D \,\bigr\}$$
The dimension of $\bigwedge^{n} V$ is $\binom{D}{n}$ and the dimension of $\bigwedge V$ is $\sum_{n=0}^{D}\binom{D}{n} = 2^D$. Let
$$\mathcal{M}_n = \bigl\{\, (i_1,\dots,i_n) \;\big|\; 1\le i_1,\dots,i_n\le D \,\bigr\}$$
be the set of all multi–indices of degree $n\ge0$. Note that $i_1,i_2,\dots$ does not have to be in increasing order. For each $I\in\mathcal{M}_n$ set $a_I = a_{i_1}\cdots a_{i_n}$. Also set $a_\emptyset = 1$. Every element $f(a)\in\bigwedge V$ has a unique representation
$$f(a) = \sum_{n=0}^{D}\sum_{I\in\mathcal{M}_n} \beta_I\, a_I$$
with the coefficients $\beta_I\in\mathbb{C}$ antisymmetric under permutations of the elements of $I$. For example,
$$a_1a_2 = \tfrac12\bigl(a_1a_2 - a_2a_1\bigr) = \beta_{(1,2)}a_{(1,2)} + \beta_{(2,1)}a_{(2,1)} \qquad\text{with}\qquad \beta_{(1,2)} = -\beta_{(2,1)} = \tfrac12$$
Problem I.1 Let $V$ be a complex vector space of dimension $D$. Let $s\in\bigwedge V$. Then $s$ has a unique decomposition $s = s_0 + s_1$ with $s_0\in\mathbb{C}$ and $s_1\in\bigoplus_{n=1}^{D}\bigwedge^{n} V$. Prove that, if $s_0\ne0$, then there is a unique $s'\in\bigwedge V$ with $ss' = 1$ and a unique $s''\in\bigwedge V$ with $s''s = 1$, and furthermore
$$s' = s'' = \frac{1}{s_0} + \sum_{n=1}^{D} \frac{(-1)^n}{s_0^{\,n+1}}\, s_1^{\,n}$$
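Half of Problem I.1 is a telescoping computation; here is a sketch of the check that the displayed series is a right inverse (the uniqueness half is the real content of the problem). Multiply out, using that $s_0$ is a scalar, that $s_1$ commutes with its own powers, and that $s_1^{D+1} = 0$ because every monomial in $s_1^{D+1}$ contains more than $D$ generators:

```latex
s\,s' = (s_0+s_1)\Bigl(\frac{1}{s_0} + \sum_{n=1}^{D}\frac{(-1)^n}{s_0^{\,n+1}}\,s_1^{\,n}\Bigr)
      = \sum_{n=0}^{D}\frac{(-1)^n}{s_0^{\,n}}\,s_1^{\,n}
        + \sum_{n=1}^{D+1}\frac{(-1)^{n-1}}{s_0^{\,n}}\,s_1^{\,n}
      = 1 + \frac{(-1)^{D}}{s_0^{\,D+1}}\,s_1^{\,D+1}
      = 1
```

All the intermediate terms cancel in pairs, and the one surviving term vanishes by nilpotency.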
It will be convenient to generalise the concept of Grassmann algebra to allow for coefficients in more general algebras than C. We shall allow the space of coefficients to be any superalgebra [BS]. Here are the definitions that do it.
Definition I.3 (Superalgebra)
(i) A superalgebra is an associative algebra $S$ with unit, denoted $1$, together with a decomposition $S = S_+\oplus S_-$ such that $1\in S_+$,
$$S_+\cdot S_+\subset S_+ \qquad S_-\cdot S_-\subset S_+ \qquad S_+\cdot S_-\subset S_- \qquad S_-\cdot S_+\subset S_-$$
and
$$ss' = s's \ \text{ if } s\in S_+ \text{ or } s'\in S_+ \qquad\qquad ss' = -s's \ \text{ if } s,s'\in S_-$$
The elements of $S_+$ are called even, the elements of $S_-$ odd.
(ii) A graded superalgebra is an associative algebra $S$ with unit, together with a decomposition $S = \bigoplus_{m=0}^{\infty} S_m$ such that $1\in S_0$, $S_m\cdot S_n\subset S_{m+n}$ for all $m,n\ge0$, and such that the decomposition $S = S_+\oplus S_-$ with
$$S_+ = \bigoplus_{m\ \mathrm{even}} S_m \qquad\qquad S_- = \bigoplus_{m\ \mathrm{odd}} S_m$$
gives $S$ the structure of a superalgebra.
Example I.4 Let $V$ be a complex vector space. The Grassmann algebra $S = \bigwedge V$ over $V$ is a graded superalgebra. In this case, $S_m = \bigwedge^{m} V$, $S_+ = \bigoplus_{m\ \mathrm{even}}\bigwedge^{m} V$ and $S_- = \bigoplus_{m\ \mathrm{odd}}\bigwedge^{m} V$. In these notes, all of the superalgebras that we use will be Grassmann algebras.
Definition I.5 (Tensor product of two superalgebras)
(i) Recall that the tensor product of two finite dimensional vector spaces $S$ and $T$ is the vector space $S\otimes T$ constructed as follows. Consider the set of all formal sums $\sum_{i=1}^{n} s_i\otimes t_i$ with all of the $s_i$'s in $S$ and all of the $t_i$'s in $T$. Let $\equiv$ be the smallest equivalence relation on this set with
$$s\otimes t + s'\otimes t' \equiv s'\otimes t' + s\otimes t \qquad (s+s')\otimes t \equiv s\otimes t + s'\otimes t \qquad s\otimes(t+t') \equiv s\otimes t + s\otimes t' \qquad (zs)\otimes t \equiv s\otimes(zt)$$
for all $s,s'\in S$, $t,t'\in T$ and $z\in\mathbb{C}$. Then $S\otimes T$ is the set of all equivalence classes under this relation. Addition and scalar multiplication are defined in the obvious way.
(ii) Let $S$ and $T$ be superalgebras. We define multiplication in the tensor product $S\otimes T$ by
$$\bigl(s\otimes(t_++t_-)\bigr)\,\bigl((s_++s_-)\otimes t\bigr) = ss_+\otimes t_+t + ss_+\otimes t_-t + ss_-\otimes t_+t - ss_-\otimes t_-t$$
for $s\in S$, $t\in T$, $s_\pm\in S_\pm$, $t_\pm\in T_\pm$. This multiplication defines an algebra structure on $S\otimes T$. Setting
$$(S\otimes T)_+ = (S_+\otimes T_+)\oplus(S_-\otimes T_-) \qquad\qquad (S\otimes T)_- = (S_+\otimes T_-)\oplus(S_-\otimes T_+)$$
we get a superalgebra. If $S$ and $T$ are graded superalgebras then the decomposition $S\otimes T = \bigoplus_{m=0}^{\infty} (S\otimes T)_m$ with
$$(S\otimes T)_m = \bigoplus_{m_1+m_2=m} S_{m_1}\otimes T_{m_2}$$
gives $S\otimes T$ the structure of a graded superalgebra.

Definition I.6 (Grassmann algebra with coefficients in a superalgebra) Let $V$ be a complex vector space and $S$ a superalgebra. The Grassmann algebra over $V$ with coefficients in $S$ is the superalgebra
$$\bigwedge\nolimits_S V = S\otimes\bigwedge V$$
If $S$ is a graded superalgebra, so is $\bigwedge_S V$.
Remark I.7 It is natural to identify $s\in S$ with the element $s\otimes1$ of $\bigwedge_S V = S\otimes\bigwedge V$ and to identify $a_{i_1}\cdots a_{i_n}\in\bigwedge V$ with the element $1\otimes a_{i_1}\cdots a_{i_n}$ of $\bigwedge_S V$. Under this identification
$$s\, a_{i_1}\cdots a_{i_n} = \bigl(s\otimes1\bigr)\bigl(1\otimes a_{i_1}\cdots a_{i_n}\bigr) = s\otimes a_{i_1}\cdots a_{i_n}$$
Every element of $\bigwedge_S V$ has a unique representation
$$\sum_{n=0}^{D}\ \sum_{1\le i_1<\cdots<i_n\le D} s_{i_1,\dots,i_n}\, a_{i_1}\cdots a_{i_n}$$
with the coefficients $s_{i_1,\dots,i_n}\in S$. Every element of $\bigwedge_S V$ also has a unique representation
$$\sum_{n=0}^{D}\sum_{I\in\mathcal{M}_n} s_I\, a_I$$
with the coefficients $s_I\in S$ antisymmetric under permutation of the elements of $I$. If $S = \bigwedge V'$, with $V'$ the complex vector space with basis $\{b_1,\dots,b_D\}$, then every element of $\bigwedge_S V$ has a unique representation
$$\sum_{n,m=0}^{D}\ \sum_{\substack{I\in\mathcal{M}_n\\ J\in\mathcal{M}_m}} \beta_{I,J}\, b_J a_I$$
with the coefficients $\beta_{I,J}\in\mathbb{C}$ separately antisymmetric under permutation of the elements of $I$ and under permutation of the elements of $J$.
Problem I.2 Let $V$ be a complex vector space of dimension $D$. Every element $s$ of $\bigwedge V$ has a unique decomposition $s = s_0+s_1$ with $s_0\in\mathbb{C}$ and $s_1\in\bigoplus_{n=1}^{D}\bigwedge^{n} V$. Define
$$e^s = e^{s_0}\Bigl\{\,\sum_{n=0}^{D}\tfrac{1}{n!}\, s_1^{\,n}\Bigr\}$$
Prove that if $s,t\in\bigwedge V$ with $st = ts$, then, for all $n\in\mathbb{N}$,
$$(s+t)^n = \sum_{m=0}^{n}\binom{n}{m}\, s^m t^{n-m}$$
and
$$e^s e^t = e^t e^s = e^{s+t}$$
Problem I.3 Use the notation of Problem I.2. Let, for each $\alpha\in\mathbb{R}$, $s(\alpha)\in\bigwedge V$. Assume that $s(\alpha)$ is differentiable with respect to $\alpha$ (meaning that if we write $s(\alpha) = \sum_{n=0}^{D}\sum_{1\le i_1<\cdots<i_n\le D} s_{i_1,\dots,i_n}(\alpha)\, a_{i_1}\cdots a_{i_n}$, every coefficient $s_{i_1,\dots,i_n}(\alpha)$ is differentiable with respect to $\alpha$) and that $s(\alpha)s(\beta) = s(\beta)s(\alpha)$ for all $\alpha$ and $\beta$. Prove that
$$\frac{d}{d\alpha}\, s(\alpha)^n = n\, s(\alpha)^{n-1}\, s'(\alpha) \qquad\text{and}\qquad \frac{d}{d\alpha}\, e^{s(\alpha)} = e^{s(\alpha)}\, s'(\alpha)$$
Problem I.4 Use the notation of Problem I.2. If $s_0>0$, define
$$\ln s = \ln s_0 + \sum_{n=1}^{D}\frac{(-1)^{n-1}}{n}\Bigl(\frac{s_1}{s_0}\Bigr)^{n}$$
with $\ln s_0\in\mathbb{R}$.
a) Let, for each $\alpha\in\mathbb{R}$, $s(\alpha)\in\bigwedge V$. Assume that $s(\alpha)$ is differentiable with respect to $\alpha$, that $s(\alpha)s(\beta) = s(\beta)s(\alpha)$ for all $\alpha$ and $\beta$ and that $s_0(\alpha)>0$ for all $\alpha$. Prove that
$$\frac{d}{d\alpha}\, \ln s(\alpha) = \frac{s'(\alpha)}{s(\alpha)}$$
b) Prove that if $s\in\bigwedge V$ with $s_0\in\mathbb{R}$, then $\ln e^s = s$. Prove that if $s\in\bigwedge V$ with $s_0>0$, then $e^{\ln s} = s$.
Problem I.5 Use the notation of Problems I.2 and I.4. Prove that if $s,t\in\bigwedge V$ with $st = ts$ and $s_0,t_0>0$, then $\ln(st) = \ln s + \ln t$.
Problem I.6 Generalise Problems I.1–I.5 to $\bigwedge_S V$ with $S$ a finite dimensional graded superalgebra having $S_0 = \mathbb{C}$.
Problem I.7 Let $V$ be a complex vector space of dimension $D$. Let $s = s_0+s_1\in\bigwedge V$ with $s_0\in\mathbb{C}$ and $s_1\in\bigoplus_{n=1}^{D}\bigwedge^{n} V$. Let $f(z)$ be a complex valued function that is analytic in $|z|<r$. Prove that if $|s_0|<r$, then $\sum_{n=0}^{\infty}\frac{1}{n!}\, f^{(n)}(0)\, s^n$ converges and
$$\sum_{n=0}^{\infty}\frac{1}{n!}\, f^{(n)}(0)\, s^n = \sum_{n=0}^{D}\frac{1}{n!}\, f^{(n)}(s_0)\, s_1^{\,n}$$
I.2 Grassmann Integrals

Let $V$ be a finite dimensional complex vector space and $S$ a superalgebra. Given any ordered basis $\{a_1,\dots,a_D\}$ for $V$, the Grassmann integral $\int\,\cdot\;da_D\cdots da_1$ is defined to be the unique linear map from $\bigwedge_S V$ to $S$ which is zero on $\bigoplus_{n=0}^{D-1}\bigwedge_S^{n} V$ and obeys
$$\int a_1\cdots a_D\; da_D\cdots da_1 = 1$$
Problem I.8 Let $a_1,\dots,a_D$ be an ordered basis for $V$. Let $b_i = \sum_{j=1}^{D} M_{i,j}\, a_j$, $1\le i\le D$, be another ordered basis for $V$. Prove that
$$\int\,\cdot\;da_D\cdots da_1 = \det M \int\,\cdot\;db_D\cdots db_1$$
In particular, if $b_i = a_{\sigma(i)}$ for some permutation $\sigma\in S_D$,
$$\int\,\cdot\;da_D\cdots da_1 = \operatorname{sgn}\sigma \int\,\cdot\;db_D\cdots db_1$$
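Concretely, the Grassmann integral just reads off the coefficient of the top monomial $a_1a_2\cdots a_D$, after sorting indices and tracking signs. A small Python sketch (helper names are ours):

```python
def normalize(indices):
    """Sort generator indices, returning (sign, sorted_tuple);
    (0, ()) if an index repeats, since a_i a_i = 0."""
    idx, sign = list(indices), 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    if any(idx[k] == idx[k + 1] for k in range(len(idx) - 1)):
        return 0, ()
    return sign, tuple(idx)

def gr_integral(f, D):
    """Integral f da_D ... da_1 for f given as {index_tuple: coefficient}:
    the (signed) coefficient of the top monomial a_1 a_2 ... a_D."""
    top = tuple(range(1, D + 1))
    return sum(sign * c
               for I, c in f.items()
               for sign, K in [normalize(I)]
               if K == top)

assert gr_integral({(1, 2): 1}, 2) == 1       # a1 a2 integrates to 1
assert gr_integral({(2, 1): 1}, 2) == -1      # a2 a1 = -a1 a2 integrates to -1
assert gr_integral({(): 7, (1,): 5}, 2) == 0  # lower degrees integrate to 0
```

The last assertion is the statement that the integral vanishes on $\bigoplus_{n=0}^{D-1}\bigwedge_S^{n} V$.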
Example I.8 Let $V$ be a two dimensional vector space with basis $\{a_1,a_2\}$ and $V'$ be a second two dimensional vector space with basis $\{b_1,b_2\}$. Set $S = \bigwedge V'$. Let $\lambda\in\mathbb{C}\setminus\{0\}$ and let $S$ be the $2\times2$ skew symmetric matrix
$$S = \begin{pmatrix} 0 & -\frac1\lambda \\ \frac1\lambda & 0 \end{pmatrix} \qquad\qquad S^{-1} = \begin{pmatrix} 0 & \lambda \\ -\lambda & 0 \end{pmatrix}$$
Use $S^{-1}_{ij}$ to denote the matrix element of $S^{-1}$ in row $i$ and column $j$. Then, using the definition of the exponential of Problem I.2 and recalling that $a_1^2 = a_2^2 = 0$,
$$e^{-\frac12\Sigma_{ij} a_i S^{-1}_{ij} a_j} = e^{-\frac12\lambda[a_1a_2-a_2a_1]} = e^{-\lambda a_1a_2} = 1 - \lambda a_1a_2$$
and
$$e^{\Sigma_i b_ia_i}\, e^{-\frac12\Sigma_{ij} a_iS^{-1}_{ij}a_j} = e^{b_1a_1+b_2a_2}\, e^{-\lambda a_1a_2} = \bigl\{1 + (b_1a_1+b_2a_2) + \tfrac12(b_1a_1+b_2a_2)^2\bigr\}\bigl\{1-\lambda a_1a_2\bigr\} = \bigl\{1 + b_1a_1 + b_2a_2 - b_1b_2a_1a_2\bigr\}\bigl\{1-\lambda a_1a_2\bigr\} = 1 + b_1a_1 + b_2a_2 - (\lambda+b_1b_2)\,a_1a_2$$
Consequently, the integrals
$$\int e^{-\frac12\Sigma_{ij}a_iS^{-1}_{ij}a_j}\; da_2da_1 = -\lambda \qquad\qquad \int e^{\Sigma_i b_ia_i}\, e^{-\frac12\Sigma_{ij}a_iS^{-1}_{ij}a_j}\; da_2da_1 = -(\lambda+b_1b_2)$$
and their ratio is
$$\frac{\int e^{\Sigma_i b_ia_i}\, e^{-\frac12\Sigma_{ij}a_iS^{-1}_{ij}a_j}\; da_2da_1}{\int e^{-\frac12\Sigma_{ij}a_iS^{-1}_{ij}a_j}\; da_2da_1} = \frac{-(\lambda+b_1b_2)}{-\lambda} = 1 + \tfrac1\lambda b_1b_2 = e^{-\frac12\Sigma_{ij} b_iS_{ij}b_j}$$

Example I.9 Let $V$ be a $D = 2r$ dimensional vector space with basis $\{a_1,\dots,a_D\}$ and $V'$ be a second $2r$ dimensional vector space with basis $\{b_1,\dots,b_D\}$. Set $S = \bigwedge V'$. Let $\lambda_1,\dots,\lambda_r$ be nonzero complex numbers and let $S$ be the $D\times D$ skew symmetric matrix
$$S = \bigoplus_{m=1}^{r} \begin{pmatrix} 0 & -\frac1{\lambda_m} \\ \frac1{\lambda_m} & 0 \end{pmatrix}$$
All matrix elements of $S$ are zero, except for $r$ $2\times2$ blocks running down the diagonal. For example, if $r = 2$,
$$S = \begin{pmatrix} 0 & -\frac1{\lambda_1} & 0 & 0 \\ \frac1{\lambda_1} & 0 & 0 & 0 \\ 0 & 0 & 0 & -\frac1{\lambda_2} \\ 0 & 0 & \frac1{\lambda_2} & 0 \end{pmatrix}$$
Then, by the computations of Example I.8,
$$e^{-\frac12\Sigma_{ij}a_iS^{-1}_{ij}a_j} = \prod_{m=1}^{r} e^{-\lambda_m a_{2m-1}a_{2m}} = \prod_{m=1}^{r}\bigl(1-\lambda_m a_{2m-1}a_{2m}\bigr)$$
and
$$e^{\Sigma_i b_ia_i}\, e^{-\frac12\Sigma_{ij}a_iS^{-1}_{ij}a_j} = \prod_{m=1}^{r} e^{b_{2m-1}a_{2m-1}+b_{2m}a_{2m}}\, e^{-\lambda_m a_{2m-1}a_{2m}} = \prod_{m=1}^{r}\bigl(1 + b_{2m-1}a_{2m-1} + b_{2m}a_{2m} - (\lambda_m+b_{2m-1}b_{2m})\,a_{2m-1}a_{2m}\bigr)$$
This time, the integrals
$$\int e^{-\frac12\Sigma_{ij}a_iS^{-1}_{ij}a_j}\; da_D\cdots da_1 = \prod_{m=1}^{r}(-\lambda_m) \qquad\qquad \int e^{\Sigma_i b_ia_i}\, e^{-\frac12\Sigma_{ij}a_iS^{-1}_{ij}a_j}\; da_D\cdots da_1 = \prod_{m=1}^{r}\bigl(-\lambda_m - b_{2m-1}b_{2m}\bigr)$$
and their ratio is
$$\frac{\int e^{\Sigma_i b_ia_i}\, e^{-\frac12\Sigma_{ij}a_iS^{-1}_{ij}a_j}\; da_D\cdots da_1}{\int e^{-\frac12\Sigma_{ij}a_iS^{-1}_{ij}a_j}\; da_D\cdots da_1} = \prod_{m=1}^{r}\Bigl(1 + \tfrac1{\lambda_m}\, b_{2m-1}b_{2m}\Bigr) = e^{-\frac12\Sigma_{ij}b_iS_{ij}b_j}$$

Lemma I.10 Let $V$ be a $D = 2r$ dimensional vector space with basis $\{a_1,\dots,a_D\}$ and $V'$ be a second $D$ dimensional vector space with basis $\{b_1,\dots,b_D\}$. Set $S = \bigwedge V'$. Let $S$ be a $D\times D$ invertible skew symmetric matrix. Then
$$\frac{\int e^{\Sigma_i b_ia_i}\, e^{-\frac12\Sigma_{ij}a_iS^{-1}_{ij}a_j}\; da_D\cdots da_1}{\int e^{-\frac12\Sigma_{ij}a_iS^{-1}_{ij}a_j}\; da_D\cdots da_1} = e^{-\frac12\Sigma_{ij}b_iS_{ij}b_j}$$

Proof: Both sides of the claimed equation are rational, and hence meromorphic, functions of the matrix elements of $S$. So, by analytic continuation, it suffices to consider matrices $S$ with real matrix elements. Because $S$ is skew symmetric, $S_{jk} = -S_{kj}$ for all $1\le j,k\le D$. Consequently, $\imath S$ is self–adjoint so that
• $V$ has an orthonormal basis of eigenvectors of $S$
• all eigenvalues of $S$ are pure imaginary
Because $S$ is invertible, it cannot have zero as an eigenvalue. Because $S$ has real matrix elements, $S\vec v = \mu\vec v$ implies $S\bar{\vec v} = \bar\mu\,\bar{\vec v}$ (with $\bar{\ }$ designating "complex conjugate") so that
• the eigenvalues and eigenvectors of $S$ come in complex conjugate pairs.
Call the eigenvalues of $S$ $\pm\imath\frac1{\lambda_1}, \pm\imath\frac1{\lambda_2}, \dots, \pm\imath\frac1{\lambda_r}$ and set
$$T = \bigoplus_{m=1}^{r} \begin{pmatrix} 0 & -\frac1{\lambda_m} \\ \frac1{\lambda_m} & 0 \end{pmatrix}$$
By Problem I.9, below, there exists a real orthogonal $D\times D$ matrix $R$ such that $R^tSR = T$. Define
$$a_i' = \sum_{j=1}^{D} R^t_{ij}\, a_j \qquad\qquad b_i' = \sum_{j=1}^{D} R^t_{ij}\, b_j$$
Then, as $R$ is orthogonal, $RR^t = \mathbb{1}$ so that $S = RTR^t$ and $S^{-1} = RT^{-1}R^t$. Consequently,
$$\Sigma_i\, b_i'a_i' = \sum_{i,j,k} R^t_{ij}b_j\, R^t_{ik}a_k = \sum_{i,j,k} b_j\, R_{ji}R^t_{ik}\, a_k = \sum_{j,k} b_j\, \delta_{j,k}\, a_k = \Sigma_i\, b_ia_i$$
where $\delta_{j,k}$ is one if $j = k$ and zero otherwise. Similarly,
$$\Sigma_{ij}\, a_iS^{-1}_{ij}a_j = \Sigma_{ij}\, a_i'T^{-1}_{ij}a_j' \qquad\qquad \Sigma_{ij}\, b_iS_{ij}b_j = \Sigma_{ij}\, b_i'T_{ij}b_j'$$
Hence, by Problem I.8 and Example I.9,
$$\frac{\int e^{\Sigma_i b_ia_i}\, e^{-\frac12\Sigma_{ij}a_iS^{-1}_{ij}a_j}\, da_D\cdots da_1}{\int e^{-\frac12\Sigma_{ij}a_iS^{-1}_{ij}a_j}\, da_D\cdots da_1} = \frac{\int e^{\Sigma_i b_i'a_i'}\, e^{-\frac12\Sigma_{ij}a_i'T^{-1}_{ij}a_j'}\, da_D\cdots da_1}{\int e^{-\frac12\Sigma_{ij}a_i'T^{-1}_{ij}a_j'}\, da_D\cdots da_1} = \frac{\int e^{\Sigma_i b_i'a_i'}\, e^{-\frac12\Sigma_{ij}a_i'T^{-1}_{ij}a_j'}\, da_D'\cdots da_1'}{\int e^{-\frac12\Sigma_{ij}a_i'T^{-1}_{ij}a_j'}\, da_D'\cdots da_1'} = e^{-\frac12\Sigma_{ij}b_i'T_{ij}b_j'} = e^{-\frac12\Sigma_{ij}b_iS_{ij}b_j}$$
Problem I.9 Let
◦ $S$ be a matrix
◦ $\lambda$ be a real number
◦ $\vec v_1$ and $\vec v_2$ be two mutually perpendicular, complex conjugate unit vectors
◦ $S\vec v_1 = \imath\lambda\vec v_1$ and $S\vec v_2 = -\imath\lambda\vec v_2$.
Set
$$\vec w_1 = \tfrac1{\sqrt2\,\imath}\bigl(\vec v_1 - \vec v_2\bigr) \qquad\qquad \vec w_2 = \tfrac1{\sqrt2}\bigl(\vec v_1 + \vec v_2\bigr)$$
a) Prove that
• $\vec w_1$ and $\vec w_2$ are two mutually perpendicular, real unit vectors
• $S\vec w_1 = \lambda\vec w_2$ and $S\vec w_2 = -\lambda\vec w_1$.
b) Suppose, in addition, that $S$ is a $2\times2$ matrix. Let $R$ be the $2\times2$ matrix whose first column is $\vec w_1$ and whose second column is $\vec w_2$. Prove that $R$ is a real orthogonal matrix and that
$$R^tSR = \begin{pmatrix} 0 & -\lambda \\ \lambda & 0 \end{pmatrix}$$
c) Generalise to the case in which $S$ is a $2r\times2r$ matrix.
I.3 Differentiation and Integration by Parts

Definition I.11 (Left Derivative) Left derivatives are the linear maps from $\bigwedge V$ to $\bigwedge V$ that are determined as follows. For each $\ell = 1,\dots,D$ and $I\in\bigcup_{n=0}^{D}\mathcal{M}_n$, the left derivative $\frac{\partial}{\partial a_\ell}\, a_I$ of the Grassmann monomial $a_I$ is
$$\frac{\partial}{\partial a_\ell}\, a_I = \begin{cases} 0 & \text{if } \ell\notin I \\ (-1)^{|J|}\, a_Ja_K & \text{if } a_I = a_Ja_\ell a_K \end{cases}$$
In the case of $\bigwedge_S V$, the left derivative is determined by linearity and
$$\frac{\partial}{\partial a_\ell}\, s\,a_I = \begin{cases} 0 & \text{if } \ell\notin I \\ (-1)^{|J|}\, s\, a_Ja_K & \text{if } a_I = a_Ja_\ell a_K \text{ and } s\in S_+ \\ -(-1)^{|J|}\, s\, a_Ja_K & \text{if } a_I = a_Ja_\ell a_K \text{ and } s\in S_- \end{cases}$$

Example I.12 For each $I = (i_1,\dots,i_n)$ in $\mathcal{M}_n$,
$$\frac{\partial}{\partial a_{i_n}}\cdots\frac{\partial}{\partial a_{i_1}}\, a_I = 1 \qquad\qquad \frac{\partial}{\partial a_{i_1}}\cdots\frac{\partial}{\partial a_{i_n}}\, a_I = (-1)^{\frac12|I|(|I|-1)}$$
Proposition I.13 (Product Rule, Leibniz's Rule) For all $k,\ell = 1,\dots,D$, all $I,J\in\bigcup_{n=0}^{D}\mathcal{M}_n$ and any $f\in\bigwedge_S V$,
(a) $\dfrac{\partial}{\partial a_\ell}\, a_k = \delta_{k,\ell}$
(b) $\dfrac{\partial}{\partial a_\ell}\bigl(a_Ia_J\bigr) = \Bigl(\dfrac{\partial}{\partial a_\ell}a_I\Bigr)a_J + (-1)^{|I|}\, a_I\Bigl(\dfrac{\partial}{\partial a_\ell}a_J\Bigr)$
(c) $\displaystyle\int \frac{\partial}{\partial a_\ell}\, f\; da_D\cdots da_1 = 0$
(d) The linear operators $\dfrac{\partial}{\partial a_k}$ and $\dfrac{\partial}{\partial a_\ell}$ anticommute. That is,
$$\frac{\partial}{\partial a_k}\frac{\partial}{\partial a_\ell} + \frac{\partial}{\partial a_\ell}\frac{\partial}{\partial a_k} = 0$$
Proof: Obvious, by direct calculation.
Problem I.10 Let $P(z) = \sum_{i\ge0} c_i z^i$ be a power series with complex coefficients and infinite radius of convergence and let $f(a)$ be an even element of $\bigwedge_S V$. Show that
$$\frac{\partial}{\partial a_\ell}\, P\bigl(f(a)\bigr) = P'\bigl(f(a)\bigr)\, \frac{\partial}{\partial a_\ell}\, f(a)$$
Example I.14 Let $V$ be a $D$ dimensional vector space with basis $\{a_1,\dots,a_D\}$ and $V'$ a second $D$ dimensional vector space with basis $\{b_1,\dots,b_D\}$. Think of $e^{\Sigma_i b_ia_i}$ as an element of either $\bigwedge_{\wedge V'} V$ or $\bigwedge(V\oplus V')$. (Remark I.7 provides a natural identification between $\bigwedge_{\wedge V} V'$, $\bigwedge_{\wedge V'} V$ and $\bigwedge(V\oplus V')$.) By the last problem, with $a_i$ replaced by $b_i$, $P(z) = e^z$ and $f(b) = \sum_i b_ia_i$,
$$\frac{\partial}{\partial b_\ell}\, e^{\Sigma_i b_ia_i} = e^{\Sigma_i b_ia_i}\, a_\ell$$
Iterating,
$$\frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_n}}\, e^{\Sigma_i b_ia_i} = \frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_{n-1}}}\, e^{\Sigma_i b_ia_i}\, a_{i_n} = \frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_{n-2}}}\, e^{\Sigma_i b_ia_i}\, a_{i_{n-1}}a_{i_n} = \cdots = e^{\Sigma_i b_ia_i}\, a_I$$
where $I = (i_1,\dots,i_n)\in\mathcal{M}_n$. In particular,
$$a_I = \frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_n}}\, e^{\Sigma_i b_ia_i}\Big|_{b_1,\dots,b_D=0}$$
where, as you no doubt guessed, $\sum_I \beta_I b_I\big|_{b_1,\dots,b_D=0}$ means $\beta_\emptyset$.
Definition I.15 Let $V$ be a $D$ dimensional vector space and $S$ a superalgebra. Let $S = \bigl(S_{ij}\bigr)$ be a skew symmetric matrix of order $D$. The Grassmann Gaussian integral with covariance $S$ is the linear map from $\bigwedge_S V$ to $S$ determined by
$$\int e^{\Sigma_i b_ia_i}\; d\mu_S(a) = e^{-\frac12\Sigma_{ij}\, b_iS_{ij}b_j}$$

Remark I.16 If the dimension $D$ of $V$ is even and if $S$ is invertible, then, by Lemma I.10,
$$\int f(a)\; d\mu_S(a) = \frac{\int f(a)\, e^{-\frac12\Sigma_{ij}a_iS^{-1}_{ij}a_j}\; da_D\cdots da_1}{\int e^{-\frac12\Sigma_{ij}a_iS^{-1}_{ij}a_j}\; da_D\cdots da_1}$$
The Gaussian measure on $\mathbb{R}^D$ with covariance $S$ (this time an invertible symmetric matrix) is
$$d\mu_S(\vec x) = \frac{e^{-\frac12\Sigma_{ij}x_iS^{-1}_{ij}x_j}\; d\vec x}{\int e^{-\frac12\Sigma_{ij}x_iS^{-1}_{ij}x_j}\; d\vec x}$$
This is the motivation for the name "Grassmann Gaussian integral". Definition I.15, however, makes sense even if $D$ is odd, or $S$ fails to be invertible. In particular, if $S$ is the zero matrix, $\int f(a)\; d\mu_S(a) = f(0)$.

Proposition I.17 (Integration by Parts) Let $S = \bigl(S_{ij}\bigr)$ be a skew symmetric matrix of order $D$. Then, for each $k = 1,\dots,D$,
$$\int a_k\, f(a)\; d\mu_S(a) = \sum_{\ell=1}^{D} S_{k\ell} \int \frac{\partial}{\partial a_\ell}\, f(a)\; d\mu_S(a)$$
Proof 1: This first argument, while instructive, is not complete. For it, we make the additional assumption that $D$ is even. Since both sides are continuous in $S$, it suffices to consider $S$ invertible. Furthermore, by linearity in $f$, it suffices to consider $f(a) = a_I$, with $I\in\mathcal{M}_n$. Then, by Proposition I.13.c,
$$0 = \sum_{\ell=1}^{D} S_{k\ell} \int \frac{\partial}{\partial a_\ell}\Bigl( a_I\, e^{-\frac12\Sigma a_iS^{-1}_{ij}a_j} \Bigr)\; da_D\cdots da_1 = \sum_{\ell=1}^{D} S_{k\ell} \int \Bigl( \frac{\partial a_I}{\partial a_\ell} - (-1)^{|I|}\, a_I \sum_{m=1}^{D} S^{-1}_{\ell m}\, a_m \Bigr)\, e^{-\frac12\Sigma a_iS^{-1}_{ij}a_j}\; da_D\cdots da_1$$
$$= \sum_{\ell=1}^{D} S_{k\ell} \int \frac{\partial a_I}{\partial a_\ell}\, e^{-\frac12\Sigma a_iS^{-1}_{ij}a_j}\; da_D\cdots da_1 - \int (-1)^{|I|}\, a_I\, a_k\, e^{-\frac12\Sigma a_iS^{-1}_{ij}a_j}\; da_D\cdots da_1$$
$$= \sum_{\ell=1}^{D} S_{k\ell} \int \frac{\partial a_I}{\partial a_\ell}\, e^{-\frac12\Sigma a_iS^{-1}_{ij}a_j}\; da_D\cdots da_1 - \int a_k\, a_I\, e^{-\frac12\Sigma a_iS^{-1}_{ij}a_j}\; da_D\cdots da_1$$
since $\sum_\ell S_{k\ell}S^{-1}_{\ell m} = \delta_{k,m}$ and $(-1)^{|I|}\, a_Ia_k = a_ka_I$. Dividing by $\int e^{-\frac12\Sigma a_iS^{-1}_{ij}a_j}\, da_D\cdots da_1$ gives
$$\int a_k\, a_I\; d\mu_S(a) = \sum_{\ell=1}^{D} S_{k\ell} \int \frac{\partial}{\partial a_\ell}\, a_I\; d\mu_S(a)$$

Proof 2: By linearity, it suffices to consider $f(a) = a_I$ with $I = (i_1,\dots,i_n)\in\mathcal{M}_n$. Then
$$\int a_k\, a_I\; d\mu_S(a) = \frac{\partial}{\partial b_k}\frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_n}} \int e^{\Sigma_i b_ia_i}\; d\mu_S(a)\Big|_{b_1,\dots,b_D=0} = \frac{\partial}{\partial b_k}\frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_n}}\, e^{-\frac12\Sigma_{i\ell}\, b_iS_{i\ell}b_\ell}\Big|_{b_1,\dots,b_D=0}$$
$$= (-1)^n\, \frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_n}}\frac{\partial}{\partial b_k}\, e^{-\frac12\Sigma_{i\ell}\, b_iS_{i\ell}b_\ell}\Big|_{b_1,\dots,b_D=0} = -(-1)^n \sum_\ell S_{k\ell}\, \frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_n}}\, b_\ell\, e^{-\frac12\Sigma_{ij}\, b_iS_{ij}b_j}\Big|_{b_1,\dots,b_D=0}$$
$$= -(-1)^n \sum_\ell S_{k\ell}\, \frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_n}} \int b_\ell\, e^{\Sigma_i b_ia_i}\; d\mu_S(a)\Big|_{b_1,\dots,b_D=0} = (-1)^n \sum_\ell S_{k\ell}\, \frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_n}} \int \frac{\partial}{\partial a_\ell}\, e^{\Sigma_i b_ia_i}\; d\mu_S(a)\Big|_{b_1,\dots,b_D=0}$$
$$= \sum_\ell S_{k\ell} \int \frac{\partial}{\partial a_\ell}\, \frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_n}}\, e^{\Sigma_i b_ia_i}\Big|_{b_1,\dots,b_D=0}\; d\mu_S(a) = \sum_\ell S_{k\ell} \int \frac{\partial}{\partial a_\ell}\, a_I\; d\mu_S(a)$$
In the middle steps we used $b_\ell\, e^{\Sigma_i b_ia_i} = -\frac{\partial}{\partial a_\ell}\, e^{\Sigma_i b_ia_i}$; in the last line, moving $\frac{\partial}{\partial a_\ell}$ past the $n$ anticommuting derivatives $\frac{\partial}{\partial b_{i_j}}$ produces a factor $(-1)^n$ that cancels the one already present.
I.4 Grassmann Gaussian Integrals

We now look more closely at the values of
$$\int a_{i_1}\cdots a_{i_n}\; d\mu_S(a) = \frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_n}}\, e^{-\frac12\Sigma_{i\ell}\, b_iS_{i\ell}b_\ell}\Big|_{b_1,\dots,b_D=0}$$
Obviously,
$$\int 1\; d\mu_S(a) = 1$$
and, as $\sum_{i,\ell} b_iS_{i\ell}b_\ell$ is even, $\frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_n}}\, e^{-\frac12\Sigma_{i\ell}\, b_iS_{i\ell}b_\ell}$ has the same parity as $n$, so
$$\int a_{i_1}\cdots a_{i_n}\; d\mu_S(a) = 0 \qquad\text{if $n$ is odd}$$
To get a little practice with integration by parts (Proposition I.17) we use it to evaluate $\int a_{i_1}\cdots a_{i_n}\, d\mu_S(a)$ when $n = 2$,
$$\int a_{i_1}a_{i_2}\; d\mu_S = \sum_{m=1}^{D} S_{i_1m} \int \frac{\partial}{\partial a_m}\, a_{i_2}\; d\mu_S = \sum_{m=1}^{D} S_{i_1m} \int \delta_{m,i_2}\; d\mu_S = S_{i_1i_2}$$
and $n = 4$,
$$\int a_{i_1}a_{i_2}a_{i_3}a_{i_4}\; d\mu_S = \sum_{m=1}^{D} S_{i_1m} \int \frac{\partial}{\partial a_m}\bigl(a_{i_2}a_{i_3}a_{i_4}\bigr)\; d\mu_S = \int \bigl( S_{i_1i_2}a_{i_3}a_{i_4} - S_{i_1i_3}a_{i_2}a_{i_4} + S_{i_1i_4}a_{i_2}a_{i_3} \bigr)\; d\mu_S = S_{i_1i_2}S_{i_3i_4} - S_{i_1i_3}S_{i_2i_4} + S_{i_1i_4}S_{i_2i_3}$$
Observe that each term is a sign times $S_{i_{\pi(1)}i_{\pi(2)}}\, S_{i_{\pi(3)}i_{\pi(4)}}$ for some permutation $\pi$ of $(1,2,3,4)$. The sign is always the sign of the permutation. Furthermore, for each odd $j$, $\pi(j)$ is smaller than all of $\pi(j+1), \pi(j+2),\dots$ (because each time we applied $\int a_{i_\ell}\cdots\, d\mu_S = \sum_m S_{i_\ell m}\int\frac{\partial}{\partial a_m}\cdots\, d\mu_S$, we always used the smallest available $\ell$). From this you would probably guess that
$$\int a_{i_1}\cdots a_{i_n}\; d\mu_S(a) = \sum_\pi \operatorname{sgn}\pi\; S_{i_{\pi(1)}i_{\pi(2)}}\cdots S_{i_{\pi(n-1)}i_{\pi(n)}}$$
where the sum is over all permutations $\pi$ of $1,2,\dots,n$ that obey
$$\pi(1)<\pi(3)<\cdots<\pi(n-1) \qquad\text{and}\qquad \pi(k)<\pi(k+1) \ \text{ for all } k = 1,3,5,\dots,n-1$$
Another way to write the same thing, while avoiding the ugly constraints, is
$$\int a_{i_1}\cdots a_{i_n}\; d\mu_S(a) = \frac{1}{2^{n/2}(n/2)!} \sum_\pi \operatorname{sgn}\pi\; S_{i_{\pi(1)}i_{\pi(2)}}\cdots S_{i_{\pi(n-1)}i_{\pi(n)}} \tag{I.1}$$
where now the sum is over all permutations $\pi$ of $1,2,\dots,n$. The right hand side is precisely the definition of the Pfaffian of the $n\times n$ matrix whose $(\ell,m)$ matrix element is $S_{i_\ell i_m}$. Pfaffians are closely related to determinants. The following Proposition gives their main properties. This Proposition is proven in Appendix B.
Proposition I.18 Let $S = (S_{ij})$ be a skew symmetric matrix of even order $n = 2m$.
a) For all $1\le k\ne\ell\le n$, let $M_{k\ell}$ be the matrix obtained from $S$ by deleting rows $k$ and $\ell$ and columns $k$ and $\ell$. Then
$$\operatorname{Pf}(S) = \sum_{\ell=1}^{n} \operatorname{sgn}(k-\ell)\, (-1)^{k+\ell}\, S_{k\ell}\, \operatorname{Pf}\bigl(M_{k\ell}\bigr)$$
In particular,
$$\operatorname{Pf}(S) = \sum_{\ell=2}^{n} (-1)^{\ell}\, S_{1\ell}\, \operatorname{Pf}\bigl(M_{1\ell}\bigr)$$
b) If
$$S = \begin{pmatrix} 0 & C \\ -C^t & 0 \end{pmatrix}$$
where $C$ is a complex $m\times m$ matrix, then
$$\operatorname{Pf}(S) = (-1)^{\frac12 m(m-1)}\, \det(C)$$
c) For any $n\times n$ matrix $B$,
$$\operatorname{Pf}\bigl(B^tSB\bigr) = \det(B)\, \operatorname{Pf}(S)$$
d) $\operatorname{Pf}(S)^2 = \det(S)$
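Parts a), b) and d) can be cross-checked mechanically: implement Pf by the "expansion along the first row" and compare it against a cofactor-expansion determinant. All names below are ours:

```python
def pf(S):
    """Pfaffian via Proposition I.18.a with k = 1:
    Pf(S) = sum over l = 2..n of (-1)^l S[1][l] Pf(M_{1l})  (1-based)."""
    n = len(S)
    if n == 0:
        return 1
    if n % 2 == 1:
        return 0
    total = 0
    for l in range(1, n):        # 0-based column l is 1-based column l+1
        rest = [r for r in range(n) if r not in (0, l)]
        minor = [[S[i][j] for j in rest] for i in rest]
        total += (-1) ** (l + 1) * S[0][l] * pf(minor)
    return total

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if not M:
        return 1
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

# Proposition I.18.b with m = 2:  S = [[0, C], [-C^t, 0]] and
# Pf(S) = (-1)^(m(m-1)/2) det(C) = -det(C)
C = [[1.0, 2.0], [3.0, 5.0]]
S = [[0, 0, C[0][0], C[0][1]],
     [0, 0, C[1][0], C[1][1]],
     [-C[0][0], -C[1][0], 0, 0],
     [-C[0][1], -C[1][1], 0, 0]]
assert pf(S) == -det(C)
# Proposition I.18.d:  Pf(S)^2 = det(S)
assert pf(S) ** 2 == det(S)
```

For this example $\det C = -1$, so $\operatorname{Pf}(S) = 1$ and $\det S = 1$, consistent with both identities.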
Using these properties of Pfaffians (in particular the “expansion along the first row” of Proposition I.18.a) we can easily verify that our conjecture (I.1) was correct.
Proposition I.19 For all even $n\ge2$ and all $1\le i_1,\dots,i_n\le D$,
$$\int a_{i_1}\cdots a_{i_n}\; d\mu_S(a) = \operatorname{Pf}\Bigl[\, S_{i_ki_\ell} \,\Bigr]_{1\le k,\ell\le n}$$

Proof: The proof is by induction on $n$. The statement of the proposition has already been verified for $n = 2$. Integrating by parts,
$$\int a_{i_1}\cdots a_{i_n}\; d\mu_S(a) = \sum_{\ell=1}^{D} S_{i_1\ell} \int \frac{\partial}{\partial a_\ell}\bigl( a_{i_2}\cdots a_{i_n} \bigr)\; d\mu_S(a) = \sum_{j=2}^{n} (-1)^{j}\, S_{i_1i_j} \int a_{i_2}\cdots a_{i_{j-1}}a_{i_{j+1}}\cdots a_{i_n}\; d\mu_S(a)$$
Our induction hypothesis and Proposition I.18.a now imply
$$\int a_{i_1}\cdots a_{i_n}\; d\mu_S(a) = \operatorname{Pf}\Bigl[\, S_{i_ki_\ell} \,\Bigr]_{1\le k,\ell\le n}$$
Hence we may use the following as an alternative to Definition I.15.
Definition I.20 Let $S$ be a superalgebra, $V$ be a finite dimensional complex vector space and $S$ a skew symmetric bilinear form on $V$. Then the Grassmann Gaussian integral on $\bigwedge_S V$ with covariance $S$ is the $S$–linear map
$$f(a)\in\bigwedge\nolimits_S V \;\mapsto\; \int f(a)\; d\mu_S(a)\in S$$
that is determined as follows. Choose a basis $\bigl\{\, a_i \;\big|\; 1\le i\le D \,\bigr\}$ for $V$. Then
$$\int a_{i_1}a_{i_2}\cdots a_{i_n}\; d\mu_S(a) = \operatorname{Pf}\bigl[\, S_{i_ki_\ell} \,\bigr]_{1\le k,\ell\le n}$$
where $S_{ij} = S(a_i,a_j)$.
Proposition I.21 Let $S$ and $T$ be skew symmetric matrices of order $D$. Then
$$\int f(a)\; d\mu_{S+T}(a) = \int \Bigl[ \int f(a+b)\; d\mu_S(a) \Bigr]\; d\mu_T(b)$$

Proof: Let $V$ be a $D$ dimensional vector space with basis $\{a_1,\dots,a_D\}$. Let $V'$ and $V''$ be two more copies of $V$ with bases $\{b_1,\dots,b_D\}$ and $\{c_1,\dots,c_D\}$ respectively. It suffices to consider $f(a) = e^{\Sigma_i c_ia_i}$. Viewing $f(a)$ as an element of $\bigwedge_{\wedge V''} V$,
$$\int f(a)\; d\mu_{S+T}(a) = \int e^{\Sigma_i c_ia_i}\; d\mu_{S+T}(a) = e^{-\frac12\Sigma_{ij}\, c_i(S_{ij}+T_{ij})c_j}$$
Viewing $f(a+b) = e^{\Sigma_i c_i(a_i+b_i)}$ as an element of $\bigwedge_{\wedge(V'\oplus V'')} V$,
$$\int f(a+b)\; d\mu_S(a) = e^{\Sigma_i c_ib_i} \int e^{\Sigma_i c_ia_i}\; d\mu_S(a) = e^{\Sigma_i c_ib_i}\, e^{-\frac12\Sigma_{ij}\, c_iS_{ij}c_j}$$
Now viewing $e^{\Sigma_i c_ib_i}\, e^{-\frac12\Sigma_{ij}\, c_iS_{ij}c_j}$ as an element of $\bigwedge_{\wedge V''} V'$,
$$\int \Bigl[ \int f(a+b)\; d\mu_S(a) \Bigr]\; d\mu_T(b) = e^{-\frac12\Sigma_{ij}\, c_iS_{ij}c_j} \int e^{\Sigma_i c_ib_i}\; d\mu_T(b) = e^{-\frac12\Sigma_{ij}\, c_iS_{ij}c_j}\, e^{-\frac12\Sigma_{ij}\, c_iT_{ij}c_j} = e^{-\frac12\Sigma_{ij}\, c_i(S_{ij}+T_{ij})c_j}$$
as desired.
Problem I.11 Let $V$ and $V'$ be vector spaces with bases $\{a_1,\dots,a_D\}$ and $\{b_1,\dots,b_{D'}\}$ respectively. Let $S$ and $T$ be $D\times D$ and $D'\times D'$ skew symmetric matrices. Prove that
$$\int \Bigl[ \int f(a,b)\; d\mu_S(a) \Bigr]\; d\mu_T(b) = \int \Bigl[ \int f(a,b)\; d\mu_T(b) \Bigr]\; d\mu_S(a)$$
Problem I.12 Let $V$ be a $D$ dimensional vector space with basis $\{a_1,\dots,a_D\}$ and $V'$ be a second copy of $V$ with basis $\{c_1,\dots,c_D\}$. Let $S$ be a $D\times D$ skew symmetric matrix. Prove that
$$\int e^{\Sigma_i c_ia_i}\, f(a)\; d\mu_S(a) = e^{-\frac12\Sigma_{ij}\, c_iS_{ij}c_j} \int f(a - Sc)\; d\mu_S(a)$$
Here $\bigl(Sc\bigr)_i = \sum_j S_{ij}\, c_j$.
I.5 Grassmann Integrals and Fermionic Quantum Field Theories

In a quantum mechanical model, the set of possible states of the system form (the rays in) a Hilbert space $\mathcal H$ and the time evolution of the system is determined by a self–adjoint operator $H$ on $\mathcal H$, called the Hamiltonian. We shall denote by $\Omega$ a ground state of $H$ (eigenvector of $H$ of lowest eigenvalue). In a quantum field theory, there is additional structure. There is a special family, $\bigl\{\varphi(\mathbf x,\sigma)\;\big|\;\mathbf x\in\mathbb R^d,\ \sigma\in\mathcal S\bigr\}$, of operators on $\mathcal H$, called annihilation operators. Here $d$ is the dimension of space and $\mathcal S$ is a finite set. You should think of $\varphi(\mathbf x,\sigma)$ as destroying a particle of spin $\sigma$ at $\mathbf x$. The adjoints, $\bigl\{\varphi^\dagger(\mathbf x,\sigma)\;\big|\;\mathbf x\in\mathbb R^d,\ \sigma\in\mathcal S\bigr\}$, of these operators are called creation operators. You should think of $\varphi^\dagger(\mathbf x,\sigma)$ as creating a particle of spin $\sigma$ at $\mathbf x$. All states in $\mathcal H$ can be expressed as linear combinations of products of annihilation and creation operators applied to $\Omega$. The time evolved annihilation and creation operators are
$$e^{iHt}\varphi(\mathbf x,\sigma)e^{-iHt} \qquad\qquad e^{iHt}\varphi^\dagger(\mathbf x,\sigma)e^{-iHt}$$
If you are primarily interested in thermodynamic quantities, you should analytically continue these operators to imaginary $t = -i\tau$,
$$e^{H\tau}\varphi(\mathbf x,\sigma)e^{-H\tau} \qquad\qquad e^{H\tau}\varphi^\dagger(\mathbf x,\sigma)e^{-H\tau}$$
because the density matrix for temperature $T$ is $e^{-\beta H}$ where $\beta = \frac1{kT}$. The imaginary time operators (or rather, various inner products constructed from them) are also easier to deal with mathematically rigorously than the corresponding real time inner products. It has turned out tactically advantageous to attack the real time operators by first concentrating on imaginary time and then analytically continuing back. If you are interested in grand canonical ensembles (thermodynamics in which you adjust the average energy of the system through $\beta$ and the average density of the system through the chemical potential $\mu$) you replace the Hamiltonian $H$ by $K = H - \mu N$, where $N$ is the number operator and $\mu$ is the chemical potential. This brings us to
$$\varphi(\tau,\mathbf x,\sigma) = e^{K\tau}\varphi(\mathbf x,\sigma)e^{-K\tau} \qquad\qquad \bar\varphi(\tau,\mathbf x,\sigma) = e^{K\tau}\varphi^\dagger(\mathbf x,\sigma)e^{-K\tau} \tag{I.2}$$
Note that $\bar\varphi(\tau,\mathbf x,\sigma)$ is neither the complex conjugate, nor adjoint, of $\varphi(\tau,\mathbf x,\sigma)$.

In any quantum mechanical model, the quantities you measure (called observables) are represented by operators on $\mathcal H$. The expected value of the observable $O$ when
( )
Here the ϕ signifies that both ϕ and ϕ¯ may appear in the product. The symbol T designates
the “time ordering” operator, defined (for fermionic models) by ) ( ) ( ) ( ) T(ϕ(τ 1 , x1 , σ1 ) · · · ϕ(τp , xp , σp ) = sgnπ ϕ(τπ(1) , xπ(1) , σπ(1) ) · · · ϕ(τπ(p) , xπ(p) , σπ(p) )
where π is a permutation that obeys τπ(i) ≥ τπ(i+1) for all 1 ≤ i < p. There is also a tie breaking rule to deal with the case when τi = τj for some i 6= j, but it doesn’t interest us here. Observe that ) ( ) ϕ(τπ(1) ,xπ(1) ,σπ(1) ) · · · (ϕ( τπ(p) ,xπ(p) ,σπ(p) ) = eKτπ(1) ϕ(†) (xπ(1) ,σπ(1) )eK(τπ(2) −τπ(1) ) ϕ(†) · · · eK(τπ(p) −τπ(p−1) ) ϕ(†) (xπ(p) ,σπ(p) )e−Kτπ(p)
The time ordering is chosen so that, when you sub in (I.2), and exploit the fact that Ω is E D Qp ) , x , σ )Ω has an eigenvector of K, every eK(τπ(i) −τπ(i−1) ) that appears in Ω, T `=1 (ϕ(τ ` ` ` τπ(i) − τπ(i−1) ≤ 0. Time ordering is introduced to ensure that the inner product is well–
defined: the operator K is bounded from below, but not from above. Thanks to the sgnπ D E Qp ) in the definition of T, Ω, T `=1 (ϕ(τ , x , σ )Ω is antisymmetric under permutations of ` ` `
) the (ϕ(τ ` , x` , σ` )’s.
Now comes the connection to Grassmann integrals. Let x = (τ, x). Please forget, for the time being, that x does not run over a finite set. Let V be a vector space with
basis {ψx,σ , ψ¯x,σ }. Note, once again, that ψ¯x,σ is NOT the complex conjugate of ψx,σ . It is just another vector that is totally independent of ψx,σ . Then, formally, it turns out that R Qp ( ) ¯ Q A(ψ,ψ) E D ¯ p Q `=1 ψ x` ,σ` e x,σ dψx,σ dψx,σ ( ) p R ϕ(x` , σ` )Ω = (−1) Ω, T ¯ Q ¯ eA(ψ,ψ) `=1 x,σ dψx,σ dψx,σ
¯ is called the action and is determined by the Hamiltonian The exponent A(ψ, ψ)
H. A typical action of interest is that corresponding to a gas of fermions (e.g. electrons), of strictly positive density, interacting through a two–body potential u(x − y). It is Z P dd+1 k k2 ¯ A(ψ, ψ) = − ik − − µ ψ¯k,σ ψk,σ 0 d+1 2m (2π) σ∈S
−
λ 2
P
σ,σ 0 ∈S
Z
4 Q
i=1
dd+1 ki (2π)d+1 δ(k1 +k2 −k3 −k4 )ψ¯k1 ,σ ψk3 ,σ u ˆ(k1 (2π)d+1
20
− k3 )ψ¯k2 ,σ 0 ψk4 ,σ 0
Here $\psi_{k,\sigma}$ is the Fourier transform of $\psi_{x,\sigma}$ and $\hat u(\mathbf k)$ is the Fourier transform of $u(\mathbf x)$. The zero component $k_0$ of $k$ is the dual variable to $\tau$ and is thought of as an energy; the final $d$ components $\mathbf k$ are the dual variables to $\mathbf x$ and are thought of as momenta. So, in $A$, $\frac{\mathbf k^2}{2m}$
is the kinetic energy of a particle and the delta function δ(k1 + k2 − k3 − k4 ) enforces
conservation of energy/momentum. As above, $\mu$ is the chemical potential, which controls the density of the gas, and $\hat u$ is the Fourier transform of the two–body potential. More generally, when the fermion gas is subject to a periodic potential due to a crystal lattice, the quadratic term in the action is replaced by
$$-\sum_{\sigma\in\mathcal S}\int \frac{d^{d+1}k}{(2\pi)^{d+1}}\; \bigl(ik_0 - e(\mathbf k)\bigr)\, \bar\psi_{k,\sigma}\psi_{k,\sigma}$$
where $e(\mathbf k)$ is the dispersion relation minus the chemical potential $\mu$.

I know that quite a few people are squeamish about dealing with infinite dimensional Grassmann algebras and integrals. Infinite dimensional Grassmann algebras, per se, are no big deal. See Appendix A. It is true that the Grassmann "Cartesian measure" $\prod_{x,\sigma} d\psi_{x,\sigma}d\bar\psi_{x,\sigma}$ does not make much sense when the dimension is infinite. But this problem is easily dealt with: combine the quadratic part of the action $A$ with $\prod_{x,\sigma} d\psi_{x,\sigma}d\bar\psi_{x,\sigma}$
to form a Gaussian integral. Formally,
$$(-1)^p\,\Bigl\langle \Omega,\ \mathbb T \prod_{\ell=1}^{p} \varphi^{(\bar{\ })}(\tau_\ell,\mathbf x_\ell,\sigma_\ell)\,\Omega \Bigr\rangle = \frac{\int \prod_{\ell=1}^{p} \psi^{(\bar{\ })}_{x_\ell,\sigma_\ell}\; e^{A(\psi,\bar\psi)}\, \prod_{x,\sigma} d\psi_{x,\sigma}d\bar\psi_{x,\sigma}}{\int e^{A(\psi,\bar\psi)}\, \prod_{x,\sigma} d\psi_{x,\sigma}d\bar\psi_{x,\sigma}} = \frac{\int \prod_{\ell=1}^{p} \psi^{(\bar{\ })}_{x_\ell,\sigma_\ell}\; e^{W(\psi,\bar\psi)}\; d\mu_S(\psi,\bar\psi)}{\int e^{W(\psi,\bar\psi)}\; d\mu_S(\psi,\bar\psi)} \tag{I.3}$$
where
$$W(\psi,\bar\psi) = -\frac\lambda2 \sum_{\sigma,\sigma'\in\mathcal S} \int \prod_{i=1}^{4} \frac{d^{d+1}k_i}{(2\pi)^{d+1}}\; (2\pi)^{d+1}\delta(k_1+k_2-k_3-k_4)\; \bar\psi_{k_1,\sigma}\psi_{k_3,\sigma}\, \hat u(k_1-k_3)\, \bar\psi_{k_2,\sigma'}\psi_{k_4,\sigma'}$$
and $\int\,\cdot\; d\mu_S(\psi,\bar\psi)$ is the Grassmann Gaussian integral with covariance determined by
$$\int \psi_{k,\sigma}\psi_{p,\sigma'}\; d\mu_S(\psi,\bar\psi) = 0 \qquad\qquad \int \bar\psi_{k,\sigma}\bar\psi_{p,\sigma'}\; d\mu_S(\psi,\bar\psi) = 0$$
$$\int \psi_{k,\sigma}\bar\psi_{p,\sigma'}\; d\mu_S(\psi,\bar\psi) = \frac{\delta_{\sigma,\sigma'}}{ik_0 - e(\mathbf k)}\, (2\pi)^{d+1}\delta(k-p) \qquad\qquad \int \bar\psi_{k,\sigma}\psi_{p,\sigma'}\; d\mu_S(\psi,\bar\psi) = -\frac{\delta_{\sigma,\sigma'}}{ik_0 - e(\mathbf k)}\, (2\pi)^{d+1}\delta(k-p) \tag{I.4}$$
Problem I.13 Let $V$ be a complex vector space with even dimension $D = 2r$ and basis $\{\psi_1,\cdots,\psi_r,\bar\psi_1,\cdots,\bar\psi_r\}$. Again, $\bar\psi_i$ need not be the complex conjugate of $\psi_i$. Let $A$ be an $r\times r$ matrix and $\int\,\cdot\,d\mu_A(\psi,\bar\psi)$ the Grassmann Gaussian integral obeying
$$
\int \psi_i\bar\psi_j\, d\mu_A(\psi,\bar\psi) = A_{ij}
\qquad
\int \psi_i\psi_j\, d\mu_A(\psi,\bar\psi) = \int \bar\psi_i\bar\psi_j\, d\mu_A(\psi,\bar\psi) = 0
$$
a) Prove
$$
\int \psi_{i_n}\cdots\psi_{i_1}\bar\psi_{j_1}\cdots\bar\psi_{j_m}\, d\mu_A(\psi,\bar\psi)
= \sum_{\ell=1}^m (-1)^{\ell+1} A_{i_1j_\ell}\int \psi_{i_n}\cdots\psi_{i_2}\,\bar\psi_{j_1}\cdots\not{\bar\psi}_{j_\ell}\cdots\bar\psi_{j_m}\, d\mu_A(\psi,\bar\psi)
$$
Here, the $\not{\bar\psi}_{j_\ell}$ signifies that the factor $\bar\psi_{j_\ell}$ is omitted from the integrand.
b) Prove that, if $n\neq m$,
$$
\int \psi_{i_n}\cdots\psi_{i_1}\bar\psi_{j_1}\cdots\bar\psi_{j_m}\, d\mu_A(\psi,\bar\psi) = 0
$$
c) Prove that
$$
\int \psi_{i_n}\cdots\psi_{i_1}\bar\psi_{j_1}\cdots\bar\psi_{j_n}\, d\mu_A(\psi,\bar\psi)
= \det\big[A_{i_kj_\ell}\big]_{1\le k,\ell\le n}
$$
d) Let $V'$ be a second copy of $V$ with basis $\{\zeta_1,\cdots,\zeta_r,\bar\zeta_1,\cdots,\bar\zeta_r\}$. View $e^{\Sigma_i(\bar\zeta_i\psi_i+\bar\psi_i\zeta_i)}$ as an element of $\bigwedge V'\otimes\bigwedge V$. Prove that
$$
\int e^{\Sigma_i(\bar\zeta_i\psi_i+\bar\psi_i\zeta_i)}\, d\mu_A(\psi,\bar\psi) = e^{\Sigma_{i,j}\bar\zeta_iA_{ij}\zeta_j}
$$
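Parts (a) and (c) of Problem I.13 together say that these Grassmann Gaussian moments obey the Laplace (cofactor) expansion of a determinant along its first row. The following sketch (assuming numpy; the function and variable names are ours, not the text's) checks that combinatorial structure numerically, with no Grassmann variables involved:

```python
import numpy as np

def det_by_cofactors(A):
    # det A = sum_ell (-1)^(ell+1) A[1,ell] * det(A with row 1 and column ell
    # removed), which is the recursion of Problem I.13(a) combined with I.13(c).
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    return sum((-1) ** ell * A[0, ell] *
               det_by_cofactors(np.delete(np.delete(A, 0, 0), ell, 1))
               for ell in range(n))

A = np.random.default_rng(0).standard_normal((5, 5))
result = det_by_cofactors(A)
```

The sign $(-1)^{\ell+1}$ with 1-based $\ell$ becomes $(-1)^{\ell}$ with the 0-based loop index used here.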
To save writing, let's rename the generators of the Grassmann algebra from $\{\psi_{k,\sigma},\bar\psi_{k,\sigma}\}$ to $\{a_i\}$. In this new notation, the right hand side of (I.3) becomes
$$
G(i_1,\cdots,i_p) = \frac{1}{Z}\int \prod_{\ell=1}^p a_{i_\ell}\, e^{W(a)}\, d\mu_S(a)
\qquad\text{where}\qquad
Z = \int e^{W(a)}\, d\mu_S(a)
$$
These are called the Euclidean Green's functions or Schwinger functions. They determine all expectation values in the model. Let $\{c_i\}$ be the basis of a second copy of the vector space with basis $\{a_i\}$. Then all expectation values in the model are also determined by
$$
S(c) = \frac{1}{Z}\int e^{\Sigma_i c_ia_i}\, e^{W(a)}\, d\mu_S(a)
\tag{I.5}
$$
(called the generator of the Euclidean Green's functions) or equivalently, by
$$
C(c) = \log \frac{1}{Z}\int e^{\Sigma_i c_ia_i}\, e^{W(a)}\, d\mu_S(a)
\tag{I.6}
$$
(called the generator of the connected Green's functions) or equivalently (see Problem I.14, below), by
$$
G(c) = \log \frac{1}{Z}\int e^{W(a+c)}\, d\mu_S(a)
\tag{I.7}
$$
(called the generator of the connected, freely amputated Green's functions).
Problem I.14 Prove that
$$
C(c) = -\tfrac{1}{2}\,\Sigma_{ij}\, c_iS_{ij}c_j + G(-Sc)
\qquad\text{where}\qquad
\big(Sc\big)_i = \sum_j S_{ij}c_j
$$
If $S$ were really nice, the Grassmann Gaussian integrals of (I.5–I.7) would be very easy to define rigorously, even though the Grassmann algebra $\bigwedge V$ is infinite dimensional. The real problem is that $S$ is not really nice. However, one can write $S$ as the limit of really nice covariances. So the problem of making sense of the right hand side of, say, (I.7) may be expressed as the problem of controlling the limit of a sequence of well-defined quantities. The renormalization group is a tool that is used to control that limit. In the version of the renormalization group that I will use, the covariance $S$ is written as a sum
$$
S = \sum_{j=1}^\infty S^{(j)}
$$
with each $S^{(j)}$ really nice. Let
$$
S^{(\le J)} = \sum_{j=1}^J S^{(j)}
$$
and
$$
G_J(c) = \log \frac{1}{Z_J}\int e^{W(a+c)}\, d\mu_{S^{(\le J)}}(a)
\qquad\text{where}\qquad
Z_J = \int e^{W(a)}\, d\mu_{S^{(\le J)}}(a)
$$
(The normalization constant $Z_J$ is chosen so that $G_J(0) = 0$.) We have to control the limit of $G_J(c)$ as $J\to\infty$. We have rigged the definitions so that it is very easy to exhibit the relationship between $G_J(c)$ and $G_{J+1}(c)$:
$$
\begin{aligned}
G_{J+1}(c) &= \log \frac{1}{Z_{J+1}}\int e^{W(b+c)}\, d\mu_{S^{(\le J+1)}}(b)\\
&= \log \frac{1}{Z_{J+1}}\int e^{W(b+c)}\, d\mu_{S^{(\le J)}+S^{(J+1)}}(b)\\
&= \log \frac{1}{Z_{J+1}}\int\Big[\int e^{W(a+b+c)}\, d\mu_{S^{(\le J)}}(b)\Big]\, d\mu_{S^{(J+1)}}(a)
&&\text{by Proposition I.21}\\
&= \log \frac{Z_J}{Z_{J+1}}\int e^{G_J(c+a)}\, d\mu_{S^{(J+1)}}(a)
\end{aligned}
$$
Problem I.15 We have normalized $G_{J+1}$ so that $G_{J+1}(0) = 0$. So the ratio in
$$
G_{J+1}(c) = \log \frac{Z_J}{Z_{J+1}}\int e^{G_J(c+a)}\, d\mu_{S^{(J+1)}}(a)
$$
had better obey
$$
\frac{Z_{J+1}}{Z_J} = \int e^{G_J(a)}\, d\mu_{S^{(J+1)}}(a)
$$
Verify by direct computation that this is the case.
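For a first feel for this recursion, one can track it on the one-parameter family $W(a)=\lambda a_ia_j$, for which Example I.25 in §I.6 below shows that the renormalization group map acts as $\lambda\mapsto\lambda/(1+\lambda S_{ij})$. The following numerical sketch (the coupling and scale values are illustrative, not from the text) confirms that integrating out the single-scale covariances one at a time agrees with integrating out $S^{(\le J)}$ in a single step:

```python
# Toy check of G_{J+1} = Omega_{S^(J+1)}(G_J) on the quadratic family
# W(a) = lambda * a_i a_j, where each renormalization group step acts as
# lambda -> lambda / (1 + lambda * S).
lam0 = 0.3
scales = [0.5, 0.25, 0.125, 0.0625]   # single-scale covariances S^(j)

# Integrate out one scale at a time.
lam = lam0
for s in scales:
    lam = lam / (1 + lam * s)

# Integrate out S^(<=J) = sum_j S^(j) in a single step.
lam_direct = lam0 / (1 + lam0 * sum(scales))
```

That the two computations agree is exactly the semigroup property $\Omega_S\circ\Omega_T=\Omega_{S+T}$ of Theorem I.29, specialised to this family.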
I.6 The Renormalization Group

By Problem I.15,
$$
G_{J+1}(c) = \Omega_{S^{(J+1)}}\big(G_J\big)(c)
\tag{I.8}
$$
where the "renormalization group map" $\Omega$ is defined in

Definition I.22 Let $V$ be a vector space with basis $\{c_i\}$. Choose a second copy $V'$ of $V$ and denote by $a_i$ the basis element of $V'$ corresponding to the element $c_i\in V$. Let $S$ be a skew symmetric bilinear form on $V'$. The renormalization group map $\Omega_S : \bigwedge_{\mathbb S} V \to \bigwedge_{\mathbb S} V$ is defined by
$$
\Omega_S(W)(c) = \log \frac{1}{Z_{W,S}}\int e^{W(c+a)}\, d\mu_S(a)
\qquad\text{where}\qquad
Z_{W,S} = \int e^{W(a)}\, d\mu_S(a)
$$
for all $W\in\bigwedge_{\mathbb S} V$ for which $\int e^{W(a)}\, d\mu_S(a)$ is invertible in $\mathbb S$.
Problem I.16 Prove that
$$
G_J = \Omega_{S^{(J)}}\circ\Omega_{S^{(J-1)}}\circ\cdots\circ\Omega_{S^{(1)}}(W)
= \Omega_{S^{(1)}}\circ\Omega_{S^{(2)}}\circ\cdots\circ\Omega_{S^{(J)}}(W)
$$

For the rest of this section we restrict to $\mathbb S = \mathbb C$. Observe that we have normalized the renormalization group map so that $\Omega_S(W)(0) = 0$. Define the subspaces
$$
\textstyle\bigwedge^{(>0)}V = \bigoplus_{n=1}^{\infty}\bigwedge^n V
\qquad\qquad
\textstyle\bigwedge^{(\mathrm e)}V = \bigoplus_{\substack{n=2\\ n\ \text{even}}}^{\infty}\bigwedge^n V
$$
of $\bigwedge V$ and the projection
$$
P^{(>0)} : \textstyle\bigwedge V \longrightarrow \bigwedge^{(>0)}V,
\qquad W(a) \longmapsto W(a) - W(0)
$$
Then
Lemma I.23
i) If $Z_{W,S}\neq 0$, then $Z_{P^{(>0)}W,S}\neq 0$ and $\Omega_S(W) = \Omega_S\big(P^{(>0)}W\big) \in \bigwedge^{(>0)}V$.
ii) If $Z_{W,S}\neq 0$ and $W\in\bigwedge^{(\mathrm e)}V$, then $\Omega_S(W)\in\bigwedge^{(\mathrm e)}V$.

Proof: i) is an immediate consequence of
$$
Z_{P^{(>0)}W,S} = e^{-W(0)}\int e^{W(a)}\, d\mu_S(a)
\qquad\qquad
\int e^{(P^{(>0)}W)(c+a)}\, d\mu_S(a) = e^{-W(0)}\int e^{W(c+a)}\, d\mu_S(a)
$$
ii) Observe that $W\in\bigwedge V$ is in $\bigwedge^{(\mathrm e)}V$ if and only if $W(0) = 0$ and $W(-a) = W(a)$. Suppose that $Z_{W,S}\neq 0$ and set $W' = \Omega_S(W)$. We already know that $W'(0) = 0$. Suppose also that $W(-a) = W(a)$. Then
$$
W'(-c) = \ln\frac{\int e^{W(-c+a)}\, d\mu_S(a)}{\int e^{W(a)}\, d\mu_S(a)}
= \ln\frac{\int e^{W(c-a)}\, d\mu_S(a)}{\int e^{W(a)}\, d\mu_S(a)}
$$
As $\int a_I\, d\mu_S(a) = 0$ for all $|I|$ odd, we have
$$
\int f(-a)\, d\mu_S(a) = \int f(a)\, d\mu_S(a)
$$
for all $f(a)\in\bigwedge V$ and
$$
\ln\frac{\int e^{W(c-a)}\, d\mu_S(a)}{\int e^{W(a)}\, d\mu_S(a)}
= \ln\frac{\int e^{W(c+a)}\, d\mu_S(a)}{\int e^{W(a)}\, d\mu_S(a)} = W'(c)
$$
Hence $W'(-c) = W'(c)$.
Example I.24 If $S = 0$, then $\Omega_{S=0}(W) = W$ for all $W\in\bigwedge^{(>0)}V$. To see this, observe that $W(c+a) = W(c) + \tilde W(c,a)$ with $\tilde W(c,a)$ a polynomial in $c$ and $a$ with every term having degree at least one in $a$. When $S = 0$, $\int a_I\, d\mu_S(a) = 0$ for all $|I|\ge 1$ so that
$$
\int W(c+a)^n\, d\mu_0(a) = \int\big(W(c)+\tilde W(c,a)\big)^n\, d\mu_0(a) = W(c)^n
$$
and
$$
\int e^{W(c+a)}\, d\mu_0(a) = e^{W(c)}
$$
In particular
$$
\int e^{W(a)}\, d\mu_0(a) = e^{W(0)} = 1
$$
Combining the last two equations,
$$
\ln\frac{\int e^{W(c+a)}\, d\mu_0(a)}{\int e^{W(a)}\, d\mu_0(a)} = \ln e^{W(c)} = W(c)
$$
We conclude that
$$
S = 0,\ W(0) = 0 \implies \Omega_S(W) = W
$$
Example I.25 Fix any $1\le i,j\le D$ and $\lambda\in\mathbb C$ and let $W(a) = \lambda a_ia_j$. Then
$$
\begin{aligned}
\int e^{\lambda(c_i+a_i)(c_j+a_j)}\, d\mu_S(a)
&= \int e^{\lambda c_ic_j}\, e^{\lambda a_ic_j}\, e^{\lambda c_ia_j}\, e^{\lambda a_ia_j}\, d\mu_S(a)\\
&= e^{\lambda c_ic_j}\int\big(1+\lambda a_ic_j\big)\big(1+\lambda c_ia_j\big)\big(1+\lambda a_ia_j\big)\, d\mu_S(a)\\
&= e^{\lambda c_ic_j}\int\big[1+\lambda a_ic_j+\lambda c_ia_j+\lambda a_ia_j+\lambda^2 a_ic_jc_ia_j\big]\, d\mu_S(a)\\
&= e^{\lambda c_ic_j}\big[1+\lambda S_{ij}+\lambda^2 c_jc_iS_{ij}\big]
\end{aligned}
$$
Hence, if $\lambda\neq-\frac{1}{S_{ij}}$, $Z_{W,S} = 1+\lambda S_{ij}$ is nonzero and
$$
\frac{\int e^{W(c+a)}\, d\mu_S(a)}{\int e^{W(a)}\, d\mu_S(a)}
= e^{\lambda c_ic_j}\,\frac{1+\lambda S_{ij}+\lambda^2 c_jc_iS_{ij}}{1+\lambda S_{ij}}
= e^{\lambda c_ic_j}\Big[1+\tfrac{\lambda^2 S_{ij}}{1+\lambda S_{ij}}\,c_jc_i\Big]
= e^{\lambda c_ic_j-\frac{\lambda^2 S_{ij}}{1+\lambda S_{ij}}c_ic_j}
= e^{\frac{\lambda}{1+\lambda S_{ij}}c_ic_j}
$$
We conclude that
$$
W(a) = \lambda a_ia_j,\ \lambda\neq-\tfrac{1}{S_{ij}} \implies \Omega_S(W)(c) = \frac{\lambda}{1+\lambda S_{ij}}\,c_ic_j
$$
In particular $Z_{0,S} = 1$ and $\Omega_S(0) = 0$ for all $S$.
Definition I.26 Let $E$ be any finite dimensional vector space and let $f : E\to\mathbb C$ and $F : E\to E$. For each ordered basis $B = (e_1,\cdots,e_D)$ of $E$ define $\tilde f_B : \mathbb C^D\to\mathbb C$ and $\tilde F_{B;i} : \mathbb C^D\to\mathbb C$, $1\le i\le D$, by
$$
f(\lambda_1e_1+\cdots+\lambda_De_D) = \tilde f_B(\lambda_1,\cdots,\lambda_D)
$$
$$
F(\lambda_1e_1+\cdots+\lambda_De_D) = \tilde F_{B;1}(\lambda_1,\cdots,\lambda_D)\,e_1+\cdots+\tilde F_{B;D}(\lambda_1,\cdots,\lambda_D)\,e_D
$$
We say that $f$ is polynomial (rational) if there exists a basis $B$ for which $\tilde f_B(\lambda_1,\cdots,\lambda_D)$ is a polynomial (ratio of two polynomials). We say that $F$ is polynomial (rational) if there exists a basis $B$ for which $\tilde F_{B;i}(\lambda_1,\cdots,\lambda_D)$ is a polynomial (ratio of two polynomials) for all $1\le i\le D$.
Remark I.27 If $f$ is polynomial (rational) then $\tilde f_B(\lambda_1,\cdots,\lambda_D)$ is a polynomial (ratio of two polynomials) for all bases $B$. If $F$ is polynomial (rational) then $\tilde F_{B;i}(\lambda_1,\cdots,\lambda_D)$ is a polynomial (ratio of two polynomials) for all $1\le i\le D$ and all bases $B$.

Proposition I.28 Let $V$ be a finite dimensional vector space and fix any skew symmetric bilinear form $S$ on $V$. Then
i)
$$
Z_{W,S} : \textstyle\bigwedge^{(>0)}V\longrightarrow\mathbb C,
\qquad W(a)\longmapsto\int e^{W(a)}\, d\mu_S(a)
$$
is polynomial.
ii)
$$
\Omega_S(W) : \textstyle\bigwedge^{(>0)}V\longrightarrow\bigwedge^{(>0)}V,
\qquad W(a)\longmapsto\ln\frac{\int e^{W(a+c)}\, d\mu_S(c)}{\int e^{W(c)}\, d\mu_S(c)}
$$
is rational.
Proof: i) Let $D$ be the dimension of $V$. If $W(0) = 0$, $e^{W(a)} = \sum_{n=0}^D\frac{1}{n!}W(a)^n$ is polynomial in $W$. Hence, so is $Z_{W,S}$.
ii) As in part (i), $\int e^{W(a+c)}\, d\mu_S(c)$ and $\int e^{W(c)}\, d\mu_S(c)$ are both polynomial. By Example I.25, $\int e^{W(c)}\, d\mu_S(c)$ is not identically zero. Hence
$$
\tilde W(a) = \frac{\int e^{W(a+c)}\, d\mu_S(c)}{\int e^{W(c)}\, d\mu_S(c)}
$$
is rational and obeys $\tilde W(0) = 1$. Hence
$$
\ln\tilde W = \sum_{n=1}^D\frac{(-1)^{n-1}}{n}\big(\tilde W-1\big)^n
$$
is polynomial in $\tilde W$ and rational in $W$.
Theorem I.29 Let $V$ be a finite dimensional vector space.
i) If $S$ and $T$ are any two skew symmetric bilinear forms on $V$, then $\Omega_S\circ\Omega_T$ is defined as a rational function and
$$
\Omega_S\circ\Omega_T = \Omega_{S+T}
$$
ii) $\big\{\Omega_S(\,\cdot\,)\ \big|\ S \text{ a skew symmetric bilinear form on } V\big\}$ is an abelian group under composition and is isomorphic to $\mathbb{R}^{D(D-1)/2}$, where $D$ is the dimension of $V$.

Proof: i) To verify that $\Omega_S\circ\Omega_T$ is defined as a rational function, we just need to check that the range of $\Omega_T$ is not contained in the zero set of $Z_{W,S}$. This is the case because $\Omega_T(0) = 0$, so that $0$ is in the range of $\Omega_T$, and $Z_{0,S} = 1$. As $Z_{W,T}$, $Z_{\Omega_T(W),S}$ and $Z_{W,S+T}$ are all rational functions of $W$ that are not identically zero,
$$
\big\{W\in\textstyle\bigwedge V\ \big|\ Z_{W,T}\neq 0,\ Z_{\Omega_T(W),S}\neq 0,\ Z_{W,S+T}\neq 0\big\}
$$
is an open dense subset of $\bigwedge V$. On this subset
$$
\begin{aligned}
\Omega_{S+T}(W) &= \ln\frac{\int e^{W(c+a)}\, d\mu_{S+T}(a)}{\int e^{W(a)}\, d\mu_{S+T}(a)}
= \ln\frac{\int\int e^{W(a+b+c)}\, d\mu_T(b)\, d\mu_S(a)}{\int\int e^{W(a+b)}\, d\mu_T(b)\, d\mu_S(a)}\\
&= \ln\frac{\int e^{\Omega_T(W)(a+c)}\big[\int e^{W(b)}\, d\mu_T(b)\big]\, d\mu_S(a)}{\int e^{\Omega_T(W)(a)}\big[\int e^{W(b)}\, d\mu_T(b)\big]\, d\mu_S(a)}
= \ln\frac{\int e^{\Omega_T(W)(a+c)}\, d\mu_S(a)}{\int e^{\Omega_T(W)(a)}\, d\mu_S(a)}\\
&= \Omega_S\big(\Omega_T(W)\big)
\end{aligned}
$$
ii) The additive group of $D\times D$ skew symmetric matrices is isomorphic to $\mathbb{R}^{D(D-1)/2}$. By part (i), $S\mapsto\Omega_S(\,\cdot\,)$ is a homomorphism from the additive group of $D\times D$ skew symmetric matrices onto $\big\{\Omega_S(\,\cdot\,)\ \big|\ S \text{ a } D\times D \text{ skew symmetric matrix}\big\}$. To verify that it is an isomorphism, we just need to verify that the map is one-to-one. Suppose that $\Omega_S(W) = \Omega_T(W)$ for all $W$ with $Z_{W,S}\neq 0$ and $Z_{W,T}\neq 0$. Then, by Example I.25, for each $1\le i,j\le D$,
$$
\frac{\lambda}{1+\lambda S_{ij}} = \frac{\lambda}{1+\lambda T_{ij}}
$$
for all $\lambda\neq-\frac{1}{S_{ij}},-\frac{1}{T_{ij}}$. But this implies that $S_{ij} = T_{ij}$.
I.7 Wick Ordering

Let $V$ be a $D$-dimensional vector space over $\mathbb C$ and let $\mathbb S$ be a superalgebra. Let $S$ be a $D\times D$ skew symmetric matrix. We now introduce a new basis for $\bigwedge_{\mathbb S}V$ that is adapted for use with the Grassmann Gaussian integral with covariance $S$. To this point, we have always selected some basis $\{a_1,\ldots,a_D\}$ for $V$ and used $\big\{a_{i_1}\cdots a_{i_n}\ \big|\ n\ge 0,\ 1\le i_1<\cdots<i_n\le D\big\}$ as a basis for $\bigwedge_{\mathbb S}V$. The new basis will be denoted
$$
\big\{\,{:}a_{i_1}\cdots a_{i_n}{:}\ \big|\ n\ge 0,\ 1\le i_1<\cdots<i_n\le D\,\big\}
$$
The basis element ${:}a_{i_1}\cdots a_{i_n}{:}$ will be called the Wick ordered product of $a_{i_1}\cdots a_{i_n}$.

Let $V'$ be a second copy of $V$ with basis $\{b_1,\ldots,b_D\}$. Then ${:}a_{i_1}\cdots a_{i_n}{:}$ is determined by applying $\frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_n}}$ to both sides of
$$
{:}\,e^{\Sigma\, b_ia_i}\,{:} = e^{\frac{1}{2}\Sigma\, b_iS_{ij}b_j}\, e^{\Sigma\, b_ia_i}
$$
and setting all of the $b_i$'s to zero. This determines Wick ordering as a linear map on $\bigwedge_{\mathbb S}V$. Clearly, the map depends on $S$, even though we have not included any $S$ in the notation. It is easy to see that ${:}1{:} = 1$ and that ${:}a_{i_1}\cdots a_{i_n}{:}$ is
• a polynomial in $a_{i_1},\cdots,a_{i_n}$ of degree $n$ with degree $n$ term precisely $a_{i_1}\cdots a_{i_n}$
• an even, resp. odd, element of $\bigwedge_{\mathbb S}V$ when $n$ is even, resp. odd
• antisymmetric under permutations of $i_1,\cdots,i_n$ (because $\frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_n}}$ is antisymmetric under permutations of $i_1,\cdots,i_n$).
So the linear map $f(a)\mapsto{:}f(a){:}$ is bijective. The easiest way to express $a_{i_1}\cdots a_{i_n}$ in terms of the basis of Wick ordered monomials is to apply $\frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_n}}$ to both sides of
$$
e^{\Sigma\, b_ia_i} = e^{-\frac{1}{2}\Sigma\, b_iS_{ij}b_j}\,\,{:}\,e^{\Sigma\, b_ia_i}\,{:}
$$
and set all of the $b_i$'s to zero.
Example I.30
$$
\begin{aligned}
{:}a_i{:} &= a_i & a_i &= {:}a_i{:}\\
{:}a_ia_j{:} &= a_ia_j - S_{ij} & a_ia_j &= {:}a_ia_j{:} + S_{ij}\\
{:}a_ia_ja_k{:} &= a_ia_ja_k - S_{ij}a_k - S_{ki}a_j - S_{jk}a_i
\qquad & a_ia_ja_k &= {:}a_ia_ja_k{:} + S_{ij}{:}a_k{:} + S_{ki}{:}a_j{:} + S_{jk}{:}a_i{:}
\end{aligned}
$$
By the antisymmetry property mentioned above, ${:}a_{i_1}\cdots a_{i_n}{:}$ vanishes if any two of the indices $i_j$ are the same. However ${:}a_{i_1}\cdots a_{i_n}{:}\,{:}a_{j_1}\cdots a_{j_m}{:}$ need not vanish if one of the $i_k$'s is the same as one of the $j_\ell$'s. For example,
$$
{:}a_ia_j{:}\,\,{:}a_ia_j{:} = \big(a_ia_j - S_{ij}\big)^2 = -2S_{ij}\,a_ia_j + S_{ij}^2
$$

Problem I.17 Prove that
$$
{:}f(a){:} = \int f(a+b)\, d\mu_{-S}(b)
\qquad\qquad
f(a) = \int {:}f{:}(a+b)\, d\mu_S(b)
$$
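Identities like the one in Example I.30 can be checked mechanically by representing the Grassmann generators as matrices: fermionic creation operators built by the Jordan–Wigner construction anticommute and square to zero, so they generate a faithful copy of a finite dimensional Grassmann algebra. The following sketch (our own construction, assuming numpy; not from the text) verifies $({:}a_1a_2{:})^2 = (a_1a_2-S_{12})^2 = -2S_{12}a_1a_2+S_{12}^2$:

```python
import numpy as np

# Jordan-Wigner: a_i = Z (x) ... (x) Z (x) sigma^+ (x) I (x) ... (x) I.
# These matrices satisfy a_i a_j = -a_j a_i and a_i^2 = 0.
Z = np.diag([1.0, -1.0])
SP = np.array([[0.0, 1.0], [0.0, 0.0]])   # sigma^+
I2 = np.eye(2)

def generator(i, D):
    m = np.eye(1)
    for k in range(D):
        m = np.kron(m, Z if k < i else (SP if k == i else I2))
    return m

D = 2
a1, a2 = generator(0, D), generator(1, D)
S12 = 0.7                      # an arbitrary covariance entry, for illustration
Id = np.eye(2 ** D)

# (a1 a2 - S12)^2, expanded: (a1 a2)^2 = -a1 a1 a2 a2 = 0, so the square
# collapses to -2 S12 a1 a2 + S12^2.
lhs = (a1 @ a2 - S12 * Id) @ (a1 @ a2 - S12 * Id)
rhs = -2 * S12 * (a1 @ a2) + S12 ** 2 * Id
```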
Problem I.18 Prove that
$$
\frac{\partial}{\partial a_\ell}\,{:}f(a){:} = {:}\,\frac{\partial}{\partial a_\ell}f(a)\,{:}
$$

Problem I.19 Prove that
$$
\int {:}g(a)a_i{:}\, f(a)\, d\mu_S(a) = \sum_{\ell=1}^D S_{i\ell}\int {:}g(a){:}\,\frac{\partial}{\partial a_\ell}f(a)\, d\mu_S(a)
$$
Problem I.20 Prove that
$$
{:}f{:}(a+b) = {:}f(a+b){:}_a = {:}f(a+b){:}_b
$$
Here ${:}\,\cdot\,{:}_a$ means Wick ordering of the $a_i$'s and ${:}\,\cdot\,{:}_b$ means Wick ordering of the $b_i$'s. Precisely, if $\{a_i\}$, $\{b_i\}$, $\{A_i\}$, $\{B_i\}$ are bases of four vector spaces, all of the same dimension,
$$
{:}\,e^{\Sigma_i A_ia_i+\Sigma_i B_ib_i}\,{:}_a = e^{\Sigma_i B_ib_i}\,{:}\,e^{\Sigma_i A_ia_i}\,{:}_a = e^{\Sigma_i A_ia_i+\Sigma_i B_ib_i}\, e^{\frac{1}{2}\Sigma_{ij} A_iS_{ij}A_j}
$$
$$
{:}\,e^{\Sigma_i A_ia_i+\Sigma_i B_ib_i}\,{:}_b = e^{\Sigma_i A_ia_i}\,{:}\,e^{\Sigma_i B_ib_i}\,{:}_b = e^{\Sigma_i A_ia_i+\Sigma_i B_ib_i}\, e^{\frac{1}{2}\Sigma_{ij} B_iS_{ij}B_j}
$$
Problem I.21 Prove that
$$
{:}f{:}_{S+T}(a+b) = {:}\,f(a+b)\,{:}_{\substack{a,S\\ b,T}}
$$
Here $S$ and $T$ are skew symmetric matrices, ${:}\,\cdot\,{:}_{S+T}$ means Wick ordering with respect to $S+T$ and ${:}\,\cdot\,{:}_{\substack{a,S\\ b,T}}$ means Wick ordering of the $a_i$'s with respect to $S$ and of the $b_i$'s with respect to $T$.
Proposition I.31 (Orthogonality of Wick Monomials)
a) If $m>0$,
$$
\int {:}a_{i_1}\cdots a_{i_m}{:}\, d\mu_S(a) = 0
$$
b) If $m\neq n$,
$$
\int {:}a_{i_1}\cdots a_{i_m}{:}\,\,{:}a_{j_1}\cdots a_{j_n}{:}\, d\mu_S(a) = 0
$$
c)
$$
\int {:}a_{i_1}\cdots a_{i_m}{:}\,\,{:}a_{j_m}\cdots a_{j_1}{:}\, d\mu_S(a) = \det\big[S_{i_kj_\ell}\big]_{1\le k,\ell\le m}
$$
Note the order of the indices in ${:}a_{j_m}\cdots a_{j_1}{:}$.

Proof:
a), b) Let $V''$ be a third copy of $V$ with basis $\{c_1,\ldots,c_D\}$. Then
$$
\begin{aligned}
\int {:}\,e^{\Sigma\, b_ia_i}\,{:}\ {:}\,e^{\Sigma\, c_ia_i}\,{:}\, d\mu_S(a)
&= \int e^{\frac{1}{2}\Sigma\, b_iS_{ij}b_j}\, e^{\Sigma\, b_ia_i}\, e^{\frac{1}{2}\Sigma\, c_iS_{ij}c_j}\, e^{\Sigma\, c_ia_i}\, d\mu_S(a)\\
&= e^{\frac{1}{2}\Sigma\, b_iS_{ij}b_j}\, e^{\frac{1}{2}\Sigma\, c_iS_{ij}c_j}\int e^{\Sigma\,(b_i+c_i)a_i}\, d\mu_S(a)\\
&= e^{\frac{1}{2}\Sigma\, b_iS_{ij}b_j}\, e^{\frac{1}{2}\Sigma\, c_iS_{ij}c_j}\, e^{-\frac{1}{2}\Sigma\,(b_i+c_i)S_{ij}(b_j+c_j)}\\
&= e^{-\Sigma\, b_iS_{ij}c_j}
\end{aligned}
$$
Now apply $\frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_m}}$ and set all of the $b_i$'s to zero:
$$
\frac{\partial}{\partial b_{i_1}}\cdots\frac{\partial}{\partial b_{i_m}}\, e^{-\Sigma\, b_iS_{ij}c_j}\Big|_{b_i=0}
= \Big[-\sum_{j_1'} S_{i_1j_1'}c_{j_1'}\Big]\cdots\Big[-\sum_{j_m'} S_{i_mj_m'}c_{j_m'}\Big]
= (-1)^m\Big[\sum_{j_1'} S_{i_1j_1'}c_{j_1'}\Big]\cdots\Big[\sum_{j_m'} S_{i_mj_m'}c_{j_m'}\Big]
$$
Now apply $\frac{\partial}{\partial c_{j_1}}\cdots\frac{\partial}{\partial c_{j_n}}$ and set all of the $c_i$'s to zero. To get a nonzero answer, it is necessary that $m = n$.
c) We shall prove that
$$
\int {:}a_{i_m}\cdots a_{i_1}{:}\,\,{:}a_{j_1}\cdots a_{j_m}{:}\, d\mu_S(a) = \det\big[S_{i_kj_\ell}\big]_{1\le k,\ell\le m}
$$
This is equivalent to the claimed result. By Problems I.19 and I.18,
$$
\int {:}a_{i_m}\cdots a_{i_1}{:}\,\,{:}a_{j_1}\cdots a_{j_m}{:}\, d\mu_S(a)
= \sum_{\ell=1}^m (-1)^{\ell+1}\, S_{i_1j_\ell}\int {:}a_{i_m}\cdots a_{i_2}{:}\ {:}\prod_{\substack{1\le k\le m\\ k\neq\ell}} a_{j_k}{:}\, d\mu_S(a)
$$
The proof will be by induction on $m$. If $m = 1$, we have
$$
\int {:}a_{i_1}{:}\,\,{:}a_{j_1}{:}\, d\mu_S(a) = S_{i_1j_1}\int 1\, d\mu_S(a) = S_{i_1j_1}
$$
as desired. In general, by the inductive hypothesis,
$$
\int {:}a_{i_m}\cdots a_{i_1}{:}\,\,{:}a_{j_1}\cdots a_{j_m}{:}\, d\mu_S(a)
= \sum_{\ell=1}^m (-1)^{\ell+1}\, S_{i_1j_\ell}\det\big[S_{i_pj_k}\big]_{\substack{1\le p,k\le m\\ p\neq 1,\ k\neq\ell}}
= \det\big[S_{i_pj_k}\big]_{1\le k,p\le m}
$$
Problem I.22 Prove that
$$
\int {:}f(a){:}\, d\mu_S(a) = f(0)
$$
Problem I.23 Prove that
$$
\int \prod_{i=1}^n\, {:}\prod_{\mu=1}^{e_i} \psi_{\ell_{i,\mu}}{:}\ d\mu_S(\psi) = \mathrm{Pf}\big[T_{(i,\mu),(i',\mu')}\big]
$$
where
$$
T_{(i,\mu),(i',\mu')} = \begin{cases} 0 & \text{if } i = i'\\ S_{\ell_{i,\mu},\ell_{i',\mu'}} & \text{if } i\neq i' \end{cases}
$$
Here $T$ is a skew symmetric matrix with $\sum_{i=1}^n e_i$ rows and columns, numbered, in order, $(1,1),\cdots,(1,e_1),(2,1),\cdots,(2,e_2),\cdots,(n,e_n)$. The product in the integrand is also in this order. Hint: Use Problems I.19 and I.18 and Proposition I.18.
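Pfaffians (treated at length in Appendix B) can themselves be computed by a row-expansion recursion much like the one for determinants. The sketch below (our own helper, assuming numpy; not from the text) implements that recursion and checks the classical identity $\mathrm{Pf}(T)^2 = \det T$ for a skew symmetric matrix:

```python
import numpy as np

def pfaffian(T):
    # Expansion along the first row:
    # Pf(T) = sum_{j>=2} (-1)^j T[1,j] Pf(T with rows/columns 1 and j removed)
    # (1-based indices; the loop below is 0-based, hence the sign (-1)^(j+1)).
    n = T.shape[0]
    if n % 2 == 1:
        return 0.0
    if n == 0:
        return 1.0
    total = 0.0
    for j in range(1, n):
        minor = np.delete(np.delete(T, (0, j), axis=0), (0, j), axis=1)
        total += (-1) ** (j + 1) * T[0, j] * pfaffian(minor)
    return total

M = np.random.default_rng(1).standard_normal((6, 6))
T = M - M.T   # skew symmetric
pf = pfaffian(T)
```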
I.8 Bounds on Grassmann Gaussian Integrals

We now prove some bounds on Grassmann Gaussian integrals. While it is not really necessary to do so, I will make some simplifying assumptions that are satisfied in applications to quantum field theories. I will assume that the vector space $V$ generating the Grassmann algebra has basis $\big\{\psi(\ell,\kappa)\ \big|\ \ell\in X,\ \kappa\in\{0,1\}\big\}$, where $X$ is some finite set. Here, $\psi(\ell,0)$ plays the rôle of $\psi_{x,\sigma}$ of §I.5 and $\psi(\ell,1)$ plays the rôle of $\bar\psi_{x,\sigma}$ of §I.5. I will also assume that, as in (I.4), the covariance only couples $\kappa = 0$ generators to $\kappa = 1$ generators. In other words, we let $A$ be a function on $X\times X$ and consider the Grassmann Gaussian integral $\int\,\cdot\,d\mu_A(\psi)$ on $\bigwedge V$ with
$$
\int \psi(\ell,\kappa)\psi(\ell',\kappa')\, d\mu_A(\psi) =
\begin{cases}
0 & \text{if } \kappa=\kappa'=0\\
A(\ell,\ell') & \text{if } \kappa=0,\ \kappa'=1\\
-A(\ell',\ell) & \text{if } \kappa=1,\ \kappa'=0\\
0 & \text{if } \kappa=\kappa'=1
\end{cases}
$$
We start off with the simple bound
Proposition I.32 Assume that there is a Hilbert space $\mathcal H$ and vectors $f_\ell, g_\ell$, $\ell\in X$, in $\mathcal H$ such that
$$
A(\ell,\ell') = \langle f_\ell, g_{\ell'}\rangle_{\mathcal H}
\qquad\text{for all } \ell,\ell'\in X
$$
Then
$$
\Big|\int \prod_{i=1}^n \psi(\ell_i,\kappa_i)\, d\mu_A(\psi)\Big|
\le \prod_{\substack{1\le i\le n\\ \kappa_i=0}} \|f_{\ell_i}\|_{\mathcal H}
\prod_{\substack{1\le i\le n\\ \kappa_i=1}} \|g_{\ell_i}\|_{\mathcal H}
$$

Proof: Define
$$
F = \big\{1\le i\le n\ \big|\ \kappa_i=0\big\}
\qquad\qquad
\bar F = \big\{1\le i\le n\ \big|\ \kappa_i=1\big\}
$$
By Problem I.13, if the integral does not vanish, the cardinalities of $F$ and $\bar F$ coincide and there is a sign $\pm$ such that
$$
\int \prod_{i=1}^n \psi(\ell_i,\kappa_i)\, d\mu_A(\psi) = \pm\det\big[A_{\ell_i,\ell_j}\big]_{\substack{i\in F\\ j\in\bar F}}
$$
The proposition is thus an immediate consequence of Gram's inequality. For the convenience of the reader, we include a proof of this classical inequality below.
Lemma I.33 (Gram's inequality) Let $\mathcal H$ be a Hilbert space and $u_1,\cdots,u_n, v_1,\cdots,v_n\in\mathcal H$. Then
$$
\Big|\det\big[\langle u_i,v_j\rangle\big]_{1\le i,j\le n}\Big| \le \prod_{i=1}^n \|u_i\|\,\|v_i\|
$$
Here, $\langle\,\cdot\,,\,\cdot\,\rangle$ and $\|\,\cdot\,\|$ are the inner product and norm in $\mathcal H$, respectively.

Proof:
We start with three reductions. First, we may assume that the $u_1,\cdots,u_n$ are linearly independent. Otherwise the determinant vanishes, because its rows are not independent, and the inequality is trivially satisfied. Second, we may also assume that each $v_j$ is in the span of the $u_i$'s, because, if $P$ is the orthogonal projection onto that span,
$$
\det\big[\langle u_i,v_j\rangle\big]_{1\le i,j\le n} = \det\big[\langle u_i,Pv_j\rangle\big]_{1\le i,j\le n}
\qquad\text{while}\qquad
\prod_{i=1}^n \|u_i\|\,\|Pv_i\| \le \prod_{i=1}^n \|u_i\|\,\|v_i\|
$$
Third, we may assume that $v_1,\cdots,v_n$ are linearly independent. Otherwise the determinant vanishes, because its columns are not independent.

Denote by $U$ the span of $u_1,\cdots,u_n$. We have just shown that we may assume that $u_1,\cdots,u_n$ and $v_1,\cdots,v_n$ are two bases for $U$. Let $\alpha_i$ be the projection of $u_i$ on the orthogonal complement of the subspace spanned by $u_1,\cdots,u_{i-1}$. Then $\alpha_i = u_i + \sum_{j=1}^{i-1}\tilde L_{ij}u_j$ for some complex numbers $\tilde L_{ij}$, and $\alpha_i$ is orthogonal to $u_1,\cdots,u_{i-1}$ and hence to $\alpha_1,\cdots,\alpha_{i-1}$. Set
$$
L_{ij} = \begin{cases} \|\alpha_i\|^{-1} & \text{if } i=j\\ 0 & \text{if } i<j\\ \|\alpha_i\|^{-1}\tilde L_{ij} & \text{if } i>j \end{cases}
$$
Then $L$ is a lower triangular matrix with diagonal entries $L_{ii} = \|\alpha_i\|^{-1}$ such that the linear combinations
$$
u_i' = \sum_{j=1}^i L_{ij}u_j,\qquad i = 1,\cdots,n
$$
are orthonormal. This is just the Gram-Schmidt orthogonalization algorithm. Similarly, let $\beta_i$ be the projection of $v_i$ on the orthogonal complement of the subspace spanned by $v_1,\cdots,v_{i-1}$. By Gram-Schmidt, there is a lower triangular matrix $M$ with diagonal entries $M_{ii} = \|\beta_i\|^{-1}$ such that the linear combinations
$$
v_i' = \sum_{j=1}^i M_{ij}v_j,\qquad i = 1,\cdots,n
$$
are orthonormal. Since the $v_i'$'s are orthonormal and have the same span as the $v_i$'s, they form an orthonormal basis for $U$. As a result, $u_i' = \sum_j \langle u_i',v_j'\rangle v_j'$ so that
$$
\sum_j \langle u_i',v_j'\rangle\,\langle v_j',u_k'\rangle = \langle u_i',u_k'\rangle = \delta_{i,k}
$$
and the matrix $\big[\langle u_i',v_j'\rangle\big]$ is unitary and consequently has determinant of modulus one. As
$$
L\,\big[\langle u_i,v_j\rangle\big]\,M^\dagger = \big[\langle u_i',v_j'\rangle\big]
$$
we have
$$
\Big|\det\big[\langle u_i,v_j\rangle\big]\Big| = \big|\det{}^{-1}L\big|\,\big|\det{}^{-1}M\big|
= \prod_{i=1}^n \|\alpha_i\|\,\|\beta_i\| \le \prod_{i=1}^n \|u_i\|\,\|v_i\|
$$
since
$$
\|u_j\|^2 = \|\alpha_j + (u_j-\alpha_j)\|^2 = \|\alpha_j\|^2 + \|u_j-\alpha_j\|^2 \ge \|\alpha_j\|^2
$$
$$
\|v_j\|^2 = \|\beta_j + (v_j-\beta_j)\|^2 = \|\beta_j\|^2 + \|v_j-\beta_j\|^2 \ge \|\beta_j\|^2
$$
Here, we have used that $\alpha_j$ and $u_j-\alpha_j$ are orthogonal, because $\alpha_j$ is the orthogonal projection of $u_j$ on some subspace. Similarly, $\beta_j$ and $v_j-\beta_j$ are orthogonal.
Problem I.24 Let $a_{ij}$, $1\le i,j\le n$, be complex numbers. Prove Hadamard's inequality
$$
\Big|\det\big[a_{ij}\big]\Big| \le \prod_{i=1}^n\Big(\sum_{j=1}^n |a_{ij}|^2\Big)^{1/2}
$$
from Gram's inequality. (Hint: Express $a_{ij} = \langle a_i, e_j\rangle$ where $e_j = (0,\cdots,1,\cdots,0)$, $j = 1,\cdots,n$, is the standard basis for $\mathbb C^n$.)
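Both inequalities are easy to probe numerically. The sketch below (assuming numpy; random data of our choosing) checks Gram's inequality on a random Gram-type matrix and Hadamard's inequality as its special case:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
U = rng.standard_normal((n, 8))   # vectors u_i as rows of an 8-dim space
V = rng.standard_normal((n, 8))   # vectors v_j as rows

# Gram's inequality: |det <u_i, v_j>| <= prod ||u_i|| ||v_i||
G = U @ V.T
gram_lhs = abs(np.linalg.det(G))
gram_rhs = np.prod(np.linalg.norm(U, axis=1)) * np.prod(np.linalg.norm(V, axis=1))

# Hadamard's inequality: rows of A paired with the standard basis vectors e_j
A = rng.standard_normal((n, n))
had_lhs = abs(np.linalg.det(A))
had_rhs = np.prod(np.linalg.norm(A, axis=1))
```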
Problem I.25 Let $V$ be a vector space with basis $\{a_1,\cdots,a_D\}$. Let $S_{\ell,\ell'}$ be a skew symmetric $D\times D$ matrix with
$$
S(\ell,\ell') = \langle f_\ell, g_{\ell'}\rangle_{\mathcal H}
\qquad\text{for all } 1\le\ell,\ell'\le D
$$
for some Hilbert space $\mathcal H$ and vectors $f_\ell, g_{\ell'}\in\mathcal H$. Set $F_\ell = \sqrt{\|f_\ell\|_{\mathcal H}\,\|g_\ell\|_{\mathcal H}}$. Prove that
$$
\Big|\int \prod_{\ell=1}^n a_{i_\ell}\, d\mu_S(a)\Big| \le \prod_{1\le\ell\le n} F_{i_\ell}
$$

We shall need a similar bound involving a Wick monomial. Let ${:}\,\cdot\,{:}$ denote Wick ordering with respect to $d\mu_A$.
Proposition I.34 Assume that there is a Hilbert space $\mathcal H$ and vectors $f_\ell, g_\ell$, $\ell\in X$, in $\mathcal H$ such that
$$
A(\ell,\ell') = \langle f_\ell, g_{\ell'}\rangle_{\mathcal H}
\qquad\text{for all } \ell,\ell'\in X
$$
Then
$$
\Big|\int \prod_{i=1}^m \psi(\ell_i,\kappa_i)\ {:}\prod_{j=1}^n \psi(\ell_j',\kappa_j'){:}\ d\mu_A(\psi)\Big|
\le 2^n \prod_{\substack{1\le i\le m\\ \kappa_i=0}} \|f_{\ell_i}\|_{\mathcal H}
\prod_{\substack{1\le i\le m\\ \kappa_i=1}} \|g_{\ell_i}\|_{\mathcal H}
\prod_{\substack{1\le j\le n\\ \kappa_j'=0}} \|f_{\ell_j'}\|_{\mathcal H}
\prod_{\substack{1\le j\le n\\ \kappa_j'=1}} \|g_{\ell_j'}\|_{\mathcal H}
$$
Proof: By Problem I.17,
$$
\begin{aligned}
\int \prod_{i=1}^m \psi(\ell_i,\kappa_i)\ {:}\prod_{j=1}^n \psi(\ell_j',\kappa_j'){:}\ d\mu_A(\psi)
&= \iint \prod_{i=1}^m \psi(\ell_i,\kappa_i)\prod_{j=1}^n \big[\psi(\ell_j',\kappa_j')+\phi(\ell_j',\kappa_j')\big]\, d\mu_A(\psi)\, d\mu_{-A}(\phi)\\
&= \sum_{J\subset\{1,\cdots,n\}} \pm\int \prod_{i=1}^m \psi(\ell_i,\kappa_i)\prod_{j\in J} \psi(\ell_j',\kappa_j')\, d\mu_A(\psi)
\int \prod_{j\notin J} \phi(\ell_j',\kappa_j')\, d\mu_{-A}(\phi)
\end{aligned}
$$
There are $2^n$ terms in this sum. For each term, the first factor is, by Proposition I.32, no larger than
$$
\prod_{\substack{1\le i\le m\\ \kappa_i=0}} \|f_{\ell_i}\|_{\mathcal H}
\prod_{\substack{1\le i\le m\\ \kappa_i=1}} \|g_{\ell_i}\|_{\mathcal H}
\prod_{\substack{j\in J\\ \kappa_j'=0}} \|f_{\ell_j'}\|_{\mathcal H}
\prod_{\substack{j\in J\\ \kappa_j'=1}} \|g_{\ell_j'}\|_{\mathcal H}
$$
and the second factor is, again by Proposition I.32 (since $-A(\ell,\ell') = \langle f_\ell,-g_{\ell'}\rangle_{\mathcal H}$), no larger than
$$
\prod_{\substack{j\notin J\\ \kappa_j'=0}} \|f_{\ell_j'}\|_{\mathcal H}
\prod_{\substack{j\notin J\\ \kappa_j'=1}} \|g_{\ell_j'}\|_{\mathcal H}
$$
Problem I.26 Prove, under the hypotheses of Proposition I.34, that
$$
\Big|\int \prod_{i=1}^n\, {:}\prod_{\mu=1}^{e_i} \psi(\ell_{i,\mu},\kappa_{i,\mu}){:}\ d\mu_A(\psi)\Big|
\le \prod_{\substack{1\le i\le n,\ 1\le\mu\le e_i\\ \kappa_{i,\mu}=0}} \sqrt{2}\,\|f_{\ell_{i,\mu}}\|_{\mathcal H}
\prod_{\substack{1\le i\le n,\ 1\le\mu\le e_i\\ \kappa_{i,\mu}=1}} \sqrt{2}\,\|g_{\ell_{i,\mu}}\|_{\mathcal H}
$$
Finally, we specialise to almost the full structure of (I.4). We only replace $\frac{\delta_{\sigma,\sigma'}}{ik_0-e(\mathbf{k})}$ by a general matrix valued function of the (momentum) $k$. This is the typical structure in quantum field theories. Let $S$ be a finite set. Let $E_{\sigma,\sigma'}(k)\in L^1\big(\mathbb{R}^{d+1},\frac{dk}{(2\pi)^{d+1}}\big)$, for each $\sigma,\sigma'\in S$, and set
$$
C_{\sigma,\sigma'}(x,y) = \int \frac{d^{d+1}k}{(2\pi)^{d+1}}\, e^{ik\cdot(y-x)}\, E_{\sigma,\sigma'}(k)
$$
Let $\int\,\cdot\,d\mu_C(\psi)$ be the Grassmann Gaussian integral determined by
$$
\int \psi_\sigma(x,\kappa)\psi_{\sigma'}(y,\kappa')\, d\mu_C(\psi) =
\begin{cases}
0 & \text{if } \kappa=\kappa'=0\\
C_{\sigma,\sigma'}(x,y) & \text{if } \kappa=0,\ \kappa'=1\\
-C_{\sigma',\sigma}(y,x) & \text{if } \kappa=1,\ \kappa'=0\\
0 & \text{if } \kappa=\kappa'=1
\end{cases}
$$
for all $x,y\in\mathbb{R}^{d+1}$ and $\sigma,\sigma'\in S$.

Corollary I.35
$$
\sup_{\substack{x_i,\sigma_i,\kappa_i\\ x_j',\sigma_j',\kappa_j'}}
\Big|\int \prod_{i=1}^m \psi_{\sigma_i}(x_i,\kappa_i)\ {:}\prod_{j=1}^n \psi_{\sigma_j'}(x_j',\kappa_j'){:}\ d\mu_C(\psi)\Big|
\le 2^n\Big(\int \|E(k)\|\,\tfrac{dk}{(2\pi)^{d+1}}\Big)^{(m+n)/2}
$$
$$
\sup_{\substack{x_{i,\mu},\sigma_{i,\mu}\\ \kappa_{i,\mu}}}
\Big|\int \prod_{i=1}^n\, {:}\psi_{\sigma_{i,1}}(x_{i,1},\kappa_{i,1})\cdots\psi_{\sigma_{i,e_i}}(x_{i,e_i},\kappa_{i,e_i}){:}\ d\mu_C(\psi)\Big|
\le \Big(2\int \|E(k)\|\,\tfrac{dk}{(2\pi)^{d+1}}\Big)^{\Sigma_i e_i/2}
$$
Here $\|E(k)\|$ denotes the norm of the matrix $\big[E_{\sigma,\sigma'}(k)\big]_{\sigma,\sigma'\in S}$ as an operator on $\ell^2\big(\mathbb C^{|S|}\big)$.
Proof: Define
$$
X = \big\{(i,\mu)\ \big|\ 1\le i\le n,\ 1\le\mu\le e_i\big\}
\qquad\qquad
A\big((i,\mu),(i',\mu')\big) = C_{\sigma_{i,\mu},\sigma_{i',\mu'}}(x_{i,\mu},x_{i',\mu'})
$$
Let $\Psi\big((i,\mu),\kappa\big)$, $(i,\mu)\in X$, $\kappa\in\{0,1\}$, be generators of a Grassmann algebra and let $d\mu_A(\Psi)$ be the Grassmann Gaussian measure on that algebra with covariance $A$. This construction has been arranged so that
$$
\int \psi_{\sigma_{i,\mu}}(x_{i,\mu},\kappa_{i,\mu})\,\psi_{\sigma_{i',\mu'}}(x_{i',\mu'},\kappa_{i',\mu'})\, d\mu_C(\psi)
= \int \Psi\big((i,\mu),\kappa_{i,\mu}\big)\,\Psi\big((i',\mu'),\kappa_{i',\mu'}\big)\, d\mu_A(\Psi)
$$
and consequently
$$
\int \prod_{i=1}^n\, {:}\psi_{\sigma_{i,1}}(x_{i,1},\kappa_{i,1})\cdots\psi_{\sigma_{i,e_i}}(x_{i,e_i},\kappa_{i,e_i}){:}\ d\mu_C(\psi)
= \int \prod_{i=1}^n\, {:}\Psi\big((i,1),\kappa_{i,1}\big)\cdots\Psi\big((i,e_i),\kappa_{i,e_i}\big){:}\ d\mu_A(\Psi)
$$
Let $\mathcal H = L^2\big(\mathbb{R}^{d+1},\frac{dk}{(2\pi)^{d+1}}\big)\otimes\mathbb C^{|S|}$ and
$$
f_{i,\mu}(k,\sigma) = e^{ik\cdot x_{i,\mu}}\sqrt{\|E(k)\|}\,\delta_{\sigma,\sigma_{i,\mu}}
\qquad\qquad
g_{i,\mu}(k,\sigma) = e^{ik\cdot x_{i,\mu}}\,\frac{E_{\sigma,\sigma_{i,\mu}}(k)}{\sqrt{\|E(k)\|}}
$$
If $\|E(k)\| = 0$, set $g_{i,\mu}(k,\sigma) = 0$. Then
$$
A\big((i,\mu),(i',\mu')\big) = \langle f_{i,\mu}, g_{i',\mu'}\rangle_{\mathcal H}
$$
and, since $\sum_{\sigma\in S}\big|E_{\sigma,\sigma_{i,\mu}}(k)\big|^2 \le \|E(k)\|^2$,
$$
\|f_{i,\mu}\|_{\mathcal H},\ \|g_{i,\mu}\|_{\mathcal H} \le \Big\|\sqrt{\|E(k)\|}\,\Big\|_{L^2}
= \Big(\int \|E(k)\|\,\tfrac{dk}{(2\pi)^{d+1}}\Big)^{1/2}
$$
The Corollary now follows from Proposition I.34 and Problem I.26.
II. Fermionic Expansions
This chapter concerns an expansion that can be used to exhibit analytic control over the right hand side of (I.7) in fermionic quantum field theory models, when the covariance S is “really nice”. It is also used as one ingredient in a renormalization group procedure that controls the right hand side of (I.8) when the covariance S is not so nice.
II.1 Notation and Definitions

Here are the main notations that we shall use throughout this chapter. Let
• $A$ be the Grassmann algebra generated by $\{a_1,\cdots,a_D\}$. Think of $\{a_1,\cdots,a_D\}$ as some finite approximation to the set $\big\{\psi_{x,\sigma},\bar\psi_{x,\sigma}\ \big|\ x\in\mathbb{R}^{d+1},\ \sigma\in\{\uparrow,\downarrow\}\big\}$ of fields integrated out in a renormalization group transformation like
$$
W(\psi,\bar\psi) \to \widetilde W(\Psi,\bar\Psi) = \log\frac{1}{Z}\int e^{W(\psi+\Psi,\bar\psi+\bar\Psi)}\, d\mu_S(\psi,\bar\psi)
$$
• $C$ be the Grassmann algebra generated by $\{c_1,\cdots,c_D\}$. Think of $\{c_1,\cdots,c_D\}$ as some finite approximation to the set $\big\{\Psi_{x,\sigma},\bar\Psi_{x,\sigma}\ \big|\ x\in\mathbb{R}^{d+1},\ \sigma\in\{\uparrow,\downarrow\}\big\}$ of fields that are arguments of the output of the renormalization group transformation.
• $AC$ be the Grassmann algebra generated by $\{a_1,\cdots,a_D,c_1,\cdots,c_D\}$.
• $S = (S_{ij})$ be a skew symmetric matrix of order $D$. Think of $S$ as the "single scale" covariance $S^{(j)}$ of the Gaussian measure that is integrated out in the renormalization group step. See (I.8).
• $\int\,\cdot\,d\mu_S(a)$ be the Grassmann Gaussian integral with covariance $S$. It is the unique linear map from $AC$ to $C$ satisfying
$$
\int e^{\Sigma\, c_ia_i}\, d\mu_S(a) = e^{-\frac{1}{2}\Sigma\, c_iS_{ij}c_j}
$$
In particular
$$
\int a_ia_j\, d\mu_S(a) = S_{ij}
$$
• $\mathcal M_r = \big\{(i_1,\cdots,i_r)\ \big|\ 1\le i_1,\cdots,i_r\le D\big\}$ be the set of all multi indices of degree $r\ge 0$. For each $I\in\mathcal M_r$ set $a_I = a_{i_1}\cdots a_{i_r}$. By convention, $a_\emptyset = 1$.
• the space $(AC)_0$ of "interactions" be the linear subspace of $AC$ of even Grassmann polynomials with no constant term. That is, polynomials of the form
$$
W(c,a) = \sum_{\substack{l,r\in\mathbb N\\ 1\le l+r\in 2\mathbb Z}}\ \sum_{\substack{L\in\mathcal M_l\\ J\in\mathcal M_r}} w_{l,r}(L,J)\, c_La_J
$$
Usually, in the renormalization group map, the interaction is of the form $W(c+a)$. (See Definition I.22.) We do not require this.
Here are the main objects that shall concern us in this chapter.

Definition II.1
a) The renormalization group map $\Omega : (AC)_0 \to C_0$ is
$$
\Omega(W)(c) = \log\frac{1}{Z_{W,S}}\int e^{W(c,a)}\, d\mu_S(a)
\qquad\text{where}\qquad
Z_{W,S} = \int e^{W(0,a)}\, d\mu_S(a)
$$
It is defined for all $W$'s obeying $\int e^{W(0,a)}\, d\mu_S(a)\neq 0$. The factor $\frac{1}{Z_{W,S}}$ ensures that $\Omega(W)(0) = 0$, i.e. that $\Omega(W)(c)$ contains no constant term. Since $\Omega(W) = 0$ for $W = 0$,
$$
\Omega(W)(c) = \int_0^1 \frac{d}{d\varepsilon}\,\Omega(\varepsilon W)(c)\, d\varepsilon
= \int_0^1 \frac{\int W(c,a)\, e^{\varepsilon W(c,a)}\, d\mu_S(a)}{\int e^{\varepsilon W(c,a)}\, d\mu_S(a)}\, d\varepsilon
- \int_0^1 \frac{\int W(0,a)\, e^{\varepsilon W(0,a)}\, d\mu_S(a)}{\int e^{\varepsilon W(0,a)}\, d\mu_S(a)}\, d\varepsilon
\tag{II.1}
$$
Thus to get bounds on the renormalization group map, it suffices to get bounds on
b) the Schwinger functional $\mathcal S : AC \to C$, defined by
$$
\mathcal S(f) = \frac{1}{Z(c)}\int f(c,a)\, e^{W(c,a)}\, d\mu_S(a)
\qquad\text{where}\qquad
Z(c) = \int e^{W(c,a)}\, d\mu_S(a)
$$
Despite our notation, $\mathcal S(f)$ is a function of $W$ and $S$ as well as $f$.
c) Define the linear map $R : AC \to AC$ by
$$
R(f)(c,a) = \int {:}\,e^{W(c,a+b)-W(c,a)} - 1\,{:}_b\, f(c,b)\, d\mu_S(b)
$$
where ${:}\,\cdot\,{:}_b$ denotes Wick ordering of the $b$-field (see §I.7) and is determined by
$$
{:}\,e^{\Sigma\, A_ia_i+\Sigma\, B_ib_i+\Sigma\, C_ic_i}\,{:}_b = e^{\frac{1}{2}\Sigma\, B_iS_{ij}B_j}\, e^{\Sigma\, A_ia_i+\Sigma\, B_ib_i+\Sigma\, C_ic_i}
$$
where $\{a_i\}$, $\{A_i\}$, $\{b_i\}$, $\{B_i\}$, $\{c_i\}$, $\{C_i\}$ are bases of six isomorphic vector spaces.

If you don't know what a Feynman diagram is, skip this paragraph. Diagrammatically, $\frac{1}{Z(c)}\int f(c,a)\, e^{W(c,a)}\, d\mu_S(a)$ is the sum of all connected ($f$ is viewed as a single, connected, vertex) Feynman diagrams with one $f$-vertex and arbitrary numbers of $W$-vertices and $S$-lines. The operation $R(f)$ builds parts of those diagrams. It introduces those $W$-vertices that are connected directly to the $f$-vertex (i.e. that share a common $S$-line with $f$) and it introduces those lines that connect $f$ either to itself or to a $W$-vertex.
Problem II.1 Let
$$
F(a) = \sum_{j_1,j_2=1}^D f(j_1,j_2)\, a_{j_1}a_{j_2}
\qquad\qquad
W(a) = \sum_{j_1,j_2,j_3,j_4=1}^D w(j_1,j_2,j_3,j_4)\, a_{j_1}a_{j_2}a_{j_3}a_{j_4}
$$
with $f(j_1,j_2)$ and $w(j_1,j_2,j_3,j_4)$ antisymmetric under permutation of their arguments.
a) Set
$$
\mathcal S(\lambda) = \frac{1}{Z_\lambda}\int F(a)\, e^{\lambda W(a)}\, d\mu_S(a)
\qquad\text{where}\qquad
Z_\lambda = \int e^{\lambda W(a)}\, d\mu_S(a)
$$
Compute $\frac{d^\ell}{d\lambda^\ell}\mathcal S(\lambda)\big|_{\lambda=0}$ for $\ell = 0,1,2$.
b) Set
$$
R(\lambda) = \int {:}\,e^{\lambda W(a+b)-\lambda W(a)} - 1\,{:}_b\, F(b)\, d\mu_S(b)
$$
Compute $\frac{d^\ell}{d\lambda^\ell}R(\lambda)\big|_{\lambda=0}$ for all $\ell\in\mathbb N$.
II.2 The Expansion – Algebra

To obtain the expansion that will be discussed in this chapter, expand the $(\mathbb 1 - R)^{-1}$ of the following Theorem, which shall be proven shortly, in a power series in $R$.

Theorem II.2 Suppose that $W\in (AC)_0$ is such that the kernel of $\mathbb 1 - R$ is trivial. Then, for all $f$ in $AC$,
$$
\mathcal S(f) = \int (\mathbb 1-R)^{-1}(f)\, d\mu_S(a)
$$

Proposition II.3 For all $f$ in $AC$ and $W\in (AC)_0$,
$$
\int f(c,a)\, e^{W(c,a)}\, d\mu_S(a)
= \Big[\int f(c,b)\, d\mu_S(b)\Big]\int e^{W(c,a)}\, d\mu_S(a)
+ \int R(f)(c,a)\, e^{W(c,a)}\, d\mu_S(a)
$$

Proof: Substituting in the definition of $R(f)$,
$$
\begin{aligned}
\Big[\int f(c,b)\, d\mu_S(b)\Big]\int e^{W(c,a)}\, d\mu_S(a) &+ \int R(f)(c,a)\, e^{W(c,a)}\, d\mu_S(a)\\
&= \int\Big[\int {:}\,e^{W(c,a+b)-W(c,a)}\,{:}_b\, f(c,b)\, d\mu_S(b)\Big]\, e^{W(c,a)}\, d\mu_S(a)\\
&= \iint {:}\,e^{W(c,a+b)}\,{:}_b\, f(c,b)\, d\mu_S(b)\, d\mu_S(a)
\end{aligned}
$$
since ${:}\,e^{W(c,a+b)-W(c,a)}\,{:}_b = {:}\,e^{W(c,a+b)}\,{:}_b\, e^{-W(c,a)}$. Continuing,
$$
\begin{aligned}
\Big[\int f(c,b)\, d\mu_S(b)\Big]\int e^{W(c,a)}\, d\mu_S(a) + \int R(f)(c,a)\, e^{W(c,a)}\, d\mu_S(a)
&= \iint {:}\,e^{W(c,a+b)}\,{:}_a\, f(c,b)\, d\mu_S(b)\, d\mu_S(a)
&&\text{by Problem I.20}\\
&= \int f(c,b)\, e^{W(c,b)}\, d\mu_S(b)
&&\text{by Problems I.11, I.22}\\
&= \int f(c,a)\, e^{W(c,a)}\, d\mu_S(a)
\end{aligned}
$$

Proof of Theorem II.2: For all $g(c,a)\in AC$,
$$
\int (\mathbb 1-R)(g)\, e^{W(c,a)}\, d\mu_S(a) = Z(c)\int g(c,a)\, d\mu_S(a)
$$
by Proposition II.3. If the kernel of $\mathbb 1 - R$ is trivial, then we may choose $g = (\mathbb 1-R)^{-1}(f)$. So
$$
\int f(c,a)\, e^{W(c,a)}\, d\mu_S(a) = Z(c)\int (\mathbb 1-R)^{-1}(f)(c,a)\, d\mu_S(a)
$$
The left hand side does not vanish for all $f\in AC$ (for example, for $f = e^{-W}$) so $Z(c)$ is nonzero and
$$
\frac{1}{Z(c)}\int f(c,a)\, e^{W(c,a)}\, d\mu_S(a) = \int (\mathbb 1-R)^{-1}(f)(c,a)\, d\mu_S(a)
$$
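The way Theorem II.2 is used is through the Neumann series $(\mathbb 1-R)^{-1}=\sum_{n\ge 0}R^n$, which converges once the norm bounds of §II.3 make $R$ a contraction. The following generic linear-algebra sketch (assuming numpy; a random operator standing in for $R$) illustrates that mechanism:

```python
import numpy as np

rng = np.random.default_rng(3)
R = rng.standard_normal((8, 8))
R *= 0.4 / np.linalg.norm(R, 2)   # scale so the operator norm is 0.4 < 1

# Partial sums of the Neumann series sum_n R^n converge to (1 - R)^{-1}
# geometrically, at rate ||R||^n.
inv = np.linalg.inv(np.eye(8) - R)
partial = np.zeros((8, 8))
power = np.eye(8)
for _ in range(60):
    partial += power
    power = power @ R
```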
II.3 The Expansion – Bounds

Definition II.4 (Norms) For any function $f : \mathcal M_r \to \mathbb C$, define
$$
\|f\| = \max_{1\le i\le r}\ \max_{1\le k\le D}\ \sum_{\substack{J\in\mathcal M_r\\ j_i=k}} \big|f(J)\big|
\qquad\qquad
|||f||| = \sum_{J\in\mathcal M_r} \big|f(J)\big|
$$
The norm $\|\,\cdot\,\|$, which is an "$L^1$ norm with one argument held fixed", is appropriate for kernels, like those appearing in interactions, that become translation invariant when the cutoffs are removed. Any $f(c,a)\in AC$ has a unique representation
$$
f(c,a) = \sum_{l,r\ge 0}\ \sum_{\substack{k_1,\cdots,k_l\\ j_1,\cdots,j_r}} f_{l,r}(k_1,\cdots,k_l,j_1,\cdots,j_r)\, c_{k_1}\cdots c_{k_l}a_{j_1}\cdots a_{j_r}
$$
with each kernel $f_{l,r}(k_1,\cdots,k_l,j_1,\cdots,j_r)$ antisymmetric under separate permutation of its $k$ arguments and its $j$ arguments. Define
$$
\|f(c,a)\|_\alpha = \sum_{l,r\ge 0} \alpha^{l+r}\, \|f_{l,r}\|
\qquad\qquad
|||f(c,a)|||_\alpha = \sum_{l,r\ge 0} \alpha^{l+r}\, |||f_{l,r}|||
$$
In the norms $\|f_{l,r}\|$ and $|||f_{l,r}|||$ on the right hand side, $f_{l,r}$ is viewed as a function from $\mathcal M_{l+r}$ to $\mathbb C$.

Problem II.2 Define, for all $f : \mathcal M_r\to\mathbb C$ and $g : \mathcal M_s\to\mathbb C$ with $r,s\ge 1$ and $r+s>2$, $f*g : \mathcal M_{r+s-2}\to\mathbb C$ by
$$
f*g(j_1,\cdots,j_{r+s-2}) = \sum_{k=1}^D f(j_1,\cdots,j_{r-1},k)\, g(k,j_r,\cdots,j_{r+s-2})
$$
Prove that
$$
\|f*g\| \le \|f\|\,\|g\|
\qquad\text{and}\qquad
|||f*g||| \le \min\big\{\,|||f|||\,\|g\|,\ \|f\|\,|||g|||\,\big\}
$$
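The two norms and the convolution $f*g$ translate directly into tensor operations. The sketch below (assuming numpy; random kernels of our choosing, without the antisymmetry that interaction kernels carry — the inequalities of Problem II.2 hold regardless) implements them and probes the bounds numerically:

```python
import numpy as np

rng = np.random.default_rng(4)
D = 5
f = rng.standard_normal((D, D, D))   # a kernel f : M_3 -> R
g = rng.standard_normal((D, D))      # a kernel g : M_2 -> R

def norm_fixed(h):
    # ||h|| = max over which argument is held fixed, and at which value it is
    # held, of the l^1 norm over the remaining arguments.
    best = 0.0
    for i in range(h.ndim):
        moved = np.moveaxis(h, i, 0)
        sums = np.abs(moved).reshape(h.shape[i], -1).sum(axis=1)
        best = max(best, sums.max())
    return best

def norm_total(h):
    # |||h||| = plain l^1 norm over all arguments.
    return np.abs(h).sum()

# (f * g)(j_1,...,j_{r+s-2}) = sum_k f(j_1,...,j_{r-1}, k) g(k, j_r, ...)
conv = np.tensordot(f, g, axes=([f.ndim - 1], [0]))
```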
Definition II.5 (Hypotheses) We denote by
$$
\textbf{(HG)}\qquad \Big|\int b_H\, {:}b_J{:}\ d\mu_S(b)\Big| \le \mathrm F^{|H|+|J|}
\quad\text{for all } H,J\in\textstyle\bigcup_{r\ge 0}\mathcal M_r
$$
$$
\textbf{(HS)}\qquad \|S\| \le \mathrm F^2\mathrm D
$$
the main hypotheses on $S$ that we shall use. Here $\|S\|$ is the norm of Definition II.4 applied to the function $S : (i,j)\in\mathcal M_2\mapsto S_{ij}$. So $\mathrm F$ is a measure of the "typical size of fields $b$ in the support of the measure $d\mu_S(b)$" and $\mathrm D$ is a measure of the decay rate of $S$. Hypothesis (HG) is typically verified using Proposition I.34 or Corollary I.35. Hypothesis (HS) is typically verified using the techniques of Appendix C.
Problem II.3 Let ${:}\,\cdot\,{:}_S$ denote Wick ordering with respect to the covariance $S$.
a) Prove that if
$$
\Big|\int b_H\, {:}b_J{:}_S\, d\mu_S(b)\Big| \le \mathrm F^{|H|+|J|}
\quad\text{for all } H,J\in\textstyle\bigcup_{r\ge 0}\mathcal M_r
$$
then
$$
\Big|\int b_H\, {:}b_J{:}_{zS}\, d\mu_{zS}(b)\Big| \le \big(\sqrt{|z|}\,\mathrm F\big)^{|H|+|J|}
\quad\text{for all } H,J\in\textstyle\bigcup_{r\ge 0}\mathcal M_r
$$
Hint: first prove that $\int b_H\, {:}b_J{:}_{zS}\, d\mu_{zS}(b) = z^{(|H|+|J|)/2}\int b_H\, {:}b_J{:}_S\, d\mu_S(b)$.
b) Prove that if
$$
\Big|\int b_H\, {:}b_J{:}_S\, d\mu_S(b)\Big| \le \mathrm F^{|H|+|J|}
\qquad\text{and}\qquad
\Big|\int b_H\, {:}b_J{:}_T\, d\mu_T(b)\Big| \le \mathrm G^{|H|+|J|}
$$
for all $H,J\in\bigcup_{r\ge 0}\mathcal M_r$, then
$$
\Big|\int b_H\, {:}b_J{:}_{S+T}\, d\mu_{S+T}(b)\Big| \le \big(\mathrm F+\mathrm G\big)^{|H|+|J|}
\quad\text{for all } H,J\in\textstyle\bigcup_{r\ge 0}\mathcal M_r
$$
Hint: Proposition I.21 and Problem I.21.
Theorem II.6 Assume Hypotheses (HG) and (HS). Let $\alpha\ge 2$ and $W\in (AC)_0$ obey $\mathrm D\,\|W\|_{(\alpha+1)\mathrm F} \le \frac{1}{3}$. Then, for all $f\in AC$,
$$
\|R(f)\|_{\alpha\mathrm F} \le \tfrac{3}{\alpha}\,\mathrm D\,\|W\|_{(\alpha+1)\mathrm F}\, \|f\|_{\alpha\mathrm F}
\qquad\qquad
|||R(f)|||_{\alpha\mathrm F} \le \tfrac{3}{\alpha}\,\mathrm D\,\|W\|_{(\alpha+1)\mathrm F}\, |||f|||_{\alpha\mathrm F}
$$
The proof of this Theorem follows that of Lemma II.11.

Corollary II.7 Assume Hypotheses (HG) and (HS). Let $\alpha\ge 2$ and $W\in (AC)_0$ obey $\mathrm D\,\|W\|_{(\alpha+1)\mathrm F} \le \frac{1}{3}$. Then, for all $f\in AC$,
$$
\|\mathcal S(f)(c)-\mathcal S(f)(0)\|_{\alpha\mathrm F} \le \tfrac{\alpha}{\alpha-1}\, \|f\|_{\alpha\mathrm F}
\qquad
|||\mathcal S(f)|||_{\alpha\mathrm F} \le \tfrac{\alpha}{\alpha-1}\, |||f|||_{\alpha\mathrm F}
\qquad
\|\Omega(W)\|_{\alpha\mathrm F} \le \tfrac{\alpha}{\alpha-1}\, \|W\|_{\alpha\mathrm F}
$$
The proof of this Corollary follows that of Lemma II.12.

Let
$$
W(c,a) = \sum_{l,r\in\mathbb N}\ \sum_{\substack{L\in\mathcal M_l\\ J\in\mathcal M_r}} w_{l,r}(L,J)\, c_La_J
$$
where $w_{l,r}(L,J)$ is a function which is separately antisymmetric under permutations of its $L$ and $J$ arguments and that vanishes identically when $l+r$ is zero or odd. With this antisymmetry,
$$
W(c,a+b) - W(c,a) = \sum_{\substack{l,r\ge 0\\ s\ge 1}}\ \sum_{\substack{L\in\mathcal M_l\\ J\in\mathcal M_r\\ K\in\mathcal M_s}} \tbinom{r+s}{s}\, w_{l,r+s}(L,J.K)\, c_La_Jb_K
$$
where $J.K = (j_1,\cdots,j_r,k_1,\cdots,k_s)$ when $J = (j_1,\cdots,j_r)$ and $K = (k_1,\cdots,k_s)$. That is, $J.K$ is the concatenation of $J$ and $K$. So
$$
{:}\,e^{W(c,a+b)-W(c,a)} - 1\,{:}_b
= \sum_{\ell>0} \frac{1}{\ell!} \sum_{\substack{l_i,r_i\ge 0\\ s_i\ge 1}}\ \sum_{\substack{L_i\in\mathcal M_{l_i}\\ J_i\in\mathcal M_{r_i}\\ K_i\in\mathcal M_{s_i}}}
{:}\prod_{i=1}^\ell \tbinom{r_i+s_i}{s_i}\, w_{l_i,r_i+s_i}(L_i,J_i.K_i)\, c_{L_i}a_{J_i}b_{K_i}{:}_b
$$
with the index $i$ in the second and third sums running from $1$ to $\ell$. Hence
$$
R(f) = \sum_{\ell>0} \frac{1}{\ell!} \sum_{\substack{\mathbf r,\mathbf s,\mathbf l\in\mathbb N^\ell\\ s_i\ge 1}} \prod_{i=1}^\ell \tbinom{r_i+s_i}{s_i}\, R^{\mathbf s}_{\mathbf l,\mathbf r}(f)
$$
where
$$
R^{\mathbf s}_{\mathbf l,\mathbf r}(f)
= \sum_{\substack{L_i\in\mathcal M_{l_i}\\ J_i\in\mathcal M_{r_i}\\ K_i\in\mathcal M_{s_i}}}
\int {:}\prod_{i=1}^\ell w_{l_i,r_i+s_i}(L_i,J_i.K_i)\, c_{L_i}a_{J_i}b_{K_i}{:}_b\, f(c,b)\, d\mu_S(b)
$$

Problem II.4 Let $W(a) = \sum_{1\le i,j\le D} w(i,j)\, a_ia_j$ with $w(i,j) = -w(j,i)$. Verify that
$$
W(a+b) - W(a) = \sum_{1\le i,j\le D} w(i,j)\, b_ib_j + 2\sum_{1\le i,j\le D} w(i,j)\, a_ib_j
$$
Problem II.5 Assume Hypothesis (HG). Let $s,s',m\ge1$ and
$$f(b)=\sum_{H\in M_m} f_m(H)\,b_H\qquad W(b)=\sum_{K\in M_s} w_s(K)\,b_K\qquad W'(b)=\sum_{K'\in M_{s'}} w'_{s'}(K')\,b_{K'}$$
a) Prove that
$$\Big|\int :W(b):_b\,f(b)\,d\mu_S(b)\Big|\le m\,\mathrm F^{m+s-2}\,|||f_m|||\;\|S\|\,\|w_s\|$$
b) Prove that $\int :W(b)W'(b):_b\,f(b)\,d\mu_S(b)=0$ if $m=1$ and
$$\Big|\int :W(b)W'(b):_b\,f(b)\,d\mu_S(b)\Big|\le m(m-1)\,\mathrm F^{m+s+s'-4}\,|||f_m|||\;\|S\|^2\,\|w_s\|\,\|w'_{s'}\|$$
if $m\ge2$.
Proposition II.8 Assume Hypothesis (HG). Let
$$f^{(p,m)}(c,a)=\sum_{\substack{H\in M_m\\ I\in M_p}} f_{p,m}(I,H)\,c_I\,a_H$$
Let $r,s,l\in\mathrm{I\!N}^\ell$ with each $s_i\ge1$. If $m<\ell$, $R^s_{l,r}(f^{(p,m)})$ vanishes. If $m\ge\ell$,
$$\|R^s_{l,r}(f^{(p,m)})\|_1\le \ell!\binom{m}{\ell}\,\mathrm F^m\,\|f_{p,m}\|\prod_{i=1}^{\ell}\|S\|\,\mathrm F^{s_i-2}\,\|w_{l_i,r_i+s_i}\|$$
$$|||R^s_{l,r}(f^{(p,m)})|||_1\le \ell!\binom{m}{\ell}\,\mathrm F^m\,|||f_{p,m}|||\prod_{i=1}^{\ell}\|S\|\,\mathrm F^{s_i-2}\,\|w_{l_i,r_i+s_i}\|$$
Proof: We have
$$R^s_{l,r}(f^{(p,m)})=\pm\sum_{I\in M_p}\ \sum_{\substack{J_i\in M_{r_i},\,L_i\in M_{l_i}\\ 1\le i\le\ell}} f_{l,r,s}(L_1,\cdots,L_\ell,I,J_1,\cdots,J_\ell)\ c_{L_1}\cdots c_{L_\ell}c_I\,a_{J_1}\cdots a_{J_\ell}$$
with
$$f_{l,r,s}(L_1,\cdots,L_\ell,I,J_1,\cdots,J_\ell)=\sum_{\substack{H\in M_m\\ K_i\in M_{s_i}}}\int:\prod_{i=1}^{\ell}w_{l_i,r_i+s_i}(L_i,J_i,K_i)\,b_{K_i}:_b\ f_{p,m}(I,H)\,b_H\,d\mu_S(b)$$
The integral over $b$ is bounded in Lemma II.9 below. It shows that $f_{l,r,s}(L_1,\cdots,L_\ell,I,J_1,\cdots,J_\ell)$ vanishes if $m<\ell$ and is, for $m\ge\ell$, bounded by
$$\big|f_{l,r,s}(L_1,\cdots,L_\ell,I,J_1,\cdots,J_\ell)\big|\le\ell!\binom m\ell\ T(L_1,\cdots,L_\ell,I,J_1,\cdots,J_\ell)\ \mathrm F^{m+\Sigma s_i-2\ell}$$
where
$$T(L_1,\cdots,L_\ell,I,J_1,\cdots,J_\ell)=\sum_{H\in M_m}\prod_{i=1}^{\ell}\sum_{k_i=1}^{D}|u_i(L_i,J_i,k_i)|\,|S_{k_i,h_i}|\ |f_{p,m}(I,H)|$$
and, for each $i=1,\cdots,\ell$,
$$u_i(L_i,J_i,k_i)=\sum_{\tilde K_i\in M_{s_i-1}}\big|w_{l_i,r_i+s_i}\big(L_i,J_i,(k_i).\tilde K_i\big)\big|$$
Recall that $(k_1).\tilde K=(k_1,\tilde k_2,\cdots,\tilde k_s)$ when $\tilde K=(\tilde k_2,\cdots,\tilde k_s)$. By construction, $\|u_i\|=\|w_{l_i,r_i+s_i}\|$. By Lemma II.10, below,
$$\|T\|\le\|f_{p,m}\|\,\|S\|^\ell\prod_{i=1}^{\ell}\|u_i\|\le\|f_{p,m}\|\,\|S\|^\ell\prod_{i=1}^{\ell}\|w_{l_i,r_i+s_i}\|$$
and hence
$$\|f_{l,r,s}\|\le\ell!\binom m\ell\,\mathrm F^{m+\Sigma s_i-2\ell}\,\|f_{p,m}\|\,\|S\|^\ell\prod_{i=1}^{\ell}\|w_{l_i,r_i+s_i}\|$$
Similarly, the second bound follows from $|||T|||\le|||f_{p,m}|||\,\|S\|^\ell\prod_{i=1}^\ell\|u_i\|$.
Lemma II.9 Assume Hypothesis (HG). Then
$$\Big|\int\prod_{i=1}^{\ell}:b_{K_i}:\ b_H\,d\mu_S(b)\Big|\le \mathrm F^{|H|+\Sigma|K_i|-2\ell}\sum_{\substack{1\le\mu_1,\cdots,\mu_\ell\le|H|\\ \text{all different}}}\ \prod_{i=1}^{\ell}\big|S_{k_{i1},h_{\mu_i}}\big|$$
Proof: For convenience, set $j_i=k_{i1}$ and $\tilde K_i=K_i\setminus\{k_{i1}\}$ for each $i=1,\cdots,\ell$. By antisymmetry,
$$\int\prod_{i=1}^{\ell}:b_{K_i}:\ b_H\,d\mu_S(b)=\pm\int:\prod_{i=1}^{\ell}b_{\tilde K_i}\ b_{j_1}\cdots b_{j_\ell}:\ b_H\,d\mu_S(b)$$
Recall the integration by parts formula (Problem I.19)
$$\int:b_K b_j:\ f(b)\,d\mu_S(b)=\sum_{m=1}^{D}S_{j,m}\int:b_K:\ \tfrac{\partial}{\partial b_m}f(b)\,d\mu_S(b)$$
and the definition (Definition I.11)
$$\tfrac{\partial}{\partial b_m}\,b_H=\begin{cases}0&\text{if } m\notin H\\ (-1)^{|J|}\,b_J\,b_K&\text{if } b_H=b_J\,b_m\,b_K\end{cases}$$
of left partial derivative. Integrate by parts successively with respect to $b_{j_\ell},\cdots,b_{j_1}$:
$$\int\prod_{i=1}^{\ell}:b_{K_i}:\ b_H\,d\mu_S(b)=\pm\int:\prod_{i=1}^{\ell}b_{\tilde K_i}:\ \prod_{i=1}^{\ell}\sum_{m=1}^{D}S_{j_i,m}\tfrac{\partial}{\partial b_m}\,b_H\,d\mu_S(b)$$
and then apply Leibniz's rule
$$\prod_{i=1}^{\ell}\sum_{m=1}^{D}S_{j_i,m}\tfrac{\partial}{\partial b_m}\,b_H=\sum_{\substack{1\le\mu_1,\cdots,\mu_\ell\le|H|\\ \text{all different}}}\pm\prod_{i=1}^{\ell}S_{j_i,h_{\mu_i}}\ b_{H\setminus\{h_{\mu_1},\cdots,h_{\mu_\ell}\}}$$
and Hypothesis (HG).
Lemma II.10 Let
$$T(J_1,\cdots,J_\ell,I)=\sum_{H\in M_m}\prod_{i=1}^{\ell}\sum_{k_i=1}^{D}|u_i(J_i,k_i)|\,|S_{k_i,h_i}|\ |f_{p,m}(I,H)|$$
with $\ell\le m$. Then
$$\|T\|\le\|f_{p,m}\|\,\|S\|^{\ell}\prod_{i=1}^{\ell}\|u_i\|\qquad\qquad |||T|||\le|||f_{p,m}|||\,\|S\|^{\ell}\prod_{i=1}^{\ell}\|u_i\|$$
Proof: For the triple norm,
$$|||T|||=\sum_{\substack{J_i\in M_{r_i},\,1\le i\le\ell\\ I\in M_p}}T(J_1,\cdots,J_\ell,I)
=\sum_{\substack{J_i\in M_{r_i},\,1\le i\le\ell\\ I\in M_p,\ H\in M_m}}\ \prod_{i=1}^{\ell}\sum_{k_i=1}^{D}|u_i(J_i,k_i)|\,|S_{k_i,h_i}|\ |f_{p,m}(I,H)|$$
$$=\sum_{\substack{I\in M_p\\ H\in M_m}}\ \prod_{i=1}^{\ell}\Big[\sum_{k_i}\sum_{J_i\in M_{r_i}}|u_i(J_i,k_i)|\,|S_{k_i,h_i}|\Big]\ |f_{p,m}(I,H)|
=\sum_{\substack{I\in M_p\\ H\in M_m}}\ \prod_{i=1}^{\ell}v_i(h_i)\ |f_{p,m}(I,H)|$$
where
$$v_i(h_i)=\sum_{k_i}\sum_{J_i\in M_{r_i}}|u_i(J_i,k_i)|\,|S_{k_i,h_i}|$$
Since
$$\sup_{h_i}v_i(h_i)=\sup_{h_i}\sum_{k_i}\sum_{J_i\in M_{r_i}}|u_i(J_i,k_i)|\,|S_{k_i,h_i}|
\le\Big[\sup_{k_i}\sum_{J_i\in M_{r_i}}|u_i(J_i,k_i)|\Big]\ \sup_{h_i}\sum_{k_i}|S_{k_i,h_i}|\le\|u_i\|\,\|S\|$$
we have
$$|||T|||\le\sum_{\substack{I\in M_p\\ H\in M_m}}\prod_{i=1}^{\ell}\|u_i\|\,\|S\|\ |f_{p,m}(I,H)|=|||f_{p,m}|||\prod_{i=1}^{\ell}\|u_i\|\,\|S\|$$
The proof for the double norm, i.e. the norm with one external argument sup'd over rather than summed over, is similar. There are two cases: either the external argument that is being sup'd over is a component of $I$, or it is a component of one of the $J_j$'s. In the former case,
$$\sup_{1\le i_1\le D}\ \sum_{\substack{J_i\in M_{r_i},\,1\le i\le\ell\\ \tilde I\in M_{p-1},\ H\in M_m}}\ \prod_{i=1}^{\ell}\sum_{k_i=1}^{D}|u_i(J_i,k_i)|\,|S_{k_i,h_i}|\ \big|f_{p,m}\big((i_1).\tilde I,H\big)\big|
=\sup_{1\le i_1\le D}\ \sum_{\substack{\tilde I\in M_{p-1}\\ H\in M_m}}\ \prod_{i=1}^{\ell}v_i(h_i)\ \big|f_{p,m}\big((i_1).\tilde I,H\big)\big|$$
$$\le\sup_{1\le i_1\le D}\ \sum_{\substack{\tilde I\in M_{p-1}\\ H\in M_m}}\ \prod_{i=1}^{\ell}\|u_i\|\,\|S\|\ \big|f_{p,m}\big((i_1).\tilde I,H\big)\big|\le\|f_{p,m}\|\prod_{i=1}^{\ell}\|u_i\|\,\|S\|$$
In the latter case, if a component of, for example, $J_1$ is to be sup'd over,
$$\sup_{1\le j_1\le D}\ \sum_{\substack{\tilde J_1\in M_{r_1-1}\\ J_i\in M_{r_i},\,2\le i\le\ell\\ I\in M_p,\ H\in M_m}}\ \sum_{k_1=1}^{D}\big|u_1\big((j_1).\tilde J_1,k_1\big)\big|\,|S_{k_1,h_1}|\ \prod_{i=2}^{\ell}\sum_{k_i=1}^{D}|u_i(J_i,k_i)|\,|S_{k_i,h_i}|\ |f_{p,m}(I,H)|$$
$$=\sup_{1\le j_1\le D}\ \sum_{\substack{\tilde J_1\in M_{r_1-1}\\ I\in M_p,\ H\in M_m}}\ \sum_{k_1=1}^{D}\big|u_1\big((j_1).\tilde J_1,k_1\big)\big|\,|S_{k_1,h_1}|\ \prod_{i=2}^{\ell}v_i(h_i)\ |f_{p,m}(I,H)|$$
$$\le\prod_{i=2}^{\ell}\|u_i\|\,\|S\|\ \sup_{1\le j_1\le D}\ \sum_{\substack{\tilde J_1\in M_{r_1-1}\\ I\in M_p,\ H\in M_m}}\ \sum_{k_1=1}^{D}\big|u_1\big((j_1).\tilde J_1,k_1\big)\big|\,|S_{k_1,h_1}|\ |f_{p,m}(I,H)|
\le\|f_{p,m}\|\prod_{i=1}^{\ell}\|u_i\|\,\|S\|$$
by Problem II.2, twice.
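For $\ell=1$ and single-index arguments, the bound of Lemma II.10 reduces to submultiplicativity, in the norm of Definition II.4 (sup over either argument, sum over the other), of the composition $T(J,I)=\sum_{k,h}|u(J,k)|\,|S_{k,h}|\,|f_{p,m}(I,H)|$ with $H=(h)$. A toy numerical check (random nonnegative kernels and $D=6$ are illustrative assumptions, not from the text):

```python
import random

random.seed(1)
D = 6  # number of generator indices (illustrative)

def rand_kernel():
    return [[random.random() for _ in range(D)] for _ in range(D)]

u, S, f = rand_kernel(), rand_kernel(), rand_kernel()

def norm(A):
    # the norm of Definition II.4 for a two-argument kernel:
    # the larger of max-row-sum and max-column-sum
    rows = max(sum(row) for row in A)
    cols = max(sum(A[i][j] for i in range(D)) for j in range(D))
    return max(rows, cols)

# T(J, I) = sum_{k,h} u(J,k) S(k,h) f(I,h)   (the l = 1 case of Lemma II.10)
T = [[sum(u[J][k] * S[k][h] * f[I][h] for k in range(D) for h in range(D))
      for I in range(D)] for J in range(D)]
```

The assertion `norm(T) <= norm(f) * norm(S) * norm(u)` is exactly the Lemma for $\ell=1$.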
Lemma II.11 Assume Hypotheses (HG) and (HS). Let
$$f^{(p,m)}(c,a)=\sum_{\substack{H\in M_m\\ I\in M_p}} f_{p,m}(I,H)\,c_I\,a_H$$
For all $\alpha\ge2$ and $\ell\ge1$,
$$\sum_{\substack{r,s,l\in\mathrm{I\!N}^\ell\\ s_i\ge1}}\frac1{\ell!}\prod_{i=1}^{\ell}\binom{r_i+s_i}{s_i}\,\|R^s_{l,r}(f^{(p,m)})\|_{\alpha\mathrm F}\le\tfrac2\alpha\,\|f^{(p,m)}\|_{\alpha\mathrm F}\,\big[\mathrm D\|W\|_{(\alpha+1)\mathrm F}\big]^\ell$$
$$\sum_{\substack{r,s,l\in\mathrm{I\!N}^\ell\\ s_i\ge1}}\frac1{\ell!}\prod_{i=1}^{\ell}\binom{r_i+s_i}{s_i}\,|||R^s_{l,r}(f^{(p,m)})|||_{\alpha\mathrm F}\le\tfrac2\alpha\,|||f^{(p,m)}|||_{\alpha\mathrm F}\,\big[\mathrm D\|W\|_{(\alpha+1)\mathrm F}\big]^\ell$$

Proof: We prove the bound with the $\|\cdot\|$ norm. The proof for $|||\cdot|||$ is identical. We may assume, without loss of generality, that $m\ge\ell$. By Proposition II.8, since $\binom m\ell\le2^m$,
$$\tfrac1{\ell!}\|R^s_{l,r}(f^{(p,m)})\|_{\alpha\mathrm F}=\tfrac1{\ell!}\,\alpha^{\Sigma l_i+\Sigma r_i+p}\,\mathrm F^{\Sigma l_i+\Sigma r_i+p}\,\|R^s_{l,r}(f^{(p,m)})\|_1$$
$$\le\binom m\ell\,\alpha^{\Sigma l_i+\Sigma r_i+p}\,\mathrm F^{\Sigma l_i+\Sigma r_i+p}\,\mathrm F^m\,\|f_{p,m}\|\prod_{i=1}^{\ell}\|S\|\,\mathrm F^{s_i-2}\,\|w_{l_i,r_i+s_i}\|$$
$$\le 2^m\,\alpha^p\,\mathrm F^{m+p}\,\|f_{p,m}\|\prod_{i=1}^{\ell}\mathrm D\,\alpha^{l_i+r_i}\,\mathrm F^{l_i+r_i+s_i}\,\|w_{l_i,r_i+s_i}\|
=\Big(\tfrac2\alpha\Big)^m\|f^{(p,m)}\|_{\alpha\mathrm F}\prod_{i=1}^{\ell}\mathrm D\,\alpha^{l_i+r_i}\,\mathrm F^{l_i+r_i+s_i}\,\|w_{l_i,r_i+s_i}\|$$
using Hypothesis (HS), $\|S\|\le\mathrm F^2\mathrm D$. As $\alpha\ge2$ and $m\ge\ell\ge1$,
$$\sum_{\substack{r,s,l\in\mathrm{I\!N}^\ell\\ s_i\ge1}}\frac1{\ell!}\prod_{i=1}^{\ell}\binom{r_i+s_i}{s_i}\|R^s_{l,r}(f^{(p,m)})\|_{\alpha\mathrm F}
\le\tfrac2\alpha\|f^{(p,m)}\|_{\alpha\mathrm F}\Big[\mathrm D\sum_{\substack{r,s,l\in\mathrm{I\!N}\\ s\ge1}}\binom{r+s}{s}\,\alpha^{l+r}\,\mathrm F^{l+r+s}\,\|w_{l,r+s}\|\Big]^\ell$$
$$=\tfrac2\alpha\|f^{(p,m)}\|_{\alpha\mathrm F}\Big[\mathrm D\sum_{l\in\mathrm{I\!N}}\sum_{q\ge1}\sum_{s=1}^{q}\binom qs\,\alpha^{q-s}\,\alpha^l\,\mathrm F^{l+q}\,\|w_{l,q}\|\Big]^\ell
\le\tfrac2\alpha\|f^{(p,m)}\|_{\alpha\mathrm F}\Big[\mathrm D\sum_{q,l\in\mathrm{I\!N}}(\alpha+1)^q\,\alpha^l\,\mathrm F^{l+q}\,\|w_{l,q}\|\Big]^\ell$$
$$\le\tfrac2\alpha\|f^{(p,m)}\|_{\alpha\mathrm F}\Big[\mathrm D\|W\|_{(\alpha+1)\mathrm F}\Big]^\ell$$
Proof of Theorem II.6:
$$\|R(f)\|_{\alpha\mathrm F}\le\sum_{\ell>0}\frac1{\ell!}\sum_{\substack{r,s,l\in\mathrm{I\!N}^\ell\\ s_i\ge1}}\prod_{i=1}^{\ell}\binom{r_i+s_i}{s_i}\,\|R^s_{l,r}(f)\|_{\alpha\mathrm F}
\le\sum_{\ell>0}\sum_{m,p}\tfrac2\alpha\,\|f^{(p,m)}\|_{\alpha\mathrm F}\,\big[\mathrm D\|W\|_{(\alpha+1)\mathrm F}\big]^\ell$$
$$=\tfrac2\alpha\,\|f\|_{\alpha\mathrm F}\,\frac{\mathrm D\|W\|_{(\alpha+1)\mathrm F}}{1-\mathrm D\|W\|_{(\alpha+1)\mathrm F}}
\le\tfrac3\alpha\,\mathrm D\|W\|_{(\alpha+1)\mathrm F}\,\|f\|_{\alpha\mathrm F}$$
since $\mathrm D\|W\|_{(\alpha+1)\mathrm F}\le\frac13$. The proof for the other norm is similar.
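The last two steps use only the elementary fact that $\sum_{\ell\ge1}x^\ell=\frac{x}{1-x}\le\frac32 x$ for $0<x\le\frac13$, with $x=\mathrm D\|W\|_{(\alpha+1)\mathrm F}$; combined with the prefactor $\frac2\alpha$ this gives $\frac3\alpha$. A quick numerical sanity check of that inequality (not from the text):

```python
# x plays the role of D*||W||_{(alpha+1)F}, kept in (0, 1/3] by hypothesis
xs = [i * (1.0 / 3.0) / 1000.0 for i in range(1, 1001)]
geom = [x / (1.0 - x) for x in xs]  # sum over l >= 1 of x^l
```

Equality holds exactly at $x=\frac13$, where both sides equal $\frac12$.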
Lemma II.12 Assume Hypothesis (HG). If $\alpha\ge1$ then, for all $g(a,c)\in A_C$,
$$\Big\|\int g(a,c)\,d\mu_S(a)\Big\|_{\alpha\mathrm F}\le |||g(a,c)|||_{\alpha\mathrm F}\qquad\qquad
\Big\|\int\big[g(a,c)-g(a,0)\big]\,d\mu_S(a)\Big\|_{\alpha\mathrm F}\le \|g(a,c)\|_{\alpha\mathrm F}$$
Proof: Let
$$g(a,c)=\sum_{l,r\ge0}\ \sum_{\substack{L\in M_l\\ J\in M_r}} g_{l,r}(L,J)\,c_L\,a_J$$
with $g_{l,r}(L,J)$ antisymmetric under separate permutations of its $L$ and $J$ arguments. Then
$$\Big\|\int g(a,c)\,d\mu_S(a)\Big\|_{\alpha\mathrm F}
=\Big\|\sum_{l,r\ge0}\sum_{\substack{L\in M_l\\ J\in M_r}} g_{l,r}(L,J)\,c_L\int a_J\,d\mu_S(a)\Big\|_{\alpha\mathrm F}
=\sum_{l\ge0}\alpha^l\mathrm F^l\sum_{L\in M_l}\Big|\sum_{r\ge0}\sum_{J\in M_r} g_{l,r}(L,J)\int a_J\,d\mu_S(a)\Big|$$
$$\le\sum_{l,r\ge0}\alpha^l\,\mathrm F^{l+r}\sum_{\substack{L\in M_l\\ J\in M_r}}\big|g_{l,r}(L,J)\big|
\le |||g(a,c)|||_{\alpha\mathrm F}$$
since, by Hypothesis (HG), $\big|\int a_J\,d\mu_S(a)\big|\le\mathrm F^{|J|}$ and $\alpha\ge1$. Similarly,
$$\Big\|\int\big[g(a,c)-g(a,0)\big]\,d\mu_S(a)\Big\|_{\alpha\mathrm F}
=\Big\|\sum_{\substack{l\ge1\\ r\ge0}}\sum_{\substack{L\in M_l\\ J\in M_r}} g_{l,r}(L,J)\,c_L\int a_J\,d\mu_S(a)\Big\|_{\alpha\mathrm F}$$
$$=\sum_{l\ge1}\alpha^l\mathrm F^l\sup_{1\le k\le n}\sum_{\tilde L\in M_{l-1}}\Big|\sum_{r\ge0}\sum_{J\in M_r} g_{l,r}\big((k).\tilde L,J\big)\int a_J\,d\mu_S(a)\Big|$$
$$\le\sum_{\substack{l\ge1\\ r\ge0}}\alpha^l\,\mathrm F^{l+r}\sup_{1\le k\le n}\sum_{\substack{\tilde L\in M_{l-1}\\ J\in M_r}}\big|g_{l,r}\big((k).\tilde L,J\big)\big|
\le\|g(a,c)\|_{\alpha\mathrm F}$$
Proof of Corollary II.7: Set $g=(1\!\mathrm l-R)^{-1}f$. Then
$$\|S(f)(c)-S(f)(0)\|_{\alpha\mathrm F}=\Big\|\int\big[g(a,c)-g(a,0)\big]\,d\mu_S(a)\Big\|_{\alpha\mathrm F}\qquad\text{(Theorem II.2)}$$
$$\le\big\|(1\!\mathrm l-R)^{-1}(f)\big\|_{\alpha\mathrm F}\qquad\text{(Lemma II.12)}$$
$$\le\frac1{1-3\mathrm D\|W\|_{(\alpha+1)\mathrm F}/\alpha}\,\|f\|_{\alpha\mathrm F}\le\frac1{1-1/\alpha}\,\|f\|_{\alpha\mathrm F}=\frac\alpha{\alpha-1}\,\|f\|_{\alpha\mathrm F}\qquad\text{(Theorem II.6)}$$
The argument for $|||S(f)|||_{\alpha\mathrm F}$ is identical. With the more detailed notation
$$S(f,W)=\frac{\int f(a,c)\,e^{W(a,c)}\,d\mu_S(a)}{\int e^{W(a,c)}\,d\mu_S(a)}$$
we have, by (II.1) followed by the first bound of this Corollary, with the replacements $W\to\varepsilon W$ and $f\to W$,
$$\|\Omega(W)\|_{\alpha\mathrm F}=\Big\|\int_0^1\big[S(W,\varepsilon W)(c)-S(W,\varepsilon W)(0)\big]\,d\varepsilon\Big\|_{\alpha\mathrm F}\le\int_0^1\tfrac\alpha{\alpha-1}\,\|W\|_{\alpha\mathrm F}\,d\varepsilon=\tfrac\alpha{\alpha-1}\,\|W\|_{\alpha\mathrm F}$$
We end this subsection with a proof of the continuity of the Schwinger functional
$$S(f;W,S)=\frac1{Z(c;W,S)}\int f(c,a)\,e^{W(c,a)}\,d\mu_S(a)\qquad\text{where}\qquad Z(c;W,S)=\int e^{W(c,a)}\,d\mu_S(a)$$
and the renormalization group map
$$\Omega(W,S)(c)=\log\frac1{Z_{W,S}}\int e^{W(c,a)}\,d\mu_S(a)\qquad\text{where}\qquad Z_{W,S}=\int e^{W(0,a)}\,d\mu_S(a)$$
with respect to the interaction $W$ and covariance $S$.

Theorem II.13 Let $\mathrm F,\mathrm D>0$, $0<t,v\le\frac12$ and $\alpha\ge4$. Let $W,V\in(A_C)^0$. If, for all $H,J\in\bigcup_{r\ge0}M_r$,
$$\Big|\int b_H:b_J:_S\,d\mu_S(b)\Big|\le\mathrm F^{|H|+|J|}\qquad\qquad\Big|\int b_H:b_J:_T\,d\mu_T(b)\Big|\le\big(\sqrt t\,\mathrm F\big)^{|H|+|J|}$$
$$\|S\|\le\mathrm F^2\mathrm D\qquad\|T\|\le t\,\mathrm F^2\mathrm D\qquad \mathrm D\|W\|_{(\alpha+2)\mathrm F}\le\tfrac16\qquad \mathrm D\|V\|_{(\alpha+2)\mathrm F}\le\tfrac v6$$
then
$$|||S(f;W+V,S+T)-S(f;W,S)|||_{\alpha\mathrm F}\le 8\,(t+v)\,|||f|||_{\alpha\mathrm F}\qquad\qquad
\big\|\Omega(W+V,S+T)-\Omega(W,S)\big\|_{\alpha\mathrm F}\le \tfrac3{\mathrm D}\,(t+v)$$

Proof: First observe that, for all $|z|\le\frac1t$,
$$\Big|\int b_H:b_J:_{zT}\,d\mu_{zT}(b)\Big|\le\mathrm F^{|H|+|J|}\qquad\text{by Problem II.3.a}$$
$$\Big|\int b_H:b_J:_{S+zT}\,d\mu_{S+zT}(b)\Big|\le(2\mathrm F)^{|H|+|J|}\qquad\text{by Problem II.3.b}$$
$$\|S+zT\|\le2\mathrm F^2\mathrm D\le(2\mathrm F)^2\mathrm D$$
Also, for all $|z'|\le\frac1v$,
$$\mathrm D\|W+z'V\|_{(\alpha+2)\mathrm F}\le\tfrac13$$
Hence, by Corollary II.7, with $\mathrm F$ replaced by $2\mathrm F$, $\alpha$ replaced by $\frac\alpha2$, $W$ replaced by $W+z'V$ and $S$ replaced by $S+zT$,
$$|||S(f;W+z'V,S+zT)|||_{\alpha\mathrm F}\le\tfrac\alpha{\alpha-2}\,|||f|||_{\alpha\mathrm F}\qquad
\|\Omega(W+z'V,S+zT)\|_{\alpha\mathrm F}\le\tfrac\alpha{\alpha-2}\,\|W+z'V\|_{\alpha\mathrm F}\le\tfrac\alpha{\alpha-2}\,\tfrac1{3\mathrm D}\tag{II.2}$$
for all $|z|\le\frac1t$ and $|z'|\le\frac1v$.

The linear map $R(\,\cdot\,;\varepsilon W+z'V,S+zT)$ is, for each $0\le\varepsilon\le1$, a polynomial in $z$ and $z'$. By Theorem II.6, the kernel of $1\!\mathrm l-R$ is trivial for all $0\le\varepsilon\le1$. Hence, by Theorem II.2 and (II.1), both $S(f;W+z'V,S+zT)$ and $\Omega(W+z'V,S+zT)$ are analytic functions of $z$ and $z'$ on $|z|\le\frac1t$, $|z'|\le\frac1v$.

By the Cauchy integral formula, if $f(z)$ is analytic and bounded in absolute value by $M$ on $|z|\le r$, then
$$f'(z)=\frac1{2\pi\imath}\int_{|\zeta|=r}\frac{f(\zeta)}{(\zeta-z)^2}\,d\zeta$$
and, for all $|z|\le\frac r2$, $|f'(z)|\le\frac1{2\pi}\,\frac{M}{(r/2)^2}\,2\pi r=4M\,\frac1r$. Hence, for all $|z|\le\frac1{2t}$, $|z'|\le\frac1{2v}$,
$$\Big|\Big|\Big|\tfrac d{dz}S(f;W+z'V,S+zT)\Big|\Big|\Big|_{\alpha\mathrm F}\le4t\,\tfrac\alpha{\alpha-2}\,|||f|||_{\alpha\mathrm F}\qquad
\Big|\Big|\Big|\tfrac d{dz'}S(f;W+z'V,S+zT)\Big|\Big|\Big|_{\alpha\mathrm F}\le4v\,\tfrac\alpha{\alpha-2}\,|||f|||_{\alpha\mathrm F}$$
$$\Big\|\tfrac d{dz}\Omega(W+z'V,S+zT)\Big\|_{\alpha\mathrm F}\le4t\,\tfrac\alpha{\alpha-2}\,\tfrac1{3\mathrm D}\qquad
\Big\|\tfrac d{dz'}\Omega(W+z'V,S+zT)\Big\|_{\alpha\mathrm F}\le4v\,\tfrac\alpha{\alpha-2}\,\tfrac1{3\mathrm D}$$
By the chain rule, for all $|z|\le1$,
$$\Big|\Big|\Big|\tfrac d{dz}S(f;W+zV,S+zT)\Big|\Big|\Big|_{\alpha\mathrm F}\le4(t+v)\,\tfrac\alpha{\alpha-2}\,|||f|||_{\alpha\mathrm F}\qquad
\Big\|\tfrac d{dz}\Omega(W+zV,S+zT)\Big\|_{\alpha\mathrm F}\le4(t+v)\,\tfrac\alpha{\alpha-2}\,\tfrac1{3\mathrm D}$$
Integrating $z$ from $0$ to $1$,
$$|||S(f;W+V,S+T)-S(f;W,S)|||_{\alpha\mathrm F}\le4(t+v)\,\tfrac\alpha{\alpha-2}\,|||f|||_{\alpha\mathrm F}\qquad
\big\|\Omega(W+V,S+T)-\Omega(W,S)\big\|_{\alpha\mathrm F}\le4(t+v)\,\tfrac\alpha{\alpha-2}\,\tfrac1{3\mathrm D}$$
As $\alpha\ge4$, $\frac\alpha{\alpha-2}\le2$ and the Theorem follows.
II.4 Sample Applications

We now apply the expansion to a few examples. In these examples, the set of Grassmann algebra generators $\{a_1,\cdots,a_n\}$ is replaced by
$$\big\{\psi_{x,\sigma},\ \bar\psi_{x,\sigma}\ \big|\ x\in\mathrm{I\!R}^{d+1},\ \sigma\in\mathcal S\big\}$$
with $\mathcal S$ being a finite set (of spin/colour values). See §I.5. To save writing, we introduce the notation
$$\xi=(x,\sigma,b)\in\mathrm{I\!R}^{d+1}\times\mathcal S\times\{0,1\}\qquad
\int d\xi\,\cdots=\sum_{b\in\{0,1\}}\sum_{\sigma\in\mathcal S}\int d^{d+1}x\,\cdots\qquad
\psi_\xi=\begin{cases}\psi_{x,\sigma}&\text{if } b=0\\ \bar\psi_{x,\sigma}&\text{if } b=1\end{cases}$$
Now elements of the Grassmann algebra have the form
$$f(\psi)=\sum_{r=0}^{\infty}\int d\xi_1\cdots d\xi_r\ f_r(\xi_1,\cdots,\xi_r)\,\psi_{\xi_1}\cdots\psi_{\xi_r}$$
and the norms $\|f_r\|$ and $\|f(\psi)\|_\alpha$ become
$$\|f_r\|=\max_{1\le i\le r}\,\sup_{\xi_i}\int\prod_{\substack{j=1\\ j\ne i}}^{r}d\xi_j\ \big|f_r(\xi_1,\cdots,\xi_r)\big|\qquad\qquad
\|f(\psi)\|_\alpha=\sum_{r=0}^{\infty}\alpha^r\,\|f_r\|$$
When we need a second copy of the generators, we use $\big\{\Psi_\xi\ \big|\ \xi\in\mathrm{I\!R}^{d+1}\times\mathcal S\times\{0,1\}\big\}$ (in place of $\{c_1,\cdots,c_n\}$). Then elements of the Grassmann algebra have the form
$$f(\Psi,\psi)=\sum_{l,r=0}^{\infty}\int d\xi'_1\cdots d\xi'_l\,d\xi_1\cdots d\xi_r\ f_{l,r}(\xi'_1,\cdots,\xi'_l;\xi_1,\cdots,\xi_r)\,\Psi_{\xi'_1}\cdots\Psi_{\xi'_l}\,\psi_{\xi_1}\cdots\psi_{\xi_r}$$
and the norms $\|f_{l,r}\|$ and $\|f(\Psi,\psi)\|_\alpha$ are
$$\|f_{l,r}\|=\max_{1\le i\le l+r}\,\sup_{\xi_i}\int\prod_{\substack{j=1\\ j\ne i}}^{l+r}d\xi_j\ \big|f_{l,r}(\xi_1,\cdots,\xi_l;\xi_{l+1},\cdots,\xi_{l+r})\big|\qquad\qquad
\|f(\Psi,\psi)\|_\alpha=\sum_{l,r=0}^{\infty}\alpha^{l+r}\,\|f_{l,r}\|$$
All of our Grassmann Gaussian integrals will obey
$$\int\psi_{x,\sigma}\psi_{x',\sigma'}\,d\mu_S(\psi)=0\qquad\int\bar\psi_{x,\sigma}\bar\psi_{x',\sigma'}\,d\mu_S(\psi)=0\qquad
\int\psi_{x,\sigma}\bar\psi_{x',\sigma'}\,d\mu_S(\psi)=-\int\bar\psi_{x',\sigma'}\psi_{x,\sigma}\,d\mu_S(\psi)$$
Hence, if $\xi=(x,\sigma,b)$, $\xi'=(x',\sigma',b')$,
$$S(\xi,\xi')=\begin{cases}0&\text{if } b=b'=0\\ C_{\sigma,\sigma'}(x,x')&\text{if } b=0,\ b'=1\\ -C_{\sigma',\sigma}(x',x)&\text{if } b=1,\ b'=0\\ 0&\text{if } b=b'=1\end{cases}
\qquad\text{where}\qquad C_{\sigma,\sigma'}(x,x')=\int\psi_{x,\sigma}\bar\psi_{x',\sigma'}\,d\mu_S(\psi)$$
That our Grassmann algebra is no longer finite dimensional is, in itself, not a big deal. The Grassmann algebras are not the ultimate objects of interest. The ultimate objects of interest are various expectation values. These expectation values are complex numbers that we have chosen to express as the values of Grassmann Gaussian integrals. See (I.3). If the covariances of interest were to satisfy the hypotheses (HG) and (HS), we would be able to easily express the expectation values as limits of integrals over finite dimensional Grassmann algebras using Corollary II.7 and Theorem II.13. The real difficulty is that for many, perhaps most, models of interest, the covariances (also called propagators) do not satisfy (HG) and (HS). So, as explained in §I.5, we express the covariance as a sum of terms, each of which does satisfy the hypotheses. These terms, called single scale covariances, will, in each example, be constructed by substituting a partition of unity of $\mathrm{I\!R}^{d+1}$ (momentum space) into the full covariance. The partition of unity will be constructed using a fixed "scale parameter" $M>1$ and a function $\nu\in C_0^\infty\big([M^{-2},M^2]\big)$ that takes values in $[0,1]$, is identically $1$ on $[M^{-1/2},M^{1/2}]$ and obeys
$$\sum_{j=0}^{\infty}\nu\big(M^{2j}x\big)=1\qquad\text{for } 0<x<1\tag{II.3}$$

Problem II.6 Let $M>1$. Construct a function $\nu\in C_0^\infty\big([M^{-2},M^2]\big)$ that takes values in $[0,1]$, is identically $1$ on $[M^{-1/2},M^{1/2}]$ and obeys $\sum_{j=0}^{\infty}\nu\big(M^{2j}x\big)=1$ for $0<x<1$.
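One standard solution to Problem II.6 is a telescoping (Littlewood–Paley style) construction: pick a smooth cutoff $\varphi$ that is identically $1$ below $M^{1/2}$ and identically $0$ above $M^{3/2}$, and set $\nu(x)=\varphi(x)-\varphi(M^2x)$; the sum over scales then telescopes to $\varphi(x)=1$ for $0<x<1$. A minimal numerical sketch — the choice $M=2$ and the particular bump-function step are illustrative assumptions, not from the text:

```python
import math

M = 2.0  # scale parameter M > 1 (an illustrative choice)

def h(x):
    # C^infinity on IR, vanishing identically for x <= 0
    return math.exp(-1.0 / x) if x > 0 else 0.0

def step(x):
    # smooth step: 0 for x <= 0, 1 for x >= 1
    return h(x) / (h(x) + h(1.0 - x))

def phi(x):
    # smooth cutoff: identically 1 for x <= M^(1/2), identically 0 for x >= M^(3/2)
    a, b = M ** 0.5, M ** 1.5
    return 1.0 - step((x - a) / (b - a))

def nu(x):
    # nu = phi(x) - phi(M^2 x) is supported in [M^(-3/2), M^(3/2)], a subset of
    # [M^(-2), M^2], and is identically 1 on [M^(-1/2), M^(1/2)]
    return phi(x) - phi(M * M * x)

def partition_sum(x, jmax=60):
    # telescopes to phi(x) - phi(M^(2*jmax) x), which is 1 for 0 < x < 1
    return sum(nu(M ** (2 * j) * x) for j in range(jmax))
```

The support here is even a bit smaller than the $[M^{-2},M^2]$ allowed in the problem; any smooth step yields a valid $\nu$.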
Example (Gross–Neveu₂) The propagator for the Gross–Neveu model in two space-time dimensions is
$$C_{\sigma,\sigma'}(x,x')=\int\psi_{x,\sigma}\bar\psi_{x',\sigma'}\,d\mu_S(\psi)
=\int\frac{d^2p}{(2\pi)^2}\,e^{ip\cdot(x'-x)}\ \frac{\not p_{\sigma,\sigma'}+m\,\delta_{\sigma,\sigma'}}{p^2+m^2}$$
where
$$\not p=\begin{pmatrix} ip_0 & p_1\\ p_1 & -ip_0\end{pmatrix}$$
is a $2\times2$ matrix whose rows and columns are indexed by $\sigma\in\{\uparrow,\downarrow\}$. This propagator does not satisfy Hypothesis (HG) for any finite $\mathrm F$. If it did satisfy (HG) for some finite $\mathrm F$, $C_{\sigma,\sigma'}(x,x')=\int\psi_{x,\sigma}\bar\psi_{x',\sigma'}\,d\mu_S(\psi)$ would be bounded by $\mathrm F^2$ for all $x$ and $x'$. This is not the case – it blows up as $x'-x\to0$. Set
$$\nu_j(p)=\begin{cases}\nu\big(\tfrac{M^{2j}}{p^2}\big)&\text{if } j>0\\[2pt] \nu\big(\tfrac1{p^2}\big)&\text{if } j=0,\ |p|\ge1\\[2pt] 1&\text{if } j=0,\ |p|<1\end{cases}$$
Then
$$S(x,y)=\sum_{j=0}^{\infty}S^{(j)}(x,y)$$
with
$$S^{(j)}(\xi,\xi')=\begin{cases}0&\text{if } b=b'=0\\ C^{(j)}_{\sigma,\sigma'}(x,x')&\text{if } b=0,\ b'=1\\ -C^{(j)}_{\sigma',\sigma}(x',x)&\text{if } b=1,\ b'=0\\ 0&\text{if } b=b'=1\end{cases}$$
and
$$C^{(j)}(x,x')=\int\frac{d^2p}{(2\pi)^2}\,e^{ip\cdot(x'-x)}\,\frac{\not p+m}{p^2+m^2}\,\nu_j(p)\tag{II.4}$$
We now check that, for each $0\le j<\infty$, $S^{(j)}$ does satisfy Hypotheses (HG) and (HS). The integrand of $C^{(j)}$ is supported on $M^{j-1}\le|p|\le M^{j+1}$ for $j>0$ and on $|p|\le M$ for $j=0$. This is a region of volume at most $\pi\big(M^{j+1}\big)^2\le\mathrm{const}\,M^{2j}$. By Corollary I.35 and Problem II.7, below, the value of $\mathrm F$ for this propagator is bounded by
$$\mathrm F_j=2\bigg[\int\Big\|\tfrac{\not p+m}{p^2+m^2}\Big\|\,\nu_j(p)\,\tfrac{d^2p}{(2\pi)^2}\bigg]^{1/2}\le C_{\mathrm F}\Big[\tfrac1{M^j}\,M^{2j}\Big]^{1/2}=C_{\mathrm F}\,M^{j/2}$$
for some constant $C_{\mathrm F}$. Here $\big\|\tfrac{\not p+m}{p^2+m^2}\big\|$ is the matrix norm of $\tfrac{\not p+m}{p^2+m^2}$.
Problem II.7 Prove that
$$\Big\|\frac{\not p+m}{p^2+m^2}\Big\|=\frac1{\sqrt{p^2+m^2}}$$
By the following Lemma, the value of $\mathrm D$ for this propagator is bounded by
$$\mathrm D_j=\frac1{M^{2j}}$$
We have increased the value of $C_{\mathrm F}$ in $\mathrm F_j$ in order to avoid having a const in $\mathrm D_j$.
Lemma II.14
$$\sup_{x,\sigma}\ \sum_{\sigma'}\int d^2y\ \big|C^{(j)}_{\sigma,\sigma'}(x,y)\big|\le\mathrm{const}\,\frac1{M^j}$$
Proof: When $j=0$, the Lemma reduces to $\sup_{x,\sigma}\sum_{\sigma'}\int d^2y\,\big|C^{(0)}_{\sigma,\sigma'}(x,y)\big|<\infty$. This is the case, because every matrix element of $\frac{\not p+m}{p^2+m^2}\,\nu_0(p)$ is $C_0^\infty$, so that $C^{(0)}_{\sigma,\sigma'}(x,y)$ is a $C^\infty$ rapidly decaying function of $x-y$. Hence it suffices to consider $j\ge1$. The integrand of (II.4) is supported on a region of volume at most $\pi\big(M^{j+1}\big)^2\le\mathrm{const}\,M^{2j}$ and has every matrix element bounded by
$$\frac{M^{j+1}+m}{M^{2j-2}+m^2}\le\mathrm{const}\,\frac1{M^j}$$
since $M^{j-1}\le|p|\le M^{j+1}$ on the support of $\nu_j(p)$. Hence
$$\sup_{\substack{x,y\\ \sigma,\sigma'}}\big|C^{(j)}_{\sigma,\sigma'}(x,y)\big|\le\int\sup_{\sigma,\sigma'}\Big|\Big(\tfrac{\not p+m}{p^2+m^2}\Big)_{\sigma,\sigma'}\Big|\,\nu_j(p)\,\tfrac{d^2p}{(2\pi)^2}\le\mathrm{const}\,\tfrac1{M^j}\,M^{2j}\le\mathrm{const}\,M^j\tag{II.5}$$
To show that $C^{(j)}_{\sigma,\sigma'}(x,y)$ decays sufficiently quickly in $x-y$, we play the usual integration by parts game.
$$(y-x)^4\,C^{(j)}_{\sigma,\sigma'}(x,y)=\int\frac{d^2p}{(2\pi)^2}\,\Big[\Big(\tfrac{\partial^2}{\partial p_0^2}+\tfrac{\partial^2}{\partial p_1^2}\Big)^2 e^{ip\cdot(y-x)}\Big]\,\Big(\tfrac{\not p+m}{p^2+m^2}\Big)_{\sigma,\sigma'}\nu_j(p)
=\int\frac{d^2p}{(2\pi)^2}\,e^{ip\cdot(y-x)}\,\Big(\tfrac{\partial^2}{\partial p_0^2}+\tfrac{\partial^2}{\partial p_1^2}\Big)^2\Big[\Big(\tfrac{\not p+m}{p^2+m^2}\Big)_{\sigma,\sigma'}\nu_j(p)\Big]$$
after integrating by parts.
Each matrix element of $\frac{\not p+m}{p^2+m^2}$ is a ratio $\frac{P(p)}{Q(p)}$ of two polynomials in $p=(p_0,p_1)$. For any such rational function
$$\frac{\partial}{\partial p_i}\,\frac{P(p)}{Q(p)}=\frac{\frac{\partial P}{\partial p_i}(p)\,Q(p)-P(p)\,\frac{\partial Q}{\partial p_i}(p)}{Q(p)^2}$$
The difference between the degrees of the numerator and denominator of $\frac{\partial}{\partial p_i}\frac{P(p)}{Q(p)}$ obeys
$$\deg\Big(\tfrac{\partial P}{\partial p_i}Q-P\tfrac{\partial Q}{\partial p_i}\Big)-\deg Q^2\le\deg P+\deg Q-1-2\deg Q=\deg P-\deg Q-1$$
Hence
$$\frac{\partial^{\alpha_0}}{\partial p_0^{\alpha_0}}\frac{\partial^{\alpha_1}}{\partial p_1^{\alpha_1}}\Big(\tfrac{\not p+m}{p^2+m^2}\Big)_{\sigma,\sigma'}=\frac{P^{(\alpha_0,\alpha_1)}_{\sigma,\sigma'}(p)}{(p^2+m^2)^{1+\alpha_0+\alpha_1}}$$
with $P^{(\alpha_0,\alpha_1)}_{\sigma,\sigma'}(p)$ a polynomial of degree at most $\deg\,(p^2+m^2)^{1+\alpha_0+\alpha_1}-1-\alpha_0-\alpha_1=1+\alpha_0+\alpha_1$. In words, each $\frac\partial{\partial p_i}$ acting on $\frac{\not p+m}{p^2+m^2}$ increases the difference between the degree of the denominator and the degree of the numerator by one. So does each $\frac\partial{\partial p_i}$ acting on $\nu_j(p)$, provided you count both $M^j$ and $p$ as having degree one. For example,
$$\frac{\partial}{\partial p_0}\,\nu\Big(\tfrac{M^{2j}}{p^2}\Big)=-\frac{2p_0}{p^4}\,M^{2j}\,\nu'\Big(\tfrac{M^{2j}}{p^2}\Big)$$
$$\frac{\partial^2}{\partial p_0^2}\,\nu\Big(\tfrac{M^{2j}}{p^2}\Big)=\Big[-\frac2{p^4}+\frac{8p_0^2}{p^6}\Big]M^{2j}\,\nu'\Big(\tfrac{M^{2j}}{p^2}\Big)+\frac{4p_0^2}{p^8}\,M^{4j}\,\nu''\Big(\tfrac{M^{2j}}{p^2}\Big)$$
In general,
$$\Big(\tfrac{\partial^2}{\partial p_0^2}+\tfrac{\partial^2}{\partial p_1^2}\Big)^2\Big[\tfrac{\not p_{\sigma,\sigma'}+m}{p^2+m^2}\,\nu_j(p)\Big]
=\sum_{\substack{n,\ell\in\mathrm{I\!N}\\ 1\le n+\ell\le4}}\frac{P_{\sigma,\sigma',n,\ell}(p)\,M^{2\ell j}\,Q_{n,\ell}(p)}{(p^2+m^2)^{1+n}\ p^{4\ell+2(4-n-\ell)}}\ \nu^{(\ell)}\Big(\tfrac{M^{2j}}{p^2}\Big)$$
Here $n$ is the number of derivatives that acted on $\frac{\not p_{\sigma,\sigma'}+m}{p^2+m^2}$ and $\ell$ is the number of derivatives that acted on $\nu$. The remaining $4-n-\ell$ derivatives acted on the factors arising from the argument of $\nu$ upon application of the chain rule. The polynomials $P_{\sigma,\sigma',n,\ell}(p)$ and $Q_{n,\ell}(p)$ have degrees at most $1+n$ and $\ell+(4-n-\ell)$, respectively. All together, when you count both $M^j$ and $p$ as having degree one, the degree of the denominator $(p^2+m^2)^{1+n}\,p^{4\ell+2(4-n-\ell)}$, namely $2(1+n)+4\ell+2(4-n-\ell)=10+2\ell$, is at least five larger than the degree of the numerator $P_{\sigma,\sigma',n,\ell}(p)\,M^{2\ell j}\,Q_{n,\ell}(p)$, which is at most $1+n+2\ell+\ell+(4-n-\ell)=5+2\ell$. Recalling that $|p|$ is bounded above and below by $\mathrm{const}\,M^j$ (of course with different constants),
$$\bigg|\frac{P_{\sigma,\sigma',n,\ell}(p)\,M^{2\ell j}\,Q_{n,\ell}(p)}{(p^2+m^2)^{1+n}\,p^{4\ell+2(4-n-\ell)}}\,\nu^{(\ell)}\Big(\tfrac{M^{2j}}{p^2}\Big)\bigg|
\le\mathrm{const}\,\frac{M^{(1+n)j}\,M^{2\ell j}\,M^{(4-n)j}}{M^{2j(1+n)}\,M^{j(8-2n+2\ell)}}=\mathrm{const}\,\frac1{M^{5j}}$$
and
$$\Big|\Big(\tfrac{\partial^2}{\partial p_0^2}+\tfrac{\partial^2}{\partial p_1^2}\Big)^2\,\tfrac{\not p+m}{p^2+m^2}\,\nu_j(p)\Big|\le\mathrm{const}\,\frac1{M^{5j}}$$
on the support of the integrand. The support of $\big(\frac{\partial^2}{\partial p_0^2}+\frac{\partial^2}{\partial p_1^2}\big)^2\frac{\not p+m}{p^2+m^2}\,\nu_j(p)$ is contained in the support of $\nu_j(p)$. So the integrand is still supported in a region of volume $\mathrm{const}\,M^{2j}$ and
$$\sup_{\substack{x,y\\ \sigma,\sigma'}} M^{4j}\,(y-x)^4\,\big|C^{(j)}_{\sigma,\sigma'}(x,y)\big|\le\mathrm{const}\,M^{4j}\,\tfrac1{M^{5j}}\,M^{2j}\le\mathrm{const}\,M^j\tag{II.6}$$
Multiplying the $\frac14$ power of (II.5) by the $\frac34$ power of (II.6) gives
$$\sup_{\substack{x,y\\ \sigma,\sigma'}} M^{3j}\,|y-x|^3\,\big|C^{(j)}_{\sigma,\sigma'}(x,y)\big|\le\mathrm{const}\,M^j\tag{II.7}$$
Adding (II.5) to (II.7) gives
$$\sup_{\substack{x,y\\ \sigma,\sigma'}}\big[1+M^{3j}|y-x|^3\big]\,\big|C^{(j)}_{\sigma,\sigma'}(x,y)\big|\le\mathrm{const}\,M^j$$
Dividing across,
$$\big|C^{(j)}_{\sigma,\sigma'}(x,y)\big|\le\mathrm{const}\,\frac{M^j}{1+M^{3j}|x-y|^3}$$
Integrating,
$$\int d^2y\,\big|C^{(j)}_{\sigma,\sigma'}(x,y)\big|\le\int d^2y\ \mathrm{const}\,\frac{M^j}{1+M^{3j}|x-y|^3}=\mathrm{const}\,\frac1{M^j}\int d^2z\,\frac1{1+|z|^3}\le\mathrm{const}\,\frac1{M^j}$$
We made the change of variables $z=M^j(y-x)$.

To apply Corollary II.7 to this model, we fix some $\alpha\ge2$ and define the norm $\|W\|_j$ of $W(\psi)=\sum_{r>0}\int d\xi_1\cdots d\xi_r\ w_r(\xi_1,\cdots,\xi_r)\,\psi_{\xi_1}\cdots\psi_{\xi_r}$ to be
$$\|W\|_j=\mathrm D_j\,\|W\|_{\alpha\mathrm F_j}=\sum_r\big(\alpha C_{\mathrm F}\big)^r\,M^{j\frac{r-4}2}\,\|w_r\|$$
Let $J>0$ be a cutoff parameter (meaning that, in the end, the model is defined by taking the limit $J\to\infty$) and define, as in §I.5,
$$G_J(\Psi)=\log\frac1{Z_J}\int e^{W(\Psi+\psi)}\,d\mu_{S^{(\le J)}}(\psi)\qquad\text{where}\qquad Z_J=\int e^{W(\psi)}\,d\mu_{S^{(\le J)}}(\psi)$$
and
$$\Omega_j(W)(\Psi)=\log\frac1{Z_{W,S^{(j)}}}\int e^{W(\Psi+\psi)}\,d\mu_{S^{(j)}}(\psi)\qquad\text{where}\qquad Z_{W,S^{(j)}}=\int e^{W(\psi)}\,d\mu_{S^{(j)}}(\psi)$$
Then, by Problem I.16,
$$G_J=\Omega_{S^{(1)}}\circ\Omega_{S^{(2)}}\circ\cdots\circ\Omega_{S^{(J)}}(W)$$
Set
$$W_j=\Omega_{S^{(j)}}\circ\Omega_{S^{(j+1)}}\circ\cdots\circ\Omega_{S^{(J)}}(W)$$
Suppose that we have shown that $\|W_j\|_j\le\frac13$. To integrate out scale $j-1$ we use

Theorem II.15GN Suppose $\alpha\ge2$ and $M\ge\frac\alpha{\alpha-1}\big[2\frac{\alpha+1}\alpha\big]^6$. If $\|W\|_j\le\frac13$ and $w_r$ vanishes for $r\le4$, then $\|\Omega_{j-1}(W)\|_{j-1}\le\|W\|_j$.

Proof:
We first have to relate $\|W(\Psi+\psi)\|_\alpha$ to $\|W(\psi)\|_\alpha$, because we wish to apply Corollary II.7 with $W(c,a)$ replaced by $W(\Psi+\psi)$. To do so, we temporarily revert to the old notation with $c$ and $a$ generators, rather than $\Psi$ and $\psi$ generators. Observe that
$$W(c+a)=\sum_m\sum_{I\in M_m}w_m(I)\,(c+a)_I=\sum_m\sum_{I\in M_m}\sum_{J\subset I}w_m\big(J.(I\setminus J)\big)\,c_J\,a_{I\setminus J}
=\sum_{l,r}\binom{l+r}{l}\sum_{\substack{J\in M_l\\ K\in M_r}}w_{l+r}(J.K)\,c_J\,a_K$$
In passing from the first line to the second line, we used that $w_m(I)$ is antisymmetric under permutation of its arguments. In passing from the second line to the third line, we renamed $I\setminus J=K$. The $\binom{l+r}{l}$ arises because, given two ordered sets $J,K$, there are $\binom{|J|+|K|}{|J|}$ ordered sets $I$ with $J\subset I$, $K=I\setminus J$. Hence
$$\|W(c+a)\|_\alpha=\sum_{l,r}\alpha^{l+r}\binom{l+r}{l}\,\|w_{l+r}\|=\sum_m\alpha^m\,2^m\,\|w_m\|=\|W(a)\|_{2\alpha}$$
Similarly, $\|W(\Psi+\psi)\|_\alpha=\|W(\psi)\|_{2\alpha}$.

To apply Corollary II.7 at scale $j-1$, we need
$$\mathrm D_{j-1}\|W(\Psi+\psi)\|_{(\alpha+1)\mathrm F_{j-1}}=\mathrm D_{j-1}\|W(\psi)\|_{2(\alpha+1)\mathrm F_{j-1}}\le\tfrac13$$
But
$$\mathrm D_{j-1}\|W\|_{2(\alpha+1)\mathrm F_{j-1}}=\sum_r\big[2(\alpha+1)C_{\mathrm F}\big]^r\,M^{(j-1)\frac{r-4}2}\,\|w_r\|
=\sum_{r\ge6}\Big[2\tfrac{\alpha+1}\alpha\Big]^r M^{-\frac{r-4}2}\,\big(\alpha C_{\mathrm F}\big)^r\,M^{j\frac{r-4}2}\,\|w_r\|$$
$$\le\Big[2\tfrac{\alpha+1}\alpha\Big]^6\frac1M\sum_{r\ge6}\big(\alpha C_{\mathrm F}\big)^r\,M^{j\frac{r-4}2}\,\|w_r\|
\le\Big[2\tfrac{\alpha+1}\alpha\Big]^6\frac1M\,\|W\|_j\le\tfrac13$$
as $M>\big[2\frac{\alpha+1}\alpha\big]^6$ and $\|W\|_j\le\frac13$. By Corollary II.7,
$$\|\Omega_{j-1}(W)\|_{j-1}=\mathrm D_{j-1}\|\Omega_{j-1}(W)\|_{\alpha\mathrm F_{j-1}}\le\tfrac\alpha{\alpha-1}\,\mathrm D_{j-1}\|W(\Psi+\psi)\|_{\alpha\mathrm F_{j-1}}
\le\tfrac\alpha{\alpha-1}\Big[2\tfrac{\alpha+1}\alpha\Big]^6\frac1M\,\|W\|_j\le\|W\|_j$$
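The purely arithmetic steps of this proof — bounding the degree-$r$ factor $\big[2\frac{\alpha+1}\alpha\big]^rM^{-(r-4)/2}$ by its $r=6$ value and checking that the resulting contraction constant is at most $1$ — can be verified numerically. A sketch; the choices $\alpha=2$ and $M=1500$ (just above the threshold $\frac\alpha{\alpha-1}\big[2\frac{\alpha+1}\alpha\big]^6=1458$) are illustrative:

```python
alpha = 2.0
M = 1500.0  # just above (alpha/(alpha-1)) * (2*(alpha+1)/alpha)**6 = 1458 for alpha = 2
c = 2.0 * (alpha + 1.0) / alpha  # ratio of the weights 2(alpha+1)C_F and alpha*C_F

# per-degree factor relating D_{j-1}*||W||_{2(alpha+1)F_{j-1}} to ||W||_j,
# for the degrees r >= 6 actually present in W
factors = {r: c ** r * M ** (-(r - 4) / 2.0) for r in range(6, 40, 2)}

# contraction constant of Theorem II.15GN
contraction = (alpha / (alpha - 1.0)) * c ** 6 / M
```

Since $c^2<M$, the factors decrease in $r$, so the $r=6$ term dominates, and the contraction constant is below $1$, as the theorem's hypothesis on $M$ guarantees.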
Theorem II.15GN is just one ingredient used in the construction of the Gross–Neveu₂ model. It basically reduces the problem to the study of the projection
$$PW(\psi)=\sum_{r=2,4}\int d\xi_1\cdots d\xi_r\ w_r(\xi_1,\cdots,\xi_r)\,\psi_{\xi_1}\cdots\psi_{\xi_r}$$
of $W$ onto the part of the Grassmann algebra of degree at most four. A souped up, "renormalized", version of Theorem II.15GN can be used to reduce the problem to the study of the projection $P'W$ of $W$ onto a three dimensional subspace of the range of $P$.
Example (Naive many-fermion₂) The propagator, or covariance, for many-fermion models is
$$C_{\sigma,\sigma'}(x,x')=\delta_{\sigma,\sigma'}\int\frac{d^3k}{(2\pi)^3}\,e^{ik\cdot(x'-x)}\,\frac1{ik_0-e(\mathbf k)}$$
where $k=(k_0,\mathbf k)$ and $e(\mathbf k)$ is the one particle dispersion relation (a generalisation of $\frac{\mathbf k^2}{2m}$) minus the chemical potential (which controls the density of the gas). The subscript on many-fermion₂ signifies that the number of space dimensions is two (i.e. $\mathbf k\in\mathrm{I\!R}^2$, $k\in\mathrm{I\!R}^3$). For pedagogical reasons, I am not using the standard many-body Fourier transform conventions. We assume that $e(\mathbf k)$ is a reasonably smooth function (for example, $C^4$) that has a nonempty, compact, strictly convex zero set, called the Fermi curve and denoted $F$. We further assume that $\nabla e(\mathbf k)$ does not vanish for $\mathbf k\in F$, so that $F$ is itself a reasonably smooth curve. At low temperatures only those momenta with $k_0\approx0$ and $\mathbf k$ near $F$ are important, so we replace the above propagator with
$$C_{\sigma,\sigma'}(x,x')=\delta_{\sigma,\sigma'}\int\frac{d^3k}{(2\pi)^3}\,e^{ik\cdot(x'-x)}\,\frac{U(k)}{ik_0-e(\mathbf k)}$$
The precise ultraviolet cutoff, $U(k)$, shall be chosen shortly. It is a $C_0^\infty$ function which takes values in $[0,1]$, is identically $1$ for $k_0^2+e(\mathbf k)^2\le1$ and vanishes for $k_0^2+e(\mathbf k)^2$ larger than some constant. This covariance does not satisfy Hypothesis (HS) for any finite $\mathrm D$. If it did, $C_{\sigma,\sigma'}(0,x')$ would be $L^1$ in $x'$ and consequently its Fourier transform $\frac{U(k)}{ik_0-e(\mathbf k)}$ would be uniformly bounded. But $\frac{U(k)}{ik_0-e(\mathbf k)}$ blows up at $k_0=0$, $e(\mathbf k)=0$. So we write the covariance as a sum of infinitely many "single scale" covariances, each of which does satisfy (HG) and (HS). This decomposition is implemented through a partition of unity of the set of all $k$'s with $k_0^2+e(\mathbf k)^2\le1$.

We slice momentum space into shells around the Fermi curve. The $j^{\rm th}$ shell is defined to be the support of
$$\nu^{(j)}(k)=\nu\big(M^{2j}(k_0^2+e(\mathbf k)^2)\big)$$
where $\nu$ is the function of (II.3). By construction, $\nu(x)$ vanishes unless $\frac1{M^2}\le x\le M^2$, so the $j^{\rm th}$ shell is a subset of
$$\Big\{k\ \Big|\ \tfrac1{M^{j+1}}\le|ik_0-e(\mathbf k)|\le\tfrac1{M^{j-1}}\Big\}$$
As the scale parameter $M>1$, the shells near the Fermi curve have $j$ near $+\infty$. Setting
$$C^{(j)}_{\sigma,\sigma'}(x,x')=\delta_{\sigma,\sigma'}\int\frac{d^3k}{(2\pi)^3}\,e^{ik\cdot(x'-x)}\,\frac{\nu^{(j)}(k)}{ik_0-e(\mathbf k)}$$
and $U(k)=\sum_{j=0}^{\infty}\nu^{(j)}(k)$, we have
$$C_{\sigma,\sigma'}(x,x')=\sum_{j=0}^{\infty}C^{(j)}_{\sigma,\sigma'}(x,x')$$
The integrand of the propagator $C^{(j)}$ is supported on a region of volume at most $\mathrm{const}\,M^{-2j}$ ($|k_0|\le\frac1{M^{j-1}}$ and, as $|e(\mathbf k)|\le\frac1{M^{j-1}}$ and $\nabla e$ is nonvanishing on $F$, $\mathbf k$ must remain within a distance $\mathrm{const}\,M^{-j}$ of $F$) and is bounded by $M^{j+1}$. By Corollary I.35, the value of $\mathrm F$ for this propagator is bounded by
$$\mathrm F_j=2\bigg[\int\frac{\nu^{(j)}(k)}{|ik_0-e(\mathbf k)|}\,\frac{d^3k}{(2\pi)^3}\bigg]^{1/2}\le C_{\mathrm F}\Big[M^j\,\tfrac1{M^{2j}}\Big]^{1/2}=C_{\mathrm F}\,\frac1{M^{j/2}}\tag{II.8}$$
for some constant $C_{\mathrm F}$. Also
$$\sup_{\substack{x,y\\ \sigma,\sigma'}}\big|C^{(j)}_{\sigma,\sigma'}(x,y)\big|\le\int\frac{\nu^{(j)}(k)}{|ik_0-e(\mathbf k)|}\,\frac{d^3k}{(2\pi)^3}\le\mathrm{const}\,\frac1{M^j}$$
Each derivative $\frac\partial{\partial k_i}$ acting on $\frac{\nu^{(j)}(k)}{ik_0-e(\mathbf k)}$ increases the supremum of its magnitude by a factor of order $M^j$. So the naive argument of Lemma II.14 gives
$$\big|C^{(j)}_{\sigma,\sigma'}(x,y)\big|\le\mathrm{const}\,\frac{1/M^j}{\big[1+M^{-j}|x-y|\big]^4}\ \implies\ \sup_{x,\sigma}\sum_{\sigma'}\int d^3y\,\big|C^{(j)}_{\sigma,\sigma'}(x,y)\big|\le\mathrm{const}\,M^{2j}$$
There is not much point going through this bound in greater detail, because Corollary C.3 gives a better bound. In Appendix C, we express, for any $l_j\in\big[\frac1{M^j},\frac1{M^{j/2}}\big]$, $C^{(j)}_{\sigma,\sigma'}(x,y)$ as a sum of at most $\frac{\mathrm{const}}{l_j}$ terms, each of which is bounded in Corollary C.3. Applying that bound, with $l_j=\frac1{M^{j/2}}$, yields the better bound
$$\sup_{x,\sigma}\sum_{\sigma'}\int d^3y\,\big|C^{(j)}_{\sigma,\sigma'}(x,y)\big|\le\mathrm{const}\,\frac1{l_j}\,M^j\le\mathrm{const}\,M^{3j/2}$$
So the value of $\mathrm D$ for this propagator is bounded by
$$\mathrm D_j=M^{5j/2}$$
This time we define the norm
$$\|W\|_j=\mathrm D_j\,\|W\|_{\alpha\mathrm F_j}=\sum_r\big(\alpha C_{\mathrm F}\big)^r\,M^{-j\frac{r-5}2}\,\|w_r\|\tag{II.9}$$
Again, let $J>0$ be a cutoff parameter and define, as in §I.5,
$$G_J(\Psi)=\log\frac1{Z_J}\int e^{W(\Psi+\psi)}\,d\mu_{S^{(\le J)}}(\psi)\qquad\text{where}\qquad Z_J=\int e^{W(\psi)}\,d\mu_{S^{(\le J)}}(\psi)$$
and
$$\Omega_j(W)(\Psi)=\log\frac1{Z_{W,S^{(j)}}}\int e^{W(\Psi+\psi)}\,d\mu_{S^{(j)}}(\psi)\qquad\text{where}\qquad Z_{W,S^{(j)}}=\int e^{W(\psi)}\,d\mu_{S^{(j)}}(\psi)$$
Then, by Problem I.16,
$$G_J=\Omega_{S^{(J)}}\circ\cdots\circ\Omega_{S^{(1)}}\circ\Omega_{S^{(0)}}(W)$$
If we have integrated out all scales from the ultraviolet cutoff, which in this (infrared) problem is fixed at scale $0$, to $j$ and we have ended up with some interaction that obeys $\|W\|_j\le\frac13$, then we integrate out scale $j+1$ using the following analog of Theorem II.15GN.

Theorem II.15MB1 Suppose $\alpha\ge2$ and $M\ge\big[\frac\alpha{\alpha-1}\big]^2\big[2\frac{\alpha+1}\alpha\big]^{12}$. If $\|W\|_j\le\frac13$ and $w_r$ vanishes for $r<6$, then $\|\Omega_{j+1}(W)\|_{j+1}\le\|W\|_j$.

Proof: To apply Corollary II.7 at scale $j+1$, we need
$$\mathrm D_{j+1}\|W(\Psi+\psi)\|_{(\alpha+1)\mathrm F_{j+1}}=\mathrm D_{j+1}\|W\|_{2(\alpha+1)\mathrm F_{j+1}}\le\tfrac13$$
But
$$\mathrm D_{j+1}\|W\|_{2(\alpha+1)\mathrm F_{j+1}}=\sum_r\big[2(\alpha+1)C_{\mathrm F}\big]^r\,M^{-(j+1)\frac{r-5}2}\,\|w_r\|
=\sum_{r\ge6}\Big[2\tfrac{\alpha+1}\alpha\Big]^r M^{-\frac{r-5}2}\,\big(\alpha C_{\mathrm F}\big)^r\,M^{-j\frac{r-5}2}\,\|w_r\|$$
$$\le\Big[2\tfrac{\alpha+1}\alpha\Big]^6\frac1{M^{1/2}}\sum_{r\ge6}\big(\alpha C_{\mathrm F}\big)^r\,M^{-j\frac{r-5}2}\,\|w_r\|
=\Big[2\tfrac{\alpha+1}\alpha\Big]^6\frac1{M^{1/2}}\,\|W\|_j\le\tfrac13$$
By Corollary II.7,
$$\|\Omega_{j+1}(W)\|_{j+1}=\mathrm D_{j+1}\|\Omega_{j+1}(W)\|_{\alpha\mathrm F_{j+1}}\le\tfrac\alpha{\alpha-1}\,\mathrm D_{j+1}\|W(\Psi+\psi)\|_{\alpha\mathrm F_{j+1}}
\le\tfrac\alpha{\alpha-1}\Big[2\tfrac{\alpha+1}\alpha\Big]^6\frac1{M^{1/2}}\,\|W\|_j\le\|W\|_j$$
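As with Theorem II.15GN, the arithmetic content of this proof — the $r=6$ term dominating the sum and the contraction constant staying below $1$ — can be checked numerically. A sketch; $\alpha=2$ and $M=2.2\times10^6$ (just above the threshold $\big[\frac\alpha{\alpha-1}\big]^2\big[2\frac{\alpha+1}\alpha\big]^{12}\approx2.13\times10^6$) are illustrative choices:

```python
import math

alpha = 2.0
M = 2.2e6  # just above (alpha/(alpha-1))**2 * (2*(alpha+1)/alpha)**12 ~ 2.13e6
c = 2.0 * (alpha + 1.0) / alpha

# per-degree factor relating D_{j+1}*||W||_{2(alpha+1)F_{j+1}} to ||W||_j, r >= 6
factors = [c ** r * M ** (-(r - 5) / 2.0) for r in range(6, 40, 2)]

# contraction constant of Theorem II.15MB1
contraction = (alpha / (alpha - 1.0)) * c ** 6 / math.sqrt(M)
```

The much larger threshold on $M$, compared with the Gross–Neveu₂ case, comes from the weaker per-scale gain $M^{-1/2}$ in place of $M^{-1}$.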
It looks, in Theorem II.15MB1, like five-legged vertices $w_5$ are marginal and all vertices $w_r$ with $r<5$ have to be renormalized. Of course, by evenness, there are no five-legged vertices, so only vertices $w_r$ with $r=2,4$ have to be renormalized. But it still looks, contrary to the behaviour of perturbation theory [FST], like four-legged vertices are worse than marginal. Fortunately, this is not the case. Our bounds can be tightened still further. In the bounds (II.8) and (II.9) the momentum $k$ runs over a shell around the Fermi curve. Effectively, the estimates we have used to count powers of $M^j$ assume that all momenta entering an $r$-legged vertex run independently over the shell. Thus the estimates fail to take into account conservation of momentum. As a simple illustration of this, observe that for the two-legged diagram $B(x,y)=\int d^3z\ C^{(j)}_{\sigma,\sigma}(x,z)\,C^{(j)}_{\sigma,\sigma}(z,y)$, (II.9) yields the bound
$$\sup_x\int d^3y\,|B(x,y)|\le\sup_x\int d^3z\,\big|C^{(j)}_{\sigma,\sigma}(x,z)\big|\ \sup_z\int d^3y\,\big|C^{(j)}_{\sigma,\sigma}(z,y)\big|\le\mathrm{const}\,M^{3j/2}\,M^{3j/2}=\mathrm{const}\,M^{3j}$$
But $B(x,y)$ is the Fourier transform of
$$W(k)=\frac{\nu^{(j)}(k)^2}{[ik_0-e(\mathbf k)]^2}=C^{(j)}(k)\,C^{(j)}(p)\Big|_{p=k}$$
Conservation of momentum forces the momenta in the two lines to be the same. Plugging this $W(k)$ and $l_j=\frac1{M^{j/2}}$ into Corollary C.2 yields
$$\sup_x\int d^3y\,|B(x,y)|\le\mathrm{const}\,\frac1{l_j}\,M^{2j}\le\mathrm{const}\,M^{5j/2}$$
We exploit conservation of momentum by partitioning the Fermi curve into "sectors".
Example (Many-fermion₂ – with sectorization) We start by describing precisely what sectors are, as subsets of momentum space. Let, for $k=(k_0,\mathbf k)$, $k'(\mathbf k)$ be any reasonable "projection" of $\mathbf k$ onto the Fermi curve
$$F=\big\{\mathbf k\in\mathrm{I\!R}^2\ \big|\ e(\mathbf k)=0\big\}$$
In the event that $F$ is a circle of radius $k_F$ centered on the origin, it is natural to choose $k'(\mathbf k)=k_F\frac{\mathbf k}{|\mathbf k|}$. For general $F$, one can always construct, in a tubular neighbourhood of $F$, a $C^\infty$ vector field that is transverse to $F$, and then define $k'(\mathbf k)$ to be the unique point of $F$ that is on the same integral curve of the vector field as $\mathbf k$ is. Let $j>0$ and set
$$\nu^{(\ge j)}(k)=\begin{cases}1&\text{if } k\in F\\ \sum_{i\ge j}\nu^{(i)}(k)&\text{otherwise}\end{cases}$$
Let $I$ be an interval on the Fermi surface $F$. Then
$$s=\big\{k\ \big|\ k'(\mathbf k)\in I,\ k\in\operatorname{supp}\nu^{(\ge j-1)}\big\}$$
is called a sector of length $\operatorname{length}(I)$ at scale $j$. Two different sectors $s$ and $s'$ are called neighbours if $s'\cap s\ne\emptyset$. A sectorization of length $l_j$ at scale $j$ is a set $\Sigma_j$ of sectors of length $l_j$ at scale $j$ that obeys
- the set $\Sigma_j$ of sectors covers the Fermi surface
- each sector in $\Sigma_j$ has precisely two neighbours in $\Sigma_j$, one to its left and one to its right
- if $s,s'\in\Sigma_j$ are neighbours then $\frac1{16}l_j\le\operatorname{length}(s\cap s'\cap F)\le\frac18 l_j$

Observe that there are at most $2\operatorname{length}(F)/l_j$ sectors in $\Sigma_j$. In these notes, we fix $l_j=\frac1{M^{j/2}}$ and a sectorization $\Sigma_j$ at scale $j$.

[Figure: four consecutive overlapping sectors $s_1$, $s_2$, $s_3$, $s_4$ along the Fermi curve $F$]
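The counting in the observation above can be made concrete: placing arcs of length $l_j$ with consecutive centers $\frac{15}{16}l_j$ apart along a closed Fermi curve gives overlaps in the allowed window $[\frac1{16}l_j,\frac18 l_j]$ and at most $2\operatorname{length}(F)/l_j$ sectors. A sketch — the circular Fermi curve, the choices $M=4$, $j=6$, and the uniform spacing are illustrative assumptions, and a careful construction would adjust the wrap-around pair slightly:

```python
import math

M, j = 4.0, 6                # scale parameter and scale (illustrative choices)
l_j = M ** (-j / 2.0)        # sector length l_j = 1/M^(j/2), as fixed in the text
kF = 1.0                     # toy circular Fermi curve of radius kF
length_F = 2.0 * math.pi * kF

# arcs of length l_j whose consecutive centers are 15/16 l_j apart, so that
# adjacent arcs overlap in an arc of length l_j/16, inside [l_j/16, l_j/8]
step = (15.0 / 16.0) * l_j
n_sectors = math.ceil(length_F / step)
overlap = l_j - step
```

With these choices the cover uses about $1.07\operatorname{length}(F)/l_j$ sectors, comfortably below the stated bound $2\operatorname{length}(F)/l_j$.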
Next we describe how we "sectorize" an interaction
$$W_r=\sum_{\substack{\sigma_i\in\{\uparrow,\downarrow\}\\ \kappa_i\in\{0,1\}}}\int\prod_{i=1}^{r}dx_i\ w_r\big((x_1,\sigma_1,\kappa_1),\cdots,(x_r,\sigma_r,\kappa_r)\big)\,\psi_{\sigma_1}(x_1,\kappa_1)\cdots\psi_{\sigma_r}(x_r,\kappa_r)$$
where
$$\psi_{\sigma_i}(x_i)=\psi_{\sigma_i}(x_i,\kappa_i)\big|_{\kappa_i=0}\qquad\qquad\bar\psi_{\sigma_i}(x_i)=\psi_{\sigma_i}(x_i,\kappa_i)\big|_{\kappa_i=1}$$
Let $\mathcal F(r,\Sigma_j)$ denote the space of all translation invariant functions
$$f_r\big((x_1,\sigma_1,\kappa_1,s_1),\cdots,(x_r,\sigma_r,\kappa_r,s_r)\big):\ \big[\mathrm{I\!R}^3\times\{\uparrow,\downarrow\}\times\{0,1\}\times\Sigma_j\big]^r\to\mathbb C$$
whose Fourier transform, $\hat f_r\big((k_1,\sigma_1,\kappa_1,s_1),\cdots,(k_r,\sigma_r,\kappa_r,s_r)\big)$, vanishes unless $k_i\in s_i$. An $f_r\in\mathcal F(r,\Sigma_j)$ is said to be a sectorized representative for $w_r$ if
$$\hat w_r\big((k_1,\sigma_1,\kappa_1),\cdots,(k_r,\sigma_r,\kappa_r)\big)=\sum_{\substack{s_i\in\Sigma_j\\ 1\le i\le r}}\hat f_r\big((k_1,\sigma_1,\kappa_1,s_1),\cdots,(k_r,\sigma_r,\kappa_r,s_r)\big)$$
for all $k_1,\cdots,k_r\in\operatorname{supp}\nu^{(\ge j)}$. It is easy to construct a sectorized representative for $w_r$ by introducing (in momentum space) a partition of unity of $\operatorname{supp}\nu^{(\ge j)}$ subordinate to $\Sigma_j$. Furthermore, if $f_r$ is a sectorized representative for $w_r$, then
$$\int\prod_{i=1}^{r}dx_i\ w_r\big((x_1,\sigma_1,\kappa_1),\cdots,(x_r,\sigma_r,\kappa_r)\big)\,\psi_{\sigma_1}(x_1,\kappa_1)\cdots\psi_{\sigma_r}(x_r,\kappa_r)$$
$$=\sum_{\substack{s_i\in\Sigma_j\\ 1\le i\le r}}\int\prod_{i=1}^{r}dx_i\ f_r\big((x_1,\sigma_1,\kappa_1,s_1),\cdots,(x_r,\sigma_r,\kappa_r,s_r)\big)\,\psi_{\sigma_1}(x_1,\kappa_1)\cdots\psi_{\sigma_r}(x_r,\kappa_r)$$
for all $\psi_{\sigma_i}(x_i,\kappa_i)$ "in the support of" $d\mu_{C^{(\ge j)}}$, i.e. provided $\psi$ is integrated out using a Grassmann Gaussian measure whose propagator is supported in $\operatorname{supp}\nu^{(\ge j)}(k)$. Furthermore, by the momentum space support property of $f_r$,
$$\int\prod_{i=1}^{r}dx_i\ f_r\big((x_1,\sigma_1,\kappa_1,s_1),\cdots\big)\,\psi_{\sigma_1}(x_1,\kappa_1)\cdots\psi_{\sigma_r}(x_r,\kappa_r)
=\int\prod_{i=1}^{r}dx_i\ f_r\big((x_1,\sigma_1,\kappa_1,s_1),\cdots\big)\,\psi_{\sigma_1}(x_1,\kappa_1,s_1)\cdots\psi_{\sigma_r}(x_r,\kappa_r,s_r)$$
where
$$\psi_\sigma(x,b,s)=\int d^3y\ \psi_\sigma(y,b)\,\hat\chi^{(j)}_s(x-y)$$
and $\hat\chi^{(j)}_s$ is the Fourier transform of a function that is identically one on the sector $s$. This function is chosen shortly before Proposition C.1. We have expressed the interaction
$$W_r=\sum_{\substack{s_i\in\Sigma_j\\ \sigma_i\in\{\uparrow,\downarrow\}\\ \kappa_i\in\{0,1\}}}\int\prod_{i=1}^{r}dx_i\ f_r\big((x_1,\sigma_1,\kappa_1,s_1),\cdots,(x_r,\sigma_r,\kappa_r,s_r)\big)\,\prod_{i=1}^{r}\psi_{\sigma_i}(x_i,\kappa_i,s_i)$$
in terms of a sectorized kernel $f_r$ and new "sectorized" fields, $\psi_\sigma(x,\kappa,s)$, that have propagator
$$C^{(j)}_{\sigma,\sigma'}\big((x,s),(y,s')\big)=\int\psi_\sigma(x,0,s)\,\psi_{\sigma'}(y,1,s')\,d\mu_{C^{(j)}}(\psi)
=\delta_{\sigma,\sigma'}\int\frac{d^3k}{(2\pi)^3}\,e^{ik\cdot(y-x)}\,\frac{\nu^{(j)}(k)\,\chi^{(j)}_s(k)\,\chi^{(j)}_{s'}(k)}{ik_0-e(\mathbf k)}$$
The momentum space propagator
$$C^{(j)}_{\sigma,\sigma'}(k,s,s')=\delta_{\sigma,\sigma'}\,\frac{\nu^{(j)}(k)\,\chi^{(j)}_s(k)\,\chi^{(j)}_{s'}(k)}{ik_0-e(\mathbf k)}$$
vanishes unless $s$ and $s'$ are equal or neighbours, is supported in a region of volume $\mathrm{const}\,l_j\frac1{M^{2j}}$ and has supremum bounded by $\mathrm{const}\,M^j$. By an easy variant of Corollary I.35, the value of $\mathrm F$ for this propagator is bounded by
$$\mathrm F_j\le C_{\mathrm F}\Big[\tfrac1{M^{2j}}\,M^j\,l_j\Big]^{1/2}=C_{\mathrm F}\,\sqrt{\tfrac{l_j}{M^j}}$$
for some constant $C_{\mathrm F}$. By Corollary C.3,
$$\sup_{x,\sigma,s}\ \sum_{\sigma',s'}\int d^3y\ \big|C^{(j)}_{\sigma,\sigma'}\big((x,s),(y,s')\big)\big|\le\mathrm{const}\,M^j$$
so the value of $\mathrm D$ for this propagator is bounded by
$$\mathrm D_j=\frac1{l_j}\,M^{2j}$$
We are now almost ready to define the norm on interactions that replaces the unsectorized norm $\|W\|_j=\mathrm D_j\|W\|_{\alpha\mathrm F_j}$ of the last example. We define a norm on $\mathcal F(r,\Sigma_j)$ by
$$\|f\|=\max_{1\le i\le r}\ \max_{x_i,\sigma_i,\kappa_i,s_i}\ \sum_{\substack{\sigma_k,\kappa_k,s_k\\ k\ne i}}\int\prod_{\ell\ne i}dx_\ell\ \big|f\big((x_1,\sigma_1,\kappa_1,s_1),\cdots,(x_r,\sigma_r,\kappa_r,s_r)\big)\big|$$
and for any translation invariant function
$$w_r\big((x_1,\sigma_1,\kappa_1),\cdots,(x_r,\sigma_r,\kappa_r)\big):\ \big[\mathrm{I\!R}^3\times\{\uparrow,\downarrow\}\times\{0,1\}\big]^r\to\mathbb C$$
we define
$$\|w_r\|_{\Sigma_j}=\inf\big\{\,\|f\|\ \big|\ f\in\mathcal F(r,\Sigma_j)\ \text{a sectorized representative for}\ w_r\,\big\}$$
The sectorized norm on interactions is
$$\|W\|_{\alpha,j}=\mathrm D_j\sum_r\big(\alpha\mathrm F_j\big)^r\,\|w_r\|_{\Sigma_j}=\sum_r\big(\alpha C_{\mathrm F}\big)^r\,l_j^{\frac{r-2}2}\,M^{-j\frac{r-4}2}\,\|w_r\|_{\Sigma_j}$$
Proposition II.16 (Change of Sectorization) Let j 0 > j ≥ 0. There is a constant
CS , independent of M, j and j 0 , such that for all r ≥ 4
l r−3 kwr kΣj kwr kΣj 0 ≤ CS l j0 j
Proof: The spin indices \sigma_i and bar/unbar indices \kappa_i play no role, so we suppress them. Let \epsilon > 0 and choose f_r \in F(r,\Sigma_j) such that

w_r(k_1,\cdots,k_r) = \sum_{\substack{s_i\in\Sigma_j\\ 1\le i\le r}} f_r\big((k_1,s_1),\cdots,(k_r,s_r)\big)

for all k_1,\cdots,k_r in the support of \nu^{(\ge j)} and \|w_r\|_{\Sigma_j} \ge \|f_r\| - \epsilon. Let

1 = \sum_{s'\in\Sigma_{j'}} \chi_{s'}(k')

be a partition of unity of the Fermi curve F subordinate to the set \{\, s'\cap F \mid s'\in\Sigma_{j'} \,\} of intervals that obeys

\sup_{k'} \big| \partial_{k'}^m \chi_{s'} \big| \le \frac{const_m}{l_{j'}^m}

Fix a function \varphi \in C_0^\infty\big([0,2)\big), independent of j, j' and M, which takes values in [0,1] and which is identically 1 for 0 \le x \le 1. Set

\varphi_{j'}(k) = \varphi\big( M^{2(j'-1)}\, [k_0^2 + e(\mathbf k)^2] \big)

Observe that \varphi_{j'} is identically one on the support of \nu^{(\ge j')} and is supported in the support of \nu^{(\ge j'-1)}. Define g_r \in F(r,\Sigma_{j'}) by

g_r\big((k_1,s'_1),\cdots,(k_r,s'_r)\big) = \sum_{\substack{s_\ell\in\Sigma_j\\ 1\le\ell\le r}} f_r\big((k_1,s_1),\cdots,(k_r,s_r)\big)\ \prod_{m=1}^r \chi_{s'_m}(k_m)\,\varphi_{j'}(k_m) = \sum_{\substack{s_\ell\cap s'_\ell\ne\emptyset\\ 1\le\ell\le r}} f_r\big((k_1,s_1),\cdots,(k_r,s_r)\big)\ \prod_{m=1}^r \chi_{s'_m}(k_m)\,\varphi_{j'}(k_m)
Clearly

w_r(k_1,\cdots,k_r) = \sum_{\substack{s'_\ell\in\Sigma_{j'}\\ 1\le\ell\le r}} g_r\big((k_1,s'_1),\cdots,(k_r,s'_r)\big)

for all k_\ell in the support of \nu^{(\ge j')}. Define

Mom_i(s') = \Big\{\, (s'_1,\cdots,s'_r) \in \Sigma_{j'}^r \ \Big|\ s'_i = s' \text{ and there exist } k_\ell \in s'_\ell,\ 1\le\ell\le r, \text{ such that } \sum_\ell (-1)^\ell k_\ell = 0 \,\Big\}

Here, I am assuming, without loss of generality, that the even (respectively, odd) numbered legs of w_r are hooked to \psi's (respectively \bar\psi's). Then

\|g_r\| = \max_{1\le i\le r}\ \sup_{x_i\in IR^3,\ s'\in\Sigma_{j'}}\ \sum_{Mom_i(s')} \int \prod_{\ell\ne i} dx_\ell\ \Big| g_r\big((x_1,s'_1),\cdots,(x_r,s'_r)\big) \Big|
Fix any 1 \le i \le r, s' \in \Sigma_{j'} and x_i \in IR^3. Then

\sum_{Mom_i(s')} \int \prod_{\ell\ne i} dx_\ell\ \Big| g_r\big((x_1,s'_1),\cdots,(x_r,s'_r)\big) \Big| \le \sum_{Mom_i(s')}\ \sum_{\substack{s_1,\cdots,s_r\\ s_\ell\cap s'_\ell\ne\emptyset}} \int \prod_{\ell\ne i} dx_\ell\ \Big| f_r\big((x_1,s_1),\cdots,(x_r,s_r)\big) \Big|\ \max_{s''\in\Sigma_{j'}} \big\| \hat\chi_{s''} * \hat\varphi_{j'} \big\|^r

By Proposition C.1, with j = j' and \varphi^{(j)} = \hat\varphi_{j'}, \max_{s''\in\Sigma_{j'}} \|\hat\chi_{s''} * \hat\varphi_{j'}\| is bounded by a constant independent of M, j' and l_{j'}. Observe that

\sum_{Mom_i(s')}\ \sum_{\substack{s_1,\cdots,s_r\\ s_\ell\cap s'_\ell\ne\emptyset}} \int \prod_{\ell\ne i} dx_\ell\ \big| f_r\big((x_1,s_1),\cdots,(x_r,s_r)\big) \big| \le \sum_{\substack{s_1,\cdots,s_r\\ s_i\cap s'\ne\emptyset}}\ \sum_{\substack{Mom_i(s')\\ s_\ell\cap s'_\ell\ne\emptyset,\ 1\le\ell\le r}} \int \prod_{\ell\ne i} dx_\ell\ \big| f_r\big((x_1,s_1),\cdots,(x_r,s_r)\big) \big|

I will not prove the fact that, for any fixed s_1,\cdots,s_r \in \Sigma_j, there are at most \big( C_S'\,\frac{l_j}{l_{j'}} \big)^{r-3} elements of Mom_i(s') obeying s_\ell \cap s'_\ell \ne \emptyset for all 1 \le \ell \le r, but I will try to motivate it below. As there are at most two sectors s \in \Sigma_j that intersect s',

\sum_{\substack{s_1,\cdots,s_r\\ s_i\cap s'\ne\emptyset}}\ \sum_{\substack{Mom_i(s')\\ s_\ell\cap s'_\ell\ne\emptyset,\ 1\le\ell\le r}} \int \prod_{\ell\ne i} dx_\ell\ \big| f_r\big((x_1,s_1),\cdots,(x_r,s_r)\big) \big| \le 2 \Big( C_S'\,\frac{l_j}{l_{j'}} \Big)^{r-3} \sup_{s\in\Sigma_j}\ \sum_{\substack{s_1,\cdots,s_r\\ s_i=s}} \int \prod_{\ell\ne i} dx_\ell\ \big| f_r\big((x_1,s_1),\cdots,(x_r,s_r)\big) \big| \le 2 \Big( C_S'\,\frac{l_j}{l_{j'}} \Big)^{r-3} \|f_r\|

and

\|w_r\|_{\Sigma_{j'}} \le \|g_r\| \le 2\, \max_{s''\in\Sigma_{j'}} \big\| \hat\chi_{s''} * \hat\varphi_{j'} \big\|^r \Big( C_S'\,\frac{l_j}{l_{j'}} \Big)^{r-3} \|f_r\| \le \Big( C_S\,\frac{l_j}{l_{j'}} \Big)^{r-3} \big( \|w_r\|_{\Sigma_j} + \epsilon \big)

with C_S = 2 \max_{s''\in\Sigma_{j'}} \|\hat\chi_{s''} * \hat\varphi_{j'}\|^4\, C_S'. (For the last inequality, observe that r \le 4(r-3) and 2 \le 2^{r-3} for r \ge 4.) Since \epsilon > 0 was arbitrary, the Proposition follows.
Now, I will try to motivate the fact that, for any fixed s_1,\cdots,s_r \in \Sigma_j, there are at most \big( C_S'\,\frac{l_j}{l_{j'}} \big)^{r-3} elements of Mom_i(s') obeying s_\ell \cap s'_\ell \ne \emptyset for all 1 \le \ell \le r. We may assume that i = 1. Then s'_1 must be s'. Denote by I_\ell the interval on the Fermi curve F that has length l_j + 2l_{j'} and is centered on s_\ell \cap F. If s' \in \Sigma_{j'} intersects s_\ell, then s' \cap F is contained in I_\ell. Every sector in \Sigma_{j'} contains an interval of F of length \frac34 l_{j'} that does not intersect any other sector in \Sigma_{j'}. At most \big[ \frac{l_j+2l_{j'}}{\frac34 l_{j'}} \big] of these "hard core" intervals can be contained in I_\ell. Thus there are at most \big[ \frac43\,\frac{l_j}{l_{j'}} + 3 \big]^{r-3} choices for s'_2,\cdots,s'_{r-2}.

Fix s'_1, s'_2,\cdots,s'_{r-2}. Once s'_{r-1} is chosen, s'_r is essentially uniquely determined by conservation of momentum. But the desired bound on Mom_i(s') demands more. It says, roughly speaking, that both s'_{r-1} and s'_r are essentially uniquely determined. As k_\ell runs over s'_\ell for 1 \le \ell \le r-2, the sum \sum_{\ell=1}^{r-2} (-1)^\ell k_\ell runs over a small set centered on some point p. In order for (s'_1,\cdots,s'_r) to be in Mom_1(s'), there must exist k_{r-1} \in s'_{r-1} \cap F and k_r \in s'_r \cap F with k_r - k_{r-1} very close to p. But k_r - k_{r-1} is a secant joining two points of the Fermi curve F. We have assumed that F is convex. Consequently, for any given p \ne 0 in IR^2 there exist at most two pairs (k',q') \in F^2 with k' - q' = p. So, if p is not near the origin, s'_{r-1} and s'_r are almost uniquely determined. If p is close to zero, then \sum_{\ell=1}^{r-2} (-1)^\ell k_\ell must be close to zero and the number of allowed s'_1, s'_2,\cdots,s'_{r-2} is reduced.
Theorem II.15MB2 Suppose \alpha \ge 2 and M \ge \big( \frac{\alpha}{\alpha-1} \big)^2 \big( 2C_S\,\frac{\alpha+1}{\alpha} \big)^{12}. If \|W\|_{\alpha,j} \le \frac13 and w_r vanishes for r \le 4, then \|\Omega_{j+1}(W)\|_{\alpha,j+1} \le \|W\|_{\alpha,j}.

Proof: We first verify that \|W(\Psi+\psi)\|_{\alpha+1,j+1} \le \frac13.

\|W(\Psi+\psi)\|_{\alpha+1,j+1} = \|W\|_{2(\alpha+1),j+1} = \sum_r \big( 2(\alpha+1)C_F \big)^r\, l_{j+1}^{(r-2)/2}\, M^{-(j+1)\frac{r-4}{2}}\, \|w_r\|_{\Sigma_{j+1}}

\le \sum_{r\ge6} \Big( 2\,\frac{\alpha+1}{\alpha} \Big)^r \Big( C_S\,\frac{l_j}{l_{j+1}} \Big)^{r-3} \Big( \frac{l_{j+1}}{l_j} \Big)^{\frac{r-2}{2}} M^{-\frac{r-4}{2}}\ (\alpha C_F)^r\, l_j^{(r-2)/2}\, M^{-j\frac{r-4}{2}}\, \|w_r\|_{\Sigma_j}

\le \sum_{r\ge6} \Big( 2C_S\,\frac{\alpha+1}{\alpha} \Big)^r \Big( \frac{l_j}{l_{j+1}} \Big)^{\frac{r-4}{2}} M^{-\frac{r-4}{2}}\ (\alpha C_F)^r\, l_j^{(r-2)/2}\, M^{-j\frac{r-4}{2}}\, \|w_r\|_{\Sigma_j}

= \sum_{r\ge6} \Big( 2C_S\,\frac{\alpha+1}{\alpha} \Big)^r M^{-\frac{r}{4}(1-\frac4r)}\ (\alpha C_F)^r\, l_j^{(r-2)/2}\, M^{-j\frac{r-4}{2}}\, \|w_r\|_{\Sigma_j}

\le \Big( 2C_S\,\frac{\alpha+1}{\alpha} \Big)^6 \frac{1}{M^{1/2}}\, \|W\|_{\alpha,j} \le \frac13

In the third line we used Proposition II.16, with j' = j+1, together with C_S^{r-3} \le C_S^r; in the fourth, that l_j = \frac{1}{M^{j/2}} in this example, so that \frac{l_j}{l_{j+1}} = M^{1/2}; in the last, that M \ge \big( 2C_S\,\frac{\alpha+1}{\alpha} \big)^{12} and \|W\|_{\alpha,j} \le \frac13. By Corollary II.7,

\|\Omega_{j+1}(W)\|_{\alpha,j+1} \le \frac{\alpha}{\alpha-1}\, \|W(\Psi+\psi)\|_{\alpha,j+1} \le \frac{\alpha}{\alpha-1} \Big( 2C_S\,\frac{\alpha+1}{\alpha} \Big)^6 \frac{1}{M^{1/2}}\, \|W\|_{\alpha,j} \le \|W\|_{\alpha,j}
Appendix A: Infinite Dimensional Grassmann Algebras
To generalize the discussion of §I to the infinite dimensional case we need to add topology. We start with a vector space V that is an \ell^1 space. This is not the only possibility. See, for example, [Be]. Let I be any countable set. We now generate a Grassmann algebra from the vector space

V = \ell^1(I) = \Big\{\, \alpha : I \to C \ \Big|\ \sum_{i\in I} |\alpha_i| < \infty \,\Big\}

Equipping V with the norm \|\alpha\| = \sum_{i\in I} |\alpha_i| turns it into a Banach space. The algebra will again be an \ell^1 space. The index set will be {\cal I}, the (again countable) set of all finite subsets of I, including the empty set. The Grassmann algebra, with coefficients in C, generated by V is

A(I) = \ell^1({\cal I}) = \Big\{\, \alpha : {\cal I} \to C \ \Big|\ \sum_{I\in{\cal I}} |\alpha_I| < \infty \,\Big\}

Clearly A = A(I) is a Banach space with norm \|\alpha\| = \sum_{I\in{\cal I}} |\alpha_I|. It is also an algebra under the multiplication

(\alpha\beta)_I = \sum_{J\subset I} sgn(J, I\setminus J)\ \alpha_J\, \beta_{I\setminus J}

The sign is defined as follows. Fix any ordering of I and view every I \in {\cal I} as being listed in that order. Then sgn(J, I\setminus J) is the sign of the permutation that reorders (J, I\setminus J) to I. The choice of ordering of I is arbitrary because the map \{\alpha_I\} \mapsto \{sgn(I)\,\alpha_I\}, with sgn(I) being the sign of the permutation that reorders I according to the reordering of I, is an isometric isomorphism. The following bound shows that multiplication is everywhere defined and continuous.

\|\alpha\beta\| = \sum_{I\in{\cal I}} \big| (\alpha\beta)_I \big| = \sum_{I\in{\cal I}} \Big| \sum_{J\subset I} sgn(J,I\setminus J)\, \alpha_J \beta_{I\setminus J} \Big| \le \sum_{I\in{\cal I}} \sum_{J\subset I} |\alpha_J|\, |\beta_{I\setminus J}| \le \|\alpha\|\, \|\beta\| \qquad (A.1)

Hence A(I) is a Banach algebra with identity 1l_I = \delta_{I,\emptyset}. In other words, 1l is the function on {\cal I} that takes the value one on I = \emptyset and the value zero on every I \ne \emptyset.
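The multiplication rule and its sign convention are easy to realize concretely. In the sketch below (an illustration, not notation from the text) an element of the algebra is stored as a dict mapping sorted index tuples I to coefficients \alpha_I, and sgn(J, I\setminus J) is computed by counting inversions of the concatenated tuple:

```python
def sgn(J, K):
    """Sign of the permutation that reorders the concatenation (J, K)
    into increasing order; J and K are disjoint increasing tuples."""
    seq = J + K
    inversions = sum(1 for a in range(len(seq))
                     for b in range(a + 1, len(seq)) if seq[a] > seq[b])
    return -1 if inversions % 2 else 1

def gmul(alpha, beta):
    """(alpha beta)_I = sum over J subset of I of sgn(J, I\\J) alpha_J beta_{I\\J}.
    Elements of the algebra are dicts {sorted index tuple: coefficient}."""
    out = {}
    for J, aJ in alpha.items():
        for K, bK in beta.items():
            if set(J) & set(K):          # repeated generator: the term vanishes
                continue
            I = tuple(sorted(J + K))
            out[I] = out.get(I, 0) + sgn(J, K) * aJ * bK
    return {I: c for I, c in out.items() if c != 0}
```

With this encoding the generator a_i is {(i,): 1}, and one checks directly that a_1 a_2 = -a_2 a_1 and a_i a_i = 0.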
Define, for each i \in I, a_i to be the element of A(I) that takes the value 1 on I = \{i\} and zero otherwise. Also define, for each I \in {\cal I}, a_I to be the element of A(I) that takes the value 1 on I and zero otherwise. Then

a_I = \prod_{i\in I} a_i

where the product is in the order of the ordering of I and

\alpha = \sum_{I\in{\cal I}} \alpha_I\, a_I

If f : C \to C is any function that is defined and analytic in a neighbourhood of 0, then the power series f(\alpha) converges for all \alpha \in A(I) with \|\alpha\| strictly smaller than the radius of convergence of f since, by (A.1),

\|f(\alpha)\| = \Big\| \sum_{n=0}^\infty \tfrac1{n!}\, f^{(n)}(0)\, \alpha^n \Big\| \le \sum_{n=0}^\infty \tfrac1{n!}\, \big|f^{(n)}(0)\big|\, \|\alpha\|^n

If f is entire, like the exponential of any polynomial, then f(\alpha) is defined on all of A(I). The following problems give several easy generalizations of the above construction.
Problem A.1 Let I be any ordered countable set and {\cal I} the set of all finite subsets of I (including the empty set). Each I \in {\cal I} inherits an ordering from I. Let w : I \to (0,\infty) be any strictly positive function on I and set

W_I = \prod_{i\in I} w_i

with the convention W_\emptyset = 1. Define

V = \ell^1(I, w) = \Big\{\, \alpha : I \to C \ \Big|\ \sum_{i\in I} w_i\, |\alpha_i| < \infty \,\Big\}

and "the Grassmann algebra generated by V"

A(I, w) = \ell^1({\cal I}, W) = \Big\{\, \alpha : {\cal I} \to C \ \Big|\ \sum_{I\in{\cal I}} W_I\, |\alpha_I| < \infty \,\Big\}

The multiplication is (\alpha\beta)_I = \sum_{J\subset I} sgn(J, I\setminus J)\, \alpha_J \beta_{I\setminus J} where sgn(J, I\setminus J) is the sign of the permutation that reorders (J, I\setminus J) to I. The norm \|\alpha\| = \sum_{I\in{\cal I}} W_I |\alpha_I| turns A(I,w) into a Banach space.

a) Show that \|\alpha\beta\| \le \|\alpha\|\, \|\beta\|.

b) Show that if f : C \to C is any function that is defined and analytic in a neighbourhood of 0, then the power series f(\alpha) = \sum_{n=0}^\infty \frac1{n!} f^{(n)}(0)\, \alpha^n converges for all \alpha \in A with \|\alpha\| smaller than the radius of convergence of f.

c) Prove that A_f(I) = \big\{\, \alpha : {\cal I} \to C \ \big|\ \alpha_I = 0 \text{ for all but finitely many } I \,\big\} is a dense subalgebra of A(I, w).
Problem A.2 Let I be any ordered countable set and {\cal I} the set of all finite subsets of I. Let

S = \big\{\, \alpha : {\cal I} \to C \,\big\}

be the set of all sequences indexed by {\cal I}. Observe that our standard product (\alpha\beta)_I = \sum_{J\subset I} sgn(J, I\setminus J)\, \alpha_J \beta_{I\setminus J} is well-defined on S, since for each I \in {\cal I}, \sum_{J\subset I} is a finite sum. We now define, for each integer n, a norm on (a subset of) S by

\|\alpha\|_n = \sum_{I\in{\cal I}} 2^{n|I|}\, |\alpha_I|

It is defined for all \alpha \in S for which the series converges. Observe that this is precisely the norm of Problem A.1 with w_i = 2^n for all i \in I. Also observe that, if m < n, then \|\alpha\|_m \le \|\alpha\|_n. Define

A_\cap = \big\{\, \alpha \in S \ \big|\ \|\alpha\|_n < \infty \text{ for all } n \in ZZ \,\big\}
A_\cup = \big\{\, \alpha \in S \ \big|\ \|\alpha\|_n < \infty \text{ for some } n \in ZZ \,\big\}

a) Prove that if \alpha, \beta \in A_\cap then \alpha\beta \in A_\cap.

b) Prove that if \alpha, \beta \in A_\cup then \alpha\beta \in A_\cup.

c) Prove that if f(z) is an entire function and \alpha \in A_\cap, then f(\alpha) \in A_\cap.
d) Prove that if f(z) is analytic at the origin and \alpha \in A_\cup has |\alpha_\emptyset| strictly smaller than the radius of convergence of f, then f(\alpha) \in A_\cup.

Before moving on to integration, we look at some examples of Grassmann algebras generated by ordinary functions or distributions on IR^d. In these algebras we can have elements like

A(\bar\psi,\psi) = -\sum_{\sigma\in S} \int \frac{d^{d+1}k}{(2\pi)^{d+1}}\ \Big( ik_0 - \frac{\mathbf k^2}{2m} - \mu \Big)\, \bar\psi_{k,\sigma}\,\psi_{k,\sigma} - \frac{\lambda}{2} \sum_{\sigma,\sigma'\in S} \int \prod_{i=1}^4 \frac{d^{d+1}k_i}{(2\pi)^{d+1}}\ (2\pi)^{d+1}\delta(k_1+k_2-k_3-k_4)\ \bar\psi_{k_1,\sigma}\psi_{k_3,\sigma}\, \hat u(k_1-k_3)\, \bar\psi_{k_2,\sigma'}\psi_{k_4,\sigma'}

from §I.5, that look like they are in an algebra with uncountable dimension. But they really aren't.

Example A.1 (Functions and distributions on IR^d/LZZ^d.) There is a natural way to express the space of smooth functions on the torus IR^d/LZZ^d (i.e. smooth functions on IR^d that are periodic with respect to LZZ^d) as a space of sequences: Fourier series. The same is true for the space of distributions on IR^d/LZZ^d. By definition, a distribution on IR^d/LZZ^d is a map

f : C^\infty\big( IR^d/LZZ^d \big) \to C, \qquad h \mapsto \langle f, h \rangle

that is linear in h and is continuous in the sense that there exist constants C_f \in IR, \nu_f \in IN such that

\big| \langle f, h \rangle \big| \le C_f \sup_{x\in IR^d/LZZ^d} \Big| \prod_{j=1}^d \Big( 1 - \frac{d^2}{dx_j^2} \Big)^{\nu_f} h(x) \Big| \qquad (A.2)

for all h \in C^\infty. Each L^1 function f(x) on IR^d/LZZ^d is identified with the distribution

\langle f, h \rangle = \int_{IR^d/LZZ^d} dx\ f(x)\, h(x) \qquad (A.3)

which has C_f = \|f\|_{L^1}, \nu_f = 0. The delta "function" supported at x is really the distribution

\langle \delta_x, h \rangle = h(x)
and has C_{\delta_x} = 1, \nu_{\delta_x} = 0. Let \gamma \in ZZ. We now define a Grassmann algebra A_\gamma(IR^d/LZZ^d), \gamma \in ZZ. It is the Grassmann algebra of Problem A.1 with index set

I = \tfrac{2\pi}{L} ZZ^d

and weight function

w_p^\gamma = \prod_{j=1}^d \big( 1 + p_j^2 \big)^\gamma

To each distribution f, we associate the sequence

\tilde f_p = \big\langle f,\ e^{-i\langle p,x\rangle} \big\rangle

indexed by I. In the event that f is given by integration against an L^1 function, then

\tilde f_p = \int_{IR^d/LZZ^d} dx\ f(x)\, e^{-i\langle p,x\rangle}

is the usual Fourier coefficient. For example, let q \in \tfrac{2\pi}{L}ZZ^d and f(x) = \tfrac1{L^d}\, e^{i\langle q,x\rangle}. Then

\tilde f_p = \begin{cases} 1 & \text{if } p = q \\ 0 & \text{if } p \ne q \end{cases}

and

\sum_{p\in\frac{2\pi}{L}ZZ^d} w_p^\gamma\, |\tilde f_p| = w_q^\gamma

is finite for all \gamma. Call this sequence (the sequence whose p-th entry is one when p = q and zero otherwise) \psi_q. We have just shown that \psi_q \in V_\gamma(IR^d/LZZ^d) \subset A_\gamma(IR^d/LZZ^d) for all \gamma \in ZZ. Define, for each x \in IR^d/LZZ^d,

\psi(x) = \frac1{L^d} \sum_{q\in\frac{2\pi}{L}ZZ^d} e^{i\langle q,x\rangle}\, \psi_q

The p-th entry in the sequence for \psi(x) is

\frac1{L^d} \sum_q e^{i\langle q,x\rangle}\, \delta_{p,q} = \frac1{L^d}\, e^{i\langle p,x\rangle}

As

\sum_{p\in\frac{2\pi}{L}ZZ^d} w_p^\gamma

converges for all \gamma < -\frac12, \psi(x) \in V_\gamma(IR^d/LZZ^d) \subset A_\gamma(IR^d/LZZ^d) for all \gamma < -\frac12. By (A.2), for each distribution f, there are constants C_f and \nu_f such that

|\tilde f_p| = \big| \big\langle f, e^{-i\langle p,x\rangle} \big\rangle \big| \le C_f \sup_{x\in IR^d/LZZ^d} \Big| \prod_{j=1}^d \Big( 1 - \frac{d^2}{dx_j^2} \Big)^{\nu_f} e^{-i\langle p,x\rangle} \Big| = C_f \prod_{j=1}^d \big( 1 + p_j^2 \big)^{\nu_f}

Because

\sum_{p\in\frac{2\pi}{L}ZZ^d} \prod_{j=1}^d \big( 1 + p_j^2 \big)^\gamma

converges for all \gamma < -\frac12, the sequence \{\, \tilde f_p \mid p \in \tfrac{2\pi}{L}ZZ^d \,\} is in V_\gamma(IR^d/LZZ^d) = \ell^1(I, w^\gamma) \subset A_\gamma(IR^d/LZZ^d) for all \gamma < -\nu_f - \frac12. Thus, every distribution is in V_\gamma for some \gamma (often negative). On the other hand, by Problem A.4 below, a distribution is given by integration against a C^\infty function (i.e. is of the form (A.3) for some periodic C^\infty function f(x)) if and only if \tilde f_p decays faster than the inverse of any polynomial in p, and then it (or rather \tilde f) is in V_\gamma for every \gamma.
Problem A.3
a) Let f(x) \in C^\infty\big( IR^d/LZZ^d \big). Define

\tilde f_p = \int_{IR^d/LZZ^d} dx\ f(x)\, e^{-i\langle p,x\rangle}

Prove that for every \gamma \in IN, there is a constant C_{f,\gamma} such that

\big| \tilde f_p \big| \le C_{f,\gamma} \prod_{j=1}^d \big( 1 + p_j^2 \big)^{-\gamma}

for all p \in \tfrac{2\pi}{L} ZZ^d.

b) Let f be a distribution on IR^d/LZZ^d. For any h \in C^\infty\big( IR^d/LZZ^d \big), set

h_R(x) = \frac1{L^d} \sum_{\substack{p\in\frac{2\pi}{L}ZZ^d\\ |p|\le R}} e^{i\langle p,x\rangle}\, \tilde h_p \qquad \text{where } \tilde h_p = \int_{IR^d/LZZ^d} dx\ h(x)\, e^{-i\langle p,x\rangle}

Prove that

\langle f, h \rangle = \lim_{R\to\infty} \langle f, h_R \rangle
Problem A.4 Let f be a distribution on IR^d/LZZ^d. Suppose that, for each \gamma \in ZZ, there is a constant C_{f,\gamma} such that

\big| \big\langle f, e^{-i\langle p,x\rangle} \big\rangle \big| \le C_{f,\gamma} \prod_{j=1}^d \big( 1 + p_j^2 \big)^{-\gamma} \qquad \text{for all } p \in \tfrac{2\pi}{L} ZZ^d

Prove that there is a C^\infty function F(x) on IR^d/LZZ^d such that

\langle f, h \rangle = \int_{IR^d/LZZ^d} dx\ F(x)\, h(x)
Example A.2 (Functions and distributions on IR^d). Example A.1 exploited the basis \big\{\, e^{i\langle p,x\rangle} \mid p \in \tfrac{2\pi}{L}ZZ^d \,\big\} of L^2\big( IR^d/LZZ^d \big). This is precisely the basis of eigenfunctions of \prod_{j=1}^d \big( 1 - \frac{d^2}{dx_j^2} \big). If instead we use the basis of L^2(IR^d) given by the eigenfunctions of \prod_{j=1}^d \big( x_j^2 - \frac{d^2}{dx_j^2} \big) we get a very useful identification between tempered distributions and functions on the countable index set

I = \big\{\, i = (i_1,\cdots,i_d) \ \big|\ i_j \in ZZ^{\ge0} \,\big\}

that we now briefly describe. For more details see [RS, Appendix to V.3]. Define the differential operators

A = \tfrac1{\sqrt2}\Big( x + \tfrac{d}{dx} \Big) \qquad A^\dagger = \tfrac1{\sqrt2}\Big( x - \tfrac{d}{dx} \Big)

and the Hermite functions

h_\ell(x) = \begin{cases} \frac1{\pi^{1/4}}\, e^{-\frac12 x^2} & \ell = 0 \\ \frac1{\sqrt{\ell!}}\, (A^\dagger)^\ell h_0(x) & \ell > 0 \end{cases}

for \ell \in ZZ^{\ge0}.

Problem A.5 Prove that
a) AA^\dagger f = A^\dagger A f + f
b) A h_0 = 0
c) A^\dagger A h_\ell = \ell\, h_\ell
d) \langle h_\ell, h_{\ell'} \rangle = \delta_{\ell,\ell'}. Here \langle f, g \rangle = \int \bar f(x)\, g(x)\, dx.
e) x^2 - \frac{d^2}{dx^2} = 2 A^\dagger A + 1
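The ladder-operator definition can be tested numerically. From h_{\ell+1} = \frac1{\sqrt{\ell+1}} A^\dagger h_\ell one derives the standard three-term recurrence h_{\ell+1}(x) = \sqrt{2/(\ell+1)}\, x\, h_\ell(x) - \sqrt{\ell/(\ell+1)}\, h_{\ell-1}(x), and orthonormality (part d) can then be checked with a simple Riemann sum; the grid parameters below are ad hoc choices, adequate because the integrand decays like a Gaussian.

```python
import math

def hermite_functions(nmax, x):
    """Values h_0(x), ..., h_nmax(x) of the L^2-normalized Hermite functions,
    via h_{l+1} = sqrt(2/(l+1)) x h_l - sqrt(l/(l+1)) h_{l-1},
    which follows from h_{l+1} = A^dagger h_l / sqrt(l+1)."""
    h = [math.pi ** -0.25 * math.exp(-x * x / 2.0)]
    if nmax >= 1:
        h.append(math.sqrt(2.0) * x * h[0])      # h_1 = A^dagger h_0
    for l in range(1, nmax):
        h.append(math.sqrt(2.0 / (l + 1)) * x * h[l]
                 - math.sqrt(l / (l + 1.0)) * h[l - 1])
    return h

def inner(m, n, L=10.0, N=4000):
    """<h_m, h_n> by a midpoint Riemann sum on [-L, L]."""
    dx = 2 * L / N
    s = 0.0
    for k in range(N):
        x = -L + (k + 0.5) * dx
        h = hermite_functions(max(m, n), x)
        s += h[m] * h[n] * dx
    return s
```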
Define the multidimensional Hermite functions

h_i(x) = \prod_{j=1}^d h_{i_j}(x_j)

for i \in I. They form an orthonormal basis for L^2(IR^d) and obey

\prod_{j=1}^d \Big( x_j^2 - \frac{d^2}{dx_j^2} \Big)^\gamma h_i = \prod_{j=1}^d \big( 1 + 2i_j \big)^\gamma\, h_i

By definition, Schwartz space S(IR^d) is the set of C^\infty functions on IR^d all of whose derivatives decay faster than any polynomial. That is, a function f(x) is in S(IR^d) if it is C^\infty and

\sup_x \Big| \prod_{j=1}^d \Big( x_j^2 - \frac{d^2}{dx_j^2} \Big)^\gamma f(x) \Big| < \infty
for all \gamma \in ZZ^{\ge0}. A tempered distribution on IR^d is a map

f : S(IR^d) \to C, \qquad h \mapsto \langle f, h \rangle

that is linear in h and is continuous in the sense that there exist constants C \in IR, \gamma \in IN such that

\big| \langle f, h \rangle \big| \le C \sup_{x\in IR^d} \Big| \prod_{j=1}^d \Big( x_j^2 - \frac{d^2}{dx_j^2} \Big)^\gamma h(x) \Big|

To each tempered distribution we associate the function

\breve f_i = \langle f, h_i \rangle

on I. In the event that the distribution f is given by integration against an L^2 function, that is,

\langle f, h \rangle = \int F(x)\, h(x)\, d^dx \qquad (A.4)

for some L^2 function, F(x), then

\breve f_i = \int F(x)\, h_i(x)\, d^dx

The distribution f is Schwartz class (meaning that one may choose the F(x) of (A.4) in S(IR^d)) if and only if \breve f_i decays faster than any polynomial in i. For any distribution, \breve f_i is bounded by some polynomial in i. So, if we define

w_i^\gamma = \prod_{j=1}^d \big( 1 + 2i_j \big)^\gamma

then every Schwartz class function (or rather its \breve f) is in V_\gamma = \ell^1(I, w^\gamma) for every \gamma and every tempered distribution is in V_\gamma for some \gamma. The Grassmann algebra generated by V_\gamma is called A_\gamma(IR^d). In particular, since the Hermite functions are continuous and uniformly bounded [AS, p.787], every delta function \delta(x-x_0) and indeed every finite measure has |\breve f_i| \le const. Thus, since \sum_{i\in ZZ^{\ge0}} (1+i)^\gamma converges for all \gamma < -1, all finite measures are in V_\gamma for all \gamma < -1. The sequence representation of Schwartz class functions and of tempered distributions is discussed in more detail in [RS, Appendix to V.3].
Problem A.6 Show that the constant function f = 1 is in Vγ for all γ < −1. Hint: the
Fourier transform of h` is (−i)` h` .
Infinite Dimensional Grassmann Integrals

Once again, let I be any countable set, {\cal I} the set of all finite subsets of I, V = \ell^1(I) and A = \ell^1({\cal I}). We now define some linear functionals on A. Of course the infinite dimensional analogue of \int \cdot\ da_D \cdots da_1 will not make sense. But we can define the analogue of the Grassmann Gaussian integral, \int \cdot\ d\mu_S, at least for suitable S.

Let S : I \times I \to C be an infinite matrix. Recall that, for each i \in I, a_i is the element of A that takes the value 1 on I = \{i\} and zero otherwise, and, for each I \in {\cal I}, a_I is the element of A that takes the value 1 on I and zero otherwise. By analogy with the finite dimensional case, we wish to define a linear functional on A by

\int \prod_{j=1}^p a_{i_j}\ d\mu_S = Pf\big[ S_{i_j,i_k} \big]_{1\le j,k\le p}

We now find conditions under which this defines a bounded linear functional. Any A \in A
can be written in the form

A = \sum_{p=0}^\infty \sum_{\substack{I\subset I\\ |I|=p}} \alpha_I\, a_I

We want the integral of this A to be

\int A\ d\mu_S = \sum_{p=0}^\infty \sum_{\substack{I\subset I\\ |I|=p}} \alpha_I \int a_I\, d\mu_S = \sum_{p=0}^\infty \sum_{\substack{I\subset I\\ |I|=p}} \alpha_I\ Pf\big[ S_{I_j,I_k} \big]_{1\le j,k\le p} \qquad (A.5)

where I = \{I_1,\cdots,I_p\} with I_1 < \cdots < I_p in the ordering of I. We now hypothesise that there is a number C_S such that

\sup_{\substack{I\subset I\\ |I|=p}} \Big| Pf\big[ S_{I_j,I_k} \big]_{1\le j,k\le p} \Big| \le C_S^p \qquad (A.6)

Such a bound is the conclusion of the following simple Lemma. The bound proven in this Lemma is not tight enough to be of much use in quantum field theory applications. A much better bound for those applications is proven in Corollary I.35.

Lemma A.3 Let S be a bounded linear operator on \ell^2(I). Then

\sup_{\substack{I\subset I\\ |I|=p}} \Big| Pf\big[ S_{I_j,I_k} \big]_{1\le j,k\le p} \Big| \le \|S\|^{p/2}
Proof: Recall that Hadamard's inequality (Problem I.24) bounds a determinant by the product of the lengths of its columns. The k-th column of the matrix \big[ S_{I_j,I_k} \big]_{1\le j,k\le p} has length

\sqrt{ \sum_{j=1}^p |S_{I_j,I_k}|^2 } \le \sqrt{ \sum_{j\in I} |S_{j,I_k}|^2 }

But the vector \big( S_{j,I_k} \big)_{j\in I} is precisely the image under S of the I_k-th standard unit basis vector, \big( \delta_{j,I_k} \big)_{j\in I}, in \ell^2(I) and hence has length at most \|S\|. Hence

\Big| \det\big[ S_{I_j,I_k} \big]_{1\le j,k\le p} \Big| \le \prod_{k=1}^p \sqrt{ \sum_{j\in I} |S_{j,I_k}|^2 } \le \|S\|^p

By Proposition I.18.d,

\Big| Pf\big[ S_{I_j,I_k} \big]_{1\le j,k\le p} \Big| = \Big| \det\big[ S_{I_j,I_k} \big]_{1\le j,k\le p} \Big|^{1/2} \le \|S\|^{p/2}
Under the hypothesis (A.6), the integral (A.5) is bounded by

\Big| \int A\ d\mu_S \Big| \le \sum_{p=0}^\infty \sum_{\substack{I\in{\cal I}\\ |I|=p}} |\alpha_I|\, C_S^p \le \|A\| \qquad (A.7)

provided C_S \le 1. Thus, if C_S \le 1, the map A \in A \mapsto \int A\ d\mu_S is a bounded linear map with norm at most one.

The requirement that C_S be less than one is not crucial. Define, for each n \in IN, the Grassmann algebra

A_n = \Big\{\, \alpha : {\cal I} \to C \ \Big|\ \|\alpha\|_n = \sum_{I\in{\cal I}} 2^{n|I|}\, |\alpha_I| < \infty \,\Big\}

If 2^n \ge C_S, then (A.7) shows that A \in A_n \mapsto \int A\ d\mu_S is a bounded linear functional on A_n. Alternatively, we can view \int \cdot\ d\mu_S as an unbounded linear functional on A with domain of definition the dense subalgebra A_n of A.
Appendix B: Pfaffians
Definition B.1 Let S = (S_{ij}) be a complex n \times n matrix with n = 2m even. By definition, the Pfaffian Pf(S) of S is

Pf(S) = \frac1{2^m m!} \sum_{i_1,\cdots,i_n=1}^n \varepsilon_{i_1\cdots i_n}\, S_{i_1i_2} \cdots S_{i_{n-1}i_n} \qquad (B.1)

where

\varepsilon_{i_1\cdots i_n} = \begin{cases} 1 & \text{if } i_1,\cdots,i_n \text{ is an even permutation of } 1,\cdots,n \\ -1 & \text{if } i_1,\cdots,i_n \text{ is an odd permutation of } 1,\cdots,n \\ 0 & \text{if } i_1,\cdots,i_n \text{ are not all distinct} \end{cases}

By convention, the Pfaffian of a matrix of odd order is zero.
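Definition (B.1) can be implemented verbatim by summing over all permutations. The brute-force sketch below is only practical for small n; the test values come from Problems B.2 and B.3 below.

```python
from itertools import permutations

def epsilon(seq):
    """Levi-Civita symbol: the sign of seq as a permutation of 1..n."""
    if len(set(seq)) != len(seq):
        return 0
    inv = sum(1 for a in range(len(seq))
              for b in range(a + 1, len(seq)) if seq[a] > seq[b])
    return -1 if inv % 2 else 1

def pfaffian(S):
    """Pf(S) computed directly from (B.1); S is an n x n list of lists."""
    n = len(S)
    if n % 2:
        return 0                     # odd order: Pf = 0 by convention
    m = n // 2
    total = 0
    for p in permutations(range(1, n + 1)):
        term = epsilon(p)
        for a in range(0, n, 2):
            term *= S[p[a] - 1][p[a + 1] - 1]
        total += term
    norm = 2 ** m
    for k in range(2, m + 1):
        norm *= k                    # norm = 2^m m!
    return total / norm
```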
Problem B.1 Let T = (T_{ij}) be a complex n \times n matrix with n = 2m even and let S = \frac12\big( T - T^t \big) be its skew symmetric part. Prove that Pf\,T = Pf\,S.
Problem B.2 Let

S = \begin{pmatrix} 0 & S_{12} \\ S_{21} & 0 \end{pmatrix}

with S_{21} = -S_{12} \in C. Show that Pf\,S = S_{12}.
For the rest of this Appendix, we assume that S is a skew symmetric matrix. Let

P_m = \Big\{\, \{(k_1,\ell_1),\cdots,(k_m,\ell_m)\} \ \Big|\ \{k_1,\ell_1,\cdots,k_m,\ell_m\} = \{1,\cdots,2m\} \,\Big\}

be the set of all partitions of \{1,\cdots,2m\} into m disjoint ordered pairs. Observe that the expression \varepsilon_{k_1\ell_1\cdots k_m\ell_m}\, S_{k_1\ell_1} \cdots S_{k_m\ell_m} is invariant under permutations of the pairs in the partition \{(k_1,\ell_1),\cdots,(k_m,\ell_m)\}, since \varepsilon_{k_{\pi(1)}\ell_{\pi(1)}\cdots k_{\pi(m)}\ell_{\pi(m)}} = \varepsilon_{k_1\ell_1\cdots k_m\ell_m} for any permutation \pi. Thus,

Pf(S) = \frac1{2^m} \sum_{P\in P_m} \varepsilon_{k_1\ell_1\cdots k_m\ell_m}\, S_{k_1\ell_1} \cdots S_{k_m\ell_m} \qquad (B.1')

Let

P_m^< = \Big\{\, \{(k_1,\ell_1),\cdots,(k_m,\ell_m)\} \in P_m \ \Big|\ k_i < \ell_i \text{ for all } 1\le i\le m \,\Big\}

Because S is skew symmetric, \varepsilon_{k_1\ell_1\cdots k_i\ell_i\cdots k_m\ell_m}\, S_{k_i\ell_i} = \varepsilon_{k_1\ell_1\cdots \ell_ik_i\cdots k_m\ell_m}\, S_{\ell_ik_i} so that

Pf(S) = \sum_{P\in P_m^<} \varepsilon_{k_1\ell_1\cdots k_m\ell_m}\, S_{k_1\ell_1} \cdots S_{k_m\ell_m} \qquad (B.1'')
Problem B.3 Let \alpha_1,\cdots,\alpha_r be complex numbers and let S be the 2r \times 2r skew symmetric matrix

S = \bigoplus_{m=1}^r \begin{pmatrix} 0 & \alpha_m \\ -\alpha_m & 0 \end{pmatrix}

All matrix elements of S are zero, except for r 2 \times 2 blocks running down the diagonal. For example, if r = 2,

S = \begin{pmatrix} 0 & \alpha_1 & 0 & 0 \\ -\alpha_1 & 0 & 0 & 0 \\ 0 & 0 & 0 & \alpha_2 \\ 0 & 0 & -\alpha_2 & 0 \end{pmatrix}

Prove that Pf(S) = \alpha_1 \alpha_2 \cdots \alpha_r.
Proposition B.2 Let S = (S_{ij}) be a skew symmetric matrix of even order n = 2m.

a) Let \pi be a permutation and set S^\pi = (S_{\pi(i)\pi(j)}). Then,

Pf(S^\pi) = sgn(\pi)\, Pf(S)

b) For all 1 \le k \ne \ell \le n, let M_{k\ell} be the matrix obtained from S by deleting rows k and \ell and columns k and \ell. Then,

Pf(S) = \sum_{\ell=1}^n sgn(k-\ell)\, (-1)^{k+\ell}\, S_{k\ell}\, Pf(M_{k\ell})

In particular,

Pf(S) = \sum_{\ell=2}^n (-1)^\ell\, S_{1\ell}\, Pf(M_{1\ell})
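The second formula of part b) gives a practical recursive algorithm. The sketch below expands along the first row and, as a cross-check, verifies Pf(S)^2 = \det(S) (Proposition B.5 below) with a hand-rolled Laplace determinant; the sample matrix in the test is arbitrary.

```python
def pf(S):
    """Pf(S) = sum_{l=2}^{n} (-1)^l S_{1l} Pf(M_{1l}), expanding along row 1."""
    n = len(S)
    if n % 2:
        return 0                             # odd order: Pf = 0
    if n == 0:
        return 1
    total = 0
    for l in range(1, n):                    # 0-based column l <-> 1-based l+1
        keep = [r for r in range(n) if r not in (0, l)]
        minor = [[S[a][b] for b in keep] for a in keep]
        total += (-1) ** (l + 1) * S[0][l] * pf(minor)
    return total

def det(A):
    """Laplace expansion along the first row (fine for small matrices)."""
    n = len(A)
    if n == 0:
        return 1
    return sum((-1) ** j * A[0][j] *
               det([row[:j] + row[j + 1:] for row in A[1:]]) for j in range(n))
```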
Proof: a) Recall that \pi is a fixed permutation of 1,\cdots,n. Making the change of summation variables j_1 = \pi(i_1),\cdots,j_n = \pi(i_n),

Pf(S^\pi) = \frac1{2^m m!} \sum_{i_1,\cdots,i_n=1}^n \varepsilon_{i_1\cdots i_n}\, S_{\pi(i_1)\pi(i_2)} \cdots S_{\pi(i_{n-1})\pi(i_n)}
= \frac1{2^m m!} \sum_{j_1,\cdots,j_n=1}^n \varepsilon_{\pi^{-1}(j_1)\cdots\pi^{-1}(j_n)}\, S_{j_1j_2} \cdots S_{j_{n-1}j_n}
= sgn(\pi^{-1})\, \frac1{2^m m!} \sum_{j_1,\cdots,j_n=1}^n \varepsilon_{j_1\cdots j_n}\, S_{j_1j_2} \cdots S_{j_{n-1}j_n}
= sgn(\pi^{-1})\, Pf(S) = sgn(\pi)\, Pf(S)
b) Fix 1 \le k \le n. Let P = \{(k_1,\ell_1),\cdots,(k_m,\ell_m)\} be a partition in P_m. We may assume, by reindexing the pairs, that k = k_1 or k = \ell_1. Then,

Pf(S) = \frac1{2^m} \sum_{\substack{P\in P_m\\ k_1=k}} \varepsilon_{k\ell_1\cdots k_m\ell_m}\, S_{k\ell_1} \cdots S_{k_m\ell_m} + \frac1{2^m} \sum_{\substack{P\in P_m\\ \ell_1=k}} \varepsilon_{k_1k\cdots k_m\ell_m}\, S_{k_1k} \cdots S_{k_m\ell_m}

By antisymmetry and a change of summation variable,

\sum_{\substack{P\in P_m\\ \ell_1=k}} \varepsilon_{k_1k\cdots k_m\ell_m}\, S_{k_1k} \cdots S_{k_m\ell_m} = \sum_{\substack{P\in P_m\\ k_1=k}} \varepsilon_{k\ell_1\cdots k_m\ell_m}\, S_{k\ell_1} \cdots S_{k_m\ell_m}

Thus,

Pf(S) = \frac{2}{2^m} \sum_{\substack{P\in P_m\\ k_1=k}} \varepsilon_{k\ell_1\cdots k_m\ell_m}\, S_{k\ell_1} \cdots S_{k_m\ell_m} = \frac{2}{2^m} \sum_{\ell=1}^n\ \sum_{\substack{P\in P_m\\ k_1=k,\ \ell_1=\ell}} \varepsilon_{k\,\ell\, k_2\ell_2\cdots k_m\ell_m}\, S_{k\ell} \cdots S_{k_m\ell_m}

Extracting S_{k\ell} from the inner sum,

Pf(S) = \sum_{\ell=1}^{k-1} S_{k\ell}\ \frac{2}{2^m} \sum_{\substack{P\in P_m\\ k_1=k,\ \ell_1=\ell}} \varepsilon_{k\,\ell\, k_2\ell_2\cdots k_m\ell_m}\, S_{k_2\ell_2} \cdots S_{k_m\ell_m} + \sum_{\ell=k+1}^{n} S_{k\ell}\ \frac{2}{2^m} \sum_{\substack{P\in P_m\\ k_1=k,\ \ell_1=\ell}} \varepsilon_{k\,\ell\, k_2\ell_2\cdots k_m\ell_m}\, S_{k_2\ell_2} \cdots S_{k_m\ell_m}

The following Lemma implies that,

Pf(S) = \sum_{\ell=1}^{k-1} S_{k\ell}\, (-1)^{k+\ell}\, Pf(M_{k\ell}) + \sum_{\ell=k+1}^{n} S_{k\ell}\, (-1)^{k+\ell+1}\, Pf(M_{k\ell})
Lemma B.3 For all 1 \le k \ne \ell \le n, let M_{k\ell} be the matrix obtained from S by deleting rows k and \ell and columns k and \ell. Then,

Pf(M_{k\ell}) = sgn(k-\ell)\, \frac{(-1)^{k+\ell}}{2^{m-1}} \sum_{\substack{P\in P_m\\ k_1=k,\ \ell_1=\ell}} \varepsilon_{k\,\ell\, k_2\ell_2\cdots k_m\ell_m}\, S_{k_2\ell_2} \cdots S_{k_m\ell_m}

Proof: For all 1 \le k < \ell \le n,

M_{k\ell} = \big[ S_{i'j'} \big]_{1\le i,j\le n-2}

where, for each 1 \le i \le n-2,

i' = \begin{cases} i & \text{if } 1\le i\le k-1 \\ i+1 & \text{if } k\le i\le \ell-2 \\ i+2 & \text{if } \ell-1\le i\le n-2 \end{cases}

By definition,

Pf(M_{k\ell}) = \frac1{2^{m-1}(m-1)!} \sum_{1\le i_1,\cdots,i_{n-2}\le n-2} \varepsilon_{i_1\cdots i_{n-2}}\, S_{i'_1i'_2} \cdots S_{i'_{n-3}i'_{n-2}}

We have

\varepsilon_{i_1\cdots i_{n-2}} = (-1)^{k+\ell-1}\, \varepsilon_{k\,\ell\, i'_1\cdots i'_{n-2}}

for all 1 \le i_1,\cdots,i_{n-2} \le n-2. It follows that

Pf(M_{k\ell}) = \frac{(-1)^{k+\ell-1}}{2^{m-1}(m-1)!} \sum_{1\le i_1,\cdots,i_{n-2}\le n-2} \varepsilon_{k\,\ell\, i'_1\cdots i'_{n-2}}\, S_{i'_1i'_2} \cdots S_{i'_{n-3}i'_{n-2}}
= \frac{(-1)^{k+\ell-1}}{2^{m-1}(m-1)!} \sum_{1\le k_2,\ell_2,\cdots,k_m,\ell_m\le n} \varepsilon_{k\,\ell\, k_2\ell_2\cdots k_m\ell_m}\, S_{k_2\ell_2} \cdots S_{k_m\ell_m}
= \frac{(-1)^{k+\ell-1}}{2^{m-1}} \sum_{\substack{P\in P_m\\ k_1=k,\ \ell_1=\ell}} \varepsilon_{k\,\ell\, k_2\ell_2\cdots k_m\ell_m}\, S_{k_2\ell_2} \cdots S_{k_m\ell_m}

If, on the other hand, k > \ell,

Pf(M_{k\ell}) = Pf(M_{\ell k}) = \frac{(-1)^{k+\ell+1}}{2^{m-1}} \sum_{\substack{P\in P_m\\ k_1=\ell,\ \ell_1=k}} \varepsilon_{\ell\,k\, k_2\ell_2\cdots k_m\ell_m}\, S_{k_2\ell_2} \cdots S_{k_m\ell_m} = \frac{(-1)^{k+\ell}}{2^{m-1}} \sum_{\substack{P\in P_m\\ k_1=k,\ \ell_1=\ell}} \varepsilon_{k\,\ell\, k_2\ell_2\cdots k_m\ell_m}\, S_{k_2\ell_2} \cdots S_{k_m\ell_m}
Proposition B.4 Let

S = \begin{pmatrix} 0 & C \\ -C^t & 0 \end{pmatrix}

where C = (c_{ij}) is a complex m \times m matrix. Then,

Pf(S) = (-1)^{\frac12 m(m-1)}\, \det(C)

Proof: The proof is by induction on m \ge 1. If m = 1, then, by Problem B.2,

Pf\begin{pmatrix} 0 & S_{12} \\ S_{21} & 0 \end{pmatrix} = S_{12}

Suppose m > 1. The matrix elements S_{ij}, i,j = 1,\cdots,n = 2m, of S are

S_{ij} = \begin{cases} 0 & \text{if } 1\le i,j\le m \\ c_{i\,j-m} & \text{if } 1\le i\le m \text{ and } m+1\le j\le n \\ -c_{j\,i-m} & \text{if } m+1\le i\le n \text{ and } 1\le j\le m \\ 0 & \text{if } m+1\le i,j\le n \end{cases}

For each k = 1,\cdots,m, we have

M_{1\,k+m} = \begin{pmatrix} 0 & C^{1,k} \\ -(C^{1,k})^t & 0 \end{pmatrix}

where C^{1,k} is the matrix of order m-1 obtained from the matrix C by deleting row 1 and column k. It now follows from Proposition B.2 b) and our induction hypothesis that

Pf(S) = \sum_{\ell=m+1}^n (-1)^\ell\, S_{1\ell}\, Pf(M_{1\ell})
= \sum_{\ell=m+1}^n (-1)^\ell\, c_{1\,\ell-m}\, Pf(M_{1\ell})
= \sum_{k=1}^m (-1)^{k+m}\, c_{1k}\, Pf(M_{1\,k+m})
= \sum_{k=1}^m (-1)^{k+m}\, c_{1k}\, (-1)^{\frac12(m-1)(m-2)}\, \det C^{1,k}

Collecting factors of minus one,

Pf(S) = (-1)^{m+1}\, (-1)^{\frac12(m-1)(m-2)} \sum_{k=1}^m (-1)^{k+1}\, c_{1k}\, \det C^{1,k}
= (-1)^{\frac12 m(m-1)} \sum_{k=1}^m (-1)^{k+1}\, c_{1k}\, \det C^{1,k}
= (-1)^{\frac12 m(m-1)}\, \det(C)
Proposition B.5 Let S = (S_{ij}) be a skew symmetric matrix of even order n = 2m.

a) For any matrix B of even order n = 2m,

Pf\big( B^t S B \big) = \det(B)\, Pf(S)

b) Pf(S)^2 = \det(S)

Proof: a) The matrix element of B^tSB with indices i_1, i_2 is \sum_{j_1,j_2} b_{j_1i_1} S_{j_1j_2} b_{j_2i_2}. Hence

Pf\big( B^tSB \big) = \frac1{2^mm!} \sum_{i_1,\cdots,i_n} \varepsilon_{i_1\cdots i_n} \sum_{j_1,\cdots,j_n} b_{j_1i_1} \cdots b_{j_ni_n}\, S_{j_1j_2} \cdots S_{j_{n-1}j_n}
= \frac1{2^mm!} \sum_{j_1,\cdots,j_n} \Big( \sum_{i_1,\cdots,i_n} \varepsilon_{i_1\cdots i_n}\, b_{j_1i_1} \cdots b_{j_ni_n} \Big)\, S_{j_1j_2} \cdots S_{j_{n-1}j_n}

The expression

\sum_{i_1,\cdots,i_n} \varepsilon_{i_1\cdots i_n}\, b_{j_1i_1} \cdots b_{j_ni_n}

is the determinant of the matrix whose \ell-th row is the j_\ell-th row of B. If any two of j_1,\cdots,j_n are equal, this determinant is zero. Otherwise it is \varepsilon_{j_1\cdots j_n}\, \det(B). Thus

\sum_{i_1,\cdots,i_n} \varepsilon_{i_1\cdots i_n}\, b_{j_1i_1} \cdots b_{j_ni_n} = \varepsilon_{j_1\cdots j_n}\, \det(B)

and

Pf\big( B^tSB \big) = \det(B)\, \frac1{2^mm!} \sum_{j_1,\cdots,j_n} \varepsilon_{j_1\cdots j_n}\, S_{j_1j_2} \cdots S_{j_{n-1}j_n} = \det(B)\, Pf(S)

b) It is enough to prove the identity for real, nonsingular matrices, since Pf(S)^2 and \det(S) are polynomials in the matrix elements S_{ij}, i,j = 1,\cdots,n, of S. So, let S be real and nonsingular and, as in Lemma I.10 (or Problem I.9), let R be a real orthogonal matrix such that R^tSR = T with

T = \bigoplus_{i=1}^m \begin{pmatrix} 0 & \alpha_i \\ -\alpha_i & 0 \end{pmatrix}

for some real numbers \alpha_1,\cdots,\alpha_m. Then, \det R = \pm1, so that, by part (a) and Problem B.3,

Pf(S) = \pm \det(R)\, Pf(S) = \pm Pf\big( R^tSR \big) = \pm Pf(T) = \pm \alpha_1 \cdots \alpha_m

and

\det(S) = \det\big( R^tSR \big) = \det T = \alpha_1^2 \cdots \alpha_m^2 = Pf(S)^2

Problem B.4 Let S be a skew symmetric D \times D matrix with D odd. Prove that \det S = 0.
Appendix C: Propagator Bounds
The propagator, or covariance, for many-fermion models is the Fourier transform of

C_{\sigma,\sigma'}(k) = \frac{\delta_{\sigma,\sigma'}}{ik_0 - e(\mathbf k)}

where k = (k_0,\mathbf k) and e(\mathbf k) is the one particle dispersion relation minus the chemical potential. For this appendix, the spins \sigma, \sigma' play no role, so we suppress them completely. We also restrict our attention to two space dimensions (i.e. \mathbf k \in IR^2, k \in IR^3) though it is trivial to extend the results of this appendix to any number of space dimensions. We assume that e(\mathbf k) is a C^4 (though C^{2+\epsilon} would suffice) function that has a nonempty, compact zero set F, called the Fermi curve. We further assume that \nabla e(\mathbf k) does not vanish for \mathbf k \in F, so that F is itself a reasonably smooth curve.

At low temperatures only those momenta with k_0 \approx 0 and \mathbf k near F are important, so we replace the above propagator with

C(k) = \frac{U(k)}{ik_0 - e(\mathbf k)}

The precise ultraviolet cutoff, U(k), shall be chosen shortly. It is a C_0^\infty function which takes values in [0,1], is identically 1 for k_0^2 + e(\mathbf k)^2 \le 1 and vanishes for k_0^2 + e(\mathbf k)^2 larger than some constant.

We slice momentum space into shells around the Fermi surface. To do this, we fix M > 1 and choose a function \nu \in C_0^\infty\big( [M^{-2}, M^2] \big) that takes values in [0,1], is identically 1 on [M^{-1/2}, M^{1/2}] and obeys

\sum_{j=0}^\infty \nu\big( M^{2j} x \big) = 1

for 0 < x < 1. The j-th shell is defined to be the support of

\nu^{(j)}(k) = \nu\big( M^{2j}\, [k_0^2 + e(\mathbf k)^2] \big)

By construction, the j-th shell is a subset of

\Big\{\, k \ \Big|\ \tfrac1{M^{j+1}} \le \big| ik_0 - e(\mathbf k) \big| \le \tfrac1{M^{j-1}} \,\Big\}

Setting C^{(j)}(k) = C(k)\,\nu^{(j)}(k) and U(k) = \sum_{j=0}^\infty \nu^{(j)}(k) we have

C(k) = \sum_{j=0}^\infty C^{(j)}(k)
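One concrete choice of \nu with these properties can be built from the standard smooth step s(t) = g(t)/(g(t)+g(1-t)), g(t) = e^{-1/t} for t > 0. The plateau endpoints used below (M^{1/2} and M^{3/2} on a logarithmic scale) are one illustrative choice, not the text's unique one: with \theta a smooth step between them and \nu(x) = \theta(M^2x) - \theta(x), one gets supp \nu \subset [M^{-3/2}, M^{3/2}] \subset [M^{-2}, M^2], \nu \equiv 1 on [M^{-1/2}, M^{1/2}], and the telescoping sum \sum_{j\ge0} \nu(M^{2j}x) = 1 - \theta(x) = 1 for 0 < x < 1.

```python
import math

M = 2.0

def g(t):
    return math.exp(-1.0 / t) if t > 0 else 0.0

def step(t):
    # smooth, = 0 for t <= 0 and = 1 for t >= 1
    return g(t) / (g(t) + g(1.0 - t))

def theta(x):
    # smooth increasing: 0 for x <= M^(1/2), 1 for x >= M^(3/2)
    if x <= 0:
        return 0.0
    u = math.log(x) / math.log(M) - 0.5   # 0 at M^(1/2), 1 at M^(3/2)
    return step(u)

def nu(x):
    # one shell function; the sum over scaled copies telescopes
    return theta(M * M * x) - theta(x)

def shell_sum(x, jmax=60):
    return sum(nu(M ** (2 * j) * x) for j in range(jmax))
```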
Given any function \chi(k') on the Fermi curve F, we define

C^{(j)}_\chi(k) = C^{(j)}(k)\, \chi\big( k'(\mathbf k) \big)

where, for k = (k_0,\mathbf k), k'(\mathbf k) is any reasonable "projection" of \mathbf k onto the Fermi curve. In the event that F is a circle of radius k_F centered on the origin, it is natural to choose k'(\mathbf k) = \frac{k_F}{|\mathbf k|}\,\mathbf k. For general F, one can always construct, in a tubular neighbourhood of F, a C^\infty vector field that is transverse to F, and then define k'(\mathbf k) to be the unique point of F that is on the same integral curve of the vector field as \mathbf k is. See [FST1, Lemma 2.1].

[Figure: a point \mathbf k near the Fermi curve F and its projection k'(\mathbf k) onto F.]

To analyze the Fourier transform of C^{(j)}(k), we further decompose the j-th shell into more or less rectangular "sectors". To do so, we fix l_j \in \big[ \frac1{M^j}, \frac1{M^{j/2}} \big] and choose a partition of unity

1 = \sum_{s\in\Sigma^{(j)}} \chi^{(j)}_s(k')

of the Fermi curve F with each \chi^{(j)}_s supported on an interval of length l_j and obeying

\sup_{k'} \big| \partial_{k'}^m \chi^{(j)}_s \big| \le \frac{const}{l_j^m} \qquad \text{for } m \le 4

Here \partial_{k'} is the derivative along the Fermi curve. If k'(t) is a parametrization of the Fermi curve by arc length, with the standard orientation, then \partial_{k'} f(k')\big|_{k'=k'(t)} = \frac{d}{dt} f\big( k'(t) \big).
Proposition C.1 Let \chi(k') be a C^4 function on the Fermi curve F which takes values in [0,1], which is supported on an interval of length l_j \in \big[ \frac1{M^j}, \frac1{M^{j/2}} \big] and whose derivatives obey

\sup_{k'} \big| \partial_{k'}^n \chi(k') \big| \le \frac{K_\chi}{l_j^n} \qquad \text{for } n \le 4

Fix any point k'_c in the support of \chi. Let \hat t and \hat n be unit tangent and normal vectors to the Fermi curve at k'_c and set

\rho(x,y) = 1 + M^{-j} |x_0-y_0| + M^{-j} \big| (\mathbf x-\mathbf y)\cdot\hat n \big| + l_j \big| (\mathbf x-\mathbf y)\cdot\hat t \big|

Let \varphi be a C_0^4 function which takes values in [0,1] and set \varphi^{(j)} = \varphi\big( M^{2j}\, [k_0^2 + e(\mathbf k)^2] \big). For any function W(k) define

W^{(j)}_{\chi,\varphi}(x,y) = \int \frac{d^3k}{(2\pi)^3}\ e^{ik\cdot(x-y)}\ W(k)\, \varphi^{(j)}(k)\, \chi\big( k'(\mathbf k) \big)

There is a constant, const, depending on K_\chi, \varphi and e(\mathbf k), but independent of M, j, x and y such that

\big| W^{(j)}_{\chi,\varphi}(x,y) \big| \le const\, \frac{l_j}{M^{2j}}\, \rho(x,y)^{-4}\ \max_{\substack{\alpha\in IN^3\\ |\alpha|\le4}}\ \sup_{k\in supp\,\chi\varphi^{(j)}} \Big| \frac{l_j^{\alpha_2}}{M^{j(\alpha_0+\alpha_1)}}\ \partial_{k_0}^{\alpha_0} \big( \hat n\cdot\nabla_{\mathbf k} \big)^{\alpha_1} \big( \hat t\cdot\nabla_{\mathbf k} \big)^{\alpha_2}\, W(k) \Big|

where \alpha = (\alpha_0,\alpha_1,\alpha_2) and |\alpha| = \alpha_0+\alpha_1+\alpha_2.

[Figure: a sector, centered at k'_c, of length O(l_j) in the tangential direction \hat t and width O(1/M^j) in the normal direction \hat n.]
Proof:
Use S to denote the support of φ(j) (k)χ (k0 (k) . If φ(x) vanishes for x ≥ Q2 ,
then φ(j) (k) vanishes unless |k0 | ≤ QM j and |e(k)| ≤ QM j . If k ∈ S, then k0 lies in an interval of length 2QM −j , the component of k tangential to F lies in an interval of length 95
const lj and, by Problem C.1 below, the component of k normal to F lies in an interval of length 2C 0 QM −j . Thus S has volume at most const M −2j lj and (j)
l
sup |Wχ,φ (x, y)| ≤ vol(S) sup |W (k)| ≤ const Mj2j sup |W (k)| x,y
k∈S
k∈S
We defined ρ as a sum of four terms. Multiplying out ρ4 gives us a sum of β n β1 0 0 (x−y)·ˆ ˆt β2 with |β| ≤ 4. To bound 44 terms, each of the form x0M−y l (x − y) · j j j M (j)
supx,y ρ(x, y)4 |Wχ,φ (x, y)| by 44 C, it suffices to bound
β2 (j) lj (x − y) · ˆt Wχ,φ (x, y)
n β1 x0 −y0 β0 (x−y)·ˆ Mj Mj Z =
β0 1 d3 k ık·(x−y) 1 ˆ e ∂ 3 j k 0 (2π) M Mj n
· ∇k
β1
lj ˆt · ∇k
β2
W (k)φ(j) (k) χ k0 (k)
by C for all x, y ∈ IR3 and β ∈ IN3 with |β| ≤ 4. The volume of the domain of integration l
is still bounded by $\text{const}\,M^{2j}$, so, by the product rule, to prove the desired bound it suffices to prove that
$$\max_{|\beta|\le4}\,\sup_{k\in S}\Big|\big(\tfrac1{M^j}\partial_{k_0}\big)^{\beta_0}\big(\tfrac1{M^j}\hat n\cdot\nabla_k\big)^{\beta_1}\big(l_j\,\hat t\cdot\nabla_k\big)^{\beta_2}\varphi^{(j)}(k)\Big|\le\text{const}$$
Write $\varphi^{(j)}(k)=\chi(k)\,\varphi\big(M^{2j}(k_0^2+e(k)^2)\big)$. Since $l_j\ge\frac1{M^j}$,
$$\max_{|\beta|\le4}\,\sup_{k\in S}\Big|\big(\tfrac1{M^j}\partial_{k_0}\big)^{\beta_0}\big(\tfrac1{M^j}\hat n\cdot\nabla_k\big)^{\beta_1}\big(l_j\,\hat t\cdot\nabla_k\big)^{\beta_2}\chi(k)\Big|\le\text{const}\max_{\beta_1+\beta_2\le4}\frac{l_j^{\beta_2}}{M^{j\beta_1}\,l_j^{\beta_1+\beta_2}}\le\text{const}$$
and all derivatives of $\varphi$ to order 4 are bounded, so, by the product rule, it suffices to prove
$$\max_{|\beta|\le4}\,\sup_{k\in S}\Big|\big(\tfrac1{M^j}\partial_{k_0}\big)^{\beta_0}\big(\tfrac1{M^j}\hat n\cdot\nabla_k\big)^{\beta_1}\big(l_j\,\hat t\cdot\nabla_k\big)^{\beta_2}\varphi\big(M^{2j}(k_0^2+e(k)^2)\big)\Big|\le\text{const}$$
Set $I=\{1,\cdots,|\beta|\}$,
$$d_i=\begin{cases}\frac1{M^j}\partial_{k_0}&\text{if }1\le i\le\beta_0\\[2pt]\frac1{M^j}\hat n\cdot\nabla_k&\text{if }\beta_0+1\le i\le\beta_0+\beta_1\\[2pt]l_j\,\hat t\cdot\nabla_k&\text{if }\beta_0+\beta_1+1\le i\le|\beta|\end{cases}$$
and, for each $I'\subset I$, $d_{I'}=\prod_{i\in I'}d_i$. By Problem C.2, below,
$$d_I\,\varphi\big(M^{2j}(k_0^2+e(k)^2)\big)=\sum_{m=1}^{|\beta|}\ \sum_{(I_1,\cdots,I_m)\in P_m}\frac{d^m\varphi}{dx^m}\big(M^{2j}(k_0^2+e(k)^2)\big)\prod_{i=1}^m d_{I_i}\big[M^{2j}\big(k_0^2+e(k)^2\big)\big]$$
where $P_m$ is the set of all partitions of $I$ into $m$ nonempty subsets $I_1,\cdots,I_m$ with, for all $i<i'$, the smallest element of $I_i$ smaller than the smallest element of $I_{i'}$. For all $m\le4$, $\frac{d^m\varphi}{dx^m}\big(M^{2j}(k_0^2+e(k)^2)\big)$ is bounded by a constant independent of $j$, so to prove the Proposition, it suffices to prove that
$$\max_{|\beta|\le4}\,\sup_{k\in S}\,M^{2j}\Big|\big(\tfrac1{M^j}\partial_{k_0}\big)^{\beta_0}\big(\tfrac1{M^j}\hat n\cdot\nabla_k\big)^{\beta_1}\big(l_j\,\hat t\cdot\nabla_k\big)^{\beta_2}\big(k_0^2+e(k)^2\big)\Big|\le\text{const}$$
If $\beta_0\ne0$,
$$M^{2j}\big(\tfrac1{M^j}\partial_{k_0}\big)^{\beta_0}\big(\tfrac1{M^j}\hat n\cdot\nabla_k\big)^{\beta_1}\big(l_j\,\hat t\cdot\nabla_k\big)^{\beta_2}\big(k_0^2+e(k)^2\big)=\begin{cases}2k_0M^j&\text{if }\beta_0=1,\ \beta_1=\beta_2=0\\2&\text{if }\beta_0=2,\ \beta_1=\beta_2=0\\0&\text{otherwise}\end{cases}$$
is bounded, independent of $j$, since $|k_0|\le\text{const}\,\frac1{M^j}$ on $S$. Thus it suffices to consider $\beta_0=0$. Applying the product rule once again, this time to the derivatives acting on $M^{2j}e(k)^2=[M^je(k)]\,[M^je(k)]$, it suffices to prove
$$\max_{|\beta|\le4}\,\sup_{k\in S}\,M^j\Big|\big(\tfrac1{M^j}\hat n\cdot\nabla_k\big)^{\beta_1}\big(l_j\,\hat t\cdot\nabla_k\big)^{\beta_2}e(k)\Big|\le\text{const}\tag{C.1}$$
If $\beta_1=\beta_2=0$, this follows from the fact that $|e(k)|\le\text{const}\,\frac1{M^j}$ on $S$. If $\beta_1\ge1$ or $\beta_2\ge2$, it follows from $\frac{M^j\,l_j^{\beta_2}}{M^{\beta_1j}}\le1$. (Recall that $l_j\le\frac1{M^{j/2}}$.) This leaves only $\beta_1=0$, $\beta_2=1$. If $\hat t\cdot\nabla_ke(k)$ is evaluated at $k=k_{0c}$, it vanishes, since $\nabla_ke(k_{0c})$ is parallel to $\hat n$. The second derivative of $e$ is bounded so that
$$M^jl_j\sup_{k\in S}\big|\hat t\cdot\nabla_ke(k)\big|=M^jl_j\sup_{k\in S}\big|\hat t\cdot\nabla_ke(k)-\hat t\cdot\nabla_ke(k_{0c})\big|\le\text{const}\,M^jl_j\sup_{k\in S}|k-k_{0c}|\le\text{const}\,M^jl_j^2\le\text{const}$$
since $l_j\le\frac1{M^{j/2}}$.
Problem C.1 Prove, under the hypotheses of Proposition C.1, that there are constants $C,C'>0$, such that if $|k-k_{0c}|\le C$, then there is a point $p_0\in F$ with $(k-p_0)\cdot\hat t=0$ and $|k-p_0|\le C'\,|e(k)|$.
Problem C.2 Let $f:\mathbb{R}^d\to\mathbb{R}$ and $g:\mathbb{R}\to\mathbb{R}$. Let $1\le i_1,\cdots,i_n\le d$. Prove that
$$\prod_{\ell=1}^n\frac{\partial}{\partial x_{i_\ell}}\,g\big(f(x)\big)=\sum_{m=1}^n\ \sum_{(I_1,\cdots,I_m)\in P^{(n)}_m}g^{(m)}\big(f(x)\big)\prod_{p=1}^m\Big[\prod_{\ell\in I_p}\frac{\partial}{\partial x_{i_\ell}}\Big]f(x)$$
where $P^{(n)}_m$ is the set of all partitions of $(1,\cdots,n)$ into $m$ nonempty subsets $I_1,\cdots,I_m$ with, for all $i<i'$, the smallest element of $I_i$ smaller than the smallest element of $I_{i'}$.
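For $n=2$ this partition formula is the familiar identity $\partial_{x_1}\partial_{x_2}g(f(x))=g'(f)\,f_{x_1x_2}+g''(f)\,f_{x_1}f_{x_2}$. The following is a quick numerical sanity check, using hypothetical test functions $f$ and $g$ (not taken from the text) and a central-difference approximation to the left hand side:

```python
import math

# Check the n = 2 case of Problem C.2:
#   d^2/dx1 dx2 g(f(x)) = g'(f) f_{x1x2} + g''(f) f_{x1} f_{x2}
# f and g below are hypothetical test functions chosen for illustration.

def f(x1, x2):
    return x1 * x1 + x1 * x2 + math.sin(x2)

def fx1(x1, x2):        # df/dx1
    return 2 * x1 + x2

def fx2(x1, x2):        # df/dx2
    return x1 + math.cos(x2)

def fx1x2(x1, x2):      # d^2 f / dx1 dx2
    return 1.0

g = math.exp            # g' = g'' = exp keeps the right hand side simple

def lhs(x1, x2, h=1e-4):
    # central mixed second difference of g(f(x))
    F = lambda a, b: g(f(a, b))
    return (F(x1+h, x2+h) - F(x1+h, x2-h) - F(x1-h, x2+h) + F(x1-h, x2-h)) / (4*h*h)

def rhs(x1, x2):
    # sum over the two ordered partitions of (1,2): {(1,2)} and {(1)},{(2)}
    y = f(x1, x2)
    return g(y) * fx1x2(x1, x2) + g(y) * fx1(x1, x2) * fx2(x1, x2)

x1, x2 = 0.3, -0.7
assert abs(lhs(x1, x2) - rhs(x1, x2)) < 1e-5 * max(1.0, abs(rhs(x1, x2)))
```

The tolerance absorbs the $O(h^2)$ truncation error of the finite difference.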
Corollary C.2 Under the hypotheses of Proposition C.1,
$$\sup\big|W^{(j)}_{\chi,\varphi}(x,y)\big|\le\text{const}\,\frac{l_j}{M^{2j}}\sup_{k\in\text{supp}\,\chi\varphi^{(j)}}|W(k)|$$
and
$$\sup_x\int dy\,\big|W^{(j)}_{\chi,\varphi}(x,y)\big|\,,\ \sup_y\int dx\,\big|W^{(j)}_{\chi,\varphi}(x,y)\big|\le\text{const}\max_{\alpha\in\mathbb{N}^3,\,|\alpha|\le4}\,\sup_{k\in\text{supp}\,\chi\varphi^{(j)}}\Big|\frac{l_j^{\alpha_2}}{M^{j(\alpha_0+\alpha_1)}}\,\partial_{k_0}^{\alpha_0}\big(\hat n\cdot\nabla_k\big)^{\alpha_1}\big(\hat t\cdot\nabla_k\big)^{\alpha_2}W(k)\Big|$$

Proof: The first claim is an immediate consequence of Proposition C.1 since $\rho\ge1$. For the second statement, use
$$\sup_x\int dy\,\frac1{\rho(x,y)^4}\,,\ \sup_y\int dx\,\frac1{\rho(x,y)^4}=\sup_y\int dx\,\frac1{\rho(x-y,0)^4}=\int dx'\,\frac1{\rho(x',0)^4}$$
with $x'=x-y$. Subbing in the definition of $\rho$,
$$\int dx\,\frac1{\rho(x,0)^4}=\int\frac{dx}{\big[1+M^{-j}|x_0|+M^{-j}|x\cdot\hat n|+l_j|x\cdot\hat t|\big]^4}=M^{2j}\frac1{l_j}\int\frac{dz}{\big[1+|z_0|+|z_1|+|z_2|\big]^4}\le\text{const}\,M^{2j}\frac1{l_j}$$
We made the change of variables $x_0=M^jz_0$, $x\cdot\hat n=M^jz_1$, $x\cdot\hat t=\frac1{l_j}z_2$.
Corollary C.3 Under the hypotheses of Proposition C.1,
$$\sup\big|C^{(j)}_\chi(x,y)\big|\le\text{const}\,\frac{l_j}{M^j}$$
and
$$\sup_x\int dy\,\big|C^{(j)}_\chi(x,y)\big|\,,\ \sup_y\int dx\,\big|C^{(j)}_\chi(x,y)\big|\le\text{const}\,M^j$$

Proof: Apply Corollary C.2 with $W(k)=\frac1{\imath k_0-e(k)}$ and $\varphi=\nu$. To achieve the desired bounds, we need
$$\max_{|\alpha|\le4}\,\sup_{k\in\text{supp}\,\chi\nu^{(j)}}\Big|\big(\tfrac1{M^j}\partial_{k_0}\big)^{\alpha_0}\big(\tfrac1{M^j}\hat n\cdot\nabla_k\big)^{\alpha_1}\big(l_j\,\hat t\cdot\nabla_k\big)^{\alpha_2}\frac1{\imath k_0-e(k)}\Big|\le\text{const}\,M^j$$
In the notation of the proof of Proposition C.1, with $\beta$ replaced by $\alpha$,
$$\begin{aligned}d_I\,\frac1{\imath k_0-e(k)}&=\sum_{m=1}^{|\alpha|}\ \sum_{(I_1,\cdots,I_m)\in P_m}\frac{(-1)^m\,m!}{\big(\imath k_0-e(k)\big)^{m+1}}\prod_{i=1}^m d_{I_i}\big(\imath k_0-e(k)\big)\\ &=M^j\sum_{m=1}^{|\alpha|}\ \sum_{(I_1,\cdots,I_m)\in P_m}(-1)^m\,m!\,\Big[\frac{1/M^j}{\imath k_0-e(k)}\Big]^{m+1}\prod_{i=1}^m M^j\,d_{I_i}\big(\imath k_0-e(k)\big)\end{aligned}$$
On the support of $\chi\nu^{(j)}$, $|\imath k_0-e(k)|\ge\text{const}\,\frac1{M^j}$ so that $\Big|\frac{1/M^j}{\imath k_0-e(k)}\Big|^{m+1}$ is bounded uniformly in $j$. That $M^j\big|d_{I_i}(\imath k_0-e(k))\big|$ is bounded uniformly in $j$ is immediate from (C.1),
$$M^j\big(\tfrac1{M^j}\partial_{k_0}\big)^{\beta_0}\big(\tfrac1{M^j}\hat n\cdot\nabla_k\big)^{\beta_1}\big(l_j\,\hat t\cdot\nabla_k\big)^{\beta_2}(\imath k_0)=\begin{cases}\imath k_0M^j&\text{if }\beta_0=\beta_1=\beta_2=0\\\imath&\text{if }\beta_0=1,\ \beta_1=\beta_2=0\\0&\text{otherwise}\end{cases}$$
and the fact that $|k_0|\le\text{const}\,\frac1{M^j}$ on the support of $\chi\nu^{(j)}$.
Appendix D: Problem Solutions

§I. Fermionic Functional Integrals

Problem I.1 Let V be a complex vector space of dimension D. Let $s\in\bigwedge V$. Then $s$ has a unique decomposition $s=s_0+s_1$ with $s_0\in\mathbb{C}$ and $s_1\in\bigoplus_{n=1}^D\bigwedge^nV$. Prove that, if $s_0\ne0$, then there is a unique $s'\in\bigwedge V$ with $ss'=1$ and a unique $s''\in\bigwedge V$ with $s''s=1$ and furthermore
$$s'=s''=\frac1{s_0}+\sum_{n=1}^D(-1)^n\frac1{s_0^{n+1}}\,s_1^n$$

Solution. Note that if $n>D$, then $s_1^n=0$. Define
$$\tilde s=\frac1{s_0}+\sum_{n=1}^D(-1)^n\frac1{s_0^{n+1}}\,s_1^n$$
Then
$$\begin{aligned}\tilde ss=s\tilde s&=(s_0+s_1)\Big[\frac1{s_0}+\sum_{n=1}^D(-1)^n\frac{s_1^n}{s_0^{n+1}}\Big]\\ &=\Big[1+\sum_{n=1}^D(-1)^n\frac{s_1^n}{s_0^n}\Big]+\Big[\frac{s_1}{s_0}+\sum_{n=1}^D(-1)^n\frac{s_1^{n+1}}{s_0^{n+1}}\Big]\\ &=\Big[1+\sum_{n=1}^D(-1)^n\frac{s_1^n}{s_0^n}\Big]+\Big[\sum_{n=0}^{D-1}(-1)^n\frac{s_1^{n+1}}{s_0^{n+1}}\Big]\\ &=\Big[1+\sum_{n=1}^D(-1)^n\frac{s_1^n}{s_0^n}\Big]-\Big[\sum_{n'=1}^D(-1)^{n'}\frac{s_1^{n'}}{s_0^{n'}}\Big]=1\end{aligned}$$
Thus $\tilde s$ satisfies $\tilde ss=s\tilde s=1$. If some other $s'$ satisfies $ss'=1$, then $s'=1s'=\tilde sss'=\tilde s1=\tilde s$, and if some other $s''$ satisfies $s''s=1$, then $s''=s''1=s''s\tilde s=1\tilde s=\tilde s$.
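The inverse formula can be checked mechanically in a concrete finite dimensional Grassmann algebra. The sketch below is a hypothetical implementation (not from the text): basis monomials are encoded as bitmasks over $D=4$ generators, the product sign is computed by counting transpositions, and $s\tilde s$ is multiplied out for a sample element and confirmed to equal 1.

```python
# Minimal Grassmann algebra over D generators; elements are dicts
# {bitmask: coefficient}, bit i of the mask meaning generator e_i is present.
D = 4  # number of generators (illustrative choice)

def mul_masks(a, b):
    """Multiply basis monomials given as bitmasks; return (sign, mask)."""
    if a & b:
        return 0, 0          # repeated generator: product vanishes
    swaps = 0
    for j in range(D):
        if b >> j & 1:
            # each generator of `a` with index > j must jump over e_j
            swaps += bin(a >> (j + 1)).count("1")
    return (-1) ** swaps, a | b

def gmul(x, y):
    out = {}
    for ma, ca in x.items():
        for mb, cb in y.items():
            sgn, m = mul_masks(ma, mb)
            if sgn:
                out[m] = out.get(m, 0) + sgn * ca * cb
    return {m: c for m, c in out.items() if c != 0}

def gpow(x, n):
    p = {0: 1}
    for _ in range(n):
        p = gmul(p, x)
    return p

# s = s0 + s1 with s0 = 2 and a nilpotent part s1 mixing several degrees
s0 = 2.0
s1 = {0b0001: 1.0, 0b0110: 3.0, 0b1011: -2.0}
s = dict(s1); s[0] = s0

# inverse formula from Problem I.1: 1/s0 + sum_n (-1)^n s1^n / s0^(n+1)
inv = {0: 1.0 / s0}
for n in range(1, D + 1):
    for m, c in gpow(s1, n).items():
        inv[m] = inv.get(m, 0) + (-1) ** n * c / s0 ** (n + 1)

prod = gmul(s, inv)
assert prod == {0: 1.0}   # s * s~ = 1, with all higher-degree terms cancelling
```

With the chosen coefficients all arithmetic is dyadic, so the cancellations are exact in floating point.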
Problem I.2 Let V be a complex vector space of dimension D. Every element $s$ of $\bigwedge V$ has a unique decomposition $s=s_0+s_1$ with $s_0\in\mathbb{C}$ and $s_1\in\bigoplus_{n=1}^D\bigwedge^nV$. Define
$$e^s=e^{s_0}\Big\{\sum_{n=0}^D\tfrac1{n!}\,s_1^n\Big\}$$
Prove that if $s,t\in\bigwedge V$ with $st=ts$, then, for all $n\in\mathbb{N}$,
$$(s+t)^n=\sum_{m=0}^n\binom nm\,s^mt^{n-m}$$
and $e^se^t=e^te^s=e^{s+t}$.
Solution. The proof that $(s+t)^n=\sum_{m=0}^n\binom nm s^mt^{n-m}$ is by induction on $n$. The case $n=1$ is obvious. Suppose that $(s+t)^n=\sum_{m=0}^n\binom nm s^mt^{n-m}$ for some $n$. Since $st=ts$, we have $ss^mt^n=s^mt^ns$ for all nonnegative integers $m$ and $n$, so that
$$\begin{aligned}(s+t)^{n+1}&=s(s+t)^n+(s+t)^nt=\sum_{m=0}^n\binom nm s^{m+1}t^{n-m}+\sum_{m=0}^n\binom nm s^mt^{n+1-m}\\ &=\sum_{m=1}^{n+1}\binom n{m-1}s^mt^{n+1-m}+\sum_{m=0}^n\binom nm s^mt^{n+1-m}\\ &=s^{n+1}+\sum_{m=1}^n\Big[\binom n{m-1}+\binom nm\Big]s^mt^{n+1-m}+t^{n+1}\\ &=s^{n+1}+\sum_{m=1}^n\Big[\tfrac{n!}{(m-1)!(n-m+1)!}+\tfrac{n!}{m!(n-m)!}\Big]s^mt^{n+1-m}+t^{n+1}\\ &=s^{n+1}+\sum_{m=1}^n\binom{n+1}m\Big[\tfrac m{n+1}+\tfrac{n+1-m}{n+1}\Big]s^mt^{n+1-m}+t^{n+1}\\ &=\sum_{m=0}^{n+1}\binom{n+1}m s^mt^{n+1-m}\end{aligned}$$
For the second part, since $s_0$, $t_0$, $s_1=s-s_0$ and $t_1=t-t_0$ all commute with each other, and $s_1^n=t_1^n=0$ for all $n>D$,
$$\begin{aligned}e^se^t=e^te^s&=e^{s_0}\Big\{\sum_{m=0}^\infty\tfrac1{m!}s_1^m\Big\}e^{t_0}\Big\{\sum_{n=0}^\infty\tfrac1{n!}t_1^n\Big\}=e^{s_0}e^{t_0}\Big\{\sum_{m,n=0}^\infty\tfrac1{m!n!}s_1^mt_1^n\Big\}\\ &=e^{s_0}e^{t_0}\Big\{\sum_{N=0}^\infty\sum_{m=0}^N\tfrac1{m!(N-m)!}s_1^mt_1^{N-m}\Big\}\qquad\text{where }N=n+m\\ &=e^{s_0+t_0}\Big\{\sum_{N=0}^\infty\tfrac1{N!}\sum_{m=0}^N\binom Nm s_1^mt_1^{N-m}\Big\}=e^{s_0+t_0}\Big\{\sum_{N=0}^\infty\tfrac1{N!}(s_1+t_1)^N\Big\}=e^{s+t}\end{aligned}$$
Problem I.3 Use the notation of Problem I.2. Let, for each $\alpha\in\mathbb{R}$, $s(\alpha)\in\bigwedge V$. Assume that $s(\alpha)$ is differentiable with respect to $\alpha$ and that $s(\alpha)s(\beta)=s(\beta)s(\alpha)$ for all $\alpha$ and $\beta$. Prove that
$$\frac d{d\alpha}s(\alpha)^n=n\,s(\alpha)^{n-1}\frac d{d\alpha}s(\alpha)\qquad\text{and}\qquad\frac d{d\alpha}e^{s(\alpha)}=e^{s(\alpha)}\frac d{d\alpha}s(\alpha)$$

Solution. First let, for each $\alpha\in\mathbb{R}$, $t(\alpha)\in\bigwedge V$ and assume that $t(\alpha)$ is differentiable with respect to $\alpha$. Taking the limit of
$$\frac{s(\alpha+h)t(\alpha+h)-s(\alpha)t(\alpha)}h=\frac{s(\alpha+h)-s(\alpha)}h\,t(\alpha+h)+s(\alpha)\,\frac{t(\alpha+h)-t(\alpha)}h$$
as $h$ tends to zero gives that $s(\alpha)t(\alpha)$ is differentiable and obeys the usual product rule
$$\frac d{d\alpha}\big[s(\alpha)t(\alpha)\big]=s'(\alpha)t(\alpha)+s(\alpha)t'(\alpha)$$
Differentiating $s(\alpha)s(\beta)=s(\beta)s(\alpha)$ with respect to $\alpha$ gives that $s'(\alpha)s(\beta)=s(\beta)s'(\alpha)$ for all $\alpha$ and $\beta$ too. If, for some $n$, $\frac d{d\alpha}s(\alpha)^n=n\,s(\alpha)^{n-1}s'(\alpha)$, then
$$\frac d{d\alpha}s(\alpha)^{n+1}=\frac d{d\alpha}\big[s(\alpha)s(\alpha)^n\big]=s'(\alpha)s(\alpha)^n+s(\alpha)\frac d{d\alpha}s(\alpha)^n=s'(\alpha)s(\alpha)^n+n\,s(\alpha)s(\alpha)^{n-1}s'(\alpha)=(n+1)\,s(\alpha)^ns'(\alpha)$$
and the first part follows easily by induction. For the second part, write $s(\alpha)=s_0(\alpha)+s_1(\alpha)$ with $s_0(\alpha)\in\mathbb{C}$ and $s_1(\alpha)\in\bigoplus_{n=1}^D\bigwedge^nV$. Then $s_0(\alpha)$ and $s_1(\beta)$ commute for all $\alpha$ and $\beta$ and
$$\begin{aligned}\frac d{d\alpha}e^{s(\alpha)}&=\frac d{d\alpha}\Big[e^{s_0(\alpha)}\Big\{\sum_{n=0}^D\tfrac1{n!}s_1(\alpha)^n\Big\}\Big]\\ &=s_0'(\alpha)\,e^{s_0(\alpha)}\Big\{\sum_{n=0}^{D+1}\tfrac1{n!}s_1(\alpha)^n\Big\}+e^{s_0(\alpha)}\Big\{\sum_{n=1}^{D+1}\tfrac1{(n-1)!}s_1(\alpha)^{n-1}\Big\}s_1'(\alpha)\\ &=e^{s_0(\alpha)}\Big\{\sum_{n=0}^D\tfrac1{n!}s_1(\alpha)^n\Big\}\big[s_0'(\alpha)+s_1'(\alpha)\big]=e^{s(\alpha)}s'(\alpha)\end{aligned}$$
(The terms with $n=D+1$ vanish because $s_1(\alpha)^{D+1}=0$ and $s_1(\alpha)^Ds_1'(\alpha)=0$.)
Problem I.4 Use the notation of Problem I.2. If $s_0>0$, define
$$\ln s=\ln s_0+\sum_{n=1}^D\frac{(-1)^{n-1}}n\Big(\frac{s_1}{s_0}\Big)^n$$
with $\ln s_0\in\mathbb{R}$.
a) Let, for each $\alpha\in\mathbb{R}$, $s(\alpha)\in\bigwedge V$. Assume that $s(\alpha)$ is differentiable with respect to $\alpha$, that $s(\alpha)s(\beta)=s(\beta)s(\alpha)$ for all $\alpha$ and $\beta$ and that $s_0(\alpha)>0$ for all $\alpha$. Prove that
$$\frac d{d\alpha}\ln s(\alpha)=\frac{s'(\alpha)}{s(\alpha)}$$
b) Prove that if $s\in\bigwedge V$ with $s_0\in\mathbb{R}$, then $\ln e^s=s$. Prove that if $s\in\bigwedge V$ with $s_0>0$, then $e^{\ln s}=s$.

Solution. a)
$$\begin{aligned}\frac d{d\alpha}\ln s(\alpha)&=\frac{s_0'(\alpha)}{s_0(\alpha)}+\sum_{n=1}^D(-1)^{n-1}\frac{s_1(\alpha)^{n-1}}{s_0(\alpha)^n}\,s_1'(\alpha)+\sum_{n=1}^D(-1)^n\frac{s_1(\alpha)^n}{s_0(\alpha)^{n+1}}\,s_0'(\alpha)\\ &=\frac{s_0'(\alpha)}{s_0(\alpha)}+\sum_{n=0}^D(-1)^n\frac{s_1(\alpha)^n}{s_0(\alpha)^{n+1}}\,s_1'(\alpha)+\sum_{n=1}^D(-1)^n\frac{s_1(\alpha)^n}{s_0(\alpha)^{n+1}}\,s_0'(\alpha)\end{aligned}$$
since $s_1(\alpha)^Ds_1'(\alpha)=0$. By Problem I.1,
$$\frac d{d\alpha}\ln s(\alpha)=\frac{s_0'(\alpha)}{s_0(\alpha)}+\frac{s_1'(\alpha)}{s_0(\alpha)}+\Big[\frac1{s(\alpha)}-\frac1{s_0(\alpha)}\Big]\big(s_1'(\alpha)+s_0'(\alpha)\big)=\frac{s'(\alpha)}{s(\alpha)}$$
b) For the first part use (with $\alpha\in\mathbb{R}$)
$$\frac d{d\alpha}\ln e^{\alpha s}=\frac{s\,e^{\alpha s}}{e^{\alpha s}}=s=\frac d{d\alpha}(\alpha s)$$
This implies that $\ln e^{\alpha s}-\alpha s$ is independent of $\alpha$. As it is zero for $\alpha=0$, it is zero for all $\alpha$. For the second part, set $s'=e^{\ln s}$. Then, by the first part, $\ln s'=\ln s$. So it suffices to prove that $\ln$ is injective. This is done by expanding out $s'=\sum_{\ell=0}^Ds_\ell'$ and $s=\sum_{\ell=0}^Ds_\ell$ with $s_\ell',s_\ell\in\bigwedge^\ell V$ and verifying that every $s_\ell'=s_\ell$ by induction on $\ell$. Projecting
$$\ln s_0+\sum_{n=1}^D\frac{(-1)^{n-1}}n\Big(\frac{s-s_0}{s_0}\Big)^n=\ln s_0'+\sum_{n=1}^D\frac{(-1)^{n-1}}n\Big(\frac{s'-s_0'}{s_0'}\Big)^n$$
onto $\bigwedge^0V$ gives $\ln s_0=\ln s_0'$ and hence $s_0'=s_0$. Projecting both sides onto $\bigwedge^1V$ gives
$$\frac{(-1)^{1-1}}1\Big[\frac{s-s_0}{s_0}\Big]_1=\frac{(-1)^{1-1}}1\Big[\frac{s'-s_0'}{s_0'}\Big]_1$$
(note that $\big(\frac{s-s_0}{s_0}\big)^n$ has no component in $\bigwedge^mV$ for any $m<n$) and hence $s_1'=s_1$. Projecting both sides onto $\bigwedge^2V$ gives
$$\frac{(-1)^{1-1}}1\Big[\frac{s-s_0}{s_0}\Big]_2+\frac{(-1)^{2-1}}2\Big[\Big(\frac{s-s_0}{s_0}\Big)^2\Big]_2=\frac{(-1)^{1-1}}1\Big[\frac{s'-s_0'}{s_0'}\Big]_2+\frac{(-1)^{2-1}}2\Big[\Big(\frac{s'-s_0'}{s_0'}\Big)^2\Big]_2$$
Since $s_0'=s_0$ and $s_1'=s_1$, the quadratic terms agree, so
$$\frac{s_2}{s_0}=\Big[\frac{s-s_0}{s_0}\Big]_2=\Big[\frac{s'-s_0'}{s_0'}\Big]_2=\frac{s_2'}{s_0'}$$
and consequently $s_2'=s_2$. And so on.
Problem I.5 Use the notation of Problems I.2 and I.4. Prove that if $s,t\in\bigwedge V$ with $st=ts$ and $s_0,t_0>0$, then
$$\ln(st)=\ln s+\ln t$$

Solution. Let $u=\ln s$ and $v=\ln t$. Then $s=e^u$ and $t=e^v$ and, as $uv=vu$,
$$\ln(st)=\ln e^ue^v=\ln e^{u+v}=u+v=\ln s+\ln t$$
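The identities of Problems I.2 through I.5 use only that the nilpotent parts commute, so they can be sanity-checked in the commutative algebra generated by a single nilpotent matrix $N$, which is used below purely as a hypothetical stand-in for the Grassmann algebra:

```python
import math

# Check ln(st) = ln s + ln t for commuting elements s0 + (nilpotent),
# modelled by 3x3 matrices in the commutative algebra C[N] with N^3 = 0.
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(c, A):
    return [[c * a for a in row] for row in A]

def identity(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

def log_of(X):
    # X = x0 (I + K) with K nilpotent: ln X = ln(x0) I + sum (-1)^(n-1)/n K^n
    n = len(X)
    x0 = X[0][0]
    K = mat_add(mat_scale(1.0 / x0, X), mat_scale(-1.0, identity(n)))
    out = mat_scale(math.log(x0), identity(n))
    P = identity(n)
    for m in range(1, n):          # K^n = 0, so the series terminates
        P = mat_mul(P, K)
        out = mat_add(out, mat_scale((-1) ** (m - 1) / m, P))
    return out

N = [[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.0, 0.0]]   # N^3 = 0
s = mat_add(mat_scale(2.0, identity(3)), N)                # s = 2 + N
t = mat_add(mat_scale(3.0, identity(3)), mat_mul(N, N))    # t = 3 + N^2, st = ts

lhs = log_of(mat_mul(s, t))
rhs = mat_add(log_of(s), log_of(t))
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(3) for j in range(3))
```

By hand, both sides equal $\ln6\cdot I+\tfrac12N+\tfrac5{24}N^2$, which is what the assertion confirms numerically.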
Problem I.6 Generalise Problems I.1–I.5 to $\mathcal S\otimes\bigwedge V$ with $\mathcal S$ a finite dimensional graded superalgebra having $\mathcal S_0=\mathbb{C}$.

Solution. The definitions of Problems I.1–I.5 still make perfectly good sense when the Grassmann algebra $\bigwedge V$ is replaced by a finite dimensional graded superalgebra whose degree zero component is $\mathbb{C}$. Note that, because $\mathcal S$ is finite dimensional, there is a finite $D_{\mathcal S}$ such that $\mathcal S=\bigoplus_{m=0}^{D_{\mathcal S}}\mathcal S_m$. Furthermore, as was observed in Definitions I.5 and I.6, $\mathcal S'=\mathcal S\otimes\bigwedge V$ is itself a finite dimensional graded superalgebra, with $\mathcal S'=\bigoplus_{m=0}^{D'}\mathcal S_m'$ where $D'=D_{\mathcal S}+D$ and $\mathcal S_m'=\bigoplus_{m_1+m_2=m}\mathcal S_{m_1}\otimes\bigwedge^{m_2}V$. In particular $\mathcal S_0'=\mathbb{C}$. So, just replace
"Let V be a complex vector space of dimension D. Every element $s$ of $\bigwedge V$ has a unique decomposition $s=s_0+s_1$ with $s_0\in\mathbb{C}$ and $s_1\in\bigoplus_{n=1}^D\bigwedge^nV$."
by
"Let $\mathcal S'=\bigoplus_{m=0}^{D'}\mathcal S_m'$ be a finite dimensional graded superalgebra having $\mathcal S_0'=\mathbb{C}$. Every element $s$ of $\mathcal S'$ has a unique decomposition $s=s_0+s_1$ with $s_0\in\mathbb{C}$ and $s_1\in\bigoplus_{m=1}^{D'}\mathcal S_m'$."
The other parts of the statements and even the solutions remain the same, except for trivial changes, like replacing $D$ by $D'$.
Problem I.7 Let V be a complex vector space of dimension D. Let $s=s_0+s_1\in\bigwedge V$ with $s_0\in\mathbb{C}$ and $s_1\in\bigoplus_{n=1}^D\bigwedge^nV$. Let $f(z)$ be a complex valued function that is analytic in $|z|<r$. Prove that if $|s_0|<r$, then
$$\sum_{n=0}^\infty\tfrac1{n!}f^{(n)}(0)\,s^n=\sum_{n=0}^D\tfrac1{n!}f^{(n)}(s_0)\,s_1^n$$

Solution. The Taylor series $\sum_{m=0}^\infty\tfrac1{m!}f^{(m)}(0)\,t^m$ converges absolutely and uniformly to $f(t)$ for all $t$ in any compact subset of $\{z\in\mathbb{C}\mid|z|<r\}$. Hence $\sum_{m=0}^\infty\tfrac1{m!}f^{(n+m)}(0)\,s_0^m$ converges to $f^{(n)}(s_0)$ and
$$\begin{aligned}\sum_{n=0}^D\tfrac1{n!}f^{(n)}(s_0)\,s_1^n&=\sum_{n=0}^D\sum_{m=0}^\infty\tfrac1{n!m!}f^{(n+m)}(0)\,s_0^ms_1^n\\ &=\sum_{N=0}^\infty\ \sum_{n=0}^{\min\{N,D\}}\tfrac1{n!(N-n)!}f^{(N)}(0)\,s_0^{N-n}s_1^n\qquad\text{where }N=m+n\\ &=\sum_{N=0}^\infty\tfrac1{N!}f^{(N)}(0)\sum_{n=0}^{\min\{N,D\}}\binom Nn s_0^{N-n}s_1^n\\ &=\sum_{N=0}^\infty\tfrac1{N!}f^{(N)}(0)\sum_{n=0}^N\binom Nn s_0^{N-n}s_1^n=\sum_{N=0}^\infty\tfrac1{N!}f^{(N)}(0)\,s^N\end{aligned}$$
(The terms with $n>D$ vanish because $s_1^n=0$.)
Problem I.8 Let $a_1,\cdots,a_D$ be an ordered basis for V. Let $b_i=\sum_{j=1}^DM_{i,j}a_j$, $1\le i\le D$, be another ordered basis for V. Prove that
$$\int\cdot\ da_D\cdots da_1=\det M\int\cdot\ db_D\cdots db_1$$
In particular, if $b_i=a_{\sigma(i)}$ for some permutation $\sigma\in S_D$,
$$\int\cdot\ da_D\cdots da_1=\text{sgn}\,\sigma\int\cdot\ db_D\cdots db_1$$

Solution. By linearity, it suffices to verify that
$$\int b_{i_1}\cdots b_{i_\ell}\,da_D\cdots da_1=0\quad\text{unless }\ell=D\qquad\text{and}\qquad\int b_1\cdots b_D\,da_D\cdots da_1=\det M$$
That $\int b_{i_1}\cdots b_{i_\ell}\,da_D\cdots da_1=0$ when $\ell\ne D$ is obvious, because $\int a_{j_1}\cdots a_{j_\ell}\,da_D\cdots da_1$ vanishes when $\ell\ne D$. Now
$$\int b_1\cdots b_D\,da_D\cdots da_1=\sum_{j_1,\cdots,j_D=1}^DM_{1,j_1}\cdots M_{D,j_D}\int a_{j_1}\cdots a_{j_D}\,da_D\cdots da_1$$
$\int a_{j_1}\cdots a_{j_D}\,da_D\cdots da_1=0$ unless all of the $j_k$'s are different, so
$$\begin{aligned}\int b_1\cdots b_D\,da_D\cdots da_1&=\sum_{\pi\in S_D}M_{1,\pi(1)}\cdots M_{D,\pi(D)}\int a_{\pi(1)}\cdots a_{\pi(D)}\,da_D\cdots da_1\\ &=\sum_{\pi\in S_D}M_{1,\pi(1)}\cdots M_{D,\pi(D)}\,\text{sgn}\,\pi\int a_1\cdots a_D\,da_D\cdots da_1\\ &=\sum_{\pi\in S_D}\text{sgn}\,\pi\,M_{1,\pi(1)}\cdots M_{D,\pi(D)}=\det M\end{aligned}$$
In particular, if $b_i=a_{\sigma(i)}$ for some permutation $\sigma\in S_D$,
$$\int b_1\cdots b_D\,da_D\cdots da_1=\int a_{\sigma(1)}\cdots a_{\sigma(D)}\,da_D\cdots da_1=\text{sgn}\,\sigma\int a_1\cdots a_D\,da_D\cdots da_1=\text{sgn}\,\sigma$$
Problem I.9 Let
◦ S be a matrix
◦ $\lambda$ be a real number
◦ $\vec v_1$ and $\vec v_2$ be two mutually perpendicular, complex conjugate unit vectors
◦ $S\vec v_1=\imath\lambda\vec v_1$ and $S\vec v_2=-\imath\lambda\vec v_2$.
Set
$$\vec w_1=\tfrac1{\sqrt2\,\imath}\big(\vec v_1-\vec v_2\big)\qquad\vec w_2=\tfrac1{\sqrt2}\big(\vec v_1+\vec v_2\big)$$
a) Prove that
• $\vec w_1$ and $\vec w_2$ are two mutually perpendicular, real unit vectors
• $S\vec w_1=\lambda\vec w_2$ and $S\vec w_2=-\lambda\vec w_1$.
b) Suppose, in addition, that S is a $2\times2$ matrix. Let R be the $2\times2$ matrix whose first column is $\vec w_1$ and whose second column is $\vec w_2$. Prove that R is a real orthogonal matrix and that $R^tSR=\begin{pmatrix}0&-\lambda\\\lambda&0\end{pmatrix}$.
c) Generalise to the case in which S is a $2r\times2r$ matrix.

Solution. a) Aside from a factor of $\sqrt2$, $\vec w_1$ and $\vec w_2$ are the imaginary and real parts of $\vec v_1$, so they are real. I'll use the convention that the complex conjugate is on the left argument of the dot product. Then
$$\begin{aligned}\vec w_1\cdot\vec w_1&=\tfrac12\big(\vec v_1-\vec v_2\big)\cdot\big(\vec v_1-\vec v_2\big)=\tfrac12\big(\vec v_1\cdot\vec v_1+\vec v_2\cdot\vec v_2\big)=1\\ \vec w_2\cdot\vec w_2&=\tfrac12\big(\vec v_1+\vec v_2\big)\cdot\big(\vec v_1+\vec v_2\big)=\tfrac12\big(\vec v_1\cdot\vec v_1+\vec v_2\cdot\vec v_2\big)=1\\ \vec w_1\cdot\vec w_2&=\tfrac\imath2\big(\vec v_1-\vec v_2\big)\cdot\big(\vec v_1+\vec v_2\big)=\tfrac\imath2\big(\vec v_1\cdot\vec v_1-\vec v_2\cdot\vec v_2\big)=0\end{aligned}$$
and
$$\begin{aligned}S\vec w_1&=\tfrac1{\sqrt2\,\imath}\big(S\vec v_1-S\vec v_2\big)=\tfrac1{\sqrt2\,\imath}\big(\imath\lambda\vec v_1+\imath\lambda\vec v_2\big)=\tfrac\lambda{\sqrt2}\big(\vec v_1+\vec v_2\big)=\lambda\vec w_2\\ S\vec w_2&=\tfrac1{\sqrt2}\big(S\vec v_1+S\vec v_2\big)=\tfrac1{\sqrt2}\big(\imath\lambda\vec v_1-\imath\lambda\vec v_2\big)=\tfrac{\imath\lambda}{\sqrt2}\big(\vec v_1-\vec v_2\big)=-\lambda\vec w_1\end{aligned}$$
b) The matrix R is real, because its column vectors $\vec w_1$ and $\vec w_2$ are real, and it is orthogonal because its column vectors are mutually perpendicular unit vectors. Furthermore,
$$R^tSR=\begin{pmatrix}\vec w_1^t\\\vec w_2^t\end{pmatrix}S\,[\,\vec w_1\ \vec w_2\,]=\begin{pmatrix}\vec w_1^t\\\vec w_2^t\end{pmatrix}[\,S\vec w_1\ \ S\vec w_2\,]=\begin{pmatrix}\vec w_1^t\\\vec w_2^t\end{pmatrix}[\,\lambda\vec w_2\ \ {-\lambda\vec w_1}\,]=\lambda\begin{pmatrix}\vec w_1\cdot\vec w_2&-\vec w_1\cdot\vec w_1\\\vec w_2\cdot\vec w_2&-\vec w_2\cdot\vec w_1\end{pmatrix}=\begin{pmatrix}0&-\lambda\\\lambda&0\end{pmatrix}$$
c) Let $r\in\mathbb{N}$ and
◦ S be a $2r\times2r$ matrix
◦ $\lambda_\ell$, $1\le\ell\le r$, be real numbers
◦ for each $1\le\ell\le r$, $\vec v_{1,\ell}$ and $\vec v_{2,\ell}$ be two mutually perpendicular, complex conjugate unit vectors
◦ $\vec v_{i,\ell}$ and $\vec v_{j,\ell'}$ be perpendicular for all $1\le i,j\le2$ and all $\ell\ne\ell'$ between 1 and $r$
◦ $S\vec v_{1,\ell}=\imath\lambda_\ell\vec v_{1,\ell}$ and $S\vec v_{2,\ell}=-\imath\lambda_\ell\vec v_{2,\ell}$, for each $1\le\ell\le r$.
Set
$$\vec w_{1,\ell}=\tfrac1{\sqrt2\,\imath}\big(\vec v_{1,\ell}-\vec v_{2,\ell}\big)\qquad\vec w_{2,\ell}=\tfrac1{\sqrt2}\big(\vec v_{1,\ell}+\vec v_{2,\ell}\big)$$
for each $1\le\ell\le r$. Then, as in part (a), $\vec w_{1,1},\vec w_{2,1},\cdots,\vec w_{1,r},\vec w_{2,r}$ are mutually perpendicular real unit vectors obeying
$$S\vec w_{1,\ell}=\lambda_\ell\vec w_{2,\ell}\qquad S\vec w_{2,\ell}=-\lambda_\ell\vec w_{1,\ell}$$
and if $R=[\,\vec w_{1,1},\vec w_{2,1},\cdots,\vec w_{1,r},\vec w_{2,r}\,]$, then
$$R^tSR=\bigoplus_{\ell=1}^r\begin{pmatrix}0&-\lambda_\ell\\\lambda_\ell&0\end{pmatrix}$$
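Part (b) is easy to verify numerically for a hypothetical $2\times2$ example: $S=\begin{pmatrix}0&-\lambda\\\lambda&0\end{pmatrix}$ has eigenvectors $\vec v_1=(1,-\imath)/\sqrt2$ and $\vec v_2=\overline{\vec v_1}$ with eigenvalues $\pm\imath\lambda$, so the recipe above should reproduce $R^tSR=\begin{pmatrix}0&-\lambda\\\lambda&0\end{pmatrix}$:

```python
import math

# Hypothetical 2x2 example of Problem I.9(b): build w1, w2 from the complex
# conjugate eigenvector pair of S = [[0,-lam],[lam,0]] and check R^t S R.
lam = 1.7
S = [[0.0, -lam], [lam, 0.0]]

v1 = [1 / math.sqrt(2), -1j / math.sqrt(2)]   # S v1 = i*lam*v1
v2 = [z.conjugate() for z in v1]              # S v2 = -i*lam*v2

w1 = [(a - b) / (math.sqrt(2) * 1j) for a, b in zip(v1, v2)]
w2 = [(a + b) / math.sqrt(2) for a, b in zip(v1, v2)]

# w1 and w2 come out real, as claimed in part (a)
assert all(abs(z.imag) < 1e-12 for z in w1 + w2)

R = [[w1[0].real, w2[0].real], [w1[1].real, w2[1].real]]
RtSR = [[sum(R[k][i] * S[k][l] * R[l][j] for k in range(2) for l in range(2))
         for j in range(2)] for i in range(2)]
expected = [[0.0, -lam], [lam, 0.0]]
assert all(abs(RtSR[i][j] - expected[i][j]) < 1e-12 for i in range(2) for j in range(2))
```

Here $R=\begin{pmatrix}0&1\\-1&0\end{pmatrix}$, and the conjugation indeed reproduces the rotation generator.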
Problem I.10 Let $P(z)=\sum_{i\ge0}c_iz^i$ be a power series with complex coefficients and infinite radius of convergence and let $f(a)$ be an even element of $\mathcal S\otimes\bigwedge V$. Show that
$$\frac\partial{\partial a_\ell}P\big(f(a)\big)=P'\big(f(a)\big)\frac\partial{\partial a_\ell}f(a)$$

Solution. Write $f(a)=f_0+f_1(a)$ with $f_0\in\mathbb{C}$ and $f_1(a)$ having no component in $\mathcal S_0\otimes\bigwedge^0V$. We may allow $P(z)$ to have any radius of convergence strictly larger than $|f_0|$. By Problem I.7,
$$P\big(f(a)\big)=\sum_{n=0}^\infty c_nf(a)^n=\sum_{n=0}^D\tfrac1{n!}P^{(n)}(f_0)\,f_1(a)^n$$
As $\frac\partial{\partial a_\ell}f(a)=\frac\partial{\partial a_\ell}f_1(a)$, it suffices, by linearity in $P$, to show that
$$\frac\partial{\partial a_\ell}f_1(a)^n=n\,f_1(a)^{n-1}\frac\partial{\partial a_\ell}f_1(a)\tag{D.1}$$
For any even $g(a)$ and any $h(a)$, the product rule
$$\frac\partial{\partial a_\ell}\big[g(a)h(a)\big]=\Big[\frac\partial{\partial a_\ell}g(a)\Big]h(a)+g(a)\Big[\frac\partial{\partial a_\ell}h(a)\Big]$$
applies. (D.1) follows easily from the product rule, by induction on $n$.
Problem I.11 Let V and V$'$ be vector spaces with bases $\{a_1,\dots,a_D\}$ and $\{b_1,\dots,b_{D'}\}$ respectively. Let S and T be $D\times D$ and $D'\times D'$ skew symmetric matrices. Prove that
$$\int\Big[\int f(a,b)\,d\mu_S(a)\Big]d\mu_T(b)=\int\Big[\int f(a,b)\,d\mu_T(b)\Big]d\mu_S(a)$$

Solution. Let $\{c_1,\dots,c_D\}$ and $\{d_1,\dots,d_{D'}\}$ be bases for second copies of V and V$'$, respectively. It suffices to consider $f(a,b)=e^{\Sigma_ic_ia_i+\Sigma_id_ib_i}$, because all $f(a,b)$'s can be constructed by taking linear combinations of derivatives of $e^{\Sigma_ic_ia_i+\Sigma_id_ib_i}$ with respect to the $c_i$'s and $d_i$'s. For $f(a,b)=e^{\Sigma_ic_ia_i+\Sigma_id_ib_i}$,
$$\begin{aligned}\int\Big[\int f(a,b)\,d\mu_S(a)\Big]d\mu_T(b)&=\int e^{\Sigma_id_ib_i}\Big[\int e^{\Sigma_ic_ia_i}\,d\mu_S(a)\Big]d\mu_T(b)=\int e^{\Sigma_id_ib_i}e^{-\frac12\Sigma_{ij}c_iS_{ij}c_j}\,d\mu_T(b)\\ &=e^{-\frac12\Sigma_{ij}c_iS_{ij}c_j}\int e^{\Sigma_id_ib_i}\,d\mu_T(b)=e^{-\frac12\Sigma_{ij}c_iS_{ij}c_j}\,e^{-\frac12\Sigma_{ij}d_iT_{ij}d_j}\end{aligned}$$
and, by the same computation,
$$\int\Big[\int f(a,b)\,d\mu_T(b)\Big]d\mu_S(a)=e^{-\frac12\Sigma_{ij}d_iT_{ij}d_j}\,e^{-\frac12\Sigma_{ij}c_iS_{ij}c_j}$$
are equal.
Problem I.12 Let V be a D dimensional vector space with basis $\{a_1,\dots,a_D\}$ and V$'$ be a second copy of V with basis $\{c_1,\dots,c_D\}$. Let S be a $D\times D$ skew symmetric matrix. Prove that
$$\int e^{\Sigma_ic_ia_i}f(a)\,d\mu_S(a)=e^{-\frac12\Sigma_{ij}c_iS_{ij}c_j}\int f(a-Sc)\,d\mu_S(a)$$
Here $(Sc)_i=\sum_jS_{ij}c_j$.

Solution. It suffices to consider $f(a)=e^{\Sigma_ib_ia_i}$, because all $f$'s can be constructed by taking linear combinations of derivatives of $e^{\Sigma_ib_ia_i}$ with respect to the $b_i$'s. For $f(a)=e^{\Sigma_ib_ia_i}$,
$$\int e^{\Sigma_ic_ia_i}f(a)\,d\mu_S(a)=\int e^{\Sigma_i(b_i+c_i)a_i}\,d\mu_S(a)=e^{-\frac12\Sigma_{ij}(b_i+c_i)S_{ij}(b_j+c_j)}=e^{-\frac12\Sigma_{ij}c_iS_{ij}c_j}\,e^{-\Sigma_{ij}b_iS_{ij}c_j}\,e^{-\frac12\Sigma_{ij}b_iS_{ij}b_j}$$
and
$$e^{-\frac12\Sigma_{ij}c_iS_{ij}c_j}\int f(a-Sc)\,d\mu_S(a)=e^{-\frac12\Sigma_{ij}c_iS_{ij}c_j}\int e^{\Sigma_ib_i(a_i-\Sigma_jS_{ij}c_j)}\,d\mu_S(a)=e^{-\frac12\Sigma_{ij}c_iS_{ij}c_j}\,e^{-\Sigma_{ij}b_iS_{ij}c_j}\,e^{-\frac12\Sigma_{ij}b_iS_{ij}b_j}$$
are equal.
Problem I.13 Let V be a complex vector space with even dimension $D=2r$ and basis $\{\psi_1,\cdots,\psi_r,\bar\psi_1,\cdots,\bar\psi_r\}$. Here, $\bar\psi_i$ need not be the complex conjugate of $\psi_i$. Let A be an $r\times r$ matrix and $\int\cdot\,d\mu_A(\psi,\bar\psi)$ the Grassmann Gaussian integral obeying
$$\int\psi_i\psi_j\,d\mu_A(\psi,\bar\psi)=\int\bar\psi_i\bar\psi_j\,d\mu_A(\psi,\bar\psi)=0\qquad\int\psi_i\bar\psi_j\,d\mu_A(\psi,\bar\psi)=A_{ij}$$
a) Prove
$$\int\psi_{i_n}\cdots\psi_{i_1}\bar\psi_{j_1}\cdots\bar\psi_{j_m}\,d\mu_A(\psi,\bar\psi)=\sum_{\ell=1}^m(-1)^{\ell+1}A_{i_1j_\ell}\int\psi_{i_n}\cdots\psi_{i_2}\bar\psi_{j_1}\cdots\not{\bar\psi}_{j_\ell}\cdots\bar\psi_{j_m}\,d\mu_A(\psi,\bar\psi)$$
Here, the $\not{\bar\psi}_{j_\ell}$ signifies that the factor $\bar\psi_{j_\ell}$ is omitted from the integrand.
b) Prove that, if $n\ne m$,
$$\int\psi_{i_n}\cdots\psi_{i_1}\bar\psi_{j_1}\cdots\bar\psi_{j_m}\,d\mu_A(\psi,\bar\psi)=0$$
c) Prove that
$$\int\psi_{i_n}\cdots\psi_{i_1}\bar\psi_{j_1}\cdots\bar\psi_{j_n}\,d\mu_A(\psi,\bar\psi)=\det\big[A_{i_kj_\ell}\big]_{1\le k,\ell\le n}$$
d) Let V$'$ be a second copy of V with basis $\{\zeta_1,\cdots,\zeta_r,\bar\zeta_1,\cdots,\bar\zeta_r\}$. View $e^{\Sigma_i(\bar\zeta_i\psi_i+\bar\psi_i\zeta_i)}$ as an element of $\bigwedge V'\otimes\bigwedge V$. Prove that
$$\int e^{\Sigma_i(\bar\zeta_i\psi_i+\bar\psi_i\zeta_i)}\,d\mu_A(\psi,\bar\psi)=e^{\Sigma_{i,j}\bar\zeta_iA_{ij}\zeta_j}$$

Solution. Define
$$a_i(\psi,\bar\psi)=\begin{cases}\psi_i&\text{if }1\le i\le r\\\bar\psi_{i-r}&\text{if }r+1\le i\le2r\end{cases}\qquad\text{and}\qquad S=\begin{pmatrix}0&A\\-A^t&0\end{pmatrix}$$
Then
$$\int f(a)\,d\mu_S(a)=\int f\big(a(\psi,\bar\psi)\big)\,d\mu_A(\psi,\bar\psi)$$
a) Translating the integration by parts formula
$$\int a_kf(a)\,d\mu_S(a)=\sum_{\ell=1}^DS_{k\ell}\int\frac\partial{\partial a_\ell}f(a)\,d\mu_S(a)$$
of Proposition I.17 into the $\psi,\bar\psi$ language gives
$$\int\psi_kf(\psi,\bar\psi)\,d\mu_A(\psi,\bar\psi)=\sum_{\ell=1}^rA_{k\ell}\int\frac\partial{\partial\bar\psi_\ell}f(\psi,\bar\psi)\,d\mu_A(\psi,\bar\psi)$$
Now, just apply this formula to
$$\int\psi_{i_n}\cdots\psi_{i_1}\bar\psi_{j_1}\cdots\bar\psi_{j_m}\,d\mu_A(\psi,\bar\psi)=(-1)^{(m+1)(n-1)}\int\psi_{i_1}\,\bar\psi_{j_1}\cdots\bar\psi_{j_m}\psi_{i_n}\cdots\psi_{i_2}\,d\mu_A(\psi,\bar\psi)$$
with $k=i_1$ and $f(\psi,\bar\psi)=\bar\psi_{j_1}\cdots\bar\psi_{j_m}\psi_{i_n}\cdots\psi_{i_2}$. This gives
$$\begin{aligned}\int\psi_{i_n}\cdots\psi_{i_1}\bar\psi_{j_1}\cdots\bar\psi_{j_m}\,d\mu_A&=(-1)^{(m+1)(n-1)}\sum_{\ell=1}^m(-1)^{\ell-1}A_{i_1j_\ell}\int\bar\psi_{j_1}\cdots\not{\bar\psi}_{j_\ell}\cdots\bar\psi_{j_m}\psi_{i_n}\cdots\psi_{i_2}\,d\mu_A\\ &=(-1)^{(m+1)(n-1)+(m-1)(n-1)}\sum_{\ell=1}^m(-1)^{\ell-1}A_{i_1j_\ell}\int\psi_{i_n}\cdots\psi_{i_2}\bar\psi_{j_1}\cdots\not{\bar\psi}_{j_\ell}\cdots\bar\psi_{j_m}\,d\mu_A\\ &=\sum_{\ell=1}^m(-1)^{\ell-1}A_{i_1j_\ell}\int\psi_{i_n}\cdots\psi_{i_2}\bar\psi_{j_1}\cdots\not{\bar\psi}_{j_\ell}\cdots\bar\psi_{j_m}\,d\mu_A\end{aligned}$$
as desired.
d) Define
$$b_i(\psi,\bar\psi)=\begin{cases}\bar\zeta_i&\text{if }1\le i\le r\\-\zeta_{i-r}&\text{if }r+1\le i\le2r\end{cases}$$
Then
$$\int e^{\Sigma_i(\bar\zeta_i\psi_i+\bar\psi_i\zeta_i)}\,d\mu_A(\psi,\bar\psi)=\int e^{\Sigma_ib_ia_i}\,d\mu_S(a)=e^{-\frac12\Sigma_{i,j}b_iS_{ij}b_j}=e^{\Sigma_{i,j}\bar\zeta_iA_{ij}\zeta_j}$$
since
$$\frac12\big(\bar\zeta,-\zeta\big)\begin{pmatrix}0&A\\-A^t&0\end{pmatrix}\begin{pmatrix}\bar\zeta\\-\zeta\end{pmatrix}=\frac12\big(\bar\zeta,-\zeta\big)\begin{pmatrix}-A\zeta\\-A^t\bar\zeta\end{pmatrix}=\frac12\big[-\bar\zeta A\zeta+\zeta A^t\bar\zeta\big]=-\bar\zeta A\zeta$$
b) Apply $\prod_{\ell=1}^n\frac\partial{\partial\bar\zeta_{i_\ell}}$ and $\prod_{\ell=1}^m\frac\partial{\partial\zeta_{j_\ell}}$ to the conclusion of part (d) and set $\zeta=\bar\zeta=0$. The resulting left hand side is, up to a sign, $\int\psi_{i_n}\cdots\psi_{i_1}\bar\psi_{j_1}\cdots\bar\psi_{j_m}\,d\mu_A(\psi,\bar\psi)$. Unless $m=n$, the right hand side is zero.
c) The proof is by induction on $n$. For $n=1$, by the definition of $d\mu_A$,
$$\int\psi_{i_1}\bar\psi_{j_1}\,d\mu_A(\psi,\bar\psi)=A_{i_1j_1}$$
as desired. If the result is known for $n-1$, then, by part (a),
$$\int\psi_{i_n}\cdots\psi_{i_1}\bar\psi_{j_1}\cdots\bar\psi_{j_n}\,d\mu_A=\sum_{\ell=1}^n(-1)^{\ell+1}A_{i_1j_\ell}\int\psi_{i_n}\cdots\psi_{i_2}\bar\psi_{j_1}\cdots\not{\bar\psi}_{j_\ell}\cdots\bar\psi_{j_n}\,d\mu_A=\sum_{\ell=1}^n(-1)^{\ell+1}A_{i_1j_\ell}\det M^{(1,\ell)}$$
where $M^{(1,\ell)}$ is the matrix $\big[A_{i_\alpha j_\beta}\big]_{1\le\alpha,\beta\le n}$ with row 1 and column $\ell$ deleted. So expansion along the first row,
$$\det\big[A_{i_\alpha j_\beta}\big]_{1\le\alpha,\beta\le n}=\sum_{\ell=1}^n(-1)^{\ell+1}A_{i_1j_\ell}\det M^{(1,\ell)}$$
gives the desired result.
Problem I.14 Prove that $C(c)=-\frac12\Sigma_{ij}c_iS_{ij}c_j+G(-Sc)$ where $(Sc)_i=\sum_jS_{ij}c_j$.

Solution. By Problem I.12, with $f(a)=e^{W(a)}$, and Problem I.5,
$$\begin{aligned}C(c)&=\log\tfrac1Z\int e^{\Sigma_ic_ia_i}e^{W(a)}\,d\mu_S(a)=\log\Big[\tfrac1Z\,e^{-\frac12\Sigma_{ij}c_iS_{ij}c_j}\int e^{W(a-Sc)}\,d\mu_S(a)\Big]\\ &=-\tfrac12\Sigma_{ij}c_iS_{ij}c_j+\log\Big[\tfrac1Z\int e^{W(a-Sc)}\,d\mu_S(a)\Big]=-\tfrac12\Sigma_{ij}c_iS_{ij}c_j+G(-Sc)\end{aligned}$$
Problem I.15 We have normalized $G_{J+1}$ so that $G_{J+1}(0)=0$. So the ratio $\frac{Z_{J+1}}{Z_J}$ in
$$G_{J+1}(c)=\log\frac{Z_J}{Z_{J+1}}\int e^{G_J(c+a)}\,d\mu_{S^{(J+1)}}(a)$$
had better obey
$$\frac{Z_{J+1}}{Z_J}=\int e^{G_J(a)}\,d\mu_{S^{(J+1)}}(a)$$
Verify by direct computation that this is the case.

Solution. By Proposition I.21,
$$Z_{J+1}=\int e^{W(a)}\,d\mu_{S^{(\le J+1)}}(a)=\int\Big[\int e^{W(a+c)}\,d\mu_{S^{(\le J)}}(a)\Big]d\mu_{S^{(J+1)}}(c)=\int Z_J\,e^{G_J(c)}\,d\mu_{S^{(J+1)}}(c)=Z_J\int e^{G_J(c)}\,d\mu_{S^{(J+1)}}(c)$$
Problem I.16 Prove that
$$G_J=\Omega_{S^{(J)}}\circ\Omega_{S^{(J-1)}}\circ\cdots\circ\Omega_{S^{(1)}}(W)=\Omega_{S^{(1)}}\circ\Omega_{S^{(2)}}\circ\cdots\circ\Omega_{S^{(J)}}(W)$$

Solution. By the definitions of $G_J$ and $\Omega_S(W)$, $G_J=\Omega_{S^{(\le J)}}(W)$. Now apply Theorem I.29.
Problem I.17 Prove that
$$:f(a):\;=\int f(a+b)\,d\mu_{-S}(b)\qquad\qquad f(a)=\int{:}f{:}(a+b)\,d\mu_S(b)$$

Solution. It suffices to consider $f(a)=e^{\Sigma_ic_ia_i}$. For this $f$,
$$\int f(a+b)\,d\mu_{-S}(b)=\int e^{\Sigma_ic_i(a_i+b_i)}\,d\mu_{-S}(b)=e^{\Sigma_ic_ia_i}\int e^{\Sigma_ic_ib_i}\,d\mu_{-S}(b)=e^{\Sigma_ic_ia_i}\,e^{\frac12\Sigma_{i,j}c_iS_{ij}c_j}=\,:f(a):$$
and
$$\int{:}f{:}(a+b)\,d\mu_S(b)=\int e^{\Sigma_ic_i(a_i+b_i)}e^{\frac12\Sigma_{i,j}c_iS_{ij}c_j}\,d\mu_S(b)=e^{\Sigma_ic_ia_i}e^{\frac12\Sigma_{i,j}c_iS_{ij}c_j}\int e^{\Sigma_ic_ib_i}\,d\mu_S(b)=e^{\Sigma_ic_ia_i}e^{\frac12\Sigma_{i,j}c_iS_{ij}c_j}e^{-\frac12\Sigma_{i,j}c_iS_{ij}c_j}=e^{\Sigma_ic_ia_i}=f(a)$$
Problem I.18 Prove that
$$\frac\partial{\partial a_\ell}:f(a):\;=\;:\frac\partial{\partial a_\ell}f(a):$$

Solution. By linearity, it suffices to consider $f(a)=a_I$ with $I=(i_1,\cdots,i_n)$.
$$\frac\partial{\partial a_\ell}:a_I:=\frac\partial{\partial a_\ell}\frac\partial{\partial b_{i_1}}\cdots\frac\partial{\partial b_{i_n}}e^{\frac12\Sigma_{i,j}b_iS_{ij}b_j}e^{\Sigma_ib_ia_i}\Big|_{b=0}=(-1)^n\frac\partial{\partial b_{i_1}}\cdots\frac\partial{\partial b_{i_n}}\frac\partial{\partial a_\ell}e^{\frac12\Sigma_{i,j}b_iS_{ij}b_j}e^{\Sigma_ib_ia_i}\Big|_{b=0}=-(-1)^n\frac\partial{\partial b_{i_1}}\cdots\frac\partial{\partial b_{i_n}}b_\ell\,e^{\frac12\Sigma_{i,j}b_iS_{ij}b_j}e^{\Sigma_ib_ia_i}\Big|_{b=0}$$
If $\ell\notin I$, we get zero. Otherwise, by antisymmetry, we may assume that $\ell=i_n$. Then
$$\frac\partial{\partial a_\ell}:a_I:=-(-1)^n\frac\partial{\partial b_{i_1}}\cdots\frac\partial{\partial b_{i_{n-1}}}e^{\frac12\Sigma_{i,j}b_iS_{ij}b_j}e^{\Sigma_ib_ia_i}\Big|_{b=0}=(-1)^{n-1}\,:a_{i_1}\cdots a_{i_{n-1}}:\;=\;:\frac\partial{\partial a_\ell}a_I:$$
Problem I.19 Prove that
$$\int:g(a)a_i:\,f(a)\,d\mu_S(a)=\sum_{\ell=1}^DS_{i\ell}\int:g(a):\,\frac\partial{\partial a_\ell}f(a)\,d\mu_S(a)$$

Solution. It suffices to consider $g(a)=a_I$ and $f(a)=a_J$. Observe that
$$\int e^{\Sigma_mb_ma_m}e^{\frac12\Sigma_{m,j}b_mS_{mj}b_j}e^{\Sigma_mc_ma_m}\,d\mu_S(a)=e^{\frac12\Sigma_{m,j}b_mS_{mj}b_j}e^{-\frac12\Sigma_{m,j}(b_m+c_m)S_{mj}(b_j+c_j)}=e^{-\Sigma_{m,j}b_mS_{mj}c_j}e^{-\frac12\Sigma_{m,j}c_mS_{mj}c_j}$$
Now apply $\frac\partial{\partial b_i}$ to both sides:
$$\begin{aligned}\frac\partial{\partial b_i}\int e^{\Sigma_mb_ma_m}e^{\frac12\Sigma_{m,j}b_mS_{mj}b_j}e^{\Sigma_mc_ma_m}\,d\mu_S(a)&=-\sum_{\ell=1}^DS_{i,\ell}c_\ell\,e^{-\Sigma_{m,j}b_mS_{mj}c_j}e^{-\frac12\Sigma_{m,j}c_mS_{mj}c_j}\\ &=-\sum_{\ell=1}^DS_{i,\ell}c_\ell\int e^{\Sigma_mb_ma_m}e^{\frac12\Sigma_{m,j}b_mS_{mj}b_j}e^{\Sigma_mc_ma_m}\,d\mu_S(a)\\ &=\sum_{\ell=1}^DS_{i,\ell}\int e^{\Sigma_mb_ma_m}e^{\frac12\Sigma_{m,j}b_mS_{mj}b_j}\frac\partial{\partial a_\ell}e^{\Sigma_mc_ma_m}\,d\mu_S(a)\end{aligned}$$
Applying $(-1)^{|J|}\prod_{j\in I}\frac\partial{\partial b_j}\prod_{k\in J}\frac\partial{\partial c_k}$ to both sides and setting $b=c=0$ gives the desired result. The $(-1)^{|J|}$ is to move $\prod_{k\in J}\frac\partial{\partial c_k}$ past the $\frac\partial{\partial b_i}$ on the left and past the $\frac\partial{\partial a_\ell}$ on the right.
Problem I.20 Prove that
$$:f:(a+b)=\,:f(a+b):_a=\,:f(a+b):_b$$
Here $:\cdot:_a$ means Wick ordering of the $a_i$'s and $:\cdot:_b$ means Wick ordering of the $b_i$'s. Precisely, if $\{a_i\}$, $\{b_i\}$, $\{A_i\}$, $\{B_i\}$ are bases of four vector spaces, all of the same dimension,
$$\begin{aligned}:e^{\Sigma_iA_ia_i+\Sigma_iB_ib_i}:_a&=e^{\Sigma_iB_ib_i}\,:e^{\Sigma_iA_ia_i}:_a=e^{\Sigma_iA_ia_i+\Sigma_iB_ib_i}\,e^{\frac12\Sigma_{ij}A_iS_{ij}A_j}\\ :e^{\Sigma_iA_ia_i+\Sigma_iB_ib_i}:_b&=e^{\Sigma_iA_ia_i}\,:e^{\Sigma_iB_ib_i}:_b=e^{\Sigma_iA_ia_i+\Sigma_iB_ib_i}\,e^{\frac12\Sigma_{ij}B_iS_{ij}B_j}\end{aligned}$$

Solution. It suffices to consider $f(a)=e^{\Sigma_ic_ia_i}$. Then
$$:f:(a+b)=\,:f(d):_d\big|_{d=a+b}=e^{\Sigma_ic_id_i}e^{\frac12\Sigma_{ij}c_iS_{ij}c_j}\big|_{d=a+b}=e^{\Sigma_ic_i(a_i+b_i)}e^{\frac12\Sigma_{ij}c_iS_{ij}c_j}$$
and
$$:f(a+b):_a=\,:e^{\Sigma_ic_i(a_i+b_i)}:_a=e^{\Sigma_ic_ib_i}\,:e^{\Sigma_ic_ia_i}:_a=e^{\Sigma_ic_ib_i}e^{\Sigma_ic_ia_i}e^{\frac12\Sigma_{ij}c_iS_{ij}c_j}=e^{\Sigma_ic_i(a_i+b_i)}e^{\frac12\Sigma_{ij}c_iS_{ij}c_j}$$
and
$$:f(a+b):_b=\,:e^{\Sigma_ic_i(a_i+b_i)}:_b=e^{\Sigma_ic_ia_i}\,:e^{\Sigma_ic_ib_i}:_b=e^{\Sigma_ic_ia_i}e^{\Sigma_ic_ib_i}e^{\frac12\Sigma_{ij}c_iS_{ij}c_j}=e^{\Sigma_ic_i(a_i+b_i)}e^{\frac12\Sigma_{ij}c_iS_{ij}c_j}$$
All three are the same.
Here S and T are skew symmetric matrices, : · :S+T means Wick ordering with respect to S + T and .. · .. a,S means Wick ordering of the ai ’s with respect to S and of the bi ’s with b,T
respect to T . Solution. It suffices to consider f (a) = eΣi ci ai . Then 1 :f :S+T (a + b) = :f (d):d,S+T d=a+b = eΣi ci di e 2 Σij ci (Sij +Tij )cj d=a+b 1
= eΣi ci (ai +bi ) e 2 Σij ci (Sij +Tij )cj 117
and . f (a + b) . . Σ c (a +b ) . . Σcb . Σ c a . . a,S = . e i i i i . a,S = :e i i i :a,S . e i i i . b,T b,T
b,T
=e
Σ i ci a i
e
1 2 Σij ci Sij cj
1
eΣi ci bi e 2 Σij ci Tij cj
1
= eΣi ci (ai +bi ) e 2 Σij ci (Sij +Tij )cj The two right hand sides are the same.
Problem I.22 Prove that
$$\int:f(a):\,d\mu_S(a)=f(0)$$

Solution. It suffices to consider $f(a)=e^{\Sigma_ic_ia_i}$. Then
$$\int:f(a):\,d\mu_S(a)=\int:e^{\Sigma_ic_ia_i}:\,d\mu_S(a)=e^{\frac12\Sigma_{ij}c_iS_{ij}c_j}\int e^{\Sigma_ic_ia_i}\,d\mu_S(a)=e^{\frac12\Sigma_{ij}c_iS_{ij}c_j}e^{-\frac12\Sigma_{ij}c_iS_{ij}c_j}=1=f(0)$$
where T(i,µ),(i0 ,µ0 ) =
Here T is a skew symmetric matrix with
0 S`i,µ ,`i0 ,µ0
Pn
i=1 ei
if i = i0 if i = 6 i0
rows and columns, numbered, in order
(1, 1), · · · , (1, e1 ), (2, 1), · · · (2, e2 ), · · · , (n, en ). The product in the integrand is also in this order. 118
Solution. The proof is by induction on the number of a’s. That is, on Z Z :ai : dµS (ψ) = ai dµS (ψ) = 0 Z Z :ai1 : :ai2 : dµS (ψ) = ai1 ai2 dµS (ψ) = Si1 ,i2 Z :ai1 ai2 : dµS (ψ) = 0
Pn
`=1 e` .
As
the induction starts OK. By Problem I.19, Z Y Z e −1 n e . 1Q a . . Qi a . dµ (ψ) = P S . . `1,µ . `i,µ . S `1e1 ,p p
i=1 µ=1
µ=1
Define E0 = 0
Ei =
i X
∂ ∂ap
n e Y . Qi a . dµ (ψ) . `i,µ . S
i=2 µ=1
ej
j=1
and lEi−1 +µ = `i,µ Under this new indexing, the last equation becomes Z Y n
. .
Ei Q
.
i=1 j=Ei−1 +1
alj . dµS (ψ) = =
E Pn
k=E1 +1
P p
SlE1 ,p
Z
E1 −1 . Q alj .. . j=1
(−1)k−E1 −1 SlE1 ,lk
Z
∂ ∂ap
n Y
. .
Ei Q
alj .. dµS (ψ)
i=2 j=Ei−1 +1 n E1 −1 E Y . Q . . Qi a lj . . . j=Ei−1 +1 j=1 i=2 j6=k
alj .. dµS (ψ)
where we have used the product rule (Proposition I.13), Problem I.18 and the observation Qei that .. µ=1 a`i,µ .. has the same parity, even or odd, as ei does. Let Mk` be the matrix obtained from T by deleting rows k and ` and columns k and ` . By the inductive hypothesis Z Y n
. .
Ei Q
i=1 j=Ei−1 +1
alj .. dµS (ψ) =
E Pn
k=E1 +1
(−1)k−E1 −1 SlE1 ,lk Pf ME1 k
By Proposition I.18.a, with k replaced by E1 , Z Y n
. .
Ei Q
i=1 j=Ei−1 +1
alj .. dµS (ψ) = Pf(T )
since, for all k > E1 , sgn(E1 − k) (−1)E1 +k = (−1)k−E1 −1 . 119
Problem I.24 Let aij , 1 ≤ i, j ≤ n be complex numbers. Prove Hadamard’s inequality h i 1/2 n P n Q 2 |aij | det aij ≤ i=1
from Gram’s inequality.
j=1
Solution. Set ai = (ai1 , · · · , ain ) and write aij = hai , ej i with the vectors ej =
(0, · · · , 1, · · · , 0) , j = 1, · · · , n forming the standard basis for Cn . Then, by Gram’s inequality 1/2 h i h i n n n P Q Q 2 |a | ≤ ka k ke k = = det ha , e i det a ij i j i j ij i=1
i=1
j=1
Problem I.25 Let V be a vector space with basis {a1 , · · · , aD }. Let S`,`0 be a skew symmetric D × D matrix with S(`, `0 ) = hf` , g`0 iH
for all 1 ≤ `, `0 ≤ D
p for some Hilbert space H and vectors f` , g`0 ∈ H. Set F` = kf` kH kg` kH . Prove that Z n Y Q a dµ (a) ≤ Fi` i` S `=1
1≤`≤n
Solution. I will prove both (a)
Z n Y Q ≤ a dµ (a) Fi` i S ` `=1
(b)
1≤`≤n
Z Y Y m n Q Q . . Fik F j` a ik . aj` . dµS (a) ≤ 2n k=1
`=1
1≤k≤m 1≤`≤n
For (a) Just apply Gram’s inequality to h Z Q i n ai` dµS (a) = Pf Sik ,i` `=1
120
1≤k,`≤n
q = det Sik ,i`
For (b) observe that, by Problem I.17, Z m n Q Q aik .. aj` .. dµS (a) k=1 `=1 Z m n Q Q = a ik aj` + bj` dµS (a) dµ−S (b) k=1 `=1 Z Z m X Q Q Q bj` dµ−S (b) a ik aj` dµS (a) = ± I⊂{1,···,n}
k=1
`6∈I
`∈I
There are 2n terms. Apply (a) to both factors of each term.
Problem I.26 Prove, under the hypotheses of Proposition I.34, that Z Y n e . Qi ψ ` , κ . dµ (ψ) ≤ . i,µ i,µ . A i=1 µ=1
Y
1≤i≤n 1≤µ≤ei κi,µ =0
√
2kf`i,µ kH
Y
1≤i≤n 1≤µ≤ei κi,µ =1
√ 2kg`i,µ kH
Solution. Define (i, µ) 1 ≤ i ≤ n, 1 ≤ µ ≤ ei , κi,µ = 0 S¯ = (i, µ) 1 ≤ i ≤ n, 1 ≤ µ ≤ ei , κi,µ = 1
S=
If the integral does not vanish, the cardinality of S and S¯ coincide and there is a sign ± such that (this is a special case of Problem I.23) Z Y n e . Qi ψ ` , κ . dµ (ψ) = ± det M . A α,β α∈S i,µ i,µ . i=1 µ=1
where M(i,µ),(i0 ,µ0 ) =
¯ β∈S
0 A(`i,µ , `i0 ,µ0 )
if i = i0 if i = 6 i0
Define the vectors uα , α ∈ S and v β , β ∈ S¯ in Cn+1 by ( 1 if i = n + 1 uα = 1 if α = (i, µ) for some 1 ≤ µ ≤ ei i 0 otherwise ( 1 if i = n + 1 viβ = −1 if β = (i, µ) for some 1 ≤ µ ≤ ei 0 otherwise 121
¯ Observe that, for all α ∈ S and β ∈ S,
√ kuα k = kv β k = 2 1 if α = (i, µ), β = (i0 , µ0 ) with i 6= i0 α β u ·v = 0 if α = (i, µ), β = (i0 , µ0 ) with i = i0
Hence, setting
Fα = uα ⊗ f`i,µ ∈Cn+1 ⊗ H
for α = (i, µ) ∈ S
Gβ = v β ⊗ g`i,µ ∈ Cn+1 ⊗ H
for β = (i, µ) ∈ S¯
we have Mα,β = hFα , Gβ iCn+1 ⊗H and consequently, by Gram’s inequality, Z n e Y . Qi . ψ `i,µ , κi,µ . dµA (ψ) = det Mα,β α∈S . ¯ β∈S i=1 µ=1 Y Y ≤ kFα kCn+1 ⊗H kGβ kCn+1 ⊗H ¯ β∈S
α∈S
≤
Y√
α∈S
122
2kf`α kH
Y√ 2kg`β kH
¯ β∈S
§II. Fermionic Expansions
Problem II.1 Let D X
F (a) =
f (j1 , j2 ) aj1 aj2
j1 ,j2 =1 D X
W (a) =
w(j1 , j2 , j3 , j4 ) aj1 aj2 aj3 aj4
j1 ,j2 ,j3 ,j4 =1
with f (j1 , j2 ) and w(j1 , j2 , j3 , j4 ) antisymmetric under permutation of their arguments. a) Set S(λ) = Compute b) Set
1 Zλ
Z
F (a) e
d` S(λ) dλ` λ=0
λW (a)
d` R(λ) dλ` λ=0
where
Zλ =
eλW (a) dµS (a)
for ` = 0, 1, 2.
R(λ) = Compute
dµS (a)
Z
Z
. eλW (a+b)−λW (a) − 1 . F (b) dµ (b) . .b S
for all ` ∈ IN.
Solution. a) We have defined R R D X aj1 aj2 eλW (a) dµS (a) F (a) eλW (a) dµS (a) R R f (j1 , j2 ) = S(λ) = eλW (a) dµS (a) eλW (a) dµS (a) j ,j =1 1
2
Applying Proposition I.17 (integration by parts) to the numerator with k = j1 and using the antisymmetry of w gives S(λ) = =
D X
j1 ,j2 =1
D X
j1 ,j2 =1
=
D X
j1 ,j2 =1
f (j1 , j2 )
P
Sj1 ,k
k
f (j1 , j2 ) Sj1 ,j2
f (j1 , j2 ) Sj1 ,j2
R
∂ ∂ak
R
aj2 eλW (a) dµS (a)
eλW (a) dµS (a) R X aj2 eλW (a) ∂∂ak λW (a) dµS (a) R − Sj1 ,k eλW (a) dµS (a) k R X aj2 aj4 aj5 aj6 eλW (a) dµS (a) R − 4λ Sj1 ,j3 w(j3 , j4 , j5 , j6 ) eλW (a) dµS (a) j ,j ,j ,j 3
4
5
6
(D.2)
123
Setting λ = 0 gives S(0) =
D X
f (j1 , j2 )Sj1 ,j2
j1 ,j2 =1
Differentiating once with respect to λ and setting λ = 0 gives X
0
S (0) = −4
f (j1 , j2 )Sj1 ,j3 w(j3 , j4 , j5 , j6 )
j1 ,···,j6
Z
aj2 aj4 aj5 aj6 dµS (a)
Integrating by parts and using the antisymmetry of w a second time X
0
S (0) = −12 = −12
f (j1 , j2 )Sj1 ,j3 Sj2 ,j4 w(j3 , j4 , j5 , j6 )
j1 ,···,j6
X
Z
aj5 aj6 dµS (a)
f (j1 , j2 )Sj1 ,j3 Sj2 ,j4 w(j3 , j4 , j5 , j6 )Sj5 ,j6
j1 ,···,j6
To determine S 00 (0) we need the order λ contribution to X
R
$$
\begin{aligned}
&= 3\sum_{j_3,\dots,j_6} S_{j_1,j_3}S_{j_2,j_4}\,w(j_3,j_4,j_5,j_6)\,
\frac{\int a_{j_5}a_{j_6}\,e^{\lambda W(a)}\,d\mu_S(a)}{\int e^{\lambda W(a)}\,d\mu_S(a)}
-\sum_{j_3,\dots,j_6,k} S_{j_1,j_3}\,w(j_3,j_4,j_5,j_6)\,S_{j_2,k}\,
\frac{\int a_{j_4}a_{j_5}a_{j_6}\,e^{\lambda W(a)}\,\frac{\partial}{\partial a_k}\lambda W(a)\,d\mu_S(a)}{\int e^{\lambda W(a)}\,d\mu_S(a)}\\
&= 3\sum_{j_3,\dots,j_6} S_{j_1,j_3}S_{j_2,j_4}\,w(j_3,j_4,j_5,j_6)\,S_{j_5,j_6}
- 3\lambda\sum_{j_3,\dots,j_6,k} S_{j_1,j_3}S_{j_2,j_4}\,w(j_3,j_4,j_5,j_6)\,S_{j_5,k}
\int a_{j_6}\,\tfrac{\partial}{\partial a_k}W(a)\,d\mu_S(a)\\
&\qquad- \lambda\sum_{j_3,\dots,j_6,k} S_{j_1,j_3}\,w(j_3,j_4,j_5,j_6)\,S_{j_2,k}
\int a_{j_4}a_{j_5}a_{j_6}\,\tfrac{\partial}{\partial a_k}W(a)\,d\mu_S(a)
\;+\; O(\lambda^2)
\end{aligned}
$$

Hence

$$
\begin{aligned}
S''(0)
&= 24\sum_{j_1,j_2=1}^D f(j_1,j_2)\sum_{j_3,\dots,j_6,k}
S_{j_1,j_3}S_{j_2,j_4}\,w(j_3,j_4,j_5,j_6)\,S_{j_5,k}\int a_{j_6}\,\tfrac{\partial}{\partial a_k}W(a)\,d\mu_S(a)\\
&\qquad+ 8\sum_{j_1,j_2=1}^D f(j_1,j_2)\sum_{j_3,\dots,j_6,k}
S_{j_1,j_3}\,w(j_3,j_4,j_5,j_6)\,S_{j_2,k}\int a_{j_4}a_{j_5}a_{j_6}\,\tfrac{\partial}{\partial a_k}W(a)\,d\mu_S(a)\\
&= 24\sum_{j_1,j_2=1}^D f(j_1,j_2)\sum_{j_3,\dots,j_6,k}
S_{j_1,j_3}S_{j_2,j_4}\,w(j_3,j_4,j_5,j_6)\,S_{j_5,k}\int a_{j_6}\,\tfrac{\partial}{\partial a_k}W(a)\,d\mu_S(a)\\
&\qquad+ 16\sum_{j_1,j_2=1}^D f(j_1,j_2)\sum_{j_3,\dots,j_6,k}
S_{j_1,j_3}\,w(j_3,j_4,j_5,j_6)\,S_{j_4,j_5}\,S_{j_2,k}\int a_{j_6}\,\tfrac{\partial}{\partial a_k}W(a)\,d\mu_S(a)\\
&\qquad+ 8\sum_{j_1,j_2=1}^D f(j_1,j_2)\sum_{\substack{j_3,\dots,j_6\\ k,\ell}}
S_{j_1,j_3}\,w(j_3,j_4,j_5,j_6)\,S_{j_2,k}\,S_{j_4,\ell}\int a_{j_5}a_{j_6}\,\tfrac{\partial}{\partial a_\ell}\tfrac{\partial}{\partial a_k}W(a)\,d\mu_S(a)\\
&= 24\times 4\times 3\sum_{j_1,\dots,j_{10}} f(j_1,j_2)\,S_{j_1,j_3}S_{j_2,j_4}\,w(j_3,j_4,j_5,j_6)\,S_{j_5,j_7}S_{j_6,j_8}\,w(j_7,j_8,j_9,j_{10})\,S_{j_9,j_{10}}\\
&\qquad+ 16\times 4\times 3\sum_{j_1,\dots,j_{10}} f(j_1,j_2)\,S_{j_1,j_3}S_{j_2,j_7}\,w(j_3,j_4,j_5,j_6)\,S_{j_4,j_5}S_{j_6,j_8}\,w(j_7,j_8,j_9,j_{10})\,S_{j_9,j_{10}}\\
&\qquad+ 8\times 4\times 3\sum_{j_1,\dots,j_{10}} f(j_1,j_2)\,S_{j_1,j_3}S_{j_2,j_7}\,w(j_3,j_4,j_5,j_6)\,S_{j_4,j_8}S_{j_5,j_6}\,w(j_7,j_8,j_9,j_{10})\,S_{j_9,j_{10}}\\
&\qquad- 8\times 4!\sum_{j_1,\dots,j_{10}} f(j_1,j_2)\,S_{j_1,j_3}S_{j_2,j_7}\,w(j_3,j_4,j_5,j_6)\,S_{j_4,j_8}S_{j_5,j_9}S_{j_6,j_{10}}\,w(j_7,j_8,j_9,j_{10})\\
&= 24\times 4\times 3\sum_{j_1,\dots,j_{10}} f(j_1,j_2)\,S_{j_1,j_3}S_{j_2,j_4}\,w(j_3,j_4,j_5,j_6)\,S_{j_5,j_7}S_{j_6,j_8}\,w(j_7,j_8,j_9,j_{10})\,S_{j_9,j_{10}}\\
&\qquad+ 24\times 4\times 3\sum_{j_1,\dots,j_{10}} f(j_1,j_2)\,S_{j_1,j_3}S_{j_2,j_7}\,w(j_3,j_4,j_5,j_6)\,S_{j_4,j_5}S_{j_6,j_8}\,w(j_7,j_8,j_9,j_{10})\,S_{j_9,j_{10}}\\
&\qquad- 8\times 4!\sum_{j_1,\dots,j_{10}} f(j_1,j_2)\,S_{j_1,j_3}S_{j_2,j_7}\,w(j_3,j_4,j_5,j_6)\,S_{j_4,j_8}S_{j_5,j_9}S_{j_6,j_{10}}\,w(j_7,j_8,j_9,j_{10})
\end{aligned}
$$

In summary, the answer to part (a) is

$$
\begin{aligned}
S(0) &= \sum_{j_1,j_2=1}^D f(j_1,j_2)\,S_{j_1,j_2}\\
S'(0) &= -12\sum_{j_1,\dots,j_6=1}^D f(j_1,j_2)\,S_{j_1,j_3}S_{j_2,j_4}\,w(j_3,j_4,j_5,j_6)\,S_{j_5,j_6}\\
S''(0) &= 288\sum_{j_1,\dots,j_{10}} f(j_1,j_2)\,S_{j_1,j_3}S_{j_2,j_4}\,w(j_3,j_4,j_5,j_6)\,S_{j_5,j_7}S_{j_6,j_8}\,w(j_7,j_8,j_9,j_{10})\,S_{j_9,j_{10}}\\
&\qquad+ 288\sum_{j_1,\dots,j_{10}} f(j_1,j_2)\,S_{j_1,j_3}S_{j_2,j_7}\,w(j_3,j_4,j_5,j_6)\,S_{j_4,j_5}S_{j_6,j_8}\,w(j_7,j_8,j_9,j_{10})\,S_{j_9,j_{10}}\\
&\qquad- 192\sum_{j_1,\dots,j_{10}} f(j_1,j_2)\,S_{j_1,j_3}S_{j_2,j_7}\,w(j_3,j_4,j_5,j_6)\,S_{j_4,j_8}S_{j_5,j_9}S_{j_6,j_{10}}\,w(j_7,j_8,j_9,j_{10})
\end{aligned}
$$
b) First observe that

$$
W(a+b)-W(a) = \sum_{j_1,j_2,j_3,j_4=1}^D w(j_1,j_2,j_3,j_4)\Big[\,b_{j_1}b_{j_2}b_{j_3}b_{j_4} + 4\,b_{j_1}b_{j_2}b_{j_3}a_{j_4} + 6\,b_{j_1}b_{j_2}a_{j_3}a_{j_4} + 4\,b_{j_1}a_{j_2}a_{j_3}a_{j_4}\Big]
$$

is of degree at least one in $b$. By Problem I.19 and Proposition I.31.a, $\int {:}b_I{:}\;b_J\,d\mu_S(b)$ vanishes unless the degree of $J$ is at least as large as the degree of $I$. Hence $\frac{d^\ell}{d\lambda^\ell}R(\lambda)\big|_{\lambda=0}=0$ for all $\ell\ge 3$. Also

$$R(0) = \int {:}0{:}_b\;F(b)\,d\mu_S(b) = 0$$

That leaves the nontrivial cases

$$
\begin{aligned}
R'(0) &= \int {:}\big[W(a+b)-W(a)\big]{:}_b\;F(b)\,d\mu_S(b)\\
&= 6\sum_{j_1,\dots,j_6} f(j_1,j_2)\,w(j_3,j_4,j_5,j_6)\int {:}b_{j_3}b_{j_4}a_{j_5}a_{j_6}{:}_b\;b_{j_1}b_{j_2}\,d\mu_S(b)\\
&= -12\sum_{j_1,\dots,j_6} f(j_1,j_2)\,S_{j_1,j_3}S_{j_2,j_4}\,w(j_3,j_4,j_5,j_6)\,a_{j_5}a_{j_6}
\end{aligned}
$$

and

$$
\begin{aligned}
R''(0) &= \int {:}\big[W(a+b)-W(a)\big]^2{:}_b\;F(b)\,d\mu_S(b)\\
&= 16\sum_{j_1,\dots,j_{10}} f(j_1,j_2)\,w(j_3,j_4,j_5,j_6)\,w(j_7,j_8,j_9,j_{10})\int {:}b_{j_3}a_{j_4}a_{j_5}a_{j_6}\,b_{j_7}a_{j_8}a_{j_9}a_{j_{10}}{:}_b\;b_{j_1}b_{j_2}\,d\mu_S(b)\\
&= 32\sum_{j_1,\dots,j_{10}} f(j_1,j_2)\,S_{j_1,j_3}S_{j_2,j_7}\,w(j_3,j_4,j_5,j_6)\,a_{j_4}a_{j_5}a_{j_6}\,w(j_7,j_8,j_9,j_{10})\,a_{j_8}a_{j_9}a_{j_{10}}
\end{aligned}
$$
Problem II.2 Define, for all $f:M_r\to\mathbb C$ and $g:M_s\to\mathbb C$ with $r,s\ge 1$ and $r+s>2$, $f*g:M_{r+s-2}\to\mathbb C$ by

$$f*g(j_1,\dots,j_{r+s-2}) = \sum_{k=1}^D f(j_1,\dots,j_{r-1},k)\,g(k,j_r,\dots,j_{r+s-2})$$

Prove that

$$\|f*g\| \le \|f\|\,\|g\| \qquad\text{and}\qquad |||f*g||| \le \min\big\{\,|||f|||\,\|g\|\,,\;\|f\|\,|||g|||\,\big\}$$

Solution. By definition

$$\|f*g\| = \max_{1\le i\le r+s-2}\;\max_{1\le\ell\le D}\sum_{\substack{1\le j_1,\dots,j_{r+s-2}\le D\\ j_i=\ell}}\big|f*g(j_1,\dots,j_{r+s-2})\big|$$

We consider the case $1\le i\le r-1$; the case $r\le i\le r+s-2$ is similar. For any $1\le i\le r-1$,

$$
\begin{aligned}
\max_{1\le\ell\le D}\sum_{\substack{1\le j_1,\dots,j_{r+s-2}\le D\\ j_i=\ell}}\big|f*g(j_1,\dots,j_{r+s-2})\big|
&\le \max_{1\le\ell\le D}\sum_{\substack{1\le j_1,\dots,j_{r+s-2}\le D\\ j_i=\ell}}\;\sum_{k=1}^D\big|f(j_1,\dots,j_{r-1},k)\big|\,\big|g(k,j_r,\dots,j_{r+s-2})\big|\\
&= \max_{1\le\ell\le D}\sum_{\substack{1\le j_1,\dots,j_{r-1}\le D\\ j_i=\ell}}\;\sum_{k=1}^D\big|f(j_1,\dots,j_{r-1},k)\big|\sum_{1\le j_r,\dots,j_{r+s-2}\le D}\big|g(k,j_r,\dots,j_{r+s-2})\big|\\
&\le \max_{1\le\ell\le D}\sum_{\substack{1\le j_1,\dots,j_{r-1}\le D\\ j_i=\ell}}\Big[\sum_{k=1}^D\big|f(j_1,\dots,j_{r-1},k)\big|\Big]\;\max_{1\le k\le D}\sum_{1\le j_r,\dots,j_{r+s-2}\le D}\big|g(k,j_r,\dots,j_{r+s-2})\big|\\
&\le \max_{1\le\ell\le D}\sum_{\substack{1\le j_1,\dots,j_{r-1}\le D\\ j_i=\ell}}\sum_{k=1}^D\big|f(j_1,\dots,j_{r-1},k)\big|\;\|g\|\\
&\le \|f\|\,\|g\|
\end{aligned}
$$

We prove $|||f*g||| \le |||f|||\,\|g\|$; the other case is similar.

$$
\begin{aligned}
|||f*g||| &= \sum_{1\le j_1,\dots,j_{r+s-2}\le D}\big|f*g(j_1,\dots,j_{r+s-2})\big|\\
&\le \sum_{1\le j_1,\dots,j_{r+s-2}\le D}\sum_{k=1}^D\big|f(j_1,\dots,j_{r-1},k)\big|\,\big|g(k,j_r,\dots,j_{r+s-2})\big|\\
&= \sum_{j_1,\dots,j_{r-1}}\sum_k\big|f(j_1,\dots,j_{r-1},k)\big|\Big[\sum_{j_r,\dots,j_{r+s-2}}\big|g(k,j_r,\dots,j_{r+s-2})\big|\Big]\\
&\le \sum_{j_1,\dots,j_{r-1}}\sum_k\big|f(j_1,\dots,j_{r-1},k)\big|\;\max_{1\le k\le D}\sum_{j_r,\dots,j_{r+s-2}}\big|g(k,j_r,\dots,j_{r+s-2})\big|\\
&\le \sum_{j_1,\dots,j_{r-1}}\sum_k\big|f(j_1,\dots,j_{r-1},k)\big|\;\|g\|
\;=\; |||f|||\,\|g\|
\end{aligned}
$$
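Since $M_r$ is just a set of $r$-tuples of indices, both inequalities are easy to spot-check numerically. The sketch below is my own illustration, not from the text; it takes $M_r = \{0,\dots,D-1\}^r$ with real-valued $f$, $g$, and implements $f*g$ and the two norms directly from their definitions.

```python
import itertools
import random

D = 4
random.seed(0)

def rand_tensor(r):
    """A random f : M_r -> R, stored as {index tuple: value}."""
    return {J: random.uniform(-1, 1) for J in itertools.product(range(D), repeat=r)}

def conv(f, r, g, s):
    """(f*g)(j_1..j_{r+s-2}) = sum_k f(j_1..j_{r-1}, k) g(k, j_r..j_{r+s-2})."""
    out = {}
    for J in itertools.product(range(D), repeat=r + s - 2):
        out[J] = sum(f[J[:r - 1] + (k,)] * g[(k,) + J[r - 1:]] for k in range(D))
    return out

def norm(f, r):
    """||f||: max over index position i and value l of the sum of |f| with j_i = l."""
    return max(
        sum(abs(v) for J, v in f.items() if J[i] == l)
        for i in range(r) for l in range(D)
    )

def triple_norm(f):
    """|||f|||: the sum of all |f(J)|."""
    return sum(abs(v) for v in f.values())

r, s = 2, 3
f, g = rand_tensor(r), rand_tensor(s)
fg = conv(f, r, g, s)
assert norm(fg, r + s - 2) <= norm(f, r) * norm(g, s) + 1e-12
assert triple_norm(fg) <= min(triple_norm(f) * norm(g, s),
                              norm(f, r) * triple_norm(g)) + 1e-12
print("both norm bounds hold")
```

For $r = s = 2$ the convolution $f*g$ is just matrix multiplication, so the first bound reduces to submultiplicativity of a matrix norm.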
Problem II.3 Let $:\cdot:_S$ denote Wick ordering with respect to the covariance $S$.

a) Prove that if
$$\Big|\int b_H\,{:}b_J{:}_S\,d\mu_S(b)\Big| \le F^{|H|+|J|}\qquad\text{for all } H,J\in\bigcup_{r\ge 0}M_r$$
then
$$\Big|\int b_H\,{:}b_J{:}_{zS}\,d\mu_{zS}(b)\Big| \le \big(\sqrt{|z|}\,F\big)^{|H|+|J|}\qquad\text{for all } H,J\in\bigcup_{r\ge 0}M_r$$

b) Prove that if
$$\Big|\int b_H\,{:}b_J{:}_S\,d\mu_S(b)\Big| \le F^{|H|+|J|}\quad\text{and}\quad \Big|\int b_H\,{:}b_J{:}_T\,d\mu_T(b)\Big| \le G^{|H|+|J|}$$
for all $H,J\in\bigcup_{r\ge 0}M_r$, then
$$\Big|\int b_H\,{:}b_J{:}_{S+T}\,d\mu_{S+T}(b)\Big| \le (F+G)^{|H|+|J|}\qquad\text{for all } H,J\in\bigcup_{r\ge 0}M_r$$

Solution. a) For any $\zeta$ with $\zeta^2=z$,
$$
\begin{aligned}
\int e^{\Sigma_i a_ib_i}\,{:}e^{\Sigma_i c_ib_i}{:}_{zS}\,d\mu_{zS}(b)
&= \int e^{\Sigma_i a_ib_i}\,e^{\Sigma_i c_ib_i}\,e^{\frac12\Sigma_{ij} z\,c_i S_{ij} c_j}\,d\mu_{zS}(b)\\
&= e^{-\frac12\Sigma_{ij} z(a_i+c_i)S_{ij}(a_j+c_j)}\,e^{\frac12\Sigma_{ij} z\,c_i S_{ij} c_j}\\
&= e^{-\frac12\Sigma_{ij}(\zeta a_i+\zeta c_i)S_{ij}(\zeta a_j+\zeta c_j)}\,e^{\frac12\Sigma_{ij}(\zeta c_i)S_{ij}(\zeta c_j)}\\
&= \int e^{\Sigma_i \zeta a_ib_i}\,{:}e^{\Sigma_i \zeta c_ib_i}{:}_S\,d\mu_S(b)
\end{aligned}
$$
Differentiating with respect to $a_i$, $i\in H$, and $c_i$, $i\in J$, gives
$$\int b_H\,{:}b_J{:}_{zS}\,d\mu_{zS}(b) = \zeta^{|H|+|J|}\int b_H\,{:}b_J{:}_S\,d\mu_S(b) = z^{(|H|+|J|)/2}\int b_H\,{:}b_J{:}_S\,d\mu_S(b)$$
Note that if $|H|+|J|$ is not even, the integral vanishes. So we need not worry about which square root to take.

b) Let $f(b)=b_J$. Then
$$
\begin{aligned}
\int b_H\,{:}b_J{:}_{S+T}\,d\mu_{S+T}(b)
&= \int (a+b)_H\;{:}f{:}_{S+T}(a+b)\,d\mu_S(a)\,d\mu_T(b) &&\text{by Proposition I.21}\\
&= \int (a+b)_H\;{:}(a+b)_J{:}_{\substack{a,S\\ b,T}}\,d\mu_S(a)\,d\mu_T(b) &&\text{by Problem I.21}\\
&= \sum_{\substack{H'\subset H\\ J'\subset J}}\pm\int a_{H'}\,{:}a_{J'}{:}_S\,d\mu_S(a)\int b_{H\setminus H'}\,{:}b_{J\setminus J'}{:}_T\,d\mu_T(b)
\end{aligned}
$$
Hence
$$\Big|\int b_H\,{:}b_J{:}_{S+T}\,d\mu_{S+T}(b)\Big| \le \sum_{\substack{H'\subset H\\ J'\subset J}} F^{|H'|+|J'|}\,G^{|H\setminus H'|+|J\setminus J'|} = (F+G)^{|H|+|J|}$$
Problem II.4 Let $W(a) = \sum_{1\le i,j\le D} w(i,j)\,a_ia_j$ with $w(i,j)=-w(j,i)$. Verify that
$$W(a+b)-W(a) = \sum_{1\le i,j\le D} w(i,j)\,b_ib_j + 2\sum_{1\le i,j\le D} w(i,j)\,a_ib_j$$

Solution.
$$
\begin{aligned}
W(a+b)-W(a) &= \sum_{1\le i,j\le D} w(i,j)\,(a_i+b_i)(a_j+b_j) - \sum_{1\le i,j\le D} w(i,j)\,a_ia_j\\
&= \sum_{1\le i,j\le D} w(i,j)\,b_ib_j + \sum_{1\le i,j\le D} w(i,j)\,a_ib_j + \sum_{1\le i,j\le D} w(i,j)\,b_ia_j
\end{aligned}
$$
Now just sub in
$$\sum_{1\le i,j\le D} w(i,j)\,b_ia_j = \sum_{1\le i,j\le D}\big({-w(j,i)}\big)\big({-a_jb_i}\big) = \sum_{1\le i,j\le D} w(j,i)\,a_jb_i = \sum_{1\le i,j\le D} w(i,j)\,a_ib_j$$
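Because the identity is pure algebra, it can be checked mechanically in a small finite-dimensional Grassmann algebra. The sketch below is my own illustration, not from the text: elements are dictionaries keyed by strictly increasing tuples of generator indices, with the sign of a product computed by counting inversions.

```python
import random

def gmul(x, y):
    """Product in a Grassmann algebra; elements are {increasing tuple: coeff}."""
    out = {}
    for mx, cx in x.items():
        for my, cy in y.items():
            if set(mx) & set(my):
                continue  # a repeated generator gives zero
            merged = mx + my
            # sign of the permutation that sorts `merged`
            inv = sum(1 for i in range(len(merged)) for j in range(i + 1, len(merged))
                      if merged[i] > merged[j])
            key = tuple(sorted(merged))
            out[key] = out.get(key, 0.0) + (-1) ** inv * cx * cy
    return out

def gadd(x, y):
    out = dict(x)
    for k, v in y.items():
        out[k] = out.get(k, 0.0) + v
    return out

def gscale(s, x):
    return {k: s * v for k, v in x.items()}

D = 3
a = [{(i,): 1.0} for i in range(D)]        # generators a_1 .. a_D
b = [{(D + i,): 1.0} for i in range(D)]    # generators b_1 .. b_D
ab = [gadd(a[i], b[i]) for i in range(D)]

random.seed(1)
w = [[0.0] * D for _ in range(D)]
for i in range(D):
    for j in range(i + 1, D):
        w[i][j] = random.uniform(-1, 1)
        w[j][i] = -w[i][j]                 # w(i,j) = -w(j,i)

def W(c):
    out = {}
    for i in range(D):
        for j in range(D):
            out = gadd(out, gscale(w[i][j], gmul(c[i], c[j])))
    return out

lhs = gadd(W(ab), gscale(-1.0, W(a)))
rhs = {}
for i in range(D):
    for j in range(D):
        rhs = gadd(rhs, gscale(w[i][j], gmul(b[i], b[j])))
        rhs = gadd(rhs, gscale(2 * w[i][j], gmul(a[i], b[j])))
diff = gadd(lhs, gscale(-1.0, rhs))
assert all(abs(v) < 1e-12 for v in diff.values())
print("W(a+b) - W(a) identity verified for D =", D)
```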
Problem II.5 Assume Hypothesis (HG). Let $s,s',m\ge 1$ and
$$f(b)=\sum_{H\in M_m} f_m(H)\,b_H \qquad W(b)=\sum_{K\in M_s} w_s(K)\,b_K \qquad W'(b)=\sum_{K'\in M_{s'}} w'_{s'}(K')\,b_{K'}$$

a) Prove that
$$\Big|\int {:}W(b){:}_b\;f(b)\,d\mu_S(b)\Big| \le m\,F^{m+s-2}\,|||f_m|||\,\|S\|\,\|w_s\|$$

b) Prove that $\int {:}W(b)W'(b){:}_b\,f(b)\,d\mu_S(b) = 0$ if $m=1$ and that, for $m\ge 2$,
$$\Big|\int {:}W(b)W'(b){:}_b\;f(b)\,d\mu_S(b)\Big| \le m(m-1)\,F^{m+s+s'-4}\,|||f_m|||\,\|S\|^2\,\|w_s\|\,\|w'_{s'}\|$$

Solution. a) By definition
$$
\int {:}W(b){:}_b\,f(b)\,d\mu_S(b)
= \sum_{\substack{H\in M_m\\ K\in M_s}} w_s(K)\,f_m(H)\int {:}b_K{:}_b\;b_H\,d\mu_S(b)
= \sum_{\substack{H\in M_m\\ \tilde K\in M_{s-1}}}\sum_{k=1}^D w_s\big(\tilde K.(k)\big)\,f_m(H)\int {:}b_{\tilde K}\,b_k{:}_b\;b_H\,d\mu_S(b)
$$
Recall that $\tilde K.(k)$, the concatenation of $\tilde K$ and $(k)$, is the element of $M_s$ constructed by appending $k$ to the element $\tilde K$ of $M_{s-1}$. By Problem I.19 (integration by parts) and Hypothesis (HG),
$$
\Big|\int {:}b_{\tilde K}\,b_k{:}_b\;b_H\,d\mu_S(b)\Big|
= \Big|\sum_{\ell=1}^D S_{k\ell}\int {:}b_{\tilde K}{:}\;\tfrac{\partial}{\partial b_\ell}b_H\,d\mu_S(b)\Big|
= \Big|\sum_{j=1}^m (-1)^{j-1}\,S_{k,h_j}\int {:}b_{\tilde K}{:}\;b_{h_1,\dots,\not h_j,\dots,h_m}\,d\mu_S(b)\Big|
\le \sum_{j=1}^m |S_{k,h_j}|\;F^{s+m-2}
$$
We may assume, without loss of generality, that $f_m(H)$ is antisymmetric under permutation of its $m$ arguments. Hence
$$
\begin{aligned}
\Big|\int {:}W(b){:}_b\,f(b)\,d\mu_S(b)\Big|
&\le \sum_{\substack{H\in M_m\\ \tilde K\in M_{s-1}}}\sum_{k=1}^D\sum_{j=1}^m \big|w_s\big(\tilde K.(k)\big)\big|\,|S_{k,h_j}|\,|f_m(H)|\,F^{s+m-2}\\
&= m\sum_{H\in M_m}\sum_{k=1}^D\sum_{\tilde K\in M_{s-1}} \big|w_s\big(\tilde K.(k)\big)\big|\,|S_{k,h_1}|\,|f_m(H)|\,F^{s+m-2}\\
&\le m\,\|w_s\|\sum_{H\in M_m}\sum_{k=1}^D |S_{k,h_1}|\,|f_m(H)|\,F^{s+m-2}\\
&\le m\,\|w_s\|\,\|S\|\sum_{H\in M_m} |f_m(H)|\,F^{s+m-2}\\
&= m\,\|w_s\|\,\|S\|\,|||f_m|||\,F^{s+m-2}
\end{aligned}
$$

b) By definition
$$
\int {:}W(b)W'(b){:}_b\,f(b)\,d\mu_S(b)
= \sum_{\substack{H\in M_m\\ K\in M_s,\ K'\in M_{s'}}} w_s(K)\,w'_{s'}(K')\,f_m(H)\int {:}b_K\,b_{K'}{:}_b\;b_H\,d\mu_S(b)
= \sum_{\substack{H\in M_m\\ \tilde K\in M_{s-1},\ \tilde K'\in M_{s'-1}}}\;\sum_{k,k'=1}^D w_s\big(\tilde K.(k)\big)\,w'_{s'}\big(\tilde K'.(k')\big)\,f_m(H)\int {:}b_{\tilde K}\,b_k\,b_{\tilde K'}\,b_{k'}{:}_b\;b_H\,d\mu_S(b)
$$
By Problem I.19 (integration by parts), twice, and Hypothesis (HG),
$$
\begin{aligned}
\int {:}b_{\tilde K}\,b_k\,b_{\tilde K'}\,b_{k'}{:}_b\;b_H\,d\mu_S(b)
&= \sum_{\ell=1}^D S_{k'\ell}\int {:}b_{\tilde K}\,b_k\,b_{\tilde K'}{:}\;\tfrac{\partial}{\partial b_\ell}b_H\,d\mu_S(b)\\
&= \sum_{j'=1}^m (-1)^{j'-1}\,S_{k',h_{j'}}\int {:}b_{\tilde K}\,b_k\,b_{\tilde K'}{:}\;b_{h_1,\dots,\not h_{j'},\dots,h_m}\,d\mu_S(b)\\
&= \sum_{j'=1}^m\sum_{\substack{j=1\\ j\ne j'}}^m \pm\,S_{k',h_{j'}}\,S_{k,h_j}\int {:}b_{\tilde K}\,b_{\tilde K'}{:}\;b_{H\setminus\{h_j,h_{j'}\}}\,d\mu_S(b)
\end{aligned}
$$
so that
$$\Big|\int {:}b_{\tilde K}\,b_k\,b_{\tilde K'}\,b_{k'}{:}_b\;b_H\,d\mu_S(b)\Big| \le \sum_{j'=1}^m\sum_{\substack{j=1\\ j\ne j'}}^m |S_{k,h_j}|\,|S_{k',h_{j'}}|\;F^{s+s'+m-4}$$
Again, we may assume, without loss of generality, that $f_m(H)$ is antisymmetric under permutation of its $m$ arguments, so that
$$
\begin{aligned}
\Big|\int {:}W(b)W'(b){:}_b\,f(b)\,d\mu_S(b)\Big|
&\le \sum_{\substack{H\in M_m\\ \tilde K\in M_{s-1},\ \tilde K'\in M_{s'-1}}}\;\sum_{k,k'=1}^D\sum_{j'=1}^m\sum_{\substack{j=1\\ j\ne j'}}^m \big|w_s\big(\tilde K.(k)\big)\big|\,\big|w'_{s'}\big(\tilde K'.(k')\big)\big|\,|S_{k,h_j}|\,|S_{k',h_{j'}}|\,|f_m(H)|\,F^{s+s'+m-4}\\
&= m(m-1)\sum_{\substack{H\in M_m\\ \tilde K,\ \tilde K'}}\;\sum_{k,k'=1}^D \big|w_s\big(\tilde K.(k)\big)\big|\,\big|w'_{s'}\big(\tilde K'.(k')\big)\big|\,|S_{k,h_1}|\,|S_{k',h_2}|\,|f_m(H)|\,F^{s+s'+m-4}\\
&\le m(m-1)\,\|w_s\|\,\|w'_{s'}\|\sum_{H\in M_m}\sum_{k=1}^D |S_{k,h_1}|\sum_{k'=1}^D|S_{k',h_2}|\,|f_m(H)|\,F^{s+s'+m-4}\\
&\le m(m-1)\,\|w_s\|\,\|w'_{s'}\|\,\|S\|^2\sum_{H\in M_m} |f_m(H)|\,F^{s+s'+m-4}\\
&= m(m-1)\,\|w_s\|\,\|w'_{s'}\|\,\|S\|^2\,|||f_m|||\,F^{s+s'+m-4}
\end{aligned}
$$
Problem II.6 Let $M>1$. Construct a function $\nu\in C_0^\infty\big([M^{-2},M^2]\big)$ that takes values in $[0,1]$, is identically 1 on $[M^{-1/2},M^{1/2}]$ and obeys
$$\sum_{j=0}^\infty \nu\big(M^{2j}x\big)=1 \qquad\text{for } 0<x<1$$

Solution. The function
$$\nu_1(x) = \begin{cases} e^{-1/x^2} & \text{if } x>0\\ 0 & \text{if } x\le 0\end{cases}$$
is $C^\infty$. Then $\nu_2(x)=\nu_1(-x)\,\nu_1(x+1)$ is $C^\infty$, strictly positive for $-1<x<0$ and vanishes for $x\ge 0$ and $x\le -1$,
$$\nu_3(x) = \frac{\int_{-1}^x \nu_2(y)\,dy}{\int_{-1}^0 \nu_2(y)\,dy}$$
is $C^\infty$, vanishes for $x\le -1$, is strictly increasing for $-1<x<0$ and identically one for $x\ge 0$, and
$$\nu_4(x) = \nu_3(1-x)\,\nu_3(x+1)$$
is $C^\infty$, vanishes for $|x|\ge 2$, is identically one for $|x|\le 1$ and is monotone for $1\le|x|\le 2$. Observe that, for any $x$, at most two terms in the sum
$$\bar\nu(x) = \sum_{m=-\infty}^\infty \nu_4(x+3m)$$
are nonzero, because the support of $\nu_4$ has width $4 < 2\times 3$. Hence the sum always converges. On the other hand, as the width of the support is precisely $4>3$, at least one term is nonzero, so that $\bar\nu(x)>0$. As $\bar\nu$ is periodic and continuous, it is uniformly bounded away from zero. Furthermore, for $|x|\le 1$, we have $|x+3m|\ge 2$ for all $m\ne 0$, so that $\bar\nu(x)=\nu_4(x)=1$ for all $|x|\le 1$. Hence
$$\nu_5(x) = \frac{\nu_4(x)}{\bar\nu(x)}$$
is $C^\infty$, nonnegative, vanishes for $|x|\ge 2$, is identically one for $|x|\le 1$ and, since $\bar\nu(x+3)=\bar\nu(x)$, obeys
$$\sum_{m=-\infty}^\infty \nu_5(x+3m) = \sum_{m=-\infty}^\infty \frac{\nu_4(x+3m)}{\bar\nu(x)} = 1$$
Let $L>0$ and set
$$\nu(x) = \nu_5\big(\tfrac1L\ln x\big)$$
Then $\nu(x)$ is $C^\infty$ on $x>0$, is supported on $\big|\tfrac1L\ln x\big|\le 2$, i.e. for $e^{-2L}\le x\le e^{2L}$, is identically one for $\big|\tfrac1L\ln x\big|\le 1$, i.e. for $e^{-L}\le x\le e^{L}$, and obeys
$$\sum_{m=-\infty}^\infty \nu\big(e^{3Lm}x\big) = \sum_{m=-\infty}^\infty \nu_5\big(\tfrac1L\ln x + 3m\big) = 1 \qquad\text{for all } x>0$$
Finally, just set $M=e^{3L/2}$ and observe that $e^{2L}\le M^2$, $e^L\ge M^{1/2}$ and that when $x<1$ only those terms with $m\ge 0$ contribute to $\sum_{m=-\infty}^\infty \nu\big(e^{3Lm}x\big)$.

Problem II.7 Prove that
$$\bigg\|\frac{\not p + m}{p^2+m^2}\bigg\| = \frac{1}{\sqrt{p^2+m^2}}$$

Solution. For any matrix, $\|M\|^2 = \|M^*M\|$, where $M^*$ is the adjoint (complex conjugate of the transpose) of $M$. As
$$\not p^* = \begin{pmatrix} ip_0 & -p_1\\ p_1 & -ip_0\end{pmatrix}^* = \begin{pmatrix} -ip_0 & p_1\\ -p_1 & ip_0\end{pmatrix} = -\begin{pmatrix} ip_0 & -p_1\\ p_1 & -ip_0\end{pmatrix} = -\not p$$
and
$$\not p^{\,2} = \begin{pmatrix} ip_0 & -p_1\\ p_1 & -ip_0\end{pmatrix}^2 = -\begin{pmatrix} p_0^2+p_1^2 & 0\\ 0 & p_0^2+p_1^2\end{pmatrix} = -p^2\,1\!\mathrm{l}$$
we have
$$
\bigg\|\frac{\not p + m}{p^2+m^2}\bigg\|^2
= \frac{1}{(p^2+m^2)^2}\,\big\|(\not p+m)^*(\not p+m)\big\|
= \frac{1}{(p^2+m^2)^2}\,\big\|(-\not p+m)(\not p+m)\big\|
= \frac{1}{(p^2+m^2)^2}\,\big\|{-\not p^{\,2}}+m^2\big\|
= \frac{p^2+m^2}{(p^2+m^2)^2}
= \frac{1}{p^2+m^2}
$$
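The scale decomposition built in Problem II.6 above is concrete enough to run. The sketch below is my own illustration, not from the text: it approximates the integral defining $\nu_3$ by a midpoint rule, takes $L=1$ (so $M=e^{3/2}$), and checks the plateau, the support and the partition-of-unity property numerically.

```python
import math

def nu1(x):
    return math.exp(-1.0 / x ** 2) if x > 0 else 0.0

def nu2(x):
    return nu1(-x) * nu1(x + 1)

N = 1000  # midpoint-rule resolution for the integral defining nu3
_Z = sum(nu2(-1 + (i + 0.5) / N) for i in range(N)) / N

def nu3(x):
    if x <= -1.0:
        return 0.0
    if x >= 0.0:
        return 1.0
    n = max(1, int(N * (x + 1)))
    return sum(nu2(-1 + (x + 1) * (i + 0.5) / n) for i in range(n)) * (x + 1) / n / _Z

def nu4(x):
    return nu3(1 - x) * nu3(x + 1)

def nubar(x):
    # only m with |x + 3m| < 2 contributes, so m in [-2, 2] always suffices
    return sum(nu4(x + 3 * m) for m in range(-2, 3))

def nu5(x):
    v = nu4(x)
    return v / nubar(x) if v else 0.0

L = 1.0
M = math.exp(1.5 * L)

def nu(x):
    return nu5(math.log(x) / L)

# identically 1 on [M^(-1/2), M^(1/2)], supported inside [M^(-2), M^2]
for x in (M ** -0.5, 1.0, M ** 0.5):
    assert abs(nu(x) - 1.0) < 1e-12
assert nu(M ** 2 * 1.01) == 0.0 and nu(M ** -2 / 1.01) == 0.0
# partition of unity: sum_{j>=0} nu(M^{2j} x) = 1 for 0 < x < 1
for x in (0.05, 0.3, 0.9):
    total = sum(nu(M ** (2 * j) * x) for j in range(6))
    assert abs(total - 1.0) < 1e-9
print("nu is a smooth partition of unity on scales for M =", M)
```

Note that the quadrature error in $\nu_3$ cancels in the check: the partition sum divides $\nu_4$ by the very $\bar\nu$ built from the same approximate $\nu_4$, so the sum is 1 to machine precision regardless of the resolution `N`.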
Appendix A. Infinite Dimensional Grassmann Algebras

Problem A.1 Let $\mathcal I$ be any ordered countable set and $\mathfrak I$ the set of all finite subsets of $\mathcal I$ (including the empty set). Each $I\in\mathfrak I$ inherits an ordering from $\mathcal I$. Let $w:\mathcal I\to(0,\infty)$ be any strictly positive function on $\mathcal I$ and set
$$W_I = \prod_{i\in I} w_i$$
with the convention $W_\emptyset = 1$. Define
$$V = \ell^1(\mathcal I,w) = \Big\{\alpha:\mathcal I\to\mathbb C\ \Big|\ \sum_{i\in\mathcal I} w_i\,|\alpha_i| < \infty\Big\}$$
and "the Grassmann algebra generated by $V$"
$$A(\mathcal I,w) = \ell^1(\mathfrak I,W) = \Big\{\alpha:\mathfrak I\to\mathbb C\ \Big|\ \sum_{I\in\mathfrak I} W_I\,|\alpha_I| < \infty\Big\}$$
The multiplication is $(\alpha\beta)_I = \sum_{J\subset I}\operatorname{sgn}(J,I\setminus J)\,\alpha_J\,\beta_{I\setminus J}$, where $\operatorname{sgn}(J,I\setminus J)$ is the sign of the permutation that reorders $(J,I\setminus J)$ to $I$. The norm $\|\alpha\| = \sum_{I\in\mathfrak I} W_I|\alpha_I|$ turns $A(\mathcal I,w)$ into a Banach space.

a) Show that $\|\alpha\beta\| \le \|\alpha\|\,\|\beta\|$.

b) Show that if $f:\mathbb C\to\mathbb C$ is any function that is defined and analytic in a neighbourhood of 0, then the power series $f(\alpha) = \sum_{n=0}^\infty \frac1{n!}f^{(n)}(0)\,\alpha^n$ converges for all $\alpha\in A$ with $\|\alpha\|$ smaller than the radius of convergence of $f$.

c) Prove that $A_f(\mathcal I) = \big\{\alpha:\mathfrak I\to\mathbb C\ \big|\ \alpha_I = 0 \text{ for all but finitely many } I\big\}$ is a dense subalgebra of $A(\mathcal I,w)$.

Solution. a)
$$
\|\alpha\beta\| = \sum_{I\in\mathfrak I} W_I\,\big|(\alpha\beta)_I\big|
= \sum_{I\in\mathfrak I}\Big|\sum_{J\subset I}\operatorname{sgn}(J,I\setminus J)\,W_J\alpha_J\;W_{I\setminus J}\beta_{I\setminus J}\Big|
\le \sum_{I\in\mathfrak I}\sum_{J\subset I} W_J|\alpha_J|\;W_{I\setminus J}|\beta_{I\setminus J}|
\le \sum_{I,J\in\mathfrak I} W_J|\alpha_J|\;W_I|\beta_I| = \|\alpha\|\,\|\beta\|
$$

b)
$$\|f(\alpha)\| = \Big\|\sum_{n=0}^\infty \tfrac1{n!}f^{(n)}(0)\,\alpha^n\Big\| \le \sum_{n=0}^\infty \tfrac1{n!}\big|f^{(n)}(0)\big|\,\|\alpha\|^n$$
converges if $\|\alpha\|$ is strictly smaller than the radius of convergence of $f$.

c) $A_f(\mathcal I)$ is obviously closed under addition, multiplication and multiplication by complex numbers, so it suffices to prove that $A_f(\mathcal I)$ is dense in $A(\mathcal I,w)$. $\mathfrak I$ is a countable set; index its elements $I_1,I_2,I_3,\dots$. Let $\alpha\in A(\mathcal I,w)$ and define, for each $n\in\mathbb N$, $\alpha^{(n)}\in A_f(\mathcal I)$ by
$$\alpha^{(n)}_{I_j} = \begin{cases}\alpha_{I_j} & \text{if } j\le n\\ 0 & \text{otherwise}\end{cases}$$
Then
$$\lim_{n\to\infty}\big\|\alpha-\alpha^{(n)}\big\| = \lim_{n\to\infty}\sum_{j=n+1}^\infty W_{I_j}\,|\alpha_{I_j}| = 0$$
since $\sum_{j=1}^\infty W_{I_j}|\alpha_{I_j}|$ converges.
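Part (a) can be spot-checked on a finite truncation. The sketch below is my own illustration, not from the text: it builds the full Grassmann algebra on five generators (so $\mathfrak I$ is all $2^5$ subsets), with the signed product and the weighted $\ell^1$ norm exactly as defined above, and verifies submultiplicativity for random elements and weights.

```python
import itertools
import random

random.seed(5)
N = 5
gens = list(range(N))
w = [random.uniform(0.5, 2.0) for _ in gens]          # strictly positive weights
subsets = [c for r in range(N + 1) for c in itertools.combinations(gens, r)]

def weight(I):
    W = 1.0
    for i in I:
        W *= w[i]
    return W

def sgn_merge(J, K):
    """Sign reordering the concatenation (J, K) into increasing order; 0 on overlap."""
    if set(J) & set(K):
        return 0
    merged = J + K
    inv = sum(1 for i in range(len(merged)) for j in range(i + 1, len(merged))
              if merged[i] > merged[j])
    return (-1) ** inv

def gmul(alpha, beta):
    """(alpha beta)_I = sum over J subset I of sgn(J, I\\J) alpha_J beta_{I\\J}."""
    out = {I: 0.0 for I in subsets}
    for J in subsets:
        for K in subsets:
            s = sgn_merge(J, K)
            if s:
                out[tuple(sorted(J + K))] += s * alpha[J] * beta[K]
    return out

def norm(alpha):
    return sum(weight(I) * abs(alpha[I]) for I in subsets)

alpha = {I: random.uniform(-1, 1) for I in subsets}
beta = {I: random.uniform(-1, 1) for I in subsets}
assert norm(gmul(alpha, beta)) <= norm(alpha) * norm(beta) + 1e-10
print("||alpha beta|| <= ||alpha|| ||beta|| holds")
```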
Problem A.2 Let $\mathcal I$ be any ordered countable set and $\mathfrak I$ the set of all finite subsets of $\mathcal I$. Let $S = \{\alpha:\mathfrak I\to\mathbb C\}$ be the set of all sequences indexed by $\mathfrak I$. Observe that our standard product $(\alpha\beta)_I = \sum_{J\subset I}\operatorname{sgn}(J,I\setminus J)\,\alpha_J\beta_{I\setminus J}$ is well-defined on $S$ — for each $I\in\mathfrak I$, $\sum_{J\subset I}$ is a finite sum. We now define, for each integer $n$, a norm on (a subset of) $S$ by
$$\|\alpha\|_n = \sum_{I\in\mathfrak I} 2^{n|I|}\,|\alpha_I|$$
It is defined for all $\alpha\in S$ for which the series converges. Define
$$A_\cap = \big\{\alpha\in S\ \big|\ \|\alpha\|_n<\infty \text{ for all } n\in\mathbb Z\big\} \qquad\qquad A_\cup = \big\{\alpha\in S\ \big|\ \|\alpha\|_n<\infty \text{ for some } n\in\mathbb Z\big\}$$

a) Prove that if $\alpha,\beta\in A_\cap$ then $\alpha\beta\in A_\cap$.

b) Prove that if $\alpha,\beta\in A_\cup$ then $\alpha\beta\in A_\cup$.

c) Prove that if $f(z)$ is an entire function and $\alpha\in A_\cap$, then $f(\alpha)\in A_\cap$.

d) Prove that if $f(z)$ is analytic at the origin and $\alpha\in A_\cup$ has $|\alpha_\emptyset|$ strictly smaller than the radius of convergence of $f$, then $f(\alpha)\in A_\cup$.

Solution. a), b) By Problem A.1 with $w_i = 2^n$, $\|\alpha\beta\|_n \le \|\alpha\|_n\,\|\beta\|_n$. Part (a) follows immediately. For part (b), let $\|\alpha\|_m,\|\beta\|_{m'}<\infty$. Then, since $\|\alpha\|_m\le\|\alpha\|_n$ for all $m\le n$,
$$\|\alpha\beta\|_{\min\{m,m'\}} \le \|\alpha\|_{\min\{m,m'\}}\,\|\beta\|_{\min\{m,m'\}} \le \|\alpha\|_m\,\|\beta\|_{m'} < \infty$$

c) For each $n\in\mathbb Z$,
$$\|f(\alpha)\|_n = \Big\|\sum_{k=0}^\infty \tfrac1{k!}f^{(k)}(0)\,\alpha^k\Big\|_n \le \sum_{k=0}^\infty \tfrac1{k!}\big|f^{(k)}(0)\big|\,\|\alpha\|_n^k$$
which converges for every value of $\|\alpha\|_n$, since $f$ is entire.

d) Let $n\in\mathbb Z$ be such that $\|\alpha\|_n<\infty$. Such an $n$ exists because $\alpha\in A_\cup$. Observe that, for each $m\le n$,
$$\|\alpha\|_m = \sum_{I\in\mathfrak I} 2^{m|I|}|\alpha_I| = |\alpha_\emptyset| + \sum_{I\ne\emptyset} 2^{(m-n)|I|}\,2^{n|I|}|\alpha_I| \le |\alpha_\emptyset| + 2^{-(n-m)}\sum_{I\in\mathfrak I} 2^{n|I|}|\alpha_I| \le |\alpha_\emptyset| + 2^{-(n-m)}\,\|\alpha\|_n$$
Consequently, for all $\alpha\in A_\cup$,
$$\lim_{m\to-\infty}\|\alpha\|_m = |\alpha_\emptyset|$$
If $|\alpha_\emptyset|$ is strictly smaller than the radius of convergence of $f$, then, for some $m$, $\|\alpha\|_m$ is also strictly smaller than the radius of convergence of $f$ and
$$\|f(\alpha)\|_m \le \sum_{k=0}^\infty \tfrac1{k!}\big|f^{(k)}(0)\big|\,\|\alpha\|_m^k < \infty$$
Problem A.3 a) Let $f(x)\in C^\infty\big(\mathbb R^d/L\mathbb Z^d\big)$. Define
$$\tilde f_p = \int_{\mathbb R^d/L\mathbb Z^d} dx\; f(x)\,e^{-i\langle p,x\rangle}$$
Prove that for every $\gamma\in\mathbb N$, there is a constant $C_{f,\gamma}$ such that
$$\big|\tilde f_p\big| \le C_{f,\gamma}\prod_{j=1}^d\big(1+p_j^2\big)^{-\gamma} \qquad\text{for all } p\in\tfrac{2\pi}{L}\mathbb Z^d$$

b) Let $f$ be a distribution on $\mathbb R^d/L\mathbb Z^d$. For any $h\in C^\infty\big(\mathbb R^d/L\mathbb Z^d\big)$, set
$$h_R(x) = \tfrac1{L^d}\sum_{\substack{p\in\frac{2\pi}{L}\mathbb Z^d\\ |p|\le R}} e^{i\langle p,x\rangle}\,\tilde h_p \qquad\text{where}\quad \tilde h_p = \int_{\mathbb R^d/L\mathbb Z^d} dx\;h(x)\,e^{-i\langle p,x\rangle}$$
Prove that $\langle f,h\rangle = \lim_{R\to\infty}\langle f,h_R\rangle$.

Solution. a)
$$
\prod_{j=1}^d\big(1+p_j^2\big)^\gamma\,\tilde f_p
= \int_{\mathbb R^d/L\mathbb Z^d} dx\; f(x)\prod_{j=1}^d\big(1+p_j^2\big)^\gamma e^{-i\langle p,x\rangle}
= \int_{\mathbb R^d/L\mathbb Z^d} dx\; f(x)\prod_{j=1}^d\Big(1-\tfrac{\partial^2}{\partial x_j^2}\Big)^\gamma e^{-i\langle p,x\rangle}
= \int_{\mathbb R^d/L\mathbb Z^d} dx\; e^{-i\langle p,x\rangle}\prod_{j=1}^d\Big(1-\tfrac{\partial^2}{\partial x_j^2}\Big)^\gamma f(x)
$$
by integration by parts. Note that there are no boundary terms arising from the integration by parts because $f$ is periodic. Hence
$$\Big|\prod_{j=1}^d\big(1+p_j^2\big)^\gamma\,\tilde f_p\Big| \le C_{f,\gamma} \equiv \int_{\mathbb R^d/L\mathbb Z^d} dx\;\Big|\prod_{j=1}^d\Big(1-\tfrac{\partial^2}{\partial x_j^2}\Big)^\gamma f(x)\Big| < \infty$$

b) By (A.2), there are a constant $C_f$ and an integer $\nu_f$ (the order of the distribution $f$) such that
$$
\big|\langle f,\,h-h_R\rangle\big|
= \Big|\Big\langle f,\ \tfrac1{L^d}\sum_{\substack{p\in\frac{2\pi}{L}\mathbb Z^d\\ |p|\ge R}} \tilde h_p\,e^{i\langle p,x\rangle}\Big\rangle\Big|
\le C_f \sup_{x\in\mathbb R^d/L\mathbb Z^d}\Big|\prod_{j=1}^d\Big(1-\tfrac{d^2}{dx_j^2}\Big)^{\nu_f}\,\tfrac1{L^d}\sum_{\substack{p\in\frac{2\pi}{L}\mathbb Z^d\\ |p|\ge R}} \tilde h_p\,e^{i\langle p,x\rangle}\Big|
\le \tfrac{C_f}{L^d}\sum_{\substack{p\in\frac{2\pi}{L}\mathbb Z^d\\ |p|\ge R}}\prod_{j=1}^d\big(1+p_j^2\big)^{\nu_f}\,\big|\tilde h_p\big|
$$
By part (a) of this problem, there is, for each $\gamma>0$, a constant $C_{h,\gamma}$ such that
$$\big|\tilde h_p\big| \le C_{h,\gamma}\prod_{j=1}^d\big(1+p_j^2\big)^{-\gamma} \qquad\text{for all } p\in\tfrac{2\pi}{L}\mathbb Z^d$$
Hence, choosing $\gamma = \nu_f+1$,
$$\big|\langle f,\,h-h_R\rangle\big| \le C_{h,\gamma}\,\tfrac{C_f}{L^d}\sum_{\substack{p\in\frac{2\pi}{L}\mathbb Z^d\\ |p|\ge R}}\prod_{j=1}^d\big(1+p_j^2\big)^{\nu_f}\big(1+p_j^2\big)^{-\gamma} = C_{h,\gamma}\,\tfrac{C_f}{L^d}\sum_{\substack{p\in\frac{2\pi}{L}\mathbb Z^d\\ |p|\ge R}}\prod_{j=1}^d\big(1+p_j^2\big)^{-1}$$
Since
$$\sum_{p\in\frac{2\pi}{L}\mathbb Z^d}\prod_{j=1}^d\big(1+p_j^2\big)^{-1} = \Big[\sum_{p_1\in\frac{2\pi}{L}\mathbb Z}\tfrac1{1+p_1^2}\Big]^d < \infty$$
we have
$$\lim_{R\to\infty}\sum_{\substack{p\in\frac{2\pi}{L}\mathbb Z^d\\ |p|\ge R}}\prod_{j=1}^d\big(1+p_j^2\big)^{-1} = 0$$
and hence $\lim_{R\to\infty}\langle f,\,h-h_R\rangle = 0$.
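Part (a) is easy to watch numerically in $d=1$: for a smooth periodic $f$, the rectangle rule computes $\tilde f_p$ essentially exactly, and the coefficients decay faster than any power of $1+p^2$. The sketch below is my own illustration (with $f(x)=e^{\sin x}$ and $L=2\pi$), not from the text.

```python
import cmath
import math

L = 2 * math.pi
N = 512  # the rectangle rule is spectrally accurate for smooth periodic integrands

def f(x):
    return math.exp(math.sin(2 * math.pi * x / L))

def fhat(n):
    """f~_p with p = 2*pi*n/L, by the N-point rectangle rule."""
    p = 2 * math.pi * n / L
    return sum(f(L * k / N) * cmath.exp(-1j * p * L * k / N) for k in range(N)) * (L / N)

coeffs = {n: abs(fhat(n)) for n in range(13)}
# smoothness forces faster-than-polynomial decay:
# (1+p^2)^gamma * |f~_p| stays bounded for every gamma
gamma = 3
bound = max((1 + (2 * math.pi * n / L) ** 2) ** gamma * coeffs[n] for n in range(13))
assert coeffs[12] < 1e-10 * coeffs[0]
print("decay constant C_{f,%d} is about %.3g" % (gamma, bound))
```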
Problem A.4 Let $f$ be a distribution on $\mathbb R^d/L\mathbb Z^d$. Suppose that, for each $\gamma\in\mathbb N$, there is a constant $C_{f,\gamma}$ such that
$$\big|\big\langle f,\,e^{-i\langle p,x\rangle}\big\rangle\big| \le C_{f,\gamma}\prod_{j=1}^d\big(1+p_j^2\big)^{-\gamma} \qquad\text{for all } p\in\tfrac{2\pi}{L}\mathbb Z^d$$
Prove that there is a $C^\infty$ function $F(x)$ on $\mathbb R^d/L\mathbb Z^d$ such that
$$\langle f,h\rangle = \int_{\mathbb R^d/L\mathbb Z^d} dx\;F(x)\,h(x)$$

Proof: Define $\tilde F_p = \big\langle f,\,e^{-i\langle p,x\rangle}\big\rangle$ and $F(x) = \tfrac1{L^d}\sum_{q\in\frac{2\pi}{L}\mathbb Z^d} e^{i\langle q,x\rangle}\,\tilde F_q$. By hypothesis,
$$\big|\tilde F_q\big| \le C_{f,\gamma}\prod_{j=1}^d\big(1+q_j^2\big)^{-\gamma} \qquad\text{for all } q\in\tfrac{2\pi}{L}\mathbb Z^d \text{ and } \gamma\in\mathbb N$$
Consequently, all termwise derivatives of the series defining $F$ converge absolutely and uniformly in $x$. Hence $F\in C^\infty\big(\mathbb R^d/L\mathbb Z^d\big)$. Define the distribution $\varphi$ by
$$\langle\varphi,h\rangle = \int_{\mathbb R^d/L\mathbb Z^d} dx\;F(x)\,h(x)$$
We wish to show that $\varphi=f$. As
$$\big\langle\varphi,\,e^{-i\langle p,x\rangle}\big\rangle = \int_{\mathbb R^d/L\mathbb Z^d} dx\;F(x)\,e^{-i\langle p,x\rangle} = \tfrac1{L^d}\sum_{q\in\frac{2\pi}{L}\mathbb Z^d}\tilde F_q\int_{\mathbb R^d/L\mathbb Z^d} dx\;e^{i\langle q-p,x\rangle} = \tfrac1{L^d}\sum_{q\in\frac{2\pi}{L}\mathbb Z^d}\tilde F_q\,L^d\,\delta_{p,q} = \tilde F_p = \big\langle f,\,e^{-i\langle p,x\rangle}\big\rangle$$
we have that $\langle\varphi,h\rangle = \langle f,h\rangle$ for any $h$ that is a finite linear combination of the $e^{-i\langle p,x\rangle}$, $p\in\tfrac{2\pi}{L}\mathbb Z^d$. It now suffices to apply part (b) of Problem A.3.
Problem A.5 Prove that

a) $AA^\dagger f = A^\dagger A f + f$

b) $Ah_0 = 0$

c) $A^\dagger A h_\ell = \ell\,h_\ell$

d) $\langle h_\ell,h_{\ell'}\rangle = \delta_{\ell,\ell'}$. Here $\langle f,g\rangle = \int \bar f(x)\,g(x)\,dx$.

e) $x^2 - \frac{d^2}{dx^2} = 2A^\dagger A + 1$

Solution. a) By definition and the fact that $\frac{d}{dx}(xf) = f + x\frac{df}{dx}$,
$$
\begin{aligned}
AA^\dagger f &= \tfrac12\big(x+\tfrac{d}{dx}\big)\big(x-\tfrac{d}{dx}\big)f = \tfrac12\big(x^2 + \tfrac{d}{dx}x - x\tfrac{d}{dx} - \tfrac{d^2}{dx^2}\big)f = \tfrac12\big(x^2+1-\tfrac{d^2}{dx^2}\big)f\\
A^\dagger A f &= \tfrac12\big(x-\tfrac{d}{dx}\big)\big(x+\tfrac{d}{dx}\big)f = \tfrac12\big(x^2 - \tfrac{d}{dx}x + x\tfrac{d}{dx} - \tfrac{d^2}{dx^2}\big)f = \tfrac12\big(x^2-1-\tfrac{d^2}{dx^2}\big)f
\end{aligned}
\tag{D.3}
$$
Subtracting, $AA^\dagger f - A^\dagger A f = f$.

b)
$$\sqrt2\,\pi^{1/4}\,Ah_0 = \big(x+\tfrac{d}{dx}\big)\,e^{-\frac12x^2} = x\,e^{-\frac12x^2} - x\,e^{-\frac12x^2} = 0$$

c) The proof is by induction on $\ell$. When $\ell=0$, $A^\dagger A h_0 = 0 = 0\,h_0$ by part (b). If $A^\dagger A h_\ell = \ell h_\ell$, then, by part (a),
$$A^\dagger A h_{\ell+1} = A^\dagger A\,\tfrac1{\sqrt{\ell+1}}A^\dagger h_\ell = \tfrac1{\sqrt{\ell+1}}A^\dagger A^\dagger A h_\ell + \tfrac1{\sqrt{\ell+1}}A^\dagger h_\ell = \tfrac1{\sqrt{\ell+1}}A^\dagger\,\ell h_\ell + \tfrac1{\sqrt{\ell+1}}A^\dagger h_\ell = (\ell+1)\,\tfrac1{\sqrt{\ell+1}}A^\dagger h_\ell = (\ell+1)\,h_{\ell+1}$$

d) By part (c),
$$(\ell-\ell')\,\langle h_\ell,h_{\ell'}\rangle = \langle \ell h_\ell,h_{\ell'}\rangle - \langle h_\ell,\ell' h_{\ell'}\rangle = \big\langle A^\dagger A h_\ell,h_{\ell'}\big\rangle - \big\langle h_\ell,A^\dagger A h_{\ell'}\big\rangle = \big\langle h_\ell,A^\dagger A h_{\ell'}\big\rangle - \big\langle h_\ell,A^\dagger A h_{\ell'}\big\rangle = 0$$
since $A^\dagger$ and $A$ are adjoints. So, if $\ell\ne\ell'$, then $\langle h_\ell,h_{\ell'}\rangle = 0$. We prove that $\langle h_\ell,h_\ell\rangle = 1$ by induction on $\ell$. For $\ell=0$,
$$\langle h_0,h_0\rangle = \tfrac1{\sqrt\pi}\int e^{-x^2}\,dx = 1$$
If $\langle h_\ell,h_\ell\rangle = 1$, then
$$\langle h_{\ell+1},h_{\ell+1}\rangle = \tfrac1{\ell+1}\big\langle A^\dagger h_\ell,\,A^\dagger h_\ell\big\rangle = \tfrac1{\ell+1}\big\langle h_\ell,\,AA^\dagger h_\ell\big\rangle = \tfrac1{\ell+1}\big\langle h_\ell,\,\big(A^\dagger A+1\big)h_\ell\big\rangle = \tfrac1{\ell+1}\big\langle h_\ell,\,(\ell+1)\,h_\ell\big\rangle = 1$$

e) Adding the two rows of (D.3) gives
$$AA^\dagger f + A^\dagger A f = \big(x^2-\tfrac{d^2}{dx^2}\big)f \qquad\text{or}\qquad x^2-\tfrac{d^2}{dx^2} = AA^\dagger + A^\dagger A$$
Part (a) may be expressed $AA^\dagger = A^\dagger A + 1$. Subbing this in gives part (e).
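Parts (c) and (d) can be spot-checked numerically. The sketch below is my own illustration, not from the text: it builds the $h_\ell$ from the standard Hermite-polynomial recurrence (which produces the same functions as the $A^\dagger$-recursion $h_{\ell+1}=A^\dagger h_\ell/\sqrt{\ell+1}$), then tests orthonormality by quadrature and $A^\dagger A\,h_\ell=\ell\,h_\ell$, i.e. $\tfrac12(x^2-1-\tfrac{d^2}{dx^2})h_\ell=\ell h_\ell$, by central finite differences.

```python
import math

def herm_fn(l, x):
    """Hermite function h_l(x) = (2^l l! sqrt(pi))^{-1/2} H_l(x) e^{-x^2/2},
    with H_l the physicists' Hermite polynomials (H_{n+1} = 2x H_n - 2n H_{n-1})."""
    Hm, H = 0.0, 1.0
    for n in range(l):
        Hm, H = H, 2 * x * H - 2 * n * Hm
    norm = math.sqrt(2 ** l * math.factorial(l) * math.sqrt(math.pi))
    return H * math.exp(-x * x / 2) / norm

def inner(l, k, N=4000):
    """<h_l, h_k> by the trapezoid rule on [-10, 10]."""
    s = 0.0
    for i in range(N + 1):
        x = -10 + 20 * i / N
        wgt = 0.5 if i in (0, N) else 1.0
        s += wgt * herm_fn(l, x) * herm_fn(k, x)
    return s * 20 / N

# orthonormality <h_l, h_k> = delta_{l,k}
for l in range(4):
    for k in range(4):
        assert abs(inner(l, k) - (1.0 if l == k else 0.0)) < 1e-8

# A'A h_l = l h_l, with the second derivative taken by central differences
eps = 1e-4
for l in range(4):
    for x in (-1.3, 0.2, 2.1):
        d2 = (herm_fn(l, x + eps) - 2 * herm_fn(l, x) + herm_fn(l, x - eps)) / eps ** 2
        lhs = (x * x * herm_fn(l, x) - herm_fn(l, x) - d2) / 2
        assert abs(lhs - l * herm_fn(l, x)) < 1e-5
print("orthonormality and A'A h_l = l h_l verified numerically")
```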
Problem A.6 Show that the constant function $f=1$ is in $V_\gamma$ for all $\gamma<-1$.

Solution. We first show that, under the Fourier transform convention
$$\hat g(k) = \tfrac1{\sqrt{2\pi}}\int g(x)\,e^{-ikx}\,dx$$
we have $\hat h_\ell = (-i)^\ell\,h_\ell$. The Fourier transform of $h_0$ is
$$
\hat h_0(k) = \tfrac1{\pi^{1/4}}\,\tfrac1{\sqrt{2\pi}}\int e^{-\frac12x^2}\,e^{-ikx}\,dx
= \tfrac1{\pi^{1/4}}\,\tfrac1{\sqrt{2\pi}}\int e^{-\frac12(x+ik)^2}\,e^{-\frac12k^2}\,dx
= \tfrac{e^{-\frac12k^2}}{\pi^{1/4}}\,\tfrac1{\sqrt{2\pi}}\int e^{-\frac12x^2}\,dx
= \tfrac1{\pi^{1/4}}\,e^{-\frac12k^2} = h_0(k)
$$
as desired. Furthermore
$$\widehat{\tfrac{dg}{dx}}(k) = \tfrac1{\sqrt{2\pi}}\int g'(x)\,e^{-ikx}\,dx = -\tfrac1{\sqrt{2\pi}}\int g(x)\,\tfrac{d}{dx}e^{-ikx}\,dx = ik\,\hat g(k)$$
$$\widehat{xg}(k) = \tfrac1{\sqrt{2\pi}}\int x\,g(x)\,e^{-ikx}\,dx = i\tfrac{d}{dk}\,\tfrac1{\sqrt{2\pi}}\int g(x)\,e^{-ikx}\,dx = i\tfrac{d}{dk}\,\hat g(k)$$
so that
$$\widehat{A^\dagger g} = \tfrac1{\sqrt2}\,\widehat{\big(x-\tfrac{d}{dx}\big)g} = \tfrac1{\sqrt2}\big(i\tfrac{d}{dk} - ik\big)\hat g = (-i)\,\tfrac1{\sqrt2}\big(k-\tfrac{d}{dk}\big)\hat g = (-i)\,A^\dagger\hat g$$
As $h_\ell = \tfrac1{\sqrt\ell}A^\dagger h_{\ell-1}$, the claim $\hat h_\ell = (-i)^\ell h_\ell$ follows easily by induction.

Now let $f$ be the function that is identically one. Then
$$\breve f_\ell = \langle f,h_\ell\rangle = \int h_\ell(x)\,dx = \sqrt{2\pi}\,\hat h_\ell(0) = \sqrt{2\pi}\,(-i)^\ell\,h_\ell(0)$$
is bounded uniformly in $\ell$ (see [AS, p. 787]). Since $\sum_{\ell\in\mathbb N}(1+2\ell)^\gamma$ converges for all $\gamma<-1$, $f$ is in $V_\gamma$ for all $\gamma<-1$.
Appendix B. Pfaffians

Problem B.1 Let $T=(T_{ij})$ be a complex $n\times n$ matrix with $n=2m$ even and let $S=\tfrac12(T-T^t)$ be its skew symmetric part. Prove that $\operatorname{Pf} T = \operatorname{Pf} S$.

Solution. Define, for any $m$ matrices $T^{(k)} = \big(T^{(k)}_{ij}\big)$, $1\le k\le m$, each of size $n\times n$,
$$\operatorname{pf}\big(T^{(1)},\dots,T^{(m)}\big) = \tfrac1{2^m m!}\sum_{i_1,\dots,i_n=1}^n \varepsilon_{i_1\cdots i_n}\,T^{(1)}_{i_1i_2}\cdots T^{(m)}_{i_{n-1}i_n}$$
Then
$$
\begin{aligned}
\operatorname{pf}\big(T^{(1)t},T^{(2)},\dots,T^{(m)}\big)
&= \tfrac1{2^m m!}\sum_{i_1,\dots,i_n=1}^n \varepsilon_{i_1\cdots i_n}\,T^{(1)t}_{i_1i_2}\,T^{(2)}_{i_3i_4}\cdots T^{(m)}_{i_{n-1}i_n}\\
&= \tfrac1{2^m m!}\sum_{i_1,\dots,i_n=1}^n \varepsilon_{i_1\cdots i_n}\,T^{(1)}_{i_2i_1}\,T^{(2)}_{i_3i_4}\cdots T^{(m)}_{i_{n-1}i_n}\\
&= \tfrac1{2^m m!}\sum_{j_1,j_2,i_3,\dots,i_n=1}^n \varepsilon_{j_2,j_1,i_3\cdots i_n}\,T^{(1)}_{j_1j_2}\,T^{(2)}_{i_3i_4}\cdots T^{(m)}_{i_{n-1}i_n}\\
&= -\tfrac1{2^m m!}\sum_{j_1,j_2,i_3,\dots,i_n=1}^n \varepsilon_{j_1,j_2,i_3\cdots i_n}\,T^{(1)}_{j_1j_2}\,T^{(2)}_{i_3i_4}\cdots T^{(m)}_{i_{n-1}i_n}\\
&= -\operatorname{pf}\big(T^{(1)},T^{(2)},\dots,T^{(m)}\big)
\end{aligned}
$$
Because $\operatorname{pf}\big(T^{(1)},\dots,T^{(m)}\big)$ is linear in $T^{(1)}$,
$$
\operatorname{pf}\big(\tfrac12\{T^{(1)}-T^{(1)t}\},T^{(2)},\dots,T^{(m)}\big)
= \tfrac12\operatorname{pf}\big(T^{(1)},T^{(2)},\dots,T^{(m)}\big) - \tfrac12\operatorname{pf}\big(T^{(1)t},T^{(2)},\dots,T^{(m)}\big)
= \tfrac12\operatorname{pf}\big(T^{(1)},T^{(2)},\dots,T^{(m)}\big) + \tfrac12\operatorname{pf}\big(T^{(1)},T^{(2)},\dots,T^{(m)}\big)
= \operatorname{pf}\big(T^{(1)},T^{(2)},\dots,T^{(m)}\big)
$$
Applying the same reasoning to the other arguments of $\operatorname{pf}\big(T^{(1)},T^{(2)},\dots,T^{(m)}\big)$,
$$\operatorname{pf}\big(\tfrac12\{T^{(1)}-T^{(1)t}\},\dots,\tfrac12\{T^{(m)}-T^{(m)t}\}\big) = \operatorname{pf}\big(T^{(1)},T^{(2)},\dots,T^{(m)}\big)$$
Setting $T^{(1)}=T^{(2)}=\cdots=T^{(m)}=T$ gives the desired result.
Problem B.2 Let $S = \begin{pmatrix} 0 & S_{12}\\ S_{21} & 0\end{pmatrix}$ with $S_{21}=-S_{12}\in\mathbb C$. Show that $\operatorname{Pf} S = S_{12}$.

Solution. By (B.1) with $m=1$,
$$\operatorname{Pf}\begin{pmatrix} 0 & S_{12}\\ S_{21} & 0\end{pmatrix} = \tfrac12\sum_{1\le k,\ell\le 2}\varepsilon_{k\ell}\,S_{k\ell} = \tfrac12\big(S_{12}-S_{21}\big) = S_{12}$$
Problem B.3 Let $\alpha_1,\dots,\alpha_r$ be complex numbers and let $S$ be the $2r\times 2r$ skew symmetric matrix
$$S = \bigoplus_{m=1}^r \begin{pmatrix} 0 & \alpha_m\\ -\alpha_m & 0\end{pmatrix}$$
Prove that $\operatorname{Pf}(S) = \alpha_1\alpha_2\cdots\alpha_r$.

Solution. All matrix elements of $S$ are zero, except for $r$ $2\times2$ blocks running down the diagonal. For example, if $r=2$,
$$S = \begin{pmatrix} 0 & \alpha_1 & 0 & 0\\ -\alpha_1 & 0 & 0 & 0\\ 0 & 0 & 0 & \alpha_2\\ 0 & 0 & -\alpha_2 & 0\end{pmatrix}$$
By (B.1''),
$$\operatorname{Pf}(S) = \sum_{P\in\mathcal P_r^<}\varepsilon_{k_1\ell_1\cdots k_r\ell_r}\,S_{k_1\ell_1}\cdots S_{k_r\ell_r} = \sum_{\substack{1\le k_1<k_2<\cdots<k_r\le 2r\\ k_i<\ell_i}}\varepsilon_{k_1\ell_1\cdots k_r\ell_r}\,S_{k_1\ell_1}\cdots S_{k_r\ell_r}$$
The conditions $1\le k_1<k_2<\cdots<k_r\le 2r$ and $1\le k_i<\ell_i\le 2r$, $1\le i\le r$, combined with the requirement that $k_1,\ell_1,\dots,k_r,\ell_r$ all be distinct, force $k_1=1$. Then $S_{k_1\ell_1}$ is nonzero only if $\ell_1=2$. When $k_1=1$ and $\ell_1=2$, the same conditions force $k_2=3$, and then $S_{k_2\ell_2}$ is nonzero only if $\ell_2=4$. Continuing in this way,
$$\operatorname{Pf}(S) = \varepsilon_{1,2,\cdots,(2r-1),2r}\,S_{1,2}\cdots S_{2r-1,2r} = \alpha_1\alpha_2\cdots\alpha_r$$
Problem B.4 Let $S$ be a skew symmetric $D\times D$ matrix with $D$ odd. Prove that $\det S = 0$.

Solution. As $S^t = -S$,
$$\det S = \det S^t = \det(-S) = (-1)^D\det S$$
As $D$ is odd, $\det S = -\det S$ and hence $\det S = 0$.
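Definition (B.1) is directly computable for small matrices, so Problems B.1–B.4 can all be spot-checked numerically. The sketch below is my own illustration, not from the text: it evaluates $\operatorname{Pf}$ by brute force over all orderings and the determinant by its permutation expansion.

```python
import itertools
import math
import random

def pfaffian(T):
    """Pf T via (B.1): (1/(2^m m!)) * sum over all orderings i_1..i_n of
    eps_{i_1...i_n} T_{i_1 i_2} ... T_{i_{n-1} i_n}."""
    n = len(T)
    m = n // 2
    total = 0.0
    for p in itertools.permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = (-1) ** inv
        for q in range(m):
            term *= T[p[2 * q]][p[2 * q + 1]]
        total += term
    return total / (2 ** m * math.factorial(m))

def det(A):
    """Determinant by the Leibniz permutation expansion (fine for tiny matrices)."""
    n = len(A)
    total = 0.0
    for p in itertools.permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = (-1) ** inv
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total

random.seed(3)
n = 4
T = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
S = [[(T[i][j] - T[j][i]) / 2 for j in range(n)] for i in range(n)]
assert abs(pfaffian(T) - pfaffian(S)) < 1e-12        # Problem B.1: Pf T = Pf S
assert abs(pfaffian(S) ** 2 - det(S)) < 1e-12        # the classical Pf(S)^2 = det S

# Problem B.3: block-diagonal case
alphas = [0.5, -1.5]
B = [[0.0] * 4 for _ in range(4)]
for q, a in enumerate(alphas):
    B[2 * q][2 * q + 1], B[2 * q + 1][2 * q] = a, -a
assert abs(pfaffian(B) - 0.5 * (-1.5)) < 1e-12

# Problem B.4: odd-dimensional skew matrices have vanishing determinant
S5 = [[0.0] * 5 for _ in range(5)]
for i in range(5):
    for j in range(i + 1, 5):
        S5[i][j] = random.uniform(-1, 1)
        S5[j][i] = -S5[i][j]
assert abs(det(S5)) < 1e-12
print("Pfaffian identities B.1-B.4 verified on small matrices")
```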
Appendix C. Propagator Bounds

Problem C.1 Prove, under the hypotheses of Proposition C.1, that there are constants $C,C'>0$, such that if $|k-k_0^c|\le C$, then there is a point $p_0\in F$ with $(k-p_0)\cdot\hat t = 0$ and $|k-p_0| \le C'\,|e(k)|$.

Solution. We are assuming that $\nabla e$ does not vanish on $F$. As $\nabla e(k_0^c)\parallel\hat n$, $\nabla e(k_0^c)\cdot\hat n$ is nonzero. Hence $\big|\nabla e(k)\cdot\hat n\big|$ is bounded away from zero, say by $C_2$, on a neighbourhood of $k_0^c$. This neighbourhood contains a square centred on $k_0^c$, with sides parallel to $\hat n$ and $\hat t$, such that $F$ is a graph over the sides parallel to $\hat t$.

[Figure: the curve $F$ through $k_0^c$, with unit normal $\hat n$ and unit tangent $\hat t$; a point $k$ inside the square and the point $p_0=k+\varepsilon\hat n$ on $F$.]

Pick $C$ to be half the length of the sides of this square. Then any $k$ within a distance $C$ of $k_0^c$ is also in this square. For each $k$ in this square there is an $\varepsilon$ such that $k+\varepsilon\hat n\in F$ and the line joining $k$ to $k+\varepsilon\hat n$ also lies in the square. Then, since $e$ vanishes on $F$,
$$|e(k)| = \big|e(k) - e(k+\varepsilon\hat n)\big| = \Big|\int_0^\varepsilon \tfrac{d}{dt}\,e(k+t\hat n)\,dt\Big| = \Big|\int_0^\varepsilon \nabla e(k+t\hat n)\cdot\hat n\,dt\Big| \ge C_2\,|\varepsilon| = C_2\,\big|k-(k+\varepsilon\hat n)\big|$$
Pick $p_0 = k+\varepsilon\hat n$; then $|k-p_0| \le \tfrac1{C_2}\,|e(k)|$, so we may take $C'=\tfrac1{C_2}$.

Problem C.2 Let $f:\mathbb R^d\to\mathbb R$ and $g:\mathbb R\to\mathbb R$. Let $1\le i_1,\dots,i_n\le d$. Prove that
$$\prod_{\ell=1}^n \tfrac{\partial}{\partial x_{i_\ell}}\;g\big(f(x)\big) = \sum_{m=1}^n\ \sum_{(I_1,\dots,I_m)\in\mathcal P_m^{(n)}} g^{(m)}\big(f(x)\big)\prod_{p=1}^m\prod_{\ell\in I_p}\tfrac{\partial}{\partial x_{i_\ell}}\,f(x)$$
where $\mathcal P_m^{(n)}$ is the set of all partitions of $(1,\dots,n)$ into $m$ nonempty subsets $I_1,\dots,I_m$ with, for all $i<i'$, the smallest element of $I_i$ smaller than the smallest element of $I_{i'}$.

Solution. The proof is easy by induction on $n$. When $n=1$,
$$\tfrac{\partial}{\partial x_{i_1}}\,g\big(f(x)\big) = g'\big(f(x)\big)\,\tfrac{\partial}{\partial x_{i_1}}f(x)$$
which agrees with the desired conclusion, since $\mathcal P_1^{(1)} = \{(1)\}$: $m$ must be one and $I_1$ must be $(1)$. Once the result is known for $n-1$,
$$
\begin{aligned}
\prod_{\ell=1}^n\tfrac{\partial}{\partial x_{i_\ell}}\,g\big(f(x)\big)
&= \tfrac{\partial}{\partial x_{i_n}}\sum_{m=1}^{n-1}\sum_{(I_1,\dots,I_m)\in\mathcal P_m^{(n-1)}} g^{(m)}\big(f(x)\big)\prod_{p=1}^m\prod_{\ell\in I_p}\tfrac{\partial}{\partial x_{i_\ell}}f(x)\\
&= \sum_{m=1}^{n-1}\sum_{(I_1,\dots,I_m)\in\mathcal P_m^{(n-1)}} g^{(m+1)}\big(f(x)\big)\,\Big[\tfrac{\partial}{\partial x_{i_n}}f(x)\Big]\prod_{p=1}^m\prod_{\ell\in I_p}\tfrac{\partial}{\partial x_{i_\ell}}f(x)\\
&\qquad+ \sum_{m=1}^{n-1}\sum_{(I_1,\dots,I_m)\in\mathcal P_m^{(n-1)}}\sum_{q=1}^m g^{(m)}\big(f(x)\big)\Big[\prod_{\substack{p=1\\ p\ne q}}^m\prod_{\ell\in I_p}\tfrac{\partial}{\partial x_{i_\ell}}f(x)\Big]\;\tfrac{\partial}{\partial x_{i_n}}\prod_{\ell\in I_q}\tfrac{\partial}{\partial x_{i_\ell}}f(x)\\
&= \sum_{m'=1}^n\sum_{(I'_1,\dots,I'_{m'})\in\mathcal P_{m'}^{(n)}} g^{(m')}\big(f(x)\big)\prod_{p=1}^{m'}\prod_{\ell\in I'_p}\tfrac{\partial}{\partial x_{i_\ell}}f(x)
\end{aligned}
$$
For the first sum after the second equals sign, we made the changes of variables $m'=m+1$, $I'_1=I_1,\dots,I'_{m'-1}=I_m$, $I'_{m'}=(n)$. They account for all terms of $(I'_1,\dots,I'_{m'})\in\mathcal P_{m'}^{(n)}$ for which $n$ is the only element of some $I'_\ell$. For each $1\le q\le m$, the $q^{\rm th}$ term in the second sum after the second equals sign accounts for all terms of $(I'_1,\dots,I'_{m'})\in\mathcal P_{m'}^{(n)}$ for which $n$ appears in $I'_q$ and is not the only element of $I'_q$.
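In the one-variable case (all $i_\ell$ equal) this is the classical Faà di Bruno formula, and it can be verified numerically against truncated Taylor-series composition. The sketch below is my own illustration (with $g=\exp$ and $f=\sin$, so $g^{(m)}(f(x_0))=e^{\sin x_0}$ for every $m$), not from the text; it enumerates the set partitions directly.

```python
import math

N = 7      # verify derivatives up to order N
x0 = 0.4

def sin_deriv(k, x):
    """k-th derivative of sin, cycling through sin, cos, -sin, -cos."""
    return [math.sin, math.cos,
            lambda t: -math.sin(t), lambda t: -math.cos(t)][k % 4](x)

def pmul(a, b):
    """Product of Taylor-coefficient lists, truncated at order N."""
    out = [0.0] * (N + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= N:
                out[i + j] += ai * bj
    return out

# Taylor coefficients of u(t) = sin(x0 + t) - sin(x0); u has no constant term
u = [0.0] + [sin_deriv(k, x0) / math.factorial(k) for k in range(1, N + 1)]

# exp(sin(x0 + t)) = exp(sin x0) * sum_k u(t)^k / k!, truncated at order N
h = [0.0] * (N + 1)
h[0] = 1.0
upow = [0.0] * (N + 1)
upow[0] = 1.0
for k in range(1, N + 1):
    upow = pmul(upow, u)
    for i in range(N + 1):
        h[i] += upow[i] / math.factorial(k)
h = [math.exp(math.sin(x0)) * c for c in h]   # h[n] = (d/dx)^n exp(sin x)|_{x0} / n!

def set_partitions(elems):
    """All partitions of a list into nonempty blocks."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        yield [[first]] + part                       # first in its own block
        for i in range(len(part)):                   # or joined to an existing block
            yield part[:i] + [[first] + part[i]] + part[i + 1:]

for n in range(1, N + 1):
    # Faa di Bruno: sum over partitions of {1..n} of g^(m)(f(x0)) prod_p f^(|I_p|)(x0)
    fdb = sum(
        math.exp(math.sin(x0)) * math.prod(sin_deriv(len(blk), x0) for blk in part)
        for part in set_partitions(list(range(n)))
    )
    assert abs(fdb - math.factorial(n) * h[n]) < 1e-9 * max(1.0, abs(fdb))
print("Faa di Bruno matches Taylor composition up to order", N)
```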
References

[AR] A. Abdesselam, V. Rivasseau, Explicit Fermionic Tree Expansions, Letters in Math. Phys. 44 (1998), 77.

[AS] M. Abramowitz and I. A. Stegun, eds., Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, Dover, 1964.

[Be] F. A. Berezin, The Method of Second Quantization, Academic Press, 1966.

[BS] F. Berezin, M. Shubin, The Schrödinger Equation, Kluwer, 1991. Supplement 3: D. Leĭtes, Quantization and supermanifolds.

[FMRS] J. Feldman, J. Magnen, V. Rivasseau and R. Sénéor, A Renormalizable Field Theory: The Massive Gross–Neveu Model in Two Dimensions, Commun. Math. Phys. 103 (1986), 67–103.

[FST] J. Feldman, M. Salmhofer, E. Trubowitz, Perturbation Theory around Non-nested Fermi Surfaces, I: Keeping the Fermi Surface Fixed, Journal of Statistical Physics 84 (1996), 1209–1336.

[GK] K. Gawedzki, A. Kupiainen, Gross–Neveu model through convergent perturbation expansions, Commun. Math. Phys. 102 (1986), 1.

[RS] M. Reed and B. Simon, Methods of Modern Mathematical Physics, I: Functional Analysis, Academic Press, 1972.

The "abstract" fermionic expansion discussed in §II is similar to that in

• J. Feldman, H. Knörrer, E. Trubowitz, A Nonperturbative Representation for Fermionic Correlation Functions, Communications in Mathematical Physics 195 (1998), 465–493.

and is a simplified version of the expansion in

• J. Feldman, J. Magnen, V. Rivasseau and E. Trubowitz, An Infinite Volume Expansion for Many Fermion Green's Functions, Helvetica Physica Acta 65 (1992), 679–721.

The latter also contains a discussion of sectors. Both are available over the web at http://www.math.ubc.ca/∼feldman/research.html