Lecture Notes in Mathematics 830
Editors: J.-M. Morel, Cachan; F. Takens, Groningen; B. Teissier, Paris
J.A. Green
Polynomial Representations of GLn
2nd corrected and augmented edition
with an Appendix on Schensted Correspondence and Littelmann Paths by K. Erdmann, J.A. Green and M. Schocker
Author and co-authors for the Appendix:

James A. Green
19 Long Close, Oxford OX2 9SG, United Kingdom
e-mail: [email protected]

Manfred Schocker
Department of Mathematics, University of Wales Swansea
Singleton Park, Swansea SA2 8PP, United Kingdom
e-mail: [email protected]

Karin Erdmann
Mathematical Institute, University of Oxford
24-29 St Giles, Oxford OX1 3LB, United Kingdom
e-mail: [email protected]
Library of Congress Control Number: 2006934862
Mathematics Subject Classification (2000): Primary: 20C30, 20G05, 20G15, 16S50, 17B99, 05E10
ISSN print edition: 0075-8434
ISSN electronic edition: 1617-9692
ISBN-10 3-540-46944-3 Springer Berlin Heidelberg New York
ISBN-13 978-3-540-46944-5 Springer Berlin Heidelberg New York
DOI 10.1007/3-540-46944-3

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable for prosecution under the German Copyright Law.

Springer is a part of Springer Science+Business Media
springer.com

© Springer-Verlag Berlin Heidelberg 2007

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Typesetting by the authors using a Springer LaTeX package
Cover design: WMXDesign GmbH, Heidelberg
Printed on acid-free paper
Preface to the second edition
This second edition of "Polynomial representations of GLn(K)" consists of two parts. The first part is a corrected version of the original text, formatted in LaTeX, and retaining the original numbering of sections, equations, etc. The second is an Appendix, which is largely independent of the first part, but which leads to an algebra L(n, r), defined by P. Littelmann, which is analogous to the Schur algebra S(n, r). It is hoped that, in the future, there will be a structure theory of L(n, r) rather like that which underlies the construction of Kac-Moody Lie algebras.

We use two operators which act on "words". The first of these is due to C. Schensted (1961). The second is due to Littelmann, and goes back to a 1938 paper by G. de B. Robinson on the representations of a finite symmetric group. Littelmann's operators form the basis of his elegant and powerful "path model" of the representation theory of classical groups. In our Appendix we use Littelmann's theory only in its simplest case, i.e. for GLn.

Essential to my plan was to establish two basic facts connecting the operations of Schensted and Littelmann. To these "facts", or rather conjectures, I gave the names Theorem A and Proposition B. Many examples suggested that these conjectures are true, and not particularly deep. But I could not prove either of them. This work was therefore stalled, until I sought the help of my colleagues Karin Erdmann and Manfred Schocker. They accepted the challenge, and within a few weeks produced proofs of both conjectures. Their proofs constitute the heart of the Appendix, and make it possible to begin a comparison of the Littelmann algebra L(n, r) with the Schur algebra S(n, r). Karin and Manfred have made this Appendix possible, and have written large parts of the text. It has been a happy experience for me to work with them.

A few weeks before the final manuscript of the Appendix was ready, we heard that A. Lascoux, B. Leclerc and J.-Y. Thibon have published a work
on "The plactic monoid", which contains results equivalent to Theorem A and Proposition B. Their methods are rather different from ours, and they prove also many important facts which do not come into our Appendix. We give a brief summary of this work in §D.11.

Oxford, August 2006
Sandy (J. A.) Green
Contents
Polynomial representations of GLn

1 Introduction .......................................................... 1

2 Polynomial Representations of GLn(K): The Schur algebra .............. 11
  2.1 Notation, etc. .................................................... 11
  2.2 The categories MK(n), MK(n, r) ................................... 12
  2.3 The Schur algebra SK(n, r) ....................................... 13
  2.4 The map e : KΓ → SK(n, r) ........................................ 14
  2.5 Modular theory ................................................... 16
  2.6 The module E⊗r ................................................... 17
  2.7 Contravariant duality ............................................ 19
  2.8 AK(n, r) as KΓ-bimodule .......................................... 21

3 Weights and Characters ............................................... 23
  3.1 Weights .......................................................... 23
  3.2 Weight spaces .................................................... 23
  3.3 Some properties of weight spaces ................................. 24
  3.4 Characters ....................................................... 26
  3.5 Irreducible modules in MK(n, r) .................................. 28

4 The modules Dλ,K ..................................................... 33
  4.1 Preamble ......................................................... 33
  4.2 λ-tableaux ....................................................... 33
  4.3 Bideterminants ................................................... 34
  4.4 Definition of Dλ,K ............................................... 35
  4.5 The basis theorem for Dλ,K ....................................... 36
  4.6 The Carter-Lusztig lemma ......................................... 37
  4.7 Some consequences of the basis theorem ........................... 39
  4.8 James's construction of Dλ,K ..................................... 40

5 The Carter-Lusztig modules Vλ,K ...................................... 43
  5.1 Definition of Vλ,K ............................................... 43
  5.2 Vλ,K is Carter-Lusztig's "Weyl module" ........................... 43
  5.3 The Carter-Lusztig basis for Vλ,K ................................ 45
  5.4 Some consequences of the basis theorem ........................... 47
  5.5 Contravariant forms on Vλ,K ...................................... 48
  5.6 Z-forms of Vλ,K .................................................. 50

6 Representation theory of the symmetric group ......................... 53
  6.1 The functor f : MK(n, r) → mod KG(r) (r ≤ n) ..................... 53
  6.2 General theory of the functor f : mod S → mod eSe ................ 55
  6.3 Application I. Specht modules and their duals .................... 57
  6.4 Application II. Irreducible KG(r)-modules, char K = p ............ 60
  6.5 Application III. The functor f : MK(N, r) → MK(n, r) (N ≥ n) ..... 65
  6.6 Application IV. Some theorems on decomposition numbers ........... 67

Appendix: Schensted correspondence and Littelmann paths

A Introduction ......................................................... 73
  A.1 Preamble ......................................................... 73
  A.2 The Robinson-Schensted algorithm ................................. 74
  A.3 The operators ẽc, f̃c ............................................. 75
  A.4 What is to be done ............................................... 78

B The Schensted Process ................................................ 81
  B.1 Notations for tableaux ........................................... 81
  B.2 The map Sch : I(n, r) → T(n, r) .................................. 81
  B.3 Inserting a letter into a tableau ................................ 82
  B.4 Examples of the Schensted process ................................ 85
  B.5 Proof that (µ, U, V) ← x1 belongs to T(n, r) ..................... 88
  B.6 The inverse Schensted process .................................... 89
  B.7 The ladder ....................................................... 92

C Schensted and Littelmann operators ................................... 95
  C.1 Preamble ......................................................... 95
  C.2 Unwinding a tableau .............................................. 96
  C.3 Knuth's theorem .................................................. 103
  C.4 The "if" part of Knuth's theorem ................................. 107
  C.5 Littelmann operators on tableaux ................................. 114
  C.6 The proof of Proposition B ....................................... 116

D Theorem A and some of its consequences ............................... 121
  D.1 Ingredients for the proof of Theorem A ........................... 121
  D.2 Proof of Theorem A ............................................... 124
  D.3 Properties of the operator C ..................................... 127
  D.4 The Littelmann algebra L(n, r) ................................... 129
  D.5 The modules MQ ................................................... 131
  D.6 The λ-rectangle .................................................. 134
  D.7 Canonical maps ................................................... 135
  D.8 The algebra structure of L(n, r) ................................. 137
  D.9 The character of Mλ .............................................. 139
  D.10 The Littlewood-Richardson Rule .................................. 140
  D.11 Lascoux, Leclerc and Thibon ..................................... 143

E Tables ............................................................... 147
  E.1 Schensted's decomposition of I(3, 3) ............................. 147
  E.2 The Littelmann graph I(3, 3) ..................................... 148

Index of symbols ....................................................... 151
References ............................................................. 155
Index .................................................................. 159
1 Introduction
Issai Schur determined the polynomial representations of the complex general linear group GLn(C) in his doctoral dissertation [47], published in 1901. This remarkable work contained many very original ideas, developed with superb algebraic skill. Schur showed that these representations are completely reducible, that each irreducible one is "homogeneous" of some degree r ≥ 0 (see 2.2), and that the equivalence types of irreducible polynomial representations of GLn(C), of fixed homogeneous degree r, are in one-one correspondence with the partitions λ = (λ1, . . . , λn) of r into not more than n parts. Moreover Schur showed that the character of an irreducible representation of type λ is given by a certain symmetric function Sλ in n variables (since described as "Schur function"; see 3.5).

An essential part of Schur's technique was to set up a correspondence between representations of GLn(C) of fixed homogeneous degree r, and representations of the finite symmetric group G(r) on r symbols, and through this correspondence to apply G. Frobenius' discovery of the characters of G(r) (see [17]).

This pioneering achievement of Schur was one of the main inspirations for Hermann Weyl's monumental researches on the representation theory of semi-simple Lie groups [54]. Of course Weyl's methods, based on the representation theory of the Lie algebra of the Lie group Γ, and the possibility of integrating over a compact form of Γ, were very different from the purely algebraic methods of Schur's dissertation; in particular Weyl's general theory contained nothing to correspond to the symmetric group G(r).

In 1927 Schur published another paper [48] on GLn(C), which has deservedly become a classic. In this he exploited the "dual" actions of GLn(C) and G(r) on the r-th tensor space E⊗r (see 2.6) to rederive all the results of his 1901 dissertation in a new and very economical way. Weyl publicized the method of Schur's 1927 paper, with its attractive use of the "double centralizer property", in his influential book "The Classical Groups" [55]. In fact the exposition in Chapters 3B and 4 of that book has become a standard treatment of polynomial representations of GLn(C) (and, incidentally, of Alfred Young's representation theory of the symmetric group G(r)), and perhaps this explains the comparative neglect of
Schur's work of 1901. I think this neglect is a pity, because the methods of this earlier work are in some ways very much in keeping with the present-day ideas on representations of algebraic groups.

It is the purpose of these lectures to give some accounts, in part based on the ideas of Schur's 1901 dissertation, of the polynomial representations of the general linear groups GLn(K), where K is an infinite field of arbitrary characteristic. Our treatment will be "elementary" in the sense that we shall not use algebraic group theory in our main discussion. But it might be interesting to indicate here some general ideas from the representation theory of algebraic groups (or algebraic semigroups, since the group inverse is not important in this context), which are relevant to our work.

Let Γ be any semigroup (i.e. Γ is a set, equipped with an associative multiplication) with identity 1Γ, and let K be any field. A representation τ of Γ on a K-space V (i.e. a vector space over K) is a map τ : Γ → EndK(V) which satisfies τ(gg′) = τ(g)τ(g′), τ(1Γ) = IV, for all g, g′ ∈ Γ. (For any set V, we denote by IV the identity map on V.) We can extend τ linearly to give a map of K-algebras τ : KΓ → EndK(V); here KΓ is the semigroup-algebra of Γ over K, whose elements are all formal linear combinations

κ = Σg∈Γ κg g,    κg ∈ K,
whose support supp κ = { g ∈ Γ : κg ≠ 0 } is finite. We can make KΓ act on V by κv = τ(κ)(v) (κ ∈ KΓ, v ∈ V), and thereby get a left KΓ-module, denoted (V, τ), or simply V. A KΓ-map between such KΓ-modules (V, τ), (V′, τ′) is, by definition, a K-map f : V → V′ (i.e. f is a linear map) which satisfies τ′(g)f = f τ(g) for all g ∈ Γ. A KΓ-map which is bijective is a KΓ-isomorphism, or an equivalence between the representations τ, τ′. One has analogous definitions for right KΓ-modules; a right KΓ-module can be regarded as a pair (V, τ) where τ : Γ → EndK(V) is an anti-representation of Γ on the K-space V, i.e. τ(gg′) = τ(g′)τ(g) for all g, g′ ∈ Γ, τ(1Γ) = IV.

The set K^Γ of all maps Γ → K is a commutative K-algebra, with algebra operations defined "pointwise", e.g. ff′ is defined to take g → f(g)f′(g), for every element ("point") g of Γ. The identity element 1 of K^Γ takes each g ∈ Γ to the identity element 1K of K. If s ∈ Γ and f ∈ K^Γ, then the left and right translates of f by s are defined to be the maps Ls f, Rs f : Γ → K given by

Ls f : g → f(sg),    Rs f : g → f(gs),    g ∈ Γ.
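To illustrate these definitions in the simplest case: take Γ = GL1(K) = K^×, and let c ∈ K^Γ be the coordinate function c(g) = g. Then (Ls c)(g) = c(sg) = s c(g) and (Rs c)(g) = c(gs) = c(g)s, so that Ls c = Rs c = s·c; every translate of c is a scalar multiple of c, and the line Kc is stable under both kinds of translation.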
Each of the operators Ls, Rs maps K^Γ into itself and is a K-algebra map (i.e. K-algebra homomorphism) K^Γ → K^Γ. In particular, Ls, Rs both belong to the space EndK(K^Γ). It is easy to check that R : s → Rs gives a representation of Γ on K^Γ, while L : s → Ls gives an anti-representation. Thus K^Γ can be made into a left KΓ-module (using R) and a right KΓ-module (using L). We denote both module actions by ◦, so that if s ∈ Γ and f ∈ K^Γ we write

s ◦ f = Rs f    and    f ◦ s = Ls f.
Notice that these actions commute: (s ◦ f) ◦ t = s ◦ (f ◦ t) for all s, t ∈ Γ and f ∈ K^Γ; indeed, both sides are the function g → f(tgs). There is a linear map K^Γ ⊗ K^Γ → K^{Γ×Γ} (⊗ means ⊗K) which takes f ⊗ f′ (f, f′ ∈ K^Γ) to the function mapping Γ × Γ → K by (s, t) → f(s)f′(t), for all s, t ∈ Γ. This linear map is injective, and we use it to identify K^Γ ⊗ K^Γ with a subspace of K^{Γ×Γ}. The semigroup structure on Γ gives rise to two maps

∆ : K^Γ → K^{Γ×Γ}    and    ε : K^Γ → K,
as follows: if f ∈ K^Γ, then ∆f : (s, t) → f(st), and ε(f) = f(1Γ). Both ∆, ε are K-algebra maps.

We shall say that an element f ∈ K^Γ is finitary, or is a representative function, if it satisfies any one of the conditions F1, F2, F3 below; these three conditions are in fact equivalent (see e.g. [24, Chapter 2]).

F1. The left KΓ-submodule KΓ ◦ f generated by f is finite-dimensional.
F2. The right KΓ-submodule f ◦ KΓ generated by f is finite-dimensional.
F3. ∆f ∈ K^Γ ⊗ K^Γ. This means that there exist elements fh, fh′ ∈ K^Γ (where h runs over some finite index set) such that

(1a)    ∆f = Σh fh ⊗ fh′.

This equation is equivalent to the system of equations

(1b)    f(st) = Σh fh(s) fh′(t),    all s, t ∈ Γ.

It is also equivalent to each of the following systems

(1c)    t ◦ f = Σh fh′(t) fh,    all t ∈ Γ,

or

(1d)    f ◦ s = Σh fh(s) fh′,    all s ∈ Γ.
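To illustrate F3, take Γ = GL2(K) and let f = c11 ∈ K^Γ be the function g → g11. Since (st)11 = s11 t11 + s12 t21 for all s, t ∈ Γ, we have ∆f = c11 ⊗ c11 + c12 ⊗ c21 ∈ K^Γ ⊗ K^Γ, so f is finitary, the pairs (fh, fh′) being (c11, c11) and (c12, c21). Accordingly (1c) reads t ◦ f = t11 c11 + t21 c12 for all t ∈ Γ, so that KΓ ◦ f is contained in the two-dimensional space Kc11 + Kc12, in agreement with F1.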
The set F = F(K^Γ) of all finitary functions f : Γ → K is a K-bialgebra (see [51] for the definitions of coalgebras and bialgebras). It is a K-subalgebra of K^Γ, and is also closed to ∆ in the sense that ∆F ⊆ F ⊗ F (this means that if f is finitary, the functions fh, fh′ in (1a) can be chosen to be themselves finitary). The K-space F, equipped with the maps ∆ : F → F ⊗ F, ε : F → K, is a K-coalgebra; these two structures on F, of algebra and coalgebra, are linked by the fact that ∆ and ε are both K-algebra maps (see [24, p. 15]).

Finitary functions on Γ appear as coefficient functions of finite-dimensional representations of Γ. Suppose τ is a representation of Γ on a finite-dimensional K-space V. If { vb : b ∈ B } is a K-basis of V, we have equations

(1e)    τ(g)vb = gvb = Σa∈B rab(g) va,    for g ∈ Γ, b ∈ B;
here rab(g) ∈ K. The functions rab : Γ → K (a, b ∈ B) are called coefficient functions of τ, or of the KΓ-module V = (V, τ). The K-span of these functions is a subspace of K^Γ called the coefficient space of τ, or of the KΓ-module V (in [24] this space is called the "space of representative functions" of τ, or V). We denote this space by cf(V) = Σa,b K·rab; it is elementary to verify that it is independent of the choice of the basis {vb}. The matrix R = (rab) gives a matrix representation of Γ, i.e. R(gg′) = R(g)R(g′), R(1Γ) = (δab) for all g, g′ ∈ Γ (δab is the Kronecker delta). These conditions translate into conditions on the coefficients rab, viz.

(1f)    ∆rab = Σc∈B rac ⊗ rcb,    ε(rab) = δab,    all a, b ∈ B.
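For example, let Γ = GL2(K) act on its natural module E = K², with basis v1, v2 and gvb = Σa gab va. The coefficient functions are then the coordinate functions rab = cab : g → gab, the invariant matrix R(g) is just g itself, and cf(E) = Σa,b K·cab. In this case (1f) simply encodes the familiar identities (gg′)ab = Σc gac g′cb and (1Γ)ab = δab.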
The matrix R = (rab) is sometimes called an "invariant matrix" [20, p. 140]. From the first equations it follows that all the coefficient functions rab are finitary, hence that cf(V) is a subspace of F = F(K^Γ). But (1f) also shows that C = cf(V) is a subcoalgebra of F, i.e. that ∆C ⊆ C ⊗ C. As a matter of fact, every finitary function f : Γ → K lies in the coefficient space of some finite-dimensional KΓ-module V; for this purpose we could take V = KΓ ◦ f (see F1). It is for this reason that finitary functions are sometimes called "representative functions".

If S is any K-algebra (possibly of infinite dimension as K-space), mod(S) shall denote the category of all finite-dimensional left S-modules. Similarly, mod′(S) is the category of all finite-dimensional right S-modules. An algebraic representation theory of Γ over K could be defined as follows: first choose a subcoalgebra A of F(K^Γ), i.e. A is a K-subspace of F(K^Γ) satisfying ∆A ⊆ A ⊗ A. Then "A-representation theory" of Γ is defined to be the study of the full subcategory modA(KΓ) of mod(KΓ), whose objects are all finite-dimensional left KΓ-modules V such that cf(V) ⊆ A. (The morphisms f : V → V′ between two objects V, V′ of this category are, by definition, just the KΓ-maps.) In some contexts we say that a KΓ-module V is "rational", or more precisely "A-rational", if cf(V) ⊆ A; then modA(KΓ) is the category of finite-dimensional A-rational left KΓ-modules. It is clear that submodules, quotient-modules and finite direct sums of A-rational modules are themselves A-rational. We can define the category mod′A(KΓ) of finite-dimensional right KΓ-modules which are A-rational in the same way.

The assumption ∆A ⊆ A ⊗ A implies that if f ∈ A, then the functions fh, fh′ appearing in (1a) can themselves be chosen to belong to A. It then follows from (1c), (1d) that A is a left and right KΓ-submodule of K^Γ; and, by quite elementary calculations, that any finite-dimensional left (or right) KΓ-submodule V of A belongs to the category modA(KΓ) (or mod′A(KΓ)).

Examples. 1. Let Γ be an affine algebraic group over an algebraically closed field K (see for example [24, p. 21]), and A = K[Γ] the ring of regular functions on Γ
(A is often called the affine ring of Γ). Then modA(KΓ) is the category of rational (finite-dimensional) KΓ-modules in the usual sense of algebraic group theory. In this case, A is not only a subcoalgebra of F(K^Γ), but a subbialgebra (see [49, p. 46]). The same remarks apply when Γ is an affine algebraic semigroup.

2. Let Γ be a finite semigroup, then of course F(K^Γ) = K^Γ. If we take A = K^Γ, then modA(KΓ) = mod(KΓ). The (left and right) KΓ-module structures on A are dual to the (right and left) "regular" KΓ-module structures on KΓ given by multiplication: we may identify K^Γ with the dual space (KΓ)∗ = HomK(KΓ, K).

3. Let K be an infinite field, n a positive integer, and Γ = GLn(K), the group of all non-singular n × n matrices with coefficients in K. We could take A = AK(n), the ring of all polynomial functions f : Γ → K (see 2.1). The objects (V, τ) in modA(KΓ) (we shall later denote this category by MK(n), see 2.2) are called polynomial KΓ-modules, and the associated representations (including the matrix representations R = (rab) obtained by using the K-bases {vb} of V) are called polynomial representations of Γ. The study of such representations is the subject of these lectures. We get another category (denoted by MK(n, r) in 2.2) by taking A = AK(n, r), the space of polynomial functions on Γ which are homogeneous of degree r in the n² coefficients of a general element g ∈ Γ (see 2.1 for a precise formulation). Finally we might mention that AK(n) can also be regarded as the affine ring of the algebraic semigroup Mn(K) of all n × n matrices (singular or not) over K, so that we may regard polynomial representations of GLn(K) as rational representations of Mn(K), and conversely.

Now suppose once more that Γ is an arbitrary semigroup with identity 1Γ, and that A is a subcoalgebra of the space F(K^Γ) of all finitary functions on Γ. Then A is itself a coalgebra, relative to the maps ∆ : A → A ⊗ A and ε : A → K. So we may consider the category com(A) of all right A-comodules; an object V of com(A) is a finite-dimensional K-space, together with a "structure map" γ : V → V ⊗ A which is K-linear and satisfies the identities (γ ⊗ IA)γ = (IV ⊗ ∆)γ, (IV ⊗ ε)γ = IV (see [20, p. 138], where a right A-comodule is perversely referred to as a left A-comodule; better references are [24, p. 16], [49, p. 38] or [51, p. 30]). Our category modA(KΓ) is equivalent to com(A), as follows: if V ∈ modA(KΓ), take any K-basis {vb} of V and write down the equations (1e). Now define γ : V → V ⊗ A to be the K-linear map given by equations

(1g)    γ(vb) = Σa∈B va ⊗ rab,    for b ∈ B.
It is easy to check that γ is independent of the basis {vb }. Moreover using (1f) we see that γ satisfies the comodule identities just given. Conversely given an A-comodule (V, γ), use equations (1g) to define the elements rab of A; the comodule identities now show that (1f) hold, so we may use (1e) to define the
left KΓ-module V = (V, τ). It is evident that cf(V) ⊆ A. So every A-rational left KΓ-module can be regarded as a right A-comodule, and conversely. The definition of morphism f : V → V′ in com(A) (see the references cited) is such that these morphisms are the same as KΓ-maps in modA(KΓ).

This formal transition from KΓ-modules to A-comodules is rather trivial, but it is nevertheless worth making, from several points of view. To begin with, the basic representation theory of arbitrary A-comodules (we should here work in the category Com(A), whose objects V = (V, γ) are possibly infinite-dimensional) follows to a surprising extent the pattern discovered by R. Brauer and C. Nesbitt for finite-dimensional algebras (see [5, 20]). Included here is the possibility of a modular theory, which we shall discuss below.

Next, the A-comodule interpretation also permits us to profit by an important fact, namely that every right A-comodule can be regarded as a left module for the K-algebra A∗ = HomK(A, K). The algebra structure in A∗ is the dual of the coalgebra structure on A, i.e. if ξ, η ∈ A∗, we define the product ξη (this product is often called "convolution") to be the map of A into K which takes the element f ∈ A to

(1h)    ξη(f) = Σh ξ(fh) η(fh′),

see (1a). The identity element of A∗ is ε : A → K. If V = (V, γ) belongs to com(A), we make V into an A∗-module by the rule ξv = (IV ⊗ ξ)(γ(v)), for ξ ∈ A∗, v ∈ V. Working in terms of a basis {vb} of V, this rule becomes (see (1g))

(1i)    ξvb = Σa∈B ξ(rab) va,    for b ∈ B.
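For instance, return to the natural module E = K² of Γ = GL2(K), regarded as a comodule for A = cf(E) (a subcoalgebra, by (1f)) or for any larger subcoalgebra of F(K^Γ): here γ(vb) = Σa va ⊗ cab. The rule (1i) says that ξ ∈ A∗ acts on E through the matrix (ξ(cab)), and combining (1h) with (1f) gives (ξη)(cab) = Σc ξ(cac) η(ccb); thus the matrix of ξη is the product of the matrices of ξ and η, which is one way of seeing that (1i) really does define a representation of the algebra A∗.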
Therefore we have three kinds of matrix representation associated with our original KΓ-module V = (V, τ), relative to the basis {vb}:

(i) the representation g → (rab(g)) of Γ;
(ii) the matrix R = (rab) whose elements are functions on Γ, satisfying equations (1f), and which can be thought of as a kind of representation of the coalgebra A;
(iii) the representation ξ → (ξ(rab)) of the algebra A∗, given by equations (1i).

We can recover (i) from (iii) very easily: for each g ∈ Γ let eg : A → K be "evaluation at g", i.e. eg(f) = f(g), for all f ∈ A. Then eg ∈ A∗, and the map e : Γ → A∗ satisfies eg eg′ = egg′, e1Γ = ε, for g, g′ ∈ Γ. So e may be extended linearly to a K-algebra map e : KΓ → A∗, and if we compose the representation (iii) with e, we recover (i).

If A is finite-dimensional, then it is quite elementary to show that the two categories modA(KΓ) and mod(A∗) are equivalent; this amounts to showing that every finite-dimensional left A∗-module V yields a module in modA(KΓ) by composition with the map e. Schur exploited this fact in
the case A = AK(n, r), and could thereby work with the finite-dimensional algebra AK(n, r)∗ = SK(n, r) (which I have called the "Schur algebra" in these lectures, see 2.3, 2.4), instead of with the infinite-dimensional and irrelevantly complicated group algebra KΓ.

If A is infinite-dimensional, it is useful in many cases to regard modules V ∈ modA(KΓ) as modules over some "dense" subalgebra S of A∗ (S is dense in A∗ if, for every 0 ≠ a ∈ A, there is some ξ ∈ S such that ξ(a) ≠ 0). When A = K[Γ] is the affine ring of a connected algebraic group Γ over an algebraically closed field K, one may take for S the "hyperalgebra" hy(Γ) of Γ (see [9, §6]). In case Γ is simply-connected and semisimple, the correspondence between modA(KΓ) and mod(S) sets up an equivalence of categories (J. Sullivan; see [9, 6.8]). Moreover in that case hy(Γ) can be identified with an algebra UK constructed out of the complex semisimple Lie algebra associated with the root system of Γ (W. Haboush; [9, 6.5, 6.6] or [22, 1.3]). This algebra UK (which is sometimes defined to be the hyperalgebra of Γ) has an explicit basis with sufficiently good multiplicative properties to make it immensely valuable in studying the rational representations of Γ. In an important paper [6] R. Carter and G. Lusztig have used the hyperalgebra, rather than the Schur algebra, to investigate the polynomial representations of Γ = GLn(K).

Carter-Lusztig use the idea, which is derived from C. Chevalley's fundamental paper [7] on split semisimple algebraic groups, that the family of all groups GLn(K) (n fixed, K varying over some class K of commutative rings) is "defined over Z". This makes possible a "modular theory" for the polynomial representations of these groups, which in its essentials corresponds to R. Brauer's modular representation theory for finite groups. We can give a sufficiently general setting for such a theory as follows. Suppose we have a family {ΓK, AK}, where for each K in the class K of all infinite fields, ΓK is a semigroup and AK is a K-subcoalgebra of F(K^ΓK). Suppose also that the following two conditions are satisfied. (Q denotes the rational field.)

Z1. The Q-coalgebra AQ = (AQ, ∆Q, εQ) contains a Z-form AZ, i.e. (a) AZ is a lattice in AQ, which means AZ = Σν Z aν for some Q-basis {aν} of AQ, and (b) ∆Q(AZ) ⊆ AZ ⊗ AZ, εQ(AZ) ⊆ Z.

Z2. For each K ∈ K there is a K-coalgebra isomorphism αK : AZ ⊗ K → AK (here ⊗ means ⊗Z, and AZ ⊗ K is made into a K-coalgebra by "extension of scalars").

In this case we say that the family {ΓK, AK} is defined over Z by means of AZ.

Examples. 4. Let π : GC → EndC E be a faithful representation of a complex semisimple Lie algebra GC over a complex vector space E of finite dimension n, and let EZ be an "admissible lattice" in E (see [4, p. A-5] or [50, p. 17]). For
each K ∈ K let ΓK be the Chevalley group over K defined by π, EZ ; its elements can be regarded as matrices g = (gµν ) in SLn (K). For each pair (µ, ν) define the coefficient function cK µν : g → gµν . From the equations ∆cK µν = Σλ cK µλ ⊗ cK λν , we deduce that the K-subalgebra generated by all the cK µν is a K-subcoalgebra (hence even a K-subbialgebra) of F (K ΓK ); we take this to be AK . Chevalley showed in [7] (see also [4, §4]) that the family {ΓK , AK } is defined over Z. The relevant Z-form AZ of AQ is just the subring of AQ generated by the cQ µν ; the maps αK are K-algebra (as well as K-coalgebra) isomorphisms, and take cQ µν ⊗ 1K → cK µν for all µ, ν. (From the standpoint of algebraic group theory, each pair (ΓK , AK ) is an affine algebraic group defined over K, and the family {ΓK , AK } is an “affine group scheme over Z”, defined by the Z-bialgebra AZ . See [49, p. 46].)
5. Fix a positive integer n, and let ΓK = GLn (K) for each K ∈ K. For AK we may take either AK (n), or AK (n, r) for some fixed r ≥ 0 (see 2.1). It is completely elementary to verify that in each case the family {ΓK , AK } is defined over Z; the relevant Z-forms AZ (n), AZ (n, r) are described in 2.5. In these lectures, we study the family {ΓK , AK (n, r)}.
The first essential of the modular representation theory of any family {ΓK , AK } which is defined over Z, is the process of modular reduction. We shall write MK for the category modAK (KΓK ), for any K ∈ K. Then an object VQ in MQ is a finite-dimensional Q-space on which ΓQ acts. If { vb,Q : b ∈ B } is a Q-basis of VQ , we have equations like (1e)
(1j) gvb,Q = Σa∈B rQ ab (g) va,Q , for g ∈ ΓQ , b ∈ B.
Here the functions rQ ab belong to AQ , and satisfy equations like (1f). We make the following definition: a subset VZ of VQ is called a Z-form (or admissible lattice) of VQ if (a) VZ is a lattice in VQ , which means VZ = Σb Z vb,Q for some Q-basis {vb,Q } of VQ , and (b) all the coefficient functions rQ ab , relative to this basis, lie in AZ .
Another way of expressing condition (b) is to convert VQ into an AQ-comodule by means of the map γQ : VQ → VQ ⊗ AQ , using equations like (1g). Then (b) is equivalent to (b′) γQ (VZ ) ⊆ VZ ⊗ AZ . Now suppose that K ∈ K. We can make the K-space VK = VZ ⊗ K (here ⊗ means ⊗Z ) into an object of MK , as follows. Define rK ab = αK (rQ ab ⊗ 1K ) ∈ AK , using the K-coalgebra isomorphism αK : AZ ⊗ K → AK postulated in Z2. These rK ab satisfy equations like (1f). So we may define an action of ΓK on VK by equations
(1k) gvb,K = Σa∈B rK ab (g) va,K , for g ∈ ΓK , b ∈ B.
Here vb,K = vb,Q ⊗ 1K , for b ∈ B. The process by which VQ is converted, via the Z-form VZ , into VK is called modular reduction. A general theorem guarantees that each VQ ∈ MQ possesses at least one Z-form VZ (see [49, Lemma 2, p. 43] or [20, (2.2d), p. 159]). Different Z-forms VZ , V′Z , . . . of the same VQ may give non-isomorphic VK = VZ ⊗ K, V′K = V′Z ⊗ K, . . . in MK , but another general theorem (due in its original form to Brauer and Nesbitt) says that all these modules VK , V′K , . . . have the same composition factor multiplicities; from this the notion of decomposition numbers can be defined (see [49, p. 44] or [20, (2.5a), p. 162]). In these lectures we take Γ = GLn (K), where K is an infinite field, and study KΓ-modules V = (V, τ ), which belong to the category MK (n, r), for a fixed homogeneity degree r (see Example 3, above). In chapter 2 the Schur algebra SK (n, r) is defined, and it is shown how KΓ-modules in MK (n, r) can be regarded as left SK (n, r)-modules, and conversely. An alternative description of SK (n, r) is that it is the endomorphism algebra of the rth tensor space E ⊗r , when the latter is given its natural structure as a module for the symmetric group G(r). This has as corollary Schur’s theorem (2.6e): if char K = 0, then every module V in MK (n, r) is completely reducible. Schur’s multiplication rule for SK (n, r) (see (2.3b)) provides an effective method for calculating with modules in MK (n, r). For example, the “weight spaces” of such a module V are easily expressed in terms of certain idempotent elements ξα in SK (n, r). Weights and characters are discussed in chapter 3. By definition, the character of V is a symmetric polynomial over Z, which is homogeneous of degree r in a set of n variables X1 , . . . , Xn . In 3.5 is reproduced the argument by which Schur showed that the isomorphism classes of irreducible modules in MK (n, r) are in one-one correspondence with the partitions λ = (λ1 , . . . , λn ) of r into not more than n parts. Of course Schur considered only the case K = C, but his argument requires only minor modification for an arbitrary infinite field K. The character of an irreducible module of type λ depends only on the characteristic p of K; we write this φλ,p . For p ≠ 0 these characters have not yet been determined except in special cases. For p = 0, Schur showed in [47] that they are the symmetric functions now known as “Schur functions”. A proof of this is given at the end of 3.5—our proof uses some identities involving symmetric functions which can be found, for example, in I. G. Macdonald’s recent book [39]. In chapters 4 and 5, I have departed widely from Schur’s dissertation. These chapters are concerned with the construction, for each λ and for each K, of two modules Dλ,K and Vλ,K in our category MK (n, r). They are “explicit” in the sense that a basis can be given for each. They are dual to each other, in the sense of the “contravariant” duality described in 2.7. Vλ,K has a unique irreducible factor module; this is denoted Fλ,K . Dλ,K has a unique minimal submodule, which is isomorphic to Fλ,K . The set {Fλ,K }, as λ ranges over all partitions λ of r into not more than n parts, gives a full set of
irreducible modules in MK (n, r). If char K = 0, then Fλ,K ≅ Vλ,K ≅ Dλ,K . But for char K = p ≠ 0, knowledge of Fλ,K is still very incomplete. The history of the modules Dλ,K and Vλ,K is interesting, and I am indebted to J. Towber (see [52]) for much of the following information. Dλ,K is generated by certain determinantal expressions (here denoted (Tl : Ti )), whose significance as “primary covariants” was noted by J. Deruyts [13], in 1892. Although Schur refers to two later papers of Deruyts, there is no sign in [47] that he appreciated that Deruyts had really given a complete set of irreducible modules in MC (n, r). The discovery of the basis of the “standard” (Tl : Ti ) seems to go back to A. Young [58, 1902]. The observation that the Dλ,K can be constructed over an arbitrary field—or equivalently that the (Tl : Ti ) generate a Z-form Dλ,Z in Dλ,Q —was made by G. Higman [23, 1965]. The Vλ,K (and the Z-form Vλ,Z ) were constructed, independently of all this, by R. Carter and G. Lusztig [6, 1974]. They called these “Weyl modules”, and their construction was based on methods used in the theory of semisimple algebraic groups. Towber [52] showed that Dλ,K and Vλ,K are dual to each other—his framework is “functorial” and more general than ours. M. Clausen [8] has used recent combinatorial theory of G.-C. Rota and his collaborators [15] to construct both modules Dλ,K and Vλ,K . G. D. James describes, in his book [27], some KΓ-modules which are isomorphic to the Dλ,K . His construction is quite different from those above; we show in 4.8 that it yields the important and deep fact that Dλ,K is an “induced” module, in the sense of algebraic group theory. Chapter 6 returns to Schur’s dissertation. I have “reversed” the elegant procedure by which he constructed KΓ-modules from modules for the symmetric group G(r). This provides an interesting illumination of some recent work of James on the modular representation theory of the symmetric group.
2 Polynomial Representations of GLn (K): The Schur algebra
2.1 Notation, etc. Let n be a positive integer, K an infinite field, and Γ = GLn (K) the group of all non-singular n × n matrices over K. For each pair µ, ν of elements of n = {1, . . . , n}, let cµν ∈ K Γ be the function which associates to each g ∈ Γ its (µ, ν)-coefficient gµν . Denote by A or AK (n) the K-subalgebra of K Γ generated by the functions cµν (µ, ν ∈ n); the elements of A are, by definition, the polynomial functions on Γ. Since K is infinite, the cµν are algebraically independent over K, so that A can be regarded as the algebra of all polynomials over K in n² “indeterminates” cµν (µ, ν ∈ n). For each r ≥ 0 we denote by AK (n, r) the subspace of A consisting of the elements expressible as polynomials which are homogeneous of degree r in the cµν . Then AK (n, r) has finite dimension (n²+r−1 choose r) as K-space; in particular AK (n, 0) = K · 1A , where 1A denotes the constant function which maps g → 1K for all g ∈ Γ. The K-algebra A has the standard grading
(2.1a) A = AK (n) = ⊕r≥0 AK (n, r).
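To illustrate, take n = 2: then A = AK (2) is the polynomial algebra K[c11 , c12 , c21 , c22 ], and the determinant function det = c11 c22 − c12 c21 , being homogeneous of degree 2 in the cµν , lies in the component AK (2, 2) of the grading (2.1a). More generally det ∈ AK (n, n) for every n.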
If integers n, r (both ≥ 1) are given, we write I(n, r) for the set of all functions i : r → n. Such a function is usually written as a vector or “multi-index” i = (i1 , . . . , ir ), with values iρ ∈ n. The symmetric group on the set r = {1, . . . , r} is denoted G(r) or G. It acts naturally on the right on I(n, r) by iπ = (iπ(1) , . . . , iπ(r) ), so that π ∈ G(r) acts as a “place-permutation” on each i ∈ I(n, r). We make G(r) act also on the set I(n, r) × I(n, r) by (i, j)π = (iπ, jπ). We write i ∼ j to indicate that the elements i, j of I(n, r) are in the same G(r)-orbit, i.e. that j = iπ for some π ∈ G(r). Similarly (i, j) ∼ (k, l) means that k = iπ and l = jπ for some π ∈ G(r). As an example of the use of this notation, notice that AK (n, r) is spanned, as K-space, by the monomials (2.1b)
ci,j = ci1 j1 ci2 j2 · · · cir jr ,
for all i, j ∈ I(n, r). Of course the pair (i, j) is not uniquely determined by the monomial (2.1b); in fact ci,j = ck,l if and only if (i, j) ∼ (k, l). The space AK (n, r) has as K-basis the set of distinct monomials (2.1b), and these are in bijective correspondence with the G(r)-orbits of I(n, r) × I(n, r). Thus the number of these orbits is (n²+r−1 choose r).
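As a check on this count, take n = r = 2. Then the distinct monomials (2.1b) are exactly the ten monomials of degree 2 in the four functions c11 , c12 , c21 , c22 , and indeed (n²+r−1 choose r) = (5 choose 2) = 10. For instance the pairs (i, j) = ((1, 2), (1, 2)) and ((2, 1), (2, 1)) lie in the same G(2)-orbit and give the same monomial c11 c22 .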
2.2 The categories MK (n), MK (n, r) The maps ∆ : K Γ → K Γ×Γ , ε : K Γ → K (see introduction) behave as follows on the functions cµν (µ, ν ∈ n):
(2.2a) ∆(cµν ) = Σλ∈n cµλ ⊗ cλν , ε(cµν ) = δµν .
These follow from the rule for multiplying two matrices, and from the formula for the unit element 1Γ of Γ. Since ∆, ε are both multiplicative we deduce, for any “multi-indices” p, q ∈ I(n, r) of length r ≥ 1,
(2.2b) ∆(cp,q ) = Σs∈I cp,s ⊗ cs,q , ε(cp,q ) = δp,q .
Here δp,q = 1 or 0, according as p = q or p ≠ q. These formulae show that A = AK (n) is a subcoalgebra (hence also a subbialgebra) of F (K Γ ), and that each AK (n, r) is a subcoalgebra of AK (n) (for r = 0 this is because ∆1A = 1A ⊗ 1A ). We shall write MK (n) and MK (n, r) for the categories modAK (n) (KΓ) and modAK (n,r) (KΓ). Thus MK (n) is the category of finite-dimensional (left) KΓ-modules which afford “polynomial” representations of Γ = GLn (K); and MK (n, r) is the subcategory consisting of those affording representations in which all the coefficients are polynomials homogeneous of degree r in the cµν . By an argument first given by Schur [47, p. 5] in case K = C, but valid for any infinite field K of any characteristic, we have
(2.2c) Theorem. Each KΓ-module V ∈ MK (n) has a direct sum decomposition V = ⊕r≥0 Vr ,
where for each r ≥ 0, Vr is a sub-module of V with cf(Vr ) ≤ AK (n, r), that is Vr ∈ MK (n, r). In other words each polynomial representation of Γ is equivalent to a direct sum of homogeneous ones. Remark. In fact (2.2c) follows from a general theorem on A-comodules, where A is any coalgebra which is a direct sum A = Σp Ap of subcoalgebras Ap (p ranging over an index set P ). This theorem says that if V = (V, τ ) ∈ com(A), then V = Σp Vp , where for each p ∈ P the space Vp is the unique maximum sub-comodule of V such that cf(Vp ) ≤ Ap , i.e. such that Vp ∈ com(Ap ) [20, p. 156, (1.6c). The proof given there does not depend on the assumption that the subcoalgebra summands Ap are minimal].
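For example, the one-dimensional KΓ-module on which each g ∈ Γ acts as multiplication by det g lies in MK (n, n): its only coefficient function is det, which is a homogeneous polynomial of degree n in the cµν .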
2.3 The Schur algebra SK (n, r) Theorem (2.2c) shows that each indecomposable module V ∈ MK (n) is homogeneous, i.e. V ∈ MK (n, r) for some r ≥ 0. This means that we may as well confine our attention to homogeneous modules. From now on, let r ≥ 0 be fixed, and define SK (n, r) to be the dual space of AK (n, r): SK (n, r) = AK (n, r)∗ = HomK (AK (n, r), K). As K-space, SK (n, r) has basis {ξi,j : i, j ∈ I(n, r)} dual to the basis {ci,j : i, j ∈ I(n, r)} of AK (n, r). For i, j ∈ I(n, r), ξi,j is the element of SK (n, r) given by
ξi,j (cp,q ) = 1 if (i, j) ∼ (p, q), and ξi,j (cp,q ) = 0 if (i, j) ≁ (p, q), for all p, q ∈ I(n, r).
As with the ci,j we have an equality rule to take into account: ξi,j = ξk,l if and only if (i, j) ∼ (k, l). The dimension of SK (n, r) is of course equal to (n²+r−1 choose r) = dim AK (n, r). Since AK (n, r) is a coalgebra, its dual SK (n, r) is an associative algebra. We saw in §1 that the product ξη of elements ξ, η of SK (n, r) is defined as follows: if c ∈ AK (n, r) and if ∆(c) = Σt ct ⊗ c′t ,
where the sum is finite and the ct , c′t ∈ AK (n, r), then
(2.3a) (ξη)(c) = Σt ξ(ct ) η(c′t ).
The unit element of SK (n, r) will be denoted ε ; it is given by ε(c) = c(1Γ ) for all c ∈ AK (n, r). Applying (2.3a) to a basis element c = cp,q of AK (n, r), we get (see (2.2b))
(ξη)(cp,q ) = Σs∈I(n,r) ξ(cp,s ) η(cs,q ).
Specializing to the case where ξ = ξi,j , η = ξk,l are basis elements of SK (n, r), we deduce a
Multiplication Rule for SK (n, r).
(2.3b) ξi,j ξk,l = Σp,q {Z(i, j, k, l, p, q) · 1K } ξp,q ,
where the sum is over a set of representatives (p, q) of the G(r)-orbits of I(n, r) × I(n, r), and Z(i, j, k, l, p, q) = Card{ s ∈ I(n, r) : (i, j) ∼ (p, s) and (k, l) ∼ (s, q) }.
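To see the rule (2.3b) at work in the simplest case, take r = 1, so that I(n, 1) = n and all G(1)-orbits are singletons. Then Z(i, j, k, l, p, q) = Card{ s ∈ n : i = p, j = s, k = s, l = q }, which is 1 if j = k, p = i, q = l, and 0 otherwise; hence (2.3b) reduces to ξi,j ξk,l = δjk ξi,l . In other words the ξi,j multiply like matrix units, and SK (n, 1) is isomorphic to the algebra of all n × n matrices over K, whose dimension n² agrees with the formula above.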
This multiplication rule (rather differently expressed) is due to Schur (see [47, p. 20]). Some special cases are worth noticing. (2.3c) For any i, j, k, l ∈ I(n, r) there hold (i) ξi,j ξk,l = 0 unless j ∼ k, and (ii) ξi,i ξi,j = ξi,j = ξi,j ξj,j . For example, (i) holds because if ξi,j ξk,l ≠ 0, then by (2.3b) there must exist s, p, q with (i, j) ∼ (p, s) and (k, l) ∼ (s, q). This implies j ∼ s and k ∼ s, hence j ∼ k. From (2.3c) follows that ξi,i² = ξi,i , and ξi,i ξj,j = 0 if i ≁ j. Of course if i ∼ j, then (i, i) ∼ (j, j) and hence ξi,i = ξj,j . But the distinct ξi,i form a set of mutually orthogonal idempotents, and their sum is the unit element ε of SK (n, r). This last equation,
(2.3d) ε = Σi ξi,i , sum over a set of representatives of the G(r)-orbits of I(n, r),
is proved by evaluating both sides at all the basis elements cp,q of AK (n, r). Of great importance in the modular theory for GLn is the fact that, for fixed n, r, the scheme or family of algebras SK (n, r) is “defined over Z”, in the following sense. Let us use a superscript K to denote the basis elements ξK i,j of SK (n, r). It is clear from (2.3b) that the Z-submodule SZ (n, r) of SQ (n, r), which is generated by the ξQ i,j (i, j ∈ I(n, r)), is multiplicatively closed—it is a Z-order in SQ (n, r). And for any field K, there is an isomorphism of K-algebras SZ (n, r) ⊗ K ≅ SK (n, r) which takes each ξQ i,j ⊗ 1K → ξK i,j .
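For instance, when n = r = 2 the G(2)-orbits of I(2, 2) have representatives (1, 1), (1, 2) and (2, 2), so (2.3d) reads ε = ξ(1,1),(1,1) + ξ(1,2),(1,2) + ξ(2,2),(2,2) , a sum of three mutually orthogonal idempotents; these will reappear as the weight idempotents ξα in 3.2.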
2.4 The map e : KΓ → SK (n, r) For each g ∈ Γ we define the element eg ∈ SK (n, r) by eg (c) = c(g) for all c ∈ AK (n, r). It is clear from (2.3a) and §1 that eg eg′ = egg′ for all g, g′ ∈ Γ; also e1 = ε by the definition of ε. So if we extend the map g → eg linearly we get a map e : KΓ → SK (n, r) which is a morphism of K-algebras. Any function f ∈ K Γ has a unique extension to a linear map f : KΓ → K. With this convention, the image under e of an element κ = Σg κg g ∈ KΓ, is “evaluation at κ”; i.e. (2.4a)
e(κ) : c → c(κ), all c ∈ AK (n, r).
Propositions (2.4b), (2.4c) give the most important facts about e. (2.4b) Proposition. (i) e is surjective. (ii) Let Y = Ker e, and let f be any element of K Γ . Then f ∈ AK (n, r) if and only if f (Y ) = 0.
Proof. (i) If Im e were a proper subspace of SK (n, r) = AK (n, r)∗ , there would exist some 0 ≠ c ∈ AK (n, r) such that eg (c) = c(g) = 0 for all g ∈ Γ, a contradiction. (ii) If f ∈ AK (n, r) and κ ∈ Y , we have e(κ) = 0 and hence f (κ) = 0, by (2.4a). So f (Y ) = 0. Now suppose conversely that f is any element of K Γ such that f (Y ) = 0. By (i) there is an exact sequence
0 −→ Y −→ KΓ −e→ SK (n, r) −→ 0,
from which it is clear that there exists an element y ∈ SK (n, r)∗ such that y(e(κ)) = f (κ) for all κ ∈ KΓ. By the natural isomorphism SK (n, r)∗ ≅ AK (n, r), there exists c ∈ AK (n, r) such that y(ξ) = ξ(c) for all ξ ∈ SK (n, r). Put ξ = e(κ), then we have f (κ) = e(κ)(c) = c(κ), all κ ∈ KΓ. Therefore f = c, and the proof of (2.4b) is complete. (2.4c) Proposition. Let V ∈ mod(KΓ). Then V ∈ MK (n, r) if and only if Y V = 0. Proof. Let {vb } be a basis of V , and (rab ) the invariant matrix afforded by the action of KΓ on this basis (see §1). Clearly Y V = 0 if and only if rab (Y ) = 0 for all a, b. By the last proposition, this is equivalent to saying that all the rab lie in AK (n, r), that is, that cf(V ) ≤ AK (n, r). But of course this is the condition for V to belong to MK (n, r), and so the proof of (2.4c) is complete. These propositions show that the categories MK (n, r) and mod(SK (n, r)) are equivalent, and in a very elementary way; an object V in either category can be transformed into an object of the other, using the rule (2.4d)
κv = e(κ)v, all κ ∈ KΓ, v ∈ V
to relate the action on V of the two algebras KΓ and SK (n, r). Since both actions determine the same algebra of linear transformations on V , the concepts of submodule, module homomorphism, etc. coincide in the two categories. This category equivalence was one of the main techniques used by Schur in his dissertation [47, p. 21]. We might mention that the action of SK (n, r) = AK (n, r)∗ on a module V ∈ MK (n, r), which is given by (2.4d), is the same as that which is obtained by the general procedure outlined in the introduction. If the action of Γ on a basis {vb } of V is given by equations (1e), then the action of SK (n, r) is given by
ξvb = Σa ξ(rab ) va , all ξ ∈ SK (n, r), b ∈ B.
For it is clear that (2.4d) holds, whenever κ = g and v = vb . By linearity it holds for all κ ∈ KΓ and v ∈ V .
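As an illustration, take r = 1 and g ∈ Γ. Evaluating both sides at the basis elements cρσ shows that eg = Σµ,ν∈n gµν ξµ,ν in SK (n, 1); combined with the matrix-unit multiplication of the ξµ,ν noted after (2.3b), this exhibits e : KΓ → SK (n, 1) as the map which sends a formal linear combination κ = Σg κg g of group elements to the corresponding linear combination of matrices.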
2.5 Modular theory R. Brauer’s theory of modular representations of finite groups was extended, by Brauer himself [5] and by Nakayama [42, 43, 44], to finite dimensional algebras. More recently Serre [49] (see also [20]) extended the theory to affine algebraic groups, or rather to affine algebraic group schemes. (The group scheme GLn is a functor, which associates to each commutative ring K the group ΓK = GLn (K).) We can describe the characteristic modular “reduction” or “decomposition” process as follows. Let AZ (n), AZ (n, r) be the subsets of AQ (n), AQ (n, r) respectively, consisting of those polynomials in the cµν whose coefficients all lie in Z. These are “Z-forms” of AQ (n), AQ (n, r); for example, AZ (n, r) is the Z-span of the Q-basis {cQ i,j } of AQ (n, r), and we have ∆AZ (n, r) ⊆ AZ (n, r) ⊗ AZ (n, r) and ε(AZ (n, r)) ≤ Z (see (2.2b)). For any infinite field K, there is a K-coalgebra isomorphism AZ (n, r) ⊗ K ≅ AK (n, r) which takes cQ i,j ⊗ 1K → cK i,j for all i, j ∈ I(n, r). The Z-order SZ (n, r) which we defined in 2.3, is the set of all elements ξ ∈ SQ (n, r) such that ξ(AZ (n, r)) ≤ Z. Now let VQ be any object in MQ (n, r); we shall regard VQ as a module for SQ (n, r) when this is convenient. By a Z-form of VQ is meant a subset VZ which (i) is the Z-span of some Q-basis {vb } of VQ , and (ii) is closed to the action of SZ (n, r). If RQ = (rab ) is the invariant matrix defined by {vb } (see (1e)), then condition (ii) just says that all the rab lie in AZ (n, r). Still another formulation of (ii) is that τ (VZ ) ⊆ VZ ⊗ AZ (n, r), where (VQ , τ ) is the AZ (n, r)-comodule determined by VQ . That every QΓQ -module VQ in MQ (n, r) contains at least one Z-form, follows from [5, p. 256, §6] or [49, p. 43, lemme 2] or [20, p. 158, (2.2c)]. Now take any infinite field K. It is clear that the K-space VK = VZ ⊗ K can be regarded as a left module for SK (n, r) ≅ SZ (n, r) ⊗ K, hence as a KΓK -module in MK (n, r). The transition from VQ to VK is particularly easy to express in terms of invariant matrices; the invariant matrix RK defined by the K-basis {vb ⊗ 1K } of VK , is (rab ⊗ 1K ), where (rab ) = RQ is the invariant matrix defined by the basis {vb } of VQ . In the case where K has finite characteristic p, this amounts to “reducing mod p” the coefficients of RQ . Our notation in the preceding discussion conceals a disadvantage: in general there are many different Z-forms VZ , V′Z , . . . of a given QΓQ -module VQ ∈ MQ (n, r), and the corresponding KΓK -modules VK = VZ ⊗ K, V′K = V′Z ⊗ K, . . . may be not all isomorphic. However one of the classical results of modular theory, deducible from [5, p. 258, (8)] or [49, p. 44, théorème 2] or [20, p. 162, (2.5a)], says that, for any type of simple KΓK -module Lλ ∈ MK (n, r), the multiplicity mλ (VK ) of Lλ as a composition factor in VK depends only on VQ , i.e. is the same for all Z-forms VZ of VQ .
In the case that VQ = Vµ is a simple QΓQ -module, this multiplicity is often written dµλ , and referred to as a decomposition number for the modular reduction MQ (n, r) → MK (n, r).
2.6 The module E⊗r Fix our infinite field K, and write Γ = ΓK = GLn (K). Let E = EK = K · e1 ⊕ · · · ⊕ K · en be an n-dimensional K-space with a basis { eν : ν ∈ n } on which Γ acts “naturally”:
geν = Σµ∈n gµν eµ = Σµ∈n cµν (g) eµ , all g ∈ Γ, ν ∈ n.
Since the corresponding invariant matrix is C = (cµν ), we see that the KΓ-module E is an object of MK (n, 1). Now let r ≥ 1, then Γ acts on the r-fold tensor power E ⊗r = E ⊗ · · · ⊗ E in the usual way (⊗ here means ⊗K ). The space E ⊗r has K-basis { ei = ei1 ⊗ · · · ⊗ eir : i ∈ I(n, r) }, and relative to this the action of Γ is given by
gej = gej1 ⊗ · · · ⊗ gejr = Σi∈I(n,r) gi1 j1 · · · gir jr ei = Σi∈I(n,r) ci,j (g) ei ,
all g ∈ Γ, j ∈ I(n, r). The corresponding invariant matrix is (ci,j ) = C × · · · × C, and this shows that E ⊗r ∈ MK (n, r). According to what was said in 2.4, E ⊗r can be regarded as an SK (n, r)-module, by the rule
(2.6a) ξej = Σi∈I(n,r) ξ(ci,j ) ei , all ξ ∈ SK (n, r), j ∈ I(n, r).
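For r = 1 the rule (2.6a) reads ξeν = Σµ∈n ξ(cµν ) eµ , so the basis element ξµ,ν of SK (n, 1) sends eν to eµ and kills eλ for λ ≠ ν; that is, ξµ,ν acts on E as the matrix unit with 1 in position (µ, ν). This recovers, for r = 1, the isomorphism SK (n, 1) ≅ EndK (E) = EndKG(1) (E) which Theorem (2.6c) below generalizes to all r.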
In a very famous paper [48] which appeared in 1927, Schur rederived all the results of his 1901 dissertation [47] by an analysis of this module E ⊗r . Although his method gives a complete answer only when char K = 0, it is still valuable for fields of finite characteristic. We make the symmetric group G(r), and hence also its group algebra KG(r), act on the right of E ⊗r by (2.6b)
ei π = eiπ , all i ∈ I(n, r), π ∈ G(r).
It is clear that this action commutes with that of KΓ, or (what is the same) with that of SK (n, r); we can verify from (2.6a) that (ξx)π = ξ(xπ) for all ξ ∈ SK (n, r), x ∈ E ⊗r and π ∈ G(r). We have however a stronger statement.
(2.6c) Theorem (Schur). Let ψ : SK (n, r) → EndK (E ⊗r ) be the representation afforded by the SK (n, r)-module E ⊗r . Then (i) Im ψ = EndKG(r) (E ⊗r ), and (ii) Ker ψ = 0. Hence SK (n, r) ∼ = EndKG(r) (E ⊗r ). Proof. Each element θ ∈ EndK (E ⊗r ) has matrix, say (Ti,j ), relative to the basis {ei } of E ⊗r . Here i, j run independently over the set I = I(n, r), of course, and the Ti,j ∈ K. From (2.6b) follows at once that θ lies in EndKG(r) (E ⊗r ) if and only if (2.6d)
Tiπ,jπ = Ti,j , for all i, j ∈ I and all π ∈ G(r).
Consequently EndKG(r) (E ⊗r ) has a K-basis in one-to-one correspondence with the set Ω of all G(r)-orbits on I × I, namely if ω is such an orbit, define the corresponding basis element θω to be that θ ∈ EndK (E ⊗r ) whose matrix (Ti,j ) has Ti,j = 1 or 0 according as (i, j) ∈ ω or not. Now it follows very readily from (2.6a) that, for any (p, q) ∈ I × I, the basis element ξp,q of SK (n, r) is represented on E ⊗r by ψ(ξp,q ) = θω , where ω is the G(r)-orbit containing (p, q). Therefore ψ induces an isomorphism SK (n, r) → EndKG(r) (E ⊗r ), and this proves the theorem. Remark. The proof of (2.6c) shows that SK (n, r) has a faithful matrix representation by the algebra of all n^r × n^r matrices (Ti,j ) which satisfy condition (2.6d). The basis element ξp,q is represented by the matrix having Ti,j = 1 or 0 according as (i, j) ∼ (p, q) or not. The idempotents ξi,i are represented by diagonal matrices, and the “orthogonal” decomposition (2.3d) is easy to deduce from this. (2.6e) Corollary (Schur [47, 48]). If char K = 0, or if char K = p > r, then SK (n, r) is semisimple. Hence every V ∈ MK (n, r) is completely reducible. Proof. Under the given conditions on char K, the group algebra KG(r) is semisimple (since char K does not divide |G(r)| = r!). Therefore every KG(r)-module, and in particular E ⊗r , is completely reducible. But the endomorphism algebra of a completely reducible module is semisimple, so by (2.6c), SK (n, r) is semisimple. The equivalence of categories MK (n, r) and mod SK (n, r) now completes the proof of (2.6e). The family of modules (EK⊗r ), with r fixed but with K varying, is clearly “defined over Z” in the sense of the following definition (which is simply a version of the definition of GLn -module, GLn being regarded as affine group scheme over Z. See [49, p. 46]).
Definition. Suppose that for each infinite field K we have a KΓK -module VK ∈ MK (n, r). We say that the family {VK } is defined over Z if there is a Z-form VZ of VQ , and for each K an isomorphism δK : VZ ⊗ K ≅ VK in the category MK (n, r). More exactly we say {VK } is Z-defined by VZ and {δK }. Example 1. Take VK = EK⊗r . The module VZ = Σi∈I(n,r) Z · ei is a Z-form of VQ (we write eµ,K , ei,K = ei1,K ⊗ · · · ⊗ eir,K for the basis elements of EK , EK⊗r ), and for each K the K-map δK : VZ ⊗ K → VK taking ei ⊗ 1K → ei,K , for all i ∈ I(n, r), is an isomorphism in MK (n, r). So {EK⊗r } is defined over Z. Definition. Suppose {VK }, {WK } are both families of modules in MK (n, r), both defined over Z, by VZ and {δK }, WZ and {ηK }, respectively. Suppose we have for each K a morphism θK : VK → WK in MK (n, r). We say that the family {θK } is defined over Z if θQ maps VZ into WZ , and for each K the diagram

VZ ⊗ K --(θQ ⊗ 1K)--> WZ ⊗ K
   |δK                   |ηK
   v                     v
   VK --------θK------> WK

commutes, i.e. ηK ◦ (θQ ⊗ 1K ) = θK ◦ δK .
Example 2. Define the rth symmetric power Dr,K = Dr (EK ) of EK to be the rth homogeneous subspace of the polynomial ring K[e1 , . . . , en ]; the elements e1 = e1,K , . . . , en = en,K are regarded as commuting indeterminates. There is a surjective K-map θK : EK⊗r → Dr (EK ) taking ei = ei1 ⊗ · · · ⊗ eir to the monomial e(i) = ei1 · · · eir , for all i ∈ I(n, r). It is well-known that Dr,K has a unique structure as a KΓ-module, such that θK becomes a KΓ-map; in fact the action on Dr,K of a given g ∈ Γ, is the restriction to Dr,K of the unique K-algebra automorphism of K[e1 , . . . , en ] which takes eµ → geµ for all µ ∈ n. We can show that the family {Dr,K } is defined over Z; the relevant Z-form Dr,Z in Dr,Q is the set of all homogeneous polynomials of degree r in the variables e1 = e1,Q , . . . , en = en,Q , which have coefficients in Z. The isomorphism ηK : Dr,Z ⊗ K → Dr,K takes e(i),Q ⊗ 1K → e(i),K for all i ∈ I(n, r). It is clear now that the family of morphisms {θK } is defined over Z in the sense of the last definition.
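The family {Dr,K } also illustrates how the structure of a module can change under modular reduction (2.5). Take n = r = 2. Over Q, the module D2,Q = D2 (EQ ), with basis e1² , e1 e2 , e2² , is irreducible. Now let char K = 2. Since, in characteristic 2, g · e1² = g11² e1² + g21² e2² (the cross term 2g11 g21 e1 e2 vanishes) and similarly g · e2² = g12² e1² + g22² e2² , the subspace of D2,K spanned by e1² and e2² is a KΓK -submodule, on which g acts through the matrix (gµν ²); the quotient is one-dimensional, with g acting as multiplication by g11 g22 + g12 g21 = det g. One checks that this submodule has no complement, so D2,K is not completely reducible when char K = 2, although D2,Q is irreducible.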
2.7 Contravariant duality In this section we keep K fixed and write Γ = ΓK . The dual space V ∗ = HomK (V, K) of a KΓ-module V ∈ MK (n, r) can be made into a right KΓ-module in a natural way: if f ∈ V ∗ , g ∈ Γ,
we define f g ∈ V ∗ by (f g)(v) = f (gv) for all v ∈ V . To make V ∗ into a left KΓ-module the traditional practice in group theory is to use the map g → g −1 to “reverse” multiplication; one defines a left action (denoted by a dot) of Γ on V ∗ by (g · f )(v) = f (g −1 v). However the KΓ-module V ∗ so defined will not, in general, belong to our category MK (n, r). But if we replace g −1 by g tr (transposed matrix) in the above definition, we get a left KΓ-module structure on V ∗ which is still in MK (n, r). We denote by V ◦ the space V ∗ , equipped with this action (2.7a)
(g · f )(v) = f (g tr v), all g ∈ Γ, f ∈ V ∗ , v ∈ V .
The module V ◦ is called the “contravariant dual” to V ; an analogous dual applies to rational modules over all semisimple algebraic groups Γ, and has been used a great deal in recent years by Wong [56], Verma [53] and Jantzen [29]. It is convenient to express (2.7a) in terms of the action of SK (n, r). It is easy to see that the K-linear map J : SK (n, r) → SK (n, r), defined by J(ξi,j ) = ξj,i for all i, j ∈ I(n, r), is an involutory anti-automorphism of SK (n, r). In fact one has clearly, for any ξ ∈ SK (n, r) (2.7b)
J(ξ)(ci,j ) = ξ(cj,i ), all i, j ∈ I(n, r),
and by taking ξ = eg we find that J(eg ) = egtr , all g ∈ Γ. So if V ∈ MK (n, r) is regarded as SK (n, r)-module the action (2.7a) which defines the contravariant dual V ◦ (= V ∗ ) reads (2.7c)
(ξ · f )(v) = f (J(ξ)v), all ξ ∈ SK (n, r), f ∈ V ∗ , v ∈ V .
It is clear that V → V ◦ gives an exact contravariant functor on MK (n, r), and that the usual isomorphism V → (V ∗ )∗ of K-spaces, gives a natural isomorphism V → (V ◦ )◦ . Definition. Let V , W be modules in MK (n, r). Then a K-bilinear form (, ):V ×W →K is called contravariant if it has the property. (2.7d)
(ξv, w) = (v, J(ξ)w), all ξ ∈ SK (n, r), v ∈ V , w ∈ W .
The proof of the next proposition is standard. (2.7e) Proposition. If V, W ∈ MK (n, r) are given, there is a bijective correspondence between contravariant forms ( , ) : V × W → K and morphisms Λ : V → W ◦ in MK (n, r), given by Λ(v)(w) = (v, w), all v ∈ V , w ∈ W . The form ( , ) is non-singular (=non-degenerate) if and only if Λ is an isomorphism.
Example 1. We can use the last proposition to show that the module E ⊗r is self-dual, that is that E ⊗r ≅ (E ⊗r )◦ . For there is clearly a non-singular bilinear form ⟨ , ⟩ : E ⊗r × E ⊗r → K, defined by ⟨ei , ej ⟩ = δij , all i, j ∈ I(n, r). But a simple calculation, using (2.6a), shows that ⟨ , ⟩ satisfies the contravariant condition (2.7d). Call ⟨ , ⟩ the canonical form on E ⊗r .
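The calculation is especially transparent for r = 1: there ⟨ξµ,ν eλ , eσ ⟩ = δνλ δµσ and ⟨eλ , J(ξµ,ν )eσ ⟩ = ⟨eλ , ξν,µ eσ ⟩ = δµσ δλν , so (2.7d) holds for the basis elements ξµ,ν of SK (n, 1), and hence by linearity for all ξ.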
If all the ( , )K are contravariant, then we can show that the family of mor◦ (given, for each K, by Proposition (2.7e)) is defined phisms ΛK : VK → WK ⊗r is defined over Z. For example, the family of canonical forms , K on EK ⊗r ⊗r ◦ ) derived over Z, and so therefore is the family of isomorphisms EK → (EK from these forms.
2.8 AK (n, r) as KΓ-bimodule We saw in the introduction (p. 2) that the space K Γ of all functions f : Γ → K is a KΓ-bimodule, i.e. it is a left and right KΓ-module, and these two actions of KΓ commute. If we take an element c ∈ AK (n, r), then there holds an equation ∆(c) = t ct ⊗ ct , for suitable elements ct , ct ∈ AK (n, r) (see 2.3). By formulae (1c), (1d), (2.8a) g◦c= ct (g)ct , c ◦ g = ct (g)ct , t
t
for any g ∈ Γ. This shows that AK (n, r) is closed to both the left and the right KΓ-actions, hence AK (n, r) is itself a KΓ-bimodule. Now extend (2.8a) by linearity, to give the action of an arbitrary element κ ∈ KΓ:
κ ◦ c = Σt c′t (κ) ct ,   c ◦ κ = Σt ct (κ) c′t .
By (2.4b), we find κ ◦ c = c ◦ κ = 0, for any κ ∈ Y = Ker e. Then (2.4c) shows that AK (n, r), regarded as left KΓ-module, belongs to MK (n, r). Similarly AK (n, r) belongs to an analogously defined category M′K (n, r) of right KΓ-modules (see 4.4). Thus AK (n, r) becomes an SK (n, r)-bimodule, with actions
(2.8b)   ξ ◦ c = Σt ξ(c′t ) ct ,   c ◦ ξ = Σt ξ(ct ) c′t ,   for ξ ∈ SK (n, r).
Now let us define a bilinear form ( , ) : SK (n, r) × AK (n, r) → K by the rule: if ξ ∈ SK (n, r), c ∈ AK (n, r), then (ξ, c) = J(ξ)(c). This is non-singular, in fact {ξi,j }, {ci,j } are dual bases with respect to ( , ) (see (2.7b)). The reader may check that for any ξ, η ∈ SK (n, r), c ∈ AK (n, r) there holds (2.8c)
(ξη, c) = (η, J(ξ) ◦ c) = (ξ, c ◦ J(η)).
In fact, using (2.3a), (2.8b) and the definition of ( , ), we see that all three expressions just given are equal to Σt (ξ, c′t )(η, ct ). But (2.8c) shows that ( , ) is contravariant, SK (n, r) and AK (n, r) being regarded as left SK (n, r)-modules (and even when they are regarded as right modules). By (2.7e) we deduce that AK (n, r) ∼= (SK (n, r))◦ , an isomorphism of KΓ-bimodules.
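For orientation (a standard count, added here only as an aside and not derived in the text shown): since {ξi,j } and {ci,j } are dual bases, dimK SK (n, r) = dimK AK (n, r); this common dimension is the number of monomials of degree r in the n^2 coefficient functions cµν , namely the binomial coefficient (n^2 + r − 1 choose r). For n = r = 2 this gives dimK SK (2, 2) = dimK AK (2, 2) = 10.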
3 Weights and Characters
3.1 Weights In this chapter we describe the theory of weights and characters of modules in MK (n, r), in terms of the Schur algebra SK (n, r). All our results go back to Schur [47], and all have been generalized in classical researches of Weyl and Chevalley. For the generalization to the category of rational representations of reductive group schemes split over a principal ideal domain, see [49, §3]. Let n, r ≥ 1 be given, and let Λ(n, r) denote the set of all G(r)-orbits in I(n, r). The elements α, β, . . . of Λ(n, r) will be called weights (more precisely, they are weights of GLn , of dimension r). A weight α is specified by the vector α = (α1 , . . . , αn ) which gives the content of any i = (i1 , . . . , ir ) ∈ α, i.e. for each ν ∈ n, αν is the number of ρ ∈ r such that iρ = ν. These vectors α can also be regarded as unordered partitions of r into n parts (zero parts being allowed). The symmetric group W = G(n) can be identified with the Weyl group of GLn . It acts on I(n, r) on the left, wi = (w(i1 ), . . . , w(ir )) for any w ∈ W and i ∈ I(n, r). This action commutes with the action of G(r) on the right, and therefore W acts on Λ(n, r): w−1 α = (αw(1) , . . . , αw(n) ) for α ∈ Λ(n, r) and w ∈ W . Each W -orbit of Λ(n, r) contains exactly one dominant weight, i.e. a weight λ such that λ1 ≥ · · · ≥ λn . Thus dominant weights correspond to (ordered) partitions of r into not more than n parts. Denote by Λ+ (n, r) the set of all dominant weights.
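For instance (a small illustration of these definitions, added for concreteness): take n = 3, r = 4 and i = (2, 1, 3, 1) ∈ I(3, 4). The entry 1 occurs twice among i1 , . . . , i4 and the entries 2, 3 once each, so i belongs to the weight α = (2, 1, 1). The W -orbit of α consists of (2, 1, 1), (1, 2, 1) and (1, 1, 2), and its unique dominant member is λ = (2, 1, 1). The set Λ+ (3, 4) consists of (4, 0, 0), (3, 1, 0), (2, 2, 0) and (2, 1, 1), the partitions of 4 into at most 3 parts.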
3.2 Weight spaces Fix an infinite field K. If i ∈ I(n, r) belongs to the weight α ∈ Λ(n, r) we shall denote the idempotent ξi,i (see 2.3) by ξα . This is reasonable, since ξi,i = ξj,j if and only if i ∼ j. The orthogonal decomposition (2.3d) now reads
ε = Σα∈Λ(n,r) ξα .
If we apply this to any module V ∈ MK (n, r) we get
(3.2a)   V = ⊕α∈Λ(n,r) ξα V ,
a decomposition of V as a direct sum of subspaces ξα V . In fact [47, pp. 6, 7], ξα V coincides with the α-weight-space V α of V , which is defined as V α = { v ∈ V : x(t)v = t1^α1 · · · tn^αn v, all x(t) ∈ Tn (K) }.
Here Tn (K) is the diagonal subgroup (a maximal split torus) of ΓK = GLn (K) consisting of all diagonal matrices x(t) = diag(t1 , . . . , tn ) with t1 , . . . , tn in K ∗ = K\{0}. For each α ∈ Λ = Λ(n, r), define the multiplicative character χα : Tn (K) → K ∗ by χα (x(t)) = t1^α1 · · · tn^αn . To show that ξα V = V α , first verify the formula
(3.2b)   ex(t) = Σα∈Λ t1^α1 · · · tn^αn ξα ,   all x(t) ∈ Tn (K),
by evaluating both sides at each ci,j (i, j ∈ I(n, r)). If v ∈ ξα V , we deduce x(t)v = ex(t) v = t1^α1 · · · tn^αn v; hence ξα V ⊆ V α . But for distinct elements α, β ∈ Λ(n, r) the multiplicative characters χα , χβ are unequal (since K is infinite), and by a familiar argument it follows that the sum of the weight-spaces V α , α ∈ Λ(n, r), is direct. Comparing this with (3.2a), we see that ξα V = V α for each α:
(3.2c)   V = ⊕α∈Λ V α .
Remark. It follows from (3.2b), and the fact that K is infinite, that the image of K · Tn (K) under the map e (see 2.4) is the subalgebra DK (n, r) of SK (n, r) which has the ξα (α ∈ Λ(n, r)) as K-basis. This is a commutative, split, semisimple algebra. Example. For each r satisfying 0 ≤ r ≤ n, the rth exterior power V = Λr E (= Altr (E)) is a KΓ-module in MK (n, r). If s = {i1 , . . . , ir } is any r-element subset of n (i1 < i2 < · · · < ir ) write es = ei1 ∧ . . . ∧ eir . These (n choose r) elements es form a K-basis of V . Moreover if x(t) ∈ Tn (K) and α = α(s) is the weight containing (i1 , . . . , ir ), then x(t)es = ti1 · · · tir es = t1^α1 · · · tn^αn es . Clearly, distinct r-element subsets s, s′ of n give distinct weights α(s), α(s′ ). So the weight-spaces V α all have dimension 1 or zero: V α(s) = K · es for any r-element subset s ⊆ n, and V α = 0 for all other α ∈ Λ(n, r).
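In the same spirit (an illustration added here, obtained directly from the action of Tn (K) on the basis {ei = ei1 ⊗ · · · ⊗ eir }): for V = E ⊗r one has x(t)ei = ti1 · · · tir ei = t1^α1 · · · tn^αn ei whenever i belongs to the weight α, so (E ⊗r )α has K-basis { ei : i ∈ α } and dimK (E ⊗r )α = r!/(α1 ! · · · αn !).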
3.3 Some properties of weight spaces Let V be a module in MK (n, r), and α an element of Λ(n, r).
(3.3a) Proposition. Let w ∈ W = G(n). Then the K-spaces V α , V w(α) are isomorphic. Proof. Let nw be the element of Γ = GLn (K) which maps the basis elements e1 , . . . , en of E to ew(1) , . . . , ew(n) respectively. It is simple to verify nw^−1 x(t1 , . . . , tn ) nw = x(tw(1) , . . . , tw(n) ), for all t1 , . . . , tn ∈ K ∗ , hence that v → nw v gives a K-isomorphism from V α onto V w(α) . (3.3b) Proposition. Let 0 → V1 → V → V2 → 0 be an exact sequence in MK (n, r). Then the naturally induced sequence of K-spaces 0 → V1α → V α → V2α → 0 is exact. Proof. The second sequence is obtained by applying ξα to every term of the first. Since ξα is idempotent, the result follows. Now let r, s be any non-negative integers and V , W be KΓ-modules belonging to MK (n, r), MK (n, s) respectively. Clearly V ⊗ W = V ⊗K W , regarded as KΓ-module in the usual way, belongs to MK (n, r + s). It is elementary to verify the (3.3c) Proposition. Let γ ∈ Λ(n, r + s). Then
(V ⊗ W )γ = Σα,β V α ⊗ W β ,
where the sum is over all α ∈ Λ(n, r), β ∈ Λ(n, s) such that α + β = γ. Next suppose L is a field containing K. We identify SK (n, r) with a subset of SL (n, r) by identifying ξi,jK with ξi,jL , for all i, j ∈ I(n, r). Then ξαK = ξαL , for all α ∈ Λ(n, r). So if we make VL = V ⊗K L into an SL (n, r)-module by "extension of scalars", and identify V with the subset V ⊗ 1L of VL , we have the
(3.3d) Proposition. The weight-space VLα = ξαL VL is the L-span of the weight-space V α = ξαK V . In particular, dimK V α = dimL VLα . Contravariant duality (see 2.7) behaves well with respect to weight-spaces. If V, W ∈ MK (n, r) and ( , ) : V × W → K is a contravariant form, then (V α , W β ) = 0 for any distinct weights α, β ∈ Λ(n, r) (see [56, p. 42], or [29, p. 6]). For we have J(ξα ) = ξα , so by the contravariant property (ξα v, ξβ w) = (v, ξα ξβ w) = 0. It follows that ( , ) is non-singular if and only if the restrictions ( , )α : V α × W α → K are non-singular for all α ∈ Λ(n, r). Taking W = V ◦ , there follows the
(3.3e) Proposition. dimK V α = dimK (V ◦ )α , for all α ∈ Λ(n, r). Finally let {VK } be a family of modules, Z-defined by VZ , {δK } as in 2.6. Because the idempotents ξαQ in SQ (n, r) actually lie in SZ (n, r), we have a direct sum VZ = ⊕α ξαQ VZ , and ξαQ VZ = VZ ∩ VQα , for all α ∈ Λ(n, r). These ξαQ VZ , being summands of the free Z-module VZ , are themselves free Z-modules. We have the
(3.3f ) Proposition. For each K, VKα is the K-span of the image under the map δK : VZ ⊗ K → VK of ξαQ VZ ⊗ 1K . Hence dimK VKα equals the rank of the free Z-module ξαQ VZ , and so is independent of K.
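To see (3.3c) in the smallest case (an illustration added here): take n = 2, V = W = E, and γ = (1, 1) ∈ Λ(2, 2). Then (E ⊗ E)(1,1) = E (1,0) ⊗ E (0,1) ⊕ E (0,1) ⊗ E (1,0) , which is the 2-dimensional space spanned by e1 ⊗ e2 and e2 ⊗ e1 .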
3.4 Characters Let V be a KΓK -module in MK (n, r). To each weight α ∈ Λ(n, r) we assign the monomial X1^α1 · · · Xn^αn of degree r in n indeterminates X1 , . . . , Xn over the rational field Q. Then the character (or formal character, cf. [32, p. 274]) of V is defined to be the polynomial
ΦV (X1 , . . . , Xn ) = Σα∈Λ(n,r) (dimK V α ) · X1^α1 · · · Xn^αn .
Thus ΦV is an element of the polynomial ring Z[X1 , . . . , Xn ], and is homogeneous of degree r. By (3.3a) it is symmetric, in fact
(3.4a)   ΦV (X1 , . . . , Xn ) = Σλ (dimK V λ ) · mλ (X1 , . . . , Xn ),
the sum being over all dominant weights λ ∈ Λ+ (n, r). Here mλ is the monomial symmetric function (see for example [39, p. 11]), i.e. the sum of the distinct monomials obtained from X1λ1 · · · Xnλn by permuting X1 , . . . , Xn . For example, let r be in the range 0 ≤ r ≤ n, then the character of the exterior power V = Λr E (see 3.2) is the rth elementary symmetric function er = m(1,1,...,1,0,...,0) = X1 X2 · · · Xr + · · · . The propositions in section 3.3 give rise to propositions about characters. Suppose 0 → V1 → V → V2 → 0 is an exact sequence in MK (n, r), then ΦV = ΦV1 + ΦV2 , by (3.3b). It follows by induction on the length l of a composition series of V , say V = V0 ⊃ V1 ⊃ V2 ⊃ · · · ⊃ Vl = 0, that (3.4b)
ΦV = Σσ=1,...,l ΦVσ−1 /Vσ .
From (3.3c) we have
(3.4c)   ΦV ⊗W = ΦV ΦW
when V ∈ MK (n, r), W ∈ MK (n, s); this extends of course to a similar formula for tensor products of any finite number of factors. For example if µ = (µ1 , . . . , µr ) is any partition of r (i.e. µ1 ≥ · · · ≥ µr ≥ 0 and µ1 + · · · + µr = r) then the symmetric function eµ (X1 , . . . , Xn ) = eµ1 · · · eµr is the character of the module Λµ1 E ⊗ · · · ⊗ Λµr E. Since every character ΦV lies in the ring Sym(n, r) of all symmetric functions in Z[X1 , . . . , Xn ], and since by the fundamental theorem on symmetric functions ([39, (2.4), p. 13]) Sym(n, r) is Z-spanned by the eµ above, we have the well-known (3.4d) Theorem. The additive subgroup of Z[X1 , . . . , Xn ] which is generated by all characters ΦV , V ∈ MK (n, r), is Sym(n, r). In particular, this additive group is independent of the field K. We must next connect our "formal" character ΦV with the natural character ϕV of V = (V, ρ) ∈ MK (n, r), defined by ϕV (g) = Trace ρ(g), all g ∈ Γ = GLn (K). The map ϕV is an element of AK (n, r), in fact ϕV is the trace of the invariant matrix (rab ) afforded by any basis {va } of V . (3.4e) Theorem (see [47, p. 17]). Let V ∈ MK (n, r) and g ∈ GLn (K), then ϕV (g) = ΦV (ζ1 , . . . , ζn ), where ζ1 , . . . , ζn are the eigenvalues of g. Proof. By (3.3d), the character ΦV is unchanged if we replace V by the module VL = V ⊗K L ∈ ML (n, r) obtained by extending K to a larger field L. Since this process replaces ϕV by a function on ΓL = GLn (L) which coincides with ϕV on ΓK = GLn (K), we may legitimately assume, in proving (3.4e), that K is algebraically closed. Let C be the n × n matrix (cµν ), and let u be an indeterminate over K. Define elements f1 , . . . , fn ∈ AK (n, r) by (1) det (uI − C) = u^n − f1 u^(n−1) + · · · + (−1)^n fn . It is clear that fr (g) = er (ζ1 , . . . , ζn ), for 1 ≤ r ≤ n. Now we may write ΦV = Σµ bµ eµ1 · · · eµr (bµ ∈ Z),
the sum being over the partitions µ of r, as above. Define the element ψ of AK (n, r) by ψ = Σµ (bµ · 1K ) fµ1 · · · fµr . Then we have, for any g ∈ ΓK
(2)   ΦV (ζ1 , . . . , ζn ) = ψ(g).
Now suppose g is diagonalizable, i.e. that there is some z ∈ ΓK such that zgz−1 = diag(ζ1 , . . . , ζn ). Relative to a basis of V which is adapted to the decomposition V = ⊕α V α , diag(ζ1 , . . . , ζn ) is represented by a diagonal matrix having dim V α diagonal terms ζ1^α1 · · · ζn^αn , for each α ∈ Λ(n, r). Taking traces we have (3) ϕV (g) = ϕV (zgz−1 ) = ΦV (ζ1 , . . . , ζn ). We have now two polynomials in n^2 variables cµν , namely ψ and ϕV , and by (2) and (3), ψ(g) = ϕV (g) for all g in the set D of diagonalizable elements of ΓK = GLn (K). Since every matrix whose eigenvalues are distinct belongs to D, we have ψ(g) = ϕV (g) for all g ∈ ΓK satisfying d(g) ≠ 0, where d ∈ AK (n, r) is the discriminant of the polynomial (1). It follows that ψ = ϕV , and this completes the proof of (3.4e). Corollary. Suppose that Φ1 , . . . , Φt are the characteristics of a set of mutually non-isomorphic, absolutely irreducible modules V1 , . . . , Vt ∈ MK (n, r). Then Φ1 , . . . , Φt are linearly independent elements of Sym(n, r). Proof. A theorem of Frobenius-Schur (see [11, p. 184, (27.13)]) shows that the natural characters ϕ1 , . . . , ϕt of V1 , . . . , Vt are linearly independent elements of AK (n, r). (It is here that we need absolute irreducibility.) If there is a non-trivial relation z1 Φ1 + · · · + zt Φt = 0 (zτ ∈ Z), we may assume, in case p = char K is finite, that p does not divide all the zτ . Then by (3.4e) (z1 · 1K )ϕ1 (g) + · · · + (zt · 1K )ϕt (g) = 0 for all g ∈ ΓK . But this contradicts the Frobenius-Schur theorem.
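As a quick check of (3.4e) in a tiny case (added for illustration): take n = 2 and V = Λ2 E, so ΦV = X1 X2 . An element g ∈ GL2 (K) acts on the one-dimensional space Λ2 E by multiplication by det g, so ϕV (g) = det g = ζ1 ζ2 = ΦV (ζ1 , ζ2 ), as the theorem predicts.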
3.5 Irreducible modules in MK (n, r) The next theorem, forerunner of far-reaching generalizations by Weyl [54] and Chevalley [49], is due in the case K = C to Schur [47, p. 37]. The leading term of a polynomial in Z[X1 , . . . , Xn ] is taken relative to the usual lexicographical (Gaussian) ordering of weights α ∈ Λ(n, r), or of monomials X1α1 · · · Xnαn (see [47, p. 17]). (3.5a) Theorem. Let n, r be given integers, n ≥ 1, r ≥ 0. Let K be an infinite field. Then (i) For each λ ∈ Λ+ (n, r) there exists an absolutely irreducible module Fλ,K in MK (n, r) whose character Φλ,K has leading term X1λ1 · · · Xnλn . (ii) These Φλ,K (λ ∈ Λ+ (n, r)) form a Z-basis of Sym(n, r). (iii) Every irreducible module V ∈ MK (n, r) is isomorphic to Fλ,K for exactly one λ ∈ Λ+ (n, r).
Proof. (i) (see [47, p. 37]) Let µ = (µ1 , . . . , µr ) be the partition conjugate to λ. We saw in 3.4 that the module V = Λµ1 E ⊗ · · · ⊗ Λµr E has character ΦV = eµ1 · · · eµr . The leading term of ΦV is therefore X1^λ1 · · · Xn^λn . By (3.4b) there is some composition factor U of V whose character has leading term X1^λ1 · · · Xn^λn . We may take U to be Fλ,K . To prove that U is absolutely irreducible, it is enough to show that every KΓK -endomorphism θ of U is scalar (see [11, p. 202, (29.13)]). Since our assumption on ΦU shows that dim U λ = 1, and since θ must map U λ into itself, there is some a ∈ K such that θ(u) = a · u for u ∈ U λ . But the set U ′ = { u ∈ U : θ(u) = a · u } is a submodule of U , hence U ′ = U and θ is equal to the scalar map a · 1U . (ii) The monomial symmetric functions mλ (λ ∈ Λ+ (n, r)) form a basis of Sym(n, r). If we express the functions Φλ,K in terms of these, we get equations of the form Φλ,K = mλ + Σµ<λ zλµ mµ , and it follows that the characters Φλ,K (λ ∈ Λ+ (n, r)) also form a basis for Sym(n, r). (iii) Suppose L is an algebraically closed field containing K. Denote by FLλ,K the module Fλ,K ⊗K L ∈ ML (n, r). Since Fλ,K is absolutely irreducible, so is FLλ,K . On the other hand FLλ,K has the same character Φλ = Φλ,K as Fλ,K , by (3.3d). Any irreducible X ∈ ML (n, r) must be isomorphic to one of the FLλ,K , since otherwise the characters ΦX , Φλ (λ ∈ Λ+ (n, r)) would be linearly independent by Corollary (3.4e), and that would contradict (ii) above. Now let V be any irreducible module in MK (n, r), and let X be a minimal submodule of VL = V ⊗K L. There is some λ ∈ Λ+ (n, r), then, such that FLλ,K ∼= X; hence the space HomSL (n,r) (FLλ,K , VL ) ≠ 0, and it follows that HomSK (n,r) (Fλ,K , V ) ≠ 0. Therefore V ∼= Fλ,K , and the proof of Theorem (3.5a) is complete. Remarks. (i)
The proof above shows that Fλ,K is defined, uniquely up to isomorphism in MK (n, r), by the properties (a) Fλ,K ∈ MK (n, r) is irreducible, and (b) Fλ,K has character Φλ,K whose leading term is X1^λ1 · · · Xn^λn . Moreover if L is any field containing K, then Fλ,K ⊗K L ∈ ML (n, r) has the corresponding properties (a), (b), since in fact its character is the same as that of Fλ,K . In other words, Φλ,K = Φλ,L if K, L are infinite fields with K ⊆ L. It follows that Φλ,K is the same, for all infinite fields K of given characteristic. Write Φλ,p for Φλ,K , if p = char K. For each p, the set { Φλ,p : λ ∈ Λ+ (n, r) } is a Z-basis of Sym(n, r). If p > 0, the coefficients dλµ which appear in the equations
Φλ,0 = Σµ∈Λ+ (n,r) dλµ Φµ,p ,   λ ∈ Λ+ (n, r)
are by definition the decomposition numbers relative to the modular reduction from MQ (n, r) to MK (n, r), where K is any field of characteristic p (see 2.5).
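A small and well-known instance (recorded here only as an illustration; it is not computed in the text): for n = 2, r = 2 and p = 2 one has Φ(2,0),2 = X1^2 + X2^2 and Φ(1,1),2 = X1 X2 , while Φ(2,0),0 = X1^2 + X1 X2 + X2^2 and Φ(1,1),0 = X1 X2 ; hence d(2,0),(2,0) = d(2,0),(1,1) = d(1,1),(1,1) = 1 and d(1,1),(2,0) = 0.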
(ii) Suppose that K is fixed. Let R(MK (n)) be the Grothendieck ring for the category MK (n), and let [V ] be the element of R(MK (n)) corresponding to a module V ∈ MK (n). Then it is clear from (3.4a) and what was said in 3.3, that [V ] → ΦV defines an isomorphism of rings from R(MK (n)) onto the ring Sym(n) = ⊕r≥0 Sym(n, r) of all symmetric functions in Z[X1 , . . . , Xn ]. (iii) The symmetric functions Φλ,0 were determined by Schur [47, p. 23], and are now known as Schur functions or S-functions (see [39, p. 24]). For finite characteristic p, the irreducible characters Φλ,p have not yet been calculated explicitly. We sketch here a proof, rather different from Schur's, of his theorem for characteristic zero. If α ∈ Λ(n, r) we write X α for X1^α1 · · · Xn^αn ; recall that W = G(n) acts on Λ(n, r) (see 3.1), and write s(w) for the "sign" (∈ {−1, 1}) of a permutation w ∈ W . Define aα = aα (X1 , . . . , Xn ) = Σw∈W s(w) X w(α) , an alternating function in X1 , . . . , Xn , which in fact is expressible as the n × n determinant |Xi^αj |. Let δ = (n − 1, n − 2, . . . , 1, 0) ∈ Λ(n, n(n − 1)/2). Define the S-function Sλ = Sλ (X1 , . . . , Xn ) for any λ ∈ Λ+ (n, r) by Sλ = aλ+δ /aδ . Then (see [39, (3.2), (4.3)]) the Sλ form a Z-basis of Sym(n, r) (in fact Sλ has leading term X λ ), and there holds the formal identity (1)
∏µ,ν=1,...,n 1/(1 − Xµ Yν ) = Σλ∈Λ+ (n) Sλ (X) Sλ (Y ),
where Y = (Y1 , . . . , Yn ) is a second set of variables, independent of the set X = (X1 , . . . , Xn ), and Λ+ (n) = ∪r≥0 Λ+ (n, r). Schur's theorem [47, p. 23] is that Φλ,0 = Sλ , for all λ ∈ Λ+ (n, r). To prove this, it is enough to show that (1) remains true when Sλ 's are replaced by Φλ,0 's. For then, considering only the part of (1) which is of given degree r in (both) X and Y , we shall have
(2)   Σλ∈Λ+ (n,r) Sλ (X)Sλ (Y ) = Σλ∈Λ+ (n,r) Φλ,0 (X)Φλ,0 (Y ).
If we write Sλ = Σµ vλµ Φµ (λ ∈ Λ+ (n, r)) we get an integral matrix (vλµ ) which, on account of (2), must be orthogonal. This matrix must therefore be a signed permutation matrix, i.e. the Sλ 's coincide with Φλ,0 's up to order and sign (cf. [39, p. 35]). But since both Sλ and Φλ,0 have leading term X λ , we must have Sλ = Φλ,0 for all λ ∈ Λ+ (n, r). So we must prove (1), with Sλ 's replaced by Φλ,0 's. It is clearly sufficient to prove that the sums of terms, of each given degree r ≥ 0, coincide on the two sides of the proposed equation. So we must prove (3)
Σ(rµν) ∏µ,ν=1,...,n Xµ^rµν Yν^rµν = Σλ∈Λ+ (n) Φλ,0 (X) Φλ,0 (Y ),
where the sum on the left is over all non-negative integral matrices (rµν ) such that Σµ,ν rµν = r. Let K be an infinite field of characteristic zero. Because the module E ⊗r is completely reducible (2.6e), and contains each (absolutely) irreducible Fλ,K (λ ∈ Λ+ (n, r)) with positive multiplicity (SK (n, r) acts faithfully on E ⊗r , see (2.6c)) we have
(4)   AK (n, r) = ⊕λ∈Λ+ (n,r) cf(Fλ,K ).
Now AK (n, r) is a KΓ-bimodule, using right and left translation operators g ◦ c, c ◦ g (see 2.8). Each coefficient space cf(Fλ,K ) is a sub-bimodule of AK (n, r). Take diagonal matrices x = diag(x1 , . . . , xn ) and y = diag(y1 , . . . , yn ) (xµ , yν ∈ K ∗ ), then calculate the trace f (x, y) of the linear transformation c → y ◦ c ◦ x (c ∈ AK (n, r)). Using the basis of monomials ci,j for AK (n, r), we find that f (x, y) is obtained by substituting x for X, y for Y in the left side of (3). But from (4) we get another basis of AK (n, r), by taking for each λ ∈ Λ+ (n, r) the coefficient functions rab^λ appearing in an invariant matrix (rab^λ ) for Fλ,K , relative to some basis of Fλ,K . (The Frobenius-Schur theorem (see 3.4) shows that these functions rab^λ are linearly independent.) If we choose a basis for Fλ,K which is adapted to the weight-space decomposition, so that each basis element belongs to some weight-space Fλ,K^α , it is easy to show that the trace of the map c → y ◦ c ◦ x (c ∈ cf(Fλ,K )) is Φλ (X)Φλ (Y ); hence by (4), f (x, y) = Σλ Φλ (X)Φλ (Y ). But this proves (3), since it holds for arbitrary substitutions x1 , . . . , xn , y1 , . . . , yn ∈ K ∗ of the variables X1 , . . . , Xn , Y1 , . . . , Yn .
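To illustrate the S-functions of Remark (iii) in the smallest case (an example added here): for n = 2 one has δ = (1, 0) and aδ = X1 − X2 , so for λ = (2, 0), Sλ = a(3,0) /a(1,0) = (X1^3 − X2^3 )/(X1 − X2 ) = X1^2 + X1 X2 + X2^2 , which is indeed Φ(2,0),0 , the character of the symmetric square of E in characteristic zero.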
4 The modules Dλ,K
4.1 Preamble In this section and the next we shall define, for each λ = (λ1 , . . . , λn ) in Λ+ (n, r) and for each infinite field K, the modules Dλ,K and Vλ,K (see introduction). Both have character Φλ,0 = Sλ (X1 , . . . , Xn ). In 4.4, we shall define Dλ,K in terms of certain determinantal functions in AK (n, r). These modules have been known, in the “classical” case K = C, for a very long time—they were discovered (under the guise of “primary covariants”) by J. Deruyts in 1892 [13, p. 72]1 . More recently they have been described, for fields of arbitrary characteristic, by M. Clausen [8, p. 180], and by G. D. James [27, p. 129]. In fact James refers to Dλ,K as a “Weyl module”, but we prefer to reserve this term for the module Vλ,K which was defined by Carter-Lusztig [6, p. 211], and which, as we show in 5.2, is the contravariant dual of Dλ,K . However James’ construction in [27] gives more information than ours, since it can be used to identify Dλ,K with the “induced module” of a one-dimensional character on the lower triangular Borel subgroup Bn− (K) of ΓK = GLn (K). We mention this identification in 4.8.
4.2 λ-tableaux For the rest of §4, λ = (λ1 , . . . , λn ) ∈ Λ+ (n, r) is fixed. The diagram (or “shape”) of λ is defined to be the subset [λ] = { (s, t) : 1 ≤ s, 1 ≤ t ≤ λs } of Z × Z (cf. [15, p. 66]). A λ-tableau is a map, not necessarily bijective, of [λ] into a set. Since [λ] has r elements, there exists at least one bijection T : [λ] → r. We shall arbitrarily choose one such bijection, and call it 1 I am indebted to J. Towber for this reference. In his article [52], Towber gives an account of the history of Dλ,K : see particularly [52, p. 448].
34
4 The modules Dλ,K
the basic λ-tableau T = T λ . If the image under T of (s, t) is x(s, t), we may depict T as x(1, 1) x(1, 2) · · · (4.2a)
···
x(1, λ1 )
x(2, 1) x(2, 2) · · · x(2, λ2 ) x(3, 1) x(3, 2) · · · ···
···
Thus every element ∈ r appears exactly once in (4.2a). If = x(s, t), we say that is in row s and column t of T . The row stabilizer R(T ) of T is the subgroup of G = G(r) consisting of all π ∈ G which preserve the rows of (4.2a), and the column stabilizer C(T ) is defined similarly. If i : r → n is an element of I(n, r), we denote the λ-tableau i ◦ T : [λ] → n by Ti . In general, Ti is not bijective. If = x(s, t), we often refer to i as the entry in place , or in the (s, t) place, of Ti . Example. Suppose r = 5, n = 3 and λ = (3, 2, 0). We might take for our
1 3 5 i1 i3 i5 basic λ-tableau T = . Then, for any i ∈ I(n, r), Ti = . 2 4 i2 i4 The entries of Ti belong to the set 3 = {1, 2, 3}.
4.3 Bideterminants Let K be an infinite field, and let i, j be elements of I(n, r). We define an element (Ti : Tj ) = (Ti : Tj )K of AK (n, r) by the formula (4.3a) (Ti : Tj ) = s(σ)ci,jσ = s(σ)ciσ,j . σ∈C(T )
σ∈C(T )
Here s(σ) denotes the sign of σ. The second equality comes from the fact that ci,jσ = ciσ−1 ,j , for any σ ∈ G(r). Apart from the interchange of rows and columns, the element (Ti : Tj ) is a bideterminant in the sense of D´esarm´enien, Kung and Rota [15, p. 67]. Let µ = (µ1 , . . . , µr ) be the partition of r conjugate to λ (notice that µ might not be an element of Λ+ (n, r), since it might have more than n parts). Then it is easy to see that (Ti : Tj ) is the product, over all columns t of [λ], of the µt × µt determinants . det cix(s,t) ,jx(s ,t) s,s =1,...,µt
(If µt = 0, we take this determinant to be 1.) From this, or directly from (4.3a), we see that (Ti : Tj ) is zero if there are equal entries at any two places in a column of Ti , or of Tj . Also (Ti : Tjσ ) = (Tiσ : Tj ) = s(σ)(Ti : Tj ), for all σ ∈ C(T ).
4.4 Definition of Dλ,K
35
Example 1. Let λ = (3, T be as in the example in 4.2. Consi 2, 0), 1 1 1 a d f der Tl = , Ti = . Then 2 2 b e c1a c1b c1d c1e c ∈ AK (3, 5). (Tl : Ti ) = c2a c2b c2d c2e 1f Example 2. Let l be the element of I(n, r) whose λ-tableau is ⎞ ⎛ 1 1 ··· ··· 1 ⎟ ⎜ ⎜ 2 2 ··· 2 ⎟ ⎟. (4.3b) Tl = ⎜ ⎟ ⎜ ⎟ ⎜ 3 3 ··· ⎠ ⎝ ··· In other words, lx(s,t) = s, for all (s, t) ∈ [λ]. Then (Tl : Tl ) = c(µ1 )c(µ2 ) · · · , where, for any integer m (0 ≤ m ≤ n), c(m) denotes the mth leading minor of the n × n matrix C = (cµν ).
4.4 Definition of Dλ,K The space AK (n, r) is a bimodule for KΓ, with Γ = ΓK acting by right and left translations. Equivalently, it is a bimodule for SK (n, r). If h, j ∈ I(n, r) = I, we have (see 2.8) (4.4a) ξ ◦ ch,j = ξ(ci,j )ch,i i∈I
and (4.4a )
ch,j ◦ ξ =
ξ(ch,i )ci,j ,
i∈I
for all ξ ∈ SK (n, r). As left module AK (n, r) belongs to the category MK (n, r), (n, r) of and as right module it belongs to the analogously defined category MK all right, finite-dimensional KΓ- (or SK (n, r)-) modules whose coefficient space lies in AK (n, r). In fact the coefficient space of AK (n, r), whether regarded as left or right KΓ-module, is precisely AK (n, r). Now let l be the element of I(n, r) defined in (4.3b). It is clear that l belongs to the weight λ = (λ1 , . . . , λn ). Definition. Dλ,K is the K-span of all bideterminants (Tl : Ti ), i ∈ I(n, r). Hence Dλ,K is a subspace of AK (n, r). Replace h by lσ in (4.4a), multiply by s(σ), and sum over all σ in C(T ). We get
36
(4.4b)
4 The modules Dλ,K
ξ ◦ (Tl : Tj ) =
ξ(ci,j )(Tl : Ti ), all ξ ∈ SK (n, r).
i∈I
It follows that Dλ,K is a left SK (n, r)-submodule of AK (n, r). If we compare (4.4b) with (2.6a), we see also that the surjective linear map ⊗r ϕ = ϕK : EK → Dλ,K ,
which takes ej → (Tl : Tj ) for all j ∈ I(n, r), is an SK (n, r)-module homomorphism. Remark. Our definition of l and (Tl : Ti ) depends on our choice of basic λ tableau T (see 4.2). However any other bijective λ-tableau T can be written T = πT for some π ∈ G(r). Then Ti = Tiπ for any i ∈ I(n, r), and if l = lπ −1 we have (Tl : Ti ) = (Tl : Ti ). Therefore Dλ,K is in fact independent of the choice of T . Example 1. In case λ = (r, 0, . . . , 0) we shall write Dr,K for Dλ,K . The λ diagram is a single row of length r, and Tl = (1 1 . . . 1). For any i ∈ I(n, r), we have Ti = (i1 i2 . . . ir ) and (Tl : Ti ) = c1,i1 · · · c1,ir . If we map this to the monomial ei1 · · · eir of 2.6, example 2, we see that Dr,K is isomorphic to the rth symmetric power of E. Example 2. Assume r ≤ n. In case λ = (1, . . . , 1, 0, . . . , 0) with r 1’s, we shall write D(1r ),K for Dλ,K . The λ diagram is a single column, and Tl has entries 1, 2, . . . , r. For any i ∈ I(n, r), (Tl : Ti ) is the r-rowed determinant det (cl,1 ). Mapping this onto ei1 ∧ . . . ∧ eir (see 3.2), we see that D(1)r ,K is isomorphic to the rth exterior power of Λr EK .
4.5 The basis theorem for Dλ,K As usual, K is an infinite field. Our aim is to prove the (4.5a) Basis theorem. Dλ,K has K-basis consisting of all (Tl : Ti ) such that Ti is “standard”, i.e. the entries in each row of Ti are weakly increasing (≤) from left to right, and the entries in each column are strictly increasing (<) from top to bottom. This theorem generalizes the theorem of Specht-Garnir for Specht modules (see [45] and [27, §8, 13]). It may be deduced from a more general basis theorem of D´esarm´enien, Kung and Rota [15, p. 78]. For completeness, and also to prepare for the transition to the module Vλ,K in §5, we give here a proof of (4.5a) based on a combinatorial lemma of Carter-Lusztig (see 4.6). We begin by showing that (4.5b) The set { (Tl : Ti ) : Ti standard } is linearly independent.
4.6 The Carter-Lusztig lemma
37
Proof. All the (Tl : Ti ) (i ∈ I(n, r)) lie in the “right” λ-weight-space AK (n, r) = AK (n, r) ◦ ξλ .
λ
For it is clear from (2.3c) that λAK (n, r) is spanned by the elements cl,i , where i ∈ I(n, r) (remember ξλ = ξl,l ). Now cl,i = cl,j if and only if there is some π ∈ G(r) such that lπ = l and iπ = j. The condition lπ = l is equivalent to π ∈ R(T ), the row stabilizer of the basic λ-tableau ((4.2a)). So cl,i = cl,j if and only if Ti , Tj can be obtained from one another by a row permutation (thus the cl,i correspond to James’ “λ-tabloids” [27, pp. 10, 127]). For each i ∈ I(n, r), define β(i) = (β1 (i), . . . , βn (i)), where βs (i) is the sum of the entries in row s of Ti . Clearly β(i) = β(j) if cl,i = cl,j , by what we have just said; hence (4.5c) If i, j ∈ I(n, r) and β(i) = β(j), then cl,i = cl,j . We order these vectors β(i) (they are in fact elements of Λ(n, r)) in the usual lexicographic way. The reader may verify (4.5d) If Ti is standard and if 1 = π ∈ C(T ), then β(iπ) > β(i). Now suppose we have a non-trivial linear relation fi (Tl : Ti ) = 0, fi ∈ K, fi = 0, i∈H
where the sum is over a non-empty subset H of those i ∈ I(n, r) such that Ti is standard. By (4.3a), this gives (4.5e) fi s(π) cl,iπ = 0. i∈H π∈C(T )
By (4.5c) we can equate to zero the partial sum of all terms (i, π) in (4.5e) such that β(iπ) is equal to any given β ∈ Λ(n, r). Take for β, the least of the β(iπ) (i ∈ H, π ∈ C(T )). By (4.5d), we can have β = β(iπ), for i ∈ H, π ∈ C(T ), only if π = 1. So the partial sum has the form (4.5f ) fi cl,i = 0, i∈H
for some non-empty subset H of H. But since Ti is standard for all i ∈ H , the cl,i appearing in (4.5f) are linearly independent, and this gives a contradiction. This proves (4.5b).
4.6 The Carter-Lusztig lemma To complete the proof of the basis theorem (4.5a) we must show that every bideterminant (Tl : Ti ) (i ∈ I(n, r)) is expressible as a linear combination
38
4 The modules Dλ,K
of (Tl : Ti ) for which Ti is standard. This is the harder part of the proof of (4.5a), and we deduce it from a combinatorial lemma (4.6a) of CarterLusztig. The proof of (4.6a) is given in [6, p. 214, 215], and does not depend essentially on other results in [6]. The reader may note some variations of expression between Carter-Lusztig’s paper and this one: their “semi-standard” is our “standard”; their T corresponds to our Ti ; their σT is our Tiσ ; the “type” λ of T (= Ti ) corresponds to the “weight” of i. Finally, in our (4.6a) we have not restricted f to tableaux of a given type λ . (4.6a) Carter-Lusztig lemma. Let f : I(n, r) → F be any map with values in an abelian group F , satisfying the following three conditions: (i) f (i) = 0, if Ti has equal entries at two distinct places in the same column. (ii) f (iσ) = s(σ) f (i), for any i ∈ I(n, r) and σ ∈ C(T ). (iii) (Garnir relations) ν∈G(J) s(ν) f (iν) = 0, for any i ∈ I(n, r) and any non-empty subset J of the (h + 1)th column Ch+1 of the basic λ-tableau T . Here h is any element of {1, 2 . . . , r − 1}, and G(J) is a transversal of the set of cosets { νX : ν ∈ Y }, where Y is the subgroup of G(r) consisting of all π ∈ G(r) which fix every element outside Ch ∪J, and X = C(T )∩Y . Then Im f lies in the subgroup of F generated by the set { f (i) : Ti standard }. Remarks. Condition (ii) shows that the sum in (iii) is independent of the choice of transversal G(J). Condition (iii) is equivalent, by an argument given in [6, p. 212], to condition (37) of [6, p. 214]. A slightly weaker form of (4.6a), which uses a larger set of “Garnir relations” (but is still adequate for our purposes) can be proved by an adaptation of the proof of Garnir [19] for Specht modules; see [45, pp. 93, 94] or [27, pp. 29, 30]. Lemma (4.6a) completes the proof of the basis theorem (4.5a), as soon as we observe (4.6b) The function f (i) = (Tl : Ti ) satisfies conditions (i), (ii) and (iii) of (4.6a). Proof. That f (i) satisfies (i) and (ii) has already been said in 4.3. The proof of (iii) is like that for Specht polynomials, and goes as follows. Every element of the set B = Y · C(T ) has unique expression π = νσ, with ν ∈ G(J) and σ ∈ C(T ). So the left side of the Garnir relation (iii) (with f (i) = (Tl : Ti )) becomes s(ν) s(σ) cl,iνσ = s(π) cl,iπ . ν∈G(J) σ∈C(T )
π∈B
An argument given in [45, pp. 92, 93] shows that B can be written as disjoint union of subsets {π, πκ}, in which κ (which depends on π) is some transposition in R(T ). For such a pair s(π) cl,iπ + s(πκ) cl,iπκ = 0, because cl,iπκ = clκ,iπ = cl,iπ (notice lκ = l for all κ ∈ R(T )). Hence the sum above is zero, as required.
4.7 Some consequences of the basis theorem
39
4.7 Some consequences of the basis theorem Let α ∈ Λ(n, r) be a given weight, and suppose a ∈ I(n, r) is an element of α. Putting ξα (= ξa,a ) for ξ in formula (4.4b) we see that ξα ◦ (Tl : Tj ) = (Tl : Tj ) or zero, according as j ∈ α or not. Consequently each bideterminant (Tl : Tj ) (where j ∈ I(n, r)) is a “weight element” of Dλ,K , i.e. it belongs to the weightα , where α is the weight containing j. Then (4.5a) gives the first space Dλ,K statement below: α (4.7a) For each α ∈ Λ(n, r), Dλ,K has K-basis consisting of all (Tl : Ti ) with i ∈ α and Ti standard. Hence the character of Dλ,K is equal to the Schur function Sλ (X1 , . . . , Xn ).
The second statement in (4.7a) follows from the first, and from the fact that the coefficient of X α in Sλ = Sλ (X1 , . . . , Xn ) is precisely the number of λ-tableaux Ti which have “content” (i.e. weight) α—this can be proved by a direct combinatorial argument from the definition of Sλ (see [39, p. 42]). Since Sλ is the character of an irreducible module in MK (n, r) when char K is zero, we deduce (4.7b) If char K = 0, then Dλ,K is irreducible. We show next that the family Dλ,K is defined over Z. For each infinite field K, we may write (Tl : Ti )K to denote the element (Tl : Ti ) defined in 4.3. (4.7c) The Z-span Dλ,Z of the elements (Tl : Ti )Q (i ∈ I(n, r)) is a Z-form of Dλ,Q . The family {Dλ,K } is Z-defined by Dλ,Z and the maps δK : Dλ,Z ⊗ K → Dλ,K , (Tl : Ti )Q ⊗ 1K → (Tl : Ti )K Moreover the family of inclusions Dλ,K → λAK (n, r) is also defined over Z. Proof. When applied to the function f (i) = (Tl : Ti )Q , the Carter-Lusztig lemma (4.6a) shows that (Tl : Ti )Q is in the Z-span of { (Tl : Ti )Q : Ti standard }. But by (4.5a), this set is a basis of Dλ,Q . By (4.4b), Dλ,Z is invariant to the left action of SZ (n, r). It follows that Dλ,Z is a Z-form of Dλ,Q . The maps δK described above are clearly SK (n, r)-morphisms, and are isomorphisms by (4.5a) applied to Dλ,K . The last statement of (4.7c) is immediate from the definitions. Remark. Dλ,Z is a direct summand, as Z-module, of AZ (n, r); equivalently, inc
the exact sequence 0 → Dλ,Z → AZ (n, r) is Z-split. This follows from (4.7c) and the next lemma.
40
4 The modules Dλ,K
(4.7d) Lemma. Suppose {VK }, {WK } are families of modules in MK (n, r), and are Z-defined by VZ and {δK }, WZ and {ηK }. Suppose θK : VK → WK is a family of morphisms in MK (n, r), also defined over Z (see 2.6). Then θQ
(i) If all the θK are injective, then the exact sequence 0 → VZ → WZ is Z-split. θQ
(ii) If all the θK are surjective, then the exact sequence VZ → WZ → 0 is Z-split. Proof. Let M = (mβα ) be the matrix of θQ , relative to bases {vα }, {wβ } of VQ , WQ which Z-generate VZ , WZ respectively. The assumption that the family {θK } is defined over Z implies M is an integral matrix, and that for each K, MK = (mβα ·1K ) is the matrix of θK relative to the appropriate bases of VK , WK . If all the θK are injective, this means that M has rank d = dimQ VQ , and that it still has rank d, even after reduction modulo any rational prime p. Hence all elementary divisors of M are equal to 1, and therefore there exists an integral matrix M = (mαβ ) such that M M = identity. This proves (i). The proof of (ii) is similar.
4.8 James’s construction of Dλ,K G. D. James has given [27, p. 129] a construction of a KΓK -module W λ , which he calls a “Weyl module”, but which is in fact isomorphic to Dλ,K . If α, β ∈ Λ(n, r), then the elements ci,j (i ∈ α, j ∈ β) of AK (n, r) may be identified with James’s “α-tabloids of type β” [27, p. 127]. (It is not necessary that either α = (α1 , . . . , αn ) or β = (β1 , . . . , βn ) be dominant, i.e. be proper partitions of r.) James’s space S ◦,λ becomes λAK (n, r) in our language (see 4.5). He defines, for all pairs of integers s ∈ {1, . . . , n} and v ∈ {0, . . . , λs+1 } a linear map ψs,v : λAK (n, r) → AK (n, r) by the rule: (4.8a)
ψs,v (cl,i ) =
ch,i ,
h
summed over all h ∈ I(n, r) obtained by replacing, in l (or in Tl ) λs+1 − v of the entries s + 1 by s. Definition (see [27, p. 129]). W λ is the set of all elements c ∈ λAK (n, r) such that ψs,v (c) = 0 for all s ∈ {1, . . . , n} and v ∈ {0, . . . , λs+1 − 1}. By a rather delicate combinatorial argument, James proves a theorem (see [27, 26.3, p. 128]) from which it follows readily that W λ has as basis the set { (Tl : Ti ) : Ti standard }; hence W λ = Dλ,K . However James’s definition
4.8 James’s construction of Dλ,K
41
has a very important group-theoretical interpretation, which we shall now give. Let U − = Un− (K) be the subgroup of Γ = GLn (K) consisting of all lower triangular unipotent matrices in Γ. It is well-known that U − is generated by the elements us (t) (s ∈ {1, . . . , n − 1}, t ∈ K) where ⎞ ⎛ 1 ⎟ ⎜ .. ⎟ ⎜ . ⎟ ⎜ ⎟ (row s) ⎜ 1 ⎟ ⎜ us (t) = ⎜ ⎟ (row s + 1) t 1 ⎟ ⎜ ⎟ ⎜ . .. ⎠ ⎝ 1 with t in position (s + 1, s). Now write g = us (t) and i ∈ I(n, r). By definition of the action of Γ on AK (n) (see introduction) (4.8b) cl,i ◦ g = cl,h (g) ch,i . h∈I(n,r)
It is clear that cl,h (g) = gl1 h1 · · · glr hr is zero unless (4.8c) For all ∈ r, (l , h ) ∈ {(1, 1), . . . , (n, n), (s + 1, s)}, and if (4.8c) is satisfied, then cl,h (g) = tw , where w is the number of such that (l , h ) = (s + 1, s). In other words, cl,h (g) = tw , for any h ∈ I(n, r) which can be obtained by replacing, in l (or Tl ), w of the entries s + 1 by s. Thus (4.8b) gives a formula
λs+1
(4.8d)
c ◦ us (t) =
tλs+1 −v ψs,v (c),
v=0
for all c ∈ λAK (n, r), s ∈ {1, . . . , n − 1} and t ∈ K. Naturally we prove (4.8d) by taking first the case c = cl,i (i ∈ I(n, r)). If v = λs+1 , it is clear that ψs,v (c) = c, for all c ∈ λAK (n, r). So by (4.8d), and using the fact that K is infinite, we see that for any c ∈ λAK (n, r), the conditions ψs,v (c) = 0, all s ∈ {1, . . . , n − 1}, v ∈ {0, . . . , λs+1 − 1} are equivalent to the conditions c ◦ us (t) = c, all s ∈ {1, . . . , n − 1}, t ∈ K, which in turn are equivalent to the conditions (4.8e)
c ◦ u = c, all u ∈ Un− (K).
42
4 The modules Dλ,K
Recall from 4.5 that λAK (n, r) is the right λ-weight-space of AK (n, r). Extend the character χλ of Tn (K) (see 3.2) to the Borel subgroup Bn− (K) = Tn (K) Un− (K), by defining χλ (u) = 1 for all u ∈ Un− (K). This allows us to reformulate James’s theorem as follows. (4.8f ) Theorem. The module Dλ,K = W λ is the set of all c ∈ AK (n, r) which satisfy the conditions c ◦ b = χλ (b)c, all b ∈ Bn− (K). This shows that Dλ,K is the induced module IndΓB − (Kλ ) in the category MK (n, r) (or, in fact, in the category of all rational modules for GLn (K)), of a K · Bn− (K)-module Kλ which affords the character χλ of Bn− (K). For a discussion of a theorem equivalent to (4.8f), for semisimple algebraic groups, see [30, §1].
§5
The Carter-Lusztig modules
5.1
Definition of Let
%~,K
~ ~ #~+(n,r)
Denote by
NK
~,K
be given, and let
the kernel of the
defined in 4.4.
Let
VX, K
the canonical form
on
E~rK ' Vk,K
{x (
is also a submodule of
o Vk, K ~ (Dk, K) ,
contravariant
to
DX, K
If
K
is an
NK,
relative to
SK(n,r)-submodule
of
Since
<, > is non-singular,
we
form
(,) : Vk, K × D k K -~ K
by
@r x ~ Vk,K, y ~ E K .
all
dual modules in
V%,K
~ = Sk(X 1 ..... Xn) o ' Vk, K is irreducible, and is isomorphic
has finite characteristic,
V~~,K
and
5.2
Vh~,K is Carter-Lusztig's
then in general
are not isomorphic.
We shall identify
Vk, K
Lusztig in [CL, pp. 211, 222]. more exactly.
"
(by 3.3e))
if char K = 0, then
(see 4.7). Dk,K
,
NK
and because (contravariant)
have the same character
In particular,
and E~.
(x, ~K(y)) = <x,y~
l~(n,r)
--> 0
.~r : <x, Nk;~ = 0}
is contravariant
may define a non-singular,
Hence
,K
(see 2.7, example I):
Vk, K =
<,>
D~
~(n,r)
be the orthogonsl complement to
<,>
(5.1b)
~r ~K : EK ~ ~ , K
SK(n,r)-epJmorphism
®r ~ K --> EK
0 --~ N K
Since
be any infinite field.
We have then an exact sequence in
(5.1a) Delinition.
K
"Weyl module" with the module
~
defined by Carter-
First we must describe
N K = Ker ~K
@6
(5.2a)
NK
(i)
is the
R1
K-span of the subset
consists of all
ei
R = R1 U R2 U R 3
such that
i E I(n,r)
and
of
E~,
where
Ti
has equal
entries in two distinct places in some column. (ii)
R2
cons~ists of all
e i - s(o)elo ,
(iii)
R3
consists of all elements
where
Y
i ~ I(n,r)
s(w)eiv,
where
and
o ~ C(T).
i ~ l(n,r) _
o~G(J) and
J
is a non-empty subset of
Proof the
All the elements of K-span
N'
F = EK®r'-'/m onto function
of
R.
~,K'
R
Ch+I(T)
lie in
for some
N = NK,
h E {1,2 ..... r-l}.
by (4.6b), and so
There is therefore a well-defined given by
~(x+N') = ~K(X),
all
K-map
x ~ E~ r.
N ~
contains from
Now the
f : I(n,r) ~ F given by f(i) = ei+N' clearly satisfies the hypotheses
of (4.6a), hence
F
is K-spanned by the set {e i + N' : T i standard}.
this set onto a basis of implies
NC
N';
hence
~,K'
by (4.5a).
N = N',
Therefore
and (5.2a) is proved.
Ker ~ = O,
But ~ maps
which
As a corollary
we have
(5.2b)
~,K
x E ~EKr which satisfy the
is the set of all elements
following conditions: (i)
<x,
ei>=
0
for all
i ~ I(n,r)
such that
Ti
has equal entries
Jn two distinct places in the same c61umn. (ii)
~
(iii)
= s(o)x %
for all
s(~)x~ -I = 0
o E C(T). any non-empty subset
J
of
~h+l(T), h ~ {l,2,...,r-l}
~ G(J) Proof (5.2a) shows that <x, R
>= 0
for
VX, K
s = 1,2,3.
consists of all
x E E~ r
such that
(5.2b) is an almost immediate consequence
S
of this, together with the fact that an "invariance" condition
<,>
is non-singular, and satisfies
67
(5.2c)
<x~, y> = <x, y~
for all
x,y 6 E K ,
n ~ G(r).
form the deflnitien of
<,>
>
This last condition is verified trivially
(see 2.7, example I).
It is now easy to see that "Weyl module"
-I
Vk, K
~oincides with Carter-Lusztig's
~± ", conditions (28), (29) of [CL, p. 211] are essent~ally
(i), (ii), (iii) of (5.2b).
Example..... I.
If
k = (r,0, "'" ,0)
(i), (il) of (5.25) are vacuous. transpositions all s ~ e t r i c
x
in
contravariant sense to the
over all
~1
e~ = e I
Pn
...e n
for
•
So
Assume
{v
Vk, K
Dr, K
: a ~ A(n,r)},
(v ,e~) = 6
r ~ n
x ~ = x,
for all
is the space of
EK~r. Therefore this module is dual in the
where
The non-singular contravariant form
so that
Conditions
Vk,K"
Condition (iii) says
rth symmetric power
has basis
is given by
[x ample 2. We write
Vr, K
I ~ ~.
(see 5.])
V r,K
v = (h, h +I) (h = i,..., r-l). tensors•
Example i).
we write
for all
~'~
{e ~ : ~ ( A(n,r)}
and that
of
EK va
(see 4.4, =
~ei,
sum
(,) : Vr, K × Dr, K ~ K
a,~ 6 A(n,r).
is a basis of
k = (I,...,I,0 .... ,0)
D
Here r,K"
with
r
l's.
V
for Vk, K. Condition (iii) of (5.2b) is vacuous, and (ir),K conditions (i), (ii) show that V is the space of all antisymmetric (ir),K whatever tensors in E,er K . V (ir),K is an irreducible module in ~ ( n , r ) , the characteristic of
K,
since its character
er(Xl,...,X n)
non-trivial expression as a sum of symmetric functions in Therefore
V
(Ir),K
Is isomorphic to the
rth
has no
Z(XI,...,Xn].
exterior power
68
D(ir), K =
~&rEK
ArEK ~ V
(see 4.4, example 2). which takes each
e
(ir),K
'fhere is an isomorphism
= e s
A ...4
ei
iI
(see 3.3) to r
(eil ® "'" ~ elr )( ~6G Z s(~)~). 5.3
The Carter-Lusztig Carter-Lusztig
is a cyclic module
basis for
Vk, K
have given a basis for [CL, pp. 216-219],
different proof of these results.
VX,K,
V~, K
It Is interesting that the Carter-Lusztig DX, K
If
X
given in (4.5a);
are connected by a certain unimodular matrix
(see (5.3)) which has appeared in work of D~sarmenien
Notation.
VX, K
We shall give here a slightly
basis (see (5.3b)) is not the dual of the basis of these bases of
and shown that
[De, ~.7~ ].
is any subset of the syum~etric group
Ix]
=
z
~,
{x}
G(r),
we shall write
=
n~X These are elements of the group-rlng even belong to
ZG(r).
(5.3a)
be the element of
Let
fg = eg{C(T)}
Proof For
lies in
Verify that y ~ RI U
as in (5.2a) y = e~{G(J)}. B = G(J)C(T) up
g
B
R2 (iii).
KG(r),
l(n,r)
of course.
K = Q,
given in (4.3b).
they
Then
Vk, K .
= O,
for all
there is no problem.
y E ~i U R 2 U R 3
Suppose then that
In the notation just introduced,
Using (5.2c),
<e~{C(T)},
{~, ~K}, each
(see (5.2a)).
y =
% s(v)eiv, v~G(J)
this reads
ei{G(J)} > = <e~, el{B}>,
= YC(T) -- see the proof of (4.6b).
as union of pairs
If
K
where
As in that proof, we break
being a transposition
6g
in
R(T).
Using (5.2c) again, and the fact that
<e%, ei(s(~)~
+ s(~K)~<)> = 0
for each pair.
e%<
= e% ,
Therefore
we have
= 0,
and this completes the proof of (5.3a).
Remark.
fg
is the element denoted by
Now let by
fg.
Since
As
V'
be the
K-space,
V'
V'
is
in [CL, p. 216].
SK(n,r)-submodule of
V~, K
is spanned by the elements
~i,jfg = (~i,jeg){C(T)}, Hence
~
~i,jfg(i,j
we see by (2.6a) that
K-spanned by the elements
which is generated E I(n,r)).
~i,jfg = 0
unless
b i = ~i,gfg(i ~ I(n,r)).
In fact we have the following much more precise statement.
(5.3b)
Theorem
[CL, Theorem 3.5, p. 218].
{b i = ~i,~fg
is a
K-basis for
generated as
Proof
Let
Vk, K.
SK(n,r)
In particular
(or
i,j ~ I(n,r).
that there is some ~ ~ R(T)
iR(T) of
h ~ l(n,r)
~ ~ G(r)
(see (4.3b)).
standard}
i.e.
VX ,K
is
f~.
From (2.6a) we have
~i,ge$ =
where the sum is over all
Ti
V' = Vk, K,
KFK)-module by
(5.3c)
if
: i ~ I(n,r),
The set
with
~i: e h , h such that
h = i~, f = ~ .
(i,~) ~ (h,g), But
g = ~
So the sum in (5.3c) is over the
i.e. such if and only
R(T)-orbit
i.
Now we bring in the form and calculate
(,) : VX, K x D~, K ~ K
introduced in 5.1,
70
(bi, (Tg : Tj)) = =
< ~i,ge~ ,
=
<
and
~(i,j)
i
=
~ eh , h~iR(T)
% s(o),
belong to the same Now suppose
% ~C(T)
o E C(T)
If
this implies
o = i,
(see 4.5), and if the matrix I
(~(i,j)),
= {k E I(n,r)
that
either
i > J
singular.
Note.
5.4
i
and
for all
and
J
i
that
jo
There
R(T)-orbit.
~(i) = ~(jo)
~(jo) > ~(j).
Let
~
denote
running over the set I
i,j ~ I ,
any total order
then
~(i,i) = i.
~(i,j) = (bi, (Tg : Tj)),
standard}
~(i,j) # O.
are in the same
If we give
and clearly
{(Tg : Tj) : Tj
A proof that
such
In any case we have
standard}.
{b i : T i
standard}
~
such
is a unimodular ~(i,j) ~ 0 Hence
and since
is a basis of
is a basis of
>
VX, K.
~ (,)
implies is nonis
Dk, K,
it
This proves (5.3b).
is unimodular is given by D~sarm~nien in [De, ~ . ~ ] .
Some consequences of the basis theorem For each
b i = ~i,~f~ of
o ~ C(T)
then (4.5d) implies
i = j ,
But since
follows that
is equal to
For by what we have shown above,
or
non-singular and
jo
i = j.
~(i) > ~(j) = i ~ J
triangular matrix.
~(i,j),
using (5.3c).
are both standard, and that
with
: Tk
s(o)ejo > ,
sum over all
such that
¢ ~ 1
using (5.2c)
R(T)-orbit.
Ti, Tj
must exist
< ~i,~eg{C(T)}, ej>
ej{C(T)} > ,
This last expression, which we denote (5.3d)
=
i.
From
i ~ I(n,r),
satisfies (5.3b)
it is clear by (2.3c) that the element
~i,i bi = b i , we deduce
thus
~ K, b i ~ V~,
where
is the weight
71
(5.4a)
Let
a (£(n,r).
{b i : i ~ ~,
In particular,
Then
Ti
Vk, K
has
K-basis
standard}
V k,K ~ = K.fg
(since
b~ = ~g,g fg
= f~).
A well-
The element
fg
known argument (see e.g. [J", p. 2]) shows (5.4b)
VX, K
has a unique maximal submodule
does not lie in character
Proof Vk,K,
~ax ,K '
~k,p,
By (5.3b)
V' =
FX,K = Vk,K/~ax ,K
has
Any proper submodule
M
The irreducible module
where
VX, K
p = char. K.
is generated by
since it does not contain
lie in
~ax ,K "
fg,
a proper
Z Vk,K, ask
sum of all proper submodules
M
f&.
has
M k= M N VX, K = 0,
K-subspace of of
and is the unique maximal submodule
VX, K
V~, K.
lies in
V',
hence is proper,
and does not contain
Since
DX, K
is dual to
VX, K
VX, K
M
Therefore the
third statement in (5.4b) now follows from the definition of (see 3.5, Remark (i)) and the fact that kI kn ~k = XI ... Xn + . . . .
and so
of
f~.
The
~k,p
has character
we have the following corollary to
(5.4b).
(5.4c) Hence
Dk, K
has a unique minimal submodule
minK , FX, K D~,
_min uk, K
and
min Dk, K
~
(Fk,K)°.
are isomorphic modules.
Proof.
The second statement follows from the first, and the fact that any
module
V
V°
in
~k(n,r)
(see (3.3e)).
If
has the same character as its (contravariant) dual V
is irreducible
this implies
V ~ V °.
72
Remark
Since
Fk, K
same is true of
has
minK . Dk,
k-weight space of dimension minK Dk,
Therefore
SK(n,r)-module
by, the element
k-weight-space
of
DX, K
is
1
(by (5.4c))
the
contains, hence is generated as
(Tg : Tg).
For (4.7a) shows that the
K (T~ : T~).
This proves (5.4d) below.
A
quite different proof comes from a standard argument for semisimple algebraic groups, using the fact (cf. 4.8) that the action of the upper and lower unipotent F = GL (K). n (5.4d)
(T~ : Tg)
of
@r V~,Q = {x ( EQ : < x, NQ > 0},
Vk, Z = E~Z
Lemma.
(i)
bi,Q
~r EZ ,
The sets
=
Vk, Z.
Vk, Z.
It
Vk, Q
is defined over
NO = Ker
is a
that,
lies in
Since (i) is a Q-basis of
Z-form of
Since
j ~ I(n,r).
Take any
t h e p r o o f of
(5.3b)
j gives
Ti .
(see 5.1).
it has
Vk, Q.
<x,
Sz(n,r) ,
i 6 I(n,r),
(by (5.3b)), the proof of (5.4e)
we have T. J
Z-basis
hence that (i) is a subset of
x = ~k.b. ll
such that
Recall that
standard}
every element
x E E~r Z ,
~
the standard
~r,
We certainly have
k.1 ~ Q"
Ti
for each
Vk, Q
be a c h i e v e d i f we show t h a t
some
~Q
Z.
are both closed to the action of
is clear
~i,~ e~,Q{C(T)}
Z-span of (i). for
Vk, K
where
N Vk, Q
generates the irreducible module DX, K .
{bi, Q : i E l(n,r),
hence so is
will
triangular subgroups of
min
Dk, K
We show next that the family
Proof.
is stable under
See [St. p. 214].
The element
(5.4e)
(Tg : Tg)
is
x ( Vk, Z
is
(sum is over <x,
ej>
standard.
e.> = ~k. !,~ ( i , j ) , j 1
6
in the
Ti Z
standard)
for all
The c a l c u l a t i o n t h e sum b e i n g o v e r
s
But t h e " D e s a r m e n i e n m a t r i x "
~ = ~(i,j)),
in case
in
73
K = Q,
is integral and unimodular.
standard
Remark
E~;
Tj) follows
(5.4e) shows that
hence the inclusion
0 ~ l~, Z ~ EZ ~
in the category EZ~ ® K.
8K
(5.4f)
described.
as submodule of {EK ~}
is
Z-defined
From (5.4e) and (5.3b) it is
induces an isomorphism
bi, Q ® IK~
The family
If we tensor with
0 ~ V~, Z ~ K ~ E~r ~ K
VX, z ~ K
6 K : el, Q ® 1 K ~ el, K.
Z-submodule of
Z-split.
we showed that the family
8~ : Vk,Z®
which maps
is
We shall regard
In 2.6, Example 1
immediate that
Uence (5.4e) is proved.
is a pure
we get an exact sequence
MK(n,r ).
E~ ~r and maps
Ti).
(all
VX, Z = Sz(n,r)f ~.
Vk, Z = E~ r n VX,Q
K
~k i ~ (i,j) ~ Z
(all standard
It is clear that
any infinite field
by
ki ~ Z
So from
bi, K
{Vk, K } is
for all
K -+ VX, K.
i E I(n,r).
Z-defined by
The family of inclusions
So is the family of contravariant
Vk, Z
VX, K ~+ E ~r K
forms
From all this we deduce:
and the maps
6'K just
is also defined over
(')K : VX,K x Dk, K -+ K
Z.
defined in
5.1.
5.5
Contravariant
forms on
J.C. Jantzen (see
V%, K
~], CJ'] ....
)
has studied contravariant forms
on the Weyl modules for a simply-connected,
semisimple algebraic group;
particular,
SLn(K)
his results apply to the group
alteration to our case
F = GL (K) . n
,
in
and extend with little
In this section we shall give an
independent description of the contravariant
forms on the modules
V%, K .
74
We saw that element
is generated
f~ = e~{C(T)}
in the submodule form
V%, K
< , >
on
(5.5a)
of
E mr ,
Emr{C(T)} E mr ,
of
as
S(= SK(n,r))-module
and it follows
E mr
that
by the
VI, K
is contained
We have the canonical
and from (5.2c) we deduce
<x{C(T) }, y> = <x, y{C(T)}>
This allows us to define a "contracted"
contravariant
that
for all
v e r s i o n of
x, y g E mr
< , >
on
Emr{C(T)}
by
the rule
(5.5b)
If
x, y e E ~r ,
define
<<x{C(T)},
y{C(T)} >> = <x, y{C(T)} >
Any ambiguity
arising
from the fact that an element
expressed
x{C(T)}
= x'{C(T)}
as
is eliminated
by (5.5a).
contravariant
form on
symmetric, rule
(5.5b) gives
We might
also m e n t i o n
form on
If we restrict
V%, K ,
=
of
E mr ,
is a symmetric, it to
V%, K
which is moreover
= <e~,e~{C(T)}>
fields
x, x'
may be
~ s(o) oEC(T) << ' >>K '
K , is defined over
Z
we get a
non-zero,
<e~,e~o>
since
= 1
constructed
in
in the sense of
3.
Any contravariant factor, with
.
<< , >>
that the family of forms
this way for all infinite 2.7, Example
It is clear that
<>
Emr{C(T)}
for distinct elements
Emr{C(T)}
contravariant
of
<< , >> .
form
(
,
)
on
V%, K
For the contravariant
coincides, property
up to a scalar
(2.7d),
together with
75
the fact that values
VX, K = Sf~ ,
shows that
(f~,v) , v E VX, K .
with each
v
If
(
,
v s V%, K
)
is
determined by the
is decomposed as sum
belonging to the weight space
V%, K~
v = Ev~ ,
(~ c A(n,r)) ,
then
since weight spaces for distinct weights are orthogonal with respect to (
,
)
(see 3.4),
by the values space is
(f%,v)
K.f%
for
(f~,v) = (f~,vx) , v
(see 5.4a).
Therefore (since for all
we have
in the So
(
<> = i)
i.e.
%-weight space
,
if
)
(
,
)
x VX, K .
is determined
But this weight
is completely determined by (f~,f%) •
(f~,f%) = k ,
then
(v,w) = k<> ,
v,w e VX, K o
In the work of Jantzen which we have mentioned, and also in the earlier work of W.J~ Wong ([W],[W']) , a Weyl module submodule
(5.5c) space
(~W',
of
V .
In our case the result reads as follows.
Theorem 3B, p.362~).
M = {v g VX, K : <
M
M # V%, K , of
is that it provides a method of calculating the maximal
Vmax
submodule
Proof.
V
V%, K (~ ~ X)
of
I,K
since
The radical of
>>= O}
<< , >> ,
i,e.
coincides with the unique maximal
f% ~ax ,K
Since
M .
V~, K
by the contravariant property.
Therefore
M £~%ax -
lies in the sum V'
V'
is orthogonal to
.
v~%ax lies in ,K
M
Also
But we saw in the proof
,K
of all the weight-spaces X VX, K = K.f~ ,
we have
(~ax VX,K ) = <
the
V%, K .
is a submodule of
(5.4b) that
the importance of the contravariant form on
and the proof of (5.5c) is complete.
76
Example.
We shall calculate
of 5.2, Example
i).
one row, and so
Since
the form
I = (r,O,.,.,O),
C(T) = {i} .
to
Vr, K
of the canonical
{v
: ~ e A(n,r)}
and
< , >
E ~r .
form
Since
(r,~)
maxK , M = Vr,
1
has only
is just the restriction to the basis
I, the form is given by where
r~ f
C~n -
of this form is spanned by those
the integer
(notation
Relative
= (r,~).l K ,
C~I . . . .
divides
on
Vr, K
the diagram of
<< , >>
(r, ~)
M
on
Therefore,
given in 5.2, Example
= O (e # B)
So the radical
<< , >>
v
p = char K
for which
.
the irreducible
module
F r , K = V r,~~/vma~ r,~
(see
(5.4b)) has basis
(v
The
a-weight
the character with
(r,e)
+ M : ~ s A(n,r),
space of
Fr, K
of
is
Fr, K
~ O mod p .
~r,p
expansion
(5.5d)
(XI+°..+Xn)r
is
~
r,p
~ 0 m o d p} .
K . ( v ~ + M) =
IX ~ 1
°'°
Since the integers
the multinomial
we have the result:
is
(r,~)
=
~ esA(n,r)
,
for all
x~n n
(r,~)
~i
(r,~)X 1
(which is a polynomial
t h e sum o f t h o s e m o n o m i a l s
X l l ' " .xan n
'
e e A(n,r)
sum over all
.
Hence
~ e A(n,r)
are the coefficients
in
~n
... X n
over
Z ,
which have non-zero
by definition)
coefficients
77
when
(5.5d) is reduced mod
p .
The reader may deduce
case of Steinberg's
"Tensor Product T h e o r e m " ( ~ t ,
r = r0 + r l P
(O < ro,rl,... , ! p - l )
+ ...
i ~r,p(Xl,...,Xn) M = 0
=
symmetric
5.6
function"
Z-forms
of
Fr, 0 = Vr, K .
h =r
and
~ ~eA(n,r)
Of course in case
The character
X ~ (~i, p.14])
p = 0
is the "complete
.
V%,Q
D%,Q
are irreducible, mr
~Q : EQ
In fact the map
isomorphism.
: if
then
In this section we work over the rational V%,Q
p.218])
i
E ~ri,p(X ~ ...,X p ) . i>O ' n
and we can take
from this a special
For
~
÷ DX,Q
is certainly
field
Q .
The modules
and isomorphic
to each other
i n d u c e s a map
~:Vx,Q ~ DX, Q
a homomorphism, and i t
(see 5.1).
w h i c h i s an
is non-zero
because
(5.6a)
~(f ) =
Therefore
by Schur's
We would Z-form
Z ocC(T)
L ,
a
the argument
L % = Z. Yfz
Z-form of
essential
lemma
V%,Q
for some ,
and
the Z-forms
(since
lying in
V%,Q = Q'fz
O # y ~ Q .
(y-IL)% = Z . f
if we confine our attention
"normalized"
s(o)(T~:Tzo)
=
V%,Q
at the end of 3.3 shows that
of rank 1 '
Z oEC(T)
IC(T) I(T%:T%)
•
~ is an isomorphism.
like to describe
a free Z-submodule that
s(o) %(e%o ) =
by the condition
to
.
For any such
L % = L ~V~,Q
has dimension
It is clear that .
Therefore
Z-forms
L
i),
y-iL
is so is also
we shall lose nothing of
V%,Q
which are
78
(5.6b)
L % = Z.f
We already know two such normalized Sz(n,r).f %
Z-forms, namely
~r V%, Z = E Z {'~ V I , Q
=
(see (5.4e)), and
(5.6c)
XI, z = g-l(ic(T ) IDI,z) .
XI, Z
is a Z-form of
is a
Z-form of
VI,Q , because
DI,Q •
DI, Z
-
hence also
IC(T) IDI,Z
-
It is normalized, because
XI,ZI = ~-I(Ic(T) ID%I,Z) = %-I(z. IC(T) l.(r :T~)) = Z.f% , by (4.7c) and (5.6a).
Our aim is the following theorem.
(5.6d) ~
Verma IV, p.681]). Then
Proof.
contain
write
SZ
L
for
f% ,
Sz(n,r) ).
spaces
La
for
Z-form of
V%,Q
which
it contains
Sz.f ~ = V%, Z
(we shall
<> = <>
using the contravariant property of
SzL ~ L . ~ ~ I .
be any
On the other hand
= <<SzL,f~>> ~ <> , and the fact that
L
Vk, Z C_ L c_ X%, Z .
satisfies (5.6b).
Since
Let
So
But
f%
<< , >>
is orthogonal to all the weight
<> = <> = <>
= Z<> = Z . We have hereby proved that
L
Y%,Z = {y ~ V%,Q : <> ~C Z}
lies in the set
79
So it will'be
enough
to prove
follows
from the argument
let
be any element
z
DX, Q
~(z)
.~ kj 3
=
the sum being over
just given,
of
Y%,Z
"
taking
Using
That
X%,Z ~ Y % , z
L = X%, Z .
the basis
Conversely
theorem
(4.5a) for
Q .
Because
IC(T)I(T%:Tj) ,
j e I(n,r) z e YI,Z
such that
Our definition the case
is standard;
the
k. J
lie
(5.5b) gives
e Z ,
for all
a particularly
simple
i e I(n,r)
formula
.
for
<<
,
>>
in
char K = O , namely 1 <> =IC(T-~
(5.6f)
For if
T. J
we have
<> = <> i i
by
Y%,Z = X%,Z
we may write
(5.6e)
in
that
u = x{C(T)}
(5.2c),
following
(5.6g)
{C(T)} 2 =
for all
u, v E EmriC(T) j "- -~
as in (5.5b), we have
IC(T) I{C(T)}
.
We use
= <x,y{C(T)} 2>
(5.6f)
to
make
the
calculation:
<>
= I kj = E k. ~(i,j) J J J comes
of the D~sarm~nien T. , l
for all standard therefore
, v = y{C(T)}
and also
The last equality
standard
,
from the calculation
coefficient
~(i,j)
the unimodularity T. . J
the proof
Referring
of (5.6d)
.
preceding
Since
(5.6g)
of the Desarmenien to (5.6e),
is complete.
the definition lies in
matrix
this proves
Z
for all
shows that
that
(5.3d)
k° ~ Z J
z s X%, Z , and
5 The Carter-Lusztig modules Vλ,K
5.1 Definition of Vλ,K Let λ ∈ Λ+ (n, r) be given, and let K be any infinite field. Denote by NK ⊗r the kernel of the SK (n, r)-epimorphism ϕK : EK → Dλ,K defined in 4.4. We have then an exact sequence in MK (n, r) (5.1a)
ϕK
⊗r 0 −→ NK −→ EK −→ Dλ,K −→ 0.
Definition. Let Vλ,K be the orthogonal complement to NK , relative to the ⊗r (see 2.7, example 1): canonical form , on EK (5.1b)
⊗r Vλ,K = { x ∈ EK : x, NK = 0 }.
⊗r Since , is contravariant and NK is an SK (n, r)-submodule of EK , Vλ,K ⊗r is also a submodule of EK . Since , is non-singular, we may define a nonsingular, contravariant form ( , ) : Vλ,K × Dλ,K → K by ⊗r (x, ϕK (y)) = x, y , all x ∈ Vλ,K , y ∈ EK . ◦ Hence Vλ,K ∼ , and because (contravariant) dual modules in MK (n, r) = Dλ,K have the same character (by (3.3e)), ΦVλ,K = Sλ (X1 , . . . , Xn ). In particular, if char K = 0, then Vλ,K is irreducible, and is isomorphic to Dλ,K (see 4.7). If K has finite characteristic, then in general Vλ,K and Dλ,K are not isomorphic.
5.2 Vλ,K is Carter-Lusztig’s “Weyl module” λ
We shall identify Vλ,K with the module V defined by Carter and Lusztig in [6, pp. 211, 222]. First we must describe NK = Ker ϕK more exactly. ⊗r (5.2a) NK is the K-span of the subset R = R1 ∪ R2 ∪ R3 of EK , where
44
5 The Carter-Lusztig modules Vλ,K
(i) R1 consists of all ei such that i ∈ I(n, r) and Ti has equal entries in two different places in some column. (ii) R2 consists of all ei − s(σ)eiσ , where i ∈ I(n, r) and σ ∈ C(T ). (iii) R3 consists of all elements ν∈G(J) s(ν) eiν , where i ∈ I(n, r) and J is a non-empty subset of Ch+1 (T ) for some h ∈ {1, 2 . . . , r − 1}. Proof. All the elements of R lie in N = NK by (4.6b), and so N contains the K-span N of R. There is therefore a well-defined K-map ψ ⊗r ⊗r /N onto Dλ,K , given by ψ(x + N ) = ϕK (x), all x ∈ EK . Now from F = EK the function f : I(n, r) → F given by f (i) = ei + N clearly satisfies the hypotheses of (4.6a), hence F is K-spanned by the set { ei + N : Ti standard }. But ψ maps this set onto a basis of Dλ,K , by (4.5a). Therefore Ker ψ = 0, which implies N ⊆ N ; hence N = N , and (5.2a) is proved. As a corollary, we have ⊗r which satisfy the following (5.2b) Vλ,K is the set of all elements x ∈ EK conditions:
(i) x, ei = 0 for all i ∈ I(n, r) such that Ti has equal entries in two distinct places in the same column. (ii) xσ = s(σ) x for all σ ∈ C(T ). (iii) ν∈G(J) s(ν) xν −1 = 0 for any h ∈ {1, 2 . . . , r − 1} and any non-empty subset J of Ch+1 (T ). ⊗r such that x, Rs = 0 Proof. (5.2a) shows that Vλ,K consists of all x ∈ EK for s = 1, 2, 3. (5.2b) is an almost immediate consequence of this, together with the fact that , is non-singular, and satisfies an “invariance” condition
(5.2c)
xπ, y = x, yπ −1
⊗r , π ∈ G(r). This last condition is verified trivially from the for all x, y ∈ EK definition of , (see 2.7, example 1).
It is now easy to see that Vλ,K coincides with Carter-Lusztig’s “Weyl λ
module” V ; conditions (28), (29) of [6, p. 211] are essentially (i), (ii), (iii) of (5.2b). Example 1. If λ = (r, 0, . . . , 0) we write Vr,K for Vλ,K . Conditions (i) and (ii) of (5.2b) are vacuous. Condition (iii) says xν = x, for all transpositions ν = (h, h + 1) (h = 1, . . . , r − 1). So Vλ,K is the space of all symmetric ⊗r . Therefore this module is dual in the contravariant sense tensors x in EK th to the r symmetric power Dr,K of EK (see 4.4, example 1). Vr,K has basis { vα : α ∈ Λ(n, r) }, where vα = ei , sum over all i ∈ α. The non-singular contravariant form ( , ) : Vr,K ×Dr,K → K (see 5.1) is given by (vα , eβ ) = δα,β for all α, β ∈ Λ(n, r). Here eβ = eβ1 1 · · · eβnn , so that { eβ : β ∈ Λ(n, r) } is a basis of Dr,K .
5.3 The Carter-Lusztig basis for Vλ,K
45
Example 2. Assume r ≤ n and that λ = (1, . . . , 1, 0, . . . , 0) with r 1’s. We write V(1r ),K for Vλ,K . Condition (iii) of (5.2b) is vacuous, and conditions (i), (ii) show that V(1r ),K is the space of all antisymmetric tensors ⊗r in EK . V(1r ),K is an irreducible module in MK (n, r), whatever the characteristic of K, since its character er (X1 , . . . , Xn ) has no non-trivial expression as a sum of symmetric functions in Z[X1 , . . . , Xn ]. Therefore V(1r ),K is isomorphic to the rth exterior power D(1r ),K = Λr EK (see 4.4, example 2). There is an isomorphism Λr EK → V(1r ),K which takes (see 3.3) ei1 ∧ . . . ∧ eir −→ (ei1 ⊗ · · · ⊗ eir ) s(π) π . π∈G
5.3 The Carter-Lusztig basis for Vλ,K Carter-Lusztig have given a basis for Vλ,K , and shown that Vλ,K is a cyclic module [6, pp. 216–219]. We shall give here a slightly different proof of these results. It is interesting that the Carter-Lusztig basis (see (5.3b)) is not the dual of the basis of Dλ,K given in (4.5a); these bases of Vλ,K are connected by a certain unimodular matrix Ω (see (5.3d)) which has appeared in work of D´esarm´enien [14, p. 74]. Notation. If X is any subset of the symmetric group G(r), we shall write π, {X} = s(π) π. [X] = π∈X
π∈X
These are elements of the group ring KG(r), of course. If K = Q, they even belong to ZG(r). (5.3a) Let l be the element of I(n, r) given in (4.3b). Then fl = el {C(T )} lies in Vλ,K . Proof. Verify that fl , y = 0, for all y ∈ R1 ∪ R2 ∪ R3 (see (5.2a)). For y ∈ R1 ∪ R2 there is no problem. Suppose then that y = ν∈G(J) s(ν) eiν , as in (5.2a)(iii). In the notation just introduced, this reads y = el {G(J)}. Using (5.2c),
el {C(T )}, ei {G(J)} = el , ei {B} , where B = G(J)C(T ) = Y · C(T )—see the proof of (4.6b). As in that proof, we break up B as a union of pairs {π, πκ}, each κ being a transposition in R(T ). Using (5.2c) again, and the fact that el κ = el , we have for each such pair el , ei (s(π)π + s(πκ)πκ) = 0. Therefore fl , y = 0, and this property completes the proof of (5.3a). Remark. The element fl is denoted by Φµ in [6, p. 216].
46
5 The Carter-Lusztig modules Vλ,K
Now let V be the SK (n, r)-submodule of Vλ,K which is generated by fl . As K-space, V is spanned by the elements ξi,j fl (i, j ∈ I(n, r)). However, since ξi,j fl = (ξi,j el ){C(T )}, we see by (2.6a) that ξi,j fl = 0 unless j ∼ l. Hence V is K-spanned by the elements bi = ξi,l fl (i ∈ I(n, r)). In fact we have the following much more precise statement. (5.3b) Theorem (see [6, Theorem 3.5, p. 218]). The set { bi = ξi,l fl : i ∈ I(n, r), Ti standard } is a K-basis of Vλ,K . In particular, V = Vλ,K , i.e. Vλ,K is generated by fl as SK (n, r)- (or KΓK -)module. Proof. Let i, j ∈ I(n, r). From (2.6a) we have (5.3c) ξi,l el = eh , h
where the sum is over all h ∈ I(n, r) such that (i, l) ∼ (h, l), i.e. such that there is some π ∈ G(r) with h = iπ, l = lπ. But l = lπ if and only if π ∈ R(T ) (see (4.3b)). So the sum in (5.3c) is over the R(T )-orbit iR(T ) of i. Now we bring in the form ( , ) : Vλ,K × Dλ,K → K introduced in 5.1, and calculate (bi , (Tl : Tj )) = bi , ej = ξi,l el {C(T )}, ej = ξi,l el , ej {C(T )} , using (5.2c) eh , s(σ)ejσ , using (5.3c). = h∈iR(T )
σ∈C(T )
This last expression, which we denote Ω(i, j), is equal to (5.3d) Ω(i, j) = s(σ), sum over all σ ∈ C(T ) such that jσ and i σ belong to the same R(T )-orbit. Now suppose Ti , Tj are both standard, and that Ω(i, j) = 0. There must exist σ ∈ C(T ) such that jσ and i are in the same R(T )-orbit. If σ = 1, this implies i = j. In any case we have β(i) = β(jσ) (see 4.5), and if σ = 1 then (4.5d) implies β(jσ) > β(j). Let Ω denote the matrix (Ω(i, j)), with i and j running over the set I ∗ = { k ∈ I(n, r) : Tk standard }. If we give I ∗ any total order > such that β(i) > β(j) implies i > j for all i, j ∈ I ∗ , then Ω is a unimodular triangular matrix. For by what we have shown above, Ω(i, j) = 0 implies either i > j or i = j, and clearly Ω(i, i) = 1. Hence Ω is non-singular. But since Ω(i, j) = (bi , (Tl : Tj )), and since ( , ) is non-singular and { (Tl : Tj ) : Tj standard } is a basis of Dλ,K , it follows that { bi : Ti standard } is a basis of Vλ,K . This proves (5.3b). Note. A proof that Ω is unimodular is given by D´esarm´enien in [14, p. 74].
5.4 Some consequences of the basis theorem
47
5.4 Some consequences of the basis theorem For each i ∈ I(n, r), it is clear by (2.3c) that the element bi = ξi,l fl satisα , where α is the weight of i. From (5.3b) we fies ξi,i bi = bi ; thus bi ∈ Vλ,K deduce α (5.4a) Let α ∈ Λ(n, r). Then Vλ,K has K-basis { bi : i ∈ α, Ti standard }. λ = K ·fl (since bl = ξl,l fl = fl ). A well-known argument In particular, Vλ,K (see e.g. [31, p. 2]) shows max
(5.4b) Vλ,K has a unique maximal submodule Vλ,K . The element fl does not max max lie in Vλ,K . The irreducible module Fλ,K = Vλ,K /Vλ,K has character Φλ,p , where p = char K. Proof. By (5.3b) Vλ,K is generated by fl . Any proper submodule M of Vλ,K , λ = 0, and so M lies in since it does not contain fl , has M λ = M ∩ Vλ,K V =
α Vλ,K ,
α=λ
a proper K-subspace of Vλ,K . Therefore the sum of all proper submodules M max of Vλ,K lies in V , hence is proper, and is the unique maximal submodule Vλ,K , and does not contain fl . The third statement in (5.4b) now follows from the definition of Φλ,p (see 3.5, Remark (i)) and the fact that Vλ,K has character Sλ = X1λ1 · · · Xnλn + · · · . Since Dλ,K is dual to Vλ,K , we have the following corollary to (5.4b). min min (5.4c) Dλ,K has a unique minimal submodule Dλ,K , and Dλ,K ∼ = (Fλ,K )◦ . min Hence Dλ,K , Fλ,K are isomorphic modules.
Proof. The second statement follows from the first, and the fact that any module V in MK (n, r) has the same character as its (contravariant) dual V ◦ (see (3.3e)). If V is irreducible this implies V ∼ = V ◦. Remark. Since Fλ,K has λ-weight-space of dimension 1 (by (5.4b)) the same min min is true of Dλ,K . Therefore Dλ,K contains, hence is generated as SK (n, r)module by, the element (Tl : Tl ). For (4.7a) shows that the λ-weight-space of Dλ,K is K · (Tl : Tl ). This proves (5.4d) below. A quite different proof comes from a standard argument for semisimple algebraic groups, using the fact (cf. 4.8) that (Tl : Tl ) is stable under the action of upper and lower unipotent triangular subgroups of Γ = GLn (K). See [50, p. 214]. min
(5.4d) The element (Tl : Tl ) of Dλ,K generates the irreducible module Dλ,K . We show next that the family Vλ,K is defined over Z. Recall from 5.1 that Vλ,Q = { x ∈ EQ⊗r : x, NQ = 0 }, where NQ = Ker ϕQ .
48
5 The Carter-Lusztig modules Vλ,K
(5.4e) Lemma. Vλ,Z = EZ⊗r ∩ Vλ,Q is a Z-form of Vλ,Q . It has Z-basis B = { bi,Q : i ∈ I(n, r), Ti standard }. Proof. The sets EZ⊗r , Vλ,Q are both closed to the action of SZ (n, r), hence so is Vλ,Z . It is clear that, for each i ∈ I(n, r), bi,Q = ξi,l el,Q {C(T )} lies in EZ⊗r , hence that B is a subset of Vλ,Z . Since B is a Q-basis of Vλ,Q (by (5.3b)), the proof of (5.4e) will be achieved if we show that every element x ∈ Vλ,Z is in the Z-span of B. We certainly have x = ki bi (sum is over Ti standard) for some ki ∈ Q. Since x ∈ EZ⊗r , we have x, ej ∈ Z for all j ∈ I(n, r). Take any j such that Tj is standard. The calculation in the proof of (5.3b) ki Ω(i, j), the sum being over the standard Ti . But the gives x, ej = “D´esarm´enienmatrix” Ω = (Ω(i, j)), in case K = Q, is integral and unimodular. So from ki Ω(i, j) ∈ Z (all standard Tj ) follows ki ∈ Z (all standard Ti ). Hence (5.4e) is proved. Remark. (5.4e) shows that Vλ,Z = SZ (n, r)fl . It is clear that Vλ,Z = EZ⊗r ∩ Vλ,Q is a pure Z-submodule of EZ⊗r ; hence that the inclusion 0 → Vλ,Z → EZ⊗r is Z-split. If we tensor with any infinite field K we get an exact sequence 0 → Vλ,Z ⊗ K → EZ⊗r ⊗ K in the category MK (n, r). We shall regard Vλ,Z ⊗ K as submodule of EZ⊗r ⊗ K. ⊗r } is Z-defined by EZ⊗r and In 2.6, example 1 we showed that the family {EK maps δK : ei,Q ⊗ 1K → ei,K . From (5.4e) and (5.3b) it is immediate that δK induces an isomorphism δK : Vλ,Z ⊗ K → Vλ,K ,
which maps bi,Q ⊗ 1K → bi,K for all i ∈ I(n, r). From all this we deduce: (5.4f ) The family {Vλ,K } is Z-defined by Vλ,Z and the maps δK just de⊗r scribed. The family of inclusions Vλ,K → EK is also defined over Z. So is the family of contravariant forms ( , )K : Vλ,K × Dλ,K → K defined in 5.1.
5.5 Contravariant forms on Vλ,K J.C. Jantzen (see [29], [32], ...) has studied contravariant forms on the Weyl modules for a simply-connected, semisimple algebraic group; in particular, his results apply to the group SLn (K), and extend with little alteration to our case Γ = GLn (K). In this section we shall give an independent description of the contravariant forms on the modules Vλ,K . We saw that Vλ,K is generated as S (= SK (n, r))-module by the element fl = el {C(T )} of E ⊗r , and it follows that Vλ,K is contained in the submodule E ⊗r {C(T )} of E ⊗r . We have the canonical contravariant form , on E ⊗r , and from (5.2c) we deduce that
5.5 Contravariant forms on Vλ,K
(5.5a)
49
x{C(T )}, y = x, y{C(T )} for all x, y ∈ E ⊗r .
This allows us to define a “contracted” version of , on E ⊗r {C(T )} by the rule (5.5b) If x, y ∈ E ⊗r , define
x{C(T )}, y{C(T )} = x, y{C(T )} . Any ambiguity arising from the fact that an element of E ⊗r {C(T )} may be expressed as x{C(T )} = x {C(T )} for distinct elements x, x of E ⊗r , is eliminated by (5.5a). It is clear that
, is a symmetric, contravariant form on E ⊗r {C(T )}. If we restrict it to Vλ,K , we get a symmetric, contravariant form on Vλ,K , which is moreover non-zero, since rule (5.5b) gives
fl , fl = el , el {C(T )} = s(σ) el , elσ = 1. σ∈C(T )
We might also mention that the family of forms
, K , constructed in this way for all infinite fields K, is defined over Z in the sense of 2.7, example 3. Any contravariant form ( , ) on Vλ,K coincides, up to a scalar factor, with
, . For the contravariant property (2.7d), together with the fact by the values (fl , v), v ∈ Vλ,K . that Vλ,K = Sfl , shows that ( , ) is determined vα , with each vα belonging to the If v ∈ Vλ,K is decomposed as a sum v = α (α ∈ Λ(n, r)), then since weight-spaces for distinct weights weight-space Vλ,K are orthogonal with respect to ( , ) (see 3.3, p. 25), we have (fl , v) = (fl , vλ ), i.e. ( , ) is determined by the values (fl , v) for all v in the λ-weightλ . But this weight-space is K · fl (see (5.4a)). So ( , ) is comspace Vλ,K pletely determined by (fl , fl ). Therefore (since
fl , fl = 1) if (fl , fl ) = k, then (v, w) = k
v, w , for all v, w ∈ Vλ,K . In the work of Jantzen which we have mentioned, and also in the earlier work of W.J. Wong [56, 57], the importance of the contravariant form on a Weyl module V is that it provides a method of calculating the maximal max of V . In our case the result reads as follows. submodule V (5.5c) [57, Theorem 3B, p. 362]. The radical of
, , that is, the space M = { v ∈ Vλ,K :
v, Vλ,K = 0 }, coincides with the unique maximal max submodule Vλ,K of Vλ,K . Proof. Notice that M is a submodule of Vλ,K , by the contravariant property. max / M . Therefore M ⊆ Vλ,K . But we saw in the proof Also M = Vλ,K , since fl ∈ max α (α = λ). of (5.4b) that Vλ,K lies in the sum V of all weight-spaces Vλ,K λ Since V is orthogonal to Vλ,K = K · fl , we have (Vλ,K , Vλ,K ) =
Vλ,K , Sfl ⊆
SVλ,K , fl ⊆
V , fl = 0. max
max
max
max
This shows that Vλ,K lies in M , and the proof of (5.5c) is complete.
50
5 The Carter-Lusztig modules Vλ,K
Example. We shall calculate the form
, on Vr,K (notation of 5.2, example 1). Since λ = (r, 0, . . . , 0), the diagram of λ has only one row, and so C(T ) = {1}. Therefore,
, is just the restriction to Vr,K of the canonical form , on E ⊗r . Relative to the basis { vα : α ∈ Λ(n, r) } given in 5.2, example 1, the form is given by vα , vβ = 0 (α = β) and vα , vα = (r, α)·1K , where r! . (r, α) = α1 ! · · · αn ! So the radical M of this form is spanned by those vα for which p = char K divides the integer (r, α). max max Since M = Vr,K , the irreducible module Fr,K = Vr,K /Vr,K (see (5.4b)) has basis { vα + M : α ∈ Λ(n, r), (r, α) ≡ 0 modulo p }. The α-weight-space of Fr,K is K ·(vα +M ), for all α ∈ Λ(n, r). Hence the charX1α1 · · · Xnαn , sum over all α ∈ Λ(n, r) with (r, α) ≡ 0 acter of Fr,K is Φr,p = modulo p. Since the integers (r, α) are the coefficients in the multinomial expansion (5.5d) (X1 + · · · + Xn )r = (r, α) X1α1 · · · Xnαn , α∈Λ(n,r)
we have the result: Φr,p (which is a polynomial over Z, by definition) is the sum of those monomials X1α1 · · · Xnαn which have non-zero coefficients when (5.5d) is reduced modulo p. The reader may deduce from this a special case of Steinberg’s “Tensor Product Theorem” [50, p. 218]: If 0 ≤ r0 , r1 , . . . ≤ p − 1 such that r = r0 + r1 p + · · · , then
i i Φr,p (X1 , . . . , Xn ) = Φri ,p (X1p , . . . , Xnp ). i≥0
Of course in case p = 0, M = 0 and we can take Fr,0 = Vr,K . The character is the “complete symmetric function” hr = α∈Λ(n,r) X α (see [39, p. 14]).
5.6 Z-forms of Vλ,K In this section we work over the rational field Q. We have seen in 5.1 that the modules Vλ,Q and Dλ,Q are irreducible, and isomorphic to each other. In fact the map ϕQ : EQ⊗r → Dλ,Q induces a map ϕ : Vλ,Q → Dλ,Q which is an isomorphism. For ϕ is certainly a homomorphism, and it is non-zero because (5.6a) ϕ(fl ) = s(σ) ϕ(elσ ) = s(σ) (Tl : Tlσ ) = |C(T )| (Tl : Tl ). σ∈C(T )
σ∈C(T )
5.6 Z-forms of Vλ,K
51
Therefore by Schur’s lemma ϕ is an isomorphism. We would like to describe all the Z-forms lying in Vλ,Q . If L is any λ such Z-form, then the argument at the end of 3.3 shows that Lλ = L ∩ Vλ,Q λ is a free Z-submodule of rank 1 (since Vλ,Q = Q · fl has dimension 1), so that Lλ = Z · yfl , for some 0 = y ∈ Q. It is clear that y −1 L is also a Z-form of Vλ,Q , and (y −1 L)λ = Z · fl . Therefore we shall lose nothing essential if we confine our attention to Z-forms L of Vλ,Q which are “normalized” by the condition (5.6b)
Lλ = Z · fl .
We already know two such normalized Z-forms, namely Vλ,Z = E ⊗r ∩ Vλ,Q = SZ (n, r) · fl (see (5.4e)), and (5.6c)
Xλ,Z = ϕ−1 ( |C(T )| Dλ,Z ).
Xλ,Z is a Z-form of Vλ,Q , because Dλ,Z —hence also |C(T )| Dλ,Z —is a Z-form of Dλ,Q . It is normalized, because λ λ = ϕ−1 ( |C(T )| Dλ,Z ) = ϕ−1 (Z · |C(T )| · (Tl : Tl )) = Z · fl , Xλ,Z
by (4.7c) and (5.6a). Our aim is the following theorem. (5.6d) (cf. [53, p. 681]). Let L be any Z-form of Vλ,Q which satisfies (5.6b). Then Vλ,Z ⊆ L ⊆ Xλ,Z . Proof. We write SZ = SZ (n, r). Since L contains fl , it contains SZ · fl = Vλ,Z . On the other hand
L, Vλ,Z =
L, SZ · fl =
SZ L, fl ⊆
L, fl , using the contravariant property of
, and the fact that SZ L ⊆ L. But we know that fl is orthogonal to all the weight spaces Lα for α = λ. It follows that
L, fl =
Lλ , fl =
Z·fl , fl = Z·
fl , fl = Z. We have hereby proved that L lies in the set Yλ,Z = { y ∈ Vλ,Q :
y, Vλ,Z ⊆ Z }. So it will be enough to prove that Yλ,Z = Xλ,Z . That Xλ,Z ⊆ Yλ,Z follows from the argument just given, taking L = Xλ,Z . Conversely let z be any element of Yλ,Z . Using the basis theorem (4.5a) for Dλ,Q we may write kj |C(T )| (Tl : Tj ), (5.6e) ϕ(z) = j
the sum being over j ∈ I(n, r) such that Tj is standard; the kj lie in Q. Because z ∈ Yλ,Z we have
bi , z =
z, bi ∈ Z, for all i ∈ I(n, r). Our definition (5.5b) gives a particularly simple formula for
, in the case char K = 0, namely
52
(5.6f )
5 The Carter-Lusztig modules Vλ,K
u, v =
1
u, v , for all u, v ∈ E ⊗r {C(T )}. |C(T )|
For if u = x{C(T )}, v = y{C(T )} as in (5.5b), then u, v = x, y {C(T )}2 by (5.2c), and also {C(T )}2 = |C(T )| {C(T )}. We use (5.6f) to make the following calculation: (5.6g)
bi , z = kj bi , ϕ−1 (Tl : Tj ) = kj Ω(i, j). j
j
The last equality comes from the calculation preceding the definition (5.3d) of the D´esarm´enien coefficient Ω(i, j). Since (5.6g) lies in Z for all standard Ti , the unimodularity of the D´esarm´enien matrix shows that kj ∈ Z for all standard Tj . Referring to (5.6e), this proves that z ∈ Xλ,Z , and therefore the proof of (5.6d) is complete.
6 Representation theory of the symmetric group
6.1 The functor f : MK (n, r) → mod KG(r) (r ≤ n) In this chapter we shall apply our results on the representations of the general linear group ΓK = GLn (K), to the representation theory over K of the symmetric group G(r). The method is to use a process invented by Schur in his dissertation [47]. Suppose first that r ≤ n. Then there exists a weight ω = (1, 1, . . . , 1, 0, . . . , 0) in Λ(n, r) containing r 1’s. We shall see that for any module V ∈ MK (n, r), the ω-weight-space V ω can be regarded as a left KG(r)-module. The correspondence V → V ω determines a functor f : MK (n, r) → mod KG(r). Schur proved (see [47, sections III, IV]) that in case K = C this functor gives an equivalence between the categories MK (n, r) and mod KG(r); by this means he showed that modules in MC (n, r) are completely reducible, hence are determined up to isomorphism by their characters (see [47, p. 35]). The proof which we have given of this fact, see (2.6e), is essentially Schur’s later proof in [48, p. 77]. Then Schur was able to handle the case n < r by an argument [47, pp. 61–63] which uses another functor, this time from MK (r, r) to MK (n, r). This second functor will be described in 6.5. Of course Schur used his functor f , and its “inverse” (see 6.2), to make deductions about MK (n, r)—his starting point was the known representation theory of G(r). But since we have already got some knowledge of MK (n, r) by the “combinatorial” methods of §4, §5, it is also sometimes profitable to work in the other direction. Let us keep K, n, r fixed for the moment, and write S = SK (n, r). Any module V ∈ MK (n, r) can be regarded as left S-module, and therefore for any weight α ∈ Λ(n, r), the weight-space V α = ξα V (see 3.2) can be regarded as left S(α)-module, where S(α) denotes the algebra ξα Sξα . We get then a functor (6.1a)
fα : MK (n, r) → mod S(α),
which takes each module V ∈ MK (n, r) to V α ∈ mod S(α), and each morphism θ : V → V′ in MK (n, r) to its restriction θα : V α → (V′)α . S(α) is a K-algebra with ξα as identity element. If we choose some element i ∈ I(n, r) which belongs to α, for example (6.1b)
i = (1, 1, . . . , 1, 2, 2, . . . , 2, . . . , n, n, . . . , n)  (α1 entries equal to 1, α2 entries equal to 2, . . . , αn entries equal to n),
we may use the multiplication rules in 2.3 to show that S(α) is spanned, as K-space, by the elements ξiπ,i , π ∈ G. From the equality rule in 3.2 it follows that, for any elements π, π′ ∈ G, ξiπ,i = ξiπ′,i if and only if π, π′ belong to the same double coset with respect to the subgroup Gα = { π ∈ G : iπ = i } of G. So S(α) has K-basis {ξiπ,i }, π running over a set of representatives of the double-coset space Gα \G/Gα . Now suppose that r ≤ n, and that ω is the weight described above. The element (6.1b) corresponding to α = ω is written (6.1c)
u = (1, 2, . . . , r) ∈ I(n, r).
Since the stabilizer in G of this element is Gω = {1}, the algebra S(ω) has K-basis { ξuπ,u : π ∈ G }. An elementary application of multiplication rule (2.3b) shows that ξuπ,u ξuπ′,u = ξuππ′,u , for all π, π′ in G. We have therefore an isomorphism of K-algebras (6.1d)
S(ω) ≅ KG(r),
which takes ξuπ,u → π for all π ∈ G = G(r). By means of this isomorphism the categories mod S(ω) and mod KG(r) can be identified. With this identification we define the Schur functor [47, p. 22], f : MK (n, r) −→ mod KG(r) to be the functor f = fω . Remark. For the general case, where α is any weight in Λ(n, r) (and with no restriction on n, r) S(α) is isomorphic to the Hecke ring HK (G, Gα ) over K. We may follow Iwahori [25, p. 218] and define the Hecke ring H(G, H) for any subgroup H of any finite group G, as follows. H(G, H) has free Z-basis {χA }, where A runs over the set H\G/H of all double-cosets of H in G; the product of elements in this basis is given by
(6.1e) χA χB = ΣC∈H\G/H zA,B,C χC ,
where if γ is any fixed element of C, zA,B,C is the number of H-cosets Hπ in the set A−1 γ ∩ B. (For an explanation of this artificial-looking rule, see [25, §1].) Alternatively we may define H(G, H) to be the endomorphism ring of the subset [H]ZG of ZG, this subset being regarded as right ZG-module. In this interpretation χA becomes the ZG-endomorphism of [H]ZG which takes [H] to [A].
Returning now to our case G = G(r), H = Gα , we leave it as an exercise to prove that the K-linear map S(α) → HK (G, Gα ) given by ξiπ,i → χGα πGα for all π ∈ G, is an isomorphism of K-algebras.
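For instance, take n = r = 2, so that G = G(2) = {1, s} with s the transposition. For α = ω = (1, 1) we have u = (1, 2) and Gω = {1}, so S(ω) has K-basis {ξu,u , ξus,u } and is identified with the group algebra KG(2) by (6.1d). For α = (2, 0) the element i of (6.1b) is (1, 1), so Gα = G, there is only one double coset, and S(α) = K · ξi,i is one-dimensional; correspondingly HK (G, Gα ) = HK (G, G) ≅ K.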
6.2 General theory of the functor f : mod S → mod eSe
It soon becomes clear that many properties of Schur’s functor belong to a much more general context. Let S be any K-algebra (it does not need to be finite-dimensional) and let e ≠ 0 be any idempotent in S. We define a functor f : mod S → mod eSe as follows. If V ∈ mod S, clearly the subspace eV of V is an eSe-module, so we define f (V ) = eV ∈ mod eSe. If θ : V → V′ is a morphism in mod S, then we define f (θ) : eV → eV′ to be the restriction of θ; clearly f (θ) is an eSe-morphism. It is important to observe that f is an exact functor, in other words
(6.2a) Suppose 0 → V′ → V → V′′ → 0 is an exact sequence in mod S, then 0 → eV′ → eV → eV′′ → 0 is an exact sequence in mod eSe.
This is quite elementary. The next proposition, though easy and undoubtedly well known, does not seem to appear in the literature¹ (a special case is given by Curtis and Fossum [12, p. 402]. Much of the present section 6.2 appears, sometimes with different proofs, in the Ph.D. dissertation of T. Martins [41]).
(6.2b) If V ∈ mod S is irreducible, then eV is either zero or is an irreducible module in mod eSe.
Proof. Let W be any non-zero eSe-submodule of eV . Then W = eW , and also SW = SeW , which is a non-zero S-submodule of V , is equal to V . Hence eV = e(SeW ) = (eSe)W ⊆ W . This proves W = eV . Therefore if eV ≠ 0, then eV is an irreducible eSe-module. This proves (6.2b).
Now suppose V ∈ mod S, and define V(e) to be the sum of all the S-submodules V0 of V such that eV0 = 0—in other words, V(e) is the largest S-submodule of V which is contained in (1 − e)V . We also define a(V ) = V /V(e) . Then we can make a functor a : mod S → mod S; notice that if θ : V → V′ is a morphism in mod S, then θ maps V(e) into V′(e) , hence θ induces a well-defined map a(θ) : a(V ) → a(V′). The virtue of this functor is that it gets rid of the part of each module V which is annihilated by f , and does this without destroying anything in f (V ). Expressed precisely, we have
¹ Our functor is a special case of a functor described by M. Auslander in [2]; see p. 243. I am indebted to J. Alperin for this reference.
(6.2c) Let V ∈ mod S. Then the natural map αV : V → a(V ) = V /V(e) induces an isomorphism f (αV ) : f (V ) → f (a(V )).
Proof. Clearly f (αV ), which is just the restriction of αV to f (V ) = eV , is onto f (a(V )) = e · a(V ). And Ker f (αV ) = eV ∩ V(e) = 0 since V(e) ⊆ (1 − e)V . Thus f (αV ) is an isomorphism.
Our next objective is to define functors from mod eSe to mod S, which can serve, at least partially, as inverses to f . As first attempt we employ the definition h(W ) = Se ⊗eSe W, for W ∈ mod eSe. Since Se is a left S-module (it is a left ideal of S, of course) and also a right eSe-module, h(W ) is well-defined and is a left S-module. If ψ : W → W′ is a morphism in mod eSe, then h(ψ) = 1Se ⊗ ψ : h(W ) → h(W′) is a morphism in mod S. We get in this way a functor h : mod eSe → mod S. Moreover the next proposition shows that h is a “right-inverse” to f .
(6.2d) Let W ∈ mod eSe. Then e · h(W ) = e ⊗ W , and the map w → e ⊗ w (w ∈ W ) gives an eSe-isomorphism W ≅ e · h(W ) = f (h(W )).
Proof. e · h(W ) = e(Se ⊗eSe W ) = eSe ⊗eSe W = e ⊗ W , as stated. Thus the map defined above takes W onto e · h(W ); it is elementary to check that it is an eSe-map. To prove that it is injective, first notice that there is a well-defined map η : Se ⊗eSe W → W such that η(s ⊗ w) = esw, for all s ∈ Se, w ∈ W . Then if w ∈ W is such that e ⊗ w = 0, we get 0 = η(e ⊗ w) = w. This establishes the injectivity of the map w → e ⊗ w, and (6.2d) is proved.
The trouble with the functor h is that it usually takes an irreducible module W to a module h(W ) which is not irreducible. However we have
(6.2e) If W ∈ mod eSe is irreducible, then h(W )(e) is the unique maximal proper submodule of h(W ). Hence a(h(W )) is irreducible.
Proof. Write V = h(W ). Then by (6.2d) and (6.2c), f (a(V )) ≅ f (V ) ≅ W. Thus a(V ) ≠ 0, which shows that V(e) is a proper submodule of V . Now let V′ be any proper submodule of V . If eV′ ≠ 0 then eV′, being an eSe-submodule of the irreducible eSe-module eV = e · h(W ) (recall e · h(W ) = e ⊗ W ≅ W, by (6.2d)) is equal to e · h(W ). Then V′ ⊇ SeV′ = S(e ⊗ W ) = h(W ) = V , a contradiction. So eV′ = 0, i.e. V′ is contained in V(e) . This proves (6.2e).
Definition. Let h∗ denote the functor ah : mod eSe → mod S, so that h∗ (W ) = h(W )/h(W )(e) for all W ∈ mod eSe. By (6.2c) and (6.2d) this functor h∗ , like h, is a right inverse to f , i.e. f (h∗ (W )) ≅ W for all W ∈ mod eSe. By (6.2e) h∗ takes irreducibles to irreducibles. We have finally
(6.2f) If V ∈ mod S is irreducible and if eV ≠ 0, then h∗ (eV ) ≅ V.
Proof. There is an S-map β : h(eV ) = Se ⊗eSe eV → V , which takes s ⊗ ev to sev, for all s ∈ Se, v ∈ V . The image of β is SeV , which equals V because V is irreducible. So the kernel of β is a maximal proper submodule of h(eV ). But eV ∈ mod eSe is irreducible by (6.2b), hence the only maximal proper submodule of h(eV ) is h(eV )(e) , by (6.2e). Therefore β induces an isomorphism of h(eV )/h(eV )(e) = a(h(eV )) = h∗ (eV ) onto V .
Taking together all these facts, we arrive at our main theorem.
(6.2g) Theorem. Suppose { Vλ : λ ∈ Λ } is a full set of irreducible modules in mod S, indexed by a set Λ, and let Λ′ = {λ ∈ Λ : eVλ ≠ 0}. Then {eVλ : λ ∈ Λ′ } is a full set of irreducible modules in mod eSe. Moreover if λ ∈ Λ′ , then Vλ ≅ h∗ (eVλ ).
Remarks. 1. It is well-known that HomS (Se, V ) ≅ eV (as K-spaces), for any V ∈ mod S (see [11, p. 375]). Therefore if V is irreducible, eV ≠ 0 if and only if V is a homomorphic image of Se.
2. When we come to apply the Schur functor to the Carter-Lusztig modules Vλ,K , it will be useful to notice that if any V ∈ mod S has a unique maximal proper submodule V max , then eV max is either equal to eV (i.e. e(V /V max ) = 0) or else it is the unique maximal proper submodule of the eSe-module eV . The proof is easy.
3. In the same context we shall use the following: If ( , ) is a symmetric bilinear form on V such that (eV, (1 − e)V ) = 0, and if ( , )e denotes the restriction of this form to eV , then rad ( , )e = e · rad( , ). Again, the proof is an easy exercise.
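As a small illustration of these functors, let S be the algebra of upper triangular 2 × 2 matrices over K, with matrix units e11 , e12 , e22 , and take e = e22 , so that eSe = Ke22 ≅ K. The algebra S has exactly two irreducible modules, both one-dimensional: V1 , on which e11 acts as 1 and e12 , e22 act as 0, and V2 , on which e22 acts as 1 and e11 , e12 act as 0. Then eV1 = 0 and eV2 = V2 , so in the notation of (6.2g) Λ′ = {2}. For the unique irreducible eSe-module W = Ke22 we have h(W ) = Se ⊗eSe W ≅ Se = Ke12 + Ke22 , which is not irreducible: Ke12 is an S-submodule isomorphic to V1 , it is killed by e, and h(W )(e) = Ke12 . Hence h∗ (W ) = h(W )/Ke12 ≅ V2 , in agreement with (6.2e) and (6.2f).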
6.3 Application I. Specht modules and their duals
In this section we shall apply the general theory of 6.2 to the special case of the Schur functor f : MK (n, r) → mod KG(r). Here K is any infinite field, and n, r are fixed integers such that r ≤ n. We take S = SK (n, r) and e = ξω = ξu,u (see 6.1 for notation), and identify eSe with KG(r) by the isomorphism (6.1d), which takes ξuπ,u → π, for all π ∈ G = G(r). Notice that f (V ) = eV = V ω , for any V ∈ MK (n, r). Our aim is to calculate the effect of f on the modules Dλ,K , Vλ,K . Notice that the elements λ of Λ+ (n, r) are in one-to-one correspondence with the partitions λ of r (because r ≤ n). We shall write Λ = Λ+ (n, r), and think of Λ as the set of all partitions of r. From now on, λ is a fixed element of Λ. Recall (p. 35) that Dλ,K is the K-span of the elements
(Tl : Ti ) = Σσ∈C(T ) s(σ) cl,iσ , all i ∈ I(n, r).
We saw (p. 37) that these bideterminants (Tl : Ti ) all lie in the right λ-weight-space λAK (n, r) = AK (n, r) ◦ ξλ of AK (n, r). But by definition f (Dλ,K ) is the ω-weight-space (Dλ,K )ω of Dλ,K , and therefore lies in the (left) ω-weight-space
(6.3a) λAK (n, r)ω = ξω ◦ AK (n, r) ◦ ξλ
of λAK (n, r). Elementary calculations based on formulae (4.4a), (4.4a′), (6.3a) show that λAK (n, r)ω = Σπ∈G K · cl,uπ . Since cl,uπ = cl,uπ′ if and only if π′ ∈ πR(T ) (see 4.5), there is an isomorphism of K-spaces
(6.3b) λAK (n, r)ω → KG[R(T )]
which takes cl,uπ → π[R(T )], for all π ∈ G. Now KG[R(T )] is a left ideal of the group algebra KG, hence is a left KG-module. On the other hand λAK (n, r)ω , being the ω-weight-space of the left SK (n, r)-module λAK (n, r), becomes a left KG-module by means of (6.1d). To be explicit, the element τ ∈ G acts on the element cl,uπ to give τ cl,uπ = ξuτ,u ◦ cl,uπ , which by (4.4a) is equal to cl,uτ π . It follows at once that (6.3b) is a left KG-isomorphism. By (4.7a), p. 39, f (Dλ,K ) = (Dλ,K )ω has K-basis consisting of all (Tl : Ti ) such that i ∈ ω and Ti is standard. The elements i in ω can be written, uniquely, in the form i = uπ (π ∈ G). The isomorphism (6.3b) takes (Tl : Tuπ ) to
Σσ∈C(T ) s(σ) πσ [R(T )] = π {C(T )} [R(T )],
and so it takes f (Dλ,K ) to the left KG-submodule (left ideal) ST,K = KG {C(T )} [R(T )] of KG. We shall define ST,K to be the Specht module (over K) corresponding to the bijective λ-tableau T . (This is a little different from the original definition of Specht; for an explanation of the latter, and of the equivalence of the two definitions, see [45, p. 91].) We have now the
(6.3c) Theorem. The Specht module ST,K has K-basis consisting of the elements π {C(T )} [R(T )] such that Tuπ is standard. If char K = 0, then ST,K is an irreducible KG-module. If we choose for every λ ∈ Λ a bijective λ-tableau T λ , and write Sλ,K = ST λ ,K , then {Sλ,K : λ ∈ Λ} is a full set of irreducible KG-modules.
Proof. The first statement comes by applying the isomorphism (6.3b) to the basis of Dλ,K given by (4.7a), already quoted. If char K = 0, then each Dλ,K is irreducible by (4.7b), and since Dλ,K has character Sλ (by (4.7a)), { Dλ,K : λ ∈ Λ } is a full set of irreducible modules in MK (n, r). Then the last statement in (6.3c) follows at once from (6.2g), since in this present case f (Dλ,K ) ≅ Sλ,K is non-zero for all λ ∈ Λ.
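For example, if λ = (r), the basic λ-tableau T has a single row, so that C(T ) = {1} and R(T ) = G; here {C(T )} = 1 and [R(T )] = Σπ∈G π, so ST,K = KG[R(T )] = K · Σπ∈G π is the one-dimensional trivial KG-module. If λ = (1r ), then C(T ) = G and R(T ) = {1}, so ST,K = KG{C(T )} = K · Σπ∈G s(π)π, on which each π ∈ G acts as the scalar s(π); this is the one-dimensional alternating module. In each case exactly one of the tableaux Tuπ (π ∈ G) is standard, in agreement with the basis given in (6.3c).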
Now let’s look at the module Vλ,K . By definition (see (5.1b)) this is a subspace of E ⊗r , therefore f (Vλ,K ) = (Vλ,K )ω is a subspace of f (E ⊗r ) = (E ⊗r )ω . From the formula (2.6a) which gives the action of S = SK (n, r) on E ⊗r , we see that for any weight α ∈ Λ(n, r) (and with no restriction on n, r)
(E ⊗r )α = ξα E ⊗r = Σi∈α K · ei .
So in particular (E ⊗r )ω has K-basis { euπ : π ∈ G }. The structure of (E ⊗r )ω as left KG-module is given by τ euπ = ξuτ,u euπ = euτ π , for all τ, π ∈ G. Therefore there is a left KG-module isomorphism (6.3d)
(E ⊗r )ω → KG,
which takes euπ → π, for all π ∈ G. We know that Vλ,K has K-basis { buπ : π ∈ G, Tuπ standard }, by (5.3b) and (5.4a). Recall that, for any i ∈ I(n, r), bi = ξi,l fl = ξi,l el {C(T )} (see 5.3). If we put i = uπ in formula (5.3c) we get ξuπ,l el = eu π[R(T )]. This means buπ = eu π[R(T )]{C(T )}. This element is carried by the isomorphism (6.3d) to the element π[R(T )]{C(T )} of KG. Therefore Vλ,K is carried to the left KG-submodule (left ideal) S T,K = KG [R(T )] {C(T )} of KG. We have the following theorem, whose proof is entirely analogous to that of (6.3c). (6.3e) Theorem. The module S T,K defined above has K-basis consisting of the elements π [R(T )] {C(T )} such that Tuπ is standard. If char K = 0, then S T,K is an irreducible KG-module. If we choose for each λ ∈ Λ a bijective λ-tableau T λ , and write S λ,K = S T λ ,K , then {S λ,K : λ ∈ Λ} is a full set of irreducible KG-modules. The module S T,K is in fact the dual (in the usual sense) to the Specht module ST,K —this was first proved by G. D. James [26, p. 460]. We can give another proof: the modules Vλ,K , Dλ,K are dual to each other under ω ω , Dλ,K are dual the contravariant form ( , ) described in 5.1. Therefore Vλ,K to each other under the restriction of this form (see 3.3). The contravariant property (2.7d) gives (ξuπ,u v, d) = (v, ξu,uπ d) = (v, ξuπ−1 ,u d), ω ω , d ∈ Dλ,K . But this becomes (πv, d) = (v, π −1 d) when for all π ∈ G, v ∈ Vλ,K ω ω we regard Vλ,K , Dλ,K as KG-modules by means of (6.1d), and this shows that these KG-modules are dual to each other. Naturally we can transfer this form, by means of the isomorphisms (6.3b), (6.3d), to give an invariant form S T,K × ST,K → K. We leave it to the reader to do this, and also to apply the calculation given in 5.3 to exhibit the following explicit version of the invariant form in question.
60
6 Representation theory of the symmetric group
(6.3f ) Theorem. The KG-modules S T,K , ST,K are dual to each other. There is an invariant bilinear form ( , ) : S T,K × ST,K → K such that π [R(T )] {C(T )}, π {C(T )} [R(T )] = Ωπ,π for all π, π ∈ G, where Ωπ,π = s(σ), sum over all σ ∈ C(T ) such that the tableaux πT and π σT are row-equivalent. The matrix ( Ωπ,π : πT, π T standard ) is unipotent triangular, relative to a suitable ordering of the standard πT . Remarks. 1. Since u = (1, 2, . . . , r), the tableau Tu can be identified with the basic tableau T , and Tuπ with πT . 2. The matrix (Ωπ,π ) appearing in (6.3f) is just that part of the D´esarm´enien matrix (Ω(i, j)) corresponding to i, j ∈ ω ; in fact Ωπ,π = Ω(uπ, uπ ). 3. All the results in this section remain true when K is replaced by Z. For if we take K = Q, then we may check that the isomorphisms (6.3b), (6.3d) ω ω , Vλ,Z to ST,Z = ZG {C(T )} [R(T )], S T,Z = ZG [R(T )] {C(T )}, take Dλ,Z respectively. Therefore these last are Z-forms of the QG-modules ST,Q and S T,Q , respectively, with Z-bases { π {C(T )} [R(T )] : Tuπ standard } and { π [R(T )] {C(T )} : Tuπ standard }.
6.4 Application II. Irreducible KG(r)-modules, char K = p Throughout this section we assume that K has finite characteristic p, and that r, n are positive integers satisfying r ≤ n. In 5.4 we constructed a full set { Fλ,K : λ ∈ Λ+ (n, r) = Λ } of irreducible modules in MK (n, r). Apply the Schur functor f , and we have by (6.2g) the theorem (6.4a) Let Λ be the set of all partitions of r, and let Λ be the subset of Λ ω ω = 0. Then { Fλ,K : λ ∈ Λ } is a full set consisting of those λ such that Fλ,K of irreducible KG(r)-modules. Of course this still leaves open the crucial question: what is the set Λ ? The answer is contained in the next theorem. (6.4b) Theorem (Clausen2 , James3 ). The set Λ of (6.4a) consists of those partitions λ = (λ1 , λ2 , . . . , λr , 0, . . .) of r which are “column p-regular” i.e. for which all the integers λ1 − λ2 , λ2 − λ3 , . . . , λr lie between 0 and p − 1.
2 3
[8, Lemma 6.4, p. 184] [28, Theorem 3.2]
6.4 Application II. Irreducible KG(r)-modules, char K = p
61
Proof. It will be convenient to work, not with Fλ,K , but with the isomormin phic module Dλ,K (see (5.4c)). We denote this module by X. We must show that X ω = 0 if and only if λ is column p-regular. By (5.4d), X is generated as SK (n, r)-module by (Tl : Tl ). Therefore it is spanned as K-space by the elements ξi,j ◦ (Tl : Tl ), for all i, j ∈ I = I(n, r). By (4.4b) ξi,j ◦ (Tl : Tl ) = ξi,j (ch,l ) (Tl : Th ), h∈I
which is zero unless j ∼ l. Therefore X is K-spanned by the elements (6.4c) ξi,l ◦ (Tl : Tl ) = ξi,l (ch,l ) (Tl : Th ) = (Tl : Th ), all i ∈ I. h∈I
h∈iR(T ) α
The element (6.4c) lies in the α-weight-space X , where α is the weight containing i. So X ω is K-spanned by those elements (6.4c) such that i ∈ ω. If i ∈ ω, then G acts regularly on iG = ω i.e. iπ = iπ implies π = π , for all π, π ∈ G. In particular the elements iτ (τ ∈ R(T )) are all distinct. So (6.4d) If i ∈ ω, then ξi,l ◦ (Tl : Tl ) = τ ∈R(T ) (Tl : Tiτ ). Suppose that H is the group of all elements θ of R(T ) which preserve the set of columns of the basic λ-tableau T . Such a permutation θ can be specified by a sequence θ1 , θ2 , . . ., where for each q ≥ 1, θq is a permutation of the set Wq of all t ≥ 1 such that column t of T has length q. In the notation of (4.2a), θ maps x(s, t) to x(s, θq (t)), for all s ≥ 1, and all t ∈ Wq . Since |Wq | = λq −λq+1 , the order of H is (λ1 − λ2 )! (λ2 − λ3 )! · · · . Now it follows from the expression of (Tl : Ti ) as product of determinants (see 4.3) that (Tl : Ti ) = (Tl : Tiθ ), for all i ∈ I and all θ ∈ H. So by breaking up the sum in (6.4d) into H-orbits, we see that it is divisible by |H|. If λ is column p-singular, |H| is divisible by p, hence every term (6.4d) is zero, i.e. X ω = 0. This proves one half of (6.4b). To prove the other half, we assume that λ is column p-regular, and show that X ω = 0. For this it is enough to show that ξu,l ◦ (Tl : Tl ) = 0. By (6.4d) and (4.3a), ξu,l ◦ (Tl : Tl ) is equal to s(σ) clσ,uτ . (6.4e) σ∈C(T ) τ ∈R(T )
There is a unique element π ∈ C(T ) which reverses the order of the entries in each column of Tl , namely π : x(s, t) → x(q + 1 − s, t),
for s ≥ 1, and t ∈ Wq .
For example if λ = (7, 5, 2, 2) we have 1111111 22222 Tl = , 33 44
Tlπ
4422211 33111 = . 22 11
62
6 Representation theory of the symmetric group
We shall prove that the coefficient of clπ,u in (6.4e) is not zero. If σ ∈ C(T ) and τ ∈ R(T ) are such that clπ,u = clσ,uτ , then there is some γ ∈ G such that lπγ = lσ, uγ = uτ . This implies γ = τ , hence lπτ = lσ. Consider the maximum entry in Tl , say M (in the example, M = 4). In Tl , and hence also in Tlσ , all entries M are in the columns t ∈ WM . But in Tlπ , hence also in Tlπτ , all entries M are in the first row. Since Tlπτ = Tlσ , all entries M in Tlσ are in the same places as the entries M in Tlπ . Next we consider the places occupied by entries M − 1, M − 2, . . . in turn, and by arguments similar to that given conclude that Tlπ = Tlσ , hence that π = σ. The group of elements τ ∈ R(T ) satisfying lπτ = lπ clearly has order q
(λq − λq+1 )! , w= q≥1
which is prime to p. The argument just given shows that the coefficient of clπ,u in (6.4e) is s(π) w · 1K = 0, and so the proof of (6.4b) is complete. This theorem has some interesting consequences. First we need a lemma concerning the left ideal Sξω of S, which is also a right module for the algebra S(ω) = ξω Sξω , and hence, by (6.1d), a right KG-module. (6.4f ) Sξω has K-basis { ξi,u : i ∈ I(n, r) }. The K-isomorphism Sξω → E ⊗r given by ξi,u → ei for all i ∈ I(n, r) is a left S = SK (n, r)-map and a right KG-map. The proof of (6.4f) is routine. Now let V ∈ MK (n, r) be irreducible. By 6.2, remark 1, V ω = 0 if and only if V is a homomorphic image of Sξω , and hence of E ⊗r . But both E ⊗r and V are self-dual (by 2.7, Example 1, and (5.4c), proof). We have therefore (6.4g) If V ∈ MK (n, r) is irreducible, then V ω = 0 if and only if V is isomorphic to a submodule of E ⊗r . (Notice, we assume r ≤ n.) Corollary (James [28, Theorem 3.2]). Fλ,K is isomorphic to a submodule of E ⊗r if and only if λ is column p-regular. Next we have a theorem concerning the “dual” Specht module S T,K = KG [R(T )] {C(T )} of 6.3. In 5.5 was defined a contravariant form
, on the space E ⊗r {C(T )}. Restrict this to the ω-weight-space (E ⊗r )ω {C(T )}, and then transfer it to KG{C(T )} by means of the isomorphism (6.3d). The result is a symmetric, invariant form on KG{C(T )} which we denote by ( , ), and which is specified by the formula: if π, π ∈ G, then s(π −1 π ) if π −1 π ∈ C(T ), (6.4h) π {C(T )}, π {C(T )} = / C(T ). 0 if π −1 π ∈
6.4 Application II. Irreducible KG(r)-modules, char K = p
63
In 5.5 we considered the form obtained by restricting
, to Vλ,K , and showed (see (5.5c)) that the radical of this form is the unique maximal submax module Vλ,K of Vλ,K . It is a routine matter now to apply the Schur functor, and use remarks 2, 3 of 6.2 to prove the following. (6.4i) Let ( , ) be the invariant form on S T,K obtained by restricting the form given by (6.4h). Then ( , ) is non-zero on S T,K if and only if λ is column p-regular. If λ is column p-regular, then the radical of ( , ) is the max max unique maximal submodule S T,K of S T,K , and S T,K / S T,K ∼ = f (Fλ,K ). From (6.4i) we may deduce a well-known theorem of James (see (6.4k), below), by the following elementary device. Let β denote the K-algebra automorphism of KG given by β(π) = s(π) π, for all π ∈ G. Let Ks denote the field K, regarded as one-dimensional KG-module by the action πk = s(π) k, for π ∈ G, k ∈ K. Then if M is any left ideal of KG, β(M ) is also a left ideal of KG, and there is a KG-isomorphism β(M ) ∼ = M ⊗K Ks which takes m ⊗ 1K → β(m), for all m ∈ M . It is trivial to check that β maps {C(T )}, [R(T )] to [R(T )], {C(T )} respectively, where T is the λ tableau (λ is the partition of r conjugate to λ) obtained by “transposing” the λ-tableau T . The bilinear form (6.4h) on KG{C(T )} is translated by β to a symmetric, invariant bilinear form ( , ) on KG[R(T )] specified by the formula: if π, π ∈ G, then 1 if π −1 π ∈ R(T ), (6.4j) π [R(T )], π [R(T )] = / R(T ). 0 if π −1 π ∈ Moreover β(S T,K ) = ST ,K —which shows incidentally that ST ,K ∼ = S T,K ⊗K Ks —and so (6.4i) translates as follows. (6.4k) Theorem (James [27, Theorems 11.1, 11.5]). Let T be a bijective µ-tableau, where µ is a partition of r. Let ( , ) be the invariant form on ST ,K obtained by restricting the form given by (6.4j). Then ( , ) is nonzero on ST ,K if and only if µ is p-regular (by definition, µ is p-regular if µ is column p-regular). If µ is p-regular, then the radical of ( , ) is the unique max max maximal submodule ST ,K of ST ,K , and ST ,K /ST ,K ∼ = f (Fµ ,K ) ⊗K Ks . Remark. Comparison with the notation of James in [27], shows that the module Dµ [27, p. 39] is isomorphic to f (Fµ ,K ) ⊗K Ks . The module Dλ in [27, §1] is isomorphic to f (Fλ,K ). So the connection between James’s two families of irreducible KG-modules is (6.4l) Dλ ∼ = Dλ ⊗K Ks , for all column p-regular λ.
64
6 Representation theory of the symmetric group
The importance of James’s theorem, or of the equivalent theorem (6.4i), is that it gives a satisfactory “natural” labelling of the (isomorphism classes of) irreducible KG-modules: for each column p-regular λ, Dλ is isomorphic to the unique irreducible quotient of a dual Specht module S λ,K . We have also seen that Schur’s functor f gives an independent connection, Dλ ∼ = f (Vλ,K ). The Schur functor has a one-sided inverse, namely the functor h∗ defined in 6.2. It is interesting that h∗ is related to a construction used by James in [28]. First we re-define h, h∗ as functors from mod KG(r) → MK (n, r), using the isomorphism Sξω ∼ = E ⊗r of (6.4f), and the isomorphism ξω Sξω ∼ = KG(r) of (6.1d). This means that for each W ∈ mod KG(r) we define h(W ) = E ⊗r ⊗KG W,
h∗ (W ) = h(W )/h(W )(ω) ,
where for any V ∈ MK (n, r), V(ω) is the sum of all S-submodules U of V such that U ω = 0. Since the isomorphism (6.4f) takes ξω to eu , it follows from (6.2d) that h(W )ω = eu ⊗ W . (6.4m) Let W be any left ideal of KG, regarded as left KG-module. Define the S-map γ : h(W ) → E ⊗r W by γ(x ⊗ w) = xw, for all x ∈ E ⊗r , w ∈ W . Then Ker γ = h(W )(ω) . Hence γ induces an S-isomorphism h∗ (W ) ∼ = E ⊗r W . Proof. Suppose that V = Ker γ. Any element v of V ω can be written v = eu ⊗w for some w ∈ W , since V ω ⊆ h(W )ω = eu ⊗ W . But we have 0 = γ(v) = eu w. This implies w = 0, since the elements eu π = euπ (π ∈ G) form a K-basis of eu KG. Hence v = 0, i.e. V ⊆ h(W )(ω) . If γ(h(W )(ω) ) is not zero, it contains some irreducible submodule M . Since the ω-weight-space of h(W )(ω) is zero, the same must be true for M . But M , as irreducible submodule of E ⊗r , satisfies M ω = 0 by (6.4g). This contradiction implies h(W )(ω) ⊆ V , and the rest of the proof of (6.4m) is immediate. Since KG is a Frobenius algebra [11, p. 420], every irreducible left KG-module is isomorphic to some left ideal W of KG [11, p.417, (61.6)]. If we combine this with (6.4m) and (6.2g), we have a way of constructing some irreducible SK (n, r)-modules (this method is due to James [28]). Of course we can get in this way only modules isomorphic to Fλ,K for λ column p-regular. Example. W = K[G] is an ideal of KG, and as left KG-module W affords the trivial representation of G. By (6.4m), h∗ (W ) is isomorphic to the S-module E ⊗r W = E ⊗r [G]. If i ∈ α ∈ Λ(n, r), then ei [G] = |Gα | vα , where vα is the basis element of Vr,K described in 5.2, Example 1. So E ⊗r W is a submodule of Vr,K , and has weights α, for all α such that p does not divide |Gα | = α1 ! · · · αn !. So h∗ (W ) ∼ = Fλ,K , where λ is the highest such weight. It is easy to see that λ = (p − 1, . . . , p − 1, s, 0, . . .), where there are q terms p − 1, and the non-negative integers q, s are given by r = q(p − 1) + s.
6.5 Application III. The functor f : MK (N, r) → MK (n, r) (N ≥ n)
65
6.5 Application III. The functor f : MK (N, r) → MK (n, r) (N ≥ n) In this section we fix our infinite field K, and also the integer r ≥ 0. We consider connections between categories MK (n, r), as n varies. Suppose that N , n are positive integers such that N ≥ n. We shall produce a functor d = dN,n : MK (N, r) → MK (n, r), which can be viewed as special case of our general “mod S → mod eSe” functor of 6.2. The functor d gives a very easy way of passing from the irreducible modules in MK (N, r) to those in MK (n, r), and behaves in a very satisfactory way with regard to characters (see (6.5b)). It is based on a construction given by Schur in his dissertation [47, p. 61]. Since N ≥ n, we may regard I(n, r) as a subset of I(N, r), namely I(N, r) consists of all i = (i1 , . . . , ir ) with components i ∈ N , and I(n, r) is the subset of those i whose components all lie in n. With this convention, SK (n, r) can be regarded as a subalgebra of SK (N, r). For SK (N, r) has basis (6.5a)
{ ξi,j : i, j ∈ I(N, r) },
and we can identify SK (n, r) with the K-subspace spanned by those ξi,j for which i, j ∈ I(n, r). For the rule (2.3b) for multiplying two elements ξi,j , ξk,l of type (6.5a), has the consequence that if i, j, k, l ∈ I(n, r), then the coefficient of any ξp,q in the product ξi,j ξk,l is zero unless both p, q ∈ I(n, r), while if p, q do belong to I(n, r), then this coefficient is the same as it would be if ξi,j ξk,l were computed in SK (n, r). We define an injective map α → α∗ of Λ(n, r) into Λ(N, r) as follows. If α = (α1 , . . . , αn ) ∈ Λ(n, r), we define α∗ ∈ Λ(N, r) by α∗ = (α1 , . . . , αn , 0, . . . , 0). Then the image of Λ(n, r) under this map is the set Λ(n, r)∗ = { β ∈ Λ(N, r) : βn+1 = · · · = βN = 0 }. Notice that if i belongs to a weight β ∈ Λ(n, r)∗ , then i ∈ I(n, r). So another description of Λ(n, r)∗ is that it is the set of those G(r)-orbits of I(N, r) which lie in I(n, r). Now define the following element of SK (N, r): (6.5b) e= ξβ , sum over all β ∈ Λ(n, r)∗ . β
This is an idempotent of SK (N, r), and it is clear (using (2.3c)) that eξi,j = ξi,j or zero, according as i ∈ I(n, r) or not, and ξi,j e = ξi,j or zero, according as j ∈ I(n, r) or not. It follows at once that
66
6 Representation theory of the symmetric group
eSK (N, r)e = SK (n, r). From 6.2 (taking S = SK (N, r)) we have a functor d : MK (N, r) → MK (n, r) which takes each V ∈ MK (N, r) to eV ∈ MK (n, r). This functor has some agreeable properties, which we now describe. β V , summed over those β ∈ Λ(N, r) (6.5c) If V ∈ MK (N, r) then eV = which lie in Λ(n, r)∗ . Consequently the character ΦeV (which is a symmetric polynomial over Z in n variables X1 , . . . , Xn ) is related to ΦV (a polynomial in N variables X1 , . . . , XN ) by the formula ΦeV (X1 , . . . , Xn ) = ΦV (X1 , . . . , Xn , 0, . . . , 0). Proof. Suppose β ∈ Λ(N, r), then eV β = eξβ V = V β or zero, according as β ∈ Λ(n, r)∗ or not; the first statement of (6.5c) follows. The second statement comes from this, together with the definition of a character (see 3.4). Next we look to see how our functor behaves on the modules Dλ,K , Vλ,K and Fλ,K . For this we need a lemma. (6.5d) Lemma. Let λ ∈ Λ+ (N, r)\Λ(n, r)∗ and i ∈ I(n, r). Then the λ-tableau Ti is not standard. Proof. Clearly, λ = (λ1 , λ2 , . . . , λN ) satisfies λ1 ≥ λ2 ≥ · · · ≥ λN , and since λ ∈ / Λ(n, r)∗ we must have λn+1 = 0. So the λ-tableau Ti has at least n+1 places in its first column. But the entries in this column cannot all be distinct, since i ∈ I(n, r). Therefore Ti is not standard (see (4.5a)). (6.5e) Theorem. Let λ ∈ Λ+ (N, r), and let Xλ denote any one of Dλ,K , Vλ,K or Fλ,K . Then eXλ = 0 if and only if λ ∈ Λ(n, r)∗ . In other words eXλ = 0 if and only if λ has more than n non-zero parts. Proof. Suppose first that λ ∈ / Λ(n, r)∗ , and that β ∈ Λ(n, r)∗ . If Xλ = Dλ,K or Vλ,K , then the dimension of Xλβ is equal to the number of i ∈ β such that the λ-tableau Ti is standard ((4.5a), (5.3b)). Since i ∈ β implies i ∈ I(n, r), we deduce from (6.5d) that Xλβ = 0, and then from (6.5c) that eXλ = 0. Since Fλ,K is a factor module of Vλ,K , we have eFλ,K = 0 also. Conversely, suppose that λ ∈ Λ(n, r)∗ . For any of the modules Xλ in question, we know that Xλλ = 0, hence eXλ = 0 by (6.5c). This completes the proof of (6.5e). If λ ∈ Λ+ (N, r), then a necessary and sufficient condition that λ ∈ Λ(n, r)∗ is that λ = µ∗ for some µ ∈ Λ+ (n, r). However, we know from (5.4b) that { Fλ,K : λ ∈ Λ+ (N, r) } is a full set of irreducible modules in MK (N, r). So (6.2g) and the theorem just proved give the first statement in (6.5f) below.
6.6 Application IV. Some theorems on decomposition numbers
67
(6.5f ) {eFµ∗ ,K : µ ∈ Λ+ (n, r)} is a full set of irreducible modules in MK (n, r). In fact eFµ∗ ,K ∼ = Fµ,K (isomorphism in MK (n, r)). Hence there is the character formula, valid for all µ ∈ Λ+ (n, r), and for every characteristic p (including p = 0): Φµ,p (X1 , . . . , Xn ) = Φµ∗ ,p (X1 , . . . , Xn , 0, . . . , 0). Proof. The second and third statements follow from the fact that eFµ∗ ,K and Fµ,K are both irreducible modules in MK (n, r) having character with leading term X1µ1 · · · Xnµn (see remark (i), 3.2). Remarks. (i) A direct proof that eFµ∗ ,K ∼ = Fµ,K can be made along the following lines. Let E(n) denote a K-space with basis e1 , . . . , en , which we regard as subspace of a K-space E(N ) with basis e1 , . . . , eN . The inclusion E(n)⊗r ⊆ E(N )⊗r can be shown to take Vµ,K to eVµ∗ ,K , and to induce an isomorphism Fµ,K ∼ = eFµ∗ ,K . Similarly we may show ∼ ∗ that Dµ,K = eDµ ,K . (ii) If N ≥ n ≥ r, then the map µ → µ∗ gives a bijection from Λ+ (n, r) onto Λ+ (N, r). For any λ ∈ Λ+ (N, r), being a partition of r, can have at most r non-zero parts. Hence λ = µ∗ for some uniquely defined element µ ∈ Λ+ (n, r). Then it follows from (6.5f) that the functor dN,n : MK (N, r) → MK (n, r) induces a bijection between the sets of isomorphism classes of irreducible modules in these two categories. In fact we have the stronger result: (6.5g) If N ≥ n ≥ r, the functor d = dN,n defines an equivalence of categories. In particular MK (N, r) MK (r, r) for all N ≥ r. To prove (6.5g), we must produce a functor h : MK (n, r) → MK (N, r) such that hd, dh are naturally equivalent to the appropriate identity functors (see for example [10, p. 7]). We leave it to the reader to verify that we may take for h the functor described in 6.2, which in the present case takes each W ∈ MK (n, r) to h(W ) = SK (N, r)e ⊗SK (n,r) W. We might mention that (6.5g) has another formulation: If N ≥ n ≥ r, then the K-algebras SK (N, r) and SK (n, r) are Morita equivalent (see [10, p. 34]).
6.6 Application IV. Some theorems on decomposition numbers In this section we first extend our “mod S → mod eSe” theory of 6.2 to a “modular” context, and prove a general result of T. Martins [41] on decomposition
68
6 Representation theory of the symmetric group
numbers. Then we apply this to the modular reduction MQ (n, r) → MK (n, r) which was described in 2.5. We start with a piece of notation. Let { Vλ : λ ∈ Λ } be a full set of irreducible modules in mod S, where S is an algebra over any field. If V ∈ mod S and λ ∈ Λ, denote by nλ (V ) the composition multiplicity of Vλ in V . That means, nλ (V ) is the number of factors isomorphic to Vλ , in any composition series of V (6.6a)
V = V0 ⊃ V1 ⊃ · · · ⊃ Vl = 0.
Now let e be an idempotent of S, and define Λ , as in (6.2g), to be the set of those λ ∈ Λ such that eVλ = 0. (6.6b) Lemma. If λ ∈ Λ , then nλ (eV ) = nλ (V ), for any V ∈ mod S. Here nλ (eV ) is the composition multiplicity of eVλ in the eSe-module eV . Proof. From (6.6a) we get a series of eSe-modules eV = eV0 ⊇ eV1 ⊇ · · · ⊇ eVl = 0. By (6.2a), e(Vj−1 /Vj ) ∼ = eVj−1 /eVj (as eSe-modules), for j = 1, . . . , l. Removing those terms eVj−1 for which eVj−1 = eVj , i.e. for which Vj−1 /Vj is isomorphic to Vπ for some π ∈ Λ\Λ , we are left with a composition series for eV . The lemma follows at once. Now let C, K be fields, and R be a subring of C with the properties (i) R is a principal ideal domain (in particular, R contains 1C ), (ii) C is the field of fractions of R, and (iii) there is a ring-homomorphism π : R → K. Suppose SC is a C-algebra with finite basis {u1 , . . . , un }, such that the set SR = Ru1 ⊕ · · · ⊕ Rub contains the identity element of SC and is multiplicatively closed: thus SR is an “R-order” in SC . Then SK = SR ⊗ K is a Kalgebra with K-basis {u1 ⊗ 1K , . . . , un ⊗ 1K } (here and below, ⊗ means ⊗R , and K is regarded as R-module via π, i.e. r · k = π(r)k, for r ∈ R, k ∈ K). R. Brauer’s modular representation theory of algebras ([5]; see also [1, p. 111]) connects the categories mod SC and mod SK by the process of “modular reduction” (cf. 2.5), as follows. In each VC ∈ mod SC can be found an “R-form” or “admissible R-lattice” VR , i.e. (i) VR is the R-span of some C-basis of VC , and (ii) SR VR ⊆ VR (see [5, §6, p. 256], or [16, 48.1(iv), p. 299]). Then VK = VR ⊗ K can be regarded as left SK -module; VK is called a modular reduction of VC . If (6.6c)
{ Vλ,C : λ ∈ Λ },
{ Uδ,K : δ ∈ ∆ }
6.6 Application IV. Some theorems on decomposition numbers
69
are full sets of irreducible modules in mod SC , mod SK respectively, we define for each λ ∈ Λ, δ ∈ ∆ the decomposition number dλδ = dλδ (S) to be the composition multiplicity nδ (Vλ,K ) of Uδ,K in a modular reduction of Vλ,C . Let e = eR be an idempotent in the ring SR . Since e ∈ SC , we can apply the theory of 6.2. By (6.2g) we get a full set of irreducible modules in mod eSC e, namely { eVλ,C : λ ∈ Λ }, where Λ = { λ ∈ Λ : eVλ,C = 0 }. Similarly, eK = eR ⊗ 1K is an idempotent in the K-algebra SK = SR ⊗ K, and so we get a full set of irreducible modules in mod eK SK eK , namely { eK Uδ,K : δ ∈ ∆ }, where ∆ = { λ ∈ ∆ : eK Uδ,K = 0 }. Now eSR e is an R-order in the C-algebra eSC e—this is because, as Rmodule, eSR e is a direct summand of SR . For the same reason, we can identify eSR e⊗K with eK SK eK . Therefore we have a process of modular reduction from mod eSC e to mod eK SK eK ; let us denote the corresponding decomposition numbers by dλδ (eSe). Of course these are defined only for λ ∈ Λ , δ ∈ ∆ . The connection between these decomposition numbers, and the dλδ (S) defined previously, is very simple and satisfactory. (6.6d) Theorem (T. Martins [41]). Let λ ∈ Λ , δ ∈ ∆ . Then dλδ (S) = dλδ (eSe). Proof. Let VR be an R-form of VC = Vλ,C . Then eVR is a direct summand of VR = eVR ⊕ (1 − e)VR . This implies that eVR is an R-form of the eSC emodule eVC , and also that the eK SK eK -module eVR ⊗ K can be identified with eK VK , where VK = VR ⊗ K. By (6.6b) the composition multiplicity of eK Uδ,K in eK VK is the same as the composition multiplicity of Uδ,K in VK . This proves (6.6d). For our applications of (6.6d) we take C = Q (field of rational numbers), R = Z, K any infinite field of characteristic p > 0, and define π : Z → K by π(n) = n · 1K for all n ∈ Z. Fix n, r and let S = SQ (n, r), SZ = SZ (n, r). Identify SK with SK (n, r) by the isomorphism given in 2.3. Identify the categories mod SQ and mod SK with MQ (n, r) and MK (n, r) respectively, as in 2.4. Corresponding to the sets (6.6c) in the general case we take { Vλ,Q : λ ∈ Λ+ (n, r) },
{ Fλ,K : λ ∈ Λ+ (n, r) }.
(It happens in this case, these sets are indexed by the same set Λ+ (n, r).) Denote the decomposition numbers by dλµ = dλµ (GLn ). These are the same numbers which appear in the formulae dλµ Φλ,p (X1 , . . . , Xn ) Φλ,0 (X1 , . . . , Xn ) = µ∈Λ+ (n,r)
of 3.5, remark (1). (6.6e) Theorem. Suppose that N ≥ n, and let α → α∗ be the injective map from Λ(n, r) into Λ(N, r) given in 6.5. Then
70
6 Representation theory of the symmetric group
(i) dαβ (GLn ) = dα∗ β ∗ (GLN ), for any α, β ∈ Λ+ (n, r). (ii) dµν (GLN ) = 0, for any λ ∈ Λ+ (N, r)\Λ(n, r)∗ and µ ∈ Λ(n, r)∗ . Proof. (i) is a direct application of (6.6d). Take SQ = SQ (N, r), etc., and let e = eZ be the element of SQ defined as in (6.5b). Clearly e ∈ SZ , and eK = e ⊗ 1K is the element of SK defined by (6.5b). By (6.5e), the sets Λ , ∆ which appear in (6.6d) are both equal to Λ+ (N, r) ∩ Λ(n, r)∗ . So we take λ = α∗ , µ = β ∗ in (6.6d), and then use (6.5f). (ii) Suppose λ ∈ / Λ+ (n, r)∗ , then by (6.5e), eK Vλ,K = 0. By (6.6b) the composition multiplicity of Fµ,K in Vλ,K is the same as that of eFµ,K in eK Vλ,K , because µ ∈ Λ(n, r)∗ . In other words dλµ (GLN ) = 0, and the proof of (6.6e) is complete. Remark. Part (i) of this theorem shows that, with fixed r and K, the decomposition numbers dλµ (GLn ) for all n are contained in the matrix (6.6f )
(dλµ (GLr ))λ,µ∈Λ(r) .
Here Λ(r) = Λ+ (r, r), which can be identified with the set of all partitions λ = (λ1 , . . . , λr ) of r. Assume first n = r in (6.6e). Then N ≥ r, and the map α → α∗ induces a bijection of Λ(r) onto Λ+ (N, r). So (6.6e)(i) shows that the decomposition matrix for GLN is identical (up to this bijection) with (6.6f). Next take N = r. Then n ≤ r and the map α → α∗ takes Λ+ (n, r) bijectively onto the set of those λ ∈ Λ(r) which have not more than n non-zero parts. So the decomposition matrix for GLn is identical (up to this bijection) with the submatrix of (6.6f) obtained by repressing all rows and columns which refer to partitions having more than n parts. Theorem (6.6d) gives a simple proof of a theorem of James, which shows that the matrix of decomposition numbers for the symmetric group G(r) is also a submatrix of (6.6f) — in this case we merely suppress those columns of (6.6f) which refer to partitions which are column p-singular. To see this, recall that we have full sets of irreducible QG(r)-, KG(r)-modules { S λ,Q : λ ∈ Λ(r) },
{ Dδ : δ ∈ Λ(p) (r) },
respectively. Here Λ(p) (r) denotes the set of all column p-regular partitions of r. Let dλδ (G(r)) denote the composition multiplicity of Dδ in S λ,K . Now we may apply (6.6d) with S = SQ (n, r) (r ≤ n), etc., e = ξω ∈ SQ , eK = ξω ∈ SK . We have Sλ,Q ∼ = eVλ,Q and Dδ ∼ = eK Fδ,K (see (6.3e), (6.4l) and the remarks preceding each). The result is as follows. (6.6g) Theorem (James [28, Theorem 3.4]). If r ≤ n, then dλδ (GLn ) = dλδ (G(r)), for all λ ∈ Λ(r) and δ ∈ Λ
(p)
(r).
Tables showing the matrix (6.6f) for 2 ≤ r ≤ 6, and char K = 2, 3 are given in James’s article [28].
Appendix on Schensted correspondence and Littelmann paths by K. Erdmann, J.A. Green and M. Schocker
A Introduction
A.1 Preamble These lectures describe some combinatorial properties of the set I(n, r) of all “words” i1 i2 . . . ir of length r, whose “letters” i1 , i2 , . . . , ir are drawn from the “alphabet” n = {1, . . . , n}. Clearly I(n, r) is a finite set, with nr elements. Let Λ(n, r) be the set of all vectors β = (β1 , . . . , βn ) whose coefficients are non-negative integers satisfying ν∈n βν = r. The elements β ∈ Λ(n, r) are sometimes called weights (see section 3.1). Let Λ+ (n, r) be the subset of Λ(n, r) consisting of all β which are dominant, i.e. which satisfy β1 ≥ · · · ≥ βn (≥ 0). A dominant weight in this sense is often referred to as a partition of r with no more than n parts. Example. Λ+ (2, 4) = {(4, 0), (3, 1), (2, 2)}. The set I(n, r) plays a humble rˆ ole in the representation theory of the general linear group GL(n, K) (see section 2.6), because it indexes the basis { vi = vi1 ⊗ · · · ⊗ vir : i ∈ I(n, r)} of the r-fold tensor power V ⊗r of a vector space V of dimension n, with respect to a given basis {v1 , . . . , vn } of V . But the present work is not based on linear algebra. We shall see that I(n, r) has a rich combinatorial structure in its own right, based on two operations which may be performed on any word i ∈ I(n, r); namely (A.1a) the Robinson–Schensted algorithm, and (A.1b) the application of maps e˜c , f˜c which are essentially Littelmann’s “root operators” eα , fα (see [35] and (A.3g)(2)). Peter Littelmann uses the root operators as foundation of a remarkable theory [35], sometimes called the “path model” of the classical representation theory of GLn ; this is more combinatorial, and simpler in some ways, than the classical theory. Our work is an attempt to understand this “protorepresentation theory” of GLn .
74
A Introduction
A striking feature of Littelmann’s theory is that it applies to arbitrary complex, symmetrizable Kac-Moody algebras. Our work, which applies only to sln , is therefore restricted to the special case of algebras of type An−1 . But there is some advantage in this restriction; Littelmann’s “paths” become “words”, and we may work in the familiar combinatorial context of this set of lecture notes. (A.1a) and (A.1b) will be described briefly in §A.2, §A.3, and discussed in more detail later.
A.2 The Robinson-Schensted algorithm This algorithm (henceforth referred to as the Schensted process) turns a word i ∈ I(n, r) into a triple (λ(i), P (i), Q(i)), where (A.2a) λ(i) = (λ1 (i), . . . , λn (i)) is a dominant weight; i.e. λ(i) is a partition of r into at most n parts, (A.2b) P (i) is a standard tableau of “shape” λ(i) (see section 4.2 and (4.5a)). The entries in the tableau P (i) are the letters i1 , i2 , . . . , ir in the word i, permuted in such a way that P (i) is standard, i.e. so that the entries in each row of P (i) are weakly increasing (≤) from left to right, and the entries in each column are strictly increasing (<) from top to bottom. (A.2c) Q(i) is a standard tableau of “shape” λ(i), whose entries are the integers 1, . . . , r permuted in such a way that Q(i) is standard1 . Schensted calls P (i) and Q(i) the P -symbol and the Q-symbol (respectively) of the word i (see [46, p. 181]). Schensted’s rules which define λ(i), P (i) and Q(i) will be given in §B.2. But it may be useful to look at the special case r = 2, where λ(i), P (i) and Q(i) are easy to describe. Take r = 2, and n any integer ≥ 2. A typical word in I(n, 2) is i1 i2 . The only dominant weights are (2, 0, 0, . . . , 0) and (1, 1, 0, . . . , 0). The only values 1 for Q(i) are the tableaux 1 2 and . 2 Table A.1 below (which is made using the rules in §B.2; see (B.4b)) produces a partition I(n, 2) = I 1 2 ∪˙ I 1 , where 2
(A.2d) I 1 2 = { i ∈ I(n, 2) : i1 ≤ i2 } and I 1 = { i ∈ I(n, 2) : i1 > i2 }. 2
1
Standard tableaux were first defined by A. Young [59] in his representation theory of the symmetric group Sym{1, . . . , r}. For this reason, standard tableaux are often called Young tableaux, or generalized Young tableaux [34].
A.3 The operators e˜c , f˜c
75
From this table we see that the set I 1 2 is the set of all i ∈ I(n, 2) such 1 that Q(i) = 1 2 , and I 1 is the set of all i ∈ I(n, 2) such that Q(i) = . 2 2 These sets are therefore the equivalence classes for the equivalence ≈ which we shall define in §A.4. The general case will be discussed in §C.1.
i
λ(i)
P (i)
Q(i)
i1 ≤ i2
(2, 0, 0, . . . , 0)
i1 i2
1 2
i1 > i2
(1, 1, 0, . . . , 0)
i2 i1
1 2
Table A.1. The Schensted process in case n ≥ r = 2.
A.3 The operators ˜ ec , ˜ fc Let a, b ∈ n, a = b, and let αa,b = (0, . . . , 0, 1, 0, . . . , 0, −1, 0, . . . , 0) denote the element of Zn which has 1, −1 at the places a, b respectively, and zero at all other places. These n(n − 1) vectors are called the roots of a system of type An−1 . Define Σ = {α1,2 , α2,3 , . . . , αn−1,n }. This is a subset of the set of all roots; its elements are called the simple roots.2 Choose an element c ∈ {1, 2, . . . , n−1}. To define Littelmann’s operators e˜c and f˜c we need some preliminary definitions. • Define the map ω = ωc,c+1 : n → Z by the rule ω(ν) = 1, −1 or zero, according as ν = c, ν = c + 1, or ν ∈ / {c, c + 1}. • Define the map hic : {0, 1, . . . , r} → Z by the rule: (A.3a) hic (0) = 0, and hic (t) = ω(i1 ) + · · · + ω(it ) for all t ∈ {1, . . . , r}. This means for any t ∈ {1, . . . , r}, (A.3b) hic (t) is the number of c’s in the initial segment i1 i2 . . . it of the word i, minus the number of c + 1’s in this segment.3 • Next let M = Mci denote the largest of the integers hic (0), hic (1), . . . , hic (r). Notice that Mci is always ≥ 0 since hic (0) = 0. 2
To read this Appendix, it is not necessary to know the theory of roots and root systems! 3 i hc is sometimes called the height function.
76
A Introduction
• There may be several values of t ∈ {0, 1, . . . , r} such that hic (t) = Mci ; let q = qci be the least of these values, and let q = q ci be the greatest. (A.3c) Lemma. (i) If q = 0, then iq = c. (ii) If q = r, then iq+1 = c + 1. Proof. (i) Suppose q = 0. We know that hic (q) = M . Let µ = hic (q − 1). By (A.3a) M = hic (q) = µ + ω(iq ). The possible values for ω(iq ) are 1, −1 and 0. But if ω(iq ) = −1 then M = µ − 1, hence µ > M against the definition of M . If ω(iq ) = 0, then µ = M , against the definition of q, which says that q is the least value of t for which hic (t) = M . Hence ω(iq ) = 1, which implies that iq = c. The proof of (ii) is similar, and is left to the reader. (A.3d) Definition (see [35, §1]). With the notation given above, define maps e˜c , f˜c : I(n, r) → I(n, r) ∪ {∞} as follows. (A.3e) If M i = 0, define f˜c (i) = ∞ (or say “f˜c (i) is undefined”). If M i = 0, define f˜c (i) to be the word s ∈ I(n, r) given by st = it if t = q, and sq = c + 1. (A.3f ) If M i = hic (r), define e˜c (i) = ∞ (or say “˜ ec (i) is undefined”). If M i = hic (r), define e˜c (i) to be the word s ∈ I(n, r) given by st = it if t = q + 1, and sq+1 = c. (A.3g) Remarks. (1) We have labelled these operators with the index c, rather than with the corresponding simple root α = αc,c+1 . (2) Let B : I(n, r) → I(n, r) be the operator which turns each word i1 i2 . . . ir into its “reverse” ir ir−1 . . . i2 i1 . Then the maps just defined are related to Littelmann’s “root operators” fα , eα (see [35, §1]) as follows: f˜c = Bfα B, e˜c = Beα B. (3) Let i ∈ I(n, r). Then each of f˜c , e˜c takes i either to ∞, or to a word which is identical to i except at one place. At this “critical place”, f˜c (i) changes the entry from c to c + 1, and e˜c (i) changes the entry from c + 1 to c (see (A.3c)). (4) The weight wt(i) of a word i ∈ I(n, r) is the vector β ∈ Zn defined as follows: for each ν ∈ n, βν is the number of places ∈ r for which i = ν (see section 3.1). Then (3) shows that wt(f˜c (i)) = wt(i) − αc,c+1 , if f˜c (i) = ∞. Similarly wt(˜ ec (i)) = wt(i) + αc,c+1 , if e˜c (i) = ∞. ˜ (5) The maps fc , e˜c are “inverse” to each other in the sense: if f˜c (i) = ∞, then e˜c f˜c (i) = i, while if e˜c (i) = ∞, then f˜c e˜c (i) = i. (6) Concatenation. If i ∈ I(n, r) and j ∈ I(n, s), define the concatenation of i and j to be the word i | j = (i1 , . . . , ir , j1 , . . . , js ) ∈ I(n, r + s). Then for any c ∈ {1, . . . , n − 1} we have f˜c (i) | j if Mci ≥ hic (r) + Mcj , and f˜c (i | j) = i | f˜c (j) if Mci < hic (r) + Mcj ,
A.3 The operators e˜c , f˜c
and e˜c (i | j) =
77
e˜c (i) | j if Mci > hic (r) + Mcj , and i | e˜c (j) if Mci ≤ hic (r) + Mcj .
All of the statements in (A.3g) are due (and in much greater generality) to Littelmann; see [35, §2]. However these statements are also easily verified directly from the definitions above. As an example, we prove the second of the two statements in (A.3g)(5), namely (5*) If i ∈ I(n, r) and c ∈ {1, . . . , n−1} such that e˜c (i) = ∞, then f˜c e˜c (i) = i. Proof. To calculate e˜c (i), we first calculate the height function hic . This function was defined in (A.3a): hic (0) = 0, and hic (t) = ω(i1 ) + · · · + ω(it ) for all t ∈ {1, . . . , r}; it is given as the third line of table A.2 below. Let M = Mci ; recall the definition of q = q ci (see (A.3b) and (A.3c)), and notice that in our case q < r, because e˜c (i) = ∞. By (A.3c)(ii), iq+1 = c + 1. In the fourth row of table A.2 are inequalities (e.g. hic (t) ≤ M ) which, taken together, express that q is the largest value of t such that hic (t) = M .
t
0
it hic (t)
0
1
2
···
q = q ci
q+1
···
r
i1
i2
···
iq
iq+1 = c + 1
···
ir
ω(i1 )
ω(i1 ) + ω(i2 )
···
M = Mci
M −1
···
hic (r)
≤M st hsc (t) f˜c (s)t
i1 0
M i2
···
iq
<M +1 i1
i2
···
<M c M +1
iq
c+1
···
ir
≤M +1 ···
ir
Table A.2. The height functions of i and s = e˜c (i).
According to definition (A.3f), the word s = e˜c (i) coincides with i except at the place q + 1. At this place iq+1 = c + 1, and sq+1 = c. To calculate f˜c (s), we must know the function hsc . Clearly hsc (t) = hic (t) for all t ∈ {0, . . . , q}, and it is easy to see that hsc (t) = hic (t) + 2 for all t ∈ {q + 1, . . . , r}. Now hic (q+1) = hic (q)+ω(iq+1 ) = M −1, hence hsc (q+1) = M +1. We may now check the inequalities in the penultimate line of table A.2. These show ec (i)) = i, as that Mcs = M + 1 > 0, and that qcs = q + 1. But then f˜c (s) = f˜c (˜ required.
78
A Introduction
A.4 What is to be done The Schensted process associates to each i ∈ I(n, r) a triple (λ(i), P (i), Q(i)). In §C.1 we shall define two equivalences on the set I(n, r). Definitions. (A.4a) i ∼ j means that P (i) = P (j), and (A.4b) i ≈ j means that Q(i) = Q(j). Donald Knuth [34] introduced the relation ∼, and proved that ∼ is the equivalence on I(n, r) generated by a collection of basic moves i → j; see §C.3. The main result of these lectures is the following analogue to Knuth’s theorem: (A.4c) Theorem A. Let i, j ∈ I(n, r). Then i ≈ j if and only if there is a finite sequence of words (elements of I(n, r)): i(1), i(2), . . . , i(s) such that i(1) = i, i(s) = j and for each adjacent pair i(ν), i(ν + 1) either there exists an element c ∈ {1, . . . , n − 1} such that f˜c (i(ν)) = i(ν + 1), or there exists an element c ∈ {1, . . . , n − 1} such that e˜c (i(ν)) = i(ν + 1). Expressed less formally, Theorem A says that ≈ is the equivalence relation on I(n, r) generated by a collection of basic moves i ⇒ j, where i ⇒ j means that e˜c (i) = j for some c ∈ {1, 2, . . . , n − 1}, or that f˜c (i) = j. Theorem A will be proved in §D.2. Chapter D also contains some notes on the representation theory of the “Littelmann algebra” L(n, r), which is an analogue of the Schur algebra S(n, r). The proof of Theorem A depends on the following (A.4d) Proposition B. Let c ∈ {1, 2, . . . , n−1}. Then the operation f˜c commutes with the Schensted process, in the following sense: (A.4e) KP (f˜c (i)) = f˜c (KP (i)) for all i ∈ I(n, r). To explain the symbols KP which appear in (A.4e), we need the Definition. Suppose given a standard tableau
P =
y1,1
y1,2
···
···
y2,1
y2,2
···
y2,λ2
···
ym,λm
.. . ym,1
y1,λ1
A.4 What is to be done
79
of shape λ ∈ Λ+ (n, r), with all its entries ya,b ∈ n. Define the “Knuth unwinding” of P to be the following word: KP = ym,1 · · · ym,λm ym−1,1 · · · ym,λm−1 · · · y1,1 · · · y1,λ1 ; see §C.2, or [34, page 173]. This is an element of I(n, r), therefore we can apply the operators f˜c , e˜c to it, and (A.4e) makes sense4 . Proposition B will be proved in §C.4.
4 Some authors identify the tableau P with the word KP , but to be cautious, we shall not make this identification in this Appendix.
B The Schensted Process
B.1 Notations for tableaux Choose a dominant weight λ ∈ Λ+ (n, r). The shape of λ is the following subset of Z × Z (see, for example, section 4.2): (B.1a) [λ] := { (a, b) : 1 ≤ a ≤ n, 1 ≤ b ≤ λa }. A λ-tableau, or a tableau of shape λ, is a map U : [λ] → Z. We may think of U as a “partial matrix”, with U ((a, b)) as the entry in U at the place (a, b). These entries are elements of Z, and U has entries only at the places (a, b) ∈ [λ]. The entry U ((a, b)) is often denoted ua,b . We usually assume that a tableau U = (ua,b )(a,b)∈[λ] is standard ; i.e. that each row (a) is weakly increasing from left to right: ua,1 ≤ ua,2 ≤ · · · ≤ ua,λa , and that each column (b) is strictly increasing from top to bottom: u1,b < u2,b < · · · < uβb ,b (βb is the length of column (b)). Notice that the latter condition implies that if the entries ua,b of a tableau U all lie in n, then the number of rows of U cannot exceed n.
B.2 The map Sch : I(n, r) → T(n, r) Let T (n, r) be the set of all triples (λ, P, Q) such that λ ∈ Λ+ (n, r), P is a λ-tableau whose entries are drawn from the set n = {1, 2, . . . , n}, and Q is a λ-tableau whose entries are 1, 2, . . . , r in some order. (The r entries of Q are distinct; the r entries of P may include repetitions.) The subject of this chapter is Schensted’s map [46, pp. 180–181] (B.2a) Sch : I(n, r) → T (n, r). We shall define Sch in §B.3, and prove in §B.6 that it is bijective [46, p. 182]. The map Sch is defined by induction on r. For r = 1, let
82
B The Schensted Process
(B.2b) Sch(i) = (1, 0, . . . , 0), i1 , 1 for any one-letter word i = i1 in I(n, 1). Note that i1 , 1 are tableaux of shape (1, 0, . . . , 0), which means that each can be regarded as a 1 × 1 matrix. From now on we assume r > 1. Insertion. Fundamental for Schensted’s work [46] is a process (or algorithm) which “inserts” a given element x1 of n into a given tableau U . The result of this process is a tableau U ← x1 whose entries are the entries of U (although perhaps in a different order), together with one extra entry x1 . Example. Using the methods to be explained in §B.4, we shall show that if 1 1 1 2 U= and x1 = 1, then U ← x1 is the tableau 2 (see (B.4b)). 4 4 The insertion process will be described in the next section; see (B.3b) and (B.3d). Now suppose U is the middle term of an element (µ, U, V ) of T (n, r − 1). As soon as we have calculated P = U ← x1 , we shall be able (see (B.3e)) to construct a dominant weight λ ∈ Λ+ (n, r), and also a λ-tableau Q, such that (λ, P, Q) is an element of T (n, r). We shall denote this element (µ, U, V ) ← x1 . See also [46, p. 181] and [34, pp. 712, 713]. The process provides the inductive step needed to define Sch(i) for any i = i1 i2 . . . ir−1 ir ∈ I(n, r) (r > 1), namely (B.2c) Definition. Sch(i) := Sch(i ) ← ir , where i = i1 i2 . . . ir−1 . Therefore we have a formula (which can also be used as a definition of Sch(i), see [46, p. 181]). (B.2d) Formula. Sch(i) := (· · · ((Sch(i1 ) ← i2 ) ← i3 )
···
) ← ir .
B.3 Inserting a letter into a tableau Suppose we have (1) an element (µ, U, V ) of T (n, r − 1), where r > 1, and (2) an element x1 of n. In this section we define the element (µ, U, V ) ← x1 of T (n, r). To do this, we first define the tableau U ← x1 . Schensted does this by modifying U row by row. For this purpose, it is convenient1 to supplement each row (a) of U with two “virtual entries” ua,0 = 0 and ua,µa +1 = ∞. Note that (a, 0) and (a, µa + 1) are not elements of [µ], and therefore ua,0 and ua,µa +1 are not true entries in row (a). Take any y ∈ n. Even though y may not be equal to any of the entries ua,k of row (a), we may “position” y into row (a), using the following elementary lemma. 1
See [34, p. 711].
B.3 Inserting a letter into a tableau
83
(B.3a) Lemma. For any y ∈ n and any a ∈ n, there is a unique element k(a) = k(a, y) in {1, 2, . . . , µa , µa + 1} such that ua,k(a,y)−1 ≤ y < ua,k(a,y) . Proof. Define k(a, y) to be the smallest k ∈ {1, 2, . . . , µa , µa + 1} such that y < ua,k . Example. Suppose µa = 5, and that row (a) (including the virtual entries) is (0) 2 2 2 3 7 (∞) . If y = 2, then k(a) = 4, because ua,3 ≤ 2 < ua,4 . If y = 1, then k(a) = 1, because ua,0 ≤ 1 < ua,1 . If y = 4, then k(a) = 5, because ua,4 ≤ 4 < ua,5 . The situation k(a, y) = µa + 1 = 6 occurs if and only if ua,5 ≤ y < ∞, that is, if and only if y ≥ 7. The insertion sequence. Let µ ∈ Λ+ (n, r), let U be a µ-tableau whose entries all lie in n, and let x1 ∈ n. In order to define P := U ← x1 first make the “insertion sequence” (B.3b) x1 , k(1), x2 , k(2), ..., xz , k(z), which contains all the data needed to construct U ← x1 . Definition of the insertion sequence. Step 1. • x1 is the given element of n. • k(1) is the smallest k ∈ {1, . . . , µ1 , µ1 + 1} such that x1 < u1,k . Equivalently, k(1) is the unique element of {1, . . . , µ1 , µ1 + 1} such that u1,k(1)−1 ≤ x1 < u1,k(1) . The case k(1) = µ1 + 1 occurs if and only if u1,µ(1) ≤ x1 (< x1,µ(1)+1 = ∞), i.e. if and only if x1 ≥ u1,µ1 (hence x1 is ≥ all entries in row (1) of U ). • If k(1) = µ1 + 1, the sequence is ended. Step 2. Now assume that k(1) = µ1 + 1. Then continue the definition of the insertion sequence. • x2 := u1,k(1) . • k(2) is the smallest k ∈ {1, . . . , µ2 , µ2 + 1} such that x2 < u2,k . Equivalently, k(2) is the unique element of {1, . . . , µ2 , µ2 + 1} such that u2,k(2)−1 ≤ x2 < u2,k(2) . The case k(2) = µ2 + 1 occurs if and only if x2 ≥ u2,µ(2) . • If k(2) = µ2 + 1, the sequence is ended. Step 3. Now assume that k(2) = µ2 + 1. Then continue • x3 := u2,k(2) , etc. Inductive Step. The general step is as follows: after xa−1 (:= ua−2,k(a−2) ) and k(a − 1) have been defined, then • If k(a − 1) = µa−1 + 1, the sequence is ended. Now assume that k(a − 1) = µa−1 + 1, and proceed to define
84
B The Schensted Process
• xa := ua−1,k(a−1) , • k(a) is the least k ∈ {1, . . . , µa , µa + 1} such that xa < ua,k . Equivalently, k(a) is the unique element of {1, . . . , µa , µa + 1} such that (B.3c) ua,k(a)−1 ≤ xa < ua,k(a) . Definition of z. For each a such that k(a − 1) = µa−1 + 1, (B.3c) shows that xa < ua,k(a) = xa+1 . Therefore the sequence x1 < x2 < · · · is finite (x1 , x2 , . . . are all elements of n). Define z to be the largest element of n such that k(z − 1) = µz−1 + 1. Then we must have k(z) = µz + 1 (otherwise we could go on to define k(z + 1)), and uz,µz ≤ xz (< ∞), i.e. xz is ≥ every entry in row (z) of U . (B.3d) Definition of U ← x1 . Let λ = µ + εz , where εz is the n-vector with 1 in place z, and zero at all other places. We shall show in (B.5b) that λ ∈ Λ+ (n, r). Define U ← x1 to be the λ-tableau P = (pa,b )(a,b)∈[λ] whose entry pa,b is identical with the corresponding entry ua,b of U , except 1◦ at the places (a, k(a)) for a = 1, 2, . . . , z − 1. At these places we define pa,k(a) = xa (whereas ua,k(a) = xa+1 ), and 2◦ at place (z, µz + 1), where U has no entry, we define P to have entry pz,µz +1 = xz . The shape of P is λ = µ + εz = (µ1 , . . . , µz−1 , µz + 1, µz+1 , . . . , µn ), because row (a) of P has the same length as row (a) of U , for all a = z, while the length of row (z) of P is one more than the length of row (z) of U . Note that, for all a > z, the row (a) of P is identical to row (a) of U . (B.3e) Definition of (µ, U, V) ← x1 . Let (µ, U, V ) ∈ T (n, r − 1), and let x1 be an element of n. Let P be the tableau U ← x1 defined in (B.3d). Let λ be the weight µ + εz . Define Q by enlarging the µ-tableau V , giving it a new entry r in place (z, µz + 1). Then (µ, U, V ) ← x1 is by definition the triple (λ, P, Q). (B.3f ) Exercise. Prove that k(1) ≥ k(2) ≥ · · · ≥ k(z) in any case. [Hint. Let a ∈ {2, . . . , z}. We must prove that k(a) ≤ k(a − 1). By definition k(a) lies in {1, . . . , µa + 1}, therefore k(a) ≤ µa + 1, which is ≤ k(a − 1) if k(a − 1) ≥ µa + 1. But if k(a − 1) < µa + 1, i.e. k(a − 1) ≤ µa , then there exists an entry ua,k(a−1) in row (a) of U . Column standardness of U shows that ua,k(a−1) > ua−1,k(a−1) . Therefore xa = ua−1,k(a−1) < ua,k(a−1) . But k(a) is the least k ∈ {1, . . . , µa + 1} such that xa < ua,k . It follows that k(a) ≤ k(a − 1).] Note. We have not yet proved that the triple (λ, P, Q) belongs to T (n, r). For this we must show that λ ∈ Λ+ (n, r), and that P , Q are standard. These things will be proved in §B.5, but we first look at some examples.
B.4 Examples of the Schensted process
85
B.4 Examples of the Schensted process The basic operation for the Schensted process is the insertion of a letter into a tableau. So suppose r > 1, let µ be an element of Λ+ (n, r − 1), let U be a µ-tableau, and let x1 be any element of n. We want to find the tableau P = U ← x1 . The tableau P = U ← x1 (see (B.3d)) can be made by modifying the rows (1), (2), . . . of U , in turn. First “position” x1 (which may or may not be equal to one of the entries of U ) into row (1) of U (see Lemma (B.3a)). This means, find the (unique) element k(1) such that u1,k(1)−1 ≤ x1 < u1,k(1) . Assume that k(1) = µ1 + 1. Let x2 := u1,k(1) . Now let x1 “bump”2 x2 into row (2), which means: (i) change the entry x2 = u1,k(1) in place (1, k(1)) to x1 = p1,k(1) , and then (ii) “position” x2 into row (2), that is: find the unique index k(2) such that u2,k(2)−1 ≤ x2 < u2,k(2) . Then row (1) of U , changed by (i), is row (1) of P . Now we are ready to change row (2) of U into row (2) of P . In general, when row (a − 1) of U has been changed into row (a − 1) of P , we define xa := ka−1,k(a−1) and “bump” xa into row (a). This process goes on until we reach row (z), where k(z) = µz + 1. Then row (z) of P is made by adjoining an entry xz to row (z) of U , in the new place (z, µz + 1) (which was not a place for U ). All subsequent rows of P are the same as the corresponding rows of U . It is sometimes better to use a slightly different “technology”, to construct U ← x1 from U . Here one makes the parameters x1 , k(1), x2 , k(2), . . . as before, and records these on the tableau U ; for each a we put bracketed (xa ) between the entries ua,k(a−1) and ua,k(a) of row (a). We do not change any of the entries of U . The resulting diagram (it is not a tableau in our sense) is called “U prepared for insertion of x1 ”. We pass from this diagram to P = U ← x1 by replacing . . . (xa ) xa+1 . . . by . . . xa . . ., for each a ∈ {1, . . . , z − 1}; for row (z), replace . . . uz,µz (xz ) by . . . uz,µz xz . (B.4a) Example. Suppose we want to insert x1 = 2 into the tableau U shown in the left-hand column of table B.1 below. The second column shows U “prepared” for this insertion. This means that we have put bracketed (xa ) between the entries ua,k(a)−1 , ua,k(a) for each a = 1, 2, . . . , z; we have not yet changed any of the entries of U . Once U has been prepared, make P = P (i) from U by changing the entry ua,k(a) = xa+1 to pa,k(a) = xa , for all a = 1, 2, . . . , z − 1, i.e. the term xa in the bracketed (xa ) replaces its right-hand neighbour xa+1 = ua,k(a) . This procedure determines row (z), namely it is the first row where (xz ) does not have a right-hand neighbour. The row (z) of P , is 2 The verb “bump” was introduced, in this context, by Knuth [34, p. 713]. Alternatives would be “dump”, or even “jump”.
86
B The Schensted Process
found by replacing (xz ) by xz . In the example shown, we have x1 = 2, x2 = 3, z = 2, k(1) = 5, k(2) = 3. Note that 3 in row (1) of U , is bumped into row (2).
U
1
1
3
3
4
2
P =U ←2
U prepared for insertion of 2
2
3
1
1
3
3 (3)
2
2 (2) 3
4
1
1
2
3
3
3
2
2
4
Table B.1. Example for the insertion process.
(B.4b) Example. Consider two “extreme” possibilities. (i) It can happen that, for some a, the element xa < ua,1 . In that case k(a) = 1, and (B.3a) shows that 0 ≤ xa < ua,1 . When U is “prepared” as above, row (a) looks like this: (xa ) ua,1 ua,2 · · · ua,µa . (ii) It can happen that, for some a, we have k(a) = µa + 1, and µa+1 = 0. This means that we must “bump” xa+1 (= ua,k(a) ) into an empty row (a + 1). We have 0 < xa+1 < ∞, which allows us to say that k(a + 1) = 1. Then k(a + 1) = µa+1 + 1. So a + 1 = z, and P has an entry xa+1 in place (a + 1, 1). The following example illustrates both possibilities (i) and (ii). Suppose we insert x1 = 1 into the tableau U =
1 2 . Prepared for this insertion, U 4
1 (1) 2 1 1 becomes (2) 4 . Hence P = U ← 1 = 2 . In this example x1 , x2 , x3 (4) 4 are 1, 2, 4, respectively, z = 3; and k(1) = 2, k(2) = 1, k(3) = 1. We are now in a position to calculate the P -symbol P (i) and the Q-symbol Q(i) of a given word i ∈ I(n, r). To find P (i) = P (i1 i2 , · · · ir ), we must calculate, successively, the P -symbols of the words i1 , i1 i2 , . . . , i1 i2 · · · ir , starting with P (i1 ) = i1 , and using the insertion process P (i1 i2 · · · it ) = P (i1 i2 · · · it−1 ) ← it .
B.4 Examples of the Schensted process
87
It follows that the entries of P (i) are the entries i1 , . . . , ir of i, in some order. The construction of Q(i) is different, Q(i1 · · · it ) is not made by inserting it into Q(i1 · · · it−1 ); the construction follows definition (B.3e), see Example (B.4c) below. (B.4c) Example. Calculate P (i), where i = 1 4 2 1 2. Calculate also λ(i) and Q(i). First we must work out the successive tableaux Pt (i) = P (i1 . . . it ), for y t = 1, 2, . . . , 5. We find (the operator −→ means “insert y into the tableau on the left”) 4 2 P1 (i) = 1 −→ P2 (i) = 1 4 −→ P3 (i) = 1 2 4
1 1 2 1 1 2 1 −→ P4 (i) = 2 −→ P5 (i) = P (i) = 2 . 4 4 At the same time we get the dominant weight at each stage, namely λ(i1 · · · it ) is just the shape of the tableau Pt (i). In particular, λ(i) = (3, 1, 1, 0, . . . , 0). Now we make the tableaux Qt (i) = Q(i1 . . . it ) as follows: if we know Qt−1 (i), then Qt (i) is got by putting “t” in the place which was new, when Pt (i) was constructed from Pt−1 (i). Thus Q1 (i) = 1 ,
Q2 (i) = 1 2 ,
Q3 (i) = 1 2 , 3 1 2 , Q4 (i) = 3 4
1 2 5 Q5 (i) = Q(i) = 3 . 4
(B.4d) Example. Calculate λ(i), P (i), Q(i) for any word i = i1 i2 ∈ I(n, 2), and so verify the table given in §A.2. To find P (i), we must insert i2 into the tableau U = i1 . When U is prepared for this insertion, it becomes i1 (i2 ) in case i1 ≤ i2 , and it becomes
(i2 ) i1
in case i1 > i2 . Therefore P (i) = i1 i2
(i1 )
in case i1 ≤ i2 ,
i2
in case i1 > i2 . It follows that λ(i) is (2, 0, 0, . . . , 0) or i1 (1, 1, 0, . . . , 0), in these respective cases. To find Q(i), we must add 2 to V = Q(i1 ) = 1 in the place (z, µz + 1).
and P (i) =
In the case i1 ≤ i2 , we have z = 1, so Q(i) = 1 2 . In case i1 > i2 , we have z = 2, so Q(i) =
1 . 2
88
B The Schensted Process
B.5 Proof that (µ, U, V) ← x1 belongs to T(n, r) We keep the notations of §§B.2, B.3, B.4. Suppose that r > 1 and that (µ, U, V ) ∈ T (n, r − 1). Let x1 ∈ n. The triple (λ, P, Q) = (µ, U, V ) ← x1 is defined in (B.3e). In this section we shall prove that (λ, P, Q) ∈ T (n, r). For this we must show that λ ∈ Λ+ (n, r), and that P , Q are both standard. Write ua,b for the (a, b)-entry of U , and pa,b for the (a, b)-entry of P . (B.5a) Proposition. The weight λ = µ + εz = (µ1 , . . . , µz−1 , µz + 1, µz+1 , . . . , µn ) is dominant. It follows that Q is standard. Proof. We already know that µ1 ≥ · · · ≥ µz−1 ≥ µz ≥ µz+1 ≥ · · · ≥ µn because µ is dominant. If λ is not dominant, it must be that µz−1 = µz . But this leads to a contradiction. We know that xz ≥ uz,µz , and that uz,µz > uz−1,µz because U is standard. But uz−1,µz ≥ uz−1,k(z−1) = xz (the last equality is the definition of xz ), and putting these inequalities together gives the contradiction xz > xz . By the definition (B.3e), Q is made by adding an entry r at the end of row (z) of V . It is clear that Q is a standard λ-tableau, whose entries are 1, 2, . . . , r in some order. (B.5b) Proposition. The λ-tableau P is standard. Proof. First we shall show that P is “row standard”, i.e. that (i) pa,h−1 ≤ pa,h for all adjacent pairs (a, h − 1), (a, h) of places in any row (a) of [λ]. If (a, k(a)) is not one of (a, h − 1), (a, h) then by (B.5a) pa,h = ua,h and pa,h−1 = ua,h−1 , therefore (i) follows from the corresponding fact for row (a) of U . If (a, h) = (a, k(a)) then pa,h−1 = ua,h−1 ≤ xa , and xa = pa,k(a) = pa,h ; thus (i) holds. There remains the case (a, h − 1) = (a, k(a)). Then (i) says pa,k(a) ≤ pa,k(a)+1 . But pa,k(a) = xa and pa,k(a)+1 = ua,k(a)+1 . Thus (i) follows from xa < xa+1 = ua,k(a) ≤ ua,k(a)+1 . To complete the proof of Proposition (B.5b), we must show that P is “column standard”, i.e. that if (a, h) and (a + 1, h) are adjacent places in the same column of [λ], then (ii) pa+1,h > pa,h . If h = k(a) and h = k(a + 1) then pa,h = ua,h and pa+1,h = ua+1,h , hence (ii) follows from ua+1,h > ua,h , which holds because U is column standard.
B.6 The inverse Schensted process
89
If h = k(a), h = k(a + 1) then ua,h = ua,k(a) = xa+1 > xa . But ua+1,k(a) > ua,k(a) because U is column standard. Therefore pa+1,k(a) = ua+1,k(a) > xa = pa,k(a) , which proves (ii) in this case. Now suppose that h = k(a + 1), h = k(a). Then pa+1,k(a+1) = xa+1 , and pa,k(a+1) = ua,k(a+1) . In place (a, k(a)) of P we have xa (see (B.3d)). Since P is “row standard” (just proved, above) and k(a + 1) ≤ k(a) (see (B.3f)) we have ua,k(a+1) ≤ xa ; also xa < xa+1 by (B.3b). So pa,k(a+1) = ua,k(a+1) ≤ xa < xa+1 = pa+1,k(a+1) . This proves (ii) in case h = k(a + 1), h = k(a). There remains only the case h = k(a) = k(a + 1). In this case pa+1,h = pa+1,k(a+1) = xa+1 and pa,h = pa,k(a) = xa . But xa+1 > xa , therefore (ii) holds. The proof of Proposition (B.5b) is now complete.
B.6 The inverse Schensted process This section and the next are devoted to Schensted’s fundamental (B.6a) Theorem (see [46, p. 182]; [34, pp. 715–716]). The map Sch : I(n, r) → T (n, r) is bijective. This will be proved by constructing a map M : T (n, r) → I(n, r) which is a two-sided inverse to Sch (see (B.7b)). If r = 1, it is easy to make a map M inverse to Sch. The only element + (n, 1) is λ = (1, 0, . . . , 0), hence any in Λ T (n, 1) has the form element in to be x (regarded as λ, , λ, x , 1 for some x ∈ n. We define M x 1 a 1-letter word). By (B.2b), Sch(x) = λ, x , 1 . It is easy to check now that M : T (n, 1) → I(n, 1) is a two-sided inverse to Sch : I(n, 1) → T (n, 1). It follows that Sch is bijective in case r = 1. From now on in this section, assume that r > 1. The process given in §B.3 delivers a map, which we call insertion, (B.6b) J : T (n, r − 1) × n → T (n, r), which takes a pair ((µ, U, V ), x1 ) to the element (µ, U, V ) ← x1 of T (n, r). Next define another map, called extrusion, (B.6c) E : T (n, r) → T (n, r − 1) × n. To make E, we need an “inverse Schensted process”, which will turn any (λ, P, Q) ∈ T (n, r) into a pair consisting of a triple (µ, U, V ) ∈ T (n, r − 1) and an element w1 ∈ n. How to define E. Let (λ, P, Q) ∈ T (n, r). Let (a, b) ∈ [λ] be the (unique) place where qa,b = r. Since Q is standard, r must be at the end of its row. Therefore if a = z, then b must be λz , so that qz,λz = r. But r is also at the end of its column, which implies that λz > λz+1 . This proves
90
B The Schensted Process
(B.6d) The weight µ = λ − εz is dominant. Hence µ ∈ Λ+ (n, r − 1). Definition of the extrusion sequence. We shall next define the extrusion sequence (B.6e) l(z), wz , l(z − 1), wz−1 , . . . , l(1), w1 . To make this sequence, we must know Q (which determines z) as well3 as the λ-tableau P . Step 1. • l(z) = λz ; • wz := pz,λz (this is the entry in P , at the place (z, λz ) where Q has entry r). Step 2. • l(z − 1) is the largest l ∈ {1, 2, . . . , λz−1 } such that pz−1,l < wz . Equivalently, l(z − 1) is the unique element in {1, 2, . . . , λz−1 } such that pz−1,l(z−1) < wz ≤ pz−1,l(z−1)+1 . • wz−1 := pz−1,l(z−1) . Inductive Step. When l(a + 1) and wa+1 := pa+1,l(a+1) have been defined, we go on to define • l(a) is the largest l ∈ {1, . . . , λa } such that pa,l < wa+1 . Equivalently, l(a) is the unique element in {1, . . . , λa } such that (B.6f ) pa,l(a) < wa+1 ≤ pa,l(a)+1 . • wa := pa,l(a) . Note that if a < z there is always at least one l ∈ {1, 2, . . . , λa } such that pa,l < wa+1 , namely l = l(a + 1); this is because P is column standard, hence pa,l(a+1) < pa+1,l(a+1) = wa+1 . So the extrusion sequence (B.6e) always ends with . . . , l(1), w1 . Final Step. The last two terms are as follows. • l(1) is the largest l ∈ {1, 2, . . . , λ1 } such that p1,l < w2 , and • w1 := p1,l(1) . We say that the element w1 ∈ n has been “extruded”4 from P (or more precisely from the given element (λ, P, Q) in T (n, r)). But the extrusion process also defines an element (µ, U, V ), see below. (B.6g) Definition of E. Let (λ, P, Q) ∈ T (n, r). Then E((λ, P, Q)) is the pair ((µ, U, V ), w1 ), where • µ = λ − εz = (λ1 , . . . , λz−1 , λz − 1, λz+1 , . . . , λn ), To define P = U ← x1 , we did not need to know Q, because z is defined by the insertion sequence; see (B.3c), (B.3d). 4 In the way that a small amount (w1 ) of toothpaste is extruded from its tube (λ, P, Q). 3
B.6 The inverse Schensted process
91
• V is Q, with the entry qz,λz = r removed, • w1 is the extruded element of n defined above, and • U = (ua,b )(a,b)∈[µ] is the following µ-tableau: ua,b = pa,b for all (a, b) ∈ [µ], except 1◦ at the places (a, l(a)) for a = 1, 2, . . . , z − 1. At these places we define ua,l(a) = wa+1 (whereas pa,l(a) = wa ), and 2◦ there is no entry in U at the place (z, λz ), because U is a µ-tableau and / [µ]. (z, λz ) ∈ To complete the definition of E, the following lemma is required. (B.6h) Lemma. The triple (µ, U, V ) belongs to T (n, r − 1). Proof. From (B.6d) we know that µ ∈ Λ+ (n, r − 1). It is clear that V is a µ-tableau whose entries are 1, 2, . . . , r − 1, in some order. It remains only to show that U is standard. The proof of this is very similar to that of Proposition (B.5b), and we leave to the reader. (B.6i) Proposition. The maps J, E are inverse to each other. Proof. We shall first prove that (i) E ◦ J = idT (n,r−1)×n . Take any element ((µ, U, V ), x1 ) in T (n, r − 1) × n. Let (B.6j) x1 , k(1), . . . , xz , k(z), be the insertion sequence used to define (λ, P, Q) = J((µ, U, V ), x1 ) = (µ, U, V ) ← x1 . Here z is such that k(z) = µz + 1 = λz , where λ = µ + εz . Note that Q has r in place (z, λz ), and pz,λz = xz (see (B.3d) and (B.3e)). To prove (i) it is enough to show that E((λ, P, Q)) = ((µ, U, V ), x1 ). Now E((λ, P, Q)) is determined by the extrusion sequence (see (B.6e)) (B.6k) l(z), wz , l(z − 1), wz−1 , . . . , l(1), w1 . The “z” which appears in (B.6k) indexes the row of Q which contains the entry r, see (B.6d). Therefore this “z” is the same as the z in (B.6j). From (B.6e) we have l(z) = λz and wz = pz,λz . But from the definition of P , pz,λz = pz,µz +1 = xz . Therefore (B.6l) l(z) = k(z) and wz = xz . Our ambition is to prove (B.6m) l(a) = k(a) and wa = xa for all a ∈ {z, z−1, . . . , 1}. Suppose a < z and that (using “upward” induction) (B.6n) l(a + 1) = k(a + 1) and wa+1 = xa+1 .
92
B The Schensted Process
By (B.3c), there holds xa < xa+1 = ua,k(a) ≤ ua,k(a)+1 . However the definitions in §B.3 show that xa = pa,k(a) , and (B.6n) gives wa+1 = xa+1 . So xa = pa,k(a) < wa+1 ≤ ua,k(a)+1 = pa,k(a)+1 . But comparing this with (B.6f), we see that l(a) = k(a). Hence wa = pa,l(a) = pa,k(a) = xa ; thus (B.6m) holds for all a. In particular, w1 = x1 , and we find easily that E(J(((µ, U, V ), x1 )))) = E((λ, P, Q)) = ((µ, U, V ), x1 ); in other words we have proved (i). To complete the proof of (B.6i) we must prove (ii) J ◦ E = idT (n,r) . Take any element (λ, P, Q) ∈ T (n, r). Let (B.6k) be the extrusion sequence which defines E((λ, P, Q)) = ((µ, U, V ), w1 ) (see (B.6g)). Let (B.6o) (w1 =) x1 , k(1), x2 , k(2), . . . , xz , k(z) be the insertion sequence which defines J((µ, U, V ), w1 ). In order to prove (ii) we must show that J((µ, U, V ), w1 ) = (λ, P, Q). The first step is to prove (B.6p) wa = xa for all a ∈ {1, 2, . . . , z}. This holds for a = 1, by definition. Suppose that (B.6p) holds for some a. By (B.6f), l(a) is the unique element of {1, 2, , . . . , z} such that (B.6q) pa,l(a) = wa < wa+1 ≤ pa,l(a)+1 . From this follows that pa,l(a)−1 ≤ wa < wa+1 . Hence, using the definition (B.6g) of U , we have ua,l(a)−1 ≤ wa < ua,l(a) , and since wa = xa , there holds ua,l(a)−1 ≤ xa < ua,l(a) . However this proves that l(a) = k(a), from (B.3c). Consequently wa+1 = pa,l(a) = pa,k(a) = xa+1 (see (B.3b); we are here using the insertion of x1 into (µ, U, V )). Now we can prove, by induction on a, that (B.6r) wa = xa and l(a) = k(a), for all a ∈ {1, 2, . . . , z}. Using (B.6r) and the definitions (B.3d) and (B.3e) (applied to the insertion of x1 into (µ, U, V )), it is quite easy to show that J((µ, U, V ), w1 ) = (λ, P, Q). This concludes the proof of Proposition (B.6i).
B.7 The ladder We shall define a map M : T (n, r) → I(n, r) inverse to Sch : I(n, r) → T (n, r), and hence prove Schensted’s Theorem (B.6a). M will be given as the product of maps E0 , E1 , . . . , Er−1 displayed in table B.2 (“The ladder”) below.
B.7 The ladder
Set
Typical element of set i1 i2 . . . is is+1 . . . ir−1 ir
I(n, r) J1
6
Er−1
?
T (n, 1) × I(n, r − 1) J2
(λ1 , P1 , Q1 ), i2 . . . is is+1 . . . ir−1 ir
6
Er−2
? .. .
Js−1
.. .
6
Er−s+1
?
T (n, s − 1) × I(n, r − s + 1) Js
(λs , Ps , Qs ), is+1 . . . ir−1 ir
6
Er−s−1
?
.. .
6
E1
?
T (n, r − 1) × I(n, 1) Jr
(λs−1 , Ps−1 , Qs−1 ), is is+1 . . . ir−1 ir
Er−s
?
.. .
Jr−1
6 T (n, s) × I(n, r − s)
Js+1
93
(λr−1 , Pr−1 , Qr−1 ), ir
6
E0
? T (n, r)
(λr , Pr , Qr )
Table B.2. The ladder.
94
B The Schensted Process
Notations and Explanations. To define Es : T (n, r − s) × I(n, s) −→ T (n, r − s − 1) × I(n, s + 1), first apply E to a typical element (λr−s , Pr−s , Qr−s ) of the set T (n, r − s): this gives a pair ((λr−s−1 , Pr−s−1 , Qr−s−1 ), ir−s ) where ir−s is some element of n. By definition, Es takes the element ((λr−s , Pr−s , Qr−s ), ir−s+1 . . . ir−1 ir ) of T (n, r − s) × I(n, s) to ((λr−s−1 , Pr−s−1 , Qr−s−1 ), ir−s ir−s+1 . . . ir−1 ir ). The map Jr−s : T (n, r − s − 1) × I(n, s + 1) −→ T (n, r − s) × I(n, s) : takes (by definition) ((λr−s−1 , Pr−s−1 , Qr−s−1 ), ir−s ir−s+1 . . . ir−1 ir ) −→ ((λr−s , Pr−s , Qr−s ), ir−s+1 . . . ir−1 ir ), where (λr−s , Pr−s , Qr−s ) = (λr−s−1 , Pr−s−1 , Qr−s−1 ) ← ir−s . Note. To explain the top step of the ladder, take T (n, 0) to be the 1-element set which contains only the triple (λ, P, Q), where λ = (0, 0, . . . , 0) and P , Q are empty tableaux. Then identify T (n, 0) × I(n, r) with I(n, r). In the same way, the bottom step is T (n, r) × I(n, 0) = T (n, r), where I(n, 0) consists of the empty word only. (B.7a) Exercise. Prove that Jr−s = E−1 s . [Hint: use Proposition (B.6i).] As we go up the ladder, the successive operators Es erode T (n, r), step by step, until it becomes I(n, r). This progress is inverted as we go down from I(n, r) to T (n, r), using the operators Js . But this “going down” is exactly described by the formula (B.2c), which means that Sch = Jr ◦ Jr−1 ◦ · · · ◦ J1 . Define (B.7b) M := Er−1 ◦ · · · ◦ E0 . By (B.7a), M is a two-sided inverse to Sch. This proves Theorem (B.6a).
C Schensted and Littelmann operators
C.1 Preamble Schensted (see §§B.3, B.6) associates to every word i ∈ I(n, r) a unique triple (λ(i), P (i), Q(i)) ∈ T (n, r). This provides the following decomposition (disjoint union) of the set I(n, r): (C.1a) I(n, r) = Iλ (n, r), λ∈Λ+ (n,r)
where Iλ (n, r) is the set of all i ∈ I(n, r) such that λ(i) = λ, for each dominant weight λ ∈ Λ+ (n, r). We define the shape of a word i to be the shape of P (i) (which is also the shape of Q(i)). So Iλ (n, r) is the set of all words of shape λ. In a case where n, r are supposed known, we may write Iλ (n, r) = Iλ . Example. The set I(3, 3) is decomposed into three subsets I(300) , I(210) and I(111) ; this decomposition of I(3, 3) is illustrated in §E.1. Assume from now on that λ ∈ Λ+ (n, r) is fixed. Definition. Define two equivalence relations ∼ and ≈ on Iλ : if i, j ∈ Iλ then (C.1b) i ∼ j means that P (i) = P (j), and (C.1c) i ≈ j means that Q(i) = Q(j). We will use the following notation. (C.1d) For any (standard) λ-tableau P whose entries are drawn from the set n, let Iλ (P, ∼) be the ∼ equivalence class { i ∈ Iλ : P (i) = P }, and (C.1e) For any (standard) λ-tableau Q whose entries are 1, 2, . . . , r (in some order), let Iλ (Q, ≈) be the ≈ equivalence class { i ∈ Iλ : Q(i) = Q }.
96
C Schensted and Littelmann operators
Remark. If either of the tableaux P , Q is given, its shape λ is known. For this reason we will usually omit the suffix λ, and write Iλ (P, ∼) = I(P, ∼) and Iλ (Q, ≈) = I(Q, ≈). The equivalence relation ∼ was introduced by Knuth, who proved that ∼ is the equivalence relation on Iλ generated by a certain collection of basic (or “elementary”) moves i → j, each of which affects only two places in i and j. Knuth’s theorem will be proved in §§C.3, C.4. This proof is based on Knuth’s paper [34, Theorem 6, p. 723]. Littelmann defines a graph G, in a wider context than here [35, p. 504]. Theorem A (see (A.4c) and Chapter D) will show that (in our present context) the equivalence relation determined by G is equal to ≈. We regard Theorem A as an analogue to Knuth’s theorem; it says that ≈ is the equivalence relation on Iλ generated by a certain collection of elementary moves i ⇒ j, where i ⇒ j means that there exists c ∈ {1, 2, . . . , n − 1} such that f˜c (i) = j or such that e˜c (i) = j. Notice that if i ⇒ j, then the words i and j differ in exactly one place; see (A.3g)(2). Example. The tables in §E.1 show the ∼ and ≈ classes for the case n = r = 3. The ≈ classes are given as vertical columns in these tables; for example I 1 3 , ≈ = {211, 212, 311, 213, 312, 313, 322, 323}, 2 1 and I 2 , ≈ is the one-word set {321}. The ∼ classes are given as hori3 zontal rows in the tables in §E.1; for example I 1 3 , ∼ = {231, 213}, 2 and I 1 1 2 , ∼ is the one-word set {112}. The one-word set {321} is both a ∼ and a ≈ class.
C.2 Unwinding a tableau To each tableau Y we shall associate a word KY , which may be called the (Knuth) unwinding of Y , as follows (see [34, p. 723] or [18, p. 17]). Let λ ∈ Λ+ (n, r) be a dominant weight, and let m be the number of rows of [λ], so that λ1 ≥ λ2 ≥ · · · ≥ λm > 0. Define the Knuth ordering < on [λ] as follows (see [34, p. 723]): (C.2a) (m, 1) < (m, 2) < · · · < (m, λm ) < (m − 1, 1) < (m − 1, 2) < · · · < (m − 1, λm−1 ) < ··· < (2, 1) < (2, 2) < · · · < (2, λ2 ) < (1, 1) < (1, 2) < · · · < (1, λ1 ).
C.2 Unwinding a tableau
97
Now let Y = (ya,b )(a,b)∈[λ] be any λ-tableau. Define KY to be the word (C.2b) of length r obtained by writing out the entries ya,b according to the order (C.2a): (C.2b) KY := ym,1 ym,2 . . . ym,λm ym−1,1 ym−1,2 . . . ym−1,λm−1 . . . y2,1 y2,2 . . . y2,λ2 y1,1 y1,2 . . . y1,λ1 . The word KY is (by definition) the unwinding of the tableau Y . So KY is the word obtained by writing out the entries of each row of Y from left to right, starting with the bottom row, and working up to the first row. 1 1 2 2 , then KY = 3231122. Example. If Y = 2 3 3 Suppose that i ∈ I(n, r). The Schensted process (see §B.3) constructs an element (λ(i), P (i), Q(i)) of T (n, r), where λ ∈ Λ+ (n, r) and P (i) is a λ-tableau. Then the “unwinding” KP (i) of P (i) is a word, an element of I(n, r). Thus we have an operation KP : I(n, r) → I(n, r), which takes each i in I(n, r) to KP (i). However, if we apply the Schensted process to KP (i), we just get P (i) again; this follows from Proposition (C.2c) below. (C.2c) Proposition. Let λ ∈ Λ+ (n, r) with λ1 ≥ · · · ≥ λm > 0, and let Y be a λ-tableau. (i) P (KY ) = Y , and (ii) Q(KY ) is completely determined by the shape λ of Y ; it is the same for all λ-tableaux Y . The tableau Q(KY ) is described in (C.2h). Proof of part (i) of Proposition (C.2c). We shall prove (i) by induction on the number m of rows of Y . If m = 1, then Y is a one-rowed tableau y1,1 y1,2
···
y1,λ1 of shape (λ1 , 0, . . . , 0), and KY = y1,1 y1,2 . . . y1,λ1 .
We make P (KY ) by successively inserting y1,2 , . . . , y1,λ1 into the tableau y1,1 (see (B.2d) and (B.3d)). But since y1,1 ≤ y1,2 ≤ · · · ≤ y1,λ1 , each insertion simply adds a new entry to the first row. Therefore P (KY ) = y1,1 y1,2
···
y1,λ1 = Y . Thus (i) holds if m = 1. By the definition (B.3e),
we have Q(KY ) = 1 2 · · · λ1 ; this proves that (ii) also holds. Now suppose that m > 1 and that Proposition (C.2c) holds for any tableau with m−1 rows. In particular it holds for the tableau X made by removing the first row of Y ; therefore P (KX) = X. It is clear that KY = KX | y1,1 · · · y1,λ1 , hence P (KY ) = P (KX) ← y1,1 ← · · · ← y1,λ1 = X ← y1,1 ← · · · ← y1,λ1 . Diagram C.1 shows X, and above it, in parentheses, are the entries of row (1) of Y ; these are not entries of X.
98
C Schensted and Littelmann operators (y1,1 ) y2,1 y3,1 .. . .. . .. . X= .. . .. . .. . .. . yβ1 ,1
(y1,2 ) · · · (y1,t−1 ) y2,2 · · · y2,t−1 y3,2 · · · y3,t−1 .. .. . . .. .. . . .. .. . . .. .. . . .. . yβt−1 ,t−1 .. . yβ2 ,2 0 ···
0
(y1,t ) · · · (y1,λ2 ) · · · (y1,λ1 ) y2,t · · · y2,λ2 · · · 0 y3,t · · · y3,λ2 · · · 0 .. .. . . .. . yβλ2 ,λ2 .. . yβt ,t 0
···
0
···
0
0
···
0
···
0
Diagram C.1. The tableau X, made by removing the first row from the tableau Y .
In diagram C.1, the number βs denotes the length of column s, including the term (y1,s ). Therefore β1 = m, and β = (β1 , β2 , . . . , βλ1 , 0, . . . , 0) can be regarded as the partition of r conjugate to λ = (λ1 , λ2 , . . . , λβ1 , 0, . . . , 0). There holds β1 ≥ β2 ≥ · · · ≥ βt−1 ≥ βt ≥ · · · ≥ βλ1 , but for ease of drawing, diagram C.1 illustrates a case where the βs are distinct. (C.2d) Definition. Let t ∈ {0, 1, 2, . . . , λ1 }. Let X[0] := X, and if t ≥ 1, define X[t] to be the diagram obtained from diagram C.1 by removing the parentheses from (y1,1 ), (y1,2 ), . . . , (y1,t ), and then pushing columns (1), (2), . . . , (t) down by one place. (C.2e) Remark. For each s ∈ {1, . . . , t}, column (s) of X[t] is the same as column (s) of Y . In particular, X[λ1 ] = Y . (C.2f ) Lemma. Let t ∈ {0, 1, 2, . . . , λ1 }. Then P (KX | y1,1 y1,2 . . . y1,t ) = X[t]. In particular, P (KY ) = P (KX | y1,1 y1,2 . . . y1,λ1 ) = X[λ1 ] = Y . Thus (C.2f) will complete the proof of part (i) of (C.2c). Proof of Lemma (C.2f ). We use induction on t. If t = 0, the lemma claims that X = X[0], which is true. So let t ∈ {1, 2, . . . , λ1 }, and suppose that the lemma is true when t is replaced by t − 1; that is P (KX | y1,1 y1,2 . . . y1,t−1 ) = X[t − 1]. Now P (KX | y1,1 y1,2 . . . y1,t ) = P (KX | y1,1 y1,2 . . . y1,t−1 ) ← y1,t . So to prove Lemma (C.2f), it will be enough to prove that X[t − 1] ← y1,t is the tableau X[t] defined in (C.2d). The tableau X[t − 1] is displayed in
C.2 Unwinding a tableau
X[t − 1] =
y1,1 y2,1 .. . .. . .. . .. . .. .
y1,2 y2,2 .. . .. . .. . .. .
··· ···
99
(y1,t ) · · · (y1,λ2 ) · · · (y1,λ1 ) y2,t · · · y2,λ2 · · · 0 y3,t · · · y3,λ2 · · · 0 .. .. . . .. . yβλ2 ,λ2 .. .
y1,t−1 y2,t−1 .. . .. . .. . .. .
yβt ,t
yβ2 −1,2 · · · yβt−1 ,t−1 yβ1 −1,1 yβ2 ,2 yβ1 ,1 0 ··· 0
0
···
0
···
0
0
···
0
···
0
Diagram C.2. The tableau X[t − 1] = P (KX | y1,1 y1,2 . . . y1,t−1 ).
diagram C.2. We calculate X[t − 1] ← y1,t by the general insertion procedure given in §B.3. To make the present notation conform with that in §B.3, take U := X[t − 1], x1 := y1,t and µ to be the shape of U . Calculate the insertion parameters x1 , k(1), x2 , k(2), . . . by the inductive rule: given the element xa = ya,t for some a ≥ 1, define k(a) to be the unique element k in {1, . . . , µa , µa + 1} such that (1) ua,k−1 ≤ xa < ua,k . Then define xa+1 := ua,k(a) . It is very easy to find k(a) in our case. There holds (2) ya,t−1 ≤ ya,t < ya+1,t , for any a ∈ {1, . . . , βt −1}; the inequalities in (2) follow from the fact that Y is standard. But (2) is the same as (1), if we take k = t and xa+1 = ya+1,t . This shows that k(a) = t and xa+1 = ya+1,t . So starting with x1 = y1,t , which is given, we may find all insertion parameters for the insertion X[t − 1] ← y1,t . The result is given in table C.1 below. The parameter z (see the line under (B.3c)) is equal to βt . Notice that all the k(a) are equal to t, which means
2
···
a
1
xa
y1,t y2,t · · ·
k(a)
t
t
···
a
···
ya,t · · · t
···
βt yβt ,t t
Table C.1. Insertion parameters for the insertion X[t − 1] ← y1,t .
100
C Schensted and Littelmann operators
that all the xa lie in column t. Use (B.3d) to find X[t − 1] ← y1,t ; the result is X[t] as claimed in Lemma (C.2f). Proof of part (ii) of Proposition (C.2c). We are dealing with the word j = KY , whose letters are indexed by the set [λ]. Fix an element (a, s) ∈ [λ]. Then the segment j(m,1) j(m,2) · · · j(a,s) of j has length (C.2g) ψ (λ) (a, s) := λm + · · · + λa+1 + s. This gives a bijective, order-preserving map ψ = ψ (λ) : [λ] → r. We may regard ψ = ψ (λ) as the λ-tableau whose “unwinding” Kψ is the word1 1 2 3 · · · (r−1) r. Notice that the tableau ψ = ψ (λ) is, in general, not standard. 6 7 8 Example. If λ = (3, 3, 2, 0, . . . , 0), then ψ = ψ (λ) = 3 4 5 . 1 2 We shall prove part (ii) of Proposition (C.2c) by proving the following much stronger result: (C.2h) Proposition (see [3, Appendix C]). Let Y be any λ-tableau. Then Q(KY ) = Q(λ) , where Q(λ) is the λ-tableau given by Q(λ) (a, s) := ψ (λ) (βs + 1 − a, s), for all (a, s) ∈ [λ]. Expressed in words: Q(λ) is obtained by reversing each column of the tableau ψ (λ) . 1 2 5 Example. If λ = (3, 3, 2, 0, . . . , 0), then Q(λ) = 3 4 8 . 6 7 If Proposition (C.2h) is true, the tableau Q(λ) must be standard, since it is the Q-symbol of a word KY (see §B.5). Proof of Proposition (C.2h). We use induction on m. The case m = 1 is easy; if Y = y1,1 y1,2
···
y1,λ1 , then Q(KY ) = 1 2
· · · λ1 , which is the
same as Q(λ) (in this case we have Q(λ) = ψ (λ) ). So now suppose that m > 1 and that Proposition (C.2h) is true when Y is replaced by any tableau with m − 1 rows. In particular it is true for the ∗ tableau X obtained by removing the first row from Y , so that Q(KX) = Q(λ ) , where λ∗ = (λ2 , . . . , λm , 0, . . . , 0) ∈ Λ+ (n, r − λ1 ) is the shape of X. 1
This word belongs to I(r, r). To define ψ (λ) , we should regard λ as an element of Λ+ (r, r), but notice that [λ] = [λ ] if λ is obtained from λ by adding zeros: λ = (λ1 , . . . , λm , 0, 0, . . . , 0).
C.2 Unwinding a tableau
101
We proved part (i) of Proposition (C.2c), namely that P (KY ) = Y , by calculating in turn the P -symbols of the words KX,
KX | y1,1 ,
KX | y1,1 y1,2 ,
...,
KX | y1,1 y1,2 · · · y1,λ1 = KY.
So we shall do the same for the Q-symbols. The first step is to find Q(KX | y1,1 ). Use the procedure described in (B.3e), taking U = X, x1 = y1,1 and r = ψ(1, 1). (Notice that r is the length of the word KX | y1,1 ). To go from X = P (KX) to P (KX | y1,1 ) = X ← y1,1 is very easy; push down the first column of X by one place, and then put y1,1 into the top place of that column (see proof of part (i) of Proposition (C.2c)). The tableaux X and X ← y1,1 are shown in table C.2. To find Q(KX | y1,1 ), ∗ use the recipe in (B.3e). Our induction hypothesis gives Q(KX) = Q(λ ) . The λ∗ -tableau X y2,1 y3,1 .. . .. . .. . .. . yβ1 ,1
y2,2 y3,2 .. . .. . .. .
··· ···
yβ2 ,2 0
··· ···
The (λ∗ + ε1 )-tableau X ← y1,1
y2,λ2 y3,λ2 .. .
0 0 .. .
yβλ2 ,λ2 .. .
0 .. .
0 0
0 0
y1,1 y2,2 y3,2 y2,1 .. .. . . .. .. . . .. .. . . .. . yβ2 ,2 yβ1 −1,1 0 0 yβ1 ,1
··· ···
y2,λ2 y3,λ2 .. .
0 0 .. .
yβλ2 ,λ2 .. .
0 .. .
0 0 0
0 0 0
··· ··· ···
Table C.2. Inserting y1,1 into the tableau X (P -symbol).
∗
∗
Diagram C.3 shows ψ (λ ) . To make Q(λ
ψ (λ
∗
)
)
ψ(2, 1) ψ(2, 2) · · · ψ(2, λ2 ) ψ(3, 1) ψ(3, 2) · · · ψ(3, λ2 ) .. .. .. . . . .. .. . . · · · ψ(βλ2 , λ2 ) = .. .. .. . . . .. 0 . ψ(β2 , 2) 0 ··· 0 ψ(β1 , 1) ∗
∗
we reverse each column of ψ (λ ) ; this 0 0 .. . 0 . 0 0 0
Diagram C.3. The tableau ψ (λ ) , where λ∗ = (λ2 , . . . , λm , 0, . . . , 0).
102
C Schensted and Littelmann operators
gives the left-hand pane in table C.3. Now follow the instructions in (B.3e): to
The Q-symbol of the word KX
The Q-symbol of the word KX | y1,1
ψ(β1 , 1) ψ(β2 , 2) · · · ψ(βλ2 , λ2 ) 0
ψ(β1 , 1) ψ(β2 , 2) · · · ψ(βλ2 , λ2 ) 0
.. .
.. .
.. .
.. .
.. .
.. .
.. .
.. .
ψ(4, 1)
ψ(3, 2) · · ·
ψ(3, λ2 ) 0
ψ(4, 1)
ψ(3, 2) · · ·
ψ(3, λ2 ) 0
ψ(3, 1)
ψ(2, 2) · · ·
ψ(2, λ2 ) 0
ψ(3, 1)
ψ(2, 2) · · ·
ψ(2, λ2 ) 0
ψ(2, 1)
0
···
0
0
ψ(2, 1)
0
···
0
0
ψ(1, 1)
0
···
0
0
Table C.3. Inserting y1,1 into the tableau X (Q-symbol).
go from the left-hand pane to the right-hand pane, we adjoin a new place—this must be the place which is new when we go from P (KX) to P (KX) ← y1,1 , namely the place (β1 , 1). And in this place we must put “r”, which in our case is ψ(1, 1). This gives Q(KX | y1.1 ) shown in the right-hand pane of table C.3. We go on to insert y1,2 , . . . , y1,λ1 in turn. At each insertion, say y1,t , we adjoin ψ(1, t) to the bottom of column (t). But the new column (t) so made is the same as column (t) of Q(λ) . When we have inserted y1,λ1 we have the complete tableau Q(λ) . This finishes the proof of Proposition (C.2h). Hence we have proved Proposition (C.2c). (C.2i) Exercise. Let λ ∈ Λ+ (n, r), and let i be any element of Iλ (n, r). Then i = KP (i) if and only if Q(i) = Q(λ) . In other words, the ≈ class of Q(λ) consists of all i ∈ Iλ (n, r) which satisfy i = KP (i). [Hint. Let i ∈ I(n, r). Schensted’s Theorem (B.6a) tells us that (i) i = KP (i) if and only if Sch(i) = Sch(KP (i)). Now assume that i ∈ Iλ (n, r), which means that λ(i) = λ (see (C.1a)). Hence (ii) Sch(i) = (λ, P (i), Q(i)). To calculate Sch(KP (i)) we take Y = P (i) in (C.2c). This shows that P (KP (i)) = P (i). Since P (i) has shape λ, it follows that λ(KP (i)) = λ. But (C.2c)(ii) and (C.2h) tell us that Q(KP (i)) = Q(λ) . Therefore (iii) Sch(KP (i)) = (λ, P (i), Q(λ) ). Now (i), together with (ii) and (iii), give the desired result: i = KP (i) if and only if Q(i) = Q(λ) .]
C.3 Knuth’s theorem
103
C.3 Knuth’s theorem (C.3a) Theorem (see [34, p. 723]). Let i, j be words in I(n, r). Then i ∼ j (i.e. P (i) = P (j)) if and only if there is a finite sequence of words (C.3b) i(1), i(2), . . . , i(s) such that i(1) = i, i(s) = j and each consecutive pair of words i(σ − 1), i(σ) is connected by a basic (or elementary) move of type K or K . These basic moves are as follows [34, p. 723]. Definition. A move of type K changes a word (C.3c) . . . b c a . . .
to
. . . b a c . . .,
where a, b, c are letters (i.e. elements of n) such that a < b ≤ c. A move of type K changes a word (C.3d) . . . a c b . . .
to
. . . c a b . . .,
where a, b, c are letters (i.e. elements of n) such that a ≤ b < c. Remarks. (i) Each basic move is assumed to be symmetric, i.e. if a move takes a word w to another word w , then it also takes w to w. (ii) In (C.3c) and (C.3d), the symbol . . . stands for a word (possibly empty) which is not changed in the move. For example the type K move (C.3c) changes BbcaC to BbacC, where B, C are fixed words. (By (i), this move also takes BbacC to BbcaC.) The “only if” part of Knuth’s theorem will be proved in this section, and the “if” part will be proved in §C.4. So in this section (§C.3) we must prove (C.3e) If i, j ∈ I(n, r) are such that P (i) = P (j), then i can be connected to j by a finite sequence of basic moves. The essence of this is that every insertion operation U to U ← x can be broken down into a sequence of basic moves. The next proposition puts this fact in a form suitable for our purposes; in (C.3p) it will be shown that (C.3f) implies (C.3e). (C.3f ) Proposition. Let r > 1, µ ∈ Λ+ (n, r − 1). Let U be any µ-tableau and x any element of n. Regard KU and x as words of lengths r − 1 and 1 respectively, so that the “concatenation” w = KU | x of these may be regarded as an element in I(n, r). Then there is a finite sequence of basic moves in I(n, r) which takes w to the word K(U ← x).
104
C Schensted and Littelmann operators
Proof. We shall give in (C.3i)–(C.3k) an explicit sequence of basic moves which takes w to K(U ← x)2 . It is desirable to fix first some notation for the words which will be used in the proof of (C.3f). (C.3g) Notation for words and places. All the words in this section have length r, and their entries are labelled by the set of places [µ]∪{(r)}. The r − 1 elements of [µ] are arranged according to the Knuth order (see (C.2a)), and (r) is the last place. Therefore if µ= (µ1 , . . . , µm , 0, . . . , 0) ∈ Λ+ (n, r − 1) µj = r − 1), then a typical word looks (with µ1 ≥ · · · ≥ µm > 0 and like this: y = ym,1 . . . ym,µm . . . y1,1 . . . y1,µ1 yr . To resume the proof of (C.3f), write x = x1 and w = KU | x, so that w = um,1 . . . um,µm . . . u1,1 . . . u1,µ1 x1 . Recall from (B.3b) the “parameters” of the insertion of x1 into the tableau U : for each a ∈ {1, . . . , z}, define xa := ua−1,k(a−1) if a > 1, or if a = 1 define x1 = x. Define k(a) to be the smallest k ∈ {1, 2, . . . , µa , µa + 1} such that xa < ua,k . If k(a) = µa + 1 (which means that xa ≥ ua,µa ), then the insertion sequence stops at this stage. Define z to be the first a such that xa ≥ ua,µa . The tableau U ← x1 is denoted P = (pa,b )(a,b)∈[λ] , where λ = µ + εz (see (B.3d)). According to (B.3d), each row a > z of the tableau U , is identical to the corresponding row of P = U ← x1 ; and (B.3d)(1◦ ) shows that also uz,t = pz,t for all t ∈ {1, . . . , µz }. Therefore (C.3h) ua,t = pa,t for all places (a, t) ≤ (z, µz ). Next define a sequence of words ξ(a, t), one for each place (a, t) ∈ [µ], which will “interpolate” between the words w = KU | x1 and KP = K(U ← x1 ). Use the following notation: if τ ∈ [µ], then τ + (respectively τ −) denotes the place immediately after (respectively immediately before) τ in the order (C.3g) of [µ] ∪ {(r)}. For example, if a ∈ {2, . . . , z}, then (a, t)+ is (a, t + 1) for all 1 ≤ t < µa , and (a, µa )+ = (a − 1, 1). Definition of the words ξ(a, t). (C.3i) If (a, t) ≤ (z, µz ), then define ξ(a, t) := KP . (C.3j) If (a, t) > (z, µz ) and k(a)+1 ≤ t ≤ µa , then define ξ(a, t)(a,t)+ := xa , ξ(a, t)τ := uτ if τ ≤ (a, t), and ξ(a, t)τ := pτ − if τ > (a, t)+. (C.3k) If (a, t) > (z, µz ) and 1 ≤ t ≤ k(a), then define ξ(a, t)(a,t) := xa+1 , ξ(a, t)τ := uτ if τ < (a, t), and ξ(a, t)τ := pτ − if τ > (a, t).
2 In fact the construction of these basic moves is an essential part of Knuth’s proof of his theorem; see [34, end of p. 723, and first 7 lines of p. 724].
C.3 Knuth’s theorem
105
(C.3l) Pivot of ξ(a, t). Assume that (a, t) > (z, µz ). Then the word ξ(a, t) can be described as follows: define the pivot of ξ(a, t) to be the pair (xa , (a, t)+) in case (C.3j), and to be (xa+1 , (a, t)) in case (C.3k). Then in both cases the rule is: at every place τ left of the pivot, let ξ(a, t)τ = uτ , and at every place τ right of the pivot, let ξ(a, t)τ = pτ − . At the pivot itself, ξ(a, t) has entry xa in case (C.3j), or xa+1 in case (C.3k). If a word is among the ξ(a, t), it is completely determined by its pivot. (C.3m) Proposition. ξ(1, µ1 ) = w = KU | x1 . ξ(z − 1, 1) = KP = K(U ← x1 ). ξ(a, 1) = ξ(a + 1, µa+1 ), for all a ∈ {1, . . . , m − 1}. If k(a) + 1 ≤ t ≤ µa , there is a basic move of type K which takes ξ(a, t) to ξ(a, t − 1). (v) If 2 ≤ t ≤ k(a), there is a basic move of type K which takes ξ(a, t) to ξ(a, t − 1).
(i) (ii) (iii) (iv)
Proof. (i) The pivot of ξ(1, µ1 ) is (x1 , (r)), therefore ξ(1, µ1 ) has uτ in each place τ ∈ [µ], and x1 in place (r). Hence ξ(1, µ1 ) = KU | x1 . (ii) The pivot of ξ(z −1, 1) is (xz , (z −1, 1)) since 1 ≤ t ≤ k(z −1) for t = 1. Thus ξ(z − 1, 1) has xz at place (z − 1, 1). At a place τ < (z − 1, 1), the entry in ξ(z − 1, 1) is uτ , and this equals pτ , by (C.3h). At t > (z − 1, 1), the entry is pτ − . Therefore ξ(z − 1, 1) = KP . (iii) All that is needed, is to show that both ξ(a, 1) and ξ(a + 1, µa+1 ) have the same pivot (xa+1 , (a, 1)). We leave this as an exercise for the reader. (iv) Suppose that k(a) + 1 ≤ t ≤ µa . By definition (C.3j), the entries of ξ(a, t) at the places (a, t)−, (a, t), (a, t)+ are u(a,t)− , ua,t , xa , respectively. If these entries are denoted b, c, a, then we shall prove that a < b ≤ c. The inequality b ≤ c, i.e. u(a,t)− ≤ u(a,t) , follows from (a, t)− = (a, t − 1) and the standardness of U (note that k(a)+1 ≤ t implies that 2 ≤ t). To see that a < b, use the inequality ua,k(a)−1 ≤ xa < ua,k(a) (see (B.3c)). Standardness of U gives ua,k(a) ≤ ua,t−1 , since k(a) + 1 ≤ t. Therefore xa < ua,k(a) ≤ ua,t , hence a < b. We may now make a move of type K which interchanges a and c, and leaves ξ(a, t) otherwise unchanged. At the places (a, t), (a, t)+, the word K ξ(a, t) has entries xa , ua,t . However ua,t = pa,t since t = k(a). It is now easy to see that K ξ(a, t) = ξ(a, t − 1). (v) The proof is on the same lines as that of (iv). Suppose 2 ≤ t ≤ k(a). By (C.3k) the entries in ξ(a, t) at the places (a, t)− = (a, t − 1), (a, t), (a, t)+ are ua,t−1 , xa+1 , pa,t , respectively. If these entries are denoted a, c, b, we leave it to the reader to prove that a ≤ b < c. This shows that there is a move of type K which interchanges a, c, and leaves ξ(a, t) otherwise unchanged. Therefore the entries in K ξ(a, t) at places (a, t − 1), (a, t), are xa+1 , ua,t−1 . But ua,t−1 = pa,t−1 because t−1 = k(a). It follows that K ξ(a, t) = ξ(a, t−1). This completes the proof of Proposition (C.3m).
106
C Schensted and Littelmann operators
And this proves Proposition (C.3f). (C.3n) Example. Take µ = (4, 2, 1, 1) ∈ Λ+ (4, 8), and U as given in (C.3o). Then U is a µ-tableau. Now take x1 = 1. We calculate P = U ← x1 by the methods of §B.4. This tableau also is given in (C.3o). It is a λ-tableau, where λ = (4, 2, 2, 1) ∈ Λ+ (4, 9).
(C.3o)
1 1 2 2 U= 2 4 , 3 4
1 1 1 2 P = 2 2 . 3 4 4
The parameters for the insertion of x1 = 1 are as follows (see (B.3b)): z = 3 and x1 = 1 (= p1,k(1) ), x2 = 2 (= u1,k(1) = p2,k(2) ), x3 = 4 (= u2,k(2) = p3,k(3) ), k(1) = 3,
k(2) = 2,
k(3) = 2.
It is rather easy to display the words ξ(a, t), for all (a, t) ∈ [µ] (see table C.4). By (C.3m)(i),(ii) we know that ξ(1, 4) = KU | x1 , and ξ(2, 1) = KP ; so write in these words. If (a, t) ≤ (z, µz ) then ξ(a, t) = KP by (C.3i), and this gives us ξ(3, 1) and ξ(4, 1). If (a, t) > (z, µz ), determine the pivot of ξ(a, t), using (C.3j) or (C.3k) as appropriate. For example the pivot of ξ(1, 4) is (x1 , (1, 4)+) = (x1 , (9)), and the pivot of ξ(1, 2) is (x2 , (1, 2)). For each ξ(a, t), we have underlined the first term (xa or xa+1 ) in the pivot of ξ(a, t) in table C.4. Proof of the “only if ” part of Knuth’s theorem. Suppose i ∈ I(n, r), and for each s ∈ {0, 1, . . . , r} define Ps (i) = P (i1 . . . is ) (take P0 (i) to be the empty tableau). Let s ∈ {1, 2, . . . , r} and let U = Ps−1 (i) and x = is . Then Proposition (C.3f) provides a sequence of words ξ(a, t), and hence a sequence of basic moves taking the word KPs−1 (i) | is to the word K(Ps−1 (i) ← is ) = KPs (i). Using the notation of §B.7 (the “ladder”), we may now construct a sequence of basic moves taking KPs−1 | is is+1 . . . ir to KPs | is+1 . . . ir ; we simply use the sequence of words ξ(a, t) | is+1 . . . ir in place of the ξ(a, t). Since we can do this for each s = 1, 2, . . . , r, we deduce the following fundamental proposition: (C.3p) Proposition. Given i ∈ I(n, r), there is a sequence of basic moves in I(n, r) which takes i to KP (i). It is now easy to prove (C.3e), which is the “only if” part of Knuth’s theorem (C.3a). For given i, j ∈ I(n, r) such that P (i) = P (j), we use (C.3p) to make two sequences of basic moves, one taking i to KP (i) and one taking j to KP (j) = KP (i). Then the first of these sequences, followed by the “reverse” of the second, takes i to j.
C.4 The “if” part of Knuth’s theorem
Place
(4, 1)
(3, 1)
(2, 1)
(2, 2)
(1, 1)
(1, 2)
(1, 3)
(1, 4)
(9)
Move
ξ(1, 4)
u4,1
u3,1
u2,1
u2,2
u1,1
u1,2
u1,3
u1,4
ξ(1, 3)
u4,1
u3,1
u2,1
u2,2
u1,1
u1,2
x2
x1 p1,4 = p1,3 = u1,4
K
ξ(1, 2)
u4,1
u3,1
u2,1
u2,2
u1,1
x2
p1,2
p1,3
p1,4
K
ξ(1, 1)
u4,1
u3,1
u2,1
u2,2
x2
p1,1 p1,2 = u1,1
p1,3
p1,4
−
ξ(2, 2)
u4,1
u3,1
u2,1
x3
x2 p1,1 = p2,2
p1,2
p1,3
p1,4
K
p4,1 p3,1 x3 p2,1 = u4,1 = u3,1 = p3,2
p2,2
p1,1
p1,2
p1,3
p1,4
−
ξ(3, 1)
p4,1
p3,1
p3,2
p2,1
p2,2
p1,1
p1,2
p1,3
p1,4
−
ξ(4, 1)
p4,1
p3,1
p3,2
p2,1
p2,2
p1,1
p1,2
p1,3
p1,4
−
ξ(2, 1)
x1
107
K
Table C.4. The sequence of words associated to Schensted insertion.
C.4 The “if ” part of Knuth’s theorem Let n, r be positive integers. In this section we will prove the “if” part of Knuth’s theorem (C.3a), that is: if i, j ∈ I(n, r) can be connected by a sequence of basic moves as in (C.3b), then i ∼ j (i.e. P (i) = P (j)). Clearly it will be enough to prove: (C.4a) If i and j in I(n, r) are connected by a basic move, then P (i) = P (j). For a standard tableau U and letters x1 , . . . , xk , we write U ← x1 x2 · · · xk = (· · · ((U ← x1 ) ← x2 ) · · · ) ← xk to ease the reading. We will prove the following (C.4b) Proposition (see [34, pp. 721, 722]). Let U be a standard tableau with entries drawn from n, and let a, b, c ∈ n. (1) If a < b ≤ c, then U ← bac = U ← bca. (2) If a ≤ b < c, then U ← acb = U ← cab.
108
C Schensted and Littelmann operators
This proposition implies (C.4a), by means of a simple induction on the length r of i and j. Our proof of the proposition builds on Schensted’s original description of the insertion process U ← x, which reads as follows. Let U be a µ-tableau and x be a letter. If U is the empty tableau or u1,µ1 ≤ x (so that the insertion sequence (B.3b) has length z = 1), then U ← x is obtained from U by appending the letter x to the first row of U : x U ←x=
If U is not empty and x < u1,µ1 (so that the insertion sequence (B.3b) has length z > 1), then choose k ≤ µ1 minimal with x < u1,k and set y = u1,k . The tableau U ← x has first row (u1,1 , . . . , x, . . . , u1,µ1 ) (with x in column k), ˜ ← y. Here U ˜ is the while the remaining rows of U ← x are given by U “sub-tableau” of U obtained from U by removing the first row. In illustrative terms: k x U ←x=
˜ ←y U
Of course, this description of U ← x follows directly from our description (B.3d). The idea of proof for the proposition is this. In either case (1) or (2), check that the first rows of the two tableaux shown coincide. Then consider the tableaux obtained by removing the first rows. If any of the three letters a, b, c does not bump a letter into the second row, then it is fairly easy to see that these “sub-tableaux” are equal. If all three letters a, b, c bump letters into the second row—x, y, z, say—then the letters x, y, z can be shown to satisfy either (1) or (2). We can then conclude by induction on the number of rows of U . Before we give the proof of (C.4b), let’s look at two examples. Example 1. Let n = 5, a = 1, b = 1 and c = 3 (so that part (2) of the proposition applies), and consider the tableau
C.4 The “if” part of Knuth’s theorem
109
1 1 2 3 3 4 U= 2 2 3 4 3 4 4 5 We have ˜= 2 2 3 4 . U 3 4 4 5 Let us concentrate on the first rows of U ← acb and U ← cab. We get 1 1 1 1 3 3 U ← acb = U ← 131 = ˜ U ← 243 from Schensted’s inductive description, because a = 1 bumps x = 2 from the first row of U , c = 3 bumps z = 4 from the first row of U ← a, and b = 1 bumps y = 3 from the first row of U ← ac. Similarly, we get 1 1 1 1 3 3 U ← cab = U ← 311 = ˜ U ← 423 ˜ ← 243 = U ˜ ← 423 (by induction on the Applying (C.4b)(2), it follows that U number of rows), hence also U ← 131 = U ← 311. Example 2. Let n = 5 again, a = 1, b = 1 and c = 2 (so that part (2) of proposition (C.4b) applies again), and consider the tableau 1 1 3 3 3 3 U= 2 3 4 4 4 4 5 5 5 We have ˜= 2 3 4 4 4 . U 4 5 5 5 In this case, we get 1 1 1 1 3 3 U ← acb = U ← 121 = ˜ U ← 332 from Schensted’s inductive description, because a = 1 bumps x = 3 from the first row of U , c = 2 bumps z = 3 from the first row of U ← a, and b = 1 bumps y = c = 2 from the first row of U ← ac. Similarly, we get 1 1 1 1 3 3 U ← cab = U ← 211 = ˜ U ← 323 ˜ ← 332 = U ˜ ← 323 (by This time applying (C.4b)(1), it follows that U induction on the number of rows), hence also U ← 121 = U ← 211.
110
C Schensted and Littelmann operators
Proof of Proposition (C.4b). The proof is done by induction on the number m of rows of U . If m = 0, then U is the empty tableau and (1)
U ← bac =
a c = U ← bca if a < b ≤ c, b
(2)
U ← acb =
a b = U ← cab if a ≤ b < c. c
Thus (C.4b) holds in case m = 0. Suppose m > 0. Let µ denote the shape of U and write U = (ux,y )(x,y)∈[µ] . ˜ be the tableau obtained from U by removing the first row. Furthermore, let U We shall prove the parts (1) and (2) separately. Proof of part (1) of Proposition (C.4b). Assume we have a, b, c ∈ n such that a < b ≤ c. We want to prove that U ← bac = U ← bca. To find the tableau W := U ← b, let b bump u1,l into row (2), where l is the smallest element of {1, . . . , µ1 , µ1 + 1} such that b < u1,l . This means: (i)
row (1) of W is the same as row (1) of U except at place (1, l), where w1,l = b, and ˜ =U ˜ ← y, (ii) the tableau obtained by removing the first row of W , is W where y := u1,l . (If l = µ1 + 1, so that y = ∞, we make the convention ˜ .) that inserting ∞ into row (2) has no effect on U
W =U ←b l b ˜ ←y U
U ← ba k a ˜ ← yx U
U ← bc
l b
l b
p c
˜ ← yz U
Table C.5. Inserting b, a and b, c into U when a < b ≤ c.
To find the tableaux U ← ba = W ← a and U ← bc = W ← c, define k to be the smallest element of {1, . . . , µ1 , µ1 + 1} and p to be the smallest element of {1, . . . , µ1 + 1, µ1 + 2} such that a < w1,k and c < w1,p , respectively. (The case p = µ1 + 2 only occurs when k = µ1 + 1.) Then (iii) k ≤ l (because a < b = w1,l ), and
C.4 The “if” part of Knuth’s theorem
111
(iv) l < p (because w1,l = b ≤ c < w1,p ). Make U ← ba from W by letting a bump x := w1,k into row (2); make U ← bc from W by letting c bump z := w1,p into row (2). The resulting tableaux are shown in table C.5. To find U ← bac, insert c into the tableau W := U ← ba. First find the smallest p in {1, . . . , µ1 , µ1 + 1, µ1 + 2} such that c < w1,p . But any p such that c < w1,p is > l (because w1,l = b ≤ c), and all the entries w1,s in row (1) of W such that s ≥ l, coincide with the corresponding entries w1,s in row (1) of W , because the process which takes W to W = W ← a affects only the part of row (1) to the left of (1, l). Therefore p = p , and U ← bac is shown in the left pane of table C.6. An entirely similar argument gives U ← bca, using the fact that the process which takes W to W = W ← b affects only the part of row (1) to the right of (1, l); this tableau is shown in the right pane of table C.6. We next prove that x < y ≤ z. First, to prove x < y, observe U ← bac k a ˜ ← yxz U
l b
U ← bca p c
k a
l b
p c
˜ ← yzx U
Table C.6. Inserting b, a, c and b, c, a into U when a < b ≤ c.
that (iii) gives x = w1,k ≤ w1,l = b < u1,l = y. Second, to prove y ≤ z, observe that (iv) gives y = u1,l ≤ u1,p = z. This argument is also valid when y = ∞, because then z = ∞ as well. ˜ ← yxz = U ˜ ← yzx by the induction hypothesis; and It follows that U table C.6 gives the desired result U ← bac = U ← bca. There is still one “loose end” to be tidied up! Namely it can happen that k = l; the argument above still works, but we must re-draw the first rows of the tableaux shown in table C.6. Each of these rows (which are equal) looks like k=l p c a This concludes the proof of Proposition (C.4b)(1). Proof of part (2) of Proposition (C.4b). We now assume that a ≤ b < c and show that U ← acb = U ← cab. To find the tableaux U ← a and U ← c, define k and p to be the smallest elements of {1, . . . , µ1 , µ1 + 1} such that a < u1,k and c < u1,p , respectively.
112
C Schensted and Littelmann operators
As usual, make V := U ← a from U by letting a bump x := u1,k into row (2), and make W := U ← c from U by letting c bump z := u1,p into row (2). Then k ≤ p (because a < c). We consider two cases: Case 1. k < p. Then k ≤ µ1 , hence the first row of V has µ1 entries. To find the tableau U ← ac = V ← c define p to be the smallest element of {1, . . . , µ1 , µ1 + 1} such that c < v1,p . But any p with c < v1,p is > k (because v1,k = a < c), and all the entries v1,s in row (1) of V with s > k, coincide with the corresponding entries u1,s of U (because the first rows of U and V differ only at place (1, k)). It follows that p = p , and c bumps the letter z = u1,p of V into row (2). An entirely similar argument shows that a bumps the letter x = u1,k = w1,k of W into row (2). The tableaux V ← c and W ← a are shown in table C.7. V ← c = U ← ac k a
p c
˜ ← xz U
W ← a = U ← ca k a
p c
˜ ← zx U
Table C.7. Inserting a, c and c, a into U when a ≤ b < c and k < p.
The first rows of V ← c and W ← a coincide. Hence b bumps the same letter y = v1,l = w1,l into row (2) of V ← c and W ← a. Furthermore, it follows from a ≤ b < c that k < l ≤ p. The tableaux U ← acb = V ← cb and U ← cab = W ← ab are displayed in table C.8. U ← acb k a ˜ ← xzy U
l b
U ← cab p c
k a
l b
p c
˜ ← zxy U
Table C.8. Inserting a, c, b and c, a, b into U when a ≤ b < c and k < p.
C.4 The “if” part of Knuth’s theorem
113
We next prove that x ≤ y < z. First, we have x = u1,k ≤ u1,l = y if l < p, and x = v1,k ≤ v1,p−1 ≤ c = y if l = p. Second, y = v1,l ≤ v1,p = c < u1,p = z. ˜ ← xzy = U ˜ ← zxy by the induction hypothesis; and It follows that U table C.8 gives U ← acb = U ← cab in case k < p. Case 2. k = p. Then x = u1,k = u1,p = z. To find the tableau V ← c in this case, define p to be the smallest element of {1, . . . , µ1 + 1, µ1 + 2} such that c < v1,p . But v1,k = a < c < u1,k ≤ u1,k+1 = v1,k+1 , hence p = k + 1. Therefore c bumps the letter w = v1,k+1 = u1,k+1 of V into row (2). To find the tableau W ← a, note that w1,p−1 = u1,p−1 ≤ a < c = w1,p . Therefore a bumps the letter c = w1,p of W into row (2). The tableaux V ← c and W ← a are shown in table C.9. V ← c = U ← ac
W ← a = U ← ca
k a c ˜ ← zw U
k a ˜ ← zc U
Table C.9. Inserting a, c and c, a into U when a ≤ b < c and k = p.
From v1,k = a ≤ b < c = v1,k+1 it follows that b bumps the letter c (in column k + 1) of V ← c into row (2). From w1,k = a ≤ b < c < u1,k ≤ u1,k+1 = w1,k+1 it follows that b bumps the letter w = u1,k+1 into row (2) of W ← a. The resulting tableaux V ← cb = U ← acb and W ← ab = U ← cab are shown in table C.10. U ← acb
U ← cab
k a b
k a b
˜ ← zwc U
˜ ← zcw U
Table C.10. Inserting a, c, b and c, a, b into U when a ≤ b < c and k = p.
114
C Schensted and Littelmann operators
It is clear that c < z ≤ w, because c < u1,p = z ≤ u1,p+1 = u1,k+1 = w. This argument is also valid when z = ∞, for then w = ∞ as well. Therefore ˜ ← zwc = U ˜ ← zcw, and part (1) of Proposition (C.4b) implies that U so U ← acb = U ← cab. This concludes the proof of Proposition (C.4b), hence of Knuth’s theorem (C.3a).
C.5 Littelmann operators on tableaux Suppose we have λ ∈ Λ+ (n, r), c ∈ {1, 2, . . . , n−1} and a λ-tableau P . The operator f˜c does not act on P , but it does act on the word KP . Theorem (C.5b) below will show that there is a unique tableau P˜ such that K P˜ = f˜c (KP ). It is reasonable to define f˜c (P ) to be P˜ . In (C.3g), we regarded the entries in the words KU | x1 and K(U ← x1 ) as indexed by the r-element set [µ] ∪ {(r)}. More generally, we can take any ordered r-element set T = {τ1 , τ2 , . . . , τr } such that τ1 < τ2 < · · · < τr , and use T to index the entries in a word i of length r. This means that if i is a word of length r, we write i = iτ1 iτ2 . . . iτr . In this section, it will be convenient to take T = [λ], because [λ] indexes the entries of the word KP (see §C.2). For the moment, let T = {τ1 , τ2 , . . . , τr } be an arbitrary r-element set with τ1 < τ2 < · · · < τr . Then all the definitions for f˜c given in §A.3 translate into definitions for words indexed by T in a trivial manner (we leave it to the reader to make the analogous translations for e˜c ). In case T = [λ] these definitions appear as follows. First define ωc,c+1 = ω : n → Z as in §A.3, so that ω(ν) = 1, −1 or 0 according as ν = c, c + 1 or ν ∈ / {c, c + 1}. = hKP : [λ] ∪ {0} → Z is given so that hKP (0) = 0, while The map hKP c for any t ∈ [λ] we define (C.5a) hKP (t) := ω(pa,b ), (a,b)≤t
the order ≤ being that given by (C.2a). : t ∈ [λ] }. Let M = McKP be the largest element of the set {0} ∪ { hKP c If M = 0 define f˜c (KP ) := ∞, or say that “f˜c (KP ) is undefined”. If M = 0, let q = qcKP be the least element t of [λ] such that hKP (t) = M . Then there must hold pa,b = c, where q = (a, b); see (A.3c). In this case we define f˜c (KP ) to be the word obtained from KP by changing the entry pa,b = c to c + 1; all other entries in KP are left unchanged. The next theorem shows that if f˜c (KP ) = ∞, it is possible to define a tableau f˜c (P ) in such a way that K(f˜c P ) = f˜c (KP ) 3 3
Some authors identify the tableau P with the word KP , and view theorem (C.5b) as justification of this practice. But in this Appendix we will be cautious (perhaps over-cautious!) and we do not make this identification.
C.5 Littelmann operators on tableaux
115
(C.5b) Theorem. Let λ ∈ Λ+ (n, r), c ∈ {1, 2, . . . , n − 1} and P be a λ-tableau. Using the definitions above, assume M = 0, and define q = (a, b) to be the least place (in the order (C.2a)) such that h((a, b)) = M . We know from (A.3c) that pa,b = c. Then we have also (1) If (a, b + 1) ∈ [λ], then pa,b+1 ≥ c + 1. (2) If (a + 1, b) ∈ [λ], then pa+1,b > c + 1. (3) If we change P to P˜ by changing the entry pa,b = c to p˜a,b = c + 1, and leaving unchanged all the other entries in P , we get a λ-tableau P˜ which is standard. (4) K P˜ = f˜c (KP ). Proof. (1) Since P is standard, pa,b+1 ≥ pa,b = c. If pa,b+1 < c + 1, we would KP have pa,b+1 = c. This gives hKP c ((a, b + 1)) = hc ((a, b)) + ω(c) = M + 1, contradicting the definition of M . So there must hold pa,b+1 ≥ c + 1. (2) Since P is standard, we must have pa+1,b > c. Unless pa+1,b > c + 1, we have pa+1,b = c+1. We shall show that this leads to a contradiction. Table C.11 shows the rows (a) and (a+1) of P , and their entries in certain columns. Let b ···
b − 1
b
···
b
b+1
···
a
···
c
···
c
≥c
···
a+1
···
pa+1,b +1 c + 1 · · ·
c + 1 > c + 1 ···
Table C.11. Rows (a) and (a + 1) of P .
denote the leftmost of all columns such that pa,b = c. Since a ≥ c + 1, entries in row (a + 1) to the right of column (b) are all > c + 1. The entries in the same row, in columns (b ), . . . , (b), are all equal to c + 1. This is because such an entry pa+1,b is left of pa+1,b = c + 1, and is also > pa,b = c. From the definition (C.5a) we deduce (C.5c) hKP (a, b) = hKP (a + 1, b − 1) + X + Y + Y ∗ + Z, where X =
ω(pa+1,x ),
Y =
b ≤x≤b
Y∗ =
1≤x≤b
ω(pa+1,x ),
b+1≤x≤λa+1
ω(pa,x )
Z =
ω(pa,x ).
b ≤x≤b
But for b + 1 ≤ x ≤ λa+1 all the entries pa+1,x are > c + 1, hence all the summands ω(pa+1,x ) = 0, therefore Y = 0. Similarly Y ∗ = 0 because all the elements pa,x (for 1 ≤ x ≤ b − 1) are < c. Finally X + Z = 0 because X + Z is a sum of pairs ω(c) + ω(c + 1) = 0. Therefore (C.5c) implies that hKP (a, b) = hKP (a + 1, b − 1). But this contradicts our definition
116
C Schensted and Littelmann operators
of (a, b) as the least place in [λ] such that hKP (a, b) = M . This proves part (2) of Theorem (C.5b). Part (3) is now proved, since (1) and (2) show that P˜ is standard. Then (4) follows. (C.5d) Example. Let λ = (2, 2, 0, . . . , 0), regarded as an element of Λ+ (n, 4) for some n ≥ 4. Consider the λ-tableau P = 2 2 . Then KP = 3422. Now 3 4
r
1
2
3
4
t
(2, 1)
(2, 2)
(1, 1)
(1, 2)
3
4
2
2
−1
−1
0
1
3
4
2
3
(KP )r hKP (r) (f˜c (KP ))r
Table C.12. Illustration of Theorem (C.5b).
let c = 2. Calculate f˜c (KP ) using table C.12. Notice that we have shown two set 4 and [λ], either of which can be used to index the letters of the word KP . We see that q KP = 4, or equivalently, q KP = (1, 2). Therefore f˜c (P ) = 2 3 . 3 4
C.6 The proof of Proposition B In this section we shall prove the fact, fundamental for our work, that the operation KP commutes with all the Littelmann operators f˜c . In other words, we shall prove the Proposition B. Let i ∈ I(n, r) and c ∈ {1, 2, . . . , n−1} such that f˜c (i) = ∞. Then f˜c (KP (i)) = ∞ and (C.6a) f˜c (KP (i)) = KP (f˜c (i)). For i, j ∈ I(n, r) we write iK j (respectively, iK j) if i and j are connected by a basic move of type K (respectively, K ); see (C.3c), (C.3d). The proof of Proposition B is based on the following two lemmas. (C.6b) Lemma. Let i, j ∈ I(n, r), and suppose that j is obtained from i by a basic move, say, i = (. . . , ik , ik+1 , . . .),
j = (. . . , ik+1 , ik , . . .),
ik < ik+1 .
Then M i = M j , and there are the following alternatives for q i and q j :
C.6 The proof of Proposition B
117
(a) If q i ∈ / {k, k + 1}, then q j = q i . i (b) If q = k + 1, then q j = k. (c) If q i = k, then either ik+1 = c + 1, iK j and q j = k + 2, or ik+1 = c + 1 and q j = k + 1. Proof. Set x := ik and z := ik+1 , so that x < z. We observe that (i) hjc (ν) = hic (ν) for all ν = k. This follows directly from the definition (A.3a) of hic and the fact that the words i and j are identical at all places except ν = k and ν = k + 1. Next we show that (ii) M i = M j . Suppose first that q i = k. Then M j ≥ hjc (q i ) = hic (q i ) = M i , by (i). Assume that M j > M i . Then q j = k, by (i). This implies z = c and x < c, by (A.3c)(i). It follows that M i ≥ hic (k + 1) = hjc (k + 1) = hjc (k) = M j , a contradiction. This shows M i = M j in case q i = k. Suppose now q i = k. Then x = c, by (A.3c)(i). Hence M i ≥ M j , by (i). Assume that M i > M j . Then M i > hjc (k + 1) = hic (k + 1) = M i + ωc (z), which implies z = c + 1. If iK j, that is, x < ik−1 ≤ z, then ik−1 = c + 1 and thus hic (k) = hic (k − 2) + ωc (ik−1 ) + ωc (x) = hic (k − 2). This contradicts the minimal choice of q i . If iK j, that is, if x ≤ ik+2 < z, then ik+2 = c and thus M j ≥ hjc (k + 2) = hic (k + 2) = hic (k) = M i , again a contradiction. This shows M i = M j also in case q i = k, and (ii) is proved. / {k, k + 1}. Assume q j = q i , Now, for the proof of (a), suppose that q i ∈ then q j = k, by (i), and thus z = c. It follows that x < c and therefore, by (ii), hic (k + 1) = hjc (k) = M j = M i . This implies q i < k, since q i = k, k + 1, hence also q j < k, by (i)—a contradiction. Part (a) is proved. Now let q i = k + 1. Then z = c and x < c, and we get from (ii) that hjc (k) = hjc (k + 1) = hic (k + 1) = M i = M j . It follows that q j ≤ k. In fact, by (i), q j = k. This implies (b). Consider finally the case where q i = k. Then x = c, and q j ≥ k, by (ii). Suppose additionally that z = c + 1. Then z > c + 1, and it follows that hjc (k) < hjc (k + 1) = hic (k + 1) = hic (k) = M i = M j . But this implies q j = k + 1. To conclude, let z = c + 1. Assume iK j, so that x < ik−1 ≤ z. Then ik−1 = c + 1 and hic (k) = hic (k − 2) + ωc (ik−1 ) + ωc (x) = hic (k − 2). This contradicts the minimal choice of q i . It follows that iK j as asserted, that is x ≤ ik+2 < z. Hence ik+2 = c. Direct verification gives hjc (k) = hic (k) − 2 = M i −2 = M j −2, hjc (k +1) = M j −1 and hjc (k +2) = M j . Therefore q j = k +2 as claimed in (c). (C.6c) Lemma. Let i, j ∈ I(n, r), and suppose j is obtained from i by a basic move. If f˜c (i) = ∞, then f˜c (j) = ∞, and f˜c (j) is obtained from f˜c (i) by a basic move. There is a corresponding statement (and proof ), with e˜c replacing f˜c .
118
C Schensted and Littelmann operators
Proof. Thanks to symmetry in i and j, we may assume that either there exist components y = ik−1 , x = ik , z = ik+1 of i such that (K )
i = (. . . , y, x, z, . . .),
j = (. . . , y, z, x, . . .),
x < y ≤ z,
or components x = ik , z = ik+1 , y = ik+2 of i such that (K )
i = (. . . , x, z, y, . . .),
j = (. . . , z, x, y, . . .),
x ≤ y < z.
Suppose f˜c (i) = ∞. Then M j = M i > 0, by Lemma (C.6b). This implies that f˜c (j) = ∞ as well. We now consider the three cases listed in Lemma (C.6b). / {k, k + 1}. Then q i = q j , by Lemma (C.6b). Hence x and z Case (a). q i ∈ remain unchanged when we apply f˜c to i and j. The claim follows directly if y is not changed, either. Suppose that y = c, iK j and q i = k − 1. Then we get4 f˜c (i) = (. . . , c + 1, x, z, . . .),
f˜c (j) = (. . . , c + 1, z, x, . . .),
x < c ≤ z.
However, we have z ≥ c + 1, since in case z = c we get hjc (k) = hjc (k − 1) + 1, and this contradicts the maximality of hjc (q j ). Hence f˜c (i)K f˜c (j). Suppose now that y = c, iK j and q i = k + 2. Then we get f˜c (i) = (. . . , x, z, c + 1, . . .),
f˜c (j) = (. . . , z, x, c + 1, . . .),
x ≤ c < z.
However, we have z > c + 1, since in case z = c + 1 we get hic (k) = hic (k + 2), and this contradicts the minimal choice of q i . Hence f˜c (i)K f˜c (j). Case (b). q i = k + 1. Then q j = k and z = c, hence f˜c (i) = (. . . , x, c + 1, . . .),
f˜c (j) = (. . . , c + 1, x, . . .).
If iK j, then x ≤ y < z < c + 1, therefore f˜c (i)K f˜c (j). In case iK j, we get x < y ≤ z < c + 1, hence f˜c (i)K f˜c (j). Case (c). q i = k. Here x = c, and we need to consider the alternative given in Lemma (C.6b)(c). Suppose first that z = c + 1, that iK j and q j = k + 2. Then y = c since c = x ≤ y < z = c + 1. Hence f˜c (i) = (. . . , c + 1, c + 1, c, . . .),
f˜c (j) = (. . . , c + 1, c, c + 1, . . .).
We get that f˜c (j)K f˜c (i). The case where z = c + 1 and q j = k + 1 remains. Here z > c + 1 and f˜c (i) = (. . . , c + 1, z, . . .),
f˜c (j) = (. . . , z, c + 1, . . .).
Suppose iK j, then y ≥ c + 1 since otherwise hic (k + 2) = hic (k) + 1. Therefore c + 1 ≤ y < z and f˜c (i)K f˜c (j). Similarly, if iK j, then y > c + 1, since otherwise hic (k − 2) = hic (k). Hence c + 1 < y ≤ z and f˜c (i)K f˜c (j). 4
Those values which were changed by f˜c are underlined.
C.6 The proof of Proposition B
119
We are now in a position to give the Proof of Proposition B. From Proposition (C.2c), we get P (KP (i)) = P (i). Hence, by Theorem (C.3a), there exist words i(0) , i(1) , · · · , i(k−1) , i(k) ∈ I(n, r) such that i(0) = i, i(k) = KP (i), and i(ν) is obtained from i(ν−1) by a basic move. From Lemma (C.6c), it follows that f˜c (i(ν) ) = ∞ and that f˜c (i(ν) ) is obtained from f˜c (i(ν−1) ) by a basic move, for all ν ∈ {1, . . . , k}. Applying Theorem (C.3a) again, we get P f˜c (i) = P f˜c (i(0) ) = P f˜c (i(k) ) = P f˜c (KP (i)) , hence (∗) KP (f˜c (i)) = KP (f˜c (KP (i))). There is a standard tableau P˜ such that K P˜ = f˜c (KP (i)), by Theorem (C.5b)(4). And by Proposition (C.2c)(i), KP (K P˜ ) = K P˜ . Therefore (∗) becomes KP (f˜c (i)) = K P˜ = f˜c (KP (i))).
D Theorem A and some of its consequences
In what follows, n, r are fixed positive integers.
D.1 Ingredients for the proof of Theorem A We shall prove Theorem A in the next section, but we must first study some words in I(n, r) which play a special role for the action of the Littelmann operators. To describe these words, we need the following lemma, which is an immediate consequence of the definitions in §A.3. (D.1a) Lemma. If i ∈ I(n, r) and c ∈ {1, . . . , n − 1}, then (i) f˜c (i) = ∞ if and only if #{ ν ≤ t : iν = c } ≤ #{ ν ≤ t : iν = c + 1 } for all t ∈ {1, . . . , r}, and (ii) e˜c (i) = ∞ if and only if #{ ν ≥ s : iν = c } ≥ #{ ν ≥ s : iν = c + 1 } for all s ∈ {1, . . . , r}. We are interested in the words which satisfy (i) for all c. So we set Υ := i ∈ I(n, r) : f˜c (i) = ∞ for all c ∈ {1, . . . , n − 1} . Define an operator W : I(n, r) → I(n, r) by W (i1 i2 . . . ir ) = (n + 1 − i1 , n + 1 − i2 , . . . , n + 1 − ir ). Then a word i belongs to Υ if and only if W (i) is a “lattice permutation”1 . 1 This term is rather confusing, because we shall use it for words which may not be permutations! A word j ∈ I(n, r) is called a permutation if n = r and the entries in j are 1, 2, . . . , r in some order. Lattice permutations in this sense are used by D.E. Littlewood in the character theory of the symmetric group Sym(r) (see [36, page 67]). Lattice permutations in the present sense appear in [40] and [37].
122
D Theorem A and some of its consequences
Definition. A lattice permutation, in our language, is a word j ∈ I(n, r) such that (D.1b) #{ ν ≤ s : jν = 1 } ≥ #{ ν ≤ s : jν = 2 } ≥ ··· ≥ #{ ν ≤ s : jν = n − 1 } ≥ #{ ν ≤ s : jν = n }, for all s ∈ {1, . . . , r}. For example, the word j = 11122132, an element of I(3, 8), is a lattice permutation. The word i = 33322312 belongs to Υ, because W (i) = j. Similarly, we are interested in the words which satisfy condition (ii) in Lemma (D.1a) for all c, and we set T := i ∈ I(n, r) : e˜c (i) = ∞ for all c ∈ {1, . . . , n − 1} . In (A.3g)(2), the operator B : I(n, r) → I(n, r) was defined by B(i1 i2 . . . ir−1 ir ) = ir ir−1 · · · i2 i1 . Thus a word i belongs to T if and only if B(i) is a lattice permutation. Define an operator C : I(n, r) → I(n, r) by C = BW = W B. Explicitly, C(i1 i2 . . . ir−1 ir ) = (n + 1 − ir , n + 1 − ir−1 , . . . , n + 1 − i2 , n + 1 − i1 ). Remarks. (i) All these operators have square equal to the identity in I(n, r). (ii) If i ∈ I(n, r) and Sch(i) = (λ(i), P (i), Q(i)), then λ(i) is the shape of i (see §C.1). The operator C preserves shape (i.e. λ(Ci) = λ(i), see (D.3g)), but the operators B and W do not. For example, using the tables in §E.1, we see that i = 221 has shape (2, 1, 0), but B(i) = 122 and W (i) = 223 both have shape (3, 0, 0). However, C(i) = 322 has the same shape as i. (D.1c) Lemma. The operator C induces a bijection T → Υ. Hence |T| = |Υ|. Proof. Let i ∈ T. Then B(i) is a lattice permutation, and W (C(i)) = B(i). This shows that C(i) ∈ Υ. Prove similarly that i ∈ T implies that C(i) ∈ Υ. From now on in this section, we fix λ ∈ Λ+ (n, r). The tableaux Tλ and Zλ . Define two λ-tableaux as follows: (D.1d) Tλ = (Ts,t )(s,t)∈[λ] where Ts,t = s for all (s, t) ∈ [λ]; we denote the word KTλ by iλ . (D.1e) Zλ = (Zs,t )(s,t)∈[λ] where Zs,t = n − βt + s for all (s, t) ∈ [λ], and βt denotes the length of column t of Zλ ; we denote the word KZλ by iλ .
D.1 Ingredients for the proof of Theorem A
123
Example. If λ = (5, 3, 2, 0, 0) ∈ Λ+ (5, 10), then 1 1 1 1 1 3 3 4 5 5 and Zλ = 4 4 5 . (D.1f ) Tλ = 2 2 2 3 3 5 5 Notice that Tλ is our old friend from (4.3b), where it is called Tl . It is useful to think of Zλ as the tableau obtained from Tλ by subjecting it to two successive operations: first reverse each column of Tλ , and secondly replace each entry x in the tableau by n + 1 − x. In our example, 1 1 1 1 1 −→ Tλ = 2 2 2 3 3
3 3 2 1 1 −→ 2 2 1 1 1
3 3 4 5 5 = Zλ . 4 4 5 5 5
Notation. Define Q(λ) to be the set of all (standard) λ-tableaux whose entries are 1, 2, . . . , r in some order. Recall from (C.1e) that I(Q, ≈) is the set of all words i ∈ I(n, r) such that Q(i) = Q, for each Q ∈ Q(λ). These sets are the equivalence classes for ≈. (D.1g) Theorem. Let Q ∈ Q(λ). Then: (i) There is a unique word i ∈ I(n, r) such that Q(i) = Q and i belongs to T. Moreover P (i) = Tλ . (ii) There is a unique word i ∈ I(n, r) such that Q(i) = Q and i belongs to Υ. Moreover P (i) = Zλ . (D.1h) Notation. For each Q ∈ Q(λ), denote the word i in (i) by iQ , and the word i in (ii) by iQ . Proof of Theorem (D.1g). (i) By Schensted’s Theorem (B.6a) there is a unique word i such that P (i) = Tλ and Q(i) = Q. We claim that this i belongs to T. Let c ∈ {1, 2, . . . , n}. From §A.3, we know that e˜c (i) = ∞ if and only if the function hic attains its maximum Mci at the last place in the word i, i.e. hic (r) = Mci . Now hic (r) is the sum of the ω(iν ) for ν = 1, 2, . . . , r (see (A.3a)). But the r entries in the word KP (i) form a permutation of KP (i) the r entries of i. Hence hc (r) = hic (r) = Mci . By Lemma (C.6b) and Proposition (C.3p), the words i and KP (i) give the same maximum, KP (i) KP (i) i.e. Mci = Mc . We can calculate Mc = McKTλ easily; it is λc − λc+1 , and it is attained at the last place (1, λ1 ) of KP (i). Therefore the maximum Mci of hic is also attained at the last place of i. This shows that e˜c (i) = ∞ for all c, hence i ∈ T. Now we must prove uniqueness: if j ∈ I(Q, ≈) ∩ T, then j = i. It is enough ec (j)), by Proposito prove that P (j) = Tλ . We know e˜c (KP (j)) = KP (˜ KP (j) tion B, hence e˜c (KP (j)) = ∞ for all c. So the height function hc takes its maximum value at “place r”, i.e. at place (1, λ1 ) ∈ [λ].
124
D Theorem A and some of its consequences
Consider the last entry t in the first row of P (j); we must show that t = 1. If this is false, then t > 1. Consider the height functions for c = t − 1. The entry in P (j) at place (1, λ1 ) (which corresponds to place r in the word KP (j)) KP (j) KP (j) is c + 1. So we have hc (r) < hc (r − 1), a contradiction. Hence all entries of the first row of P (j) are equal to 1. Next consider the last entry, t say, in the sth row of P (j). We have t ≥ s since P (j) is standard. KP (j) Suppose t > s, and set c = t − 1. As before, the height function hc does not take its maximum value at r: it is constant on the letters of rows 1 up to s − 1, and its value at the last place of row s, say x, is less than its value at the place immediately preceding this last place. This is a contradiction. We have proved that P (j) = Tλ , and since we have assumed Q(j) = Q, the words j and i must be equal. If we let λ vary over all partitions in Λ+ (n, r), then this shows that |T| is equal to the number of standard tableaux having entries 1, 2, . . . , r in some order; this is also the total number of ≈-classes in I(n, r). (ii) By Schensted’s theorem (B.6a) there is a unique word i ∈ I(n, r) with Q(i) = Q and P (i) = Zλ . Using Lemma (C.6b) and Proposition B, as in the proof of (i), it is quite easy to see that i ∈ Υ. Therefore each ≈-class of words of shape λ contains at least one element of Υ. But as was noted above, if we let λ vary, the number of ≈-classes is |T|, which is equal to |Υ| by Lemma (D.1c). This implies that each ≈-class contains a unique element of Υ; this must be the word i having P (i) = Zλ and Q(i) = Q. This completes the proof of Theorem (D.1g). (D.1i) Proposition. iλ = iQ(λ) and iλ = iQ(λ) . Proof. Taking Y = Tλ in propositions (C.2c) and (C.2h), we get P (iλ ) = Tλ and Q(iλ ) = Q(λ) . However (D.1g) and (D.1h) say that P (iQ(λ) ) = Tλ and Q(iQ(λ) ) = Q(λ) . Therefore iλ = iQ(λ) , by Schensted’s theorem (B.6a). A similar proof, using Zλ in place of Tλ , gives iλ = iQ(λ) .
D.2 Proof of Theorem A We shall now prove the Theorem A described in the introduction (see (A.4a)): (D.2a) Theorem A. Let i, j ∈ I(n, r). Then i ≈ j if and only if there is a finite sequence of words (elements of I(n, r)): i(1), i(2), . . . , i(s) such that i(1) = i, i(s) = j and for each adjacent pair i(ν), i(ν + 1) either there exists an element c ∈ {1, . . . , n − 1} such that f˜c (i(ν)) = i(ν + 1), or there exists an element c ∈ {1, . . . , n − 1} such that e˜c (i(ν)) = i(ν + 1).
D.2 Proof of Theorem A
125
It is clear that the “if” part of this theorem is equivalent to the following (D.2b) Proposition. If i, j ∈ I(n, r) and c ∈ {1, . . . , n − 1} such that either f˜c (i) = j, or e˜c (i) = j, then Q(i) = Q(j). Proof. Suppose there is an element c ∈ {1, . . . , n − 1} such that j = f˜c (i). This implies that f˜c (i) = ∞. (a) We claim that Q(i) and Q(j) have the same shape. Equivalently, we claim that P (i) and P (j) have the same shape. Let P (i) have shape λ. By Proposition B (see (C.6b)) we know that KP (j) = KP (f˜c (i)) = f˜c (KP (i)). Now take P = P (i) in Theorem (C.5b). This says that there is a λ-tableau P˜ such that K P˜ = f˜c (KP ). Therefore KP (j) = K P˜ . This shows that P (j) has the same shape λ as P˜ , which is the shape of P (i). This proves claim (a). (b) We shall use induction on r to prove that Q(i) = Q(j). If r = 1 then i and j are one-letter words, and Q(i) = Q(j) follows from (B.2b). Assume now that r > 1. Write i = i ir and j = j jr , where i = i1 · · · ir−1 and j = j1 · · · jr−1 lie in I(n, r − 1). There is a place q ∈ {1, 2, . . . , r} such that iq = c, jq = c+1 and iν = jν for all ν = q (see (A.3e)). We either have q < r, or q = r. If q < r, then j = f˜c (i ), hence Q(i ) = Q(j ) by the induction hypothesis. If q = r, then j = i and clearly Q(i ) = Q(j ). It follows that Q(i ) and Q(j ) have the same shape, µ say, in either case. Let λ be the shape of Q(i). By (a), λ is also the shape of Q(j). To get Q(i) from Q(i ), one puts r into the unique place which, when added to [µ], gives [λ]. To get Q(j) from Q(j ) = Q(i ), one puts r in the unique place which, when added to [µ], gives [λ]. Hence Q(j) = Q(i). This completes the proof of Proposition (D.2b); and this proves the “if” part of Theorem A. Proof of the “only if ” part of Theorem A. We assume that i, j ∈ I(n, r) are such that Q(i) = Q(j); we must prove that there exists a sequence of words i = i(1), (2), . . . , i(s) = j with the properties listed in (D.2a). First make the (D.2c) Definition. Let i ∈ I(n, r). We define the size of i, denoted sz(i), by sz(i) := i1 + i2 + · · · + ir . This is a positive integer, and from the definitions of f˜c , e˜c it is clear that if f˜c (i) = ∞ then sz(f˜c (i)) = sz(i)+1, and if e˜c (i) = ∞ then sz(˜ ec (i)) = sz(i) − 1. We make the convention sz(∞) = 0. Let λ ∈ Λ+ (n, r) and Q ∈ Q(λ). Let w ∈ I(Q, ≈) (see (C.1e)), and let S(w) denote the set of all words of the form e˜c1 e˜c2 · · · e˜ct (w), where c1 , c2 , . . . , ct are arbitrary elements of {1, 2, . . . , n − 1}. We allow that t may be zero, in which case e˜c1 e˜c2 · · · e˜ct (w) = w. In general, e˜c1 e˜c2 · · · e˜ct (w) has size sz(w)−t. Let S be minimal amongst the sizes of the elements of S(w), and choose an element w := e˜c1 e˜c2 · · · e˜ct (w) of S(w) of size S (there may be many such w ). Then e˜c (w ) = ∞ for all c ∈ {1, 2, . . . , n − 1}, because if e˜c (w ) = ∞, then e˜c (w ) would be an element of S(w) of size S − 1.
126
D Theorem A and some of its consequences
By Proposition (D.2b), all the elements of S(w) lie in I(Q, ≈). But then Theorem (D.1g) tells us that w = e˜c1 e˜c2 · · · e˜ct (w) = iQ . This implies that w = f˜c · · · f˜c f˜c (iQ ). In other words, any element w ∈ I(Q, ≈) can be t
2
1
f˜
joined to i by a finite sequence of steps of the form i(ν) −→ i(ν + 1); equivae˜ lently iQ can be joined to w by a sequence of steps of the form i(ν+1) −→ i(ν). So given i, j ∈ I(Q, ≈), we can join i to j by a sequence of the type described in the statement of Theorem A, by first joining i to iQ and then joining iQ to j. This completes the proof of Theorem A. The arguments just given, together with Theorem (D.1g), provide valuable information on the ≈-classes. We summarize this in the Q
(D.2d) Proposition. Let λ ∈ Λ+ (n, r) and Q ∈ Q(λ). Then: (i) There is a unique word iQ in I(Q, ≈) lying in T, i.e. such that e˜c (i) = ∞ for all c ∈ {1, . . . , n − 1}. This word is specified by Sch(iQ ) = (λ, Tλ , Q). (ii) There is a unique word iQ in I(Q, ≈) lying in Υ, i.e. such that f˜c (i) = ∞ for all c ∈ {1, . . . , n − 1}. This word is specified by Sch(iQ ) = (λ, Zλ , Q). (iii) The following three conditions on a word i ∈ I(n, r) are equivalent (i.e. each condition implies the other two): (1) i ∈ I(Q, ≈). (2) There exist c1 , . . . , ct ∈ {1, . . . , n − 1} such that i = f˜c1 · · · f˜ct (iQ ). (3) There exist d1 , . . . , ds ∈ {1, . . . , n − 1} such that i = e˜d1 · · · e˜ds (iQ ). In (2) and (3), we allow t and s to be = 0, respectively. In these cases we interpret f˜c1 · · · f˜ct (iQ ) to be iQ and e˜d1 · · · e˜ds (iQ ) to be iQ , respectively. Proof. All the statements above can be deduced easily from Theorem (D.1g), Theorem (D.2a) (i.e. Theorem A) and the proof of Theorem (D.2a). Weights. Remember (see (A.3g)(3), or §3.1) that the weight wt(i) of a word i is the n-vector (w1 , . . . , wn ), where for each ν ∈ n, wν is the number of ρ ∈ r such that iρ = ν. Classical representation theory of GLn (C), which can be regarded as a sequel to classical invariant theory, uses weights extensively—they describe the (polynomial) representations Kλ of the diagonal subgroup Tn (C), see §3.2; then these are “induced” to give irreducible (polynomial) representations of GLn (C), see the end of Chapter 4. (D.2e) Remark. It is clear that wt(i) = wt(j), if i, j ∈ I(n, r) are such that j = iπ for some π in the symmetric group Sym(r) (the symmetric group is denoted G(r) in §2.1). In particular, wt(i) = wt(KP (i)) for any i ∈ I(n, r), because the entries in KP (i) are the same as the entries in i, apart from a place permutation π ∈ Sym(r). In the classical representation theory of GLn (C), which is essentially the representation theory of the Schur algebra S(n, r), the (isomorphism types of) simple modules are indexed by dominant weights, i.e. by the elements of Λ+ (n, r). We shall see in §D.4 that this holds also for the (isomorphism
D.3 Properties of the operator C
127
types of) simple modules for the Littelmann algebra L = L(n, r), although the argument is different from that which applies to S(n, r). The weights of the elements of I(Q, ≈) have properties given in the next proposition. (D.2f ) Proposition. Let λ ∈ Λ+ (n, r) and Q ∈ Q(λ), then (i) wt(iQ ) = λ = (λ1 , . . . , λn ), (ii) wt(iQ ) = (λn , . . . , λ1 ), (iii) iQ (respectively, iQ ) is the only word in I(Q, ≈) having weight (λ1 , . . . , λn ) (respectively, (λn , . . . , λ1 )), and (iv) the weight ω of any word in I(Q, ≈) satisfies the inequalities (λ1 , . . . , λn ) ω (λn , . . . , λ1 ). (If ξ, η ∈ Λ(n, r), we write ξ η to mean that the difference ξ − η lies in the set U = α∈Σ Z+ α; see [33, page 3]). Proof. (i) From (D.1g)(i) we know that P (iQ ) = Tλ . Therefore wt(iQ ) is the same as the weight of KTλ (see Remark (D.2e)). It is very easy to see that wt(KTλ ) = (λ1 , . . . , λn ). (ii) In the same way, we deduce from (D.1g)(ii) that wt(iQ ) is the same as the weight (u1 , . . . , un ) of KZλ . So for each δ ∈ n, uδ is the number of pairs (s, t) ∈ [λ] such that n − βt + s = δ. For each t ∈ n, there is exactly one entry δ in column t of Zλ , if and only if 1 ≤ βt − (n − δ). Therefore uδ equals the number of columns of Zλ of lengths greater than or equal to n + 1 − δ. But this number is λn+1−δ . (iii) and (iv) Let i be any word in I(Q, ≈). By (D.2d) we know that there exist integers c1 , . . . , ct in {1, . . . , n − 1} such that i = f˜c1 · · · f˜ct (iQ ). From (A.3g)(3), we know that wt(i) = wt(iQ ) − αc1 ,c1 +1 − · · · − αct ,ct +1 . Therefore i iQ ; moreover the case wt(i) = wt(iQ ) = (λ1 , . . . , λn ) occurs only if t = 0 i.e. only if i = iQ . A similar argument shows that the weight of any word i ∈ I(Q, ≈) is wt(iQ ), with equality only if i = iQ .
D.3 Properties of the operator C First we want to understand how the action of C is related to the action of the Littelmann operators. Comparing the height functions of i and Ci. Fix i ∈ I(n, r), and consider the height function hic for some c ∈ {1, 2, . . . , n − 1}. This depends on the iν which are equal to c or c + 1. The operator C turns c and c + 1 into n − c + 1 and n − c, respectively. This suggests comparing hic with hCi n−c . Take some s ∈ {1, . . . , r}, then by definition (D.3a) hic (s) = #{ ν ≤ s : iν = c } − #{ ν ≤ s : iν = c + 1 }.
128
D Theorem A and some of its consequences
Now write Ci = j1 . . . jr for a moment and consider (D.3b) hCi n−c (r − s) = #{ ρ ≤ r − s : jρ = n − c } −#{ ρ ≤ r − s : jρ = n − c + 1 }. We have jρ = n−ir−ρ+1 +1 for all ρ ∈ {1, . . . , r}. Furthermore, n−iν +1 = n−c if and only if iν = c + 1, and n − iν = n − c if and only if iν = c, for all ν ∈ {1, . . . , r}. So (D.3b) gives (D.3c) hCi n−c (r − s) = #{ ν ≥ s + 1 : iν = c + 1 } − #{ ν ≥ s + 1 : iν = c }. By using the notation: Πb := #{ ν ∈ Π : iν = b }, for every subset Π of {1, . . . , s}, and every element b of n, formula (D.3c) becomes (i) hCi n−c (r − s) = −{s + 1, . . . , r}c + {s + 1, . . . , r}c+1 . Also, by definition of the height function hic , we have (ii) hic (s) = {1, . . . , s}c − {1, . . . , s}c+1 . If we subtract (i) from (ii) we get (D.3d) If i ∈ I(n, r) and Y = hic (r), then hic (s) − hCi n−c (r − s) = Y for all s ∈ {0, . . . , r}. Example. Let n = 3, r = 5, c = 1, and consider i = 22111 ∈ I(n, r). Then Ci = 33322 and n − c = 2. The height functions hi1 and hCi 2 are hi1 (0), hi1 (1), hi1 (2), hi1 (3), hi1 (4), hi1 (5) = ( 0, −1, −2, −1, 0, 1),
Ci Ci Ci Ci Ci hCi (5), h (4), h (3), h (2), h (1), h (0) = (−1, −2, −3, −2, −1, 0). 2 2 2 2 2 2
Note that hi1 has the maximum at place r and hCi 2 has maximum value zero. (D.3e) Lemma. Let c ∈ {1, 2, . . . , n − 1}. Then, for each i ∈ I(n, r), we have C(˜ ec (i)) = f˜n−c (Ci) and C(f˜c (i)) = e˜n−c (Ci). Proof. Since C 2 is the identity, the second part follows from the first. We prove the first part. By (D.3d), we have a geometric description of how the height functions are ˜ = hCi , one reflects the graph of h in the related: given h = hic , then to find h n−c ˜ vertical line x = r, and translates it in the “y-axis” direction so that h(0) = 0. i Explicitly, let s + t = r and Y = hc (r), then i hCi n−c (t) = hc (s) − Y.
D.4 The Littelmann algebra L(n, r)
129
˜ is a reflection about a vertical line, the last maximum of hi (at Since h c place q¯), becomes the first maximum of hCi ¯). Furthermore, n−c (at place r − q ˜ is zero. This shows that if e˜c (i) = ∞ if q¯ = r then the maximum of h ˜ then fn−c (Ci) = ∞. Assume now that q¯ < r. By (A.3c) we know that iq¯+1 = c + 1, and e˜c (i) is obtained from i by replacing iq¯+1 = c + 1 by c. We get the word C(˜ ec (i)) = · · · (n − c + 1)(n − iq¯ + 1) · · · where the letters shown are at places r − q¯ and r − q¯ + 1. Now consider C(i) = · · · (n − c)(n − iq¯ + 1) · · · where the letters shown ˜ assumes its maximum at are at places r − q¯ and r − q¯ + 1. We know that h ˜ place r − q¯ for the first time. So fn−c (Ci) replaces the letter n−c at place r − q¯ ec (i)). by n − c + 1. Hence f˜n−c (Ci) is equal to C(˜ From the definition of C we see immediately the following. (D.3f ) Lemma. If a word i ∈ I(n, r) has weight µ = (µ1 , . . . , µn ), then the word Ci has weight (µn , . . . , µ1 ). We want to show now: (D.3g) Lemma. The operator C preserves the shape. Proof. Let i ∈ I(n, r) and Q(i) = Q, and suppose Q has shape λ. Assume first that i = iQ , then C(i) = iR for some standard tableau R. By (D.2f) we can identify the shapes of the words iQ and iR from their weights. The weight of iQ is λ, hence the weight of C(iQ ) is (nλ1 , (n − 1)λ2 , . . .). So iR also has shape λ. In general, by (D.2d)(iii), there are c1 , . . . , ct ∈ {1, 2, . . . , n − 1} such that i = f˜c1 · · · f˜ct (iQ ). From (D.3e) it follows that C(i) = e˜n−c1 · · · e˜n−ct (CiQ ). But we have already seen that CiQ has shape λ; now Proposition B implies that C(i) also has shape λ. (D.3h) Remark. The operator C does not preserve the Q-symbol in general. But it gives a pairing on the set of standard tableaux of the same shape.
D.4 The Littelmann algebra L(n, r) (D.4a) Let V be an n-dimensional vector space over a field F with basis v1 , . . . , vn . Then the r-fold tensor product V ⊗r has basis {vi : i ∈ I(n, r)}. (In §§1–6, V ⊗r is called E ⊗r .) For each α ∈ Σ, where α = αc,c+1 (see §A.3), we let f˜c and e˜c act on the tensor space by linear maps, defining
130
D Theorem A and some of its consequences
f˜c vi := vf˜c i ,
e˜c vi := ve˜c i
and using linear extension. We set v∞ := 0. Let L = L(n, r) be the subalgebra of EndF (V ⊗r ) generated by these linear maps f˜c and e˜c , for c ∈ {1, 2, . . . , n − 1}. This algebra will be called the Littelmann algebra. (D.4b) As an F -space, the Littelmann algebra L is spanned the set of all monomials m = m1 m2 . . . mt of lengths t ≥ 1, where each mτ is either f˜c or e˜c (for some c ∈ {1, 2, . . . , n − 1}). We do not include the monomial m = 1EndF V ⊗r of length zero. But it may happen that L does contain this element (see Proposition (D.4e), below). (D.4c) An element H ∈ EndF (V ⊗r ) will often be described by its matrix (Hi,j )i,j∈I(n,r) , whose entries Hi,j ∈ F are defined by the equations (D.4d) Hvj = i∈I(n,r) Hi,j vi , all j ∈ I(n, r). We often identify H with its matrix (Hi,j )i,j∈I(n,r) , and we often identify f˜c , e˜c with the elements of EndF (V ⊗r ) defined by (D.4a). (D.4e) Proposition. L has an identity element, viz. DS , the diagonal matrix having (DS )i,i = 1, 0 according as i ∈ S or not; here S := I(n, r) \ (Υ ∩ T). Reminder: from §D.1 we have Υ = i ∈ I(n, r) : f˜c (i) = ∞ for all c ∈ {1, 2, . . . , n − 1} and T=
i ∈ I(n, r) : e˜c (i) = ∞ for all c ∈ {1, 2, . . . , n − 1} .
Proof of Proposition (D.4e). For any subset A of I(n, r), define DA to be the element of EndF (V ⊗r ) whose matrix with respect to the basis {vi : i ∈ I} is / A. The following diagonal, and (DA )ii = 1 or 0, according as i ∈ A or i ∈ facts are easily checked. DI(n,r) is the identity element of EndF (V ⊗r ). (ii) For any c, the matrix of f˜c e˜c is equal to DZ(c) , where Z(c) is the set of all i such that e˜c (i) = ∞. Similarly e˜c f˜c = DY (c) , where Y (c) is the set of all i such that f˜c (i) = ∞. (i)
(iii) If A, B ⊆ I(n, r), then DA DB = DA∩B and DA∪B = DA + DB − DA∩B . (iv) If A1 , . . . , Aw ⊆ I(n, r) such that DAt ∈ L for all t = 1, 2, . . . , w, then DA ∈ L where A = A1 ∪ A2 ∪ . . . ∪ Aw . Now check that I(n, r) \ T = c Z(c) and I(n, r) \ Υ = c Y (c), hence S = I(n, r) \ (Υ ∩ T) =
c
Z(c) ∪
c
Y (c).
D.5 The modules MQ
131
The partition I(n, r) = S ∪ (Υ ∩ T) of I(n, r) allows us to decompose each linear operator H ∈ EndF (V ⊗r ) in matrix form as H (1,1) H (1,2) H= , H (2,1) H (2,2) with H (1,1) ∈ EndF (S, S), H (1,2) ∈ HomF (Υ ∩ T, S), H (2,1) ∈ HomF (S, Υ ∩ T) and H (2,2) ∈ EndF (Υ ∩ T). For each c ∈ {1, 2, . . . , n − 1}, it is easy to verify the following facts. (v) If H = e˜c , or if H = f˜c , then H (1,2) , H (2,1) , H (2,2) are all zero matrices; also f˜c is the transpose of e˜c . (vi) If H ∈ L, then H (1,2) , H (2,1) , H (2,2) are all zero and H (1,1) 0 H= . 0 0 (vii) DS is the matrix shown, with H (1,1) the identity matrix. Proposition (D.4e) follows from these facts. (D.4f ) Example. If n = r = 2, then I(n, r) = {11, 12, 21, 22} = S ∪ (Υ ∩ T), where S = {11, 12, 22}, and Υ ∩ T = {21}. We have ⎞ ⎞ ⎞ ⎛ ⎛ ⎛ 0 1 0 0 0 0 0 0 1 0 0 0 ⎜0 0 1 0⎟ ⎜1 0 0 0⎟ ⎜0 1 0 0⎟ ⎟ ⎟ ⎟ ⎜ ⎜ ⎜ e˜1 = ⎜ ⎟ , f˜1 = ⎜ ⎟ , DS = ⎜ ⎟. ⎝0 0 0 0⎠ ⎝0 1 0 0⎠ ⎝0 0 1 0⎠ 0 0 0 0 0 0 0 0 0 0 0 0
D.5 The modules MQ Let λ ∈ Λ+ (n, r). For each standard tableau Q in Q(λ) we define MQ to be the subspace of V ⊗r which has F -basis all vi such that Q = Q(i), that is, all i in Iλ (Q, ≈). By Proposition B, this is an L-submodule of V ⊗r . We get therefore a direct sum decomposition of the tensor space V ⊗r into L-submodules MQ . V ⊗r = λ∈Λ+ (n,r) Q∈Q(λ)
(D.5a) MQ = LviQ = LviQ . This follows from (D.2d). For z = i ξi vi ∈ V ⊗r , define the support of z to be supp(z) = { i ∈ I(n, r) : ξi = 0 }.
132
D Theorem A and some of its consequences
(D.5b) Lemma. If z = i ξi vi and c ∈ {1, 2, . . . , n − 1} such that e˜c z = 0, then supp(z) lies in the set I(n, r) \ Z(c). Proof. Let U := { i : e˜c (i) = U := { i : e˜c (i) = ∞ }. ∞ } = Z(c) and Then z = z + z , where z = i∈U ξi vi and z = i∈U ξi vi . We have by assumption ξi ve˜c (i) = e˜c z = 0. (∗) i∈U
But for each i ∈ U , e˜c (i) = ∞, hence f˜c e˜c (i) = i. Applying f˜c to (∗), we get 0 = i∈U ξi vi . But the vi are linearly independent, so all the ξi (for i ∈ U ) are zero. Therefore z = 0 which shows that z = z has support in U = I(n, r) \ Z(c). (D.5c) Corollary. If z ∈ V ⊗r is annihilated byall e˜c , c ∈ {1, 2, . . . , n − 1}, then supp(z) lies in c I(n, r) \ Z(c) = I(n, r) \ c Z(c) = T. Similarly, if z ∈ V ⊗r isannihilated by all f˜c , c ∈ {1, 2, . . . , n − 1}, then supp(z) lies in I(n, r) \ c Y (c) = Υ. There may be a word i ∈ I(n, r) such that vi is annihilated by the algebra L. According to Proposition (D.2d), we must have iQ = i = iQ in this case, and the ≈-class of i consists of i alone. From (D.2f), the shape λ of Q has the property (λ1 , . . . , λn ) = (λn , . . . , λ1 ), hence λ = (k n ) (and r = nk). In this case MQ = F vi . There may be more than one such i ∈ Iλ (n, r). For example, I(2,2) (2, 4) contains i = (2211) and j = (2121) in Υ ∩ T. As a consequence, we have to allow L-modules which are not unital. An L-module M is then defined to be an F -space on which L acts by linear transformations so that x(ym) = (xy)m, for all x, y ∈ L and m ∈ M . The L-module M is defined to be simple (= irreducible) if either (1) LM = 0 and M is a simple F -space, that is, has F -dimension 1, or (2) M = 0, the element DS of L acts as the identity on M , and M has no L-submodules except M and {0}. We aim to show that the modules MQ are simple as L-modules. To do so, it is helpful to exploit a subalgebra of L. (D.5d) The involutory anti-automorphism J of L = L(n, r). At this point we find a strong similarity with the involutory anti-automorphism J of S(n, r) defined in §2.7. Define a symmetric bilinear map , on V ⊗r by the rule vi , vj = δi,j for all i, j ∈ I(n, r). Given H ∈ EndF (V ⊗r ), we defined in (D.4c), (D.4d) its matrix (Hi,j ). Now define J(H) ∈ EndF (V ⊗r ) by the rule: the matrix of J(H) is the transpose of the matrix of H.
D.5 The modules MQ
133
By (D.4d), Hi,j = Hvj , vi for all i, j ∈ I(n, r). Replacing H by J(H), we get J(H)i,j = J(H)vj , vi . But by definition, J(H)i,j = Hj,i = Hvi , vj . Therefore Hvi , vj = J(H)vj , vi = vi , J(H)vj for all i, j ∈ I(n, r). Equivalently, (D.5e) Hv, w = v, J(H)w for all v, w ∈ V ⊗r . We may use (D.5e) as a definition of J(H) when H is given. From the elementary properties of transposed matrices we see that the linear map J : EndF V ⊗r → EndF V ⊗r is involutory (i.e. J 2 is the identity) and is an anti-automorphism (i.e. J(H1 H2 ) = J(H2 )J(H1 ) for all H1 , H2 ). But from our present viewpoint the important fact is (D.5f ) For all c ∈ {1, 2, . . . , n − 1} there holds J(f˜c ) = e˜c and J(˜ ec ) = f˜c . These facts follow from the properties stated in (A.3g)(4). We leave the details as an exercise for the reader. But from (D.5f) we see that J maps L into itself, so it gives a map J : L → L which is an involutory anti-automorphism of the algebra L = L(n, r). (D.5g) Take a (total) order ≤ on the set I(n, r) such that sz(i) ≤ sz(j) implies that i ≤ j for all words i, j in I(n, r). Using such an order, the matrix of e˜c is upper triangular, and the matrix of f˜c is lower triangular. We have therefore: (D.5h) Corollary. Let L+ be the subalgebra of L, generated by the elements e˜c , c ∈ {1, 2, . . . , n − 1}. Then L+ is nilpotent. (D.5i) Lemma. The module MQ is simple. Note that this holds for arbitrary fields F . Proof. This is clear if MQ is an 1-dimensional module which is annihilated by L, so we assume that this is not the case. We fix i = iQ , then vi generates MQ as an L-module, by (D.2a). Let 0 = x ∈ MQ , it suffices to show that vi lies in the L-submodule generated by x. To do so, we consider the L+ -submodule of MQ generated by x. By (D.5h), this submodule contains some non-zero element z such that e˜c z = 0 for all c. By (D.5c), the support of z lies in T. But it also lies in I(Q, ≈) since z ∈ MQ . It follows that z is a scalar multiple of vi , by (D.2d)(i). We assumed z is non-zero, hence viQ lies in the submodule generated by x. (D.5j) It follows by a well-known theorem on finite dimensional algebras (or on rings with minimum condition; see e.g. [11, Theorem (25.2), page 164]), that L is semisimple.
134
D Theorem A and some of its consequences
Furthermore, every unital simple L-module M occurs as a submodule (and hence as a summand) of V ⊗r . Take any non-zero element x ∈ M . Then M = Lx, and the map θ : L → M which takes u → ux is an epimorphism of L-modules. But since L is semisimple, it follows that θ maps some simple submodule N of L isomorphically . And the simple sub onto M modules of L are submodules of V ⊗r = λ∈Λ+ (n,r) Q∈Q(λ) MQ (see the displayed formula, above (D.5a)). This shows that every unital simple L-module is isomorphic to MQ for some Q ∈ Q(λ), for some λ ∈ Λ+ (n, r). To classify the simple L-modules we must find out when MQ and MR are isomorphic.
D.6 The λ-rectangle Fix λ ∈ Λ+ (n, r) and use the following notation: (D.6a) P(λ) = {P1 , P2 , . . . , Pdλ } is the set of all standard λ-tableaux whose entries all lie in n, and (D.6b) Q(λ) = {Q1 , Q2 , . . . , Qfλ } is the set of all standard λ-tableaux whose entries are {1, 2, . . . , r} in some order (see D.1). Definition. If P ∈ P(λ) and Q ∈ Q(λ), let P : Q denote2 the word i ∈ I(n, r) such that P (i) = P and Q(i) = Q. In the notation of B.7, (D.6c) P : Q = M(λ, P, Q) = Sch−1 (λ, P, Q). It is useful to display the set of words of shape λ in the following “λ-rectangle” P1 : Q1 P1 : Q2 P2 : Q1 P2 : Q2 (D.6d)
.. .
··· ···
.. .
Pdλ : Q1 Pdλ : Q2
P1 : Qfλ P2 : Qfλ .. .
...
Pdλ : Qfλ
This rectangle has the following properties: (D.6e) (i) Every element of Iλ (n, r) appears once and only once in (D.6d) (see (B.6a)). (ii) The hth row {Ph : Q1 , . . . , Ph : Qfλ } is the ∼-class Iλ (Ph , ∼), for each h ∈ {1, . . . , dλ } (see (C.1d)). (iii) The k th column {P1 : Qk , . . . , Pdλ : Qk } is the ≈-class Iλ (Qk , ≈), for each k ∈ {1, . . . , fλ } (see (C.1e)). 2
Not to be confused with the bideterminant (Ti : Tj ) defined in (4.3a).
D.7 Canonical maps
135
(D.6f ) From now on we shall arrange the notation in the λ-rectangle (D.6d) so that P1 = Tλ and Q1 = Q(λ) . Recall from (C.2i) that Q(λ) ∈ Q(λ) has the property: an element i ∈ Iλ (n, r) has Q(i) = Q(λ) if and only if i = KP (i).
D.7 Canonical maps Fix λ ∈ Λ+ (n, r) again. The entries P : Q in (D.6d) are elements of I(n, r). From now on we shall make the (D.7a) Convention. When convenient, we shall regard each i ∈ I(n, r) as the element vi of V ⊗r . With this convention, the column of the rectangle (D.6d) corresponding to a given Q ∈ Q(λ) is a basis of the L-module MQ (see §D.5). (D.7b) Definition. If Q, R ∈ Q(λ), then the F -linear map γQ,R : MQ → MR which takes Ph : Q → Ph : R for each Ph ∈ P(λ), is the canonical map from MQ to MR . Since any two columns in (D.6d) have the same length, the canonical map is an F -linear isomorphism. It is clear that γQ,R γS,Q = γS,R and γQ,R = (γR,Q )−1 , for all Q, R, S ∈ Q(λ). Our ambition in this section is to prove that any canonical map is an isomorphism of L-modules (see (D.7f) and (D.7i)), and that any L-homomorphism MQ → MR is a scalar multiple of the canonical map γQ,R (see (D.7h)). (D.7c) Lemma. Let Q ∈ Q(λ) and P ∈ P(λ), and let i = P : Q. Then KP (i) = P : Q(λ) = γQ,Q(λ) (i). In other words, the operation i → KP (i) is achieved (for i ∈ Iλ (n, r)) by the canonical map γQ,Q(λ) . Proof. By (C.3p) one may make a sequence of basic moves joining i to KP (i). Since basic moves do not change P -symbols, we know that KP (i) = P : Q for some Q ∈ Q(λ). But KP (i) equals KP (KP (i)), hence its Q-symbol is Q(λ) (see (C.2i)). Therefore KP (i) = P : Q(λ) . Notice that this holds for any i in column Q of (D.6d), and in particular it holds for f˜c (i), for any c ∈ {1, . . . , n − 1}. By Proposition B (see C.6) we have f˜c (KP (i)) = KP (f˜c (i)), and in our case this gives (D.7d) f˜c (P : Q(λ) ) = KP (f˜c (i)) or, as a commutative diagram,
136
D Theorem A and some of its consequences
(D.7e)
KP (i) ⏐ ⏐ "
γ
←−−−−
i ⏐ ⏐ "
γ KP (f˜c(i) ) ←−−−− f˜c(i)
where γ = γQ,Q(λ) and the vertical arrows indicate action of f˜c . Thus the action of f˜c commutes with γ, when applied to any i in the Q-column of (D.6d). In the same way, one has a diagram like (D.7e), with e˜c replacing f˜c . Then we can replace f˜c in (D.7e) by any element of L and still have a commutative diagram, since the elements f˜c and e˜c , c ∈ {1, . . . , n − 1} generate L as F -algebra. So we get a (D.7f ) Corollary to (D.7c). For each pair Q, R of tableaux in Q(λ), the canonical map γQ,R : MQ → MR is an isomorphism of L-modules. Proof. The argument above shows that the corollary holds if R = Q(λ) , and −1 . this gives the general case, since γQ,R = γR,Q (λ) γ Q,Q(λ) (D.7g) Lemma. Suppose Q, R are both tableaux whose entries are 1, 2, . . . , r in some order (possibly of different shapes). If ψ : MQ → MR is a homomorphism of L-modules, then ψ(viQ ) = αviR for some α ∈ F . Proof. Let z = ψ(viQ ), then e˜c (z) = ψ(˜ ec (viQ )) = 0 for all c ∈ {1, 2, . . . , n−1}. Therefore supp(z) ⊆ T, by Corollary (D.5c). But also supp(z) ⊆ I(R, ≈) since z ∈ MR , hence supp(z) ⊆ T ∩ I(R, ≈) = {iR } (see (D.2d)); this means that z = αviR for some α ∈ F . (D.7h) Corollary. Suppose Q, R are tableaux whose entries are 1, 2, . . . , r in some order. If Q, R have the same shape then HomL (MQ , MR ) = F γQ,R . Proof. If ψ ∈ HomL (MQ , MR ), i.e. if ψ : MQ → MR is an L-homomorphism, then ψ(viQ ) = αviR for some α ∈ F . But in the present case we know that γQ,R : MQ → MR also is an L-homomorphism, by (D.7f). Therefore we have L-homomorphisms ψ and αγQ,R which take viQ to the same element αviR . Since viQ is an L-generator of MQ , by (D.5a), the map ψ is equal to αγQ,R . The module MQ is simple, and MQ ∼ = MQ(λ) . For each λ, let Mλ = MQ(λ) . (D.7i) Lemma. Let λ, µ ∈ Λ+ (n, r), then Mλ ∼ = Mµ if and only if λ = µ. Proof. Suppose that there is an isomorphism ψ : Mλ → Mµ of L-modules. Then ψ(viλ ) = αviµ for some α ∈ F , by (D.7g). (Note that Mλ = MQ(λ) and, (λ) (µ) by (D.1i), iQ = iλ and iQ = iµ .) If we apply repeatedly f˜1 ’s to iλ then we replace each time the last 1 by a 2, and we can do this λ1 − λ2 times, and the next time we get zero. Similarly
D.8 The algebra structure of L(n, r)
137
if we apply f˜1 to iµ repeatedly, then we can do this µ1 −µ2 times before we get zero. The isomorphism shows now that λ1 −λ2 = µ1 −µ2 . The same argument with f˜2 shows that λ2 − λ3 = µ2 − µ3 , and so on. Both λ and µ have degree r which forces λ = µ. Exercise 1. Let λ = (5, 4, 2). Find f˜1 viλ and verify that f˜12 viλ = v∞ = 0. Find also f˜2t viλ for t = 1, 2, 3. Exercise 2. If λ = (k, . . . , k), where kn = r, we have 1 1 2 2 Tλ =
··· ···
.. .. . . k k
1 2 .. . .
···
k
Check by direct calculation that f˜c (KTλ ) = ∞ = e˜c (KTλ ) for all c.
D.8 The algebra structure of L(n, r) Each Mλ has endomorphism algebra F . It follows now that L is isomorphic to the direct sum of matrix algebras, Mdλ (F ), L∼ = λ
where the sum is taken over all λ ∈ Λ+ (n, r) with λ = (k n ), and where dλ denotes the dimension of Mλ . This follows from the Frobenius–Schur theorem, see [11, Theorem 27.8, page 183], but we shall give a direct proof that the representation L → EndF (Mλ ) afforded by the simple module Mλ is surjective, for all λ ∈ Λ+ (n, r), λ = (k n ). The problem is to give elements of L which realize the “matrix units”. Fix λ ∈ Λ+ (n, r). The module Mλ is simple, it has F -basis { vi : i ∈ I(λ) } where we set I(λ) := I(Q(λ) , ≈) = { i ∈ Iλ : KP (i) = i } (see (C.2i)). An element Φ of EndF (Mλ ) is regarded as a matrix (Φij )i,j∈I(λ) in the usual way, Φ(vj ) = Φij vi , all j ∈ I(λ). i∈I(λ)
Monomials in f˜c , e˜c are often identified with the matrices of the linear transformations which they determine on Mλ .
138
D Theorem A and some of its consequences
(D.8a) Lemma. Suppose Φ is a monomial in the e˜c and f˜c , c ∈ {1, . . . , n−1}. Let (Φij )i,j∈I(λ) be the matrix of Φ restricted to Mλ . (i) Each row or column of the matrix has at most one non-zero entry (which is then equal to 1). (ii) If (Φij )i,j∈I(λ) has rank 1, then it is a matrix unit. Proof. For each i ∈ I(λ), Φ(vi ) is either zero, or a basis element. So all but at most one entries of each column are zero, and if there is a non-zero entry then it is equal to 1. This implies part (i), since if Φ = m1 · · · mt , then J(Φ) = J(mt ) · · · J(m1 ) is also a monomial. But (J(Φ)) is the transpose of (Φ) (see (D.5d)). Part (ii) follows. (D.8b) We want to show that every element of EndF (Mλ ) can be represented by some element of L, i.e. that the map L → EndF (Mλ ) is surjective. And we would like to do this by showing that each “matrix unit” Ei,j ∈ Mdλ (F ) can be represented by some polynomial in the f˜c , e˜c . It is enough to prove that Eiλ ,iλ can be represented by a monomial in L. Namely, if s, t ∈ I(λ) then by (D.2d) there are monomials p and q in f˜’s and e˜’s with p(viλ ) = vs and q(vt ) = viλ . The linear map pEiλ ,iλ q of V ⊗r has rank at most 1 (the rank of Eiλ ,iλ ), so by (D.8a) it is equal to Est , and this is then also represented by an element in L. The L-module Mλ = MQ(λ) has basis { vi : i ∈ I(λ) = I(Q(λ) , ≈) }. The dλ elements of I(λ) can be arranged (see (D.1g) and (D.6d)) as (λ)
i(1) = iQ
= iλ ,
i(2),
...,
i(dλ − 1),
i(dλ ) = iQ(λ) = iλ .
(D.8c) Proposition. (i) There are c(1), . . . , c(b) ∈ {1, . . . , n − 1} such that Φ := f˜c(b) . . . f˜c(1) maps iλ to iλ . (ii) The number b in (i) is given by sz(iλ ) + b = sz(iλ ). (iii) If d(1), . . . , d(s) ∈ {1, . . . , n − 1} are such that f˜d(s) . . . f˜d(1) iλ = ∞, then s ≤ b. (iv) Φ(vi(a) ) = 0, for all a ∈ {2, 3, . . . , dλ }. This shows that the matrix of Φ on Mλ is the matrix unit Eiλ ,iλ . Proof. (i) is a direct application of (D.2d)(iii)(2). We recall the proof, because it brings up useful information. The idea is to apply operators f˜c(1) , f˜c(2) , . . . in succession to the word iλ , in such a way that, for each t = 1, 2, . . . f˜c(t) · · · f˜c(1) iλ = ∞. The word f˜c(t) · · · f˜c(1) iλ has size sz(iλ ) + t, by (D.2c). The sizes of words in I(n, r) are bounded by rn. Hence, however we choose c(1), c(2), . . ., we
D.9 The character of Mλ
139
must reach b such that f˜c(b) · · · f˜c(1) iλ = ∞, but z = f˜c(b) · · · f˜c(1) iλ has the property f˜c (z) = ∞ for all c ∈ {1, . . . , n − 1}. This implies z ∈ Υ; however z ∈ I(λ) = I(Q(λ) , ≈), by (D.2b), hence z = iλ by (D.2d)(ii). This proves parts (i) and (ii) of (D.8c). To prove part (iii), note that the argument above (replace t by s) shows that, applying further operators f˜d(s+1) , f˜d(s+2) , . . . , f˜d(b ) to f˜d(s) . . . f˜d(1) iλ if necessary, we must reach b such that f˜d(b ) · · · f˜d(s+1) f˜d(s) · · · f˜d(1) iλ = iλ . Taking the size of each side of this equation, we get sz(iλ ) + b = sz(iλ ); this shows that b = b. Therefore s ≤ b = b. (iv) If a ∈ {2, 3, . . . , dλ }, then by (D.2d)(iii), i(a) = f˜d(1) . . . f˜d(u) iλ for some d(1), . . . , d(u) ∈ {1, . . . , n − 1}, and u ≥ 1. But this implies that Φ(i(a)) = f˜c(b) · · · f˜c(1) f˜d(u) · · · f˜d(1) iλ . If this were = ∞, it would contradict (iii), since b + u > b. Therefore Φ(i(a)) = ∞, hence Φ(vi(a) ) = 0. Remarks. (i) In (D.8c)(i), there may be several ways of choosing c(1), . . . , c(b) so that Φ = f˜c(b) . . . f˜c(1) maps iλ to iλ . But by (ii) the length b of any such sequence is always the same, namely b = sz(iλ ) − sz(iλ ). (ii) For any Φ ∈ L, the matrix of J(Φ) is the transpose of the matrix of Φ. This is true by definition if the matrices are defined in terms of the natural basis { vi : i ∈ I(n, r) } of V ⊗r (see (D.5d)), hence it is true also for the matrices defined in terms of the basis { vi : i ∈ I(λ) } of Mλ . Therefore, if Φ = f˜c(b) . . . f˜c(1) as in (D.8c), then the map J(Φ) = e˜c(1) . . . e˜c(b) has matrix Eiλ ,iλ . Example (see chapter E). . Take λ = (2, 1, 0) and Q(λ) = 1 3 . We can 2 take i(1) = 211,
i(2) = 311,
i(3) = 312,
i(4) = 322,
i(5) = 323.
i(3) = 213,
i(4) = 313,
i(5) = 323.
Another possibility is to take i(1) = 211,
i(2) = 212,
D.9 The character of Mλ The basis { vi : i ∈ I(Q, ≈) } for MQ consists of eigenvectors for the diagonal matrices in the general linear group GL(n, F ). Hence MQ has a formal character (as defined in §3.4), also when the field F is finite. Explicitly, let (MQ )α = ξα MQ (see §3), then the formal character of MQ is by definition
140
D Theorem A and some of its consequences
ΦMQ (X1 , . . . , Xn ) =
dim(MQ )α X1α1 · · · Xnαn .
α∈Λ(n,r)
The P -symbol preserves weights, hence dim(MQ )α = dim(Mλ )α , where λ is the shape of Q. Therefore ΦMQ = ΦMλ , that is, the formal character of MQ depends only on the shape of Q. Let Vλ be the “Weyl module” associated to λ; this is a module for the Schur algebra, see section 5.2. The following is due to P. Littelmann, in far more generality [35, Introduction]. (D.9a) Corollary. The modules Mλ and Vλ have the same formal character. Proof. In (5.4a) we saw that (Vλ )α has F -basis indexed by standard λ-tableaux of weight α. We also know from the characterisation of Mλ given above that (Mλ )α has basis vi labelled by standard λ-tableaux of weight α. Hence Mλ has the same formal character as Vλ . Note that it follows that ΦMλ = ΦMµ if and only if λ = µ. (This is also visible directly, by considering the “highest terms” of the formal characters.)
D.10 The Littlewood–Richardson Rule Suppose λ and µ are partitions with λ ∈ Λ+ (n, r), µ ∈ Λ+ (n, s). Then cνλ,µ ΦVν . ΦVλ · ΦVµ = ν
The coefficients cνλ,µ are non-negative integers, and the Littlewood–Richardson rule is a combinatorial rule for computing these integers. As we have seen, the L-module Mλ has the same formal character as the Schur algebra module Vλ . Then we have ΦMλ · ΦMµ = cνλ,µ ΦMν . ν
Here the sum is taken over all ν ∈ Λ+ (n, r + s). This leads to the following combinatorial description of the coefficients. (D.10a) Let W be the set of words i ∈ I(n, r + s) of the form i = jk with the following properties: (a) KP (j) = j and P (j) has shape λ, (b) k = iµ , and (c) the reverse B(i) of i is a lattice permutation of weight ν. Then cνλ,µ is equal to #W, the number of elements of W.
D.10 The Littlewood–Richardson Rule
141
A number of proofs of (D.10a) exist, and Littelmann gives a wide ranging generalization to cover any complex symmetrizable Kac-Moody Lie algebra [35, Introduction]. The proof we give is the special case which applies to gl(n) or to GLn . Proof. By definition Mλ is a direct summand of V ⊗r , and Mµ is a direct summand of V ⊗s . Then Mλ ⊗ Mµ is a direct summand of V ⊗(r+s) , as a vector space, since vi ⊗ vj = vij . It is invariant under the linear maps f˜c and e˜c . For example, f˜c (vij ) is either vf˜c (i)j or vif˜c (j) , or zero; and each of these belong again to Mλ ⊗ Mµ . Furthermore, Mλ ⊗ Mµ is the direct sum of L-modules MR for some standard tableaux R with shapes in Λ+ (n, r + s). Therefore cνλ,µ is precisely the number of such R such that MR occurs as a direct summand in Mλ ⊗ Mµ and MR ∼ = Mν . Each MR contains a unique “highest weight vector” vi such that e˜c (i) = ∞ for all c, namely the basis vector for i = iR . Hence cνλµ is equal to the number of words i = jk where (i) i belongs to T (see §D.1), and P (i) has shape ν; (ii) k = KP (k), and P (k) has shape µ; (iii) KP (j) = j and P (j) has shape λ. We know i ∈ T if and only if B(i) is a lattice permutation (see (D.1b)). For i ∈ T, the shape is the same as the weight (see (D.2f)). Furthermore, if B(i) is a lattice permutation then so is B(k), and since k = KP (k), we have k = iµ . This completes the proof of (D.10a). (D.10b) Very often, the Littlewood–Richardson rule is stated in a different form. It says that the coefficient cνλ,µ is equal to the size of the set C of standard (skew) tableaux T of shape ν \ µ and of weight λ such that the word w(T ) is a lattice permutation. Here the word w(T ) is obtained by reading T from right to left and from rows 1, 2, 3, . . .. We will now show directly that #C = #W, by means of a bijection from W onto C. Suppose i belongs to W, where i = jk as in (D.10a). Then always k = iµ , and we must consider the tableau P (j). Let rts be the number of times the letter s occurs in row t of P (j); since P (j) is standard, row t of P (j) starts with some letter ≥ t, and it has the form trtt (t + 1)rt,t+1 . . . nrtn , t ≥ 0 Write the multiplicities rst as an upper triangular matrix: ⎛ ⎞ r11 r12 ··· ⎜ r22 r23 · · ·⎟ ⎜ ⎟ . U =⎜ r33 · · ·⎟ ⎝ ⎠ .. .
142
D Theorem A and some of its consequences
By transposing this matrix, we can define a skew tableaux T = ψ(U ), depending on i = jk, as follows: The tth row of T starts at position (t, µt + 1) and has the multiplicities taken from the tth column of U , that is, row t is trtt (t − 1)rt−1,t . . . 1r1t . The associated word is then w(T ) = 1r11 (2r22 1r12 )(3r33 2r23 1r13 ) · · · We will show that T belongs to C, and that the map ψ : U → T is a bijection between W and C. (1) The word j has weight ν \ µ if and only if for each s, the sum of the entries in column s of the matrix U is equal to νs − µs . This means for the skew tableau T that the sum of the entries in row s is equal to νs − µs , for each s, that is T has shape ν \ µ. (2) The tableau P (j) has shape λ provided row t of P (j) has λt entries, for each t, that is rtv = λt . v≥t
This is equivalent with saying that the skew tableau w(T ) has weight λ. (3) The tableau P (j) is standard if and only if v+1
rs+1,y ≤
y=s+1
v
rs,y .
y=s
for all s ≥ 1 and all v ≥ s. This means for the word w(T ) that in each initial section the number of entries equal to s is ≥ the number of entries equal to s + 1, for each s ≥ 1. That is, P (j) is standard if and only if w(T ) is a lattice permutation. (4) The word B(i) is of the form (1µ1 2µ2 · · · )(· · · xr1x · · · 2r12 1r11 )(· · · xr2x · · · 2r22 )(· · · xr3x · · · 3r33 ) · · · This is a lattice permutation if and only if for each s ≥ 1 and each v µs +
v y=1
rys ≥ µs+1 +
v+1
ry,s+1 .
y=1
This is equivalent with T being standard. Combining (1) to (4), we see that if i = jk ∈ W and if U is the matrix encoding j, then the skew tableau T = ψ(U ) belongs to C. Conversely if we start with some T ∈ C, then T is the transpose of a matrix U , and this encodes a word i = jk in W. So ψ is a bijection.
D.11 Lascoux, Leclerc and Thibon
143
D.11 Lascoux, Leclerc and Thibon This is a brief summary of Chapter 6 of the collective work “Algebraic combinatorics on words” [38]. This chapter is called “The plactic monoid”, and its authors are A. Lascoux, B. Leclerc and J.-Y. Thibon. We refer to this chapter, and to its authors, as LLT. Our main purpose is to show that LLT prove facts which imply Theorem A and Proposition B (see (D.11h)). Reference numbers for sections, propositions, etc. in LLT are enclosed in square brackets (so that, for example, [6.1] stands for [38, 6.1]). (D.11a) The background of LLT is work of M. P. Sch¨ utzenberger, which expresses the combinatoric background of work by A. Young, G. de B. Robinson, D. E. Littlewood, etc. on the representation theory of the finite symmetric group. (D.11b) Words and tableaux. In LLT the set of all words on the alphabet A = {1, . . . , n} is denoted A∗ . So in our language, A∗ = r≥0 I(n, r). In LLT (page 3), a tableau3 is a word i in A∗ such that i = KP for some standard tableau P in the sense of section B.1. For example, i = 544135 is a 1 3 5 . If we know that i is a tableau, tableau, because i = KP for P = 4 4 5 the corresponding tableau P (which LLT call its planar representation) is uniquely defined. The shape λ of i is, by definition, the shape of P . In the example above, the shape is λ = (3, 2, 1, 0, 0). (D.11c) In [6.1] the Schensted algorithm is described. It takes each word i to a tableau KP (i). We can take P (i) to be the tableau defined in (B.4b), (B.4c). The equivalence ∼ on A∗ is defined in [6.2, bottom of page 4]: if i, j are words, then i ∼ j means KP (i) = KP (j). The equivalence ≡ on A∗ is defined on page 5 to be the equivalence on A∗ generated by basic moves (see (C.3c), (C.3d) and [6.2.3, 6.2.4]. LLT do not use the term “basic move”.) (D.11d) Knuth’s theorem (C.3a), [6.2.5] says that ≡ coincides with ∼. This is proved in [6.2], elegantly and economically, by a theorem of C. Greene [21]. Greene’s theorem itself is also proved in [6.2]. Now the main (D.11e) Definition (see [6.2.2]). The plactid monoid Pl(A) := A∗ /∼ is the quotient of A∗ by ∼. Elements of Pl(A) are the ∼-classes, or “plactic classes” in A∗ .
3
We write tableau (underlined) for a word which is a “tableau in the sense of LLT”. A tableau (not underlined) is a standard tableau in the sense of section B.1 of this Appendix. Later in LLT a tableau KP and its planar representation P are often identified.
144
D Theorem A and some of its consequences
Knuth shows that ∼ is compatible with the product of words: if u, u , v, v are words, then u ∼ u and v ∼ v implies uu ∼ vv (see [34, Corollary, page 724]). Product of words is by concatenation, so that uu = u | u ; see (A.3g)(6). Therefore Pl(A) is a monoid (i.e. a semigroup with identity): the product of the ∼-class of u with the ∼-class of u , is defined to be the ∼-class of uu . If u is any word, then u ∼ KP (u) (see (C.3p), [6.2.3]). Every ∼-class contains exactly one tableau; see Theorem [6.2.5]. (D.11f ) A main theme in LLT is that it is often useful to “lift” a symmetric polynomial to Z[Pl(A)]. Suppose that M is any monoid. Then Z[M ], which is the free Z-module with M as Z-basis, is a ring. In case M = A∗ we can identify the ring Z[A∗ ] with the tensor ring T (V ) = Z ⊕ V ⊕ (V ⊗ V ) ⊕ · · · over the free Z-module V = Zν1 ⊕ · · · ⊕ Zνn , by identifying each word i = i1 · · · ir ∈ A∗ with the tensor product νi = νi1 ⊗ · · · ⊗ νir (compare with (D.4a)). Yet another interpretation of Z[A∗ ] is as the ring of all polynomials (over Z) in non-commuting variables ν1 , . . . , νn ; here one regards every tensor product νi = νi1 ⊗ · · · ⊗ νir as the monomial νi1 · · · νir . Now suppose that ξ1 , . . . , ξn are commuting variables. Then there is an epimorphism of rings κ : Z[A∗ ] → Z[ξ1 , . . . , ξn ] which takes νσ → ξσ for all σ ∈ {1, . . . , n}. And this map factors through the map π : Z[A∗ ] → Z[Pl(A)] induced by the natural epimorphism A∗ → Pl(A); this means that i ∼ j implies κ(i) = κ(j). (It is enough to check this in case i is connected to j by a basic move.) So there exists a ring epimorphism η : Z[Pl(A)] → Z[ξ1 , . . . , ξn ] such that κ = ηπ. In section [6.4] LLT define a “plactic Schur function” Sλ in Z[Pl(A)] which is mapped by η onto the classical Schur function in the variables ξ1 , . . . , ξn (see remark (iii) in section 3.5). Then they deduce the Littlewood–Richardson rule from an identity in Z[Pl(A)] (see Theorem [6.4.5]). (D.11g) Returning to section [6.3]; LLT define Schensted’s Q-symbol. So for any i ∈ A∗ , one defines the tableau Q(i) (or more correctly the tableau KQ(i)) which is a byproduct of the sequence of tableaux P (i1 ), P (i1 i2 ), . . . which is used to make P (i); see the example (B.4c), or the example in LLT (page 7). By its construction, Q(i) is what LLT call a “standard” tableau, i.e. if i ∈ I(n, r), then the entries of Q(i) are the numbers 1, 2, . . . , r in some order. The shape of Q(i) is the shape λ of P (i). LLT prove the Robinson– Schensted theorem [6.3.1], which says that the map ρ : i → (P (i), Q(i)) induces a bijection from the set Iλ (n, r) of all words i of given shape λ (see §C.1) to the set Tab(λ, A) × STab(λ). (In our notation, Tab(λ, A) = P(λ) and STab(λ) = Q(λ); see (D.6a) and (D.6b).) This is essentially the theorem (B.6a) which says the map Sch is bijective. It is proved in the same way, by constructing the inverse map ρ−1 . The rest of section [6.3] is devoted to applications to representations of the symmetric group S(n). A permutation σ of {1, . . . , n} is regarded as a word σ = σ1 · · · σn of length n. Then G. de B. Robinson discovered and Sch¨ utzenberger proved the theorem [6.3.3]: Q(σ) = P (σ −1 ). LLT give a short
D.11 Lascoux, Leclerc and Thibon
145
proof of this fact, and also generalize it to obtain, for any word i, a description of Q(i) as P (σ −1 ) for a certain permutation σ constructed from i (see [6.3.7]). Then a further generalization, gives them a generating function for the number dλ of plactic classes of given weight λ (see [6.3.10]). Notice that dλ appears in the “λ-rectangle” (D.6d). (D.11h) In section [6.5], the set of all i ∈ A∗ for which Q(i) is a given “standard” tableau Q is called a coplactic class. In our terminology (see section C.1) this is the ≈-class Iλ (Q, ≈), where ≈ is the equivalence relation on A∗ defined in (A.4b): i ≈ j means Q(i) = Q(j). (LLT do not give a symbol for ≈.) In order to give “structure” to the coplactic classes, LLT introduce three operations on words (which then induce linear operations on Z[A∗ ]). For a given c ∈ {1, . . . , n − 1}, the LLT operators are called ec , fc , σc . We shall see in (D.11i) that ec , fc are just the Littelmann operators e˜c , f˜c defined in section A.3. We do not have the operator σc in the Appendix, but it is used extensively in the latter part of LLT. Proof of Theorem A. Theorem [6.5.1(i)] says that if θ is either ec or fc , then Q(θi) = Q(i) for any word i such that θi = 0. This is Proposition (D.2b); it is the “if” part of Theorem A. The “only if” part follows from Proposition [6.5.2(i)]; one defines a graph Γ (called the Littelmann graph c in section E.2) to have for vertices all words i ∈ A∗ , with arrow i −→ j where fc i = j. If i, j are such that i ≈ j, i.e. if i, j are in the same coplactic class, then [6.5.2(i)] says that i, j are in the same connected component of Γ, which means that we connect i and j by a chain of links, each link being of c c the form either −→ or ←−. But this is “only if” for Theorem A. Proof of Proposition B. Theorem [6.5.1(ii)] says that LLT operators are compatible with the equivalence ≡. For example, if i, j ∈ A∗ and if i ≡ j, then for any c ∈ {1, . . . , n − 1} there holds fc (i) ≡ fc (j). (This includes the statement fc (i) = 0 if and only if fc (j) = 0.) But this is essentially the Lemma (C.6c). To deduce Proposition B, we combine (C.6c) with (C.3p), which says that for any i there holds i ≡ KP (i). However LLT have proved this in Proposition [6.2.3]. Therefore LLT have proved Proposition B. (D.11i) We sketch the proof that the LLT operators ec , fc are the same as the Littelmann operators e˜c , f˜c , respectively. Let c ∈ {1, . . . , n − 1}, and keep this fixed. To calculate e˜c (i) and f˜c (i) for a given word i of length p, use the function hic (t) (see (A.3b)). This gives parameters M i , q i , q i , and these are sufficient to determine e˜c (i) and f˜c (i). Let us say that words i, j are isologous if M i , q i , q i are equal to M j , q j , q j , respectively. To calculate hic (t), one needs only the entries c, c + 1 in i. We say that letters other than c, c + 1 are neutral. In the example below i = 235342233 is a word in I(5, 9), and c = 2. Our first move is to replace each neutral entry by the empty square, indicated by a “ · ” in the third line of table D.1 below. Now we look for an adjacent (c + 1, c) pair, i.e. entries ia , ib of i
146
D Theorem A and some of its consequences t
1 2 3 4 5 6 7 8 9
it
2 3 5 3 4 2 2 3 3
it
2 3
·
3
·
2 2 3 3
jt
2 3
·
·
·
·
2 3 3
kt
2
·
·
·
·
·
·
3 3
Table D.1. Successive construction of isologous words.
such that ia = c + 1, ib = c, a < b, and iz is neutral for all places z such that a < z < b, if there is any such place. In our example, (i4 , i6 ) is an adjacent (3, 2) pair. Now replace both entries ia , ib by neutral letters. It is (very) easy to see that the resulting word j is isologous to i. We next look for adjacent (c + 1, c) pairs in j; in the example, (j2 , j7 ) is such a pair. Then “neutralize” this pair, etc. After a finite number of steps we reach a word k that contains no adjacent (c + 1, c) pair. In this word, there may be r entries c, and they all occur before any of the s entries c + 1. (Either or both of r, s may be zero.) By construction k is isologous to i. But it is very simple to describe the function hkc (t): starting from the left, it ascends by the r steps c, moves horizontally if there are some neutral entries between the last c and the first c + 1, then descends by the s entries c + 1. If r = 0 then M i = M k = 0, and if s = 0 we have hic (p) = hkc (p) = M k ; if r > 0 then q i = q k is the last place with entry c, and if s > 0 then q i = q k is the place immediately before the first c + 1. In the example given in table D.1, we have exactly one c at place 1 in k, and two c + 1’s at places 8, 9, respectively; hence q i = 1 and q i = 7. We now have all that is needed to construct e˜c (i) and f˜c (i). We leave it to the reader to compare our construction of e˜c , f˜c with LLT’s construction of their operators ec , fc , and to show that the two constructions are identical.
E Tables
E.1 Schensted’s decomposition of I(3, 3)

Let n = 3 and r = 3. For each i ∈ I(3, 3), the Q-symbol Q = Q(i) of i then contains each of the numbers 1, 2, 3 exactly once. We write I(Q) = I(Q, ≈) for all these tableaux Q. Then, writing the rows of a tableau on a single line and separating them by a slash,

    I(3, 3) = I(300) ∪˙ I(210) ∪˙ I(111)
            = I(1 2 3) ∪˙ I(1 2 / 3) ∪˙ I(1 3 / 2) ∪˙ I(1 / 2 / 3).
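The decomposition displayed above can be checked mechanically. The following Python sketch (ours, not part of the text; all names are ours) computes the Q-symbol of each of the 27 words of I(3, 3) by the usual row-insertion form of Schensted's construction [46], which reproduces the data of table E.2 below, and groups the words by their Q-symbol; one obtains four classes, of sizes 10, 8, 8 and 1.

from itertools import product
from collections import defaultdict

def row_insert(P, x):
    """Row-insert the letter x into the tableau P (a list of rows); return the new tableau and the row of the new box."""
    P = [row[:] for row in P]
    r = 0
    while r < len(P):
        row = P[r]
        for j, y in enumerate(row):
            if y > x:
                row[j], x = x, y          # bump y and carry it into the next row
                break
        else:
            row.append(x)
            return P, r
        r += 1
    P.append([x])
    return P, len(P) - 1

def PQ(word):
    """P-symbol and Q-symbol of a word under the Schensted correspondence."""
    P, Q = [], []
    for t, x in enumerate(word, start=1):
        P, r = row_insert(P, x)
        if r == len(Q):
            Q.append([])
        Q[r].append(t)                    # record where the t-th box appeared
    return P, Q

classes = defaultdict(list)
for w in product([1, 2, 3], repeat=3):    # the 27 words of I(3, 3)
    _, Q = PQ(w)
    classes[tuple(map(tuple, Q))].append("".join(map(str, w)))

for Q, words in sorted(classes.items()):  # rows of Q are printed as tuples
    print(Q, len(words), words)           # the four coplactic classes: sizes 10, 8, 8 and 1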
Table E.1 below contains, for each λ ∈ Λ+ (3, 3), the tableaux ψ (λ) , Q(λ) (see (C.2g) and (C.2h)), the tableaux Tλ , Zλ , and the words iλ , iλ obtained from these (see (D.1d) and (D.1e)).
λ            ψ(λ)        Q(λ)         Tλ           iλ      Zλ           iλ
(3, 0, 0)    1 2 3       1 2 3        1 1 1        111     3 3 3        333
(2, 1, 0)    2 3 / 1     1 3 / 2      1 1 / 2      211     2 3 / 3      323
(1, 1, 1)    3 / 2 / 1   1 / 2 / 3    1 / 2 / 3    321     1 / 2 / 3    321

Table E.1. Various data associated with λ ∈ Λ+(3, 3).
The elements of the sets I(Q) with their P -symbols and Q-symbols are listed in table E.2.
λ = (3, 0, 0), Q(i) = 1 2 3

  P(i)       i
  1 1 1      111
  1 1 2      112
  1 2 2      122
  1 1 3      113
  1 2 3      123
  1 3 3      133
  2 2 2      222
  2 2 3      223
  2 3 3      233
  3 3 3      333

λ = (2, 1, 0)

  P(i)        i with Q(i) = 1 2 / 3     i with Q(i) = 1 3 / 2
  1 1 / 2     121                       211
  1 2 / 2     221                       212
  1 1 / 3     131                       311
  1 3 / 2     231                       213
  1 2 / 3     132                       312
  1 3 / 3     331                       313
  2 2 / 3     232                       322
  2 3 / 3     332                       323

λ = (1, 1, 1), Q(i) = 1 / 2 / 3

  P(i)         i
  1 / 2 / 3    321

Table E.2. P-symbols and Q-symbols of the words i ∈ I(3, 3).
E.2 The Littelmann graph I(3, 3)

Let n, r be positive integers. Following Littelmann [35, §2], we define the structure of a graph on I(n, r) by saying that i, j ∈ I(n, r) are connected
by an edge if there exists an element c ∈ {1, . . . , n − 1} such that f˜c (i) = j or f˜c (j) = i. This graph is the Littelmann graph (it is the undirected form of the directed graph Γ in (D.11i)). The connected components of this graph are precisely the coplactic, or ≈-classes I(Q, ≈), where Q is a standard tableau with entries 1, . . . , r in some order. This follows from Theorem A (and (A.3g)(5), where we have seen that f˜c (i) = j if and only if e˜c (j) = i). Therefore we can use these tableaux to label the connected components of the Littelmann graph. For n = r = 3, the Littelmann graph is shown in table E.3. In this display, two words i, j are connected by a (directed) edge labelled by 1 or 2 according as f˜1 (i) = j or f˜2 (i) = j.
I(1 2 3):
  f˜1 : 111 → 112 → 122 → 222,   113 → 123 → 223,   133 → 233
  f˜2 : 112 → 113,   122 → 123 → 133,   222 → 223 → 233 → 333

I(1 2 / 3):
  f˜1 : 121 → 221,   131 → 132 → 232,   331 → 332
  f˜2 : 121 → 131,   221 → 231 → 331,   232 → 332

I(1 3 / 2):
  f˜1 : 211 → 212,   311 → 312 → 322,   313 → 323
  f˜2 : 211 → 311,   212 → 213 → 313,   322 → 323

I(1 / 2 / 3):
  321 (an isolated vertex)

Table E.3. The four Littelmann graphs in I(3, 3).
Note that, if λ ∈ Λ+ (n, r) and Q = Q(λ) , then iλ is at the top and iλ is at the bottom of the corresponding connected component of the Littelmann graph (see table E.1 and (D.2d), (C.2c)).
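The component structure recorded in table E.3 can also be checked by machine. The sketch below (ours, not part of the text) builds the directed graph on I(3, 3) with an edge i → j labelled c whenever f˜c(i) = j, computing f˜c by the same bracketing rule as in the sketch at the end of section D (again our reading of section A.3), and then lists the connected components; they are the four coplactic classes of sizes 10, 8, 8 and 1, with the word 321 isolated.

from itertools import product
from collections import defaultdict

def f_tilde(word, c):
    """f~_c by the bracketing rule: cancel (c+1, c) pairs, then raise the last surviving c."""
    surviving_c, surviving_c1 = [], []
    for pos, letter in enumerate(word):
        if letter == c + 1:
            surviving_c1.append(pos)
        elif letter == c:
            if surviving_c1:
                surviving_c1.pop()        # this c cancels the nearest surviving c+1 to its left
            else:
                surviving_c.append(pos)
    if not surviving_c:
        return None                       # f~_c(word) = 0
    q = surviving_c[-1]
    return word[:q] + (c + 1,) + word[q + 1:]

n, r = 3, 3
words = list(product(range(1, n + 1), repeat=r))

# the Littelmann graph: an edge i -> j labelled c whenever f~_c(i) = j
edges = []
for i in words:
    for c in range(1, n):
        j = f_tilde(i, c)
        if j is not None:
            edges.append((i, c, j))

# connected components via a simple union-find
parent = {w: w for w in words}
def find(w):
    while parent[w] != w:
        w = parent[w]
    return w
for i, _, j in edges:
    parent[find(i)] = find(j)

components = defaultdict(list)
for w in words:
    components[find(w)].append("".join(map(str, w)))
for comp in sorted(components.values(), key=len, reverse=True):
    print(len(comp), sorted(comp))        # sizes 10, 8, 8, 1; the last component is ['321']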
Index of symbols
e
evaluation map
14
2
Y
= Ker e
14
Chapter 1 ◦ ∆
coproduct
3
E
natural module
17
ε
counit
3
Dr,K
rth symmetric power
19
F (K Γ )
finitary functions
3
V◦
contravariant dual
20
rab
coefficient function
4
J
anti-auto of SK (n, r) ⊗r
20
4
,
4
MK (n, r)
modA (KΓ)
4
Chapter 3
com(A)
right A-comodules
5
Λ(n, r)
weights
23
= HomK (A, K)
6
W
= G(n), Weyl group
23
δab
Kronecker delta
modA (KΓ)
∗
A
canonical form on E
22
Λ+ (n, r) dominant weights
Chapter 2
21
23
11
ξα
= ξi,i where i ∈ α
23
ΓK
= GLn (K) (≥§2)
11
Vα
weight-space
24
cµν
coefficient function
11
Tn (K)
diagonal subgroup
24
11
χα
character of Tn (K)
24
AK (n, r)
11
r
Λ E
exterior power
24
I(n, r)
11
ΦV
formal character
26
11
mλ
monomial symm fct
26
11
er
elementary symm fct
26 27
GLn (K)
AK (n)
G(r)
symmetric group
∼ ci,j
= ci1 j1 ci2 j2 · · · cir jr
11
eµ
= eµ1 · · · eµr
MK (n)
= modAK (n) (KΓ)
12
ϕV
natural character
27
12
Fλ,K
irreducible module
28
SK (n, r) Schur algebra
13
Φλ,K
irreducible character
28
ξi,j
13
Φλ,p
= Φλ,K if char K = p
29
MK (n, r) = modAK (n,r) (KΓ) basis element of SK
decomp numbers
29
u
= (1, 2, . . . , r)
s(w)
sign of w ∈ G(n)
30
f
Schur functor
Sλ
Schur function
30
V(e)
dλ,µ
Chapter 4 [λ] T
λ
R(T )
shape of λ
33
54 54 55
a
mod S → mod S
55
h
mod eSe → mod S
56
=a◦h
56
= G(r)
57
∗
basic λ-tableau
34
h
row stabilizer
34
G
+
C(T )
column stabilizer
34
Λ
= Λ (n, r)
57
Ti
= i ◦ Tλ
34
ST,K
Specht module
58
(Ti : Tj ) bideterminant
34
Sλ,K
= ST λ ,K
58
l
35
S T,K
dual Specht module
59
Dλ,K
35
S λ,K
= S T λ ,K
59
ϕK
⊗r EK → Dλ,K
36
Ωπ,π
60
Dr,K
= D(r,0,...,0),K
36
Ks
63
D(1r ),K
= D(1,...,1,0,...,0),K
36
d
MK (N, r) → MK (n, r)
65
λ
AK (n, r) right λ-weight-space
37
α∗
= (α, 0, . . . , 0)
65
β(i)
37
comp multiplicity
68
Chapter 5 NK
∗
Λ(n, r) nλ (V )
= Ker ϕK
Vλ,K
65
43
Chapter A
43
I(n, r)
words
Vr,K
= V(r,0,...,0),K
44
n
= {1, 2, . . . , n}
73
V(1r ),K
= V(1,...,1,0,...,0),K
45
Λ(n, r)
weights
73
[X]
=
π
45
Λ+ (n, r) dominant weights
73
{X}
=
s(π) π π∈X
45
λ(i)
shape of i
74
fl
= el {C(T )}
45
P (i)
P -symbol of i
74
bi
= ξi,l fl
π∈X
73
46
Q(i)
Q-symbol of i
74
46
αa,b
root
75
47
ω
Dλ,K
47
hic
height function
75
,
49
Mci
maximal height
75
50
qci
50
q ci
51
Littelmann operator
Chapter 6
e˜c f˜c
Littelmann operator
76
ω
= (1, 1, . . . , 1, 0, . . . , 0)
53
B
reversing operator
76
S
= SK (n, r)
53
eα
root operator
76
root operator
76
weight of i
76
Ω max
Vλ,K
min
(r, α) hr
=
r! α1 !···αn !
complete symm fct
Xλ,Z
S(α)
= ξα Sξα
53
fα
fα
MK (n, r) → mod S(α)
53
wt(i)
75
76 76 76
i|j
concatenation of i, j
76
W
121
∼
P -equivalence
78
T
122
≈
Q-equivalence
78
C
122
KP
Knuth unwinding of P
79
Tλ iλ
Chapter B
122 = KTλ
122
[λ]
shape of λ
81
Zλ
T (n, r)
triples (λ, P, Q)
81
iλ
Sch
Schensted’s map
81
Q(λ)
123
U ← x1
insertion
82
iQ
123
122 = KZλ
122
(µ, U, V ) ← x1
82
iQ
k(a)
83
sz(i)
size of i
z
84
v∞
=0
130
εz
84
L(n, r)
Littelmann algebra
130
123 125
J
insertion map
89
DA
130
E
extrusion map
89
Z(c)
130
M
inverse of Sch
92
Y (c)
Es
94
MQ
irr. L-module
Js
94
supp(z)
support of z ∈ V ⊗r
131
J
anti-auto of L
132
Chapter C Iλ (n, r)
words of shape λ
95
P(λ)
Iλ
= Iλ (n, r)
95
dλ
130 131
134 = #P(λ)
134
Iλ (P, ∼) = {i ∈ Iλ : P (i) = P }
95
fλ
= #Q(λ)
134
Iλ (Q, ≈) = {i ∈ Iλ : Q(i) = Q}
95
P :Q
= M(λ, P, Q) ∈ I(n, r)
134
I(P, ∼)
= Iλ (P, ∼)
96
γQ,R
canonical map
135
I(Q, ≈)
= Iλ (Q, ≈)
96
Mλ
= MQ(λ)
136
(λ)
, ≈)
X[t]
98
I(λ)
= I(Q
ψ (λ)
100
Ei,j
matrix unit
Q(λ)
100
W
140
103
C
141
K K
ξ(a, t) f˜c (P )
basic move basic move = f˜c (KP )
138
103
A
words on A
143
104
≡
=∼
143
114
Pl(A) Z[M ]
Chapter D Υ
∗
137
121
∗
= A /∼
143 144
References
1. E. Artin, C. Nesbitt, and R. M. Thrall. Rings with minimum condition. University of Michigan Press, Ann Arbor, 1948.
2. M. Auslander. Representation theory of Artin algebras I, II. Comm. in Alg., 1:177–268, 1974.
3. D. Blessenohl and M. Schocker. Noncommutative representation theory of the symmetric group. Imperial College Press, London, 2005.
4. A. Borel. Properties and representations of Chevalley groups. In Seminar on Algebraic Groups and Related Finite Groups (The Institute for Advanced Study, Princeton, N.J.), Lecture Notes in Mathematics, volume 131, pages 1–55. Springer, Berlin, 1971.
5. R. Brauer. On modular and p-adic representations of algebras. Proc. Nat. Acad. Sci. U.S.A., 25:252–258, 1939.
6. R. W. Carter and G. Lusztig. On the modular representations of the general linear and symmetric groups. Math. Z., 136:193–242, 1974.
7. C. Chevalley. Certains schémas de groupes semi-simples. Séminaire Bourbaki, Soc. Math. France, Paris, Exp. No. 219, Vol. 6:219–234, 1995.
8. M. Clausen. Letter place algebras and a characteristic-free approach to the representation theory of the general linear and symmetric groups, I. Adv. in Math., 33:161–191, 1979.
9. E. Cline, B. Parshall, and L. Scott. Cohomology, hyperalgebras, and representations. J. Algebra, 63:98–123, 1980.
10. P. M. Cohn. Morita equivalence and duality. Queen Mary College Mathematics Notes, University of London, 1976.
11. C. W. Curtis and I. Reiner. Representation theory of finite groups and associative algebras. John Wiley and Sons (Interscience), New York, 1962.
12. C. W. Curtis and T. V. Fossum. On centralizer rings and characters of representations of finite groups. Math. Z., 107:402–406, 1968.
13. J. Deruyts. Essai d'une théorie générale des formes algébriques. Mém. Soc. Roy. Liège, 17:1–156, 1892.
14. J. Désarménien. Appendix to “Théorie combinatoire des invariants classiques”, volume 1/S-01 of Séries de Math. pures et appl. G.-C. Rota, Université Louis-Pasteur, Strasbourg, 1977.
15. J. Désarménien, J. P. S. Kung, and G.-C. Rota. Invariant theory, Young bitableaux and combinatorics. Adv. in Math., 27:63–92, 1978.
16. L. Dornhoff. Group representation theory. Marcel Dekker, New York, 1972.
17. F. G. Frobenius. Über die Charaktere der symmetrischen Gruppe. Sitzber. Kgl. Preuß. Akad. Wiss., pages 516–534, 1900.
18. W. Fulton. Young Tableaux, volume 35 of London Mathematical Society Student Texts. Cambridge University Press, 1997.
19. H. Garnir. Théorie de la représentation linéaire des groupes symétriques, volume 10 of Mém. Soc. Roy. Sci. (4). Liège, 1950.
20. J. A. Green. Locally finite representations. J. Algebra, 41:137–171, 1976.
21. C. Greene. An extension of Schensted's theorem. Adv. in Math., 14:254–265, 1974.
22. W. J. Haboush. Central differential operators on split semi-simple groups over fields of positive characteristic. In Séminaire d'Algèbre P. Dubreil et Marie-Paule, Springer Lecture Notes in Math. 795, pages 35–85. Springer, Berlin, 1980.
23. G. Higman. Representations of general linear groups and varieties of p-groups. In Proc. Internat. Conf. Theory of Groups, Austral. Nat. Univ. Canberra, Aug. 1965, pages 167–173. Gordon and Breach, New York, 1967.
24. G. Hochschild. Introduction to affine algebraic groups. Holden-Day, San Francisco, 1971.
25. N. Iwahori. On the structure of a Hecke ring of a Chevalley group over a finite field. J. Fac. Sci. Univ. Tokyo Sect. I, 10:215–236, 1964.
26. G. D. James. Some counterexamples in the theory of Specht modules. J. Algebra, 46:457–461, 1977.
27. G. D. James. The representation theory of the symmetric group, volume 682 of Lecture Notes in Mathematics. Springer, Berlin, 1978.
28. G. D. James. The decomposition of tensors over fields of prime characteristic. Math. Z., 172:161–178, 1980.
29. J. C. Jantzen. Darstellungen halbeinfacher algebraischer Gruppen und zugeordnete kontravariante Formen. In Bonner Math. Schriften, volume 67. Bonn, 1973.
30. J. C. Jantzen. Darstellung halbeinfacher Gruppen und kontravariante Formen. J. reine angew. Math., 290:117–141, 1977.
31. J. C. Jantzen. Über das Dekompositionsverhalten gewisser modularer Darstellungen halbeinfacher Gruppen und ihrer Lie-Algebren. J. Algebra, 49:441–469, 1977.
32. J. C. Jantzen. Weyl modules for groups of Lie type. In Proc. of London Math. Soc. Symposium in Finite Simple Groups. Durham, 1978.
33. V. Kac. Infinite dimensional Lie algebras, Third edition. Cambridge University Press, Cambridge, 1990.
34. D. E. Knuth. Permutations, matrices and generalized Young tableaux. Pacific J. Math., 34:709–727, 1970.
35. P. Littelmann. Paths and root operators in representation theory. Annals of Math., 142:499–525, 1995.
36. D. E. Littlewood. The theory of group characters. Oxford University Press (Clarendon), Oxford, 1950.
37. D. E. Littlewood and R. Richardson. Group Characters and Algebra. Philos. Trans. of the Royal Soc. of Lond., Ser. A, 233:99–141, 1934.
38. M. Lothaire. Algebraic combinatorics on words, volume 90 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge, 2002.
39. I. G. Macdonald. Symmetric functions and Hall polynomials. Oxford University Press (Clarendon), Oxford, 1979.
40. P. A. MacMahon. Combinatory Analysis I, II. Cambridge University Press, 1915/16, reprint by Chelsea Publishing Company, 1960.
41. T. Martins. Hook representations of the general linear group. Ph.D. thesis, Warwick University, Coventry, 1981.
42. T. Nakayama. On Frobeniusean algebras I. Ann. of Math., 40:611–633, 1939.
43. T. Nakayama. On Frobeniusean algebras II. Ann. of Math., 42:1–21, 1941.
44. T. Nakayama. On Frobeniusean algebras III. Jap. J. Math., 18:49–65, 1942.
45. M. H. Peel. Specht modules and symmetric groups. J. Algebra, 36:88–97, 1975.
46. C. Schensted. Longest increasing and decreasing subsequences. Canad. J. Math., 13:179–191, 1961.
47. I. Schur. Über eine Klasse von Matrizen, die sich einer gegebenen Matrix zuordnen lassen. Dissertation, Berlin, 1901. In I. Schur, Gesammelte Abhandlungen I, 1–70, Springer, Berlin, 1973.
48. I. Schur. Über die rationalen Darstellungen der allgemeinen linearen Gruppe. Sitzber. Königl. Preuß. Ak. Wiss., Physikal.-Math. Klasse, pages 58–75, 1927. In I. Schur, Gesammelte Abhandlungen III, 68–85, Springer, Berlin, 1973.
49. J.-P. Serre. Groupes de Grothendieck des schémas en groupes réductifs déployés. Publ. I.H.E.S., 34:37–52, 1968.
50. R. Steinberg. Lectures on Chevalley groups. Yale University, New Haven, 1966.
51. M. E. Sweedler. Hopf algebras. W. A. Benjamin, New York, 1969.
52. J. Towber. Young symmetry, the flag manifold, and representations of GL(n). J. Algebra, 61:414–462, 1979.
53. D.-N. Verma. The role of affine Weyl groups in the representation theory of algebraic Chevalley groups and their Lie algebra. In I. M. Gelfand, editor, Lie groups and their representations, pages 653–705. John Wiley and Sons, New York, 1975.
54. H. Weyl. Theorie der Darstellung kontinuierlicher halbeinfacher Gruppen durch lineare Transformationen. Math. Z., 23:271–309, 1925; 24:328–376, 377–395, 789–791, 1926.
55. H. Weyl. The classical groups. Princeton University Press, Princeton, 1946.
56. W. J. Wong. Representations of Chevalley groups in characteristic p. Nagoya Math. J., 45:39–78, 1971.
57. W. J. Wong. Irreducible modular representations of finite Chevalley groups. J. Algebra, 20:355–367, 1972.
58. A. Young. On quantitative substitutional analysis (2). Proc. London Math. Soc., 34(1):261–397, 1902.
59. A. Young. On quantitative substitutional analysis (3). Proc. London Math. Soc., 28(2):255–292, 1928.
Index
affine group scheme, 8, 16, 18
affine ring, 5
alphabet, 73
antisymmetric tensor, 45
basic move, 78, 103, 135
basis for Dλ,K, 36
basis for Vλ,K, 46
bideterminant, 34
bump, 85, 86
canonical form, 21, 43
canonical map, 135
Carter-Lusztig basis, 45
Carter-Lusztig lemma, 38
Carter-Lusztig module, 43
character, 26, 139
  formal, 26, 139
  natural, 27
coalgebra, 3, 12
coefficient function, 4
  space, 4
column stabilizer, 34
column standard, 88
comodule, 5
completely reducible, 133
composition multiplicity, 68
concatenation, 76, 103
contravariant, 19
  dual, 20
  form, 20
decomposition number, 9, 17, 29, 69
defined over Z, 7, 14, 19
diagonal subgroup, 24
dominant weight, 23
Désarménien matrix, 45, 46, 48
entry of a tableau, 81
equality rule, 13
equivalence ≈, 75, 78, 95
equivalence ∼, 78, 95
equivalent
  categories, 15
  representations, 2
exterior power, 24, 36, 45
extrusion, 89
  sequence, 90
finitary function, 3
Garnir relations, 38
Hecke ring, 54
height function, 75
hyperalgebra, 7
induced module, 42
insertion, 82
insertion parameters, 99, 104
insertion, sequence, 83
invariant matrix, 4, 17
involutory anti-automorphism, 132
James module, 40
James's theorems, 63, 70
KΓ-bimodule, 21, 31, 35
KΓ-isomorphism, 2
KΓ-map, 2
KΓ-module, 2
  irreducible, 28
K-space, 2
Knuth unwinding, 79, 96
Knuth's theorem, 103
Knuth, Donald, 78, 96
L-homomorphism, 135
ladder, 92, 93
λ-rectangle, 134
lattice permutation, 122, 140
letter, 73
Littelmann algebra, 130, 137
Littelmann, Peter, 73, 96, 140
Littlewood–Richardson rule, 140
Martins theorem, 69
matrix unit, 137, 138
modular reduction, 8, 16, 68
modular theory, 6, 16
module MQ, 131
Morita equivalent, 67
move, basic, 78
multi-index, 11
operator B, 76, 122
operator C, 122, 127
operators e˜c, f˜c, 75
P-symbol, 74
partition, 23, 60
  column p-regular, 60
path, 74
path model, 73
pivot, 105
place, 81
place permutation, 11
Proposition B, 78, 116
Q-symbol, 74
representation, 2
  A-rational, 4
  matrix, 4
  polynomial, 5
Robinson–Schensted algorithm, 74
root, 75
  simple, 75
root operator, 73, 76
row stabilizer, 34
row standard, 88
Schensted, 81
Schensted process, 74, 81
  inverse, 89
Schensted's map, 81
  theorem, 89
Schur algebra, 7, 13
Schur function, 30
Schur functor, 53, 54, 57
semigroup, 2
semigroup-algebra, 2
semisimple, 133
shape
  of a tableau, 74, 81
  of a weight λ, 81
  of a word, 95, 129
size of a word, 125
Specht module, 58, 63
  dual, 59, 62
standard, 81
  column, 88
  row, 88
standard tableau, 36, 74, 81
Steinberg's tensor product theorem, 50
support, 131
symmetric function, 26
  complete, 50
  elementary, 26, 29
  monomial, 26, 29
  ring of, 27
symmetric group, 11, 53
symmetric power, 19, 36, 44
symmetric tensor, 44
tableau, 33, 74, 81
  basic λ-, 34
  standard, 36, 74, 81
  Young, 74
Theorem A, 78, 121, 124
unit matrix, 137, 138
unital, 132, 134
unwinding, Knuth, 79, 96
weight, 23, 73, 126
  dominant, 73
  of a word, 76
weight space, 23
Weyl group, 23
Weyl module, 10, 33, 43, 140
word, 73
Z-form, 7, 8, 16, 50