Institute of Mathematical Statistics LECTURE NOTES–MONOGRAPH SERIES Volume 53
Multivariate Statistics A Vector Space Approach
Morris L. Eaton
Institute of Mathematical Statistics Beachwood, Ohio, USA
Institute of Mathematical Statistics Lecture Notes–Monograph Series
Series Editor: R. A. Vitale
The production of the Institute of Mathematical Statistics Lecture Notes–Monograph Series is managed by the IMS Office: Jiayang Sun, Treasurer and Elyse Gustafson, Executive Director.
Library of Congress Control Number: 2006940290
International Standard Book Number: 978-0-940600-69-0, 0-940600-69-2
International Standard Serial Number: 0749-2170
Copyright © 2007 Institute of Mathematical Statistics
All rights reserved
Printed in Lithuania
Contents

Preface
Notation
1. VECTOR SPACE THEORY
   1.1. Vector Spaces
   1.2. Linear Transformations
   1.3. Inner Product Spaces
   1.4. The Cauchy–Schwarz Inequality
   1.5. The Space L(V, W)
   1.6. Determinants and Eigenvalues
   1.7. The Spectral Theorem
   Problems
   Notes and References
2. RANDOM VECTORS
   2.1. Random Vectors
   2.2. Independence of Random Vectors
   2.3. Special Covariance Structures
   Problems
   Notes and References
3. THE NORMAL DISTRIBUTION ON A VECTOR SPACE
   3.1. The Normal Distribution
   3.2. Quadratic Forms
   3.3. Independence of Quadratic Forms
   3.4. Conditional Distributions
   3.5. The Density of the Normal Distribution
   Problems
   Notes and References
4. LINEAR STATISTICAL MODELS
   4.1. The Classical Linear Model
   4.2. More About the Gauss–Markov Theorem
   4.3. Generalized Linear Models
   Problems
   Notes and References
5. MATRIX FACTORIZATIONS AND JACOBIANS
   5.1. Matrix Factorizations
   5.2. Jacobians
   Problems
   Notes and References
6. TOPOLOGICAL GROUPS AND INVARIANT MEASURES
   6.1. Groups
   6.2. Invariant Measures and Integrals
   6.3. Invariant Measures on Quotient Spaces
   6.4. Transformations and Factorizations of Measures
   Problems
   Notes and References
7. FIRST APPLICATIONS OF INVARIANCE
   7.1. Left On Invariant Distributions on n × p Matrices
   7.2. Groups Acting on Sets
   7.3. Invariant Probability Models
   7.4. The Invariance of Likelihood Methods
   7.5. Distribution Theory and Invariance
   7.6. Independence and Invariance
   Problems
   Notes and References
8. THE WISHART DISTRIBUTION
   8.1. Basic Properties
   8.2. Partitioning a Wishart Matrix
   8.3. The Noncentral Wishart Distribution
   8.4. Distributions Related to Likelihood Ratio Tests
   Problems
   Notes and References
9. INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODELS
   9.1. The MANOVA Model
   9.2. MANOVA Problems with Block Diagonal Covariance Structure
   9.3. Intraclass Covariance Structure
   9.4. Symmetry Models: An Example
   9.5. Complex Covariance Structures
   9.6. Additional Examples of Linear Models
   Problems
   Notes and References
10. CANONICAL CORRELATION COEFFICIENTS
   10.1. Population Canonical Correlation Coefficients
   10.2. Sample Canonical Correlations
   10.3. Some Distribution Theory
   10.4. Testing for Independence
   10.5. Multivariate Regression
   Problems
   Notes and References
APPENDIX
COMMENTS ON SELECTED PROBLEMS
BIBLIOGRAPHY
INDEX
Preface The purpose of this book is to present a version of multivariate statistical theory in which vector space and invariance methods replace, to a large extent, more traditional multivariate methods. The book is a text. Over the past ten years, various versions have been used for graduate multivariate courses at the University of Chicago, the University of Copenhagen, and the University of Minnesota. Designed for a one year lecture course or for independent study, the book contains a full complement of problems and problem solutions. My interest in using vector space methods in multivariate analysis was aroused by William Kruskal’s success with such methods in univariate linear model theory. In the late 1960s, I had the privilege of teaching from Kruskal’s lecture notes where a coordinate free (vector space) approach to univariate analysis of variance was developed. (Unfortunately, Kruskal’s notes have not been published.) This approach provided an elegant unification of linear model theory together with many useful geometric insights. In addition, I found the pedagogical advantages of the approach far outweighed the extra effort needed to develop the vector space machinery. Extending the vector space approach to multivariate situations became a goal, which is realized here. Basic material on vector spaces, random vectors, the normal distribution, and linear models take up most of the first half of this book. Invariance (group theoretic) arguments have long been an important research tool in multivariate analysis as well as in other areas of statistics. In fact, invariance considerations shed light on most multivariate hypothesis testing, estimation, and distribution theory problems. When coupled with vector space methods, invariance provides an important complement to the traditional distribution theory-likelihood approach to multivariate analysis. Applications of invariance to multivariate problems occur throughout the second half of this book. A brief summary of the contents and flavor of the ten chapters herein follows. In Chapter 1, the elements of vector space theory are presented. Since my approach to the subject is geometric rather than algebraic, there is an emphasis on inner product spaces where the notions of length, angle, and orthogonal projection make sense. Geometric topics of particular importance in multivariate analysis include singular value decompositions and angles between subspaces. Random vectors taking values in inner product spaces is the general topic of Chapter 2. Here, induced distributions, means, covariances, and independence are introduced in the inner product space setting. These results are then used to establish many traditional properties of the multivariate normal distribution in Chapter 3. In Chapter 4, a theory of linear models is given that applies directly to multivariate problems. This development, suggested by Kruskal’s treatment of univariate linear models, contains results that identify all the linear models to which the Gauss–Markov Theorem applies. Chapter 5 contains some standard matrix factorizations and some elementary Jacobians that are used in later chapters. In Chapter 6, the theory of invariant integrals (measures) is outlined. The many examples here were chosen to illustrate the theory and prepare the reader for the statistical applications to follow. A host of statistical applications of invariance, ranging from the invariance of likelihood methods to the use of invariance in deriving distributions and establishing indev
pendence, are given in Chapter 7. Invariance arguments are used throughout the remainder of the book. The last three chapters are devoted to a discussion of some traditional and not so traditional problems in multivariate analysis. Here, I have stressed the connections between classical likelihood methods, linear model considerations, and invariance arguments. In Chapter 8, the Wishart distribution is defined via its representation in terms of normal random vectors. This representation, rather than the form of the Wishart density, is used to derive properties of the Wishart distribution. Chapter 9 begins with a thorough discussion of the multivariate analysis of variance (MANOVA) model. Variations on the MANOVA model including multivariate linear models with structured covariances are the main topic of the rest of Chapter 9. An invariance argument that leads to the relationship between canonical correlations and angles between subspaces is the lead topic in Chapter 10. After a discussion of some distribution theory, the chapter closes with the connection between testing for independence and testing in multivariate regression models. Throughout the book, I have assumed that the reader is familiar with the basic ideas of matrix and vector algebra in coordinate spaces and has some knowledge of measure and integration theory. As for statistical prerequisites, a solid first year graduate course in mathematical statistics should suffice. The book is probably best read and used as it was written—from front to back. However, I have taught short (one quarter) courses on topics in MANOVA using the material in Chapters 1, 2, 3, 4, 8, and 9 as a basis. It is very difficult to compare this text with others on multivariate analysis. Although there may be a moderate amount of overlap with other texts, the approach here is sufficiently different to make a direct comparison inappropriate. Upon reflection, my attraction to vector space and invariance methods was, I think, motivated by a desire for a more complete understanding of multivariate statistical models and techniques. Over the years, I have found vector space ideas and invariance arguments have served me well in this regard. There are many multivariate topics not even mentioned here. These include discrimination and classification, factor analysis, Bayesian multivariate analysis, asymptotic results and decision theory results. Discussions of these topics can be found in one or more of the books listed in the Bibliography. As multivariate analysis is a relatively old subject within statistics, a bibliography of the subject is very large. For example, the entries in A Bibliography of Multivariate Analysis by T. W. Anderson, S. Das Gupta, and G. H. P. Styan, published in 1972, number over 6000. The condensed bibliography here contains a few of the important early papers plus a sample of some recent work that reflects my bias. A more balanced view of the subject as a whole can be obtained by perusing the bibliographies of the multivariate texts listed in the Bibliography. My special thanks go to the staff of the Institute of Mathematical Statistics at the University of Copenhagen for support and encouragement. It was at their invitation that I spent the 1971–1972 academic year at the University of Copenhagen lecturing on multivariate analysis. These lectures led to Multivariate Statistical Analysis, which contains some of the ideas and the flavor of this book. Much of the work herein was completed during a second visit to Copenhagen in 1977–1978. 
Portions of the work have been supported by the National Science Foundation and the University of Minnesota. This generous support is gratefully acknowledged. A number of people have read different versions of my manuscript and have made a host of constructive suggestions. Particular thanks go to Michael Meyer, whose good sense of pedagogy led to major revisions in a number of places. Others whose
help I would like to acknowledge are Murray Clayton, Siu Chuen Ho, and Takeaki Kariya. Most of the typing of the manuscript was done by Hanne Hansen. Her efforts are very much appreciated. For their typing of various corrections, addenda, changes, and so on, I would like to thank Melinda Hutson, Catherine Stepnes, and Victoria Wagner.
Morris L. Eaton Minneapolis, Minnesota May 1983
Notation

(V, (·, ·))  an inner product space: vector space V and inner product (·, ·)
L(V, W)  the vector space of linear transformations on V to W
Gl(V)  the group of nonsingular linear transformations on V to V
O(V)  the orthogonal group of the inner product space (V, (·, ·))
Rn  Euclidean coordinate space of all n-dimensional column vectors
Lp,n  the linear space of all n × p real matrices
Gln  the group of n × n nonsingular matrices
On  the group of n × n orthogonal matrices
Fp,n  the space of n × p real matrices whose p columns form an orthonormal set in Rn
G+T  the group of lower triangular matrices with positive diagonal elements (dimension implied by context)
G+U  the group of upper triangular matrices with positive diagonal elements (dimension implied by context)
S+p  the set of p × p real symmetric positive definite matrices
A > 0  the matrix or linear transformation A is positive definite
A ≥ 0  A is positive semidefinite (non-negative definite)
det  determinant
tr  trace
x□y  the outer product of the vectors x and y
A ⊗ B  the Kronecker product of the linear transformations A and B
Δr  the right-hand modulus of a locally compact topological group
L(·)  the distributional law of "·"
N(μ, Σ)  the normal distribution with mean μ and covariance Σ on an inner product space
W(Σ, p, n)  the Wishart distribution with n degrees of freedom and p × p parameter matrix Σ
CHAPTER 1
Vector Space Theory
In order to understand the structure and geometry of multivariate distributions and associated statistical problems, it is essential that we be able to distinguish those aspects of multivariate distributions that can be described without reference to a coordinate system and those that cannot. Finite dimensional vector space theory provides us with a framework in which it becomes relatively easy to distinguish between coordinate free and coordinate concepts. It is fair to say that the material presented in this chapter furnishes the language we use in the rest of this book to describe many of the geometric (coordinate free) and coordinate properties of multivariate probability models. The treatment of vector spaces here is far from complete, but those aspects of the theory that arise in later chapters are covered. Halmos (1958) has been followed quite closely in the first two sections of this chapter, and because of space limitations, proofs sometimes read "see Halmos (1958)." The material in this chapter runs from the elementary notions of basis, dimension, linear transformation, and matrix to inner product space, orthogonal projection, and the spectral theorem for self-adjoint linear transformations. In particular, the linear space of linear transformations is studied in detail, and the chapter ends with a discussion of what is commonly known as the singular value decomposition theorem. Most of the vector spaces here are finite dimensional real vector spaces, although excursions into infinite dimensions occur via applications of the Cauchy–Schwarz Inequality. As might be expected, we introduce complex coordinate spaces in the discussion of determinants and eigenvalues. Multilinear algebra and tensors are not covered systematically, although the outer product of vectors and the Kronecker product of linear transformations are covered. It was felt that the simplifications and generality obtained by introducing tensors were not worth the price in terms of added notation, vocabulary, and abstractness.
1.1. VECTOR SPACES

Let R denote the set of real numbers. Elements of R, called scalars, are denoted by α, β, . . . .

Definition 1.1. A set V, whose elements are called vectors, is called a real vector space if:

(I) To each pair of vectors x, y ∈ V, there is a vector x + y ∈ V, called the sum of x and y, and for all vectors in V,
(i) x + y = y + x.
(ii) (x + y) + z = x + (y + z).
(iii) There exists a unique vector 0 ∈ V such that x + 0 = x for all x.
(iv) For each x ∈ V, there is a unique vector −x such that x + (−x) = 0.

(II) For each α ∈ R and x ∈ V, there is a vector denoted by αx ∈ V, called the product of α and x, and for all scalars and vectors,
(i) α(βx) = (αβ)x.
(ii) 1x = x.
(iii) (α + β)x = αx + βx.
(iv) α(x + y) = αx + αy.

In II(iii), (α + β)x means the sum of the two scalars, α and β, times x, while αx + βx means the sum of the two vectors, αx and βx. This multiple use of the plus sign should not cause any confusion. The reason for calling V a real vector space is that multiplication of vectors by real numbers is permitted.

A classical example of a real vector space is the set Rn of all ordered n-tuples of real numbers. An element of Rn, say x, is represented as a column,

x = (x1, . . . , xn)′,

and xi is called the ith coordinate of x. The vector x + y has ith coordinate xi + yi, and αx, α ∈ R, is the vector with coordinates αxi, i = 1, . . . , n. With 0 ∈ Rn representing the vector of all zeroes, it is routine to check that Rn is a real vector space. Vectors in the coordinate space Rn are always represented by a column of n real numbers as indicated above. For typographical convenience, a vector is often written as a row and appears as x′ = (x1, . . . , xn). The prime denotes the transpose of the vector x ∈ Rn. The following example provides a method of constructing real vector spaces and yields the space Rn as a special case.
Example 1.1. Let X be a set. The set V is the collection of all the real-valued functions defined on X. For any two elements x1, x2 ∈ V, define x1 + x2 as the function on X whose value at t is x1(t) + x2(t). Also, if α ∈ R and x ∈ V, αx is the function on X given by (αx)(t) = αx(t). The symbol 0 ∈ V is the zero function. It is easy to verify that V is a real vector space with these definitions of addition and scalar multiplication. When X = {1, 2, . . . , n}, then V is just the real vector space Rn and x ∈ Rn has as its ith coordinate the value of x at i ∈ X. Every vector space discussed in the sequel is either V (for some set X) or a linear subspace (to be defined in a moment) of some V.
Before defining the dimension of a vector space, we need to discuss linear dependence and independence. The treatment here follows Halmos (1958, Sections 5-9). Let V be a real vector space.
Definition 1.2. A finite set of vectors {xi | i = 1, . . . , k} is linearly dependent if there exist real numbers α1, . . . , αk, not all zero, such that Σαixi = 0. Otherwise, {xi | i = 1, . . . , k} is linearly independent.

A brief word about summation notation. Ordinarily, we do not indicate indices of summation on a summation sign when the range of summation is clear from the context. For example, in Definition 1.2, the index i was specified to range between 1 and k before the summation on i appeared; hence, no range was indicated on the summation sign. An arbitrary subset S ⊆ V is linearly independent if every finite subset of S is linearly independent. Otherwise, S is linearly dependent.
Definition 1.3. A basis for a vector space V is a linearly independent set S such that every vector in V is a linear combination of elements of S. V is finite dimensional if it has a finite set S that is a basis.

Example 1.2. Take V = Rn and let εi = (0, . . . , 0, 1, 0, . . . , 0)′ where the one occurs as the ith coordinate of εi, i = 1, . . . , n. For x ∈ Rn, it is clear that x = Σxiεi, where xi is the ith coordinate of x. Thus every vector in Rn is a linear combination of ε1, . . . , εn. To show that {εi | i = 1, . . . , n} is a linearly independent set, suppose Σαiεi = 0 for some scalars αi, i = 1, . . . , n. Then x = Σαiεi = 0 has αi as its ith coordinate, so αi = 0, i = 1, . . . , n. Thus {εi | i = 1, . . . , n} is a basis for Rn and Rn is finite dimensional. The basis {εi | i = 1, . . . , n} is called the standard basis for Rn.

Let V be a finite dimensional real vector space. The basic properties of linearly independent sets and bases are:

(i) If {x1, . . . , xk} is a linearly independent set in V, then there exist vectors xk+1, . . . , xk+m such that {x1, . . . , xk+m} is a basis for V.
(ii) All bases for V have the same number of elements. The dimension of V is defined to be the number of elements in any basis.
(iii) Every set of n + 1 vectors in an n-dimensional vector space is linearly dependent.

Proofs of the above assertions can be found in Halmos (1958, Sections 5–8). The dimension of a finite dimensional vector space is denoted by dim(V). If {x1, . . . , xn} is a basis for V, then every x ∈ V is a unique linear combination of {x1, . . . , xn}, say x = Σαixi. That every x can be so expressed follows from the definition of a basis, and the uniqueness follows from the linear independence of {x1, . . . , xn}. The numbers α1, . . . , αn are called the coordinates of x in the basis {x1, . . . , xn}. Clearly, the coordinates of x depend on the order in which we write the basis. Thus by a basis we always mean an ordered basis. We now introduce the notion of a subspace of a vector space.
Definition 1.4. A nonempty subset M ⊆ V is a subspace (or linear manifold) of V if, for each x, y ∈ M and α, β ∈ R, αx + βy ∈ M.

A subspace M of a real vector space V is easily shown to satisfy the vector space axioms (with addition and scalar multiplication inherited from V), so subspaces are real vector spaces. It is not difficult to verify the following assertions (Halmos, 1958, Sections 10–12):

(i) The intersection of subspaces is a subspace.
(ii) If M is a subspace of a finite dimensional vector space V, then dim(M) ≤ dim(V).
(iii) Given an m-dimensional subspace M of an n-dimensional vector space V, there is a basis {x1, . . . , xm, . . . , xn} for V such that {x1, . . . , xm} is a basis for M.

Given any set S ⊆ V, span(S) is defined to be the intersection of all the subspaces that contain S—that is, span(S) is the smallest subspace that contains S. It is routine to show that span(S) is equal to the set of all linear combinations of elements of S. The subspace span(S) is often called the subspace spanned by the set S. If M and N are subspaces of V, then span(M ∪ N) is the set of all vectors of the form x + y where x ∈ M and y ∈ N. The suggestive notation M + N = {z | z = x + y, x ∈ M, y ∈ N} is used for span(M ∪ N) when M and N are subspaces. Using the fact that a linearly independent set can be extended to a basis in a finite dimensional vector space, we have the following. Let V be finite dimensional and suppose M and N are subspaces of V.

(i) Let m = dim(M), n = dim(N), and k = dim(M ∩ N). Then there exist vectors x1, . . . , xk, yk+1, . . . , ym, and zk+1, . . . , zn such that {x1, . . . , xk} is a basis for M ∩ N, {x1, . . . , xk, yk+1, . . . , ym} is a basis for M, {x1, . . . , xk, zk+1, . . . , zn} is a basis for N, and {x1, . . . , xk, yk+1, . . . , ym, zk+1, . . . , zn} is a basis for M + N. If k = 0, then {x1, . . . , xk} is interpreted as the empty set.
(ii) dim(M + N) = dim(M) + dim(N) − dim(M ∩ N).
(iii) There exists a subspace M1 ⊆ V such that M ∩ M1 = {0} and M + M1 = V.

Definition 1.5. If M and N are subspaces of V that satisfy M ∩ N = {0} and M + N = V, then M and N are complementary subspaces.
The technique of decomposing a vector space into two (or more) complementary subspaces arises again and again in the sequel. The basic property of such a decomposition is given in the following proposition.

Proposition 1.1. Suppose M and N are complementary subspaces in V. Then each x ∈ V has a unique representation x = y + z with y ∈ M and z ∈ N.

Proof. Since M + N = V, each x ∈ V can be written x = y1 + z1 with y1 ∈ M and z1 ∈ N. If x = y2 + z2 with y2 ∈ M and z2 ∈ N, then 0 = x − x = (y1 − y2) + (z1 − z2). Hence (y1 − y2) = −(z1 − z2), so (y1 − y2) ∈ M ∩ N = {0}. Thus y1 = y2. Similarly, z1 = z2.

The above proposition shows that we can decompose the vector space V into two vector spaces M and N and each x in V has a unique piece in M and in N. Thus x can be represented as (y, z) with y ∈ M and z ∈ N. Also, note that if x1, x2 ∈ V have the representations (y1, z1), (y2, z2), then αx1 + βx2 has the representation (αy1 + βy2, αz1 + βz2), for α, β ∈ R. In other words, the function that maps x into its decomposition (y, z) is linear. To make this a bit more precise, we now define the direct sum of two vector spaces.
Definition 1.6. Let V1 and V2 be two real vector spaces. The direct sum of V1 and V2, denoted by V1 ⊕ V2, is the set of all ordered pairs {x, y}, x ∈ V1, y ∈ V2, with the linear operations defined by α1{x1, y1} + α2{x2, y2} = {α1x1 + α2x2, α1y1 + α2y2}.

That V1 ⊕ V2 is a real vector space with the above operations can easily be verified. Further, identifying V1 with Ṽ1 = {{x, 0} | x ∈ V1} and V2 with Ṽ2 = {{0, y} | y ∈ V2}, we can think of V1 and V2 as complementary subspaces of V1 ⊕ V2, since Ṽ1 + Ṽ2 = V1 ⊕ V2 and Ṽ1 ∩ Ṽ2 = {0, 0}, which is the zero element in V1 ⊕ V2. The relation of the direct sum to our previous decomposition of a vector space should be clear.
Example 1.3. Consider V = Rn, n ≥ 2, and let p and q be positive integers such that p + q = n. Then Rp and Rq are both real vector spaces. Each element of Rn is an n-tuple of real numbers, and we can construct subspaces of Rn by setting some of these coordinates equal to zero. For example, consider M = {x ∈ Rn | x = (y′, 0′)′ with y ∈ Rp, 0 ∈ Rq} and N = {x ∈ Rn | x = (0′, z′)′ with 0 ∈ Rp and z ∈ Rq}. It is clear that dim(M) = p, dim(N) = q, M ∩ N = {0}, and M + N = Rn. The identification of Rp with M and Rq with N shows that it is reasonable to write Rp ⊕ Rq = Rp+q.
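Proposition 1.1 and Example 1.3 are easy to check numerically. The following NumPy sketch is an illustration only (the particular bases, the vector x, and the names BM and BN are assumptions, not from the text): the complementary subspaces M and N of R5 are given by bases stacked as columns, and the unique pieces y ∈ M and z ∈ N of a vector x are recovered by solving a single linear system.

```python
import numpy as np

# Bases of two complementary subspaces of R^5 (p = 2, q = 3), stacked as columns.
BM = np.array([[1., 0.],
               [0., 1.],
               [0., 0.],
               [0., 0.],
               [0., 0.]])
BN = np.array([[0., 0., 0.],
               [0., 0., 0.],
               [1., 0., 0.],
               [0., 1., 0.],
               [0., 0., 1.]])

x = np.array([5., -2., 7., 0., 3.])

# Because M + N = R^5 and M and N intersect only in 0, the stacked matrix
# [BM BN] is nonsingular, so the coefficients of x in the combined basis are unique.
coef = np.linalg.solve(np.hstack([BM, BN]), x)
y = BM @ coef[:2]     # piece of x in M
z = BN @ coef[2:]     # piece of x in N

assert np.allclose(x, y + z)
print(y, z)
```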
1.2. LINEAR TRANSFORMATIONS

Linear transformations occupy a central position, both in vector space theory and in multivariate analysis. In this section, we discuss the basic properties of linear transformations, leaving the deeper results for consideration after the introduction of inner products. Let V and W be real vector spaces.

Definition 1.7. Any function A defined on V and taking values in W is called a linear transformation if A(α1x1 + α2x2) = α1A(x1) + α2A(x2) for all x1, x2 ∈ V and α1, α2 ∈ R.
Frequently, A(x) is written Ax when there is no danger of confusion. Let L(V, W) be the set of all linear transformations on V to W. For two linear transformations A1 and A2 in L(V, W), A1 + A2 is defined by (A1 + A2)(x) = A1x + A2x, and (αA)(x) = αAx for α ∈ R. The zero linear transformation is denoted by 0. It should be clear that L(V, W) is a real vector space with these definitions of addition and scalar multiplication.
Example 1.4. Suppose dim(V) = m and let x1, . . . , xm be a basis for V. Also, let y1, . . . , ym be arbitrary vectors in W. The claim is that there is a unique linear transformation A such that Axi = yi, i = 1, . . . , m. To see this, consider x ∈ V and express x as a unique linear combination of the basis vectors, x = Σαixi. Define A by

Ax = Σαiyi.

The linearity of A is easy to check. To show that A is unique, let B be another linear transformation with Bxi = yi, i = 1, . . . , m. Then (A − B)(xi) = 0 for i = 1, . . . , m, and (A − B)(x) = (A − B)(Σαixi) = Σαi(A − B)(xi) = 0 for all x ∈ V. Thus A = B.
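To make the construction in Example 1.4 concrete, here is a small numerical sketch (the basis, the target vectors, and the use of NumPy are illustrative assumptions, not part of the text). If the basis x1, . . . , xm of Rm is stacked as the columns of X and the targets y1, . . . , ym as the columns of Y, then the unique A with Axi = yi has standard-basis matrix A = Y X⁻¹, since x = Σαixi has coordinate vector α = X⁻¹x and Ax = Yα.

```python
import numpy as np

# Illustrative data: a basis of R^3 (columns of X) and arbitrary targets in R^2 (columns of Y).
X = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [0., 0., 1.]])
Y = np.array([[2., 0., 1.],
              [1., 3., -1.]])

# The unique A with A x_i = y_i: in standard coordinates, A = Y X^{-1}.
A = Y @ np.linalg.inv(X)

# A maps each basis vector to the prescribed target.
assert np.allclose(A @ X, Y)

# For any x, write x = sum_i alpha_i x_i; then A x = sum_i alpha_i y_i.
x = np.array([2., -1., 4.])
alpha = np.linalg.solve(X, x)     # coordinates of x in the basis {x_i}
assert np.allclose(A @ x, Y @ alpha)
print(A)
```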
The above example illustrates a general principle—namely, a linear transformation is completely determined by its values on a basis. This principle is used often to construct linear transformations with specified properties. A modification of the construction in Example 1.4 yields a basis for L(V, W) when V and W are both finite dimensional. This basis is given in the proof of the following proposition.

Proposition 1.2. If dim(V) = m and dim(W) = n, then dim(L(V, W)) = mn.
Proof. Let x1, . . . , xm be a basis for V and let y1, . . . , yn be a basis for W. Define a linear transformation Aji, i = 1, . . . , m and j = 1, . . . , n, by

Aji(xk) = δik yj, k = 1, . . . , m.

For each (j, i), Aji has been defined on a basis in V, so the linear transformation Aji is uniquely determined. We now claim that {Aji | i = 1, . . . , m; j = 1, . . . , n} is a basis for L(V, W). To show linear independence, suppose ΣΣαjiAji = 0. Then for each k = 1, . . . , m,

0 = ΣΣαjiAji(xk) = Σj αjk yj.

Since {y1, . . . , yn} is a linearly independent set, this implies that αjk = 0 for all j and k. Thus linear independence holds. To show every A ∈ L(V, W) is a linear combination of the Aji, first note that Axi is a vector in W and thus is a unique linear combination of y1, . . . , yn, say Axi = Σj aji yj where aji ∈ R. However, the linear transformation ΣΣajiAji evaluated at xi is

ΣΣajkAjk(xi) = Σj aji yj = Axi.

Since A and ΣΣajiAji agree on a basis in V, they are equal. This completes the proof since there are mn elements in the basis {Aji | i = 1, . . . , m; j = 1, . . . , n} for L(V, W).

Since L(V, W) is a vector space, general results about vector spaces, of course, apply to L(V, W). However, linear transformations have many interesting properties not possessed by vectors in general. For example, consider vector spaces Vi, i = 1, 2, 3. If A ∈ L(V1, V2) and B ∈ L(V2, V3), then we can compose the functions B and A by defining (BA)(x) = B(A(x)). The linearity of A and B implies that BA is a linear transformation on V1 to V3—that is, BA ∈ L(V1, V3). Usually, BA is called the product of B and A. There are two special cases of L(V, W) that are of particular interest. First, if A, B ∈ L(V, V), then AB ∈ L(V, V) and BA ∈ L(V, V), so we have a multiplication defined in L(V, V). However, this multiplication is not commutative—that is, AB is not, in general, equal to BA. Clearly, A(B + C) = AB + AC for A, B, C ∈ L(V, V). The identity linear transformation in L(V, V), usually denoted by I, satisfies AI = IA = A for all A ∈ L(V, V), since Ix = x for all x ∈ V. Thus L(V, V) is not only a vector space, but there is a multiplication defined in L(V, V). The second special case of L(V, W) we wish to consider is when W = R—that is, W is the one-dimensional real vector space R. The space L(V, R) is called the dual space of V and, if dim(V) = n, then dim(L(V, R)) = n. Clearly, L(V, R) is the vector space of all real-valued linear functions defined on V. We have more to say about L(V, R) after the introduction of inner products on V.
Understanding the geometry of linear transformations usually begins with a specification of the range and null space of the transformation. These objects are now defined. Let A ∈ L(V, W) where V and W are finite dimensional.

Definition 1.8. The range of A, denoted by R(A), is

R(A) = {u | u ∈ W, Ax = u for some x ∈ V}.

The null space of A, denoted by N(A), is

N(A) = {x | x ∈ V, Ax = 0}.

It is routine to verify that R(A) is a subspace of W and N(A) is a subspace of V. The rank of A, denoted by r(A), is the dimension of R(A).
Proposition 1.3. If A ∈ L(V, W) and n = dim(V), then r(A) + dim(N(A)) = n.

Proof. Let M be a subspace of V such that M ⊕ N(A) = V, and consider a basis {x1, . . . , xk} for M. Since dim(M) + dim(N(A)) = n, we need to show that k = r(A). To do this, it is sufficient to show that {Ax1, . . . , Axk} is a basis for R(A). If 0 = ΣαiAxi = A(Σαixi), then Σαixi ∈ M ∩ N(A), so Σαixi = 0. Hence α1 = · · · = αk = 0 as {x1, . . . , xk} is a basis for M. Thus {Ax1, . . . , Axk} is a linearly independent set. To verify that {Ax1, . . . , Axk} spans R(A), suppose w ∈ R(A). Then w = Ax for some x ∈ V. Write x = y + z where y ∈ M and z ∈ N(A). Then w = A(y + z) = Ay. Since y ∈ M, y = Σαixi for some scalars α1, . . . , αk. Therefore, w = A(Σαixi) = ΣαiAxi.
Definition 1.9. A linear transformation A ∈ L(V, V) is called invertible if there exists a linear transformation, denoted by A⁻¹, such that AA⁻¹ = A⁻¹A = I.

The following assertions hold; see Halmos (1958, Section 36):

(i) A is invertible iff R(A) = V iff Ax = 0 implies x = 0.
(ii) If A, B, C ∈ L(V, V) and if AB = CA = I, then A is invertible and B = C = A⁻¹.
(iii) If A and B are invertible, then AB is invertible and (AB)⁻¹ = B⁻¹A⁻¹. If A is invertible and α ≠ 0, then (αA)⁻¹ = α⁻¹A⁻¹ and (A⁻¹)⁻¹ = A.

In terms of bases, invertible transformations are characterized by the following.

Proposition 1.4. Let A ∈ L(V, V) and suppose {x1, . . . , xn} is a basis for V. The following are equivalent:

(i) A is invertible.
(ii) {Ax1, . . . , Axn} is a basis for V.

Proof. Suppose A is invertible. Since dim(V) = n, we must show {Ax1, . . . , Axn} is a linearly independent set. Thus if 0 = ΣαiAxi = A(Σαixi), then Σαixi = 0 since A is invertible. Hence αi = 0, i = 1, . . . , n, as {x1, . . . , xn} is a basis for V. Therefore, {Ax1, . . . , Axn} is a basis. Conversely, suppose {Ax1, . . . , Axn} is a basis. We show that Ax = 0 implies x = 0. First, write x = Σαixi, so Ax = 0 implies ΣαiAxi = 0. Hence αi = 0, i = 1, . . . , n, as {Ax1, . . . , Axn} is a basis. Thus x = 0, so A is invertible.

We now introduce real matrices and consider their relation to linear transformations. Consider vector spaces V and W of dimension m and n, respectively, and bases {x1, . . . , xm} and {y1, . . . , yn} for V and W. Each x ∈ V has a unique representation x = Σαixi. Let [x] denote the column vector of coordinates of x in the given basis. Thus [x] ∈ Rm and the ith coordinate of [x] is αi, i = 1, . . . , m. Similarly, [y] ∈ Rn is the column vector of y ∈ W in the basis {y1, . . . , yn}. Consider A ∈ L(V, W) and express Axj in the given basis of W, Axj = Σi aij yi for unique scalars aij, i = 1, . . . , n, j = 1, . . . , m. The n × m rectangular array of real scalars

[A] = {aij : i = 1, . . . , n; j = 1, . . . , m}

is called the matrix of A relative to the two given bases. Conversely, given any n × m rectangular array of real scalars {aij}, i = 1, . . . , n, j = 1, . . . , m, the linear transformation A defined by Axj = Σi aij yi has as its matrix [A] = {aij}.

Definition 1.10. A rectangular array {aij} : m × n of real scalars is called an m × n matrix. If A = {aij} : m × n is a matrix and B = {bij} : n × p is a matrix, then C = AB, called the matrix product of A and B (in that order), is defined to be the matrix {cij} : m × p with cij = Σk aik bkj.
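As a quick numerical illustration of the row-by-column rule in Definition 1.10 (an illustrative NumPy sketch with made-up matrices, not code from the text), the entry cij = Σk aik bkj can be computed directly and compared with the built-in matrix product:

```python
import numpy as np

# Illustrative matrices: A is 2 x 3, B is 3 x 4, so AB is 2 x 4.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))

# Matrix product by the row-by-column rule: c_ij = sum_k a_ik * b_kj.
C = np.zeros((2, 4))
for i in range(2):
    for j in range(4):
        C[i, j] = sum(A[i, k] * B[k, j] for k in range(3))

# Agrees with NumPy's built-in matrix multiplication.
assert np.allclose(C, A @ B)
print(C)
```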
In this book, the distinction between linear transformations, matrices, and the matrix of a linear transformation is always made. The notation [A] means the matrix of a linear transformation with respect to two given bases. However, symbols like A, B, or C may represent either linear transformations or real matrices; care is taken to clearly indicate which case is under consideration. Each matrix A = {aij} : m × n defines a linear transformation on Rn to Rm as follows. For x ∈ Rn with coordinates x1, . . . , xn, Ax is the vector y in Rm with coordinates yi = Σj aij xj, i = 1, . . . , m. Of course, this is the usual row by column rule of a matrix operating on a vector. The matrix of this linear transformation in the standard bases for Rn and Rm is just the matrix A. However, if the bases are changed, then the matrix of the linear transformation changes. When m = n, the matrix A = {aij} determines a linear transformation on Rn to Rn via the above definition of a matrix times a vector. The matrix A is called nonsingular (or invertible) if there exists a matrix, denoted by A⁻¹, such that AA⁻¹ = A⁻¹A = In where In is the n × n identity matrix consisting of ones on the diagonal and zeroes off the diagonal. As with linear transformations, A⁻¹ is unique and exists iff Ax = 0 implies x = 0. The symbol Ln,m denotes the real vector space of m × n real matrices with the usual operations of addition and scalar multiplication. In other words, if A = {aij} and B = {bij} are elements of Ln,m, then A + B = {aij + bij} and αA = {αaij}. Notice that Ln,m is the set of m × n matrices (m and n are in reverse order). The reason for writing Ln,m is that an m × n matrix determines a linear transformation from Rn to Rm. We have made the choice of writing L(V, W) for linear transformations from V to W, and it is an unpleasant fact that the dimensions of a matrix occur in reverse order to the dimensions of the spaces V and W. The next result summarizes the relations between linear transformations and matrices.

Proposition 1.5. Consider vector spaces V1, V2, and V3 with bases {x1, . . . , xn1}, {y1, . . . , yn2}, and {z1, . . . , zn3}, respectively. For x ∈ V1, y ∈ V2, and z ∈ V3, let [x], [y], and [z] denote the vector of coordinates of x, y, and z in the given bases, so [x] ∈ Rn1, [y] ∈ Rn2, and [z] ∈ Rn3. For A ∈ L(V1, V2) and B ∈ L(V2, V3), let [A] ([B]) denote the matrix of A (B) relative to the bases {x1, . . . , xn1} and {y1, . . . , yn2} ({y1, . . . , yn2} and {z1, . . . , zn3}). Then:

(i) [Ax] = [A][x].
(ii) [BA] = [B][A].
(iii) If V1 = V2 and A is invertible, [A⁻¹] = [A]⁻¹. Here, [A⁻¹] and [A] are matrices in the bases {x1, . . . , xn1} and {x1, . . . , xn1}.
Proof. A few words are in order concerning the notation in (i), (ii), and (iii). In (i), [Ax] is the vector of coordinates of Ax ∈ V2 with respect to the basis {y1, . . . , yn2} and [A][x] means the matrix [A] times the coordinate vector [x] as defined previously. Since both sides of (i) are linear in x, it suffices to verify (i) for x = xj, j = 1, . . . , n1. But [A][xj] is just the column vector with coordinates aij, i = 1, . . . , n2, and Axj = Σi aij yi, so [Axj] is the column vector with coordinates aij, i = 1, . . . , n2. Hence (i) holds. For (ii), [B][A] is just the matrix product of [B] and [A]. Also, [BA] is the matrix of the linear transformation BA ∈ L(V1, V3) with respect to the bases {x1, . . . , xn1} and {z1, . . . , zn3}. To show that [BA] = [B][A], we must verify that, for all x ∈ V1, [BA][x] = [B][A][x]. But by (i), [BA][x] = [BAx] and, using (i) twice, [B][A][x] = [B][Ax] = [BAx]. Thus (ii) is established. In (iii), [A]⁻¹ denotes the inverse of the matrix [A]. Since A is invertible, AA⁻¹ = A⁻¹A = I where I is the identity linear transformation on V1 to V1. Thus by (ii), with In denoting the n × n identity matrix, In = [I] = [AA⁻¹] = [A][A⁻¹] = [A⁻¹A] = [A⁻¹][A]. By the uniqueness of the matrix inverse, [A⁻¹] = [A]⁻¹.

Projections are the final topic in this section. If V is a finite dimensional vector space and M and N are subspaces of V such that M ⊕ N = V, we have seen that each x ∈ V has a unique piece in M and a unique piece in N. In other words, x = y + z where y ∈ M, z ∈ N, and y and z are unique.

Definition 1.11. Given subspaces M and N in V such that M ⊕ N = V, if x = y + z with y ∈ M and z ∈ N, then y is called the projection of x on M along N and z is called the projection of x on N along M.

Since M and N play symmetric roles in the above definition, we concentrate on the projection on M.

Proposition 1.6. The function P mapping V into V whose value at x is the projection of x on M along N is a linear transformation that satisfies

(i) R(P) = M, N(P) = N.
(ii) P² = P.

Proof. We first show that P is linear. If x = y + z with y ∈ M, z ∈ N, then by definition, Px = y. Also, if x1 = y1 + z1 and x2 = y2 + z2 are the decompositions of x1 and x2, respectively, then α1x1 + α2x2 = (α1y1 + α2y2) + (α1z1 + α2z2) is the decomposition of α1x1 + α2x2. Thus P(α1x1 + α2x2) = α1Px1 + α2Px2, so P is linear. By definition Px ∈ M, so R(P) ⊆ M. But if x ∈ M, Px = x, and R(P) = M. Also, if x ∈ N, Px = 0 so N(P) ⊇ N. However, if Px = 0, then x = 0 + x, and therefore x ∈ N. Thus N(P) = N. To show P² = P, note that Px ∈ M and Px = x for x ∈ M. Hence, Px = P(Px) = P²x, which implies that P = P².

A converse to Proposition 1.6 gives a complete description of all linear transformations on V to V that satisfy A² = A.

Proposition 1.7. If A ∈ L(V, V) and satisfies A² = A, then R(A) ⊕ N(A) = V and A is the projection on R(A) along N(A).

Proof. To show R(A) ⊕ N(A) = V, we must verify that R(A) ∩ N(A) = {0} and that each x ∈ V is the sum of a vector in R(A) and a vector in N(A). If x ∈ R(A) ∩ N(A), then x = Ay for some y ∈ V and Ax = 0. Since A² = A, 0 = Ax = A²y = Ay = x, and R(A) ∩ N(A) = {0}. For x ∈ V, write x = Ax + (I − A)x and let y = Ax and z = (I − A)x. Then y ∈ R(A) by definition and Az = A(I − A)x = (A − A²)x = 0, so z ∈ N(A). Thus R(A) ⊕ N(A) = V. The verification that A is the projection on R(A) along N(A) goes as follows. A is zero on N(A) by definition. Also, for x ∈ R(A), x = Ay for some y ∈ V. Thus Ax = A²y = Ay = x, so Ax = x for x ∈ R(A). However, the projection on R(A) along N(A), say P, also satisfies Px = x for x ∈ R(A) and Px = 0 for x ∈ N(A). This implies that P = A since R(A) ⊕ N(A) = V.

The above proof shows that the projection on M along N is the unique linear transformation that is the identity on M and zero on N. Also, it is clear that P is the projection on M along N iff I − P is the projection on N along M.

1.3. INNER PRODUCT SPACES
The discussion of the previous section was concerned mainly with the linear aspects of vector spaces. Here, we introduce inner products on vector spaces so that the geometric notions of length, angle, and orthogonality become meaningful. Let us begin with an example.
Example 1.5. Consider coordinate space Rn with the standard basis {ε1, . . . , εn}. For x, y ∈ Rn, define x′y = Σxiyi where x and y have coordinates x1, . . . , xn and y1, . . . , yn. Of course, x′ is the transpose of the vector x and x′y can be thought of as the 1 × n matrix x′ times the n × 1 matrix y. The real number x′y is sometimes called the scalar product (or inner product) of x and y. Some properties of the scalar product are:

(i) x′y = y′x (symmetry).
(ii) x′y is linear in y for fixed x and linear in x for fixed y.
(iii) x′x = Σxi² ≥ 0 and is zero iff x = 0.

The norm of x, defined by ‖x‖ = (x′x)^1/2, can be thought of as the distance between x and 0 ∈ Rn. Hence, ‖x − y‖ = (Σ(xi − yi)²)^1/2 is usually called the distance between x and y. When x and y are both not zero, then the cosine of the angle between x and y is x′y/(‖x‖ ‖y‖) (see Halmos, 1958, p. 118). Thus we have a geometric interpretation of the scalar product. In particular, the angle between x and y is π/2 (cos π/2 = 0) iff x′y = 0. Thus we say x and y are orthogonal (perpendicular) iff x′y = 0.

Let V be a real vector space. An inner product on V is obtained by simply abstracting the properties of the scalar product on Rn.

Definition 1.12. An inner product on a real vector space V is a real-valued function on V × V, denoted by (·, ·), with the following properties:

(i) (x, y) = (y, x) (symmetry).
(ii) (α1x1 + α2x2, y) = α1(x1, y) + α2(x2, y) (linearity).
(iii) (x, x) ≥ 0 and (x, x) = 0 only if x = 0 (positivity).
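The geometry in Example 1.5 is easy to compute with. The short NumPy sketch below (the particular vectors are illustrative assumptions, not from the text) evaluates the scalar product, the norm, the distance, and the cosine of the angle between two vectors:

```python
import numpy as np

x = np.array([3.0, 0.0, 4.0])
y = np.array([0.0, 2.0, 0.0])

scalar_product = x @ y                      # x'y = sum_i x_i y_i
norm_x = np.sqrt(x @ x)                     # ||x|| = (x'x)^(1/2)
distance = np.linalg.norm(x - y)            # ||x - y||
cos_angle = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))

print(scalar_product, norm_x, distance, cos_angle)
# Here x'y = 0, so x and y are orthogonal and the angle is pi/2.
```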
From (i) and (ii) it follows that (x, α1y1 + α2y2) = α1(x, y1) + α2(x, y2). In other words, inner products are linear in each variable when the other variable is fixed. The norm of x, denoted by ‖x‖, is defined to be ‖x‖ = (x, x)^1/2 and the distance between x and y is ‖x − y‖. Hence geometrically meaningful names and properties related to the scalar product on Rn have become definitions on V. To establish the existence of inner products on finite dimensional vector spaces, we have the following proposition.

Proposition 1.8. Suppose {x1, . . . , xn} is a basis for the real vector space V. The function (·, ·) defined on V × V by (x, y) = Σαiβi, where x = Σαixi and y = Σβixi, is an inner product on V.

Proof. Clearly (x, y) = (y, x). If x = Σαixi and z = Σγixi, then (αx + γz, y) = Σ(ααi + γγi)βi = αΣαiβi + γΣγiβi = α(x, y) + γ(z, y). This establishes the linearity. Also, (x, x) = Σαi², which is zero iff all the αi are zero, and this is equivalent to x being zero. Thus (·, ·) is an inner product on V.

A vector space V with a given inner product (·, ·) is called an inner product space.
Definition 1.13. Two vectors x and y in an inner product space (V, (·, ·)) are orthogonal, written x ⊥ y, if (x, y) = 0. Two subsets S1 and S2 of V are orthogonal, written S1 ⊥ S2, if x ⊥ y for all x ∈ S1 and y ∈ S2.

Definition 1.14. Let (V, (·, ·)) be a finite dimensional inner product space. A set of vectors {x1, . . . , xk} is called an orthonormal set if (xi, xj) = δij for i, j = 1, . . . , k, where δij = 1 if i = j and 0 if i ≠ j. A set {x1, . . . , xn} is called an orthonormal basis if the set is both a basis and is orthonormal.
First note that an orthonormal set {x1, . . . , xk} is linearly independent. To see this, suppose 0 = Σαixi. Then 0 = (0, xj) = (Σαixi, xj) = Σαi(xi, xj) = Σi αiδij = αj. Hence αj = 0 for j = 1, . . . , k and the set {x1, . . . , xk} is linearly independent. In Proposition 1.8, the basis used to define the inner product is, in fact, an orthonormal basis for the inner product. Also, the standard basis for Rn is an orthonormal basis for the scalar product on Rn—this scalar product is called the standard inner product on Rn. An algorithm for constructing orthonormal sets from linearly independent sets is now given. It is known as the Gram–Schmidt orthogonalization procedure.

Proposition 1.9. Let {x1, . . . , xk} be a linearly independent set in the inner product space (V, (·, ·)). Define vectors y1, . . . , yk as follows: y1 = x1/‖x1‖ and

yi+1 = (xi+1 − Σj≤i (xi+1, yj)yj) / ‖xi+1 − Σj≤i (xi+1, yj)yj‖

for i = 1, . . . , k − 1. Then {y1, . . . , yk} is an orthonormal set and span{x1, . . . , xi} = span{y1, . . . , yi}, i = 1, . . . , k.

Proof. See Halmos (1958, Section 65).
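A direct implementation of the Gram–Schmidt recursion of Proposition 1.9, for the standard inner product on Rn, might look like the following NumPy sketch (an illustration under the assumption that the input vectors are linearly independent; not code from the text):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors in R^n
    with respect to the standard inner product (x, y) = x'y."""
    ortho = []
    for x in vectors:
        # Subtract from x its components along the previously built y_j.
        for y in ortho:
            x = x - (x @ y) * y
        ortho.append(x / np.linalg.norm(x))
    return ortho

x1 = np.array([1.0, 1.0, 0.0])
x2 = np.array([1.0, 0.0, 1.0])
x3 = np.array([0.0, 1.0, 1.0])

Y = np.column_stack(gram_schmidt([x1, x2, x3]))
# The columns of Y are orthonormal: Y'Y should be the identity.
assert np.allclose(Y.T @ Y, np.eye(3))
print(Y)
```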
An immediate consequence of Proposition 1.9 is that if {x1, . . . , xn} is a basis for V, then {y1, . . . , yn} constructed above is an orthonormal basis for (V, (·, ·)). If {y1, . . . , yn} is an orthonormal basis for (V, (·, ·)), then each x in V has the representation x = Σ(x, yi)yi in the given basis. To see this, we know x = Σαiyi for unique scalars α1, . . . , αn. Thus

(x, yj) = (Σαiyi, yj) = Σαiδij = αj.

Therefore, the coordinates of x in the orthonormal basis are (x, yi), i = 1, . . . , n. Also, it follows that (x, x) = Σ(x, yi)². Recall that the dual space of V was defined to be the set of all real-valued linear functions on V and was denoted by L(V, R). Also dim(V) = dim(L(V, R)) when V is finite dimensional. The identification of V with L(V, R) via a given inner product is described in the following proposition.

Proposition 1.10. If (V, (·, ·)) is a finite dimensional inner product space and if f ∈ L(V, R), then there exists a vector x0 ∈ V such that f(x) = (x0, x) for x ∈ V. Conversely, (x0, ·) is a linear function on V for each x0 ∈ V.

Proof. Let x1, . . . , xn be an orthonormal basis for V and set αi = f(xi) for i = 1, . . . , n. For x0 = Σαixi, it is clear that (x0, xi) = αi = f(xi). Since the two linear functions f and (x0, ·) agree on a basis, they are the same function. Thus f(x) = (x0, x) for x ∈ V. The converse is clear.

Definition 1.15. If S is a subset of V, the orthogonal complement of S, denoted by S⊥, is S⊥ = {x | x ⊥ y for all y ∈ S}.

It is easily verified that S⊥ is a subspace of V for any set S, and S ⊥ S⊥. The next result provides a basic decomposition for a finite dimensional inner product space.

Proposition 1.11. Suppose M is a k-dimensional subspace of an n-dimensional inner product space (V, (·, ·)). Then

(i) M ∩ M⊥ = {0}.
(ii) M ⊕ M⊥ = V.
(iii) (M⊥)⊥ = M.

Proof. Let {x1, . . . , xn} be a basis for V such that {x1, . . . , xk} is a basis for M. Applying the Gram–Schmidt process to {x1, . . . , xn}, we get an orthonormal basis {y1, . . . , yn} such that {y1, . . . , yk} is a basis for M. Let N = span{yk+1, . . . , yn}. We claim that N = M⊥. It is clear that N ⊆ M⊥ since yj ⊥ M for j = k + 1, . . . , n. But if x ∈ M⊥, then x = Σ(x, yi)yi and (x, yi) = 0 for i = 1, . . . , k since x ∈ M⊥; that is, x = Σi>k (x, yi)yi ∈ N. Therefore, M⊥ = N. Assertions (i) and (ii) now follow easily. For (iii), M⊥ is spanned by {yk+1, . . . , yn} and, arguing as above, (M⊥)⊥ must be spanned by y1, . . . , yk, which is just M.
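Proposition 1.11 can be checked numerically. In the NumPy sketch below (an illustration with made-up data, using a QR factorization as a stand-in for the Gram–Schmidt process of Proposition 1.9), M is the column space of a matrix, M⊥ is spanned by the remaining orthonormal directions, and a vector is split into its two orthogonal pieces:

```python
import numpy as np

# M is the span of the columns of X (a 2-dimensional subspace of R^4).
X = np.array([[1., 0.],
              [1., 1.],
              [0., 1.],
              [0., 0.]])

# QR of X padded with random columns gives an orthonormal basis of R^4 whose
# first k columns span M and whose last n - k columns span M-perp.
n, k = X.shape
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(np.hstack([X, rng.standard_normal((n, n - k))]))
M_basis, Mperp_basis = Q[:, :k], Q[:, k:]

x = np.array([1., 2., 3., 4.])
y = M_basis @ (M_basis.T @ x)          # piece of x in M
z = Mperp_basis @ (Mperp_basis.T @ x)  # piece of x in M-perp

assert np.allclose(x, y + z)                        # M + M-perp = V
assert np.allclose(M_basis.T @ Mperp_basis, 0.0)    # M is orthogonal to M-perp
print(y, z)
```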
The decomposition V = M ⊕ M⊥ of an inner product space is called an orthogonal direct sum decomposition. More generally, if M1, . . . , Mk are subspaces of V such that Mi ⊥ Mj for i ≠ j and V = M1 ⊕ M2 ⊕ · · · ⊕ Mk, we also speak of the orthogonal direct sum decomposition of V. As we have seen, every direct sum decomposition of a finite dimensional vector space has associated with it two projections. When V is an inner product space and V = M ⊕ M⊥, then the projection on M along M⊥ is called the orthogonal projection onto M. If P is the orthogonal projection onto M, then I − P is the orthogonal projection onto M⊥. The thing that makes a projection an orthogonal projection is that its null space must be the orthogonal complement of its range. After introducing adjoints of linear transformations, a useful characterization of orthogonal projections is given.

When (V, (·, ·)) is an inner product space, a number of special types of linear transformations in L(V, V) arise. First, we discuss the adjoint of a linear transformation. For A ∈ L(V, V), consider (x, Ay). For x fixed, (x, Ay) is a linear function of y, and, by Proposition 1.10, there exists a unique vector (which depends on x) z(x) ∈ V such that (x, Ay) = (z(x), y) for all y ∈ V. Thus z defines a function from V to V that takes x into z(x). However, the verification that z(α1x1 + α2x2) = α1z(x1) + α2z(x2) is routine. Thus the function z is a linear transformation on V to V, and this leads to the following definition.

Definition 1.16. For A ∈ L(V, V), the unique linear transformation in L(V, V), denoted by A′, which satisfies (x, Ay) = (A′x, y) for all x, y ∈ V, is called the adjoint (or transpose) of A.

The uniqueness of A′ in Definition 1.16 follows from the observation that if (Bx, y) = (Cx, y) for all x, y ∈ V, then ((B − C)x, y) = 0. Taking y = (B − C)x yields ((B − C)x, (B − C)x) = 0 for all x, so (B − C)x = 0 for all x. Hence B = C.

Proposition 1.12. If A, B ∈ L(V, V), then (AB)′ = B′A′, and if A is invertible, then (A⁻¹)′ = (A′)⁻¹. Also, (A′)′ = A.
VECTOR SPACE THEORY
Proof. (AB)' is the transformation in C(V, V) that satisfies ((AB)'x, y) = (x, ABy). Using the definition of A' and B', (x, ABy) = (A'x, By) = (B'A'x, y). Thus (AB)' = B'A'. The other assertions are proved similarly. Definition 1.17. A linear transformation in C(V, V) is called:
(i) Self-adjoint (or symmetric) if A = A'. (ii) Skew symmetric if A' = -A. (iii) Orthogonal if (Ax, Ay) = (x, y) for x, y
E
V.
For self-adjoint transformations, A is: (iv) Non-negative definite (or positive semidefinite) if (x, Ax) 2 0 for x E V. (v) Positive definite if (x, Ax) > 0 for all x * 0. The remainder of this section is concerned with a variety of descriptions and characterizations of the classes of transfcrmations defined above. Proposition 1.13. Let A
(i) (ii) (iii) (iv)
E
C (V, V). Then
% ( A ) = (%(A'))'. %(A) = % ( AA'). %(A) = % ( A'A). r( A) = r( A')
Proof. Assertion (i) is equivalent to (R(A))⊥ = N(A'). But x ∈ N(A') means that 0 = (y, A'x) for all y ∈ V, and this is equivalent to x ⊥ R(A) since (y, A'x) = (Ay, x). This proves (i). For (ii), it is clear that R(AA') ⊆ R(A). If x ∈ R(A), then x = Ay for some y ∈ V. Write y = y_1 + y_2 where y_1 ∈ R(A') and y_2 ∈ (R(A'))⊥. From (i), (R(A'))⊥ = N(A), so Ay_2 = 0. Since y_1 ∈ R(A'), y_1 = A'z for some z ∈ V. Thus x = Ay = Ay_1 = AA'z, so x ∈ R(AA'). To prove (iii), if Ax = 0, then A'Ax = 0, so N(A) ⊆ N(A'A). Conversely, if A'Ax = 0, then 0 = (x, A'Ax) = (Ax, Ax), so Ax = 0, and N(A'A) ⊆ N(A). Since dim(R(A)) + dim(N(A)) = dim(V), dim(R(A')) + dim(N(A')) = dim(V), and R(A) = (N(A'))⊥, it follows that r(A) = r(A'). □
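As an aside (our own sketch, not from the text), the following NumPy check illustrates parts (ii) and (iv) of Proposition 1.13 for matrices with the standard inner product, where the adjoint is the transpose; the helper `colspace` is an assumption of this illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 4)) @ rng.normal(size=(4, 6))   # a 6 x 6 matrix of (generically) rank 4

# (iv): A and A' have the same rank.
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T)

def colspace(M, tol=1e-10):
    # Orthonormal basis of the column space R(M), via the singular value decomposition.
    u, s, _ = np.linalg.svd(M)
    return u[:, s > tol]

# (ii): R(A) = R(AA'); the orthogonal projections onto the two column spaces agree.
QA, QAA = colspace(A), colspace(A @ A.T)
assert np.allclose(QA @ QA.T, QAA @ QAA.T)
```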
If A ∈ C(V, V) and r(A) = 0, then A = 0 since A must map everything into 0 ∈ V. We now discuss the rank one linear transformations and show that these can be thought of as the "building blocks" for C(V, V).

Proposition 1.14. For A ∈ C(V, V), the following are equivalent:

(i) r(A) = 1.
(ii) There exist x_0 ≠ 0 and y_0 ≠ 0 in V such that Ax = (y_0, x) x_0 for x ∈ V.

Proof. That (ii) implies (i) is clear since, if Ax = (y_0, x) x_0, then R(A) = span{x_0}, which is one-dimensional. Thus suppose r(A) = 1. Since R(A) is one-dimensional, there exists x_0 ∈ R(A) with x_0 ≠ 0 and R(A) = span{x_0}. As Ax ∈ R(A) for all x, Ax = a(x) x_0 where a(x) is some scalar that depends on x. The linearity of A implies that a(β_1 x_1 + β_2 x_2) = β_1 a(x_1) + β_2 a(x_2). Thus a is a linear function on V and, by Proposition 1.10, a(x) = (y_0, x) for some y_0 ∈ V. Since a(x) ≠ 0 for some x ∈ V, y_0 ≠ 0. Therefore, (i) implies (ii). □
This description of the rank one linear transformations leads to the following definition.

Definition 1.18. Given x, y ∈ V, the outer product of x and y, denoted by x □ y, is the linear transformation on V to V whose value at z is (x □ y)z = (y, z)x.

Thus x □ y ∈ C(V, V) and x □ y = 0 iff x or y is zero. When x ≠ 0 and y ≠ 0, R(x □ y) = span{x} and N(x □ y) = (span{y})⊥. The result of Proposition 1.14 shows that every rank one transformation is an outer product of two nonzero vectors. The following properties of outer products are easily verified:

(i) x □ (a_1 y_1 + a_2 y_2) = a_1 x □ y_1 + a_2 x □ y_2.
(ii) (a_1 x_1 + a_2 x_2) □ y = a_1 x_1 □ y + a_2 x_2 □ y.
(iii) (x □ y)' = y □ x.
(iv) (x_1 □ y_1)(x_2 □ y_2) = (y_1, x_2) x_1 □ y_2.
One word of caution: the definition of the outer product depends on the inner product on V. When there is more than one inner product for V, care must be taken to indicate which inner product is being used to define the outer product. The claim that rank one linear transformations are the building blocks for C(V, V) is partially justified by the following proposition.
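In coordinates with the standard inner product on R^n, x □ y is just the matrix x y'; here is a short sketch of ours (the vectors and helper name are arbitrary) verifying properties (iii) and (iv) and the rank-one claim of Proposition 1.14.

```python
import numpy as np

def outer(x, y):
    # (x □ y)z = (y, z) x, i.e. the rank-one matrix x y' in the standard inner product.
    return np.outer(x, y)

x1, y1 = np.array([1., 2., 0.]), np.array([0., 1., 1.])
x2, y2 = np.array([2., -1., 3.]), np.array([1., 0., 1.])

# Property (iii): (x □ y)' = y □ x.
assert np.allclose(outer(x1, y1).T, outer(y1, x1))

# Property (iv): (x1 □ y1)(x2 □ y2) = (y1, x2) x1 □ y2.
assert np.allclose(outer(x1, y1) @ outer(x2, y2), (y1 @ x2) * outer(x1, y2))

# Every nonzero outer product has rank one (Proposition 1.14).
assert np.linalg.matrix_rank(outer(x1, y1)) == 1
```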
Proposition 1.15. Let {x_1, …, x_n} be an orthonormal basis for (V, (·, ·)). Then {x_i □ x_j ; i, j = 1, …, n} is a basis for C(V, V).

Proof. If A ∈ C(V, V), A is determined by the n² numbers a_ij = (x_i, Ax_j). But the linear transformation B = ΣΣ a_ij x_i □ x_j satisfies (x_i, Bx_j) = a_ij = (x_i, Ax_j) for i, j = 1, …, n. Thus B = A, so every A ∈ C(V, V) is a linear combination of {x_i □ x_j | i, j = 1, …, n}. Since dim(C(V, V)) = n², the result follows. □
Thus B = A so every A E e(V, V) is a linear combination of {xiOx,li, j = 1,. . . , n}. Since dim(C(V, V)) = n2, the result follows. q Using outer products, it is easy to give examples of self-adjoint linear transformations. First, since linear combinations of self-adjoint linear transformations are again self-adjoint, the set M of self-adjoint transformations is a subspace of C(V, V). Also, the set N of skew symmetric transformations is a subspace of C(V, V). It is clear that the only transformation that is both self-adjoint and skew symmetric is 0, so M n N = (0). But if A E C(V, V), then A = -A + A f 2
+-A - 2A '
'
A+A' EM, 2
and
A-A' 2
-E N
This shows that C(V, V) = M @ N. To give examples of elements of M, let x,,. . . , x, be an orthonormal basis for (V,(., .)). For each i, xiO xi is self-adjoint, so for scalars ai, B = CaixiOx, is self-adjoint. The geometry associated with the transformation B is interesting and easy to describe. = xi xi, SO xi xi is a projection on span{xi) Since llx,ll = 1, (xi along ( ~ p a n { x , ) -that )~ is, xiO xi is the orthogonal projection on span{xi} as the null space of x,O x, is the orthogonal complement of its range. Let Mi = span{x,}, i = 1,. . . , k. Each M, is a one-dimensional subspace of (V,(.;)),M,I M , i f i * j , a n d M , @ M 2 @ @ M,= V.Hence, Visthe direct sum of n mutually orthogonal subspaces and each x E V has the unique representation x = C(x, xi)xi where (x, xi)xi = (xiO xi)x is the projection of x onto Mi, i = 1,. . . , n. Since B is linear, the value of Bx is completely determined by the value of B on each Mi, i = 1,. . . , n. However, if y E M,, then y = ax, for some a E R and By = aBx, = aCa,(xiO xi)x, = aol,xj = ~ y Thus . when B is restricted to M,, B is a, times the identity transformation, and understanding how B transforms vectors has become particularly simple. In summary, take x E V and write x = C(x, xi)xi; then Bx = Cai(x, x,)x,. What is especially fascinating and useful is that every self-adjoint transformation in C(V, V) has the representation Ca,xiO xi for some orthonormal basis for V and some scalars a,,. . . , a,. This fact is
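A small numerical illustration of this geometry (ours, not the book's): build B = Σ a_i x_i □ x_i from an orthonormal basis of R^3 and check that B acts on each span{x_i} as multiplication by a_i; the basis and scalars are arbitrary choices.

```python
import numpy as np

# An orthonormal basis of R^3 (the columns of Q) and scalars a_i.
Q, _ = np.linalg.qr(np.random.default_rng(2).normal(size=(3, 3)))
a = np.array([2.0, -1.0, 0.5])

# B = sum_i a_i x_i □ x_i, which is self-adjoint by construction.
B = sum(a[i] * np.outer(Q[:, i], Q[:, i]) for i in range(3))
assert np.allclose(B, B.T)

# On span{x_i}, B is just multiplication by a_i.
for i in range(3):
    assert np.allclose(B @ Q[:, i], a[i] * Q[:, i])
```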
What is especially fascinating and useful is that every self-adjoint transformation in C(V, V) has the representation Σ a_i x_i □ x_i for some orthonormal basis for V and some scalars a_1, …, a_n. This fact is known as the spectral theorem and is discussed in more detail later in this chapter. For the time being, we are content with the following observation about the self-adjoint transformation B = Σ a_i x_i □ x_i: B is positive definite iff a_i > 0, i = 1, …, n. This follows since (x, Bx) = Σ a_i(x, x_i)², and x = 0 iff (x, x_i) = 0 for all i = 1, …, n. For exactly the same reasons, B is non-negative definite iff a_i ≥ 0 for i = 1, …, n. Proposition 1.16 introduces a useful property of self-adjoint transformations.

Proposition 1.16. If A_1 and A_2 are self-adjoint linear transformations in C(V, V) such that (x, A_1x) = (x, A_2x) for all x, then A_1 = A_2.
Proof. It suffices to show that (x, A_1y) = (x, A_2y) for all x, y ∈ V. But, since A_1 and A_2 are self-adjoint,

(x + y, A_j(x + y)) = (x, A_jx) + (y, A_jy) + 2(x, A_jy),  j = 1, 2.

Since (z, A_1z) = (z, A_2z) for all z ∈ V, we see that (x, A_1y) = (x, A_2y). □
In the above discussion, it has been observed that, if x ∈ V and ||x|| = 1, then x □ x is the orthogonal projection onto the one-dimensional subspace span{x}. Recall that P ∈ C(V, V) is an orthogonal projection if P is a projection (i.e., P² = P) and if N(P) = (R(P))⊥. The next result characterizes orthogonal projections as those projections that are self-adjoint.

Proposition 1.17. If P ∈ C(V, V), the following are equivalent:

(i) P is an orthogonal projection.
(ii) P² = P = P'.

Proof. If (ii) holds, then P is a projection and P is self-adjoint. By Proposition 1.13, (R(P))⊥ = N(P') = N(P) since P = P'. Thus P is an orthogonal projection. Conversely, if (i) holds, then P² = P since P is a projection. We must show that if P is a projection and N(P) = (R(P))⊥, then P = P'. Since V = R(P) ⊕ N(P), consider x, y ∈ V and write x = x_1 + x_2, y = y_1 + y_2 with x_1, y_1 ∈ R(P) and x_2, y_2 ∈ N(P) = (R(P))⊥. Using the fact that P is the identity on R(P), compute as follows:

(P'x, y) = (x, Py) = (x_1 + x_2, Py_1) = (x_1, y_1) = (Px_1, y_1) = (Px, y).

Since P' is the unique linear transformation that satisfies (x, Py) = (P'x, y), we have P = P'. □
It is sometimes convenient to represent an orthogonal projection in terms of outer products. If P is the orthogonal projection onto M, let {x_1, …, x_k} be an orthonormal basis for M in (V, (·, ·)). Set A = Σ x_i □ x_i, so A is self-adjoint. If x ∈ M, then x = Σ(x, x_i)x_i and Ax = (Σ x_i □ x_i)x = Σ(x, x_i)x_i = x. If x ∈ M⊥, then Ax = 0. Since A agrees with P on M and M⊥, A = P = Σ x_i □ x_i. Thus all orthogonal projections are sums of rank one orthogonal projections (given by outer products) and different terms in the sum are orthogonal to each other (i.e., (x_i □ x_i)(x_j □ x_j) = 0 if i ≠ j). Generalizing this a little, two orthogonal projections P_1 and P_2 are called orthogonal if P_1P_2 = 0. It is not hard to show that P_1 and P_2 are orthogonal to each other iff the range of P_1 and the range of P_2 are orthogonal to each other, as subspaces. The next result shows that a sum of orthogonal projections is an orthogonal projection iff each pair of summands is orthogonal.

Proposition 1.18. Let P_1, …, P_k be orthogonal projections on (V, (·, ·)). Then P = P_1 + ⋯ + P_k is an orthogonal projection iff P_iP_j = 0 for i ≠ j.
Proof. See Halmos (1958, Section 76). □
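A quick numerical check of Proposition 1.18 (our own sketch, using projections onto coordinate subspaces of R^4 chosen for the example): two projections with orthogonal ranges sum to a projection, while two with overlapping ranges do not.

```python
import numpy as np

def proj(cols, n=4):
    # Orthogonal projection onto the span of the given standard basis vectors of R^n.
    P = np.zeros((n, n))
    for i in cols:
        P[i, i] = 1.0
    return P

P1, P2, P3 = proj([0, 1]), proj([2, 3]), proj([1, 2])

# P1 P2 = 0, so P1 + P2 is again an orthogonal projection.
S = P1 + P2
assert np.allclose(P1 @ P2, 0) and np.allclose(S @ S, S) and np.allclose(S, S.T)

# P1 P3 != 0, and indeed P1 + P3 fails to be idempotent.
T = P1 + P3
assert not np.allclose(T @ T, T)
```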
We now turn to a discussion of orthogonal linear transformations on an inner product space (V, (·, ·)). Basically, an orthogonal transformation is one that preserves the geometric structure (distance and angles) of the inner product. A variety of characterizations of orthogonal transformations is possible.

Proposition 1.19. If (V, (·, ·)) is a finite dimensional inner product space and if A ∈ C(V, V), then the following are equivalent:

(i) (Ax, Ay) = (x, y) for all x, y ∈ V.
(ii) ||Ax|| = ||x|| for all x ∈ V.
(iii) AA' = A'A = I.
(iv) If {x_1, …, x_n} is an orthonormal basis for (V, (·, ·)), then {Ax_1, …, Ax_n} is also an orthonormal basis for (V, (·, ·)).

Proof. Recall that (i) is our definition of an orthogonal transformation. We prove that (i) implies (ii), (ii) implies (iii), (iii) implies (i), and then show that (i) implies (iv) and (iv) implies (ii). That (i) implies (ii) is clear since
||Ax||² = (Ax, Ax). For (ii) implies (iii), (x, x) = (Ax, Ax) = (x, A'Ax) implies that A'A = I since A'A and I are self-adjoint (see Proposition 1.16). But, by the uniqueness of inverses, this shows that A' = A^{-1}, so I = AA^{-1} = AA' and (iii) holds. Assuming (iii), we have (x, y) = (x, A'Ay) = (Ax, Ay) and (i) holds. If (i) holds and {x_1, …, x_n} is an orthonormal basis for (V, (·, ·)), then δ_ij = (x_i, x_j) = (Ax_i, Ax_j), which implies that {Ax_1, …, Ax_n} is an orthonormal basis. Now, assume (iv) holds. For x ∈ V, we have x = Σ(x, x_i)x_i and ||x||² = Σ(x, x_i)². Thus

||Ax||² = (Ax, Ax) = (Σ_i (x, x_i)Ax_i, Σ_j (x, x_j)Ax_j) = Σ_i (x, x_i)² = ||x||².

Therefore (ii) holds. □
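For matrices with the standard inner product, these equivalences are easy to check numerically; here is a small sketch of ours using a random orthogonal matrix obtained from a QR factorization (the construction is an assumption of the illustration).

```python
import numpy as np

rng = np.random.default_rng(3)
# A random orthogonal matrix from the QR factorization of a random matrix.
A, _ = np.linalg.qr(rng.normal(size=(4, 4)))

# (iii): A'A = AA' = I.
assert np.allclose(A.T @ A, np.eye(4)) and np.allclose(A @ A.T, np.eye(4))

# (i) and (ii): inner products and norms are preserved.
x, y = rng.normal(size=4), rng.normal(size=4)
assert np.isclose((A @ x) @ (A @ y), x @ y)
assert np.isclose(np.linalg.norm(A @ x), np.linalg.norm(x))
```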
Some immediate consequences of the preceding proposition are: if A is orthogonal, so is A^{-1} = A', and if A_1 and A_2 are orthogonal, then A_1A_2 is orthogonal. Let O(V) denote the set of all orthogonal transformations on the inner product space (V, (·, ·)). Then O(V) is closed under inverses, I ∈ O(V), and O(V) is closed under products of linear transformations. In other words, O(V) is a group of linear transformations on (V, (·, ·)), and O(V) is called the orthogonal group of (V, (·, ·)). This and many other groups of linear transformations are studied in later chapters. One characterization of orthogonal transformations on (V, (·, ·)) is that they map orthonormal bases into orthonormal bases. Thus given two orthonormal bases, there exists a unique orthogonal transformation that maps one basis onto the other. This leads to the following question. Suppose {x_1, …, x_k} and {y_1, …, y_k} are two finite sets of vectors in (V, (·, ·)). Under what conditions will there exist an orthogonal transformation A such that Ax_i = y_i for i = 1, …, k? If such an A ∈ O(V) exists, then (x_i, x_j) = (Ax_i, Ax_j) = (y_i, y_j) for all i, j = 1, …, k. That this condition is also sufficient for the existence of an A ∈ O(V) that maps x_i to y_i, i = 1, …, k, is the content of the next result.

Proposition 1.20. Let {x_1, …, x_k} and {y_1, …, y_k} be finite sets in (V, (·, ·)). The following are equivalent:
= y,
for i
=
1,. . . , k .
24
VECTOR SPACE THEORY
Proof. That (ii) implies (i) is clear, so assume that (i) holds. Let M = span{x,, , x,). The idea of the proof is to define A on M using linearity and then extend the definition of A to V using linearity again. Of course, it must be verified that all this makes sense and that the A so defined is orthogonal. The details of this, which are primarily computational, follow. First, by (i), Ca,xi = 0 iff Caiy, = 0 since ( C a i x i , C ~ x , = ) CCai9(xi, x,) = CCa,aj(y,, y,) = (Caiyj,Cajyj). Let N = span{y,,. . . , y,) and define B on M to N by B(Caixi) = Caiyi. B is well defined since Caixi = CPixi implies that Ca,y, = C&yi and the linearity of B on M is easy to check. Since B maps M onto N, dim(N) g dim(M). But if B(Caixi) = 0, then Ca,y, = 0, so Caixi = 0. Therefore the null space of B is (0) G M and dim(M) = dim(N). Let M I and N' be the orthogonal complements of M and N, respectively, and let {u,,. . . , us) and {v,,. . . , us) be orthonormal bases for M I and N L , respectively. Extend the definition of B to V by first defining B(ui) = vi for i = 1,. . . , s and then extend by linearity. Let A be = llw112 for the linear transformation so defined. We now claim that llA~11~ all w E V. To see this write w = w, + w2 where w, E M and w2 E M L . Then Awl E N and Aw, E NL . Thus 1 1 ~ ~ =1 llAwI 1 ~ + Aw2112= l l ~ w , 1 1 ~ ll~w,11~. But w, = Caixi for some scalars a,. Thus
...
+
Similarly, I I A W=, ~llw2112. ~ ~ Since llw112 = llw1112 + llw2112, the claim that
11 Aw112 = 11 wl12 is established. By Proposition 1.19, A is orthogonal. 4
Example 1.6. Consider the real vector space R^n with the standard basis and the usual inner product. Also, let C_{n,n} be the real vector space of all n × n real matrices. Thus each element of C_{n,n} determines a linear transformation on R^n and vice versa. More precisely, if A is a linear transformation on R^n to R^n and [A] denotes the matrix of A in the standard basis on both the range and domain of A, then [Ax] = [A]x for x ∈ R^n. Here, [Ax] ∈ R^n is the vector of coordinates of Ax in the standard basis and [A]x means the matrix [A] = {a_ij} times the coordinate vector x ∈ R^n. Conversely, if [A] ∈ C_{n,n} and we define a linear transformation A by Ax = [A]x, then the matrix of A is [A]. It is easy to show that if A is a linear transformation on R^n to R^n with the standard inner product, then [A'] = [A]', where A' denotes the adjoint of A and [A]' denotes the transpose of the matrix [A]. Now, we are in a position to relate the notions of self-adjointness and skew-symmetry of linear transformations to properties of matrices. Proofs of the following two assertions are straightforward and are left to the reader. Let A be a linear transformation on R^n to R^n with matrix [A].

(i) A is self-adjoint iff [A] = [A]'.
(ii) A is skew-symmetric iff [A]' = −[A].

Elements of C_{n,n} that satisfy B = B' are usually called symmetric matrices, while the term skew-symmetric is used if B' = −B, B ∈ C_{n,n}. Also, the matrix B is called positive definite if x'Bx > 0 for all x ∈ R^n, x ≠ 0. Of course, x'Bx is just the standard inner product of x with Bx. Clearly, B is positive definite iff the linear transformation it defines is positive definite. If A is an orthogonal transformation on R^n to R^n, then [A] must satisfy [A][A]' = [A]'[A] = I_n, where I_n is the n × n identity matrix. Thus a matrix B ∈ C_{n,n} is called orthogonal if BB' = B'B = I_n. An interesting geometric interpretation of the condition BB' = B'B = I_n follows. If B = {b_ij}, the vectors b_j ∈ R^n with coordinates b_ij, i = 1, …, n, are the column vectors of B and the vectors c_i ∈ R^n with coordinates b_ij, j = 1, …, n, are the row vectors of B. The matrix BB' has elements c_i'c_j, and the condition BB' = I_n means that c_i'c_j = δ_ij; that is, the vectors c_1, …, c_n form an orthonormal basis for R^n in the usual inner product. Similarly, the condition B'B = I_n holds iff the vectors b_1, …, b_n form an orthonormal basis for R^n. Hence a matrix B is orthogonal iff both its rows and columns determine an orthonormal basis for R^n with the standard inner product.
1.4. THE CAUCHY-SCHWARZ INEQUALITY

The form of the Cauchy-Schwarz Inequality given here is general enough to be applicable to both finite and infinite dimensional vector spaces. The examples below illustrate that the generality is needed to treat some standard situations that arise in analysis and in the study of random variables. In a finite dimensional inner product space (V, (·, ·)), the inequality established in this section shows that |(x, y)| ≤ ||x|| ||y|| where ||x||² = (x, x). Thus −1 ≤ (x, y)/(||x|| ||y||) ≤ 1 and the quantity (x, y)/(||x|| ||y||) is defined to be the cosine of the angle between the vectors x and y. A variety of applications of the Cauchy-Schwarz Inequality arise in later chapters. We now proceed with the technical discussion.

Suppose that V is a real vector space, not necessarily finite dimensional. Let [·, ·] denote a non-negative definite symmetric bilinear function on V × V; that is, [·, ·] is a real-valued function on V × V that satisfies (i) [x, y] = [y, x], (ii) [a_1x_1 + a_2x_2, y] = a_1[x_1, y] + a_2[x_2, y], and (iii) [x, x] ≥ 0. It is clear that (i) and (ii) imply that [x, a_1y_1 + a_2y_2] = a_1[x, y_1] + a_2[x, y_2]. The Cauchy-Schwarz Inequality states that [x, y]² ≤ [x, x][y, y]. We also give necessary and sufficient conditions for equality to hold in this inequality. First, a preliminary result.
Proposition 1.21. Let M = {x | [x, x] = 0}. Then M is a subspace of V.

Proof. If x ∈ M and a ∈ R, then [ax, ax] = a²[x, x] = 0, so ax ∈ M. Thus we must show that if x_1, x_2 ∈ M, then x_1 + x_2 ∈ M. For a ∈ R, 0 ≤ [x_1 + ax_2, x_1 + ax_2] = [x_1, x_1] + 2a[x_1, x_2] + a²[x_2, x_2] = 2a[x_1, x_2] since x_1, x_2 ∈ M. But if 2a[x_1, x_2] ≥ 0 for all a ∈ R, then [x_1, x_2] = 0, and this implies that 0 = [x_1 + ax_2, x_1 + ax_2] for all a ∈ R by the above equality. Therefore, x_1 + ax_2 ∈ M for all a when x_1, x_2 ∈ M, and thus M is a subspace. □
Theorem 1.1 (Cauchy-Schwarz Inequality). Let [·, ·] be a non-negative definite symmetric bilinear function on V × V and set M = {x | [x, x] = 0}. Then:

(i) [x, y]² ≤ [x, x][y, y] for x, y ∈ V.
(ii) [x, y]² = [x, x][y, y] iff αx + βy ∈ M for some real α and β not both zero.

Proof. To prove (i), we consider two cases. If x ∈ M, then 0 ≤ [y + ax, y + ax] = [y, y] + 2a[x, y] for all a ∈ R, so [x, y] = 0 and (i) holds. Similarly, if y ∈ M, (i) holds. If x ∉ M and y ∉ M, let x_1 = x/[x, x]^{1/2} and let y_1 = y/[y, y]^{1/2}. Then we must show that |[x_1, y_1]| ≤ 1. This follows from the two inequalities

0 ≤ [x_1 − y_1, x_1 − y_1] = 2 − 2[x_1, y_1]

and

0 ≤ [x_1 + y_1, x_1 + y_1] = 2 + 2[x_1, y_1].

The proof of (i) is now complete. To prove (ii), first assume that [x, y]² = [x, x][y, y]. If either x ∈ M or y ∈ M, then αx + βy ∈ M for some α, β not both zero. Thus consider x ∉ M and y ∉ M. An examination of the proof of (i) shows that we can have equality in (i) iff either 0 = [x_1 − y_1, x_1 − y_1] or 0 = [x_1 + y_1, x_1 + y_1] and, in either case, this implies that αx + βy ∈ M for some real α, β not both zero. Now, assume αx + βy ∈ M for some real α, β not both zero. If α = 0 or β = 0 or x ∈ M or y ∈ M, we clearly have equality in (i). For the case when αβ ≠ 0, x ∉ M, and y ∉ M, our assumption implies that x_1 + γy_1 ∈ M for some γ ≠ 0, since M is a subspace. Thus there is a real γ ≠ 0 such that 0 = [x_1 + γy_1, x_1 + γy_1] = 1 + 2γ[x_1, y_1] + γ². The equation for the roots of a quadratic shows that this can hold only if |[x_1, y_1]| = 1. Hence equality in (i) holds. □
is symmetric, bilinear, and non-negative definite. Also [x, x ] > 0 unless x = 0 since x is continuous. Hence M = (0). The Cauchy-Schwarz Inequality yields
+
28
+
VECTOR SPACE THEORY
Example 1.9. The following example has its origins in the study of the covariance between two real-valued random variables. Consider a probability space ( 9 , F, Po) where 9 is a set, F is a a-algebra of subsets of 9 , and Po is a probability measure on 9.A random variable X is a real-valued function defined on 9 such that the inverse image of each Borel set in R is an element of F ; symbolically, X-'(B) E Ffor each Borel set B of R. Sums and products of random variables are random variables and the constant functions on 9 are random variables. If X is a random variable such that jlX(w)lPo(do) < + co, then X is integrable and we write & X for jX(o)Po(do). Now, let V be the collection of all real-valued random variables X, such that &x2 < + co. It is clear that if X E V, then ax E V for X:), if XI and X2 are in V, all real a. Since (XI + X2)2< 2(X: then XI + X2 is in V. Thus V is a real vector space with addition being the pointwise addition of random variables and scalar multiplication being pointwise multiplication of .random variables by scalars. For XI, X2 E V, the inequality IX,X21G X: + X: implies that X,X2 is integrable. In particular, setting X2 = 1, XI is integra.] on V X V by [XI, X2] = &(XlX2).That [., . ] is ble. Define symmetric and bilinear is clear. Since [XI, XI] = &x:2 0, [ - , . ] is non-negative definite. The Cauchy-Schwarz Inequality yields (&XlX2)2< &x:&x:, and setting X2 = 1, this gves ( & x , ) ~< &X:. Of course, this is just a verification that the variance of a random variable is non-negative. For future use, let var(Xl) = &X: To discuss conditions for equality in the Cauchy-Schwarz Inequality, the subspace M = {XI[X, XI = 0) needs to be described. Since [X, XI = &X2, X E M iff X is zero, except on set of Po measure zero-that is, X = 0 a.e. (Po). Therefore, (&XlX2)2= &x:&x: iff axl + PX2 = 0 a.e. (Po) for some real a, fi not both zero. In particular, var(X,) = 0 iff XI - &XI = 0 a.e. (Po). A somewhat more interesting non-negative definite symmetric bilinear function on V X V is
+
[ a ,
and is called the covariance between XI and X2. Symmetry is clear and bilinearity is easily checked. Since cov{Xl, XI) = &x:( & x ~= ) ~var(X,), cov{. , -) is non-negative definite and M I = {Xlcov{X, X} = 0) is just the set of random variables in V that have
THE SPACE
C(V, W)
29
variance zero. For this case, the Cauchy-Schwarz Inequality is
Equality holds iff there exist a, P, not both zero, such that var(aX, fix2) = 0; or equivalently, a(X, - &XI) P(X2 - &X,) = 0 a.e. ( P o ) for some a, /3 not both zero. The properties of cov{., .) given here are used in the next chapter to define the covariance of a random vector.
+
+
+
When (V, (., .)) is an inner product space, the adjoint of a linear transformation in C(V, V) was introduced in Section 1.3 and used to define some special linear transformations in C(V, V). Here, some of the notions discussed in relation to C(V, V) are extended to the case of linear transformaand (W,[., -1) are two inner product tions in C(V, W) where (V,(., spaces. In particular, adjoints and outer products are defined, bilinear functions on V X W are characterized, and Kronecker products are introduced. Of course, all the results in this section apply to C(V, V) by taking (W, [., .I) = (V, (., .)) and the reader should take particular notice of this special case. There is one point that needs some clarification. Given (V, (. , and (W, [., .I), the adjoint of A E C(V, W), to be defined below, depends on both the inner products ( - , .) and [., .I. However, in the previous discussion of adjoints in C(V, V), it was assumed that the inner product was the same on both the range and the domain of the linear transformation (i.e., V is the domain and range). Whenever we discuss adjoints of A E C(V, V) it is assumed that only one inner product is involved, unless the contrary is explicitly stated-that is, when specializing results from C(V, W) to C(V, V), we take W = V and [., .] = (-, .). The first order of business is to define the adjoint of A E C(V, W) where (V,(., .)) and (W,[., are inner product spaces. For a fixed w E W, [w, Ax] is a linear function of x E V and, by Proposition 1.10, there exists a unique vector y(w) E V such that [w, Ax] = (y(w), x ) for all x E V. It is easy to verify that y(alw, + a2w2)= a , y(wl) + a, y(w,). Hence y(.) determines a linear transformation on W to V, say A', whch satisfies [w, Ax] = (A'w, x) for all w E Wand x E V. a))
a))
a])
Definition 1.19. Given inner product spaces (V, ( . , . )) and ( W, [ ., .I), if A E C(V, W), the unique linear transformation A' E C(W, V) that satisfies [w, Ax] = (A'w, x ) for all w E W and x E V is called the adjoint of A.
30
VECTOR SPACE THEORY
The existence and uniqueness of A' was demonstrated in the discussion
preceeding Definition 1.19. It is not hard to show that ( A
+ B)' = A' + B',
(A')' = A, and (aA)' = aA'. In the present context, Proposition 1.13 becomes Proposition 1.22.
Proposition 1.22. Suppose A (i) (ii) (iii) (iv)
E
C(V, W). Then:
%(A) = (%(Ar))'. % (A) = % ( AA'). % (A) = % ( A'A). r(A) = r(At).
ProoJ: The proof here is essentially the same as that given for Proposition 1.13 and is left to the reader. The notion of an outer product has a natural extension to C(V, W).
Definition 1.20. For x E (V,(-, .) and w E (W, [., .I), the outer product, wO x is that linear transformation in C(V, W) given by (wO x)(y) = (x, y)w for ally E V. If w = 0 or x = 0, then w x = 0. When both w and x are not zero, then w q x has rank one, %(w q x ) = span{w), and %(w q x ) = (span{x))l. Also, a minor modification of the proof of Proposition 1.14 shows that, if A E C(V, W), then r(A) = 1 iff A = wO x for some nonzero w and x.
Proposition 1.23. The outer product has the following properties:
+
(i) (alwl a2w2)q x = aIwIq x (ii) wO(a,x, + a2x2)= alwO x l (iii) ( w O x ) ' = xO w E C(W,V).
+ a2w2q X. + a 2 w 0 x,.
If (Vl,(-, .),), (V2,(-,.),), and (&,(., .)3) are inner product spaces with x1 E Vl, x2, y2 E V2, and y3 E V3, then
Proof: Assertions (i), (ii), and (iii) follow easily. For (iv), consider x E Vl. Then ( x 2 0 x l ) x = (xl, xI1x2,so ( y 3 0 y2)(x20x l ) x = ( X I ,X ) I ( Y ~~O2 ) x 2 = ( X I , x ) I ( Y ~~ , 2 ) 2 ~ 3 V3. ( ~ 2 ~, 2 ) 2 ( ~ q3 = ( ~ 2Y, ~ ) , ( ~~I 1, 1 ~Thus 3 . (iv)
There is a natural way to construct an inner product on C(V, W) from inner products on V and W. This construction and its relation to outer products are described in the next proposition.
Proposition 1.24. Let {x,,. . . , x,) be an orthonormal basis for (V, (., .)) and let {w,,. . . , w,) be an orthonormal basis for (W, [., .I). Then: (i) { w , x,(i ~ = 1,. . . , n, j = 1,. . . , m) is a basis for C(V, W). Let a 11. . = [w,, Ax,]. Then: (ii) A = CCa,,w,O x, and the matrix of A is [A] = {a,,) in the given bases. If A
=
CCa,,w,U x, and B = CCb,,w,U x,, define (A, B)
= CCa,,b,,.
Then:
(iii) ( . , - ) i s aninnerproductonC(V,W) and{w,Ox,li = 1, ..., n, j - 1,. . . , m) is an orthonormal basis for (C(V, W), ( . , a)).
Proof: Since dim(C(V, W)) = mn, to prove (i) it suffices to prove (ii). Let B = CCa,,w,O x,. Then
so[w,, Bx,] = [w,, Ax,] for i = 1,. . . , n and j = 1,. . . , m. Therefore, [w, Bx] = [w, Ax] for all w E Wand x E V, which implies that [w, (B - A)x] = 0. Choosing w = ( B - A)x, we see that ( B - A)x = 0 for all x E V and, therefore, B = A. To show that the matrix of A is [A] = {a,,), recall that the matrix of A consists of the scalars b,, defined by Ax, = ZkbkJwk.The inner product of w, with each side of this equation is a,,
=
[w,, AX,]
=
Zbkj[w,, w,]
=
b.1J
k
and the proof of (ii) is complete. For (iii), ( - , is clearly symmetric and bilinear. Since (A, A) = =a$, the positivity of ( . , - ) follows. That {win x,li = 1,. . . , n, j = 1,. . . , m) is an orthonormal basis for (C(V, W), ( . , .)) follows immediately from the definition of ( . , a)
a ) .
A few words are in order concerning the inner product ( . , .) on C(V, W). Since {win x,li = 1,. . . , n, j = 1,. . . , m) is an orthonormal basis,
32
VECTOR SPACE THEORY
we know that if A E C(V, W), then
since this is the unique expansion of a vector in any orthonormal basis. However, A = CC[w,, Ax,]w, x, by (ii) of Proposition 1.24. Thus (A, win x,) = [w,, Ax,] for i = 1,. . . , n and j = 1,. . . , m. Since both sides of this relation are linear in w, and x,, we have ( A , w q x ) = [w, Ax] for all w E W and x E V. In particular, if A = G q 2, then
This relation has some interesting implications. Proposition 1.25. The inner product ( . , . ) on C(V, W) satisfies
(i) ( G O 2 , w O x )
=
[$,W](~,X)
for all G, w E Wand 2, x E V, and ( . , .) is the unique inner product with this property. Further, if (z,,. . . , z,) and (y,,. . . , y,) are any orthonormal bases for W and V, respectively, then ( z , 0 yJli = 1,. . . , n, j = 1,. . . , m) is an orthonormal basis for (C(V, W), ( . , .)).
Pro05 Equation (i) has been verified. If (. , .) is another inner product on C(V, W) that satisfies (i), then
for all i , , i2 = 1,. . . , n and j,, j2 = 1,. . . , m where (x,,. . . , x,) and (w,,. . . , w,) are the orthonormal bases used to define ( - , .). Using (i) of Proposition 1.24 and the bilinearity of inner products, this implies that (A, B) = (A, B) for all A, B E C(V, W). Therefore, the two inner products are the same. The verification that ( z , 0 yjli = 1,. . . , n, j = 1,. . . , m) is an q orthonormal basis follows easily from (i). The result of Proposition 1.25 is a formal statement of the fact that ( , does not depend on the particular orthonormal bases used to define it, but ( - , . ) is determined by the inner products on V and W. Whenever V and Ware inner product spaces, the symbol ( . , .) always means the inner product on C(V, W) as defined above. a )
33
PROPOSITION 1.25
+
Example 1.10. Consider V = Rmand W = Rn with the usual inner products and the standard bases. Thus we have the inner product ( , on em,,--the linear space of n x m real matrices. For A = { a , , ) and B = { b , j )in em,,, a )
(A, B)
z
aijbjj.
=
1-1 j=l
If C
=
AB': n
X
n. then
so (A, B) = Cc,,. In other words, (A, B) is just the sum of the diagonal elements of the n x n matrix AB'. This observation leads to the definition of the trace of any square matrix. If C : k x k is a real matrix, the trace of C, denoted by trC, is the sum of the diagonal elements of C. The identity (A, B) = (B, A) shows that tr AB' = tr B'A for all A, B E ,. In the present example, it is clear that w O x = wx' for x E Rm and w E Rn, so w O x is just the n x 1 matrix w times the 1 x m matrix x'. Also, the identity in Proposition 1.25 is a reflection of the fact that
em
for w, VG E Rn and x, 2
E
+
Rm.
If (V, (., .)) and (W, [., .I) are inner product spaces and A E C(V, W), then [Ax, w] is linear in x for fixed w and linear in w for fixed x. This observation leads to the following definition. Definition 1.21. A function f defined on V x W to R is called bilinear if:
These conditions apply for scalars a , and a,; x, x,, x, E W.
E
V and w, w,, w,
Our next result shows there is a natural one-to-one correspondence between bilinear functions and C(V, W).
34
VECTOR SPACE THEORY
Proposition 1.26. If f is a bilinear function on V X W to R, then there exists an A E C(V, W) such that f ( x , w ) = [Ax, w ] for all x E V and w E W. Conversely, each A E C(V, W) determines the bilinear function [Ax, w] on V X W. Proof. Let {x,,. . . , x,) be an orthonormal basis for (V, (., and {w,,. . . , w,)beanorthonormalbasisfor(W,[~, .I). Setaij = f(xj, w,) for i = 1,. . . , n and j = 1,. . . , m and let A = CCai,wiO x,. By Proposition 1.24, we have )a
a,,
=
[AX,,wi]
= f(x,,
wi).
The bilinearity off and of [Ax, w] implies [Ax, w] = f(x, w) for all x and w E W. The converse is obvious.
E
V q
Thus far, we have seen that C(V, W) is a real vector space and that, if V and W have inner products (., .) and [., .I, respectively, then C(V, W) has a natural inner product determined by (., .) and [., .I. Since C(V, W) is a vector space, there are: h e a r transformations on C(V, W) to other vector spaces and there is not much more to say in general. However, C(V, W) is built from outer products and it is natural to ask if there are special linear transformations on C(V, W) that transform outer products into outer products. For example, if A E C(V, V) and B E C(W, W), suppose we define B @ A on C(V, W) by ( B 8 A)C = BCA' where A' denotes the transpose of A E C(V, V). Clearly, B @ A is a linear transformation. If C = wO x, then ( B 8 A)(wO x) = B(wO x)A' E C(V, W). But for v E V ,
This calculation shows that ( B @ A)(wO x) = (Bw)O(Ax), so outer products get mapped into outer products by B @ A. Generalizing this a bit, we have the following definition. Definition 1-22. Let (Vl,(-, .),I, (I/,,(., .I2), (W,,[., .Il), and (W2,[., .I2) be inner product spaces. For A E C(V,, V,) and B E C( W,, W,), the Kronecker product of B and A, denoted by B 8 A , is a linear transformation on C(Vl, W,) to C(V,, W,), defined by ( B @ A)C for all C
E
C(V,, W,).
= BCA'
In most applications of Kronecker products, V, = V2 and W, = W,, so B 8 A is a linear transformation on C(V,, W,) to C(V,, W,). It is not easy to say in a few words why the transpose of A should appear in the definition of the Kronecker product, but the result below should convince the reader that the definition is the "right" one. Of course, by A', we mean the linear transformation on V2 to V,, which satisfies (x,, Ax,), = (A'x,, x,), for x, E V, and x, E V,. Proposition 1.27. In the notation of Definition 1.22,
Also, (ii) (B 8 A)'
=
B' 8 A',
where (B 8 A)' denotes the transpose of the linear transformation B 8 A on (C(VI>WI), ( . , . ) I ) to (C(V2, W,), ( , .),I. Prooj
To verify (i), for v,
E
V2, compute as follows:
Since this holds for all v, E V,, assertion (i) holds. The proof of (ii) requires we show that B' 8 A' satisfies the defining equation of the adjoint-that is, for C, E C(V,, W,) and C, E C(V,, W,),
Since outer products generate C(V,, W,), it is enough to show the above holds for C, = w,Ox, with w, E Wl and x, E V,. But, by (i) and the definition of transpose,
=
[B1C2Ax,,w , ] ,
and this completes the proof of (ii).
=
(B'C,A, w,U x , ) ,
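In matrix terms (our own illustration, with the standard inner products and arbitrarily chosen matrices), B ⊗ A acts on a matrix C as C ↦ BCA', sends the outer product wx' to (Bw)(Ax)', and satisfies the adjoint identity of Proposition 1.27 with the trace inner product.

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.normal(size=(4, 4))      # acts on V = R^4
B = rng.normal(size=(3, 3))      # acts on W = R^3

def kron_op(C):
    # The Kronecker product B ⊗ A acting on C in C(V, W), i.e. on 3 x 4 matrices.
    return B @ C @ A.T

w, x = rng.normal(size=3), rng.normal(size=4)

# (B ⊗ A)(w □ x) = (Bw) □ (Ax).
assert np.allclose(kron_op(np.outer(w, x)), np.outer(B @ w, A @ x))

# Adjoint property: ((B ⊗ A)C1, C2) = (C1, (B' ⊗ A')C2) with (C, D) = tr(C D').
C1, C2 = rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
lhs = np.trace(kron_op(C1) @ C2.T)
rhs = np.trace(C1 @ (B.T @ C2 @ A).T)
assert np.isclose(lhs, rhs)
```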
We now turn to the case when A ∈ C(V, V) and B ∈ C(W, W), so B ⊗ A is a linear transformation on C(V, W) to C(V, W). First note that if A is self-adjoint relative to the inner product on V and B is self-adjoint relative to the inner product on W, then Proposition 1.27 shows that B ⊗ A is self-adjoint relative to the natural induced inner product on C(V, W).

Proposition 1.28. For A_i ∈ C(V, V), i = 1, 2, and B_i ∈ C(W, W), i = 1, 2, we have:

(i) (B_1 ⊗ A_1)(B_2 ⊗ A_2) = (B_1B_2) ⊗ (A_1A_2).
(ii) If A_1^{-1} and B_1^{-1} exist, then (B_1 ⊗ A_1)^{-1} = B_1^{-1} ⊗ A_1^{-1}.
(iii) If A_1 and B_1 are orthogonal projections, then B_1 ⊗ A_1 is an orthogonal projection.
The proof of (i) goes as follows: For C
E
C(V, W),
(B, 8 A,)(B2 8 A2)C = (B, 8 A1)(B2CA;) = B,B,CA;A;
Now, (ii) follows immediately from (i). For (iii), it needs to be shown that (B, 8 A,), = B, 8 A, = (B, 8 A,)'. The second equality has been verified. The first follows from (i) and the fact that B: = B, and A: = A,. Other properties of Kronecker products are given as the need arises. One issue to think about is this: if C E C(V, W) and B E C(W, W), then BC can be thought of as the product of the two linear transformations B and C. is, However, BC can also be interpreted as (B 8 I)C, I E C(V, ',)-that BC is the value of the linear transformation B @ I at C. Of course, the particular situation determines the appropriate way to think about BC. Linear isometries are the final subject of discussion in t h s section, and are a natural generalization of orthogonal transformations on (V, (., .)). Consider finite dimensional inner product spaces V and W with inner and [., .] and assume that dim V 6 dim W. The reason for products this assumption is made clear in a moment. ( a ,
a )
Definition 1.23. A linear transformation A if (v,, v,) = [Av,, Av,] for all v,, v, E V.
E
C(V, W) is a linear isometry
If A is a linear isometry and v E V, v * 0, then 0 < (v, v) = [Av, Av]. This implies that %(A) = (01, so necessarily dim V g dim W. When W = V
PROPOSITION 1.29
37
and [., .] = (., .), then linear isometries are simply orthogonal transformations. As with orthogonal transformations, a number of equivalent descriptions of linear isometries are available. Proposition 1.29. For A E C(V, W) (dim V < dim W), the following are equivalent:
(i) A is a linear isometry. (ii) A'A = I E C(V,V). (iii) [Av, Av] = (0, v), v E V. Proof. The proof is similar to the proof of Proposition 1.19 and is left to the reader.
The next proposition is an analog of Proposition 1.20 that covers linear isometries and that has a number of applications. Proposition 1.30. Let v,,. . . , v, be vectors in (V,(-, .)), let w,,. . . , w, be vectors in (W, .I), and assume dim V 6 dim W. There exists a linear isometryA E C(V, W) such that Av, = w,, i = 1,. . . , k, iff (v,, v,) = [w,,w,] fori, j = 1,..., k. [a,
Proof: The proof is a minor modification of that given for Proposition 1.20 and the details are left to the reader.
Proposition 1.31. Suppose A E C(V, W,) and B E C(V, W2) where dim W, 6 dim W,, and (., .), [., a],, and [., .I, are inner products on V, W,, and W,. Then A'A = B'B iff there exists a linear isometry \k E C( W,, W,) such that A = \kB. Proof: If A = \kB, then A'A = B1\k'\kB = B'B, since \k'\k = I E C(W,, W,). Conversely, suppose A'A = B'B and let { u , , . . . , om)be a basis for V. With xi = Av, E W, and y, = Bv, E W,, i = 1,. . . , m, we have [xi, x,],, = [Av,, Av.] = (v,, AfAvJ) = (v,, B'Bv,) = [Bv,, Bv,], = [y,, y,], for J 1 i, J = 1,. . . , m. Applying Proposition 1.30, there exists a linear isometry \k E C(W2, W,) such that \ky, = x, for i = 1, ..., m. Therefore, \kBv, = Av, for i = 1,. . . , m and, since {v,, . . . , vm)is a basis for V, \kB = A.
+
Example 1.11. Take V = Rm and W = Rn with the usual inner products and assume m g n. Then a matrix A = {ajj) : n x m is a
38
VECTOR SPACE THEORY
linear isometry iff A'A = I, where I, is the m X m identity matrix. If a,, . . . , a, denote the columns of the matrix A , then A'A is just the m x m matrix with elements aja,, i, j = 1,. . . , m. Thus the condition A'A = I, means that aja, = so A is a linear isometry on Rm to Rn iff the columns of A are an orthonormal set of vectors in R". Now, let G,,n be the set of all n X m real matrices that are linear isometries-that is, A E G,,n iff A'A = I,. The set G,," is sometimes called the space of m-frames in Rn as the columns of A form an m-dimensional orthonormal "frame" in Rn. When m = 1, GI,, is just the set of vectors in Rn of length one, and when m = n, Gn, is the set of all n x n orthogonal matrices. We have much more to say about T, in later chapters. An immediate application of Proposition 1.31 shows that, if A : n, X m and B : n, X m are real matrices with n, < n then A'A = B'B iff A = \kB where \k : n, X n, satisfies *'\k = In2.In particular, when n, = n ,, A'A = B'B iff there exists an orthogonal matrix \k: n, x n, such that A = \kB.
,,
1.6. DETERMINANTS AND EIGENVALUES
At this point in our discussion, we are forced, by mathematical necessity, to introduce complex numbers and complex matrices. Eigenvalues are defined as the roots of a certain polynomial and, to insure the existence of roots, complex numbers arise. This section begins with complex matrices, determinants, and their basic properties. After defining eigenvalues, the properties of the eigenvalues of linear transformations on real vector spaces are described. In what follows, 6 denotes the field of complex numbers and the symbol i is reserved for If a E 6 , say a = a ib, then ti = a - ib is the complex conjugate of a. Let 6 " be the set of all n-tuples (henceforth called vectors) of complex numbers-that is, x E C" iff
m.
+
The number x, is called the jth coordinate of x, j = 1,. . . , n. For x, y E a", x y is defined to be the vector with coordinates x, + y,, j = 1,. . . , n, and for a E C, ax is the vector with coordinates ax,, j = 1,. . . , n. Replacing R by 6 in Definition 1.1, we see that a" satisfies all the axioms of a vector
+
39
DETERMINANTS AND EIGENVALUES
space where scalars are now taken to be complex numbers, rather than real numbers. More generally, if we replace R by C in (11) of Definition 1.1, we have the definition of a complex vector space. All of the definitions, results, and proofs in Sections 1.1 and 1.2 are valid, without change, for complex vector spaces. In particular, a" is an n-dimensional complex vector space and the standard basis for a" is {E,,.. . , E,) where E, has its jth coordinate equal to one and the remaining coordinates are zero. As with real matrices, an m x n array A = {a,,) for j = 1,. . . , rn, and k = 1,. . . , n where a,, E C is called an m x n complex matrix. If A = {a,,) : rn X n and B = {b,,) : n X p are complex matrices, then C = AB is the rn x p complex matrix with with entries c,, = X,a,,b,, for j = 1,. . . , rn and I = 1,. . . , p. The matrix C is called the product of A and B (in that order). In particular, when p = 1, the matrix B is n x 1 so B is an element of T . Thus if x E a" (x now plays the role of B) and A : rn x n is a complex matrix, Ax E a"". Clearly, each A : m x n determines a linear transformation on a" to a"" via the definition of Ax for x E a*. For an m X n complex matrix A = {a,,), the conjugate transpose of A, denoted by A*, is the n X rn matrix A* = {if,,), k = 1,. . . , n, j = 1,. . . , rn, where a,, is the complex conjugate of a,, E C. In particular, if x E a",x* denotes the conjugate transpose of x. The following relation is easily verified:
where y Ea"",x E a",and A is an m x n complex matrix. Of course, the bar over y*Ax denotes the complex conjugate of y*Ax E C. With the preliminaries out of the way, we now want to define determinant functions. Let C?, denote the set of all n x n complex matrices so C?, is an n2-dimensional complex vector space. If A E C?,, write A = (a,, a,,. . . , a,) where a, is the jth column of A. Definition 1.24. A function D defined on E?, and taking values in @ is called a determinant function if
(i) D(A) = D(a,, . . . , a,) is linear in each column vector a, when the other columns are held fixed. That is, ~ ( a,..., , aa,
+ Pb,,. .. , a,)
=
a D ( a l , . . . , aJ,. . ., a,) + P D ( a l , . . . , b,,. . . , a,)
for a, /3 E C. (ii) For any two indices j and k, j < k, ~ ( a , ,. .. , a,, . . . , a k , .. . , a,)
=
-D(a1,. . . , a k , .. . , a,,. . . , a,).
40
VECTOR SPACE THEORY
Functions D on ento C that satisfy (i) are called n-linear since they are linear in each of the n vectors a,, . . . , a, when the remaining ones are held
fixed. If D is n-linear and satisfies (ii), D is sometimes called an alternating n-linear function, since D(A) changes sign if two columns of A are interchanged. The basic result that relates all determinant functions is the following. Proposition 1.32. The set of determinant functions is a one-dimensional complex vector space. If D is a determinant function and D s 0, then D ( I ) * 0 where I is the n X n identity matrix in e n .
Proof. We briefly outline the proof of this proposition since the proof is instructive and yields the classical formula defining the determinant of an n x n matrix. Suppose D(A) = D(a,,. . . , a,) is a determinant function. For each k = 1,. . . , n, a, = Cjajk&jwhere {E,,. . . , en) is the standard basis and A = {ajk): n X n. Since D is n-linear and a , = Ca,,e,, for ~
(,,..., a a,)
=
C U , , D ( E ~a,,..., , an). J
Applying this same argument for a, ~ ( a , ,. .. , a,)
=
=
x xajllaj2Z~('jl,
a3,...,
'j2>
.i~ j2
Continuing in the obvious way, D(a,,. . . , a,)
=
ajl1aj22... a j , n ~ ( ~ej2,. j l , .., E,~)
119 . . . ,in
where the summation extends over all j,, . . . ,j, with 1 < j, < n for 1 = 1,. . . , n. The above formula shows that a determinant function is determined by the n" numbers D(ejl,.. . , E,") for 1 < j, < n, and t h s fact followed solely from the assumption that D is n-linear. But since D is alternating, it is clear that, if two columns of A are the same, then D(A) = 0. In particular, if two indicesj, and jkare the same, then D ( E ~.,.,. , E,~)= 0. Thus the summation above extends only over those indices where j,, . . . ,j, are all distinct. In other words, the summation extends over all permutations of the set {I, 2,. . . , n). If ?r denotes a permutation of 1,2,. . . , n, then
PROPOSITION
41
1.33
where the summation now extends over all n! permutations. But for a fixed permutation ~ ( l ) ,..., a(n) of I,.. ., n, there is a sequence of pairwise interchanges of the elements of a(l), . . . , a(n), which results in the order 1,2,. . . , n. In fact there are many such sequences of interchanges, but the number of interchanges is always odd or always even (see Hoffman and Kunze, 1971, Section 5.3). Using this, let sgn(a) = 1 if the number of interchanges required to put m(l), . . . , a ( n ) into the order 1,2,. . . , n.is even and let sgn(a) = - 1 otherwise. Now, since D is alternating, it is clear that
Therefore, we have arrived at the formula D ( a , , . . . , a,) = D(I)C,sgn(a)a,(,), . . . a,(,), since D ( I ) = D(E,,.. . , E,). It is routine to verify that, for any complex number a, the function defined by
is a determinant function and the argument given above shows that every determinant function is a D, for some a E C. This completes the proof; for more details, the reader is referred to Hoffman and Kunze (1971, Chapter 5).
en,
Definition 1.25. If A E the determinant of A, denoted by det(A) (or det A), is defined to be D,(A) where Dl is the unique determinant function with D , ( I ) = 1.
The proof of Proposition 1.32 gives the formula for det(A), but that is not of much concern to us. The properties of det(.) given below are most easily established using the fact that det(-) is an alternating n-linear function of the columns of A. Proposition 1.33. For A, B
E
en:
(i) det(AB) = det A det B. (ii) det A* = det A. (iii) det A * 0 iff the columns of A are linear independent vectors in the complex vector space a". I f A , , : n, x n , , A I 2 :n,
X
n,,A2,: n2 X n,, andA,,: n,
X
n2arecomplex
42
VECTOR SPACE THEORY
matrices, then:
]
(iv) detjA" O = detjA" = det A,, det A,,. A21 A22 0 A22 (v) If A is a real matrix, then det(k) is real and det(A) = 0 iff the columns of A are linearly dependent vectors over the real vector space Rn. ProoJ: The proofs of these assertions can be found in Hoffman and Kunze (1971, Chapter 5). These properties of det(.) have a number of useful and interesting implications. If A has columns a,, . . . , a,, then the range of the linear transformation determined by A is just span{a,, . . . , a,). Thus A is invertible iff span{a,,. . . , a,) = a" iff det A * 0. If det A * 0, then 1 = det A A ' = det Adet A-I, so det A-' = l/det A. Consider B,, : n, x n,, BIZ: n, X n,, B2, : n2 X n,, and B2, : n2 X n2- complex matrices. Then it is easy to verify the identity:
where All, A,,, A,,, and A,, are defined in Proposition 1.33. This tells us how to multiply the two (n, + n,) X (n, + n,) complex matrices in terms of their blocks. Of course, such matrices are called partitioned matrices. Proposition 1.34. Let A be a complex matrix, partitioned as above. If det A,, +; 0, then:
(i) det
A21 A22 If det A,, * 0, then: (ii) det
A22
=
det A,,det(A2, - A,,A,'A,,).
=
det A,,det(A,, - A,,A;~,,).
ProoJ: For (i), first note that
PROPOSITION
1.35
by Proposition 1.33, (iv). Therefore, by (i) of Proposition 1.33, det
A21
A22
=
det
=
det
=
det ~ , , d e t ( ~ , ,A , , A ~ ~ ~ A , , ) .
0
The proof of (ii) is similar. Proposition 1.35. Let A : n x m and B : m x n be complex matrices. Then
det! I,,+ AB) = det( I,
+ BA).
Proof. Apply the previous proposition to
We now turn to a discussion of the eigenvalues of an n X n complex matrix. The definition of an eigenvalue is motivated by the following considerations. Let A E To analyze the linear transformation determined by A, we would like to find a basis x,,. . . , x , of such that Ax, = Ajx,, j = 1,. . . , n, where A, E a . If this were possible, then the matrix of the linear transformation in the basis {x,,. . . , x,) would simply be
en.
a,
where the elements not indicated are zero. Of course, this says that the linear transformation is A, times the identity transformation when restricted to span{x,). Unfortunately, it is not possible to find such a basis for each linear transformation. However, the numbers A,, . . . , A,, which are called eigenvalues after we have an appropriate definition, can be interpreted in another way. Given A E a , Ax = Ax for some nonzero vector x iff (A A I ) x = 0, and this is equivalent to saying that A - A I is a singular matrix,
44
VECTOR SPACE THEORY
that is, det(A - AI)
x + 0 such that Ax
=
=
calculation shows that
0. In other words, A
-
A I is singular iff there exists
Ax. However, using the formula for det(.), a bit of
where a,, a,,. . . , a,-, are complex numbers. Thus det(A - AI) is a polynomial of degree n in the complex variable A, and it has n roots (counting multiplicities). This leads to the following definition. Definition 1.26. Let A
E
enand set p(A)
=
det(A - AI).
Then nth degree polynomial p is called the characteristic polynomial of A and the n roots of the polynomial (counting multiplicities) are called the eigenvalues of A. If p(A)
=
det(A - XI) has roots A,, . . . , A,, then it is clear that
since the right-hand side of the above equation is an nth degree polynomial with roots A,, . . . , A, and the coefficient of A" is ( - 1)". In particular,
so the determinant of A is the product of its eigenvalues. There is a particular case when the characteristic polynomial of A can be computed explicitly. If A E en, A = {a,,) is called lower triangular if a,, = 0 when k > j. Thus A is lower triangular if all the elements above the diagonal of A are zero. An application of Proposition 1.33 (iv) shows that when A is lower triangular, then
But when A is lower triangular with diagonal elements a,,, j = 1,. . . , n, then A - A I is lower triangular with diagonal elements (a,, - A), j = 1,. . . , n.
PROPOSITION
1.36
Thus
A has eigenvalues a , , ,. . . , a,,. Before returning to real vector spaces, we first establish the existence of eigenvectors (to be defined below). SO
Proposition 1.36. If X is an eigenvalue of A nonzero vector x E C such that Ax = Ax.
E
en, then there exists a
Proof. Since X is an eigenvalue of A, the matrix A - X I is singular, so the dimension of the range of A - X I is less than n. Thus the dimension of the null space of A - X I is greater than 0. Hence there is a nonzero vector in the null space of A - X I , say x, and (A - AI)x = 0. Definition 1.27. If A E en, a nonzero vector x E Cn is called an eigenvector of A if there exists a complex number X E such that Ax = Ax.
a
If x * 0 is an eigenvector of A and Ax = Ax, then (A - XI)x = 0 so A - X I is singular. Therefore, X must be an eigenvalue for A. Conversely, if X E is an eigenvalue, Proposition 1.36 shows there is an eigenvector x such that Ax = Ax. Now, suppose V is an n-dimensional real vector space and B is a linear transformation on V to V. We want to define the characteristic polynomial, and hence the eigenvalues of B. Let (v,,. . . , vn) be a basis for V so the matrix of B is [B] = {b,,) where the bjk's satisfy Bv, = Cjbjkvj. The characteristic polynomial of [B] is
a
p(A)
=
det([B] - XI)
a.
If we could show that p(A) where I is the n x n identity matrix and A E does not depend on the particular basis for V, then we would have a reasonable definition of the characteristic polynomial of B. Proposition 1.37. Suppose {v,,. . . , 0,) and (y,,. . . , y,) are bases for the real vector space V, and let B E C(V, V). Let [B] = (b,,) be the matrix of B in the basis (v,, . . . , vn) and let [B], = (a,,) be the matrix of B in the basis (y,, . . . , y,). Then there exists a nonsingular real matrix C = (c,,) such that
46
VECTOR SPACE THEORY
ProoJ: The numbers a,, are uniquely determined by the relations
k
BY, = xu,, y,,
=
1,. . . , n .
J
Define the linear transformation C, on V to V by Clv, = yJ, j = 1,. . . , n. Then C, is nonsingular since C, maps a basis onto a basis. Therefore,
and this yields
Thus the matrix of C;'BC, in the basis (v,, . . . , v,) is (a,,). From Proposition 1.5, we have
where [C,] is the matrix of C, in the basis (v,,. . . , 0,). Setting C = [C,], the conclusion follows. The above proposition implies that p(X)
=
det([B] - XI)
=
d e t ( c - ' [ ~ ] C- XI)
=
det(C-'([B] - AI)C) =
det([B], - XI).
Thus p(A) does not depend on the particular basis we use to represent B, and, therefore, we call p the characteristic polynomial of the linear transformation B. The suggestive notation p(X)
=
det(B - XI)
is often used. Notice that Proposition 1.37 also shows that it makes sense to define det(B) for B E C(V, V) as the value of det[B] in any basis, since the value does not depend on the basis. Of course, the roots of the polynomial p(X) = det(B - XI) are called the eigenvalues of the linear transformation B. Even though [B] is a real matrix in any basis for V, some or all of the eigenvalues of B may be complex numbers. Proposition 1.37 also allows us
PROPOSITION
47
1.38
to define the trace of A E C(V, V). If { v , , . . . , v,) is a basis for V, let trA = tr[A] where [A] is the matrix of A in the given basis. For any nonsingular matrix C,
which shows that our definition of tr A does not depend on the particular basis chosen. The next result summarizes the properties of eigenvalues for linear transformations on a real inner product space. Proposition 1.38. Suppose (V,(., product space and let A E C(V, V).
a))
is a finite dimensional real inner
(i) If X E C is an eigenvalue of A, then X is an eigenvalue of A. (ii) If A is symmetric, the eigenvalues of A are real (iii) If A is skew-symmetric, then the eigenvalues of A are pure imaginary (iv) If A is orthogonal and A is an'eigenvalue of A, then AX = 1.
Proof: If A
E
C(V, V), then the characteristic polynomial of A is
where [A] is the matrix of A in a basis for V. An examination of the formula for det(.) shows that
real numbers since [A] is a real matrix Thus if where a,,.. . , a,,-,- are p(A) = 0, then p(A) = p (A) = 0 so whenever p(X) = 0, p(X) = 0. This establishes assertion (i). For (ii), let X be an eigenvalue of A, and let {v,,. . . , v,) be an orthonorma1 basis for (V, (., .)). Thus the matrix of A, say [A], is a real symmetric matrix and [A] - X I is singular as a matrix acting on a".By Proposition 1.36, there exists a nonzero vector x E a" such that [A]x = Ax. Thus x*[A]x = Ax*x. But since [A] is real and symmetric,
Thus Xx*x = Ax*x and, since x
+
0, X
=
X so A is real.
48
VECTOR SPACE THEORY
To prove (iii), again let [A] be the matrix of A in the orthonormal basis {ol,.. . , on} so [A]' = [A]* = -[A]. If A is an eigenvalue of A, then there exists x E C , x * 0, such that [A]x = Ax. Thus x*[A]x = Ax*x and Since x * 0, X = -A, which implies that A = ib for some real number b-that is, A is pure imaginary and this proves (iii). If A is orthogonal, then [A] is an n x n orthogonal matrix in the orthonormal basis (v,,. . . , v,). Again, if A is an eigenvalue of A, then [A]x = Ax for some x E C , x * 0. Thus Xx* = x*[A]* = x*[A]' since [A] is a real matrix. Therefore as [A]'[A] = I. Hence AX = 1 and the proof of Proposition 1.38 is complete. It has just been shown that if (V, ( - , .)) is a finite dimensional vector space and if A E C(V, V) is self-adjoint, then the eigenvalues of A are real. The spectral theorem, to be established in the next section, provides much more useful information about self-adjoint transformations. For example, one application of the spectral theorem shows that a self-adjoint transformation is positive definite iff all its eigenvalues are positive. If A E C(V, W) and B E C(W, V), the next result compares the eigenvalues of AB E C(W, W) with those of BA E C(V, V). Proposition 1.39. The nonzero eigenvalues of AB are the same as the nonzero eigenvalues of BA, including multiplicities. If W = V, AB and BA have the same eigenvalues and multiplicities. Proof: Let m BA is
=
dim V and n
=
dim W. The characteristic polynomial of
pl(A) = det(BA - AI,). Now, for A
* 0, compute as follows:
=
1 ( - ~ ) " d e t ( ~ )-(AIn) ~ ~ = ----(-Urndet(AB - XI.). (-A)"
Therefore, the characteristic polynomial of AB, say p,(X) is related to p l ( X ) by
=
det(AB - XI,),
Both of the assertions follow from this relationship.
1.7. THE SPECTRAL THEOREM
The spectral theorem for self-adjoint linear transformations on a finite dimensional real inner product space provides a basic theoretical tool not only for understanding self-adjoint transformations but also for establishng a variety of useful facts about general linear transformations. The form of the spectral theorem given below is slightly weaker than that given in Halmos (1958, see Section 79), but it suffices for most of our purposes. Applications of this result include a necessary and sufficient condition that a self-adjoint transformation be positive definite and a demonstration that positive definite transformations possess square roots. The singular value decomposition theorem, which follows from the spectral theorem, provides a useful decomposition result for linear transformations on one inner product space to another. This section ends with a description of the relationship between the singular value decomposition theorem and angles between two subspaces of an inner product space. Let ( V , ( . ,.)) be a finite dimensional real inner product space. The spectral theorem follows from the two results below. If A E C(V, V ) and M is subspace of V,M is called invariant under A if A ( M ) = {Axlx E M ) G M. Proposition 1.40. Suppose A E C(V, V ) is self-adjoint and let M be a subspace of V. If A ( M ) G M, then A ( M L ) L M I .
ProoJ: Suppose v E A ( M L ) . It must be shown that ( v , x ) = 0 for all x E M. Since v E A ( M L ) , v = A v , for v , E M L . Therefore, ( v ,X
) =
since A is self-adjoint and x
( A v l ,X
E
) =
( v , ,A X ) = 0
M implies Ax
E
M by assumption.
Proposition 1.41. Suppose A E C(V, V ) is self-adjoint and A is an eigenvalue of A. Then there exists a v E V , v * 0, such that Av = Xu.
50
VECTOR SPACE THEORY
Proof: Since A is self-adjoint, the eigenvalues of A are real. Let { v , ,. . . , v , ) be a basis for V and let [ A ]be the matrix of A in this basis. By Proposition 1.36, there exists a nonzero vector z E a* such that [ A ] z = Az. Write z = z , iz, where z , E R nis the real part of z and z , E R" is the imaginary part of z . Since [ A ] is real and A is real, we have [ A l z , = Az, and [ A l z , = Az,. But, z , and z , cannot both be zero as z * 0. For definiteness, say z , * 0 and let u E V be the vector whose coordinates in basis { v , ,. . . , v , ) are z , . Then v * 0 and [ A ] [ v ]= A [ v ] .Therefore A v = Av.
+
Theorem 1.2 (Spectral Theorem). If A E C(V, V) is self-adjoint, then there exists an orthonormal basis ( x , ,. . . , x , ) for V and real numbers A,, . . . , A, such that
Further, A , , . . . , A , are the eigenvalues of A and Ax,
=
A,x,, i
=
1,. . . , n.
Proof: The proof of the first assertion is by induction on dimension. For n = 1, the result is obvious. Assume the result is true for integers 1,2,. . . , n - 1 and consider A E C(V, V), which is self-adjoint on the inner product space (V, (. , .)), n = dim V. Let A be an eigenvalue of A . By Proposition 1.41, there exists u E V, v * 0, such that A v = Av. Set x , = v/llvll and A, = A. Then Ax, = A,x,. With M = span{x,), it is clear that A ( M ) c M so A(MA)c M L by Proposition 1.40. However, if we let A , be the restriction of A to the (n - 1)-dimensional inner product space ( M A,(. , -)), then A , is clearly self-adjoint. By the induction hypothesis there is an orthonormal basis ( x , , .. . , x , - , ) for M I and real numbers A , , . . . , A n - , such that
It is clear that { x , ,. . . , x , ) is an orthonormal basis for V and we claim that
To see this, consider v , v , E M I . Then Avo
=
Av,
E
V and write v ,
=
v,
+ Av, = A,v, + A,v2 = A,v, +
+ v,
x
with v ,
E
M and
n- 1 1
Ai(xi0xi)v2.
However.
since v, E M and v, E M I . But (v,, x,)x, = v, since v , E span{x,). Therefore A = C;AixiO xi, which establishes the first assertion. For the second assertion, if A = C;AixiCl xi where {x,,. . . , x,) is an orthonormal basis for (V, (., .)), then Ax,
=
x A , ( x i n xi).,
=
x A i ( x i , x,)xi
=
A,x,.
i
I
Thus the matrix of A, say [A], in this basis has diagonal elements A,, . . . , A, and all other elements of [A] are zero. Therefore the characteristic polynomial of A is n
p(A)
=
det([A] - XI)
=
n ( A i - A), 1
which has roots A,, . . . , A,. The proof of the spectral theorem is complete. q
When A = CA,x,O xi, then A is particularly easy to understand. Namely, A is X i times the identity transformation when restricted to span{x,). Also, if x E V , then x = C(x,, x)xi SO Ax = CA,(x,, x)x,. In the case when A is an ~ orthogonal projection onto the subspace M, we know that A = C f x , x, where k = dim M and {x,,. . . , x,) is an orthonormal basis for M. Thus A has eigenvalues of zero and one, and one occurs with multiplicity k = dim M. Conversely, the spectral theorem implies that, if A is self-adjoint and has only zero and one as eigenvalues, then A is an orthogonal projection onto a subspace of dimensional equal to the multiplicity of the eigenvalue one. We now begin to reap the benefits of the spectral theorem. Proposition 1.42. If A E C(V, V), then A is positive definite iff all the eigenvalues of A are strictly positive. Also, A is positive semidefinite iff the eigenvalues of A are non-negative.
Proof. Write A in spectral form:
52
VECTOR SPACE THEORY
where {x,,. . . , x,) is an orthonormal basis for (V, (., Then (x, Ax) = CAi(x,, x)*. If A, > 0 for i = 1,. . . , n, then x + 0 implies that Chi(xi, x)' > 0 and A is positive definite. Conversely, if A is positive definite, set x = x, and we have 0 < (x,, Ax,) = A,. Thus all the eigenvalues of A are strictly positive:. The other assertion is proved similarly. q a)).
The representation of A in spectral form suggests a way to define various functions of A. If A = CAix,U xi, then
More generally, if k is a positive integer, a bit of calculation shows that
For k = 0, we adopt the convention that A0 = I since C x i o xi = I. Now if p is any polynomial on R, the above equation forces us to define p(A) by
This suggests that, if f is any real-valued function that is defined at A,, . . . , A,, we should define f (A) by
Adopting this suggestive definition shows that if A,,. . . , A, are the eigenvalues of A, then f (A,), . . . , f (A,) are the eigenvalues of f ( A ) . In particular, if A, * 0 for all i and f ( t ) = t-', t * 0, then it is clear that f(A) = A-'. Another useful choice for f is given in the following proposition. Proposition 1.43. If A E C(V, V) is positive semidefinite, then there exists a B E C(V, V) that is positive semidefinite and satisfies B* = A.
Proof. Choose f ( t )
=
t'/2, and let
PROPOSITION
1.43
53
The square root is well defined since A, 2 0 for i = 1,. . . , n as A is positive semidefinite. Since B has non-negative eigenvalues, B is positive definite. That B' = A is clear. There is a technical problem with our definition of f(A) that is caused by the nonuniqueness of the representation
for self-adjoint transformations. For example, if the first n, Xi's are equal and the last n - n, Xi's are equal, then
However, C;lxiO xi is the orthogonal projection onto M I = span(xl, . . . , x n l ) If y,,. . , , yn is any other orthonormal basis for (V, (., such that span{x,, . . . , x,,) = span{y,, . . . , ynl),it is clear that a))
Obviously, A,, . . . , A n are uniquely defined as the eigenvalues for A (counting multiplicities), but the orthonormal basis {x,,. . . , x,) providing the spectral form for A is not unique. It is therefore necessary to verify that the definition of f(A) does not depend on the particular orthonormal basis in the representation for A or to provide an alternative representation for A. It is this latter alternative that we follow. The result below is also called the spectral theorem. Theorem 1.2a (Spectral Theorem). Suppose A is a self-adjoint linear transformation on V to V where n = dim V. Let A, > A, > . . > A, be the distinct eigenvalues of A and let ni be the multiplicity of A,, i = 1,. . . , r. Then there exists orthogonal projections P I , . . . , P, with Pi< = 0 for i *j, n, = rank(P,), and CYP, = I such that
Further, this decomposition is unique in the following sense. If p , >
>
54
VECTOR SPACE THEORY
pk and Q,, . . . , Q, are orthogonal projections such that QiQ, CQi = I, and
then k
=
r , pi = Xi, and Qi = Pi for i
=
=
0 for i
*j,
1,. . . , k .
Proof. The first assertion follows immediately from the spectral representation given in Theorem 1.2. For a proof of the uniqueness assertion, see Halmos (1958, Section 79). q Now, our definition off (A) is
when A = C;A,Pi. Of course, it is assumed that f is defined at A,,. . . , A,. This is exactly the same definition as before, but the problem about the nonuniqueness of the representation of A has disappeared. One application of the uniqueness part of the above theorem is that the positive semidefinite square root given in Proposition 1.43 is unique. The proof of this is left to the reader (see Halmos, 1958, Section 82). Other functions of self-adjoint linear transformations come up later and we consider them as the need arises. Another application of the spectral theorem solves an interesting extremal problem. To motivate this problem, suppose A is self-adjoint on (V, (. , with eigenvalues A , 2 A, 2 . - . 2 A,. Thus A = CAixiO xi where {x,, . . . , x,) is an orthonormal basis for V. For x E V and llxll = 1, we ask how large (x, Ax) can be. To answer this, write and (x, Ax) = CAi(x,(xiO xi)x) = CAi(x, x,),, and note that 0 ,< (x, 1 ~ C;(xi, x ) ~ Therefore, . A, 2 CAi(x, x,), with equality for x = 1=1 1 ~ 1 = x,. The conclusion is a))
sup ( x , ~ x ) = A , x,llxll= 1
where A, is the largest eigenvalue of A. This result also shows that A,(A)-the largest eigenvalue of the self-adjoint transformation A-is a convex function of A. In other words, if A, and A, are self-adjoint and LY E [0, 11, then A,(aA, + (1 - a)A2) < aAI(Al) + (1 - a)A1(A2). TO prove this, first notice that for each x E V, (x, Ax) is a linear, and hence convex, function of A. Since the supremum of a family of convex functions is a convex
PROPOSITION
1.44
function, it follows that
A,(A) =
sup ( x , A x ) x,llxll= 1
is a convex function defined on the real linear space of self-adjoint linear transformations. An interesting generalization of this is the following. Proposition 1.44. Consider a self-adjoint transformation A defined on the n-dimensional inner product space (V,(. , .)) and let A , > A , > . . . > A n be the ordered eigenvalues of A. For 1 < k < n, let 3, be the collection of all k-tuples { v , ,. . . , v,) such that { v , ,. . . , v,) is an orthonormal set in ( V ,(., .)). Then
ProoJ: Recall that ( . , - ) is the inner product on C(V,V ) induced by the inner product (., on V , and ( x , A x ) = ( x u x, A ) for x E V. Thus a )
k
=
j z v i o v,, A \ . k
k
Z ( v ; ,AV,)
C ( v , o v,, A )
=
Write A in spectral form, A = C;A,x,O x,. For { v , , .. . , v k )E a k , Pk = Cfv,q v, is the orthogonal projection onto span{v,, . . . , v,). Thus for { v , ,. . . , 0,) E
a,,
Since Pk is an orthogonal projection and Ilxill ( x , , P,x,) < 1. Also,
because C;x,U xi
=
I
E
1, i
=
C(V,V ) . But Pk = CfviU vi, SO k
( P k ,I )
=
=
Z ( v i u V;,I )
k
=
C ( V , ,0 , ) = k.
1,. . . , n, 0 G
56
VECTOR SPACE THEORY
Therefore, the real numbers a, = (x,, Pkx,), i = 1,. . . , n, satisfy 0 < a, < 1 and C;ai = k. A moment's reflection shows that, for any numbers a,, . . . , a, satisfying these conditions, we have
since A , 2 . . . 2 A,. Therefore,
ak.
for {o,, . . . , vk) E However, setting v , in the above inequality.
=
x,, i
=
1,. . . , k, yields equality
For A E C(V, V), which is self-adjoint, define tr,A = CfA, where A , >, 2 A, are the ordered eigenvalues of A. The symbol trkA is read "trace sub-k of A." Since ( E $ I , Dv,, A ) is a linear function of A and trkA is the supremum over all {v,,. . . , 0,) E a k , it follows that trkA is a convex function of A. Of course, when k = n, tr, A is just the trace of A . For completeness, a statement of the spectral theorem for n x n symmetric matrices is in order.
..
Proposition 1.45. Suppose A is an n X n real symmetric matrix. Then there exists an n x n orthogonal matrix r and an n x n diagonal matrix D such that A = TDr'. The columns of r are the eigenvectors of A and the diagonal elements of D, say A,, . . . , A,, are the eigenvalues of A.
Proof. T h s is nothing more than a disguised version of the spectral theorem. To see ths, write
where x, E Rn, X i E R, and {x,, . . . , x,) is an orthonormal basis for Rn with the usual inner product (here x,O xi is xixj since we have the usual inner product on Rn). Let r have columns x,,. . . , x, and let D have diagonal elements A,, . . . , A,. Then a straightforward computation shows that
The remaining assertions follow immediately from the spectral theorem.
PROPOSITION
57
1.46
Our final application of the spectral theorem in this chapter deals with a representation theorem for a linear transformation A E C(V, W) where are finite dimensional inner product spaces. In this (V, ( - , .)) and (W, [., context, eigenvalues and eigenvectors of A make no sense, but something can be salvaged by considering A'A E C(V, V). First, A'A is non-negative definite and %(AfA) = %(A). Let k = rank(A) = rank(A'A) and let A, 2 - . & A, > 0 be the nonzero eigenvalues of A'A. There must be exactly k positive eigenvalues of A'A as rank(A) = k. The spectral theorem shows that a])
k
A'A
x,
= 1
where {x,,. . . , x,) is an orthonormal basis for V and A'Ax, i = 1,. . . , k, AIAxi = 0 for i = k 1,. . . , n. Therefore, %(A) = (span{x,,. . . , x , ) ) ~ .
+
= =
Alxi for %( A'A)
Proposition 1.46. In the notation above, let w, = (1,' &)AX, for i = 1,. . . , k. Then {w,,. . . , w,) is an orthonormal basis for %(A) 5 W and A = cf&wiO xi.
Proof: Sincedim%(A) = k,{w ,,..., wk)isabasisfor%(A)if{w,,..., w,) is an orthonormal set. But
and the first assertion holds. To show A = ~ f & w , O xi, we verify the two linear transformations agree on the basis {x,,. . . , x,). For 1 G j < k, Ax, = &wJ by definition and
For k
+ 1 6j
g n , Ax,
=
0 since %(A)
=
(span{x,, . . . , x,))' . Also
58
VECTOR SPACE THEORY
Some immediate consequences of the above representation are (i) AA' C:h,wiO w,, (ii) A' = c:&xio wi and Afwi = &xi for i = 1,. . ., k. summary, we have the following.
=
In
Theorem 1.3 (Singular Value Decomposition Theorem). Given A E C (V, W) of rank k, there exist orthonormal vectors x,, . . . , xk in V and w , , . . . , w, in W and positive numbers p,, . . . , pk such that
Also, %(A) = span(w,, . . . , w,), %(A) = (span(x,, . . . , x,))l , Ax, = p,w,, i = 1,. . . , k, A' = Ctp,x, w,, A'A = C t p ~ x , x,, AA' = Cfp?w, w,. The numbers p:, . . . , p i are the positive eigenvalues of both AA' and A'A. For matrices, this result takes the following form.
Proposition 1.47. If A is a real n X m matrix of rank k, then there exist matrices r : n x k, D : k x k, and 9 : k x m that satisfy r'r = I,, 99' = I,, D is a diagonal matrix with positive diagonal elements, and
Proof. Take V = Rm, W = R n and apply Theorem 1.3 to get
where x,, . . . , x, are orthonormal in R m ,w,, . . . , wk are orthonormal in Rn, and pi > 0, i = 1,. . . , k. Let r have columns w,,. . . , w,, let 'k have rows x;, . . . , x;, and let D be diagonal with diagonal elements p,, . . . , p,. An easy calculation shows that k
~ p , w , x= ~rD\k. 1
In the case that A E C(V, V) with rank k, Theorem 1.3 shows that there exist orthonormal sets {x,,. . . , x,) and (w,,. . . , w,) of V such that
where p, > 0, i
=
1,..., k. Also, %(A)
=
span{w ,,..., w,) and %(A)
=
(span{x,, . . . , x,))' . Now, consider two subspaces MI and M2 of the inner and let PI and P2 be the orthogonal projections product space (V, onto MI and M2. In what follows, the geometrical relationship between the two subspaces (measured in terms of angles, which are defined below) is related to the singular value decomposition of the linear transformation PIP2E C(V, V). It is clear that %(P2P,) c M2 and %(P2P,) 2 M: . Let k = rank(P2P,) so k Q dim(M,), i = 1,2. Theorem 1.3 implies that ( a ,
a))
where pi > 0, i = 1,. . . , k, %(P2PI)= span{w,,.. . , w,) c M2, and (%(P2P1))' = span{x,,. . . , x,) c MI. Also, {w,,. . . , w,) and {x,,. . . , x,) are orthonormal sets. Since P2Pix, = p j y and (P2Pl)'P2PI = P IP2P2P I = P IP2P I = Cfp:xi q xi, we have
Therefore, for i, j
=
1,. . . , k,
since pj > 0. Furthermore, if x E MI n (span{x,,. . . , x,))' and w E M2, then (x, w) = (P,x, P2w) = (P2P,x,w) = 0 since P2P,x = 0. Similarly, if w E M2 n (span{w,,. . . , w,))' and x E MI, then (x, w) = 0. The above discussion yields the following proposition. Proposition 1.48. Suppose MI and M2 are subspaces of (V, (., .)) and let P I and P2 be the orthogonal projections onto MI and M2. If k = rank(P,P,), then there exist orthonormal sets {x,,. . . , x,) c MI, {w,,. . . , w,) c M2 and positive numbers p, 2 - . . 2 pk such that:
(i) P2P, = Cfpiw,0 xi. (ii) P IP2P I = CfpTxiq x i . (iii) P2PIP2 = C:p: wi q w,. (iv) 0 < pj Q 1 and (xi, w,) = Sijpj for i, j = 1,. . . , k. (v) If x E MI n (span{x,, . . . , x,))' and w E M2, then (x, w) = 0. If w E M2 n (span{w,,. . . , w,))' and x E MI, then (x, w) = 0.
60
VECTOR SPACE THEORY
ProoJ: Assertions (i), (ii), (iii), and (v) have been verified as has the relationship (x,, w,) = aijp,. Since 0 < p, = (x,, w,), the Cauchy-Schwarz Inequality yields (x,, w,) G Ilx,ll Ilw,ll = 1. In Proposition 1.48, if k = rank P2P I = 0, then MI and M2 are orthogonal to each other and PIP2= P2P, = 0. The next result provides the framework in which to relate the numbers p1 >, . . . 2 pk to angles.
Proposition 1.49. = M2,
In the notation of Proposition 1.48, let MI, = MI, M,, M,, = (span{x, ,. . . , xi- )) ' n M I ,
and
for i
=
2,. . . , k
+ 1. Also, for i = 1,. . . , k, let
and D2, = {wlw E M2,,llwII = 1). Then sup sup ( x , w)
X G D , ,W E D , ,
fori
=
=
(x,, w,)
=
pi
1,..., k.Also, M , ( , + , , I M 2 a n d M 2 ( , + , , IMI.
Proof. Since xi E Dl, and w, E D,,, the iterated supremum is at least (x,, w,) and (x,, w,) = pi by (iv) of Proposition 1.48. Thus it suffices to show that for each x E Dl, and w E D,,, we have the inequality (x, w) < p,. However, for x E Dl, and w E D,,,
since llwll
Since x
=
E
1 as w
E
Dl,. Thus
Dl,, (x, x,)
=
0 for j
=
1,. . . , i - 1. Also, the numbers a,
=
PROPOSITION
1.49
(x,, x ) satisfy ~ 0 g a, g 1 and Cfa,
<
1 as llxll
=
The last inequality follows from the fact that p, conditions on the a,'s. Hence,
1. Therefore,
> . - . > p,
> 0 and the
sup sup ( x , w ) = (x,,w,) = pi,
X E D , ,W E D * ,
and the first assertion holds. The second assertion is simply a restatement of (v) of Proposition 1.48.
Definition 1.28. Let MI and M2 be subspaces of ( V , ( . ,.)). Given the numbers p, 2 2 pk > 0, whose existence is guaranteed by Proposition 1.48, define 8, E [O, a/2) by
+
Let t = min{dimM,,dimM,) and set 8, = a/2 for i = k 1, ..., t. The numbers 8, ,< 8, g . . . g 8, are called the ordered angles between MI and M2. The following discussion is intended to provide motivation, explanation, and a geometric interpretation of the above definition. Recall that if y, and y, are two vectors in (V, (., .)) of length 1, then the cosine of the angle between y, and y, is defined by cos 8 = (y,, y,) where 0 g 8 g T. However, if we want to define the angle between the two lines span{y,) and span{y,), then a choice must be made between two angles that are complements of each other. The convention adopted here is to choose the angle in [O, a/2]. Thus the cosine of the angle between span{y,) and span{y2) is just I( y,, y2)1. To show this agrees with the definition above, we have Mi = span{yi) and Pi = yiO yi is the orthogonal projection onto Mi, i = 1,2. The rank of P2P, is either zero or one and the rank is zero iff y, Iy,. If y, Iy2, then the angle between MI and M, is ~ / 2 ,which agrees with Definition 1.28. When the rank of P, P I is one, P, P2P I = ( y,, y2)2y, y, whose only nonzero eigenvalue is (y,, y2)2.Thus p: = (y,, y2)2so p, = I(y,, y2)l = cos dl, and again we have agreement with Definition 1.28. Now consider the case when M, = span{y,), lly,ll = 1, and M2 is an Geometrically, it is clear that the angle arbitrary subspace of (V, (., .))a
62
VECTOR SPACE THEORY
between MI and M2 is just the angle between M I and the orthogonal projection of MI onto M,, say M,* = span{P, y,) where P, is the orthogonal projection onto M,. Thus the cosine of the angle between MI and M2 is
If P2y, = 0, then MI I M, and cos 8 = 0 so 0 = m/2 in agreement with Definition 1.28. When P, y, * 0, then PIP2PI = ( y,, P2y,) y, y,, whose = p:. only nonzero eigenvalue is (y,, P2yl) = (P2y,, Ply,) = llP,~,11~ Therefore, p, = (IP2yll(and again we have agreement with Definition 1.28. In the general case when dim(M,) > 1 for i = 1,2, it is not entirely clear how we should define the angles between MI and M,. However, the following considerations should provide some justification for Definition 1.28. First, if x E MI and w E M,, llxll = llwll = 1. The cosine of the angle between span x and span w is I(x, w)l Thus the largest cosine of any angle (equivalently, the smallest angle in [0, m/21) between a one-dimensional subspace of MI and a one-dimensional subspace of M2 is sup
sup ((x,w)l= sup
XED,,WED,,
sup ( x , w ) .
X E D , ,w e D 2 ,
The sets D l , and D,, are defined in Proposition 1.49. By Proposition 1.49, this iterated supremum is p, and is achieved for x = x, E D l , and w = w, E D,,. Thus the cosine of the angle between span{x,) and span{w,) is p,. Now, remove span{x,) from MI to get MI, = (span{x,))' n MI and remove span{w,} from M2 to get M,, = (span{w,))l n M2. The second largest cosine of any angle between MI and M, is defined to be the largest cosine of any angle between MI, and M,, and is given by
Next span{x2) is removed from MI, and span{w2) is removed from M2,, yielding M,, and M2,. The third largest cosine of any angle between MI and M, is defined to be the largest cosine of any angle between MI, and M,,, and so on. After k steps, we are left with MI(,+,) and M,(,+,,, which are orthogonal to each other. Thus the remaining angles are m/2. The above is precisely the content of Definition 1.28, given the results of Propositions 1.48 and 1.49. The statistical interpretation of the angles between subspaces is given in a later chapter. In a statistical context, the cosines of these angles are called
63
PROBLEMS
canonical correlation coefficients and are a measure of the affine dependence between the random vectors.
PROBLEMS All vector spaces are finite dimensional unless specified otherwise.
1. Let K + , be the set of all nth degree polynomials (in the real variable t ) with real coefficients. With the usual definition of addition and 1)-dimensional real scalar multiplication, prove that Vn+, is an ( n vector space.
+
2.
For A E C(V, W), suppose that M is any subspace of V such that M a3 %(A) = v. (i) Show that %(A) = A(M) where A(M) = {wlw = Ax for some x E M). (ii) If x,,. . . , x, is any linearly independent set in V such that span{x,, . . . , xk) n %(A) = {0), prove that Ax,, . . . , Ax, is linearly independent.
3. For A E C(V, W), fix wo E W and consider the linear equation Ax = w,. If w, P %(A), there is no solution to this equation. If w, E %(A), let x, be any solution so Ax, = w,. Prove that %(A) + x, is the set of all solutions to Ax = w,. 4.
For the direct sum space V,
$
V2, suppose A,
E
C (7, y ) and let
be defined by
for (vl, v2) E Vl 63 V2. (i) Prove that TI is a linear transformation. (ii) Conversely, prove that every TI E C(Vl $ V2, Vl a representation. (iii) If
CB
V2) has such
64
VECTOR SPACE THEORY
and
prove that the representation of TU is
5.
Let x , , . . . , x,, x,,, be vectors in V with x,,. . . , x, being linearly independent. For w,, . . . , w,, w,, in W, give a necessary and sufficient condition for the existence of an A E C(V, W ) that satisfies Ax, = w,, i = 1, ..., r + 1.
,
6. Suppose A E C(V, V) satisfies k so that B = kA is a projection.
=
CA where c
* 0. Find a constant
7. Suppose A is an rn x n matrix with columns a,,. . . , a n and B is an n x k matrix with rows b;,. . . , b;. Show that AB = C;a,b,[. 8. Let x i , . . . , x, be vectors in Rn, set M = span{x,, . . . , x,), and let A be the n X k matrix with columns x,, . . . , x, so A E C(Rk, Rn). (i) Show M = %(A). (ii) Show dim(M) = rank(A'A). 9. For linearly independent x,,. . . , x, in (V, (., -)), let y,,. . . , y, be the vectors obtained by applying the Gram-Schmidt (G-S) Process to x,,. . . , x,. Show that if zi = Ax,, i = 1,. . . , k, where A E O(V), then the vectors obtained by the G-S Process from z,,. . . , z, are Ay,,. . . , Ay,. (In other words, the G-S Process commutes with orthogonal transformations.)
* 0. ~ o r m y;, . . . , y; by x,/IIx,II and y,' = xi - (xi, y,')y,', i = 2,. . . , k: (i) Show span{x,,. . . , x,) = span{y,',. . . , y,') for r = 1,2,. . . , k. (ii) Show y,' I span{y;, . . . , y:) so span{y,', . . . , y,!) = span{y,') @ span{y; ,..., y,') for r = 2,..., k. (iii) Now, form y;,. . . , y; from y;,. . . , y: as the y"s were formed from the x's (reordering if necessary to achieve y; * 0). Show span{x,,. . . , x,) = span{y,') @ span{y;) @ span{y;,. . . , y;). rn = dim(span{x,, . . . , x,)). Show that after applying the Let (iv) above procedure rn times, we get an orthonormal basis y,', y:,. . . , y z for span{x,,. . . , x,).
lo. In (V,(., .)), let x,,. . . , x, be vectors with x, y;
=
65
PROBLEMS
.
(v) If x,, . . , x, are linearly independent, show that span{x,, . . . , x,) = span{y,', y;,. . . , y,') for r = 1,. . . , k. 11.
Let x,,. . . , x, be a basis for (V,(., .)) and w,,. . . , w, be a basis for (W, I., .I). For A, B E C(V, W), show that [Ax,, w,] = [Bx,, w,] for i = 1,. . . , m and j = 1,. . . , n implies that A = B.
12. For xi E (V,(., .)) and y, E ( W , [ . ,.I), i = 1,2, suppose that x, y, = x 2 n y2 t 0. Prove that x, = cx, for some scalar c * 0 and then y, = c-'y,. 13. Given two inner products on V, say (., .) and [ - , .], show that there exist positive constants c, and c2 such that c,[x, x] < (x, x ) < c2[x, x], x E V. Using ths, show that for any open ball in (V,(., .)), say B = (xl(x, x ) ' / ~< a), there exist open balls in (V, [., .I), say B, = {xl[x, x ] ' / ~< Pi), i = 1,2, such that B, G B c B,.
14. In (V,(., .)), prove that Ilx + yll h(x) = llxll is a convex function. 15.
< llxll + IIyII. Using ths, prove that
For positive integers I and J , consider the IJ-dimensional real vector space, V , of all real-valued functions defined on {1,2,. . . , I ) X {1,2,. . . , J ) . Denote the value of y E V at (i, j) by y,,. The inner product on V is taken to be (y, y') = CCy,,y',,. The symbol 1 E V denotes the vector all of whose coordinates are one. (i) Define A on V to V by Ay = 7..1 where y, = ( I J ) - 'LC y,, . Show that A is the orthogonal projection onto span(1). (ii) Define linear transformations B,, B,, and B, on V by ,
(B,Y),, = y,.-
(B,y),,
= y,,
y..
-7,.-7., +y..
where jj..=
J-'E~. 1J
J
and
y., = I ~ ' E ~ , , . Show that B,, B,, and B, are orthogonal projections and the
66
VECTOR SPACE THEORY
following holds:
(iii) Show that
16. For r E O(V) and M a subspace of V, suppose that T(M) that T(ML) c M L .
c M. Prove
17.
Given a subspace M of (V, (., show the following are equivalent: (i) J(x,y)l G cllxll for all x E M. (ii) llp,Wyll G c. Here c is a fixed positive constant and P, is the orthogonal projection onto Ad.
18.
In (V, (., suppose A and B are positive semidefinite. For C, D C(V, V) prove that (tr ACBD')~G tr ACBC'tr ADBD'.
a)),
a)),
E
19. Show that (l?is a 2n-dimensional real vector space. 20. Let A be an n x n real matrix. Prove: (i) If A, is a real eigenvalue of A, then there exists a corresponding real eigenvector. (ii) If A, is an eigenvalue that is not real, then any corresponding eigenvector cannot be real or pure imaginary. 21. In an n-dimensional space (V, (., .)), suppose P is a rank r orthogonal projection. For a, p E R, let A = a P + P ( I - P). Find eigenvalues, eigenvectors, and the characteristic polynomial of A. Show that A is positive definite iff a > 0 and /3 > 0. What is A-' when it exists? 22. Suppose A and B are self-adjoint and A - B >, 0. Let A, >, . . . >, A, and p , 2 . . . p, be the eigenvalues of A and B. Show that Xi >, pi, i = 1, ..., n. 23. If S , T E C(V, V) and S > 0, T >, 0, prove that ( S , T ) T = 0. 24.
For A
E
(C(V, V), ( . ,
a)),
show that (A, I )
=
tr A.
=
0 implies
67
PROBLEMS
25. Suppose A and B in C(V, V ) are self-adjoint and write A B to mean A-B20. (i) If A >, B, show that CAC' >, CBC' for all C E C(V, V). (ii) Show I 2 A iff all the eigenvalues of A are less than or equal to one. IS 2 B2? (iii) Assume A > 0, B > 0, and A B. Is All2 2 26. If P is an orthogonal projection, show that tr P is the rank of P. Let x,, . . . , x, be an orthonormal basis for (V, (., .)) and consider the vector space (C(V, V), ( , .)). Let M be the subspace of C(V, V) consisting of all self-adjoint linear transformations and let N be the subspace of all skew symmetric linear transformations. Prove: (i) {x, x, + x, x,li G j) is an orthogonal basis for M. (ii) {xiOx, - x,O xili < j) is an orthogonal basis for N. (iii) M is orthogonal to N and M @ N = C(V, V). (iv) The orthogonal projection onto M is A + (A + Af)/2, A E fxv, V ) .
28. Consider C,, ,with the usual inner product (A, B) = tr AB', and let 5, be the subspace of symmetric matrices. Then (S,, ( . , .)) is an inner product space. Show dim S, = n ( n + 1)/2 and for S , T E S,, (S, T ) = x,siitii+ 2zzi<,sijtij. 29. For A
E
C(V, W), one definition of the norm of A is
where 11 . )I is the given norm on W. (i) Show that lllAlll is the square root of the largest eigenvalue of A'A. (ii) Show that IllaAlll = IallllAlll, a E R and IIIA + Blll G IllAlll + 111B111.
30. In the inner product spaces (V, ( - , .)) and (W, [ - , .I), consider A E C(V, V) and B E C(W, W), which are both self-adjoint. Write these in spectral form as
n
B
=
~ p , w ,w,.~ 1
(Note: The symbol q has a different meaning in these two equations
68
VECTOR SPACE THEORY
since the definition of 13depends on the inner product.) Of course, x,, . . . , X, [w,,. . . , w,] is an orthonormal basis for (V, [(W, [., .])I. Also, {xiDyli = 1,. . . , m, j = 1,. . . , n ) is an orthonormal basis for (C(W, V), ( . , . )), and A 8 B is a linear transformation on C(W, V) to C( W, V). (i) Show that (A 8 B)(x,Ow,) = A,pj(xiOw,) so Aipj is an eigenvalue of A 63 B. (ii) Show that A 8 B = CCAip,(xi ?) 6(xi w,) and this is a spectral decomposition for A 8 B. What are the eigenvalues and corresponding eigenvectors for A 8 B? (iii) If A and B are positive definite (semidefinite), show that A 8 B is positive definite (semidefinite). (iv) Show that tr A 8 B = (tr A)(tr B) and det A 8 B = (det A)"(det B)". ( a ,
a))
31. Let x,,. . . , x, be linearly independent vectors in Rn, set M = span{x,, . . . , x,), and let A : n X p have columns x,, . . . , x,. Thus %(A) = M and A'A is positive definite. (i) Show that rC, = A(A'A)-'/2 is a linear isometry whose columns form an orthonormal basis for M. Here, (A'A)-'/2 denotes the inverse of the positive definite square root of A'A. (ii) Show that $4'= A(A'A)-'A' is the orthogonal projection on to M.
32. Consider two subspaces, MI and M2, of R n with bases x,, . . . , x, and y,,. . . , y,. Let A(B) have columns x,,. . . ,x, (y,,. . . , y,). Then P, = A(A'A)-'A' and P2 = B(B'B)-'B' are the orthogonal projections onto MI and M2, respectively. The cosines of the angles between MI and M2 can be obtained by computing the nonzero eigenvalues of P,P,P,. Show that these are the same as the nonzero eigenvalues of and of
33. In R ~ set , xi = (1,0,0,O), x i = (0, 1,0,O), y; = (1, 1, 1, l), and y; (1, - 1, 1, - 1). Find the cosines of the angles between MI span{x,, x2) and M2 = span{y,, y2). 34.
= =
For two subspaces MI and M2 of (V, (., .)), argue that the angles between MI and M, are the same as the angles between r ( M , ) and I'(M2) for any r E 8(V).
NOTES AND REFERENCES
35.
69
T h s problem has to do with the vector space V of Example 1.9 and V may be infinite dimensional. The results in this problem are not used in the sequel. Write XI = X2 if X, = X2 a.e. (Po) for XI and X2 in V. It is easy to verify -- is an equivalence relation on V. Let M = {XIX E V , X = 0 a.e. (Po)) so X, = X2 iff X, - X2 E M. Let L~ be the set of equivalence classes in V. (i) Show that L2 is a real vector space with the obvious definition of addition and scalar multiplication. Define ( -, .) on L2 by ( y,, y2) = GXl X2 where Xi is an element of the equivalence class y,, i = 1,2. (ii) Show that ( - , .) is well defined and is an inner product on L2. Now, let Go be a sub o-algebra of 9 . For y E L2, let Py denote the conditional expectation given Go of any element in y. (iii) Show that P is well defined and is a linear transformation on L2 to L2. Let N be the set of equivalence classes of all gomeasurable functions in V. Clearly, N is a subspace of L2. (iv) Show that p2 = P, P is the identity on N, and % ( P ) = N. Also show that P is self-adjoint-that is (y,, Py,) = (Py,, y,). Would you say that P is the orthogonal projection onto N?
NOTES AND REFERENCES 1. The first half of this chapter follows Halmos (1968) very closely. After ths, the material was selected primarily for its use in later chapters. The material on outer products and Kronecker products follows the author's tastes more than anything else. 2. The detailed discussion of angles between subspaces resulted from unsuccessful attempts to find a source that meshed with the treatment of canonical correlations given in Chapter 10. A different development can be found in Dempster (1969, Chapter 5).
3. Besides Halmos (1958) and Hoffman and Kunze (1971), I have found the book by Noble and Daniel (1977) useful for standard material on linear algebra. 4.
Rao (1973, Chapter 1) gives many useful linear algebra facts not discussed here.
Random Vectors
The basic object of study in this book is the random vector and its induced distribution in an inner product space. Here, utilizing the results outlined in Chapter 1, we introduce random vectors, mean vectors, and covariances. Characteristic functions are discussed and used to give the well known factorization criterion for the independence of random vectors. Two special classes of distributions, the orthogonally invariant distributions and the weakly spherical distributions, are used for illustrative purposes. The vector spaces that occur in this chapter are all finite dimensional.
2.1. RANDOM VECTORS Before a random vector can be defined, it is necessary to first introduce the Borel sets of a finite dimensional inner product space (V,(., .)). Setting llxll = (x, x)'I2, the open ball of radius r about xo is the set defined by S,(xo> = {xIIIx - xoll < I>.
Definition 2.1. The Borel a-algebra of (V, . )), denoted by %(V), is the smallest a-algebra that contains all of the open balls. (a,
Since any two inner products on V are related by a positive definite linear transformation, it follows that %(V) does not depend on the inner product on V-that is, if we start with two inner products on V and use these inner products to generate a Borel a-algebra, the two a-algebras are the same. Thus we simply call %(V) the Borel a-algebra of V without mentioning the inner product.
PROPOSITION
71
2.1
A probability space is a triple (8, 9 , Po) where 8 is a set, 9 i s a a-algebra of subsets of 8 , and Po is a probability measure defined on 9.
Definition 2.2. A random vector X E V is a function mapping 8 into V such that X-'(B) E 9for each Borel set B E %(V). Here, X-'(B) is the inverse image of the set B.
Since the space on which a random vector is defined is usually not of interest here, the argument of a random vector X i s ordinarily suppressed. Further, it is the induced distribution of X on V that most interests us. To define ths, consider a random vector X defined on 8 to V where (8, F, Po) is a probability space. For each Borel set B E %(V), let Q(B) = Po( X-'(B)). Clearly, Q is a probability measure on %(V) and Q is called the induced distribution of X-that is, Q is induced by X and Po. The following result shows that any probability measure Q on %(V) is the induced distribution of some random vector. Proposition 2.1. Let Q be a probability measure on %(V) where V is a finite dimensional inner product space. Then there exists a probability space ( 8 , 4, Po) and a random vector X on 8 to V such that Q is the induced distribution of X.
Proot Take 8 = V, 9 = %(V), Po = Q, and let X(w) Clearly, the induced distribution of Xis Q.
=
w for w
E
V.
Henceforth, we write things like: "Let X be a random vector in V with distribution Q," to mean that X is a random vector and its induced distribution is Q. Alternatively, the notation C(X) = Q is also used-ths is read: "The distributional law of X is Q." A function f defined on V to W is called Borel measurable if the inverse image of each set B E % ( W )is in %(V). Of course, if Xis a random vector in V, then f ( X ) is a random vector in W when f is Borel measurable. In particular, when f is continuous, f is Borel measurable. If W = R and f is Borel measurable on V to R, then f( X ) is a real-valued random variable. Definition 2.3. Suppose Xis a random vector in V with distribution Q and
f is a real-valued Borel measurable function defined on V. If f vl f(x)lQ(dx) < + co,then we say that f(X) has finite expectation and we write &f( X ) for lvf (x)Q(dx).
In the above definition and throughout this book, all integrals are Lebesgue integrals, and all functions are assumed Borel measurable.
72
+
RANDOM VECTORS
Example 2.1. Take V to be the coordinate space Rn with the usual inner product .) and let dx denote standard Lebesgue measure on Rn. If q is a non-negative function on Rn such that j q ( x ) dx = 1, then q is called a density function. It is clear that the measure Q given by Q ( B ) = /,q(x) dx is a probability measure on Rn so Q is the distribution of some random vector X on Rn. If E , , . . . , E , is the standard basis for Rn, then ( E , , X ) = Xi is the ith coordinate of X. Assume that Xi has a finite expectation for i = 1,. . . , n. Then G4 = jRn(q,x ) q ( x )dx = pi is called the mean value of X, and the vector p E Rn with coordinates p,, . . . , pn is the mean vector of X. Notice that for any vector x E Rn, & ( x ,X ) = &(CX,E,, X) = CX,G(E,, X ) = L i p i = ( x , p). Thus the mean vector p satisfies the equation & ( x ,X ) = ( x , p ) for all x E Rn and p is clearly unique. It is exactly t h s property of p that we use to define the mean vector of a random vector in an arbitrary inner product space V. ( a ,
+
Suppose X is a random vector in an inner product space ( V ,(., .)) and assume that for each x E V , the random variable ( x , X ) has a finite expectation. Let f ( x ) = G ( x , X ) , so f is a real-valued function defined a 2 x 2 )= & ( a , x , a 2 x 2 ,X ) = G [ a , ( x , ,X ) on V. Also, f ( a , x , a 2 ( x 2 ,X ) ] = a 1 F ( x I X , ) + a2G(x2,X ) = a , f ( x I ) + a2f ( x 2 ) Thus f is a linear function on V. Therefore, there exists a unique vector p E V such that f ( x ) = ( x , p ) for all x E V. Summarizing, there exists a unique vector p E V that satisfies G ( x , X ) = ( x , p ) for all x E V. The vector p is called the mean vector of X and is denoted by GX. This notation leads to the suggestive equation & ( x ,X ) = ( x , G X ) , which we know is valid in the coordinate case.
+
+
+
Proposition 2.2. Suppose X E ( V ,(. , .)) and assume X has a mean vector p. Let (W, [., .I) be an inner product space and consider A E C(V,W ) and W , E W. Then the random vector Y = AX wo has the mean vector Ap w,,-that is, GY = AGX wo.
+
+
+
Proof. The proof is a computation. For w & [ w ,Y I
+
=
& [ w ,AX
E
W,
+ w0]= G [ w , A X ] + [w, w,]
Thus Ap w0 satisfies the defining equation for the mean vector of Y and by the uniqueness of mean vectors, E Y = Ap + wo.
If X, and X, are both random vectors in (V, (., -)), whch have mean vectors, then it is easy to show that &(X, + X,) = GX, GX,. The following proposition shows that the mean vector p of a random vector does not depend on the inner product on V.
+
Proposition 2.3. If X is a random vector in (V, (., .)) with mean vector p satisfying &(x, X) = (x, p) for all x E V, then p satisfies &f(x, X) = f (x, p) for every bilinear function f on V x V.
Proof: Every bilinear function f is given by f (x,, x,) = (x,, Ax,) for some A E C(V, V). Thus &f(x, X) = &(x, AX) = (x, Ap) = f(x, p) where the second equality follows from Proposition 2.2. When the bilinear function f is an inner product on V, the above result establishes that the mean vector is inner product free. At times, a convenient choice of an inner product can simplify the calculation of a mean vector. The definition and basic properties of the covariance between two real-valued random variables were covered in Example 1.9. Before defining the covariance of a random vector, a review of covariance matrices for coordinate random vectors in R n is in order.
+
Example 2.2. In the notation of Example 2.1, consider a random vector X in R n with coordinates Xi = (E,, X) where E,,. . . , E,is the standard basis for R n and is the standard inner product. ~ s s u m ethat Gx;' < w , i = 1,. . . , n. Then cov(&,, = a,, exists for all i, j = 1,. . . , n. Let Z be the n X n matrix with elements a,,. Of course, a,, is the variance of Xi and a,, is the covariance between Xi and X,. The symmetric matrix I: is called the covariance matrix of X. Consider vectors x, y E R nwith coordinates xi and y,, i = 1,. . . , n. Then
+
( a ,
=
a )
3)
C C X , ~ , C O Vx,) ( ~= , C Cxiy,aij i j
j
Hence cov{(x, X), (y, X)) = (x, Z y). It is this property of I: that is used to define the covariance of a random vector.
+
With the above example in mind, consider a random vector X in an inner product space (V, (., and assume that &(x, X)' < w for all x E V. Thus a))
74
RANDOM VECTORS
(x, X) has a finite variance and the covariance between (x, X) and (y, X) is well defined for each x, y E V.
Proposition 2.4. For x, y
E
V, define f(x, y) by
Then f is a non-negative definite bilinear function on V X V ProoJ: Clearly, f (x, y) = f (y, x) and f (x, x) = var{(x, X)) 2 0, so it re-. mains to show that f is bilinear. Since f is symmetric, it suffices to verify that. f ( a l x l a2x2,y) = a , f(x,, y ) + a, f(x,, y). This verification goes as follows:
+
By Proposition 1.26, there exists a unique non-negative definite linear transformation Z such that f (x, y) = (x, Z y ).
Definition 2.4. The unique non-negative definite linear transformation 12 on V to V that satisfies
is called the covariance of X and is denoted by Cov(X). Implicit in the above definition is the assumption that &(x, X)' < + CC) for all x E V. Whenever we discuss covariances of random vectors, &(x, X)2 is always assumed finite. It should be emphasized that the covariance of a random vector in (V, (., .)) depends on the given inner product. The next result shows how the covariance changes as a function of the inner product.
Proposition 2.5. Consider a random vector X in (V, (., .)) and suppose Cov(X) = 2 . Let [., . ] be another inner product on V given by [x, y] = (x, Ay) where A is positive definite on (V, ( - , .)). Then the covariance of X in the inner product space (V, [ ., is ZA. a])
PROPOSITION
75
2.6
Proof. To verify that ZA is the covariance for X in (V, [., -I), we must show that cov{[x, XI, [y, X I ) = [x, ZAyj for all x, y E V. To do this, use the definition of [., . ] and compute: cov{[x, XI, [ Y , XI)
=
cov{(x, AX), ( y , AX))
=
cov{(Ax, X), (Ay, X))
Two immediate consequences of Proposition 2.5 are: (i) if Cov(X) exists in one inner product, then it exists in all inner products, and (ii) if Cov( X) = Z in (V, (., and if Z is positive definite, then the covariance of X in the inner product [x, y] = (x, Z-'y) is the identity linear transformation. The result below often simplifies a computation involving the derivation of a covariance. )a
Proposition 2.6. Suppose Cov(X) = Z in (V, (., If Z, is a self-adjoint linear transformation on (V, (., -)) to (V, ( - , that satisfies a)).
a))
var{(x, X))
(2.1)
then Z,
=
=
(x, Z,x)
for x
E
V,
2.
Proof. Equation (2.1) implies that (x, Z,x) = (x, Zx), x E V. Since Z, and Z are self-adjoint, Proposition 1.16 yields the conclusion Z, = 8 . When Cov(X) = Z is singular, then the random vector X takes values in To make this precise, let us consider the translate of a subspace of (V, the following. ( a ,
a)).
Proposition 2.7. Let X be a random vector in (V, (., .)) and suppose Cov(X) = Z exists. With p = G X and %(Z) denoting the range of 2 , P{X E % ( Z ) p) = 1.
+
+
Proof. The set % ( Z ) p is the set of vectors of the form x + p for x E %(Z); that is % ( Z ) + p is the translate, by p, of the subspace %(Z). The statement P{X E % ( Z ) + p) = 1 is equivalent to the statement P{X p E %(Z)) = 1. The random vector Y = X - p has mean zero and, by Proposition 2.6, Cov(Y) = Cov(X) = Z since var{(x, X - p)) = var{(x, X)) for x E V. Thus it must be shown that P{Y E %(Z)) = 1. If Z is nonsingular, then % ( 2 ) = V and there is nothing to show. Thus assume that the null space of Z, %(Z), has dimension k > 0 and let {x,, . . . , x,) be an orthonorma1 basis for %(Z). Since % ( Z ) and %(Z) are perpendicular and % ( Z ) @
76
RANDOM VECTORS
%(Z)
=
V, a vector x is not in %(Z) iff for some index i
=
* Oforsomei = 1,...,
k)
(x,,x) * 0. Thus
P{Y P % ( Z ) )
=
P{(x,,Y)
1,. . . , k ,
k
=s C p { ( x , , Y)
#
0).
1
But (xi, Y) has mean zero and var{(x,, Y)) = (xi, Zx,) = 0 since xi E %(Z). Thus (xi, Y) is zero with probability one, so P{(x,, Y) * 0) = 0. Therefore P{Y P %(Z)) = 0. Proposition 2.2 describes how the mean vector changes under linear transformations. The next result shows what happens to the covariance under linear transformations. Proposition 2.8. Suppose X is a random vector in (V, (., .)) with Cov(X) = Z. If A E C(V, W) where (W, [., .I) is an inner product space, then Cov( AX + wo) = AZA' for all wo Proof:
E
W.
By Proposition 2.6, it suffices to show that for each w E W, var[w, AX + w,]
However,
var[w, AX + wo] = var([w, AX] =
Thus Cov( AX
+ w,)
=
var(A'w, X )
=
[w, AZA'w].
+ [w, w,]) =
=
var[w, AX]
(A'w, ZA'w)
=
[w, AZA'w].
AZ A'
2.2. INDEPENDENCE OF RANDOM VECTORS With the basic properties of mean vectors and covariances established, the next topic of discussion is characteristic functions and independence of random vectors. Let X be a random vector in (V, (., .)) with distribution Q. Definition 2.5. The complex valued function on V defined by
is the characteristic function of X.
77
INDEPENDENCE OF RANDOM VECTORS
+
and t E R. In the above definition, e" = cos t i sin t where i = Since el' is a bounded continuous function of t , characteristic functions are well defined for all distributions Q on (V,(., .)). Forthcoming applications of characteristic functions include the derivation of distributions of certain functions of random vectors and a characterization of the independence of two or more random vectors. One basic property of characteristic functions is their uniqueness, that is, if Q, and Q, are probability distributions on (V, .)) with characteristic and +,, and if + , ( x )= +,(x) for all x E V , then Q, = Q,. A functions proof of this is based on the multidimensional Fourier inversion formula, which can be found in Cramer (1946). A consequence of this uniqueness is that, if X I and X, are random vectors in (V,(., .)) such that C((x,X I ) )= C((x,X,)) for all x E V , then C ( X l )= C(X2).This follows by observing that C((x,X I ) )= C((x,X,)) for all x implies the characteristic functions of X, and X, are the same and hence their distributions are the same. To define independence, consider a probability space (a, '3, Po) and let X E (V,(., .)) and Y E ( W ,[., .I) be two random vectors defined on a.
+,
( a ,
Definition 2.6. The random vectors X and Y are independent if for any Borel sets B, E % ( V )and B, E % ( W ) ,
In order to describe what independence means in terms of the induced it is necessary to define distributions of X E (V,(., .)) and Y E (W, [., what is meant by the joint induced distribution of X and Y. The natural vector space in which to have X and Y take values is the direct sum V @ W defined in Chapter 1. For {vi,w,) E V @ W , i = 1,2, define the inner product (. , .), by a ] ) ,
That (., .), is an inner product on V @ W is routine to check. Thus { X , Y ) takes values in the inner product space V @ W. However, it must be shown that { X ,Y ) is a Borel measurable function. Briefly, this argument goes as follows. The space V @ W is a Cartesian product space-that is, V @ W consists of all pairs ( v , w ) with v E V and w E W. Thus one way to get a a-algebra on V @ Wis to form the product o-algebra % ( V )x % ( W ) ,which is the smallest a-algebra containing all the product Borel sets B, x B, c V @ W where B, E % ( V ) and B, E % ( W ) . It is not hard to verify that inverse images, under { X , Y ) , of sets in % ( V )X % ( W )are in the a-algebra '3. But the product a-algebra % ( V )x % ( W )is just the a-algebra %(V @ W ) defined earlier. Thus { X , Y ) E V @ W is a random vector and hence has an
78
RANDOM VECTORS
induced distribution Q defined on %(V @ W). In addition, let Q l be the
induced distribution of X on 93( V ) and let Q, be the induced distribution of
Y on %(W). It is clear that Q,(B,) = Q(Bl X W) for B, E %(V) and Q2(B2)= Q(V X B,) for B2 E %(W). Also, the characteristic function of { X , Y ) E V @ Wis
and the marginal characteristic functions of X and Y are and
Proposition 2.9. Given random vectors X E (V, (., .)) and Y the following are equivalent:
E
(i) X and Y are independent. (ii) Q(B, X B,) = Q,(B,)Q,(B,) for all B, E % ( V )and B, (iii) +({v,w)) = +,(v)+,(w) for all v E Vand w E W.
(W, [.,
E
.I),
%(W).
Proof: By definition,
The equivalence of (i) and (ii) follows immediately from the above equation. To show (ii) implies (iii), first note that, iff, and f2 are integrable complex valued functions on V and W, then when (ii) holds,
by Fubini's Theorem (see Chung, 1968). Taking f,(v) v E V, and f2(w) = e'["l."] for w,, w E W, we have
=
e i ( " ~ , "for ) v
13
Thus (ii) implies (iii). For (iii) implies (ii), note that the product measure The uniqueness of characteristic Q , x Q , has characteristic function functions then implies that Q = Ql X Q,.
+,+,
Of course, all of the discussion above extends to the case of more than two random vectors. For completeness, we briefly describe the situation. Given a probability space (3,9, Po) and random vectors X, E ( 7 , (., .),), j = I,. . . , k, let Q, be the induced distribution of X, and let be the characteristic function of X,. The random vectors XI,. . . , X, are independent if for all B, E a ( ? ) ,
+,
Po{?
E
B,,j
=
1,..., k )
=
n p O { X , E B,). /= 1
To construct one random vector from XI,. . . , Xk, consider the direct sum V, $ . . . @ Vk with the inner product (., .) = Cf(., .),. In other words, if { v l , .. . , ok) and {w,,. . . , w,) are elements of V , $ . . . $ Vk, then the inner product between these vectors is Ct(v,, w,)~.An argument analogous to that given earlier shows that {XI,.. . , X,) is a random vector in V, (3 . . . V, and the Bore1 a-algebra of V, @ $ Vk is just the product a-algebra %(V,) X . . . X %(V,). If Q denotes the induced distribution of {X,,. . . , Xk), then the independence of XI,. . . , X, is equivalent to the assertion that
for all B,
E
% ( y )j, = 1,. . . , k, and t h s is equivalent to
Of course, when XI,. . . , Xk are independent and f , is an integrable real valued function on 7 , j = 1,. . . , k, then
T h s equality follows from the fact that
and Fubini's Theorem.
80
+
RANDOM VECTORS
Example 2.3. Consider the coordinate space RP with the usual inner product and let Q , be a fixed distribution on RP. Suppose X,,. . . , Xn are independent with each E RP, i = 1,. . . , n , and C ( X , ) = Q,. That is, there is a probability space (52,F, Po), each is a random vector on 52 with values in RP, and for Bore1 sets,
Thus { X , , . . . , X n ) is a random vector in the direct sum R* @ . . . @ RP with n terms in the sum. However, there are a variety of ways to think about the above direct sum. One possibility is to form the coordinate random vector
and simply consider Y as a random vector in RnP with the usual inner product. A disadvantage of t h s representation is that the independence of X,,. . . , Xn becomes slightly camouflaged by the notation. An alternative is to form the random matrix
Thus X has rows Xi', i = 1 , . . . , n, which are independent and each has distribution Q,. The inner product on Cp, is just that inherited from the standard inner products on R n and RP. Therefore X is a ,,, ( . , .)). In the random vector in the inner product space sequel, we ordinarily represent X,, . . . , Xn by the random vector X E Cp,n . The advantages of this representation are far from clear at this point, but the reader should be convinced by the end of this book that such a choice is not unreasonable. The derivation of the given in the next section should mean and covariance of X E provide some evidence that the above representation is useful.
(ep,
eP,
+
PROPOSITION
2.10
2.3. SPECIAL COVARIANCE STRUCTURES In this section, we derive the covariances of some special random vectors. The orthogonally invariant probability distributions on a vector space are shown to have covariances that are a constant times the identity transformation. In addition, the covariance of the random vector given in Example 2.3 is shown to be a Kronecker product. The final example provides an expression for the covariance of an outer product of a random vector with itself. Suppose (V, .)) is an inner product space and recall that B(V) is the group of orthogonal transformations on V to V. ( a ,
Definition 2.7. A random vector X in (V, .)) with distribution Q has an orthogonally invariant distribution if C(X) = C(rX) for all r E Q(V), or equivalently if Q(B) = Q(rB) for all Bore1 sets B and r E B(V). (
0
,
Many properties of orthogonally invariant distributions follow from the following proposition. Proposition 2.10. Let x, E V with llxoll = 1. If C(X) = C(TX) for O(V), then for x E V, /,C(X,X)) = C(IIxII(xO,X)).
rE
Proof. The assertion is that the distribution of the real-valued random variable (x, X) is the same as the distribution of Ilxll(xo, X). Thus knowing the distribution of (x, X) for one particular nonzero x E V gives us the distribution of (x, X) for all x E V. If x = 0, the assertion of the proposition is trivial. For x == 0, choose r E Q(V) such that rx, = x/llxll. This is possible since x, and x/llxll both have norm 1. Thus
where the last equality follows from the assumption that C(X) = C(rX) for all r E O(V) and the fact that r E B(V) implies I?' E Q(V). Proposition 2.11. Let x, E V with llxoll = 1. Suppose the distribution of X is orthogonally invariant. Then: (i) G ( ~ ) Ge'("?X )
=
cP(llxllx0). (ii) If GX exists, then &X = 0. (iii) If Cov(X) exists, then Cov(X) = a21where a 2 = var{(x,, X)), and I is the identity linear transformation.
82
RANDOM VECTORS
Proof. Assertion (i) follows from Proposition 2.10 and
For (ii), let p = &X. Since C(X) = C(rX), p = &X = &I'X = r & X = r p for all r E O(V). The only vector p that satisfies p = r p for all r E O(V) is p = 0. To prove (iii), we must show that a21satisfies the defining equation for Cov(X). But by Proposition 2.10, var{(x, X)} = var{llxll(x,, X)) so Cov(X)
=
=
~lxll~var{x,, X)
=
a 2 ( x ,x )
=
( x , a21x)
a21by Proposition 2.6.
Assertion (i) of Proposition 2.11 shows that the characteristic function cp of an orthogonally invariant distribution satisfies cp(Tx) = cp(x) for all x E V and r E O(V). Any function f defined on V and taking values in some set is called orthogonally invariant iff (x) = f ( r x ) for all r E O(V). A characterization of orthogonal invariant functions is given by the following proposition. Proposition 2.12. A function f defined on (V, .)) is orthogonally invariant iff f ( x ) = f(llxllx,) where x, E V, ',lxoll= 1. ( a ,
Proof. If f ( x ) = f(llxllxo), then f ( r x ) = f(llrxllx0) = f(llxllx0) = f ( x ) so f is orthogonally invariant. Conversely, suppose f is orthogonally invariant and x, E V with llxoll = 1. For x = 0, f(0) = f(llxIIx,) since llxll = 0. If x == 0, let r E O(V) be such that rx, = x/llxll. Then f ( x ) = f(rllxIIx,) = f (llxllxo). If X has an orthogonally invariant distribution in (V, (., .)) and h is a function on R to R, then f(x)
= &h((x, X))
clearly satisfies f ( r x ) = f(x) for r E O(V). Thus f ( x ) = f(llxIIx,) = Gh (\lxII(xo,X)), SO to calculatef (x), one only needs to calculate f (ax,) for a E (0, a).We have more to say about orthogonally invariant distributions in later chapters. A random vector X E V(., .) is called orthogonally invariant about x, if X - x, has an orthogonally invariant distribution. It is not difficult to show, using characteristic functions, that if X is orthogonally invariant about both x, and x,, then x, = x,. Further, if Xis orthogonally invariant
about x₀ and if EX exists, then E(X − x₀) = 0 by Proposition 2.11. Thus x₀ = EX when EX exists. It has been shown that if X has an orthogonally invariant distribution and if Cov(X) exists, then Cov(X) = σ²I for some σ² > 0. Of course, there are distributions other than orthogonally invariant distributions for which the covariance is a constant times the identity. Such distributions arise in the chapter on linear models.

Definition 2.8. If X ∈ (V, (·,·)) and

Cov(X) = σ²I for some σ² > 0,

then X has a weakly spherical distribution.

The justification for the above definition is provided by Proposition 2.13.

Proposition 2.13. Suppose X is a random vector in (V, (·,·)) and Cov(X) exists. The following are equivalent:

(i) Cov(X) = σ²I for some σ² ≥ 0.
(ii) Cov(X) = Cov(ΓX) for all Γ ∈ O(V).

Proof. That (i) implies (ii) follows from Proposition 2.8. To show (ii) implies (i), let Σ = Cov(X). From (ii) and Proposition 2.8, the non-negative definite linear transformation Σ must satisfy Σ = ΓΣΓ′ for all Γ ∈ O(V). Thus for all x ∈ V with ‖x‖ = 1,

(x, Σx) = (x, ΓΣΓ′x) = (Γ′x, ΣΓ′x).

But Γ′x can be any vector in V with length one since Γ′ can be any element of O(V). Thus for all x, y with ‖x‖ = ‖y‖ = 1,

(x, Σx) = (y, Σy).

From the spectral theorem, write Σ = ∑ⱼ λⱼ xⱼ□xⱼ and choose x = xⱼ and y = xₖ. Then we have

λⱼ = (xⱼ, Σxⱼ) = (xₖ, Σxₖ) = λₖ

for all j, k. Setting σ² = λ₁, it follows that Σ = σ²I. That σ² ≥ 0 follows from the positive semidefiniteness of Σ.
Orthogonally invariant distributions are sometimes called spherical distributions. The term weakly spherical results from weakening the assumption that the entire distribution is orthogonally invariant to the assumption that just the covariance structure is orthogonally invariant (condition (ii) of Proposition 2.13). A slight generalization of Proposition 2.13, given in its algebraic context, is needed for use later in this chapter.

Proposition 2.14. Suppose f is a bilinear function on V × V where (V, (·,·)) is an inner product space. If f[Γx₁, Γx₂] = f[x₁, x₂] for all x₁, x₂ ∈ V and Γ ∈ O(V), then f[x₁, x₂] = c(x₁, x₂) where c is some real constant. If A is a linear transformation on V to V that satisfies Γ′AΓ = A for all Γ ∈ O(V), then A = cI for some real c.

Proof. Every bilinear function on V × V has the form (x₁, Ax₂) for some linear transformation A on V to V. The assertion that f[Γx₁, Γx₂] = f[x₁, x₂] is clearly equivalent to the assertion that Γ′AΓ = A for all Γ ∈ O(V). Thus it suffices to verify the assertion concerning the linear transformation A. Suppose Γ′AΓ = A for all Γ ∈ O(V). Then for x₁, x₂ ∈ V,

(x₁, Ax₂) = (x₁, Γ′AΓx₂) = (Γx₁, AΓx₂).

By Proposition 1.20, there exists a Γ such that

Γx₁ = (‖x₁‖/‖x₂‖)x₂ and Γx₂ = (‖x₂‖/‖x₁‖)x₁

when x₁ and x₂ are not zero. Thus for x₁ and x₂ not zero,

(x₁, Ax₂) = (Γx₁, AΓx₂) = (x₂, Ax₁).

However, this relationship clearly holds if either x₁ or x₂ is zero. Thus for all x₁, x₂ ∈ V, (x₁, Ax₂) = (Ax₁, x₂), so A must be self-adjoint. Now, using the spectral theorem, we can argue as in the proof of Proposition 2.13 to conclude that A = cI for some real number c.
Example 2.4. Consider coordinate space Rⁿ with the usual inner product. Let f be a function on [0, ∞) to [0, ∞) so that f(‖x‖²), x ∈ Rⁿ, is a density on Rⁿ. If the coordinate random vector
X ∈ Rⁿ has f(‖x‖²) as its density, then for Γ ∈ Oₙ (the group of n × n orthogonal matrices), the density of ΓX is again f(‖x‖²). This follows since ‖Γx‖ = ‖x‖ and the Jacobian of the linear transformation determined by Γ is equal to one. Hence the distribution determined by the density is Oₙ invariant. One particular choice for f is f(u) = (2π)^{−n/2} e^{−u/2}, and the density for X is then

f(‖x‖²) = ∏ᵢ₌₁ⁿ (2π)^{−1/2} e^{−xᵢ²/2}.

Each of the factors in the above product is a density on R (corresponding to a normal distribution with mean zero and variance one). Therefore, the coordinates of X are independent and each has the same distribution. An example of a distribution on Rⁿ that is weakly spherical, but not spherical, is obtained by taking the coordinates of x′ = (x₁, x₂, …, xₙ) to be independent and identically distributed with a non-normal distribution having finite variance. More generally, if the random variables X₁, …, Xₙ are independent with the same distribution on R and σ² = var(X₁), then the random vector X with coordinates X₁, …, Xₙ is easily shown to satisfy Cov(X) = σ²Iₙ, where Iₙ is the n × n identity matrix.
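As a concrete check of the preceding discussion, the following is a small simulation sketch (Python/NumPy; the sample size and the centered exponential choice are illustrative assumptions, not from the text). It estimates the covariance of (a) a spherical random vector, taken to be standard normal on Rⁿ, and (b) a weakly spherical but non-spherical vector with i.i.d. centered exponential coordinates; both sample covariances are close to σ²Iₙ, while only the first passes an orthogonal-invariance check.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 3, 200_000

# (a) spherical: standard normal on R^n, an orthogonally invariant distribution
X_sph = rng.standard_normal((reps, n))
# (b) weakly spherical only: i.i.d. centered exponential coordinates (mean 0, var 1)
X_wk = rng.exponential(scale=1.0, size=(reps, n)) - 1.0

for name, X in [("spherical", X_sph), ("weakly spherical", X_wk)]:
    # Both sample covariances are close to sigma^2 * I_n (here sigma^2 = 1).
    print(name, np.round(np.cov(X, rowvar=False), 2), sep="\n")

# Orthogonal invariance itself: compare a direction-sensitive statistic for X and Gamma X.
Gamma, _ = np.linalg.qr(rng.standard_normal((n, n)))   # a random orthogonal matrix

def stat(Z):
    return np.mean(np.abs(Z[:, 0]) ** 3)   # changes under rotation unless C(X) = C(Gamma X)

for name, X in [("spherical", X_sph), ("weakly spherical", X_wk)]:
    print(name, round(stat(X), 3), round(stat(X @ Gamma.T), 3))
```

For the spherical sample the two statistics agree up to Monte Carlo error; for the exponential sample they typically do not, even though its covariance is the identity.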
The next topic in this section concerns the covariance between two random vectors. Suppose Xᵢ ∈ (Vᵢ, (·,·)ᵢ) for i = 1, 2, where X₁ and X₂ are defined on the same probability space. Then the random vector {X₁, X₂} takes values in the direct sum V₁ ⊕ V₂. Let [·,·] denote the usual inner product on V₁ ⊕ V₂ inherited from (·,·)ᵢ, i = 1, 2. Assume that Σᵢᵢ = Cov(Xᵢ), i = 1, 2, both exist. Then let

f(x₁, x₂) = cov{(x₁, X₁)₁, (x₂, X₂)₂},  x₁ ∈ V₁, x₂ ∈ V₂,

and note that the Cauchy–Schwarz inequality (Example 1.9) shows that

|f(x₁, x₂)| ≤ (x₁, Σ₁₁x₁)₁^{1/2} (x₂, Σ₂₂x₂)₂^{1/2}.

Further, it is routine to check that f(·,·) is a bilinear function on V₁ × V₂, so there exists a linear transformation Σ₁₂ ∈ L(V₂, V₁) such that

f(x₁, x₂) = (x₁, Σ₁₂x₂)₁,  x₁ ∈ V₁, x₂ ∈ V₂.
The next proposition relates Σ₁₁, Σ₁₂, and Σ₂₂ to the covariance of {X₁, X₂} in the vector space (V₁ ⊕ V₂, [·,·]).

Proposition 2.15. Let Σ = Cov{X₁, X₂}. Define a linear transformation A on V₁ ⊕ V₂ to V₁ ⊕ V₂ by

A{x₁, x₂} = {Σ₁₁x₁ + Σ₁₂x₂, Σ′₁₂x₁ + Σ₂₂x₂},

where Σ′₁₂ is the adjoint of Σ₁₂. Then A = Σ.

Proof. It is routine to check that

[{x₁, x₂}, A{x₃, x₄}] = [A{x₁, x₂}, {x₃, x₄}],

so A is self-adjoint. To show A = Σ, it is sufficient to verify

[{x₁, x₂}, A{x₁, x₂}] = [{x₁, x₂}, Σ{x₁, x₂}]

by Proposition 1.16. However,

[{x₁, x₂}, Σ{x₁, x₂}] = var[{x₁, x₂}, {X₁, X₂}]
  = var{(x₁, X₁)₁ + (x₂, X₂)₂}
  = var(x₁, X₁)₁ + var(x₂, X₂)₂ + 2 cov{(x₁, X₁)₁, (x₂, X₂)₂}
  = (x₁, Σ₁₁x₁)₁ + (x₂, Σ₂₂x₂)₂ + 2(x₁, Σ₁₂x₂)₁
  = (x₁, Σ₁₁x₁)₁ + (x₂, Σ₂₂x₂)₂ + (x₁, Σ₁₂x₂)₁ + (Σ′₁₂x₁, x₂)₂
  = [{x₁, x₂}, {Σ₁₁x₁ + Σ₁₂x₂, Σ′₁₂x₁ + Σ₂₂x₂}]
  = [{x₁, x₂}, A{x₁, x₂}].

It is customary to write the linear transformation A in partitioned form as

A = [ Σ₁₁  Σ₁₂ ]
    [ Σ₂₁  Σ₂₂ ],   where Σ₂₁ = Σ′₁₂.

With this notation,

Cov{X₁, X₂} = [ Σ₁₁  Σ₁₂ ]
              [ Σ₂₁  Σ₂₂ ].
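In coordinates, the partitioned covariance is just the usual block covariance matrix. The short sketch below (Python/NumPy, with made-up parameters) estimates Cov{X₁, X₂} from a joint sample and reads off the blocks Σ₁₁, Σ₁₂ = Σ′₂₁, and Σ₂₂.

```python
import numpy as np

rng = np.random.default_rng(4)
reps, p, q = 100_000, 2, 3                 # dim V1 = p, dim V2 = q

# Build a joint sample {X1, X2} with a known dependence (illustrative values).
X2 = rng.standard_normal((reps, q))
B = np.array([[1.0, 0.0, 0.5],
              [0.2, -1.0, 0.0]])           # X1 depends on X2 through B
X1 = X2 @ B.T + 0.5 * rng.standard_normal((reps, p))

joint = np.hstack([X1, X2])                # a sample from V1 (+) V2
S = np.cov(joint, rowvar=False)
S11, S12 = S[:p, :p], S[:p, p:]
S21, S22 = S[p:, :p], S[p:, p:]
print(np.allclose(S12, S21.T))             # Sigma_21 is the adjoint of Sigma_12
print(S11.round(2), S22.round(2), sep="\n")
```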
Definition 2.9. The random vectors X₁ and X₂ are uncorrelated if Σ₁₂ = 0.

In the above definition, it is assumed that Cov(Xᵢ) exists for i = 1, 2. It is clear that X₁ and X₂ are uncorrelated iff

cov{(x₁, X₁)₁, (x₂, X₂)₂} = 0 for all x₁ ∈ V₁ and x₂ ∈ V₂.

Also, if X₁ and X₂ are uncorrelated in the two given inner products, then they are uncorrelated in all inner products on V₁ and V₂. This follows from the fact that any two inner products are related by a positive definite linear transformation. Given Xᵢ ∈ (Vᵢ, (·,·)ᵢ) for i = 1, 2, suppose

Cov{X₁, X₂} = [ Σ₁₁  Σ₁₂ ]
              [ Σ₂₁  Σ₂₂ ].

We want to show that there is a linear transformation B ∈ L(V₂, V₁) such that X₁ − BX₂ and X₂ are uncorrelated random vectors. However, before this can be established, some preliminary technical results are needed. Consider an inner product space (V, (·,·)) and suppose A ∈ L(V, V) is self-adjoint of rank k. Then, by the spectral theorem, A = ∑₁ᵏ λᵢ xᵢ□xᵢ where λᵢ ≠ 0, i = 1, …, k, and {x₁, …, xₖ} is an orthonormal set that is a basis for the range R(A). The linear transformation

A⁻ = ∑₁ᵏ λᵢ⁻¹ xᵢ□xᵢ

is called the generalized inverse of A. If A is nonsingular, then it is clear that A⁻ is the inverse of A. Also, A⁻ is self-adjoint and AA⁻ = A⁻A = ∑₁ᵏ xᵢ□xᵢ, which is just the orthogonal projection onto R(A). A routine computation shows that A⁻AA⁻ = A⁻ and AA⁻A = A. In the notation established previously (see Proposition 2.15), suppose {X₁, X₂} ∈ V₁ ⊕ V₂ has a covariance

Σ = [ Σ₁₁  Σ₁₂ ]
    [ Σ₂₁  Σ₂₂ ].
Proposition 2.16. For the covariance above, the null space N(Σ₂₂) is contained in N(Σ₁₂), and Σ₁₂ = Σ₁₂Σ₂₂⁻Σ₂₂.
Proof. For x₂ ∈ N(Σ₂₂), it must be shown that Σ₁₂x₂ = 0. Consider x₁ ∈ V₁ and α ∈ R. Then Σ₂₂(αx₂) = 0 and, since Σ is positive semidefinite,

0 ≤ [{x₁, αx₂}, Σ{x₁, αx₂}] = (x₁, Σ₁₁x₁)₁ + 2α(x₁, Σ₁₂x₂)₁ + α²(x₂, Σ₂₂x₂)₂ = (x₁, Σ₁₁x₁)₁ + 2α(x₁, Σ₁₂x₂)₁.

As this inequality holds for all α ∈ R, (x₁, Σ₁₂x₂)₁ = 0 for each x₁ ∈ V₁. Hence Σ₁₂x₂ = 0 and the first claim is proved. To verify that Σ₁₂ = Σ₁₂Σ₂₂⁻Σ₂₂, it suffices to establish the identity Σ₁₂(I − Σ₂₂⁻Σ₂₂) = 0. However, I − Σ₂₂⁻Σ₂₂ is the orthogonal projection onto N(Σ₂₂). Since N(Σ₂₂) ⊆ N(Σ₁₂), it follows that Σ₁₂(I − Σ₂₂⁻Σ₂₂) = 0.

We are now in a position to show that X₁ − Σ₁₂Σ₂₂⁻X₂ and X₂ are uncorrelated.
@
V2 has a covariance
Then XI - Z12Z, X2 and X2 are uncorrelated, and Cov( XI - Z ,,Z; X2) = Z,, - ZI2Z;Z2, where Z2, = Xi2. Proof. For xi E T/;, i
=
1,2, it must be verified that
This calculation goes as follows:
PROPOSITION2.18
89
The last equality follows from Proposition 2.15 since Z12= Z12Z,Z22. To verify the second assertion, we need to establish the identity var(x1, XI - ZI2Z,X2),
=
(XI, (211 - Zl2~,Z2l)xl)l.
But
In the above, the identity Z;Z2,Z;
=
2, has been used.
We now return to the situation considered in Example 2.4. Consider independent coordinate random vectors XI,. . . , X, with each Xi E RP, and suppose that G 4 = p E RP, and Cov(4) = Z for i = 1,. . . , n. Form the random matrix X E ep, with rows Xi,. . . , XA. Our purpose is to describe the mean vector and covariance of X in terms of Z and p. The inner product on Cp, ., ( , .) is that inherited from the standard inner products on the coordinate spaces RP and Rn. Recall that, for matrices A, B E ,,
ep,
(A, B)
=
tr AB'
=
tr B'A = tr A'B
=
tr BA'.
Let e denote the vector in Rn whose coordinates are all equal to 1. Proposition 2.18.
In the above notation,
(i) & X = e p f . (ii) Cov( X) = In8 2 . Here Inis the n x n identity matrix and @ denotes the Kronecker product. Proof. The matrix ep' has each row equal to p' and, since each row of X has mean p', the first assertion is fairly obvious. To verify (i) formally, it must be shown that, for A E ,,
eP,
&(A, X)
=
(A, ep').
90
RANDOM VECTORS
Let a;, . . . , a:, a, &(A, X)
=
E
RP, be the rows of A. Then
& tr AX'
=
&C;aiX, = C;ai&X, = C;aip
=
tr Ape'
=
(A, epl).
Thus (i) holds. To verify (ii) it suffices to establish the identity var(A, X) for A
E
=
(A, ( I 8 Z)A)
Cp, ,. In the notation above,
The third equality follows from var(ajX) = aiZa, and, for i a; X, are uncorrelated.
*j, aj4
and
The assumption of the independence of XI,. . . , X, was not used to its full extent in the proof of Proposition 2.18. In fact the above proof shows that, if X,,. . . , X, are random variables in RP with &Xi = p, i = 1,. . . , n, then EX = ep'. Further, if XI,. . . , X, in RP are uncorrelated with Cov(X,) = Z, i = 1,. . . , n, then Cov(X) = I, 8 2 . One application of t h s formula for Cov(X) describes how Cov(X) transforms under Kronecker products. For example, if A E C,,, and B E gp,,, then (A 8 B)X = AXB' is a random vector in C,, ,. Proposition 2.8 shows that
In particular, if Cov(X) = I, 8 2 , then
Since A 8 B = (A 8 Ip)(In 8 B), the interpretation of the above covariance formula reduces to an interpretation for A 8 I, and I, 8 B. First, (I, 8 B)X is a random matrix with rows X,'Bf = (BT)', i = 1,. . . , n. If Cov(&) = Z, then Cov(B4) = BZB'. Thus it is clear from Proposition 2.18 that Cov((I, x B)X) = I, 8 (BZB'). Second, (A 8 Ip) applied to Xis the same as applying the linear transformation A to each column of X. When Cov(X) = I, 8 Z, the rows of X are uncorrelated and, if A is an n x n orthogonal matrix, then
PROPOSITION
91
2.19
Thus the absence of correlation between the rows is preserved by an orthogonal transformation of the columns of X. A converse to the observation that Cov((A S I,)X) = I, S Z for all A E O ( n ) is valid for random linear transformations. To be more precise, we have the following proposition. Proposition 2.19. Suppose ( v , (., .)i),i = 1,2, are inner product spaces ( , .)). The following are equivaand X is a random vector in (C(V,, lent:
K),
(i) Cov( X) = I, S 2. (ii) Cov((r S I,)X) = Cov(X) for all I'
E
O(V,).
Here, I, is identity linear transformation on v , i = 1,2, and Z is a non-negative definite linear transformation on V, to V,. Proof: Let \k = Cov(X) so \k is a positive semidefinite linear transformation on C(V,, V,) to C(V,, V,) and \k is characterized by the equation
cov{(A, X ) , (B, X)) for all A, B
E
=
(A, \kB)
C(Vl, V,). If (i) holds, then we have
=
I, S 2
=
Cov( X),
so (ii) holds. Now, assume (ii) holds. Since outer products form a basis for C(Vl, V,), it is sufficient to show there exists a positive semidefinite Z on V, to V, such that, for x,, x, E V, and y,, y, E V,,
Define H by
for x,, x,
E
V, and y,, y,
E
V,. From assumption (ii), we know that \k
92 satisfies 9 = (r €3 I,)\k(r
RANDOM VECTORS
€3
I,)' for all
r E O(V2). Thus
for all r E Q(V2).It is clear that H is a linear function of each of its four arguments when the other three are held fixed. Therefore, for x, and x, fixed, G is a bilinear function on V2 x V2 and this bilinear function satisfies the assumption of Proposition 2.14. Thus there is a constant, whch depends on x, and x,, say c[x,, x2], and
However, for y, = y2 * 0, H, as a function of x, and x,, is bilinear and non-negative definite on V, x V,. In other words, c[x,, x,] is a non-negative definite bilinear function on V, X Vl, so
for some non-negative definite 2 . Thus
The next topic of consideration in the section concerns the calculation of means and covariances for outer products of random vectors. These results are used throughout the sequel to simplify proofs and provide convenient formulas. Suppose is a random vector in (., .),) for i = 1,2 and let pi = EX,, and Zii = Cov(X,) for i = 1,2. Thus {XI; X,) takes values in Vl @ V, and
(v,
where I,,is characterized by
PROPOSITION
93
2.20
for xi E y , i = 1,2. Of course, Cov{X,, X,) is expressed relative to the natural inner product on V, @ V2 inherited from (V,, (., .),) and (V,, (. , .),). Proposition 2.20. For Xi E ( y ,( . , . )), i
=
1,2, as above,
Proof: The random vector XI X, takes values in the inner product space (c(V,, V,), ( . , .)). To verify the above formula, it must be shown that
for A E C(V2,V,). However, it is sufficient to verify this equation for A = x, x, since both sides of the equation are linear in A and every A is a linear combination of elements in C(V,, V,) of the form x, x,, xi E y , i = 1,2. For x, x, E C(V2,V,),
A couple of interesting applications of Proposition 2.20 are given in the following proposition. Proposition 2.21. For XI, X, in (V, ( ., .)), let pi = &Xi,Xii = Cov(X,) for i = 1,2. Also, let XI, be the unique linear transformation satisfying
for all x,, x,
E
V. Then:
(i) GX, XI = X I , + p, p,. (ii) &(XI, X,) = (I, XI,) + (PI, ~ 2 ) . (iii) &(XI,XI) = (I, XI,) + (P,, pl). Here I E C(V, V) is the identity linear transformation and ( . , .) is the inner product on C(V, V) inherited from (V, (. , .)).
94
RANDOM VECTORS
Proof. For(i), takeX, = X2and(Vl,(.,.),)=(V2,(.,.),)=(V,(.,.))in Proposition 2.20. To verify (ii), first note that
by the previous proposition. Thus for I E C(V, V), & ( I , XI
X2) = ( I , El,) + ( I , P I
P2).
However, (I, XI X2) = (X,, X2) and (I, p1 p2) = ( p l , p2) SO (ii) holds. Assertion (iii) follows from (ii) by taking XI = X2. One application of the preceding result concerns the affine prediction of one random vector by another random vector. By an affine function on a vector space V to W, we mean a function f given by f ( v ) = Av + w, where A E C(V, W) and w, is a fixed vector in W. The term linear transformation is reserved for those affine functions that map zero into zero. In the notation of Proposition 2.21, consider X, E ( - , .), for i = 1,2, let pi = &Xi, i = 1,2, and suppose
(v,
exists. An affine predictor of X2 based on X, is any function of the form AX, + x, where A E C(V,, V2) and x, is a fixed vector in V2. If we assume that p,, p2, and 2 are known, then A and x, are allowed to depend on these known quantities. The statistical interpretation is that we observe X,, but not X2, and X2 is to be predicted by AX, + x,. One intuitively reasonable criterion for selecting A and x, is to ask that the choice of A and x, minimize &IlX2 -
0x1 + x0)ll;.
Here, the expectation is over the joint distribution of XI and X2 and 11 . 11, is the norm in the vector space (V2,( , a),). The quantity &11 X2 - (AX, x,)ll; is the average distance of X2 - (AX, + x,) from 0. Since AX, + x, is supposed to predict X2, it is reasonable that A and x, be chosen to minimize this average distance. A solution to this minimization problem is given in Proposition 2.22.
-
+
Proposition 2.22. For X, and X2 as above,
with equality for A
=
2;,Z,
and x,
=
p2 - 2 i 2 2 i p l .
PROPOSITION
95
2.22
Proof. The proof is a calculation. It essentially consists of completing the square and applying (ii) of Proposition 2.21. Let I: = X, - pi for i = 1,2. Then
The last equality holds since &(Y2 - AY,) = 0. Thus for each A
with equality for x, Then
=
E
C(V,, V2),
p2 - Ap,. For notational convenience let Z2, = Xi2.
The last equality holds since &(Y2 - Z2,2,Y,) = 0 and Y2 - 2,,Z,Y1 is uncorrelated with Yl (Proposition 2.17) and hence is uncorrelated with (Z2,Z, - A)Yl. By (ii) of Proposition 2.21, we see that &(Y2 Z2,2,Y1, (Z2,Z, - A)Y1), = 0. Therefore, for each A 6 C(Vl, V2),
with equality for A = Z2,2,. However, Cov(Y2 - Z2,2, Y,) = Z2, Z2,2;2,, and &(Y2 - Z2,2; Y,) = 0 so (iii) of Proposition 2.21 shows that GIIY2 - ~ 2 1 ~ i Y I l = I ; (129
2 2 2 - 221211212).
Therefore,
611x2 - (AX1 + xo)lli 2 (I2,222 with equality for A
=
- 221211212)
2,,2, and x, = p 2 - Z2,Z,p,.
%
RANDOM VECTORS
The last topic in this section concerns the covariance of XU X when X is a random vector in (V, (. , .)). The random vector XU Xis an element of the vector space (C(V, V), ( . , -)). However, XU X is a self-adjoint linear transformation so XO Xis also a random vector in (M,, ( . , .)) where M, is the linear subspace of self-adjoint transformations in C(V, V). In what Thus the follows, we regard XU X as a random vector in (M,, ( , covariance of XU Xis a positive semidefinite linear transformation on (M,, ( . , . )). In general, this covariance is quite complicated and we make some simplifying assumptions concerning the distribution of X. a)).
Proposition 2.23. Suppose X has an orthogonally invariant distribution in (V, (., .)) where &1(X1l4< + co. Let u, and v, be fixed vectors in V with llvill = 1, i = 1,2, and (v,, v,) = 0. Set c, = var{(v,, X),) and c, = cov{(u,, X)2, (u,, x ) ~ ) .Then Cov(X0 X)
=
(c, - c,) I 8 I + c2T,,
where TI is the linear transformation on M, given by T,(A) = (I, A)I. In other words, for A, B E M,, cov((A, XO X), (B, XU X))
=
(A, ((c, - c , ) ~8 I + C,T,)B)
=
( C I- c2)(A, B) + c2(I, A)(I, B).
Proof: Since (c, - c,)I 8 1 + c2T, is self-adjoint on (M,, ( - , .)), Proposition 2.6 shows that it suffices to verify the equation var(A, XU X)
=
(c, - c,)(A, A)
+ c2(I, A),
for A E M, in order to prove that Cov(X0 X) First note that, for x
E
=
(c, - c 2 ) I @ I + c,T,.
V,
This last equality follows from Proposition 2.10 as the distribution of X is
PROPOSITION
2.24 =
0,
Again, the last equality follows since C ( X ) = C ( * X ) for \k
E
orthogonally invariant. Also, for x , , x ,
and
E
V with ( x , , x , )
B ( V ) so
can be chosen so that
For A E M,, apply the spectral theorem and write A x , , . . . , x , is an orthonormal basis for (V, (., .)). Then
=
( c , - c 2 ) ( A ,A )
+ c,(I, A)2.
=
C;aix,O xi where
q
When X has an orthogonally invariant normal distribution, then the constant c2 = 0 so Cov(X0 X ) = c , I @ I. The following result provides a slight generalization of Proposition 2.23. Proposition 2.24. Let X, v , , and v , be as in Proposition 2.23. For C E C(V,V), let Z = CC' and suppose Y is a random vector in (V, (., .)) with
98
RANDOM VECTORS
c(Y) = c(CX). Then
where T,(A)
=
Cov(Y0 Y)
=
(c, - c,)Z €3 2
(A, 2 ) Z for A
E
Ms.
+ c,T2
Proof. We apply Proposition 2.8 and the calculational rules for Kronecker products. Since (CX) q (CX) = (C €3 C)(XO X),
Cov(Y0 Y)
Cov((C €3 C)( xu X))
=
cov((cx0c x ) )
=
( C €3 C)Cov(X0 X)(C
=
( C €3 c)((c, - c,)I
=
(c, - c * ) ( c €3 C ) ( I €3 I ) ( C ' 8 C') +c,(C
=
€3
=
C)T,(C'
(c, - c 2 ) 2 €3 2
€3
€3
C)'
I + c,T,)(c'
€3
C')
C')
+ c,(C €3
It remains to show that (C €3 C)Tl(C' €3 C')
=
€3
C)T,(Cf €3 C').
T,. For A
E
M,,
((c 8 c ) I , A)(C €3 C ) ( I )
=
(CC', A)CC'
=
PROBLEMS 1.
If x,,. . . , x , is a basis for (V,(., .)) and if (xi, X) has finite expectation for i = 1,. . . , n, show that (x, X) has finite expectation for all x E V. Also, show that if (xi, X)' has finite expectation for i = 1,. . . , n, then Cov(X) exists.
2. Verify the claim that if X,(X,) with values in Vl(V2) are uncorrelated for one pair of inner products on V, and V,, then they are uncorrelated no matter what the inner products are on V, and V,.
3. Suppose Xi E y , i = 1,2 are uncorrelated. Iff, is a linear function on y , i = 1,2, show that
Conversely, if (2.2) holds for all linear functions f , and f,, then XI and X, are uncorrrelated (assuming the relevant expectations exist).
PROBLEMS
4.
For X
E
Rn, partition X as
with x E R r and suppose X has an orthogonally invariant distribution. Show that x has an orthogonally invariant distribution on Rr. Argue that the conditional distribution of x given x has an orthogonally invariant distribution. 5.
Suppose XI,. . . , Xk in (V, (. , -)) are pairwise uncorrelated. Prove that Cov(C:xi) = Cov( Xi).
c:
6. In R ~let, el,. . . , ek denote the standard basis vectors. Define a random vector U in Rk by specifying that U takes on the value ei with probability pi where 0 G pi G 1 and Cfpi = 1. (U represents one of k mutually exclusive and exhaustive events that can occur). Let p E Rk have coordinates p i , . . . , p,. Show that GU = p, Cov(U) = Dp - pp' where Dp is a diagonal matrix with diagonal entries p,, . . . , pk. When 0 < p i < 1, show that Cov(U) has rank k - 1 and identify the null space of Cov(U). Now, let XI,. . . , Xn be i.i.d. each with the distribution of U. The random vector Y = CYX, has a multinomial distribution (prove ths) with parameters k (the number of cells), the vector of probabilities p, and the number of trials n. Show that GY = np, Cov(Y) = n(D, - pp').
7. Fix a vector x in Rn and let n- denote a permutation of 1,2,. . . , n (there are n! such permutations). Define the permuted vector r x to be the vector whose ith coordinate is x(vF'(i)) where x( j ) denotes the jth coordinate of x. (This choice is justified in Chapter 7.) Let X be a random vector such that Pr{X = n-x) = l/n! for each possible permutation n-. Find GX and Cov(X). 8. Consider a random vector X E Rn and suppose C(X) = C(DX) for each diagonal matrix D with diagonal elements dii = 1, i = 1,. . . , n. If &llX112< oo,show that GX = 0 and Cov(X) is a diagonal matrix (the coordinates of X are uncorrelated).
+
9. Given X E (V,( - , .)) with Cov(X) = 2 , let A, be a linear transformation on ( V , ( . , to (K, .Ii), i = 1,2. Form Y = {AIX,A2X) with values in the direct sum W, $ W2. Show a))
[ a ,
in W, $ W2 with its usual inner product.
~00
RANDOM VECTORS
10. For X in (V, -,.)) with p = GX and Z = Cov(X), show that &(X, AX) = (A, 2 ) + (p, Ap) for any A E C(V, V). 11. In (Cp,,, ( . , . )), suppose the n X p random matrix X has the covariance I, @ Z for some p X p positive semidefinite 2 . Show that the rows of X are uncorrelated. If p = GX and A is an n x n matrix, show that GX'AX = (tr A)X + p'Ap.
12. The usual inner product on the space of p X p symmetric matrices, denoted by Sp, is ( . , .), given by (A, B) = trAB'. (This is the natural inner product inherited from (Cp,,, ( . , . )) by regarding Sp as a subspace of C,,,.) Let S be a random matrix with values in Sp and suppose that C(rSr') = C(S) for all r E flp. (For example, if X E RP has an orthogonally invariant distribution and S = XX', then C(rSr') = C(S).) Show that GS = cIp where c is constant. 13. Given a random vector X in (C(V, W), ( . , . )), suppose that C(X) = C((r @ +)X) for all r E O(W) a n d + E O(V). (i) If X has a covariance, show GX = 0 and Cov(X) = cI, 8 I, where c >, 0. (ii) If Y E C(V, W) has a density (with respect to Lebesgue measure) given by f(Y) = P((Y, Y)), Y C(V,W), show that C(Y) = C((T 8 +)Y) for r E Q(W) and E Q(V).
+
14.
Let X,, . . . , Xn be uncorrelated random vectors in RP with Cov(Xi) = Z, i = 1,. . . , n. Form the n x p random matrix X with rows Xi,. . . , XA and values in (C,, ., ( . , .)). Thus Cov(X) = In@ 2 . (i) Form k in the coordinate space RnP with the coordinate inner product where
In the space RnP show that
where each block is p x p.
PROBLEMS
(ii) Now, form 2 in the space RnP where
and Zj has coordinates X,,,. . . , Xnjfor i
where each block is n x n, Z
=
=
1,. . . , p. Show that
{a,,}.
15. The unit sphere in Rn is the set {xlx € Rn,llxll = 1}= %. A random vector X with values in % has a uniform distribution on % if C(X) = C(I'X) for all I' E 8,. (There is one and only one uniform distribution on %-this is discussed in detail in Chapters 6 and 7.) (i) Show that & X = 0 and Cov(X) = (l/n)I,. (ii) Let XI be the first coordinate of X and let x E Rn-' be the remaining n - 1 coordinates. What is the best affine predictor of Xl based on X ? How would you predict XI on the basis of X ? 16.
Show that the linear transformation T2 in Proposition 2.24 is Z O Z where denotes the outer product of the vector space ( M , , ( . , .)). Here, ( . , - ) is the natural inner product on C(V, V).
17. Suppose X E R2 has coordinates X, and X, that are independent with a standard normal distribution. Let S = XX' and denote the elements of S by s,,, s,,, and s12= s,,. (i) What is the covariance matrix of
(ii) Regard S as a random vector in (S,, ( . , .)) (see Problem 12). What is Cov(S) in the space (S,, ( . , .))? (iii) How do you reconcile your answers to (i) and (ii)?
102
RANDOM VECTORS
NOTES AND REFERENCES
1
In the first two sections of this chapter, we have simply translated well known coordinate space results into their inner product space versions. The coordinate space results can be found in Billingsley (1979). The inner product space versions were used by Kruskal (1961) in his work on missing and extra values in analysis of variance problems.
2. In the third section, topics with multivariate flavor emerge. The reader may find it helpful to formulate coordinate versions of each proposition. If nothing else, this exercise will soon explain my acquired preference for vector space, as opposed to coordinate, methods and notation.
3. Proposition 2.14 is a special case of Schur's Lemma-a basic result in group representation theory. The book by Serre (1977) is an excellent place to begin a study of group representations.
CHAPTER 3
The Normal Distribution on a Vector Space
The univariate normal distribution occupies a central position in the statistical theory of analyzing random samples consisting of one-dimensional observations. This situation is even more pronounced in multivariate analysis due to the paucity of analytically tractable multivariate distributionsone notable exception being the multivariate normal distribution. Ordinarily, the nonsingular multivariate normal distribution is defined on R n by specifying the density function of the distribution with respect to Lebesgue measure. For our purposes, this procedure poses some problems. First, it is desirable to have a definition that does not require the covariance to be nonsingular. In addition, we have not, as yet, constructed what will be called Lebesgue measure on a finite dimensional inner product space. The definition of the multivariate normal distribution we have chosen circumvents the above technical difficulties by specifying the distribution of each linear function of the random vector. Of course, t h s necessitates a proof that such normal distributions exist. After defining the normal distribution in a finite dimensional vector space and establishing some basic properties of the normal distribution, we derive the distribution of a quadratic form in a normal random vector. Conditions for the independence of two quadratic forms are then presented followed by a discussion of conditional distributions for normal random vectors. The chapter ends with a derivation of Lebesgue measure on a finite dimensional vector space and of the density function of a nonsingular normal distribution on a vector space.
104
THE NORMAL DISTRIBUTION ON A VECTOR SPACE
3.1. THE NORMAL DISTRIBUTION Recall that a random variable Zo E R has a normal distribution with mean zero and variance one if the density function of Zo is
with respect to Lebesgue measure. We write C(Zo) = N(0,l) when Zo has density p. More generally, a random variable Z E R has a normal distribution with mean p E R and variance u2 > 0 if C(Z) = C(uZo + p) where C(Zo) = N(0,l). In this case, we write C(Z) = N(p, u2). When u2 = 0, the distribution N(p, u 2 ) is to be interpreted as the distribution degenerate at p. If C(Z) = N(p, a2), then the characteristic function of Z is easily shown to be
The phrase "Z has a normal distribution" means that for some p and some a > 0, C(Z) = N(p, u2). If Z,, . . . , Zk are independent with C(Zj) = N(pj, a;), then C(ZajZ,) = N(Z9pj, Z$a;). To see this, consider the characteristic function
Thus the characteristic function of Za,ZJ is that of a normal distribution with mean Za,p, and variance 2 4 ~ ; . In summary, linear combinations of independent normal random variables are normal. We are now in a position to define the normal distribution on a finite dimensional inner product space (V, (., .)).
Definition 3.1. A random vector X E V has a normal distribution if, for each x E V, the random variable (x, X ) has a normal distribution on R. To show that a normal distribution exists on (V, ( - , .)), let {x,, . . . , x , ) be an orthonormal basis for (V, (., .)). Also, let Z,,. . . , Z, be independent
105
PROPOSITION 3.1
N(0, 1) random variables. Then X = ZZixi is a random vector and (x, X) = Z(x, x,)Z,, which is a linear combination of independent normals. Thus (x, X) has a normal distribution for each x E V. Since &(x, X) = Z(x,, x)GZ, = 0, the mean vector of X i s 0 E V. Also, var(x, X)
=
2
var(Z(x, x , ) ~ , = ) Z ( x , x,) var(Z,) = Z ( x ,
=
(x, x).
Therefore, Cov(X) = I E C(V, V). The particular normal distribution we have constructed on (V, (., .)) has mean zero and covariance equal to the identity linear transformation. Now, we want to describe all the normal distributions on (V, (., .)). The first result in thls direction shows that linear transformations of normal random vectors are again normal random vectors. Proposition 3.1. Suppose X has a normal distribution on (V, (., .)) and let A E C(V, W), W, E W. Then AX + w, has a normal distribution on ( W , [ . ,-1). Proof. It must be shown that, for each w E W, [w, AX + w,] has a normal distribution on R. But [w, AX + w,,] = [w, AX] + [w, w,] = (A'w, X) + [w, w,]. By assumption, (A'w, X) is normal. Since [w, w,] is a constant, ( A'w, X) + [w, w,] is normal. If X has a normal distribution on (V, (., .)) with mean zero and covariance I, consider A E C(V, V) and p E V. Then AX + p has a normal distribution on (V, (., .)) and we know &(AX p) = A(&X) + p = p and Cov(AX + p) = A Cov(X) A' = AA'. However, every positive semidefinite linear transformation Z can be expressed as AA' (take A to be the positive semidefinite square root of 2). Thus given p E V and a positive sernidefinite Z, there is a random vector that has a normal distribution in V with mean vector p and covariance Z. If X has such a distribution, we write C(X) = N(p, 2). To show that all the normal distributions on V have been described, suppose X E V has a normal distribution. Since (x, X) is normal on R, var(x, X) exists for each x E V. Thus p = EX and Z = Cov(X) both exist and C(X) = N(p, 2). Also, C((x, X)) = N((x, p), (x, Ex)) for x E V. Hence the characteristic function of (x, X) is
+
Setting t
=
1, we obtain the characteristic function of X:
Summarizing this discussion yields the following.
106
THE NORMAL DISTRIBUTION ON A VECTOR SPACE
Proposition 3.2. Given p E V and a positive semidefinite Z E C(V, V), there exists a random vector X E V with distribution N ( p , 2 ) and characteristic function
Conversely, if X has a normal distribution on V, then with p = GX and Z = Cov(X), C(X) = N(p, 2 ) and the characteristic function of X i s given by t.
(v,
Consider random vectors Xi with values in (., -),) for i = 1,2. Then {XI, X,) is a random vector in the direct sum Vl @ V,. The inner product on V , @ V2 is [ . , . ] where
u,, 0, E V, and v,, u, E V,. If Cov(X,) = Zii, i &{XI,X,) = {p,, p2) where pi = GX,, i = 1,2. Also,
as defined in Chapter 2 and Z,,
=
1,2, exists, then
= Z;,.
Proposition 3.3. If {XI, X,) has a normal distribution on V, and X, are independent iff Z,, = 0.
Proof. If XI and X, are independent, then clearly Z,, Z,, = 0, the characteristic function of {XI, X,) is
=
@
V,, then XI
0. Conversely, if
PROPOSITION
3.4
107
since Z , , = Z ; , = 0. However, for v , E V,, ( v , , X I ) , = [ ( v l , O ) , ( X l ,X,)], which has a normal distribution for all u , E V,. Thus C ( X I )= N ( p , , 8 ,) on V , and similarly C ( X , ) = N(p,, Z,) on V,. The characteristic function of ( X I ,X,) is just the product of the characteristic functions of X I and X,. Thus independence follows and the proof is complete. The result of Proposition 3.3 is often paraphrased as "for normal random vectors, X I and X, are independent iff they are uncorrelated." A useful consequence of Proposition 3.3 is shown in Proposition 3.4. Proposition 3.4. Suppose C ( X ) = N ( p , Z ) on ( V , ( - , .)), and consider A E C ( V , W , ) , B E C ( V , W,) where ( W , , [ . , .I,) and (W,,[., .I,) are inner product spaces. A X and B X are independent iff AZB' = 0.
Prooc We apply the previous proposition to X I = A X and X, ( X I , X,) has a normal distribution on W, @ W , follows from
and the normality of ( x , X ) for all x
E
=
BX. That
V . However,
=
(A'w,, ZB'w,)
=
[ w , , AZB'w,],.
Thus X I = A X and X, = B X are uncorrelated iff A 2 B' = 0. Since ( X I ,X,) has a normal distribution, the condition AZB' = 0 is equivalent to the independence of X I and X,. One special case of Proposition 3.4 is worthy of mention. If C ( X ) = N ( p , I ) on ( V , -)) and P is an orthogonal projection in C ( V , V ) , then P X and ( I - P ) X are independent since P ( I - P ) = 0. Also, it should be mentioned that the result of Proposition 3.3 extends to the case of k random vectors-that is, if ( X I ,X,, . . . , X,) has a normal distribution on the direct sum space V , @ V, @ . . @ V k , then X I , X,, . . . , X, are independent iff Xi and X, are uncorrelated for all i *j. The proof of this is essentially the same as that given for the case of k = 2 and is left to the reader. ( a ,
108
THE NORMAL DISTRIBUTION ON A VECTOR SPACE
A particularly useful result for the multivariate normal distribution is the following. Proposition 3.5. Suppose C(X) = N(p, Z) on the n-dimensional vector space (V, -)). Write Z = C;A,x,n xi in spectral form, and let X, = (xi, X), i = 1,. . . , n. Then XI,. . . , X, are independent random variables that have a normal distribution on R with &X,= (x,, p) and var(X,) = Xi, i = 1,. . . , n. In particular, if Z = I, then for any orthonormal basis {x,,. . . , x,) for V, the random variables = (xi, X) are independent and normal with FX, = (xi, p) and var(&) = 1. ( a ,
Proof. For any scalars a,,. . . , a, in R, C;ai& = C;ai(xi, X) = (C;alxi, X), which has a normal distribution. Thus the random vector 2 E R nwith coordinates XI,. . . , X, has a normal distribution in the coordinate vector space Rn.Thus XI,.. . , X, are independent iff they are uncorrelated. However,
Thus independence follows. It is clear that each is normal with G X , = ( x l , p ) and var(X,) = A,, i = 1,..., n. When Z = I, then C;X,OX,= I for any orthonormal basis x,, . . . , x,. This completes the proof. q The following is a technical discussion having to do with representations of the normal distribution that are useful when establishing properties of the normal distribution. It seems preferable to dispose of the issues here rather than repeat the same argument in a variety of contexts later. Suppose X E (V, (., .)) has a normal distribution, say C(X) = N(p, Z), and let Q be the probability distribution of X on (V, (., -)). If we are interested in the distribution of some function of X, say f ( X ) E (W, [., then the underlying space on which X is defined is irrelevant since the distribution Q determines the distribution of f(X)-that is, if B E %(W), then ,)]a
Therefore, if Y is another random vector in (V,(., .)) with C(X) = C(Y), then f (X) and f(Y) have the same distribution. At times, it is convenient to represent C(X) by C(CZ + p) where C(Z) = N(0, I) and CC' = Z. Thus
109
QUADRATIC FORMS
+
C(X) = C(CZ + p) SO f ( X ) and f(CZ p) have the same distribution. A slightly more subtle point arises when we discuss the independence of two functions of X, say f,(X) and f2(X), taking values in ( W,, [ . , -1,) and ( W2, [ - , . ] ,). To show that independence of f,( X) and f2(X) depends only on Q, consider B, E $(K)for i = 1,2. Then independence is equivalent to
But both of these probabilities can be calculated from Q:
and
Again, if C(Y) = C(X), then f,(X) and f2(X) are independent iff f,(Y) and f2(Y) are independent. More generally, if we are trying to prove something about the random vector X, C(X) = N(p, Z), and if what we are trying to prove depends only on the distribution Q, of X, then we can represent X by any other random vector Y as long as C(Y) = C(X). In particular, we can take Y = CZ + p where C(Z) = N(0, I ) and CC' = 2 . This representation of X i s often used in what follows.
3.2. QUADRATIC FORMS The problem in this section is to derive, or at least describe, the distribution of (X, AX) where X E (V, .)), A is self-adjoint in C(V, V) and C(X) = N(p, 2). First, consider the special case of Z = I, and by the spectral theorem, write A = C;X,x,Ox,. Thus ( a ,
(X, AX)
=
(x, (C;X,xi~x,)X)
=
C;hi(xi, x ) ~
But X, = (xi, X), i = 1,. . . , n , are independent since Z = I (Proposition 3.5) and C(X,) = N((x,, p), 1). Thus our first task is to derive the distribution of Xf when C(X,) = N((x,, p), 1). Recall that a random variable Z has a chi-square distribution with rn degrees of freedom, written C(Z) = if Z has a density on (0, oo) given
Xi,
THE NORMAL DISTRIBUTION ON A VECTOR SPACE
Here m is a positive integer and r(-)is the gamma function. The characteristic function of a Xi random variable is easily shown to be
Thus, if C(Z,) =
C(Zz) = X i , and Z, and Z2 are independent, then
+
Therefore, C(Z, Z,) = xi+,. This argument clearly extends to more than two factors. In particular, if C(Z) = X i , then, for independent random variables Z,, . . . , Zm with C(Z,) = x:, C(C;"Z,) = C(Z). It is not difficult to show that if C(X) = N(0,l) on R, then C(X2) = x:. However, if C(X) = N(a, 1) on R, the distribution of X2 is a bit harder to derive. To t h s end, we make the following definition.
1,2,.. . , be the density of a
Xirandom
Definition 3.2. 'Let pm, m variable and, for A 2 0, let
=
For A
0 for j > 0. A random variable with density
=
0, q,
=
1 and q,
=
is said to have a noncentral chi-square distribution with m degrees of freedom and noncentrality parameter A. 1f-z has such a distribution, we write C(Z>= xi(A>. When A = 0, it is clear that C(xi(0)) = The weights q,, j = 0,1,. . . , are Poisson probabilities with parameter A/2 (the reason for the 2 becomes clear in a bit). The characteristic function of a xi(A) random variable is
calculated as follows:
From this expression for the characteristic function, it follows that if C(Zi) = x i , ( h , ) , i = 1,2, with Z1and Z2 independent, then C(Z, Z,) = Xi,+m,(A,+ h2). This result clearly extends to the sum of k independent noncentral chl-square variables. The reason for introducing the noncentral chi-square distribution is provided in the next result.
+
Proposition 3.6. Suppose C(X)
=
N(a, 1) on R. Then C ( x 2 ) = x?(a2).
Proof: The proof consists of calculating the characteristic function of X2. A justification of the change of variable in the calculation below can be given using contour integration. The characteristic function of x2 is
Gexp(itx2) =
O 0 1 exp[itx2 - f ( x - a)'] dx 2a
-m
THE NORMAL DISTRIBUTION ON A VECTOR SPACE
By the uniqueness of characteristic functions, C ( x 2 ) = x:(a2). Proposition 3.7. Suppose the random vector X in (V, (., .)) has a N(p, I) distribution. If A E C(V, V) is an orthogonal projection of rank k, then c((x9 AX)) = x2k((p, AP)). Proof. Let {x,,. . . , x,) be an orthonormal basis for the range of A. Thus A = Cfxinxi and
(X, AX)
=
C;(x,, x ) ~ .
But the random variables (xi, x ) ~ ,i = 1,. . . , k, are independent (Proposi) x:((xi, p)2). From the additive tion 3.5) and, by Proposition 3.6, C ( X , ~= property of independent noncentral chi-square variables,
Noting that (p, Ap)
=
c:(x,, p)2, the proof is complete.
q
When C(X) = N(p, Z), the distribution of the quadratic form (X, AX), with A self-adjoint, is reasonably complicated, but there is something that can be said. Let B be the positive semidefinite square root of Z and assume that p E CtL(2). Thus p E %(B) since %(B) = CtL(Z). Therefore, for some vector r E V, p = Br. Thus C(X) = C(BY) where C(Y) = N(r, I) and it suffices to describe the distribution of (BY, ABY) = (Y, BABY). Since A and B are self-adjoint, BAB is self-adjoint. Write BAB in spectral form: BAB
=
Z;hlx,Ox,
where {x,,. . . , x,) is an orthonormal basis for (V, (., .)). Then
and the random variables (x,, Y), i = 1,. . . , n , are independent with C((x,, Y)2) = x:((x,, T ) ~ )It. follows that the quadratic form (Y, BABY) has the same distribution as a linear combination of independent noncentral chi-square random variables. Symbolically,
In general not much more can be said about this distribution without some assumptions concerning the eigenvalues A,, . . . , A,. However, when BAB is an orthogonal projection of rank k, then Proposition 3.7 is applicable and
In summary, we have the following. Proposition 3.8. Suppose C(X) = N(p, Z) where p E %(X), and let B be the positive semidefinite square root of 2 . If A is self-adjoint and BAB is a rank k orthogonal projection, then
We can use a slightly different set of assumptions and reach the same conclusion as Proposition 3.8, as follows. Proposition 3.9. Suppose C(X) = N(p, 2 ) and let B be the positive semidefinite square root of Z. Write p = p, + p 2 where p, E % ( 2 ) and p 2 E %(2). If A is a self-adjoint such that Ap2 = 0 and BAB is a rank k orthogonal projection, then
Pro05 S i n c e A p 2 = 0 , ( X , A X ) = ( X - p , , A ( X - p , ) ) . L e t Y = X - p , so C(Y) = N(pl, Z) and C((X, AX)) = C((Y, AY)). Since p, E %(Z), Proposition 3.8 shows that
However, (p, Ap)
=
(p,, Ap,) as Ap2 = 0.
3.3. INDEPENDENCE OF QUADRATIC FORMS Thus far, necessary and sufficient conditions for the independence of different linear transformations of a normal random vector have been given
114
THE NORMAL DISTRIBUTION ON A VECTOR SPACE
and the distribution of a quadratic form in a normal random vector has been described. In this section, we give sufficient conditions for the independence of different quadratic forms in normal random vectors. Suppose X E ( V , ( . , .)) has an N ( p , Z ) distribution and consider two self-adjoint linear transformations, A,, i = 1,2, on V to V. To discuss the independence of ( X , A , X ) and ( X , A,X), it is convenient to first reduce the discussion to the case when p = 0 and Z = I. Let B be the positive semidefinite square root of 2 so if C ( Y ) = N(0, I ) , then C ( X ) = C(BY + p). Thus it suffices to discuss the independence of ( B Y p , A , ( B Y p ) ) and ( B Y + p , A,(BY p ) ) when C ( Y ) = N(0, I ) . However,
+
( B Y + p , A,(BY
+
+
+ p ) ) = ( Y , BA,BY) + 2(BAip, Y ) + ( p , A l p )
for i = 1,2. Let C, = BA,B, i = 1,2, and let x, = 2BA,p. Then we want to know conditions under whch ( Y , C , Y ) + ( x , , Y ) and ( Y , C,Y) + ( x , , Y ) are independent when C ( Y ) = N(0, I ) . Clearly, the constants ( p , A,p), i = 1,2, do not affect the independence of the two quadratic forms. It is t h s problem, in reduced form, that is treated now. Before stating the principal result, the following technical proposition is needed. Proposition 3.10. For self-adjoint linear transformations A, and A , on ( V ,( . , . )) to (V,(., . )), the following are equivalent:
(i) A,A, = 0. (ii) % ( A , ) I % ( A 2 ) . Proof. If A,A2 = 0, then A,A2x = 0 for all x E V so % ( A 2 )c % ( A I ) . Since % ( A , ) I % ( A , ) , % ( A , ) I % ( A , ) . Conversely, if % ( A 1 )I % ( A 2 ) , then % ( A 2 )C %(A1)' = % ( A I ) and this implies that A,A,x = 0 for all x E V. Therefore, A,A2 = 0. Proposition 3.11. Let Y E ( V ,(., .)) have a N(0, I ) distribution and suppose Z, = ( Y , A , Y ) ( x i ,Y ) where A, is self-adjoint and x, E V , i = 1,2. If A,A2 = 0, A,x2 = 0, A 2 x , = 0, and ( x , , ~ , = ) 0, then 2, and 2, are independent random variables.
+
Proof. The idea of the proof is to show that 2, and 2, are functions of two different independent random vectors. To this end, let P, be the orthogonal projection onto % ( A i )for i = 1,2. It is clear that PiA,Pi = A, for i = 1,2. Thus 2, = ( P l y ,A,P,Y) + ( x , , Y ) for i = 1,2. The random vector ( P ,Y, ( x , , Y ) )takes values in the direct sum V @ R and 2, is a function of
PROPOSITION
115
3.12
t h s vector. Also, {P,Y, (x,, Y)) takes values in V $ R and Z2 is a function of t h s vector. The remainder of the proof is devoted to showing that {Ply,(x,, Y)) and {P,Y, (x2, Y)) are independent random vectors. T h s is done by verifying that the random vectors are jointly normal and that they are uncorrelated. Let [., . ] denote the induced inner product on the direct sum V @ R. The inner product of the vector {{y,, a,), {y,, a,)) in (V $ R) @ (V @ R ) with {{P,Y,(x,, Y)), {P2Y,(x,, Y))) is
which has a normal distribution since Y is normal. Thus {{Ply,(x,, Y)), {P,Y, (x,, Y))) has a normal distribution. The independence of these two vectors follows from the calculation below, whch shows the vectors are uncorrelated. For {y,, cu,) E V @ R and {y2,a,) E V @ R,
However, PIP, = 0 since % ( A , ) I %(A2). Also, P2xI = 0 as x, E %(A2) and, similarly, P,x2 = 0. Further, (x,, x2) = 0 by assumption. Thus the above covariance is zero so 2 , and 2, are independent. A useful consequence of Proposition 3.1 1 is Proposition 3.12. Proposition 3.12. Suppose C(X) = N ( p , 2 ) on (V, (. , -)) and let C,, i = 1,2, be self-adjoint linear transformations. If C,ZC, = 0,then (X, C,X) and (X, C2X) are independent. Proot Let B denote the positive semidefinite square root of 2 , and suppose C(Y) = N(0, I ) . It suffices to show that Z, = (BY p, C,(BY
+
+
116
THE NORMAL DISTRIBUTION ON A VECTOR SPACE
p ) ) is independent of Z , = ( B Y p). But
+ p , C,(BY + p ) )
since C ( X ) = C ( B Y
+
zl = ( Y , B C i B Y ) + 2(BC,p, Y ) + ( p , C,p) for i = 1,2. Proposition 3.11 can now be applied with A, = BC,B and xi = 2BC,p for i = 1,2. Since 2 = BB, A , A , = BClBBC2B = BC,ZC2B = 0 as C,ZC2 = 0 by assumption. Also, A , x , = 2BC,BBC2p = 2BC,2C2p = 0. Similarly, A 2 x , = 0 and ( x , , x,) = 4(BC,p, BC,p) = 4(p, C l Z C 2 p )= 0. Thus ( Y , B C I B Y ) + 2(BC,p, Y ) and ( Y , B C 2 B Y ) 2(BC2p,Y ) are independent. Hence Z, and 2, are independent.
+
The results of this section are general enough to handle most situations that arise when dealing with quadratic forms. However, in some cases we need a sufficient condition for the independence of k quadratic forms. An examination of the proof of Proposition 3.11 shows that when C ( Y ) = N(0, I ) , the quadratic forms 2, = (Y, A I Y ) ( x , , Y ) , i = 1 , . . . , k, are mutually independent if, for each i *j , A,A, = 0, A,xJ = 0, A J x I= 0 , and ( x , , x,) = 0. The details of this verification are left to the reader.
+
3.4. CONDITIONAL DISTRIBUTIONS The basic result of this section gives the conditional distribution of one normal random vector given another normal random vector. It is t h s result that underlies many of the important distributional and independence properties of the normal and related distributions that are established in later chapters. Consider random vectors XI E ( y ,(., .),), i = 1,2, and assume that the random vector { X I ,X,) in the direct sum V, @ V2has a normal distribution with mean vector { p l , p,) E V l @ V2 and covariance given by
Thus C ( X , ) = N(p,, Z,,) on ( y ,(., for i = 1,2. The conditional distribution of XI given X, = x , E V, is described in the next result. a),)
Proposition 3.13. Let C(XlIX2= x,) denote the conditional distribution of XI given X, = x,. Then, under the above normality assumptions,
Here, Z ; denotes the generalized inverse of Z,,.
PROPOSITION
3.13
117
Proof. The proof consists of calculating the conditional characteristic function of XI given X2 = x,. To do this, first note that XI - Z12Z&X2and X, are jointly normal on Vl $ V2 and are uncorrelated by Proposition 2.17. Thus XI - Z12Z&X, and X2 are independent. Therefore, for x E V,,
where the last equality follows from the independence of X2 and XI CI2Z, X,. However, it is clear that
as XI - Zl2Z,X2 is normal on V, and has the given mean vector and covariance (Proposition 2.17). Thus
The uniqueness of characteristic functions yields the desired conclusion. For normal random vectors, X,E ( y ,(., .),), i = 1,2, Proposition 3.13 shows that the conditional mean of XI given X2 = x2 is an affine function of x, (affine means a linear transformation, plus a constant vector so zero does not necessarily get mapped into zero). In other words,
Further, the conditional covariance of XI does not depend on the value of X,. Also, this conditional covariance is the same as the unconditional covariance of the normal random vector XI - ZI2Z,X2. Of course, the specification of the conditional mean vector and covariance specifies the conditional distribution of X, gven X2 = x, as t h s conditional distribution is normal.
118
+
THE NORMAL DISTRIBUTION ON A VECTOR SPACE
Example 3.1. Let W,, . . . , Wn be independent coordinate random vectors in RP where RP has the usual inner product. Assume that = N(p, 2) SO p E RP is the coordinate mean vector of each U: and Z is the p x p covariance matrix of each q.Form the random matrix X E % , n with rows y, i = 1,. . . , n. We know that
e(v)
&X = ep' and
where e E Rn is the vector of ones. To show X has a normal distribution on the inner product space (Cp,,,, ( . , .)), it must be verified that for each A E C,,,, ( A , X ) has a normal distribution. To do this, let the rows of A be a;, . . . , a:, a i E RP. Then n
(A, X)
=
tr AX'
= 1
a:l.t/;
C(v)
However, a : y has a normal distribution on R since = N ( p , 2 ) on RP. Also, since W,,. . . , Wn are independent, a;W,,. . . , aLWn are independent. Since a linear combination of independent normal random variables is normal, (A, X) has a normal distribution for each A E Cp, ,,. Thus
on the inner product space (Gp, ,,, ( . , . )). We now want to describe the conditional distribution of the first q columns of X given the last r columns of X where q r = p. After some relabeling and a bit of manipulation, this conditional distribution follows from Proposition 3.13. Partition each y into I: and Z, where I: E R4 consists of the first q coordinates of y and Z, E Rr consists of the last r coordinates of y . Let XI E Cq,n have rows Y;,. . . , Y,' and let X2 E Cr, have rows Z ; , . . . , Z,',. Also, partition p into p1 E Rq and p2 E R r so G I : = pl and GZ, = p2, i = 1,. . . , n. Further, partition the covariance matrix Z of each y so that
+
C O V { ~Z,; ) = where Z,,
=
(;:: I:;
Z;,. From the independence of W,, . . . , W,, it follows
PROPOSITION
3.13
that
and ( X I , X2) has a normal distribution on vector (ep',, ep;) and
C,,
$
C,,
with mean
Now, Proposition 3.13 is directly applicable to {XI, X2) where we make the parameter correspondence P;
i
+ +
=
1,2
and Zij *In @ Xij. Therefore, the conditional distribution of X, given X2 = x2 E normal with mean vector
C,, is
and C0v(X1~x2 = x,)
However, it is not difficult to show that (In@ Z2,)-= In @ .Z, Using the manipulation rules for Kronecker products, we have
and C 0 v ( x 1 ~ x=2 x,)
=
In @ (XI, - ZI2',2,,).
This result is used in a variety of contexts in later chapters.
4
120 3.5.
THE NORMAL DISTRIBUTION ON A VECTOR SPACE
THE DENSITY OF THE NORMAL DISTRIBUTION
The problem considered here is how to define the density function of a nonsingular normal distribution on an inner product space (V, (., .)). By nonsingular, we mean that the covariance of the distribution is nonsingular. To motivate the technical considerations given below, the density function of a nonsingular normal distribution is first given for the standard coordinate space Rn with the usual inner product. Consider a random vector X in Rn with coordinates XI,.. . , X, and assume that XI,. . . , X, are independent with C(X,) = N(0,l). The symbol dx denotes Lebesgue measure on Rn. Since XI,. . . , X, are independent, the joint density of XI,. . . , X, in Rn is just the product of the marginal densities, that is, X has a density with respect to dx given by
where x
E
R n has coordinates x,, . . . , x,. Thus
and x'x is just the inner product of x with x in Rn. To derive the density of an arbitrary nonsingular normal distribution in Rn, let A be an n x n nonsingular matrix and set Y = AX p where p E Rn. Since C ( X ) = N(0, I,), C(Y) = N(p, Z) where Z = AA' is positive definite. Thus X = A p l ( Y - p) and the Jacobian of the nonsingular linear transformation on Rn to Rn sending x into A 1 ( x - p) is (det(AP')I where ( . ( denotes absolute value. Therefore, the density function of Y with respect to dy is
+
=
~ d e t ( ~ - ' ) l p ( ~ - '-( yp))
=
(det ~ ) - ' / ~ ( 2 . r r ) - ~ / ~ e x ~ [ --f p)'.Z-'(y (y - p)].
=
(det ~ ) - ' / ~ ( 2 7 7 ) - " / ~
Thus we have the density function with respect to dy of any nonsiilgular normal distribution on Rn. Of course, this expression makes no sense when Z is singular. Now, suppose Y is a random vector in an n-dimensional vector space (V, (., .)) and C(Y) = N(p, 2 ) where Z is positive definite. The expression
121
THE DENSITY OF THE NORMAL DISTRIBUTION
for y E V, certainly makes sense and it is tempting to call this the density function of Y E ( V , .)). The problem is: What is the measure on ( V ,(. , .)) with respect to which p, is a density? In other words, what is the analog of Lebesgue measure on ( V ,(., .))? To answer the question, we now show that there is a natural measure on ( V ,(. , .)), which is constructed from Lebesgue measure on Rn, and p, is the density function of Y with respect to this measure. The details of the construction of "Lebesgue measure" on an n- dimensional inner product space ( V ,(., .)) follow. First, we review some basic topological notions for ( V , ( . ,-)). Recall that S r ( x o )= {xlllx - xoll < r } is called the open ball of radius r with center x,. A set B G V is called open if, for each x, E B, there is an r > 0 such that Sr(xo)c B. Since all inner products on V are related by positive definite linear transformations, the definition of open does not depend on the given inner product. A set is closed iff its complement is open and a set if bounded iff it is contained in Sr(0) for some r > 0 . Just as in Rn, a set is compact iff it is closed and bounded (see Rudin, 1953, for the definition and characterization of compact sets in R n ) .As with openness, the definitions and characterizations of closedness, boundedness, and compactness do not depend on the particular inner product on V. Let 1 denote standard Lebesgue measure on Rn. To move 1 over to the space V , let x , , . . . , xn be a fixed orthonormal basis in ( V ,( . , . )) and define the linear transformation T on R" to V by (
0
,
where a E Rn has coordinates a , , . . . , a,. Clearly, T is one-to-one, onto, and maps open, closed, bounded, and compact sets of Rn into open, closed, bounded, and compact sets of V . Also, T - ' on V to Rn maps x E V into the vector with coordinates ( x i ,x ) , i = 1 , . . . , n. Now, define the measure v, on Borel sets B E % ( V ) by
+
T-'x) = l(T-'(B)) Notice that v,(B + x ) = l ( T 1 ( B+ x ) ) = I ( T - ' ( B ) = v , ( B ) since Lebesgue measure is invariant under translations. Also, v O ( B )< + cc if B is a compact set. T h s leads to the following definition.
Definition 3.3. A nonzero measure v defined on the Borel sets % ( V ) of ( V ,(. , .)) is invariant if:
+
(i) v ( B x ) = v ( B ) for x E V and B E % ( V ) . (ii) v ( B ) < cc for all compact sets B.
+
122
THE NORMAL DISTRIBUTION ON A VECTOR SPACE
The measure v, defined above is invariant and it is shown that, if v is any invariant measure on %(I/), then v = cv, for some constant c > 0. Condition (ii) of Definition 3.3 relates the topology of V to the measure v. The measure that counts the number of points in a set satisfies (i) but not (ii) of Definition 3.3 and this measure is not equal to a positive constant times v,. Before characterizing the measure v,, it is now shown that vo is a dominating measure for the density function of a nonsingular normal distribution on (V, (., .)). Proposition 3.14. Suppose C(Y) = N(p, 2 ) on the inner product space (V7(. , .)) where 2 is nonsingular. The density function of Y with respect to the measure vo is given by p(y) for y
E
=
(2~)-""(det 2)-'/'exp[-f(y
-
p, Z P 1 ( y- p))]
V.
ProoJ: It must be shown that, for each Bore1 set B,
where IB is the indicator function of the set B. From the definition of the measure v,, it follows that (see Lehrnann, 1959, p. 38)
Let X = T-'(Y)
E
Rn so X is a random vector with coordinates (xi, Y),
i = 1,. . . , n. Thus X has a normal distribution in Rn with mean vector
T-l(p) and covariance matrix [Z] where [Z] is the matrix of Z in the given orthonormal basis x,, . . . , x,. Therefore,
PROPOSITION
3.14
The last equality follows since 1,-,(,,(a)
=
I,(T(a)) and
Thus
We now want to show that the measure v,, constructed from Lebesgue measure on Rn, is the unique translation invariant measure that satisfies
Let X+ be the collection of all bounded non-negative Bore1 measurable functions defined on V that satisfy the following: given f E X+, there is a compact set B such that f (0) = 0 if v G B. If v is any invariant measure on V and f E X+, then jf (v)v(dv) < oo since f is bounded and the v-measure of every compact set is finite. It is clear that, if v , and v, are invariant measures such that
+
\f(v)vl(do) then v ,
=
= /f(v)vl(dv)
for a u f ~X+,
v,. From the definition of an invariant measure, we also have
124
for all f E
THE NORMAL DISTRIBUTION ON A VECTOR SPACE
X+and x E
V. Furthermore, the definition of vo shows that
for all f E X+. Here, we have used the linearity of T and the invariance of Lebesgue measure under multiplication of the argument of integration by a minus one.
Proposition 3.15. If v is an invariant measure on %(V), then there exists a positive constant c such that v = cvo. ProoJ: For f , g
E
X+, we have
=/
f(~)~o(~~~j~(~)~(d~).
Therefore.
for all f,g =
E
XLt.Fix f E Xi such that /f(w)vo(dw) = 1 and set c
J/(x)v(dx).Then
PROPOSITION
125
3.15
for all g E CXt. The constant c cannot be zero as the measure v is not zero. Thus c > 0 and v = cv,. The measure v, is called the Lebesgue measure on V and is henceforth denoted by do or dx, as is the Lebesgue measure on Rn. It is possible to show that v, does not depend on the particular orthonormal basis used to define it by using a Jacobian argument in Rn. However, the argument given above contains more information than ths. In fact, some minor technical modifications of the proof of Proposition 3.15 yield the uniqueness (up to a positive constant) of invariant measures on locally compact topological groups. Thls topic is discussed in detail in Chapter 6. An application of Proposition 3.14 to the situation treated in Example 3.1 follows.
+
Example 3.2. For independent coordinate random vectors y. E RP, i = 1,. . . , n, with C ( y ) = N(p, Z), form the random matrix X E Cp, , with rows y.',i = I , . . . , n. As shown in Example 3.1, c(X)
=
N(epf, I,, 8 2 )
on the inner product space (Cp,,, ( . , .)), where e E R n is the vector of ones. Let dX denote Lebesgue measure on the vector space Cp, ,. If ): is nonsingular, then I, 8 Z is nonsingular and (I, 8 8 ) = I, 8 Z-I. Thus when Z is nonsingular, the density of X with respect to dX is
'
It is shown in Chapter 5 that det(I,, 8 Z) = (det 2)". Since the inner product ( . , .) is given by the trace, the density p can be written
However, t h s form of the density is somewhat less revealing, from a statistical point of view, than (3.1). In order to make this statement more precise and to motivate some future statistical considerations, we now t h n k of p E RP and Z as unknown parameters. Thus, we
126
THE NORMAL DISTRIBUTION ON A VECTOR SPACE
can write (3.1) as
where p ranges over RP and B ranges over all p X p positive definite matrices. Thus we have a parametric family of densities for the distribution of the random vector X. As a first step in analyzing t h s parametric family, let
It is clear that M is ap-dimensional linear subspace of C,, ,and M is simply the space of possible values for the mean vector of X. Let P, = ( l / n ) e e f so P, is the orthogonal projection onto span(e) G R n . Thus Pe 8 I, is an orthogonal projection and it is easily verified that the range of P, @ I, is M. Therefore, the orthogonal projection onto M is Pe 8 I,. Let Q , = In - P, so Q , 8 I, is the orthogonal projection onto M I and ( Q , 8 I,)(P, 8 I,) = 0. We now decompose X into the part of X in M and the part of X in M I -that is, write X = ( P , 8 I,)X ( Q , 8 I,)X. Substituting t h s into the exponential part of (3.2) and using the relation ( P e 8 I,)(I, 8 Z ) ( Q , 8 I,) = 0, we have
+
=
( P e X - ep', ( I , 8 2 - ' ) ( P , x - e p ' ) )
+ ~~Q,xB-'(Q,x)~
=
( P e X - ep', ( I , 8 Z - ' ) ( P , x - e p ' ) )
+ trxf~,xB-'.
Thus the density p ( X I p , Z ) is a function of the pair P e X and X f Q eX so P, X and X'Q, X is a sufficient statistic for the parametric family (3.2). Proposition 3.4 shows that ( P , 8 I,)X and ( Q , 8 I,) X are independent since ( P , 8 I,)(I, 8 Z ) ( Q , 8 I,) = ( P e e e )€3 Z = 0 as PeQe = 0. Therefore, PeX and X'Q,X are independent since P,X = ( P , 8 I,)X and X ' Q e X = ( ( Q , 8 I,)X)'((Q, 8 I,)X). To interpret the sufficient statistic in terms of the original random
PROBLEMS
vectors W,,. . . , W n , first note that
where
W = ( l / n ) Z V is the sample mean. Also,
The quantity ( l / n ) X r Q e X is often called the sample covariance matrix. Since e W r and Ware one-to-one functions of each other, we have that the sample mean and sample covariance matrix form a sufficient statistic and they are independent. It is clear that
The distribution of X I Q e X , commonly called the Wishart distribution, is derived later. The procedure of decomposing X into the projection onto the mean space (the subspace M) and the projection onto the orthogonal complement of the mean space is fundamental in multivariate analysis as in univariate statistical analysis. In fact, this procedure is at the heart of analyzing linear models-a topic to be considered in the next chapter.
+
PROBLEMS 1. Suppose X I , . . . , Xn are independent with values in (V, (. , -)) and C ( 4 . ) = N ( p i , Ai),i = 1 , . . ., n . Show that C ( Z 4 ) = N ( Z p i , Z A , ) . 2. Let X and Y be random vectors in R n with a joint normal distribution given by
where p is a scalar. Show that Ipl G 1 and the covariance is positive definite iff Ipl < 1. Let Q ( Y ) = I, - ( Y ' Y ) - ' Y Y ' . Prove that W = X ' Q ( Y ) X has the distribution of (1 - p 2 ) X ; - , (the constant 1 - p2 times a chi-squared random variable with n - 1 degrees of freedom).
128
THE NORMAL DISTRIBUTION ON A VECTOR SPACE
3. When X E Rn and C( X) = N(0, Z) with Z nonsingular, then C(X) = C(CZ) where C(Z) = N(0, I,) and CC' = 2. Hence, C(C-'X) = C(Z) so C-' transforms X into a vector of i.i.d. N(0,l) random variables. There are many C- "s that do this. The problem at hand concerns the construction of one such C-'. Given any p x p positive definite matrix A, p >, 2, partition A as
where a , ,
E
R1,A,,
(i) Partition Z : n Show that
=
X
A;,
E
RP-'. Define T,(A) by
n as A is partitioned and set X(') = T,(Z)X.
where Xi1) = Z,, - Z,,Z,,/a,,. by' ) (ii) For k = 1,2,. . . , n - 2, define x ( ~ +
Prove that
for some positive definite X i k + ' ) . (iii) For k = 0,. . . , n - 2, let
where T(O)= T,(Z). With T = T("-,). . . T(O), show that X("-') = TX and Cov(Xin-I)) = I,. Also, show that T is lower triangular and Z - = T'T.
129
PROBLEMS
4.
Suppose X E R~ has coordinates XI and X2, and has a density -e
p(x)=
77
x - (
x
+x)]
if xIx2 > 0 otherwise
s o p is zero in the second and fourth quadrants. Show XI and X2 are both normal but X is not normal. 5.
Let XI,. . . , Xn be i.i.d. N(p, a 2 ) random variables. Show that U = - q)2 and W = ZX, are independent. What is the distribution of U?
6. For X E (V,(., with C(X) = N(0, I), suppose (X, AX) and (X, BX) are independent. If A and B are both positive semidefinite, prove that AB = 0. Hint: Show that tr AB = 0 by using cov{(X, AX),(X, BX)) = 0. Then use the positive semidefiniteness and tr AB = 0 to conclude that AB = 0. 0
)
)
The method used to define the normal distribution on ( V , ( . , .)) consisted of three steps: (i) first, an N(0, 1) distribution was defined on R'; (ii) next, if C(Z) = N(0, l), then W is N(p, a 2 ) if C(W) = C(uZ + p); and (iii) X with values in (V, .)) is normal if (x, X) is normal on R' for each x E V. It is natural to ask if this procedure can be used to define other types of distributions on (V, (., -)). Here is an attempt for the Cauchy distribution. For X E R', say Z is standard Cauchy (which we write as C(Z) = C(0,l)) if the density of Z is ( a ,
Say W has a Cauchy distribution on R' if C(W) = C(aZ + p ) for some p E R1 and a > 0-in this case write C(W) = C(p, a). Finally, say X E (V, (., -)) is Cauchy if (x, X) is Cauchy on R'. (i) Let W,, . . . , Wn be independent C(p,, a,), j = 1,. . . , n. Show that C(Za,q.) = C(Za .p ., ZJa,la,). Hint: The characteristic function J ! of a C(0, 1) distribution is exp[ - It1 1, t E R'. (ii) Let Z,,. . . , Z, be i.i.d. C(0,l) and let x,,. . . , x, be any basis for (V,(., .)). Show X = ZZjxj has a Cauchy distribution on ( V , ( . ,.I).
8. Consider a density on R' given by f(u)
=
[mt-'+(u/t)~(dr)
130
THE NORMAL DISTRIBUTION ON A VECTOR SPACE
where is the density of an N ( 0 , l ) distribution and G is a distribution function with G(0) = 0. The distribution defined by f is called a scale mixture of normals. (i) Let Z , be N ( 0 , l ) and let R be independent of Z , with C ( R ) = G. Show that U = RZ, has f as its density function. If C ( Y )= C ( c U ) for some c > 0, we can say that Y has a type-f distribution. (ii) In ( V ,(., .)), suppose C ( Z ) = N(0, I ) and form X = R Z where R and Z are independent and C ( R ) = G. For each x E V , show ( x , X ) has a type-f distribution. Remark. The distribution of X in ( V ,(., .)) provides a possible vector space generalization of a type-f distribution on R'. I#C
9. In the notation of Example 3.1, assume that p @ 2 ) on ., ( . , .)I. Also,
(ep,
=
0 so C ( X ) = N(0, In
,,
where C , , = C,, - 2,2C;1Z2,. Show that the conditional distribution of XiX, given X2 is the same as the conditional distribution of Xi XI given Xi X2. 10. The map T of Section 3.5 has been defined on Rn to ( V , ( . , .)) by Ta = C;a,x, where x,,. . . , x , is an orthonormal basis for ( V , ( . , .)). Also, we have defined v, by v,(B) = I ( T - ' ( B ) ) for B E % ( V ) . Consider another orthonormal basis y,, . . . , y, for ( V ( . , .)) and define TI by T,a = CTa,y,, a E Rn. Define v , by v , ( B ) = I(T; ' ( B ) ) for B E % ( V ) .Prove that v, = v,. 11. The measure v, in Problem 10 depends on the inner product (. , .) on V. Suppose [., - 1 is another inner product given by [ x ,y ] = ( x , A y ) where A > 0. Let v, be the measure constructed on ( V ,[., .I) in the same manner that v, was constructed on ( V ,(., .)). Show that v, = cv, where c = (det(A))1/2. 12. Consider the space Sp of p X p symmetric matrices with the inner product given by ( S , , S 2 ) = tr S,S2. Show that the density function of an N(0, I ) distribution on (S,, ( . , .)) with respect to the measure v, is
where S
=
( s i j ) ,i, j
=
1,. . . , p. Explain your answer (what is v,)?
131
NOTES AND REFERENCES
13. Consider XI,. . . , X,, which are i.i.d. N(p, Z) on RP. Let X E Cp,, have rows Xi,. . . , XA so C(X) = N(epf, I, 8 2). Assume that 2 has the form
where a 2 > 0 and - l / ( p - 1) < p < 1 so Z is positive definite. Such a covariance matrix is said to have intraclass covariance structure. (i) On RP, let A = (l/p)e,e; where e l E RP is the vector of ones. Show that a positive definite covariance matrix has intraclass covariance structure iff Z = aA + p ( I - A) for some positive scalars a and p. In this case 2-I = a-IA p - ' ( I - A). (ii) Using the notation and methods of Example 3.2, show that when tr AXfQeX,tr ( I (p, a2, p) are unknown parameters, then (X, A) XfQeX) is a sufficient statistic.
+
NOTES AND REFERENCES 1. A coordinate treatment of the normal distribution similar to the treatment given here can be found in Muirhead (1982). 2. Examples 3.1 and 3.2 indicate some of the advantages of vector space techniques over coordinate techniques. For comparison, the reader may find it instructive to formulate coordinate versions of these examples. 3. The converse of Proposition 3.11 is true. The only proof I know involves characteristic functions. For a discussion of this, see Srivastava and Khatri (1979, p. 64).
Linear Statistical Models
The purpose of this chapter is to develop a theory of linear unbiased estimation that is sufficiently general to be applicable to the linear models arising in multivariate analysis. Our starting point is the classical regression model where the Gauss-Markov Theorem is formulated in vector space language. The approach taken here is to first isolate the essential aspects of a regression model and then use the vector space machinery developed thus far to derive the Gauss-Markov estimator of a mean vector. After presenting a useful necessary and sufficient condition for the equality of the Gauss-Markov and least-squares estimators of a mean vector, we then discuss the existence of Gauss-Markov estimators for what might be called generalized linear models. This discussion leads to a version of the Gauss-Markov Theorem that is directly applicable to the general linear model of multivariate analysis.
4.1. THE CLASSICAL LINEAR MODEL
The linear regression model arises from the following considerations. Suppose we observe a random variable Y, € R and associated with Y, are known numbers z,,, . . . , zik, i = 1,. . . , n. The numbers z,,, . . . , zik might be indicator variables denoting the presence or absence of a treatment as in the case of an analysis of variance situation or they might be the numerical levels of some physical parameters that affect the observed value of y . It is assumed that the mean value of Y, is I Y , = C!zijPj where the P, are unknown parameters. It is also assumed that var(y) = u 2 > 0 and cov(y, Y,) = 0 if i *j. Let Y E Rn be the random vector with coordinates Y,,. . . , Y,, let Z = { z i j )be the n X k matrix of zjj's, and let p E Rk be the vector with coordinates P,,. . . , Pk. In vector form, the assumptions we have made
THE CLASSICAL LINEAR MODEL
133
concerning Y are that &Y = Zfl and Cov(Y) = a21n. In summary, we observe the vector Y whose mean is Zb where Z is a known n X k matrix, fl E R~ is a vector of unknown parameters, and Cov(Y) = a21nwhere a 2 is an unknown parameter. The two essential features of this parametric model are: (i) the mean vector of Y is an unknown element of a known subspace of Rn-namely, &Yis an element of the range of the known linear transformation determined by Z that maps R~ to Rn; (ii) Cov(Y) = a21n--that is, the distribution of Y is weakly spherical. For a discussion of the classical statistical problems related to the above model, the reader is referred to Scheffe (1959). Now, consider a finite dimensional inner product space (V, ( -, -)). With the above regression model in mind, we define a weakly spherical linear model for a random vector with values in (V, (. , .)). Definition 4.1. Let M be a subspace of V and let E, be a random vector in V with a distribution that satisfies &E, = 0 and Cov(eO)= I. For each p E M and a > 0, let Q,. denote the distribution of p + ae,. The family {Qp,,lp E M, o > 0) is a weakly spherical linear model for Y E V if the distribution of Y is in {Q,, .Ip E M, o > 0). Thls definition is just a very formal statement of the assumption that the mean vector of Y is an element of the subspace of M and the distribution of Y is weakly spherical so Cov(Y) = a 2 1 for some a 2 > 0. In an abuse of notation, we often write Y = p + E for p E M where E is a random vector with GE = 0 and COV(E)= 0'1. T h s is to indicate the assumption that we have a weakly spherical linear parametric model for the distribution of Y. The unobserved random vector E is often called the error vector. The subspace M is called the regression subspace (or manifold) and the subspace M L is called the error subspace. Further, the parameter p E M is assumed unknown as is the parameter a2. It is clear that the regression model used to motivate Definition 4.1 is a weakly spherical linear model for the observed random vector and the subspace M is just the range of Z. Given a linear model Y = p + E, p E M, & E = 0, COV(E)= u21, we now want to discuss the problem of estimating p. The classical Gauss-Markov approach to estimating p is to first restrict attention to linear transformations of Y that are unbiased estimators and then, within this class of estimators, find the estimator with minimum expected norm-squared deviation from p. To make all of this precise, we proceed as follows. By a linear estimator of p, we mean an estimator of the form AY where A E C(V, V). (We could consider affine estimators AY + v,, v, E V, but the unbiasedness restriction would imply u, = 0.) A linear estimator A Y of p is unbiased iff, when p E M is the mean of Y, we have &(AY) = p. This is equivalent
134
LINEAR STATISTICAL MODELS
to the condition that Ap = p for all p E M since GAY = A&Y = Ay. Thus AY is an unbiased estimator of p iff A p = p for all y E M. Let
The linear unbiased estimators of p are those estimators of the form AY with A E We now want to choose the one estimator (i.e., A E &) that minimizes the expected norm-squared deviation of the estimator from p. In other words, the problem is to find an element A E & that minimizes &IIAY - p112. The justification for choosing such an A is that IIAY - y112 is the squared distance between AY and p so &IIAY - p112 is the average squared distance between AY and p. Since we would like AY to be close to p , such a criterion for choosing A E & seems reasonable. The first result in this chapter, the Gauss-Markov Theorem, shows that the orthogonal projection onto M, say P, is the unique element in & that minimizes GllAY pIl2.
a.
Theorem 4.1 (Gauss-Markov Theorem). For each A a2 > 0,
E
&, p
E
M, and
where P is the orthogonal projection onto M. There is equality in this inequality iff A = P. Proof: W r i t e A = P + C s o C = A - P . S i n c e A p = p f o r p ~ M , C p = O for p E M and this implies that CP = 0. Therefore, C ( Y - p ) and P(Y - y ) are uncorrelated random vectors, so & ( C ( Y- p), P(Y - p ) ) = 0 (see Proposition 2.21). Now,
The third equality results from the fact that the cross product term is zero. This establishes the desired inequality. It is clear that there is equality in this inequality iff &IIC(Y - p)112 = 0. However, C ( Y - p ) has mean zero and covariance a 2 ~ so~ ' &llC(Y - p)112
=
a2(1,CC')
by Proposition 2.21. Since a2 > 0, there is equality iff ( I , CC') ( I , CC') = ( C , C ) and this is zero iff C = A - P = 0.
=
0. But
135
THE CLASSICAL LINEAR MODEL
The estimator PY of p E M is called the Gauss-Markov estimator of the mean vector and the notation fi = PY is used here. A moment's reflection shows that the validity of Theorem 4.1 has nothing to do with the parameter u2, be it known or unknown, as long as u2 > 0. The estimator fi = PY is also called the least-squares estimator of p for the following reason. Given the observation vector Y, we ask for that vector in M that is closest, in the given norm, to Y-that is, we want to minimize, over x E M, the expression IlY - x112 But Y = PY QY where Q = (I - P ) so, for x E M,
+
The second equality is a consequence of Qx
=
0 and QP
=
0. Thus
with equality iff x = PY. In other words, the point in M that is closest to Y is fi = PY. When the vector space V is Rn with the usual inner product, then JJY- x112 is just a sum of squares and fi = P Y E M minimizes this sum of squares-hence the term least-squares estimator.
+
Example 4.1. Consider the regression model used to motivate Definition 4.1. Here, Y E Rn has a mean vector ZP when j3 E Rk and Z is an n x k known matrix with k < n. Also, it is assumed that Cov(Y) = u21,, u2 > 0. Therefore, we have a weakly spherical linear model for Y and p = ZP is the mean vector of Y. The regression manifold M is just the range of Z. To compute the Gauss-Markov estimator of p, the orthogonal projection onto M, relative to the usual inner product on Rn, must be found. To find t h s projection explicitly in terms of Z, it is now assumed that the rank of Z is k. The claim is that P = Z(Z'Z)-'Zf is the orthogonal projection onto M. Clearly, P 2 = P and P is self-adjoint so P is the orthogonal projection onto its range. However, Z' maps Rn onto R~ since the rank of Z' is k. Thus (ZfZ)-'z' maps Rn onto Rk. Therefore, the range of Z(ZIZ)-'Z' is Z(Rk), which is just M, so P is the orthogonal projection onto M. Hence fi = Z(ZrZ)-'ZfY is the Gauss-Markov and least-squares estimator of p. Since p = Zfl, Z'p = Z'ZP and thus /3 = ( z f Z ) - ' z f p . There is the obvious temptation to call
the Gauss-Markov and least-squares estimator of the parameter P.
136
LINEAR STATISTICAL MODELS
Certainly, calling b the least-squares estimator of
P is justified since
ZB
= fi and Zy E M. Thus b minimizes the sum for all y E R ~as, of squares IIY - zy112as a function of y. However, it is not clear why b should be called the Gauss-Markov estimator of P. The discussion below rectifies this situation.
+
Again, consider the linear model in (V, (., Y = p E, where p E M, 0, and COV(E)= 0'1. As usual, M is a linear subspace of V and E is a random vector in V. Let (W, a]) be an inner product space. Motivated by the considerations in Example 4.1, consider the problem of estimating Bp, B E C(V, W), by a linear unbiased estimator A Y where A E C(V, W). That AY is an unbiased estimator of Bp for each p E M is clearly equivalent to Ap = Bp for p E M since &AY = Ap. Let a)),
&E =
[ a ,
@,
=
{AIA E C(V, W), Ap
=
Bp for p
E
M),
so AY is an unbiased estimator of Bp, p E M iff A E @,. The following result, which is a generalization of Theorem 4.1, shows that BP is the Gauss-Markov estimator for Bp in the sense that, for all A E @,,
Here 11
. 11,
is the norm on the space (W,
Proposition 4.1.
For each A
E
[ a ,
.I).
@ ,,
where P is the orthogonal projection onto M. There is equality in t h s inequality iff A = BP. Proof. The proof is very similar to the proof of Theorem 4.1. Define C E C(V, W) by C = A - BP and note that Cp = Ap - BPp = Bp - Bp = 0 since A E @, and Pp = p for p E M. Thus CP = 0, and t h s implies that BP(Y - p) and C(Y - p) are uncorrelated random vectors. Since these random vectors have zero means, &[BP(Y - p), C(Y - p)]
=
0.
PROPOSITION
For A
E
4.1
@,,
This establishes the desired inequality. There is equality in this inequality iff &(IC(Y - p)ll? = 0. The argument used in Theorem 4.1 applies here so there is equality iff C = A - BP = 0. Proposition 4.1 makes precise the statement that the Gauss-Markov estimator of a linear transformation of p is just the linear transformation applied to the Gauss-Markov estimator of p. In other words, the Gauss-Markov estimator of Bp is BP where B E C(V, W). There is one particular case of this that is especially interesting. When W = R, the real line, then a linear transformation on V to W is just a linear functional on V. By Proposition 1.10, every linear functional on V has the form (x,, x ) for some x, E V. Thus the Gauss-Markov estimator of (x,, p) is just (x,, P) = (x,, PY) = (Px,, Y). Further, a linear estimator of (x,, p), say (z, Y), is an unbiased estimator of ( x , , p) iff (z, p) = (x,, p) for all p 1. M. For any such vector z, Proposition 4.1 shows that
Thus the minimum of var(z, Y), over the class of all z's such that (z, Y) is an unbiased estimator of (x,, p), is achieved uniquely for z = Px,. In particular, if x, E M, z = x, achieves the minimum variance. In the definition of a linear model, Y = p + E, no distributional assumptions concerning E were made, other than the first and second moment assumptions GE = 0 and COV(E) = u21. One of the attractive features of Proposition 4.1 is its validity under these relatively weak assumptions. However, very little can be said concerning the distribution of fi = PY other than Gfi = p and Cov(fi) = u2p. In the following example, some of the implications of assuming that E has a normal distribution are discussed.
+
Example 4.2. Consider the situation treated in Example 4.1. A coordinate random vector Y E Rn has a mean vector p = Z/3 where Z is an n x k known matrix of rank k (k < n) and /3 E R~ is a vector of unknown parameters. It is also assumed that Cov(Y) = a21n.The Gauss-Markov estimator of p is fi = Z(ZfZ)-'z'Y. Since
138
LINEAR STATISTICAL MODELS
p
(ZrZ)- ' ~ ' p ,Proposition 4.1 shows that the Gauss-Markov p = (ZrZ)-'Z'$ = (z'z)-'Z'Y.Now, add the assumption that Y has a normal distribution-that is, C(Y) = N(p, a21n)where p E M and M is the range of Z. For this particular parametric model, we want to find a minimal sufficient statistic and the maximum likelihood estimators of the unknown parameters. The density function of Y, with respect to Lebesgue measure, is =
estimator of j3 is
where y E Rn, p E M, and u2 > 0. Let P denote the orthogonal projection onto M, so Q = I - P is the orthogonal projection onto M I . Since Ily - p112 = llPy - ~ 1 . 1 1+~ ll&112, the density of y can be written
is a sufficient statistic as the This shows that the pair {Py, ll~y11~) The normality assumpdensity is a function of the pair (Py, llQ~11~). tion implies that P Y and QY are independent random vectors as they are uncorrelated (see Proposition 3.4). Thus P Y and I ~ Q11' Yare is minimal sufficient and independent. That the pair (Py, llQ~11~) complete follows from results about exponential families (see Lehmann 1959, Chapter 2). To find the maximum likelihood estimators of p E M and a2, the densityp( y ( p ,u2) must be maximized over all values of p E M and a2. For each fixed a 2 > 0,
with equality iff p = Py. Therefore, the Gauss-Markov estimator = P Y is the maximum likelihood estimator for p. Of course, this also shows that p = (ZtZ)-'Z'Y is the maximum likelihood estimator of p. To find the maximum likelihood estimator of u2, it remains to maximize
fi
139
PROPOSITION 4.2
An easy differentiation argument shows that p(ylPy, a 2 ) is maximized for a 2 equal to llQY112/n. Thus e2 = (lQy112/n is the maximum likelihood estimator of a2. From our previous observation, fi = P Y and e2 are independent. Since C(Y) = N(p, a2Z),
c(P) = c ( P Y )
=
~ ( p a,2 p )
and
Also, c(QY)
since Qp
=
0 and Q2 = Q
=
~ ( 0a,2 e )
=
Q'. Hence from Proposition 3.7,
since Q is a rank n - k orthogonal projection. Therefore, 662
n-k n
=a2.
It is common practice to replace the estimator estimator
c2 by
the unbiased
62 = -llQYl12 n-k'
It is clear that d 2 is distributed as the constant a2/(n - k) times a - random variable.
Xi
+
The final result of this section shows that the unbiased estimator of a2, derived in the example above, is in fact unbiased without the normality assumption. Let Y = p + E be a random vector in V where p E M c V , G E = 0, and COV(E)= a21. Given thls linear model for Y, let P be the orthogonal projection onto M and set Q = I - P. Proposition 4.2. the estimator
Let n
=
dim V , k
=
62 = -
is an unbiased estimator of a2.
dim M, and assume that k < n. Then
IQyll2
n-k
140
LINEAR STATISTICAL MODELS
Proof. The random vector QY has mean zero and Cov(QY) = 02Q. By Proposition 2.2 1,
The last equality follows from the observation that for any self-adjoint operator S, (I, S ) is just the sum of the eigenvalues of S. Specializing this to the projection Q yields (I, Q) = n - k. 4.2.
MORE ABOUT THE GAUSS-MARKOV THEOREM
The purpose of this section is to investigate to what extent Theorem 4.1 depends on the weak sphericity assumption. In this regard, Proposition 4.1 provides some information. If we take W = V and B = I , then Proposition 4.1 implies that
where 11 . 11, is the norm obtained from an inner product [., .]. Thus the orthogonal projection P minimizes GI1AY over A E @ n omatter what inner product is used to measure deviations of AY from p. The key to the proof of Theorem 4.1 is the relationship G[P(Y - p), (A - P ) ( Y - P)]
=
0.
This follows from the fact that the random vectors P(Y - p) and (A P)(Y - p) are uncorrelated and
This observation is central to the presentation below. The following alternative development of linear estimation theory provides the needed generality to apply the theory to multivariate linear models. Consider a random vector Y with values in an inner product space and assume that the mean vector of Y, say p = GY, lies in a (V, (., known regression manifold M G V. For the moment, we suppose that Cov(Y) = Z where Z is fixed and known (Z is not necessarily nonsingular). As in the previous section, a linear estimator of p, say AY, is unbiased iff a))
A
E
@ ={AIAp = p , p
E
M).
Given any inner product [., .] on V, the problem is to choose A
E
@ to
MORE ABOUT THE GAUSS-MARKOV THEOREM
minimize
where the expectation is computed under the assumption that GY = p and Cov(Y) = Z. Because of Proposition 4.1, it is reasonable to expect that the minimum of +(A) occurs at a point Po E @wherePo is a projection onto M along some subspace N such that M n N = (0) and M N = V. Of course, N is the null space of Po and the pair M, N determines Po. To find the appropriate subspace N, write +(A) as
+
When the third term in the final expression for *(A) is zero, then Po minimizes *(A). If Po(Y - p) and ( A - Po)(Y - p) are uncorrelated, the third term will be zero (shown below), so the proper choice of Po, and hence N, will be to make Po(Y - p) and (A - Po)(Y - p) uncorrelated. Setting C = A - Po, it follows that %(C) 2 M. The absence of correlation between Po(Y - p) and C(Y - p) is equivalent to the condition
Here, C' is the adjoint of C relative to the initial inner product (., .) on V. Since %(C) 2 M, we have
9%(C')
=
(%(c))~
ML
and
'3,((CC') The symbol I refers to the inner product (., .). Therefore, if the null space of Po, namely N, is chosen so that N 2 (C(ML), then P0ZCf = 0 and Po minimizes +(A). Now, it remains to clean up the technical details of the above argument. Obviously, the subspace Z(M1) is going to play a role in what follows. First, a couple of preliminary results.
142
LINEAR STATISTICAL MODELS
Proposition 4.3. Suppose Z subspace of V , Then:
=
Cov(Y) in (V,(., .)) and M is a linear
(i) Z(ML) n M = (0). (ii) The subspace Z(ML) does not depend on the inner product on V. Proof. To prove (i), recall that the null space of Z is
since Z is positive semidefinite. If u E Z(ML) n M, then u = Xu, for some u, E ML . Since Zu, E M, (u,, Zu,) = 0 so u = Zu, = 0. Thus (i) holds. For (ii), let [., -1 be any other inner product on V. Then
for some positive definite linear transformation A,. The covariance transformation of Y with respect to the inner product [., .] is ZA, (see Proposition 2.5). Further, the orthogonal complement of M relative to the inner product [., . ] is {yl[x, y ] =
=
0 for all x
{~;'ul(x,
U) =
E
M)
=
{y((x,Aoy) = 0 for all x
0 for allx
E
M)
=
E
M)
A;~(M~).
Thus Z(ML) = (ZAo)(A;'(M1)). Therefore, the image of the orthogonal complement of M under the covariance transformation of Y is the same no matter what inner product is used on V. Proposition 4.4. Suppose XI and X, are random vectors with values in (V, (., .)). If XI and X, are uncorrelated and &X2= 0,then
for every bilinear function f defined on V x V. Proof. Since XI and X, are uncorrelated and X, has mean zero, for x,, x2 E V , we have
PROPOSITION
4.4
However, every bilinear form f on (V, (., -)) is given by
f where B
E
[ u l , u2l
=
( ~ 1Bu2) ,
C(V, V). Also, every B can be written as
where y,, . . . , y, is a basis for V. Therefore, F f [ x 1 ,x2]= GC C b ; ; ( x I , y i o y ; x 2 )
=
C C b i , & ( ~ ix, ~ ) ( Y ; ,~
2 =) 0.
We are now in a position to generalize Theorem 4.1. To review the assumptions, Y is a random vector in (V, .)) with GY = p E M and Cov(Y) = 2. Here, M is a known subspace of V and Z is the covariance of Let [., -1 be another product on Y relative to the given inner product (., V and set ( a ,
a).
for A
E
&, where (1 . 1 1 , is the norm defined by [.,
a].
Theorem 4.2. Let N be any subspace of V that is complementary to M and contains the subspace Z(ML). Here M L is the orthogonal complement of M relative to (., .). Let Po be the projection onto M along N. Then (4.1)
*(A)
> +(P,)
for A
E
@
If I1: is nonsingular, define a new inner product (., .), by
Then Po is the unique element of & that minimizes * ( A ) . Further, Po is the orthogonal projection, relative to the inner product (. , .), onto M. Proof. The existence of a subspace N 2 Z(ML), which is complementary to M, is guaranteed by Proposition 4.3. Let C E C(V, V) be such that M c %(C). Therefore,
LINEAR STATISTICAL MODELS
This implies that
since %(Po) = N 1 Z(ML). However, the condition P O X ' = 0 is equivalent to the condition that Po(Y - p) and C(Y - p) are uncorrelated. With these preliminaries out of the way, consider A E & and let C = A - Po so %(C) 2 M. Thus
The last equality follows by applying Proposition 4.4 to Po(Y - p) and C(Y - p). Therefore,
so Po minimizes \k over A E &. Now, assume that Z is nonsingular. Then the subspace N is uniquely defined ( N = Z(ML)) since dim(Z(ML)) = dim(ML) and M Z(ML) = V. Therefore, Po is uniquely defined as its range and null space have been specified. To show that Po uniquely minimizes \k, for A E &, we have
+
where C = A - Po. Thus *(A) > *(Po) with equality iff
This expectation can be zero iff C(Y - p) = 0 (a.e.) and this happens iff the covariance transformation of C(Y - p) is zero in some (and hence every) inner product. But in the inner product (., .), Cov(C(Y - p)) and this is zero iff C
=
=
CZC'
0 as Z is nonsingular. Therefore, Po is the unique
PROPOSITION
4.5
145
minimizer of \k. For the last assertion, let N, be the orthogonal complement of M relative to the inner product Then, (a,
N,
a),.
=
{yJ(x,y),
=
0 for allx
E
M)
=
{yl(x, Z-'y)
=
{Zyl(x, y )
=
0 for allx
E
M)
=
Z(ML).
=
0 for allx
E
M)
Since % ( P o ) = Z(ML), it follows that Po is the orthogonal projection onto M relative to (., .),. In all of the applications of Theorem 4.2 in this book, the covariance of Y is nonsingular. Thus the projection Po is unique and fi = PoY is called the Gauss-Markov estimator of p E M. In the context of Theorem 4.2, if Cov(Y) = a 28where Z is known and nonsingular and a 2 > 0 is unknown, then PoY is still the Gauss-Markov estimator for p E M since ( a 2 Z ) ( ~ I ) = Z(ML) for each a 2 > 0. That is, the presence of an unknown scale parameter a 2 does not affect the projection Po. ~ h u Po s still minimizes \k for each fixed a 2 > 0. Consider a random vector Y taking values in (V, ( - , with GY = p E M and a))
Here, 8, is assumed known and positive definite while a 2 > 0 is unknown. Theorem 4.2 implies that the Gauss-Markov estimator of p is fi = PoY where Po is the projection onto M along Zl(ML).Recall that the least-squares estimator of p is P Y where P is the orthogonal projection onto M in the given inner product, that is, P is the projection onto M along M L . Proposition 4.5. The Gauss-Markov and least-squares estimators of p are the same iff Z,(M) G M. Proof: Since Po and P are both projections onto M, PoY = P Y iff both Po and P have the same null spaces-that is, the Gauss-Markov and leastsquares estimators are the same iff
Since 2, is nonsingular and self-adjoint, this condition is equivalent to the condition Z, ( M ) G M.
146
LINEAR STATISTICAL MODELS
The above result shows that if Z,(M) C M, we are free to compute either P or Po to find fi. The implications of this observation become clearer in the next section.
4.3. GENERALIZED LINEAR MODELS First, consider the linear model introduced in Section 4.2. The random vector Y in (V, (., .)) has a mean vector p E M where M is a subspace of V and Cov(Y) = a2Z,. Here, 2 , is a fixed positive definite linear transformation and a > 0. The essential features of this linear model are: (i) the mean vector of Y is assumed to be an element of a known subspace M and (ii) the covariance of Y is an element of the set {a2Zlla2> 0). The assumption concerning the mean vector of Y is not especially restrictive since no special assumptions have been made about the subspace M. However, the covariance structure of Y is quite restricted. The set {a2Zllo2> 0) is an open half line from 0 E C(V, V) through the point 2, E C(V, V) so the set of the possible covariances for Y is a one-dimensional set. It is this assumption concerning the covariance of Y that we want to modify so that linear models become general enough to include certain models in multivariate analysis. In particular, we would like to discuss Example 3.2 w i t h the framework of linear models. Now, let M be a fixed subspace of (V, (., .)) and let y be an arbitrary set of positive definite linear transformations on V to V. We say that {M, y) is the parameter set of a linear model for Y if &Y = p E M and Cov(Y) E y. For a general parameter set {M, y), not much can be said about a linear model for Y. In order to restrict the class of parameter sets under consideration, we now turn to the question of existence of Gauss-Markov estimators (to be defined below) for p. As in Section 4.1, let @
=
{AIA E C(V, v), Ap
=
p for p
E
M).
Thus a linear transformation of Y is an unbiased estimator of p E M iff it has the form AY for A E @. The following definition is motivated by Theorem 4.2. Definition 4.2. Let {M, y) be the parameter set of a linear model for Y. For A, E @, A,Y is a Gauss-Markov estimator of p iff
for all A E @andZ E y. The subscript Z on the expectation means that the expectation is computed when Cov(Y) = 2 .
PROPOSITION
4.6
147
When y = {a211a2> 0), Theorem 4.1 establishes the existence and uniqueness of a Gauss-Markov estimator for p. More generally, when y = {a2Z,la2> 0), Theorem 4.2 shows that the Gauss-Markov estimator for p is Ply where PI is the orthogonal projection onto M relative to the given by inner product ( a ,
a ) ,
The problem of the existence of a Gauss-Markov estimator for general y is taken up in the next paragraph. Suppose that { M , y ) is the parameter set for a linear model for Y. Consider a fixed element Z , E y, and let (., .), be the inner product on V defined by
As asserted in Theorem 4.2, the unique element in & that minimizes &, , I 1 AY - p)I2 is PI-the orthogonal projection onto M relative to (. , .),. Thus if a Gauss-Markov estimator A,Y exists according to Definition 4.2, A, must be P I . However, exactly the same argument applies for Z , E y, so A, must be P2-the orthogonal projection onto M relative to the inner product defined by Z,. These two projections are the same iff Z l ( M L )= Z2(ML)-see Theorem 4.2. Since Z l and 2 , were arbitrary elements of y, the conclusion is that a Gauss-Markov estimator can exist iff Z , ( M L ) = Z 2 (M L ) for all Z , , 2 , E y. Summarizing t h s leads to the following. Proposition 4.6. Suppose that { M , y ) is the parameter set of a linear model for Y in ( V , ( . , .)). Let Z l be a fixed element of y. A Gauss-Markov estimator of p exists iff Z ( M L ) = Z l ( M L ) for all Z
E
y.
When a Gauss-Markov estimator of p exists, it is = P Y where P is the orthogonal projection onto M relative to any inner product [., . ] given by [ x , y ] = (x, Z - ' y ) for some Z E y. Proof: It has been argued that a Gauss-Markov estimator for p can exist iff Z l ( M L )= Z 2 ( M L ) for all X I , Z , E y. This is clearly equivalent to Z ( M L ) = Z l ( M L ) for all Z E M. The second assertion follows from the observation that when Z ( M L ) = Z l ( M L ) ,then all the projections onto M , relative to the inner products determined by elements of y, are the same. That fi = P Y is a consequence of Theorem 4.2.
148
LINEAR STATISTICAL MODELS
An interesting special case of Proposition 4.6 occurs when I E y. In this case, choose 2 , = I so a Gauss-Markov estimator exists iff Z(ML) = M I for all Z E y. This is clearly equivalent to Z(M) = M for all Z E y , which is equivalent to the condition
since each Z E y is nonsingular. It is this condition that is verified in the examples that follow.
+
Example 4.3. As motivation for the discussion of the general multivariate linear model, we first consider the multivariate version of the k-sample situation. Suppose Xij's, j = 1,. . . , n, and i = 1,. . . , k, are random vectors in RP. It is assumed that GTj = y,, Cov(Xij) = 2 , and different random vectors are uncorrelated. Form the random matrix X whose first n , rows are Xij, j = 1,. . . , n,, the next n , rows of X are Xij, j = 1,..., n,, and so on. Then X i s a random vector in (Cp,, ( , where n = Cfn,. It was argued in the discussion following Proposition 2.18 that a))
relative to the inner product ( . , .) on Cp, ,. The mean of X, say p = GX, is an n x p matrix whose first n , rows are all p;, whose next n , rows are all pi, and so on. Let B be the k x p matrix with rows p;, . . . , p',. Thus the mean of X can be written p = ZB where Z is an n x k matrix with the following structure: the first column of Z consists of n , ones followed by n - n , zeroes, the second column of Z consists of n , zeroes followed by n , ones followed by n - n , - n zeroes, and so on. Define the linear subspace M of Cp,, by
,
so M is the range of Z ,. Further, set
4,
y =
€3
I, as a linear transformation on CP,, to
{I, €3 212 E tlp,p,Z positive definite)
and note that y is a set of positive definite linear transformations on CP,, to Cp, ,. Therefore, GX E M and Cov( X) E y , and {M, y ) is a parameter set for a linear model for X. Since I, €3 I, is the identity
linear transformation on %,, and I, Q I, E y, to show that a Gauss-Markov estimator for p E M exists, it is sufficient to verify that, if x E M, then (I, 8 Z)x E M. For x E M, x = ZB for some B E Cp,,. Therefore,
which is an element of M. Thus M is invariant under each element of y so a Gauss-Markov estimator for p exists. Since the identity is an element of y, the Gauss-Markov estimator is just the orthogonal projection of X on M relative to the given inner product ( , .). To find t h s projection, we argue as in Example 4.1. The regression subspace M is the range of Z 8 I, and, clearly, Z has rank k. Let
which is an orthogonal projection; see Proposition 1.28. To verify that P is the orthogonal projection onto M, it suffices to show that the range of P is M. For any x E $,, ,,
which is an element of M since (ZfZ)-'Z'x E ep,,. However, if x E M, then x = ZB and Px = P(ZB) = ZB-that is, P is the identity on M. Hence, the range of P is M and the Gauss-Markov estimator of p is
fi Since p
=
=
PX = z ( z ' z ) - ' z ' x .
ZB,
and, by Proposition 4.1,
is the Gauss-Markov estimator of the matrix B. Further, &( B)
=
B
150
LINEAR STATISTICAL MODELS
and
For the particular matrix Z, Z'Z is a k X k diagonal matrix with diagonal entries n , , . . . , n , so (ZfZ)-' is diagonal with diagonal elements n ; . . . , n , A bit of calculation shows that the matrix B = (Z'Z)- 'Z'X has rows . . , where
',
'.
g , . x~
is the sample mean in the ith sample. Thus the Gauss-Markov estimator of the ith mean p i is i = 1,. . . , k .
z,
+
It is fairly clear that the explicit form of the matrix Z in the previous example did not play a role in proving that a Gauss-Markov estimator for the mean vector exists. This observation leads quite naturally to what is usually called the general linear model of multivariate analysis. After introducing this model in the next example, we then discuss the implications of adding the assumption of normality.
+
Example 4.4 (Multivariate General Linear Model). As in Example 4.3, consider a random matrix X in (C,, .,( , and assume that (i) GX = ZB where Z is a known n X k matrix of rank k and B is a k X p matrix of parameters, (ii) Cov(X) = I, b 2 where 2 is a p x p positive definite matrix-that is, the rows of X are uncorrelated and each row of X has covariance matrix 2 . It is clear we have simply abstracted the essential features of the linear model in Example 4.3 into assumptions for the linear model of this example. The similarity between the current example and Example 4.1 should also be noted. Each component of the observation vector in Example 4.1 has become a vector, the parameter vector has become a matrix, and the rows of the observation matrix are still uncorrelated. Of course, the rows of the observation vector in Example 4.1 are just scalars. For the example at hand, it is clear that a ) )
PROPOSITION 4.6
151
is a subspace of Cp, ,and is the range of Z 8 1,. Setting y = {I, 8
212 is a p x p positive definite matrix),
{M, y ) is the parameter set of a linear model for X. More specifically, the linear model for X is that & X = p E M and Cov(X) E y. Just as in Example 4.3, M is invariant under each element of y so a Gauss-Markov estimator of p = & X exists and is PX where
is the orthogonal projection onto M relative to ( . , .). Mimicking the argument given in Example 4.3 yields
and
In addition to the linear model assumptions for X, we now assume that C(X) = N(ZB, I, 8 Z) so X has a normal distribution in (C,, ,,, ( , .)). As in Example 4.2, a discussion of sufficient statistics and maximum likel~hoodestimators follows. The density function of X with respect to Lebesgue measure is
as discussed in Chapter 3. Let Po = Z(Z'Z)-'Z' and Qo = I - Po so P = Po @ I, is the orthogonal projection onto M and Q = Qo 8 I, is the orthogonal projection onto M I . Note that both P and Q commute with I, 8 2 for any Z. Since p E M, we have (x - ~ ~ (€3 12-')(x , - p))
because (Qx,(Z, 8 Z-')P(x - p))
=
(x, Q(I,
@
Z-')P(x - P))
152
LINEAR STATISTICAL MODELS
=
0 since Q(In @ 2 - ' ) P
=
=
tr(xZ-'x'Q,)
QP(In @ 2 - I )
=
=
0. However,
tr(xfQ0xZ-I).
Thus
=
(Px - p, (In@ z-')(Px - p))
+ tr(x'~,xZ-~).
Therefore, the densityp(xlp, Z) is a function of the pair {Px, x'Q,x} so the pair {Px, xtQ0x} is sufficient. That this pair is minimal sufficient and complete for the parametric family {p(.Ip, Z); p E M, Z positive definite} follows from exponential family theory. Since P(In 8 2 ) Q = PQ(In @ Z) = 0, the random vectors PX and QX are independent. Also, X'Q, X = (QX)'(QX) so the random vectors PX and X'Q,X are independent. In other words, {PX, X'Q,X} is a sufficient statistic and PX and XrQ0Xare independent. To derive the maximum likelihood estimator of p E M, fix Z. Then
x exp[- ~ ( P X- p, (1, 8 z-')(Px
-
p)) - 4 trx'Q,xZ-'1
with equality iff p = Px. Thus the maximum likelihood estimator of p is fi = PX, which is also the Gauss-Markov and least-squares estimator of p. It follows immediately that
is the maximum likelihood estimator of B. and
PROPOSITION
153
4.6
To find the maximum likelihood estimator of Z, the function
must be maximized over all p x p positive definite matrices Z. When xfQox is positive definite, this maximum occurs uniquely at
so the maximum likelihood estimator of Z is stochastically independent of fi. A proof that 5 is the maximum likelihood estimator of Z and a derivation of the distribution of f: is deferred until later.
+
The principal result of this chapter, Proposition 4.6, gives necessary and sufficient conditions on the parameter set {M, y) of a linear model in order that the Gauss-Markov estimator of p E M exists. Many of the classical parametric models in multivariate analysis are in fact linear models with a parameter set {M, y) so that there is a Gauss-Markov estimator for p E M. For such models, the additional assumption of normality implies that fi is also the maximum likelihood estimator of p, and the estimation of p is relatively easy if we are satisfied with the maximum likelihood estimator. For the time being, let us agree that the problem of estimating p has been solved in these models. However, very little has been said about the estimation of the covariance other than in Example 4.4. To be specific, assume C ( X ) = N(p, 2 ) where p E M c (V, (., .)) and {M, y) is the parameter set of this linear model for x . Assume that I E y and fi = PX is the Gauss-Markov estimator for p so Z M = M for all Z E y. Here, P is the orthogonal projection onto M in the given inner product on V. It follows immediately from Proposition 4.6 that fi = PX is also the maximum likelihood estimator of p E M. Substituting fi into the density of X yields
where n = dim V and Q = I - P is the orthogonal projection onto M I . Thus to find the maximum likelihood estimator of Z E y, we must compute
assuming that the supremum is attained at a point
5 E y. Although many
154
LINEAR STATISTICAL MODELS
examples of explicit sets y are known where
f: is not too difficult to find, 2 are not available. This
general conditions on y that yield an explicit
overview of the maximum likelihood estimation problem in linear models where Gauss-Markov estimators exists has been given to provide the reader with a general framework in which to view many of the estimation and testing problems to be discussed in later chapters.
PROBLEMS 1. Let Z be an n X k matrix (not necessarily of full rank) so Z defines a linear transformation on Rk to Rn. Let M be the range of Z and let z,,. . . , zk be the columns of Z. (i) Show that M = span{z,,. . . , z,). (ii) Show that Z(ZfZ)-Z' is the orthogonal projection onto M where (ZfZ)- is the generalized inverse of Z'Z. 2. Suppose XI,. . . , X, are i.i.d. from a density p(xlj3) = f (x - j3) where f is a symmetric density on R1 and jx2f (x) dx = 1. Here, j3 is an unknown translation parameter. Let X E Rn have coordinates XI,. . . ,
xn .
(i) Show that C(X) = C(j3e + E) where E ~ ., .., E, are i.i.d. with density f . Show that GX = j3e and Cov(X) = I,. (ii) Based on (i), find the Gauss-Markov estimator of j3. (iii) Let U be the vector of order statistics for X (Ul < U, < . . . < U,) so C(U) = C(j3e v) where v is the vector of order statistics of E. Show that G(U) = j3e + a, where a, = G v is a known vector (f is assumed known), and Cov(U) = Z, = Cov(v) where Z, is also known. Thus C(U - a,) = C(j3e (v - a,)) where G(v - a,) = 0 and COV(V- a,,) = 2,. Based on this linear model, find the Gauss-Markov estimator for j3. (iv) How do these two estimators of j3 compare?
+
+
3. Consider the linear model Y = p + E where p E M, GE = 0, and COV(E)= u21,. At times, a submodel of this model is of interest. In particular, assume p E w where w is a linear subspace of M. (i) Let M - w = {xlx E M, x I a). Show that M - w = M n w L . (ii) Show that PM - Pwis the orthogonal projection onto M - w and verify that II(PM- PW)x1l2= J I P , X ( ) ~ - ll~,x11~.
155
PROBLEMS
4. For this problem, we use the notation of Problem 1.15. Consider subspaces of RIJ given by M,
=
{yly,,
= y..
for all 2 , j )
M,
=
{yly,,
= ykj
for all i, k ; j = 1,. . . , J )
(i) Show that % ( A ) = M,, %(B,) = MI - M,, and %(B2) = M2 - M,. Let M, be the range of B,. (ii) Show that RIJ = M, $ (MI - M,) CB (M2 - M,) $ M,. (iii) Show that a vector p is in M = M, $ (M, - M,) $ (M2 - M,) iff p can be writte,n as pi, = a + pi + y,, i = 1,. . . , I , j = 1,. . . , J , where a, pi, and y, are scalars that satisfy ZPi = Zy, = 0. 5.
(The 9-test.) Most of the classical hypothesis testing problems in regression analysis or ANOVA can be described as follows. A linear model Y = p E, p E M, G E = 0, and COV(E)= u21 is given in (V,(., .)). A subspace o of M ( o * M ) is given and the problem is to test H,: p E w versus HI : y CZ o , y E M. Assume that C ( Y )= N ( p , u21) in ( V , ( . , .I). (i) Show that the likelihood ratio test of H, versus HI rejects for large values of F = ((P,-,Y ( ( 2 / ( ( Q M (12 ~where Q, = I - PM. (ii) Under H,, show that F is distributed as the ratio of two independent chi-squared variables.
+
6. In the notation of Problem 4, consider Y E RIJ with GY = y E M ( M is given in (iii) of Problem 4). Under the assumption of normality, use the results of Problem 5 to show that the $test for testing H, : PI = P2 - . . . = p, rejects for large values of
C i C j ( y j j- Jj.- Y.,
+ J..)2
Identify o for this problem. 7. (The normal equations.) Suppose the elements of the regression subspace M G R" are given by y = Xp where X is n X k and /3 E R k . Given an observation vector y, the problem is to find @ = P,y. The
156
LINEAR STATISTICAL MODELS
equations (in @)
are often called the normal equations. (i) Show that (4.2) always has a solution b E Rk. (ii) If b is any solution to (4.2), show that Xb = PMy.
8. For Y E Rn,assumep = GY E MandCov(Y) E ywherey = {ZIZ = aP, + @Qe,a > 0, @ > 0). As usual, e is the vector of ones, Pe is the orthogonal projection onto span{e), and Q, = 1- P,. (i) If e E M or e E M L , show that the Gauss-Markov and leastsquares estimators for p are the same for each a and @. (ii) If e 4 M and e P M I , show that there are values of a and @ so that the least-squares and Gauss-Markov estimators of p differ. (iii) If C(Y) = N(p, 2 ) with 2 E y and M G (span{e))' ( M * (span{e))l), find the maximum likelihood estimates for p, a, and @.What happens when M = span{e)? 9. In the linear model Y = X@+ E on Rn with X : n X k of full rank, GE = 0, and COV(E)= a2Z1(Z1 is positive definite and known), show that @ = x(x'Z;'X)-'X'Z;'Y and = (XIZ;'X)-'XIZ;'Y. 10. (Invariance in the simple linear model.) In (V, (., suppose that {M, y ) is the parameter set for a linear model for Y where y = {ZIZ = a21, a > 0). Thus GY = p E M and Cov(Y) E y. This problem has to do with the invariance of t h s linear model under affine transformations: (i) If I' E Q(V) satisfies I'(M) G M, show that r'(M) L M. Let O,(V) be those I? E Q(V) that satisfy r ( M ) G M. (ii) For x, E M, c > 0, and I' E QM(V), define the function (c, I', x,) on V to V by (c, I?, x,) y = cry + x,. Show that this function is one-to-one and onto and find the inverse of this function. Show that this function maps M onto M. (iii) Let F = (c, 1', x,)Y. Show that GF E M and C O V ( ~E) y. Thus (M, y) is the parameter set for and we say that the linear model for Y is invariant under the transformation (c, I',x,). Since GY = p, it follows that GF = (c, I',x,)y for y E M. If t(Y) (t maps V into M ) is any point estimator for p, then it seems plausible to use t(F) as a point estimator for (c, I?, xg)y = c r p + x,. Solving for p, it then seems plausible to use c-'I"(t(Y) - x,) as a point estimator for p. Equating these estimators of y leads to t(Y) = c-'rl(t(cI'Y a)),
+
NOTES AND REFERENCES
157
xo) - xo) or t ( ~+ rx,)~= ~ r t ( +~xo. ) (4.3) An estimator that satisfies (4.3) for all c > 0, E OM(V),and x, E M is called equivariant. (iv) Show that t,(Y) = PMYis equivariant. (v) Show that if t maps V into M and satisfies the equation t ( r Y + x,) = rt(Y) + x, for all r E OM(V)and x, E M, then t(Y) = PMY. 11. Consider U E Rn and V E Rn and assume C(U) = N(Z,/3,,u,,In) and C(V) = N(Z,P,, u2,1n). Here, Z, is n X k of rank k and Pi E Rk is an unknown vector of parameters, i = 1,2. Also, uii > 0 is unknown, i = 1,2. Now, let X = (UV) : n x 2 so p = GX has first column Z I P , and second column Z, P,. (i) When U and V are independent, then Cov(X) = In8 A where
In this case, show that the Gauss-Markov and least-squares estimates for p are the same. Further, show that the Gauss-Markov estimates for P, and P, are the same as what we obtain by treating the two regression problems separately. (ii) Now, suppose Cov(X) = In8 2 where
is positive definite and unknown. For general Z, and Z,, show that the regression subspace of X is not invariant under all In8 2 so the Gauss-Markov and least-squares estimators are not the same in general. However, if Z , = Z,, show that the results given in Example 4.4 apply directly. (iii) If the column space of Z, equals the column space of Z,, show that the Gauss-Markov and least-squares estimators of p are the same for each In8 2 . NOTES AND REFERENCES
1. Scheffi: (1959) contains a coordinate account of what might be called univariate linear model theory. The material in the first section here follows Kruskal (1961) most closely.
158
LINEAR STATISTICAL MODELS
2. The result of Proposition 4.5 is due to Kruskal (1968).
3. Proposition 4.3 suggests that a theory of best linear unbiased estimation can be developed in vector spaces without inner products (i.e., dual spaces are not identified with the vector space via the inner product). For a version of such a theory, see Eaton (1978). 4. The arguments used in Section 4.3 were used in Eaton (1970) to help answer the following question. Given X E C,, with Cov(X) = I,,8 Z where Z is unknown but positive definite, for what subspaces M does there exist a Gauss-Markov estimator for p E M? In other words, with y as in Example 4.4, for what M's can the parameter set {M, y ) admit a Gauss-Markov estimator? The answer to this question is that M must have the form of the subspaces considered in Example 4.4. Further details and other examples can be found in Eaton (1970).
CHAPTER 5
Matrix Factorizations and Jacobians
This chapter contains a collection of results concerning the factorization of matrices and the Jacobians of certain transformations on Euclidean spaces. The factorizations and Jacobians established here do have some intrinsic interest. Rather than interrupt the flow of later material to present these results, we have chosen to collect them together for easy reference. The reader is asked to mentally file the results and await their application in future chapters.
5.1. MATRIX FACTORIZATIONS
We begin by fixing some notation. As usual, Rn denotes n-dimensional coordinate space and C,,. is the space of n X m real matrices. The linear space of n x n symmetric real matrices, a subspace of C,, is denoted byS,. If S E Sn, we write S > 0 to mean S is positive definite and S > 0 means that S is positive semidefinite. Recall that %, ,is the set of all n x p linear isometries of RP into Rn, that is, Q E Gp,,iff Q'Q = Ip. Also, if T E Cn ,, then T = {ti,) is lower triangun triangular matrices with lar if ti, = 0 for i < j. The set of all n ~ ' lower tii > 0, i = 1,. . . , n, is denoted by G ; . The dependence of G: on the dimension n is usually clear from context. A matrix U E C,,, is upper triangular if U' is lower triangular and G& denotes the set of all n x n upper triangular matrices with positive diagonal elements.
.,
160
MATRIX FACTORIZATIONS AND JACOBIANS
Our first result shows that G: and G : are closed under matrix multipli-
cation and matrix inverse. In other words, G; and G t are groups of matrices with the group operation being matrix multiplication.
Proposition 5.1. If T = (ti,) E G:, then T-' E G: and the ith diagonal element of T-' is l/t,,, i = 1,. . . , n. If Tl and T2 E G:, then TIT2E G:.
ProoJ To prove the first assertion, we proceed by induction on n. Assume the result is true for integers 1,2,. . . , n - 1. When T is n X n, partition T as
where TI, is (n - 1) X (n - I), T2, is 1 x (n - I), and t,, is the (n, n) diagonal element of T. In order to be T-', the matrix
must satisfy the equation TA
SO
A l l = T i 1 , a,,
=
=
I,. Thus
l/t,,, and
The induction hypothesis implies that T i 1 is lower triangular with diagonal elements l/t,,, i = 1,. . . , n - 1. Thus the first assertion holds. The second assertion follows easily from the definition of matrix multiplication. Arguing in exactly the same way, G: is closed under matrix inverse and matrix multiplication. The first factorization result in this chapter is next. Proposition 5.2. Suppose A E C,, ,where p g n and A has rank p. Then A = 'kU where 'k E $,, and U E G: is p x p. Further, 'k and U are unique.
ProoJ The idea of the proof is to apply the Gram-Schmidt orthogonalization procedure to the columns of the matrix A. Let a,,. . . , a, be the
PROPOSITION
5.3
161
columns of A so ai E Rn, i = 1,. . . , p. Since A is of rank p, the vectors a,, . . . , a, are linearly independent. Let {b,,. . . , b,) be the orthonormal set of vectors obtained by applying the Gram-Schmidt process to a,, . . . , a, in the order 1,2,. . . , p. Thus the matrix \k with columns b,,. . . , b, is an element of %,, as \kf\k = I,. Since span{a,,. . . , a i ) = span{b,,. . . , b,) for i = 1,. . . , p, bja, = 0 if j > i, and an examination of the Gram-Schmidt Process shows that bja, > 0 for i = 1,. . . , p. Thus the matrix U = +'A is an element of G:, and
But \k\kf is the orthogonal projection onto span{b,,. . . , b,) = span{a,,. . . , a,) so \k\kfA = A, as 9 \ k f is the identity transformation on its range. T h s establishes the first assertion. For the uniqueness of \k and U, assume that A = \k,U, for \k, E 5$,, and U, E G:. Then \k,Ul = \kU, which implies that q'q,= UU;'. Since A is of rank p, Ul must have rank p so %(A) = %(\k,) = %(\k). Therefore, \k,\k;\k = 9 since 9,\k; is the orthogonal projection onto its range. Thus \kf\k,\k;9 = I,-that is, \kf\k1 is a p.X p orthogonal matrix. Therefore, UU;' = Pf\k, is an orthogonal matrix and UU; E G&.However, a bit of reflection shows that the only matrix that is both orthogonal and an element of G: is I,. Thus U = Ul so \k = \kl as U has rank p. 17
'
The main statistical application of Proposition 5.2 is the decomposition of the random matrix Y discussed in Example 2.3. This decomposition is used to give a derivation of the Wishart density function and, under certain assumptions on the distribution of Y = \kU, it can be proved that 9 and U are independent. The above decomposition also has some numerical applications. For example, the proof of Proposition 5.2 shows that if A = \kU, then the orthogonal projection onto the range of A is *\kt = A(AfA)-'A'. Hence this projection can be computed without computing (AfA)-'. Also, if p = n and A = \kU, then A-' = Up'*'. Thus to compute A-I, we need only to compute U- and this computation can be done iteratively, as the proof of Proposition 5.1 shows. Our next hecomposition result establishes a one-to-one correspondence between positive definite matrices and elements of G;. First, a property of positive definite matrices is needed.
'
Proposition 5.3.
For S
E
S, and S > 0,partition S as
162
MATRIX FACTORIZATIONS AND JACOBIANS
where S , , and S,, are both square matrices. Then S,,, S,,, S , , - S,,S,;'S,,, and SZ2- s ~ ~ sare ;~ all positive s ~ ~ definite. Proof. For x E RP, partition x into y and z to be comformable with the partition of S. Then, for x * 0,
0 < x'Sx
= yfS,,y
+ 2zfS2,y+ zfS2,z.
For y * 0 and z = 0, x t 0 so y f S , , y > 0, which shows that S , , > 0. Similarly, S,, > 0. For y * 0 and z = - SG'S,, y, 0 < x'Sx which shows that S , , -
Proposition 5.4.
=y
' ( ~ ,,
s,,s,-,'s,,)y ,
s,,s&'s,,> 0. Similarly, S,,
If S > 0, then S
=
-
S,,S,'S,,
TT' for a unique element T
> 0.
E
Gg.
Proof. First, we establish the existence of T and then prove it is unique. The proof is by induction on dimension. If S E S, with S > 0, partition S as
where S , , is ( p - 1 ) x ( p - 1 ) and S,, E (0, oo). By the induction hypothesis, S,, = T I I T ' , ,for T I , E G;. Consider the equation
which is to be solved for T,, : 1 x ( p - 1 ) and T,, E (0, oo). This leads to the two equations T2,T;, = S,, and T,,T,', + T z .= S,,. Thus TZ1= s21(T;1)-1,SO
Therefore, Hence, T,,
TA = S2, - s,,s;'S,,, =
whch is positive by Proposition 5.3. (S,, - S2,S,1~,,)'/2is the solution for T,, > 0. T h s shows
PROPOSITION5.5
163
that S = TT' for some T E G:. For uniqueness, if S = TT' = TIT;,then T ; ' T T f ( T ; ) - ' = I, so T;'T is an orthogonal matrix. But T r l T E G: and the only matrix that is both orthogonal and in Gg is I,. Hence, TC'T = I, and uniqueness follows. Let S ; denote the set of p X p positive definite matrices. Proposition 5.4 shows that the function F: G: + 5; defined by F ( T ) = TT' is both ; + G$ is also part one-to-one and onto. Of course, the existence of F-' : S of the content of Proposition 5.4. For TI E G:, the uniqueness part of Proposition 5.4 yields F - ' ( T I S T ; )= T , F - ' ( s ) . T h s relationship is used later in this chapter. It is clear that the above result holds for G: replaced by G:. In other words, every S E S,f has a unique decomposition S = UU' for U E G;.
ep,
Proposition 5.5. Suppose A E where p G n and A has rank p. Then A = \kS where \k E %, and S is positive definite. Furthermore, \k and S are unique. Proof. Since A has rank p , A'A has rank p and is positive definite. Let S be the positive definite square root of A'A, so A'A = SS. From Proposition 1.3 1, there exists a linear isometry \k E such that A = \kS. To establish the uniqueness of \I/ and S, suppose that A = \kS = \k,S, where \k, \k, E %, ,., and S and S, are both positive definite. Then % ( A ) = 4. (\k) = % (\k,). As in the proof of Proposition 5.2, this implies that \k'\k,\k;\k = I, since \kl\k; is the orthogonal projection onto %(\k,) = %(\k). Therefore, SS;' = *''PI is a p x p orthogonal matrix so the eigenvalues of SS; are all on the unit circle in the complex plane. But the eigenvalues of SS; are the same as the eigenvalues of s ' / ~ s ; ' S ' / ~(see Proposition 1.39) where S 1 / 2is the positive definite square root of S. Since S1/2S;'S'/2is positive definite, the eigenvalues of S ' / 2 ~ ; ' S ' / 2are all positive. Therefore, the eigenvalues of ~1/2s; 1 ~ must 1 all~ be equal to one, as this is the only point of intersection of (0, oo) with the unit circle in the complex plane. Since the only p x p matrix with all eigenvalues equal to one is the identity, S'/2S;'S'/2 = IP S = S , . Since S is nonsingular, \k = \k,.
3,
'
'
The factorizations established this far were concerned with writing one matrix as the product of two other matrices with special properties. The results below are concerned with factorizations for two or more matrices. Statistical applications of these factorizations occur in later chapters. Proposition 5.6. Suppose A is a p X p positive definite matrix and B is a p x p symmetric matrix. There exists a nonsingular p x p matrix C and a
164
MATRIX FACTORIZATIONS AND JACOBIANS
p X p diagonal matrix D such that A = CC' and B elements of D are the eigenvalues of A-'B.
=
CDC'. The diagonal
Proof. Let All2 be the positive definite square root of A and A - ' I 2 = By the spectral theorem for matrices, there exists a p x p orthogonal matrix r such that r ' A - ' / 2 ~ A - 1 / 2 r= D is diagonal (see Proposition 1.49, and the eigenvalues of A - ' l 2 B A - 'I2are the diagonal elements of D. Let C = A 1 l 2 r . Then CC' = A ' / ~ I ' I ' ' A ' / ~ = A and CDC' = B. Since the eigenvalues of A - '/,BA- 'I2are the same as the eigenvalues of A - ' B , the proof is complete. Proposition 5.7. S as
Suppose S is a p x p positive definite matrix and partition
where S , , is p , x p , and S,, is p, x p, with p , 6 p,. Then there exist nonsingular matrices Aii of dimension pi X pi, i = 1,2, such that Ai,SiiA:, = I,#, i = 1,2, and A , ,S,, A;, = (DO) where D is a p , X p , diagonal matrix and 0 is a p , x ( p, - p ,) matrix of zeroes. The diagonal elements of D 2 are the eigenvalues of S,'S12S,;'S2, where S,, = S;,, and these eigenvalues are all in the interval [0, 11.
Proof. Since S is positive definite, S , , and S,, are positive definite. Let be the positive definite square roots of S , , and S,,. Using s,'(~and in the form Proposition 1.46, write the matrix s,'/~s,,s,;'/~
,
,
where r is a p x p , orthogonal matrix, D is a p , x p diagonal matrix, and 9 is a p , x p2 linear isometry. The p , rows of 9 form an orthonormal set in RP2 and p, - p , orthonormal vectors can be adjoined to '3' to obtain a p2 x p, orthogonal matrix 9, whose first p , rows are the rows of '3'. It is clear that D9
=
(DO)+,
where 0 is a p , x ( p, - p , ) matrix of zeroes. Set A , , = '3''s; ' I 2 and A,, = 9 , ~ , ; ' / ~ so A,,Si,Aii = Ipz for i = 1,2. Obviously, A,,S,,A;, = (DO). Since s,'/~S,,SG'/~ = r D 9 ,
so the eigenvalues of S, ' / 2 ~ , 2 ~ ~ ' ~ 'I2 2 , are ~ , the diagonal elements of D ~ . Since the eigenvalues of s,'/~s,~sG's~,s,'/~ are the same as the eigenvalues of S;'S12S,-,1S2,, it remains to show that these eigenvalues are in [0, 11. By Proposition 5.3, S,, - S I 2 S ~ ' S 2is, positive definite so Ipl S;'/~S,~S;'S~,S,'/~ is positive definite. Thus for x E RPI, 06
x'~,'~~s~~s,-,'s~~s,'~~x 6 x'x,
which implies that (see Proposition 1.44) the eigenvalues of S;'/~S,,S;'S~,S,'/~ are in the interval [O, 11. It is shown later that the eigenvalues of s;'s,,s&'s~, are related to the angles between two subspaces of RP. However, it is also shown that these eigenvalues have a direct statistical interpretation in terms of correlation coefficients, and this establishes the connection between canonical correlation coefficients and angles between subspaces. The final decomposition result in this section provides a useful result for evaluating integrals over the space of p X p positive definite matrices. Proposition 5.8. Let :5 denote the space of p x p positive definite matrices. For S E, ;S partition S as
where S,, is pi x p,, i = 1,2, S,, is p , x p,, and S,, defined on ;5 to S i X ;5 X by
epz,
is a one-to-one onto function. The function h on :5, given by
=
S;,. The function f
x
SL X epz,P, to Spi
is the inverse off.
L Proof. It is routine to verify that f h is the identity function on Si x S x ep2,P1 and h f is the identity function on .5; This implies the assertions of the proposition. 0
0
166
MATRIX FACTORIZATIONS AND JACOBIANS
5.2. JACOBIANS Jacobians provide the basic technical tool for describing how multivariate integrals over open subsets of R n transform under a change of variable. To describe the situation more precisely, let B, and B , be fixed open subsets of R n and let g be a one-to-one onto mapping from B, to B , . Recall that the differential of g, assuming the differential exists, is a function D, defined on B, that takes values in and satisfies
en.
lim
S+O
I l d x + 6 ) - g ( x ) - Dg(x)611 =0 11611
for each x E B,. Here 6 is a vector in R n chosen small enough so that x + 6 E B,. Also, D g ( x ) 6 is the matrix D,(x) applied to the vector 6, and 1 ) . 11 denotes the standard norm on Rn.Let g , , . . . , gn denote the coordinate functions of the vector valued function g. It is well known that the matrix D , ( x ) is given by
In other words, the ( i , j) element of the matrix D , ( x ) is the partial derivative of g, with respect to x, evaluated at x E B,. The Jacobian of g is defined by
so the Jacobian is the absolute value of the determinant of Dg. A formal statement of the change of variables theorem goes as follows. Consider any real valued Bore1 measurable function f defined on the open set B , such that
where dy means Lebesgue measure. Introduce the change of variables y = g ( x ) , x E Bo in the integral jBlf ( y ) dy. Then the change of variables theorem asserts that
167
JACOBIANS
An alternative way to express (5.1) is by the formal expression (5.2)
d ( g ( x ) ) = J,(x) d x ,
x
E
B,.
To give a precise meaning to (5.2), proceed as follows. For each Bore1 measurable function h defined on B, such that jBol h(x)lJ,(x) dx < + co, define
and define I,(h)
=
h ( x ) d ( g ( x ) )= Bo
h ( g l ( x ) )dx. g(B0)
Then (5.2) means that I l ( h )= 1 2 ( h )for all h such that I,(lhJ)< + co. To show that (5.1) and the equality of I , and I , are equivalent, simply set f=hog-'sof~g=h.ThusI,(h)=I,(h)iff
since B , = g(B,). One property of Jacobians that is often useful in simplifying computations is the following. Let B,, B , , and B, be open subsets of Rn,suppose g, is a one-to-one onto map from Bo to B l , and suppose DgI exists. Also, suppose g, is a one-to-one onto map from Bl to B, and assume that Dg2 exists. Then, g, g, is a one-to-one onto map from Bo to B, and it is not difficult to show that 0
Of course, the right-hand side of this equality means the matrix product of D g 2 ( g , ( x )and ) D g l ( x ) .From this equality, it follows that
In particular, if B, = B, and g, = g; I , then g, g, function on B, so its Jacobian is one. Thus 1 = J,,.,,(x>
=
=
g;
J , ~ ( ~ I ( X ) ) J , , ( X ) ,X
'
o
g, is the identity Bo
168
MATRIX FACTORIZATIONS AND JACOBIANS
and
We now turn to the problem of explicitly computing some Jacobians that are needed later. The first few results present Jacobians for linear transformations. Proposition 5.9. Let A be an n x n nonsingular matrix and define g on Rn to Rn by g(x) = A(x). Then Jg(x) = Idet(A)I for x E Rn. Proof. We must compute the differential matrix of g. It is clear that the ith coordinate function off is g, where
Here A
=
{a,,} and x has coordinates x,, . . . , x,. Thus agi -(x)
ax,
so Dg(x) = {a,,}. Thus J,(x)
=
=
a,,
Idet(A)I.
Proposition 5.10. Let A be an n x n nonsingular matrix and let B be a p x p nonsingular matrix. Define g on the np-dimensional coordinate space n to e p , n by
c,,
g(X)
=
AXB'
=
(A 8 B)X.
Then Jg(X) = ldet AlPldet BIn. Proof. First note that A 8 B = (I, 8 B)(A 3€ I,). Setting g,(X) 1,)X and g,(X) = (I, 8 B)X, it is sufficient to verify that
J,,(X)
=
(det Alp
and Jg2(X)= ldet BIn.
=
(A 8
PROPOSITION
169
5.1 1
Let x,, . . . , x, be the columns of the n x p matrix X so xi E Rn. Form the np-dimensional vector
Since(A @ I,)XhascolumnsAx,, ..., Ax,, thematrixofA @ I, as alinear transformation on [XI is
where the elements not indicated are zero. Clearly, the determinant of this matrix is (det A)P since A occurs p times on the diagonal. Since the determinant of a linear transformation is independent of a matrix representation, we have that d e t ( ~@
I,) = (det A)'.
Applying Proposition 5.9, it follows that Jg,(X) = ldet Alp. Using the rows instead of the columns, we find that det(In @ B)
=
(det B)",
Jg2(X)= ldet BIn. Proposition 5.11. Let A be a p X p nonsingular matrix and define the function g on the linear space Sp of p x p real symmetric matrices by
g ( S ) = ASA' Then Jg(S) = ldet Alp+'.
=
(A
@
A)S.
170
MATRIX FACTORIZATIONS AND JACOBIANS
Proof. The result of the previous proposition shows that det(A 8 A) = (det A),, when A 8 A is regarded as a linear transformation on t,,,. However, t h s result is not applicable to the current case since we are considering the restriction of A 8 A to the subspace Sp of eP,,. To establish the present result, write A = r , D r 2 where I?, and r, are p x p orthogonal matrices and D is a diagonal matrix with positive diagonal elements (see Proposition 1.47). Then,
ASA'
=
(A 8 A)S
=
(I?, 8 I',)(D 8 D)(I', 8 r 2 ) S
so the linear transformation A 8 A has been decomposed into the composition of three linear transformations, two of which are determined by orthogonal matrices. We now claim that if r is a p x p orthogonal matrix and g, is defined on SP by
then J,, = 1. To see this, let ( . , .) be the natural inner product on gp,, restricted to S,, that is, let
Then
Therefore, r 8 r is an orthogonal transformation on the inner product space (S,,( . , so the determinant of this linear transformation on Sp is + 1. Thus g, is a linear transformation that is also orthogonal so J,, = 1 and the claim is established. The next claim is that if D is a p X p diagonal matrix with positive diagonal elements and g, is defined on S, by a)),
+
then Jg2= (det D)p+'. In the [ p ( p 1)/2]-dimensional space S,, let sij, 1 < j < i p, denote the coordinates of S. Then it is routine to show that the ( i , j) coordinate function of g, is g,, ij(S) = A,Aj.si, where A,, . . . , A, are the diagonal elements of D. Thus the matrix of the linear transformation g, is a [ p( p 1)/2] x [ p ( p 1)/2] diagonal matrix with diagonal entries
+
+
PROPOSITION 5.12
171
X,X, for 1 < j
< i < p. Hence the determinant of this matrix is the product of the A,hj for 1 < j < i < p. A bit of calculation shows this determinant is (IIh ,)P+'. Since det D = HA,, the second claim is established. To complete the proof, note that g(S)
=
ASA'
=
(I',
@
T,)(D @ D ) ( r 2 @ r 2 ) S = h I ( h 2 ( h 3 ( s ) ) )
where h,(S) = (I?, @ I?,)S, h,(S) A direct argument shows that
=
(D
@
D)S, and h,(S)
But JhI= 1 = Jh,and Jh2 = (det D)P+'. Since A which entails Jg= ldet Alp".
=
=
(I?,
@
r,)S.
r,DI?,, ldet A[ = det D,
Proposition 5.12. Let M be the linear space of p matrices and define g on M to M by
g(S)
=
Xp
skew-symmetric
ASA'
where A is a p x p nonsingular matrix. Then Jg(S) = ldet Alp-'. Proof
reader.
The proof is similar to that of Proposition 5.1 1 and is left to the
Proposition 5.13. Let G: be the set of p x p lower triangular matrices with positive diagonal elements and let A be a fixed element of G:. The function g defined on G; to G; by
g(T) has a Jacobian given by J,(T) elements of A.
= AT, =
T E G;
nfai,where a,,, . . . , a,,
+
are the diagonal
The set G; is an open subset of [ i p ( p l)]-dimensional coordinate space and g is a one-to-one onto function by Proposition 5.1. For T E G:, form the vector [TI with coordinates t,,, t,,, t,,, t,,,. . . , tpp and write the coordinate functions of g in the same order. Then the matrix of partial derivatives is lower triangular with diagonal elements Proof
172
MATRIX FACTORIZATIONS AND JACOBIANS
a33,.. . , a,, where aii occurs i times on the diagonal. Thus the a,,, determinant of this matrix of partial derivatives is l-Ifaii so Jg = nfa:,.
Proposition 5.14. G; by
In the notation of Proposition 5.13, define g on G: g ( T ) = TB,
to
T E G;
where B is a fixed element of G: . Then J g ( T )= IIfb:-'+ are the diagonal elements of B.
' where b ,,,. . . , b,,
ProoJ: The proof is similar to that of Proposition 5.13 and is omitted. Proposition 5.15. Let G: be the set of all p X p upper triangular matrices with positive diagonal elements. For fixed elements A and B of G:, define g by
Then,
where a,,, . . . , a,, and b,,, . . . , b,, are diagonal elements of A and B. ProoJ: The proof is similar to that given for lower triangular matrices and is left to the reader. Thus far, only Jacobians of linear transformations have been computed explicitly, and, of course, these Jacobians have been constant functions. In the next proposition, the Jacobian of the nonlinear transformation described in Proposition 5.8 is computed. Proposition 5.16. Let p , and p, be positive integers and set p = p, + p2. Using the notation of Proposition 5.8, define h on :S x SA X Cp,,,, to S ; by
Then Jh(All,A,,, A,,)
=
(det
PROPOSITION
173
5.16
Proof. For notational convenience, set S = h(A,,, A,,, A,,) and partition S as
where Sii is pi x pi, i, j = 1,2. The partial derivatives of the elements of S, as functions of the elements of A,,, A,, and A,,, need to be computed. Since S,, = A,, + A,,A,,A;,, the matrix of partial derivatives of thep,(p, 1)/2 elements of S,, with respect to t h e p , ( p , + 1)/2 elements of A,, is just the [ p ,( p I 1)/2]-dimensional identity matrix. Since S,, = A,, A,,, the matrix of partial derivatives of thep,p, elements of S,, with respect to the elements of A,, is the p , p, X p , p, zero matrix. Also, since S,, = A,,, the partial derivative of elements of S,, with respect to the elements of A,, or A,, are all zero and the matrix of partial derivatives of the p,(p, + 1)/2 elements of S,, with respect to the p,(p, + 1)/2 elements of A,, is the identity matrix. Thus the matrix of partial derivatives has the form
+
+
Sll
s22
All I,
1:
A,,
A,,
-
-
r]
so the determinant of this matrix is just the determinant of the p , p, X p , p, matrix B, which must be found. However, B is the matrix of partial derivatives of the elements of S,, with respect to the elements of A,, where S,, = A,,A,,. Hence the determinant of B is just the Jacobian of the transformation g(A,,) = A,,A,, with A,, fixed. This Jacobian is (det A,,)Pl by Proposition 5.10. As an application of Proposition 5.16, a special integral over the space
5; is now evaluated.
+
Example 5.1. Let dS denote Lebesgue measure on the set 5 ;. The integral below arises in our discussion of the Wishart distribution. For a positive integer p and a real number r > p - 1, let
In this example, the constant c(r, p ) is calculated. When p
=
1,
174
MATRIX FACTORIZATIONS AND JACOBIANS
S;
=
(0, oo) so for r > 0,
where T(r/2) is the gamma function evaluated at r/2. The first claim is that
for r > p and p >, 1. To verify this claim, consider S partition S as
where S,, E S;, S2, E (0, oo), and S12 is p change of variables
X
E
S>, and
1. Introduce the
where A,, E S ;, AZ2E (0, a),and A,, E RP. By Proposition 5.16, the Jacobian of this transformation is A!2. Since det S = det(S,, ~ , , ~ , ; ' ~ ; , ) d eS2, t = (det A,,)A2,, we have
Integrating with respect to A,, yields
Substituting this into the second integral expression for c(r, p
+ 1)
PROPOSITION
5.16
and then integrating on A,, shows that
This establishes the first claim. Now, it is an easy matter to solve for c(r, p). A bit of manipulation shows that with
for p
=
1,2,. . . , and r > p - 1, the equation c(r, p
+ 1) = ( 2 ~ ) ~ / , c ( lr ), c ( r - 1, p )
is satisfied. Further,
Uniqueness of the solution to the above equation is clear. In summary,
and this is valid for p = 1,2,. . . and r > p - .I. The restriction that r be greater than p - 1 is necessary so that T[(r - p 1)/2] be well defined. It is not difficult to show that the above integral is + oo if r 6 p - 1. Now, set w ( r , p) = l/c(r, p) so
+
is a density function on .5 ; When r is an integer, r 2 p, f turns out to be the density of the Wishart distribution.
+
Proposition 5.4 shows that there is a one-to-one correspondence between elements of ;5 and elements of G;. More precisely, the function g defined on G; by g(T)
=
TT',
T E G;
is one-to-one and onto S,i. It is clear that g has a differential since each
176
MATRIX FACTORIZATIONS AND JACOBIANS
coordinate function of g is a polynomial in the elements of T. One way to find the Jacobian of g is to simply compute the matrix of partial derivatives and then find its determinant. As motivation for some considerations in the next chapter, a different derivation of the Jacobian of g is given here. The first observation is as follows. Proposition 5.17. Let dS denote Lebesgue measure on ;5$ and consider the measure p on S ; gven by p(dS) = ~ S / I S I ( P + ' ) /For ~ . each Bore1 measurable function f on S ;, whch is integrable with respect to p, and for each nonsingular matrix A,
Proof. Set B = ASA'. By Proposition 5.11, the Jacobian of this transforma; to S; is ldet Alp+'. Thus tion on S
The result of Proposition 5.17 is often paraphrased by saying that the measure p is invariant under each of the transformations g, defined on ;5$ by g,(S) = ASA'. The following calculation gives a heuristic proof of this result:
-
ldet Alp+'
dS
-
~ A A ' I ( P + ' ) /I~S I ( P + ' ) / ~
dS I S ~ ( P +')I2=
~(ds).
In fact, a similar calculation suggests that p is the only invariant measure in S; (up to multiplication of p by a positive constant). Consider a measure v
PROPOSITION
177
5.18
of the form v ( d S ) = h ( S ) dS where h is a positive Borel measurable function and dS is Lebesgue measure. In order that v be invariant, we must have
so h should satisfy the equation
since g A ( S )= ASA' and ldet Alp+' and c = h ( I p ) .Then
=
IAA'I(P+')/2.Set S
=
I,, B
=
AA',
where c is a positive constant. Making t h s argument rigorous is one of the topics treated in the next chapter. The calculation of the Jacobian of g on G; to S ; is next. Proposition 5.18. For g ( T ) = TT', T
E
G;,
where t , , ,. . . , tpp are the diagonal elements of T Proof: The Jacobian J, is the unique continuous function defined on G ; that satisfies the equation
for all Borel measurable functions f for which the integral over Sl exists. But the left-hand side of this equation is invariant under the replacement of f ( S ) by f (ASA') for any nonsingularp x p matrix. Thus the right-hand side
178
MATRIX FACTORIZATIONS AND JACOBIANS
must have the same property. In particular, for A
E
G:,
we have
In this second integral, we make the change of variable T = A-'B for A E G: fixed and B E G:. By Proposition 5.12, the Jacobian of this ~ a,,, . . a, are the diagonal elements of A. transformation is l / l 7 [ ~ : where Thus
I,
Since this must hold fox all Bore1 measurable f and since Jg is a continuous function, it follows that for all T E G; and A E G;,
Setting A
=
T and noting that IT1
=
n f t i i , we have
Thus J,(T) is a constant k times nftlq-'+I. Hence
To evaluate the constant k, pick f (S)
But
=
1 ~ 1 ' / ~ e x ~ [t -r fs ] ,
r >p - 1
PROPOSITION
5.19
where c ( r , p ) is defined in Example 5.1. However,
so k = 2P. The evaluation of the last integral is carried out by noting that tii ranges from 0 to ca and t i , for j < i ranges from - ca to w . Thus the integral is a product of p ( p + 1 ) / 2 integrals on R , each of which is easy to evaluate.
A by-product of t h s proof is that
is a density function on G:. Since the density h factors into a product of densities, the elements of T, ti, for j < i, are independent. Clearly,
t? ( t i , ) = N ( 0 , l )
for j < i
and C(t:') = ~ 2 , - i + l
when r is the integer n 2 p. Proposition 5.19. Define g on GG to given by
S;
by g ( U ) = UU'. Then J,(U) is
where u , , , . . . , up, are the diagonal elements of U. Pro05 The proof is essentially the same as the proof of Proposition 5.18 and is left to the reader.
The technique used to prove Proposition 5.18 is an important one. Given g on G; to S ;, the idea of the proof was to write down the equation the
180
MATRIX FACTORIZATIONS AND JACOBIANS
Jacobian satisfies, namely,
for all integrable f . Since this equation must hold for all integrable f , J, is uniquely defined (up to sets of Lebesgue measure zero) by this equation. It is clear that any property satisfied by the left-hand integral must also be satisfied by the right-hand integral and this was used to characterize J,. In particular, it was noted that the left-hand integral remained the same iff (S) was replaced by f (ASA') for an nonsingular A. For A E G;, this led to the equation
which determined J,. It should be noted that only Jacobians of the linear transformations discussed in Propositions 5.1 1 and 5.13 were used to determine the Jacobian of the nonlinear transformation g. Arguments similar to this are used throughout Chapter 6 to derive invariant integrals (measures) on matrix groups and spaces that are acted upon by matrix groups.
PROBLEMS
eP,
1. Given A E with rank(A) = p, show that A and T E G;. Prove that 9 and T are unique.
=
9 T where 9
E
5,
2. Define the function F on S l to G$ as follows. For each S E,:S F(S) is the unique element in G; such that S = F(S)(F(S))'. Show that F(TSTf) = TF(S) for T E G$ and S E 5;.
3. Given S 4.
E
For S E , ;S
S;, show there exists a unique U E G: such that S = UU'. partition S as
where Sij is pi X p,, i, j
=
1,2. Assume for definiteness that p ,
< p,.
181
PROBLEMS
:(I
Show that S can be written as
:;(
(DO) Ip;
P
=
(
(
D
O
g,);
where A, is p, X p, and nonsingular, D is p , x p , and diagonal with diagonal elements in [0, 1). 5.
Let C,;
, be those elements in
ep,,
that have rank p. Define F on by F(\k, U) = \kU. (i) Show that F is one-to-one onto, and describe the inverse of F. (ii) For I' E 0, and T E G;, define @ T on e;,, to e;, , by ( r @ T)A = rAT'. Show that (I? @ T)F(\k, U) = F(r\k, UT'). Also, show that F-'((r @ T)A) = (r\k, UT') where FP'(A) = (*, U).
$,,X G& to C;,,
6. Let B, and B, be open sets in Rn and fix x, E B,. Suppose g maps B, into B1 and g(x) = g(x,) + A(x - x,) R(x - x,) where A is an n X n matrix and R ( . ) is a function that satisfies
+
llR(u)Il = 0. lim llull
u-0
Prove that A
=
Dg(xo)so Jg(xo)= Idet(A)I.
7. Let V be the linear coordinate space of all p x p lower triangular real matrices so V is of dimension p ( p 1)/2. Let Sp be the linear coordinate space of all p x p real symmetric matrices so 5, is also of dimension p ( p + 1)/2. (i) Show that G ; is an open subset of V. (ii) Define g on G; to Sp by g(T) = TT'. For fixed To E G;, show that g(T) = g(To) + L(T - To) + (T - T,)(T - To)' where L is defined on V to Sp by L(x) = xTd T,x', x E V. Also show that R(T - To) = ( T - T,)(T - To)' satisfies
+
+
IIR(x)ll - 0. lim llxll
x+O
(iii) Prove by induction that det L = 2PIIftlq-'+' where t,,,. . . , tpp are the diagonal elements of To. (iv) Using (iii) and Problem 6, show that Jg(T) = 2Pnftiq-'+ (This is just Proposition 5.18).
'.
182
MATRIX FACTORIZATIONS AND JACOBIANS
8. When S is a positive definite matrix, partition S and S - as
Show that
s" = ( s , ,- s ~ s 1 2 = -s~Is s-1 12 22
and verify the identity
s-1s sll 22 21
= ~
~
2 21
9. In coordinate space RP, partition x as x tion Z : p x p conformably as
s
s-l 2 11
=
(:),
G
~
~
~
~
~ *
and for Z > 0, parti-
Define the inner product (., .) on RP by ( u , v ) = u'Z-lv. (i) Show that the matrix
defines an orthogonal projection in the inner product (., .). What is % ( P ) ? (ii) Show that the identity
is the same as the identity
where ( x , x ) = 1
1 ~ 1 and 1~ x
=
(i).
)
-
~
NOTES AND REFERENCES
(iii) For a random vector
with C ( X ) = N(0, Z), Z > 0, use part (ii) to gve a direct proof via densities that the conditional distribution of Y given Z is N(Z,2%2'Z> 21, - Z12Z221221). 10. Verify the equation
where c(r, p) is given in Example 5.1. Here, r is real, r > p - 1
NOTES AND REFERENCES 1.
Other matrix factorizations of interest in statistical problems can be found in Anderson (1958), Rao (1973), and Muirhead (1982). Many matrix factorizations can be viewed as results that give a maximal invariant under the action of a group-a topic discussed in detail in Chapter 7.
2. Only the most elementary facts concerning the transformation of measures under a change of variable have been given in the second section. The Jacobians of other transformations that occur naturally in statistical problems can be found in Deemer and Olkin (1951), Anderson (1958), James (1954), Farrell (1976), and Muirhead (1982). Some of these transformations involve functions defined on manifolds (rather than open subsets of R") and the corresponding Jacobian calculations require a knowledge of differential forms on manifolds. Otherwise, the manipulations just look like magic that somehow yields answers we do not know how to check. Unfortunately, the amount of mathematics behind these calculations is substantial. The mastery of this material is no mean feat. Farrell (1976) provides one treatment of the calculus of differential forms. James (1954) and Muirhead (1982) contain some background material and references. 3. I have found Lang (1969, Part Six, Global Analysis) to be a very readable introduction to differential forms and manifolds.
Topological Groups and Invariant Measures
The language of vector spaces has been used in the previous chapters to describe a variety of properties of random vectors and their distributions. Apart from the discussion in Chapter 4, not much has been said concerning the structure of parametric probability models for distributions of random vectors. Groups of transformations acting on spaces provide a very useful framework in whlch to generate and describe many parametric statistical models. Furthermore, the derivation of induced distributions of a variety of functions of random vectors is often simplified and clarified using the existence and uniqueness of invariant measures on locally compact topological groups. The ideas and techniques presented in this chapter permeate the remainder of this book. Most of the groups occurring in multivariate analysis are groups of nonsingular linear transformations or related groups of affine transformations. Examples of matrix groups are given in Section 6.1 to illustrate the definition of a group. Also, examples of quotient spaces that arise naturally in multivariate analysis are discussed. In Section 6.2, locally compact topological groups are defined. The existence and uniqueness theorem concerning invariant measures (integrals) on these groups is stated and the matrix groups introduced in Section 6.1 are used as examples. Continuous homomorphisms and their relation to relatively invariant measures are described with matrix groups again serving as examples. Some of the material in this section and the next is modeled after Nachbin (1965). Rather than repeat the proofs given in Nachbin (1965), we have chosen to illustrate the theory with numerous examples. Section 6.3 is concerned with the existence and uniqueness of relatively invariant measures on spaces that are acted on transitively by groups of
185
GROUPS
transformations. In fact, this situation is probably more relevant to statistical problems than that discussed in Section 6.2. Of course, the examples are selected with statistical applications in mind.
6.1.
GROUPS
We begin with the definition of a group and then give examples of matrix groups. Definition 6.1. A group (G, 0) is a set G together with a binary operation 0 such that the following properties hold for all elements in G: 6) (g, g,) g, = g, O (g2 O g,). (ii) There is a unique element of G, denoted by e , such that g o e = e 0 g = g for all g E G. The element e is the identity in G. (iii) For each g E G, there is a unique element in G, denoted by g-I, such that g g- = gg = e . The element g- is the inverse of g. O
O
0
'
'
0
'
Henceforth, the binary operation is ordinarily deleted and we write g, g, for g, g,. Also, parentheses are usually not used in expressions involving more than two group elements as these expressions are unambiguously defined in (i). A group G is called commutative if g,g, = g,g, for all g,, g, E G. It is clear that a vector space V is a commutative group where the group operation is addition, the identity element is 0 E V, and the inverse of x is - x . 0
+
+
Example 6.1. If (V, (., .)) is a finite dimensional inner product space, it has been shown that the set of all orthogonal transformations O(V) is a group. The group operation is the composition of linear transformations, the identity element is the identity linear transformation, and if r E O(V), the inverse of r is I". When V is the coordinate space Rn, O(V) is denoted by 8,, which is just the group of n x n orthogonal matrices. Example 6.2. Consider the coordinate space RP and let G; be the set of all p X p lower triangular matrices with positive diagonal elements. The group operation in G; is taken to be matrix multiplication. It has been verified in Chapter 5 that G; is a group, the identity in G; is the p X p identity matrix, and if T E G;, T-' is
+
186
+
TOPOLOGICAL GROUPS AND INVARIANT MEASURES
just the matrix inverse of T. Similarly, the set of p x p upper triangular matrices with positive diagonal elements GL is a group with the group operation of matrix multiplication.
+
Example 6.3. Let V be an n-dimensional vector space and let Gl(V) be the set of all nonsingular linear transformations of V onto V. The group operation in Gl(V) is defined to be composition of linear transformations. With this operation, it is easy to verify that GI(V) is a group, the identity in Gl(V) is the identity linear transformation, and if g E GI(V), g-' is the inverse linear transformation of g. The group GI(V) is often called the general linear group of V. When V is the coordinate space Rn, GI(V) is denoted by GI,. Clearly, GI, is just the set of n x n nonsingular matrices and the group operation is matrix multiplication.
+
It should be noted that C(V) is a subset of Gl(V) and the group operation in O(V) is that of GI(V). Further, Gg and GA are subsets of GI, with the inherited group operations. This observation leads to the definition of a subgroup. Definition 6.2. If (G, 0) is a group and H is a subset of G such that (H, 0) is also a group, then (H, 0) is a subgroup of (G, o). In all of the above examples, each element of the group is also a one-to-one function defined on a set. Further, the group operation is in fact function composition. To isolate the essential features of this situation, we define the following. Definition 6.3. Let (G, 0) be a group and let GX be a set. The group (G, 0) acts on the left of GX if to each pair (g, x) E G X %, there corresponds a unique element of GX, denoted by gx, such that 6) g,(g,x) (ii) ex = x.
=
(g,
O
g2)x.
The content of Definition 6.3 is that there is a function on G x % to % whose value at (g, x) is denoted by gx and under t h s mapping, (g,, g,x) and (g, g,, x ) are sent into the same element. Furthermore, (e, x) is mapped to x. Thus each g E G can be thought of as a one-to-one onto function from GX to GX and the group operation in G is function composition. To make t h s claim precise, for each g E G, define t, on GX to % by t,(x) = gx. 0
PROPOSITION
187
6.1
Proposition 6.1. Suppose G acts on the left of 5%. Then each t , is a one-to-one onto function from % to GX and: (i) tg1fg2= t,, (ii) t i 1 = t , - I .
0
g2.
Proof. To show t , is onto, consider x E %. Then t g ( g P ' x )= g ( g P ' x ) = ( g 0 g - ' ) x = ex = x where (i) and (ii) of Definition 6.3 have been used. Thus t , is onto. If t , ( x , ) = t g ( x 2 ) ,then gx, = gx, so
Thus t , is one-to-one. Assertion (i) follows immediately from (i) of Definition 6.3. Since te is the identity function on % and (i) implies that
we have t,-!
=
ti'.
Henceforth, we dispense with t , and simply regard each g as a function on % to 5% where function composition is group composition and e is the identity function on GX. All of the examples considered thus far are groups of functions on a vector space to itself and the group operation is defined to be function composition. In particular, G l ( V ) is the set of all one-to-one onto linear transformations of V to V and the group operation is function composition. In the next example, the motivation for the definition of the group operation is provided by thinlung of each group element as a function. 4
Example 6.4. Let V be an n-dimensional vector space and consider the set A l ( V ) that is the collection of all pairs ( A , x ) with A E G l ( V ) and x E V . Each pair ( A , x ) defines a one-to-one onto function from V to V by
The composition of ( A , , x , ) and ( A , , x , ) is ( A , , x 1 ) ( A 2 ,x 2 ) v
=
( A , ,x l ) ( A 2 v+ x 2 )
=
AlA2v + Alx2 + x,
188
TOPOLOGICAL GROUPS AND INVARIANT MEASURES
Also, (I,0) E AI(V) is the identity function on V and the inverse of (A, x) is (A-I, -A-'x). It is now an easy matter to verify that AI(V) is a group where the group operation in AI(V) is
This group Al(V) is called the affine group of V. When V is the coordinate space Rn,Al(V) is denoted by Al,.
+
An interesting and useful subgroup of AI(V) is given in the next example.
+
Example 6.5. Suppose V is a finite dimensional vector space and let M be a subspace of V. Let H be the collection of all pairs (A, x ) where x E M, A(M) G M, and (A, x ) E Al(V). The group operation in H is that inherited from AI(V). It is a routine calculation to show that H is a subgroup of Al(V). As a particular case, suppose that V is Rn and M is the m-dimensional subspace of Rn consisting of those vectors x E Rn whose last n - m coordinates are zero. An n X n matrix A E GI, satisfies AM G M iff
where A,, is m x m and nonsingular, A,, is m x (n - m), and A,, is (n - m) x (n - m) and nonsingular. Thus H consists of all pairs (A, x) where A E GI, has the above form and x has its last n - m coordinates zero.
+
Example 6.6. In this example, we consider two finite groups that arise naturally in statistical problems. Consider the space Rn and let P be an n x n matrix that permutes the coordinates of a vector x E Rn. Thus in each row and in each column of P, there is a single element that is one and the remaining elements are zero. Conversely, any such matrix permutes the coordinates of vectors in Rn. The set 9, of all such matrices is called the group of permutation matrices. It is clear that Tn is a group under matrix multiplication and 9, has n! elements. Also, let 9, be the set of all n >( n diagonal matrices whose diagonal elements are plus or minus one:. Obviously, 9, is a group under matrix multiplication and 9, has 2" elements. The group 9, is called the group of sign changes on iRn. A bit of reflection shows that both Tn and 9, are subgroups of 0,.Now, let
+
H be the set
The claim is that H is a group under matrix multiplication. To see this, first note that for P E 9, and D E q n , PDP' is an element of 9,. Thus if P I D , and P2D, are in H , then
where P,
=
PIP2 and D,
=
P;D, P2D2. Also,
+
Therefore H i s a group and clearly has 2"n! elements.
Suppose that G is a group and H is a subgroup of G. The quotient space G / H , to be defined next, is often a useful representation of spaces that arise in later considerations. The subgroup H of G defines an equivalence relation in G by g, = g2 iff g; I g , E H. That = is an equivalence relation is easily verified using the assumption that H is a subgroup of G. Also, it is not difficult to show that g, = g2 iff the set g l H = {g,hlh E H ) is equal to the set g2H. Thus the set of points in G equivalent to g, is the set g, H. Definition 6.4. If H is a subgroup of G , the quotient space G / H is defined to be the set whose elements are gH for g E G.
The quotient space G / H is obviously the set of equivalence classes (defined by H ) of elements of G. Under certain conditions on H , the quotient space G / H is in fact a group under a natural definition of a group operation. Definition 6.5. A subgroup H of G is called a normal subgroup if g- 'Hg for all g E G.
When H is a normal subgroup of G , and giH
since H H
=
H.
E
G / H for i
=
=
H
1,2, then
190
TOPOLOGICAL GROUPS AND INVARIANT MEASURES
Proposition 6.2. When H is a normal subgroup of G, the quotient space
G/H is a group under the operation
Proof: This is a routine calculation and is left to the reader.
+
Example 6.7. Let A l ( V ) be the affine group of the vector space V. Then H = { ( I ,x)lx
E
V}
is easily shown to be a subgroup of G, since ( I , x , ) ( I , x,) = ( I , x , x,). To show H is normal in AI(V), consider ( A , x ) E A I ( V ) and ( I , x,) E H. Then
+
which is an element of H. Thus gglHg c H for all g if ( I , x,) E H and ( A , x ) E Al(V), then ( A , x ) - ' ( ~A, x o ) ( A ,x )
=
E
Al(V). But
( I ,x O )
so g-'Hg = H, for g E Al(V). Therefore, H is normal in Al(V). To describe the group AI(V)/H, we characterize the equivalence relation defined by H. For ( A , , x,) E Al(V), i = 1,2,
is an element of H iff Ac1A2= I or A, = A,. Thus ( A , , x , ) is equivalent to (A,, x,) iff A , = A,. From each equivalence class, select the element (A,O). Then it is clear that the quotient group A l ( V ) / H can be identified with the group
where the group operation is
Now, suppose the group G acts on the left of the set %. We say G acts transitively on X if, for each x , and x , in %, there exists a g E G such that gx, = x,. When G acts transitively on %, we want to show that there is a natural one-to-one correspondence between % and a certain quotient space. Fix an element x , E % and let
H
=
{hlhx, = x,, h
E
G).
The subgroup H of G is called the isotropy subgroup of x,. Now, define the function 7 on G / H to % by r ( g H ) = gx,.
Proposition 6.3. The function T is one-to-one and onto. Further,
Pro05 The definition of r clearly makes sense as ghx, = gx, for all h E H. Also, r is an onto function since G acts transitively on %. If r ( g , H ) = r ( g , H ) , then g,x, = g2x0 SO g;lgl E H. Therefore, g,H = g,H so r is one-to-one. The rest is obvious. If H is any subgroup of G, then the group G acts transitively on
5X = G / H where the group action is
Thus we have a complete description of the spaces % that are acted on transitively by G. Namely, these spaces are simply relabelings of the quotient spaces G / H where H is a subgroup of G. Further, the action of g on 5X corresponds to the action of G on the quotient space described in Proposition 6.3. A few examples illustrate these ideas.
+
3,
Example 6.8. Take the set X to be ,--the set of n x p real matrices \k that satisfy \k'\k = I,, 1 < p < n. The group G = 0, of all n x n orthogonal matrices acts on %,, by matrix multiplication. That is, if r E 0, and \k E , then T\k is the matrix product of T and \k. To show that this group action is transitive, consider \k, and 'k, in %, ,. Then, the columns of 9,form a set of p orthonormal
3,
192
TOPOLOGICAL GROUPS AND INVARIANT MEASURES
vectors in R n as do the columns of 'P2. By Proposition 1.30, there exists an n X n orthogonal matrix that maps the columns of .k, into the columns of q2.Thus r'P, = 'P2 so 0, is transitive on ,. A convenient choice of x , E %, to define the map r is
r
5,
where 0 is a block of ( n - p ) X p zeroes. It is not difficult to show that the subgroup H = ( r l r x , = x,, r E 0,) is
The function r is
which is the n X p matrix consisting of the first p columns of T. T h s gives an obvious representation of %, ,,.
+
+ Example 6.9. Let % be the set of all p x p positive definite
matrices and let G = GI,. The transitive group action is given by A ( x ) = AxA' where A is a p X p nonsingular matrix, x E %, and A' is the transpose of A . Choose x , E % to be I,. Obviously, H = Op and the map r is given by r ( A H ) = A(x,)
=
AA'
The reader should compare this example with the assertion of Proposition 1.31.
+
Example 6.10. In t h s example, take % to be the set of all n X p real matrices of rank p, p d n. Consider the group G defined by
where G; is the group of all p x p lower triangular matrices with positive diagonal elements. Of course, 8 denotes the Kronecker product and group composition is
+
PROPOSITION
6.3
The action of G on % is
To show G acts transitively on %, consider X I , X2 E % and write E G : , i = 1,2 (see Proposition Xi = \k,q,where \k, E %, and 5.2). From Example 6.8, there is a r E 8, such that r\k, = %k2.Let T' = U r 1 u 2so
Choose Xo
E
% to be
as in Example 6.8. Then the equation (r 8 T ) X o = Xo implies that I,
=
XAXo
=
( ( r8 T ) x , ) ' ( ~8 T ) X o = TX;T'rX,T'
=
TT'
so T = Ip by Proposition 5.4. Then the equation (r 8 I p ) X o = X, is exactly the equation occurring in Example 6.8 for elements of the subgroup H. Thus for this example,
Therefore,
is the representation for elements of X. Obviously,
and the representation of elements of % via the map r is precisely the representation established in Proposition 5.2. This representation of 3iis used on a number of occasions.
+
194
TOPOLOGICAL GROUPS AND INVARIANT MEASURES
6.2. INVARIANT MEASURES AND INTEGRALS Before beginning a discussion of invariant integrals on locally compact topological groups, we first outline the basic results of integration theory on locally compact topological spaces. Consider a set X and let & be a Hausdorff topology for 5%. Definition 6.6. The topological space ( X , &)is a locally compact space if for each x E X , there exists a compact neighborhood of x. Most of the groups introduced in the examples of the previous section are subsets of the space Rm, for some m, and when these groups are given the topology of Rm, they are locally compact spaces. The verification of t h s is not difficult and is left to the reader. If (%, & ) is a locally compact space, X ( % ) denotes the set of all continuous real-valued functions that have compact support. Thus f E X ( X ) if f is a continuous and there is a compact set K such that f ( x ) = 0 if x P K. It is clear that X ( X ) is a real vector space with addition and scalar multiplication being defined in the obvious way. Definition 6.7. A real-valued function J defined on X ( X ) is called an integral if: (i) J(alfl + a2f2)= aIJ(fl) + a2J(f2) for (~1, a2 E R and f ~f 2, E %(XI. (ii) J ( f ) > O i f f > , O , f ~ X ( % ) . An integral J is simply a linear function on X ( X ) that has the additional property that J ( f j is nonnegative when f > 0. Let be the a-algebra generated by the compact subsets of X. If p is a measure on a ( % ) such that p(K) < w for each compact set K, it is clear that
a(%)
+
defines an integral on X(%). Such measures p are called Radon measures. Conversely, given an integral J , there is a measure p on a ( % ) such that p ( K ) < w for all compact sets K and
+
for f E X(%). For a proof of this result, see Segal and Kunze (1978,
INVARIANT MEASURES AND INTEGRALS
Chapter 5). In the special case when (%, &)is a a-compact space-that
1% is,
3C = u ;*K, where K , is compact-then the correspondence between in-
tegrals J and measures p that satisfy p ( K ) < + oo for K compact is one-to-one (see Segal and Kunze, 1978). All of the examples considered here are a-compact spaces and we freely identify integrals with Radon measures and vice versa. Now, assume (%, &) is a a-compact space. If an integral J on X ( % ) corresponds to a Radon measure p on % (%), then J has a natural extension to the class of all %(%)-measurable and p-integrable functions. Namely, J is extended by the equation
for all f for which the right-hand side is defined. Obviously, the extension of J is unique and is determined by the values of J on X ( % ) . In many of the examples in thls chapter, we use J to denote both an integral on X ( % ) and its extension. With this convention, J is defined for any %(%) measurable . function that is p-integrable where p corresponds to J . Suppose G is a group and & is a topology on G. Definition 6.8. Given the topology & on G, (G, & ) is a topological group if the mapping (x, y) + xy-' is continuous from G X G to G. If (G, & ) is a topological group and (G, & )is a locally compact topological space, (G, & ) is called a locally compact topological group
In what follows, all groups under consideration are locally compact topological groups. Examples of such groups include the vector space Rn, the general linear group GI,, the affine group Al,, and G;. The verification that these groups are locally compact topological groups with the Euclidean space topology is left to the reader. If (G, & )is a locally compact topological group, X(G) denotes the real vector space of all continuous functions on G that have compact support. For s E G and f E X(G), the left translate of f by s, denoted by sf, is defined by (sf)(x) = f(sP'x), x E G. Clearly, sf E X(G) for all s E G. Similarly, the right translate off E X(G), denoted byfs, is (fs)(x) = f (xs- ') and fs E X(G). Definition 6.9. An integral J * 0 on X(G) is left invariant if J(sf) = J ( f ) for all f E X(G) and s E G. An integral J * 0 on X(G) is right invariant if J ( fs) = J ( f ) for all f E X(G) and s E G .
1%
TOPOLOGICAL GROUPS AND INVARIANT MEASURES
The basic properties of left and right invariant integrals are summarized in the following two results. Theorem 6.1. If G is a locally compact topological group, then there exist left and right invariant integrals on X(G). If J , and J2 are left (right) invariant integrals on X(G), then J2 = cJ, for some positive constant c. Proof.
See Nachbin (1965, Section 4, Chapter 2).
Theorem 6.2. Suppose that
is a left invariant integral on X(G). Then there exists a unique continuous function A, mapping G into (0, co) such that
for all s E G and f E X(G). The function A,, called the right-hand modulus of G, also satisfies:
Further, the integral
is right invariant. Proof. See Nachbin (1965, Section 5, Chapter 2). The two results above establish the existence and uniqueness of right and left invariant integrals and show how to construct right invariant integrals from left invariant integrals via the right-hand modulus A,. The right-hand modulus is a continuous homomorphism from G into (0, co)-that is, A, is continuous and satisfies A,(st) = A,(s)A,(t), for s, t E G. (The definition of a homomorphism from one group to another group is gven shortly.) Before presenting examples of invariant integrals, it is convenient to introduce relatively left (and right) invariant integrals. Proposition 6.4, given
below, provides a useful method for constructing invariant integrals from relatively invariant integrals.
Definition 6.10. A nonzero integral J on X ( G ) given by
is called relatively left invariant if there exists a function such that
for all s
E
G and f
E
x on G to (0, oo)
X(G). The function x is the multiplier for J .
It can be shown that any multiplier x is continuous (see Nachbin, 1965). Further, if J is relatively left invariant with multiplier X, then for s, t E G and f E X(G),
Thus st) = x(s)x(t). Hence all multipliers are continuous and are homomorphisms from G into (0, oo). For any such homomorphism X, it is clear that ~ ( e =) 1 and ~ ( s - ' )= l/x(s). Also, x(G) = {x(s)ls E G) is a subgroup of the group (0, oo) with multiplication as the group operation.
Proposition 6.4. Let (i) If J ( f )
=
x be a continuous hornomorphisin on G to (0, oo).
jf(x)p(dx) is left invariant on X(G), then
is a relatively left invariant integral on X(G) with multiplier X.
198
TOPOLOGICAL GROUPS AND INVARIANT MEASURES
(ii) If J,( f )
then
=
j f ( x ) r n ( d x ) is relatively left invariant with multiplier X ,
is a left invariant integral. Proot
The proof is a calculation. For (i),
=
x ( s ) J l ( f ).
Thus J, is relatively left invariant with multiplier X . For (ii),
Thus J is a left invariant integral and the proof is complete. If J is a relatively left invariant integral with multiplier X , say
the measure m is also called relatively left invariant with multiplier X . A nonzero integral J , on X(G) is relatively right invariant with multiplier x if J,( fs) = x ( s ) J , (f ). Using the results given above, if J , is relatively right invariant with multiplier X , then J , is relatively Ieft invariant with multiplier
x / A , where A, is the right-hand modulus of G. Thus all relatively right and left invariant integrals can be constructed from a given relatively left (or right) invariant integral once all the continuous homomorphisms are known. Also, if a relatively left invariant measure m can be found and its multiplier x calculated, then a left invariant measure is given by m / x according to Proposition 6.4. This observation is used in the examples below.
+ Example 6.11. Consider the group GI, of all nonsingular n
X n matrices. Let ds denote Lebesgue measure on GI,. Since GI, = {sldet(s) * 0), G1, is a nonempty open subset of n2-dimensional Euclidean space and hence has positive Lebesgue measure. For f E %(GI,), let
To find a left invariant measure on GI,, it is now shown that J(sf) = Idet(s)InJ(f ) so J is relatively left invariant with multiplier ~ ( s =) (det(s)ln.From Proposition 5.10, the Jacobian of the transformation g(t) = st, s E GI,, is Idet(s)ln. Thus J(sf)
= /f
( s p i t )dt
=
ldet(s)ln/f ( t ) dt
=
ldet(s)ln~( f ).
From Proposition 6.4, it follows that the measure
is a left invariant measure on Gl,. A similar Jacobian argument shows that p is also right invariant, so the right-hand modulus of G1, is A, = 1. To construct all of the relatively invariant measures on GI,, it is necessary that the continuous homomorphisms x be characterized. For each a E R, let
Obviously, each X, is a continuous homomorphism. However, it can be shown (see the problems at the end of this chapter) that if x is a continuous homomorphism of GI, into (0, a),then x = X, for some a E R. Hence every relatively invariant measure on GI, is given by
where c is a positive constant and a
E
R.
200
TOPOLOGICAL GROUPS AND INVARIANT MEASURES
A group G for whch A,
=
1 is called unimodular. Clearly, all commuta-
tive groups are unimodular as a left invariant integral is also right invariant.
In the following example, we consider the group G: , which is not unimodular, but G; is a subgroup of the unimodular group GI,.
+
Example 6.12. Let G; be the group of all n x n lower triangular matrices with positive diagonal elements. Thus G; is a nonempty open subset of [n(n + 1)/2]-dimensional Euclidean space so G: has positive Lebesgue measure. Let dt denote [n(n + 1)/2]dimensional Lebesgue measure restricted to G:. Consider the integral
defined on 'X(G:). The Jacobian of the transformation g(t) s E G;, is equal to
=
st,
where s has diagonal elements s ,,,. . . , s,, (see Proposition 5.13). Thus
Hence J is relatively left invariant with multiplier x0 so the measure
is left invariant. To compute the right-hand modulus A, for G;, let
so J, is left invariant. Then
PROPOSITION
201
6.4
By Proposition 5.14, the Jacobian of the transform g(t)
=
ts is
Therefore,
By Theorem 6.2,
is the right-hand modulus for G;. Therefore, the measure
is right invariant. As in the previous example, a description of the relatively left invariant measures is simply a matter of describing all the continuous homomorphisms on G:. For each vector c E Rn with coordinates c,,. . . , c,, let
where t E G; has diagonal elements t,,, . . . , tan.It is easy to verify that X, is a continuous homomorphism on G:. It is known that if x is a continuous homomorphism on G;, then x is given by X, for some c E Rn (see Problems 6.4 and 6.9). Thus every relatively left invariant measure on G: has the form
for some positive constant k and some vector c
E
Rn.
+
The following two examples deal with the affine group and a subgroup of G1, related to the group introduced in Example 6.5.
202
+
TOPOLOGICAL GROUPS AND INVARIANT MEASURES
Example 6.13. Consider the group Al, of all affine transformations on Rn. An element of Al, is a pair (s, x) where s E G1, and x € Rn. Recall that the group operation in Al, is ( ~ 1x, I ) ( ~ 2 x2) ,
=
( ~ 1 ~S2l X,2 + XI)
SO
(s, x)-I
=
(s-' , - s - 'x).
Let ds dx denote Lebesgue measure restricted to Al,. In order to construct a left invariant measure on Al,, it is shown that the integral
is relatively left invariant with multiplier x O ( s ,X ) = ~det(s)1"+' For (s, X ) E Al,,
=
~det(s)l/f(s- 't , u) dt du.
The last equality follows from the change of variable u which has a Jacobian Idet(s)l. As in Example 6.1 1,
G ' I"
for each fixed u
f (s-'t, U ) dt E
Rn. Thus
=
ldet(s)in/ f ( t , u) dt GI,
=
s P i y - sx,
PROPOSITION
203
6.4
so J is relatively left invariant with multiplier measure p(ds, du)
=
x,.
Hence the
ds du ds du x o ( s , U ) 1det(s)ln+'
is left invariant. To find the right-hand modulus of Al,, let
be a left invariant integral. Then using an argument similar to that above, we have
Thus A,(s, x )
=
=
~det(s-')Int '~det(s)l"l f ( t , u)
=
~det(s)l-'J,( f ).
dt du 1det(t)ln+'
Idet(s)l-' so a right invariant measure on Al, is
v(ds, du) Now, suppose that
=
1
Ar('?
p ( d ~du) ,
=
ds du Idet(s)In '
x is a continuous homomorphism on Al,.
(s, x )
=
(s,O)(e, s-'x)
=
( e , x)(s,O)
Since
204
TOPOLOGICAL GROUPS AND INVARIANT MEASURES
where e is the n x n identity matrix, x ( ~x,) Thus for all s
E
=
x must satisfy the equation
x ( ~ , O ) x ( es-'x) ,
=
x(s,O)x(e, x )
GI,,
Letting s-' converge to the zero matrix, the continuity of that
x implies
since (e, 0) is the identity in Al,. Therefore,
However.
so x is a continuous homomorphism on GI,. But every continuous homomorphism on GI, is given by s + Idet(s)la for some real a. In summary, x is a continuous homomorphism on Al, iff
for some real number a. Thus we have a complete description of all the relatively invariant integrals on Al,.
+
Example 6.14. In this example, the group G consists of all the n x n nonsingular matrices s that have the form
where p + q = n. Let M be the subspace of R n consisting of those vectors whose last q coordinates are zero. Then G is the subgroup of G1, consisting of those elements s that satisfy s ( M ) G M. Let ds,, ds,, ds,, denote Lebesgue measure restricted to G when G is regarded as a subset of ( p 2 + q 2 + pq)-dimensional Euclidean space. Since G is a nonempty open subset of this space, G has positive Lebesgue measure. As in previous examples, it is shown
+
PROPOSITION
6.4
that the integral
is relatively left invariant. For s
E
G,
A bit of calculation shows that
and
Let u22
sii1t22
Ul1 =
silt,,,
UI2 =
SfiltI2- ~ i l ~ ~ ~ ~ ; ~ ~ t ~ ~ .
=
The Jacobian of this transformation is X O ( S ) ldet(sll)lPldet(s22)lqldet(sH)lq Therefore, Jbf) so the measure
is left invariant. Setting
=
xo(s)J(f
=
1det(s~~)l"ldet(s~~)14.
206
TOPOLOGICAL GROUPS AND INVARIANT MEASURES
a calculation similar to that above yields
where
Thus A, is the right-hand modulus of G and the measure
is right invariant. For a, /3
E
R, let
Clearly, xaB is a continuous homomorphism of G into (0, a ) . Conversely, it is not too difficult to show that every continuous homomorphsm of G into (0, co) is equal to xapfor some a, /3 E R. Again, this gives a complete description of all the relatively invariant integrals on G.
+
In the four examples above, the same argument was used to derive the left and right invariant measures, the modular function, and all of the relatively invariant measures. Namely, the group G had positive Lebesgue measure when regarded as a subset of an obvious Euclidean space. The integral on X(G) defined by Lebesgue measure was relatively left invariant with a multiplier that we calculated. Thus a left invariant measure on G was simply Lebesgue measure divided by the multiplier. From this, the right-hand modulus and a right invariant measure were easily derived. The characterization of the relatively invariant integrals amounted to finding all the solutions to the functional equation st) = x(s)x(t) where x is a continuous function on G to (0, a ) . Of course, the above technique can be applied to many other matrix groups-for example, the matrix group considered in Example 6.5. However, there are important matrix groups for which t h s argument is not available because the group has Lebesgue measure zero in the "natural" Euclidean space of which the group is a subset. For example, consider the group of n x n orthogonal matrices 8,. When regarded as a subset of n2-dimensional Euclidean space, 8, has Lebesgue measure zero. But, without a fairly complicated parameterization of On,it is not possible to regard 8, as a set of positive Lebesgue measure of some Euclidean space.
For this reason, we do not demonstrate directly the existence of an invariant measure on 0, in this chapter. In the following chapter, a probabilistic proof of the existence of an invariant measure on 8, is given. The group On, as well as other groups to be considered later, are in fact compact topological groups. A basic property of such groups is given next. Proposition 6.5. Suppose G is a locally compact topological group. Then G is compact iff there exists a left invariant probability measure on G.
Proof: See Nachbin (1965, Section 5, Chapter 2).
C]
The following result shows that when G is compact, left invariant measures are right invariant measures and all relatively invariant measures are in fact invariant. Proposition 6.6. If G is compact and x is a continuous homomorphism on G to (0, a ) , then ~ ( s =) 1 for all s E G.
Proof: Since x is continuous and G is compact, x(G) = {x(s)ls E G) is a compact subset of (0, a ) . Since x is a homomorphism, x(G) is a subgroup of (0,c.a). However, the only compact subgroup of (0, a ) is (1). Thus ~ ( s =) 1 for all s E G. The nonexistence of nontrivial continuous homomorphisms on compact groups shows that all compact groups are unimodular. Further, all relatively invariant measures are invariant. Whenever G is compact, the invariant measure on G is always taken to be a probability measure.
6.3. INVARIANT MEASURES ON QUOTIENT SPACES
In this section, we consider the existence and uniqueness of invariant integrals on spaces that are acted on transitively by a group. Throughout this section, GX is a locally compact Hausdorff space and X(%) denotes the set of continuous functions on % that have compact support. Also, G is a locally compact topological group that acts on the left of %.
-
Definition 6.11. The group G acts topologicaIly on % if the function from G x % to % given by (g, x ) gx is continuous. When G acts topologically on GX, %is a left homogeneous space if for each x E %, the function rxon G to % defined by .rr,(g) = gx is continuous, open, and onto GX.
208
TOPOLOGICAL GROUPS AND INVARIANT MEASURES
The assumption that each rxis an onto function is just another way to say that G acts transitively on 5%. Also, it is not difficult to show that if, for one x E X , .rr, is continuous, open, and onto %, then for all x , nx is continuous, open, and onto X. To describe the structure of left homogeneous spaces X , fix an element x, E X and let
That H, is a closed subgroup of G is easily verified. Further, the function T considered in Proposition 6.3 is now one-to-one, onto, and T and 7 - I are both continuous. Thus we have a one-to-one, onto, bicontinuous mapping between X and the quotient space G/Ho endowed with the quotient topology. Conversely, let H be a closed subgroup of G and take X = G/H with the quotient topology. The group G acts on G/H in the obvious way (g(g,H) = gg,H) and it is easily verified that G/H is a left homogeneous space (see Nachbin 1965, Section 3, Chapter 3). Thus we have a complete description of the left homogeneous spaces (up to relabelings by T) as quotient spaces G/H where H is a closed subgroup of G. In the notation above, let X be a left homogeneous space. Definition 6.12. A nonzero integral J on X ( X )
is relatively invariant with multiplier x if, for each s
for all f
E
E
G,
X(%),
For f E X ( X ) , the function sf given by (sf )(x) = f(s-'x) is the left translate off by s E G. Thus an integral J on %(%) is relatively invariant with multiplier x if J(sf) = ~ ( sJ () f ). For such an integral,
so st) = x(s)x(t). Also, any multiplier x is continuous, whch implies that a multiplier is a continuous homomorphism of G into the multiplicative group (0, m).
INVARIANT MEASURES ON QUOTIENT SPACES
+
209
Example 6.15. Let EX be the set of all p x p positive definite matrices. The group G = GI, acts transitively on % as shown in Example 6.9. That % is a left homogeneous space is easily verified. For a E R, define the measure m, by
where dx is Lebesgue measure on %. Let Ja( f ) = jf(x)m,(dx). For s E GI,, s(x) = sxs' is the group action on %. Therefore,
The last equality follows from the change of variable x = sys', which has a Jacobian equal to ~det(s)lP+'(see Proposition 5.1 1). Hence J&f)
=
Idet(s)laJ(f)
for all s E GI,, f € X(%), and J, is relatively invariant with multiplier x,(s) = Jdet(s)la. For this example, it has been shown that for every continuous homomorphism x on G, there is a relatively invariant integral with multiplier X. That this is not the case in general is demonstrated in future examples.
+
The problem of the existence and uniqueness of relatively invariant integrals on left homogeneous spaces % is completely solved in the following result due to Weil (see Nachbin, 1965, Section 4, Chapter 3). Recall that xo is a fixed element of % and
is a closed subgror~pof G. Let A, denote the right-hand modulus of G and let A: denote the right-hand modulus of Ho.
210
TOPOLOGICAL GROUPS AND INVARIANT MEASURES
Theorem 6.3. In the notation above:
(i) If J ( f )
=
jf(x)m(dx) is relatively invariant with multiplier X, then A:(h)
=
x(h)A,(h)
for all h
E
Ho.
(ii) If x is a continuous homomorphism of G to (0, m) that satisfies A:(h) = x(h)A,(h), h E Ho, then a relatively invariant integral with multiplier x exists. (iii) If J , and J, are relatively invariant with the same multiplier, then there exists a constant c > 0 such that J, = cJ,. Before turning to applications of Theorem 6.3, a few general comments are in order. If the subgroup Ho is compact, then A:(h) = 1 for all h E Ho. Since the restrictions of x and of A, to Ho are both continuous homomorphisms on Ho, A,(h) = ~ ( h =) 1 for all h E Ho as Ho is compact. Thus when Ho is compact, any continuous homomorphsm x is a multiplier for a relatively invariant integral and the description of all the relatively invariant integrals reduces to finding all the continuous homomorphisms of G. Further, when G is compact, then only an invariant integral on X ( X ) can exist as x = 1 is the only continuous homomorphism. When G and H are not compact, the situation is a bit more complicated. Both A, and A: must be calculated and then, the continuous homomorphisms x on G to (0, m) that satisfy (ii) of Theorem 6.3 must be found. Only then do we have a description of the relatively invariant integrals on X ( X ) . Of course, the condition for the existence of an invariant integral (X = 1) is that A:(h) = A,(h) for all h E Ho. If J is a relatively invariant integral (with multiplier X ) given by
then the measure m is called relatively invariant with multiplier X. In Example 6.15, it was shown that for each a E R, the measure ma was relatively invariant under GI, with multiplier x,. Theorem 6.3 implies that any relatively invariant measure on the space of p x p positive definite matrices is equal to a positive constant times an ma for some a E R. We now proceed with further examples.
+
Example 6.16. Let % = %,, and let G = 8,. It was shown in Example 6.8 that 8, acts transitively on %,,. The verification that
211
INVARIANT MEASURES O N QUOTIENT SPACES
%, ,is a left homogeneous space is left to the reader. Since 8, is compact, Theorem 6.3 implies that there is a unique probability measure p on %,, that is invariant under the action of 8, on %, ,.
%,
Also, any relatively invariant measure on , will be equal to a positive constant times p. The distribution p 1s sometimes called the uniform distribution on 3, ,. When p = 1, then
which is the rim of the unit sphere in Rn. The uniform distribution on F,, ,is just surface Lebesgue measure normalized so that it is a probability measure. When p = n, then %, = 8, and p is the uniform distribution on the orthogonal group. A different argument, probabilistic in nature, is given in the next chapter, which also establishes the existence of the uniform distribution on %, ,.
+
Example 6.17. Take % = RP - (0) and let G = GI,. The action of GIp on % is that of a matrix acting on a vector and this action is obviously transitive. The verification that % is a left homogeneous space is routine. Consider the integral
where dx is Lebesgue measure on %. For s E GI,, it is clear that J(sf ) = Idet(s)I J ( f ) so J is relatively invariant with multiplier x,(s) = Idet(s)l. We now show that J is the only relatively invariant integral on %(EX). This is done by proving that X, is the only possible multiplier for relatively invariant integrals on X(X).A convenient choice of x , E % is x , = el where e; = (1,0,. . . , 0). Then
A bit of reflection shows that h
where h2,
E
GI(,-
E
Ho iff
, and h , , is 1 X ( p - 1). A calculation similar to
+
212
TOPOLOGICAL GROUPS AND INVARIANT MEASURES
that in Example 6.14 yields
as a left invariant measure on H,. Then the integral
is left invariant on X(H,) and a standard Jacobian argument yields
where
Every continuous homomorphism on GI, has the form x,(s) = Idet(s)Ia for some a E R. Since A, = 1 for GI,, X, can be a multiplier for an invariant integral iff
But A?(h) = Idet(h,,)l and for h E H,, x,(h) = Idet(h2,)la so the only value for a for which X, can be a multiplier is a = 1. Further, the integral J is relatively invariant with multiplier x,. Thus Lebesgue measure on 5% is the only (up to a positive constant) relatively invariant measure on 3, under the action of GI,. Before turning to the next example, it is convenient to introduce the direct product of two groups. If GI and G, are groups, the direct product of GI and G,, denoted by G = GI X G,, is the group consisting of all pairs (g,, g,) with g, E G,, i = 1,2, and group operation
If e, is the identity in G,, i = 1,2, then (el, e,) is the identity in G and (g,, g,)-' = (g; g; I). When GI and G, are locally compact topological groups, then GI X G, is a locally compact topological group when endowed with the product topology. The next two results describe all the continuous homomorphisms and relatively left invariant measures on GI X G, in terms
',
of continuous homomorphisms and relatively left invariant measures on G I and G,. Proposition 6.7. Suppose GI and G, are locally compact topological groups. , = Then x is a continuous homomorphism on G I X G, iff ~ ( ( g , g,)) ~ , ( g , ) ~ , ( g , ) (g,, , g,) E GI x G,, where X, is a continuous homomorphism on G,, i = 1,2. , = x,(g,)x,(g,), clearly x is a continuous homomorProof: If ~ ( ( g , g,)) phsm on G I x G,. Conversely, since (g,, g,) = (g,, e,)(e,, g,), if x is a continuous homomorphism on G I x G,, then
Setting x , ( g , ) lows.
=
~ ( g ,e,) , and x 2 ( g 2 ) = ~ ( e , g2), , the desired result fol-
Proposition 6.8. Suppose x is a continuous homomorphism on G I X G, with ~ ( g , g,) , = x l ( g , ) x 2 ( g , ) where xi is a continuous homomorphism on G,, i = 1,2. If m is a relatively left invariant measure with multiplier X, then there exist relatively left invariant measures mi on G, with multipliers x i , i = 1,2, and m is product measure m, X m,. Conversely, if m, is a relatively left invariant measure on G, with multiplier x,, i = 1,2, then m, X m, is a relatively left invariant measure on G I X G, with multiplier X, whch satisfies x ( g , , g2) = x,(g,)x2(g2).
Proof: This result is a direct consequence of Fubini's Theorem and the existence and uniqueness of relatively left invariant integrals. The following example illustrates many of the results presented in this chapter and has a number of applications in multivariate analysis. For example, one of the derivations of the Wishart distribution is quite easy given the results of this example.
+
Example 6.18. As in Example 6.10, % is the set of all n X p matrices with rank p and G is the direct product group 0, x G;. The action of (I',T ) E 6, x G: on % is
(I',T ) X Since % = {XIX E
(r @
T ) X = I'XT',
C,, ,, det(XfX) > 0), %
X E %. is a nonempty open
214
ep
TOPOLOGICAL GROUPS AND INVARIANT MEASURES
subset of ,. Let dX be Lebesgue measure on % and define a measure on % by m(dX)
=
dX (det( X'X)) "I2 '
Using Proposition 5.10, it is an easy calculation to show that the integral
is invariant-that is, J ( ( r , T )f ) = J( f ) for ( r , T ) E 0, X G; and f E X ( % ) . However, it takes a bit more work to characterize all the relatively invariant measures on 5%. First, it was shown in Example 6.10 that, if Xo is
then Ho = {(I?,T)I(T, T ) X , = Xo) is a closed subgroup of 0, and hence is compact. By Theorem 6.3, every continuous homomorphsm on 0, x G,f is the multiplier for a relatively invariant integral. But every continuous homomorphism x on 0, X G: has the form ~ ( r T ), = x 1 ( r ) x 2 ( T )where X , and x 2 are continuous homomorphisms on 0, and G;. Since 0, is compact, X , = 1. From Example 6.12,
where c E RP has coordinates c,, . . . , c,. Now that all the possible multipliers have been described, we want to exhibit the relatively invariant integrals on X ( % ) . To this end, consider the space 9 = ,x G: so points in ?I are ('k, U ) where 'k is an n x p linear isometry and U is a p x p upper triangular matrix in G.: The group 0, x G; acts transitively on 9under the group action
5,
5,
,that is 0,-invariant Let po be the unique probability measure on and let v, be the particular right invariant measure on the group G:
PROPOSITION 6.8
given by
Obviously, the integral
is invariant under the action of 8, x Gg on $$, ,X G:, f E X($$,, x G t ) . Consider the integral
defined on X(%,,X G:) where X , is a continuous homomorphism on G:. The claim is that J,((r, T ) f ) = x C ( T ) J 2f( ) so J2 is relatively invariant with multiplier x,. To see this, compute as follows:
The last equality follows from the invariance of po and v,. Thus all the relatively invariant integrals on %(%,, x G:) have been explicitly described. To do the same for %(%), the basic idea is to move the integral J, over to %(%). It was mentioned earlier that the map on %, ,X G: to EX given by
+,
is one-to-one, onto, and satisfies
216
TOPOLOGICAL GROUPS AND INVARIANT MEASURES
for group elements (I?, T). For f
Then for ( r , T) E
E
X(%), consider the integral
8, x G ; ,
since po and vr are invariant. Therefore, J3 is an invariant integral on X(!X). Since J is also an invariant integral on X(%), Theorem 6.3 shows that there is a positive constant k such that
More explicitly, we have the equation
for all f E X(%). This equation is a formal way to state the very nontrivial fact that the measure m on % gets transformed into the measure k(po x vr) on %, ,x G: under the mapping +, I. To evaluate the constant k, it is sufficient to find one particular function so that both sides of the above equality can be evaluated. Consider
Clearly,
The last equality follows from the result in Example 5.1, where c ( n , p) is defined. Therefore,
It is now an easy matter to derive all the relatively invariant integrals on X ( % ) . Let X, be a given continuous homomorphism on G g . For each X E %, let U(X) be the unique element in G: such that'X = *U(X) for some 'k E $, (see Proposition 5.2). It is clear that U(rXTt) = U(X)Tf for r E 8, and T E G$ . We have shown that
is relatively invariant with multiplier X, on X(%,, x G:). h E X(X), define an integral J4by
For
Clearly, J4is relatively invariant with multiplier xCsince J4(h) = ~ , ( h )where A(*, U) = h('kU). Now, we move J4 over to 5% by (6.1). In (6.1), take f ( X ) = h(X)xC(Uf(X)) so f('kU) = h(*U)x,(U'). Thus the integral
218
TOPOLOGICAL GROUPS AND INVARIANT MEASURES
is relatively invariant with multiplier x,. Of course, any relatively invariant integral with multiplier X, on X(%) is equal to a positive constant times J,.
+
6.4. TRANSFORMATIONS AND FACTORIZATIONS OF MEASURES The results of Example 6.18 describe how an invariant measure on the set of n x p matrices is transformed into an invariant measure on x Gt under a particular mapping. The first problem to be discussed in t h s section is an abstraction of this situation. The notion of a group homomorphism plays a role in what follows.
q,,
Definition 6.13. Let G and H be groups. A function 17 from G onto H is a homomorphism if:
When there is a homomorphism from G to H, H is called a homomorphic image of G. For notational convenience, a homomorphic image of G is often denoted - the value of the homomorphism at g is g. In t h s case, g,g, = g,g, by Gand and g- = g- Also, if e is the identity in G, then e is the identity in G. Suppose % and 9 are locally compact spaces, and G and G are locally respectively. compact topological groups that act topologically on X and 9, It is assumed that G is a homomorphic image of G.
'
'.
Definition 6.14. A measurable function @ from % onto % is called equivariant if @ ( g x )= @ ( x ) for all g E G and x E %. Now, consider an integral
which is invariant under the action of G on X, that is
+
for g E G and f E X(%). Given an equivariant function from GX to 9 , there is a natural measure v induced on 9. Namely, if B is a measurable subset of %, v(B) = p(+-'(B)). The result below shows that under a regularity condition on +, the measure v defines an invariant (under c ) integral on X ( 9 ) . Proposition 6.9. If @ is an equivariant function from GX onto 9 that satisfies p(+-'(K)) < co for all compact sets K c 9 , then the integral
+
is invariant under
c. +
ProoJ: First note that J, is well defined and finite since p(+-'(K)) < co for all compact sets K c 9. From the definition of the measure v, it follows immediately that
Using the equivariance of
so J, is invariant under
+ and the invariance of p, we have
c.
Before presenting some applications of Proposition 6.9, a few remarks are order. The groups G and G are not assumed to act transitively on GX and , respectively. However, if G does act transitively on 9 and if 9 is a left homogeneous space, then the measure v is uniquely determined up to a positive constant. Thus if we happen to know an invariant measure on 9 , the identity
relates the G-invariant measure p to the c-invariant measure v. It was t h s
220
TOPOLOGICAL GROUPS AND INVARIANT MEASURES
line of reasoning that led to (6.1) in Example 6.18. We now consider some further examples.
+
Example 6.19. As in Example 6.18, let X be the set of all n x p matrices of rank p, and let 9 be the space S ; of p x p positive definite matrices. Consider the map + on %, to S ; defined by
The group 0, X GI, acts on X by
and the measure
is invariant under 0, X GI,. Further,
and this defines an action of GI, on the mapping ( r , ~ +) A
=
S;.
It is routine to check that
( r , ~ )
is a homomorphism. Obviously,
since the action of GIp on A(S)
=
51; is
ASA';
Since GI, acts transitively on
SE
S;,
S;,
A
E
GI,.
the invariant measure
is unique up to a positive constant. The remaining assumption to verify in order to apply Proposition 6.9 is that + - ' ( K ) has finite p measure for compact sets K G Sp+.To do this, we show that
PROPOSITION
221
6.9
+ - ' ( K ) is compact in %. Recall that the mapping h on onto % given by
rp,,X Sp+
is one-to-one and is obviously continuous. Given the compact set K c S;, let
5,
Then K , is compact so ,x K , is a compact subset of It is now routine to show that
$,,x 5;.
which is compact since h is continuous and the continuous image of a compact set is compact. By Proposition 6.9, we conclude that the is invariant under GI, and satisfies measure v = p 0
for all f E %(S;). Since v is invariant under GI,, v a positive constant. Thus we have the identity
=
cv, where c is
To find the constant c, it is sufficient to evaluate both sides of (6.2) for a particular function f,. For f,, take the function
Clearly, the left-hand side of (6.2) integrates to one and this yields the equation
222
TOPOLOGICAL GROUPS AND INVARIANT MEASURES
The result of Example 5.1 gives
In conclusion, the identity
has been established for all f f for which either side exists.
+
E
X(Sl), and thus for all measurable
Example 6.20. Again let Si be the set of n so the group 6, x G; acts on % b y
X
p matrices of rank p
Each element X E Si has a unique representation X = 9 U where 9 E Tp,,and U E G&. Define @ on X onto G& by defining $( X ) to be the unique element U E G: such that X = 9 U for some 9 E %,,. If @ ( X )= U , then @ ( ( r ,T ) X ) = UT', since when X = 9 U , ( r , T ) X = T 9 U T 1 . This implies that UT' is the unique element in Gh such that X = ( T 9 ) U T ' as r9 E GP,,. The mapping (r,T ) + T = ( r , T ) is clearly a homomorphism of (I?,T ) onto G; and the action of G; on G: is
Therefore, @ ( ( T ,T )X ) sure
=
( T , T ) $ ( X ) so @ is equivariant. The mea-
is t?, x G,+ invariant. To show that $ - ' ( K ) has finite p measure when K G: is compact, note that h ( 9 , U ) = \kU is a continuous function on %,, X G: onto 5%. It is easily verified that
+
PROPOSITION
223
6.9
But %,. x K is compact, which shows that + - ' ( K ) is compact since h is continuous. Thus p ( + - ' ( K ) ) < oo. Proposition 6.9 shows that v = p I # - 'is a G;-invariant measure on G: and we have the identity
+
0
for all f E %(G:).
However, the measure
is a right invariant measure on G:, and therefore, v , is invariant under the transitive action of G; on G;. The uniqueness of invariant measures implies that v = cv, for some positive constant c and
The constant c is evaluated by choosing f to be
Since (+(X))'+(X)
f( + ( x ) ) and
Therefore,
=
X'X,
=
( & ) n P ~ ~ ~ ~ n / 2 e x3ptr[X'X] -
224
TOPOLOGICAL GROUPS AND INVARIANT MEASURES
where c ( n , p ) is defined in Example 5.1. T h s yields the identity
for all f
E
%(G;). In particular, when f ( U )
= f,(U'U),
we have
whenever either integral exists. Combining this with (6.3) yields the identity
for all measurable f for which either integral exists. Setting T = U' in (6.5) yields the assertion of Proposition 5.18.
+
The final topic in this chapter has to do with the factorization of a Radlon measure on a product space. Suppose 'X and 9 are locally compact and a-compact Hausdorff spaces and assume that G is a locally compact topological group that acts on !X in such a way that 9C is a homogeneous space. It is also assumed that p , is a G-invariant Radon measure on X so the integral
is G-invariant, and is unique up to a positive constant.
Proposition 6.10. Assume the conditions above on %, 05, G, and J,. Define G acting on the locally compact and a-compact space X X 9by g(x, y ) = (gx, y). If m is a G-invariant Radon measure on % X 9, then m = p , X v for some Radon measure v on 9.
ProoJ: By assumption, the integral
PROPOSITION
6.10
satisfies
For f2 E X ( 9 ) and f , E X ( % ) , the product f 1 f 2 , defined by ( f 1 f 2 ) ( xY, ) = f l ( x ) f 2 ( y ) ,is in X(% x X ) and
Fix f2
E
X ( 9 ) such that f2 2 0 and let
Since J ( g f ) = J( f ), it follows that H ( g f , ) = H( f , )
for g
E
G and f ,
E
X(%).
Therefore H is a G-invariant integral on X ( % ) . Hence there exists a non-negative constant c( f2) depending on f2 such that
and c ( f , ) = 0 iff H( f , ) = 0 for all f , E X ( % ) . For an arbitrary f2 E X ( 9 ) , write f2 = f: - f; where f: = max(f2,0 ) and f; = max(-f,, 0 ) are in X ( 9 ) . For such an f 2 , it is easy to show
Thus defining c on X ( X ) by c( f 2 ) c is an integral on X ( X ) . Hence
=
c( f:) - c( f;), it is easy to show that
for some Radon measure v. Therefore,
A standard approximation argument now implies that m is the product measure p X v.
,
226
TOPOLOGICAL GROUPS AND INVARIANT MEASURES
Proposition 6.10 provides one technique for establishing the stochastic independence of two random vectors. Thls technique is used in the next chapter. The one application of Proposition 6.10 given here concerns the space of positive definite matrices.
+
Example 6.21. Let 2 be the set of all p X p positive definite matrices that have distinct eigenvalues. That 2 is an open subset of 5; follows from the fact that the eigenvalues of S E 5; are continuous functions of the elements of the matrix S.Thus 2 has nonzero Lebesgue measure in S l . Also, let be the set of p x p diagonal matrices Y with diagonal elements y,, . . . , yp that satisfy y , > y, > . . . > y,. Further, let % be the quotient space Cp/Qp where Qp is the group of sign changes introduced in Example 6.6. We now construct a natural one-to-one onto map from % X 9to 2. For X E X, X = r Q p for some r E Define by
eP.
To verify that Then
+ is well defined, suppose that
+
X
=
r,QP = r,Qp.
since r;r, E Qp and every element D E qpsatisfies DYD = Y for all Y E 05. It is clear that + ( X , Y ) has ordered eigenvalues y, > y, > . . . > yp > 0, the diagonal elements of Y. Clearly, the function is onto and continuous. To show $I is one-to-one, first note that, if Y is any element of 3,then the equation
+
implies that E qp( r Y r f = Y implies that TY = Y f and equating the elements of these two matrices shows that r must be If diagonal so E qP).
r
then Y , = Y, by the uniqueness of eigenvalues and the ordering of the diagonal elements of Y E %. Thus
PROPOSITION
6.10
when
Therefore,
r;r, yIr;r2= Y, , which implies that r;rI E 9,. Since = riqPfor i = 1,2, this shows that XI = X2 and that + is one-to-one. Therefore, + has an
inverse and the spectral theorem for matrices specifiesjust what + - I is. Namely, for Z E $5, let y, > - . > yp > 0 be the ordered eigenvalues of Z and write Z as
where Y E 9 has diagonal elements y, > lem is that r E Op is not unique since
rYrf= rDYDrf
- . > yp > 0. The prob-
for D
E
Gi),.
To obtain uniqueness, we simply have "quotiented out" the subgroup QP in order that be well defined. Now, let
+-'
be Lebesgue measure on 3 and consider v = p +-the induced measure on % x 3. The problem is to obtain some information about the measure v. Since is continuous, v is a Radon measure and v satisfies on % X 9, 0
+
for f E X(% X 3).The claim is that the measure v is invariant under the action of on % X X defined by
ap
To see this, we have
228
TOPOLOGICAL GROUPS AND INVARIANT MEASURES
But a bit of reflection shows that rf+-'(z) = +-'(I"zI'). Since the Jacobian of the transformation T'ZT is equal to one, it follows that v is 8p-invariant. By Proposition 6.10, the measure v is a product measure v , x v, where v , is an aP-invariant measure on 5%. Since fIp is compact and 5% is compact, the measure v , is finite and we take v , ( % ) = 1 as a normalization. Therefore,
for all f E X ( %
X
9 ) .Setting h = f+-' yields
for h E X ( % ) . In particular, if h E X(%)satisfies h ( Z ) = h ( r Z I ' ' ) for all I' E fIp and Z E %, then h (+( X, Y )) = h ( Y ) and we have the identity
It is quite difficult to give a rigorous derivation of the measure v, without the theory of differential forms. In fact, it is not obvious that v, is absolutely continuous with respect to Lebesgue measure on 9 . The subject of this example is considered again in later chapters.
+
PROBLEMS
1. Let M be a proper subspace for V and set
where g ( M ) = {xlx = gv for some v E M ) . (i) Show that g ( M ) = M iff g ( M ) C M for g E GI(V) and show that G ( M ) is a group. Now, assume V = RP and, for x E RP, write x = with y E R4 and z E Rr, q + r = p. Let M = {xlx = (0,y E Rq).
(z)
PROBLEMS
(ii) For g
E
GIp, partition g as
Show that g E G ( M ) iff g l l E GI,, g22 E GI,, and g21 such g show that
=
0. For
(iii) Verify that G I = { g E G(M)lgll = I q , g 1 2= 0) and G, = { g E G(M)Jg,, = I,) are subgroups of G ( M ) and G, is a normal subgroup of G ( M ) . (iv) Show that G I n G, = { I ) and show that each g can be written uniquely as g = hk with h E G I and k E G,. Conclude that, if gi = hik,, i = 1,2, theng,g, = h,k,, where h, = h l h 2 and k, = h;'k,h,k,, is the unique representation of g l g 2 with h, E G I and k, E G2. 2. Let G ( M ) be as in Problem 1. Does G ( M ) act transitively on V - {O)? Does G ( M ) act transitively on V n M c where M c is the complement of the set M in V? 3. Show that 8, is a compact subset of Rm with m = n2. Show that 8, is a topological group when 8, has the topology inherited from Rm.If x is a continuous homomorphism from 8, to the multiplicative group (0, m), show that ~ ( r =) 1 for all r E 8,. 4.
Suppose x is a continuous homomorphism on (0, m ) to (0, m). Show that ~ ( x =) x u for some real number a.
5. Show that 8, is a compact subgroup of GI, and show that G: (of dimension n x n ) is a closed subgroup of GI,. Show that the uniqueness of the representation A = r U ( A E GI,, r E 8,, U E GL) is equivalent to 8, n G: = {I,). Show that neither nor G: is a normal subgroup of GI,. 6. Let (V, (., .)) be an inner product space. (i) For fixed v E V, show that x defined by ~ ( x =) exp[(v, x)] is a continuous homomorphism on V to (0, a).Here V is a group under addition.
230
TOPOLOGICAL GROUPS AND INVARIANT MEASURES
(ii) If x is a continuous homomorphism on V, show that ~ ( x =) logx(x) is a linear function on V. Conclude that ~ ( x = ) exp[(v, x)] for some v E V. 7. Suppose x is a continuous homomorphism defined on GI, to (0, co). Using the steps outlined below, show that x(A) = ldet Al" for some real a. (i) First show that ~ ( r =) 1 for r E 0,. (ii) Write A = rDA with I?, A E 0, and D diagonal with positive diagonals A,, . . . , A,. Show that x(A) = x(D). (iii) Next, write D = nD,(Ai) where D,(c) is diagonal with all diagonal elements equal to one except the ith diagonal element, whch is c. Conclude that x(D) = nx(D,(A,)). (iv) Show that D,(c) = PDl(c)Pf for some permutation matrix P E 0,. Using this, show that x(D) = x(D,(X)) where X = nX,. (v) For A E (0, a ) , set ((A) = x(DI(A)) and show that ( is a continuous homomorphism on (0, co) to (0, co) so ((A) = ha for some real p. Now, complete the proof of x(A) = Idet Ala.
8. Let % be the set of all rank r orthogonal projections on R n to R n (1 G r < n - 1). (i) Show that 8, acts transitively on % via the action x Txr', r E 0,. For -+
what is the isotropy subgroup? Show that the representation of x in this case is x = $4' where # : n x r consists of the first r columns of r E On. (ii) The group Or acts on 5,,by # -+ #Af, A E 0,. This induces an ) , iff rl/, = #,Af for some equivalence relation on F,, (#, z I A E a,), and hence defines a quotient space. Show that the map [#] $4' defines a one-to-one onto map from this quotient space to %. Here [#I is the equivalence class of 4. -+
9. Following the steps outlined below, show that every continuous homomorphism on G: to (0, oo) has the form x ( T ) = np(tii)'~ where T : p x p has diagonal elements t ,,,. . . , tpp and c,,. . . , cp are real numbers.
231
PROBLEMS
(i) Let
and
Show that GI and G, are subgroups of Gg and G, is normal. Show that every T has a unique representation as T = hk with h E GI, k E G2. '(tii)'~. Also for (ii) An induction assumption yields ~ ( h =) T = hk, x(T) = x(h)x(k). (iii) Show that ~ ( k =) (tPp)'p for some real .c,
nf-
10. Evaluate the integral I, = j I XIXIYexp[- f tr X'X] dX where X ranges over all n x p matrices of rank p. In particular, for what values of y is this integral finite? 11. In the notation of Problems 1 and 2, find all of the relatively invariant integrals on RP n MCunder the action of G(M).
In Rn, let % = {xlx E Rn, x 4 span{e)). Also, let Sn-,(e) = {xlllxll = 1 , x E Rn,x'e = 0) and let % = R' X ( 0 , ~ x) Sn-,(e). For x E X , set F = n-'e'x and set s2(x) = Z(xi - T ) ~Define . a mapping T on 5% to 9by ~ ( x =) {a, s, (x - Fe)/s). (i) Show that T is one-to-one, onto and find T-I. Let Bn(e) = {TIT E B,, r e = e) and consider a group G defined by G = {(a, b, T)la E (0, w), b E R1, r E On(e)) with group composition given by (al, bl, r1)(a2,b2, r 2 ) = (a,a,, a1b2+ bl, r1r2). Define G acting on X a n d 9by (a, b, r ) x = a r x be, x E X , (a, b, r)(u, u, w) = (au + b, au, r w ) for (u, u, w) E 9. (ii) Show that ~ ( g x = ) g ~ ( x )g, E G. (iii) Show that the measure p(dx) = dx/sn is an invariant measure on X . (iv) Let y(dw) be the unique Bn(e) invariant probability measure on Sn (e). Show that the measure
+
-,
is an invariant measure on 9.
232
TOPOLOGICAL GROUPS AND INVARIANT MEASURES
f (x)p(dx) = kj3 f (7-'( y))v(dy) for all integrable (v) Prove that 1% f where k is a fixed constant. Find k. (vi) Suppose a random vector X E X has a density (with respect to dx) given by
where 6 E R' and a > 0 are parameters. Find the joint density of X and s.
13. Let % = Rn -.,{0) and consider X E 5% with an On-invariant distribution. Define on 5% to (0, GO) X TI,, by +(x) = (Ilxll, x/llxll). The group On acts on (0, oo) x TI,, by I'(u, u ) = (u, I'v). Show that +(I'x) = l?+(x) and use t h s to prove that: (i) 11 Xll and X/(I XI( are independent. (ii) X/ll X 11 has a uniform distribution on T,,,.
+
14.
Let X = {x E Rn(x,* xj for all i *j ) and let 9 = {y E Rnlyl < y, < . . . < y,). Also, let qnbe the group of n X n permutation matrices so On G an and Tn acts on 5% by x -+ gx. (i) Show that the map +(g, y) = gy is one-to-one and onto from qnx 9 to 5%. Describe +-I. (ii) Let X E X be a random vector such that C(X) = C(gX) for g E qn. Write +-'(x) = (P(X), Y(X)) where P(X) E Tn and Y(X) E 9. Show that P(X) and Y(X) are independent and that P(X) has a uniform distribution on qn.
NOTES AND REFERENCES 1. For an alternative to Nachbin's treatment of invariant integrals, see Segal and Kunze (1978). 2. Proposition 6.10 is the Radon measure version of a result due to Farrell (see Farrell, 1976). The extension of Proposition 6.10 to relatively invariant integrals that are unique up to constant is immediate-the proof of Proposition 6.10 is valid.
3. For the form of the measure v, in Example 6.21, see Deemer and Olkin (1951), Farrell (1976), or Muirhead (1982).
First Applications of Invariance
We now begin to reap some of the benefits of the labor of Chapter 6. The one unifying notion throughout this chapter is that of a group of transformations acting on a space. Within thls framework independence and distributional properties of random vectors are discussed and a variety of structural problems are considered. In particular, invariant probability models are introduced and the invariance of likelihood ratio tests and maximum likelihood estimators is established. Further, maximal invariant statistics are discussed in detail.
7.1. LEFT 0, INVARIANT DISTRIBUTIONS ON n
X
p MATRICES
The main concern of this section is conditions under which the two matrices \k and U in the decomposition X = \kU (see Example 6.20) are stochastically independent when X is a random n x p matrix. Before discussing this problem, a useful construction of the uniform distribution on %,,! is presented. Throughout this section, 5% denotes the space of n x p matnces of rank p so n > p. First, a technical result. Proposition 7.1. Let X E CP, have a normal distribution with mean zero and Cov( X) = I, @ I,. Then P( X E 5%) = 1 and the complement of 5% in C.,, has Lebesgue measure zero. Proof. Let XI,. . . , X, denote the p columns of X. Thus XI,. . . , Xp are independent random vectors in R n and &(Xi)= N(0, I,),i = 1,. . . , p. It is shown that P(X E 5%") = 0. To say that X E 5%' is to say that, for some
233
FIRST APPLICATIONS OF INVARIANCE
index i, Xi E span{'l
j
* i).
Therefore.
[ &E span{X,j * i)]) i= 1
g
P
CP{X,E span{X,lj * i)). 1
However, 4 is independent of the set of random vectors (X,l j # i) and the probability of any subspace M of dimension less than n is zero. Since p g n, the subspace span ($1 j * i) has dimension less than n. Thus conditioning on X, for j * i, we have P{X,
E
span{X,l j
* i)) = &P{x~ E span{X,lj * i)lX,, j * i ) = 0.
Hence P(X E Xc) = 0. Since Xc has probability zero under the normal distribution on C,, ,and since the normal density function with respect to Lebesgue measure is strictly positive on Cp, ,, it follows the GXc has Lebesgue measure zero. If X E C,,, is a random vector that has a density with respect to Lebesgue measure, the previous result shows that P(X E X) = 1 since Xc has Lebesgue measure zero. In particular, if X E Cp,, has a normal distribution with a nonsingular covariance, then P(X E GX) = 1, and we often restrict such normal distributions to GX in order to insure that X has rank p. For many of the results below, it is assumed that Xis a random vector in X, and in applications X is a random vector in %, ,,, which has been restricted to X after it has been verified that Xc has probability zero under the distribution of X. Proposition 7.2. Suppose X E X has a normal distribution with C(X) = N(0, I, @ I,). Let XI,. . . , X, be the columns of X and let \k E $,,be the random matrix whose p columns are obtained by applying the Gram-Schmidt orthogonalization procedure to XI,. . . , X,. Then \k has the uniform distribution on $, ,, that is, the distribution of \k is the unique probability measure on $, , that is invariant under the action of 8,on %, , (see Example 6.16).
PROPOSITION
7.3
235
Proof. Let Q be the probability distribution of \k of verified that
5,,. It must be
%,,.
for all Bore1 sets B of If r E On, it is clear that C(TX) = C(X). Also, it is not difficult to venfy that \k, which we now write as a function of X, say *(X), satisfies
This follows by loolung at the Gram-Schmidt Procedure, which defined the columns of 9.Thus
for all r E 8,. The second equality above follows from the observation that C(X) = C(I'X). Hence Q is an 0,-invariant probability measure on %, , and the uniqueness of such a measure shows that Q is what was called the uniform distribution on $$,,,. Now, consider the two spaces % and $$,,,X G:. Let + be the function on % to %, , X G: that maps X into the unique pair (*,U) such that X = 'kU. Obviously, $I-'(*, U) = \kU E %. Definition 7.1. If X E % i s a random vector with a distribution P , then P is left invariant under 0, if C(X) = C(TX) for all r E 0,. The remainder of this section is devoted to a characterization of the 0,-left invariant distributions on %. It is shown that, if X E %has an 0,-left invariant distribution, then for +(X) = (\k, U) E $, x G:, \k and U are ,. This stochastically independent and \k has a uniform distribution on assertion and its converse are given in the following proposition.
3,
Proposition 7.3. Suppose X E % is a random vector with an On-left invariant distribution P and write (*, U) = +(X). Then \k and U are stochastically independent and \k has a uniform distribution on Conversely, if \k E , and U E G: are independent and if \k has a uniform distribution on %, ,, then X = \kU has an 0,-left invariant distribution on %.
3,
%,,.
236 Proof.
FIRST APPLICATIONS OF INVARIANCE
The joint distribution Q of (q, U) is determined by
where B, is a Borel subset of
$,,
and B, is a Borel subset of G.:
Also,
for any Borel measurable function that is integrable. The group 8, acts on the left of %,, X G: by r(*, U ) = (r*, U ) and it is clear that +(rx)
=
r + ( x ) for X E %,
r E 8,.
We now show that Q is invariant under this group action and apply Proposition 6.10. For r E O,,
Therefore, Q is 8,-invariant and, by Proposition 6.10, Q is a product measure Q, X Q, where Q, is taken to be the uniform distribution on %, ,. That Q, is a probability measure is clear since Q is a probability measure. The first assertion has been established. For the converse, let Q, and Q, be the distributions of and U so Q, is the uniform distribution on 5, and Q, x Q, is the joint distribution of ( 9 , U) in %, ,X G:. The distribution P of X = \kU = +-I(*, U) is determined by the equation
*
for all integrable f . To show P is 6,-left invariant, it must be verified that
PROPOSITION
7.4
for all integrable f and
r E 8,.
But
where the next to the last equality follows from the 8,-invariance of Ql. Thus P is 8,-left invariant. Whenp = 1, Proposition 7.3 is interesting. In this case % = Rn - (0) and the 0,-left invariant distributions on % are exactly the orthogonally invariant distributions on Rn that have no probability at 0 € Rn.If X € Rn (0) has an orthogonally invariant distribution, then = X/llXll E 'TI,, is independent of U = 11 XI 1 and has a uniform distribution on 'TI, ,. There is an analogue of Proposition 7.3 for the decomposition of X E !X into (+, A) where \k E $, and A E Sp+(see Proposition 5.5).
+
+
Proposition 7.4. Suppose X E % is a random vector with an 0,-left invariant distribution and write + ( X ) = (+, A) where E %, n and A E S ; are the unique matrices such that X = +A. Then and A are independent and 9 has a uniform distribution on ,. Conversely, if \k E $,, and A E $5; are independent and if \k has a uniform distribution on $,., then X = +A has an 8,-left invariant distribution on 5%.
3,
+
+
ProoJ: The proof is essentially the same as that of Proposition 7.3 and is left to the reader.
Thus far, it has been shown that if X E % has an 8,-left invariant distribution for X = +U, 'k and U are independent and has a uniform distribution. However, nothing has been said about the distribution of U E GL. The next result gives the density function of U with respect to the right invariant measure
+
in the case that X has a density of a special form.
238
FIRST APPLICATIONS OF INVARIANCE
Proposition 7.5. Suppose X function
E
% has a distribution P given by a density
with respect to the measure
on %. Then the density function of U (with respect to vr) in the representation X = \kU is
Prooj If X E %, U(X) denotes the unique element of G: such that X = \kU(X) for some \k E Tp,,. To show go is the density function of U, it is sufficient to verify that
for all integrable functions h. Since X'X Example 6.20 show that
=
U'(X)U(X), the results of
where c = 2p(&)"~w(n, p). Since go(U) = cf0(UfU), go is the density of U. A similar argument gives the density of S
=
X'X.
Proposition 7.6. Suppose X E % has distribution P given by a density function f0(X1X),
X E %
with respect to the measure p. Then the density of S go@)
=
=
( m n P w ( n p)fo(S) ,
X'X is
with respect to the measure
Proof. With the notation S(X) = X'X, it is sufficient to verify that
for all integrable functions h . Combining the identities (6.4) and (6.5), we have
where c
=
(&)"~w(n, p). Since go = cfo, the proof is complete.
When X E % has the density assumed in Propositions 7.5 and 7.6, it is clear that the distribution of Xis On-leftinvariant. In this case, for X = \kU, \k and U are independent, has a uniform distribution on $$, ., and U has the density given in Proposition 7.5. Thus the joint distribution of \k and U has been completely described. Similar remarks apply to the situation treated in Proposition '7.6. The reader has probably noticed that the distribution of S = X'X was derived rather than the distribution of A in the representation X = \kA for \k E $$, and A E 5;. Of course, S = SO A is the unique positive definite square root of S. The reason for giving the distribution of S rather than that of A is quite simple-the distribution of A is substantially more complicated than that of S and harder to derive. In the following example, we derive the distributions of U and S when X E % has a nonsingular On-leftinvariant normal distribution.
+
+
Example 7.1. Suppose X E % has a normal distribution with a nonsingular covariance and also assume that C(X) = C(I'X) for all r E On.Thus & X = I'GX for all E On,which implies that GX = 0. Also, Cov(X) must satisfy Cov((r 8 Ip)X) = Cov(X) since C(X) = C((r 8 Ip)X). From Proposition 2.19, this implies that
for some positive definite Z as Cov(X) is assumed to be nonsingular. In summary, if X has a normal distribution in 5% that is 0,-left
240
FIRST APPLICATIONS OF INVARIANCE
invariant, then C(X)
=
N(0, In8 2 ) .
Conversely, if X is normal with mean zero and Cov(X) = In8 Z, then C(X) = C ( r X ) for all r E 0,. Now that the On-left invariant normal distributions on % have been described, we turn to the distribution of S = X'X and U described in Propositions 7.5 and 7.6. When C(X) = N(0, In8 Z), the density function of X with respect to the measure p(dX) = ~ X / I X ' X ~ is "/~
Therefore, the density of S with respect to the measure
is given by
go(^)
=
w(n, p ) l ~ - ' ~ ~ n / 2 e x p [tr- Z-'S]
according to Proposition 7.6. This density is called the Wishart density with parameters Z, p , and n. Here, p is the dimension of S and n is called the degrees of freedom. When S has such a density function, we write C(S) = W(Z, p, n), which is read "the distribution of S is Wishart with parameters Z, p, and n." A slightly more general definition of the Wishart distribution is gven in the next chapter, where a thorough discussion of the Wishart distribution is presented. A direct application of Proposition 7.5 yields the density
with respect to measure
when X = \kU, \k E %, ., and U E GL . Here, the nonzero elements of U are u i j , 1 < i < j < p. When Z = I,, g, becomes
GROUPS ACTING ON SETS
241
In G:, the diagonal elements of U range between 0 and cc and the elements above the diagonal range between - cc and + cc. Writing the density above as
we see that this density factors into a product of functions that are, when normalized by a constant, density functions. It is clear by inspection that C(ui,)= N(0,l) f o r i < j . Further, a simple change of variable shows that
Thus when Z = I,, the nonzero elements of U are independent, the elements above the diagonal are all N(0, l), and the square of the ith diagonal element has a chi-square distribution with n - i + 1 degrees of freedom. T h s result is sometimes useful for deriving the distribution of functions of S = U'U.
+
7.2. GROUPS ACTING ON SETS Suppose % i s a set and G is a group that acts on the left of % according to Definition 6.3. The group G defines a natural equivalence relation between elements of %-namely, write x, = x, if there exists a g E G such that x, = gx,. It is easy to check that = is in fact an equivalence relation. Thus the group G partitions the set % into disjoint equivalence classes, say
where A is an index set and the equivalence classes %, are disjoint. For each x E X, the set {gxlg E G) is the orbit of x under the action of G. From the definition of the equivalence relation, it is clear that, if x E %, then %, is just the orbit of x. Thus the decomposition of %into equivalence classes is
242
FIRST APPLICATIONS OF INVARIANCE
simply a decomposition of EX into disjoint orbits and two points are
equivalent iff they are in the same orbit.
Definition 7.2. Suppose G acts on the left of %. A function f on % to 9is invariant if f ( x ) = f ( g x ) for all x E EX and g E G. The function f is maximal invariant iff is invariant and f ( x , ) = f ( x , ) implies that x , = gx, for some g E G. Obviously, f is invariant iff f is constant on each orbit in X. Also, f is maximal invariant iff it is constant on each orbit and takes different values on different orbits. Proposition 7.7. Suppose f maps EX onto 9 and f is maximal invariant. Then h, mapping % into 2 , is invariant iff h ( x ) = k ( f ( x ) ) for some function k mapping 9into 2. Proot If h ( x ) = k ( f ( x ) ) , then h is invariant as f is invariant. Conversely, suppose h is invariant. Given y E 9,the set (XIf ( x ) = y } is exactly one orbit in EX since f is maximal invariant. Let z E 2 be the value of h on this orbit ( h is invariant), and define k ( y ) = z . Obviously, k is well defined and k ( f ( x ) )= h ( x ) .
Proposition 7.7 is ordinarily paraphrased by saying that a function is invariant iff it is a function of a maximal invariant. Once a maximal invariant function has been constructed, then all the invariant functions are known-namely, they are functions of the maximal invariant function. If the group G acts transitively on %, then there is just one orbit and the only invariant functions are the constants. We now turn to some examples.
+
Example 7.2. Let % = R n - (0) and let G = 8, act on EX as a group of matrices acts on a vector space. Given x E %, it is clear that the orbit of x is {ylllyll = Ilxll}. Let Sr = {xlllxll = r } , so
is the decomposition of % into equivalence classes. The real number r > 0 indexes the orbits. That f ( x ) = llxll is a maximal invariant function follows from the invariance off and the fact that f takes a different value on each orbit. Thus a function is invariant under the action of G on 9Ciff it is a function of Ilxll. Now, consider the space S , x (0, w ) and define the function on 9C to S , X ( 0 , co) by CI#
PROPOSITION
7.7
243
+
+(x) = (x/llxll, llxll). Obviously, is one-to-one, onto, and +-'(u, r ) = ru for ( u , r) E S, X (0, a ) . Further, the group action on 5% corresponds to the group action on S, x (0, co) gven by ( u , r) = ( u , r )
r E On.
+
In other words, + ( r x ) = r+(x) so is an equivariant function (see Definition 6.14). Since 0, acts transitively on S,, a function h on S, x (0, co) is invariant iff h(u, r ) does not depend on u. For this example, the space % has been mapped onto S, x (0, co) by so that the group action on % corresponds to a special group action on S, x (0, a)--namely, 0, acts transitively on S, and is the identity on (0, co). The whole point of introducing S, x (0, co) is that the function ho(u, r ) = r is obviously a maximal invariant function due to the special way in which 0, acts on S, x (0, a ) . To say it another way, the orbits in S, x (0, co) are S, x (r), r > 0, so the product space structure provides a convenient way to index the orbits and hence to give a maximal invariant function. This type of product space structure occurs in many other examples.
+
+
The following example provides a useful generalization of the example above.
+
Example 7.3. Suppose % is the space of all n x p matrices of rank p, p d n. Then On acts on the left of %by matrix multiplication. The first claim is that fo(X) = X'X is a maximal invariant function. That fo is invariant is clear, so assume that fo(X,) = f,(X,). Thus X;X, = X;X, and, by Proposition 1.31, there exists a r E 8, such that rX, = X,. This proves that fo is a maximal invariant. Now, the question is: where did fo come from? To answer this question, recall that each X E % has a unique representation as X = \kA where \k E %, and A E .;5 Let denote the map that sends X into the pair (\k, A) E % , n X Sp+ such that X = *A. The group 8, acts on n X Sp+ by
+
5,
and
+ satisfies
It is clear that ho(\kA) = A is a maximal invariant function on X ;5 under the action of 8, since 8, acts transitively on %,
%,
..
244
FIRST APPLICATIONS OF INVARIANCE
Also, the orbits in %, ,x
S; are 5$,,,x {A) for A E S; . It follows immediately from the equivariance of 4 that are the orbits in % under the action of 8,. Thus we have a convenient indexing of the orbits in % given by A. A maximal invariant function on % must be a one-to-one function of an orbit index-namely, A E SJ. However, fo(X) = X'X = A2 when X E { X J X =*A,
for some*
E
5$,,,).
Since A is the unique positive definite square root of A2 = X'X, we have explicitly shown why fo is a one-to-one function of the orbit index A. A similar orbit indexing in % can be given by elements U E G: by representing each X E % as X = *U, \k E Tp,,, and U E G.: The details of this are left to the reader.
+
Example 7.4. In this example, the set % is (RP - (0)) X group GIp acts on the left of % in the following manner:
S; . The
for (y, S ) E % and A E G1,. A useful method for finding a maximal invariant function is to consider a point (y, S ) E GX and then "reduce" (y, S ) to a convenient representative in the orbit of (y, S). The orbit of (y, S ) is {A(y, S)I A E G1,). To reduce a given point (y, S ) by A E GIp, first choose A = I'S-'/2 where E OP and S-1/2 is the inverse of the positive definite square root of S. Then and A(y, S ) = ( r ~ - ' / I~) ,~ , which is in the orbit of (y, S). Since S-'/2y and IIS-1/2yllel have the same length (E; = (1,0,. . . , O)), we can choose r E fIp such that
Therefore, for each (y, S ) E %, the point
+
PROPOSITION
7.7
is in the orbit of (y, S). Let
The above reduction argument suggests, but does not prove, that f, is maximal invariant. However, the reduction argument does provide a method for checlung that f, is maximal invariant. First, f, is invariant. To show f, is maximal invariant, iffo(yl, S,) = f0(y2, S,), we must show there exists an A E GI, such that A(y,, S,) = (y,, S,). From the reduction argument, there exists A, E G1, such that
11S,'/~~,ll = lIs;1/2~211 and this shows that AI(YI, Sl)
=
A 2 ( ~ 2S2) ,
Setting A = AT'A,, we see that A(y,, S,) = (y,, S,) so fo is maximal invariant. As in the previous two examples, it is possible to represent % as a product space where a maximal invariant is obvious. Let
9 = {(u, S)lu E R p , S E S+ P , u f.S ' u = 1). Then GI, acts on the left of
9by
A(u, S ) = (Au, ASA'). The reduction argument used above shows that the action of GI, is transitive on 9. Consider the map C$ from % t o % X (0, co)given by
The group action of GI, on
9 X (0, co) is
246
FIRST APPLICATIONS OF INVARIANCE
and a maximal invariant function is
Clearly, since GIp is transitive on 9. and satisfies
+ is a one-to-one onto function
,
+
Thus f (+(x, S)) = X'S- 'x is maximal invariant.
In the three examples above, the space X has been represented as a product space 9 x Q in such a way that the group action on X corresponds to a group action on 2'4 X %--namely,
and G acts transitively on 9. Thus it is obvious that
is maximal invariant for G acting on 9 X Q. However, the correspondence cp, a one-to-one onto mapping, satisfies for g ~ ( ~ =xg+(x) )
E
G, x
E
X.
The conclusion is that f,(cp(x)) is a maximal invariant function on 5%. A direct proof in the present generality is easy. Since
f,(cp(x)) is invariant. If f,(+(x,)) = fl(+(x2)), then there is a g E G such that g+(x,) = +(x,) since f , is maximal invariant on 9 X %. But g+(x,) = +(gx,) = +(x,), so gx, = x, as cp is one-to-one. Thus f,(cp(x)) is maximal invariant. In the next example, a maximal invariant function is easily found but the product space representation in the form just discussed is not available.
+
Example 7.5. The group Op acts on ;5 by
A maximal invariant function is easily found using a reduction argument similar to that given in Example 7.4. From the spectral
PROPOSITION
247
7.7
theorem for matrices, every S E S ; can be written in the form S = r , D r ; where TI E Op and D is a diagonal matrix whose diagonal elements are the ordered eigenvalues of S, say
Thus r;Sr, = D, which shows that D is in the orbit of S. Let fo on S; to RP be defined by: fo(S) is the vector of ordered eigenvalues of S. Obviously, f, is Op-invariant and, to show fo is maximal invariant, suppose fo(S,) = f0(S2). Then S, and S2have the same eigenvalues and we have
where D is the diagonal matrix of eigenvalues of Si, i = 1,2. Thus r2r;Sl(r2r;)' = S2, SO fO is maximal invariant. To describe the technical difficulty when we try to write S,fas a product space, first consider the case p = 2. Then : S = X I U %, where
and
+,
That 0, acts on both %, and %, is clear. The function defined on X I by +,(S) = X,(S) E (0, oo) is maximal invariant and establishes a one-to-one correspondence between %, and (0, 03). For %, define +, by
+,
so +2 is a maximal invariant function and takes values in the set 9 of all 2 X 2 diagonal matrices with diagonal elements y, and y,, y, > y2 > 0. Let 9, be the subgroup of 8, consisting of those diagonal matrices with rt 1 for each diagonal element. The argument given in Example 6.21 shows that the mapping constructed there establishes a one-to-one onto correspondence between %, and (8,/9,) x 9 , and 0, acts on (8,/9,) x 3' by
248
FIRST APPLICATIONS OF INVARIANCE
Further,
+ satisfies
S has been decomposed into GX, and %, whch Thus for p = 2, : are both invariant under 8,. The action of 8, on X I is trivial in that Tx = x for all x E GX, and a maximal invariant function on GX, is the identity function. Also, %, was decomposed into a product space where 8, acted transitively on the first component of the product space and trivially on the second component. From t h s decomposition, a maximal invariant function was obvious. Similar decompositions for p > 2 can be given for S;, but the number of component spaces increases. For example, when p = 3, let A,(S)> A, (S)>, A, (S) denote the ordered eigenvalues of S E 5:. The S is relevant decomposition for : where
Each of the four components is acted on by 8, and can be written as a product space with the structure described previously. The details of this are left to the reader. In some situations, it is sufficient to ; where consider the subset % of S
The argument given in Example 6.21 shows how to write % as a product space so that a maximal invariant function is obvious under the action of Op on Z.
+
Further examples of maximal invariants are given as the need arises. We end this section with a brief discussion of equivariant functions. Recall (see Definition 6.14) that a function + on % onto 9 is called equivariant if +(gx)= g+(x)where G acts on %, G acts on 9, and G is a homomorphic
PROPOSITION
249
7.8
image of G. If G = (2) consists only of the identity, then equivariant functions are invariant under G. In thls case, we have a complete description of all the equivariant functions-namely, a function is equivariant iff it is a function of a maximal invariant function on %. In the general case when G is not the trivial group, a useful description of all the equivariant functions appears to be rather difficult. However, there is one special case when the equivariant functions can be characterized. Assume that G acts transitively on % and G acts transitively on 9, where G is a homomorphic image of G. Fix xo E % and let
The subgroup Ho of G is called the isotropy subgroup of x,. Also, fix yo and let
E
9
be the isotropy subgroup of yo.
+
Proposition 7.8. In order that there exist an equivariant function on % to % such that + ( x o )= y o , it is necessary and sufficient that & G KO. Here 4 c G is the image of Ho under the given homomorphism. Proof First, suppose that for g E Ho,
+ is equivariant and satisfies + ( x o )= yo. Then,
so g E KO.Thus Ro c KO.Conversely, suppose that G K,. For x E %, the transitivity of G on % implies that x = gx, for some-g. ~ e f i n + e on % to 9by
+ ( x ) = gyo
where x
=
gxo.
well defined and is equivariant. If x It must be shown that isg2xo, then g; 'g, E Ho so g; 'g, E KO.Thus
since
=
g,xo =
250
FIRST APPLICATIONS OF INVARIANCE
Therefore + is well defined and is onto
That 4 is equivariant is easily checked.
9 since G acts transitively on 9.
The proof of Proposition 7.8 shows that an equivariant function is determined by its value at one point when G acts transitively on 96.More precisely, if and +, are equivariant functions on % such that + , ( x , ) = + , ( x , ) for some x , E %, then + , ( x ) = + , ( x ) for all x. To see thls, write x = gx, so
+,
Thus to characterize all the equivariant functions, it is sufficient to determine the possible values of + ( x , ) for some fixed x , E %. The following example illustrates these ideas.
+
Example 7.6. Suppose % = 9 = 5 ; and G = G = GI where the homomorphism is the identity. The action of GIp on s i p i s
To characterize the equivariant functions, pick x , equivariant function must satisfy
+
=
I,
E
S;.
An
for all r E Op. By Proposition 2.13, a matrix + ( I p ) satisfies this equation iff $ ( I p ) = kIp for some real constant k . Since + ( I p ) E S ;, k > 0. Thus
and for S
E, ;S
Therefore, every equivariant function has the form + ( S ) = kS for some k > 0.
+
Further applications of the above ideas occur in the following sections after it is shown that, under certain conditions, maximum likelihood estimators are equivariant functions.
251
INVARIANT PROBABILITY MODELS
7.3. INVARIANT PROBABILITY MODELS Invariant probability models provide the mathematical framework in which the connection between statistical problems and invariance can be studied. Suppose (%, 3 )is a measurable space and G is a group of transformations acting on X such that each g E G is a one-to-one onto measurable function from % to 5%. If P is a probability measure on ( O X , 3 ) and g E G, the probability measure gP on (%, 3 ) is defined by
It is easily verified that ( g l g 2 ) P= g , ( g , P ) so the group G acts on the space of all probability measures defined on (X, 3).
Definition 7.3. Let 9 be a set of probability measures defined on (%, 3). The set 9 is invariant under G if for each P E 9,gP E 9 for all g f G. Sets of probability measures 9 are called probability models, and when 9 is invariant under G, we speak of a G-invariant probability model. g
If X E X is a random vector with C ( X ) = P , then C ( g X ) = gP for G since
E
Thus 9 is invariant under G iff whenever C ( X ) E 9 , C ( g X ) E 9 for all g E G. There are a variety of ways to construct invariant probability models from other invariant probability models. For example, if Ta, a E A, are G-invariant probability models, it is clear that
U Ta and
a€A
(7
a€A
are both G-invariant. Now, given (%, 3 ) and a G-invariant probability model 9, form the product space %'"'= G X X
% x ... x %
and the product a-algebra a ( " ) on %("I. For P first defining
E
9 , define P ( " ) on a ( " ) by
252
FIRST APPLICATIONS OF INVARIANCE
where B, E 3; Once P(") is defined on sets of the form B , x - . . extension to %(") is unique. Also, define G acting on %(" by
for x
=
( x , , . . . , x,)
E
x B,, its
%(").
Proposition 7.9. Let 9 ( " )= { P ( " ) l PE 9). Then 9 ( " ) is a G-invariant when 9 is G-invariant. probability model on (%("),
a("))
Proof: It must be shown that gP(") E T ( " ) for g E G and P ( " ) E 9 ( " ) . However, P ( " )is the product measure P ( " ' = P ~P X
a
.
3
XP;
PET
and P ( " )is determined by its values on sets of the form B ,
x - .. x
B,. But
where the first equality follows from the definition of the action of G on
EX("). Then g ~ ( "is) the product measure
which is in 9(") as gP
E
9.
For an application of Proposition 7.9, suppose Xis a random vector with C(X) E 9 where 9 is a G-invariant probability model on %. If XI,. . . , X, are independent and identically distributed with C(Xi) E 9, then the random vector Y = (XI,..., X,) E %(") has distribution P(") E 9 ( " ) when C(Xi) = P, i = 1,. . . , n. Thus 9 ( " ) is a G-invariant probability model for Y. In most applications, probability models 9 are described in the form 9 = {PB18 E 8 ) where 8 is a parameter and O is the parameter space. When discussing indexed families of probability measures, the term "parameter space" is used only in the case that the indexing is one-to-one-that is,
PROPOSITION
253
7.10
POI= PO2implies that 8 , = 8,. Now, suppose 9 = { P B ) 8E O) is G-invariant. Then for each g E G and 8 E O, gP, E 9,so gP, = Po, for some unique 8' E O. Define a function g on 6) to O by
In other words, g8 is the unique point in O that satisfies the above equation. Proposition 7.10. Each g is a one-to-one onto function from O to O. Let G = { g l g E G). Then G is a group under the group operation of function composition and the mapping g -* g is a group homomorphism from G to G, that is:
ProoJ: To show that
g is one-to-one, suppose go,
=
g8,. Then
8 = 8,. The verification that g is onto goes which implies that Pol = Po2 so 1 as follows. If 8 E O, let 8' = g-'8. Then
so go' = 8 . Equations (i) and (ii) follow by calculations similar to those above. This shows that G is the homomorphic image of G and G is a group. An important special case of a G-invariant parametric model is the ) assume that v is a o-finite measure following. Suppose G acts on (%, 91and on (%, 3 ) that is relatively invariant with multiplier X , that is,
for all integrable functions f . Assume that model and
9 = {Pole E O) is a parametric
for all measurable sets B. Thusp(.le) is a density for P, with respect to v. If
254
FIRST APPLICATIONS OF INVARIANCE
9 is G-invariant, then gP,=P,,
forg€G,B~O.
Therefore,
for all measurable sets B. Thus the density p must satisfy x(g-')p(g-'xle)
= P(xIEe)
a-e. ( 4
or, equivalently, ~ ( ~ 1 =8P(gxlEe)x(g) )
a.e. ( 4 .
It should be noted that the null set where the above equality does not hold may depend on both 8 and g. However, in most applications, a version of the density is available so the above equality is valid everywhere. Thls leads to the following definition. Definition 7.4. The family of densities (p(-le)le E O} with respect to the relatively invariant measure v with multiplier x is (G - G)-invariant if
~ ( x l e= ) P(gxlse)x(g) for all x, 8, and g. It is clear that if a family of densities is (G - G)-invariant where G is a homomorphic image of G that acts on O, then the family of probability measures defined by these densities is a G-invariant probability model. A few examples illustrate these notions.
255
PROPOSITION 7.10
+
is a density with Example 7.7. Let % = Rn and suppose f(11~)1~) respect to Lebesgue measure on Rn. For p E Rn and 2: E S ;, set
For each p and Z, p(.lp, 2:) is a density on Rn. The affine group Al, acts on Rn by (A, b)x = Ax + b and Lebesgue measure is relatively invariant with multiplier
where (A, b) E Al,. Consider the parameter space Rn x family of densities
The group Al, acts on the parameter space Rn X (A, b)(p, 2:)
=
(Ap
S;
S;
and the
by
+ b, A2:Ar).
It is now verified that the family of densities above is (G - G)invariant where G = G = Al,. For (A, b) E Al,,
Therefore, the parametric model determined by the family of densities is Al,-invariant.
+
A useful method for generating a G-invariant probability model on a measurable space (%, 3 ) is to consider a fixed probability measure Po on (%, 3 ) and set
9 = {gPolg E G). Obviously, 9 is G-invariant. However, in many situations, the group G does
256
FIRST APPLICATIONS OF INVARIANCE
not serve as a parameter space for 9' since g, Po = g2Po does not necessarily imply that g, = g,. For example, consider % = Rn and let Po be given by
where f(11~11~) is the density on Rn of Example 7.7. Also, let G = Al,. To obtain the density of gPo, suppose X is a random vector with C(X) = Po. For g = (A, b) E Al,, (A, b)X = AX + b has a density given by p(xlb, AA')
=
~ d e t ( ~ ~ ' ) ~ - ' / ~-f~( )(' x( A A ' ) - ' ( x- b))
and this is the density of (A, b)Po. Thus the parameter space for
is Rn x that
S;.
Of course, the reason that Al, is not a parameter space for 9 is
for all n x n orthogonal matrices r. In other words, Po is an orthogonally invariant probability on Rn. Some of the linear models introduced in Chapter 4 provide interesting examples of parametric models that are generated by groups of transformations. 4
Example 7.8. Consider an inner product space (V, [. , -1) and let Po be a probability measure on V so that if C ( X) = Po, then GX = 0 and Cov(X) = I. Given a subspace M of V, form the group G whose elements consist of pairs (a, x ) with a > 0 and x E M. The group operation is
The probability model 9' = {gPolg E G) consists of all the distributions of (a, x ) X = a x + x where C(X) = Po. Clearly, G(aX+x)=x
and C o v ( a X + x ) = a 2 1 .
Therefore, if C(Y) E 9,then GY E M and Cov(Y) = a 2 1for some a 2 > 0, so 9' is a linear model for Y. For this particular example the
PROPOSITION
257
7.10
group G is a parameter space for 9. This linear model is generated by G in the sense that 9 is obtained by transforming a fixed probability measure Po by elements of G.
+
An argument similar to that in Example 7.8 shows that the multivariate linear model introduced in Example 4.4 is also generated by a group of transformations.
+
Example 7.9. Let C,, ,be the linear space of real n x p matrices with the usual inner product ( , .) on C,,,. Assume that Po is a probability measure on C,,, so that, if C(X) = Po, then GX = 0 and Cov(X) = I, 8 I,. To define a regression subspace M, let Z be a fixed n x k real matrix and set
Obviously, M is a subspace of gp,,. Consider the group G whose elements are pairs (A, y) with A E GI, and y E M. Then G acts on 'p, n by (A, y ) x
=
xA'
+ y = (I, 8 A)x + y,
and the group operation is
The probability model 9 = (gPolg E G) consists of the distributions of (A, y ) X = (I, 8 A)X + y where C(X) = Po. Since
and Cov((1, 8 A)X
+ y ) = I, 8 AA',
if C(Y) E 9 , then GY E M and Cov(Y) = I, 8 2 for some p x p positive definite matrix 2 . Thus 9 is a multivariate linear model as described in Example 4.4. If p > 1, the group G is not a parameter space for 9 , but G does generate 9 .
+
Most of the probability models discussed in later chapters are examples of probability models generated by groups of transformations. Thus these models are G-invariant and this invariance can be used in a variety of ways.
258
FIRST APPLICATIONS OF INVARIANCE
First, invariance can be used to give easy derivations of maximum likelihood estimators and to suggest test statistics in some situations. In addition, distributional and independence properties of certain statistics are often best explained in terms of invariance. 7.4.
THE INVARIANCE OF LIKELIHOOD METHODS
In this section, it is shown that under certain conditions maximum likelihood estimators are equivariant functions and likelihood ratio tests are invariant functions. Throughout this section, G is a group of transformations that act measurably on (GX, 91) and v is a o-finite relatively invariant measure on (GX, 91) with multiplier X. Suppose that 9 = {P,(B E O) is a G-invariant parametric model such that each Po has a density p(.le), which satisfies for all x E %, 8 E 0, and g E G. The group G = {glg E G) is the homomorphic image of G described in Proposition 7.10. In the present context, a point estimator of 0, say t , mapping %into O, is equivariant (see Definition 6.14) if
Proposition 7.11. Given the (G - G)-invariant family of densities (p(.le)lO E O), assume there exists a unique function 8 mapping % into O that satisfies
Then 8 is an equivariant function-that is,
Proof. By assumption, 8(gx) is the unique point in 0 that satisfies
But
PROPOSITION
7.12
SO
=
x(g-')P(xle(x))
= p(gxld(x)).
Thus
and, by the uniqueness assumption,
Of course, the estimator d(x) whose existence and uniqueness is assumed in Proposition 7.11 is the maximum likelihood estimator of 8. That 8 is an equivariant function is useful information about the maximum likelihood estimator, but the above result does not indicate how to use invariance to find the maximum likelihood estimator. The next result rectifies this situation. Proposition 7.12. Let {p(.le)l8 E @} be a ( G - G)-invariant family of densities on (%, 93). Fix a point xo E % and let Oxo be the orbit of x,. Assume that
and that 8, is unique. For x d(x) Then
=
E
Q0, define e(x) by
gx8,
where x
e is well defined on Fxo and satisfies
(i) &gx) = &x), x E Oxo. ( 4 s u ~ e , e ~ ( x l e=) P ( x I ~ ( x ) )x, E Furthermore, 9 is unique. Proof. The density p (. 18) satisfies
oxo.
=
gxxo.
260
FIRST APPLICATIONS OF INVARIANCE
where x is a multiplier on G. To show 8 is well defined on Qo, it must be verified that if x = gxxo = h x x o ,then gxeo = hxOO.Set k = so k x , = x , and we need to show that kB, = 8,. But
By the uniqueness assumption, kB, = 8, so 8 is well defined on Oxo. To establish (i), if x = g x x o , then gx = ( g g x ) x oso
For (ii), x
=
gxxo so
To establish the uniqueness of
8, fix x
E
Oxo and consider 8 , * &B,. Then
The strict inequality follows from the uniqueness assumption concerning 8,.
In applications, Proposition 7.12 is used as follows. From each orbit in the sample space %, we pick a convenient point x , and show that p ( x , l e ) is uniquely maximized at 8,. Then for other points x in this orbit, write x = g,x, and set 8 ( x ) = gxBo. The function 8 is then the maximum likeli-
PROPOSITION7.12
261
hood estimator of 8 and is equivariant. In some si-tuations,there is only one orbit in % SO this method is relatively easy to apply.
+ Example 7.10. Consider % = 0 = S; and let for S E S ; and Z E .;5$ The constant w(n, p), n 2 p, was defined in Example 5.1. That p ( . IZ) is a density with respect to the measure
follows from Example 5.1. The group GIp acts on S ; by A ( S ) = ASA' for A E G1, and S E S ; and the measure v is invariant. Also, it is clear that the density p(.IZ) satisfies
; , we apply the To find the maximum likelihood estimator of Z E S technique described above. Consider the point I, E S ; and note that the orbit of I, under the action of GIp is S ; so in this case there is only one orbit. Thus to apply Proposition 7.12, it must be verified that sup P ( I ~ I =~p(~,lZo) )
xss;
where Zo is unique. Taking the logarithm of p(IplZ) and ignoring the constant term, we have
=
sup
A,>O
[z:n
loghi 1
where A,,. . . , A, are the eigenvalues of B = 2-' E S;. However, for A > 0, nlog A - X is a strictly concave function of A and is
262
FIRST APPLICATIONS OF INVARIANCE
uniquely maximized at X
=
is uniquely maximized at A,
n. Thus the function
=
. -
n 2
=
A,
=
n, which means that
1 2
- loglBl - - t r B
is uniquely maximized at B
=
nI. Therefore,
and (l/n)I, is the unique point in S ; that achieves this supremum. To find the maximum likelihood estimator of Z, say f ( S ) , write S = AA' for A E GI,. Then
In summary,
is the unique maximum likelihood estimator of Z and
-1
=
a n ,S
)
4
"/2
exp[-
tr(:~)-'~]
The results of this example are used later to derive the maximum likelihood estimator of a covariance matrix in a variety of multivariate normal models.
+
We now turn to the invariance of likelihood ratio tests. First, invariant testing problems need to be defined. Let 9' = {P,Id E O) be a parametric
PROPOSITION
263
7.13
probability model on (%, 3 ) and suppose that G acts measurably on %. Let 0, and 0, be two disjoint subsets of 0. On the basis of an observation vector X E % with C(X) E 9, U 9, where
suppose it is desired to test the hypothesis H, : C(X)
E
9,
H , : C(X)
E
9,.
against the alternative
Definition 7.5. The above hypothesis testing problem is invariant under G if 9,and 9,are both G-invariant probability models.
Now suppose that 9, = {PoleE 0,) and 9, = (PB(8E 0 , ) are disjoint families of probability measures on (%, 9)such that each P has a density p(.10) with respect to a a-finite measure v. Consider
For testing the null hypothesis that C(X) E 9, versus the alternative that C(X) E 9 , , the test that rejects the null hypothesis iff A(x) < k , where k is chosen to control the level of the test, is commonly called the likelihood ratio test. Proposition 7.13. Given the family of densities {p(.le)le E 0, U o , ) , assume the testing problem for C(X) E 9, versus C(X) E 9, is invariant under a group G and suppose that
for some multiplier X. Then the likelihood ratio
is an invariant function.
264
Proot
FIRST APPLICATIONS OF INVARIANCE
It must be shown that A(x)
=
A(gx) for x
E
% and g
E
G. For
g E G,
The next to the last equality follows from the positivity of invariance of 0, and 0, U O .
,
x
and the
For invariant testing problems, Proposition 7.13 shows that that test function determined by A, namely
is an invariant function. More generally, any test function $I is invariant if +(x) = +(gx) for all x E % and g E G. The whole point of the above discussion is to show that, when attention is restricted to invariant tests for invariant testing problems, the likelihood ratio test is never excluded from consideration. Furthermore, if a particular invariant test has been shown to have an optimal property among invariant tests, then this test has been compared to the likelihood ratio test. Illustrations of these comments are given later in this section when we consider testing problems for the multivariate normal distribution. Comments similar to those above apply to equivariant estimators. Suppose (p(.lO)lO E @) is a ( G - G)-invariant family of densities and satisfies
for some multiplier X. If the conditions of Proposition 7.12 hold, then an equivariant maximum likelihood estimator exists. Thus if an equivariant estimator t with some optimal property (relative to the class of all equivariant estimators) has been found, then this property holds when t is compared to the maximum likelihood estimator. The Pitman estimator, derived in the next example, is an illustration of this situation.
+ Example 7.11. Let f be a density on RP with respect to Lebesgue
measure and consider the translation family of densities {p(.l8)18 E RP) defined by
For this example, X = O
=
g(x)=x
G
=
RP and the group action is
+ g,
x , g E RP.
It is clear that p ( g x l g @ )= ~ ( x l e ) ,
so the family of densities is invariant and the multiplier is unity. It is assumed that xf ( x ) dx L
=
0 and
/[lxl12f ( x ) dx <
P
+ m.
Initially, assume we have one observation X with C ( X ) E { p ( - l 8 ) \ 8 E RP). The problem is to estimate the parameter 8 . As a measure of how well an estimator t performs, consider
If t ( X ) is close to 8 on the average, then R(t, 8 ) should be small. We now want to show that, if t is an equivariant estimator of 8 , then
and the equivariant estimator to( X ) = X minimizes R ( t , 0 ) over all equivariant estimators. If t is an equivariant estimator, then t(x
so, with g
+g) = t(x)+g
= -x ,
t(x)=x
+ t(0).
Therefore, every equivariant estimator has the form t ( x ) = x + c where c E RP is a constant. Conversely, any such estimator t ( x ) =
266
FIRSTAPPLICATIONS OF INVARIANCE
x
+ c is equivariant. For t ( x ) = x + c,
To minimize R ( t , 0 ) over all equivariant t , the integral
must be minimized by an appropriate choice of c. But
minimizes the above integral. Hence t , ( X ) = X minimizes R ( t , 0 ) over all equivariant estimators. Now, we want to generalize t h s result to the case when X I , .. . , X, are independent and identically distributed with C(&) E { p ( . l e ) l e E RP), i = 1 , . . . , p . The argument is essentially the same as when n = 1. An estimator t is equivariant if t ( x l + g , . . ., X ,
so, setting g
+ g ) = t ( x l , .. ., x , ) + g
= -x , ,
t ( x l ,...,
X,) = X1
+ t ( 0 , x 2 - X I ,..., X ,
- XI).
Conversely, if t ( x l ,...,
X,) = Xl
+ + ( x 2 - X I ,..., X ,
-XI)
then t is equivariant. Here, is some measurable function taking values in RP. Thus a complete description of the equivariant estima-
DISTRIBUTION THEORY AND INVARIANCE
tors has been given. For such an estimator,
=
&,llt(Xl ,..., Xn)l12= R(t,O).
To minimize R(t, 0), we need to choose the function \k to minimize R(t,O)
Let
=
GollXl
+ \E(&
- X,,.
. ., X,
- XI)1l2.
q. = Xi - XI, i = 2,. . . , n. Then R(t,O)
=
&,IIXl + 'k(u2,. . ., Un)1I2
=
& ( & { I I+ ~ ,w , , . .. , U,)1121U,,. . ., u,)).
However, conditional on (U,, . . . , U,) = U, & ( l l ~ I
+ ' E ( ~ ) I I=~ &I ( ~l l )~ l- &(XIlU) + &(XllU) + 'k(u)l121u) = &(ll~I -
&(xllu)l121u) + Il&(XIlU) + W ) 1 l 2 .
Thus it is clear that ' k o ( u ) = -&(X11U)
minimizes R(t, 0). Hence the equivariant estimator t o ( xI , . . . , x,)
=
XI - & o ( x l ~ x-2xI , . . . ,
x, - x , )
=
R ( ~ , , o )G R ( ~ , o = ) ~ ( te ,)
satisfies ~ ( t , e, )
for all 8 E R P and all equivariant estimators t. The estimator to is commonly called the Pitman estimator.
+
7.5. DISTRIBUTION THEORY AND INVARIANCE When a family of distributions is invariant under a group of transformations, useful information can often be obtained about the distribution of invariant functions by using the invariance. For example, some of the results in Section 7.1 are generalized here.
268
FIRST APPLICATIONS OF INVARIANCE
The first result shows that the distribution of an invariant function depends invariantly on a parameter. Suppose (%, 3 ) is a measurable space acted on measurably by a group G. Consider an invariant probability model 9 = {P,(8 E 0 ) and let G be the induced group of transformations on O. Thus
C?) induces a family of distribuA measurable mapping r on (%, $8) to (9, tions on (9, C?), {Q,le E O} given by Proposition 7.14.
If
7
Proof. For each C E
is G-invariant, then Q,
=
Qp for 6'
E
O and g
E
G.
C?, it must be shown that
or, equivalently, that
But
Since rg
=
r as r is invariant,
An alternative formulation of Proposition 7.14 is useful. If C( X) E {P018 0 ) and if 7 is G-invariant, then the induced distribution of Y = r(X), which is Q,, satisfies Qe = In other words, the distribution of an invariant function depends only on a maximal invariant parameter. By definition, a maximal invariant parameter is any function defined on O that is maximal invariant under the action of Gon O. Of course, O is usually not a parameter space for the family {Q,l6' E O} as Q, = Q,-,, but any maximal G-invariant function on O often serves as a parameter index for the distribution of Y = r(X). E
+
ego.
Example 7.12. In this example, we establish a property of the distribution of the bivariate sample correlation coefficient. Consider
PROPOSITION 7.14
a family of densitiesp(.Ip, Z) on R2 given by
where p
E
R2 and Z E 5.:
Hence
and it is assumed that
Since the distribution on R2 determined by fo is orthogonally invariant, if Z E R2 has density fo(llxl12), then GZ
=
0
and Cov(Z)
=
cI,
+
for some c > 0 (see Proposition 2.13). Also, Z, = 2 ' / 2 ~ p has densityp(.Ip, 2 ) when Z has density fo(llxl12).Thus GZ,
=
p and Cov(Zl) = cZ.
The group Al, acts on R, by (A, b)x
=
+b
Ax
and it is clear that the family of distributions, say 9 = {P,,,I(p, Z) E CR2 X S : ) , having the densities p(.lp, Z), p E R2, Z E S l , is invariant under this group action. Lebesgue measure on R2 is relatively invariant with multiplier x u , b)
=
ldet(A)I
and P ( X I P ,2 )
= P((A,
~ ) X I A+Pb, AZA')X(A,b).
Obviously, the group action on the parameter space is (A, b)(p, 2 )
=
(Ap
+ b, AZA')
and (A
,
= A
,B
) , )
p,',
,
E
9.
270
FIRST APPLICATIONS OF INVARIANCE
Now, let X,, . . . , X,, n 2 3, be a random sample with C(Xi) E 9 SO the probability model for the random sample is A12-invariant by Proposition 7.9. Consider X = (l/n)C;X, and S = C;(& - X)(X, - X)' so X is the sample mean and S is the sample covariance matrix (not normalized). Obviously, S = S(X,,. . . , X,) is a function of XI,. . . , X, and S(AX,
+ b,. . . , AX, + b) = AS(X,,. . . , X,)A'.
That is, S is an equivariant function on (R2)" to group action on S l is (A, b)(S) Writing S
E
: S
=
: S
where the
ASA'.
as
the sample correlation coefficient is
Also, the population correlation coefficient is
when the distribution under consideration is P,, , and
Now, given that the random sample is from P,,,, the question is: how does the distribution of r depend on ( p , I ) ? To show that the distribution of r depends only on p, we use an invariance argument. Let G be the subgroup of Al, defined by (A, b)l(A, b) For (A, b)
E
E
A12,A
=
1122),ai,> 0, i
G, a bit of calculation shows that r
=
=
1,2
r(Xl,. . . , X,)
=
PROPOSITION
7.14
271
r ( A X , + b , . . . , AX, + b ) so r is a G-invariant function of X,,. . . , X,. By Proposition 7.14, the distribution of r , say Q,,=, satisfies
,
Thus Q,, depends on p, Z only through a maximal invariant function on the parameter space R2 X S l under the action of G. Of course, the action of G is
We now claim that
is a maximal G-invariant function. To see this, consider ( p , Z ) E R2 x: S . By choosing
and b
=
- A p , ( A , b ) E G and
so this point is in the orbit of (p, 2 ) and an orbit index is p. Thus p is maximal invariant and the distribution of r depends only on ( p , Z ) through the maximal invariant function p. Obviously, the distribution of r also depends on the function f,, but f, was considered fixed in this discussion. 4 Proposition 7.14 asserts that the distribution of an invariant function depends only on a maximal invariant parameter, but this result is not especially useful if the exact distribution of an invariant function is desired. The remainder of the section is concerned with using invariance arguments, when G is compact, to derive distributions of maximal invariants and to characterize the G-invariant distributions. First, we consider the distribution of a maximal invariant function when a compact topological group G acts measurably on a space (%, 3).Suppose that po is a a-finite G-invariant measure on (%, 3 )and f is a density with
272
FIRST APPLICATIONS OF INVARIANCE
respect to p,. Let r be a measurable mapping from ( X , 3 ) onto ( 9 , (3). Then r induces a measure on (%, C?), say v,, given by v o w
=
po(r-'(c))
and the equation
holds for all integrable functions h on ( 9 , t?). Since the group G is compact, there exists a unique probability measure, say 6, that is left and right invariant.
e)
Proposition 7.15. Suppose the mapping T from ( X , 3)onto ( 9 , is maximal invariant under the action of G on %. If X E %has density f with respect to p,, then the density of Y = r ( X ) with respect to v, is given by
Proof. First, the integral
is a G-invariant function of x and thus can be written as a function of the maximal invariant r . This defines the function q on 9. To show that q is the density of Y , it suffices to show that
for all bounded measurable functions k. But
The last equality holds since po is G-invariant and r is G-invariant. Since 6 is
PROPOSITION
7.15
a probability measure
Using Fubini's Theorem, the definition of q and the relationship between y o and v,, we have
In most situations, the compact group G will be the orthogonal group or some subgroup of the orthogonal group. Concrete applications of Proposition 7.15 involve two separate steps. First, the function q must be calculated by evaluating
Also, given p o and the maximal invariant r , the measure v, must be found.
+
Example 7.13. Take % = Rn and let y o be Lebesgue measure. The orthogonal group 0,acts on R nand a maximal invariant function is r ( x ) = 11x11~SO 9 = [O, a).If a random vector X E R" has a density f with respect to Lebesgue measure, Proposition 7.15 tells us how to find the density of Y = 11 x112with respect to the measure v,. To find vo, consider the particular density
Thus C ( X ) = N(0, I,), so C ( Y ) = X: and the density of Y with respect to Lebesgue measure dy on [0, co) is
Therefore, where
274
FIRST APPLICATIONS OF INVARIANCE
Since f , ( r x )
= f,(x),
the integration of f, over 8, is trivial and
4 0 ( Y ) = (J21;)-"exp[-Syl.
Solving for v,(dy), we have
since r ( 4 ) = 6. Now that v, has been found, consider a general density f on Rn. Then
and q ( y ) is the density of Y density f is given by
=
llX1)2with respect to v,. When the
then it is clear that
so the distribution of Y has been found in this case. The noncentral 1 ~ an interesting examchi-square distribution of Y = 1 1 ~ 1 provides ple where the integration over 8, is not trivial. Suppose C ( X ) = N ( P , In) so
Thus
Since x and Ilxll~,have the same length, x = l l x l l r , ~ ,for some I?, E Qn where E , is the first standard unit vector in Rn. Similarly,
PROPOSITION7.15
p
=
lIpllr2&lfor some r,
E
8,. Setting h = llp112,
q ( y ) = (&)-"exp[-
i ~ ] e x p [ -3 y l
Thus to evaluate q, we need to calculate H ( u ) = ~ne~p[~~;I';I"I'2&,]8(dI').
Since 8 is left and right invariant,
where y,, is the (1,l) element of I?. The representation of the uniform distribution on On given in Proposition 7.2 shows that when I' is uniform on O,, then
where C(Z) = N(0, I,) and 2, is the first coordinate of 2. Expanding the exponential in a power series, we have
Thus the moments of U, = Z1/llZll need to be found. Obviously, C(Ul) = C(- U,), so all odd moments of U, are zero. Also, Uf = z:/(z: + 2$2,2), whch has a beta distribution with parameters 3 and ( n - 1)/2. Therefore,
276
FIRST APPLICATIONS OF INVARIANCE
Hence
is the density of Y with respect to the measure v,. A bit of algebra and some manipulation with the gamma function shows that
where
Xk
distribution. This is the expression for the is the density of a density of the noncentral chi-square distribution discussed in Chapter 3.
+
Example 7.14. In this example, we derive the density function of the order statistic of a random vector X E Rn. Suppose X has a density f with respect to Lebesgue measure and let X,, . . . , Xn be the coordinates of X. Consider the space 94 c Rn defined by
The order statistic of X is the random vector Y E 3 consisting of the ordered values of the coordinates of X. More precisely, Y, is the smallest coordinate of X, Y2 is the next smallest coordinate of X, and so on. Thus Y = r ( X ) where r maps each x E Rn into the ordered coordinates of x-say r ( x ) E 94. To derive the density function of Y, we show that Y is a maximal invariant under a compact group operating on Rn and then apply Proposition 7.15. Let G be the group of all one-to-one onto functions from {l,2,. . . , n) to {1,2,. . . , n)-that is, G is the permutation group of {1,2,. . . , n). Of course, the group operation is function composition, the group inverse is function inverse, and G has n! elements. The group G acts on the left of Rn in the following way. For x E Rn and T E G, define TX E Rn to have ith coordinate x(mP'(i)). Thus the ith coordinate of ~ r xis the mP'(i) coordinate of x, so
+
PROPOSITION
7.15
277
The reason for the inverse on m in this definition is so that G acts on the left of Rn-that is,
It is routine to verify that the function r on % to 9 is a maximal invariant under the action of G on Rn. Also, Lebesgue measure, say I, is invariant so Proposition 7.15 is applicable as G is a finite group and hence compact. Obviously, the density q of Y = r ( X ) is
for y E 9 . To derive the measure v, on 9 , consider a measurable subset C c 9. Then
and
The third equality follows since (mlC) n (m2C) has Lebesgue measure zero for m, * m2 as the boundary of 9 in Rn has Lebesgue measure zero. Thus v, is just n! times I restricted to 9.Therefore, the density of the order statistic Y, with respect to v, restricted to 9, is
When f is invariant under perinutations, as is the case when XI,. . . , Xn are independent and identically distributed, we have
The next example is an extension of Example 7.13 and is related to the results in Proposition 7.6.
278
+
FIRST APPLICATIONS OF INVARIANCE
Suppose Xis a random vector in f?, ,, n > p, which has a density f with respect to Lebesgue measure dx on $,.,. Let r Example 7.15.
map C,,, onto the space of p x p positive semidefinite matrices, say S l , by r(x) = x'x. The problem in this example is to derive the density of S = r(X) = X'X. The compact group 0, acts on C,, ,and a group element r E 8, sends x into rx. It follows immediately from Proposition 1.31 that r is a maximal invariant function under the action of 8,on C,, ,. Since dx is invariant under On, Proposition 7.15 shows that the density of S is
with respect to the measure V, on 5; induced by dx and r. To find the measure v,, we argue as in Example 7.13. Consider the particular density
on
C,, so C(X) = N(0, In8 I,). For this f,, the density of S is
with respect to v,. However, by Propostion 7.6, the density of S with respect to dS/JSJ(P+')I2is
+
q l ( s ) = w(n, p ) l ~ J " / ~ e x ~t [r -( ~ ) ] . Therefore,
which shows that
279
PROPOSITION 7.15
In the above argument, we have ignored the set of Lebesgue measure zero where x E C,, has rank less than p. The justification for this is left to the reader. Now that v, has been found, the density of S for a general density f is obtained by calculating
When f ( x ) = h(x'x), then f ( r x ) = h(x'x) = h(7(x)) and q(S) = h(S). In this case, the integration over 6, is trivial. Another example where the integration over On is not trivial is given in the next chapter when we discuss the noncentral Wishart distribution.
+
As motivation for the next result of this section, consider the situation discussed in Proposition 7.3. This result gives a characterization of the On-leftinvariant distributions by representing each of these distributions as a product measure where one measure is a fixed On-invariant distribution and the other measure is arbitrary. The decomposition of the space X into the product space $,, , X G: provided the framework in whlch to state this representation of On-left invariant distributions. In some situations, this product space structure is not available (see Example 7.5) but a product measure representation for On-invariant distributions can be obtained. It is established below that, under some mild regularity conditions, such a representation can be given for probability measures that are invariant under any compact topological group that acts on the sample space. We now turn to the technical details. In what follows, G is a compact topological group that acts measurably on a measure space ( X , 3 ) and P is a G-invariant probability measure on (%, 9). The unique invariant probability measure on G is denoted by p and the symbol U E G denotes a random variable with values in G and distribution p. The a-algebra for G is the Bore1 a-algebra of open sets so U is a measurable function defined on some probability space with induced distribution p. Since G acts on %, % can be written as a disjoint union of orbits, say
where @isan index set for the orbits and %, n %, a fixed element of X, = (gx,Jg E G). Also, set
=@
if a
* a'. Let x,
be
280
FIRST APPLICATIONS OF INVARIANCE
and assume that 3i to ?I by
9is a measurable subset of 5%. The function r defined on
is obviously a maximal invariant function under the action of G on 5%. It is assumed that r is a measurable function from 5% to 9 where 9 has the a-algebra inherited from %. A subset B , c 9is measurable iff B , = 9 n B for some B E 3. If X E 5% has distribution P, then the maximal invariant Y = 7(X) has the induced distribution Q defined by
for measurable subsets Bl c 9.What we would like to show is that P is represented by the product measure p X Q on G x 9in the following sense. If Y E 9 has the distribution Q and is independent of U E G, then the random variable Z = UY E !X has the distribution P. In other words, C ( X) = C (UY) where U and Y are independent. Here, UY means the group element U operating on the point Y E 5%. The intuitive argument that suggests this representation is the following. The distribution of X, conditional on r ( X ) = x,, should be G-invariant on !X, as the distribution of X is G-invariant. But G acts transitively on 5%, and, since G is compact, there should be a unique invariant probability distribution on %, that is induced by p on G. In other words, conditional on r ( X ) = x,, X should have the same distribution as Ux, where U is "uniform" on G. The next result makes all of t h s precise. Proposition 7.16. Consider %, 9,and G to be as above with their respective a-algebras. Assume that the mapping h on G X 9 to Si given by h(g, y ) = gy is measurable. (i) If U E G and Y E 9are independent with e ( U ) = p and C(Y) = Q, then the distribution of X = UYis a G-invariant distribution on 5%. (ii) If X E 5% has a G-invariant distribution, say P, let the maximal invariant Y = 7(X) have an induced distribution Q on 9. Let U E G have the distribution p and be independent of X. Then C(X) = C(UY). Proof. For the proof of (i), it suffices to show that
PROPOSITION
7.16
for all integrable functions f and all g
E
G. But
In the above calculation, we have used the assumption that U and Y are independent, so conditional on Y, C(U) = C(gU) for g E G. To prove (ii) it suffices to show that
for all integrable f. Since the distribution of X is G-invariant &f(X)=&f(gX), g E G . Therefore, & f ( X ) = &u&,f(UX), as U and X are independent. Thus
However, for x E there exists an element k E G such that x Using the definition of r and the right invariance of p, we have
=
kx,.
=ji(gr(x))ddg).
Hence
where the second equality follows from the definition of the induced measure Q. In terms of the random variables, & f ( X ) = &U&,f(UY) where U and Y are independent as U and X are independent.
282
FIRST APPLICATIONS OF INVARIANCE
The technical advantage of Proposition 7.16 over the method discussed in Section 7.1 is that the space % is not assumed to be in one-to-one correspondence with the product space G X %. Obviously, the mapping h on G x 9 to % i s onto, but h will ordinarily not be one-to-one.
+
Example 7.16. In this example, take 9€= S,, the set of all p x p symmetric matrices. The group G = 0, acts on Spby
For S
E
S,, let
where y, 2 . . . 2 y, are ordered eigenvalues of S and the off-diagonal elements of Y are zero. Also, let % = {YIY = r(S), S E 5 ,). The spectral theorem shows that r is a maximal invariant function under the action of Op and the elements of % index the orbits in S,. The measurability assumptions of Proposition 7.16 are easily verified, so every On-invariantdistribution on S,, say P , has the representation given by
where p is the uniform distribution on 8, and Q is the induced distribution of Y. In terms of random variables, if C(S) = P and C(rSrl) = C(S) for all r E Op,then
where \k is uniform on 0, and is independent of the matrix of eigenvalues of S. As a particular case, consider the probability measure Po on S,f G S, with the Wishart density
where n 2 p, I ( S ) = 1 if S
E
$5; and is ze'ro otherwise. That po is a
PROPOSITION
283
7.16
density on Sp with respect to Lebesgue measure dS on Sp follows from Example 5.1. Also, p, is Op-invariant since dS is Op-invariant and p,(rSr') = p,(S) for all S E Sp and I? E Op. Thus the above results are applicable to this particular Wishart distribution. 4 The final example of this section deals with the singular value decomposition of a random n x p matrix. 4
Example 7.17. The compact group On by
X
Op acts on the space Cp,,
For definiteness, we take p ,< n. Define r on fp,,by
where A , >, . . . > A, > 0 and A,: . . . , A: are the ordered eigenvalues of X'X. Let 9 G Cp, ,be the range of r SO 9is a closed subset of Cp, ,. It is clear that r(rXAf) = r ( X ) for r E 8, and A E Op SO r is invariant. That r is a maximal invariant follows easily from the singular value decomposition theorem. Thus the elements of 9 index the orbits in Cp, ,and every X E Cp, ,can be written as
for some y E 9 and (I?, A) E On X Op. The measurability assumptions of Proposition 7.16 are easily checked. Thus if P is an (8, x aP)-invariant probability measure on Cp,, and C(X) = P, then
where (I?, A) has a uniform distribution on 0, x Op, Y has a distribution Q induced by r and P, and Y and (r,A) are independent. However, we can say a bit more. Since 8, X Op is a product group, the unique invariant probability measure on 8, X Op is the
284
FIRST APPLICATIONS OF INVARIANCE
product measure p1 x p2 where p,(p2) is the unique invariant probability measure on On(Op). Thus I' and A are independent and each is uniform in its respective group. In summary,
where T, Y, and A are mutually independent with the distributions given above. As a particular case, consider the density
with respect to Lebesgue measure on C,, ,. Since f0(TXAr) = fo(X) and Lebesgue measure is (8, x OP)-invariant, the probability measure defined by fo is (8,x 0,)-invariant. Therefore, when C(X) = N(0, I, x I,), X has the same distribution as TYA' where T and A are uniform and Y has the induced distribution Q on %.
+
7.6. INDEPENDENCE AND INVARIANCE Considerations that imply the stochastic independence of an invariant function and an equivariant function are the subject of this section. To motivate the abstract discussion to follow, we begin with the familiar random sample from a univariate normal distribution. Consider X E % with C(X) = N(pe, a21,) where p E R, a 2 > 0, and e is the vector of ones in Rn. The set % is Rn - span{e) and the reason for choosing this as the > 0 for x E %. The coordisample space is to guarantee that Zy(xi - x ) ~ nates of X, say XI,.. . , X,, are independent and C(&) = N(p, a 2 ) for i = 1,. . . , n. When p and a 2 are unknown parameters, the statistic t(X) = (s, X)where
is minimal sufficient and complete. The reason for using s rather than s 2 in the definition of t(X) is based on invariance considerations. The affine group Al, acts on % b y ( a , b)x = ax
+ be
for (a, b) E All. Let G be the subgroup of Al, given by G All, a > 0) so G also acts on %.
=
{(a, b)l(a, b)
E
285
INDEPENDENCE AND INVARIANCE
The probability model for X E % is generated by G in the sense that if
Z E %and C(Z) = N(0, I,),
C((a, b ) ~ =) C(aZ + be)
be, a21,).
=
Thus the set of distributions 9 = {N(pe, U ~ I , ) E I ~R, a 2 > 0) is obtained from an N(0, I,) distribution by a group operation. For this example, the group G serves as a parameter space for 9. Further, the statistic t takes its values in G and satisfies t((a, b ) X )
=
( a , b)(s,
+
X),
that is, t evaluated at (a, b)X = a x be is the same as the group element Thus t is an equivariant (a, b) composed with the group element (s, function defined on % to G and G acts on both % and G. Now, which functions of X, say h(X), might be independent of t(X)? Intuitively, since t(X) is sufficient, t(X) "contains all the information in X about the parameters." Thus if h(X) has a distribution that does not depend on the parameter value (such an h(X) will be called ancillary), there is some reason to believe that h(X) and t(X) might be independent. However, the group structure given above provides a method for constructing ancillary statistics. If h is an invariant function of X, then the distribution of h is an invariant function of the parameter (p, a2). But the group G acts transitively on the parameter space (i.e., G), so any invariant function will be ancillary. Also, h is invariant iff h is a function of a maximal invariant statistic. This suggests that a maximal invariant statistic will be independent of t(X). Consider the statistic
x).
where the inverse on t(X) denotes the group inverse in G. The verification that Z(X) is maximal invariant partially justifies choosing t to have values in G. For (a, b) E G, Z((a, b ) ~ =) ( t ( ( a , b ) ~ ) ) - ' ( a ,b ) X
so Z is invariant. Also, if
=
((a, b ) t ( ~ ) ) - ' ( a ,b)X
286
FIRST APPLICATIONS OF INVARIANCE
then y = [t(y)(t(x))-'] x , so X and Y are in the same orbit. Thus Z is maximal invariant and is an ancillary statistic. That Z(X) and t(X) are stochastically independent for each value of p and a 2 follows from Basu's Theorem given in the Appendix. The whole purpose of this discussion was to show that sufficiency coupled with the invariance suggested the independence of Z(X) and t(X). The role of the equivariance of t is not completely clear, but it is essential in the more abstract treatment that follows. Let Po be a fixed probability on (56,9 ) and suppose that G is a group Consider a measurable function t on that acts measurably on (%, 9). (%, 9 ) to ( 9 , C?,) and assume that c i s a homomorphic image of G that acts transitively on ( 9 , C?,) and that
Thus t is an equivariant function. For technical reasons that become apparent later, it is assumed that G is a locally compact and a-compact topological group endowed with the Bore1 a-algebra. Also, the mapping (g, y ) + gy from G X 9 to 9 is assumed to be jointly measurable. Now, let h be a measurable function on (%, 9 ) to ( 2 , C?,), whlch is G-invariant. If X E % and C(X) = Po, we want to find conditions under which Y = t(X) and Z = h(X) are stochastically independent. The following informal argument, which is made precise later, suggests the conditions needed. To show that Y and Z are independent, it is sufficient to verify that, for all bounded measurable functions f on ( 2 , C?,),
is constant for y E 9. That this condition is sufficient follows by integrating H with respect to the induced distribution of Y, say Q,. More precisely, if k is a bounded function on (3, C? ,) and H( y) = H( yo) for y E 9 , then
INDEPENDENCE AND INVARIANCE
287
and this implies independence. The assumption that H is constant justifies the next to the last equality while the last equality follows from
when H is constant. Thus under what conditions will H be constant? Since G acts transitively on 94,if H is G-invariant, then H must be constant and conversely. However,
The equivariance of t and the invariance of h justify the third and fourth equalities while the last equality is a consequence of C(gX) = gP, when C(X) = Po. Now, if t(X) is a sufficient statistic for the family 9 = {gPolg E G), then the last member of the above string of equalities is just H(y). Under this sufficiency assumption, H(y) = H(g-'y) so H is invariant and hence is a constant. The technical problem with this argument is caused by the nonuniqueness of conditional expectations. The conclusion that H( y) = H(g-Iy) should really be H(y) = H(g-'y) except for y E Ng where Ng is a set of Qo measure zero. Since this null set can depend on g, even the conclusion that H is a constant a.e. (Q,) is not justified without some further work. Once these technical problems are overcome, we prove that, if t(X) is sufficient for {gPolg E G), then for each g E G, h(X) and t(X) are stochastically independent when C(X) = gPo. The first gap to fill concerns almost invariant functions. Definition 7.6. Let ( X I , $73,) be a measurable space that is acted on measurably by a group GI. If p is a a-finite measure on ( X I , 91 ,) and f is a real-valued Bore1 measurable function, f is almost GI-invariant if for each g E GI, the set N, = {XIf ( x ) + f(gx)) has p measure zero. The following result shows that under certain conditions, an almost G,-invariant function is equal a.e. (p) to a GI-invariant function.
288
FIRST APPLICATIONS OF INVARIANCE
Proposition 7.17. Suppose that GI acts measurably on ( X I ,9,) and that
G, is a locally compact and a-compact topological group with the Borel
a-algebra. Assume that the mapping (g, x ) + gx from GI x %, to X I is measurable. If p is a a-finite measure on (EX ,, 9,) and f is a bounded almost GI-invariant function, then there exists a measurable invariant function f, such that f = f , a.e. (p). Proof: This follows from Theorem 4, p. 227 of Lehmann (1959) and the proof is not repeated here. The next technical problem has to do with conditional expectations.
Proposition 7.18. In the notation introduced earlier, suppose (%, 9)and ( 9 , '2,) are measurable spaces acted on by groups G and G where G is a homomorphic image of G. Assume that r is an equivariant function from X to 9 . Let Po be a probability measure on (%, 3 ) and let Q, be the induced distribution of r ( X ) when C(X) = Po. If f is a bounded G-invariant function on %. let
and
Then H,(gy)
=
H(y) a.e. (Q,) for each fixed g E G.
Proof. The conditional expectations are well defined since f is bounded. H ( y ) is the unique a.e. (Q,) function that satisfies the equation
for all bounded measurable k. The probability measure gP, satisfies the equation
for all bounded f,. Since r is equivariant, this implies that if C(X) = gPo, then C(r(X)) = gQO.Using this, the invariance off, and the characterizing
property of conditional expectation, we have for all bounded k,
Since the first and the last terms in this equality are equal for all bounded k, we have that H(y) = Hl(gy) a.e. (Q,). With the technical problems out of the way, the main result of this section can be proved. Proposition 7.19. Consider measurable spaces (GX, 3 ) and ( 9 , e l ) , which are acted on measurably by groups G and G where G is a homomorphic image of G. It is assumed that the conditions of Proposition 7.17 hold for the group and the space (%, e l ) , and that G acts transitively on 9 . Let r on X to 9be measurable and equivariant. Also let (Q, e,) be a measurable space and let h be a G-invariant measurable function from GX to '2. For a random variable X E % with C( X) = Po, set Y = r( X) and Z = h ( X) and assume that r ( X ) is a sufficient statistic for the family (gP,Jg E G) of distributions on (%, 93).Under these assumptions, Y and Z are independent when C(X) = gPo, g E G. Proof. First we prove that Y and Z are independent when C(X) = Po.Fix a bounded measurable function f on Z and let
Since r ( X ) is a sufficient statistic, there is a measurable function H on such that H,(Y)
=
H(Y)
3
fory P N,
where Ng is a set of gQo-measure zero. Thus (gQo)(Ng) = QO(g-'Ng) = 0.
290
FIRST APPLICATIONS OF INVARIANCE
Let e denote the identity in G. We now claim that H is a Q, almost G-invariant function. By Proposition 7.18, H,(y) = Hg(gy) a.e. (Q,). However, H(y) = He(y) a.e. Q, and Hg(gy) = H(gy) for gy P N,, where Q,(~-'N,) = 0. Thus H,(gy) = H(gy) a.e. Q,, and this implies that H(y) = H(gy) a.e. Q,. Therefore, there exists a G-invariant measurable function, H must be a say H, such that H = H a.e. Q,. Since G is transitive on 9, constant, so H is a constant a.e. Q,. Therefore,
is a constant a.e. Q, and, as noted earlier, this implies that Z = h(X) and Y = T(X) are independent when C ( X) = Po. When C ( X) = g, Po, let Po = glP, and note that {gP,lg E G) = igP0lgE G) so T(X) is sufficient for (gP,Jg E G). The argument given for Po now applies for Po. Thus Z and Y are independent when C ( X ) = glPo. A few comments concerning this result are in order. Since G acts transitively on {gP,lg E G) and Z = h(X) is G-invariant, the distribution of Z is the same under each gP,, g E G. In other words, Z is an ancillary statistic. Basu's Theorem, given in the A~pendix,asserts that a sufficient statistic, whose induced family of distributions is complete, is independent of an ancillary statistic. Although no assumptions concerning invariance are made in the statement of Basu's Theorem, most applications are to problems where invariance is used to show a statistic is ancillary. In Proposition 7.19, the completeness assumption of Basu's Theorem has been replaced by the invariance assumptions and, most particularly, by the assumption that the group G acts transitively on the space %.
+
Example 7.18. The normal distribution example at the beginning of this section provided a situation where the sample mean and sample variance are independent of a scale and translation invariant statistic. We now consider a generalization of that situation. Let X = R n - (span{e)) where e is the vector of ones in R n and with suppose that a random vector X E % has a density f(11~11~) respect to Lebesgue measure dx on X. The group G in the example at the beginning of this section acts on X by ( a , b)x
=
ax
Consider the statistic t(X)
=
+ be,
( a , b)
(s, X)where
and
E
G.
PROPOSITION
7.19
Then t takes values in G and satisfies t ( ( a ,b ) X ) = ( a , b ) t ( X ) for ( a , b ) E G. It is shown that t ( X ) and the G-invariant statistic
are independent. The verification that Z ( X ) is invariant goes as follows:
=
( ( a , b ) t ( ~ ) ) - ' ( ab,) X
=
(t(~))-'= X Z(X)
To apply Proposition 7.19, let Po be the probability measure with density f (11x112 , on X and let G = G = 9. Thus t ( X ) is equivariant and Z ( X ) is invariant. The sufficiency of t ( X ) for the parametric family {gPolg E G) is established by using the factorization theorem. For ( a , b ) E G, it is not difficult to show that ( a , b)Po has a density k ( x ( a ,b ) with respect to dx given by
Since
the density k ( x l a , b ) is a function of Cx? and C x , so the pair ( C X , ~ , C & )is a sufficient statistic for the family {gP,)g E G). However, t ( X ) = (s, is a one-to-one function of ( C q 2 ,C q ) SO t ( X ) is a sufficient statistic. The remaining assumptions of Proposition 7.19 are easily verified. Therefore, t ( X ) and Z ( X ) are independent under each of the measures ( a , b)Po for ( a , b ) in G.
z)
+
Before proceeding with the next example, an extension of Proposition 7.1 is needed.
292
FIRST APPLICATIONS OF INVARIANCE
Proposition 7.20. Consider the space %, ,, n >, p, and let Q be an n x n rank k orthogonal projection. If k 2 p, then the set
has Lebesgue measure zero. Proof: Let X E C,, be a random vector with C ( X) = N(0, In@ I,) = Po. It suffices to show that Po(B) = 0 since Po and Lebesgue measure are absolutely continuous with respect to each other. Also, write Q as
where
Since rank(I"DI'X) and C(rX)
=
=
rank(Dl?X)
C(X), it suffices to show that
Now, partition X as
Thus rank(DX) = rank(Xl). Since k > p and C(X,) = N(0, I, 8 I,), Proposition 7.1 implies that XI has rank p with probability one. Thus Po(B) = 0 so B has Lebesgue measure zero.
+
Example 7.19. This is generalization of Example 7.18 and deals with the general multivariate linear model discussed in Chapter 4.
PROPOSITION
293
7.20
As in Example 4.4, let M be a linear subspace of C,,. defined by M
=
( x ~ xE Cp,n,x
=
ZB, B
E
C,,
k)
where Z is a fixed n X k matrix of rank k. For reasons that are apparent in a moment, it is assumed that n - k >, p. The orthogonal projection onto M relative to the natural inner product ( . , .) on C,, is PM = P, 8 I, where
is a rank k orthogonal projection on Rn. Also, QM = Q, 8 I,, where Q, = I, - P, is the orthogonal projection onto M I and Q, is a rank n - k orthogonal projection on Rn. For this example, the sample space % is
..
Since n - k >, p, Proposition 7.20 implies that the complement of % has Lebesgue measure zero in ep, In this example, the group G has elements that are pairs (T, u) with T E G; where T is p x p and u E M. The group operation is
and the action of G on % is (T, u)x For this example, 9 =
=
xT'
+ u.
G = G and t on % to G is defined by
where T(x) is the unique element in G; such that xfQ,x = T(x)Tf(x). The assumption that n - k 2 p insures that x'Q,x has rank p. It is now routine to verify that
for x
E
% and (T, u)
E
G. Using this relationship, the function
294
FIRST APPLICATIONS OF INVARIANCE
is easily shown to be a maximal invariant under the action of G on 5%. Now consider a random vector X E X with C(X) = Po where Po has a density f ( ( x , x ) ) with respect to Lebesgue measure on %. We apply Proposition 7.19 to show that t ( X ) and h ( X ) are independent under gPo for each g E G. Since 9 = = G, t is an equivariant function and acts transitively on 9.The measurability assumptions of Proposition 7.19 are easily checked. It remains to show that t ( X ) is a sufficient statistic for the family (gPolg E G). For g = ( T , p ) E G, gPo has a density given by
Letting B
=
TT' and arguing as in Example 4.4, it follows that
since p E M. Therefore, the density p ( x l ( T , p ) ) is a function of the pair ( x f Q , x , P M x ) so this pair is a sufficient statistic for the family {gPolg E G). However, T ( x ) is a one-to-one function of x f Q , x so
is also a sufficient statistic. Thus Proposition 7.19 implies that t ( X ) and h ( X ) are stochastically independent under each probability measure gPo for g E G. Of course, the choice o f f that motivated thls example is f(w)
=
(&)-"*exp[-
iw]
so that Po is the probability measure of a N(0, I, 8 I,) distribution on %. One consequence of Proposition 7.19 is that the statistic h ( X ) is ancillary. But for the case at hand, we now describe the distribution of h ( X ) and show that its distribution does not even depend on the particular density f used to define Po. Recall that
where T ( x ) T f ( x )= x f Q , x and T ( x ) E G:.
Fix x
E
% and set
PROPOSITION
'k
=
7.20
( Q , x ) ( T f ( x ) ) - I . First note that
so \k is a linear isometry. Let N be the orthogonal complement in R n of the linear subspace spanned by the columns of the matrix Z. Clearly, dim(N) = n - k and the range of \k is contained in N since Q , is the orthogonal projection onto N . Therefore, \k is an element of the space Fp(N)=
{*I*'* = I,, range(*)
G N}.
Further, the group H
=
{rlr E 13,,
r ( N )=N )
is compact and acts transitively on % ( N ) under the group action
Now, we return to the original problem of describing the distribution of W = h ( X ) when C ( X ) = Po. The above argument shows that W E % ( N ) . Since the compact group H acts transitively on % ( N ) , there is a unique invariant probability measure v on % ( N ) . It will be shown that C ( W ) = v by proving C ( I ' W ) = C ( W ) for all r E H. It is not difficult to verify that I'Q, = Q,r for r E H. Since C ( I ' X ) = C ( X ) and T ( T X ) = T ( X ) , we have
Therefore, the distribution of Wis H-invariant so C ( W )= v.
+
Further applications of Proposition 7.19 occur in the next three chapters. In particular, this result is used to derive the distribution of the determinant of a certain matrix that arises in testing problems for the general linear model.
2%
FIRST APPLICATIONS OF INVARIANCE
PROBLEMS
1. Suppose the random n X p matrix X E % (GX as in Section 7.1) has a density given by f ( x ) = klx'xlYexp[- 4 trx'x] with respect to dx. The constant k depends on n, p, and y (see Problem 6.10). Derive the density of S = X'X and the density of U in the representation X = \kU with U E Gg and \k E n.
s,
2. Suppose X E GX has an On-left invariant distribution. Let P ( X ) = X(XIX)-'x' and S(X) = X'X. Prove that P ( X ) and S(X) are independent.
3. Let Q be an n X n non-negative definite matrix of rank r and set A = {xlx E Cp, ,, x'Qx has rank p). Show that, if r 2 p, then A' has Lebesgue measure zero. 4. With GX as in Section 7.1, On X GIp acts on % b y x + rxA' for r E 8, and A E GI,. Also, 0, X GI, acts on S; by S + ASA'. Show that $(x) = kx'x is equivariant for each constant k > 0. Are these the only equivariant functions?
5. The permutation group Tn acts on Rn via matrix multiplication x + gx, g E Tn. Let 9 = {yly E Rn, y, g y2 =S . . - < y,). Define f : Rn + 9 by f ( x ) is the vector of ordered values of the set {x,,. . . , x,) with multiple values listed. (i) Show f is a maximal invariant. (ii) Set I,(u) = 1 if u 2 0 and 0 if u < 0. Define F,(t) = n-'Z;l,(t - xi) for t E R1. Show Fnis also a maximal invariant. 6. Let M be a proper subspace of the inner product space (V, (., .)). Let A, be defined by A,x = - x for x E M and A,x = x for x E M L . (i) Verify that the set of pairs (B, y), with y E M and B either A, or A:, forms a subgroup of the affine group AI(V). Let G be this group. (ii) Show that G acts on M and on V. (iii) Suppose t : V + M is equivariant (t(Bx + y) = Bt(x) + y for (B, y) E G and x E V). Prove that t(x) = P,x.
7. Let M be a subspace of Rn ( M * Rn) so the complement of GX = Rn n Mc has Lebesgue measure zero. Suppose X E %has a density given by
PROBLEMS
+
297
dx < m. For where p E M and a > 0. Assume that a > 0, I? E O,(M), and b E M, the affine transformation (a, I?, b)x = aTx b acts on %. (i) Show that the probability model for X (p E M, a > 0) is invariant under the above affine transformations. What is the induced group action on (p, a2)? (ii) Show that the only equivariant estimator of p is P,x. Show that any equivariant estimator of a 2 has the form kllQx112 for some k > 0.
+
8. With % as in Section 7.1, suppose f is a function defined on GI, to [0, a ) , which satisfies f (AB) = f (BA) and
(i) Show that f (xfxZ- '), Z E S;, is a density on % with respect to d ~ / ( x ' x l " /and ~ under this density, the covariance (assuming it exists) is cI, @ 2 where c > 0. (ii) Show that the family of distributions of (i) indexed by Z E S; is invariant under the group 8, x GI, acting on % by (T, A)x = rxAf. Also, show that (I?, A)Z = AZAf. (iii) Show that the equivariant estimators of Z all have the form kXfX, k > 0. Now, assume that
where Co E Sp, is unique. (iv) Show Co = alp for some a > 0. (v) Find the maximum likelihood estimator of Z (expressed in terms of X and a in (iv)).
9. In an inner product space (V, (., -)), suppose X has a distribution Po. (i) Showthat C(llXll)= C(llYll) wheneverC(Y)= g P 0 , g € O(V). (ii) In the special case that C(X) = C(p + Z ) where p is a fixed vector and Z has an B(V)-invariant distribution, how does the depend on p? distribution of J(XJ1 10. Under the assumptions of Problem 4.5, use an invariance argument to show that the distribution of F depends on (p, a 2 ) only through the parameter ( 1 ~ ~-1 111~~~ p 1 1 ~ )What / a ~ .happens when p E a?
298
FIRST APPLICATIONS OF INVARIANCE
1. Suppose XI,. . . , Xn is a random sample from a distribution on RP (n > p ) with densityp(xlp, 2 ) = (Z(-'/2f ((x - p)'Z- '(x - p)) where p E RP and Z E S ;. The parameter 8 = det(2) is sometimes called the population generalized variance. The sample generalized variance is V = det((l/n)S) where S = 2;(xi - %)(xi - K)'. Show that the distribution of V depends on (p, 2 ) only through 8.
12. Assume the conditions under which Proposition 7.16 was proved. Given a probability Q on 9 , let denote the extension of Q to %-that is, Q(B) = Q(B n 9 ) for B E 3. For g E G, g e i s defined in the usual way-(~Q)(B) = e ( g - ' ~ ) . (i) Assume that P is a probability measure on % and
e
that is,
Show that P is G-invariant. (ii) If P is G-invariant, show that (7.1) holds for some Q. 13. Under the assumptions used to prove Proposition 7.16, let 9 be all the G-invariant distributions. Prove that r ( X ) is a sufficient statistic for the family 9. 14.
Suppose X E Rn has coordinates XI,. . . , Xn that are i.i.d. N(p, l), p E R'. Thus the parameter space for the distributions of X is the additive group G = R'. The function t : Rn + G given by t(x) = % gives a complete sufficient statistic for the model for X. Also, G acts on Rn by gx = x + ge where e E Rn is the vector of ones. (i) Show that t(gx) = gt(x) and that Z(X) = ( t ( ~ ) ) - ' X is an ancillary statistic. Here, (t(X))-I means the group inverse of t(X) E G so (t(X))-Ix = X - X e . What is the distribution of Z( X)? (ii) Suppose we want to find a minimum variance unbiased estimator (MVUE) of h(p) = &, f(Xl) where f is a given function. The Rao-Blackwell Theorem asserts that the MVUE is &( f(Xl)lt(X) = t). Show that this conditional expectation is
NOTES AND REFERENCES
299
where a2 = var(X, - X)= ( n - l)/n. Evaluate this for f ( x ) = 1 i f x g u,and f ( x ) = Oif x > u,. (iii) What is the MVUE of the parametric function (&%-'exp [ - &(x, - p ) 2 ] where x, is a fixed number?
15. Using the notation, results, and assumptions of Example 7.18, find an unbiased estimator based on t(X) of the parametric function h(a, b) = ((a, b) Po)(Xl g u,) where u, is a fixed number and XI is the first coordinate of X. Express the answer in terms of the distribution of 2, -the first coordinate of Z. What is this distribution? In the case that Po is the N(0, I,) distribution, show this gives a MVUE for h(a, b). 16. This problem contains an abstraction of the technique developed in Problems 14 and 15. Under the conditions used to prove Proposition ('2,) is (G, 3,) and G = G. The equivari7.19, assume the space (9, ance assumption on r then becomes r(gx) = g o r(x) since r(x) E G. Of course, r ( X ) is assumed to be a sufficient statistic for {gP,lg E G). (i) Let Z( X) = ( r ( X))-IX where ( r ( x))-' is the group inverse of r(X). Show that Z(X) is a maximal invariant and Z(X) is ancillary. Hence Proposition 7.19 applies. Q, denote the distribution of Z when C(X) is one of the Let (ii) distributions gPo, g E G. Show that a version of the conditional expectation &(f(X)Ir(X) = g) is f(gZ) for any bounded measurable f . (iii) Apply the above to the case when Po is N(0, I, @ I,) on % (as in Section 7.1) and take G = G ; . The group action is x + xT' for x E X and T E G;. The map r is r(X) = T in the representation X = 9 T ' with E %, ,and T E Gg. What is Qo? (iv) When X E % is N(0, I, 8 Z) with Z E Sp+, use (iii) to find a MVUE of the parametric function
where uo is a fixed vector in RP.
NOTES AND REFERENCES
1. For some inaterial related to Proposition 7.3, see Dawid (1978). The extension' of Proposition 7.3 to arbitrary compact groups (Proposition 7.16) is due to Farrell (1962). A related paper is Das Gupta (1979).
300
FIRST APPLICATIONS OF INVARIANCE
2. If G acts on 5% and t is a function from %. onto 9, it is natural to ask if we can define a group action on 9 (using t and G) so that t becomes equivariant. The obvious thlng to do is to picky E %, write y = t(x), and then define gy to be t(gx). In order that this definition make sense it is necessary (and sufficient) that whenever t(x) = t(%),then t(gx) = t(g%)for all g E G. When this condition holds, it is easy to show that G then acts on % via the above definition and t is equivariant. For some further discussion, see Hall, Wijsman, and Ghosh (1965). Some of the early work on invariance by Stein, and Hunt and Stein, first appeared in print in the work of other authors. For example, the famous Hunt-Stein Theorem given in Lehrnann (1959) was established in 1946 but was never published. This early work laid the foundation for much of the material in this chapter. Other early invariance works include Hotelling (1931), Pitman (1939), and Peisakoff (1950). The paper by Kiefer (1957) contains a generalization of the Hunt-Stein Theorem. For some additional discussion on the development of invariance arguments, see Hall, Wijsman, and Ghosh (1965). 4.
Proposition 7.15 is probably due to Stein, but I do not know a reference.
5.
Make the assumptions on %, 9, and G that lead to Proposition 7.16, and note that % is just a particular representation of the quotient space %/G. If v is any a-finite G-invariant measure on %, let S be the measure on 9 defined by
Then (see Lehmann, 1959, p. 39),
for all measurable functions h . The proof of Proposition 7.16 shows that for any v-integrable function f , the equation
holds. In an attempt to make sense of (7.2) when G is not compact, let
301
NOTES AND REFERENCES
pr denote a right invariant measure on G. For f
E
X ( X ) , set
Assuming this integral is well defined (it may not be in certain examples -e.g., X = R n - (0) and G = GI,), it follows that ](hx) = f ( x ) for h E G. Thus j is invariant and can be regarded as a function on 9 = %/G. For any measure 6 on 3,write j j d 6 to mean the integral of j , expressed as a function of y, with respect to the measure 6. In this case, the right-hand side of (7.2) becomes
However, for h
E
G, it is easy to show
so J is a relatively invariant integral. As usual, A, is the right-hand modulus of G. Thus the left-hand side of (7.2) must also be relatively invariant with multiplier A; I . The argument thus far shows that when p in (7.2) is replaced by p, (this choice looks correct so that the inside integral defines an invariant function), the resulting integral J is relatively invariant with multiplier A; I . Hence the only possible measures v for which (7.2) can hold must be relatively invariant with multiplier A;'. However, given such a v, further assumptions are needed in order that (7.2) hold for some 6 (when G is not compact and p is replaced by p,). Some examples where (7.2) is valid for noncompact groups are given in Stein (1956), but the first systematic account of such a result is Wijsman (1966), who uses some Lie group theory. A different approach due to Schwarz is reported in Farrell (1976). The description here follows Andersson (1982) most closely.
6. Proposition 7.19 is a special case of a result in Hall, Wijsman, and Ghosh (1965). Some version of this result was known to Stein but never published by him. The development here is a modification of that which I learned from Bondesson (1977).
CHAPTER 8
The Wishart Distribution
The Wishart distribution arises in a natural way as a matrix generalization of the chi-square distribution. If X,,. . . , X,, are independent with C(4) = N(0, l), then C;T* has a chi-square distribution with n degrees of freedom. When the are random vectors rather than real-valued random variables say Xi E RP with C(X> = N(0, I,), one possible way to generalize the above sum of squares is to form the p X p positive semidefinite matrix S = C;X, Essentially, this representation of S is used to define a Wishart distribution. As with the definition of the multivariate normal distribution, our definition of the Wishart distribution is not in terms of a density function and allows for Wishart distributions that are singular. In fact, most of the properties of the Wishart distribution are derived without reference to densities by exploiting the representation of the Wishart in terms of normal random vectors. For example, the distribution of a partitioned Wishart matrix is obtained by using properties of conditioned normal random vectors. After formally defining the Wishart distribution, the characteristic function and convolution properties of the Wishart are derived. Certain generalized quadratic forms in normal random vectors are shown to have Wishart distributions and the basic decomposition of the Wishart into submatrices is given. The remainder of the chapter is concerned with the noncentral Wishart distribution in the rank one case and certain distributions that arise in connection with likelihood ratio tests.
x.
8.1. BASIC PROPERTIES The Wishart distribution, or more precisely, the family of Wishart distributions, is indexed by a p X p positive semidefinite symmetric matrix Z, by a
dimension parameter p , and by a degrees of freedom parameter n. Formally, we have the following definition. Definition 8.1. A random p x p symmetric matrix S has a Wishart distribution with parameters Z, p , and n if there exist independent random vectors X,,. . . , X, in RP such that C ( X , ) = N(0, Z ) , i = 1,. . . , n and
In this case, we write C ( S ) = W ( Z , p , n ) . In the above definition, p and n are positive integers and Z is a p X p positive semidefinite matrix. When p = 1, it is clear that the Wishart distribution is just a chi-square distribution with n degrees of freedom and scale parameter Z 2 0. When Z = 0, then X, = 0 with probability one, so S = 0 with probability one. Since C;X,X; is positive semidefinite, the Wishart distribution has all of its mass on the set of positive semidefinite matrices. In an abuse of notation, we often write
when C ( S ) = W ( Z , p, n ) . As distributional questions are the primary concern in this chapter, this abuse causes no technical problems. If X E Cp, , has rows Xi,. . . , XA, it is clear that C ( X ) = N(0, I , @ Z ) and X ' X = L;tXIXl!.Thus if C ( S ) = W ( Z , p, n ) , then C ( S ) = C ( X ' X ) where C ( X ) = N(0, I, €3 Z ) in CP,?. Also, the converse statement is clear. Some further properties of the Wishart distribution follow. Proposition 8.1. If C ( S ) = W ( Z , p, n ) and A is an r x p matrix, then C ( A S A f )= W ( A Z A f ,r , n ) .
Proof:
SinceC(S)= W ( Z , p , n ) ,
where C ( X ) = N(0, I, 8 Z ) in Cp, ,. Thus C ( A S A ' ) = C(AX'XA') = C [ ( ( I , 8 A ) X ) ' ( I , 8 A ) X ] . But Y = ( I , @ A ) X satisfies C ( Y ) = N(0, I, @ ( A Z A ' ) ) in Cr,. and C ( Y f Y )= C ( A S A f ) .The conclusion follows from the definition of the Wishart distribution.
304
THE WISHART DISTRIBUTION
One consequence of Proposition 8.1 is that, for fixed p and n, the family of distributions {W(&p, n)lE > 0) can be generated from the W(I,, p, n) distribution and p X p matrices. Here, the notation Z >, 0 (Z > 0) means that Z is positive semidefinite (positive definite). To see this, if C(S) = W(I,, p, n) and Z = AA', then
In particular, the family {W(Z, p, n)(Z > 0) is generated by the W(I,, p, n) distribution and the group GI, acting on S, by A(S) = ASA'. Many proofs are simplified by using the above representation of the Wishart distribution. The question of the nonsingularity of the Wishart distribution is a good example. If C(S) = W(2, p, n), then S has a nonsingular Wishart distribution if S is positive definite with probability one. Proposition 8.2. Suppose C(S) = W(Z, p, n). Then S has a nonsingular Wishart distribution iff n > p and Z > 0. If S has a nonsingular Wishart distribution, then S has a density with respect to the measure v(dS) = dS/I SI( P + ')I2given by
Here, o ( p , n) is the Wishart constant defined in Example 5.1. Proof. Represent the W(Z, p, n) distribution as C(AS,A') where C(S,) = W(I,, p, n) and AA' = Z. Obviously, the rank of A is the rank of Z and Z > 0 iff rank of 2 is p. If n < p, then by Proposition 7.1, if C(X,) = N(0, I,), i = 1,. . . , n, the rank of C;&,X is n with probability one. Thus S, = C; Xi,X has rank n, which is less than p, and S = AS, A' has rank less than p with probability one. Also, if the rank of Z is r < p, then A has rank r so AS, A' has rank at most r no matter what n happens to be. Therefore, if n < p or if Z is singular, then S is singular with probability one. Now, consider the case when n >, p and Z is positive definite. Then S, = C;X,,X has rank p with probability one by Proposition 7.1, and A has rank p. Therefore, S = AS, A' has rank p with probability one. is When Z > 0, the density of X E ,?f
when C(X) = N(0, In8 Z). When n >, p, it follows from Proposition 7.6 that the density of S with respect to v(dS) isp(SJZ).
Recall that the natural inner product on S,, when S, is regarded as a subspace of C,, , is
,
The mean vector, covariance, and characteristic function of a Wishart distribution on the inner product space (S,,( . , .)) are given next.
Proposition 8.3. Suppose C(S) = W(2, p, n) on (S,, ( . , .)). Then (i) &S = n2. (ii) Cov(S) = 2nZ 8 2 . (iii) +(A) = &exp[i(A, S)]
=
II, - ~ ~ Z A I - " ' ~ .
Proof. To prove (i) write S = CT4X where C ( 4 ) = N(0, Z), and XI,. . . , X, are independent. Since &X,X; = 2 , it is clear that &S = nZ. For (ii), the independence of XI,. . . , Xn implies that
Cov(S)
=
Cov
i:C i C: 4q
=
Cov( X i y ) = n Cov( XIXi)
where XI XI is the outer product of XI relative to the standard inner product on RP. Since C(Xl) = C(CZ) where C(Z) = N(0, I,) and CC' = 2 , it follows from Proposition 2.24 that Cov(Xl XI) = 2 2 8 2. Thus (ii) holds. To establish (iii), first write C'AC = r D r ' where A E ,S, CC' = 2 , r E a,, and D is a diagonal matrix with diagonal entries A,, . . . , A,. Then
Again, C(Xl) = C(CZ) where C(Z) = N(0, I,). Also, C(I'Z)
=
C(Z) for
306
THE WISHART DISTRIBUTION
r E Op. Therefore, [(A)
=
& exp[iX;AXl] = &exp[iZfCfACZ]
where Z,, . . . , Zp are the coordinates of Z. Since Z,, . . . , Zp are independent with C(Zj) = N(0, I), Z; has a X: distribution and we have
The next to the last equality is a consequence of Proposition 1.35. Thus (iii) holds. Proposition 8.4. If C (S,) = W ( 2 ,p, n,) for i = 1,2 and if S, and S2 are independent, then I?(S, + S,) = W(Z, p, n, + n,). Prooj An application of (iii) of Proposition 8.3 yields this convolution result. Specifically, @(A)= & exp[i(A, S, + S,)]
=
n&
j= 1
exp i(A,
3)
The uniqueness of characteristic functions shows that C(S, W(Z, p, n, + n,).
+ S,) =
It should be emphasized that ( . , .) is not what we might call the standard inner product on Sp when Sp is regarded as a [ p ( p + 1)/2]dimensional coordinate space. For example, if p = 2, and S, T E Sp, then (S, T )
=
trST
=
s l , t l l + s2,t2,
+ 2sI2t,,
while the three-dimensional coordinate space inner product between S and
+
T would be s l , t l , s2,t2, Proposition 8.3 means that
+ s12tl,. In
cov((A, S ) , (B, S))
this connection, equation (ii) of
=
2n(A, ( 2 8 Z)B)
=
2n(A, ZBZ)
=
2n tr(AZBZ),
that is, (ii) depends on the inner product ( . , .) on Sp and is not valid for other inner products. In Chapter 3, quadratic forms in normal random vectors were shown to have chi-square distributions under certain conditions. Similar results are available for generalized quadratic forms and the Wishart distribution. The following proposition is not the most general possible, but suffices in most situations.
Proposition 8.5. Consider X E Cp, where C(X) = N(p, Q 8 2). Let S = X'PX where P is n x n and positive semidefinite, and write P = A2 with A positive semidefinite. If AQA is a rank k orthogonal projection and if Pp = 0, then
Proof: With Y = AX, it is clear that S
Since %(A)
=
% ( P ) and Pp
=
0, Ap
=
=
Y'Y and
0 so
By assumption, B = AQA is a rank k orthogonal projection. Also, S = Y'Y = Y'BY + Yf(I - B)Y, and C((I - B)Y) = N(0,O 8 Z) so Yf(I - B)Y is zero with probability one. Thus it remains to show that if C(Y) = N(0, B 8 Z) where B is a rank k orthogonal projection, then S = Y'BY has a W(Z, p, k ) distribution. Without loss of generality (make an orthogonal transformation),
Partitioning Y into Y, : k
X
p and Y2 : (n - k) X p, it follows that S
=
Y;Y,
308
THE WISHART DISTRIBUTION
and
Thus C ( S ) = W ( 2 ,p , k ) .
+
Example 8.1. We again return to the multivariate normal linear mode1 introduced in Example 4.4. Consider X E C,, with
where p is an element of the subspace M G C,,, defined by = {XIX E
C p , n ,x
=
ZB, B
E
Cp,k ) .
Here, Z is an n x k matrix of rank k and it is assumed that n - k 2 p. With P, = Z ( Z ' Z ) - ' z ' , PM = P, 8 I, is the orthogonal projection onto M and Q M = Q , @ I,, Q , = I - P,, is the orthogonal projection onto M I . We know that
is the maximum likelihood estimator of p. As demonstrated in Example 4.4, the maximum likelihood estimator of Z is found by maximizing
Since n - k >, p, x'Q,x has rank p with probability one. When X'Q, X has rank p, Example 7.10 shows that
is the maximum likelihood estimator of Z. The conditions of Proposition 8.5 are easily checked to verify that S = X'Q,X has a W(Z, p , n - k ) distribution. In summary, for the multivariate linear model, fi = P,X and 5 = n-'X'Q,X are the maximum likelihood estimators of p and Z. Further, fi and 2 are independent and
e(&) = ~
( zp ,, n - k ) .
#
PROPOSITION
8.6
8.2. PARTITIONING A WISHART MATRIX The partitioning of the Wishart distribution considered here is motivated partly by the transformation described in Proposition 5.8. If C ( S ) = W ( Z , p , n ) where n 2 p, partition S as
where S,,
=
S ; , and let
Here, Sij is pi X pj for i, j = 1,2 sop, + p, = p. The primary result of t h s section describes the joint distribution of ( S , ,. ,, S,,, S,,) when 2 is nonsingular. This joint distribution is derived by representing the Wishart distribution in terms of the normal distribution. Since C ( S ) = W ( Z , p , n ) , S = X'X where C ( X ) = N(0, In8 2 ) . Discarding a set of Lebesgue measure zero, X is assumed to take values in %, the set of all n x p matrices of rankp. With
it is clear that
Thus
where
is an orthogonal projection of rank n - p, for each value of X, when X E %. To obtain the desired result for the Wishart distribution, it is useful to first give the joint distribution of ( Q X , , P X , , X,). Proposition 8.6. The joint distribution of ( Q X , , P X , , X , ) can be described as follows. Conditional on X,, Q X l and PX, are independent with
310 and
THE WISHART DISTRIBUTION
Also,
Proof: From Example 3.1, the conditional distribution of XI given X2, say c ( X , l x , > , is
Thus conditional on X,, the random vector
is a linear transformation of X I . Thus W has a normal distribution with mean vector Q@IPI
,,-,, ]
( r ei P
2I =
(iZ2221)
since QX, = 0 and PX, = X,. Also, using the calculational rules for partitioned linear transformations, the covariance of W is
since QP = 0. The conditional independence and conditional distribution of Q X , and PX, follow immediately. That X, has the claimed marginal distribution is obvious. Proposition 8.7. Suppose C ( S ) = W ( 2 ,p , n ) with n >, p and Z > 0. Partition S into S,,, i, j = 1,2, where S,, is pi x p,, p , p, = p, and partition Z similarly. With S , , . , = S , , - S I 2 S ~ ' S 2S, , , . , and (S,,, S,,) are stochastically independent. Further,
+
PROPOSITION
8.7
and conditional on S,,,
The marginal distribution of S,, is W(E,,, p,, n ) . Proof. In the notation of Proposition 8.6, consider X E % with C ( X ) = N(0, I, @ 2 ) and S = X'X. Then S,, = Z X , for i, j = 1, 2 and S , , , , = X;QX,. Since PX, = X, and S,, = X;X,, we see that S2, = ( P X , )'XI = X; P X , , and conditional on X,,
To show that S , , . , and (S,,, S,,) are independent, it suffices to show that
for bounded measurable functions f and h with the appropriate domains of definition. Using Proposition 8.6, we argue as follows. For fixed X,, Q X , and P X , are independent so S , , . = XiQQX, and S,, = X;PX, are conditionally independent. Also,
,
and Q is a rank n - p, orthogonal projection. By Proposition 8.5, C ( x ; Q X , l x2)= W ( E , ,. p , , n - p,) for each X, so X;QXl and X2 are independent. Conditioning on X,, we have
,,
Therefore, S ,
, ,and (S,,, S2,) are stochastically independent. To describe .
312
THE WISHART DISTRIBUTION
the joint distribution of S2,and S2,, again condition on X2. Then
and this conditional distribution depends on X2 only through Sz2= X;X2. Thus
That S2,has the claimed marginal distribution is obvious. By simply permuting the indices in Proposition 8.7, we obtain the following proposition. Proposition 8.8. With the notation and assumptions of Proposition 8.7, let S2,.,= S,, - S2,Sfi1~,,.Then S,,., and (S,,, S,,) are stochastically independent and
,
C(S22.I) = W(222.1, P2, n - P I ) .
Conditional on S, ,
and the marginal distribution of S,, is W(Z,,, p,, n). Proposition 8.7 is one of the most useful results for deriving distributions of functions of Wishart matrices. Applications occur in this and the remaining chapters. For example, the following assertion provides a simple proof of the distribution of Hotelling's-T2, discussed in the next chapter. Proposition 8.9. Suppose So has a nonsingular Wishart distribution, say W(2, p, n), and let A be an r x p matrix of rank r. Then
e((as;'ar)')= W ( ( A Z - ~ A ~ ) -r ',,n - p + r ) . Proof: First, an invariance argument shows that it is sufficient to consider the case when Z = I. More precisely, write E = B2 with B > 0 and let C = AB-'. With S = B-'s,B-', C(S) = W(I, p, n) and the assertion is that
PROPOSITION
Now, let '4'
8.9
=
C'(CC')-'/2,
SO
c((*'s-'*)-I)
the assertion becomes =
W(I,, r , n - p
However, \k is p X r and satisfies \kf'4' = I,-that Since C ( r ' S r ) = e ( S ) for all r E BP,
is,
+ r).
*is a linear isometry.
Choose T so that
For this choice of r , the matrix (P'I"S-'I'\k)-' is just the inverse of the r x r upper left corner of S-I, and this matrix is
where V is r x r. By Proposition 8.7,
since C(S)
A'
=
W(I, p, n). This establishes the assertion of the proposition.
When r = 1 in Proposition 8.9, the matrix A' is nonzero vector, say = a E Rp. In this case,
when C(S) = W(2, p, n). Another decomposition result for the Wishart distribution, which is sometimes useful, follows. Lemma 8.10. Suppose S has a nonsingular Wishart distribution, say C(S) = W(Z, p, n), and let S = TT' where T E G;. Then the density of T with respect to the left invariant measure v(dT) = dT/Iltli is
314
THE WISHART DISTRIBUTION
If S and T are partitioned as
where S,, is pi x p,, p, + p2 = p, then S , , = T I I T ; , ,S,, = TllT;l, and S2, ., = T2,Ti2. Further, the pair ( T I , ,T,,) is independent of T2, and
Proof. The expression for the density of T is a consequence of Proposition 7.5, and a bit of algebra shows that S , , = TI,Ti,, S12= T I,Ti,, and S2,. = T2,Ti2. The independence of ( T I , ,T2,)and T2, follows from Proposition 8.8 and the fact that the mapping between S and T is one-to-one and onto. Also,
,
Since S l l and T I , are one-to-one functions of each other and S 1 2= TIIT;,, ~ ~ T l , T ; l I T l=l )N ( T I I T ; I ~ , ' ~ I ~ ~ T I I T ; ,@
222.1).
Thus
and T I ,is fixed. Proposition 8.11. Suppose S has a nonsingular Wishart distribution with C ( S ) = W ( 2 ,p, n ) and assume that 2 is diagonal with diagonal elements a l l , .. . , a,,. If S = TT' with T E G:, then the random variables {tijli > j } are mutually independent and
C (ti,) = N(0, a,,) and
for i > j
PROPOSITION
8.11
ProoJ: First, partition S, 2 , and T as
where S,, is 1 x 1. Since Z,, = 0, the conditional distribution of T,, given TI, does not depend on TI, and 8,, has diagonal elements a,,, . . . , a,,. It follows from Proposition 8.10 that t, Ti,, and T,, are mutually independent and
,,
The elements of T,, are t,,, t,,, . . . , t,,, and since Z,, is diagonal, these are independent with i
C(til) = N(0, a,,),
=
2,. . . , p .
Also.
and
The conclusion of the proposition follows by an induction argument on the dimension parameter p. When C(S) = W(Z, p, n) is a nonsingular Wishart distribution, the random variable JSIis called the generalized variance. The distribution of IS\ is easily derived using Proposition 8.11. First, write Z = B2 with B > 0 and let S, = B-'SB-'. Then C(S,) = W ( I ,p, n) and IS1 = IZIIS,I. Also, if for i = 1,. . . , p, and t,,, . . . , tpp TT' = S,, T E G : , then C(ti) = XZ,-i+ are mutually independent. Thus
,
c(Isl) = ~ ( l ~ l l s l=l )e(ImllTTfl)= E (
P
1 x 1t i ~)
Therefore, the distribution of IS1 is the same as the constant 1x1 times a product of p independent chi-square random variables with n - i + 1 degrees of freedom for i = 1,. . . , p.
316
THE WISHART DISTRIBUTION
8.3. THE NONCENTRAL WISHART DISTRIBUTION Just as the Wishart distribution is a matrix generalization of the chi-square distribution, the noncentral Wishart distribution is a matrix analog of the noncentral chi-square distribution. Also, the noncentral Wishart distribution arises in a natural way in the study of distributional properties of test statistics in multivariate analysis.
Definition 8.2. Let X E Cp, have a normal distribution N ( p , In €3 2 ) . A random matrix S E Sp has a noncentral Wishart distribution with parameters 2 , p, n , and A = p'p if C ( S ) = C ( X r X ) . In this case, we write C ( S ) = W ( Z , p, n; A). In thls definition, it is not obvious that the distribution of X'X depends on p only through A = p'p. However, an invariance argument establishes this. The group 8, acts nn CP,, by sending x into T x for x E CP,, and r E 8,. A maximal invariant under t h s action is x'x. When C ( X ) = N ( p , In €3 Z ) , C ( I ' X ) = N ( r p , In €3 2 ) and we know the distribution of X'X depends only on a maximal invariant parameter. But the group action on the parameter space is ( p , Z ) -+ ( r p , 2 ) and a maximal invariant is obviously (p'p, Z ) . Thus the distribution of X'X depends only on (p'p, 2). When A = 0, the noncentral Wishart distribution is just the W ( Z , p, n ) distribution. Let Xi,. . . , Xi be the rows of X in the above definition so X,,. . . , Xn are independent and C ( X , ) = N ( p i , Z ) where p;,. . . , ph are the rows of p. Obviously,
where Ai = pipi. Thus S, = XiX,', i clear that, if S = X'X, then
=
1,. . . , n , are independent and it is
In other words, the noncentral Wishart distribution with n degrees of freedom can be represented as the convolution of n noncentral Wishart distributions each with one degree of freedom. This argument shows that, if C ( S , ) = W ( Z , p, n,; A i ) for i = 1,2 and if S , and S, are independent, then C ( S , + S,) = W ( Z , p, n , n,, A, + A,). Since
+
it follows that GS = nZ when C(S)
=
+A
W(Z, p , n; A). Also,
but an explicit expression for Cov(S,) is not needed here. As with the central Wishart distribution, it is not difficult to prove that, when C(S) = W(Z, p, n; A), then S is positive definite with probability one iff n > p and Z > 0. Further, it is clear that if C(S) = W(Z, p, n; A) and A is an r x p matrix, then C(ASA') = W(AZAf, r, n; AAA'). The next result provides an expression for the density function of S in a special case. Proposition 8.12. Suppose t ( S ) = W(2, p, n; A) where n > p and Z > 0, and assume that A has rank one, say A = 1/17! with 17 E RP. The density of S with respect to v(dS) = dS/lS~(p+')/~ is given by
where p(SIZ) is the density of a W(Z, p, n) distribution given in Proposition 8.2 and the function H is defined in Example 7.13.
4,.
Proof. Consider X E CP,., with C(X) = N(p, In8 Z ) where p E and p'p = A. Since S = X'X 1s a maximal invariant under the action of 8, on Cp, ., the results of Example 7.15 show that the density of S with respect to ~ is the measure vo(dS) = (&)"pa (n, p ) l ~ l ( " - p - ' ) / dS
Here, f is the density of X and p, is the unique invariant probability measure on 8,. The density of X is
Substituting this into the expression for h(S) and doing a bit of algebra shows that the density p,(SIZ, A) with respect to v is
318
THE WISHART DISTRIBUTION
The problem is now to evaluate the integral over On. It is here where we use the assumption that A has rank one. Since A = p'p, p must have rank one so p = [q' where5 E Rn,11511 = 1, and q E RP, A = qq'. Since 11511 = 1,E = r,el for some I?, E On where E , E Rn is the first unit vector. Setting u = ( T J ' Z - ' S Z - ' ~ ) ~ /XZ-lq ~, = U&E, for some r2E On as UE, and XZ-'q have the same length. Therefore,
The right and left invariance of po was used in the third to the last equality and y,, is the (1,l) element of I?. The function H was evaluated in Example 7.13. Therefore, when A = qq',
The final result of this section is the analog of Proposition 8.5 for the noncentral Wishart distribution.
ep,
where C(X) = N ( p , Q @ 2 ) and let Proposition 8.13. Consider X E with A >, 0. If B = A Q A is S = X'PX where P 2 0 is n X n. Write P = a rank k orthogonal projection and if AQPp = A p , then
Proof: The proof of t h s result is quite similar to that of Proposition 8.5 and is left to the reader.
It should be noted that there is not an analog of Proposition 8.7 for the noncentral Wishart distribution, at least as far as I know. Certainly, Proposition 8.7 is false as stated when S is noncentral Wishart. 8.4. DISTRIBUTIONS RELATED TO LIKELIHOOD RATIO TESTS In the next two chapters, statistics that are the ratio of determinants of Wishart matrices arise as tests statistics related to likelihood ratio tests.
.
DISTRIBUTIONS RELATED TO LIKELIHOOD RATIO TESTS
319
Since the techniques for deriving the distributions of these statistics are intimately connected with properties of the Wishart distribution, we have chosen to treat this topic here rather than interrupt the flow of the succeeding chapters with such considerations. Let X E C,,, and S E Sp+ be independent and suppose that C ( X ) = N ( p , I, @ 2 ) and C ( S ) = W ( 2 , p, n ) where n > p and Z > 0. We are interested in deriving the distribution of the random variable
for some special values of the mean matrix p of X. The argument below shows that the distribution of U depends on ( p , Z ) only through Z-'/2p'pZ-'/2 where 2'12 is the positive definite square root of Z. Let S = 2 ' / 2 ~ , 2 ' / and 2 Y= Then S , and Y are independent, C ( S , ) = W ( I , p, n ) , and C ( Y ) = N ( p Z - ' / 2 , I, @ I,). Also, U=
IS
IS11
+IS1XfXI = IS, + Y'Y1
'
However, the discussion of the previous section shows that Y'Y has a noncentral Wishart distribution, say C ( Y ' Y ) = W ( I , p, n ; A ) where A = 2 - 1 / 2 p ' p 2 - 1 / 2 .In the following discussion we take Z = I, and denote the distribution of U by
where A
=
p'p. When p
=
is used. In the case that p
0, the notation
=
1,
where C ( S ) = x;. Since C ( X ) = N ( p , I,), C ( X f X )= X i ( A ) where A p'p 2 0. Thus
=
320 When X',(A) and
THE WISHART DISTRIBUTION
Xiare independent, the distribution of the ratio
is called a noncentral F distribution with parameters m, n, and A. When A = 0, the distribution of F(m, n; 0) is denoted by F,, and is simply called an F distribution with (m, n) degrees of freedom. It should be noted that this usage is not standard as the above ratio has not been normalized by the constant n/m. At times, the relationship between the F distribution and the beta distribution is useful. It is not difficult to show that, when and Xi are independent, the random variable 2
v =
Xn
xt + x',
has a beta distribution with parameters n/2 and m/2, and this is written as C(V) = $(n/2, m/2). In other words, V has a density on (0,l) given by
where a = n/2 and /3 random variable
=
m/2. More generally, the distribution of the
is called a noncentral beta distribution and the notation C(V(A)) = $(n/2, m/2; A) is used. In summary, when p = 1,
where A = pfp 2 0. Now, we consider the distribution of U when m C(Xf) = N(pf, I p ) where X' E RP and
The last equality follows from Proposition 1.35.
=
1. In this case,
PROPOSITION
8.14
Proposition 8.14. When m
=
where 6
=
Proof.
It must be shown that
1,
pp' 2 0.
C(XS-'X') For X fixed, X
t
=
F(p,n -p
+ 1,6)
0, Proposition 8.10 shows that
, , (
XX'
j
=
xi-,+
1
when C ( S ) = W ( 1 , p, n ) . Since this distribution does not depend on X, we have that ( X X ')/XS-'x' and XX' are independent. Further,
since C ( X ' ) = N ( p t , I,). Thus
The next step in studying C ( U ) is the case when m > 1, p > 1 , but 0.
(1. =
Proposition 8.15. Suppose X and S are independent where C ( S ) = W ( I , p , n ) and C ( X ) = N(0, I , @ I,). Then
where U,,. . . , Urnare independent and C ( q ) = % ( ( n - p Proof.
The proof is by induction on m and, when m
C ( u )= $8 ( ( n - p
=
+ 1)/2, p/2).
+ i ) / 2 , p/2).
1, we know
322 Since X'X
THE WISHART DISTRIBUTION =
C;"X,X,' where X has rows Xi,. . . , Xh,
The first claim is that
and
are independent random variables. Since X I , . . . , X, are independent and independent of S , to show U, and W are independent, it suffices to show that U , and S X,X; are independent. To do ths, Proposition 7.19 is applicable. The group GI, acts on ( S , X I ) by
+
A ( S , X I )= (ASA', AX,)
and the induced group action on T = S + X , X ; sends T into ATA'. The induced group action is clearly transitive. Obviously, T is an equivariant function and also U , is an invariant function under the group action on ( S , X I ) . That T is a sufficient statistic for the parametric family generated by GI, and the fixed joint distribution of ( S , X I ) is easily checked via the factorization criterion. By Proposition 7.19, U, and S + X , X i are independent. Therefore.
where Ul and W are independent and
However, C ( S + X , X ; ) = W ( I ,p , n plied to W yields
+ 1 ) and the induction hypothesis ap-
PROPOSITION
8.16
where W , ,. . . , W m - , are independent with
Setting U, =
w;-,, i = 2 , . . . , m , we have
where U,,. . . , Umare independent and
The above proof shows that q's are given by
and that these random variables are independent. Since C(S = W(I, p , n + i - I), Proposition 8.14 yields
+ C;-'X,X;)
In the special case that A has rank one, the distribution of U can be derived by an argument similar to that in the proof of Proposition 8.15. Proposition 8.16. Suppose X and S are independent where C(S) = W(I, p , n) and C(X) = N ( p , I, @ I,). Assume that p = 517' with 5 E Rm, 11511 = 1, and 17 € R P . Then
where U,,. . . , Urnare independent,
and n-p+m 2
e(um)= a (
p
T ;I V ) .
324
THE WISHART DISTRIBUTION
Proof. Let E, be the mth standard unit in Rm. Then E 6, as 11511 = 11&,11. Since
r
r.$= E,
for some
and C(I'X) = N(E,TJ', I, @ I,), we can take $, = E, without loss of generality. As in the proof of Proposition 8.15, XrX = C;"&y where XI,. . . , X, are independent. Obviously, C(X,) = N(0, I,), i = 1,. . . , m - 1, and C(X,) = N(q, I,). Now, write U = ll;"U, where
The argument given in the proof of Proposition 8.15 shows that
and {S + XIXi, X2,. . . , X,) are independent. The assumption that XI has mean zero is essential here in order to verify the sufficiency condition necessary to apply Proposition 7.19. Since U2,. . . , Urn are functions of { S X,X;, X,,. . . , X,), U, is independent of {U,,. . . , U,). Now, we simply repeat this argument m - 1 times to conclude that U,,. . . , Urn are independent, keeping in mind that XI,. . . , Xm-, all have mean zero, but X, need not have mean zero. As noted earlier,
+
By Proposition 8.14,
Now, we return to the case when p = 0. In terms of the notation C(U) = U(n, m , p), Proposition 8.14 asserts that
Further, Proposition 8.15 can be written
where thls equation means that the distribution U(n, m, p ) can be represented as the distribution of the product of m independent random variables with distribution U(n + i - 1, 1, p ) for i = 1,. . . , m. An alternative representation of U(n, m, p ) in terms of p independent random variables when m >, p follows. If m >, p and
with C(S) = W(I, p, n) and C(X) = N(0, I, 8 I,), the matrix T = X'X has a nonsingular Wishart distribution, C(T) = W(I, p, m). The following technical result provides the basic step for decomposing U(n, m, p ) into a product of p independent factors. Proposition 8.17. Partition S into Sijwhere Sij is pi x p,, i, j p , + p2 = p. Partition T similarly and let
Then the five random vectors S,,, TI,, S2,. independent. Further,
,,
7722.1~ and
=
1,2, and
Z are mutually
Proof: Since S and T are independent by assumption, (S,,, S,,, S,,. ,) and (T,,, TI,, T,,. ,) are independent. Also, Proposition 8.8 shows that (S,,, S,,) and S,, . are independent with
,
and
.,
Similar remarks hold for (TI,, TI,) and T2, with n replaced by m. Thus the
326
THE WISHART DISTRIBUTION
,
four random vectors ( S , , , S,,), S,,.,, ( T I , ,TI,), and T,,. are mutually and ( T I , ,TI,), the proposition follows if we show that Z is independent of the vector ( S , , , T,,). Conditional on ( S ,,, TI,),
independent. Since Z is a function of (S,,, S,,)
Let A ( B ) be the positive definite square root of S , ,(TI,). With V = A-IS,, and W = B-IT,,,
Also.
where
However, Q is easily shown to be an orthogonal projection of rank p,. By Proposition 8.5,
~ ( Z I ( S ,T ,I , > ))
=
W(I7 P 2 , P I )
,,
for each value of ( S , ,, T I,). Therefore, Z is independent of ( S , T I,) and the proof is complete. Proposition 8.18. If m 2 p, then.
Proof: By definition,
with n 2 p. In the notation of Proposition 8.17, partition S and T with p , = 1 andp, = p - 1. Then S l l , Til, S22.,, T22.1, and
are mutually independent. However,
and
Thus
and the two factors on the right side of this equality are independent by Proposition 8.17. Obviously,
Since C(T2,.,) = W(I, p - 1, m - l), C(Z) = W ( I ,p - 1, l), and T2,,, and Z are independent, it follows that
Therefore,
which implies the relation
Now, an easy induction argument establishes
328
THE WISHART DISTRIBUTION
which implies that
and this completes the proof. Combining Propositions 8.15 and 8.18 leads to the following.
Proposition 8.19.
If m >, p, then
Proof: For arbitrary m, Proposition 8.15 yields
where this notation means that the distribution U(n, m, p ) can be represented as the product of m independent beta-random variables with the factors in the product having a %((n - p i)/2, p/2) distribution. Since
+
Proposition 8.18 implies that
Applying Proposition 8.15 to U(n - p
+ m, p, m) yields
which is the distribution U(n, m, p). In practice, the relationship U(n, m, p ) = U(n - p + m, p, m) shows that it is sufficient to deal with the case that m < p when tabulating the
329
PROBLEMS
distribution U(n, m , p). Rather accurate approximations to the percentage points of the distribution U(n, m, p ) are available and these are discussed in detail in Anderson (1958, Chapter 8). This topic is not pursued further here.
PROBLEMS 1.
Suppose S is W(Z,2, n), n >, 2, Z > 0. Show that the density of can be written as r = s,,/
6
\iG +
and is defined as follows. Let XI and X2 be where p = ol,/ independent chi-square random variables each with n degrees of freedom. Then $(t) = G e ~ p [ t ( X , X , ) ~ for / ~ l It1 < 1. Using this representation, prove that p(r1p) has a monotone.likelihood ratio. 2. The gamma distribution with parameters a > 0 and X > 0, denoted by G(a, A), has the density
with respect to Lebesgue measure on (0, a). (i) Show the characteristic function of t h s distribution is (1 - iAt)-*. (ii) Show that a G(n/2,2) distribution is that of a distribution.
Xi
3. The above problem suggests that it is natural to view the gamma family as an extension of the chi-squared family by allowing nonintegral degrees of freedom. Since the W(Z, p , n) distribution is a generalization of the chi-squared distribution, it is reasonable to ask if we can define a Wishart distribution for nonintegral degrees of freedom. One way to pose this question is to ask for what values of a is Ga(A) = II, - 2iAIa, A E S,, a characteristic function. (We have taken Z = I, for convenience). (i) Using Proposition 8.3 and Problem 7.1, show that +a is a characteristic function for a = 1/2,. . . , ( p - 1)/2 and all real a > ( p - 1)/2. Give the density that corresponds to $a for a > ( p - 1)/2. W(Ip, p , 2a) denotes such a distribution. (ii) For any Z >, 0 and the values of a given in (i), show that $,(ZA), A E S,, is a characteristic function.
330 4.
THE WISHART DISTRIBUTION
Let S be a random element of the inner product space (S,, ( . ,
a ) )
where ( . , .) is the usual trace inner product on Sp. Say that S has an
0,-invariant distribution if C(S) = C(I'ST') for each r E 0,. Assume S has an 8,-invariant distribution. (i) Assuming &Sexists, show that &S= cIp where c = &s,, and s,, is the i, j element of S. (ii) Let D E Sp be diagonal with diagonal elements d l , . . . , dp. Show that var((D, S)) = (y - p)Cd? P(Cfdl)2 where y = var(s,,) and P = cov(s, , s,, ). (iii) For A E S,, show that var((A, S)) = (y - P)(A, A) P(IP, A),. From this conclude that Cov(S) = ( y - P ) I, 8 I, PI, q I,.
+
,
+ +
5.
Suppose S E S; has a density f with respect to Lebesgue measure dS restricted to S;. For each n > p , show there exists a random matrix X E C,,, that has a density with respect to Lebesgue measure on Cp, , and C(X'X) = C(S).
6.
Show that Proposition 8.4 holds for all n,, n, equal to 1,2,. . . , p - 1 or any real number greater than p - 1.
7. (The inverse Wishart distribution.) Say that a positive definite S E S; has an inverse Wishart distribution with parameters A, p , and v if C(S-I) = w(A-I, p , v p - 1). Here A E S; and v is a positive integer. The notation C(S) = IW(A, p , v) signifies that C(SP') = w(A-', p, v + p - 1). (i) If C(S) = IW(A, p , v) and A is r x p of rank r, show that C(ASAf) = IW(AAAf, r, v). (ii) If C(S) = IW(I,, p , v) and I' E Op, show that C(rSI") = C(S). (iii) If C(S) = IW(A, p , v), show that &(S) = ( V- 2)-'A. Show that Cov(S) has the form c,A 8 A c,AU A-what are c, and c,? (iv) Now, partition S into S , , : q x q, SI2: q X r, and S2,: r x r with S as in (iii). Show that C(S,,) = IW('A,,, q, v). Also show that C(S,,.,) = IW(A2,,,, r, v + q).
+
+
8. (The matric t distribution.) Suppose X is N(0, I, 8 I,) and S is W(I,, p, m), m >, p. Let S-'/, denote the inverse of the positive definite square root of S. When S and X are independent, the matrix T = XS-'I2 is said to have a matric t distribution and is denoted by C(T) = T(m - p + 1, I,, I,).
331
PROBLEMS
(i) Show that the density of T with respect to Lebesgue measure on C,, ,is given by
Also, show that C(T) = C(rTAf) for r E 8, and A E 8,. Using this, show GT = 0 and Cov(T) = c,I, 3€ I, when these exist. Here, c, is a constant equal to the variance of any element of T. (ii) Suppose V is IW(I,, p , v) and that T given V is N(0, I, 8 V). Show that the unconditional distribution of T is T(v, I,, I,). (iii) Using Problem 7 and (ii), show that if T is T(v, I,, I,), and TI, is the k x q upper left-hand corner of T, then TI, is T(v, Ik, Iq).
9. (Multivariate F distribution.) Suppose S, is W(I,, p, m) (for m = 1,2,. . . ) and is independent of S2, which is W(I,, p , v + p - 1) (for v = 1,2,. . . ). The matrix F = S; '/2S,SF 'I2has a matric F distribution that is denoted by F(m, v, I,). (i) If S is IW(I,, p, v) and V given S is W(S, p , m), show that the unconditional distribution of V is F(m, v , I,). (ii) Suppose T is T(v, I,, I,). Show that T'T is F(r, v, I,). (iii) When r 2 p, show that the F(r, v, I,) distribution has a density with respect to dF/I F I(,+ ' ) Igiven 2 by
(iv) For r > p , show that, if F is F(r, v, I,), then F-' is F(v + p 1, r - p + 1, I,). (v) If F is F(r, v, I,) and Fllis the q X q upper left block of F, use (ii) to show that Fllis F(r, v, Iq). (vi) Suppose Xis N(0, I, 8 I,) with r < p and S is W(I,, p , m) with m >, p , X and S independent. Show that XS-'x' is F ( p , m - p + 1, I,).
10. (Multivariate beta distribution.) Let S, and S2 be independent and suppose C(Si) = W(I,,, p , m,), i = 1,2, with m, + m, > p. The random matrix B = ( S , S2)- '/2S,(S, S,)- 'I2 has a p-dimensional multivariate beta distribution with parameters m, and m,. This is
+
+
332
THE WISHART DISTRIBUTION
written C(B)
=
B(m,, m,, I,) (when p
=
1, this is the univariate beta
distribution with parameters m,/2 and m2/2).
(i) If B is B(m,, m,, I,) show that C(I'Brf) = C(B) for all r E 8,. Use Example 7.16 to conclude that C(B) = C(qD9') where 9 E Op is uniform and is independent of the diagonal matrix D with elements A , 2 . . > A, > 0. The distribution of D is determined by specifying the distribution of A,, . . . , A, and this is the distribution of the ordered roots of (S, + S2)-'/2S,(S, S2)(ii) With S, and S, as in the definition of B, show that S:/2(Sl + s,)-'s,"~ is B(m,, m,, I,). (iii) Suppose F is F(m, v, I,). Use (i) and (ii) to show that ( I F ) is B ( p v - 1, m, I,) and F ( I + F ) - ' is B(m, p v - 1, I,). (iv) Suppose that X is N(0, I, I,) and that it is independent of S, which is W(Ip, p , m). When r 6 p and m > p, show that X(S XfX)-'X' is B(p, r m - p, I,). (v) If B is B(m,, m,, I,) and m, 2 p , show that det(B) is distributed as U(m,, m,, p ) in the notation of Section 7.4.
+
+
+
@J
+
+
+
NOTES AND REFERENCES 1. The Wishart distribution was first derived in Wishart (1928).
2. For some alternative discussions of the Wishart distribution, see Anderson (1958), Dempster (1969), Rao (1973), and Muirhead (1982).
3. The density function of the noncentral Wishart distribution in the general case is obtained by "evaluating"
(see the proof of Proposition 8.12). The problem of evaluating
en,,
has received much attention since the paper of James for A E (1954). Anderson (1946) first gave the noncentral Wishart density when
333
NOTES AND REFERENCES
p has rank 1 or rank 2. Much of the theory surrounding the evaluation of 4 and series expansions for 4 can be found in Muirhead (1982). 4.
Wilks (1932) first proved Proposition 8.15 by calculating all the moAnderson ments of U and showing these matched the moments of (1958) also uses the moment method to find the distribution of U. This method was used by Box (1949) to provide asymptotic expansions for the distribution of U (see Anderson, 1958, Chapter 8).
nU,.
Inference for Means in Multivariate Linear Models
Essentially, this chapter consists of a number of examples of estimation and testing problems for means when an observation vector has a normal distribution. Invariance is used throughout to describe the structure of the models considered and to suggest possible testing procedures. Because of space limitations, maximum likelihood estimators are the only type of estimators discussed. Further, likelihood ratio tests are calculated for most of the examples considered. Before turning to the concrete examples, it is useful to have a general model within which we can view the results of this chapter. Consider an n-dimensional inner product space (V, (., .)) and suppose that X is a random vector in V. To describe the type of parametric models we consider for X, let f be a decreasing function on [0, GO) to [0, co) such that f [(x, x)] is a density with respect to Lebesgue measure on (V, (. , .)). For convenience, it is assumed that f has been normalized so that, if Z E V has density f, then Cov(Z) = I. Obviously, such a Z has mean zero. Now, let M be a subspace of V and let y be a set of positive definite linear transformations on V to V such that I E y. The pair (M, y) serves as the parameter space for a model for X. For p E M and 2. E y,
is a density on V. The family
335
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODELS
determines a parametric model for X. It is clear that if p ( . ( p , Z) is the density of X, then GX = p and Cov(X) = Z. In particular, when
then X has a normal distribution with mean p E M and covariance Z E y. The parametric model for X i s in fact a linear model for X with parameter set (M, y). Now, assume that Z ( M ) = M for all Z E y. Since I E y, the least-squares and Gauss-Markov estimator of p are equal to P X where P is the orthogonal projection onto M. Further, fl = P X is also the maximum likelihood estimator of p. To see this, first note that P C = Z P for Z E y since M is invariant under Z E y. With Q = I - P, we have (X -
p , Z - ' ( x - p))
=
( ~ ( -x p )
= (PX
-
+ ex, Z-'(p(x
p, 2-'(PX - p))
-
1-1)+ e x ) )
+ ( e x , 2-'QX).
The last equality is a consequence of
as QP
=
0 and 2- 'P
P Z - I. Therefore, for each Z
=
(X -
p , 2 - ' ( x - p))
E
y,
(QX,2-'Qx)
with equality iff p = Px. Since the function f was assumed to be decreasing, it follows that P = P X is the maximum likelihood estimator of p, and P is unique iff is strictly decreasing. Thus under the assumptions made so far, fl = P X is the maximum likelihood estimator for p. These assumptions hold for most of the examples considered in this chapter. To find the maximum likelihood estimator of 2 , it is necessary to compute sup I Z I - " ' [~(~e x , 2 - ' e x ) ] ~
E
Y
and find the point f: E y where the supremum is achieved, assuming it exists. The solution to this problem depends crucially on the set y and this is what generates the infinite variety of possible models, even with the assumption that Z M = M for Z E y. The examples of this chapter are generated by simply choosing some y 's for which f: can be calculated explicitly. We end this rather lengthy introduction with a few general comments about testing problems. In the notation of the previous paragraph, consider a parameter set (M, y ) with I E y and assume Z M = M for I: E y. Also, let
336
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODELS
Mo c M be a subspace of V and assume that 2 M o = Mo for 2 E y. Consider the problem of testing the null hypothesis that p E Mo versus the alternative that p E ( M - M o ) Under the null hypothesis, the maximum likelihood estimator for p is P o = POXwhere Pois the orthogonal projection onto Mo. Thus the likelihood ratio test rejects the null hypothesis for small values of sup 1 2 1 - " / ~[(cox, f ~-'eox>l
where Qo = I - Po.Again, the set y is the major determinant with regard to the distribution, invariance, and other properties of A ( x ) . The examples in this chapter illustrate some of the properties of y that lead to tractable solutions to the estimation problem for B and the testing problem described above.
9.1. THE MANOVA MODEL The multivariate general linear model introduced in Example 4.4, also known as the multivariate analysis of variance model (the MANOVA model), is the subject of this section. The vector space under consideration is C,,, with the usual inner product ( . , .) and the subspace M of C,, is
where Z is a fixed n X k matrix of rank k. Consider an observation vector X E C,,, and assume that C(X)
=
N ( p , In3 € 2)
where p E M and 2 is an unknown p x p positive definite matrix. Thus the set of covariances for X is y =
{I,c3 BIZ
E
Spt)
and (M, y ) is the parameter set of the linear model for X. It was verified in Example 4.4 that M is invariant under each element of y. Also, the orthogonal projection onto M is P = P, 8 I, where
Further, Q
=
I-P
=
Q, 8 I, is the orthogonal projection onto M I where
THE MANOVA MODEL
Q,
=
I,, - P,. Thus
p
=
P X = ( P , €4 I,)X
is the maximum likelihood estimator of p
E
=
P,X
M and, from Example 7.10,
is the maximum likelihood estimator of Z when n - k >, p, which we assume throughout this discussion. Thus for the MANOVA model, the maximum likelihood estimators have been derived. The reader should check that the MANOVA model is a special case of the linear model described at the beginning of t h s chapter. We now turn to a discussion of the classical MANOVA testing problem. Let K be a fixed r X k matrix of rank r and consider the problem of testing Ho : K P
=
0 versus H , : KZp
*0
where p = ZZp is the mean of X. It is not obvious that this testing problem is of the general type described in the introduction to t h s chapter. However, before proceeding further, it is convenient to transform t h s problem into what is called the canonical form of the MANOVA testing problem. The essence of the argument below is that it suffices to take K = K O =(I, 0) in the above problem. In other words, a transformation of the original problem results in a problem where Z = Zo and K = K,. We now proceed with the details. The parametric model for X E C,, , is
and the statistical problem is to test H, : KZp = 0 versus H, : KZp * 0. Since Z has rank k , Z = 9 U for some linear isometry 9 : n x k and some k x k matrix U E G:. The k columns of 9 are the first k columns of some r E 8, so 9 Setting X
=
rfX,
=
r(;)
=
rz,.
p = UP, and K = KU-', we have
338
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODELS
and the testing problem is H,: KB = 0 versus H I : KB same argument to K' as we did to 2,
for some A
E
0, and some r x r matrix Ul in G:.
and set Y = I',X, B
=
* 0. Applying
the
Let
Alp. Since
it follows that
and the testing problem is H, : K,B = 0 versus HI : K,B * 0. Thus after two transformations, the original problem has been transformed into a problem with Z = Z , and K = KO. Since KO = (I,O), the null hypothesis is that the first r rows of B are zero. Partition B into
and partition Y into
Because Cov(Y) is clear that
=
I, @ Z, Y,, Y,, and Y3 are mutually independent and it
339
THE MANOVA MODEL
Also, the testing problem is Ho : B , = 0 versus H I : B , * 0. It is this form of the problem that is called the canonical MANOVA testing problem. The only reason for transforming from the original problem to the canonical problem is that certain expressions become simpler and the invariance of the MANOVA testing problem is more easily articulated when the problem is expressed in canonical form. We now proceed to analyze the canonical MANOVA testing problem. To simplify some later formulas, the notation is changed a bit. Let Y , , Y,, and Y, be independent random matrices that satisfy
so B , is r x p and B, is s x p. As usual Z is a p x p unknown positive definite matrix. To insure the existence of a maximum likelihood estimator for Z, it is assumed that m 2 p and the sample space for Y, is taken to be the set of all m x p real matrices of rank p. A set of Lebesgue measure zero has been deleted from the natural sample space C,,, of Y,. The testing problem is Ho:B,
Setting n
=
r
=
0
versus H , : B,
* 0.
+ s + m and
C ( Y ) = N(p,I,
€3
2)where p is an element of the subspace
In this notation, the null hypothesis is that p
E
Mo c M where
340
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODELS
Since M and M, are both invariant under I, 8 Z for all Z > 0, the testing problem under consideration is of the type described in general terms earlier. and y
=
{I, 8 ZIZ > 0).
When the model for Y is (M, y), the density of Y is p(YIB,, B,, 2 ) = (&)-"Iz(-"/~
In this case, the maximum likelihood estimators of Bl, B,, and Z are easily seen to be
When the model for Y is (M,, y), the density of Y is p(YI0, B,, 8 ) and the maximum likelihood estimators of B, and Z are
Therefore, the likelihood ratio test rejects for small values of
Summarizing this, we have the following result. Proposition 9.1. For the canonical MANOVA testing problem, the likelihood ratio test rejects the null hypothesis for small values of the statistic
Under H,, C (U) = U(m, r, p) where the distribution U(m,r, p ) is given in Proposition 8.15.
PROPOSITION
34 1
9.1
Proof. The first assertion is clear. Under H,, C(Y,) = N(0, I, @ Z) and C(Y3) = N(0, I, @ 2). Therefore, C(Y;Y,) = W(Z, p , r ) and C(Y;Y3)= W(Z, p, m). Since m >, p, Proposition 8.18 implies the result. Before attempting to interpret the likelihood ratio test, it is useful to see first what implications can be obtained from invariance considerations in the canonical MANOVA problem. In the notation of the previous paragraph, (M, y) is the parameter set for the model for Y and under the null hypothesis, (M,, y) is the parameter set for Y. In order that the testing problem be invariant under a group of transformations, both of the parameter sets (M, y) and (M,, y) must be invariant. With this in mind, consider the group G defined by G
=
{gig = (TI, r 2 , r 3 , 6, A ) ; r, E e,, r2E
s,,
where the group action on the sample space is given by
The group composition, defined so that the above action is a left action on the sample space, is
Further, the induced group action on the parameter set (M, y) is
where the point
has been represented simply by (B,, B2, 2 ) . Now it is routine to check that when Y has a normal distribution with &Y E M(&Y E M,) and Cov(Y) E y, then &gY E M(&gY E M,) and Cov(gY) E y, for g E G. Thus the
342
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODELS
hypothesis testing problem is G-invariant and the likelihood ratio test is a G-invariant function of Y. To describe the invariant tests, a maximal invariant under the action of G on the sample space needs to be computed. The following result provides one form of a maximal invariant. Proposition 9.2. Let t = min{r, p), and define h(Yl, Y,, 3 ) to be the t-dimensional vector (A,, . . . , A,)' where A, > . > A, are the t largest eigenvalues of Y;Y, (Y;Y3)- I. Then h is a maximal invariant under the action of G on the sample space of Y.
Proof. Note that Y;Y,(Y;Y,)-' has at most t nonzero eigenvalues, and these t eigenvalues are nonnegative. First, consider the case when r < p so t = r. By Proposition 1.39, the nonzero eigenvalues of Y;Y,(Y'Y ) - I are the 3 same as the nonzero eigenvalues of Y,(Y;Y3)-'Y;, and these eigenvalues are obviously invariant under the action of g on Y. To show that h is maximal invariant for this case, a reduction argument similar to that in Example 7.4 is used. Given
we claim that there exists a go E G such that
where D is r X r and diagonal and has diagonal elements fi,, . . . , f i r . For g = ( r l , r2, r3,5>A),
By Proposition 5.2, Y3 = \k3U3 where \k, E %,, and U3 E G: Choose A' = U; 'A where A E aP is, as yet, unspecified. Then
is p x p.
and, by the singular value decomposition theorem for matrices, there exists
a
r, E Orand a A E tIpsuch that rlYIU<'A
=
(DO)
where D is an r x r diagonal matrix whose diagonal elements are the square roots of the eigenvalues of Y,(U,U;)-'y; = Y,(Y,Y;)-'Y;. With this choice for A E Or it is clear that &A' = Y3UC1A E Tp,, so there exists a r, E 0, such that
Choosing
r, = I,, 5 = - Y,A',
and setting
goy has the representation claimed. To show h is maximal invariant, suppose h(Y,, Y,, Y,) = h ( Z , , Z,, Z,). Let D be the r x r diagonal matrix, the squares of whose diagonal elements are the eigenvalues of Y,(Y,Y;)-'Y; and Z , ( Z , Z j ) - ' 2 ; .Then there exist go and g , E G such that
so Y = g;Ig,Z. Thus Y and Z are in the same orbit and h is a maximal invariant function. When r > p, basically the same argument establishes that h is a maximal invariant. To show h is invariant, if g = ( r , , r,, r,, 5, A), then the matrix Y;Y,(Y;Y,)gets transformed into A Y;Y,(Y;Y,)- 'A- when Y is transformed to gY. By Proposition 1.39, the eigenvalues of AY;Y,(Y;Y,)- 'Ap' are the same as the eigenvalues of Y;Y,(Y;Y,)-I, so h is invariant. To show h is maximal invariant, first show that, for each Y, there exists a go E G such that
'
'
where D is the p x p diagonal matrix of square roots of eigenvalues
344
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODELS
(Y;Y,)(Y;Y,)-I. The argument for this is similar to that given previously and is left to the reader. Now, by mimicking the proof for the case r 6 pl, it follows that h is maximal invariant. Proposition 9.3. The distribution of the maximal invariant h (Y,, Y2, Y,) depends on the parameters (B,, B,, 2 ) only through the vector of the t largest eigenvalues of B; BIZ- I .
Proof: Since h is a G-invariant function, the distribution of h depends on (B,, B,, Z) only through a maximal invariant parameter under the induced action of G on the parameter space. Thls action, given earlier, is
However, an argument similar to that used to prove Proposition 9.2 shows that the vector of the t largest eigenvalues of B;B,B-' is maximal invariant in the parameter space. An alternative form of the maximal invariant is sometimes useful. Proposition 9.4. Let t = min{r, p ) and define h,(Y,, Y,, Y,) to be the t-dimensional vector (el,. . . , 8,)' where 8, g . . . g 8, are the t smallest eigenvalues of Y;Y,(Y;Y, + Y;Y,)-I. Then 8, = 1/(1 A,), i = 1,. . . , t, where Ai's are defined in Proposition 9.2. Further, h,(Y,, Y,, Y,) is a maximal invariant.
+
Proof: For A
E
[0, a),let 8
=
1/(1
+ A). If A satisfies the equation
then a bit of algebra shows that 8 satisfies the equation
and conversely. Thus 8, = 1/(1 + A,), i = 1,. . . , t, are the t smallest eigenvalues of Y;Y,(Y;Y, + Y;Y,)-I. Since h,(Y,, Y,, Y,) is a one-to-one function of h (Y,, Y2, Y,), it is clear that h ,(Y,, Y,, Y,) is a maximal invariant. Since every G-invariant test is a function of a maximal invariant, the problem of choosing a reasonable invariant test boils down to studying tests based on a maximal invariant. When t = min{p, r ) = 1, the following result shows that there is only one sensible choice for an invariant test.
PROPOSITION
345
9.5
Proposition 9.5. If t = 1 in the MANOVA problem, then the test that rejects for large values of A, is uniformly most powerful w i t h the class of G-invariant tests. Further, this test is equivalent to the likelihood ratio test. Proo,f: First consider the case when p negative scalar and
=
1. Then Y{Y,(Y;Y,)-' is a non-
Also. C ( Y , ) = N ( B , , a21,) and C ( Y , ) = N(0, a21,) where Z has been set equal to a2 to conform to classical notation when p = 1. By Proposition 8.14,
f(A,)
=
F(r, m; 6 )
where 6 = B ; B , / a 2 and the null hypothesis is that 6 = 0. Since the noncentral F distribution has a monotone likelihood ratio, it follows that the test that rejects for large values of A , is uniformly most powerful for testing 6 = 0 versus 6 > 0. As every invariant test is a function of A,, the case for p = 1 follows. Niow, suppose r = 1. Then the only nonzero eigenvalue of Y;Y,(Y;Y,)-' is Y,I(Y;Y,)-'Y;by Proposition 1.39. Thus
A,
=
y1(qy3)-'y;
and, by Proposition 8.14,
wher'e 6 = B I Z - ' B ; > 0. The problem is to test 6 = 0 versus 6 > 0. Again, the nloncentral F distribution has a monotone likelihood ratio and the test that rejects for large values of A , is uniformly most powerful among tests based on A,. The likelihood ratio test rejects H, for small values of
If p = 1, then A = (1 + A,)-' and rejecting for small values of A is equivalent to rejecting for large values of A,. When r = 1 , then
so again A
=
(1
+ A,)-'.
346
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODELS
When t > 1, the situation is not so simple. In terms of the eigenvalues A,, . . . , A,, the likelihood ratio criterion rejects H, for small values of
However, there are no compelling reasons to believe that other tests based on A,, . . . , A, would not be reasonable. Before discussing possible alternatives to the likelihood ratio test, it is helpful to write the maximal invariant statistic in terms of the original variables that led to the canonical MANOVA problem. In the original MANOVA problem, we had an observation vector X E C p , , such that c ( X ) = N ( z ~ I,, , 8 2)
and the problem was to test K/3
=
0. We know that
and
are the maximum likelihood estimators of fi and 2 . Proposition 9.6. Let t = rnin{p, r). Suppose the original MANOVA problem is reduced to a canonical MANOVA problem. Then a maximal invariant in the canonical problem expressed in terms of the orignal variables is the vector (A,,. . . , A,)', A, 2 . . . >, A,, of the t largest eigenvalues of
Proof. The transformations that reduced the original problem to canonical form led to the three matrices Y,, Y2, and Y3 where Y, is r x p, Y2 is (k - r ) x p, and Y, is ( n - k) x p. Expressing Y, and Y3 in terms of X, Z, and K , it is not too difficult (but certainly tedious) to show that
By Proposition 9.2, the vector (A,, . . . , A,)' of the t largest eigenvalues of
Y{Y,(Y;Y3)-' is a maximal invariant. Thus the vector of the t largest eigenvalues of V is maximal invariant. In terms of X, Z, and K, the likelihood ratio test rejects the null hypothesis if
is too small. Also, the distribution of A under Ho is given in Proposition 9.1 as U(n - k, r, p). The distribution of A when KB == 0 is quite complicated when t > 1 except in the case when p has rank one. In this case, the distribution of A is given in Proposition 8.16. We now turn to the question of possible alternatives to the likelihood ratio test. For notational convenience, the canonical form of the MANOVA problem is treated. However, the reader can express statistics in terms of the original variables by applying Proposition 9.6. Since our interest is in invariant tests, consider Y, and Y,, which are independent, and satisfy
The random vector Y2 need not be considered as invariant tests do not involve Y2. Intuitively, the null hypothesis H o : B, = 0 should be rejected, on the basis of an invariant test, if the nonzero eigenvalues A, > . . . 2 A, of Y[Y,(Y;Y,)-' are too large in some sense. Since C(Y,) = N(B,, I, @ Z),
Also, it is not difficult to verify that (see the problems at the end of this chapter)
when m - p - 1 > 0. Since Y, and Y3 are independent, &Y;Y,(Y;Y~)-' =
r I + m - p - l p m-p-1
B;B,Z-I.
Therefore, the further B, is away from zero, the larger we expect the
348
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODELS
eigenvalues of B ; B , Z - ' to be, and hence the larger we expect the eigenvalues of YiY,(Y;Y,)-' to be. In particular,
'
and tr B ; B , Z - is just the sum of the eigenvalues of B; B I Z - - ' .The test that rejects for large values of the statistic
is called the Lawley-Hotelling trace test and is one possible alternative to the likelihood ratio test. Also, the test that rejects for large values of
was introduced by Pillai as a competitor to the likelihood ratio test. A thlrd competitor is based on the following considerations. The null hypothesis Ho : B , = 0 is equivalent to the intersection over u E Rr, llull = 1 , of the null hypotheses H,: ulB, = 0. Combining Propositions 9.5 and 9.6, it follows that the test that accepts H, iff
is a uniformly most powerful test within the class of tests that are invariant under the group of transformations preserving H,. Here, c is a constant. Under H,,
so it seems reasonable to require that c not depend on u. Since H, is equivalent to n{H,lllull = 1, u E Rr>,Ho should be accepted iff all the H, are accepted-that is, H, should be accepted iff sup
U'Y,(Y;Y,)-'Y;~
g c.
Ilull= 1
However, this supremum is just the largest eigenvalue of Y,(Y;Y,)-'y;, which is A,. Thus the proposed test is to accept Ho iff A, g c or equivalently,
to reject H, for large values of A,. Thls test is called Roy's maximum root test. Unfortunately, there is very little known about the comparative behavior of the tests described above. A few numerical studies have been done for small values of r, m, and p but no single test stands out as dominating the others over a substantial portion of the set of alternatives. Since very accurate approximations are available for the null distribution of the likelihood ratio test, this test is easier to apply than the above competitors. Further, there is an interesting decomposition of the test statistic
whlch has some applications in practice. Let S = Y;Y, so C ( S ) = W ( Z ,p, m ) and let Xi,. . . , X: denote the rows of Y,. Under H,: B, = 0, XI,.. . , X,are independent and C( X,) = N(0, 2 ) . Further,
where
and
Proposition 8.15 gives the distribution of A; under H, and shows that A , , . . . , A, are independent under H,. Let Pi,. . . , &! denote the rows of B, and consider the r testing problems given by the null hypotheses
and the alternatives
for i
=
1,.
. . , r. Obviously, H,
=
n ;Hi and the lkelihood ratio test for
350
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODELS
rejects Hi iff A iis too small. Thus the likelihood ratio test for Ho can be thought of as one possible way of combining the r independent test statistics into an overall test of n ;Hi. testing Hi against
9.2. MANOVA PROBLEMS WITH BLOCK DIAGONAL COVARIANCE STRUCTURE The parameter set of the MANOVA model considered in the previous section consisted of a subspace M = {pip = ZB, B E tp, ,) t., and a set of covariance matrices y = {I, 8
212 E Spi).
It was assumed that the matrix 2 was completely unknown. In this section, we consider estimation and testing problems when certain things are known about Z. For example, if 2 = 0 2 Z p with 0 2 unknown and positive, then we have the linear model discussed in Section 3.1. In this case, the linear model with parameter set { M , y) is just a univariate linear model in the sense that I, 8 2 = 021n8 Zp and In @ I, is the identity linear transformation on the .. This model is just the linear model of Section 9.1 when vector space p = 1 and np plays the role of n. Of course, when 2 = a2ZP,the subspace M need not have the structure above in order for Proposition 4.6 to hold. In what follows, we consider another assumption concerning Z and treat certain estimation and testing problems. For the models treated, it is shown that these models are actually "products" of the MANOVA models discussed in Section 9.1. Suppose Y E t p ,is, a random vector with GY E M where
ep,
+
and Z is a known n x k matrix of rank k. Write p = p, p,, pi > 1, for i = 1.2. The covariance of Y is assumed to be an element of
is Thus the rows of Y, say Y;,.. . , YL, are uncorrelated. Further, if partitioned into E RP1 and K .I.:E RPz, = then X, and K are also uncorrelated, since
(x, r),
PROPOSITION
351
9.7
Thus the interpretation of the assumed structure of yo is that the rows of Y are uncorrelated and within each row, the first p, coordinates are uncorrelated with the last p, coordinates. This suggests that we decompose Y into ,and W E ,where XE
epl,
ep2,
Obviously, the rows of X(W) are Xi,. . . , XL(W;,. . . , W,'). Also, partition B E Cp, into B, E C p l , and B, E ,. It is clear that
,
ep2,
and
Further, Cov(X) E Y,
'{ I , '3 ~
Sp:)
l l l ~ El l
and COV(W)E y2
{ I , '3 Z2,1Z2,
E
5.);
Since X and W are uncorrelated, if Y has a normal distribution, then X and W are independent and normal and we have a MANOVA model of Section 9.1 for both X and W (with parameter sets (MI, y,) and (M,, y,)). In summary, when Y has a normal distribution, Y can be partitioned into X and W, which are independent. Therefore, the density of Y is
where f , f,, and f, are normal densities on the appropriate spaces. Since we have MANOVA models for both X and W, the maximum likelihood estimators of p,, p ,, 2,,, and E2, follow from the result of the first section. For testing the null hypothesis Ho : KB = 0, K: r X k of rank r, a similar decomposition occurs. As B = (BIB2),Ho: KB = 0 is equivalent to the two null hypotheses H; : KB, = 0 and H: : KB, = 0. Proposition 9.7. Assume that n - k 2 max{ p,, p,). For testing Ho: KB 0, the likelihood ratio test rejects for small values of A = A, A, where
=
352
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODELS
and
Here, Q,
Pro06
=
I - P, where P, = Z(Z'Z)-'z'
and
We need to calculate
where bXis the set of (p, Z) such that p earlier.
E
M and I,, 8 Z
E
yo. As noted
Also, (p, Z) E Hoiff (p,, Z,,) E Hd and (p,, Z2,) E H i . Further, (p, Z) E iff (pi, xii) E Eml where ?Xi is the set of (pi, Xii) such that pl E Ml and I,, 8 Zii E y, for i = 1,2. From these remarks, it follows that
where
and
However, *,(X) is simply the likelihood ratio statistic for testing Hd in the
MANOVA model for X. The results of Propositions 9.6 and 9.1 show that \k,(X) = (A,)"12. Similarly, $(W) = (A2)"12. Thus \k(Y) = (A,A2)n/2 so the test that rejects for small values of A = A,A, is equivalent to the likelihood ratio test. Since X and W are independent, the statistics A, and A, are independent. The distribution of h i when Hi is true is U(n - pi, r, pi) for i = 1,2. Therefore, when Ho is true, R I A 2is distributed as a product of independent beta random variables and the results in Anderson (1958) provide an approximation to the null distribution of A, A ,. We now turn to a discussion of the invariance aspects of testing H, : KB = 0 on the basis of the observation vector Y. The argument used to reduce the MANOVA model of Section 9.1 to canonical form is valid here, and this leads to a group of transformations GI, whch preserve the testing problem H: for the MANOVA model for X. Similarly, there is a group G, that preserves the testing problem H; for the MANOVA model for W. Since Y = (X, W), we can define the product group GI x G, acting on Y by
and the testing problem Ho is clearly invariant under this group action. A maximal invariant is derived as follows. Let ti = rnin{r, p,) for i = 1,2, and in the notation of Proposition 9.7, let
and
Letv, 2 . . . 2 vt, be the t, largest eigenvalues of V, and 8, 2 the t, largest eigenvalues of V,.
...
2 Ot2 be
Proposition 9.8. A maximal invariant under the action of GI x G, on Y is the (t, t,)-dimensional vector (v,,. . . , qt,; e l , . . . , Bf2)= h(Y) = (h,(X); h,(W)). Here, h,(X) = (771,. - - vt,) and h,(W) = (81,. . . , Ot2).
+
9
ProoJ: By Proposition 9.6, h,(X)(h,(W)) is maximal invariant under the action of G,(G,) on X( W). Thus h is G-invariant. If h(Y,) = h (Y,) where Y, = (XI, W,) and Y, = (X,, W,), then h,(X,) = h,(X2) and h,(W,) = h2(W2).Thus there exists g, E G,(g2 E G,) such that g,X, = X2(g2W, =
354
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODELS
W2). Therefore,
so h is maximal invariant. As a function of h(Y), the likelihood ratio test rejects Ho if
+
is too small. Since t, t 2 > 1, the maximal invariant h(Y) is always of dimension greater than one. Thus the situation described in Proposition 9.5 cannot arise in the present context. In no case will there exist a uniformly most powerful invariant test of H o : KB = 0 even if K has rank 1. This completes our discussion of the present linear model. It should be clear by now that the results described above can be easily extended to the case when Z has the form
where the off-diagonal blocks of Z are zero. Here Z E ;5 and Zii E '5; Zsp, = p. In this case, the set of covariances for Y E C p , , is the set $, which consists of all I, @ Z where Z has the above form and each Z,, is unknown. The mean space for Y is M as before. For this model, Y can be decomposed into s independent pieces and we have a MANOVA model in Cp8, for each piece. Also, the matrix B(&Y = ZB) decomposes into B,,. . . , B,, B, E Cp,, and a null hypothesis Ho : KB = 0 is equivalent to the intersection of the s null hypotheses H i : KB, = 0, i = 1,. . . , s. The likelihood ratio test of Ho is now based on a product of s independent statistics, say A = n;Ai, where C(Ai) = U(n - pi, r, p,) and thus A is distributed as a product of independent beta random variables when Ho is true. Further, invariance considerations lead to an s-fold product group that preserves the testing problem and a maximal invariant is of dimension t , + . . t, where ti = min{r, pi), i = 1,. . . , s. The details of all this, whlch are mainly notational, are left to the reader. In this section, it has been shown that the linear model with a block diagonal covariance matrix can be decomposed into independent compo-
,
+
PROPOSITION
9.9
355
nent models, each of which is a MANOVA model of the type treated in Section 9.1. This decomposition technique also appears in the next two sections in which we treat linear models with different types of covariance structure. 9.3.
INTRACLASS COVARIANCE STRUCTURE
In some instances, it is natural to assume that the covariance matrix of a random vector possesses certain symmetry properties that are suggested by the sampling situation. For example, if n measurements are taken under the same experimental conditions, it may be reasonable to suppose that the order in which the observations are taken is immaterial. In other words, if XI,.. . , Xp denote the observations and X' = (X,, . . . , Xp) is the observation vector, then X and any permutation of X have the same distribution. Symbolically, this means that C(X) = C(gX) where g is a permutation matrix. If Z = Cov(X) exists, this implies that Z = gZg' for g E qPwhere qP denotes the group of p X p permutation matrices. Our first task is to characterize those covariance matrices that are invariant under qP-that is, those covariance matrices that satisfy Z = gZg' for all g E qP.Let e E Rp be the vector of ones and set Pe = (l/p)ee' so Pe is the orthogonal projection onto span{e). Also, let Qe = I, - P,. Proposition 9.9. are equivalent:
Let Z be a positive definite p x p matrix. The following
(i) Z = gZg' for g E qP. (ii) Z = a P e + / 3 Q e f o r a > O a n d / 3 > 0 . (iii) Z = U ~ A (where ~ ) u2 > 0, - l / ( p - 1) < p < 1, and A(p) is a p x p matrix with elements a,, = 1, i = 1,. . . , p, and aij(p) = p for i *j. Pro06
Since A ( ~=) (1 - p)Ip + pee'
=
(1 - p)IP + P P P ~
the equivalence of (ii) and (iii) follows by talung a = a2(1 + ( p - 1)p) and = u2(1 - p). Since ge = e for g E YP, gPe = Peg. Thus if (ii) holds, then
/3
356
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODELS
so (i) holds. To show (i) implies (ii), let X E RP be a random vector with Cov(X) = 2. Then (i) implies that Cov(X) = Cov(gX) for g E qP.Therefore,
and COV(&,X,) Let y
=
=
COV(&,X,,);
i
*j , if *J'.
var(X,) and 6 = cov(X,, X,). Then Z
=
+
See'
+ ( y - S)Ip = pSP, + ( y - S)(P, + Q,)
where a = y ( p - 1)6 and fl = y - 6. The positivity of a and from the assumption that Z is positive definite.
fl follows
A covariance matrix Z that satisfies one of the conditions of Proposition 9.9 is called an intraclass covariance matrix and is said to have intraclass covariance structure. Now that intraclass covariance matrices have been described, suppose that X E Cp, ,has a normal distribution with p = & X E M and Cov(X) E y where M is a linear subspace of Cp, ,and
The covariance structure assumed for X means that the rows of X are independent and each row of X has the same intraclass covariance structure. In terms of invariance, if r 8 g E On @ qP, and In 8 Z E y, it is clear that
since
Conversely, if T is a positive definite linear transformation on C,,, satisfies
that
( r s g ) ~ ( I ' s g ) ' = Tf o r ~ @ g ~ 8 , @ ~ ~ , it is not difficult to show that T E y. The proof of this is left to the reader.
PROPOSITION
357
9.10
Since the identity linear transformation is an element of y, in order that the least-squares estimator of p E M be the maximum likelihood estimator, it is sufficient that
Our next task is to describe a class of linear subspaces M that satisfy the above condition.
Proposition 9.10. Let C be an r x p real matrix of rank r with rows c;,. . . , ci. If u,,. . . , u, is any basis for N = span{c,,. . . , c,) and U is an r X p matrix with rows u;, . . . , u:, then there exists an r x r nonsingular matrix A such that A U = C. Proof. Since u,, . . . , u, is a basis for N,
for some real numbers a,,. Setting A = {a,,), AU = C follows. As the basis {u,,. . . , u,) is mapped onto the basis {c,,. . . , c,) by the linear transformation defined by A, the matrix A is nonsingular. Given positive integers n and p, let k and r be positive integers that ,by satisfy k < n and r < p. Define a subspace M G
eP,
where Z, is n x k of rank k, Z, is r x p of rank r , and assume that e E RP is an element of the subspace spanned by rows of Z,, say e E N = span{z,,. . . , z,) and the rows of Z, are z;,. . . , z:. At this point, it is convenient to relabel thngs a bit. Let u, = e/ \l;T, u,,. . . , u,, be an orthonormal basis for N and let U : r X p have rows u;, . . . , ui. By Proposition 9.10, Z, = AU for some r X r nonsingular matrix A so
Summarizing, X E eP,,is assumed to have a normal distribution with GX E M and Cov(X) E y where M and y are given above. To decompose this model for X into the product of two simple univariate linear models, let r E tlp have u;,. . . , ui as its first r rows. With Y = (I, @ r ) X ,
358
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODELS
and COV(Y)= (I, 8 r ) c o v ( x ) ( I , 8 r ) '
However,
urp= ( I ~0) E eP+ and r Q e r f = I, - E,E; where E; = (1,0,. . . , 0). Therefore, the matrix D = arPerf+ p r Q e r ' is diagonal with diagonal elements d l , . . . , dp given by d l = a and d, = . . = d, = p. Let Y , , .. . , Y, be the columns of Y and let b,,. . . , br be the columns of B. Then it is clear that Y,, . . . , Y, are independent,
C(y)
=
N(Zlbi, PI,),
i
=
2,..., r,
and
To piece things back together, set m by V' = (Y;, Y;,.. . , q).Then
where S E R('-')P, 6'
=
=
n ( p - 1) and let V E Rm be given
(b;,. . . , b:), and
rn x ( ( r -
PROPOSITION
9.10
359
Thus X has been decomposed into the two independent random vectors Y, and V and the linear models for Y, and V are given by the parameter sets (MI, Y,) and (M2, Y,) where
and
Both of these linear models are univariate in the sense that y, and y2 consist of a constant times an identity matrix. It is obvious that the general theory developed in Section 9.1 for the MANOVA model applies directly to the above two linear models individually. In particular, the maximum likelihood estimators of b,, a, 6, and P can simply be written down. Also, linear hypotheses about b, or 6 can be tested separately, and uniformly most powerful invariant tests will exist for such testing problems when the two linear models are treated separately. However, an interesting phenomenon occurs when we test a joint hypothesis about b, and 6. For example, suppose the null hypothesis Ho is that b, = 0 and 6 = 0 and the alternative is that b, * 0 or 6 * 0. This null hypothesis is equivalent to the hypothesis that B = 0 in the original model for X. By simply writing down the densities of Y, and V and substituting in the maximum likelihood estimators of the parameters, the likelihood ratio test for Ho rejects if
is too small. Here, 11 . (1 denotes the standard norm on the coordinate Euclidean space under consideration. Let
and
360
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODELS
so W, and W, are independent and each has a beta distribution. When p > 3, then m = n ( p - 1 ) > n and it follows that A*/" = w,w;"/"is not in general distributed as a product of independent beta random variables. This is in contrast to the situation treated in Section 9.2 of this chapter. We end this section with a brief description of what might be called multivariate intraclass covariance matrices. If X E RP and Cov(X) = 2 , then 2 is an intraclass covariance matrix iff Cov(gX) = Cov(X) for all g E qp. When the random vector X is replaced by the random matrix Y: p x q, then the expression gY = (g €3 Iq)Y still makes sense for g E qP7 and it is natural to seek a characterization of Cov(Y) when Cov(Y) = Cov((g 8 I,)Y) for all g E 9,. For g E qp, the linear transformation g €3 I, just permutes the rows of Y and, to characterize T = Cov(Y), we must describe how permutations of the rows of Y affect T. The condition that Cov(Y) = Cov((g 8 I,)Y) is equivalent to the condition
For A and B in , :S
consider
Then To is a self-adjoint and positive definite linear transformation on C,,, to C,,,. It is readily verified that
That To is a possible generalization of an intraclass covariance matrix is fairly clear-the positive scalars a and P of Proposition 9.9 have become the positive definite matrices A and B. The following result shows that if T is (qP €3 Iq)-invariant-that is, if T satisfies T = ( g €3 Iq)T(g €3 I,)'-then T must be a To for some positive definite A and B. Proposition 9.11. If T is positive definite and (qP €3 I,)-invariant, then there exist q x q positive definite matrices A and B such that
Proot
The proof of this is left to the reader.
Unfortunately, space limitations prevent a detailed description of linear models that have covariances of the form I,,8 T where T is given in
361
SYMMETRY MODELS: AN EXAMPLE
Proposition 9.11. However, the analysis of these models proceeds along the lines of that given for intraclass covariance models and, as usual, these models can be decomposed into independent pieces, each of which is a MANOVA model.
9.4. SYMMETRY MODELS: AN EXAMPLE The covariance structures studied thus far in this chapter are special cases of a class of covariance models called symmetry models. To describe these, let (V, (., .)) be an inner product space and let G be a compact subgroup of O(V). Define the class of positive definite transformations y, by y, = {ZIZ E C(V, V), Z > 0, gZgf = Z
for all g
E
G).
Thus y, is the set of positive definite covariances that are invariant under G in the sense that Z = gZgf for g E G. To justify the term symmetry model for y,, first observe that the notion of "symmetry" is most often expressed in terms of a group acting on a set. Further, if X is a random vector in V with Cov(X) = 2 , then Cov(gX) = gZgf.Thus the condition that Z = gZgf is precisely the condition that X and gX have the same covariance-hence, the term symmetry model. Most of the covariance sets considered in this book have been symmetry models for a particular choice of (V, .)) and G. For example, if G = O(V), then (a,
as Proposition 2.13 shows. Hence 8(V) generates the weakly spherical symmetry model. The result of Proposition 2.19 establishes that when (V,(., = (C,,., ( . , .)I and
then
Of course, this symmetry model has occurred throughout this book. Using techniques similar to that in Proposition 2.19, the covariance models considered in Section 9.2 are easily shown to be symmetry models for an appropriate group. Moreover, Propositions 9.9 and 9.11 describe sets of
362
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODELS
covariances (the intraclass covariances and their multivariate extensions) in exactly the manner in which the set y, was defined. Thus symmetry models are not unfamiliar objects. Now, given a closed group G G 6 ( V ) ,how can we explicitly describe the model ?y, Unfortunately, there is no one method or approach that is appropriate for all groups G. For example, the results of Proposition 2.19 and Proposition 9.9 were proved by quite different means. However, there is a general structure theory known for the models y, (see Andersson, 1975), but we do not discuss that here. The general theory tells us what y, should look like, but does not tell us how to derive the particular form of y,. The remainder of this section is devoted to an example where the methods are a bit different from those encountered thus far. To motivate the considerations below, consider observations XI,. . . , Xp, which are taken at p equally spaced points on a circle and are numbered sequentially around the circle. For example, the observations might be temperatures at a fixed cross section on a cylindrical rod when a heat source is present at the center of the rod. Impurities in the rod and the interaction of adjacent measuring devices may make an exchangeability assumption concerning the joint distribution of X,, . . . , X, unreasonable. However, it may be quite reasonable to assume that the covariance between Xj and X, depends only on how far apart X, and X, are on the circle-that is, C O V ( X,,,) ~ , does not depend on j, j = 1,. . . , p, where XP+, = XI; cov(X,, X,,,) does not depend on j, j = 1,. . . , p, where Xp = X2, and so on. Assuming that cov(X,, X,) does not depend on j, these assumptions can be succinctly expressed as follows. Let X E RP have coordinates XI,. . . , X, and let C be a p x p matrix with
+,
and the remaining elements of C zero. A bit of reflection will convince the reader that the conditions assumed on the covariances is equivalent to the condition that Cov(CX) = Cov(X). The matrix C is called a cyclic permutation matrix since, if x E RP has coordinates x,, . . . , x,, then Cx has coordinates x,, x,,. . . , x,, x,. In the case that p = 5, a direct calculation shows that I: = Cov(X) = Cov(CX) = CZC' iff I: has the form
363
SYMMETRY MODELS: AN EXAMPLE
where a2 > 0. The conditions on p , and p, so that Z is positive definite are given later. Covariances that satisfy the condition Z = CZC' are called cyclic covariances. Some further motivation for the study of cyclic covariances can be found in Olkin and Press (1969). To begin the formal treatment of cyclic covariances, first observe that CP = Ip SO the group generated by C is
Since C generates Go, it is clear that CZC' = Z iff gZg' = Z for all g E Go. In what follows, only the case of p = 29 1, q >, 1, is treated. When p is even, slightly different expressions are obtained but the analyses are similar. Rather than characterize the covariance set yGo directly, it is useful and instructive to first describe the set
+
aP
Recall that is the complex vector space of p-dimensional coordinate is the set of all p x p complex matrices. Consider complex vectors and the complex number r = exp[2mi/p] and define complex column vectors wk E QP with jth coordinate given by
for k
=
ep
1,. . . , p. A direct calculation shows that
so w,, . . . , wp is an orthonormal basis for
For future reference note that
QP.
where p = 29 + 1, q >, 1. Here, the bar over wk denotes complex conjugate, and e is the vector of ones in CP.The basic relation
shows that
364
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODELS
' ep
As usual, * denotes conjugate transpose. Obviously, 1, r, . . . , rP- are eigenvalues of C with corresponding eigenvectors w,, . . . , wp. Let Do E be have columns diagonal with dkk= rk-I, k = I,.. . , p and let U E w,, . . . , w,. The relation (9.1) can be written C = UDoU*. Since UU* = I,, U is a unitary complex matrix.
eP
Proposition 9.12. The set
&Go
consists of those B
E
C?, that have the form
where p,, . . . , Pp are arbitrary complex numbers. Proof. If B has the form (9.2), the identity BC (9.1). Conversely, suppose BC = CB. Then
=
CB follows easily from
since U*U = I,. In other words, U*BU commutes with Do. But Do is a diagonal matrix with distinct nonzero diagonal elements. This implies that U*BU must be diagonal, say D, with diagonal elements PI,. . . , P,. Thus U*BU = D so B = UDU*. Then B has the form (9.2). The next step is to identify those elements of symmetric. Consider B E @Go so
&Go
that are real and
Now, suppose that B is real and symmetric. Then the eigenvalues of B, namely PI,. . . , Pp, are real. Since P I , . . . , PP are real and B is real, we have
The relationship Wk
=
wp-
,+,, k = 2,. . . , q + 1, implies that
Pk =
Pp-k+2,
PROPOSITION
k
=
2, ..., q
9.13
+ 1, so
But any B given by (9.3) is real, symmetric, and commutes with C and conversely. We now show that (9.3) yields a spectral form for the real symmetric elements of (EGO.Write w, = x, + iy, with x,, y, E RP, and define u, E RP by
The two identities
and the reality of w,yield the identities
Thus u,, . . . , up is an orthonormal basis for RP. Hence any B of the form (9.3) can be written
and this is a spectral form for B. Such a B is positive definite iff k = 1,. . . , q 1. This discussion yields the following.
+
P,
> 0 for
Proposition 9.13. The symmetry model yGo consists of those covariances 2 that have the form
wherea, > 0 f o r k
=
1, ..., q
+ 1.
366 Let
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODELS
r
r
have rows u;, . . . , ui. Then
matrix with elements
is a p x p symmetric orthogonal
for j, k = 1,. . . , p. Further, any Z given by (9.4) will be diagonalized by -that is, r Z r is diagonal, say D, with diagonal elements
r
Since r simultaneously diagonalizes all the elements of yGo,7 I can sometimes be used to simplify the analysis of certain models with covariances in yGo. This is done in the following example. As an application of the foregoing analysis, suppose Y,,. . . , Yn are independent with Y, E RP, p = 29 1, and C(Y,) = N(p, Z), j = 1,. . . , n. It is assumed that Z is a cyclic covariance so Z E yGo. In what follows, we derive the likelihood ratio test for testing H,, the null hypothesis that the coordinates of p are all equal, versus H I , the alternative that p is completely unknown. As usual, form the matrix Y : n x p with rows Y;,j = 1,. . . , n, so
+
where p E RP and Z E yGo.Consider the new random vector Z = ( I n @ r ) Y where r is defined in the previous paragraph. Setting v = rp, we have C(Z) where D
=
=
N(evf, I,,8 D )
D r . As noted earlier, D is diagonal with diagonal elements
Since Z was assumed to be a completely unknown element of yGo,the diagonal elements of D are unknown parameters subject only to the restriction that ot, > 0, j = 1,. . . , q + 1. In terms of v = r p , the null hypothesis is H,, : v, = . = vp = 0. Because of the structure of D, it is convenient to relabel things once more. Denote the columns of Z by Z,, . . . , Zp and consider W,, . . . , W,, defined by
,
Thus W,
E
Rn and
W, E C2,.
for j
=
2,. . . , q
+ 1. Define vectors 5, E R'
Now, it is clear that W , ,. . . , W,,
, are independent and
Further, the null hypothesis is Ho : 6, = 0,j = 2,. . . , q + 1 , and the alternative is that tJt 0 for somej = 2,. . . , q 1. With the model written in t h s form, a derivation of the likelihood ratio test is routine. Let Pe = ee'/n and let 11 . 11 denote the usual norm on C,, Then the likelihood ratio test rejects Ho for small values of
+
..
Of course, the likelihood ratio test of H$j) : 6, rejects for small values of
=
0 versus H f j ) : 6,
*0
,
The random variables A,, . . . , A,+ are independent, and under HA]),
C(A,)
=
9 ( n - 1,l).
Thus under Ho, A is distributed as a product of the independent beta random variables, each with parameters n - 1 and 1. We end this section with a discussion that leads to a new type of structured covariance-namely, the complex covariance structure that is discussed more fully in the next section. This covariance structure arises when we search for an analog of Proposition 9.1 1 for the cyclic group Go. To keep things simple, assume p = 3 (i.e., q = 1 ) so Go has three elements and is a subgroup of the permutation group g3, which has six elements. Since p = 3, Propositions 9.9 and 9.13 yield that yq, = yGo and these symmetry models consist of those covariances of the form
where Pe = )eef and Q e = I, - P,.
368
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODELS
Now, consider the two groups T3 7, Zr and Go @ I, acting on C,,, by
Proposition 9.1 1 states that a covariance T on C,,, is 9, @ I, invariant iff
for some r X r positive definite A and B. We now claim that for r > 1 , there are covariances on C,,, that cannot be written in the form ( 9 . 9 , but that are Go @ I, invariant. To establish the above claim, recall that the vectors u,, u,, and u3 defined earlier are an orthonormal basis for R3 and
These vectors were defined from the vectors w, = x , u, = x , + y,, k = 1,2,3. Define the matrix J by
+ iy,,
k
=
1,2,3, by
By Proposition 9.12, J commutes with C. Consider vectors v , and v , given by
so {v,, v , ) is an orthonormal basis for span {u,, u,). Since w3 = 6, we have u3 = x , - y,, which implies that 0, = a x , and v3 = This readily implies that
ay2.
so J is skew-symmetric, nonzero, and Ju, transformation To on C,,, to C,,, given by
=
0. Now, consider the linear
where A and B are r X r and positive definite and F is skew-symmetric. It is now a routine matter to show that ( C @ Zr)To = To(C @ I,) since CP, = PeC, CQ, = QeC, and JC = CJ. Thus To commutes with each element of Go @ I, and To is symmetric as both J and F a r e skew-symmetric. We now make two claims: first, that a nonzero F exists such that To is positive definite, and
+
second, that such a To cannot be written in the form (9.5). Since P, @ A Q , @ B is positive definite, it follows that for all skew-symmetric F ' s that are sufficiently small,
is positive definite. Thus there exists a nonzero skew-symmetric F so that To is positive definite. To establish the second claim, we have the following. Proposition 9.14.
Suppose that
P,@A,+Q,@B,+J@F,=P,@A,+Q,@B,+J@F,
where Aj and B,, j = 1,2, are symmetric and F,, j = 1,2, is skew-symmetric. This implies that A , = A,, B , = B,, and F, = F,. ProoJ: Recall that { u , , v , , v , ) is an orthonormal basis for R ~The . relation Q,u, = Ju, = 0 implies that for x E Rr
for j = 1,2 so u I O ( A l x )= u , O ( A , x ) . With ( - , . ) denoting the natural inner product on Cr, we have
,,
for all x E Rr. The symmetry of A , and A , yield A , Qev2 = v 2 , and J v , = - v , , we have
=
for all x
E
A,. Since Pev2 = 0,
V,U(B,X) - v30(F2x)
Rr. Thus
which implies that B , -yrF,x
for all x, y
=
E
=
=
B,. Further,
(u,Oy, V,O(B,X) - v,OF,x)
Rr. Thus F,
=
F,.
=
-ytF2x
370
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODELS
In summary, we have produced a covariance
that is (Go @ I,)-invariant but is not (q3 @ I,)-invariant when r > 1. Of course, when r = 1, the two symmetry models y,3 and yGoare the same. At this point, it is instructive to write out the matrix of To in a special ordered basis for C,,,.Let E , , . . . , E , be the standard basis for Rr so
is an orthonormal basis for (t,,,, ( . , .)). A straightforward calculation shows that the matrix of To in this basis is
Since [To]is symmetric and positive definite, the 2 r x 2 r matrix
has these properties also. In other words, for each positive definite B, there is a nonzero skew-symmetric F (in fact, there exist infinitely many such skew-symmetric F 's) such that Z is positive definite. This special type of structured covariance has not arisen heretofore. However, it arises again in a very natural way in the next section where we discuss the complex normal distribution. It is not proved here, but the symmetry model of Go @ I, when p = 3 consists of all covariances of the form
where A and B are positive definite and F is skew-symmetric.
9.5. COMPLEX COVARIANCE STRUCTURES T h s section contains an introduction to complex covariance structures. One situation where this type of covariance structure arises was described at the end of the last section. To provide further motivation for the study of such models, we begin this section with a brief discussion of the complex normal distribution. The complex normal distribution arises in a variety of contexts
371
COMPLEX COVARIANCE STRUCTURES
and it seems appropriate to include the definition and the elementary properties of this distribution. The notation introduced in Section 1.6 is used here. In particular, (C is the field of complex numbers, @ is the n-dimensional complex vector space of n-tuples (columns) of complex numbers, and is the set of all n x n complex matrices. For x, y E C n , the inner product between x and y is
en
n
(x, y)
' C xjyj = x*y. j= 1
where x* denotes the conjugate transpose of x. Each x E (Cn has the unique representation x = u + iv with u, v E Rn. Of course, u is the real part of x, v is the imaginary part of x, and i = is the imaginary unit. This representation of x defines a real vector space isomorphism between (Cn and R ~ "More . precisely, for x E (Cn,let
+
where x = u + iv. Then [ax + by] = a[x] b [ y ]for x, y E (Cn,a, b E R, and obviously, [ . ] is a one-to-one onto function. In particular, this shows that (Cn is a 2n-dimensional real vector space. If C E e n , then C = A iB where A and B are n X n real matrices. Thus for x = u iv E a n ,
+
Cx
=
(A
+
+ iB)(u + iv) = Au - Bv + i(Av + Bu)
Au - Bv [cx]=(AV+Bu)=
(AB
-B u A )(v).
This suggests that we let {C) be the (2n) X (2n) partitioned matrix given by
With this definition, [Cx] = {C)[x]. The whole point is that the matrix CE applied to x E Cn can be represented by applying the real matrix (C) to the real vector [x] E R2". A complex matrix C E is called Hermitian if C = C*. Writing C = A + iB with A and B both real, C is Hermitian iff
en
en
372
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODELS
which is equivalent to the two conditions
Thus C is Hermitian iff (C) is a symmetric real matrix. A Hermitian matrix C is positive definite if x*Cx > 0 for all x E (Cn, x t 0. However, for Hermitian C,
so C is positive definite iff (C) is a positive definite real matrix. Of course, a Hermitian matrix C is positive semidefinite if x*Cx > 0 for x E (Cn and C is positive semidefinite iff (C) is positive semidefinite. Now consider a random variable X with values in (C. Then X = U + iV where U and V are real random variables. It is clear that the mean value of X must be defined by
assuming &U and &V both exist. The variance of X, assuming it exists, is defined by
where the bar denotes complex conjugate. Since X is a complex random variable, the complex conjugate is necessary if we want the variance of X to be a nonnegative real number. In terms of U and V,
It also follows that
for a, b E (C. For two random variables XI and X2 in (C, define the covariance between XI and X2 (in that order) to be
assuming the expectations in question exist. With this definition it is clear that cov(Xl, XI) = var(Xl), cov(X2, XI) = cov( X, , X,), and
COMPLEX COVARIANCE STRUCTURES
Further. cov{alX,
+ b,, a2X2+ b,)
=
a,H2cov{Xl, X2)
a.
for a , , a,, b,, b2 E We now turn to the problem of defining a normal distribution on Basically, the procedure is the same as defining a normal distribution on Rn. Step one is to define a normal distribution with mean zero and variance one on $, then define an arbitrary normal distribution on $ by an affine transformation of the distribution defined in step one, and finally we say that Z E Cn has a complex normal distribution if (a, Z ) = a*Z has a normal distribution in $ for each a E $". However it is not entirely obvious how to carry out step one. Consider X E $ and let CN(0,l) denote the distribution, yet to be defined, called the complex normal distribution with mean zero and variance one. Writing X = U iV, we have
an.
+
so the distribution of X on $ determines the joint distribution of U and V on R~ and, conversely, as [ . ] is one-to-one and onto. If C(X) = $N(O, I), then the following two conditions should hold: (i) C(aX)= $N(O,l)fora E $withaH= 1. (ii) [XI has a bivariate normal distribution on R2. When aH = 1 and X has mean zero and variance one, then a x has mean zero and variance one so condition (i) simply says that a scalar multiple of a complex normal is again complex normal. Condition (ii) is the requirement that a normal distribution on $ be transformed into a normal distribution on R2 under the real linear mapping [-I. It can now be shown that conditions (i) and (ii) uniquely define the distribution of [XI and hence provide us with the definition of a $N(O, 1) distribution. Since &X = 0, we have &[XI = 0. Condition (i) implies that
However, writing a
=
a
+ ip,
374
INFERENCE FOR MEANS I N MULTIVARIATE LINEAR MODELS
where r is a 2 x 2 orthogonal matrix with determinant equal to one since aa = a2 P 2 = 1. Therefore,
+
for all such orthogonal matrices. Using this together with the fact that 1 = var(X) = var(U) var(V) implies that
+
Hence
so the real and imaginary parts of X are independent normals with mean zero and variance one half. Definition 9.1. A random variable X = U + iV E C has a complex normal distribution with mean zero and variance one, written C(X) = cN(0, l), if
With this definition, it is clear that when C(X) = CN(0, l), the density of X on with respect to two-dimensional Lebesgue measure on C is
a
a
Given p E CJ and a2, u > 0, a random variable XI E has a complex normal distribution with mean p and variance u2 if C(X,) = C(aX + p) where C(X) = aN(0,l). In such a case, we write C(Xl) = QN(p, u2). It is clear that XI = Ul iVl has a CN(p, a 2 ) distribution iff Ul and Vl are independent and normal with variance f u 2 and means &Ul = p &V, = p2, where p = p, ip2. AS in the real case, a basic result is the following.
+
+
,,
Proposition 9.15. Suppose XI,. . . , Xmare independent random variables in with C(X,.) = @N(pj,a;), j = 1,. . . , m. Then
PROPOSITION
375
9.15
Proof, This is proved by considering the real and imaginary parts of each X,. The details are left to the reader. Suppose Y is a random vector in 6 " with coordinates Y,, . . . , Y, and that var(Y,) < co for j = 1,. . . , n. Define a complex matrix H with elements hjk given by
+
-
Since h,, = hkj, H is a Hermitian matrix. For a, b shows that cov{a*Y, b*Y)
=
E
a n , a bit of algebra
a*Hb = ( a , ~ b ) .
As in the real case, H is the covariance matrix of Y and is denoted by Cov(Y) = H. Since a*Ha = var(a*Y) 2 0, H is positive semidefinite. If H = Cov(Y) and A E en,it is readily verified that Cov(AY) = AHA*. We now turn to the definition of a complex normal distribution on the n-dimensional complex vector space 6". Definition 9.2. A random vector X E 6 " has a complex normal distribution if, for each a E Qn, (a, X) = a*X has a complex normal distribution on 6 .
an
has a complex normal distribution and if A E en,it is clear that If X E AX also has a complex normal distribution since (a, AX) = (A*a, X). In order to describe all the complex normal distributions on Cn,we proceed as in the real case. Let XI,. . . , X, be independent with C(X,) = a'N(0, 1) on 6 and let X E anhave coordinates XI,.. . , X,. Since
Proposition 9.15 shows that C(a*X) = 6N(O, Zqa,). Thus X has a complex normal distribution. Further, &X = 0 and
so Cov(X) = I. For A E C?, and p complex normal distribution and
E
Cn, it follows that Y = AX
+ p has a
376
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODEL
Since every nonnegative definite Hermitian matrix can be written as AA* for some A E en, we have produced a complex normal distribution on @n with an arbitrary mean vector p € and an arbitrary nonnegative definite Hermitian covariance matrix. However, it still must be shown that, if X and 2 in Cn are complex normal with GX = 6 2 and Cov(X) = C O V ( ~ then ), C(X) = C ( 2 ) . The proof of this assertion is left to the reader. Given this with fact, it makes sense to speak of the complex normal distribution on mean vector p and covariance matrix H as this specifies a unique probability distribution. If X has such a distribution, the notation
an
an
+
is used. Writing X = U iV, it is useful to describe the joint distribution of U and V when C(X) = @N(p, H) on Cn. First, consider 2 = 0+ i p where C ( 2 ) = CN(p, I). Then the coordinates of 2 are independent and it follows that
+
where p = p, ip2. For a general nonnegative definite Hermitian matrix H, write H = AA* with A E (2,. Then
Since
and
where A
=
B
+
+ iC, it follows that
But H = Z i F where Z is positive semidefinite, F is skew-symmetric, and the real matrix
is positive semidefinite. Since H
=
AA*, {H) = {A){A)', and therefore,
In summary, we have the following result. Proposition 9.16. Suppose C(X) = CN( y, H ) and write X = U y, + ip2, and H = 2 iF.Then
+
+ iV, p =
Conversely, with U and V jointly distributed as above, set X = U + iV, y = p, + iy2, and H = 2 + iF. ThenC(X) = m ( p , H). The above proposition establishes a one-to-one correspondence between n-dimensional complex normal distributions, say O ( y , H), and 2n-dimensional real normal distributions with a special covariance structure given by
where H = Z + iF. Given a sample of independent complex normal random vectors, the above correspondence provides us with the option of either analyzing the sample in the complex domain or representing everything in the real domain and performing the analysis there. Of course, the advantage of the real domain analysis is that we have developed a large body of theory that can be applied to this problem. However, this advantage is a bit illusory. As it turns out, many results for the complex normal distribution are clumsy to prove and difficult to understand when expressed in the real domain. From the point of view of understanding, the proper approach is simply to develop a theory of the complex normal distribution that parallels the development already given for the real normal distribution. Because of space limitations, this theory is not given in detail. Rather, we provide a brief list of results for the complex normal with the hope that the reader can see the parallel development. The proofs of many of these results are minor modifications of the corresponding real results. Consider X E Cp such that C(X) = CN(y, H) where H is nonsingular. Then the density of X with respect to Lebesgue measure on CP is
378 When H
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODEL =
I, then
With this result and the spectral theorem for Hermitian matrices (see Halmos, 1958, Section 79), the distribution of quadratic forms, say X*AX for a Hermitian, can be described in terms of linear combinations of independent noncentral chi-square random variables. As in the real case, independence of jointly complex normal random vectors is equivalent to the absense of correlation. More precisely, if C ( X ) = QN(p, H ) and if A : q x p and B : r x p are complex matrices, then AX and BX are independent iff AHB* = 0. In particular, if X is partitioned as
and H is partitioned similarly as
where Hjk isp, X p k , then XI and X2 are independent iff H12= 0. When H2, is nonsingular, this implies that X I - H 1 , H ~ ' X ,and X2 are independent. This result yields the conditional distribution of X I given X,, namely,
,,.,
= H I , - H,,H;'H,, and p, = G X , , j = 1,2. where H The complex Wishart distribution arises in a natural way, just as the real Wishart distribution did.
Definition 9.3. A p x p random Hermitian matrix S has a complex Wishart distribution with parameters H , p , and n if
where XI,. . . , X,, E
Cp are independent with C(5)
=
CN(0, H ) .
PROPOSITION
9.16
In such a case, we write c(S)
=
C w ( H , P, n).
In t h s definition, p is the dimension, n is the degrees of freedom and H is a p x p nonnegative definite Hermitian matrix. It is clear that S is always nonnegative definite and, as in the real case, S is positive definite with probability one iff H is positive definite and n 2 p. Whenp = 1 and H = 1, it is clear that
Further, complex analogues of Proposition 8.8, 8.9, and 8.13 show that if C(S) = W(H, p, n) with n >, p and H positive definite, and if C(X) = N(0, H ) with X and S independent, then
We now turn to a brief discussion of one special case of the complex MANOVA problem. Suppose XI,.. . , X, E QP are independent with
and assume that H > 0-that is, H is positive definite. The joint density of XI,. . . , X, with respect to 2np-dimensional Lebesgue measure is
=
where
X = n-'ZT
[
~ - ~ p ~ H ~ - e x(X, p - p ) * ~ ( -q Y ) ] j=1
and tr denote the trace. Here, X is the np-dimensional
380
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODEL
vector in QnPconsisting of X I , X,, . . . , Xn. Setting
s=
c (x, x)(x, n
-
-
x)*,
j= 1
we have
It follows that ( x , S ) is a sufficient statistic for this parametric family and
fi = x i s the maximum likelihood estimator of p. Thus
A minor modification of the argument given in Example 7.10 shows that when S > 0 , p(X(fi,H ) is maximized uniquely, over all positive definite H, at H = n-IS. When n >, p + 1, then S is positive definite with probability one so in this case, the maximum likelihood estimator of H is H = n-IS. If p = 0, then
where
Obviously, p ( XIO, H) is maximized at H = n- '3. Thus the likelihood ratio test for testing p = 0 versus p * 0 rejects for small values of
As in the real case, X and S are independent,
and
ADDITIONAL EXAMPLES OF LINEAR MODELS
Setting Y
=
fix,
so the likelihood ratio test rejects for large values of Y*S-'Y = T2. Arguments paralleling those in the real case can be used to show that
where 6 = np*H-Ip is the noncentrality parameter in the F distribution. Further, the monotone likelihood ratio property of the F- distribution can be used to show that the likelihood ratio test is uniformly most powerful among tests that are invariant under the group of complex linear transformations that preserve the above testing problem. In the preceeding discussion, we have outlined one possible analysis of the one-sample problem for the complex normal distribution. A theory for the complex MANOVA problem similar to that given in Section 9.1 for the real MANOVA problem would require complex analogues of many results given in the first eight chapters of this book. Of course, it is possible to represent everything in terms of real random vectors. This representation consists of an n X 2 p random matrix Y E C,,, ,where As usual, Z is n x r of rank r and B : r X 2 p is a real matrix of unknown parameters. The distinguishing feature of the model is that 9 is assumed to have the form
where Z : p x p is positive definite and F: p x p is skew-symmetric. For reasons that should be obvious by now, 'k's of the above form are said to have complex covariance structure. This model can now be analyzed using the results developed for the real normal linear model. However, as stated earlier, certain results are clumsy to prove and more difficult to understand when expressed in the real domain rather than the complex domain. Although not at all obvious, these models are not equivalent to a product of real MANOVA models of the type discussed in Section 9.1.
9.6. ADDITIONAL EXAMPLES OF LINEAR MODELS The examples of this section have been chosen to illustrate how conditioning can sometimes be helpful in finding maximum likelihood estimators and
382
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODEL
also to further illustrate the use of invariance in analyzing linear models. The linear models considered now are not products of MANOVA models and the regression subspaces are not invariant under the covariance transformations of the model. Thus finding the maximum likelihood estimator of mean vector is not just a matter of computing the orthogonal projection onto the regression subspace. For the models below, we derive maximum likelihood estimators and likelihood ratio tests and then discuss the problem of finding a good invariant test. The first model we consider consists of a variation on the one-sample problem. Suppose XI,. . . , Xn are independent with C(4) = N(p, Z) where Xi E RP, i = 1,. . . , n. As usual, form the n x p matrix X whose rows are XIf,i = 1,. . . , n. Then C(X)
=
~ ( e p ' In , 8 Z)
where e E Rn is the vector of ones. When p and Z are unknown, the linear model for X is a MANOVA model and the results in Section 9.1 apply directly. To transform this model to canonical form, let r be an n X n orthogonal matrix with first row e'/ fi.Setting Y = r X and P = G p ' ,
where el is the first unit vector in R n and
where Yl
E
ep,l, Y2 E Cp, , and m
=
P
E
Cp, ,. Partition Y as
n - 1. Then
and For testing H,: p = 0, the results of Section 9.1 show that the test that rejects for large values of Y,(Y;Y,)-lY; (assuming m > p) is equivalent to the likelihood ratio test and this test is most powerful within the class of invariant tests. We now turn to a testing problem to whlch the MANOVA results do not apply. With the above discussion in mind, consider U E Cp, and Z E where U and Z are independent with
,
eP,,
ADDITIONAL EXAMPLES OF LINEAR MODELS
and C ( Z ) = N ( 0 , I, @ 2 ) . Here, fl E CP:, and E > 0 is a completely unknown p X p covariance matrix. Partition p into /3, and p, where
Consider the problem of testing the null hypothesis H,: P I = 0 versus H I : P I .+ 0 where p, and E are unknown. Under H,, the regression subspace of the random matrix
and the set of covariances is
It is easy to verify that Mo is not invariant under all the elements of y so the maximum likelihood estimator of /3, under Ho cannot be found by leastsquares (ignoring 2 ) .To calculate the likelihood ratio test for Ho versus HI, it is convenient to partition U and Z as U = ( U , ,U 2 ) ,
.
E
C
P
i
=
1,2
and then condition on U, and Z , . Since U and Z are independent, the joint distribution of U and Z is specified by the two conditional distributions, C (U2IU, ) and C ( Z ,IZ , ), together with the two marginal distributions, C ( U , ) and C ( Z , ) .Our results for the normal distribution show that these distributions are C(U2IUl) = N(P2 + ( U , - P1)El11~12, ~ 2 2 . 1 )
384
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODEL
where Z is partitioned as
withZ,, beingp, Xp,, i , j = 1,2. As usual, Z,,., = Z,, - Z2,Z,'Z12. By Proposition 5.8, the reparameterization defined by q,,= Z,,, \k,, = Zfi1ZI2, and \k,, = Z,,,, is one-to-one and onto. To calculate the likelihood ratio test for Ho versus HI, we need to find the maximum likelihood estimators under Ho and H I . Proposition 9.17. The likelihood ratio test of Ho : PI = 0 versus H I : PI t 0 rejects Ho if the statistic
is too large. Here, S
=
Z'Z and
where S,, is pi X p,. Proof: Let fl(UIIPl,q l l ) be the density of C(U,), let f2(U21Ul,PI, P2, PI,, \k,,) be the conditional density of C(U21Ul), let f,(Z,l\k,,) be the density of C(Z,), and let f,(Z,IZ,, 'PI,, *22) be the density of C(Z21Z,). Under H,, PI = 0 and the unique value of p2 that maximizes f2(U21UI, 0, P2, q12, *22) is
for \El, fixed. It is clear that
where the symbol iu means "is proportional to." We now maximize with respect to *,,. With P2 = b2, \kI2occurs only in the density of Z2 given Z,. Since C(Z21Z,)= N(Z,\E12,I, 8 it follows from our treatment of the MANOVA problem that
PROPOSITION
9.17
and f 4 ( ~ 2 1 ~ 1 , @ 1*22) 2,
Ci
1*221-~/~ex~[-4 tr~22.1*22'1.
Since PI = 0, it is now clear that 1 $11
=
[ziz,
1
+ u;ull = --[S,, m+l
+ u;ul]
and
Substituting these values into the product of the four densities shows that the maximum under H, is proportional to
Under the alternative H I , we again maximize the likelihood function by first noting that
a = U2 -
-
PI)*,,
maximizes the density of U2 given U,. Also,
With this choice of p2, PI occurs only in the density of U, so maximizes the density of Ul and
p, = U,
It now follows easily that the maximum likelihood estimators of \kI2,q , , , and \k,, are
386
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODEL
Substituting these into the product of the four densities shows that the
maximum under HI is proportional to
Hence the likelihood ratio test will reject Ho for small values of
Thus the likelihood ratio test rejects for large values of
and the proof is complete. We now want to show that the test derived above is a uniformly most powerful invariant test under a suitable group of affine transformations. Recall that U and Z are independent and
The problem is to test H o : P I = 0 where /3 = ( P I ,P2) with P, E C P , , , , i = 1,2. Consider the group G with elements g = ( r , A , (0, a)) where
and
where A,, is pi X p, and A,, is nonsingular for i g = ( T , A , ( ( ) ,a)) is
=
1,2. The action of
The group operation, defined so G acts on the left of the sample space, is
It is routine to verify that the testing problem is invariant under G. Further,
PROPOSITION
9.18
387
it is clear that the induced action of G on the parameter space is
(r,A, (0, a ) ) ( P , Z ) = (PA' + (0, a ) , AZA'). To characterize the invariant tests for the testing problem, a maximal invariant under the action of G on the sample space is needed. Proposition 9.18. is
In the notation of Proposition 9.17, a maximal invariant
ProoJ: As usual, the proof consists of showing that A = UIS,'U; is an orbit index. Since m 2 p, we deal with those Z's that have rank p, a set of and Z E CP,, of probability one. The first claim is that for a given U E rank p, there exists a g E G such that
eP,,
where
E; =
(1,0,. . . , 0) E
fp,
and
q,
,and V is a p x p upper triangular matrix so Write Z = q V where \k E S = Z'Z = V'V. Then consider
where ti E Opt, i
'
=
1,2, and note that A is of the form
since (Vf)- is lower triangular. The values of ti, i moment. With this choice of A,
=
1,2, are specified in a
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODEL 388 which is in %, ,for any choice of 5, E 8pz,i = 1,2. Hence there is a r E 0,
such that
TZA'
=
Z,.
Since V is upper triangular, write
with V i j being pi x pj. Then
As S = V'V and V E G:, it follows that S,' = V1'(V")' so the vector U,V" has squared length A = u,v"(v")'U; = U , S , 'u;.Thus there exists 5; E OPl such that u,v"[; where Ei
=
=
A'l2~;
,.
( 1 , 0 , .. . , 0 ) E CPI, Now choose a a
=
UA'
E
,
Cpz, to be
U,V125;- U 2 ~ 2 2 ( ;
+ (0,a ) =
The above choices for A, 5,, r, and a yield g
=
(I', A, (0, a ) ) , which satisfies
and this establishes the claim. To show that
is maximal invariant, first notice that A is invariant. Further, if
389
PROPOSITION 9.18
both yield the same value of A, then there exists gi E G such that
Therefore,
and A is maximal invariant. To show that a uniformly most powerful G-invariant test exists, the distribution of A = U,S;'U; is needed. However,
and U, and S , , are independent. From Proposition 8.14, we see that
where 8 = P I X ; 'Pi and the null hypothesis is H, : 8 = 0. Since the noncentral F distribution has a monotone likelihood ratio, the test that rejects for large values of A is uniformly most powerful within the class of tests based on A. Since all G-invariant tests are functions of A, we conclude that the likelihood ratio test is uniformly most powerful invariant. The final problem to be considered in this chapter is a variation of the problem just solved. Again, the testing problem of interest is Ho : PI = 0 versus H I : P I * 0, but it is assumed that the value of j3, is known to be zero under both Ho and H,. Thus our model for U and Z is that U and Z are independent with
,,
ep,
where U E C,, /?, E C,, Z E m , and m > p. In what follows, the likelihood ratio test of Ho versus H , is derived and an invariance argument shows that there is no uniformly most powerful invariant test under a natural group that leaves the problem invariant. As usual, we partition U
390
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODEL
into U, and U2, 22
tp*,, SO
E
f?p,,l, and Z is partitioned into Zl
$,,,, and
Also
and S , , . , = S , , - SI2S,;'S2,. Proposition 9.19. The likelihood ratio test of Ho versus HI rejects for large values of the statistic
Proofi
Under Ho,
so the maximum likelihood estimator of 2 is e = -m +1 l (Z'Z
1 l + U'U) = -(s m+
+ U'U)
The value of the maximized likelihood function is proportional to A,
l e l - ( m + 1)/2.
Under HI, the situation is a bit more complicated and it is helpful to consider conditional distributions. Under HI,
PROPOSITION
9.19
and
The reparameterization defined by \k,, = Z , , . , , \k,, = z&~z,,, and \k,, = ZZ2 is one-to-one and onto. Let f,(U,IU2, PI, q2,,P ' ,), f2(U21'k,,), f3(Z11Z2,\k2,,\kl ,), and f4(Z2l\k,,) be the density functions with respect to Lebesgue measure dU, dU2 dZ, dZ2 of the four distributions above. It is clear that
maximizesf,(U,IU2, P,, andfl(UlIU2,PI,%,, \El,) ff Iql,1-'/2. With B2 substituted into f , , the parameter \k,, only occurs in the density f,(Z,IZZ, $1, '+I,). Since
our results for the MANOVA model show that
maximizes f3(Z,IZ2,\k2,,q , , ) for each value of \k, ,. When $2, is substituted into f,, an inspection of the resulting four density functions shows that the maximum likelihood estimators of \k,, and \k,, are
and
Under H I , this yields a maximized likellhood function proportional to
Therefore the likellhood ratio test rejects H,, for small values of
392
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODEL
However, IS22 +
W 2 l = 1 s 2 2 1 ( 1+
u2s2;'u;)
and
Thus
Now, the identity
us-'u' = (u,- u ~ s ~ ; ~ s ~ ~ -) su2s2;'s2,)' ~ ! ~ ( u +~ U2SG1U; follows from the problems in Chapter 5. Hence rejecting for small values of
where A is given in the statement of t h s proposition, is equivalent to rejecting for large values of A. The above testing problem is now analyzed via invariance. The group G consists of elements g = (I?,A ) where r E Om and
The group action is
and group composition is
(I-,? Al)(r2>A21
=
(rIr2,AlA2).
The action of the group on the parameter space is ( r , A ) ( P l , 2 ) = ( P , A { , ,AZA'). It is clear that the testing problem is invariant under the group G.
PROPOSITION
393
9.20
Proposition 9.20. Under the action of G on the sample space, a maximal invariant is the pair (W,, W,)where
and
w2= u2s~lu;. A maximal invariant in the parameter space is
Proof. As usual, the method of proof is a reduction argument that provides a convenient index for thc orbits in the sample space. Since m > p, a set of measure zero can be deleted from the sample space so that Z has rank p on the complement of this set. Let
and set u, = E; E Cp, and u, = (0,. . . , 0,1,0,. . . , 0) E CP,,where the one occurs in the ( p , 1) coordinate of u,.Now, given U and 2, we claim that there exists a g = (r,A ) E G such that
+
where
x: = (u1- ~2s1;'s21)sl1.2(~1 - u2sG1s21Y and
x,'= u2s,-,'u;. To establish thls claim, write Z = \kT where \k E $,,and T E Gf is a p x p lower triangular matrix. A modification of the proof of Proposition 5.2 establishes this representation for Z. Consider
394
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODEL
where& E Op,,i
=
1,2, SO[
Thus for any such 5 and
E
Op and
r E O,, ( r ,A ) E G. Also, r can be chosen so that rZA'
Z,
E
Gp,,.
Z'Z
=
T'T,
=
Now,
where Ti' is pi
X
pj and
Since
S
=
a bit of algebra shows that (u,TII
+ u , T ~ ~ ) ( +u ,u,T,IT )' ~ ~ = (u,- u,s;,'s,,)s,!,(u,- u2s;,ls2,)' = x:
and
( U , T ~ ~ ) ( U , T= ~ ~u2s-$u; )' = x2& Let 2, = (1,0,. . . , 0) E Cp,,, and 2, = (1,0,. . . , 0) E Cp,,,. Since the vectors X,Z, and U,T" U2T2' have the same length, there exists E Op, such that (u,T" u , T ~ ~ ) (=; X1EI.
+
+
For similar reasons, there exists a
6;
E
Op2 such that
U2T2,5; = X2Z2. With these choices for 5 , and t,,
PROPOSITION
9.20
Thus there is a g
=
(r,A ) E
G such that
Thls establishes the claim. It is now routine to show that (XI, X,) = ( X , ( U , Z ) , X,(U, 2 ) ) is an invariant function. To show that ( X I ,X,) is maximal invariant, suppose ( U , 2 ) and (0, 2 ) yield the same ( X , , X,) values. Then there exist g and g in G such that
This shows that ( X , , X,) is maximal invariant. Since the pair ( W , , W,) is a one-to-one function of ( X I ,X,), it follows that ( W , , W,) is maximal invariant. The proof that 6 is a maximal invariant in the parameter space is similar and is left to the reader. In order to suggest an invariant test for Ho : P I the distribution of ( W , , W,) is needed. Since
=
0 based on (W,, W,),
and
with S and U independent,
Therefore, W , is an ancillary statistic as its distribution does not depend on any parameters under Ho or H I . We now compute the conditional distribution of W , given W,. Proposition 8.7 shows that
3%
INFERENCE FOR MEANS I N MULTIVARIATE LINEAR MODEL
and C(S22) = W(Z22, P,, m ) where S,,., is independent of (S,,, S,,). Thus
and conditional on (S,,, U,),
Further, U, and U, S,;'S,, Therefore,
are conditionally independent-given
(S,, , U, ).
,
Since S,, . is independent of all other variables under consideration, and since
it follows from Proposition 8.14 that
where 6 = P,Z;!,P;. However, the conditional distribution of W , given (S,,, U,) depends on (S,,, U,) only through the function W, = u,s,;'u;. Thus
and C(W2)
= Fp2,m-p2+~.
3 9
PROBLEMS
Further, the null hypothesis is H, : 6 = 0 versus the alternative H , : 6 > 0. Under H,, it is clear that W , and W2 are independent. The likelihood ratio test rejects H, for large values of W, and ignores W,. Of course, the level of this test is computed from a standard F-table, but the power of the test involves the marginal distribution of W , when 6 > 0. This marginal distribution, obtained by averaging the conditional distribution C(W,IW,) with respect to the distribution of W 2 ,is rather complicated. To show that a uniformly most powerful test of H, versus H , does not exist, consider a particular alternative 6 = 6, > 0. Let fl(wIIw2,6 ) denote the conditional density function of W , given W2 and let f,(w,) denote the density of W2. For testing H,: 6 = 0 versus HI : 6 = 6,, the NeymanPearson Lemma asserts that the most powerful test of level a is to reject if
where ~ ( ais) chosen to make the test have level a. However, the rejection region for this test depends on the particular alternative 6, so a uniformly most powerful test cannot exist. Since W2is ancillary, we can argue that the test of H, should be carried out conditional on W,, that is, the level and the power of tests should be compared only for the conditional distribution of W , given W,. In this case, for w2 fixed, the ratio
is an increasing function of w, so rejecting for large values of the ratio (w, fixed) is equivalent to rejecting for W , > k. If k is chosen to make the test have level a , this argument leads to the level a likelihood ratio test.
PROBLEMS 1.
Consider independent random vectors X,, with C(X,,) = N ( p , , 8 ) for 1,. . . , n , and i = 1,. . . , k. For scalars a , , . . . , ak consider testing H,: Za,p, = 0 versus H, : Za,p, * 0. With T~ = 8a?n;', let bl = ~ - ' a , set , = n;'Z,X,, and let S, = Z,(X,, Write this problem in the canonical form of Section 9.1 and prove that the test that rejects for large values of A = ( Z , b , x ) ' ~ - ' ( Z , b , xis) UMP invariant. Here S = XIS,. What is the distribution of A under H,?
J =
x
X)(X,,
x).
2. Given Y E C and X E C k , of rank k , the least-squares estimate ' n of B can be characterized as the B that rniniB = ( X I X ) -fX'Y
398
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODEL
rnizes tr(Y - XB)'(Y - XB) over all k X p matrices. (i) Show that for any k x p matrix B,
+
(ii) A real-valued function defined for p X p nonnegative definite matrices is nondecreasing if +(S,) < +(S, + S2) for any S, and S2(s,'> 0, i = 1,2). Using (i), show that, if is nondecreasing, then +((Y - XB)'(Y - XB)) is minimized by B = B. (iii) For A that is p x p and nonnegative definite, show that +(S) = tr AS is nondecreasing. Also, show that +(S) = det(A + S ) is nondecreasing. (iv) Suppose +(S) = + ( r S r f ) for S 2 0 and r E 8, so +(S) can be written as +(S) = #(h(S)) where X(S) is the vector of ordered characteristic roots of S. Show that, if # is nondecreasing in each argument, then is nondecreasing.
+
+
3. (The MANOVA model under non-normality.) Let E be a random n x p matrix that satisfies C(TE4') = C(E) for all r E 8, and # E 6,. Assume that Cov(E) = In@ I, and consider a linear model for Y E C,, generated by Y = ZB + EA' where Z is a fixed n x k matrix of rank k, B is a k x p matrix of unknown parameters, and A is an element of GI,. (i) Show that the distribution of Y depends on (B, A) only through (B, AA'). (ii) Let M = (pip = ZB, B E Cp,k) and y = ( I n @ 212 > 0, Z is p x p). Show that (M, y ) serves as a parameter space for the linear model (the distribution of E is assumed fixed). (iii) Consider the problem of testing H, : RB = 0 where R is r X k of rank r. Show that the reduction to canonical form given in Section 9.1 can be used here to give a model of the form
where p, is r x p , p2 is (k - r ) x p , p3 is (n - k) x p , B , is r X p, B,is (k - r ) X p, B is n X p , and A is as in the original
399
PROBLEMS
model. Further, E and B have the same distribution and the null hypothesis is H,: B, = 0. (iv) Now, assume the form of the model in (9.6) and drop the tildas. Using the invariance argument given in Section 9.1, the testing problem is invariant and any invariant test is a function of the t largest eigenvalues of Y, (Y;Y,)- 'Y; where t = min{r, p). Assume n - k > p and partition E as Y is partitioned. Under H,, show that
(v) Using Proposition 7.3 show that W has the same distribution no matter what the distribution of E as long as C ( r E ) = C(E) for all r E On and E, has rank p with probability one. T h s distribution of W is the distribution obtained by assuming the elements of E are i.i.d. N(0,l). In particular, any invariant test of H, has the same distribution under H, as when E is N(0, In8 I,).
4. When Y, is N(B,, I, 8 2 ) and Y, is N(0, I, 8 2 ) with m 2 p verify the claim that GY;Y,(Y;Y~)-~ =
5.
r I + m-p-lp m-p-1
+ 2,
B;B,L-l.
Consider a data matrix Y: n X 2 and assume C(Y) = N(ZB, I, 8 8 ) where Z is n x 2 of rank two so B is 2 x 2. In some situations, it is is, the diagonal elements of reasonable to assume that a,, = a,,-that 8 are the same. Under this assumption, use the results of Section 9.2 to derive the likelihood ratio test for H, : b,, = b,,, b,, = b,, versus H, : b,, * b,, or b,, * b,,. Is this test UMP invariant?
6. Consider a "two-way layout" situation with observations qj, i = 1,. . . , m and j = 1,. . . , r. Assume qj = p a i P, eij where p, ai, and p, are constants that satisfy 8 a i = Z/3, = 0. The eij are random errors with mean zero (but not necessarily uncorrelated). Let Y be the m X n matrix of Ti's, u, be the vector of ones in R m ,u, be the vector of ones in Rn,a E R m be the vector with coordinates a,, and p E R n be the vector with coordinates b,. Let E be the matrix of eij's. (i) Show the model is Y = pu,u; + au; + u,P' E in the vector Here, a E R m with a'u, = 0 and /3 E R n with P'u, space
+ + +
en,,.
+
400
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODEL =
0. Let
Also, let ( . , . ) be the usual inner product on C,, ,. (ii) Show MI 1 M2 I M, I M I in (C,, ,, ( - , .)). Now, assume Cov(E) = I, @ A where A = yP SQ with P = n-'u2u;, Q = I - P, and y > 0 and S > 0 are unknown parameters. (iii) Show the regression subspace M = M I @ M2 @ M, is invariant under each I, @ A. Find the Gauss-Markov estimates for p, a, and /?. (iv) Now, assume E is N(0, I, 4€ A). Use an invariance argument to show that for testing Ho : a = 0 versus H I : a * 0, the test that rejects for large values of W = 11 PM,Y112/11QMY(12is a UMP invariant test. Here, Q, = I - P,. What is the distribution of W?
+
The regression subspace for the MANOVA model was described as M = {pip = ZB, B E C,,,) G Cp,, where Z is n X k of rank k. The subspace of M associated with the null hypothesis H o : RB = 0 (R is r X r of rank r ) is o = {pip = ZB, B E Cp, k , RB = 0). We know that P, = P, 8 I, where P, = Z(ZfZ)-'z' (P, is the orthogonal projection onto M in (C,, ,, ( , . ))). This problem gives one form for P,. Let W = Z(ZfZ)-'R . (i) Show that W has rank r. Let P, = W(WfW)- I W' so P, @ I, is an orthogonal projection. (ii) Show that %(P, @ I,) c M - o where M - o = M n oL . Also, show dim(%(P, @ I,)) = rp. (iii) Show that dim o = ( k - r)p. (iv) Now, show that P, 8 I, is the orthogonal projection onto M - o so P, 8 I, - P, 8 I, is the orthogonal projection onto o.
-
8. Assume XI,. . . , X, are i.i.d. from a five-dimensional N(0, 2 ) where Z is a cyclic covariance matrix ( 2 is written out explicitly at the beginning of Section 9.4). Find the maximum likelihood estimators of u2>P I , P2.
NOTES AND REFERENCES
401
9. Suppose XI,. . . , X,, are i.i.d. N(0, \k) of dimension 2p and assume \E has the complex form
Z;X,q and partition S as \k is partitioned. show that + S,,) and $ = (2n)-'(S,, - S,,) are the maximum likelihood estimates of Z and F. Let S
=
3 = (2n)-'(s,,
10.
11.
Let XI,.. . , X, be i.i.d. N(p, Z)p-dimensional random vectors where p and Z are unknown, Z > 0. Suppose R is r x p of rank r and consider testing H, : Rp = 0 versus H I : Rp * 0. Let = (l/n)ZyX, and S = C;(X, - X)(4 - X)'. Show that the test that rejects for large values of T = (R~)'(RSR')-'(RX) is equivalent to the likelihood ratio test. Also, show this test is UMP invariant under a suitable group of transformations. Apply this to the problem of testing p, = p, = . . . = pp where p,,. . . , pp are the coordinates of p.
Consider a linear model of the form Y = ZB + E with Z : n X k of rank k, B : k X p unknown, and E a matrix of errors. Assume the first column of Z is the vector e of ones (the regression equation has the constant term in it). Assume Cov(E) = A(p) 8 Z where A(p) has ones on the diagonal and p off the diagonal ( - l / ( n - 1) < p < 1). (i) Show that the GM and least-squares estimates of B are the same. (ii) When C(E) = N(0, A(p) 8 Z) with Z and p unknown, argue via invariance to construct tests for hypotheses of the form RB = 0 where R is r x k - 1 of rank r and B : ( k - 1) x p consists of the last k - 1 rows of B.
NOTES AND REFERENCES 1.
The material in Section 9.1 is fairly standard and can be found in many texts on multivariate analysis although the treatment and emphasis may be different than here. The likelihood ratio test in the MANOVA setting was originally proposed by Wilks (1932). Various competitors to the likelihood ratio test were proposed in Lawley (1938), Hotelling (1947), Roy (1953), and Pillai (1955).
2. Arnold (1973) applied the theory of products of problems (which he had developed in his Ph.D. dissertation at Stanford) to situations involving patterned covariance matrices. This notion appears in both this chapter and Chapter 10.
402
INFERENCE FOR MEANS IN MULTIVARIATE LINEAR MODEL
3. Given the covariance structure assumed in Section 9.2, the regression
subspaces considered there are not the most general for which the
Gauss-Markov and least-squares estimators are the same. See Eaton (1970) for a discussion. 4. Andersson (1975) provides a complete description of all symmetry models. 5.
Cyclic covariance models were first studied systematically in Olkin and Press (1969).
6. For early papers on the complex normal distribution, see Goodman (1963) and Giri (1965a). Also, see Andersson (1975). 7. Some of the material in Section 9.6 comes from Giri (1964, 1965b).
8. In Proposition 9.5, when r = 1, the statistic A, is commonly known as Hotelling's T2 (see Hotelling (193 1)).
C H A P T E R 10
Canonical Correlation Coefficients
This final chapter is concerned with the interpretation of canonical correlation coefficients and their relationship to affine dependence and independence between two random vectors. After using an invariance argument to show that population canonical correlations are a natural measure of affine dependence, these population coefficients are interpreted as cosines of the angles between subspaces (as defined in Chapter 1). Next, the sample canonical correlations are defined and interpreted as cosines of angles. The distribution theory associated with the sample coefficients is discussed briefly. When two random vectors have a joint normal distribution, independence between the vectors is equivalent to the population canonical correlations all being zero. The problem of testing for independence is treated in the fourth section of this chapter. The relationship between the MANOVA testing problem and testing for independence is discussed in the fifth and final section of the chapter.
10.1. POPULATION CANONICAL CORRELATION COEFFICIENTS There are a variety of ways to introduce canonical correlation coefficients and three of these are considered in this section. We begin our discussion with the notion of affine dependence between two random vectors. Let X E (V, (., .),) and Y E (W, .),) be two random vectors defined on the same probability space so the random vector Z = {X, Y) takes values in the vector space V @ W. It is assumed that Cov(X) = Z,, and Cov(Y) = Z,, both exist and are nonsingular. Therefore, Cov(Z) exists (see Proposition ( a ,
404
CANONICAL CORRELATION COEFFICIENTS
2.15) and is given by
Also. the mean vector of Z is p
=
GZ
=
{GX, G Y ) = { p , ,p 2 ) .
Definition 10.1. Two random vectors U and 0, in ( V , ( - , .),) are affinely equivalent if U = A 0 + a for some nonsingular linear transformation A and some vector a E V. It is clear that affine equivalence is an equivalence relation among random vectors defined on the same probability space and taking values in V. We now consider measures of affine dependence between X and Y , whch are functions of p = {GX,G Y ) and Z = Cov(Z) where Z = { X , Y ) . Let m ( p , 2 ) be some real-valued function of p and Z that is supposed to measure affine dependence. If instead of X we observe 2,which is affinely equivalent to X, then the affine dependence between X and Y should be the same as the affine dependence between 2 and Y. Similarly, if is affinely equivalent to Y, then the affine dependence between X and Y should be the same as the affine dependence between X and p. These remarks imply that m ( p , Z ) should be invariant under affine transformations of both X and Y. If ( A , a ) is an affine transformation on V, then ( A , a ) v = Av a where A is nonsingular on V to V. Recall that the group of all affine transformations on V to V is denoted by A l ( V ) and the group operation is given by
+
Also, let A l ( W ) be the affine group for W. The product group A l ( V ) x A I ( W ) acts on the vector space V @ W i n the obvious way: ( ( A ,a ) , ( B , b ) ) { v ,w ) = { A v + b , Bw
+ b).
The argument given above suggests that the affine dependence between X and Y should be the same as the affine dependence between ( A , a ) X and ( B , b)Y for all ( A , a ) E A l ( V ) and ( B , b ) E Al(W). We now need to interpret this requirement as a condition on m ( p , Z). The random vector ( ( A ,a ) , ( B , b ) ) { X ,Y ) = { A X + a , BY
+ b)
PROPOSITION
10.1
has a mean vector given by ( ( A , a ) , ( B , b ) ) { ~ ~l ,
2 =)
+ a, 4
2
+ b)
and a covariance given by
Therefore, the group A I ( V ) X A I ( W ) acts on the set
O For g
=-
=
{ ( p , Z)Ip
E
V @ W , Z 2 0 , Zii > 0 , i
=
1,2).
( ( A , a ) , ( B , b ) ) E A l ( V ) X A I ( W ) , the group action is given by
( P , Z ) -, (8l.b g ( Z ) )
where
and
Requiring the affine dependence between X and Y to be equal to the affine dependence between ( A , a ) X and ( B , b ) Y simply means that the function m defined on O must be invariant under the group action given above. Therefore, m must be a function of a maximal invariant function under the action of A I ( V ) x A I ( W ) on O. The following proposition gives one form of a maximal invariant. Proposition 10.1.
Let q
=
dim V , r
=
dim W, and let t
which is positive definite on V @ W, let A, eigenvalues of
> ...
2
A,
=
min{q, r). Given
> 0 be the t largest
406
CANONICAL CORRELATION COEFFICIENTS
where Z2, = Z;,. Define a function h on O by Z)
=
(A,, A, ,..., A,),
where A , > . - . 2 A, are defined in terms of 2 as above. Then h is a maximal invariant function under the action of G = Al(V) x AI(W) on O. Proof. Let {v,,. . . , v,) and {w,,. . . , w,) be fixed orthonormal sets in V and W. For each 2 , define Q I 2 ( 2 )by
where A , 2 . - . 2 A , are the t largest eigenvalues of A(Z). Given (p, Z) O, we first claim that there exists a g E G such that gp = 0 and
The proof of this claim follows. For g
=
E
((A, a), (B, b)), we have
Choose A = I ? Z L ' / ~and B = A2,'/2 where r E ijo(V), A E B(W), and 2;'/2 is the inverse of the positive definite square root of Xi,, i = 1,2. For each r and A,
and
Using the singular value decomposition, write
PROPOSITION
407
10.1
where { x , , . . ., x,) and { y , , . . . , y,) are orthonormal sets in V and W, respectively. This representation follows by noting that the rank of A,, is at most t and
has the same eigenvalues as A ( Z ) , which are A, B as above, it now follows that
> . . . > A, > 0. For A and
Choose r so that Tx, = vi and choose A so that A y,
=
widThen we have
so g ( B ) has the form claimed. With these choices for A and B, now choose a = - A p I and b = -Bp,. Then
The proof of the claim is now complete. To finish the proof of Proposition 10.1, first note that Proposition 1.39 implies that h is a G-invariant function. For the maximality of h , suppose that h ( p , 2 ) = h ( v , *). Thus
whlch implies that there exists a g and
g
such that
=
(p,2)
and
Therefore,
g-Ig(v, *) so h is maximal invariant.
408
CANONICAL CORRELATION COEFFICIENTS
The form of the singular value decomposition used in the proof of Proposition 10.1 is slightly different than that given in Theorem 1.3. For a linear transformation C of rank k defined on (V, ( , ,) to (W, ( . , .),), Theorem 1.3 asserts that
-
a )
where p, > 0, (x,, . . . , x,), and {w,,. . . , w,) are orthonormal sets in V and W. With q = dim V, r = dim W, and t = min{q, r), obviously k G t. When k < t, it is clear that the orthonormal sets above can be extended to {x,,. . . , x,) and (w,,. . . , w,), which are still orthonormal sets in V and W. Also, setting pi = 0 for i = k + 1,. . . , t, we have
and p: 2 . . . 2 p: are the t largest eigenvalues of both CC' and C'C. This form of the singular value decomposition is somewhat more convenient in this chapter since the rank of C is not explicitly mentioned. However, the rank of C is just the number of pi, which are strictly positive. The corresponding modification of Proposition 1.48 should now be clear. Returning to our original problem of describing measures of affine dependence, say m(p, Z), Proposition 10.1 demonstrates that m is invariant under affine relabelings of X and Y iff m is a function of the t largest eigenvalues, A,,. . . , A,, of A(Z). Since the rank of A(Z) is at most t, the remaining eigenvalues of A(Z), if there are any, must be zero. Before suggesting some particular measures m(p, Z), the canonical correlation coefficients are discussed. Definition 10.2. In the notation of Proposition 10.1, let p, = A'/*, i = 1,. . . , t. The numbers p, 2 p, 2 . . . 2 p, 2 0 are called the population canonical correlation coefficients. Since pi is a one-to-one function of Xi, it follows that the vector (p,, . . . , p,) also determines a maximal invariant function under the action of G on O. In particular, any measure of affine dependence should be a function of the canonical correlation coefficients. The canonical correlation coefficients have a natural interpretation as cosines of angles between subspaces in a vector space. Recall that Z = (X, Y) takes values in the vector space V $ W where (V, (., .),) and (W, (., .),)
PROPOSITION
409
10.2
are inner product spaces. The covariance of 2, with respect to the natural on V @ W, is inner product, say ( a ,
a),
In the discussion that follows, it is assumed that Z is positive definite. Let (., -), denote the inner product on V @ W defined by
for z , , z2 E V @ W. The vector space V can be thought of as a subspace of V @ W-namely, just identify V with V @ (0) c V @ W. Similarly, W is a subspace of V @ W. The next result interprets the canonical correlations as the cosines of angles between the subspaces V and W when the inner product on V @ W is (., .),. Proposition 10.2. Given 2, the canonical correlation coefficients p , >, . . . >, p, are the cosines of the angles between V and W as subspaces in the inner product space (V @ W, (., .),). Proof: Let PI and P, be the orthogonal projections (relative to (., .),) onto V @ (0) and W @ (O), respectively. In view of Proposition 1.48 and Definition 1.28, it suffices to show that the t largest eigenvalues of P,P2P, are A, = p?, i = 1,. . . , t . We claim that
is the orthogonal projection onto V @ (0). For (v, w) E V @ W ,
so the range of C , is V @ (0) and C , is the identity on V @ (0). That C: = C , is easily verified. Also, since
the identity C;Z = Z C , holds. Here C ; is the adjoint of C , relative to the
410
CANONICAL CORRELATION COEFFICIENTS
inner product (., .)-namely,
This shows that C , is self-adjoint relative to the inner product (., .),. Hence C , is the orthogonal projection onto V $ (0) in (V $ W , (., .),). A similar argument yields
as the orthogonal projection onto (0) $ W in (V $ W, (., -),). Therefore P, = C,, i = 1,2, and a bit of algebra shows that
where A(Z)
=
Z,'ZI2Z;'Z2, and
Thus the characteristic polynomial of P IP2PI is given by
where r = dim W. Since t = min(q, r ) where q = dim V, it follows that the t largest eigenvalues of PIP2PI are the t largest eigenvalues of A(Z). These are p: 2 . . - 2 p;, so the proof is complete. Another interpretation of the canonical correlation coefficients can be given using Proposition 1.49 and the discussion following Definition 1.28. Using the notation adopted in the proof of Proposition 10.2, write
where {ql,.. . , qt) is an orthonormal set in V $ (0) and (S,,. . . , 5,) is an orthonormal set in (0) ~3W. Here orthonormal refers to the inner product .), on V 61 W, as does the symbol in the expression for P2P1-that is, for z , , z2 E V CB W, ( a ,
PROPOSITION
10.2
411
The existence of this representation for P, PI follows from Proposition 1.48, as does the relationship
for i, j = 1,. . . , t. Define the sets D I i and D,,, i = 1,. . . , t , as in Proposition 1.49 (with M I = V @ (0) and M2 = (0) @ W ) , SO
for i = 1,. . . , t. To interpret p , , first consider the case i D l , iff
=
1. A vector
is in
and 1
Similarly, 5
E
=
( 7 ,Z V ) =
( 0 ,2 1 1 v ) 1 =
var(v, X ) , .
D,, iff
and
However, for 71
=
{ v ,0 ) E D l , and
5 = (0, w ) E D2,,
This is just the ordinary correlation between ( v , X), and ( w , Y ) , as v and w have been normalized so that 1 = var(v, X ) = var(w, Y ) , . Since (11, O2 < p, for all 71 E D l , and 5 E D,,, it follows that for every x E V , x * 0 , and y E W , y * 0 , the correlation between ( x , X ) , and ( y , Y ) , is no greater than p,. Further, writing 71, = {v,,O) and 5, = (0, w,), we have
,
which is the correlation between ( v , , X ) , and ( w , , Y ),. Therefore, P I is the
412
CANONICAL CORRELATION COEFFICIENTS
maximum correlation between (x, X), and (y, Y), for all nonzero x E V and y E W. Further, this maximum correlation is achieved by choosing x = v, andy = w,. The second largest canonical correlation coefficient, p,, satisfies the equality
A vector q is in Dl, iff q
=
{v,O),
v
E
T/
and 0
=
(7, TI)= = ("9 ~ 1 1 ~ 1 ) l .
Also, a vector [ is in D,, iff
and
These relationships provide the following interpretation of p,. The maximum correlation between (x, X), and (y, Y), is p, and is
since 1 = var(v,, X), = var(w,, Y),. Suppose we now want to find the maximum correlation between (x, X), and (y, Y), subject to the condition
Clearly (i) is equivalent to (ii)
PROPOSITION
413
10.2
Since correlation is invariant under multiplication of the random variables by positive constants, to find the maximum correlation between (x, X), and (y, Y), subject to (ii), it suffices to maximize cov{(x, X),, (y, Y),) over those x 's and y 's that satisfy (iii)
( x , ~ , , x ) =, 1, ( x , ~ , , v , ) =, 0 ( Y ?2 2 2 ~ ) 2= 19 ( Y , 2 2 2 ~ 1 = ) ~0.
However, x E V satisfies (iii) iff 7 = {x, 0) is in Dl, and y iff 5 = (0, y ) is in D,,. Further, for such x, y, q, and 5,
E
W satisfies (iii)
Thus maximizing this covariance subject to (iii) is the same as maximizing (q, 5)= for 7 E Dl, and 5 E D,,. Of course, this maximum is p, and is achieved at 7, E Dl, and 5, E D,,. Writing 7, = {v2,0)and 5, = (0, w,), it is clear that v, E V and w, E W satisfy (iii) and
Furthermore, Proposition 1.48 shows that
which implies that
Therefore, the problem of maximizing the correlation between (x, X), and (y, Y), (subject to the condition that the correlation between (x, X), and (v X), be zero and the correlation between (y, Y), and (w,, Y), be zero) has been solved. It should now be fairly clear how to interpret the remaining canonical correlation coefficients. The easiest way 'to describe the coefficients is by induction. The coefficient p, is the largest possible correlation between ( x , X), and (y, Y), for nonzero vectors x E V and y E W. Further, there exist vectors v, E V and w, E W such that
,,
and 1 = var(v,, X),
=
var(w,, Y)2.
414
CANONICAL CORRELATION COEFFICIENTS
These vectors came from q, and [, in the representation
given earlier. Since qi E V 3€ {0), we can write q, = {v,, 0), i = 1,. . . , t. Similarly, 5, = (0, w,), i = 1,. . . , t. Using Proposition 1.48, it is easy to check that
forj, k = 1,. . . , t. Of course, these relationships are simply a restatement of the properties of [,, . . . , 5, and q,, . . . , qt. For example,
However, as argued in the case of p,, we can say more. Given p,, . . . , p, and the vectors v,,. . . , v,-, and w,,. . . , w,-, obtained from q,,. . . , qi- and [,,. . . , 4,- ,, consider the problem of maximizing the correlation between (x, X), and (y, Y), subject to the conditions that
,
COV{(X, x),, (u,,
x),)= 0,
j
=
1,. . . , i - 1
C O V { ( ~ , Y ) ~ , ( W=~ 0, , Y ) j~=) 1,..., i - 1. By simply unravelling the notation and using Proposition 1.49, this maximum correlation is pi and is achieved for x = vi and y = w,. T h s successive maximization of correlation is often a useful interpretation of the canonical correlation coefficients. The vectors v,, . . . , v, and w,, . . . , w, lead to what are called the canonical variates. Recall that q = dim V, r = dim W and t = min{q, r). For definiteness, assume that q < r so t = q. Thus {v,,. . . , 0,) is a basis for V and satisfies
for j, k
=
1,. . . , q
SO
{v,,. . . , uq) is an orthonormal basis for V relative to
PROPOSITION
10.3
415
the inner product determined by Z , ,. Further, the linearly independent set
{ w , , . . . , w,) satisfies
( y ,' 2 2 ~ k ) , = ' j k so {w,,. . . , w,) is an orthonormal set relative to the inner product determined by Z,,. Now, extend this set to {w,, . . . , w,) so that this is an orthonormal basis for W in the Z,, inner product. Definition 10.3. The real-valued random variables defined by
and
are called the canonical variates of X and Y , respectively. Proposition 10.3. The canonical variates satisfy the relationships (i) var X,
=
var Yk = 1.
(ii) cov{ X,, Yk) = p,'
.
These relationships hold for j = 1,. . . , q and k are the canonical correlation coefficients.
=
1,. . . , r. Here,
PI,.
. . , P,
Proof: This is just a restatement of part of what we have established above. Let us briefly review what has been established thus far about the population canonical correlation coefficients p,,. . . , p,. These coefficients were defined in terms of a maximal invariant under a group action and this group action arose quite naturally in an attempt to define measures of affine dependence. Using Proposition 1.48 and Definition 1.28, it was then shown that p , , . . . , p, are cosines of angles between subspaces with respect to an inner product defined by Z. The statistical interpretation of the coefficients came from the detailed information given in Proposition 1.49 and this interpretation closely resembled the discussion following Definition 1.28. Given X in (V, (. , ,) and Y in (W, ( . , . ),) with a nonsingular covariance 0
)
416
CANONICAL CORRELATION COEFFICIENTS
the existence of special bases {v,,. . . , 0,) and {w,, . . . , w,) for V and W was established. In terms of the canonical variates =
(v,, x ) , ,
Y, = (w,,
y),,
the properties of these bases can be written 1 = var Xi = var Y, and
for i = 1,. . . , q and j = 1,. . . , r. Here, the convention that p, = 0 for i > t = min{q, r ) has been used although pi is not defined for i > t. When q g r, the covariance matrix of the variates X,,. . . , X,, Y,,. . . , Y, (in that order) is 4'
Z0 =
[(DO).
( DIr o))
where D is a q x q diagonal matrix with diagonal entries p, >, . . . >, p, and 0 is a q x ( r - q) block of zeroes. The reader should compare this matrix representation of Z to the assertion of Proposition 5.7. The final point of this section is to relate a prediction problem to that of suggesting a particular measure of affine dependence. Using the ideas developed in Chapter 4, a slight generalization of Proposition 2.22 is presented below. Again, consider X E (V, (., and Y E (W, .),) with & X = p,, &Y = p2, and a ) , )
( a ,
It is assumed that Z,, and Z,, are both nonsingular. Consider the problem of predicting X by an affine function of Y-say CY v, where C E C(W,V) and v, E V. Let [., .] be any inner product on V and let 11 . 11 be the norm defined by The following result shows how to choose C and v, to minimize
+
[ a ,
a].
&(IX- (CY + v0)1l2. Of course, the inner product [ . , . ] on V is related to the inner product ( ., . )
,
for some positive definite A,. Proposition 10.4. For any C
E
C(W,V) and v , E V, the inequality
holds. There is equality in this inequality iff
and
C
=
t = ZI2Z,'.
Here, ( , .) is the natural inner product on E(V, V) inherited from (V, ( - 9
.)I).
Proof. First, write
x - ( C Y + v,)
=
Ul + U2
where
U, = X -
(L'Y + 6 , )
=
X - pl - Z I 2 Z $ ( y - p 2 )
and
U2 =
(L' - C ) Y + 6, - 0 , .
Clearly, Ul has mean zero. It follows from Proposition 2.17 that U, and U2 are uncorrelated and C0v(U1)= Z , , - ZI2Z,'Z2,. Further, from Proposition 4.3 we have &[U,,U2]= 0. Therefore,
418
CANONICAL CORRELATION COEFFICIENTS
where the last equality follows from the identity
&U,OUl = Z , ,
-
ZI2Z,'Z2,
established in Proposition 2.21. Thus the desired inequality holds and there 1 1zero ~ iff U2 is zero with probability is equality iff GllU2112= 0. But & l l ~ ~ is one. This holds iff v, = do and C = C since Cov(Y) = Z 2 , is positive definite. This completes the proof. Now, choose A, to be 2;' in Proposition 10.4. Then the mean squared do, measured relative to Z;', is error due to predicting X by
e~ +
Here, I(
. 11 is obtained from the inner product defined by
We now claim that @ is invariant under the group of transformations discussed in Proposition 10.1, and thus is a possible measure of affine dependence between X and Y. To see this, first recall that ( . , .) is just the trace inner product for linear transformations. Using properties of the trace, we have
@ ( Z )= ( I , I
-
2,"22,22,'22,2;'/2)
9
=
C (1 - A,)
I= l
where A, >, . >, A, 2 0 are the eigenvalues of 2,'/22,22,'22,2,'/2. However, at most t = min{q, r ) of these eigenvalues are nonzero and, by definition, p, = A'/2, i = 1,. . . , t , are the canonical correlation coefficients. Thus
is a function of p,, . . . , p, and hence is an invariant measure of affine
419
SAMPLE CANONICAL CORRELATIONS
dependence. Since the constant q - t is irrelevant, it is customary to use
rather than +(Z) as a measure of affine dependence.
10.2. SAMPLE CANONICAL CORRELATIONS To introduce the sample canonical correlation coefficients, again consider inner product spaces (V, (., .),) and (W, (., .),) and let ( V @ W, (., .)) be the direct sum space with the natural inner product (., .). The observations consist of n random vectors Zi = {Xi, y) E V @ W, i = 1,. . . , n. It is assumed that these random vectors are uncorrelated with each other and C(Zi) = C(Z,) for all i, j. Although these assumptions are not essential in much of what follows, it is difficult to interpret canonical correlations without these assumptions. Given Z,, . . . , Zn, define the random vector Z by specifying that Z takes on the values Zi with probability l/n. Obviously, the distribution of Z is discrete in V @ W and places mass l/n at Z, for i = 1,. . . , n. Unless otherwise specified, when we speak of the distribution of Z, we mean the conditional distribution of Z given Z,, . . . , Z,, as described above. Since the distribution of Z is nothing but the sample probability measure of Z,, . . . , Z,, we can think of Z as a sample approximation to a random vector whose distribution is C(Z,). Now, write Z = {X, Y) with X E V and Y E W SO X is Xi with probability l / n and Y is with probability l/n. Given Z,, . . . , Z,,, the mean vector of Z is
and the covariance of Z is CovZ= S = -
I
C (z,- Z)n(z,- Z ) . -
,=,
Thls last assertion follows from Proposition 2.21 by noting that CovZ since the mean of Z is
Z.
=
& ( Z- Z ) o ( z - Z )
When V = Rq and W = Rr are the standard
420
CANONICAL CORRELATION COEFFICIENTS
coordinate spaces with the usual inner products, then S is just the sample covariance matrix. Since S is a linear transformation on V @ W to V @ W, S can be written as
It is routine to show that
s,,= -I C (4- X)o(x; - X ) ;=I
and S,, = S;,. The reader should note that the symbol Ci appearing in the expressions for S,,, S,,, and S,, has a different meaning in each of the three expressions-namely, the outer product depends on the inner products on the spaces in question. Since it is clear which vectors are in which spaces, this multiple use of should cause no confusion. Now, to define the sample canonical correlation coefficients, the results of Section 10.1 are applied to the random vector Z. For this reason, we assume that S = Cov Z is nonsingular. With q = dim V , r = dim W, and t = min{q, r), the canonical correlation coefficients are the square roots of the t largest eigenvalues of
In the sampling situation under discussion, these roots are denoted by r, a . . . 2 r, a 0 and are called the sample canonical correlation coefficients. The justification for such nomenclature is that r f , . . . , :r are the t largest eigenvalues of A(S) where S is the sample covariance based on Z,, . . . , Z,. Of course, all of the discussion of the previous section applies directly to the situation at hand. In particular, the vector (r,, . . . , r,) is a maximal invariant under the group action described in Proposition 10.1. Also, r,, . . . , r, are the cosines of the angles between the subspaces V @ (0) and (0) @ W in the vector space V @ W relative to the inner product determined by S.
421
SAMPLE CANONICAL CORRELATIONS
. . , 0,)and {w,, . . . , w,) be the canonical bases for V and Now, let {v,,. W.Then we have for i = 1,. . ., q and j = 1,. .. , r. The convention that r, = 0 for i > t is being used. To interpret what t h s means in terms of the sample Z,, . . . , Z,, consider r,. For nonzero x E V and y E W,the maximum correlation between (x,X), and (y,Y), is r, and is acheved for x = v, and y = w,. However, given Z,,. . . , Z,,we have
=
1
(x,s,,x),= -
C" (x,x,- x):
i=,
and, similarly,
An analogous calculation shows that
Thus var(x, X), is just the sample variance of the random variables (x,X,),,i = I,.. ., n , and var(y, Y), is the sample variance of (y, i = 1,. . . , n. Also, cov{(x, X),,(y, Y),)is the sample covariance of the random variables (x,X,),, (y,x),,i = 1,. . . , n. Therefore, the correlation between (x,X),and (y,Y ) , is the ordinary sample correlation coefficient for the random variables (x,&),, (y,x),,i = 1,. . . , n. This observation implies that the maximum possible sample correlation coefficient for (x,4 ) 1 (y, , x),,i = 1,. . . , n is the largest sample canonical correlation coefficient, r,, and this maximum is attained by choosing x = v, and y = w,. The interpretation of r,, . . . , r, should now be fairly obvious. Given i , 2 < i G t, and given r,,. . . , r,-,, v,,.. . , v i - , , and w,,. . . , w,-,, consider the problem of maximizing the correlation between (x,X), and (y,Y ) , subject to the conditions
x),,
422
CANONICAL CORRELATION COEFFICIENTS
These conditions are easily shown to be equivalent to the conditions that the
sample correlation for
be zero for j = 1,. . . , i - 1 with a similar statement concerning the Y's. Further, the correlation between (x, X), and ( y ,Y), is the sample correlation for (x, Xk),, (y, Yk),, k = 1,. . . , n. The maximum sample correlation is r, and is attained by choosing x = vi and y = w,. Thus the sample interpretation of r,, . . . , r, is completely analogous to the population interpretation of the population canonical correlation coefficients. For the remainder of this section, it is assumed that V = R4 and W = Rr are the standard coordinate spaces with the usual inner products, so V CB W is just RP where p = q + r. Thus our sample is Z,, . . . , Zn with Zi E RP and we write
with Xi E R4 and Y, E Rr, i assumed to be nonsingular, is
=
1,. . . , n. The sample covariance matrix,
where
s,,= 1- ~" ( x , - x ) ( x ~ - x ) ' I
and S,, = S;,. Now, form the random matrix 2 : n X p whose rows are (Zi - Z)' and partition 2 into U : n x q and V : n x r so that
2
=
(uv).
The rows of U are (X, - X)' and the rows of V are (Y, - Y)',i = 1,. . . , n. Obviously, we have nS = 2'2, nS,, = U'U, nS,, = V'V, and nS,, = U'V. The sample canonical correlation coefficients r, 2 . 2 r, are the square roots of the t largest eigenvalues of
However, the t largest eigenvalues of A(S) are the same as the t largest eigenvalues of PxPywhere
and
Now, P, is the orthogonal projection onto the q-dimensional subspace of Rn,say Mx,spanned by the columns of U. Also, Py is the orthogonal projection onto the r-dimensional subspace of Rn,say My,spanned by the
columns of V. It follows from Proposition 1.48 and Definition 1.28 that the sample canonical correlation coefficients r,, . . . , r, are the cosines of the angles between the two subspaces M x and M ycontained in Rn.Summarizing, we have the following proposition.
Proposition 10.5. Given random vectors
where XiE R4 and Y, E Rr,form the matrices U: n x q and V: n x r as above. Let M x c R n be the subspace spanned by the columns of U and let M y G R n be the subspace spanned by the columns of V. Assume that the sample covariance matrix
is nonsingular. Then the sample canonical correlation coefficients are the cosines of the angles between M x and My. The sample coefficients r,, . . . , r, have been shown to be the cosines of angles between subspaces in two different vector spaces. In the first case,
424
CANONICAL CORRELATION COEFFICIENTS
the interpretation followed from the material developed in Section 10.1 of this chapter: namely, r,,. . . , r, are the cosines of the angles between R4 & (0) c RP and (0) €0 Rr G RP when RP has the inner product determined by the sample covariance matrix. In the second case, described in Proposition 10.5, r,, . . . , r, are the cosines of the angles between M x and M yin Rn when R" has the standard inner product. The subspace M x is spanned by the columns of U where U has rows (Xi - X)', i = 1,. . . , n. Thus the coordinates of the jth column of U are Xi, for i = 1,. . . , n where Xi, is the jth coordinate of E Rq, and is the jth coordinate of X. This is the reason for the subscript X on the subspace M,. Of course, similar remarks apply to My. The vector (r,,.. . , r,) can also be interpreted as a maximal invariant under a group action on the sample matrix. Given
2:n x q have rows X;, i = 1 , . . . , n and let ?: n X r have rows i = 1,. . . , n. Then the data matrix of the whole sample is
let
r,
which has rows Z,', i = 1,. . . , n. Let e E Rn be the vector of all ones. It is assumed that 2 E 2 G Cp,, where 2 is the set of all n x p matrices such that the sample covariance mapping
has has rank p. Assuming that n >, p + 1, the complement of 2 in Lebesgue measure zero. To describe the group action on %, let G be the set of elements g = ( r , c, C) where
and
For g
=
( r , c, C), the value of g at
2 is
g2= ~ Z C +' ec'.
PROPOSITION
10.6
Since
~ ( ~= c2s ()Z ) c f . it follows that each g E G is a one-to-one onto mapping of % to %. The composition in G, defined so G acts on the left of %, is
Proposition 10.6. Under the action of G on %, a maximal invariant is the vector of canonical correlation coefficients r,, . . . , r, where t = rnin(q, r).
ProoJ Let S; be the space of p X p positive definite matrices so the sample covariiance mapping s : % + S; is onto. Given S E S;, partition S as
where S , , is q x q, S,, is r x r, and S,, is q X r. Define h on ;5 by letting h ( S ) be the vector ( A , ,. . . , A,)' of the t largest eigenvalues of
Since r, =
&,i = 1,. . . , t , the proposition will be proved if it is shown that
is a maximal invariant function. Thls follows since h ( s ( 2 ) )= ( A , ,. . . , A,)', which is a one-to-one function of ( r , ,. . . , r,). The proof that cp is maximal invariant proceeds as follows. Consider the two subgroups G I and G, of G defined by
and
G2 = { g l g = ( I n , ( ) C , ) E G). Note that G, acts on the space S; in the obvious way0, C ) , then
g , ( S ) = CSC',
S
E
.5;
namely, if g,
=
(In,
426
CANONICAL CORRELATION COEFFICIENTS
Further, since ( L c >c ) =
(r,c , zp)(zn,o,c ) ,
it follows that each g E G can be written as g i = 1,2. Now, we make two claims:
=
g,g, where g,
E
Gi,
(i) s : Z + S ; is a maximal invariant under the action of GI on 2. (ii) h : S ; + R' is a maximal invariant under the action of G2 on S;. Assuming (i) and (ii), we now show that cp(2) = h(s(2)) is maximal invariant. For g E G, write g = g,g, with gi E GI, i = 1,2. Since
and
we have cp(g2)
=
h(s(g1g22))
=
h(s(g22))
=
h(g242))
=
h(s(2)).
It follows that cp is invariant. To show that cp is maximal invariant, assume cp(2,) = rp(Z2). A g E G must be found so that g2,= 2,. Since h is maximal invariant under G , and
there is a g,
E
G2 such that g2(s(21))
=
422).
However, g,(s(2,))
=
s(g221)
=
422)
and s is maximal invariant under GI so there exists a g, such that
This completes the proof that cp, and hence r,, . . . , r,, is a maximal invariant
427
SOME DISTRIBUTION THEORY
-assuming claims (i) and (ii). The proof that s : 2 + S ; is a maximal invariant is an easy application of Proposition 1.20 and is left to the reader. ; + Rt is maximal invariant follows from an argument similar to That h : S that given in the proof of Proposition 10.1. The group action on 2 treated in Proposition 10.6 is suggested by the following considerations. Assuming that the observations Z,, . . . , 2, in RP are uncorrelated random vectors and C(Zi) = C(Z,) for i = 1,. . . , n , it follows that
&Z= ep' and covz where p = &Z, and Cov Z, we have
=
=
I, @ Z
2 . When
2 is transformed by g = (I?,
c, C),
&gZ = e(Cp + c)' and covgz
=
I, @ (CZC').
Thus the induced action of g on (p, 2 ) is exactly the group action considered in Proposition 10.1. The special structure of 62 and Cov 2 is reflected by the fact that, for g = (I?, 0, I,), we have & g =~62 and ~ o v g = z Cov 2.
10.3. SOME DISTRIBUTION THEORY The distribution theory associated with the sample canonical correlation coefficients is, to say the least, rather complicated. Most of the results in this section are derived under the assumption of normality and the assumption that the population canonical correlations are zero. However, the distribution of the sample multiple correlation coefficient is given in the general case of a nonzero population multiple correlation coefficient. Our first result is a generalization of Example 7.12. Let Z,, . . . , Z, be a random sample of vectors in RP and partition Zi as
428 CANONICAL CORRELATION COEFFICIENTS Assume that Z, has a density on RP given by
where f has been normalized so that
Thus when the density of Z , is p(.Ip, Z ) , then
Assuming that n
>p
+ 1, the sample covariance matrix
is positive definite with probability one. Here S , , is q x q, S,, is r x r, and S , , is q x r. Partitioning Z as S is partitioned, we have
Thus the squared sample coefficients, rf >, - . r:, are the t largest eigenvalues of s,'s,,s,;'s,, and the squared population coefficients, p: > . 2 p;, are the t largest eigenvalues of Z,'Z12Z;1Z2,. In the present generality, an invariance argument is given to show that the joint distribution of (r, ,. . . , r,) depends on (p, Z) only through ( p , , . . . , p,). Consider the group G whose elements are g = (C, c) where c E RP and
The action of G on RP is (C,c)z and group composition is
=
Cz
+c
The group action on the sample is
With the induced group action on (p, Z) given by
where g = (C, c ) , it is clear that the family of distributions of (Z,, . . . , 2,) that are indexed by elements of
is a G-invariant family of probability measures. Proposition 10.7. The joint distribution of (r,, . . . , r,) depends on (p, 2 ) only through (p,,. . . , p,).
Proof: From Proposition 10.6, we know that (r,,. . . , r,) is a G-invariant function of (Z,, . . . , Z,). Thus the distribution of (r,, . . . , r,) will depend on the parameter 8 = (p, 2 ) only through a maximal invariant in the parameter space. However, Proposition 10.1 shows that (p,,. . . , p,) is a maximal invariant under the action of G on O. Before discussing the distribution of canonical correlation coefficients, even for t = 1, it is instructive to consider the bivariate correlation coefficient. Consider pairs of random variables (Xi, y ) , i = 1,. . . , n, and let X E Rn and Y E Rn have coordinates Xi and y , i = 1,. . . , n. With e E Rn being the vector of ones, Pe = eel/n and Qe = I - Pe, the sample correlation coefficient is defined by
The next result describes the distribution of r when (K., a random sample from a bivariate normal distribution. Proposition 10.8. Suppose ( 4 , y )' random vectors with
E
x),i
=
1 , . . . , n, is
R ~ i, = 1,. . . , n, are independent
430 where p
CANONICAL CORRELATION COEFFICIENTS E
R 2 and
is positive definite. Consider random variables (U,, U2,U,) with: (i) (U,, U2)independent of U,. (ii) C(U,) = (iii) C ( U 2 ) =
xip2. Xi-1.
(iv) C(UIIU2) = N( where p
=
al2/(aI
w2,
{&
1).
is the correlation coefficient. Then we have
Proof: The assumption of independence and normality implies that the matrix ( X Y ) E C2, has a distribution given by c ( X Y ) = N ( e p f , In 8 2 ) . It follows from Proposition 10.7 that we may assume, without loss of generality, that Z has the form
When Z has this form, the conditional distribution of X given Y is e ( X I Y ) = N ( ( P , - ~ ~ 2+)P eY , (1 - p2)In) SO
C(Q,XIY)
=
N(PQ,Y, (1 - p2)ee).
Now, let v , , . . . , vn be an orthonormal basis for R n with v ,
=
e/
h
and
PROPOSITION
10.8
Expressing Q e X in this basis leads to
since Qee = 0. Setting
it is easily seen that, conditional on Y, we have that dent with
t2,. . . , t, are indepen-
and
Since
the identity
holds. This leads to
Setting U , = t2, U2 = J J Q , Y ) \and ~ , U, proposition.
=
z;(?
yields the assertion of the
The result of thls proposition has a couple of interesting consequences. When p = 0, then the statistic
432
CANONICAL CORRELATION COEFFICIENTS
has a Students t distribution with n - 2 degrees of freedom. In the general case, the distribution of W can be described by saying that: conditional on U,, W has a noncentral t distribution with n - 2 degrees of freedom and noncentrality parameter
Xt-,.
Let pm(.16) denote the density function of a nonwhere C(U,) = central t distribution with m degrees of freedom and noncentrality parameter 8. The results in the Appendix show that pm(.16) has a monotone likelihood ratio. It is clear that the density of W is
where f is the density of U, and m = n - 2. From this representation and the results in the Appendix, it is not difficult to show that h(.lp) has a monotone likelihood ratio. The details of this are left to the reader. In the case that the two random vectors X and Y in R n are independent, the conditions under whch W has a t,-, distribution can be considerably weakened. Proposition 10.9. Suppose X and Y in R n are independent and both IIQeXI1 and IIQeYII are positive with probability one. Also assume that, for some number p, E R , the distribution of X - pIe is orthogonally invariant. Under these assumptions, the distribution of
where
is a tn-, distribution.
Proof. The two random vectors QeX and QeY take values in the ( n - 1)dimensional subspace M
=
{xlx E R n , x'e
=
0).
PROPOSITION
10.9
Fix Y so the vector
has length one. Since the distribution of X - p,e is 0, invariant, it follows that the distribution of QeX is invariant under the group G = {I'lI'
E
On, I'e
=
e),
which acts on M. Therefore, the distribution of QeX/llQeXII is G-invariant on the set
But G is compact and acts transitively on 5% so there is a unique G-invariant distribution for QeX/llQeXII in %. From thls uniqueness it follows that
where C(Z) = N(0, I,) on Rn. Therefore, we have
and for each y, the claimed result follows from the argument given to prove Proposition 10.8. We now turn to the canonical correlation coefficients in the special case that t = 1. Consider random vectors and YJ with X, E R' and Y, E Rr, i = 1,. . . , n. Let X E Rn have coordinates XI,. . . , Xn and let Y E Cr, have rows Y;, . . . , Y,'. Assume that QeY has rank r so
is the orthogonal projection onto the subspace spanned by the columns of QeY. Since t = 1, the canonical correlation coefficient is the square root of the largest, and only nonzero, eigenvalue of
434
CANONICAL CORRELATION COEFFICIENTS
which is
For the case at hand, r, is commonly called the multiple correlation coefficient. The distribution of r: is described next under the assumption of normality. Proposition 10.10. Assume that the distribution of (XY) by
E
Cr+ ,, ,is given
and partition Z as
where a,, > 0, Z,, is 1 x r, and Z2, is r x r. Consider random variables U,, U2, and U, whose joint distribution is specified by: (i) (ii) (iii) (iv)
(U,, U2) and U, are independent. C(U,) = xi-,- ,. C(U2) = C(U,(U2)= X: (A), where A = p2(1 - p 2 ) - ' ~ , .
Xi- ,.
Here p = ( Z , 2 Z ~ ' Z 2 , / a , , ) ' / 2is the population multiple correlation coefficient. Then we have
Proof. Combining the results of Proposition 10.1 and Proposition 5.7, without loss of generality, Z can be assumed to have the form
where E ,
E
Rr and
E; =
(1,0,. . . , 0). When Z has this form, the conditional
PROPOSITION
10.10
distribution of X given Y is
where & X
=
ple and &Y = ep;.
Since Q,e
=
0, we have
The subspace spanned by the columns of QeY is contained in the range of Q, and this implies that QeP = PQ, = P so
Since
it follows that
Given Y, the conditional covariance of Q,X is (1 - P ~ ) Q and, , therefore, the identity PQ,(Q, - P ) = 0 implies that PQ,X and (Q, - P)Q,X are conditionally independent. It is clear that
so we have
since Q, - P is an orthogonal projection of rank n - r - 1. Again, conditioning on Y,
since PQ, 3.8 that
=
P and Q,YE, is in the range of P. It follows from Proposition
436
CANONICAL CORRELATION COEFFICIENTS
where the noncentrality parameter A is given by
That U2 = E;YQ,YE,has a Xi- distribution is clear. Defining U, and U3 by
uI = (1 - P * ) - ' I I P Q , X I I ~ and
u3= (1 - P ~ ) - ' I I ( Q-,P > Q , X I I ~ , the identity
holds. That U3 is independent of (U,, U2) follows by conditioning on Y. Since
where
the conditional distribution of Ul given Y is the same as the conditional distribution of U, gven U2. This completes the proof. When p
=
0, Proposition 10.10 shows that
e(-)
=
el$-)
2
n-r-
= c,n-r-19 I
which is the unnormalized F-distribution on (0,oo). More generally,
PROPOSITION
10.10
where
is random. T h s means that, conditioning on A
=
6,
Let !(.la) denote the density function of an F(r, n - r - 1; 6) distribution, and let h ( . ) be the density of a distribution. Then the density of r;/(l - r;) is
XE-l
k(w1p) =/mf(wlp2(l - p2)-'u)h(u) du. 0
From this representation, it can be shown, using the results in the Appendix, that k(w1p) has a monotone likelihood ratio. The final exact distributional result of this section concerns the function of the sample canonical correlations given by
when the random sample (Xi, Y,)', i = 1,. . . , n, is from a normal distribution and the population coefficients are all zero. This statistic arises in testing for independence, which is discussed in detail in the next section. To be precise, it is assumed that the random sample
satisfies
As usual, 4 E Rq, Y,
E
Rr, and the sample covariance matrix n
s = C ( z , - Z)(z,- Z)' 1
438
CANONICAL CORRELATION COEFFICIENTS
is partitioned as
where S , , is q X q and S2, is r x r. Under the assumptions made thls far, S has a Wishart distribution-namely,
Partitioning 2 , we have
and the population canonical correlation coefficients, say p , >, . . . >, p,, are all zero iff Z 1 2= 0. Proposition 10.11. Assume n - 1 >, p and let r, >, - . . 2 r, be the sample canonical correlations. When Z , , = 0, then
where the distribution U(n - r - 1 , r , q ) is described in Proposition 8.14. Proof. Since r:, . . . , r; are the t largest eigenvalues of
and the remaining q - t eigenvalues of A ( S ) are zero, it follows that
Since W is a function of the sample canonical correlations and Z,, Proposition 10.1 implies that we can take
=
0,
without loss of generality to find the distribution of W. Using properties of
PROPOSITION
10.11
determinants, we have
Proposition 8.7 implies that
and S , , . , and
s,,s;'s,,
are independent. Therefore,
and by definition, it follows that
C ( W ) = U(n
- r - 1,
r,q).
Since
the proof of Proposition 10.11 shows that Q(W) = U(n - q - 1, q, r ) so U(n - q - 1,q, r ) = U(n - r - 1, r , q ) as long as n - 1 a q r. Using the ideas in the proof of Proposition 8.15, the distribution of W can be derived when Z,, has rank one-that is, when one population canonical correlation is positive and the rest are zero. The details of this are left to the reader. We close this section with a discussion that provides some qualitative information about the distribution of r, >, . . . > r, when the data matrices X E Cq, and Y E CI, are independent. As usual, let Px and P, denote the orthogonal projections onto the column spaces of Q,X and Q,Y. Then the sample canonical correlations are the t largest eigenvalues of PyP,--say
+
It is assumed that Q,X has rank q and Q,Y has rank r. Since the distribution of (p(PyPx) is of interest, it is reasonable to investigate the
440
CANONICAL CORRELATION COEFFICIENTS
distributional properties of the two random projections Px and Py. Since X and Y are assumed to be independent, it suffices to focus our attention on P,. First note that Px is an orthogonal projection onto a q-dimensional subspace contained in M
=
(xlx
E
Rn,x'e
=
0).
Therefore, Px is an element of qq,n(e)=
{PI
P is an n x n rank q orthogonal projection, Pe = o
I
Furthermore, the space qq,,(e) is a compact subset of R " and ~ is acted on by the compact group On(e) = { r l r E On,r e
=
e),
with the group action given by P + I'Pr'. Since On(e) acts transitively on i.y,,?(e), there is a unique On(e)-invariantprobability distribution on qq,.(e). Thls is called the uniform distribution on qq,,,(e). Proposition 10.12. If C(X) = C(rX) for r distribution on qq,.(e).
E
On(e),then Px has a uniform
Proof: It is readily verified that
Therefore, if C(rX) = C(X), then
which implies that the distribution C(Px) on qq,.(e) is On(e)-invariant.The uniqueness of the uniform distribution on qq,.(e) yields the result. When C(X) = N(ep;, In8 Z,,), then C(X) = C(rX) for r E On(e), so Proposition 10.12 applies to this case. For any two n X n positive semidefinite matrices B, and B,, define the function (p(B,B,) to be the vector of the t largest eigenvalues of BIB2.In particular, (p(P,P,) is the vector of sample canonical correlations.
PROPOSITION
10.13.
441
Proposition 10.13. Assume X and Y are independent, C ( r X ) = C ( X ) for r E 8,(e), Q e X has rank q, and QeY has rank r. Then
where Po is any fixed rank r projection in qr,,(e). ProoJ: First note that
since the eigenvalues of P Y r P x r 1 are the same as the eigenvalues of r f P Y r P x . From Proposition 10.12, we have
Conditioning on Y , the independence of X and Y implies that
for all r E O,(e). The group Bn(e) acts transitively on qr,,(e), so for Y fixed, there exists a r E 8,(e) such that r ' P y r = Po. Therefore, the equation
holds for each Y since X and Y are independent. Averaging C((p(PyPx)IY) over Y yields t ( ( p ( P y P x ) ) ,which must then equal e((p(POPx)).This completes the proof. The preceeding result shows that C((p(PyPx))does not depend on the distribution of Y as long as X and Y are independent and C ( X ) = C ( r X ) for r E 8,(e). In this case, the distribution of (p(PyPx) can be derived under the assumption that C ( X ) = N(0, I, 8 I,) and C ( Y ) = N(0, I, €3 I,). Suppose that q G r so t = q. Then C((p(P,Px)) is the distribution of r, 2 . . 2 r, where A i = ri2, i = 1,. . . , q, are the eigenvalues of S,'S,,S~'S,, and
is the sample covariance matrix. To find the distribution of r,,. . . , r,, it
442
CANONICAL CORRELATION COEFFICIENTS
would obviously suffice to find the distribution of yi = 1 - Xi, i
whlch are the eigenvalues of
=
1,. . . , q,
where
It was shown in the proof of Proposition 10.11 that T, and T2 are independent and
and C(T2) = w(I,, q, r ) . Since the matrix
+
has the same eigenvalues as (TI T,)-'T,, it suffices to find the distribution of the eigenvalues of B. It is not too difficult to show (see the Problems at end of this chapter) that the density of B is
with respect to Lebesgue measure dB restricted to the set
Here, a(.,.) is the Wishart constant defined in Example 5.1. Now, the ordered eigenvalues of B are a maximal invariant under the action of the group 0, on % given by B + TBT', r E 0,. Let A be the vector of ordered eigenvaluesof B soA E R4, 1 2 A, 2 . . . 2 A, 2 0. Sincep(TBr') = p(B), T E Oq, it follows from Proposition 7.15 that the density of A is q(A) = p ( D , ) where DA is a q x q diagonal matrix with diagonal entries A,, . . . , A,. Of course, q(-) is the density of A with respect to the measure v(dA) induced by the maximal invariant mapping. More precisely, let
443
TESTING FOR INDEPENDENCE
and consider the mapping g, on ?€ to % defined by q(B) = A where A is the vector of eigenvalues of B. For any Bore1 set C c %, v(C)is defined by
v(C)=/
dB. 9-'(c)
Since q(A) has been calculated, the only step left to determine the distribution of A is to find the measure v. However, it is rather nontrivial to find v and the details are not given here. We have included the above argument to show that the only step in obtaining C(A) that we have not solved is the calculation of v. This completes our discussion of distributional problems associated with canonical correlations. The measure v above is just the restriction to % of the measure v, discussed in Example 6.1. For one derivation of v,, see Muirhead (1982, p. 104). 10.4. TESTING FOR INDEPENDENCE In t h s section, we consider the problem of testing for independence based on a random sample from a normal distribution. Again, let Z,, . . . , 2, be independent random vectors in RP and partition Zi as
It is assumed that C(Z,)
=
N(p, Z),
SO
for i = 1,. . . , n. The problem is to test the null hypothesis H, : Z,, = 0 against the alternative H, : Z,, * 0. As usual, let Z have rows Z,', i = 1,. . . , n so C(Z) = N(epf, I, 8 2 ) . Assuming n >, p + 1, the set % 5 CP,. where
has rank p is a set of probability one and % is taken as the sample space for Z. The group G considered in Proposition 10.6 acts on % and a maximal invariant is the vector of canonical correlation coefficients r,, . . . , r, where t = min(q, r).
444
CANONICAL CORRELATION COEFFICIENTS
Proposition 10.14. The problem of testing H, : Z,, = 0 versus HI : Z,, * 0 is invariant under G. Every G-invariant test is a function of the sample canonical correlation coefficients r,,. . . , r,. When t = 1, the test that rejects for large values of r, is a uniformly most powerful invariant test.
Proof. That the testing problem is G-invariant is easily checked. From Proposition 10.6, the function mapping Z into r,, . . . , r, is a maximal invariant so every invariant test is a function of r,, . . . , r,. When t = 1, the test that rejects for large values of r, is equivalent to the test that rejects for large values of U = r:/(l - r:). It was argued in the last section (see Proposition 10.10) that the density of U, say k(ulp), has a monotone likelihood ratio where p is the only nonzero population canonical correlation coefficient. Since the null hypothesis is that p = 0 and since every invariant test is a function of U, it follows that the test that rejects for large values of U is a uniformly most powerful invariant test. When t = 1, the distribution of U is specified in Proposition 10.10, and this can be used to construct a test of level a for H,. For example, if q = 1, then C(U) = F , , , - , - , and a constant c(a) can be found from standard tables of the normalized 9-distribution such that, under H,, P{U > c(a)) =
a.
In the case that t > 1, there is no obvious function of r,, . . . , r, that provides an optimum test of H, versus HI. Intuitively, if some of the ri's are "too big," there is reason to suspect that H, is not true. The likelihood ratio test provides one possible criterion for testing Z,, = 0. Proposition 10.15. statistic
The likelihood ratio test of H, versus H, rejects if the
is too small. Under H,, C(W) = U(n - r - 1, r, q), which is the distribution described in Proposition 8.14. Proof. The density function of Z is
Under both Ho and HI, the maximum likelihood estimate of p is fi = Under H,, the maximum likelihood estimate of Z is 2 = (l/n)S. Partition-
PROPOSITION
10.15
ing S as Z is partitioned, we have
where S l l is q
X
q, S12is q
X
r, and S2, is r X r. Under H,,, Z has the form
When Z has this form,
ell
e2,
From this it is clear that, under H,, = ( l / n ) S , , and = (l/n)S22. Substituting these estimates into the densities under Ho and HI leads to a likelihood ratio of
Rejecting H, for small values of A ( Z ) is equivalent to rejecting for small values of
The identity IS1
=
ISllI IS22 - S 2 1 S ; 1 ~ 1shows 21 that
where r:,. . . , r,2 are the t largest eigenvalues of S,;'S2,S,'SI2. Thus the
446
CANONICAL CORRELATION COEFFICIENTS
likelihood ratio test is equivalent to the test that rejects for small values of W. That C ( W )= U(n - r - 1, r, q ) under Ho follows from Proposition 10.11. The distribution of W under HI is quite complicated to describe except in the case that .XI, has rank one. As mentioned in the last section, when Z,, has rank one, the methods used in Proposition 8.15 yields a description of the distribution of W. Rather than discuss possible alternatives to the likelihood test, in the next section we show that the testing problem above is a special case of the MANOVA testing problem considered in Chapter 9. Thus the alternatives to the likelihood ratio test for the MANOVA problem are also alternatives to the likelihood ratio test for independence. We now turn to a slight generalization of the problem of testing that Z,, = 0. Again suppose that Z E satisfies C(Z) = N(epf, I,, @ 2 ) where p E RP and Z are both unknown parameters and n >, p 1. Given an integer k 2, let p,, . . . , p, be positive integers such that Zfpi = p. Partition Z into blocks Zij of dimension pi x pj for i, j = 1,. . . , k. We now discuss the likelihood ratio test for testing Ho : Zij = 0 for all i , j with i *j. For example, when k = p and each p, = 1, then the null hypothesis is that Z is diagonal with unknown diagonal elements. By mimicking the proof of Proposition 10.15, it is not difficult to show that the likelihood ratio test for testing Ho versus the alternative that Z is completely unknown rejects for small values of
+
Here, S = ( Z - ~Z')'(Z - eZ') is partitioned into S,, : pi partitioned. Further, for i = 1,. . . , k, define S(ii,by
so S(ii,is ( p , + . . . write
+ p,)
x (p, +
..
+ p,).
X
p, as Z was
Noting that S(,,, = S, we can
PROPOSITION
Define
10.15
q ,i = 1,. . . , k - 1, by
Under the null hypothesis, it follows from Proposition 10.11 that k
k
n- 1-
C
j=i+l
P,,
C
j=i+l
P,, Pi
To derive the distribution of A under H,, we now show that W,, . . . , Wk-, are independent random variables under H,. From this it follows that, under H,,
so A is distributed as a product of independent beta random variables. The independence of W,, . . . , Wk-, for a general k follows easily by induction once independence has been verified for k = 3. For k = 3, we have
and, under H,,
where Z has the form
To show W, and W, are independent, Proposition 7.19 is applied. The sample space for S is S ; -the space of p x p positive definite matrices. Fix Z of the above form and let Po denote the probability measure of S so Po is the probability measure of a W(Z, p, n - 1) distribution on S l . Consider the group G whose elements are (A, B) where A E Gl,, and B E Gl(p2+,3,
448
CANONICAL CORRELATION COEFFICIENTS
and the group composition is
It is easy to show that the action
defines a left action of G on S;. If C ( S )= W ( 2 ,p, n - l), then C ( ( A ,B ) [ S l )= W ( ( A ,B ) [ " l , p , n - 1) where
This last equality follows from the special form of 2 . The first thing to notice is that
is invariant under the action of G on 5:. Also, because of the special form of 2 , the statistic
4 s ) = ( ~ 1 1 ~, ( 2 2 ) )
Sp:
X
$+p2+p3)
is a sufficient statistic for the family of distributions {gP,lg E G). This follows from the factorization criterion applied to the family {gP,lg E G), which is the Wishart family
However, G acts transitively on $5,: x
SL
for [ S I ,S21 E '&2+P3) satisfies
X
S&2+p3).
%4
$+,2+,3,
in the obvious way:
Further, the sufficient statistic r ( S ) E 5,: B ) [ S l )= ( A ,
B)[~(s)I
X
so r ( . ) is an equivariant function. It now follows from Proposition 7.19 that the invariant statistic W , ( S ) is independent of the sufficient statistic r ( S ) . But
Thus W , and is a function of S(,,, and so is a function of r ( S ) = [ S , S(22)]. W2are independent for each value of Z in the null hypothesis. Summarizing, we have the following result. Proposition 10.16. Assume k = 3 and Z has the form specified under H,. Then, under the action of the group G on both S; and 5,: X S L 2 + P 3the ), invariant statistic
and the equivariant statistic
are independent. In particular, the statistic
being a function of r ( S ) , is independent of W,. The application and interpretation of the previous paragraph for general k should be fairly clear. The details are briefly outlined. Under the null hypothesis that Z i j = 0 for i , j = 1,. . . , k and i *j, we want to describe the distribution of
It was remarked earlier that each U: is distributed as a product of independent beta random variables. To see that W,,. . . , W,-, are independent, Proposition 10.16 shows that
450 CANONICAL CORRELATION COEFFICIENTS and S(,,, are independent. Since (W,, . . . , Wk- is a function of S(,,), Wl and (W2,.. . , Wk- ,) are independent, Next, apply Proposition 10.16 to S,,,, to conclude that
and S(,,, are independent. Since (W,, . . . , Wk- ,) is a function of S(,,,, W2 and (W3,. . . , Wk- ,) are independent. The conclusion that W,, . . . , W,-, are independent now follows easily. Thus the distribution of A under H, has been described. first To interpret the decomposition of A into the product consider the null hypothesis
nf-'%,
HA1):Z,,
=
0 f o r j = 2 ,..., k.
An application of Proposition 10.15 shows that the likel~hoodratio test of HA') versus the alternative that Z is unknown rejects for small values of
Assuming H$') to be true, consider testing HA2):Z2j = 0 f o r j = 3, ..., k versus H{'): Z2j * 0 for somej = 3, ..., k A minor variation of the proof of Proposition 10.15 yields a likelihood ratio test of HA2)versus H{~)(given HA1))that rejects for small values of
Proceeding by induction, assume null hypotheses HA'), i be true and consider testing
=
1,. . . , m
versus Hl("):Zrnj*O f o r s o m e j = m + 1, ..., k .
- 1, to
MULTIVARIATE REGRESSION
451
Given the null hypotheses Hd'), i = 1,. . . , m - 1, the likelihood ratio test of Hdm)versus Htm)rejects for small values of
The overall likelihood ratio test is one possible way of combining the likelihood ratio tests of HArn)versus Hfm),given that HAi), i = 1,. . . , m - 1, is true. 10.5.
MULTIVARIATE REGRESSION
The purpose of this section is to show that testing for independence can be viewed as a special case of the general MANOVA testing problem treated in Chapter 9. In fact, the results below extend those of the previous section by allowing a more general mean structure for the observations. In the notation of the previous section, consider a data matrix Z : n x p that is partitioned as Z = (XY) where Xis n x q and Y is n X r s o p = q + r. It is assumed that c ( z ) = N(TB, I, B Z) where T is an n X k known matrix of rank k and B is a k X p matrix of unknown parameters. As usual, 2 is a p X p positive definite matrix. This is precisely the linear model discussed in Section 9.1 and clearly includes the model of previous sections of this chapter as a special case. To test that X and Y are independent, it is illuminating to first calculate the conditional distribution of Y given X. Partition the matrix B as B = (B, B,) where B, is k x q and B2 is k x r. In describing the conditional distribution of Y given X, say C(Y JX), the notation
is used. Following Example 3.1, we have
and the marginal distribution of X is
452
Let W be the n x ( q
of parameters
CANONICAL CORRELATION COEFFICIENTS
+ k ) matrix ( X T ) and let C be the ( q + k ) x r matrix
In this notation, we have
and
Assuming n > p + k , the matrix W has rank q + k with probability one so the conditional model for Y is of the MANOVA type. Further, testing Ho : I:,, = 0 versus HI : I:,, * 0 is equivalent to testing H0: C, = 0 versus H, : C, * 0. In other words, based on the model for Z,
the null hypothesis concerns the covariance matrix. But in terms of the conditional model, the null hypothesis concerns the matrix of regression parameters. With the above discussion and models in mind, we now want to discuss various approaches to testing Ho and H0.In terms of the model
and assuming HI, the maximum likelihood estimators of B and I: are
where
MULTIVARIATE REGRESSION SO
C ( S ) = W ( 2 ,p , n - k ) . Under H,, the maximum likelihood estimator of B is still B as above and since
it is readily verified that
where
Substituting these estimators into the density of Z under H, and HI demonstrates that the likelihood ratio test rejects for small values of
Under H,, the proof of Proposition 10.11 shows that the distribution of A(Z) is U(n - k - r, r, q) as described in Proposition 8.14. Of course, symmetry in r and q implies that U(n - k - r, r, q) = U(n - k - q, q, r). An alternative derivation of t h s likelihood ratio test can be given using the conditional distribution of Y given X and the margnal distribution of X. T h s follows from two facts: (i) the density of Z is proportional to the conditional density of Y given X multiplied by the marginal density of X, and (ii) the relabeling of the parameters is one-to-one-namely, the mapping from (B, 2 ) to (C, B,, Z , , , 2,, . , ) is a one-to-one onto mapping of Cp, X S ; to Cr,(,+,) X C,, x S l X S,?.We now turn to the likelihood ratio test of H,versus HI based on the conditional model
,
C ( Y ( X )= N(WC, I,,
3€
Z,,.,)
where X is treated as fixed. With X fixed, testing H0 versus H , is a special case of the MANOVA testing problem and the results in Chapter 9 are
454
CANONICAL CORRELATION COEFFICIENTS
6
directly applicable. To express in the MANOVA testing problem form, let K be the q x (q k) matrix K = (I, 0), so the null hypothesis H, is
+
Recall that
e = (w'w)-'wY is the maximum likelihood estimator of C under H,. Let P, = W(W'W)-'W' denote the orthogonal projection onto the column space of W, let Q, = I,, - P,, and define V E S,? by
As shown in Section 9.1, based on the model C(YIX)
=
N(WC, I,, 8 Z,,.,),
the likelihood ratio test of I&,: KC = 0 versus H, : KC small values of
* 0 rejects H,
for
For each fixed X, Proposition 9.1 shows that under H,, the distribution of A,(Y) is U(n - q - k, q, r), which is the distribution (unconditional) of A(Z) under H,. In fact, much more is true. Proposition 10.17. In the notation above:
(i) V=S2,.,. (ii) ( K ~ ) ~ ( K ( w w ) - ~ K ' )l -( ~ k =) S ~ ~ S ~ ~ ~ S ~ , . (iii) A,(Y) = A(Z). Further, under H,, the conditional (given X) and unconditional distribution of A,(Y) and A(Z) are the same. Proof: To establish (i), first write S as
where P, = T ( T f T ) - I T ' is the orthogonal projection onto the column space of T . Setting Q , = I - P, and writing Z = ( X Y ) , we have
This yields the identity
where Po = Q T X ( X f Q T X ) - ' X ' Q , is the orthogonal projection onto the column space of Q,X. However, a bit of reflection reveals that Po = P , - P, SO
This establishes assertion (i). For (ii), we have S2,S,'S,,
=
Y'PoY
and
where U = W ( W ' W ) - ' K ' and P, is the orthogonal projection onto the column space of U. Thus it must be shown that P, = Po or, equivalently, that the column space of U is the same as the column space of Q,X. Since W = ( X T ) , the relationship
proves that the q columns of U are orthogonal to the k columns of T . Thus the columns of U span a q-dimensional subspace contained in the column space of W and orthogonal to the column space of T. But there is only one
456
CANONICAL CORRELATION COEFFICIENTS
subspace with these properties. Since the column space of Q T X also has these properties, it follows that P, = Po so (ii) holds. Relationshp (iii) is a consequence of (i), (ii), and
The validity of the final assertion concerning the distribution of A,(Y) and A(Z) was established earlier. The results of Proposition 10.17 establish the connection between testing for independence and the MANOVA testing problem. Further, under H,, the conditional distribution of A,(Y) is U(n - q - k, q, r ) for each value of X, so the marginal distribution of Xis irrelevant. In other words, as long as the conditional model for Y given X is valid, we can test & using the likelihood ratio test and under Ho, the distribution of the test statistic does not depend on the value of X. Of course, this implies that the conditional (given X) distribution of A(Z) is the same as the unconditional distribution of A(Z) under Ho. However, under HI, the conditional and unconditional distributions of A(Z) are not the same.
PROBLEMS
1. Given positive integers t , q, and r with t 6 q, r, consider random vectors U E R', V E R4, and W E Rr where Cov(U) = I, and U, V, and W are uncorrelated. For A : q X t and B : r x t, construct X = A U + Vand Y = B U + W. (i) With A,, = Cov(V) and A,, = Cov(W), show that
+ A,, Cov(Y) = BB' + A,,
Cov(X)
=
AA'
and the cross covariance between X and Y is AB'. Conclude that the number of nonzero population canonical correlations between X and Y is at most t. (ii) Conversely, given 2 E R4 and E Rr with t nonzero population canonical correlations, construct U, V, W, A , and B as above so that X = A U V and Y = BU + W have the same joint covariance as 2 and y.
+
457
PROBLEMS
2. Consider X E R4 and Y E Rr and assume that Cov(X) = Z,, and Cov(Y) = Z,, exist. Let Z,, be the cross covariance of X with Y. Recall that T,, denotes the group of n X n permutation matrices. (i) If gZ,,h = Z,, for all g E Tq and h E qr, show that Z,, = Se,e; for some S E R1 where el (e,) is the vector of ones in R4 (Rr). (ii) Under the assumptions in (i), show that there is at most one nonzero canonical correlation and it is 161(e;Z ; 'e ,)'I2 ( e ; Z ~ ' e , ) ' / ~ What . is a set of canonical coordinates?
3. Consider X E RP with Cov(X) = Z > 0 (for simplicity, assume GX = 0). This problem has to do with the approximation of X by a lower dimensional random vector-say Y = BX where B is a t x p matrix of rank t. (i) In the notation of Proposition 10.4, suppose A, : p X p is used to define the inner product [., .] on Rn and prediction error is ~ where 1 11 . 111 is defined ~ by [., .] and C measured by GIIX - ~ is p x t. Show that the minimum prediction error (B fixed) is
e
and the minimum is achieved for C = = ZB(BZBt)- I . (ii) Let A = Z 1 / 2 ~ o Z ' / 2and write A in spectral form as A = Zf'A,aiaj where A, >, . . . >, A, > 0 and a,,. . . , a, is an orthonormal basis for RP. Show that S(B) = tr A ( I - Q(B)) where Q(B) = Z1/2~'(BZB')-1BZ1/2 is a rank t orthogonal projection. Using this, show that S(B) is minimized by choosing Q = Q = Ziaitj, and the minimum is Z/+ ,Ai. What is a corresponding B and X = ~ B that X gives the minimum? Show that x = ~ B = X 2'/2&2- '/2x. (iii) In the special case that A,
=
I,, show that
where a,, . . . , a, are the eigenvectors of Z and Za, = Aia, with A, 2 . . 2 A,. (The random variables a jX are often called the principal components of X, i = 1,. . . , p. It is easily verified that cov(ajX, aJX) = SijAi.) 4.
+
In RP, consider a translated subspace M a, where a, E RP-such a set in RP is called a flat and the dimension of the flat is the dimension of M.
458
CANONICAL CORRELATION COEFFICIENTS
(i) Given any flat M
unique b,
E
MI
+
.
+ a,,
show that M
+ a,
=
M
+ b, for some
Consider a flat M a,, and define the orthogonal projection onto M + a, by x -+ P(x - a,) + a, where P is the orthogonal projection onto M. Given n points x,,. . . , x, in RP, consider the problem of finding the "closest" k-dimensional flat M + a, to the n points. As a measure of distance of the n points from M + a,, we use A(M, a,) = Z;llx, - ij112 where 11 . 1) is the usual Euclidean norm and i , = P(x, a,) + a, is the projection of xi onto M + a,. The problem is to find M and a, to minimize A(M, a,) over all k-dimensional subspaces M and all a,. (ii) First, regard a, as fixed, and set S(b) = Z;(x, - b)(x, - b)' for any b E RP. With Q = I - P, show that A(M, a,) = trS(a,)Q = trS(X)Q .(a, - X)'Q(a, - X) where X = n-'Z;x,. (iii) Write S(X) = Zf'A,viv~in spectral form where A , 2 . . . 2 A, >, 0 and v,, . . . , up is an orthonormal basis for RP. Using (ii), show that A(M, a,) >, Z,P+,Aiwith equality for z , = 2 and for M = span{v,,. . ., v,).
+
5. Consider a sample covariance matrix
with S,, > 0 for i = 1,2. With t = min{dim S,,, i = 1,2), show that the t sample canonical correlations are the t largest solutions (in A) to the equation (s,,s,;'s~, - A 2 ~ , , 1= 0, A E [0, oo).
6. (The Eckhart-Young Theorem, 1936.) Given a matrix A : n x p (say n 2 p), let k Q p. The problem is to find a matrix B : n x p of rank no greater than k that is "closest" to A in the usual trace inner product on Cp, ,. Let 4'3, be all the n x p matrices of rank no larger than k, so the problem is to find inf I(A - B112 ~ € 5 3 ~
where 1 1 ~ 1 =1 ~tr MM' for M E h,,. (i) Show that every B E 3, can be written J,C where J, is n x k, 4'4 = I,, and C is k x p. Conversely, $4E a , , for each such J, and C. (ii) Using the results of Example 4.4, show that, for A and J, fixed, inf llA - J,C112 = IIA - J , J , ' A ~ ~ ~ . C ~ C . pk,
459
PROBLEMS
++',
(iii) With Q = I Q is a rank n - k orthogonal projection. Show that, for each B E
a,,
((A- B ( (>,~ i n f ( l A Q ~=( ~inf tr QAA' = 8:+,A; Q Q
A, are the singular values of A. Here Q ranges where A, > . . over all rank n - k orthogonal projections. (iv) Write A = ZfAiuivi as the singular value decomposition for A . Show that B = ZfAiuiui achieves the infimum of part (iii). 7. In the case of a random sample from a bivariate normal distribution N ( p , Z), use Proposition 10.8 and Karlin's Lemma in the Appendix to show that the density of W = d x r ( l - r2)-'I2 ( r is the sample correlation coefficient) has a monotone likelihood ratio in 8 = p(l p2)Conclude that the density of r has a monotone likelihood ratio in p.
,
,
8. Let f,, denote the density function on (0, co)of an unnormalized F,, random variable. Under the assumptions of Proposition 10.10, show that the distribution of W = r:(l - r:)-' has a density given by
where
Note that h(.lp) is the probability mass function of a negative binomial distribution, so f(w1p) is a mixture of F distributions. Show that f(.Ip) has a monotone likelihood ratio.
9. (A generalization of Proposition 10.12.) Consider the space Rn and an integer k with 1 < k < n. Fix an orthogonal projection P of rank k, and for s < n - k, let ?Js be the set of all n x n orthogonal projections R of rank s that satisfy RP = 0. Also, consider the group O(P) = { r l r E On, r P = P r ) . (i) Show that the group O(P) acts transitively on ?Js under the action R + rRr'.
460
CANONICAL CORRELATION COEFFICIENTS
(ii) Argue that there is a unique O(P) invariant probability distribution on qS. (iii) Let A have a uniform distribution on 8 ( P ) and fix R , E $Ps. Show that AR,Af has the unique 8 ( P ) invariant distribution on 7 7 .
10. Suppose Z E C,, ,has an 8,-left invariant distribution and has rank p with probability one. Let Q be a rank n - k orthogonal projection withp + k < n and form W = QZ. (i) Show that W has rank p with probability one. (ii) Show that R = W(WfW)- 'W has the uniform distribution on $Pp (in the notation of Problem 9 above with P = I - Q and s = p). 11. After the proof of Proposition 10.13, it was argued that, when q g r, to find the distribution of r, > . . . > r,, it suffices to find the distribution of the eigenvalues of the matrix B = (T, T , ) - ' / 2 ~ l ( ~+, TI)-'/, where TI and T2 are independent with C(T,) = W(I,, q, n - r - 1) and C(T2) = W(I,, q, r). It is assumed that q g n - r - 1. Let f(.lm) denote the density function of the W(I,, q, m) distribution (m >, q) with respect to Lebesgue measure dS on S,. Thus f ( S ( m ) = o(m, q)l~l(m-q-1)/2 exp[ - tr S ]I ( S ) where
+
I(')
=
(
ifS>O otherwise.
(i) With W, = TI and W2 = TI + T,, show that the joint density of W, and W2 with respect to dW, dW2 is f(W,(n - r - l)f(W2 W,lr). (ii) On the set where Wl > 0 and W2 > 0, define B = W, W; 'I2and W2 = V. Using Proposition 5.1 1, show that W; Show that the Jacobian of this transformation is Idet V1(Q+1)/2. the joint density of B and V on the set where B > 0 and V > 0 is given by
(iii) Now, integrate out V to show that the density of B on the set 0 < B < I,is
461
PROBLEMS
Suppose the random orthogonal transformation r has a uniform distribution on 8,. Let A be the upper left-hand k x p block of r and assume p G k. Under the additional assumption that p G n - k, the following argument shows that A has a density with respect to Lebesgue measure on ,C, ,. (i) Let : n x p consist of the first p columns of I? so A : k x p has rows that are the first k rows of Show that 4 has a uniform distribution on %, Conclude that has the same distribution as Z(Z'Z)-'/2 where Z : n x p is N(0, In8 I,).
+
..
+.
($1
+
(ii) Now partition Z as Z = where X is k x p and Y is (n - k ) x p. Show that Z'Z = X'X Y'Y and that A has the same distribution as X(XfX YfY)- 'I2. (iii) Using (ii) and Problem 11, show that B = A'A has the density
+
+
with respect to Lebesgue measure on the set 0 < B < I,. (iv) Consider a random matrix L : k x p with a density with respect to Lebesgue measure given by
where for B
E
S,,
and
Show that B = L'L has the density p(B) gven in part (iii) (use Proposition 7.6). (v) Now, to conclude that A has h as its density, first prove the following proposition: Suppose % is acted on measurably by a compact group G and 7 : % + 94 is a maximal invariant. If P, and P2 are both G-invariant measures on EX such that P,(T-'(C)) = P2(7- '(C)) for all measurable C c 94, then P, = P,.
462
CANONICAL CORRELATION COEFFICIENTS
ep,
(vi) Now, apply the proposition above with X = !, G = a,, T(X) = x'x, PI the distribution of A, and P2 the distnbution of L as given in (iv). This shows that A has density h.
13. Consider a random matrix Z : n x p with a density given by f (ZJB, Z) = ~ Z I - " h(tr(Z /~ - TB)ZP'(Z - TB)') where T : n x k of rank k is known, B : k x p is a matrix of unknown parameters, and Z : p x p is positive definite and unknown. Assume that n > p + k, that sup ~ ~ l " / ~ h ( t r<( ~+ )m) ,
css;
and that h is a nonincreasing function defined on [0, m). Partition Z into X: n X q and Y: n X r, q + r = p , so Z = (XY). Also, partition Z into Zij, i, j = 1,2, where Z,, is q X q, Z2, is r x r, and Z,, is q X r. (i) Show that the maximum likelihood estimator of B is B = (T'T)- and ~ ( Z I B2 ), = 121-"/~h(tr,SZ-') where s = Z'QZ with Q = I - P and P = T(TfT)-IT'. (ii) Derive the likelihood ratio test of H, : Z,, = 0 versus H I : Z,, z 0. Show that the test rejects for small values of
(iii) For U : n X q and V : n X r, establish the identity tr(UV)2-'(UV)' = tr(V - U2,'ZI2)Z;'.,(~ - uZ,'Z,,) + tr UZG'U'. Use this identity to derive the conditional distribution of Y given X in the above model. Using the notation of Section 10.5, show that the conditional density of Y given X is
=
I Z 2 2 . , ~ - " / 2 h ( tr ( ~WC)Z;21.1(~ - WC)'
+ 7)+(7)
where 7 = tr(X - T B , ) Z i l ( x - TB,) and (+(7))-' = Je,,"h(tr uu' + 7) du. (iv) The null hypothesis is now that C, = 0. Show that, for each fixed 7, the likelihood ratio test (with C and Z,,., as parameters) based on the conditional density rejects for large values of A(Z). Verify (i), (ii), and (iii) of Proposition 10.17.
463
NOTES AND REFERENCES
(v) Now, assume that sup sup ~ ~ l " / ~ hC(+t rq ) + ( q ) 0'7
c€S,+
=
kz <
+ co.
Show that the likelihood ratio test for C, = 0 (with C, Z,,.,, Bl, and E l , as parameters) rejects for large values of A(Z). (vi) Show that, under H,, the sample canonical correlations based on S,,, S,,, S,, (here S = Z'QZ) have the same distribution as when Z is N(TB, I,,63 Z). Conclude that under H,, A(Z) has the same distribution as when Z is N(TB, I, 63 Z). NOTES AND REFERENCES 1.
Canonical correlation analysis was first proposed in Hotelling (1935, 1936). There are as many approaches to canonical correlation analysis as there are books covering the subject. For a sample of these, see Anderson (1958), Dempster (1969), Kshirsagar (1972), Rao (1973), Mardia, Kent, and Bibby (1979), Srivastava and Khatri (1979), and Muirhead (1982).
2. See Eaton and Kariya (1981) for some material related to Proposition 10.13.
Appendix
We begn t h s appendix with a statement and proof of a result due to Basu (1955). Consider a measurable space (%, 3) and a probability model {Pe(8 E 0)defined on (%, 9). Consider a statistic T defined on (%, 3)to (9, and let 3, = {T-'(B)IB E 3,). Thus 9, is a a-algebra and 9, c 9.Conditional expectation given 3, is denoted by &( .I%,).
a,),
Definition A.1. The statistic T is a sufficient statistic for the family {Pe(8 E 0)if, for each bounded 3 measurable function f , there exists a 3, measurable function j'such that Go(f 13,)= f a.e. Po for all 8 E 0. Note that the null set where the above equality does not hold is allowed to depend on both 8 and f . The usual intuitive description of sufficiency is that the conditional distribution of X E % (C(X)= Po for some 8 E 0) given T(X)= t does not depend on 8. Indeed, if P(.(t) is such a version of the conditional distribution of X given T(X)= t, then f defined by f( x ) = h (T(x))where
serves as a version of Ge( f 193,) for each 8 E 0. Now, consider a statistic U defined on (%, 3)to (%,
93,).
Definition A.2. The statistic U is called an ancillary statistic for the family
{Po18E 0) if the distribution of U on (%, 3,)does not depend on 8 E @-that is, if for all B E a,,
for all 8,q
E
0.
In many instances, ancillary statistics are functions of maximal invariant statistics in a situation where a group acts transitively on the family of probabilities in question-see Section 7.5 for a discussion. Finally, gven a statistic T on (%, 3 ) to (%,a,)and the parametric family {Pe18 E 0 ) , let {Q,l8 E O) be the induced family of distributions of T o n (9, %,)-that is,
Definition A.3. The family {Qele E O) is called boundedly complete if the only bounded solution to the equation
is the function h
=
0 a.e. Q, for all 8 E O.
At times, a statistic T is called boundedly complete-this means that the induced family of distributions of T is boundedly complete according to the above definition. If the family {Q,l8 E O) is an exponential family on a Euclidean space and if O contains a nonempty open set, then {Q,l8 E 0)is boundedly complete-see Lehmann (1959, page 132).
Theorem (Basu, 1955). If T is a boundedly complete sufficient statistic and if U is an ancillary statistic, then, for each 8, T(X) and U(X) are independent. Proof. It suffices to show that, for bounded measurable functions h and k on % and %, we have
Since U is ancillary, a = &, k(U( X)) does not depend on 8, so &,(k(U) a ) = 0 for all 8. Hence & , [ & , ( ( k ( ~ )- a)l3,)]
=
0 for all 8.
Since T is sufficient, there is a 3'3, measurable function, say j', such that &,((k(U) - a)l3,) = a.e. Po. But since j' is 3, measurable, we can write j ' ( x ) = J/ (T(x)) (see Lehmann, 1959, Lemma 1, page 37). Also, since k is
bounded, j can be taken to be bounded. Hence
and t+b is bounded. The bounded completeness of T implies that is 0 a.e. Q, where Qe(B) = Pe(TP1(B)),B E 9 , . Thus h(T)J/(T) = 0 a.e. Q,, so
=
&@h(~)(k(U - )a ) .
Thus (A. 1) holds. This Theorem can be used in many of the examples in the text where we have used Proposition 7.19. The second topic in this Appendix concerns monotone likelihood ratio and its implications. Let % and 9be subsets of the real line. Definition A.4. A nonnegative function k defined on % X positive of order 2 (TP-2) if, for x, < x, and y, < y,, we have
9 is totally
In the case that 9is a parameter space and k(. , y) is a density with respect to some fixed measure, it is customary to say that k has a monotone likelihood ratio when k is TP-2. This nomenclature arises from the observation that, when k is TP-2 and y, < y,, then the ratio k(., y,)/k(., y,) is nondecreasing in x-assuming that k(., y,) does not vanish. Some obvious examples of TP-2 functions are: exp[xy], xy for x > 0, y x for y > 0. If x = g(s) and y = h(t) where g and h are both increasing or decreasing, then k,(s, t) = k(g(s), h(t)) is TP-2 whenever k is TP-2. Further, if J/,(x) 0, t+b,(y) 2 0, and k is TP-2, then k,(x, y) = J/,(x)J/,(y)k(x, y) is also TP-2. The following result due to Karlin (1956) is of use in verifying that some of the more complicated densities that arise in statistics are TP-2. Here is the setting. Let %, 9 , and % be Borel subsets of R' and let p be a a-finite measure on the Borel subsets of 9.
Lemma (Karlin, 1956). Suppose g is TP-2 on % X
3 x %. If
is finite for all x
E
% and z
E
9 and
h is TP-2 on
%, then k is TP-2.
Proof. For x₁ < x₂ and z₁ < z₂, the difference
Δ = k(x₁, z₁)k(x₂, z₂) − k(x₁, z₂)k(x₂, z₁)
can be written
Δ = ∫∫ g(x₁, y₁)g(x₂, y₂)[h(y₁, z₁)h(y₂, z₂) − h(y₁, z₂)h(y₂, z₁)] μ(dy₁)μ(dy₂).
Now, write Δ as the double integral over the set {y₁ < y₂} plus the double integral over the set {y₁ > y₂}. In the integral over the set {y₁ > y₂}, interchange y₁ and y₂ and then combine with the integral over {y₁ < y₂}. This yields
Δ = ∫∫_{y₁<y₂} [g(x₁, y₁)g(x₂, y₂) − g(x₁, y₂)g(x₂, y₁)][h(y₁, z₁)h(y₂, z₂) − h(y₁, z₂)h(y₂, z₁)] μ(dy₁)μ(dy₂).
On the set {y₁ < y₂}, both of the bracketed expressions are non-negative as g and h are TP-2. Hence Δ ≥ 0, so k is TP-2. Here are some examples.
Example A.1. With 𝒳 = (0, ∞), let
f(x, m) = [2^{m/2} Γ(m/2)]⁻¹ x^{m/2 − 1} e^{−x/2}
be the density of a chi-squared distribution with m degrees of freedom. Since x^{m/2}, x ∈ 𝒳 and m > 0, is TP-2, f(x, m) is TP-2. Recall that the density of a noncentral chi-squared distribution with p degrees of freedom and noncentrality parameter λ ≥ 0 is given by
h(x, λ) = Σ_{j=0}^∞ f(x, p + 2j)(λ/2)^j exp[−λ/2]/j!.
Observe that f(x, p + 2j) is TP-2 in x and j and (λ/2)^j exp[−λ/2]/j! is TP-2 in j and λ. With 𝒴 = {0, 1, ...} and μ as counting measure, Karlin's Lemma implies that h(x, λ) is TP-2.
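Example A.1 can also be probed numerically. The following sketch is mine; it assumes that scipy's ncx2(df, nc) density is the h(x, λ) above with nc playing the role of λ, and it checks the 2 × 2 minors on a small grid.

```python
import numpy as np
from scipy.stats import ncx2

p = 4.0
xs = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
lams = np.array([0.1, 0.5, 1.0, 2.0, 5.0])

# h(x, lambda): noncentral chi-squared density with p degrees of freedom,
# noncentrality lambda, evaluated on the grid.
H = ncx2.pdf(xs[:, None], df=p, nc=lams[None, :])

ok = True
for i in range(len(xs) - 1):
    for j in range(len(lams) - 1):
        ok &= H[i, j] * H[i + 1, j + 1] >= H[i, j + 1] * H[i + 1, j] - 1e-15
print(ok)   # True: consistent with h(x, lambda) being TP-2
```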
Example A.2. Recall that, if χ²_p and χ²_m are independent random variables, then Y = χ²_p/χ²_m has a density given by
f(y | p, m) = c(p, m) y^{p/2 − 1}(1 + y)^{−(p+m)/2},  y > 0.
If the random variable χ²_p is noncentral chi-squared, say χ²_p(λ), rather than central chi-squared, then Y = χ²_p(λ)/χ²_m has a density
h(y, λ) = Σ_{j=0}^∞ f(y | p + 2j, m)(λ/2)^j exp[−λ/2]/j!.
Of course, Y has an unnormalized F(p, m; λ) distribution according to our usage in Section 8.4. Since f(y | p + 2j, m) is TP-2 in y and j, it follows as in Example A.1 that h is TP-2.

The next result yields the useful fact that the noncentral Student's t distribution is TP-2.

Proposition A.1. Suppose f ≥ 0 defined on (0, ∞) satisfies
(i) ∫₀^∞ e^{ux} f(x) dx < +∞ for u ∈ R¹;
(ii) f(x/η) is TP-2 in (x, η) on (0, ∞) × (0, ∞).
For θ ∈ R¹ and t ∈ R¹, define k by k(t, θ) = ∫₀^∞ e^{θtx} f(x) dx. Then k is TP-2.
Proof. First consider t ∈ R¹ and θ > 0. Set v = θx in the integral defining k to obtain
k(t, θ) = (1/θ) ∫₀^∞ e^{tv} f(v/θ) dv.
Now apply Karlin's Lemma to conclude that k is TP-2 on R¹ × (0, ∞). A similar argument shows that k is TP-2 on R¹ × (−∞, 0). Since k(t, 0) is a constant, it is now easy to show that k is TP-2 on R¹ × R¹.
Example A.3. Suppose X is N(μ, 1) and Y is χ²_n, with X and Y independent. The random variable T = X/√Y, which is, up to a factor of √n, a noncentral Student's t random variable, has a density that depends on μ, the noncentrality parameter. The density of T (derived by writing down the joint density of X and Y, changing variables to T and W = √Y, and integrating out W) can be written
h(t, μ) = ψ₁(t)ψ₂(μ) k(φ(t), μ),
where ψ₁ and ψ₂ are positive and φ(t) = t(1 + t²)^{−1/2} is an increasing function of t. Consider the function
k(v, μ) = ∫₀^∞ exp[vμx] exp[−x²] x^n dx.
With f(x) = exp[−x²] x^n, Proposition A.1 shows that k, and hence h, is TP-2.
We conclude this appendix with a brief description of the role of TP-2 in one-sided testing problems. Consider a TP-2 density p(x | θ) for x ∈ 𝒳 ⊆ R¹ and θ ∈ Θ ⊆ R¹. Suppose we want to test the null hypothesis H₀ : θ ∈ (−∞, θ₀] ∩ Θ versus H₁ : θ ∈ (θ₀, ∞) ∩ Θ. The following basic result is due to Karlin and Rubin (1956).

Proposition A.2. Given any test φ₀ of H₀ versus H₁, there exists a test φ of the form
φ(x) = 1 if x > x₀,  γ if x = x₀,  0 if x < x₀,
with 0 ≤ γ ≤ 1, such that E_θ φ ≤ E_θ φ₀ for θ ≤ θ₀ and E_θ φ ≥ E_θ φ₀ for θ > θ₀. For any such test φ, E_θ φ is nondecreasing in θ.
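As a sketch of how Proposition A.2 is typically used (my own example, not the book's): for the N(θ, 1) location family, which is TP-2 in (x, θ), the level-α test of H₀ : θ ≤ θ₀ rejects when x exceeds a fixed cutoff, and its power function E_θ φ is nondecreasing in θ.

```python
import numpy as np
from scipy.stats import norm

theta0, alpha = 0.0, 0.05
x0 = theta0 + norm.ppf(1 - alpha)          # cutoff: P_{theta0}(X > x0) = alpha

def power(theta):
    # E_theta(phi) for phi(x) = 1{x > x0}; gamma plays no role here
    # because the normal distribution is continuous.
    return 1 - norm.cdf(x0 - theta)

thetas = np.linspace(-2, 3, 11)
pw = power(thetas)
print(np.round(pw, 3))
print(np.all(np.diff(pw) >= 0))            # power is nondecreasing in theta
print(abs(power(theta0) - alpha) < 1e-12)  # size alpha at theta0
```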
Comments on Selected Problems
CHAPTER 1

4. This problem gives the direct sum version of partitioned matrices. For (ii), identify V₁ with vectors of the form {v₁, 0} ∈ V₁ ⊕ V₂ and restrict T to these. This restriction is a map from V₁ to V₁ ⊕ V₂, so T{v₁, 0} = {z₁(v₁), z₂(v₁)} where z₁(v₁) ∈ V₁ and z₂(v₁) ∈ V₂. Show that z₁ is a linear transformation on V₁ to V₁ and z₂ is a linear transformation on V₁ to V₂. This gives A₁₁ and A₂₁. A similar argument gives A₁₂ and A₂₂. Part (iii) is a routine computation.

5. If x_{k+1} = Σ₁^k c_i x_i, then w_{k+1} = Σ₁^k c_i w_i.

8. If u ∈ R^k has coordinates u₁, ..., u_k, then Au = Σ₁^k u_i x_i and all such vectors are just span{x₁, ..., x_k}. For (ii), r(A) = r(A′), so dim ℛ(A′A) = dim ℛ(AA′).

10. The algorithm of projecting x₂, ..., x_k onto (span{x₁})^⊥ is known as Björck's algorithm (Björck, 1967) and is an alternative method of doing Gram-Schmidt. Once you see that y₂, ..., y_k are perpendicular to y₁, this problem is not hard. (A numerical sketch of this projection step appears after these comments.)

11. The assumptions and linearity imply that [Ax, w] = [Bx, w] for all x ∈ V and w ∈ W. Thus [(A − B)x, w] = 0 for all w. Choose w = (A − B)x, so (A − B)x = 0.

12. Choose z such that [y₁, z] ≠ 0. Then [y₁, z]x₁ = [y₂, z]x₂, so set c = [y₂, z]/[y₁, z]. Thus cx₂ □ y₁ = x₂ □ y₂, so cy₁ □ x₂ = y₂ □ x₂. Hence c‖x₂‖²y₁ = ‖x₂‖²y₂, so y₁ = c⁻¹y₂.

13. This problem shows the topologies generated by inner products are all the same. We know [x, y] = (x, Ay) for some A > 0. Let c₁ be the minimum eigenvalue of A and let c₂ be the maximum eigenvalue of A.
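The following is a small numerical sketch of the projection step mentioned in Problem 10. It is my own illustration; the modified Gram-Schmidt ordering used here is one common way to organize Björck's idea, not necessarily the book's, and the helper name is hypothetical.

```python
import numpy as np

def modified_gram_schmidt(X):
    """Orthonormalize the columns of X by repeatedly projecting the remaining
    columns onto the orthogonal complement of the current column."""
    X = np.array(X, dtype=float)
    n, k = X.shape
    Q = np.zeros((n, k))
    for i in range(k):
        q = X[:, i] / np.linalg.norm(X[:, i])
        Q[:, i] = q
        # project the not-yet-processed columns onto (span{q})^perp
        X[:, i + 1:] -= np.outer(q, q @ X[:, i + 1:])
    return Q

rng = np.random.default_rng(2)
A = rng.normal(size=(6, 4))
Q = modified_gram_schmidt(A)
print(np.allclose(Q.T @ Q, np.eye(4)))                   # orthonormal columns
print(np.linalg.matrix_rank(np.hstack([A, Q])) == 4)     # same column space
```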
This is just the Cauchy-Schwarz Inequality. The classical two-way ANOVA table is a consequence of this problem. That A, B₁, B₂, and B₃ are orthogonal projections is a routine but useful calculation. Just keep the notation straight and verify that P² = P = P′, which characterizes orthogonal projections. To show that Γ(M^⊥) ⊆ M^⊥, verify that (u, Γv) = 0 for all u ∈ M when v ∈ M^⊥; use the fact that Γ′Γ = I and u = Γu₁ for some u₁ ∈ M (since Γ(M) ⊆ M and Γ is nonsingular). Use Cauchy-Schwarz and the fact that P_M x = x for x ∈ M. This is Cauchy-Schwarz for the non-negative definite bilinear form [C, D] = tr ACBD′. Use Proposition 1.36 and the assumption that A is real. The representation αP + β(I − P) is a spectral type representation; see Theorem 1.2a. If M = ℛ(P), let x₁, ..., x_r, x_{r+1}, ..., x_n be any orthonormal basis such that M = span{x₁, ..., x_r}. Then Ax_i = αx_i, i = 1, ..., r, and Ax_i = βx_i, i = r + 1, ..., n. The characteristic polynomial of A must be (α − λ)^r (β − λ)^{n−r}. Since λ₁ = sup_{‖x‖=1}(x, Ax), μ₁ = sup_{‖x‖=1}(x, Bx), and (x, Ax) ≥ (x, Bx), obviously λ₁ ≥ μ₁. Now, argue by contradiction: let j be the smallest index such that λ_j < μ_j. Consider eigenvectors x₁, ..., x_n and y₁, ..., y_n with Ax_i = λ_i x_i and By_i = μ_i y_i, i = 1, ..., n. Let M = span{x_j, x_{j+1}, ..., x_n} and let N = span{y₁, ..., y_j}. Since dim M = n − j + 1 and dim N = j, dim M ∩ N ≥ 1. Using the identities λ_j = sup_{x∈M, ‖x‖=1}(x, Ax) and μ_j = inf_{x∈N, ‖x‖=1}(x, Bx), for any x ∈ M ∩ N with ‖x‖ = 1 we have (x, Ax) ≤ λ_j < μ_j ≤ (x, Bx), which is a contradiction. Write S = Σ₁^n λ_i x_i □ x_i in spectral form where λ_i > 0, i = 1, ..., n. Then 0 = (S, T) = Σ₁^n λ_i (x_i, Tx_i), which implies (x_i, Tx_i) = 0 for i = 1, ..., n as T ≥ 0. This implies T = 0. Since tr A and (A, I) are both linear in A, it suffices to show equality for A's of the form A = x □ y. But (x □ y, I) = (x, y), and that tr x □ y = (x, y) is easily verified by choosing a coordinate system. Parts (i) and (ii) are easy but (iii) is not. It is false that A ≥ B implies A² ≥ B², and a 2 × 2 matrix counterexample is not hard to construct. It is true that A^{1/2} ≥ B^{1/2}. To see this, let C = B^{1/2}A^{−1/2}, so, by hypothesis, I ≥ C′C. Note that the eigenvalues of C are real and positive, being the same as those of B^{1/4}A^{−1/2}B^{1/4}, which is positive definite. If λ is any eigenvalue of C, there is a corresponding eigenvector, say x, such that ‖x‖ = 1 and Cx = λx. The relation I ≥ C′C implies λ² ≤ 1, so 0 < λ ≤ 1 as λ is positive. Thus all the eigenvalues of C are in (0, 1], so
the same is true of A^{−1/4}B^{1/2}A^{−1/4}. Hence A^{−1/4}B^{1/2}A^{−1/4} ≤ I, so B^{1/2} ≤ A^{1/2}.
26. Since P is an orthogonal projection, all its eigenvalues are zero or one and the multiplicity of one is the rank of P. But tr P is just the sum of the eigenvalues of P.

28. Since any A ∈ L(V, V) can be written as (A + A′)/2 + (A − A′)/2, it follows that M + N = L(V, V). If A ∈ M ∩ N, then A = A′ = −A, so A = 0. Thus L(V, V) is the direct sum of M and N, so dim M + dim N = n². A direct calculation shows that {x_i □ x_j + x_j □ x_i | i ≤ j} ∪ {x_i □ x_j − x_j □ x_i | i < j} is an orthogonal set of vectors, none of which is zero, and hence the set is linearly independent. Since the set has n² elements, it forms a basis for L(V, V). Because x_i □ x_j + x_j □ x_i ∈ M and x_i □ x_j − x_j □ x_i ∈ N, dim M ≥ n(n + 1)/2 and dim N ≥ n(n − 1)/2. Assertions (i), (ii), and (iii) now follow. For (iv), just verify that the map A → (A + A′)/2 is idempotent and self-adjoint.

29. Part (i) is a consequence of sup_{‖v‖=1}‖Av‖ = sup_{‖v‖=1}[Av, Av]^{1/2} = sup_{‖v‖=1}(v, A′Av)^{1/2} and the spectral theorem. The triangle inequality follows from ‖A + B‖ = sup_{‖v‖=1}‖Av + Bv‖ ≤ sup_{‖v‖=1}(‖Av‖ + ‖Bv‖) ≤ sup_{‖v‖=1}‖Av‖ + sup_{‖v‖=1}‖Bv‖.

30. This problem is easy, but it is worth some careful thought; it provides more evidence that A ⊗ B has been defined properly and that ⟨·, ·⟩ is an appropriate inner product on L(W, V). Assertion (i) is easy since (A ⊗ B)(x_i □ w_j) = (Ax_i) □ (Bw_j) = (λ_i x_i) □ (μ_j w_j) = λ_i μ_j x_i □ w_j. Obviously, x_i □ w_j is an eigenvector of the eigenvalue λ_i μ_j. Part (ii) follows since the two linear transformations agree on the basis {x_i □ w_j | i = 1, ..., m, j = 1, ..., n} for L(W, V). For (iii), if the eigenvalues of A and B are positive, so are the eigenvalues of A ⊗ B. Since the trace of a self-adjoint linear transformation is the sum of the eigenvalues (this is true even without self-adjointness, but the proof requires a bit more than we have established here), we have tr A ⊗ B = Σ_{i,j} λ_i μ_j = (Σ_i λ_i)(Σ_j μ_j) = (tr A)(tr B). Since the determinant is the product of the eigenvalues, det(A ⊗ B) = Π_{i,j}(λ_i μ_j) = (Π_i λ_i)^n (Π_j μ_j)^m = (det A)^n (det B)^m.

31. Since ψ′ψ = I_r, ψ is a linear isometry and its columns form an orthonormal set. Since ℛ(ψ) ⊆ M and the two subspaces have the same dimension, (i) follows. (ii) is immediate.

32. If C is n × k and D is k × n, the set of nonzero eigenvalues of CD is the same as the set of nonzero eigenvalues of DC. (A quick numerical check of this and of Problem 30 appears after these comments.)

33. Apply Problem 32.

34. Orthogonal transformations preserve angles.
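A quick numerical check of the Problem 30 and Problem 32 identities (my own illustration; it uses the matrix Kronecker product as a stand-in for the transformation A ⊗ B):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, k = 3, 4, 2

# Problem 30: tr(A kron B) = tr(A) tr(B), det(A kron B) = det(A)^n det(B)^m.
A = rng.normal(size=(m, m)); A = A @ A.T + np.eye(m)
B = rng.normal(size=(n, n)); B = B @ B.T + np.eye(n)
K = np.kron(A, B)
print(np.isclose(np.trace(K), np.trace(A) * np.trace(B)))
print(np.isclose(np.linalg.det(K), np.linalg.det(A) ** n * np.linalg.det(B) ** m))

# Problem 32: the nonzero eigenvalues of CD and DC coincide.
C = rng.normal(size=(n, k))
D = rng.normal(size=(k, n))
ev_cd = np.sort_complex(np.linalg.eigvals(C @ D))
ev_dc = np.sort_complex(np.linalg.eigvals(D @ C))
nonzero_cd = ev_cd[np.abs(ev_cd) > 1e-10]
print(np.allclose(np.sort_complex(nonzero_cd), ev_dc))
```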
35. This problem requires that you have a facility in dealing with conditional expectation. If you do, the problem requires a bit of calculation but not much more. If you don't, proceed to Chapter 2.
CHAPTER 2

1. Write x = Σ_i c_i x_i so (x, X) = Σ_i c_i (x_i, X). Thus E|(x, X)| ≤ Σ_i |c_i| E|(x_i, X)| and E|(x_i, X)| is finite by assumption. To show that Cov(X) exists, it suffices to verify that var(x, X) exists for each x ∈ V. But var(x, X) = var{Σ_i c_i (x_i, X)} = ΣΣ cov{c_i (x_i, X), c_j (x_j, X)}. Then var{c_i (x_i, X)} = E[c_i (x_i, X)]² − [E c_i (x_i, X)]², which exists by assumption. The Cauchy-Schwarz Inequality shows that [cov{c_i (x_i, X), c_j (x_j, X)}]² ≤ var{c_i (x_i, X)} var{c_j (x_j, X)}. But var{c_i (x_i, X)} exists by the above argument.

2. All inner products on a finite dimensional vector space are related via positive definite quadratic forms. An easy calculation yields the result of this problem.

3. Let (·, ·)_i be an inner product on V_i, i = 1, 2. Since f_i(x) = (x_i, x)_i for x_i ∈ V_i is linear on V_i, i = 1, 2, if X₁ and X₂ are uncorrelated (the choice of inner product is irrelevant by Problem 2), then (2.2) holds. Conversely, if (2.2) holds, then Cov{(x₁, X₁)₁, (x₂, X₂)₂} = 0 for x_i ∈ V_i, i = 1, 2, since (x₁, ·)₁ and (x₂, ·)₂ are linear functions.

4. Let s = n − r and consider Γ ∈ 𝒪_r and a Borel set B₁ of R^r. Then
P~{I'xE B,)
=
P~{I'x E B,, x
E
R~}
The third equality holds since the matrix
is in On. Thus x has an 8;invariant distribution. That x given x has an Or-invariant distribution is easy to prove when X has a density with respect to Lebesgue measure on R n (the density has a version that
CHAPTER 2
475
+
satisfies f(x) = f ( + x ) for x E Rn, E 0,). The general case requires some fiddling with conditional expections-this is left to the interested reader. Let A, = Cov(X,), i = 1,. . . , n. It suffices to show that var(x, ZX,) = Z(X, Aix). But (x, XI), i = 1,. . . , n, are uncorrelated, so var[Z(x, &)I = 2 var(x, 4) = Z(x, A,x). &U = Zpie, = p. Let U, have coordinates U,, . . . , Uk. Then Cov(U) = GUU' - pp' and UU' is a p x p matrix with elements q.U,.For i *j, U,U, = 0 and for i = j , q.U,= q. SinceGq = p i , &UU'= Dp. When 0 < p i < 1, Dp has rank k and the rank of Cov(U) is the rank of I, - Dp-1/2pp'Dp-1/2.Let u = Dp-'/2p, so u E Rk has length one. Thus I, - uu' is a rank k - 1 orthogonal projection. The null space of Cov U is span{e) where e is the vector of ones in Rk. The rest is easy. The random variable X takes on n! values-namely the n! permutations of x-each with probability l/n!. A direct calculation gives &X = Xe where X = n-'Cyx,. The distribution of X is permutation ~ a,, = 1 invariant, whch implies that Cov X has the form a 2 where and a,, = p for i *j where - l/(n - 1) < p G 1. Since var(e'X) = 0, we see that p = - l/(n - 1). Thus a 2 = var(Xl) = n-'[C;(x, - Q 2 ] where XI is the first coordinate of X. = Setting D = -I, &X = -&X so &X = 0. For i *j, cov{X,, COV{-4, = - cov{Xi, X,) SO X, and X, are uncorrelated. The first equality is obtained by choosing D with d,, = - 1 and d,, = 1 in the relation e ( X) = C(DX). T h s is a direct calculation. It suffices to verify the equality for A = xO y as both sides of the equality are linear in A. ForA = xO y, (A, 2 ) = (x, Zy) and (p, Ap) = (p, x)(p, y), so the equality is obvious. To say Cov(X) = In8 Z is to say that cov{(tr AX'), (tr BX')) = tr AZB'. To show rows 1 and 2 are uncorrelated, pick A = e1v' and B = e2u' where u, v E R*. Let Xi and Xi be the first two rows of X. Then tr AX' = v'X,, tr BX' = u'X2, and tr AZB = 0. The desired equality is established by first showing that it is valid for A = xy', x, y E Rn, and using linearity. When A = xy', a useful equality is X'AX = C,C,x, y, X, Xi where the rows of X are Xi,. . . , X;. The equation r A r ' = A for r E Op implies that A = cIp for some c. Cov((r 8 I ) X) = Cov( X) implies Cov( X) = I 8 Z for some 2. Cov((I 8 J , ) X ) = Cov(X) then implies $21)' = Z, which necessitates Z = cI for some c >, 0. Part (ii) is immediate since r 8 J, is an orthogonal transformation on (C(V,W), ( ,
Xi)
3)
a)).
476
COMMENTS ON SELECTED PROBLEMS
14. This problem is a nasty calculation intended to inspire an appreciation for the equation Cov(X) = In8 Z. 15. Since C(X) = C ( - X), & X = 0. Also, C(X) = C (TX) implies Cov(X) = cI for some c > 0. But JIX112= 1 implies c = l/n. Best affine predictor of Xl given x is 0. I would predict XI by saying that Xl is with probability and XI is with probability 5 . 16. This is just the definition of q . 17. For (i), just calculate. For (ii), Cov(S) = 21, 8 I, by Proposition 2.23. The coordinate inner product on R~ is not the inner product ( , .) on S2.
-4
-4
CHAPTER 3 2. Since var(Xl)= var(Yl)= 1 and cov{Xl,Yl)= p, I p ( < 1. Form Z = (XY)-an n X 2 matrix. Then Cov(Z) = In8 A where
3. 4.
5. 6.
7.
8.
When Ipl < 1, A is positive definite, so I, 8 A is positive definite. Conditioning on Y, C(XIY) = N(pY,(l - p 2 ) ~ n )so, C(Q(Y)XIY) = N(0, (1 - p2)Q(Y)) as Q(Y)Y = 0 and Q(Y) is an orthogonal projection. Now, apply Proposition 3.8 for Y fixed to get C(W) = (1 p2)x;-1. Just do the calculations. Since p ( x ) is zero in the second and fourth quadrants, X cannot be normal. Just find the marginal density of Xl to show that Xl is normal. Write U in the form X'AX where A is symmetric. Then apply Propositions 3.8 and 3.11. Note that Cov(XO X) = 2 1 8 I by Proposition 2.23. Since (X, AX) = ( X q X, A), and similarly for (X, BX), 0 = cov{(X, AX), (X, BX)) = cov{(Xn X, A), ( X u X, B)) = (A, 2 ( 1 8 I ) B ) = 2 tr AB. = 0, which shows A1/2B1/2= 0 Thus 0 = tr A ' / ~ B A ' / ~SO A1l2~A1I2 and hence AB = 0. Since &[exp(itW,)]= exp{itpj - ujltl], &[exp(itZa,W,.)]= exp[itZajp, - (Zla,lu,)ltl], so C(Za,W,.) = C(Za,p,, Zlajlu,). Part (ii) is immediate from (i). For (i), use the independence of R and Zo to compute as follows: P{U g u) = P(Zo g u/R) = j,"P{Zo Q u / t ) ~ ( d t )= j,"@(u/t) G(dt) where 0 is the distribution function of Zo. Now, differentiate. Part (ii) is clear.
477
CHAPTER 4
9. Let 3 , be the sub a-algebra induced by T,(X) = X2 and let 9, be the sub a-algebra induced by T2(X) = X; X,. Since 3, c % for any bounded function f(X), we have &(f(X)I%,) = &(&(f(X)1%,)1%,). But for f (X) = h (Xi XI), the conditional expectation given 3 , can be computed via the conditional distribution of X;X, given X,, which is
,,
Hence &(h(X;X,)9,) is 3, measurable, so &(h(X;X,)(%,) = &(h ( X; Xl)I% ). This implies that the conditional distribution (3.3) serves as a version of the conditional distribution of X; XI given X;X2. Show that T-IT, : Rn + Rn is an orthogonal transformation so l(C) = 1((T-'T,)(C)). Setting B = Tl(C), we have v,(B) = v,(B) for Bore1 B. The measures v, and v, are equal up to a constant so all that needs to be calculated is v,(C)/v,(C) for some set C with 0 < v,(C) < + CQ. Do the calculation for C = {vl[v, v] G 1). The inner product ( . , .) on Sp is not the coordinate inner product. The "Lebesgue measure" on (S,, ( - , .)) given by our construction is not I(dS) = ni,dsjj, but is v,(dS) = ( f i ) ~ ( ~ - l ) l ( d ~ ) . Any matrix M of the form
,
10. 11. 12. 13.
can be written as M = a[(p - 1)b + 1]A + a(l - b)(I - A). This is a spectral decomposition for M so M has eigenvalues a((p - l)b + 1) and a ( l - b) (of multiplicityp - 1). Setting a = a[(p - l)b + 11 and j3 = a ( l - b) solves (i). Clearly, M-' = a-'A p - ' ( I - A) whenever a and p are not zero. To do part (ii), use the parameterization ( p , a, p ) given above (a = a 2 and b = p). Then use the factorization criterion on the likelihood function.
+
CHAPTER 4
1. Part (i) is clear since Zp = Ctpizi for /3 E Rk. For (ii), use the singular value decomposition to write Z = C;hixiu: where r is the rank of Z, {x,,. . . , x,) is an orthonormal set in Rn,{u,,. . . , u,) is an orthonormal , = span{x,,. . . , x,), and % ( Z ) = (span{u,,. . . , u , ) ) l . set in R ~ M
478
2.
COMMENTS ON SELECTED PROBLEMS
Thus (ZtZ)- = C ; X ; 2 ~ i ~ and j a direct calculation shows that Z(ZIZ)-Z1= Cix,x[, which is the orthogonal projection onto M. Since C(Xi) = C(p + ei) where GE, = 0 and var(e,) = 1, it follows that C(X) = C(Pe + E) where & E = 0 and COV(E)= I,. A direct aprof% !. this linear model. For (iii), plication of least-squares yields = since the same /3 is added to each coordinate of E,the vector of ordered X's has the same distribution as the pe v where v is the vector of ordered E'S.Thus C(U) = C(Pe + v) so &U = j3e a, and Cov(U) = Cov(v) = C,. Hence C(U - a,) = C(Pe + (v - a,)). Based on this model, the Gauss-Markov estimator for /3 is = (etC;'e)-'efZ;'(U - a,). Since 1= (l/n)et(U - a,) (show etao = 0 using the symmetry off ), it follows from the Gauss-Markov Theorem that var(p) < var( P ). That M - w = M n w L is clear since w G M. The condition (P, P,)~ = PM - Po follows from observing that P,,,P, = P, P, = P,. Thus P, - P, is an orthogonal projection onto its range. That %(P, - P a ) = M - w is easily verified by writing x E V as x = x, + x, + x, where x, E w, x2 E M - 0 , and x3 E M L . Then (P, - P,)(x, x, x,) = x, x, - x, = x,. Writing PM = PM - P, + P, and noting that (P, - P,)P, = 0 yields the final identity. That %(A) = M, is clear. To show %(B,) = M, - M,, first consider the transformation C defined by (Cy),, = yi., i = 1,. . . , I, j = 1,. . . , J. Then C2 = C = C', and clearly, %(C) G MI. But if y E M,, then Cy = y so C is the orthogonal projection onto MI. From Problem 3 (with M = MI and w = M,), we see that C - A , is the orthogonal projection onto M, - M,. But ((C - A,)y),, = y.., which is just (B,y),,. Thus B, = C - A, so %(B,) = M, - M,. A similar argument shows %(B2) = M, - M,. For (ii), use the fact that A, + Bl + B2 B, is the identity and the four orthogonal projections are perpendicular to each other. For (iii), first observe that M = MI + M2 and M, n M, = M,. If p has the assumed representation, let v be the vector with vij = (Y Pi and let [ be the vector with ti, = y,. Then v E MI and 5 E M, so p = v E MI M2.. Conversely, suppose p E Mo @ (MI - M,) @ (M2 - M,)-say p = 6 + v + [. Since 6 E M,,S,, = &,foraui,j,sosetcu = & . . ~ i n c e vE M, - M,,V~,- vik = o for all j, k for each fixed i and i . =0. Take j = 1 and set Pi = vil. Then v 'J. . = Pi for j = 1,. . . , J and, since t.=0, ZPi = 0. Similarly, setting y, = ti, = y, for all i, j and since [..= 0, Zy, = 0. Thus pi, = (Y Pi yj where Zp, = Zy, = 0. With n = dimV, the density of Y is (up to constants) f(ylp, a 2 ) = ~ - " e x ~ [ - ( 1 / 2 a~ )p112]. ~ ~ ~Using the results and notation Problem
+
3.
+
4.
+
+
+
x.-
+
+
+,,,
+ +
5.
++
+
479
CHAPTER 4
3, write V = o $ ( M - o ) $ M L so ( M - o ) $ M L = o ' . Under H,, p E o SO fi, = P, y is the maximum likelihood estimator of p and
where Q, = I - P,. Maximizing (4.4) over a 2 yields 6: = n - 'llQ, ~ 1 2.1 A similar analysis under H, shows that the maximum likelihood estimator of p is f i l = P M y and 6: = n - ' l ( ~ , ~ 1 1is~ the maximum likelihood estimator of u2. Thus the likelihood ratio test rejects for small values of the ratio
But Q , = Q M + pM-, and QMPM-, = 0, so l l Q W ~ 1 l 2 = llQMy112+ \ ~ P , _ , Y J ~ ~But . rejecting for small values of A ( y ) is equivalent to rejecting for large values of ( A ( y ) ) - 2 / n - 1 = I(PM - w ~ ~ ~ 2 / ~ l Q M ~ ~ 1 2 . Under H,, p E o so C(PM-,Y) = N(0, a2~,-,) and C ( Q M Y )= N(0, Since Q M P M - , = 0, Q M Y and P,_ ,Y are independent and C(IJPM-,Y(I)= a 2 X : where r = dim M - dim a . Also, C(llQMY1I2)=a 2 X i - kwhere k = dim M. 6. We use the notation of Problems 4 and 5. In the parameterization described in (iii) of Problem 4, P I = P2 = - . . = PI iff p E M2. Thus w = M2 so M - o = M I - M,. Since M L is the range of B3 (Problem 1.15), l l ~ , ~ 1 =l l~ l ~ ~ ~and 1 lit ~is ,clear that llB3y112= CC(y;, - & J., + Y . . ) ~ Also, . since M - o = M I - M,, PM-, = PM, - PMo and llpM-,y112 = IlPM,y112 - llPM,y1I2 = CiCjY: - C,C,Y.* = JC,(Y,. -
02eM).
Y..)~.
7. Since % ( X ' ) = % ( X ' X ) and X'y is in the range of X', there exists a b E R~ such that X'Xb = X'y. Now, suppose that b is any solution. First note that PMX = X since each column of X is in M. Since X'Xb = X'y, we have X'[Xb - P M y ]= X'Xb - X'PMy = X'Xb ( P M X ) ' y = X'Xb - X'y = 0. Thus the vector v = Xb - P M y is perpendicular to each column of X ( X ' u = 0 ) so v E M I . But Xb E M , and obviously, P M y E M , so u E M. Hence v = 0, so Xb = PMy. 8. Since I E y, Gauss-Markov and least-squares agree iff
(4.5)
(ape+
pee)M c M,
for all a,B > 0 .
But (4.5) is equivalent to the two conditions P,M G M and Q,M C M.
480
COMMENTS ON SELECTED PROBLEMS
But if e E M, then M = span{e) 8 MI where MI G (span{e))' . Thus P,M = span{e) c M and Q,M = M, E M, so Gauss-Markov equals least-squares. If e E ML , then M c {spane)' , so P,M = (0) and Q,M = M, so again Gauss-Markov equals least-squares. For (ii), if e E M I and e P M, then one of the two conditions PeM c M or Q,M c M is violated, so least-squares and Gauss-Markov cannot agree for all a and P. For (ii), since M G (span{e))l and M * (span{e))' , we can write Rn = span{e) @ M @ MI where MI = (span{e))' - M and MI * (0). Let PI be the orthogonal projection onto MI. Then the exponent in the density for Y is (ignoring the factor - 3) ( y - p)' (a-'P, P-'Q,) ( y - p) = (P,y + P l y + PM(Y - ~))'(a-'p, + P-'Q,)(P,y + Ply + PM(Y - p)) = ~-'Y'P,Y +P-'y'P1y + P P 1 ( y- p)'PM(y - p) where we have used the fact that Qe = PI + PM and PIPM= 0. Since det(aP, + PQ,) = aPn-I, the usual arguments yields fi = PMy, 6 = y'Pey, and b = ( n l)-ly'Ply as maximum likelihood estimators. When M = span{e), then the maximum likelihood estimators for (a, p) do not exist-other than the solution fi = P,y and 6 = 0 (which is outside the parameter space). The whole point is that when e E M, you must have replications to estimate a when the covariance structure is aP, + PQ,. 9. Define the inner product ( - , .) on Rn by (x, y) = X'C;'~. In the inner product space (Rn, .)), GY = XP and Cov(Y) = a21. The transformation P defined by the matrix X(X'C; 'x)- ' ~ ' 2' ;satisfies P 2 = P and is self-adjoint in (Rn,(., .)). Thus P is an orthogonal projection onto its range, which is easily shown to be the column space of X. The Gauss-Markov Theorem implies that fi = PY as claimed. Since p = Xb, X'p = X'XP so /3 = ( x ' x ) - ~ x ' ~ . Hence b = (x'x)-'x'fi, which is just the expression given. 10. For (i), each r E Q(V) is nonsingular so T(M) c M is equivalent to r ( M ) = M-hence I'-l(M) = M and r-I = r'. Parts (ii) and (iii) are easy. To verify (iv), t,(crY x,) = PM(crY x,) = cPMI'Y x, = cI'PMY + x, = cI't,(Y) x,. The identity PMI' = rPM for I' E QM(V)was used to obtain the third equality. For (v), first set r = I and x, = - PMy to obtain
+
(
0
,
+
+
+
+
Then to calculate t, we need only know t for vectors u E M I as QMy E M I . Fix u E ML and let z = t(u) so z E M by assumption. Then there exists a I' E OM(V) such that I'u = u and r z = -z. For ) rt(u) = I'z = - Z SO z = 0. Hence this I',we have z = t(u) = ~ ( I ' u = t(u) = 0 for all u E M I and the result follows.
481
CHAPTER 5
11. Part (i) follows by showing directly that the regression subspace M is invariant under each I,,€3 A . For (ii), an element of M has the form IJ. = {ZIP,,Z2P2) E '2,n for some p, E Rk and p, E R k . To obtain an example where M is not invariant under all I, €3 2 , take k = 1, Z, = E , , and 2, = e2 so p is
That the set of such p's is not invariant under all In63 2 is easily verified. When 2, = Z,, then p = Z,B where B is k x 2 with ith column p,, i = 1,2. Thus Example 4.4 applies. For (iii), first observe that Z, and Z, have the same column space (when they are of full rank) iff Z, = Z,C where C is k x k and nonsingular. Now, apply part (ii) with p, replaced by Cp,, so M is the set of p's of the form p = Z, B where B E C,, k .
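A numerical sketch related to the Chapter 4 comments on Problems 7 and 9 (mine, with randomly generated X and Σ as stand-ins): the matrix P = X(X′Σ⁻¹X)⁻¹X′Σ⁻¹ is idempotent and self-adjoint in the inner product (x, y) = x′Σ⁻¹y, so it is the orthogonal projection onto the column space of X in that geometry, and PY = Xβ̂ with β̂ the Gauss-Markov (generalized least squares) estimator.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 12, 3
X = rng.normal(size=(n, k))
L = rng.normal(size=(n, n))
Sigma = L @ L.T + n * np.eye(n)          # a positive definite covariance
Sigma_inv = np.linalg.inv(Sigma)
Y = rng.normal(size=n)

P = X @ np.linalg.solve(X.T @ Sigma_inv @ X, X.T @ Sigma_inv)

print(np.allclose(P @ P, P))                             # idempotent
print(np.allclose(Sigma_inv @ P, (Sigma_inv @ P).T))     # self-adjoint for (x, y) = x' Sigma^{-1} y
beta_gls = np.linalg.solve(X.T @ Sigma_inv @ X, X.T @ Sigma_inv @ Y)
print(np.allclose(P @ Y, X @ beta_gls))                  # P Y equals X times the GLS estimator
```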
CHAPTER 5 1. Let a,, . . . , a, be the columns of A and apply Gram-Schmidt to these vectors in the order a,, a,- . . . , a,. Now argue as in Proposition 5.2. 2. Follows easily from the uniqueness of F(S). 3. Just modify the proof of Proposition 5.4. 4. Apply Proposition 5.7 5. That F is one-to-one and onto follows from Proposition 5.2. Given X G : is the pair (4, U) where A = $U. For A E C;,., F-'(A) E (ii), F ( r + , UT') = T+UT' = (T 8 T)(+U) = (r @ T)(F(+, U)). If F-'(A) = (+, U), then A = +U and $ and U are unique. Then ( r 63 T)A = TAT' = r+UT' and rJ/E and UT' E GA. Uniqueness implies that F - ' ( ~ + U T ' )= (I?+, UT'). 6. When Dg(xo) exists, it is the unique n x n matrix that satisfies
,,
%,n
%,,
(5.3)
lim
x-X,,
Ilg(x> - g(x0) - Dg(xo)(x - x0)ll IIx - xoll
=
0.
But by assumption, (5.3) is satisfied by A (for Dg(xo)). By definition Jg(xo) = det(Dg(xo)).
482 COMMENTS ON SELECTED PROBLEMS 7. With tii denoting the ith diagonal element of T, the set {TJt,,> 0) is open since the function T + tii is continuous on V to R1. But G: = n f{Tlti, > O}, which is open. That g has the given representation is just a matter of doing a little algebra. To establish the fact that limx-O(I(R(~)II/II~II) = 0, we are free to use any norm we want on V and S ; (all norms defined by inner products define the same topology). = Using the trace inner product on V and SJ, 11 R(x)1I2 = trxx'xx' and 1 1 ~ 1 = 1 ~ trxx', x E V. But for S > 0, t r s 2 6 ( t r ~ ) ,SO 11 R(x)ll/llxll G tr xx', which converges to zero as x -+ 0. For (iii), write S = L(x), string the S coordinates out as a column vector in the order sI1,s,,, s,,, s3],s3,, s ~ ~. .,, . and string the x coordinates out in the same order. Then the matrix of L is lower triangular and its deterrninant is easily computed by induction. Part (iv) is immediate from Problem 6. 8. Just write out the equations SS- ' = I in terms of the blocks and solve. 9. That p 2 = P is easily checked. Also, some algebra and Problem 8 show that (Pu, v ) = (u, Po) so P is self-adjoint in the inner product (., .). Thus P is an orthogonal projection on (RP, (., .)). Obviously,
Since
A similar calculation yields IJ(I- p)x112 = z'Z;'z. For (iii), the exponent in the density of X is - t ( x , x ) = - t l l ~ x l l ~tll(I - p)x112. Marginally, Z is N(0, Z,,), so the exponent in Z's density is - $lJ(Ip)x112. Thus dividing shows that the exponent in the conditional density of Y given Z is - $ ( J P x ~which ~ ~ , corresponds to a normal distribution with mean Z12Z,'Z and covariance (2")-I = Z 11 ZI,Z,'Z,l. 10. On Gg , for j < i, tij ranges from - cc to + cc and each integral contributes 6 - t h e r e are p ( p - 1)/2 of these. For j = i, t,, ranges
483
CHAPTER 6
from 0 to co and the change of variable uii = t;/2 shows that the integral over t,i is ( f i ) r - i - ' I ' ( ( r - i + 1)/2). Hence the integral is equal to
which is just 2-Pc(r, p). CHAPTER 6
1. Each g E G I ( V ) maps a linearly independent set into a linearly independent set. Thus g ( M ) G M implies g ( M ) = M as g ( M ) and M have the same dimension. That G ( M ) is a group is clear. For (ii),
( iff g,, y
=
0 for y
) () E
Rq iff g,,
=
EM
for y E Rq
0. But
is nonsingular iff both g,, and g,, are nonsingular. That G , and G, are subgroups of G ( M ) is obvious. To show G, is normal, consider h E G, and g E G ( M ) . Then
has its 2,2 element I,, so is in G,. For (iv), that G I n G, Each g E G can be written as
("""")
=
0
=
g,,
)
( I q 0 (g;' 0 g22
=
{ I ) is clear.
"I) f
which has the form g = hk with h E G l and k E G,. The representation is unique as G I n G, = { I ) . Also, g,g, = h , k , h 2 k , = h , h , h ; ' k ,h 2 k 2 = h , k , by the uniqueness of the representation. 2. G ( M ) does not act transitively on V - (0) since the vector ($), y * 0 remains in M under the action of each g E G. To show G ( M ) is
484
COMMENTS ON SELECTED PROBLEMS
transitive on V n Mc, consider
3.
4.
5.
6.
with z, * 0 and z2 * 0. It is easy to argue there is a g E G ( M ) such that gx, = x2 (since z , * 0 and z2 * 0). Each n X n matrix r E 0, can be regarded as an n2-dimensional vector. A sequence {r,) converges to a point x E Rm iff each element of r, converges to the corresponding element of x. It is clear that the limit of a sequence of orthogonal matrices is another orthogonal matrix. To show 0, is a topological group, it must be shown that the map ( r , I)) + rI)' is continuous from 8, X 0, to On-this is routine. To show ~ ( r =) 1 for all r , first observe that H = { x ( r ) ( r E On) is a subgroup of the multiplicative group (0, oo) and H is compact as it is the continuous image of a compact set. Suppose r E H and r * 1. Then r J E H for j = 1,2, . . . as H is a group, but { r j ) has no convergent subsequence-this contradicts the compactness of H. Hence r = 1. Set x = e u and [(u) = log x(eU),u E R'. Then [(u, u,) = [(u,) [(u,) so [ is a continuous homomorphism on R1 to R1. It must be shown that [(u) = vu for some fixed real v. This follows from the solution to Problem 6 below in the special case that V = R'. Thls problem is easy, but the result is worth noting. Part (i) is easy and for part (ii), all that needs to be shown is that + is linear. First observe that
+
+
so it remains to verify that +(Av) = AC#J(v)for A E R'. (6.6) implies +(0) = 0 and +(nv) = n+(v) for n = 1,2,. . . . Also, +(-v) = -+(v) follows from (6.6). Setting w = nv and dividing by n, we have +(w/n) = (l/n)+(w) for n = 1,2,. . . . Now +((m/n)v) = m+((l/n)v) = (m/n)+(v) and by continuity, C#J(Av)= A+(v) for A > 0. The rest is easy. 7. Not hard with the outline given. 8. By the spectral theorem, every rank r orthogonal projection can be written r x , r ' for some r E On. Hence transitivity holds. The equation r x , r ' = x, holds for I- E 8, iff r has the form
485
CHAPTER 6
and thls gives the isotropy subgroup of x,. For r E On, rx,r' = r x , ( ~ x , ) ' and rx, has the form ($0) where $ : n X r has columns that are the first r columns of r . Thus rx,r' = $4'.Part (ii) follows by observing that $,$', = $,$; if $ l = $,A for some A E Or. The only difficulty here is (iii). The problem is to show that the only 9. continuous homomorphisms x on G, to (oo, oo) are t,", for some real a. Consider the subgroups G, and G4 of G, given by
'
The group G , is isomorphic to RP- SO the only homomorphisms are x -+ exp[Cf -'six,] and G4 is isomorphic to (0, oo) so the only homomorphisms are u + ua for some real a. For k E G,, write
so ~ ( k =) exp[Zaixi]ua. Now, use the condition x(k,k,) = x ( k l ) . ~ ( k , )to conclude a , = a, = . - . = a,-, = 0 so x has the claimed form. 10. Use (6.4) to conclude that
and then use Problem 5.10 to evaluate the integral over G:. You will find that, for 2y n > p - 1, the integral is finite and is I, = (&)"pu(n, p)/u(2y + n, p). If 2y + n < p - 1, the integral diverges. 11. Examples 6.14 and 6.17 give A, for G(M) and all the continuous homomorphisms for G(M). Pick x, E RP n MCto be
+
where z; = (1,0,. . . , O), z, E Rr. Then H, consists of those g's with the first column of g,, being 0 and the first column of g,, being z,. To apply Theorem 6.3, all that remains is to calculate the right-hand modulus of Ho-say A.: This is routine given the calculations of Examples 6.14 and 6.17. You will find that the only possible multi-
486
COMMENTS ON SELECTED PROBLEMS
pliers are ~ ( g =) lglllJg331 and Lebesgue measure on R* n MCis the only (up to a positive constant) invariant measure. 12. Parts (i), (ii), (iii), and (iv) are routine. For (v), J,( f ) = jf(x)p(dx) and J2(f ) = j f ( ~ - l ( ~ ) ) v ( dare ~ ) both invariant integrals on X(%). By Theorem 6.3, J, = W2 for some constant k. To find k, take f(x) = (&)-"sn(x)exp[ix'x] so J l (f ) = 1. Since s ( ~ - ' ( ~ = ) )v for y = (u, v, w),
For (vi), the expected value of any function of E and s(x), say q ( X , s(x)) is
u2
n(u-8)' u2
du dv.
Thus the joint density of X and s(x) is kvn-2 d " , ~ ) =6 h ( 3
v2
+
n ( ~ - 8 ) ~ u
(with respect to du dv) .
13. We need to show that, with Y(X)= X/llXll, P(llXll E B,Y E C ) = P(1IXII E B)P(Y E C). If P(I(XI1E B) = 0, the above is obvious. If not, set v(C) = P(Y E C,llXll E B)/P(IIXII E B) so v is a probability measure on the Borel sets of (ylll yll = 1) G Rn. But the relation cp(rx) = rcp(x) and the 8, invariance of C(X) implies that v is an 8,-invariant probability measure and hence is unique -(for all Borel B)-namely, v is uniform probability measure on (ylll yll = 1). 14. Each x E % can be uniquely written as gy with g E 9, and y E 9 (of course, y is the order statistic of x). Define Tn acting on gnX 9 by
CHAPTER 7
487
g(P, y ) = (gP, y). Then +-'(gx) = g+-'(x). Since P(gx) = gP(x), the argument used in Problem 13 shows that P ( X ) and Y(X) are independent and P ( X ) is uniform on 9,.
CHAPTER 7 1. Apply Propositions 7.5 and 7.6. 2. Write X = J/U as in Proposition 7.3 so 4 and U are independent. Then P(X) = $4' and S(X) = U'U and the independence is obvious. 3. First, write Q in the form
4.
5. 6.
7.
8.
where M is n x n and nonsingular. Since M is nonsingular, it suffices to show that (M-'(A))' has measure zero. Write x = (i.) where k is r X p. It then suffices to show that Bc = {xlx E EP, ,, rank(k) = p)" has measure zero. For this, use the argument given in Proposition 7.1. That the +'s are the only equivariant functions follows as in Example 7.6. Part (i) is obvious. For (ii), just observe that knowledge of F, allows you to write down the order statistic and conversely. Parts (i) and (ii) are clear. For (iii), write x = Px + Qx. If t is equivariant t(x + y ) = t(x) + y, y E M. This implies that t(Qx) = t(x) + Px (picky = Px). Thus t(x) = Px + t(Qx). Since Q = I - P, Qx E M L , so BQx = Qx for any B with (B, y) E G. Since t(Qx) E M, pick B such that Bx = - x for x E M. The equivariance of t then gives t(Qx) = t(BQx) = Bt(Qx) = -t(Qx), so t(Qx) = 0. Part (i) is routine as is the first part of (ii) (use Problem 6). An equivariant estimator of a 2 must satisfy t ( a r x + b) = a2t(x). G acts transitively on % and G acts transitively on (0, ce) (9for this case) so Proposition 7.8 and the argument given in Example 7.6 apply. When X E % with density f(x'x), then Y = XZ'l2 = (I, 8 Z ' / 2 ) ~ has density f ( 2 - '12x'xZ- 'I2) since dx/lx'xl n / 2 is invariant under x + xA for A E GI,. Also, when X has density f , then C((r8 A)X) = X) for all r E 8, and A E 8,. This implies (see Proposition 2.19) that Cov(X) = cI, 8 I, for some c > 0. Hence Cov((1, 8 Z'12)x) = cI, 8 Z. Part (ii) is clear and (iii) follows from Proposition 7.8 and Example 7.6. For (iv), the definition of C, and the assumption on f
e(
488
9.
10.
COMMENTS ON SELECTED PROBLEMS
imply f (I'C0Tf) = f (COrfr) = f(C,)for each r E Op.The uniqueness of C, implies C, = aIpfor some a > 0.Thus the maximum likelhood estimator of Z must be aXfX(seeProposition 7.12and Example 7.10). If C(X)= Po,then C(IIXII) is the same whenever C(X)E {PIP = gP,,g E O(V))since x + llxll is a maximal invariant under the action of O(V)on V.For (ii), C(llXll)depends on p through )IpII. Write V = o @ ( M- w)@ MI. Remove a set of Lebesgue measure zero from V and show the F ratio is a maximal invariant under the group action x + arx + b where a > 0,b E o,and E B ( V ) satisfies r(w)G w, r(M - o)G ( M - w). The group action on the parameter (p, a2)is p + arp + b and a2 + a2a2.A maximal invariant parameter is ll~,-,p11~/0~, which is zero when p E w. The statistic V is invariant under xi + Ax,+ b, i = 1,. . . , n, where b E RP,A E GI,, and det A = 1. The model is invariant under this group action where the induced group action on (p, Z)is p + A p + b and Z + AZA'.A direct calculation shows 8 = det(Z) is a maximal invariant under the group action. Hence the distribution of V depends 2)only through 8. on (p, For (i), if h E G and B E 9, (hP)(B)= P(h-l~) = jG(ge)(h-l~) ~(dg)=jGC(g-lh-~~)~(dg)=!GP((hg)-~~)~(dg)=jQ(g-~~)p(dg)= P(B),so hP = P for h E G and P is G invariant. For (ii), let Q be the distribution described in Proposition 7.16 (ii), so if C(X)= P, then C(X)= C(UY) where U is uniform on G and is independent of Y.Thus for any bounded %-measurable function f ,
r
11.
12.
Set f = I, and we have P(B)= jGe(g-l~)p(dg) so (7.1) holds. 13. For y E % and B E 93, define R(B1y) by R(B1y) = J,I,(gv)p(dg). For each y, R(.ly) is a probability measure on (%, 93)and for fixed B, R(BI - ) is (3, C?) measurable. For P E 9,(ii) of Proposition 7.16 shows that
But by definition of R(.l ), jGh(gy)p(dg)= j,h(x)R(dxl y),so (7.2)
CHAPTER 7
becomes
This shows that R(.l y ) serves as a version of the conditional distribution of X given r(X). Since R does not depend on P E 9,T(X) is sufficient. 14. For (i), that t(gx) = g 0 t(x) is clear. Also, X - X e = Q,X, which is N(O, Q), so is ancillary. For -(ii),- &(f (XI)\X = t) = &(f (XI - X +X)lX = t ) = &(f(&;Z(X) X)IX = t ) since Z ( X ) has coordinates X, - X, i = 1,. . . , n. Since Z and X are independent, this last conditional expectation (given X = t) is just the integral over the distribution of Z with X = t. But E;Z(X) = XI - Xis N(0, a 2 ) so the claimed integral expression holds. When f ( x ) = 1 for x G u, and 0 otherwise, the integral is just Q,((u, - t)/6) where Q, is the normal cumulative distribution function. 15. Let B be the set (-a, u,] so IB(Xl) is an unbiased estimator of h(a, b) when C(X) = (a, b)P,. Thus h(t(X)) = G(IB(Xl)lt(X))is an unbiased estimator of h(a, b) based on t(X). To compute h , we have &(IB(Xl)(t(X))= P{X, G u,lt(X)) = P{(X, - X)/s G (u, X)/s((s, X)). But (XI - X)/s = Z, is the first coordinate of Z ( X ) so is independent of (s, X). Thus h(s, X) = P,I(Z, G (u, - X)/s) = F((u, - X)/s) where F is the distribution function of the first coordinate of Z. To find F, first observe that Z takes values in % = {xlx E Rn, x'e = 0, llxll = 1) and the compact group Bn(e) acts transitively on %. Since Z ( r X ) = r Z ( X ) for r E B,(e), it follows that Z has a uniform distribution on Z (see the argument in Example 7.19). Let U be N(0, I n ) so Z has the same distribution as QeU/llQeUII and C(Z,) = ~(~;Q,u/llQ,Ull~> = ~((Qe~1)'Qeu/llQeu112). since l l ~ , ~ ~=l (n l ~ - l)/n and Q,U is N(0, Q,), it follows that C(Z,) = C(((n l ) / n ) ' / 2 ~ , )where W, = U,/(Z;-'q2)1/2. The rest is a routine computation. 16. Part (i) is obvious and (ii) follows from
+
Since Z ( X ) and r ( X) are independent and r ( X ) = g, the last member of (7.3) is just the expectation over Z of f(gZ). Part (iii) is just an application and Q, is the uniform distribution on $, .. For (iv), let B be a fixed Bore1 set in RP and consider the parametric function
490
COMMENTS ON SELECTED PROBLEMS
h(Z) = P,(X, E B) = ~ ~ , ( x ) ( ~ ) - ~ ( Z ( - ~ ix'Z-'xldx, /~ex~[where Xi is the first row of X. Since T(X) is a complete sufficient statistic, the MVUE of h(Z) is
But Z; = (7-'(x)x,)' is the first row of Z(X) so is independent of r(X). Hence L(T) = P,{Z, E T-'(B)) where P, is the distribution of Z, when Z has a uniform distribution on ,. Since Z, is the first p coordinates of a random vector that is uniform on {XIllxll = 1, x E Rn), it follows that Z, has a density +(ll~11~) for u E RP where is given by
5,
+
O
CHAPTER 8 1. Make a change of variables to r, x, = s,,/u,, and x, = s,,/u,,, and then integrate out x, and x,. That p(rlp) has the claimed form follows by inspection. Karlin's Lemma (see Appendix) implies that +(pr) has a monotone likelihood ratio. 3. For a = 1/2 ,... , ( p - 1)/2, let X,,.. . , X, be i.i.d. N(0, I,) with r = 2a. Then S = Xi XI! has +a as its characteristic function. For a > ( p 1)/2, the function pa(s) = k(a)ls 1" exp[ - 5 tr s ] is a density with respect to d s / l s l ( ~ + ~on ). /: ~S The characteristic function of pa is Ga. To show that c#I~(ZA) is a characteristic function, let S satisfy G exp(i(A, S)) = +a(A) = )Ip- 2iAJa.Then Z ' / 2 ~ Z ' / 2has &(ZA) as its characteristic function. 4. C(S) = C(rSl?') implies that A = FS satisfies A = FAr' for all l? E 8,. This implies A = cIp for some constant c. Obviously, c = Gs, ,. For (ii) var(tr DS) = var(Cpdisii) = Cfd?var(sii) + CC,,jd,d,cov(sii, s,,). Noting that C(S) = C(l?SI") for l? E O,, and in particular for permutation matrices, it follows that y = var(s,,) does not depend on i and p = cov(s,,, s,,) does not depend on i and j (i *j). Thus var(D, S ) =
491
CHAPTER 8
YCfd: + PCC,,jd,d, = (y - P)Cfd? + P(Cfdi)2. For (iii), write A E $5, as FDr' so var(A, S ) = var(I'Dr1, S ) = var(D, I"SI') = var(D, S ) = (y - P)Cfd: + P(Cf'di)2 = (y - P)tr A2 + P(tr A ) =~ (y - P)(A, A) P(I, A)2. With T = (y - P)Ip 8 I, + PIpO I,, it follows that var(A, S ) = ( A , TA), and since T is self-adjoint, this implies that Cov(S) = T. Use Proposition 7.6. Immediate from Problem 3. For (i), it suffices to show that C((ASAr)-I) = w((AAA')-I, r, v + r - 1). Since C(S-I) = w(A-I, p, v p - l), Proposition 8.9 implies that desired result. (ii) follows immediately from (i). For (iii), (i) implies 3 = A-'/2SA-1/2 is IW( I,, p, v) and C(S) = C (rSI") for all r E Op. Now, apply Problem 4 to conclude that &S= cIp where c = GS,,. That c = (v - 2)- is an easy application of (i). Hence (v - 2)-'I, = &S= A - ' / ~ ( & s ) A - ' / ~ SO &S = ( V- 2)-'A. Also, cov S = (y - 0 ) I,-@I, + PI, I, as in Problem 4. Thus Cov (3) = ( A ' / 2 8 A 1 / 2 ) ( C ~ ~ ~ ) ( A 1 / 2 8 A 1 / 2 ) = ( y - p j A 8 A + P AFor ~A. (iv), that C(S,,) = IW(A,,, q, v), take A = (I, 0) in part (i). To show C(S,;!,) = W(A,'.,, r, v q r - 1), use Proposition 8.8 on S-I, which is W(A-I, p, v + p - 1). For (i), letp,(x)p,(s) denote the joint density of X and S with respect Setting T = XS-'/2 and V = S, the to the measure dx ds/lsl(~+')/~. joint density of T and V is p,(tv'/2)p2(v)lvlr/2 with respect to dt dv/Iv I(,+ ')I2-the Jacobian of x + is lv 1 'I2--see Proposition 5.10. Now, integrate out v to get the claimed density. That C(T) = C(I'TA') is clear from the form of the density (also from (ii) below). Use Proposition 2.19 to show Cov(T) = c,Ir 8 I,. Part (ii) follows by integrating out v from the conditional density of T to obtain the marginal density of T as given in (i). For (iii) represent T as: T given V is N(0, I, 8 V) where Vis IW(I,, p, v). Thus T,, given Vis N(0, I, 8 V,,) where Vl, is the q x q upper left-hand comer of V. Since C(Vl,) = IW(I,, q, v), the claimed result follows from (ii). With V = S; ' / 2 ~ 'I2 1 ~and ~ S = S; I , the conditional distribution of V given S is W(S, p, m) and C(S) = I W ( I , p, v). Since V is unconditionally F(m, v, I,), (i) follows. For (ii), [(T) = T(v, I,, I,) means that C(T) = C(XS'l2) where C(X) = N(0, I, 8 I,) and C(S) = IW(I,, p, v). Thus &(T'T) = C(S'I2X' XS1/'). Since C(X'X) = W(I,, p, r), (ii) follows by definition of F(r, v, I,). For (iii), write F = T'T where C(T) = T(v, I,, I,), which has the density given in (i) of Problem 8. Since r 2 p, Proposition 7.6 is directly applicable to yield the density of F. To establish (iv), first note that C(F) = C(rFI")
+
5. 6. 7.
+
+ +
8.
9.
492
COMMENTS ON SELECTED PROBLEMS
for all r E OP. Using Example 7.16, F has the same distributions as J,DJ,' where J, is uniform on 8, and is independent of the diagonal matrix D whose diagonal elements A , > . . . 2 A, are distributed as the eigenvalues of F. Thus A,, . . . , A, are distributed as the eigenvalues of ST IS, where S, is W(I,, p, r ) and S; is IW(Ip, p, v). Hence C(FP') = C(J,D-'+') = C(J,DJ,') where the diagonal elements of D, . . > A; are the eigenvalues of S; IS,. Since S2 is say A; W(I,, p, v + p - l), it follows that J,D$' has the same distribution as an F(v + p - 1, r - p + 1, I,) matrix by just repeating the orthogonal invariance argument given above. (v) is established by writing F = T'T as in (ii) and partitioning T as Tl : r x q and T2 : r x ( p - q)
'
'
',
SO
T'T
=
Since C(T,) = T(v, I,, I,) and Fll = TiTl, (ii) implies that C(F,,) = F(r, v, I,). (vi) can be established by deriving the density of XS-'X' directly and using (iii), but an alternative argument is more instructive. First, apply Proposition 7.4 to X' and write X = v'/~J,' where V E ST, V = XX' is W(I,, r, p) and is independent of J, : p x r, which is uniform on T,,. Then XS- 'X' = v'/~w-' v ' / ~where W = (4's-'I))-' and is independent of V. Proposition 8.1 implies that C(W) = W(I,, r, m - p + r). Thus C(WP') = IW(I,, r, m - p 1). Now, use the orthogonal invariance of the distribution of XS-'x' to conclude that C ( XS- 'X') = C ( r D r f )where r and D are independent, r is uniform on Or, and the diagonal elements of D are distributed as the ordered eigenvalues of W-'V. As in the proof of (iv), conclude that C(rDTf) = F(p, m - p + 1, I,). 10. The function S + s'l2on Sp+ to S ; satisfies (I'SI")'/2 = rS'/2r'for r E 0,. With B(SI, S,) = (S1 + S 2 ) - 1 / 2 ~ 1 ( ~S2)-'I2, 1 it follows that B ( E , r ' , rS2rf)= rB(SI, S 2 ) r f .Since C(rS,I") = C(Si), i = 1,2, and S, and S, are independent, the above implies that C(B) = C(rBT') for r E 8,. The rest of (i) is clear from Example 7.16. For (ii), let B, = S,'/2(Sl + S2)-'Si/2 so C(B,) = C(rBlr!) for r E 8,. Thus C(Bl) = C(J,DJ,') where J, and D are independent, J, is uniform on 8,. Also, the diagonal elements of D, say A , 2 . . . 2 A, > 0, are distributed as the ordered eigenvalues of Sl(Sl + S2)-' so Bl is B(ml, m,, I,). (iii) is easy using (i) and (ii) and the fact that F( I + F)-' is symmetric. For (iv), let B = X(S + X'X)-'X' and observe that C(B) = C(rBI"), r E 8,. Since m > p, S-' exists so B = XS-'/~(I, S-'/2~'~~-1/2)-1S-1/2~'. Hence T = XS-'/~ is T(m - p + 1, I,, I,). Thus C(B) = C(J,DJ,') where J, is uniform on 8, and
+
+
+
493
CHAPTER 9
is independent of D. The diagonal elements of D, say A,,. . . , A,, are the eigenvalues of T(I + TfT)-IT'. These are the same as the eigenvalues of TT'(I, + T+')-' (use the singular value decomposition for T). But C(TT') = C(XS-'X') = F(p, m - p + 1, I,) by Problem 9 (vi). Now use (iii) above and the orthogonal invariance of C(B). (v) is trivial.
CHAPTER 9 1. Let B have rows v;, . . . , v; and form X in the usual way (see Example 4.3) so &X = ZB with an appropriate Z : n x k. Let R : 1 x k have entries a,,. . . , a,. Then RB = Cfaipi and Ho holds iff RB = 0. Now apply the results in Section 9.1. 2. For (i), just do the algebra. For (ii), apply (i) with S, = ( Y - XB)'(Y - XB) and S2= (X(B - B))'(X(B - B)), so @(S,)< +(S, + S,) for every B. Since A 2 0, tr A(S, S,) = tr AS, + tr AS, 2 tr ASl since tr AS, 2 0 as S, 2 0. To show det(A S ) is nondecreasing in S 2 0, First note that .4g + S, < A + S, S2in the sense of positive definiteness as S, > 0. Thus the ordered eigenvalues of (A + S, + S,), say A,,. . . , A,, satisfy Xi 2 p,, i = 1,. . . , p, where p,,. . . , p, are the ordered eigenvalues of A + S,. Thus det(A + S, + S,) > det(A + S , ) . This same argument solves (iv). 3. Since C(EJ,'Af) = C(EAf) for J, E O, the distribution of EA' depends only on a maximal invariant under the action A -+ AJ, of J, on GI,. This maximal invariant is AA'. (ii) is clear and (iii) follows since the reduction to canonical form is achieved via an orthogonal transformation E = TY where T E 8,. Thus = r p I?EAf. r is chosen so Tp has the claimed form and Ho is B, = 0. Setting 6, = r E , the model has the claimed form and C(E) = C(B) by assumption. The arguments given in Section 9.1 show that the testing problem is invariant and a maximal invariant is the vector of the t largest eigenvalues of Y,(~Y,)-'Y;.Under H,, Y, = EIAf, Y, = E,Ar so Y,(Y;Y,)-'Y; = E,(E;E,)-'E; = W. When C(rE) = C(E) for all T E On, write E = J,U according to Proposition 7.3 where J, and U are independent and J, is uniform on %, ,. Partitioning J, as E is partitioned, E, = J,,U, i = 1,2,3, so W = J,,u((JI,u)'J/,u)-'U'J,', = J,l(J,;J,3)-'J,;. The rest is obvious as the distribution of W depends only on the distribution of J,. 4. Use the independence of Y, and Y, and the fact that G(Y;Y,)-' = ( m - p - l)-'Z-l.
+
+
+
+
494
COMMENTS ON SELECTED PROBLEMS
5. Let I' E 0,be given by
and set F = YI'. Then C(F) = N(ZBr, I, 8 I"ZT). Now, let BI' have columns p, and p,. Then H, is that PI = 0. Also I"X is diagonal with unknown diagonal elements. The results of Section 9.2 apply directly to yield the likelihood ratio test. A standard invariance argument shows the test is UMP invariant. 6. For (i), look at the i, j elements of the equation for Y. To show M2 I M,, compute as follows: (au;, u,P') = tr au;/3u; = u;pu;a = 0 from the side conditions on a and p. The remaining relations MI I M, and M, I M, are verified similarly. For (iii) consider (I, 8 A)(pu,u; + au; u,Pf) = puI(Au2)' ~ ( A u , ) ' u,(AP)' = p y u , ~ ;+ yau; +6u,P1 E M where the relations Pu, = u, and QP = /3 when u;p = 0 have been used. This shows that M is invariant under each I, 8 A. It is now readily verified that fi = F. , 6 , = - F. and b, = Y., - Y .. For (iv), first note that the subspace w = {x(xE M, a = 0) defined by H, is invariant under each I, 8 A. Obviously, w = MI @ M,. Consider the group whose elements are g = (c, r , b) where c is a positive scalar, b E MI $ M,, and I' is an orthogonal transformation with invariant subspaces M,, MI $ M,, and ML . The testing problem is invariant under x + cI'x b and a maximal invariant is W (up to a set a measure zero). Since W has a noncentral F-distribution, the test that rejects for large values of W is UMP invariant. 7. (i) is clear. The column space of W is contained in the column space of Z and has dimension r. Let x,,. . . , x,, x,,,,. . . , x,, x,,,,. . . , x, be an orthonormal basis for Rn such that span{x,, . . . , x,) = column space of W and span{x,,. . . , x,) = column space of Z. Also, let y,,. . . , y, be any orthonormal basis for RP. Then {x,O y,li = 1,. . . , r, j = 1,. . . , p ) is a basis for %(P, 8 I,), which has dimension rp. Obviously, %(P, 8 I,) G M. Consider x E w so x = ZB with RB = 0. Thus (P, 8 I,)x = P,ZB = W(WfW)-'W'ZB = w ( w t w ) - l ~ ( ~ f ~ ) =- lw(wtw)-IRB ( ~ ~ ) ~ = O. T ~ U S %(P, 8 I,) 2 w, which implies %(P, 8 I,) c w' . Hence % ( P W8 I*) c M n w L . That dimw = (k - r ) p can be shown by a reduction to canonical form as was done in Section 9.1. Since w c M, dim(M - a ) = dim M - dim w = rp, which entails %(P, 8 I,) = M - u. Hence P, 8 I, - P, 8 I, is the orthogonal projection onto a . 8. Use the fact that TZI' is diagonal with diagonal entries a,, a,, a,, a,, a, (see Proposition 9.13 ff.) so the maximum likelihood estimators a,, a,,
+
+
+
t.
+
495
CHAPTER 10
and a, are easy to find-just transform the data by I'. Let D have diagonal entries ail, ai,, ai,, ai,, ai, so 5 = ~ D Tgives the maximum likelihood estimators of a 2 , p ,, and p,. 9. Do the problems in the complex domain first to show that if Z , , . . . , Z , are i.i.d. W ( 0 , 2H ) , then H = (1/2n)CyZ,Z:. But if Z, = U, + i y and
+
then H = (1/2n)C;(U, iV,)(U, - i y ) ' = ( 1 / 2 n ) [ ( S l l+ S22)+ i ( S I 2- S,,)] so = { H } . This gives the desired result. 10. Write R = M ( I , O)r where M is r X r of rank r and I' E 0,. With 6 = I'p, the null hypothesis is (I,0)6 = 0. Now, transform the data by r and proceed with the analysis as in the first testing problem considered in Section 9.6. 11. First write P, = PI + P2 where PI is the orthogonal projection onto e and P, is the orthogonal projection onto (column space of Z ) n {span e)' . Thus P, = PI 8 I, + P2 8 I,. Also, write A ( p ) = yPl + 6Q1where y = 1 ( n - l ) p , 6 = 1 - p , and Q , = I, - PI. The relations PIP2 = 0 = Q ,PI and P2Q, = Q ,P2 = P2 show that M is invariant under A ( p ) 8 2 for each value of p and 2. Write ZB = eb; Ciz;bj' so QIYis N ( C ' ; ( Q l z j ) b j ' , ( Q l A ( ~ ) Q8l ) 2).Now, Q I A ( P ) Q ,= 6Q1 so Q I Y is ~ ( P , k ( ~ , z , ) b6Q, ; , 8 2 ) .Also, P,Y is N(eb;, yP, 8 2). Since hypotheses of the form RB = 0 involve only b,, . . . , b,, an invariance argument shows that invariant tests of H, will not involve Ply-so just ignore Ply. But the model for Q,Y is of the MANOVA type; change coordinates so
4
+
+
Now, the null hypothesis is of the type discussed in Section 9.1.
CHAPTER 10 1. Part (i) is clear since the number of nonzero canonical correlations is always the rank of Z 1 2in the partitioned covariance of {X, Y } . For (ii), write
4%
COMMENTS ON SELECTED PROBLEMS
where Z,, has rank t , and Z,, > 0, Z,, > 0. First, consider the case when q g r, Z , , = I,, Z,, = I,, and
where D > 0 is t x t and diagonal. Set
so AB' = Z,,. Now, set A,, = I, - AA', A,, = I, - BB', and the problem is solved for this case. The general case is solved by using Proposition 5.7 to reduce the problem to the case above. That Z,, = 6e,e; for some 6 E R' is clear, and hence Z,, has rank 2. one-hence at most one nonzero canonical correlation. It is the square root of the largest eigenvalue of Z;'Z,,Z,'Z,, = 62Z;'e,e;Z;'e2e;. The only nonzero (possibly) eigenvalue is 6 ,e; 2; 'e,e;Z,'e,. To describe canonical coordinates, let
and then form orthonormal bases (6,, 6,,. . . , 6,) and {a,,. . . , a,.) for R4 and Rr. Now, set vi = 2,'/26,, W, = Z ; 2 1 / 2 ~for i = 1,. . . , q, j = 1,. . . , r. Then verify that X, = viX and Y, = w,'Y form a set of canonical coordinates for X and Y. 3. Part (i) follows immediately from Proposition 10.4 and the form of the covariance for {X, Y). That 6(B) = t r A ( I - Q(B)) is clear and the minimization of 6(B) follows from Proposition 1.44. To describe B,let : p X t have columns a,, . . . , a, so = I, and Q = ++'. Then show directly that B = +'2-'/2is the minimizer and ~ B = XP~/~QZ-I/~X is the best predictor. (iii) is an immediate application of (ii). 4. Part (i) is easy. For (ii), with u i = x, - a,,
+
+'+
4 9
CHAPTER 10
Since S(ao) = S ( 2 ) + n(F - ao)(F - a,)', (ii) follows. (iii) is an application of Proposition 1.44. 6. Part (i) follows from the singular value decomposition: For (ii), (x E eP,,lx = $C, C E eP,k ) is a linear subspace of and the orthogonal projection onto this subspace is ($4') 8 I,. Thus the 8 I)A = $$'A, and the C that achieves closest point to A is (($4') the minimum is = $'A. For B E a k , write B = $C as in (i). Then
eP,,
IIA - ~
1 2 1inf ~infllA - $c112= infllA - $ $ ' A I I ~= i n f l l ~ Q l l ~ . 4 c
4
Q
The last equality follows as each $ determines a Q and conversely. Since llAQ1I2 = tr AQ(AQ)' = tr A Q 2 ~=' tr QAA', llA - B112 >, inf tr QAA'. Q
Writing A = CfX,u,v: (the singular value decomposition for A), AA' = Cfhiuiuj is a spectral decomposition for AA'. Using Proposition 1.44, it follows easily that inf tr QAA' Q
P =
A;.
k+ 1
That B achieves the infimum is a routine calculation. 7. From Proposition 10.8, the density of W is
where pn-, is the density of a noncentral t distribution and f is the density of a X;-l distribution. For 19 > 0, set u = 0 ~ ' 'so~
Since p,-,(wlv) has a monotone likelihood ratio in w and v and f (v2/02) has a monotone likelihood ratio in v and 13, Karlin's Lemma implies that h(w1S) has a monotone likelihood ratio. For 0 < 0, set v = 8 uchange variables, and use Karlin's Lemma again. The last assertion is clear.
498
COMMENTS ON SELECTED PROBLEMS
8. For $U^2$ fixed, the conditional distribution of $W$ given $U^2$ can be described as the ratio of two independent random variables: given $K$, the numerator has a $\chi^2_{r+2K}$ distribution, where $K$ is Poisson with parameter $\lambda/2$ and $\lambda = \rho^2(1 - \rho^2)^{-1}U^2$, and the denominator is $\chi^2_{n-r-1}$. Hence, given $U^2$, this ratio is $F_{r+2K,\, n-r-1}$ with $K$ described above, so the conditional density of $W$ is a Poisson mixture in which $\psi(\cdot\,|\lambda/2)$ denotes the Poisson probability function. Integrating out $U^2$ gives the unconditional density of $W$ (at $\rho$). Thus it must be shown that $\mathcal{E}_{U^2}\,\psi(k|\lambda/2) = h(k|\rho)$; this is a calculation. That $f(\cdot|\rho)$ has a monotone likelihood ratio is a direct application of Karlin's Lemma.

9. Let $M$ be the range of $P$. Each such projection $R$ can be represented as $R = \psi\psi'$, where $\psi$ is $n \times s$, $\psi'\psi = I_s$, and $P\psi = 0$. In other words, $R$ corresponds to orthonormal vectors $\psi_1, \ldots, \psi_s$ (the columns of $\psi$), and these vectors lie in $M^{\perp}$ (of course, they are not unique). But given any two such sets, say $\psi_1, \ldots, \psi_s$ and $\alpha_1, \ldots, \alpha_s$, there is a $\Gamma \in \mathcal{O}(P)$ such that $\Gamma\psi_i = \alpha_i$, $i = 1, \ldots, s$. This shows that $\mathcal{O}(P)$ is compact and acts transitively on the set of such projections, so there is a unique $\mathcal{O}(P)$-invariant probability distribution on that set. For (iii), $AR_1A'$ has an $\mathcal{O}(P)$-invariant distribution, and uniqueness does the rest.

10. For (i), use Proposition 7.3 to write $Z = \psi U$ with probability one, where $\psi$ and $U$ are independent, $\psi$ is uniform on the set of $n \times p$ linear isometries, and $U$ is upper triangular with positive diagonal elements. Thus, with probability one, $\mathrm{rank}(QZ) = \mathrm{rank}(Q\psi)$. Let $S > 0$ be independent of $\psi$ with $\mathcal{L}(S^2) = W(I_p, p, n)$, so $S$ has rank $p$ with probability one. Thus $\mathrm{rank}(Q\psi) = \mathrm{rank}(Q\psi S)$ with probability one. But $\psi S$ is $N(0, I_n \otimes I_p)$, which implies that $Q\psi S$ has rank $p$. Part (ii) is a direct application of Problem 9.

12. That $\psi$ is uniform follows from the uniformity of $\Gamma$ on $\mathcal{O}_n$. For (ii), $\mathcal{L}(\psi) = \mathcal{L}(Z(Z'Z)^{-1/2})$, and $\Delta = (I\;\;0)\psi$ implies that $\mathcal{L}(\Delta) = \mathcal{L}(X(X'X + Y'Y)^{-1/2})$. (iii) is immediate from Problem 11, and (iv) is an application of Proposition 7.6. For (v), it suffices to show that $\int f(x)P_1(dx) = \int f(x)P_2(dx)$ for all bounded measurable $f$. The invariance of $P_i$ implies that, for $i = 1, 2$, $\int f(x)P_i(dx) = \int f(gx)P_i(dx)$ for $g \in G$. Let $\nu$ be the uniform probability measure on $G$ and integrate the above to get $\int f(x)P_i(dx) = \int \bigl(\int_G f(gx)\nu(dg)\bigr)P_i(dx)$. But the function $x \to \int_G f(gx)\nu(dg)$ is $G$-invariant and so can be written as $k(\tau(x))$, where $\tau$ is a maximal invariant. Since $P_1(\tau^{-1}(C)) = P_2(\tau^{-1}(C))$ for all measurable $C$, we have $\int k(\tau(x))P_1(dx) = \int k(\tau(x))P_2(dx)$ for all bounded
measurable $k$. Putting things together, we have $\int f(x)P_1(dx) = \int k(\tau(x))P_1(dx) = \int k(\tau(x))P_2(dx) = \int f(x)P_2(dx)$, so $P_1 = P_2$. Part (vi) is immediate from (v).

13. For (i), argue as in Example 4.4:
The third equality follows from the relation $QT = 0$, as in the normal case. Since $h$ is nonincreasing, this shows that for each $\Sigma > 0$ the supremum over $B$ is attained at $\hat B$, and it is obvious that $f(Z|\hat B, \Sigma) = |\Sigma|^{-n/2} h(\mathrm{tr}\, S\Sigma^{-1})$. For (ii), first note that $S > 0$ with probability one. Then, for $S > 0$,

$$\sup_{\Sigma > 0} f(Z|\hat B, \Sigma) = \sup_{\Sigma > 0} |\Sigma|^{-n/2} h(\mathrm{tr}\, S\Sigma^{-1}) = |S|^{-n/2} \sup_{C > 0} |C|^{n/2} h(\mathrm{tr}\, C).$$

Under $H_0$, we have

$$\sup_{\Sigma_{ii} > 0,\ i = 1,2} |\Sigma_{11}|^{-n/2}|\Sigma_{22}|^{-n/2}\, h(\mathrm{tr}\,\Sigma_{11}^{-1}S_{11} + \mathrm{tr}\,\Sigma_{22}^{-1}S_{22}) = |S_{11}|^{-n/2}|S_{22}|^{-n/2} \sup_{C_{11}, C_{22} > 0} |C_{11}|^{n/2}|C_{22}|^{n/2}\, h(\mathrm{tr}\, C_{11} + \mathrm{tr}\, C_{22}).$$

This latter supremum is bounded above by

$$\sup_{C > 0} |C|^{n/2} h(\mathrm{tr}\, C) = k_0,$$
which is finite by assumption. Hence the likelihood ratio test rejects for small values of $|S_{11}|^{-n/2}|S_{22}|^{-n/2}|S|^{n/2}$, which is equivalent to rejecting for small values of $\Lambda(Z)$. The identity of part (iii) follows from the equations relating the blocks of $\Sigma$ to the blocks of $\Sigma^{-1}$. Partition $B$ into $B_1 : k \times q$ and $B_2 : k \times r$, so that $\mathcal{E}X = TB_1$ and $\mathcal{E}Y = TB_2$. Apply the identity with $U = X - TB_1$ and $V = Y - TB_2$ to give
Using the notation of Section 10.5, write
Hence the conditional density of Y given X is
where $q = \mathrm{tr}(X - TB_1)\Sigma_{11}^{-1}(X - TB_1)'$ and $(\psi(q))^{-1} = \int h(\mathrm{tr}\, uu' + q)\, du$, the integral being over the space of $n \times r$ real matrices. For (iv), argue as in (ii) and use the identities established in Proposition 10.17. Part (v) is easy, given the results of (iv): just note that the sup over $\Sigma_{11}$ and $B_1$ is equal to the sup over $q > 0$. Part (vi) is interesting, since Proposition 10.13 is not applicable. Fix $X$, $B_1$, and $\Sigma_{11}$, and note that under $H_0$ the conditional density of $Y$ is
This shows that $Y$ has the same distribution (conditionally) as

$$TB_2 + E\Sigma_{22}^{1/2},$$
where $E \in \mathcal{L}_{r,n}$ has density $h(\mathrm{tr}\, EE' + q)\psi(q)$. Note that $\mathcal{L}(\Gamma E A) = \mathcal{L}(E)$ for all $\Gamma \in \mathcal{O}_n$ and $A \in \mathcal{O}_r$. Let $t = \min(q, r)$ and, given any $n \times n$ matrix $A$ with real eigenvalues, let $\lambda(A)$ be the vector of the $t$ largest eigenvalues of $A$. Thus the squares of the sample canonical correlations are the elements of the vector $\lambda(R_Y R_X)$, where $R_Y = (QY)(Y'QY)^{-1}(QY)'$ and $R_X = (QX)(X'QX)^{-1}(QX)'$, since the nonzero eigenvalues of $R_Y R_X$ are the squared cosines of the angles between the column spaces of $QY$ and $QX$. (You may want to look at the discussion preceding Proposition 10.5.) Now use Problem 9 and the notation there, with $P = I - Q$. First, $R_Y$ and $R_X$ are orthogonal projections of ranks $r$ and $q$ orthogonal to $P$, and $\mathcal{O}(P)$ acts transitively on each of these two sets of projections. Under $H_0$ (and with $X$ fixed), $\mathcal{L}(QY) = \mathcal{L}(QE\Sigma_{22}^{1/2})$, which implies that $\mathcal{L}(\Gamma R_Y \Gamma') = \mathcal{L}(R_Y)$ for $\Gamma \in \mathcal{O}(P)$. Hence $R_Y$ is uniform, in the sense of Problem 9, for each $X$. Fix $R_0$ and choose $\Gamma_0$ so that $\Gamma_0 R_0 \Gamma_0' = R_X$. Then, for each $X$, $\lambda(R_Y R_X) = \lambda(\Gamma_0' R_Y \Gamma_0 R_0)$ and $\mathcal{L}(\Gamma_0' R_Y \Gamma_0) = \mathcal{L}(R_Y)$. This shows that, for each $X$, $\lambda(R_Y R_X)$ has the same distribution as $\lambda(R_Y R_0)$ with $R_0$ fixed, where $R_Y$ is uniform. Since the distribution of $\lambda(R_Y R_X)$ does not depend on $X$ and agrees with what we get in the normal case, the solution is complete.
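The identification of the squared sample canonical correlations with the $t$ largest eigenvalues of $R_Y R_X$, used in the last solution, can be checked directly. The sketch below is illustrative only (Python with NumPy): here $Q$ is taken, purely for illustration, to be the projection orthogonal to a column of ones, and all dimensions, seeds, and variable names are arbitrary choices.

    # Compare the t largest eigenvalues of R_Y R_X with the squared sample
    # canonical correlations computed from the whitened cross-product matrix.
    import numpy as np

    rng = np.random.default_rng(2)
    n, q, r = 30, 4, 3
    X = rng.normal(size=(n, q))
    Y = rng.normal(size=(n, r))

    Q = np.eye(n) - np.ones((n, n)) / n            # projection orthogonal to the ones vector
    RX = Q @ X @ np.linalg.solve(X.T @ Q @ X, X.T @ Q)
    RY = Q @ Y @ np.linalg.solve(Y.T @ Q @ Y, Y.T @ Q)

    t = min(q, r)
    eig = np.sort(np.linalg.eigvals(RY @ RX).real)[::-1][:t]

    # Canonical correlations as singular values of Sxx^{-1/2} Sxy Syy^{-1/2},
    # using Cholesky factors of Sxx and Syy.
    Sxx, Syy, Sxy = X.T @ Q @ X, Y.T @ Q @ Y, X.T @ Q @ Y
    A = np.linalg.cholesky(Sxx)
    B = np.linalg.cholesky(Syy)
    M = np.linalg.solve(A, Sxy) @ np.linalg.inv(B).T
    rho = np.linalg.svd(M, compute_uv=False)[:t]

    print(eig)        # t largest eigenvalues of R_Y R_X
    print(rho ** 2)   # squared canonical correlations: the same numbers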
BIBLIOGRAPHY
Anderson, T. W. (1946). The noncentral Wishart distribution and certain problems of multivariate statistics. Ann. Math. Stat., 17, 409-431.
Anderson, T. W. (1958). An Introduction to Multivariate Statistical Analysis. Wiley, New York.
Anderson, T. W., S. Das Gupta, and G. H. P. Styan (1972). A Bibliography of Multivariate Analysis. Oliver and Boyd, Edinburgh.
Andersson, S. A. (1975). Invariant normal models. Ann. Stat., 3, 132-154.
Andersson, S. A. (1982). Distributions of maximal invariants using quotient measures. Ann. Stat., 10, 955-961.
Arnold, S. F. (1973). Applications of the theory of products of problems to certain patterned covariance matrices. Ann. Stat., 1, 682-699.
Arnold, S. F. (1979). A coordinate free approach to finding optimal procedures for repeated measures designs. Ann. Stat., 7, 812-822.
Arnold, S. F. (1981). The Theory of Linear Models and Multivariate Analysis. Wiley, New York.
Basu, D. (1955). On statistics independent of a complete sufficient statistic. Sankhyā, 15, 377-380.
Billingsley, P. (1979). Probability and Measure. Wiley, New York.
Björck, A. (1967). Solving linear least squares problems by Gram-Schmidt orthogonalization. BIT, 7, 1-21.
Blackwell, D. and M. Girshick (1954). Theory of Games and Statistical Decision Functions. Wiley, New York.
Bondesson, L. (1977). A note on sufficiency and independence. Preprint, University of Lund, Lund, Sweden.
Box, G. E. P. (1949). A general distribution theory for a class of likelihood criteria. Biometrika, 36, 317-346.
Chung, K. L. (1974). A Course in Probability Theory, second edition. Academic Press, New York.
Cramér, H. (1946). Mathematical Methods of Statistics. Princeton University Press, Princeton, N.J.
Das Gupta, S. (1979). A note on ancillarity and independence via measure-preserving transformations. Sankhyā, 41, Series A, 117-123.
Dawid, A. P. (1978). Spherical matrix distributions and a multivariate model. J. Roy. Stat. Soc. B, 39, 254-261.
Dawid, A. P. (1981). Some matrix-variate distribution theory: Notational considerations and a Bayesian application. Biometrika, 68, 265-274.
Deemer, W. L. and I. Olkin (1951). The Jacobians of certain matrix transformations useful in multivariate analysis. Biometrika, 38, 345-367.
Dempster, A. P. (1969). Elements of Continuous Multivariate Analysis. Addison-Wesley, Reading, Mass.
Eaton, M. L. (1970). Gauss-Markov estimation for multivariate linear models: A coordinate free approach. Ann. Math. Stat., 41, 528-538.
Eaton, M. L. (1972). Multivariate Statistical Analysis. Institute of Mathematical Statistics, University of Copenhagen, Copenhagen, Denmark.
Eaton, M. L. (1978). A note on the Gauss-Markov Theorem. Ann. Inst. Stat. Math., 30, 181-184.
Eaton, M. L. (1981). On the projections of isotropic distributions. Ann. Stat., 9, 391-400.
Eaton, M. L. and T. Kariya (1981). On a general condition for null robustness. University of Minnesota Technical Report No. 388, Minneapolis.
Eckart, C. and G. Young (1936). The approximation of one matrix by another of lower rank. Psychometrika, 1, 211-218.
Farrell, R. H. (1962). Representations of invariant measures. Ill. J. Math., 6, 447-467.
Farrell, R. H. (1976). Techniques of Multivariate Calculation. Lecture Notes in Mathematics, No. 520. Springer-Verlag, Berlin.
Giri, N. (1964). On the likelihood ratio test of a normal multivariate testing problem. Ann. Math. Stat., 35, 181-190.
Giri, N. (1965a). On the complex analogues of T² and R² tests. Ann. Math. Stat., 36, 664-670.
Giri, N. (1965b). On the likelihood ratio test of a multivariate testing problem, II. Ann. Math. Stat., 36, 1061-1065.
Giri, N. (1975). Invariance and Minimax Statistical Tests. Hindustan Publishing Corporation, Delhi, India.
Giri, N. C. (1977). Multivariate Statistical Inference. Academic Press, New York.
Gnanadesikan, R. (1977). Methods for Statistical Data Analysis of Multivariate Observations. Wiley, New York.
Goodman, N. R. (1963). Statistical analysis based on a certain multivariate complex Gaussian distribution (An introduction). Ann. Math. Stat., 34, 152-177.
Hall, W. J., R. A. Wijsman, and J. K. Ghosh (1965). The relationship between sufficiency and invariance with applications in sequential analysis. Ann. Math. Stat., 36, 575-614.
Halmos, P. R. (1950). Measure Theory. D. Van Nostrand Company, Princeton, N.J.
Halmos, P. R. (1958). Finite Dimensional Vector Spaces. Undergraduate Texts in Mathematics, Springer-Verlag, New York.
Hoffman, K. and R. Kunze (1971). Linear Algebra, second edition. Prentice Hall, Englewood Cliffs, N.J.
Hotelling, H. (1931). The generalization of Student's ratio. Ann. Math. Stat., 2, 360-378.
Hotelling, H. (1935). The most predictable criterion. J. Educ. Psych., 26, 139-142.
Hotelling, H. (1936). Relations between two sets of variates. Biometrika, 28, 321-377.
Hotelling, H. (1947). Multivariate quality control, illustrated by the air testing of sample bombsights, in Techniques of Statistical Analysis. McGraw-Hill, New York, pp. 111-184.
James, A. T. (1954). Normal multivariate analysis and the orthogonal group. Ann. Math. Stat., 25, 40-75.
Kariya, T. (1978). The general MANOVA problem. Ann. Stat., 6, 200-214.
Karlin, S. (1956). Decision theory for Polya-type distributions. Case of two actions, I, in Proc. Third Berkeley Symp. Math. Stat. Prob., Vol. 1. University of California Press, Berkeley, pp. 115-129.
Karlin, S. and H. Rubin (1956). The theory of decision procedures for distributions with monotone likelihood ratio. Ann. Math. Stat., 27, 272-299.
Kiefer, J. (1957). Invariance, minimax sequential estimation, and continuous time processes. Ann. Math. Stat., 28, 573-601.
Kiefer, J. (1966). Multivariate optimality results. In Multivariate Analysis, edited by P. R. Krishnaiah. Academic Press, New York.
Kruskal, W. (1961). The coordinate free approach to Gauss-Markov estimation and its application to missing and extra observations, in Proc. Fourth Berkeley Symp. Math. Stat. Prob., Vol. 1. University of California Press, Berkeley, pp. 435-461.
Kruskal, W. (1968). When are Gauss-Markov and least squares estimators identical? A co-ordinate free approach. Ann. Math. Stat., 39, 70-75.
Kshirsagar, A. M. (1972). Multivariate Analysis. Marcel Dekker, New York.
Lang, S. (1969). Analysis II. Addison-Wesley, Reading, Mass.
Lawley, D. N. (1938). A generalization of Fisher's z-test. Biometrika, 30, 180-187.
Lehmann, E. L. (1959). Testing Statistical Hypotheses. Wiley, New York.
Mallows, C. L. (1960). Latent vectors of random symmetric matrices. Biometrika, 48, 133-149.
Mardia, K. V., J. T. Kent, and J. M. Bibby (1979). Multivariate Analysis. Academic Press, New York.
Muirhead, R. J. (1982). Aspects of Multivariate Statistical Theory. Wiley, New York.
Nachbin, L. (1965). The Haar Integral. D. Van Nostrand Company, Princeton, N.J.
Noble, B. and J. Daniel (1977). Applied Linear Algebra, second edition. Prentice Hall, Englewood Cliffs, N.J.
Olkin, I. and S. J. Press (1969). Testing and estimation for a circular stationary model. Ann. Math. Stat., 40, 1358-1373.
Olkin, I. and H. Rubin (1964). Multivariate beta distributions and independence properties of the Wishart distribution. Ann. Math. Stat., 35, 261-269.
Olkin, I. and A. R. Sampson (1972). Jacobians of matrix transformations and induced functional equations. Linear Algebra Appl., 5, 257-276.
Peisakoff, M. (1950). Transformation parameters. Thesis, Princeton University, Princeton, N.J.
Pillai, K. C. S. (1955). Some new test criteria in multivariate analysis. Ann. Math. Stat., 26, 117-121.
Pitman, E. J. G. (1939). The estimation of location and scale parameters of a continuous population of any form. Biometrika, 30, 391-421.
Potthoff, R. F. and S. N. Roy (1964). A generalized multivariate analysis of variance model useful especially for growth curve problems. Biometrika, 51, 313-326.
Rao, C. R. (1973). Linear Statistical Inference and Its Applications, second edition. Wiley, New York.
Roy, S. N. (1953). On a heuristic method of test construction and its use in multivariate analysis. Ann. Math. Stat., 24, 220-238.
Roy, S. N. (1957). Some Aspects of Multivariate Analysis. Wiley, New York.
Rudin, W. (1953). Principles of Mathematical Analysis. McGraw-Hill, New York.
Scheffé, H. (1959). The Analysis of Variance. Wiley, New York.
Segal, I. E. and Kunze, R. A. (1978). Integrals and Operators, second revised and enlarged edition. Springer-Verlag, New York.
Serre, J. P. (1977). Linear Representations of Finite Groups. Springer-Verlag, New York.
Srivastava, M. S. and C. G. Khatri (1979). An Introduction to Multivariate Statistics. North Holland, Amsterdam.
Stein, C. (1956). Some problems in multivariate analysis, Part I. Stanford University Technical Report No. 6, Stanford, Calif.
Wijsman, R. A. (1957). Random orthogonal transformations and their use in some classical distribution problems in multivariate analysis. Ann. Math. Statist., 28, 415-423.
Wijsman, R. A. (1966). Cross-sections of orbits and their applications to densities of maximal invariants, in Proc. Fifth Berkeley Symp. Math. Stat. Prob., Vol. I. University of California Press, Berkeley, pp. 389-400.
Wilks, S. S. (1932). Certain generalizations in the analysis of variance. Biometrika, 24, 471-494.
Wishart, J. (1928). The generalized product moment distribution in samples from a normal multivariate population. Biometrika, 20, 32-52.
Index

Action group, see Group
Affine dependence: invariance of, 405; measure of, 404, 418, 419; between random vectors, 403
Affinely equivalent, 404
Almost invariant function, 287
Ancillary, 285
Ancillary statistic, 465
Angles between subspaces: definition, 61; geometric interpretation, 61
Beta distribution: definition, 320; noncentral, 320; relation to F, 320
Beta random variables, products of, 236, 321, 323
Bilinear, 33
Bivariate correlation coefficient: density has monotone likelihood ratio, 459; distribution of, 429, 432
Borel σ-algebra, 70
Borel measurable, 71
Canonical correlation coefficients: as angles between subspaces, 408, 409; definition, 408; density of, 442; interpretation of sample, 421; as maximal correlations, 413; model interpretation, 456; and prediction, 418; population, 408; sample, as maximal invariant, 425
Canonical variates: definition, 415; properties of, 415
Cauchy-Schwarz Inequality, 26
Characteristic function, 76
Characteristic polynomial, 44
Chi-square distribution: definition, 109, 110; density, 110, 111
Compact group, invariant integral on, 207
Completeness: bounded, 466; independence, sufficiency and, 466
Complex covariance structure: discussion of, 381; example of, 370
Complex normal distribution: definition, 374, 375; discussion of, 373; independence in, 378; relation to real normal distribution, 377
Complex random variables: covariance of, 372; covariance matrix of, 375; mean of, 372; variance of, 372
Complex vector space, 39
Complex Wishart distribution, 378
Conditional distribution: for normal variables, 116, 117; in normal random matrix, 118
Conjugate transpose, 39
Correlation coefficient, density of in normal sample, 329
Covariance: characterization of, 75; of complex random variables, 372; definition, 74; of outer products, 96, 97; partitioned, 86; properties of, 74; of random sample, 89; between two random variables, 28
Covariance matrix, 73
Cyclic covariance: characterization of, 365; definition, 362; diagonalization of, 366; multivariate, 368
Density, of maximal invariant, 272, 273-277
Density function, 72
Determinant: definition, 41; properties of, 41
Determinant function: alternating, 40; characterization of, 40; definition, 39; as n-linear function, 40
Direct product, 212
Distribution, induced, 71
Eigenvalue: and angles between subspaces, 61; definition, 44; of real linear transformations, 47
Eigenvector, 45
Equivariant, 218
Equivariant estimator, in simple linear model, 157
Equivariant function: description of, 249; example of, 250
Error subspace, see Linear model
Estimation: Gauss-Markov Theorem, 134; linear, 133; of variance in linear model, 139
Expectation, 71
Factorization, see Matrix
F distribution: definition, 320; noncentral, 320; relation to beta, 320
F test, in simple linear model, 155
Gauss-Markov estimator: definition, 135; definition in general linear model, 146; discussion of, 140-143; equal to least squares, 145; existence of, 147; in k-sample problem, 148; for linear functions, 136; in MANOVA, 151; in regression model, 135; in weakly spherical linear model, 134
Generalized inverse, 87
Generalized variance: definition, 315; example of, 298
Gram-Schmidt Orthogonalization, 15
Group: action, 186, 187; affine, 187, 188; commutative, 185; definition, 185; direct product, 212; general linear, 186; isotropy subgroup, 191; lower triangular, 185, 186; normal subgroup, 189, 190; orthogonal, 185; permutation matrices, 188, 189; sign changes, 188, 189; subgroup, 186; topological, 195; transitive, 191; unimodular, 200; upper triangular, 186
Hermitian matrix, 371
Homomorphism: definition, 218; on lower triangular matrices, 230, 231; on non-singular matrices, 230
Hotelling's T²: complex case of, 381; as likelihood ratio statistic, 402
Hypothesis testing, invariance in, 263
Independence: in blocks, testing for, 446; characterization of, 78; completeness, sufficiency and, 466; decomposition of test for, 449; distribution of likelihood ratio test for, 447; likelihood ratio test for, 444, 446; MANOVA model and, 453, 454; of normal variables, 106, 107; of quadratic forms, 114; of random vectors, 77; regression model and, 451; sample mean and sample covariance, 126, 127; testing for, 443, 444
Inner product: definition, 14; for linear transformations, 32; norm defined by, 14; standard, 15
Inner product space, 15
Integral: definition, 194; left invariant, 195; right invariant, 195
Intraclass covariance: characterization of, 355; definition, 131, 356; multivariate version of, 360
Invariance: in hypothesis testing, 263; and independence, 289; of likelihood ratio, 263; in linear model, 156, 256, 257; in MANOVA model with block covariance, 353; in MANOVA testing problem, 341; of maximum likelihood estimators, 258
Invariance and independence, example of, 290-291, 292-295
Invariant densities: definition, 254; example of, 255
Invariant distribution: example of, 282-283; on n × p matrices, 235; representation of, 280
Invariant function: definition, 242; maximal invariant, 242
Invariant integral: on affine group, 202, 203; on compact group, 207; existence of, 196; on homogeneous space, 208, 210; on lower triangular matrices, 200; on matrices (n × p of rank p), 213-218; on m-frames, 210, 211; and multiplier, 197; on non-singular matrices, 199; on positive definite matrices, 209, 210; relatively left, 197; relatively right, 197, 198; uniqueness of, 196; see also Integral
Invariant measure, on a vector space, 121-122
Invariant probability model, 251
Invariant subspace, 49
Inverse Wishart distribution: definition, 330; properties of, 330
Isotropy subgroup, 191
Jacobian: definition, 166; example of, 168, 169, 171, 172, 177
Kronecker product: definition, 34; determinant of, 67; properties of, 36, 68; trace of, 67
Lawley-Hotelling trace test, 348
Least squares estimator: definition, 135; equal to Gauss-Markov estimator, 145; in k-sample problem, 148; in MANOVA, 151; in regression model, 135
Lebesgue measure, on a vector space, 121-125
Left homogeneous space: definition, 207; relatively invariant integral on, 208-210
Left translate, 195
Likelihood ratio test: decomposition of in MANOVA, 349; definition, 263; in MANOVA model with block covariance, 351; in MANOVA problem, 340; in mean testing problem, 384, 390
Linear isometry: definition, 36; properties of, 37
Linear model: error subspace, 133; error vector, 133; invariance in, 156, 157, 256, 257; with normal errors, 137; regression model, 132; regression subspace, 133; weakly spherical, 133
Linear transformation: adjoint of, 17, 29; definition, 7; eigenvalues, see Eigenvalue; invertible, 9; matrix of, 10; non-negative definite, 18; null space of, 9; orthogonal, 18; positive definite, 18; range of, 9; rank, 9; rank one, 19; self-adjoint, 18; skew symmetric, 18; transpose of, 17; vector space of, 7
Locally compact, 194
MANOVA: definition, 150; maximum likelihood estimator in, 151; with normal errors, 151
MANOVA model: with block diagonal covariance, 350; canonical form of, 339; with cyclic covariance, 366; description of, 336; example of, 308; and independence, 453, 454; with intraclass covariance, 356; maximum likelihood estimators in, 337; under non-normality, 398; with non-normal density, 462; testing problem in, 337
MANOVA testing problem: canonical form of, 339; complex case of, 379; description of, 337; with intraclass covariance structure, 359; invariance in, 341; likelihood ratio test in, 340, 347; maximal invariant in, 342, 346; maximal invariant parameter in, 344; uniformly most powerful test in, 345
Matric t distribution: definition, 330; properties of, 330
Matrix: definition, 10; eigenvalue of, 44; factorization, 160, 162, 163, 164; lower triangular, 44, 159; orthogonal, 25; partitioned positive definite, 161, 162; positive definite, 25; product, 10; skew symmetric, 25; symmetric, 25; upper triangular, 159
Maximal invariant: density of, 278-279; example of, 242, 243, 246; parameter, 268; and product spaces, 246; representing density of, 272
Maximal invariant, see Invariant function
Maximum likelihood estimator: of covariance matrix, 261; invariance of, 258; in MANOVA model, 151; in simple linear model, 138
Mean value, of random variable, 72
Mean vector: of coordinate random vector, 72; definition, 72; for outer products, 93; properties of, 72; of random sample, 89
M-frame, 38
Modulus, right hand, 196
Monotone likelihood ratio: for non-central chi-squared, 468, 469; for non-central F, 469; for non-central Student's t, 470; and totally positive of order 2, 467
Multiple correlation coefficient: definition, 434; distribution of, 434
Multiplier: on affine group, 204; definition, 197; and invariant integral, 197; on lower triangular matrices, 201; on non-singular matrices, 199
Multivariate beta distribution: definition, 331; properties of, 331, 332
Multivariate F distribution: definition, 331; properties of, 331
Multivariate General Linear Model, see MANOVA
Non-central chi-squared distribution: definition, 110, 111; for quadratic forms, 112
Noncentral Wishart distribution: covariance of, 317; definition, 316; density of, 317; mean of, 317; properties of, 316; as quadratic form in normals, 318
Normal distribution: characteristic function of, 105, 106; complex, see Complex normal distribution; conditional distribution in, 116, 117; covariance of, 105, 106; definition, 104; density of, 120-126; density of normal matrix, 125; existence of, 105, 106; independence in, 106, 107; mean of, 105, 106; and non-central chi-square, 111; and quadratic forms, 109; relation to Wishart, 307; representation of, 108; for random matrix, 118; scale mixture of, 129, 130; sufficient statistic in, 126, 127, 131; of symmetric matrices, 130
Normal equations, 155, 156
Orthogonal: complement, 16; decomposition, 17; definition, 15
Orthogonal group, definition, 23
Orthogonally invariant distribution, 81
Orthogonally invariant function, 82
Orthogonal projection: characterization of, 21; definition, 17; in Gauss-Markov Theorem, 134; random, 439, 440
Orthogonal transformation, characterization of, 22
Orthonormal: basis, 15; set, 15
Outer product: definition, 19; properties of, 19, 30
Parameter, maximal invariant, 268
Parameter set: definition, 146; in linear models, 146
Parameter space, 252
Partitioning, a Wishart matrix, 310
Pillai trace test, 348
Pitman estimator, 264-267
Prediction: affine, 94; and affine dependence, 416
Principal components: and closest flat, 457, 458; definition, 457; low rank approximation, 457
Probability model, invariant, 251
Projection: characterization of, 13; definition, 12
Quadratic forms: independence of, 114, 115; in normal variables, 109
Radon measure: definition, 194; factorization of, 224
Random vector, 71
Regression: multivariate, 451; and testing for independence, 451
Regression model, see Linear model
Regression subspace, see Linear model
Relatively invariant integral, see Invariant integral
Right translate, 195
Roy maximum root test, 348, 349
Sample correlation coefficient, as a maximal invariant, 268-271
Scale mixture of normals, 129, 130
Self adjoint transformations, functions of, 52
Singular Value Decomposition Theorem, 58
Spectral Theorem: and positive definiteness, 51; statement of, 50, 53
Spherical distributions, 84
Sufficiency: completeness, independence and, 466; definition, 465
Symmetry model: definition, 361; examples of, 361
Topological group, 195
Trace: of linear transformation, 47; of matrix, 33; sub-k, 56
Transitive group action, 191
Two way layout, 155
Uncorrelated: characterization of, 98; definition, 87; random vectors, 88
Uniform distribution: on M-frames, 234; on unit sphere, 101
Uniformly most powerful invariant test, in MANOVA problem, 345
Vector space: basis, 3; complementary subspaces, 5; definition, 2; dimension, 4; direct sum, 6; dual space, 8; finite dimensional, 3; linearly dependent, 3; linearly independent, 3; linear manifold, 4; subspace, 4
Weakly spherical: characterization of, 83; definition, 83; linear model, 133
Wishart constant, 175
Wishart density, 239, 240
Wishart distribution: characteristic function of, 305; convolution of two, 306; covariance of, 305; definition, 303; density of, 239, 240; for nonintegral degrees of freedom, 329; in MANOVA model, 308; mean of, 305; noncentral, see Noncentral Wishart distribution; nonsingular, 304; partitioned matrix and, 310; of quadratic form, 307; representation of, 303; triangular decomposition of, 313, 314
Wishart matrix: distribution of partitioned, 311; ratio of determinants of, 319