MULTIPARAMETRIC STATISTICS
MULTIPARAMETRIC STATISTICS

by

Vadim I. Serdobolskii
Moscow State Institute of Electronics and Mathematics, Moscow, Russia
AMSTERDAM • BOSTON • HEIDELBERG • LONDON NEW YORK • OXFORD • PARIS • SAN DIEGO SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO
Elsevier
Radarweg 29, PO Box 211, 1000 AE Amsterdam, The Netherlands
Linacre House, Jordan Hill, Oxford OX2 8DP, UK

First edition 2008
Copyright © 2008 Elsevier B.V. All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means electronic, mechanical, photocopying, recording or otherwise without the prior written permission of the publisher.

Permissions may be sought directly from Elsevier's Science & Technology Rights Department in Oxford, UK: phone (+44) (0) 1865 843830; fax (+44) (0) 1865 853333; email: [email protected]. Alternatively, you can submit your request online by visiting the Elsevier web site at http://elsevier.com/locate/permissions and selecting "Obtaining permission to use Elsevier material."

Notice: No responsibility is assumed by the publisher for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein. Because of rapid advances in the medical sciences, in particular, independent verification of diagnoses and drug dosages should be made.

Library of Congress Cataloging-in-Publication Data: A catalog record for this book is available from the Library of Congress.
British Library Cataloguing in Publication Data: A catalogue record for this book is available from the British Library.

ISBN: 978-0-444-53049-3

For information on all Elsevier publications visit our website at www.books.elsevier.com

Printed and bound in the United Kingdom
ON THE AUTHOR

Vadim Ivanovich Serdobolskii is a professor at the Applied Mathematics Faculty of the Moscow State Institute of Electronics and Mathematics. He graduated from Moscow State University as a theoretical physicist. In 1952, he received his first doctorate from the Nuclear Research Institute of Moscow State University for investigations in the resonance theory of nuclear reactions. In 2001, he received his second (advanced) doctoral degree from the Faculty of Computational Mathematics and Cybernetics of Moscow State University for the development of the asymptotic theory of statistical analysis of high-dimensional observations. He is the author of the monograph "Multivariate Statistical Analysis. A High-Dimensional Approach," Kluwer Academic Publishers, Dordrecht, 2000. Internet page: serd.miem.edu.ru E-mail:
[email protected]
CONTENTS

Foreword
Preface

Chapter 1. Introduction: The Development of Multiparametric Statistics
    The Stein Effect
    The Kolmogorov Asymptotics
    Spectral Theory of Increasing Random Matrices
    Constructing Multiparametric Procedures
    Optimal Solution to Empirical Linear Equations

Chapter 2. Fundamental Problem of Statistics
    2.1. Shrinkage of Sample Mean Vector
        Shrinkage for Normal Distributions
        Shrinkage for a Wide Class of Distributions
        Conclusions
    2.2. Shrinkage of Unbiased Estimators
        Special Shrinkage of Normal Estimators
        Shrinkage of Arbitrary Unbiased Estimators
        Limit Quadratic Risk of Shrinkage Estimators
        Conclusions
    2.3. Shrinkage of Infinite-Dimensional Vectors
        Normal Distributions
        Wide Class of Distributions
        Conclusions
    2.4. Unimprovable Component-Wise Estimation
        Estimator for the Density of Parameters
        Estimator for the Best Estimating Function

Chapter 3. Spectral Theory of Large Sample Covariance Matrices
    3.1. Spectral Functions of Large Sample Covariance Matrices
        Gram Matrices
        Sample Covariance Matrices
        Limit Spectra
    3.2. Spectral Functions of Infinite Sample Covariance Matrices
        Dispersion Equations for Infinite Gram Matrices
        Dispersion Equations for Sample Covariance Matrices
        Limit Spectral Equations
    3.3. Normalization of Quality Functions
        Spectral Functions of Sample Covariance Matrices
        Normal Evaluation of Sample-Dependent Functionals
        Conclusions

Chapter 4. Asymptotically Unimprovable Solution of Multivariate Problems
    4.1. Estimators of Large Inverse Covariance Matrices
        Problem Setting
        Shrinkage for Inverse Covariance Matrices
        Generalized Ridge Estimators
        Asymptotically Unimprovable Estimator
        Proofs for Section 4.1
    4.2. Matrix Shrinkage Estimators of Expectation Vectors
        Limit Quadratic Risk for Estimators of Vectors
        Minimization of the Limit Quadratic Risk
        Statistics to Approximate Limit Risk
        Statistics to Approximate the Extremum Solution
    4.3. Multiparametric Sample Linear Regression
        Functionals of Random Gram Matrices
        Functionals in the Regression Problem
        Minimization of Quadratic Risk
        Special Cases

Chapter 5. Multiparametric Discriminant Analysis
    5.1. Discriminant Analysis of Independent Variables
        A Priori Weighting of Variables
        Empirical Weighting of Variables
        Minimum Error Probability for Empirical Weighting
        Statistics to Estimate Probabilities of Errors
        Contribution of a Small Number of Variables
        Selection of Variables by Threshold
    5.2. Discriminant Analysis of Dependent Variables
        Asymptotical Setting
        Moments of Generalized Discriminant Function
        Limit Probabilities of Errors
        Best-in-the-Limit Discriminant Procedure
        The Extension to a Wide Class of Distributions
        Estimating the Error Probability

Chapter 6. Theory of Solution to High-Order Systems of Empirical Linear Algebraic Equations
    6.1. The Best Bayes Solution
    6.2. Asymptotically Unimprovable Solution
        Spectral Functions of Large Gram Matrices
        Limit Spectral Functions of Gram Matrices
        Quadratic Risk of Pseudosolutions
        Minimization of the Limit Risk
        Shrinkage-Ridge Pseudosolution
        Proofs for Section 6.2

Appendix: Experimental Investigation of Spectral Functions of Large Sample Covariance Matrices

References

Index
FOREWORD

This monograph presents the mathematical theory of statistical methods distinguished by their treatment of models defined by a number of parameters comparable in magnitude with the sample size, or even exceeding it. This branch of statistical science was developed in recent decades and until lately was presented only in a series of papers in mathematical journals, remaining practically unknown to the majority of statisticians. The first attempt to sum up these investigations was made in the author's previous monograph "Multivariate Statistical Analysis. A High-Dimensional Approach," Kluwer Academic Publishers, 2000. The new monograph proposed here is different in that it pays more attention to the fundamentals of statistics, refines theorems proved in the previous book, solves a number of new multiparametric problems, and provides the solution of infinite-dimensional problems. This book is written, first, for specialists in mathematical statistics, who will find here new settings, new mathematical methods, and results of a new kind urgently required in applications. In effect, a new branch of statistics originates here, together with new mathematical problems. Specialists in applied statistics who create statistical software packages will be interested in implementing the new, more efficient methods proposed in the book. The advantages of these methods are obvious: the user is freed from the permanent uncertainty of possible degeneration of linear methods and obtains approximately unimprovable algorithms whose quality does not depend on distributions. Specialists applying statistical methods to their concrete problems will find in the book a number of always stable, approximately unimprovable algorithms that will help them to better solve their scientific or economic problems. Students and postgraduates may be interested in this book as a way to reach the foremost frontier of modern statistical science, which would guarantee them success in their future careers.
PREFACE
This monograph presents the development of mathematical statistics for models of large dimension and, what is more essential, for models with a number of unknown parameters so large that it is comparable in magnitude with the sample size and may be much greater. This branch of statistics is distinguished by a new concept of statistical investigation, by its problem settings, by specific phenomena, by its methods, and by its results. This theory may be qualified as "multiparametric statistics" or perhaps, more meaningfully, "essentially multiparametric statistics." At the foundation of the presented theory lie some restrictions of principle imposed on the degree of dependence of the variables that make it possible to overcome the "curse of dimensionality," that is, the necessity of analyzing a combinatorially large variety of abstract possibilities. They reduce to a restriction on the invariant maximum fourth moment of the variables and the requirement of decreasing variance of quadratic forms. These restrictions may be called conditions of multiparametric approach applicability, and they seem to be quite acceptable for a majority of real statistical problems of practical importance. A statistician using many-parametric models is hardly interested in the magnitudes of separate parameters and would more probably prefer to obtain a more satisfactory solution of his concrete statistical problem. The consistency and unbiasedness of estimators, important for small dimension, are not the most desirable properties in many-dimensional problems. The classical Fisher concept of statistical investigation as a sharpening of our knowledge in a process of unbounded accumulation of data is replaced by the necessity of taking optimal decisions under a fixed sample size. In this situation, new estimators are preferable that could provide the maximum gain in the purpose function. This means, first, a turn to the Wald decision-function approach and to the concepts of efficiency and dominance of estimators. However, it is well known that the best equivariant estimators have mostly not been found to date. The main obstacle here, certainly, is that standard quality functions depend on unknown parameters.
Fortunately, as is shown in this book, this difficulty of principle may be overcome in the asymptotics of an increasing number of parameters. This asymptotic approach was proposed by A. N. Kolmogorov in 1968–1970. He suggested considering a sequence of statistical problems in which the sample size N increases along with the number of parameters n so that the ratio n/N → c > 0. This setting is different in that a concrete statistical problem is replaced by a sequence of problems, so that the theory may be considered as a tool for isolating the leading terms that describe the concrete problem approximately. The ratio n/N marks the boundary of the applicability of traditional methods and the origin of new multiparametric features of statistical phenomena. Their essence may be understood if we ascribe the magnitude 1/n to the contribution of a separate parameter and compare it with the variance of standard estimators, which is of the order of magnitude 1/N. Large ratios n/N mean that the contribution of separate parameters is comparable with the "noise level" produced by the uncertainty of the sample data. In this situation, the statistician has no opportunity to seek more precise values of unknown parameters and is obliged to take practical decisions based on the accessible data. The proposed theory of multiparametric statistics is advantageous for problems where n and N are large and the variables are boundedly dependent. In this case, the inaccuracies of estimation of a large number of parameters become approximately independent, and their summation produces an additional mechanism of averaging (mixing) that stabilizes functions depending uniformly on their arguments. It is important that this class of functions includes the standard quality functions. As a consequence, their variance proves to be small, and in the multiparametric approach, we may speak not of estimation of quality functionals but of their evaluation. This property radically changes the whole problem of estimator dominance and the methods of search for admissible estimators. We may say that a new class of mathematical investigations in statistics appears, whose purpose is a systematic construction of asymptotically better and asymptotically unimprovable solutions. Another specifically multiparametric phenomenon is the appearance of stable limit relations between sets of observable quantities and sets of parameters. These relations may have the form of
Fredholm integral equations of the first kind with respect to unknown functions of the parameters. Since they are produced by random distortions in the process of observation, these relations may be called dispersion equations. These equations present functional relations between sets of first and second moments of the variables and sets of their estimators. Higher-order deviations are averaged out, and this leads to a remarkable, essentially multiparametric effect: the leading terms of standard quality functions depend on only the first two moments of the variables. Practically, it means that for a majority of multivariate (regularized) statistical procedures, for a wide class of distributions, standard quality functions do not differ much from those calculated under the normality assumption, and the quality of multiparametric procedures is approximately population free. Thus, three drawbacks of the existing methods of multivariate statistics prove to be overcome at once: the instability is removed, approximately unimprovable solutions are found, and population-free quality is obtained. Given approximately nonrandom quality functions, we may solve the appropriate extremum problems and obtain methods of constructing statistical solutions that are asymptotically unimprovable independently of distributions. The main result of the investigations presented in the book is a regular multiparametric technology for improving statistical solutions. Briefly, this technology is as follows. First, a generalized family of always stable statistical solutions is chosen, depending on an a priori vector of parameters or an a priori function that fixes the chosen algorithm. Then, dispersion equations are derived and applied to calculate the limit risk as a function of population parameters only. An extremum problem is solved, and the extremal a priori vector or function is calculated that defines the asymptotically ideal solution. Then, by using the dispersion equations once more, this ideal solution is approximated by statistics, and a practical procedure is constructed providing approximately maximum quality. Another way is, first, to isolate the leading term of the risk function and then to minimize some statistic approximating the risk. It remains to estimate the influence of the small inaccuracies produced by the asymptotic approach.
These technologies are applied in this book to obtain asymptotically improved and unimprovable solutions for a series of the most usable statistical problems. These include the estimation of expectation vectors, matrix shrinkage, the estimation of inverse covariance matrices, sample regression, and discriminant analysis. The same technology is applied to minimum square solutions of large systems of empirical linear algebraic equations (over a single realization of the random coefficient matrix and the random right-hand side vector). For practical use, the implementation of the simplest two-parametric shrinkage-ridge versions of existing procedures may be especially interesting. They save the user from the danger of degeneration and provide solutions that are certainly improved for large n and N. Asymptotically unimprovable values of the shrinkage and ridge parameters for the problems mentioned above are written out in the book. These two-parametric solutions improve over conventional ones and also over one-parametric shrinkage and ridge regularization algorithms, increase the computational burden only insignificantly, and are easy to program. Investigations of the specific phenomena produced by the estimation of a large number of parameters were initiated by A. N. Kolmogorov. Under his guidance in 1970–1972, Yu. N. Blagoveshchenskii, A. D. Deev, L. D. Meshalkin, and Yu. V. Arkharov carried out the first, but basic, investigations in the increasing-dimension asymptotics. In later years, A. N. Kolmogorov also took interest in and supported the earlier investigations of the author of this book. The main results exposed in this book are obtained in the Kolmogorov asymptotics. The second constituent of the mathematical theory presented in this book is the spectral theory of increasing random matrices created by V. A. Marchenko and L. A. Pastur (1967) and by V. L. Girko (1975–1995), which was later applied to sample covariance matrices by the author. This theory was developed independently under the same asymptotic approach as the Kolmogorov asymptotics. Its main achievements are based on the method of spectral functions, which the author learned from reports and publications by V. L. Girko in 1983. The spectral function method is used in this book for developing a general technology for the construction of
improved and asymptotically unimprovable statistical procedures that are distribution free for a wide class of distributions. I would like to express my sincere gratitude to our prominent scientists Yu. V. Prokhorov, V. M. Buchstaber, and S. A. Aivasian, who appreciated the fruitfulness of the multiparametric approach in statistical analysis from the very beginning, supported my investigations, and made the publication of my books possible. I highly appreciate the significant contribution of my pupils and successors: my son Andrei Serdobolskii, who helped me much in the development of the theory of optimal solutions to large systems of empirical linear equations, and also V. S. Stepanov and V. A. Glusker, who performed laborious numerical investigations that provided convincing confirmation of the practical applicability of the theory developed in this book.

Moscow, January 2007
Vadim I. Serdobolskii
CHAPTER 1
INTRODUCTION
THE DEVELOPMENT OF MULTIPARAMETRIC STATISTICS

Today, we have the facilities to create and analyze informational models of high complexity, including models of biosystems, visual patterns, and natural language. Modern computers easily treat information arrays that are comparable with the total life experience (near 10^10 bits). Objects of statistical investigation are now often characterized by a very large number of parameters, whereas, in practice, sample data are rather restricted. For such statistical problems, the values of separate parameters are usually of little interest, and the purpose of investigation is displaced toward finding optimal statistical decisions. Some examples follow.

1. Statistical analysis of biological and economic objects. These objects are characterized by great complexity and considerable nuisance variation, along with bounded samples. Their models depend on a great number of parameters, and the standard approach of mathematical statistics, based on expansion in the inverse powers of the sample size, does not account for the specificity of the problem. In this situation, another approach, proposed by A. N. Kolmogorov, seems to be more appropriate. He introduced an asymptotics in which the dimension n tends to infinity along with the sample size N, allowing one to analyze the effects of the accumulation of inaccuracies in estimating a great number of parameters.

2. Pattern recognition. Today, we must acknowledge that the recognition of biological and economic objects requires not so much data accumulation
as the extraction of regularities and elements of structure against the noise background. These structure elements are then used as features for recognition. But the variety of possible structure elements is measured by combinatorially large numbers, and a new mathematical problem arises of efficient discriminant analysis in spaces of high dimension.

3. Interface with computers using natural language. This problem seems set to become the central problem of our age. It is well known that the printed matter containing the main part of classical literature requires rather moderate computer resources. For example, a full collection of A. S. Pushkin's compositions occupies only 2–5 megabytes, while the main corpus of Russian literature can be written on a 1-gigabyte disk. The principal problem to be solved is how to extract the meaning from the text. Identifying the meaning of texts with new information and measuring it with the Shannon measure, we can associate the sense of a phrase with the statistics of repeating words and phrases in the language experience of a human. This poses the problem of developing a technology of search for repeating fragments in texts of large volume. A specific difficulty is that the number of repetitions may be small; indeed, the human mind would not miss even a single coincidence of phrases.

Traditionally, statistical investigation is related to a cognition process, and according to R. Fisher's conception, the purpose of statistical analysis is to determine the parameters of an object in the process of analyzing more and more data. This conception is formalized in the form of an asymptotics of sample size increasing indefinitely, which lies at the foundation of the well-developed theory of asymptotic methods of statistics. Most investigations in mathematical statistics deal with one-dimensional observations and a fixed number of parameters under arbitrarily large sample sizes. The usual extension to the many-dimensional case is reduced to the replacement of scalars by vectors and matrices and to studying formal relations with no insight into the underlying phenomena. The main problem of mathematical statistics today remains the study of the consistency of estimators and their asymptotic properties under increasing sample size. Until recently, there has been
no fruitful approach to the problem of the quality of statistical procedures under fixed samples. It was only established that nearly all popular statistical methods allow improvement and must be classified as inadmissible. In many-dimensional statistics, this conclusion is much more severe: nearly all consistent multivariate linear procedures may have infinitely large values of the risk function. Such estimators should be called "essentially inadmissible." Meanwhile, we must acknowledge that today the state of the methods of multivariate statistical analysis is far from satisfactory. The most popular linear procedures require the inversion of the covariance matrix. True inverse covariance matrices are replaced by consistent estimators. But sample covariance matrices (depending on the data) may be degenerate, and their inversion can be impossible (even for dimension 2). For large dimension, the inversion of the sample covariance matrix becomes unstable, and this leads to statistical inferences of no significance. If the dimension is larger than the sample size, sample covariance matrices are surely degenerate and their inversion is impossible. As a consequence, the standard consistent procedures of multivariate statistical analysis included in most statistical software packages guarantee neither stable nor statistically significant results, and often prove to be inapplicable. Researchers applying methods of multivariate statistical analysis to their concrete problems are left to confront these difficulties without theoretical support. The existing theory can recommend nothing better than ignoring a part of the data, artificially reducing the dimension in the hope that this will provide a plausible solution (see [3]). This book presents the development of a new, special branch of mathematical statistics applicable to the case when the number of unknown parameters is large. Fortunately, in the case of a large number of boundedly dependent variables, it proves possible to use specifically many-parametric regularities for the construction of improved procedures. These regularities include the small variance of standard quality functions and the possibility of estimating them reliably from sample data, of comparing statistical procedures by their efficiency, and of choosing better ones. The mathematical theory developed in this book offers a number of more powerful versions of the most
widely used statistical procedures, providing solutions that are both reliable and approximately unimprovable in the situation when the dimension of the data is comparable in magnitude with the sample size. The statistical analysis appropriate for this situation may be qualified as essentially multivariate analysis [69]. The theory that takes into account the effects produced by the estimation of a large number of unknown parameters may be called multiparametric statistics. The first discovery of the existence of specific phenomena arising in multiparametric statistical problems was the fact that the standard sample mean estimator proves to be inadmissible, that is, its quadratic risk can be diminished.

The Stein Effect

In 1956, C. Stein noticed that the sample mean is not a minimum quadratic risk estimator and that it can be improved by multiplication by a scalar decreasing the length of the estimator vector. This procedure was called "shrinkage," and such estimators were called "shrinkage estimators." The effect of improving estimators by shrinkage was called the "Stein effect." This effect was fruitfully exploited in applications (see [29], [33], [34]). Let us cite the well-known theorem by James and Stein. Let x be an n-dimensional observation vector, and let x̄ denote the sample mean calculated over a sample of size N. Denote (here and below) by I the identity matrix.

Proposition 1. For n > 2 and x ∼ N(μ, I), the estimator
$$\hat{\mu}_{JS} = \left(1 - \frac{n-2}{N\bar{x}^2}\right)\bar{x} \qquad (1)$$
has the quadratic risk
$$E(\mu - \hat{\mu}_{JS})^2 = E(\mu - \bar{x})^2 - \left(\frac{n-2}{N}\right)^2 E\,\frac{1}{\bar{x}^2} \qquad (2)$$
(here and in the following, squares of vectors denote squares of their lengths).
Proof. Indeed,
$$R_{JS} = E(\mu - \hat{\mu}_{JS})^2 = y + 2y_2\, E\,\frac{(\mu - \bar{x})^T\bar{x}}{\bar{x}^2} + y_2^2\, E\,\frac{1}{\bar{x}^2}, \qquad (3)$$
where y = n/N and y2 = (n − 2)/N. Let f be the normal distribution density for N(μ, I/N). Then, μ − x̄ = (Nf)⁻¹∇f, where ∇ is the differentiation operator with respect to the components of x̄. Substitute this expression into the second term of (3), and note that the expectation can be calculated by integration with respect to f dx̄. Integrating by parts, we obtain (2).

The James–Stein estimator is known as a "remarkable example of estimator inadmissibility." This discovery led to the development of a new direction of investigation and a new trend in theoretical and applied statistics, with hundreds of publications and effective applications [33]. In the following years, other versions of estimators were offered that improved both the standard sample mean estimator and the James–Stein estimator. The first improvement was offered in 1963 by Baranchik [10]. He proved that the quadratic risk of the James–Stein estimator can be decreased by excluding negative values of the shrinkage coefficient (positive-part shrinkage). A number of other shrinkage estimators were proposed subsequently, decreasing the quadratic risk one after the other (see [31]). However, the James–Stein estimator is singular for small x̄². In 1999, Das Gupta and Sinha [30] offered a robust estimator
$$\hat{\mu}_G = \left(1 - \frac{n}{n + \bar{x}^2}\right)\bar{x},$$
which for n ≥ 4 dominates μ̂ = x̄ with respect to the quadratic risk and dominates μ̂_JS with respect to the absolute risk E|μ − μ̂_G| (here and in the following, the absolute value of a vector denotes its length). In 1964, C. Stein suggested an improved estimator for x ∼ N(μ, dI) with unknown μ and d. Later, a series of estimators were proposed, subsequently improving his estimator (see [42]). A number of shrinkage estimators were proposed for the case N(μ, Σ) (see [49]).
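As a quick numerical illustration of Proposition 1 (a sketch added here, not taken from the book), the following Python fragment estimates by Monte Carlo the quadratic risks of the sample mean and of the James–Stein estimator (1) and checks them against the right-hand side of formula (2); the dimension n, sample size N, and the true vector μ are arbitrary choices.

```python
import numpy as np

def js_risk_demo(n=20, N=10, trials=20000, seed=0):
    """Monte Carlo check of Proposition 1 for x ~ N(mu, I)."""
    rng = np.random.default_rng(seed)
    mu = np.full(n, 0.3)                      # arbitrary true expectation vector
    risk_mean = risk_js = inv_xbar2 = 0.0
    for _ in range(trials):
        sample = mu + rng.standard_normal((N, n))
        xbar = sample.mean(axis=0)            # sample mean vector
        xbar2 = xbar @ xbar                   # squared length of the sample mean
        mu_js = (1.0 - (n - 2) / (N * xbar2)) * xbar   # James-Stein estimator (1)
        risk_mean += np.sum((mu - xbar) ** 2)
        risk_js += np.sum((mu - mu_js) ** 2)
        inv_xbar2 += 1.0 / xbar2
    risk_mean, risk_js, inv_xbar2 = (v / trials for v in (risk_mean, risk_js, inv_xbar2))
    rhs = risk_mean - ((n - 2) / N) ** 2 * inv_xbar2   # right-hand side of formula (2)
    print(f"risk of sample mean      : {risk_mean:.4f}  (theory n/N = {n/N:.4f})")
    print(f"risk of James-Stein      : {risk_js:.4f}")
    print(f"formula (2), right side  : {rhs:.4f}")

js_risk_demo()
```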
Shrinkage was also applied in interval estimation. Using a shrinkage estimator, Cohen [15] constructed confidence intervals that have the same length but a uniformly greater probability of covering. Goutis and Casella [27] proposed other confidence intervals, improved with respect to both the interval length and the covering probability. These results were extended to many-dimensional normal distributions with unknown expectations and unknown covariance matrix. The Stein effect was also discovered for distributions different from the normal. In 1979, Brandwein showed that for spherically symmetric distributions with density f(|x − θ|), where θ is a vector parameter, for n > 3 the estimator μ̂ = (1 − a/x̄²)x̄ dominates μ̂ = x̄ for a such that 0 < a < a_max. For these distributions, other improved shrinkage estimators were found (see the review by Brandwein and Strawderman [13]). For the Poisson distribution of independent integer-valued variables k_i, i = 1, 2, . . ., n, with the vector parameter λ = (λ1, λ2, . . ., λn), the standard unbiased estimator is the vector of rates (f1, f2, . . ., fn) of the events number i = 1, 2, . . ., n in the sample. Clevenson and Zidek [14] showed that the estimator of the form
$$\hat{\lambda}_i = \left(1 - \frac{a}{a+s}\right) f_i, \quad i = 1, 2, \ldots, n, \quad \text{where } s = \sum_{i=1}^{n} f_i, \qquad (4)$$
for n > 2 has a quadratic risk less than that of the vector (f1, f2, . . ., fn) if n − 1 ≤ a ≤ 2(n − 1). A series of estimators were found improving the estimator (4).

Statistical meaning of shrinkage

For understanding the mechanism of risk reduction, it is useful to consider the expectation of the squared sample average. For distributions with variance d of all variables, we have Ex̄² = μ² + yd > μ², where y = n/N, and, naturally, one can expect that shrinkage of sample average vectors may be useful. The shrinkage effect may be characterized by the magnitude of the ratio μ²/yd. This ratio may be interpreted as the "signal-to-noise" ratio. Shrinkage is purposeful for sufficiently small μ²/yd. For d = 1 and restricted dimension, y ≈ 1/N, and shrinkage is useful only
for vector lengths less than 1/√N. For essentially many-dimensional statistical problems with y ≈ 1, shrinkage can be useful only for a bounded vector length, when μ² ≈ 1 and the components are of the order of magnitude 1/√N. This situation is characteristic of a number of statistical problems in which the vectors μ are located in a bounded region. An important example is high-dimensional discriminant analysis in the case when success can be achieved only by taking into account a large number of weakly discriminating variables. Let it be known a priori that the vector μ is such that μ² ≤ c. In this case, it is plausible to use the shrinkage estimator μ̂ = αx̄ with the shrinkage coefficient α = c/(c + y). The quadratic risk is
$$R(\alpha) = E(\mu - \alpha\bar{x})^2 = y\,\frac{\mu^2 y + c^2}{(c + y)^2} \le y\,\frac{c}{c + y} < R(1).$$

It is instructive to consider the shrinkage effect for the simplest shrinkage with nonrandom shrinkage coefficients. Let μ̂ = αx̄, where α is nonrandom, positive, and less than 1. For x ∼ N(μ, I), the quadratic risk of this a priori estimator is
$$R = R(\alpha) = (1 - \alpha)^2 \mu^2 + \alpha^2 y, \qquad (5)$$
where y = n/N. The minimum of R(α) is achieved for α = α0 = μ²/(μ² + y) and is equal to
$$R_0 = R(\alpha_0) = \frac{y\mu^2}{\mu^2 + y}. \qquad (6)$$
Thus, the standard quadratic risk y is multiplied by the factor μ²/(μ² + y) < 1. In traditional applications, if the dimension is not high, the ratio y is of the order of magnitude 1/N, and the shrinkage effect proves to be insignificant even for an a priori bounded localization of the parameters. However, if the accuracy of measurements is low and the variance of the variables is so large that it is comparable with N, shrinkage can considerably reduce the quadratic risk. Shrinkage "pulls" estimators down to the coordinate origin; this means that shrinkage estimators are not translation invariant. The question arises of their sensitivity to the choice of
the coordinate system and of the origin. In an abstract setting, it is quite unclear how to choose the coordinate center toward which estimators are "pulled." The center of a many-dimensional population may be located, generally speaking, at any faraway point of the space, and then shrinkage may be quite inefficient. However, in practical problems, some restrictions on the region of parameter localization always exist, and there is some information on the central point. As a rule, the practical investigator knows the region of parameter localization in advance. In view of this, it is quite obvious that the standard sample mean estimator must be improvable, and it may be improved, in particular, by shrinkage. Note that this reasoning has not yet attracted the attention of researchers that it deserves, and this fact leads to the mass usage of the standard estimator in problems where the quality of estimation could obviously be improved.

It is natural to expect that, inasmuch as the shrinkage coefficient in the James–Stein estimator is random, it decreases the quadratic risk less efficiently than the best nonrandom shrinkage. Compare the quadratic risk R_JS of the James–Stein estimator (2) with the quadratic risk R0 of the best a priori estimator (6).

Proposition 2. For n-dimensional observations x ∼ N(μ, I) with n > 2, we have
$$R_{JS} \le R_0 + 4\,\frac{n-1}{N^2}\,\frac{1}{\mu^2 + y} \le R_0 + 4/N.$$

Proof. We start from Proposition 1. Denote y2 = (n − 2)/N. Using the properties of moments of inverse random variables (E(1/x̄²) ≥ 1/Ex̄²), we find that
$$R_{JS} = y - y_2^2\, E(\bar{x}^2)^{-1} \le y - y_2^2\, (E\bar{x}^2)^{-1} = R_0 + 4(n-1)N^{-2}/(\mu^2 + y).$$
Thus, for large N, the James–Stein estimator is practically not worse than the unknown best a priori shrinkage estimator that could be constructed if the length of the vector were known.
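The quantities in Proposition 2 are easy to tabulate. The short sketch below (an added illustration with arbitrarily chosen n, N, and μ²) computes the optimal a priori shrinkage coefficient α0, the risk R0 from (6), and the upper bound of Proposition 2 for the James–Stein risk.

```python
def shrinkage_bounds(n=50, N=25, mu2=1.0):
    """Best a priori shrinkage risk (6) and the James-Stein bound of Proposition 2."""
    y = n / N                                  # the ratio dimension / sample size
    alpha0 = mu2 / (mu2 + y)                   # optimal nonrandom shrinkage coefficient
    r0 = y * mu2 / (mu2 + y)                   # risk (6) of the best a priori estimator
    js_bound = r0 + 4 * (n - 1) / (N ** 2 * (mu2 + y))   # Proposition 2
    print(f"y = {y:.3f}, alpha0 = {alpha0:.3f}, R0 = {r0:.3f}, "
          f"James-Stein bound = {js_bound:.3f}")

shrinkage_bounds()
```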
Application in regression analysis

Consider the regression model
$$Y = X\beta + \varepsilon, \qquad \varepsilon \sim N(0, I),$$
where X is a nonrandom rectangular matrix of size N × n of full rank, β ∈ R^n, and I is the N × N identity matrix. The standard minimum square solution leads to the estimator β̂0 = (XᵀX)⁻¹XᵀY, which is used in applied problems and included in most applied statistical software. The effect of applying the James–Stein estimator to shrink the vectors β̂0 was studied in [29]. Let the (known) design matrix be such that XᵀX is the identity matrix. Then, the problem of constructing a regression model of best quality (in the sense of the minimum sum of squared residuals) is reduced to the estimation of the vector β = EXᵀY with minimum quadratic risk. The application of Stein-type estimators allows one to choose better versions of the linear regression (see [33], [84]).
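A minimal sketch of this idea (added here for illustration; it is not the procedure of [29]): with an orthonormal design, XᵀX = I, the least-squares estimator is simply β̂0 = XᵀY and is distributed as N(β, I), so the James–Stein factor can be applied to it directly. The design, coefficients, and sizes below are arbitrary.

```python
import numpy as np

def js_regression_demo(N=200, n=40, trials=2000, seed=1):
    """Shrinkage of least-squares coefficients under an orthonormal design X^T X = I."""
    rng = np.random.default_rng(seed)
    X, _ = np.linalg.qr(rng.standard_normal((N, n)))   # orthonormal columns: X^T X = I
    beta = 0.2 * rng.standard_normal(n)                # arbitrary true coefficients
    err_ls = err_js = 0.0
    for _ in range(trials):
        Y = X @ beta + rng.standard_normal(N)
        b0 = X.T @ Y                                   # least-squares estimator, b0 ~ N(beta, I)
        b_js = (1.0 - (n - 2) / (b0 @ b0)) * b0        # James-Stein shrinkage of b0
        err_ls += np.sum((beta - b0) ** 2)
        err_js += np.sum((beta - b_js) ** 2)
    print(f"mean squared error, least squares: {err_ls / trials:.3f}")
    print(f"mean squared error, Stein-shrunk : {err_js / trials:.3f}")

js_regression_demo()
```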
This short review shows that the fundamental problem of many-dimensional statistics, estimation of the position of the center of a population, is far from being ultimately solved. The possibility of improving estimators by shrinkage draws our attention to the improvement of solutions to other statistical problems. Chapter 2 of this book presents an attempt at a systematic advance in the theory of improving estimators of expectation vectors of large dimension. In Section 2.1, generalized Stein-type estimators are studied, in which the shrinkage coefficients are arbitrary functions of the sample mean vector length. The boundaries of the decrease of the quadratic risk are found. In Section 2.2, it is established that when the dimension is large and comparable with the sample size, shrinking a wide class of unbiased estimators reduces the quadratic risk independently of distributions. In Section 2.3, the Stein effect is investigated for infinite-dimensional estimators. In Section 2.4, "component-wise" estimators are considered, defined by arbitrary "estimation functions" that present some functional transformation of each component of the sample mean vector. The quadratic risk of this estimator is minimized to within terms that are small for large dimension and sample size.

The Kolmogorov Asymptotics

In 1967, Andrei Nikolaevich Kolmogorov became interested in the dependence of the errors of discrimination on the sample size. He solved the following problem. Let x be a normal observation vector, and let x̄_ν be sample averages calculated over samples from population number ν = 1, 2. Suppose that the covariance matrix is the identity matrix. Consider the simplified discriminant function
$$g(x) = (\bar{x}_1 - \bar{x}_2)^T \bigl(x - (\bar{x}_1 + \bar{x}_2)/2\bigr)$$
and the classification rule g(x) > 0 against g(x) ≤ 0. This function leads to the probability of errors α_n = Φ(−G/√D), where G and D are quadratic functions of the sample averages having noncentral χ² distributions. To isolate the principal parts of G and D, Kolmogorov proposed to consider not one statistical problem but a sequence of n-dimensional discriminant problems in which the dimension n increases along with the sample sizes N_ν, so that N_ν → ∞ and n/N_ν → λ_ν > 0, ν = 1, 2. Under these assumptions, he proved that the probability of error α_n converges in probability:
$$\operatorname*{plim}_{n\to\infty} \alpha_n = \Phi\left(-\,\frac{J - \lambda_1 + \lambda_2}{2\sqrt{J + \lambda_1 + \lambda_2}}\right), \qquad (7)$$
where J is the square of the limit Euclidean "Mahalanobis distance" between the centers of the populations. This expression is remarkable in that it explicitly shows the dependence of the error probability on the dimension and the sample sizes. This new asymptotic approach was called the "Kolmogorov asymptotics." Later, L. D. Meshalkin and the author of this book deduced formula (7) for a wide class of populations under the assumption that the variables are independent and the populations approach each other in the parameter space (are contiguous) [45], [46]. In 1970, Yu. N. Blagoveshchenskii and A. D. Deev studied the probability of errors for the standard sample Fisher–Andersen–Wald discriminant function for two populations with unknown common covariance matrix. A. D. Deev used the fact that the probability of error coincides with the distribution function of g(x). He obtained an exact asymptotic expansion for the limit of the
error probability α. The leading term of this expansion proved to be especially interesting. The limit probability of an error (of the first kind) proved to be
$$\alpha = \Phi\left(-\,\Theta\,\frac{J - \lambda_1 + \lambda_2}{2\sqrt{J + \lambda_1 + \lambda_2}}\right),$$
where the factor Θ = √(1 − λ), with λ = λ1 λ2/(λ1 + λ2), accounts for the accumulation of estimation inaccuracies in the process of inverting the covariance matrix. It was called "the Deev formula." This formula was thoroughly investigated numerically, and good agreement was demonstrated even for moderate n, N. Note that, starting from Deev's formulas, the discrimination errors can be reduced if the rule g(x) > θ against g(x) ≤ θ is used with the threshold θ = (λ1 − λ2)/2 instead of θ = 0. A. D. Deev also noticed [18] that the half-sum of the discrimination errors can be further decreased by weighting the summands in the discriminant function. After these investigations, it became obvious that by keeping terms of the order of n/N, one obtains the possibility of using specifically multidimensional effects for the construction of improved discriminant and other procedures of multivariate analysis. The most important conclusion was that the traditional consistent methods of multivariate statistical analysis should be improvable, and a new advance in theoretical statistics is possible, aiming at obtaining nearly optimal solutions for fixed samples. The Kolmogorov asymptotics (increasing dimension asymptotics [3]) may be considered as a calculation tool for isolating leading terms in the case of large dimension. But the principal role of the Kolmogorov asymptotics is that it reveals specific regularities produced by the estimation of a large number of parameters. In a series of further publications, this asymptotics was used as the main tool for the investigation of essentially many-dimensional phenomena characteristic of high-dimensional statistical analysis. The ratio n/N became an acknowledged characteristic in many-dimensional statistics.
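For readers who want to experiment with these expressions, here is a small sketch (added for illustration) that evaluates the Kolmogorov limit (7) and the Deev correction with Θ = √(1 − λ) as reconstructed above; Φ is computed through the error function, and the argument values in the example call are arbitrary.

```python
import math

def phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def kolmogorov_error(J, lam1, lam2):
    """Limit error probability (7) in the Kolmogorov asymptotics."""
    return phi(-(J - lam1 + lam2) / (2.0 * math.sqrt(J + lam1 + lam2)))

def deev_error(J, lam1, lam2):
    """Leading term of Deev's expansion; Theta accounts for inverting the estimated covariance matrix."""
    lam = lam1 * lam2 / (lam1 + lam2)
    theta = math.sqrt(1.0 - lam)
    return phi(-theta * (J - lam1 + lam2) / (2.0 * math.sqrt(J + lam1 + lam2)))

# Example: squared Mahalanobis distance J = 4, ratios n/N_1 = n/N_2 = 0.5
print(kolmogorov_error(4.0, 0.5, 0.5), deev_error(4.0, 0.5, 0.5))
```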
In Section 5.1, the Kolmogorov asymptotics is applied to develop a theory allowing one to improve the discriminant analysis of vectors of large dimension with independent components. The improvement is achieved by introducing appropriate weights for the contributions of the independent variables in the discriminant function. These weights are used for the construction of an asymptotically unimprovable discriminant procedure. Then, the problem of the selection of variables for discrimination is solved, and the optimal selection threshold is found. But the main success in the development of multiparametric solutions was achieved by combining the Kolmogorov asymptotics with the spectral theory of random matrices, developed independently at the end of the 20th century in another field.

Spectral Theory of Increasing Random Matrices

In 1955, the well-known theoretical physicist E. Wigner studied the energy spectra of heavy nuclei and noticed that these spectra have a characteristic semicircle form with vertical derivatives at the edges. To explain this phenomenon, he assumed that the very complicated Hamiltonians of these nuclei can be represented by random matrices of high dimension. He found the limit spectrum of symmetric random matrices of increasing dimension n → ∞ with independent (above-diagonal) entries W_ij with zero expectation and the variances EW_ii² = 2v², EW_ij² = v² for i ≠ j [88]. The empirical distribution function (counting function) of the eigenvalues λ_i of these matrices,
$$F_n(u) = n^{-1}\sum_{i=1}^{n} \operatorname{ind}(\lambda_i \le u),$$
proved to converge almost surely to the distribution function F(u) with density
$$F'(u) = (2\pi v^2)^{-1}\sqrt{4v^2 - u^2}, \qquad |u| \le 2|v|$$
(the limit spectral density). This distribution was called Wigner's distribution.
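The semicircle law is easy to reproduce numerically. The sketch below (an added illustration) builds a symmetric matrix with the variances stated above, applies the customary 1/√n normalization (left implicit in the text), and compares its eigenvalue histogram with Wigner's density.

```python
import numpy as np

def wigner_demo(n=2000, v=1.0, seed=2):
    """Eigenvalue histogram of a Wigner matrix versus the semicircle density."""
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((n, n))
    W = v * (G + G.T) / np.sqrt(2.0)           # EW_ij^2 = v^2 off-diagonal, EW_ii^2 = 2v^2
    eigs = np.linalg.eigvalsh(W / np.sqrt(n))  # customary normalization by sqrt(n)
    hist, edges = np.histogram(eigs, bins=20, range=(-2 * v, 2 * v), density=True)
    centers = (edges[:-1] + edges[1:]) / 2
    semicircle = np.sqrt(np.maximum(4 * v**2 - centers**2, 0.0)) / (2 * np.pi * v**2)
    for c, h, s in zip(centers[::4], hist[::4], semicircle[::4]):
        print(f"u = {c:+.2f}   empirical {h:.3f}   semicircle {s:.3f}")

wigner_demo()
```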
In 1967, V. A. Marchenko and L. A. Pastur published the well-known paper [43] on the convergence of spectral functions of random symmetric Gram matrices of increasing dimension n → ∞. They considered matrices of the form
$$B = A + N^{-1}\sum_{m=1}^{N} x_m x_m^T,$$
where the A are nonrandom symmetric matrices with converging counting functions F_An(u) → F_A(u), and the x_m are independent random vectors with independent components x_mi such that Ex_mi = 0 and Ex_mi² = 1. They assumed that the ratio n/N → y > 0, that the distribution is centrally symmetric and invariant with respect to the numeration of the components, and that the first four moments of x_mi satisfy some tensor relations. They established the convergence F_Bn(u) → F_B(u), where the F_Bn(u) are the counting functions of the eigenvalues of B, and derived a specific nonlinear relation between the limit spectral functions of the matrices A and B. In the simplest case, when A = I, it reads
$$h(t) = \int (1 + ut)^{-1}\, dF_B(u) = (1 + ts(t))^{-1},$$
where s(t) = 1 − t + yh(t). By the inverse Stieltjes transformation, they obtained the limit spectral density
$$F_B'(u) = \frac{1}{2\pi y u}\sqrt{(u_2 - u)(u - u_1)}, \qquad u_1 \le u \le u_2,$$
where u2, u1 = (1 ± √y)². If n > N, the limit spectrum has a discrete component at u = 0 that equals 1 − N/n.
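For orientation, here is a simulation sketch (added here, not from the paper) for the pure Gram part of the matrix above, that is, B = N⁻¹ Σ x_m x_mᵀ with A = 0 and independent unit-variance components; in this case the density written above, with support endpoints (1 ± √y)², is the classical Marchenko–Pastur law, and the eigenvalue histogram should follow it.

```python
import numpy as np

def marchenko_pastur_demo(n=400, N=800, seed=3):
    """Eigenvalues of a Gram matrix versus the Marchenko-Pastur density."""
    rng = np.random.default_rng(seed)
    y = n / N
    X = rng.standard_normal((N, n))         # independent components, E x = 0, E x^2 = 1
    B = X.T @ X / N                         # Gram matrix N^{-1} sum_m x_m x_m^T
    eigs = np.linalg.eigvalsh(B)
    u1, u2 = (1 - np.sqrt(y)) ** 2, (1 + np.sqrt(y)) ** 2
    hist, edges = np.histogram(eigs, bins=20, range=(u1, u2), density=True)
    centers = (edges[:-1] + edges[1:]) / 2
    limit = np.sqrt((u2 - centers) * (centers - u1)) / (2 * np.pi * y * centers)
    for c, h, f in zip(centers[::4], hist[::4], limit[::4]):
        print(f"u = {c:.2f}   empirical {h:.3f}   limit {f:.3f}")

marchenko_pastur_demo()
```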
From 1975 to 2001, V. L. Girko created an extended limit spectral theory of random matrices of increasing dimension, published in a series of monographs (see [22]–[26]). Let us describe some of his results in general terms. V. L. Girko studied various matrices formed by linear and quadratic transformations from initial random matrices X = {x_mi} of increasing dimensions N × n with independent entries. The aim of his investigation is to establish the convergence of the spectral functions of random matrices to some limit nonrandom functions F(u) and then to establish the relation between F(u) and the limit spectral functions of nonrandom matrices. For example, for B = AᵀXXᵀA, a direct functional relation is established between the limit spectra of the nonrandom matrices A and the random matrices B. V. L. Girko calls such relations "stochastic canonical equations" (we prefer to call them "dispersion equations"). In his first (Russian) monograph, "Random Matrices," published in 1975, V. L. Girko assumes that all variables are independent, that the spectral functions of the nonrandom matrices converge, and that the generalized Lindeberg condition holds: for any τ > 0,
$$N^{-1}\sum_{m=1}^{N} n^{-1}\sum_{i=1}^{n} x_{mi}^2\, \operatorname{ind}\bigl(x_{mi}^2 \ge \tau\bigr) \xrightarrow{P} 0 \quad \text{as } n \to \infty.$$
The main result of his investigations in this monograph was a number of limit equations connecting the spectral functions of different random matrices and the underlying nonrandom matrices. In the monograph [25] (1995), V. L. Girko applied his theory specifically to sample covariance matrices. He refines his theory by withdrawing the assumption of convergence of the spectral functions of the true covariance matrices. He postulates a priori some "canonical" equations, proves their solvability, and only then reveals their connection with the limit spectra of random matrices. Then, V. L. Girko imposes more restrictive requirements on the moments (he assumes the existence of four uniformly bounded moments) and finds the limit values of separate (ordered) eigenvalues. The investigations of other authors into the theory of random Gram matrices of increasing dimension differ by more special settings and by less systematic results. However, it is necessary to cite the paper by Q. Yin, Z. Bai, and P. Krishnaiah (1984), who were the first to establish the existence of limits for the least and the largest eigenvalues of Wishart matrices. In 1998, Bai and Silverstein [9] discovered that the eigenvalues of increasing random matrices stay within the boundaries of the limit spectrum with probability 1.

Spectral Functions of Sample Covariance Matrices

Chapter 3 of this book presents the latest development in the spectral theory of sample covariance matrices of large dimension. Methods of the spectral theory of random matrices were first applied
to sample covariance matrices in the paper [63] by the author of this monograph (1983). A straightforward functional relation was found between the limit spectral functions of sample covariance matrices and the limit spectral functions of the unknown true covariance matrices. Let us cite this result, since it is of special importance to multiparametric statistics. The spectra of the true covariance matrices Σ of size n × n are characterized by the "counting" function
$$F_{0n}(u) = n^{-1}\sum_{i=1}^{n} \operatorname{ind}(\lambda_i \le u), \qquad u \ge 0,$$
of the eigenvalues λ_i, i = 1, 2, . . ., n. Sample covariance matrices calculated over samples X = {x_m} of size N have the form
$$C = N^{-1}\sum_{m=1}^{N} (x_m - \bar{x})(x_m - \bar{x})^T,$$
where x̄ is the sample average vector.

Theorem 1. If n-dimensional populations are normal N(0, Σ), n → ∞, n/N → λ > 0, and the functions F_0n(u) → F_0(u), then for each t ≥ 0 the limit
$$h(t) = \lim_{n\to\infty} E\, n^{-1}\operatorname{tr}(I + tC)^{-1} = \int (1 + ts(t)u)^{-1}\, dF_0(u)$$
exists, and
$$E(I + tC)^{-1} = (I + ts(t)\Sigma)^{-1} + \Omega_n, \qquad (8)$$
where s(t) = 1 − λ + λh(t) and Ω_n → 0 (here the spectral norms of matrices are used).
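To see the dispersion equation of Theorem 1 at work, the following sketch (added here as an illustration) simulates normal data with a known diagonal Σ, solves the finite-n analogue of the scalar relation, h = n⁻¹ tr(I + ts(t)Σ)⁻¹ with s(t) = 1 − n/N + (n/N)h, by simple fixed-point iteration, and compares the result with the sample quantity n⁻¹ tr(I + tC)⁻¹; the sizes, spectrum, and value of t are arbitrary.

```python
import numpy as np

def dispersion_equation_check(n=200, N=400, t=1.0, seed=4):
    """Compare n^{-1} tr(I + tC)^{-1} with the solution of the dispersion equations of Theorem 1."""
    rng = np.random.default_rng(seed)
    y = n / N
    sigma_eigs = np.linspace(0.5, 2.0, n)        # arbitrary spectrum of the true Sigma
    # empirical side: sample covariance matrix of N(0, Sigma) data with diagonal Sigma
    X = rng.standard_normal((N, n)) * np.sqrt(sigma_eigs)
    Xc = X - X.mean(axis=0)
    C = Xc.T @ Xc / N
    h_emp = np.mean(1.0 / (1.0 + t * np.linalg.eigvalsh(C)))
    # theoretical side: fixed-point iteration for h(t)
    h = 1.0
    for _ in range(200):
        s = 1.0 - y + y * h
        h = np.mean(1.0 / (1.0 + t * s * sigma_eigs))
    print(f"empirical  n^-1 tr(I + tC)^-1 : {h_emp:.4f}")
    print(f"dispersion-equation solution  : {h:.4f}")

dispersion_equation_check()
```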
In 1995, the author of this book proved that these relations remain valid for a wide class of populations restricted by the values of two specific parameters: the maximum fourth central moment of a projection of x onto nonrandom axes (defined by vectors e of unit length),
$$M = \sup_{|e|=1} E\,(e^T x)^4 > 0,$$
and a measure of the dependence of the variables,
$$\nu = \sup_{\|\Omega\|=1} \operatorname{var}(x^T \Omega x / n), \qquad \gamma = \nu / M,$$
where the Ω are nonrandom, symmetric, positive semidefinite matrices of unit spectral norm. Note that for independent components of x, the parameter ν ≤ M/n. For normal distributions, γ ≤ 2/(3n). The situation when the dimension n is large, the sample size N is large, the ratio n/N is bounded, the maximum fourth moment M is bounded, and γ is small may be called the situation of multiparametric statistics applicability.

In Section 3.1, the latest achievements of the spectral theory of large Gram matrices and sample covariance matrices are presented. Theorem 1 is proved under the weakest assumptions for a wide class of distributions. The analytical properties of h(z) are investigated, and the finite location of the limit spectra is established. In Section 3.2, dispersion equations similar to (8) are derived for infinite-dimensional variables. Note that the regularization of the inverse sample covariance matrix C⁻¹ by the addition of a positive "ridge" parameter α > 0 to the diagonal of C before inversion produces the resolvent of C involved in Theorem 1. Therefore, the ridge regularization of linear statistical procedures leads to functions admitting the application of our dispersion equations, with remainder terms that are small in the situation of multiparametric statistics applicability. The theorems proved in Section 3.1 allow us to formulate the Normal Evaluation Principle presented in Section 3.3. It states that the limiting expressions of standard quality functions for regularized multivariate statistical procedures are determined by only two moments of the variables and may be approximately evaluated under the assumption of population normality. We say that a function f(x) of a variable x from a population S allows ε-normal evaluation in the square mean if for S there exists a normal distribution y ∼ N(μ, Σ) with μ = Ex and Σ = cov(x, x) such that E(f(x) − f(y))² ≤ ε.
Proposition 3. Under the conditions of multiparametric statistics applicability (large N, bounded n/N and M, and small γ), the principal parts of a number of standard quality functions of regularized linear procedures allow ε-normal evaluation with small ε > 0.

This means that, in the multiparametric case, it is possible to develop (regularized) statistical procedures such that

1. their standard quality functions have a small variance and allow reliable estimation from sample data;
2. the quality of these procedures depends only weakly on the distributions.
Constructing Multiparametric Procedures

In Chapter 4, the spectral theory of sample covariance matrices is used for the systematic construction of practical, approximately unimprovable procedures. Using dispersion equations, one can calculate the leading terms of quality functions in terms of parameters, excluding the dependence on random values, or, conversely, express quality functions only in terms of observable data, excluding unknown parameters. To choose an essentially multivariate statistical procedure of best quality, one may solve two alternative extremum problems:

(a) find an a priori best solution of the statistical problem, using the expression of the quality function as a function of the parameters only;
(b) find the best statistical rule, using the quality function presented as a function of the observable data only.

For n ≪ N, all multiparametric solutions improved in this way pass to the standard consistent ones. In the case of large n and N, the following practical recommendations may be offered.

1. For multivariate data of any dimension, it is desirable to apply always stable, nondegenerate, approximately optimal multiparametric solutions instead of traditional methods that are consistent only for fixed dimension.
2. It is plausible to compare different multivariate procedures theoretically, for large dimension and large sample size, by quality functions expressed in terms of the first two moments of the variables.
3. Using the multiparametric technique, it is possible to calculate the principal parts of quality functions from sample data, compare different versions of procedures, and choose better ones for treating concrete samples.

Let us describe the technology of constructing unimprovable multiparametric procedures.

1. The standard multivariate procedure is regularized, and a wide class of regularized solutions is chosen.
2. The quality function is studied, and its leading term is isolated. Then one of two tactics, (a) or (b), is followed.

Tactic "a"
1. Using the dispersion equations, the observable variables are excluded, and the principal part of the quality function is presented as a function of the parameters only.
2. The extremum problem is solved, and an a priori best solution is found.
3. The parameters in this solution are replaced by statistics (having small variance), and a consistent estimator of the best solution is constructed.
4. It remains to prove that this estimator leads to a solution whose quality function approximates the best quality function well.

Tactic "b"
1. Using the dispersion equations, the unknown parameters are excluded, and the principal part of the quality function is expressed as a function of statistics only.
2. An extremum problem is solved, and the approximately best solution is obtained, depending only on observable data.
3. It is proved that this extremum solution provides the best quality to within the remainder terms of the asymptotics.

In Chapter 4, this multiparametric technique is applied to the construction of a number of multivariate statistical procedures that are surely not degenerate and are approximately optimal independently
of distributions. Among these are the problems of optimal estimation of inverse covariance matrices, optimal matrix shrinkage for the sample mean vector, and minimization of the quadratic risk of sample linear regression. In 1983, the author of this book found [63] conditions providing the minimum of the limit error probability in the discriminant analysis of large-dimensional normal vectors x ∼ N(μ_ν, Σ), ν = 1, 2, within a generalized class of linear discriminant functions. The inverse sample covariance matrix C⁻¹ in the standard "plug-in" linear discriminant function is replaced by the matrix
$$\Gamma(C) = \int_{t>0} (I + tC)^{-1}\, d\eta(t),$$
where η(t) are arbitrary functions of finite variation. In [63], the extremum problem is solved, and a Stieltjes equation is derived for the unimprovable function η(t) = η_0(t). This equation was used in [82] by V. S. Stepanov for treating some real (medical and economic) data. He found that it provides remarkably better results even for moderate n, N ≈ 5–10. In Section 5.2, this method is extended to a wide class of distributions.

Optimal Solution to Empirical Linear Equations

Chapter 6 presents the development of a statistical approach to finding minimum square pseudosolutions of large systems of empirical linear equations whose coefficients are random variables with a known distribution function. The standard solution of the system of linear algebraic equations (SLAE) Ax = b using a known empirical random matrix of coefficients R and an empirical right-hand side vector y can be unstable or nonexistent if the variance of the coefficients is sufficiently large. The minimum square solution x̂ = (RᵀR)⁻¹Rᵀy with the empirical matrix R and empirical right-hand sides can also be unstable or nonexistent. These difficulties are produced by the ill-posedness of the problem and the inconsistency of the random system. The well-known Tikhonov regularization methods [83] are based on a rather artificial requirement of minimum complexity; they guarantee the existence of a pseudosolution but minimize neither the quadratic risk nor the residuals. Methods of the well-known confluent analysis [44] lead
to the estimator x̂ = (RᵀR − λI)⁻¹Rᵀy, where λ ≥ 0 and I is the identity matrix. These estimators are even more unstable and surely do not exist when the standard minimum square solution does not exist (because the coefficient matrices have to be estimated in addition). In Chapter 6, the extremum problem is solved: the quadratic risk of pseudosolutions is minimized within a class of arbitrary linear combinations of regularized pseudosolutions with different regularization parameters. First, in Section 6.1, an a priori best solution is obtained by averaging over all matrices A with fixed spectral norm and all vectors b of fixed length. Section 6.2 presents the theoretical development providing methods for the construction of asymptotically unimprovable solutions of the unknown SLAE Ax = b from the empirical coefficient matrix R and right-hand side vector y, under the assumption that all entries of the matrix R and components of the vector b are independent and normally distributed.
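As a simple numerical motivation for regularized pseudosolutions (a sketch added here; the unimprovable shrinkage-ridge solutions of Chapter 6 are more refined than this), the fragment below generates a random system with noisy coefficients and compares the naive minimum square solution with a ridge-regularized pseudosolution x̂ = (RᵀR + αI)⁻¹Rᵀy for a fixed, arbitrarily chosen α.

```python
import numpy as np

def ridge_pseudosolution_demo(n=80, noise=0.3, alpha=1.0, seed=5):
    """Naive versus ridge-regularized pseudosolution of an empirical SLAE."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n)) / np.sqrt(n)       # unknown true coefficient matrix
    x_true = rng.standard_normal(n)
    b = A @ x_true
    # empirical (observed) data: coefficients and right-hand side with random errors
    R = A + noise * rng.standard_normal((n, n)) / np.sqrt(n)
    y = b + noise * rng.standard_normal(n)
    x_naive = np.linalg.solve(R.T @ R, R.T @ y)                        # (R^T R)^{-1} R^T y
    x_ridge = np.linalg.solve(R.T @ R + alpha * np.eye(n), R.T @ y)    # ridge regularization
    print(f"quadratic error, naive pseudosolution : {np.sum((x_naive - x_true) ** 2):.2f}")
    print(f"quadratic error, ridge pseudosolution : {np.sum((x_ridge - x_true) ** 2):.2f}")

ridge_pseudosolution_demo()
```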
CHAPTER 2
FUNDAMENTAL PROBLEM OF STATISTICS MULTIPARAMETRIC CASE Analyzing basic concepts of statistical science, we must acknowledge that there is an essential difference between goals and settings of a single-parameter statistical investigations and statistical problems involving a number of unknown parameters. In case of one parameter, only one problem arises, the problem of achieving more and more accurate approximation to its unknown magnitude: and the most important property of estimators becomes their consistency. In case of many-parameter models, the investigator starts with the elucidation of his interest in the statistical problem and formulation of criteria of quality and choosing quality function. Separate parameters are no more of interest, and the problem of the achievement of maximum gain in quality arises. The possibility of infinite increase of samples is replaced by the necessity of optimal statistical decisions under fixed samples available for the investigator. Thus, many-parameter problems require a more appropriate statistical setting that may be expressed in terms of the gain theory and of statistical decision functions. Such approach was proposed by Neumann and Morgenstern (1947) and by A. Wald (1970) and led to the concept of estimators dominance and search for uniformly unimprovable (admissible) estimators minimizing risk function (see [91]). Suppose D is a class of estimators and R(θ, d) is the risk function in estimating the parameter θ from the region Θ with the estimator d. The estimator d1 is said to dominate the estimator d2 if R(θ, d1 ) ≤ R(θ, d2 ) for all θ ∈ Θ. The conventional approach to theoretical developments is the construction of estimators dominating some classes of estimators. The subclass K ∈ D of estimators 21
22
2. FUNDAMENTAL PROBLEM OF STATISTICS
is called complete if for each estimator d ∈ D − K, there exists an estimator d∗ ∈ K such that d∗ dominates d. The main progress achieved in the theory of estimator dominance is connected with the Cramer–Rao theorem and minimizing the quadratic risk. The quadratic risk function proved to be most convenient and most acceptable, especially for many-dimensional problems. In 1956, C. Stein found that for the dimension greater 2, a simple transformation of standard sample average estimator of expectation vector by introducing a common scalar multiple (shrinking) may reduce the quadratic risk uniformly in parameters, presenting an improved estimator. This discovery showed that well-developed technique of consistent estimation is not best for many-dimensional problems, and a new statistical approach is necessary to treat manyparameter case. Practically, for example, in order to locate better the point in three-dimensional space by a single observation, the observer should refuse from straightforward interpretation of the observed coordinates and multiply them by a shrinkage factor less 1. A series of investigations followed (see Introduction), in which it was established that nearly all most popular estimators allow an improvement, and this effect appears for a wide class of distributions. The gain could be measured by the ratio of the number of parameters to sample size (the Kolmogorov ratio). However, the authors of most publications analyze only simple modifications of estimators like shrinking, and till now, an infinite variety of other possible improvements remains not explored. In this chapter, we present a more detailed investigation of shrinkage effect and its application in the fundamental problem of unimprovable estimation of expectation vectors.
SHRINKAGE OF SAMPLE MEAN VECTORS
2.1.
23
SHRINKAGE OF SAMPLE MEAN VECTORS
Assume that n-dimensional observation vector x has independent components, the expectation vector μ = Ex exists, and vari¯ be a sample average vector ance of all components is 1. Let x calculated over a sample of size N . Denote y = n/N. Let the absolute value of a vector denote its length, and the square of a vector be the square of its length. We have the expectation E¯ x2 = μ 2 + y. Remark 1. In the family K(0) of estimators with a priori ¯ is sample average vecshrinkage of the form μ = α¯ x, where x tor and α is a nonrandom scalar, the minimum is achieved for α = α0 = 1 − y/( μ2 + y) and the quadratic risk is def
¯ )2 = R0 = E( ¯ )2 = y μ−x E( μ − α0 x
μ 2 . μ 2 + y
(1)
Definition 1. Let a class of estimators K1 be chosen for vectors μ . We call an estimator of μ composite a priori estimator if it is a collection of estimators from K0 with nonintersecting subsets of components of μ that includes all components. Theorem 2.1. For composite a priori estimators from K(0) , the sum of quadratic risks is not larger than the quadratic risk of estimating all components with a single a priori shrinkage coefficient. Proof. We write R0 in the form y − y 2 /( μ2 + y). Let angular brackets denote averaging over subsets μ (j), j = 1, . . ., k of the set of all components (μ1 , . . ., μn ) with weights n(j)/n, j = 1, . . ., k. Denote y(j) = n(j)/N for all j. Using the concavity property of the inverse function, we find that
y
y(j) μ 2 (j) + y(j)
=y
1 μ 2 (j)/y(j) + 1 =
Q.E.D.
y2 . μ 2 + y
≥
y = μ2 (j)/y(j) + 1
24
2. FUNDAMENTAL PROBLEM OF STATISTICS
Now, let a half of the vector μ components be equal to zero. Then in the family K(0) , a composite estimator can have the quadratic risk nearly twice as small. Shrinkage for Normal Distributions Consider an n-dimensional population N( μ, Σ) with the identity covariance matrix Σ = I. Consider a class of estimators μ of μ2 < ∞ vectors μ = (μ1 , . . ., μn ) over samples X of size N with E having the form μ = α¯ x, where α = α(X) is a scalar function. Let us restrict ourselves with the case when the function α(·) is ¯ is a sufficient differentiable with respect to all arguments. Since x statistics, it suffices to consider a complete subclass of estimators of the form μ = ϕ(|¯ x|)¯ x. Denote yν = (n − ν)/N, ν = 1, 2, 3, 4. Let K be a set of estimators of the form μ = ϕ(|¯ x|)¯ x, where ϕ(r) < 1 is a differentiable function of r = |¯ x| > 0 such that the product r2 ϕ(r) is uniformly bounded on any segment [0, b]. This class includes the “plug-in” estimator μ 1 = ϕ1 (|¯ x|)¯ x, where ϕ1 (r) = 1 − y/r2 for r > 0, and the James–Stein estimator μ S = ϕS (|¯ x|)¯ x, where ϕS (r) = 1 − y2 /r2 , and the majority of shrinkage estimators considered in the reviews [29], [30], and [33]. Let us compare the quadratic risk R1 = R(ϕ1 ) = E[ μ − ϕ1 2 ¯ (|¯ x|) x] of the estimator μ 1 with the quadratic risk ¯ ]2 of the estimator μ μ − ϕS (|¯ x|) x S . RS = R(ϕS ) = E[ Remark 2. For n ≥ 3, the moment E|¯ x|−2 exists, R1 = y − yy4 E|¯ x|−2 and RS = y − y22 E|¯ x|−2 . Indeed, rewrite the quadratic risk of μ 1 as follows: ¯ /¯ ¯ ]2 = E[( ¯ )2 + 2y( ¯ )T x μ − ϕ1 (|¯ x|) x μ−x μ−x x2 ]. R1 = E[ x2 + y 2 /¯ ¯) = Here in the second term under the sign of E, we have ( μ−x ¯. N −1 (∇f )/f , where f is the normal distribution density for x Integrating by parts, we find that this term equals 2y N
0
∞
¯ T ∇f n−2 1 x d¯ x = −2 E 2. 2 ¯ ¯ x x N
SHRINKAGE FOR NORMAL DISTRIBUTIONS
25
Thus, R1 = E[y − 2yy2 /|¯ x|2 + y 2 /¯ x2 ] = y − yy4 E|¯ x|−2 . The calculation of RS is similar. Remark 3. For n ≥ 4, we have 1 E 2= ¯ x
0
N 2 N = 2
=
∞
exp(−u|¯ x|2 ) du = ∞ t −n/2 2N dt = (1 + t) exp − μ 2 1+t 0 1 n/2−2 2N (1 − z) exp − μ z dz, 2 0
(2)
and the relations hold true ( μ2 + y)−1 ≤ E|¯ x|−2 ≤ min[y2−1 , ( μ2 + y4 )−1 ]; if μ = 0,
−2
then E|¯ x|
=
(3)
y2−1 .
Indeed, the first inequality in (3) follows from the inequality EZ −1 ≥ (EZ)−1 for positive Z, and the second inequality in (3) is a consequence of a monotone dependence on μ 2. Remark 4. For n ≥ 4, we have R0 < RS < R1 . Indeed, let us substitute z = 2x/N in (2) and write the difference RS − R0 in the form y
N/2
2
h(x) exp(− μ x − yx) dx + y 2
2
0
∞
exp(− μ2 x − yx) dx,
N/2
where
2 h(x) = 1 − 1 − n
2
2x 1− N
n/2−2 exp(xy).
Here h(0) = 1 − (1 − 2/n)2 > 0, and h(N/2) = 1. The derivative h (x) vanishes only if x = y/2. For this x, the function h(x) = h(y/2) = 1 − (1 − 2/n)2 (1 − 4/n)n/2−2 exp(2).
26
2. FUNDAMENTAL PROBLEM OF STATISTICS
This expression decreases monotonously from 1 at n = 4 to 0 as n → ∞. It follows h(y/2) > 0. We conclude that h(x) > 0 for any 0 ≤ x ≤ N/2. Consequently, we have RS > R0 . The difference R1 − RS = 4N −2 E|¯ x|−2 . The remark is justified. Remark 5. For n ≥ 4, the risk functions R1 and RS of the estimators μ 1 and μ S increase monotonously as | μ| increases; if μ = 0, then R1 = 2[1 + 2/(n − 2)]/N, RS = 2/N ; if μ = 0, the inequalities hold y − yy4 /( μ2 + y4 ) ≤ R1 ≤ y − yy4 /( μ2 + y), y − y22 /( μ2 + y4 ) ≤ RS ≤ y − y22 /( μ2 + y). Note that composite James–Stein estimators can have a greater summary quadratic risk than the quadratic risk of all-component estimator: for μ = 0, we have the quadratic risk RS = 2/N independently of n ≥ 3, while the quadratic risk of the composite estimator can be some times greater. ¯. Now we consider arbitrary estimators of the form μ = ϕ(|¯ x|) x Lemma 2.1. For n ≥ 3 and μ ∈ K, the quadratic risk ¯ )2 = μ − ϕ(r) x R(ϕ) = E( μ−μ )2 = E( = E[ μ2 + 2yϕ(r) − 2r2 ϕ(r) + 2rϕ (r)/N + r2 ϕ2 (r)], (4) where r = |¯ x|. ¯T μ Proof. We transform the addend of R(ϕ) including x by integrating by parts. Replace ¯2 + ¯T μ =x x
1 T ¯ ∇ln f (¯ x x), N
¯ , and ∇ is a vector where f (¯ x) is the normal density for the vector x ¯ . We find that of derivatives with respect to components of x ¯ ) = −E[rϕ (r) + nϕ(r)] x) = −E∇T (ϕ(r) x Eϕ(r)¯ xT ∇ln f (¯ ¯ → 0 as |¯ since the product ϕ(|¯ x|) f (¯ x) x x| → ∞. The lemma statement follows.
SHRINKAGE FOR NORMAL DISTRIBUTIONS
27
Denote by g(r) the distribution density of random value r = |¯ x| ¯ ∼ N( for x μ, I/N ). We calculate the logarithmic derivative y
g (r) 2 =N −r+μ cth(|μ|N r) . g(r) r
(5)
Denote ϕE (r) = 1 −
y1 1 g (r) | μ| 1 + . = cth(| μ|N r) − 2 r rN g(r) r N r2
(6)
For r → 0, we have ϕE (r) → 2/3 μ 2 N , and there exists the derivative ϕ E (+0). Let us prove that the function ϕE (r) provides the minimum of R(ϕ). Denote yν = (n − ν)/N, ν > 0. Theorem 2.2. For n ≥ 3, the estimator μ E = ϕE (r)¯ x, with ϕE (r) of the form (6) and r = |¯ x| belonging to the class K, and its quadratic risk RE = R(ϕE ) = E( μ−μ E )2 = μ 2 − Er2 ϕ2E (r) = 1 μ 2 1 . = y1 1 − y2 E 2 + E 2 2 − 2 r r N sh (|μ|N r) The quadratic risk of any estimator μ ∈ K equals R(ϕ) = R(ϕE ) + Er2 (ϕ(r) − ϕE (r))2 . Proof. We start from (4). First, integrating by parts, we calculate the contribution of the summand preceding to the last. We find that Erϕ (r) = rϕ (r)g(r) dr = − rϕ(r)g (r) dr − Eϕ(r) = −Erϕ(r)
g (r) − Eϕ(r). g(r)
The substitution in (4) transforms this expression to R(ϕ) = μ 2 + E r2 ϕ2 (r) − 2rϕ(r)ϕE (r) = =μ 2 − Er2 ϕ2E (r) + E(ϕ(r) − ϕE (r))2 .
28
2. FUNDAMENTAL PROBLEM OF STATISTICS
We obtain the first expression for RE and the expression for R(ϕ) in the formulation of this theorem. It remains to calculate RE . Substituting (6) we find that y2 1 g (r) 2 −E r− = RE = + r N g(r) y2 2 y2 2 1 g (r) 2 2 =μ −E r− . r− − g (r) dr − E r N r N g(r) μ 2
We calculate the integral integrating by parts. Obviously, it equals x|−2 . We obtain −1 + y2 E|¯ 1 1 g (r) 2 . RE = y − y1 y3 E 2 − E r N g(r)
(7)
Also integrating by parts, we calculate the last term 1 g (r) 2 1 ∞ y2 E = − r + μ cth(μN r) g (r) dr = N g(r) N +0 r 1 μ2 1 , = (1 + y2 E 2 ) + E 2 N r sh (|μ|N r) where μ = | μ|, in view of the fact that contributions of small r → 0 and r → ∞ vanish. The second expression for RE follows. Theorem 2.1 is proved. Remark 6. For n ≥ 3, the following inequalities are valid: 1 1 1 1 1 g (r) 1 + y2 E 2 ≤ E ≤ 1 + y1 E 2 , N r N g(r) N r 1 1 1 y1 − y1 y2 E 2 ≤ RE ≤ y1 − y1 y2 E 2 + N −2 E 2 , r r r where r = |¯ x|.
SHRINKAGE FOR A WIDE CLASS OF DISTRIBUTIONS
29
Remark 7. For n ≥ 4, we have 1 y2 y2 + ≤ RE ≤ y1 1 − 2 , y1 1 − 2 μ + y4 μ +y 2N 1 4 2 1 ≤ R1 − RE ≤ , ≤ RS − RE ≤ , N N N N where μ = | μ|. These inequalities follow from (7), Remarks 5 and 6, and inequalities (3). Since the estimator μ E has the minimum risk in K, one can conclude that, in this class, the estimator μ 1 provides the minimum quadratic risk with the accuracy up to 4/N , and the quadratic risk of the James–Stein estimator is minimum with the accuracy up to 2/N . Shrinkage for a Wide Class of Distributions From [72], it is known that in case of a large number of boundedly dependent variables, smooth functionals of sample moments approach functions of normal variables. We use the techniques developed in [71] for studying the effect of shrinkage of sample averages in this general case. Let us restrict distributions only by the following two requirements. A. There exist expectation vectors and the fourth moments of all components of the observation vector x. B. The covariance matrix cov(x, x) = I, where I is the identity matrix. The latter condition is introduced for the convenience. o Denote μ = Ex, x = x − μ . Let us introduce two parameters (see Introduction): o
¯ )4 > 0 M = sup E(eT x e
and
o
o
¯ T Ωx ¯ /n)/M, γ = sup var(x
(8)
Ω
where the supremes are calculated for all nonrandom unit vectors e and for all nonrandom, symmetric, positively semidefinite
30
2. FUNDAMENTAL PROBLEM OF STATISTICS
matrices of unit spectral norm. Note that in the case of normal observations, under condition B, we have M = 3, γ = 2/3n, and for any distribution with M < ∞ and independent components of x, we have the inequality γ ≤ 2/n + 1/n2 . Denote y = n/N, r02 = μ 2 + y. Remark 8. Under Assumptions A and B, def
x−μ )2 ≤ y[2 + y(1 + M γ)]/N, σ02 = var(¯ def
¯ 2 ≤ r02 [10 + y(1 + M γ)]/N. σ 2 = var x o
¯ 2 from the above we have Indeed, estimating the variance of x o
¯ 2 )2 = N −4 E(x
◦ ◦ ◦ ◦ xTm1 xm2 xTm3 xm4 , ◦
where the centered sample vectors x have the indexes m1 , m2 , m3 , m4 running from 1 to N . Nonzero contribution is provided by the summands with m1 = m2 = m3 = m4 and other summands that have pairwise coinciding indexes. Terms with m1 = m2 and m3 = m4 do not contribute to the variance. We find that o
◦
¯ 2 ) < N −3 E(x21 )2 + 2N −2 trΣ2 ≤ N −1 y[2 + y (1 + M γ)], σ02 = var(x o
o
¯ ) + 2var(x ¯ 2 ) ≤ [10 + y(1 + M γ)] r02 /N. x2 ) ≤ 8 var( μT x σ 2 = var(¯ The remark is justified. We may interpret the number b = nγ as a measure of dependence. We say that the variables are boundedly dependent if the product yγ < b/N < 1. In the general case without restrictions on distribution, it is convenient to replace the traditional Stein estimator by a regularized Stein estimator. Denote μ R = ϕR (|¯ x|)¯ x,
def
ϕR (r) = 1 − yr−2 ind(r2 > τ ), 1/2 ≤ ϕR (r) ≤ 1.
τ ≥ 0,
SHRINKAGE FOR A WIDE CLASS OF DISTRIBUTIONS
31
Theorem 2.3. Under Assumptions A and B for τ = y/2, the quadratic risk of the estimator μ R is R(ϕR ) ≤ R0 + 72[1 + y(1 + M γ)/6]/N.
(9)
Proof. Let us write E( μ−μ R )2 in the form o
o
¯2 − μ ¯ )/|¯ x|−2 + 2yEi(y − x T x R(ϕR ) = y − y 2 Ei|¯ x|2 ,
(10)
x|2 ≤ y/2), where i = ind(|¯ x|2 > y/2). Denote p = E(1 − i) = P(|¯ where the probability p = P(¯ x2 ≤ y/2) ≤ P(|ξ| > y/2) ≤ ≤ 4Eind(|ξ| > y/2) ξ 2 /y 2 ≤ 4σ 2 /y 2 , ¯ 2 . We find that and ξ = μ 2 + y − x x|−2 − r0−2 ) + r0−2 − p/r02 . Ei|¯ x|−2 = Ei(|¯ ¯ 2 > y/2 Here in the first term of the right-hand side, we may set x and therefore it is greater than p/r02 . We find that Ei|¯ x|−2 > r0−2 , and it follows that the sum of the first two terms in the right-hand side of (10) is not greater R0 = y − y 2 /r02 . The third term in the right-hand side of (10) presents a correco
o
¯2 − μ ¯ ) = 0, this term equals tion to R0 . Since E(y − x T x o
o
o
o
¯2 − μ ¯ ) r0−2 + 2yEi(y − x ¯2 − μ ¯ ) (|¯ 2yEi(y − x T x T x x|−2 − r0−2 ) = o
o
o o r 2 − |¯ ¯2 − μ ¯ x|2 T x y−x T 0 2−μ ¯ ¯ x + 2yEi(y − , (11) x ) = 2yEj r02 |¯ x|2 r02
where j = i − 1. Let us apply the Schwarz inequality. The first summand of the right-hand side of (11) in absolute value is not greater than √ 2yr0−2 p
o o 2 T ¯ ) + var( ¯) , var(x μ x
(12)
32
2. FUNDAMENTAL PROBLEM OF STATISTICS o
o
¯ 2 ) = σ02 and var( ¯ ) = μ2 /N. We find that (12) is where var(x μT x not greater than √ 4(σ0 + | μ|/ N )σ/r02 ≤ 4 40 + 24x + 2x2 /N ≤ 6(6 + x)/N, where x = y(1 + M γ). In the second term of the right-hand side of (11), we have iy|¯ x|−2√≤ 2. Similarly, this term does not exceed the quantity 4(σ0 + | μ|/ N ) σ/r02 . Thus, the expression (11) is no more than twice as large. Theorem 2.3 is proved. For symmetric distributions, the upper estimate of the remainder terms in (9) is approximately twice as small. The numeric coefficient in (9) can be decreased by increasing the threshold τ . Conclusions Thus, in all considered cases, the shrinkage of expectation vector estimators reduces the quadratic risk approximately by the factor μ 2 /( μ2 + y), where μ 2 is the square of the expectation vector length, the parameter y = n/N , n is the observation vector dimension, and N is the sample size. For normal distributions with unit covariance matrix, Theorem 2.2 presents a general formula for the quadratic risk of shrinkage estimators from the class K, with the shrinkage coefficient depending only on sample average vector length. This theorem also states the lower boundary of quadratic risks. The majority of improved estimators suggested until now belong to the class K. All these estimators reduce the quadratic risk as compared with James–Stein estimator by no more than 2/N (Remark 7). The shrinkage effect retains for a wide class of distributions with all four moments of all variables. In Theorem 2.3, it is proved that for these distributions with unit variance of all variables, the quadratic risk is the same as for normal distributions with accuracy up to c/N . For a wide class of distributions, the coefficient c depends neither on n nor on distributions as long as the ratio n/N is bounded.
SPECIAL SHRINKAGE OF NORMAL ESTIMATORS
2.2.
33
SHRINKAGE OF UNBIASED ESTIMATORS
The effect of shrinking sample mean vectors was well studied in a number of investigations published after the famous paper [80] by Stein in 1964. Most of them present an exact evaluation of the quadratic risk for the Stein-type estimators for normal and some centrally symmetric distributions. In this section, we set the problem of studying shrinkage effect for wide classes of statistical estimators. In the previous section, we showed that the reduction of the quadratic risk by shrinking is essentially multiparametric phenomenon and can be measured by the ratio of parameter space dimension n to sample size N . For small n, the gain is small and its order of magnitude is 1/N. Therefore, it is of a special interest to study the quadratic risk reduction for n comparable to N in more detail. For this purpose, the Kolmogorov asymptotics (N → ∞, n/N → y > 0) is appropriate. In this situation, the shrinkage effects are at their most. Fortunately, as is shown in [71], under these conditions, standard risk functions including the square risk function only weakly depend on distributions for a large class of populations. In this section, we study the shrinkage effect for the dimension of observations comparable to the sample size. First, we study shrinkage of unbiased estimators for normal distributions. Special Shrinkage of Normal Estimators In this section, we assume that n-dimensional vector of an original estimator θ is distributed as N(θ, I/N ), where I is the identity matrix. We consider a parametric family K(0) of shrinkage where α is a nonrandom scalar. For estimators of the form αθ, these, the quadratic risk is 2 R = R(α) = E(θ − αθ)
(1)
(here and in the following, squares of vectors denote squares of their lengths). Denote y = n/N. For the original estimator, R = R(1) = y. Minimum of R(α) is reached at α = α(0) =
34
2. FUNDAMENTAL PROBLEM OF STATISTICS
θ2 /(θ2 + y), and it equals R(α(0) ) = α(0) R(1). As an estimator of α(0) , we choose the statistics α (0) = 1 − y θ2 . Let us find the quadratic risk principal part for the estimators θ(0) = α (0) θ and compare it with the quadratic risk of the James–Stein estimator for which αS = 1 − (n − 2)/N θ2 . θS = α S θ, Theorem 2.4. For θ ∼ N(θ, I/N ) with θ = 0 and n ≥ 4, the estimator θ(0) has the quadratic risk (1) satisfying the inequalities θ2 θ2 + 4/N (0) ) ≤ y ≤ R( α y; θ2 + y − 4/N θ2 + y + 4/N
(2)
for the estimator θS = α (0) θ we have θ2 − 4/nN θ2 + 4(1 − 1/n)/N S ) ≤ y ≤ R( α y. θ2 + y − 4/N θ2 + y
(3)
Proof. The quadratic risk is θ2 )2 = yE(−1 + 2(θ, θ)/ θ2 + y/θ2 ), (4) μ − θ + y θ/ R( α(0) ) = E( where the expectation is calculated with respect to the distribution N(θ, I), θ ∈ Rn . Integrating by parts, we obtain the equality θ2 = 1 − (n − 2)N −1 E(θ2 )−1 . We find that E(θ, θ)/
R( α
(0)
n−4 1 )=y 1− E N θ2
.
The inequality E(θ2 )−1 ≥ 1/E(θ2 ) presents the first upper estimate in the theorem formulation. Similarly, for the James–Stein estimator with α S = 1 − (n − 2)/N θ2 , we get R( αS ) = y −
(n − 2)2 1 E . 2 N θ2
The inequality E(θ2 )−1 ≥ 1/(θ2 + y) provides the second upper estimate.
SHRINKAGE OF ARBITRARY UNBIASED ESTIMATORS
35
Further, for n > 4 we find 1 E = θ2
∞
0
N ≤ 2
N E exp(−uθ2 ) du = 2
1
(1 − t)n/2−2 exp(−θ2 N t/2) dt
0
exp[−(n/2 − 2) t − θ2 N t/2] dt =
θ2
1 . + y − 4/N
This provides both lower estimates in the theorem formulation. Theorem 2.4 shows that the estimators θ(0) and θS dominate the class of estimators K(0) with the accuracy up to 4/N and 4(n − 1)/nN , respectively. One can see that the upper and lower estimates in (3) are narrowed down as compared with (2) by a quantity of the order of 1/n2 . Theorem 2.4 can be applied in a special case when the observation vector x ∼ N( μ, I), and the original estimator is the sample mean ¯ calculated over a sample X = {xm } of size N . vector x For the distribution of the form N( μ, dI), where the parameter d > 0 is unknown, to estimate α(0) , one can propose the statistics x ¯ 2 , where α (1) = 1 − nd/N d = n−1 (N − 1)−1
N
¯ )2 . (xm − x
m=1
If n ≥ 4, the quadratic risk of this statistics is R = R( α(1) ) ≤
μ 2 + 4/N + 2/nN y. μ 2 + y
Shrinkage of Arbitrary Unbiased Estimators We consider the original class of unbiased estimators θ of n-dimen-sional vectors θ restricted by the following requirements. there exists the expectation and all 1. For all components of θ, moments of the fourth order. 2. The expectation exists E(θ2 )−2 .
36
2. FUNDAMENTAL PROBLEM OF STATISTICS
Let the estimator θ be calculated over a sample X of size N . We introduce the parameters Q = N 2 sup E[(e, θ − θ)2 ]2 ,
b2 = E (1/θ2 )2 ,
|e|=1
and M = max((θ2 )2 , Q),
(5)
where the supremum is calculated with respect to all nonrandom unit vectors e. The quantity Q presents the maximum fourth moment of the projection of the centered estimation vector θ onto nonrandom axes. Denote y = n/N . Let us characterize variance of the estimator θ by the matrix θ). The spectral norm D presents the maximal D = N cov(θ, second moment of projections of the centered vector θ on to nonrandom axes, D 2 ≤ Q. Denote d = n−1 tr D ≤ D and (for yd > 0) var[(θ − θ)T Ω(θ − θ)] ≤ 1, γ = sup Qy 2 Ω≤1 where the supremum is calculated for all nonrandom, symmetric, positively semidefinite matrices Ω with the spectral norm not greater 1. The quantity γ ≤ 1 restricts the variance of the esti mator vector components. In case of independent components of θ, it is easy to see that the quantity γ ≤ 1/n. We will isolate the principal part of the quadratic risk and search for upper estimates of the remainder terms that are small for bounded M , b, and y, for large N and small γ. For the original estimators, the quadratic risk (1) equals R(1) = yd. Let yd > 0. Minimum of R(α) is reached for α(0) = θ2 /(θ2 + y) and equals α(0) R(1). Let us find estimators of θ2 and Eθ2 and prove that they have small variance. First, let d be known. As an estimator of α(0) , we choose the statistics α (1) = 1 − d/θ2 . Denote by K(1) a class of estimators θ of the form α (1) θ. Denote d = n−1 trD, s = θ2 + yd, δ = θ − θ, ε = y 2 γ + 1/N. We have d2 ≤ D 2 ≤ Q ≤ M, E(1/θ2 ) ≤ b, Eδ 2 = yd, E(δ 2 )2 ≤ y 2 Q, Eδδ T = D/N, var δ 2 ≤ Q y 2 γ.
SHRINKAGE OF ARBITRARY UNBIASED ESTIMATORS
37
Denote all (possibly different) numeric coefficients by a single letter a. We find that var(θ2 ) = var[2(θ, δ) + δ 2 ] ≤ a (θ2 D /N + Q y 2 γ) ≤ aM ε2 . (7) Lemma 2.2. Under Assumptions 1 and 2, we have E where
1 1 = + ω1 , 2 s θ
E
θ2 (θ, θ) = + ω2 , s θ2
(8)
√ √ |ω1 | ≤ abs−1 M ε, |ω2 | ≤ ab M ε.
Proof. We find that |ω1 | ≤ s−1 |E(s − θ2 )/θ2 | ≤ s−1
E(θ2 )−2
√ varθ2 ≤ ab/s M ε.
we have var(θ, θ) ≤ θT Dθ/N and For the scalar product (θ, θ), − θ2 θ2 )/θ2 | ≤ |ω2 | ≤ s−1 |E(s(θ, θ) √ + var(θ2 ) ≤ ab M ε. ≤ a E(θ2 )−2 var(θ, θ) The lemma is proved. Theorem 2.5. Under Assumptions 1 and 2 for known d > 0 and y > 0, we have R( α(1) ) =
θ2 yd + ω, θ2 + yd
(9)
where |ω| ≤ abyd M (γy 2 + 1/N ), and a is a number. Proof. The quadratic risk θ2 )2 = ydE(−1 + 2(θ, θ)/ θ2 + yd/θ2 )2 . R( α(1) ) = E(θ − θ + dθ/ By Lemma 2.2, the second term in parentheses equals 2θ2 /s with √ the accuracy up to ab M ε; the third term is equal to yd/s with
38
2. FUNDAMENTAL PROBLEM OF STATISTICS
the same accuracy. It√follows that R( α(1) ) equals yd θ 2 /s with the accuracy up to abyd M ε. The theorem is proved. We conclude that under Assumptions 1 and 2, the estimators θ(1) are dominating the class of estimators K(0) with the accuracy up to abyd M (γy 2 + 1/N ). Now, suppose that the variance d of the original estimators is unknown. Instead of estimating d, we estimate θ2 . For the parameter θ2 , we propose the unbiased estimator of the form θ1T θ2 , where θ1 and θ2 are original estimators of θ over two half-samples X1 and X2 of equal sizes N/2 (for even N ). Consider the statistics α (2) = (θ1 , θ2 )/θ2 . Denote by K(2), the To estimate the quadratic class of estimators for θ of the form α (2) θ. (2) (2) , we prove the following two lemmas. risk R( α ) of the estimator α Assume that the estimators θ are calculated over half-samples. Then, the new values of L, M, D , d, and γ differ from the original ones by no more than a numeric factor. Lemma 2.3. Under Assumptions 1 and 2, we have 1. var(θ1 , θ2 ) ≤ as D /N 2. E(θ12 )2 ≤ aM (1 + y 2 ) √ 3. E[(θ1 , θ2 )2 ] − (θ2 )2 ≤ a(1 + y)M/ N 4. var[(θ1 , θ2 )2 ] ≤ aM 2 (1 + y 2 )ε2 , where a are numeric coefficients. Proof. Denote δ1 = θ1 − θ,
δ2 = θ2 − θ. We have
var(θ1 , θ2 ) ≤ a var[(θ, δ1 ) + (θ, δ2 ) + (δ1 , δ2 )] ≤ a(θT D θ/N + tr D2 /N 2 ) ≤ as D /N. The quantity E(θ12 )2 is not greater than aE[θ2 + (θ, δ1 ) + (δ1 )2 ]2 ≤ ≤ a[(θ2 )2 + θT Dθ + y 2 d2 + Qy 2 ] ≤ aM (1 + y 2 ). The second statement is proved.
SHRINKAGE OF ARBITRARY UNBIASED ESTIMATORS
39
Now, let us substitute (θ1 , θ2 ) = θ2 + Δ to the left-hand side of the third inequality. We obtain |E(θ1 , θ2 )2 − θ2 | = E|(2θ2 + Δ)Δ| ≤
E(2θ2 + Δ)2
√
EΔ2 .
Here EΔ2 = E[(θ, δ1 ) + (θ, δ2 ) + (δ1 , δ2 )]2 ≤ ≤ a(θT Dθ/N + trD2 /N 2 ) ≤ as D /N. We obtain the third statement. Let us prove the fourth statement. Denote ϕ = (θ1 , θ2 )2 . Using the independence of θ1 and θ2 , we obtain var ϕ ≤ 2[E var(ϕ|θ1 ) + var E(ϕ|θ1 )].
(10)
For θ1 = b = const, we have var[(b, θ2 )2 ] = var[2 (b, θ)(b, δ2 ) + (b, δ2 )2 ] ≤ ≤ a[b2 θ2 (bT Db)/N + Qy 2 (b2 )2 γ] ≤ a(b2 )2 M ε2 . By Statement 2, the first addend of the left-hand side of (10) is not greater than aM 2 (1 + y 2 )ε2 . In the second addend of (10), we have E(ϕ|θ1 ) = (θ, θ1 )2 + (θ1T Dθ1 )/N . Here the variance of the first term of the right-hand side is not greater than var[2θ2 (θ, δ1 ) + (θ, δ1 )2 ] ≤ a(θ2 )(θT Dθ/N + M y 2 γ) ≤ a(θ2 )2 M ε2 . The variance of the second term is (let Ω = D/ D ) N −2 var(θ1T Dθ1 )2 ≤ N −2 D var(2θT Ω δ1 + δ1T Ω δ1 ) ≤ ≤ N −2 D 2 (θT ΩDΩθ/N + M y 2 γ) ≤ M D 2 N −2 ε2 . It follows that the variance E(ϕ|θ1 ) is not greater than M 2 ε2 . Consequently, the left-hand side of (10) is not greater than aM 2 (1 + y 2 ) ε2 . Thus, we obtained the fourth lemma statement.
40
2. FUNDAMENTAL PROBLEM OF STATISTICS
Lemma 2.4. Under Assumptions 1 and 2, θ2 (θ1 , θ2 ) = 2 + ω1 , θ + yd θ2 2 2 (θ1 , θ2 ) = (θ ) + ω2 , 2. E (θ, θ) θ2 + yd θ2 (θ2 )2 (θ1 , θ2 )2 = 2 3. E + ω3 , θ + yd θ2
(11)
1. E
(12) (13)
√ √ √ where |ω1 | ≤ ab M ε, |ω2 | ≤ ab(1 + y) M ε, |ω3 | ≤ ab √ (1 + y) M ε. Proof. By virtue of Lemma 2.2, we have |ω1 | ≤ s−1 E[s(θ1 , θ2 ) − θ2 θ2 ]/θ2 ] 2 −2 var(θ1 , θ2 ) + var θ2 . ≤ a E(θ ) Using Statements 1 and 2 of Lemma 2.3, we obtain the upper estimate of ω1 in Lemma 2.4. We substitute (θ1 , θ2 ) = θ2 + Δ to the left-hand side of (12). The principal termequals (θ2 )2 /s + √ θ2 ω1 , and the term with Δ is not greater than Eθ2 /θ2 EΔ2 . θ2 b,
2
andby Lemma 2.3, Here the first multiple does not exceed and the second multiple is not greater than a as D M /N. √ Consequently, |ω2 | ≤ a(1 + y)bM/ε. Let us prove Statement 3 of Lemma 2.4. Denote ϕ = (θ1 , θ2 )2 , and let ϕ = Eϕ + Δ. We have E
ϕ 1 Δ = E ϕE +E , 2 2 θ θ θ2
where by Statement 3 of Lemma 2.3 in view of Lemma 2.2, √ 2 2 we obtain Eϕ = (θ ) + r, where |r| √≤ a(1 + y)M/ N , and E(1/θ2 ) = 1/s + ω1 , where |ω1 | ≤ ab/s M ε. We find that
LIMIT QUADRATIC RISK OF SHRINKAGE ESTIMATORS
41
√ √ |ω3 | ≤ E(1/θ2 ) + (θ2 )2 ω1 ≤ ab(1 + y)M/ N + abθ2 M ε ≤ ab(1 + y)M ε. The proof of Lemma 2.4 is complete. Theorem 2.6. Under Assumptions 1 and 2 for yd > 0, the quadratic risk is R( α(2) ) =
θ2 yd + ω, θ2 + yd
(14)
where |ω| ≤ abM (1 + y) γy 2 + 1/N , and a is a numeric coefficient. The statement of Theorem 2.6 immediately follows from (12) and (13). One can draw the conclusion that under conditions 1 and 2, the estimators θ(2) = α(2) θ are dominatingthe class of estimators K(0) with the accuracy up to ab (1 + y)M γy 2 + 1/N . Example. Let θ ∼ N(θ, I/N ). We have Q = 3, D = I, d = 1, γ = 2/(3n) independently on N . For n ≥ 6, we have N2 1 1 2 t(1 − t)n/2−3 exp(−θ2 N t/2) dt ≤ . b = 2 4 0 (θ + y − 6/N )2 (15) In this case, Theorem 2.6 provides the principal part of the quadratic risk of estimators from K(1) and K(2) being equal to y θ2 /(θ2 + y); this quantity fits the boundaries of the quadratic risk established in Theorem 2.4 with the accuracy up to 4/N . Limit Quadratic Risk of Shrinkage Estimators Now let us formulate our assertions in the form of limit theorems. We consider a sequence of problems of estimating vector θ D, M, b, γ, α PN = (n, θ, θ, (1) , α (2) , R)N ,
N = 1, 2, . . ., (16)
42
2. FUNDAMENTAL PROBLEM OF STATISTICS
in which (we do not write indexes N for arguments of (16)) the populations are determined by n-dimensional parameters θ. Unbiased estimators θ having the properties 1 and 2 are calculated over samples of size N . These estimators are characterized by the parameters M, b, D, and γ. The quadratic risk R of the form (1) is calculated for two shrinkage estimators: α (1) θ and α (2) θ. Theorem 2.7. Assume that for each N in (16), the conditions 1 and 2 hold and, moreover, b ≤ c1 , M ≤ c2 , where the constants c1 and c2 do not depend on N , and suppose that the limits exist lim n/N = y0 > 0; lim d = d0 > 0; lim θ2 = θ0 2 ; lim γ = 0.
N →∞
N →∞
N →∞
N →∞
Then, lim R(α(1) ) = lim R(α(2) ) =
N →∞
N →∞
θ02 y0 d 0 . θ02 + y0 d0
(17)
We conclude that the estimators θ(1) = α (1) θ and θ(2) = α (2) θ (0) are asymptotically dominating the class of estimators K in P as N → ∞. Note that each partition of the set of components θ in (16) to nonintersecting subsets allows to introduce a set of shrinkage coefficients α (1) and α (2) , which can be chosen optimally by Theorems 2.5 and 2.6. Theorem 2.8. Let the sequence of problems (16) be partitioned to k subsequences (1) (2) Pj = (ni , θj , θj , Dj , Mj , bj , γj , α j , α j , Rj )N ,
j = 1, . . ., k, such that for each N , we have n = n1 + . . . + nk , θ = (θ1 , . . ., θk ), θ = (θ1 , . . ., θk ). If each of the subsequences Pj satisfies conditions of Theorem 2.7, then R( α
(ν)
)≥
k j=1
(ν)
Rj ( αj ),
ν = 1, 2.
CONCLUSIONS
43
Proof. To prove this theorem, it is sufficient to estimate the sum of limit risks from above. Let aj denote the limit value of θj2 , and yj > 0 be a limit ratio nj /N , j = 1, . . ., k. Then, a = a1 + . . .+ ak = θ02 . Denote ρj = yj /y, j = 1, . . ., k. We have k k aj yj ρj =y−y , aj + yj 1 + rj j=1
(18)
j=1
where rj = aj /yj . Using the concavity of the function f (r) = 1/(1 + r), we find that the right-hand side of (17) is not greater than y − y/(1 + r¯), where the mean r¯ = a/y. This proves the theorem. We can draw a general conclusion that in the sequence (16) for each (sufficiently large) partition of the set of parameters, the optimal shrinkage of subvectors can decrease the limit quadratic risk. This allows to suggest a procedure of the sequential improvement of estimators by multiple partition to subsets and using different shrinkage coefficients. The problem of the purposefulness of partitions when n and N are fixed requires further investigations. Conclusions It is well known that superefficient estimators decrease the quadratic risk nonuniformly with respect to parameters, and this decrease can be appreciable only in a bounded region (at “superefficiency points”). In multidimensional problems, the domain of parameters is often bounded a priori, whereas their number may be considered as infinite. For example, in the discriminant analysis, the distance between centers of populations is bounded by the origin of the discrimination problem. For a fixed number of parameters and standard risk functions, the consistency of estimators guarantees the zero-limit risk. As compared with the James–Stein estimator, most of shrinkage estimators provide only an infinitesimal gain as N → ∞. Theorems 2.6 and 2.7 provide the substantial decrease of the quadratic risk if the number of parameters is large and comparable with N .
44
2. FUNDAMENTAL PROBLEM OF STATISTICS
Equations (6) and (7) determine values of N and γ that guarantee a finite-risk decrease or the risk minimization with a given accuracy. Theorem 2.7 states the property of the substantial asymptotic dominance for the estimators α (1) θ and α (2) θ when n is comparable with N . Theorem 2.8 guarantees the dominating property of estimators constructed by partitions.
SHRINKAGE OF INFINITE-DIMENSIONAL VECTORS
2.3.
45
SHRINKAGE OF INFINITE-DIMENSIONAL VECTORS
In using statistical models of great complexity, a class of problems arise in which the number of unknown parameters is large and essentially exceeds sample sizes. The actuality of these problems was discussed by Kolmogorov in his program note [41]. In [69] and [71], a number of specific effects are described characteristic of high-dimensional statistics, and these effects were used for the improvement of multivariate statistical procedures. One of these effects is the possibility to decrease the quadratic risk by shrinking estimators. In a series of papers (see review [33]), this effect was investigated thoroughly and applied for the improvement of some practical solutions. In most of these investigations, only normal distribution is considered, with the identical variance known a priori. As an extension of these investigations, we assume here that the observation vectors are infinite dimensional and have different variances of components. Let μ = (μ1 , μ2 , . . .) be vector of unknown parameters that is estimated over a sample X = {xm , m = 1, . . ., N } of infinitedimensional observations x = (x1 , x2 , . . .) with expectation μ = Ex. Suppose there exist all four moments of all components of the vector x. Denote di = var(xi ), i = 1, 2, . . .. We rank these quantities so that d1 ≥ d2 ≥ d3 ≥ . . . . Suppose the vector μ has a finite length and the convergence holds d1 + d2 + . . . = d. Let d > 0. Let ¯ denote a sample average vector calculated over X. x Let us study the quadratic risk of infinite-dimensional shrinkage estimator d ¯ , with α , (1) μ =α 0 x 0 = 1 − ¯2 Nx where
d = (N − 1)−1
N
¯ )2 (xm − x
(2)
m=1
is sample variance (here and in the following, squares of vectors mean squares of their length). The estimator (1) is similar to the James–Stein estimator and differs by the “plug-in” variance d. 2 2 Obviously, E¯ x =μ + d/N . For the sake of convenience, denote
46
2. FUNDAMENTAL PROBLEM OF STATISTICS
βi = di /N, i = 1, 2, . . . , β = d/N , and introduce the quantities ρi = di /d that characterize the relative contribution of components of x to the sum of variances, and let ρ1 ≥ ρ2 ≥ ρ3 ≥ . . . . Normal Distributions Independent components Suppose that random vectors x have independent components ◦ ◦ ¯ = x ¯−μ . μi , di ), i = 1, 2, . . . . Denote x = x − μ , x xi ∼ N( Obviously, ◦2
var(x ) = 2
d2i ,
◦2
¯ )=2 var(x
i
βi2
i
(here and in the following, the indexes i in sums are assumed to run from 1 to infinity). In the case when d1 = 1, and di = 0 for i > 1, the multiparametric problem degenerates. If the quantity ρ = ρ1 is small, the variance of x2 is defined by contributions of many variables, and this produces effects we study in this paper. First, consider a class of a priori estimators of μ of the form μ = α¯ x, where α is an a priori chosen nonrandom scalar. The quadratic risk of the estimator μ is R(α) = E( μ − α¯ x)2 = (1 − 2α) μ2 + α2 ( μ2 + β). Obviously, R(1) = β. The minimum of R(α) is achieved for α = 2 /( μ2 + β) and is equal to α0 = μ def
R0 = R(α0 ) =
μ 2 β. μ 2 + β
(3)
The problem is reduced to estimation of the parameter α0 . Special case 1 Consider a special case where the variance di = 0 for all i > n so that the problem becomes finite dimensional. Let all di = 1 for i ≤ n. Then, β = n/N , and to estimate α0 for n > 2, we may
NORMAL DISTRIBUTIONS
47
¯ 2 )¯ suggest the James–Stein estimator μ JS = (1 − (n − 2)/N x x. Its quadratic risk is as follows (see Introduction) RJS = y − (n − 2)2 /N 2 E|¯ x2 |−1 ≤ R0 + 4(n − 1)/nN. If μ = 0, we have RJS = N/(n − 2). Now we prove that the estimator (1) of α0 produces the quadratic risk approaching R0 in the infinite-dimensional case. Let us ¯ in the form write the quadratic risk of μ =α 0 x ¯ )2 = −β + 2E R( α0 ) = E( μ−α 0 x
β T β2 ¯ x + E μ , ¯2 ¯2 x x
Denote ρ = ρ1 and where β = d. 2
σ =
∞
ρ2i ≤ 1.
(4)
i=1
σ2
Lemma 2.5. For β > 0, n ≥ 3, ≤ ρ and
μ 2
and ρ < 1/3, we have
1 1 1 1 1 . ≤E 2 ≤ + ¯ x +β β β 1/3 − σ 2
(5)
Proof. The first lemma statement is an obvious property of inverse moments. Now we integrate over the distribution x ¯i ∼ N(μi , βi ), i = 1, 2, . . . , and obtain that 1 E 2= ¯ x
0
= 0
∞
E exp(−t¯ x2 ) dt =
∞ ∞ i=1
√
1 tμ2i dt. · exp − 1 + 2βi t 1 + 2βi t
(6)
If ρ2 = 0, then ρ1 = 1 in controversy to the lemma conditions. If ρ3 = 0, then ρ1 + ρ2 = 1 with ρ1 < 1/3, which is impossible. Therefore, under our lemma conditions, ρ1 ≥ ρ2 ≥ ρ3 > 0. Let us
48
2. FUNDAMENTAL PROBLEM OF STATISTICS
majorize (6), leaving in the common square root only the sum of products of three multiples. We find that ∞ 1 E 2 ≤ [1 + 8 t3 βi βj βk ]−1/2 dt, ¯ x 0 i<j
where (and below) the indexes i, j, k run over all natural numbers under the restrictions 1 ≤ i < j < k. Let us isolate the integration region [0, β −1 ). We obtain ∞ 1 −1 (8t3 β 3 E 2 ≤β + ρi ρj ρk )−1/2 dt ≤ ¯ x 1/β i<j
Note that 6
∞
ρi ρj ρk =
i<j
ρi ρj ρk − 3
i,j,k=1
∞
ρ2i + 2
∞
ρ3i ≥ 1 − 3σ 2 .
i=1
i=1
This proves the lemma. For special case 1, the inequality (5) can be sharpened. Lemma 2.6. For β > 0 and n ≥ 3, E
¯ μ T x 1 x ¯2 = 1 − βE 2 + 2 βi E 2i 2 . 2 ¯ ¯ (¯ x ) x x
(8)
i
Proof. Denote by fi =
(¯ xi − μi )2 1 exp[− ], 2βi (2πβi )1/2
fi =
∂ln fi ∂x ¯i
the probability density for x¯i and its derivative, i = 1, 2, . . . . We find that x ¯ fi 1 μ T x ¯i x ¯i (¯ xi + βi ) = 1 + f d¯ xi . E 2 =E 2 βi ¯ ¯ ¯2 i x x x fi i
i
NORMAL DISTRIBUTIONS
49
Integrating by parts, we obtain (for n ≥ 3) that this expression equals to 1 x ¯2i 1− βi E 2 − 2 2 2 . ¯ x (¯ x ) i
This coincides with the right-hand side of (8). The lemma is proved. , where d is defined by (2). Denote β = d/N Lemma 2.7. If N > 1 we have var β =
2 β 2σ2. N −1
Proof. For normal variables xm , one can use the linear transformation (the Helmert transformation) such that N
¯) = (xm − x 2
m=1
N −1
∼2
xm ,
m=1
∼
where xm are independent normal vectors distributed identically as xm . From (2), we obtain var β =
N −1 2 2 1 ∼2 βi . var(x ) = 2 2 N (N − 1) N −1 m=1
i
This is the lemma statement. Normal distribution: general case Let x = (x1 , x2 , . . .) ∼ N( μ, Σ), where the infinite-dimensional matrix Σ can be diagonalized by orthogonal transformations. One can diagonalize the matrix Σ and denote the eigenvalues by di , i = 1, 2, . . . . Suppose that their sum d = trΣ is finite. Then, Lemmas 2.6 and 2.7 remain valid with β = d/N. Theorem 2.9. Let an infinite-dimensional vector x ∼ N( μ, Σ) and 0 < trΣ < ∞. If d1 < d/3, n > 2 and N > 1.
50
2. FUNDAMENTAL PROBLEM OF STATISTICS
Then, σ 2 < 1/3, and for estimator (1) we have E ( μ−μ )2 = R( α0 ) ≤ R0 + 6βσ[1 + (1/3 − σ 2 )−1/2 ].
(9)
Proof. Let us start from (4). For normal distributions, ran¯ and β are independent. We substitute Eβ = β and dom values x T 2 E( μ x)/¯ x from (8). It follows R( α0 ) = β − β 2 E
1 x ¯2i 1 + 4β β E + E 2 var β. i 2 2 2 ¯ ¯ x x (¯ x )
(10)
i
μ2 + β), and the two first terms of the right-hand Here E|¯ x|−2 ≥ ( side are not greater than β 2 μ 2 /( μ2 + β) = R0 . Let us estimate the third term. It does not exceed −3
4βE|¯ x|
i
−3
βi |xi | ≤ 4βE|¯ x|
¯2 x|−2 σ. (11) x βi2 = 4β 2 E|¯ i
In the last term of (10) for N ≥ 2, we have var β ≤ 2β 2 σ 2 ; it follows that the sum of last two terms in (10) is not greater than 6β 2 σE|¯ x|−2 . We substitute the upper estimate of E|¯ x|−2 from Lemma 2.5 and come to the statement of our theorem. Thus, the infinite-dimensional shrinkage estimator μ = 2 ¯ provides the quadratic risk the same as the best (1 − β/¯ x ) x ¯ with the accuracy up to O(σ). Small a priori estimator μ = α0 x σ 2 ≤ ρ = min di /d produce an essentially multiparametric effect: i
the quadratic risk of estimator (1) approaches the quadratic risk of unknown unimprovable a priori estimator. Wide Class of Distributions Let us prove now that the same estimator (1) provides the quadratic losses ( μ−μ )2 approaching R0 for distributions different from normal. We establish this fact for populations in which all variables have the fourth central moment in the scheme of series of infinitedimensional observations x as sample size N → ∞.
WIDE CLASS OF DISTRIBUTIONS
51
Consider the sequence
P = {(S, μ , Σ, X, μ , L)N }, N = 1, 2, 3, . . .,
(12)
of estimation problems for populations S with expectation vectors μ = Ex = (μ1 , μ2 , . . .) and infinite-dimensional covariance matrices Σ = cov(x, x), in which an estimator μ is calculated over samples X = {xm } of size N → ∞ with quadratic loss function L = ( μ−μ )2 (we do not write out indexes for arguments of ◦ ◦ ◦ = (x1 , x2 , . . .). Let di be eigenvalues of (12)). Denote x = x − μ Σ, i = 1, 2, . . . , ordered so that d1 ≥ d2 ≥ d3 . . . . Suppose (12) satisfies the following requirements. A. For each N , the series μ21 + μ22 + . . . = μ 2 converges and as 2 2 N → ∞, we have lim μ =μ 0. B. For each N , the series d1 + d2 + . . . = d = trΣ converges and d > 0. C. For each N in the system of coordinates where Σ is diagonal, we have ◦ Ex4i sup ≤ κ, κ ≥ 1, ◦ i=1,2,...(Ex2 )2 i where the ratio is assumed to be 0 if the nominator is 0, and where κ does not depend on N . ◦ Under condition C, we have d2 ≤ E(x2 )2 ≤ κd2 . Denote ρi = di /d, i = 1, 2, . . . . Introduce the parameters σ2 =
trΣ2 = ρ2i ≤ 1, 2 (trΣ) i
◦
γ=
var(x2 ) ◦
E(x2 )2
≤1−
1 . κ
The parameter σ 2 defines the multiparametric character of the problem: for special case 1, we have σ 2 = 1/n, where n is a number parameters of the statistical model. The parameter γ restricts the dependence of variables. If components of x are independent, then γ≤
(κ − 1) σ 2 . 1 + (κ − 1) σ 2
52
2. FUNDAMENTAL PROBLEM OF STATISTICS
◦ ◦ If all components of x are proportional and xi = ρi /ρ1 x1 , i = ◦ ◦ 1, 2, . . . , then the quantity γ = 1 − (Ex21 )2 /E|x1 |4 can approach 1. D. Let d/N → β > 0 as N → ∞. E. Suppose as N → ∞, we have σ = σ(N ) → 0 and γ = γ(N ) → 0. Asymptotic conditions A–E are the extension of the Kolmogorov “increasing dimension asymptotics” used first in [17] and [45]. Consider special case 1. Then, the Kolmogorov condition n/N → y > 0 passes to condition D with β = y; condition C is provided by the requirements ◦
0 < M = sup E(eT x)4 < c1 , and the spectral norm Σ−1 ≤ c2 , |e|=1
where e are nonrandom unity vectors, the quantities σ 2 = 1/n → 0, and our quantity γ does not exceed the parameter ◦T
◦
sup var(x Ω x/n)/M → 0,
Ω=1
where Ω are nonrandom, symmetrical, positively semidefinite matrices. ¯, Denote (as in the above) the mean sample vector by x ◦
◦
◦
¯=x ¯−μ ¯1 , x ¯2 , . . .), d = tr Σ. x = (x Remark 1. Under Assumptions A–E, we have E(¯ x2 )2 ≤ ( μ2 )2 + 2 μ2 β + κβ 2 + O(N −1 ) and μ T Σ−1 μ → 0. (13) Indeed, we have ◦
◦
¯+x ¯ 2 )2 ≤ E(¯ x2 )2 = E( μ2 + 2μT x
◦ ◦ ◦ ¯ )2 + 8( ¯ )2 + 2(x ¯ )2 )2 . ≤ E | μ|4 + 2 μ2 (x μT x
WIDE CLASS OF DISTRIBUTIONS ◦
53 ◦
¯ |2 → β, E( ¯ )2 = μ Here the square μ 2 is bounded, E|x μT x T Σ μ/ ◦
¯ 2 )2 equals N → 0, and thus E(x ◦ ◦ ◦ 2 ◦ 2 −2 ¯i | |x ¯j | ≤ N E|xi |4 E|xj |4 ≤ κN −2 di dj = κβ 2 . E x i,j
i,j
i,j
The Remark is grounded. Lemma 2.8. Under Assumptions A–E as N → ∞, we have ¯ 2 ≤ d2 N −2 (κ/N + σ 2 ) → 0. var x ◦
◦
¯ ) + 2 varx ¯ 2 . Here ¯ 2 ≤ 2 var( μT x Proof. It is obvious that var x T −1 the first summand is no greater than 2 μ Σ μ → 0. The variance ◦
¯ 2 = N −4 var x
N
◦
◦
[E(x2m )2 − (Ex2m )2 ] + N −2 (1 − N −1 )trΣ, (14)
m=1 ◦
◦
def
. Let μ = 0 and xm = x = (x1 , x2 , . . .). By where xm = xm − μ condition D, the first summand in (14) is not larger than N −3 E(x2 )2 ≤ N −3 E ≤N
−3 i,j
Ex4i
i,j
x2i x2j ≤
Ex4j ≤ N −3 κd2 → 0.
The second summand of the right-hand side of (14) also vanishes. The proof is complete. Consider the unbiased estimator β =
N 1 ¯ )2 (xm − x N (N − 1) m−1
of the parameter β = d/N . Lemma 2.9. Under Assumptions A–E as N → ∞, we have var β → 0.
54
2. FUNDAMENTAL PROBLEM OF STATISTICS
Proof. It suffices to show that N −2 (N − 1)−2 var
N
◦
x2m → 0.
m=1 ◦
Since the vectors xm are independent, the sum here is a sum of ◦ ◦ ◦ variances of x2m . But var x2m ≤ Ex2m ≤ κd2 . It follows that for N > 1, var β ≤ κd2 /N (N − 1)2 → 0. The lemma is proved. Theorem 2.10. Under Assumptions A–E, the estimator μ of the form (1) has the quadratic losses such that def
plim ( μ−μ )2 = R0 = N →∞
μ 20 β . μ20 + β
Proof. In view of Lemmas 2.8 and 2.9, we have β → β and → μ20 + β in probability (i.p.). Let us write the quadratic losses in the form ¯2 x
¯) ¯ )2 − 2β + 2( μ−x μT x L = ( μ−μ )2 = (
β2 β + 2. 2 ¯ ¯ x x
(15)
¯ )2 → β in probability ¯ )2 = tr Σ/N → β and ( μ−x Here E( μ−x ¯ 2 → 0 by Lemma 2.8. The quantity β → β in probability since var x by Lemma 2.9. In the third term of the right-hand side of (15), μ T x → μ20 in probability since μ T Σ μ/N → 0. Further, |¯ x|−2 → 2 = β 2 in probability. We obtain the (μ20 + β)−1 and β2 → (Eβ) theorem statement. To prove the convergence ( μ−μ )2 → R0 in the square mean, it suffices to bound eight’s moments and require the uniform boundedness of moments E|¯ x|−4 . Conclusions For a wide class of distributions restricted by bounded dependence condition γ → 0, Theorem 2.10 establishes the quadratic loss decrease for shrinkage estimators (1) in problems with
CONCLUSIONS
55
variables of arbitrary dimension (greater than 3) and different variances under correlations, decreasing on the average. Indeed, let rij be the correlation coefficient for variables xi and xj in the observer coordinate system. Then, the second of conditions E can be written as σ2 =
trΣ2 = ρi ρj (rij )2 → 0. (trΣ)2 i,j
In case of finite dimension and identical variance of variables, we obtain the same effect as for the James–Stein estimator.
56
2. FUNDAMENTAL PROBLEM OF STATISTICS
2.4.
UNIMPROVABLE COMPONENT-WISE ESTIMATION
In this section, we extend the notion of shrinkage estimators to shrinking of separate components. This technique was investigated first in [64]. Denote by X a sample from N( μ, I) of size N , and let ¯ = (¯ x x1 , x ¯2 , . . ., x ¯n ) be sample average vector. Definition 1. The estimator x2 ), . . ., ϕ(¯ xn )} μ = {ϕ(¯ x1 ), ϕ(¯
(1)
is called a component-wise estimator of the vector μ = (μ1 , μ2 , . . ., μn ), and the function ϕ(·) identical for all components function is called the estimating function. Let us search for an estimator dominating the class K with the accuracy up to remainder terms small for large n and N . We restrict ourselves to normal n-dimensional distributions N( μ, I) with unit covariance matrix I. Denote by y = n/N the ¯. quadratic risk of the standard estimator μ =x We apply specifically multiparametric technique of studying relations between functions of unknown parameters and functions of observable variables. Define the density function f (t) = n−1
n i=1
fi (t) =
fi (t) = fi (t),
N N (t − μi )2 , exp − 2π 2
where
i = 1, 2, . . .n.
(2)
Here (and in the following) the subscript i in sums runs over i = 1, 2, . . ., n, and let angular brackets denote averaging over i. Let K denote a class estimators of the form (1) with the differentiable estimating functions ϕ(t) of the scalar argument. Theorem 2.11. For the estimators μ from K, we have −1 2 R = R(ϕ) = n E (μi − μ i ) = R0 + (ϕ(t) − ϕ0 (t))2 , (3) i
UNIMPROVABLE COMPONENT-WISE ESTIMATION
where
ϕ0 (t) = t +
1 f (t) , N f (t)
R0 =
n n − 2 N N
57
[f (t)]2 dt, (4) f (t)
(the prime indicates the derivative in t). Proof. We find that R(ϕ) equals 2 E (μi − x ¯i )2 − 2(μi − ϕ0 (¯ xi ))(¯ xi − ϕ0 (¯ xi )) + (¯ xi − ϕ0 (¯ xi ) i f (t) 1 [f (t)]2 2 n (μ − t) (t) dt + − f dt. = i i N nN f (t) N2 f (t)
n−1
i
i
In the second term, we substitute (μi − t)fi (t) = N −1 fi (t) and sum over i. The required expression for R0 follows. The theorem is proved. The estimator μ = {ϕ0 (¯ xi ), i = 1, 2, . . ., n} may be called the best a priori component-wise estimator. Let us study its effect on the quadratic risk decrease. From Theorem 2.11, it follows first that R0 ≤ y = n/N and that n [f (t)]2 dt ≤ 1. (5) N f (t) If the length of the vector μ is known a priori, then the shrinkage estimator μ = α¯ x may be used with the shrinkage coefficient α=μ 2 /( μ2 + y) (here and in the following, the square of a vector denotes the square of its length). Obviously, it decreases the quadratic risk and leads to def
R(α) = E( μ − α¯ x)2 =
μ 2 y. μ 2 + y
Remark 1. The inequality holds R0 ≤ R(α). Indeed, let us examine it. By the Cauchy–Buniakovsky inequality, we have [f (t)]2 ( μ2 + y)(y − R0 ) 2 2 1 = [ tf (t) dt] ≤ t f (t) dt . dt = f (t) y2 It follows immediately that R0 ≤ R1 .
58
2. FUNDAMENTAL PROBLEM OF STATISTICS
In a special case when all components of μ are identical, we have μ1 = μ2 = . . .μn (and for n = 1) the function ϕ0 (t) = μ1 for all t, we have f (t)/f (t) = N (μ1 − t), and the quadratic risk of the best component-wise estimator is R0 = 0 in contrast to the best shrinkage estimator that leads (for μ = 0) to the quadratic risk R(α) > 0. In case of a large scattering of the quantities μi , the a priori best estimating function ϕ0 (t) leads to R0 ≈ 0 again. Let us establish this fact. Theorem 2.12. Let the set of components of the vector μ be divided into two subsets A and B so that the distance between these subsets is not less than Δ > 0. Denote R = E( μ−μ 0 )2 = E RA = E
k i=1
n (μi − ϕ0 (¯ xi ))2 , i
(μi − ϕ0 (¯ xi ))2 ,
RB = E
n i=k+1
(μi − ϕ0 (¯ xi ))2 ,
0 < k < n. Then, 0 ≤ R − RA − Rb ≤ Δ2 /4 exp(−N Δ2 /8) n/N.
(6)
Proof. We retain notation (2) for the functions fi = fi (t), but numerate components from A by subscripts i = 1, 2, . . .k, and components from B by subscripts j = k + 1, k + 2, . . ., n. Denote fA = k/n fi , fB = (n−k)/n fj , where angular brackets denote averaging over i and j, and denote f = f (t) = fA + fB . We find that (fA fB − fA fB )2 n dt. R − RA − RB = 2 N f fA fB Let μj > μi . Using the Cauchy–Buniakovsky inequality in view of μj − μi ≥ Δ, we find that (fA fB − fA fB )2 (μj − μi ) fi fj 2 k(n − k) = ≤ fA fB yfi fj n2 k(n − k) ≤ (μj − μi )2 fi fj ≤ fA fB Δ2 /y, yn2
ESTIMATOR FOR THE DENSITY OF PARAMETERS
59
where the angular brackets denote averaging over i and j. It is √ obvious that fA fB ≤ f /2. We obtain the inequality Δ2 R0 − RA − RB ≤ 2N
fA fB dt.
Since k(n − k) ≤ n2 /4, the product fA fB ≤ fi fj /4. We note that fi fj ≤ N/2π exp[−N Δ2 /4 − N (t − μij )2 ], where μij = (μi + μj )/2. Also it is obvious that exp(−N (t − μij )2 ≤ n−2
2 exp(N (t − μij )2 /2) .
i,j
Integrating n2 summands, we obtain
fA fB dt ≤
n exp(−N Δ2 /8). 2
It follows that (6) is valid. The theorem is proved. Estimator for the Density of Parameters We estimate the empiric density of unknown parameters. In order to explicate better the essentially multiparametric effects, we replace the problem of estimating vectors μ of finite length by the √ problem of estimating the vector v of parameters vi = nμi , i = 1, 2, . . ., n, having the square average μ 2 . We define the quantity y = n/N and the function f (t, y) = fi (t, y), where fi (t, y) = (2πy)−1/2 exp(−(t−vi )2 /2y), (7) i = 1, 2, . . ., n, characterizing the set {vi }. ˜ = {˜ ˜m = xm } of n-dimensional observations x Let a sample X √ ¯ = (u1 , u2 , . . ., un ) = n¯ xm , m = 1, 2, . . ., N, be given and let u √ ˜ . We estimate the vectors ¯ be a vector of sample averages for X nx ˜ . Consider the class of component-wise estimators v = {vi } over X = {ϕ(ui )}, where ϕ(t) is the differentiable of v of the form v estimating function.
60
2. FUNDAMENTAL PROBLEM OF STATISTICS
By Theorem 2.12, the best a priori component-wise estimator def 0 = {ϕ0 (ui , y)}, where the estimating function of v has the form v ϕ0 (t) = ϕ0 (t, y) = t + y
f (t, y) . f (t, y)
(8)
The following statement immediately follows from Theorem 2.11. = {ϕ(u1 ), ϕ(u2 ), . . ., ϕ(un )}, where ϕ(·) is Remark 2. Let v the differentiable function. Then, )2 = E R = R(ϕ) = E(v − v
1 (vi − ϕ(ui ))2 = n i
= R0 +
(ϕ(t) − ϕ0 (t, y))2 f (t, y) dt,
where
0 )2 = y − y 2 R0 = E(v − v
[f (t, y)]2 dt. f (t, y)
Note that for any p > 0, we have [f (t, p)]2 p ≤ 1. f (t, p)
(9)
Remark 3. Consider the Bayes distribution N(0, β) of independent quantities μ1 , μ2 , . . ., μn identical for all μi . Then, the Bayes expectation EB of the quadratic risk is EB R0 ≥ yβ/(y + β). Let us prove this inequality. We have [f (t, d)]2 2 EB R0 = y − y dt. f (t, d) Here the function in the integrand is not greater fi (t, d)2 (vi − t)fi (t, d)2 = ≤ (vi − t)2 fi (t, d) fi (t, d) fi (t, d)
ESTIMATOR FOR THE DENSITY OF PARAMETERS
61
and the integral of the right-hand side is not greater d. It follows that EB R0 ≥ y − y 2 /d = yβ/(y + β). Remark 3 is justified. For β = 0, the quantity R0 = 0, while for the great scattering of the component magnitudes and large β, the Bayes quadratic risk does not differ much from the quadratic risk y of the standard estimator. Let ε > 0. Consider the statistics f(t) = f(t, ε) = n−1 fi (t, ε), i
1 exp[−(t − ui )2 /2ε], where fi (t, ε) = √ 2πε
(10)
i = 1, 2, . . ., n. The function f(t, ε) presents an ε-regularized density of empirical distribution of u1 , u2 , . . ., un . Note that f(t, ε) presents an unbiased estimator of the density f (t, d), where d = y + ε approximating f (t, y) for small ε. We use this function for the approximation of the a priori best estimating function, substituting d = y + ε instead of y. Let us obtain an upper estimate of the risk increase produced by this replacement. 1 = Lemma 2.10. The quadratic risk of the a priori estimator v {ϕ1 (ui )}, i = 1, 2, . . ., n, of the vector v equals f (t, d) , d = y + ε, f (t, d) R1 = R1 (ϕ1 ) = n−1 E(vi − ϕ1 (ui , d))2 ≤ R0 + aε2 /y, ϕ1 (t) = t + y
equals
i
where ϕ1 (t) = t +
f (t, d) , f (t, d)
and a is a numeric coefficient.
d = y + ε,
ε>0
62
2. FUNDAMENTAL PROBLEM OF STATISTICS
√ Proof. Denote for brevity xi = |t − vi |/ p, fi = fi (t, p) = (2πp)−1/2p exp(−x2i /2), ∂ ∂ ∂2 fit = f1 , fip = fi , fitp = fi , ∂p ∂p ∂p ∂t p > 0, i = 1, 2, . . ., n. Let us express the difference R1 − R0 in terms of the sum of squares of derivatives of the function f (t, p) at the points p intermediate between y and y + ε. In view of Remark 3, we have R1 − R0 = E[ϕ0 (ui , y) − ϕ0 (ui , y + ε)]2 f (t, y) dt = ∂ϕ0 (ui , p) 2 −1 2 = =n ε E ∂p i 2 |fit | |fitp | |fit fip | −1 2 pi , + pi + =n ε E fi fi fi2 i where p = pi ∈ [y, y + ε] for all i. The calculation of expectation with respect to ui reduces to the integration over fi (t, y)dt. Leaving only squares of summands, we write 2 f2 f2f2 −1 2 2 fitd 2 it ip it fi (t, y) dt, R − R0 ≤ 3n ε E 2 + p f2 + p 4 f f i i i i where p = p(i). Note that fi (t, y) ≤ p1/2 y −1/2 fi (t, p), p > 0. We calculate the derivatives xi fi fit = √ , p
fip =
(x2i − 1)fi , 2p
fitp =
xi (x2i − 3)fi . 2p3/2
The replacement p = pi by y only strengths the inequality. We calculate the integrals and obtain that R − R0 ≤ aε2 y, where a is a number. Lemma 2.10 is proved. Further, note that the statistics f(t, ε) presents a sum of n independent addends. Let us estimate its variance. Lemma 2.11. The variances varf(t, ε) ≤
1 √
2n πε
f (t, d),
varf (t, ε) ≤ √
1 f (t, d), 2π nε3/2
ESTIMATOR FOR THE BEST ESTIMATING FUNCTION
63
where d = y + ε/2. Proof. Denoting averages over i by angular brackets and using the independence of addends, we obtain that varf(t) = n−1 varfi (t) ≤ n−1 Efi2 (t) 1 1 = 2πε 2πy
i
(r − vi )2 (t − r)2 1 exp[− − ] dr = √ f (t, d). 2y ε 2 πε
The first statement is proved. Next, we have varf (t) = n−1 varfi (t) ≤ n−1 E[fi (t)]2 = (2πεn)−1 E exp[−(ui − t)2 /ε] (ui − t)2 /ε2 ≤ (2πε2 n)−1 E exp[−(ui − t)2 /2ε] = (2π)−1/2 ε−3/2 n−1 Efi (t) = (2π)−1/2 ε−3/2 n−1 f (t, d). We obtain the lemma statement. Estimator for the Best Estimating Function Substituting f(t) instead of f (t, y) to (8) we obtain some estimating function, that is, however, not bounded from above. We consider some δ-regularized statistics approximating ϕ0 (t) of the form ⎧ ⎪ ⎨ t + y f (t) for f(t) > δ ϕ 0 (t) = ϕ 0 (t, ε, δ) = f(t) ⎪ for f(t) ≤ δ, δ > 0. ⎩ t (11) 0 (ε, δ) = (ϕ 0 = v 0 (u1 ), ϕ 0 (u2 ), . . ., ϕ 0 (un )) as an estiWe choose v mator of the vector v = (v1 , v2 , . . ., vn ). Remark 4. Let n → ∞, ε → +0, δ → +0, y = n/N → y∞ and for each t the function f (t, y) → f (t, y∞ ) > 0. Then, for each t the difference ϕ 0 (t) − ϕ0 (t) → 0 in probability. We study now the quadratic risk R = R( v0 ) of the 0 . estimator v
64
2. FUNDAMENTAL PROBLEM OF STATISTICS
Lemma 2.12. If ε ≤ y, then the quadratic risk of the estimator (11) equals 0 )2 = En−1 (vi − ϕ R = E(v − v 0 (ui ))2 = i
2 f (t, y) −ϕ 0 (t) f (t, y) dt. y = R0 + E f (t, y) Proof. Note that, by definition, the functions fi (ui ) and fi (ui ) do not depend on ui . Substitute ϕ0 (·) from (11). We find that −1 R = En (vi − t − y ϕ 0 (t))2 fi (t, y) dt = i 1 2 2 (vi − t) fi (t, y) dt − E = (vi − t)ϕ 0 (t)fi (t, y) dt + n n i i 2 + y E [ϕ 0 (t)]2 f (t, y)dt. Here the first summand equals y. In the second summand, we can substitute vi − t = yfi (t, y)/fi (t, y), As a result we obtain R = y − 2yE ϕ 0 (t)f (t, y) dt + y 2 E [ϕ 0 (t)]2 f (t, y) dt = 2 f (t, y) = R0 + E y −ϕ 0 (t) f (t, y) dt. f (t, y)
(12)
The lemma is proved. Lemma 2.13. If y > 0 then f (t, y) f (t, d) 2 def 2 Δ1 = R1 − R0 = y f (t, y) dt ≤ aε2 /y. − f (t, y) f (t, d) (13) Proof. Denote fi = (2πp)−1/2 exp(−(t − vi )2 /2p), fit =
∂ fi , ∂t
fip =
∂ fi , ∂p
fipt =
∂2 fi . ∂p∂t
ESTIMATOR FOR THE BEST ESTIMATING FUNCTION
65
Let us express the difference in (13) in terms of a derivative at an intermediate point y ≤ p ≤ d = y + ε. We obtain Δ1 ≤ ε
2
n
−1
|fit |2 fi
i
|fipt | |fip fit | +p +p fi fi2
f (t, y) dt,
where the arguments of fi and its derivatives are p = pi , y ≤ p ≤ d, i = 1, 2, . . ., n. Keeping only squares of addends, we can write # " f2 f2 d 2 2 ipt 2 fip fit −1 it f (t, p) dt. +d 2 +d Δ1 ≤ 3ε n y fi2 fi fi2 i
√ We pass to the variables xi = (vi − t)/ p and obtain that fit2 ≤ x2i fi2 ,
2 ≤ x2 f 2 /p ≤ (x2 + 1)2 f 2 /2p, fip i i i i
2 ≤ (x2 + 2|x | + 1)2 f 2 /4p2 , fipt i i i
p = pi , i = 1, 2, . . ., n.
Here p ≥ y, d ≤ 2y. Integrating with respect to dxi , we obtain that Δ1 ≤ aε2 /y, where a is a number. Lemma 2.13 is proved. Now we pass from functions f (t, y) to f (t, y + ε) = Ef(t). The second addend of the right-hand side of (12) is not greater 2(Δ1 + Δ2 ), where Δ1 is defined in Lemma 2.13 and 2 f (t, d) y Δ2 = E −ϕ 0 (t) f (t, y) dt. f (t, d) In the right-hand side, we isolate the contribution of t such that f(t) < δ, for which ϕ 0 (t) = t. Denote " Δ21 = y 2 E and
f (t, d) f (t) − f (t, d) f(t)
#2
ind(f(t) > δ)f (t, y) dt.
2 f (t, d) − t f (t, y) dt. y Δ22 = f (t, d) √ Then, Δ2 = Δ21 + Δ22 . Denote λ = y δ.
66
2. FUNDAMENTAL PROBLEM OF STATISTICS
Lemma 2.14. For ε ≤ y, Δ21 ≤ 4y 5/2 ε−3/2 λ−2 n−1 . Proof. It is easy to see that Δ21 ≤ y 2 δ −2
E(f (t) − r(t)f(t))2 f (t, y) dt,
where r(t) = f (t, d)/f (t, d), d = y + ε. The quantity under the expectation sign is the variance var(f (t) − r(t)f(t)) ≤ 2var f (t) + 2r2 (t) varf(t). We use the inequality f (t, y) ≤ (2πy)−1/2 and estimate the variance using Lemma 2.11. In view of (9) we obtain Δ21
1 [f (t, d)]2 1 2y 2 f (t, d) dt + 3/2 ≤√ dt ≤ f (t, d) π nδ 2 eps1/2 ε 4y y 3/2 . ≤√ π nλ2 ε
Lemma 2.14 is proved. Denote μ 2 V1 = , y
1/m 1/m vi2m 2 m = (μi ) =N , y
Vm
i
m = 1, 2, . . . . The parameter Vm may be interpreted as a “signalto-noise” ratio. Lemma 2.15. The function def
h(λ) = y
2
√ [f (t, d)]2 2 /y + 6· y. ind(f (t, d) ≤ 2δ) dt < 6πλ μ f (t, d)
For any integer k ≥ 2, the inequality h(λ) ≤ 4πky λ1−2/k (Ak + Vk−1 ) holds, where Ak = 1/2 + [(2k − 3)!!]1/(k−1) .
ESTIMATOR FOR THE BEST ESTIMATING FUNCTION
67
Proof. To be concise, denote f = f (t, d), fi = fi (t, d) for all i. We apply the Cauchy–Buniakovsky inequality and find that f = [f (t, d)]2 = (vi − t)fi /d2 ≤ fi (vi − t)2 fi /d2 . 2
Consequently, y2 d2
h(λ) ≤
(vi − t)2 fi ind(f ≤ 2δ) dt.
(14)
Apply the Cauchy–Buniakovsky inequality once more and obtain √ the function fi = f ≤ λ/ y under the square root sign. We conclude that √ y h(λ) < 2λ 5/4 (vi − t)4 fi dt. d Let us multiply and √ $ divide the integrand by the function ρ(t) = dπ −1 (d+t2 )−1 , ρ(t) dt = 1. The average with respect to ρ(t) dt is not greater than the root from the square average; therefore the inequality holds 1/2 √ y 4 2 (vi − t) fi (d + t ) dt . (15) h(λ) ≤ 2πλ 3/2 d We √ calculate the integrals in (15) using the substitution t = vi + x d for each i and get the integrals with respect to the measure (2π)−1/2 exp(−x2 /2) dx. We find that the right-hand side of (15) does not exceed 2πλy 3vi2 + 18y. The first lemma statement is proved. 1/k
Further, let some integer k ≥ 2. Note that for (vi − t)2 fi c = kd1−1/2k . From (14) it follows that h(λ) ≤ c
y2 d2
1−1/k
ind(f ≤ 2δ) dt ≤ c
≤ 2c
y 2 1−2/k δ d2
fi
1/k
fi
y2 d2
dt.
<
fi 1−1/k dt (16)
68
2. FUNDAMENTAL PROBLEM OF STATISTICS
We multiply and divide the last integrand by the same function ρ(t) and apply the integral Cauchy–Bunyakovskii inequality. It follows that the integral in the right-hand side of (16) is not greater
π
1−1/k (1−1/k)/2
d
2
k−1
(1 + t /d)
1/k fi dt
.
(17)
We calculate i by the substitution √ the integral in (17) for each 2 t = vi + x d and use the inequality d + t ≤ d + 2vi2 + 2dx2 . When raising to powers k−1 in the integrand, the moments of N(0, 1) and moments of empiric distribution of {vi } appear. We estimate them from the above by the corresponding higher moments Ex2(k−1) = 2(k−1) (2k − 3)!! and vi so that a constant remains under the integral with respect to the measure (2π)−1/2 exp(−x2 /2) dx. This constant is not larger than
1 + 2 [(2k − 3)!!]1/(k−1) + 2Vk−1
k−1
.
(18)
Substituting (18) to (17), we obtain the lemma statement. Lemma 2.16. 1 Δ22 ≤ √ 2 π
1/4 d d √ + h(λ). ε nλ
(19)
Proof. We find that Δ22 is not larger than [f (t, d)]2 [f (t, d)]2 y2E ind(f(t) < δ) dt = y 2 P(f(t) < δ) dt. f (t, d) f (t, d) We divide the integration region into the subregions D1 = {t : f (t, d) > 2δ} and D2 = {t : f (t, d) ≤ 2δ}. In the region D1 , the difference |f (t, d) − f(t)| is not less than δ > 0, f (t, d) = Ef(t), and by virtue of the Chebyshev inequality, P(f(t) ≤ δ) ≤ σ/δ, where σ 2 = varf(t). We estimate this variance by Lemma 2.11. In the region D2 , we have f (t, d) > 2δ, and by Lemma 2.14, the contribution of D2 to (19) is not greater than h(λ). We obtain the required statement.
ESTIMATOR FOR THE BEST ESTIMATING FUNCTION
69
Theorem 2.13. If 0 ≤ ε ≤ y, then the quadratic risk of the estimator (11) is R = R(% v0 ) = En−1 (vi − ϕ 0 (ui , ε, λ))2 i
√ ≤ R0 + ay(1 + | μ|/ y) n−7/23 , where a is a numeric coefficient. Proof. Denote θ = ε/y ≤ 1. Summing upper estimates of remainder terms obtained in Lemmas 2.13, 2.14, 2.15, and 2.16, we obtain R ≤ R0 + Δ1 + Δ21 + Δ22 ≤
ay[θ2
+
θ−3/2 λ−2 n−1
√ + θ−1/4 λ−1 n−1/2 + λ1/2 (1 + | μ|)/ y].
Choose λ = θ4 , θ = n−2/23 we arrive at the inequality in the theorem formulation. For sufficiently large n, restricted ratios y = n/N , and for bounded moments μ2(k) , the quadratic risk R( v0 ) approaches R0 < y. It is not to be expected that the application of the multiparametric approach to the improvement of estimators without restrictions on parameters (see Remark 4) would succeed since in the case of a considerable scattering of parameters, the multiparametric problem reduces to a number of one-dimensional ones. In conclusion, we note that the slow decrease in the magnitude of the remainder terms with the increase in n is produced by double regularization of the optimal estimator (11). Such regularization proves to be necessary also in other multiparametric problems, where the improvement effect is reached by an additional mixing of a great number of boundedly dependent variables that produces new specifically multiparametric regularities. The averaging interval must be small enough to provide good approximation to the best-in-the-limit weighting function, and it must be sufficiently large in order to get free from random distortions. Therefore, the improvement is guaranteed only with inaccuracy of the order of magnitude of nα , where 0 < α < 1.
This page intentionally left blank
CHAPTER 3
SPECTRAL THEORY OF SAMPLE COVARIANCE MATRICES Spectra of covariance matrices may be successfully used for the improvement of statistical treatment of multivariate data. Until recently, the extreme eigenvalues were of a special interest, and theoretical investigations were concerned mainly with the analysis of statistical estimation of the least and greatest eigenvalues. The development of essentially multivariate methods required more attention to problems of estimating spectral functions of covariance matrices that may be involved in the construction of new more efficient versions of most popular multivariate procedures. The progress of theoretical investigations in studying spectra of random matrices of increasing dimension (see Introduction) suggested new asymptotic technique and produced impressive results in the creation of improved statistical methods. We may say that the main success of multiparametric statistics is based on methods of spectral theory of large sample covariance matrices and their limit spectra. This chapter presents the latest achievements in this field. The convergence of spectral distribution functions for random matrices of increasing dimension was established first by V. A. Marchenko and L. A. Pastur, and then V. L. Girko, Krishnaiah, Z. D. Bai and Silverstein et al. Different methods were used, but the most fruitful approach is based on study of resolvents as functions of complex parameter and their normed traces. The main idea is as follows. Let A be any real, symmetric, positive definite matrix of size n × n and H(z) = (I − zA)−1 be its resolvent. Denote h(z) = n−1 tr H(z). Then, the empirical distribution function of eigenvalues λi of A n ind(λi ≤ u) F (u) = i=1
71
72
3. SPECTRAL THEORY
may be calculated as follows: F (u) =
1 π
u
lim Im
ε→+0
h(z −1 )z −1 dv,
0
where z = u − iε. If h(z) converges as n → ∞, the limit distribution function for eigenvalues of matrices A is obtained. We use the Kolmogorov increasing dimension asymptotics in which the dimension n of observation vectors increases along with sample size N so that n/N → y > 0. This asymptotics was first introduced in another region in 1967 for studying spectra of random matrices by Marchenko and Pastur [43]. In this chapter, we present the results of investigations by the author of this book [63], [65], [67], [69], and [72] in developing spectral theory of large sample covariance matrices. To prove the convergence of spectral functions, we choose the method of one-by-one exclusion of independent variables. Let us illustrate the main idea and results of this approach in the following example. Let X be a sample of identically distributed random vectors X = {x1 , x2 , . . ., xn } from N(0, Σ). Consider the matrix S=
N 1 xm xTm , n m=1
which presents sample covariance matrix of a special form for the case when expectation of variables is known a priori. Let us single out an independent vector xm from X, m = 1, 2, . . ., N. Define H0 = H0 (t) = (I + tS)−1 , S m = S − N −1 xm xTm ,
h0 (t) = n−1 tr(I + tS)−1 ,
H0m = (I + tS m )−1 ,
H0 = H0m − tH0m xm xTm H0 /N,
ψm = xTm H0 xm /N,
Hxm = (1 − tψm )H0m xm ,
m = 1, 2, . . ., N. Obviously, 1 − tEψm = 1 − tE tr(H0 S)/N = def
s0 (t) = 1 − y + yh0 (t) for each m. For simplicity of notations, let the ratio n/N = y be constant as n → ∞. Assume that var(tψm ) → 0 as n → ∞ (this fact will be proved later in this chapter).
SPECTRAL THEORY
73
Proposition 1. Let X be a sample of size N from n-dimensional population N(0, Σ) and n/N → y > 0 as n → ∞. Then, for each t ≥ 0, −1 + ωn , h0 (t) = En−1 tr(I + tS)−1 = n−1 tr I + ts0 (t)Σ where s0 (t) = 1 − t + yh0 (t)
and
ωn → 0.
We present a full proof. Choose a vector xm ∈ X. For each m, we have tH0 xm xTm = t(1 − tψm ) H0m xm xTm . Here the expectation of the left-hand side is tEH0 S = I − EH0 . In the right-hand side, 1 − tψm = s0 (t) − Δm , where Δm is the deviation of tψm from the expectation value, EΔ2m → 0. We notice that EH0m xm xm = EH0m Σ. It follows that I − EH0 = ts0 (t)EH0m Σ − tEH0m xm xTm Δm . Substitute the expression for H0m in terms of H0 . Our equation may be rewritten in the form I = EH0 I + ts0 (t)Σ + Ω, where Ω = t2 s0 (t)EH0m xm xTm H0 Σ/N − tEH0m xm xTm Δm . We mul−1 tiply this from the right-hand side by R = I + ts0 (t)Σ , calculate the trace, and divide by n. It follows that n−1 tr R = h0 (t) + ωn , where |ωn | ≤ t2 s0 (t)E xTm H0 ΣRH0m xm /(nN )+tE xTm Δm RH0m xm /n. Let us estimate these matrix expressions in norm applying the Schwarz inequality. We conclude that |ωn | is not greater than 1/2 t2 Σ Ex2m /(nN ) + t E(x2m /n)2 EΔ2m ≤ τ 2 /N + τ var(tψm ), where τ =
√
M t ≥ 0. The proposition is proved.
74
3. SPECTRAL THEORY
The obtained asymptotic expression for h0 (t) is remarkable in that it states the convergence of spectral functions of S and shows that their principal parts are functions of only Σ, that is, of only two moments of variables. Relations between spectra of sample covariance matrices and true covariance matrices will be called dispersion equations.
GRAM MATRICES
3.1.
75
SPECTRAL FUNCTIONS OF LARGE SAMPLE COVARIANCE MATRICES
Following [67], we begin with studying matrices of the form S = N −1
N
xm xTm ,
m=1
where random xm ∈ Rn are of interest in a variety of applications different from statistics (see Introduction). Let us call them Gram ¯x ¯T , matrices in contrast to sample covariance matrices C = S − x ¯ and are more complicated. which depend also on sample averages x Gram Matrices We restrict distributions S of x with the only requirement that all components of x have fourth moments and Ex = 0. Denote Σ = cov(x, x). Define the resolvent H0 = H0 (z) = (I − zS)−1 as functions of a complex parameter z (with the purpose to use the analytical properties of H0 (z)). For measuring remainder terms as n → ∞, we define two parameters: the maximum fourth moment of a projection of x onto nonrandom axes (defined by vectors e of unit length) M = sup E(eT x)4 > 0
(1)
|e|=1
and special measures of the quadratic from variance ν = sup var(xT Ωx/n),
and γ = ν/M,
(2)
Ω=1
where Ω are nonrandom, symmetric, positive semidefinite matrices of unit spectral norm. For independent components of x, the parameter ν ≤ M/n.
76
3. SPECTRAL THEORY
Let us solve the problem of isolating principal parts of spectral functions as n → ∞ when n/N → y > 0. Define the region of complex plane G = {z : Re z < 0 or Imz = 0} and the function 1 if Re z ≤ 0, α = α(z) = |z|/|Im z| if Re z > 0 and Im z = 0. To estimate expressions involving the resolvent, we will use the following inequalities. Remark 1. Let A be a real symmetric matrix, v be a vector with n complex components, and vH be the Hermitian conjugate vector (here and in the following, the superscript H denotes the Hermitian conjugation). If u ≥ 0 and z ∈ G, then |1 − zu|−1 ≤ α, I − (I − zA)−1 ≤ α,
(I − zA)−1 ≤ α, |1 − vH (I − zA)−1 v|−1 ≤ α.
For z ∈ G, we have H0 (z) ≤ α. We will use the method of alternative elimination of independent sample vectors. In this section, we assume that the sample size N > 1. Let e be a nonrandom complex vector of length 1, eH e = 1. Denote h0 (z) = En−1 trH0 (z),
y = n/N,
= S − N −1 xm xTm , ϕm = ϕm (z) = xTm H0m xm /N, vm = vm (z) = eH H0m xm , Sm
H0m
s0 (z) = 1 − y + yh0 (z), = (I − zS m )−1 ,
ψm = ψm (z) = xTm H0 xm /N, um = um (z) = eH H0 xm ,
(3)
m = 1, 2, . . ., N. Remark 2. If z ∈ G, the following relations are valid H0 = H0m + zH m xm xTm H0 /N,
H0 xm = (1 + zψm )H0m xm ,
um = vm + zψm vm = vm + zϕm um , ψm = ϕm + zϕm ψm , |1 − zϕm
|−1
≤ α,
(1 + zψm )(1 − zϕm ) = 1,
|um | ≤ α|vm |,
m = 1, 2, . . ., N.
(4)
GRAM MATRICES
77
Lemma 3.1. If z ∈ G, then
Evm = 0, |Eum
|2
≤
√
4 ≤ M α4 , Evm
M |z|2 α2
1 + zEψm = s0 (z),
var ψm ,
m = 1, 2, . . ., N.
(5)
Proof. The first two inequalities immediately follow from the independence of xm and H m . Next, we have zEψm = EzxTm H0 xm /N = Ez tr(SH0 )/N = E tr(H0 − I)/N = = y(h0 (z) − 1) = s0 (z) − 1. We note that Eum = Evm Δm , where Δm = ψm − Eψm . The last lemma statement follows form the Schwarz inequality. Define the variance of a complex variable as var(z) = E(z − Ez)(z ∗ − Ez ∗ ), where (and in the following) the asterisk denotes complex conjugation, Introduce the parameters τ=
√
M |z| and δ = 2α2 y 2 (γ + τ 2 α4 /N ).
To estimate variances of functionals uniformly depending on a large number of independent variables, we use the technique of expanding in martingale differences. It would be sufficient to cite the Burkholder inequality (see in [91]). However, we present the following statement with the full proof. Lemma 3.2. Given a set X = {X1 , X2 , . . ., XN } of independent variables, consider a function ϕ(X) such that ϕ(X) = ϕm (X) + Δm (X), where ϕm (X) does not depend on Xm . If second moments exist for ϕ(X) and Δm = Δm (X), m = 1, 2, . . ., N , then var ϕ(X) ≤
N m=1
E(Δm − Em Δm )2 ,
78
3. SPECTRAL THEORY
where Em is the expectation calculated by integration with respect to the distribution of the variable Xm only. Proof. Denote by Fm the distribution function of independent Xm , m = 1, 2, . . ., N , and let dF m denote the product dF1 dF2 . . . dFm , m = 1, 2, . . ., N. Consider the martingale differences β1 = ϕ −
ϕdF1 ;
βm =
ϕdF
m−1
−
ϕdF m ,
m = 2, . . ., N,
where ϕ = ϕ(X). In view of the independence of X1 , X2 , . . ., XN , it can be readily seen that Eβi βj = 0 if i = j, i, j = 1, 2, . . ., N. Majoring the square of the first moment by second moment, we obtain 2 Eβm
= E[ (ϕ − ϕdFm ) dF m−1 ]2 ≤ ≤ E (ϕ − ϕdFm )2 dF m−1 = E(ϕ − Em ϕ)2 .
2 ≤ E (Δ We have Eβm m − lemma follows.
Δm dFm )2 . The statement of the
Lemma 3.3. If z ∈ G, then we have Var(eH H0 e) ≤ τ 2 α6 /N, and
Var ψm ≤ aM α δ, 4
Var ϕm ≤ M δ/2, m = 1, 2, . . ., N,
where a is a numerical constant. Proof. From (4), it follows that eH H0 e = eH H0m e + zvm um /N, m = 1, 2, . . ., N . Using Remark 2, we find that
Var(eH0 e) ≤ |z|2
N
E|vm um |2 /N 2 ≤
m=1
≤ |z|2 α2 E|vm |4 /N ≤ τ 2 α6 /N.
GRAM MATRICES
79
Now we fix some integer m = 1, 2, . . ., N . Denote Ω = EH0m , ΔH0m = H0m − Ω. Since Ω is nonrandom, we have Var ϕm = E|xTm ΔH0m xm |2 /N 2 + Var(xTm Ωxm )/N 2 .
(6)
Note that H0m is a matrix of the form H0 with N less by 1 if we replace the argument t by t = (1 − N −1 ) t. We apply the first statement of this lemma to estimate the conditional variance m Var (eH m H0 em ) under fixed xm , where em is a unit vector directed along xm , and find that the first summand in (6) is not greater than E|xm |4 τ 2 α6 /N 3 ≤ M τ 2 α6 y 2 /N. To estimate the second summand, we substitute the parameter γ from (2) to (6) and obtain that the second addend in (6) is not greater than Ω2 M n2 γ/N 2 ≤ M α2 y 2 γ. Summing both addends, we obtain the right-hand side of the second inequality in the statement of the lemma. Further, the equation connecting ϕm and ψm in Remark 2 may be rewritten in the form (1 − zϕm )Δψm = (1 + zEψm )Δϕm − zE Δϕm Δψm , where Δϕm = ϕm − Eϕm and Δψm = ψm − Eψm . We square the absolute values of both parts of this equation and take into account that |1 − zϕm |−1 ≤ α and |1 + zEψm | ≤ α. It follows that Var ψm ≤ α4 Var ϕm + α2 |z|2 Var ϕm Var ψm . Here in the second summand of the right-hand side, |z| Var ψm ≤ E|zψm |2 = E|zϕm (1 − zϕm )−1 |2 ≤ (1 + α)2 . But α ≥ 1. It follows that Var ψm ≤ 5α4 Var ϕm . The last statement of our lemma is proved.
80
3. SPECTRAL THEORY
Remark 3. If z ∈ G and u ≥ 0, then |1 − zs0 (z)u|−1 ≤ α. Indeed, first let Re z ≤ 0. We single out a sample vector xm . Using (5), we obtain s0 (z) = 1 + zE ψm = E(1 − zϕm )−1 , −1 = Er(z −1 − ϕ−1 )∗ , zs0 (z) = E(z −1 − ϕ−1 m ) m
where r ≥ 0. We examine that Re ϕm ≥ 0 for Re z ≤ 0. It follows that, in this case, Re zs0 (z) ≤ 0 and |1 − zs0 (z)u| ≥ 1. Now, let Re z > 0 and Im z = 0. The sign of Im z coincides with the sign of Im h0 (z) and with the sign of Im s0 (z). Therefore, |1 − zs0 (z)u| ≥ |z| Im z/|z|2 + u Im s0 (z) ≥ |Im z/z| = α−1 . Our remark is grounded for the both cases. Theorem 3.1. For any population in which all four moments of all variables exist, for any z ∈ G, we have Var(eH H0 (z)e) ≤ τ 2 α6 /N, EH0 (z) = (I − zs0 (z)Σ)−1 + Ω0 , √ def where Ω0 ≤ oN = aτ 2 α4 ( δ + α/N ) and a is a numerical constant. Proof. The first statement of the theorem is proved in Lemma 3.3. To prove the second one, we fix an integer m, m = 1, 2, . . ., N , and multiply both sides of the first relation in (4) by xm xTm . It follows H0 xm xTm = H0m xm xTm + zH0m xm xTm H0 xm xTm /N. Multiplying by z, calculate the expectation values: zEH0 xm xTm = EzH0 S = E(H0 − I) = = zEH0m Σ + zEH0m xm xTm (1 − zψm ).
GRAM MATRICES
81
We substitute zψm = s0 (z)−1+zΔψm , where Δψm = ψm − Eψm , and obtain that EH0 = I + zs0 (z)EH0m Σ + Ω1 , where Ω1 = z 2 EH0m xm xTm Δψm . Using (4) once more to replace H0m , we find that (I − zs0 (z)Σ)EH0 = I + Ω1 + Ω2 , where Ω2 = zs0 (z)E(H0m −H0 )Σ. Denote R = (I −zs0 (z)Σ)−1 . By Remark 3, R ≤ α. Multiplying by R, we obtain EH0 = R + Ω, where Ω = RΩ1 + RΩ2 . We notice that Ω is a symmetric matrix and, consequently, its spectral norm equals |eH Ωe|, where e is one of its eigenvalues. Denote f = Re, f H f ≤ α. We have Ω = |f T Ω1 e + f T Ω2 e|. Now, |f H Ω1 e| = |z| E|f H H0m xm (xTm e) Δψm | ≤ 1/4 ≤ |z|2 E|f H H0m xm |4 E|xTm e|4 Var ψm .
(7)
Here E|f H H0m xm√|4 ≤ M E|f H H0m H0m∗ f |2 ≤ M α8 , E|xTm e|4 ≤ M, Var ψm ≤ aM α4 δ. It√follows that the left-hand side of (7) is not greater than M |z|2 α4 δ. Then, Ω2 ≤ |f H Ω2 e| = |z|2 |s0 (z)| |Ef H H0 xm (xTm H0m Σe)/N | ≤ 1/2 ≤ |z|2 |s0 (z)| E|f H H0 xm |2 E|xTm H0m Σe|2 /N. Here we have that |s0 (z)| = |E(1 − zϕm )−1 | ≤ α; using (3) and (4), we find that m 2 E|f H H0 xm |2 ≤ |f |2 α2 E|eH 1 H0 xm | ≤
√
M α6 ,
where e1 = f /|f |. Obviously, E|xTm H0m Σe|2 =
√
M (eT ΣH0m ΣH0m∗ Σe) ≤ M 3/2 α2 .
82
3. SPECTRAL THEORY
Therefore, |f T Ω2 e| is not greater than M |z|2 α5 /N. We obtain the required upper estimate of Ω. This completes the proof. Corollary. For any z ∈ G, we have h0 (z) = n−1 tr(I − zs0 (z)Σ)−1 + ω,
(8)
where |ω| ≤ oN , and oN is defined in Theorem 3.1. Parameters restricting the dependence Let us investigate the requirements of parameters (1) and (2). Note that the boundedness of the moments M is an essential condition restricting the dependence of variables. Indeed, let Σ be a correlation matrix with the Bayes distribution of the correlation coefficients that is uniform on the segment [−1, 1]. Then, the Bayes mean EM ≥ En−1 trΣ2 ≥ (n + 2)/3. In case of N (0, Σ) with the matrix Σ with all entries 1, the value M = 3n2 . Let us prove that relation (8) can be established with accuracy to terms, in which M → ∞ and moments of variables are restricted only in a set. Denote Λk = n−1 tr Σk ,
Qk = E(x2 /n)k ,
W = n−2 sup E(xT Ωx )4 , Ω=1
k ≥ 0, where x and x are independent vectors and Ω are nonrandom, symmetric, positive semidefinite matrices of unit spectral norm. Remark 4. If t ≥ 0, thenrelation (8) holds with the remain der term ω such that ω 2 /2 ≤ Q2 y 2 (ν + W t2 /N ) + W/N 2 t4 . Using this inequality, we can show that if M is not bounded but Q1 and Q2 are bounded, then there exists a case when ν → 0 as n → ∞ and ω → 0 in (8). Indeed, let x ∼ N(0, Σ). Denote Λk = n−1 tr Σk , k = 1, 2, . . .. For normal x, we have M = 3Σ2 , Q2 = Λ21 +2Λ2 /n, W = 3(Λ22 + 2Λ4 /n), ν = 2Λ2 /n. Consider a special case when Σ = I + ρE, where E is a matrix all of whose entries are 1, and 0 ≤ ρ ≤ 1. Then, M = 3(1+nρ)2 , Λ1 = 1+ρ, Λ2 = 1+2ρ+nρ2 , Λk ≤ ak +bk ρk nk−1 , where ak and bk are positive numbers independent of n, and all
SAMPLE COVARIANCE MATRICES
83
Qk < c, where c does not depend on n. If ρ = ρ(n) = n−3/4 as n → ∞, then M → ∞, whereas the quantities Λ3 , Λ4 , and Q3 remain finite. Nevertheless, the quantities ν = O(n−1 ) and ω → 0. Sample Covariance Matrices The traditional (biased) estimator of the true covariance matrix Σ is N 1 ¯ m )T . C= (xm − x ¯)(xm − x N m=1
¯ ¯x ¯ T , where x To pass to matrix C, we use the relation C = S − x are sample mean vectors, and show that this difference does not influence the leading parts of spectral equations and affects only remainder terms. Consider the resolvent H = H(z) = (I − zC)−1 of matrix C and define h(z) = n−1 trH(z) and s(z) = 1 − y + yh(z). Denote also V = V (z) = eH H0 (z)¯ x, U = eH H(z)¯ x, T T ¯ , Ψ = Ψ(z) = x ¯ H(z)¯ ¯ H0 (z) x x, Φ = Φ(z) = x where e is a nonrandom complex vector with eH e = 1, the superscript H stands for the Hermitian conjugation. Remark 5. If z ∈ G, then ¯ T H(z), H(z) = H0 (z) − zH0 (z)¯ xx U = V − zΦU = V − zΨV, (1 + zΦ)(1 − zΨ) = 1,
|1 − zΨ| ≤ α.
Indeed, the first three identities may be checked straightforwardly. The fourth statement follows from Remark 1. Remark 6. If z ∈ G, then def
¯ T H0 (z)H0∗ (z)¯ q = |z| x x ≤ α2 .
84
3. SPECTRAL THEORY
Let us derive this inequality. Let w be a complex vector. We denote complex scalars wT w by w2 and the real product wH w by |w|2 . Denote the matrix product Z H Z by |Z|2 . Denote Ω2 = ¯ , a = |H(z))|¯ ¯ = Ω−1 x x. Then, I − zC, y ¯x ¯ T )−1 = Ω−1 (I − z y ¯y ¯ H )−1 Ω−1 , H0 (z) = (I − zC − z x ¯H y ¯=x ¯ T (I − zC)−1 x ¯ = aT (I − zC ∗ )a. y Therefore, ¯ |2 = ¯y ¯ H )−1 y q = |z|¯ xT H0 (z)H0∗ (z)¯ x = |z| · |Ω−1 (I − z y −2 ¯ |−2 |1 − z y ¯ 2 |−2 = |z|a2 1 − za2 + |z|2 aT Ca . = |z| · |¯ yH Ω−2 y Denote t = a2 /(1+|z|2 aT Ca). Let z = 0. If Re z < 0, then we have q ≤ |z|a2 ≤ 1. If Re z ≥ 0, then the quantity q ≤ |z|t2 /|1 − zt|2 . The maximum of the right-hand side of this inequality is attained for t = 1/Re z and equals q = qmax = |z|2 |Im z|−2 = α2 (z). This is our assertion. Lemma 3.4. If z ∈ G, then we have |z| var V ≤ 2τ α4 (1+τ α2 )/N. Proof. To use Lemma 3.2, we single out one of sample vectors, ∼ say, xm . Denote H0 = H0 (z), H = H(z), H0m = H0m (z), x = ¯ − N −1 xm . We have x ∼
∼
V = eH H0 x = eH H0 x + eH H m xm /N + zeH H0m xm xTm H0 x/N, where the first summand in the left-hand side does not depend on ∼ xm . Denote wm = xTm H0m x. By Lemma 3.2, we have |z| var V ≤ |z| N −2
N
E|um (1 + zwm )|2 .
m=1
But E|um |2 ≤
√
M α4 ,
E|zum wm |2 ≤
√
M α4 |z|2 (E|wm |4 )1/2 .
SAMPLE COVARIANCE MATRICES
85
It follows that ∼
∼
|z|2 E|wm |4 ≤ M |z|2 E(xT H0m H0m∗ x)2 ≤ M q , where q may be reduced to the form of the expression for q in Remark 6, with the number N less by unit with the argument z = (1 − N −1 )z. From Remark 6, it follows that q ≤ α2 and that |z|2 E|wm |4 ≤ M α4 . We obtain the required upper estimate of |z| var V. Lemma 3.4 is proved. Lemma 3.5. If z ∈ G, then EH(z) − EH0 (z) ≤ aω, where ω 2 = τ 2 α6 y(τ 2 α2 δ + (1 + τ α2 )/N ), and a is numerical coefficient. Proof. In view of the symmetry of matrices H = H(z) and H0 = H0 (z), we have ¯x ¯ T H0 = E|zV U | ≤ EH − EH0 ≤ zEH x 1/2 ≤ E|zV 2 |E|zU 2 | with some nonrandom unit vectors e in definitions of V and U . We have E|zV 2 | = |zEV 2 | + |z| Var V, √ x2 ≤ M |z| α2 y. E|zU 2 | = |z|α2 E¯ ¯ = |Eum |. But for any m = 1, 2, . . ., N, we have EV = EeT H0 x 2 3/2 2 From Lemma 3.1, it follows that |Eum | ≤ aM |z| α6 δ, where a is a number. Gathering up these estimates, we obtain the lemma statement. Lemma 3.6. If z ∈ G and u > 0, then |1 − zs(z)u|−1 ≤ α. Proof. Denote ∼
¯ )(xm − x ¯ )T , C = C − N −1 (xm − x ∼
∼
∼
¯ )T H(xm − x ¯ )T /N, Φ = (xm − x ∼
∼
H = (I − z C)−1 ,
¯ )T H(xm − x ¯ )T /N. Ψ = (xm − x
86
3. SPECTRAL THEORY
We examine the identities ∼
∼
¯ )(xm − x ¯ )T H/N, H = H + zH(xm − x
∼
∼
(1 − z Φ)(1 + z Ψ) = 1.
It follows that s(z) = 1 + y(h(z) − 1 ) = 1 + EN −1 tr(H(z) − I) = ¯ )T H(z) (xm − x ¯) = = 1 + zEN −1 tr H(z) C = 1 + zEN −1 (xm − x ∼
∼
= 1 + zEΨ = E(1 − z Φ)−1 . Suppose that Re z ≤ 0 and u > 0. Then, Re zs(z) = r(z −1 − Φ)∗ , where r > 0 and Re zs(z) ≤ 0. Therefore, |1 − zs(z)u| ≥ 1. Suppose Im z = 0. The sign of z coincides with the sign of h(z) and with the sign of s(z). Therefore, |1 − zs(z)u| = |z| · |Im z/|z|2 + uIm s(z)| ≥ α−1 . This proves the lemma. Theorem 3.2. If z ∈ G and N > 1, then Var(n−1 tr H(z)) ≤ aτ 2 α4 /N, EH(z) = (I − zs(z)Σ)−1 + Ω,
h(z) = n−1 tr(I − zs(z)Σ)−1 + ω, (9)
where s(z) = 1 + y(h(z) − 1), and √ √ Ω ≤ aτ max(1, λ) α3 [τ α δ + (1 + τ 2 α)/ N ], √ √ |ω| ≤ aτ α2 max(1, λ)[τ α2 δ + (1 + τ α3 )/ N ], and a are (different) numerical constants. Proof. From the matrix C, we single out the summand C m independent of xm : C = C m + Δm ,
¯ xTm . ¯T − x Δm = (1 + N −1 ) xm xTm − xm x
SAMPLE COVARIANCE MATRICES
87
Denote H = H(z), H m = (I − zC m )−1 . We have the identity H m = H + zH m Δm H. Applying Lemma 3.2, we obtain Var(n−1 trH) =
N m=1
≤
3|z|2
n−2 N −1 E((1
E|z n−1 tr(HΔm H m )|2 ≤ + N −1 ) · |xTm H m Hxm |2 +
¯ m |2 ), + |xTm H m Hxm |2 + |¯ xT H m H x x|4 ≤ M y 2 . where H m ≤ α, H ≤ α, E(x2m )2 ≤ M n2 , and E|¯ We conclude that Var(n−1 tr H) ≤ 3τ 2 α4 (1 + 3/N )/N. The first statement of our theorem is proved. Now, we start from Theorem 3.1. Obviously, EH(z) = (I − zs(z)Σ)−1 + (EH(z) − EH0 (z)) +
+(I − zs(z)Σ)−1 z(s0 (z) − s(z)) Σ (I − zs0 (z)Σ)−1 + Ω0 . (10)
Here |s0 (z) − s(z)| = y|h0 (t) − h(t)| ≤ τ yα2 /N. Note that last three summands in the right-hand side of (10) do not exceed τ 2 α4 y/N + aω + Ω0 , where ω is from Lemma 3.5. Substituting ω and Ω0 , we obtain that √ |ω| ≤ τ 2 α4 y/N + τ α2 /N + aτ 2 α4 ( δ + α/N ). This gives the required upper estimate of |ω|. The proof is complete. The equation (9) for h(z) was first derived for normal distribution in paper [63] in the form of a limit formula. In [65] and [67], these limit formulas were obtained for a wide class of populations. In [71], relations (9) were established for a wide class of distributions with fixed n and N .
88
3. SPECTRAL THEORY
Limit Spectra We investigate here the limiting behavior of spectral functions for the matrices S and C under the increasing dimension asymptotics. Consider a sequence P = {Pn } of problems
Pn = (S, Σ, N, X, S, C)n ,
n = 1, 2, . . .
(11)
in which spectral functions of matrices C and S are investigated over samples X of size N from populations S with cov(x, x) = Σ (we do not write out the subscripts in arguments of Pn ). For each problem Pn , we consider functions h0n (t) = n−1 tr(I − zS)−1 , FnS (u) =
1 n
n i=1
ind(λSi ≤ u),
hn (t) = n−1 tr(I − zC)−1 , FnC (u) =
1 n
n i=1
ind(λC i ≤ u),
where λSi and λC i are eigenvalues of S and C, respectively, i = 1, 2, . . ., n. We restrict (11) by the following conditions. A. For each n, the observation vectors in S are such that Ex = 0 and the four moments of all components of x exist. B. The parameter M does not exceed a constant c0 , where c0 does not depend on n. The parameter γ vanishes as n → ∞ in P. C. In P, n/N → λ > 0. D. In P for each n, the eigenvalues of matrices Σ are located on a segment [c1 , c2 ], where c1 > 0 and c2 does not depend on n, and FnΣ (u) → FΣ (u) as n → ∞ almost for any u ≥ 0. Corollary (of Theorem 3.1). Under Assumptions A–D for any z ∈ G, the limit exists lim hn (z) = h(z) such that n→∞
h(z) =
(1 − zs(z)u)−1 dFΣ (u),
s(z) = 1 − λ + λh(z), (12)
LIMIT SPECTRA
89
and for each z, we have lim E(I − zC)−1 − (I − zs(z)Σ)−1 → 0. n→∞
Let us investigate the analytical properties of solutions to (12). Theorem 3.3. If h(z) satisfies (12), c1 > 0, λ > 0, and λ = 1, then 1. |h(z)| ≤ α(z) and h(z) is regular near any point z √ ∈ G; −1 2. for any v =√Re z > 0 such that v < v2 = c1 (1 − λ)−2 or −2 v > v1 = c−1 2 (1 + λ) , we have lim
ε→+0
Im h(v + iε) = 0;
3. if v1 ≤ v ≤ v2 , then 0 ≤ Im h(v + iε) ≤ (c1 λv)−1/2 + ω, where ω → 0 as ε → +0; 4. if v = Re z < 0 then s(−v) ≥ (1 + c2 λ |v|)−1 ; 5. if |z| → ∞ on the main sheet of the analytical function h(z), then we have if 0 < λ < 1, then zh(z) = −(1 − λ)−1 Λ−1 + O(|z|−1 ), if λ = 1, then zh2 (z) = −Λ−1 + O(|z|−1/2 ), if λ > 1, then zs(z) = −β0 + O(|z|−1 ), where β0 is a root of the equation
(1 + β0 u)−1 dFΣ (u) = 1 − λ−1 .
Proof. The existence of the solution to (12) follows from Theorem 3.1. Suppose Im (z) > 0, then |h(z)| ≤ α = α(z). To be concise, denote h = h(z), s = s(z). For all u > 0 and z outside the beam z > 0, we have |1 − zsu|−1 ≤ α. Differentiating h(z) in (12), we prove the regularity of h(z). Define bν = bν (z) =
|1 − zs(z)u|−2 uν dFΣ (u),
ν = 1, 2.
90
3. SPECTRAL THEORY
Let us rewrite (12) in the form (h − 1)/s = z u(1 − zsu)−1 dFΣ (u).
(13)
It follows that Im[(h − 1)/s] = |s|−2 Im h = b1 Im z + b2 λ|z|2 Im h. Dividing by b2 , we use the inequality b1 /b2 ≤ c−1 1 . Fix some v = Re z > 0 and tend Im z = ε → +0. It follows that the product 2 (|s|−2 b−1 2 − λv )Im h → 0.
Suppose that Im h does not tend to 0 (v is fixed). Then, there exists a sequence {zk } such that, for zk = v + iεk , h = h(zk ), s = s(zk ), we have Im h → a, where a = 0. For these zk , we obtain 2 |s|−2 b−1 2 → λv as εk → +0. We apply the Cauchy–Bunyakovskii inequality to (5). It follows that |h−1|2 /|zk s|2 ≤ b2 . We obtain that |h−1|2 ≤ λ−1 +o(1) as εk → +0. It follows that |s−1|2 ≤ λ+o(1). So the values s are bounded for {zk }. On the other hand, it follows from (12) that Im h = b1 Im(zs) = b1 (Re s · Im z + λv Im h). We find that (b−1 1 − λv)Im h → 0 as Im z → 0. But Im h → a = 0 for {zk }. It follows that b−1 1 → λv. Combining this with the inequality 2 , we find that |s|−2 b−1 − b−1 v → 0. Note that b is |s|−2 b−1 → λv 1 2 2 1 −1 finite√for {zk } and c−2 ≤ b1 b−1 2 2 ≤ c1 . Substitute the boundaries (1 ± λ) + o(1) for |s|. We obtain that v1 + o(1) ≤ v ≤ v2 + o(1) as εk → +0. We can conclude that Im h → 0 for any positive v outside the interval [v1 , v2 ]. This proves the second statement of our theorem. Now suppose v1 ≤ v ≤ v2 . From (12), we obtain the inequality λIm h · Im(zh) ≤ c−1 1 . But h is bounded. It follows that the quantity (Im h)2 ≤ (c1 vλ)−1 . The third statement of our theorem is proved. Further, let v = Re z < 0. Then, the functions h and s are real and non-negative. We multiply both parts of (12) by λ. It follows that (h − 1)/zs ≤ b1 ≤ c2 . We obtain s ≥ (1 + c2 λ|z|)−1 . Let us prove the fifth theorem statement. Let λ < 1. For real z → −∞, the real value of 1 − zsu in (12) tends to infinity.
LIMIT SPECTRA
91
Consequently, h → 0 and s → 1 − λ. For sufficiently large |Re z|, we have ∞ Λ−k (zs)−k , h(z) = where Λk =
k=1
uk dFΣ (u). We conclude that h(z) = −(1 − λ)−1 Λ−1 z −1 + O(|z|−2 )
for real z < 0 and for any z ∈ G as |z| → ∞ in view of the properties of the Laurent series. Now let λ = 1. Then h = s. From (12), we obtain that h → 0 as z → −∞ and h2 = Λ−1 |z|−1 + O(|z|−2 ). Now suppose that λ > 1, z = −t < 0, and t → ∞. Then, by Lemma 3.6, we have s ≥ 0, h ≥ 1 − 1/λ, and s → 0. Equation (12) implies ts → β0 as is stated in the theorem formulation. This completes the proof of Theorem 3.3. Remark 7. Under Assumptions A–D for each u ≥ 0, the limit exists F (u) = plim FnC (u) such that
n→∞
(1 − zu)−1 dF (u) = h(z) .
(14)
Indeed, to prove the convergence, it is sufficient to cite Corollary 3.2.1 from [22] that states the convergence of {hnS (z)} and {hnC (z)} almost surely. By Lemma 3.5, both these sequences converge to the same limit h(z). To prove that the limits of FnS (u) and FnC (u) coincide, it suffices to prove the uniqueness of the solution to (12). It can be readily proved if we perform the inverse Stieltjes transformation. Theorem 3.4. Under Assumptions A–D, 1. if λ = 0, then F (u) = FΣ (u) almost everywhere for u ≥ 0; 2. if λ > 0 and λ = 1, then √ F (0) = F (u1 −0) =√max(0, 1−λ−1 ), F (u2 ) = 1, where u1 = c1 (1 − λ)2 , u2 = c2 (1 + λ)2 , and c1 and c2 are bounds of the limit spectra Σ;
92
3. SPECTRAL THEORY
3. if y > 0, λ = 1, and u > 0, then the derivative F (u) of the function F (u) exists and F (u) ≤ π −1 (c1 λu)−1/2 ; Proof. Let λ = 0. Then s(z) = 1. In view of (12), we have
−1
(1 − zu)
h(z) =
dF (u) =
(1 − zu)−1 dFΣ (u).
At the continuity points of FΣ (u), the derivative FΣ (u)
1 1 = lim Im h π ε→+0 z
1 = F (u), z
where z = u − iε, u > 0. Let λ > 0. By Theorem 3.2 for u < u1 and for u > u2 (note that u1 > 0 if λ > 0), the values Im[(u − iε)−1 h((u − iε)−1 )] → 0 as ε → +0. But we have Im
h((u − iε)−1 ) > (2ε)−1 [F (u + ε) − F (u − ε)]. u − iε
(15)
It follows that F (u) exists and F (u) = 0 for 0 < u < u1 and for u > u2 . The points of the increase of F (u) can be located only at the point u = 0 or on the segment [u1 , u2 ]. If λ < 1 and |z| → ∞, we have (1 − zu)−1 dF (u) → 0 and, consequently, F (0) = 0. If λ > 1 and |z| → ∞, then h(z −1 )z −1 ≈ (1 − λ−1 )/z and F (0) = 1 − λ−1 . The second statement of our theorem is proved. Now, let z = v + iε, where v > 0 is fixed and ε → +0. Then, using (12) we obtain that Im h = b1 Im (zs). Obviously, |Im h| ≤
|1 − zsu|−1 dFΣ (u) ≤
1 b1 = . c1 Im (zs) c1 Im h
If Im h remains finite, then b1 → (λv)−1 . Performing limit transition in (15), we prove the last statement of the theorem.
LIMIT SPECTRA
93
Theorem 3.5. If Assumptions A–D hold and 0 < λ < 1, then for any complex z, z outside of the half-axis z > 0, we have |h(z) − h(z )| < c3 |z − z |ζ , where c3 and ζ > 0 do not depend on z and z . Proof. From (12), we obtain −1 |h(z)| ≤ λ−1 max(λ, |1 − λ| + 2c−1 1 |z| ).
By definition, the function h(z) is differentiable for each z outside the segment V = [v1 , v2 ], v1 > 0. Denote a δ-neighborhood of the segment V by Vδ . If z is outside of Vδ , then the derivative h (z) exists and is uniformly bounded. It suffices to prove our theorem for v ∈ V1 , where V1 = Vδ − {z : Im z = 0}. Choose δ = δ1 = v1 /2. Then δ1 < |z| < δ2 for z ∈ V1 , where δ2 does not depend on z. We estimate the absolute value of the derivative h (z). For Im z = 0, from(15) by the differentiation we obtain
z
−1 −1
y
−
X
−2
s(z) u dFΣ (u) h (z) = X −2 u dFΣ (u), (16) zλ
where X = (1 − zs(z)u) = 0. Denote ϕ(z) =
1 zy
−
X −2 u dFΣ (u),
h1 = Im h(z),
z0 = Re z,
b1 =
z1 = Im z,
|X|−2 u dFΣ (u), s0 = Re s(z),
and let α with subscripts denote constants not depending on z. The right-hand side of (16) is not greater than α1 b1 for z ∈ V1 and therefore |h (z)| < α2 b1 |ϕ(z)|−1 . We consider two cases. Denote α3 = (2δ2 c2 )−1 . At first, let Re s(z) = s0 ≤ α3 . Using the relation h1 = b1 Im (zs(z)), we obtain that the quantity −Im ϕ(z) equals −2 −1
z1 |z|
λ
+ 2b1
−1
|X|−4 u2 (1 − z0 s0 u + z1 λ h1 u)h1 dFΣ (u).
94
3. SPECTRAL THEORY
In the integrand here, we have z0 > 0, 1 − z0 s0 u ≥ 1/2, z1 h1 > 0. From the Cauchy–Bunyakovskii inequality, it follows that
|X|−4 u2 dFΣ (u) ≥ b21 .
Hence |Im ϕ(z)| ≥ b1 h1 and |h (z)| ≤ α2 h−1 1 . Let Re ϕ(z) = λ−1 z0 |z|−2 − b1 + 2[Im zs(z)]2
|X|−4 u3 dFΣ (u).
Define p = λ−1 z0 |z|−2 − b1 . We have p = λ−1 |z|−2 z0 z1 |Im zs(z)|−1 (s0 − λh1 z1 /z0 ). Here |h1 | < α4 , z0 ≥ δ1 > 0, s0 > α3 > 0, and we obtain that p > 0 if z1 < α6 , where α6 = α3 α5 /λα4 . If z ∈ V1 and z1 > α6 , then the H¨ older inequality follows from the existence of a uniformly bounded derivative of the analytic function h(z) in a closed domain. Now let z ∈ V1 , z1 < α6 , p > 0, and s0 > α3 > 0. Then, |h (z)| ≤ α7 b1 |Re ϕ(z)|−1 , where Re ϕ(z) ≥
2(Im zs(z))2 c1
|X|−4 u2 dFΣ (u) ≥
≥ 2(Im zs(z))2 c1 b1 = 2c1 h21 . Substituting b1 = h1 /Im (zs(z)) and taking into account that s0 > 0, we obtain that |h (z)| ≤ α7 h−2 1 . Thus, for v ∈ Vδ and −2 0 < z1 < α6 for any s0 , it follows that |h (z)| ≤ α8 max (h−1 1 , h1 ) ≤ α9 h−2 1 . Calculating the derivative along the vertical line we obtain the inequality h21 |dh1 /dz| ≤ α9 , whence 3 h31 (z) ≤ h31 (z ) + 3α9 |z − z | ≤ h1 (z ) + α10 |z − z |1/3 if Im z · Im z > 0. The H¨ older inequality for h1 = Im h(z) with ζ = 1/3 follows. This completes the proof of Theorem 3.5.
LIMIT SPECTRA
95
Example. Consider limit spectra of matrix Σ of a special form of the “ρ-model” considered first in [63]. It is of a special interest since it admits an analytical solution to the dispersion equation (12). For this model, the limit spectrum of Σ is located on a √ √ segment [c1 , c2 ], where c1 = σ 2 (1 − ρ)2 and c2 = σ 2 (1 + ρ)2 , σ > 0, 0 ≤ ρ < 1. Its limit spectrum density is dFΣ (u) = du
(2πρ)−1 (1 − ρ)u−2 0
(c2 − u)(u − c1 ),
for u < c1
c1 ≤ u ≤ c2 ,
and for u > c2 .
The moments Λk =
uk dFΣ (u) for k = 0, 1, 2, 3, 4 are
Λ0 = 1, Λ1 = σ 2 (1 − ρ), Λ2 = σ 4 (1 − ρ), Λ3 = σ 6 (1 − ρ2 ), Λ4 = σ 8 (1 − ρ) (1 + 3ρ + ρ2 ). If ρ > 0, the integral η(z) =
−1
(1 − zu)
dFΣ (u) =
1 + ρ − κz −
(1 + ρ − κz)2 − 4ρ , 2ρ
where κ = σ 2 (1 − ρ)2 . The function η = η(z) satisfies the equation ρη 2 + (κz − ρ − 1)η + 1 = 0. The equation h(z) = η(zs(z)) can be transformed to the equation (h − 1)(1 − ρh) = κzhs, which is quadratic with respect to h = h(z), s = 1 − λ + λh. If λ > 0, its solution is h=
1 + ρ − κ(1 − λ)z −
(1 + ρ − κ(1 − λ)z)2 − 4(ρ + κzλ) . 2(ρ + κλz)
The moments Mk = (k!)−1 h(k) (0) for k = 0, 1, 2, 3 are M0 = 1, M1 = σ 2 (1 − ρ), M2 = σ 4 (1 − ρ) (1 + λ(1− ρ)) , M3 = σ 6 (1 − ρ) 1 + ρ + 3λ(1 − ρ) + λ2 (1 − ρ)2 . Differentiating the functions of the inverse argument, we find that, in particular, Λ−1 = κ−1 , Λ−2 = κ−2 (1+ρ), M−1 = κ−1 (1 − λ)−1 ,
96
3. SPECTRAL THEORY
M−2 = κ−2 (ρ + λ(1 − ρ))(1−λ)−3 . The continuous limit spectrum of the matrix C is located on the segment [u1 , u2 ], where u1 = σ 2 (1 −
λ + ρ(1 − λ))2 , u2 = σ 2 (1 +
and has the density ⎧ ⎪ ⎨ (1 − ρ) (u2 − u)(u − u1 ) 2πu (ρu + σ 2 (1 − ρ)2 y) f (u) = ⎪ ⎩ 0 otherwise.
λ + ρ(1 − λ))2
if u ∈ [u1 , u2 ],
If λ > 1, then the function F (u) has a jump 1 − λ−1 at the point u = 0. If λ = 0, then F (u) = FΣ (u) has a form of a unit step at the point u = σ 2 . The density f (u) satisfies the H¨older condition with ζ = 1/2. In a special case when Σ = I andρ = 0, we obtain the limit spectral density F (u) = √ (2π)−1 u−2 (u2 − u)(u − u1 ) for u1 ≤ u ≤ u2 , where u2,1 = (1 ± λ)2 . This “semicircle” law of spectral density was first found by Marchenko and Pastur [43].
INFINITE COVARIANCE MATRICES
3.2.
97
SPECTRAL FUNCTIONS OF INFINITE SAMPLE COVARIANCE MATRICES
The technique of efficient essentially multivariate statistical analysis presented in the review [69] and in book [71] is based on using spectral properties of large-dimensional sample covariance matrices. However, this presentation is restricted to dimensions not much greater than sample size. To treat efficiently the case when the sample size is bounded another asymptotics is desirable. In this section, we assume that the observation vectors are infinite dimensional and investigate spectra of infinite sample covariance matrices under restricted sample size that tends to infinity. Let F be an infinite-dimensional population and x = (x1 , x2 , . . .) be an infinite-dimensional vector from F. Let X = {xm }, m = 1, 2, . . . be a sample of size N from F. We restrict ourselves with the following two requirements. 1. Assume that fourth moments exist for all components of vector x ∈ F, and let the expectation Ex = 0. Denote Σ = cov(x, x) and let d1 ≥ d2 ≥ d3 ≥ . . . be eigenvalues of the matrix Σ. 2. Let the series of eigenvalues of Σ d1 +d2 +d3 +· · · converge, def
and let d = trΣ > 0. We introduce three parameters characterizing the populations. Denote M = sup E(eT x)4 > 0,
(1)
|e|=1
where (and in the following) e are nonrandom, infinite-dimensional unity vectors. Define the parameter k = E(x2 )2 /(Ex2 )2 ≥ 1, where (and in the following) squares of vectors denote squares of their lengths.
98
3. SPECTRAL THEORY
Denote γ = sup var(xT Ωx)/E(x2 )2 ≤ 1,
(2)
Ω=1
where Ω are nonrandom, positive, semidefinite, symmetrical infinite-dimensional matrices of spectral norm 1 (we use only spectral norms of matrices). Denote ρi = d1 /d, i = 1, 2, . . ., and ρ = ρ1 . In the particular case of normal distributions x ∼ N(0, Σ), we have M = 3Σ2 , k = 3, and ∞
γ=2
ρ2 trΣ2 i ≤ 2ρ. = 2 trΣ2 + 2 tr2 Σ 1 + 2ρ2i i=1
Let n components of the vector x ∼ N(0, Σ) have the variance 1, while the remaining components have variance 0. In this case, ρ = 1/n and n 2 2rij 2 2 = 1 , rij rij , γ= 2 n 1 + 2rij i,j=1 where rij = Σij / di dj are correlation coefficients. To apply methods developed in Section 3.1 to infinitedimensional vectors, we introduce an asymptotics, preserving the basic idea of Kolmogorov: as n → ∞, the summary variance d of variables has the same order of magnitude as the sample√size N . We consider the quantities N and d as large, M, d1 ≤ M , b = d/N as bounded, and γ and ρ as small. Dispersion Equations for Infinite Gram Matrices First we investigate spectral properties of simpler random matrices of the form S=N
−1
N
xm xTm ,
(3)
m=1
where xm ∈ X are random, independent, identically distributed infinite-dimensional vectors, m = 1, 2, . . ., N . The matrix S is sample covariance matrices of a special form when the expectation of variables is known a priori.
DISPERSION EQUATIONS FOR INFINITE GRAM MATRICES
99
Obviously, d1 /N = bρ,
N −1 trΣ = b,
E(x2 )2 ≤ kd2 ,
N −2 trΣ2 ≤ b2 ρ,
Σ = ES,
var(x2 /N ) ≤ kb2 γ,
and for x ∼ N(0, Σ), var(x2 /N ) ≤ 2b2 ρ. Consider the matrices R0 = R0 (t) = (S + tI)−1 and H0 = H0 (t) = SR0 (t), t > 0. Let us apply the method of one-by-one exclusion of independent vectors xm . Let t > 0. Denote S m = S − N −1 xm xTm ,
R0m = (S m + tI)−1 ,
vm = vm (t) = eT R0m (t)xm ,
um = um (t) = eT R0 (t)xm ,
φm = φm (t) = xTm (S m + tI)−1 xm /N, ψm = ψm (t) = xTm (S + tI)−1 xm /N, m = 1, 2, . . . . We have the identities R0 = R0m −R0m xm xTm R0 /N, ψm = φm −φm ψm , um = vm −ψm vm and the inequalities |ψm | ≤ 1,
|um | ≤ |vm |,
R0 ≤ R0m ≤ 1/t,
4 Evm ≤ M/t4
m = 1, 2, . . .. To obtain upper estimations of the variance for functionals depending on large number of independent variables, we apply Lemma 3.2. Lemma 3.7. If t > 0, the variance var(eT R0 (t)e) ≤ M/N t4 . Proof. For each m = 1, 2, . . ., N in view of (4), we have the identity eT R0 (t)e = eT R0m (t)e − um vm /N. Using the expansion in martingale-differences from Lemma 3.2, we find that var(eT R0 (t)e) ≤
N
4 N −2 |Eum vm |2 ≤ N −1 Evm .
m=1
We arrive at the statement of Lemma 3.7.
100
3. SPECTRAL THEORY
For normal observations, var (eT R0 (t)e) ≤ 3bd1 ρ/t4 . Lemma 3.8. If t > 0, then for each m = 1, 2, . . ., N , we have var φm (t) ≤ aω, var ψm (t) ≤ aω, where b2 ω = ak 2 t
M γ+ N t2
,
(4)
and a are absolute constants. Proof. Denote Ω = ER0m (t), Δm = R0m (t) − Ω. Then, 1/2 var φm = var(xTm Ωxm /N ) + E(xTm Δm xm )2 /N 2 .
(5)
The first summand is not greater than kb2 γ/t2 . The second summand in (4) equals E(x2m )2 /N 2 E(eT Δm e)2 , where random e = xm /|xm |. Note that E(eT Δm e)2 = var(eT R0m e), where we can estimate the right-hand side using Lemma 3.2 with N less by 1 and some different t > 0. We find that E(xTm Δm xm )2 /N 2 ≤ M E(x2m )2 /N 3 t4 ≤ ak M b2 /N t4 and consequently var φm ≤ aω, where ω is defined by (4). The first inequality is obtained. The second inequality follows from the identity ψm = (1 + φm )−1 φm . The lemma is proved. For normal observations, the remainder term in (4) is equal to
bd1 b2 ρ ω =a 1+ 2 . t t2 Denote s = s(t) = 1 − N −1 trEH0 (t), k = E(x2 )2 /(Ex2 )2 . It is easy to see that k ≥ 1, and for t > 0 |1 − s| ≤ b/t. Theorem 3.6. If t > 0, we have EH0 (t) = s(t) (s(t)Σ + tI)−1 Σ + Ω, 1 − s(t) = s(t)/N tr (s(t)Σ + tI)−1 Σ + o,
(6)
DISPERSION EQUATIONS FOR INFINITE GRAM MATRICES
101
where Ω is a symmetric, positive, semidefinite infinite-dimensional matrix such that
√ 1 + gτ √ , Ω ≤ a kτ 2 g γ + √ N 1 + gτ + g 2 τ √ 2 2 √ |o| ≤ a kτ g γ + , N √ √ where (and in the following) τ = M /t, g = b/ M , and a is a numeric coefficient. Proof. Let us multiply the first identity in (3) by xm xTm from the right and calculate the expectation. We find that ER0 S = ER0m xm xTm − ER0m xm xm R0 xm xTm /N = = ER0m Σ − Eψm R0m xm xTm .
(7)
The quantity E ψm = E tr R0 S/N = EN −1 tr H0 = 1 − s. Denote δm = ψm −Eψm . We substitute in (7) R0 S = H0 , ψm = 1−s + δm and obtain EH0 = sER0m Σ − ER0m xm xTm δm . (8) Now in the first term of the right-hand side of (8), we replace the matrix R0m by R0 . Taking into account that |s| ≤ 1 + b/t, we conclude that the contribution of the difference R0m − R0 in the first term of the right-hand side of (8) for some e is not greater in norm than 2 /N ≤ |seT R0m xm xTm R0 Σe|/N ≤ |s|d1 E|vm um |/N ≤ |s|Evm √ ≤ (1 + b/t) M d1 /N t2 .
The second term of right-hand side of (8) in norm is not greater than √ T 2 (xT e)2 Eδ 2 ≤ a M ω/t, E|vm (xm e)δm | ≤ Evm m m where ω is defined by (4). We obtain the upper estimate in norm √ 1 + gτ √ ). EH0 − sER0 Σ ≤ a k τ 2 (g γ + √ N The first statement of our theorem is proved.
102
3. SPECTRAL THEORY
Further, the first term of the right-hand side of (8) equals sER0 Σ plus the matrix Ω1 , for which |N −1 tr Ω1 | = a|ExTm R0m ΣR0 xm /N | ≤ √ ≤ |s|d1 E|um vm |/N ≤ (1 + b/t)d1 M /N t2 . The second term of the right-hand side of (8) is the matrix Ω2 , for which similarly |N −1 tr Ω2 | ≤ N −1 E|(xTm R0m xm )δm | ≤ √ 2 ≤ ab ≤ N −1 t−2 E(x2m )2 Eδm kω/t. Combining these two upper estimates we obtain the theorem statement. Special case 1: pass to finite-dimensional variables Let for some n, d1 ≤ d2 ≤ . . . ≤ dn > 0, and for all m > n, dm = 0. Denote by y the Kolmogorov ratio y = n/N . Consider the spectral functions ∼
∼
∼
H 0 = H 0 (z) = (I + zS)−1 ,
∼
∼
h = h(z) = n−1 tr H 0 (z). ∼
∼
def
Note that the matrix R0 (z) = z −1 H 0 (z −1 ), and s(z) = 1 − y + yh(z) = s(t), where t = 1/z > 0. The parameter γ introduced in Section 3.1 is not greater than the parameter γ in Section 3.2. By Lemma 3.3 from [11] (in the present notations), we obtain that var ψm ≤ 2b2 /t2 (γ + M/N t2 ), that strengthens the √ estimates in Lemma 3.8 owing to the absence of the coefficient k ≥ 1. The dispersion equations in Sections 3.1 and 3.2 have the form ∼
∼
∼
EH 0 (z) = (I + z s(z)Σ)−1 + Ω = ∼ = s(z) = 1 − y + N −1 tr (I + zS)−1 = ∼ ∼ = 1 − y + N −1 tr(I + z s(z)Σ)−1 + o, ∼ ∼ ∼ √ Ω ≤ aτ 2 (y γ + 1/N ) and | o| = yΩ with the remainder terms different by lack of the coefficient k ≥ 1. Thus, Theorem 3.6 is a generalization of Theorem 3.1 and also of limit theorems proved in monographs [21–25].
DISPERSION EQUATIONS FOR SAMPLE COVARIANCE
103
Dispersion Equations for Sample Covariance Matrices Let the standard sample covariance matrix be of the form C=N
−1
N
(xm − x)(xm − x) , T
x=N
−1
m=1
N
xm ,
m=1
¯ is the where xm are infinite-dimensional sample vectors and x T sample average vector. Note that C = S − x x . We use this relation to pass from spectral functions of S to spectral functions of C. Remark 1. Ex2 = b, E(x2 )2 ≤ (2k + 1) b2 . Indeed, the first equality is obvious. Next we find E(x2 )2 = b2 + var x2 = b2 + 2N −2 (1 − N −1 )trΣ2 . Here 2
trΣ =
∞
[E(xi xj ) ] ≤ 2
∞
E(xi xj )2 = E(x2 )2 ≤ kd2 .
i,j=1
i,j=1
The required inequality follows. Denote R = R(t) = (C + tI)−1 , −1
T
Φ = Φ(t) = x (S + tI)
H(t) = R(t)C, x,
−1
T
Ψ = Ψ(t) = x (C + tI) T
V = V (t) = e R0 x,
x,
U = U (t) = eT R x.
We have the identities (1 + Ψ)(1 − Φ) = 1, U = V + U Φ. It is easy to check that the positive Φ ≤ 1. Lemma 3.9. If t > 0, then √ |EV | ≤ M ω/t2 , 2
√ var V ≤ a
where a is a numeric coefficient.
√ M (1 + kM b/t2 ), N t2
104
3. SPECTRAL THEORY
Proof. Note that for each m = 1, 2, . . ., N , we have Evm = 0 and EV = Eum = −Evm (ψm − Eψm ). It follows that √ 2 var ψm ≤ a M ω/t2 . |EV |2 ≤ Evm Further, we estimate the variance of V using Lemma 3.2. Denote ∼ x = x−N −1 xm , where xm is the vector excluded from the sample. Using the first of equations (4), we rewrite V in the form ∼
∼
V = eT R0m x + eT R0 xm /N − eT R0 xm xTm R0m x/N, where the first summand does not depend on xm . In view of Lemma 3.2, var V ≤ N
−2
N
2 Evm (1 − wm )2 ≤ 2N −1
4 (1 + Ew 4 ), Evm m
m=1 ∼
∼
4 ≤ M E(x2 )2 / where by definition wm = xTm R0m x. Note that Ewm 4 2 4 t ≤ 3M kb /t . We obtain the statement of Lemma 3.9.
Lemma 3.10. If t > 0, the spectral norm kτ B E(R(t) − R0 (t)) ≤ a , Aγ + √ t N 1 E tr (H(t) − H0 (t)) ≤ b/N t, N where A = g 2 τ 2 , B = 1 + g + gτ + g 2 τ 4 , and the coefficient a is a number. Proof. By definition R − R0 = R x xT R0 , and there exists a nonrandom vector e such that E(R−R0 ) = E(eT R x xT R0 e) = EU V. Substitute U = V + U Φ and find that √ √ |EU V | = |EV 2 + EU V Φ| ≤ EV 2 + var V EU 2 Φ2 . Here Φ ≤ 1, EU 2 ≤ Ex2 /t2 . By Lemma 3.9, the expression in the right-hand side is not greater than √ (EV )2 + var V + var V b/t2 .
LIMIT SPECTRAL EQUATIONS
105
Substituting upper estimates of V , and var V from Lemma 3.9, and ω from (4), we arrive at the statement of Lemma 3.5. Theorem 3.7. If t > 0, then EH(t) = s(t)(s(t)Σ + tI)−1 Σ + Ω, √ √ √ where Ω ≤ kτ (A γ + B/ N ), A = gτ (1 + gτ ) γ, B = (1 + √ √ g) (1 + τ ) + g 2 τ 4 , and τ = M /t, g = b/ M . Proof. We have H0 (t) − H(t) = t (R(t) − R0 (t)), where the latter difference was estimated in Lemma 3.10. Combining upper estimates of the remainder terms in Lemma 3.10 and Theorem 3.10, we obtain the statement of Theorem 3.7. One can see that the principal parts of expectation of matrices H(t) and H0 (t) do not differ in norm asymptotically. Limit Spectral Equations Now we formulate special asymptotic conditions preserving the main idea of the Kolmogorov asymptotics: the sum of variances of all variables increases with the same rate as sample size. Consider a sequence
P = (F, M, Σ, k, γ, X, S)N ,
N = 1, 2, . . .
of problems of the investigation of infinite-dimensional matrices Σ =cov(x, x) by sample covariance matrices S of the form (3) calculated over samples X of size N from infinite-dimensional populations with the parameters M and γ defined above (we omit the subscripts N in arguments of P). Consider the empiric distribution function of eigenvalues di of the matrix Σ G0N (v)=N −1
∞
di · ind(di < v).
(9)
i=1
Assume the following. A. For each N , the parameter M < c1 , where c1 does not depend on N .
106
3. SPECTRAL THEORY
B. As N → ∞, the parameter k < c2 , where c2 does not depend on N . def C. As N → ∞, b = bN = trΣ/N → β > 0. D. As N → ∞, the quantities γ = γN → 0. E. Almost for all v > 0, the limit exists G0 (v) = lim G0N (v). N →∞
Obviously (as in the above), G0N√(0) = 0, and for almost all N , = G0 (c3 ) = β, where c3 = M . Denote (as in the above) R0 (t) = (S + tI)−1 , H0 (t) = R0 (t)S, s(z) = sN (z) = 1 − EN −1 trH0 (z), and H(t) = R(t)C.
G0N (c3 )
Theorem 3.8. Under Assumptions A–E for z = t > 0 and for Im z = t > 0 as N → ∞, the convergence holds def
s(z) = sN (z) = 1 − EN −1 trH0 (z) → σ(z), 1 − EN −1 trH(z) → σ(z),
(10)
where the function σ(z) = 1 − σ(z)
1 dG0 (v). σ(z)v + z
(11)
Proof. Let us rewrite the second of the equations (5) introducing the distribution function G0N (v): 1 = s(z) + s(z)
(s(z)v + t)−1 dG0N (v) + oN ,
(12)
where the remainder term oN vanishes as N → ∞. Here the function under the sign of the integral is continuous and bounded, and the limit transition is possible G0N (v) → G0 (v). The right-hand side of (12) with the accuracy up to oN depends monotonously on s = sN and for each N uniquely defines the quantity sN (t);
LIMIT SPECTRAL EQUATIONS
107
one can conclude that for each t > 0 as N → ∞, the limit exists σ(t) = lim sN (t). We obtain the first assertion of the theorem. The second assertion of Theorem 3.8 follows from Lemma 3.10. Now consider empirical spectral functions of the matrices S and C: def
G0N (u) = EN −1
∞
λi · ind(λi < u),
u > 0,
i=1
where λi are eigenvalues of S, and def
GN (u) = EN −1
∞
λi · ind(λi < u),
u > 0,
i=1
where λi are eigenvalues of C. Obviously, G0N (0) = GN (0) = 0, and for all u > 0, we have G0N (u) ≤ b, GN (u) ≤ b, and we find that s(z) = sN (z) = 1 − (u + z)−1 dG0N (u). (13) From (13) it follows that the function σ(z) is regular for z > 0 and allows analytical continuation in the region outside of the half-axis z < 0. We assume that for all z not lying on the half-axis z ≤ 0, the H¨older condition is valid |σ(z) − σ(z )| ≤ a |z − z |δ ,
(14)
where a, δ > 0. Theorem 3.9. Let Assumptions A–E and (14) be valid. Then, 1. there exists a monotonously increasing function G(u) ≤ β such that as N → ∞ almost everywhere for u > 0, the convergence to a common limit holds G0N (u) → G(u), GN (u) → G(u);
108
3. SPECTRAL THEORY
2. for all z not lying on the half-axis z ≤ 0, 1 − σ(z) =
1 dG(u); u+z
(15)
3. for u > 0 almost everywhere G (u) = π −1 Im σ(−u + iε), ε→+0
u > 0.
(16)
Proof. Set G0N (u) = bFN (u), where FN (u) is some monotonous function with the properties of a distribution function. Note that for fixed z = −u + iε, where u > 0, ε > 0, the quantity Im s(z)/bπ presents an ε-smoothed density FN (u). From the sequence of functions {FN (u)}, we extract any two convergent subsequences FNν (u) → F ν (u), ν = 1, 2. Let u > 0 be a continuity point of F 1 (u) and of F 2 (u) as well. Then, smoothed densities of distributions FN1 (u) and FN2 (u) converge as N → ∞ and converge to a common limit 1 Im σ(−u + iε) du . F (u, ε) = (17) πβ In view of the H¨ older condition for each u > 0 and ε → +0, there exists the limit σ(−u) = lim σ(−u + iε) and the limit F (u) = lim F (u, ε) = lim ε→+0
lim FN (u).
ε→+0 N →∞
(18)
Denote G(u) = βF (u). This is just the required limit for GN (u). Further, for z = −u+iε, u, ε > 0, then the functions f (z) = −E Im N −1 tr H(z)/bπ present smoothed densities for GN (u)/b. By Assumption E and (9), these functions converge as N → ∞ and converge to the derivative of F (u, ε). As ε → +0, Im f (z) → G (u). Equation (15) follows from (13). Theorem 3.9 is proved. Equations (13) and (15) establish a functional dependence between spectra of the matrices Σ and of S and C. Let the region S = {u : u > 0 & G (u) > 0} be called the region of (limit) spectrum of matrices S.
LIMIT SPECTRAL EQUATIONS
109
Theorem 3.10. Let Assumptions A–E hold, the condition (14) is valid and, moreover, as N → ∞, the quantities d1 → α > 0. Then in the spectrum region for real z = −u > 0, u ∈ S, the function σ(z) is defined and the following relations are valid.
1 dG0 (v) = 1. |σ(z)v − u|2 v 2 2. |σ(z)| dG0 (v) = 1. |σ(z)v − u|2
1. u
3. u ≤ α |σ(z)|2 . β β 1 4. 1 − ≤ Re ≤1+ . u σ(z) u √ 5. u ≤ ( α + β)2 .
Proof. Let z = −u + iε, u, ε > 0. Denote σ = σ(z), σ0 (z) = Re σ(z), σ1 = σ1 (z) = Im σ(z), Ik = Ik (z) =
vk dG0 (v). |vσ(z) + z|2
σ0 =
(19)
In view of the H¨ older condition, the functions σ(z) and Ik (z), k = 0, 1 are bounded and continuous for each u > 0 and ε → +0, and are defined and continuous for z = −u < 0. From (15), one can see that the sign of σ1 (z) coincides with the sign of ε > 0. Let us fix some u ∈ S, σ1 = σ1 (−u) > 0. The denominator under the integral sign in (19) is not smaller than u2 σ12 /|σ|2 and does not vanish for any fixed u > 0 and ε → +0. We equate the imaginary parts in the equality (11). It follows that 1 − uI0 = I0 σ0 ε/σ1 . As ε → +0, the quantity uI0 → 1. We obtain the limit relation in the first assertion of the theorem. Let us divide both parts of (11) by σ and compare the imaginary parts. We find that I1 − |σ|−2 = I0 ε/σ1 → 0. The second statement of our theorem follows.
110
3. SPECTRAL THEORY
Note that the integration region in (19) is bounded by a segment [0, α]. Consequently, I1 ≤ αI0 . Substituting limit values of I0 and I1 , we obtain Statement 3. Let us apply the Cauchy–Bunyakovskii inequality to equation (11). It follows that for z = −u < 0 1 − σ(z) 2 β 1 ≤β dG0 (v) = , σ(z) 2 |σ(z)v + z| u We get Statement 4 of our theorem. Consider these inequalities on the boundary at the point u ∈ S, where σ1 = we have the following 0. At this−1point for2 u ≥ β, inequality α/u ≥ |σ| ≥ |σ0 /σ | ≥ 1 − β/u. We arrive at the fifth statement of the theorem. Theorem 3.10 is proved. Model 1. Consider infinite-dimensional vectors x and let sample size N = 1, 2, . . . , while only n eigenvalues of Σ are different from 0 so that the ratio n/N → y > 0 as N → ∞. This case presents the increasing-“dimension asymptotics” that is the basic approach of the well-known spectral theory of random matrices by V. L. Girko and of the theory of essentially multivariate analysis [71]. Now we consider a special case when for each N (and n), all n nonzero eigenvalues of the matrix Σ coincide: α = d1 = d2 = · · · = dn . Then, the function G0 (v) = yα ind(v < α), and we obtain from (15) as ε → +0: (1 − σ(z))(σ(z)α − u) = yασ(z), where z = −u < 0. This is a quadratic equation in σ = σ(z). Let us rewrite it in the form Aσ 2 + Bσ + C = 0, where A = α, B = α(1 − y) + u, C = u. Starting from (15), we find the limit derivative √ 1 Im B 2 − 4AC = G (u) = π −1 Im σ(z) = 2πα 1 = 2πα (u2 − u)(u − u1 ), u > 0,
LIMIT SPECTRAL EQUATIONS
111
√ where u2,1 = α(1 ± y)2 . For y = 1, the density of the limit distribution F (u) of nonzero eigenvalues of S equals F (u) = G (u)/yu, while for y > 1, the function F (u) has a jump (1 − y)α at the point u = 0. This limit spectrum was first found under a finite-dimensional setting by Marchenko and Pastur in [43], 1967. Model 2. We make Model 1 more complicated by adding an infinite “tail” of identical infinitesimal eigenvalues to n = n1 nonzero eigenvalues of Σ. For fixed N and increasing n = n(N ), m = m(N ), let d1 = d2 = . . ., dn = α > 0, and dn+1 = dn+2 = dn+m = δ > 0, so that the ratios n/N → y1 > 0, m/N → y2 > 0, and b1 = y1 α → β1 > 0 and b2 = y2 δ → β2 > 0 while δ → 0. For s = s(t), t > 0, from equation (15), we obtain the relation 1 − s(t) β1 β2 = + + oN . s(t) αs(t) + z δs(t) + z
(20)
As N → ∞, we have s = s(t) → σ(t), the remainder term oN vanishes, and equation (20) passes to a quadratic equation with respect to σ(t). Rewrite it in the form Aσ 2 + Bσ + C = 0, where A = α(β2 + z), B = z(β1 + β2 + z − α), C = −z 2 , z > 0. We perform the analytical continuation of σ(z) to complex arguments z = −u + iε, u, ε > 0. Solving the Stilties equation we obtain u2 G(u2 ) − G(u1 ) = lim π −1 Im σ(−u + iε) du = ε→+0 u1 u2 √ 2 B − 4AC −1 du. (21) = lim π ε→+0 2|A| u1 First, we study possible jumps of the function G(u). They are produced when the coefficient A vanishes, i.e., when z = −β2 . The integration in the vicinity of u = β2 > 0 in (21) reveals a jump (2πd1 )−1 Re (B 2 − 4AC) that equals β2 (y1 − 1)/2 for y1 > 1 and 0 for y1 ≤ 1. We conclude that for y1 < 1, the function G(u) is continuous everywhere for u ≥ 0, and for y1 > 1, there is a discontinuity at the point u = β2 with a jump equal to β2 (y1 − 1).
112
3. SPECTRAL THEORY
The continuous part of G(u) is defined by equation (21). In the case of Model 2, the limit spectral density of S equals ⎧ ⎨
1 (u2 − u)(u − u1 ), if u1 ≤ u ≤ u2 , u−1 G (u) = 2πα|u − β2 | ⎩ if u < u1 , u > u2 , 0, √ where u2, 1 = β2 + α(1 ± y)2 , y = β1 /α. The influence of a “background” of small variances shifts the spectrum by the quantity β2 . It is easy to examine that these boundaries stay within the upper bound, established by Statement 5 of Theorem 3.10. Model 3. Let eigenvalues of the matrix Σ be λk = a2 /(a2 + N −2 k 2 ), a > 0, k = 0, 1, 2, . . . . As N → ∞ b=
∞ 1 λk → N k=0
∞ 0
a2 πa dt = = β. 2 2 a +t 2
The function ∞ def def 1 λk ind(λk < v) → G0 (v) = a · arctg N k=0 0 and dGdv(v) = √ a , 0 < v < 1. 2 v(1−v)
G0N (v) =
v 1−v ,
Let u > 0, σ = σ(−u). The dispersion equation (15) is transformed to β σ−1 1 2 =a dt = . (22) 2 2 σ t + a (u − σ) u(u − σ) The quantity Im σ(z) is connected with the spectral function G (u) by (15). Therefore, outside of the spectrum region, the function σ(−u) is real. This function monotonously decreases with the increase of u > 0 and σ(−u) ≥ 1. From (22), it follows that u ≥ σ(−u). For small u > 0, equation (22) has no solution. But this equation is equivalent to the cubic equation ϕ(σ, u) = a3 σ 3 + a2 σ 2 + a1 σ 1 + a0 = 0,
(23)
LIMIT SPECTRAL EQUATIONS
113
where a3 = u, a2 = β 2 − u2 − 2u, a1 = 2u2 + u, and a0 = −u2 , solvable in complex numbers. We conclude that for small u > 0, the equation (23) has only complex roots. For Im σ > 0, G (u) > 0. Consequently, small u > 0 are included in the spectrum region S. Denote σ0 = Re σ, σ1 = Im σ. In the spectrum region, the imaginary part of ϕ(σ, u) is proportional to σ1 > 0 with the coefficient a3 (3σ02 − σ12 ) + 2a2 σ0 + a1 = 0.
(24)
By Theorem 3.10, the spectrum region is bounded, and there exists an upper spectrum boundary uB > 0. This quantity may be calculated by solving simultaneously the system of two equations (23) and (24) with σ1 = 0. From Theorem 3.10 and equation (22), we obtain the inequality 1 + β 2 ≤ uB ≤ (1 + β)2 . One can prove that the spectral density G (u) has vertical derivatives at the boundary u = uB .
114
3. SPECTRAL THEORY
3.3.
NORMALIZATION OF QUALITY FUNCTIONS
In this section, following [72], we prove that in case of high dimension of variables, most of standard rotation invariant functionals measuring the quality of regularized multivariate procedures may be approximately, but reliably, evaluated under the hypothesis of population normality. This effect was first described in 1995 (see [72]). Thus, standard quality functions of regularized procedures prove to be approximately distribution free. Our purpose is to investigate this phenomenon in detail. We 1. study some classes of functionals of the quality function type for regularized versions of mostly used linear multivariate procedures; 2. single out the leading terms and show that these depend on only two moments of variables; 3. obtain upper estimates of correction terms accurate up to absolute constants. We restrict ourselves with a wide class of populations with four moments of all variables. Let x be an observation vector in n-dimensional population S from a class K04 (M ) of populations with Ex = 0 and the maximum fourth moment M = sup E(eT x)4 > 0,
(1)
where the supremum is calculated over all e, and e (here and in the following) are nonrandom vectors of unit length. Denote Σ = cov(x, x). We consider the parameter γ = sup [xT Ω x/n]/M,
(2)
Ω=1
where Ω are nonrandom, symmetrical, positive semidefinite n × n matrices.
NORMALIZATION OF QUALITY FUNCTIONS
115
Let X = (x1 , . . ., xN ) from S be a sample from S of size N . We consider the statistics ¯= x
1 N
C=
N
xm ,
m=1 N 1 (xm N m=1
S=
1 N
N m=1
xm xTm ,
¯ )(xm − x ¯ )T . −x
Here S and C are sample covariance matrices (for known and for unknown expectation vectors). Normalization Measure Definition 1. We say that function f : Rn → R1 of a random vector x is ε-normal evaluable (in the square mean) in a class of n-dimensional distributions S, if for each S with expectation vector a = Ex and covariance matrix Σ = cov(x, x), we can choose a normal distribution y ∼ N(a, Σ) such that E(f (x) − f (y))2 ≤ ε. We say that function f : Rn → R1 is asymptotically normal evaluable if it is ε-evaluable with ε → 0. √ Example 1. Let n = 1, ξ ∼ N(0, 1), x = ξ 3 / 15. Denote the distribution law of x by S. Then, the function f (x) = x is ε-normal evaluable (by normal y = ξ) in S with ε = 0.45. Example 2. Let S be an n-dimensional population with four moments of all variables and let f (t) be a continuous differentiable function of t ≥ 0 that has a derivative not greater b in absolute value. Then, f (¯ x2 ) is ε-normal evaluable with ε = c/N. Example 3. In Section 3.1, the assumptions were formulated, which provide the convergence of entries for the resolvents of sample covariance matrices S and C as n → ∞, N → ∞, and n/N → λ > 0. Limit entries of these resolvents prove to be depending only on limit values of spectral functions of the matrix Σ. By the above definition, the entries of these resolvents are asymptotically normal evaluable as n → ∞.
116
3. SPECTRAL THEORY
Spectral Functions of Sample Covariance Matrices We consider the resolvent-type matrices H0 = H0 (t) = (I + A + tS)−1 and H = H(t) = (I + A + tC)−1 , where (and in the following) I denotes the identity matrix and A is a positively semidefinite symmetric matrix of constants. Define spectral functions h0 (t) = n−1 trH0 (t), h(t) = n−1 tr H(t), s0 (t) = 1 − y + yh0 (t), s(t) = 1 − y + yh(t), where y = n/N. The phenomenon of normalization arises as a consequence of properties of spectral functions that were established in Section 3.1. We use theorems that single out the leading parts of spectral functions for finite n and N . The upper estimates of the remainder terms are estimated from the above by functions of n, N, M, γ, and t ≥ 0. Our resolvents H0 (t) and H(t) differ from those investigated in Section 3.1 by the addition of nonzero matrix A. The generalization can be performed by a formal reasoning as follows. Note that the linear transformation x = Bx, where B 2 = (I + A)−1 , takes H0 to the form H0 = B(I + tS )−1 B, where S = N −1
N
x x , T
xm = B xm ,
m = 1, 2, . . ., N.
m=1
It can be readily seen that M = sup E(eT x )4 ≤ M, and the products M γ ≤ M γ, where γ is the quantity (2) calculated for x . Let us apply results of Section 3.1 to vectors x = Bx. The matrix elements of our H0 can be reduced to matrix elements of the resolvents of S by the linear transformation B with B ≤ 1. The remainder terms for x = Bx are not greater than those for x. The same reasoning also holds for the matrix H. A survey of upper estimates obtained in Sections 3.1 shows that all these remain valid for the new H0 and H. Let us formulate a summary of results obtained in Section 3.1, which will be a starting point for our development below. To be more concise in estimates, we denote by pn (t) polynomials of a fixed degree with respect to t with numeric coefficients and ω = γ + 1/N .
NORMAL EVALUATION
117
Lemma 3.11. (corollary of theorems from Section 3.1). If t ≥ 0 and S ∈ K4 (M ), then 1. EH0 (t) = (I + A + ts0 (t)Σ) + Ω0 , where Ω0 ≤ p3 ω; 2. var(eT H0 e) ≤ a M t3 /N ; 3. tE¯ xT H0 (t)¯ x = 1 − s0 (t) + o,
|o|2 ≤ p5 (t) ω;
4. var(t¯ xT H0 (t)¯ x) ≤ a M t2 /N ; 5. EH(t) = (I + A + ts0 (t)Σ)−1 + Ω, where Ω2 ≤ p6 (t) ω; 6. var (eT H(t)e) ≤ M t2 /N ; 7. ts0 (t) E¯ xT H(t)¯ x = 1 − s0 (t) + o,
|o| ≤ p2 (t) ω;
8. var[t¯ x H(t)¯ x] ≤ p6 (t)/N. T
Normal Evaluation of Sample-Dependent Functionals We study rotation invariant functionals including standard quality functions that depend on expectation value vectors, sample means, and population and sample covariance matrices. For sake of generality, let us consider a set of k n-dimensional populations S1 , S2 , . . ., Sk , with expectation vectors Ex = ai , covariance matrices cov(x, x) = Σi for x form Si , moments Mi of the form (1), and the parameters γi of the form (2), i = 1, 2, . . ., k. To be more concise, we redefine the quantities N, y, τ, and γ: Mi ti , (3) N = min Ni , y = n/N, γ = max νi /Mi , τ = max i
i
i
i = 1, 2, . . ., k. Definition 1 will be used with these new parameters. Let Xi be independent samples from Si of size Ni , i = 1, 2, . . ., k. Denote yi = n/Ni , ¯ i = Ni−1 (xm − ai )(xm − ai )T , xm , Si = Ni−1 x m
m
Ci = Ni−1
¯ )(xm − x ¯ )T , (xm − x m
118
3. SPECTRAL THEORY
where m runs over all numbers of vectors in Xi , and i = 1, 2, . . ., k. We introduce more general resolvent-type matrices H0 = (I + t0 A + t1 S1 + · · · + tk Sk )−1 , H = (I + t0 A + t1 C1 + · · · + tk Ck )−1 , where t0 , t1 , . . ., tk ≥ 0 and A are symmetric, positively semidefinite matrices of constants. We consider the following classes of functionals depending on ¯ A, xi , Si , Ci , and ti , i = 1, 2, . . ., k. The class L1 = {Φ1 } of functionals Φ1 = Φ1 (t0 , t1 ) of the form (k = 1) ¯ T1 H0 x ¯ 1 , n−1 trH, eT He, t1 x ¯1H x ¯1. n−1 tr H0 , eT H0 e, t1 x def
Note that matrices tH = Cα = (C + αI)−1 with α = 1/t may be considered as regularized estimators of the inverse covariance matrix Σ−1 . The class L2 = {Φ2 } of functionals Φ2 = Φ2 (t0 , t1 , . . ., tk ) of the form
n−1
¯ i H0 x ¯j , n−1 trH0 , eT H0 e, ti x T ¯iH x ¯ j , i, j = 1, 2, . . ., k. tr H, e He, ti x
The class L3 = {Φ3 } of functionals of the form Φ3 = Dm Φ2 and ∂/∂t0 Dm Φ2 , where Φ3 = Φ3 (t0 , t1 , t2 , . . ., tk ) and Dm is the partial differential operator of the mth order
Dm =
∂m , ∂zi1 . . .∂zim
where zj = ln tj , tj ≥ 0, j = 0, 1, 2, . . ., k, and i1 , i2 , . . ., im are numbers from {0, 1, 2, . . ., k}. Note that by such differentiation of resolvents, we can obtain functionals with the matrices A, S, and C in the numerator. This
NORMAL EVALUATION
119
class includes a variety of functionals which are used as quality functions of some multivariate procedures, for example: ¯ j , n−1 tr(AH), ti n−1 tr(HCi H), eT HAHe, ¯ Ti H x ¯ Ti A¯ xj , ti x x ¯ j , t0 eT HAHAHe, i, j = 1, 2, . . ., k, etc. ¯ i HCi H x ti x The class L4 = {Φ4 } of functionals of the form Φ4 = Φ4 (η0 , η1 , . . ., ηk ) =
=
Φ3 (t0 , t1 , . . ., tk ) dη0 (t0 ) dη1 (t1 ) . . .dηk (tk ),
where ηi (t) are functions of t ≥ 0 with the variation not greater than 1 on [0, ∞), i = 0, 1, . . ., k, having a sufficient number of moments βj = tj |dηi (t)|, i = 1, 2, . . ., k, and the functions Φ3 are extended by continuity to zero values of arguments. This class presents a number of functionals constructed by arbitrary linear combinations and linear transformations of regularized ridge estimators of the inverse covariance matrices with different ridge parameters, for example, such as sums of αi (I + ti C)−1 and functions n−1 tr(I + tC)−k , k = 1, 2, . . ., exp(−tC) and other. Such functionals will be used in Chapter 4 to construct regularized approximately unimprovable statistical procedures. The class L5 = {Φ5 } of functionals Φ5 = Φ5 (z1 , z2 , . . ., zm ), where the arguments z1 , z2 , . . ., zm are functionals from L4 , and the Φ5 are continuously differentiable with respect to all arguments with partial derivatives bounded in absolute value by a constant c5 ≥ 0. Obviously, L1 ∈ L2 ∈ L3 ∈ L4 ∈ L5 . Theorem 3.11. Functionals Φ1 ∈ L1 are ε-normal evaluable in the class of populations with four moments of all variables with def
ε = ε1 = p10 ω. ∼
Proof. Let S denote a normal population N(0, Σ) with a matrix ∼
Σ = Σ1 = cov(x, x) that is identical in S and S. We set N = N1 , y = y1 = n/N, t0 = 1, t1 = t.
120
3. SPECTRAL THEORY ∼
∼
Let E and E denote the expectation operators for x ∼ S and
S, respectively, and by definition, let h0 (t) = En−1 trH0 ,
s0 (t) = 1 − y + EN −1 tr(I + A)H0 , ∼ ∼ h0 (t) = En−1 trH0 , s 0 (t) = 1 − y + EN −1 tr(I + A)H0 , ∼ ∼ G0 = (I + A + ts0 (t)Σ)−1 , G0 = (I + A + t s 0 (t)Σ)−1 .
∼
∼
Statement 1 of Lemma 3.11 implies that ∼
∼
|h0 (t) − h0 (t)|(1 + tN −1 trG0 ΣG0 ) ≤ o1 , where o1 is defined by Lemma 3.11. The trace in the parentheses is non-negative. From Statement 2 of Lemma 3.11, it follows that ∼
var(n−1 tr H0 ) ≤ o2 both in S1 and in S1 . We conclude that n−1 tr H0 is ε-normal evaluable with ε ≤ 4σ12 + 2σ2 ≤ p6 (t) ω 2 . Theorem 3.2 implies that ∼
∼
∼
EH0 − EH0 ≤ t|s0 (t) − s 0 (t)|G0 ΣG0 + 2σ1 , ∼ √ ∼ where |s0 (t) − s 0 (t)| ≤ 2o1 y and G0 ΣG0 ≤ M . Thus, the norm in the left-hand side is not greater than 2(1 + τ y)o1 . From Lemma 3.11, it follows that var(eT H0 e) ≤ o2 both for S1 and ∼
for S1 . We conclude that eT H0 e is ε-normal evaluable with ε = p8 (t) ω 2 . Further, by the Statement 8 of Lemma 3.11, we have ∼
∼
¯ | − |tE¯ ¯ | ≤ |s0 (t) − s 0 (t)| + 2|o3 |, xT H0 x |tE¯ xT H0 x where the summands in the right-hand side are not greater than p3 (t) ω and p3 (t) ω, respectively. In view of Statement 7 of Lemma ∼
¯ ) ≤ p2 (t)/N both for S1 and for S1 . We 3.11, we have var(t¯ xT H 0 x T ¯ is ε-normal evaluable with ε = p6 (t) ω 2 . conclude that t¯ x H0 x
NORMAL EVALUATION
121
Now we define h(t) = n−1 tr H, ∼
∼
s(t) = 1 − y + EN −1 tr(I + A)H,
h(t) = En−1 trH,
∼
∼
s(t) = 1 − y + EN −1 trH,
G = (I + A + ts(t)Σ)−1 ,
∼
∼
G = (I + A + t s(t)Σ)−1 .
From Lemma 3.11, it follows that ∼
∼
∼
|h(t) − h(t)| ≤ tN −1 tr(GΣG) |s0 (t) − s 0 (t)| + Ω ≤ p3 (t) ω. By Statement 6 of Lemma 3.11, we have var(n−1 trH) ≤ o2 /N both ∼
in S1 and in S1 . We conclude that n−1 trH is ε-normal evaluable with ε = p6 (t) ω. From Statement 5 of Lemma 3.11, it follows that ∼
∼
∼
EH − EH ≤ t|s0 (t) − s 0 (t)|GΣG + 2Ω ≤ p3 (t) ω. By Statement 6 of Lemma 3.11, we have var(eT He) ≤ p5 (t)/N ∼
both in S1 and in S1 . It follows that eT He is ε-normal evaluable with ε = p6 (t) ω 2 . Further, using Statements 1 and 7 of Lemma 3.11, we obtain ∼ that min(s0 (t), s 0 (t)) ≥ (1 + τ y)−1 , and ∼
∼
¯ | − |tE¯ ¯ | ≤ |1/s0 (t) − 1/ s 0 (t)| + o(t), |tE¯ xT H x xT H x where |o(t)| ≤ p5 (t)ω. The first summand in the right-hand side is not greater than p5 (t)ω. From Lemma 3.11, it follows that ¯ ) ≤ p6 (t)/N . We conclude that t¯ ¯ is ε-normal var(t¯ xT H x xT H x evaluable with ε = p10 (t)ω. This completes the proof of Theorem 3.11. Theorem 3.12. Functionals Φ2 ∈ L2 are ε-normal evaluable with ε = ε2 = k 2 ε1 . ∼
Proof. We consider normal populations Si = N(ai , Σi ) with ai = Ex and Σi = cov(x, x), for x in Si , i = 1, 2, . . ., k. Let Ei be the expectation operator for the random vectors x1 ∼ S1 , . . .,
xi ∼ Si ,
∼
xi+1 ∼ Si+1 , . . .,
∼
xk ∼ Sk ,
122
3. SPECTRAL THEORY
where the tilde denotes the probability distribution in the corresponding population, i = 1, 2, . . ., k − 1. Let E0 denote ∼
the expectation when all populations are normal, xi ∼ Si , i = 1, 2, . . ., k, and let Ek be the expectation operator for xi ∼ Si , i = 1, 2, . . ., k. Clearly, for each random f having the required expectations, E0 f − Ef =
k
(Ei−1 f − Ei f ).
i=1
Let us estimate the square of this sum as a sum of k 2 terms. We set f = Φ2 for the first three forms of the functionals Φ2 (depending on H0 ). Choose some i : 1 ≤ i ≤ k. In view of the independence of xi chosen from different populations, each summand can be estimated by Theorem 3.11 with H0 = H0i = (I + B i + ti Si )−1 , where k i−1 tj S j tj S j + B i = I + t0 A + j=1
j=i+1
is considered to be nonrandom for this i, i = 1, 2, . . ., k. By Theorem 3.11 each summand is ε-normal evaluable with ε = ε1 . We conclude that (Ef0 − Ef )2 ≤ k 2 ε1 . Similar arguments hold for f depending on H. This completes the proof of Theorem 3.12. Theorem 3.13. Functionals Φ3 ∈ L3 are ε-normal evaluable 1/2(m+1) with ε3 = amk (1 + A)2 (1 + τ y) ε2 , where amk are numerical constants and τ and y are defined by (3). Proof. Let X denote a collection of samples from populations ∼
(S1 , S2 , . . ., Sk ) and X a collection of samples from normal popu∼
∼
∼
lations (S1 , S2 , . . ., Sk ) with the same first two moments, respec∼
∼
∼
tively. Let us compare Φ3 (X) = Dm Φ2 (X) and Φ3 (X) = Dm Φ2 (X), ∼
where Φ2 and Φ2 are functionals from L2 . Note that ∂H0 = −H0 ti Si H0 ∂zi
and
∂H = −Hti Ci H, ∂zi
NORMAL EVALUATION
123
where zi = lnti , ti > 0, i = 1, 2, . . ., k. The differential operator Dm transforms H0 into sums (and differences) of a number of matrices Tr = H0 ti Si H0 . . .tj Sj H0 , i, j = 0, 1, 2, . . ., k with different numbers r of the multiples H0 , 1 ≤ r ≤ m + 1, T1 = H0 . Note that Tr ≤ 1, as is easy to see from the fact that the inequalities H0 Ω1 ≤ 1 and H0 ≤ 1 hold for H0 = (I + Ω1 + Ω2 )−1 and for any symmetric, positively semidefinite matrices Ω1 and Ω2 . Now, ∂/∂zi Tr is a sum of r summands of the form Tr+1 plus r − 1 terms of the form Tr , no more that 2r − 1 summands in total. We can conclude that each derivative ∂/∂zi Dm Φ2 is a sum of at ∼ most (2m+1)!! terms, each of these being bounded by 1 or by tj x2j ∼
for some j = 1, 2, . . ., k, depending on Φ2 . But E(tj x2 )2 ≤ τ 2 y 2 , where j = 1, 2, . . ., k, and y = n/N . It follows that 2 ∂ Dm Φ ≤ (1 + τ 2 y 2 )[(2m + 3)!!]2 E ∂zi for any i = 1, 2, . . ., k. We introduce a displacement δ > 0 of zi = ln ti being the same for all i = 1, 2, . . ., k and replace the derivatives by finite-differences. Let Δm be a finite-difference operator corresponding to Dm . We obtain Δm Φ2 = Dm Φ2 + δ
k ∂ Dm Φ2 ∂zi z=ξ, i=1
where ξ are some intermediate values of zi , i = 1, 2, . . ., k. By Theorem 3.12, the function Δm Φ2 is ε-normal evaluable with ε = ε = 2ε2 2m+1 /δ m . The quadratic difference E(Δm Φ2 − Dm Φ2 )2 ≤ ε = δ 2 k 2 (1 + τ 2 y 2 )[(2m + 3)!!]2 . def
We conclude that Dm Φ2 is ε-normal evaluable with ε = σ = 1/(2+m) ε + ε . Choosing δ = 2ε2 (1 + τ 2 y 2 )1/(2+m) , we obtain that 1/(1+m) , where the numerical coefficient a depends on m σ < aε2 and k. We have proved Theorem 3.13 for the functionals Dm Φ2 . Now consider functionals of the form ∂/∂t0 Dm Φ2 . Let us replace the derivative by a finite difference with the displacement
124
3. SPECTRAL THEORY
δ of the argument. An additional differentiation with respect to t0 and the transition to finite differences transform each term Tr into 2r summands, where r ≤ m + 1. Each summand can be increased √ by a factor of no more than (1 + A). Choose δ = σ. Reasoning similarly, we conclude that ∂/∂t0 Dm Φ2 is ε-normal evaluable √ with ε = 2(m + 1)2 (1 + A)2 σ. This completes the proof of Theorem 3.13. The next two statements follow immediately. Corollary 1. The functionals Φ4 ∈ L4 are ε-normal evaluable with ε = ε4 = ε3 . Corollary 2. The functionals Φ5 ∈ L5 are ε-normal evaluable in K4 (M ) with ε = ε5 ≤ a25 m2 ε4 . Conclusions Thus, on the basis of spectral theory of large sample covariance matrices that was developed in Chapter 3, it proves to be possible to suggest a method for reliable estimating of a number of sample-dependent functionals, including most popular quality functions for regularized procedures. Under conditions of the multiparametric method applicability, standard quality functions of statistical procedures display specific properties that may be used for the improvement of statistical problem solutions. The first of these is a decrease in variance produced by an accumulation of random contributions from a large number of boundedly dependent estimators. Under the Kolmogorov asymptotics, in spite of the increasing number of parameters, statistical functionals that uniformly depend on variables have the variance of the order of magnitude N −1 , where N is sample size. This means that standard quality functions of multiparametric procedures approach nonrandom limits calculated in the asymptotical theory. In this case, we may leave the problem of quality function estimation and say rather on the evaluation of quality functions than on estimation. The second specific fact established in this chapter is that the principal parts of quality functions prove to be dependent on only two moments of variables and are insensitive to higher ones. If the
CONCLUSIONS
125
fourth moment (1) is bounded, the remainder terms prove to be of the order of magnitude of γ + 1/N , where N is sample size, and of the measure of quadric variance γ = O(n−1 ) for boundedly dependent variables. This fact leads to a remarkable specifically multiparametric phenomenon: the standard quality functions of regularized statistical procedures prove to be approximately independent of distributions. Thus, a number of traditional methods of multivariate analysis developed previously, especially for normal populations, prove to have much wider range of applicability. Solutions of extremum problems under normality assumption retain their extremal properties for a wide class of distributions. For the first time in statistics, we obtain the possibility to construct systematically distribution-free improved and unimprovable procedures. The necessity of a regularization seems not much restrictive since we have the possibility to estimate the effect of regularization and choose best solutions within classes of regularized procedures. Actually, a worthy theory should recommend for practical usage of only obviously stable regularized procedures. It follows that the multiparametric approach opens a new branch of investigations in mathematical statistics, providing a variety of improved population-free solutions that should replace the traditional restrictively consistent not-always-stable methods of multivariate analysis. Also, it follows that, for problems of multiparametric statistics, we may propose the Normal Evaluation Principle to prove theorems first for normal distributions and then to estimate corrections resulting from non-normality using inequalities obtained above.
This page intentionally left blank
CHAPTER 4
ASYMPTOTICALLY UNIMPROVABLE SOLUTION OF MULTIVARIATE PROBLEMS The characteristic feature of multiparametric situation is that the variance of sample functionals proves to be small. This effect stabilizes quality functions of estimators and makes them independent on sampling. The observer gets the possibility not to estimate the quality of his methods, but to evaluate it, and thus to choose better procedures. Multiparametric theory suggests the special technique for systematic construction of improved solutions. It can be described as follows. 1. Dispersion equations are derived which connect nonrandom leading parts of functionals with functions, depending on estimators. These equations supply an additional information on the distribution of unknown parameters and spectral properties of true covariance matrices. 2. A class of generalized multivariate statistical procedures is considered which depend on a priori parameters and functions. To improve a statistical procedure, it suffices to choose better these parameters or function. 3. Using dispersion equations, the leading parts of quality functions are singled out and expressed, on one hand, in terms of parameters of populations (which is of theoretical interest) and, on the other hand, in terms of functions of statistics only (for applications). 4. The extremum problem is solved for nonrandom leading parts of quality functions or for their limit expressions, and extremum conditions are derived which determine approximately unimprovable or best-in-the-limit procedures.
127
128
4. ASYMPTOTICALLY UNIMPROVABLE
5. The accuracy of estimators of the best (unknown) extremum solutions is studied, and their dominating property is established. In this chapter, this technique is applied to some most usable statistical procedures.
ESTIMATORS OF LARGE INVERSE COVARIANCE MATRICES
4.1.
129
ESTIMATORS OF LARGE INVERSE COVARIANCE MATRICES
It is well known that standard linear statistical methods of multivariate analysis do not provide best solutions for finite samples and, moreover, are often not applicable to real data. Most popular linear procedures using the inverse covariance matrix are constructed by the “plugin” method in which the true unknown covariance matrix is replaced by standard sample covariance matrix. However, sample covariance matrices may be degenerate already for the observation dimension n = 2. The inversion becomes unstable for large dimension; for n > N , where N is sample size, the inverse sample covariance matrix does not exist. In these cases, the usual practice is to reduce artificially the dimension [3] or to apply some regularization by adding positive “ridge” parameters to the diagonal of sample covariance matrix before its inversion. Until [71], only heuristically regularized estimators of the inverse covariance matrices were known, and the problem of optimal regularization had no accurate solution. In this section we develop the successive asymptotical theory of constructing regular −1 of the inverse covariance matrices Σ−1 approxized estimators Σ imately unimprovable in the meaning of minimum quadratic losses −1 )2 . n−1 tr(Σ−1 − Σ In previous chapters, the systematic technique was described for constructing optimal statistical procedures under the Kolmogorov “increasing dimension asymptotics.” This technique is based on the progress of spectral theory of increasing random matrices and spectral theory of sample covariance matrices of increasing dimension developed in the previous chapter. The main success of the spectral theory of large random matrices is the derivation of dispersion equations (see Section 3.1) connecting limit spectral functions of sample covariance matrices with limit spectral functions of real unknown covariance matrices. They provide an additional information on unknown covariance matrices that will be used for the construction of improved estimators. In this chapter, first we choose a parametric family of estimators (depending on a scalar or a weighting function of finite variation)
130
4. ASYMPTOTICALLY UNIMPROVABLE
and apply the Kolmogorov asymptotics for isolating nonrandom principal part of the quadratic losses. Then, we solve the extremum problem and derive equations that determine the best in the limit parameters of this family and the best weighting function. Further, we approximate these parameters (or function) by statistics and construct the corresponding estimators of the inverse covariance matrix. Then, we prove that this estimator asymptotically dominates the chosen family. First, we present results and their discussion and then add proofs. Problem Setting Let x be an observation vector from an n-dimensional population S with expectation Ex = 0, with fourth moments of all components and a nondegenerate covariance matrix Σ = cov(x, x). A sample X = {xm } of size N is used to calculate the mean vector ¯ and sample covariance matrix x
C=N
−1
N
¯ )(xm − x ¯ )T . (xm − x
m=1
We use the following asymptotical setting. Consider a hypothetical sequence of estimation problems −1 )n }, P = {(S, Σ, N, X, C, Σ
n = 1, 2, . . . ,
where S is a population with the covariance matrix Σ = cov(x, x), −1 is an estimator Σ−1 X is a sample of size N from S, Σ calculated as function of the matrix C (we do not write the indexes n for arguments of P). Our problem is to construct the best −1 . statistics Σ We begin by consideration of more simple problem of improving estimators of Σ−1 by the introduction of a scalar multiple of C −1 (shrinkage estimation) for normal populations. Then, we consider a wide class of estimators for a wide class of populations.
SHRINKAGE FOR INVERSE COVARIANCE MATRICES
131
Shrinkage for Inverse Covariance Matrices Let K(1) be parametrically defined family of estimators of the −1 = αC −1 , where α is a nonrandom scalar. form Σ Suppose that the sequence of problems P is restricted by following conditions. 1. For each n, the observation vectors x ∼ N(0, Σ) , and all eigenvalues of Σ, are located on the segment [c1 , c2 ], where c1 > 0 and c2 do not depend on n. 2. The convergence holds n−1 tr Σ−ν → Λν , ν = 1, 2, in P. 3. For each n in P, the inequality holds N = N (n) > n + 2, and the ratio n/N → y < 1 as n → ∞. Remark 1. Under Assumptions 1–3, the limits exist M−1 = l.i.m. n−1 tr C −1 = (1 − y)−1 Λ−1 , n→∞
M−2 = l.i.m. n−1 tr C −2 = (1 − y)−2 Λ−2 + y (1 − y)−3 Λ2−1 n→∞
(here and in the following, l.i.m. denotes the limit in the square mean). Remark 2. Under Assumptions 1–3, the limits exist l.i.m. n−1 tr(Σ−1 − αC −1 )2 = R(α) = Λ−2 − 2αΛ−1 M−1 + α2 M−2 . n→∞
For the standard estimator, α = 1 and R = R(1) = y 2 (1 − y)−2 Λ−2 + y (1 − y)−3 Λ2−1 . Remark 3. Under Assumptions 1–3, the value R(α) reaches the minimum for α = αopt = (1 − y)−1 Λ−2 /M−2 = 1 − y − 2 /M yM−1 −2 and M2 R(αopt ) = Λ−2 − −1 = M−2 2 Λ2−1 (1 − y) Λ−2 = R(1). Λ−2 + y (1 − y)−1 Λ2−1 Λ2−1 + y (1 − y)Λ−2
132
4. ASYMPTOTICALLY UNIMPROVABLE
However, the parameter αopt is unknown to the observer. −1 = α Consider a class K(2) of estimators of the form Σ n C −1 , where the statistics α n as n → ∞ tends in the square mean to a constant α ≥ 0 as n → ∞. Remark 4. Under Assumptions 1–3, the convergence in the square mean holds n−1 tr(Σ−1 − α n C −1 )2 → R(α). To estimate the best limit parameter αopt , we construct the statistics α opt
n 1 tr 2 C −1 =α opt (C) = max 0, 1 − − N N tr C −2
−1 = α opt C −1 . and consider an estimator Σ opt Remark 5. Under Assumptions 1–3, l.i.m. α opt = αopt and n→∞
l.i.m. n−1 tr(Σ−1 − α opt C −1 )2 =
n→∞
= R(αopt ) = inf
K
(2)
l.i.m. n−1 tr(Σ−1 − α n C −1 )2 .
n→∞
−1 = α opt C −1 of matrices In this meaning, the estimators Σ opt
Σ−1 asymptotically dominate the class of estimators K(2) . In the case when y → 1, the standard consistent estimator of the matrix Σ−1 becomes degenerate and its quadratic risk increases infinitely, whereas the optimum shrinkage factor αopt → 0, and −1 remains the limit quadratic risk of the shrinkage estimator Σ opt 2 bounded tending to Λ−1 . Thus, the optimum shrinkage proves to be sufficient to suppress the increasing scatter of entries of illconditioned matrices C −1 . This provides a reason to recommend −1 for the improvement of linear regression analysis, the estimator Σ opt discriminant analysis, and other linear multivariate procedures.
GENERALIZED RIDGE ESTIMATORS
133
Generalized Ridge Estimators Regularized “ridge” estimators of the inverse covariance matrices are often used in applied statistics. The corresponding algorithms are included in many packages of applied statistical programs. In the subsequent sections, we develop a theoretical approach allowing 1. to find the dependence of the quadratic risk on the ridge parameter, 2. to study the effect of combined shrinkage-ridge estimators, 3. to calculate the quadratic risk of linear combinations of shrinkage-ridge estimators of the inverse covariance matrices, 4. to offer an asymptotically ε-unimprovable estimator. Consider a family K(3) of nondegenerating estimators Σ−1 of −1 = Γ(C), where the form Σ Γ(C) =
H(t) dη(t),
H(t) = (I + tC)−1 ,
t≥0
and η(t) is an arbitrary function of t with bounded variation on [0, ∞). We search estimators of the matrix Σ−1 that asymptotically dominate the class K(3) with respect to square losses. Define the maximum fourth moment of the projection x onto nonrandom axes M = sup E(eT x)4 > 0,
(1)
|e| =1
where e are nonrandom unit vectors, and the special measure of the quadratic forms variance ν = sup var(xT Ωx/n),
and γ = ν/M,
(2)
Ω=1
where Ω are nonrandom, symmetric, positively semidefinite matrices of unit spectral norms.
134
4. ASYMPTOTICALLY UNIMPROVABLE
Let us restrict the sequence P with the following requirements. A. For each n, the parameters M < c0 and all eigenvalues of Σ √ lay on a segment [c1 , c2 ], where c0 , c1 > 0 and c2 < c0 do not depend on n. B. The parameters γ vanish as n → ∞. C. The ratio n/N → y, where 0 < y < 1. D. For u ≥ 0, the convergence holds def
F0Σ (u) = n−1
n
ind(λi ≤ u) → FΣ (u),
i=1
where λi are eigenvalues of Σ, i = 1, . . ., n. Under Assumption D for any k > 0, the limits exist Λk = lim n−1 trΣk . n→∞ The Assumptions A–D provide the validity of spectral theory of sample covariance matrices developed in Section 3.1. Let us gather some inferences from theorems proved in Section 3.1 in the form of a lemma. Lemma 4.1. Under Assumptions A–D the following is true. 1. For all (real or complex) z except z > 0, the limit function exists h(z) = l.i.m. n−1 tr(I − zC)−1 = (1 − zs(z)u)−1 dFΣ (t); (3) n→∞
this function satisfies the H¨ older inequality |h(z) − h(z )| ≤ |z − z |ζ ,
0 < ζ < 1,
Im z · Im z = 0
and as |z| → ∞, the function zh(z) = −(1 − y)−1 Λ−1 + O(|z|−1 ). 2. For each (complex) z except z > 0, we have E(I − zC)−1 = (I − zs(z)Σ)−1 + Ωn (z), where s(z) = 1 − y + yh(z) and as n → ∞ Ωn (z) → 0.
(4)
GENERALIZED RIDGE ESTIMATORS
135
3. For u ≥ 0 as n → ∞, we have the weak convergence in the square mean Fn (u) = n
−1
n
ind(λj ≤ u) → F (u),
(5)
j =1
where λj are eigenvalues of the matrix C; the equation holds h(z) =
(1 − zu)−1 dF (u).
4. If c1 > 0 and y > 0 for each u > 0, the continuous derivative F (u) ≤ (c1 yu)−1/2 exists that vanishes for u < u1 and for u > u2 , √ √ where u1 = c1 (1 − y)2 and u2 = c2 (1 + y)2 . First we prove the convergence of the quadratic risk for the −1 = Γ(C) estimator Σ Rn = Rn (Γ) = En−1 tr(Σ−1 − Γ(C))2 .
(6)
We also consider a scalar function Γ(u) corresponding to the matrix Γ(C) (the matrix Γ(C) is diagonalized along with C with eigenvalues Γ(λ) that correspond to the eigenvalues λ of matrix C), Γ(u) = (1 + ut)−1 dη(t). Theorem 4.1. Let conditions A–D be satisfied. Then, def R(Γ) = lim Rn (Γ) = Λ−2 − 2 (Λ−1 + th(−t)s(−t)) dη(t)+ n→∞
th(−t) − t h(−t ) dη(t) dη(t ) = + t − t = Λ−2 − 2(1 − y) u−1 Γ(u) dF (u)+ +2y
Γ(u) − Γ(u ) dF (u) dF (u ) + u − u
Γ2 (u) dF (u).
(7)
136
4. ASYMPTOTICALLY UNIMPROVABLE
Notice that R(Γ) is quadratic with respect to Γ(·), and we can transform (7) to the form convenient for minimization. Consider the statistics gn (w) = (w − u)−1 dFn (u). If c1 > 0, y > 0, and Im w = 0, we have the convergence in the square mean gn (w) → g(w) = w−1 h(w−1 ). The (complex) function g(w) satisfies the H¨older condition and can be continuously extended to the half-axis u > 0, and for u > 0
(u − u )−1 dF (u )
Re g(u) = P
in the principal value meaning. Define Γopt (u) = (1 − y)u−1 + 2y Re g(u),
u > 0.
(8)
Remark 6. The relation (7) can be rewritten as follows: (Γ(u) − Γopt (u))2 dF (u), = Λ−2 − (Γopt (u))2 dF (u).
R(Γ) = Ropt + where
Ropt
Example 1. Equation (4) allows an explicit solution for a two-parametric “ρ-model” of limit spectral distribution F0 (u) = F0 (u, σ, ρ) of eigenvalues of the matrix Σ that is defined by the density function F0 (u)
=
(2πρ)−1 (1 − ρ)u−2 0
(c2 − u)(u − c1 ), for u1 ≤ u ≤ u2 ,
otherwise,
√ √ where u > 0, 0 ≤ ρ < 1, c1 = σ 2 (1 − ρ)2 , c2 = σ 2 (1 + ρ)2 , σ > 0. For this model (see in Section 3.1), Λ−1 = 1/κ, Λ−2 = (1 + ρ)/κ2 , where κ = σ 2 (1 − ρ)2 , and the function h(z) satisfies
GENERALIZED RIDGE ESTIMATORS
137
the equations (1 − h(z))(1 − ρh(z)) = κzh(z)s(z), 1 + ρ + κ(1 − y)z − (1 + ρ + κ(1 − y)z)2 − 4(ρ − κzy) h(z) = , 2(ρ − κyz) where s(z) = 1 − y + yh(z), 0 < y < 1. For this special case, the extremal function (8) is Γopt (u) = (1 − y + 2y Re h(u−1 ))u−1 = (ρ + y) (ρu + κy)−1 . The equation Γ
opt
(u) =
(1 + ut)−1 dη opt (t), u ≥ 0
t≥0
has a solution η
opt
(t) =
0
for t < ρκ−1 y −1 ,
(ρ + y)κ−1 y −1
for t ≥ ρκ−1 y −1 ,
ρ > 0.
Calculating Ropt = Ropt (Γ) we find that Ropt = ρy(ρ + y)−1 κ−2 . As ρ → 0 (passing to unit covariance matrices) or y → 0 (passing to the traditional asymptotics), we obtain Ropt → 0. The optimum estimator of Σ−1 in this case is (ρ + y)(ρC + κyI)−1 . Theorem 4.2. Under Assumptions A–D, for the statistics −2 − 2(1 − nN −1 ) nε (Γ) def = Λ u−1 Γ(u) dFn (u) + R u≥ε Γ(u) − Γ(u ) −1 (u) dF (u ) + Γ2 (u) dFn (u), dF + 2nN n n −u u u,u ≥ ε (9) we have lim l.i.m. Rn (Γ) = R(Γ). ε → +0 n → ∞
Example 2. Consider shrinkage-ridge estimators of the form −1 = Γ(C) = α(I + tC)−1 , where α > 0 is the shrinkage coeffiΣ cient and 1/t is the regularizing ridge parameter. In this case, the
138
4. ASYMPTOTICALLY UNIMPROVABLE
limit quadratic risk (7) equals R = R(Γ) = Λ−2 −2α (Λ−1 + zh(z)s(z))+α2
d (zh(z)), dz
z = −t.
As a statistics approximating this quantity, we can offer ∼ ∼ d Rnε = Λ−2 − 2α Λ−1 + zhn (z)sn (z) + α2 (zhn (z)), dt where z = −t, hn (z) = tr(I + tC)−1 , sn (z) = 1 − nN −1 −1 n = R. + nN hn (z). By virtue of Theorem 4.2, lim l.i.m. R ε→+0
n→∞
The best shrinkage coefficient can be easily found in an explicit form. The minimization in t can be performed numerically. Asymptotically Unimprovable Estimator Now we construct an estimator of (8) and show its dominating property. n (Γ) of the form (9) reaches In the general case, the function R no minima for any smooth functions Γ(u). The estimator gn (u) of the continuous function g(u) is singular for u > 0. To suggest a regular estimator, we perform a smoothing of gn (u). We construct a smoothed function (8) of the form −1 Γopt + 2yRe g(w)], ε (u) = Re[(1 − y)w
where w = u − iε, For
Γopt ε (u),
ε > 0.
one can offer a “natural estimator”
opt (u) = Re[(1 − nN −1 )w−1 + 2nN −1 Re gn (w)], Γ nε where w = u − iε, ε > 0. Lemma 4.2. Let u ≥ c1 > 0. For ε → +0, we have ζ |Γopt (u) − Γopt ε (u)| = O(ε ), ζ > 0.
For any ε > 0 and n → ∞, opt (u) − Γopt (u)|2 → 0. E|Γ nε ε
(10)
ASYMPTOTICALLY UNIMPROVABLE ESTIMATOR
139
This assertion follows immediately from Assumption C, (5), and (10). −1 = G(C), define the quadratic loss function Given estimator Σ Ln = Ln (G) = n−1 tr(Σ−1 − G(C))2 . opt −1 = Γ We offer an estimator Σ nε that presents a matrix diagonalized along with the matrix C with the eigenvalues n
λ λi − λj 2 i opt (λi ) = 1 − n + , Γ nε N λ2i + ε2 N (λi − λj )2 + ε2
(11)
j =1
where λi are the corresponding eigenvalues of C, i = 1, . . . , n. Theorem 4.3. Under Assumptions A–D lim
ε→+0
opt ) − R(Γopt )|2 = 0. l.i.m. E|Ln (Γ nε
n→∞
−1 that opt Corollary. The statistics Γ nε (C) is an estimator of Σ (2) asymptotically ε-dominates the class K in the square mean with respect to the square losses Ln (Γ). For applications, it is of importance to solve the problem of an optimal choice of smoothing factor ε > 0. The necessity of smoothing is due to the essence of the formulated extremum problems. The improving effect of (11) is achieved by an averaging within groups of boundedly dependent variables (eigenvalues) that makes it possible to use thus produced nonrandom regularities. These groups must be sufficiently small to produce more regularities and sufficiently large for these regularities to be stable. For the problem of estimating expectation vectors considered in Section 2.4 in the increasing dimension asymptotics n → ∞, N → ∞, n/N → y > 0, the optimal dependence ε = ε(n) was found explicitly. This dependence is of the form n = nα , where α > 0 is not large positive magnitude. In application to real problems, this fact seems to impose rather restrictive requirements to number of dimensions. However, numerical experiments (see Appendix) indicate that the theoretical upper estimates of the remainder terms are strongly
140
4. ASYMPTOTICALLY UNIMPROVABLE
overstated, and the quadratic risk functions are well described by the principal terms of the increasing dimension asymptotics even for small n and N . Remark 7. Consider the same “ρ-model” of limit spectra of sample covariance matrices as in Example 1. Let it be known a priori that the populations have the distribution functions F0n (u) of eigenvalues of Σ tending to F0 (u) = F0 (u, σ, ρ). Then, it suffices to construct consistent estimators of the parameters σ 2 and ρ. Consider the statistics ν = n−1 tr C ν , M
ν = 1, 2,
2 /M 1 , σ 2 = M
2 /M 2 . ρ = 1 − M 1
For these, under Assumptions A–D, in [71] the limits were found 1 = Λ1 > 0, plim M n→∞
2 = Λ2 + yΛ2 . plim M 1 n→∞
2 = σ 2 > 0, and plim ρ = ρ. It follows that the limits exist plim σ n→∞
n→∞
Let us construct the estimator Σ−1 of the matrix Σ−1
−1 = σ 2 ( ρ + nN −1 )( ρσ 2 C + n−1 N −1 tr2 C · I)−1 . Σ −1 have only uniformly bounded eigenvalues The matrices Σ −1 with probabilities pn → 1 as n → ∞. We conclude that Σ opt − Γ (C) → 0 in probability. By Theorem 4.1 and Remark 5, we have −1 )2 = Ropt . plim n−1 tr(Σ−1 − Σ n→∞
Thus, for the populations with F0n (u) → F0 (u, σ, ρ), the fam −1 } has the quadratic losses almost surely ily of estimators {Σ not greater than the quadratic losses of any estimator from the class K(2) . Proofs for Section 4.1. First, we prove that, under Assumptions A–D, the ε-regularized statistics n−1 trC −ν , ν → 1, 2, for all distributions tend to the same = C − iεI. limits as for normal ones. Denote C
PROOFS FOR SECTION 4.1
141
Lemma 4.3. Under Assumptions A–D, the limits exist ∼
l.i.m. n−1 trC −1 = (1 − y)−1 Λ−1 ,
M−1 = lim
M−2 =
ε→+0 n → ∞ ∼ lim l.i.m. n−1 trC −2 ε→+0 n → ∞
= (1 − y)−1 Λ−2 + y(1 − y)−3 Λ2−1 .
Proof. We start from the last expression in (3). For fixed ε > 0 and z = i(1/t + ε)−1 , we have for increasing t > 0: zh(z) = −(1−y)−1 Λ−1 −((1−y)−2 Λ−2 +y(1−y)−3 Λ2−1 )z −1 +ξ(z), where |ξ(z)| < c−3 t−2 . At the same time, the function zh(z) is the square mean limit of the function zn−1 tr(I − zC)−1 , which can be expanded as t → ∞ in the series ∼
∼
zn−1 tr(I − zC)−1 = −n−1 trC −1 + n−1 trC −2 t−1 + ζ(z), where E|ζ(z)|2 = O(ε−3 t−2 ). Comparing these expressions as t → ∞, we obtain the limits in the lemma formulation. ∼
l.i.m. Λ−2 = Λ−2 .
It follows that lim
εto+0 n → ∞
Theorem 4.1 (proof). The first theorem statement follows from 3. By definition (6), Rn (Γ) = n−1 trΣ−2 − 2En−1 trΣ−1 Γ(C) + En−1 tr Γ2 (C).
(12)
The first addend in the right-hand side of (12) tends to Λ−2 . Let T be a large positive number. In view of Lemma 4.1, the expectation of the second addend in the right-hand side of (12) equals −2n−1 tr[Σ−1
EH dη(t)] + oT =
t
= −2n
−1
−1
tr[Σ
t
(I + tsΣ−1 ) dη(t)] + on (T ) + oT ,
142
4. ASYMPTOTICALLY UNIMPROVABLE
where H = (I + tC)−1 , s = s(−t), on (T ) → 0 as n → ∞ for any fixed T > 0. The value oT is a contribution of large t ≥ T , which uniformly (with respect to n) decreases as t → ∞. We use once more the expression for EH in Lemma 4.1 and find that the second term of the right-hand side of (12) can be rewritten as −2 n−1 tr(Σ−1 − tsEH) dη(t) + on (T ) + oT = t
where h = h(−t), s = s(−t), on (T ) → 0, and on (T ) → 0 as n → ∞ for any fixed T > 0, and oT → 0 as T → ∞. By the first statement of Lemma 4.1, the value |tsh| is bounded as t → ∞. Tending n → ∞ and T → ∞, we obtain the second term of the right-hand side of (7). Further, let T > 0. Consider the expression tH(−t) − t H(−t ) −1 2 En tr Γ (C) = E n−1 tr dη(t)dη(t ), t − t |t−t |>ε (13) where H(−t) = (I+tC)−1 on the square 0 ≤ t, t ≤ T , ε > 0. In the def
2
region where |t − t | > ε, we have hn (−t) = n−1 tr H(−t) → h(−t) as n → ∞ for any fixed ε > 0 and t < T uniformly with respect 2 to t (here and in the following, the sign → denotes convergence in the square mean). Therefore, in this region, the integrand of (13) uniformly converges to (th(−t) − t h(−t ))/(t − t ) in the square mean, and we obtain the principal part of the last term in (7). Let us prove now that the contribution of the region |t − t | < ε is small. Let us expand the integrand in (13) with respect to x = |t − t | near the point t > 0. Note that |
d (thn (−t))| ≤ 1, dt
E|
d2 thn (−t)| ≤ 2En−1 tr C −1 < c, dt2
where the quantity c does not depend on n and t. It follows that the region |t − t | < ε contributes O(ε) to (13). We conclude that En−1 tr Γ2 (C) converges to the third term in (7). Theorem 4.1 is proved.
PROOFS FOR SECTION 4.1
143
Theorem 4.2 (proof).
We start from Theorem 4.1. By Lemma 4.3, Λ−1 = (1−y) u−1 dF (u). We have the identities (Λ−1 − (1 − y)th(−t)) dη(t) = (1 − y) u−1 Γ(u) dF (u),
2
th (−t) dη(t) =
Γ(u) − Γ(u ) dF (u) dF (u ), u − u
where the integrand is extended by continuity to u = u . The weak convergence in the square mean Fn (u) → F (u) implies the convergence in the square mean −1 2 2 n tr Γ (C) = Γ (u) dFn (u) → Γ2 (u) dF (u). In view of the convergence in the square mean Fn (u) → F (u), for each ε > 0, the integrals in (8) converge as n → ∞ uniformly to the integrals above. For ε < u1 , by Lemma 4.1, Fn (ε) → 0 in the square mean. As ε → +0, we obtain the assertion of our theorem. Now, we establish the convergence of the square losses of the optimal theoretical estimator Γopt ε (C). Lemma 4.4. Under Assumptions A–D for ε > 0, opt Ln (Γopt ε ) = R(Γε ) + ξn (ε) + O(ε),
where E|ξn (ε)|2 → 0 as n → ∞ for fixed ε > 0, and the upper estimate O(ε) is uniform in n. Proof. For fixed ε > 0, the function Γopt ε (u) is continuous and has bounded variation for u > 0. The quantity −1 −2 −1 opt 2 − 2n−1 tr[Σ−1 Γopt Ln (Γopt ε ) = n tr Σ ε (C)] + n tr[Γε (C)] ,
where Γε (C) ia a matrix corresponding to the scalar function Γε (u). Note that n−1 tr Σ−2 → Λ−2 in the right-hand side and 2 2 opt 2 2 (C)] = [Γ (u)] dF (u) → [Γopt n−1 tr[Γopt n ε ε ε (u)] dF (u).
144
4. ASYMPTOTICALLY UNIMPROVABLE
Let us compare these expressions with (7). We notice that it suffices to prove the convergence in the square mean n−1 trΣ−1 ϕ(C) → 2
−y
(1 − y)−1 u−1 ϕ(u) dF (u)
ϕ(u) − ϕ(u ) dF (u)dF (u ) u − u
(14)
for ϕ(u) = Γopt ε (u), where the latter expression in the integrand is extended by continuity to u = u . This relation is linear with respect to ϕ(u), and it is sufficient to prove (14) for ϕ(u) = w−1 and ϕ(u) = Re gn (w), w = u − iε, ε > 0. As n → ∞, we have the convergence in the square mean 2 n−1 tr Σ−1 (C − iεI)−1 → (1 − y)−1 Λ−2 + O(ε). Consequently, we have (1 − y)
u
−1
w
−1
dF (u) − y[ w−1 dF (u)]2 =
2 + O(ε) = (1 − y)−1 Λ−2 + O(ε). = (1 − y)M−2 − yM−1
Now let ϕ(u) = g(w),
w = u − iε,
n−1 tr Σ−1 g (C − iεI)−1 =
ε > 0. We find that
n−1 tr Σ−1 (C − z −1 I)−1 dF (u)
= − zn−1 tr Σ−1 (I − zC)−1 dF (u),
(15)
where z = (u−iε)−1 . Here the matrix g(A) is a matrix diagonalized along with A with eigenvalues g(λi ) on the diagonal, where λi are corresponding eigenvalues of A. By Lemma 4.1 for fixed ε > 0 and √ u < u2 = c2 (1 + y)2 , we have En−1 tr Σ−1 (I − zC)−1 = = n−1 trΣ−1 + zs(z)En−1 tr(I − zC)−1 + on (ε) = = Λ−1 + zs(z)h(z) + on (ε),
PROOFS FOR SECTION 4.1
145
where z = (u − iε)−1 , and uniformly in u on (ε) → 0 and on (ε) → 0 as n → ∞. In view of Lemma 4.1 as ε = 0 and n → ∞, we find that var[n−1 tr Σ−1 (I − zC)−1 ] → 0 uniformly with respect to u, and the right-hand sides of (15) converge to Λ1 + zsh in the square mean. Substituting g(z −1 ) = zh(z) and s(z) = 1 − y + yz −1 g(z −1 ), we obtain n−1 tr(C − iεI)−1 = (u − iε)−1 dFn (u) → (u − iε)−1 dF (u), where the right-hand side equals −zh(z) and z = i/ε. As ε → +0, this quantity tends to M−1 . Thus, the right-hand side of (15) equals −1 −Λ1 M1 −(1−y) zg(z ) dF (u)−y g 2 (z −1 ) dF (u) + ξn (ε) + oε , (16) where z = (u − iε)−1 , oε → 0 as ε → +0; for fixed ε > 0, we have the convergence E|ξn (ε)|2 → 0 as n → ∞. Recall that the arguments u are bounded in the integration region. By virtue of the H¨ older inequality for g(z), the contribution of the difference between z and u−1 in (16) is of the order of magnitude εζ as ε → +0, where ζ > 0. We have the identity 2 n−1 Re g(u) dF (u) = 2 (u − u )−1 dF (u ) u−1 dF (u) = p 2 = −[ u−1 dF (u)]2 = −M−1 , where M−1 = (1 − y)−1 Λ−1 . Consequently, (16) equals (1 − y) u−1 g ∗ (u) dF (u) − y g 2 (u) dF (u) + oε + ξn (ε), (17) where the asterisk denotes complex conjugation, and oε →0 as ε → +0. On the other hand, substitute ϕ(u) = g(w), where w = u − iε, to the right-hand side of (14). Using the identity g(u − iε) − g(u − iε) ) = g 2 (u − iε) dF (u), dF (u) dF (u u − u
146
4. ASYMPTOTICALLY UNIMPROVABLE
we obtain that for ϕ(u) = g(w) and w = u − iε, the right-hand side of (14) equals −1 (1 − y) u g(w) dF (u) − y g 2 (w) dF (u). (18) In view of the H¨ older condition, the real parts of (17) and (18) differ by oε + ξn (ε). Thus, the convergence (14) is proved for ϕ(u) = Re g(w), and for ϕ(u) = Γopt ε (u). Lemma 4.3 statement follows. Theorem 4.3 (proof). First note that for ε > 0 and u > 0, the scalar function opt −1 Γopt nε (u) = Γε (u) + u on + ξn (u, ε).
(19)
Here on → 0 and Eξn2 (u, ε) → 0 as n → ∞. For u ≥ c1 > 0, the opt (u) + r(ε), where r(ε) = O(εζ ) as ε → +0, function Γopt ε (u) = Γ ζ > 0. Consider the difference opt −1 opt opt Ln (Γopt nε ) − Ln (Γε ) = n tr Q(Γnε (C) − Γε (C)) ,
(20)
opt where Q = −2Σ−1 + Γopt nε (C) + Γε (C). Eigenvalues of Q are bounded for ε > 0, and in view of (19), the right-hand side of (20) is not greater than o(n−1 )tr C −1 + |ξn (u, ε)|. Therefore, Ln (Γopt nε ) tends to Ln (Γopt ) in the square mean as n → ∞. Lemma 4.4 ε opt opt 2 implies that E|Ln (Γε ) − R(Γε )| → 0. We apply Lemma 4.2 and Remark 6 and conclude that the assertion of Theorem 4.3 is true.
MATRIX SHRINKAGE ESTIMATORS
4.2.
147
MATRIX SHRINKAGE ESTIMATORS OF EXPECTATION VECTORS
Until recently, efforts to improve estimators of the expectation value vector by shrinkage were restricted to a special case of shrinkage estimators in the form of a scalar multiple of the sample mean vector depending only on length of sample mean vector (see Introduction). Such approach is natural for independent components of the observation vector. In case of dependent variables, it is more natural to shrink the observation vector components by weighting in the system of coordinates where the covariance matrix is diagonal, or where the sample covariance matrix is diagonal. This is equivalent to multiplying the observation vector by matrix function depending on sample covariance matrix. The effect of such matrix shrinkage was investigated in [75]. As it is clear from Introduction, the gain of shrinkage can be measured approximately by the ratio of the dimension to the sample size. This fact suggests that the effect of shrinkage can be adequately investigated under the Kolmogorov asymptotic approach when sample size increase along with the dimensionality so that their ratio tends to a constant. In Section 2.4, this asymptotics was applied to seek best component-wise shrinking coefficients under the assumption that variables are independent and normal. In this section, we develop the distribution-free shrinking techniques for the case of a large number of dependent variables . This is achieved by multiplying sample mean vector by matrix Γ(C) that is diagonalized together with sample covariance matrix C, and finding asymptotically optimum scalar weighting function Γ(λ) of eigenvalues λ of matrices C. ¯ be an n-dimensional sample mean vector calculated over Let x a sample X = {xm } of size N and sample covariance matrix be C = N −1
N
¯ )(xm − x ¯ )T . (xm − x
m=1
Consider a class K of estimators of expectation value vectors Ex = μ = (μ1 , . . ., μn ) of the form μ = Γ(C)¯ x,
(1)
148
4. ASYMPTOTICALLY UNIMPROVABLE
where the matrix function Γ(C) can be diagonalized together with C with Γ(λ) as eigenvalues, where λ are corresponding eigenvalues of C; the scalar function Γ : R1 → R1 (denoted by the same letter)
(1 + ut)−1 dη(t),
Γ(u) =
(2)
t≥0
has finite variation on [0, ∞), is continuous except, maybe, of a finite number of points, and has sufficiently many moments uk | dη(u)|, k = 1, 2 . . . Our problem is to find a function Γ(u) minimizing the square losses Ln = Ln (η) = ( μ−μ )2 . (3) Limit Quadratic Risk for Estimators of Vectors We use the increasing dimension approach in the limit form as follows. Consider a sequence of problems ¯ , C, μ P = {(S, μ , N, X, x )n },
n = 1, 2, . . .
(we do not write out the subscripts for the arguments of P), in which the expectation value vectors μ = Ex are estimated by ¯ and samples X of size N from populations S with sample means x sample covariance matrices C, and estimators of μ are constructed using an a priori chosen function Γ(u). We restrict the populations with the only requirement that all eight moments of all variables exist. Define
M8 = max ( μ2 )4 , E γ = sup Ω<1 o
o sup(eT x)8
> 0,
|e|=1 o o √ var(xT Ωx)/ M8 ,
is the centered observation vector, e is a non where x = x − μ random unity vector, and Ω are nonrandom symmetric matrices of spectral norm not greater than 1.
Define the empirical distribution functions

F_{0n}(u) = n^{-1} Σ_{i=1}^{n} ind(λ_{0i} ≤ u),   G_n(u) = Σ_{i=1}^{n} μ_i^2 ind(λ_{0i} ≤ u),   F_n(u) = n^{-1} Σ_{i=1}^{n} ind(λ_i ≤ u),

where λ_{0i} are eigenvalues of Σ, λ_i are eigenvalues of C, and μ_i are the components of μ in the system of coordinates in which the matrix Σ is diagonal, i = 1, ..., n. To derive limit relations, we restrict P with the following conditions.
A. The parameters 0 < M_8 < c_0 and γ → 0 in P, where c_0 does not depend on n.
B. For each n, all eigenvalues of Σ are located on a segment [c_1, c_2], where c_1 > 0 and c_2 do not depend on n.
C. The ratios n/N → y.
D. For u ≥ 0, the weak convergence F_{0n}(u) → F_0(u) holds.
E. For u ≥ 0 almost everywhere, the convergence G_n(u) → G(u) holds.
Under these conditions, the limit exists

B = lim_{n→∞} μ^2 = G(c_2).
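The empirical distribution functions above are straightforward to compute in a simulation; the sketch below (illustrative code under the assumption that Σ, C, and μ are available, which is the case only in a simulation study) evaluates F_{0n}(u), G_n(u), and F_n(u) at a point u.

    import numpy as np

    def empirical_dfs(Sigma, C, mu, u):
        lam0, V = np.linalg.eigh(Sigma)         # eigenvalues and eigenvectors of Sigma
        lam = np.linalg.eigvalsh(C)             # eigenvalues of C
        mu_rot = V.T @ mu                       # components of mu in the eigenbasis of Sigma
        F0n = np.mean(lam0 <= u)
        Gn = np.sum(mu_rot ** 2 * (lam0 <= u))
        Fn = np.mean(lam <= u)
        return F0n, Gn, Fn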
We start from the results of the spectral theory of large-dimensional covariance matrices developed in Chapter 3. We consider the resolvent H = H(z) = (I − zC)^{-1} of the sample covariance matrices C. Let G = G(ε) be a region of the complex plane outside some ε-neighborhood of the axis z > 0. We formulate the following corollary of the theorems proved in Section 3.1.

Lemma 4.5. Under Assumptions A–E,

1. the limits exist h(z) = l.i.m._{n→∞} n^{-1} tr(I − zC)^{-1} = lim_{n→∞} n^{-1} tr(I − zs(z)Σ)^{-1}, where the convergence is uniform in G;
2. the limits exist

b(z) = l.i.m._{n→∞} μ^T (I − zC)^{-1} μ,   k(z) = l.i.m._{n→∞} x̄^T (I − zC)^{-1} x̄,

where the convergence is uniform in G, and

k(z) = b(z) + y(h(z) − 1)/(z s(z)) if z ≠ 0,   k(0) = B + yΛ_1,

where Λ_1 = lim_{n→∞} n^{-1} tr Σ;
3. for u ≥ 0 almost everywhere, the limit exists F(u) = l.i.m._{n→∞} F_n(u), and F(u_2) = 1, where u_2 = c_2(1 + √y)^2;

4. the equations hold

h(z) = ∫ (1 − zu)^{-1} dF(u) = ∫ (1 − zs(z)u)^{-1} dF_0(u),   b(z) = ∫ (1 − zs(z)u)^{-1} dG(u);

5. the inequality |h(z) − h(z′)| < c|z − z′|^ζ holds, where c, ζ > 0;

6. if y < 1 and |z| → ∞, then h(z) → 0, b(z) → 0, and k(z) → 0 so that

z h(z) ≈ −Λ_{-1}(1 − y)^{-1},   z b(z) ≈ −(1 − y)^{-1} ∫ u^{-1} dG(u);

if y > 1 and |z| → ∞, then b(z) → b(∞) and k(z) → k(∞) so that, with accuracy up to O(|z|^{-2}), we have

h(z) ≈ 1 − y^{-1} − λ_0 y^{-1} z^{-1},   b(z) ≈ b(∞) − β ∫ u(1 + λ_0 u)^{-2} dG(u) · z^{-1},   k(z) − b(z) ≈ k(∞) − b(∞) − βλ_0^{-2} z^{-1},

where λ_0 and β are determined by the equations

∫ (1 + λ_0 u)^{-1} dF_0(u) = 1 − y^{-1},   β = y ∫ λ_0 u (1 + λ_0 u)^{-2} dF_0(u),

and

b(∞) = ∫ (1 + λ_0 u)^{-1} dG(u),   k(∞) = b(∞) + 1/λ_0.
First, we establish the convergence of expressions including two resolvents H(z) = (I − zC)^{-1} with different arguments. Denote limits in the square mean by l.i.m. and convergence in the square mean by the sign →².

Lemma 4.6. Under Assumptions A–E, uniformly in z, z′ ∈ G as n → ∞, the convergence holds

μ^T H(z)H(z′) μ →² [z b(z) − z′ b(z′)]/(z − z′),   x̄^T H(z)H(z′) x̄ →² [z k(z) − z′ k(z′)]/(z − z′).   (4)
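The proofs below rely on the elementary resolvent identity H(z)H(z′) = (zH(z) − z′H(z′))/(z − z′); the following sketch (illustrative code, with arbitrarily chosen arguments) verifies it numerically for a random sample covariance matrix.

    import numpy as np

    rng = np.random.default_rng(2)
    n, N = 20, 40
    X = rng.normal(size=(N, n))
    C = np.cov(X, rowvar=False, bias=True)

    def H(z):
        return np.linalg.inv(np.eye(n) - z * C)     # resolvent (I - zC)^{-1}

    z1, z2 = -0.7, -1.3                             # resolvents exist for real z < 0
    lhs = H(z1) @ H(z2)
    rhs = (z1 * H(z1) - z2 * H(z2)) / (z1 - z2)
    print(np.max(np.abs(lhs - rhs)))                # of the order of machine precision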
Proof. Let ε > 0 be arbitrarily small. Denote bn (z) = μ T H(z) μ, T ¯ kn (z) = x H(z)¯ x. We have H(z)H(z ) = (zH(z) − z H(z ))/ (z−z ). By Lemma 4.5, for z, z ∈ G and |z−z | > ε > 0 uniformly, the convergence holds def
X = μ T H(z)H(z ) μ=
def
¯ T H(z)H(z )¯ Y = x x=
zbn (z) − z bn (z ) 2 zb(z) − z b(z ) → , z − z z − z zkn (z) − z kn (z ) 2 zk(z) − z k(z ) → . z − z z − z
Suppose |z − z | < ε. It suffices to prove that X and Y can be written in the form X=
d (zb(z)) + ξn + η(ε), dz
Y =
d (zk(z)) + ξn + η(ε), dz
where ξn → 0 as n → ∞ uniformly with respect to z and ε, and E|η(ε)|2 → 0 as ε → +0 uniformly in z and n. Using the identity H(z)H(z ) =
d (zH(z)) + (z − z)CH 2 (z)H(z ), dz
we obtain X=
d (zbn (z)) + ξn + η(ε), dz 2
μT CH 2 (z)H(z ) μ. Here ξn → 0 since the secwhere η(ε) = (z − z) ond derivatives bn (z) and b (z) exist and are uniformly bounded, and 2 2 d T 2 T 2 E 2 zbn (z) = 2Eμ CH 3 (z)μ = O(1)Eμ Cμ = O(1). dz As ε → +0, we have T 2 E|η(ε)|2 = O(ε2 )Eμ C μ = O(ε2 ) uniformly in n. This proves the first statement of our lemma. Analogously, we rewrite the expression for Y in the form Y =
d (zkn (z)) + ξn + η(ε), dz 2
where η(ε) = (z − z)¯ xT CH(z)H(z )¯ x. Here ξn → 0 in view of the 2 convergence kn (z) → k(z) and the existence and uniform boundedness of the moments 2 2 d ¯ |2 ≤ O(1)E|¯ ¯ |2 . E 2 zkn (z) = O(1)E|¯ xT C x xT S x dz o
o
¯ |2 ≤ 2(E| ¯T Sx ¯=x ¯−μ ¯˙ |2 ), x , and μT S μ|2 + E|x Indeed, E|¯ xT S x o
o
¯T Sx ¯ |2 ≤ N −2 E(tr2 S + 2 tr S 4 ) = O(1). E|x Therefore, ¯ |2 = O(ε2 ). xT C x E|ηn (z)|2 ≤ O(ε2 )E|¯
This completes the proof of Lemma 4.6.

Theorem 4.4. Under Assumptions A–E, we have

R = R(η) = l.i.m._{n→∞} L_n(η) = B − 2 ∫ b(−t) dη(t) + ∫∫ [t k(−t) − t′ k(−t′)]/(t − t′) dη(t) dη(t′),

where the expression in the integrand is extended by continuity to t = t′.

Proof. We have

L_n(η) = μ^2 − 2 ∫ μ^T (I + tC)^{-1} x̄ dη(t) + ∫∫ x̄^T (I + tC)^{-1}(I + t′C)^{-1} x̄ dη(t) dη(t′).   (5)

We have μ^2 → B as n → ∞. Denote H = (I + tC)^{-1}, H′ = (I + t′C)^{-1}. The product HH′ = (tH − t′H′)/(t − t′). In the right-hand side of (5), we obtain random values converging to the limits

μ^T H x̄ →² b(−t),   x̄^T HH′ x̄ →² [t k(−t) − t′ k(−t′)]/(t − t′)

as n → ∞ uniformly with respect to t, t′ by Lemma 4.5 and Lemma 4.6. We conclude that we can perform the limit transition in the integrands in (5). This proves Theorem 4.4.

Example. Let the matrices Γ(C) have a “ridge” form: the function η(v) = 0 for v < t and η(v) = α > 0 for v ≥ t, corresponding to the estimator μ̂ = α(I + tC)^{-1} x̄. In this case,

R = B − 2α b(−t) + α^2 d/dt (t · k(−t)).

Let t = 0, α = 1, μ̂ = x̄ (the standard estimator). Then, the quadratic risk R = R_st = yΛ_1. Let t = 0, μ̂ = α x̄, where α is a constant. Then, we obtain that R = B(1 − α)^2 + α^2 yΛ_1, and the minimum of R, equal to R_st B/(B + yΛ_1), is attained for α = B/(B + yΛ_1).
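The t = 0 case can be checked by a short Monte Carlo experiment; the sketch below (illustrative code; the model Σ = σ²I and all numerical values are assumptions) compares the losses of the standard estimator x̄ and of the shrinkage estimator αx̄ with α = B/(B + yΛ_1).

    import numpy as np

    rng = np.random.default_rng(1)
    n, N = 100, 200
    y = n / N                              # dimension-to-sample-size ratio
    sigma2 = 1.0                           # Sigma = sigma2 * I in this toy model
    mu = rng.normal(scale=0.2, size=n)
    B = np.sum(mu ** 2)                    # plays the role of lim |mu|^2
    Lambda1 = sigma2                       # n^{-1} tr Sigma
    alpha = B / (B + y * Lambda1)          # optimal scalar shrinkage for t = 0

    loss_std, loss_shr = [], []
    for _ in range(200):
        X = rng.normal(scale=np.sqrt(sigma2), size=(N, n)) + mu
        x_bar = X.mean(axis=0)
        loss_std.append(np.sum((x_bar - mu) ** 2))
        loss_shr.append(np.sum((alpha * x_bar - mu) ** 2))

    print(np.mean(loss_std), y * Lambda1)                          # ~ R_st = y*Lambda1
    print(np.mean(loss_shr), B * y * Lambda1 / (B + y * Lambda1))  # ~ R_st*B/(B + y*Lambda1)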
Let t ≠ 0. The minimum of R, equal to R^opt = B − α_0 b(−t), is attained for α = α_0 = b(−t)/[d/dt (t · k(−t))]. In the special case of the “ρ-model” (see Section 3.1) of limit spectra of the matrices Σ, with the special choice of identical μ_i^2 = μ^2/n for i = 1, ..., n, we can express the values α_0 and R^opt in the form of rational functions of h(−t). Then, b(z) = B h(z). For ρ = 0, we obtain R = R_st B(B + yΛ_1)^{-1}(1 − y(1 − h)^2), and the minimum is attained for h = 1 with t = 0, where h = h(−t).

Corollary. Under the assumptions of Theorem 4.4, the equality

∫ [t k(−t) − t′ k(−t′)]/(t − t′) dη^0(t′) = b(−t),   t ≥ 0,

is sufficient for R(η) to have a minimum for η(t) = η^0(t).

Minimization of the Limit Quadratic Risk

To find a solution of this equation, we use the analytic properties of the functions h(z), s(z), b(z), and k(z). Define

h̃(z) = h(z) − h(∞),   b̃(z) = b(z) − b(∞),   k̃(z) = k(z) − k(∞).

Lemma 4.7. Suppose conditions A–E hold and y ≠ 1. Then, for any small σ > 0, we have

R̃(Γ) = B − (1/πi) ∫_L b̃(z) Γ(1/z) z^{-1} dz + (1/2πi) ∫_L k̃(z) Γ^2(1/z) z^{-1} dz
      = R(Γ) if y < 1,
      = R(Γ) + 2Γ(0) b(∞) − Γ^2(0) k(∞) if y > 1,   (6)

where the contour of integration is L = (σ − i∞, σ + i∞).

Proof. The functions h(z), b(z), and k(z) are analytic and have no singularities for Re z < σ, where 0 < σ < u_2^{-1}. In the half-plane to the right of L, the functions b̃(z) and k̃(z) decrease as
O(|z|−1 ) for |z| → ∞ and the function Γ(z −1 )z −1 = O(|z|−1 ). Therefore, the integration contour L may be closed by a semicircle of radius r = |z − σ| → ∞. We change the order of integration and use the residue theorem. It follows
1 2πi =
1 2πi
∼
L
b(z) 1 dz = Γ z z
∼
−1
b(z)(z + t)
dz dη(t) =
∼
b(−t) dη(t).
The latter integral in the right-hand side of (6) can be rewritten as 1 2πi
L
∼
k(z) 2 Γ z
∼ 1 1 z k(z) dz = dz dη(t) dη(t ). z 2πi (z + t)(z + t )
For t = t , we calculate two residues at the points z = −t and z = −t and obtain in the integrand ∼
∼
tk(−t) − t k(−t ) tk(−t) − t k(−t ) = − k(∞). t − t t − t d (tk(−t)). We dt gather summands and obtain the right-hand side of (6). Lemma 4.7 is proved. For t = t , the residue of the second order yields
Now we consider a special case when the above integrals can be reduced to integrals over a segment. By Lemma 4.6, for any v > 0, we have h(z) → h(v), where z = v + iε, ε → +0, and Im h(v) > 0 if and only if dF(v^{-1}) > 0. For b(z) to be continuous, an additional assumption is required. We will need the Hölder condition

|b(z) − b(z′)| < c|z − z′|^ζ,   c, ζ > 0.   (7)
Remark 1. Suppose Assumptions A–E are valid and 0 < y < 1. Then, the inequality (7) follows from the condition

sup_{u≥0} dG(u)/dF_0(u) < a_0,   (8)

where a_0 is a constant. Indeed, suppose (8) holds. Then, the limits exist b(v) = lim b(v + iε) and k(v) = b(v) + y(h(v) − 1)/(v s(v)) = lim k(z) as z = v + iε → v. Define the region V = {v > 0 : Im k(v) > 0} and the function

Γ^0(u) = Im b(u^{-1})/Im k(u^{-1}),   u^{-1} ∈ V.

Let Γ^0(0) = 0 for y ≤ 1 and Γ^0(0) = b(∞)/k(∞) for y > 1. Note that 0 ≤ Γ^0(u) ≤ 1.

Theorem 4.5. Suppose conditions A–E are fulfilled, (7) holds, and y ≠ 1. Then, R = R(η) = R(Γ) is such that

R = R^opt + (1/π) ∫_V Im k(v) [Γ(1/v) − Γ^0(1/v)]^2 v^{-1} dv + k(∞)(Γ(0) − Γ^0(0))^2,   (9)

where

R^opt = B − (1/π) ∫_V (Im b(v))^2/Im k(v) · v^{-1} dv − Γ^0(0) b(∞).

Proof. For Im z > 0, we contract the contour L to the beam z ≥ σ > 0 using the analytic properties of the functions in (6). The function

α(z) = Γ(1/z) z^{-1} = ∫_{t≥0} (z + t)^{-1} dη(t)

has a bounded derivative for Re z ≥ σ > 0, and the expressions in the integrands in (6) satisfy the Hölder inequality. The contribution of large |z| can be made arbitrarily small. Under the contraction of L to real points outside of V, the contributions of the integrals of b̃(z) and k̃(z) along oppositely directed parallel beams on both sides of the axis of abscissae mutually cancel. The contributions of the real parts of b(z) and k(z) mutually cancel, while the contributions of the imaginary parts double. We obtain

R(Γ) = B − (2/π) ∫_V Im b(v) Γ(1/v) v^{-1} dv + (1/π) ∫_V Im k(v) Γ^2(1/v) v^{-1} dv − 2Γ(0) b(∞) + Γ^2(0) k(∞).   (10)
One can check that the right-hand side of (9) coincides with the right-hand side of (10). This proves Theorem 4.5.

Denote U_0 = {u : u = 0 or u^{-1} ∈ V}.

Remark 2. Suppose conditions A–E hold, y ≠ 1, and there exists a function η^0(t) of finite variation such that

∫_{t≥0} (1 + ut)^{-1} dη^0(t) = Γ^0(u) for u ∈ U_0.

Then

l.i.m._{n→∞} (μ − Γ^0(C) x̄)^2 = R^opt.
The estimator μ̂ = Γ^0(C) x̄ of the vector μ may be called the “best in the limit.”

Example. In a special case, let μ_i^2 = μ^2/n, i = 1, ..., n, for all n in the system of coordinates where Σ is diagonal. By Lemma 4.5, we have b(∞) = 0 if y < 1 and b(∞) = B(1 − y^{-1}) if y > 1. The functionals R(Γ) and R̃(Γ) can be expressed in the form of integrals over the limit spectrum of the matrices C. We find that

R(Γ) = B − 2B ∫ Γ(u) dF(u) + ∫ q(u) Γ^2(u) dF(u),
where

q(u) = B + yu|s(u^{-1})|^{-2} if u^{-1} ∈ V;   q(u) = 0 if y < 1 and u = 0;   q(u) = B + (1 − y^{-1})^{-1} λ_0^{-1} if y > 1 and u = 0.

Assume that the function F_0(u) has the special form defined by the “ρ-model” of the limit spectrum of the matrices Σ. In this case, the function h(−t) equals
h(−t) = 2 / [1 + ρ + κ(1 − y)t + √((1 + ρ + κ(1 − y)t)^2 − 4ρ + 4κyt)],

where ρ < 1 and κ = σ^2(1 − ρ^2) are the model parameters. We find that |s(v)|^2 = (ρ + y(1 − ρ))(ρ + κyv)^{-1} and the function

Γ^0(u) = B/q(u) = α_0/(1 + u t_0),

where

α_0 = B[ρ + y(1 − ρ)] / (B[ρ + y(1 − ρ)] + κy^2),   t_0 = yρ / (B[ρ + y(1 − ρ)] + κy^2).
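A small helper makes the ρ-model weight Γ^0(u) = α_0/(1 + ut_0) easy to evaluate; the sketch below (illustrative code; the numerical parameter values are assumptions) computes α_0 and t_0 from B, y, ρ, and κ.

    def rho_model_weight_parameters(B, y, rho, kappa):
        # alpha_0 and t_0 from the expressions above
        denom = B * (rho + y * (1.0 - rho)) + kappa * y ** 2
        alpha0 = B * (rho + y * (1.0 - rho)) / denom
        t0 = y * rho / denom
        return alpha0, t0

    # example values; Gamma0(u) = alpha0/(1 + u*t0) is then applied to the eigenvalues of C
    alpha0, t0 = rho_model_weight_parameters(B=1.0, y=0.5, rho=0.3, kappa=0.91)
    print(alpha0, t0)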
Let η^0(t′) be a stepwise function with a jump α_0 at the point t′ = t_0. We calculate R(Γ^0) by passing back to the contour L and calculating the residue at the point z = −t_0. We obtain R(Γ^0) = B(1 − α_0 h(−t_0)). As y → 0, we have α_0 → 1, t_0 → 0, and R^0 → 0, thus showing the advantage of the standard estimator when the dimension increases more slowly than the sample size. If Σ = σ^2 I for all n, then ρ = 0, α_0 = B/(B + yΛ_1), Λ_1 = σ^2, t_0 = 0, Γ^0(C) = α_0 I, and R^0 = α_0 R_st, where R_st = yΛ_1. The corresponding optimum estimator has the shrinkage form μ̂ = α_0 x̄. As y → 1, the values α^0 → B/(B + κ), t^0 → ρ/(B + κ), Γ^0(C) → B[(B + κ)I + ρC]^{-1}, and

R^opt/R_st → θ (1 − ρ + ρθ/(1 + √(1 + Λ_1 ρ(1 − ρ)^2/(B + κ)))),
where θ = B/(B + κ). The maximum effect of the estimator μ̂ = Γ^0(C)x̄ is achieved for B ≪ κ, which corresponds to the case of small μ, when μ^2 ≪ (n^{-1} tr Σ^{-1})^{-1}, that is, when the components of μ are much smaller in absolute value than the standard deviation of the components of the sample mean vector, or for a wide spectrum of the matrices Σ. If y → ∞, then R_st → ∞, whereas α^0 → 0 and R^0 → B < ∞, while the quadratic risk of the standard estimator increases without bound.
Statistics to Approximate Limit Risk

Now we pass to the construction of estimators for the limit functions. Denote

h_n(z) = n^{-1} tr(I − zC)^{-1},   s_n(z) = 1 + nN^{-1}(h_n(z) − 1),
b_n(z) = μ^T (I − zC)^{-1} μ,   k_n(z) = b_n(z) + nN^{-1} (h_n(z) − 1)/(z s_n(z)).

From Lemma 4.5, it follows that the convergence in the square mean holds:

h_n(z) →² h(z),   s_n(z) →² s(z),   b_n(z) →² b(z),
n^{-1} tr C →² Λ_1,   n^{-1} tr C^2 →² Λ_2 + yΛ_1^2,
lim_{T→∞} l.i.m._{n→∞} T s_n(−T) = λ_0,   lim_{T→∞} l.i.m._{n→∞} k_n(−T) = k(∞).
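These statistics are easy to evaluate numerically; the sketch below (illustrative code, not part of the original text) computes h_n(z) and s_n(z) from C, and k_n(z) in a simulation where μ is known.

    import numpy as np

    def h_s_statistics(C, z, N):
        # h_n(z) = n^{-1} tr (I - zC)^{-1},  s_n(z) = 1 + (n/N)(h_n(z) - 1)
        n = C.shape[0]
        h = np.trace(np.linalg.inv(np.eye(n) - z * C)) / n   # works for complex z as well
        return h, 1.0 + (n / N) * (h - 1.0)

    def k_statistic(C, z, N, mu):
        # k_n(z) = b_n(z) + (n/N)(h_n(z) - 1)/(z s_n(z)); b_n(z) involves the unknown mu
        # and is therefore computable only in a simulation study
        n = C.shape[0]
        h, s = h_s_statistics(C, z, N)
        b = mu @ np.linalg.inv(np.eye(n) - z * C) @ mu
        return b + (n / N) * (h - 1.0) / (z * s)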
The asymptotically extremal estimator μ̂ = Γ^0(C)x̄ involves the function Im b(u^{-1})/Im k(u^{-1}), which should be estimated from observations. But the natural estimator Γ^0_n(u) = Im b_n(u^{-1})/Im k_n(u^{-1}) is singular for u > 0 and may not approach Γ^0(u) as n → ∞. We introduce a smoothing by considering b_n(z) and k_n(z) for complex z with Im(z) > 0. In applications, the character of the smoothing may
be essential. To reach a uniform smoothing, we pass to functions of the inverse arguments

g_n(z) = z^{-1} h_n(z^{-1}) = n^{-1} tr(zI − C)^{-1},
a_n(z) = z^{-1} b_n(z^{-1}) = μ^T (zI − C)^{-1} μ,
l_n(z) = z^{-1} k_n(z^{-1}) = x̄^T (zI − C)^{-1} x̄.

Remark 3. Under Assumptions A–E, the functions g_n(z), a_n(z), and l_n(z) converge in the square mean uniformly with respect to z ∈ G to the limits g(z), a(z), l(z), respectively, such that

g(z) = ∫ (z − s(z^{-1})u)^{-1} dF_0(u),   a(z) = ∫ (z − s(z^{-1})u)^{-1} dG(u),
l(z) = a(z) + y(z g(z) − 1)/s(z^{-1}).   (11)
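The inverse-argument functions are equally direct to compute; the following sketch (illustrative code; the choice of the complex argument is an assumption) evaluates the two observable ones, g_n(z) and l_n(z).

    import numpy as np

    def inverse_argument_statistics(C, x_bar, z):
        # g_n(z) = n^{-1} tr (zI - C)^{-1},  l_n(z) = x_bar^T (zI - C)^{-1} x_bar
        n = C.shape[0]
        R = np.linalg.inv(z * np.eye(n) - C)
        return np.trace(R) / n, x_bar @ R @ x_bar

    # usage (assuming C and x_bar were computed from a sample):
    # g, l = inverse_argument_statistics(C, x_bar, 1.5 - 0.05j)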
Remark 4. Under Assumptions A–E, for y ≠ 1, the functions g(z), a(z), and l(z) are regular with singularities only at the point z = 0 and on the segment [0, u_2]. The functions ã(z) = a(z) − b(∞)/z and l̃(z) = l(z) − k(∞)/z are bounded. As z → u > u_2, we have Im g(z) → 0, Im a(z) → 0, and Im l(z) → 0.

Now we express (6) in terms of g(z), s(z), and l(z).

Lemma 4.8. If conditions A–E hold and y ≠ 1, then, as
ε → +0, the function R(Γ) defined by (6) equals 2 B− π
∞ 0
1 Im a(u − iε)Γ(u) du + π ∼
∞
∼
Im l (u − iε)Γ2 (u) du + O(ε). 0
(12)
Proof. Functions in the integrands in (6) are regular and have no singularities for Re z ≥ σ > 0 outside the beam z ≥ σ. As ∼
|z| → ∞, there exists a real T > 0 such that b(z) has no singularities also for |z| > T . Let us deform the contour (σ − i∞,
σ + i∞) in the integrals (6) into a closed contour L1 surrounding an ε-neighborhood of the segment [σ, T ]. Substitute w = z −1 . We find that ∼ ∼ 1 1 ∼ R(Γ) = B − a(w)Γ(w) dw + l (w)Γ2 (w) dw, πi 2πi L2
L2
where L2 is surrounding the segment [w0 , T ], where w0 = T −1 and t = σ −1 . If Re w ≥ 0, then the analytical function Γ(w) is ∼
∼
bounded by the inequality |Γ(w)| ≤ 1, and a(w), b(z), and l (w) tend to a constant as w → u, where u = 1/Re z > 0. Since the ∼
∼
functions a(w) and l (w) are analytical, we can deform the contour ∼
L2 into the contour L2 = (0−iε, 0+iε, T +iε, T −iε, 0−iε), where T > T0 > 0 is sufficiently large. Contributions of integrals along vertical segments of length 2ε are O(ε) as ε → +0. Real parts of the integrands on segments [iε, τ + iε] and [τ − iε, −iε] cancel, while the imaginary ones double. We obtain ∼
2 R(Γ) = B − π
T
1 Im[ a(w)Γ(w)] du + π ∼
0
T
∼
Im[ l (w)Γ2 (w)] du + O(ε), 0
where w = u − iε. Substitute Γ(u − iε) = Γ(u) + iε
t 1 dη(t). 1 + ut 1 + ut − iεt
Comparing with (12), we see that it is left to prove that the ∼
difference Γ(u − iε) − Γ(u) gives a contribution O(ε) into R(Γ). Consider the integral t 1 ∼ dw, (13) a(w) 1 + wt 1 + (w + iε)t L3
where the integration contour is L3 = (0 − iε, ∞ − iε). If Im w < ε, then the integrand has no singularities and is O(|w|−2 ) as |w| → ∞. It means that we can replace the contour L3 by the contour
L4 = (0 − iε, 0 − i∞). The function a(w) is uniformly bounded on L4 , and it follows that the integral (13) is uniformly bounded. Analo∼
gously, the integral with l (w) is also bounded. It follows that we can ∼
replace Γ(u − iε) in R(Γ) by Γ(u) with the accuracy to O(ε). We have proved the statement of Lemma 4.8.
Statistics to Approximate the Extremum Solution

Let us construct an estimator of the extremal limit function Γ^0(u). Let ε > 0. Denote

Γ^0_ε(u) = Im a(u − iε)/Im l(u − iε) ≤ 1 if u ≥ 0,   Γ^0_ε(u) = 0 if u < 0;

R^opt_ε = B − (1/π) ∫_0^∞ (Im a(w))^2/Im l(w) du − d,

where w = u − iε, d = 0 if y < 1, and d = b^2(∞)/k(∞) if y > 1. From (12), we obtain that

R(Γ) = R^opt_ε + (1/π) ∫_0^∞ Im l(w) (Γ(u) − Γ^0_ε(u))^2 du + k(∞)(Γ(0) − Γ^0_ε(0))^2 + O(ε).   (14)

We consider the smoothed estimator μ̂ = Γ̃^0_ε(C)x̄ defined by the scalar function

Γ̃^0_ε(u) = ∫_{−∞}^{∞} Γ^0_ε(u′) (1/π) ε/((u − u′)^2 + ε^2) du′.

Lemma 4.9. If conditions A–E hold and y ≠ 1, then

(μ − Γ̃^0_ε(C)x̄)^2 < R^opt_ε + O(ε) + ξ_n(ε),   (15)
where Eξn2 (ε) → 0 as n → ∞ for fixed ε > 0, and O(ε) does not depend on n. Proof. We pass to the coordinate system where the matrix C ¯ therein. We is diagonal; let μi and x ¯i be components of μ and x find that ∼
x)2 = μ 2 − 2 ( μ − Γ0ε (C)¯
∼
μ2i Γ0ε (λi ) +
i
∼
x ¯2i Γ02 ε (λi ) − 2ζn , (16)
i ∼
x−μ ), and Eζn2 = O(N −1 ). T Γ0ε (C)(¯ where λj = λj (C), ζn = μ Note that
μ2j
j
∼
Γ0ε (λj )
1 = π
∞ Im an (u − iε) Γ0ε (u)du,
(17)
−∞
where λj = λj (C). For a fixed ε > 0, an (w) → a(w) as n → ∞ uniformly on [0, T ], where T = 1/ε. The contribution of u ∈ [0, T ] to (16) is not larger than j
≤
λj >T /2
T − λj λj 1 1 μ2j 1 − arctan − arctan( ) ≤ π ε π ε
μ2j +
2 ε2 μ 2 2 2 T 2 ε2 μ ε2 μ2 μj λj + ≤ = μ Cμ + . (18) 2π T j 2π T 2π
From Lemma 4.5, it follows that E( μT C μ)2 is bounded and the right-hand side of (18) can be expressed in the form O(ε) + ξn (ε), where O(ε) is independent on n and Eξn2 (ε) → 0 as n → ∞ for fixed ε > 0. Thus, the second term of the right-hand side of (16) equals ∞ 2 − Im a(u − iε) Γ0ε (u)du + O(ε) + ξn (ε). π 0
We notice that the third term of (16) is ∞ ∼ ε/π 2 02 2 ¯ j Γε (λj ) ≤ ¯j Γ02 x x du = ε (u) (u − λj )2 + ε2 j
j
1 = π
−∞
+∞ Γ02 ε (u)Im ln (w) du,
(19)
−∞
where the second superscript 2 denotes the square, w = u − iε, 2 ε > 0, and 0 ≤ Γ0ε (u) ≤ 1. Note that ln (w) → l(w) uniformly for −
u ∈ [0, T ], and the contribution of u ∈ [0, T ] is not greater than ¯T C x ¯ 2 /(2π). But we have E¯ ¯ + ε2 x ¯ )2 2T −1 x x2 = O(1) and E(¯ xT C x = O(1). It follows that the third term of the right-hand side of (16) is not greater than 1 π
T 2 Γ02 ε (u)Im l(u − iε) du + O(ε ) + ξn (ε), 0 2
where O(ε) is finite as ε → +0 and ξn (ε) → 0 as n → ∞ for any ε > 0. We substitute μ 2 = B + o(1), Γ0ε (u) = Im a(w)/Im l(w), where w = u − iε. Gathering summands we obtain
(μ − Γ̃^0_ε(C)x̄)^2 ≤ B − (1/π) ∫_0^T (Im a(w))^2/Im l(w) du + O(ε) + ξ_n(ε),   (20)

where w = u − iε. We note that Im a(w) = O(ε)|w|^{-2} as |w| → ∞, and, consequently, the integral in (20) from 0 to T can be replaced by the integral from 0 to infinity with accuracy O(ε). The statement of Lemma 4.9 follows.

Now, we consider the statistics

Γ^0_{nε}(u) = max(0, 1 − nN^{-1} Im(w g_n(w))/|s_n(w^{-1})|^2),   where w = u − iε,

and

Γ̃^0_{nε}(u) = ∫_{−∞}^{∞} Γ^0_{nε}(u′) (ε/π)/((u − u′)^2 + ε^2) du′.
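The statistic Γ^0_{nε} and the corresponding estimator of μ can be coded directly from these formulas; the sketch below (illustrative code; the Cauchy-kernel smoothing over u is omitted and ε is an arbitrary example value) evaluates Γ^0_{nε} at the eigenvalues of C and applies the weights to x̄.

    import numpy as np

    def gamma0_n_eps(u, C, N, eps=0.1):
        # Gamma^0_{n,eps}(u) = max(0, 1 - (n/N) Im(w g_n(w)) / |s_n(w^{-1})|^2), w = u - i*eps
        n = C.shape[0]
        w = u - 1j * eps
        g = np.trace(np.linalg.inv(w * np.eye(n) - C)) / n       # g_n(w)
        h = np.trace(np.linalg.inv(np.eye(n) - (1.0 / w) * C)) / n
        s = 1.0 + (n / N) * (h - 1.0)                            # s_n(w^{-1})
        return max(0.0, 1.0 - (n / N) * np.imag(w * g) / abs(s) ** 2)

    def shrunken_mean(x_bar, C, N, eps=0.1):
        lam, V = np.linalg.eigh(C)
        weights = np.array([gamma0_n_eps(l, C, N, eps) for l in lam])
        return V @ (weights * (V.T @ x_bar))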
Theorem 4.6. Suppose conditions A–E hold and y ≠ 1. Then,

1. for fixed ε > 0, as n → ∞, we have Γ̃^0_{nε}(u) →² Γ̃^0_ε(u) uniformly on any segment;

2. we have

(μ − Γ̃^0_{nε}(C)x̄)^2 < inf_Γ (μ − Γ(C)x̄)^2 + O(ε) + ξ_n(ε),   (21)

where Γ = Γ(·) runs over the estimator class K, the quantity O(ε) does not depend on n, and Eξ_n^2(ε) → 0 as n → ∞ for any ε > 0.

Proof. For any fixed ε > 0, we have the uniform convergence in the square mean Γ̃^0_{nε}(u) →² Γ̃^0_ε(u) on any segment by the definition of these functions and Lemma 4.5. Denote

ρ_n = (μ − Γ̃^0_{nε}(C)x̄)^2 − (μ − Γ̃^0_ε(C)x̄)^2,   Δ(C) = Γ̃^0_{nε}(C) − Γ̃^0_ε(C).
Let us prove that lim 2
x)2 → 0 E( μT Δ¯ of eigenvalues λi of C not exceeding T for some T > 0: let Δ(u) = Δ1 (u) + Δ2 (u), where Δ2 (u) = Δ(u) for |u| > T and Δ2 (u) = 0 for |u| ≤ T . Here the scalar argument u stands for eigenvalues of 2 C. By virtue of the first theorem statement, Δ1 (u) → 0 as n → ∞ uniformly on the segment [0, T ]. The contribution of |u| > T to E( μT Δ(C)¯ x)2 is not greater than
¯ ) = O(T −1 ). μ2j Ex2j ind(λj > T ) ≤ T −1 E(¯ xT C x
i
Let T = 1/ε. Then, Eρ2n = O(ε) as n → ∞. In view of Lemma 4.9, ∼
x)2 < Rε0 + O(ε) + ξn (ε), ( μ − Γ0nε (C)¯
where the estimate O(ε) is uniform in n and Eξ_n^2 → 0 as n → ∞ for fixed ε > 0. It follows that R^opt_ε ≤ R(Γ) + O(ε). This completes the proof of Theorem 4.6.

Denote μ̂^0_ε = Γ̃^0_{nε}(C)x̄. We conclude that, in the sequence of problems {P_n} of estimation of the n-dimensional parameter μ = Ex for populations restricted by conditions A–E, the family of estimators {μ̂^0_ε} is asymptotically ε-dominating over the class K of estimators μ̂ of μ in the following sense: for any ε > 0 and δ > 0, there exists an n_0 such that for any n > n_0, any μ, and any estimator Γ(C)x̄, the inequality

(μ − μ̂^0_ε)^2 < (μ − μ̂)^2 + ε   (22)

holds with probability 1 − δ. Under conditions A–E, the estimator μ̂^0_ε provides quadratic losses asymptotically not exceeding R^opt ≤ y and proves to be ε-unimprovable as ε → 0.
4.3. MULTIPARAMETRIC SAMPLE LINEAR REGRESSION
The standard procedure of regression analysis starts from the rather artificial assumption that the arguments of the regression are fixed, while the response is random due to inaccuracies of observation. In a more consistent statistical setting, all empirical data should be considered random and treated as a many-dimensional sample from a population. Thus, we come to the “statistical regression,” or, more specifically, to “sample regression,” which is actually a form of regularity hidden amidst random distortion. The quality of the regression may be measured by the quadratic risk of prognoses calculated over the population. The usual least squares solution, which minimizes the empirical risk measured by the residual sum of squares (RSS), has a quadratic risk function that is not minimal (the difference is substantial for large n). In Section 3.3, it was proved that, by using the Normal Evaluation Principle, standard quality functions of regularized multivariate procedures may be approximately evaluated both in terms of parameters and in terms of statistics. This fact was used in [68], [69], and [71] to search for regressions asymptotically unimprovable in a wide class. In this section, we develop this approach for finite dimension and finite sample size and present solutions unimprovable up to a small inaccuracy that is estimated from above in an explicit form.
We consider an (n + 1)-dimensional population S in which the observations are pairs (x, y), where x = (x_1, ..., x_n) is a vector of predictors and y is a scalar response. Define the centered values x° = x − Ex and y° = y − Ey. We restrict the population by the single requirement that the fourth moments of all variables exist and that the moment M_8 = E(x°^2/n)^2 y°^4 exists (here and in the following, squares of vectors denote the squares of their lengths). Assume, additionally, that Ex°^2 > 0 (the nondegenerate case). Denote
M4 = sup E(eT x)4 > 0, |e|=1
M = max(M4 , o
o
√
and γ = sup var(xT Ω x/n)/M, Ω=1
o4
M8 , Ey ), (1)
where (and in the following) e is a nonrandom vector of unit length, and Ω are symmetric, positive, semidefinite matrices with unit spectral norm. We consider the linear regression y = kT x + l + Δ, where k ∈ Rn and l ∈ R1 . The problem is to minimize the quadratic risk EΔ2 by the best choice of k and l that should be calculated over a sample X = {(xm , ym )}, m = 1, . . . , N , from S. We denote λ = n/N , a = Ex, a0 = Ey, Σ = cov(x, x), σ 2 = var y, and g = cov(x, y). If σ > 0 and the matrix Σ is nondegenerate, then the a priori coefficients k = Σ−1 g and l = a0 − kT g provide minimum of EΔ2 , which equals σ 2 − gT Σ−1 g = σ 2 (1 − r2 ), where r is the multiple correlation coefficient. We start from the statistics ¯ = N −1 x
N m=1
xm ,
S = N −1
m=1 N m=1
C = N −1
N m=1
N
y¯ = N −1 xm xTm ,
¯ )(xm − x ¯ )T , (xm − x
ym ,
σ 2 = N −1
0 = N −1 g = N −1 g
m=1 N m=1
N m=1
(ym − y¯)2 ,
xm ym ,
¯ )(ym − y¯). (xm − x (2)
= C −1 g The standard minimum square “plug-in” procedure with k T ¯ has known demerits: this procedure does not and l = y¯ − k x guarantee the minimum risk, is degenerate for multicollinear data (for degenerate matrix C), and is not uniform with respect to the dimension. T x + The quadratic risk of the regression y = k l + Δ, where k and l are calculated over a sample with the “plug-in” constant T x ¯ , is given by term l = y¯ − k T x ¯ )2 = (1 + 1/N )R, EΔ2 = R + E(¯ y−k where T g + k T Σk). R = E(σ 2 − 2k def
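The contrast between the plug-in solution and a regularized member of the class considered below can be illustrated by a short sketch (illustrative code, not from the original text; the value of the regularization parameter t is an assumption): with Γ(C) = t(I + tC)^{-1}, the coefficient vector is k̂ = Γ(C)ĝ, which approaches C^{-1}ĝ as t → ∞.

    import numpy as np

    def regression_coefficients(X, yv, t=None):
        # plug-in least squares (t=None) versus ridge-type k = t (I + tC)^{-1} g_hat;
        # the constant term is l = y_bar - k^T x_bar in both cases
        N, n = X.shape
        x_bar, y_bar = X.mean(axis=0), yv.mean()
        Xc, yc = X - x_bar, yv - y_bar
        C = Xc.T @ Xc / N                        # sample covariance matrix of x
        g_hat = Xc.T @ yc / N                    # sample covariance of x and y
        if t is None:
            k = np.linalg.solve(C, g_hat)        # standard "plug-in" solution C^{-1} g_hat
        else:
            k = t * np.linalg.solve(np.eye(n) + t * C, g_hat)
        return k, y_bar - k @ x_bar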
(3)
MULTIPARAMETRIC SAMPLE LINEAR REGRESSION
169
Let us calculate and minimize R. We consider the following class of generalized regularized regressions. Let H0 = (I + tS)−1 and H = (I + tC)−1 be the resolvents of the matrices S and C, (everywhere below) in the respectively. We choose the coefficient k = Γ class K of statistics of the form k g, where Γ = Γ(C) =
tH(t) dη(t)
and η(t) are functions whose variation on [0, ∞) is at most one and that has sufficiently many moments def
ηk =
tk |dη(t)|,
k = 1, 2, . . .
The function η(t) presenting a unit jump corresponds to the “ridge regression” (see Introduction). The regression with the coefficients ∈ K may be called a generalized ridge regression. The quantity k (2) depends on η(t), R1 = R1 (η), and R(η) = σ − 2E 2
T
tg H(t) g dη(t) +
D(t, u) dη(t) dη(u), (4)
where def
T H(t)ΣH(u) D(t, u) = tu g g. Since all arguments of R(η) are invariant with respect to the translation of the origin, we assume (everywhere in the following) that a = Ex = 0 and a0 = Ey = 0. Our purpose is to single out leading parts of these functionals and to obtain upper bounds for the remainder terms up to absolute constants. To simplify notations of the remainder terms, we define τ=
√ M t,
ε=
γ + 1/N ,
clm = a max(1, τ l ) max(1, λm ),
where a, l, and m are non-negative numbers (for brevity we omit the argument t in cml = cml (t)). In view of (1), we can readily
see that E(x2 )2 ≤ M,
E(¯ x2 )2 ≤ M λ2 ,
g2 ≤ M,
Σ 2 ≤ M,
E g 2 ≤ E g02 ≤ M (1 + λ).
As in previous sections, we begin by studying functions of more simple covariance matrices S and then pass to functions of C. Some Spectral Functions of Sample Covariance Matrices Our investigation will be based on results of the spectral theory of large sample covariance matrices developed in Chapter 3. Define H0 = H0 (t) = (I + tS)−1 , h0 (t) = n−1 tr H0 (t), h0 (t), s0 = s0 (t) = 1 − λ + λh0 (t), h0 (t) = E x, V = V (t) = eT H0 (t)¯
¯ T H0 (t)¯ Φ = Φ(t) = x x,
and H = H(t) = (I + tC)−1 , h(t) = n−1 tr H(t), h(t) = E h(t), s = s(t) = 1 − λ + λh(t). We write out some assertions from Section 3.1 in the form of the following lemma. Lemma 4.10. If t ≥ 0, then 1. s0 = E(1 − tψ1 ) ≥ (1 + τ λ)−1 , def
var(tψ1 ) ≤ δ = 2τ 2 λ2 (γ + τ 2 /N ) ≤ c42 ε; 2. EH0 = (I + ts0 Σ)−1 + Ω0 , where Ω0 ≤ c31 ε, var(eT H0 e) ≤ τ 2 /N ; 3. t(EV )2 ≤ c52 ε2 , var(tV ) ≤ c20 /N ; 4. tΦ ≤ 1, tEΦ = 1 − s0 + o, 5. EH − EH0 ≤ c74 ε , 2
o2 ≤ c52 ε2 ,
var(tΦ) ≤ c20 /N ;
|s − s0 | ≤ c11 /N ;
6. EH = (I + tsΣ)−1 + Ω, where Ω 2 ≤ c63 ε2 .
Functionals of Random Gram Matrices We consider random Gram matrices of the form S defined in (2) that can be used as sample covariance matrices when the expectation vectors are known a priori and may be set equal to 0. Let us use the method of the alternative elimination of independent sample vectors. Eliminating one of the sample vectors, say, the vector x1 , we denote 01 = g 0 − x1 y1 /N, g
¯1 = x ¯ 1 − x1 /N, x
S 1 = S − x1 xT1 /N · H01 = H01 (t) = (I + tS 1 )−1 . These values do not depend on x1 and y1 . The identity holds H0 = H01 − tH01 x1 xT1 H0 /N.
(5)
Also denote v1 = v1 (t) = eT H01 (t)x1 ,
u1 = u1 (t) = eT H0 (t)x1 ,
ϕ1 = ϕ1 (t) = xT1 H01 (t) x1 /N,
ψ1 = ψ1 (t) = xT1 H0 (t) x1 /N.
We have the identities H0 x1 = (1−tψ1 )H01 x1 . (6) Obviously, 0 ≤ tψ1 ≤ 1. It can be readily seen that u1 = (1−tψ1 )v1 ,
(1+tϕ1 )(1−tψ1 ) = 1,
1 − s0 = tEψ1 = tEN −1 tr(H0 S).
(7)
From (1) it follows that H0 ≤ H01 ≤ 1,
Eu41 ≤ Ev14 ≤ M.
(8)
Remark 1. E g02 ≤ M (1 + λ),
E| g0 |4 ≤ 2M 2 (1 + λ)2 ,
E| g01 |2 ≤ M (1 + λ),
E| g01 |4 ≤ 2M 2 (1 + λ)2 .
(9)
01 does not depend on x1 and y1 , and Indeed, the value g 0 = Ey1 xT1 g 01 + Ey12 x21 /N. E g02 = Ey1 xT1 g Here, the first summand equals g2 (1 − N −1 ) ≤ M . The second one is not greater than (Ey14 (x21 /N )2 )1/2 ≤ M λ. Further, 2 xT1 g 0 = Ey1 g 02 xT1 g 01 + Ey12 g 02 x21 /N. E( g02 )2 = Ey1 g Using the Schwarz inequality, we find 01 )2 + 2Ey14 (x21 )2 /N 2 . E( g02 )2 ≤ 2E(y1 xT1 g 01 , we have Ey12 (xT1 e)2 ≤ M , In the first summand here, for fixed g 01 /| where e = g g01 |. Therefore, the first summand is not greater 2 than 2M E g0 ≤ 2M 2 (1 + λ). By (1) the second summand is not greater than 2M 2 λ2 . The second inequality in (9) follows. The same arguments may be used to establish the second pair of inequalities (9). Lemma 4.11. If t ≥ 0, then 0 | ≤ M 1/4 c32 ε, |tE¯ xT H0 (t) g
¯ T H0 (t) g 0 ) ≤ var(t x
√ M c42 /N.
∼1
0 = g0 + x1 y1 /N . Proof. Eliminating (x1 , y1 ), we substitute g 11 = 0, Ey1 = 0. It follows: We have ExT1 H01 g 0 = tExT1 H0 g 0 = tExT1 H0 g 01 + Etψ1 y1 = tE¯ xT H0 g 01 + E(1 − s0 − Δ1 )y1 = = tE(1 − tψ1 )xT1 H01 g 01 − EΔ1 y1 , = tEΔ1 xT1 H01 g where Δ1 = 1 − tψ1 − s0 . By Statement 1 of Lemma 4.10, EΔ21 ≤ δ ≤ c42 ε2 . Applying the Schwarz inequality, (1), and (9), we obtain that 0 )2 ≤ [t2 E(xT1 H01 g 01 )2 + Ey12 ] c42 ε2 ≤ (tE¯ xT H0 g √ √ g01 )2 + 1] c42 ε2 ≤ M c63 ε2 . ≤ M [t2 E(
The first statement follows. To estimate the variance, we use Lemma 3.2. Let us eliminate ∼ 0 ¯ − x1 /N . Then, x ¯ T H0 g the variables x1 and y1 . Denote x = x equals ∼
∼
01 − t xT H01 x1 xT1 H0 g 01 /N + xT1 H0 g 01 /N + y1 x ¯ T H0 x1 /N. xT H01 g The first term in the right-hand side does not depend on x1 and y1 . In view of the identical dependence on sample vectors, we conclude 0 ) is not greater than that t2 var(¯ xT H0 g ∼
01 )2 + E(txT1 H0 g 01 )2 + E(ty1 x ¯ T H0 x1 )2 ]/N. 3[E(t2 xT1 H01 x1 xT1 H0 g In view of (6), this inequality remains valid if H0 is replaced by H01 . After this replacement, we use (1). The square of the sum of the first two summand in the bracket is not greater than ∼
01 )4 E (t2 (xT H01 x1 )2 + 1)2 ≤ at4 E(xT1 H01 g √ ∼ ∼ ≤ aM t4 E| g01 |4 E( M t2 xT1 (H01 )2 x1 + 1), where a is a numerical coefficient. From the definition, one can see ∼ ∼ that tΦ ≤ 1 and txT1 (H01 )2 x1 ≤ 1. The square of the sum of√ the first two summands in the bracket is not greater than aM 3 t4 ( M t ∼ ¯ T H0 x1 = xT H0 x1 + 1) (1 + λ)2 ≤ M c52 . In the third summand, x + ψ1 . By (1) and (6), the square of this summand is not greater than ∼
∼
2(t4 Ey14 E(xT H01 x1 )4 + M ) ≤ 2M (M t4 E|x|4 + 1) ≤ M c42 . We √ conclude that the variance in Statement 2 is not greater than M c42 /N . The proof of Lemma 4.11 is complete. Lemma 4.12. 0 = ts0 (t)EeTH0 (t)g + o, tEeTH0 (t) g where
|o| ≤ c31 ε;
0 ) ≤ c41 /N. var(teT H0 (t) g
(10)
Proof. Denote Δ1 = tψ1 − tEψ1 . Using (6), we find 0 = tEy1 eT H0 x1 = tEy1 (1 − tψ1 )eT H01 x1 = tEeT H0 g = ts0 Ey1 eT H01 x1 − tEy1 Δ1 eT H01 x1 = = ts0 EeT H01 g − tEv1 y1 Δ1 = = ts0 EeT H0 g + t2 s0 Eu1 xT1 H01 g/N − tEv1 y1 Δ1 . The last two terms present the remainder term o in the lemma formulation. We estimate it by the Schwarz inequality: √ |o| ≤ t2 (Eu21 E(xT1 H0 g)2 )1/2 /N + t δ(Ev12 y12 )1/2 ≤ √ ≤ τ 2 /N + t δM ≤ c31 ε. The first statement in (10) is proved. Now we estimate the variance eliminating independent vari0 . Let f = f 1 + Δ1 , where f 1 does not ables. Denote f = teT H0 g depend on x1 and y1 . We have 01 + tu1 y1 /N + t2 u1 (xT1 H01 g 01 )/N. f = eT H01 g By Lemma 4.2, we obtain var f ≤ N E Δ21 . Therefore, 01 )2 + t2 Eu21 y12 ] ≤ var f ≤ 2N −1 [t4 Eu21 (xT1 H01 g ≤ 2N −1 (Eu41 )1/2 M 1/2 t2 [t2 (E| g01 |4 )1/2 + 1] ≤ ≤ 2N −1 τ 2 (2τ 2 (1 + λ) + 1) ≤ c41 /N. Lemma 4.12 is proved. Lemma 4.13. If t ≥ 0, then 0 = σ 2 (1 − s0 ) + ts0 EgT H0 g 0 + o1 = tE g0T H0 g = σ 2 (1 − s0 ) + ts20 EgT H0 g + o2 , where |o1 | ≤
√
M c32 ε,
|o2 | ≤
√ M c32 ε;
0 ) ≤ M c42 /N. t2 var( g0T H0 g
0 = tEy1 xT1 H0 g 0 equals Proof. Using (6), we find that tE g0 H0 g 01 = tEy1 xT1 H0 x1 /N + ty1 ExT1 H0 g 01 . = tEy12 ψ1 + ty1 E(1 − tψ1 )xT1 H01 g 0 equals Substituting tψ1 = 1 − s0 − Δ1 , we find that Et g0T H0 g 01 + tEΔ1 y1 xT1 H01 g 01 . (11) σ 2 (1 − s0 ) − EΔ1 y12 + ts0 Ey1 xT1 H01 g The square of the second term in the right-hand side is not greater than M δ ≤ M c42 ε2 , where δ is defined in Lemma 4.10. Using the Schwarz inequality, we obtain that the square of the fourth term in (11) is not greater than 01 )2 ≤ δt2 Ey12 (xT1 H01 g ≤ δM t2 (E| g01 |4 )1/2 ≤ 2δM τ 2 (1 + λ) ≤ M c63 ε2 . In the third term of the right-hand side of (11), we have Ey1 xT1 = gT , and this term equals 01 = ts0 EgT H0 g 01 + t2 s0 EgT H0 x1 xT1 H01 g 01 /N = ts0 EgT H01 g 0 − ts0 Ey1 gT H0 x1 /N + t2 s0 EgT H0 x1 xT1 H01 g 01 /N. = ts0 EgT H0 g Here in the right-hand side, the first summand is that included in the lemma formulation. The square of the sum of the second and third summands on the right-hand side is not greater than 01 )2 ]/N 2 ≤ 2t2 E(gT H0 x1 )2 [Ey12 + t2 E(xT1 H01 g √ g01 )2 )/N 2 ≤ ≤ 2t2 g2 Eu21 M (1 + t2 E( ≤ 2M τ 2 [1 + 2τ 2 (1 + λ)]/N 2 ≤ M c41 /N 2 . √ We conclude that o21 ≤ M c63 ε2 and |o1 | ≤ M c32 ε. In view of the first statement of Lemma 4.12, we have = ts0 EgT H0 g + o, tEgT H0 g where |o| ≤ |g|c31 ε. Consequently, |o2 | ≤
√ M c32 ε.
0 . Using Lemma 0T H0 g Now we estimate the variance of f = t g 4.12 and taking into account the identical dependence on sample vectors, we have var f ≤ N Δ21 , where Δ1 = f − f 1 , and f 1 does not depend on x1 and y1 . We rewrite f in the form 01 + 2t f = t g01T H01 g g01T H0 x1 y1 /N + 0T H01 x1 xT1 H0 g 0 /N. + txT1 H0 x1 y12 /N 2 − t2 g Here the first summand does not depend on (x1 , y1 ). We find that var f is not greater than 01T H0 x1 )2 + t2 Eψ12 y14 + t4 E( 01 )2 /N, a t2 E(y1 g g01T H01 x1 )2 (xT1 H0 g where a is a numerical constant. We apply the Schwarz inequality. The square of the first summand in the square bracket is not greater than M t4 E| g01 |4 Ey14 ≤ 2M 4 t4 (1 + λ)2 ≤ M 2 c42 . It follows that the first summand is not greater than M c21 in absolute value. The second summand is not greater than M t2 by (1). Using (6) and (1), we obtain that the third summand is not greater than t4 E( g0T H01 x1 )4 /N 2 ≤ M t4 E| g0 |4 ≤ 2M 3 t4 (1 + λ)2 ≤ M c42 . Consequently, varf ≤ M c42 /N . This completes the proof of Lemma 4.13. Lemma 4.14. If t ≥ u ≥ 0, then 0 = tu E 01 + o, tu E g0T H0 (t)ΣH0 (u) g g01T H01 (t)ΣH01 (u) g where |o| ≤
√
M c31 /N ; the inequality holds def
0 ) ≤ δ = M 2 t2 c42 /N. t2 u2 var( g0T H0 (t)ΣH0 (u) g
0 = g 01 + x1 y1 /N , we obtain Proof. Substituting g def
0T H0 (t)ΣH0 (u) g 0 = tu g 01T H0 (t)ΣH0 (u) g 01 + f = g 01T H0 (t)ΣH0 (u)x1 y1 /N + tuy1 xT1 H0 (t)ΣH0 (u) g01 /N + + tu g + tuy12 xT1 H0 (t)ΣH0 (u)x1 /N 2 .
(12)
To prove the lemma statement, it suffices to show that the three last summands in (12) are small and the difference 01 − tu g 01T H0 (t)ΣH0 (u) g 01 01T H01 (t)ΣH01 (u) g d = tu g is small. In the second summand in (12), we use the Schwarz inequality. First, we single out the dependence on y1 . We use (6) to replace H0 by H01 . It follows that the expected square of the second summand in the right-hand side of (12) that is not greater than t2 u2
√
M E| g01T H01 (t)ΣH0 (u)x1 |2 /N 2 ≤ M 3 t4 (1+λ)/N 2 M c41 /N 2 .
The third summand in (12) is estimated likewise. To estimate the quantity d, we present in the form 01 /N + g01T H01 (t)x1 xT1 H0 (t)ΣH01 (u) g d = t2 u 01T H01 (u)x1 xT1 H0 (u)ΣH01 (t) g 01 /N + + tu2 g 01T H01 (t)x1 xT1 H0 (t)ΣH0 (u)x1 xT1 H01 g 01 /N 2 . + t2 u2 g Let us estimate Ed2 . Note that √
tu xT1 H0 (t)ΣH0 (u)x1 ≤
tuΦ(t)Φ(u) ≤ 1.
It follows that g01T H01 (t)x1 |2 E|xT1 H0 (t)ΣH01 (u) g01 |2 /N 2 + Ed2 /3 ≤ t4 u2 E| 01 |2 /N 2 + g01T H01 (u)x1 |2 E|xT1 H0 (u)ΣH01 (t) g + t2 u4 E| g01T H01 (t)x1 |2 | g01T H01 (u)x1 |2 /N 2 . + M t3 u3 E|
(13)
Substituting H01 for H0 , using the relation H0 x1 = (1 − tψ1 )H01 x1 and (1), we obtain g01 |2 )2 /N 2 + M 2 t6 E| g01 |4 /N 2 ≤ Ed2 /3 ≤ 2M 2 t6 (E| ≤ 2M 4 t6 (1 + λ)2 /N 2 ≤ M c62 /N 2 . √ It follows that E|d| ≤ M c31 /N. The expected square of the second summand is not greater than √ g01T H01 (t)x1 )2 (xT1 H0 (t)ΣH0 (u)x1 )2 /N 4 . + 2t4 u2 M E( We use (6) to replace H0 by H01 . It follows that the expected square of the second summand that is not greater than g01 |2 /N 2 ≤ 4M 2 t2 τ 2 (1 + λ)/N 2 ≤ M 2 t2 c21 /N 2 . 4t4 M 2 E| The contribution of the √ √ fourth summand in (12) to f is not greater than M tEy12 /N ≤ M c10 /N . Thus, the right-hand side of (12) presents the leading term in the lemma formulation with the accuracy up to c31 /N . The first statement of the lemma is proved. Further, we estimate varf similarly using Lemma 3.2. We find that varf ≤ N EΔ21 , where f − Δ1 , does not depend on x1 and y1 . The value Δ1 equals the sum of last three terms in (12) minus d. The expectation of squares of the second and third terms in (12) is not greater than M 2 t2 c21 /N 2 . By (1), the square of the fourth term in (12) contributes no more than M t2 Ey14 /N 2 ≤ M 2 t2 /N 2 . We have the inequality Ed2 ≤ 2M 2 t2 c42 /N 2 . It follows that EΔ21 ≤ M 2 t2 c42 /N 2 . This is the second lemma statement. The proof of Lemma 4.14 is complete. Lemma 4.15. If t ≥ u ≥ 0, then 01 + tuE g0T H0 (t)SH0 (u) g0 = tus0 (t)s0 (u)E g01T H01 (t)ΣH01 (u) g g01T H01 (t)g + (1 − s0 (t))us0 (u)E g01T H01 (u)g + + (1 − s0 (u))ts0 (t) E + σ 2 (1 − s0 (t))(1 − s0 (u)) + o,
(14)
and E|o| ≤
√ M c42 ε;
t2 u2 var( g0T H0 (t)SH0 (u) g0 ) ≤ M c62 /N.
Proof. We notice that def
0 = tuE 0 . f = tuE g0T H0 (t)SH0 (u) g g0T H0 (t)x1 x1 H0 (u) g 01 + x1 y1 /N , we find 0 = g Substituting g 01T H0 (t)ψ1 (u)x1 + g01 + tuEy1 g f = tuE g01T H0 (t)x1 xT1 H0 (u) 01T H0 (u)ψ1 (t)x1 + tuEy12 ψ1 (t)ψ1 (u)/N 2 . + tuEy1 g In the first summand of the right-hand side, we substitute H0 x1 from (6) and tψ1 (t) = 1 − s0 (t) + Δ1 , EΔ21 ≤ δ. We find that the 01T H01 (t)ΣH01 (u) g 01 +o, where the first summand is tuEs0 (t)s0 (u) g leading term is involved in the formulation of the lemma, and the remainder term is such that 01 ]2 δ, g01T H01 (t)x1 xT1 H01 (u) g Eo2 ≤ at2 u2 E[ where a is numerical coefficient and δ is defined in Lemma 4.14. In view of (1), we have g01 |4 c42 ε2 ≤ M c84 ε2 . Eo2 ≤ M t4 E| We transform the last three summands of f substituting tψ1 (t) = = 1 − s0 (t) + Δ1 , EΔ21 ≤ δ. The sum of these terms is equal to g01T H01 (t)x1 y1 + (1− s0 (u))ts0 (t)E g01T H01 (u)x1 y1 + + (1 − s0 (t))us0 (u)E + (1 − s0 (t))(1 − s0 (u))Ey12 + o,
(15)
where the remainder term o is such that Eo2 is not greater than a u2 E( g01T H01 (t)x1 )2 y12 δ + t2 E( g01T H01 (u)x1 )2 y12 δ + Ey14 δ ≤ ≤ aM (2τ 2 (1 + λ) + 1) δ ≤ M c63 ε2 , where a is a numerical coefficient. The leading part of (15), as is readily seen, coincides with three terms in (14). The weakest
upper estimate for the squares of the remainder terms is M c84 ε2 . Consequently, the first lemma statement holds with the remainder √ term M c42 ε. In the second statement, we first substitute tSH0 (t) = I −H0 (t). It follows 0T H0 (t) g 0 − u g 0T H0 (t)H0 (u) g 0 ). var f = var(u g Here the variance of the minuend is not greater than M c42 /N by Lemma 4.13. The variance of the subtrahend is not greater than M c62 /N by Lemma 4.14. The last statement of Lemma 4.15 follows. Theorem 4.7. If t ≥ u ≥ 0, then tuE g0T H0 (t)SH0 (u) g0 = tus0 (t)s0 (u)E g0T H0 (t)ΣH0 (u) g0 + + (1 − s0 (u))ts0 (t)E g0T H0 (t)g + (1 − s0 (t))us0 (u)E g0T H0 (u)g + + σ 2 (1 − s0 (t))(1 − s0 (u)) + o, where |o| ≤
√
(16)
M c42 ε.
Proof. First, we apply Lemma 4.15. The left-hand side can be √ transformed by (14) with the remainder term M c42 ε. We obtain the first summand in (16). Now we compare the right-hand sides of (14) and (16). By (6), the difference between the second summands does not exceed t|E g01T H01 (t)g − E g0T H0 (t)g| ≤ 01T H01 (t)x1 xT1 H0 (t)g/N | + t|Ey1 xT1 H0 (t)g/N | ≤ ≤ t |Et g ≤ [|Et g01T H01 (t)x1 |2 E|txT1 H0 (t)g|2 + Ey12 E|txT1 H0 (t)g|2 ]1/2 /N ≤ √ 1/2 ≤ E| g01 |2 M 2 t4 + M 2 t2 /N ≤ M c21 /N.
The difference between the third summands also does not exceed this quantity. The fourth summands coincide. We conclude that the equality in the √ formulation of the theorem holds with the inaccuracy at most M c42 ε. Theorem 4.7 is proved.
Functionals in the Regression Problem , we use the identities To pass to matrices C, H = H(t), and g ¯x ¯ T H, =g 0 − x ¯ y¯, and the identity H = H0 −tH0 x ¯x ¯T , g C = S −x ¯ and y¯ are centered sample averages of x and y. where x Remark 2. E| g|4 ≤ aM 2 (1 + λ)2 , where a is a numerical coefficient. Lemma 4.16. tEgT H(t) g = ts(t)EgT H(t)g + o1 , g = σ 2 (1 − s(t)) + ts(t)EgT H(t) g + o2 , tE gT H(t) where |o1 |, |o2 | ≤
√
(17)
M c43 ε.
Proof. We have 0 + t2 EgT H x ¯ y¯. ¯x ¯ T H tEgT H g = tEgT H0 g g0 − tEgT H0 x By Lemma 4.12, the first summand equals ts0 EgT H0 g + o, where √ |o| ≤ M c31 ε. We estimate the remaining terms using the Schwarz inequality. By Statement 3 of Lemma 4.10 with e = g/|g|, we find that the second term is not greater in absolute value than 2 )1/2 ≤ |g|(EtV 2 t3 E¯ x2 g
√ M c42 ε.
The third summand does not exceed |g|t(E¯ x2 E¯ y 2 )1/2 ≤ We conclude that 0 | ≤ |tEgT H g − tEgT H0 g
√
√
M c11 .
M c42 ε.
In view of Lemma up √ 4.10, we can replace s0 by s with an accuracy to tg2 c11 /N √ ≤ M c21 /N . It follows that tEgT H g = tsgT H0 g+o, where |o| ≤ M c42 ε. In view of Lemma 4.10, replacing H0 by H in the right-hand side we produce an inaccuracy of the same order. The first statement of our lemma is proved.
Further, from (16) it follows ¯ T H 0 + t2 E ¯x ¯ T H g = tE g0T H0 g g0T H0 x g0 − 2tE¯ yx g, tE gT H where by Lemma 4.11, the second summand is not greater in absolute value than 1/2 √ ¯ |2 E¯ 02 t2 E| x2 g g0T H0 x ≤ M c43 ε. The third summand in absolute value is not greater than √ √ 2 )1/2 ≤ M c11 / N . x2 g t(E¯ y 2 E¯ Applying Lemma 4.13, we recall that 0 + o, tE g0T H g0 = σ 2 (1 − s0 ) + ts20 EgT H0 g √ where E|o| ≤ M c32√ ε. The difference between s and s0 contributes no more than M c21 /N . Now we have 0 = tEgT H ¯x ¯ T H ¯, g − t2 EgT H0 x g0 + tE¯ y gT H x tEgT H0 g where the first summand is written out in the lemma formulation. By Lemma 4.11 with e = g/|g|, the second term is not greater in absolute value than 1/2 √ ¯2g 02 |g| EtV 2 Et3 x ≤ M c42 ε. The third term in the absolute value is not greater than √ t(E¯ x2 E¯ y 2 )1/2 ≤ M c11 /N. We conclude that the first part of Statement 2 is valid. The second equation in Statement 2 follows from Statement 1, Lemma 4.10, and Lemma 4.13. This proves Lemma 4.16. Lemma 4.17. If t ≥ u ≥ 0, then tu|E gT H(t)ΣH(u) g − E g0T H0 (t)ΣH0 (u) g0 | ≤
√ M c63 ε.
¯x ¯ T H, we obtain Proof. Replacing H by H0 − tH0 x g= tu gT H(t)ΣH(u) ¯ T H(t)ΣH0 (u) g + g + t2 u gT H0 (t)¯ xx = tu gT H0 (t)ΣH0 (u) 2 T T H0 (u)¯ ¯ H(u)ΣH0 (t) g + xx + tu g T H0 (t)¯ ¯ T H(t)ΣH(u)¯ ¯ T H0 (u) g . xx xx + t2 u2 g
(18)
Here the first summand provides the required expression in the formulation of the lemma with an accuracy to the replacement of by g 0 , i.e., to an accuracy up to vectors g xy¯ + tuE g0T H0 (u)ΣH0 (t)¯ xy¯+ tuE g0T H0 (t)ΣH0 (u)¯ x y¯2 . + tuE¯ xT H0 (t)ΣH0 (u)¯ Let us obtain upper estimates for these three terms. We single out first the dependence on y¯. The square of the first term does not exceed ¯ 2 ) ≤ 2M 3 t4 λ(1 + λ)/N ≤ M c42 /N. y 2 E( g2 x M t4 E¯ The square of the second term can be estimated likewise. The square of the third term is not larger than √ y 2 ≤ M t2 E¯ y 2 ≤ M τ 2 /N. M t2 u2 EΦ(t) Φ(u)¯ It remains to estimate the sum of last three terms in (18). We have Σ ≤ M . By Lemma 4.11 and Remark 1, for u ≤ t, the expectation of the second summand in (18) is not greater than √
1/2 0T H0 x ¯ |2 t4 E¯ 2 M E|t g x2 g ≤ c53 ε.
The third summand can be estimated likewise. To estimate the fourth summand in (18), we note that √ ¯ T H02 (u)¯ x|2 ≤ |u x x| ≤ |uΦ(u)| ≤ 1. | uH0 (u)¯ In view of Lemma 4.11, the expectation of the fourth summand in (18) is not larger than √
1/2 √ T H0 (t)¯ 2 M E|t g x|2 Et5 |¯ x|4 g ≤ M c63 ε.
We conclude that lemma statement holds. Lemma 4.18. If t ≥ u ≥ 0, then 0 | ≤ tu|E gT H(t)CH(u) g − E g0T H0 (t)SH0 (u) g
√ M c43 ε.
=g 0 − x ¯ y¯. It follows that Proof. We substitute g def
gT H(t)CH(u) g= f = tuE T 0 − tuE g0T H(t)CH(u)¯ x y¯ − = tuE g0 H(t)CH(u) g T 2 T ¯ H0 (t)CH0 (u)¯ x y¯ + tuE¯ y x x. −tuE g0 H(u)CH(t)¯ (19) Substituting uCH(u) = I − H(u) in the last three summands, we find that the square of the second term is not greater than ¯ 2 ≤ 2M 2 t2 λ(1 + λ)/N ≤ M c22 /N. y 2 E g02 x t2 E¯ The square of the third summand can be estimated likewise. The square of the fourth summand does not exceed y 4 E|¯ x|4 ≤ M 2 t2 λ2 /N 2 ≤ M c22 /N 2 . t2 E¯ Thus, the quantity f is equal to the first of the right√ √ summand hand side of (19) to an accuracy up to M c11 / N . It remains to estimate the contribution of the difference uH0 (t)CH(u) − uH0 (t)SH0 (u) = = H(t) − H(t)H(u) − H0 (t) + H0 (t)H0 (u). Using (16), we find that within an accuracy up to
√
√ M c11 / N ,
|f − tuE g0T H0 (t)SH0 (u) g0 | ≤ g0 | = ≤ tE| g0T (H(t) − H0 (t)) (H(u) + H0 (u)) 1/2 2 T T T ¯2g 02 ¯ G 0 H0 (t)¯ = t E| g0 H0 (t)¯ xx g0 | ≤ 2 E|t g x|2 Et2 x ,
where G ≤ 2. By Lemma 4.11, the right-hand side √ of the last inequality is not larger than 2M t λ(1 + λ)c32 ε ≤ M c43 ε. We conclude that the statement of our lemma is to an accuracy up to √ M c43 ε. The lemma is proved. Theorem 4.8. If t ≥ u ≥ 0, then tu E gT H(t)CH(u) g = tus(t)s(u)E gT H(t)ΣH(u) g + + (1 − s(u))ts2 (t)EgT H(t)g + + (1 − s(t))us2 (u)EgT H(u)g + σ 2 (1 − s(t))(1 − s(u)) + o, (20) where |o| ≤
√
M c63 ε.
Proof. We transform the left-hand side. First, we apply Lemma 4.18 and obtain the leading g0T H0 (t)SH0 (u) g0 with a √ term tuE correction not greater than M c43 ε. Then, we apply Theorem 4.7. Up to the same accuracy, this terms equals g0T H0 (t)ΣH0 (u) g0 + (1 − s0 (u))ts0 (t)E g0T H0 (t)g + tus0 (t)s0 (u)E g0T H0 (u)g + σ 2 (1 − s0 (t))(1 − s0 (u)). + (1 − s0 (t))us0 (u)E (21) We transform the first summand in (21) using Lemma 4.18 and Theorem 4.17. This lemma gives a correction not greater than √ M c31 /N . The first summand in the right-hand √ side of (20) is obtained with a correction not greater than M c63 ε. Next, we transform the second summand in (21). By Lemma 4.12, the equality holds tE gT H0 (t)g = ts0 (t)EgT H0 (t)g √ + o, where o is not greater in absolute value than M tc31 ε ≤ M c41 ε. The difference between s0 and s yields a lesser correction. We obtain the second summand of the right-hand side of (20). Similarly, we transform the expression with argument u. We obtain the third summand in (20). The substitution of s for√s0 gives a correction in the last summand that is not larger than M c11 /N . We conclude that the right-hand sides of (21) and (20) coincide with an accuracy up to √ M c63 ε. This proves Theorem 4.8.
Minimization of Quadratic Risk We first express the leading part of the quadratic risk R in . terms of sample characteristics, that is, as a function of C and g Our problem is to construct reliable estimators for the functions T g and D(t, u) = tuEk T Σk that are involved in the expression tEk (3) for the quadratic risk. We consider the statistics s(t) = 1 − nN −1 + N −1 tr(I + tC)−1 ,
T H(t) κ (t) = t g g,
t κ(u) − u κ(t) u) def , K(t, = tu gT H(t)CH(u) g= t−u u) = K(t, u) − 1 − s(t) κ Δ(t, (u) − 2 − 1 − s(u) κ (t) + σ 1 − s(t) 1 − s(u) , u) is extended by continuity to t = u. where K(t, Remark 3. If t ≥ u ≥ 0, then u) + o, s(t)s(u)ED(t, u) = EΔ(t,
where
E|o| ≤
√
M c63 ε.
It is convenient to replace the dependence of the functionals on η(t) by that on a function ρ(t) of the form 1 def ρ(t) = dη(x). 0≤x≤t s(x) We note√that the variation of the function tk ρ(t) on [0, ∞) does not exceed M ηk+1 λ. Let us consider the quadratic risk (3) defined as a function of ρ(t), R = R(η) = R(ρ). Theorem 4.9. The statistic def 2 R(ρ) = σ −2
κ (t)− σ 2 1− s(t) dρ(t)+
u) dρ(t) dρ(u) Δ(t,
is an √ estimator of R = R(ρ) for which ER(ρ) = R(ρ) + o, where |o| ≤ M η8 c05 ε.
Proof. The expression (3) equals R(ρ), with an accuracy up √ to M /N . Let us compare (3) with the right-hand side of the expression for R( ρ) in the formulation of our theorem. We have |σ 2 − E σ 2 | ≤ ≤ 2 M/N , s(t) ≥ (1 + τ λ)−1 . By Lemma 4.12, √ g−E κ (t) − σ 2 (1 − s(t) /s(t)| ≤ M c54 ε. |EtgT H(t) The differential dρ(t) = s−1 (t)dη(t), and the variation of η(t) is not larger than 1. One can see that the second summand in (3) equals the√second term of the expression for R(ρ), with an accuracy up to M c54 ε. The third summand in (3) by Theorem √ 4.8 is equal to the third term of R(ρ), with an accuracy up to M c85 ε. The coefficient c85 increases not faster than t8 with t. We arrive to the theorem statement. Now we pass to the calculation of the nonrandom leading part of the quadratic risk. Define φ(t) = t gT (I + tΣ)−1 g,
κ(t) = σ 2 (1 − s(t))2 + s2 (t)φ(ts(t)), tκ(u) − uκ(t) K(t, u) = , t−u Δ(t, u) = K(t, u) − (1 − s(t))κ(u) − (1 − s(u))κ(t) + +σ 2 (1 − s(t))(1 − s(u)),
where the function K(t, u) is extended by continuity to t = u. √ Remark 4. E κ(t) = κ(t) + o, where |o| ≤ M c43 ε. Lemma 4.19. If t ≥ u ≥ 0, then √ u) − K(t, u)| ≤ c43 M ε, |EK(t, √ u) − Δ(t, u)| ≤ c34 M ε. |EΔ(t, Proof. First, for some d > 0, let|t−u| ≥ d. For these arguments, u)−K(t, u)| ≤ τ c43 /d. Let |t−u| ≤ d, by Remark 4, we have |EK(t,
188
4. ASYMPTOTICALLY UNIMPROVABLE
d > 0. We expand the functions κ(u) and κ (u) to the Taylor series up to the second derivatives (t, u) − K(t, u) = EK κ(u) + ud κ (u) + ud2 κ (ξ)/2 + d κ(u)] − = d−1 E[u −d−1 [uκ(u) + udκ (u) + ud2 κ (ζ)/2 + dκ(u)], (22) where ξ and ζ are intermediate values of the arguments, u ≤ ξ, ζ ≤ 1. Here gT HCH g + t|E gT HCHCH g| ≤ |E k (ξ)| ≤ 2E g ≤ aM 3/2 λ(1 + λ), ≤ 3E gT C where a is a numerical coefficient. We also find that |s (u)| ≤ N −1 E tr(HCH) ≤ |s (u)|
≤
N −1 E
tr(HCHCH) ≤
√
M λ,
N −1 Etr
C 2 ≤ M λ,
|φ (us(u))| ≤ gT Rg + t(1 + τ λ) gT RΣRg ≤ M c21 , |φ (us(u))| ≤ a(1 + τ λ)2 gT RΣRg ≤ M 3/2 c22 , where R = (I + us(u)Σ)−1 . We conclude that t|Eκ (ξ)| ≤ M c32 . Thus, the terms with the second derivatives contribute to (22) no more than M c32 d. Further, we estimate |uE κ (u) − uκ (u)|. We have κ (ξ). E κ(u + d) − E κ(u) = d κ (u) + d2 E Analogously we substitute κ(u+d)−κ(u). Subtracting these expressions we find that, dE κ (u) + d2 E κ (ξ) − dEκ (u) − d2 Eκ (ζ) is not greater than
√
M c43 ε in absolute value. Consequently,
|Eu κ (u) − uκ (u)| ≤
√ M c43 ε + M c32 d.
It remains to estimate the summand (Eu κ(u) − uκ(u))/d in the right-hand side of (22). This difference is not greater than c53 ε/d in absolute value, and √ consequently all the right-hand sides of (22) do not exceed M c43 ε + c32 (M d + c22 ε/d). Let us choose d = c22 ε/M . Then (since ε ≤ 1),√the right-hand side of (22) is not greater in absolute value than M ε c43 . The first statement is proved. Further, we have u) −Δ(t, u)| ≤ |EK(t, u) − K(t, u)| + |EΔ(t, + |r(t)κ(u) − r(t) κ (u) + r(u)κ(t) − r(u) κ(t)| + 2 r(t) r(u)|, + |σ 2 r(t)r(u) − σ where r(t) = 1 − s(t), r(t) = 1 − s(t). The first summand is estimated in Lemma 4.19. Note that the variance of n−1 tr H is not greater than c20 /N and therefore |E s(t) − s(t)| ≤ c11 ε. By √ u) − Remark 4, the upper estimate c43 M ε also holds for |EΔ(t, Δ(t, u)|. Lemma 4.19 is proved. Theorem 4.10. The quadratic risk (3) is R = R(ρ) = R0 (ρ) + o, where def Δ(t, u) dρ(t) dρ(u), R0 = R0 (ρ) = σ 2 −2 s(t)φ(ts(t)) dρ(t)+ (23) √
√
and |o| ≤ M η6 c05 ε. If some function of bounded variation ρopt (t) exists satisfying the equation Δ(t, u) dρopt (u) = κ(t) − σ 2 (1 − s(t)), t ≥ 0, then R0 (ρ) reaches the minimum for ρ(t) = ρopt (t) and 2 min R0 (ρ) = σ − s(t)φ(ts(t)) dρopt (t). ρ
2 Proof. We start from Theorem 4.9. The difference between σ 2 and E σ is not greater than 2 M/N . The difference between s(t)
√ and E s(t) is not larger than c11 / N . By Lemma 4.16, √ φ(ts(t)) − E κ (t) − σ 2 (1 − s(t)) ≤ M c43 ε. We obtain the two first summands in (23). Further, by Lamma √ u)| ≤ c43 M ε. The statement of Theorem 4.19, |Δ(t, u) − EΔ(t, 4.10 follows. Usually, to estimate the efficiency of the linear regression, the RSS is used, which presents an empirical quadratic risk estimated over the same sample X T C k. T g +k Remp = σ 2 − 2k Theorem 4.11. For the linear regression with k = Γ(C) g and T x ¯ , the empiric quadratic risk Remp = Remp (η) may be l = y¯ − k written in the form Remp (η) = σ 2 − 2 κ(t) dη(t) + K(t, u) dη(t)dη(u) + o, √ where |o| ≤ c43 M ε. Proof. By Lemmas 4.12 and 4.19, we have √ |σ 2 − E σ2| ≤ 2 M ε
√ |κ(t) − E κ(t)| ≤ M c43 ε, √ u)| ≤ c43 M ε. |K(t, u) − EK(t,
The variation of η(t) on [0, ∞) is not larger than 1. We can easily see that the statement of Theorem 4.11 holds.
Special Cases We consider “shrinkage-ridge estimators” defined by the function η(x) = α ind(x ≥ t), t ≥ 0. The coefficient α > 0 is an analog of the shrinkage coefficient in estimators of the Stein estimator
type, and 1/t presents a regularization parameter. In this case, by Theorem 4.10, the leading part of the quadratic risk (3) is R0 (ρ) = R0 (α, t) = σ 2 − 2αφ(ts(t)) + α2 Δ(t, t)/s2 (t). If α = 1, we have R0 (ρ) = R0 (1, t) =
1 s2 (t)
d [t (σ 2 − κ(t))]. dt
In this case, the empirical risk is Remp (t) = s2 (t)R0 (t). For the optimum value α = αopt = s2 (t)φ(ts(t))/Δ(t, t), we have R0 (ρ) = R0 (d
opt
, t) = σ
2
s2 (t)φ2 (ts(t)) 1− Δ(t, t)
.
Example 1. Let λ → 0 (the transition to the case of fixed dimension under the increasing sample size N → ∞). To simplify formulas, we write out only leading terms of the expressions. If λ = 0, then s(t) = 1, h(t) = n−1 tr(I + tΣ)−1 , κ(t) = φ(t), Δ(t, t) = φ(t) − tφ (t). Set Σ = I. We have φ(t) ≈
σ 2 r2 t , 1+t
h(t) ≈
1 , 1+t
Δ(t, t) ≈
σ 2 r 2 t2 , (1 + t)2
where r2 = g2 /σ 2 is the square of the multiple correlation coefficient. The leading part of the quadratic risk (3) is R0 = σ 2 [1 − 2α2 t/(1 + t) + α2 r2 t2 /(1 + t)2 ]. For the optimal choice of d, as well as for the optimal choice of t, we have α = (1 + t)/t and Ropt = σ 2 (1 − r2 ), i.e., the quadratic risk (3) asymptotically attains its a priori minimum. Example 2. Let N → ∞ and n → ∞ so that the convergence holds λ = n/N → λ0 . Assume that the matrices Σ are nondegenerate for each n, σ 2 → σ02 , r2 = gT Σ−1 g/σ 2 → r02 , and the parameters γ → 0. Under the limit transition, for each fixed t ≥ 0, the remainder terms in Theorems 4.8–4.11 vanish. Let d = 1 and t → ∞ (the transition to the standard nonregularized
regression under the increasing dimension asymptotics). Under these conditions,
$$s(t) \to 1 - \lambda_0, \quad s'(t) \to 0, \quad \varphi(ts(t)) \to \sigma_0^2 r_*^2, \quad \kappa(t) \to \kappa(\infty) \stackrel{\rm def}{=} \sigma_0^2 r_0^2(1 - \lambda_0) + \sigma_0^2\lambda_0, \quad t\kappa'(t) \to 0.$$
The quadratic risk (3) tends to R₀ so that
$$\lim_{t\to\infty}\,\lim_{\gamma\to 0}\,\lim_{N\to\infty}\,|\mathrm{E}R(t) - R_0| = 0,$$
where $R_0 \stackrel{\rm def}{=} \sigma_0^2(1 - r_0^2)/(1 - \lambda_0)$. This limit expression was obtained by I. S. Yenyukov (see [2]). It presents an explicit dependence of the quality of the standard regression procedure on the dimension of observations and the sample size. Note that under the same conditions, the empirical risk $R_{\rm emp} \to \sigma_0^2(1 - r_0^2)(1 - \lambda_0)$, which is less than $\sigma_0^2(1 - r_0^2)$.

Example 3. Under the same conditions as in Example 2, let the coefficient α be chosen optimally and then t → ∞. We have $\alpha = \alpha_{\rm opt}(t) = s^2(t)\,\varphi(ts(t))/\Delta(t, t)$ and t → ∞. Then,
$$s(t) \to 1 - \lambda_0, \qquad \varphi(ts(t)) \to \sigma_0^2 r_0^2, \qquad \Delta(t, t) \to \sigma_0^2(1 - \lambda_0)\bigl[\lambda_0(1 - r_0^2) + (1 - \lambda_0)r_0^2\bigr],$$
$$\alpha_{\rm opt} \to \frac{r_0^2(1 - \lambda_0)}{\lambda_0(1 - r_0^2) + (1 - \lambda_0)r_0^2}.$$
By (23), the quadratic risk (3) $R_0(t, \alpha_{\rm opt}) \to R_0$ as t → ∞, where
$$R_0 = \frac{\sigma_0^2(1 - r_0^2)\bigl[\lambda_0 + (1 - \lambda_0)r_0^2\bigr]}{\lambda_0(1 - r_0^2) + (1 - \lambda_0)r_0^2} \le \frac{\sigma_0^2(1 - r_0^2)}{1 - \lambda_0}.$$
If λ0 = 1, the optimal shrinkage coefficient αopt → 0 and the quadratic risk remains finite (tends to σ02 ) in spite of the absence of a regularization, whereas the quadratic risk for the standard linear regression tends to infinity.
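The contrast between Example 2 and Example 3 is easy to evaluate numerically. The following sketch is not part of the original text: the function name and the parameter values are our own illustrative choices, and it simply tabulates the two limit expressions derived above.

```python
import numpy as np

def limit_risks(sigma2, r2, lam):
    """Leading-term limit risks from Examples 2-3 (t -> infinity).

    sigma2 : limit noise variance sigma_0^2
    r2     : limit squared multiple correlation r_0^2
    lam    : limit dimension-to-sample-size ratio lambda_0 < 1
    """
    # Standard (non-regularized, alpha = 1) regression: Yenyukov's formula.
    r_standard = sigma2 * (1.0 - r2) / (1.0 - lam)
    # Optimal shrinkage coefficient alpha_opt in the limit t -> infinity.
    denom = lam * (1.0 - r2) + (1.0 - lam) * r2
    alpha_opt = r2 * (1.0 - lam) / denom
    # Limit risk of the optimally shrunk estimator (Example 3).
    r_shrunk = sigma2 * (1.0 - r2) * (lam + (1.0 - lam) * r2) / denom
    return r_standard, alpha_opt, r_shrunk

if __name__ == "__main__":
    for lam in (0.1, 0.5, 0.9):
        r_std, a_opt, r_opt = limit_risks(sigma2=1.0, r2=0.5, lam=lam)
        print(f"lambda0={lam:.1f}  R_standard={r_std:.3f}  "
              f"alpha_opt={a_opt:.3f}  R_shrunk={r_opt:.3f}")
```

As λ₀ grows, the standard risk blows up while the optimally shrunk risk stays bounded, in agreement with the remark above.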
CHAPTER 5
MULTIPARAMETRIC DISCRIMINANT ANALYSIS

Many-dimensional recognition problems arise when a small number of leading features do not provide satisfactory discrimination and the statistician should additionally invoke a large number of less informative discriminating variables. Wald [86] characterized the set of variables in discriminant analysis as a finite number of well-discriminating “structure variables” and an infinite number of poorly discriminating “incident variables.” The small number of well-discriminating variables may be successfully treated by standard methods. But the technique of extracting discriminative information from a large number of incident variables was not developed until recently. In 1976, Meshalkin [46] noticed (see Introduction) that the linear discriminant function may be approximately normal if a large number of addends in the discriminant function are uniformly small and the concept of distributions uniformly approaching each other in the parametric space may be used. He applied the Kolmogorov asymptotics and derived concise limit formulas for the probabilities of errors. In 1979, the author of this book used the same asymptotics for the development of an extended theory [60], [61] of improving the discriminant function by introducing weights of independent contributions of variables. This theory is presented in Section 5.1. In Section 5.2, we consider the general case of dependent variables. The standard Wald discriminant function is modified by introducing weights of variables in the coordinate system where the (pooled) sample covariance matrix is diagonal. As in [63], we start from the assumption of normality of variables, prove limit theorems, and find the “best-in-the-limit” linear discriminant function. Then, we extend these results to a wide class of distributions
using the Normal Evaluation Principle proposed in Section 3.3. In the general case, the linear discriminant function is no longer normal. We return to the Fisher criterion of discriminant analysis quality and find solutions unimprovable in the sense of this criterion.
5.1. DISCRIMINANT ANALYSIS OF INDEPENDENT VARIABLES
In this section, we solve the problem of discriminating large-dimensional vectors x = (x₁, x₂, ..., xₙ) from two populations under the assumption that the variables x₁, x₂, ..., xₙ are independent and normally distributed. We consider a generalized family of linear discriminant functions that differ by the introduction of weights for the independent addends, and apply the multiparametric technique to find the best weights for an asymptotically improved linear discriminant function and asymptotically best rules for the selection of variables.

Asymptotical Problem Setting. Let P = {Pₙ} be a sequence of the discrimination problems
$$(S_\nu,\ X_\nu,\ N,\ \bar{\mathbf x}_\nu,\ g(\mathbf x),\ \hat\alpha_\nu,\ \nu = 1, 2)_n, \qquad n = 1, 2, \ldots, \tag{1}$$
where S₁ and S₂ are two populations; Xν are samples from Sν of the same size N > 2, and x̄ν are the sample mean vectors for Xν, ν = 1, 2; g(x) = g(x, X₁, X₂) is the discriminant function used in the classification rule g(x) ≥ θ against g(x) < θ;
$$\hat\alpha_1 = \mathrm{P}(g(\mathbf x) < \theta \mid S_1), \qquad \hat\alpha_2 = \mathrm{P}(g(\mathbf x) \ge \theta \mid S_2)$$
are sample-dependent probabilities of errors of the two kinds, and θ is a threshold (we do not write out the subscripts n for the arguments of (1)). Let us restrict (1) with the following assumptions.

A. For each n, the vectors x in the populations Sν from P are distributed normally with the density
$$f(\mathbf x, \vec\mu_\nu) = (2\pi)^{-n/2}\exp\bigl(-(\mathbf x - \vec\mu_\nu)^2/2\bigr), \qquad \nu = 1, 2,$$
where μ⃗₁ = (μ₁₁, μ₁₂, ..., μ₁ₙ) and μ⃗₂ = (μ₂₁, μ₂₂, ..., μ₂ₙ). Denote μ⃗ = μ⃗₁ − μ⃗₂ = (μ₁, μ₂, ..., μₙ).
B. The populations Sν in (1) are contiguous so that
$$\max_{i=1,\ldots,n} N\mu_i^2/2 \le c,$$
where c does not depend on n (here and in the following, squares of vectors denote squares of their length), and the ratio n/N → λ > 0. We introduce a description of the set {μᵢ²} in terms of an empirical distribution function
$$R_n(v) = n^{-1}\sum_{i=1}^{n} \mathrm{ind}(N\mu_i^2/2 \le v), \qquad R_n(c) = 1.$$
C. For any v > 0 in P, the limits exist
$$\lim_{n\to\infty} R_n(v) = R(v), \qquad J = \lim_{n\to\infty}\vec\mu^2 = 2\lambda\int v\, dR(v). \tag{2}$$
This condition does not restrict applications of the limit theory to finite-dimensional problems and is introduced in order to provide the limit form of results. Under Assumption B, R(c) = 1. Define
$$\mathbf a = (a_1, a_2, \ldots, a_n) = (\bar{\mathbf x}_1 + \bar{\mathbf x}_2)/2, \qquad \bar{\mathbf x} = (\bar x_1, \bar x_2, \ldots, \bar x_n) = \bar{\mathbf x}_1 - \bar{\mathbf x}_2 \sim \mathrm{N}(\vec\mu,\ 2/N).$$
D. The discriminant function is of the form
$$g(\mathbf x) = \sum_{i=1}^{n} \eta_i\,\bar x_i\,(x_i - a_i), \tag{3}$$
where the weights ηᵢ have one of two forms: ηᵢ = η(vᵢ) with vᵢ = Nμᵢ²/2 (weighting by a priori data), or ηᵢ = η(uᵢ) with uᵢ = N x̄ᵢ²/2 (weighting by sample data), i = 1, 2, ..., n. Assume that the function η(·) does not depend on n, is of bounded variation on [0, ∞), and is continuous everywhere, except perhaps a finite number of discontinuity points not coinciding with discontinuity points of R(v).
A Priori Weighting of Variables

Denote
$$\hat G_{n\nu}(\eta) = \int g(\mathbf x)\, f(\mathbf x, \vec\mu_\nu)\, d\mathbf x = 1/2\sum_i \eta(v_i)\,\bar x_i\,(\mu_{\nu i} - a_i),$$
$$\hat D_n(\eta) = \int \bigl[g(\mathbf x) - \hat G_{n\nu}(\eta)\bigr]^2 f(\mathbf x, \vec\mu_\nu)\, d\mathbf x = \sum_i \eta^2(v_i)\,\bar x_i^2,$$
where vᵢ = Nμᵢ²/2 (here and in the following, in sums over i the subscript runs over i = 1, 2, ..., n). The probabilities of errors of the two kinds depend on samples and are
$$\hat\alpha_1 = \Phi\Bigl(-\frac{\hat G_{n1}(\eta) - \theta}{\sqrt{\hat D_n(\eta)}}\Bigr), \qquad \hat\alpha_2 = \Phi\Bigl(-\frac{\theta - \hat G_{n2}(\eta)}{\sqrt{\hat D_n(\eta)}}\Bigr). \tag{4}$$
Lemma 5.1. Under Assumptions A–D for ηᵢ = η(Nμᵢ²/2) in (3), i = 1, 2, ..., n,
1. the limits in the square mean exist
$$\underset{n\to\infty}{\mathrm{l.i.m.}}\ \hat G_{n\nu}(\eta) = (-1)^{\nu+1} G(\eta),\ \ \nu = 1, 2, \qquad \underset{n\to\infty}{\mathrm{l.i.m.}}\ \hat D_n(\eta) = D(\eta),$$
where
$$G(\eta) = \lambda\int \eta(v)\,v\, dR(v), \qquad D(\eta) = 2\lambda\int \eta^2(v)\,(v+1)\, dR(v); \tag{5}$$
2. if D(η) > 0, then
$$\operatorname*{plim}_{n\to\infty}\hat\alpha_1(\eta) = \Phi\Bigl(-\frac{G(\eta) - \theta}{\sqrt{D(\eta)}}\Bigr), \qquad \operatorname*{plim}_{n\to\infty}\hat\alpha_2(\eta) = \Phi\Bigl(-\frac{G(\eta) + \theta}{\sqrt{D(\eta)}}\Bigr).$$
Proof. Let vᵢ = Nμᵢ²/2, i = 1, 2, ..., n. The expectation
$$\mathrm{E}\,\hat G_{n1}(\eta) = 1/2\sum_i \eta(v_i)\,\mu_i^2 = n/N\int \eta(v)\,v\, dR_n(v).$$
The quantity $\mathrm{E}\,\hat G_{n2}(\eta)$ differs by sign. The domain of integration is bounded, and the function in the integrand is piecewise continuous almost everywhere. We conclude that the integral above converges, and $\hat G_{n1}(\eta) \to G(\eta)$, whereas $\hat G_{n2}(\eta) \to -G(\eta)$. The expectation
$$\mathrm{E}\,\hat D_n(\eta) = \sum_i \eta(v_i)\,\mathrm{E}\,\bar x_i^2 = \sum_i \eta(v_i)\Bigl(\mu_i^2 + \frac{2}{N}\Bigr) = \frac{2n}{N}\int \eta(v)\,(v + 1)\, dR_n(v).$$
By the same reasoning, this integral converges to the integral over dR(v) in the lemma formulation.
Let us show that the variances of $\hat G_{n\nu}(\eta)$ and $\hat D_n(\eta)$ vanish. These two quantities present sums of independent addends, and their variance is not greater than the sums of expectations of squares. We note that the function η(·) is bounded and μᵢ⁴ ≤ 4c²/n². All fourth central moments of x̄ᵢ are equal to 3/N². It follows that var $\hat G_{n\nu}(\eta)$ and var $\hat D_n(\eta)$ are not greater than sums of O(n⁻²) + O(N⁻²). In view of condition B, these variances are O(n⁻¹). The first statement of Lemma 5.1 is proved. The second statement follows immediately. The proof is complete.

Example 1. If η(v) = 1 for all v > 0, then $\hat G_{n1}(1) \to J/2$ and $\hat D_n(1) \to J + 2\lambda$ as n → ∞ in the square mean, where J is defined by (2).

Minimum of Errors by A Priori Weighting

It is easy to check that the minimum of the limit value of (α̂₁ + α̂₂)/2 defined by (4) is achieved for the threshold θ = 0 (nontrivial optimum thresholds may be obtained for essentially different sample sizes, see Introduction). Lemma 5.1 states that if D(η) > 0, then
$$\min_\theta\,\operatorname*{plim}_{n\to\infty}\,(\hat\alpha_1 + \hat\alpha_2)/2 = \Phi(-\rho(\eta)/2),$$
where the “effective limit Mahalanobis distance” $\rho(\eta) = 2G(\eta)/\sqrt{D(\eta)}$.
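As an illustration of Lemma 5.1 and of the effective distance ρ(η), the following sketch (ours, not from the text; the discrete distribution R and the value of λ are illustrative assumptions) evaluates the limit error Φ(−ρ(η)/2) from formula (5) for the unweighted rule and for the weighting η(v) = v/(v + 1) that Theorem 5.1 below identifies as optimal.

```python
import numpy as np
from scipy.stats import norm

def rho(eta, v, w, lam):
    """Effective limit Mahalanobis distance rho(eta) = 2 G(eta)/sqrt(D(eta))
    for a discrete limit distribution R with atoms v and weights w (formula (5))."""
    G = lam * np.sum(w * eta(v) * v)
    D = 2.0 * lam * np.sum(w * eta(v) ** 2 * (v + 1.0))
    return 2.0 * G / np.sqrt(D)

# Illustrative choice (ours): a portion r of noninformative variables, the rest at v = b.
r, b, lam = 0.5, 3.0, 0.4
v = np.array([0.0, b])
w = np.array([r, 1.0 - r])

rho_flat = rho(lambda v: np.ones_like(v), v, w, lam)   # eta(v) = 1
rho_opt = rho(lambda v: v / (v + 1.0), v, w, lam)      # eta_opt(v) = v/(v+1)

print(f"limit error, eta = 1       : {norm.cdf(-rho_flat / 2):.4f}")
print(f"limit error, eta = v/(v+1) : {norm.cdf(-rho_opt / 2):.4f}")
```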
Theorem 5.1. Under Assumptions A–D, the variation of η(·) in g(x) of the form (3) with ηᵢ = η(Nμᵢ²/2), i = 1, 2, ..., n, leads to the minimum half-sum of limit error probabilities
$$\inf_\eta\,\min_\theta\,\operatorname*{plim}_{n\to\infty}\,\frac{\hat\alpha_1 + \hat\alpha_2}{2} = \Phi(-\rho(\eta_{\rm opt})/2)$$
with η(v) = η_opt(v) = v/(v + 1) and
$$\rho^2(\eta_{\rm opt}) = 2\lambda\int \frac{v^2}{v+1}\, dR(v).$$
Proof. Let us vary η(v) fixing θ = 0. We obtain the necessary condition of the extremum D(η)
vδη(v) dR(v) = 2G(η) (v + 1)η(v)δη(v) dR(v).
It follows that η(v) = const v/(v + 1). The proportionality coefficient does not affect the value of ρ(η). Set η(v) = ηopt (v) = v/(v + 1). Let us show that the value ρ(ηopt ) is not less than ρ(η) for any other η(t). Using the Cauchy–Bunyakovskii inequality, we obtain 2G(η) = 2 vη(v) dR(v) ≤ 1/2 v2 ≤2 = dR(v) (v + 1)η 2 (v) dR(v) v+1 = ρ(ηopt ) D(η). This completes the proof of Theorem 5.1. The optimal weighting of the form ηopt (v) = v/(v + 1) was first found by Deev in [18]. Example 2. (the case of a portion r of noninformative variables). Let R(v) = r ≥ 0 for 0 ≤ v ≤ b, and R(b) = 1 for v > b. If η(t) = 1 for all t (no weighting), then G(1) = J/2 = (1 − r)b, D(1) = J + 2λ = 2[(1 − r) b + λ], and ρ2 (1) = 2(1 − r)2 b2 /
[(1 − r)b + λ]. If η(v) = η_opt(v) = v/(v + 1) (optimum weighting), then ρ²(η_opt) = 2(1 − r)b²/(b + λ) ≥ ρ²(1).

Note that under Assumptions A–D, the contributions of variables to the discrimination are μᵢ² = O(n⁻¹), i = 1, 2, ..., n, while the bias of their estimators x̄ᵢ² is of the order 1/N, that is, of the same order of magnitude. This means that the true contributions are substantially different from their estimators. Thus, we have the problem of improving discrimination by weighting variables using the random x̄ᵢ².

Empirical Weighting of Variables

We consider now a more realistic problem when the weights in (3) are calculated from the estimators x̄ᵢ of the quantities μᵢ, i = 1, 2, ..., n. Let $F_m^\beta(u)$ be the standard biased χ² distribution function with m degrees of freedom and bias parameter β ≥ 0. Denote
$$f_m^\beta(u) = \frac{\partial F_m^\beta(u)}{\partial u}, \qquad f_{m+2}^\beta(u) = -\frac{\partial F_m^\beta(u)}{\partial \beta^2}.$$
Theorem 5.2. Let Assumptions A–D be valid. For the discriminant function (3) with weights ηᵢ = η(N x̄ᵢ²/2), i = 1, 2, ..., n, the limits in the square mean exist
$$\underset{n\to\infty}{\mathrm{l.i.m.}}\ \hat G_{n1}(\eta) \stackrel{\rm def}{=} G(\eta) = \lambda\int\Bigl[\int \eta(u)\, f_3^\beta(u)\, du\Bigr]\beta^2\, dR(\beta^2),$$
$$\underset{n\to\infty}{\mathrm{l.i.m.}}\ \hat D_n(\eta) \stackrel{\rm def}{=} D(\eta) = 2\lambda\int\Bigl[\int \eta^2(u)\, u f_1^\beta(u)\, du\Bigr]\, dR(\beta^2). \tag{6}$$
Proof. We begin with the second statement. Denote vi = N μ2i /2, ui = N x ¯2i /2. The latter variable can be expressed as 2 ui = (vi + ξ) , where ξ ∼ N(0, 1). Therefore, ui is distributed as biased χ2 with the density f1β (·) with one degree of freedom and bias β 2 = N vi2 /2. Thus, the expectation 2 n (η) = 2/N E ui η (ui ) = 2n/N [ uη(u)f1β (u) du] dRn (β 2 ). ED i
By conditions A and C, the left-hand side of this expression converges to the limit in the theorem formulation. The expectation η(ui ) x ¯i (μ1i − ai ) = 1/2E η(ui ) μi x ¯i = i i 2 ∂ 1 η(ui )μi μi + f1 (¯ xi ) d¯ xi = = 2 N ∂μi i ∂ 1 η(u)(β 2 + β ) fiβ (u) du, = N ∂β
n1 (η) = E EG
i
xi ) = N/4π exp(−N (¯ where fi (¯ xi − μi )2 /4) and β 2 = β 2 (i) = 2 N μi /2 for each i. Using the identity f3β (u) − f1β (u) = 2
∂f1β (u) , ∂β 2
we obtain the expression that tends to the limit in the theorem formulation.
It is left to check that the variances of both $\hat G_{n1}$ and $\hat D_n$ vanish. These variances present sums of addends that can be estimated from above by second moments. Each of these moments does not exceed O(n⁻²) + O(N⁻²). In view of condition A, it follows that the variances of both $\hat G_{n1}(\eta)$ and $\hat D_n(\eta)$ are O(n⁻¹). The proof is complete.
Denote
$$\sigma(u) = \int \beta^2 f_3^\beta(u)\, dR(\beta^2), \qquad \pi(u) = \int u f_1^\beta(u)\, dR(\beta^2). \tag{7}$$
The limits in (6) may be rewritten in the form
$$G(\eta) = \lambda\int_0^{\infty}\eta(u)\,\sigma(u)\, du, \qquad D(\eta) = 2\lambda\int_0^{\infty}\eta^2(u)\,\pi(u)\, du. \tag{8}$$
Theorem 5.3. Let Assumptions A–D hold and D(η) > 0. Then, using g(x) in (3) with the weight coefficients ηᵢ = η(N x̄ᵢ²/2), we have
$$\operatorname*{plim}_{n\to\infty}\hat\alpha_1 = \Phi\Bigl(-\frac{G(\eta) - \theta}{\sqrt{D(\eta)}}\Bigr), \qquad \operatorname*{plim}_{n\to\infty}\hat\alpha_2 = \Phi\Bigl(-\frac{G(\eta) + \theta}{\sqrt{D(\eta)}}\Bigr),$$
where G(η) and D(η) are defined by (8).
Proof. Let F̂(·) denote the conditional distribution function of the random value g = g(x) with fixed x̄₁ and x̄₂. Obviously, α̂₁ = F̂(θ). The function g(x) is a weighted sum of independent random values. Denote by $\hat G_{n1}$, $\hat D_n$, and $\hat T_{n1}$ the sums of the first moments, variances, and third absolute moments (conditional under fixed samples) of the addends in (3). Applying the Esseen inequality, we find that the distribution function of g = g(x) differs from the distribution function of $\mathrm{N}(\hat G_{n1}, \hat D_n)$ by $\omega = O(\hat T_{n1}/\hat D_n^{3/2})$. Here $\hat D_n \to D(\eta) > 0$ in the square mean. The expectation
$$\mathrm{E}\,\hat T_{n1} \le \sum_i \mathrm{E}\,|\eta_i\,\bar x_i\,(x_i - a_i)|^3,$$
where the coefficients ηᵢ and the moments E|xᵢ − aᵢ|⁶ are bounded. Hence, $\mathrm{E}\,\hat T_{n1}$ is of the order of magnitude of the sum of |x̄ᵢ|⁶. In view of conditions A and B, in this sum each addend is O(n⁻³ᐟ²) + O(N⁻³ᐟ²) as n → ∞. By Theorem 5.2, we have $\hat G_{n1} \xrightarrow{P} G(\eta)$ and $\hat D_n(\eta) \xrightarrow{P} D(\eta) > 0$. We conclude that the random value $\omega \xrightarrow{P} 0$ and F̂(θ) tends in probability to the distribution function of N(G(η), D(η)). The symmetric conclusion for ν = 2 follows from the assumptions. Theorem 5.3 is proved.
The limit half-sum of error probabilities is reached for θ = 0 and is
$$\alpha(\eta) \stackrel{\rm def}{=} \operatorname*{plim}_{n\to\infty}\,(\hat\alpha_1 + \hat\alpha_2)/2 = \Phi(-\rho(\eta)/2), \tag{9}$$
where ρ²(η) = 4G²(η)/D(η).
Example 3. Suppose that the contributions μᵢ² of all variables to the distance between the populations are identical. Then, for some β², R(v) = 0 for v < β² and R(v) = 1 for v ≥ β². We have
$$\sigma(u) = \beta^2 f_3^\beta(u), \qquad \pi(u) = u f_1^\beta(u), \qquad G(\eta) = \lambda\beta^2\int \eta(u)\, f_3^\beta(u)\, du, \qquad D(\eta) = 2\lambda\int u\,\eta^2(u)\, f_1^\beta(u)\, du,$$
where
$$f_1^\beta(t^2) = \frac{1}{\sqrt{2\pi}\,t}\exp\bigl(-(t^2 + \beta^2)/2\bigr)\operatorname{ch}\beta t, \qquad f_3^\beta(t^2) = \frac{1}{\sqrt{2\pi}\,\beta}\exp\bigl(-(t^2 + \beta^2)/2\bigr)\operatorname{sh}\beta t, \qquad \beta, t > 0.$$
If η(u) = 1 for all u > 0, then G(1) = λβ² = J/2, and using (6), we find that D(1) = 2(β² + λ) = J + 2λ, and ρ(η_opt) = ρ(1).

Minimum Error Probability for Empirical Weighting

We find the maximum of the function ρ(η) defined in (9).

Theorem 5.4. Let Assumptions A–D hold. Varying the threshold θ and the function η(·) under fixed λ and R(v) in g(x) of the form (3) with ηᵢ = η(N x̄ᵢ²/2), i = 1, 2, ..., n, we obtain that
$$\inf_\eta\,\min_\theta\,\operatorname*{plim}_{n\to\infty}\,(\hat\alpha_1 + \hat\alpha_2)/2 = \Phi(-\rho(\eta_{\rm opt})/2),$$
where
$$\eta_{\rm opt}(u) = \frac{\sigma(u)}{\pi(u)} \qquad\text{and}\qquad \rho^2(\eta_{\rm opt}) = 2\lambda\int_{+0}^{\infty}\frac{\sigma^2(u)}{\pi(u)}\, du. \tag{10}$$
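A minimal numerical sketch of Theorem 5.4 (ours, not from the text; λ and the single atom β² of R are arbitrary choices) computes σ(u), π(u), the weighting η_opt(u) = σ(u)/π(u), and the comparison ρ²(η_opt) versus ρ²(1), using the fact that the “biased” χ² densities f₁^β and f₃^β are the noncentral χ² densities with 1 and 3 degrees of freedom.

```python
import numpy as np
from scipy.stats import ncx2

lam, beta2 = 0.5, 4.0          # illustrative values: lambda and a single atom of R at beta^2
u = np.linspace(1e-4, 60.0, 20000)

f1 = ncx2.pdf(u, df=1, nc=beta2)   # biased (noncentral) chi-square densities
f3 = ncx2.pdf(u, df=3, nc=beta2)

sigma = beta2 * f3                 # sigma(u) for a single-point R(.)
pi = u * f1                        # pi(u)

eta_opt = sigma / pi               # optimal empirical weighting (Theorem 5.4)

rho2_opt = 2 * lam * np.trapz(sigma**2 / pi, u)        # formula (10)
G1, D1 = lam * np.trapz(sigma, u), 2 * lam * np.trapz(pi, u)
rho2_one = 4 * G1**2 / D1                              # no weighting, eta = 1

print(f"rho^2(eta_opt) = {rho2_opt:.4f}   rho^2(1) = {rho2_one:.4f}")
```

With these values the sketch reproduces the relation ρ(η_opt) ≥ ρ(1) that is discussed for identical contributions in Example 5 below.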
Proof. We seek the extremum of (9) by variation of η(u). The necessary condition of the extremum is ∞
∞ σ(u)δη(u) du = G(u)
D(η) 0
π(u)η(u)δη(u) du. 0
Hence, η(u) = const σ(u)/π(u). The constant coefficient does not affect the value of ρ(η). Set ηopt (u) = σ(u)/π(u). Let us prove
that ηopt (u) is bounded. By Assumption B, the supports of the distributions Rn (β 2 ) and R(β 2 ) are bounded. For χ2 densities, we have the inequality uf1β (u) ≥ f3β (u). It follows that σ(u) =
β 2 f3β (u) dR(β 2 ) ≤ u
β 2 f1β (u) dR(β 2 ) ≤ cπ(u)/2,
u ≥ 0. We see that the function ηopt (u) is bounded and continuous for u ≥ 0. Substituting this function in (10), we obtain the second relation in (7). For any other η(u) using the integral Cauchy– Bunyakovskii inequality, we find ∞ 2G(η) = 2λ
⎡ σ(u)η(u) du ≤ 2λ ⎣
0
∞
σ 2 (u) π(u)
∞ du
+0
=
⎤1/2 π(u)η 2 (u) du⎦
=
0
D(η)ρ(η0 ).
This completes the proof of Theorem 5.4. Example 4. Let γ > 0. Consider a special distribution of β > 0 with 2 γ β γ δ(β 2 − β 2 )dβ = exp − dR(β 2 ) = 2π 2 γ 1/2 γβ 2 β −1 dβ 2 . = Γ−1 (1/2) exp − 2 2 (
Using the integral representation of f1β (u), we find that
f1β (u) dR(β 2 ) =
g 1/2 2
gu
Γ−1 (1/2) exp − u−1/2 , 2
where g = γ/(1 + γ) and
β 2 f3β (u) dR(β 2 ) =
u 1+γ
f1β (u) dR(β 2 ).
Hence, σ(u) = π(u)/(1 + γ). If η(u) = 1 for all u > 0, we have 2γ ρ (1) = 1+γ 2
D(1) = 2(1 + γ) G(1),
vdR(v).
For the best weight function η(u) = ηopt (u), we obtain 2λ ρ (ηopt ) = 1+γ 2
2λ σ(u) du = 1+γ
vdR(v).
Thus, ρ(η_opt) = ρ(1). We conclude that, for this special case, the optimal weighting does not diminish the limit half-sum of the error probabilities.

Example 5. Let all variables contribute identically to the distance between populations, μᵢ² = μ⃗²/n, i = 1, ..., n. Then, dR(β²) differs from zero only at the point β² = Nμ⃗²/(2n), and ρ²(1) = Jβ²/(β² + 1). The best weighting function is
$$\eta_{\rm opt}(u) = \frac{\beta^2 f_3^\beta(u)}{u f_1^\beta(u)} = \beta\,\frac{\operatorname{th}(\beta\sqrt{u})}{\sqrt{u}}.$$
The following inequality is valid:
$$\int \frac{[f_3^\beta(u)]^2}{u f_1^\beta(u)}\, du \ge \frac{1}{1 + \beta^2}.$$
It is easy to examine that ρ(ηopt ) ≥ ρ(1). It is remarkable that in spite of identical contributions of variables to J, the optimal weighting provides the increase of ρ(η) and the decrease of plim n→∞
(α̂₁ + α̂₂)/2 owing to the effect of the suppression of large deviations of estimators.

Theorem 5.5. If Assumptions A–D hold and ρ(η_opt) = ρ(1), then there exists γ > 0 such that
$$\frac{dR(v)}{dv} = \Bigl(\frac{\gamma}{2}\Bigr)^{1/2}\Gamma^{-1}(1/2)\,\exp\Bigl(-\frac{\gamma v}{2}\Bigr)\, v^{-1/2}. \tag{11}$$
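Theorem 5.5 can be checked numerically: under the density (11) the optimal weighting should give no gain. The sketch below (ours; γ, λ, and the integration grids are illustrative choices) discretizes R as the gamma-type density (11) and compares ρ²(η_opt) from (10) with ρ²(1); up to the truncation error of the grids the two values coincide.

```python
import numpy as np
from scipy.stats import ncx2, gamma

lam, g = 0.5, 2.0                        # lambda and the parameter gamma of (11) (ours)
b2 = np.linspace(1e-3, 40.0, 800)        # grid over beta^2
rw = gamma.pdf(b2, a=0.5, scale=2.0 / g)           # density (11) of R
rw /= np.trapz(rw, b2)                             # renormalize on the finite grid

u = np.linspace(1e-3, 80.0, 3000)
F1 = ncx2.pdf(u[:, None], df=1, nc=b2[None, :])    # f_1^beta(u)
F3 = ncx2.pdf(u[:, None], df=3, nc=b2[None, :])    # f_3^beta(u)
sigma = np.trapz(b2 * F3 * rw, b2, axis=1)         # sigma(u), formula (7)
pi = np.trapz(u[:, None] * F1 * rw, b2, axis=1)    # pi(u), formula (7)

rho2_opt = 2 * lam * np.trapz(sigma**2 / pi, u)    # formula (10)
G1, D1 = lam * np.trapz(sigma, u), 2 * lam * np.trapz(pi, u)
rho2_one = 4 * G1**2 / D1
print(f"rho^2(eta_opt) = {rho2_opt:.4f},  rho^2(1) = {rho2_one:.4f}")   # nearly equal
```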
Proof. We compare (9) and (10). The inequality ρ(ηopt ) ≥ ρ(1) is the Cauchy–Bunyakovskii inequality for the functions π(u) and σ(u)/ π(u). The case of the equality implies π(u) = σ (u)/ π(u) almost for all u > 0. In view of the continuity, we obtain σ(u) = C1 π(u), where C1 > 0. Substituting this relation to (9), we find that C1 π(u) =
= π(u) + π(u)/u + 2π (u),
∂f1β (u) dR(β 2 ) ∂u u > 0.
(1 + u)f1β (u) dR(β 2 ) + 2u
We integrate this differential equation and obtain that π(u) = C2 u1/2 exp (−C3 u),
C2 , C3 > 0.
(12)
Let us substitute π(u) from definition, divide by u, and perform the Fourier transformation of the both parts of this equality. It follows that π(u) const , exp(iut) dt = χβ1 (t) dR(β 2 ) = u (C3 − it)1/2 t ≥ 0. Denote s = t/(1 − 2it). Substituting (12), we obtain const exp(isβ 2 ) dR(β 2 ) = . C3 + is (2C3 − 1)1/2 This relation holds, in particular, at the interval {s : Im s = 1/4 and |Re s| < 1/4}. The analytical continuation to Im s ≥ 0 makes it possible to perform the inverse Fourier transformation dR(v) = C4 dv
+∞
exp(−ivs) ds = [C3 − is(2C3 − 1)]1/2 −∞ γv
= C5 v −1/2 exp − , 2
where C4 , C5 > 0 and γ = 2C3 /(1 − 2C3 ) > 0. Normalizing, we obtain (11). The proof is complete.
Thus, the distribution (11) is the only limit distribution for which the effect of weighting by Theorem 5.4 produces no gain. Statistics to Estimate Probabilities of Errors Usually, the observer does not know, even approximately, the true values of either the parameters μ 1 μ 2 , or the law of distribution R(·). The functions σ(u) and π(u) involved in Theorem 5.4 also are not known. Let us construct their estimators. We describe the set of the observed values x ¯2i in terms of the sample-dependent function n (u) = n−1 ind(N x ¯2i /2 ≤ u). Q i
Remark 1. For each u ≥ 0, the limit exists n (u) = F β (u) dR(β 2 ), Q(u) = l.i.m. Q 1 n→∞
u≥0
(13)
(here and in the following, l.i.m. stands for the limit in the square mean). Indeed, since the random values ui are distributed as F1β (u) n (u) tends to with β 2 = N μ2i /2, we find that the expectation of Q n (u) ≤ 1. the right-hand side of (13). It is easy to see that var Q Statement (13) is grounded. From (10), it follows that the function Q(u) is monotone, continuous, increases as u1/2 for small u > 0, and Q(u) → 1 exponentially as u → ∞. The following integral relation holds: ∞ (1 − Q(u)) du = (1 + β 2 ) dR(β 2 ). 0
Lemma 5.2. The functions σ(u) and π(u) can be expressed in terms of the derivatives of Q(u) as follows:
$$\sigma(u) = 2u\,Q''(u) + (1 + u)\,Q'(u), \qquad \pi(u) = u\,Q'(u). \tag{14}$$
These relations follow from (13), (7), and (6).
Remark 2. If the function η(u) is twice differentiable, u ≥ 0, then G(η) = λ [(u − 1)η(u) − 2uη (u)] dQ(u) = ∞ = λ [η(u) + (u − 3)η (u) − 2uη (u)] (1 − Q(u)) du − λη(0); 0 D (η) = 2λ uη 2 (u) dQ(u) = ∞ = 2λ
[ η(u) + 2uη (u)] η(u)(1 − Q(u)) du.
0
For a special case when η(u) = 1 for all u ≥ 0, we obtain (u − 1) dQ(u),
G(1) = J/2 = λ
∞ D(1) = J = 2 λ (1 − Q(u)) du. 0
Remark 3. The weighting function η_opt(u) can be written in the form
$$\eta_{\rm opt}(u) = \frac{2u\,Q''(u) + (1 + u)\,Q'(u)}{u\,Q'(u)}, \tag{15}$$
and the corresponding limit “effective Mahalanobis distance” is
$$\rho^2(\eta_{\rm opt}) = 2\lambda\int_{+0}^{\infty}\frac{\bigl[2u\,Q''(u) + (1 + u)\,Q'(u)\bigr]^2}{u\,Q'(u)}\, du. \tag{16}$$
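Formula (15) suggests a plug-in procedure: estimate Q from the observed x̄ᵢ², smooth it, and differentiate. The following sketch is only an illustration of that idea under assumptions of our own (the simulated data, the kernel smoother, and the clipping of negative weights are not prescribed by the text).

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Simulated setting (ours): n variables, samples of size N from each population,
# independent unit-variance coordinates, a fraction of informative variables.
n, N = 2000, 100
mu = np.where(rng.random(n) < 0.3, np.sqrt(2.0 * 5.0 / N), 0.0)   # N*mu_i^2/2 = 5 or 0
xbar = mu + rng.normal(scale=np.sqrt(2.0 / N), size=n)            # xbar_1 - xbar_2
u_i = N * xbar**2 / 2.0

# Smooth estimate of Q'(u) from the empirical distribution of u_i,
# then numerical differentiation for Q''(u).
q_prime = gaussian_kde(u_i)
u = np.linspace(0.05, np.quantile(u_i, 0.99), 400)
qp = q_prime(u)
qpp = np.gradient(qp, u)

# Plug-in version of the unimprovable weighting (15).
eta_opt = (2.0 * u * qpp + (1.0 + u) * qp) / (u * qp)
eta_opt = np.clip(eta_opt, 0.0, None)     # keep weights nonnegative for stability

for k in range(0, 400, 80):
    print(f"u = {u[k]:6.2f}   eta_opt(u) ~ {eta_opt[k]:6.3f}")
```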
As a corollary of Theorem 5.4, we can formulate the following assertion. Theorem 5.6. Under Assumptions A–D for g(x) of the form ¯2i /2), i = 1, 2, . . . , n, we have (3) with weights ηi = η(N x ρ(ηopt ) α 1 + α 2 , ≥Φ − plim 2 2 n→∞
where ρ(ηopt ) is defined by (16), and the equality holds for η(u) defined by (15). This theorem presents the limit form of asymptotically unimprovable discriminant function that can be constructed using ¯ 2i of variables to the square distance the empiric contributions x between samples. Contribution of a Small Number of Variables If the observer knows parameters of the populations exactly, then to minimize the discrimination errors, one needs all variables. For a sample discrimination rule with no weighting of variables, the limit probability of the discrimination error is not minimum, and the problem arises of choosing the best subset of variables minimizing the discrimination error. Under the problem setting accepted above, sums of the increasing number of variables produce nonrandom limit contributions to the discrimination. However, the contributions of separate variables remain essentially random. To obtain stable recommendations concerning the selection, we gather variables with neighboring values of μ ¯2i and x ¯2i into groups that are sufficiently large to have stable characteristics. Let us investigate the effect of an exclusion of an increasing number k of variables from the consideration. More precisely, consider two sequences (1) of the discrimination problems ∼
such that the second sequence is different by k = n − k variables with the same other arguments. Assume that k/n → γ ≥ 0 as n → ∞. In this section, we suppose that γ is small. Let us mark the characteristics of the second sequence by the sign tilde. Contribution of Variables in Terms of Parameters Suppose that all the excluded variables have close values of μ2i ≈ 2/N v0 , so that the new limit function is ∼
R(v) = ∼
R(v) − γ ind(v > v0 ) . 1−γ
(17)
Let δz = z − z denote the change of a value z when we pass from the first sequence to the second one. Assume that γ is small, and
let us study the effect of the exclusion of variables, keeping only first-order terms in γ → 0. Then, γ = −δγ/γ,
δR(v) = [ind(v − v0 ) − R(v)]δγ/γ,
δG(η) = v0 η(v0 )δγ,
δD(η) = 2(v0 + 1) η 2 (v0 )dγ,
where G(η) and D(η) are defined by (5). We set η(v) = 1 for all v ≥ 0. For a problem with the known set {μ2i } of contributions, we obtain δρ2 (η) =
2J [(J + 4γ) v0 − J] δγ, (J + 2γ)2
where J are defined by (2). Proposition 1. Let Assumptions A–D hold, η(v) = 1 for all v ≥ 0, and a small portion of variables be excluded by a priori values of μ2i so that (17) holds. Then, the limit error probability α(η) = Φ(−ρ(η)/2) is decreased by the exclusion of variables (17) if and only if the inequality holds v0 < J/(J + 4λ). In order to decrease the limit probability of the discrimination error, it should be recommended to exclude variables with μ2i <
J 2 . N J + 4λ
Contribution of Variables in Terms of Statistics Now we investigate the contribution of variables when a number q of variables with close sample characteristics are excluded. We assume that q/n → γ ≥ 0, where γ is small. Let δ be the symbol of variation when we pass to the decreased number of variables, so that γ = −δλ/λ, and δQ(u) = [ind(u ≥ u0 ) − Q(u)] δλ/λ,
(18)
¯2i /2 for all excluded variables. Keeping only where u0 = plim N x n→∞
first-order terms, we have δ G(η) = [(u0 − 1)η(u0 ) − 2u0 η (u0 )]δγ, u0 δD(η) = 2δγ [η(u) + 2uη (u)]η(u) du = 2u0 η 2 (u0 )δγ. 0
Let η(u) = 1 for all u > 0. Then, we have δJ = 2δG(1) = 2(u0 − 1) δγ,
δD(1) = 2u0 δγ.
We obtain δρ2 (1) =
2J [u0 (J + 4γ) − 2(J + 2γ)]δγ. (J + 2γ)2
Proposition 2. Let Assumptions A–D hold, η(u) = 1 for all u > 0, and a small portion of variables be excluded by x¯2i so that (18) holds. Then, α(η) = Φ(−ρ(η)/2) decreases if and only if u0 < 2
J + 2γ , J + 4γ
and to decrease the limit probability of the discrimination error, it should be recommended to exclude variables with x ¯2i <
K 2 , N K + 2γ
where K = J + 2λ. Selection of Variables by Threshold We now consider the selection of a substantial portion of variables. Let q be the number of variables left for the discrimination ¯2i /2 > that were selected by the rule vi = N μ2i /2 > τ 2 and ui = N x 2 2 τ , where τ is the selection threshold, k ≤ n. To investigate the selection effect in our consideration, we apply the weighting function of the form η(u) = ind(u ≥ τ 2 ). Selection by A Priori Threshold Proposition 3. Given τ under Assumptions A–D for the weighting function, η(v) = ind(v ≥ τ 2 ) with v = vi = N x ¯2i /2, the limit exists δ = lim
n→∞
k = 1 − R(τ 2 ). n
Indeed, by (5), we have k η(vi ) = = n−1 n
η(v) dRn (v) = 1 − Rn (τ 2 ),
i
where Rn (v) → R(v) for all v > 0. To treat the selection problem, let us redefine G(δ) = G(η),
D(δ) = D(η), ρ(δ) = ρ(η), J(δ) = 2γ vη(v) dR(v).
By (5), we have G(δ) = λ vdR(v), v>τ 2
D(δ) = 2γ
v>τ 2
(v + 1) dR(v).
Let us find the threshold τ 2 and the limit portion δ of variables left for the discrimination that minimize α(δ) = Φ(−ρ(δ)/2). Denote by b1 and b2 the left and right boundaries of the distribution R(v): b1 = inf{v ≥ 0 : R(v) > 0}, b2 = inf{v ≥ 0 : R(v) = 1}. Theorem 5.7. Suppose Assumptions A–D hold and the discrimination function (3) is used with nonrandom weighting coefficients of the form ηi = ind(N μ2i /2 ≥ τ 2 ). Then, under the variation of δ, the condition b1 < J(1)/(J(1) + 4γ)
(19)
is sufficient for α(δ) = Φ(−ρ(δ)/2) to reach a minimum for δ such that 0 < δ ≤ 1. The derivative α (δ) exists almost everywhere for 0 < δ ≤ 1 and its sign coincides with the sign of the difference J(δ) − τ 2 (J(δ) + 4γδ). Proof. For δ = 1, we have τ 2 = 0 and η(v) = 1 for all v ≥ 0. Hence, (J(δ) − J(1))/2γ = − (1 − η(v)) vdR(v) ≥ −(1 − δ)τ 2 .
The minimum of α(δ) is attained if ρ2 (δ) = J 2 (δ)/(J(δ) + 2γδ). We find ρ2 (δ) − ρ2 (1) ≥ c[J 2 (1) − τ 2 J(δ) (J(1) + 2γ) − 2γτ 2 J(δ)],
c > 0.
If δ → 1, then τ 2 → b1 , J(δ) → J(1), and if (19) holds, then there exists δ < 1 such that ρ2 (δ) ≥ ρ2 (1). On the other hand, δ → 0 implies that J(δ) → 0 and ρ2 (δ) ≤ J(δ) → 0. The function J(δ) is continuous for all δ, 0 < δ ≤ 1. Therefore, ρ2 (δ) reaches a minimum for 0 < δ < 1. The derivatives α (δ) and ρ (δ) exist at dτ 2 all points where exists, i.e., at all points where dR(τ 2 ) > 0. dδ For these δ, we have dJ(δ) = 2λτ 2 , dδ dρ2 (δ) 2λJ(δ) [τ 2 (J(δ) + 4λδ) − J(δ)]. = dδ (J(δ) + 2λδ)2 The second statement of the theorem follows. The proof of the theorem is complete. Example 6. Consider the case of a portion r ≥ 0 of noninformative variables. The function R(v) = r ≥ 0 for 0 ≤ v < b2 and R(b2 ) = 1. The value J(1) = 2λ(1 − r) b2 . For δ < 1 − r, we have τ 2 = b2 , J(δ) = 2λδb2 , ρ2 (δ) = ρ2 (1)δ, and the increase of δ decreases α(δ). For r > 0 and δ ≥ 1 − r, we have b1 = 0, and (19) is valid. We have J(δ) = 2λ(1 − r)b2 independently on δ, and the decrease of δ decreases α(δ). For r = 0, we have b1 = b2 , relation (19) does not hold, J(δ) = 2λ δb2 , and the decrease of δ increases α(δ). The selection is not purposeful if r = 0. Empirical Selection of Variables We consider a selection of k ≤ n variables with sufficiently large values of the statistics u i = x ¯2i ≥ 2τ 2 /N , where τ 2 is a fixed threshold. This problem is equivalent to the discrimination with the weight coefficients η(u) = ind(u ≥ τ 2 ), where u = ui , i = 1, 2, . . ., n. The number of variables left in the discriminant function is k = η(u1 ) + η(u2 ) + · · · + η(un ).
Lemma 5.3. If Assumptions A–D hold and the discriminant function (3) is used with the weights ηi = ind(ui ≥ τ 2 ), i = 1, . . ., n, then the limit exists k = 1 − Q(τ 2 ), n→∞ n
δ = plim
where Q(·) is defined by (13). Proof. We find that k E = [ η(u) dF1β (u) du] dR(β 2 ) + o(1) = n = η(u) dQ(u) + o(1) = 1 − Q(u) + o(1).
The ratio k/n is a sum of independent random values, and its variance is not greater than 1/n → 0. This proves the lemma. One can see that τ 2 = τ (δ) is a function decreasing monotonously with the increase of δ. The function η(·) is determined by the value of δ uniquely. Let us redefine G(δ) = G(η), D(δ) = D(η), ρ(δ) = ρ(η), and α(δ) = α(η). Remark 4. Under Assumptions A–D with η(u) = ind(u ≥ τ ), we have 2
∞
β 2 (1 − F3β (τ 2 )) dR(β 2 ), ∞ D(δ) = 2λ π(u) du = 2λ [ uf1β (u)du] dR(β 2 ) 2 2 u>τ τ ) * β 2 = 2λ (1 − F3 (τ )) + β 2 (1 − F5β (τ 2 ) dR(β 2 ). G(δ) = λ
τ2
σ(u) du = λ
We use (13) to express these functions in terms of the function Q(u).
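The quantities in Remark 4 lend themselves to a direct numerical scan over the selection threshold. In the sketch below (ours; the two-atom distribution R, its weights, and λ are illustrative assumptions) the limit error α(δ) = Φ(−G(δ)/√D(δ)) is evaluated for a grid of thresholds τ².

```python
import numpy as np
from scipy.stats import ncx2, norm

lam = 0.5
atoms = np.array([0.5, 8.0])       # atoms of R(.) in beta^2 (ours, for illustration)
weights = np.array([0.7, 0.3])     # many weak variables, few strong ones

u = np.linspace(1e-3, 80.0, 8000)
sigma = sum(w * b2 * ncx2.pdf(u, df=3, nc=b2) for w, b2 in zip(weights, atoms))
pi = sum(w * u * ncx2.pdf(u, df=1, nc=b2) for w, b2 in zip(weights, atoms))

def alpha_of_threshold(tau2):
    """Limit half-sum of errors when only variables with u_i >= tau^2 are kept
    (Remark 4: G(delta) = lam * int sigma, D(delta) = 2 lam * int pi over u > tau^2)."""
    mask = u >= tau2
    G = lam * np.trapz(sigma[mask], u[mask])
    D = 2.0 * lam * np.trapz(pi[mask], u[mask])
    return norm.cdf(-G / np.sqrt(D))

taus = np.linspace(0.0, 20.0, 201)
alphas = [alpha_of_threshold(t) for t in taus]
best = int(np.argmin(alphas))
print(f"no selection : alpha = {alphas[0]:.4f}")
print(f"best tau^2   = {taus[best]:.2f},  alpha = {alphas[best]:.4f}")
```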
Remark 5. Under Assumptions A–D, we have ∞ G(δ) = λ(τ − 1)(1 − Q(τ )) + λ (1 − Q(u)) du; 2
2
τ2
∞ 2 D(δ) = 2λ(1 − Q(τ )) + λ (1 − Q(u)) du. τ2
By virtue of Theorem 5.3, random values α 1 and α 2 converge in probability to the limits depending on δ. Remark 6. Consider the problem of the influence of an informational noise on the discrimination. We modify the sequence of problems (1) by adding a block of independent variables number i = 0 and assume that the random vector x0 is distributed as N( μ0ν , Ir ), where μ 0ν ∈ Rr , ν = 1, 2, and Ir is the identity matrix of size r × r. To simplify formulas, suppose that μ 01 and μ 02 do 0 0 0 2 not depend on n. Denote J = ( μ1 − μ 2 ) . The discriminant function is modified by an addition of a normal variable distributed as N(±J 0 /2, J 0 ). Suppose that all remaining variables are noninformative: let R(v) = 1 for all v > 0. In this case, as n → ∞, J0 , α(δ) → Φ − 2 J 0 + D(δ) where D(δ) is defined by Remark 4. We also have Q(u) = F10 (u), δ = 1 − F10 (τ 2 ), and D(ρ) = 2λδ + 4λf30 (τ 2 ). Here the second summand is added to the variance of g(x); this additional term is produced by the selection of those variables that have the greater deviations of estimators (this effect was analyzed in detail in [62]). For a small portion δ of variables left, the selection substantially increases the effect of informational noise (as ln 1/δ). Theorem 5.8. Suppose Assumptions A–D hold and the discriminant function (3) is used with the weighting coefficients of the form ηi = ind(N x ¯2i /2 ≥ τ 2 ). Then, under variation of δ
1. the condition
$$\frac{2\int v\,\exp(-v/2)\, dR(v)}{\int \exp(-v/2)\, dR(v)} < \frac{\int v\, dR(v)}{\int (v+1)\, dR(v)} \tag{20}$$
is sufficient for the value $\alpha(\delta) = \Phi\bigl(-G(\eta)/\sqrt{D(\eta)}\bigr)$ to have a minimum for 0 < δ < 1;
2. the derivative α′(δ) exists for 0 < δ ≤ 1, and the sign of α′(δ) coincides with the sign of the difference
$$\pi(\tau^2)\int_{\tau^2}^{\infty}\sigma(u)\, du \;-\; 2\,\sigma(\tau^2)\int_{\tau^2}^{\infty}\pi(u)\, du. \tag{21}$$
Proof. If δ = 1, then τ 2 = 0, G(1) = λ β 2 dR(β 2 ) = J/2, and D(1) = 2λ (β 2 + 1) dR(β 2 ) = J + 2λ. Using relations (6), we calculate the derivatives as τ 2 → +0 df1β (τ 2 ) = a exp −β 2 /2 , dτ dG(δ) 2a = −λ β 2 exp −β 2 /2 dR(β 2 ), 3 dτ 3 dD(δ) 2a exp −β 2 /2 dR(β 2 ), = −2λ 3 dτ 3 τ
√ where a = 2[ 2Γ(1/2)]−1 . dα(δ) at the point τ 2 = 0, we find dτ 3 that this derivative is positive if condition (20) holds. It follows that τ 2 is monotone depending on δ and α(δ) < α(1) for some δ < 1, τ 2 > 0. The first statement is proved. Calculating the derivative
Further, we notice that ∞ ∞ 2 ρ (δ) = 2λ [ σ(u) du] [ π(u) du]−1 . 2
τ2
τ2
Differentiating ρ2 (δ) we obtain the second statement of the theorem. This completes the proof. Example 7. The function R(v) = r for 0 ≤ v < b = β 2 and R(b) = 1. The condition (19) takes the form r > (2b + exp (b/2) − 1)−1 (2b + 1) . The selection is purposeful for sufficiently large r and large b. For variables with identical nonzero contributions, we have r = 0 and the inequality does not hold. In this case, we find that ρ2 (δ) (1 − F3β (τ 2 ))2 (1 + β)2 . = ρ2 (1) (1 − F1β (τ 2 )) + β 2 (1 − F5β (τ 2 )) But F5β (τ 2 ) ≤ F3β (τ 2 ). Replacing F5β by F3β , we obtain the inequality ρ2 (δ) ≤ 1 − F3β (τ 2 ) < 1. 2 ρ (1) The minimum of α(δ) is attained for δ = 1, that is, using all variables. Example 8. Consider the special limit distribution (11) when the derivative R (v) exists. In this case, σ(u) = π(u)/(1 + γ). In the inequality (20), the left-hand side equals the right-hand side, and the sufficient condition for the selection to be purposeful is not satisfied. The value 2λ ρ (δ) = 1+γ
∞
2
σ(u) du. τ2
One can see that α(δ) is strictly monotone, decreasing with the decrease of δ and the increase of τ². The minimum of α(δ) is attained when all variables are used.

Remark 7. Let us rewrite the selection conditions (19) and (20) in the form
$$b < J/(J + 4\lambda), \qquad \frac{2\int v\,\exp(-v/2)\, dR(v)}{\int \exp(-v/2)\, dR(v)} < J/(J + 2\lambda).$$
The left-hand side of the second inequality has the meaning of the mean contribution of weakly discriminating variables. It can be compared with the first inequality. One may see that for the “good” discrimination, when ∫v dR(v) ≫ 1, the boundary of the purposefulness of the selection using estimators is twice as small as under the selection by parameters.

The sufficient condition (20) of the purposeful selection involves quantities that are usually unknown to the observer. Let us rewrite it in the form of limit functions of estimators. Denote
$$w(u) = 2\ln\bigl[u^{-1/2}\exp(u/2)\,Q'(u)\bigr].$$
Theorem 5.9. Suppose conditions A–D are fulfilled and the discriminant function (3) is used with the weights ηᵢ = ind(N x̄ᵢ²/2 ≥ τ²). Then, under the variation of δ, the condition
$$(1 - 2w'(0))\int_0^{\infty} u\, dQ(u) > 1 \tag{22}$$
is sufficient for α(δ) to attain a minimum for 0 < δ < 1. The minimum of α(δ) is attained for δ = δ_opt and τ = τ_opt such that
$$\delta_{\rm opt} = 1 - Q(\tau_{\rm opt}^2) \qquad\text{and}\qquad w'(\tau_{\rm opt}^2) = G(\delta_{\rm opt})/D(\delta_{\rm opt}).$$
SELECTION OF VARIABLES BY THRESHOLD
w (0) =
219
+ β exp(−β /2) dR(β ) exp(−β 2 /2) dR(β 2 ). 2
2
2
The relation (22) readily follows. The second assertion of the theorem follows from Theorem 5.8. Thus, the investigation of the empirical distribution function n (u) makes it possible to estimate the effect of selection of variQ ables from sample data. If inequality (20), holds, then the selection is purposeful (in the limit). Using (20), we are able to approximate 2 the best limit selection threshold τopt and the best limit portion δopt of chosen variables.
5.2. DISCRIMINANT ANALYSIS OF DEPENDENT VARIABLES
In this section, we present the construction of asymptotically unimprovable, essentially multivariate procedure of linear discriminant analysis of vectors with dependent components. For normal variables, this investigation was first carried out in 1983 and is presented first in [63] and then in [71]. In this section, we extend results of [63] to a wide class of distributions using the Normal Evaluation Principle described in Section 3.3. We consider the problem of discriminating n-dimensional normal vectors x = (x1 , . . ., xn ) from one of two populations S1 and S2 with common unknown covariance matrix Σ = cov(x, x). We begin with the case of normal distributions. The discrimination rule is w(x) ≥ θ against w(x) < θ, where w(x) is a linear discriminant function, and θ is a threshold. The quality of the discriminant analysis is measured by probabilities of errors of two kinds (classification errors) α1 = P(w(x) < θ|S1 )
and α2 = P(w(x) ≥ θ|S2 )
(1)
for observations x from S1 and S2 . If the populations are normal N( μν , Σ), ν = 1, 2, with nondegenerate matrix Σ, then it is well known that, by the Neumann–Pearson lemma, the minimum of (α1 + α2 )/2 is attained with θ = 0 and w(x) of the form wNP (x) = ln
f1 (x) 2 )T Σ−1 (x − ( μ1 + μ 2 )/2), = ( μ1 − μ f2 (x)
where ν )T Σ−1 (x − μ ν )/2), fν (x) = (2π det Σ)−1/2 exp(−(x − μ are normal distribution densities,√ ν = 1, 2. The minimum of the μ1 − μ2 )T Σ−1 half-sum (α1 +α2 )/2 equals Φ(− J/2), where J = ( ( μ1 − μ 2 ) is “the square of the Mahalanobis distance.” The estimator of w(x) is constructed over samples X1 = {xm }, m = 1, 2, . . ., N1 and X2 = {xm }, m = 1, 2, . . ., N2 , of size N1 > 1 and
N2 > 1, N = N1 +N2 , from S1 and S2 . To construct an estimator of w(x), we use sample means ¯ ν = Nν−1 x
Nν
xm ,
xm ∈ Xν ,
ν = 1, 2
m=1
and the standard unbiased “pooled” sample covariance matrix C of the form C=
N2 − 1 N1 − 1 C1 + C2 , N −2 N −2
(2)
where Cν =
1 ¯ ν )(xm − x ¯ ν )T , ν = 1, 2. (xm − x Nν − 1 Xν
In applications, traditionally the standard “plug-in” Wald discriminant function is used ¯ 2 )T C −1 (x − (¯ ¯ 2 )/2). x1 + x w(x) = (¯ x1 − x However, it is well known that this function may be illconditioned, and obviously not the best even for low-dimensional problems (see Introduction). In this section, we choose a class of generalized, always stable discriminant functions replacing Σ−1 by a “generalized ridge estimator” of the inverse covariance matrices and develop a limit theory that can serve for the construction of improved and unimprovable discriminant procedures. This development is a continuation of researches initiated by A. N. Kolmogorov in 1968–1970 (see Introduction). Asymptotical Setting We apply the multiparametric technique developed in Chapter 3 and use the Kolmogorov asymptotics to isolate principal parts of the expected probabilities of errors with the purpose to find linear discriminant function, minimizing the probability of errors under the assumption that the dimension n → ∞ together with sample sizes N1 → ∞ and N2 → ∞ so that n/Nν → yν , ν = 1, 2.
Consider a sequence of discrimination problems ¯ ν , C, w(x), αν )n }, P = {(Sν , μ ν , Σ, Nν , Xν , x
ν = 1, 2,
n = 1, 2, . . . (we will not write out the subscripts n for the arguments of P). The n-dimensional observation vectors x = (x1 , . . ., xn ) are taken from one of two populations S1 and S2 ; the population centers are μ 1 = E1 x and μ 2 = E2 x, where (and in the following) E1 and E2 are expectation operators in S1 and S2 , respectively. Suppose P is restricted by the following assumptions (A–E). A. The populations Sν are normal N( μν , Σ), ν = 1, 2. B. For each n in P, all eigenvalues of Σ are located on the segment [c1 , c2 ], where c1 > 0 and c2 do not depend on n. C. For each n in P, the numbers N > n+2 and as n → ∞, the limits exist yν = lim n/Nν > 0, ν = 1, 2, and y = lim n/(N1 + n→∞
n→∞
N2 ) = y1 y2 /(y1 + y2 ), where y < 1. Denote μ =μ 1 − μ 2, = 1/2 μ x, G T Γ(C)¯
¯=x ¯1 − x ¯2, x =x ¯ T Γ(C)ΣΓ(C)¯ D x.
Consider the empiric distribution functions FnΣ (u) = n−1
n
ind (λi ≤ u)
i=1
of eigenvalues λi of Σ, i = 1, . . ., n, and the function Bn (u) =
n
μ2i /λi ind(λi ≤ u),
i=1
in the system of coordinates, in where μi are components of μ which Σ is diagonal, i = 1, 2, . . ., n.
D. For u > 0 as n → ∞, FnΣ (u) → FΣ (u) and Bn (u) → B(u) almost everywhere. Under conditions A–D, the limit exists J = B(c2 ) = lim μ T Σ−1 μ , n→∞
μ 2 ≤ c2 B(c2 ).
E. We consider the generalized discriminant function of the form ¯ 2 )T Γ(C) x − (¯ ¯ 2 )/2 , x1 + x w(x) = (¯ x1 − x
(3)
where the matrix Γ(C) depends on C so that Γ(C) is diagonalized together with C and has eigenvalues Γ(λ) corresponding to the eigenvalues λ of C, where the scalar function Γ : R1 → R1 is
(1 + ut)−1 dη(t),
Γ = Γ(u) = t≥0
and η(t) is a function of finite variation on [0, ∞) not depending on n. In addition, we assume that the function η(t) is differentiable at some neighborhood of the point t = 0 with t|η (t)| ≤ b, and a sufficient number of moments exist βj =
tj |dη(t)|,
j = 1, 2, . . .
Under Assumptions A–E, the probabilities of discrimination errors are
E1 w(x) − θ α1 = Φ − var w(x)
,
θ − E2 w(x) α2 = Φ − var w(x)
,
where the conditional expectation operators E1 and E2 and conditional variance var w(x) (identical in both populations) are calculated for fixed samples Xν , ν = 1, 2.
The half-sum α = (α1 + α2 )/2 reaches the minimum for θ = θnopt = 1/2 (E1 w(x) + E2 w(x)) o
o
o
o
¯ T2 Γ(C)x ¯2 − x ¯ T1 Γ(C)x ¯ 1 ), = 1/2 (x o
¯ν = x ¯ν − μ where x ν , ν = 1, 2, and for θ = θnopt α = α(θ) =
α1 (θ) + α2 (θ) D). = Φ(−G/ 2
(4)
We study sample-dependent probability of error (4). Let us show and D have decreasing variance, find their that random values G 2 /D. limit expressions, and minimize the principal part of the ratio G Moments of Generalized Discriminant Function −1 Let us use resolvents H(C) = (I + tC) of the matrices C and matrices Γ = Γ(C) = (I + tC)−1 dη(t), presenting linear
transformation of H(C). We consider spectral functions hn (t) = n−1 tr H(t),
ϕn (t) = n−1 tr ΣH(t),
¯ T H(t) x ¯, kn (t) = x
bn (t) = μ T H(t) μ ,
t ≥ 0,
¯ H(t)ΣH(t ) x ¯, Φn (t, t ) = μ H(t)ΣH(t ) μ , Ψn (t, t ) = x t, t ≥ 0. T
T
Remark 1. By the well-known Helmert transformation, the matrices (2) can be transformed to the form S = (N − 2)
−1
N −2
xm xTm ,
(5)
m=1
where independent xm ∼ N(0, Σ), m = 1, 2, . . ., N − 2. Denote y = n/(N − 2), N0 = (N1 − 1)(N2 − 1)/(N1 + N2 − 2), y0 = lim n/N0 . n→∞
Lemma 5.4. Under Assumptions A–D as n → ∞, the variances of functions hn (t), bn (t), kn (t), Φn (t, t ), and Ψn (t, t ) decrease as O(N −1 ). Proof. The assertions of this lemma follows immediately from Theorem 3.13 of Chapter 3 since all these functions present functionals from the class L3 . Denote sn = sn (t) = 1 − y (1 − hn (t)) > 0. Remark 2. Under Assumptions A–D for each t > 0 as n → ∞, we have EH(t) = (I + tsn (t)Σ)−1 + Ωn , hn (t) = (1 + tsn (t)u)−1 dFnΣ (u) + ωn (t),
(6)
where ωn (t) and Ωn are polynomials of fixed degree with vanishing coefficients. This is a corollary of Theorem 3.2 from Chapter 3. Remark 3. Under Assumptions A–D for each t ≥ 0 s(t) = l.i.m. sn (t) = 1 − y + yh(t)
h(t) = l.i.m. hn (t), n→∞
n→∞
(where and in the following, l.i.m. denotes the limit in the square mean) and the equation holds h(t) =
(1 + ts(t)u)−1 dFΣ (u)
(7)
Remark 4. If y < 1, then the following is true. 1. Equation (7) has a unique solution h(t) that presents an analytical function h(z) regular everywhere except for z < 0. 2. The function h(z) satisfies the H¨older condition on the plane of complex z and decreases as O(1/|z|) when |z| → ∞. 3. For real z < 0, the quantity Im h(z) = 0 everywhere, √ except the segment [v1 , v2 ], where v1 = 1/(c2 (1 + y)), v2 = 1/ √ (c1 (1 − y)). These statements follow from Theorem 3.2 in Chapter 3.
Lemma 5.5. Under Assumptions A–D for each t ≥ 0, the limits exist b(t) = l.i.m. bn (t) = n→∞
(1 + ts(t)u)−1 dB(u),
1 − h(t) , n→∞ ts(t) y (1 − h(t)) . ϕ(t) = l.i.m. ϕn (t) = l.i.m. n−1 tr (ΣH(t)) = n→∞ n→∞ ts(t) k(t) = l.i.m. kn (t) = b(t) + y0
Proof. The first assertion follows readily from (6) and Assumption D. Further, in view of (6) we have EN −1 trΣH(t) = EN −1 trΣ(I + ts(t)Σ)−1 + ωn = =
n − tr(I + ts(t)Σ)−1 1 − h(t) + ωn = y + ωn N ts(t) ts(t)
with different ωn → 0. This is the second statement. For normal distributions, the matrix H(t) = (I + tC)−1 is ¯ . Using the previous lemma statement, we obtain independent of x ¯ T H(t)¯ Ex x=E
1 − h(t) 1 tr(ΣH(t)) = y0 , N0 ts(t)
t > 0.
Lemma 5.5 is proved. Lemma 5.6. Under Assumptions A–D for any fixed t, t ≥ 0, we have bn (t) − bn (t ) + ωn (t, t ), t − t kn (t) − kn (t ) ¯ T H(t)ΣH(t )¯ s(t)s(t )E x x= + ωn (t, t ), t − t s(t)s(t )E μ T H(t)ΣH(t ) μ=
(8) (9)
where the expressions in the right-hand sides are extended by continuity to t = t , and ωn (t, t ) are some polynomials of fixed degree with coefficients vanishing as n → ∞.
Proof. In view of Remark 1, we calculate the expressions in the left-hand sides of (8) and (9) for H(t) = (I + tS)−1 and H(t ) = (I + t S)−1 with S defined by (5). In the asymptotic approach, it is convenient to prove the lemma for samples Xν with excluded vectors xν ∈ Xν , ν = 1, 2. Set t > 0, t = t . Denote ¯ν x ¯ Tν , S ν = S − (N − 2)−1 x ¯ Tν H(t) x ¯ν , ψν (t) = x
H ν (t) = (I + tS ν )−1 ,
sn (t) = E (1 − tψν (t)),
ν = 1, 2.
Then, H(t) = (1−tψν )H ν (t), ν = 1, 2. By Lemma 3.1, var ψν (t) → 0 as n → ∞, ν = 1, 2. Using the independence of H(t) from sample ¯ ν , ν = 1, 2, we obtain means x ¯ Tν H(t ) = t (H(t) − H(t ))/(t − t) = t E H(t)SH(t ) = t E H(t)¯ xν x ¯ Tν H ν (t ) = xν x = t E (1 − tψν (t)) (1 − t ψ(t ))H ν (t)¯ = tsn (t)sn (t ) E H ν (t)ΣH ν (t ) + Ωn ,
(10)
where (different) matrices Ωn are such that Ωn → 0. We use the relations H(t) = (1 − ψν (t))H ν (u), ν = 1, 2 and conclude that the right-hand side of (1) equals tsn (t)sn (t )E H(t)ΣH(t ) + Ωn , where the new remainder term is such that Ωn → 0 as n → ∞. Multiplying these matrix relations by μ from the left and from the right we obtain the first lemma statement. To evaluate the left-hand side of (9), we deal with sample covariance matrices C of the form (5). For normal distributions, ¯ do not depend on C. Therefore, we obtain sample means x E Ψ(t, t ) = E Φ(t, t ) + E N0−1 tr(ΣH(t)ΣH(t )). Using (10), we obtain the second lemma statement. The proof of Lemma 5.6 is complete. Limit Probabilities of Errors Now we pass to integrals with respect to dη(t). Let us perform the limit transition and find the limiting expressions for = 1/2 μ =x ¯ T Γ(C)ΣΓ(C)¯ G T Γ(C)¯ x and D x.
Theorem 5.10. Under Assumptions A–E, the following limits exist l.i.m. θnopt = θopt , n→∞
T
Γ(C)¯ x = 2G, l.i.m. μ n→∞
¯ T Γ(C)ΣΓ(C)¯ l.i.m. x x = D, n→∞
(11)
where 1 − h(t) θ = 1/2(y2 − y1 ) dη(t) , ts(t) G = 1/2 b(t) dη(t), k(t) − k(t ) dη(t)dη(t ), D= s(t)s(t ) (t − t) opt
(12)
and the latter integrand is extended by continuity to t = t . o
o
o
o
o
¯ T2 Γ x ¯2 −x ¯ T1 Γ x ¯ 1 ), where Proof. The functionals θnopt equal 1/2 (x
¯ ν = xν − x μν , ν = 1, 2, and Γ(C) = (I +tC)−1 belongs to the class L3 of functionals allowing ε-normal evaluation with the polynomial ε3 (t) → 0. By Lemma 5.4, its variance vanish as n → ∞. For normal variables, we have E θnopt = 1/2(N2−1 − N1−1 )E trΣΓ(C), and by Lemma 5.5, this quantity tends to the first expression in (12). This is the first lemma statement. Next, for normal distributions, we have T −1 dη(t) = E bn (t) dη(t) → b(t) dη(t), EG = E μ (I+tC) μ and we obtain the second relation in (11). For the third functional, we apply Lemmas 5.5 and 5.6. As n → ∞ we have sn (t) → s(t) > 1 − y > 0. Dividing by s(t) and s(t ), we obtain the third limit in (11). Theorem 5.10 is proved.
Theorem 5.11. Suppose Assumptions A–E hold, D > 0, the discriminant function (3) is used, and the threshold θ is chosen for each n so that it minimizes (α1 + α2 )/2. Then, √ plim (α1 + α2 )/2 = Φ(− J eff /2), n→∞
where J eff = J eff (η) = 4G2 /D,
and G = G(η) and D = D(η) are defined by (12). Proof. The minimum of (α1 + α2 )/2 is attained for θ = θnopt , where θnopt is defined by (11). The statement of Theorem 5.11 immediately follows from Theorem 5.10. Example 1. Let us choose a special form of η(t): η(t ) = 0 for t ≤ t, and η(t ) = t for t ≥ t. Then, Γ(C) = t (I + tC)−1 is a ridge estimator of the matrix Σ−1 . In this case, θopt = 1/2 (y2 −y1 ) (1 − h(t))/s(t), G = 1/2 t b(t), and D = −t2 s(t)−2 k (t) > 0.
Example 2. Let Σ = σ 2 I, σ > 0 for all n = 1, 2, . . ., and let η(t) = 1 for all arguments t > 0. Then, we have Γ(C) = I, G = J/2, D = J + y1 + y2 , and θopt = (y2 − y1 )/2 in agreement with limit formula J eff = J eff (1) = J/2 J + y1 + y2 for independent components of normal x (see Introduction). Example 3. Let η(t ) = ind t ≤ t), where t → ∞. This corresponds to the transition to the standard discriminant function. As t → ∞ we find that h(t) → 0, s(t) → 1 − y, and tb(t) → J/(1 − y), where J = lim μ T Σ−1 μ . We obtain n→∞
G → J/2(1 − y)−1 ,
D → (J + y1 + y2 )(1 − y)−3 ,
and J eff → J 2 (1 − y)/(J + y1 + y2 ), in agreement with the well-known Deev formula (see Introduction). Example 4. Let η(t ) = ind (t ≤ t), and consider a special case when μ2i /λi = J/n for all components μi of the vector μ in the system of coordinates where Σ is diagonal, where λi are the corresponding eigenvalues of Σ, i = 1, 2, . . ., n (the case of
equal contributions to the Mahalanobis distance). Suppose that the limit spectrum of sample covariance matrices is given by the “ρ-model” that was described in Section 3.1. Then, the dependence of J eff on t can be found in an explicit form as follows: J eff = J eff (h) = J 2
J − y + 2yh − (ρ + y)h2 , J + y1 + y2
where 2h = [1 + ρ + d(1 − y)t *−1 , + (1 + ρ + d(1 − y) t)2 + 4(dy t − ρ) and d = σ 2 (1 − ρ)2 , σ > 0, and ρ < 1 are parameters of the model. We note that J eff (h) is a monotone function of h, and the maximum is attained for h = hopt = y (ρ + y)−1 , when t = topt = ρ/(dy) for ρ > 0 and y > 0. The maximum value is eff
J (t
opt
ρy J2 . 1− )= J + y1 + y2 ρ+y
(13)
This limit formula presents a version of Deev’s formula to this special model and asymptotically optimized discriminant function. The ratio 1 + y 2 /(ρ + y(1 − ρ) − y 2 ) characterizes the gain in the limit provided by the improved discriminant procedure as compared with the standard one. In the case when ρ = 0, the value J eff = J 2 /(J + y1 + y2 ); this corresponds to the discrimination in the system of coordinates where Σ = I for independent variables (although this fact is unknown to the observer). In the case when y = 1, we have J eff = J 2 /[(1 + ρ)/(J + y1 + y2 )] in contrast to the standard discriminant procedure for which J opt = 0. Theorem 5.12. Suppose that some function η(t) = η opt (t) of finite variation exists such that
k(t) − k(t ) opt dη (t ) = b(t)s(t), s(t ) (t − t )
t ≥ 0.
(14)
Then, for any function η(t) of finite variation on [0, ∞], we have J eff (η) ≤ J eff (η opt ). This assertion immediately follows from Theorems 5.10 and 5.11. Best-in-the-Limit Discriminant Procedure Theorem 5.11 shows that minimal half-sum of limit probabilities of discrimination errors is achieved when the ratio G2 /D is maximum. The numerator and denominator of this ratio are quadratic in η(t) and this allows the minimization in a general form. It is convenient to apply methods of complex analysis. Let us perform the analytical extension of functions h(t), s(t), b(t), and k(t) to the plane of complex z. Consider the contour L = (σ − iε, σ + iε) and functions t 1 1 dη(x) and α(z) = dρ(t). ρ(t) = z+t −∞ s(x) Lemma 5.7. Under Assumptions A–E, there exists σ > 0 such that 1 1 b(z)s(z)α(z)dz, D= k(z)α2 (z) dz. (15) G= 4πi L 4πi L Proof. From Lemma 5.5 and Remark 4, it follows that the functions h(z), b(z), and k(z) are analytic and regular everywhere except the segment [v1 , v2 ], with v1,2 defined by Remark 4, v1 > 0. For |z| → ∞, we have h(z) = O(|z|−1 ), s(z) = O(|z|−1 ), b(z) = O(|z|−1 ), and α(|z|) = O(|z|−1 ). Therefore, we can deform the contour L and close it from the above by half-circumference with the radius R → ∞. We substitute the expression for α(z) to righthand sides of (15), change the order of integration, and use the residue theorem. It follows that 1 b(z)s(z) 1 b(z)s(z)α(z)dz = dz dρ(t) = 2πi L 2πi L z+t = (Res[b(z)s(z)]z=−t ) dρ(t) = b(t) dη(t) = G.
Similarly, we obtain
=
1 2πi
1 2πi
k(z)α2 (z) dz =
1 k(z) dz dρ(t)dρ(t ) = s(z)s(z )(z + t)(z + t ) k(t) − k(t ) = dρ(t) dρ(t ) = D. s(t)s(t ) (t − t)
Our lemma is proved. Let us use the H¨older inequality and extend functions h(z), b(z), s(z), and k(z) to the real axis. Denote g(v) = Im(b(−v)s(−v)), d(v) = Im k(−v) = Im b(−v) + (y1 + y2 )
Im h(−v) . |vs(−v)|2
Lemma 5.8. Under Assumptions A–E, we have 1 G= 2π
v2
g(v)α(v) dv, v1
1 D= π
v2
d(v)α2 (v) dv,
v1
where the segment V = {v : 0 < v1 ≤ v ≤ v2 , and d(v) > 0}. Proof. We use Remark 4 and close the contour L by a halfcircumference from the above. The functions h(z), b(z), s(z), and k(z) are analytical and have no singularities for Re z < −σ everywhere, except the cut on the half-axis located at v ∈ [v1 , v2 ]. When |z| → ∞, the functions α(z), b(z), k(z) decrease as |z|−1 while s(z) is bounded. It follows that the integral over the infinitely remote region can be made arbitrarily small. The function α(z) has no singularities for z > 0. Contracting the contour to the real axis, we see that the real parts of the integrands are mutually canceled while the imaginary parts double. We obtain G and D from (12). The lemma is proved.
Note that 0 ≤ g(v) ≤ d(v), v > 0. Define α
opt
g(v) (v) = , v > 0, d(v)
J
opt
1 = J (η ) = π eff
o
g 2 (v) dv. d(v)
Theorem 5.13. Let conditions A–E be fulfilled and there exists a function of bounded variation η opt (t) such that
1 dη opt (t) = αopt (v), s(t)(v + t)
v ∈ V,
(16)
and J opt > 0. Then, 1. using the classification rule wopt (x) > θnopt against wopt (x) ≤ and the discriminant function
θnopt
¯ 2 )T Γopt (C)(x − (¯ ¯ 2 )/2) x1 − x x1 + x wopt (x) = (¯ and Γopt (C) = (I + tC)−1 dη opt (t) we have
√ plim (α1 + α2 )/2 = Φ(− J opt /2), n→∞
where J eff = J eff (η opt ); 2. for the classification with any other w(x) of the form (3), we have J opt ≤ J eff (η) for any function η(t) of bounded variation. 2 /n for each i = 1, 2, . . ., n. Then, Example 5. Let μ2i = μ g(v) = J/v Im h(v),
d(v) = (J + y1 + y2 )/v |s(v)|−1 Im h(v),
where J = lim μ T Σ−1 μ and αopt (v) = const|s(v)|−2 . For the n→∞
ρ-model considered in Section 3.1 with ρ > 0, we have |s(v)|2 = const (v + topt )−1 , where topt = ρd−1 y −1 and there exists the solution η opt (t) of equations (14) and (16) in the form of a unit
jump at the point t = topt . The corresponding asymptotically unimprovable discriminant function is ¯ 2 )T (C + λI)−1 (x − (¯ ¯ 2 )/2), x1 − x x1 + x wopt (x) = (¯ where λ = dy/ρ. This discriminant function includes a ridge estimator of the inverse covariance matrix Σ with the regularization λ > 0. In this case, the maximum J opt of J eff (η) is achieved within the class of discriminant functions (3) with Γopt (C) = (I + tC)−1 and the maximum value J
opt
J2 = J + y1 + y2
ρy 1− ρ+y
that coincides with (13). In case of ρ = 0, we obtain asymptotically ¯ 2 )T (x − (¯ x1 + unimprovable discriminant function w(x) = (¯ x1 − x opt 2 ¯ 2 )/2) (the spherical classificator) with J x = J /(J + y1 + y2 ) that corresponds to the discriminant function constructed for the case of a priori known covariance matrix Σ. Note that the solution of (14) and (16) may not exist. If it exists, however, serious difficulties are still to be expected when trying to replace k(t) and s(t) by their natural estimators. Relation (14) is the Fredholm integral equation of the first kind and its solution may be ill conditioned. For applications, some more detailed theoretical investigations are necessary. However, some researches show that these difficulties can be overcome. In [64] and [98], the asymptotic expressions for this solution were used to construct a practical improved discriminant procedure (see Introduction). This procedure was examined numerically in academic tests and successfully applied to a practical problem. The Extension to a Wide Class of Distributions Let us use the Normal Evaluation Principle proposed in Section 3.4. Note that under Assumptions A–E, the functionals θnopt , and D belong to the class L3 of functionals allowing ε-normal G, evaluation in the square mean with ε → 0. We may conclude that Theorem 5.10 remains valid for a wide class of distributions
described in Section 3.4. However, the assertion of Theorem 5.11 is not valid for the extended class of distributions since the discriminant function (3) may be not normally distributed. Let us return to the first definition of quality of discrimination introduced by R. Fisher. He proposed the empirical quality function defined as 2 /D, that is the argument in (4). Fn = G We consider an extended class of populations restricted by the following Assumptions A1–A3. For x from Xν , denote o
¯ =x−μ x ν,
μ ν = E x, o
¯ )4 > 0, Mν = sup E (eT x |e|=1
o
o
¯ T Ωx ¯ /n)/M, γν = sup var (x
ν = 1, 2,
M = max(M1 , M2 ), ν = 1, 2,
Ω=1
γ = max(γ1 , γ2 ),
where Ω are nonrandom, symmetric, positive, semidefinite matrices of unit spectral norm. A1. All variables in the both populations in P have the fourth moments. A2. The parameters Mν and μ 2ν are uniformly bounded in P, ν = 1, 2. A3. The parameters γν → 0, ν = 1, 2. Remark 5. Under Assumptions A1–A3 and B–E, if the discriminant function (3) is used and D(η) ≥ 0, then for Fn = Fn (η) as n → ∞, we have plim Fn (η) = J eff (η), n→∞
where J eff (η) is defined in Theorem 5.11. Theorem 5.12 remains valid for distributions described in conditions A1–A3. The asymptotically unimprovable discriminant function wopt (x) remains as in Theorem 5.3. The following statement is a refinement of Theorem 5.13. Theorem 5.14. Let conditions A1–A3 and B–E be fulfilled, the discriminant function (3) used, and there exist a function of
bounded variation $\eta^{\mathrm{opt}}(t)$ such that
$$\int \frac{1}{s(t)(v + t)}\, d\eta^{\mathrm{opt}}(t) = \alpha^{\mathrm{opt}}(v), \qquad v \in V,$$
where V is defined in Lemma 5.8. Then, if $J^{\mathrm{opt}} > 0$, we have
1. the classification rule $w^{\mathrm{opt}}(x) > \theta_n^{\mathrm{opt}}$ against $w^{\mathrm{opt}}(x) \le \theta_n^{\mathrm{opt}}$ with the discriminant function
$$w^{\mathrm{opt}}(x) = (\bar{x}_1 - \bar{x}_2)^T \Gamma^{\mathrm{opt}}(C)\bigl(x - (\bar{x}_1 + \bar{x}_2)/2\bigr), \qquad \Gamma^{\mathrm{opt}}(C) = \int (I + tC)^{-1}\, d\eta^{\mathrm{opt}}(t),$$
leads to the quality function $F_n(\eta^{\mathrm{opt}})$ such that $\operatorname*{plim}_{n\to\infty} F_n(\eta^{\mathrm{opt}}) = J^{\mathrm{eff}}(\eta^{\mathrm{opt}}) = J^{\mathrm{opt}}$;
2. for the classification with any other w(x) of the form (3), we have $J^{\mathrm{opt}} \ge J^{\mathrm{eff}}(\eta)$ for any function η(t) of bounded variation.
Proof is reduced to citing Theorems 5.11, 5.12, 5.14, and Remark 5.
Estimating the Error Probability
For the usage in practical problems, we should propose consistent estimators for the functions s(t), b(t), k(t), g(v), and d(v) defining the optimum discriminant function and suggest an estimator of the limit error probability. First, we estimate the parameters G and D and then the limit probability of error $\alpha = \Phi(-G/\sqrt{D})$. Consider the statistics
$$\hat{h}(t) = N^{-1}\operatorname{tr} H(t), \qquad \hat{s}(t) = 1 - t\operatorname{tr}(C H(t)), \qquad \hat{k}(t) = \bar{x}^T H(t)\bar{x},$$
$$\hat{g}(t) = \hat{k}(t)/2 - (1 - \hat{s}(t))/\hat{s}(t), \qquad \hat{d}(t, u) = tu\,\bar{x}^T H(t) C H(u)\bar{x}.$$
Denote $c = c(\|\Sigma\|, \mu^2)$, $N = N_1 + N_2$, $N_0 = N_1 N_2/N$, where $N_\nu$ is the sample size for $X_\nu$, $y_\nu = n/N_\nu$, $\nu = 1, 2$, and $\varepsilon = 1/n + 1/N$.
Lemma 5.9. If $0 \le y < 1$, then
$$\mathbf{E}(\hat{h}(t) - h(t))^2 \le c/N, \qquad \mathbf{E}(\hat{s}(t) - s(t))^2 \le c/N,$$
$$\mathbf{E}\,\hat{k}(t) = \mu^T (I + t\Sigma)^{-1}\mu + (y_1 + y_2)\,\frac{1 - h(t)}{s(t)} + \omega,$$
$$(1 - y)^2\,\mathbf{E}(\hat{g}(t) - g(t))^2 \le c\varepsilon, \qquad (1 - y)^2\,\mathbf{E}\Bigl[\frac{\hat{d}(t, u)}{\hat{s}(t)\hat{s}(u)} - d(t, u)\Bigr]^2 \le c\varepsilon,$$
where $|\omega| \le c\varepsilon$. Define
$$\hat{G} = \int \hat{g}(t)\, d\eta(t) = \int \Bigl[\hat{k}(t) - \frac{n}{N_0}\Bigl(1 - \frac{\hat{h}(t)}{\hat{s}(t)}\Bigr)\Bigr]\, d\eta(t), \qquad \hat{D} = \iint \frac{u\hat{k}(t) - t\hat{k}(u)}{\hat{s}(t)\,\hat{s}(u)}\, d\eta(t)\, d\eta(u).$$
Theorem 5.15. If $D > 0$ and $0 \le y < 1$, then as $n \to \infty$, the estimator $\hat{\alpha} = \Phi(-\hat{G}/\sqrt{\hat{D}}) \xrightarrow{P} \alpha$.
To construct estimators of limit functions g(v) and d(v) consistent in the double-limit transition as ε → +0 and n → ∞ with the optimal dependence ε = ε(n), further investigations are necessary.
CHAPTER 6
THEORY OF SOLUTION TO HIGH-ORDER SYSTEMS OF EMPIRICAL LINEAR ALGEBRAIC EQUATIONS In this chapter, we develop the statistical approach to solving empirical systems of linear algebraic equations (SLAE). We consider the unknown solution vector as the vector of parameters, consider pseudosolutions as estimators, and seek estimators from wide classes minimizing the quadratic risk. First, we establish a general form of an unimprovable solution found in [54] under the Bayes approach by averaging over possible equations and over their solution vectors. Then, we use the asymptotic theory developed in [55]– [58] and isolate principal part of the quadratic risk function under the assumption that the number of unknown n → ∞ along with number of equations N → ∞ so that N/n − 1 → κ > 0. We apply the techniques of spectral theory of Gram matrices developed in Chapter 3 and derive the dispersion equation expressing limit spectral functions of unknown matrices of coefficients in terms of limit spectral functions of the observable empirical matrices. We consider a wide class of regularized pseudosolutions depending on an arbitrary function of finite variation and find the limit value of their quadratic risk. Then, we solve the extremum problem and find equations defining asymptotically best pseudosolution. Using dispersion equations, we remove the unknown parameters and express this pseudosolution in terms of limit functions of empirical matrix of coefficients and empirical right-hand side vectors only and thus obtain always stable pseudosolutions of minimum-limit quadratic risk.
6.1.
THE BEST BAYES SOLUTION
Suppose a consistent system of linear equations is given Ax = b, where A is the coefficient matrix of N rows and n columns, N ≥ n, x is the unknown vector of the dimension n, and b is the right-hand side vector of the dimension N . The observer only knows the matrix R = A + δA, and vector y = b + δb, where δA are matrices with random entries δAij ∼ N(0, p/n) and δb are vectors with components δbi ∼ N(0, q/n). Assume that all these random values are independent, i = 1, . . ., N, j = 1, . . ., n. We construct a solution procedure that is unimprovable on the average for a set of problems with different matrices A and vectors b, and consider Bayes distributions Aij ∼ N(0, a/n) and xj ∼ N(0, 1/n) for all i and j, assuming all random values are independent. It corresponds to a uniform distribution of entries of A, given the quadratic norm distributed as ν 2 and a uniform distribution of directions of vectors x. We minimize the quadratic risk of the estimator x
$$D = D(\hat{x}) = \mathbf{E}(x - \hat{x})^2. \qquad (1)$$
Here (and in the following) the square of a vector denotes the square of its length, and the expectation is calculated over the joint distribution of A, δA, x, and δb. Let us restrict ourselves to a class K of regularized pseudosolutions of the form
$$\hat{x} = \Gamma R^T y, \qquad (2)$$
where the matrix function Γ = Γ(RT R) is diagonalized together with RT R and has eigenvalues γ(λ) corresponding to the eigenvalues λ of RT R. Let us restrict scalar function γ(u) by the requirement that it is a non-negative measurable function such that uγ(u) is bounded by a constant. For the standard solution, γ(u) = 1/u.
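The following short sketch (not from the book; sizes and data are illustrative) shows how a pseudosolution of the class K in (2) can be computed: the matrix function Γ is applied to R^T R through its eigenvalues by a scalar function γ(u), and the standard choice γ(u) = 1/u recovers the least-squares pseudosolution.

```python
# A minimal sketch of a pseudosolution from the class K in (2): Gamma = Gamma(R'R)
# acts on R'R through a scalar function gamma(u) of its eigenvalues.  With
# gamma(u) = 1/u this reproduces the least-squares pseudosolution; any non-negative
# gamma with bounded u*gamma(u) also fits the class.
import numpy as np

def pseudosolution(R, y, gamma):
    lam, V = np.linalg.eigh(R.T @ R)        # spectral decomposition of R'R
    Gamma = (V * gamma(lam)) @ V.T          # Gamma = V diag(gamma(lambda)) V'
    return Gamma @ (R.T @ y)

rng = np.random.default_rng(3)
R = rng.normal(size=(30, 20))
y = rng.normal(size=30)
x_std = pseudosolution(R, y, lambda u: 1.0 / u)
print(np.allclose(x_std, np.linalg.lstsq(R, y, rcond=None)[0]))  # True
```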
Define a nonrandom distribution function for the set of eigenvalues of the empirical matrices $R^T R$ as the expectation
$$F(u) = \mathbf{E}\, n^{-1}\sum_{i=1}^{n} \operatorname{ind}(\lambda_i \le u),$$
where $\lambda_i$ are eigenvalues of $R^T R$. Note that $W = R^T R$ is the Wishart matrix and F(u) can be derived from the well-known distribution of eigenvalues. Using this function we can write, for example,
$$\mathbf{E}\, n^{-1}\operatorname{tr}(R\Gamma^2 R^T) = \mathbf{E}\, n^{-1}\sum_{i=1}^{n} \lambda_i \gamma^2(\lambda_i) = \int u\gamma^2(u)\, dF(u).$$
Proposition 1. Let an estimator $\hat{x} \in K$ of the solution to the system Ax = b be calculated using the observed empiric matrix R = A + δA and vector y = b + δb, where entries of the matrices $A = \{A_{ij}\}$, $\delta A = \{\delta A_{ij}\}$, and components of the vectors $x = \{x_j\}$ and $\delta b = \{\delta b_i\}$ are independent random values
$$A_{ij} \sim \mathsf{N}(0, a/n), \quad x_j \sim \mathsf{N}(0, 1/n), \quad \delta A_{ij} \sim \mathsf{N}(0, p/n), \quad \delta b_i \sim \mathsf{N}(0, q/n),$$
$i = 1, \ldots, N$, $j = 1, \ldots, n$, where $a > 0$. Then, the functional (1) equals
$$D = \int \bigl[1 - 2a(a + p)^{-1} u\gamma(u) + a^2 (a + p)^{-2} u^2 \gamma^2(u) + su\gamma^2(u)\bigr]\, dF(u), \qquad (3)$$
where $s = ap/(a + p) + q$.
Proof. The quantity D is a sum of three addends
$$D = \mathbf{E}\,x^T x - 2\,\mathbf{E}\,x^T \Gamma R^T y + \mathbf{E}\,y^T R\Gamma^2 R^T y.$$
Let angular brackets denote the normed trace of a matrix: $\langle M\rangle = n^{-1}\operatorname{tr} M$.
First, we average over the distribution of δb. Let us use the obvious property of normal distributions $\mathbf{E}(\delta b^T M\,\delta b) = qn^{-1}\operatorname{tr} M$, where M is an N × N matrix of constants. We find that
$$D = \mathbf{E}\,x^T x - 2\,\mathbf{E}\,x^T \Gamma R^T b + \mathbf{E}\,b^T R\Gamma^2 R^T b + q\,\mathbf{E}\langle R\Gamma^2 R^T\rangle.$$
Then, we average with respect to the distribution of unknowns x. In view of the equality b = Ax, we have
$$D = 1 - 2\,\mathbf{E}\langle\Gamma R^T A\rangle + \mathbf{E}\langle A^T R\Gamma^2 R^T A\rangle + q\,\mathbf{E}\langle R\Gamma^2 R^T\rangle. \qquad (4)$$
In further transformations, we use a simple property of normal distributions: if a random value r is normally distributed with zero average and variance $\sigma^2$, then for any differentiable function $f(\cdot)$, we have
$$\mathbf{E}\,r f(r) = \sigma^2\, \mathbf{E}\,\frac{\partial f}{\partial r}. \qquad (5)$$
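A quick Monte Carlo check of identity (5) (not from the book; the test function f(r) = sin r and the variance value are arbitrary illustrative choices):

```python
# Numerical check of (5): E[r f(r)] = sigma^2 E[f'(r)] for r ~ N(0, sigma^2),
# here with f(r) = sin(r), so f'(r) = cos(r).
import numpy as np

rng = np.random.default_rng(5)
sigma = 0.7
r = rng.normal(0.0, sigma, size=2_000_000)
lhs = np.mean(r * np.sin(r))
rhs = sigma**2 * np.mean(np.cos(r))
print(lhs, rhs)   # the two Monte Carlo averages agree closely
```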
For example, consider the second term in (3). Apply (5) to the normal entries of A. We obtain
$$\mathbf{E}\langle\Gamma R^T A\rangle = n^{-1}\,\mathbf{E}\,A_{ij}(\Gamma R^T)_{ji} = \frac{a}{n^2}\,\mathbf{E}\,\frac{\partial(\Gamma R^T)_{ji}}{\partial A_{ij}},$$
where (and in the following) the summation over repeated indexes is implied. All entries of the random matrix R have the identical variance (a + p)/n. Therefore,
$$\mathbf{E}\langle\Gamma R^T R\rangle = n^{-1}\,\mathbf{E}\,R_{ij}(\Gamma R^T)_{ji} = \frac{a + p}{n^2}\,\mathbf{E}\,\frac{\partial(\Gamma R^T)_{ji}}{\partial R_{ij}}.$$
The matrix $\Gamma R^T$ depends only on R = A + δA, and the partial derivatives with respect to elements of A and R coincide. Comparing these two latter expressions, we find that
$$\mathbf{E}\langle\Gamma R^T A\rangle = a(a + p)^{-1}\,\mathbf{E}\langle\Gamma R^T R\rangle.$$
Similarly, we derive the relation
$$\mathbf{E}\langle R^T R\Gamma^2 R^T A\rangle = a(a + p)^{-1}\,\mathbf{E}\langle R^T R\Gamma^2 R^T R\rangle.$$
Once again, we use (5) with respect to random A and obtain
$$\mathbf{E}\langle A^T R\Gamma^2 R^T A\rangle = \frac{a}{n^2}\,\mathbf{E}\,A_{ji}\frac{\partial(R\Gamma^2 R^T)_{jk}}{\partial A_{ki}} + a\,\mathbf{E}\langle R\Gamma^2 R^T\rangle,$$
where (and in the following) $k = 1, \ldots, N$. Further, we use (5) with respect to entries of δA. It follows that
$$\mathbf{E}\langle A^T R\Gamma^2 R^T \delta A\rangle = \frac{p}{n^2}\,\mathbf{E}\,A_{ji}\frac{\partial(R\Gamma^2 R^T)_{jk}}{\partial(\delta A)_{ki}}.$$
Adding the two latter equalities, we obtain
$$\mathbf{E}\langle A^T R\Gamma^2 R^T R\rangle = \frac{a + p}{n^2}\,\mathbf{E}\,A_{ji}\frac{\partial(R\Gamma^2 R^T)_{jk}}{\partial A_{ki}} + a\,\mathbf{E}\langle R\Gamma^2 R^T\rangle.$$
Now we combine the five latter equalities and get the equation
$$\mathbf{E}\langle A^T R\Gamma^2 R^T A\rangle = \frac{a^2}{(a + p)^2}\,\mathbf{E}\langle R^T R\Gamma^2 R^T R\rangle + \frac{ap}{a + p}\,\mathbf{E}\langle R\Gamma^2 R^T\rangle. \qquad (6)$$
These relations are sufficient to express D in terms of the observed variables only. Substitute (6) into (4). We obtain
$$D = 1 - 2\frac{a}{a + p}\,\mathbf{E}\langle\Gamma R^T R\rangle + \frac{a^2}{(a + p)^2}\,\mathbf{E}\langle R^T R\Gamma^2 R^T R\rangle + s\,\mathbf{E}\langle R\Gamma^2 R^T\rangle, \qquad (7)$$
where $s = ap/(a + p) + q$. Passing to the (random) system of coordinates in which $R^T R$ is diagonal, we can formally replace
$$\mathbf{E}\, n^{-1}\operatorname{tr}\varphi(R^T R) = \mathbf{E}\, n^{-1}\sum_{i=1}^{n} \varphi(\lambda_i) = \int \varphi(u)\, dF(u),$$
where $\lambda_i$ are eigenvalues of matrices $R^T R$. The expression (3) follows from (7). The proof of Proposition 1 is complete.
Note that the integrand in (3) is quadratic with respect to γ(u) and allows the standard minimization for each u > 0 independently of F(u). Denote θ = a/(a + p).
Corollary. The expression (3) reaches the minimum in the class K for $\gamma(u) = \gamma_{\mathrm{opt}}(u) = \theta/(\theta^2 u + s)$ with
$$\hat{x} = \hat{x}_{\mathrm{opt}} = \theta\,\bigl(\theta^2 R^T R + sI\bigr)^{-1} R^T y,$$
and the minimum value of (1) is
$$D = D_{\mathrm{opt}} = D(\hat{x}_{\mathrm{opt}}) = \int \frac{s}{s + \theta^2 u}\, dF(u).$$
Note that the functions γ(u) minimizing (3) do not depend on the unknown function F(u). Therefore, the optimal solution $\hat{x}_{\mathrm{opt}}$ is expressed in terms of only observable variables in spite of the fact that the minimum risk remains unknown. In Section 6.2, it will be shown that the function F(u) allows a simple limit expression as n → ∞, N → ∞, and n/N → c < 1.
Conclusions
Thus, we can draw the conclusion that the solution of empirical linear algebraic equations with normal errors can be stabilized and made more accurate by replacing the standard minimum square solution by a regular estimator that is optimal on the average,
$$\hat{x} = \alpha\,(R^T R + tI)^{-1} R^T y, \qquad (8)$$
with the “ridge” parameter t = s/θ2 and a scalar scaling coefficient α = 1/θ. Empirical ridge parameters are often used for stabilizing solutions of the regression and discriminant problems (see Introduction). But the problem of finding optimum ridge parameters is solved only for special cases.
The multiple α ≥ 1 presents an extension coefficient, in contrast to the shrinkage multiples of the well-known Stein estimators (see Chapter 2). The matrix $\alpha(R^T R + tI)^{-1}$ in (8) is, actually, a scaled and regularized estimator of $(A^T A)^{-1}$ optimal with respect to the minimization of D. Such combined “shrinkage-ridge” solutions are characteristic of a number of extremum problems solved in the theory of high-dimensional multivariate statistical analysis (see Chapter 4).
We may compare the optimum value $D_{\mathrm{opt}}$ with the square risk of the standard solution $D_{\mathrm{std}}$ if we set γ(u) = 1/u in (3). Then,
$$D_{\mathrm{std}} = \int \Bigl[(1 - \theta)^2 + \frac{s}{u}\Bigr]\, dF(u). \qquad (9)$$
It is easy to prove that the difference $D_{\mathrm{std}} - D_{\mathrm{opt}} \ge 0$, and it is equal to 0 for q = 0 and p = 0. For p = 0 and q > 0, we have θ = 1, the matrix R is nonrandom, α = 1, t = q, and $D_{\mathrm{opt}}$ is less than $D_{\mathrm{std}}$ due to the factor u/(u + q) in the integrals with respect to dF(u). If p > 0 or q > 0, the expression (9) contains a term proportional to
$$\int u^{-1}\, dF(u) = \mathbf{E}\, n^{-1}\operatorname{tr} W^{-1} = \frac{\mathrm{const}}{N - n - 1},$$
where W is the well-known Wishart matrix calculated for variables distributed as $\mathsf{N}(0, N(a + p)/n)$ over a sample of size N. For N = n + 1, this integral is singular, and the quadratic risk of the standard solution is infinitely large, while $D_{\mathrm{opt}} \le 1$ for our optimal solution.
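The following Monte Carlo sketch (not from the book; all sizes and variance parameters are illustrative choices) compares the standard least-squares pseudosolution with the Bayes-optimal shrinkage-ridge estimator of the Corollary under the Gaussian model of Proposition 1.

```python
# Under the Bayes model A_ij ~ N(0, a/n), x_j ~ N(0, 1/n), dA_ij ~ N(0, p/n),
# db_i ~ N(0, q/n), the estimator x_opt = theta*(theta^2 R'R + s I)^{-1} R'y with
# theta = a/(a+p), s = a*p/(a+p) + q should have smaller quadratic risk than the
# standard least-squares pseudosolution.
import numpy as np

rng = np.random.default_rng(0)
n, N = 100, 150          # unknowns and equations (illustrative sizes)
a, p, q = 1.0, 0.3, 0.2  # variance parameters of the Bayes model
theta = a / (a + p)
s = a * p / (a + p) + q

def one_trial():
    A = rng.normal(0.0, np.sqrt(a / n), size=(N, n))
    x = rng.normal(0.0, np.sqrt(1.0 / n), size=n)
    b = A @ x
    R = A + rng.normal(0.0, np.sqrt(p / n), size=(N, n))   # observed matrix
    y = b + rng.normal(0.0, np.sqrt(q / n), size=N)        # observed right-hand side
    x_std = np.linalg.lstsq(R, y, rcond=None)[0]           # standard pseudosolution
    x_opt = theta * np.linalg.solve(theta**2 * (R.T @ R) + s * np.eye(n), R.T @ y)
    return np.sum((x - x_std) ** 2), np.sum((x - x_opt) ** 2)

risks = np.array([one_trial() for _ in range(200)])
print("empirical risk, standard solution:", risks[:, 0].mean())
print("empirical risk, optimal solution :", risks[:, 1].mean())
```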
6.2.
ASYMPTOTICALLY UNIMPROVABLE SOLUTION
V. L. Girko first used the increasing-dimension asymptotics (1973) for the investigation of random SLAE, in which the number of unknowns n increases along with the number of equations N so that n/N → c > 0 and found limit spectral distributions arising in this asymptotics. He found that, under uniformly small variance of coefficients, these limit spectra do not depend on distribution of coefficients. In monographs [24] and [25], V. L. Girko applied the increasing-dimension asymptotics for studying ridge solutions of SLAE and obtained some limit equations for expected solution vectors in terms of parameters: matrices A and vectors b. His main result was to derive asymptotically unbiased estimators of the solution vectors. He also found unbiased estimators for the vectors (AT A + tI)−1 Ab, t > 0. We consider a class K of pseudosolutions that present linear combinations of ridge solutions x = (RT R + tI)−1 RT y, t > 0. Our goal is to calculate the principal part of the quadratic risk of such pseudosolutions in the increasing-dimension asymptotics, and then to minimize it, obtaining algorithms that are asymptotically not worse than any pseudosolutions from K. The main results were obtained in [55]–[58]. The theory is developed as follows. 1. We consider a sequence of problems of random SLAE solution )n , P = (N, n, A, x, b, R, y, x
n = 1, 2, . . .
(1)
(we drop the indexes n for arguments of P), in which the system Ax = b of N equations with n unknowns is solved, $x \in \mathbf{R}^n$, $b \in \mathbf{R}^N$, using the observed matrix R = A + Ξ, where Ξ is a nuisance matrix, and y = b + ν is an observed right-hand vector with a nuisance vector ν, $\hat{x}$ being the pseudosolution.
2. We consider the parametrically defined family K of estimators $\hat{x}$ of x (the class of estimation algorithms) depending on n, N, and on some vector of parameters or on some function that we call an “estimation function.”
3. In the asymptotics N/n → 1 + κ, κ > 0, we calculate the limit quadratic risk of estimators from K. 4. We solve an extremum problem and find an optimal nonrandom estimation function guaranteeing the limit quadratic risk not greater than the quadratic risk of any algorithm from K. Thus, we find an estimator asymptotically dominating K. 5. Then, we construct a statistics (a consistent estimator) approximating the nonrandom optimal function (or parameters) that defines the pseudosolution algorithm. 6. It remains to prove that thus constructed algorithm leads to a quadratic risk not greater than any algorithm from K. In this chapter, we carry out this program completely only for two-parametric “shrinkage-ridge” estimators of unknown vector that have the form x = α (RT R + tI)−1 RT y. For arbitrary linear combinations of ridge estimators, we carry out only items 1–5 of the program. In the first section, we develop the asymptotic theory of spectral properties of random Gram matrices RT R. In the increasing-dimension asymptotics, the leading parts of spectral functions and of the resolvent are isolated, and it is proved that the variance of the corresponding functionals is small (Lemmas 6.1–6.4). We isolate the principal parts of expectations (Theorem 6.1). Then, we pass to the limit and deduce the basic “dispersion” equations relating limit spectra of matrices RT R to spectra of AT A (Theorem 6.2). We study the analytic continuation of the limit normed trace of the resolvent and find spectra bounds and the form of spectral density (Theorem 6.3). We investigate the quadratic risk of pseudosolutions in the form of linear combinations of ridge solutions with different ridge parameters. The leading part of the quadratic risk is isolated (Theorem 6.4), and its consistent estimator is suggested (Theorem 6.5). Then, we perform the limit transition (Theorem 6.6). We pass to the analytical continuation of spectral functions and solve the limit extremum problem (Theorem 6.7). Then, we
construct an ε-consistent estimator of the best limit estimation function (Theorem 6.8). As a special case, we study two-parametric pseudosolutions of the shrinkage-ridge form, characterised by a regularizing “ridge parameter” and the “shrinkage coefficient.” Theorem 6.9 provides the limit quadratic risk of these solutions under a double-limit transition: as the order of equations increases and the regularization parameter decreases. We separate proofs from the main text and place them at the end of Chapter 6.
Spectral Functions of Large Gram Matrices We consider a sequence (1) in which the matrices A and R are of size N × n, Ξ = R − A, and the square matrices Σ = AT A, S = RT R are of size n × n as n → ∞, and N → ∞. The random matrices Ξ have independent components (we do not write the indexes n and N ). Let Amj , Rmj , Ξmj denote entries of matrices A, R, Ξ, m = 1, . . ., N, j = 1, . . ., n. Assume that P satisfies the following requirements (let constants c and c with subscripts do not depend on n). 1. For each n, the inequality holds n ≤ c0 N (c1 is reserved for the lowest boundary of limit spectra). 2. For each n, the norm (only spectral norms of matrices are used) is bounded by a constant: A ≤ c2 . 3. As n → ∞, we have κn = N/n − 1 → κ ≥ 0, κ < c3 . 4. For each n, the entries Ξmj of Ξ are independent and normally distributed as N(0, d/n), m = 1 . . ., N , j = 1, . . ., n. Let d = c4 > 0 do not depend on n. Under these conditions, the entries of A have the order of mag√ nitude 1/ n on the average and are comparable with standard deviations for entries of the matrix R. Let us apply methods developed in spectral theory of random matrices of increasing dimension (Chapter 3). We consider the resolvent H = H(t) = (S + tI)−1 ,
t≥0
and spectral functions depending on H. First, we establish the fact of small variance for some spectral functions. For this purpose, we use the method of one-by-one exclusion of independent variables. Let $\operatorname{var}_m(X)$, $X = (X_1, \ldots, X_N)$, denote the conditional variance calculated by integration over the distribution of only one variable $X_m$, $m = 1, \ldots, N$.
Remark 1. Let $X = (X_1, \ldots, X_N)$ be a vector with independent components, and let a function f(X) be of the form $f(X) = f^m(X) + \delta_m(X)$, where $f^m(X)$ does not depend on $X_m$, $m = 1, \ldots, N$. Suppose that the two first moments of f(X) and $f^m(X)$ exist, $m = 1, \ldots, N$. Then,
$$\operatorname{var} f(X) \le \sum_{m=1}^{N} \mathbf{E}\operatorname{var}_m(\delta_m(X)).$$
This assertion is a consequence of the Burkholder inequality. Denote
$$\Sigma = A^T A, \quad S = R^T R, \quad S^m = S - r_m r_m^T, \quad H^m = (S^m + tI)^{-1},$$
and $\Phi_m = r_m^T H^m r_m$, $m = 1, \ldots, N$. Consider n-dimensional vectors $a_m = \{A_{mj}\}$, $r_m = \{R_{mj}\}$, $j = 1, \ldots, n$, $m = 1, \ldots, N$. Note that $a_m^2 \le c_2^2$, $\mathbf{E}\,r_m^2 \le c_2^2 + d$. We exclude the independent variables $r_m$ one by one.
Remark 2. Let n and N be fixed, $n \le N$, $\|A\| \le c_2$ and $t \ge c > 0$. Under Assumption 4, the variance
$$\operatorname{var}_m \Phi_m \le ad(c_2^2 + d)\, c^{-2} n^{-1}, \qquad m = 1, \ldots, N.$$
Lemma 6.1. Under Assumptions 1–4 for t ≥ c > 0, the variance var n−1 trH(t) ≤ kN/n3 , where k = ad(c22 +d)[c2 +(c22 +d)2 ]/c6 and a is a numeric coefficient.
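A small simulation (not from the book; the matrix A, its scale, and the ratio N/n are illustrative assumptions) consistent with Lemma 6.1: the variance of $n^{-1}\operatorname{tr} H(t)$ decays on the order $n^{-2}$ when N is proportional to n.

```python
# Empirical illustration of Lemma 6.1: var(n^{-1} tr H(t)), H(t) = (R'R + tI)^{-1},
# decreases roughly like n^{-2} for N ~ n.
import numpy as np

rng = np.random.default_rng(6)
d, t = 1.0, 1.0

def trace_h(n, N):
    A = rng.normal(0.0, np.sqrt(0.5 / n), size=(N, n))   # bounded-norm coefficients
    R = A + rng.normal(0.0, np.sqrt(d / n), size=(N, n))
    return np.trace(np.linalg.inv(R.T @ R + t * np.eye(n))) / n

for n in (50, 100, 200):
    vals = [trace_h(n, int(1.2 * n)) for _ in range(300)]
    print(n, np.var(vals), np.var(vals) * n**2)   # n^2 * variance stays roughly constant
```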
Lemma 6.2. Under Assumptions 1–4 for any nonrandom unity vectors e as n → ∞ uniformly in t ≥ c > 0, we have var(eT H(t)e) = O(n−1 ). Lemma 6.3. Under Assumptions 1–4 for any nonrandom unity vectors e, f ∈ RN as n → ∞ uniformly with respect to t ≥ c > 0, we have var eT RH(t)RT f = O(n−1 ). Lemma 6.4. Under Assumptions 1–4 for t, t ≥ c > 0 and any nonrandom unity vectors e ∈ RN as n → ∞ for t, t ≥ c > 0 uniformly, we have var[eT R(S + tI)−1 (S + t I)−1 RT e] = O(n−1 ). Denote hn = En−1 trH, sn = 1 + hn d. Lemma 6.5. Under Assumptions 1–4 as n → ∞ for t ≥ c > 0 uniformly, we have EHRT = EHAT /sn + Ω, where Ω = O(n−1 ). Lemma 6.6. Under Assumptions 1–4 as n → ∞ uniformly for t ≥ c > 0, we have EHRT R = EHRT A + d(κn + hn t)EH + O(n−1 ). Now we derive the equation connecting spectra of random matrices S = RT R of large dimension with spectra of nonrandom matrices Σ = AT A.
Theorem 6.1. Under Assumptions 1–4 as n → ∞ uniformly with respect to t ≥ c > 0, we have 1. hn = (1 + hn d) n−1 tr(AT A + rn sn I)−1 + O(n−1 ), 2. EH = sn (AT A + rn sn I)−1 + Ωn , where hn = En−1 trH, sn = 1 + hn d, rn = tsn + κn d, and Ωn = O(n−1 ) (in spectral norm).
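A numerical sanity check of Theorem 6.1, statement 1 (not from the book; the coefficient matrix, the noise level d, and the sizes are illustrative assumptions, and the expectation is approximated by a single large-n realization):

```python
# For one realization with large n, h_n = tr H / n should satisfy approximately
#   h_n ~= (1 + h_n d) * tr((A'A + r_n s_n I)^{-1}) / n,
# where s_n = 1 + h_n d and r_n = t s_n + kappa_n d.
import numpy as np

rng = np.random.default_rng(4)
n, N, d, t = 400, 600, 1.0, 0.5
kappa_n = N / n - 1.0

A = rng.normal(0.0, np.sqrt(0.5 / n), size=(N, n))   # bounded-norm coefficient matrix
R = A + rng.normal(0.0, np.sqrt(d / n), size=(N, n))
H = np.linalg.inv(R.T @ R + t * np.eye(n))

h_n = np.trace(H) / n
s_n = 1.0 + h_n * d
r_n = t * s_n + kappa_n * d
rhs = (1.0 + h_n * d) * np.trace(np.linalg.inv(A.T @ A + r_n * s_n * np.eye(n))) / n
print("h_n =", h_n, "  right-hand side =", rhs)   # close for large n
```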
Limit Spectral Functions of Gram Matrices Assume additionally that the weak convergence holds def
F0n (u) = n
−1
n
ind(λ0j ≤ u) → F0 (u),
u ≥ 0,
(2)
j=1
where λ0j are eigenvalues of the matrices AT A, j = 1, . . ., n. Theorem 6.2. Under Assumptions 1–4 and (2), the following statements are valid. 1. For t ≥ c > 0, the convergence in the square mean hn = En−1 trH(t) → h(t), sn = 1 + hn (t)d → s(t), rn = tsn (t) + κn d → r(t) holds uniformly in t ≥ c > 0, where s(t) = 1 + dh(t) and r(t) = ts(t) + κd. 2. For each t > 0, the equation holds h(t) = s(t) (u + r(t)s(t))−1 dF0 (u). (3) 3. As n → ∞, the weak convergence in probability holds def
Fn (u) = n
−1
n
P
ind(λj ≤ u) → F (u),
u > 0,
j=1
where λj are eigenvalues of the matrices RT R, j = 1, . . ., n.
4. For each t > 0, the equation holds h(t) =
(u + t)−1 dF (u).
(4)
5. As n → ∞, we have −1 EH(t) = s(t) AT A + r(t)s(t) + Ωn , where Ωn = O(n−1 ) uniformly in t ≥ c > 0. The limit equation (3) is called the dispersion equation for random matrices RT R of increasing dimension. Consider the support S = {v > 0 : F (v) > 0}. A set of z = −v, where v ∈ S, may be called the limit spectrum region of the matrices RT R. Theorem 6.3. Under Assumptions 1–4 and (2), the following is true. 1. The function h(t) defined by (3) for t > 0 allows the analytic continuation to complex arguments z. The function h(z) is regular on the plane of complex z, and inside of any compact not containing points z = −v such that v ∈ S is uniformly bounded and has a uniformly bounded derivative. 2. For any v = −Re z > 0 and ε = −Im z → +0, there exists the limit lim π −1 Im h(z) = F (v). 3. For v > 0, F (v) ≤ π −1 (vd)−1/2 . 4. For κ > 0, the set of points S is located on the segment [v1 , v2 ], where v1 =
κ2 d , 4( 1 + κ + 1)2 √
v2 = 2d(2 + κ) + 2.25 c2 ,
√ while the diameter of S is not less than πdκ/[4 (1 + 1 + κ)]. 5. For κ > 0 for all v ≥ 0, the function F (v) satisfies the Lipschitz condition with the exponent 1/3 and is differentiable
everywhere except, maybe, of the boundary points of the set S. 6. For κ > 0, the function h(z) satisfies the H¨ older condition on the whole plane of complex z with the exponent 1/3. Special cases Now we study characteristic features of limit spectra of the matrices S = RT R for a special case when F0 (v) = ind(v ≥ a2 ). In particular, it is true if for each n the matrix Σ = AT A = a2 I. From (3), we obtain the equation ths2 + h(a2 − d) + κhds = 1,
(5)
where h = h(t), s = s(t) = 1 + hd. Introduce the parameter of the “signal-to-noise ratio” $T = a^2/d$.
1. If d = 1 and the ratio T = 0, then the matrices $R^T R$ are the Wishart matrices, $(s - 1)(\kappa - vs) = 1$, and one can find the density
$$F'(v) = (2\pi v)^{-1}\sqrt{(v_2 - v)(v - v_1)},$$
where $v_1 = (\sqrt{1 + \kappa} - 1)^2$, $v_2 = (\sqrt{1 + \kappa} + 1)^2$, $v_1 \le v \le v_2$. This is the well-known Marchenko–Pastur distribution (it holds, in particular, for limit spectra of increasing sample covariance matrices). For κ > 0, the limit spectrum is separated from zero, and for κ = 0, it is located on the segment [0, 4] (see the numerical check after this subsection).
2. If κ = 0 and T > 0, then at the spectrum points as ε → +0, we find from (3) that $2vx = |s - 1|^{-2} d$, where $x = \operatorname{Re} s$. Using equation (4), one can calculate the limit density $F'(v) = \pi^{-1}\operatorname{Im} h(v)$. It is of the form
$$F'(v) = \frac{1}{\pi}\left[x\,\frac{3x - 2 - 2(T - 1)(1 - x)^2}{1 + 2x(T - 1)}\right]^{1/2},$$
where the parameter $x = \operatorname{Re} s \ge 1/2$ can be determined from the equation
$$v = \frac{1 + 2x(T - 1)}{2x(1 - 2x)^2}.$$
Let T ≤ 1. Then, the left boundary of the limit spectrum defined by the distribution function F (v) is fixed by the point v = 0. Let T = 1. Then, F (v) = π −1 x (3x − 2), where x is defined by the equation 2x (1 − 2x)2 v = 1. In this case, the limit spectrum is located √ on the segment [0, 27/4]. For small v, the function F (v) ≈ 2π −1 3 v −1/3 , and F (27/4) = −∞. Let T = 1 + ε, where ε > 0 is small. Then, the lower spectrum boundary is located approximately at the point 4/27 ε3 ; the spectrum is located within the boundaries increasing with T .
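The following sketch (not from the book; the value of κ, the matrix size, and the grid are illustrative choices) checks special case 1 above numerically: for d = 1 and T = 0, the eigenvalues of $R^T R$ follow the Marchenko–Pastur law.

```python
# Empirical eigenvalue density of R'R for pure noise (A = 0, d = 1) against the
# Marchenko-Pastur density F'(v) = sqrt((v2 - v)(v - v1)) / (2*pi*v) on [v1, v2],
# with v1,2 = (sqrt(1 + kappa) -+ 1)^2.
import numpy as np

rng = np.random.default_rng(1)
kappa = 0.5
n = 1000
N = int(round((1 + kappa) * n))

R = rng.normal(0.0, np.sqrt(1.0 / n), size=(N, n))   # T = 0 means A = 0
eigvals = np.linalg.eigvalsh(R.T @ R)

v1 = (np.sqrt(1 + kappa) - 1) ** 2
v2 = (np.sqrt(1 + kappa) + 1) ** 2
grid = np.linspace(v1 + 1e-3, v2 - 1e-3, 7)
density = np.sqrt((v2 - grid) * (grid - v1)) / (2 * np.pi * grid)

hist, edges = np.histogram(eigvals, bins=60, range=(v1, v2), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
empirical = np.interp(grid, centers, hist)

for v, f_lim, f_emp in zip(grid, density, empirical):
    print(f"v = {v:5.2f}   limit density = {f_lim:.3f}   empirical = {f_emp:.3f}")
```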
Quadratic Risk of Pseudosolutions
We study estimators of the solutions to the SLAE Ax = b in the form of linear combinations of regularized minimum square solutions such that vectors $x = (x_1, \ldots, x_n)$ are estimated from the known random matrices of coefficients R = A + Ξ of size N × n and random right-hand side vectors y = b + ν. We consider a class K of estimators of the form
$$\hat{x} = \Gamma R^T y, \qquad \text{where}\quad \Gamma = \Gamma(R^T R) = \int_{t \ge c} (R^T R + tI)^{-1}\, d\eta(t), \qquad (6)$$
and η(t) is a function having the variation on [0, ∞) not greater than V = V (η), and such that η(t) = 0 for t < c. Consider a sequence of the solution problems (1), in which N equations are solved with respect to n unknown variables as n → ∞ and N = N (n) → ∞. The function η(t) and the quantity c > 0 do not depend on n. Assume that the following conditions are fulfilled (constants c with indexes do not depend on n). A. As n = 1, . . ., ∞ κn = N/n − 1 → κ ≥ 0. B. For each n, the system of equations Ax = b is soluble; the vector b lengths are not greater than c5 , spectral norms of
QUADRATIC RISK OF PSEUDOSOLUTIONS
255
A ≤ c2 , and the vectors x are normed so that (the scalar square) x2 = 1. C. For each n, the entries Ξmj of matrices Ξ and components νm of vectors ν are independent, mutually independent, and normally distributed as Ξmj ∼ N(0, d/n), νm ∼ N(0, q/n), j = 1, . . ., n, m = 1, . . ., N , where d = c4 > 0 and q = c6 do not depend on n. Under these conditions for fixed constants c1 −c6 , entries of the matrices A and components of b on the average are of the order of magnitude n−1/2 and are comparable with standard deviations for entries of R and components of y. We set the minimization problem for the quantity def
)2 = E[x2 − 2xT ΓRT y + yT RΓ2 RT y] Δn (η) = E(x − x with the accuracy up to terms small as n → ∞. Denote H = H(t) = (RT R + tI)−1 , hn = hn (t) = En−1 trH(t). Lemma 6.7. Under Assumptions A–C for t ≥ c > 0, we have ERHRT R = ERHRT A + d(κn + hn t)ERH + ω,
(7)
where |ω| ≤ adc−1/2 n−1 , and a is a numeric coefficient. Denote hn (t) = n−1 trH(t),
sn (t) = 1 + d hn (t),
ϕ n (t) = (y2 − yT RH(t)RT y), rn (t), rn = rn (t) = E ϕn = ϕn (t) = Eϕ n (t),
gn (t) =
rn (t) = t sn (t) + κn d,
hn (t) + κn )q ϕ n (t) − (t , rn (t)
sn = sn (t) = E sn (t), gn = gn (t) = E gn (t).
(8)
Remark 3. Under Assumptions A–C for fixed c1 − c4 and t ≥ c > 0, the uniform convergence as n → ∞ holds var ϕ n (t) = O(n−1 ),
var gn (t) = O(n−1 ).
256
6. THEORY OF SOLUTION TO EMPIRICAL SLAE
Remark 4. Under Assumptions A–C as n → ∞, gn (t) =
ϕn (t) − (thn (t) + κn ) q + O(n−1 ). t (1 + hn (t)d) + κn d
Lemma 6.8. Under Assumptions A–C as n → ∞ uniformly in t ≥ c > 0, we have gn (t) = E ϕn (t) =
xT H(t)x + O(n−1 ) = xT Σ(Σ + rn (t)sn (t)I)−1 x + O(n−1 ), sn (t)
rn (t) ExT ΣH(t)x + q(thn (t) + κn ) + O(n−1 ) = sn (t)
= rn (t)xT Σ(Σ + rn (t)sn (t)I)−1 x + q(thn (t) + κn ) + O(n−1 ),
where rn (t) = tsn (t) + κn d. Denote Kn (t, t ) =
ϕn (t) − ϕn (t ) = EyT RH(t)H(t )RT y. t − t
Theorem 6.4. Under Assumptions A–C, the quantity def
Dn (η) = 1 − 2
gn (t) dη(t) +
Kn (t, t ) dη(t) dη(t )
(9)
of the form (6) as n → ∞, the quadratic risk is is such that for x )2 = Dn (η) + O(n−1 ). Δn (η) = E(x − x Theorem 6.4 allows to calculate the quadratic risk of regularized pseudosolutions and to search for the minimum in subclasses of functions η(t). For example, let η(t) = α ind(t > t), t ≥ c > 0 (shrinkage-ridge estimator). Then, the minimum of (9) is reached when α = gn (t)/ϕn (t) and 2gn (t)ϕn (t) = ϕn (t). This last equation may be used to determine the t providing the minimum of Δn (t) = 1 − gn2 (t)/ϕn (t). In another class, we consider linear combination of such estimators with a priori magnitudes
QUADRATIC RISK OF PSEUDOSOLUTIONS
257
of t and find an unimprovable set of coefficients by solution of a system of a small number of linear equations. Let us construct an unbiased estimator of the leading part of the quadratic risk that was found in Theorem 6.4. Define n (t, t ) = ϕn (t) − ϕn (t ) = yT RH(t)H(t )RT y, K t−t n (η) = 1 − 2 gn (t) dη(t) + n (t, t ) dη(t) dη(t ). D K
Now we prove the unbiasedness of this estimator. of the form (6), Theorem 6.5. Under Assumptions A–C for x we have n (η) = D(η) + ω D n ,
where
E ωn2 = O(n−1 ).
of the form (6) as n → ∞ Thus, under Assumptions A–C for x )2 in the square mean. in the square mean, we have Dn (η) → (x− x Theorems 6.4 and 6.5 establish upper boundaries of bias and n (η). These theorems variance of the quadratic risk estimator D are remarkable by that they allow to compare the efficiency of different versions of solution algorithms (defined by the function η(·)) using a single realization of the random system. Limit relations Assume that in P the convergence (2) holds. Let us pass to the limit. To provide the convergence of functions of R and x, we introduce an additional condition. In addition to Assumptions 1–4, suppose that for all v the limit exists
G(v) = lim
n→∞
n
x2j ind(λ0j ≤ v),
(10)
i=1
where λ0j are eigenvalues of the matrices Σ = AT A, and xj are components of the vector x in a system of coordinates where Σ is diagonal. By condition B, we have G(c2 ) = x2 = 1.
258
6. THEORY OF SOLUTION TO EMPIRICAL SLAE
Lemma 6.9. Under Assumptions A–C, (2), and (10) for t, t ≥ c > 0 as n → ∞, the uniform convergence holds def hn (t) → h(t) = s(t) (v + r(t)s(t))−1 dF0 (v), def gn (t) → g(t) = v(v + r(t)s(t))−1 dG(v), def
ϕn (t) → ϕ(t) = r(t)g(t) + q(th(t) + κ), def
Kn (t, t ) → K(t, t ) =
ϕ(t) − ϕ(t ) , t − t
where the last expression is continuously extended to t = t . Theorem 6.6. Under Assumptions A–C, (2), and (10) for the = ΓRT y, the quadratic risk Δn (η) converges pseudosolutions x to the limit
lim Δn (η) = lim Dn (η) = n→∞ def = D(η) = 1 − 2 g(t) dη(t) + K(t, t ) dη(t) dη(t ). (11) n→∞
This statement is a direct consequence of the uniform convergence in Lemma 6.9 and the variance decrease in Remark 3. The formula (11) expresses the limit quadratic risk of all pseu = Γ(RT R)RT y from K. The right-hand side of (11) dosolutions x presents a quadratic function of η(t) and allows an obvious minimization. For example, one may search for the minimum in a class of stepwise functions η(t) that present a sum of a small number of jumps of different size. Minimization of the Limit Risk Let us minimize (11) in the most general class of all functions η(t) of bounded variation. To obtain an explicit form of the extremal solution, we pass to complex parameters and present the limit risk (11) in the form of function of only one argument.
MINIMIZATION OF THE LIMIT RISK
259
Lemma 6.10. Under Assumptions A–C, (2), (10) for κ > 0 the functions g(z) and ϕ(z) are regular for all z on the plane of complex z except, perhaps, points of the half-axis z < 0. Consider the scalar function corresponding to the matrix expression in (6) Γ(z) =
(t + z)−1 dη(t).
t≥c>0
Let L = (−i∞, +i∞) be the integration contour. Lemma 6.11. Under Assumptions A–C, (2), and (10) for κ > 0, the limit expression in (11) equals 1 D(η) = 1 − πi
1 g(z) Γ(−z)dz − 2πi L
ϕ(z) Γ2 (−z)dz.
(12)
L
The minimization of (12) remains difficult due to the fact that functions Γ(−z) are complex. We pass to real Γ(·). We notice that the function g(z) may be singular in the left half-plane of complex z for negative z. Actually, for example, let x be an eigenvector of the matrix Σ = AT A having only one eigenvalue λ for all n. Then, the function g(z) = (λ + w)−1 λ has a pole at w = r(z)s(z) = −λ. To provide the smoothness of g(z) and ϕ(z), we introduce an additional assumption. Suppose that for all z and z with Im z = 0 and Im z = 0, the H¨older condition holds |g(z) − g(z )| ≤ c5 |z − z |ζ ,
(13)
where c5 and 0 < ζ < 1 do not depend on z and z . Under this assumption for each Re z = −v < 0 and ε = − Im z → +0, the limits exist g(−v) = lim g(−v − iε) and ϕ(−v) = lim ϕ(−v − iε). Remark 5. Let κ > 0 and for each n in the system of coordinates where the matrix Σ = AT A is diagonal, components of x = (x1 , . . ., xn ) are uniformly bounded: x2j = O(n−1 ), j = 1, . . ., n. Then, function g(z) satisfies the H¨older condition (13).
260
6. THEORY OF SOLUTION TO EMPIRICAL SLAE
Lemma 6.12. Let conditions A–C, (2), (10), and (13) be fulfilled and κ > 0. Then, 2 D(η) = 1 − Im g(−v) Γ(v)dv − π S 1 Im ϕ(−v) Γ2 (v)dv + O(ε), π S where S = {v > 0 : F (v) > 0}, Im g(−v) ≥ 0, Im ϕ(−v) ≤ 0. Denote by λ0 ≥ 0 the smallest eigenvalue of the matrices Σ = AT A, for n = 1, 2, . . . . Lemma 6.13. Let conditions A–C, (2), (10), and (13) be fulfilled, κ > 0, and λ0 > 0. Then, for z = −v < 0, where v ∈ S, we have Im g(z) κd 2 def |Im ϕ(z)| ≥ c6 > 0, Im ϕ(z) ≤ B = √λ v + λ0 v1 . (14) 0 1 Denote ρ(v) = −π −1 Im ϕ(−v) ≥ 0,
Γ0 (v) = −Im g(−v)/Im ϕ(−v) ≥ 0. (15)
For κ > 0 and λ0 ≥ c1 > 0, the function Γ0 (v) is defined and uniformly bounded everywhere for v ∈ S. It is easy to examine that 1 ρ(v)dv = lim Im ϕn (−v) dv = n→∞π S S 1 EyT RRT y ≤ (c3 + c1 c5 )(c2 + d). = lim n→∞π As a consequence, from Lemmas 6.11–6.13, we obtain the following statement. Theorem 6.7. Let conditions A–C, (2), (10), and (13) be fulfilled, c1 > 0, and κ > 0. Then, D(η) = lim E(x−Γ(RT R)RT y)2 = D0 + [Γ(v)−Γ0 (v)]2 ρ(v)dv, n→∞
S
MINIMIZATION OF THE LIMIT RISK
261
where D =1− 0
[Γ0 (v)]2 ρ(v)dv. S
= Γ0 (RT R)RT y Thus, under these conditions, the estimator x as n → ∞ asymptotically dominates the class of estimators K. However, the function Γ0 (v) is determined by parameters and is unknown to the observer. As an estimator Γ0 (v), we consider the ε-regularized statistics Im gn (z) 0 Γnε (v) = min , B , Im ϕ n (z) where B ≥ sup Γ0 (v) is from (14), and z = −v − iε, v, ε > 0. v∈S
We define the variance for complex variables Z: Var Z = E | Z − EZ|2 . Lemma 6.14. Under Assumptions A–C for any fixed z with Im z < 0, as n → ∞, we have Var hn (z) = Var[n−1 tr(RT R + zI)−1 ] = O(n−2 ) Var rn (z) = O(n−2 ),
Var ϕ n (z) = O(n−1 ),
Var gn (z) = O(n−1 ). Theorem 6.8. Let conditions A–C, (2), (10), (13), c1 > 0, and κ > 0 be fulfilled. Then, lim
lim
ε → +0 n → ∞ S
0 (v)]2 ρ(v)dv = 0. E[Γ0 (v) − Γ nε
(16)
0 (·) involved in the Thus, we proved that the statistics Γ nε 0 T T = Γnε (R R) R y approximates the unknown estimaalgorithm x tion function Γ0 (·) of the asymptotically dominating algorithm x 0 T T = Γ (R R)R y. The question of the limit quadratic risk provided 0 (v) requires further investigation. by the function Γ nε
262
6. THEORY OF SOLUTION TO EMPIRICAL SLAE
Shrinkage-Ridge Pseudosolution Let us study a subclass of the class of pseudosolutions K that have the form = α(RT R + tI)−1 RT y, x
t, α ≥ 0,
(17)
where α > 0 and t > 0 do not depend on n. The coefficients α > 0 may be called “shrinkage coefficients” similarly to the Stein estimators, and the parameters t may be called “ridge parameters” similarly to the well-known regularized estimators of the inverse covariance matrices. For pseudosolutions (17), we have )2 = 1 − 2α gn (t) + α2 ϕ (x − x n (t) + ωn (t),
where E|ωn (t)|2 = O(n−1 ).
Remark 6. For any t > 0 and α > 0, the pseudosolutions (17) have the limit quadratic risk )2 = 1 − 2α g(t) + α2 ϕ (t), D(η) = D(α, t) = lim (x − x n→∞
D(α0 , t)
= 1 − g 2 (t)/ϕ (t),
where α0 = g 2 (t)/ϕ (t).
(18)
The case α = 1 and t = 0 corresponds to the standard minimum square solution. For α = α0 = g(t)/ϕ (t), we have the minimum D(α0 , t) = 1 − g 2 (t)/ϕ (t). Now we find the limit quadratic risk of regularized solutions (17) under the decreasing regularization: let κ > 0 and t → +0. Lemma 6.15. Under Assumptions A–C, conditions (2), (10), and κ > 0, the following is true. 1. For t > 0, the equation (3) has a unique solution h(t), and there exists the limit h(0) = lim h(t) for t → +0 such that 1 1 . ≤ h(0) ≤ κd (1 + κ) d + c2 Denote s(0) = 1 + h(0)d,
bν =
v dG(v), [v + κds(0)]ν
ν = 1, 2.
SHRINKAGE-RIDGE PSEUDOSOLUTION
263
2. The limits exist g(0) = lim g(t) = b1 ≤ 1, ϕ(0) = lim ϕ(t) = κ(g(0)d + q). t → +0
t → +0
3. For t → +0, the derivative exists s (0) =
ds(t) dh2 (0) s2 (0) =− . dt t=0 1 + κd2 h2 (0)
4. For t → +0, there exist the derivatives g (0) = −b2 [s2 (0) + κd s (0)]; κd s(0) ϕ (0) = b1 − b2 s(0) + qh(0), 1 + κd2 h2 (0) and ϕ (0) ≥ K(κ) c1 , where K = K(κ) > 0 with the right-hand side derivative K (0) > 0. Denote by α 0 = α 0 (t) = gn (t)/ϕ n (t) an estimator of the function 0 = α (t) = g(t)/ϕ (t), where gn (t) is defined by (8). The derivative ϕ n (t) = yT RH 2 (t)RT y, t > 0. Consider the ridge pseudosolutions (estimators of x) of the form
α0
t = (RT R + tI)−1 RT y, x 0t = α x 0 (RT R + tI)−1 RT y, as t → +0. Theorem 6.9. Under Assumptions of Theorem 6.8, we have lim
def
t )2 = 1 − 2g(0) + ϕ (0) = D(1, 0), lim E(x − x
t → +0 n → ∞
def
0t )2 = 1 − g 2 (0)/ϕ (0) = D(α0 , 0). (19) lim plim (x − x
t → +0 n → ∞
Here the first relation presents the limit quadratic risk of the t with decreasing regularization. The minimum square solution x
264
6. THEORY OF SOLUTION TO EMPIRICAL SLAE
0t may be called asymptotically optimal shrinkage pseudosolution x minimum square solution with decreasing regularization. Denote Λν =
v −ν dF0 (v),
ν = 1, 2.
Remark 7. Under assumptions of Theorem 6.8 as κ → 0, we have k1 (d − c2 )/κ ≤ D(1, 0) ≤ k2 /κ, where k1 and k2 are constants. Thus, for sufficiently large “noise” d, the limit quadratic risk of the regularized minimum square solution can be indefinitely large. Finite risk may be guaranteed either by a ridge regularization (fixed t > 0) or by an excessive number of equations (for sufficiently large portion of additional equations κ). Special form of SLAE Now we investigate in more detail a special form of equations when for all n the matrices Σ = I, but the fact that the equations have such trivial form is unknown to the observer. For this case, we have F0 (v) = ind(v ≥ 1), and the functions g(t), ϕ(t), as well as the limit quadratic risk do not depend on x for all n since x2 = 1 and G(v) = F0 (v). Let us write out the limit relations. To be concise, denote h = h(t), s = s(t), and r = r(t). The dispersion equation (3) takes on the form 1 + rs = d + 1/h.
(20)
The functions g = g(t) = h/s and ϕ = ϕ(t) = hr/s + q(ts + κ). Let us compare the quadratic risks (18). First, suppose the “noise level” is low: 0 < d < 1. Let t → +0. Then, it is easy to find that h → (1 − d)−1 , s → (1 − d)−1 , g → 1, and ϕ (0) = (1 + q)/(1 − d). By Remark 6, the limit quadratic risk of standard regularized solution with decreasing regularization is lim D(1, t) = (d + q)/(1 − d). Under the optimum shrinkage, we n→∞
have D(α0 , t) → (d + q)/(1 + q). One can see that as t → +0, the decrease of regularization leads to the degeneration, while the shrinkage leads to a finite risk D(α0 , 0) < 1.
SHRINKAGE-RIDGE PSEUDOSOLUTION
265
Suppose d ≥ 1. Let t → +0. Then, solving equation (20) we obtain infinite h = h(t) and t1/3 h(t) → d−2/3 . The functions g = g(t) → 1 and ϕ = ϕ(t) ≈ (d + q) d−2/3 t2/3 . The quadratic risk without shrinkage is D(1, t) ≈ 2/3 (d + q)d−2/3 t−1/3 increases infinity, while the optimum shrinkage risk does not exceed 1. Now we investigate the dependence of the quadratic risk on the excessiveness of SLAE, that is, on the parameter κ > 0. Let d = 1. Then, equation (20) is simply rhs = 1. If κ > 0 and √ t → +0 the function h = h(t) remains finite and tends to 2( 4 + κ + √ −1/2 −1/2 κ) κ . Set t = 0. Then, as κ → 0 it is easy to see that the quadratic risk D(1, 0) ≈ qκ−1/2 while D(α0 , 0) ≈ 1. Setting d = 1 and q = 0, we study the effect of regularization. Then, we have rhs = 1 that is the cubic equation with respect to h(t). Functions g = g(t) = r/s, and ϕ (t) = 2h2 /[1 + 3h − κh2 s]. t = (RT R + tI)−1 RT y leads to the limit The regularized solution x risk def
D(1, t) = D(h) =
1−h 2h2 . + s 1 + 3h − κsh2
t with the best 0t = α x The limit risk of the regularized solution x 0 limit shrinkage coefficient α = α = g(t)/ϕ (t) is equal to
def
D(α0 , t) = D0 (h) = 1 −
1 + 3h − κsh2 < 1. 2s2
This last expression presents the function of h that has a unique minimum for some h = h0 ≤ 1. In view of the monotone dependence of h = h(t), there exists a unique minimum of the function D(α0 , t) in t, that defines the asymptotically unimprovable shrinkage-ridge solution. In the case when Σ = I, d = 1, and κ > 0, we can calculate the best limit function (15) analytically. It is
Γ0 (v) =
1 Im g(−v) 1 = , Im ϕ(−v) 1 + q v |s|2 − κ
266
6. THEORY OF SOLUTION TO EMPIRICAL SLAE
where complex s = s(−v). The asymptotically unimprovable 0 = Γ0 (RT R) RT y leads to the limit risk solution x 1 D =1− 1+q 0
|s|2 (v
1 F (v)dv, |s|2 − κ)
where F (v) is the limit spectrum density for RT R. If κ = 0, then s is a (complex) root of the equation vs2 (s − 1) + 1 = 0. Denote 2 4 x = Re s. Then, √ 2x(2x − 1) v = 1, 2x = |s| . The spectral density −1 3x2 − 2x, x ≥ 2/3, and the spectrum is located F (v) = π on the segment [0, 27/4]. For q = 0, the limit quadratic risk is D0 ≈ 0.426.
Proofs for Section 6.2 We preserve all notations and the numeration of formulas, lemmas, and theorems from the above. Remark 2 (proof). Indeed, denote ξm = rm − am . It is obvious that varm Φm = varm (2aTm Ω ξmT + ξmT Ω ξm ),
where Ω = EH m .
Here the variance of the first addend is 4d aTm Ω2 am /n ≤ 4d c2 /c2 n. The variance of the second addend is 2d2 tr Ω2 /n ≤ 2d/c2 n. Thus, varm Φm ≤ ad(c2 + d) c−2 n−1 . Remark 2 holds true. Lemma 6.1 (proof). We have the identity Hrm = (1 + Φm )−1 H m rm . By Remark 1 var(n
−1
T 2 N 1 r Ω rm trH) ≤ 2 , Evarm m n 1 + Φm m=1
where Ω = H m does not depend on rm and Ω ≤ 1/c. The right-hand side of this inequality equals N n−2 Evarm(ψϕ), where
ψ = rTm Ω2 rm , ϕ = (1 + Φm )−1 . We use the fact that if any two random values ψ and ϕ have appropriate moments and |ϕ| ≤ 1, then var(ψϕ) ≤ 2 var ψ + 4Eψ 2 var ϕ. First, we estimate varm ψ = varm (2aTm Ω2 ξm + ξmT Ω2 ξm ), where the matrix Ω = H m is fixed. The variance of the first addend is not greater than 4d n−1 aTm Ω4 am . The variance of the second addend is not greater than 2d2 n−1 tr Ω4 , and a2m ≤ A ≤ c2 . We may conclude that varm ψ ≤ ≤ ad(c2 + d) c−4 n−1 , where a is a numeric coefficient. Further, it is obvious that varm ϕ ≤ varm Φm . Let us calculate the expectation integrating only over the distribution ξm ∼ N(0, d/n). We obtain Eψ 2 ≤ E(r2m )2 /c4 = (a2m )2 + 2a2m d + 2d2 /c4 ≤ 2 (c2 + d)2 /c4 . Consequently, var(ψϕ) ≤ ad(c2 + d)(c2 + (c2 + d)2 )/nc6 . The Lemma 6.1 statement follows. Lemma 6.2 (proof). Denote additionally vm = eT H m rm , um = eT Hrm . Using the identity vm = (1 + Φm )um and Remark 1, we find that
var(e H(t)e) ≤ E T
N
m=1
varm
2 vm 1 + Φm
.
(21)
Denote ϕ = (1 + Φm )−1 , fm = H m e, pm = eT H m am . We have 2 2 4 varm (vm ϕ) ≤ 2 varm (vm ) + 2Em vm varm ϕ
where the expectation Em is calculated by integrating only over the distribution of ξm . We obtain that |fm | = O(1), pm = O(1), and
2 ) = var T T 2 ≤ varm (vm m 2pm fm ξm + (fm ξm ) ≤ a|f |2 (p2m d/n + |f |2 d2 /n2 ) = p2m O(1) + O(n−2 ); 4 ≤ a [p4 + E(f T ξ m )4 ] ≤ O(n−1 ) p2 + O(n−2 ), Em vm m m
where a is a number. In the former lemma, we have shown that varm ϕ = O(n−1 ). Consequently, the left-hand side of (21) is not greater than [O(n−1 ) p2m + O(n−2 )]. m
To prove Lemma 6.2, it suffices to show that the sum of p2m is bounded. Since Hrm = (1 + Φm )−1 H m rm , we have 2 p2m ≤ Em (eT H m rm )2 = Em vm = Em (1 + Φm )2 u2m .
Notice that
u2m = eT HRT RHe ≤ 1/c.
m
Denote TΩ ξ − Eξ T Ω ξm ), δm = Φm − EΦm = 2aTm Ω ξm + (ξm m 2 , Δm = (1 + Φm )2 − E(1 + Φm )2 = 2(1 + EΦm ) δm + δm
where Ω = EH m . The expectation of Φ2m is bounded, and the other TΩ ξ m exist. We substitute (1 + Φm )2 = moments of Φm , um and ξm = O(1) + Δm and estimate the contribution of the addend Δm . Note that u2m Δm ≤ 1/2(u2m + Δ2m u2m ). In the right-hand side of this equality, the sum of first summands is bounded. It remains to show that EΔ2m u2m = O(n−1 ), or that EΔ4m = O(n−2 ). Let us 2 . Passing to the coordinate system substitute Δm = O(1)δm + δm in which Ω is diagonal and using the normality of ξm , one can √ easily check that all moments of nδm are bounded by constants
not depending on n. Consequently, EΔ4m = O(n−2 ). It follows that the sum of p2m is bounded. We arrive at Lemma 6.2 statement. Lemma 6.3 (proof). Using the rotation invariance of the problem setting, let us rotate the axes so that the vectors e become parallel to one of the axis. Then, for l, m = 1, . . ., N as n → ∞, it is required to prove that var rTl H(t)rm = O(n−1 ). First, consider the case when l = m. Denote Ψm = rTm Hrm , Φm = rTm H m rm . For N = 1, Φm = 1/t. The identity H = H m − H m rm rTm H implies var Ψm ≤ var Φm . In view of the independence H m of rm , we have var Φm = var(rTm Ω rm ) + E[rTm (H m − Ω) rm ]2 , where Ω = EH m . Here the first variance in the right-hand side is estimated similarly to varm ψ in Lemma 6.1 and is not greater than ad(c2 + d) c−2 n−1 . We estimate the second term using Lemma 6.2 (for fixed rm ), since the matrix H m is of the form of the matrix H with smaller number of variables. We find that it is O(n−1 ) with the coefficient E(r2m )2 = O(1). Thus, we proved the lemma statement for l = m. Now, let l = m, N > 1. We isolate the variables rl and rm . Denote ∼
∼
∼
∼
S = S − rl rTl − rm rTm , H = ( S + tI)−1 = H − Hrl rTl H− ∼
−Hrm rTm H,
∼
Φlm = rTl HrTm ,
Ψlm = rl Hrm ,
∼
where H does not depend on rl and rm . We obtain the identity Ψlm = Φlm − Φll Ψlm − Φlm Ψmm .
(22)
As in the proof of Lemma 6.2, we obtain (with N less by 1) that var Φll = O(n−1 ), var Ψmm = O(n−1 ), while Ψmm ≤ 1,
Ψlm ≤ Ψll Ψmm ≤ 1, and Φlm = O(1), l, m = 1, . . ., N . From (22) it follows that (1 + EΦll )Ψlm = (1 − EΨmm )Φlm + ω, where Eω 2 = O(n−1 ). Estimating the expected squares of deviations Ψlm and Φlm , we find that var Ψlm = var Φlm + O(n−1 ). In ∼ view of the independence of H from rl and rm , we have ∼
var Φlm = Er2l r2m var(eT He) + var(rTl Ωrm ), ∼
where e does not depend on rl and rm , and Ω = EH. Here the first summand is O(n−1 ) by Lemma 6.2. The second summand is var(aTl Ω ξm + ξlT Ω rm + ξlT Ω ξm ), where the first two summand expectations are O(n−1 ), and the variance of the third summand is not greater than 2d2 tr Ω2 /n = O(n−1 ). We conclude that var Φlm = O(n−1 ) and, consequently, var Ψlm = O(n−1 ). Lemma 6.3 is proved. Lemma 6.4 (proof). The proof is similar to that of Lemma 6.3. Lemma 6.5 (proof). We use the relation Eψ(x) x = Eψ(x)μ + σ 2 E
∂ψ(x) ∂x
(23)
that is valid for x ∼ N(μ, σ 2 ) for any differentiable function ψ(x) if there exist E|ψ(x)| and E|xψ(x)|. For x = Rmi , we have E(HRT )im = EHij Amj +
∂Hij d E n ∂Rmj
(24)
(we mean the summation over repeated indexes). Differentiating we obtain E
∂Hij = −E(H 2 RT )im − EtrH(HRT )im . ∂Rmj
(25)
Here (spectral) norms HRT ≤ H 2 RT ≤
λmax (HRT RH) ≤ c−1/2 ,
λmax (H 2 RT RH 2 ) ≤ c−3/2 .
It follows that E(1 +
d trH)HRT = EHAT + Ω, n
where Ω = O(n−1 ). But HRT = O(1), and in view of Lemma 6.1, we have var(n−1 trH) = O(n−2 ). It follows that the expectation in the left-hand side can be replaced by the product of expectations with the accuracy up to O(n−1 ). The statement of Lemma 6.5 follows. Lemma 6.6 (proof). Let us use (23) with x = Rmk . We find that E(HRT R)ik = E(HRT A)ik +
d ∂Hij Rmj E n ∂Rmk
(we mean the summation over j and m). Here the derivative E
∂(Hij Rmj ) = N EHik −E(HRT RH)ik −EHik tr(RT RH). (26) ∂Rmk
Substituting RT RH = I − tH and differentiating with respect to Rmj in (26), we obtain the principal part with the additional terms Edn−1 H(I −tH), and Ω = E(Hn−1 trH)−EH ·En−1 trH. The first of these is O(n−1 ) in norm. Now we estimate the matrix in the subtrahend. Denote by e the eigenvector corresponding to the largest eigenvalue of this symmetric matrix. Let us apply the Cauchy– Bunyakovskii inequality. We find that the second term in norm is not greater than the square root of the product var(n−1 trH). var(eT He). Here the first multiple is O(n−2 ) by Lemma 6.1, while the second one is not greater than 1/c. We obtain the Lemma 6.6 statement.
Theorem 6.1 (proof). We start from the expression established in Lemma 6.6. Substitute HRT R = I − tH and use (24) in the left-hand side. Rearranging the summands we find that I = EH[tI + (1 + hn d)−1 AT A + κn dI + hn dtI] + Ω, where Ω = O(n−1 ). Inverting the matrix we obtain the both statements of Theorem 6.1. Theorem 6.2 (proof). The convergence of F0n (u) implies the convergence def
h0n (t) =
(u + t)−1 dF0n (u) → h0 (t),
t ≥ c > 0.
We rewrite Statement 2 of Theorem 6.1 as hn /sn = n−1 tr(AT A + uI)−1 + O(n−1 ) with u = rn sn ,
(27)
where the remainder term is small uniformly in t ≥ c > 0 as n → ∞. We have u = rn (t)sn (t) ≥ c > 0. It is not difficult to obtain the estimate anm (sn − sm ) = o(1) as n, m → ∞, where all the coefficients anm ≥ (dsn sm )−1 ≥ 1/d. It follows that we have the uniform convergence sn (t) → s(t), hn (t) → h(t) = (s(t)−1)/d, and rn (t) → r(t) = ts(t)+κd. By Lemma 6.1 uniformly var(n−1 trH(t)) → 0. It follows that all these functions converge in the square mean (Statements 1 and 2). Relation (3) follows. Further, we use V. L. Girko’s theorems on the convergence of spectral functions of random matrices. By Theorem 3.2.3 from [21], the convergence hn (t) → h(t) for all t > 0 implies the weak convergence Fn (u) almost surely (Statement 3) and the relation between h(t) and F (u) (Statement 4). Also from Theorem 6.1, Statement 5 follows easily. The proof of Theorem 6.2 is complete. Theorem 6.3 (proof). Starting from (4) one can easily establish the possibility of analytical continuation of the limit function h(z), its uniform boundedness, and boundedness of its derivative. Let −Re z = v > δ > 0
and assume the contrary: there exists some sequence of z such that |h(z)| → ∞. Then, the ratio h(z)/s(z) tends to a positive constant, and the integral in the right-hand side of (3) vanishes, which is not true. It follows that the values h(z) are uniformly bounded. The first statement is proved. Statement 2 immediately follows from (4). Next, to be concise, denote h = h(z), s = s(z) = 1 + h(z)d, r = r(z) = zs (z) + κd, s0 = Re s(z), and s1 = Im s(z), and consider the function of ε > 0 (28) μν = μν (z) = |v + rs|−2 v ν dF0 (v), ν = 1, 2. We fix some v = −Re z > 0 and tend ε = − Im z → +0. Let v ∈ S (at the spectrum). Then, Im h(z) → πF (v) > 0. Divide both parts of (3) by s, take imaginary parts, and divide by Im h > 0. We find that 1/μ0 = p|s|2 d,
where p = 2vs0 − κd,
v ∈ S,
(29)
where |s| ≥ s1 → dπF (v) > 0. Now let us equate imaginary parts of the left- and right-hand sides of (3) and divide by Im h. We find that 1/μ0 = dμ1 /μ0 + d|s|2 v + s0 |s|2 ε/Im h. For v ∈ S and ε → +0, the last summand presents O(ε). From these two equations, it follows that |s|2 (p − v) = μ1 /μ0 + O(ε),
v ∈ S.
(30)
Since |s| ≥ |s1 | → dπF (v) > 0 and v > 0, we infer that p > v + O(ε). Further, applying the Cauchy–Bunyakovskii inequality to (3), we derive the inequality |h|2 ≤ μ0 |s|2 . Substituting μ0 from (28), we obtain |h|2 pd ≤ 1 + O(ε) and as ε → +0 we have |s − 1|2 ≤ d/p ≤ d/v,
v ∈ S.
(31)
√ In particular, it follows that for v ∈ S, the value Im h ≤ 1/ vd, and F (v) ≤ π −1 (vd)−1/2 . This is Statement 3 of our theorem. Let us find spectrum bounds defined by the function F (v). From (30) and(31), the inequality follows (2s0 − 1) v ≥ κd + O(ε) and v (1 + 2 d/v) ≥ κd + O(ε). Solving the quadratic equation √ with respect to v, we find that v ≥ v1 + O(ε), where def
v1 =
κ2 d , 4( 1 + κ + 1)2 √
v ∈ S.
(32)
Now note that μ1 ≤ μ0 c2 , where c2 is the upper bound of the matrices AT A spectrum. Equation (30) implies (33) (2s0 − 1)v − κd = p − v ≤ c2 |s|−2 + O(ε), v ∈ S. We have s0 ≥ 1− d/v + O(ε). Let v > 9d. Then, s0 ≥ 2/3 + O(ε) and |s|2 ≥ 4/9 + O(ε). From (33), it follows that (2s0 − 1) v ≤ 9/4 c2 + κd + O(ε). Finding the √ lower boundary of s0 from (31), we obtain the inequality v − 2 dv ≤ κd + 9/4 c2 + O(ε). Let us √ solve this quadratic inequality with respect to v as ε → +0. We find that v ≤ 2d (2 + κ) + 2.25 c2 + O(ε). This relation provides the upper spectrum bound in the theorem formulation. Spectrum bounds v1 and v2 are established. Now we integrate both parts of the inequality √ over v√∈ S, where F (v) ≤ π −1 (vd)−1/2 , and conclude that π d/2 ≤ v20 − √ v10 , where v20 ≥ v10 ≥ 0 are (unknown) extreme points of the set S. Thus, we find the lower estimate of the set S diameter shown in Statement 4. Let us establish the H¨ older inequality for s(z). In view of (4) for ε = Im z = 0, the function h(z) is differentiable. Let v = −Re z ∈ S. Differentiating the left- and right-hand sides of (3), we find the following equation for the derivative s = s (z): s /(ds2 ) = −m0 s (κd − 2vs) − m0 s2 , where def m0 = (u + rs)−2 dF0 (u). We rewrite this equation in the form s (B − p) = −s2 , where B = 1/(dm0 s2 ). Now estimate |s | from below. In view of (3) for z
from any compact within S, the values s are bounded from below and from above. By (29) for sufficiently small ε > 0, we have p > δ/2 and, consequently, |m0 s2 | ≤ μ0 |s|2 = O(1). It follows that |s | ≤ const /|ρ|, where ρ = μ0 |s| − m0 s = 2
2
−2
|u/s + r|
dF0 (u) −
(u/s + r)−2 dF0 (u).
Denote w = u/s + r. It is easy to verify that Re ρ = 2
|w|−4 (Im w)2 dF0 (u).
The value Im w = −|s|−2 s1 − vs1√+ O(ε). For sufficiently small ε → +0, we have |Im w| > vs1 / 2. Applying the Cauchy– Bunyakovskii inequality and using (27) for sufficiently small ε > 0, we obtain Re ρ ≥ v 2 s21 |w|−4 dF0 (u) ≥ v 2 s21 |s|4 μ20 ≥ (vd)2 p−2 s21 . Here v ≥ δ > 0, and the values p are bounded from above. We conclude that |s | ≤ const /s21 . Thus, the function s31 has the derivative uniformly bounded in S. As z → −v − iε, where v < 0 and ε → +0, the limit function exists s1 (−v) = lim s1 (z). The difference s31 (z) − s31 (−v) = O(ε) and therefore s1 (z) ≤ s1 (−v) + O(ε1/3 ). By (4) we can infer that s31 has the uniformly bounded derivative on any compact beyond S. Thus, the function s1 (z) satisfies the H¨older inequality with the exponent 1/3 for all z. In view of (4) for v > 0, we have F (v) = π −1 Im h(−v). One can see that F (v) also satisfies the Lipschitz condition with the exponent 1/3. For κ > 0, we have v1 > 0. It follows that for 0 < v < v1 , we have F (v) = 0. Assume that F (v) has a jump at the point v = 0. Then by (4), the function h(z) has a pole at the point z = 0. It is easy to see that this contradicts (3). We conclude that F (v) satisfies the Lipschitz condition for all v ≥ 0. Set F (v) = 0 for v < 0. Then, F (v) satisfies the Lipschitz condition for all real v. Consider the integral in equation (4) as the Cauchy-type integral. The integration contour can be extended to the negative half-axis and be closed by an
infinitely remote circumference. We apply well-known theorems on the Cauchy-type integrals. It follows that the function h(z) defined by (4) satisfies the H¨ older equation with the exponent 1/3 on the whole plane of complex z. The proof of Theorem 6.3 is complete. Lemma 6.7 (proof). Let us use relation (23) with x = Rmj . We find E(RHRT R)lk = ERHRT Alk +
∂Rli Hij Rmj d , E n ∂Rmk
(34)
l = 1, . . ., N, k = 1, . . ., n (here the summation over i, j, and m is implied). The derivative E
∂Hij ∂Rli Hij Rmj = (N + 1)E(RH)lk + Rij Rmj , ∂Rmk ∂Rmk
(35)
where the second summand equals −(RH)lk tr(RT RH) − (RHRT H)lk √ and the spectral norms RH ≤ 1/ c, RHRT H ≤ 1/c. For t ≥ c > 0, we have n−1 tr(RT RH) = 1 − tn−1 trH, ERHn−1 trH − ERHEn−1 trH ≤ t−1 var(n−1 trH) = O(n−1 ). By Lemma 6.1 we find that (34) are entries of [N − n(1 − hn )] ERH + Ω, where the matrix Ω is uniformly bounded in spectral norm. The lemma is proved. Remark 3 (proof). Indeed, in view of the independence of H of y, the variance var ϕ n (t) = var(yT Ωy) + EE |yT Δn y|2 ≤ ≤ Ω 2 var y2 + E|y|2 (eT Δn e)2 = O(n−1 ), where Ω = ERH(t)H(t )RT , Δn = RH(t)H(t )RT −Ω, the expectation E is calculated for fixed y, and e depends only on y but not
on R. Here the variance var y2 ≤ 2(2qb2 + q 2 /n)/n = O(n−1 ), Ey2 = O(1), and by Lemma 6.3, we have E(eT Δn e)2 = O(n−1 ). This is the first statement of Lemma 6.3. It is easy to verify the second half of Remark 3 using Lemma 6.1 and the fact that rn (t) ≥ c > 0. Remark 4 (proof). Indeed, note that the denominator of gn (t) contains the function rn (t) ≥ c > 0. Using Remark 3, it is easy to justify Remark 4. Lemma 6.8 (proof). First, let us transform ϕn = E(y2 − yT RHRT y) = E[b2 − bT RHRT b + qn−1 tr(I − RHRT )]. We substitute here the right-hand side of the equation b = Ax and use Lemma 6.7 and the identities HRT R = I − tH and tr(RHRT ) = n − t trH. It follows that ϕn = rn EbT RHx + q(N/n − 1 + thn ) + O(n−1 ). By Lemma 6.5, we have T −1 ϕn = rn s−1 n x Σ EHx + q(thn + κn ) + O(n ).
Using Theorem 6.1, we obtain the second lemma statement. Further, we apply Remark 4 and obtain the first lemma statement. The proof of Lemma 6.8 is complete. Theorem 6.4 (proof). First, using the identity Ht + HRT R = I, we rewrite the expression in Lemma 6.7 formulation in the form A = ERHRT A + (t + hn td + κn d)ERH + ω, with the same upper estimate of ω as in Lemma 6.7.
278
6. THEORY OF SOLUTION TO EMPIRICAL SLAE
Let us multiply the left- and right-hand sides of this matrix by x from the right and by bT from the left. We find that b2 = EbT RHRT b + (t + thn d + κn d)EbT RHx + ω , √ where |ω | ≤ is greater than |ω| by no more than c2 times. It is obvious that Ey2 = b2 + N n−1 q. Note that for y ∼ N(0, q/n) for any constant matrix M , we have the relation EyT M y = bT M b + qn−1 EtrM. It follows that Ey2 − q(κn + 1) = = EyT RHRT y − q (1 − hn t) + (t + thn d + κn d)EyT RHx + ω with the same remainder term. Thus, we find that for t ≥ c > 0, the expectation EyT RHx =
ϕn − (thn + κn ) q + ω , t(1 + hn d) + κn d
(36)
where |ω | ≤ ad c−3/2 c2 and by Remark 4, the left-hand side of (36) equals gn (t) + O(n−1 ). Integrating over dη(t), we obtain the second term of the right-hand side of (9) with the accuracy up to O(n−1 ). Let us transform the third term of the right-hand side using the identity H(t)H(t ) = (t − t)(H(t) − H(t )). We obtain that 1/2
EyT RH(t)RT y − EyT RH(t )RT y dη(t) dη(t ) = t − t Eϕn (t) − Eϕn (t ) dη(t) dη(t ) = Kn (t, t ) dη(t) dη(t ). = t − t (37) T
2
Ey RΓ Ry =
Thus, we obtain the third term in the theorem formulation. The proof of Theorem 6.4 is complete. Theorem 6.5 (proof). n (t) ≤ y2 , and For t ≥ c > 0, the functions hn (t) ≤ c−1 , ϕ 2 gn (t) ≤ y . By Remark 3, the variances of these functions are
PROOFS FOR SECTION 6.2
279
O(n−1 ) uniformly in t ≥ c > 0. It follows that the integral of g(t) with respect to the measure η(t) converges in the square mean to the integral of g(t) with respect to the same measure. n (t, t ) = O(n−1 ) uniFurther, for any ε > 0, the variance K formly as |t − t | > ε. For |t − t | ≤ ε, we have 1 n (t, t ) = ϕ K n (t) + ϕ (θ) |t − t |, 2 n where θ is an intermediate value of the argument, and the second derivative is uniformly bounded. Choosing ε = n−1/2 , we find that n (t, t ) is uniformly approximating Kn (t, t ) in the square mean. K n (t, t ) also is O(n−1 ). It follows By Lemma 6.4, the variance K that the double integral in (9) converges in the square mean to the double integral in (11). We obtain the theorem statement. Lemma 6.9 (proof). The first two statements of Lemma 6.9 follow from Theorem 6.2, Lemma 6.8, and Remark 3. By (8) for t ≥ c > 0, the derivatives ϕn (t) and ϕ (t) exist and are uniformly bounded. The uniform convergence def ϕ(t) − ϕ(t ) Kn (t, t ) → K(t, t ) = t − t follows from the uniform convergence ϕn (t) → ϕ(t) and uniform convergence of derivatives ϕn (t). Thus, the lemma is proved. Theorem 6.6 (proof). This statement is a direct consequence of uniform convergence in Lemma 6.9 and the decrease of variances in Remark 3. Lemma 6.10 (proof). By Theorem 6.3, the analytical continuations of functions s(z) and r(z) are regular. To prove the existence of the derivative g (z), it suffices to show that the denominator in the integrand in (37) does not vanish. By (4) for κ > 0 and z around z = 0, the function s(z) is bounded, r(0) = κd, and it follows that s(z) is regular. We conclude that g (0) exists. For Im z = 0 by (4), we have Im h(z) = 0, and
280
6. THEORY OF SOLUTION TO EMPIRICAL SLAE
(3) implies that Im h(z) = |s(z)|2 μ0 Im(r(z)s(z)), where μ0 is the integral defined in the proof of Theorem 6.3. Consequently, for these z, the derivative g (z) exists, and it follows that the derivative ϕ (z) also exists. The lemma is proved. Lemma 6.11 (proof). We start from the right-hand side of (12). To the right from L the functions g(z) and ϕ(z) have no singularities, and for |z| → ∞, we have |Γ(−z)| = O(|z|−1 ). For κ > 0, $ the value v1 > 0 and as |z| → ∞ by (4), we have zh(z) → v −1 dF (v) ≤ 1/v1 . Thus, s(z) = O(1), zs2 (z) ≈ z, and one can see that g(z) = O(|z|−1 ) and ϕ(z) = O(1). Consequently, we can close the contour L in the integrals (12) to the right side by an infinitely remote semicircle. The closed contours are formed. We find that & 1 g(z) g(t) dη(t) = − dz dη(t) = 2πi z−t 1 g(z)Γ(−z) dz, = 2πi L ϕ(t) − ϕ(t ) dη(t) = t − t & 1 1 1 1 ϕ(z) dz dη(t) dη(t ) = − = 2πi t − t z − t z − t 1 ϕ(z) Γ2 (−z) dz. =− 2πi L
The expression (12) passes to the right-hand side of (11). Our lemma is proved. Remark 5 (proof). Indeed, we note that in this case, the Radon–Nikodym derivatives dG(v)/dF0 (v) are uniformly bounded, and the integral in (37) is O(1)μ0 , where μ0 is defined in Theorem 6.3. Therefore, |g (z)| ≤ μ0 |(r(z)s(z))|. For z = −v − iε, v, ε > 0, the value 1/μ0 = p|s|2 d + O(ε), where p ≥ v ≥ v1 , |s| ≥ 1/2, and ε = |Im z|.
PROOFS FOR SECTION 6.2
281
Therefore, for sufficiently small ε, the values μ0 ≤ 8/v1 d are uniformly bounded. By Theorem 6.3, the functions r(z) and s(z) satisfy the H¨ older condition. We conclude that for v ∈ S, the functions g(z) and ϕ(z) also satisfy the H¨ older condition and allow the continuous extension to z = −v. Remark 5 justified. Lemma 6.12 (proof). We start from the right-hand side of (12). From (4), one can see that as |z| → ∞, the functions s(z) and ϕ(z) are uniformly bounded and g(z) decreases as O(|z|−1 ). We can deform the contour L by moving its remote parts to the region of negative Re z. Let us move these parts of L from above and from below to the half-lines (0, ±iε; −∞ ±iε), where some ε > 0. Since all functions in the integrands are bounded for Re z = 0, the contribution of the vertical segment of the closed integration path is O(ε). Let ε → +0 tending the upper half-line to the lower one. The functions g(z) and ϕ(z) satisfy the H¨ older condition (13) at the halfline z ≤ 0. Therefore, the real parts of these functions mutually cancel, whereas the imaginary parts double. The right-hand side of (12) takes the form 1−
2 π
∞
Im g(−v)Γ(v) dv −
0
1 π
∞
Im ϕ(−v)Γ2 (v) dv.
0
For v ∈ / S, Im h(−v) = 0 and imaginary parts of the functions r(−v), g(−v), ϕ(−v) vanish. We obtain the statement of Lemma 6.12. Lemma 6.13 (proof). Denote v = − Re z, s0 = Re s, s1 = Im s, r = r(z) = −vs+κd, r0 = Re r, g1 = Im g(z), and ϕ1 = Im ϕ(z), Aν =
uν dG(u), |u + r(z)s(z)|2
ν = 0, 1.
From Lemma 6.9, one can find that the imaginary parts g1 = −A0 Im(r(z)s(z)) = A0 (2vs0 − κd) ≥ 0, ϕ1 = −(A1 v + A0 |r|2 + qv/d).
282
6. THEORY OF SOLUTION TO EMPIRICAL SLAE
It is clear that A0 λ0 ≤ A1 . Therefore, |ϕ1 | ≥ A0 λ0 v1 ≥
λ0 v1 , 2 (c22 + |rs|2 )
where |s|2 ≤ 1 + d/v, v ≥ v1 > 0, and |r|2 ≤ κ2 d2 + c22 |s|2 . Statement 1 is proved. The ratio g1 2|r0 | + κd 2 κd ≤ ϕ1 λ0 v + |r|2 ≤ √λ v + λ0 v , 0 v ≥ v1 > 0. Our lemma is proved. Lemma 6.14 (proof). First, we reconsider the proof of Lemma 6.1. We note that for any complex vectors q and for any symmetric real positive semidefinite matrices A, the inequalities hold A + zI ≤ |z|2 /|z1 |,
|1 + qH z(A + zI)−1 q|−1 ≤ |z/z1 |, (38)
where the upper H is the Hermitian conjugation sign. In Remark 2, we have now Ω ≤ |z/z1 |; we obtain (as z is fixed) the estimate Varm Φm = O(n−1 ). In the proof of Lemma 6.1, the upper estimates change: now we have Ω = O(1), |ϕ| = O(1). It follows that Varm ψ = O(n−1 ). The variance Varm ϕ ≤
E|1 + Φm |−2 |1 + EΦm |−2 Varm Φm ,
where the multiples under the radical sign are O(1) by the second of the inequalities (38) in the above. We find that Varm ϕ ≤ O(1) Varm Φm , Eψ 2 = O(1), and Varm (ψϕ) = O(n−1 ). Thus, we proved the first lemma statement. We now reconsider the proof of Lemma 6.3. Let e = f be a complex vector of unit length. For fixed z1 = 0 by (38), we have Ω = O(1) and, as before, E(r2m )2 = O(1). We obtain a refinement of Lemma 6.3: for any z1 = 0, we have Var[eH RH(z)RT e] = O(n−1 ). Thus, the first and the second statements of our lemma are proved. Next, we reconsider Remark 3. Now Ω = ERH(z)H(z )
PROOFS FOR SECTION 6.2
283
RT , Ω ≤ |z/z1 |, and |eH Δn e|2 = O(n−1 ). We obtain the third statement of Lemma 6.14. The value rn = rn (z) = zsn (z) + κd and |Im rn | ≥ |z1 |. From (8) it follows that for fixed z1 = 0, we have Var ϕ n (z) = O(n−1 ). The proof of Lemma 6.14 is complete. Theorem 6.8 (proof). Let z = −v − iε, v, ε > 0. Denote s1 = Im s(z), g1 = Im g(z), ϕ1 = Im ϕ(z), g1 = Im gn (z), ϕ 1 = Im ϕ n (z), and Γ0 (v) = g1 /ϕ1 . The values of v for which | g1 /ϕ 1 | > B lay outside S, where ρ(v) = 0, and the contribution of these v to (16) equals 0. For | g1 /ϕ 1 | ≤ B, we subtract g1 g1 g1 − g1 g1 − = + (ϕ1 − ϕ 1 ). ϕ1 ϕ1 ϕ1 ϕ 1 Denote by S a subset of S for which ρ(v) ≥ ε > 0. From Lemmas 6.13 and 6.14, it follows that for fixed ε > 0, the contribution of v ∈ S tends to zero as n → ∞. The ratios g1 /ϕ1 and g1 /ϕ 1 are bounded. It follows that the contribution of the region S − S tends to zero as ε → 0 independently of n. Thus, the theorem is proved. Remark 6 (proof). Indeed, the first statement follows from Theorem 6.6. The existence of the derivative ϕ (t) and the expressions for (x− x)2 follows from Lemma 6.14 and (29). Lemma 6.15 (proof). From (3), we find that for t > 0, this equation has a unique solution h(t) that tends to h(0) as t → +0, and 1 + s(0)
d dF0 (v) = 1, (v + κds(0))
s(0) = 1 + dh(0).
The left-hand side of this equation depends on s(0) monotonously, and this equation is uniquely solvable with respect to s(0). One can see that 1/s(0) + 1/(κs(0)) ≥ 1 and s(0) ≤ 1 + 1/κ. The second statement of our lemma follows from Lemma 6.9. Next, we
284
6. THEORY OF SOLUTION TO EMPIRICAL SLAE
differentiate both parts of the equation (3) with respect to t > 0. Isolating the coefficient of the derivative s (t) we obtain s (t) = −
dh2 (t) s2 (t) . 1 + dh2 (t) (r(t) + ts(t))
For t → +0, we obtain statement 3. Let us calculate the derivatives g (t) and ϕ (t) from Lemma 6.9 for t > 0. Then, we tend t → +0. From Lemma 6.9 by differentiating, we obtain the expression for ϕ (0). Further, it is obvious that b1 ≥ c1 /(c2 + d (1 + κ)), b2 ≤ b1 /(κd s(0)). For ϕ (0), we find that ϕ (0) ≥
λ0 κd2 h2 (0) . c2 + (1 + κ)d 1 + κd2 h2 (0)
We derive the last inequality in the lemma statement. Lemma 6.15 is proved. Theorem 6.9 (proof). The first statement immediately follows from Remark 6. To be concise, we denote g = g(t), g = gn (t), p = ϕn (t), p = ϕ n (t). We obtain 2 0t )2 = E E(x0t − x p g 2 /p − g 2 / p2 .
(39)
By Remark 6, we have 0 ≤ g 2 /p ≤ 1, and 0 ≤ g2 / p ≤ 1 with probability tending to 1. By Lemma 6.9 for fixed t > 0 as n → ∞, the convergence in the square mean g → g, and p → p holds. We conclude that the expression (39) tends to zero as n → ∞. Theorem 6.9 is proved.
APPENDIX
EXPERIMENTAL INVESTIGATION OF SPECTRAL FUNCTIONS OF LARGE SAMPLE COVARIANCE MATRICES The development of asymptotical theory of large sample covariance matrices made it possible to begin a systematic construction of improved and unimprovable statistical procedures of multivariate analysis (see Chapters 4, 5, and [71]). New procedures have a number of substantial advantages over traditional ones: they do not degenerate for any data and are applicable and approximately unimprovable independently of distributions. Most of theoretical results in multiparametric statistics were obtained in the “increasing-dimension asymptotics,” in which the observation dimension n → ∞ along with sample sizes N so that n/N → y > 0. This asymptotics serves as a tool for isolating principal parts of quality functions, for constructing their estimators, and for the solution of extremum problems. However, the problem of practical applicability of the improved procedures requires further investigations. Theoretically, it is reduced to obtaining upper estimations for the remainder terms. But weak upper estimates of the remainder terms derived until now seem to be too restrictive for applications. The question of practical advantages of asymptotically improved and unimprovable procedures remains open. In this appendix, we present data of the experimental (numerical) investigation of spectral functions of large sample covariance matrices presented in [73] and discuss results of their comparison with asymptotical relations of spectral theory.
285
286 EXPERIMENTAL INVESTIGATION OF SPECTRAL FUNCTIONS
1. THEORETICAL RELATIONS A remarkable feature of the spectral theory of random matrices of increasing dimension is its independence of distributions. In Chapter 3, it is shown that applicability of asymptotic formulas of this theory is determined by magnitudes of two special parameters. Let x be an observation vector, and S be a population for which E x = 0. Denote M = sup E(eT x)4 > 0,
(1)
|e|=1
where e are nonrandom unity vectors; γ = supΩ≤1 var(xT Ωx/n)/M,
(2)
where Ω are nonrandom, symmetric, positive semidefinite matrices with spectral norms not greater than 1. In Chapter 3, it was shown that the applicability of asymptotical theory is guaranteed in the situation when the moment M is bounded and the parameter γ is vanishing. Denote Σ = cov(x, x). For a nondegenerate normal distribution N(0, Σ), the values M = 3Σ2 , γ = 2/3 n−2 tr Σ2 . If Σ = I then M = 3 and γ = 2/3n. Let X = {xm } be a sample of size N from population S. Let ¯=N x
−1
N
xm ,
C=N
−1
m=1
N
¯ )(xm − x ¯ )T . (xm − x
m=1
Here C is sample covariance matrix (a biased estimator of Σ). Denote y = n/N , H = H (t) = (I + tC)−1 ,
hn (t) = n−1 trH (t),
sn (t) = 1 − y + yhn (t). We cite the basic limit spectral relation (dispersion equation) for functions of the matrices Σ and matrices C derived in Theorem 3.1 and in [25].
THEORETICAL RELATIONS
287
Theorem 1 (corollary of Theorem 3.1). For any populations with M > 0, for t ≥ 0, hn (t) = n−1 tr(I + tsn (t)Σ)−1 + ω(t),
(3)
EH (t) = (I + tsn (t)Σ)−1 + Ω, var n−1 H (t) ≤ aτ 2 /N,
(4)
√ √ 2 )/ N ], where ω(t) = Ω ≤ aτ max(1, y) [τ δ + (1 + τ √ a is a numeric coefficient, τ = M t, and δ = 2y 2 (γ + τ 2 /N ). For normal distributions, the expectation of n−1 trH (t) was investigated in more detail in [12], and it was found that for Σ = I √ def E |ω(t)| ≤ ω1 (t) = 2yt2 (1 + ty)/ nN + 3t/N, while the variance var[n−1 trH(t)] = O(n−1 N −1 ). The problem of recovering spectra of matrices Σ from the empirical matrices C may be solved in the form of limit relations (see Introduction). To pass to the limit, we consider a sequence of problems of the statistical analysis ¯ , C)n }, n = 1, 2, . . ., P = {(S, Σ, N, X, x
(5)
where (indexes n are omitted) S is n-dimensional population with Σ = cov(x, x), and X is a sample of size N from S. Theorem 2 (corollary of Theorem 3.1). Let moments (1) exist and be uniformly bounded in P; let n → ∞ so that n/N → λ > 0, γ → 0, and for almost all u ≥ 0 def
F0 (u) = lim
n→∞
n 1 ind(λ0i ≤ u), n
where λ01 , λ02 , . . ., λ0n are the eigenvalues of Σ. Then, for each t ≥ 0 1. there exists the limit h(t) = lim En−1 trH (t) = (1 + ts(t)u)−1 dF0 (u), n→∞
(6)
i=1
(7)
288 EXPERIMENTAL INVESTIGATION OF SPECTRAL FUNCTIONS
where s(t) = 1 − λ + λh(t); 2. the variance var n−1 tr H (t) → 0 as n → ∞. The limit dispersion equation (7) presents the main result of spectral theory of sample covariance matrices of increasing dimension. Consider an empiric distribution function for eigenvalues λi of the matrices C Fn (u) = n−1
n
ind(λi ≤ u),
u ≥ 0.
(8)
i =1
Theorem 3 (corrolary of Theorems 3.2 and 3.3). Let assumptions of Theorem 2 hold and, in addition, λ > 0 and all eigenvalues of matrices Σ in P exceed c1 > 0, where c1 does not depend on n. Then, 1. the function h(z) allows an analytical continuation to the region of complex z and satisfies the H¨ older condition for all z; P 2. the weak convergence holds Fn (u) → F (u), u ≥ 0; 3. and for all z, except z < 0, h(z) = (1 − zu)−1 dF (u). In [9], it was shown that as n → ∞, all eigenvalues of matrices C lay on the limit support almost surely. Consider a special case: let Σ = I, n = 1, 2, . . .. In this case, the function h(z) satisfies the quadratic equation h(z)−1 = zh(z)s(z), and the limit spectral density of C is F (u) = (2πλu)−1 (u2 − u)(u − u1 ), (9) where u1 = (1 − solution of (7) is h(t) = 2
√
λ)2 , u2 = (1 +
√
λ)2 for u1 ≤ u ≤ u2 . The
(1 + (1 − λ) t)2 + 4tλ + 1 + (1 − λ) t
−1
.
(10)
THEORETICAL RELATIONS
289
Suppose, additionally, that there exists an ε > 0 such that the quantity sup E(eT x)4+ε is uniformly bounded in P. Then, |e| = 1
Theorem 11.1 from [25] states that as n → ∞, the minimum and maximum eigenvalues of the matrices C converge in probability to the magnitudes α1 and α2 such that n y λk αi = plim 1 − , i = 1, 2, (11) n λk − xi k=1
where x = x1 and x = x2 are the minimum and maximum real roots of the equation n λ2k y = 1. n (λk − x)2 k=1
290 EXPERIMENTAL INVESTIGATION OF SPECTRAL FUNCTIONS
2. NUMERICAL EXPERIMENTS Numerical simulation was performed • to investigate the relation between spectral functions of true covariance matrices Σ and sample covariance matrices C, given by equations (3) and (7); • to study the convergence of the empirical distribution function of eigenvalues of C for large n; • to investigate boundaries of spectra of matrices C and to compare these with theoretical relations (9) and (11); • to study the dependence of the remainder terms ω(t) in (3) and (5) on the parameters t, M , y = n/N , and γ, and to compare with theoretical upper estimates; • to estimate experimentally the boundaries of the applicability of the asymptotic equation (3) to distributions different from normal. In the experiments, for fixed numbers n and N and given distribution law, samples were generated and used for calculation of sample means, sample covariance matrices, and spectral functions hn (t), H (t), and Fn (u). In tables, random functions hn (t) and Fn (u) are denoted by h(t) and F(u), and their averages (over s experiments) are denoted by h(t) and F(u) . The random inaccuracy ω(t) of the equation (3) was calculated in a series of s experiments. The average inaccuracy presents the estimator of E ω(t), and its mean square deviation presents the estimator of the variance of ω(t). These characteristics are compared in tables with theoretical upper estimates of the asymptotic formula inaccuracy. Empirical spectral function h(t) Distribution x ∼ N(0,I) In Tables 1–4 the first two rows present the values of the function h(t) for two independent experiments, the third row contains an
EMPIRICAL SPECTRAL FUNCTION h(t) DISTRIBUTION
291
estimator of the expectation along with the mean square deviation (in a series of s = 100 experiments). In the last row, the theoretical function h(t) is shown. The analytical dependence of the deviations of the form | h(t) −h(t)| = kN −b was fitted by the minimum square method: it is shown under the tables with the mean square error. From Tables 1–4, one can see the systematic decrease of h(t) with growing N for fixed y = n/N that can be explained by the increase of accuracy in the resolvent denominator. Table 1 : Spectral function h(t): x ∼ N (0, I), t = 1, y = 1, s = 100 b h(t):1 b h(t):2 b h(t)
h(t)
N =2
N = 10
N = 20
N = 50
0.806 0.820 0.794 ± 0.107 0.618
0.634 0.630 0.657 ± 0.025 0.618
0.652 0.612 0.639 ± 0.014 0.618
0.622 0.619 0.626 ± 0.005 0.618
Empirical law: h(t) − h(t) ≈ 0.32N −0.93 ± 0.23N −0.96 Table 2 : Spectral function h(t): x ∼ N (0, I), t = 2, y = 1, s = 100 b h(t):1 b h(t):2 b h(t)
h(t)
N =2
N = 10
N = 20
N = 50
0.750 0.874 0.700 ± 0.103 0.500
0.591 0.515 0.544 ± 0.028 0.500
0.504 0.537 0.521 ± 0.015 0.500
0.510 0.508 0.508 ± 0.006 0.500
Empirical law: h(t) − h(t) ≈ 0.46N −1.0 ± 0.21N −0.90 Table 3 : Spectral function h(t): x ∼ N (0, I), t = 1, y = 1/2, s = 100 b h(t):1 b h(t):2 b h(t)
h(t)
N =2
N = 10
N = 20
N = 50
0.493 0.537 0.740 ± 0.201 0.562
0.555 0.585 0.588 ± 0.048 0.562
0.580 0.578 0.581 ± 0.022 0.562
0.573 0.566 0.568 ± 0.010 0.562
Empirical law: h(t) − h(t) ≈ 0.42N −1.1 ± 0.43N −0.96
292 EXPERIMENTAL INVESTIGATION OF SPECTRAL FUNCTIONS
Table 4 : Spectral function h(t): x ∼ N (0, I), t = 2, y = 1/2, s = 100 b h(t):1 b h(t):2 b h(t)
h(t)
N =2
N = 10
N = 20
N = 50
0.392 0.170 0.708 ± 0.233 0.414
0.504 0.369 0.459 ± 0.059 0.414
0.446 0.422 0.435 ± 0.026 0.414
0.417 0.422 0.424 ± 0.010 0.414
Empirical law: h(t) − h(t) ≈ 0.52N −1.0 ± 0.50N −0.98 Values of h(t) tend to theoretical functions h(t) as N increases, and the difference decreases approximately by the law N −1 . The scatter of empiric h(t) in two experiments is covered by 2.5 σ, where σ 2 is the sample variance. Accuracy of the empirical dispersion equation. Distribution x ∼ N(0,I) In Tables 5–8, the experimental inaccuracy ω(t) of the equation (3) is presented. The function s(t) was replaced by 1 − y + y h(t), where h(t) was taken from Tables 1–4. As in the former tables, the first two rows present two independent experimental values of ω (t), the next row shows sample mean and the mean square deviation in a series of s = 100 experiments. The fourth row contains theoretical upper inaccuracy ω(t) in (3) with the coefficient a = 1 (this estimator is distribution free). The fifth row presents the minimum value of a, for which (3) holds with ω(t) substituted by its expectation; in the sixth row, the refined theoretical upper estimate ω1 (t) is shown (valid for normal distribution). Using ω (t), the analytical dependence ω(t) = kN −c on N was constructed by the minimum square method for the expectation and the square scatter. It is shown under the tables. From Tables 5–8, one may see that the average inaccuracy ω (t) (in the series of 100 experiments) of the equation (3) decreases approximately by the law N −1 as n = N increases. The theoretical upper estimate of ω(t) in Section 3.1 decreases as N −1/2 , while
ACCURACY OF THE EMPIRICAL DISPERSION EQUATION
293
Table 5 : The remainder term ω(t) in equation (3): x ∼ N (0, I), y = 1, t = 1, s = 100 ω(t):1 ω(t) : 2 ω(t)
ω(t) a ω1 (t)
N =4
N = 10
N = 20
N = 50
0.120 0.115 0.117 ± 0.058 7.53 0.0155 1.75
0.0570 0.0637 0.0526 ± 0.0284 4.76 0.0111 0.70
0.0303 0.0308 0.0254 ± 0.0141 3.37 0.00755 0.350
0.0161 0.0110 0.00959 ± 0.00525 2.13 0.0045 0.14
Empirical law: ω(t) ≈ 0.62N −1.2 ± 0.22N −0.93 Table 6 : The remainder term ω(t) in equation (3); x ∼ N (0, I), y = 1, t = 2, s = 100 ω(t):1 ω(t):2 ω(t)
ω(t) a ω1 (t)
N =4
N = 10
N = 20
N = 50
0.172 0.193 0.167 ± 0.064 52.7 0.00318 7.50
0.0347 0.0880 0.0684 ± 0.0329 33.3 0.00205 3.00
0.0668 0.0469 0.0309 ± 0.0162 23.6 0.00131 1.50
0.0128 0.0113 0.0129 ± 0.0057 14.9 0.000862 0.60
Empirical law: ω(t) ≈ 0.65N −1.0 ± 0.24N −0.92 Table 7 : The remainder term ω(t) in equation (3): x ∼ N (0, I), y = 1/2, t = 1, s = 100 ω(t):1 ω(t):2 ω(t)
ω(t) a ω1 (t)
N =4
N = 10
N = 20
N = 50
−0.151 0.0928 0.124 ± 0.097 2.84 0.0438 1.28
−0.0252 0.0253 0.0416 ± 0.0442 1.79 0.0232 0.51
0.0561 0.0102 0.0216 ± 0.0237 1.27 0.017 0.26
0.0154 0.0153 0.00970 ± 0.00857 0.802 0.0121 0.10
Empirical law: ω(t) ≈ 0.45N −1.0 ± 0.44N −1.0 the more refined estimator ω1 (t) (valid for normal distributions) decreases as N −1 . For n = N , the upper estimate ω(t) in all cases is comparable with 1 or more and has no sense as much the inaccuracy ω(t) is not greater than 1 by definition. The upper estimate ω1 (t)
294 EXPERIMENTAL INVESTIGATION OF SPECTRAL FUNCTIONS
Table 8 : The remainder term ω(t) in equation (3): x ∼ N (0, I), y = 1/2, t = 2, s = 100 ω(t):1 ω(t):2 ω(t)
ω(t) a ω1 (t)
N =4
N = 10
N = 20
N = 50
0.285 0.0647 0.169 ± 0.127 19.0 0.00891 4.33
0.0707 0.0906 0.0558 ± 0.0456 12.0 0.00464 1.73
−0.0126 0.0332 0.0284 ± 0.0241 8.50 0.00334 0.87
0.0182 0.0175 0.0121 ± 0.0109 5.38 0.00224 0.35
Empirical law: ω(t) ≈ 0.58N −1.0 ± 0.49N −1.0 proves to be ten times overstated as compared with the experimental value. For n = N , the mean square deviation of ω(t) decreases by the law N −1 that corresponds to the estimator of the function h(t) variance for normal distribution found in [63] and [12], while (3) guarantees only the law N −1/2 . The upper estimate of the variance of h(t) in [63] proves to be overstated by five times as compared with the empiric mean square deviation of ω(t) for t = 1 and n = N = 50. The disagreement with the experiment increases with the increase of t.
Spectral function h(t) for binomial and normal populations Tables 9–10, present the estimators h(t) of h(t) calculated in a series of s = 100 experiments together with the mean square deviation σ(t) of h(t). Spectral functions are compared for normal distribution N (0, I) and discrete binomial law B(1, p) (in which each component of x takes on values a = − p−1 (1 − p) or b = p(1 − p)−1 (p > 0) with the probabilities P(a) = p, P(b) = 1 − p so that E x = 0 and E x2 = 1. In the last row, limit values h(t) are shown calculated by (7). Tables 9 and 10 for sample size N = 50 show a good agreement of experimental results for these two distributions and a good fit to theoretical limits as well as a small scatter of estimators. Theoretical expected mean square scatter may be measured by the quantity
FUNCTION h(t) FOR NONSTANDARD NORMAL DISTRIBUTION 295
Table 9 : Spectral function h(t) ± σ(t): y = 1, N = 50, s = 100 N B(0.3) B(0.5) B(0.7) h(t)
t = 0.1
t = 0.5
t=1
t=2
0.917 ± 0.002 0.918 ± 0.001 0.918 ± 0.000 0.918 ± 0.001 0.916
0.736 ± 0.005 0.736 ± 0.003 0.736 ± 0.002 0.736 ± 0.003 0.732
0.625 ± 0.006 0.624 ± 0.005 0.623 ± 0.003 0.624 ± 0.004 0.618
0.509 ± 0.007 0.507 ± 0.005 0.505 ± 0.004 0.507 ± 0.006 0.500
Table 10 : Spectral function h(t) ± σ(t): y = 1/2, N = 50, s = 100 N B(0.3) B(0.5) B(0.7) h(t)
t = 0.1
t = 0.5
t=1
t=2
0.914 ± 0.003 0.914 ± 0.002 0.914 ± 0.000 0.914 ± 0.002 0.913
0.709 ± 0.006 0.705 ± 0.005 0.705 ± 0.003 0.705 ± 0.005 0.702
0.572 ± 0.009 0.566 ± 0.007 0.565 ± 0.004 0.567 ± 0.007 0.562
0.423 ± 0.011 0.421 ± 0.007 0.418 ± 0.005 0.420 ± 0.007 0.414
(nN s)−1/2 = 0.002. This fact demonstrates well the insensibility of spectral functions of sample covariance matrices to distributions when dimension is high (n = 50). Function h(t) for nonstandard normal distribution Table 11 presents results of numerical modeling of the expectation of h(t) and its mean square deviation calculated in a series of experiments for normal distribution with the covariance matrix of the form ⎛ ⎞ 2 1 −1 . . . 1 −1 ⎜ −1 2 −1 . . . 1 −1 ⎟ ⎜ ⎟ ⎜ .. .. .. .. ⎟ . .. Σ = ⎜ ... ⎟ . . . . . ⎜ ⎟ ⎝ −1 1 −1 . . . 2 −1 ⎠ −1 1 −1 . . . 1 2 This matrix corresponds to observations with the correlation coefficient ±1/2 and variance 2. Its eigenvalues include (for even n) n/2 units and n/2 of λ = 3. In the last row, the limit function
296 EXPERIMENTAL INVESTIGATION OF SPECTRAL FUNCTIONS
h(t) is shown. This function was calculated by (7) as a root of the equation 2h = (1 + ts)−1 + (1 + 3ts)−1 , h = h(t), s = s(t). Table 11 : Spectral function h(t): N (0, Σ), Σ = I, N = 20, s = 100 y=1 t = 0.5
y=1 t=1
y=1 t=2
y = 1/2 t = 0.5
y = 1/2 t=1
y = 1/2 t=2
b h(t) 0.65 ± 0.01 0.54 ± 0.01 0.44 ± 0.02 0.60 ± 0.02 0.47 ± 0.03 0.34 ± 0.03 h(t) 0.64 0.52 0.42 0.59 0.45 0.32
One may see a good agreement between the experimental and theoretical values of the function h(t). Function h(t) and the inaccuracy ω (t) for different coupling of variables Tables 12–14 present results of numeric simulation of the function h(t) and the inaccuracy ω (t) in s = 100 experiments for normal population with the covariance matrix ⎛ ⎞ 1 r ... r r ⎜r 1 . . . r r⎟ ⎜ ⎟ ⎜ ⎟ Σ = ⎜ ... ... . . . ... ... ⎟ (11) ⎜ ⎟ ⎝r r . . . 1 r⎠ r r ... r 1 for different r < 1. This matrix has a spectrum with a single eigenvalue λ1 = 1 − r + rn and n − 1 eigenvalues with λ = 1 − r. Theoretical limit function h(t) can be calculated from (7) and equals 2
−1 (1 + t(1 − y)(1 − r))2 + 4ty(1 − r) + 1 + t(1 − y)(1 − r) .
Tables 12–14 show a good fit between experimental average values of h(t) and limit values calculated from the dispersion equation (7). The good fit keeps on even for r = 0.9. Theoretical
FUNCTION h(t) AND THE INACCURACY ω (t)
297
Table 12 : Function h(0.5): N (0, Σ), N = 10, s = 100
r=0 r = 0.2 r = 0.4 r = 0.6 r = 0.8 r = 0.9
b h(t)
r=1 h(t)
ˆ ω (t)
b h(t)
y = 0.5 h(t)
ˆ ω (t)
0.755 ± 0.023 0.769 ± 0.022 0.791 ± 0.019 0.820 ± 0.017 0.861 ± 0.012 0.888 ± 0.009
0.732 0.766 0.805 0.854 0.916 0.954
0.03 ± 0.02 0.05 ± 0.02 0.07 ± 0.02 0.11 ± 0.02 0.16 ± 0.01 0.20 ± 0.01
0.723 ± 0.039 0.766 ± 0.035 0.759 ± 0.033 0.777 ± 0.031 0.819 ± 0.023 0.838 ± 0.019
0.702 0.742 0.788 0.844 0.913 0.953
0.03 ± 0.04 0.07 ± 0.04 0.07 ± 0.04 0.09 ± 0.03 0.13 ± 0.02 0.16 ± 0.02
Table 13 : Function h(1); N (0, Σ), N = 10, s = 100
r=0 r = 0.2 r = 0.4 r = 0.6 r = 0.8 r = 0.9
b h(t)
y=1 h(t)
ˆ ω (t)
b h(t)
y = 0.5 h(t)
ˆ ω (t)
0.652 ± 0.030 0.667 ± 0.028 0.696 ± 0.023 0.743 ± 0.019 0.805 ± 0.016 0.850 ± 0.010
0.618 0.656 0.703 0.766 0.854 0.916
0.05 ± 0.03 0.06 ± 0.03 0.11 ± 0.02 0.16 ± 0.02 0.25 ± 0.02 0.31 ± 0.01
0.595 ± 0.044 0.654 ± 0.043 0.649 ± 0.037 0.677 ± 0.042 0.746 ± 0.026 0.786 ± 0.020
0.562 0.608 0.667 0.742 0.844 0.913
0.05 ± 0.04 0.11 ± 0.04 0.10 ± 0.04 0.14 ± 0.04 0.21 ± 0.03 0.26 ± 0.02
Table 14 : Function h(2); N (0, Σ), N = 10, s = 100
r=0 r = 0.2 r = 0.4 r = 0.6 r = 0.8 r = 0.9
b h(t)
y=1 h(t)
ˆ ω (t)
b h(t)
y = 0.5 h(t)
ˆ ω (t)
0.543 ± 0.026 0.558 ± 0.030 0.593 ± 0.030 0.652 ± 0.025 0.732 ± 0.019 0.801 ± 0.016
0.500 0.538 0.587 0.656 0.766 0.854
0.07 ± 0.03 0.09 ± 0.03 0.14 ± 0.03 0.22 ± 0.03 0.33 ± 0.02 0.42 ± 0.02
0.457 ± 0.044 0.541 ± 0.040 0.533 ± 0.039 0.559 ± 0.036 0.652 ± 0.034 0.719 ± 0.024
0.414 0.461 0.523 0.608 0.742 0.844
0.06 ± 0.05 0.15 ± 0.04 0.14 ± 0.04 0.17 ± 0.04 0.27 ± 0.03 0.36 ± 0.03
upper estimates of ω(t) are strongly overstated. For large matrices of the form (11), the fourth maximum moment M ≈ 3r2 n2 , and theoretical upper estimate increases n2 times. These estimates are proportional to powers of M , whereas Tables 12–14 show the √ linear increase of ω(t) with r or r. Thus, the basic limit spectral equation (7) remains also valid for strongly coupled variables (up to r = 0.9).
298 EXPERIMENTAL INVESTIGATION OF SPECTRAL FUNCTIONS
Spectra of sample covariance matrices. Distribution x ∼ N(0, I) Sample covariance matrices C and their spectra were simulated. Table 15 shows maximum eigenvalues of C together with mean square deviations and the deviation of the averaged value from the theoretical limit value λmax = 4. In the last row, the empirical regularity obtained by the minimum square method is shown. In Table 16, the sample mean and the mean square deviation is presented for the uniform norm of the difference F − F of functions F(u) and F (u), where F(u) was calculated by (8) and F (u) is the theoretical density. Table 17 presents sample mean Table 15 : Maximum eigenvalues of C; y = 1, N (0, I), s = 100 N =2
N =4
N =6
N = 10
N = 20
N = 50
λmax 0.799 ± 0.806 2.32 ± 1.03 2.46 ± 0.66 3.05 ± 0.56 3.35 ± 0.35 3.70 ± 0.23 3.20 1.68 1.54 0.946 0.655 0.298 4 − λmax
Empirical law: 4 − λmax ≈ 5.08N −0.713 Table 16 : The difference of F(u) and F (u): N (0, I), y = 1, s = 100 N =2 N =4 N =6 N = 10 N = 20 N = 50
Fb − F : 1
Fb − F : 2
Fb − F
0.607 0.438 0.254 0.213 0.0862 0.0562
0.500 0.250 0.223 0.218 0.0640 0.0539
0.596 ± 0.137 0.356 ± 0.081 0.251 ± 0.057 0.174 ± 0.041 0.0970 ± 0.0169 0.0458 ± 0.0076
Empirical law: F − F ≈ 1.060N −0.799 Table 17 : Local difference F(1) − F (1); N (0, I), y = 1, s = 100 Fb(1)
F (1) Fb(1) − F (1)
N =4
N = 10
N = 20
N = 50
0.683 ± 0.169 0.609 0.0735
0.643 ± 0.062 0.609 0.0340
0.632 ± 0.034 0.609 0.0230
0.616 ± 0.015 0.609 0.00680
Empirical law: F(1) − F (t) ≈ 0.370N −0.996
GIRKO’s G2 ESTIMATOR
299
Table 18 : Girko’s G estimator of the function η(t): N (0, I), s = 100
N =2 N =4 N =6 N = 10 N = 20 N = 50 η(t)
y=1 t=1
y=1 t=2
y = 1/2 t=1
y = 1/2 t=2
0.786 ± 0.114 0.630 ± 0.087 0.612 ± 0.053 0.564 ± 0.039 0.529 ± 0.022 0.514 ± 0.009 0.500
0.683 ± 0.131 0.537 ± 0.095 0.470 ± 0.068 0.412 ± 0.051 0.379 ± 0.025 0.351 ± 0.011 0.333
0.738 ± 0.265 0.608 ± 0.129 0.571 ± 0.095 0.541 ± 0.051 0.520 ± 0.026 0.508 ± 0.012 0.500
0.571 ± 0.309 0.462 ± 0.136 0.430 ± 0.099 0.386 ± 0.071 0.359 ± 0.027 0.342 ± 0.012 0.333
of the function F(1) and its mean square deviation calculated in a series of experiments. In the last row, the empiric formulas are shown fitting the data from Tables 16 and 17. One may see that the disagreement with the theoretical limit formula (7) decreases proportionally to N −0.8 and N −1.0 and is approximately 0.04 by the uniform norm of the difference, and 0.007 for the function F (t) at the point u = 1. This fact indicates the lessening of the agreement at the endpoints of the spectra.
Girko’s G 2 estimator To estimate the spectral function η(t) = n−1 tr(I + tΣ)−1 V L Girko proposed (see [25], Chapter 5) the statistics η(t) = h(θ), where θ is a root of the equation t = θ(1 − y + y h(θ)), and y = n/N. It is proved that this equation is solvable for t > 0 and defines a function that converges almost surely to η(t) as n → ∞ and n/N → λ. Experimental investigation of this estimator is shown in Table 18. The empirical function η(t) along with its mean square deviation was calculated from spectra of matrices C by averaging over s = 100 experiments. For comparison, we also show the theoretical value η(t) = (1 + t)−1 . In the last row, the empirical laws are shown for the deviation η(t) from η(t). This table demonstrates the convergence of G estimator to η(t) with the inaccuracy O(N −1 ).
300 EXPERIMENTAL INVESTIGATION OF SPECTRAL FUNCTIONS
Conclusions The numerical experiments show first that spectral functions of sample covariance matrices of large dimension converge as n → ∞ and n/N → λ, where n is the dimension of observations and N is sample size. The empiric variance of the function hn (t) = n−1 tr(I + tC)−1 decreases by the law near to n−1 N −1 as predicted by the theoretical estimate [12]. The theoretical upper estimate of the function hn (t) variance by Theorem 1 shows only the decrease by the law N −1 , which seems to be related to taking no account of the independence of the observation vector components. The stronger theoretical estimation n−1 N −1 of the asymptotic variance proves to be yet excessive by factor 5–10. In all experiments, we observed the convergence of h(t) to the theoretical value h(t) defined by the dispersion equation (7). The difference between h(t) and averaged values of h(t) decreases approximately as N −1 . The theoretical upper estimate of the inaccuracy of asymptotic formulas found in Chapter 3 proves to be substantially overstated (by hundreds times). A more precise upper estimate of this inaccuracy in [12] proves to be overstated by two to four times. The maximum invariant fourth momentum (1) increases as n2 as n → ∞ for strongly coupled variables. The empirical dependence of the accuracy of the asymptotic equation (7) on coupling of variables (Tables 12–14) shows that the theoretical requirements to coupling bounds seem too stringent. The empirical law of dependence of inaccuracy on the correlation coefficient r does not agree with theoretical law for upper estimates of the remainder terms ω(t) and ω1 (t). Experiments show that, for normal populations, the equation (7) holds well even for r ≈ 0.8 when n = N = 50. Thus, methods of estimation of the remainder terms developed in Chapter 3 provide weak upper estimates that require sharpening and revision. It remains not quite clear what minimal restrictions on the dependence of variables are yet necessary for the canonic equations (7) to be accurate. Our experiments allow to make a general conclusion that the limit dispersion equations for large sample covariance matrices prove to be sufficiently accurate even for not large n and N : the average inaccuracy in different experiments decreases from 0.05
CONCLUSIONS
301
for n = N = 10 to 0.01 for n = N = 50. This result also keeps for nonstandard normal distribution and for other distributions including discrete distributions, and remains true for some cases of strongly coupled variables. In Section 3.3 the Normal Evaluation Principle was offered stating that standard quality functionals of regularized multivariate statistical procedures are only weakly depending on moments of variables higher than the second and thus approximately are the same as for normal distributions. This property is observed in the experiment with binomial distribution (Tables 9 and 10) and is confirmed by a good fit between experimental and theoretical data. Summarizing we may state that results of experiments substantiate the multiparametric approach and show that improved methods constructed in the above are expected to have the accuracy well acceptable for applications even for not large n and N (possibly, even for n = N = 5 − 10).
This page intentionally left blank
REFERENCES 1. D. J. Aigner and G. G. Judge. Application of pre-test and Stein estimation to economic data. Econometrica, 1977, pp. 1279–1288. 2. S. A. Aivazian, I. S. Yenyukov, and L. D. Meshalkin. Applied Statistics (in Russian), vol. 2: Investigation of Dependencies. Finansy i Statistika, Moscow, 1985. 3. S. A. Aivazian, V. M. Buchstaber, I. S. Yenyukov, and L. D. Meshalkin. Applied Statistics (in Russian). vol. 3: Classification and Reduction of Dimension. Finansy i Statistika, Moscow, 1989. 4. T. W. Anderson and R. R. Bahadur. Classification into multivariate normal distributions with different covariance matrices. Ann. Math. Stat. 1962, vol. 32, 2. 5. T. Anderson. An Introduction to Multivariate Statistical Analysis. J. Wiley, NY, 1958. 6. L. V. Arkharov, Yu. N. Blagoveshchenskii, and A. D. Deev. Limit theorems of multivariate statistics. Theory Probab. Appl. 1971, vol. 16, 3. 7. L. V. Arkharov. Limit theorems for characteristic roots of sample covariance matrices. Sov. Math. Dokl., 1971, vol. 12, pp. 1206–1209. 8. Z. D. Bai. Convergence rate of the expected spectral distribution of large random matrices. Ann. Probab. 1993, vol. 21, pp. 649–672. 9. Z. D. Bai and J. W. Silverstein. No eigenvalues outside the support of the limiting spectral distribution of largedimensional sample covariance matrices. Ann. Probab. 1998, vol. 26, pp. 316–345. 10. A. J. Baranchik. A family minimax estimators of the mean of a multivariate normal distribution. Ann. Math. Stat., 1970, vol. 41, pp. 642–645. 303
304
REFERENCES
11. T. A. Baranova. On asymptotic residual for a generalized ridge regression. In: Fifth International Conference on Probability and Statistics, Vilnius State University, Vilnius, 1989, vol. 3. 12. T. A. Baranova and I. F. Sentiureva. On the accuracy of asymptotic expressions for spectral functions and the resolvent of sample covariance matrices. In: Random Analysis, Moscow State University, Moscow, 1987, pp. 17–24. 13. A. C. Brandwein and W. E. Strawderman. Stein estimation: The spherically symmetric case. Stat. Sci., 1990, vol. 5, pp. 356–369. 14. M. L. Clevenson and J. V. Zidek. Simultaneous estimation of the means for independent Poisson law. J. Am. Stat. Assoc., 1975, vol. 70, pp. 698–705. 15. A. Cohen. Improved confidence intervals for the variance of a normal distribution. J. Am. Stat. Assoc., 1972, vol. 67, pp. 382–387. 16. M. J. Daniels and R. E. Kass. Shrinkage estimators for covariance matrices. Biometrics, 2001, vol. 57, pp. 1173–1184. 17. A. D. Deev. Representation of statistics of discriminant analysis, and asymptotic expansion when space dimensions are comparable with sample size. Sov. Math. Dokl., 1970, vol. 11, pp. 1547–1550. 18. A. D. Deev. A discriminant function constructed from independent blocks. Engrg. Cybern., 1974, vol. 12, pp. 153–156. 19. M. H. DeGroot. Optimal Statistical Decisions. McGrawHill, NY, 1970. 20. R. A. Fischer. The use of multiple measurements in taxonomic problems. Ann. Eugen. 7, 1936, pt. 2, pp. 179–188. Also in Contribution to Mathematical Statistics. J. Wiley, NY, 1950. 21. V. L. Girko. An introduction to general statistical analysis. Theory Probab. Appl. 1987, vol. 32, pp. 229–242. 22. V. L. Girko. Spectral Theory of Random Matrices (in Russian). Nauka, Moscow, 1988. 23. V. L. Girko. General equation for the eigenvalues of empirical covariance matrices. Random Operators and Stochastic Equations, 1994, vol. 2, pt. 1: pp. 13–24; pt. 2: pp. 175–188. 24. V. L. Girko. Theory of Random Determinants. Kluwer Academic Publishers, Dordrecht, 1990.
REFERENCES
305
25. V. L. Girko. Statistical Analysis of Observations of Increasing Dimension. Kluwer Academic Publishers, Dordrecht, 1995. 26. V. L. Girko. Theory of Stochastic Equations. Vols. 1 and 2. Kluwer Academic Publishers, Dordrecht/Boston/London, 2001. 27. C. Goutis and G. Casella. Improved invariant confidence intervals for a normal variance. Ann. Stat., 1991, vol. 19, pp. 2015–2031. 28. E. Green and W. E. Strawderman. A James–Stein type estimator for combining unbiased and possibly unbiased estimators. J. Am. Stat. Assoc., 1991, vol. 86, pp. 1001–1006. 29. M. H. J. Gruber. Improving Efficiency by Shrinkage. Marcel Dekker Inc., NY, 1998. 30. Das Gupta and B. K. Sinha. A new general interpretation of the Stein estimators and how it adapts: Application. J. Stat. Plan. Inference, 1999, vol. 75, pp. 247–268. 31. Y. Y. Guo and N. Pal. A sequence of improvements over the James–Stein estimator. J. Multivariate. Anal., 1992, vol. 42, pp. 302–312. 32. K. Hoffmann. Improved estimation of distribution parameters: Stein-type estimators. Teubner, Leipzig 1992 (Teubner-Texte zur Mathematik, Bd 128). 33. K. Hoffmann. Stein estimation—a review. Stat. Papers, 2000, vol. 41, pp. 127–158. 34. P. Hebel, R. Faivre, B. Goffinat, and D. Wallach. Shrinkage estimators applied to prediction of French winter wheat yields. Biometrics, 1993, vol. 40, pp. 281–293. 35. I. A. Ibragimov and R. Z. Khas’minski. Statistical Estimation. Asymptotical Theory. Springer, Berlin, 1981. 36. W. James, C. Stein, Estimation with the quadratic loss. Proc. Fourth Berkley Symposium on Mathematical Statistics and Probability, 1960, vol. 2, pp. 361–379. 37. G. Judge and M. E. Bock. The Statistical Implication of Pretest and Stein Rule Estimators in Econometrics. North Holland, Amsterdam, 1978. 38. T. Kubokawa. An approach to improving the James–Stein estimator. J. Multivariate Anal., 1991, vol. 36, pp. 121–126.
306
REFERENCES
39. T. Kubokawa. Shrinkage and modification techniques in estimation. Commun. Stat. Theory Appl., 1999, vol. 28, pp. 613–650. 40. T. Kubokawa and M. S. Srivastava. Estimating the covariance matrix: A new approach. J. Multivariate Anal., 2003, vol. 86, pp. 28–47. 41. A. N. Kolmogorov. Problems of the Probability Theory (in Russian). Probab. Theory Appl., 1993, vol. 38, 2. 42. J. M. Maata and G. Casella. Development in decisiontheoretical variance estimation. Stat. Sci., 1990, vol. 5, pp. 90–120. 43. V. A. Marchenko and L. A. Pastur. Distribution of eigenvalues in some sets of random matrices. Math. USSR Sbornik, 1967, vol. 1, pp. 457–483. 44. A. S. Mechenov. Pseudosoltions of Linear Functional Equations. Springer Verlag, NY LLC, 2005. 45. L. D. Meshalkin and V. I. Serdobolskii. Errors in the classification of multivariate observations. Theory Probab. Appl., 1978, vol. 23, pp. 741–750. 46. L. D. Meshalkin. Assignement of numeric values to nominal variables (in Russian). Statistical Problems of Control, (Vilnius), 1976, vol. 14, pp. 49–55. 47. L. D. Meshalkin. The increasing dimension asymptotics in multivariate analysis. In: The First World Congress of the Bernoulli Society, vol. 1, Nauka, Moscow, 1986, pp. 197–199. 48. N. Pal and B. K. Sinha. Estimation of a common mean of a several multivariate normal populations: a review. Far East J. Math. Sci. 1996, Special Volume, pt. 1, pp. 97–110. 49. N. Pal and A. Eflesi. Improved estimation of a multivariate normal mean vector and the dispersion matrix: how one effects the other Sankhya Series A, 1996, vol. 57, pp. 267–286. 50. L. A. Pastur. Spectra of random self-adjoint operators. Russ. Math. Surv., 1969, vol. 28, 1–67. 51. Sh. Raudys. Results in statistical discriminant analysis: a review of the former Soviet Union literature. J. Multivariate Anal., 2004, vol. 89, 1, pp. 1–35. 52. G. G. Roussas. Contiguity of Probability Measures. Cambridge University Press, Cambridge, 1972.
REFERENCES
307
53. S. K. Sarkar. Stein-type improvements of confidence intervals for the generalized variance. Ann. Inst. Stat. Math., 1991, vol. 43, pp. 369–375. 54. A. V. Serdobolskii. The asymptotically optimal Bayes solution to systems of a large number of linear algebraic equations. Second World Congress of the Bernoulli Society, Uppsala, Sweden, 1990, CP-387, pp. 177–178. 55. A. V. Serdobolskii. Solution of empirical SLAE unimprovable in the mean (in Russian). Rev. Appl. Ind. Math., 2001, vol. 8, 1, pp. 321–326. 56. A. V. Serdobolskii. Unimprovable solution to systems of empirical linear algebraic equations. Stat. Probab. Lett., 2002, vol. 60, 1–6, pp. 1–6, article full text PDF (95.9 KB). 57. A. V. Serdobolskii and V. I. Serdobolskii. Estimation of the solution of high-order random systems of linear algebraic equations. J. Math. Sci., 2004, vol. 119, 3, pp. 315–320. 58. A. V. Serdobolskii and V. I. Serdobolskii. Estimation of solutions of random sets of simultaneous linear algebraic equations of high order. J. Math. Sci., 2005, vol. 126, 1, pp. 961–967. 59. V. I. Serdobolskii. On classification errors induced by sampling. Theory Probab. Appl., 1979, vol. 24, pp. 130–144. 60. V. I. Serdobolskii. Discriminant Analysis of Observations of High Dimension (in Russian). Published by Scientific Council on the Joint Problem “Cybernetics”, USSR Academy of Sciences, Moscow, 1979, pp. 1–57. 61. V. I. Serdobolskii. Discriminant analysis for a large number of variables. Sov. Math. Dokl., 1980, vol. 22, pp. 314–319. 62. V. I. Serdobolskii. The effect of weighting of variables in the discriminant function. In: Third International Conference on Probability Theory and Mathematical Statistics, Vilnius State University, Vilnius, 1981, vol. 2, pp. 147–148. 63. V. I. Serdobolskii. On minimum error probability in discriminant analysis. Sov. Math. Dokl., 1983, vol. 27, pp. 720–725. 64. V. I. Serdobolskii. On estimation of the expectation values under increasing dimension. Theory Probab. Appl., 1984, vol. 29, pp. 170–171.
308
REFERENCES
65. V. I. Serdobolskii. The resolvent and spectral functions of sample covariance matrices of increasing dimension. Russ. Math. Surv., 1985, 40:2, pp. 232–233. 66. V. I. Serdobolskii and A. V. Serdobolskii. Asymptotically optimal regularization for the solution of systems of linear algebraic equations with random coefficients, Moscow State University Vestnik, “Computational Mathematics and Cybernetics,” Moscow, 1991, vol. 15, 2, pp. 31–36. 67. V. I. Serdobolskii. Spectral properties of sample covariance matrices. Theory Probab. Appl., 1995, vol. 40, p. 777. 68. V. I. Serdobolskii. Main part of the quadratic risk for a class of essentially multivariate regressions. In: International Conference on Asymptotic Methods in Probability Theory and Mathematical Statistics, St. Peterburg University, St. Peterburg, 1998, pp. 247–250. 69. V. I. Serdobolskii. Theory of essentially multivariate statistical analysis. Russ. Math. Surv., 1999, vol. 54, 2, pp. 351–379. 70. V. I. Serdobolskii. Constructive foundations of randomness. In: Foundations of Probability and Physics, World Scientific, New Jersey London, 2000, pp. 335–349. 71. V. I. Serdobolskii. Multivariate Statistical Analysis. A HighDimensional Approach. Kluwer Academic Publishers, Dordrecht, 2000. 72. V. I. Serdobolskii. Normal model for distribution free multivariate analysis. Stat. Probab. Lett., 2000, vol. 48, pp. 353–360. See also: V. I. Serdobolskii. Normalization in estimation of the multivariate procedure quality. Sov. Math. Dokl., 1995, v. 343. 73. V. I. Serdobolskii and A. I. Glusker. Spectra of largedimensional sample covariance matrices, preprint 2002, 1–19 at: www.mathpreprints.com/math/Preprint/vadim/20020622/1 74. V. I. Serdobolskii. Estimators shrinkage to reduce the quadratic risk. Dokl. Math., ISSN 1064-5624, 2003, vol. 67, 2, pp. 196–202. 75. V. I. Serdobolskii. Matrix shrinkage of high-dimensional expectation vectors. J. Multivariate Anal., 2005, vol. 92, pp. 281–297. 76. A. N. Shiryaev. Probability. Springer Verlag, NY Berlin, 1984.
REFERENCES
309
77. J. W. Silverstein and Z. D. Bai. On the empirical distributions of eigenvalues of a class of large-dimensional random matrices. J. Multivariate Anal. 1995, vol. 54, pp. 175–192. 78. J. W. Silverstein and S. I. Choi. Analysis of the limiting spectra of large dimensional random matrices. J. Multivariate Anal. 1995, vol. 54, pp. 295–309. 79. C. Stein. Inadmissibility of the usual estimator for the mean of a multivariate normal distribution, Proc. of the Third Berkeley Symposium, Math. Statist. Probab., 1956, vol. 1, pp. 197–206. 80. C. Stein. Inadmissibility of the usual estimator for the variance of a normal distribution with unknown mean. Ann. Inst. Stat. Math., 1964, vol. 16, pp. 155–160. 81. C. Stein. Estimation of the mean of a multivariate normal distribution. Ann. Stat., 1981, vol. 9, pp. 1135–1151. 82. V. S. Stepanov. Some Properties of High Dimension Linear Discriminant Analysis (in Russian), Author’s Summary. Moscow States University, Moscow, 1987. 83. A. N. Tikhonov and V. Ya. Arsenin. Solution of Ill-Posed Problem. J. Wiley, NY, 1977. 84. H. D. Vinod. Improved Stein-rule estimator for regression problems. J. Econom., 1980, vol. 12, pp. 143–150. 85. A. Wald. On a statistical problem arising in the classification of an individual in one of two groups. Ann. Math. Stat., 1944, vol. 15, pp. 147–163. 86. A. Wald. Estimation of a parameter when the number of unknown parameters increases indefinitely with the number of observations. Ann. Math. Stat., 1948, vol. 19, pp. 220–227. 87. A. Wald. Statistical Decision Functions. J. Wiley, NY, 1970. 88. E. P. Wigner. On the distribution of roots of certain symmetric matrices. Ann. Math., 1958, vol. 67, pp. 325–327. 89. E. P. Wigner. Random matrices in physics. Ann. Inst. Math. Stat., 1967, vol. 9, pp. 1–23. 90. Q. Yin and P. R. Krishnaiah. On limit of the largest eigenvalue of the large-dimensional sample covariance matrix. Probab. Theory Relat. Fields, 1988, vol. 78, pp. 509–521. 91. S. Zachs. The Theory of Statistical Inference. J. Wiley, NY/London/Sydney/Toronto, 1971.
This page intentionally left blank
INDEX admissible estimators 3, 4,
classification errors 10, 195, 220, 233, 236 classes of normal evaluable functionals 21, 33, 114, 118, 181 component-wise estimation 9, 56, 57, 59, 147 consistent estimation and procedures 3, 11, 17, 22, 132, 139, 239 contigual distributions 10, 196 counting function 12 contribution of variables to discrimination 11, 193, 200, 209, 218–219, 230, 269, 281, 283 convergence of spectral functions 12–13, 45, 54, 71–74, 91, 106–107, 115, 131–136, 142–146, 149–152, 159, 165, 251, 255, 257, 272, 279 correlation coefficient 55, 82, 98, 168, 190 A. D. Deev 10, 11, 199 degenerate estimators 3, 17–18, 46, 129–130, 132, 167–168, 191, 220
21 analytical properties of spectral functions 75, 89, 95, 107, 111, 154, 156, 161, 206, 247, 266, 272, 279 asymptotically unimprovable estimators, procedures 12, 20, 42, 105, 115, 130, 133, 138, 147, 159, 166, 192, 195, 209, 221, 230, 234–235, 240, 246–247, 261, 264, 266 biased estimators in multiparametric problems 7, 9, 33, 35, 38, 42, 53, 61, 83, 200–201, 221, 246, 257 Yu. N. Blagoveshchenskii 10 best-in-the-limit procedures 127, 193, 231 boundaries of limit spectra 9, 14 41, 88, 112, 212, 254, 257 boundedly dependent variables 29, 30, 124 Burkholder 77, 249 canonical stochastic equations 14
311
312
discriminant analysis 2, 7, 10–11, 19, 43, 132, 193, 194, 220 discrimination error probability 10, 11, 43, 193, 203, 207, 210–213, 224, 231 dispersion equations 14, 16–18, 74, 95, 98, 102, 112, 127, 239, 247, 252, 264 dominating over a class 21, 41–44, 56, 166 effective Mahalanobis distance 10, 198, 208 empirical linear equations 19, 239 ε-dominating estimators and procedures 166 ε-normal evaluation 115, 119, 120–123 errors in discriminant analysis 10–11, 193, 231 Esseen inequality 202 essentially multivariate approach 4, 17, 45, 71, 97, 110, 220 estimation multiparametric 4, 12, 17–18, 33, 46, 51, 56, 71, 195 estimators of large dimension 3, 9, 11, 32, 46, 59, 69, 166, 191–192 extremum problems in multiparametric statistics 17, 18, 125, 127, 139, 199, 239, 245, 247 R. Fisher 2, 10, 194, 235 Fredholm integral equations 234
INDEX
free from distributions 125, 147 functionals normally evaluable 29, 77, 99, 114, 117–119, 122–125, 157, 169, 186, 234, 247 fundamental problem of statistics 21 generalized classes of solutions 9, 14, 19, 169, 195, 223 V. L. Girko 13, 14, 71, 110, 246, 272 Gram matrices 13, 14, 16, 75, 98, 171, 239, 247, 248, 251 higher moments 68 high-dimensional statistical analysis 11, 45, 245 Hermitian matrices 76, 83, 282 H¨older condition 94–96, 109, 134, 136, 145–146, 155, 225, 253, 259, 280–281 illconditioned problem 221, 234 inaccuracy of asymptotical methods 167 increasing dimension asymptotics 11 independent variables, components 6, 10–14, 16, 23, 147, 174, 195, 198, 202, 214, 230, 248–249, 255 infinite-dimensional problems 9, 16, 45, 47, 50–51, 97–98, 101, 103, 105
INDEX
information matrix 8, 129, 193 informational noise 215 inverse covariance matrix, estimators 1, 3, 19, 118, 119, 129–130, 131, 133, 221, 234, 262 inversion of sample covariance matrices 3, 14, 16, 129 James-Stein estimator 5, 8–9, 24, 26, 32, 34, 55 Kolmogorov asymptotics 1, 10, 22, 33, 45, 52, 72, 130, 147 leading parts of functionals 11, 114, 116, 127, 169, 178, 185–187, 191, 193, 247 linear procedures, equations 3, 16, 17, 114, 129, 132, 240, 244, 246, 254 lower and upper bounds of limit spectra 32, 34–35, 274 Mahalanobis distance 10, 198, 208, 220, 230 V. A. Marchenko 12, 71 martingale lemma 77, 99 matrix shrinkage 147, 190 measure of normalization 2, 7, 16, 30, 33, 67, 75, 115, 133, 147, 167, 220, 279 L. D. Meshalkin 10, 193 minimum risk, of errors 4, 7, 9, 19, 23, 26, 29, 46, 129, 131, 153, 167, 189, 191, 198, 212, 216, 219–221, 224, 230, 240, 244, 255–256, 258
313
multiparametric statistics 1, 4, 12, 15–19, 21–22, 33, 46, 50–51, 56, 59, 69–71, 124–125, 195, 222 multivariate statistical procedures 3–4, 11, 16, 17, 18, 45, 97, 114, 119, 125, 127, 127–129, 132, 167, 220, 245 Normal Evaluation Principle 16, 125, 167, 194, 220, 234 normally evaluable functions 195, 235, 242, 248, 255 normed trace of the resolvents 71, 241, 247 optimal decisions 1, 11, 17–18, 21, 69, 129, 139, 143, 193, 199, 205–206, 244, 264 parametric families of distributions 1, 3, 11, 15–17, 21–22, 33, 35, 46, 51, 56, 59, 69–71, 124 167, 193–194, 230, 246–247 L. A. Pastur 12, 71–72, 96 “plug-in” procedures 24, 45, 168, 221 pooled sample covariance matrices 193, 221 Principle of normal evaluation 16, 125, 167, 194, 220, 234 pseudosolution of random linear equations 20, 239, 240, 246–247, 254, 258, 262
314
purposefulness of a selection of variables 43, 218 quadratic risk 4–8, 19, 20, 22–24, 26–27, 31–38, 41–46, 50, 54–57, 61, 64, 69, 130, 148, 153, 154, 167, 186–187, 190–192, 231, 239, 240, 244, 246, 254, 258, 261–266 quality functions 3, 8, 16–18, 114, 117, 119, 124, 125, 127, 167, 192, 235 recognition problem 1, 193 regression multiparametric case 9, 18, 132, 167, 169, 181, 190, 244 regularity conditions 89, 167 regularization 16, 19, 69, 129, 191, 192, 234, 248, 262, 264 regularized multivariate procedures 16, 17, 29, 30, 61, 63, 114, 118, 119, 124, 125, 129, 140, 167, 239, 245, 254, 256, 262, 264 reliable estimation 4, 17, 124, 186 remainder terms of asymptotics 18, 32, 36, 56, 69, 75, 82, 83, 100, 102, 105, 106, 111, 116, 125, 169, 174, 179, 180, 191, 272, 278 resolvent of sample covariance matrices 16, 71, 75, 76, 83, 115, 116, 118, 149, 169, 224, 247, 248
INDEX
ρ-model of limit spectra 95, 136, 140, 154, 158, 230, 233 ridge parameter and ridge estimator 16, 119, 129, 133, 137, 153, 169, 190, 221, 234, 244–248, 256, 262, 263 risk function, minimization 3, 4–9, 21–26, 33–38, 41–44, 56, 60–61, 63, 69, 82, 133, 135, 138, 140, 154–159 167–168, 186–187, 231, 240, 244, 245, 256–259, 261–266 rotation invariant functional 114, 117, 269 RSS (residual sum of squares) 167, 190 rule of classification 8, 10, 11, 17, 195, 209, 220, 233, 236 sample covariance matrices 3, 14–17, 71–72, 75, 83–87, 97–98, 103, 105, 115, 116, 117, 124, 129–130, 134, 147–148, 171, 221, 230, 253 selection of variables in the discriminant analysis 195, 209, 211, 212, 213, 221 semicircle law 12 sequence of problems of the increasing dimension 10, 41, 43, 88, 90, 105, 130, 131, 134, 148, 166, 186, 209, 215, 222, 246, 248–249, 254 shrinkage estimators 4, 23, 24, 32, 44, 45, 190, 245, 247, 256, 262
INDEX
shrinkage procedure 4–8, 19, 20, 22, 24, 29, 32–33, 41–43, 50, 54–56, 130, 133, 147, 158, 192, 245, 248, 256, 262, 264–265 shrinkage-ridge estimators 133, 137, 190, 247, 256, 265 SLAE-empirical systems of linear algebraic equation 19, 239, 246, 254, 264, 265 smoothing of estimators 138, 139, 159 spectral theory of large sample covariance matrices 12, 13–17 71, 72, 75, 76, 83–84, 88, 97–98, 103, 116–117, 124–125, 129, 134, 149, 170 spectra of random matrices 12, 13, 14, 88, 105, 110–112, 129, 248, 250, 254, 272 stable statistical procedures 3, 17, 29, 129, 139, 209, 221, 240 standard estimators 8, 22, 131, 153, 158, 158, 159 statistics multiparametric 1–4, 12, 15, 16, 17, 18, 21–22, 33, 46, 50–51, 56, 59, 69, 71, 124–125, 195, 247, 261
315
Stein estimators 5, 8, 9, 24, 26, 29–32, 34, 43, 45, 47, 55, 190, 245 Stilties integral equation 13, 19, 111 system of empirical linear equations 19, 49, 190, 193, 239, 241, 244, 246, 254, 257 threshold of a selection 12, 32, 195, 198, 203, 211, 212, 213, 219, 220, 229 unbiased estimators and their shrinkage 6, 9, 33, 35, 38, 42, 55, 61 unimprovable procedures 4, 12, 17–19, 21, 50, 119, 125, 127, 129, 133, 167, 194, 209, 220, 221, 234, 240, 257, 266 upper bounds of the remainder terms 32, 36, 69, 75, 105, 116, 125, 139, 169, 180, 278 Wald 10, 21, 193, 221 weighting of variables in discriminant function 11, 130, 147, 197–198, 200, 203, 209 wide class of distributions, methods 29, 50–52, 87, 114, 125, 167 Wigner spectrum 12 Wishart distribution 14, 241, 245, 253
This page intentionally left blank