Probability and its Applications Published in association with the Applied Probability Trust
Editors: S. Asmussen, J. Gani, P. Jagers, T.G. Kurtz
Probability and its Applications Azencott et al.: Series of Irregular Observations. Forecasting and Model Building. 1986 Bass: Diffusions and Elliptic Operators. 1997 Bass: Probabilistic Techniques in Analysis. 1995 Berglund/Gentz: Noise-Induced Phenomena in Slow-Fast Dynamical Systems: A Sample-Paths Approach. 2006 Biagini/Hu/Øksendal/Zhang: Stochastic Calculus for Fractional Brownian Motion and Applications. 2008 Chen: Eigenvalues, Inequalities and Ergodic Theory. 2005 Costa/Fragoso/Marques: Discrete-Time Markov Jump Linear Systems. 2005 Daley/Vere-Jones: An Introduction to the Theory of Point Processes I: Elementary Theory and Methods. 2nd ed. 2003, corr. 2nd printing 2005 Daley/Vere-Jones: An Introduction to the Theory of Point Processes II: General Theory and Structure. 2nd ed. 2008 de la Peña/Gine: Decoupling: From Dependence to Independence, Randomly Stopped Processes, U-Statistics and Processes, Martingales and Beyond. 1999 de la Peña/Lai/Shao: Self-Normalized Processes. 2009 Del Moral: Feynman-Kac Formulae. Genealogical and Interacting Particle Systems with Applications. 2004 Durrett: Probability Models for DNA Sequence Evolution. 2002, 2nd ed. 2008 Ethier: The Doctrine of Chances. 2010 Feng: The Poisson–Dirichlet Distribution and Related Topics. 2010 Galambos/Simonelli: Bonferroni-Type Inequalities with Equations. 1996 Gani (ed.): The Craft of Probabilistic Modelling. A Collection of Personal Accounts. 1986 Gut: Stopped RandomWalks. Limit Theorems and Applications. 1987 Guyon: Random Fields on a Network. Modeling, Statistics and Applications. 1995 Kallenberg: Foundations of Modern Probability. 1997, 2nd ed. 2002 Kallenberg: Probabilistic Symmetries and Invariance Principles. 2005 Last/Brandt: Marked Point Processes on the Real Line. 1995 Molchanov: Theory of Random Sets. 2005 Nualart: The Malliavin Calculus and Related Topics, 1995, 2nd ed. 2006 Rachev/Rueschendorf: Mass Transportation Problems. Volume I: Theory and Volume II: Applications. 1998 Resnick: Extreme Values, Regular Variation and Point Processes. 1987 Schmidli: Stochastic Control in Insurance. 2008 Schneider/Weil: Stochastic and Integral Geometry. 2008 Serfozo: Basics of Applied Stochastic Processes. 2009 Shedler: Regeneration and Networks of Queues. 1986 Silvestrov: Limit Theorems for Randomly Stopped Stochastic Processes. 2004 Thorisson: Coupling, Stationarity and Regeneration. 2000
Shui Feng
The Poisson–Dirichlet Distribution and Related Topics Models and Asymptotic Behaviors
Shui Feng Department of Mathematics and Statistics McMaster University Hamilton, Ontario L8S 4K1 Canada
[email protected] Series Editors: Søren Asmussen Department of Mathematical Sciences Aarhus University Ny Munkegade 8000 Aarhus C Denmark Joe Gani Centre for Mathematics and its Applications Mathematical Sciences Institute Australian National University Canberra, ACT 0200 Australia
[email protected]
Peter Jagers Mathematical Statistics Chalmers University of Technology and Göteborg (Gothenburg) University 412 96 Göteborg Sweden
[email protected] Thomas G. Kurtz Department of Mathematics University of Wisconsin - Madison 480 Lincoln Drive Madison, WI 53706-1388 USA
[email protected]
ISSN 1431-7028 ISBN 978-3-642-11193-8 DOI 10.1007/978-3-642-11194-5 Springer Heidelberg Dordrecht London New York
e-ISBN 978-3-642-11194-5
Library of Congress Control Number: 2010928906 Mathematics Subject Classification (2010): 60J60, 60J70, 92D15, 60F05, 60F10, 60C05 © Springer-Verlag Berlin Heidelberg 2010 This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. Cover design: WMXDesign Printed on acid-free paper Springer is part of Springer Science+Business Media (www.springer.com)
To Brian, Ronnie, and Min
Preface
The Poisson–Dirichlet distribution, a probability on the infinite-dimensional simplex, was introduced by Kingman in 1975. Since then it has found applications in Bayesian statistics, combinatorics, number theory, finance, macroeconomics, physics and, especially, in population genetics. Several books have appeared that contain sections or chapters on the Poisson–Dirichlet distribution. These include, but are not limited to, Aldous [2], Arratia, Barbour and Tavaré [9], Ewens [67], Kingman [127, 130], and Pitman [155]. This book is the first that focuses solely on the Poisson–Dirichlet distribution and some closely related topics. The purposes of this book are to introduce the Poisson–Dirichlet distribution, to study its connections to stochastic dynamics, and to give an up-to-date account of results concerning its various asymptotic behaviors. The book is divided into two parts. Part I, consisting of Chapters 1–6, includes a variety of models involving the Poisson–Dirichlet distribution, and the central scheme is the unification of the Poisson–Dirichlet distribution, the urn structure, the coalescent, and the evolutionary dynamics through the grand particle systems of Donnelly and Kurtz. Part II discusses recent progress in the study of asymptotic behaviors of the Poisson–Dirichlet distribution, including fluctuation theorems and large deviations. The original Poisson–Dirichlet distribution contains one parameter denoted by θ. We will also discuss an extension of this to a two-parameter distribution, where an additional parameter α is needed. Most developments center around the one-parameter Poisson–Dirichlet distribution, with extensions to the two-parameter setting along the way when there is no significant increase in complexity. Complete derivations and proofs are provided for most formulae and theorems. The techniques and methods used in the book are useful in solving other problems and thus will appeal to researchers in a wide variety of subjects. The selection of topics is based mainly on mathematical completeness and connections to population genetics, and is by no means exhaustive. Other topics, although related, are not included because they would take us too far afield to develop at the same level of detail. One could consult Arratia, Barbour and Tavaré [9] for a discussion of general logarithmic combinatorial structures; Barbour, Holst and Janson [11] for Poisson approximation; Durrett [48] and Ewens [67] for comprehen-
sive coverage of mathematical population genetics; Bertoin [12] and Pitman [155] for fragmentation and coagulation processes; and Pitman [155] for connections to combinatorial properties of partitions, excursions, random graphs and forests. References for additional topics, including works on Bayesian statistics, functional inequalities, and multiplicative properties, will be given in the Notes section at the end of every chapter. The intended audience of this book includes researchers and graduate students in population genetics, probability theory, statistics, and stochastic processes. The contents of Chapters 1–6 are suitable for a one-term graduate course on stochastic models in population genetics. The material in the book is largely self-contained and should be accessible to anyone with a knowledge of probability theory and stochastic processes at the level of Durrett [47]. The first chapter reviews several basic models in population genetics including the Wright–Fisher model and the Moran model. The Dirichlet distribution emerges as the reversible measure for the approximating diffusions. The classical relation between gamma random variables and the Dirichlet distribution is discussed. This lays the foundation for the introduction of the Poisson–Dirichlet distribution and for an understanding of the Perkins disintegration theorem, to be discussed in Chapter 5. The second chapter includes various definitions and derivations of the Poisson– Dirichlet distribution. Perman’s formula is used, in combination with the subordinator representation, to derive the finite-dimensional distributions of the Poisson– Dirichlet distribution. An alternative construction of the Poisson–Dirichlet distribution is included using the scale-invariant Poisson process. The GEM distribution appears in the setting of size-biased sampling, and the distribution of a random sample of given size is shown to follow the Ewens sampling formula. Several urn-type models are included to illustrate the relation between the Poisson–Dirichlet distribution, the GEM distribution, and the Ewens sampling formula. The last section is concerned with the properties of the Dirichlet process. In Chapter 3 the focus is on the two-parameter Poisson–Dirichlet distribution, a natural generalization of the Poisson–Dirichlet distribution. The main goal is to generalize several results in Chapter 2 to the two-parameter setting, including the finitedimensional distributions, the Pitman sampling formula, and an urn model. Here, a fundamental difference in the subordinator representation is that the process with independent increments is replaced by a process with exchangeable increments. The coalescent is a mathematical model that traces the ancestry of a sample from a population. It is an effective tool in describing the genealogy of a population. In Chapter 4, the coalescent is defined as a continuous-time Markov chain with values in the set of equivalence relations on the set of positive integers. It is represented through its embedded chain and an independent pure-death Markov chain. The marginal distributions are derived for both the embedded chain and the puredeath Markov chain. Two symmetric diffusion processes, the infinitely-many-neutral-alleles model and the Fleming–Viot process with parent-independent mutation are studied in Chapter 5. The reversible measure of the infinitely-many-neutral-alleles model is shown to be the Poisson–Dirichlet distribution and the reversible measure of the
Fleming–Viot process is the Dirichlet process. The representations of the transition probability functions are obtained for both processes and they involve the pure-death process studied in Chapter 4. It is shown that the Fleming–Viot process with parentindependent mutation can be obtained from a continuous branching process with immigration through normalization and conditioning. These can be viewed as the dynamical analog of the relation between the gamma distribution and the Dirichlet distribution derived in Chapter 1. This chapter also includes a brief discussion of the two-parameter generalization of the infinitely-many-neutral-alleles model. As previously mentioned, the urn structure, the coalescent and the infinitedimensional diffusions discussed so far, are unified in Chapter 6 under one umbrella called the Donnelly–Kurtz particle representation. This is an infinite exchangeable particle system with labels incorporating the genealogy of the population. The Fleming–Viot process in Chapter 5 is the large-sample limit of the empirical processes of the particle system and the Poisson–Dirichlet distribution emerges as a natural link between all of the models in the first six chapters. The material covered in the first six chapters concerns, for the most part, wellknown topics. In the remaining three chapters, our focus shifts to recent work on the asymptotic behaviors of the Poisson–Dirichlet distributions and the Dirichlet processes. In the general two-parameter setting, α corresponds to the stable component, while θ is related to the gamma component. When θ is large, the role of α diminishes and the behavior of the corresponding distributions becomes nonsingular or Gaussian. For small α and θ , the distributions are far away from Gaussian. These cases are more useful in physics and biology. Fluctuation theorems are obtained in Chapter 7 for the Poisson–Dirichlet distribution, the Dirichlet process, and the conditional sampling formulas when θ is large. As expected, the limiting distributions involve the Gumbel distribution, the Brownian bridge and the Gaussian distribution. Chapter 8 discusses large deviations for the Poisson–Dirichlet distributions for both large θ and small θ and α . The large deviation results provide convenient tools for evaluating the roles of natural selection. The large deviations for the Dirichlet processes are the focus of Chapter 9. The explicit forms of the rate functions provide a comparison between standard and Bayesian statistics. They also reveal the role of α as a measurement on the closeness to the large θ limit. Notes included at the end of each chapter give the direct sources of the material in those chapters as well as some remarks. These are not meant to be an historical account of the subjects. The appendices include a brief account of Poisson processes and Poisson random measures, and several basic results of the theory of large deviations. Some material in this book is based on courses given by the author at the summer school of Beijing Normal University between 2006 and 2008. I wish to thank Fengyu Wang for the opportunity to visit the Stochastic Research Center of Beijing Normal University. I also wish to thank Mufa Chen and Zenghu Li for their hospitality during my stay at the Center. Chapters 1–6 have been used in a graduate course given in the Department of Mathematics and Statistics at McMaster Univer-
sity during the academic year 2008–2009. I thank the students in those courses, who have helped me with suggestions and corrections. I wish to express special thanks to Donald A. Dawson for his inspiration and advice, and for introducing me to the areas of measure-valued processes, large deviations, and mathematical population genetics. I would also like to thank Fred M. Hoppe and Paul Joyce for sharing their insight on urn models. Several anonymous reviewers have generously offered their deep insight and penetrating comments on all aspects of the book, from which I benefited immensely. Richard Arratia informed me about the scale-invariant spacing lemma and the references associated with the scale-invariant Poisson process. Sion’s minimax theorem and the approach taken to Theorem 9.10 resulted from correspondence with Fuqing Gao. Ian Iscoe and Fang Xu provided numerous comments and suggestions for improvements. I would like to record my gratitude to Marina Reizakis, my editor at Springer, for her advice and professional help. The financial support from the Natural Sciences and Engineering Research Council of Canada is gratefully acknowledged. Last, but not least, I thank my family for their encouragement and steadfast support. Hamilton, Canada
Shui Feng November, 2009
Contents
Part I Models

1 Introduction 3
1.1 Discrete Models 3
1.1.1 Genetic Background 3
1.1.2 The Wright–Fisher Model 5
1.1.3 The Moran Model 7
1.2 Diffusion Approximation 8
1.3 An Important Relation 11
1.4 Notes 13

2 The Poisson–Dirichlet Distribution 15
2.1 Definition and Poisson Process Representation 15
2.2 Perman's Formula 17
2.3 Marginal Distribution 22
2.4 Size-biased Sampling and the GEM Representation 24
2.5 The Ewens Sampling Formula 26
2.6 Scale-invariant Poisson Process 33
2.7 Urn-based Models 36
2.7.1 Hoppe's Urn 36
2.7.2 Linear Birth Process with Immigration 38
2.7.3 A Model of Joyce and Tavaré 44
2.8 The Dirichlet Process 46
2.9 Notes 51
3 The Two-Parameter Poisson–Dirichlet Distribution 53
3.1 Definition 53
3.2 Marginal Distributions 54
3.3 The Pitman Sampling Formula 58
3.4 Urn-type Construction 62
3.5 Notes 66

4 The Coalescent 67
4.1 Kingman's n-Coalescent 67
4.2 The Coalescent 72
4.3 The Pure-death Markov Chain 73
4.4 Notes 80
5 Stochastic Dynamics 81
5.1 Infinitely-many-neutral-alleles Model 81
5.2 A Fleming–Viot Process 85
5.3 The Structure of Transition Functions 93
5.4 A Measure-valued Branching Diffusion with Immigration 107
5.5 Two-parameter Generalizations 109
5.6 Notes 111

6 Particle Representation 113
6.1 Exchangeability and Random Probability Measures 113
6.2 The Moran Process and the Fleming–Viot Process 116
6.3 The Donnelly–Kurtz Look-down Process 120
6.4 Embedded Coalescent 123
6.5 Notes 124
Part II Asymptotic Behaviors

7 Fluctuation Theorems 127
7.1 The Poisson–Dirichlet Distribution 127
7.2 The Dirichlet Process 134
7.3 Gaussian Limits 142
7.4 Notes 150

8 Large Deviations for the Poisson–Dirichlet Distribution 151
8.1 Large Mutation Rate 151
8.1.1 The Poisson–Dirichlet Distribution 151
8.1.2 The Two-parameter Poisson–Dirichlet Distribution 158
8.2 Small Mutation Rate 160
8.2.1 The Poisson–Dirichlet Distribution 160
8.2.2 Two-parameter Generalization 168
8.3 Applications 171
8.4 Notes 178

9 Large Deviations for the Dirichlet Processes 179
9.1 One-parameter Case 179
9.2 Two-parameter Case 187
9.3 Comparison of Rate Functions 193
9.4 Notes 197
A Poisson Process and Poisson Random Measure 199
A.1 Definitions 199
A.2 Properties 200

B Basics of Large Deviations 203

References 209
Index 217
Part I
Models
Chapter 1
Introduction
In this chapter, we introduce several basic models in population genetics including the Wright–Fisher model, the Moran model, and the corresponding diffusion approximations. The Dirichlet distribution is introduced as the reversible measure of the corresponding diffusion processes. Its connection to the gamma distribution is explored. These will provide the necessary intuition and motivation for the Poisson– Dirichlet distribution and other sophisticated models considered in subsequent chapters.
1.1 Discrete Models

We begin this section with a brief introduction of the genetic terminology used throughout the book.
1.1.1 Genetic Background All living organisms inherit biological characteristics from their parents. The characteristics that can be inherited are called the genetic information. Genetics is concerned with the study of heredity and variation. Inside each cell of an organism, there is a fixed number of chromosomes, which are threadlike objects, each containing a single, long molecule called DNA (deoxyribonucleic acid). Each DNA molecule is composed of two strands of nucleotides in which the sugar is deoxyribose and the bases are adenine (A), cytosine (C), guanine (G), and thymine (T). Linked by hydrogen bonds with A paired with T, and G paired with C, the two strands are coiled around to form the famous double helix structure discovered by Watson and Crick in 1953. These DNA molecules are responsible for the storage and inheritance of genetic information. A gene is a hereditary unit composed of a portion of DNA. The place that a gene resides on a chromosome is called a locus (loci in plural form). S. Feng, The Poisson–Dirichlet Distribution and Related Topics, Probability and its Applications, DOI 10.1007/978-3-642-11194-5 1, © Springer-Verlag Berlin Heidelberg 2010
Different forms of a gene are called alleles. An example is the ABO blood group locus, admitting three alleles, A, B, and O. The complete set of genetic information of an organism is called a genome. An organism is called haploid if there is only one set of chromosomes; if chromosomes appear in pairs, the organism is diploid. The set of chromosomes in a polyploid organism is at least tripled. Bacteria and fungi are generally haploid organisms, whereas most higher organisms such as mammals are diploid. Human diploid cells have 46 (or 23 pairs) chromosomes and human haploid gametes (egg and sperm) each have 23 chromosomes. Bread wheat and canola have six and four sets of chromosomes, respectively. The genotype of an individual at a particular locus is the set of allele(s) presented. For a diploid individual, this is an unordered pair of alleles (one on each chromosome). A diploid individual is homozygous (heterozygous) at a particular locus if the corresponding genotype consists of two identical alleles (different alleles). A phenotype of an organism is any observable characteristic. Phenotypes are determined by an organism’s genotypes and the interaction of the genotypes with the environment. As a branch of biology, population genetics is concerned with the genetic structure and evolution of biological populations, and the genetic forces behind the evolution. The focus is on the population as a whole instead of individuals. The genetic makeup of a population can be represented by the frequency distribution of different alleles in the population. A population is monoecious if every individual has both male and female organs. If each individual can only be a male or female, then the population is called dioecious. We shall assume in this book that all populations are monoecious. Many forces influence the evolution of a population. A change in the DNA sequence is referred to as a mutation. The cause of a mutation could be a mistake in DNA replication, a breakage of the chromosome, or certain environmental impact. Fitness is a measure of the relative breeding success of an individual or genotype in a population at a given time. Those that contribute the most offspring to future generations are the fittest. The selection is a deterministic force that favors certain alleles over others and leads to the survival of the fittest. Mutations create variation of allele frequencies. A deleterious mutation results in less fit alleles which can be reduced in frequency by natural selection, while advantageous mutation produces alleles with higher fitness and this combined with the selection force will result in an increase in frequency. A mutation that brings no change to the fitness of an individual is called neutral. In a finite population, changes in allele frequencies may occur because of the random sampling of genes from one generation to the next. This pure random change in allele frequencies is referred to as a random genetic drift. It will result in a decay in genetic variability and the eventual loss or fixation of alleles without regard to the survival or reproductive value of the alleles involved. Other forces that play major roles in the evolution process include the mechanism of recombination and migration, which will not be discussed as these topics lie outside the focus of this book.
R.A. Fisher, J.B.S. Haldane and S. Wright provided the theoretical underpinnings of population genetics in the 1920s and 1930s. Over the years population geneticists have developed more sophisticated mathematical models of allele frequency dynamics. Even though these models are highly idealized, many theoretical predictions based on them, on the patterns of genetic variation in actual populations, turn out to be consistent with empirical data. The basic mathematical framework can be described loosely as follows. Consider a large biological population of individuals. We are interested in the allele frequencies of the population at a particular locus. An allele will be called a type and different alleles correspond to different types. The complete set of alleles corresponds a type space which is modeled through a topological space. Both mutation and selection are described by a deterministic process. The random genetic drift corresponds to a random sampling. The population evolution corresponds to a probability-valued stochastic process describing the changes of the frequency distributions over time under the influence of mutation, selection, and random genetic drift. Different structures of mutation, selection, and genetic drift give rise to different mathematical models.
1.1.2 The Wright–Fisher Model

The Wright–Fisher model is a basic model for the reproduction process of a monoecious, randomly mating population. It assumes that the population consists of N diploid or 2N haploid individuals, the population size remains the same in each generation, and there is no overlap between generations. We focus attention on a particular locus with alleles represented by the type space E. The time index is {0, 1, 2, 3, . . .}. Time 0 refers to the starting generation and positive time n corresponds to generation n. Consider the current generation as an urn consisting of 2N balls of different colors. Then, in the original Wright–Fisher model, the next generation is formed by 2N independent drawings of balls from the urn with replacement. This mechanism is called random sampling. More general models should take into account mutation and selection. Here, for simplicity, we will only consider models with mutation and random sampling.

Two-allele Model with Mutation: E = {A_1, A_2}

Let X(0) denote the total number of individuals of type A_1 at time zero. The population evolves under the influence of mutation and random sampling. Assume that a type A_1 individual can mutate to a type A_2 individual with probability u_2, and a type A_2 individual can mutate to a type A_1 individual with probability u_1. Fixing the value of X(0), the number of type A_1 individuals after mutation becomes X(0)(1 − u_2) + (2N − X(0))u_1. Let p = X(0)/(2N) denote the initial proportion of type A_1 individuals in the population. After mutation, the proportion of type A_1 individuals becomes
$$p' = (1 - u_2)p + u_1(1 - p).$$
Let X(1) denote the total number of individuals of type A_1 at time one. Since individuals in the next generation are selected with replacement from a population of size 2N in which the proportion of type A_1 individuals is p', X(1) is a binomial random variable with parameters 2N and p'. Repeating the same procedure, let X(n) denote the number of type A_1 individuals in generation n. Then X(n) is a discrete time, homogeneous, finite-state Markov chain. For any 0 ≤ i, j ≤ 2N, the one-step transition probability is
$$P_{ij} = P\{X(1) = j \mid X(0) = i\} = \binom{2N}{j}(p')^{j}(1 - p')^{2N-j},$$
where $\binom{2N}{j}$ is the binomial coefficient. If u_1 and u_2 are strictly positive, then the chain is positive recurrent and a unique stationary distribution exists. The original Wright–Fisher model corresponds to the case u_1 = u_2 = 0.

K-allele Model with Mutation: E = {A_1, . . . , A_K}, K > 2

Let X(0) = (X_1(0), . . . , X_K(0)) with X_i(0) denoting the number of type A_i individuals at time zero. Mutation is described by a matrix (u_{ij})_{1 ≤ i,j ≤ K} with non-negative elements. Here, u_{ij} is the mutation probability from type A_i to type A_j for distinct i, j, and
$$\sum_{j=1}^{K} u_{ij} = 1, \quad i = 1, \ldots, K. \tag{1.1}$$
Let
$$p = (p_1, \ldots, p_K), \qquad p_i = \frac{X_i(0)}{2N}, \quad i = 1, \ldots, K.$$
The allele frequency after the mutation becomes
$$p' = (p'_1, \ldots, p'_K), \qquad p'_i = \sum_{j=1}^{K} p_j u_{ji}, \quad i = 1, \ldots, K. \tag{1.2}$$
Due to random sampling, the allele count in the next generation X(1) = (X_1(1), . . . , X_K(1)) follows a multinomial distribution with parameters 2N and p'. The nth generation can be obtained similarly from its preceding generation. The process X(n) is again a discrete time, homogeneous, finite-state Markov chain.
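The sampling mechanism just described is straightforward to simulate. The sketch below is not from the book; it is a minimal illustration (the values of K, N and the mutation matrix are arbitrary choices) of one generation of the K-allele Wright–Fisher chain with mutation: the current frequencies are pushed through the mutation matrix as in (1.2), and the next generation is drawn from the resulting multinomial distribution.

```python
import numpy as np

def wright_fisher_step(counts, U, rng):
    """One generation of the K-allele Wright-Fisher model with mutation.

    counts : integer vector X(n) of allele counts, summing to 2N.
    U      : K x K row-stochastic mutation matrix (u_ij), as in (1.1).
    """
    two_N = counts.sum()
    p = counts / two_N               # current frequencies p
    p_prime = p @ U                  # frequencies after mutation, as in (1.2)
    return rng.multinomial(two_N, p_prime)   # random sampling

# Illustrative parameters (not prescribed by the text).
rng = np.random.default_rng(0)
K, N, theta = 4, 100, 1.0
U = np.full((K, K), theta / (2 * (K - 1) * 2 * N))
np.fill_diagonal(U, 0.0)
np.fill_diagonal(U, 1.0 - U.sum(axis=1))     # make the rows sum to one
X = rng.multinomial(2 * N, np.full(K, 1.0 / K))
for _ in range(1000):
    X = wright_fisher_step(X, U, rng)
print(X)
```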
1.1.3 The Moran Model Similarly to the Wright–Fisher model, the Moran model also describes the change in allele frequency distribution in a randomly mating population. But the population is haploid, and the mechanism of multinomial sampling in the Wright–Fisher model is replaced with a birth–death procedure in the Moran model. At each time step, one individual is chosen at random to die and another individual (may be the replaced individual) is chosen at random from the population to reproduce. Since birth and death occur at the same time, the population size remains constant. If each time step is considered as one generation, then only one reproduction occurs at each generation and, therefore, the generations are allowed to overlap. The Moran model is mathematically more tractable than the Wright–Fisher model because many interesting quantities can be expressed explicitly. When mutation is introduced, the birth probability will be weighted by the mutation probabilities. Two-allele Model with Mutation: E = {A1 , A2 } Assume that the population size is 2N. Let X(0) = i denote initial population size of A1 individuals, and p = i/2N. The mutation probability from A1 (A2 ) to A2 (A1 ) is u2 (u1 ). Then X(1) can only take three possible values: i − 1, i, i + 1. The case of X(1) = i−1 corresponds to the event that an A1 individual is chosen to die and an A2 individual is chosen to reproduce with weighted probability (1 − p)(1 − u1 ) + pu2 . The corresponding transition probability is P{X(1) = i − 1 | X(0) = i} = p[(1 − p)(1 − u1 ) + pu2 ]. Similarly we have P{X(1) = i + 1 | X(0) = i} = (1 − p)[(1 − p)u1 + p(1 − u2 )], and P{X(1) = i | X(0) = i} = 1 − [p(1 − p)(2 − u1 − u2 ) + p2 u2 + (1 − p)2 u1 ]. Clearly, X(n), the total number of individuals of type A1 at time n, is a discrete time, homogeneous, finite-state Markov chain. K-allele Model with Mutation: E = {A1 , . . . , AK }, K ≥ 2 As before, let X(0) = (X1 (0), . . . , XK (0)) with Xi (0) denoting the number of type Ai i (0) individuals at time zero and pi = X2N for i = 1, . . . , K. Let the mutation be described by the matrix (ui j )1≤i, j≤K satisfying (1.1). Then the allele frequency distribution after the mutation is given by (1.2). For any 1 ≤ l ≤ K, let el denote the unit vector with the lth coordinate being one. Starting with X(0) = i = (i1 , . . . , iK ), the next generation X(1) = (X1 (1), . . . , XK (1)) will only take values of the form j = i + el − ek for some 1 ≤ k, l ≤ K. For any distinct pair k, l, we have
$$P\{X(1) = i + e_l - e_k \mid X(0) = i\} = p_k \sum_{m=1}^{K} p_m u_{ml}.$$
For j = i, the transition probability is
$$P\{X(1) = i \mid X(0) = i\} = 1 - \sum_{k \neq l} p_k \sum_{m=1}^{K} p_m u_{ml}.$$
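For comparison with the transition probabilities above, here is a minimal sketch (illustrative rather than taken from the text) of a single Moran birth–death event: a uniformly chosen individual dies, a uniformly chosen individual reproduces, and the offspring's type is drawn from the corresponding row of the mutation matrix, so that a transition from i to i + e_l − e_k occurs with probability p_k ∑_m p_m u_{ml}.

```python
import numpy as np

def moran_step(counts, U, rng):
    """One birth-death event of the K-allele Moran model with mutation.

    counts : vector of allele counts, summing to the population size 2N.
    U      : K x K row-stochastic mutation matrix (u_ij).
    """
    K = len(counts)
    p = counts / counts.sum()
    k = rng.choice(K, p=p)            # type of the individual chosen to die
    parent = rng.choice(K, p=p)       # type of the individual chosen to reproduce
    l = rng.choice(K, p=U[parent])    # offspring type after possible mutation
    nxt = counts.copy()
    nxt[k] -= 1
    nxt[l] += 1
    return nxt

# Example usage with an arbitrary 3-allele configuration.
rng = np.random.default_rng(0)
U = np.array([[0.98, 0.01, 0.01],
              [0.01, 0.98, 0.01],
              [0.01, 0.01, 0.98]])
counts = np.array([40, 30, 30])
for _ in range(10_000):
    counts = moran_step(counts, U, rng)
print(counts)
```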
1.2 Diffusion Approximation

A continuous time Markov process with continuous sample paths is called a diffusion process. By changing the scales of time and space, many Markov processes can be approximated by diffusion processes. Both the Wright–Fisher model and the Moran model are Markov processes with discontinuous sample paths. When the number of alleles is large, direct calculations associated with these models are difficult to carry out and explicit results are rare. In these cases, useful information can be gained through the study of approximating diffusion processes. This section includes a very brief illustration of the diffusion approximation. More detailed discussions on the use of diffusion processes in genetics can be found in [48] and [67].

For any M ≥ m ≥ 1, let (X_1, . . . , X_m) be a multinomial random variable with parameters M, q, where
$$q = (q_1, \ldots, q_m), \qquad \sum_{k=1}^{m} q_k = 1, \quad q_i \geq 0, \ 1 \leq i \leq m.$$
The means and covariances of (X_1, . . . , X_m) are
$$E[X_i] = Mq_i, \qquad \mathrm{Cov}(X_i, X_j) = Mq_i(\delta_{ij} - q_j), \quad 1 \leq i, j \leq m. \tag{1.3}$$

K-allele Wright–Fisher Model with Mutation

For distinct i, j, change (u_{ij}) to (u_{ij}/(2N)). Set
$$Y^N(t) = (Y_1^N(t), \ldots, Y_K^N(t)), \quad Y_i^N(t) = X_i([2Nt]), \tag{1.4}$$
$$p^N(t) = (p_1^N(t), \ldots, p_K^N(t)) = \frac{1}{2N} Y^N(t), \tag{1.5}$$
where [2Nt] is the integer part of 2Nt. Note that one unit of time in the Y^N(·) process corresponds to 2N generations of the process X(·). If we choose Δt = 1/(2N), then the time period [t, t + Δt] corresponds to one generation in the process X(·). During this period p becomes p' due to mutation followed by a multinomial sampling. Let
$$\Delta p^N(t) = (\Delta p_1^N(t), \ldots, \Delta p_K^N(t)) = p^N(t + \Delta t) - p^N(t)$$
denote the change of p^N(t) over [t, t + Δt]. Our aim is to approximate E[Δp_i^N(t)] and E[Δp_i^N(t) Δp_j^N(t)] for i, j = 1, . . . , K. Assume that p^N(t) = p = (p_1, . . . , p_K). Then by (1.3) we have
$$E[\Delta p_i^N(t)] = E[p_i^N(t + \Delta t) - p_i' + p_i' - p_i] = p_i' - p_i = \sum_{j \neq i} \frac{1}{2N}(p_j u_{ji} - p_i u_{ij}) = \frac{1}{2N} b_i(p), \tag{1.6}$$
$$E[\Delta p_i^N(t)\, \Delta p_j^N(t)] = (p_i' - p_i)(p_j' - p_j) + \mathrm{Cov}\big(p_i^N(t + \Delta t), p_j^N(t + \Delta t)\big) = \frac{1}{(2N)^2} b_i(p) b_j(p) + \frac{1}{2N} a_{ij}(p) + o\Big(\frac{1}{(2N)^2}\Big), \tag{1.7}$$
where
$$b_i(p) = \sum_{j \neq i} p_j u_{ji} - \sum_{j \neq i} p_i u_{ij}, \tag{1.8}$$
$$a_{ij}(p) = p_i(\delta_{ij} - p_j). \tag{1.9}$$
Letting N go to infinity, it follows that
$$\lim_{N \to \infty} \frac{1}{\Delta t} E[\Delta p_i^N(t)] = b_i(p), \qquad \lim_{N \to \infty} \frac{1}{\Delta t} E[\Delta p_i^N(t)\, \Delta p_j^N(t)] = a_{ij}(p).$$
Ignoring higher order terms that approach zero faster than Δt, we have shown that when the mutation rate is scaled by a factor of 1/(2N) and time is measured in units of 2N generations, the Wright–Fisher model is approximated by the diffusion process
$$dp_i(t) = b_i(p(t))\, dt + \sum_{j=1}^{K-1} \sigma_{ij}(p(t))\, dB_j(t)$$
with
$$\sum_{l=1}^{K-1} \sigma_{il}(p(t))\, \sigma_{jl}(p(t)) = a_{ij}(p).$$
The generator of the diffusion is
$$L f(p) = \frac{1}{2} \sum_{i,j=1}^{K} a_{ij}(p) \frac{\partial^2 f}{\partial p_i\, \partial p_j} + \sum_{i=1}^{K} b_i(p) \frac{\partial f}{\partial p_i}.$$
In the two-allele case, the approximating diffusion solves the following stochastic differential equation:
$$dp(t) = \big(u_1(1 - p(t)) - u_2 p(t)\big)\, dt + \sqrt{p(t)(1 - p(t))}\, dB_t, \tag{1.10}$$
where u_1, u_2 are the mutation rates of the process. For a ≥ 0, let
$$\Gamma(a) = \int_0^{\infty} p^{a-1} e^{-p}\, dp.$$
For u_1 > 0, u_2 > 0, let
$$h(p) = \frac{\Gamma(2(u_1 + u_2))}{\Gamma(2u_1)\,\Gamma(2u_2)}\, p^{2u_1 - 1}(1 - p)^{2u_2 - 1}, \quad 0 < p < 1,$$
which is the density function of the Beta(2u_1, 2u_2) distribution. By direct calculation we have that for any twice continuously differentiable functions f, g on [0, 1],
$$\int_0^1 \big[g(p) L f(p) - f(p) L g(p)\big]\, h(p)\, dp = 0.$$
Thus the Beta(2u_1, 2u_2) distribution is a reversible measure for the process p(t). When the mutation is symmetric, i.e., u_1 = u_2 = u, we write θ = 2u. Recalling that u = 2Nμ with μ being the original individual mutation rate, it follows that θ = 4Nμ, which is called the scaled population mutation rate. A particular selection model with a selection factor s is the stochastic differential equation
$$dp(t) = \big(u_1(1 - p(t)) - u_2 p(t) + s\, p(t)(1 - p(t))\big)\, dt + \sqrt{p(t)(1 - p(t))}\, dB_t. \tag{1.11}$$
If u_1 > 0, u_2 > 0, s > 0, this diffusion process will have a reversible measure given by
$$C\, p^{2u_1 - 1}(1 - p)^{2u_2 - 1} e^{2sp}\, dp, \quad 0 < p < 1,$$
where C is the normalizing constant. It is clear that this distribution is simply a change-of-measure with respect to the selection-free case. For the K-allele model with mutation, if
$$u_{ii} = 0, \qquad u_{ij} = \frac{\theta}{2(K-1)}, \quad i, j \in \{1, \ldots, K\},\ i \neq j,$$
then the reversible measure is the Dirichlet(θ/(K−1), . . . , θ/(K−1)) distribution.
Here, for any m ≥ 2, the Dirichlet(α1 , . . . , αm ) distribution is a multivariate generalization of the Beta distribution, defined on the simplex
$$\Big\{(p_1, \ldots, p_m) : p_i \geq 0,\ \sum_{k=1}^{m} p_k = 1\Big\},$$
and has a density with respect to the (m − 1)-dimensional Lebesgue measure given by
$$f(p_1, \ldots, p_{m-1}, p_m) = \frac{\Gamma(\sum_{k=1}^{m} \alpha_k)}{\prod_{k=1}^{m} \Gamma(\alpha_k)}\, p_1^{\alpha_1 - 1} \cdots p_{m-1}^{\alpha_{m-1} - 1} p_m^{\alpha_m - 1},$$
where p_m = 1 − p_1 − · · · − p_{m−1}.

Next we turn to the Moran model with mutation. For distinct i, j, we make the same change from (u_{ij}) to (u_{ij}/(2N)). Set
$$Z^N(t) = (Z_1^N(t), \ldots, Z_K^N(t)), \quad Z_i^N(t) = X_i([2N^2 t]), \qquad p^N(t) = (p_1^N(t), \ldots, p_K^N(t)) = \frac{1}{2N} Z^N(t).$$
Here the time unit is 2N^2 generations. Assume that p^N(t) = (p_1, . . . , p_K). Then by choosing Δt = 1/(2N^2), it follows that for any 1 ≤ i ≠ j ≤ K,
$$E[p_i^N(t + \Delta t) - p_i^N(t)] = \frac{1}{2} b_i(p)\, \Delta t,$$
$$E[(p_i^N(t + \Delta t) - p_i^N(t))^2] = [p_i(1 - p_i) + O(\Delta t)]\, \Delta t,$$
$$E[(p_i^N(t + \Delta t) - p_i^N(t))(p_j^N(t + \Delta t) - p_j^N(t))] = [-p_i p_j + O(\Delta t)]\, \Delta t,$$
which leads to
$$\lim_{N \to \infty} \frac{1}{\Delta t} E[p_i^N(t + \Delta t) - p_i^N(t)] = \frac{1}{2} b_i(p), \tag{1.12}$$
$$\lim_{N \to \infty} \frac{1}{\Delta t} E[(p_i^N(t + \Delta t) - p_i^N(t))(p_j^N(t + \Delta t) - p_j^N(t))] = a_{ij}(p). \tag{1.13}$$
Note that the time is speeded up by N(2N) instead of (2N)^2, and the drift is half of that of the Wright–Fisher model. Thus the scaled population mutation rate is given by θ = 2Nμ in the Moran model.
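The reversibility statements above lend themselves to a quick numerical check. The sketch below is only illustrative (the Euler–Maruyama scheme, step size and run length are ad hoc choices, not anything prescribed in the text): it simulates the two-allele diffusion (1.10) and compares the long-run empirical mean of the path with the mean of the Beta(2u_1, 2u_2) stationary distribution.

```python
import numpy as np

def simulate_two_allele_diffusion(u1, u2, p0=0.5, dt=1e-3, n_steps=500_000, seed=1):
    """Euler-Maruyama discretization of dp = (u1(1-p) - u2 p) dt + sqrt(p(1-p)) dB."""
    rng = np.random.default_rng(seed)
    p, path = p0, np.empty(n_steps)
    for n in range(n_steps):
        drift = u1 * (1.0 - p) - u2 * p
        noise = np.sqrt(max(p * (1.0 - p), 0.0)) * np.sqrt(dt) * rng.standard_normal()
        p = min(max(p + drift * dt + noise, 0.0), 1.0)   # keep the path inside [0, 1]
        path[n] = p
    return path

u1, u2 = 0.3, 0.5
path = simulate_two_allele_diffusion(u1, u2)
# The Beta(2*u1, 2*u2) stationary law has mean u1 / (u1 + u2).
print(path[len(path) // 2:].mean(), u1 / (u1 + u2))
```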
1.3 An Important Relation

In this section, we will discuss an important relation between the gamma distribution and the Dirichlet distribution. The dynamical analog will be exploited intuitively.
A non-negative random variable Y is said to have a Gamma(α, β) distribution if its density is given by
$$f(y) = \frac{1}{\beta^{\alpha} \Gamma(\alpha)}\, y^{\alpha - 1} e^{-y/\beta}, \quad y > 0.$$
Theorem 1.1. For each m ≥ 2, let Y_1, . . . , Y_m be independent gamma random variables with respective parameters (α_i, 1), i = 1, . . . , m. Define
$$X_i = \frac{Y_i}{\sum_{k=1}^{m} Y_k}, \quad i = 1, \ldots, m.$$
Then (X_1, . . . , X_m) has a Dirichlet(α_1, . . . , α_m) distribution and is independent of ∑_{k=1}^{m} Y_k.

Proof. Define a transformation of (y_1, . . . , y_m) ∈ R^m by
$$x_i = \frac{y_i}{\sum_{k=1}^{m} y_k}, \quad i = 1, \ldots, m - 1, \qquad z = \sum_{k=1}^{m} y_k.$$
Then the determinant J(y_1, . . . , y_m) of the Jacobian matrix of this transformation is given by 1/z^{m−1}, and the joint density g(x_1, . . . , x_{m−1}, z) of (X_1, . . . , X_{m−1}, Z), where
$$Z = \sum_{k=1}^{m} Y_k,$$
is given by
$$g(x_1, \ldots, x_{m-1}, z) = \frac{1}{\Gamma(\alpha_1) \cdots \Gamma(\alpha_m)}\, y_1^{\alpha_1 - 1} \cdots y_m^{\alpha_m - 1} e^{-z} z^{m-1} = \frac{\Gamma(\sum_{k=1}^{m} \alpha_k)}{\Gamma(\alpha_1) \cdots \Gamma(\alpha_m)}\, x_1^{\alpha_1 - 1} \cdots x_m^{\alpha_m - 1}\, \frac{1}{\Gamma(\sum_{k=1}^{m} \alpha_k)}\, z^{\sum_{k=1}^{m} \alpha_k - 1} e^{-z},$$
which implies that Z has a Gamma(∑_{k=1}^{m} α_k, 1) distribution, and that (X_1, . . . , X_m) is independent of Z and has a Dirichlet(α_1, . . . , α_m) distribution. □

Remark: For any β > 0 and a Gamma(α, 1) random variable Y, the new variable βY is a Gamma(α, β) random variable. Thus Theorem 1.1 still holds if Y_1, . . . , Y_m are independent Gamma(α_i, β) random variables, i = 1, . . . , m.

Consider m independent stochastic differential equations:
$$dY_i(t) = \frac{1}{2}\big(\alpha_i - \beta Y_i(t)\big)\, dt + \sqrt{Y_i(t)}\, dB_i(t), \quad i = 1, \ldots, m.$$
The diffusion Y_i(t) has a unique stationary distribution Gamma(α_i, β). Define
$$X_i(t) = \frac{Y_i(t)}{\sum_{k=1}^{m} Y_k(t)}.$$
Applying Itô's lemma formally, one obtains
$$dX_i(t) = \frac{1}{\big(\sum_{k=1}^{m} Y_k(t)\big)^2} \bigg\{ \frac{1}{2}\Big[\sum_{k \neq i} Y_k(t)\big(\alpha_i - \beta Y_i(t)\big) - Y_i(t) \sum_{k \neq i} \big(\alpha_k - \beta Y_k(t)\big)\Big] dt + \sum_{k \neq i} Y_k(t) \sqrt{Y_i(t)}\, dB_i(t) - Y_i(t) \sum_{k \neq i} \sqrt{Y_k(t)}\, dB_k(t) \bigg\}$$
$$= \frac{1}{\big(\sum_{k=1}^{m} Y_k(t)\big)^2} \bigg\{ \frac{\alpha_i \sum_{k \neq i} Y_k(t) - Y_i(t) \sum_{k \neq i} \alpha_k}{2}\, dt + \sum_{k \neq i} Y_k(t) \sqrt{Y_i(t)}\, dB_i(t) - Y_i(t) \Big(\sum_{k \neq i} \sqrt{Y_k(t)}\, dB_k(t)\Big) \bigg\}.$$
m
αi − ( ∑ αk )Xi (t) dt k=1
+ (1 − Xi (t)) Xi (t)dBi (t) − ∑ Xi (t) Xk (t)dBk (t) .
k=i
Due to independence of the Brownian motions, the second term can be formally ˜ with B(t) ˜ being a standard Brownian motion, and written as Xi (t)(1 − Xi (t))d B(t) (X1 (t), . . . , Xm (t)) becomes the m-allele Wright–Fisher diffusion. Thus the structure in Theorem 1.1 has a dynamical analog. Later on, we will see that the rigorous derivation of this result is related to the Perkins disintegration theorem in measurevalued processes (cf. [20]).
1.4 Notes One can consult [86] for a concise introduction to population genetics. A brief introduction to the mathematical framework can be found in [127] and Chapter 10 in [62]. The most comprehensive references on mathematical population genetics are [67] and [48]. The origin of the Wright–Fisher model is [81] and [185]. The Moran model was proposed in [139]. The existence and uniqueness of the Wright–Fisher diffusion process in Section 1.2 was obtained in [55]. In [62], one can find the rigorous derivation of the diffusion approximation in Section 1.2 and a proof of the existence and uniqueness of the stationary distribution for the two-allele diffusion process with
14
1 Introduction
two way mutation and selection. The relation between the gamma distribution and the Dirichlet distribution is better understood in the polar coordinate system with the Dirichlet distribution being the angular part and the total summation as the radial part. The infinite dimensional analog of this structure is the Perkins disintegration theorem, obtained in [143].
Chapter 2
The Poisson–Dirichlet Distribution
The focus of this chapter is the Poisson–Dirichlet distribution, the central topic of this book. We introduce this distribution and discuss various models that give rise to it. Following Kingman [125], the distribution is constructed through the gamma process. An alternative construction in [8] is also included, where a scale-invariant Poisson process is used. The density functions of the marginal distributions are derived through Perman’s formula. Closely related topics such as the GEM distribution, the Ewens sampling formula, and the Dirichlet process are investigated in detail through the study of urn models. The required terminology and properties of Poisson processes and Poisson random measures can be found in Appendix A.
2.1 Definition and Poisson Process Representation For each K ≥ 2, set
(p1 , . . . , pK ) : p1 ≥ p2 ≥ · · · ≥ pK ≥ 0, ∑ p j = 1 ,
∇K =
j=1
∞
(p1 , p2 , . . . , ) : p1 ≥ p2 ≥ · · · ≥ 0, ∑ p j = 1 ,
∇∞ =
j=1
∇=
K
∞
(2.1)
(p1 , p2 , . . . , ) : p1 ≥ p2 ≥ · · · ≥ 0, ∑ p j ≤ 1 . j=1
The space ∇K can be embedded naturally into ∇∞ and thus viewed as a subset of ∇∞ . The space ∇ is the closure of ∇∞ in [0, 1]∞ , and the topology on each of ∇∞ and ∇ is the subspace topology inherited from [0, 1]∞ . For θ > 0, let (X1 , . . . , XK ) have θ θ a Dirichlet( K−1 , . . . , K−1 ) distribution. Let YK = (Y1K , . . . ,YKK ) be the decreasing order statistics of (X1 , . . . , XK ).
S. Feng, The Poisson–Dirichlet Distribution and Related Topics, Probability and its Applications, DOI 10.1007/978-3-642-11194-5 2, © Springer-Verlag Berlin Heidelberg 2010
15
16
2 The Poisson–Dirichlet Distribution
Theorem 2.1. Let M1 (∇) denote the space of probability measures on ∇. Then the sequence {μK : K ≥ 2} of laws of YK converges weakly in M1 (∇). Proof. Let γ (t) be a gamma process; i.e., a process with stationary independent increments such that each increment, γ (t) − γ (s) for 0 ≤ s < t, follows a Gamma(θ (t − s), 1) distribution. We sometimes write γt instead of γ (t) for notational convenience. Set l +1 l , , l = 0, . . . , K − 1, Il = K −1 K −1 X˜l =
l+1 l γ ( K−1 ) − γ ( K−1 ) K γ ( K−1 )
,
˜ K = (Y˜ K , . . . , Y˜ K ) denote the descending order statistics of (X˜1 , . . . , X˜K ). It foland Y K 1 lows from Theorem 1.1 that (X˜1 , . . . , X˜K ) has the same distribution as (X1 , . . . , XK ). For l = 0, . . . , K − 1, denote the lth highest jump of γ (t) over the interval (0, 1) by Jl . Since the process γ (t) does not have any fixed jump point, for almost all sample paths the jumps occur in I1 , . . . , IK−1 . Thus with probability one, we have K ≥ Jl Y˜lK γ K −1 which implies that Jl . lim inf Y˜lK ≥ K→∞ γ (θ )
(2.2)
Fatou’s lemma guarantees that the summation over l of the left-hand side of equation (2.2) is no more than one. This, combined with the fact that the right-hand side adds up to one, implies that the inequality in (2.2) is actually an equality for all l and the lower limit is actually the limit. Let Pl (θ ) = γ (Jθl ) for any l ≥ 0, P(θ ) = (P1 (θ ), P2 (θ ), . . .), and let Πθ denote the law of Pθ . Then Πθ belongs to M1 (∇), and we have shown that for any fixed r ≥ 1 and any bounded continuous function g on ∇r , g(p1 , . . . , pr )d μK = g(p1 , . . . , pr )d Πθ . (2.3) lim K→∞ ∇
∇
It now follows from the Stone–Weierstrass theorem that (2.3) holds for every realvalued continuous function g on ∇. 2 Definition 2.1. The law Πθ of P(θ ) in Theorem 2.1 is called the Poisson–Dirichlet distribution with parameter θ . Let ς1 ≥ ς2 ≥ . . . be the random points of a Poisson process with intensity measure μ (dx) = θ x−1 e−x dx, x > 0. Set
2.2 Perman’s Formula
17 ∞
σ = ∑ ςi . i=1
Then we have: Theorem 2.2. For each positive θ , the distribution of ( ςσ1 , ςσ2 , . . .) is Πθ . Furthermore, σ has the distribution Gamma(θ , 1) and is independent of ( ςσ1 , ςσ2 , . . .). Proof. Noting that the jump sizes of the process γ (t) over the interval [0, 1] form a nonhomogeneous Poisson process with intensity measure μ (dx), it follows from the construction in Theorem 2.1 that the law of ( ςσ1 , ςσ2 , . . .) is Πθ . It is clear from the definition that σ has the distribution Gamma(θ , 1). The fact that σ and ( ςσ1 , ςσ2 , . . .) are independent, follows from Theorem 1.1. 2 The next example illustrates the application of the Poisson–Dirichlet distribution in random number theory. Example 2.1 (Prime factorization of integers). For each integer n ≥ 1, let Nn be chosen at random from 1, 2, . . . , n. Consider the prime factorization of Nn Nn = Π p pCp (n) , where C p (n) is the multiplicity of p. Define Kn = ∑ Cp (n), p
Ai (n) = ith biggest prime factor of Nn , i = 1, . . . , Kn , Ai (n) = 1, i > Kn , Li (n) = log Ai (n), i ≥ 1. 1 (n) L2 (n) Then, as n goes to infinity, ( Llog n , log n . . .) converges in distribution to (Y1 ,Y2 , . . .) whose law is the Poisson–Dirichlet distribution with parameter one.
2.2 Perman’s Formula We start this section with the definition of a stochastic process, the subordinator, which is closely related to the Poisson process and the Poisson random measure. Definition 2.2. A process {τs : s ≥ 0} is called a subordinator if it has stationary, independent, and non-negative increments with τ0 = 0. Definition 2.3. A subordinator {τs : s ≥ 0} is said to have no drift if for any λ ≥ 0, s ≥ 0,
18
2 The Poisson–Dirichlet Distribution
E[e
−λ τs
] = exp − s
∞ 0
−λ x
(1 − e
)Λ (dx) ,
where the measure Λ concentrates on [0, +∞) and is called the L´evy measure of the subordinator. All subordinators considered in this book are without drift and each has a L´evy measure Λ satisfying
Λ ((0, +∞)) = +∞, ∞ 0
x ∧ 1 Λ (dx) < +∞,
Λ (dx) = h(x)dx, for some h(x) ≥ 0.
(2.4) (2.5) (2.6)
Let J1 (τt ), J2 (τt ), . . . denote the jump sizes of τt up to time t, ranked by size in descending order. Then the sequence is infinite due to (2.4), and condition (2.5) ensures that for every t > 0, τt = ∑∞ i=1 Ji (τt ) < ∞ with probability one. The condition (2.6) implies that for every t > 0, the distribution of τt has a density with respect to the Lebesgue measure (cf. [106]). Set Piτt =
Ji (τt ) . τt
Then (P1τt , P2τt , . . .) forms a probability-valued random vector. Change h to t · h, and all calculations involving jumps of the subordinator {τs : 0 ≤ s ≤ t} change to those of the subordinator {τ˜s : 0 ≤ s ≤ 1} with L´evy measure th(x)dx. Without loss of generality, we will choose t = 1. Let g denote the density function of τ1 . For any Borel set A ⊂ [0, +∞), and any t ≥ 0, define N(t, A) = ∑ χA (τs − τs− ).
(2.7)
s≤t
Then for each fixed t, N(t, ·) is a Poisson random measure with mean measure t Λ (·). For simplicity, we write N(·) for N(1, ·). Lemma 2.1. For distinct 0 < p1 , . . . , pn < 1, p1 + · · · + pn < 1, we have P{some Piτ1 ∈ d p1 , . . . , some Piτ1 ∈ d pn , τ1 ∈ ds} = sn h(p1 s) · · · h(pn s) g(s(1 − p1 − · · · − pn ))d p1 · · · d pn ds. Proof. Let Ji = Ji (τ1 ). Then by properties of the Poisson process,
2.2 Perman’s Formula
19
P{some Ji ∈ dx1 , . . . , some Ji ∈ dxn , τ1 ∈ ds} = E[N(dx1 ) · · · N(dxn ) χ{τ1 ∈ds} ]
= E N(dx1 ) · · · N(dxn ) E χ{τ1 ∈ds} | N(dx1 ) · · · N(dxn ) . Since the mean measure of a Poisson process does not have fixed atoms, it follows that, conditioning on fixing the location of finite points, the remaining points form the same Poisson process. Thus
E χ{τ1 ∈ds} | N(dx1 ) · · · N(dxn ) = g(s − x1 − · · · − xn )ds and P{some Ji ∈ dx1 , . . . , some Ji ∈ dxn , τ1 ∈ ds} = h(x1 )h(x2 ) · · · h(xn ) g(s − x1 − · · · − xn )dx1 · · · dxn ds Making the change of variable pi =
xi s
leads to
P{some Piτ1 ∈ d p1 , . . . , some Piτ1 ∈ d pn , τ1 ∈ ds} = sn h(p1 s) · · · h(pn s) g(s(1 − p1 − · · · − pn ))d p1 · · · d pn ds. 2 Theorem 2.3. For each 0 ≤ p < 1, P{P1τ1 > p, τ1 ∈ ds} =
where In (p, ds) =
Bnp
=
(−1)n+1 In (p, ds), n! n=1
sn h(u1 s) · · · h(un s) g(s uˆn )du1 · · · dun ds,
Bnp
∞
∑
n
(u1 , . . . , un ) : p < ui < 1, ∑ ui < 1 , i=1
uˆn = 1 − u1 − · · · − un . Integrating out the s component, one gets the distribution function of P1τ1 . Note that there are only finite non-zero terms in the summation for each fixed 0 < p < 1, the change in the order of integration is justified. Proof. It follows from Lemma 2.1 that E[ #{(k1 , . . . , kn ) : ki distinct, (Pkτ11 , . . . , Pkτn1 ) ∈ B}, τ1 ∈ ds]
= B
sn h(u1 s) · · · h(un s) g(s(1 − u1 − · · · − un ))du1 · · · dun ds
20
2 The Poisson–Dirichlet Distribution
for each measurable set B ⊂ Rn , where the symbol # denotes the cardinality of the set following it. Set Kpn (ds) = #{(k1 , . . . , kn ) : ki distinct, (Pkτ11 , . . . , Pkτn1 ) ∈ Bnp , τ1 ∈ ds}. Then E[K pn (ds)] = In (p, ds). For any i ≥ 1, let Fi = {Piτ1 > p}. Then clearly F1 =
∞
Fi .
i=1
By the inclusion–exclusion formula, ∞
χF1 = ∑ χFi − ∑ χFi ∩Fj + · · · . i< j
i=1
Discounting the permutations, it follows that
χ{P1 >p, τ1 ∈ds} =
∞
(−1)n+1 K pn (ds) . ∑ n! n=1
Taking expectations leads to the result. 2 Next, we present Perman’s formulae for the finite dimensional density functions of random vector (τ1 , P1τ1 , . . . , Pmτ1 ) for each m ≥ 1. Theorem 2.4. (Perman’s formula) For each m ≥ 1, the vector (τ1 , P1τ1 , . . . , Pmτ1 ) ∈ Rm+1 + has a density function gm (t, p1 , . . . , pm ) with respect to the Lebesgue measure and (1) For t > 0 and p ∈ (0, 1),
g1 (t, p) = th(t p)
0
p 1−p ∧1
g1 (t(1 − p), z)d z.
(2) For
(2.8)
m
t > 0, m ≥ 2, 0 < pm < · · · < p1 < 1,
∑ pi < 1,
i=1
and pˆm−1 = 1 − p1 − · · · − pm−1 , we have gm (t, p1 , . . . , pm ) =
t m−1 h(t p1 ) · · · h(t pm−1 ) pm . g1 t pˆm−1 , pˆm−1 pˆm−1
(2.9)
2.2 Perman’s Formula
21
Proof. Noting that (J1 , . . . , Jm ) ≡ (J1 (τ1 ), . . . , Jm (τ1 )) are the m largest jumps of {τs : 0 ≤ s ≤ 1} with J1 > · · · > Jm , it follows that P(J1 < v1 ) = P(N([v1 , +∞)) = 0) = e−Λ ([v1 ,+∞)) = e and
− v+∞ h(z)dz 1
,
P(J1 ∈ dv1 ) = e−Λ ([v1 ,+∞)) h(v1 )dv1 .
Conditioning on {J1 ∈ dv1 }, we have P(J2 ∈ dv2 | J1 ∈ dv1 ) = e−Λ ([v2 ,v1 )) h(v2 )dv2 , P(J1 ∈ dv1 , J2 ∈ dv2 ) = e−Λ ([v2 ,+∞)) h(v1 )h(v2 )dv1 dv2 . It follows by induction that for v1 > v2 > · · · > vm > 0, P(J1 ∈ dv1 , . . . , Jm ∈ dvm ) = h(v1 ) · · · h(vm )e−Λ ([vm ,+∞)) dv1 · · · dvm . Let fv (x) be the density function of τ1 given Jm = v. Then fv (x) is the density function of τ˜1 , where τ˜s is a subordinator with L´evy measure Λ˜ (dx) = h(x)1{x≤v} dx. Hence for t > v1 + · · · + vm , (τ1 , J1 , . . . , Jm ) has joint density function h(v1 ) · · · h(vm )e−Λ ([vm ,+∞)) fvm (t − v1 − · · · − vm ). Making the change of variable vi = t pi gives the density function gm (t, p1 , . . . , pm ) = t m h(t p1 ) · · · h(t pm )e−Λ ([t pm ,+∞)) ft pm (t(1 − p1 − · · · − pm )). For m = 1, we have g1 (t, p1 ) = th(t p1 )e−Λ ([t p1 ,+∞)) ft p1 (t pˆ1 ), which implies that for m ≥ 2, pm pm pm −Λ ([t· pˆ pm ,+∞)) m−1 = th t · e , g1 t, ft· pm t 1 − pˆm−1 pˆm−1 pˆm−1 pˆm−1 pm = pˆm−1th(t pm )e−Λ ([pm ,+∞)) ft pm (t pˆm ), g1 t pˆm−1 , pˆm−1 and t m−1 h(t p1 ) · · · h(t pm−1 ) pˆm−1th(t pm )e−Λ ([pm ,+∞)) ft pm (t pˆm ) pˆm−1 pm t m−1 h(t p1 ) · · · h(t pm−1 ) g1 t pˆm−1 , = . pˆm−1 pˆm−1
gm (t, p1 , . . . , pm ) =
For m = 2,
th(t p1 ) p2 . g1 t(1 − p1 ), g2 (t, p1 , p2 ) = 1 − p1 1 − p1
22
2 The Poisson–Dirichlet Distribution
Integrating out p2 yields g1 (t, p1 ) =
p1 0
g2 (t, p1 , p2 )d p2
= th(t p1 )
p1
= th(t p1 )
= th(t p1 )
1 2
0
p2 1 d p2 g1 t(1 − p1 ), 1 − p1 1 − p1
p1 1−p1
0
g1 (t(1 − p1 ), z)dz
p1 1−p1 ∧1
0
g1 (t(1 − p1 ), z)dz.
Finally, we need to show that this equation determines g1 uniquely. First for p1 < p1 < 1, we have 1−p > 1. Hence 1 1
g1 (t, p1 ) = th(t p1 )
0 τ1
g1 (t(1 − p1 ), z)dz
= th(t p1 ) f (t(1 − p1 )), where f τ1 is the density function of τ1 . For thus
g1 (t, p1 ) = th(t p1 )
p1 1−p1
0
1 3
< p1 ≤ 12 , we have
1 2
<
p1 1−p1
≤ 1 and
g1 (t(1 − p1 )), z)dz
= th(t p1 )[ f τ1 (t(1 − p1 )) −
1 p1 1−p1
g1 (t(1 − p1 ), z)dz].
1 1 , k ](k = 1, 2, . . .) and thereInductively, g1 (t, p1 ) is uniquely determined over ( k+1 fore for all p1 > 0. 2
2.3 Marginal Distribution This section is concerned with the derivation of the marginal distributions of P(θ ) for each m ≥ 1. We begin with the distribution function of P1 (θ ). Theorem 2.5. For any 0 < p ≤ 1, ∞
(−θ )n n! n=1
P{P1 (θ ) ≤ p} = 1 + ∑
Bnp
(1 − ∑ni=1 ui )θ −1 du1 du2 · · · dun . u 1 u 2 · · · un
(2.10)
The function ρ (x) = P{P1 (θ ) < 1/x} for x > 0 is called the Dickman function. Proof. Let {τs : s ≥ 0} in Theorem 2.4 be the gamma process {γs : s ≥ 0} with L´evy measure Λ (dx) = θ x−1 e−x dx, x ≥ 0.
2.3 Marginal Distribution
23
It then follows from Theorem 2.2 that P1 (θ ) has the same law as P1τ1 . By direct calculation,
In (p, ds) = ds
Bnp
(u1 · · · un )−1
1 (suˆn )θ −1 e−s du1 · · · dun . Γ (θ )
Integrating with respect to s over [0, ∞), it follows that ∞
In (p, ds) =
0
Bnp
(1 − ∑ni=1 ui ) du1 · · · dun , u 1 · · · un
which, combined with Theorem 2.3, implies the result. 2 The joint distribution of (P1 (θ ), . . . , Pm (θ )) for each m ≥ 2 is given by the following theorem. Theorem 2.6. Let gθ1 (p) and gθm (p1 , . . . , pm ) denote the density functions of P1 (θ ) and (P1 (θ ), . . . , Pm (θ )), respectively. Then pgθ1 (p) = θ (1 − p)θ −1
(p/(1−p))∧1
gθ1 (x)dx θ m−1 ( pˆm−1 pm θ θ g , gm (p1 , . . . , pm ) = p1 p2 · · · pm−1 1 pˆm−1 pm θ m ( pˆm )θ −1 P P1 (θ ) ≤ ∧1 , = p1 · · · p m pˆm 0 )θ −2
(2.11) (2.12)
where 0 < pm < · · · < p1 , ∑m k=1 pk < 1. Proof. For each m ≥ 2, it follows from Theorem 2.2 that (P1 (θ ), . . . , Pm (θ )) and (P1τ1 , P2τ1 , . . . , Pmτ1 ) have the same distribution, (P1τ1 , P2τ1 , . . . , Pmτ1 ) is independent of τ1 , and τ1 is a Gamma(θ , 1) random variable. Hence g1 (t, p) = Γ (1θ ) t θ −1 e−t gθ1 (p). Integrating with respect to t on both sides of (2.8), it follows that gθ1 (p) = =
θ p
p 1−p ∧1
gθ1 (z)dz
+∞
θ 1 1 p θ p (1 + 1−p ) 1− p
tp
e− 1−p
0
p 1−p ∧1
0
1 θ −1 −t dt t e Γ (θ ) 1− p
gθ1 (z)dz,
which leads to (2.11). For m ≥ 2, gm (t, p1 , . . . , pm ) = Thus
1 m−1 −t θ e gm (p1 , . . . , pm ). t Γ (θ )
24
2 The Poisson–Dirichlet Distribution
gθm (p1 , . . . , pm ) =
+∞ m−1 t h(t p1 ) · · · h(t pm−1 ) 0
= gθ1 =
gθ1
pm pˆm−1 pm pˆm−1
pˆm−1
+∞ 0
pm gθ1 t pˆm−1 , dt pˆm−1
(θ t)m−1 (t p1 · · ·t pm )−1 e−t(p1 +···+pm−1 ) dt Γ (θ ) pˆm−1 et pˆm−1 (t pˆm−1 )1−θ
θ m−1 2 pˆm−1 p1 · · · pm−1
+∞ p1 +···+pm−1 − s pˆ
e
θ m−1 1 2 p1 · · · pm−1 pˆm−1 (1 + θ −2 θ m−1 pˆm−1 pm θ . = g p1 · · · pm−1 1 pˆm−1
= gθ1
pm pˆm−1
m−1
0
1 θ −1 −s s e ds Γ (θ )
1 p1 +···+pm−1 θ ) pˆm−1
2
2.4 Size-biased Sampling and the GEM Representation The proof of Theorem 2.1 includes an explicit construction of the Poisson–Dirichlet distribution through the gamma process. The distribution also appears in many different ways. One of these is related to size-biased sampling. Consider a population of individuals of a countable number of different types labeled {1, 2, . . .}. Assume that the proportion of type i individuals in the population is pi . A sample is randomly selected from the population and the type of the selected individual is denoted by σ (1). Next remove all individuals of type σ (1) from the population and then randomly select the second sample. This is repeated to get more samples. This procedure of sampling is called size-biased sampling. Denote the type of the ith selected sample by σ (i). Then (pσ (1) , pσ (2) , . . .) is called a size-biased permutation of (p1 , p2 , . . .). Theorem 2.7. Let P(θ ) have the Poisson–Dirichlet distribution with parameter θ > 0. Then the size-biased permutation (V1 ,V2 , . . .) of P(θ ) is given by V1 = U1 ,V2 = (1 −U1 )U2 ,V3 = (1 −U1 )(1 −U2 )U3 , . . .
(2.13)
where {Un : n ≥ 1} are i.i.d. Beta(1, θ ) random variables. Since n n n θ E 1 − ∑ Vk = E ∏ (1 −Uk ) = , 1+θ k=1 k=1 it follows that ∑∞ k=1 Vk = 1 with probability one. Proof. For each n ≥ 2, let (X1 , . . . , Xn ) be a symmetric Dirichlet(α , . . . , α ) random vector with density function f (x1 , . . . , xn ). Then X1 has a Beta(α , (n − 1)α ) distribution with density function f1 (x). Let (X˜1 , . . . , X˜n ) denote the size-biased permutation of (X1 , . . . , Xn ). By definition, for any x in (0, 1),
2.4 Size-biased Sampling and the GEM Representation
25
P{X˜1 ∈ (x, x + Δ x)} = nx f1 (x)Δ x + ◦(Δ x), and P{X˜1 ≤ x} = E[P{X˜1 ≤ x | X1 , . . . , Xn }] x Γ (nα ) x1 x1α −1 (1 − x1 )(n−1)α −1 dx1 =n Γ (α )Γ ((n − 1)α ) 0 x Γ (nα + 1) = xα (1 − x1 )(n−1)α −1 dx1 , Γ (α + 1)Γ ((n − 1)α ) 0 1 which implies that X˜1 = U˜ 1 has a Beta(α + 1, (n − 1)α ) distribution. By direct calculation, P{(X˜1 , . . . , Xσ (1)−1 , Xσ (1)+1 , . . .) ∈ (x1 , Δ x1 ) × · · · × (xn , Δ xn )} ≈ nx1 f (x1 , . . . , xn )Δ x1 · · · Δ xn , which shows that the distribution of (X˜1 , X1 , . . . , Xσ (1)−1 , Xσ (1)+1 , . . . , Xn ) is the Dirichlet(α + 1, α , . . . , α ). Given X˜1 , the remaining (n − 1) components divided by 1 − X˜1 will thus have the symmetric distribution Dirichlet(α , . . . , α ). Choose U˜ 2 in a way similar to that of U˜ 1 . Then U˜ 2 has a Beta(α + 1, (n − 2)α ) distribution. It is clear from the construction that X˜2 = (1 − U˜ 1 )U˜ 2 . Continuing this process to get all components, we can see that X˜1 = U˜ 1 , X˜k = (1 − U˜ 1 ) · · · (1 − U˜ k−1 )U˜ k , k = 2, . . . , n, where U˜ 1 , . . . , U˜ n are independent and U˜ k has a Beta(α + 1, (n − k)α ) distribution. For fixed r ≥ 1, let n → ∞, α → 0 such that nα → θ > 0. It follows that U˜ k → Uk , k = 1, . . . , r, X˜k → Vk , k = 1, . . . , r. Recall that P(θ ) is the limit of the order statistics of (X1 , . . . , Xn ) under the same limiting procedure. Since (X˜1 , . . . , X˜n ) is simply a rearrangement of (X1 , . . . , Xn ), it follows that the limit of the order statistics of (X˜1 , . . . , X˜n ) is also P(θ ). The theorem now follows from Theorem 1 in [35] and the fact that ∑∞ k=1 Vk = 1. 2 Remark: This theorem provides a very friendly description of the Poisson–Dirichlet distribution through i.i.d. random sequences. The law of (V1 ,V2 , . . .) is called the GEM distribution in Ewens [67], named for Griffiths [94] who noted its genetic
26
2 The Poisson–Dirichlet Distribution
importance first, and Engen [51] and McCloskey [137] for introducing it in the context of ecology.
2.5 The Ewens Sampling Formula As is shown in Chapter 1, the Dirichlet distribution characterizes the equilibrium behavior of a population that evolves under the influence of mutation and random sampling. Due to its connection to the Dirichlet distribution, the Poisson–Dirichlet distribution is expected, and will actually be shown in Chapter 5, to describe the equilibrium distribution of certain neutral infinite alleles diffusion models in population genetics. The parameter θ is then the scaled mutation rate of the population. The celebrated Ewens sampling formula provides a way of estimating θ assuming the population is selectively neutral. For any fixed n ≥ 1, let An =
(a1 , . . . , an ) : ak ≥ 0, k = 1, . . . , n;
n
∑ iai = n
.
(2.14)
i=1
Consider a random sample of size n from a Poisson–Dirichlet population. For any 1 ≤ i ≤ n, let Ai = Ai (n, θ ) denote the number of alleles appearing in the sample exactly i times. Consider the case of n = 3. If one type appears in the sample once and another appears twice, then A1 = 1, A2 = 1, A3 = 0. If all three types appear in the sample, then A1 = 3, A2 = 0, A3 = 0. If there is only one type in the sample, then A1 = 0, A2 = 0, A3 = 1. By definition, An = (A1 , . . . , An ) is an An -valued random variable. Every element of An will be called an allelic partition of the integer n, and An is called the allelic partition of the random sample of size n. The Ewens sampling formula gives the distribution of An . Theorem 2.8. (Ewens sampling formula) n! ESF(θ , a) ≡ P{An = (a1 , . . . , an )} = θ(n)
a j θ 1 ∏ j a j! , j=1 n
(2.15)
where θ(n) = θ (θ + 1) · · · (θ + n − 1). Proof. Let {γs : s ≥ 0} be the gamma process with L´evy measure
Λ (dx) = θ x−1 e−x dx, x > 0, and set Ji = Ji (γ1 ), Pi = γJ1i , i ≥ 1. Let (X1 , X2 , . . . , Xn ) be a random sample of size n from a population with allele frequency following the Poisson–Dirichlet distribution with parameter θ . Then X1 , . . . , Xn are (conditionally) independent, and for any k,
2.5 The Ewens Sampling Formula
we have
27
P{Xi = k | P(θ ) = (p1 , p2 , . . .)} = pk .
Thus the unconditional probability is given by P(Xi = k) = EPk (θ ). For each (a1 , . . . , an ) in An , the probability of the event {Ai = ai , i = 1, . . . , n} can be calculated as follows. Let k11 , . . . , k1a1 ; k21 , . . . , k2a2 ; . . . ; kn1 , . . . , knan be the distinct values of X1 , . . . , Xn . The total number of different assignments of X1 , . . . , Xn is clearly n! n! = . (1!)a1 a1 !(2!)a2 a2 ! · · · (n!)an an ! ∏nj=1 ( j!)a j a j ! For each assignment, the conditional probability given P(θ ) is Pk11 · · · Pk1a Pk221 · · · Pk22a · · · Pknn1 · · · Pknnan . 1
2
It then follows that P(An = (a1 , . . . , an ) | P(θ )) n! = n ∑ ∏ j=1 ( j!)a j a j ! distinct k11 ,...,k1a
;
Pk11 · · · Pk1a Pk221 · · · Pk22a · · · Pknn1 · · · Pknnan 1
2
1 k21 ,...,k2a ;...;kn1 ,...,knan 2
=
n!
1
∑
∏nj=1 ( j!)a j a j ! γ1n
distinct k11 ,...,k1a ; 1 k21 ,...,k2a ;...;kn1 ,...,knan 2
2 Jk11 · · · Jk1a Jk21 · · · Jk22a · · · Jknn1 · · · Jknnan . 1
2
Taking into account the independence of γ1 and P(θ ), it follows that E(γ1n )P(An = (a1 , . . . , an )) Γ (n + θ ) P(An = (a1 , . . . , an )) = Γ (θ ) ⎛ =
n!
⎜ ⎜
E ∏nj=1 ( j!)a j a j ! ⎝
n! = n E ∏ j=1 ( j!)a j a j !
∑
⎞
distinct k11 ,...,k1a ; 1 k21 ,...,k2a ;...;kn1 ,...,knan 2
∑
distinct Yi j ∈J
⎟ Jk11 · · · Jk1a Jk221 · · · Jk22a · · · Jknn1 · · · Jknnan ⎟ ⎠ 1
2
f 1 (Y11 ) · · · f1 (Y1a1 ) · · · fn (Yn1 ) · · · fn (Ynan ) ,
28
2 The Poisson–Dirichlet Distribution
where J is the set of jump sizes of τs over [0, 1] and fi (x) = xi . By Campbell’s theorem in Appendix A,
∑
E
distinct Yi j ∈J
f1 (Y11 ) · · · f1 (Y1a1 ) f2 (Y21 ) · · · f2 (Y2a2 ) . . . fn (Ynan )
a1 a2 an f1 (x)Λ (dx) f2 (x)Λ (dx) ··· fn (x)Λ (dx) = S S S aj n = ∏ θ x j x−1 e−x dx
j=1 n
S
= ∏ (θΓ ( j))a j , j=1
which leads to (2.15). 2 If samples of the same type are put into one group, then there will be at most n different groups. Let Ci (n) denote the size of the ith largest group with Ci (n) = 0 for i > n. Clearly the sequence {Ci (n) : i = 1, 2, . . .} contains the same information about the random sample X1 , . . . , Xn as An did, and the Poisson–Dirichlet distribution would be recovered from the limit of (
C1 (n) C2 (n) , , . . .) n n
as n goes to infinity (cf. [126]). It is more interesting and somewhat unexpected to see that the allelic partition An also has a limit as n approaches infinity, and the limit is a sequence of independent Poisson random variables, as is shown below. Theorem 2.9. (Arratia, Barbour and Tavar´e [6]) Let {ηi : i = 1, . . .} be a sequence of independent Poisson random variables with respective parameters θ /i. Then for every m ≥ 1, (A1 (n, θ ), . . . , Am (n, θ )) → (η1 , . . . , ηm ), as n goes to infinity, where the convergence is in distribution. Proof. For each fixed m ≥ 1, and non-negative a1 , . . . , am satisfying a = a1 + 2a2 + · · · + mam ≤ n, it follows from direct calculation that
P Ai (n, θ ) = ai , i = 1, . . . , m P Ai (n, θ ) = ai , i = 1, . . . , m; A j (n, θ ) = b j , j = m + 1, . . . , n = ∑ bm+1 ,...,bn ≥0; ∑nj=m+1 jb j =n−a
2.5 The Ewens Sampling Formula
29
which equals to P Ai (n, θ ) = ai , i = 1, . . . , m;
n
∑
jA j (n, θ ) = n − a
j=m+1
n ∑ jη j = n − a ∑ jη j = n j=m+1 j=1 P ∑nj=m+1 j η j = n − a . = P ηi = ai , i = 1, . . . , m P ∑nj=1 jη j = n = P ηi = ai , i = 1, . . . , m;
Set
n
Tkn = (k + 1)ηk+1 + · · · + nηn .
Then P{T0n = n} = P
n
∑ jη j = n
j=1
θ(n) exp −θ = n!
n
1 ∑j j=1
.
The generating function of Tmn is
T ∑nj=m+1 j η j mn E x =E x n
=
∏
E x jη j
j=m+1
j 1 x − = ∏ exp θ j j j=m+1 n 1 = exp −θ ∑ · exp θ j=m+1 j n
n
xj ∑ j j=m+1
.
j Set g(x) = exp − θ ∑mj=1 xj . Then
n
1 (n − a)! · P{Tmn = n − a} · exp θ ∑ j=m+1 j (n−a) n xj , = exp θ ∑ x=0 j=m+1 j
where the superscript (n − a) denotes the order of the derivatives. It follows from detailed expansion that
30
2 The Poisson–Dirichlet Distribution
(n−a) xj exp θ ∑ x=0 j=m+1 j m ∞ x j (n−a) xj · exp θ ∑ = exp −θ ∑ x=0 j=1 j j=1 j (n−a) = g(x) · exp{−θ log(1 − x)}
n
−θ
= (1 − x)
x=0
g (1) (x − 1)2 + · · · g(1) + g (1)(x − 1) + 2!
(n−a)
x=0 (n−a)
g (1) (1 − x)−(θ −2) − · · · = g(1)(1 − x)−θ − g (1)(1 − x)−(θ −1) + 2!
x=0
g (1) = g(1)θ(n−a) − g (1)(θ − 1)(n−a) + (θ − 2)(n−a) − · · · . 2!
Hence we obtain P{Tmn = n − a} 1 exp − θ = (n − a)! θ(n−a) exp − θ = (n − a)!
1 g (1) (θ − 1)(n−a) 1 ∑ j · g(1)θ(n−a) 1 − g(1) θ(n−a) + o n2 j=m+1 n 1 mθ (θ − 1) 1 ∑ j · 1 + θ + n − a − 1 + o n2 . j=1 n
Therefore, P{Tmn = n − a} = P{T0n = n} =
θ(n−a) (n−a)!
mθ (θ −1) 1 + θ +n−a−1 + o n12 θ(n) n!
n(n − 1) · · · (n − a + 1) 1 mθ (θ − 1) 1+ +o 2 (θ + n − a)(θ + n − a + 1) · · · (θ + n − 1) θ +n−a−1 n
−→ 1,
(n → ∞),
and P{Ai (n, θ ) = ai , i = 1, . . . , m} P{Tmn = n − a} = P{ηi = ai , i = 1, . . . , m} · P{T0n = n} −→ P{ηi = ai , i = 1, . . . , m} as n → ∞. 2
2.5 The Ewens Sampling Formula
31
On the basis of this result, it is not surprising to have the following representation of the Ewens sampling formula. Theorem 2.10. Let η1 , η2 , . . . be independent Poisson random variables with E[ηi ] =
θ . i
Then for each n ≥ 2, and (a1 , . . . , an ) in An ,
n
∑ jη j = n
P {An = (a1 , . . . , an )} = P ηi = ai , i = 1, . . . , n |
.
j=1
Proof. Let Tn = ∑nj=1 jη j . It follows from the Ewens sampling formula that
∑
∑nj=1 jb j
b j θ(n) θ 1 ∏ j b j ! = n! . =n j=1 n
By direct calculation, P ηi = ai , i = 1, . . . , n |
n
∑ jη j = n
j=1
= P{ηi = ai , i = 1, . . . , n}/P{Tn = n} =
=
(θ /i)ai − ∑nj=1 θ / j ai ! e (θ /i)bi − ∑n θ / j ∑(b1 ,...,bn )∈An ∏ni=1 bi ! e j=1 n a j
∏ni=1
n! θ(n)
∏
j=1
θ j
1 . a j!
2 Let Kn = ∑ni=1 Ai denote the total number of different alleles in a random sample of size n of a Poisson–Dirichlet population with parameter θ . Theorem 2.11. The statistic Kn / log n is an asymptotically consistent, sufficient estimator of θ and as n → ∞, Kn log n −θ → Z log n where convergence is in distribution and Z is a normal random variable with mean zero and variance θ . Proof. For any 1 ≤ m ≤ n, let Cm =
n
(a1 , . . . , an ) ∈ An : ∑ ai = m . i=1
32
2 The Poisson–Dirichlet Distribution
Then P{Kn = m} = P{An ∈ Cm } n! = ∑ θ(n) (a ,...,a )∈C n
1
= |Snm |
m
a j θ 1 ∏ j a j! j=1 n
(2.16)
θm , θ(n)
where Snm = (−1)n−m n!
n
∑
1
∏ ja j a j !
(2.17)
(a1 ,...,an )∈Cm j=1
is the signed Stirling number of the first kind. Later on, we will show that |Snm | is the total number of permutations of n numbers into m cycles, and is equal to the coefficient of θ m in the polynomial θ(n) . What is important now is that |Snm | is independent of θ . Given Kn = m, we have P{An = (a1 , . . . , an ) | Kn = m} =
n n! 1 χ{(a1 ,...,an )∈Cm } ∏ a j m |Sn | j a j! j=1
which implies that Kn is a sufficient statistic for θ . Let Λn (t) = log1 n log E[etKn ]. By Stirling’s formula for gamma functions, one has 1 Γ (et θ + n) Γ (θ ) log n→∞ log n Γ (θ + n) Γ (et θ )
lim Λn (t) = lim
n→∞
1 (et θ + n)(e θ +n) log n→∞ log n (θ + n)(θ +n) t
= lim
= θ (et − 1) ≡ Λ (t). Let
et θ . t i=0 e θ + i n
Mn (t) = E[etKn ], Nn (t) = ∑ Then a direct calculation shows that Mn (t) = Mn (t)Nn (t),
Mn (t) = [Nn2 (t) + Nn (t)]Mn (t), Mn (t)Mn (t) − (Mn (t))2 Mn2 (t) n i ∼ (θ et ) log n. = Nn (t) = θ et ∑ t 2 i=1 (e θ + i)
(log Mn (t)) =
Since Λ (t) = θ et , we thus have proved that
2.6 Scale-invariant Poisson Process
33
Λn (t) → Λ (t), uniformly in the neighborhood of zero. Applying Proposition 1.1 and Theorem 1.2 in [187], we conclude that
n→∞
Kn = θ, log n
lim
and
log n
Kn −θ log n
→ Z. 2
Since Kn is a sufficient statistic for θ , one likelihood function is Ln (θ , m) = P{Kn = m} = |Snm |
θm . θ(n)
Taking the derivative with respect to θ and setting the derivative equal to zero leads to n 1 . m=θ ∑ θ + i−1 i=1 For 1 ≤ m ≤ n − 1, the solution θˆ of this equation is the maximum likelihood estimator of θ . The case m = n corresponds to θ = ∞. For a fixed sample size, n, the number of alleles in the sample is an increasing function of θ . Therefore a sample with a larger number of alleles suggests a higher mutation rate. Assume that we have an unbiased estimator gˆ of the function g(θ ). Then by the Rao–Blackwell theorem in statistics, the conditional expectation of gˆ given Kn is usually a better estimator than g. ˆ
2.6 Scale-invariant Poisson Process In addition to the Poisson process representation in Theorem 2.2, the Poisson– Dirichlet distribution can be constructed through another Poisson process: the scaleinvariant Poisson process. Let η1 , η2 , . . . be the Poisson random variables in Theorem 2.9. For any n ≥ 1, one can check directly that ∞
Nn (·) = ∑ ηi δi/n (·) i=1
is a Poisson random measure on (0, ∞) with mean measure ∞
θ δi/n (·). i=1 i
μn (·) = ∑
34
2 The Poisson–Dirichlet Distribution
Since μn converges weakly to the measure
θ dx, x > 0, x
μ (dx) =
we are led to a Poisson process on (0, ∞) with mean measure μ . Let the points in be labeled almost surely as 0 < · · · < ς2 < ς2 < 1 < ς0 < ς−1 < ς−2 < · · · < ∞.
(2.18)
Theorem 2.12. The Poisson process on (0, ∞) with mean measure μ is scaleinvariant; i.e., for any c > 0, the random set c has the same distribution as . Proof. For any c > 0, it follows from Theorem A.4 that c is a Poisson process with mean measure x θ μ d = dx = μ (dx). c x This proves the result. 2 Let ς (θ ) = ∑ ςi , i>0
and B1 , B2 , . . . be a sequence of i.i.d. random variables with common distribution Beta(θ , 1). Theorem 2.13. The random variable ς (θ ) has the same distribution as ∞
k
∑ ∏ Bi ,
(2.19)
k=1 i=1
with a density function given by ∞ e−γθ xθ −1 (−θ )n (1 − ∑ni=1 yi )θ −1 dy1 · · · dyn , g˜θ (x) = 1+ ∑ Γ (θ ) n! y1 · · · yn Gnx n=1
where
γ = lim
n→∞
is the Euler’s constant, and Gnx
=
n
1 ∑ k − log n k=1
(2.20)
n
yi > x , i = 1, . . . , n; ∑ yi < 1 . −1
i=1
Proof. Let 0 be the homogeneous Poisson process on R with intensity θ > 0. Consider the map f : R −→ (0, ∞), x → e−x .
2.6 Scale-invariant Poisson Process
35
It follows from Theorem A.4 that f (0 ) is a Poisson process with mean measure
θ (log x) dx = μ (dx). Therefore, f (0 ) has the same distribution as . In particular, the set of points {ςi : i > 0} has the same distribution as f (0 ∩ (0, ∞)). Since the points in 0 ∩ (0, ∞) can be represented as {W1 ,W1 +W2 ,W1 +W2 +W3 , . . .} for a sequence of i.i.d. exponential random variables with mean 1/θ , it follows, by choosing Bi = e−Wi , that k ς (θ ) has the same distribution as ∑∞ k=1 ∏i=1 Bi . By its construction, Bi follows the Beta(θ , 1) distribution. Let g(x) = x · χ(0,1) (x). Then, it follows from Campbell’s theorem, that for any λ ≥ 0 E[e−λ ∑i g(ςi ) ] = E[e−λ ς (θ ) ] 1 −λ x θ = exp − (1 − e ) dx . x 0
(2.21)
Equation (2.20) now follows from the identity x 1 − e−y
y
0
dy =
∞ −y e x
y
dy + log x + γ , x > 0,
∞ e−y n λ y dy) is the Laplace transform of the function
and the fact that λ −θ (
Gnx
(1 − ∑ni=1 yi )θ −1 dy1 · · · dyn . y1 · · · yn 2
Theorem 2.14. (Arratia, Barbour and Tavar´e [8]) For any θ > 0, the conditional distribution of (ς1 , ς2 , . . .), given ς (θ ) = 1, is the Poisson–Dirichlet distribution with parameter θ . Proof. It suffices to verify that the conditional marginal distribution of (ς1 , ς2 , . . .) is the same as the corresponding marginal distribution of P(θ ). For any m ≥ 1, let h(u; p1 , . . . , pm ) denote the conditional probability of ς (θ ), given ςi = pi , i = 1, . . . , m. Then the joint density function of (ς1 , . . . , ςm , ς (θ )) is given by 1 pm−1 θ dx θm +···+ h(u; p1 , . . . , pm ). (2.22) exp − x p1 · · · pm p1 pm The conditional probability of (ς1 , . . . , ςm ), given ς (θ ) = 1, is pθm θ m h(1; p1 , . . . , pm )/g˜θ (1). p1 · · · p m
(2.23)
36
2 The Poisson–Dirichlet Distribution
It follows from the scale-invariant property that 1 − ∑m 1 i=1 pi h(1; p1 , . . . , pm ) = . g˜θ pm pm
(2.24)
Substituting (2.24) into (2.23), the theorem now follows from (2.12) and the fact that g˜ ( pˆm ) pm θ −1 θ pm P P1 (θ ) ≤ . ∧ 1 = pm pˆm g˜θ (1) 2
2.7 Urn-based Models In a 1984 paper [103], Hoppe discovered an urn model (called Hoppe’s urn) that gives rise to the Ewens sampling formula. This model turns out to be quite useful in various constructions and calculations related to the Poisson–Dirichlet distribution and the Ewens sampling formula. In this section we will discuss several closely related urn-type models.
2.7.1 Hoppe’s Urn Consider an urn that initially contains a black ball of mass θ . Balls are drawn from the urn successively with probabilities proportional to their masses. When the black ball is drawn, it is returned to the urn together with a ball of a new color not previously added with mass one; if a non-black ball is drawn it is returned to the urn with one additional ball of mass one and the same color as that of the ball drawn. Colors are labeled 1, 2, 3, . . . in the order of appearance. Let Xn be the label of the additional ball returned after the nth drawing. Theorem 2.15. For any i = 1, . . . , n, let Ai be the number of labels that appear i times in the sequence {X1 , . . . , Xn }. Then the distribution of An = (A1 , . . . , An ) is given by the Ewens sampling formula; i.e., for each (a1 , . . . , an ) in An n! n θ a j 1 . P{An = (a1 , . . . , an )} = θ(n) ∏ j a j! j=1 Proof. For each (a1 , . . . , an ) in An , let k = ∑ni=1 ai be the total number of times that the black ball is selected, which is the same as the total number of different labels. The set {n1 , . . . , nk } gives the number of balls of each label i = 1, . . . , k. Without loss of generality, we assume that these numbers are arranged in decreasing order so that n1 ≥ n2 · · · ≥ nk .
2.7 Urn-based Models
37
Define l = number of distinct integers in {n1 , . . . , nk }, and b1 = length of the run of n1 , bi = length of the ith run, i = 2, . . . , l. Then one can rewrite the Ewens sampling formula as P{An = (a1 , . . . , an )} =
θk n! . k θ(n) ∏ j=1 n j ∏li=1 bi !
(2.25)
Consider a sample path X1 = x1 , . . . , Xn = xn that is compatible with (a1 , . . . , an ). It is clear that θ k ∏ki=1 (ni − 1)! P{X1 = x1 , . . . , Xn = xn } = , (2.26) θ(n) where θ k counts for the selection of black balls k times, and each new color i will be selected ni − 1 additional times with a product of successive masses (ni − 1)!. The denominator is the product of successive masses of the balls in the urn of the first n selections. The total number of paths that are compatible with (a1 , . . . , an ) can be counted as the number of permutations of n objects which are divided into k groups with the decreasing order group sizes {n1 , . . . , nk }. The total number of ways of dividing n! n objects into k groups of sizes {n1 , . . . , nk } is n1 !···n . As the successive runs are k! characterized by (b1 , . . . , bl ), the total number of distinct permutations is thus n! . n1 ! · · · nk ! ∏lj=1 b j !
(2.27)
The multiplication of (2.26) and (2.27) leads to (2.25) and the theorem. 2 For each i = 1, . . . , n, set 1, if the ith draw is a black ball, ηi = 0, else. Clearly η1 , .., ηn are independent, and P{ηi = 1} =
θ . θ +i−1
As a direct application of Theorem 2.15 we obtain the following representation: Kn = η1 + · · · + ηn , which provides an alternative proof of Theorem 2.11.
(2.28)
38
2 The Poisson–Dirichlet Distribution
2.7.2 Linear Birth Process with Immigration Consider a population consisting of immigrants and their descendants. Immigrants enter the population according to a continuous time, pure-birth Markov chain with rate θ , and each immigrant initiates a family, the size of which follows a linear birth process with rate 1. Different families evolve independently. In comparison with Hoppe’s urn, the role of the black balls is replaced by immigration. This process provides an elementary way to study the age-ordered samples from infinite alleles models. We consider a population composed of various numbers of different types (for example mutants, alleles, in a biological context) which is evolving continuously in time. There is an input process I(t) describing how new mutants enter the population and a stochastic structure x(t) with x(0) = 1 and the convention x(t) = 0 if t < 0, prescribing the growth pattern of each mutant population. Mutants arrive at the times 0 ≤ T1 < T2 < · · · and initiate lines according to independent versions of x(t). Thus let {xi (t)} be independent copies of x(t) with xi (t) being initiated by the ith mutant. Then xi (t − Ti ) will be the size at time t of the ith mutant line. The process N(t) represents the total population size at time t: I(t)
N(t) = ∑ xi (t − Ti ). i=1
Assume that: (i) I(t) is a pure-birth process with rate θ and initial value zero. (ii) The process x(t) is a pure-birth process starting at x(0) = 1 and with infinitesimal birth rate qn,n+1 = n for n ≥ 1. Then the process N(t) is a linear birth process with immigration. It is a pure-birth process starting at N(0) = 0 with infinitesimal rates 1 ρn = lim P{N(t + h) − N(t) = 1 | N(t) = n}. h→0 h Obviously ρ0 = θ and for small h > 0 P{N(t + h) − N(t) = 1 | N(t) = n} I(t)
=E
θ + ∑ xi (t − Ti ) h + o(h) | N(t) = n
i=1
= (n + θ )h + o(h). Thus for n ≥ 1, ρn = n + θ . Let
2.7 Urn-based Models
39
ai (t) = { j : T j ≤ t and x j (t − T j ) = i} so that (a1 (t), . . . , aN(t) (t)) is the corresponding random allelic partition of N(t). Define
τn = inf{t ≥ 0 : N(t) = n} for n ≥ 1. Then
An = (A1 , . . . , An ) = (a1 (τn ), . . . , an (τn ))
is a random partition of n. Based on our construction we have P{An+1 = (a1 + 1, . . . , an , 0) | An = (a1 , . . . , an )} θ = ; n+θ if ai ≥ 1, 1 ≤ i < n, P{An+1 = (. . . , ai − 1, ai+1 + 1, . . . , 0) | An = (a1 , . . . , an )} iai = ; n+θ P{An+1 = (a1 , . . . , an−1 , 0, 1) | An = (a1 , . . . , an )} n , if an = 1; = n+θ
(2.29)
(2.30)
(2.31)
where the change of state in (2.29) corresponds to the introduction of a new mutant, while in (2.30) and (2.31) one of the existing mutant populations is increased by 1. As with Hoppe’s urn, we have established the following theorem. Theorem 2.16. The distribution of the random partition An = (a1 (τn ), . . . , an (τn )) is given by the Ewens sampling formula. Let y(t) be a pure-birth process on {0, 1, 2, . . .} with birth rate λn . Then the forward equation for the transition probability Pi j (t) has the form Pi j (t) = λ j−1 Pi, j−1 (t) − λ j Pi j (t), t ≥ 0.
(2.32)
Since the process is pure-birth,
Thus
Pi j (t) = 0, j < i, t ≥ 0.
(2.33)
Pii (t) = −λi Pii (t), t ≥ 0.
(2.34)
Since Pii (0) = 1, we conclude from (2.32) and (2.34), Pi j (t) = λ j−1
t 0
e−λ j (t−s) Pi, j−1 (s)ds, j > i, t ≥ 0,
(2.35)
40
2 The Poisson–Dirichlet Distribution
and
Pii (t) = e−λi t , t ≥ 0.
(2.36)
For the process N(t), we have N(0) = 0 and λi = i + θ . Therefore for each i, Pii (t) = e−(i+θ )t , t ≥ 0,
(2.37)
and Pi,i+1 (t) = (i + θ )
t
= (i + θ )e
e−(i+1+θ )(t−s) e−(i+θ )s ds
0 −θ t −it
e
(2.38)
(1 − e−t ), j > i, t ≥ 0.
Choosing i = 0 in (2.38) and substituting the latter into (2.35), yields t
P0,2 (t) = (1 + θ ) e−(2+θ )(t−s) θ e−θ s (1 − e−s )ds 0 θ + 1 −θ t e (1 − e−t )2 , t ≥ 0. = 2 By induction we conclude that for each n ≥ 1, θ + n − 1 −θ t P{N(t) = n} = e (1 − e−t )n . n
(2.39)
(2.40)
In general, for n ≥ m ≥ 1, t ≥ s ≥ 0 P{N(t) = n | N(s) = m} =
θ + n − 1 −(m+θ )(t−s) e (1 − e−(t−s) )n−m . n−m
(2.41)
Similarly for the linear birth process x(t) and n ≥ m ≥ 1, t ≥ 0 , we have n − 1 −m(t−s) e P{x(t) = n | x(s) = m} = (1 − e−(t−s) )n−m . (2.42) n−m In particular, given x(0) = 1, the distribution of x(t) is geometric with parameters e−t ; i.e., P{x(t) = n | x(0) = 1} = e−t (1 − e−t )n−1 . (2.43) This can be explained intuitively as follows: Note that e−t is the probability that a rate-one Poisson process makes no jumps over [0,t]. Consider this as the probability of “success”. For x(t) to have a value n, n − 1 jumps or “failures” need to occur over [0,t]. The one lonely “success” will be the starting value x(0). Theorem 2.17. Both the processes e−t x(t) and e−t N(t) are non-negative submartingales with respect to their respective natural filtrations. Therefore there are random variables X and Y such that
2.7 Urn-based Models
41
e−t x(t) → X, e−t N(t) → Y, a.s., t → ∞.
(2.44)
Proof. It follows from (2.41) that the expected value of N(t) given N(s) = m, is E[N(t) | N(s) = m] ∞ θ + n − 1 −(m+θ )(t−s) (1 − e−(t−s) )n−m − θ = ∑ (n + θ ) e m + θ − 1 n=m m+θ ∞ θ +l −1 = −(t−s) ∑ e−(m+1+θ )(t−s) (1 − e−(t−s) )l−(m+1) − θ (m + 1) + θ − 1 e l=m+1 = (m + (1 − e−(t−s) )θ )et−s , which, combined with the Markov property, implies that e−t N(t) is a non-negative submartingale. A similar argument shows that e−t x(t) is actually a martingale. It follows from (2.40) and (2.43) that E[e−t N(t)] = (1 − e−t )θ , E[e−t x(t)] = 1. By an application of the martingale convergence theorem (cf. Durrett [47]) we conclude that (2.44) holds. 2 Theorem 2.18. In Theorem 2.17, the random variable X has an exponential distribution with mean one, and the random variable Y has a gamma distribution with parameters θ , 1. Proof. Let p = e−t . For every real number λ the characteristic function of px(t) is
ψt (λ ) = E[eiλ px(t) ] =
peipλ . 1 − (1 − p)eipλ
Clearly as t goes to infinity,
ψt (λ ) →
1 = ψ (λ ), 1 − iλ
the characteristic function of an exponential random variable with parameter 1. For each λ < 1, let q = 1 − (1 − p)e pλ . It then follows from (2.40) that
42
2 The Poisson–Dirichlet Distribution
∞ λ pN(t) pλ n θ + n − 1 e−θ t (1 − e−t )n E[e ] = ∑ (e ) n n=0 ∞ θ + n − 1 e−θ t θ = ∑ q (1 − q)n n qθ n=0 θ 1 = et − (et − 1)e pλ θ θ 1 1 = → , t → ∞, 1 − λ + o(p) 1−λ
(2.45)
(2.46)
which implies that Y has a Gamma(θ , 1) distribution.
2 Let X1 , X2 , .. be independent copies of X. Recall that xi (t) = 0 for t < 0. Then the age-ordered family sizes x1 (t − T1 ), x2 (t − T2 ), . . . have the following limits. Theorem 2.19. e−t (x1 (t − T1 ), x2 (t − T2 ), . . .) → (e−T1 X1 , e−T2 X2 , . . .) a.s., t → ∞,
(2.47)
where the convergence is in space R∞ +. Proof. It suffices to verify that for every fixed r ≥ 1, e−t (x1 (t − T1 ), . . . , xr (t − Tr )) → (e−T1 X1 , . . . , e−Tr Xr ) a.s., t → ∞. From (2.44) and the fact that χ{Tr ≤t} converges to one almost surely, we conclude that e−t (x1 (t − T1 ), . . . , xr (t − Tr )) = (e−T1 e−(t−T1 ) x1 (t − T1 ), . . . , e(−Tr ) e−(t−Tr ) xr (t − Tr )) → (e−T1 X1 , . . . , e−Tr Xr ) a.s., t → ∞. 2 Theorem 2.20. The limit Y in Theorem 2.17 has the following representation: ∞
Y = ∑ e−Ti Xi a.s. i=1
Proof. For each fixed r ≥ 1, r
r
lim ∑ e−t xi (t − Ti ) ∑ e−Ti Xi = t→∞
i=1
i=1
≤ lim e−t N(t) = Y. t→∞
Thus with probability one
(2.48)
2.7 Urn-based Models
43 ∞
Y ≥ ∑ e−Ti Xi . i=1
This, combined with the fact that ∞
E[Y ] = θ = ∑
i=1
θ θ +1
i =E
∞
∑ e−Ti Xi
,
i=1
implies the theorem. 2 Let Ui = =
e−Ti Xi ∞ ∑k=i e−Tk Xk e−(Ti −Ti−1 ) Xi , −(Tk −Ti−1 ) X ∑∞ k k=i e
i = 1, 2, . . . ,
with T0 = 0. It is clear that U1 ,U2 , . . . are identically distributed. Rewrite U1 as U1 =
X1 . −(Tk −T1 ) X X1 + ∑∞ k k=2 e
−(Tk −T1 ) X has the same distribution as Y and is independent of X , by Since ∑∞ 1 k k=2 e Theorem 1.1, U1 has a Beta(1, θ ) distribution. Since
U2 =
X2 ∞ −(T ∑k=2 e k −T2 ) Xk
=
X2 ∞ X2 + ∑k=3 e−(Tk −T2 ) Xk
−(Tk −T2 ) is independent of both X1 and ∑∞ Xk , it follows that U1 and U2 are ink=2 e dependent. Using similar arguments, it follows that U1 ,U2 , . . . are independent with common distribution. Set
e−Ti Xi . V˜i = Y Then we have the following theorem. Theorem 2.21. V˜i = (1 −U1 ) · · · (1 −Ui−1 )Ui . Proof. By direct calculation, 1 −U1 =
−Tk X ∑∞ k k=2 e , ∞ ∑k=1 e−Tk Xk
(1 −U1 ) · · · (1 −Ui−1 ) =
−Tk X ∑∞ k k=i e . ∞ −T k e X ∑k=1 k
(2.49)
44
2 The Poisson–Dirichlet Distribution
Thus (1 −U1 ) · · · (1 −Ui−1 )Ui =
e−Ti Xi = V˜i . Y
2 Remark: The linear birth process with immigration gives a construction of the GEM representation.
2.7.3 A Model of Joyce and Tavar´e In both Hoppe’s urn and the linear birth-with-immigration model, family sizes are studied based on their age orders; but the detailed relations among family members are missing. In [120], Joyce and Tavar´e introduced a new representation of the linear birth process with immigration which incorporates the genealogical structure of the population. To describe the model, we start with several concepts of random permutations. For each integer n ≥ 1, let π be a permutation of set {1, 2, . . . , n}, characterized by π (1) · · · π (n). Set i1 = inf{i ≥ 1 : π i (1) = 1}. Then the set {π (1), . . . , π i1 −1 (1), 1} is called a cycle starting at 1. Similarly we can define cycles starting from any integer. Each permutation can be decomposed into several disjoint cycles. Consider the case of n = 5 and the permutation 24513. The cycle started at 1 is (241), the cycle started at 3 is (53). We can write the permutation as (241)(53). The length of a cycle is the cardinality of the cycle. For each i = 1, 2, . . . , n, let Ci (n) denote the number of cycles of length i. Then (C1 (n), . . . ,Cn (n)) is a allelic partition of n. The model of Joyce and Tavar´e can be described as follows: consider an urn that initially contains a black ball of mass θ . Balls are drawn from the urn successively with probabilities proportional to their masses. After each ball is drawn, it is returned to the urn with an additional ball of mass 1. The balls added are numbered by the drawing numbers. After the (n − 1)th drawing, there are n − 1 additional balls numbered 1, 2, . . . , n − 1 inside the urn. The ball to be added after the nth drawing will be numbered n. If the nth draw is a black ball, the additional ball will start a new cycle; if ball j, where 1 ≤ j ≤ n − 1, is selected, n is put immediately to the left of j in the cycle where j belongs. We illustrate this through the following example. n = 1, (1), n = 2, (1)(2), n = 3, (31)(2), n = 4, (31)(42), n = 5, (351)(42), n = 6, (3651)(42).
2.7 Urn-based Models
45
In this example, the first and the second draws are both black balls. Thus after the second draw there are two cycles. Ball 3 is added after ball 1 is selected, and ball 4 is added after ball 2 is selected. At the 5th draw, ball 1 is selected and at the 6th draw, ball 5 is selected. There are several basic facts about these cyclical decompositions: (a) The first cycle is started at 1; removing the numbers from the first cycle, the second cycle is started at the smallest of the remaining numbers and so on. (b) Inside a cycle, a pair of numbers with the larger one on the left indicates a successive relation with the one on the right being the parent. (c) A cycle corresponds to a family; the order of the cycles corresponds to the age order; the details within a cycle describe the relations between family members. (d) The total number of cycles corresponds to total number of families. In step 6 of the example, we see that there are two families, with family (3651) appearing first. In family (3651), 3 and 5 are the children of 1 and 6 is the child of 5. The advantage of this model is that the probability of any draw that results in k cycles, is θ k 1n−k . (2.50) θ (θ + 1) · · · (θ + n − 1) Let Tnk be the total number of permutations of n objects into k cycles. Then n
θk
∑ Tnk θ(n) = 1,
i=1
which implies that Tnk is the coefficient of θ k in the polynomial θ(n) . If K denotes the total number of cycles, then P{K = k} = Tnk
θk . θ(n)
(2.51)
Comparing with (2.51) and (2.16), it follows that K has the same distribution as Kn and Tnk = |Snk |. Furthermore, we have Theorem 2.22. The distribution of the random partition Cn = (C1 (n), . . . ,Cn (n)) is given by the Ewens sampling formula. Proof. For non-negative integers c1 , .., cn satisfying ∑ni=1 ici = n, the number of permutations of 1, 2, . . . , n with ci cycles of length i, i = 1, . . . , n, can be calculated as follows: first divide the n numbers into k = c1 + · · · + cn cycles, the total number
46
2 The Poisson–Dirichlet Distribution
of ways being n! ∏ni=1 (i!)ci
.
(2.52)
Next, for each cycle, the starting point is fixed but the remaining numbers are free to move around which will increase the count by a factor of n
∏((i − 1)!)ci .
(2.53)
i=1
Finally, to discount the repetition of ci cycles, the total count needs to be divided by n
∏ ci !.
(2.54)
i=1
The theorem is now proved by noting that the probability P{C1 (n) = c1 , . . . ,Cn (n) = n} is just (2.50) multiplied by (2.52) and (2.53) and then divided by (2.54). 2
2.8 The Dirichlet Process The Poisson–Dirichlet distribution is the distribution of allelic frequencies in descending order and individual type information is lost. Thus it is called unlabeled. Let S be a compact metric space, and ν0 a probability measure on S. The labeled version of the Poisson–Dirichlet distribution is a random measure with mean measure ν0 . If M1 (S) denotes the space of probability measures on S equipped with the usual weak topology, then the labeled version of the Poisson–Dirichlet distribution is the law of ∞
Ξθ ,ν0 = ∑ Pi (θ )δξi ,
(2.55)
i=1
where P(θ ) = (P1 (θ ), P2 (θ ), . . .) has the Poisson–Dirichlet distribution with parameter θ ; δx is the Dirac measure at x and is independent of P(θ ); ξ1 , ξ2 , . . . are i.i.d. random variables with common distribution ν0 . The random measure Ξθ ,ν0 is called the Dirichlet process, and its law is denoted by Πθ ,ν0 . Occasionally we may refer to Πθ ,ν0 as the Dirichlet process for notational convenience. Later on, we will see that there are also corresponding labeled and unlabeled dynamical models. Let ς1 ≥ ς2 ≥ · · · be the random points of the nonhomogeneous Poisson process on (0, ∞) with intensity measure μ (dx) = θ x−1 e−x dx. Then it follows from the Poisson process representation in Section 2.2 that ( ∑∞ς1 ςi , ∑∞ς2 ςi , . . . , ) has the i=1 i=1 Poisson–Dirichlet distribution with parameter θ .
2.8 The Dirichlet Process
47
Theorem 2.23. (Labeling theorem) Assume that each point of {ς1 , ς2 , . . . , } is labeled independently with labels 1, . . . , m. The probability of any particular point (i) (i) getting label i is denoted by pi . For any i = 1, . . . , m, let ς1 ≥ ς2 ≥ · · · be the (i) (i) points with label i in descending order. Then ς1 ≥ ς2 ≥ · · · are the points of a nonhomogeneous Poisson process on (0, ∞) with intensity measure θ pi x−1 e−x dx. Furthermore, let
σi =
∞
m
∑ ς j , σ = ∑ σi . (i)
i=1
k=1
Then ( σσ1 , . . . , σσm ) has the Dirichlet(θ p1 , . . . , θ pm ) distribution, for each 1 ≤ i ≤ m; (i) (i) ς ς ( σ1 i , σ2 i , . . . , ) is independent of σi and has the Poisson–Dirichlet distribution with parameter θ pi . Proof. For each ς in , let xς be a discrete random variable taking value i with probability distribution ν (i) = P{xς = i | ς } = pi for i = 1, . . . , m. Assuming that {xς : ς ∈ } are independent given , then by Theorem A.5, {(ς , xς ) : ς ∈ } is a Poisson process with state space (0, ∞) × {1, . . . , m} and mean measure μ × ν . The result now follows from Theorem A.3. 2 Our next result explores the connection between the Dirichlet process and the Dirichlet distribution. Theorem 2.24. Assume that ν0 is supported on a finite subset {s1 , . . . , sm } of S with γi = ν0 ({si }), i = 1, . . . , m. Let (X1 , . . . , Xm ) have the Dirichlet(θ γ1 , . . . , θ γm ) distribution. Then Πθ ,ν0 is the law of ∑m i=1 Xi δsi . Proof. Let ξ be distributed according to ν0 and ξ1 , ξ2 , . . . , independent copies of ξ . Note that Πθ ,ν0 is the law of ∞ ςj ∑ σ δξ j . j=1 Reorganizing the summation according to the labels in {s1 , . . . , sm }, one gets (i)
m m ςj ςj σi δ δ = = s ∑ σ ξ j ∑ ∑ σ i ∑ σ δsi , j=1 i=1 j:ξ =s i=1 ∞
j
i
which, combined with Theorem 2.23, implies the result. 2 Theorem 2.25. For any ν1 , ν2 in M1 (S) and any θ1 , θ2 > 0, let Ξθ1 ,ν1 and Ξθ2 ,ν2 be independent and have respective laws Πθ1 ,ν1 and Πθ2 ,ν2 . Let β be a Beta(θ1 , θ2 ) random variable and assume that β , Ξθ1 ,ν1 , and Ξθ2 ,ν2 are independent. Then the law of β Ξθ1 ,ν1 + (1 − β )Ξθ2 ,ν2 is Π θ1 ν1 +θ2 ν2 . θ1 +θ2 ,
θ1 +θ2
Proof. Let z1 ≥ z2 ≥ · · · be the random points of a nonhomogeneous Poisson process on (0, ∞) with intensity measure (θ1 + θ2 )x−1 e−x dx. Now label each such point
48
2 The Poisson–Dirichlet Distribution
independently by 1 or 2 with respective probabilities, (i) z1
θ1 θ1 +θ2
and
θ2 θ1 +θ2 .
For i = 1, 2,
(i) z2
≥ ≥ · · · be the points labeled i, in descending order. It follows from let (i) (i) Theorem 2.23 that z1 ≥ z2 ≥ · · · are the random points of a nonhomogeneous Poisson process on (0, ∞) with intensity measure θi x−1 e−x dx. (i) For i = 1, 2, let {ξ j : j ≥ 1} be i.i.d. with common distribution νi . Then applying Theorem 2.23 again, one has that for
σ˜ =
∞
∑ zk , σ˜ i =
k=1
σ˜ 1 σ˜
∞
∑ zk
(i)
,
k=1
has the same law as β , and Ξθi ,νi has the same law as ∑∞j=1 d
β Ξθ1 ,ν1 + (1 − β )Ξθ2 ,ν2 =
σ˜ 1 σ˜
∞
(1)
zj σ˜ 2 ∑ σ˜ 1 δξ j(1) + σ˜ j=1
∞
(i)
zj σ˜ i
δ
(i)
ξj
. Hence
(2)
zj ∑ σ˜ 2 δξ j(2) , j=1
(2.56)
d
where = means equal in distribution. To finish the proof we need to write down the exact relation between z1 ≥ z2 ≥ · · · (i) (i) and z1 ≥ z2 ≥ · · · and, using this relation, construct an i.i.d. sequence {ξk : k ≥ 1} (i) θ2 ν2 from {ξ j } with common distribution θ1 νθ11 + +θ2 . Let {xk : k ≥ 1} be an i.i.d. sequence of random variables with P{x1 = 1} =
θ1 = 1 − P{x1 = 2}. θ1 + θ2
(1)
(2)
(1)
(2)
Assume that {xk : k ≥ 1}, {ξ j : j ≥ 1},{ξ j : j ≥ 1}, {z j : j ≥ 1}, and {z j : j ≥ 1} are all independent. Then the labeling procedure for the first three goes as follows: (x )
z1 = z1 1 , (x ) z2 2 , if x2 = x1 z2 = (x ) z1 2 , if x2 = x1 , and
⎧ (x ) z3 3 , ⎪ ⎪ ⎪ (x ⎨ ) z1 3 , z3 = (x ) ⎪ z2 3 , ⎪ ⎪ ⎩ (x ) z2 3 ,
if x3 = x2 = x1 if x3 = x2 = x1 if x3 = x2 = x1 if x3 = x1 = x2 .
In general, one has (x )
zk = z j k , if j = #{1 ≤ i ≤ k, xi = xk }.
(2.57)
2.8 The Dirichlet Process
49
Using this relation, we construct the sequence {ξk : k ≥ 1} (xk )
ξk = ξ j
, if j = #{1 ≤ i ≤ k, xi = xk }.
Now we show that {ξk : k ≥ 1} is i.i.d. with common distribution For each fixed k,
P{ξk ≤ u} =
k
∑ P{ξ j k
(x )
(2.58) θ1 ν1 +θ2 ν2 θ1 +θ2 .
≤ u, j = #{1 ≤ i ≤ k, xi = xk }, xk = 1 or 2}
j=1
(1)
(2)
= P{ξ1 ≤ u, x1 = 1} + P{ξ1 ≤ u, x1 = 2} θ1 ν1 + θ2 ν2 = ((−∞, u]). θ1 + θ2 Thus the sequence {ξk : k ≥ 1} has common distribution For any k < l, P{ξk ≤ u, ξl ≤ v} =
k
l
∑ ∑ P{ξ j k
(x )
(xl )
≤ u, ξi
θ1 ν1 +θ2 ν2 θ1 +θ2 .
≤ v,
(2.59)
j=1 i=1
j = #{1 ≤ r ≤ k, xr = xk }, i = #{1 ≤ r ≤ l, xr = xl }} = I1 + I2 + I3 + I4 , where k
I1 =
l
∑ ∑ P{ξ j
(1)
(1)
≤ u, ξi
≤ v}
j=1 i=1
× P{ j = #{1 ≤ r ≤ k, xr = xk = 1}, i = #{1 ≤ r ≤ l, xr = xl = 1}} = ν1 ((−∞, u])ν1 ((−∞, v]) ×
l
k
∑ ∑
P{ j = #{1 ≤ r ≤ k, xr = xk = 1}, i − j = #{k < r ≤ l, xr = xl = 1}},
j=1 i= j+1
= ν1 ((−∞, u])ν1 ((−∞, v])P{xk = 1, xl = 1}, k
I2 =
l
∑ ∑ P{ξ j
(1)
(1)
≤ u, ξi
≤ v}
j=1 i=1
× P{ j = #{1 ≤ r ≤ k, xr = xk = 1}, i = #{1 ≤ r ≤ l, xr = xl = 2}} = ν1 ((−∞, u])ν2 ((−∞, v]) ×
k
l
∑ ∑
P{ j = #{1 ≤ r ≤ k, xr = xk = 1},
j=1 i=k− j+1
i − (k − j) = #{k < r ≤ l, xr = xl = 2}} = ν1 ((−∞, u])ν2 ((−∞, v])P{xk = 1, xl = 2},
50
2 The Poisson–Dirichlet Distribution k
I3 =
l
∑ ∑ P{ξ j
(1)
(1)
≤ u, ξi
≤ v}
j=1 i=1
× P{ j = #{1 ≤ r ≤ k, xr = xk = 2}, i = #{1 ≤ r ≤ l, xr = xl = 1}} = ν2 ((−∞, u])ν1 ((−∞, v]) ×
l
k
∑ ∑
P{ j = #{1 ≤ r ≤ k, xr = xk = 2},
j=1 i=k− j+1
i − (k − j) = #{k < r ≤ l, xr = xl = 1}} = ν2 ((−∞, u])ν1 ((−∞, v])P{xk = 2, xl = 1}, k
I4 =
l
∑ ∑ P{ξ j
(1)
(1)
≤ u, ξi
≤ v}
j=1 i=1
× P{ j = #{1 ≤ r ≤ k, xr = xk = 2}, i = #{1 ≤ r ≤ l, xr = xl = 2}} = ν2 ((−∞, u])ν2 ((−∞, v]) ×
k
l
∑ ∑
P{ j = #{1 ≤ r ≤ k, xr = xk = 2}, i − j = #{k < r ≤ l, xr = xl = 1}}
j=1 i= j+1
= ν2 ((−∞, u])ν2 ((−∞, v])P{xk = 2, xl = 2}. The independence of ξk and ξl now follows from (2.59). Putting (2.57) and (2.58) together, we obtain that
σ˜ 1 σ˜
∞
(1)
zj σ˜ 2 ∑ σ˜ 1 δξ j(1) + σ˜ j=1
∞
(2)
∞ zj zk ∑ σ˜ 2 δξ j(2) = ∑ σ˜ δξk . j=1 k=1
(2.60)
This proves the theorem. 2 Before moving on to a new chapter, we describe an urn-type construction for the Dirichlet process. Example 2.2 (Ethier and Kurtz [63]). For n ≥ 1, let Sn denote the n-fold product of S and E = S ∪ S2 ∪ · · · . Construct a Markov chain {X(m) : m ≥ 1} on E as follows: X(0) = (ξ ), X(1) = (ξ , ξ ), where ξ is an S-valued random variable with distribution ν0 . For n ≥ 2, P{X(m + 1) = (x1 , . . . , x j−1 , ξ j , x j+1 , . . . , xn ) | X(m) = (x1 , . . . , xn )} θ = , (2.61) n(n + θ − 1) and
2.9 Notes
51
P{X(m + 1) = (x1 , . . . , xn , x j ) | X(m) = (x1 , . . . , xn )} =
n−1 , n(n + θ − 1)
(2.62)
where ξ j , j = 1, 2, . . . , are i.i.d. random variables with common distribution ν0 . Define
τn = inf{m ≥ 2 : X(m) ∈ Sn },
(2.63)
and
ζn (x1 , . . . , xn ) =
1 n ∑ δxi . n i=1
(2.64)
Assume that ν0 is diffuse on S; i.e., ν0 ({x}) = 0 for all x in S. Then
ζn (X(τn+1 − 1)) → ζ , where the convergence is almost surely in M1 (S), and the law of the random measure ζ is Πθ ,ν0 . Remark: Let us return to Hoppe’s urn and assume that there are n balls inside the urn. Take a ball from the urn at random. Then the chance for a particular ball to be chosen is 1/n. The color of the selected ball is black with probability θ /(n − 1 + θ ), and a new color with probability (n − 1)/(n − 1 + θ ). The similarities of these probabilities to the transition probabilities in the above Markov chain model and the fact that the Dirichlet process does not depend on the particular ordering of its atoms explain why the result is expected naturally.
2.9 Notes The Poisson–Dirichlet distribution was introduced by Kingman in [125]. It arises as the stationary distribution of the ranked allele frequencies in the infinitely many alleles model in [181]. The proof of Theorem 2.1 comes from [125] and [126]. The connections to the limiting distribution of the ranked relative cycle length in a random permutation were shown in [163], [176] and [177]. The result in the example on prime factorizations of integers was first proved in [15]. A different proof was given in [34] using the GEM distribution. In [83], the Poisson–Dirichlet distribution was used to describe the distribution of random Belyi surfaces. A brief survey on the Poisson–Dirichlet distribution can be found in [102]. The material in Section 2.2 is from [145]. Dickman [31] obtained the asymptotic distribution of the largest prime factor in a large integer, which corresponds to θ = 1 in Theorem 2.5. The current form of Theorem 2.5 first appeared in [95]. The marginal distribution in Theorem 2.6 is given in [181]. The case of θ = 1 was derived in Vershik and Schmidt [176]. The proofs in Section 2.3 follow [145].
52
2 The Poisson–Dirichlet Distribution
The GEM distribution was introduced in [137] and [51] in the context of ecology. Its importance in genetics was recognized by Griffiths[94]. The result in Theorem 2.7 originated in [142]. The proof here is from Donnelly and Joyce [35], which seems to be the first paper to give a published proof of the result. Further development involving a subprobability limit can be found in [88]. The Ewens sampling formula was discovered by Ewens in [66]. A derivation of the formula using a genealogical argument was carried out in [122]. It was also derived in [3] by sampling from a Dirichlet prior. The proof here follows the treatment in [130]. Theorem 2.9 in the case of θ = 1 first appeared in [90]. The general case and the proof here are from [6]. Further discussions on approximations to the Ewens sampling formula can be found in [9]. Extensive applications in population genetics can be found in [67]. A general logarithmic structure including the Ewens sampling formula was studied in [7] and [9]. Theorem 2.11 first appeared in [90] in the case of θ equal to 1. The general case was obtained in [180]. A generalization of Theorem 2.11 to a functional central limit theorem was obtained in [26] for θ equal to one and in [101] for the general case. A nice alternative proof of this functional central limit theorem was obtained in [37] based on a Poisson embedding using a model in [121]. The associated large deviation principles were obtained in [75] and [70]. More properties of the Stirling numbers of the first kind can be found in [1]. Sampling formulae involving selection can be found in [99], [54], and [107]. The scale-invariant Poisson process representation was established in [8]. The approach taken in Section 2.6 is from [9]. Another remarkable property of the scaleinvariant Poisson process, the scale-invariant spacing lemma, can be found in [5] and [10]. The urn schemes in Section 2.7 have roots in the paper of Blackwell and MacQueen [16]. Hoppe’s urn was first introduced in [103], which also included the proof of Theorem 2.15. The interplay between the genealogical structure of the infinitely-many alleles model, the Poisson–Dirichlet distribution, GEM distribution, size-biased permutation, the Ewens sampling formula and Hoppe’s urn, were further explored in [33] and [104]. Section 2.7.2 is from [171]. The earliest reference for the equality (2.41) seems to be [123]. The model and results in Section 2.7.3 come from [120]. The existence of the Dirichlet process in Section 2.8 was established in [80]. The representation (2.55) was discovered in [162]. Theorem 2.23 is called the coloring theorem in [130]. The proofs of Theorem 2.24 and Theorem 2.25 are from [59] with more details included. The Markov chain urn model in Section 2.8 was first introduced in [58]. The study of the asymptotic behavior of the urn model can be found in [39]. There are extensive studies of the Poisson–Dirichlet distribution and related structures from the viewpoint of Bayesian statistics. Detail information can be found in [108], [109], [111], [112], [113], and the references therein. The references [173] and [174] include studies of the multiplicative property of the gamma random measure.
Chapter 3
The Two-Parameter Poisson–Dirichlet Distribution
The representation (2.13) is a particular case of the general residual allocation model (RAM), where {Un }n≥1 are simply [0, 1]-valued independent random variables. In this chapter, we study another particular class of RAM. The law of the ranked sequence of this RAM is called the two-parameter Poisson–Dirichlet distribution, a natural generalization of the Poisson–Dirichlet distribution. Our focus is on several two-parameter generalizations of the corresponding properties of the Poisson–Dirichlet distribution.
3.1 Definition For 0 ≤ α < 1 and θ > −α , let Uk , k = 1, 2, . . ., be a sequence of independent random variables such that Uk has the Beta(1 − α , θ + kα ) distribution. Set V1α ,θ = U1 , Vnα ,θ = (1 −U1 ) · · · (1 −Un−1 )Un , n ≥ 2. Then with probability one,
(3.1)
∞
∑ Vkα ,θ = 1.
k=1
Definition 3.1. The law of (V1α ,θ ,V2α ,θ , . . .) is called the two-parameter GEM distribution, denoted by GEM(α , θ ). Let P(α , θ ) = (P1 (α , θ ), P2 (α , θ ), . . .) denote (V1α ,θ ,V2α ,θ , . . .) sorted in descending order. Then the law of P(α , θ ) is called the two-parameter Poisson–Dirichlet distribution, and is denoted by PD(α , θ ). Proposition 3.1 (Perman, Pitman, and Yor [146]). The law of the size-biased permutation of P(α , θ ) is the two-parameter GEM distribution. Consider the following general RAM with {U˜ n : n ≥ 1} being a sequence of independent random variables taking values in [0, 1], and S. Feng, The Poisson–Dirichlet Distribution and Related Topics, Probability and its Applications, DOI 10.1007/978-3-642-11194-5 3, © Springer-Verlag Berlin Heidelberg 2010
53
54
3 The Two-Parameter Poisson–Dirichlet Distribution
V˜1 = U˜ 1 , V˜n = (1 − U˜ 1 ) · · · (1 − U˜ n−1 )U˜ n , n ≥ 2. Let Hsbp denote the map of size-biased permutation. The next proposition gives a characterization of the GEM(α , θ ) distribution. Proposition 3.2 (Pitman [151]). The law of (V˜1 , V˜2 , . . .) is the same as the law of Hsbp (V˜1 , V˜2 , . . .) iff U˜ i is a Beta(1 − α , θ + iα ) random variable for i ≥ 1; i.e., (V˜1 , V˜2 , . . .) has the two-parameter GEM(α , θ ) distribution. Proposition 3.1, combined with Proposition 3.2, shows that the two-parameter Poisson–Dirichlet distribution characterizes the ranked random discrete distribution P = (P1 , P2 , . . .) satisfying Pi > 0, i = 1, 2, . . . Hsbp (P) = Hsbp (Hsbp (P)) in law. The Dirichlet process also has a two-parameter generalization. Definition 3.2. Let ξk , k = 1, . . . be a sequence of i.i.d. random variables with common distribution ν0 on [0, 1]. Set
Ξθ ,α ,ν0 =
∞
∑ Pk (α , θ )δξk .
(3.2)
k=1
The random measure Ξθ ,α ,ν0 is called the two-parameter Dirichlet process. For notational convenience, this term is also used for the law of Ξθ ,α ,ν0 , denoted by Dirichlet(θ , α , ν0 ) or Πα ,θ ,ν0 .
3.2 Marginal Distributions The marginal distributions of the two-parameter Poisson–Dirichlet distribution can be derived in a manner similar to those of the Poisson–Dirichlet distribution. For this we would need results on the subordinator representation of the two-parameter Poisson–Dirichlet distribution and a change-of-measure formula. Definition 3.3. For any 0 < α < 1, the suborbinator {ρs : s ≥ 0} is called a stable subordinator with index α if its L´evy measure is
Λα (dx) = cα x−(1+α ) dx, x > 0, for some constant cα > 0. For convenience, we will choose cα = sequel.
α Γ (1−α )
in the
Let Eα ,θ denote the expectation with respect to PD(α , θ ). The following proposition provides a subordinator representation for PD(α , 0), and establishes a changeof-measure formula between PD(α , θ ) and PD(α , 0).
3.2 Marginal Distributions
55
Proposition 3.3 (Perman, Pitman, and Yor [146]). For any t > 0, let J1 (ρt ) ≥ J2 (ρt ) ≥ · · · be the ranked values of the jump sizes of the stable subordinator with index α over the time interval [0,t]. Then the following hold: (1) The law of ( J1ρ(ρt t ) , J2ρ(ρt t ) , . . .) is PD(α , 0). (2) For any non-negative, measurable function f on R∞ +, J1 (ρ1 ) J2 (ρ1 ) Eα ,θ [ f (p1 , p2 , . . .)] = Cα ,θ Eα ,0 ρ1−θ f , ,... , ρ1 ρ1 where Cα ,θ =
(3.3)
Γ (θ + 1) . Γ ( αθ + 1)
As an application of the representation in (1), we get: Theorem 3.4. Let Zn = Λα (Jn (ρ1 ), +∞) and ψ (x) = Λα (x, +∞). Then: (1) Z1 < Z2 < · · · are the points of a homogeneous Poisson process on (0, ∞) with intensity 1, and Zn = Y1 + · · · + Yn , where {Yn } are i.i.d. exponential random variables with parameter 1; (2)
Jn (ρ1 ) ρ1
− α1
=
Zn
− α1
∑∞ i=1 Zi
;
(3) limn→+∞ n( Jnρ(ρ1 1 ) )α =
ρ1−α Γ (1−α )
a.s.
Proof. (1) By Theorem A.4, {ψ (Jn (ρ1 )) : n ≥ 1} are the points of a Poisson process with mean measure μ (·) = Λα (ψ −1 (·)). By direct calculation,
μ (dz) =
dΛα (ψ −1 (z)) d ψ −1 (z) dz = dz. d ψ −1 (z) dz
It follows by definition that for any u > 0, P(Z1 > u) = P(N((0, u]) = 0) = e−μ ((0,u)) = e−u . Thus Y1 = Z1 ,Y2 = Z2 − Z1 ,Y3 = Z3 − Z2 , . . . are i.i.d. exponential random variables with parameter 1. x− α (2) Since Λα (x, +∞) = Γ (1− α ) , it follows that Zn = Thus
Jn (ρ1 )−α . Γ (1 − α ) −1
Zn α Jn (ρ1 ) Jn (ρ1 ) = = +∞ 1 . ρ1 ∑i=1 Ji (ρ1 ) ∑+∞ Z − α i=1 i
56
3 The Two-Parameter Poisson–Dirichlet Distribution α
1 1 (3) By definition, Jn (ρρα1 ) = Zn Γ (1− α )ρ1α or, equivalently, Zn = Γ (1−α )ρ1α Pnα . By (1) 1 and the law of large numbers for the i.i.d. sequence {Yn }, we obtain that
Zn = 1, n→+∞ n lim
and lim n
n→∞
Jn (ρ1 ) ρ1
α
(3.4)
→ Γ (1 − α )−1 ρ1−α a.s.
(3.5)
2 The change-of-measure formula (3.3), yields the following characterization of the two-parameter Poisson–Dirichlet distribution. Theorem 3.5. Fix 0 < α < 1, θ > 0. Let g be a non-negative, measurable function defined on (0, ∞) such that ∞ du
uα +1
0
and
∞ du
uα +1
0
|1 − g(u)| < ∞,
(3.6)
(1 − g(u)) > 0.
(3.7)
Then, for any λ > 0, ∞
du e 0
−λ u
uθ −1 Eα ,θ Γ (θ )
∞
∏ g(uPn (α , θ ))
n=1
where Hg (α , λ ) =
∞ du 0
uα +1
=
Γ (1 − α ) α Hg (α , λ )
θ / α ,
(3.8)
(1 − e−λ u g(u)).
Proof. By (3.3) and a change of variable from u/ρ1 to v, we get ∞ ∞ θ −1 −λ u u Eα ,θ ∏ g(uPn (α , θ )) du e Γ (θ ) 0 n=1 ∞ ∞ Jn (ρ1 ) Cα ,θ ( ρ ) J − λ u n 1 ρ1 du uθ −1 Eα ,0 ρ1−θ ∏ g u e = Γ (θ ) 0 ρ1 n=1
∞ Cα ,θ ∞ θ −1 −λ vJn (ρ1 ) dv v E ∏ g(vJn (ρ1 ))e = . Γ (θ ) 0 n=1
(3.9)
By condition (3.6), we can apply Campbell’s theorem so that the last expression in (3.9) becomes
3.2 Marginal Distributions
57
$$\frac{C_{\alpha,\theta}}{\Gamma(\theta)} \int_0^{\infty} dv\, v^{\theta-1} \exp\Big\{-c_\alpha \int_0^{\infty} \big(1 - g(vx)e^{-\lambda v x}\big)\,\frac{dx}{x^{\alpha+1}}\Big\} = \frac{C_{\alpha,\theta}}{\Gamma(\theta)} \int_0^{\infty} dv\, v^{\theta-1} e^{-c_\alpha H_g(\alpha,\lambda) v^{\alpha}} = \Big(\frac{\Gamma(1-\alpha)}{\alpha H_g(\alpha,\lambda)}\Big)^{\theta/\alpha}. \qquad (3.10)$$
Putting together (3.9) and (3.10), we get (3.8). □
Now we are ready to derive the marginal distributions for PD$(\alpha,\theta)$. For any $\beta > -\alpha$, let $(P_1(\alpha,\beta), P_2(\alpha,\beta),\ldots)$ be distributed as PD$(\alpha,\beta)$ and let $g_{\alpha,\beta}(p)$ denote the distribution function of $P_1(\alpha,\beta)$.
Theorem 3.6. Assume that $\alpha$ is strictly positive. Then for each $n \geq 1$, the joint density function $\varphi_n^{\alpha,\theta}(p_1,\ldots,p_n)$ of $(P_1(\alpha,\theta),\ldots,P_n(\alpha,\theta))$ is given by
$$\varphi_n^{\alpha,\theta}(p_1,\ldots,p_n) = \frac{C_{\alpha,\theta}\, c_\alpha^n\, \hat{p}_n^{\theta+n\alpha-1}}{C_{\alpha,\theta+n\alpha}\,(p_1\cdots p_n)^{(1+\alpha)}}\, g_{\alpha,\theta+n\alpha}\Big(\frac{p_n}{\hat{p}_n}\Big). \qquad (3.11)$$
Proof. Part (1) of Proposition 3.3, combined with Perman's formula for $h(x) = c_\alpha x^{-(\alpha+1)}$, implies that the joint density function of $(\rho_1, P_1, \ldots, P_n)$ is
$$\phi_n(t,p_1,\ldots,p_n) = c_\alpha^{n-1} (t p_1 \cdots t p_{n-1})^{-(\alpha+1)}\, t^{n-1}\, \frac{1}{\hat{p}_{n-1}}\, \phi_1\Big(t\hat{p}_{n-1}, \frac{p_n}{\hat{p}_{n-1}}\Big) = \frac{c_\alpha^{n-1} (p_1\cdots p_{n-1})^{-(\alpha+1)}\, t^{-(n-1)\alpha}}{\hat{p}_{n-1}}\, \phi_1\Big(t\hat{p}_{n-1}, \frac{p_n}{\hat{p}_{n-1}}\Big).$$
Since
$$\phi_1(t,p) = t\,h(tp) \int_0^{\frac{p}{1-p}\wedge 1} \phi_1(t(1-p),u)\,du,$$
it follows that
$$\phi_1\Big(t\hat{p}_{n-1}, \frac{p_n}{\hat{p}_{n-1}}\Big) = c_\alpha\, t\hat{p}_{n-1}\,(t p_n)^{-(\alpha+1)} \int_0^{\frac{p_n}{\hat{p}_n}\wedge 1} \phi_1(t\hat{p}_n,u)\,du,$$
and
$$\phi_n(t,p_1,\ldots,p_n) = c_\alpha^n\,(p_1\cdots p_n)^{-(\alpha+1)}\,t^{-n\alpha} \int_0^{\frac{p_n}{\hat{p}_n}\wedge 1} \phi_1(t\hat{p}_n,u)\,du.$$
Taking into account the change-of-measure formula in Proposition 3.3, we get that the density function of $(P_1(\alpha,\theta),\ldots,P_n(\alpha,\theta))$ is
$$\varphi_n^{\alpha,\theta}(p_1,\ldots,p_n) = C_{\alpha,\theta}\, c_\alpha^n\, (p_1\cdots p_n)^{-(\alpha+1)} \int_0^{\frac{p_n}{\hat{p}_n}\wedge 1} du \int_0^{+\infty} t^{-(n\alpha+\theta)}\, \phi_1(t\hat{p}_n,u)\,dt$$
$$= C_{\alpha,\theta}\, c_\alpha^n\, (p_1\cdots p_n)^{-(\alpha+1)}\, \hat{p}_n^{n\alpha+\theta-1} \int_0^{\frac{p_n}{\hat{p}_n}\wedge 1} du \int_0^{+\infty} s^{-(n\alpha+\theta)}\, \phi_1(s,u)\,ds$$
$$= \frac{C_{\alpha,\theta}}{C_{\alpha,\theta+n\alpha}}\, c_\alpha^n\, (p_1\cdots p_n)^{-(\alpha+1)}\, \hat{p}_n^{n\alpha+\theta-1} \int_0^{\frac{p_n}{\hat{p}_n}\wedge 1} \varphi_1^{\alpha,\theta+n\alpha}(u)\,du$$
$$= \frac{C_{\alpha,\theta}}{C_{\alpha,\theta+n\alpha}}\, \frac{c_\alpha^n\, \hat{p}_n^{n\alpha+\theta-1}}{(p_1\cdots p_n)^{\alpha+1}}\, g_{\alpha,\theta+n\alpha}\Big(\frac{p_n}{\hat{p}_n}\Big),$$
where, in the third equality, we used the fact that
$$\varphi_1^{\alpha,\theta+n\alpha}(v) = C_{\alpha,\theta+n\alpha} \int_0^{+\infty} s^{-(\theta+n\alpha)}\, \phi_1(s,v)\,ds,$$
which itself follows from the change-of-measure formula with $\theta + n\alpha$ in place of $\theta$. □
Remark: Clearly the independent sequence of Beta$(1-\alpha, \theta+n\alpha)$ random variables $U_n(\alpha,\theta)$ converges in law to the i.i.d. sequence of Beta$(1,\theta)$ random variables $U_n(0,\theta)$ when $\alpha$ converges to zero. The continuity of the GEM representation and of the ordering map imply that for each $n \geq 1$, $g_{\alpha,\theta+n\alpha}(p)$ converges to the distribution function of $P_1(\theta)$ in Theorem 2.5. By direct calculation, we have
$$\lim_{\alpha\to 0} \frac{C_{\alpha,\theta}}{C_{\alpha,\theta+n\alpha}}\, c_\alpha^n = 1.$$
Thus by taking the limit of α going to zero, we get Theorem 2.6 from Theorem 3.6.
3.3 The Pitman Sampling Formula
Due to the similarity between the one-parameter GEM distribution and the two-parameter GEM distribution, it is natural to expect a sampling formula in the two-parameter setting that is similar to the Ewens sampling formula. This expectation is realized and the resulting formula is called the two-parameter Ewens sampling formula or the Pitman sampling formula. In order to derive the formula, we start with a subordinator representation for the two-parameter Poisson–Dirichlet distribution.
Proposition 3.7 (Pitman and Yor [156]). Assume that $0 < \alpha < 1$, $\theta > 0$.
(1) Let $\{\sigma_t : t \geq 0\}$ and $\{\gamma_t : t \geq 0\}$ be two independent subordinators with respective Lévy measures $\alpha C x^{-(1+\alpha)} e^{-x}\,dx$ and $\theta x^{-1} e^{-x}\,dx$ for some $C > 0$. Set
$$\sigma_{\alpha,\theta} = \sigma\big((C\Gamma(1-\alpha))^{-1}\gamma_{1/\alpha}\big),$$
where, for notational convenience, we write $\sigma(t)$ for $\sigma_t$. Let $J_1(\alpha,\theta) \geq J_2(\alpha,\theta) \geq \cdots$ denote the ranked jump sizes of $\{\sigma_t : t \geq 0\}$ over the random interval $[0, (C\Gamma(1-\alpha))^{-1}\gamma_{1/\alpha}]$. Then $\sigma_{\alpha,\theta}$ and
$$\Big(\frac{J_1(\alpha,\theta)}{\sigma_{\alpha,\theta}}, \frac{J_2(\alpha,\theta)}{\sigma_{\alpha,\theta}}, \ldots\Big)$$
are independent, and have respective distributions Gamma$(\theta,1)$ and PD$(\alpha,\theta)$.
(2) Suppose $(P_1(0,\theta),\ldots)$ and $(P_1(\alpha,0),\ldots)$ have respective distributions PD$(0,\theta)$ and PD$(\alpha,0)$. Independent of $(P_1(0,\theta),\ldots)$, let $(P_1^m(\alpha,0), P_2^m(\alpha,0),\ldots)$, $m = 1,2,\ldots$, be a sequence of i.i.d. copies of $(P_1(\alpha,0),\ldots)$. Then $\{P_m(0,\theta)\, P_n^m(\alpha,0) : n,m = 1,2,\ldots\}$, ranked in descending order, follows the PD$(\alpha,\theta)$ distribution.
Theorem 3.8. For each $n \geq 1$, let $\mathcal{A}_n$ denote the space of allelic partitions defined in (2.14). For any $0 < \alpha < 1$, $\theta > -\alpha$, and $a = (a_1,\ldots,a_n)$ in $\mathcal{A}_n$, the law of the random allelic partition $A_n$ of a random sample of size $n$ from a PD$(\alpha,\theta)$ population is given by the Pitman sampling formula
$$\mathrm{PSF}(\alpha,\theta,a) = P\{A_n = (a_1,a_2,\ldots,a_n)\} = \frac{n!}{\prod_{j=1}^n (j!)^{a_j}(a_j!)}\, \frac{\prod_{l=0}^{k-1}(\theta+l\alpha)}{\theta_{(n)}}\, \prod_{j=1}^n \big((1-\alpha)_{(j-1)}\big)^{a_j}, \qquad (3.12)$$
where $k = \sum_{j=1}^n a_j$.
Proof. We divide the proof into two cases: $\theta > 0$ and $-\alpha < \theta \leq 0$.
First consider the case $\theta > 0$. Due to the subordinator representation in Proposition 3.7, the result is obtained by applying Theorem A.6. This is similar to the proof of the Ewens sampling formula in Theorem 2.8. Let $X_1,\ldots,X_n$ be a random sample of size $n \geq 1$ from the PD$(\alpha,\theta)$ population. All distinct values of the sample are denoted by $l_{ij}$, $i = 1,\ldots,n$; $j = 1,\ldots,a_i$. Set
$$C(a_1,\ldots,a_n) = \frac{n!}{\prod_{j=1}^n (j!)^{a_j}(a_j!)}.$$
Then
$$\mathrm{PSF}(\alpha,\theta,a) = C(a_1,\ldots,a_n)\, E_{\alpha,\theta}\Bigg[\sum_{\substack{\text{distinct } l_{11},\ldots,l_{1a_1};\\ l_{21},\ldots,l_{2a_2};\ \ldots;\ l_{n1},\ldots,l_{na_n}}} \prod_{i=1}^n \prod_{j=1}^{a_i} \Big(\frac{J_{l_{ij}}(\alpha,\theta)}{\sigma_{\alpha,\theta}}\Big)^i\Bigg].$$
This, combined with the facts that $\sigma_{\alpha,\theta}$ is a Gamma$(\theta,1)$ random variable, and is independent of $\big(\frac{J_1(\alpha,\theta)}{\sigma_{\alpha,\theta}}, \frac{J_2(\alpha,\theta)}{\sigma_{\alpha,\theta}},\ldots\big)$, implies that
$$\mathrm{PSF}(\alpha,\theta,a) = \frac{C(a_1,\ldots,a_n)\Gamma(\theta)}{\Gamma(\theta+n)}\, E\Bigg[\sum_{\text{distinct}} \prod_{i=1}^n \prod_{j=1}^{a_i} J_{l_{ij}}^i(\alpha,\theta)\Bigg] = \frac{C(a_1,\ldots,a_n)\Gamma(\theta)}{\Gamma(\theta+n)}\, E\Bigg[E\Bigg[\sum_{\text{distinct}} \prod_{i=1}^n \prod_{j=1}^{a_i} J_{l_{ij}}^i(\alpha,\theta)\,\Big|\,\gamma_{1/\alpha}\Bigg]\Bigg]. \qquad (3.13)$$
Applying Theorem A.6 to the conditional expectation in (3.13), it follows that
$$\mathrm{PSF}(\alpha,\theta,a) = \frac{C(a_1,\ldots,a_n)\Gamma(\theta)}{\Gamma(\theta+n)}\, E\Bigg[\prod_{i=1}^n \prod_{j=1}^{a_i} E\Big[\sum_{m=1}^{\infty} J_m^i(\alpha,\theta)\,\Big|\,\gamma_{1/\alpha}\Big]\Bigg] = \frac{C(a_1,\ldots,a_n)\Gamma(\theta)}{\Gamma(\theta+n)}\, E\Bigg[\prod_{i=1}^n \Big(E\Big[\sum_{m=1}^{\infty} J_m^i(\alpha,\theta)\,\Big|\,\gamma_{1/\alpha}\Big]\Big)^{a_i}\Bigg] \qquad (3.14)$$
$$= \frac{C(a_1,\ldots,a_n)\Gamma(\theta)}{\Gamma(\theta+n)}\, E\Big[\Big(\frac{\gamma_{1/\alpha}}{C\Gamma(1-\alpha)}\Big)^k\Big] \prod_{i=1}^n \Big(\alpha C \int_0^{\infty} x^{i-1-\alpha} e^{-x}\,dx\Big)^{a_i} = \frac{C(a_1,\ldots,a_n)\Gamma(\theta)}{\Gamma(\theta+n)}\, \Big(\frac{\alpha}{\Gamma(1-\alpha)}\Big)^k\, \frac{\Gamma(\theta/\alpha+k)}{\Gamma(\theta/\alpha)}\, \prod_{i=1}^n \Gamma(i-\alpha)^{a_i},$$
which leads to (3.12).
Next, consider the case $-\alpha < \theta \leq 0$. Since $\theta + \alpha > 0$, formula (3.12) holds for $\mathrm{PSF}(\alpha,\theta+\alpha,a)$. Since the function
$$f_a(x_1,x_2,\ldots) = \sum_{\substack{\text{distinct } l_{11},\ldots,l_{1a_1};\\ l_{21},\ldots,l_{2a_2};\ \ldots;\ l_{n1},\ldots,l_{na_n}}} \prod_{i=1}^n \prod_{j=1}^{a_i} x_{l_{ij}}^i$$
is symmetric, the sampling formula from the PD$(\alpha,\theta)$ population is the same as the sampling formula from the GEM$(\alpha,\theta)$ population. Thus
$$\mathrm{PSF}(\alpha,\theta,a) = C(a_1,\ldots,a_n)\, E[f_a(V_1^{\alpha,\theta}, V_2^{\alpha,\theta},\ldots)]. \qquad (3.15)$$
Decomposing the expectation $E[f_a(V_1^{\alpha,\theta}, V_2^{\alpha,\theta},\ldots)]$ in terms of the powers of $V_1^{\alpha,\theta}$, it follows that
$$\mathrm{PSF}(\alpha,\theta,a) = C(a_1,\ldots,a_n) \sum_{l=0}^n \chi_{\{a^l \in \mathcal{A}_n\}}\, a_l\, \frac{\Gamma(\theta+1)}{\Gamma(1-\alpha)\Gamma(\theta+\alpha)}\, \frac{\Gamma(l+1-\alpha)\Gamma(\theta+n-l+\alpha)}{\Gamma(\theta+n+1)}\, E[f_{a^l}(V_1^{\alpha,\theta+\alpha}, V_2^{\alpha,\theta+\alpha},\ldots)], \qquad (3.16)$$
where $a_0 = 1$, $a^l = (\ldots, a_l - 1, \ldots)$ and the coefficient $a_l$ in the summation accounts for the fact that $V_1^{\alpha,\theta}$ could correspond to the proportion of any of the $a_l$ families of size $l$. By the sampling formula (3.12), we have that for $l = 0$,
$$E[f_{a^0}(V_1^{\alpha,\theta+\alpha}, V_2^{\alpha,\theta+\alpha},\ldots)] = \frac{\prod_{j=1}^n ((1-\alpha)_{(j-1)})^{a_j}}{(\theta+\alpha)_{(n)}}\, \prod_{l=0}^{k-1}(\theta+\alpha+l\alpha), \qquad (3.17)$$
and for $l > 0$,
$$E[f_{a^l}(V_1^{\alpha,\theta+\alpha}, V_2^{\alpha,\theta+\alpha},\ldots)] = \frac{((1-\alpha)_{(l-1)})^{a_l-1} \prod_{j=1, j\neq l}^n ((1-\alpha)_{(j-1)})^{a_j}}{(\theta+\alpha)_{(n-l)}}\, \prod_{l'=0}^{k-2}(\theta+\alpha+l'\alpha). \qquad (3.18)$$
Putting (3.16)–(3.18) together, it follows that
$$\mathrm{PSF}(\alpha,\theta,a) = C(a_1,\ldots,a_n)\Bigg[\frac{(\theta+\alpha)_{(n)}}{(\theta+1)_{(n)}}\, \frac{\prod_{j=1}^n ((1-\alpha)_{(j-1)})^{a_j}}{(\theta+\alpha)_{(n)}}\, \prod_{l=0}^{k-1}(\theta+\alpha+l\alpha)$$
$$\qquad + \sum_{l=1}^n \chi_{\{a^l \in \mathcal{A}_n\}}\, a_l\, \frac{(1-\alpha)_{(l)}(\theta+\alpha)_{(n-l)}}{(\theta+1)_{(n)}}\, \frac{((1-\alpha)_{(l-1)})^{a_l-1} \prod_{j=1, j\neq l}^n ((1-\alpha)_{(j-1)})^{a_j}}{(\theta+\alpha)_{(n-l)}}\, \prod_{l'=0}^{k-2}(\theta+\alpha+l'\alpha)\Bigg] \qquad (3.19)$$
$$= C(a_1,\ldots,a_n)\, \prod_{j=1}^n ((1-\alpha)_{(j-1)})^{a_j}\, \frac{\prod_{l=1}^{k-1}(\theta+l\alpha)}{(\theta+1)_{(n)}}\, \Big(\theta+k\alpha+\sum_{l=1}^n (l a_l - a_l\alpha)\Big) = C(a_1,\ldots,a_n)\, \prod_{j=1}^n ((1-\alpha)_{(j-1)})^{a_j}\, \frac{\prod_{l=0}^{k-1}(\theta+l\alpha)}{\theta_{(n)}},$$
which is just (3.12). □
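Formula (3.12) is straightforward to evaluate numerically. The Python sketch below (the helper names `psf`, `rising` and `allelic_partitions` are ours, not from the text) computes $\mathrm{PSF}(\alpha,\theta,a)$ for a given allelic partition and, as a sanity check, verifies that the probabilities sum to one over all allelic partitions of a small $n$.

```python
from math import factorial, prod

def rising(x, m):
    """Rising factorial x_(m) = x (x+1) ... (x+m-1), with x_(0) = 1."""
    out = 1.0
    for i in range(m):
        out *= x + i
    return out

def psf(alpha, theta, a):
    """Pitman sampling formula (3.12) for the allelic partition a = (a_1, ..., a_n)."""
    n = len(a)
    k = sum(a)
    comb = factorial(n) / prod(factorial(j) ** a[j - 1] * factorial(a[j - 1]) for j in range(1, n + 1))
    front = prod(theta + l * alpha for l in range(k)) / rising(theta, n)
    tail = prod(rising(1 - alpha, j - 1) ** a[j - 1] for j in range(1, n + 1))
    return comb * front * tail

def allelic_partitions(n):
    """All vectors (a_1, ..., a_n) with sum_j j * a_j = n."""
    def rec(remaining, max_part):
        if remaining == 0:
            yield []
            return
        for j in range(min(remaining, max_part), 0, -1):
            for rest in rec(remaining - j, j):
                yield [j] + rest
    for parts in rec(n, n):
        a = [0] * n
        for j in parts:
            a[j - 1] += 1
        yield tuple(a)

if __name__ == "__main__":
    alpha, theta, n = 0.3, 1.5, 6
    total = sum(psf(alpha, theta, a) for a in allelic_partitions(n))
    print("sum over all allelic partitions of n =", n, ":", total)  # should be close to 1.0
```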
3.4 Urn-type Construction
In this section, we consider an urn construction for the Pitman sampling formula. Even though it is not clear how this formula arises in a genetic setting, we will use some genetic terminologies in our discussion due to its similarity to the Ewens sampling formula.
A random sample of size $n$ could arise from a random sample of size $n+1$ with one element being removed at random. Equivalently, a random sample of size $n+1$ can be obtained by adding one element to a random sample of size $n$. Thus it is natural to expect that the two-parameter sampling formula has certain consistent structures. Given that a random sample $X_1,\ldots,X_n$ has an allelic partition $a = (a_1,\ldots,a_n)$, or equivalently, the allelic partition of $(X_1,\ldots,X_n)$ is $(a_1,\ldots,a_n)$, the value of $X_{n+1}$ could be one of $X_1,\ldots,X_n$ or a completely new value. If $X_{n+1}$ takes on a value that is not among the first $n$ samples, then the allelic partition of $X_1,\ldots,X_{n+1}$ is $(a_1+1, a_2,\ldots,a_n,0)$. By (3.12),
$$p_{n+1}(a_1+1, a_2,\ldots,a_n,0) = \frac{(n+1)!}{(a_1+1)\,\theta_{(n+1)}}\, \prod_{l=0}^{k}(\theta+l\alpha)\, \prod_{j=1}^{n} \frac{((1-\alpha)_{(j-1)})^{a_j}}{(j!)^{a_j}\, a_j!} = \frac{n+1}{a_1+1}\, \frac{\theta+k\alpha}{n+\theta}\, p_n(a_1,a_2,\ldots,a_n). \qquad (3.20)$$
Note that moving $X_{n+1}$ around in the sample will not alter the allelic partition $(a_1+1, a_2,\ldots,a_n,0)$. The event that the new value appears as the last of the $n+1$ observations is only one out of $n+1$ possibilities, and any of the $a_1+1$ values appearing exactly once could be the one observed last. It follows that
$$P\{X_{n+1}\ \text{takes on a new value} \mid (X_1,\ldots,X_n) = a\} = \frac{\theta+k\alpha}{\theta+n}. \qquad (3.21)$$
Similarly, if $X_{n+1}$ takes on a value that appears in $X_1,\ldots,X_n$ exactly $r$ times for some $1 \leq r \leq n$ (denote this event by $C(n,r)$), then the allelic partition of $X_1,\ldots,X_{n+1}$ is $(\ldots, a_r - 1, a_{r+1}+1, \ldots, a_{n+1})$, and
$$P\{C(n,r) \mid (X_1,\ldots,X_n) = a\} = \frac{a_r(r-\alpha)}{\theta+n}. \qquad (3.22)$$
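The conditional probabilities (3.21) and (3.22) can be read as a sequential sampling rule: given the first $n$ observations with $k$ distinct values, the next observation is new with probability $(\theta+k\alpha)/(\theta+n)$ and repeats a particular value already seen $r$ times with probability $(r-\alpha)/(\theta+n)$. The Python sketch below (the function name `sample_allelic_partition` and the parameter values are ours, not from the text) simulates a sample of size $n$ in this way and returns its allelic partition; comparing empirical partition frequencies over many runs with (3.12) is one elementary check of the formula.

```python
import random
from collections import Counter

def sample_allelic_partition(alpha, theta, n, rng=random):
    """Sequential sampling using (3.21)-(3.22); returns the allelic partition (a_1,...,a_n)."""
    counts = []                            # sizes of the distinct values seen so far
    for m in range(n):                     # m = number of values already drawn
        k = len(counts)
        u = rng.random() * (theta + m)
        if m == 0 or u < theta + k * alpha:        # new value, prob. (theta + k*alpha)/(theta + m)
            counts.append(1)
        else:                                      # repeat an old value of size r, prob. (r - alpha)/(theta + m)
            u -= theta + k * alpha
            for i, r in enumerate(counts):
                u -= r - alpha
                if u < 0:
                    counts[i] += 1
                    break
    a = [0] * n
    for r in counts:
        a[r - 1] += 1
    return tuple(a)

if __name__ == "__main__":
    random.seed(0)
    freq = Counter(sample_allelic_partition(0.3, 1.5, 5) for _ in range(20000))
    for part, c in freq.most_common(3):
        print(part, c / 20000)             # compare with psf(0.3, 1.5, part)
```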
Conversely, starting from the initial condition for a sample of size one (whose allelic partition is $(1)$ with probability one), the two-parameter sampling formula (3.12) is uniquely determined by the conditional probabilities in (3.21) and (3.22). This suggests an urn structure for the two-parameter sampling formula that is similar to the model in Section 2.7.2.
Consider again a population composed of immigrants and their descendants. Each new immigrant starts a new family. Individuals in the same family are considered to be the same type. Let $I(t)$ be the immigration process describing how immigrants enter the population and let $x(t)$ be a linear growth process, with $x(0) = 1$, describing the growth pattern of each family. Immigrants arrive at the times $0 \leq T_1 < T_2 < \cdots$ and initiate families evolving as independent versions of $x(t)$. If $\{x_i(t)\}$ are independent copies of $x(t)$ with $x_i(t)$ being initiated by the $i$th type, then $x_i(t-T_i)$ will be the size at time $t$ of the type $i$ family. The process $N(t)$ represents the total population size at time $t$:
$$N(t) = \sum_{i=1}^{I(t)} x_i(t-T_i).$$
For $N(t) \geq 1$, the families induce a random partition $\Pi(t)$ of the integer $N(t)$. Let $A_j(t)$ denote the number of families of size $j$ at time $t$, so that
$$\Pi(t) = (A_1(t),\ldots,A_{N(t)}(t))$$
is the corresponding random allelic partition. The transition mechanisms of these processes are as follows.
(i) $I(t)$ is a pure-birth process with $I(0) = 0$ and birth rates
$$r_i = \lim_{h\to 0}\frac{1}{h} P\{I(t+h) - I(t) = 1 \mid I(t) = i\},$$
where
$$r_0 = \beta_0, \qquad r_i = \alpha i + \theta \ \text{ for } i \geq 1,$$
with $\beta_0 > 0$, $\theta > -\alpha$, and $0 < \alpha < 1$.
(ii) The process $x(t)$ is also a pure-birth process, but starting at $x(0) = 1$ and with infinitesimal birth rates $n - \alpha$ for $n \geq 1$.
(iii) The cumulative process $N(t)$ is then a pure-birth process starting at $N(0) = 0$ with rates determined by the processes $\{I(t), x_1(t-T_1),\ldots,x_{I(t)}(t-T_{I(t)})\}$. Let
$$\rho_n = \lim_{h\to 0}\frac{1}{h} P\{N(t+h) - N(t) = 1 \mid N(t) = n\}.$$
Obviously $\rho_0 = \beta_0$ and for small $h > 0$,
$$P\{N(t+h) - N(t) = 1 \mid N(t) = n\} = E\Big[\Big(\alpha I(t) + \theta + \sum_{i=1}^{I(t)}(x_i(t-T_i) - \alpha)\Big) h + o(h)\,\Big|\, N(t) = n\Big] = (n+\theta)h + o(h).$$
Thus for $n \geq 1$, $\rho_n = n + \theta$. Define $\tau_n = \inf\{t \geq 0 : N(t) = n\}$ for $n \geq 1$. Then
$$\Pi_n = \Pi(\tau_n) = (A_1(\tau_n),\ldots,A_n(\tau_n))$$
will be a random partition of $n$. For any $(a_1,\ldots,a_n)$ in $\mathcal{A}_n$ with $k = \sum_{i=1}^n a_i$, it follows from our construction that
$$P\{\Pi_{n+1} = (a_1+1,\ldots,a_n,0) \mid \Pi_n = (a_1,\ldots,a_n)\} = \frac{k\alpha+\theta}{n+\theta}. \qquad (3.23)$$
If $a_i \geq 1$ for some $1 \leq i < n$, then
$$P\{\Pi_{n+1} = (a_1,\ldots,a_i-1,a_{i+1}+1,\ldots,a_n,0) \mid \Pi_n = (a_1,\ldots,a_n)\} = \frac{a_i(i-\alpha)}{n+\theta}. \qquad (3.24)$$
Finally, for $a_n = 1$, we have
$$P\{\Pi_{n+1} = (0,\ldots,0,0,1) \mid \Pi_n = (0,\ldots,0,1)\} = \frac{n-\alpha}{n+\theta}. \qquad (3.25)$$
Comparing (3.23)–(3.25) with the conditional probabilities in (3.21) and (3.22), we obtain:
Theorem 3.9. The distribution of the random partition $\Pi_n = \Pi(\tau_n)$ is given by the Pitman sampling formula.
The processes $I(t)$, $x(t)$, and $N(t)$ are special cases of the linear birth process with immigration, generically denoted by $Y(t)$. This is a time-homogeneous Markov chain with infinitesimal rates
$$\lambda_n = \lim_{h\to 0}\frac{1}{h} P[Y(t+h) - Y(t) = 1 \mid Y(t) = n] = \lambda n + c, \qquad (3.26)$$
where $\lambda > 0$, $c > 0$. By an argument similar to that used in Theorem 2.17 and Theorem 2.18, we have
$$P[Y(t) = n \mid Y(0) = 0] = \binom{c/\lambda + n - 1}{n} e^{-ct}(1 - e^{-\lambda t})^{n}, \quad n = 0,1,\ldots, \qquad (3.27)$$
and
$$\lim_{t\to\infty} e^{-\lambda t} Y(t) = W_{c/\lambda} \quad \text{a.s.}, \qquad (3.28)$$
where $W_d$ is a Gamma$(d,1)$ random variable. Starting with $Y(0) = 1$, the corresponding results for $n \geq 1$ become
$$P[Y(t) = n \mid Y(0) = 1] = \binom{c/\lambda + n - 1}{n-1} e^{-(\lambda+c)t}(1 - e^{-\lambda t})^{n-1}, \qquad (3.29)$$
and
$$\lim_{t\to\infty} e^{-\lambda t} Y(t) = W_{1+c/\lambda} \quad \text{a.s.} \qquad (3.30)$$
Specializing to $I(t)$, $x(t)$, and $N(t)$, we obtain the following results:
(i) For $I(t)$, one has $I(0) = 0$, $\lambda_0 = \beta_0$, $\lambda_n = \alpha n + \theta$, $n \geq 1$. Let $I^*(t) = I(t+T_1)$. Then $I^*(t)$ is a linear birth process with immigration starting with $I^*(0) = 1$ and with infinitesimal rates $\lambda_n^* = \alpha n + \theta$. Thus
$$\lim_{t\to\infty} e^{-\alpha t} I(t) = \lim_{t\to\infty} e^{-\alpha T_1} e^{-\alpha(t-T_1)} I^*(t-T_1) = e^{-\alpha T_1} W_{\theta/\alpha+1} \quad \text{a.s.}, \qquad (3.31)$$
where $T_1$ and $W_{\theta/\alpha+1}$ are independent, and $T_1$ has an exponential distribution with mean $\beta_0^{-1}$.
(ii) For $x(t)$, one has $x(0) = 1$, $\lambda_n = n - \alpha$, $n \geq 1$, and
$$\lim_{t\to\infty} e^{-t} x(t) = W_{1-\alpha} \quad \text{a.s.} \qquad (3.32)$$
(iii) For $N(t)$, one has $N(0) = 0$, $\lambda_0 = \beta_0$, $\lambda_n = n + \theta$, $n \geq 1$, and
$$\lim_{t\to\infty} e^{-t} N(t) = e^{-T_1} W_{1+\theta} \quad \text{a.s.} \qquad (3.33)$$
With these results in hand, we are ready to prove the following theorem.
Theorem 3.10. For $0 < \alpha < 1$, $\theta > -\alpha$, let
$$K_n^{\alpha,\theta} = \sum_{i=1}^{n} A_i(\tau_n). \qquad (3.34)$$
Then
$$\lim_{n\to\infty} \frac{K_n^{\alpha,\theta}}{n^{\alpha}} = S_{\alpha,\theta} \quad \text{a.s.} \qquad (3.35)$$
Proof. By applying (3.31) and (3.33), we have
$$\lim_{t\to\infty} \frac{I(t)}{(N(t))^{\alpha}} = \lim_{t\to\infty} \frac{e^{-\alpha t} I(t)}{[e^{-t} N(t)]^{\alpha}} = \frac{W_{\theta/\alpha+1}}{(W_{1+\theta})^{\alpha}} \quad \text{a.s.}, \qquad (3.36)$$
which gives (3.35). Here $W_{\theta/\alpha+1}$ and $W_{1+\theta}$ are correlated. □
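Theorem 3.10 is easy to probe empirically, since only the number of distinct types is needed and, by (3.23), it increases by one at step $n$ with probability $(\theta+k\alpha)/(\theta+n)$. The short Python sketch below (names and parameter values are our own illustrative choices) tracks one trajectory and prints $K_n/n^{\alpha}$ at several values of $n$; the ratio should stabilize, to a limit that varies from run to run as expected for the random variable $S_{\alpha,\theta}$.

```python
import random

def types_trajectory(alpha, theta, n_max, seed=0):
    """Number of distinct types k after each of n_max sequential draws, using (3.23)."""
    rng = random.Random(seed)
    k, path = 0, []
    for n in range(n_max):
        if n == 0 or rng.random() * (theta + n) < theta + k * alpha:
            k += 1
        path.append(k)
    return path

if __name__ == "__main__":
    alpha, theta = 0.5, 1.0
    path = types_trajectory(alpha, theta, 10**6)
    for n in (10**3, 10**4, 10**5, 10**6):
        print(n, path[n - 1] / n**alpha)   # K_n / n^alpha, cf. (3.35)
```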
Remarks: (a) More information about the distribution of Sα ,θ can be found in [155], where its connection to the Mittag–Leffler distribution is explored. (b) A general scheme, called the Chinese restaurant process, can be adapted to give a unified treatment for the models in Section 2.7 and Section 3.4. Our presentation reflects the historical development of the subject.
3.5 Notes The most fundamental reference for the two-parameter Poisson–Dirichlet distribution is Pitman and Yor [156], which is also the source for the proof of Theorem 3.4 and Theorem 3.5. The connection to the Blackwell–MacQueen urn scheme [16] is included in the survey paper of Pitman [152]. The proof of Theorem 3.6 presented here appears in [73]. A different proof is found in Handa [100] using the GEM representation and the theory of point processes. The residual allocation model was first introduced in [98]. The Pitman sampling formula in Theorem 3.8 first appeared in Pitman [148], and was further studied in [149]. The first part of the proof of Theorem 3.8 can be found in Carlton [17], which also includes detailed calculations of moments and parameter estimation for the two-parameter Poisson–Dirichlet distribution. The proof for the second part seems to be new. Further discussions on sampling formulae can be found in [89]. The models in Section 3.4, motivated by the work in [121], are from [75]. Pitman [155] includes other urn-type models and related references. The conditional structures for the two-parameter sampling formulae are obtained in [149]. Theorem 3.10 is due to Pitman [150] who obtained his results by moment calculations and martingale convergence techniques. The proof here is from [75]. The Chinese restaurant process first appeared in [2]. Further development and background can be found in [155]. We have focused on the two-parameter Poisson–Dirichlet distribution with infinitely many types. For a population with m types, the proper ranges of α and θ for the two-parameter Poisson–Dirichlet distribution are α = −κ , θ = mκ for some κ > 0. In [29] and the references therein, connections can be found between the twoparameter Poisson–Dirichlet distribution and models in physics, including meanfield spin glasses, random map models, fragmentation, and returns of a random walk to the origin. Applications in macroeconomics and finance can be found in [4]. Reference [115] includes results in Bayesian statistics. Additional distributional results on subordinators are included in [116].
Chapter 4
The Coalescent
Consider a population that evolves under the influence of mutation and random sampling. In any generation, the gene of each individual is either a copy of a gene of its parent or is a mutant. It is thus possible to trace the lines of descent of a gene back through previous generations. By tracing back in time, the genealogy of a sample from the population emerges. The coalescent is a mathematical model that describes the ancestry of a finite sample of individuals, genes or DNA sequences taken from a large population. In this chapter, we introduce Kingman’s coalescent, derive its distribution, and study the embedded pure-death Markov chain. All calculations are based on the Wright–Fisher model.
4.1 Kingman's n-Coalescent
For each $n \geq 1$ and $1 \leq k \leq n$, let $S(n,k)$ denote the total number of ways of partitioning $n$ elements into $k$ nonempty sets. The set $\{S(n,k) : n \geq 1, 1 \leq k \leq n\}$ is the collection of the Stirling numbers of the second kind. They are different from the unsigned Stirling numbers of the first kind $T_{nk} = |S_{nk}|$, the total number of permutations of $n$ objects into $k$ cycles. We list several known facts about $S(n,k)$:
$$S(n,1) = S(n,n) = 1, \qquad S(n,2) = 2^{n-1} - 1, \qquad S(n,n-1) = \binom{n}{2},\ n \geq 2,$$
$$S(n,k) = S(n-1,k-1) + k\,S(n-1,k), \qquad S(n,k) = \frac{1}{k!}\sum_{i=1}^{k} (-1)^{k-i}\binom{k}{i} i^n.$$
Consider the Wright–Fisher model with a large population size $2N$ and no mutation. Take a sample of size $n$ from the population at the present time. Since the only
influence on the population is random genetic drift, any two individuals in the sample will share a most recent common ancestor (MRCA) sometime in the past. Going further back in time, more individuals will share an MRCA. Eventually all individuals in the sample will coalesce at a single individual which we call the ancestor of the sample.
Consider the present time as $t = 0$. The time $t = m$ will be the time of going back $m$ generations in the past. Starting at time zero and going back one generation, we can ask the following question: What is the probability $P_{nk}$ that the number of ancestors of the current sample is $k$ in the previous generation? Clearly $k$ has to be less than or equal to $n$; i.e., $P_{nk} = 0$ for $k > n$. Let $Z_0 = n$, $Z_1$ be the total number of ancestors, one generation back, and $Z_m$ denote the total number of ancestors, $m$ generations back. Then $Z_m$ is a Markov chain with transition probability matrix $(P_{ij})_{1\leq i,j\leq n}$.
The transition probability $P_{nk}$ can be calculated as follows. First, the sample is partitioned into $k$ groups. The total number of ways is given by $S(n,k)$. Next, for each fixed partition, we calculate the probability. Since the ancestor in the previous generation of each individual in the sample can be any of the $2N$ individuals, the total number of choices is $(2N)^n$. There are $2N$ choices for the first ancestor, $2N-1$ choices for the second ancestor, and so on. The probability of each partition is $(2N)(2N-1)\cdots(2N-k+1)/(2N)^n$. Thus
$$P_{nk} = S(n,k)\,\frac{(2N)(2N-1)\cdots(2N-k+1)}{(2N)^n} = \begin{cases} \dfrac{\binom{n}{2}}{2N} + o\big(\tfrac{1}{(2N)^2}\big), & k = n-1, \\[4pt] 1 - \dfrac{\binom{n}{2}}{2N} + o\big(\tfrac{1}{(2N)^2}\big), & k = n, \\[4pt] o\big(\tfrac{1}{(2N)^2}\big), & \text{else.} \end{cases}$$
If $2N$ is much bigger than $n$, then by ignoring higher-order terms in $\frac{1}{2N}$, the number of ancestors will be $n-1$ or $n$ with respective approximate probabilities $\binom{n}{2}\frac{1}{2N}$ and $1 - \binom{n}{2}\frac{1}{2N}$. Similar arguments can be applied to any $P_{ij}$ with $1 \leq j \leq i \leq n$. Starting at $i$, let $\sigma_i^{2N}$ denote the number of steps that the Markov chain $Z_t$ spent at state $i$. Then, for $m \geq 1$,
$$P\{\sigma_i^{2N} = m\} \approx \frac{\binom{i}{2}}{2N}\Big(1 - \frac{\binom{i}{2}}{2N}\Big)^{m-1}.$$
Using the same scaling as in diffusion approximations, we can see that for any t >0
$$P\Big\{\frac{\sigma_i^{2N}}{2N} \leq t\Big\} = \sum_{k=1}^{[2Nt]} \frac{\binom{i}{2}}{2N}\Big(1 - \frac{\binom{i}{2}}{2N}\Big)^{k-1} \approx 1 - e^{-\binom{i}{2} t}.$$
In other words, the macroscopic time $\sigma_i^{2N}/(2N)$ approaches an exponential random variable with parameter $\binom{i}{2}$. This brings us to a process, called the coalescent.
To describe the coalescent, consider the current time as zero. For each $n \geq 1$, let $E_n = \{1,2,\ldots,n\}$ and $\mathscr{E}_n$ denote the collection of equivalence relations of $E_n$. Each element of $\mathscr{E}_n$ is thus a subset of $E_n \times E_n$. For example, in the case of $n = 3$, the set
$$\{(1,1),(2,2),(3,3),(1,3),(3,1)\}$$
defines an equivalence relation that results in two equivalence classes $\{1,3\}$ and $\{2\}$. The set $\mathscr{E}_n$ is clearly finite and its elements will be denoted by $\eta$, $\xi$, etc. The equivalence relations that are of interest to us here are defined through the ancestral structures. Two individuals are equivalent if they have the same ancestor at some time $t$ in the past. For $\xi, \eta$ in $\mathscr{E}_n$, we write $\xi \prec \eta$ if $\eta$ is obtained from $\xi$ by combining exactly two equivalence classes of $\xi$ into one. For distinct $\xi, \eta$ in $\mathscr{E}_n$, set
$$q_{\xi\eta} = \begin{cases} 1, & \xi \prec \eta, \\ 0, & \text{else.} \end{cases}$$
Let $|\xi|$ be the number of equivalence classes induced by $\xi$. Define
$$q_{\xi} := -q_{\xi,\xi} = \binom{|\xi|}{2}.$$
Definition 4.1. Kingman's $n$-coalescent is an $\mathscr{E}_n$-valued, continuous-time Markov chain $X_t$ with infinitesimal matrix $(q_{\xi\eta})$ starting at $X_0 = \{(i,i) : i = 1,\ldots,n\}$.
For each $2 \leq i \leq n$, denote by $\tau_i$ the waiting time between the $(n-i)$th jump and the $(n-i+1)$th jump of $X_t$. Then $\tau_2,\ldots,\tau_n$ are independent and $\tau_i$ is an exponential random variable with parameter $\binom{i}{2}$. Set $T_0 = 0$ and $T_m = \tau_n + \cdots + \tau_{n+1-m}$, $1 \leq m \leq n-1$, and $D_t = |X_t|$. Then $D_t$ is a pure-death process starting at $n$ with intensity
$$\lim_{h\to 0} h^{-1} P\{D_{t+h} = k-1 \mid D_t = k\} = \binom{k}{2}.$$
The marginal distribution of $D_t$ is given by
$$P\{D_t = k\} = P\{T_{n-k} \leq t\} - P\{T_{n-k+1} \leq t\}.$$
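Definition 4.1 translates directly into a simulation: starting from the partition into singletons, wait an exponential time with rate $\binom{k}{2}$ while there are $k$ classes and then merge two classes chosen uniformly at random. The Python sketch below (the function name `kingman_n_coalescent` and the seed are ours, not from the text) generates one realization of the jump chain together with the coalescence times $T_1,\ldots,T_{n-1}$.

```python
import random

def kingman_n_coalescent(n, seed=None):
    """Simulate Kingman's n-coalescent: returns (times, partitions).

    times[m] is T_m, the time of the m-th coalescence; partitions[m] is the list of
    equivalence classes (as sets) just after that coalescence, starting from singletons.
    """
    rng = random.Random(seed)
    classes = [{i} for i in range(1, n + 1)]
    t, times, partitions = 0.0, [0.0], [[set(c) for c in classes]]
    while len(classes) > 1:
        k = len(classes)
        t += rng.expovariate(k * (k - 1) / 2)        # waiting time tau_k ~ Exp(C(k,2))
        i, j = rng.sample(range(k), 2)               # merge two classes chosen uniformly at random
        classes[i] |= classes[j]
        classes.pop(j)
        times.append(t)
        partitions.append([set(c) for c in classes])
    return times, partitions

if __name__ == "__main__":
    times, parts = kingman_n_coalescent(5, seed=1)
    for t, p in zip(times, parts):
        print(round(t, 3), p)
```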
Introduce a discrete-time process $Y_k = X_{T_{n-k}}$, $1 \leq k \leq n$. By direct calculation, for $\xi \prec \eta$,
$$P\{Y_{k-1} = \eta \mid Y_k = \xi\} = \frac{1}{\binom{|\xi|}{2}}.$$
It is clear from the construction that $X_t = Y_{D_t}$, and the processes $Y_k$ and $D_t$ are independent.
Theorem 4.1. For each $\eta$ in $\mathscr{E}_n$ with $|\eta| = k$,
$$P\{Y_k = \eta\} = \frac{(n-k)!\,k!\,(k-1)!}{n!\,(n-1)!}\,\prod_{i=1}^{k} a_i!, \qquad (4.1)$$
where $a_1,\ldots,a_k$ are the sizes of the equivalence classes in $\eta$.
Proof. We will prove the result by induction. Let $P_k(\eta) = P\{Y_k = \eta\}$. Then
$$P_k(\eta) = \sum_{\xi \prec \eta} P\{Y_{k+1} = \xi,\, Y_k = \eta\} = \frac{1}{\binom{k+1}{2}} \sum_{\xi \prec \eta} P_{k+1}(\xi).$$
For $k = n$, we have
$$\eta = \{(i,i) : i = 1,\ldots,n\}, \qquad a_1 = \cdots = a_n = 1.$$
Since $P_n(\eta) = 1$, (4.1) holds in this case. Assume the formula holds for $k+1$ with $2 \leq k < n$. If the sizes of the equivalence classes of $\eta$ are $a_1,\ldots,a_k$, then for any $\xi \prec \eta$, the sizes $b_1,\ldots,b_{k+1}$ of the equivalence classes of $\xi$ will be of the form $a_1,\ldots,a_{r-1},\lambda,a_r-\lambda,\ldots,a_k$ for some $1 \leq r \leq k$ and $1 \leq \lambda < a_r$. For each fixed $r$ and $\lambda$, there are $\frac{1}{2}\binom{a_r}{\lambda}$ such $\xi$. Here the factor $1/2$ is due to the double counting of the pair $\lambda, a_r-\lambda$ in $\binom{a_r}{\lambda}$; i.e., the same $\xi$ is counted twice: once by choosing $\lambda$ elements and once by choosing the other $a_r-\lambda$ elements. From the recursion above, we obtain
$$P_k(\eta) = \frac{1}{\binom{k+1}{2}} \sum_{\xi \prec \eta} \frac{(n-k-1)!\,(k+1)!\,k!}{n!\,(n-1)!}\,\prod_{i=1}^{k+1} b_i!$$
$$= \frac{1}{\binom{k+1}{2}} \sum_{r=1}^{k} \sum_{\lambda=1}^{a_r-1} \frac{1}{2}\binom{a_r}{\lambda}\, \frac{(n-k-1)!\,(k+1)!\,k!}{n!\,(n-1)!}\, a_1!\cdots\lambda!(a_r-\lambda)!\cdots a_k!$$
$$= \frac{(n-k-1)!\,k!\,(k-1)!}{n!\,(n-1)!}\,\prod_{i=1}^{k} a_i!\, \sum_{r=1}^{k}(a_r-1).$$
Since $\sum_{r=1}^{k}(a_r-1) = n-k$, it follows that
$$P_k(\eta) = \frac{(n-k)!\,k!\,(k-1)!}{n!\,(n-1)!}\,\prod_{i=1}^{k} a_i!.$$
Thus the formula (4.1) also holds for $k$, which implies the theorem. □
The random variable $T_{n-1}$ is the total time needed for the $n$ individuals to reach the MRCA. The mean and the variance of $T_{n-1}$ can be calculated explicitly:
$$E[T_{n-1}] = \sum_{k=2}^{n} E[\tau_k] = 2\sum_{k=2}^{n}\frac{1}{k(k-1)} = 2\Big(1-\frac{1}{n}\Big), \qquad (4.2)$$
$$\mathrm{Var}[T_{n-1}] = \sum_{k=2}^{n}\mathrm{Var}[\tau_k] = 4\sum_{k=2}^{n}\frac{1}{(k(k-1))^2} = 4\sum_{k=2}^{n}\Big[\frac{1}{(k-1)^2}+\frac{1}{k^2}-\frac{2}{k(k-1)}\Big] = 8\sum_{k=2}^{n}\frac{1}{k^2} - 4\Big(1-\frac{1}{n}\Big)^2. \qquad (4.3)$$
The fact that $E[\tau_2] = 1 \approx \frac{1}{2}E[T_{n-1}]$ indicates the significant influence of the most ancient coalescence time. An alternate time measure, taking into account the sample sizes, is given by
$$\tilde{T}_{n-1} = \sum_{k=2}^{n} k\,\tau_k. \qquad (4.4)$$
The corresponding mean and variance are
$$E[\tilde{T}_{n-1}] = 2\sum_{k=2}^{n}\frac{1}{k-1} \approx 2\log n, \qquad (4.5)$$
$$\mathrm{Var}[\tilde{T}_{n-1}] = 4\sum_{k=2}^{n}\frac{1}{(k-1)^2}. \qquad (4.6)$$
4.2 The Coalescent
Letting $n$ converge to infinity, one would expect to get a pure-death process $D_t$ with entrance boundary $\infty$ and transition intensity
$$P\{D_{t+h} = k-1 \mid D_t = k\} = \binom{k}{2} h + o(h).$$
The rigorous construction of this process will be carried out in Section 4.3. Assuming the existence of the pure-death process, one can then construct a Markov process $X_t$ on $\mathscr{E}$, the collection of all equivalence relations on $\{1,2,\ldots\}$, such that $|X_t|$ is a pure-death Markov process and the embedded jump chain has transition probabilities $P_{\xi\eta} = 1/\binom{|\xi|}{2}$, $\xi \prec \eta$. For $n \geq 1$, define a map $\rho_n : \mathscr{E} \to \mathscr{E}_n$ such that $\rho_n(\xi) = \{(i,j) \in \xi : 1 \leq i,j \leq n\}$. Then $\rho_n(X_t)$ is the $n$-coalescent. Thus all $n$-coalescents can be constructed on a single probability space. The process $X_t$ is called a coalescent.
The coalescent can be generalized to include mutation. This is done by considering the number of ancestors at a time $t$ in the past which have not experienced mutation. Consider the infinitely-many-alleles model ([184]) where each mutation leads to a new allelic type. Take a sample of size $n$ at time zero. Going back in time, two individuals are equivalent if they have a common ancestor and, for the most recent one, no mutation was experienced along the direct lines of descent from that common ancestor to the two individuals. If at this time an individual has an ancestor that is a mutant gene, the equivalence class containing the individual will be excluded from then on. Assume the rate of mutation is $\theta/2$ along each line of descent. Going back into the past, one equivalence class will disappear, due to either a mutation or a coalescing event. The coalescent will now involve a pure-death process $D_t$ with transition intensity
$$P\{D_{t+h} = k-1 \mid D_t = k\} = \Big[\binom{k}{2} + \frac{1}{2}k\theta\Big] h + o(h).$$
The embedded chain will experience a coalescing event with probability
$$\frac{\binom{k}{2}}{\binom{k}{2} + \frac{1}{2}k\theta} = \frac{k-1}{\theta+k-1}$$
and a mutation with probability $\frac{\theta}{\theta+k-1}$. On the basis of the probabilities $\frac{k-1}{\theta+k-1}$ and $\frac{\theta}{\theta+k-1}$, we can see that the coalescent with mutation corresponds to running Hoppe's urn backward in time. Therefore the Ewens sampling formula can be derived from coalescent arguments.
Starting with $n$ individuals, the total time $T_{n,\theta}$ needed to reach the MRCA is now given by
$$T_{n,\theta} = \sum_{k=1}^{n} \tau_{k,\theta},$$
where $\tau_{k,\theta}$ is an exponential random variable with parameter $\binom{k}{2} + \frac{1}{2}k\theta$. Note that $\tau_{1,\theta}$ is a new component due to mutation. The mean and variance are
$$E[T_{n,\theta}] = \sum_{k=1}^{n} E[\tau_{k,\theta}] = 2\sum_{k=1}^{n}\frac{1}{k(\theta+k-1)},$$
$$\mathrm{Var}[T_{n,\theta}] = \sum_{k=1}^{n}\mathrm{Var}[\tau_{k,\theta}] = 4\sum_{k=1}^{n}\frac{1}{(k(\theta+k-1))^2}.$$
Depending on the value of $\theta$, the total time could be longer or shorter, compared with the mutation-free case. Similarly, by taking into account the sample sizes, we can introduce the new time measure
$$\tilde{T}_{n,\theta} = \sum_{k=1}^{n} k\,\tau_{k,\theta},$$
which is the total length of all the branches in the genealogy. The mean and variance of $\tilde{T}_{n,\theta}$ are given by
$$E[\tilde{T}_{n,\theta}] = 2\sum_{k=1}^{n}\frac{1}{\theta+k-1},$$
$$\mathrm{Var}[\tilde{T}_{n,\theta}] = 4\sum_{k=1}^{n}\frac{1}{(\theta+k-1)^2}.$$
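These sums are elementary to evaluate. The short Python snippet below (names and example values are ours) computes the mean and variance of the tree height $T_{n,\theta}$ and of the total branch length $\tilde{T}_{n,\theta}$; with $\theta = 0$ the summation starts at $k = 2$, recovering the mutation-free expressions (4.2)–(4.6).

```python
def coalescent_time_stats(n, theta):
    """Mean/variance of T_{n,theta} (tree height) and of the total branch length T~_{n,theta}."""
    ks = range(1, n + 1) if theta > 0 else range(2, n + 1)   # the k = 1 term only enters with mutation
    mean_T = 2 * sum(1.0 / (k * (theta + k - 1)) for k in ks)
    var_T = 4 * sum(1.0 / (k * (theta + k - 1)) ** 2 for k in ks)
    mean_L = 2 * sum(1.0 / (theta + k - 1) for k in ks)
    var_L = 4 * sum(1.0 / (theta + k - 1) ** 2 for k in ks)
    return mean_T, var_T, mean_L, var_L

if __name__ == "__main__":
    print(coalescent_time_stats(20, 0.0))   # mutation-free case, cf. (4.2)-(4.6)
    print(coalescent_time_stats(20, 2.0))
```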
4.3 The Pure-death Markov Chain
In this section, we give a rigorous construction of the process $\{D_t : t \geq 0\}$ in Section 4.2, and derive some important properties that will be used later. Let $\mathbb{N} = \{0,1,2,\ldots\}$ and $\bar{\mathbb{N}} = \mathbb{N} \cup \{\infty\}$. Set
n(n − 1 + θ ) , n ≥ 0, 2 ¯ : lim λn ( f (n − 1) − f (n)) exists and is finite}, C = { f ∈ C(N) n→∞ λn ( f (n − 1) − f (n)), n∈N Ω f (n) = limn→∞ λn ( f (n − 1) − f (n)), n = ∞,
λn =
¯ is the one-point compactification of N. where the topology on N Theorem 4.2. The linear operator Ω with domain C is the generator of a strongly continuous, conservative, contraction semigroup (Feller semigroup) {T (t)} defined ¯ on the space C(N). Proof. Let ¯ : f (m) = f (∞) for all sufficiently large enough m}. C0 = { f ∈ C(N) ¯ one has that Clearly C0 is a subset of C. For every function g in C(N), lim g(n) = g(∞).
n→∞
For each m ≥ 1, set
gm (n) =
Then
g(n), n ≤ m g(m), n > m.
sup |gm (n) − g(n)| → 0, m → ∞.
¯ n∈N
¯ Hence C0 , and thus C is dense in C(N). For each λ > 0 and f in C, denote λ f − Ω f by g. Next we show that
λ f ≤ g,
(4.7)
where f = sup | f (n)|. ¯ n∈N
By definition,
λ f (0) = g(0) ≤ g.
Assume that for any m ≥ 0,
λ | f (n)| ≤ g, n ≤ m. If f (m+1) ≥ f (m), then g ≥ g(m+1) ≥ λ f (m+1). The case of f (m+1) < f (m) is clear. Hence (4.7) holds and the operator Ω is dissipative. ¯ and λ > 0, there exists an f in C Finally we will show that for any g in C(N), such that λ f − Ω f = g.
4.3 The Pure-death Markov Chain
75
This can be done recursively. First, let f (0) = λ −1 g(0). Assuming that f (n) has been defined, we then have f (n + 1) = (λ + λn )−1 (λn f (n) + g(n + 1)). It follows from (4.7) that for any n ≥ 1 | f (n − 1) − f (n)| ≤ λn−1 (λ f + g) ≤ 2λn−1 g.
(4.8)
−1 Since ∑∞ n=1 λn < ∞, { f (n)} is a Cauchy sequence. Rewriting Ω f (n) as λ f (n) − g(n), reveals that the sequence {Ω f (n)} is also Cauchy. Hence Ω f (n) has a finite limits as n converges to infinity. Thus f is in C. Since Ω is clearly conservative, the theorem follows from the Hille–Yosida theorem. 2 The Markov process, associated with the semigroup {T (t)} in Theorem 4.2, is just the pure-death Markov chain {Dt : t ≥ 0}.
Theorem 4.3. For each n in N and t > 0, let dnθ (t) = T (t)(χ{n} )(∞) = P{Dt = n}. Then
dnθ (t) =
(2k−1+θ ) 1 − ∑∞ (−1)k−1 θ(k−1) e−λk t , n=0 k=1 k! ∞ (2k−1+θ ) k−n k (n + θ ) −λk t , n > 0, (−1) e ∑k=n (k−1) n k!
(4.9)
(4.10)
where the series are clearly absolutely convergent. Proof. For any m ≥ 1 and t > 0, define θ dmn (t) = T (t)(χ{n} )(m).
(4.11)
θ (t) = 0 for n > m. Since T (t)( χ ¯ Clearly dmn {n} )(·) is in C(N), it follows that
lim d θ (t) = dnθ (t). m→∞ mn Next we fix m ≥ 1 and set ⎛ 0 0 −λ0 ⎜ ⎜ ⎜ λ ⎜ 1 −λ1 0 ⎜ Q=⎜ ⎜ ⎜ ··· ··· ··· ⎜ ⎝ 0 0 0
···
0
(4.12)
0
⎞
⎟ ⎟ ··· 0 0 ⎟ ⎟ ⎟ ⎟. ⎟ ··· ··· ··· ⎟ ⎟ ⎠ · · · λm −λm
76
4 The Coalescent
The semigroup generated by Q is then the same as T (t) restricted to C({0, 1, . . . , m}). Let T(t) = (diθj (t))0≤i, j≤m . Then one has T(t) = eQt .
(4.13)
Next we look for the spectral representation of Q. It is clear that the eigenvalues of Q are −λ0 , −λ1 , . . . , −λm and each has multiplicity one. Let x = (x0 , . . . , xm ) be a left eigenvector corresponding to eigenvalue −λi . If i = 0, we can choose x = (1, 0, . . . , 0). For i ≥ 1, x solves the following system of equations: (λ1 )x1 = (−λi )x0 , (−λ1 )x1 + λ2 x2 = (−λi )x1 , (−λk )xk + λk+1 xk+1 = (−λi )xk , k ≥ 2, which implies xk = 0, k > i, λk+1 xk = (−1) xk+1 , k < i; λi − λk
(4.14) (4.15)
and by choosing xi = 1, it follows that
λk+1 · · · λi (λi − λk ) · · · (λi − λi−1 ) (k + θ ) · · · (i + θ − 1) i = (−1)i−k k (i + θ + k − 1) · · · (i + θ + i − 2) (k + θ )(i−1) i−k i = (−1) . k (i + θ )(i−1)
xk = (−1)i−k
(4.16)
Similarly, let y = (y0 , . . . , ym ) be a right eigenvector corresponding to the eigenvalue −λi . If i = 0, we can choose y = (1, . . . , 1) . For i ≥ 1, y solves the following system of equations: y0 = 0, λk yk−1 − λk yk = −λi yk , k ≥ 1, which implies, by choosing yi = 1, that yk = 0, k < i, k (i + θ )(i) λk · · · λi+1 = = , k > i. yk i (k + θ )(i) (λk − λi ) · · · (λi+1 − λi )
For each i in {0, 1, . . . , m}, let ⎧ i=0 ⎪ ⎨ δ0 j , 0, j>i>0 ui j = ⎪ ⎩ (−1)i− j i ( j+θ )(i−1) , j ≤ i, i > 0, j (i+θ ) (i−1)
and ⎧ i=0 ⎪ ⎨ 1, 0, j
0, i ( j+θ )(i) ⎛ ⎞ vi0 ⎜ vi1 ⎟ ⎜ ⎟ ui = (ui0 , . . . , uim ), vi = ⎜ . ⎟ , ⎝ .. ⎠ vim
⎞
⎛
u0 ⎜ u1 ⎟ ⎜ ⎟ U = ⎜ . ⎟ , V = (v0 , . . . , vm ). ⎝ .. ⎠ um Then ui and vi are the respective left and right eigenvectors of Q corresponding to the eigenvalue −λi . By definition, for i < j, we have ui · v j = 0 and ui · vi = 1. For i > j, ui · v j =
i
∑ uik v jk
k= j
= Ci j Ai j , where i (θ + j)( j) , Ci j = j (θ + i)(i−1) i i − j (θ + k)(i−1) Ai j = ∑ (−1)i−k . i − k (θ + k)( j) k= j Let h(t) = (t − 1)i− j t θ +i+ j−2 . Then it is easy to see that Ai j = which implies that
d i− j−1 h(t) |t=1 = 0, dt i− j−1 ui · v j = δi j .
(4.17)
(4.18)
Let I be the (m + 1) × (m + 1) identity matrix and ⎛ ⎞ 0 0 ··· 0 0 −λ0 ⎜ ⎟ ⎜ ⎟ ⎜ 0 −λ 0 ··· 0 0 ⎟ 1 ⎜ ⎟ ⎜ ⎟ Λ =⎜ ⎟ ⎜ ⎟ ⎜ ··· ··· ··· ··· ··· ··· ⎟ ⎜ ⎟ ⎝ ⎠ 0 0 0 · · · 0 −λm and
⎛
Λt
e
e−λ0 t
⎜ ⎜ ⎜ 0 ⎜ ⎜ =⎜ ⎜ ⎜ ··· ⎜ ⎝ 0
0
0
···
0
e−λ1 t
0
···
0
···
··· ··· ···
0
0
···
0
0
⎞
⎟ ⎟ 0 ⎟ ⎟ ⎟ ⎟. ⎟ ··· ⎟ ⎟ ⎠ e−λm t
It follows from (4.18) that UV = VU = I, UQV = Λ , Q = VΛ U. These combined with (4.13) imply that T(t) = VeΛ t U.
(4.19)
Hence θ (t) = dmn
m
∑ vkm ukn e−λk t
k=0
⎧ k m (θ +k)(k) (θ )(k−1) −λk t , n = 0 ⎨ 1 + ∑m k=1 (−1) k (θ +m)(k) (θ +k)(k−1) e (4.20) = ⎩ ∑m (−1)k−n m k (θ +k)(k) (θ +n)(k−1) e−λk t , n ≥ 1 k=n k n (θ +m)(k) (θ +k)(k−1) ⎧ (m)[k] k 2k−1+θ (θ + n) −λk t , n = 0 ⎨ 1 + ∑m (k−1) e k=1 (θ +m)(k) (−1) k! = (m)[k] k−n 2k−1+θ k (θ + n) −λk t , n ≥ 1. ⎩ ∑m (k−1) e k=n (θ +m) (−1) k! n (k)
which, by letting m approach infinity, implies (4.10).
2
Corollary 4.4 For any t > 0 and any r ≥ 1, P{Dt ∈ N} = 1,
(4.21)
and
E[Dr (t)] < ∞.
(4.22)
Proof. Fix t > 0 and r ≥ 1. It follows from Theorem 4.3 that P{Dt ∈ N} =
∞
∑ dnθ (t)
n=0 ∞ ∞
(2k − 1 + θ ) k−n k (n + θ )(k−1) e−λk t (−1) ∑∑ n k! n=0 k=n ∞ k (2k − 1 + θ ) k (n + θ )(k−1) e−λk t (−1)k−n = ∑∑ n k! k=0 n=0 ∞ k (2k − 1 + θ ) −λk t k−n k = 1 + ∑ ∑ (−1) (n + θ )(k−1) e n k! k=0 n=0
=
∞
= 1 + ∑ Ak0 k=0
(2k − 1 + θ ) −λk t e , k!
where the interchange of summation is justified by the absolute convergence of the series. The result (4.21) now follows from (4.17). From (4.21), we have E[Dr (t)] =
∞
∑ nr dnθ (t).
(4.23)
n=1
For any n ≥ 1 and k ≥ n, let Bnk =
(2k − 1 + θ ) k (n + θ )(k−1) e−λk t . n k!
By direct calculation, there exists k0 ≥ 1 such that Bn(k+1) 2k + 1 + θ n + θ + k − 1 = e−(λk+1 −λk )t Bnk 2k − 1 + θ k + 1 − n 1 ≤ (2k + 1 + θ )e−kt < , k ≥ k0 , 2 which, combined with (4.10), implies that for n ≥ k0 ∞
dnθ (t) ≤
∑ Bnk ≤ Bnn .
k=n
Set
nr (n + θ )(n) e−λn t . n! It follows by direct calculation that for n large enough, we have En = nr Bnn =
(4.24)
En+1 1 = e−(λn+1 −λn ) 1 + En n
r
(2n + θ + 1)(2n + θ ) (n + 1)(n + θ )
≤ (2 + θ )r+2 e−(λn+1 −λn ) < 1. Hence ∞
∑ En < ∞
n=1
which, combined with (4.23) and (4.24), implies (4.21). 2
4.4 Notes The coalescent was introduced by Kingman in [128] and [129]. The study on the theory of lines of descent in Griffiths [93] has the coalescent’s ingredients. In Watterson [184], the coalescent is analyzed in the context of infinite alleles models. Further studies on the coalescent for the infinite alleles model are found in [42], [43], and [36]. Other developments of coalescent theory include Hudson [105], Tajima [169], Tavar´e [170], and Neuhauser and Krone [140]. In recent years, coalescent processes allowing multiple collisions have been studied extensively. For details, see [153], [158], [161], [138], [27], and the references therein. A more comprehensive account of coalescent theory can be found in a recent book by Wakeley [178]. The treatments in Sections 4.1 and 4.2 follow Kingman [128]. The result in Theorem 4.3 is due to Tavar´e [170] and a more detailed version of his proof is provided here. Theorem 4.2 and Corollary 4.4 are based on results in Ethier and Griffiths [59].
Chapter 5
Stochastic Dynamics
The backward-looking analysis presented in the coalescent represents a major part in the current theory of population genetics. Traditionally, population genetics focuses mainly on the impact of various evolutionary forces on future generations of the population. In this chapter, we turn to this forward-looking viewpoint and study two stochastic models: the infinitely-many-neutral-alleles model, and the Fleming–Viot process with parent-independent mutation. The evolutionary forces in both models are mutation and random sampling. We establish the reversibility of both models, with the Poisson–Dirichlet distribution and the Dirichlet process as the respective reversible measures. Explicit representations involving the coalescent, are obtained for the transition probability functions for each of the two models. These reflect the natural relation between the forward-looking and backwardlooking viewpoints. The two-parameter counterpart of the infinitely-many-neutralalleles model is discussed briefly, and the relation between the Fleming–Viot process and a continuous branching process with immigration is investigated. The latter can be viewed as a dynamical analog of the beta–gamma relation, derived in Chapter 1. For basic terms and results on semigroup and generators, we refer to [62].
5.1 Infinitely-many-neutral-alleles Model For each K ≥ 2, let
ΔK =
K
(x1 , . . . , xK ) ∈ [0, 1] : ∑ xi = 1 . K
(5.1)
i=1
Recall from Chapter 1 that the K-allele Wright–Fisher diffusion with symmetric mutation is a Δ K -valued diffusion process xK (t) = (x1 (t), . . . , xK (t)) generated by
S. Feng, The Poisson–Dirichlet Distribution and Related Topics, Probability and its Applications, DOI 10.1007/978-3-642-11194-5 5, © Springer-Verlag Berlin Heidelberg 2010
5 Stochastic Dynamics K ∂2 f ∂f 1 K ai j (x) + ∑ bi (x) , ∑ 2 i, j=1 ∂ xi ∂ x j i=1 ∂ xi 1 Kθ − xi , ai j (x) = xi (δi j − x j ), bi (x) = 2(K − 1) K
LK f (x) =
with domain D(LK ) = { f ∈ C(ΔK ) : f = g|ΔK for some g ∈ C2 (RK )}. The operator LK is closable in C(ΔK ), and for simplicity, LK will also be used to denote the closure. This process has many interesting properties. In particular, we will show in the next theorem that it is reversible with reversible measure Π K = θ θ , . . . , K−1 ). Dirichlet( K−1 Theorem 5.1. The diffusion process xK (t) = (x1 (t), . . . , xK (t)) is reversible and the reversible measure is Π K ; i.e., for any f , g in D(LK ),
f LK g d Π =
K
ΔK
ΔK
gLK f d Π K .
(5.2)
Proof. For any K ≥ 1, let NK = N × · · · × N. The generic element of N is denoted by n = (n1 , . . . , nK ). Define D0 (LK ) = { f n : fn (x) = xn = x1n1 · · · xKnK , x ∈ ΔK , n ∈ NK }, where f0 ≡ 1. Clearly D0 (LK ) is a subset of D(LK ). Furthermore, it is a core for LK . Thus by linearity, it suffices to verify (5.2) for any f , g in D0 (LK ). For any n, m in NK , choose f = fn , g = fm in (5.2) and set |n| = n1 + · · · + nK , |m| = m1 + · · · + mK , ei = (0, . . . , 1 , . . . , 0) ∈ Δ K , i = 1, . . . , K. ith
Clearly (5.2) holds for n = m = 0. Assume that m = 0 and set a = LK fm (x) =
and
θ K−1 .
Then
1 K 1 mi (mi − 1) fm−ei (x) − [|m|2 − |m|] f m (x) ∑ 2 i=1 2
+
Ka a K mi fm−ei (x) − |m| fm (x) ∑ 2 i=1 2
=
1 K 1 mi (mi + a − 1) fm−ei (x) − |m|(|m| + Ka − 1) fm (x), ∑ 2 i=1 2
ΔK
1 K ∑ mi (mi + a − 1) fm+n−ei (x) 2 i=1 ΔK 1 − |m|(|m| + Ka − 1) fm+n (x) d Π K 2 |m|(|m| + Ka − 1) 1 K mi (mi + a − 1) = ∑ m + n + θ − 1 − |m| + |n| + K θ − 1 2 i=1 i i K−1 K−1
f LK g d Π = K
Γ (m1 + n1 + a) · · · Γ (mK + nK + a) Γ (Ka) Γ (|m| + |n| + Ka − 1) Γ (a) · · · Γ (a) K |m||n| 1 mi n i −∑ = Kθ θ 2 |m| + |n| + K−1 − 1 i=1 mi + ni + K−1 −1 ×
×
Γ (m1 + n1 + a) · · · Γ (mK + nK + a) Γ (Ka) , Γ (|m| + |n| + Ka − 1) Γ (a) · · · Γ (a)
which is symmetric with respect to n and m. Hence we have (5.2) and the theorem. 2 Let ∇K , ∇∞ and ∇ be defined as in (2.1). For notational convenience, we use x, y, and z to denote the generic elements of ΔK , ∇K , ∇∞ , and ∇ in the sequel. Set σK (x) = (x(1) , . . . , x(K) ), the decreasing order statistics of x. Then clearly ∇K = σK (ΔK ). When K approaches infinity, Ethier and Kurtz [61] show that σK (xK (t)) converges in distribution to an infinite dimensional diffusion process x(t) in ∇ with generator L f (x) =
∂2 f θ ∞ ∂f 1 ∞ ai j (x) − ∑ xi . ∑ 2 i, j=1 ∂ xi ∂ x j 2 i=1 ∂ xi
(5.3)
The domain of the generator L is D(L) = span{1, ϕ2 , ϕ3 , . . .} ⊂ C(∇), where
(5.4)
∞
ϕn (x) = ∑ xin ,
(5.5)
i=1
is defined on ∇∞ and extends continuously to ∇. The process x(t) is called the infinitely-many-neutral-alleles model. It can be shown that starting from any point in ∇, the process x(t) stays in ∇∞ for positive t with probability one. Lemma 5.1. For each r ≥ 1, and n = (n1 , . . . nr ), let gn (x) = ϕn1 (x) · · · ϕnr (x), and gln (x) denote gn (x) without the factor ϕnl . Similarly gn (x) without factors ϕnl , ϕnm is denoted by glm n (x). Then
−
(5.6)
|n| (|n| + θ − 1)gn , 2
where ϕ1 ≡ 1. Proof. Write L = L 1 − L2 − L3 , with L1 =
1 ∞ ∂2 ∑ xi ∂ x2 , 2 i=1 i
L2 =
∂2 f 1 ∞ xi x j , ∑ 2 i, j=1 ∂ xi ∂ x j
L3 =
θ ∞ ∂ ∑ xi ∂ xi . 2 i=1
By direct calculation, r ∂ ϕ nl ∂ gn = ∑ gln , ∂ xi ∂ xi l=1 r ∂ 2 ϕnl ∂ 2 gn = ∑ gln ∂ xi ∂ x j ∂ xi2 l=1
+
∑
1≤l=m≤r
glm n
∂ ϕnl ∂ ϕnm ∂ xi ∂ x j
which implies nl 1 L gn = ∑ ϕnl −1 gln + nl nm ϕnl +nm −1 glm n , 2 2 1≤l∑ l=1 =m≤r r 1 |n| nl 2 + gn , = L gn = ∑ g n ∑ 2 1≤l=m≤r 2 l=1 2 1
L3 g n =
r
θ |n| gn . 2
(5.7)
(5.8) (5.9)
The equality (5.6) now follows from (5.7)–(5.8). 2 Theorem 5.2. The Poisson–Dirichlet distribution Πθ with parameter θ is the unique stationary and reversible distribution of the diffusion process x(·). Proof. We first show that Πθ is stationary and reversible for x(·). Set
ρK : ∇K → ∇∞ , (x1 , . . . , xK ) → (x1 , . . . , xK , 0, . . .). θ θ , . . . , K−1 ) distribution and Π θ ,K deLet XK = (X1 , . . . , Xk ) have the Dirichlet( K−1 note the law of ρK (σK (XK )). Then it follows from Theorem 2.1 that the sequence Π θ ,K converges weakly to Πθ . For any f in C(∇), let πK ( f ) = f ◦ ρK ◦ σK . Then πK maps D(L) into D(LK ). For any f , g in D(L), by direct calculation,
∇
f Lg d Πθ = lim
f Lg d Π θ ,K
K→∞ ∇∞
= lim
K→∞ ΔK
= lim
K→∞ ΔK
= lim
K→∞ ΔK
=
∇
πK ( f )πK (Lg) d Π θ ,K πK ( f )LK πK (g) d Π θ ,K πK (g)LK πK ( f ) d Π θ ,K
gL f d Πθ ,
where the fourth equality follows from Theorem 5.1. Assume that Π is another stationary distribution. Then for any r ≥ 1, and any n = (n1 , . . . , nr ) satisfying ni ≥ 2, ∇
Lgn d Π = 0.
(5.10)
This, combined with (5.6), implies that EΠ [gn ] can be calculated from gm such that |m| < |n|. It follows from (5.6) and (5.10) that Lϕ2 = 1 − (1 + θ )ϕ2 and EΠ [ϕ2 ] =
1 . θ +1
Induction on n shows that EΠ [gn ] = EΠθ [gn ] for all n. Hence Π = Πθ .
2
5.2 A Fleming–Viot Process In this section, we will study a labeled version of the infinitely-many-neutral-alleles model. This process turns out to be a special Fleming–Viot (FV) process. The FV process is a probability-valued stochastic process describing the evolution of the distribution of genotypes in a population under the influence of mutation and sampling with replacement.
Let S be a compact metric space, C(S) the set of continuous functions on S, and M1 (S) the space of all probability measures on S equipped with the weak topology. For each μ in M1 (S) and φ in C(S), let μ , φ = S φ (x)μ (dx). Let A be the generator of a Markov process on S with domain D(A). Define D = {F : F(μ ) = f ( μ , φ1 , . . . , μ , φk ), f ∈ Cb2 (Rk ), φi ∈ D(A), k = 1, 2, . . . ; i = 1, . . . , k, μ ∈ M1 (S)},
(5.11)
where Cb2 (R) denotes the set of all functions on R with bounded continuous second order derivatives. Then the generator of the FV process with sampling rate 1 has the form δ F(μ ) μ (dx) L F(μ ) = A δ μ (x) S 1 δ 2 F(μ ) (5.12) + Q(μ ; dx, dy) 2 S S δ μ (x)δ μ (y) where
δ F(μ ) F(μ + εδx ) − F(μ ) = lim , ε →0+ δ μ (x) ε F(μ + ε1 δx + ε2 δy ) − F(μ ) δ 2 F( μ ) = lim , δ μ (x)δ μ (y) ε1 →0+, ε2 →0+ ε1 ε2 Q(μ ; dx, dy) = μ (dx)δx (dy) − μ (dx)μ (dy),
(5.13) (5.14)
and δx stands for the Dirac measure at x ∈ S. For F(μ ) = f ( μ , φ ), one has
δ F(μ ) = f ( μ , φ )φ (x), δ μ (x) δ 2 F(μ ) = f ( μ , φ )φ (x)φ (y). δ μ (x)δ μ (y) Nonprobability measures are used in the definition of δ F(μ )/δ μ (x) even though F is only defined on M1 (S). But F is clearly well defined for any finite measure. Another way to define the derivative is to use a convex combination and let
δ F(μ ) F((1 − ε )μ + εδx ) − F(μ ) = lim . δ μ (x) ε →0+ ε This only uses F on M1 (E). Although it will result in a different derivative, it will not alter L F(μ ). For this reason, we will stick with (5.13) and (5.14). The domain of L is D. The space S is called the type space or the space of alleles, A is known as the mutation operator, and the last term describes the continuous random sampling.
Let M = {Fφ1 ,...,φn : Fφ1 ,...,φn ( μ ) = μ , φ1 · · · μ , φn ,
(5.15)
φi ∈ D(A), 1 ≤ i ≤ n, n = 1, 2, . . .}. Each element in M is called a monomial and n is the order of monomial Fφ1 ,...,φn . M is a core for L . Let ν0 be a diffuse probability measure in M1 (S), and let Πθ ,ν0 be the labeled Poisson–Dirichlet distribution or Dirichlet process. By introducing labels to the infinitely-many-neutral-alleles model, we get an FV process with mutation operator Aφ (x) =
θ 2
S
(φ (y) − φ (x))ν0 (dy), D(A) = C(S).
(5.16)
We will call this process an FV process with the parent-independent mutation. For any n ≥ 1, 1 ≤ k ≤ n, define
σ (n, k) = {c = (c1 , . . . , ck ) : min c1 < · · · < min ck },
(5.17)
where each c is a partition of {1, . . . , n} with k non-empty sets. The number of elements in ci is denoted by |ci |, i = 1, . . . , k. Lemma 5.2. For any n ≥ 1, and φ1 , . . . , φn in C(S), M1 (S)
μ , φ1 · · · μ , φn Πθ ,ν0 (d μ )
(5.18)
θk k = ∑ ∑ (|c1 | − 1)! · · · (|ck | − 1)! ν0 , ∏ φ j . θ(n) ∏ j∈ci i=1 k=1 c∈σ (n,k) n
Proof. Let (Y1 ,Y2 , . . .) have law Πθ and ξ1 , ξ2 , . . . , be independent and identically distributed with common distribution ν0 . Then by direct calculation, M1 (S)
μ , φ1 · · · μ , φn Πθ ,ν0 (d μ ) ∞
∑
=
i1 ,...,in =1 n
=
E
(5.19)
n
∏ Yil φl (ξil ) l=1 k
∑ ∑ ∏ ν0 , ∏ φ j
k=1 c∈σ (n,k) i=1
j∈ci
∑
|c |
|c |
E[Yn1 1 · · ·Ynk k ].
n1 ,...,nk distinct
It follows from the Poisson process representation of the Poisson–Dirichlet distribution in Section 2.1, that
∑
|c |
|c |
= (E[τθn ])−1
∑
E[Yn1 1 · · ·Ynk k ]
n1 ,...,nk distinct |c |
|c |
E[Xn11 · · · Xnkk ],
(5.20)
n1 ,...,nk distinct
=
∞ Γ (θ ) k |c | E[ ∑ X j i ] ∏ Γ (θ + n) i=1 j=1
= (|c1 | − 1)! · · · (|ck | − 1)!
θk , θ(n)
where {Xi : i = 1, . . .} is the Poisson process with intensity measure θ x−1 e−x dx, and the last equality follows from equation (A.2). The result (5.18) now follows from (5.19) and (5.20). 2 Theorem 5.3. The FV process with parent-independent mutation has a unique stationary distribution Πθ ,ν0 . Proof. Fix a Fφ1 ,...,φn in M . Write Fφi1 ,...,φn (μ ) = Fφ1 ,...,φn (μ ) = ij
∏ μ , φl ,
(5.21)
∏ μ , φl ,
(5.22)
l=i
l=i, j
(i, j)
Fφ1 ,...,φn (μ ) = μ , φi φ j Fφ1 ,...,φn (μ ). ij
(5.23)
Then by direct calculation n δ Fφ1 ,...,φn (μ ) = ∑ φi (x)Fφi1 ,...,φn , δ μ (x) i=1
δ 2 Fφ1 ,...,φn (μ ) = ∑ φi (x)φ j (y)Fφi1j ,...,φn , δ μ (x)δ μ (y) 1≤i= j≤n
(5.24) (5.25)
and
∑
L Fφ1 ,...,φn (μ ) =
1≤i< j≤n n
+
(i, j)
Fφ1 ,...,φn (μ )
n(n − 1 + θ ) θ
ν0 , φi Fφi1 ,...,φn (μ ) − Fφ1 ,...,φn (μ ). ∑ 2 i=1 2
If Π is a stationary distribution, then M1 (S)
or equivalently
(5.26)
L Fφ1 ,...,φn (μ )Π (d μ ) = 0,
∑
M1 (S)
=
(i, j)
1≤i< j≤n
Fφ1 ,...,φn (μ ) +
n(n − 1 + θ ) 2
M1 (S)
θ n ∑ ν0 , φi Fφi1 ,...,φn (μ ) Π (d μ ) 2 i=1
Fφ1 ,...,φn (μ )Π (d μ ).
Thus M1 (S) Fφ1 ,...,φn (μ )Π (d μ ) is determined recursively from the moments of the lower order monomials. For n = 1, we have M1 (S)
Fφ1 (μ )Π (d μ ) = ν0 , φ1 =
M1 (S)
Fφ1 (μ )Πθ ,ν0 (d μ ),
which implies that Π = Πθ ,ν0 . This gives uniqueness. Next we show that Πθ ,ν0 is indeed a stationary distribution. It follows from Lemma 5.2 that
n(n − 1 + θ ) Fφ1 ,...,φn ( μ )Πθ ,ν0 (d μ ) 2 M1 (S) =
(5.27)
θk k n(n − 1 + θ ) n (|c1 | − 1)! · · · (|ck | − 1)! ν0 , ∏ φ j , ∑ ∑ 2 θ(n) ∏ j∈ci i=1 k=1 c∈σ (n,k) (i, j)
M1 (S)
Fφ1 ,...,φn ( μ )Πθ ,ν0 (d μ )
n−1
=
∑
∑
(5.28)
(|b1 | − 1)! · · · (|bk | − 1)!
k=1 b∈σ (n−1,k)
ν0 , φi
M1 (S)
n−1
=
∑
θk
k
∏ θ(n−1) r=1
ν0 , ∏ h l l∈br
Fφi1 ,...,φn (μ )Πθ ,ν0 (d μ )
∑
(|d1 | − 1)! · · · (|dk | − 1)!
k=1 d∈σ (n−1,k)
θk θ(n−1)
k
(5.29)
ν0 , φi ∏ ν0 , ∏ hl r=1
,
l∈dr
where for every b in σ (n − 1, k), the term ∏kr=1 ν0 , ∏l∈br hl in (5.28) corresponds to ∏kr=1 ν0 , ∏l∈cr φl with c in σ (n, k) and certain |cl | ≥ 2; similarly, the term
ν0 , φi ∏kr=1 ν0 , ∏l∈dr hl in (5.29), corresponds to a term ∏k+1 r=1 ν0 , ∏l∈cr φl with c in σ (n, k) and certain |cl | = 1. Thus we can write θ n (i, j) i ∑ Fφ ,...,φn (μ ) + 2 ∑ ν0, φi Fφ1,...,φn (μ ) Πθ ,ν0 (d μ ) M1 (S) 1≤i< j≤n 1 i=1
n
=
∑ ∑
k=1 c∈σ (n,k)
k
Bc ∏ ν0 , ∏ φ j . i=1
j∈ci
It remains to make explicit the expression of Bc . In order to get contributions from
∑
(i, j)
M1 (S) 1≤i< j≤n
Fφ1 ,...,φn (μ )Πθ ,ν0 (d μ ),
there is at least one 1 ≤ i ≤ k such that |ci | ≥ 2. For each i satisfying |ci | ≥ 2, there are |c | i 2 number of ways to obtain element b in σ (n − 1, k) by combining two elements in ci into one. The coefficients of the b’s so obtained are the same. Thus the total contributions will be 1 θk |ci | | − 1)! · · · (|c | − 1)! . (5.30) (|c 1 k ∑ 2 |ci | − 1 θ(n−1) i:|c |≥2 i
To have contributions from
θ n ∑ ν0 , φi Fφi1,...,φn (μ )Πθ ,ν0 (d μ ), M1 (S) 2 i=1
there is at least one 1 ≤ i ≤ k such that |ci | = 1. One element b in σ (n − 1, k − 1) is obtained by removing ci from c. Hence the total contribution from this term is
θ θ k−1 (|c | − 1)! · · · (|c | − 1)! . 1 k 2 i:|c∑ θ(n−1) |=1
(5.31)
i
Putting (5.30) and (5.31) together, one has |ci | θ k θ Bc = (|c1 | − 1)! · · · (|ck | − 1)! ∑ + 2 θ 2 (n−1) i:|c |≥2 i
θ k−1 ∑ θ(n−1) i:|c |=1
i
θk n(n − 1 + θ ) (|c1 | − 1)! · · · (|ck | − 1)! , = 2 θ(n) which is the same as the coefficient of ∏ki=1 ν0 , ∏ j∈ci φ j in n(n − 1 + θ ) 2
M1 (S)
Fφ1 ,...,φn (μ )Π (d μ ).
Hence Πθ ,ν0 is a stationary distribution.
2
Theorem 5.4. The stationary distribution Πθ ,ν0 is reversible for the FV process with parent-independent mutation. Proof. To prove the reversibility, it suffices to verify that M1 (S)
F(μ )L G(μ )Πθ ,ν0 (d μ ) =
M1 (S)
G(μ )L F(μ )Πθ ,ν0 (d μ )
(5.32)
for any F, G in M . Since μ , 1 can be included in both F and G if necessary, we can assume that
F( μ ) = μ , φ1 · · · μ , φn , G(μ ) = μ , ψ1 · · · μ , ψn , where φ1 , . . . , φn , ψ1 , . . . , ψn are in C(S). Let
φ = (φ1 , . . . , φn ), ψ = (ψ1 , . . . , ψn ), H(φ , ψ ) =
∑
n
∏ μ , φl μ , ψi ψ j ∏ μ , ψl Πθ ,ν0 (d μ )
1≤i< j≤n M1 (S) l=1 n
θ + ∑ ν0 , ψi 2 i=1
l=i, j
n
∏ μ , φl ∏ μ , ψl Πθ ,ν0 (d μ ).
M1 (S) l=1
l=i
It follows from (5.26), that (5.32) is equivalent to H(φ , ψ ) = H(ψ , φ ). To deal with 2n functions (φ1 , . . . , φn , ψ1 , . . . , ψn ), we are led to consider partitions of {1, . . . , 2n} into non-empty sets. Let ς (n, k) denote the set of partitions of {1, . . . , n} into k sets. Here 1 ≤ k ≤ 2n, and the empty set is allowed. Then for every 1 ≤ k ≤ 2n and {c} in σ (2n, k) there is a unique way of finding b and d in ς (n, k) such that bi
di = 0, / ci = bi
{n + l : l ∈ di }, i = 1, . . . , k.
(5.33)
The integration with respect to Πθ ,ν0 in H(φ , ψ ) involves only monomials of order 2n − 1. Thus H(φ , ψ ) =
1 2n ∑ ∑ h(b, d) 2 k=1 b,d∈ς (n,k) k
× ∏(|bi | + |di | − 1)! i=1
(5.34)
θk
k
θ(2n−1) ∏ i=1
ν0 , ∏ φ j ∏ ψ j . j∈bi
j∈di
Similarly H(ψ , φ ) =
1 2n ∑ ∑ h(d, b) 2 k=1 b,d∈ς (n,k) k
× ∏(|bi | + |di | − 1)! i=1
(5.35)
θk
k
θ(2n−1) ∏ i=1
ν0 , ∏ ψ j ∏ φ j . j∈bi
j∈di
If h(b, d) is symmetric in b and d, we then have H(φ , ψ ) = H(ψ , φ ). The coefficient h(b, d) can calculated as follows. The contributions from the term
∑
1≤i< j≤n
n
∏ μ , φl μ , ψi ψ j ∏ μ , ψl Πθ ,ν0 (d μ ) M1 (S) l=1
l=i, j
are from sets di satisfying |di | ≥ 2. For each such index i, the contribution is k |di | 1 θk (|b | + |d | − 1)! . r r ∏ 2 |bi | + |di | − 1 r=1 θ(2n−1)
(5.36)
The contributions from the term
θ n ∑ ν0 , ψ i 2 i=1
n
∏ μ , φl ∏ μ , ψl Πθ ,ν0 (d μ )
M1 (S) l=1
l=i
correspond to sets ci with |ci | = 1 or equivalently to sets di and bi satisfying |di | = 1, |bi | = 0. The total contribution for each such i is
θ k−1
k
∏(|br | + |dr | − 1)! θ(2n−1) .
(5.37)
r=1
Adding (5.36) and (5.37) together, yields h(b, d) =
|di |(|di | − 1) + ∑ 1. |b | + |di | − 1 i:|d |=1,|b i:|d |≥2 i |=0
∑ i
i
(5.38)
i
Set I1 = {1 ≤ i ≤ k : |di | = 0}, J1 = {1 ≤ i ≤ k : |bi | = 0}, I2 = {1 ≤ i ≤ k : |di | = 1}, J2 = {1 ≤ i ≤ k : |bi | = 1}, I3 = {1 ≤ i ≤ k : |di | ≥ 2}, J3 = {1 ≤ i ≤ k : |bi | ≥ 2}. It follows from (5.33) that the intersection of I1 and J1 is empty. The function h(b, d) can be written as |di |(|di | − 1) χI3 (i) + χI2 J1 (i). |b i=1 i | + |di | − 1 k
h(b, d) = ∑
(5.39)
The difference between the ith terms of h(b, d) and h(d, b) is |di |(|di | − 1) [χI3 J1 (i) + χI3 J2 (i) + χI3 J3 (i)] + χI2 J1 (i) |bi | + |di | − 1 |bi |(|bi | − 1) [χJ I (i) + χJ3 I2 (i) + χJ3 I3 (i)] + χJ2 I1 (i) − |bi | + |di | − 1 3 1 = (|di | − |bi |)[χJ3 I3 (i) + χI3 J1 (i) + χI3 J2 (i) + χI2 J1 (i) + χJ2 I1 (i) + χJ3 I1 (i) + χJ3 I2 (i)] = (|di | − |bi |), which implies the symmetry of h(b, d), and the reversibility of Πθ ,ν0 .
2
5.3 The Structure of Transition Functions
5.3 The Structure of Transition Functions In this section, we study the structure of the transition probability function of the FV process with parent-independent mutation. An explicit representation is derived, which has an intuitive explanation. Using this result, we will show that the transition probability function of the infinitely-many-neutral-alleles process {x(t) : t ≥ 0} has a density with respect to Πθ . For any μ in M1 (S), let μ n denote the n-fold product measure μ × · · · × μ , and define P(t, μ , d ν ) = d0θ (t)Πθ ,ν0 (d ν ) ∞
+ ∑ dnθ (t)
n=1
Sn
where
νn,θ =
1 n+θ
(5.40)
Πn+θ ,νn,θ (d ν )μ n (dx1 × · · · dxn ), θ
n
∑ δxi + n + θ ν0 ,
(5.41)
i=1
and {dnθ (t) : n = 1, . . . ;t > 0} are given in Theorem 4.3. For any Fφ1 ,...,φm in M , define Pt (Fφ1 ,...,φm )(μ ) =
M1 (S)
Fφ1 ,...,φm (ν )P(t, μ , d ν ).
(5.42)
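The representation (5.40)–(5.42) is already an algorithm for sampling from $P(t,\mu,\cdot)$: draw the number of surviving ancestral lines $n$ from the law of $D_t$, draw $x_1,\ldots,x_n$ i.i.d. from $\mu$, and then sample from the Dirichlet process with parameter $n+\theta$ and base measure $\nu_{n,\theta}$ of (5.41). The Python sketch below is a minimal illustration under several simplifying assumptions that are ours rather than the text's: the law of $D_t$ is approximated by running the pure-death chain from a large starting level (instead of evaluating the series (4.10)), $\mu$ and $\nu_0$ are supplied as samplers on $[0,1]$, and the Dirichlet-process draw uses truncated stick breaking.

```python
import numpy as np

def sample_Dt(theta, t, start=500, rng=None):
    """Approximate draw of D_t: pure-death chain with rates k(k-1+theta)/2, started from a
    large level `start` as a stand-in for the entrance boundary at infinity."""
    rng = np.random.default_rng(rng)
    k, s = start, 0.0
    while k > 0:
        s += rng.exponential(2.0 / (k * (k - 1 + theta)))
        if s > t:
            break
        k -= 1
    return k

def sample_transition(t, theta, mu_sampler, nu0_sampler, n_atoms=500, rng=None):
    """One (atoms, weights) draw from P(t, mu, .) following (5.40)-(5.41):
    n ~ D_t, x_1,...,x_n i.i.d. from mu, then a truncated stick-breaking draw from the
    Dirichlet process with parameter n + theta and base measure nu_{n,theta}."""
    rng = np.random.default_rng(rng)
    n = sample_Dt(theta, t, rng=rng)
    xs = [mu_sampler(rng) for _ in range(n)]
    c = n + theta
    v = rng.beta(1.0, c, size=n_atoms)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    # nu_{n,theta} puts mass 1/(n+theta) on each x_i and mass theta/(n+theta) on nu_0
    atoms = np.array([xs[rng.integers(n)] if (n > 0 and rng.random() < n / c)
                      else nu0_sampler(rng) for _ in range(n_atoms)])
    return atoms, w

if __name__ == "__main__":
    atoms, w = sample_transition(0.5, 2.0,
                                 mu_sampler=lambda r: r.uniform(0.0, 1.0),
                                 nu0_sampler=lambda r: r.uniform(0.0, 1.0),
                                 rng=1)
    print("atoms kept:", len(w), " total weight of the 10 largest:", np.sort(w)[-10:].sum())
```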
Let (X1 , . . . , Xn ) have a Dirichlet(1, . . . , 1) or uniform distribution on n , and, for s1 , . . . , sn in S, set
Ξn (s1 , . . . , sn ) = X1 δs1 + · · · + Xn δsn .
Lemma 5.3. For any m, n ≥ 1 and Fφ1 ,...,φm in M , Sn
E[Fφ1 ,...,φm (Ξn (s1 , . . . , sn ))] μ n (ds1 × . . . × dsn )
m n k [k] =∑ ∑ |b1 |! · · · |bk |! ∏ μ , ∏ φi , j=1 k=1 n(m) b∈σ (m,k) i∈b j
(5.43)
where n[0] = 1, n[k] = n(n − 1) · · · (n − k + 1), k = 0. Proof. We prove the result by induction on n. For n = 1, n[k] = 0 for k > 1. Thus the right-hand side of (5.43) is μ , ∏m i=1 φi . Since Ξ 1 (s1 ) is just the Dirac measure at s1 , the left-hand side is
S
E[Fφ1 ,...,φm (Ξ1 (s1 ))] μ (ds1 ) =
m
μ , ∏ φi . i=1
Assume (5.43) holds for n − 1 ≥ 1; i.e., for any m ≥ 1, Sn
E[Fφ1 ,...,φm (Ξn−1 (s1 , . . . , sn−1 ))] μ n−1 (ds1 × . . . × dsn−1 )
m (n − 1) k [k] =∑ ∑ |b1 |! · · · |bk |! ∏ μ , ∏ φi . j=1 k=1 (n − 1)(m) b∈σ (m,k) i∈b j
(5.44)
Let (X1 , . . . , Xn ) be uniformly distributed on n . Set X˜n = Xn and X˜i =
Xi , i = 1, . . . , n − 1. 1 − Xn
By direct calculation, the joint density function of (X˜1 , . . . , X˜n ) is the product of a ˜ uniform density function on n−1 and (1 − xn )n−1 . Therefore, ∑n−1 r=1 Xr = 1 − Xn ˜ ˜ ˜ is a Beta(n − 1, 1) random variable, (X1 , . . . , Xn−1 ) is uniform on n−1 , and Xn is independent of (X˜1 , . . . , X˜n−1 ); so d
Ξn (s1 , . . . , sn ) = (1 − Xn )Ξn−1 (s1 , . . . , sn−1 ) + Xn δsn ,
(5.45)
and E[Fφ1 ,...,φm (Ξn (s1 , . . . , sn ))] = E[Fφ1 ,...,φm ((1 − Xn )Ξn−1 (s1 , . . . , sn−1 ) + Xn δsn )]
∑
=
E[(1 − Xn )|B| (Xn )|B | ] c
B⊂{1,...,m}
×E
∏ Ξn−1 (s1 , . . . , sn−1 ), φi ∏c φi (sn), i∈B
(n − 1)(|B|) |Bc |! = E ∑ n(m) B⊂{1,...,m}
(5.46)
i∈B
∏ Ξn−1 (s1, . . . , sn−1), φi ∏c φi (sn ), i∈B
i∈B
where Bc is the complement of B in {1, . . . , m}. Putting together (5.44) and (5.46), we obtain that Sn
E[Fφ1 ,...,φm (Ξn (s1 , . . . , sn ))] μ n (ds1 × . . . × dsn )
(n − 1)(|B|) |Bc |! = μ , ∏ φi ∑ n(m) i∈Bc B⊂{1,...,m} ×
Sn−1
E
∏ Ξn−1 (s1 , . . . , sn−1 ), φi i∈B
μ n−1 (ds1 × · · · × dsn−1 ).
Expanding the integration leads to
E[Fφ1 ,...,φm (Ξn (s1 , . . . , sn ))] μ n (ds1 × . . . × dsn )
m m! μ , ∏ φi = n(m) i=1
(n − 1)(|B|) μ , ∏ φi + ∑ n(m) i∈Bc B⊂{1,...,m},B=0/ Sn
(5.47)
k (n − 1)[k] c ×∑ ∑ |c1 |! · · · |ck |!|B |! ∏ μ , ∏ φi . i∈c j j=1 k=1 (n − 1)(|B|) c∈σ (|B|,k) |B|
Next we show that the right-hand side of (5.43) is the same as the right-hand side of (5.47). This is achieved by comparing the coefficients of the corresponding moments. It is clear that the coefficient nm! of the moment μ , ∏m i=1 φi is the same (m)
for both. For any 1 ≤ r ≤ m and b in π (m, r), the coefficient of ∏rj=1 μ ∏i∈b j φi on the right-hand side of (5.43) is n[r] |b1 |! · · · |br |!. n(m)
(5.48)
On the right-hand side of (5.47) there are r + 1 terms, corresponding to Bc being / the coefficient is b1 , . . . , bk , or the empty set. If Bc = 0, (n − 1)[r] |b1 |! · · · |br |!. n(m)
(5.49)
If Bc = bl for some l = 1, . . . , k, the coefficient is (n − 1)[r−1] |b1 |! · · · |br |!. n(m)
(5.50)
The lemma now follows from the fact that k(n − 1)[r−1] + (n − 1)[r] = n[r] . 2 Lemma 5.4. For any m, n ≥ 1 and Fφ1 ,...,φm in M ,
96
5 Stochastic Dynamics
Πn+θ ,νn,θ
[Fφ1 ,...,φm (ν )] μ n (ds1 × · · · × dsn ) |B| k n[k] = ∑ ∑ (n + θ )(m) ∑ |b1 |! · · · |bk |! ∏ μ , ∏ φi r=1 i∈br B⊂{1,...,m} k=1 b∈σ (|B|,k) c Sn
E
|B |
∑
×
l
∑c
(|d1 | − 1)! · · · (|dl | − 1)!θ l ∏ ν0 , ∏ φi
l=1 d∈σ (|B |,l) |B|
∑
=
r=1
|Bc |
∑ ∑ ∑
∑c
B⊂{1,...,m} k=1 b∈σ (|B|,k) l=1 d∈σ (|B
(5.51)
i∈dr
n[k] C(B, b, d), (n + θ )(m) |,l)
where
k
|b1 |! · · · |bk |! ∏ μ , ∏ φi
C(B, b, d) =
r=1
(5.52)
i∈br
× (|d1 | − 1)! · · · (|dl | − 1)!θ
l
l
∏
r=1
ν0 , ∏ φi
.
i∈dr
When B (Bc ) is empty, the summation over k ( l) is one. Proof. Let (X1 , . . . , Xn ) be uniformly distributed on n , and β is a Beta(n, θ ) random variable, independent of (X1 , . . . , Xn ). It follows from Theorem 2.24 and Theorem 2.25 that Πn+θ ,νn,θ can be represented as the law of
β Ξn (s1 , . . . , sn ) + (1 − β )Ξθ ,ν0 . Hence Πn+θ ,νn,θ
E
[Fφ1 ,...,φm (ν )]
=E
m
∏ β Ξn (s1 , . . . , sn ) + (1 − β )Ξθ ,ν0 , φi i=1
∑
=
E β
|B|
(1 − β )
|Bc |
B⊂{1,...,m}
∏ Ξn (s1, . . . , sn ), φi
Πθ ,ν0
E
∏ ν , φi
i∈Bc
×E
(5.53)
.
i∈B
By direct calculation, E[β |B| (1 − β )|B | ] = c
Γ (n + θ ) Γ (n + |B|)Γ (θ + |Bc |) n(|B|) θ(|Bc |) . = Γ (n)Γ (θ ) Γ (n + θ + m) (n + θ )(m)
It follows from Lemma 5.2 and Lemma 5.3 that
(5.54)
5.3 The Structure of Transition Functions
97
∏ ν , φi
EΠθ ,ν0
(5.55)
i∈Bc |Bc |
=∑
∑
(|d1 | − 1)! · · · (|dl | − 1)!
l=1 d∈σ (|Bc |,l)
and
Sn
E
θl
l
∏ θ(|Bc |) r=1
ν0 , ∏ φi , i∈dr
∏ Ξn (s1 , . . . , sn ), φi
μ n (ds1 × · · · × dsn )
(5.56)
i∈B
k n[k] =∑ ∑ |b1 |! · · · |bk |! ∏ μ , ∏ φi . r=1 k=1 n(|B|) b∈σ (|B|,k) i∈br |B|
2
Putting together (5.53)–(5.56), yields (5.51). Lemma 5.5. For any Fφ1 ,...,φm in M , lim Pt Fφ1 ,...,φm ( μ ) = Fφ1 ,...,φm ( μ ).
(5.57)
t→0
where Pt is defined in (5.42). Proof. Fix m ≥ 1 and Fφ1 ,...,φm in M . Define φ ,...,φm
Aθ1,μ
(n) =
Sn
Πn+θ ,νn,θ
E
[Fφ1 ,...,φm (ν )]μ n (ds1 × · · · × dsn ).
(5.58)
It follows from Lemma 5.4 that φ ,...,φ lim A 1 m (n) = Fφ1 ,...,φm (μ ) n→∞ θ , μ
(5.59)
since n[k] /(n + θ )(m) approaches zero for k = m. From the definition of P(t, μ , ·) in (5.40), it follows that M1 (S)
φ ,...,φm
Fφ1 ,...,φm (ν )P(t, μ , d ν ) = E[Aθ1,μ
φ ,...,φm
(Dt )] = T (t)(Aθ1,μ
)(∞),
(5.60)
where Dt is the pure-death process in Section 4.2, and T (t) is the corresponding semigroup. The equality (5.57) now follows from (5.59), and the fact that φ ,...,φm
lim T (t)(Aθ1,μ
t→0
φ ,...,φm
)(∞) = Aθ1,μ
φ ,...,φm
(∞) = lim Aθ1,μ n→∞
(n). 2
Lemma 5.6. For any Fφ1 ,...,φm in M , dPt Fφ1 ,...,φm = Pt (L Fφ1 ,...,φm ). dt
(5.61)
98
5 Stochastic Dynamics
Proof. By definition, one has Pt (Fφ1 ,...,φm )(μ ) =
∞
φ ,...,φm
∑ dnθ (t)Aθ1,μ
(5.62)
(n).
n=0
Taking the derivative with respect to t, one obtains ∞ dPt Fφ1 ,...,φm d θ (t) φ ,...,φ = ∑ n Aθ1,μ m (n) dt n=0 dt ∞
=
(5.63) φ ,...,φm
θ (t))Aθ1,μ ∑ (−λn dnθ + λn+1 dn+1
(n),
n=0
where λn = n(n − 1 + θ )/2, the interchange of summation and differentiation in the first equality follows from (4.22) in Corollary 4.4, and the second equality is due to the Kolmogorov forward equation for the process D(t). On the other hand,
∑
Pt (L Fφ1 ,...,φm ) =
(i, j)
1≤i< j≤m m
+
Pt Fφ1 ,...,φm
θ ∑ ν0 , φi Pt Fφi1,...,φm 2 i=1
(5.64)
− λm Pt Fφ1 ,...,φm . The coefficient of the term C(B, b, d) on the right-hand side of (5.63) is ∞
n[k]
θ (t)) ∑ (−λn dnθ + λn+1 dn+1 (n + θ(m) )
n=0
(5.65)
n[k] (n + 1)[k] =∑ − (n + θ )(m) (n + 1 + θ )(m) n=0 ∞ n[k] (n − k)(n + m + θ − 1) − n(n + θ − 1) θ + λm − λm = ∑ dn (t) (n + θ )(m) 2 n=1 ∞ n[k] (m − k)(n + m + θ − 1) − λm , = ∑ dnθ (t) (n + θ )(m) 2 n=0 ∞
θ λn+1 dn+1 (t)
where, in the last equality, the term corresponding to n = 0 is always zero since θ −1) 0[k] = 0 for k ≥ 1 and (m−k)(n+m+ equals λm for k = 0. 2 The coefficient of the term C(B, b, d) on the right-hand side of (5.64) can be calculated as follows. First we calculate the contributions from
∑
1≤i< j≤m
(i, j)
Pt Fφ1 ,...,φm .
5.3 The Structure of Transition Functions
99
If both i and j belong to br for some r = 1, . . . , k, then the contribution from term n[k] (i, j) θ . The total of such contributions is Pt Fφ1 ,...,φm is |b1r | ∑∞ n=0 dn (t) (n+θ ) (m−1)
∞
∑
I1 =
dnθ (t)
n=0
n[k] (n + θ )(m−1)
∞
|br | 1 ∑ 2 |br | 1≤r≤k
(5.66)
|br |≥2
|br | − 1 . 2
n[k]
∑ dnθ (t) (n + θ )(m) (n + θ + m − 1) ∑
=
n=0
1≤r≤k |br |≥1
If both i and j belong to dr for some r = 1, . . . , l, then the contribution is ∞ n[k] 1 d θ (t) . ∑ |dr | − 1 n=0 n (n + θ )(m−1)
The total of such contributions is ∞
I2 =
∑ dnθ (t)
n=0
n[k] (n + θ )(m−1)
∞
=
1 |dr | ∑ 2 |dr | − 1 1≤r≤l
(5.67)
|dr |≥2
n[k]
∑ dnθ (t) (n + θ )(m) (n + θ + m − 1) ∑
n=0
1≤r≤l |dr |≥2
|dr | . 2
i Next we calculate the contributions from the term θ2 ∑m i=1 ν0 , φi Pt Fφ1 ,...,φm . In order to have contributions in this case, there must exist an dr containing only one element for some 1 ≤ r ≤ l. The total contribution is ∞
I3 =
n[k]
∑ dnθ (t) (n + θ )(m−1) ∑
n=0
1≤r≤l |dr |=1
θ 1 . 2θ
(5.68)
The contribution from the last term in (5.64) is clearly I4 = −λm
∞
n[k]
∑ dnθ (t) (n + θ )(m) .
(5.69)
n=0
The coefficient can now be calculated by adding up all these terms as ∞
I = I1 + I2 + I3 + I4 = × ∞
=
∑
n=0
n[k]
∑ dnθ (t) (n + θ )(m)
n=0
n+θ +m−1 2
dnθ (t)
∑
r:1≤r≤k
(|br | − 1) +
∑
r:1≤r≤l
|dr | − λm
n[k] (m − k)(n + m + θ − 1) − λm , (n + θ )(m) 2
(5.70)
100
5 Stochastic Dynamics
which is the same as (5.65). 2 On the basis of these preliminary results, we are now ready to prove the main result of this section. Theorem 5.5. The function P(t, μ , d ν ) is the probability transition function of the FV process with parent-independent mutation. Proof. For every t > 0 and μ in M1 (S), let Q(t, μ , d ν ) be the probability transition function of the FV process with parent-independent mutation. For any m ≥ 1, and any Fφ1 ,...,φm in M , define Qt Fφ1 ,...,φm (μ ) =
M1 (S)
Fφ1 ,...,φm (ν )Q(t, μ , d ν ).
Then Qt Fφ1 ,...,φm (μ ) = Fφ1 ,...,φm (μ ) +
t 0
Qs L Fφ1 ,...,φm (μ )ds.
(5.71)
On the other hand, integrating both sides of (5.61) from 0 to t and taking Lemma 5.5 into account, one gets Pt Fφ1 ,...,φm (μ ) = Fφ1 ,...,φm (μ ) + Define Hm (t) =
sup
φi ∈C(S),φi ≤1,1≤i≤m
t 0
Ps L Fφ1 ,...,φm (μ ) ds.
|(Pt − Qt )Fφ1 ,...,φm (μ )|.
(5.72)
(5.73)
Then |(Pt − Qt )L Fφ1 ,...,φm ( μ )| ≤ λm |(Pt − Qt )Fφ1 ,...,φm (μ )| +
∑
1≤i< j≤m
(i, j)
|(Pt − Qt )Fφ1 ,...,φm (μ )| +
θ m ∑ |(Pt − Qt )Fφi1 ,...,φm (μ )| 2 i=1
(5.74)
≤ 2λm Hm (t), (i, j)
where the monomials Fφi1 ,...,φm and Fφ1 ,...,φm can be considered as being of order m, by inserting the constant function 1. Thus t (5.75) Hm (t) ≤ 2λm Hm (s)ds, 0
and the theorem follows from Gronwall’s lemma. 2 Remark: It is clear from the theorem that at every positive time, the values of the process are pure atomic probability measures, and the transition probability function does not have a density with respect to Πθ ,ν0 . The structure of the transition function provides a good picture of the genealogy of the population. With probability one, the population at time t > 0 has only finite number of distinct ancestors at time zero.
5.3 The Structure of Transition Functions
101
Given n distinct ancestors, each one, independently, will choose a type according to μ . After the n ancestors and their types are fixed, the distribution of the allele frequency in the population is a Dirichlet(θ ν0 + ∑ni=1 δxi ) process. It is expected that a similar representation can be obtained for the probability transition function of the infinitely-many-neutral-alleles model. What is more surprising is that the transition function in this case has a density with respect to the Poisson–Dirichlet distribution. Define Φ : M1 (S) → ∇, μ → (x1 , x2 , . . .), (5.76) where x1 , x2 , . . . , are the masses of the atoms of μ arranged in descending order. If ν0 is diffuse, then it is clear from the definition of Dirichlet process that
Πθ = Πθ ,ν0 ◦ Φ −1 .
(5.77)
In the remaining part of this section, we assume that ν0 is diffuse. Theorem 5.6. Let μt be the FV process with parent-independent mutation starting at μ . Then the process x(t) = Φ (μt ) is the infinitely-many-neutral-alleles process starting at (x1 , x2 , . . .) = Φ (μ ). Proof. For each r ≥ 1, and n = (n1 , . . . , nr ) with n = ∑ri=1 ni , let gn (x) = ϕn1 (x) · · · ϕnr (x), where ϕi is defined in (5.5). Let f denote the indicator function of the set {(s1 , . . . , sn ) : s1 = · · · = sn1 , . . . , sn−nr +1 = · · · = snr }. Then for μ in M1 (S) with atomic part ∑∞ i=1 xi δui , n
F(μ ) = μ , f ≡ n
∞
∑ xi δui
,f
= gn (Φ ( μ )).
(5.78)
i=1
It follows from direct calculation that r ni ϕni −1 (Φ (μ )) ∏ ϕn j (Φ (μ )) L F(μ ) = ∑ i=1 2 j=i +
∑
ni n j ϕni +n j −1
1≤i< j≤r
(5.79)
∏ ϕnl (Φ (μ )) − λn gn (Φ (μ ))
l=i, j
= Lgn (Φ (μ )). The result follows from the existence and uniqueness of Markov process associated with each of L and L, and fact that the domain of L is the span of {ϕi : i = 1, 2, . . .}. 2
102
5 Stochastic Dynamics
Lemma 5.7. For each n ≥ 1 and any μ , ν in M1 (S), if Φ ( μ ) = Φ (ν ), then Sn
μ n (ds1 × · · · × dsn )Πn+θ ,νn,θ (Φ −1 (·))
=
Sn
(5.80)
ν n (ds1 × · · · × dsn )Πn+θ ,νn,θ (Φ −1 (·)),
where −1
νn,θ = (n + θ )
n
∑ δsi + θ ν0
.
i=1
Proof. For each n ≥ 1, as in the proof of Lemma 5.4, Πn+θ ,νn,θ can be represented as the law of n
∞
i=1
i=1
β Ξn (s1 , . . . , sn ) + (1 − β )Ξθ ,ν0 = β ∑ Xi δsi + (1 − β ) ∑ Yi δξi . Thus Sn
μ n (ds1 × · · · × dsn )Πn+θ ,νn,θ (Φ −1 (·))
=
Sn
(5.81)
μ n (ds1 × · · · × dsn )P{Φ (β Ξn (s1 , . . . , sn ) + (1 − β )Ξθ ,ν0 ) ∈ ·}.
If μ does not have any atoms, then the left-hand side of (5.81) becomes P{(P1 , P2 , . . .) ∈ ·}, where (P1 , P2 , . . .) is β X1 , . . . , β Xn , (1 − β )Y1 , (1 − β )Y2 , . . . arranged in descending order, and the integral does not depend on μ . If μ has an atomic part, then the atoms in Ξn (s1 , . . . , sn ) will depend on the partition of {1, . . . , n} generated by s1 , . . . , sn such that i, j are in the same subset if si = s j . The distribution of these partitions under μ n (ds1 × · · · × dsn ) clearly depends only on the masses of atoms of μ . Thus the left-hand side of (5.81) depends on μ only through Φ (μ ), which implies (5.80). 2 It follows from (5.80) that for any x in ∇ and A ⊂ ∇, the transition probability function of the process xt can be written as Q(t, x, A) = P(t, μ , Φ −1 (A)),
(5.82)
where μ is any measure in M1 (S) such that Φ (μ ) = x. Our next task is to show that Q(t, x, A) has a density with respect to the Poisson–Dirichlet distribution Πθ . Lemma 5.8. For any t > 0, x ∈ ∇, A ⊂ ∇, and any μ ∈ M1 (S) satisfying Φ (μ ) = x,
5.3 The Structure of Transition Functions
103
∞
2m − 1 + θ −λm t e (5.83) m! m=2 m m μ n (ds1 × · · · × dsn )Πn+θ ,νn,θ (Φ −1 (A)). (n + θ )m−1 × ∑ (−1)m−n n n S n=0
Q(t, x, A) = Πθ (A) +
∑
Proof. Let β be a Beta(1, θ ) random variable and, independently of β , let (Y1 , . . .) have the Poisson–Dirichlet distribution with parameter θ . Let V1 ,V2 , . . . be the sizebiased permutation of Y1 ,Y2 , . . . in Theorem 2.7. Then for each Borel subset A of ∇, μ (ds)Π1+θ ,ν1,θ (Φ −1 (A)) = P{(Z1 , Z2 , . . .) ∈ A}, (5.84) S
where (Z1 , . . .) is β , (1 − β )V1 , (1 − β )V2 , . . . in descending order. Clearly (Z1 , . . .) has the Poisson–Dirichlet distribution with parameter θ and S
μ (ds)Π1+θ ,ν1,θ (Φ −1 (A)) = Πθ (A).
(5.85)
It follows from (5.40) and (5.82) that Q(t, x, A) = d0θ (t)Πθ ,ν0 (Φ −1 (A) ∞
+ ∑ dnθ (t) n=1
Sn
(5.86)
Πn+θ ,νn,θ (Φ −1 (A))μ n (dx1 × · · · dxn ).
Applying Theorem 4.3, we obtain Q(t, x, A) ∞
(2m − 1 + θ ) (−1)m−1 θ(m−1) e−λm t Πθ (A) m! m=1 ∞ (2m − 1 + θ ) m−1 m −λm t (1 + θ )(m−1) e +∑ (−1) μ (ds)Π1+θ ,ν1,θ (A) 1 m! S m=1 ∞ ∞ m (2m − 1 + θ ) (n + θ )(m−1) e−λmt (−1)m−1 +∑ ∑ n m! m=n n=2
= Πθ (A) −
×
Sn
∑
μ n (ds1 × · · · × dsn )Πn+θ ,νn,θ (Φ −1 (A))
(5.87)
∞
(2m − 1 + θ ) −λmt (−1)m−1 θ(m−1) Πθ (A) e m! m=2 ∞ (2m − 1 + θ ) −λmt m (1 + θ )(m−1) μ (ds)Π1+θ ,ν1,θ (A) +∑ e (−1)m−1 1 m! S m=2
= Πθ (A) −
∞
∑
(2m − 1 + θ ) −λmt e m! m=2 ∞ m (n + θ )(m−1) × ∑ (−1)m−1 μ n (ds1 × · · · × dsn )Πn+θ ,νn,θ (Φ −1 (A)), n n S n=2 +
∑
104
5 Stochastic Dynamics
where in the last equality, the terms corresponding to m = 1 cancel each other, and the interchange of summations is justified by their absolute convergence. It is now clear that (5.83) follows by collecting the last three terms together. 2 Note that by Lemma 5.7, Sn
μ n (ds1 × · · · × dsn )Πn+θ ,νn,θ (Φ −1 (·))
depends on μ only through the distribution of partitions of {1, . . . , n} induced by s1 , . . . , sn . Hence we can write Sn
where In =
μ n (ds1 × · · · × dsn )Πn+θ ,νn,θ (Φ −1 (·)) =
∑ C(n)ψn (x)Pn (·),
(5.88)
n∈In
l
n = (n1 , . . . , nl ) : n1 ≥ n2 ≥ · · · ≥ nl ≥ 1, ∑ nk = n, l = 1, . . . , n , k=1
and for n = (n1 , . . . , nl ), C(n) =
n 1 , n1 , . . . , nl ∏ni=1 ai !
with ai = #{k : nk = i, 1 ≤ k ≤ l}. For a given n = (n1 , . . . , nl ) in In , let (X1 , . . . , Xl ) have the Dirichlet(n1 , . . . , nl ) distribution. Independent of X1 , . . . , Xl , let (Y1 , . . .) have the Poisson–Dirichlet distribution. The random variable β has a Beta(n, θ ) distribution, and is independent of both (X1 , . . . , Xl ) and (Y1 , . . .). Then Pn is the law of the descending order statistics of β X1 , . . . , β Xl , (1 − β )Y1 , . . .. Lemma 5.9. Let n be a partition of {1, . . . , n} with n = (n1 , . . . , nl ). For each partition m = (m1 , . . . , mk ) of {1, . . . , m}, set
ψm (z) =
∑
i1 ,...,ik , distinct
m
k 1 zm i1 · · · zik ,
(5.89)
and for notational convenience, we set ψ0 ≡ 1. Then ∇
l∧k
ψm (z)dPn = ∑
∑
i=0 B⊂{1,...,k} π ∈C (B) j∈B |B|=i
θ k−i
∑ ∏ (nπ ( j) )(m j ) ∏ (m j − 1)! (n + θ )(m)
(5.90)
j∈B
where C (B) is the collection of all one-to-one maps from B into {1, . . . , l}. Proof. Since Pn is the law of the descending order statistics of β X1 , . . . , β Xl , (1 − mk 1 β )Y1 , . . ., each term zm i1 · · · zik in ψm under Pn can be written as
5.3 The Structure of Transition Functions
105
β ∑ j∈B m j (1 − β )m−∑ j∈B m j ∏ Xπ (jj) ∏ Yr j j m
m
(5.91)
j∈B
j∈B
for certain B ⊂ {1, . . . , k} with |B| ≤ l, π in C (B), and {r j : j ∈ B} ⊂ {1, 2, . . .}. Thus it follows from the independence that ∇
ψm (z)dPn =
l∧k
∑ ∑
i=0
B⊂{1,...,k} |B|=i
∑
π ∈C (B)
×E
∏
j∈B
E[β ∑ j∈B m j (1 − β )m−∑ j∈B m j ]
m Xπ (jj)
E
(5.92)
∑
∏
j∈B,r j distinct ι ∈B
Yrmι ι
.
By direct calculation, E[β ∑ j∈B m j (1 − β )m−∑ j∈B m j ] = and
E
∏
j∈B
m Xπ (jj)
=
(5.93)
Γ (n) (∏ j∈B Γ (nπ ( j) ))Γ (n − ∑ j∈B nπ ( j) ) ×
=
Γ (n + ∑ j∈B m j ) 1 θ(m−∑ j∈B m j ) , Γ (n) (n + θ )(m)
(∏ j∈B Γ (m j + nπ ( j) ))Γ (n − ∑ j∈B nπ ( j) ) Γ (n + ∑ j∈B m j )
(5.94)
Γ (n) (nπ ( j) )(m j ) . Γ (n + ∑ j∈B m j ) ∏ j∈B
Let mB denote the partition of {1, . . . , m − ∑ j∈B m j } with mB = {m j : j ∈ B}. Then E
∑
∏ Yrmι ι
j∈B,r j distinct ι ∈B
= EΠθ [ψmB (z)] =
(5.95)
θ k−i
∏ (m j − 1)! θ(m− j∈B
∑ j∈B m j )
,
where (5.20) is used in deriving the last equality. It is now clear that (5.90) follows by multiplying (5.93), (5.94), and (5.95) together. 2 Theorem 5.7. The transition function Q(t, x, A) has a density q(t, x, y) with respect to Πθ , and
106
5 Stochastic Dynamics ∞
2m − 1 + θ −λmt e m! m=2 m m (n + θ )(m−1) pn (x, y), × ∑ (−1)m−n n n=0
q(t, x, y) = 1 +
∑
where
(5.96)
ψn (x)ψn (y)
∑ C(n) EΠθ [ψn (z)] .
pn (x, y) =
(5.97)
n∈In
Proof. For each partition m = (m1 , . . . , mk ) of {1, . . . , m}, and each partition n = (n1 , . . . , nl ) of {1, . . . , n},
∑
ψm (z)ψn (z) =
i1 ,...,ik , distinct k∧l
=
∑
m
1 k zm i1 · · · zik
∑ ∑
n
j1 ,..., jl , distinct
∑
{
∑
∑
ψmBn (z),
∏ zmiτ τ ∏
r=0 B⊂{1,...,k} π ∈C (B) distinctindex τ ∈B |B|=r
k∧l
=
∑ ∑
znj11 · · · z jll
τ ∈π (B)
(5.98)
znjττ
∏ ziτ
mτ +nπ (τ )
τ ∈B
}
r=0 B⊂{1,...,k} π ∈C (B) |B|=r
where mBn = {mτ , nι , mω + nπ (ω ) : τ ∈ {1, . . . , k} \ B, ι ∈ {1, . . . , l} \ π (B), ω ∈ B} is a partition of {1, . . . , n + m}. It follows from (5.20) that EΠθ [ψn (z)] = (n1 − 1)! · · · (nl − 1)!
θl , θ(n)
(5.99)
and EΠθ [ψmBn (z)] =
∏ (mτ − 1)! ∏
τ ∈B
(nι − 1)!
(5.100)
ι ∈π (B)
× ∏ (mω + nπ (ω ) − 1)! ω ∈B
θ k+l−r , θ(n+m)
which implies that EΠθ [ψmBn (z)] EΠθ [ψn (z)] k∧l
=
∑ ∑
(5.101)
r=0 B⊂{1,...,k} π ∈C (B) ω ∈B |B|=r
θ k−r
∑ ∏ (nπ (ω ) )(mω ) ∏ (mτ − 1)! (n + θ )(m) . τ ∈B
5.4 A Measure-valued Branching Diffusion with Immigration
107
Taking account of (5.88) and (5.90), one obtains for n ≥ 1 and any partition m of {1, . . . , m}, n −1 ψm (z)d μ (ds1 × · · · × dsn )Πn+θ ,νn,θ (Φ (·)) (5.102) ∇
Sn
=
∇
ψm (z)pn (x, z) d Πθ .
It follows from (5.83) that ∇
ψm (z)Q(t, x, dz) =
∇
ψm (z)q(t, x, z)d Πθ .
(5.103)
For any n1 , . . . , nk , the equality k
ϕn1 (z) · · · ϕnk (z) = ∑
∑
i=1 b∈σ (k,i)
ψn(b1 ),...,n(bi ) (z)
(5.104)
holds on the space ∇∞ , where n(br ) = ∑ j∈br n j , r = 1, . . . , i. Hence the family {ψm : m ∈ Im , m = 0, . . .} is a determining class for probability measures on ∇∞ . The theorem now follows from the fact that both Πθ (·) and Q(t, x, ·) are concentrated on the space ∇∞ . 2
5.4 A Measure-valued Branching Diffusion with Immigration It was indicated in Chapter 1 that the finite-dimensional Wright–Fisher diffusion can be derived from a family of finite-dimensional independent diffusions, through normalization and conditioning. This is a dynamical analog of the relation between the gamma and Dirichlet distributions. In this section, we will consider an infinite dimensional generalization of this structure. In particular, we will describe a derivation of the Fleming–Viot process with parent-independent mutation through an infinitedimensional diffusion process, called the measure-valued branching diffusion with immigration, by conditioning the total mass to be 1. Let M(S) denote the space of finite Borel measures on S equipped with weak topology, and define D˜ = {F : F(μ ) = f ( μ , φ1 , . . . , μ , φk ), f ∈ Cb2 (Rk ), φi ∈ B(S), k = 1, 2, . . . ; i = 1, . . . , k, μ ∈ M(S)}. (5.105) Then the generator of the measure-valued branching diffusion with immigration has the form
108
5 Stochastic Dynamics
G F(μ ) =
θ 2
δ F(μ ) (ν0 (dx) − μ (dx)) S δ μ (x) 1 δ 2 F(μ ) ˜ μ (dx), F ∈ D, + 2 S S δ μ (x)δ μ (x)
(5.106)
where the immigration is described by ν0 , a diffuse probability measure in M1 (S). For a non-zero measure μ in M(S), set |μ | = μ , 1 and μˆ = |μμ | . For f in Cb2 (R) and φ in B(S), let F(μ ) = f ( μˆ , φ ) = H(μˆ ). Then
δ F(μ ) f ( μˆ , φ ) = (φ (x) − μˆ , φ ), δ μ (x) |μ | δ 2 F(μ ) φ (x) − μˆ , φ 2 ˆ = f ( μ , φ ) δ μ (x)δ μ (x) |μ | f ( μˆ , φ ) −2 (φ (x) − μˆ , φ ). | μ |2 Substituting this into (5.106), yields
f ( μˆ , φ ) θ φ (x)(ν0 (dx) − μˆ (dx)) G F( μ ) = |μ | 2 S f ( μˆ , φ ) (φ 2 (x) − μˆ , φ 2 )μˆ (dx) + 2|μ | S 1 L H(μˆ ), = |μ |
(5.107)
where L is the generator of the FV process. Formally, conditioning on |μ | = 1, G becomes L . Let μt be the measure-valued branching diffusion with immigration. Then the total mass process xt = μt , 1 is a diffusion on [0, ∞) with generator x d2 f θ df + (1 − x) . 2 dx2 2 dx Since xt may take on the value zero, a rigorous proof of the conditional result is technically involved. The measure-valued branching diffusion with immigration is reversible with the gamma random measure as the reversible measure. The log-Laplace functional of the gamma random measure is given by
θ
S
log(1 + θ φ (x))ν0 (dx).
In comparison with the construction of the Poisson–Dirichlet distribution from the gamma process in Chapter 2, it is expected that the conditioning result would hold.
5.5 Two-parameter Generalizations
109
For any a > 0, ν ∈ M(S), let Γa,ν denote the gamma random measure with logLaplace functional S
log(1 + a−1 φ (x))ν (dx).
Then the measure-valued branching diffusion with immigration has the following transition function representation that is similar to the representation of the FV process Pθ ,ν0 (t, μ , d ν ) = exp(−ct |μ |)Γeθ t/2 ct ,θ ν0 (d ν ) ∞
+ ∑ exp(−ct |μ |) n=1
ctn n!
Sn
(5.108)
μ n (dx1 , . . . , dxn )Γeθ t/2 ct ,θ ν0 +∑n
where ct =
k=1 δxk
(d ν ),
θ e−θ t/2 . 1 − e−θ t/2
5.5 Two-parameter Generalizations As shown in Chapter 3, the two parameter Poisson–Dirichlet distribution possesses many structures that are similar to the Poisson–Dirichlet distribution, including the urn construction, GEM representation, and sampling formula. It is thus natural to investigate the two-parameter analog of the infinitely-many-neutral-alleles model. Two such models will be described in this section. Model I: GEM Process for θ > 1, 0 ≤ α < 1 α For each i ≥ 1, let ai = 1−2α , bi = θ +i 2 . The process ui (t) is the unique strong solution of the stochastic differential equation
dui (t) = (ai − (ai + bi )ui (t))dt +
ui (t)(1 − ui (t))dBi (t), ui (0) ∈ [0, 1],
where {Bi (t) : i = 1, 2, . . .} are independent one-dimensional Brownian motions. It is known that the process ui (t) is reversible, and Beta(2ai , 2bi ) distribution is the reversible measure. By direct calculation, the scale function of ui (·) is given by ai +bi x 1 dy . si (x) = 2a 2b i 4 1/2 y (1 − y) i Since θ > 1, α ≥ 0, it follows that limx→1 si (x) = +∞ for all i. Therefore, starting from any point in [0, 1), with probability one the process ui (t) will not hit the boundary 1. Let E = [0, 1)N . The process
110
5 Stochastic Dynamics
u(t) = (u1 (t), u2 (t), . . .) is then an E-valued Markov process. Since, defined on E, the map H(u1 , u2 , . . .) = (u1 , (1 − u1 )u2 , . . .) is one-to-one, the process x(t) = H(u(t)) is again a Markov process. We call it the GEM process. It is clear from the construction that the GEM process is reversible, with reversible measure GEM(α , θ ), the two-parameter GEM distribution. Model II: Two-Parameter Infinitely-Many-Neutral-Alleles Model Recall the definition of D(L) in (5.4). For any 0 ≤ α < 1, θ > −α , and f in D(L), let ∞ ∂2 f ∂f 1 ∞ . (5.109) Lα ,θ f (x) = xi (δi j − x j ) − ∑ (θ xi + α ) 2 i,∑ ∂ xi ∂ x j i=1 ∂ xi j=1 For m1 , . . . , mk ∈ {2, 3, . . . } and k ≥ 1, it follows from direct calculation that Lα ,θ (ϕm1 · · · ϕmk ) k mi α mi − ϕmi −1 ∏ ϕm j + ∑ mi m j ϕmi +m j −1 ∏ ϕml =∑ 2 2 i< j i=1 j=i l=i, j k k mi mi θ + + ∑ mi m j ∏ ϕmi − ∑ (5.110) 2 2 i< j i=1 i=1 k mi mi α ϕmi −1 ∏ ϕm j + ∑ mi m j ϕmi +m j −1 ∏ ϕml − =∑ 2 2 i< j i=1 j=i l=i, j k 1 − m(m − 1 + θ ) ∏ ϕmi . 2 i=1
For any 1 ≤ l ≤ k, denote by {n1 , n2 , . . . , nl } an arbitrary partition of the set {m1 , m2 , . . . , mk }. By the Pitman sampling formula, we get Eα ,θ Lα ,θ (ϕm1 . . . ϕmk ) l (− αθ )(− αθ − 1) · · · (− αθ − l + 1) ni (n = ∑ − 1 − α ) i ∑ θ (θ + 1) · · · (θ + m − 2) n1 ,n2 ,...,nl i=1 2
∏(−α ) · · · (−α + n j − 1)
·
(−α ) · · · (−α + ni − 2)
j=i
(− θ )(− αθ − 1) · · · (− αθ − l + 1) 1 − m(m − 1 + θ ) α 2 θ (θ + 1) · · · (θ + m − 1) · ∏(−α ) · · · (−α + n j − 1) j
= 0,
5.6 Notes
111
where the value of the right-hand side is obtained by continuity when α = 0 or θ = 0. Thus Eα ,θ [Lα ,θ f ] = 0, ∀ f ∈ D(L).
(5.111)
Similarly, we can further check that Eα ,θ [g · Lα ,θ ( f )] = Eα ,θ [ f · Lα ,θ (g)], ∀ f , g ∈ D(L).
(5.112)
These two identities lead to the next result. Theorem 5.8. (Petrov [147]) (1) The operator Lα ,θ is closable in C(∇), and its closure generates a ∇-valued diffusion process. (2) The Markov process associated with Lα ,θ , called the two-parameter infinitelymany-neutral-alleles model, is reversible with respect to PD(α , θ ), the two-parameter Poisson–Dirichlet distribution. (3) The complete set of eigenvalues of operator Lα ,θ is {0, −λ2 , −λ3 , . . .}, which is the same as that of operator L.
5.6 Notes The infinitely-many-neutral-alleles model is due to Kimura and Crow [124]. Watterson [181] formulated the model as the limit of a sequence of finite-dimensional diffusions. The formulation and the approach taken in Section 5.1 follow Ethier and Kurtz [61]. The stationarity of ΠK in Theorem 5.1 is shown in Wright [186]. The reversibility is due to Griffiths [92]. The first FV process appears in Fleming and Viot [82] where a class of a probability-valued diffusion processes was introduced. The mutation process in [82] is the Brownian motion. Ethier and Kurtz [62] reformulated the infinitelymany-neutral-alleles model as an FV process with parent-independent mutation. The proofs of Lemma 5.2, Theorem 5.3, and Theorem 5.4 are from [56] with minor modifications. In [134] reversibility has been shown to be a unique feature of parent-independent mutation. Without selection, parent-independent mutation is the only motion that makes the FV process reversible. The transition density expansion is obtained in Griffiths [92] for the finitelymany-alleles diffusions. The results in Section 5.3 follow Ethier and Griffiths [59] for the most part. Theorem 5.7 was originally proved in Ethier [57]. The treatment given here, is to derive the result directly through the expansion in Theorem 5.5. Theorem 5.6 is from Ethier and Kurtz [64]. The equality (5.104) is from Kingman [126]. Another stochastic dynamic that has the Poisson–Dirichlet distribution as an invariant measure is the coagulation–fragmentation process. The basic model is a Markov chain with partitions of the unit interval as the state space. The transition
112
5 Stochastic Dynamics
mechanism involves splitting, merging and reordering. For more details and generalizations, see [172], [154], [136], [32], and the references therein. One can find introductions to measure-valued processes in [49] and [52]. For a more detailed study of measure-valued processes and related topics, [20], [133], and [144] are good sources. The relation between the measure-valued branching diffusion with immigration and the FV process is part of a general structure that connects the measure-valued branching process or superprocess with the FV process. Konno and Shiga [131] discovered that the FV process can be derived from the superprocess by normalization and a random time change. By viewing a measure as the product of a radial part and an angular part, Etheridge and March [53] showed that the original FV process can be derived from the Dawson–Watanabe process or super-Brownian motion, by conditioning the total mass to be one. A more general result, the Perkins disintegration theorem, was established in Perkins [143] where the law of a superprocess is represented as the integration, with respect to the law of the total mass process, of the law of a family of FV processes with sampling rate inversely proportional to the total mass. The representation of the transition function for the measure-valued branching process with immigration is obtained in [60]. Further discussions and generalizations can be found in [166] and [167]. Results on functional inequalities, not included here, can be found in [165], [78], [45], and [77]. General references on functional inequalities include [96], [97], [179], and [18]. Several papers have emerged recently investigating stochastic dynamics associated with the two-parameter Poisson–Dirichlet distribution and the two-parameter Dirichlet process. The GEM process is from [78]. The two-parameter infinitelymany-neutral-alleles model first appeared in [147]. In [76], symmetric diffusion processes, including the two-parameter infinitely-many-neutral-alleles model, are constructed and studied for both the two-parameter Poisson–Dirichlet distribution and the two-parameter Dirichlet process using techniques from the theory of Dirichlet forms. It is still an open problem to generalize the FV process with parentindependent mutation to the two-parameter setting. In [13], PD(α , θ ) is shown to be the unique reversible measure of a continuous-time Markov process constructed through an exchangeable fragmentation–coagulation process.
Chapter 6
Particle Representation
The Fleming–Viot process studied in Chapter 5 describes the macroscopic evolution of genotype distributions of a large population under the influence of parentindependent mutation and random genetic drift or random sampling. In this chapter, we focus on a microscopic system, a special case of the Donnelly–Kurtz particle model, with macroscopic average following the FV process. The Donnelly–Kurtz particle model is motivated by an infinite exchangeable particle system studied by Dawson and Hochberg in [24]. Due to exchangeability, particles can be labeled in a way that the genealogy of the population is explicitly represented. The empirical process of n particles of the system is shown to converge to the FV process as n tends to infinity. Our objective here is to explore the close relation, provided by this model, between the backward-looking coalescent of Chapter 4 and the forward-looking stochastic dynamics of Chapter 5. We would like to emphasize that the particle representation developed by Donnelly and Kurtz is much more general and powerful than what has been presented here.
6.1 Exchangeability and Random Probability Measures Let E be a Polish space, and E the Borel σ -field on E. The space M1 (E) is the collection of all probability measures on E equipped with the weak topology. Let (Ω , F ) be a measurable space. A random probability measure on E is a probability kernel ξ (ω , B), defined on Ω × E such that ξ (ω , ·) is in M1 (E) for each ω in Ω , and ξ (ω , B) is measurable in ω for each B in E . It can be viewed as an M1 (E)-valued random variable defined on (Ω , F ). The law of a random probability measure belongs to M1 (M1 (E)), the space of probability measures on M1 (E). Let N+ denote the set of strictly positive integers. A finite permutation π on N+ is a one-to-one map between N+ and N+ such that π (k) = k for all but a finite number of integers. Definition 6.1. A finite or infinite sequence Z = (Z1 , Z2 , . . .) of E-valued random variables defined on (Ω , F ) is said to be exchangeable if for any finite permutation π on N+ S. Feng, The Poisson–Dirichlet Distribution and Related Topics, Probability and its Applications, DOI 10.1007/978-3-642-11194-5 6, © Springer-Verlag Berlin Heidelberg 2010
113
114
6 Particle Representation d
(Z1 , Z2 , . . .) = (Zπ (1) , Zπ (2) , . . .), d
where = denotes equality in distribution. For any n ≥ 1, and any random probability measure ξ , the nth moment measure Mn , of ξ is a probability measure on E n given by Mn (dx1 , . . . , dxn ) = E[ξ (dx1 ) · · · ξ (dxn )]. Clearly if the random vector (Z1 , , . . . , Zn ) has probability law Mn , then (Z1 , . . . , Zn ) is exchangeable. Since the family {Mn (dx1 , . . . , dxn ) : n ≥ 1} is consistent, the following theorem follows from Kolmogorov’s extension theorem. Theorem 6.1. For each P in M1 (M1 (E)), there exists an infinite sequence {Zn : n ≥ 1} of E-valued exchangeable random variables such that for each n ≥ 1, the joint distribution of (Z1 , . . . , Zn ) is given by EP [μ (dx1 ) · · · μ (dxn )] ≡
μ (dx1 ) · · · μ (dxn )P(d μ ).
For an infinite exchangeable sequence {Zn : n ≥ 1}, and any m ≥ 1, let Cm be the σ -field generated by events in σ {Z , Z , . . .} that are invariant under permutations 1 2 of (Z1 , . . . , Zm ). The σ -field C = ∞ m=1 Cm is called the exchangeable σ -field. The following theorem provides a converse to Theorem 6.1. Theorem 6.2. (de Finetti’s theorem) Conditional on C , the infinite exchangeable sequence {Zn : n ≥ 1} is i.i.d. Proof. For any m ≥ 1, 1 ≤ i ≤ m, it follows from exchangeability that (Z1 ,Y ) equals (Zi ,Y ) in distribution, where Y = (h(Z1 , . . . , Zm ), Zm+1 , . . .) for a symmetric measurable function h. Therefore, for any bounded measurable function g on E, E[g(Z1 ) | Cm ] = E[g(Zi ) | Cm ] 1 m =E ∑ g(Z j ) | Cm m j=1 =
1 m
m
∑ g(Z j ).
j=1
By the reversed martingale convergence theorem, one gets that 1 m
m
∑ g(Z j ) → E[g(Z1 ) | C ] almost surely.
j=1
For fixed k ≥ 1 and m ≥ k, set Ik,m = {( j1 , . . . , jk ) : 1 ≤ jr ≤ m, jr = jl for r = l}.
(6.1)
6.1 Exchangeability and Random Probability Measures
115
Let g1 , . . . , gk be bounded measurable functions on E. Then by exchangeability, E
k
∏ gr (Zr ) | Cm
1 m(m − 1) · · · (m − k + 1) ( j
=
r=1
∑
k
∏ g jr (Z jr ).
(6.2)
1 ,..., jk )∈Ik,m r=1
Noting that as m goes to infinity 1 m(m − 1) · · · (m − k + 1) 1≤ j ≤m,( ∑ j ,..., j i
1
and 1
k )∈Ik,m r=1
k
1 mk 1≤ j ≤m,( ∑ j ,..., j i
k
∏ g jr (Z jr ) → 0,
∏ g jr (Z jr ) → 0,
k )∈Ik,m r=1
it follows from the reversed martingale theorem and (6.1) that, with probability one, lim E
m→∞
k
∏ gr (Zr ) | Cm
=E
r=1
k
∏ gr (Zr ) | C
(6.3)
r=1 k
=
∏ E[gr (Z1 ) | C ].
r=1
Thus, conditional on C , {Zn : n ≥ 1} are i.i.d.
2
Since {Zn : n ≥ 1} are conditionally i.i.d., it is natural to get: Theorem 6.3. For each m ≥ 1, set ξm = m1 ∑m i=1 δZi . As m goes to infinity, ξm converges almost surely in M1 (E) to a random probability measure ξ , a version of the regular conditional probability P[Z1 ∈ · | C ]. Proof. Since E is Polish, there exists a countable set { fl : l ≥ 1} of bounded continuous functions that is convergence determining in M1 (E). The topology on M1 (E) is then the same as the topology generated by the metric
ρ (μ , ν ) =
∞
∑ 2−l (1 ∧ | μ − ν , fl |), μ , ν ∈ M1 (E).
l=1
Noting that ξm , fl =
1 m
∑m i=1 f l (Zi ), it follows from (6.1) that lim ρ (ξm , ξ ) = 0, almost surely.
m→∞
2
116
6 Particle Representation
6.2 The Moran Process and the Fleming–Viot Process Let S be the compact metric space in Section 5.2, and A be the parent-independent mutation generator defined in (5.16). For each n ≥ 1, the Moran particle system with n particles is an Sn -valued Markov process (X1n (t), . . . , Xnn (t)) that evolves as follows: 1. Motion process. Each particle, independently of all other particles, moves in S according to A for a period of time exponentially distributed with parameter n−1 2 . 2. Sampling replacement. At the end of its waiting period, the particle jumps at random to the location of one of the other (n − 1) particles. 3. Renewal. After the jump, the particle resumes the A-motion, independent of all other particles. This process is repeated for all particles. Thus the generator of the Moran particle system is given by n
K f (x1 , . . . , xn ) = ∑ Ai f (x1 , . . . , xn ) + i=1
1 [Φi j f (x1 , . . . , xn ) − f (x1 , . . . , xn )], (6.4) 2 i∑ =j
where f is in C(Sn ), Ai is A acted on the ith coordinate, and Φi j f is defined by
Φi j f (x1 , . . . , xn ) = f (. . . , x j−1 , xi , x j+1 , . . .). By relabeling, Φi j f can be viewed as a function in C(Sn−1 ). Theorem 6.4. For any n ≥ 1, the operator K generates a unique Markov process with semigroup Vtn satisfying Vtn f (x1 , . . . , xn ) = e−
n(n−1)t 2
1 + ∑ 2 i= j
Tt⊗n f (x1 , . . . , xn )
t
n(n−1)u − 2
e 0
(6.5)
n Tu⊗n (Φi j (Vt−u f ))(x1 , . . . , xn )du,
where f belongs to C(Sn ), Tt is the semigroup generated by A and Tt⊗n is the semigroup of n independent A-motions. Proof. Let {τk : k ≥ 1} be a family of i.i.d. exponential random variables with n n parameter n(n−1) 2 . Starting with (X0 (0), . . . , Xn (0)), we run n independent A-motions until time τ1 . Here, we assume that the A-motions and the τk ’s are independent of each other. At time τ1 , a jump occurs involving a particular pair (i, j), and the X jn (τ1 ) takes the value of Xin (τ1 ). After the jump, the pattern is repeated until τ1 + τ2 . This continues for each time interval [∑ki=1 τi , ∑k+1 i=1 τi ) for all k ≥ 1. Since k
∑ τi = ∞, k→∞ lim
i=1
we have constructed explicitly a Markov process with generator K. Let
6.2 The Moran Process and the Fleming–Viot Process
117
(X1n (t), . . . , Xnn (t)) be any Markov process generated by K and, for any f in C(Sn ), define Vtn f (x1 , . . . , xn ) = E[ f (X1n (t), . . . , Xnn (t)) | X1n (0) = x1 , . . . , Xnn (0) = xn ].
(6.6)
Let {τi j : i, j = 1, . . . , n; i = j} be a family of i.i.d. exponential random variables with parameter 1/2. Then τ = inf{τi j : i, j = 1, . . . , n; i = j} is the first time that a sampling replacement occurs. Clearly τ is an exponential random variable with n(n−1) 1 . Then, conditioning on τ , it follows that parameter 2 and P{τ = τi j } = n(n−1) Vtn f (x1 , . . . , xn ) = E[ f (X1n (t), . . . , Xnn (t))(χ{τ >t} + χ{τ ≤t} ) | X1n (0) = x1 , . . . , Xnn (0) = xn ] = E[ f (X1n (t), . . . , Xnn (t))χ{τ >t} | X1n (0) = x1 , . . . , Xnn (0) = xn ] (6.7) + ∑ E[ f (X1n (t), . . . , Xnn (t))χ{τ =τi j ≤t} | X1n (0) = x1 , . . . , Xnn (0) = xn ] i= j
n(n−1)t
= e− 2 Tt⊗n f (x1 , . . . , xn ) t n(n−1)u 1 n f ))(x1 , . . . , xn )du. + ∑ e− 2 Tu⊗n (Φi j (Vt−u 2 i= j 0 Considering functions of the forms n
∑ gi (xi ),
i=1
n
∑
gi j (xi , x j ),
i, j=1
and so on, one obtains the uniqueness of solutions to equation (6.5). 2 Remarks: 1. Let Ft denote the σ -algebra generated by (X1n (t), . . . , Xnn (t)) up to time t and thus {Ft : t ≥ 0} be the natural filtration of (X1n (t), . . . , Xnn (t)). Then the process can be formulated as the process such that for any f in C(Sn ) and any t > 0, t unique n n n f (X1 (t), . . . , Xn (t)) − 0 K f (X1 (u), . . . , Xnn (u))du is a martingale with respect to its natural filtration. In this framework, we say that (X1n (t), . . . , Xnn (t)) is the unique solution to the martingale problem associated with K. 2. Note that both the motion and the sampling replacement are symmetric. A permutation in the initial state will result in the same permutation at a later time. Thus if (X1n (0), . . . , Xnn (0)) is exchangeable, then the martingale problem associated with K is determined by symmetric functions and for each t > 0, (X1n (t), . . . , Xnn (t)) is exchangeable. Let 1 n ηn (t) = ∑ δXin (t) (6.8) n i=1
118
6 Particle Representation
be the empirical process of the Moran particle system. If (X1n (0), . . . , Xnn (0)) is exchangeable, then ηn (·) is a measure-valued Markov process. For any f ∈ Cb3 (Rk ), φ1 , . . . , φk ∈ C(S), 1 ≤ i, j ≤ k, let fi denote the first order partial derivative f with respect to the ith coordinate, and f i j be the corresponding second order partial derivative. Set F(μ ) = f ( μ , φ1 , . . . , μ , φk ), Fi (μ ) = fi ( μ , φ1 , . . . , μ , φk ), Fi j (μ ) = fi j ( μ , φ1 , . . . , μ , φk ). Then the generator of the process ηn (·) is given by Ln F(μ ) =
k
∑ Fi (μ ) μ , Aφi
(6.9)
i=1
+
1 k [ μ , φi φ j − μ , φi μ , φ j ]Fi j (μ ) + o(n−1 ). 2 i,∑ j=1
Clearly when n goes to infinity, Ln F(μ ) converges to L F(μ ), with L being the generator of the FV process with parent-independent mutation. Thus we have: Theorem 6.5. Let D([0, +∞), M1 (S)) be the space of c`adl`ag functions equipped with the Skorodhod topology. Assume that ηn (0) converges to η (0) in M1 (S) as n tends to infinity. Then the process ηn (t) converges in distribution to the FV process ηt in D([0, +∞), M1 (S)) as n goes to infinity; that is, the FV process can be derived through the empirical processes of a sequence of Moran particle systems. Proof. See Theorem 2.7.1 in [20]. For any f (x1 , . . . , xn ) in C(Sn ), let μ n = μ × · · · × μ and μ n , f =
Sn
2
f (x1 , . . . , xn )μ (dx1 ) · · · μ (dxn ).
It follows from direct calculation that n δ μ n , f
= ∑ μ n−1 , f i→x
δ μ (x) i=1
δ 2 μ n , f
= μ n−2 , f i→x, j→y , δ μ (x)δ μ (y) i∑ =j where f i→x and f i→x, j→y are obtained from f by setting xi = x, and xi = x, x j = y, respectively. Thus
6.2 The Moran Process and the Fleming–Viot Process
n
μ n , ∑ Ai f
L μ n , f =
i=1
+
1 [ μ n−1 , Φi j f − μ n , f ] 2 i∑ =j
119
(6.10)
= μ n , K f . Let η (t) denote the FV process started at μ in M1 (S). For any n ≥ 1, let Mn (t, μ ; dx1 , . . . , dxn ) denote the nth moment measure of η (t). It then follows from (6.10) that for any f (x1 , . . . , xn ) in C(Sn ),
···
=
f (x1 , . . . , xn )Mn (t, μ ; dx1 , . . . , dxn ) ···
(6.11)
Vtn f (x1 , . . . , xn )μ (dx1 ) · · · μ (dxn ),
where Vtn is the semigroup of the Moran particle system with n particles, and the system has initial distribution μ n . The equality (6.11) indicates a duality relation between the Moran particle systems and the FV process. To be precise, let S1 and S2 be two topological spaces, and B(S1 ) and B(S2 ) the respective set of bounded measurable functions on S1 and S2 . Similarly, we define B(S1 × S2 ) to be the set of bounded measurable functions on S 1 × S2 . Definition 6.2. Let Xt and Yt be two Markov processes with respective state spaces S1 and S2 . The process Xt and the process Yt are said to be dual to each other with respect to an F in B(S1 × S2 ) if for any t > 0, x in S1 and y in S2 , E[F(Xt , y) | X0 = x] = E[F(x,Yt ) | Y0 = y].
(6.12)
Remark: The use of dual processes is an effective tool in the study of stochastic processes. The basic idea of duality is to associate a dual process to the process of interest. The properties of the original process become more tractable if the dual process is simpler or easier to handle than the original process. The mere existence of a dual process usually guarantees the uniqueness of the martingale problem associated with the original process. Even when the dual process is not much simpler, it may still provide new perspectives on the original process. In the context of population genetics, dual processes relate naturally to the genealogy of the population. Note that {Mn (t, μ ; ·) ∈ M1 (Sn ) : n ≥ 1} forms a consistent family of probability measures, and for each n, Mn (t, μ ; dx1 , . . . , dxn ) is exchangeable; that is, Mn (t, μ ; B1 × · · · × Bn ) = Mn (t, μ ; Bπ (1) × · · · × Bπ (n) ) for any permutation π of the set {1, . . . , n}. By Kolmogorov’s extension theorem, this family of measures defines a sequence of exchangeable S-valued random variables (X1 (t), X2 (t), . . .) with a probability law P on (S∞ , S ∞ ), where S ∞ is the P-completion of the product σ -algebra. On the other hand, by de Finetti’s theorem, the particle system determines a random measure η˜ (t) with moment measures given by
120
6 Particle Representation
{Mn (t, μ ; ·) ∈ M1 (Sn ) : n ≥ 1}. Therefore one could construct the FV process from an infinite exchangeable particle system. Due to the exchangeable structure, one could go one-step further to construct a particle system that incorporates the genealogical structure of the population. This brings us to the main topic of this chapter: particle representation.
6.3 The Donnelly–Kurtz Look-down Process For each n ≥ 1 and f in C(Sn ), define n
K˜ f (x1 , . . . , xn ) = ∑ Ai f (x1 , . . . , xn ) + ∑ [Φi j f (x1 , . . . , xn ) − f (x1 , . . . , xn )]. (6.13) i< j
i=1
Clearly K˜ and K differ only in the aspect of sampling replacement. This seemingly minor change in the sampling term leads to a new and more informative particle system: the Donnelly–Kurtz look-down process. The existence and uniqueness of a Feller process with generator K˜ can be established by an argument similar to that used in the proof of Theorem 6.4. The n-particle look-down process, denoted by (Y1 (t), . . . ,Yn (t)) with each index representing a “level”, is a Markov process that evolves as follows: 1. Motion process. The particle at level k, independently of all other particles, moves in S according to A for a period of time exponentially distributed with parameter k − 1. 2. Sampling replacement. At the end of the waiting period, the particle at level k “looks down” at a particle chosen at random from the first k − 1 levels and assumes its type. 3. Renewal. After the jump, the particle resumes the A-motion, independent of all other particles. This process is repeated for all particles. Even though the transition mechanism is asymmetric, the look-down process still preserves exchangeability. Theorem 6.6. If (Y1 (0), . . . ,Yn (0)) is exchangeable with a law of the form Qn (dy1 · · · dyn ) =
μ (dy1 ) · · · μ (dyn )Θ (d μ )
for some Θ in M1 (M1 (S)), then (Y1 (t), . . . ,Yn (t)) is exchangeable for each t > 0. Proof. First note that for any μ in M1 (S), f in C(Sn ), and 1 ≤ i < j ≤ n, μ n−1 , Φi j f = μ n−1 , Φ ji f . Thus
6.3 The Donnelly–Kurtz Look-down Process
121
μ n , K f = μ n , K˜ f , and, by (6.10),
L μ n , f = μ n , K˜ f .
This relation of duality ensures that EΘ [ η (t)n , f ] = EQn [ f (Y1 (t), . . . ,Yn (t))],
(6.14)
where EΘ denotes the expectation with respect to the FV process with initial distribution Θ , and EQn denotes the expectation with respect to the look-down process (Y1 (t), . . . ,Yn (t)) with initial distribution Qn . By the dominated convergence theorem, f can be chosen as any bounded measurable function on Sn . Therefore, the equality (6.14) implies that for any measurable sets B1 , . . . , Bn in S, P{Yi (t) ∈ Bi , i = 1, . . . , n} = EΘ [η (t, B1 ) · · · η (t, Bn )]
(6.15)
from which the result follows. For any n ≥ 1, a measurable function f (x1 , . . . , xn ) on Sn is symmetric if
2
f (x1 , . . . , xn ) = f (xπ (1) , . . . , xπ (n) ) for every permutation π of {1, . . . , n}. Let Csym (Sn ) be the set of all continuous, symmetric functions on Sn . By definition, each f in Csym (Sn ) can be written as f (x1 , . . . , xn ) =
1 f (xπ (1) , . . . , xπ (n) ). n! ∑ π
(6.16)
Substituting this into (6.4) and (6.13), yields K f (x1 , . . . , xn ) 1 = ∑ K f (xπ (1) , . . . , xπ (n) ) n! π n
= ∑ Ai f (x1 , . . . , xn ) + i=1
1 (Φi j (x1 , . . . , xn ) − f (x1 , . . . , xn )) 2 i∑ =j
= K˜ f (x1 , . . . , xn ) ∈ Csym (Sn ). Therefore, the generators K and K˜ coincide on Csym (Sn ). Let B1 denote the σ algebra on S generated by functions in Csym (Sn ), and B2 the σ -algebra generated by the map 1 n η : Sn → M1 (S), (x1 , . . . , xn ) → ∑ δxi . n i=1 Then it is known (cf. [20]) that B1 = B2 . It is thus natural to expect:
122
6 Particle Representation
Theorem 6.7. For any n ≥ 1, let (X1n (t), . . . , Xnn (t)) be the n-particle Moran process with generator K, and (Y1n (t), . . . ,Ynn (t)) the n-particle look-down process generated ˜ Let ηn (t) be defined as in (6.8) and by K.
η˜ n (t) =
1 n ∑ δYi (t) . n i=1
(6.17)
Then the M1 (S)-valued processes ηn (t) and η˜ n (t) have the same distribution, provided that d (X1n (0), . . . , Xnn (0)) = (Y1n (0), . . . ,Ynn (0)) is exchangeable. Proof. See Lemma 2.1 in [38]. A different proof can be found in Theorem 11.3.1 of [20]. 2 For the Moran particle systems, a change in n will change the whole system. But for the look-down process, the n-particle process is naturally embedded into the (n+ 1)-particle process. This makes it possible to keep track of the genealogical structure of the population. By taking the projective limit, we end up with an infinite particle system (Y1 (t),Y2 (t), . . .) such that the first n coordinates form the n-particle lookdown process. We call the infinite particle system (Y1 (t),Y2 (t) . . .) the Donnelly– Kurtz infinite look-down process. Corollary 6.8 If (Y1 (0),Y2 (0), . . .) is exchangeable, then (Y1 (t),Y2 (t), . . .) is exchangeable for all t > 0. Proof. By de Finetti’s theorem, the exchangeability of (Y1 (0),Y2 (0), . . .) ensures that the law of (Y1 (0), . . . ,Yn (0)) is of the form in the hypothesis of Theorem 6.6 for each n ≥ 1. Thus the result follows from the fact that (Y1 (t), . . . ,Yn (t)) is exchangeable for each n ≥ 1. 2 Now we are ready to present the main result of this section. Theorem 6.9. Let (Y1 (t),Y2 (t) . . .) be the Donnelly–Kurtz infinite look-down process with an exchangeable initial law. Then
η˜ (t) = lim η˜ n (t) a.s. n→∞
(6.18)
and is a version of the FV process η (t). Proof. By Corollary 6.8, (Y1 (t),Y2 (t) . . .) is exchangeable for any t > 0. It follows from de Finetti’s theorem and Theorem 6.3 that for each t > 0
η˜ (t) = lim η˜ n (t) a.s. n→∞
The fact that η˜ (t) is a version of the FV processes, follows from Theorem 6.5 and Theorem 6.7. 2
6.4 Embedded Coalescent
123
6.4 Embedded Coalescent The FV process in Chapter 5 is concerned with the forward evolution of a population under the influence of mutation and sampling replacement. At the infinity horizon, the population stabilizes at an equilibrium state. The model of the coalescent in Chapter 4 took a backward view. The same equilibrium state can be reached by tracing the most recent common ancestors. The close relation between the two is demonstrated in the representation of the transition functions in Chapter 5. The urn models studied in Chapter 2 are just different views of the coalescent. We have seen how the FV process is embedded in the particle representation. It will be shown now that the coalescent is also naturally embedded in the look-down process. Thus the backward and forward viewpoints are unified through the particle representation. Let {Ni j (·) : i, j ≥ 1} be a family of independent Poisson processes on the whole real line with the Lebesgue measure as the common mean measure. Let each point in Ni j (·) correspond to a look-down event. Noting that there is a stationary process with generator A and stationary distribution ν0 , then by associating each point in Ni j (·) with an independent version of this stationary A-motion and following the lookdown mechanism, we will have an explicit construction of the look-down process on the whole real line. Running this process backwards in time, for any s < t ≤ 0, let
σ 1j (t) = sup u : ∑ Ni j ((u,t]) > 0 i< j
be the time of the most recent look-down from level j, and the level that is looked down at time σ 1j (t) is denoted by l 1j (t). Define σ 2j (t) = σl11 (t) (σ 1j (t)) and the level j
reached after the second most recent look-down from level j is denoted by l 2j (t). Similarly one can define σ kj (t), l kj (t) for k ≥ 3. The ancestor at time s of the level j particle at time t is then given by σ 1j (t) ≤ s j, γ j (s,t) = k (t) ≤ s < σ kj (t). l j (t), σ k+1 j Consider the present time as zero. Take a sample of size n from the population at time zero. For any t ≥ 0, let
ϒ (t) = {γ j (−t, 0) : j = 1, . . . , n} be the collection of ancestors at time t in the past of the sample at time zero. Introduce an equivalence relation Rn (t) on the set {1, 2, . . . , n} so that (i, j) belongs to Rn (t) if and only if γi (−t, 0) = γ j (−t, 0). Clearly there is a one-to-one correspondence between the indices in ϒ (t) and the equivalence classes derived from Rn (t). Set Dn (t) = |ϒ (t)| ≡ the cardinality of ϒ (t).
124
6 Particle Representation
Theorem 6.10. The process {Rn (u) : u ≥ 0} is Kingman’s n-coalescent and thus Dn (t), the number of equivalence classes in Rn (t), is a pure-death process starting at n with transition rates qk,k−1 =
k(k − 1) , k = 2, . . . , n. 2
Proof. By definition, Rn (0) = {(i, i) : i = 1, . . . , n} and {Rn (t) : t ≥ 0} is a pure jump Markov chain. Since each jump corresponds to a point in the Poisson processes {Ni j (·) : i, j ≥ 1}, the transition rate is thus one. At the time of a jump, the ancestor of one equivalence class looks down to the level associated another equivalence class. As a result, the two equivalence classes coalesce. The theorem now follows from Definition 4.1. 2
6.5 Notes The material in Section 6.1 comes from [24], [2], and [20]. The derivation, in Section 6.2, of the FV process from the Moran particle system was obtained in [25]. The look-down process in Section 6.3 was introduced and studied in [38]. In [40], the particle representations and the corresponding genealogical processes were obtained for the FV process with mutation, selection and recombination. The particle representation was generalized in [41], to the measure-valued branching process. Further generalizations can be found in [132], where particle representations were studied for the measure-valued branching process with spatially varying birth rates.
Part II
Asymptotic Behaviors
Chapter 7
Fluctuation Theorems
Consider a family of random variables {Yλ : λ > 0}. Assume that a weak law of large numbers holds; i.e., Yλ converges in probability to a constant as λ tends to infinity. A fluctuation theorem such as the central limit theorem, refers to the existence of constants a(λ ) and b(λ ) such that the “normalized” family a(λ )Yλ − b(λ ) converges in distribution to a nontrivial random variable as λ tends to infinity. When a fluctuation theorem is established, we will say the family {Yλ : λ > 0} or the family of laws of Yλ satisfies the fluctuation theorem. In the first two sections of this chapter, fluctuation theorems are established for the Poisson–Dirichlet distribution, the Dirichlet process, and their two-parameter counterparts, when the scaled mutation rate θ tends to infinity. The last section includes several Gaussian limits associated with both the one- and two-parameter Poisson–Dirichlet distributions.
7.1 The Poisson–Dirichlet Distribution In this section, we discuss the fluctuation theorems of the Poisson–Dirichlet distribution and its two-parameter generalization. The main idea is to exploit the subordinator representation and marginal distributions obtained in Chapter 2 and Chapter 3. Let Z1 ≥ Z2 ≥ · · · be the ranked jump sizes up to time one of the gamma process {γt : t ≥ 0} with L´evy measure Λ (dx) = θ x−1 e−x dx. It follows from Theorem 2.2 that the law of Z1 Z2 , ,... P(θ ) = (P1 (θ ), P2 (θ ), . . .) = γ1 γ1 is the Poisson–Dirichlet distribution Πθ . Let N(·) be the Poisson random measure associated with the jump sizes of γt over the interval [0, 1]. Then for any z ≥ 0, P{Z1 ≤ z} = P (N((z, +∞)) = 0) = e−Λ ((z,+∞)) = exp{−θ E1 (z)}, S. Feng, The Poisson–Dirichlet Distribution and Related Topics, Probability and its Applications, DOI 10.1007/978-3-642-11194-5 7, © Springer-Verlag Berlin Heidelberg 2010
(7.1)
127
128
7 Fluctuation Theorems
where E1 (z) =
∞ −1 −u z u e du. The density function of Z1 is thus given by
θ −z e exp{−θ E1 (z)}, z > 0. z Lemma 7.1. For any i ≥ 1, Pi (θ ) converges in probability to zero as θ tends to infinity. Proof. It suffices to verify the case of i = 1. By direct calculation, we have E[γ1 ] = θ ,
E[γ12 ] = (θ + 1)θ ,
and lim E
θ →∞
Thus,
γ
1
θ
2
−1
= 0.
γ1 → 1 in probability as θ → ∞. θ
Since γ1 is independent of E[P1 (θ )] =
Z1 γ1 ,
(7.2)
it follows that
E[Z1 ] = θ
∞ 0
e−θ E1 (z) e−z dz −→ 0
as
θ → ∞. 2
√ Lemma 7.2. As θ tends to infinity, the family of random variables θ ( γθ1 − 1) converges in distribution to a standard normal random variable. √ Proof. Let ϕθ (t) denote the characteristic function of θ ( γθ1 − 1). By direct calculation, it it log ϕθ (t) = θ log 1 − √ −√ θ θ 2 t −→ − as θ → ∞, 2 which yields the lemma. Consider a Poisson random measure N(·) on R with mean measure
2
e−u du, u ∈ R. Let ζ1 ≥ ζ2 ≥ · · · be the sequence of points of the Poisson random measure, in descending order. Then for each r ≥ 1, the joint density of (ζ1 , . . . , ζr ) is e− ∑k=1 uk e−e r
−ur
, −∞ < ur < · · · < u1 < ∞,
(7.3)
7.1 The Poisson–Dirichlet Distribution
129
and for each k = 1, 2, . . ., the density function of ζk is
For
1 −ku−e−u e , u ∈ R. Γ (k)
(7.4)
β (θ ) = log θ − log log θ ,
(7.5)
we have: Lemma 7.3. The family of the scaled random variables θ P1 (θ ) − β (θ ) converges in distribution to ζ1 as θ tends to infinity. Proof. By L’Hospital’s rule, lim xex E1 (x) = 1.
x→∞
(7.6)
Thus for any real number u, lim exp{−θ E1 ((u + β (θ )))}
(u + β (θ ))e(u+β (θ )) E1 (u + β (θ )) = lim exp −θ θ →∞ (u + β (θ ))e(u+β (θ ))
(1 + o θ1 ) = lim exp −θ θ →∞ (u + β (θ ))eu · elog θ −log log θ log θ 1 = lim exp − 1 + o u θ →∞ (u + log θ − log log θ )e θ
θ →∞
(7.7)
−u
= e−e . This, combined with (7.1), implies that Z1 (θ√)− β (θ ) converges in distribution to ζ1 . Since γθ1 converges to 1 in probability, and θ ( γθ1 − 1) converges in distribution to the standard normal random variable, it follows that
θ P1 (θ ) − β (θ ) =
β (θ ) θ θ √ γ1 (Z1 − β (θ )) − θ −1 √ γ1 γ1 θ θ
(7.8)
and Z1 − β (θ ) both converge in distribution to ζ1 as θ tends to infinity.
2 To establish the fluctuation theorem for the Poisson–Dirichlet distribution, we need a classical result in probability theory, Scheff´e’s theorem. Theorem 7.1. (Scheff´e) Let {Xn : n ≥ 1} be a sequence of random variables taking values in Rd with corresponding density functions { fn : n ≥ 1}. If fn converges point-wise to the density function f of an Rd -valued random variable X, then Xn converges in distribution to X. Proof. For any Borel measurable set B in Rd , it follows from the dominated convergence theorem that
130
7 Fluctuation Theorems
|
B
fn (x)dx −
B
f (x)dx| ≤
B
≤2 ≤2
| fn (x) − f (x)|dx
B
( f (x) − fn (x))+ dx
B
f (x)dx → 0, n → ∞,
which implies the result.  □

Now we are ready to prove the fluctuation theorem for the Poisson–Dirichlet distribution.

Theorem 7.2. As θ tends to infinity, (θ P_1(θ) − β(θ), θ P_2(θ) − β(θ), . . .) converges in distribution to (ζ_1, ζ_2, . . .).

Proof. What we need to show is that the law of (θ P_1(θ) − β(θ), θ P_2(θ) − β(θ), . . .) converges weakly to the law of (ζ_1, ζ_2, . . .) in the space M_1(∇) as θ tends to infinity. By the Stone–Weierstrass theorem, it suffices to verify that for each m ≥ 1, (θ P_1(θ) − β(θ), . . . , θ P_m(θ) − β(θ)) converges in distribution to (ζ_1, . . . , ζ_m) as θ tends to infinity. Following (7.8), we have that for any t_1, . . . , t_m in R,

∑_{i=1}^m t_i (θ P_i(θ) − β(θ)) = (θ/γ_1) ∑_{i=1}^m t_i (Z_i − β(θ)) − (β(θ)/√θ)(θ/γ_1) √θ (γ_1/θ − 1) ∑_{i=1}^m t_i.

Thus ∑_{i=1}^m t_i (θ P_i(θ) − β(θ)) and ∑_{i=1}^m t_i (Z_i − β(θ)) have the same limiting distribution, or equivalently,

(θ P_1(θ) − β(θ), . . . , θ P_m(θ) − β(θ))  and  (Z_1 − β(θ), . . . , Z_m − β(θ))

have the same limiting distribution. By direct calculation, the joint density function of (Z_1, . . . , Z_m) is

(θ^m / (z_1 · · · z_m)) exp{−∑_{i=1}^m z_i} exp{−θ E_1(z_m)},  z_1 ≥ · · · ≥ z_m > 0,

and for any u_1 ≥ u_2 ≥ · · · ≥ u_m and large enough θ, the joint density function of (Z_1 − β(θ), . . . , Z_m − β(θ))
is

(θ^m / [(u_1 + β(θ)) · · · (u_m + β(θ))]) exp{−∑_{i=1}^m u_i − mβ(θ) − θ E_1(u_m + β(θ))}
  = ((log θ)^m / [(u_1 + β(θ)) · · · (u_m + β(θ))]) exp{−∑_{i=1}^m u_i} exp{−θ E_1(u_m + β(θ))},

which, by (7.6), converges to

exp{−∑_{i=1}^m u_i} exp{−e^{−u_m}}
as θ tends to infinity. It follows by Theorem 7.1 that (Z_1 − β(θ), . . . , Z_m − β(θ)) converges in distribution to (ζ_1, . . . , ζ_m) as θ tends to infinity.  □

By the continuity of the projection map, we get:

Corollary 7.3 For each n ≥ 2, the family {θ P_n(θ) − β(θ) : θ > 0} converges in distribution to ζ_n as θ tends to infinity.

Remarks:
1. Let V_1, V_2, . . . be the GEM representation (2.13) of P(θ). By direct calculation, we have

E[V_i] = (θ/(θ + 1))^{i−1} (1/(θ + 1)),  i = 1, 2, . . . ,

and

E[P_1(θ)] = ∫_0^∞ e^{−u} e^{−θ E_1(u)} du ≥ e^{−β(θ)} e^{−θ E_1(β(θ))} ≈ (log θ)/θ.
This, combined with the fluctuation theorem, shows that the value of P1 (θ ) obtained by the ordering of V1 ,V2 , . . . increases its value by a scale factor of log θ for large θ . 2. In the classical extreme value theory (cf. [19]), the limiting distributions in the fluctuation theorems for the maximum of n independent and identically distributed random variables consist of three families of distributions: the Gumbel, Fr´echet, and Weibull. The distribution of ζ1 belongs to the family of Gumbel distributions. Next we turn to the fluctuation theorem for the two-parameter Poisson–Dirichlet distribution PD(α , θ ) with 0 < α < 1, θ > −α . Since θ will eventually tend to infinity, we may assume θ > 0. Let P(α , θ ) = (P1 (α , θ ), P2 (α , θ ), . . .) have the law PD(α , θ ). Then by Proposition 3.7, P(α , θ ) can be represented as J1 (α , θ ) J2 (α , θ ) , ,... , σα ,θ σ α ,θ
which is independent of the Gamma(θ , 1) random variable σα ,θ . Let β (α , θ ) = log θ − (α + 1) log log θ − log Γ (1 − α ).
(7.9)
Then the following holds. Lemma 7.4. The family {θ P1 (α , θ ) − β (α , θ ) : θ > 0} converges in distribution to ζ1 as θ tends to infinity. Proof. First note that σα ,θ and γ1 have the same distribution. Since
θ P_1(α, θ) − β(α, θ) = (θ/σ_{α,θ})(J_1(α, θ) − β(α, θ)) − (β(α, θ)/√θ)(θ/σ_{α,θ}) √θ (σ_{α,θ}/θ − 1),    (7.10)
it follows from an argument similar to that used in the proof of Lemma 7.3, that θ P1 (α , θ ) − β (α , θ ) and J1 (α , θ ) − β (α , θ ) have the same limiting distribution. For any u in R, choose θ large enough so that
α Γ (1 − α )
∞ u+β (α ,θ )
x−(1+α ) e−x dx < 1.
Then P{J1 (α , θ ) − β (α , θ ) ≤ u} = E[E[J1 (α , θ ) ≤ u + β (α , θ )|γ1/α ]] ∞ α −(1+α ) −x = E exp γ x e dx Γ (1 − α ) 1/α u+β (α ,θ ) −θ /α ∞ α x−(1+α ) e−x dx . = 1− Γ (1 − α ) u+β (α ,θ )
(7.11)
The lemma now follows from the fact that −θ /α ∞ α x−(1+α ) e−x dx lim 1 − θ →∞ Γ (1 − α ) u+β (α ,θ ) −θ /α α −(1+α ) −(u+β (α ,θ )) (u + β (α , θ )) = lim 1 − e θ →∞ Γ (1 − α ) (1+α ) −θ /α α e−u log θ = lim 1 − θ →∞ θ u + β (α , θ ) −u
= e−e . 2
Theorem 7.4. As θ tends to infinity, (θ P1 (α , θ ) − β (α , θ ), θ P2 (α , θ ) − β (α , θ ), . . .) converges in distribution to (ζ1 , ζ2 , . . .). Proof. Following an argument similar to that used in the proof of Theorem 7.2, it suffices to verify that for each m > 1, (θ P1 (α , θ ) − β (α , θ ), . . . , θ Pm (α , θ ) − β (α , θ )) converges in distribution to (ζ1 , . . . , ζm ) as θ tends to infinity. For any u1 ≥ u2 · · · ≥ um > −∞, choose θ large enough such that ui + β (α , θ ) > 0, i = 1, . . . , m, θ and
ui + β (α , θ ) < 1. θ i=1 m
∑
By Theorem 3.6, the joint density function of (θ P1 (α , θ ) − β (α , θ ), . . . , θ Pm (α , θ ) − β (α , θ )) is ∑m (u +β (α ,θ ))
(1 − i=1 i θ )θ +mα −1 Cα ,θ cm α θm Cα ,θ +mα [(u1 + β (α , θ )) · · · (um + β (α , θ ))](1+α )
um + β (α , θ ) ×P θ P1 (α , θ + mα ) ≤ . ∑m (u +β (α ,θ )) 1 − i=1 i θ Since
Cα ,θ cm 1 α = , θ →∞ Cα ,θ +mα Γ (1 − α )m lim
e− ∑i=1 (ui +β (α ,θ )) = e− ∑i=1 ui θ −mΓ (1 − α )m (log θ )m(1+α ) , m
and
m
θ +mα −1 m ∑m (ui + β (α , θ )) = 1, lim e∑i=1 (ui +β (α ,θ )) 1 − i=1 θ →∞ θ
it follows that ∑m (u +β (α ,θ ))
(1 − i=1 i θ )θ +mα −1 Cα ,θ cm m α θm = e− ∑i=1 ui . lim θ →∞ Cα ,θ +mα [(u1 + β (α , θ )) · · · (um + β (α , θ ))](1+α ) On the other hand,
(7.12)
P θ P1 (α , θ + mα ) ≤
um + β (α , θ ) 1−
∑m i=1 (ui +β (α ,θ ) θ
= P {(θ + mα )P1 (α , θ + mα ) − β (α , θ + mα ) + δ (α , θ ) ≤ um } . where
δ (α , θ ) = β (α , θ ) − β (α , θ + mα ) −
mα + ∑m um + β ( α , θ ) i=1 (ui + β (α , θ )) ∑m (u +β (α ,θ )) θ 1 − i=1 i θ
converges to zero as θ tends to infinity. By Lemma 7.4,
−um um + β ( α , θ ) lim P θ P1 (α , θ + mα ) ≤ = e−e , m (u +β (α ,θ ) ∑ θ →∞ 1 − i=1 i θ
(7.13)
which, combined with (7.12) and Theorem 7.1, implies the theorem. 2
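The two-parameter fluctuation theorem can be explored numerically in the same spirit as the one-parameter case. The sketch below is our own illustration, based on a finite truncation of the two-parameter GEM representation (the truncation level and the sample sizes are arbitrary choices we make here); β(α, θ) is the centering constant (7.9) and the limit is again standard Gumbel.

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(1)

def largest_atom_gem2(alpha, theta, n_sticks=20000):
    """Largest weight of a truncated two-parameter GEM: W_i ~ Beta(1 - alpha, theta + i*alpha).
    The truncation level is chosen large enough that the leftover mass is small."""
    i = np.arange(1, n_sticks + 1)
    w = rng.beta(1.0 - alpha, theta + i * alpha)
    sticks = w * np.cumprod(np.concatenate(([1.0], 1.0 - w[:-1])))
    return sticks.max()

alpha, theta = 0.3, 200.0
# beta(alpha, theta) = log(theta) - (alpha + 1) log log(theta) - log Gamma(1 - alpha), as in (7.9)
beta_c = np.log(theta) - (alpha + 1.0) * np.log(np.log(theta)) - gammaln(1.0 - alpha)
sample = np.array([theta * largest_atom_gem2(alpha, theta) - beta_c for _ in range(500)])
print("sample mean:", sample.mean(), "   Gumbel mean: 0.5772")
```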
7.2 The Dirichlet Process

Let ν_0 be a diffuse probability measure on [0, 1], and set h(t) = ν_0([0, t]). Then h(t) is a nondecreasing, continuous function on [0, 1] with h(0) = 0, h(1) = 1. For the sequence ξ_k, k = 1, 2, . . . , of independent random variables with common distribution ν_0, let

Ξ_{θ,ν_0} = ∑_{k=1}^∞ P_k(θ) δ_{ξ_k}   and   Ξ_{θ,α,ν_0} = ∑_{k=1}^∞ P_k(α, θ) δ_{ξ_k}

be the respective one-parameter and two-parameter Dirichlet processes. For every t in [0, 1], Ξ_{θ,ν_0}([0, t]) and Ξ_{θ,α,ν_0}([0, t]) have the same distributions as γ(h(t))/γ_1 and σ(h(t) γ_{1/α} (CΓ(1 − α))^{−1})/σ_{α,θ}, respectively. Let B_t denote the standard Brownian motion. For 0 ≤ t ≤ 1, the process

B̂(t) = B_t − tB_1
is called the Brownian bridge. Both Brownian motion and the Brownian bridge appear as the limits in fluctuation theorems associated with subordinators. Due to the subordinator representations, it is natural to expect functional fluctuation theorems for the Dirichlet process and its two-parameter counterpart.
Let D = D([0, 1]) be the space of real-valued functions on [0, 1] that are right continuous and have left-hand limits (and left continuity at 1), and H denote the set of all strictly increasing and continuous functions from [0, 1] onto itself, keeping the end points fixed. For any x, y in D, set

d(x, y) = inf{ sup_{0≤t≤1} |r(t) − t| ∨ sup_{0≤t≤1} |x(r(t)) − y(t)| : r ∈ H }.
Then (D, d) is a complete, separable metric space. The set C of all continuous functions on [0, 1] is a closed subset of D and the subspace topology of C is the topology of uniform convergence. For any m ≥ 1, and 0 ≤ t1 < · · · < tm ≤ 1, the projection map πt1 ,...,tm : D → Rm , x(·) → (x(t1 ), . . . , x(tm )) is measurable but not continuous on D. The keys to the proof of our fluctuation theorems are the following criterion for convergence in distribution and the comparison theorem. Theorem 7.5. Let Y (t) belong to C. Suppose that for any m ≥ 1, and any 0 ≤ t1 < · · · < tm ≤ 1, πt1 ,...,tm Yλ ⇒ πt1 ,...,tm Y, λ → ∞; (7.14) and there exist a > 0, b > 12 , and a nondecreasing continuous function g on [0, 1] such that for any 0 ≤ t1 ≤ t ≤ t2 ≤ 1, E[(|Yλ (t) −Yλ (t1 )| · |Yλ (t2 ) −Yλ (t)|)a ] ≤ (g(t2 ) − g(t1 ))2b .
(7.15)
Then Yλ ⇒ Y as λ tends to infinity. Proof. This follows from Theorem 15.6 in [14].
2
Theorem 7.6. (Comparison theorem) Let {Yλ (t) : λ > 0} and {Y˜λ (t) : λ > 0} be two families of processes in D. Assume that for any ε > 0, lim P{ sup |Yλ (t) − Y˜λ (t)| > ε } = 0.
λ →∞
(7.16)
0≤t≤1
Then Yλ ⇒ Y if and only if Y˜λ ⇒ Y as λ tends to infinity. Proof. Due to symmetry, it suffices to verify the “if” part. Assume that Y˜λ ⇒ Y . Then the following hold: (1) For any b > 0, there exists an c > 0 such that for all λ > 0, P{ sup |Y˜λ (t)| ≥ c} ≤ b.
(7.17)
0≤t≤1
(2) For any ε > 0 and b > 0, there exist a 0 < δ < 1, and an λ0 > 0 such that for λ ≥ λ0
P{wY ˜ (δ ) ≥ ε } ≤ b, λ
(7.18)
where wY ˜ (δ ) = inf{ λ
sup 0
|Y˜λ (t) − Y˜λ (s)| : r ≥ 1, 0 = t0 < · · · < tr = 1,
t_k − t_{k−1} > δ, k = 1, . . . , r}. It follows from (7.16) that Y_λ(t) also satisfies conditions (7.17) and (7.18). Therefore, the family of laws of Y_λ(·) is tight. Applying (7.16) again we obtain that Y_λ ⇒ Y.  □

Theorem 7.7. Set

X_θ(t) = √θ (γ(h(t))/θ − h(t)),  X(t) = B(h(t)).
Then Xθ (·) converges in distribution to X(·) in D as θ tends to infinity. Proof. First note that X(·) is in C. For any m ≥ 1, 0 = t0 ≤ t1 < · · · < tm ≤ 1, let sk = h(tk ), k = 1, . . . , m. For any λ1 , . . . , λm in R and large θ , it follows by direct calculation that
E[ exp{ ∑_{k=1}^m iλ_k (X_θ(t_k) − X_θ(t_{k−1})) } ]
  = exp{ −∑_{k=1}^m [ θ(s_k − s_{k−1}) log(1 − iλ_k/√θ) + iλ_k √θ (s_k − s_{k−1}) ] }
  = exp{ −∑_{k=1}^m (s_k − s_{k−1}) λ_k²/2 + O(1/√θ) }
  → exp{ −∑_{k=1}^m (s_k − s_{k−1}) λ_k²/2 },  θ → ∞,
which implies that (Xθ (t1 ) − Xθ (t0 ), . . . , Xθ (tm ) − Xθ (tm−1 )) ⇒ (X(t1 ) − X(t0 ), . . . , X(tm ) − X(tm−1 )), as θ converges to infinity. Since the map f (x1 , . . . , xm ) = (x1 , x1 + x2 , . . . , x1 + · · · + xm ) is continuous, it follows from the continuous-mapping theorem ([14]) that (Xθ (t1 ), . . . , Xθ (tm )) ⇒ (X(t1 ), . . . , X(tm )), θ → ∞. On the other hand, for any 0 ≤ t1 < t < t2 ≤ 1,
(7.19)
E[(X_θ(t) − X_θ(t_1))² (X_θ(t_2) − X_θ(t))²]
  = θ E[ ((γ(h(t)) − γ(h(t_1)))/θ − (h(t) − h(t_1)))² ] · θ E[ ((γ(h(t_2)) − γ(h(t)))/θ − (h(t_2) − h(t)))² ]    (7.20)
  = (h(t) − h(t_1))(h(t_2) − h(t)) ≤ (h(t_2) − h(t_1))²,

which is (7.15) with a = 2, b = 1, g = h. Therefore the result follows from Theorem 7.5.  □

Next we turn to the fluctuation theorem for Ξ_{θ,ν_0}([0, t]) or, equivalently, its subordinator representation through γ(h(t))/γ_1.
√ γ (h(·)) θ ( γ1 − h(·)) converges in
Proof. By direct calculation, √
θ
√ γ (h(t)) θ γ (h(t)) − h(t) = Xθ (t) + θ −1 (7.21) γ1 θ γ1 γ (h(t)) = Xθ (t) − h(t)Xθ (1) − − h(t) Xθ (1). γ1
It follows from Theorem 7.7 that Xθ (·) ⇒ B(h(·)) and Xθ (1) ⇒ B(1). Since the map x(·) → x(·) − h(·)x(1) is continuous on D, it follows from the continuousmapping theorem that ˆ θ → ∞. Xθ (·) − h(·)Xθ (1) ⇒ B(h(·)), For any ε > 0, γ (h(t)) P sup − h(t) Xθ (1) ≥ ε γ1 0≤t≤1 γ (h(t)) θ γ (h(t)) Xθ (1) ≥ ε ≤ P sup − h(t) + −1 θ γ1 θ 0≤t≤1 γ (h(t)) ≤ P sup − h(t) Xθ (1) ≥ ε /2 θ 0≤t≤1 θ γ1 +P −1 Xθ (1) ≥ ε /2 . γ1 θ
138
7 Fluctuation Theorems
Since γ (h(t)) − h(t) is a martingale, it follows from the Cauchy–Schwarz inequality θ and Doob’s inequality that γ (h(t)) − h(t) Xθ (1) ≥ ε P sup γ1 0≤t≤1 2 1/2
1/2 γ (h(t)) 2 E sup E[Xθ2 (1)] − h(t) ≤ ε θ 0≤t≤1 √ + P{Xθ2 (1) ≥ θ ε /2} (7.22) 1/2 √ 2
1/2 4 γ1 E[Xθ2 (1)] −1 + P{Xθ2 (1) ≥ θ ε /2} ≤ E ε θ 6 ≤ √ E[Xθ2 (1)] θε 6 =√ , θε where the last equality is due to E[Xθ2 (1)] = 1. The theorem now follows from Theorem 7.6 and Theorem 7.7. 2 The fact that the gamma process has independent increments plays a key role in the proof of Theorem 7.7. In the subordinator representation of the two-parameter Dirichlet process, the increments of the process σ˜ (t) = σ (γ1/α (CΓ (1 − α ))−1 )t) are exchangeable instead of independent. Thus conditional arguments are needed to establish the corresponding fluctuation theorem.
σ˜ (h(t)) − h(t) . Yθ (t) = θ σα ,θ √ ˆ in D as θ tends to infinity. Then Yθ (·) converges in distribution to 1 − α B(h(·))
Theorem 7.9. Let
√
Proof. First note that σ˜ (1) = σα ,θ . For every s in [0, 1], we have E[σ˜ (s)] = sθ , E[σ˜ 2 (s)] = E[E[σ˜ 2 (s) | γ1/α ]] = E[Var[σ˜ (s) | γ1/α ] + (E[σ˜ (s) | γ1/α ])2 ] = (1 − α )θ s + (θ + α )θ s2 , and
σ˜ (s) (1 − α )s + α s2 . Var = θ θ
Thus for every 0 ≤ t ≤ 1,
7.2 The Dirichlet Process
139
σ˜ (h(t)) → h(t), in probability. θ Set Zθ (t) =
√1 (σ ˜ (h(t)) − h(t)σ˜ (1)). θ
It is not hard to check that
Yθ (t) = Zθ (t) − X˜θ (1) where X˜θ (1) =
√
(7.23)
θ
σ˜ (h(t)) − h(t) , σ˜ (1)
σ˜ (1) −1 θ
(7.24)
has the same distribution as Xθ (1) in (7.22). By an argument similar to that used in (7.22), it follows that for any ε > 0, σ˜ (h(t)) ˜ − h(t) ≥ ε P sup Xθ (1) σ˜ (1) 0≤t≤1 σ˜ (h(t)) θ σ˜ (h(t)) ≤ P sup X˜θ (1) − h(t) + ε −1 ≥ θ σ˜ (1) θ 0≤t≤1 σ˜ (h(t)) − h(t) X˜θ (1) ≥ ε /2 ≤ P sup θ 0≤t≤1 σ˜ (1) ˜ − 1 Xθ (1) ≥ ε /2 (7.25) +P θ 2 1/2 √ 2 σ˜ (h(t)) ≤ E sup − h(t) + P{X˜θ2 (1) ≥ θ ε /2} ε θ 0≤t≤1 1/2 2 σ˜ (h(t)) 2 2 E E sup − h(t) γ1/α + √ ≤ ε θ θε 0≤t≤1 1/2 2 σ˜ (1) 4 2 E E ≤ − 1 γ1/α +√ ε θ θε 6 =√ , θε where in the last inequality, Doob’s inequality and the martingale property of σ˜ (h(t)) − h(t) are used under the conditional law given γ1/α . By Theorem 7.6, Yθ (·) θ and Zθ (·) have the same limit in distribution. To prove the theorem it suffices to verify conditions (7.14) and (7.15) in Theorem 7.5 for the process Zθ (·). Let m ≥ 1, 0 = t0 ≤ t1 < · · · < tm ≤ 1, sk = h(tk ), k = 1, . . . , m, and λ1 , . . . , λm in R. By choosing λm to be zero if necessary, we can take tm = 1. Set m λˆ k = λk − λi (si − si−1 ).
∑
i=1
140
7 Fluctuation Theorems
Then we have m
∑ λˆ k (sk − sk−1 ) = 0,
k=1 m
∑ λˆ k2(sk − sk−1 ) =
k=1
m
∑
(sk − sk−1 )λk2 −
k=1
2
m
∑ (sk − sk−1 )λk
.
k=1
By direct calculation, m
m
i=1
k=1
ˆ k ) − B(s ˆ k−1 )) = ∑ λˆ k (B(sk ) − B(sk−1 )). ∑ λk (B(s
Thus,
√ E exp i 1 − α
m
ˆ k ) − B(s ˆ k−1 )) ∑ λk (B(s
k=1
√
= E exp i 1 − α
m
∑ λˆ k (B(sk ) − B(sk−1 ))
k=1
1−α m ˆ = exp − ∑ λk (sk − sk−1 ) 2 k=1
2 m 1−α m 2 = exp − . ∑ (sk − sk−1 )λk − ∑ (sk − sk−1 )λk 2 k=1 k=1 On the other hand, by writing m
∑ λk (Zθ (tk ) − Zθ (tk−1 )) =
i=1
m
λˆ k
∑ √θ (σ˜ (sk ) − σ˜ (sk−1 )),
k=1
it follows that for large θ ,
m
∑ iλk (Zθ (tk ) − Zθ (tk−1 )
E exp
k=1
∞ λˆ k m γ1/α √ x = E exp (sk − sk−1 ) (e θ − 1)α Cx−(1+α ) e−x dx ∑ CΓ (1 − α ) k=1 0
λˆ √k x m α 0∞ (e θ − 1)x−(1+α ) e−x dx θ = exp − log 1 − ∑ (sk − sk−1 ) α Γ (1 − α ) k=1
m (1 − α )(s − s ˆ2 1 k k−1 )λk = exp − ∑ +O √ . 2 θ k=1
7.2 The Dirichlet Process
141
As θ tends to infinity, we get
E exp
m
∑ iλk (Zθ (tk ) − Zθ (tk−1 ))
k=1
→ exp
1−α − 2
m
∑
(sk − sk−1 )λk2 − k=1
2
m
∑ (sk − sk−1 )λk
,
k=1
which, combined with the continuous-mapping theorem, implies that √ ˆ ˆ (Zθ (t1 ), . . . , Zθ (tm )) ⇒ 1 − α (B(h(t 1 )), . . . , B(h(t m ))), θ → ∞.
(7.26)
For t0 = 0, t4 = 1, λ1 = λ4 = 0, and any 0 ≤ t1 < t2 < t3 ≤ 1, expanding √ E[exp{i θ [(λ2 (Zθ (t2 ) − Zθ (t1 )) + λ3 (Zθ (t3 ) − Zθ (t2 ))]}]
√ 4 = E exp i θ ∑ λk (Zθ (tk ) − Zθ (tk−1 )
k=1
αγ1/α ∞ = E exp Γ (1 − α ) 0
4
∑ (sk − sk−1 )(e
iλˆ k x
− 1) x−(1+α ) e−x dx
k=1
as power series in λ2 and λ3 , and equating the coefficients of λ22 λ32 , we obtain E[(Zθ (t) − Zθ (t1 ))2 (Zθ (t2 ) − Zθ (t))2 ] =
I1 + I2 , θ2
where I1 =
6θΓ (4 − α ) [(1 − (s3 − s1 ))(s2 − s1 )2 (s3 − s2 )2 Γ (1 − α )
+ (s2 − s1 )(1 − (s2 − s1 ))2 (s3 − s2 )2 + (s3 − s2 )(1 − (s3 − s2 ))2 (s2 − s1 )2 ] ≤ C1 (α )θ (h(t3 ) − h(t2 ))(h(t2 ) − h(t1 )), I2 = 6(1 − α )2 (θ + α )θ (s2 − s1 )(s3 − s2 )[(1 − (s2 − s1 ))(1 − (s3 − s2 )) + 2(s2 − s1 )(s3 − s2 )] ≤ C2 (α )θ 2 (h(t2 ) − h(t1 ))(h(t3 ) − h(t2 )). It follows by choosing C3 (α ) = C1 (α ) +C2 (α ), that E[(Zθ (t) − Zθ (t1 ))2 (Zθ (t2 ) − Zθ (t))2 ] ≤ C3 (α )(h(t2 ) − h(t1 ))(h(t3 ) − h(t2 ))
(7.27)
which, combined with (7.26), implies the result. 2
142
7 Fluctuation Theorems
Remark: In comparison with the fluctuation theorem of the one-parameter Dirichlet √ process, the impact of the parameter α is reflected from the factor 1 − α .
7.3 Gaussian Limits Consider a population of individuals whose types are labeled by {1, 2, . . .}. Let p = (p1 , p2 , . . .) denote the relative frequencies of all types with pi denoting the relative frequency of type i for i ≥ 1. For any n ≥ 2, and a random sample of size n from the population, the probability that all individuals in the sample are of the same type is given by ∞
∑ pni ,
i=1
which is the function ϕn (p) defined in (5.5). If the relative frequencies of all types in the population follow the one-parameter Poisson–Dirichlet distribution, then ϕ2 (P(θ )) is known as homozygosity of the population in population genetics. For general n ≥ 2, we call ϕn (P(θ )) the homozygosity of order n. It follows from the Ewens sampling formula that E[ϕn (P(θ ))] =
(n − 1)! → 0, θ → ∞. (θ + 1)(n−1)
This implies that ϕn (P(θ )) converges to zero in probability. It is thus natural to consider fluctuation theorems for {ϕn (P(θ )) : n ≥ 2} as θ tends to infinity. Theorem 7.10. Let H j (θ ) = and
√ θ
θ j−1 ϕ j (P(θ )) − 1 , j = 2, 3, . . . Γ ( j)
Hθ = (H2 (θ ), H3 (θ ), . . .).
Then
Hθ ⇒ H, θ → ∞,
(7.28)
R∞ -valued
random element and for each r ≥ 2, where H = (H2 , H3 , . . .) is a (H2 , . . . , Hr ) has a multivariate normal distribution with zero means, and covariance matrix Cov(H j , Hl ) =
Γ ( j + l) − Γ ( j + 1)Γ (l + 1) , j, l = 2, . . . , r. Γ ( j)Γ (l)
Proof. First note that
P(θ ) =
Z1 Z2 , ,... . γ1 γ1
(7.29)
7.3 Gaussian Limits
143
For each j ≥ 1, set √ M j (θ ) = θ
1 Γ ( j)θ
∞
∑
j Zi − 1
,
i=1
Mθ = (M1 (θ ), . . .). For each fixed r ≥ 1, and any (λ1 , . . . , λr ) in Rr , set r
fθ (x) =
1
∑ Γ ( j)√θ λ j x j .
j=1
It follows from Theorem A.6, that r E eit ∑ j=1 λ j M j (θ )
∞ E eit ∑l=1 fθ (Zl ) ∞ √ r = e−it ∑ j=1 λ j θ exp θ (eit fθ (y) − 1)y−1 e−y dy 0
r 2 Γ ( j + l) t . → exp − ∑ λ j λl 2 j,l=1 Γ ( j)Γ (l) = e−it ∑ j=1 λ j r
√
θ
(7.30)
Let M = (M1 , . . . , ) be such that for each r ≥ 1, (M1 , . . . , Mr ) is a multivariate normal random vector with zero mean and covariance matrix
Γ ( j + l) , j, l = 1, . . . , r. Γ ( j)Γ (l)
(7.31)
Then (7.30) implies that Mθ converges in distribution to M. It follows from direct calculation, that for any j ≥ 2 j ∞ √ Zl θ j . (7.32) −1 H j (θ ) = M j (θ ) + θ ∑ γ1 l=1 Γ ( j)θ Since Mθ ⇒ M, it follows that ∞
Z
j
∑ Γ ( j)l θ
→ 1 in distribution.
(7.33)
l=1
By Theorem 7.7 and basic algebra, one gets √ θ j θ − 1 ⇒ − jM1 , γ1 which, combined (7.32) and (7.33), yields
(7.34)
144
7 Fluctuation Theorems r
r
j=2
j=2
∑ α j H j (θ ) ⇒ ∑ α j H j ,
(7.35)
where H j = M j − jM1 . The theorem now follows from the fact that the covariance of H j and Hl is
Γ ( j + l) − Γ ( j + 1)Γ (l + 1) . Γ ( j)Γ (l) 2 Next consider the case that the relative frequencies of all types in the population follow the two-parameter Poisson–Dirichlet distribution with parameters 0 < α < 1, θ > −α . By the Pitman sampling formula, we have that for any n ≥ 2, E[ϕn (P(α , θ ))] =
(1 − α )(n−1) → 0, θ → ∞. (θ + 1)(n−1)
Therefore, ϕn (P(α , θ )) also converges to zero in probability when θ tends to infinity. The corresponding fluctuation result for {ϕn (P(α , θ )) : n ≥ 2} is established in the next theorem. Theorem 7.11. Let H j (α , θ ) =
√
and
θ
Γ (1 − α ) j−1 θ ϕ j (P(α , θ )) − 1 , j = 2, 3, . . . Γ ( j − α)
Hαθ = (H2 (α , θ ), H3 (α , θ ), . . .).
Then
Hαθ ⇒ Hα , θ → ∞,
(7.36)
where Hα = (H2α , H3α , . . .) is a R∞ -valued random element and for each r ≥ 2, (H2α , . . . , Hrα ) has a multivariate normal distribution with zero mean and covariance matrix Cov(H αj , Hlα ) =
Γ (1 − α )Γ ( j + l − α ) + α − jl, j, l = 2, . . . , r. Γ ( j − α )Γ (l − α )
(7.37)
Proof. Similar to the proof of Theorem 7.10, we appeal to the subordinator representation J1 (α , θ ) J2 (α , θ ) , ,... σα ,θ σα ,θ of P(α , θ ) and define √ M j (α , θ ) = θ
j Γ (1 − α ) ∞ Jl (α , θ ) ∑ θ − 1 , j ≥ 1, Γ ( j − α ) l=1
7.3 Gaussian Limits
145
Mθα = (M1 (α , θ ), M2 (α , θ ), . . .). For each fixed r ≥ 1, any (λ1 , . . . , λr ) in Rr , set r
gθ (x) =
Γ (1 − α )
∑ Γ ( j − α )√θ λ j x j .
j=1
By direct calculation, ∞ 0
(eitgθ (x) − 1)x−(1+α ) e−x dx it Γ (1 − α ) ∑rj=1 λ j √ θ r 2 Γ (1 − α )Γ ( j + l − α )t 2 1 − ∑ . λ j λl + o 2 θΓ ( j − α ) Γ (l − α ) θ j,l=1
=
(7.38)
This, combined with Theorem A.6 applied to the conditional law given γ1/α , implies that for θ large enough r E eit ∑ j=1 λ j M j (α ,θ ) √ r ∞ = e−it ∑ j=1 λ j θ E eit ∑l=1 gθ (Jl (α ,θ )) √ r ∞ = e−it ∑ j=1 λ j θ E E[eit ∑l=1 gθ (Jl (α ,θ )) | γ1/α ] λj = e−it ∑ j=1 r
√
θ
αγ1/α ∞ itg (x) ×E exp (e θ − 1)x−(1+α ) e−x dx Γ (1 − α ) 0
r √ = exp −it ∑ λ j θ
(7.39)
j=1
∞ θ α itgθ (x) −(1+α ) −x × exp − log 1 − (e − 1)x e dx α Γ (1 − α ) 0 ⎫ ⎧ ⎡ 2 ⎤ ⎬ ⎨ t2 r r Γ (1 − α )Γ ( j + l − α ) 1 +α ∑ λj ⎦+o = exp − ⎣ ∑ λ j λl ⎭ ⎩ 2 j,l=1 Γ ( j − α )Γ (l − α ) θ j=1
t2 r Γ (1 − α )Γ ( j + j − α ) → exp − ∑ λ j λl +α . 2 j,l=1 Γ ( j − α )Γ (l − α ) For each r ≥ 1, let (M1α , . . . , Mrα ) be a multivariate normal random vector with zero mean and covariance matrix
Γ (1 − α )Γ ( j + l − α ) , j, l = 1, . . . , r, Γ ( j − α )Γ (l − α )
(7.40)
146
7 Fluctuation Theorems
and set Mα = (M1α , M2α , . . .). Then it follows from (7.39) that Mθα converges in distribution to Mα . Noting that for each j ≥ 2 ⎞ ⎛ − j ∞ √ M j (α , θ ) J ( α , θ ) i √ H j (α , θ ) = M j (α , θ ) + θ ⎝ ∑ − 1⎠ +1 , θ θ i=1 ∞
Ji (α , θ ) → 1, in probability, θ i=1
∑
and
√ θ
∞
Ji (α , θ ) ∑ θ −1 i=1
⇒ M1α ,
it follows that for any r ≥ 2 r
∑ λ j H j (α , θ ) ⇒
j=2
r
∑ λ j (Mαj − jM1α ).
(7.41)
j=2
Choosing H αj = M αj − jM1α for each j ≥ 2, we get the result.
2
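Theorem 7.10 is easy to probe numerically for a single coordinate. The sketch below is our own check (the truncated GEM approximation of ϕ_2 and all sample sizes are choices we make here): it estimates the mean and variance of H_2(θ) = √θ(θ ϕ_2(P(θ)) − 1), whose limit under Theorem 7.10 is N(0, 2); at finite θ a small negative bias in the mean is expected. The two-parameter statement of Theorem 7.11 can be checked analogously with Beta(1 − α, θ + iα) sticks.

```python
import numpy as np

rng = np.random.default_rng(3)

def homozygosity_gem(theta, order=2, n_sticks=3000):
    """phi_order(P(theta)) = sum_i P_i^order, approximated with truncated GEM(theta) weights
    (the ordering of the weights does not matter for this sum)."""
    u = rng.beta(1.0, theta, size=n_sticks)
    sticks = u * np.cumprod(np.concatenate(([1.0], 1.0 - u[:-1])))
    return np.sum(sticks ** order)

theta = 200.0
h2 = np.array([np.sqrt(theta) * (theta * homozygosity_gem(theta) - 1.0) for _ in range(4000)])
# Theorem 7.10: H_2(theta) => N(0, 2), since Cov(H_2, H_2) = (Gamma(4) - Gamma(3)^2)/Gamma(2)^2 = 2.
print("mean:", h2.mean(), "   variance:", h2.var(), "   (limits: 0 and 2)")
```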
For any n ≥ 1, let An be the collection of allelic partitions of n defined in (2.14). Consider a random sample of size n from a population with relative frequencies of different alleles given by P = (P1 , P2 , . . .). Given that P = p = (p1 , p2 , . . .), the conditional probability of the random partition An is given by n
ai
P{An = a | P = p} = φa (p) = C(a) ∑ ∏ ∏ pili j ,
(7.42)
i=1 j=1
where the summation is over distinct li j , i = 1, . . . , n; j = 1, . . . , ai ; and C(a) =
n!
(7.43)
. ∏ni=1 (i!)ai ai !
Let k = ∑ni=1 ai be the total number of different alleles in the sample, and {i : ai > 0, i = 1, . . . , n} the corresponding allelic frequencies. Denote the allelic frequencies in descending order by n1 ≥ · · · ≥ nk ≥ 1. Then n1 , . . . , nk is a partition of n and
φa (p) = C(a)
∑
distinct i1 ,...,ik
It follows from (5.104) and (7.44) that
n
pni11 · · · pikk .
(7.44)
7.3 Gaussian Limits
147
n
k
i=1
i=1
∏ ϕiai (p) = ∏ ϕni (p)
(7.45) k−1
= C−1 (a)φa (p) + ∑
∑
j=1 b∈σ (k, j)
C−1 (a( j, b))φa( j,b) (p),
where σ (k, j) is defined in (5.17), and a( j, b) is obtained from a by coalescing the k different alleles into j different alleles according to b. Let Eθ and Eα ,θ denote the expectations with respect to the one-parameter and the two-parameter Poisson–Dirichlet distributions, respectively. Then by Theorem 2.8 and Theorem 3.8, we have ESF(θ , a) = Eθ [φa (P)], PSF(α , θ , a) = Eα ,θ [φa (P)].
(7.46)
Since increasing θ increases the mean number of alleles in the population, the number of different alleles in a random sample of size n converges to n under both the one-parameter Poisson–Dirichlet distribution and the two-parameter Poisson– Dirichlet distribution as θ tends to infinity. In other words, in the large-θ limit, the random partition An converges in probability to a1 = (n, 0, . . . , 0), and lim ESF(θ , a1 ) = lim PSF(α , θ , a1 ) = 1,
θ →∞
θ →∞
lim ESF(θ , a) = lim PSF(α , θ , a) = 0 for a = a1 .
θ →∞
θ →∞
(7.47) (7.48)
The last theorem in this section gives the asymptotic normality of φa (P) under both the one-parameter Poisson–Dirichlet distribution and the two-parameter Poisson–Dirichlet distribution. The scaling factors depend naturally on whether a equals a1 or not. Theorem 7.12. Fix n ≥ 2. Let H and Hα be defined as in Theorem 7.10 and Theorem 7.11. (1) For any allelic partition a = a1 , we have n √ φa (P(θ )) − ESF(θ , a) ⇒ ∑ ai Hi , θ (7.49) ESF(θ , a) i=1 n √ φa (P(α , θ )) − PSF(α , θ , a) θ (7.50) ⇒ ∑ ai Hiα . PSF(α , θ , a) i=1 (2) For a = a1 and a2 = (n − 2, 1, 0, . . . , 0), we have √
θ
φa1 (P(θ )) − ESF(θ , a1 ) ESF(θ , a2 )
⇒ H2 , √ φa1 (P(α , θ )) − PSF(α , θ , a1 ) ⇒ H2α . θ PSF(α , θ , a2 )
(7.51) (7.52)
148
7 Fluctuation Theorems
Proof. We will focus on the proof of the two-parameter results. The one-parameter results will follow by taking α = 0. Replace p with P(α , θ ) in (7.45), and multiply α ) ai both sides by ∏ni=1 (θ i−1 ΓΓ (1− (i−α ) ) . Since n
∑ ai (i − 1) = n − k,
i=1
−1
(PSF(α , θ , a))
=
θ(n)
n
∏k−1 l=0 (θ + l α )
C −1 (a) ∏
i=1
Γ (1 − α ) Γ (i − α )
a i ,
it follows that ai n θ n ∏k−1 Hi (α , θ ) l=0 (θ + l α ) φa (P(α , θ )) + R(θ ), ∏ √θ + 1 = θ(n) θ k PSF(α , θ , a) i=2 where R(θ ) = θ
n−k
n
∏
i=2
Γ (1 − α ) Γ (i − α )
ai k−1
(7.53)
∑ ∑
C
j=1 b∈σ (k, j)
−1
(a( j, b))φa( j,b) (P(α , θ )) .
Noting that the number of different alleles j in a( j, b) is less than k, and the order of PSF(α , θ , a( j, b)) is θ j−n , it follows that, as θ tends to infinity, √ θ R(θ ) → 0, in probability. Since
θ n ∏k−1 1 l=0 (θ + l α ) , = 1+O k θ(n) θ θ
φa (P(α ,θ )) it follows from Theorem 7.11 that PSF( α ,θ ,a) converges to 1 in probability. Rewrite (7.53) as ai n √ √ φa (P(α , θ )) Hi (α , θ ) √ θ ∏ +1 −1 = θ −1 PSF(α , θ , a) θ i=2
(7.54)
(7.55)
(7.56)
+ R1 (θ ) + R2 (θ ), where √ R1 (θ ) = θ
θ n ∏k−1 l=0 (θ + l α ) −1 θ(n) θ k
√ φa (P(α , θ )) , R2 (θ ) = θ R(θ ). PSF(α , θ , a)
By (7.54) and (7.55), we have R1 (θ ) + R2 (θ ) converges to zero in probability. Thus
7.3 Gaussian Limits
√
θ
n
∏
149
i=2
ai
Hi (α , θ ) √ +1 θ
−1
and
√
θ
φa (P(α , θ )) −1 PSF(α , θ , a)
have the same limit in distribution. Expanding the product ai n Hi (α , θ ) √ + 1 , ∏ θ i=2 and applying the continuous-mapping theorem, it follows that ai n n √ Hi (α , θ ) √ θ ∏ +1 − 1 ⇒ ∑ ai Hiα θ i=2 i=2
(7.57)
which leads to (7.50). It remains to verify (7.52). Note that
∑ φa(P(α , θ )) = 1. a
Therefore
φa1 (P(α , θ )) − PSF(α , θ , a1 ) PSF(α , θ , a2 ) PSF(α , θ , a2 ) − φa2 (P(α , θ )) = PSF(α , θ , a2 ) PSF(α , θ , a) − φa (P(α , θ )) + ∑ PSF(α , θ , a2 ) a=a1 ,a2 PSF(α , θ , a2 ) − φa2 (P(α , θ )) PSF(α , θ , a2 ) PSF(α , θ , a) PSF(α , θ , a) − φa (P(α , θ )) . + ∑ PSF(α , θ , a2 ) PSF(α , θ , a) a=a1 ,a2
=
√ PSF(α ,θ ,a) Since θ PSF( converges to zero as θ → ∞ for a = a1 , a2 , it follows from α ,θ ,a2 ) (7.50) that √ PSF(α , θ , a2 ) − φa2 (P(α , θ )) √ φa1 (P(α , θ )) − PSF(α , θ , a1 ) and θ θ PSF(α , θ , a2 ) PSF(α , θ , a2 ) have the same limit, −H2α , in distribution as θ tends to infinity. Since H2α and −H2α have the same distribution, we get (7.52). 2
150
7 Fluctuation Theorems
7.4 Notes The study of the asymptotic behavior of the Poisson–Dirichlet distribution for large θ has a long history. It was first mentioned in [182] that the large-θ limit corresponds to a fixed mutation rate per nucleotide site within the locus, with a large number of sites. Watterson and Guess [183] obtained asymptotic results for the means of the most common alleles in the neutral model. The result in Theorem 7.2 is essentially obtained in Griffiths [91] where a similar fluctuation theorem was obtained for a K-allele model in the limit as K and θ both go to infinity. The proof presented here follows the idea in [91]. Theorem 7.4 first appeared in Handa [100] where a different proof is given. Theorem 7.6 appears in Chapter VI of [110]. A slightly different form appears as Exercise 18 of Chapter 3 in [62]. The results of both Theorem 7.8 and Theorem 7.9 appear in [114]. The proofs here are novel. Theorem 3.5 in [73] stated an incorrect fluctuation limit of the Dirichlet process. Griffiths [91] obtained the asymptotic normality of the homozygosity of order two for the K-allele model in the limit as K and θ both go to infinity. The results in Theorem 7.10 and the one-parameter part of Theorem 7.12 were obtained in [118]. Theorem 7.11 first appeared in [100] but the proof here is different. The twoparameter part of Theorem 7.12 seems to be new. The proof here follows the idea in [118] with some modifications in the details.
Chapter 8
Large Deviations for the Poisson–Dirichlet Distribution
As seen in Chapter 1, the parameter θ in the Poisson–Dirichlet distribution is the scaled population mutation rate in the context of population genetics. When the mutation rate is small, there is a tendency for only a few alleles to have high frequencies and dominate the population. On the other hand, large values of θ correspond to the situation where the proportions of different alleles are evenly spread. Noting that θ is proportional to the product of certain effective population size and individual mutation rate, large values of θ also correspond to a population with fixed individual mutation rate and large effective population size. The large deviations are established in this chapter for the Poisson–Dirichlet distribution and its two-parameter counterpart when θ tends to either zero or infinity. Also included are several applications that provide motivation for the large deviation results. Appendix B includes basic terminology and results of large deviation theory.
8.1 Large Mutation Rate This section is devoted to establishing LDPs (large deviation principles, see Appendix B) for the Poisson–Dirichlet distribution and its two-parameter counterpart when the parameter θ tends to infinity. Since the state spaces involved are compact, rate functions obtained are automatically good rate functions.
8.1.1 The Poisson–Dirichlet Distribution Recall that the Poisson–Dirichlet distribution with parameter θ , denoted by Πθ , is the law of P(θ ) = (P1 (θ ), P2 (θ ), . . .). It is a probability on the space ∇∞ =
∞
(p1 , p2 , . . .) : p1 ≥ p2 ≥ · · · ≥ 0, ∑ p j = 1 , j=1
and has the GEM representation through the descending order statistics of V1 = U1 , Vn = (1 −U1 ) · · · (1 −Un−1 )Un , n ≥ 2,
(8.1)
with {Un : n ≥ 1} being a sequence of i.i.d. Beta(1, θ ) random variables. For notational convenience, Πθ will also denote the extension of the Poisson–Dirichlet distribution to the closure ∇ of ∇∞ . The main result of this subsection is the LDP on the space ∇ for {Πθ : θ > 0} as θ tends to infinity. This will be established through a series of lemmas. Lemma 8.1. For any n ≥ 1, let Zn (θ ) = max{U1 , . . . ,Un }. Then the family {Zn (θ ) : θ > 0} satisfies an LDP on [0, 1] with speed 1/θ and rate function 1 , x ∈ [0, 1) log 1−x (8.2) I(x) = ∞, else. Proof. Let
Λ (λ ) = ess sup{λ y + log(1 − y)}
(8.3)
y∈[0,1]
=
λ − 1 − log λ , λ > 1 0, else,
where ess sup denotes the essential supremum. Then clearly Λ (λ ) is finite for all λ , and is differentiable. By direct calculation, we have E[eθ λ Zn ] =
1 0
exp{θ Fθ (y)} dy,
where log n + log θ θ θ −1 n−1 + log[1 − (1 − y)θ ] + log(1 − y). θ θ
Fθ (y) = λ y +
Therefore
(8.4)
lim log{E[eθ λ Zn ]} θ = Λ (λ ), 1
θ →∞
which, combined with Theorem B.8 (with ε = 1/θ ), implies the lemma.
2
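Since U_1, . . . , U_n are i.i.d. Beta(1, θ), the distribution function of Z_n(θ) is available in closed form, P{Z_n(θ) ≤ x} = (1 − (1 − x)^θ)^n, so the decay rate in Lemma 8.1 can be checked directly. The following sketch is our own check (the choice n = ⌊θ⌋ is only meant to illustrate, in the spirit of Lemma 8.2 below, that a growing number of variables does not change the rate); it prints (1/θ) log P{Z_n(θ) ≥ x} against −I(x) = log(1 − x).

```python
import numpy as np

def log_tail_rate(theta, x, n=1):
    """(1/theta) * log P{Z_n(theta) >= x}, where Z_n(theta) is the maximum of n i.i.d.
    Beta(1, theta) variables, so that P{Z_n(theta) <= x} = (1 - (1 - x)**theta)**n."""
    q = np.exp(theta * np.log1p(-x))                 # (1 - x)**theta, tiny for large theta
    log_prob = np.log(-np.expm1(n * np.log1p(-q)))   # log(1 - (1 - q)**n)
    return log_prob / theta

x = 0.4
for theta in [10.0, 100.0, 1000.0]:
    # Lemma 8.1 (and Lemma 8.2, with n growing like a power of theta) give the limit log(1 - x).
    print(theta, log_tail_rate(theta, x, n=int(theta)), "   log(1-x) =", np.log1p(-x))
```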
Lemma 8.2. For any k ≥ 1, let nk (θ ) denote the integer part of θ k . Then the family {Znk (θ ) (θ ) : θ > 0} satisfies an LDP with speed 1/θ and rate function I defined in (8.2). Proof. Choosing n = nk (θ ) in (8.4), we get
(log nk (θ ) + log θ ) θ θ −1 nk (θ ) − 1 + log[1 − (1 − y)θ ] + log(1 − y). θ θ
Fθ (y) = λ y +
For any ε in (0, 1/2), and λ ≥ 0, we have 1 log E[eθ λ U1 ] θ →∞ θ 1 θλZ ≤ lim sup log E[e nk (θ ) ] θ θ →∞ ≤ max{λ ε , ess sup[λ y + log(1 − y)]},
Λ (λ ) = lim
y≥ε
where the last inequality follows from the fact that for y in [ε , 1], lim θ l log[1 − (1 − y)]θ = 0 for any l ≥ 1.
θ →∞
Letting ε go to zero, it follows that
Λ (λ ) = lim
θ →∞
1 θλZ log E[e nk (θ ) ]. θ
For negative λ , we have lim sup θ →∞
1 θλZ log E[e nk (θ ) ] ≥ lim ess sup{λ y + log(1 − y)} = 0 = Λ (λ ). θ δ →0 y≥δ
The lemma follows by another application of Theorem B.8. 2 Lemma 8.3. For any n ≥ 1, let Wn = (1 − U1 )(1 − U2 ) · · · (1 − Un ). Then for any δ > 0, 1 lim sup log P{Wn2 (θ ) ≥ δ } = −∞. (8.5) θ →∞ θ Proof. By direct calculation, P{Wn2 (θ ) ≥ δ } = P θ
n2 (θ )
∑
log(1 −U j ) ≥ θ log δ
j=1
≤e
θ log δ1
θ log(1−U1 ) n2 (θ )
(E[e ]) =e 1 2 = exp θ log − (θ − 1) log 2 . δ
The lemma follows by letting θ go to infinity.
θ log δ1
n2 (θ ) 1 2
2
The next lemma establishes the LDP for {P1 (θ ) : θ > 0}. Lemma 8.4. The family {P1 (θ ) : θ > 0} satisfies an LDP on [0, 1] with speed 1/θ and rate function I, given by (8.2). Proof. Use the GEM representation for P1 (θ ) and set Pˆ1 (θ ) = max{V1 , . . . ,Vn2 (θ ) }. Then clearly P1 (θ ) ≥ Pˆ1 (θ ). For any δ > 0, it follows from Lemma 8.3 that lim sup θ →∞
1 1 log P{P1 (θ ) − Pˆ1 (θ ) > δ } ≤ lim sup log P{Wn2 (θ ) > δ } = −∞. θ θ θ →∞
In other words, P1 (θ ) and Pˆ1 (θ ) are exponentially equivalent, and thus have the same LDPs, provided one of them has an LDP. By definition, we have U1 = Z1 (θ ) ≤ Pˆ1 (θ ) ≤ Zn2 (θ ) . Applying Lemma 8.1, Lemma 8.2, and Corollary B.9, we conclude that the law of Pˆ1 (θ ) satisfies an LDP on space [0, 1] with speed 1/θ and rate function I. 2 For any n ≥ 1, let ∇n =
n
(p1 , . . . , pn ) : 0 ≤ pn ≤ . . . ≤ p1 , ∑ pk ≤ 1 .
(8.6)
k=1
Lemma 8.5. For fixed n ≥ 2, the family {(P1 (θ ), . . . , Pn (θ )) : θ > 0} satisfies an LDP on the space ∇n with speed 1/θ and rate function log 1−∑n1 p , (p1 , . . . , pn ) ∈ ∇n , ∑nk=1 pk < 1 k=1 k (8.7) Sn (p1 , . . . , pn ) = ∞, else. Proof. Since ∇n is compact, by Theorem B.6, the family {(P1 (θ ), . . . , Pn (θ )) : θ > 0} satisfies a partial LDP. By Theorem 2.6, the density function gθ1 (p) of P1 (θ ) and the joint density function gθn (p1 , . . . , pn ) of (P1 (θ ), . . . , Pn (θ )) satisfy the relations gθ1 (p)p(1 − p)1−θ = θ
(p/(1−p))∧1 0
gθ1 (x) dx,
and gθn (p1 , . . . , pn ) n n = where
θ (1−∑k=1 pk )θ −1 (pn /(1−∑nk=1 pk ))∧1 θ g1 (u) du, 0 p1 ···pn
0,
◦
(p1 , . . . , pn ) ∈ ∇n else,
(8.8)
◦
∇n =
n
(p1 , . . . , pn ) ∈ ∇n : 0 < pn < · · · < p1 < 1, ∑ pk < 1 . k=1
◦
Clearly ∇n is the closure of ∇n . Now for any (p1 , . . . , pn ) ∈ ∇n and δ > 0, let G((p1 , . . . , pn ); δ ) = {(q1 , . . . , qn ) ∈ ∇n : |qk − pk | < δ , k = 1, . . . , n}, F((p1 , . . . , pn ); δ ) = {(q1 , . . . , qn ) ∈ ∇n : |qk − pk | ≤ δ , k = 1, . . . , n}. Then the family {G((p1 , . . . , pn ); δ ) : δ > 0, (p1 , . . . , pn ) ∈ ∇n } is a base for the ◦ topology of ∇n . First assume that (p1 , . . . , pn ) is in ∇n . Then we can choose ◦ δ so that F((p1 , . . . , pn ); δ ) is a subset of ∇n . By (8.8), for any (q1 , . . . , qn ) in F((p1 , . . . , pn ); δ ) gθn (q1 , . . . , qn ) ≤
θ n (1 − ∑nk=1 (pk − δ ))θ −1 , (p1 − δ ) · · · (pn − δ )
(8.9)
which implies lim sup θ →∞
1 1 log Pn,θ {F((p1 , . . . , pn ); δ )} ≤ − log , θ 1 − ∑nk=1 (pk − δ )
(8.10)
where Pn,θ denotes the law of (P1 (θ ), . . . , Pn (θ )). Therefore lim sup lim sup δ →0
θ →∞
1 log Pn,θ {F((p1 , . . . , pn ); δ )} ≤ −Sn (p1 , . . . , pn ). θ
(8.11)
Clearly (8.11) also holds for (p1 , . . . , pn ) = (0, . . . , 0). For other points outside ◦ ∇n , there are two possibilities. Either pn = 1 − ∑n−1 i=1 pi > 0 or pl = 0 for some 1 < l ≤ n. Since the estimate (8.9) holds in the first case, (8.11) holds too. In the second case, let k = inf{l : 1 < l < n, pl = 0}. Then we have Sn (p1 , . . . , pn ) = Sk (p1 , . . . , pk ) and (8.11) follows from the upper bound for Pk,θ {F((p1 , . . . , pk ); δ )} and the fact that Pn,θ {F((p1 , . . . , pn ); δ )} ≤ Pk,θ {F((p1 , . . . , pk ); δ )}. Next we turn to the lower bound. First note that if ∑nk=1 pk = 1, the lower bound is trivially true since Sn (p1 , . . . , pn ) = ∞. Hence we assume that ∑nk=1 pk < 1. ◦ If (p1 , . . . , pn ) is in ∇n , then we can choose δ so that 0<δ <
1 − ∑nk=1 pk n
and
◦
G((p1 , . . . , pn ); δ ) ⊂ ∇n .
Using (8.8) again, one has that for any (q1 , . . . , qn ) in G((p1 , . . . , pn ); δ ) gθn (q1 , . . . , qn ) ≥ θ n
n
θ −1
1 − ∑ (pk + δ ) k=1
((pn −δ )/(1−∑nk=1 (pk −δ )))∧1 0
g1 (u) du,
which implies 1 log Pn,θ {G((p1 , . . . , pn ); δ )} θ 1 ≥ − log n 1 − ∑k=1 (pk + δ )
pn − δ − inf I(p) : p < ∧1 1 − ∑nk=1 (pk − δ ) 1 , = − log 1 − ∑nk=1 (pk + δ )
lim inf θ →∞
where in the second line, we used the LDP obtained in Lemma 8.4. Letting δ go to zero, yields lim inf lim inf δ →0
θ →∞
1 log Pn,θ {G((p1 , . . . , pn ); δ )} ≥ −Sn (p1 , . . . , pn ). θ
(8.12)
◦
If (p1 , . . . , pn ) is not in ∇n , then there is 1 ≤ l ≤ n such that pl = 0. Choosing ε > 0 and δ > 0 so that ◦
(p1 (ε ), . . . , pn (ε )) ≡ (p1 + ε , . . . , pn + ε ) ∈ ∇n
◦
G((p1 (ε ), . . . , pn (ε )); δ ) ⊂ G((p1 ; . . . , pn ), δ ) ∩ ∇n , one gets lim inf θ →∞
1 log Pn,θ {G((p1 , . . . , pn ); δ )} θ 1 ≥ lim inf log Pn,θ {G((p1 (ε ), . . . , pn (ε )); δ )}. θ →∞ θ
(8.13)
Taking limits in the order of δ → 0, ε → 0 and δ → 0, and taking into account the continuity of Sn (p1 , . . . , pn ), it follows that (8.12) holds in this case. The lemma now follows from (8.11), (8.12) and Theorem B.6. 2 Lemma 8.6. For k ≥ 2, the family {Pk (θ ) : θ > 0} satisfies an LDP on [0, 1] with speed 1/θ and rate function
I k (p) =
1 , p ∈ [0, 1/k) log 1−kp ∞, else.
(8.14)
Thus for any k ≥ 1, the LDP for {P1 (θ ) : θ > 0} is the same as the LDP for the family {kPk (θ ) : θ > 0}. Proof. For any k ≥ 2, define the projection map
φk : ∇k −→ [0, 1], (p1 , p2 , . . . , pk ) → pk . Clearly φk is continuous, and Lemma 8.5 combined with the contraction principle implies that the law of Pk (θ ) satisfies an LDP on [0, 1] with speed θ and rate function I (p) = inf{Sk (p1 , . . . , pk ) : p1 ≥ · · · ≥ pk = p}. For p > 1/k, the infimum is over empty set and is thus infinity. For p in [0, 1/k], the infimum is achieved at the point p1 = p2 = · · · = pk = p. Hence I (p) = I k (p) and the result follows. 2 Now we are ready to prove the LDP for {Πθ : θ > 0}. Theorem 8.1. The family {Πθ : θ > 0} satisfies an LDP on the space ∇ with speed 1/θ and rate function log 1−∑∞1 p , p = (p1 , p2 , . . .) ∈ ∇, ∑∞ k=1 pk < 1 k=1 k S(p) = (8.15) ∞, else. Proof. Due to the compactness of ∇, it suffices, by Theorem B.6, to verify (B.7) for the family {Πθ : θ > 0}. The topology on ∇ can be generated by the following metric: ∞ |pk − qk | d(p, q) = ∑ , 2k k=1 ¯ δ) where p = (p1 , p2 , . . .), q = (q1 , q2 , . . .). For any fixed δ > 0, let B(p, δ ) and B(p, denote the respective open and closed balls centered at p with radius δ > 0. Set nδ = 1 + [log2 (1/δ )] where [x] denotes the integer part of x. Set Gnδ (p; δ /2) = {(q1 , q2 , . . .) ∈ ∇ : |qk − pk | < δ /2, k = 1, . . . , nδ }. Then we have
Gnδ (p; δ /2) ⊂ B(p, δ ).
By Lemma 8.5 and the fact that
Πθ {Gnδ (p; δ /2)} = Pnδ ,θ {G((p1 , . . . , pnδ ); δ /2)}, we get that
158
8 Large Deviations for the Poisson–Dirichlet Distribution
lim inf θ →∞
1 log Πθ {B(p, δ )} θ 1 ≥ lim inf log Pnδ ,θ {G((p1 , . . . , pnδ ); δ /2)} θ →∞ θ ≥ −Snδ (p1 , . . . , pnδ ) ≥ −S(p).
(8.16)
On the other hand, for any fixed n ≥ 1, δ1 > 0, let Fn (p; δ1 ) = {(q1 , q2 , . . .) ∈ ∇ : |qk − pk | ≤ δ1 , k = 1, . . . , n}. Then we have
Πθ {Fn (p; δ1 )} = Pn,θ {F((p1 , . . . , pn ); δ1 )},
and, for δ small enough,
¯ δ ) ⊂ Fn (p; δ1 ), B(p,
which implies that 1 ¯ δ )} log Πθ {B(p; θ 1 (8.17) ≤ lim sup log Pn,θ {F((p1 , . . . , pn ); δ1 )} θ θ →∞ ≤ − inf{Sn (q1 , . . . , qn ) : (q1 , . . . , qn ) ∈ F((p1 , . . . , pn ), δ1 )}.
lim lim sup
δ →0 θ →∞
Letting δ1 go to zero, and then n go to infinity, we get the upper bound lim lim sup
δ →0 θ →∞
1 ¯ δ )} ≤ −S(p), log Πθ {B(p, θ
which, combined with (8.16), implies the result.
(8.18) 2
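A one-dimensional slice of Theorem 8.1 (equivalently Lemma 8.4) can be checked numerically. For x ≥ 1/2 at most one atom of the Poisson–Dirichlet distribution can exceed x, so P{P_1(θ) > x} equals the expected number of atoms in (x, 1], that is, the integral of the standard frequency spectrum θ u^{−1}(1 − u)^{θ−1} du (a known formula not restated in this chapter). The sketch below is our own illustration; the printed values approach log(1 − x) = −I(x).

```python
import numpy as np

def log_upper_tail_p1(theta, x, m=200000):
    """(1/theta) * log P{P1(theta) > x} for x >= 1/2.  At most one atom of PD(theta) can
    exceed 1/2, so the tail probability equals the expected number of atoms in (x, 1],
    i.e. the integral of the frequency spectrum theta * u**(-1) * (1 - u)**(theta - 1)."""
    du = (1.0 - x) / m
    u = x + du * (np.arange(m) + 0.5)                  # midpoint rule on (x, 1)
    logf = np.log(theta) - np.log(u) + (theta - 1.0) * np.log1p(-u)
    log_prob = logf.max() + np.log(np.sum(np.exp(logf - logf.max())) * du)
    return log_prob / theta

x = 0.6
for theta in [20.0, 80.0, 320.0]:
    # Theorem 8.1 / Lemma 8.4 predict the decay rate I(x) = log(1/(1 - x)).
    print(theta, log_upper_tail_p1(theta, x), "   -I(x) =", np.log1p(-x))
```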
8.1.2 The Two-parameter Poisson–Dirichlet Distribution Many properties of the Poisson–Dirichlet distribution have generalizations in the two-parameter setting. In this subsection, we obtain the two-parameter generalization of the LDP result for Πθ . The idea in the proof is very similar to that in the one-parameter case. Let 0 < α < 1 and θ > 0. Recall that the GEM representation for the twoparameter Poisson–Dirichlet distribution PD(α , θ ) is given by V1α ,θ = W1 ,Vnα ,θ = (1 −W1 ) · · · (1 −Wn−1 )Wn , n ≥ 2, where the sequence W1 ,W2 , . . . is independent and Wi is a Beta(1 − α , θ + iα ) random variable. Let P(α , θ ) = (P1 (α , θ ), P2 (α , θ ), . . .) denote the rearrangement of {Vnα ,θ : n = 1, 2, . . .} in descending order.
Lemma 8.7. The family {P1 (α , θ ) : θ > 0} satisfies an LDP on [0, 1] with speed 1/θ and rate function I given by (8.2). Proof. By the GEM representation, E[eλ θ W1 ] ≤ E[eλ θ P1 (α ,θ ) ] for λ ≥ 0 E[eλ θ W1 ] ≥ E[eλ θ P1 (α ,θ ) ] for λ < 0. On the other hand, by Proposition 3.7, E[eλ θ P1 (α ,θ ) ] ≤ E[eλ θ P1 (θ ) ] for λ ≥ 0 E[eλ θ P1 (α ,θ ) ] ≥ E[eλ θ P1 (θ ) ] for λ < 0. By Lemma 8.4 and an argument similar to that used in the proof of Lemma 8.1, the LDP for the family {W1 : θ > 0} is the same as the LDP for the family {P1 (θ ) : θ > 0}. The lemma then follows from Theorem B.8. 2 Lemma 8.8. For each n ≥ 2, the family {(P1 (α , θ ), . . . , Pn (α , θ )) : θ > 0} satisfies an LDP on ∇n with speed 1/θ and rate function Sn given by (8.7). Proof. Let Cα ,θ ,n =
Γ (θ + 1)Γ ( αθ + n)α n−1
Γ (θ + nα )Γ ( αθ + 1)Γ (1 − α )n
(8.19)
.
It follows from Theorem 2.6 and Theorem 3.6 that for any n ≥ 2, and (p1 , . . . , pn ) in ∇n , (P1 (α , θ ), P2 (α , θ ), . . . , Pn (α , θ )) and
(P1 (0, α + θ ), P2 (0, α + θ ), . . . , Pn (0, α + θ ))
have respective joint density functions
ϕnα ,θ (p1 , . . . , pn )
(1 − ∑ni=1 pi )θ +nα −1 = Cα ,θ ,n P P1 (α , nα ∏ni=1 pi
+ θ) ≤
pn 1 − ∑ni=1 pi
(8.20)
,
and gαn +θ (p1 , . . . , pn ) = (α Noting that
θ +α −1 n n (1 − ∑i=1 pi ) +θ) P P1 (0, α ∏ni=1 pi
+θ) ≤
pn 1 − ∑ni=1 pi
(8.21)
.
logCα ,θ ,n log(α + θ )n = lim = 0, θ →∞ θ →∞ θ θ lim
it follows from Lemma 8.4 and Lemma 8.7 that for any (p1 , . . . , pn ) in ∇n , 1 log ϕnα ,θ (p1 , . . . , pn ) θ →∞ θ 1 = lim log gαn +θ (p1 , . . . , pn ) θ →∞ θ = −Sn (p1 , . . . , pn ). lim
(8.22)
The lemma now follows from an argument similar to that used in the proof of Lemma 8.5. 2 The following theorem follows easily from Lemma 8.8 and the finite-dimensional approximations used in Theorem 8.1. Theorem 8.2. The family {PD(α , θ ) : θ > 0} satisfies an LDP on the space ∇ with speed 1/θ and rate function S defined in (8.15). Remarks: (a) The effective domain of S is ∇ \ ∇∞ , while PD(α , θ ) is concentrated on ∇∞ . (b) The LDP for (Wn ) holds trivially. Since P(α , θ ) is the image of the independent sequence (Wn ) through the composition of the GEM map and the ordering map, one would hope to apply the contraction principle to get the LDP for P(α , θ ). Unfortunately the ordering map is not continuous on the effective domain of S, and the contraction principle cannot be applied directly here. (c) The fact that α does not change the LDP is mainly due to the topology used on space ∇.
8.2 Small Mutation Rate In this section, we establish the LDP for Πθ when θ tends to zero, and the LDP for PD(α , θ ) when both θ and α converge to zero. As in the large mutation rate case, these results will be obtained through a series of lemmas and the main techniques in the proof are exponential approximation and the contraction principle.
8.2.1 The Poisson–Dirichlet Distribution Let U = U(θ ) be a Beta(1, θ ) random variable, and a(θ ) = (− log θ )−1 .
(8.23)
8.2 Small Mutation Rate
All results in this subsection involve limits as θ converges to zero, so that a(θ ) → 0+ . Lemma 8.9. The family {U(θ ) : θ > 0} satisfies an LDP on [0, 1] with speed a(θ ) and rate function 0, p = 1 (8.24) I1 (p) = 1, else. Proof. For any b < c in [0, 1], let I denote one of the intervals (b, c), [b, c), (b, c], or [b, c]. It follows from direct calculation that for c < 1 log(1 − rθ ) = −1, θ →0 log θ
lim a(θ ) log P{U ∈ I} = − lim
θ →0
1−c . If c = 1, then limθ →0 a(θ ) log P{U ∈ I} = 0. These, combined with where r = 1−b compactness of [0, 1], implies the result. 2 For any n ≥ 2, let Pˆn (θ ) = max{V1 , . . . ,Vn },
where Vk is defined in (8.1). Lemma 8.10. For any n ≥ 2, the family {Pˆn (θ ) : θ > 0} satisfies an LDP on [0, 1] with speed a(θ ) and rate function ⎧ ⎨ 0, p = 1 1 1 , k ), k = 1, 2, . . . , n − 1 In (p) = k, p ∈ [ k+1 (8.25) ⎩ n, else. Proof. Noting that Pˆn (θ ) is the image of (U1 , . . . ,Un ) through a continuous map, it follows from Lemma 8.9, the independence of (U1 , . . . ,Un ), and the contraction principle, that the family {Pˆn (θ ) : θ > 0} satisfies an LDP on [0, 1] with speed a(θ ) and rate function n
˜ = inf I(p)
∑ I1 (ui ) : 0 ≤ ui ≤ 1, 1 ≤ i ≤ n;
i=1
max{u1 , (1 − u1 )u2 , . . . , (1 − u1 ) · · · (1 − un−1 )un } = p . ˜ = 0. If p is in [1/2, 1), then at One can choose ui = 1 for i = 1, . . . , n to get I(1) least one of the ui is not one. By choosing u1 = p, ui = 1, i = 2, . . . , n, it follows ˜ = 1 for p in [1/2, 1). that I(p) For each m ≥ 2, we have max{u1 , (1 − u1 )u2 , . . . , (1 − u1 ) · · · (1 − um )} = max{u1 , (1 − u1 ) max{u2 , . . . , (1 − u2 ) · · · (1 − um )}}.
(8.26)
Noting that 1 max{u1 , 1 − u1 } ≥ , 0 ≤ u1 ≤ 1, 2 it follows from (8.26) and an induction on m, that for any 1 ≤ i ≤ m, 0 ≤ ui ≤ 1, max{u1 , (1 − u1 )u2 , . . . , (1 − u1 ) · · · (1 − um )} ≥
1 . m+1
(8.27)
1 1 Thus, for 2 ≤ k ≤ n − 1, and p in [ k+1 , k ), in order for the equality
max{u1 , (1 − u1 )u2 , . . . , (1 − u1 ) · · · (1 − un−1 )un } = p ˜ ≥ to hold, it is necessary that u1 , u2 , . . . , uk are all less than one. In other words, I(p) k. On the other hand, the function max{u1 , (1 − u1 )u2 , . . . , (1 − u1 ) · · · (1 − uk )} is a 1 , 1], so there exist u1 < 1, . . . , uk < 1 such that surjection from [0, 1]k onto [ k+1 max{u1 , (1 − u1 )u2 , . . . , (1 − u1 ) · · · (1 − uk )} = p. ˜ = k. By choosing u j = 1 for j = k + 1, . . . , n, it follows that I(p) 1 Finally, for p in [0, n ), in order for max{u1 , (1 − u1 )u2 , . . . , (1 − u1 ) · · · (1 − un−1 )un } = p ˜ = n. Therefore, to have solutions, each ui has to be less than one, and hence I(p) ˜I(p) = In (p) for all p in [0, 1]. 2 Lemma 8.11. The family {P1 (θ ) : θ > 0} satisfies an LDP on [0, 1] with speed a(θ ) and rate function ⎧ ⎨ 0, p = 1 1 1 , k ), k = 1, 2, . . . J1 (p) = k, p ∈ [ k+1 (8.28) ⎩ ∞, p = 0. Proof. By the GEM representation of P1 (θ ) and direct calculation it follows that for any δ > 0 and any n ≥ 1 P{P1 (θ ) − Pˆn (θ ) > δ } ≤ P{(1 −U1 ) · · · (1 −Un ) > δ } n θ −1 , ≤δ 1+θ which implies that lim sup a(θ ) log P{P1 (θ ) − Pˆn (θ ) > δ } ≤ −n. θ →0
(8.29)
Hence {Pˆn (θ ) : θ > 0} are exponentially good approximations of {P1 (θ ) : θ > 0}. By direct calculation,
J1 (p) = sup lim inf inf In (q), δ >0 n→∞ |q−p|<δ
and for every closed subset F of [0, 1] inf J1 (q) = lim sup inf In (q).
q∈F
n→∞ q∈F
Clearly J1 (p) is a good rate function. By Theorem B.5 we obtain the lemma.
2
For any n≥1 and any δ >0, let ∇n , Pn,θ , G((p1 , . . . , pn ); δ ), and F((p1 , . . . , pn ); δ ) be defined as in Section 8.1.1. Then we have: Lemma 8.12. For any fixed n ≥ 2, the family {Pn,θ : θ > 0} satisfies an LDP on space ∇n with speed a(θ ) and rate function
Jn (p1 , . . . , pn ) =
⎧ 0, ⎪ ⎪ ⎨ l − 1,
p
n + J1 ( 1−∑nn pi ⎪ ⎪ i=1 ⎩ ∞,
(p1 , p2 , . . . , pn ) = (1, 0 . . . , 0) 2 ≤ l ≤ n, ∑lk=1 pk = 1, pl > 0 ∧ 1), ∑nk=1 pk < 1, pn > 0 else.
(8.30)
Proof. For any fixed n ≥ 2, let gθ1 (p) and gθn (p1 , . . . , pn ) denote the density function of P1 (θ ) and (P1 (θ ), . . . , Pn (θ )), respectively. Since ∇n is compact, to prove the lemma it suffices to verify that for every (p1 , . . . , pn ) in ∇n , lim lim inf a(θ ) log Pn,θ (F((p1 , . . . , pn ); δ ))
δ →0 θ →0
= lim lim sup a(θ ) log Pn,θ (G((p1 , . . . , pn ); δ )) δ →0 θ →0
(8.31)
= −Jn (p1 , . . . , pn ). For any (p1 , . . . , pn ) in ∇n , define r = r(p1 , . . . , pn ) = max{i : 1 ≤ i ≤ n, pi > 0},
(8.32)
where r is defined to be zero if p1 = 0. We divide the proof into several mutually exclusive cases. Case I: r = 1; i.e., (p1 , . . . , pn ) = (1, . . . , 0). For any δ > 0, F((1, . . . , 0); δ ) ⊂ {(q1 , . . . , qn ) ∈ ∇n : |q1 − 1| ≤ δ }, and one can choose δ < δ such that {(q1 , . . . , qn ) ∈ ∇n : |q1 − 1| < δ } ⊂ G((1, . . . , 0); δ ).
This combined with Lemma 8.11 implies (8.31) in this case. Case II: r = n, ∑nk=1 pk < 1. Choose δ > 0 so that
1 − ∑ni=1 pi . δ < min pn , n
It follows from (8.8) that for any ◦
(q1 , . . . , qn ) ∈ F((p1 , . . . , pn ), δ ) ∩ ∇n , gθn (q1 , . . . , qn ) ≤
θ n (1 − ∑nk=1 (pk + δ ))θ −1 (p1 − δ ) · · · (pn − δ )
pn +δ ∧1 1−∑n (pk +δ ) k=1
0
gθ1 (u) du,
which, combined with Lemma 8.11, implies lim sup lim sup a(θ ) log Pn,θ {F((p1 , . . . , pn ); δ )} θ →0 δ →0
pn + δ ≤ −n + lim lim sup a(θ ) log P P1 (θ ) ≤ ∧ 1 (8.33) 1 − ∑nk=1 (pk + δ ) δ →0 θ →0 pn ∧ 1 , ≤ − n + J1 1 − ∑ni=1 pi where the right continuity of J1 (·) is used in the last inequality. On the other hand, consider the subset n ˜ 1 , . . . , pn ); δ ) ≡ ∏ pi + δ , pi + δ ∩ ∇◦n G(p 2 i=1 of G((p1 , . . . , pn ); δ ). Using (8.8) again, it follows that for any point (q1 , . . . , qn ) in ˜ 1 , . . . , pn ); δ ) the set G((p gθn (q1 , . . . , qn ) ≥θ
n θ −1 n (1 − ∑k=1 (pk + δ /2))
(p1 + δ ) · · · (pn + δ )
((pn +δ /2)/(1−∑n (p +δ /2)))∧1 k=1 k 0
which, combined with Lemma 8.11, implies lim inf a(θ ) log Pn,θ {G((p1 , . . . , pn ); δ )} θ →0
˜ 1 , . . . , pn ); δ )} ≥ lim inf a(θ ) log Pn,θ {G((p θ →0 pn + δ /2 ≥ −n − J1 ∧1 . 1 − ∑ni=1 (pi + δ /2) Therefore
gθ1 (u) du,
lim inf lim inf a(θ ) log Pn,θ {G((p1 , . . . , pn ); δ )} ≥ −Jn (p1 , . . . , pn ).
(8.34)
θ →∞
δ →0
Case III: 2 ≤ r ≤ n − 1, ∑ri=1 pi < 1 or r = 0. This case follows from the estimate (8.33) and the fact that J1 (0) = −∞. Case IV: r = n, ∑nk=1 pk = 1. ◦
◦
Note that for any δ > 0, F((p1 , . . . , pn ); δ ) ∩ ∇n is a subset of {(q1 , . . . , qn ) ∈ ∇n : |qi − pi | ≤ δ , i = 1, . . . , n − 1}. By applying Case II to (P1 (θ ), . . . , Pn−1 (θ )) at the point (p1 , . . . , pn−1 ), we get lim sup lim sup a(θ ) log Pn,θ {F((p1 , . . . , pn ); δ )} δ →0
θ →0
≤ lim lim sup a(θ ) log Pn−1,θ {F((p1 , . . . , pn−1 ); δ )}
(8.35)
δ →0 θ →0
≤ −[n − 1 + J1 (1)] = −(n − 1). On the other hand, one can choose small δ > 0 so that (q1 , . . . , qn ) in Set
◦ G((p1 , . . . , pn ); δ ) ∩ ∇n .
qn 1−∑ni=1 qi
> 1 for any
◦ G˜ = {(q1 , . . . , qn ) ∈ ∇n : pi < qi < pi + δ /(n−1), i = 1, . . . , n−1; pn − δ < qn < pn }.
Clearly G˜ is a subset of G((p1 , . . . , pn ); δ ). It follows from (8.8) that for any ˜ (q1 , . . . , qn ) in G, gθn (q1 , . . . , qn ) ≥
θ n−1 [θ (1 − ∑ni=1 qi )θ −1 ] . (p1 + δ /(n − 1)) · · · (pn−1 + δ /(n − 1))pn
Let
n−1
An = (q1 , . . . , qn−1 ) ∈ ∇n−1 : pi < qi < pi + δ /(n − 1), i = 1, . . . , n − 1, ∑ q j < 1 . j=1
Then G˜
θ
n
1 − ∑ qi
θ −1 dq1 · · · dqn
i=1
= An
= An
dq1 · · · dqn−1
pn ∧(1−∑n−1 qi ) i=1 pn −δ n−1
1 + δ − p n − ∑ qi i=1
θ
n
θ 1 − ∑ qi i=1
dq1 · · · dqn−1 ,
θ −1 dqn
which converges to a strictly positive number depending only on δ and (p1 , . . . , pn ), as θ goes to zero. Hence lim inf lim inf a(θ ) log Pn,θ {G((p1 , . . . , pn ); δ )} θ →0
δ →0
˜ ≥ lim lim inf a(θ ) log Pn,θ {G}
(8.36)
δ →0 θ →0
≥ −(n − 1). Case V: 2 ≤ r ≤ n − 1, ∑ri=1 pi = 1. First note that for any δ > 0, F((p1 , . . . , pn ); δ ) is a subset of {(q1 , . . . , qn ) ∈ ∇n : |qi − pi | ≤ δ , i = 1, . . . , r}. On the other hand, for each δ > 0 one can choose δ0 < δ such that for any δ ≤ δ0 ◦
G((p1 , . . . , pn ); δ ) ⊃ {(q1 , . . . , qn ) ∈ ∇n ; |qi − pi | < δ , i = 1, . . . , r}. Thus (8.31) follows from Case IV for (P1 (θ ), . . . , Pr (θ )). Putting together all the cases, we obtain the lemma. 2 For any n ≥ 1, set Ln =
n
(p1 , . . . , pn , 0, 0, . . .) ∈ ∇ : ∑ pi = 1 i=1
and L=
∞
Li .
i=1
Now we are ready to state and prove the main result of this subsection. Theorem 8.3. The family {Πθ : θ > 0} satisfies an LDP with speed a(θ ) and rate function ⎧ p ∈ L1 ⎨ 0, (8.37) J(p) = n − 1, p ∈ Ln , pn > 0, n ≥ 2 ⎩ ∞, p ∈ L. Proof. For any p = (p1 , p2 , . . .), q = (q1 , q2 , . . .) in ∇, let d(p, q) be the metric de¯ δ ) denote the refined in Theorem 8.1. For any fixed δ > 0, let B(p, δ ) and B(p, spective open and closed balls centered at p with radius δ > 0. We start with the case that p is not in L. For any k ≥ 1, δ > 0, set B¯ k,δ (p) = {(q1 , q2 , . . .) ∈ ∇ : |qi − pi | ≤ δ , i = 1, . . . , k}.
Choose δ > 0 so that 2k δ < δ . Then ¯ δ ) ⊂ B¯ k,δ (p), B(p, and ¯ δ )} lim sup lim sup a(θ ) log Πθ {B(p, δ →0
θ →0
≤ lim sup a(θ ) log Πθ {B¯ k,δ (p)} θ →0
≤ lim sup λ (θ ) log Pk,θ {F((p1 , . . . , pk ), δ )} θ →0
(8.38)
≤ − inf{Jk (q1 , . . . , qk ) : (q1 , . . . , qk ) ∈ F((p1 , . . . , pk ), δ )}. Letting δ go to zero, and then k go to infinity, we obtain lim lim inf a(θ ) log Πθ {B(p, δ )}
δ →0 θ →0
(8.39)
¯ δ )} = −∞. = lim lim sup a(θ ) log Πθ {B(p, δ →0 θ →0
Next consider the case of p belonging to L. Without loss of generality, we assume that p belongs to Ln with pn > 0 for some n ≥ 1. For any δ > 0, let ˜ δ ) = {q ∈ ∇ : |qk − pk | < δ , k = 1, . . . , n}, G(p; ˜ δ ) = {q ∈ ∇ : |qk − pk | ≤ δ , k = 1, . . . , n}. F(p; ˜ 2n δ ). Since ∑ni=1 pi = 1, it follows that, for ¯ δ ) is a subset of F(p; Clearly, B(p, any δ > 0, one can find δ < δ such that ˜ δ ). B(p, δ ) ⊃ G(p; Using results on (P1 (θ ), . . . , Pn (θ )) in Case V of the proof of Lemma 8.12, we get lim lim inf a(θ ) log Πθ (B(p, δ ))
δ →0 θ →0
¯ δ )) = lim lim sup λ (θ ) log Πθ (B(p, δ →0 θ →0
(8.40)
= −(n − 1). Thus the theorem follows from the compactness of ∇ and Theorem B.6.
2
Remarks: (a) If we consider the rate function J as an “energy” function, then the energy needed to get n ≥ 2 different alleles is n − 1. The values of J form a “ladder of energy”. The energy needed to get an infinite number of alleles is infinite and thus it is impossible to have infinitely many alleles under a large deviation. (b) The effective domain of J is clearly L. This is in sharp contrast to the result in Theorem 8.1, where the effective domain of the corresponding rate function associated with a large mutation rate is ∞
p ∈ ∇ : ∑ pi < 1 . i=1
The two effective domains are disjoint. One is part of the boundary of ∇ and the other is the interior of ∇, and both have no intersections with the set ∇∞ , where Πθ is concentrated. (c) By using techniques from the theory of Dirichlet forms, it was shown in [160] that for the infinitely-many-neutral-alleles model, with probability one, there exist times at which the sample path will hit the boundary of a finite-dimensional subsimplex of ∇ or, equivalently, the single point (1, 0, . . .) iff θ is less than one. The intuition here is that it is possible to have only a finite number of alleles in the population if the mutation rate is small; but in equilibrium, with Πθ probability one, the number of alleles is always infinite as long as θ is strictly positive. In other words, the critical value of θ between having only a finite number of alleles vs. an infinite number of alleles is zero for Πθ . In physical terms, this sudden change from one to infinity can be viewed as a phase transition. The result in Theorem 8.3 gives more details about this transition.
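The small-θ ladder in Theorem 8.3 (through its one-dimensional projection J_1) can be seen in simulation, although the logarithmic speed a(θ) makes the convergence slow. The sketch below is our own illustration (the truncation level and the thresholds 0.6 and 0.45, chosen inside [1/2, 1) and [1/3, 1/2), are arbitrary); it estimates P{P_1(θ) ≤ c} by Monte Carlo and prints log P/(−log θ), which should be roughly −1 and −2.

```python
import numpy as np

rng = np.random.default_rng(4)

theta, n_rep, n_sticks = 0.05, 50000, 50
# Truncated GEM(theta): for small theta the leftover mass has mean (theta/(1+theta))**n_sticks,
# which is negligible here, so the largest truncated weight approximates P1(theta) well.
u = rng.beta(1.0, theta, size=(n_rep, n_sticks))
log_left = np.cumsum(np.log1p(-u), axis=1)                        # log of remaining mass
prefix = np.exp(np.hstack([np.zeros((n_rep, 1)), log_left[:, :-1]]))
p1 = (u * prefix).max(axis=1)

a = -1.0 / np.log(theta)
for c, k in [(0.60, 1), (0.45, 2)]:
    prob = (p1 <= c).mean()
    # J1(c) = k on [1/(k+1), 1/k), so P{P1(theta) <= c} decays roughly like theta**k.
    print(c, prob, a * np.log(prob), "   expected about", -k)
```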
8.2.2 Two-parameter Generalization Consider the parameters α and θ in the ranges 0 < α < 1, θ + α > 0. For any δ > 0, it follows from the GEM representation for PD(α , θ ) that P V1α ,θ > 1 − δ ≤ P (P1 (α , θ ) > 1 − δ ) . By direct calculation, we have lim P V1α ,θ > 1 − δ = 1.
α +θ →0
Therefore, PD(α , θ ) converges in the space of probability measures on ∇ to δ(1,0,...) as α ∨ |θ | converges to zero. In this subsection, we establish the LDP associated with this limit. This is a generalization of Theorem 8.3.
Let a(α , θ ) = (− log(α ∨ |θ |)−1 . It is clear that a(α , θ ) converges to zero if and only if α ∨ |θ | converges to zero. Lemma 8.13. For each i ≥ 1, consider the Beta(1 − α , θ + iα ) random variable Wi in the GEM representation for PD(α , θ ). As a(α , θ ) converges to zero, the family {Wi : α + θ > 0, 0 < α < 1} satisfies an LDP on [0, 1] with speed a(α , θ ) and rate function I1 defined in (8.24). Proof. Fix i ≥ 1. Write the density function of Wi in the form of
Γ (θ + 1 + (i − 1)α ) (θ + iα )x−α (1 − x)θ +iα −1 . Γ (1 − α )Γ (θ + iα + 1) Then it follows from direct calculation that for any p in [0, 1), lim lim inf a(α , θ ) log P{|Wi − p| < δ }
δ →0 a(α ,θ )→0
= lim lim sup a(α , θ ) log P{|Wi − p| ≤ δ } δ →0 a(α ,θ )→0
=
lim
a(α ,θ )→0
a(α , θ ) log(θ + iα ) = −1.
For p = 1, lim lim inf a(α , θ ) log P{|Wi − 1| < δ }
δ →0 a(α ,θ )→0
= lim lim sup a(α , θ ) log P{|Wi − 1| ≤ δ } δ →0 a(α ,θ )→0
= lim
lim
δ →0 a(α ,θ )→0
a(α , θ ) log(1 − δ )θ +iα = 0.
The lemma now follows from Theorem B.6. 2 Lemma 8.14. The family {P1 (α , θ ) : α + θ > 0, 0 < α < 1} satisfies an LDP on [0, 1] as a(α , θ ) converges to zero with speed a(α , θ ) and rate function J1 (·) defined in (8.28). Proof. For any n ≥ 1, let Pˆn (α , θ ) = max{V1α ,θ , . . . ,Vnα ,θ }. Then by direct calculation
P{P1 (α , θ ) − Pˆn (α , θ ) > δ } ≤ P{(1 −W1 ) · · · (1 −Wn ) ≥ δ } n θ + iα ≤ δ −1 ∏ , i=1 θ + iα + 1 − α which leads to lim sup a(α , θ ) log P{P1 (α , θ ) − Pˆn (α , θ ) > δ } ≤ −n.
α ∨|θ |→0
The remainder of the proof uses arguments similar to those used in the proofs of Lemma 8.10 and Lemma 8.11. 2 Theorem 8.4. The family {Πα ,θ : α + θ > 0, 0 < α < 1} satisfies an LDP on ∇ as α ∨ |θ | converges to zero with speed a(α , θ ) and rate function J given in (8.37). Proof. It suffices to establish the LDP for the finite-dimensional marginal distributions since the infinite-dimensional LDP can be derived from the finite-dimensional LDP through approximation. For any n ≥ 2, let
ϕnα ,θ (p1 , . . . , pn ), and gαn +θ (p1 , . . . , pn ) be the respective joint density functions of (P1 (α , θ ), P2 (α , θ ), . . . , Pn (α , θ )) and (P1 (α + θ ), P2 (α + θ ), . . . , Pn (α + θ )) given in (8.20) and (8.21). Since lim
a(α ,θ )→0
a(α , θ ) log(α + θ ) = −1,
and lim
a(α ,θ )→0
a(α , θ ) logCα ,θ ,n = −n, ◦
it follows that for every (p1 , . . . , pn ) in ∇n lim
a(α ,θ )→0
a(α , θ ) log ϕnα ,θ (p1 , . . . , pn )
=
lim
a(α ,θ )→0
a(α , θ ) log gαn +θ (p1 , . . . , pn )
= −Jn (p1 , . . . , pn ). This, combined with Lemma 8.11 and Lemma 8.14, implies that the family (P1 (α , θ ), P2 (α , θ ), . . . , Pn (α , θ )) satisfies an LDP as a(α , θ ) converges to zero with speed a(α , θ ) and rate function Jn (p1 , . . . , pn ) defined in (8.30). 2
8.3 Applications
171
8.3 Applications Several applications of Theorem 8.1 and Theorem 8.3 will be considered in this section. For any n ≥ 2, let ∞
ϕn (p) = ∑ pni . i=1
Then ϕn (P(θ )) is the population homozygosity of order n defined in Section 7.3. Since ϕn (P(θ )) ≤ P1n−1 (θ ), it follows that ϕn (P(θ )) converges to zero as θ tends to infinity. Our next theorem describes the large deviations of ϕn (P(θ )) from zero. Theorem 8.5. The family {ϕn (P(θ )) : θ > 0} satisfies an LDP on [0, 1] as θ tends to infinity, with speed 1/θ and rate function I(x1/n ), where I is given by (8.2). Proof. For any n ≥ 2, the map ∞
ϕn : ∇ −→ [0, 1], p → ∑ pni i=1
is continuous. By Theorem 8.1 and the contraction principle, the family {ϕn (P(θ )) : θ > 0} satisfies an LDP on [0, 1], as θ tends to infinity, with speed 1/θ and rate function ¯ = inf{S(p) : p ∈ ∇, ϕn (p) = x}. I(x) Since for any p in ∇, we have ∞
∑ pi ≥ (ϕn (p))1/n = x1/n ,
i=1
¯ ≥ I(x1/n ). On the other hand, by choosit follows that S(p) ≥ I(x1/n ), and thus I(x) ing p = (x1/n , 0, . . .), ¯ ≤ I(x1/n ). Hence I(x) ¯ = I(x1/n ), and the result follows. one obtains I(x)
2
Remarks: (a) The LDP obtained here describes the deviations of ϕn (P(θ )) from zero. The ren−1 sult in Theorem 7.10 shows that θΓ (n) ϕn (P(θ )) converges to one in probability as θ tends to infinity. However, our original motivation was to study the large deviations n−1 of θΓ (n) ϕn (P(θ )) from one, which is still an open problem. (b) The Gaussian structure of Theorem 7.10 seems to indicate that the LDP for the n−1 family { θΓ (n) ϕn (P(θ )) : θ > 0} will hold with a speed of 1/θ ; but the following
172
8 Large Deviations for the Poisson–Dirichlet Distribution
calculations lead to a different answer. Assume that an LDP holds for the family with speed a(θ ) and a good rate function I. Then for any constant c > 0,
θ n−1 P ϕn (P(θ )) ≥ 1 + c Γ (n)
θ n−1 n U ≥ 1+c ≥P Γ (n) 1 Γ (n)(1 + c) 1/n = P U1 ≥ θ n−1 ⎡ θ (n−1)/n ⎤θ 1/n 1/n ( Γ (n)(1 + c)) ⎦ = ⎣ 1− , θ (n−1)/n
which implies that inf I(x) = 0 if lim
x≥1+c
θ →∞
a(θ ) = ∞. θ 1/n
Since c is arbitrary, I is zero over a sequence that goes to infinity, which contradicts the fact that {x : I(x) ≤ M} is compact for every positive M. Hence the LDP speed, if it exists, cannot grow faster than θ 1/n . n−1 (c) For r in [0, 1/2), the quantity θ r ( θΓ (n) ϕn (P(θ )) − 1) converges to zero in probability as θ tends to infinity. Large deviations associated with this limit for n−1 r ∈ (0, 1/2) are called the moderate deviation principle for { θΓ (n) ϕn (P(θ )) : θ > 0}. Recent work in [73] indicates that the LDP, corresponding to r = 0, may not hold n−1 for the family { θΓ (n) ϕn (P(θ )) : θ > 0}. When θ tends to zero, ϕn (P(θ )) converges to one. The LDP associated with this limit is established in the next theorem. Theorem 8.6. The family {ϕn (P(θ )) : θ > 0} satisfies an LDP on [0, 1], as θ tends to zero, with speed a(θ ) given in (8.23), and rate function ⎧ p=1 ⎪ ⎨ 0, 1 1 ˆ = k − 1, p ∈ [ n−1 , (k−1) J(p) (8.41) n−1 ), k = 2, . . . k ⎪ ⎩ ∞, p = 0. Thus in terms of large deviations, ϕn (P(θ )) behaves the same as P1n−1 (θ ). Proof. Due to Theorem 8.3 and the contraction principle, it suffices to verify that ˆ = inf{J(q) : q ∈ ∇, ϕn (q) = p} = inf{S(q) : q ∈ L, ϕn (q) = p}. J(p) For p = 1, it follows by choosing q = (1, 0, . . .) that inf{S(q) : q ∈ ∇, ϕn (q) = p} = 0.
8.3 Applications
173
For p = 0, there does not exist q in L such that ϕn (q) = p. Hence inf{S(q) : q ∈ L, ϕn (q) = p} = ∞. For any k ≥ 2, the minimum of ∑ki=1 qni over Lk is k−(n−1) , which is achieved when all qi ’s are equal. Hence for p ∈ [k−(n−1) , (k − 1)−(n−1) ), we have
ˆ inf{S(q) : q ∈ ∇, ϕn (q) = p} = k − 1 = J(p). 2
Let C(∇) be the set of all continuous functions on ∇, and λ (θ ) be a non-negative f function of θ . For every f in C(∇), define a new probability Πλ ,θ on ∇ as
Πλf ,θ (dp) =
eλ (θ ) f (p) Πθ (dp). EΠθ [eλ (θ ) f (p) ]
(8.42)
The case of f (p) = sϕ2 (p) corresponds to the symmetric selection model. If s > 0, then homozygotes have selective advantage over heterozygotes and the model is said to have underdominant selection. The case of s < 0 is the opposite of s > 0 and the model is said to have overdominant selection. The Poisson–Dirichlet distribution Πθ corresponds to s = 0. The general distribution considered here is simply a mathematical generalization to these selection models. Theorem 8.7. Let S(p) be defined as in (8.15) and assume that lim
θ →∞
λ (θ ) = c ∈ [0, +∞). θ
Then the family {Πλf ,θ : θ > 0} satisfies an LDP on ∇, as θ tends to infinity, with speed 1/θ and rate function Sc, f (p) = sup{c f (q) − S(q)} − (c f (p) − S(p)). q∈∇
Proof. By Theorem 8.1 and Theorem B.1, lim
θ →∞
λ (θ ) 1 1 log EΠθ [eλ (θ ) f (p) ] = lim log EΠθ [eθ θ f (p) ] θ →∞ θ θ = sup{c f (q) − S(q)}.
q
This, combined with the continuity of f , implies that for any p in ∇
(8.43)
174
8 Large Deviations for the Poisson–Dirichlet Distribution
1 log Πλf ,θ {d(p, q) < δ } ≥ − sup{c f (q) − S(q)} θ q
1 λ (θ ) + lim inf lim inf ( f (p) − δ ) + log Πθ {d(p, q) < δ } θ θ δ →0 θ →∞ ≥ −Sc, f (p),
lim inf lim inf δ →0
θ →∞
where δ converges to zero as δ goes to zero. Similarly we have 1 f log Πλ ,θ {d(p, q) ≤ δ } ≤ − sup{cH(q) − S(q)} q θ →∞ θ δ →0
1 λ (θ ) ( f (p) + δ ) + log Πθ {d(p, q) ≤ δ } + lim sup lim sup θ θ θ →∞ δ →0
lim sup lim sup
≤ −Sc, f (p). This, combined with the compactness of ∇ and Theorem B.6, implies the result. 2 Theorem 8.8. Assume that lim
θ →∞
λ (θ ) =∞ θ
(8.44)
and that f achieves its maximum at a single point p0 . Then the family {Πλf ,θ } satisfies an LDP on ∇, as θ tends to infinity, with speed 1/θ and rate function 0, if p = p0 (8.45) S∞, f (p) = ∞, else. Proof. Without loss of generality we assume that supp∈∇ f (p) = 0; otherwise we f can multiply both the numerator and the denominator in the definition of Πλ ,θ , by e−λ (θ ) f (p0 ) . For any p = p0 , choose δ small enough such that d1 =
sup
d(p,q)≤δ
f (q) < 2d2 = 2
inf
d(p0 ,q)≤δ
f (q) < 0.
Then by direct calculation
lim sup θ →∞
1 f log Πλ ,θ {d(p, q) ≤ δ } θ
λ (θ ) f (q) Π (dq) θ 1 {d(p,q)≤δ } e = lim sup log Πθ [eλ (θ ) f (q) ] θ E θ →∞ 1 λ (θ )(d1 −d2 ) Πθ {d(p, q) ≤ δ } ≤ lim sup log e Πθ {d(p0 , q) ≤ δ } θ →∞ θ = −∞,
8.3 Applications
175
and 1 f log Πλ ,θ {d(p0 , q) < δ } θ
λ (θ ) f (q) Ξ (dq) θ 1 {d(p0 ,q)<δ } e = lim inf log Π λ ( θ ) f (q) θ →∞ θ E θ [e ]
λ ( θ ) f (q) Π (dq) θ 1 {d(p0 ,q)≥δ } e = lim inf log 1 − θ →∞ θ EΠθ [eλ (θ ) f (q) ]
λ (θ ) f (q) Π (dq) θ 1 {d(p0 ,q)≥δ } e ≥ lim inf log 1 −
λ (θ ) f (q) Π (dq) θ →∞ θ θ {d(p0 ,q)≤δ1 } e exp{λ (θ )[e1 (δ ) − e2 (δ1 )]} 1 ≥ lim inf log 1 − θ →∞ θ Πθ {d(p0 , q) ≤ δ } = 0, by choosing small enough δ1 ,
lim inf θ →∞
where
e1 (δ ) =
sup
d(p0 ,q)≥δ
f (q), e2 (δ1 ) =
inf
d(p0 ,q)≤δ1
f (q).
Again the result follows from the compactness of ∇ and Theorem B.6.
2
Consider the overdominant selection model with f (p) = −ϕ2 (p). If λ (θ ) = o(θ ), then the selection model has the same LDP as the neutral model. At the critical scale θ , the rate function begins to depend on the selection and a phase transition occurs. The next result shows that a similar critical phenomenon also exists in the underdominant selection model. Corollary 8.9 For f (p) = φ2 (p), ⎧ ⎪ if limθ →∞ λ (θθ ) = 0 ⎨ S(p), Sc, f (p) = −c f (p) + S(p), if limθ →∞ λ (θθ ) = c ≤ c0 ⎪ ⎩ g(c) − c f (p) + S(p), if limθ →∞ λ (θθ ) = c ∈ (c0 , +∞), where, for c ≥ c0 , g(c) = log
1−
1 − 2/c 2
+c
and c0 > 2 solves the equation g(c0 ) = 0. Proof. The key step is to calculate
1+
2 1 − 2/c , 2
(8.46)
176
8 Large Deviations for the Poisson–Dirichlet Distribution
∞
∞
i=1
i=1
g(c) = sup c ∑ q2i + log(1 − ∑ qi ) . q∈∇
∞ 2 2 Write ∑∞ i=1 qi as x. Then for any given x, the maximum of ∑i=1 qi is x . Hence
g(c) = sup {cx2 + log(1 − x)}. x∈[0,1]
Let c0 satisfy log
1−
2 1 − 2/c0 1 + 1 − 2/c0 + c0 = 0. 2 2
Then it follows by direct calculation that c0 > 2, and ⎧ if c ≤ c0 ⎨ 0, √ 2 √ g(c) = 1− 1−2/c 1+ 1−2/c ⎩ log , if c > c0 . +c 2 2 2 Applying Theorem 8.3 and Theorem B.1 to when the mutation rate is small.
Πλf ,θ ,
we obtain the following result
Theorem 8.10. Let J(p) be defined as in (8.37) and a(θ ) be given as in (8.23). f For a fixed integer n ≥ 1, constant s, and f (p) = sϕn (p), the family {Πλ ,θ : θ > 0} satisfies an LDP on ∇, as θ tends to zero, with speed a(θ ) and rate function ⎧ J(p), limθ →0 λ (θ )a(θ ) = 0 ⎪ ⎪ ⎨ J(p) + sc(1 − ϕ (p)), limθ →0 λ (θ )a(θ ) = c > 0, s > 0 n Jλ ,s,n (p) = J(p) + |s|cϕ (p) n ⎪ ⎪ ⎩ − inf{ m|s|c n−1 + m − 1 : m ≥ 1}, limθ →0 λ (θ )a(θ ) = c > 0, s < 0. Proof. Theorem 8.3, combined with Varadhan’s lemma and the Laplace method, implies that the family {Πλf ,θ : θ > 0} satisfies an LDP on ∇, as θ converges to zero, with speed a(θ ) and rate function sup{scϕn (q) − J(q)} − (scϕn (p) − J(p)). q∈∇
The case c = 0 is clear. For c > 0, s > 0, sup{scϕn (q) − J(q)} q∈∇
achieves its maximum sc at q = (1, 0, . . .). For c > 0 and s < 0,
8.3 Applications
177
sup{scϕn (q) − J(q)} = − inf {|s|cϕn (q) + J(q)} = − inf
m≥1
q∈∇
q∈∇
|s|c +m−1 . mn−1 2
It is clear from the theorem that selection has an impact on the rate function only when the selection intensity λ (θ ) is proportional to a(θ )−1 . Consider the case of λ (θ ) = a(θ )−1 . Then for s > 0, the homozygote has selective advantage, and the small-mutation-rate limit is (1, 0, . . .) and diversity disappears. The energy Jλ ,s,n (p) needed for a large deviation from (1, 0, . . .) is larger than the neutral energy J(p). For s < 0, heterozygotes have selection advantage. Since Jλ ,s,n (·) may reach zero at a point that is different from (1, 0, . . .), several alleles can coexist in the population when the selection intensity goes to infinity and θ tends to zero. A concrete case of allele coexistence is included in the next result. Corollary 8.11 Let
n = 2, λ (θ ) = 2a(θ )−1 , ⎛
and lk =
⎞
⎜ 1 ⎟ 1 k(k + 1) , pk = ⎜ ,..., , . . .⎟ ⎝ ⎠ ∈ ∇∞ , k = 0, 1, . . . . 2 k + 1 ! k + 1" k+1
Assume that s < 0. Then for −lk+1 < s < −lk , k ≥ 0, the equation Jλ ,s,2 (p) = 0 has a unique solution pk . For k ≥ 1 and s = −lk , the equation Jλ ,s,2 (p) = 0 has two solutions pk−1 and pk . Proof. For any m ≥ 1, let Nm =
2|s| + m − 1. m
Then Nm − Nm+1 =
2|s| − m(m + 1) , m(m + 1)
and for any k ≥ 0 and s in (−lk+1 , −lk ), Nm − Nm+1 > 0 for m ≤ k, Nm − Nm+1 < 0 for m ≥ k + 1.
Hence inf
m≥1
2|s| + m − 1 = inf {|s|cϕn (q) + J(q)} = Nk+1 m q∈∇
178
8 Large Deviations for the Poisson–Dirichlet Distribution
and is attained at the unique point q = pk . In other words, pk is the unique solution to the equation Jλ ,s,2 (p) = 0. For k ≥ 1 and s = −lk , we have Nm − Nm+1 > 0 for m < k, Nk − Nk+1 = 0, Nm − Nm+1 < 0 for m ≥ k + 1, which shows that
inf {|s|cϕn (q) + J(q)} = Nk = Nk+1
q∈∇
and is attained at pk−1 and pk .
2
Remark: It is worth noting that {lk : k ≥ 0} are the death rates of Kingman’s coalescent.
8.4 Notes The motivations for the study of large deviations for the Poisson–Dirichlet distribution come from the work in Gillespie [87], where simulations were done for several models to study the role of population size in population-genetics models of molecular evolution. One of the models is an infinite-alleles model with selective overdominance or heterozygote advantage. It was observed and conjectured that if the selection intensity and the mutation rate get large at the same speed, the behavior looks like that of a neutral model. A rigorous proof of this conjecture was obtained in [119] through the study of Gaussian fluctuations. The results in Theorem 8.1 and Theorem 8.7 provide an alternate proof of this conjecture. The material in Section 8.1.1 comes from [23]. The results in Section 8.1.2 are based on the work in [71]. The LDP for a small mutation rate in Section 8.2.1 can be found in [72]. The LDP in Section 8.2.2 is from [74]. The underdominant and overdominant distributions are special cases of Theorem 4.4 in [65]. Corollary 8.9 is from [117], where the infinitely many alleles model with homozygote advantage was studied. In [73], moderate deviation principles were established for both the Poisson–Dirichlet distribution and the homozygosity. These results are generalized to the two-parameter setting in [74].
Chapter 9
Large Deviations for the Dirichlet Processes
The Dirichlet process and its two-parameter counterpart are random, purely atomic probabilities with masses distributed according to the Poisson–Dirichlet distribution and the two-parameter Poisson–Dirichlet distribution, respectively. The order by size of the masses does not matter, and the GEM distributions can then be used in place of the corresponding Poisson–Dirichlet distributions in the definition. When θ increases, these masses spread out evenly among the support of the type’s measure. Eventually the support is filled up and the types measure emerges as the deterministic limit. This resembles the behavior of empirical distributions for large samples. The focus of this chapter is the LDPs for the one- and two-parameter Dirichlet processes when θ tends to infinity. Relations of these results to Sanov’s theorem will be discussed. For simplicity, the space is chosen to be E = [0, 1] in this chapter even though the results hold on any compact metric space. The diffuse probability ν0 on E will be fixed throughout the chapter.
9.1 One-parameter Case Recall that M1 (E) denotes the space of probability measures on E equipped with the weak topology. Recall that the Dirichlet process with parameter θ is a random measure defined as ∞
Ξθ ,ν0 = ∑ Pi (θ )δξi , i=1
where P(θ ) = (P1 (θ ), P2 (θ ), . . .) has the Poisson–Dirichlet distribution with parameter θ , and independent of P(θ ), ξ1 , ξ2 , . . . are i.i.d. with common distribution ν0 . The law of Ξθ ,ν0 is denoted by Πθ ,ν0 . Theorem 9.1. As θ tends to infinity, Ξθ ,ν0 converges in probability to ν0 , in the space M1 (E). Proof. Let C(E) be the space of continuous functions on E equipped with the topology of uniform convergence. The compactness of E guarantees existence of a countS. Feng, The Poisson–Dirichlet Distribution and Related Topics, Probability and its Applications, DOI 10.1007/978-3-642-11194-5 9, © Springer-Verlag Berlin Heidelberg 2010
179
180
9 Large Deviations for the Dirichlet Processes
able dense subset { fi : i ≥ 1} of C(E). For any μ , ν in M1 (E), define ∞
|μ − ν , fi | ∧ 1 . 2i i=1
dw ( μ , ν ) = ∑
Then dw is a metric generating the weak topology on M1 (E). For any f in C(E), E[Ξθ ,ν0 , f ] = ν0 , f , and, by the Ewens sampling formula, E[(Ξθ ,ν0 , f )2 ] = ν0 , f 2 E[ϕ2 (P(θ ))] + ν0 , f 2 E =
1 θ 2 ν0 , f 2 . ν0 , f + θ +1 θ +1
∑ Pi (θ )Pj (θ )
i= j
Thus for any δ > 0, r ≥ 1 satisfying 21−r < δ , 2i−1 δ P | Ξ − ν , f | ≥ 0 i θ ,ν0 ∑ r i=1 r 2 r E[(Ξθ ,ν0 − ν0 , fi )2 ] ≤ ∑ i−1 2 δ i=1 1 r r 2 = ∑ 2i−1 δ [ν0 , fi2 − ν0 , fi 2], θ + 1 i=1
P{dw (Ξθ ,ν0 , ν0 ) > δ } ≤
r
which leads to the result. 2 Next, the LDP for the family {Ξθ ,ν0 : θ > 0} is established through a series of lemmas. Lemma 9.1. For any n ≥ 1, let B1 , . . . , Bn be a measurable partition of the set E satisfying pi = ν0 (Bi ) > 0, i = 1, . . . , n. Then the family {(Ξθ ,ν0 (B1 ), . . . , Ξθ ,ν0 (Bn )) : θ > 0} satisfies an LDP on the space Δn defined in (5.1), as θ tends to infinity, with speed 1/θ and rate function n
H(p|q) = ∑ pi log i=1
pi . qi
Proof. First note that by Theorem 2.24, the distribution of (Ξθ ,ν0 (B1 ), . . . , Ξθ ,ν0 (Bn )) is Dirichlet(θ p1 , . . . , θ pn ).
(9.1)
9.1 One-parameter Case
181
For any Borel measurable subset C of Δn , by Stirling’s formula log P{(Ξθ ,ν0 (B1 ), . . . , Ξθ ,ν0 (Bn )) ∈ C}
√ α 1 2π (θ )θ − 2 e 12θ = log √ α1 αn 1 1 ( 2π )n (θ p1 )θ p1 − 2 e 12θ p1 · · · (θ pn )θ pn − 2 e 12θ pn θ p1 −1
×
=
C
q1
(9.2)
· · · qnθ pn −1 dq1 · · · dqn−1
1 1 n−1 α1 αn 1 (θ p1 ) · · · (θ pn ) + log + α− −···− log 2 2π 2 θ 12θ p1 pn n
− θ ∑ pi log pi + log
i=1
C
qθ1 p1 −1 · · · qnθ pn −1 dq1 · · · dqn−1 ,
where 0 < α , α1 , . . . , αn < 1 are some constants. For any ε > 0, let Cε = {q ∈ C : min1≤i≤n qi ≥ ε }. For any measurable function f on Δn , || f ||Lθ denotes the Lθ norm of f with respect to the Lebesgue measure m on Δn . Choosing θ large enough that min1≤i≤n {θ pi } > 1, then n ||χC e∑i=1 pi log qi ||Lθ
≤
C
= C
qθ1 p1 · · · qθn pn dq1 · · · dqn−1
qθ1 p1 −1 · · · qθn pn −1 dq1 · · · dqn−1
1/θ
1/θ (9.3)
= ||χC e∑i=1 (pi −1/θ ) log qi ||Lθ n
≤ ||χCε e∑i=1 pi log qi ||Lθ ε −n/θ + m(C \Cε ) n
≤ ||χC e∑i=1 pi log qi ||Lθ ε −n/θ + m(C \Cε ). n
Letting θ → ∞, then ε → 0, we obtain γ n θ p1 −1 θ pn −1 lim q1 · · · qn dq1 · · · dqn−1 = ess sup{χC e∑i=1 pi log qi : q ∈ Δn }. θ →∞
C
(9.4)
For any subset B of Δ n , we have ess sup{χB e∑i=1 θ pi log qi : q ∈ Δ n } ≤ e n
− infq∈B ∑ni=1 pi log q1
i
.
(9.5)
and for any open subset G of Δ n , ess sup{ χG e∑i=1 θ pi log qi : q ∈ Δ n } = ess sup{e∑i=1 θ pi log qi : q ∈ G} n
=e
n
− infq∈G ∑ni=1 θ pi log q1 i
(9.6)
.
Putting together (9.2), (9.4), and (9.5) yields that for any closed subset B of Δ n
182
9 Large Deviations for the Dirichlet Processes
lim sup θ →∞
1 log P{(Ξθ ,ν0 (B1 ), . . . , Ξθ ,ν0 (Bn )) ∈ B} ≤ − inf H(p|q), x∈B θ
(9.7)
while (9.6), combined with (9.2) and (9.4), implies that for any open subset G of Δn
lim inf θ →∞
1 log P{(Ξθ ,ν0 (B1 ), . . . , Ξθ ,ν0 (Bn )) ∈ G} ≥ − inf H(p|q). x∈G θ
(9.8)
Finally, by continuity, the level set {q ∈ Δ n : H(p|q) ≤ c} is compact for any c ≥ 0. 2 Remark: The rate function H(p | q) is the relative entropy of p with respect to q, defined in (B.10). If pi = 0 for some 1 ≤ i ≤ n, then Ξθ ,ν0 (Ai ) ≡ 0. By treating 0 log 00 as zero, the result of Theorem 9.1 can be generalized to cover these degenerate cases. Let P be the collection of all finite measurable partitions of E by Borel measurable sets. For any μ ∈ M1 (E), ι = {B1 , . . . , Br } ∈ P, define
πι ( μ ) = (μ (B1 ), . . . , μ (Br )). For any μ , ν ∈ M1 (E), the relative entropy H(μ |ν ) of μ with respect to ν can be written as 1 1 (9.9) H(μ |ν ) = sup gd μ − log eg d ν g∈C(E)
0
= sup 0
g∈B(E)
0
1
gd ν − log
1 0
e dν , g
where B(E) is the set of bounded measurable functions on E. Lemma 9.2. For any μ , ν ∈ M1 (E), H(μ |ν ) = sup H(πι (μ )|πι (ν )). ι ∈P
(9.10)
Proof. If μ is not absolutely continuous with respect to ν , then both sides of (9.10) are infinite. Now we assume that φ (x) = dd μν (x) exists. Then for any ι = (B1 , . . . , Br ) ∈ P, consider the function g ∈ B([0, 1]) defined as g(z) =
∑
log
i:μ (Bi )>0
Clearly H(πι (μ )|πι (ν )) =
1 0
μ (Bi ) χB (z). ν (Bi ) i
g(z)d μ − log
1 0
eg(z) d ν .
9.1 One-parameter Case
183
By (9.9), H(πι (μ )|πι (ν )) ≤ H(μ |ν ) which implies that sup H(πι (μ )|πι (ν )) ≤ H(μ |ν ).
ι ∈P
On the other hand, for any n ≥ 1, let
φn (x) =
n2n
∑ (k − 1)/2n χBk (x) + nχAn (x),
k=1
where Bk = {z : (k − 1)/2n ≤ φ (z) < k/2n }, An = {z : φ (z) ≥ n}. Then φn converges point-wise to φ . Let = {B1 , . . . , Bn2n , An }. Then sup H(πι (μ )|πι (ν )) ≥ H(π (μ )|π (ν )) ≥
ι ∈P
1 0
φn log φn d ν .
Letting n go to infinity, (9.10) follows from (B.10) and the monotone convergence theorem. 2 Let Eν = {x ∈ E : ν ({x}) = 0}. Then Lemma 9.2 can be modified to get: Lemma 9.3. For any μ , ν ∈ M1 (E), H(μ |ν ) =
sup
x1 <x2 <···<xk ∈Eν ,k≥1
H( μ x1 ,...,xk |ν x1 ,...,xk ),
(9.11)
where
μ x1 ,...,xk = (μ ([0, x1 )), μ ([x1 , x2 )), . . . , μ (xk , 1]), ν x1 ,...,xk = (ν ([0, x1 )), ν ([x1 , x2 )), . . . , ν (xk , 1]). Proof. By Lemma 9.2, H(μ |ν ) ≥
sup
x1 <x2 <···<xk ∈Eν ,k≥1
H( μ x1 ,...,xk |ν x1 ,...,xk ).
(9.12)
On the other hand, the representation (9.9) guarantees that for any ε > 0 there is a continuous function g on E such that g H(μ |ν ) ≤ gd μ − log e dν + ε . Now choose x1n < x2n · · · < xknn in Eν such that lim
max
n [|xi+1 − xin | +
n→∞ i=0,...,kn −1
max
n ] x,y∈[xin ,xi+1
|g(x) − g(y)|] = 0.
This choice is possible because Eν is a dense subset of E. Let x0n = 0. Then
184
9 Large Deviations for the Dirichlet Processes kn
n H(μ |ν ) ≤ ∑ g(xin )μ ([xin , xi+1 )) + g(xknn )μ ([xknn , 1]) i=0
− log
kn −1
∑e
i=0
≤
g(xin )
ν ([xknn , 1]) + ε + δn (g)
kn
n )) + αkn μ ([xknn , 1]) ∑ αi μ ([xin, xi+1
sup
αi ,i=0,...,kn
i=0
− log
n ν ([xin , xi+1 )) + e
g(xknn )
kn −1
n )) + eαkn ν ([xknn , 1]) ∑ eαi ν ([xin , xi+1
+ ε + δn (g)
i=0 n
n
n
n
= H(μ x1 ,...,xkn |ν x1 ,...,xkn ) + ε + δn (g), where δn (g) converges to zero as n goes to infinity. As n goes to infinity and ε goes to zero, we get that H(μ |ν ) ≤
sup
x1 <x2 <···<xk ∈Eν ,k≥1
H( μ x1 ,...,xk |ν x1 ,...,xk ),
which, combined with (9.12), implies the result. 2 Theorem 9.2. The family {Ξθ ,ν0 : θ > 0} satisfies an LDP on M1 (E), as θ tends to infinity, with speed 1/θ and rate function ˆ ν ) = H(ν0 |ν ), ν ∈ M1,ν0 (E) (9.13) H( ∞, else, where M1,ν0 (E) = {ν ∈ M1 (E) : the support of ν is contained in the support of ν0 }. Proof. For any δ > 0, and ν in M1 (E), let ¯ ν , δ ) = {μ ∈ M1 (E) : dw (μ , ν ) ≤ δ }, B(ν , δ ) = {μ ∈ M1 (E) : dw ( μ , ν ) < δ }, B( with dw as in the proof of Theorem 9.1. Since the weak topology on M1 (E) is generated by the family {μ ∈ M1 (E) : f ∈ C(E), x ∈ R, ε > 0, |μ , f − x| < ε }, one can find f 1 , . . . , f m in C(E) and ε > 0 such that { μ ∈ M1 (E) : | μ , f j − ν , f j | < ε : j = 1, . . . , m} ⊂ B(ν , δ ). Let M = sup{| f j (x)| : x ∈ E, j = 1, . . . , m},
9.1 One-parameter Case
185
and choose x1 , . . . , xk ∈ Eν such that sup{| f j (x) − f j (y)| : x, y ∈ [xi , xi+1 ], i = 0, 1, . . . , k, xk+1 = 1; j = 1, . . . , m} < ε /4. Choose 0 < δ1 <
ε , 2(k+1)M
and define
Ox1 ,...,xk (ν , δ1 ) = {μ ∈ M1 (E) : |μ ([xk , 1]) − ν ([xk , 1])| < δ1 , |μ ([xi , xi+1 )) − ν ([xi , xi+1 ))| < δ1 , i = 0, . . . , k − 1}, it follows that for any μ in Ox1 ,...,xk (ν , δ1 ) and any f j , one has f j (x)(μ (dx) − ν (dx)) |μ , f j − ν , f j | = [x ,1] k
k−1
+∑
i=0 [xi ,xi+1 )
<
f j (x)(μ (dx) − ν (dx))
k ε + ∑ | f j (xi )|δ1 < ε , 2 i=0
which implies that Ox1 ,...,xk (ν , δ1 ) ⊂ {μ ∈ M1 (E) : |μ , f j − ν , f j | < ε : j = 1, . . . , m} ⊂ B(ν , δ ). Introduce the map F : M1 (E) → Δ k+1 , μ → (μ ([0,t1 )), . . . , μ ([tk , 1])). Then Πθ ,ν0 ◦ F −1 is a Dirichlet distribution with parameters
θ (ν0 ([0, x1 )), . . . , ν0 ([xk , 1])). γ For any ν in M1,ν0 (E), it follows from Theorem 9.1 that ˆ ν ) ≤ −H(ν x1 ,...,xk |ν x1 ,...,xk ) − H( 0 1 ≤ lim inf log Πθ ,ν0 {Ox1 ,...,xk (ν , δ1 )} θ →∞ θ 1 ≤ lim inf log Πθ ,ν0 {B(ν , δ )}. θ →∞ θ
(9.14)
Letting δ go to zero, we obtain ˆ ν ) ≤ lim lim inf − H( δ →0 θ →∞
1 log Πθ ,ν0 {B(ν , δ )}. θ
If ν does not belong to the set M1,ν0 (E), then (9.15) holds trivially.
(9.15)
186
9 Large Deviations for the Dirichlet Processes
On the other hand, for any x1 , . . . , xk in Eν , the boundary points of sets [0, x1 ), [x2 , x3 ), . . . , [xk , 1] consist of x1 , . . . , xk only and they all have ν -measure zero. Hence the map F is continuous at ν . This implies that for any δ2 > 0, there exists δ > 0 such that ¯ ν , δ ) ⊂ Ox1 ,...,x (ν , δ2 ). B( k Set O¯ x1 ,...,xk (ν , δ2 ) = {μ ∈ M1 (E) : |μ ([xk , 1]) − ν ([xk , 1])| ≤ δ1 , |μ ([xi , xi+1 )) − ν ([xi , xi+1 ))| ≤ δ1 , i = 0, . . . , k − 1}. Then we have 1 ¯ ν , δ )} log Πθ ,ν0 {B( θ 1 ≤ lim lim sup log Πθ ,ν0 {O¯ t1 ,...,tk (ν , δ2 )}. δ →0 θ →0∞ θ
lim lim sup
δ →0 θ →∞
(9.16)
By letting δ2 go to zero and applying Theorem 9.1 again, we obtain lim lim sup
δ →0 θ →0∞
1 ¯ ν , δ )} ≤ −H θx1 ,...,xk (ν x1 ,...,xk ). log Πθ ,ν0 {B( ν0 θ
(9.17)
Finally, it follows by taking the supremum over the set Eν and applying Lemma 9.3 that 1 ¯ ν , δ )} ≤ −H( ˆ ν ), lim lim sup log Πθ ,ν0 {B( (9.18) δ →0 θ →∞ θ which, combined with (9.15), implies the theorem. 2 Remark: If the empirical distribution converges to ν0 in Sanov’s theorem, then the rate function for the corresponding LDP is given by H(·|ν0 ). Consider Ξθ ,ν0 as the empirical distribution weighted according to the Poisson–Dirichlet distribution. The limit is still ν0 when θ tends to infinity but the rate function for the corresponding LDP is given H(ν0 |·), which is a dual to the rate function in Sanov’s theorem. Corollary 9.3 Let F be a continuous function on M1 (E), and set
ΠθF,ν0 (d ν ) = C exp[θ F(ν )]Πθ ,ν0 (d ν ),
where C > 0 is the normalizing constant ( M1 (E) exp[θ F(ν )]Πθ ,ν0 (d ν ))−1 . Then the family {ΠθF,ν0 } satisfies an LDP on the space M1 (E), as θ tends to infinity, with speed 1/θ and rate function
9.2 Two-parameter Case
187
Hˆ F (ν ) =
ˆ μ )} − (F(ν ) − H( ˆ ν )). sup {F(μ ) − H(
μ ∈M1 (E)
Proof. By Theorem B.1, 1 logC θ 1 = − lim log eθ F(μ ) Πθ ,ν0 (d μ ) θ →∞ θ ˆ μ )}. = − sup{F(μ ) − H(
lim
θ →∞
(9.19)
μ
For any ν ∈ M1 (E), it follows from direct calculation that lim lim inf
δ →0 θ →∞
1 log θ
B(ν ,δ )
eθ F(μ ) Πθ ,ν0 (d μ )
1 log Πθ ,ν0 {B(ν , δ )} δ →0 θ →∞ θ 1 ¯ ν , δ )} = F(ν ) + lim lim sup log Πθ ,ν0 {B( δ →0 θ →∞ θ 1 eθ F(μ ) Πθ ,ν0 (d μ ) = lim lim sup log ¯ ν ,δ ) δ →0 θ →∞ θ B( ˆ ν ), = F(ν ) − H( = F(ν ) + lim lim inf
which combined with (9.19) implies 1 log ΠθF,ν0 {dw (μ , ν ) < δ } δ →0 θ →∞ θ 1 = lim lim sup log ΠθF,ν0 {dw ( μ , ν ) ≤ δ } = −Hˆ F (ν ). δ →0 θ →∞ θ lim lim inf
Since M1 (E) is compact, using Theorem B.6 again, we get the result.
2
9.2 Two-parameter Case The LDPs in this section are under the limit of θ tending to infinity. The parameters α and θ will thus be in the range of 0 < α < 1, θ > 0. For the given diffuse probability ν0 in M1 (E), let Ξθ ,α ,ν0 be the two-parameter Dirichlet process defined in (3.2) with the law of Ξθ ,α ,ν0 denoted by Πα ,θ ,ν0 . Without loss of generality, the support of the ν0 is assumed to be the whole space E. The next result is the two-parameter analog to Theorem 9.1. Theorem 9.4. As θ tends to infinity, Ξθ ,α ,ν0 converges in probability to ν0 in space M1 (E).
188
9 Large Deviations for the Dirichlet Processes
Proof. For any f in C(E), E[Ξθ ,α ,ν0 , f ] = ν0 , f , and, by the Pitman sampling formula, E[(Ξθ ,α ,ν0 , f ) ] = ν0 , f E[ϕ2 (P(α , θ ))] + ν0 , f E 2
2
=
2
1−α θ +α ν0 , f 2 . ν0 , f 2 + θ +1 θ +1
∑ Pi (α , θ )Pj (α , θ )
i= j
The remaining steps of the proof follow those in Theorem 9.1. 2 Let {σt : t ≥ 0}, {γt : t ≥ 0, }, and σα ,θ be defined as in Proposition 3.7. For notational convenience, the constant C in the L´evy measure of subordinator {σt : t ≥ 0} is chosen to be one in the sequel. The following lemma is the two-parameter generalization of Theorem 2.24. Lemma 9.4. Let
γ (α , θ ) =
αγ ( α1 ) . Γ (1 − α )
(9.20)
For any n ≥ 1, and any 0 < t1 < · · · < tn = 1, let A1 = [0,t1 ], Ai = (ti−1 ,ti ], i = 2, . . . , n be a partition of E. For simplicity, the partition A1 , . . . , An will be associated with t1 , . . . ,tn . Set ai = ν0 (Ai ), i = 1, . . . , n. Introduce the process Yα ,θ (t) = σ (γ (α , θ )t),t ≥ 0. Then (Ξθ ,α ,ν0 (A1 ), . . . , Ξθ ,α ,ν0 (An )) has the same distribution as Yα ,θ (∑nj=1 a j ) −Yα ,θ (∑n−1 Yα ,θ (a1 ) j=1 a j ) ,..., . σα ,θ σα ,θ Proof. This follows from the subordinator representation for PD(α , θ ) given in Proposition 3.7. 2 Let Zα ,θ (t) =
Yα ,θ (t) , θ
Zα ,θ (t1 , . . . ,tn ) =
Zα ,θ (a1 ), . . . , Zα ,θ
n
∑ aj
j=1
− Z α ,θ
n−1
∑ aj
j=1
.
9.2 Two-parameter Case
189
By direct calculation, one has
ϕ (λ ) = log E[eλ σ1 ] ∞
= 0
(eλ x − 1)x−(α +1) e−x dx
Γ (1−α ) =
∞,
α
(9.21)
[1 − (1 − λ )α ], λ ≤ 1 else
and 1 L(λ ) = lim log E[eλ γ1 ] θ →∞ θ log( 1−1 λ ), λ < 1 = ∞, else.
(9.22)
For any real numbers λ1 , . . . , λn , 1 log E[exp{θ (λ1 , . . . , λn ), Zα ,θ (t1 , . . . ,tn )}] θ n α ai 1 γ (1/ α ) ) = log E ∏(Eγ (1/α ) [exp{λi σ1 }] Γ (1−α ) θ i=1
n αν0 (Ai ) 1 1 = log E exp (9.23) ϕ (λi ) γ ∑ θ α i=1 Γ (1 − α ) n 1 α → Λ (λ1 , . . . , λn ) = L ∑ ν0 (Ai )ϕ (λi ) , θ → ∞. α Γ (1 − α ) i=1 For (y1 , . . . , yn ) in Rn+ , set Jt1 ,..,tn (y1 , . . . , yn )
=
=
sup
(λ1 ,...,λn )∈Rn
sup
n
∑ λi yi − Λ (λ1 , . . . , λn )
i=1
λ1 ,...,λn ∈(−∞,1]n
n
1 ∑ λi yi + α log i=1
n
∑ ν0 (Ai )(1 − λi )
(9.24) α
.
i=1
Theorem 9.5. The family {Zα ,θ (t1 , . . . ,tn ) : θ > 0} satisfies an LDP on the space Rn+ , as θ tends to infinity, with speed 1/θ and good rate function (9.24). Proof. First note that both of the functions ϕ and L are essentially smooth. Let DΛ = {(λ1 , . . . , λn ) : Λ (λ1 , . . . , λn ) < ∞}, DΛ◦ = interior of DΛ .
190
9 Large Deviations for the Dirichlet Processes
It follows from (9.22) and (9.23) that
DΛ =
α (λ1 , . . . , λn ) : ∑ ν0 (Ai ) ϕ (λi ) < 1 . Γ (1 − α ) i=1 n
The fact that ν0 has support E implies that ν0 (Ai ) > 0 for i = 1, . . . , n, and
DΛ =
n
(λ1 , . . . , λn ) : ∑ ν0 (Ai )[1 − (1 − λi )α ] < 1 i=1
= {(λ1 , . . . , λn ) : λi ≤ 1, i = 1, . . . , n} \ {(1, . . . , 1)}, DΛ◦
= {(λ1 , . . . , λn ) : λi < 1, i = 1, . . . , n}.
Clearly the function Λ is differentiable on DΛ◦ and grad(Λ )(λ1 , . . . , λn ) n 1 α = L ∑ ν0 (Ai )ϕ (λi ) (ν0 (A1 )ϕ (λ1 ), . . . , ν0 (An )ϕ (λn )). Γ (1 − α ) Γ (1 − α ) i=1 If a sequence (λ1m , . . . , λnm ) in DΛ◦ converges to a boundary point of DΛ◦ as m converges to infinity, then at least one coordinate of the sequence approaches one. Since the interior of {λ : ϕ (λ ) < ∞} is (−∞, 1) and ϕ is essentially smooth, it follows that Λ is essentially smooth. The theorem then follows from Theorem B.8. 2 For (y1 , . . . , yn ) in Rn+ and (x1 , . . . , xn ) in E n , define
1 (y1 , . . . , yn ), ∑nk=1 yk > 0 n F(y1 , .., yn ) = ∑k=1 yk (0, . . . , 0), (y1 , . . . , yn ) = (0, . . . , 0) and It1 ,..,tn (x1 , . . . , xn ) = inf{Jt1 ,..,tn (y1 , . . . , yn ) : F(y1 , . . . , yn ) = (x1 , . . . , xn )}.
(9.25)
Clearly n
It1 ,...,tn (x1 , . . . , xn ) = +∞, if
∑ xk = 1.
k=1
For (x1 , . . . , xn ) in Rn+ satisfying ∑nk=1 xk = 1, we have It1 ,..,tn (x1 , . . . , xn )
n
= inf Jt1 ,..,tn (ax1 , . . . , axn ) : a =
= inf
sup
(λ1 ,...,λn )∈(−∞,1]n
∑ yk > 0
k=1 n
1 a ∑ λi xi + log α i=1
n
∑ ν0 (Ai )(1 − λi )α
i=1
:a>0 .
9.2 Two-parameter Case
191
Further calculations yield It1 ,..,tn (x1 , . . . , xn ) = inf sup
(λ1 ,...,λn )∈(−∞,1]n
a − log a
(9.26)
n 1 log ∑ ν0 (Ai )[a(1 − λi )]α : a > 0 , α i=1 i=1
n n 1 log ∑ ν (Ai )τiα − ∑ τi xi = inf{a − log a : a > 0} + sup (τ1 ,...,τn )∈Rn+ α i=1 i=1
n n 1 = sup log ∑ ν0 (Ai )τiα + 1 − ∑ τi xi . (τ1 ,...,τn )∈Rn+ α i=1 i=1 n
− ∑ a(1 − λi )xi +
Theorem 9.6. The family (Ξθ ,α ,ν0 (A1 ), . . . , Ξθ ,α ,ν0 (An )) satisfies an LDP on the space E n , as θ tends to infinity, with speed 1/θ and rate function ⎧ 1 n ⎨ sup(τ1 ,...,τn )∈Rn+ { α log[∑i=1 ν0 (Ai )τiα ] n It1 ,..,tn (x1 , . . . , xn ) = +1 − ∑i=1 τi xi }, ∑nk=1 xk = 1 (9.27) ⎩ ∞, else. Proof. Since Jt1 ,...,tn (0, . . . , 0) = ∞, the function F is continuous on the effective domain of Jt1 ,..,tn . The theorem then follows from Lemma 9.4 and the contraction principle (Theorem B.2). 2 Remark: Since the effective domain of It1 ,..,tn (x1 , . . . , xn ) is contained in Δ n , by Theorem B.3, the result in Theorem 9.6 holds with E n being replaced by Δ n . For each μ in M1 (E), define 1 1 1 I α (μ ) = sup log ( f (x))α ν0 (dx) + 1 − f (x)μ (dx) , (9.28) 0 0 f ≥0, f ∈C(E) α Lemma 9.5. For any μ in M1 (E), I α (μ ) =
sup
{It1 ,..,tn (μ (A1 ), . . . , μ (An ))}.
(9.29)
0
Proof. It follows from Tietze’s extension theorem and Luzin’s theorem, that the set C(E) can be replaced with B(E) in the definition of I α . This implies that I α (μ ) ≥ sup{It1 ,..,tn ( μ (A1 ), . . . , μ (An )) : 0 < t1 < t2 < · · · < tn = 1; n = 1, 2, . . .}. On the other hand, for each non-negative f in C(E), let i ti = , τi = f (ti ), i = 1, . . . , n. n
192
9 Large Deviations for the Dirichlet Processes
Then 1 1 1 log ( f (x))α ν0 (dx) − f (x)μ (dx) α 0 0
n n 1 α = lim log ∑ ν0 (Ai )τi − ∑ τi μ (Ai ) n→∞ α i=1 i=1 ≤ sup{It1 ,..,tn (μ (A1 ), . . . , μ (An )) : 0 < t1 < t2 < · · · < tn = 1; n = 1, 2, . . .}, which implies I α (μ ) ≤ sup{It1 ,..,tn ( μ (A1 ), . . . , μ (An )) : 0 < t1 < t2 < · · · < tn = 1; n = 1, 2, . . .}. 2 Remark: It follows from the proof of Lemma 9.5 that the supremum in (9.29) can be taken over all partitions with t1 , . . . ,tn−1 being continuity points of μ . By monotonically approximating non-negative f (x) with strictly positive functions from above, it follows from the monotone convergence theorem that the supremum in (9.28) can be taken over strictly positive bounded functions; i.e., 1 1 1 I α (μ ) = sup log ( f (x))α ν0 (dx) + 1 − f (x)μ (dx) , (9.30) 0 0 f >0, f ∈C(E) α We are now ready to prove the main result of this section. Theorem 9.7. The family {Ξθ ,α ,ν0 : θ > 0} satisfies an LDP, as θ tends to infinity, with speed 1/θ and rate function I α . Proof. Let { f j (x) : j = 1, 2, . . .} be the countable dense (in the supremum norm) subset of C(E) used in the definition of the metric dw on M1 (E). Since M1 (E) is compact, it suffices to show that 1 log P{dw ( μ , ν ) < δ )} δ →0 θ →∞ θ 1 = lim lim sup log P{dw ( μ , ν ) ≤ δ )} δ →0 θ →∞ θ = −I α (μ ). lim lim inf
(9.31)
Choose a partition t1 , . . . ,tn and m large enough so that {ν ∈ M1 (E) : |ν , f j − μ , f j | < δ /2 : j = 1, . . . , m} ⊂ {dw (ν , μ ) < δ }. and For δ1 <
sup{| f j (x) − f j (y)| : x, y ∈ Ai , i = 1, . . . , n; j = 1, . . . , m} < δ /8. δ 4n ,
define
9.3 Comparison of Rate Functions
193
Vt1 ,...,tn (μ , δ1 ) = {(y1 , . . . , yn ) ∈ Δ n : |yi − μ (Ai )| < δ1 , i = 1, . . . , n}. Introduce the map
Ψt1 ,...,tn : M1 (E) → Δn , ν → (ν (A1 ), . . . , ν (An )). Then clearly
Ψt1−1 ,...,tn (Vt1 ,...,tn ( μ , δ1 )) ⊂ {dw (ν , μ ) < δ }. Next consider partitions t1 , . . . ,tn such that t1 , . . . ,tn−1 are continuity points of μ . Since Ψt1 ,...,tn is continuous at μ , for any δ2 > 0, one can choose δ > 0 small enough such that {dw (ν , μ ) ≤ δ } ⊂ Ψt1−1 ,...,tn {Vt1 ,...,tk ( μ , δ2 )}. Following an argument similar to that used in the proof of Theorem 9.2, and taking into account the remark after Lemma 9.5, we obtain (9.31). 2
9.3 Comparison of Rate Functions The large mutation LDPs for Πθ and PD(α , θ ) have the same rate function that is independent of the parameter α ; but rate function, I α , for the LDP of Ξθ ,α ,ν0 does depend on α . Since the value of a rate function, with a unique zero point, describes the difficulty for the underlying random element to deviate from the zero point, it is natural to compare the rate functions of Ξθ ,α ,ν0 and Ξθ ,ν0 . The main question is whether the LDP for the two-parameter Dirichlet process is consistent with the LDP for the one-parameter Dirichlet process in terms of the convergence of corresponding rate functions when α converges to zero. For any μ in M1 (E), let 1 1 I 0 (μ ) = sup log f (x)ν0 (dx) + 1 − f (x)μ (dx) . (9.32) f >0, f ∈B(E)
0
0
The next result shows that I 0 is the rate function for the LDP of Ξθ ,ν0 when θ tends to infinity. Lemma 9.6.
ˆ μ ) = H(ν0 |μ ). I 0 (μ ) = H(
(9.33)
Proof. If ν0 is not absolutely continuous with respect to μ , then H(ν0 | μ ) = +∞. Let A be a set such that μ (A) = 0, ν0 (A) > 0 and define m, x ∈ A fm (x) = 1, else.
194
9 Large Deviations for the Dirichlet Processes
Then
I 0 (μ ) ≥ ν0 (A) log m → ∞ as m → ∞.
Next we assume ν0 μ and denote H(ν0 |μ ) =
1 0
d ν0 d μ (x)
by φ (x). By definition,
φ (x) log(φ (x)) μ (dx).
For any M > 0, let φM (x) = φ (x) ∧ M. Let E1 = {x ∈ E : φ (x) ≥ e−1 }, E2 = E \ E1 . Since
1
lim log
M→∞
0
φM (x)μ (dx) = 0,
and the function x log x is bounded below, the monotone convergence on E1 and the dominated convergence theorem on E2 imply that I 0 (μ ) ≥ lim
M→∞
1 0
φM (x) log φM (x) ν0 (dx) − log
1 0
φM (x) μ (dx)
(9.34)
= H(ν0 |μ ). On the other hand, it follows, by letting f (x) = eg(x) in (9.9), that 1 1 H(ν0 |μ ) = sup log f (x) ν0 (dx) − log f (x) μ (dx) . f >0, f ∈B(E)
Since
1 0
we get that
0
f (x) μ (dx) − 1 ≥ log
0
1 0
f (x) μ (dx),
H(ν0 |μ ) ≥ I 0 (μ )
which combined with (9.34) implies (9.33).
(9.35) 2
The following result reveals the monotone structure among the rate functions of the two-parameter Dirichlet process. Theorem 9.8. For any μ in M1 (E), 0 ≤ α1 < α2 < 1, I α2 (μ ) ≥ I α1 ( μ ),
(9.36)
and for any α in (0, 1), there exists μ in M1 (E) satisfying ν0 μ such that I α (μ ) > I 0 (μ ).
(9.37)
9.3 Comparison of Rate Functions
195
Proof. By H¨older’s inequality, for any 0 < α1 < α2 < 1, 1 1 1 1 α1 α2 log ( f (x)) ν0 (dx) ≤ log ( f (x)) ν0 (dx) , α1 α2 0 0
(9.38)
and the inequality becomes strict if f (x) is not constant almost surely under ν0 . Hence I α (μ ) is non-decreasing in α over (0, 1). It follows from the concavity of log x that 1 1 1 α log ( f (x)) ν0 (dx) ≥ log f (x)ν0 (dx), α 0 0 which implies that I α (μ ) ≥ I 0 (μ ) for α > 0. Next choose μ in M1 (E) such that ν0 μ and
d ν0 d μ (x)
is not a constant with ν0
probability one; then I α ( μ ) > I α /2 ( μ ) ≥ I 0 (μ ) for α > 0.
2
The following form of minimax theorem is the key in establishing consistency of the rate functions. Definition 9.1. Let M and N be two spaces. A function f (x, y) defined on M × N is convexlike in x if for every x1 , x2 in M and 0 ≤ λ ≤ 1, there exists x in M such that f (x, y) ≤ λ f (x1 , y) + (1 − λ ) f (x2 , y), for all y ∈ N. The function f (x, y) is said to be concavelike in y if for every y1 , y2 in N and 0 ≤ λ ≤ 1, there exists y in N such that f (x, y) ≥ λ f (x, y1 ) + (1 − λ ) f (x, y2 ), for all x ∈ M. The function f (x, y) is convex–concavelike if it is convexlike in x and concavelike in y. Theorem 9.9. (Sion’s minimax theorem [164]) Let M and N be any two topological spaces and f (x, y) be a function on M × N. If M is compact, f (x, y) is convexconcavelike, and, for each y in N, f (x, y) is lower semi-continuous in x, then sup inf f (x, y) = inf sup f (x, y).
y∈N x∈M
x∈M y∈N
Now we turn to the problem of consistency. Theorem 9.10. For any μ in M1 (E), lim I α (μ ) = I 0 (μ ).
α →0
Proof. Let M = [0, 1/2], N = { f ∈ B(E) : f > 0}. For any α in M and f in N, define F(α , f ) =
log( 01 ( f (x))α ν0 (dx)) + 1 − 01 f (x) μ (dx), α > 0 1 α = 0. 0 log f (x) ν0 (dx) + 1 − 0 f (x) μ (dx),
1
α1
196
9 Large Deviations for the Dirichlet Processes
Then clearly
I 0 (μ ) = sup inf F(α , f ), f ∈N α ∈M
and, by Theorem 9.8,
lim I α (μ ) = inf sup F(α , f ).
α →0
α ∈M f ∈N
For each fixed α , F(α , f ) is clearly concave as a function of f from N. For fixed f in N, F(α , f ) is continuous in α . Since F(α , f ) is monotone in α , it is convexlike. The theorem now follows from Sion’s minimax theorem. 2 Remarks: (a) Both Ξθ ,α ,ν0 and Ξθ ,ν0 converge to ν0 for large θ . When θ becomes large, each component of P(θ , α ) is more likely to be small. The introduction of positive α plays a similar role. Thus the mass in Ξθ ,α ,ν0 spreads more evenly than the mass in Ξθ ,ν0 . In other words, Ξθ ,α ,ν0 is “closer” to ν0 than Ξθ ,ν0 . This observation is made rigorous through the fact that I α can be strictly bigger than I 0 . The monotonicity of I α in α shows that α can be used to measure the relative “closeness” to ν0 among all Ξθ ,α ,ν0 for large θ . (b) The process Yα ,θ (t) is a process with exchangeable increments. The method here may be adapted to establish LDPs for other processes with exchangeable increments. Consider the map Φ defined in (5.76) that maps every element in M1 (E) to the descending sequence of masses of all its atoms. One would hope to get the LDP for Πθ in Chapter 8 from the LDP for Πθ ,ν0 through the contraction principle using the map Φ . Unfortunately, Φ is not continuous on the effective domain of Hˆ as the 1 following example shows. Let μn = 12 ν0 + 2n ∑nk=1 δk/n2 . Then μn converges weakly 1 1 1 to 2 [ν0 + δ0 ] while Φ (μn ) = ( 2n , . . . , 2n , 0, 0..) converges to (0, . . . 0, . . .) rather than (1/2, 0, . . .) = Φ (δ0 ). The rate function in Theorem 8.1, however, does have a connection to the relative entropy, as the next result shows. Theorem 9.11. Let S(P) be defined in (8.15). For any m ≥ 1, n > m, set (p1 ,...,pm )
∇n
= {(q1 , . . . , qn−m ) : (p1 , . . . , pm , q1 , . . . , qn−m ) ∈ ∇n }.
Let Hn ((p1 , . . . , pm , q1 , . . . , qn−m )) denote the relative entropy of ( 1n , . . . , 1n ) with respect to (p1 , . . . , pm , q1 , . . . , qn−m ). Then (p1 ,...,pm )
S(p) = lim lim inf{Hn ((p1 , . . . , pm , q1 , . . . , qn−m )) : (q1 , . . . , qn−m ) ∈ ∇n m→∞ n→∞
}.
Proof. The equality holds trivially if p1 + · · · + pm = 1. Assume that ∑m i=1 pi < 1. Then
9.4 Notes
197
1 1 1 1 1 n−m log log + log + . npi n n n q1 · · · qn−m i=1 n m
Hn ((p1 , . . . , pm , q1 , . . . , qn−m )) = ∑
m Since ∑n−m i=1 qi = 1 − ∑i=1 pi , and the product q1 · · · qn−m reaches its maximum when q1 , . . . , qn−m are all equal, it follows that (p1 ,...,pm )
inf{Hn ((p1 , . . . , pm , q1 , . . . , qn−m )) : (q1 , . . . , qn−m ) ∈ ∇n 1 1 m n−m 1 n−m m log = log + ∑ log + n n n i=1 pi n n +
}
n−m 1 . log n 1 − ∑m i=1 pi
Letting n → ∞, followed by m → ∞, we obtain the result.
2
9.4 Notes The LDP result for the one-parameter Dirichlet process first appeared in [135]. Lemma 9.1 and Lemma 9.2 are from [21] and the expression (9.9) can be found in [44]. The remaining results of Section 9.1 are based on the material in [22]. Theorem 9.2, viewed as the reversed form of Sanov’s theorem, also appears in [84] and [85] where the posterior distributions with Dirichlet prior are studied. Ganesh and O’Connell [84] (page 202) gave the following nice explanation for the reverse relation of the rate functions: “in Sanov’s theorem we ask how likely the empirical distribution is to be close to ν0 , given that the true distribution is ν ; whereas in the Bayesian context we ask how likely it is that the true distribution is close to ν0 , given that the empirical distribution is close to ν .” An outline of the construction in Lemma 9.4 can be found on page 254 in [152]. The remaining material in Section 9.2 is from Feng [71]. Both Lemma 9.6 and Theorem 9.8 originated from [71]. The proof of Theorem 9.10 was communicated to me by F.Q. Gao. Theorem 9.11 can be found in [70]. The LDP results presented here are associated with the equilibrium distributions. There are also results on path level LDPs for the FV process. In [21], [68], [22], and [79], the LDPs are studied for the FV process with parent-independent mutation. The LDP for FV process without mutation can be found in [188]. Other closely related results can be found in [159] and [141].
Appendix A
Poisson Process and Poisson Random Measure
Reference [130] is the main source for the material on Poisson process and Poisson random measure.
A.1 Definitions Definition A.1. Let (Ω , F , P) be a probability space, and S be a locally compact, separable metric space with Borel σ -algebra B. The set S denotes the collection of all countable subsets of S. A Poisson process with state space S, defined on (Ω , F , P), is a map from (Ω , F , P) to S satisfying: (a) for each B in B,
N(B) = #{ ∩ B}
is a Poisson random variable with parameter
μ (B) = E[N(B)]; (b) for disjoint sets B1 , . . . , Bn in B, N(B1 ), . . . , N(Bn ) are independent. If B1 , B2 , . . . are disjoint, then, by definition, we have ∞
N(∪∞ i=1 Bi ) = ∑ N(Bi ), i=1
and
∞
μ (∪∞ i=1 Bi ) = ∑ μ (Bi ). i=1
Hence, N is a random measure, and μ is a measure, both on (S, B). The random measure N can also be written in the following form:
S. Feng, The Poisson–Dirichlet Distribution and Related Topics, Probability and its Applications, DOI 10.1007/978-3-642-11194-5, © Springer-Verlag Berlin Heidelberg 2010
199
200
A Poisson Process and Poisson Random Measure
N=
∑ δς .
ς ∈
Definition A.2. The measure μ is called the mean measure of the Poisson process , and N is called a Poisson random measure associated with the Poisson process . The measure μ is also called the mean measure of N. Remark: Let S = [0, ∞) and μ be the Lebesgue measure on S. Then the Poisson random measure associated with the Poisson process with state space S and mean measure μ is just the one-dimensional time-homogeneous Poisson process that is defined as a pure-birth Markov chain with birth rate one. The random set is composed of all jump times of the process. Definition A.3. Assume that S = Rd for some d ≥ 1. Then the mean measure is also called the intensity measure. If there exists a positive constant c such that for any measurable set B,
μ (B) = c|B|, |B| = Lebesgue measure of B, then the Poisson process is said to be homogeneous with intensity c.
A.2 Properties Theorem A.1. Let μ be the mean measure of a Poisson process with state space S. Then μ is diffuse; i.e., for every x in S,
μ ({x}) = 0. Proof. For any fixed x in S, set a = μ ({x}). Then, by definition, P{N({x}) = 2} =
a2 −a e = 0, 2
which leads to the result. 2 The next theorem describes the close relation between a Poisson process and the multinomial distribution. Theorem A.2. Let be a Poisson process with state space S and mean measure μ . Assume that the total mass μ (S) is finite. Then, for any n ≥ 1, 1 ≤ m ≤ n, and any set partition B1 , . . . , Bm of S, the conditional distribution of the random vector (N(B1 ), . . . , N(Bm )) given N(S) = n is a multinomial distribution with parameters n and μ (B1 ) μ (Bm ) ,..., . μ (S) μ (S)
A.2 Properties
201
Proof. For any partitions n1 , . . . , nm of n, P{N(B1 ) = n1 , . . . , N(Bm ) = nm | N(S) = n} P{N(B1 ) = n1 , . . . , N(Bm ) = nm } = P{N(S) = n} =
=
μ (Bi )ni e−μ (Bi ) ni ! μ (S)n e−μ (S) n! m
∏m i=1
n n1 , . . . , nm
∏ i=1
μ (Bi ) μ (S)
ni . 2
Theorem A.3. (Restriction and union) (1) Let be a Poisson process with state space S. Then, for every B in B, ∩ B is a Poisson process with state space S and mean measure
μB (·) = μ (· ∩ B). Equivalently, ∩ B can also be viewed as a Poisson process with state space B with mean measure given by the restriction of μ on B. (2) Let 1 and 2 be two independent Poisson processes with state space S, and respective mean measures μ1 and μ2 . Then 1 ∪ 2 is a Poisson process with state space S and mean measure μ = μ1 + μ2 . Proof. Direct verification of the definition of Poisson process. 2 The remaining theorems in this section are stated without proof. The details can be found in [130]. Theorem A.4. (Mapping) Let be a Poisson process with state space S and σ finite mean measure μ (·). Consider a measurable map h from S to another locally compact, separable metric space S . If the measure
μ (·) = μ ( f −1 (·)) is diffuse, then f () = { f (ς ) : ς ∈ } is a Poisson process with state space S and mean measure μ . Theorem A.5. (Marking) Let be a Poisson process with state space S and mean measure μ . The mark of each point ς in , denoted by mς , is a random variable, taking values in a locally compact, separable metric space S , with distribution q(z, ·). Assume that: (1) for every measurable set B in S , q(·, B ) is a measurable function on (S, B); (2) given , the random variables {mς : ς ∈ } are independent;
202
A Poisson Process and Poisson Random Measure
˜ = {(ς , mς ) : ς ∈ } is a Poisson process on with state space S × S and mean then measure μ˜ (dx, dm) = μ (dx)q(x, dm). ˜ is aptly called a marked Poisson process. The Poisson process Theorem A.6. (Campbell) Let be a Poisson process on space (S, B) with mean measure μ . Then for any non-negative measurable function f ,
E exp −
∑
ς ∈
f (ς )
= exp S
(e− f (x) − 1)μ (dx) .
If f is a real-valued measurable function on (S, B) satisfying S
min(| f (x)|, 1)μ (dx) < ∞,
then for any complex number λ such that the integral S
converges, we have
(eλ f (x) − 1) μ (dx)
E exp λ
∑
ς ∈
f (ς )
Moreover, if
S
then
∑
Var
S
ς ∈
∑
ς ∈
(eλ f (x) − 1)μ (dx) .
| f (x)|μ (dx) < ∞,
E
= exp
f (ς ) = f (ς ) =
S
S
(A.1)
f (x)μ (dx), f 2 (x)μ (dx).
In general, for any n ≥ 1, and any real-valued measurable functions f1 , . . . , fn satisfying (A.1), we have E
∑
distinct ς1 ,...,ςn ∈
n
f 1 (ς1 ) · · · fn (ςn ) = ∏ E i=1
∑
ςi ∈
fi (ςi ) .
(A.2)
Appendix B
Basics of Large Deviations
In probability theory, the law of large numbers describes the limiting average or mean behavior of a random population. The fluctuations around the average are characterized by a fluctuation theorem such as the central limit theorem. The theory of large deviations is concerned with the rare event of deviations from the average. Here we give a brief account of the basic definitions and results of large deviations. Everything will be stated in a form that will be sufficient for our needs. All proofs will be omitted. Classical references on large deviations include [30], [50], [168], and [175]. More recent developments can be found in [46], [28], and [69]. The formulations here follow mainly Dembo and Zeitouni [28]. Theorem B.6 is from [157]. Let E be a complete, separable metric space with metric ρ . Generic elements of E are denoted by x, y, etc. Definition B.1. A function I on E is called a rate function if it takes values in [0, +∞] and is lower semicontinuous. For each c in [0, +∞), the set {x ∈ E : I(x) ≤ c} is called a level set. The effective domain of I is defined as {x ∈ E : I(x) < ∞}. If all level sets are compact, the rate function is said to be good. Rate functions will be denoted by other symbols as the need arises. Let {Xε : ε > 0} be a family of E-valued random variables with distributions {Pε : ε > 0}, defined on the Borel σ -algebra B of E. Definition B.2. The family {Xε : ε > 0} or the family {Pε : ε > 0} is said to satisfy a large deviation principle (LDP) on E as ε converges to zero, with speed ε and a good rate function I if
S. Feng, The Poisson–Dirichlet Distribution and Related Topics, Probability and its Applications, DOI 10.1007/978-3-642-11194-5, © Springer-Verlag Berlin Heidelberg 2010
203
204
B Basics of Large Deviations
for any closed set F, lim sup ε log Pε {F} ≤ − inf I(x),
(B.1)
for any open set G, lim inf ε log Pε {G} ≥ − inf I(x).
(B.2)
ε →0
x∈F
ε →0
x∈G
Estimates (B.1) and (B.2) are called the upper bound and lower bound, respectively. Let a(ε ) be a function of ε satisfying a(ε ) > 0, lim a(ε ) = 0. ε →0
If the multiplication factor ε in front of the logarithm is replaced by a(ε ), then the LDP has speed a(ε ). It is clear that the upper and lower bounds are equivalent to the following statement: for all B ∈ B, − inf◦ I(x) ≤ lim inf ε log Pε {B} ε →0
x∈B
lim sup ε log Pε {B} ≤ − inf I(x), ε →0
x∈B
where B◦ and B denote the interior and closure of B respectively. An event B ∈ B satisfying inf◦ I(x) = inf I(x) x∈B
x∈B
is called a I-continuity set. Thus for a I-continuity set B, we have that lim ε log Pε {B} = − inf I(x).
ε →0
x∈B
If the values for ε are only {1/n : n ≥ 1}, we will write Pn instead of P1/n . If the upper bound (B.1) holds only for compact sets, then we say the family {Pε : ε > 0} satisfies the weak LDP. To establish an LDP from the weak LDP, one needs to check the following condition which is known as exponential tightness: For any M > 0, there is a compact set K such that on the complement K c of K we have lim sup ε log Pε {K c } ≤ −M. ε →0
(B.3)
Definition B.3. The family {Pε : ε > 0} is said to be exponentially tight if (B.3) holds. An interesting consequence of an LDP is the following theorem. Theorem B.1. (Varadhan’s lemma) Assume that the family {Pε : ε > 0} satisfies an LDP with speed ε and good rate function I. Let f and the family { fε : ε ≥ 1} be bounded continuous functions on E satisfying lim sup ρ ( fε (x), f (x)) = 0.
ε →0 x∈E
Then
B Basics of Large Deviations
205
lim ε log E Pε [e
ε →0
fε (x) ε
] = sup{ f (x) − I(x)}. x∈E
Remark: Without knowing the existence of an LDP, one can guess the form of the rate function by calculating the left-hand side of the above equation. The next result shows that an LDP can be transformed by a continuous function from one space to another. Theorem B.2. (Contraction principle) Let E, F be complete, separable spaces, and h be a measurable function from E to F. If the family of probability measures {Pε : ε > 0} on E satisfies an LDP with speed ε and good rate function I and the function h is continuous at every point in the effective domain of I, then the family of probability measures {Pε ◦ h−1 : ε > 0} on F satisfies an LDP with speed ε and good rate function, I , where I (y) = inf{I(x) : x ∈ E, y = h(x)}. Theorem B.3. Let {Yε : ε > 0} be a family of random variables satisfying an LDP on space E with speed ε and rate function I. If E0 is a closed subset of E, and P{Yε ∈ E0 } = 1, {x ∈ E : I(x) < ∞} ⊂ E0 then the LDP for {Yε : ε > 0} holds on E0 . The next concept describes the situation when two families of random variables are indistinguishable exponentially. Definition B.4. Let
{Xε : ε > 0}, {Yε : ε > 0}
be two families of E-valued random variables on a probability space (Ω , F , P). If for any δ > 0 the family {Pε : ε > 0} of joint distributions of (Xε ,Yε ) satisfies lim sup ε log Pε {{(x, y) : ρ (x, y) > δ }} = −∞, ε →0
then we say that {Xε : ε > 0} and {Yε : ε > 0} are exponentially equivalent with speed ε . The following theorem shows that the LDPs for exponentially equivalent families of random variables are the same. Theorem B.4. Let {Xε : ε > 0} and {Yε : ε > 0} be two exponentially equivalent families of E−valued random variables. If an LDP holds for {Xε : ε > 0}, then the same LDP holds for {Yε : ε > 0} and vice versa. To generalize the notion of exponential equivalence, we introduce the concept of exponential approximation next.
206
B Basics of Large Deviations
Definition B.5. Consider a family of random variables {Xε : ε > 0} and a sequence of families of random variables {Yεn : ε > 0}, n = 1, 2, . . . , all defined on the same probability space. Denote the joint distribution of (Xε ,Yεn ) by Pεn . Assume that for any δ > 0, (B.4) lim lim sup ε log Pεn {{(x, y) : ρ (x, y) > δ }} = −∞, n→∞
then the sequence {Xε : ε > 0}.
ε →0
{Yεn
: ε > 0} is called an exponentially good approximation of
Theorem B.5. Let the sequence of families {Yεn : ε > 0}, n = 1, 2, . . . , be an exponentially good approximation to the family {Xε : ε > 0}. Assume that for each n ≥ 1, the family {Yεn : ε > 0} satisfies an LDP with speed ε and good rate function In . Set I(x) = sup lim inf
inf
δ >0 n→∞ {y:ρ (y,x)<δ }
In (y).
(B.5)
If I is a good rate function, and for any closed set F, inf I(x) ≤ lim sup inf In (y),
x∈F
n→∞ y∈F
(B.6)
then the family {Xε : ε > 0} satisfies an LDP with speed ε and good rate function I. The basic theory of convergence of sequences of probability measures on a metric space have an analog in the theory of large deviations. Prohorov’s theorem, relating compactness to tightness, has the following parallel that links exponential tightness to a partial LDP, defined below. Definition B.6. A family of probability measures {Pε : ε > 0} is said to satisfy the partial LDP if for every sequence εn converging to zero there is a subsequence εn such that the family {Pεn : εn > 0} satisfies an LDP with speed εn and a good rate function I . Remark: The partial LDP becomes an LDP if the rate functions associated with different subsequences are the same. Theorem B.6. (Pukhalskii) (1) The partial LDP is equivalent to exponential tightness. Thus the partial LDP always holds on a compact space E. (2) Assume that {Pε : ε > 0} satisfies the partial LDP with speed ε , and for every x in E lim lim sup ε log Pε {ρ (y, x) ≤ δ }
δ →0
ε →0
= lim lim inf ε log Pε {ρ (y, x) < δ } = −I(x). δ →0 ε →0
Then {Pε : ε > 0} satisfies an LDP with speed ε and good rate function I.
(B.7)
B Basics of Large Deviations
207
For an Rd -valued random variable Y , we define the logarithmic moment generating function of Y or its law μ as
Λ (λ ) = log E[eλ ,Y ] for all λ ∈ Rd
(B.8)
Λ (·) is also called the cumulant where , denotes the usual inner product in generating function of Y . The Fenchel–Legendre transformation of Λ (λ ) is defined as Λ ∗ (x) := sup {λ , x − Λ (λ )}. (B.9) Rd .
λ ∈Rd
Theorem B.7. (Cram´er) Let {Xn : n ≥ 1} be a sequence of i.i.d. random variables in Rd . Denote the law of 1n ∑nk=1 Xk by Pn . Assume that
Λ (λ ) = log E[eλ ,X1 ] < ∞ for all λ ∈ Rd . Then the family {Pn : n ≥ 1} satisfies an LDP with speed 1/n and good rate function I(x) = Λ ∗ (x). The i.i.d. assumption plays a crucial role in Cram´er’s theorem. For general situations one has the following G¨artner–Ellis theorem. Theorem B.8. (G¨artner–Ellis) Let {Yε : ε > 0} be a family of random vectors in Rd . Denote the law of Yε by Pε . Define
Λε (λ ) = log E[eλ ,Yε ]. Assume that the limit
Λ (λ ) = lim εΛε (λ /ε ), n→∞
exists, and is lower semicontinuous. Set D = {λ ∈ Rd : Λ (λ ) < ∞}. If D has an nonempty interior D ◦ on which Λ is differentiable, and the norm of the gradient of Λ (λn ) converges to infinity, whenever λn in D ◦ converges to a boundary point of D ◦ (Λ satisfying these conditions is said to be essentially smooth), then the family {Pε : ε > 0} satisfies an LDP with speed ε and good rate function I = Λ ∗ . The next result can be derived from the G¨artner–Ellis theorem. Corollary B.9 Assume that {Xε : ε > 0}, {Yε : ε > 0}, {Zε : ε > 0} are three families of real-valued random variables, all defined on the same probability space with respective laws {P1ε : ε > 0}, {P2ε : ε > 0}, {P3ε : ε > 0}.
If both {P1ε : ε > 0} and {P3ε : ε > 0} satisfy the assumptions in Theorem B.8 with the same Λ(·), and with probability one Xε ≤ Yε ≤ Zε, then {P2ε : ε > 0} satisfies an LDP with speed ε and a good rate function given by

I(x) = sup_{λ∈R} {λx − Λ(λ)}.
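To see Cramér's theorem at work in the simplest setting (again an added illustration, not the author's text): for i.i.d. fair coin flips Xk ∈ {0, 1}, with Sn = X1 + · · · + Xn, one has Λ(λ) = log((1 + e^λ)/2) and Λ*(x) = x log(2x) + (1 − x) log(2(1 − x)) for x ∈ [0, 1], the relative entropy of Bernoulli(x) with respect to Bernoulli(1/2). The sketch below compares the exact decay rate −(1/n) log P(Sn/n ≥ a) with Λ*(a); the sample sizes and the threshold a are arbitrary choices for this illustration.

import math

def binomial_tail(n, a):
    """Exact P(S_n / n >= a) for S_n ~ Binomial(n, 1/2)."""
    k_min = math.ceil(a * n)
    return sum(math.comb(n, k) for k in range(k_min, n + 1)) / 2 ** n

def rate_fair_coin(x):
    """Fenchel-Legendre transform of log((1 + e^l)/2): relative entropy of
    Bernoulli(x) with respect to Bernoulli(1/2)."""
    if x in (0.0, 1.0):
        return math.log(2.0)
    return x * math.log(2 * x) + (1 - x) * math.log(2 * (1 - x))

a = 0.75
print(f"rate function at a = {a}: {rate_fair_coin(a):.4f}")
for n in (50, 200, 1000, 4000):
    p = binomial_tail(n, a)
    print(f"n = {n:5d}:  -(1/n) log P(S_n/n >= {a}) = {-math.log(p) / n:.4f}")

The empirical decay rates decrease toward Λ*(0.75) ≈ 0.1308 as n grows, as the theorem predicts.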
Infinite-dimensional generalizations of Cramér's theorem are also available. Here we only mention one particular case: Sanov's theorem. Let {Xk : k ≥ 1} be a sequence of i.i.d. random variables in Rd with common distribution μ. For any n ≥ 1, define

ηn = (1/n) ∑_{k=1}^{n} δ_{Xk},

where δx is the Dirac measure concentrated at x. The empirical distribution ηn belongs to the space M1(Rd) of all probability measures on Rd equipped with the weak topology. A well-known result from statistics says that when n becomes large one recovers the true distribution μ from ηn. Clearly M1(Rd) is an infinite-dimensional space. Denote the law of ηn on M1(Rd) by Qn. Then we have:

Theorem B.10. (Sanov) The family {Qn : n ≥ 1} satisfies an LDP with speed 1/n and good rate function

H(ν|μ) = ∫_{Rd} log(dν/dμ) dν  if ν ≪ μ,  and  H(ν|μ) = ∞ otherwise,    (B.10)

where ν ≪ μ means that ν is absolutely continuous with respect to μ, and H(ν|μ) is called the relative entropy of ν with respect to μ.
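On a finite alphabet the rate function in Theorem B.10 reduces to H(ν|μ) = ∑_i ν(i) log(ν(i)/μ(i)), and Sanov's theorem says, roughly, that the probability that ηn equals (or is close to) a fixed ν decays like e^{−nH(ν|μ)}. The following sketch is an editorial addition under that finite-alphabet assumption; the particular μ and ν are arbitrary. It compares the exact log-probability of realizing a given empirical distribution with the exponential estimate.

import math

def relative_entropy(nu, mu):
    """H(nu | mu) = sum_i nu_i log(nu_i / mu_i); +infinity unless nu << mu."""
    h = 0.0
    for nu_i, mu_i in zip(nu, mu):
        if nu_i > 0.0:
            if mu_i == 0.0:
                return math.inf
            h += nu_i * math.log(nu_i / mu_i)
    return h

def log_multinomial_prob(counts, mu):
    """log P(n i.i.d. draws from mu produce exactly the given counts)."""
    n = sum(counts)
    log_coef = math.lgamma(n + 1) - sum(math.lgamma(c + 1) for c in counts)
    return log_coef + sum(c * math.log(m) for c, m in zip(counts, mu) if c > 0)

mu = (0.5, 0.3, 0.2)   # true distribution on a three-letter alphabet
nu = (0.2, 0.3, 0.5)   # target empirical distribution
print(f"H(nu | mu) = {relative_entropy(nu, mu):.4f}")
for n in (30, 300, 3000):
    counts = tuple(int(round(n * q)) for q in nu)   # counts realizing nu exactly
    decay = -log_multinomial_prob(counts, mu) / n
    print(f"n = {n:5d}:  -(1/n) log P(eta_n = nu) = {decay:.4f}")

The decay exponents approach H(ν|μ) ≈ 0.2749 up to a polynomial correction of order (log n)/n, which is exactly the accuracy an LDP provides.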
References
1. M. Abramowitz and I.A. Stegun (1965). Handbook of Mathematical Functions. Dover Publ. Inc., New York. 2. D.J. Aldous (1985). Exchangeability and Related Topics, École d'Été de Probabilités de Saint Flour, Lecture Notes in Math., Vol. 1117, 1–198, Springer-Verlag. 3. C. Antoniak (1974). Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. Ann. Statist. 2, 1152–1174. 4. M. Aoki (2008). Thermodynamic limit of macroeconomic or financial models: one- and two-parameter Poisson–Dirichlet models. J. Econom. Dynam. Control 32, No. 1, 66–84. 5. R. Arratia (1996). Independence of prime factors: total variation and Wasserstein metrics, insertions and deletions, and the Poisson–Dirichlet process. Preprint. 6. R. Arratia, A.D. Barbour and S. Tavaré (1992). Poisson process approximations for the Ewens sampling formula. Ann. Appl. Probab. 2, No. 3, 519–535. 7. R. Arratia, A.D. Barbour and S. Tavaré (1997). Random combinatorial structures and prime factorizations. Notices of AMS 44, No. 8, 903–910. 8. R. Arratia, A.D. Barbour and S. Tavaré (1999). The Poisson–Dirichlet distribution and the scale invariant Poisson process. Combin. Probab. Comput. 8, 407–416. 9. R. Arratia, A.D. Barbour and S. Tavaré (2003). Logarithmic Combinatorial Structures: A Probabilistic Approach. EMS Monographs in Mathematics, European Mathematical Society. 10. R. Arratia, A.D. Barbour and S. Tavaré (2006). A tale of three couplings: Poisson–Dirichlet and GEM approximations for random permutations. Combin. Probab. Comput. 15, 31–62. 11. A.D. Barbour, L. Holst and S. Janson (1992). Poisson Approximation. Oxford University Press, New York. 12. J. Bertoin (2006). Random Fragmentation and Coagulation Processes. Cambridge Studies in Advanced Mathematics, 102. Cambridge University Press, Cambridge. 13. J. Bertoin (2008). Two-parameter Poisson–Dirichlet measures and reversible exchangeable fragmentation-coalescence processes. Combin. Probab. Comput. 17, No. 3, 329–337. 14. P. Billingsley (1968). Convergence of Probability Measures, Wiley, New York. 15. P. Billingsley (1972). On the distribution of large prime divisors. Period. Math. Hungar. 2, 283–289. 16. D. Blackwell and J.B. MacQueen (1973). Ferguson distribution via Pólya urn scheme. Ann. Statist. 1, 353–355. 17. M.A. Carlton (1999). Applications of the two-parameter Poisson–Dirichlet distribution. Unpublished Ph.D. thesis, Dept. of Statistics, University of California, Los Angeles. 18. M.F. Chen (2005). Eigenvalues, Inequalities, and Ergodic Theory. Probability and its Applications (New York). Springer-Verlag, London. 19. S. Coles (2001). An Introduction to Statistical Modeling of Extreme Values. Springer-Verlag, London.
20. D.A. Dawson (1993). Measure-valued Markov Processes, École d'Été de Probabilités de Saint Flour, Lecture Notes in Math., Vol. 1541, 1–260, Springer-Verlag. 21. D.A. Dawson and S. Feng (1998). Large deviations for the Fleming–Viot process with neutral mutation and selection. Stoch. Proc. Appl. 77, 207–232. 22. D.A. Dawson and S. Feng (2001). Large deviations for the Fleming–Viot process with neutral mutation and selection, II. Stoch. Proc. Appl. 92, 131–162. 23. D.A. Dawson and S. Feng (2006). Asymptotic behavior of Poisson–Dirichlet distribution for large mutation rate. Ann. Appl. Probab. 16, No. 2, 562–582. 24. D.A. Dawson and K.J. Hochberg (1979). The carrying dimension of a stochastic measure diffusion. Ann. Probab. 7, 693–703. 25. D.A. Dawson and K.J. Hochberg (1982). Wandering random measures in the Fleming–Viot model. Ann. Probab. 10, 554–580. 26. J.M. DeLaurentis and B.G. Pittel (1985). Random permutations and Brownian motion. Pacific J. Math. 119, 287–301. 27. J. Delmas, J. Dhersin, and A. Siri-Jegousse (2008). Asymptotic results on the length of coalescent trees. Ann. Appl. Probab. 18, No. 3, 997–1025. 28. A. Dembo and O. Zeitouni (1998). Large Deviations Techniques and Applications. Second edition. Applications of Mathematics, Vol. 38, Springer-Verlag, New York. 29. B. Derrida (1997). From random walks to spin glasses. Physica D 107, 186–198. 30. J.D. Deuschel and D.W. Stroock (1989). Large Deviations. Academic Press, Boston. 31. K. Dickman (1930). On the frequency of numbers containing prime factors of a certain relative magnitude. Arkiv för Matematik, Astronomi och Fysik 22, 1–14. 32. R. Dong, C. Goldschmidt and J. B. Martin (2006). Coagulation–fragmentation duality, Poisson–Dirichlet distributions and random recursive trees. Ann. Appl. Probab. 16, No. 4, 1733–1750. 33. P. Donnelly (1986). Partition structures, Polya urns, the Ewens sampling formula, and the age of alleles. Theor. Pop. Biol. 30, 271–288. 34. P. Donnelly and G. Grimmett (1993). On the asymptotic distribution of large prime factors. J. London Math. Soc. 47, No. 3, 395–404. 35. P. Donnelly and P. Joyce (1989). Continuity and weak convergence of ranked and size-biased permutations on the infinite simplex. Stoc. Proc. Appl. 31, 89–104. 36. P. Donnelly and P. Joyce (1991). Weak convergence of population genealogical processes to the coalescent with ages. Ann. Prob. 20, No. 1, 322–341. 37. P. Donnelly, T.G. Kurtz, and S. Tavaré (1991). On the functional central limit theorem for the Ewens sampling formula. Ann. Appl. Probab. 1, No. 4, 539–545. 38. P. Donnelly and T.G. Kurtz (1996a). A countable representation of the Fleming–Viot measure-valued diffusion. Ann. Prob. 24, No. 2, 698–742. 39. P. Donnelly and T.G. Kurtz (1996b). The asymptotic behavior of an urn model arising in population genetics. Stoc. Proc. Appl. 64, 1–16. 40. P. Donnelly and T.G. Kurtz (1999a). Genealogical processes for the Fleming–Viot models with selection and recombination. Ann. Appl. Prob. 9, No. 4, 1091–1148. 41. P. Donnelly and T.G. Kurtz (1999b). Particle representations for measure-valued population models. Ann. Prob. 27, No. 1, 166–205. 42. P. Donnelly and S. Tavaré (1986). The ages of alleles and a coalescent. Adv. Appl. Prob. 12, 1–19. 43. P. Donnelly and S. Tavaré (1987). The population genealogy of the infinitely-many-neutral alleles model. J. Math. Biol. 25, 381–391. 44. M.D. Donsker and S.R.S. Varadhan (1975). Asymptotic evaluation of certain Markov process expectations for large time, I. Comm. Pure Appl. Math. 28, 1–47. 45. M. Döring and W. Stannat (2009). The logarithmic Sobolev inequality for the Wasserstein diffusion. Probab. Theory Relat. Fields 145, No. 1-2, 189–209. 46. P. Dupuis and R.S. Ellis (1997). A Weak Convergence Approach to the Theory of Large Deviations. Wiley, New York. 47. R. Durrett (1996). Probability: Theory and Examples. Second edition. Duxbury Press.
48. R. Durrett (2008). Probability Models for DNA Sequence Evolution. Second edition. Springer-Verlag, New York. 49. E.B. Dynkin (1994). An Introduction to Branching Measure-Valued Processes. Vol. 6, CRM Monographs. Amer Math. Soc., Providence. 50. R. S. Ellis (1985). Entropy, Large Deviations, and Statistical Mechanics. Springer, Berlin. 51. S. Engen (1978). Abundance Models with Emphasis on Biological Communities and Species Diversity. Chapman-Hall, London. 52. A.M. Etheridge (1991). An Introduction to Superprocesses. University Lecture Series, Vol. 20, Amer. Math. Soc., Providence, RI. 53. A.M. Etheridge and P. March (1991). A note on superprocesses. Probab. Theory Relat. Fields 89, 141–147. 54. A. Etheridge, P. Pfaffelhuber, and A. Wakolbinger (2006). An approximate sampling formula under genetic hitchhiking. Ann. Appl. Probab. 16, No. 2, 685–729. 55. S.N. Ethier (1976). A class of degenerate diffusions processes occurring in population genetics. Comm. Pure. Appl. Math. 29, 483–493. 56. S.N. Ethier (1990). The infinitely-many-neutral-alleles diffusion model with ages. Adv. Appl. Probab. 22, 1–24. 57. S.N. Ethier (1992). Eigenstructure of the infinitely-many-neutral-alleles diffusion model. J. Appl. Probab. 29, 487–498. 58. S.N. Ethier and R.C. Griffiths (1987). The infinitely-many-sites model as a measure-valued diffusion. Ann. Probab. 15, No. 2, 515–545. 59. S.N. Ethier and R.C. Griffiths (1993). The transition function of a Fleming–Viot process. Ann. Probab. 21, No. 3, 1571–1590. 60. S.N. Ethier and R.C. Griffiths (1993). The transition function of a measure-valued branching diffusion with immigration. In Stochastic Processes. A Festschrift in Honour of Gopinath Kallianpur (S. Cambanis, J. Ghosh, R.L. Karandikar and P.K. Sen, eds.), 71–79. Springer, New York. 61. S.N. Ethier and T.G. Kurtz (1981). The infinitely-many-neutral-alleles diffusion model. Adv. Appl. Probab. 13, 429–452. 62. S.N. Ethier and T.G. Kurtz (1986). Markov Processes: Characterization and Convergence, John Wiley, New York. 63. S.N. Ethier and T.G. Kurtz (1992). On the stationary distribution of the neutral diffusion model in population genetics. Ann. Appl. Probab. 2, 24–35. 64. S.N. Ethier and T.G. Kurtz (1993). Fleming–Viot processes in population genetics. SIAM J. Control and Optimization 31, No. 2, 345–386. 65. S.N. Ethier and T.G. Kurtz (1994). Convergence to Fleming–Viot processes in the weak atomic topology. Stoch. Proc. Appl. 54, 1–27. 66. W.J. Ewens (1972). The sampling theory of selectively neutral alleles. Theor. Pop. Biol. 3, 87–112. 67. W.J. Ewens (2004). Mathematical Population Genetics, Vol. I, Springer-Verlag, New York. 68. J. Feng (1999). Martingale problems for large deviations of Markov processes. Stoch. Proc. Appl. 81, 165–216. 69. J. Feng and T.G. Kurtz (2006). Large Deviations for Stochastic Processes. Amer. Math. Soc., Providence, RI. 70. S. Feng (2007). Large deviations associated with Poisson–Dirichlet distribution and Ewens sampling formula. Ann. Appl. Probab. 17, Nos. 5/6, 1570–1595. 71. S. Feng (2007). Large deviations for Dirichlet processes and Poisson–Dirichlet distribution with two parameters. Electron. J. Probab. 12, 787–807. 72. S. Feng (2009). Poisson–Dirichlet distribution with small mutation rate. Stoch. Proc. Appl. 119, 2082–2094. 73. S. Feng and F.Q. Gao (2008). Moderate deviations for Poisson–Dirichlet distribution. Ann. Appl. Probab. 18, No. 5, 1794–1824. 74. S. Feng and F.Q. Gao (2010). Asymptotic results for the two-parameter Poisson–Dirichlet distribution. Stoch. Proc. 
Appl. 120, 1159–1177.
75. S. Feng and F.M. Hoppe (1998). Large deviation principles for some random combinatorial structures in population genetics and Brownian motion. Ann. Appl. Probab. 8, No. 4, 975– 994. 76. S. Feng and W. Sun (2009). Some diffusion processes associated with two parameter Poisson–Dirichlet distribution and Dirichlet process. Probab. Theory Relat. Fields, DOI 10.1007/s00440-009-0238-2. 77. S. Feng, W. Sun, F.Y. Wang, and F. Xu (2009). Functional inequalities for the unlabeled two-parameter infinite-alleles diffusion. Preprint. 78. S. Feng and F.Y. Wang (2007). A class of infinite-dimensional diffusion processes with connection to population genetics. J. Appl. Prob. 44, 938–949. 79. S. Feng and J. Xiong (2002). Large deviation and quasi-potential of a Fleming–Viot process. Elect. Comm. Probab. 7, 13–25. 80. T.S. Ferguson (1973). A Baysian analysis of some nonparametric problems. Ann. Stat. 1, 209–230. 81. R.A. Fisher (1922). On the dominance ratio. Proc. Roy. Soc. Edin. 42, 321–341. 82. W.H. Fleming and M. Viot (1979). Some measure-valued Markov processes in population genetics theory. Indiana Univ. Math. J. 28, 817–843. 83. A. Gamburd (2006). Poisson–Dirichlet distribution for random Belyi surfaces. Ann. Probab. 34, No. 5, 1827–1848. 84. A.J. Ganesh and N. O’Connell (1999). An inverse of Sanov’s theorem. Statist. Probab. Lett. 42, No. 2, 201–206. 85. A.J. Ganesh and N. O’Connell (2000). A large-deviation principle for Dirichlet posteriors. Bernoulli 6, No. 6, 1021–1034. 86. J.H. Gillespie (1998). Population Genetics: A Concise Guide, The Johns Hopkins University Press, Baltimore and London. 87. J.H. Gillespie (1999). The role of population size in molecular evolution. Theor. Pop. Biol. 55, 145–156. 88. A.V. Gnedin (1998). On convergence and extensions of size-biased permutations. J. Appl. Prob. 35, 642–650. 89. A.V. Gnedin (2004). Three sampling formulas. Combin. Probab. Comput. 13, No. 2, 185– 193. 90. V.L. Goncharov (1944). Some facts from combinatorics. Izvestia Akad. Nauk. SSSR, Ser. Mat. 8, 3–48. 91. R.C. Griffiths (1979a). On the distribution of allele frequencies in a diffusion model. Theor. Pop. Biol. 15, 140–158. 92. R.C. Griffiths (1979b). A transition density expansion for a multi-allele diffusion model. Adv. Appl. Probab. 11, 310–325. 93. R.C. Griffiths (1980a). Lines of descent in the diffusion approximation of neutral Wright– Fisher models. Theor. Pop. Biol. 17, 35–50. 94. R.C. Griffiths (1980b). Unpublished note. 95. R.C. Griffiths (1988). On the distribution of points in a Poisson process. J. Appl. Prob. 25, 336–345. 96. L. Gross (1993). Logarithmic Sobolev inequalities and contractivity properties of semigroups. Dirichlet forms (Varenna, 1992), 54–88, Lecture Notes in Math., Vol. 1563, SpringerVerlag, Berlin. 97. A. Guionnet and B. Zegarlinski (2003). Lectures on logarithmic Sobolev inequalities. S´eminaire de Probabilit´es, XXXVI, 1–134, Lecture Notes in Math., Vol. 1801, SpringerVerlag, Berlin. 98. P.R. Halmos (1944). Random alms. Ann. Math. Stat. 15, 182–189. 99. K. Handa (2005). Sampling formulae for symmetric selection, Elect. Comm. Probab. 10, 223–234. 100. K. Handa (2009). The two-parameter Poisson–Dirichlet point process. Bernoulli, 15, No. 4, 1082–1116. 101. J.C. Hansen (1990). A functional central limit theorem for the Ewens sampling formula. J. Appl. Probab. 27, 28–43.
102. L. Holst (2001). The Poisson–Dirichlet distribution and its relatives revisited. Preprint. 103. F.M. Hoppe (1984). Polya-like urns and the Ewens sampling formula. J. Math. Biol. 20, 91–94. 104. F.M. Hoppe (1987). The sampling theory of neutral alleles and an urn model in population genetics. J. Math. Biol. 25, No. 2, 123–159. 105. R.R. Hudson (1983). Properties of a neutral allele model with intragenic recombination. Theor. Pop. Biol. 123, 183–201. 106. W.N. Hudson and H.G. Tucker (1975). Equivalence of infinitely divisible distributions. Ann. Probab. 3, No. 1, 70–79. 107. T. Huillet (2007). Ewens sampling formulae with and without selection. J. Comput. Appl. Math. 206, No. 2, 755–773. 108. H. Ishwaran and L. F. James (2001). Gibbs sampling methods for stick-breaking priors. J. Amer. Statist. Assoc. 96, No. 453, 161–173. 109. H. Ishwaran and L. F. James (2003). Generalized weighted Chinese restaurant processes for species sampling mixture models. Statist. Sinica. 13, No. 4, 1211–1235. 110. J. Jacod, A. N. Shiryaev (1987). Limit Theorems for Stochastic Processes. Springer-Verlag, New York. 111. L.F. James (2003). Bayesian calculus for gamma processes with applications to semiparametric intensity models. Sankhy¯a 65, No. 1, 179–206. 112. L.F. James (2005a). Bayesian Poisson process partition with an application to Bayesian L´evy moving averages. Ann. Statist. 33, No. 4, 1771–1799. 113. L.F. James (2005b). Functionals of Dirichlet processes, the Cifarelli-Regazzini identity and beta-gamma processes. Ann. Statist. 33, No. 2, 647–660. 114. L.F. James (2008). Large sample asymptotics for the two-parameter Poisson Dirichlet processes. In Bertrand Clarke and Subhashis Ghosal, eds, Pushing the Limits of Contemporary Statistics: Contributions in Honor of Jayanta K. Ghosh, 187–199, Institute of Mathematical Statistics, Collections, Vol. 3, Beachwood, Ohio. 115. L.F. James, A. Lijoi, and I. Pr¨unster (2008). Distributions of linear functionals of the two parameter Poisson–Dirichlet random measures. Ann. Appl. Probab. 18, No. 2, 521–551. 116. L.F. James, B. Roynette, and M. Yor (2008). Generalized gamma convolutions, Dirichlet means, Thorin measures, with explicit examples. Probab. Surv. 5, 346–415. 117. P. Joyce and F. Gao (2005). An irrational constant separates selective under dominance from neutrality in the infinite alleles model, Preprint. 118. P. Joyce, S.M. Krone, and T.G. Kurtz (2002). Gaussian limits associated with the Poisson– Dirichlet distribution and the Ewens sampling formula. Ann. Appl. Probab. 12, No. 1, 101– 124. 119. P. Joyce, S.M. Krone, and T.G. Kurtz (2003). When can one detect overdominant selection in the infinite-alleles model? Ann. Appl. Probab. 13, No. 1, 181–212. 120. P. Joyce and S. Tavar´e (1987). Cycles, permutations and the structure of the Yule process with immigration. Stoch. Proc. Appl. 25, 309–314. 121. S. Karlin and J. McGregor (1967). The number of mutant forms maintained in a population. Proc. Fifth Berkeley Symp. Math. Statist. and Prob., L. LeCam and J. Neyman, eds, 415–438. 122. S. Karlin and J. McGregor (1972). Addendum to a paper of W. Ewens. Theor. Pop. Biol. 3, 113–116. 123. D.G. Kendall (1949). Stochastic processes and population growth. J. Roy. Statist. Soc. 11, 230–264. 124. M. Kimura and J.F. Crow (1964). The number of alleles that can be maintained in a finite population. Genetics 49, 725–738. 125. J.C.F. Kingman (1975). Random discrete distributions. J. Roy. Statist. Soc. B 37, 1–22. 126. J.C.F. Kingman (1977). 
The population structure associated with the Ewens sampling formula. Theor. Pop. Biol. 11, 274–283. 127. J.C.F. Kingman (1980). Mathematics of Genetic Diversity. CBMS-NSF Regional Conference Series in Appl. Math. Vol. 34, SIAM, Philadelphia. 128. J.C.F. Kingman (1982). The coalescent. Stoch. Proc. Appl. 13, 235–248.
129. J.C.F. Kingman (1982). On the genealogy of large populations. J. Appl. Prob. 19A, 27–43. 130. J.C.F. Kingman (1993). Poisson Processes. Oxford University Press. 131. N. Konno and T. Shiga (1988). Stochastic differential equations for some measure-valued diffusions. Probab. Theory Relat. Fields 79, 201–225. 132. T.G. Kurtz (2000). Particle representations for measure-valued population processes with spatially varying birth rates. Stochastic Models, Luis G. Gorostiza and B. Gail Ivanoff, eds., CMS Conference Proceedings, Vol. 26, 299–317. 133. J.F. Le Gall (1999). Spatial Branching Process, Random Snakes and Partial Differential Equations. Lectures in Mathematics, ETH Zürich, Birkhäuser. 134. Z. Li, T. Shiga and L. Yao (1999). A reversible problem for Fleming–Viot processes. Elect. Comm. Probab. 4, 71–82. 135. J. Lynch and J. Sethuraman (1987). Large deviations for processes with independent increments. Ann. Probab. 15, 610–627. 136. E. Mayer-Wolf, O. Zeitouni, and M.P.W. Zerner (2002). Asymptotics of certain coagulation–fragmentation process and invariant Poisson–Dirichlet measures. Electron. J. Probab. 7, 1–25. 137. J.W. McCloskey (1965). A model for the distribution of individuals by species in an environment. Ph.D. thesis, Michigan State University. 138. M. Möhle and S. Sagitov (2001). A classification of coalescent processes for haploid exchangeable population models. Ann. Probab. 29, 1547–1562. 139. P.A.P. Moran (1958). Random processes in genetics. Proc. Camb. Phil. Soc. 54, 60–71. 140. C. Neuhauser and S.M. Krone (1997). The genealogy of samples in models with selection. Genetics 145, 519–534. 141. F. Papangelou (2000). The large deviations of a multi-allele Wright–Fisher process mapped on the sphere. Ann. Appl. Probab. 10, No. 4, 1259–1273. 142. C.P. Patil and C. Taillie (1977). Diversity as a concept and its implications for random communities. Bull. Int. Statist. Inst. 47, 497–515. 143. E.A. Perkins (1991). Conditional Dawson-Watanabe superprocesses and Fleming–Viot processes. Seminar on Stochastic Processes, Birkhäuser, 142–155. 144. E.A. Perkins (2002). Dawson-Watanabe Superprocess and Measure-Valued Diffusions, École d'Été de Probabilités de Saint Flour, Lect. Notes in Math. Vol. 1781, 132–329. 145. M. Perman (1993). Order statistics for jumps of normalised subordinators. Stoch. Proc. Appl. 46, 267–281. 146. M. Perman, J. Pitman and M. Yor (1992). Size-biased sampling of Poisson point processes and excursions. Probab. Theory Relat. Fields 92, 21–39. 147. L.A. Petrov (2009). Two-parameter family of infinite-dimensional diffusions on the Kingman simplex. Funct. Anal. Appl. 43, No. 4, 279–296. 148. J. Pitman (1992). The two-parameter generalization of Ewens' random partition structure. Technical Report 345, Dept. Statistics, University of California, Berkeley. 149. J. Pitman (1995). Exchangeable and partially exchangeable random partitions. Probab. Theory Relat. Fields 102, 145–158. 150. J. Pitman (1996a). Partition structures derived from Brownian motion and stable subordinators. Bernoulli 3, 79–96. 151. J. Pitman (1996b). Random discrete distributions invariant under size-biased permutation. Adv. Appl. Probab. 28, 525–539. 152. J. Pitman (1996c). Some developments of the Blackwell-MacQueen urn scheme. Statistics, Probability, and Game Theory, 245–267, IMS Lecture Notes Monogr. Ser. 30, Inst. Math. Statist., Hayward, CA. 153. J. Pitman (1999). Coalescents with multiple collisions. Ann. Probab. 27, 1870–1902. 154. J. Pitman (2002). Poisson–Dirichlet and GEM invariant distributions for split-and-merge transformations of an interval partition. Combin. Probab. Comput. 11, 501–514. 155. J. Pitman (2006). Combinatorial Stochastic Processes, École d'Été de Probabilités de Saint Flour, Lecture Notes in Math., Vol. 1875, Springer-Verlag, Berlin. 156. J. Pitman and M. Yor (1997). The two-parameter Poisson–Dirichlet distribution derived from a stable subordinator. Ann. Probab. 25, No. 2, 855–900.
157. A.A. Puhalskii (1991). On functional principle of large deviations. In V. Sazonov and T. Shervashidze, eds, New Trends in Probability and Statistics, 198–218. VSP Moks’las, Moskva. 158. S. Sagitov (1999). The general coalescent with asynchronous mergers of ancestral lines J. Appl. Probab. 36, 1116–1125. 159. A. Schied (1997). Geometric aspects of Fleming–Viot and Dawson-Watanabe processes. Ann. Probab. 25, 1160–1179. 160. B. Schmuland (1991). A result on the infinitely many neutral alleles diffusion model. J. Appl. Probab. 28, 253–267. 161. J. Schweinsberg (2000). Coalescents with simultaneous multiple collisions. Electron. J. Probab. 5, 1–50. 162. J.A. Sethuraman (1994). A constructive definition of Dirichlet priors. Statist. Sinica 4, No. 2, 639–650. 163. L.A. Shepp and S.P. Lloyd (1966). Ordered cycle length in a random permutation. Trans. Amer. Math. Soc. 121, 340–351. 164. M. Sion (1958). On general minimax theorem. Pacific J. Math. 8, 171–176. 165. W. Stannat (2000). On the validity of the log-Sobolev inequality for symmetric Fleming–Viot operators. Ann. Probab. 28, No. 2, 667–684. 166. W. Stannat (2002). Long-time behaviour and regularity properties of transition semigroups of Fleming–Viot processes. Probab. Theory Relat. Fields 122, 431–469. 167. W. Stannat (2003). On transition semigroup of (A,Ψ )-superprocess with immigration. Ann. Probab. 31, No. 3, 1377–1412. 168. D.W. Stroock (1984). An Introduction to the Theory of Large Deviations. Springer-Verlag, New York. 169. F. Tajima (1983). Evolutionary relationship of DNA sequences in finite populations. Genetics 105, 437–460. 170. S. Tavar´e (1984). Line-of-descent and genealogical processes, and their applications in population genetics models. Theor. Pop. Biol. 26, 119–164. 171. S. Tavar´e (1987). The birth process with immigration, and the genealogical structure of large populations. J. Math. Biol. 25, 161–168. 172. N.V. Tsilevich (2000). Stationary random partitions of positive integers. Theor. Probab. Appl. 44, 60–74. 173. N.V. Tsilevich and A. Vershik (1999). Quasi-invariance of the gamma process and the multiplicative properties of the Poisson–Dirichlet measures. C. R. Acad. Sci. Paris, t. 329, S´erie I, 163–168. 174. N.V. Tsilevich, A. Vershik, and M. Yor (2001). An infinite-dimensional analogue of the Lebesgue measure and distinguished properties of the gamma process. J. Funct. Anal. 185, No. 1, 274–296. 175. S.R.S. Varadhan (1984). Large Deviations and Applications. SIAM, Philadelphia. 176. A.M. Vershik and A.A. Schmidt (1977). Limit measures arising in the asymptotic theory of symmetric groups, I. Theory Probab. Appl. 22, No. 1, 70–85. 177. A.M. Vershik and A.A. Schmidt (1978). Limit measures arising in the asymptotic theory of symmetric groups, II. Theory Probab. Appl. 23, No. 1, 36–49. 178. J. Wakeley (2006). An Introduction to Coalescent Theory. Roberts and Company Publishers. 179. F.Y. Wang (2004). Functional Inequalities, Markov Semigroups and Spectral Theory. Science Press, Beijing. 180. G.A. Watterson (1974). The sampling theory of selectively neutral alleles. Adv. Appl. Prob. 6, 463–488. 181. G.A. Watterson (1976). The stationary distribution of the infinitely-many neutral alleles diffusion model. J. Appl. Prob. 13, 639–651. 182. G.A. Watterson (1977). The neutral alleles model, and some alternatives. Bull. Int. Stat. Inst. 47, 171–186. 183. G.A. Watterson and H.A. Guess (1977). Is the most frequent allele the oldest? Theor. Pop. Biol. 11, 141–160. 184. G.A. Watterson (1984). 
Lines of descent and the coalescent. Theor. Pop. Biol. 26, 119–164.
185. S. Wright (1931). Evolution in Mendelian populations. Genetics 16, 97–159. 186. S. Wright (1949). Adaptation and selection. In Genetics, Paleontology, and Evolution, ed. G.L. Jepson, E. Mayr, G.G. Simpson, 365–389. Princeton University Press. 187. L. Wu (1995). Moderate deviations of dependent random variables related to CLT. Ann. Prob. 23, No. 1, 420–445. 188. K.N. Xiang and T.S. Zhang (2005). Small time asymptotics for Fleming–Viot processes. Infin. Dimens. Anal. Quantum Probab. Relat. Top. 8, No. 4, 605–630.
Index
alleles, 4
allelic partition, 26
Brownian bridge, 134
Chinese restaurant process, 66
coagulation–fragmentation, 111
coalescent, 72
cycle, 44
Dickman function, 22
diffusion process, 8
diploid, 4
Dirichlet process, 46
  two-parameter, 54
Donnelly–Kurtz
  infinite look-down process, 122
  look-down process, 120
duality, 119
effective domain, 203
Ewens sampling formula, 26
exchangeable
  σ-field, 114
  random variable, 113
exponentially equivalent, 205
exponentially good approximation, 206
exponentially tight, 204
fitness, 4
Fleming–Viot process, 85
fluctuation theorem, 127
gamma random measure, 108
GEM (Griffiths-Engen-McCloskey)
  distribution, 25
  process, 110
  two-parameter distribution, 53
gene, 3
genome, 4
genotype, 4
haploid, 4
heterozygous, 4
homozygosity, 142
  of order n, 142
homozygous, 4
Hoppe's urn, 36
infinitely-many-neutral-alleles model, 83
  two-parameter, 111
intensity, 200
intensity measure, 200
Kingman's n-coalescent, 69
LDP
  partial, 206
  weak, 204
LDP (large deviation principle), 203
level set, 203
locus, 3
marked Poisson process, 202
martingale problem, 117
mean measure, 200
moderate deviation principle, 172
moment measure, 114
monoecious, 4
monomial, 87
Moran particle system, 116
MRCA (most recent common ancestor), 68
mutation, 4
neutral, 4
parent-independent mutation, 87
phenotype, 4
Pitman sampling formula, 58
Poisson process, 199
  scale-invariant, 33
Poisson random measure, 200
random genetic drift, 4
random sampling, 5
  continuous, 86
rate function, 203
  good, 203
relative entropy, 208
residual allocation model, 53
scaled population mutation rate, 10
selection, 4
  overdominant, 173
  underdominant, 173
size-biased
  permutation, 24
  sampling, 24
speed, 203
subordinator, 17
  stable, 54
unlabeled, 46