Quan-Lin Li

Constructive Computation in Stochastic Models with Applications: The RG-Factorization

With 23 figures
Author Quan-Lin Li Department of Industrial Engineering Tsinghua University Beijing 100084, P. R. China E-mail:
[email protected]
ISBN 978-7-302-21501-1 Tsinghua University Press, Beijing
ISBN 978-3-642-11491-5, e-ISBN 978-3-642-11492-2 Springer Heidelberg Dordrecht London New York
Library of Congress Control Number: 2009942424
© Tsinghua University Press, Beijing and Springer-Verlag Berlin Heidelberg 2010

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable to prosecution under the German Copyright Law. The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Cover design: Frido Steinen-Broo, EStudio Calamar, Spain
Printed on acid-free paper
Springer is a part of Springer Science+Business Media (www.springer.com)
To my friend Marcel F. Neuts for his pioneering contributions to stochastic models
Preface
Stochastic systems arise in many practical areas, such as applied probability, queueing theory, reliability, risk management, insurance and finance, computer networks, manufacturing systems, transportation systems, supply chain management, service operations management, genomic engineering and the biological sciences. When analyzing a stochastic system, block-structured stochastic models are a useful and effective mathematical tool. In the study of block-structured stochastic models, this book provides a unified, constructive and algorithmic framework for two important directions: performance analysis and system decision. Unlike other books in the literature, the framework of this book is systematically organized by means of the UL- and LU-types of RG-factorizations, which are completely developed in this book and have been extensively discussed by the author. The RG-factorizations provide effective solutions for block-structured Markov chains, and are also shown to be useful for the optimal design and dynamic decision making of many practical systems, such as computer networks, transportation systems and manufacturing systems. In addition, this book uses the RG-factorizations to deal with some recent interesting topics, for example, tail analysis, continuous-state Markov chains, quasi-stationary distributions, Markov reward processes, Markov decision processes, sensitivity analysis, evolutionary games and stochastic games. Note that all of these different problems can be handled by a unified computational method through the RG-factorizations. Specifically, this book pays attention to the optimization, control, decision making and games of block-structured stochastic models, although the available results in these directions are still few.
The study of block-structured stochastic models began with the matrix-geometric stationary probability of the Quasi-Birth-and-Death (QBD) process, which was first proposed to analyze two-dimensional queues and computer systems, e.g., see Evans (1967) and Wallace (1969). The initial attention was directed toward performance computation. Neuts (1978) first extended the results on QBD processes to Markov chains of GI/M/1 type. Based on the phase-type (PH) distribution introduced in Neuts (1975), Neuts (1981) opened an interesting and crucial door in the numerical analysis of stochastic models, which has become increasingly important for dealing with large-scale and complex stochastic systems, owing to the more powerful computational capability brought by the fast development of computer technology and communication networks. For a complete understanding of stochastic models, it is necessary to review two key advances. First, Neuts (1981) considered Markov chains of GI/M/1 type, whose stationary probability vectors have a matrix-geometric form, called the matrix-geometric solution. For the matrix-geometric solution, the matrix R, the minimal nonnegative solution to the nonlinear matrix equation $R = \sum_{k=0}^{\infty} R^{k} A_{k}$, plays an important role. He showed that the numerical computation of Markov chains of GI/M/1 type can be reduced to that of the matrix R, so that an infinite-dimensional computation for the stationary probability vector is transformed into a finite-dimensional computation for the Markov chain censored to level 0. Readers may refer to Neuts (1981), Latouche and Ramaswami (1999), Bini, Latouche and Meini (2005) and the references therein. Second, as companion research to Markov chains of GI/M/1 type, Neuts (1989) provided a detailed discussion of Markov chains of M/G/1 type, whose stationary probability vector has a more complicated form, called the matrix-iterative solution. Although the two types of Markov chains have different block structures, the matrix-iterative solution has many properties similar to those of the matrix-geometric solution; for example, the matrix-iterative solution is determined by the minimal nonnegative solution G to another key nonlinear matrix equation $G = \sum_{k=0}^{\infty} A_{k} G^{k}$.
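As a minimal numerical sketch of how the matrix R can be obtained, consider the QBD case, where the equation reduces to $R = A_0 + R A_1 + R^2 A_2$ and successive substitution starting from $R = 0$ converges monotonically to the minimal nonnegative solution. The block matrices below are illustrative numbers chosen for this sketch, not taken from the book:

```python
import numpy as np

# Illustrative QBD blocks: A0 moves one level up, A1 stays at the same
# level, A2 moves one level down; each row of A0 + A1 + A2 sums to 1.
A0 = np.array([[0.10, 0.05],
               [0.05, 0.10]])
A1 = np.array([[0.30, 0.10],
               [0.10, 0.30]])
A2 = np.array([[0.25, 0.20],
               [0.20, 0.25]])

# Successive substitution for the minimal nonnegative solution of
# R = A0 + R A1 + R^2 A2 (the QBD case of R = sum_k R^k A_k).
R = np.zeros_like(A0)
for _ in range(500):
    R = A0 + R @ A1 + R @ R @ A2

residual = np.max(np.abs(R - (A0 + R @ A1 + R @ R @ A2)))
print("R =\n", R)
print("residual =", residual)
print("spectral radius of R =", max(abs(np.linalg.eigvals(R))))
```

Since the mean downward drift exceeds the upward drift here, the chain is positive recurrent and the spectral radius of R is strictly less than one, so the matrix-geometric tail is summable.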
These results, given in Neuts' two books (1981, 1989), are simple, elegant and computable. However, Markov chains of GI/M/1 type and Markov chains of M/G/1 type are only two important examples in the study of block-structured stochastic models, while the analysis of many practical stochastic systems requires more general block-structured Markov chains, e.g., see the retrial queues in Artalejo and Gómez-Corral (2008) and the other stochastic models in Chapters 1 and 7 of this book. These practical examples motivate us to develop in this book a more general algorithmic framework for studying block-structured stochastic models, including the generalization of the matrix-geometric and matrix-iterative solutions from level independence to level dependence. It is worthwhile to note that such a generalization is neither easy nor simple; it requires new mathematical methods. During the past two decades, the censoring technique has proved to be a key method for dealing with more general block-structured Markov chains. Grassmann and Heyman (1990) first used the censoring technique to find some basic relationships between the matrix-geometric solution and the matrix-iterative solution in a more general model: Markov chains of GI/G/1 type. Furthermore, Heyman (1995) applied the censoring technique to provide an LU-decomposition for any ergodic stochastic matrix of infinite size, Li (1997) gave the LU-decomposition for Markov chains of GI/M/1 type and also for Markov chains of M/G/1 type, and Zhao (2000) obtained the UL-type RG-factorization for Markov chains of GI/M/1 type. From these works, it is clear that finding such matrix decompositions for general Markov chains is a promising direction for the numerical solution of block-structured stochastic models. Along similar lines, we have systematically developed the UL- and LU-types of RG-factorizations for arbitrary irreducible Markov chains over the past ten years, e.g., see Li and Cao (2004), Li and Zhao (2002, 2004) and Li and Liu (2004). This book summarizes many important results and basic relations for block-structured Markov chains by means of the RG-factorizations. The RG-factorizations are derived for any irreducible Markov chain in terms of the Wiener-Hopf equations, and some useful iterative relations among the R-, U- and G-measures are organized through the Wiener-Hopf equations. Specifically, the iterative relations are particularly helpful for performance computation and system decision. On the other hand, this book also provides new probabilistic interpretations for results originally obtained by Neuts' method. We may say that the RG-factorizations begin a new era in the study of block-structured stochastic models, combining algebraic and probabilistic arguments.

The main contribution of this book is the construction of a unified computational framework for studying stochastic models in terms of both stationary and transient solutions. When a practical system is described as a block-structured Markov chain, performance computation and system decision can always be organized as a system of linear equations: $xA = 0$, or $xA = b$ with $b \neq 0$. This book provides two different computational methods to deal with such a system of linear equations. At the same time, it is seen from the computational process that the middle diagonal matrix of the RG-factorizations plays an important role, based on the state classification of Markov chains.
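The censoring technique mentioned above can be illustrated on a small finite chain: watching the chain only while it is in a subset E of states gives the censored transition matrix $P^{E} = P_{EE} + P_{EE^{c}}(I - P_{E^{c}E^{c}})^{-1}P_{E^{c}E}$, which is again stochastic and inherits the stationary measure of the original chain up to normalization. A sketch with illustrative numbers (a hypothetical 4-state chain, not an example from the book):

```python
import numpy as np

# A 4-state irreducible chain with illustrative transition probabilities.
P = np.array([[0.4, 0.2, 0.3, 0.1],
              [0.1, 0.5, 0.2, 0.2],
              [0.3, 0.3, 0.2, 0.2],
              [0.2, 0.2, 0.3, 0.3]])

E, Ec = [0, 1], [2, 3]                       # watched states / censored states
P_EE,  P_EEc  = P[np.ix_(E, E)],  P[np.ix_(E, Ec)]
P_EcE, P_EcEc = P[np.ix_(Ec, E)], P[np.ix_(Ec, Ec)]

# Censored chain: excursions through Ec are collapsed via the Neumann
# series (I - P_EcEc)^{-1} = sum_n (P_EcEc)^n.
P_cens = P_EE + P_EEc @ np.linalg.inv(np.eye(len(Ec)) - P_EcEc) @ P_EcE

def stationary(M):
    """Left Perron vector of a stochastic matrix, normalized to sum 1."""
    w, v = np.linalg.eig(M.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

pi = stationary(P)
pi_E = pi[E] / pi[E].sum()   # stationary mass of the original chain on E
print("censored matrix row sums:", P_cens.sum(axis=1))
print("pi restricted to E:      ", pi_E)
print("stationary of censored:  ", stationary(P_cens))
```

The last two printed vectors coincide, which is exactly the censoring invariance that the RG-factorization approach exploits level by level.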
Method I. In this method the matrix A can be shown to have a UL-type RG-factorization

$A = (I - R_U)\,\mathrm{diag}(\Theta_0, \Theta_1, \Theta_2, \ldots)(I - G_L)$,

where the size of the matrix $\Theta_0$ is always small and finite, corresponding to level 0. This book summarizes two important conclusions: (1) If the block-structured Markov chain is positive recurrent, then the matrix $\Theta_0$ is singular and all the other matrices $\Theta_k$ for $k \geq 1$ are invertible. In this case, the UL-type RG-factorization can be used to solve the system of linear equations $xA = 0$ given in Section 2.4, and the solution then yields stationary performance analysis. (2) If the block-structured Markov chain is transient, then the matrix $\Theta_k$ is invertible for every $k \geq 0$. In this case, the UL-type RG-factorization is used to solve the system of linear equations $xA = b$ with

$x = b\,(I - G_L)^{-1}\,\mathrm{diag}(\Theta_0^{-1}, \Theta_1^{-1}, \Theta_2^{-1}, \ldots)(I - R_U)^{-1}$,

which leads to transient performance analysis.
Method II. In this method the matrix A can be shown to have an LU-type RG-factorization

$A = (I - R_L)\,\mathrm{diag}(\Lambda_0, \Lambda_1, \Lambda_2, \ldots)(I - G_U)$,

where the matrix $\Lambda_k$ is invertible for every $k \geq 0$. Therefore, the LU-type RG-factorization can be used to deal with the system of linear equations $xA = b$ with

$x = b\,(I - G_U)^{-1}\,\mathrm{diag}(\Lambda_0^{-1}, \Lambda_1^{-1}, \Lambda_2^{-1}, \ldots)(I - R_L)^{-1}$,
which further leads to the transient performance analysis of a stochastic model.

This book has grown out of my research and lecture notes on the matrix-analytic methods since 1997. Although I have made an effort to introduce explanations and definitions for the mathematical tools, crucial concepts and basic conclusions used in this book, readers will still benefit from a solid mathematical background, including probability, statistics, Markov chains, Markov renewal processes, Markov decision processes, queueing theory, game theory, matrix analysis and numerical computation. Readers are assumed to be familiar with these basic materials, or at least with parts of them. The organization of this book proceeds logically from performance computation to system decision. The book contains eleven chapters, whose structural relationship is shown in Fig. 0.1. Chapters 1 and 7 introduce motivating examples from different research areas, such as queueing theory, computer networks and manufacturing systems. The examples are first described as block-structured Markov chains, which will help readers to understand the basic structure of practical stochastic models. Chapters 2, 3, 5, 6 and 9 systematically develop the construction of the RG-factorizations for Markov chains, Markov renewal processes and β-discounted transition matrices. Chapters 4, 8, 10 and 11 apply the RG-factorizations to deal with some current interesting topics, including tail analysis, Markov chains on a continuous state space, transient solutions, Markov reward processes, sensitivity analysis and game theory. Finally, we also provide two useful appendices which should help readers to understand the contents of this book. Every chapter consists of a brief summary, a main body and a discussion in "Notes in the Literature".
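For a finite chain with scalar levels (block size 1), the UL-type factorization of Method I reduces to an ordinary UL decomposition $A = U D L$, with U unit upper triangular (playing the role of $I - R_U$), L unit lower triangular (playing the role of $I - G_L$) and D diagonal; $xA = b$ is then solved as $x = b L^{-1} D^{-1} U^{-1}$. The sketch below uses this scalar-level analogy with illustrative numbers; it is not the book's block algorithm:

```python
import numpy as np

# Illustrative finite transient chain: P is strictly substochastic,
# so A = I - P is invertible and xA = b has a unique solution.
P = 0.9 * np.array([[0.2, 0.5, 0.1, 0.2],
                    [0.3, 0.1, 0.4, 0.2],
                    [0.1, 0.2, 0.3, 0.4],
                    [0.4, 0.1, 0.2, 0.3]])
n = P.shape[0]
A = np.eye(n) - P
b = np.array([1.0, 0.0, 0.0, 0.0])

# UL decomposition A = U @ D @ L, computed by Gaussian elimination from
# the last "level" upward (mirroring censoring from high levels down).
B = A.copy()
U, L, D = np.eye(n), np.eye(n), np.zeros((n, n))
for k in range(n - 1, -1, -1):
    D[k, k] = B[k, k]
    U[:k, k] = B[:k, k] / B[k, k]          # analogue of the R-measure
    L[k, :k] = B[k, :k] / B[k, k]          # analogue of the G-measure
    B[:k, :k] -= np.outer(B[:k, k], B[k, :k]) / B[k, k]

# Solve xA = b as x = b L^{-1} D^{-1} U^{-1}; in practice these are two
# triangular solves plus a diagonal scaling, inverses are for clarity.
x = b @ np.linalg.inv(L) @ np.linalg.inv(D) @ np.linalg.inv(U)
print("factorization error:", np.max(np.abs(U @ D @ L - A)))
print("x =", x)
```

The point of the factorization is that each factor is triangular or diagonal, so the solve costs only back-substitutions once U, D and L are known.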
At the same time, every chapter contains a number of problems intended to help readers understand the corresponding concepts, results and conclusions. It is hoped that this book will be useful for first-year graduate students and advanced undergraduates, as well as for researchers and engineers interested in, for example, applied probability, queueing theory, reliability, risk management, insurance and finance, communication networks, manufacturing systems, transportation systems, supply chain management, service operations management, performance evaluation, system decision, and game theory with applications. We suggest a full-semester course with two or three hours per week. Shorter courses can also be based on a subset of the chapters; for instance, engineering students or researchers may study only Chapters 1, 2, 6, 8, 10 and 11.
Figure 0.1 Organization of this book
It is a pleasure to acknowledge Marcel F. Neuts for his pioneering work, which developed an important area: the numerical computation of stochastic models. His comments and suggestions have been valuable for improving the presentation of this book. I also thank Yiqiang Zhao for his cooperation on the RG-factorizations and block-structured stochastic models from 1999 to 2003; in fact, some sections of this book come directly from his work or our collaborative works. Special thanks go to Jinhua Cao, Ming Tan, Naishuo Tian and Dequan Yue, who encouraged me in the study of stochastic models during my master's and Ph.D. programs. I am grateful to my friends J.R. Artalejo, N.G. Bean, Xiuli Chao, A. Dudin, A. Gómez-Corral, Fuzhou Gong, Xianping Guo, Qiming He, Zhengting Hou, Guanghui Hsu, Ninjian Huang, Haijun Li, Wei Li, Zhaotong Lian, Chuang Lin, Ke Liu, Zhiming Ma, Zhisheng Niu, T. Takine, Peter G. Taylor, Jeffery D. Tew, Jingting Wang, Dinghua Shi, Deju Xu, Susan H. Xu, David D. Yao and Hanqin Zhang for their great help and valuable suggestions on the matrix-analytic methods. I am indebted to Xiren Cao, Liming Liu and Shaohui Zheng for their financial support of my visits to the Hong Kong University of Science and Technology in recent years; their valuable comments and suggestions helped me to develop new and interesting fields, such as perturbation analysis and Markov decision processes. I thank my master's and Ph.D. students, including Dejing Chen, Shi Chen, Yajuan Li, Jinqi Wang, Yang Wang, Yitong Wang, Xiaole Wu, Jia Yan and Qinqin Zhang. This book has benefited from the financial support provided by the National Natural Science Foundation of China (Grant Nos. 10671107, 10871114 and 60736028) and the National Grand Fundamental Research (973) Program of China (Grant No. 2006CB805901). I thank all of my colleagues in the Department of Industrial Engineering, Tsinghua University, each of whom has always provided me with an ideal research environment. Finally, special thanks go to my wife Shuang Chen and my son Mingyi Li for their constant understanding and support over many years.

Dr. Quan-Lin Li
Department of Industrial Engineering
Tsinghua University, Beijing, P. R. China
May 30, 2009
Contents
1 Stochastic Models ........................................................................................... 1 1.1 Stochastic Systems ................................................................................. 1 1.1.1 The Markov Property.................................................................. 2 1.1.2 A Discrete-Time Markov Chain with Discrete State Space ........ 2 1.1.3 A Continuous-Time Markov Chain with Discrete Space............ 6 1.1.4 A Continuous-Time Birth Death Process.................................... 8 1.1.5 Block-Structured Markov Chains ............................................... 9 1.2 Motivating Practical Examples............................................................. 12 1.2.1 A Queue with Server Vacations ................................................ 13 1.2.2 A Queue with Repairable Servers ............................................. 14 1.2.3 A Call Center ............................................................................ 15 1.2.4 A Two-Loop Closed Production System................................... 17 1.2.5 An E-mail Queueing System Under Attacks............................. 20 1.3 The QBD Processes .............................................................................. 23 1.3.1 Heuristic Expressions ............................................................... 23 1.3.2 The LU-Type RG-Factorization ................................................ 25 1.3.3 The UL-Type RG-Factorization ................................................ 27 1.3.4 Linear QBD-Equations ............................................................. 29 1.4 Phase-Type Distributions...................................................................... 33 1.4.1 The Exponential Distribution.................................................... 33 1.4.2 The Erlang Distribution ............................................................ 
34 1.4.3 The PH Distribution.................................................................. 35 1.4.4 The Bivariate PH Distribution .................................................. 40 1.4.5 The Multivariate PH Distribution ............................................. 41 1.4.6 The Discrete-Time Multivariate PH Distribution ..................... 42 1.5 The Markovian Arrival Processes......................................................... 43 1.5.1 The Poisson Process.................................................................. 43 1.5.2 The PH Renewal Process .......................................................... 44 1.5.3 The Markovian Modulated Poisson Process............................. 48 1.5.4 The Markovian Modulated PH Process .................................... 49 1.5.5 The Markovian Arrival Processes ............................................. 49 1.5.6 The Batch Markovian Arrival Process ...................................... 52 1.5.7 The Multivariate Markovian Arrival Process............................ 53 1.5.8 The Multivariate Batch Markovian Arrival Process ................. 55 vii
1.6 Matrix-Exponential Distribution .......................................................... 57 1.7 Notes in the Literature .......................................................................... 60 Problems ........................................................................................................ 62 References ..................................................................................................... 65 2 Block-Structured Markov Chains .............................................................. 72 2.1 The Censoring Chains .......................................................................... 73 2.2 The UL-type RG-Factorization............................................................. 76 2.2.1 Level-Dependent Markov Chains of M/G/1 Type..................... 84 2.2.2 Level-Independent Markov Chains of M/G/1 Type .................. 87 2.2.3 Level-Dependent Markov Chains of GI/M/1 Type ................... 88 2.2.4 Level-Independent Markov Chains of GI/M/1 Type................. 89 2.2.5 The QBD Processes .................................................................. 89 2.3 The LU-Type RG-Factorization............................................................ 90 2.3.1 Level-Dependent Markov Chains of M/G/1 Type..................... 94 2.3.2 Level-Dependent Markov Chains of GI/M/1 Type ................... 95 2.3.3 The QBD Processes .................................................................. 95 2.4 The Stationary Probability Vector......................................................... 96 2.5 A- and B-measures ................................................................................ 98 2.6 Markov Chains with Finitely-Many Levels........................................ 109 2.6.1 The UL-Type RG-Factorization .............................................. 109 2.6.2 The LU-Type RG-Factorization .............................................. 
110 2.6.3 The Stationary Probability Vector........................................... 113 2.7 Continuous-Time Markov Chains ...................................................... 114 2.7.1 The UL-type RG-factorization................................................ 115 2.7.2 The LU-Type RG-Factorization .............................................. 119 2.7.3 The Stationary Probability Vector........................................... 123 2.8 Notes in the Literature ........................................................................ 124 Problems ...................................................................................................... 126 References ................................................................................................... 128 3 Markov Chains of GI/G/1 Type ................................................................. 131 3.1 Markov Chains of GI/G/1 Type .......................................................... 132 3.2 Dual Markov Chains........................................................................... 145 3.3 The A- and B-Measures ...................................................................... 148 3.4 Spectral Analysis ................................................................................ 158 3.5 Distribution of the Minimal Positive Root ......................................... 165 3.5.1 The Positive Recurrence ......................................................... 165 3.5.2 The Null Recurrence ............................................................... 167 3.5.3 The Transience........................................................................ 167 3.5.4 The Minimal Positive Root..................................................... 167 3.6 Continuous-time Chains ..................................................................... 170 viii
3.7 Notes in the Literature ........................................................................ 172 Problems ...................................................................................................... 173 References ................................................................................................... 174 4 Asymptotic Analysis ................................................................................... 176 4.1 A Necessary and Sufficient Condition................................................ 177 4.2 Three Asymptotic Classes of {ʌ k } ..................................................... 183 4.3 The Asymptotics Based on the Solution K ........................................ 185 4.3.1 A is Irreducible........................................................................ 185 4.3.2 Markov Chains of GI/M/1 Type.............................................. 190 4.3.3 Markov Chains of M/G/1 Type ............................................... 190 4.3.4 A is Reducible ......................................................................... 191 4.4 The Asymptotics Based on the Boundary Matrices............................ 192 4.4.1 I D is a Pole............................................................................ 193 4.4.2 I D is an Algebraic Singular Point.......................................... 194 4.5 Long-Tailed Asymptotics of the Sequence {Rk}................................. 198 4.6 Subexponential Asymptotics of {ʌ k } ................................................. 205 4.6.1 Markov Chains of M/G/1 Type ............................................... 208 4.6.2 Regularly Varying Asymptotics of {ʌ k } ................................. 209 4.7 Notes in the Literature ........................................................................ 209 Problems ...................................................................................................... 
211 References ................................................................................................... 213 5 Markov Chains on Continuous State Space ............................................ 216 5.1 Discrete-Time Markov Chains............................................................ 217 5.1.1 Markov Chains of GI/G/1 Type .............................................. 220 5.1.2 Markov Chains of GI/M/1 Type.............................................. 220 5.1.3 Markov Chains of M/G/1 Type ............................................... 220 5.1.4 QBD Processes ....................................................................... 221 5.2 The RG-Factorizations........................................................................ 221 5.2.1 The UL-Type RG-Factorization .............................................. 222 5.2.2 The LU-Type RG-Factorization .............................................. 223 5.2.3 The Stationary Probability Distribution.................................. 224 5.2.4 Markov Chains of GI/G/1 Type .............................................. 226 5.2.5 Markov Chains of GI/M/1 Type.............................................. 226 5.2.6 Markov Chains of M/G/1 Type ............................................... 227 5.2.7 QBD Processes ....................................................................... 228 5.2.8 An Algorithmic Framework.................................................... 228 5.3 The GI/G/1 Queue .............................................................................. 231 5.3.1 Constructing a Markov Chain of GI/M/1 Type ....................... 231 5.3.2 Constructing a Markov Chain of M/G/1 Type ........................ 235 5.4 Continuous-Time Markov Chains ...................................................... 237 ix
5.5 The QBD Processes ............................................................................ 241 5.5.1 The UL-Type RG-Factorization .............................................. 243 5.5.2 The LU-Type RG-Factorization .............................................. 248 5.6 Structured Matrix Expressions ........................................................... 252 5.7 A CMAP/CPH/1 Queue ...................................................................... 263 5.7.1 The CPH Distribution ............................................................. 263 5.7.2 The CMAP.............................................................................. 264 5.7.3 The CMAP/CPH/1 Queue....................................................... 266 5.8 Piecewise Deterministic Markov Processes ....................................... 267 5.8.1 Semi-Dynamic Systems .......................................................... 267 5.8.2 The I -Memoryless Distribution Family ................................. 269 5.8.3 Time Shift I -Invariant Transition Kernel ............................... 273 5.8.4 Piecewise Deterministic Markov Processes............................ 274 5.8.5 The Stationary Distribution..................................................... 275 5.8.6 The GI/G/k Queue................................................................... 279 5.9 Notes in the Literature ........................................................................ 284 Problems ...................................................................................................... 285 References ................................................................................................... 286 6 Block-Structured Markov Renewal Processes ......................................... 288 6.1 The Censoring Markov Renewal Processes........................................ 289 6.2 The UL-Type RG-Factorization.......................................................... 
294 6.2.1 Level-Dependent Markov Renewal Processes of M/G/1 Type ........................................................................ 302 6.2.2 Level-Dependent Markov Renewal Processes of GI/M/1 Type ....................................................................... 303 6.2.3 Markov Renewal Equations.................................................... 304 6.3 The LU-Type RG-Factorization.......................................................... 305 6.4 Finite Levels ....................................................................................... 308 6.4.1 The UL-Type RG-Factorization .............................................. 309 6.4.2 The LU-Type RG-Factorization .............................................. 310 6.5 Markov Renewal Processes of GI/G/1 Type....................................... 311 6.6 Spectral Analysis ................................................................................ 317 6.7 The First Passage Times ..................................................................... 321 6.7.1 An Algorithmic Framework.................................................... 321 6.7.2 Markov Renewal Processes of GI/G/1 Type ........................... 322 6.8 Notes in the Literature ........................................................................ 326 Problems ...................................................................................................... 327 References ................................................................................................... 328 7 Examples of Practical Applications .......................................................... 331 7.1 Processor-Sharing Queues .................................................................. 332 x
7.2 Fluid Queues....................................................................................... 338 7.3 A Queue with Negative Customers..................................................... 345 7.3.1 The Supplementary Variables ................................................. 346 7.3.2 A Markov Chain of GI/G/1 Type ............................................ 348 7.3.3 The Stationary Queue Length ................................................. 355 7.3.4 The Busy Period...................................................................... 356 7.4 A Repairable Retrial Queue ................................................................ 361 7.4.1 The Supplementary Variables ................................................. 362 7.4.2 A Level-Dependent Markov Chain of M/G/1 Type ................ 364 7.4.3 The Stationary Performance Measures ................................... 371 7.5 Notes in the Literature ........................................................................ 375 7.5.1 The Processor-Sharing Queues ............................................... 375 7.5.2 The Fluid Queues.................................................................... 376 7.5.3 The Queues with Negative Customers.................................... 377 7.5.4 The Retrial Queues ................................................................. 378 Problems ...................................................................................................... 379 References ................................................................................................... 381 8 Transient Solution ...................................................................................... 389 8.1 Transient Probability .......................................................................... 390 8.1.1 Discrete-Time Markov Chains................................................ 390 8.1.2 An Approximate Algorithm .................................................... 
392 8.1.3 Continuous-Time Markov Chains........................................... 395 8.2 The First Passage Times ..................................................................... 401 8.2.1 Discrete-Time GPH Distribution ............................................ 402 8.2.2 Continuous-Time GPH Distribution ....................................... 406 8.2.3 GMAPs ................................................................................... 409 8.2.4 Time-Inhomogeneous PH(t) Distribution ............................... 410 8.2.5 Time-Inhomogeneous MAP (t) ............................................... 411 8.2.6 A Time-Inhomogeneous MAP(t)/PH(t)/1 Queue................... 411 8.3 The Sojourn Times.............................................................................. 412 8.3.1 Discrete-Time Markov Chains................................................ 412 8.3.2 Continuous-Time Markov Chains........................................... 417 8.4 Time-Inhomogeneous Discrete-Time Models .................................... 420 8.4.1 The Transient Probability Vector ............................................ 421 8.4.2 The Asymptotic Periodic Distribution .................................... 422 8.5 Notes in the Literature ........................................................................ 426 Problems ...................................................................................................... 427 References ................................................................................................... 428 9 Quasi-Stationary Distributions ................................................................. 432 9.1 Finitely-Many Levels ......................................................................... 433 xi
9.1.1 The UL-Type RG-Factorization .............................. 434 9.1.2 The LU-Type RG-Factorization .............................. 435 9.1.3 State α-Classification and Quasi-stationary Distribution ....... 437 9.2 Infinitely-Many Levels ....................................................... 438 9.2.1 The UL-Type RG-Factorization .............................. 439 9.2.2 Two Sets of Expressions ......................................... 440 9.2.3 The LU-Type RG-Factorization .............................. 443 9.3 Markov Chains of M/G/1 Type........................................... 447 9.3.1 The UL-Type RG-Factorization .............................. 447 9.3.2 The State α-Classification ...................................... 450 9.3.3 Two Sets of Expressions ......................................... 452 9.3.4 Conditions for α-Positive Recurrence .................... 455 9.4 Markov Chains of GI/M/1 Type.......................................... 457 9.4.1 Spectral Analysis..................................................... 461 9.4.2 Two Sets of Expressions ......................................... 468 9.4.3 Conditions for α-Positive Recurrence .................... 478 9.5 Markov Chains of GI/G/1 Type .......................................... 481 9.6 Level-Dependent QBD Processes....................................... 490 9.6.1 The UL-Type RG-Factorization .............................. 491 9.6.2 Conditions for α-Positive Recurrence .................... 497 9.7 Continuous-Time Markov Chains ...................................... 500 9.7.1 The UL-Type RG-Factorization ..............................
502 9.7.2 The LU-Type RG-Factorization .............................................. 506 9.8 Decay Rate for the GPH Distribution................................................. 507 9.8.1 The Discrete-Time PH Distribution with Finitely Many Phases ............................................................. 507 9.8.2 The Discrete-Time GPH Distribution with Infinitely-many Phases ........................................................... 510 9.8.3 The Level-Dependent QBD Processes.................................... 511 9.8.4 The Continuous-Time GPH Distribution ................................ 513 9.8.5 The Level-Dependent Markov Chains of M/G/1 Type ........... 514 9.8.6 The Level-Dependent Markov Chains of GI/M/1 Type .......... 516 9.9 QBD Processes with Infinitely-Many Phases..................................... 517 9.10 Notes in the Literature ...................................................................... 521 Problems ...................................................................................................... 522 References ................................................................................................... 523 10 Markov Reward Processes ...................................................................... 526 10.1 Continuous-Time Markov Reward Processes ................................. 527 10.1.1 The Expected Instantaneous Reward Rate at Time t ......... 529 10.1.2 The nth Moment of the Instantaneous Reward Rate at Time t ....................................................... 529 xii
10.1.3 The Distribution of the Instantaneous Reward Rate at Time t ....................................................... 529 10.1.4 The Accumulated Reward Over [0,t) ................................ 530 10.1.5 The Expected Accumulated Reward Φ(t) Over [0, t) ........ 530 10.1.6 The nth Moment of the Accumulated Reward Φ(t) Over [0,t) ..................................................... 530 10.2 The Transient Accumulated Rewards.............................. 531 10.3 The First Accumulated Time........................................... 534 10.4 Computation of the Reward Moments ............................ 536 10.4.1 The Moments of the Transient Accumulated Reward ....... 536 10.4.2 The Moments of the First Accumulated Time ................... 538 10.5 Accumulated Reward in a QBD Process ........................ 542 10.6 An Up-Type Reward Process in Finitely-Many Levels .................. 548 10.7 An Up-Type Reward Process in Infinitely-Many Levels................ 554 10.8 A Down-Type Reward Process ....................................... 560 10.9 Discrete-Time Markov Reward Processes ...................... 565 10.10 Notes in the Literature................................................... 568 Problems .................................................................................... 568 References.................................................................................. 570 11 Sensitivity Analysis and Evolutionary Games ....................................... 574 11.1 Perturbed Discrete-Time Markov Chains........................ 575 11.1.1 Markov Chains with Finitely-Many Levels....................... 575 11.1.2 Markov Chains with Infinitely-Many Levels .................... 579 11.1.3 The Realization Matrix and Potential Vector.....................
581 11.1.4 The Censored Structure in Sensitivity Analysis ................ 582 11.1.5 The Transient Performance Measure................................. 584 11.1.6 The Discounted Performance Measure.............................. 584 11.2 Two Important Markov Chains ....................................................... 584 11.2.1 Perturbed Markov Chains of GI/M/1 Type ........................ 585 11.2.2 Perturbed Markov Chains of M/G/1 Type ......................... 588 11.3 Perturbed Continuous-Time Markov Chains................................... 592 11.4 Perturbed Accumulated Reward Processes ..................................... 597 11.5 A Perturbed MAP/PH/1 Queue ....................................................... 600 11.5.1 A Perturbed PH Distribution ............................................. 600 11.5.2 A Perturbed MAP .............................................................. 601 11.5.3 A Perturbed MAP/PH/1 Queue.......................................... 602 11.6 Symmetric Evolutionary Games ..................................................... 605 11.7 Constructively Perturbed Birth Death Process................................ 618 11.7.1 An Embedded Chain.......................................................... 618 11.7.2 A Probabilistic Construction.............................................. 622 xiii
11.8 Asymmetric Evolutionary Games ................................... 626 11.8.1 A 2×2 Game with Independent Structure ......................... 626 11.8.2 A 2×2 Game with Dependent Structure............................ 631 11.8.3 A 2×2 Game with Information Interaction ....................... 636 11.8.4 A 3×2 Asymmetric Evolutionary Game ........................... 640 11.9 Notes in the Literature..................................................... 645 Problems .................................................................................... 646 References.................................................................................. 647 Appendix ........................................................................................... 652 Appendix A Matrix Notation and Computation...................................... 652 A.1 Kronecker Product................................................................. 652 A.2 Perron-Frobenius Theory....................................................... 653 A.3 Inverses of Matrices of Infinite Size ..................................... 654 References ....................................................................................... 658 Appendix B Heavy-Tailed Distributions................................................. 658 References ....................................................................................... 667 Index.................................................................................................................. 669
1 Stochastic Models
Quan-Lin Li Department of Industrial Engineering Tsinghua University Beijing 100084, P.R. China E-mail:
[email protected] Abstract In this chapter, we provide an introduction to Markov chains, practical examples of block-structured Markov chains, QBD processes, the UL- and LU-types of RG-factorizations for QBD processes, the phase-type (PH) distribution, the Markovian arrival process (MAP) and the matrix-exponential distribution. We list some necessary definitions and useful results, which are basic and crucial preliminaries in the study of stochastic models. Keywords stochastic model, block-structured Markov chain, QBD process, RG-factorization, phase-type distribution, Markovian arrival process, matrix-exponential distribution.
In this chapter, we provide some basic and useful preliminaries for the study of stochastic models. This includes a brief introduction to Markov chains with discrete state space, motivating practical examples showing how to construct block-structured Markov chains, the censoring technique applied to a QBD process with either finitely-many or infinitely-many levels, the UL- and LU-types of RG-factorizations for the QBD process, the PH distribution, the Markovian arrival process and the matrix-exponential distribution. These contents are organized in seven sections. We mainly list the main definitions and results without proofs; readers may refer to Neuts [92,94] and others for the proofs, if necessary.
1.1 Stochastic Systems
In this section, we show that the Markov chain is a useful mathematical tool in the study of stochastic systems. We provide some useful discussions of Markov chains with discrete state space that are described as different types of block-structured Markov chains. These discussions are useful for understanding the rest of this book.
Constructive Computation in Stochastic Models with Applications
Modern science and technology have created a comfortable life and suitable working spaces for us. We use various natural or man-made systems on a daily basis; important examples of such systems include manufacturing systems, communication networks and transportation systems. Ordinary observation readily shows that random factors are present in all of these different systems. Thus it is both necessary and important to study real systems under stochastic conditions. To this end, we now provide a brief introduction to stochastic processes and, specifically, Markov chains. Let X(t), which is either a scalar or a vector, be the state of a stochastic system at time t. Then {X(t) : t ≥ 0} is in general a stochastic process. A stochastic process is a family of random variables X(t) for t ∈ T, where T is a non-null set. If T = [0, +∞) or [0, a] with a > 0, then the stochastic process is called continuous-time; if T = {0, 1, 2, ...}, then the stochastic process is called discrete-time, and in this case we write the process as {X(t)} or {X_t}. On the other hand, a stochastic process is also distinguished by its states, each denoted as a real number. The range of possible state values of the random variables X(t) for t ∈ T is called the state space Ω. If Ω = [0, +∞) or [0, b] with b > 0, then the stochastic process is said to have a continuous state space; if Ω = {0, 1, 2, ...} or {0, 1, 2, ..., M} with M > 0, then the stochastic process is said to have a discrete state space. In general, a stochastic process has a complicated behavior that is difficult to analyze in detail. For practical applications, we only need to consider a subset of the stochastic processes described above.
1.1.1 The Markov Property
A discrete-time stochastic process has the Markov property if
\[ P\{X_{n+1}=j \mid X_n=i_n, X_{n-1}=i_{n-1}, \ldots, X_0=i_0\} = P\{X_{n+1}=j \mid X_n=i_n\}. \]
Similarly, a continuous-time stochastic process has the Markov property if
\[ P\{X(t+u)=j \mid X(s),\ 0 \le s \le u\} = P\{X(t+u)=j \mid X(u)\}. \]
Based on the Markov property, we consider an important type of stochastic process: Markov chains on a discrete state space, which are either discrete-time or continuous-time.
1.1.2 A Discrete-Time Markov Chain with Discrete State Space
Let {X_n} be a discrete-time Markov chain with discrete state space, that is, T = {0, 1, 2, ...} and Ω = {0, 1, 2, ...}. The probability that X_{n+1} is in state j, given that X_n is in state i, is called the one-step transition probability and is denoted by
\[ P_{i,j}^{n,n+1} = P\{X_{n+1} = j \mid X_n = i\}. \]
If all the one-step transition probabilities are independent of the time n ≥ 0, then the Markov chain is said to be homogeneous; otherwise it is said to be nonhomogeneous. In this book, we mainly analyze homogeneous Markov chains for simplicity of description. In this case, let P_{i,j} = P_{i,j}^{n,n+1} for all i, j ∈ Ω. We write①
\[ P = \begin{pmatrix} P_{0,0} & P_{0,1} & P_{0,2} & \cdots \\ P_{1,0} & P_{1,1} & P_{1,2} & \cdots \\ P_{2,0} & P_{2,1} & P_{2,2} & \cdots \\ \vdots & \vdots & \vdots & \end{pmatrix}. \]
Let $P_{i,j}^{(n)} = P\{X_n = j \mid X_0 = i\}$ and $P^{(n)} = \bigl(P_{i,j}^{(n)}\bigr)_{i,j \ge 0}$. Then it is easy to check that $P^{(n)} = P^n$ for each n ≥ 2.
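For a chain with finitely many states, the identity $P^{(n)} = P^n$ can be checked numerically. The following sketch (the two-state matrix is an arbitrary illustrative example of ours) compares the two-step probabilities obtained by conditioning on the intermediate state with the matrix square:

```python
import numpy as np

# An illustrative two-state transition matrix (rows sum to one).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Two-step transition probabilities by conditioning on the intermediate state:
# P2[i, j] = sum_k P[i, k] * P[k, j], which is exactly the matrix product P @ P.
P2 = np.array([[sum(P[i, k] * P[k, j] for k in range(2)) for j in range(2)]
               for i in range(2)])

assert np.allclose(P2, np.linalg.matrix_power(P, 2))
```

The same comparison works for any n by replacing the exponent in `matrix_power`.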
State j is said to be accessible from state i if $P_{i,j}^{(n)} > 0$ for some integer n ≥ 1, denoted as i → j. Two states i and j are said to communicate if each one is accessible from the other, written as i ↔ j. It is clear that communication is an equivalence relation: (D1) reflexivity: i ↔ i; (D2) symmetry: if i ↔ j, then j ↔ i; (D3) transitivity: if i ↔ j and j ↔ k, then i ↔ k. We can therefore partition the state space Ω into equivalence classes C_k for k = 1, 2, ..., K. Then $\Omega = \bigcup_{k=1}^{K} C_k$, where $C_i \cap C_j = \varnothing$ for i ≠ j. Within each equivalence class, all the states communicate with each other. If the state space Ω has only one equivalence class, then the Markov chain is said to be irreducible; otherwise it is said to be reducible. We now define the period of state i, written d(i), to be the greatest common divisor (g.c.d.) of all integers n ≥ 1 for which $P_{i,i}^{(n)} > 0$. Specifically, if $P_{i,i}^{(n)} = 0$ for all n ≥ 1, then d(i) = 0. In an equivalence class C ⊂ Ω, it is easy to prove that d(i) = d(j) for all i, j ∈ C. Furthermore, for an irreducible Markov chain, we always have d(i) = d(j) for all i, j ∈ Ω. If d(i) = 1, then state i is said to be aperiodic; otherwise it is periodic. An irreducible Markov chain is said to be aperiodic if there exists an aperiodic state, and to have period d if there exists a state of period d.
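For a finite chain, the communication relation can be computed from the reachability matrix. A minimal sketch (the three-state example and function name are ours):

```python
import numpy as np

def communicating_classes(P):
    """Partition {0,...,n-1} into the communicating classes of the chain P."""
    n = len(P)
    # reach[i, j] is True when j is accessible from i in some number of steps.
    reach = np.eye(n, dtype=bool) | (np.asarray(P) > 0)
    for _ in range(n):  # transitive closure by repeated boolean squaring
        reach = reach | ((reach.astype(int) @ reach.astype(int)) > 0)
    comm = reach & reach.T          # i <-> j : mutual accessibility
    classes, seen = [], set()
    for i in range(n):
        if i not in seen:
            cls = {j for j in range(n) if comm[i, j]}
            classes.append(sorted(cls))
            seen |= cls
    return classes

# States 0 and 1 communicate; state 2 is absorbing, so {2} is its own class.
P = [[0.5, 0.5, 0.0],
     [0.2, 0.7, 0.1],
     [0.0, 0.0, 1.0]]
print(communicating_classes(P))   # → [[0, 1], [2]]
```

The chain is irreducible exactly when a single class is returned.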
① From this section on, vectors and matrices are not written in boldface.
In what follows we provide the state classification of Markov chains. We write
\[ f_{i,i}^{(n)} = P\{X_n = i,\ X_v \ne i \text{ for all } v = 1, 2, \ldots, n-1 \mid X_0 = i\}. \]
Clearly, $f_{i,i}^{(1)} = P_{i,i}$ and, for n ≥ 1 (with the conventions $f_{i,i}^{(0)} = 0$ and $P_{i,i}^{(0)} = 1$),
\[ P_{i,i}^{(n)} = \sum_{k=0}^{n} f_{i,i}^{(k)} P_{i,i}^{(n-k)}. \]
Let
\[ f_{i,i} = \sum_{n=0}^{\infty} f_{i,i}^{(n)}. \]
Then we say that a state i is recurrent if $f_{i,i} = 1$; otherwise it is transient. The following proposition provides a necessary and sufficient condition under which a state i is recurrent. The proof is standard and is omitted here.
Proposition 1.1 A state i is recurrent if and only if $\sum_{n=1}^{\infty} P_{i,i}^{(n)} = \infty$; a state i is transient if and only if $\sum_{n=1}^{\infty} P_{i,i}^{(n)} < \infty$.
It is easy to see that in an equivalence class C ⊂ Ω, if a state i ∈ C is recurrent, then each state j ∈ C is also recurrent. Furthermore, for an irreducible Markov chain, if a state i ∈ Ω is recurrent, then each state j ∈ Ω is also recurrent. The same discussion is valid for the transient case. We can further classify the recurrent states into two subsets: positive recurrent and null recurrent. To do that, we write
\[ m_i = \sum_{n=0}^{\infty} n f_{i,i}^{(n)}. \]
We say that a state i is positive recurrent if $m_i < +\infty$; otherwise it is null recurrent. In an equivalence class C ⊂ Ω, it is easy to prove that if a state i ∈ C is positive recurrent, then each state j ∈ C is also positive recurrent. Furthermore, for an irreducible Markov chain, if a state i ∈ Ω is positive recurrent, then each state j ∈ Ω is also positive recurrent. The same discussion is valid for the null recurrent case. For an irreducible Markov chain, we now consider the limiting probabilities
\[ \pi_i = \lim_{n\to\infty} P_{i,i}^{(n)} \]
with the normalization condition $\sum_{i\in C} \pi_i = 1$ or $\sum_{i\in\Omega} \pi_i = 1$. We state the basic limit theorem for Markov chains as follows.
Theorem 1.1 For an irreducible, aperiodic and positive recurrent Markov chain, there exists a unique limiting distribution vector π = (π_0, π_1, π_2, ...) with $\pi_i = 1/m_i$ for all i ∈ Ω. At the same time,
\[ \lim_{n\to\infty} P_{i,j}^{(n)} = \pi_j \]
is independent of i ∈ Ω. If the Markov chain is either null recurrent or transient, then π = (π_0, π_1, π_2, ...) = 0. If the Markov chain with period d is irreducible and positive recurrent, then
\[ \lim_{n\to\infty} P_{i,j}^{(nd)} = \pi_j(d), \]
which is independent of the initial state i ∈ Ω.
For an irreducible, aperiodic and positive recurrent Markov chain, it is clear that
\[ \lim_{n\to\infty} P^n = \lim_{n\to\infty} P^{(n)} = e\pi, \]
where e is a column vector of ones of suitable size. Thus
\[ e\pi P = \lim_{n\to\infty} P^n P = \lim_{n\to\infty} P^{n+1} = e\pi, \]
which obviously leads to πP = π and πe = 1. That is, the limiting distribution vector π of the Markov chain P is the unique positive solution to the system of equations πP = π and πe = 1. For an irreducible finite-state Markov chain P, it is either positive recurrent or transient. When P is positive recurrent, the matrix I − P + eπ is invertible and is called the fundamental matrix of the Markov chain; when P is transient, the matrix I − P is invertible. For an irreducible finite-state Markov chain P, the limit
\[ \lim_{N\to\infty} \frac{1}{N+1} \sum_{n=0}^{N} P^n \]
must exist and is called the Cesaro limit. We denote the Cesaro limit by P*. Specifically, P* = eπ if P is aperiodic and positive recurrent. For the Cesaro limit, it is easy to check that PP* = P*P = P*P* = P* and P^n − P* = (P − P*)^n for n ≥ 1.
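As a numerical illustration of πP = π and πe = 1 (the matrix below is our own toy example), the stationary vector of a finite chain can be obtained by solving this linear system directly:

```python
import numpy as np

def stationary_distribution(P):
    """Solve pi P = pi together with pi e = 1 for a finite irreducible chain."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    # Transpose pi P = pi into (P^T - I) pi^T = 0 and append the
    # normalization condition pi e = 1 as an extra equation.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
pi = stationary_distribution(P)
print(pi)                         # → [0.8 0.2]
assert np.allclose(pi @ P, pi)
# For an aperiodic positive recurrent chain, P^n converges to e*pi:
assert np.allclose(np.linalg.matrix_power(P, 200), np.outer(np.ones(2), pi))
```

The last assertion checks the rank-one limit $\lim_{n\to\infty} P^n = e\pi$ stated above.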
1.1.3 A Continuous-Time Markov Chain with Discrete State Space
We denote by {X(t)} a continuous-time Markov chain with T = [0, +∞) and Ω = {0, 1, 2, ...}. We write
\[ P_{i,j}(t) = P\{X(t+u) = j \mid X(u) = i\}, \]
which is independent of u ≥ 0. The Markov property asserts that $P_{i,j}(t)$ satisfies
(C1) $P_{i,j}(t) \ge 0$ for all i, j ∈ Ω;
(C2) $\sum_{j\in\Omega} P_{i,j}(t) = 1$ for all i ∈ Ω;
(C3) $P_{i,j}(s+t) = \sum_{k\in\Omega} P_{i,k}(s) P_{k,j}(t)$ for all s, t ≥ 0 (the Chapman-Kolmogorov relation); and
(C4) $\lim_{t\to 0} P_{i,j}(t) = 1$ if i = j, and $= 0$ if i ≠ j.
Let
\[ P(t) = \begin{pmatrix} P_{0,0}(t) & P_{0,1}(t) & P_{0,2}(t) & \cdots \\ P_{1,0}(t) & P_{1,1}(t) & P_{1,2}(t) & \cdots \\ P_{2,0}(t) & P_{2,1}(t) & P_{2,2}(t) & \cdots \\ \vdots & \vdots & \vdots & \end{pmatrix}. \]
From (C1) to (C4), it is easy to check that
\[ \lim_{h\to 0} P(t+h) = P(t). \]
Therefore, P(t) is a matrix of continuous functions of t ≥ 0. Actually, P(t) is a matrix of differentiable functions of t ≥ 0, in which the limits
\[ \lim_{h\to 0^+} \frac{1 - P_{i,i}(h)}{h} = q_i, \quad i \in \Omega, \]
and
\[ \lim_{h\to 0^+} \frac{P_{i,j}(h)}{h} = q_{i,j}, \quad i \ne j \text{ and } i, j \in \Omega, \]
exist, where $0 \le q_i < +\infty$ for i ∈ Ω, and $0 \le q_{i,j} < +\infty$ for i ≠ j and i, j ∈ Ω. Note that
\[ 1 - P_{i,i}(h) = \sum_{j\in\Omega\setminus\{i\}} P_{i,j}(h), \]
which leads to, for i ∈ Ω,
\[ q_i = \sum_{j\in\Omega\setminus\{i\}} q_{i,j}. \]
At the same time, the rates $q_i$ and $q_{i,j}$ furnish an infinitesimal description as follows: for h > 0 sufficiently small, i ≠ j and i, j ∈ Ω,
\[ P\{X(t+h) = j \mid X(t) = i\} = q_{i,j}\, h + o(h), \]
and for i ∈ Ω,
\[ P\{X(t+h) = i \mid X(t) = i\} = 1 - q_i h + o(h). \]
In contrast to the infinitesimal description, the sojourn description is given as follows: if the Markov chain is in state i, then it sojourns there for a duration that is exponentially distributed with parameter $q_i$, and then jumps to the next state j ≠ i with probability $q_{i,j}/q_i$. We write
\[ Q = \begin{pmatrix} -q_0 & q_{0,1} & q_{0,2} & q_{0,3} & \cdots \\ q_{1,0} & -q_1 & q_{1,2} & q_{1,3} & \cdots \\ q_{2,0} & q_{2,1} & -q_2 & q_{2,3} & \cdots \\ q_{3,0} & q_{3,1} & q_{3,2} & -q_3 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \end{pmatrix}, \]
which is called the infinitesimal generator of the continuous-time Markov chain. Clearly,
\[ \lim_{h\to 0^+} \frac{P(h) - I}{h} = Q. \]
Noting that
\[ \frac{P(t+h) - P(t)}{h} = \frac{P(h) - I}{h}\, P(t) = P(t)\, \frac{P(h) - I}{h}, \]
we obtain
\[ P'(t) = QP(t) = P(t)Q, \]
where
\[ P'(t) = \frac{\mathrm{d}}{\mathrm{d}t} P(t) = \lim_{h\to 0^+} \frac{P(t+h) - P(t)}{h}. \]
Therefore
\[ P(t) = \exp\{Qt\} \]
by means of the initial condition P(0) = I. The continuous-time Markov chain may also be called the Markov chain Q. By an analysis similar to that for discrete-time Markov chains, we can introduce the basic concepts for the continuous-time case, such as irreducibility and reducibility, and aperiodicity and periodicity. At the same time, we can also give the state classification: transience, null recurrence and positive recurrence. If the continuous-time Markov chain is irreducible, aperiodic and positive recurrent, then the limits $\lim_{t\to\infty} P_{i,j}(t)$ exist and are independent of the initial state i ∈ Ω; they are denoted as $\pi_j$. Let π = (π_0, π_1, π_2, ...). Then
\[ \lim_{t\to\infty} P(t) = \lim_{t\to\infty} \exp\{Qt\} = e\pi, \]
and clearly, the limiting distribution vector is the unique positive solution to the system of equations πQ = 0 and πe = 1.
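These two characterizations can be checked numerically for a finite generator. The sketch below (with an arbitrary two-state generator of our own) solves πQ = 0, πe = 1 and compares the result with the long-run value of P(t) = exp{Qt}, where the matrix exponential is evaluated by a plain scaling-and-squaring Taylor scheme:

```python
import numpy as np

def expm_taylor(Q, t, terms=30):
    """P(t) = exp{Qt}: scale so ||Qt/2^s|| <= 1, apply a Taylor series, square back."""
    A = np.asarray(Q, dtype=float) * t
    s = max(0, int(np.ceil(np.log2(max(1e-16, np.linalg.norm(A, 1))))))
    A = A / (2 ** s)
    P, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        P = P + term
    for _ in range(s):
        P = P @ P
    return P

def stationary_ctmc(Q):
    """Solve pi Q = 0 with pi e = 1 for a finite irreducible generator."""
    Q = np.asarray(Q, dtype=float)
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Two-state generator: rows sum to zero.
Q = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])
pi = stationary_ctmc(Q)
print(pi)                                   # → approximately [0.3333 0.6667]
# For large t, every row of P(t) approaches pi, i.e. P(t) -> e*pi.
Pt = expm_taylor(Q, 20.0)
assert np.allclose(Pt, np.outer(np.ones(2), pi), atol=1e-8)
```

In production code one would use a library matrix exponential instead of this hand-rolled scheme; the point here is only to visualize $\lim_{t\to\infty}\exp\{Qt\} = e\pi$.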
1.1.4 A Continuous-Time Birth Death Process
As a useful example, we now discuss a special continuous-time Markov chain: a continuous-time birth death process, whose infinitesimal generator is given by
\[ Q = \begin{pmatrix} -\lambda_0 & \lambda_0 & & & \\ \mu_1 & -(\lambda_1+\mu_1) & \lambda_1 & & \\ & \mu_2 & -(\lambda_2+\mu_2) & \lambda_2 & \\ & & \mu_3 & -(\lambda_3+\mu_3) & \lambda_3 \\ & & & \ddots & \ddots & \ddots \end{pmatrix}, \]
where $\lambda_i > 0$ for i ≥ 0 and $\mu_j > 0$ for j ≥ 1. If the birth death process Q is irreducible, aperiodic and positive recurrent, then the limiting distribution is given by
\[ \pi_j = \frac{\xi_j}{\sum_{k=0}^{\infty} \xi_k}, \quad j \ge 0, \]
where
\[ \xi_0 = 1, \qquad \xi_k = \xi_{k-1} \frac{\lambda_{k-1}}{\mu_k} = \frac{\lambda_0 \lambda_1 \cdots \lambda_{k-1}}{\mu_1 \mu_2 \cdots \mu_k}, \quad k \ge 1. \]
If $\lambda_0 = 0$, $\lambda_j > 0$ and $\mu_j > 0$ for j ≥ 1, then it is clear that the birth death process is transient with absorbing state 0. We denote by $g_m$ the probability of absorption into state 0 from the initial state m for m ≥ 1. Then we have
\[ g_m = \frac{\sum_{i=m}^{\infty} \eta_i}{\sum_{i=0}^{\infty} \eta_i}, \quad m \ge 1, \]
where
\[ \eta_0 = 1, \qquad \eta_k = \eta_{k-1} \frac{\mu_k}{\lambda_k} = \frac{\mu_1 \mu_2 \cdots \mu_k}{\lambda_1 \lambda_2 \cdots \lambda_k}, \quad k \ge 1. \]
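For the special case $\lambda_i \equiv \lambda$ and $\mu_j \equiv \mu$ with $\rho = \lambda/\mu < 1$ (the M/M/1 queue), the recursion gives $\xi_k = \rho^k$, so $\pi_j = (1-\rho)\rho^j$. A small sketch (truncation level chosen by us) evaluates the recursion on a truncated state space and compares it with this geometric form:

```python
# Stationary distribution of a birth-death process from the xi-recursion,
# truncated at a level high enough for the tail to be negligible.
def bd_stationary(lam, mu, levels):
    """lam(i), mu(j): birth/death rate functions; returns pi_0..pi_{levels-1}."""
    xi = [1.0]
    for k in range(1, levels):
        xi.append(xi[-1] * lam(k - 1) / mu(k))
    total = sum(xi)
    return [x / total for x in xi]

# M/M/1 with rho = 0.5: pi_j should be (1 - rho) * rho**j.
rho = 0.5
pi = bd_stationary(lambda i: 1.0, lambda j: 2.0, levels=60)
for j in range(5):
    assert abs(pi[j] - (1 - rho) * rho**j) < 1e-12
print(pi[:3])   # → approximately [0.5, 0.25, 0.125]
```

The truncation is justified because the geometric tail beyond level 60 is below machine precision for this ρ.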
1.1.5 Block-Structured Markov Chains
A practical stochastic system usually induces a special block structure in its associated Markov chain. This motivates us to constructively develop the numerical computation of stochastic models along the line of Neuts [92, 94]. In what follows we summarize the useful block structures of Markov chains that have recently been applied to real stochastic systems. In the study of stochastic models, Neuts opened a key door to developing the numerical theory of stochastic models, and it is worthwhile to review his two books of 1981 and 1989, respectively. In the first book, Neuts [92] studied a level-independent QBD process whose transition matrix is given by
\[ P = \begin{pmatrix} B_1 & B_0 & & & \\ B_2 & A_1 & A_0 & & \\ & A_2 & A_1 & A_0 & \\ & & A_2 & A_1 & A_0 \\ & & & \ddots & \ddots & \ddots \end{pmatrix}. \tag{1.1} \]
By using the QBD process, Chapter 3 of Neuts [92] analyzed the M/PH/1, PH/M/c and PH/PH/1 queues; Chapter 5 studied buffer models, including tandem queues and a multiprogramming model; and Chapter 6 discussed queues in random environments, such as a queue with a repairable server, a finite-source priority queue, and a queue with paired customers. As an important generalization, Chapter 1 of Neuts [92] systematically analyzed a level-independent Markov chain of GI/M/1 type whose transition matrix is given by
\[ P = \begin{pmatrix} B_1 & B_0 & & & \\ B_2 & A_1 & A_0 & & \\ B_3 & A_2 & A_1 & A_0 & \\ B_4 & A_3 & A_2 & A_1 & A_0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}. \tag{1.2} \]
Chapter 4 of Neuts [92] discussed the GI/PH/1, GI/PH/c, D/PH/1 and SM/PH/1 queues. In the second book, Neuts [94] analyzed a level-independent Markov chain of M/G/1 type whose transition matrix is given by
\[ P = \begin{pmatrix} B_1 & B_2 & B_3 & B_4 & \cdots \\ B_0 & A_1 & A_2 & A_3 & \cdots \\ & A_0 & A_1 & A_2 & \cdots \\ & & A_0 & A_1 & \cdots \\ & & & \ddots & \ddots \end{pmatrix}. \tag{1.3} \]
Applying the Markov chains of M/G/1 type, Neuts [94] studied the M/SM/1 queue and its variants in Chapter 4, the BMAP/G/1 queue in Chapter 5, and several practical systems, such as a data communication model, a poor man's satellite, and a series queue with two servers, in Chapter 6. Up to now, many applied problems have been described by more general block-structured Markov chains. In what follows we list some basic examples of block-structured Markov chains.
(1) A level-dependent QBD process whose transition matrix is given by
\[ P = \begin{pmatrix} A_1^{(0)} & A_0^{(0)} & & & \\ A_2^{(1)} & A_1^{(1)} & A_0^{(1)} & & \\ & A_2^{(2)} & A_1^{(2)} & A_0^{(2)} & \\ & & A_2^{(3)} & A_1^{(3)} & A_0^{(3)} \\ & & & \ddots & \ddots & \ddots \end{pmatrix}. \tag{1.4} \]
(2) A Markov chain of GI/G/1 type whose transition matrix is given by
\[ P = \begin{pmatrix} D_0 & D_1 & D_2 & D_3 & \cdots \\ D_{-1} & A_0 & A_1 & A_2 & \cdots \\ D_{-2} & A_{-1} & A_0 & A_1 & \cdots \\ D_{-3} & A_{-2} & A_{-1} & A_0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}. \tag{1.5} \]
(3) A level-dependent Markov chain of GI/M/1 type whose transition matrix is given by
\[ P = \begin{pmatrix} A_{0,0} & A_{0,1} & & & \\ A_{1,0} & A_{1,1} & A_{1,2} & & \\ A_{2,0} & A_{2,1} & A_{2,2} & A_{2,3} & \\ A_{3,0} & A_{3,1} & A_{3,2} & A_{3,3} & A_{3,4} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}. \tag{1.6} \]
(4) A level-dependent Markov chain of M/G/1 type whose transition matrix is given by
\[ P = \begin{pmatrix} A_{0,0} & A_{0,1} & A_{0,2} & A_{0,3} & \cdots \\ A_{1,0} & A_{1,1} & A_{1,2} & A_{1,3} & \cdots \\ & A_{2,1} & A_{2,2} & A_{2,3} & \cdots \\ & & A_{3,2} & A_{3,3} & \cdots \\ & & & \ddots & \ddots \end{pmatrix}. \tag{1.7} \]
(5) A general block-structured Markov chain with infinitely-many levels whose transition matrix is given by
\[ P = \begin{pmatrix} P_{0,0} & P_{0,1} & P_{0,2} & P_{0,3} & \cdots \\ P_{1,0} & P_{1,1} & P_{1,2} & P_{1,3} & \cdots \\ P_{2,0} & P_{2,1} & P_{2,2} & P_{2,3} & \cdots \\ P_{3,0} & P_{3,1} & P_{3,2} & P_{3,3} & \cdots \\ \vdots & \vdots & \vdots & \vdots & \end{pmatrix}. \tag{1.8} \]
(6) A general block-structured Markov chain with finitely-many levels whose transition matrix is given by
\[ P = \begin{pmatrix} P_{0,0} & P_{0,1} & P_{0,2} & \cdots & P_{0,M} \\ P_{1,0} & P_{1,1} & P_{1,2} & \cdots & P_{1,M} \\ P_{2,0} & P_{2,1} & P_{2,2} & \cdots & P_{2,M} \\ \vdots & \vdots & \vdots & & \vdots \\ P_{M,0} & P_{M,1} & P_{M,2} & \cdots & P_{M,M} \end{pmatrix}. \tag{1.9} \]
Specifically, a Markov chain of M/G/1 type with finitely-many levels has a transition matrix given by
\[ P = \begin{pmatrix} A_{0,0} & A_{0,1} & \cdots & A_{0,M-1} & A_{0,M} \\ A_{1,0} & A_{1,1} & \cdots & A_{1,M-1} & A_{1,M} \\ & A_{2,1} & \cdots & A_{2,M-1} & A_{2,M} \\ & & \ddots & \vdots & \vdots \\ & & & A_{M,M-1} & A_{M,M} \end{pmatrix}; \tag{1.10} \]
a Markov chain of GI/M/1 type with finitely-many levels has a transition matrix given by
\[ P = \begin{pmatrix} A_{0,0} & A_{0,1} & & & \\ A_{1,0} & A_{1,1} & A_{1,2} & & \\ A_{2,0} & A_{2,1} & A_{2,2} & A_{2,3} & \\ \vdots & \vdots & \vdots & \ddots & \ddots \\ A_{M,0} & A_{M,1} & A_{M,2} & \cdots & A_{M,M} \end{pmatrix}; \tag{1.11} \]
a QBD process with finitely-many levels has a transition matrix given by
\[ P = \begin{pmatrix} A_1^{(0)} & A_0^{(0)} & & & & \\ A_2^{(1)} & A_1^{(1)} & A_0^{(1)} & & & \\ & A_2^{(2)} & A_1^{(2)} & A_0^{(2)} & & \\ & & \ddots & \ddots & \ddots & \\ & & & A_2^{(M-1)} & A_1^{(M-1)} & A_0^{(M-1)} \\ & & & & A_2^{(M)} & A_1^{(M)} \end{pmatrix}; \tag{1.12} \]
and a special QBD process with finitely-many levels has a transition matrix given by
\[ P = \begin{pmatrix} A_1^{(0)} & A_0^{(0)} & & & & \\ A_2^{(1)} & B & C & & & \\ & A & B & C & & \\ & & \ddots & \ddots & \ddots & \\ & & & A & B & A_0^{(M-1)} \\ & & & & A_2^{(M)} & A_1^{(M)} \end{pmatrix}. \tag{1.13} \]
This book will provide a unified algorithmic framework for dealing with the Markov chains given in Eq. (1.1) to Eq. (1.13).
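To make the block notation concrete, the following sketch (with tiny illustrative blocks of our own) assembles a truncated version of the level-independent QBD transition matrix in Eq. (1.1) and checks that it is stochastic:

```python
import numpy as np

def build_qbd(B1, B0, B2, A0, A1, A2, levels):
    """Assemble the block-tridiagonal matrix of Eq. (1.1), truncated to `levels`
    levels; the last diagonal block absorbs A0 so each row still sums to one."""
    m = B1.shape[0]
    P = np.zeros((levels * m, levels * m))
    P[:m, :m], P[:m, m:2*m] = B1, B0
    P[m:2*m, :m] = B2
    for l in range(1, levels):
        P[l*m:(l+1)*m, l*m:(l+1)*m] = A1
        if l + 1 < levels:
            P[l*m:(l+1)*m, (l+1)*m:(l+2)*m] = A0
        if l >= 2:
            P[l*m:(l+1)*m, (l-1)*m:l*m] = A2
    P[-m:, -m:] += A0          # crude truncation at the last level
    return P

# Scalar (1x1) blocks: a random walk with a reflecting boundary, p = 0.4 up.
B1, B0, B2 = np.array([[0.6]]), np.array([[0.4]]), np.array([[0.6]])
A0, A1, A2 = np.array([[0.4]]), np.array([[0.0]]), np.array([[0.6]])
P = build_qbd(B1, B0, B2, A0, A1, A2, levels=5)
assert np.allclose(P.sum(axis=1), 1.0)     # each row is a probability vector
```

The same assembly pattern extends to the level-dependent forms (1.4), (1.6) and (1.7) by indexing the blocks with the level.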
1.2 Motivating Practical Examples
In this section, we choose some simple practical systems under exponential-type assumptions to indicate how to organize the block-structured Markov chains given in Eq. (1.1) to Eq. (1.13). More general examples are arranged in Chapter 7 and in the Problems of each chapter. Note that many similar examples can be found in the literature on queueing systems, communication networks, manufacturing systems, transportation systems, etc.
1.2.1 A Queue with Server Vacations
Queues with server vacations are an important type of stochastic model and have been extensively applied to communication networks, manufacturing systems and transportation systems. The first example analyzes a single-server vacation model. We consider a single-server queue with server vacations, where the arrival process is a Poisson process with arrival rate λ > 0, and the service and vacation times are i.i.d. with exponential distributions F(x) = 1 − e^{−μx} and V(y) = 1 − e^{−γy}, respectively. We assume that the server can take at most N consecutive vacations; after the N consecutive vacations, the server enters an idle period, even if there is no customer in the waiting room. All the random variables defined above are assumed to be mutually independent. For this system, we denote by Q(t) and Z(t) the number of customers in the system and the state of the server at time t, respectively. For example, Q(t) = 0, 1, 2, ..., and
\[ Z(t) = \begin{cases} (I, 0), & \text{if the server is idle,} \\ (W, m), & \text{if the server is busy with } m \text{ customers in the system,} \\ (V, m, n), & \text{if the server is on its } n\text{th consecutive vacation and there are } m \text{ customers in the system.} \end{cases} \]
It is clear from Fig. 1.1 that the Markov chain {(Q(t), Z(t)), t ≥ 0} is a QBD process Q given in Eq. (1.1) whose block-entries are given by
Figure 1.1 Relation of state transitions
\[ B_1 = \begin{pmatrix} -\lambda & & & & \\ \gamma & -(\gamma+\lambda) & & & \\ & \gamma & -(\gamma+\lambda) & & \\ & & \ddots & \ddots & \\ & & & \gamma & -(\gamma+\lambda) \end{pmatrix}, \qquad B_0 = A_0 = \mathrm{diag}(\lambda, \lambda, \ldots, \lambda) = \lambda I, \]
\[ B_2 = \begin{pmatrix} 0 & \mu & 0 & \cdots & 0 \\ 0 & 0 & 0 & \cdots & 0 \\ \vdots & & & & \vdots \\ 0 & 0 & 0 & \cdots & 0 \end{pmatrix}, \qquad A_2 = \mathrm{diag}(\mu, 0, 0, \ldots, 0), \]
where the single nonzero entry μ of B_2 corresponds to the service completion that empties the system, after which the server begins its first vacation, and
\[ A_1 = \begin{pmatrix} -(\lambda+\mu) & & & & \\ \gamma & -(\gamma+\lambda) & & & \\ & \gamma & -(\gamma+\lambda) & & \\ & & \ddots & \ddots & \\ & & & \gamma & -(\gamma+\lambda) \end{pmatrix}. \]
1.2.2 A Queue with Repairable Servers
The second example considers a simple repairable queueing system. We consider a queueing system with N identical servers and one repairman. The arrival process is a Poisson process with arrival rate λ > 0, the service times are i.i.d. and exponentially distributed with service rate μ, and the life and repair times of each server are exponentially distributed with parameters α and β, respectively. We assume that the service discipline is FCFS and that a repaired server is as good as new. All the random variables defined above are assumed to be mutually independent. Let Q(t) and Z(t) be the number of customers in the system and the number of available servers at time t, respectively. Then the Markov chain {(Q(t), Z(t)), t ≥ 0} is a QBD process Q given in Eq. (1.12) whose block-entries are given by
\[ B_1 = \begin{pmatrix} \Omega_0 & \Lambda_0 & & & \\ M_1 & \Omega_1 & \Lambda_1 & & \\ & M_2 & \Omega_2 & \Lambda_2 & \\ & & \ddots & \ddots & \ddots \\ & & & M_{N-1} & \Omega_{N-1} & \Lambda_{N-1} \\ & & & & M_N & \Omega_N \end{pmatrix}, \quad B_2 = (0, 0, \ldots, 0, A_2), \quad B_0 = (0, 0, \ldots, 0, A_0)^{\mathrm{T}}, \]
where
\[ \Lambda_i = \lambda I, \quad 0 \le i \le N; \qquad M_i = \mathrm{diag}\bigl(\min\{i, j\}\mu,\ 0 \le j \le N\bigr), \quad 0 \le i \le N; \]
\[ \Omega_0 = \Theta - \lambda I; \qquad \Omega_i = \Theta - \Lambda_i - M_i, \quad 1 \le i \le N; \]
\[ A_0 = \lambda I, \qquad A_2 = \mathrm{diag}(0, \mu, 2\mu, \ldots, N\mu), \qquad A_1 = \Theta - A_0 - A_2, \]
and
\[ \Theta = \begin{pmatrix} -\beta & \beta & & & \\ \alpha & -(\alpha+\beta) & \beta & & \\ & 2\alpha & -(2\alpha+\beta) & \beta & \\ & & \ddots & \ddots & \ddots \\ & & & (N-1)\alpha & -[(N-1)\alpha+\beta] & \beta \\ & & & & N\alpha & -N\alpha \end{pmatrix}. \]
E % ( N 1)D
· ¸ ¸ ¸ ¸. ¸ [( N 1)D E ] E ¸ ¸ ND ND ¸¹
1.2.3 A Call Center The third example discusses a call center which is modeled by means of retrial queues. We consider a call center which is described as a retrial queue with impatient customers. This system is structured in Fig. 1.2, which indicates that this system contains two areas: a service and waiting area, and an orbit. In this system, there
Figure 1.2 Queueing model for a call center 15
Constructive Computation in Stochastic Models with Applications
are s parallel identical servers, the sizes of the waiting room and the orbit are K s and r, respectively. The arrival process is a Poisson process with arrival rate O , the service times are i.i.d. and are exponentially distributed with service rate ȝ, the retrial time and the patient waiting time are exponential with parameters O1 and J , respectively. The service discipline is FCFS. Once the waiting time of a customer exceeds the patient waiting time, it immediately leaves this system. If an arrival customer can not enter the service and waiting area, then he has to go to the orbit for a retrial purpose with probability D , or he will immediately leave this system with probability 1 D . Once an arrival customer cannot enter the orbit, he immediately leave this system. All the random variables defined above are assumed to be mutually independent. Let Q(t) and Z(t) be the number of customers in the service and waiting area, and the number of customers in the orbit at time t, respectively. Then the Markov chain {(Q(t), Z(t)), tı0} is a QBD process with finitely-many levels whose infinitesimal generator is given by
$$Q=\begin{pmatrix}A_0 & C & & & \\ B_1 & A_1 & C & & \\ & B_2 & A_2 & \ddots & \\ & & \ddots & \ddots & C\\ & & & B_K & A_K\end{pmatrix},$$

where for 0 ≤ i ≤ K − 1,

$$A_i=\operatorname{diag}(a_{i,0},a_{i,1},a_{i,2},\ldots,a_{i,r})$$

with, for 0 ≤ j ≤ r,

$$a_{i,j}=\begin{cases}-(\lambda+j\lambda_1+i\mu), & 0\le i\le s,\\ -[\lambda+j\lambda_1+s\mu+(i-s)\gamma], & s+1\le i\le K-1;\end{cases}$$

$$A_K=\begin{pmatrix}f_{K,0} & \alpha\lambda & & & \\ d_1 & f_{K,1} & \alpha\lambda & & \\ & d_2 & f_{K,2} & \ddots & \\ & & \ddots & \ddots & \alpha\lambda\\ & & & d_r & f_{K,r}\end{pmatrix}$$

with, for 0 ≤ j ≤ r,

$$f_{K,j}=\begin{cases}-[\alpha\lambda+(1-\alpha)j\lambda_1+s\mu+(K-s)\gamma], & 0\le j\le r-1,\\ -[(1-\alpha)r\lambda_1+s\mu+(K-s)\gamma], & j=r,\end{cases}$$

and for 1 ≤ j ≤ r,

$$d_j=(1-\alpha)j\lambda_1;$$

$$B_i=b_iI$$

with, for 0 ≤ i ≤ K,

$$b_i=\begin{cases}i\mu, & 0\le i\le s,\\ s\mu+(i-s)\gamma, & s+1\le i\le K;\end{cases}$$

and

$$C=\begin{pmatrix}\lambda & & & & \\ \lambda_1 & \lambda & & & \\ & 2\lambda_1 & \lambda & & \\ & & \ddots & \ddots & \\ & & & r\lambda_1 & \lambda\end{pmatrix}.$$
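As a consistency check on these block-entries, the following sketch (Python with NumPy; all parameter values are illustrative, not from the text) assembles the full generator Q from the blocks $A_i$, $A_K$, $B_i$ and C and verifies that every row sums to zero, as must hold for an infinitesimal generator.

```python
import numpy as np

# Illustrative parameters for the call-center QBD
K, s, r = 6, 2, 3                 # capacity, servers, orbit size
lam, mu = 1.5, 1.0                # arrival and service rates
lam1, gamma, a = 0.4, 0.2, 0.7    # retrial rate, impatience rate, P(join orbit)

def A(i):
    """Diagonal block A_i, 0 <= i <= K - 1 (orbit size j = 0..r)."""
    d = [-(lam + j * lam1 + i * mu) if i <= s
         else -(lam + j * lam1 + s * mu + (i - s) * gamma)
         for j in range(r + 1)]
    return np.diag(d)

def A_last():
    """Block A_K at the blocking level K."""
    M = np.zeros((r + 1, r + 1))
    for j in range(r + 1):
        if j < r:
            M[j, j + 1] = a * lam          # blocked arrival joins the orbit
            M[j, j] = -(a * lam + (1 - a) * j * lam1 + s * mu + (K - s) * gamma)
        else:
            M[j, j] = -((1 - a) * r * lam1 + s * mu + (K - s) * gamma)
        if j > 0:
            M[j, j - 1] = (1 - a) * j * lam1   # abandoned retrial
    return M

def B(i):
    b = i * mu if i <= s else s * mu + (i - s) * gamma
    return b * np.eye(r + 1)

def C():
    M = lam * np.eye(r + 1)
    for j in range(1, r + 1):
        M[j, j - 1] = j * lam1             # successful retrial
    return M

n = (K + 1) * (r + 1)
Q = np.zeros((n, n))
for i in range(K + 1):
    lo = i * (r + 1)
    Q[lo:lo + r + 1, lo:lo + r + 1] = A(i) if i < K else A_last()
    if i > 0:
        Q[lo:lo + r + 1, lo - (r + 1):lo] = B(i)
    if i < K:
        Q[lo:lo + r + 1, lo + r + 1:lo + 2 * (r + 1)] = C()
print(np.allclose(Q.sum(axis=1), 0))   # True: Q is a conservative generator
```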
1.2.4 A Two-Loop Closed Production System

The fourth example considers a simple closed-loop production line and constructs a block-structured infinitesimal generator. In Fig. 1.3, the rectangles and circles represent the machines and buffers in the production line, respectively. There are always plenty of raw materials which can be sent to machine M1. Note that each operation on the three machines must be performed with the support of some carts. Operations on machines M1 and M3 need carts A and B, respectively, while operations on machine M2 need carts A and B simultaneously. The manufacturing processes on the three machines are given as follows. For the first machine, the raw material is first loaded into an empty cart A which comes from buffer Ba; then the cart A with the raw material is sent to machine M1. After processing on M1, the cart A carries the products to buffer B1 for the further operation on machine M2. If the arriving cart A finds that machine M2 is available and there is also an empty cart B in buffer Bb, then the carts A and B enter machine M2 together for product processing. Once the operation on machine M2 is finished, all the products are loaded from cart A to cart B. At the same time, the cart A returns to buffer Ba while cart B brings the products to buffer B2. Once machine M3 is available, the cart B enters machine M3 for the final processing. After all these processes, the products leave the system from machine M3, and the empty cart B returns to buffer Bb. The three machines are all reliable and never produce bad parts. Machine M1 is starved if buffer Ba is empty; machine M2 is starved if either buffer B1 or buffer Bb is empty; machine M3 is starved if buffer B2 is empty. We assume that the service times of machine Mi are i.i.d. and exponentially distributed with parameter μi for i = 1, 2, 3, and that the total numbers of carts A and carts B are m and n, respectively. All the random variables defined above are assumed to be mutually independent.

Figure 1.3 A two-loop closed production system

Let s(t) be the state of machine M2 at time t: if the machine is working, s(t) = 1; otherwise s(t) = 0. Let N1(t) and N3(t) denote the total number of carts A in both buffer Ba and machine M1 at time t, and the total number of carts B in both buffer B2 and machine M3 at time t, respectively. It is seen that this system is described as a continuous-time Markov chain {X(t), t ≥ 0}, where X(t) = (s(t), N1(t), N3(t)). Note that since the total number of carts A is m, the number of carts A in buffer B1 is m − N1(t) − s(t); similarly, since the total number of carts B is n, the number of carts B in buffer Bb is n − N3(t) − s(t). When s(t) = 0, machine M2 is idle because either buffer B1 or buffer Bb is empty. The state space of the Markov chain {X(t), t ≥ 0} is expressed by
$$\Omega=\{(0,m,n)\}\cup\{(0,i,n):0\le i\le m-1\}\cup\{(0,m,j):0\le j\le n-1\}\cup\{(1,i,j):0\le i\le m-1,\;0\le j\le n-1\}.$$

It is easy to check that the infinitesimal generator of the Markov chain {X(t), t ≥ 0} is given by
$$Q=\begin{pmatrix}
A & B & & & & & & E\\
& C & D_1 & D_2 & \cdots & D_{m-2} & D_{m-1} & D_m\\
& F_1 & M & N & & & & \\
& F_2 & T & P & N & & & \\
& F_3 & & T & P & \ddots & & \\
\vdots & \vdots & & & \ddots & \ddots & N & \\
& F_{m-1} & & & & T & P & N\\
K & & & & & & T & P
\end{pmatrix},\tag{1.14}$$

where
$$A=\begin{pmatrix}-\mu_1 & & & \\ \mu_3 & -(\mu_1+\mu_3) & & \\ & \ddots & \ddots & \\ & & \mu_3 & -(\mu_1+\mu_3)\end{pmatrix}_{(n+1)\times(n+1)},$$

B is the (n+1) × m matrix whose only nonzero entry is μ1 in the (n+1, m)th position,

$$C=\begin{pmatrix}-\mu_3 & & & \\ \mu_1 & -(\mu_1+\mu_3) & & \\ & \ddots & \ddots & \\ & & \mu_1 & -(\mu_1+\mu_3)\end{pmatrix}_{m\times m},\qquad E=\begin{pmatrix}\mu_1 & & \\ & \ddots & \\ & & \mu_1\\ 0 & \cdots & 0\end{pmatrix}_{(n+1)\times n},$$

Dj is a matrix of size m × n with the (j, n)th element being μ3 and all the other elements being 0, for 1 ≤ j ≤ m; Fi is a matrix of size n × m with the (n, i+1)th element being μ2 and all the other elements being 0, for 1 ≤ i ≤ m − 1; T = μ1 I_{n×n},

$$K=\begin{pmatrix}0 & \mu_2 & & \\ \vdots & & \ddots & \\ 0 & & & \mu_2\end{pmatrix}_{n\times(n+1)},\qquad N=\begin{pmatrix}0 & \mu_2 & & \\ & 0 & \ddots & \\ & & \ddots & \mu_2\\ & & & 0\end{pmatrix}_{n\times n},$$

$$M=\begin{pmatrix}-\mu_2 & & & \\ \mu_3 & -(\mu_2+\mu_3) & & \\ & \ddots & \ddots & \\ & & \mu_3 & -(\mu_2+\mu_3)\end{pmatrix}_{n\times n},$$

and

$$P=\begin{pmatrix}-(\mu_1+\mu_2) & & & \\ \mu_3 & -(\mu_1+\mu_2+\mu_3) & & \\ & \ddots & \ddots & \\ & & \mu_3 & -(\mu_1+\mu_2+\mu_3)\end{pmatrix}_{n\times n}.$$

For simplification of description, the infinitesimal generator given in Eq. (1.14) can be further rewritten as
$$Q=\begin{pmatrix}
A_0^{(0)} & A_1^{(0)} & A_2^{(0)} & A_3^{(0)} & \cdots & A_{m-1}^{(0)} & A_m^{(0)}\\
A_{-1}^{(1)} & A_0^{(1)} & A_1^{(1)} & & & & \\
A_{-2}^{(2)} & A_{-1}^{(2)} & A_0^{(2)} & A_1^{(2)} & & & \\
A_{-3}^{(3)} & & A_{-1}^{(3)} & A_0^{(3)} & A_1^{(3)} & & \\
\vdots & & & \ddots & \ddots & \ddots & \\
A_{-(m-1)}^{(m-1)} & & & & A_{-1}^{(m-1)} & A_0^{(m-1)} & A_1^{(m-1)}\\
A_{-m}^{(m)} & & & & & A_{-1}^{(m)} & A_0^{(m)}
\end{pmatrix},$$

where

$$A_0^{(0)}=\begin{pmatrix}A & B\\ 0 & C\end{pmatrix},\qquad A_j^{(0)}=\begin{pmatrix}0\\ D_j\end{pmatrix}\ \text{for}\ 1\le j\le m-1,\qquad A_m^{(0)}=\begin{pmatrix}E\\ D_m\end{pmatrix},$$

$$A_{-i}^{(i)}=(0,\;F_i)\ \text{for}\ 1\le i\le m-1,\qquad A_{-m}^{(m)}=(K,\;0),$$

$$A_0^{(1)}=M,\qquad A_0^{(i)}=P\ \text{for}\ 2\le i\le m,$$

$$A_1^{(i)}=N\ \text{for}\ 1\le i\le m-1,\qquad A_{-1}^{(j)}=T\ \text{for}\ 2\le j\le m.$$
1.2.5 An E-mail Queueing System Under Attacks

The fifth example provides a queueing model for an email system subject to three types of attacks, as shown in Fig. 1.4, by means of an irreducible continuous-time QBD process with finitely-many levels.

Figure 1.4 Three types of attacks on an email account

The ordinary emails in each email account form a basic queue that is expressed in email information units. The email arrivals form a Poisson process with rate λ, and each email information unit is handled by the user in a time that has an exponential distribution with mean 1/μ. These email information units are processed by the user according to FCFS. Assume that each email account has a capacity of at most N email information units.

The attack of cracking the password is abstracted as an input to the second queue. By cracking the password of the email account, the attacker can get some useful and valuable email information for his personal interest, business or other purposes. The events of successful acquisition of the password are treated as a customer arrival process that is a Poisson process with rate α_c. Such a customer remains in the email account for a time period x_c that is exponential with mean 1/γ_c. Since the attackers are only interested in some useful and valuable email information units, we assume that this kind of attack can grab the email information with probability P_c(x_c) while it remains in the email account.

The attack of sending malicious emails with Trojan-virus attachments is also modeled as an input process, in the third queue. The Trojan horse is planted if the email user clicks the attachment; based on this, such attackers can enter the email system to read the account's private and valuable information. The malicious emails are regarded as a customer arrival process that is a Poisson process with rate α_m. Some attached Trojan viruses are defended by the firewall or are ignored by the email user. Let P_in denote the probability that the malicious attachment is clicked by the email user, which means that the attacker gets access to the email account. This type of attack remains in the email system for a time x_m which is exponential with mean 1/γ_m. Let P_m(x_m) be the probability that the attacker is able to obtain the email information.

The fourth input process is the attack of email bombs. The successfully deployed email bombs are treated as customers whose arrivals form a Poisson process with rate α_b.
Once an email bomb arrives, the email account is crashed for a time period whose length has an exponential distribution with mean 1/β_b. It is worth noting that the attacks of cracking the password and of sending malicious emails do not change the ordinary email queue, while the attacks of email bombs do change its behavior. Let n(t), s(t) and r(t) denote the number of email information units, the state of the email account and the type of attacks in the email system at time t, respectively, where 0 ≤ n(t) ≤ N, N is the maximum number of email information units, s(t) ∈ {I, W, F} and r(t) ∈ {C, M, CM}. We provide a simple interpretation of the elements I, W, F, C, M and CM. Firstly, I, W and F stand for the Idle, Working and Failed states of the email account, respectively. Secondly, C, M and CM describe the attacks of cracking the password, the attacks of malicious emails, and the coexistence of both, respectively. Obviously, {(n(t), s(t), r(t)) : t ≥ 0} is a QBD process whose state space Ω is given by

$$\Omega=\bigcup_{k=0}^{N}L_k,$$

where

$$L_0=\{(0,I),(0^{(C)},I),(0^{(M)},I),(0^{(CM)},I),(0,F)\},$$

and for 1 ≤ i ≤ N,

$$L_i=\{(i,W),(i^{(C)},W),(i^{(M)},W),(i^{(CM)},W),(i,F)\}.$$
Based on the state space $\Omega=\bigcup_{k=0}^{N}L_k$, it is easy to see that the QBD process has the following infinitesimal generator

$$Q=\begin{pmatrix}
A_1^{(0)} & A_0^{(0)} & & & \\
A_2^{(1)} & A_1^{(1)} & A_0^{(1)} & & \\
& \ddots & \ddots & \ddots & \\
& & A_2^{(N-1)} & A_1^{(N-1)} & A_0^{(N-1)}\\
& & & A_2^{(N)} & A_1^{(N)}
\end{pmatrix},\tag{1.15}$$

where
$$A_1^{(i)}=\begin{pmatrix}
-(\alpha_c+\alpha_m+\xi) & \alpha_c & \alpha_m & 0 & \alpha_b\\
\gamma_c & -(\alpha_m+\xi+\gamma_c) & 0 & \alpha_m & \alpha_b\\
\gamma_m & 0 & -(\alpha_c+\xi+\gamma_m) & \alpha_c & \alpha_b\\
0 & \gamma_m & \gamma_c & -(\xi+\gamma_m+\gamma_c) & \alpha_b\\
\beta_b & 0 & 0 & 0 & -\beta_b
\end{pmatrix}$$

with

$$\xi=\alpha_b+\lambda+\mu,$$

for 1 ≤ i ≤ N − 1, and

$$A_1^{(0)}=\begin{pmatrix}
-(\alpha_c+\alpha_m+\eta) & \alpha_c & \alpha_m & 0 & \alpha_b\\
\gamma_c & -(\alpha_m+\eta+\gamma_c) & 0 & \alpha_m & \alpha_b\\
\gamma_m & 0 & -(\alpha_c+\eta+\gamma_m) & \alpha_c & \alpha_b\\
0 & \gamma_m & \gamma_c & -(\eta+\gamma_m+\gamma_c) & \alpha_b\\
\beta_b & 0 & 0 & 0 & -\beta_b
\end{pmatrix}$$

with

$$\eta=\alpha_b+\lambda;$$
$$A_1^{(N)}=\begin{pmatrix}
-(\alpha_c+\alpha_m+\zeta) & \alpha_c & \alpha_m & 0 & \alpha_b\\
\gamma_c & -(\alpha_m+\zeta+\gamma_c) & 0 & \alpha_m & \alpha_b\\
\gamma_m & 0 & -(\alpha_c+\zeta+\gamma_m) & \alpha_c & \alpha_b\\
0 & \gamma_m & \gamma_c & -(\zeta+\gamma_m+\gamma_c) & \alpha_b\\
\beta_b & 0 & 0 & 0 & -\beta_b
\end{pmatrix}$$

with

$$\zeta=\alpha_b+\mu,$$

and

$$A_2^{(j)}=\operatorname{diag}(\mu,\mu,\mu,\mu,0),\quad 1\le j\le N;\qquad A_0^{(k)}=\operatorname{diag}(\lambda,\lambda,\lambda,\lambda,0),\quad 0\le k\le N-1.$$
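A quick numerical sanity check of these blocks: at an interior level the rates must balance, i.e., $(A_0^{(i)}+A_1^{(i)}+A_2^{(i)})e=0$. A minimal sketch (Python with NumPy; the rate values and variable names are ad hoc, chosen only for illustration):

```python
import numpy as np

# Illustrative rates: attacks (alpha_*), exits (gamma_*, beta_b), traffic
ac, am, ab = 0.2, 0.3, 0.05    # alpha_c, alpha_m, alpha_b
gc, gm, bb = 0.5, 0.6, 1.2     # gamma_c, gamma_m, beta_b
lam, mu = 1.0, 1.4
xi = ab + lam + mu

# Phase order within a level: (i), (i^(C)), (i^(M)), (i^(CM)), (i, F)
A1 = np.array([
    [-(ac + am + xi), ac,              am,              0.0,             ab],
    [gc,              -(am + xi + gc), 0.0,             am,              ab],
    [gm,              0.0,             -(ac + xi + gm), ac,              ab],
    [0.0,             gm,              gc,              -(xi + gm + gc), ab],
    [bb,              0.0,             0.0,             0.0,            -bb],
])
A0 = np.diag([lam, lam, lam, lam, 0.0])
A2 = np.diag([mu, mu, mu, mu, 0.0])
print(np.allclose((A0 + A1 + A2).sum(axis=1), 0.0))  # True: rates balance
```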
1.3 The QBD Processes

Based on the above motivating examples, this section analyzes a continuous-time QBD process with either finitely-many or infinitely-many levels. We construct the UL- and LU-types of RG-factorizations for the QBD process, while the RG-factorizations for arbitrary irreducible Markov chains will be systematically developed in Chapters 2, 3, 5, 6 and 9 of this book. In addition, we iteratively define the R- and G-measures, both of which are direct generalizations of the matrices R and G given in Neuts [92, 94], respectively. We consider an irreducible continuous-time QBD process with infinitely-many levels whose infinitesimal generator is given by
$$Q=\begin{pmatrix}
A_1^{(0)} & A_0^{(0)} & & \\
A_2^{(1)} & A_1^{(1)} & A_0^{(1)} & \\
& A_2^{(2)} & A_1^{(2)} & A_0^{(2)}\\
& & \ddots & \ddots & \ddots
\end{pmatrix};\tag{1.16}$$

and an irreducible continuous-time QBD process with finitely-many levels whose infinitesimal generator is given by

$$Q=\begin{pmatrix}
A_1^{(0)} & A_0^{(0)} & & & \\
A_2^{(1)} & A_1^{(1)} & A_0^{(1)} & & \\
& \ddots & \ddots & \ddots & \\
& & A_2^{(N)} & A_1^{(N)} & A_0^{(N)}\\
& & & A_2^{(N+1)} & A_1^{(N+1)}
\end{pmatrix}.\tag{1.17}$$
1.3.1 Heuristic Expressions

We first consider an irreducible continuous-time level-independent QBD process with infinitely-many levels given in Eq. (1.16), with $A_0^{(i)}=A_0$ and $A_1^{(i)}=A_1$ for i ≥ 1, and $A_2^{(j)}=A_2$ for j ≥ 2. Noting that the repeated row has the non-zero blocks A2, A1 and A0, it is easy to conjecture that the stationary probability vector π = (π0, π1, π2, ...) of the QBD process Q has the matrix-geometric form

$$\pi_k=\pi_1R^{k-1},\quad k\ge 1,$$

where the matrix R is the minimal nonnegative solution to the matrix equation

$$A_0+RA_1+R^2A_2=0,$$

and π0 and π1 need to satisfy the following three conditions:

$$\pi_0A_1^{(0)}+\pi_1A_2^{(1)}=0,$$

$$\pi_0A_0^{(0)}+\pi_1(A_1+RA_2)=0$$

and

$$\pi_0e+\pi_1(I-R)^{-1}e=1.$$
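For a level-independent QBD, the rate matrix R can be computed by successive substitution: rewrite the quadratic equation as $R=-(A_0+R^2A_2)A_1^{-1}$ and iterate from R = 0, which is known to increase monotonically to the minimal nonnegative solution. A minimal sketch (Python with NumPy; the stopping tolerance and the M/M/1 test instance are illustrative choices, not from the text). The M/M/1 queue is a QBD with 1 × 1 blocks, for which R must equal ρ = λ/μ:

```python
import numpy as np

def rate_matrix(A0, A1, A2, tol=1e-12, max_iter=10000):
    """Minimal nonnegative solution of A0 + R A1 + R^2 A2 = 0 by
    successive substitution R <- -(A0 + R^2 A2) A1^{-1}."""
    R = np.zeros_like(A0)
    A1_inv = np.linalg.inv(A1)
    for _ in range(max_iter):
        R_new = -(A0 + R @ R @ A2) @ A1_inv
        if np.max(np.abs(R_new - R)) < tol:
            return R_new
        R = R_new
    raise RuntimeError("no convergence")

# M/M/1 queue viewed as a QBD with 1x1 blocks: R must equal rho = lam/mu
lam, mu = 0.8, 1.0
A0, A1, A2 = np.array([[lam]]), np.array([[-(lam + mu)]]), np.array([[mu]])
R = rate_matrix(A0, A1, A2)
print(round(R[0, 0], 6))   # 0.8
```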
This was described as Theorem 3.1 in Chapter 3 of Neuts [92]. It is easy to see that $\sum_{k=0}^{\infty}\pi_ke=1$ if and only if the spectral radius satisfies sp(R) < 1. Thus, the spectral analysis of the rate matrix R is a key step in the study of level-independent QBD processes.

Then, we analyze an irreducible continuous-time level-independent QBD process with finitely-many levels given in Eq. (1.17), with $A_0^{(i)}=A_0$ for 1 ≤ i ≤ N − 1, $A_1^{(j)}=A_1$ for 1 ≤ j ≤ N, and $A_2^{(k)}=A_2$ for 2 ≤ k ≤ N. Note that this QBD process has both a top and a bottom boundary. We take a common matrix as follows:

$$Q=\begin{pmatrix}
\ddots & \ddots & \ddots & & \\
& A_2 & A_1 & A_0 & \\
& & A_2 & A_1 & A_0\\
& & & \ddots & \ddots & \ddots
\end{pmatrix}.$$

Viewing the matrix Q from top to bottom, the rate matrix R is the minimal nonnegative solution to the matrix equation

$$A_0+RA_1+R^2A_2=0;$$

viewing it from bottom to top, the rate matrix $\widetilde R$ is the minimal nonnegative solution to the matrix equation

$$\widetilde R^2A_0+\widetilde RA_1+A_2=0.$$

Noting that the repeated row has only the non-zero blocks A2, A1, A0, it is easy to conjecture that the stationary probability vector π = (π0, π1, ..., πN, πN+1) of the QBD process Q has the matrix-geometric form

$$\pi_k=\pi_1R^{k-1}+\pi_N\widetilde R^{N-k},\quad 1\le k\le N.$$
In this case, we may check that π0, π1, πN and πN+1 need to satisfy the following five conditions:

$$\pi_0A_1^{(0)}+\pi_1A_2^{(1)}+\pi_N\widetilde R^{N-1}A_2^{(1)}=0,$$

$$\pi_0A_0^{(0)}+\pi_1(A_1+RA_2)+\pi_N\widetilde R^{N-2}(\widetilde RA_1+A_2)=0,$$

$$\pi_1R^{N-1}A_0^{(N)}+\pi_NA_0^{(N)}+\pi_{N+1}A_1^{(N+1)}=0,$$

$$\pi_1R^{N-2}(A_0+RA_1)+\pi_N(A_1+\widetilde RA_0)+\pi_{N+1}A_2^{(N+1)}=0$$

and

$$\pi_0e+\pi_{N+1}e+\pi_1\sum_{k=0}^{N-1}R^ke+\pi_N\sum_{k=0}^{N-1}\widetilde R^ke=1.$$

Note that the three matrices $A_0^{(0)}$, $A_1^{(0)}$ and $A_2^{(1)}$ and the three matrices $A_0^{(N)}$, $A_1^{(N+1)}$ and $A_2^{(N+1)}$ are the top boundary blocks and the bottom boundary blocks, respectively.
1.3.2 The LU-Type RG-Factorization

We first construct the LU-type RG-factorization① for the QBD process with finitely-many levels. For the irreducible QBD process with N + 2 levels given in Eq. (1.17), we write the U-measure as

$$\overline U_0=A_1^{(0)}$$

and

$$\overline U_k=A_1^{(k)}+A_2^{(k)}\bigl(-\overline U_{k-1}^{-1}\bigr)A_0^{(k-1)},\quad 1\le k\le N+1.$$

It is easy to check that $\overline U_k$ is the infinitesimal generator of a Markov chain with $m_k$ states for 0 ≤ k ≤ N + 1. Also, the Markov chain $\overline U_k$ is transient, and thus the matrix $\overline U_k$ is invertible, for 0 ≤ k ≤ N; while the Markov chain $\overline U_{N+1}$ is positive recurrent (resp. transient) if and only if the Markov chain Q is positive recurrent (resp. transient). Based on the U-measure $\{\overline U_k\}$, we can define the LU-type R- and G-measures respectively as

$$\overline R_k=A_2^{(k)}\bigl(-\overline U_{k-1}^{-1}\bigr),\quad 1\le k\le N+1,$$

and

$$\overline G_k=\bigl(-\overline U_k^{-1}\bigr)A_0^{(k)},\quad 0\le k\le N.$$

Note that the matrix sequence $\{\overline R_k,\ 1\le k\le N+1\}$ is the unique nonnegative solution to the system of matrix equations

$$\overline R_{k+1}\overline R_kA_0^{(k-1)}+\overline R_{k+1}A_1^{(k)}+A_2^{(k+1)}=0,\quad 1\le k\le N,$$

with the boundary condition

$$\overline R_1=A_2^{(1)}\bigl(-\overline U_0^{-1}\bigr).$$

Hence

$$\overline R_{k+1}=-A_2^{(k+1)}\bigl[\overline R_kA_0^{(k-1)}+A_1^{(k)}\bigr]^{-1},\quad 1\le k\le N.$$

The matrix sequence $\{\overline G_k,\ 1\le k\le N\}$ is the unique nonnegative solution to the system of matrix equations

$$A_0^{(k)}+A_1^{(k)}\overline G_k+A_2^{(k)}\overline G_{k-1}\overline G_k=0,\quad 1\le k\le N,$$

with the boundary condition

$$\overline G_0=\bigl(-\overline U_0^{-1}\bigr)A_0^{(0)}.$$

Thus

$$\overline G_k=-\bigl[A_1^{(k)}+A_2^{(k)}\overline G_{k-1}\bigr]^{-1}A_0^{(k)},\quad 1\le k\le N.$$

For the QBD process with finitely-many levels given in Eq. (1.17), we can easily obtain the LU-type RG-factorization as follows:

$$Q=(I-\overline R_L)\overline U_D(I-\overline G_U),\tag{1.18}$$

where

$$\overline R_L=\begin{pmatrix}0 & & & & \\ \overline R_1 & 0 & & & \\ & \ddots & \ddots & & \\ & & \overline R_N & 0 & \\ & & & \overline R_{N+1} & 0\end{pmatrix},\qquad \overline U_D=\operatorname{diag}\bigl(\overline U_0,\overline U_1,\ldots,\overline U_N,\overline U_{N+1}\bigr),$$

$$\overline G_U=\begin{pmatrix}0 & \overline G_0 & & & \\ & 0 & \overline G_1 & & \\ & & \ddots & \ddots & \\ & & & 0 & \overline G_N\\ & & & & 0\end{pmatrix}.$$

① From this section on, notation for the LU-type measures will use $\overline U$, $\overline R$ and $\overline G$.
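The recursions for $\overline U_k$, $\overline R_k$ and $\overline G_k$ and the factorization Eq. (1.18) can be verified numerically. The sketch below (Python with NumPy; the small QBD instance and all rate values are invented for illustration) computes the LU-type measures for a finite QBD and checks that $(I-\overline R_L)\,\overline U_D\,(I-\overline G_U)$ reproduces Q:

```python
import numpy as np

def lu_rg_measures(A0, A1, A2):
    """LU-type recursions for a finite QBD with levels 0..L-1.
    A1[k]: diagonal block of level k; A0[k]: block level k -> k+1;
    A2[k]: block level k -> k-1 (A0[L-1] and A2[0] are unused)."""
    L = len(A1)
    U = [A1[0]]
    R, G = [None], []               # R[k] for k = 1..L-1, G[k] for k = 0..L-2
    for k in range(1, L):
        inv_negU = np.linalg.inv(-U[k - 1])
        R.append(A2[k] @ inv_negU)           # R_k = A2^(k) (-U_{k-1})^{-1}
        G.append(inv_negU @ A0[k - 1])       # G_{k-1} = (-U_{k-1})^{-1} A0^(k-1)
        U.append(A1[k] + R[k] @ A0[k - 1])   # censored generator of level k
    return R, U, G

def tridiag(sub, diag, sup):
    """Assemble a block-tridiagonal matrix (sub[0] and sup[L-1] unused)."""
    L, m = len(diag), diag[0].shape[0]
    Q = np.zeros((L * m, L * m))
    for k in range(L):
        Q[k*m:(k+1)*m, k*m:(k+1)*m] = diag[k]
        if k > 0:
            Q[k*m:(k+1)*m, (k-1)*m:k*m] = sub[k]
        if k < L - 1:
            Q[k*m:(k+1)*m, (k+1)*m:(k+2)*m] = sup[k]
    return Q

# invented QBD: 4 levels, 2 phases, arrival rate 1, service rate 2, switch 0.5
S = np.array([[-0.5, 0.5], [0.5, -0.5]])
I2 = np.eye(2)
A0 = [1.0 * I2] * 4
A2 = [2.0 * I2] * 4
A1 = [S - 1.0 * I2] + [S - 3.0 * I2] * 2 + [S - 2.0 * I2]

R, U, G = lu_rg_measures(A0, A1, A2)
Q = tridiag(A2, A1, A0)
Z = np.zeros((2, 2))
RL = tridiag(R, [Z] * 4, [Z] * 3)
GU = tridiag([Z] * 4, [Z] * 4, G)
UD = tridiag([Z] * 4, U, [Z] * 3)
Qfac = (np.eye(8) - RL) @ UD @ (np.eye(8) - GU)
print(np.allclose(Qfac, Q))   # True: the factorization reproduces Q
```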
We now extend the LU-type RG-factorization Eq. (1.18) to the irreducible QBD process with infinitely-many levels given in Eq. (1.16). Clearly, this extension is achieved simply by letting N → ∞. Note that the matrix $\overline U_k$ is invertible for all k ≥ 0. For the QBD process with infinitely-many levels given in Eq. (1.16), the LU-type RG-factorization is given by

$$Q=(I-\overline R_L)\overline U_D(I-\overline G_U),\tag{1.19}$$

where

$$\overline R_L=\begin{pmatrix}0 & & & \\ \overline R_1 & 0 & & \\ & \overline R_2 & 0 & \\ & & \overline R_3 & 0\\ & & & \ddots & \ddots\end{pmatrix},\qquad \overline U_D=\operatorname{diag}\bigl(\overline U_0,\overline U_1,\overline U_2,\overline U_3,\ldots\bigr),$$

$$\overline G_U=\begin{pmatrix}0 & \overline G_0 & & \\ & 0 & \overline G_1 & \\ & & 0 & \overline G_2\\ & & & 0 & \ddots\\ & & & & \ddots\end{pmatrix}.$$
1.3.3 The UL-Type RG-Factorization

For the irreducible QBD process with N + 2 levels given in Eq. (1.17), we write the U-measure as

$$U_{N+1}=A_1^{(N+1)}$$

and

$$U_k=A_1^{(k)}+A_0^{(k)}\bigl(-U_{k+1}^{-1}\bigr)A_2^{(k+1)},\quad 0\le k\le N.$$

It is easy to check that $U_k$ is the infinitesimal generator of a Markov chain with $m_k$ states for 0 ≤ k ≤ N + 1. Also, the Markov chain $U_k$ is transient, and thus the matrix $U_k$ is invertible, for 1 ≤ k ≤ N + 1; while the Markov chain $U_0$ is positive recurrent (resp. transient) if and only if the Markov chain Q is positive recurrent (resp. transient). Based on the U-measure $\{U_k\}$, we can respectively define the UL-type R- and G-measures as

$$R_k=A_0^{(k)}\bigl(-U_{k+1}^{-1}\bigr),\quad 0\le k\le N,$$

and

$$G_k=\bigl(-U_k^{-1}\bigr)A_2^{(k)},\quad 1\le k\le N+1.$$

Note that the matrix sequence $\{R_k,\ 0\le k\le N\}$ is the unique nonnegative solution to the system of matrix equations

$$A_0^{(k)}+R_kA_1^{(k+1)}+R_kR_{k+1}A_2^{(k+2)}=0,\quad 0\le k\le N-1,$$

with the boundary condition

$$R_N=A_0^{(N)}\bigl(-U_{N+1}^{-1}\bigr).$$

Hence

$$R_k=-A_0^{(k)}\bigl[A_1^{(k+1)}+R_{k+1}A_2^{(k+2)}\bigr]^{-1},\quad 0\le k\le N-1.$$

Similarly, the matrix sequence $\{G_k,\ 1\le k\le N+1\}$ is the unique nonnegative solution to the system of matrix equations

$$A_0^{(k)}G_{k+1}G_k+A_1^{(k)}G_k+A_2^{(k)}=0,\quad 1\le k\le N,$$

with the boundary condition

$$G_{N+1}=\bigl(-U_{N+1}^{-1}\bigr)A_2^{(N+1)}.$$

Thus

$$G_k=-\bigl[A_0^{(k)}G_{k+1}+A_1^{(k)}\bigr]^{-1}A_2^{(k)},\quad 1\le k\le N.$$

For the QBD process with finitely-many levels given in Eq. (1.17), the UL-type RG-factorization is given by

$$Q=(I-R_U)U_D(I-G_L),\tag{1.20}$$

where

$$R_U=\begin{pmatrix}0 & R_0 & & & \\ & 0 & R_1 & & \\ & & \ddots & \ddots & \\ & & & 0 & R_N\\ & & & & 0\end{pmatrix},\qquad U_D=\operatorname{diag}\bigl(U_0,U_1,\ldots,U_N,U_{N+1}\bigr),$$

$$G_L=\begin{pmatrix}0 & & & & \\ G_1 & 0 & & & \\ & \ddots & \ddots & & \\ & & G_N & 0 & \\ & & & G_{N+1} & 0\end{pmatrix}.$$
Finally, we briefly describe the UL-type RG-factorization for the QBD process with infinitely-many levels given in Eq. (1.16). Let the matrix sequences {Rk} and {Gk} be the minimal nonnegative solutions to the systems of matrix equations

$$A_0^{(l)}+R_lA_1^{(l+1)}+R_lR_{l+1}A_2^{(l+2)}=0,\quad l\ge 0,$$

and

$$A_0^{(k)}G_{k+1}G_k+A_1^{(k)}G_k+A_2^{(k)}=0,\quad k\ge 1,$$

respectively. Thus, we obtain

$$U_k=A_1^{(k)}+R_kA_2^{(k+1)}=A_1^{(k)}+A_0^{(k)}G_{k+1},\quad k\ge 0.$$

It is easy to check that $U_k$ is the infinitesimal generator of a Markov chain with $m_k$ states for k ≥ 0. Also, the Markov chain $U_k$ is transient, and thus the matrix $U_k$ is invertible, for k ≥ 1; while the Markov chain $U_0$ is positive recurrent (resp. transient) if and only if the Markov chain Q is recurrent (resp. transient). For the QBD process with infinitely-many levels given in Eq. (1.16), the UL-type RG-factorization is given by

$$Q=(I-R_U)U_D(I-G_L),\tag{1.21}$$

where

$$R_U=\begin{pmatrix}0 & R_0 & & \\ & 0 & R_1 & \\ & & 0 & R_2\\ & & & 0 & \ddots\\ & & & & \ddots\end{pmatrix},\qquad U_D=\operatorname{diag}\bigl(U_0,U_1,U_2,\ldots\bigr),$$

$$G_L=\begin{pmatrix}0 & & & \\ G_1 & 0 & & \\ & G_2 & 0 & \\ & & G_3 & 0\\ & & & \ddots & \ddots\end{pmatrix}.$$
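The UL-type measures computed by the backward recursion also yield the stationary probability vector of a positive recurrent finite QBD in the product form $\pi_k=\pi_0R_0R_1\cdots R_{k-1}$, used later in Expression 1.2 of Section 1.3.4. A minimal sketch (Python with NumPy; the QBD instance is invented, and the closed form for the 2 × 2 stationary vector is a convenience of this toy example) compares that product form against a direct solve of πQ = 0, πe = 1:

```python
import numpy as np

S = np.array([[-0.5, 0.5], [0.5, -0.5]])
I2 = np.eye(2)
L = 4                                    # levels 0..3
A0 = [1.0 * I2] * L                      # up-blocks (arrival rate 1)
A2 = [2.0 * I2] * L                      # down-blocks (service rate 2)
A1 = [S - 1.0 * I2] + [S - 3.0 * I2] * (L - 2) + [S - 2.0 * I2]

# backward UL-type recursion: U_k = A1^(k) + A0^(k) (-U_{k+1})^{-1} A2^(k+1)
U, R = [None] * L, [None] * L
U[L - 1] = A1[L - 1]
for k in range(L - 2, -1, -1):
    R[k] = A0[k] @ np.linalg.inv(-U[k + 1])
    U[k] = A1[k] + R[k] @ A2[k + 1]

# v0: stationary vector of the 2x2 censored generator U_0 = [[-a, a], [b, -b]]
v0 = np.array([U[0][1, 0], U[0][0, 1]])
v0 = v0 / v0.sum()

# pi_k proportional to v0 R_0 R_1 ... R_{k-1}, then normalize over all levels
pis = [v0]
for k in range(1, L):
    pis.append(pis[-1] @ R[k - 1])
pi = np.hstack(pis)
pi = pi / pi.sum()

# direct check: solve pi Q = 0 together with pi e = 1 by least squares
m = 2
Q = np.zeros((L * m, L * m))
for k in range(L):
    Q[k*m:(k+1)*m, k*m:(k+1)*m] = A1[k]
    if k > 0:
        Q[k*m:(k+1)*m, (k-1)*m:k*m] = A2[k]
    if k < L - 1:
        Q[k*m:(k+1)*m, (k+1)*m:(k+2)*m] = A0[k]
M = np.vstack([Q.T, np.ones(L * m)])
b = np.zeros(L * m + 1); b[-1] = 1.0
pi_direct = np.linalg.lstsq(M, b, rcond=None)[0]
print(np.allclose(pi, pi_direct, atol=1e-8))  # True
```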
1.3.4 Linear QBD-Equations

Based on the UL- and LU-types of RG-factorizations, we provide a constructive approach to solving a linear QBD-equation

$$XQ=b\quad\text{or}\quad QX=b,\tag{1.22}$$

where Q is the infinitesimal generator of an irreducible continuous-time level-dependent QBD process with either finitely-many or infinitely-many levels. When b = 0, Eq. (1.22) is homogeneous; otherwise it is nonhomogeneous. Note that we will provide a more general discussion of such systems of infinite-dimensional linear equations in Chapter 9.

1.3.4.1 The Homogeneous Linear QBD-Equations
For the homogeneous linear QBD-equations, a well-known example is the equation πQ = 0, where the QBD process Q is positive recurrent and π is its stationary probability vector. For convenience of description, we denote by $\pi_k^{(LU)}$ and $\pi_k^{(UL)}$ the stationary probability vectors obtained by means of the LU- and UL-types of RG-factorizations, respectively.

Case Ⅰ Infinitely-many levels

In this case, we apply the UL-type RG-factorization to derive the stationary probability vector of a QBD process with infinitely-many levels. Let the matrix sequence {Rl, l ≥ 0} be the minimal nonnegative solution to the system of matrix equations

$$A_0^{(l)}+R_lA_1^{(l+1)}+R_lR_{l+1}A_2^{(l+2)}=0,\quad l\ge 0.$$

Using the RG-factorization Eq. (1.21), we have

$$\pi_0=\kappa v,\tag{1.23}$$

$$\pi_k=\kappa vR_0R_1\cdots R_{k-1},\quad k\ge 1,\tag{1.24}$$

where κ is a normalization constant, and v is the stationary probability vector of the censored chain $U_0=A_1^{(0)}+R_0A_2^{(1)}$ to level 0.

Case Ⅱ Finitely-many levels

In this case, we use the LU- and UL-types of RG-factorizations to derive the stationary probability vector of a QBD process with finitely-many levels, which leads to the following three expressions.

Expression 1.1 Using the LU-type RG-factorization Eq. (1.18), we get

$$\pi_k^{(LU)}=\kappa v_{N+1}\overline R_{N+1}\overline R_N\cdots\overline R_{k+1},\quad 0\le k\le N,\tag{1.25}$$

and

$$\pi_{N+1}^{(LU)}=\kappa v_{N+1},\tag{1.26}$$

where κ is a normalization constant and $v_{N+1}$ is the stationary probability vector of the censored chain $\overline U_{N+1}=A_1^{(N+1)}+\overline R_{N+1}A_0^{(N)}$ to level N + 1.
Expression 1.2 According to the UL-type RG-factorization Eq. (1.20), we have

$$\pi_0^{(UL)}=\eta v_0\tag{1.27}$$

and

$$\pi_k^{(UL)}=\eta v_0R_0R_1\cdots R_{k-1},\quad 1\le k\le N+1,\tag{1.28}$$

where η is a normalization constant and $v_0$ is the stationary probability vector of the censored chain $U_0=A_1^{(0)}+R_0A_2^{(1)}$ to level 0.

Expression 1.3 By means of the linear-combination property of the solutions of a system of linear equations, it follows from Eq. (1.25) to Eq. (1.28) that the stationary probability vector π = (π0, π1, ..., πN, πN+1) is given by

$$\pi_0=u_{N+1}\overline R_{N+1}\overline R_N\cdots\overline R_1+u_0,\tag{1.29}$$

$$\pi_k=u_{N+1}\overline R_{N+1}\overline R_N\cdots\overline R_{k+1}+u_0R_0R_1\cdots R_{k-1},\quad 1\le k\le N,\tag{1.30}$$

$$\pi_{N+1}=u_{N+1}+u_0R_0R_1\cdots R_N,\tag{1.31}$$

where the row vectors $u_{N+1}$ and $u_0$ are uniquely determined by the equations $(u_{N+1},u_0)Q_E=0$ and

$$u_{N+1}\Bigl(I+\sum_{k=0}^{N}\overline R_{N+1}\overline R_N\cdots\overline R_{k+1}\Bigr)e+u_0\Bigl(I+\sum_{k=1}^{N+1}R_0R_1\cdots R_{k-1}\Bigr)e=1,$$

where

$$Q_E=\begin{pmatrix}A_1^{(N+1)}+\overline R_{N+1}A_0^{(N)} & \overline R_{N+1}\overline R_N\cdots\overline R_2\bigl[A_2^{(1)}+\overline R_1A_1^{(0)}\bigr]\\[4pt] R_0R_1\cdots R_{N-1}\bigl[A_0^{(N)}+R_NA_1^{(N+1)}\bigr] & A_1^{(0)}+R_0A_2^{(1)}\end{pmatrix}.$$

Note that $Q_E$ is the infinitesimal generator of the censored Markov chain of Q to the set E = {Level 0, Level N + 1}.

1.3.4.2 The Nonhomogeneous Linear QBD-Equations
As an illustrating example, we now use the LU-type RG-factorization to solve a nonhomogeneous linear QBD-equation: XQ = b or QX = b. The maximal non-positive inverse $Q_{\max}^{-1}$ of the matrix Q has a probabilistic meaning, hence the performance measures of a stochastic system can be expressed in terms of the blocks of $Q_{\max}^{-1}$. Therefore, it is necessary to express each block of $Q_{\max}^{-1}$ in order to give the solution $X=bQ_{\max}^{-1}$ or $X=Q_{\max}^{-1}b$. We set $Q_{\max}^{-1}=(q_{m,n})_{m,n\ge 0}$, partitioned according to the levels.

Based on the LU-type R-measure $\{\overline R_k,\ k\ge 1\}$ and G-measure $\{\overline G_k,\ k\ge 0\}$, we write

$$X_k^{(l)}=\overline R_l\overline R_{l-1}\overline R_{l-2}\cdots\overline R_{l-k+1},\quad l\ge k\ge 1,\tag{1.32}$$

and

$$Y_k^{(l)}=\overline G_l\overline G_{l+1}\overline G_{l+2}\cdots\overline G_{l+k-1},\quad k\ge 1,\ l\ge 0.\tag{1.33}$$

The following theorem provides expressions for each block of the maximal non-positive inverse $Q_{\max}^{-1}$ by means of the LU-type RG-factorization $Q=(I-\overline R_L)\overline U_D(I-\overline G_U)$. In this case, we have

$$Q_{\max}^{-1}=(I-\overline G_U)^{-1}\,\overline U_D^{-1}\,(I-\overline R_L)^{-1}.$$
Theorem 1.2 (1) For the irreducible QBD process with infinitely-many levels given in Eq. (1.16),

$$q_{m,n}=\begin{cases}\overline U_m^{-1}X_{m-n}^{(m)}+\sum\limits_{i=1}^{\infty}Y_i^{(m)}\overline U_{i+m}^{-1}X_{i+m-n}^{(i+m)}, & 0\le n\le m-1,\\[6pt] \overline U_m^{-1}+\sum\limits_{i=1}^{\infty}Y_i^{(m)}\overline U_{i+m}^{-1}X_i^{(i+m)}, & n=m,\\[6pt] Y_{n-m}^{(m)}\overline U_n^{-1}+\sum\limits_{i=n-m+1}^{\infty}Y_i^{(m)}\overline U_{i+m}^{-1}X_{i-(n-m)}^{(i+m)}, & n\ge m+1.\end{cases}$$

(2) If the irreducible QBD process with finitely-many levels given in Eq. (1.17) is transient, then

$$q_{m,n}=\begin{cases}\overline U_m^{-1}X_{m-n}^{(m)}+\sum\limits_{i=1}^{N+1-m}Y_i^{(m)}\overline U_{i+m}^{-1}X_{i+m-n}^{(i+m)}, & 0\le m\le N,\ 0\le n\le m-1,\\[6pt] \overline U_m^{-1}+\sum\limits_{i=1}^{N+1-m}Y_i^{(m)}\overline U_{i+m}^{-1}X_i^{(i+m)}, & 0\le m\le N,\ n=m,\\[6pt] Y_{n-m}^{(m)}\overline U_n^{-1}+\sum\limits_{i=n-m+1}^{N+1-m}Y_i^{(m)}\overline U_{i+m}^{-1}X_{i-(n-m)}^{(i+m)}, & 0\le m\le N,\ m+1\le n\le N+1,\\[6pt] \overline U_{N+1}^{-1}X_{N+1-n}^{(N+1)}, & m=N+1,\ 0\le n\le N,\\[6pt] \overline U_{N+1}^{-1}, & m=n=N+1.\end{cases}$$

Theorem 1.2 indicates that if an irreducible QBD process has infinitely-many levels, then the existence of the maximal non-positive inverse is irrespective of whether the QBD process is recurrent or transient.
When the QBD process is transient, we can use the UL-type RG-factorization $Q=(I-R_U)U_D(I-G_L)$ to provide an expression for the maximal non-positive inverse as follows:

$$Q_{\max}^{-1}=(I-G_L)^{-1}\,U_D^{-1}\,(I-R_U)^{-1},$$

whose details are similar to those in Theorem 1.2. Note that Appendix A.3 provides expressions for the inverse of a general block-structured matrix.
1.4 Phase-Type Distributions

In this section, we provide an intuitive understanding of how a univariate PH distribution is constructed, and also list some basic properties of the PH distribution. Furthermore, we study a multivariate PH distribution and give some of its important properties.

1.4.1 The Exponential Distribution

The exponential distribution is extensively used to deal with various stochastic models. The key to these applications is that the exponential distribution has the memoryless property. Let X be a random variable with the exponential distribution function F(x) = 1 − exp{−λx} for 0 < λ < ∞. The probability density function of F(x) is given by f(x) = λ exp{−λx}. It is easy to check that for all t, s ≥ 0,

$$P\{X>t+s\mid X>s\}=P\{X>t\},$$

which is called the memoryless property of the exponential random variable X. The memoryless property can effectively simplify various conditional probabilities involved in a stochastic model, so that the mathematical analysis of the model is simplified considerably. It is therefore of interest to find more general distributions which retain the memoryless property. As a minor generalization of the exponential distribution, we let X be a random variable with distribution function G(x) = 1 − α exp{−λx} for 0 < α ≤ 1 and 0 < λ < ∞. The probability density function of G(x) is given by g(x) = αλ exp{−λx}. It is easy to check that this random variable X also satisfies a memoryless-type identity. However, there does not exist any other distribution with the memoryless property except for these two. Therefore, it is necessary to find a class of new distributions which retain properties similar to the memoryless property of the exponential distribution.
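The memoryless identity is easy to confirm numerically. A minimal sketch (Python, standard library only; the rate and time values are arbitrary):

```python
import math

lam = 0.7                       # arbitrary rate parameter
def tail(x):                    # P{X > x} for X ~ exponential(lam)
    return math.exp(-lam * x)

s, t = 1.3, 2.1
cond = tail(t + s) / tail(s)    # P{X > t + s | X > s}
print(abs(cond - tail(t)) < 1e-12)   # True: conditional tail = unconditional tail
```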
1.4.2 The Erlang Distribution

Based on the exponential distribution, A. K. Erlang used the stage method to construct the so-called Erlang distribution of order n, denoted E[n, λ]. Let $X_i$ be a random variable with distribution function F(x) = 1 − exp{−λx} for 1 ≤ i ≤ n. We assume that X1, X2, ..., Xn are i.i.d. Thus the random variable $Z_n=\sum_{k=1}^{n}X_k$ is Erlang of order n with distribution function

$$F_n(x)=P\{Z_n\le x\}=1-\exp\{-\lambda x\}\sum_{k=0}^{n-1}\frac{(\lambda x)^k}{k!}$$

and probability density function

$$f_n(x)=\frac{\lambda(\lambda x)^{n-1}}{(n-1)!}\exp\{-\lambda x\}.$$

For the Erlang distribution of order n, Fig. 1.5(a) depicts the random variable $Z_n=\sum_{k=1}^{n}X_k$. Based on this, it is clear that the Erlang distribution of order n can be written as $F_n(x)=1-\alpha e^{Tx}e$, where α = (1, 0, 0, ..., 0) and T is the infinitesimal generator of an n-state Markov chain, given by

$$T=\begin{pmatrix}-\lambda & \lambda & & & \\ & -\lambda & \lambda & & \\ & & \ddots & \ddots & \\ & & & -\lambda & \lambda\\ & & & & -\lambda\end{pmatrix},\qquad T^0=\begin{pmatrix}0\\0\\\vdots\\0\\\lambda\end{pmatrix}.\tag{1.34}$$

Figure 1.5 The stage method
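The closed-form Erlang distribution function can be cross-checked by numerically integrating the density $f_n$. A minimal sketch (Python, standard library only; the composite-Simpson helper and the parameter values are ad hoc choices for this check):

```python
import math

def erlang_cdf(n, lam, x):
    """F_n(x) = 1 - e^{-lam x} * sum_{k<n} (lam x)^k / k!"""
    return 1.0 - math.exp(-lam * x) * sum((lam * x) ** k / math.factorial(k)
                                          for k in range(n))

def erlang_pdf(n, lam, x):
    return lam * (lam * x) ** (n - 1) * math.exp(-lam * x) / math.factorial(n - 1)

def simpson(f, a, b, m=2000):
    """Composite Simpson rule on [a, b] with an even number m of intervals."""
    h = (b - a) / m
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, m // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, m // 2))
    return s * h / 3

n, lam, x = 3, 2.0, 1.7
diff = abs(erlang_cdf(n, lam, x) - simpson(lambda u: erlang_pdf(n, lam, u), 0, x))
print(diff < 1e-8)   # True: the closed form matches the integrated density
```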
As a minor generalization of the Erlang distribution, we assume that the exponential random variables X1, X2, ..., Xn are independent but differently distributed, with parameters λ1, λ2, ..., λn. In this situation, a generalized Erlang distribution is expressed by $F_n(x)=1-\alpha e^{Tx}e$, where α = (1, 0, 0, ..., 0),

$$T=\begin{pmatrix}-\lambda_1 & \lambda_1 & & & \\ & -\lambda_2 & \lambda_2 & & \\ & & \ddots & \ddots & \\ & & & -\lambda_{n-1} & \lambda_{n-1}\\ & & & & -\lambda_n\end{pmatrix},\qquad T^0=\begin{pmatrix}0\\0\\\vdots\\0\\\lambda_n\end{pmatrix}.\tag{1.35}$$

A further generalization of the Erlang distribution, which is seen from Fig. 1.5(b), can be given by α = (1, 0, 0, ..., 0) and

$$T=\begin{pmatrix}-\lambda_1 & \lambda_1p_1 & & & \\ & -\lambda_2 & \lambda_2p_2 & & \\ & & \ddots & \ddots & \\ & & & -\lambda_{n-1} & \lambda_{n-1}p_{n-1}\\ & & & & -\lambda_n\end{pmatrix},\qquad T^0=\begin{pmatrix}\lambda_1(1-p_1)\\\lambda_2(1-p_2)\\\vdots\\\lambda_{n-1}(1-p_{n-1})\\\lambda_n\end{pmatrix}.\tag{1.36}$$
1.4.3 The PH Distribution

The above three examples given in Eq. (1.34) to Eq. (1.36) indicate that a distribution can be expressed by means of a Markov chain T together with an initial probability vector α. This motivates us to propose the PH distribution under the unified framework (α, T). The PH distribution is characterized by an absorbing Markov chain with finitely many states; it is the distribution of the time that the underlying Markov chain spends in the transient states until the first absorption. The phase number of the PH distribution is equal to the number of transient states in the Markov chain T, and the row vector α is the initial probability vector of the Markov chain T. We consider a continuous-time Markov chain with state space Ω = {1, 2, ..., m, m+1} whose infinitesimal generator is given by

$$Q=\begin{pmatrix}T & T^0\\ 0 & 0\end{pmatrix}\tag{1.37}$$

where $T^0\ge 0$ and $Te+T^0=0$. It is clear that state m + 1 is absorbing and all the other states are transient. Let (α, α_{m+1}) be the initial probability vector of the Markov chain, where αe + α_{m+1} = 1. For the Markov chain given in Eq. (1.37), the distribution F(x) of the time X until absorption into the absorbing state m + 1 is called a PH distribution with representation (α, T). If $T+T^0\alpha$ is the infinitesimal generator of an irreducible Markov chain, then the representation (α, T) is called irreducible. In this case, it follows from Lemma 2.2.2 in Chapter 2 of Neuts [92] that

$$F(x)=1-\alpha\exp\{Tx\}e.\tag{1.38}$$

Obviously, F(0) = 1 − αe = α_{m+1}. If α = 0, then F(x) = 1 for all x ≥ 0; if 0 < αe < 1, then F(0) = α_{m+1} > 0; if α_{m+1} = 0, then F(0) = 0. In general, if a stochastic model can be analyzed when the relevant distributions are exponential, then the stochastic model with corresponding PH distributions may also admit an algorithmic solution. Table 1.1 provides some useful relations between the exponential distribution and the PH distribution. Note that LST denotes the Laplace-Stieltjes transform of F(x), $f^*(s)=\int_0^{\infty}\exp\{-sx\}\,\mathrm{d}F(x)$.
Exponential Distribution
PH Distribution
Distribution function
F ( x) 1 D exp{O x}
F ( x) 1 D exp{Tx}e
Distribution parameters
D, O
D, T, m
Density function
f ( x) DO exp{O x}
f ( x) D exp{Tx}T 0
Moments
Pi
LST
f * ( s ) 1 D DO ( s O ) 1
( 1)i 1 i !DO i
Pi
( 1)i i !D T i e
f * ( s ) D m 1 D ( sI T ) 1T 0
In what follows, we provide four useful properties of the PH distribution; for the proofs the reader may refer to Chapter 2 of Neuts [92].

Property 1.1 For a PH distribution with representation $(\alpha, T)$, the states $1, 2, \ldots, m$ are transient if and only if the matrix $T$ is invertible, and $T^{-1} \leq 0$.

Property 1.2 For a PH distribution with irreducible representation $(\alpha, T)$, $\alpha \exp\{Tx\} > 0$ and $\exp\{Tx\} T^0 > 0$ for each $x \geq 0$. At the same time, $\alpha (sI - T)^{-1} > 0$ and $(sI - T)^{-1} T^0 > 0$ for $s \geq 0$.

Property 1.3 For two PH distributions with irreducible representations $(\alpha, T)$ and $(\beta, S)$, the matrix $T \oplus S$ is invertible.

Property 1.4 For a PH distribution with reducible representation $(\alpha, T)$, we may delete the rows and columns of $T$ corresponding to a subset of the state space $\{1, 2, \ldots, m\}$ to obtain a smaller, irreducible representation $(\alpha_1, T_1)$.

As a companion distribution, we now consider a discrete-time PH distribution. Consider a discrete-time Markov chain with state space $\Omega = \{1, 2, \ldots, m, m+1\}$ whose transition probability matrix is given by
\[
P = \begin{pmatrix} T & T^0 \\ 0 & 1 \end{pmatrix}, \tag{1.39}
\]
where $T^0 \geq 0$ and $Te + T^0 = e$. State $m+1$ is an absorbing state and all the others are transient. Let $(\alpha, \alpha_{m+1})$ be the initial probability vector of the Markov chain, where $\alpha e + \alpha_{m+1} = 1$. For the Markov chain given in Eq. (1.39), the distribution $\{p_k, k \geq 0\}$ of the number $N$ of state transitions until absorption into the absorbing state $m+1$ is called a discrete-time PH distribution with representation $(\alpha, T)$. If $T + T^0 \alpha$ is the transition probability matrix of an irreducible Markov chain, then this representation $(\alpha, T)$ is called an irreducible representation. In this case, it follows from Chapter 2 of Neuts [92] that
\[
p_k = \begin{cases} \alpha_{m+1}, & k = 0, \\ \alpha T^{k-1} T^0, & k \geq 1. \end{cases} \tag{1.40}
\]
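Eq. (1.40) is straightforward to evaluate directly. The sketch below (the order-2 example, parameter values and function name are our own, not from the book) takes $N$ as the sum of two independent geometric variables, which is a discrete-time PH distribution of order 2, and checks that $\{p_k\}$ sums to one with mean $1/p + 1/q$.

```python
# p_0 = alpha_{m+1}, p_k = alpha * T^(k-1) * T0 for k >= 1 (Eq. 1.40).
# Example: N = G1 + G2, two independent geometrics with success probabilities
# p and q; alpha = (1, 0), T = [[1-p, p], [0, 1-q]], T0 = (0, q).

def discrete_ph_pmf(alpha, T, T0, kmax):
    m = len(alpha)
    pmf = [1.0 - sum(alpha)]              # p_0 = alpha_{m+1} = 1 - alpha*e
    v = alpha[:]                          # v holds alpha * T^(k-1)
    for _ in range(1, kmax + 1):
        pmf.append(sum(v[i] * T0[i] for i in range(m)))
        v = [sum(v[i] * T[i][j] for i in range(m)) for j in range(m)]
    return pmf

p, q = 0.5, 0.25
alpha = [1.0, 0.0]
T = [[1 - p, p], [0.0, 1 - q]]
T0 = [0.0, q]

pmf = discrete_ph_pmf(alpha, T, T0, 400)      # tail beyond 400 is negligible here
total_mass = sum(pmf)
mean_val = sum(k * pk for k, pk in enumerate(pmf))
print(total_mass, mean_val)                   # ~1 and ~1/p + 1/q = 6
```

The truncation point 400 is safe for these parameters because the tail decays like $(1-q)^k$.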
If a stochastic model can be analyzed when the relevant distributions are geometric, then the stochastic model with the corresponding discrete-time PH distributions may admit an algorithmic solution. Table 1.2 provides some useful relations between the geometric distribution and the discrete-time PH distribution. We write
\[
P^*(z) = \sum_{k=0}^{\infty} z^k p_k.
\]
Table 1.2 A comparison of the geometric and discrete-time PH distributions

  Distribution function:    geometric: $p_0 = 1 - \alpha$, $p_k = \alpha t^{k-1}(1-t)$ for $k \geq 1$;
                            PH: $p_0 = \alpha_{m+1}$, $p_k = \alpha T^{k-1} T^0$ for $k \geq 1$
  Distribution parameters:  geometric: $\alpha, t$;  PH: $\alpha, T, m$
  Moments:                  geometric: $\mu_i = i!\,\alpha t^{i-1}(1-t)^{-i}$;  PH: $\mu_i = i!\,\alpha T^{i-1}(I-T)^{-i} e$
  PGF:                      geometric: $P^*(z) = 1 - \alpha + z\alpha(1-zt)^{-1}(1-t)$;
                            PH: $P^*(z) = \alpha_{m+1} + z\alpha(I-zT)^{-1} T^0$
Now, we provide some closure properties of the PH distributions; the corresponding proofs may be found in Chapter 2 of Neuts [92]. For convenience of description, we introduce a notation: if the distribution of the random variable $X$ is of phase type of order $m$ with irreducible representation $(\alpha, T)$, we write $X \sim \mathrm{PH}(\alpha, T; m)$, or $F(x) \sim \mathrm{PH}(\alpha, T; m)$ for the continuous-time case, while $\{p_k\} \sim \mathrm{PH}(\alpha, T; m)$ for the discrete-time case.

Property 1.5 Let $X_R$ and $X_A$ be the stationary residual life time and the stationary age of the random variable $X$, respectively. If $X \sim \mathrm{PH}(\alpha, T; m)$, then $X_R, X_A \sim \mathrm{PH}(\theta, T; m)$, where $\theta$ is the stationary probability vector of the Markov chain $T + T^0 \alpha$ for $\alpha e = 1$.

Property 1.6 If $X \sim \mathrm{PH}(\alpha, T; m)$ and $Y \sim \mathrm{PH}(\beta, S; n)$, then $X + Y \sim \mathrm{PH}(\gamma, L; m + n)$, where $\gamma = (\alpha, \alpha_{m+1} \beta)$ and
\[
L = \begin{pmatrix} T & T^0 \beta \\ 0 & S \end{pmatrix}.
\]

Property 1.7 If $X \sim \mathrm{PH}(\alpha, T; m)$ and $Y \sim \mathrm{PH}(\beta, S; n)$, then $\min\{X, Y\} \sim \mathrm{PH}(\alpha \otimes \beta, T \oplus S; mn)$ and $\max\{X, Y\} \sim \mathrm{PH}(\gamma, L; mn + m + n)$, where $\gamma = (\alpha \otimes \beta, \beta_{n+1} \alpha, \alpha_{m+1} \beta)$ and
\[
L = \begin{pmatrix} T \oplus S & I \otimes S^0 & T^0 \otimes I \\ 0 & T & 0 \\ 0 & 0 & S \end{pmatrix}.
\]

Property 1.8 Let $p = (p_1, p_2, \ldots, p_k)$ be a probability vector with $pe = 1$, and let $F_j(x) \sim \mathrm{PH}(\alpha_j, T_j; m_j)$ for $j = 1, 2, \ldots, k$. Then $G(x) = \sum_{j=1}^{k} p_j F_j(x) \sim \mathrm{PH}(\gamma, L; \sum_{j=1}^{k} m_j)$, where $\gamma = (p_1 \alpha_1, p_2 \alpha_2, \ldots, p_k \alpha_k)$ and $L = \mathrm{diag}(T_1, T_2, \ldots, T_k)$.

It is worthwhile to note that infinite mixtures of PH distributions are generally not of phase type. An important and useful exception is given in Properties 1.9 and 1.10. Let $F^{n*}(x)$ be the $n$th-fold convolution of the function $F(x)$, that is, $F^{0*}(x) = 1$, $F^{1*}(x) = F(x)$ and
\[
F^{n*}(x) = \int_0^x F^{(n-1)*}(x - u)\,\mathrm{d}F(u) \quad \text{for } n \geq 2.
\]
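The Kronecker construction in Property 1.7 can be checked numerically. In the sketch below (example data ours; helper names are our own), $X$ is Erlang-2 and $Y$ is exponential, and we compare $P\{\min\{X, Y\} > x\}$ computed from $(\alpha \otimes \beta, T \oplus S)$ with the product of the two marginal survival functions, which must agree exactly.

```python
# Verify min{X, Y} ~ PH(alpha (x) beta, T (+) S) from Property 1.7.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def expm(A, terms=30):
    m = len(A)
    norm = max(sum(abs(x) for x in row) for row in A)
    s = 0
    while norm > 0.5:
        norm /= 2.0
        s += 1
    X = [[x / 2 ** s for x in row] for row in A]
    E = [[float(i == j) for j in range(m)] for i in range(m)]
    term = [row[:] for row in E]
    for i in range(1, terms):
        term = [[x / i for x in row] for row in matmul(term, X)]
        E = [[E[r][c] + term[r][c] for c in range(m)] for r in range(m)]
    for _ in range(s):
        E = matmul(E, E)
    return E

def kron(A, B):
    ma, na, mb, nb = len(A), len(A[0]), len(B), len(B[0])
    return [[A[i // mb][j // nb] * B[i % mb][j % nb]
             for j in range(na * nb)] for i in range(ma * mb)]

def kronsum(T, S):
    # T (+) S = T (x) I + I (x) S
    In = [[float(i == j) for j in range(len(T))] for i in range(len(T))]
    Im = [[float(i == j) for j in range(len(S))] for i in range(len(S))]
    A, B = kron(T, Im), kron(In, S)
    return [[A[i][j] + B[i][j] for j in range(len(A))] for i in range(len(A))]

def survival(alpha, T, x):
    m = len(alpha)
    E = expm([[t * x for t in row] for row in T])
    return sum(alpha[i] * E[i][j] for i in range(m) for j in range(m))

alpha, T = [1.0, 0.0], [[-2.0, 2.0], [0.0, -2.0]]   # X: Erlang-2, rate 2
beta, S = [1.0], [[-3.0]]                            # Y: exponential, rate 3
gamma = [a * b for a in alpha for b in beta]         # alpha (x) beta
L = kronsum(T, S)                                    # T (+) S

x = 0.7
joint = survival(gamma, L, x)
product = survival(alpha, T, x) * survival(beta, S, x)
print(joint, product)
```

The agreement is exact because $\exp\{(T \oplus S)x\} = \exp\{Tx\} \otimes \exp\{Sx\}$.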
Property 1.9 If $\{s_k\} \sim \mathrm{PH}(\beta, S; n)$ and $F(x) \sim \mathrm{PH}(\alpha, T; m)$, then the infinite mixture $G(x) = \sum_{k=0}^{\infty} s_k F^{k*}(x) \sim \mathrm{PH}(\gamma, L; mn)$, where
\[
\gamma = \alpha \otimes \beta (I - \alpha_{m+1} S)^{-1},
\qquad
L = T \otimes I + (T^0 \alpha) \otimes (I - \alpha_{m+1} S)^{-1} S.
\]
The height of the jump $\gamma_{mn+1}$ at time 0 and the vector $L^0$ are given by
\[
\gamma_{mn+1} = \beta_{n+1} + \alpha_{m+1} \beta (I - \alpha_{m+1} S)^{-1} S^0,
\qquad
L^0 = T^0 \otimes (I - \alpha_{m+1} S)^{-1} S^0.
\]
Property 1.10 If $\{s_k\} \sim \mathrm{PH}(\beta, S; n)$ and $\{p_v\} \sim \mathrm{PH}(\alpha, T; m)$, then for the infinite mixture $P_k = \sum_{v=0}^{\infty} s_v\, p_k^{v*}$ we have $\{P_k\} \sim \mathrm{PH}(\gamma, L; mn)$, where
\[
\gamma = \alpha \otimes \beta (I - \alpha_{m+1} S)^{-1},
\qquad
L = T \otimes I + (T^0 \alpha) \otimes (I - \alpha_{m+1} S)^{-1} S.
\]
In many applications, we need to consider the integral
\[
a_k = \int_0^{\infty} \exp\{-\lambda x\} \frac{(\lambda x)^k}{k!}\,\mathrm{d}F(x), \quad k \geq 0.
\]
The following Property 1.11 describes a closure property of the PH distribution for the sequence $\{a_k\}$.

Property 1.11 If $F(x) \sim \mathrm{PH}(\alpha, T; m)$, then the sequence $\{a_k\} \sim \mathrm{PH}(\gamma, L; m)$, where
\[
\gamma = \lambda \alpha (\lambda I - T)^{-1},
\qquad
L = \lambda (\lambda I - T)^{-1}.
\]
The height of the jump $\gamma_{m+1}$ at time 0 and the vector $L^0$ are given by
\[
\gamma_{m+1} = \alpha_{m+1} + \alpha (\lambda I - T)^{-1} T^0,
\qquad
L^0 = (\lambda I - T)^{-1} T^0.
\]
For the two random variables $N$ and $X$, we write $a_n = P\{N = n\}$ for $n \geq 0$ and
\[
P\{X = j \mid N = n\} = \binom{n}{j} p^j (1 - p)^{n-j}.
\]
Then it is clear that
\[
P\{X = j\} = \sum_{n=0}^{\infty} P\{N = n\} P\{X = j \mid N = n\}
= \sum_{n=0}^{\infty} a_n \binom{n}{j} p^j (1 - p)^{n-j}.
\]
The following Property 1.12 indicates that the random variable $X$ is of phase type if the random variable $N$ is of phase type.

Property 1.12 If $N \sim \mathrm{PH}(\alpha, T; m)$, then $X \sim \mathrm{PH}(\gamma, L; m)$, where
\[
\gamma = p \alpha [I - (1 - p) T]^{-1},
\qquad
L = p T [I - (1 - p) T]^{-1}.
\]
The height of the jump $\gamma_{m+1}$ at time 0 and the vector $L^0$ are given by
\[
\gamma_{m+1} = \alpha_{m+1} + (1 - p) \alpha [I - (1 - p) T]^{-1} T^0,
\qquad
L^0 = [I - (1 - p) T]^{-1} T^0.
\]
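Property 1.12 can be verified by brute force in the scalar case $m = 1$, where $N$ is geometric and every matrix reduces to a number. The sketch below (parameters and names are our own illustration) compares the direct mixture $\sum_n a_n \binom{n}{j} p^j (1-p)^{n-j}$ with the phase-type form $\gamma L^{j-1} L^0$.

```python
# Binomial thinning of a discrete PH variable (Property 1.12), scalar case m = 1.
# N is geometric: a_n = q (1-q)^(n-1) for n >= 1, i.e. alpha = 1, T = 1-q, T0 = q.
import math

p, q = 0.3, 0.4
D = 1.0 - (1.0 - p) * (1.0 - q)     # scalar value of I - (1-p)T
gamma = p / D                        # p * alpha * [I - (1-p)T]^(-1)
L = p * (1.0 - q) / D                # p * T * [I - (1-p)T]^(-1)
L0 = q / D                           # [I - (1-p)T]^(-1) * T0
gamma0 = (1.0 - p) * q / D           # jump at zero: alpha_{m+1} + (1-p)*alpha*[...]^(-1)*T0

def direct(j, nmax=600):
    total = 0.0
    for n in range(max(j, 1), nmax):
        a_n = q * (1.0 - q) ** (n - 1)
        total += a_n * math.comb(n, j) * p ** j * (1.0 - p) ** (n - j)
    return total

def ph_form(j):
    return gamma0 if j == 0 else gamma * L ** (j - 1) * L0

for j in range(5):
    print(j, direct(j), ph_form(j))
```

The truncation at `nmax = 600` is more than enough here since the summand decays geometrically.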
In the rest of this section, we discuss multivariate PH distributions, which are useful in modeling real situations involving $n$ nonnegative dependent random variables. Our analysis focuses on two cases: the bivariate PH distribution and the multivariate PH distribution.

Definition 1.1 For a Markov chain with state space $\Omega$, a set $\Gamma \subset \Omega$ is said to be stochastically closed if, once the Markov chain enters the set $\Gamma$, it never leaves it again.

For the stochastically closed sets of the Markov chain, we have the following useful properties:
(1) If $\Gamma_1$ and $\Gamma_2$ are two nonempty stochastically closed sets of the state space $\Omega$, then $\Gamma_1 \cup \Gamma_2$ and $\Gamma_1 \cap \Gamma_2$ are both stochastically closed.
(2) If $\Gamma$ is a nonempty stochastically closed set of the state space $\Omega$, then, with the states ordered so that the blocks are indexed by $\Gamma^c$ and $\Gamma$, the infinitesimal generator can be written in the block form
\[
T = \begin{pmatrix} T_{1,1} & T_{1,2} \\ 0 & T_{2,2} \end{pmatrix}.
\]
1.4.4 The Bivariate PH Distribution

Consider a continuous-time, right-continuous Markov chain $\{X(t)\}$ with state space $\Omega = \{1, 2, \ldots, m, m+1\}$ whose infinitesimal generator is given in Eq. (1.37). Let $\Gamma_1$ and $\Gamma_2$ be two nonempty stochastically closed sets of the state space $\Omega$ such that $\Gamma_1 \cap \Gamma_2 = \{m+1\}$. We define
\[
Y_k = \inf\{t : X(t) \in \Gamma_k\},
\]
with the convention $\inf \emptyset = +\infty$ for the empty set $\emptyset$. Let
\[
F(x_1, x_2) = P\{Y_1 > x_1, Y_2 > x_2\}.
\]
Then $(Y_1, Y_2)$, or $F(x_1, x_2)$, is said to be of bivariate phase type. For the bivariate PH distribution, we have the following closed-form expression:
\[
F(x_1, x_2) = \begin{cases}
\alpha e^{T x_2} g_2 e^{T (x_1 - x_2)} g_1 e, & 0 \leq x_2 \leq x_1, \\
\alpha e^{T x_1} g_1 e^{T (x_2 - x_1)} g_2 e, & 0 \leq x_1 \leq x_2,
\end{cases}
\]
where $g_k = \mathrm{diag}(g_k(1), g_k(2), \ldots, g_k(m))$ with
\[
g_k(i) = \begin{cases} 1, & i \in \Gamma_k^c, \\ 0, & i \in \Gamma_k. \end{cases}
\]
It is obvious that $F(x, x) = \alpha e^{Tx} g_2 g_1 e$ and $F(0, 0) = \alpha g_2 g_1 e$.

Now, we list some useful properties of the bivariate PH distribution as follows:

Property 1.13 The probability density function of the bivariate PH distribution is given by
\[
f(x_1, x_2) = \begin{cases}
\alpha e^{T x_2} [T, g_2] e^{T (x_1 - x_2)} T g_1 e, & 0 \leq x_2 \leq x_1, \\
\alpha e^{T x_1} [T, g_1] e^{T (x_2 - x_1)} T g_2 e, & 0 \leq x_1 \leq x_2,
\end{cases}
\]
where the operator $[A, B] = AB - BA$.

Property 1.14 Let $f^*(s_1, s_2) = E[\exp\{-(s_1 Y_1 + s_2 Y_2)\}]$. Then
\[
f^*(s_1, s_2) = \alpha [(s_1 + s_2) I - T]^{-1} \{ [T, g_2] (s_1 I - T)^{-1} T g_1 + [T, g_1] (s_2 I - T)^{-1} T g_2 + T g_1 g_2 - [T, g_1] - [T, g_2] \} e.
\]

Property 1.15
\[
E[Y_1 Y_2] = \alpha (T^{-1} g_1 T^{-1} g_2 + T^{-1} g_2 T^{-1} g_1) e.
\]
1.4.5 The Multivariate PH Distribution

Consider a continuous-time, right-continuous Markov chain $\{X(t)\}$ with state space $\Omega = \{1, 2, \ldots, m, m+1\}$ whose infinitesimal generator is given in Eq. (1.37). Let $\Gamma_1, \Gamma_2, \ldots, \Gamma_n$ be $n$ nonempty stochastically closed sets of the state space $\Omega$ such that $\bigcap_{i=1}^{n} \Gamma_i = \{m+1\}$. We define
\[
Y_k = \inf\{t : X(t) \in \Gamma_k\}.
\]
Let
\[
F(x_1, x_2, \ldots, x_n) = P\{Y_1 > x_1, Y_2 > x_2, \ldots, Y_n > x_n\}.
\]
Then $Y = (Y_1, Y_2, \ldots, Y_n)$, or $F(x_1, x_2, \ldots, x_n)$, is said to be of $n$-variate phase type. For the multivariate PH distribution, we call $(\alpha, T; \Gamma_1, \Gamma_2, \ldots, \Gamma_n)$ its irreducible representation, and write $Y \sim \mathrm{PH}(\alpha, T; \Gamma_1, \Gamma_2, \ldots, \Gamma_n)$.

For a nonnegative vector $(x_1, x_2, \ldots, x_n)$, order the entries as $x_{i_1} \geq x_{i_2} \geq \cdots \geq x_{i_n} \geq 0$. Then for the multivariate PH distribution we can write the closed-form expression
\[
F(x_1, x_2, \ldots, x_n) = \alpha \exp\{T x_{i_n}\}\, g_{i_n} \prod_{k=n}^{2} \exp\{T (x_{i_{k-1}} - x_{i_k})\}\, g_{i_{k-1}}\, e,
\]
the probability density function is given by
\[
f(x_1, x_2, \ldots, x_n) = (-1)^n \alpha \exp\{T x_{i_n}\} [T, g_{i_n}] \prod_{k=n}^{3} \exp\{T (x_{i_{k-1}} - x_{i_k})\} [T, g_{i_{k-1}}] \cdot \exp\{T (x_{i_1} - x_{i_2})\}\, T g_{i_1} e,
\]
and the joint mean is given by
\[
E[Y_1 Y_2 \cdots Y_n] = (-1)^n \alpha \sum_{\omega \in \Theta} \prod_{i=1}^{n} T^{-1} g_{\omega(i)}\, e,
\]
where $\Theta$ is the set of all permutations of $\{1, 2, \ldots, n\}$ and $\omega = (\omega(1), \omega(2), \ldots, \omega(n))$.

For the multivariate PH distribution, we now list its useful properties as follows:

Property 1.16 If $Y \sim \mathrm{PH}(\alpha, T; \Gamma_1, \Gamma_2, \ldots, \Gamma_n)$ and $Z \sim \mathrm{PH}(\beta, S; \Delta_1, \Delta_2, \ldots, \Delta_l)$, then
\[
(Y, Z) \sim \mathrm{PH}(\alpha \otimes \beta, T \oplus S; \widetilde{\Gamma}_1, \ldots, \widetilde{\Gamma}_n, \widetilde{\Delta}_1, \ldots, \widetilde{\Delta}_l),
\]
where $\widetilde{\Gamma}_i$ and $\widetilde{\Delta}_j$ denote the corresponding stochastically closed sets in the product state space.

Property 1.17 If $(Y_1, Y_2, \ldots, Y_n) \sim \mathrm{PH}(\alpha, T; \Gamma_1, \Gamma_2, \ldots, \Gamma_n)$, then
\[
(Y_{i_1}, Y_{i_2}, \ldots, Y_{i_k}) \sim \mathrm{PH}(\alpha, T; \Gamma_{i_1}, \Gamma_{i_2}, \ldots, \Gamma_{i_k})
\]
for each subset $\{i_1, i_2, \ldots, i_k\} \subset \{1, 2, \ldots, n\}$.
1.4.6 The Discrete-Time Multivariate PH Distribution

Consider a discrete-time Markov chain $\{X_k\}$ with state space $\Omega = \{1, 2, \ldots, m, m+1\}$ whose transition probability matrix is given in Eq. (1.39). Let $\Gamma_1, \Gamma_2, \ldots, \Gamma_n$ be $n$ nonempty stochastically closed sets of the state space $\Omega$ such that $\bigcap_{i=1}^{n} \Gamma_i = \{m+1\}$. We define
\[
N_k = \inf\{j : X_j \in \Gamma_k\}.
\]
Let
\[
p(k_1, k_2, \ldots, k_n) = P\{N_1 = k_1, N_2 = k_2, \ldots, N_n = k_n\}.
\]
Then $N = (N_1, N_2, \ldots, N_n)$, or $\{p(k_1, k_2, \ldots, k_n)\}$, is said to be of discrete-time multivariate phase type. For the discrete-time multivariate PH distribution, we call $(\alpha, T; \Gamma_1, \Gamma_2, \ldots, \Gamma_n)$ its irreducible representation. When $k_{i_1} \geq k_{i_2} \geq \cdots \geq k_{i_n} \geq 0$, the discrete-time multivariate PH distribution has the closed-form expression
\[
p(k_1, k_2, \ldots, k_n) = \alpha T^{k_{i_n}} [T, g_{i_n}] \prod_{j=n}^{3} T^{k_{i_{j-1}} - k_{i_j}} [T, g_{i_{j-1}}] \cdot T^{k_{i_1} - k_{i_2}}\, g_{i_1} e
\]
and
\[
E[N_1 N_2 \cdots N_n] = \alpha \sum_{\omega \in \Theta} \prod_{i=1}^{n} (I - T)^{-1} g_{\omega(i)}\, e.
\]
1.5 The Markovian Arrival Processes

In this section, we provide several important types of point processes and list their useful properties. Specifically, we indicate how to construct a MAP in terms of a Markov chain with finitely many states. Finally, we study multivariate Markovian arrival processes. The analysis of this section is twofold: renewal processes and non-renewal processes. First, we provide a unified framework of Markov chains for analyzing two renewal counting processes: the Poisson process and the PH renewal process.
1.5.1 The Poisson Process

Consider an arrival process with the $k$th arrival epoch $\tau_k$ for $k \geq 0$, where $\tau_0 = 0$. Let $X_n = \tau_n - \tau_{n-1}$ for $n \geq 1$. Then $X_n$ is the $n$th interarrival time. Obviously, $\tau_k = \sum_{n=1}^{k} X_n$ for $k \geq 0$. We assume that the random variables $X_n$ for $n \geq 1$ are i.i.d. Figure 1.6 shows a useful relation between the arrival epochs and the interarrival times.

Figure 1.6 The arrival epoch and the interarrival time
Let $N(t)$ be the number of arrivals of the arrival process in the time interval $(0, t]$. In general, we denote by $\{N(t), t \geq 0\}$ the arrival process, which is also called a counting process. We provide a definition for the Poisson process as follows.

Definition 1.2 A Poisson process with rate $\lambda$ is a counting process $\{N(t), t \geq 0\}$ for which
(1) $N(0) = 0$;
(2) for any $0 \leq t_0 < t_1 < \cdots < t_n$, the increments $N(t_1) - N(t_0), N(t_2) - N(t_1), \ldots, N(t_n) - N(t_{n-1})$ are mutually independent; and
(3) for any $s \geq 0$ and $t > 0$,
\[
P\{N(t+s) - N(s) = k\} = \exp\{-\lambda t\} \frac{(\lambda t)^k}{k!}, \quad k \geq 0,
\]
which is independent of the initial epoch $s \geq 0$.

The following proposition provides an intuitive understanding of the Poisson process in terms of the exponential distribution. The proof is clear and is omitted here.

Proposition 1.2 The random variables $X_n$ for $n \geq 1$ are i.i.d. with exponential distribution function $F(x) = 1 - \exp\{-\lambda x\}$ if and only if the counting process $\{N(t), t \geq 0\}$ is a Poisson process with rate $\lambda$.

Since the exponential interarrival times $X_n$ for $n \geq 1$ are i.i.d., the Poisson process with rate $\lambda$ may be regarded as a pure birth process whose infinitesimal generator is given by
\[
Q = \begin{pmatrix}
-\lambda & \lambda & & \\
& -\lambda & \lambda & \\
& & -\lambda & \lambda \\
& & & \ddots & \ddots
\end{pmatrix}.
\]
We write $P(k; t) = P\{N(t) = k\}$ and let
\[
P(t) = \begin{pmatrix}
P(0;t) & P(1;t) & P(2;t) & P(3;t) & \cdots \\
& P(0;t) & P(1;t) & P(2;t) & \cdots \\
& & P(0;t) & P(1;t) & \cdots \\
& & & P(0;t) & \cdots \\
& & & & \ddots
\end{pmatrix}.
\]
Then it is easy to check that $P'(t) = P(t) Q$ or $P'(t) = Q P(t)$, with the initial condition $P(0) = I$. A simple computation yields $P(t) = \exp\{Qt\}$, which gives
\[
P(k; t) = \exp\{-\lambda t\} \frac{(\lambda t)^k}{k!}, \quad k \geq 0.
\]

1.5.2 The PH Renewal Process

The only assumption that differs from the Poisson process analyzed above is
that the distribution function $F(x)$ is taken to be a PH distribution of order $m$ with irreducible representation $(\alpha, T)$. In this case, we need to introduce a matrix sequence $\{P(k; t)\}$ with respect to the $m$ phases. Let $P(k; t) = (P_{j,j'}(k; t))_{1 \leq j, j' \leq m}$, where $P_{j,j'}(k; t)$ is the conditional probability that the Markov chain $T + T^0 \alpha$ is in phase $j'$ at time $t$ and that $k$ renewals occur in $[0, t)$, given that the Markov chain starts in phase $j$ at time 0. The matrix sequence $\{P(k; t)\}$ satisfies the forward Chapman-Kolmogorov differential equations
\[
P'(0; t) = P(0; t) T, \tag{1.41}
\]
\[
P'(k; t) = P(k; t) T + P(k-1; t) T^0 \alpha, \quad k \geq 1, \tag{1.42}
\]
with the initial condition $P(k; 0) = \delta_{0,k} I$ for $k \geq 0$; or the backward Chapman-Kolmogorov differential equations
\[
P'(0; t) = T P(0; t), \tag{1.43}
\]
\[
P'(k; t) = T P(k; t) + T^0 \alpha P(k-1; t), \quad k \geq 1. \tag{1.44}
\]
Let $P^*(z; t) = \sum_{k=0}^{\infty} z^k P(k; t)$. Then $P^*(z; 0) = I$, and it follows from Eq. (1.41) and Eq. (1.42), or Eq. (1.43) and Eq. (1.44), that
\[
P^*(z; t) = \exp\{(T + z T^0 \alpha) t\}. \tag{1.45}
\]
Next, we consider the factorial moment matrices
\[
V_n(t) = \sum_{k=n}^{\infty} \frac{k!}{(k-n)!} P(k; t), \quad n \geq 0.
\]
It follows from Eq. (1.41) and Eq. (1.42) that
\[
V_0'(t) = V_0(t) (T + T^0 \alpha), \tag{1.46}
\]
\[
V_n'(t) = V_n(t) (T + T^0 \alpha) + n V_{n-1}(t) T^0 \alpha, \quad n \geq 1; \tag{1.47}
\]
or from Eq. (1.43) and Eq. (1.44) that
\[
V_0'(t) = (T + T^0 \alpha) V_0(t), \tag{1.48}
\]
\[
V_n'(t) = (T + T^0 \alpha) V_n(t) + n T^0 \alpha V_{n-1}(t), \quad n \geq 1; \tag{1.49}
\]
with the initial conditions $V_0(0) = I$ and $V_n(0) = 0$ for $n \geq 1$. Now, we use a Markov chain to understand Eq. (1.41) to Eq. (1.44) and Eq. (1.46) to Eq. (1.49).
Let
\[
Q = \begin{pmatrix}
T & T^0 \alpha & & \\
& T & T^0 \alpha & \\
& & T & T^0 \alpha \\
& & & \ddots & \ddots
\end{pmatrix} \tag{1.50}
\]
and
\[
P(t) = \begin{pmatrix}
P(0;t) & P(1;t) & P(2;t) & P(3;t) & \cdots \\
& P(0;t) & P(1;t) & P(2;t) & \cdots \\
& & P(0;t) & P(1;t) & \cdots \\
& & & P(0;t) & \cdots \\
& & & & \ddots
\end{pmatrix}.
\]
It is clear that $P'(t) = P(t) Q$ or $P'(t) = Q P(t)$ with the initial condition $P(0) = I$, which leads to Eq. (1.41) to Eq. (1.44). Similarly, let
\[
\widetilde{Q} = \begin{pmatrix}
T + T^0 \alpha & n T^0 \alpha & & \\
& T + T^0 \alpha & n T^0 \alpha & \\
& & T + T^0 \alpha & n T^0 \alpha \\
& & & \ddots & \ddots
\end{pmatrix}
\]
and
\[
V(t) = \begin{pmatrix}
V_0(t) & V_1(t) & V_2(t) & V_3(t) & \cdots \\
& V_0(t) & V_1(t) & V_2(t) & \cdots \\
& & V_0(t) & V_1(t) & \cdots \\
& & & V_0(t) & \cdots \\
& & & & \ddots
\end{pmatrix}.
\]
Obviously, $V'(t) = V(t) \widetilde{Q}$ or $V'(t) = \widetilde{Q} V(t)$ with the initial condition $V(0) = I$, which yields Eq. (1.46) to Eq. (1.49).

It is useful to provide some computations for the matrix sequence $\{V_n(t)\}$. Referring to Sec. 5.1 of Neuts [94], we have
(1) $V_0(t) = \exp\{(T + T^0 \alpha) t\}$.
(2) Let $\theta$ be the stationary probability vector of the Markov chain $T + T^0 \alpha$, and let $\lambda = \theta T^0$. Then $\theta V_1(t) e = \lambda t$.
(3) $\alpha V_1(t) e = \lambda t - 1 - \lambda \alpha \exp\{(T + T^0 \alpha) t\} T^{-1} e$.

For the PH renewal process, we take the renewal density function as
\[
u(t) = \sum_{k=1}^{\infty} f^{k*}(t),
\]
where $f^{k*}(t)$ is the $k$th-fold convolution of the probability density function $f(t)$ of the PH distribution $F(t)$. It is worthwhile to note that $E[N(t)] = 1 + \int_0^t u(x)\,\mathrm{d}x$.

The following proposition expresses the renewal density function; the proof is easy and is omitted here.

Proposition 1.3 For a PH renewal process with PH irreducible representation $(\alpha, T)$, the renewal density function is given by
\[
u(t) = \alpha \exp\{(T + T^0 \alpha) t\} T^0.
\]
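The factorial-moment formulas above can be tested numerically: from Eq. (1.47), $V_1(t) = \int_0^t V_0(s)\,(T^0\alpha)\,V_0(t-s)\,\mathrm{d}s$, which we approximate with Simpson's rule. The sketch below (Erlang-2 example, helper names and tolerances are our own; the value of $T^{-1}e$ is hand-computed for this $T$) checks both $\theta V_1(t)e = \lambda t$ and the expression for $\alpha V_1(t)e$.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def expm(A, terms=30):
    m = len(A)
    norm = max(sum(abs(x) for x in row) for row in A)
    s = 0
    while norm > 0.5:
        norm /= 2.0
        s += 1
    X = [[x / 2 ** s for x in row] for row in A]
    E = [[float(i == j) for j in range(m)] for i in range(m)]
    term = [row[:] for row in E]
    for i in range(1, terms):
        term = [[x / i for x in row] for row in matmul(term, X)]
        E = [[E[r][c] + term[r][c] for c in range(m)] for r in range(m)]
    for _ in range(s):
        E = matmul(E, E)
    return E

# Erlang-2 interarrivals with rate 2: theta = (1/2, 1/2) by symmetry, lambda = 1
alpha = [1.0, 0.0]
T = [[-2.0, 2.0], [0.0, -2.0]]
T0 = [0.0, 2.0]
Q0 = [[T[i][j] + T0[i] * alpha[j] for j in range(2)] for i in range(2)]
theta = [0.5, 0.5]
lam = theta[0] * T0[0] + theta[1] * T0[1]

def V1(t, n=200):
    # Simpson's rule for integral_0^t exp(Q0 s) (T0 alpha) exp(Q0 (t-s)) ds
    h = t / n
    R = [[T0[i] * alpha[j] for j in range(2)] for i in range(2)]
    acc = [[0.0] * 2 for _ in range(2)]
    for k in range(n + 1):
        s = k * h
        A = expm([[q * s for q in row] for row in Q0])
        B = expm([[q * (t - s) for q in row] for row in Q0])
        M = matmul(matmul(A, R), B)
        w = 1.0 if k in (0, n) else (4.0 if k % 2 == 1 else 2.0)
        acc = [[acc[i][j] + w * M[i][j] for j in range(2)] for i in range(2)]
    return [[v * h / 3.0 for v in row] for row in acc]

t = 2.0
V = V1(t)
theta_V1_e = sum(theta[i] * V[i][j] for i in range(2) for j in range(2))
alpha_V1_e = sum(alpha[i] * V[i][j] for i in range(2) for j in range(2))
Tinv_e = [-1.0, -0.5]                     # T^{-1} e for this particular T
E = expm([[q * t for q in row] for row in Q0])
pred = lam * t - 1.0 - lam * sum(alpha[i] * E[i][j] * Tinv_e[j]
                                 for i in range(2) for j in range(2))
print(theta_V1_e, lam * t)
print(alpha_V1_e, pred)
```

Note that in the special case $m = 1$ the renewal process reduces to a Poisson process and $u(t) = \lambda$ identically.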
In the PH renewal process, we define the excess life $\xi(t)$ and the age $\eta(t)$ at time $t$, respectively; both are depicted in Fig. 1.7. The following proposition provides an expression for the distribution of $\xi(t)$; the associated discussion for $\eta(t)$ is similar.

Figure 1.7 The excess life and the age at time t

Proposition 1.4 For a PH renewal process with PH irreducible representation $(\alpha, T)$, the excess life $\xi(t)$ at time $t$ is of phase type with irreducible representation $(\gamma(t), T)$, where $\gamma(t) = \alpha \exp\{(T + T^0 \alpha) t\}$.

Let
\[
A_k = \int_0^{\infty} P(k; t)\,\mathrm{d}H(t), \quad k \geq 0,
\]
and
\[
\bar{A}_k = \int_0^{\infty} P(k; t) [1 - H(t)]\,\mathrm{d}t, \quad k \geq 0.
\]
The following proposition provides a PH description for the two matrix sequences $\{A_k\}$ and $\{\bar{A}_k\}$, respectively.

Proposition 1.5 Suppose $H(t)$ is a PH distribution with irreducible representation $(\beta, S)$.
(1) $\{\alpha A_k e, k \geq 0\}$ is a discrete-time PH distribution with irreducible representation $(\gamma, L)$ of order $mn$, where
\[
\gamma = -(\alpha \otimes \beta)(T \oplus S)^{-1} (T^0 \alpha \otimes I), \qquad
L = -(T \oplus S)^{-1} (T^0 \alpha \otimes I),
\]
\[
\gamma_{mn+1} = -(\alpha \otimes \beta)(T \oplus S)^{-1} (e \otimes S^0), \qquad
L^0 = -(T \oplus S)^{-1} (e \otimes S^0).
\]
(2) $\{\alpha \bar{A}_k e, k \geq 0\}$ is a discrete-time PH distribution with irreducible representation $(\gamma, L)$ of order $mn$, where
\[
\gamma = -(\theta \otimes \beta)(T \oplus S)^{-1} (T^0 \alpha \otimes I), \qquad
L = -(T \oplus S)^{-1} (T^0 \alpha \otimes I),
\]
\[
\gamma_{mn+1} = -(\theta \otimes \beta)(T \oplus S)^{-1} (e \otimes S^0), \qquad
L^0 = -(T \oplus S)^{-1} (e \otimes S^0),
\]
where $\theta$ is the stationary probability vector of the Markov chain $T + T^0 \alpha$.
1.5.3 The Markovian Modulated Poisson Process

We consider an arrival process $\{(N(t), J(t)) : t \geq 0\}$ which extends the Poisson process to a Poisson process whose rate is modulated by $m$ environment states. This arrival process is written as MMPP and is depicted in Fig. 1.8, which gives an obvious physical interpretation.

Figure 1.8 The state transitions in an MMPP

From Fig. 1.8, it is seen that the random environment $\{J(t) : t \geq 0\}$ is a Markov chain with $m$ environment states whose infinitesimal generator is given by $(c_{i,j})_{1 \leq i, j \leq m}$. When $J(t) = i$, the counting process $\{N(t) : t \geq 0\}$ is a Poisson process with rate $\lambda_i$ for $1 \leq i \leq m$. To express such an arrival process in terms of a Markov chain, we write
\[
C = (c_{i,j})_{1 \leq i, j \leq m} - \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_m)
\quad \text{and} \quad
D = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_m).
\]
Using the matrix pair $(C, D)$, the MMPP can be described as a block-structured pure birth process with environment block $C$ and birth rate block $D$. The corresponding infinitesimal generator is similar to that of the PH renewal process, which has environment block $C = T$ and birth rate block $D = T^0 \alpha$; see Eq. (1.50).
1.5.4 The Markovian Modulated PH Process

We extend the MMPP to a more general arrival process: the Markovian modulated PH process (MMPHP). The only difference from the MMPP is that when $J(t) = i$, the counting process $\{N(t) : t \geq 0\}$ is a PH renewal process with PH irreducible representation $(\alpha_i, T_i)$ of order $n_i$ for $1 \leq i \leq m$. In this case, the matrix pair $(C, D)$ is given by
\[
C = \begin{pmatrix}
c_{1,1} I + T_1 & c_{1,2}\, e \alpha_2 & c_{1,3}\, e \alpha_3 & \cdots & c_{1,m}\, e \alpha_m \\
c_{2,1}\, e \alpha_1 & c_{2,2} I + T_2 & c_{2,3}\, e \alpha_3 & \cdots & c_{2,m}\, e \alpha_m \\
c_{3,1}\, e \alpha_1 & c_{3,2}\, e \alpha_2 & c_{3,3} I + T_3 & \cdots & c_{3,m}\, e \alpha_m \\
\vdots & \vdots & \vdots & & \vdots \\
c_{m,1}\, e \alpha_1 & c_{m,2}\, e \alpha_2 & c_{m,3}\, e \alpha_3 & \cdots & c_{m,m} I + T_m
\end{pmatrix}
\]
and
\[
D = \mathrm{diag}(T_1^0 \alpha_1, T_2^0 \alpha_2, T_3^0 \alpha_3, \ldots, T_m^0 \alpha_m).
\]
1.5.5 The Markovian Arrival Processes

Observing the PH renewal process, the MMPP and the MMPHP, it is easy to see that each of them corresponds to a matrix pair $(C, D)$, which is then used to express a block-structured pure birth process whose infinitesimal generator is given by
\[
Q = \begin{pmatrix}
C & D & & \\
& C & D & \\
& & C & D \\
& & & \ddots & \ddots
\end{pmatrix}. \tag{1.51}
\]
We now consider the MAP, in which the elements of the matrices $C$ and $D$ are more general: the matrix $C$ has negative diagonal elements and nonnegative off-diagonal elements; the matrix $D$ is non-zero and nonnegative; and the matrix $C + D$ is the infinitesimal generator of an irreducible Markov chain with $m$ states.

We provide a physical interpretation in which the MAP is obtained as a generalization of the MMPP. When the Markov environment process is in state $J(t) = i$, the corresponding Poisson process depends not only on the present state $i$ but also on the next state $j$ to which the environment jumps. Thus, the arrival rates of the MAP are given by $\lambda_{i,j}$ for $1 \leq i, j \leq m$; see Fig. 1.9 for a clear illustration.

Figure 1.9 The state transitions in a MAP

In this situation, we obviously have
\[
C = -\mathrm{diag}\left( \sum_{j=1}^{m} \lambda_{1,j}, \sum_{j=1}^{m} \lambda_{2,j}, \ldots, \sum_{j=1}^{m} \lambda_{m,j} \right)
\]
and
\[
D = \begin{pmatrix}
\lambda_{1,1} & \lambda_{1,2} & \cdots & \lambda_{1,m} \\
\lambda_{2,1} & \lambda_{2,2} & \cdots & \lambda_{2,m} \\
\vdots & \vdots & & \vdots \\
\lambda_{m,1} & \lambda_{m,2} & \cdots & \lambda_{m,m}
\end{pmatrix}.
\]
Let $\theta$ be the stationary probability vector of the Markov chain $C + D$. The stationary arrival rate of the MAP is given by $\lambda = \theta D e$.

We write $P(k; t) = (P_{j,j'}(k; t))_{1 \leq j, j' \leq m}$, where $P_{j,j'}(k; t)$ is the conditional probability that the Markov chain $C + D$ is in phase $j'$ at time $t$ and that $k$ arrivals occur in $[0, t)$, given that the Markov chain starts in phase $j$ at time 0. The matrix sequence $\{P(k; t)\}$ satisfies the forward Chapman-Kolmogorov differential equations
\[
P'(0; t) = P(0; t) C,
\]
\[
P'(k; t) = P(k; t) C + P(k-1; t) D, \quad k \geq 1,
\]
with the initial condition $P(k; 0) = \delta_{0,k} I$ for $k \geq 0$; or the backward Chapman-Kolmogorov differential equations
\[
P'(0; t) = C P(0; t),
\]
\[
P'(k; t) = C P(k; t) + D P(k-1; t), \quad k \geq 1.
\]
Let $P^*(z; t) = \sum_{k=0}^{\infty} z^k P(k; t)$. Then it is easy to see that
\[
P^*(z; t) = \exp\{(C + zD) t\}.
\]
Since the MAP is not a renewal process, it is necessary to discuss the dependence structure of the MAP. To this end, we assume that an arrival occurs at time 0. Let $\tau_k$ be the $k$th arrival epoch of the MAP for $k \geq 0$, where $\tau_0 = 0$, and let $X_n = \tau_n - \tau_{n-1}$. Then $X_n$ is the $n$th interarrival time of the MAP for $n \geq 1$. In general, the random variables $X_n$ for $n \geq 1$ are not independent, but they are identically distributed with marginal densities given by
\[
f(t) = \theta \exp\{Ct\} D e.
\]
Define the matrix $W(t)$ with $(i, j)$th element $W_{i,j}(t)$, the conditional probability density that an interarrival time terminates in phase $j$ at time $t$, given that it begins in phase $i$. It is easy to check that $W(t) = \exp\{Ct\} D$. The transition probability matrix of the MAP that moves the phase of the environment process $\{J(t), t \geq 0\}$ from one arrival epoch to the next is given by
\[
W = \int_0^{\infty} W(t)\,\mathrm{d}t = \int_0^{\infty} \exp\{Ct\} D\,\mathrm{d}t = (-C)^{-1} D.
\]
It is clear that $\theta W = \theta$. The joint probability density function of the two random variables $X_l$ and $X_{l+k}$ is given by
\[
f_{k,l}(x, y) = \theta \exp\{Cx\} D W^{k-1} \exp\{Cy\} D e.
\]
Therefore, we obtain
\[
E[X_l X_{l+k}] = \int_0^{\infty} \int_0^{\infty} x y f_{k,l}(x, y)\,\mathrm{d}x\,\mathrm{d}y = \theta C^{-1} W^k C^{-1} e.
\]
Noting that
\[
E[X_l] E[X_{l+k}] = (\theta C^{-1} e)^2 = \theta C^{-1} e\, \theta W^k C^{-1} e,
\]
the correlation between the two random variables $X_l$ and $X_{l+k}$ is given by
\[
\rho_k = \frac{\theta C^{-1} (I - e \theta) W^k C^{-1} e}{\theta C^{-1} (2I - e \theta) C^{-1} e}.
\]
The joint probability density of the $k$ random variables $X_1, X_2, \ldots, X_k$ is given by
\[
f(x_1, x_2, \ldots, x_k) = \theta \exp\{C x_1\} D \exp\{C x_2\} D \cdots \exp\{C x_k\} D e.
\]
The MAPs have an important closure property under superposition, which is similar to that of the Poisson processes. We describe this closure property of the MAPs in the following proposition; the proof is omitted here.

Proposition 1.6 If $\{(N_i(t), J_i(t)), t \geq 0\}$ is a MAP with matrix descriptor $(C_i, D_i)$ of size $m_i$ for $1 \leq i \leq K$, then $\left\{ \left( \sum_{i=1}^{K} N_i(t),\ (J_1(t), J_2(t), \ldots, J_K(t)) \right), t \geq 0 \right\}$ is also a MAP, with matrix descriptor $(C_1 \oplus C_2 \oplus \cdots \oplus C_K,\ D_1 \oplus D_2 \oplus \cdots \oplus D_K)$ of size $\prod_{i=1}^{K} m_i$.
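The superposition rule is easy to test numerically: the stationary rate $\theta D e$ of the Kronecker-sum descriptor must equal the sum of the component rates. In the sketch below (the two toy MAPs and the helper names are our own), stationary vectors are obtained by power iteration on a uniformized chain.

```python
# Check Proposition 1.6: rate(C1 (+) C2, D1 (+) D2) = rate(C1, D1) + rate(C2, D2).

def kron(A, B):
    ma, na, mb, nb = len(A), len(A[0]), len(B), len(B[0])
    return [[A[i // mb][j // nb] * B[i % mb][j % nb]
             for j in range(na * nb)] for i in range(ma * mb)]

def kronsum(A, B):
    Ia = [[float(i == j) for j in range(len(A))] for i in range(len(A))]
    Ib = [[float(i == j) for j in range(len(B))] for i in range(len(B))]
    X, Y = kron(A, Ib), kron(Ia, B)
    return [[X[i][j] + Y[i][j] for j in range(len(X))] for i in range(len(X))]

def stationary(Q):
    # power iteration on the uniformized chain P = I + Q/c
    m = len(Q)
    c = max(-Q[i][i] for i in range(m)) + 1.0
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / c for j in range(m)]
         for i in range(m)]
    v = [1.0 / m] * m
    for _ in range(5000):
        v = [sum(v[i] * P[i][j] for i in range(m)) for j in range(m)]
    return v

def rate(C, D):
    m = len(C)
    th = stationary([[C[i][j] + D[i][j] for j in range(m)] for i in range(m)])
    return sum(th[i] * D[i][j] for i in range(m) for j in range(m))

C1 = [[-3.0, 1.0], [1.0, -4.0]]
D1 = [[1.5, 0.5], [2.0, 1.0]]       # rate works out to 7/3 for this example
C2 = [[-5.0]]
D2 = [[5.0]]                        # a plain Poisson process of rate 5

r1, r2 = rate(C1, D1), rate(C2, D2)
rk = rate(kronsum(C1, C2), kronsum(D1, D2))
print(r1, r2, rk)
```

The same `rate` helper also reproduces the MMPP rate $\sum_i \theta_i \lambda_i$ when $D$ is diagonal.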
1.5.6 The Batch Markovian Arrival Process

We extend the MAP to a batch Markovian arrival process (BMAP) with matrix descriptor $(C, D_1, D_2, D_3, \ldots)$, where each element of the matrix $D_k$ denotes an arrival rate for batches of size $k$. The corresponding block-structured batch pure birth process has the infinitesimal generator
\[
Q = \begin{pmatrix}
C & D_1 & D_2 & D_3 & \cdots \\
& C & D_1 & D_2 & \cdots \\
& & C & D_1 & \cdots \\
& & & \ddots & \ddots
\end{pmatrix}.
\]
For the BMAP, let $P(k; t) = (P_{j,j'}(k; t))_{1 \leq j, j' \leq m}$, where $P_{j,j'}(k; t)$ is the conditional probability that the Markov chain $C + \sum_{k=1}^{\infty} D_k$ is in phase $j'$ at time $t$ and that $k$ arrivals occur in $[0, t)$, given that the Markov chain starts in phase $j$ at time 0. The matrix sequence $\{P(k; t)\}$ satisfies the forward Chapman-Kolmogorov differential equations
\[
P'(0; t) = P(0; t) C,
\]
\[
P'(k; t) = P(k; t) C + \sum_{i=1}^{k} P(k-i; t) D_i, \quad k \geq 1,
\]
with the initial condition $P(k; 0) = \delta_{0,k} I$ for $k \geq 0$; or the backward Chapman-Kolmogorov differential equations
\[
P'(0; t) = C P(0; t),
\]
\[
P'(k; t) = C P(k; t) + \sum_{i=1}^{k} D_i P(k-i; t), \quad k \geq 1.
\]
Let $P^*(z; t) = \sum_{k=0}^{\infty} z^k P(k; t)$. Then it is easy to see that
\[
P^*(z; t) = \exp\left\{ \left( C + \sum_{k=1}^{\infty} z^k D_k \right) t \right\}.
\]
The BMAPs also have the closure property under superposition, which is described in the following proposition; the proof is omitted here.

Proposition 1.7 If $\{(N_i(t), J_i(t)), t \geq 0\}$ is a BMAP with matrix descriptor $(C^{(i)}, D_1^{(i)}, D_2^{(i)}, \ldots)$ of size $m_i$ for $i = 1, 2$, then $\{(N_1(t) + N_2(t),\ (J_1(t), J_2(t))), t \geq 0\}$ is also a BMAP, with matrix descriptor $(C^{(1)} \oplus C^{(2)}, D_1^{(1)} \oplus D_1^{(2)}, D_2^{(1)} \oplus D_2^{(2)}, \ldots)$ of size $m_1 m_2$.
1.5.7 The Multivariate Markovian Arrival Process

Now, we consider a K-dimensional Markovian arrival process (MMAP[K]) and a K-dimensional batch Markovian arrival process (MBMAP[K]). An MMAP[K] $\{(N(t), J(t)) : t \geq 0\}$ is constructed from $K$ different classes of arrivals (for example, customers, products and orders), where $N(t) = (N_1(t), N_2(t), \ldots, N_K(t))$, and $N_i(t)$ and $J(t)$ denote the number of arrivals of the $i$th class in the time interval $[0, t)$ for $1 \leq i \leq K$, and the phase of the Markov environment at time $t$, respectively.

In general, an MMAP[K] can be described by an irreducible matrix descriptor $(C, D^{(1)}, D^{(2)}, \ldots, D^{(K)})$ of size $m$, where $C$ has negative diagonal elements and nonnegative off-diagonal elements; the matrix $D^{(i)} \geq 0$ is the rate matrix of the $i$th class of arrivals for $1 \leq i \leq K$; and the matrix $C + \sum_{i=1}^{K} D^{(i)}$ is the infinitesimal generator of an irreducible Markov chain with $m$ states.

Let $\theta$ be the stationary probability vector of the Markov chain $C + \sum_{i=1}^{K} D^{(i)}$. The stationary arrival rate of the $i$th class of arrivals is given by $\lambda_i = \theta D^{(i)} e$, and the total stationary arrival rate of the MMAP[K] is given by $\lambda = \sum_{i=1}^{K} \lambda_i$. Under the stability conditions, the probability that an arbitrary arrival is of the $i$th class is given by $\lambda_i / \lambda$ for $1 \leq i \leq K$.

We write
\[
P_{j,j'}(n_1, n_2, \ldots, n_K; t) = P\{N_1(t) = n_1, \ldots, N_K(t) = n_K;\ J(t) = j' \mid N_1(0) = 0, \ldots, N_K(0) = 0,\ J(0) = j\}
\]
and $P(n_1, n_2, \ldots, n_K; t) = (P_{j,j'}(n_1, n_2, \ldots, n_K; t))_{1 \leq j, j' \leq m}$. The matrix sequence $\{P(n_1, n_2, \ldots, n_K; t)\}$ satisfies the forward Chapman-Kolmogorov differential equations
\[
P'(0, 0, \ldots, 0; t) = P(0, 0, \ldots, 0; t) C
\]
and, for $n_i \geq 1$ and $1 \leq i \leq K$,
\[
P'(n_1, n_2, \ldots, n_K; t) = P(n_1, n_2, \ldots, n_K; t) C + \sum_{i=1}^{K} P(n_1, \ldots, n_{i-1}, n_i - 1, n_{i+1}, \ldots, n_K; t) D^{(i)},
\]
with the initial condition
\[
P(n_1, n_2, \ldots, n_K; 0) = \begin{cases} I, & n_1 = n_2 = \cdots = n_K = 0, \\ 0, & \text{otherwise}; \end{cases}
\]
or the backward Chapman-Kolmogorov differential equations
\[
P'(0, 0, \ldots, 0; t) = C P(0, 0, \ldots, 0; t)
\]
and, for $n_i \geq 1$ and $1 \leq i \leq K$,
\[
P'(n_1, n_2, \ldots, n_K; t) = C P(n_1, n_2, \ldots, n_K; t) + \sum_{i=1}^{K} D^{(i)} P(n_1, \ldots, n_{i-1}, n_i - 1, n_{i+1}, \ldots, n_K; t).
\]
Let
\[
P^*(z_1, z_2, \ldots, z_K; t) = \sum_{n_1=0}^{\infty} \sum_{n_2=0}^{\infty} \cdots \sum_{n_K=0}^{\infty} z_1^{n_1} z_2^{n_2} \cdots z_K^{n_K} P(n_1, n_2, \ldots, n_K; t).
\]
Then
\[
P^*(z_1, z_2, \ldots, z_K; t) = \exp\left\{ \left( C + \sum_{i=1}^{K} z_i D^{(i)} \right) t \right\}.
\]
In what follows we provide some useful properties of the MMAP[K]:
(1) The arrival process of the $i$th class is an ordinary MAP with irreducible matrix descriptor $\left( C + \sum_{j \neq i} D^{(j)},\ D^{(i)} \right)$.
(2) The arrival process of the classes $i_1$ to $i_L$ is an MMAP[L] with irreducible matrix descriptor $\left( C + \sum_{j \neq i_1, \ldots, i_L} D^{(j)},\ D^{(i_1)}, D^{(i_2)}, \ldots, D^{(i_L)} \right)$.
(3) The superposition of an MMAP[K] and an MMAP[L] is still an MMAP[M], where the number M of classes depends on the practical system.
1.5.8 The Multivariate Batch Markovian Arrival Process

An MBMAP[K] $\{(N(t), J(t)) : t \geq 0\}$ is constructed from $K$ different classes of arrivals with stochastic batch sizes, where $N(t) = (N_1(t), N_2(t), \ldots, N_K(t))$, and $N_i(t)$ and $J(t)$ denote the number of arrivals of the $i$th class in the time interval $[0, t)$ for $1 \leq i \leq K$, and the phase of the Markovian environment at time $t$, respectively.

Let $\mathcal{N}_K$ be the set of non-zero K-tuples of nonnegative integers. A generic element of $\mathcal{N}_K$ is denoted by $n = (n_1, n_2, \ldots, n_K)$. An MBMAP[K] can be described by an irreducible matrix descriptor $(C, D_n, n \in \mathcal{N}_K)$ of size $m$, where $C$ has negative diagonal elements and nonnegative off-diagonal elements; the matrix $D_n \geq 0$ is the rate matrix of arrivals of batch size $n$ for $n \in \mathcal{N}_K$; and the matrix $C + \sum_{n \in \mathcal{N}_K} D_n$ is the infinitesimal generator of an irreducible Markov chain with $m$ states.

We write
\[
C^{(k)} = C + \sum_{n_k = 0,\ n \in \mathcal{N}_K} D_n
\quad \text{and} \quad
D_r^{(k)} = \sum_{n_k = r,\ n \in \mathcal{N}_K} D_n.
\]
Then the arrival process of the $k$th class is a BMAP with irreducible matrix descriptor $(C^{(k)}, D_1^{(k)}, D_2^{(k)}, \ldots)$. Let $\theta$ be the stationary probability vector of the Markov chain $C + \sum_{n \in \mathcal{N}_K} D_n$. The stationary arrival rate of the $k$th class of arrivals is given by $\lambda_k = \theta \sum_{r=1}^{\infty} r D_r^{(k)} e$. Let
\[
D_r = \sum_{n_1 + n_2 + \cdots + n_K = r,\ n \in \mathcal{N}_K} D_n.
\]
Then the counting process $\{(N(t) e, J(t))\}$ is a BMAP with irreducible matrix descriptor $(C, D_1, D_2, \ldots)$, and the stationary arrival rate of this BMAP is given by $\lambda = \sum_{i=1}^{K} \lambda_i$. Under the stability conditions, the probability that an arbitrary arrival is of the $k$th class is given by $\lambda_k / \lambda$ for $1 \leq k \leq K$. We write
\[
P_{j,j'}(n; t) = P\{N(t) = n;\ J(t) = j' \mid N(0) = 0,\ J(0) = j\}
\]
and $P(n; t) = (P_{j,j'}(n; t))_{1 \leq j, j' \leq m}$, where $n = (n_1, n_2, \ldots, n_K)$. The matrix sequence $\{P(n; t)\}$ satisfies the forward Chapman-Kolmogorov differential equations
\[
P'(0; t) = P(0; t) C
\]
and, for $n \in \mathcal{N}_K$,
\[
P'(n; t) = P(n; t) C + \sum_{h \leq n,\ h \in \mathcal{N}_K} P(n - h; t) D_h;
\]
or the backward Chapman-Kolmogorov differential equations
\[
P'(0; t) = C P(0; t)
\]
and, for $n \in \mathcal{N}_K$,
\[
P'(n; t) = C P(n; t) + \sum_{h \leq n,\ h \in \mathcal{N}_K} D_h P(n - h; t).
\]
Let $z = (z_1, z_2, \ldots, z_K)$, $z^n = z_1^{n_1} z_2^{n_2} \cdots z_K^{n_K}$ and
\[
P^*(z; t) = P(0; t) + \sum_{n \in \mathcal{N}_K} z^n P(n; t).
\]
Then
\[
P^*(z; t) = \exp\left\{ \left( C + \sum_{n \in \mathcal{N}_K} z^n D_n \right) t \right\}.
\]
For the stationary version of the MBMAP[K], we list the following useful results:
(1) Backward looking: the probability that the last arrival before an arbitrary time $t$ has batch size $n$ is given by $\theta D_n (-C)^{-1} e$ for $n \in \mathcal{N}_K$.
(2) Forward looking: the probability that the first arrival after an arbitrary time $t$ has batch size $n$ is given by $\theta (-C)^{-1} D_n e$ for $n \in \mathcal{N}_K$.
(3) At an arrival: the probability that an arbitrary arrival has batch size $n$ is given by $\theta D_n e / \lambda$ for $n \in \mathcal{N}_K$.

We now extend the above probabilities to those of $r$ consecutive arrivals with batch sizes $n_1, n_2, \ldots, n_r$, respectively. We list the useful results as follows:
(1) Backward looking: the probability that the last $r$ arrivals before an arbitrary time $t$ have batch sizes $n_1, n_2, \ldots, n_r$ is given by
\[
\theta D_{n_1} (-C)^{-1} D_{n_2} (-C)^{-1} \cdots D_{n_r} (-C)^{-1} e.
\]
(2) Forward looking: the probability that the first $r$ arrivals after an arbitrary time $t$ have batch sizes $n_1, n_2, \ldots, n_r$ is given by
\[
\theta (-C)^{-1} D_{n_1} (-C)^{-1} D_{n_2} \cdots (-C)^{-1} D_{n_r} e.
\]
(3) At an arrival: the probability that $r$ consecutive arrivals have batch sizes $n_1, n_2, \ldots, n_r$ is given by $\theta D_{n_1} D_{n_2} \cdots D_{n_r} e / \lambda$.
1.6 Matrix-Exponential Distribution

In this section, we discuss a useful distribution: the matrix-exponential distribution, which is a formal generalization of the PH distribution. We analyze some useful properties of this distribution and of its renewal process.

Now, we extend the PH distribution with probability density function
$$f(x) = \hat{\alpha}\exp\{\hat{T}x\}\hat{T}^{0}$$
to a more general distribution: the matrix-exponential distribution with probability density function
$$b(x) = \alpha\exp\{Tx\}s.$$
Thus, a matrix-exponential distribution can be determined by a triple $(\alpha, T, s)$ of size $m$, denoted as MED$(\alpha, T, s)$. The matrix-exponential distribution satisfies two basic conditions: $\alpha s \geq 0$ and $-\alpha T^{-1}s = 1$. Specifically, if $\alpha$ is a probability vector, $T$ is an infinitesimal generator of a Markov chain with an absorbing state and $s = -Te$, then the MED$(\alpha, T, s)$ is a PH distribution. It is easy to check that the MED$(\alpha, T, s)$ has the probability distribution function
$$B(x) = 1 + \alpha\exp\{Tx\}T^{-1}s,$$
the Laplace transform
$$b^{*}(s) = \alpha(sI - T)^{-1}s,$$
which is rational, and the $n$th moment
$$\mu_n = (-1)^{n+1}\, n!\,\alpha T^{-(n+1)}s.$$
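The quantities above are straightforward to evaluate numerically. The following sketch (the descriptor is illustrative: an Erlang-2, which is PH and hence also matrix-exponential) checks the normalization and the moment formula:

```python
import math

import numpy as np
from scipy.linalg import expm

# Hypothetical descriptor (alpha, T, s): an Erlang-2 with rate 1.
alpha = np.array([1.0, 0.0])
T = np.array([[-1.0, 1.0],
              [0.0, -1.0]])
s = -T @ np.ones(2)                 # s = -Te in the PH special case
Tinv = np.linalg.inv(T)

def density(x):                     # b(x) = alpha exp{Tx} s
    return alpha @ expm(T * x) @ s

def cdf(x):                         # B(x) = 1 + alpha exp{Tx} T^{-1} s
    return 1.0 + alpha @ expm(T * x) @ Tinv @ s

def moment(n):                      # mu_n = (-1)^{n+1} n! alpha T^{-(n+1)} s
    return (-1) ** (n + 1) * math.factorial(n) * (
        alpha @ np.linalg.matrix_power(Tinv, n + 1) @ s)

print(-alpha @ Tinv @ s)            # normalization -alpha T^{-1} s = 1
print(moment(1), moment(2))         # Erlang-2, rate 1: mean 2, E[X^2] = 6
```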
Proposition 1.8 The following statements are equivalent:

(1) $b(x) = \alpha\exp\{Tx\}s$;

(2) $b^{*}(s)$ is rational; and

(3) $b(x) = \sum_{j=0}^{n} c_j x^{j} \exp\{\eta_j x\}$.
The following theorem provides a useful relation between the matrix-exponential distribution and the probability distribution with rational Laplace transform.

Theorem 1.3 The Laplace transform of a matrix-exponential distribution can be written as
$$b^{*}(s) = \frac{b_1 + b_2 s + \cdots + b_n s^{n-1}}{s^{n} + a_1 s^{n-1} + \cdots + a_{n-1}s + a_n}$$
for some $n \geq 1$ and some constants $a_1, a_2, \ldots, a_n$, $b_1, b_2, \ldots, b_n$. At the same time, the matrix-exponential distribution has the matrix descriptor $(\alpha, T, s)$ of size $n$, where
$$\alpha = (b_1, b_2, \ldots, b_n),$$
$$T = \begin{pmatrix} 0 & 1 & & & \\ & 0 & 1 & & \\ & & \ddots & \ddots & \\ & & & 0 & 1 \\ -a_n & -a_{n-1} & -a_{n-2} & \cdots & -a_1 \end{pmatrix}$$
and $s = (0, 0, \ldots, 0, 1)^{T}$.
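The companion-form descriptor of Theorem 1.3 is easy to build directly from the coefficients of the rational Laplace transform. A minimal sketch (the helper name and the example are illustrative, not from the text):

```python
import numpy as np
from scipy.linalg import expm

def companion_descriptor(a, b):
    """Build the (alpha, T, s) of Theorem 1.3 from the coefficients of
    b*(s) = (b1 + b2 s + ... + bn s^{n-1}) / (s^n + a1 s^{n-1} + ... + an),
    with a = [a1, ..., an] and b = [b1, ..., bn]."""
    n = len(a)
    T = np.zeros((n, n))
    T[np.arange(n - 1), np.arange(1, n)] = 1.0   # superdiagonal of ones
    T[-1, :] = -np.array(a, dtype=float)[::-1]   # last row: -an, ..., -a1
    alpha = np.array(b, dtype=float)
    s = np.zeros(n)
    s[-1] = 1.0
    return alpha, T, s

# Example: b*(s) = 1/(s+1)^2 = 1/(s^2 + 2s + 1), i.e. Erlang-2 with rate 1,
# whose density is b(x) = x e^{-x}.
alpha, T, s = companion_descriptor([2.0, 1.0], [1.0, 0.0])
x = 0.5
val = alpha @ expm(T * x) @ s
print(val)   # x e^{-x} at x = 0.5: about 0.3033
```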
Let the random variable $X$ be matrix-exponential with matrix descriptor $(\alpha, T, s)$, and
$$F_y(x) = P\{X \leq x + y \mid X > y\}.$$
Then $F_y(x)$ is also matrix-exponential with matrix descriptor $(\alpha_y, T, s)$, where
$$\alpha_y = \frac{\alpha\exp\{Ty\}}{-\alpha\exp\{Ty\}T^{-1}s}.$$

If $A$ is an invertible matrix of size $m$, then
$$\text{MED}(\alpha, T, s) \sim \text{MED}(\alpha A, A^{-1}TA, A^{-1}s) \sim \text{MED}(\alpha A^{-1}, ATA^{-1}, As). \tag{1.52}$$
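The similarity invariance in Eq. (1.52) can be checked directly: the density is unchanged because $\alpha A\, e^{A^{-1}TAx}\, A^{-1}s = \alpha\, e^{Tx}\, s$. A small sketch (the descriptor and the matrix $A$ are illustrative):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical descriptor (Erlang-2) and an arbitrary invertible matrix A
alpha = np.array([1.0, 0.0])
T = np.array([[-1.0, 1.0], [0.0, -1.0]])
s = np.array([0.0, 1.0])
A = np.array([[2.0, 1.0], [1.0, 1.0]])       # invertible: det = 1
Ainv = np.linalg.inv(A)

def dens(a, M, v, x):                        # density of MED(a, M, v)
    return a @ expm(M * x) @ v

for x in (0.3, 1.0, 2.5):
    d0 = dens(alpha, T, s, x)
    d1 = dens(alpha @ A, Ainv @ T @ A, Ainv @ s, x)   # MED(aA, A^{-1}TA, A^{-1}s)
    d2 = dens(alpha @ Ainv, A @ T @ Ainv, A @ s, x)   # MED(aA^{-1}, ATA^{-1}, As)
    print(np.isclose(d0, d1), np.isclose(d0, d2))     # True True at every x
```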
For an arbitrary MED$(\beta, R, r)$ of size $m$, it is necessary to give a closed PH-type representation $(\alpha, T, T^{0})$. To do this, it follows from Eq. (1.52) that MED$(\beta, R, r) \sim$ MED$(\beta A, A^{-1}RA, A^{-1}r)$. Let $\alpha = \beta A$, $T = A^{-1}RA$ and $T^{0} = -Te = A^{-1}r$. Therefore, $-A^{-1}RAe = A^{-1}r$, which leads to $Ae = -R^{-1}r$. To determine the matrix $A = (a_{i,j})$, we write $(r_1, r_2, \ldots, r_m)^{T} = -R^{-1}r$. Using $Ae = -R^{-1}r$, we may take a special matrix $A$ as follows:

(1) If $r_i \neq 0$, then we take
$$a_{i,j} = \begin{cases} r_i, & j = i, \\ 0, & j \neq i. \end{cases}$$

(2) If $r_i = 0$, then we take
$$a_{i,j} = \begin{cases} 1, & j = i, \\ -1, & j = i + 1, \\ 0, & \text{otherwise}. \end{cases}$$

It is easy to check that the matrix $A$ is invertible and $Ae = -R^{-1}r$.

Now, we study the minimal representation of the matrix-exponential distribution. Let
$$R_p = \text{span}\{s, Ts, T^2 s, \ldots, T^p s\},$$
$$R_\infty = \text{span}\{T^n s : n = 0, 1, 2, \ldots\},$$
$$R_e = \text{span}\{\exp\{Tx\}s : x \geq 0\};$$
and
$$L_p = \text{span}\{\alpha, \alpha T, \alpha T^2, \ldots, \alpha T^p\},$$
$$L_\infty = \text{span}\{\alpha T^n : n = 0, 1, 2, \ldots\},$$
$$L_e = \text{span}\{\alpha\exp\{Tx\} : x \geq 0\}.$$
It is easy to check that $R_p \subseteq R_\infty = R_e$ and $L_p \subseteq L_\infty = L_e$. Let $(\alpha, T, s)$ and $(\beta, T, s)$ be two representations of size $m$ of a matrix-exponential distribution. If $\dim(R_m) = m$, then $\alpha = \beta$. On the other hand, let $(\alpha, T, s)$ and $(\alpha, T, t)$ be two representations of size $m$ of a matrix-exponential distribution. If $\dim(L_m) = m$, then $s = t$.
The following theorem provides a necessary and sufficient condition under which the representation $(\alpha, T, s)$ of a matrix-exponential distribution is minimal.

Theorem 1.4 A representation $(\alpha, T, s)$ of a matrix-exponential distribution of size $m$ is minimal if and only if $\dim(R_m) = \dim(L_m) = m$.

For a representation $(\alpha, T, s)$ of a matrix-exponential distribution, we can often construct its minimal representation. The steps for finding the minimal representation are given in the following theorem.

Theorem 1.5 Suppose $(\alpha, T, s)$ of size $m$ is a matrix representation of a matrix-exponential distribution.

(1) Let $p = \dim(R_m) \leq m$. We denote by $x_1, x_2, \ldots, x_p$ a basis for the linear space $R_m$, and define an $m \times p$ matrix $A = (x_1, x_2, \ldots, x_p)$ and the change-of-basis transformation $B$ of size $p \times m$ such that $Bx_i = e_i$ for $1 \leq i \leq p$, where $e_1, e_2, \ldots, e_p$ denote the Euclidean basis vectors. Then $(\alpha A, BTA, Bs)$ is a $p$-dimensional representation of the matrix-exponential distribution.

(2) Let $q = \dim(L_m) \leq m$. We denote by $y_1, y_2, \ldots, y_q$ a basis for the linear space $L_m$, and define a $q \times m$ matrix $C = (y_1^T, y_2^T, \ldots, y_q^T)^T$ and the change-of-basis transformation $D$ of size $m \times q$ such that $y_i D = e_i$ for $1 \leq i \leq q$, where $e_1, e_2, \ldots, e_q$ denote the Euclidean basis vectors. Then $(\alpha D, CTD, Cs)$ is a $q$-dimensional representation of the matrix-exponential distribution.
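The minimality test of Theorem 1.4 amounts to two Krylov-matrix rank computations, and step (1) of Theorem 1.5 can be carried out with any basis of $R_m$ and a left inverse $B$ of $A$. A sketch (the 3-dimensional descriptor is illustrative; it carries a redundant third phase):

```python
import numpy as np
from scipy.linalg import expm, orth

# Hypothetical 3-dimensional descriptor with a redundant phase: the third
# phase is never reached from alpha and never feeds s.
alpha = np.array([1.0, 0.0, 0.0])
T = np.array([[-1.0, 1.0, 0.0],
              [0.0, -1.0, 0.0],
              [0.0, 0.0, -2.0]])
s = np.array([0.0, 1.0, 0.0])
m = 3

# Krylov matrices spanning R_m = span{s, Ts, ...} and L_m = span{alpha, alpha T, ...}
R = np.column_stack([np.linalg.matrix_power(T, k) @ s for k in range(m)])
L = np.vstack([alpha @ np.linalg.matrix_power(T, k) for k in range(m)])
print(np.linalg.matrix_rank(R), np.linalg.matrix_rank(L))  # 2 2: not minimal

# Step (1) of Theorem 1.5 with an orthonormal basis of R_m and B = pinv(A),
# so that B x_i = e_i on the chosen basis vectors x_i.
A = orth(R)                     # columns: a basis x_1, ..., x_p of R_m
B = np.linalg.pinv(A)
alpha2, T2, s2 = alpha @ A, B @ T @ A, B @ s
x = 0.7                         # the two descriptors give the same density
print(np.isclose(alpha @ expm(T * x) @ s, alpha2 @ expm(T2 * x) @ s2))
```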
Now, we consider a renewal process where the interarrival distribution $B(x)$ is matrix-exponential with matrix representation $(\alpha, T, s)$. Let the renewal density function be $u(x) = \sum_{n=1}^{\infty} b^{*n}(x)$, where $b^{*n}$ denotes the $n$-fold convolution of $b$. Then
$$u(x) = \alpha\exp\{(T + s\alpha)x\}s.$$
Furthermore, we consider a delayed renewal process with $b_0(x) = \beta\exp\{Tx\}s$ and $b(x) = \alpha\exp\{Tx\}s$. Then the renewal density function is given by
$$u_0(x) = \beta\exp\{(T + s\alpha)x\}s.$$

Let $\xi(t)$ be the excess life time at time $t$. Then $\xi(t)$ is matrix-exponential with matrix representation $(\beta_t, T, s)$, where $\beta_t = \alpha\exp\{(T + s\alpha)t\}$; or another matrix representation $(\alpha, T, r_t)$, where $r_t = \exp\{(T + s\alpha)t\}s$. Furthermore, for a delayed renewal process with $b_0(x) = \beta\exp\{Tx\}s$ and $b(x) = \alpha\exp\{Tx\}s$, $\xi(t)$ is also matrix-exponential with matrix representation $(\gamma_t, T, s)$, where $\gamma_t = \beta\exp\{(T + s\alpha)t\}$.
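The renewal density formula is a single matrix exponential. Two classical sanity checks (the parameters are illustrative): exponential interarrivals give the constant Poisson intensity, since then $T + s\alpha = 0$; and for any interarrival distribution $u(x) \to 1/\text{mean}$ as $x \to \infty$:

```python
import numpy as np
from scipy.linalg import expm

def renewal_density(alpha, T, s, x):
    """u(x) = alpha exp{(T + s alpha) x} s for the ME renewal process."""
    return alpha @ expm((T + np.outer(s, alpha)) * x) @ s

# Check 1: exponential interarrivals with rate 2 give u(x) = 2 for all x.
u_exp = renewal_density(np.array([1.0]), np.array([[-2.0]]),
                        np.array([2.0]), 1.3)
print(u_exp)                                # 2.0

# Check 2: Erlang-2 interarrivals (mean 2); u(x) tends to 1/mean = 0.5.
alpha = np.array([1.0, 0.0])
T = np.array([[-1.0, 1.0], [0.0, -1.0]])
s = np.array([0.0, 1.0])
print(renewal_density(alpha, T, s, 50.0))   # about 0.5
```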
1.7 Notes in the Literature

For Markov chains, we may refer to Kemeny and Snell [64], Kemeny, Snell and Knapp [65], Anderson [7], Meyn and Tweedie [84], Karlin and Taylor [61, 62], Kijima [66], and Taylor and Karlin [121]. The QBD process is an important example
in Markov chains, and provides a useful mathematical tool for studying stochastic models such as queueing systems, manufacturing systems, communication networks and transportation systems. Readers may refer to Chapter 3 of Neuts [92], Hajek [40], Gaver, Jacobs and Latouche [39], Latouche and Ramaswami [69], Neuts [96], Naoumov [86, 87], Bright and Taylor [29, 30], Ramaswami [110], Ramaswami and Taylor [111, 112], Meini [83], Kijima and Makimoto [68], Latouche and Ramaswami [70], He, Meini and Rhee [41], Li and Cao [73] and references therein. Specifically, some stochastic models have motivated the need for studying a level-dependent QBD process. Important examples include the retrial queues; see, for example, Artalejo [9], Anisimov and Artalejo [8], Breuer, Dudin and Klimenok [28] and Artalejo and Gómez-Corral [10]. The PH distribution plays an important role in numerical computation of stochastic models. Since the introduction of the PH distribution by Neuts [89], research on the PH distribution has been greatly motivated by many practical applications. Chapter 2 of Neuts [92] provided a detailed analysis of the PH distribution with many useful properties. Subsequent papers have been published on this theme from different points of view. The properties of the PH distribution are studied by, for example, Assaf and Levikson [17], Neuts [93], Assaf and Langberg [15], Aldous and Shepp [2], Sengupta [118, 119], Bean and Nielsen [18] and Hipp [51]. For the structured representation, readers may refer to Commault and Chemla [33, 34], Mocanu and Commault [85], Commault and Mocanu [35, 36] and He and Zhang [45-47]. For the phase structure and the minimal irreducible representation of the PH distribution, readers may refer to O'Cinneide [102-106], Maier [80, 81] and Maier and O'Cinneide [82]. Specifically, O'Cinneide [107] provided an overview of this direction.
For the PH approximation, readers may refer to Altiok [4], Altiok and Perros [5], Bobbio and Telek [24], Bobbio, Horváth and Telek [22, 23], Bobbio, Horváth, Scarpa and Telek [21], Horváth and Telek [52], He, Wu and Li [50] and Li, Lin and Li [78]. The statistical analysis of PH distributions has been given by some researchers, for example, Johnson and Taaffe [55-58], Bobbio and Cumani [20], Olsson [108], Rydén [117], Asmussen [12] and Faddy [38]. The PH distribution has been extended to several useful classes as follows. Shanthikumar [120] and Ahn and Ramaswami [1] analyzed bilateral PH distributions, and Li, Wang and Zhou [74] studied symmetric PH distributions. For the multivariate PH distribution, readers may refer to Assaf, Langberg, Savits and Shaked [16], Kulkarni [67], Li [72], Cai and Li [31] and Asimit and Jones [11]. When the interarrival time is of phase type, Neuts [90] studied a PH-renewal process. Neuts and Latouche [98] discussed the superposition of two PH-renewal processes. Kao and Smith [59, 60] analyzed excess-, current-, and total-life distributions of phase-type renewal processes. Neuts [91] extended the PH-renewal process to a versatile Markovian point process, which is called the batch Markovian arrival process (BMAP); e.g., see Neuts [94]. Lucantoni [79] provided a simple representation for the BMAP. Narayana and Neuts [88] considered the first two moment matrices of the counts for the MAP. Neuts [95] analyzed the burstiness of the MAP. For the
MAP and BMAP, readers may refer to some crucial works such as Liu and Neuts [77], Neuts, Liu and Narayana [99], Neuts [97], Johnson and Narayana [54], Nishimura and Sato [101], Johnson, Liu and Narayana [53], Breuer [27], Andersen, Neuts and Nielsen [6], Kawanishi [63], Nielsen, Nilsson, Thygesen and Beyer [100] and Rydén [113-115]. The MAP describes bursty traffic in modern communication networks. Readers may refer to Chapter 5 of Neuts [94], Ramaswami [109], Lucantoni [79], Neuts [96], Alfa [3], Latouche and Ramaswami [70], Lee and Jeon [71], Chakravarthy [32] and Li and Zhao [75, 76]. To describe multivariate and inter-dependent arrival processes, He and Neuts [44] provided a Markovian arrival process with marked transitions (MMAP[K]). Readers may refer to applications of the MMAP[K] such as He [42, 43]. Asmussen and Bladt [13] discussed matrix-exponential distributions. Asmussen and Perry [14] provided an operational calculus for matrix-exponential distributions. Readers may also refer to Bladt and Neuts [19], Fackrell [37], van de Liefvoort and Heindl [122] and He and Zhang [47-49]. In this chapter, we refer to more references such as Kemeny and Snell [64], Kemeny, Snell and Knapp [65], Anderson [7], Neuts [92, 94], Li and Cao [73], Lucantoni [79], Assaf, Langberg, Savits and Shaked [16], He and Neuts [44] and Asmussen and Bladt [13].
Problems

1.1 Consider an M/M/c queue with multiple server vacations. In the following two different cases, please write the infinitesimal generators of the corresponding systems. (1) The vacation process of the c servers is Synchronous Start and Synchronous End. (2) The vacation process of the c servers is Independent Start and Independent End.

1.2 Consider the two-node closed queueing network depicted in Fig. 1.10. There are N customers in the system; the service times of servers 1 and 2 are i.i.d. and
Figure 1.10 A closed queueing network with server vacations
are exponentially distributed with service rates $\mu_1$ and $\mu_2$, respectively. If there is no customer at server 1 or 2, then the server enters a vacation state with the multiple-vacation discipline. The vacation times of servers 1 and 2 are exponentially distributed with vacation rates $\gamma_1$ and $\gamma_2$, respectively. Please write the infinitesimal generators of the queueing system.

1.3 Consider an M/M/c queue with c repairable servers and N repairmen for $2 \leq N \leq c$. The parameters and related assumptions are introduced in Section 1.2 for "A queue with repairable servers". Please write the infinitesimal generators of the queueing system.

1.4 In Problem 1.3, if the c repairable servers may be different with parameters $\mu_i$, $\alpha_i$ and $\beta_i$ for $1 \leq i \leq c$, please write the infinitesimal generators of the corresponding systems.

1.5 Consider an inbound call center with two classes, A and B, of customers and three different groups of agents, as depicted in Fig. 1.11. The A- and B-classes of customers arrive at the system according to Poisson processes with arrival rates $\lambda_1$ and $\lambda_2$, respectively. The sizes of the A- and B-queues are M and N, respectively. If the A- or B-queue is fully loaded, the arriving customer is lost immediately. Once the waiting time of a customer exceeds its patience time, the customer is also lost immediately. The patience times of the A- and B-customers are exponentially distributed with rates $\alpha_1$ and $\alpha_2$, respectively. The service times of the A-specialists, B-specialists and generalists are all i.i.d. and are exponentially distributed with service rates $\mu_1$, $\mu_2$ and $\mu_0$, respectively. Please write the infinitesimal generators of the queueing system.
Figure 1.11 A call center with two classes of customers
1.6 In Fig. 1.12, the arrival of failed locomotives is a Poisson process with arrival rate $\lambda$. Once a locomotive enters the repair shop, it is immediately disassembled into m different types of parts, or it has to wait for a disassembly room in a queueing line. The sub-warehouse size for the ith type of parts, including the failed or repaired ones, is allocated to be $N_i$; thus $N = \sum_{i=1}^{m} N_i$ is the size of the repair shop. The repair time of each part of the ith type is exponentially distributed with repair rate $\beta_i$. We assume that the parts are as good as new after repair. When the disassembly of the arriving locomotive is completed, each corresponding part is taken from its sub-warehouse, if available, and is installed in the suitable position of the locomotive. After being installed, the locomotive leaves the repair shop immediately. We assume that all the other times, except for the repair times, are so short that they can be ignored. Please write the infinitesimal generators of the corresponding system.
Figure 1.12 A locomotive repair system
1.7 If a continuous-time QBD process Q with either infinitely-many levels or finitely-many levels is transient, please use the UL-type RG-factorization to compute the inverse of the infinitesimal generator Q.

1.8 Prove that (1) $g_1 g_2 = g_2 g_1$, and (2) $T^{-1}(g_1 g_2)T = g_1 g_2$.

1.9 Let $\Gamma_1$ and $\Gamma_2$ be two nonempty stochastically closed sets of the state space $\Omega$ such that $\Gamma_1 \cap \Gamma_2 = \{m+1\}$. We write $E_0 = \Gamma_1^{c} \cap \Gamma_2^{c}$, $E_1 = \Gamma_1^{c} \cap \Gamma_2$ and $E_2 = \Gamma_1 \cap \Gamma_2^{c}$.

(1) Prove that, with the blocks indexed by $E_0$, $E_1$ and $E_2$,
$$T = \begin{pmatrix} T_{0,0} & T_{0,1} & T_{0,2} \\ & T_{1,1} & \\ & & T_{2,2} \end{pmatrix}.$$

(2) Using the above block structure, please compute $F(x_1, x_2)$, $f(x_1, x_2)$, $f^{*}(s_1, s_2)$ and $E[Y_1 Y_2]$.
1.10 For an M/PH/1/c queue, prove that its departure point process is a MAP and write the associated matrix descriptor (C, D). Provide some numerical examples to indicate the effects of the traffic intensity, $\rho = \lambda/\mu$, on the behavior of the MAP.

1.11 For a PH/PH/1 queue, use a finite-state MAP to provide an approximate algorithm for computing the mean and variance of its departure point process. Provide some numerical examples to indicate the effects of the traffic intensity, $\rho = \lambda/\mu$, on the behavior of the departure point process.

1.12 Provide concrete examples for applying the multivariate MAP to describe some multivariate dependent arrival processes in practical areas.
References 1. Ahn S. and V. Ramaswami (2005). Bilateral phase type distributions. Stochastic Models 21: 239 259 2. Aldous D. and L. Shepp (1987). The least variable phase type distribution is Erlang. Stochastic Models 3: 467 473 3. Alfa A.S. (1995). Modelling traffic queues at a signalized intersection with vehicle-actuated control and Markovian arrival processes. Computers & Mathematics with Applications 30: 105 119 4. Altiok T. (1989). Approximate analysis of queues in series with phase-type service times and blocking. Operations Research 37: 601 610 5. Altiok T. and H.G. Perros (1986). Open networks of queues with blocking: Split and merge configurations. IIE Transactions 18: 251 261 6. Andersen A.T., M.F. Neuts and B.F. Nielsen (2004). On the time reversal of Markovian arrival processes. Stochastic Models 20: 237 260 7. Anderson W.J. (1991). Continuous-time Markov chains: An applications-oriented approach, Springer-Verlag 8. Anisimov V.V. and J.R. Artalejo (2001). Analysis of Markov multiserver retrial queues with negative arrivals. Queueing Systems 39: 157 182 9. Artalejo J.R. (1999). A classified bibliography of research on retrial queues: Progress in 1990 1999. Top 7: 187 211 10. Artalejo J.R. and A. Gómez-Corral (2008). Retrial Queueing Systems: A Computational Approach, Springer 11. Asimit A.V. and B.L. Jones (2007). Extreme behavior of multivariate phase-type distributions. Insurance: Mathematics & Economics 41: 223 233 12. Asmussen S. (1997). Phase-type distributions and related point processes: fitting and recent advances. In Matrix-Analytic Methods in Stochastic Models, Alfa A.S. and S. Chakravarthy (eds.), Lecture Notes in Pure and Applied Mathematics 183, Dekker: New York, 137 149 13. Asmussen S. and M. Bladt (1997). Renewal theory and queueing algorithms for matrixexponential distributions. In Matrix-Analytic Methods in Stochastic Models, Lecture Notes in Pure and Applied Mathematics 183, Dekker: New York, 313 341 65
14. Asmussen S. and D. Perry (1998). An operational calculus for matrix-exponential distributions, with applications to a Brownian (q, Q) inventory model. Mathematics of Operations Research 23: 166 176 15. Assaf D. and N.A. Langberg (1985). Presentation of phase-type distributions as proper mixtures. Journal of Applied Probability 22: 247 250 16. Assaf D., N.A. Langberg, T.H. Savits and M. Shaked (1984). Multivariate phase-type distributions. Operations Research 32: 688 702 17. Assaf D. and B. Levikson (1982). Closure of phase type distributions under operations arising in reliability theory. The Annals of Probability 10: 265 269 18. Bean N.G. and B.F. Nielsen (2002). Decay rates of discrete phase-type distributions with infinitely-many phases. In Matrix-analytic methods: Theory and Applications, G. Latouche and P.G. Taylor (eds.), World Scientific, 17 38 19. Bladt M. and M.F. Neuts (2003). Matrix-exponential distributions: calculus and interpretations via flows. Stochastic Models 19: 113 124 20. Bobbio A. and A. Cumani (1992). ML estimation of the parameters of a PH distribution in triangular canonical form. In Computer Performance Evaluation, G. Balbo and G. Serazzi (eds.), Elsevier Science Publishers, 33 46 21. Bobbio A., A. Horváth, M. Scarpa and M. Telek (2003). Acyclic discrete phase type distributions: Properties and a parameter estimation algorithm. Performance Evaluation 4: 1 32 22. Bobbio A., A. Horváth and M. Telek (2004). The Scale Factor: A new degree of freedom in phase type approximation. Performance Evaluation 56: 121 144 23. Bobbio A., A. Horváth and M. Telek (2005). Matching three moments with minimal acyclic phase type distributions. Stochastic Models 21: 303 326 24. Bobbio A. and M. Telek (1994). A benchmark for PH estimation algorithms: results for acyclic-PH. Stochastic Models 10: 661 677 25. Bobbio A. and K.S. Trivedi (1990). Computation of the distribution of the completion time when the work requirement is a PH random variable. 
Stochastic Models 10: 533 548 26. Breuer L. (2002). On Markov-additive jump process. Queueing Systems 40: 75 91 27. Breuer L. (2002). An EM algorithm for batch Markovian arrival processes and its comparison to a simpler estimation procedure. Annals of Operations Research 112: 123 138 28. Breuer L., A. Dudin and V. Klimenok (2002). A retrial BMAP/PH/N system. Queueing Systems 40: 433 457 29. Bright L.W. and P.G. Taylor (1995). Calculating the equilibrium distribution in level-dependent quasi-birth-and-death processes. Stochastic Models 11: 497 526 30. Bright L.W. and P.G. Taylor (1996). Equilibrium distribution in level-dependent quasibirth-and-death processes. In Matrix Analytic Methods in Stochastic Models, A.S. Alfa and S. Chakravarthy (eds.), Marcel Dekker: New York, 359 375 31. Cai J. and H.J. Li (2005). Conditional tail expectations for multivariate phase-type distributions. Journal of Applied Probability 42: 810 825 32. Chakravarthy S.R. (2000). The batch Markovian arrival processes: A review and future work. In Advances in Probability Theory and Stochastic Processes, G. Latouche and P.G. Taylor (eds.), Notable Publications Inc.: New Jersey, 21 39 66
33. Commault C. and J.P. Chemla (1993). On dual and minimal phase-type representations. Stochastic Models 9: 421 434 34. Commault C. and J.P. Chemla (1996). An invariant of representations of phase-type distributions and some applications. Journal of Applied Probability 33: 368 381 35. Commault C. and S. Mocanu (2002). A generic property of phase-type representations. Journal of Applied Probability 39: 775 785 36. Commault C. and S. Mocanu (2003). Phase-type distributions and representations: some results and open problems for system theory. International Journal of Control 76: 566 580 37. Fackrell M. (2005). Fitting with matrix-exponential distributions. Stochastic Models 21: 377 400 38. Faddy M.J. (2002). Penalised maximum likelihood estimation of the parameters in a Coxian phase-type distribution. In Matrix-Analytic Methods, World Scientific, 107 114 39. Gaver D.P., P.A. Jacobs and G. Latouche (1984). Finite birth-and-death models in randomly changing environments. Advances in Applied Probability 16: 715 731 40. Hajek B. (1982). Birth-and-death processes on the integers with phases and general boundaries. Journal of Applied Probability 19: 488 499 41. He C., B. Meini and N.H. Rhee (2001). A shifted cyclic reduction algorithm for QBDs. SIAM Journal on Matrix Analysis and Applications 23: 673 691 42. He Q.M. (2000). Classification of Markov processes of matrix M/G/1 type with a tree structure and its applications to the MMAP[K]/G[K]/1 queues. Stochastic Models 16: 407 433 43. He Q.M. (2001). The versatility of MMAP[K] and the MMAP[K]/G[K]/1 queue. Queueing Systems 38: 397 418 44. He Q.M. and M.F. Neuts (1998). Markov chains with marked transitions. Stochastic Processes and their Applications 74: 37 52 45. He Q.M. and H. Zhang (2005). A note on unicyclic representations of phase type distributions. Stochastic Models 21: 465 483 46. He Q.M. and H. Zhang (2006). PH-invariant polytopes and Coxian representations of phase type distributions. 
Stochastic Models 22: 383 409 47. He Q.M. and H. Zhang (2006). Spectral polynomial algorithms for computing bi-diagonal representations for phase type distributions and matrix-exponential distributions. Stochastic Models 22: 289 317 48. He Q.M. and H. Zhang (2007). On matrix exponential distributions. Advances in Applied Probability 39: 271 292 49. He Q.M. and H. Zhang (2007). Coxian approximations of matrix-exponential distributions. Calcolo 44: 235 264 50. He X.F., S. Wu and Q.L. Li (2007). Production variability of production lines. International Journal of Production Economics 107: 78 87 51. Hipp C. (2006). Speedy convolution algorithms and Panjer recursions for phase-type distributions. Insurance: Mathematics & Economics 38: 176 188 52. Horváth A. and M. Telek (2007). Matching more than three moments with acyclic phase type distributions. Stochastic Models 23: 167 194 53. Johnson M.A., D. Liu and S. Narayana (1997). Burstiness descriptors for Markov renewal processes and Markovian arrival processes. Stochastic Models 13: 619 646 67
54. Johnson M.A. and S. Narayana (1996). Descriptors of arrival-process burstiness with application to the discrete Markovian arrival process. Queueing Systems 23: 107 130 55. Johnson M.A. and M.R. Taaffe (1989). Matching moments to phase distributions: mixtures of Erlang distributions of common order. Stochastic Models 5: 711 743 56. Johnson M.A. and M.R. Taaffe (1990). Matching moments to phase distributions: nonlinear programming approaches. Stochastic Models 6: 259 281 57. Johnson M.A. and M.R. Taaffe (1990). Matching moments to phase distributions: density function shapes. Stochastic Models 6: 283 306 58. Johnson M.A. and M.R. Taaffe (1991). An investigation of phase-distribution momentmatching algorithms for use in queueing models. Queueing Systems 8: 129 147 59. Kao E.P.C. and M.S. Smith (1992). On excess-, current-, and total-life distributions of phase-type renewal processes. Naval Research Logistics 39: 789 799 60. Kao E.P.C. and M.S. Smith (1993). On renewal processes relating to counter models: the case of phase-type interarrival times. Journal of Applied Probability 30: 175 183 61. Karlin S. and H.W. Taylor (1975). A First Course in Stochastic Processes (Second Edition), Academic Press, INC. 62. Karlin S. and H.W. Taylor (1981). A Second Course in Stochastic Processes, Academic Press, INC. 63. Kawanishi K. (2005). On the counting process for a class of Markovian arrival processes with an application to a queueing system. Queueing Systems 49: 93 122 64. Kemeny J.G. and J.L. Snell (1960). Finite Markov Chains, Princeton, N.J.D. Van Nostrand Co., Inc. 65. Kemeny J.G., J.L. Snell and A.W. Knapp (1976). Denumerable Markov Chains (Second Edition), Springer-Verlag: New York 66. Kijima M. (1997). Markov Processes for Stochastic Modeling, Chapman & Hall: London 67. Kulkarni V.G. (1989). A new class of multivariate phase type distributions. Operations Research 37: 151 158 68. Kijima M. and N. Makimoto (1999). 
Quasi-stationary distributions of Markov chains arising from queueing processes. In Applied probability and stochastic processes, J.G. Shanthikumar and U. Sumita (eds), Kluwer Academic Publishers, 277 311 69. Latouche G. and V. Ramaswami (1993). A logarithmic reduction algorithm for quasi-birthdeath processes. Journal of Applied Probability 30: 650 674 70. Latouche G. and V. Ramaswami (1999). Introduction to Matrix Analytic Methods in Stochastic Models, ASA-SIAM 71. Lee G. and J. Jeon (2000). A new approach to an N/G/1 queue. Queueing Systems 35: 317 322 72. Li H.J. (2003). Association of multivariate phase-type distributions, with applications to shock models. Statistics & Probability Letters 64: 381 392 73. Li Q.L. and J. Cao (2004). Two types of RG-factorizations of quasi-birthand-death processes and their applications to stochastic integral functionals. Stochastic Models 20: 299 340 74. Li Q.L., G.H. Wang and Y. Zhou (2004). Symmetric PH and EP distributions and their applications to the probability Hough transform. Computers & Mathematics with Applications 47: 823 844 68
75. Li Q.L., Y. Ying and Y.Q. Zhao (2006). A BMAP/G/1 retrial queue with a server subject to breakdowns and repairs. Annals of Operations Research 141: 233 270 76. Li Q.L. and Y.Q. Zhao (2004). A MAP/G/1 queue with negative customers. Queueing Systems 47: 5 43 77. Liu D. and M.F. Neuts (1991). Counter-examples involving Markovian arrival processes. Stochastic Models 7: 499 509 78. Li Y.J., C. Lin and Q.L. Li (2008). A simplified framework for stochastic workflow networks. Computers & Mathematics with Applications 56: 2700 2715 79. Lucantoni D.M. (1991). New results on the single server queue with a batch Markovian arrival process. Stochastic Models 7: 1 46 80. Maier R.S. (1991). The algebraic construction of phase-type distributions. Stochastic Models 7: 573 602 81. Maier R.S. (1993). Phase-type distributions and the structure of finite Markov chains. Journal of Computational and Applied Mathematics 46: 449 453 82. Maier R.S. and C.A. O’Cinneide (1992). A closure characterisation of phase-type distributions. Journal of Applied Probability 29: 92 103 83. Meini B. (1998). Solving QBD problems: the cyclic reduction algorithm versus the invariant subspace method. Advances in Performance Analysis 1: 215 225 84. Meyn S.P. and R.L. Tweedie (1993). Markov Chains and Stochastic Stability, Springer-Verlag. 85. Mocanu S. and C. Commault (1999). Sparse representations of phase-type distributions. Stochastic Models 15: 759 778 86. Naoumov V. (1996). Matrix-multiplicative approach to quasi-birth-and-death processes. In Matrix Analytic Methods in Stochastic Models, A.S. Alfa and S. Chakravarthy (eds.), Marcel Dekker: New York, 87 106 87. Naoumov V. (2000). Modified matrix-geometric solution for finite QBD processes. In Advances in Algorithmic Methods for Stochastic Models, G. Latouche and P.G. Taylor (eds.), Notable Publications Inc. NJ., 257 263 88. Narayana S. and M.F. Neuts (1992). The first two moment matrices of the counts for the Markovian arrival process. 
Stochastic Models 8: 459 477 89. Neuts M.F. (1975). Probability distributions of phase type. In Liber Amicorum Professor H.F. Emeritus, Department of Mathematics, University of Louvain, Belgium, 73 206 90. Neuts M.F. (1978). Renewal processes of phase type. Naval Research Logistics 25: 445 454 91. Neuts M.F. (1979). A versatile Markovian point process. Journal of Applied Probability 16: 764 779 92. Neuts M.F. (1981). Matrix-Geometric Solutions in Stochastic Models-An Algorithmic Approach, The Johns Hopkins University Press: Baltimore 93. Neuts M.F. (1984). The abscissa of convergence of the Laplace-Stieltjes transform of a PH-distribution. Simulation Computation 13: 367 373 94. Neuts M.F. (1989). Structured Stochastic Matrices of M/G/1 Type and Their Applications, Marcel Dekker: New York 95. Neuts M.F. (1993). The burstiness of point processes. Stochastic Models 9: 445 466 69
96. Neuts M.F. (1995). Matrix-analytic methods in stochastic models. In Advances in Queueing: Theory, Methods and Open Problems, J. Dhashalow (ed), CRC Press: Boca Raton, Florida, 265 292 97. Neuts M.F. (1996). Descriptors for point processes based on runs: the Markovian arrival process. Journal of Applied Mathematics and Stochastic Analysis 9: 469 488 98. Neuts M.F. and G. Latouche (1986). The superposition of two PH-renewal processes. In Semi-Markov models, Plenum: New York, 131 177 99. Neuts M.F., D. Liu and S. Narayana (1992). Local Poissonification of the Markovian arrival process. Stochastic Models 8: 87 129 100. Nielsen B.F., L.A.F. Nilsson, U.H. Thygesen and J.E. Beyer (2007). Higher order moments and conditional asymptotics of the batch Markovian arrival process. Stochastic Models 23: 1 26 101. Nishimura S. and H. Sato (1997). Eigenvalue expression for a batch Markovian arrival process. Journal of the Operations Research Society of Japan 40: 122 132 102. O’Cinneide C.A. (1989). On nonuniqueness of representations of phase-type distributions. Stochastic Models 5: 247 259 103. O’Cinneide C.A. (1990). Characterization of phase-type distributions. Stochastic Models 6: 1 57 104. O’Cinneide C.A. (1991). Phase-type distributions and majorization. The Annals of Applied Probability 1: 219 227 105. O’Cinneide C.A. (1991). Phase-type distributions and invariant polytopes. Advances in Applied Probability 23: 515 535 106. O’Cinneide C.A. (1993). Triangular order of triangular phase-type distributions. Stochastic Models 9: 507 529 107. O’Cinneide C.A. (1999). Phase-type distributions: open problems and a few properties. Stochastic Models 15: 731 757 108. Olsson M. (1996). Estimation of phase-type distributions from censored data. Scandinavian Journal of Statistics 23: 443 460 109. Ramaswami V. (1980). The N/G/1 queue and its detailed analysis. Advances in Applied Probability 12: 222 261 110. Ramaswami V. (1996). 
A tutorial overview of matrix analytic method: with some extensions & new results. In Matrix Analytic Methods in Stochastic Models, A.S. Alfa and S. Chakravarthy (eds), Marcel Dekker: New York, 261 296 111. Ramaswami V. and P.G. Taylor (1996). Some properties of the rate operators in level dependent quasi-birth-and-death processes with a countable number of phases. Stochastic Models 12: 143 164 112. Ramaswami V. and P.G. Taylor (1996). An operator analytic approach to product form networks. Stochastic Models 12: 121 142 113. Rydén T. (1994). Parameter estimation for Markov modulated Poisson processes. Stochastic Models 10: 795 829 114. Rydén T. (1995). Consistent and asymptotically normal parameter estimates for Markov modulated Poisson processes. Scandinavian Journal of Statistics 22: 295 303 115. Rydén T. (1996). An EM algorithm for estimation in Markov-modulated Poisson processes. Computational Statistics & Data Analysis 21: 431 447 70
116. Rydén T. (1996). On identifiability and order of continuous-time aggregated Markov chains, Markov-modulated Poisson processes, and phase-type distributions. Journal of Applied Probability 33: 640 653 117. Rydén T. (1997). Estimating the order of continuous phase-type distributions and Markovmodulated Poisson processes. Stochastic Models 13: 417 433 118. Sengupta B. (1989). Markov processes whose steady state distribution is matrix-exponential with an application to GI/PH/1 queue. Advances in Applied Probability 21: 159 180 119. Sengupta B. (1990). Phase-type representations for matrix-geometric solutions. Stochastic Models 6: 163 167 120. Shanthikumar J.G. (1985). Bilateral phase-type distributions. Naval Research Logistics 32: 119 136 121. Taylor H.M. and S. Karlin (1998). An Introduction to stochastic Models, Academic Press. 122. Liefvoort A. van de and A. Heindl (2005). Approximating matrix-exponential distributions by global randomization. Stochastic Models 21: 669 693
71
2 Block-Structured Markov Chains
Quan-Lin Li
Department of Industrial Engineering, Tsinghua University, Beijing 100084, P.R. China
E-mail: [email protected]

Abstract In this chapter, the censoring technique is applied to deal with any irreducible block-structured Markov chain, either discrete-time or continuous-time. The R-, U- and G-measures are iteratively defined from two different censored directions: UL-type and LU-type. An important censoring invariance for the R- and G-measures is obtained. Using the censoring invariance, the Wiener-Hopf equations are derived, and then the UL- and LU-types of RG-factorizations are given. The stationary probability vector is expressed in terms of the R-measure, while the transient probability can be computed by means of the R-, U- and G-measures. Finally, the A- and B-measures are proposed in order to discuss the state classification of the block-structured Markov chain.

Keywords stochastic model, block-structured Markov chain, censoring technique, R-measure, U-measure, G-measure, A-measure, B-measure, censoring invariance, Wiener-Hopf equation, RG-factorization, state classification, stationary probability vector, transient probability, first passage time.
In this chapter, the censoring technique is applied to deal with any irreducible block-structured Markov chain with either finitely-many or infinitely-many levels, in either discrete or continuous time. Three probabilistic measures, the R-, U- and G-measures, are defined from two different censored directions: UL-type and LU-type, and an important censoring invariance for the R- and G-measures is obtained. Based on the censoring invariance, the Wiener-Hopf equations are derived, and the UL- and LU-types of RG-factorizations for the transition matrix are given. Furthermore, the A- and B-measures are proposed in order to discuss the state classification of the block-structured Markov chain. This chapter systematically develops the decomposition theory, the RG-factorizations, for any irreducible Markov chain. Based on the RG-factorizations, effective algorithms can be designed under a unified, constructive computational framework
in order to deal with performance computation and system decision for practical stochastic models in many applied areas. This chapter is organized as follows. Section 2.1 applies the censoring technique to any irreducible discrete-time block-structured Markov chain and gives some important properties of the censored Markov chain. For the discrete-time block-structured Markov chain, Sections 2.2 and 2.3 derive the UL- and LU-types of RG-factorizations, respectively. Section 2.4 provides an R-measure expression for the stationary probability vector of any positive recurrent block-structured Markov chain; this expression is more effective than those in the literature, and the crucial formulae given in Neuts [26, 27] are simply re-derived from it. Section 2.5 defines the A- and B-measures for any irreducible block-structured Markov chain, and constructs expressions for them in terms of the R-, U- and G-measures. Based on the A- and B-measures, necessary and sufficient conditions for the state classification of the block-structured Markov chain are obtained. Section 2.6 discusses block-structured Markov chains with finitely-many levels. Section 2.7 gives the UL- and LU-types of RG-factorizations for any irreducible continuous-time block-structured Markov chain, and some useful results are briefly summarized. Finally, Section 2.8 summarizes the references related to the results of this chapter.
2.1 The Censoring Chains

In this section, the censoring technique is applied to deal with any irreducible discrete-time block-structured Markov chain, and some important properties of the censored Markov chains are given.

We consider an irreducible discrete-time block-structured Markov chain $\{X_n, n \ge 0\}$ on the state space $\Omega = \{(k,j) : k \ge 0,\ 1 \le j \le m_k\}$ whose transition probability matrix is given by
$$P = \begin{pmatrix} P_{0,0} & P_{0,1} & P_{0,2} & \cdots \\ P_{1,0} & P_{1,1} & P_{1,2} & \cdots \\ P_{2,0} & P_{2,1} & P_{2,2} & \cdots \\ \vdots & \vdots & \vdots & \end{pmatrix}, \qquad (2.1)$$
where $P_{i,j}$ is a matrix of size $m_i \times m_j$ whose $(r,r')$th entry is given by
$$(P_{i,j})_{r,r'} = P\{X_{n+1} = (j,r') \mid X_n = (i,r)\}.$$
In this chapter, we always assume that the Markov chain $P$ is irreducible and stochastic (or substochastic). The stochastic and substochastic cases are characterized by $Pe = e$ and $Pe \lneqq e$, respectively, where $e$ is a column vector of ones of suitable size. Under the block structure, the state space $\Omega$ is partitioned as $\Omega = \bigcup_{i=0}^{\infty} L_i$ with $L_i = \{(i,1), (i,2), \ldots, (i,m_i)\}$. For a state $(i,k)$, $i$ is called the level variable and $k$ the phase variable. We also write $L_{\le k} = \bigcup_{i=0}^{k} L_i$ for the set of all the states in the levels up to $k$, and $L_{\ge k}$ for the complement of $L_{\le(k-1)}$.

Now, we define a censored chain for an irreducible Markov chain whose transition probability matrix consists of scalar entries. We then treat a block-structured Markov chain as a special case.

Definition 2.1 Suppose that $\{X_n, n \ge 0\}$ is an irreducible Markov chain on the state space $\Omega = \{0, 1, 2, \ldots\}$, and let $E$ be a non-empty subset of $\Omega$. Suppose the successive visits of $X_n$ to the subset $E$ take place at the $n_k$th steps of state transition, and write $X^E_k = X_{n_k}$ for $k \ge 1$. Then the sequence $\{X^E_k, k \ge 1\}$ is called the censored chain with censoring set $E$.

For convenience of description, we write $P^{[\le n]}$ for the censored transition probability matrix $P^E$ when the censoring set is $E = L_{\le n}$; in particular, $P^{[\infty]} = P$ and $P^{[0]} = P^{[\le 0]}$. Similarly, $P^{[\ge n]}$ is the censored transition probability matrix with censoring set $E = L_{\ge n}$; specifically, $P^{[\ge 0]} = P$. Let $E^c = \Omega \setminus E$. According to the subsets $E$ and $E^c$, the transition probability matrix $P$ is partitioned as
$$P = \begin{pmatrix} T & U \\ V & W \end{pmatrix}, \qquad (2.2)$$
where the first block-row and block-column are indexed by $E$ and the second by $E^c$.
Lemma 2.1 If $P$ is irreducible and $V \ne 0$, then each element of $\hat{W} = \sum_{n=0}^{\infty} W^n$ is finite.

Proof If $P$ is irreducible, then $W$ is strictly substochastic due to $V \ne 0$. The Markov chain may be regarded as having absorbing state set $E$, so that the expected number of visits to each state in $E^c$ is finite. Hence
$$\hat{W} = \sum_{n=0}^{\infty} W^n < \infty,$$
where $\hat{W}$ is the minimal nonnegative inverse of $I - W$, denoted as $(I - W)^{-1}_{\min}$. This completes the proof.

The matrix $\hat{W}$ is referred to as the fundamental matrix of $W$. Similarly, if $U \ne 0$, then $\hat{T} = \sum_{n=0}^{\infty} T^n$, and specifically, $\hat{T} = (I - T)^{-1}$ when the order of the matrix $T$ is finite. In the following, we show that the censored chain $\{X^E_k, k \ge 1\}$ is also a Markov chain, and derive its transition probability matrix.
Theorem 2.1 The censored chain $\{X^E_k, k \ge 1\}$ is a Markov chain whose transition probability matrix is given by
$$P^E = T + U\hat{W}V. \qquad (2.3)$$

Proof It follows from the Markov property of $P$ that the $(i,j)$th entry of the transition probability matrix of the censored chain $\{X^E_k, k \ge 1\}$ is given by
$$P^E_{i,j} = P\{X^E_{n+1} = j \mid X^E_n = i\} = P\{X^E_1 = j \mid X^E_0 = i\}.$$
Thus, the censored chain $\{X^E_k, k \ge 1\}$ is obviously a Markov chain. In what follows we explicitly express $P^E$ in terms of the original transition probability matrix. To do this, we consider the following two possible cases.

Case I: $n_1 = 1$. In this case, $i, j \in E$, $X^E_1 = X_1$ and
$$P\{X^E_1 = j \mid X^E_0 = i\} = T_{i,j}. \qquad (2.4)$$

Case II: $n_1 = k$ for $k \ge 2$. In this case, $i, j \in E$, $X^E_1 = X_k$ and
$$P\{X^E_1 = j \mid X^E_0 = i\} = P\{X_k = j,\ X_l \notin E \text{ for } l = 1, 2, \ldots, k-1 \mid X_0 = i\} = (UW^{k-2}V)_{i,j}. \qquad (2.5)$$

It follows from Eq. (2.4) and Eq. (2.5) that
$$P\{X^E_1 = j \mid X^E_0 = i\} = T_{i,j} + \left(\sum_{k=2}^{\infty} UW^{k-2}V\right)_{i,j} = T_{i,j} + (U\hat{W}V)_{i,j}.$$
This completes the proof.

Remark 2.1 The censored chain $\{X^{E^c}_k, k \ge 1\}$ is a Markov chain whose transition probability matrix is given by
$$P^{E^c} = W + V\hat{T}U.$$
Note that the two censored Markov chains $\{X^E_k, k \ge 1\}$ and $\{X^{E^c}_k, k \ge 1\}$ have different utilities, which lead to two different types of RG-factorizations later.

Based on the above discussion of the censored chains, a probabilistic interpretation for each component of the matrix $P^E$ is listed as follows:
(1) $\hat{W}_{i,j}$ is the expected number of visits to state $j \in E^c$ before entering $E$, given that the Markov chain starts in state $i \in E^c$.
(2) $(U\hat{W})_{i,j}$ is the expected number of visits to state $j \in E^c$ before returning to $E$, given that the Markov chain starts in state $i \in E$.
(3) $(\hat{W}V)_{i,j}$ is the probability that upon entering $E$ the first state visited is $j \in E$, given that the Markov chain starts in state $i \in E^c$.
(4) $(U\hat{W}V)_{i,j}$ is the probability that upon returning to $E$ the first state visited is $j \in E$, given that the Markov chain starts in state $i \in E$.

Define the $z$-transformation for the censored Markov chain as
$$(P^E)^*(z) = \left((P^E_{i,j})^*(z)\right), \quad i, j \in E,$$
where
$$(P^E_{i,j})^*(z) = \sum_{k=0}^{\infty} z^k P\{X^E_k = j \mid X^E_0 = i\}.$$
The following corollary provides a useful result for studying the censored Markov chain; the proof is obvious from Eq. (2.4) and Eq. (2.5).

Corollary 2.1
$$(P^E)^*(z) = zT + z^2 U\, \widehat{zW}\, V, \qquad \text{where } \widehat{zW} = \sum_{n=0}^{\infty} z^n W^n.$$

Based on Definition 2.1, we can summarize the following five useful properties of the censored Markov chains. These properties can be easily proved by means of sample path analysis.

Property 2.1 For $E_1 \subset E_2$, $P^{E_1} = (P^{E_2})^{E_1}$.

Property 2.2 $P$ is irreducible if and only if $P^E$ is irreducible for every subset $E$ of $\Omega$.

Property 2.3 $P$ is recurrent if and only if $P^E$ is recurrent for every subset $E \subset \Omega$.

Property 2.4 $P$ is transient if and only if $P^E$ is transient for every subset $E \subset \Omega$.

Property 2.5 Suppose $P$ is irreducible.
(1) $P$ is recurrent if and only if $P^E$ is recurrent for some subset $E \subset \Omega$. Specifically, if $P$ is recurrent, then $P^{[\le n]}$ is positive recurrent for each $n \ge 0$.
(2) $P$ is transient if and only if $P^E$ is transient for some subset $E \subset \Omega$. Specifically, if $P$ is transient, then $P^{[\le n]}$ contains at least one absorbing state for each $n \ge 0$.
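As a numerical illustration of Theorem 2.1 and Property 2.1 (our own sketch, not part of the book's text; the helper name `censor` is an assumption of the example), the censored matrix $P^E = T + U\hat{W}V$ can be computed directly for a small finite chain:

```python
import numpy as np

def censor(P, E):
    """Censored transition matrix P^E = T + U (I - W)^{-1} V (Theorem 2.1).

    P : finite stochastic matrix; E : list of state indices kept (censoring set).
    """
    E = list(E)
    Ec = [s for s in range(P.shape[0]) if s not in E]
    T = P[np.ix_(E, E)]    # E   -> E
    U = P[np.ix_(E, Ec)]   # E   -> E^c
    V = P[np.ix_(Ec, E)]   # E^c -> E
    W = P[np.ix_(Ec, Ec)]  # E^c -> E^c
    W_hat = np.linalg.inv(np.eye(len(Ec)) - W)  # fundamental matrix of W
    return T + U @ W_hat @ V

# A small irreducible stochastic chain on {0, 1, 2, 3}
P = np.array([[0.2, 0.5, 0.3, 0.0],
              [0.1, 0.4, 0.4, 0.1],
              [0.3, 0.2, 0.1, 0.4],
              [0.0, 0.6, 0.2, 0.2]])
PE = censor(P, [0, 1])
print(PE.sum(axis=1))  # rows sum to 1: censoring preserves stochasticity
```

Censoring in two steps gives the same result as censoring in one step, which is exactly Property 2.1: `censor(P, [0, 1])` agrees with `censor(censor(P, [0, 1, 2]), [0, 1])`.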
2.2 The UL-Type RG-Factorization

In this section, three UL-type probabilistic measures, the R-, U- and G-measures, are defined by means of the censored chain; an important censoring invariance for the R- and G-measures is given; and a UL-type RG-factorization for the transition probability matrix is derived.
We first define the R- and G-measures for the Markov chain $P$ given in Eq. (2.1). Note that the R- and G-measures are crucial for studying block-structured Markov chains.

For $0 \le i < j$, $R_{i,j}(k)$ is an $m_i \times m_j$ matrix whose $(r,r')$th entry $(R_{i,j}(k))_{r,r'}$ is the probability that, starting in state $(i,r)$, the Markov chain makes its $k$th transition into state $(j,r')$ without visiting any state in $L_{\le(j-1)}$ during intermediate steps; that is,
$$(R_{i,j}(k))_{r,r'} = P\{X_k = (j,r'),\ X_l \notin L_{\le(j-1)} \text{ for } l = 1, 2, \ldots, k-1 \mid X_0 = (i,r)\}.$$
Write $R_{i,j} = \sum_{k=1}^{\infty} R_{i,j}(k)$. Then the $(r,r')$th entry of $R_{i,j}$ is the expected number of visits to state $(j,r')$ before hitting any state in $L_{\le(j-1)}$, given that the Markov chain starts in state $(i,r)$.

For $0 \le j < i$, $G_{i,j}(k)$ is an $m_i \times m_j$ matrix whose $(r,r')$th entry $(G_{i,j}(k))_{r,r'}$ is the probability that, starting in state $(i,r)$, the Markov chain makes its $k$th transition into state $(j,r')$ without visiting any state in $L_{\le(i-1)}$ during intermediate steps; that is,
$$(G_{i,j}(k))_{r,r'} = P\{X_k = (j,r'),\ X_l \notin L_{\le(i-1)} \text{ for } l = 1, 2, \ldots, k-1 \mid X_0 = (i,r)\},$$
and let $G_{i,j} = \sum_{k=1}^{\infty} G_{i,j}(k)$. Then the $(r,r')$th entry of $G_{i,j}$ is the probability of hitting state $(j,r')$ when the Markov chain enters $L_{\le(i-1)}$ for the first time, given that the Markov chain starts in state $(i,r)$. The two matrix sequences $\{R_{i,j}\}$ and $\{G_{i,j}\}$ are called the R- and G-measures of the Markov chain $P$, respectively.

We now show that the R- and G-measures can be expressed in terms of the transition probability matrix. To see this, partition $P$ according to the three subsets $L_{\le(n-1)}$, $L_n$ and $L_{\ge(n+1)}$ as
$$P = \begin{pmatrix} T & U_0 & U_1 \\ V_0 & T_0 & U_2 \\ V_1 & V_2 & T_1 \end{pmatrix}. \qquad (2.6)$$
Let
$$W = \begin{pmatrix} T_0 & U_2 \\ V_2 & T_1 \end{pmatrix} \qquad (2.7)$$
and
$$W^n = \begin{pmatrix} D_{11}(n) & D_{12}(n) \\ D_{21}(n) & D_{22}(n) \end{pmatrix}, \quad n \ge 0.$$
Partition $\hat{W} = \sum_{n=0}^{\infty} W^n$ accordingly as
$$\hat{W} = \begin{pmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{pmatrix}. \qquad (2.8)$$
We write
$$\tilde{R}_n = \big(R^T_{0,n}, R^T_{1,n}, R^T_{2,n}, \ldots, R^T_{n-1,n}\big)^T, \qquad (2.9)$$
where the superscript $T$ stands for the transpose of a matrix, and
$$\tilde{G}_n = \big(G_{n,0}, G_{n,1}, G_{n,2}, \ldots, G_{n,n-1}\big). \qquad (2.10)$$
For convenience, for a matrix $B = (B_0, B_1, B_2, \ldots)$ or $B = (B^T_0, B^T_1, B^T_2, \ldots)^T$, let $(B)_{\langle i \rangle}$ denote the $i$th block-entry $B_i$ of $B$, and $(B)^{\langle i \rangle}_{r,r'}$ the $(r,r')$th entry in the $i$th block-entry of $B$.

Lemma 2.2 For $n \ge 1$,
$$\tilde{R}_n = U_0 H_{11} + U_1 H_{21} \qquad (2.11)$$
and
$$\tilde{G}_n = H_{11} V_0 + H_{12} V_1. \qquad (2.12)$$
Proof We only prove Eq. (2.11); Eq. (2.12) can be proved similarly. For $0 \le i \le n-1$, we consider two possible cases for $(R_{i,n}(k))_{r,r'}$ as follows.

Case I: $k = 1$. In this case,
$$(R_{i,n}(k))_{r,r'} = P\{X_1 = (n,r') \mid X_0 = (i,r)\} = (U_0)^{\langle i \rangle}_{r,r'}. \qquad (2.13)$$

Case II: $k \ge 2$. In this case,
$$(R_{i,n}(k))_{r,r'} = P\{X_k = (n,r'),\ X_l \notin L_{\le(n-1)} \text{ for } l = 1, 2, \ldots, k-1 \mid X_0 = (i,r)\} = \big(U_0 D_{11}(k-1) + U_1 D_{21}(k-1)\big)^{\langle i \rangle}_{r,r'}. \qquad (2.14)$$

Noting that $D_{11}(0) = I$ and $D_{21}(0) = 0$, it follows from Eq. (2.13) and Eq. (2.14) that
$$(R_{i,n})_{r,r'} = \sum_{k=1}^{\infty} (R_{i,n}(k))_{r,r'} = (U_0)^{\langle i \rangle}_{r,r'} + \sum_{k=2}^{\infty} \big(U_0 D_{11}(k-1) + U_1 D_{21}(k-1)\big)^{\langle i \rangle}_{r,r'} = \left(\sum_{k=0}^{\infty} \big(U_0 D_{11}(k) + U_1 D_{21}(k)\big)\right)^{\langle i \rangle}_{r,r'} = \big(U_0 H_{11} + U_1 H_{21}\big)^{\langle i \rangle}_{r,r'}.$$
This completes the proof.

For the Markov chain $P$, let $W_n$ be the southeast corner of $P$ from level $n$, i.e., $W_n = (P_{i,j})_{i,j \ge n}$. Let $\hat{W}_n = \sum_{k=0}^{\infty} W_n^k$, and let $\hat{W}^{(k,\cdot)}_n$ and $\hat{W}^{(\cdot,l)}_n$ be the $k$th block-row and the $l$th block-column of $\hat{W}_n$, respectively. The following corollary easily follows from Lemma 2.2; such expressions are crucial in our later study.

Corollary 2.2 For $0 \le i < j$,
$$R_{i,j} = \big(P_{i,j}, P_{i,j+1}, P_{i,j+2}, \ldots\big)\, \hat{W}^{(\cdot,1)}_j, \qquad (2.15)$$
and for $0 \le j < i$,
$$G_{i,j} = \hat{W}^{(1,\cdot)}_i\, \big(P^T_{i,j}, P^T_{i+1,j}, P^T_{i+2,j}, \ldots\big)^T. \qquad (2.16)$$
From either Lemma 2.2 or Corollary 2.2, it is clear that the R- and G-measures depend on the entries of the fundamental matrix $\hat{W}$ given in Eq. (2.8). It is therefore necessary to provide expressions for $\hat{W}$. The result in the following lemma can be viewed as a generalized version of the expression for the minimal nonnegative inverse of a block-structured matrix of infinite size; the proof is easy and is omitted here. For the matrix $W$ given in Eq. (2.7), the lemma expresses the four block-entries of the fundamental matrix $\hat{W}$.

Lemma 2.3 For the block structure given in Eq. (2.8), we have
$$H_{1,1} = \big(I - T_0 - U_2 \hat{T}_1 V_2\big)^{-1},$$
$$H_{1,2} = \big(I - T_0 - U_2 \hat{T}_1 V_2\big)^{-1} U_2 \hat{T}_1,$$
$$H_{2,1} = \hat{T}_1 V_2 \big(I - T_0 - U_2 \hat{T}_1 V_2\big)^{-1},$$
$$H_{2,2} = \hat{T}_1 + \hat{T}_1 V_2 \big(I - T_0 - U_2 \hat{T}_1 V_2\big)^{-1} U_2 \hat{T}_1.$$
Symmetrically,
$$H_{1,1} = \hat{T}_0 + \hat{T}_0 U_2 \big(I - T_1 - V_2 \hat{T}_0 U_2\big)^{-1} V_2 \hat{T}_0,$$
$$H_{1,2} = \hat{T}_0 U_2 \big(I - T_1 - V_2 \hat{T}_0 U_2\big)^{-1},$$
$$H_{2,1} = \big(I - T_1 - V_2 \hat{T}_0 U_2\big)^{-1} V_2 \hat{T}_0,$$
$$H_{2,2} = \big(I - T_1 - V_2 \hat{T}_0 U_2\big)^{-1}.$$
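Lemma 2.3 is easy to verify numerically. The sketch below (our own illustration, not from the book) builds a random strictly substochastic $W$, computes its fundamental matrix, and checks the block formulas, including one of the symmetric expressions:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.random((5, 5))
W = 0.9 * W / W.sum(axis=1, keepdims=True)    # strictly substochastic W
T0, U2 = W[:2, :2], W[:2, 2:]                 # block partition as in Eq. (2.7)
V2, T1 = W[2:, :2], W[2:, 2:]

W_hat = np.linalg.inv(np.eye(5) - W)          # fundamental matrix, Eq. (2.8)
T0_hat = np.linalg.inv(np.eye(2) - T0)
T1_hat = np.linalg.inv(np.eye(3) - T1)

# First set of formulas in Lemma 2.3
H11 = np.linalg.inv(np.eye(2) - T0 - U2 @ T1_hat @ V2)
H12 = H11 @ U2 @ T1_hat
H21 = T1_hat @ V2 @ H11
H22 = T1_hat + T1_hat @ V2 @ H11 @ U2 @ T1_hat
assert np.allclose(W_hat, np.block([[H11, H12], [H21, H22]]))

# The "symmetric" expression for H_{2,2}
H22_sym = np.linalg.inv(np.eye(3) - T1 - V2 @ T0_hat @ U2)
assert np.allclose(H22, H22_sym)
```

These identities are the two Schur-complement forms of the block inverse of $I - W$, which is why both sets of formulas in the lemma agree.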
The following theorem provides expressions for $\tilde{R}_n$ and $\tilde{G}_n$.

Theorem 2.2 For $n \ge 1$,
$$\tilde{R}_n = \big(U_0 + U_1 \hat{T}_1 V_2\big) \sum_{l=0}^{\infty} \big(T_0 + U_2 \hat{T}_1 V_2\big)^l$$
and
$$\tilde{G}_n = \sum_{l=0}^{\infty} \big(T_0 + U_2 \hat{T}_1 V_2\big)^l\, \big(V_0 + U_2 \hat{T}_1 V_1\big).$$

Proof It follows from Lemma 2.2 and Lemma 2.3 that
$$\tilde{R}_n = \big(U_0 + U_1 \hat{T}_1 V_2\big)\big(I - T_0 - U_2 \hat{T}_1 V_2\big)^{-1}$$
and
$$\tilde{G}_n = \big(I - T_0 - U_2 \hat{T}_1 V_2\big)^{-1}\big(V_0 + U_2 \hat{T}_1 V_1\big).$$
Expanding each inverse as the Neumann series $\sum_{l=0}^{\infty} (T_0 + U_2 \hat{T}_1 V_2)^l$ immediately leads to the desired result.

The following theorem provides an important property: censoring invariance for the R- and G-measures. We denote by $R^{[\le n]}_{i,j}$ and $G^{[\le n]}_{i,j}$ the R- and G-measures of the censored Markov chain $P^{[\le n]}$, respectively.

Theorem 2.3 (1) For $0 \le i < j \le n$, $R^{[\le n]}_{i,j} = R_{i,j}$.
(2) For $0 \le j < i \le n$, $G^{[\le n]}_{i,j} = G_{i,j}$.

Proof We only prove (1); (2) can be proved similarly. First, we assume that $j = n$ and $P$ is partitioned according to the three subsets $L_{<n}$, $L_n$ and $L_{>n}$ as in Eq. (2.6). It follows from Theorem 2.1 that
$$P^{[\le n]} = \begin{pmatrix} T & U_0 \\ V_0 & T_0 \end{pmatrix} + \begin{pmatrix} U_1 \\ U_2 \end{pmatrix} \hat{T}_1 \big(V_1, V_2\big) = \begin{pmatrix} T + U_1 \hat{T}_1 V_1 & U_0 + U_1 \hat{T}_1 V_2 \\ V_0 + U_2 \hat{T}_1 V_1 & T_0 + U_2 \hat{T}_1 V_2 \end{pmatrix}. \qquad (2.17)$$
Using the definition of the R-measure, simple calculations from $P^{[\le n]}$ lead to
$$\tilde{R}^{[\le n]}_n = \big(U_0 + U_1 \hat{T}_1 V_2\big)\big(I - T_0 - U_2 \hat{T}_1 V_2\big)^{-1} = \big(U_0 + U_1 \hat{T}_1 V_2\big) \sum_{l=0}^{\infty} \big(T_0 + U_2 \hat{T}_1 V_2\big)^l. \qquad (2.18)$$
Therefore, $\tilde{R}^{[\le n]}_n = \tilde{R}_n$ according to Theorem 2.2. If $j < n$, the result follows from the case $j = n$ combined with Property 2.1, since $P^{[\le j]} = \big(P^{[\le n]}\big)^{[\le j]}$. This completes the proof.

Let the censored matrix
$$P^{[\le n]} = \begin{pmatrix} \varphi^{(n)}_{0,0} & \varphi^{(n)}_{0,1} & \cdots & \varphi^{(n)}_{0,n} \\ \varphi^{(n)}_{1,0} & \varphi^{(n)}_{1,1} & \cdots & \varphi^{(n)}_{1,n} \\ \vdots & \vdots & & \vdots \\ \varphi^{(n)}_{n,0} & \varphi^{(n)}_{n,1} & \cdots & \varphi^{(n)}_{n,n} \end{pmatrix}, \quad n \ge 0, \qquad (2.19)$$
be block-partitioned according to levels. The following lemma provides useful equations among the block-entries of the censored Markov chains, which essentially lead to the Wiener-Hopf equations for the Markov chain.

Lemma 2.4 For $n \ge 0$ and $0 \le i, j \le n$,
$$\varphi^{(n)}_{i,j} = P_{i,j} + \sum_{k=n+1}^{\infty} \varphi^{(k)}_{i,k} \sum_{l=0}^{\infty} \big[\varphi^{(k)}_{k,k}\big]^l \varphi^{(k)}_{k,j}.$$
P [İ n ]
( n 1) ( n 1) § I0,0 I0,1 " I0,( nn1) · § I0,( nn1)1 · ¨ ( n 1) ¸ ¨ ¸ I1,1( n 1) " I1,( nn1) ¸ ¨ I1,( nn1)1 ¸ f ( n 1) l ( n 1) ( n 1) ¨ I1,0 ª¬In 1, n 1 º¼ In 1,0 , In 1,1 , ..., In( n1,1)n . ¦ ¨ # ¸ ¨ ¸ # # # ¨ ¸ ¨ ¸l 0 ¨ I ( n 1) I ( n 1) " I ( n 1) ¸ ¨ I ( n 1) ¸ n ,1 n, n ¹ © n,0 © n, n 1 ¹
81
Constructive Computation in Stochastic Models with Applications
Therefore, using Theorem 2.1 we obtain
Ii(,nj )
f
l
Ii(,nj 1) Ii(,(nn1)1) ¦ ª¬In( n1,1)n 1 º¼ In( n1,1)j l 0 f
l
Ii(,nj 2) Ii(,(nn2)2) ¦ ª¬In( n 2,2)n 2 º¼ In( n 2,2)j l 0
f
l
Ii(,(nn1)1) ¦ ª¬In( n1,1)n 1 º¼ In( n1,1)j l 0
f
f
¦ I ¦ ª¬I
" Pi , j
(k ) i,k
k n 1
l 0
(k ) k ,k
l
º¼ Ik( ,kj) ,
(f) i, j
where Pi , j I . This completes the proof. The following lemma provides expressions for the R- and G-measures. Lemma 2.5 (1) For 0İi< j, f
Ri , j
l
Ii(, jj ) ¦ ª¬I (j ,jj) º¼ . l 0
(2) For 0İ j
¦ ª¬I
(i ) i ,i
Gi , j
l 0
l
º¼ Ii(,ij) .
Proof Applying Corollary 2.2 to the censored chain P[İ j ] gives that f
Ri[,İj j ]
l
Ii(, jj ) ¦ ª¬I (j ,jj) º¼ ,
0 İ i j,
l 0
and f
Gi[,İj j ]
¦ ª¬I
(i ) i ,i
l 0
l
º¼ Ii(,ij) ,
0 İ j i.
The rest of the proof follows from the censoring invariance for the R- and G-measures proved in Theorem 2.3. We define the U-measure as
In(,nn) ,
n ı 0.
The following theorem provides an equivalent form to the equations in Lemma 2.4, and is called the Wiener-Hopf equations for any irreducible Markov chain in terms of the R- and G-measures. Theorem 2.4 (1) For 0İi< j, f
Ri , j ( I < j )
Pi , j
¦R
i,k
k j 1
82
( I < k ) Gk , j .
2
Block-Structured Markov Chains
(2) For 0İ j
( I < i ) Gi , j
¦R
Pi , j
i,k
( I < k ) Gk , j .
k i 1
(3) For nı0, f
Pn ,n
¦R
n,k
( I < k ) Gk , n .
k n 1
Proof We only prove (1), while (2) and (3) can be proved similarly. It follows from Lemma 2.5 that
Ri , j ( I < j ) Ii(, jj ) and
Ii(,ij) .
( I < i ) Gi , j
Using Lemma 2.4 and the censoring invariance in Theorem 2.3 leads to f
Ii(, jj )
Pi , j
f
¦ I ¦ ª¬I (k ) i ,k
k j 1
l 0
f
Pi , j
¦R
[İk ] i,k
(k ) k ,k
l
º¼ Ik(,kj)
( I < k ) G>kİ, jk @
k j 1 f
Pi , j
¦R
i,k
( I < k ) Gk , j .
k j 1
This completes the proof. Based on the Wiener-Hopf equations given in Theorem 2.4, a UL-type RG-factorization for the transition probability matrix P is obtained in the following theorem. The UL-type RG-factorization always holds for any irreducible Markov chain, and it is very useful for computing the stationary probability vector which can help to analyze stationary performance measures of a practical stochastic system. Theorem 2.5 For the Markov chain P given in Eq. (2.1), I P
( I RU ) ( I < D ) ( I GL ),
(2.20)
where
RU
§ 0 R0,1 R0,2 R0,3 ... · ¨ ¸ 0 R1,2 R1,3 ... ¸ ¨ ¨ 0 R2,3 ... ¸ , ¨ ¸ 0 ... ¸ ¨ ¨ % ¸¹ © diag (< 0 ,< 1 ,< 2 ,< 3 , ...) 83
Constructive Computation in Stochastic Models with Applications
and
GL
§ 0 · ¨G ¸ 0 ¨ 1,0 ¸ ¨ G2,0 G2,1 0 ¸. ¨ ¸ ¨ G3,0 G3,1 G3,2 0 ¸ ¨ # ¸ # # # % © ¹
Proof We only prove Eq. (2.20) for the entries in the first block-row and first block-column. The rest can be proved similarly. The entry (0, 0) on the right-hand side is f
I < 0 ¦ R0, k ( I < k ) Gk ,0 , k 1
which is equal to I P0,0 according to (3) of Theorem 2.4. The entry (0, l) with lı1 on the right-hand side is f
R0,l ( I < l )
¦R
0, k
( I < k ) Gk ,l ,
k l 1
which is equal to P0,l according to (1) of Theorem 2.4. Finally, to see that the entry (l, 0) with lı1 on the right-hand side is equal to the corresponding entry on the left-hand side, it follows from (2) of Theorem 2.4 that f
( I < l ) Gl ,0
¦R
l ,k
( I < k ) Gk ,0
Pl ,0 .
k l 1
This completes the proof. Now, we consider some important examples for simplifying the UL-type RG-factorization, which is necessary in the study of practical systems, and also provides some new understanding for the Neuts’ results, e.g., see [26, 27].
2.2.1 Level-Dependent Markov Chains of M/G/1 Type For the Markov chain P of M/G/1 type given in Eq. (1.7), we write
Wk
84
§ Ak , k ¨ ¨ Ak 1, k ¨ ¨¨ ©
Ak , k 1
Ak , k 2
Ak 1, k 1
Ak 1, k 2
Ak 2, k 1
Ak 2, k 2 %
... · ¸ ... ¸ , ... ¸ ¸ % ¸¹
k ı 1.
2
§ l (k ) We denote by ¨ W 1,1 © lk W
f
¦ [W ] . It l
k
T
l (2,1k ) , W
T
Block-Structured Markov Chains
T
· , ... ¸ the first block-column of the matrix ¹
follows from Eq. (2.17) and Eq. (2.19) that for 0İiİn and
l 0
0İ jİn 1,
Ii(,nj )
i j İ 1, Ai , j , ° i j ı 2, °0, ® f °A A W l l(,1n 1) A , 0 İ i İ n. i ,n l n 1, n °¯ i , n ¦ l 1
Therefore, f
¦A
Ri , j
i , j l
) l l(j1,1 W ,
0 İ i İ j,
(2.21)
l 0
Gk
Gi , j
def
(k ) l 1,1 Gk , k 1 W Ak ,k 1 ,
k ı 1,
(2.22)
0 for 0 İ j İ i 2, and f
Ak ,k ¦ Ak ,k i Gk i Gk i 1 ... Gk 1 ,
k ı 0.
(2.23)
i 1
The following lemma provides expressions for the first block-column in the (k ) l k in terms of W l 1,1 and the G-measure {G }. fundamental matrix W k
Lemma 2.6 For kı1 and jı2,
l (jk,1) W
(k )
l 1,1 Gk j 1Gk j 2 ... Gk 1W
(2.24)
and (k ) l 1,1 ( I < k ) W
(k ) l 1,1 W ( I < k )
I.
(2.25)
Proof Eq. (2.25) is clear from Lemma 2.3 and the expression < k T0 U 2 Tl1V2 . Thus we only need to prove Eq. (2.24). Let Wk
§ T0 U 2 · ¨V T ¸ , © 2 1 ¹
where 85
Constructive Computation in Stochastic Models with Applications
T0
Ak ,k ,
T1 Wk 1 ,
U2
( Ak ,k 1 , Ak , k 2 , Ak , k 3 , ...),
V2
( AkT, k 1 , 0, 0, 0, ...)T .
It follows from Lemma 2.3 that (k ) H 2,1
l ) 1 Tl1V2 ( I T0 U 2 TV 1 2
l H (k ) TV 1 2 1,1
(k ) l k 1V W l 1,1 W . 2
Note that
l k 1V W 2
( k 1) §W l 1,1 Ak 1, k · ¨ ¸ ¨ l ( k 1) ¸ W A ¨ 2,1 k 1, k ¸ ¨ l ( k 1) ¸ ¨ W 3,1 Ak 1, k ¸ ¨ ¸ # © ¹
§ Gk 1 · ¨¨ ( k 1) ¸¸ , © H 2,1 Ak 1, k ¹
and § Gk 1 · l (k ) ¨¨ ( k 1) ¸¸ W 1,1 , © H 2,1 Ak 1,k ¹
(k ) H 2,1
we obtain l (2,1k ) W (k ) l 3,1 W
(k ) l (2,1k 1) A W l 1,1 W k 1, k
(k )
l 1,1 , Gk 1W ( k 1)
(k)
l 1,1 A W l 1,1 Gk 2 W k 1, k
(k)
l 1,1 , Gk 2 Gk 1W
by induction, we have l (jk,1) W
(k ) l (jk1,11) A W l 1,1 W k 1, k
(k )
l 1,1 . Gk j 1Gk j 2 ... Gk 1W
This completes the proof. The following theorem provides a useful expression or interpretation for the Rand G-measures. Theorem 2.6 (1) For 0İ i < j, f § · 1 ¨ Ai , j ¦ Ai , j l Gl j Gl j 1 ... G j 1 ¸ ( I < j ) . l 1 © ¹
Ri , j
(2.26)
(2) The matrix sequence {Gi} is the minimal nonnegative solution to the system of matrix equations f
Gi
Ai ,i 1 ¦ Ai ,i l Gi l Gi l 1 ... Gi , l 0
86
i ı 1.
(2.27)
2
Block-Structured Markov Chains
Proof (1) Using Lemma 2.6, we obtain f
Ri , j
¦A
i, j l
l l(,1j ) W
l 1
f
(k )
l 1,1 Ai ,i l ¦ Ai , j l G j l 1G j l 2 " Gk 1W l 1
f § · A Ai , j l Gl j Gl j 1 " G j 1 ¸ ( I < j ) 1 . ¦ ¨ i, j l 1 © ¹
(2) Since Gk
(k ) l 1,1 W Ak , k 1 , we obtain that ( I < k ) Gk
Ak , k 1 or Gk
Ak , k 1
< k Gk . Note that f
Ak ,k ¦ Ak ,k i Gk i Gk i 1 " Gk 1 , i 1
it is easy to obtain the desired result given in Eq. (2.26). This completes the proof.
2.2.2 Level-Independent Markov Chains of M/G/1 Type For the Markov chain P of M/G/1 type given in Eq. (1.3), let the matrix G be the f
minimal nonnegative solution to the matrix equation G
¦ A G . Then k
k
the
k 0
G-measure is given by Gk
G1 , ® ¯G,
k 1,
G1
( I < ) 1 B0 ;
k ı 2,
where the U-measure f
<0
B1 ¦ Bk G k 2 G1 k 2
and for kı1 f
<
¦AG k
k 1
;
k 1
and the R-measure R0, j
§ f 1 k 1 · ¨ ¦ Bk G ¸ ( I < ) , k j 1 © ¹
j ı 1, 87
Constructive Computation in Stochastic Models with Applications
and for iı1 § f k 1 · 1 ¨ ¦ Ak G ¸ ( I < ) , k j 1 © ¹
Ri , j
j ı 1.
2.2.3 Level-Dependent Markov Chains of GI/M/1 Type Using an analysis similar to the Markov chain of M/G/1 type, we can deal with a Markov chain of GI/M/1 type given in Eq. (1.6). We provide a simple introduction to the R-, U- and G-measures. To show this, we write
Wk
§ Ak , k ¨ ¨ Ak 1, k ¨ Ak 2, k ¨¨ © #
Ak , k 1 Ak 1, k 1
Ak 1, k 2
Ak 2, k 1 Ak 2, k 2
#
#
· ¸ ¸, %¸ ¸ %¸¹
k ı 1.
(k ) (k ) l 1,1 l 1,2 lk We denote by §¨ W , W , ... ·¸ the first block-row of the matrix W © ¹ Hence we have
Rk
Ri , j
def
Rk , k 1
( k 1) l 1,1 Ak , k 1W ,
f
¦ [W ] . l
k
l 0
k ı 0,
0 for j ı i 2, f
¦Wl
Gi , j
(i) 1, l
Ai l , j ,
0 İ j i,
l 1
and f
Ak , k ¦ Rk Rk 1 ! Rk i 1 Ak i , k ,
k ı 0.
i 1
It is clear that for kı1 and jı2, l 1,( kj) W
(k ) l 1,1 W Rk Rk 1 ! Rk j 2 .
Therefore, the matrix sequence {Rk} is the minimal nonnegative solution to the system of matrix equations f
Rk
Ak , k 1 ¦ Rk Rk 1 ! Rk i 1 Ak i , k 1 ,
k ı 0,
i 1
Gi , j 88
f § · ( I < i )1 ¨ Ai , j ¦ Ri Ri 1 ! Ri l 1 Ai l , j ¸ . l 1 © ¹
2
Block-Structured Markov Chains
2.2.4 Level-Independent Markov Chains of GI/M/1 Type For the Markov chain P of M/G/1 type given in Eq. (1.2), let the matrix R be the f
minimal nonnegative solution to the matrix equation R
¦R
k
Ak . Then the
k 0
R-measure is given by Rk
R1 , k 1, ® ¯ R, k ı 2,
R1
B0 ( I < )1 ;
where
the U-measure f
<0
B1 ¦ R1 R k 2 Bk k 2
and for kı1 f
<
¦R
k 1
Ak ;
k 1
and the G-measure G0, j
§ f · ( I < ) 1 ¨ ¦ R k 1 Bk ¸ , © k j 1 ¹
j ı 1,
Gi , j
§ f · ( I < )1 ¨ ¦ R k 1 Ak ¸ , © k j 1 ¹
j ı 1.
and for iı1
2.2.5 The QBD Processes For the QBD process, the matrix sequences {Rk} and {Gk} are the minimal nonnegative solutions to the systems of matrix equations Rk
Ak( k1) Rk Ak( k11) Rk Rk 1 Ak( k1 2) ,
k ı 0,
and Gk
Ak( k1) Ak( k ) Gk Ak( k1) Gk 1Gk ,
k ı 1, 89
Constructive Computation in Stochastic Models with Applications
respectively. The RG-factorization Eq. (2.20) can be simplified as
RU
§ 0 R0 · ¨ ¸ 0 R1 ¨ ¸ 0 R2 ¨ ¸ ¨ 0 %¸ ¨¨ ¸ %¸¹ ©
(2.28)
GL
§0 · ¨G 0 ¸ ¨ 1 ¸ G2 0 ¨ ¸ ¨ ¸ G3 0 ¨¨ ¸ % %¸¹ ©
(2.29)
and
2.3 The LU-Type RG-Factorization In this section, three LU-type probabilistic measures: the R-, U- and G-measures, are defined in another censored direction, and an LU-type RG-factorization for the transition probability matrix is derived. Note that the method used in this section is similar to that in Section 2.2. For the block-structured Markov chain P given in Eq. (2.1), we use the same partition given in Eq. (2.2). Let P[ın ]
W V ( I T ) 1 U .
The block-entry expression of the matrix P[ın ] is written as
P[ın ]
§ Kn( n, n) Kn( n,n)1 Kn( n,n) 2 " · ¨ (n) ¸ (n) (n) ¨ Kn 1,n Kn 1, n 1 Kn 1, n 2 " ¸ ¨ (n) ¸. (n) (n) ¨ Kn 2,n Kn 2, n 1 Kn 2,n 2 " ¸ ¨ # ¸ # # © ¹
Lemma 2.7 For i, jın 1, we have n
Ki(,nj1)
k 0
Proof Since 90
1
Pi , j ¦Ki(,kk) ª¬ I Kk( k, k) º¼ Kk( k, j) .
2
P[ı( n 1)]
Block-Structured Markov Chains
§ Kn( n)1,n 1 Kn( n)1,n 2 Kn( n)1,n 3 !· ¨ (n) ¸ (n) (n) ¨ Kn 2, n 1 Kn 2, n 2 Kn 2, n 3 !¸ ¨ K ( n) K ( n) K ( n) !¸ ¨¨ n 3, n 1 n 3, n 2 n 3,n 3 ¸¸ # # © # ¹ (n) § Kn 1, n · ¨ ( n) ¸ 1 ¨K ¸ ¨ n 2,n ¸ ª¬ I Kn( n,n) º¼ Kn( n,n)1 , Kn( n,n) 2 , Kn( n, n) 3 , ... , (n) ¨ Kn 3,n ¸ ¨ # ¸ © ¹
we obtain 1
Ki(,nj1) Ki(,nj) Ki(,nn) ª¬ I Kn( n, n) º¼ Kn( n, j) 1
1
Ki(,nj1) Ki(,nn1)1 ª¬ I Kn( n1,1)n 1 º¼ Kn( n1,1)j Ki(,nn) ª¬ I Kn( n, n) º¼ Kn( n, j) " n
1
(k ) (k ) ª º (k ) Ki(0) , j ¦ Ki , k ¬ I Kk , k ¼ Kk , j . k 0
Note that Ki(0) ,j
Pi , j for all i, j ı 0. This completes the proof.
We define the U-measure as
) n Kn( n, n) ,
n ı 0,
(2.30)
the R-measure as Ri , j
1
Ki(, jj) ª¬ I K (j ,j j) º¼ ,
0 İ j i,
and the G-measure as 1
Gi , j
(i ) (i ) ¬ª I Ki ,i ¼º Ki , j ,
0 İ i j.
Ri , j
Ki(, jj) ( I ) j )1 ,
0 İ j i,
(2.31)
Gi , j
( I )i ) 1Ki(,ij) ,
0 İ i j.
(2.32)
It is obvious that
and
The following theorem provides the important Wiener-Hopf equations for the R-, U- and G-measures. Theorem 2.7 The LU-type R-, U- and G-measures defined above satisfy the following Wiener-Hopf equations, 91
Constructive Computation in Stochastic Models with Applications j 1
Ri , j ( I ) j )
Pi , j ¦ Ri , k ( I ) k ) Gk , j ,
0 İ j i,
(2.33)
0 İ i j,
(2.34)
k 0 i 1
( I )i ) Gi , j
Pi , j ¦ Ri , k ( I ) k ) Gk , j , k 0
and n 1
Pn , n ¦ Rn, k ( I ) k ) Gk , n ,
)n
n ı 0.
(2.35)
k 0
Proof We only prove Eq. (2.33), while Eq. (2.34) and Eq. (2.35) can be proved similarly. It follows from Eq. (2.31) that Ri , j ( I ) j ) Ki(, jj) .
(2.36)
By Lemma 2.7, we have j 1
Ki(, jj)
1
Pi , j ¦Ki(,kk) ª¬ I Kk( k, k) º¼ Kk( k, j) .
(2.37)
k 0
From Eq. (2.31), Eq. (2.32) and Eq. (2.37) we obtain j 1
Ki(, jj)
Pi , j ¦ Ri , k ( I ) k ) Gk , j , k 0
which, together with Eq. (2.36), leads to the stated result. By the Wiener-Hopf equations Eq. (2.33), Eq. (2.34) and Eq. (2.35), the following theorem constructs an LU-type RG-factorization, which is different from the UL-type RG-factorization given in Theorem 2.5. Theorem 2.8 For the Markov chain P defined in Eq. (2.1), I P
( I RL ) ( I ) D ) ( I GU ),
where
RL
)D 92
§ 0 · ¨ ¸ ¨ R1,0 0 ¸ ¨ R2,0 R2,1 0 ¸, ¨ ¸ ¨ R3,0 R3,1 R3,2 0 ¸ ¨ # # # # %¸¹ © diag () 0 , )1 , ) 2 , )3 , ...)
(2.38)
2
Block-Structured Markov Chains
and § 0 G0,1 G0,2 G0,3 ¨ 0 G1,2 G1,3 ¨ ¨ 0 G2,3 ¨ 0 ¨ ¨ ©
GU
!· ¸ !¸ !¸ . ¸ !¸ %¸¹
Proof We prove Eq. (2.38) for the block-entries of the first two block-rows. The rest can be proved similarly. For the first block-row, the entry (0, 0) is (0) I K0,0
I )0
I P0,0 ,
and the entry (0, l) for lı1 is 1
(0) (0) º¼ ª¬ I K0,0 º¼ K0,(0)l ª¬ I K0,0
( I ) 0 ) G0,l
K0,(0)l
P0,l
by means of Eq. (2.32). For the second block-row, the entry (1, 0) is 1
(0) (0) (0) ª¬ I K0,0 º¼ ª¬ I K0,0 º¼ K1,0 (0) K1,0 P1,0
R1,0 ( I ) 0 )
by Eq. (2.31). By Lemma 2.7, the entry Eq. (1, 1) is 1
1
(0) (0) (0) (0) (0) ª¬ I K0,0 º¼ ª¬ I K0,0 º¼ ª¬ I K0,0 º¼ K0,1 R1,0 ( I ) 0 ) G0,1 I )1 K1,0 (1) I K1,1 1
(0) (0) (0) (1) ª¬ I K0,0 º¼ K0,1 I K1,1 K1,0
I P1,1 , and the entry (1, k) for kı2 is R1,0 ( I )0 ) G0, k ( I )1 ) G1, k
1
1
(0) (0) (0) (0) ª¬ I K0,0 º¼ ª¬ I K0,0 º¼ ª¬ I K0,0 º¼ K0,(0)k K1,0 1
(1) (1) º¼ ª¬ I K1,1 º¼ K1,(1)k ª¬ I K1,1 1
(0) (0) ª¬ I K0,0 º¼ K0,(0)k K1,(1)k K1,0
P1, k . This completes the proof. 93
Constructive Computation in Stochastic Models with Applications
The LU-type RG-factorization, given in Theorem 2.5, can be applied to solve the inhomogeneous systems of infinite-dimensional linear equations: x( I P) b or ( I P ) x b where b z 0. The LU-type RG-factorization, given in Theorem 2.8, is useful in the study of transient solutions, such as, the first passage times and the sojourn times. In what follows we analyze some important examples for illustrating applications of the LU-type RG-factorization.
2.3.1 Level-Dependent Markov Chains of M/G/1 Type For the Markov chain P of M/G/1 type given in Eq. (1.7), the following corollary provides an expression for the matrix RL . Corollary 2.3 For the Markov chain P of M/G/1 type given in Eq. (1.7),
RL
§ 0 · ¨ ¸ ¨ R1,0 0 ¸ ¨ ¸. R2,1 0 ¨ ¸ R3,2 0 ¨ ¸ ¨ % % ¸¹ ©
Proof Note that P[ın ]
W V ( I T ) 1 U § An, n ¨ ¨ An 1, n ¨ ¨ © § An , n ¨ ¨ An 1, n ¨ ¨ ©
An, n 1 An, n 2 " · ¸ An 1, n 1 An 1, n 2 " ¸ § An, n 1 · 1 ¨ ¸ (I T ) U An 2, n 1 An 2, n 2 " ¸ © ¹ ¸ % %¹ An, n 1 An, n 2 " · ¸ An 1, n 1 An 1, n 2 " ¸ § * * * " · ¨ ¸, An 2, n 1 An 2, n 2 " ¸ © ¹ ¸ % %¹
where * denotes a non-zero block-entry, we obtain that Ki(,nj) which lead to Ri , j and 94
Ki(, jj) ª¬ I K (j ,j j) º¼
1
0,
0 for all n İ j İ i 2,
0 İ j İ i 2,
2
RL
Block-Structured Markov Chains
§ 0 · ¨R ¸ 0 ¨ 1,0 ¸ ¨ ¸. R2,1 0 ¨ ¸ R3,2 0 ¨ ¸ ¨ % % ¸¹ ©
This completes the proof.
2.3.2 Level-Dependent Markov Chains of GI/M/1 Type

For the Markov chain $P$ of GI/M/1 type given in Eq. (1.6), the G-measure simplifies to
$$G_{i,j} = 0, \quad 0 \le i \le j-2.$$
Thus, we have
$$G_U = \begin{pmatrix} 0 & G_{0,1} & & & \\ & 0 & G_{1,2} & & \\ & & 0 & G_{2,3} & \\ & & & 0 & \ddots \\ & & & & \ddots \end{pmatrix}.$$
2.3.3 The QBD Processes

We consider a discrete-time QBD process whose transition probability matrix is given in Eq. (1.4). The LU-type RG-factorization is given by
$$I - P = (I - R_L)(I - \Phi_D)(I - G_U), \tag{2.39}$$
where
$$\Phi_D = \mathrm{diag}(\Phi_0, \Phi_1, \Phi_2, \Phi_3, \ldots),$$
$$
R_L = \begin{pmatrix}
0 & & & & \\
R_1 & 0 & & & \\
& R_2 & 0 & & \\
& & R_3 & 0 & \\
& & & \ddots & \ddots
\end{pmatrix}
\qquad\text{and}\qquad
G_U = \begin{pmatrix}
0 & G_0 & & & \\
& 0 & G_1 & & \\
& & 0 & G_2 & \\
& & & 0 & \ddots \\
& & & & \ddots
\end{pmatrix}.
$$
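For a QBD process truncated to finitely many levels, the factors in Eq. (2.39) can be computed by censoring level 0, then level 1, and so on; each censoring step is a block Schur complement. The following is a minimal numerical sketch of this construction, assuming NumPy; the block sizes and the random strictly substochastic test matrix are illustrative choices, not from the book:

```python
import numpy as np

rng = np.random.default_rng(0)
m, M = 2, 4                                  # block size, levels 0..M
N = m * (M + 1)

# Strictly substochastic block-tridiagonal (QBD-like) matrix, so I - P is invertible
P = np.zeros((N, N))
for i in range(M + 1):
    for j in (i - 1, i, i + 1):
        if 0 <= j <= M:
            P[i*m:(i+1)*m, j*m:(j+1)*m] = rng.uniform(0, 1, (m, m))
P /= 1.1 * P.sum(axis=1, keepdims=True)

def blk(A, i, j):
    return A[i*m:(i+1)*m, j*m:(j+1)*m]

# LU-type censoring: remove level 0, then level 1, ... (each step a Schur complement)
Pn = P.copy()                                # P^{[>=0]} = P
RL, GU, PhiD = np.zeros((N, N)), np.zeros((N, N)), np.zeros((N, N))
for n in range(M + 1):
    Phi_n = blk(Pn, n, n)                    # Phi_n = P^{[>=n]}_{n,n}
    PhiD[n*m:(n+1)*m, n*m:(n+1)*m] = Phi_n
    inv = np.linalg.inv(np.eye(m) - Phi_n)
    for i in range(n + 1, M + 1):
        RL[i*m:(i+1)*m, n*m:(n+1)*m] = blk(Pn, i, n) @ inv   # R_{i,n}
        GU[n*m:(n+1)*m, i*m:(i+1)*m] = inv @ blk(Pn, n, i)   # G_{n,i}
    nxt = Pn.copy()
    for i in range(n + 1, M + 1):
        for j in range(n + 1, M + 1):
            nxt[i*m:(i+1)*m, j*m:(j+1)*m] += blk(Pn, i, n) @ inv @ blk(Pn, n, j)
    Pn = nxt                                 # P^{[>=n+1]} on levels > n

I = np.eye(N)
print(np.allclose(I - P, (I - RL) @ (I - PhiD) @ (I - GU)))  # True
```

Making P strictly substochastic keeps every matrix $I-\Phi_n$, and $I-P$ itself, invertible, so the factorization can be checked by direct multiplication.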
2.4 The Stationary Probability Vector

In this section, we apply the RG-factorizations to provide an R-measure expression for the stationary probability vector of any positive-recurrent discrete-time Markov chain with block structure.

We always assume that the block-structured Markov chain P given in Eq. (2.1) is irreducible and positive recurrent, thus there must exist the stationary probability vector $\pi = (\pi_0, \pi_1, \pi_2, \ldots)$. Obviously, $\pi(I-P) = 0$ and $\pi e = 1$. Based on Theorem 2.5, we have
$$\pi(I-P) = \pi(I-R_U)(I-\Psi_D)(I-G_L).$$
We write $x = (x_0, x_1, x_2, \ldots)$ and $x = \pi(I-R_U)$. Then $x(I-\Psi_D)(I-G_L) = 0$, which leads to
$$
\begin{cases}
x_0(I-\Psi_0) + \displaystyle\sum_{k=1}^{\infty} x_k(I-\Psi_k)G_{k,0} = 0, \\
x_i(I-\Psi_i) + \displaystyle\sum_{k=i+1}^{\infty} x_k(I-\Psi_k)G_{k,i} = 0, & \text{for } i \geq 1.
\end{cases} \tag{2.40}
$$
Note that $\Psi_0$ is the transition probability matrix of the censored chain $P^{[0]}$. Since P is positive recurrent, $P^{[0]}$ is also positive recurrent. Hence there exists the stationary probability vector $x_0$ such that $x_0(I-\Psi_0) = 0$ and $x_0 e = 1$. It is easy to check that $(\tau x_0, 0, 0, \ldots)$ is a non-zero nonnegative solution to the system of Eq. (2.40), where $\tau$ is a constant. Solving the simplified system of linear equations $\pi(I-R_U) = (\tau x_0, 0, 0, \ldots)$, we obtain
$$\pi = (\tau x_0, 0, 0, \ldots)(I-R_U)^{-1}.$$
Therefore, we can obtain the following theorem.

Theorem 2.9 For the discrete-time block-structured Markov chain given in Eq. (2.1), the stationary probability vector is given by
$$
\begin{cases}
\pi_0 = \tau x_0, \\
\pi_k = \displaystyle\sum_{i=0}^{k-1} \pi_i R_{i,k}, & k \geq 1,
\end{cases}
$$
where $x_0$ is the stationary probability vector of the censored Markov chain $\Psi_0$ to level 0 and the scalar $\tau$ is uniquely determined by $\sum_{k=0}^{\infty} \pi_k e = 1$.
Remark 2.2 Although the matrix $I-G_L$ must be invertible, the equation $y(I-G_L) = 0$ or $(I-G_L)y = 0$ may have a non-zero nonnegative solution. This is different from a system of linear equations with finite dimensions. This is the reason why we need to take a special non-zero nonnegative solution to Eq. (2.40). As an illustration, we consider
$$
G_L = \begin{pmatrix}
0 & & & & \\
G_1 & 0 & & & \\
& G & 0 & & \\
& & G & 0 & \\
& & & \ddots & \ddots
\end{pmatrix}.
$$
When G is irreducible and stochastic, it is easy to see that $(gG_1, g, g, \ldots)$ is a non-zero nonnegative solution to the equation $y(I-G_L) = 0$, where $g = gG$ and $ge = 1$.

When a level-dependent Markov chain of GI/M/1 type is irreducible and positive recurrent, the stationary probability vector is given by
$$
\begin{cases}
\pi_0 = \tau x_0, \\
\pi_k = \pi_{k-1} R_{k-1,k}, & k \geq 1,
\end{cases}
\qquad\text{or}\qquad
\begin{cases}
\pi_0 = \tau x_0, \\
\pi_k = \pi_0 \displaystyle\prod_{i=1}^{k} R_{i-1,i}, & k \geq 1,
\end{cases}
$$
where $x_0$ is the stationary probability vector of the censored Markov chain $\Psi_0$ to level 0 and the scalar $\tau$ is uniquely determined by $\sum_{k=0}^{\infty} \pi_k e = 1$.
For a level-independent Markov chain of GI/M/1 type, it is easy to check that $R_{k-1,k} = R$ for $k \geq 2$, thus we have
$$
\begin{cases}
\pi_0 = \tau x_0, \\
\pi_k = \pi_0 R_{0,1} R^{k-1}, & k \geq 1,
\end{cases}
$$
where $x_0$ is the stationary probability vector of the censored Markov chain $\Psi_0$ to level 0 and the scalar $\tau$ is uniquely determined by $\sum_{k=0}^{\infty} \pi_k e = 1$. This is the same as the matrix-geometric solution in Neuts [26].
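In the level-independent case the matrix R can also be computed directly. For the QBD special case with blocks $A_0$ (one level up), $A_1$ (same level) and $A_2$ (one level down), R is the minimal nonnegative solution of $R = A_0 + RA_1 + R^2A_2$, and successive substitution starting from $R = 0$ converges to it monotonically. A sketch under illustrative assumptions (NumPy; the three blocks form a positive-recurrent example chosen only for this sketch):

```python
import numpy as np

# Level-independent QBD blocks: P[k,k+1] = A0, P[k,k] = A1, P[k,k-1] = A2 (k >= 1);
# rows of A0 + A1 + A2 sum to one.
A0 = np.array([[0.10, 0.05], [0.05, 0.10]])   # one level up
A1 = np.array([[0.30, 0.10], [0.10, 0.30]])   # same level
A2 = np.array([[0.25, 0.20], [0.20, 0.25]])   # one level down

# Minimal nonnegative solution of R = A0 + R A1 + R^2 A2 by successive substitution
R = np.zeros_like(A0)
for _ in range(2000):
    R = A0 + R @ A1 + R @ R @ A2

residual = np.max(np.abs(R - (A0 + R @ A1 + R @ R @ A2)))
print(residual < 1e-12, max(abs(np.linalg.eigvals(R))) < 1)  # True True
```

Positive recurrence (mean drift toward level 0) guarantees $\mathrm{sp}(R) < 1$, which is what makes the geometric tail $\pi_k = \pi_0 R_{0,1}R^{k-1}$ summable.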
2.5 A- and B-measures

In this section, we define A- and B-measures for discrete-time block-structured Markov chains, and also construct their expressions by means of the R- and G-measures, respectively. Based on the A- and B-measures, we provide conditions for the state classification of the Markov chains.

We now define the A- and B-measures as follows:

Definition 2.2 (1) For $i, j \geq 0$ with $i \neq j$, we define $A_{i,j}$ as a matrix of size $m_i \times m_j$ whose (r, s)th entry is the expected number of visits to state (j, s) before hitting any state in level i, given that the process starts in state (i, r). That is,
$$A_{i,j}(r,s) = E[\text{number of visits to state } (j,s) \text{ before hitting } L_i \mid X_0 = (i,r)].$$
(2) For $i, j \geq 0$ with $i \neq j$, we define $B_{i,j}$ as a matrix of size $m_i \times m_j$ whose (r, s)th entry is the probability of hitting state (j, s) upon entering level j for the first time, given that the process starts in state (i, r). That is,
$$B_{i,j}(r,s) = P\{\text{hitting state } (j,s) \text{ upon entering } L_j \text{ for the first time} \mid X_0 = (i,r)\}.$$
Furthermore, the matrices $A_{i,i}$ and $B_{i,i}$ for $i \geq 0$ are explained as follows:
(3) The (r, s)th entry of $A_{i,i}$ is the expected number of returns to state (i, s) before hitting any state in level i, given that the process starts in state (i, r).
(4) The (r, s)th entry of $B_{i,i}$ is the probability of returning to state (i, s) for the first time before hitting any state in level i, given that the process starts in state (i, r).

We first derive expressions for $A_{0,j}$ and $B_{i,0}$ for $i, j \geq 0$. To do this, we write
$$P = \begin{pmatrix} P_{0,0} & U \\ V & W \end{pmatrix}. \tag{2.41}$$
Then
$$P^{[0]} = P_{0,0} + U\widehat{W}V, \tag{2.42}$$
where
$$\widehat{W} = \sum_{n=0}^{\infty} W^n = (I-W)^{-1}_{\min}.$$
It is clear from the definitions of the A- and B-measures that
$$(A_{0,1}, A_{0,2}, A_{0,3}, \ldots) = U\widehat{W} \tag{2.43}$$
and
$$(B_{1,0}^T, B_{2,0}^T, B_{3,0}^T, \ldots)^T = \widehat{W}V. \tag{2.44}$$
Note that
$$\widehat{W} = \sum_{k=0}^{\infty} W^k = I + \widehat{W}W = I + W\widehat{W}, \tag{2.45}$$
it follows from Eq. (2.43) that
$$(A_{0,1}, A_{0,2}, A_{0,3}, \ldots) = U + (A_{0,1}, A_{0,2}, A_{0,3}, \ldots)W, \tag{2.46}$$
which leads to the block-entry form
$$A_{0,j} = P_{0,j} + \sum_{k=1}^{\infty} A_{0,k} P_{k,j}, \qquad j \geq 1; \tag{2.47}$$
and from Eq. (2.44) that
$$(B_{1,0}^T, B_{2,0}^T, B_{3,0}^T, \ldots)^T = V + W(B_{1,0}^T, B_{2,0}^T, B_{3,0}^T, \ldots)^T, \tag{2.48}$$
which leads to the block-entry form
$$B_{i,0} = P_{i,0} + \sum_{k=1}^{\infty} P_{i,k} B_{k,0}, \qquad i \geq 1.$$
Remark 2.3 Now, we provide comparisons between the A-measure and the R-measure, and between the B-measure and the G-measure.
(1) It is easy to see from Eq. (2.43) and Corollary 2.2 that
$$A_{0,k} = (P_{0,1}, P_{0,2}, P_{0,3}, \ldots)\,\widehat{W}_1^{(\cdot,k)}
\qquad\text{and}\qquad
R_{0,k} = (P_{0,k}, P_{0,k+1}, P_{0,k+2}, \ldots)\,\widehat{W}_k^{(\cdot,1)};$$
similarly,
$$B_{k,0} = \widehat{W}_1^{(k,\cdot)}(P_{1,0}^T, P_{2,0}^T, P_{3,0}^T, \ldots)^T
\qquad\text{and}\qquad
G_{k,0} = \widehat{W}_k^{(1,\cdot)}(P_{k,0}^T, P_{k+1,0}^T, P_{k+2,0}^T, \ldots)^T.$$
(2) Since $(I-W)\widehat{W} = \widehat{W}(I-W) = I$, it follows from Eq. (2.42) that
$$P^{[0]} = P_{0,0} + (A_{0,1}, A_{0,2}, A_{0,3}, \ldots)(I-W)(B_{1,0}^T, B_{2,0}^T, B_{3,0}^T, \ldots)^T$$
and
$$P^{[0]} = P_{0,0} + (R_{0,1}, R_{0,2}, R_{0,3}, \ldots)(I-\Delta)(G_{1,0}^T, G_{2,0}^T, G_{3,0}^T, \ldots)^T,$$
where $\Delta = \mathrm{diag}(\Psi_1, \Psi_2, \Psi_3, \ldots)$.

Remark 2.4 For the truncated transition matrix W, we have the UL-type RG-factorization
$$I - W = (I - R_U)(I - \Psi_D)(I - G_L).$$
Thus, we obtain
$$(A_{0,1}, A_{0,2}, A_{0,3}, \ldots) = U(I-G_L)^{-1}(I-\Psi_D)^{-1}(I-R_U)^{-1}$$
and
$$(B_{1,0}^T, B_{2,0}^T, B_{3,0}^T, \ldots)^T = (I-G_L)^{-1}(I-\Psi_D)^{-1}(I-R_U)^{-1}V.$$
Let $A_{0,j}^{[\leq n]}$ and $B_{i,0}^{[\leq n]}$ be the A- and B-measures for the censored chain $P^{[\leq n]}$ to the censored set $L_{\leq n}$, respectively. The following lemma provides an important censoring invariance for the A- and B-measures.

Lemma 2.8 For $n \geq 1$ and $1 \leq i, j \leq n$,
$$A_{0,j}^{[\leq n]} = A_{0,j} \tag{2.49}$$
and
$$B_{i,0}^{[\leq n]} = B_{i,0}. \tag{2.50}$$
Proof We only prove Eq. (2.49), while the proof of Eq. (2.50) is similar. To prove Eq. (2.49), we write
$$
P = \begin{pmatrix}
P_{0,0} & U_1 & U_2 \\
V_1 & W_{11} & W_{12} \\
V_2 & W_{21} & W_{22}
\end{pmatrix}
$$
partitioned according to the subsets $L_0$, $L_{1\to n}$ and $L_{\geq n+1}$. Then
$$
P^{[\leq n]} = \begin{pmatrix}
P_{0,0} + U_2\widehat{W}_{22}V_2 & U_1 + U_2\widehat{W}_{22}W_{21} \\
V_1 + W_{12}\widehat{W}_{22}V_2 & W_{11} + W_{12}\widehat{W}_{22}W_{21}
\end{pmatrix}. \tag{2.51}
$$
It follows from Eq. (2.51) and Eq. (2.43) that
$$
\big(A_{0,1}^{[\leq n]}, A_{0,2}^{[\leq n]}, \ldots, A_{0,n-1}^{[\leq n]}, A_{0,n}^{[\leq n]}\big)
= \big(U_1 + U_2\widehat{W}_{22}W_{21}\big)\big[I - \big(W_{11} + W_{12}\widehat{W}_{22}W_{21}\big)\big]^{-1}.
$$
Let
$$
\widehat{W} = \begin{pmatrix} \widehat{W}_{(1,1)} & \widehat{W}_{(1,2)} \\ \widehat{W}_{(2,1)} & \widehat{W}_{(2,2)} \end{pmatrix}
= \left[I - \begin{pmatrix} W_{11} & W_{12} \\ W_{21} & W_{22} \end{pmatrix}\right]^{-1}.
$$
Then we obtain
$$\widehat{W}_{(1,1)} = \big[I - W_{11} - W_{12}\widehat{W}_{22}W_{21}\big]^{-1}$$
and
$$\widehat{W}_{(2,1)} = \widehat{W}_{22}W_{21}\big(I - W_{11} - W_{12}\widehat{W}_{22}W_{21}\big)^{-1}.$$
It follows from Eq. (2.43) that
$$
(A_{0,1}, A_{0,2}, \ldots, A_{0,n-1}, A_{0,n})
= (U_1, U_2)\begin{pmatrix} \widehat{W}_{(1,1)} \\ \widehat{W}_{(2,1)} \end{pmatrix}
= \big(U_1 + U_2\widehat{W}_{22}W_{21}\big)\big[I - W_{11} - W_{12}\widehat{W}_{22}W_{21}\big]^{-1}.
$$
Thus, we obtain that for $n \geq 1$,
$$\big(A_{0,1}^{[\leq n]}, A_{0,2}^{[\leq n]}, \ldots, A_{0,n-1}^{[\leq n]}, A_{0,n}^{[\leq n]}\big) = (A_{0,1}, A_{0,2}, \ldots, A_{0,n-1}, A_{0,n}).$$
This completes the proof.
Let
$$
\widetilde{R}_U = \begin{pmatrix}
0 & R_{1,2} & R_{1,3} & R_{1,4} & \cdots \\
& 0 & R_{2,3} & R_{2,4} & \cdots \\
& & 0 & R_{3,4} & \cdots \\
& & & 0 & \ddots \\
& & & & \ddots
\end{pmatrix}
$$
and
$$
\widetilde{G}_L = \begin{pmatrix}
0 & & & & \\
G_{2,1} & 0 & & & \\
G_{3,1} & G_{3,2} & 0 & & \\
G_{4,1} & G_{4,2} & G_{4,3} & 0 & \\
\vdots & \vdots & \vdots & \ddots & \ddots
\end{pmatrix}.
$$
Note that the R-measure $\{R_{i,j}, 1 \leq i < j\}$ and the G-measure $\{G_{i,j}, 1 \leq j < i\}$ are given in Theorem 2.5. The following theorem constructs useful relationships between $A_{0,j}$ and the R-measure, and between $B_{i,0}$ and the G-measure.

Theorem 2.10 (1) The matrix $A_{0,j}$ and the R-measure satisfy
$$
A_{0,j} = \begin{cases}
R_{0,1}, & \text{if } j = 1, \\
R_{0,j} + \displaystyle\sum_{k=1}^{j-1} A_{0,k} R_{k,j}, & \text{if } j \geq 2,
\end{cases}
$$
or in matrix expression,
$$(A_{0,1}, A_{0,2}, A_{0,3}, \ldots)(I - \widetilde{R}_U) = (R_{0,1}, R_{0,2}, R_{0,3}, \ldots).$$
(2) The matrix $B_{i,0}$ and the G-measure satisfy
$$
B_{i,0} = \begin{cases}
G_{1,0}, & \text{if } i = 1, \\
G_{i,0} + \displaystyle\sum_{k=1}^{i-1} G_{i,k} B_{k,0}, & \text{if } i \geq 2,
\end{cases}
$$
or in matrix expression,
$$(I - \widetilde{G}_L)(B_{1,0}^T, B_{2,0}^T, B_{3,0}^T, \ldots)^T = (G_{1,0}^T, G_{2,0}^T, G_{3,0}^T, \ldots)^T.$$
Proof Let $P_{i,j}^{[\leq n]}$ be the (i, j)th block-entry of the censored chain to the censored set $L_{\leq n}$. Then using Eq. (2.47) we obtain
$$A_{0,n}^{[\leq n]} = P_{0,n}^{[\leq n]} + \sum_{k=1}^{n} A_{0,k}^{[\leq n]} P_{k,n}^{[\leq n]},$$
hence
$$A_{0,n}^{[\leq n]}\big[I - P_{n,n}^{[\leq n]}\big] = P_{0,n}^{[\leq n]} + \sum_{k=1}^{n-1} A_{0,k}^{[\leq n]} P_{k,n}^{[\leq n]},$$
which leads to
$$A_{0,n}^{[\leq n]} = P_{0,n}^{[\leq n]}\big[I - P_{n,n}^{[\leq n]}\big]^{-1} + \sum_{k=1}^{n-1} A_{0,k}^{[\leq n]} P_{k,n}^{[\leq n]}\big[I - P_{n,n}^{[\leq n]}\big]^{-1}.$$
Note that the R-measure has the expression
$$R_{k,n}^{[\leq n]} = P_{k,n}^{[\leq n]}\big[I - P_{n,n}^{[\leq n]}\big]^{-1},$$
we get
$$A_{0,n}^{[\leq n]} = R_{0,n}^{[\leq n]} + \sum_{k=1}^{n-1} A_{0,k}^{[\leq n]} R_{k,n}^{[\leq n]}.$$
Applying the censoring invariance for the A- and R-measures, it is easy to see the desired result. This completes the proof.

The following corollary provides expressions for the A- and B-measures in terms of the R- and G-measures, respectively. The proof is easy by using iterative computations.

Corollary 2.4 (1) For $k \geq 1$,
$$
A_{0,k} = R_{0,k} + \sum_{1 \leq i_1 \leq k-1} R_{0,i_1} R_{i_1,k}
+ \sum_{1 \leq i_1 < i_2 \leq k-1} R_{0,i_1} R_{i_1,i_2} R_{i_2,k}
+ \cdots + R_{0,1} R_{1,2} \cdots R_{k-2,k-1} R_{k-1,k}.
$$
(2) For $k \geq 1$,
$$
B_{k,0} = G_{k,0} + \sum_{1 \leq i_1 \leq k-1} G_{k,i_1} G_{i_1,0}
+ \sum_{1 \leq i_2 < i_1 \leq k-1} G_{k,i_1} G_{i_1,i_2} G_{i_2,0}
+ \cdots + G_{k,k-1} G_{k-1,k-2} \cdots G_{2,1} G_{1,0}.
$$
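Theorem 2.10 and Corollary 2.4 can be checked numerically on a finite chain: there $A_{0,j}$ has the closed form $(A_{0,1},\ldots,A_{0,M}) = U\widehat{W}$ of Eq. (2.43) with $\widehat{W} = (I-W)^{-1}$, while the R-measures come from censoring the highest level first. A sketch assuming NumPy (the random stochastic matrix is only a test case):

```python
import numpy as np

rng = np.random.default_rng(1)
m, M = 2, 4
N = m * (M + 1)
P = rng.uniform(0, 1, (N, N))
P /= P.sum(axis=1, keepdims=True)           # irreducible stochastic

def blk(A, i, j):
    return A[i*m:(i+1)*m, j*m:(j+1)*m]

# A_{0,j}: expected visits to level j before returning to level 0, Eq. (2.43)
U = P[:m, m:]
W = P[m:, m:]
A = U @ np.linalg.inv(np.eye(N - m) - W)    # columns hold A_{0,1}, ..., A_{0,M}

# UL-type censoring (remove levels M, M-1, ...) collecting R_{i,n}
Pn = P.copy()
Rm = {}
for n in range(M, 0, -1):
    inv = np.linalg.inv(np.eye(m) - blk(Pn, n, n))
    for i in range(n):
        Rm[(i, n)] = blk(Pn, i, n) @ inv    # R_{i,n} = P^{[<=n]}_{i,n}(I-Psi_n)^{-1}
    nxt = Pn.copy()
    for i in range(n):
        for j in range(n):
            nxt[i*m:(i+1)*m, j*m:(j+1)*m] += blk(Pn, i, n) @ inv @ blk(Pn, n, j)
    Pn = nxt

# Theorem 2.10: A_{0,1} = R_{0,1}; A_{0,j} = R_{0,j} + sum_{k<j} A_{0,k} R_{k,j}
A_rec = {1: Rm[(0, 1)]}
for j in range(2, M + 1):
    A_rec[j] = Rm[(0, j)] + sum(A_rec[k] @ Rm[(k, j)] for k in range(1, j))
ok = all(np.allclose(A_rec[j], A[:, (j-1)*m:j*m]) for j in range(1, M + 1))
print(ok)   # True
```

Expanding the recursion in `A_rec` term by term reproduces exactly the sums of R-products stated in Corollary 2.4.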
Now, we further provide expressions for the matrices $A_{i,0}$ and $B_{0,j}$ for $i, j \geq 1$ in terms of the LU-type R- and G-measures, respectively. To do this, we shall use the LU-type R- and G-measures. Note that
$$P^{[\geq 1]} = W + V(I - P_{0,0})^{-1}U.$$
Thus, we obtain
$$(A_{1,0}^T, A_{2,0}^T, A_{3,0}^T, \ldots)^T = V(I - P_{0,0})^{-1}$$
and
$$(B_{0,1}, B_{0,2}, B_{0,3}, \ldots) = (I - P_{0,0})^{-1}U.$$
It is easy to see that
$$A_{i,0} = P_{i,0}(I - P_{0,0})^{-1}$$
and
$$B_{0,j} = (I - P_{0,0})^{-1}P_{0,j}.$$
Using a probability argument, it is obvious from Theorem 2.10 that
$$
A_{i,0} = \begin{cases}
R_{1,0}, & \text{if } i = 1, \\
R_{i,0} + \displaystyle\sum_{k=1}^{i-1} R_{i,k} A_{k,0}, & \text{if } i \geq 2,
\end{cases}
$$
and
$$
B_{0,j} = \begin{cases}
G_{0,1}, & \text{if } j = 1, \\
G_{0,j} + \displaystyle\sum_{k=1}^{j-1} B_{0,k} G_{k,j}, & \text{if } j \geq 2.
\end{cases}
$$
Therefore, for $k \geq 1$ we obtain
$$
A_{k,0} = R_{k,0} + \sum_{1 \leq i_1 \leq k-1} R_{k,i_1} R_{i_1,0}
+ \sum_{1 \leq i_2 < i_1 \leq k-1} R_{k,i_1} R_{i_1,i_2} R_{i_2,0}
+ \cdots + R_{k,k-1} R_{k-1,k-2} \cdots R_{2,1} R_{1,0}
$$
and
$$
B_{0,k} = G_{0,k} + \sum_{1 \leq i_1 \leq k-1} G_{0,i_1} G_{i_1,k}
+ \sum_{1 \leq i_1 < i_2 \leq k-1} G_{0,i_1} G_{i_1,i_2} G_{i_2,k}
+ \cdots + G_{0,1} G_{1,2} \cdots G_{k-2,k-1} G_{k-1,k}.
$$
In what follows we express the matrices $A_{i,j}$ and $B_{i,j}$ for $i, j \geq 1$, and construct useful relations between the A-measure and the R-measure, and between the B-measure and the G-measure.

Let $P^{[\widehat{i}]}$ be the matrix obtained by deleting the ith block-row and the ith block-column of the matrix P, and let $\widehat{P^{[\widehat{i}]}} = \sum_{k=0}^{\infty}\big(P^{[\widehat{i}]}\big)^k$, as in Eq. (2.42). Then it is easy to see that for $i \geq 1$,
$$(A_{i,0}, \ldots, A_{i,i-1}, A_{i,i+1}, \ldots) = (P_{i,0}, \ldots, P_{i,i-1}, P_{i,i+1}, \ldots)\,\widehat{P^{[\widehat{i}]}}$$
and for $j \geq 1$,
$$(B_{0,j}^T, \ldots, B_{j-1,j}^T, B_{j+1,j}^T, \ldots)^T = \widehat{P^{[\widehat{j}]}}\,(P_{0,j}^T, \ldots, P_{j-1,j}^T, P_{j+1,j}^T, \ldots)^T.$$
Let $A_{i,j}^{[\leq n]}$ and $B_{i,j}^{[\leq n]}$ for $1 \leq i, j \leq n$ be the A- and B-measures for the censored chain $P^{[\leq n]}$ to the censored set $L_{\leq n}$, respectively; and $A_{i,j}^{[\geq n]}$ and $B_{i,j}^{[\geq n]}$ for $i, j \geq n$ the A- and B-measures for the censored chain $P^{[\geq n]}$ to the censored set $L_{\geq n}$, respectively. The following corollary provides the censoring invariance for the A- and B-measures. The proof is easy and is omitted here.

Corollary 2.5 (1) For $n \geq 1$ and $1 \leq i, j \leq n$ with $i \neq j$,
$$A_{i,j}^{[\leq n]} = A_{i,j} \qquad\text{and}\qquad B_{i,j}^{[\leq n]} = B_{i,j}.$$
(2) For $n \geq 1$ and $i, j \geq n$ with $i \neq j$,
$$A_{i,j}^{[\geq n]} = A_{i,j}$$
and
$$B_{i,j}^{[\geq n]} = B_{i,j}.$$
Using a probability analysis, we obtain the following corollary, which indicates useful relations between the A-measure and the R-measure, and between the B-measure and the G-measure.

Corollary 2.6 (1) If $1 \leq i < j$, then
$$
A_{i,j} = \begin{cases}
R_{i,i+1}, & \text{if } j = i+1, \\
R_{i,j} + \displaystyle\sum_{k=i+1}^{j-1} A_{i,k} R_{k,j}, & \text{if } j \geq i+2,
\end{cases}
$$
and
$$
B_{i,j} = \begin{cases}
G_{i,j}, & \text{if } j = i+1, \\
G_{i,j} + \displaystyle\sum_{k=1}^{j-1} B_{i,k} G_{k,j}, & \text{if } j \geq i+2.
\end{cases}
$$
(2) If $i > j \geq 1$, then
$$
A_{i,j} = \begin{cases}
R_{i,j}, & \text{if } j = i-1, \\
R_{i,j} + \displaystyle\sum_{k=1}^{i-1} R_{i,k} A_{k,j}, & \text{if } j \leq i-2,
\end{cases}
$$
and
$$
B_{i,j} = \begin{cases}
G_{i,j}, & \text{if } j = i-1, \\
G_{i,j} + \displaystyle\sum_{k=1}^{i-1} G_{i,k} B_{k,j}, & \text{if } j \leq i-2.
\end{cases}
$$
Now, we discuss the probability of the returning time and the expected returning number, which correspond to the matrices $B_{i,i}$ and $A_{i,i}$ for $i \geq 0$, respectively. It is clear that
$$A_{i,i} = P_{i,i} + \sum_{k \neq i} A_{i,k} P_{k,i}$$
and
$$A_{i,i} = \sum_{k<i} R_{i,k} A_{k,i} + \sum_{k>i} A_{i,k} R_{k,i} = \sum_{k<i} A_{i,k} R_{k,i} + \sum_{k>i} R_{i,k} A_{k,i}.$$
Similarly, we have
$$B_{i,i} = P_{i,i} + \sum_{k \neq i} P_{i,k} B_{k,i}$$
and
$$B_{i,i} = \sum_{k<i} B_{i,k} G_{k,i} + \sum_{k>i} G_{i,k} B_{k,i} = \sum_{k<i} G_{i,k} B_{k,i} + \sum_{k>i} B_{i,k} G_{k,i}.$$
Lemma 2.9 $B_{i,0}e = e$ for all $i \geq 1$ if and only if $\sum_{k=0}^{i-1} G_{i,k}e = e$ for all $i \geq 1$.

Proof Suppose first that $B_{i,0}e = e$ for all $i \geq 1$. Note that
$$
B_{i,0} = \begin{cases}
G_{1,0}, & \text{if } i = 1, \\
G_{i,0} + \displaystyle\sum_{k=1}^{i-1} G_{i,k} B_{k,0}, & \text{if } i \geq 2,
\end{cases}
$$
it is clear that $\sum_{k=0}^{i-1} G_{i,k}e = e$ for all $i \geq 1$.

Suppose now that $\sum_{k=0}^{i-1} G_{i,k}e = e$ for all $i \geq 1$. Then
$$B_{1,0}e = G_{1,0}e = e,$$
$$B_{2,0}e = G_{2,0}e + G_{2,1}B_{1,0}e = (G_{2,0} + G_{2,1})e = e;$$
by induction, we can obtain that $B_{i,0}e = e$ for all $i \geq 1$. This completes the proof.

Theorem 2.11 Suppose that the Markov chain P given in Eq. (2.1) is irreducible and stochastic. P is recurrent if and only if $B_{i,0}e = e$ for all $i \geq 1$.

Proof Suppose first that P is recurrent. Let $f_{(i,r),(j,s)}$ be the probability that the Markov chain ever makes a transition into state (j, s), given that the Markov chain starts in state (i, r). Then $f_{(i,r),(j,s)} = 1$ for all $i, j \geq 0$, $1 \leq r \leq m_i$ and $1 \leq s \leq m_j$. Note that
$$f_{(i,r),(0,s)} = B_{i,0}(r,s) + \sum_{w \neq s} B_{i,0}(r,w) f_{(0,w),(0,s)},$$
we obtain $\sum_{k=1}^{m_0} B_{i,0}(r,k) = 1$, that is, $B_{i,0}e = e$ for all $i \geq 1$.

Suppose now that $B_{i,0}e = e$ for all $i \geq 1$. Note that
$$P^{[0]} = P_{0,0} + \sum_{k=1}^{\infty} P_{0,k} B_{k,0},$$
we obtain that
$$P^{[0]}e = P_{0,0}e + \sum_{k=1}^{\infty} P_{0,k} B_{k,0}e = P_{0,0}e + \sum_{k=1}^{\infty} P_{0,k}e = e.$$
Thus $P^{[0]}$ is stochastic.
Since P is irreducible, it is clear that P is recurrent. This completes the proof.

Corollary 2.7 Suppose that the Markov chain P given in Eq. (2.1) is irreducible and stochastic. P is recurrent if and only if $\sum_{k=0}^{i-1} G_{i,k}e = e$ for all $i \geq 1$.

Corollary 2.8 Suppose that the Markov chain P given in Eq. (2.1) is irreducible and stochastic. P is recurrent if and only if $B_{0,0}$ is stochastic.

Proof Suppose P is recurrent. It follows from Theorem 2.11 that $B_{i,0}e = e$ for all $i \geq 1$. Note that
$$B_{0,0} = P_{0,0} + \sum_{k=1}^{\infty} P_{0,k} B_{k,0},$$
we obtain that
$$B_{0,0}e = P_{0,0}e + \sum_{k=1}^{\infty} P_{0,k} B_{k,0}e = \sum_{k=0}^{\infty} P_{0,k}e = e,$$
which indicates that $B_{0,0}$ is stochastic.

Now suppose that $B_{0,0}$ is stochastic, and suppose that P were transient. Then there would exist at least one $i_0$ such that $B_{i_0,0}e \lneqq e$. In this case,
$$B_{0,0}e = P_{0,0}e + \sum_{k=1}^{\infty} P_{0,k} B_{k,0}e \lneqq \sum_{k=0}^{\infty} P_{0,k}e = e.$$
Hence $B_{0,0}$ would be strictly substochastic. This leads to a contradiction. This completes the proof.
Hence B0,0 would be strictly substochastic. This leads to a contradiction. This completes the proof. For i, jı0 with i z j, we define Mi, j as a matrix of size mi u mj whose (r, s)th entry is the expected number of transitions needed to enter level j for the first time by hitting state ( j, s), given that the process starts in state (i, r). For i, jı0 with i j, the (r, s)th entry of Mi,i is the expected number of transitions needed to return to level i by hitting state (i, s), given that the process starts in state (i, r). f
Lemma 2.10 M 0,0 e
e ¦ A0, k e. k 1
Proof Let the rth row of A0,n be A0,n(r). Then it is easy to see that m0
f
mn
¦ M 0,0 (r , s) 1 ¦¦ E[number of visite to (n, w) s 1
n 1 w 1
before hitting level 0 | X 0
(0, r )]
f
1 ¦ A0,n (r ). n 1
This completes the proof. 107
Lemma 2.11 If the Markov chain P given in Eq. (2.1) is irreducible and stochastic, then P is positive recurrent if and only if the matrix $M_{0,0}$ is finite.

We now provide some discussion on how to relate the matrices $A_{0,j}$ for $j \geq 1$ to the stationary probability vector $\pi$. Note that
$$(A_{0,1}, A_{0,2}, A_{0,3}, \ldots)(I - \widetilde{R}_U) = (R_{0,1}, R_{0,2}, R_{0,3}, \ldots)$$
and
$$(\pi_1, \pi_2, \pi_3, \ldots)(I - \widetilde{R}_U) = \tau x_0 (R_{0,1}, R_{0,2}, R_{0,3}, \ldots),$$
obtained by Theorem 2.9. Therefore, it is obvious that
$$\pi_k = \tau x_0 A_{0,k}, \qquad k \geq 1.$$

Theorem 2.12 If the Markov chain P given in Eq. (2.1) is irreducible and recurrent, then P is positive recurrent if and only if the vector $e + \sum_{k=1}^{\infty} A_{0,k}e$ is finite.

Proof Suppose first that P is positive recurrent. It is clear that the Markov chain P has the stationary probability vector $\pi = (\pi_0, \pi_1, \pi_2, \ldots)$. Note that $\pi_k = \tau x_0 A_{0,k}$ for $k \geq 1$, where $x_0$ is the stationary probability vector of the censored chain $P^{[0]}$ to level 0 and the constant $\tau$ makes $\sum_{k=0}^{\infty} \pi_k e = 1$, hence we have
$$\tau x_0 \sum_{k=1}^{\infty} A_{0,k}e = \sum_{k=1}^{\infty} \pi_k e \leq \sum_{k=0}^{\infty} \pi_k e = 1.$$
Since $x_0 > 0$ and $e > 0$, it is easy to see that $e + \sum_{k=1}^{\infty} A_{0,k}e$ is finite.

Suppose now that the vector $e + \sum_{k=1}^{\infty} A_{0,k}e$ is finite. Since the Markov chain P is irreducible and recurrent, the censored chain $P^{[0]}$ to level 0 is positive recurrent with the stationary probability vector $x_0$. Let $\pi_0 = cx_0$ and $\pi_k = cx_0 A_{0,k}$ for $k \geq 1$. Then
$$\sum_{k=0}^{\infty} \pi_k e = cx_0\left(e + \sum_{k=1}^{\infty} A_{0,k}e\right) < \infty.$$
Taking
$$c = \left[x_0\left(e + \sum_{k=1}^{\infty} A_{0,k}e\right)\right]^{-1} = \tau,$$
it is clear that $\{\pi_k\}$ is the stationary probability vector of P, which leads to that P is positive recurrent. This completes the proof.
2.6 Markov Chains with Finitely-Many Levels

In this section, we study block-structured Markov chains with finitely-many levels, and provide the RG-factorizations. Note that the results of this section should be regarded as a special case of the infinite-level case based on the condition that $P_{i,j} = 0$ for $i \geq M+1$ or $j \geq M+1$; thus we only give a simple discussion here.

We consider an irreducible discrete-time block-structured Markov chain with finitely-many levels whose transition probability matrix is given by
$$
P = \begin{pmatrix}
P_{0,0} & P_{0,1} & \cdots & P_{0,M} \\
P_{1,0} & P_{1,1} & \cdots & P_{1,M} \\
\vdots & \vdots & & \vdots \\
P_{M,0} & P_{M,1} & \cdots & P_{M,M}
\end{pmatrix}, \tag{2.52}
$$
where $P_{i,i}$ is a matrix of size $m_i \times m_i$ for all $0 \leq i \leq M$, and the sizes of the other blocks are determined accordingly.
2.6.1 The UL-Type RG-Factorization

For $0 \leq i, j \leq k$ and $0 \leq k \leq M$, it is clear from Section 2.2 that
$$P_{i,j}^{[\leq k]} = P_{i,j} + \sum_{n=k+1}^{M} P_{i,n}^{[\leq n]}\left\{I - P_{n,n}^{[\leq n]}\right\}^{-1} P_{n,j}^{[\leq n]}.$$
Note that $P_{i,j}^{[\leq M]} = P_{i,j}$ and $P_{i,j}^{[\leq 0]} = P_{i,j}^{[0]}$. Let
$$\Psi_n = P_{n,n}^{[\leq n]}, \qquad 0 \leq n \leq M,$$
$$R_{i,j} = P_{i,j}^{[\leq j]}(I - \Psi_j)^{-1}, \qquad 0 \leq i < j \leq M,$$
and
$$G_{i,j} = (I - \Psi_i)^{-1} P_{i,j}^{[\leq i]}, \qquad 0 \leq j < i \leq M.$$
Then the UL-type RG-factorization is given by
$$I - P = (I - R_U)(I - \Psi_D)(I - G_L),$$
where
$$
R_U = \begin{pmatrix}
0 & R_{0,1} & R_{0,2} & R_{0,3} & \cdots & R_{0,M-1} & R_{0,M} \\
& 0 & R_{1,2} & R_{1,3} & \cdots & R_{1,M-1} & R_{1,M} \\
& & 0 & R_{2,3} & \cdots & R_{2,M-1} & R_{2,M} \\
& & & \ddots & \ddots & \vdots & \vdots \\
& & & & 0 & R_{M-2,M-1} & R_{M-2,M} \\
& & & & & 0 & R_{M-1,M} \\
& & & & & & 0
\end{pmatrix},
$$
$$\Psi_D = \mathrm{diag}(\Psi_0, \Psi_1, \Psi_2, \Psi_3, \ldots, \Psi_{M-1}, \Psi_M)$$
and
$$
G_L = \begin{pmatrix}
0 & & & & & & \\
G_{1,0} & 0 & & & & & \\
G_{2,0} & G_{2,1} & 0 & & & & \\
G_{3,0} & G_{3,1} & G_{3,2} & 0 & & & \\
G_{4,0} & G_{4,1} & G_{4,2} & G_{4,3} & 0 & & \\
\vdots & \vdots & \vdots & \vdots & & \ddots & \\
G_{M,0} & G_{M,1} & G_{M,2} & G_{M,3} & \cdots & G_{M,M-1} & 0
\end{pmatrix}.
$$
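The UL-type construction above is a finite computation: censor level M, then level $M-1$, down to level 0, each step being a block Schur complement, and collect the blocks into $R_U$, $\Psi_D$ and $G_L$. A sketch under illustrative assumptions (NumPy; a random strictly substochastic matrix is used so that $I-P$, and in particular $I-\Psi_0$, stays invertible):

```python
import numpy as np

rng = np.random.default_rng(2)
m, M = 2, 3
N = m * (M + 1)
P = rng.uniform(0, 1, (N, N))
P /= 1.05 * P.sum(axis=1, keepdims=True)     # strictly substochastic

def blk(A, i, j):
    return A[i*m:(i+1)*m, j*m:(j+1)*m]

RU, GL, PsiD = np.zeros((N, N)), np.zeros((N, N)), np.zeros((N, N))
Pn = P.copy()                                # P^{[<=M]} = P
for n in range(M, -1, -1):
    Psi = blk(Pn, n, n)                      # Psi_n = P^{[<=n]}_{n,n}
    PsiD[n*m:(n+1)*m, n*m:(n+1)*m] = Psi
    inv = np.linalg.inv(np.eye(m) - Psi)
    for i in range(n):
        RU[i*m:(i+1)*m, n*m:(n+1)*m] = blk(Pn, i, n) @ inv   # R_{i,n}
        GL[n*m:(n+1)*m, i*m:(i+1)*m] = inv @ blk(Pn, n, i)   # G_{n,i}
    nxt = Pn.copy()
    for i in range(n):
        for j in range(n):
            nxt[i*m:(i+1)*m, j*m:(j+1)*m] += blk(Pn, i, n) @ inv @ blk(Pn, n, j)
    Pn = nxt                                 # P^{[<=n-1]} on levels < n

I = np.eye(N)
print(np.allclose(I - P, (I - RU) @ (I - PsiD) @ (I - GL)))  # True
```

For an irreducible stochastic P the same loop applies, except that $I - \Psi_0$ is singular; that is exactly why Section 2.6.3 obtains the stationary vector from $\Psi_0$ rather than by inverting $I - P$.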
2.6.2 The LU-Type RG-Factorization

For $k \leq i, j \leq M$ and $0 \leq k \leq M$, it is clear from Section 2.3 that
$$P_{i,j}^{[\geq k]} = P_{i,j} + \sum_{n=0}^{k-1} P_{i,n}^{[\geq n]}\left\{I - P_{n,n}^{[\geq n]}\right\}^{-1} P_{n,j}^{[\geq n]}.$$
Note that $P_{i,j}^{[\geq M]} = P_{i,j}^{[M]}$ and $P_{i,j}^{[\geq 0]} = P_{i,j}$. Let
$$\Phi_n = P_{n,n}^{[\geq n]}, \qquad 0 \leq n \leq M,$$
$$R_{i,j} = P_{i,j}^{[\geq j]}(I - \Phi_j)^{-1}, \qquad 0 \leq j < i \leq M,$$
and
$$G_{i,j} = (I - \Phi_i)^{-1} P_{i,j}^{[\geq i]}, \qquad 0 \leq i < j \leq M.$$
Then the LU-type RG-factorization is given by
$$I - P = (I - R_L)(I - \Phi_D)(I - G_U),$$
where
$$
R_L = \begin{pmatrix}
0 & & & & & & \\
R_{1,0} & 0 & & & & & \\
R_{2,0} & R_{2,1} & 0 & & & & \\
R_{3,0} & R_{3,1} & R_{3,2} & 0 & & & \\
R_{4,0} & R_{4,1} & R_{4,2} & R_{4,3} & 0 & & \\
\vdots & \vdots & \vdots & \vdots & & \ddots & \\
R_{M,0} & R_{M,1} & R_{M,2} & R_{M,3} & \cdots & R_{M,M-1} & 0
\end{pmatrix},
$$
$$\Phi_D = \mathrm{diag}(\Phi_0, \Phi_1, \Phi_2, \Phi_3, \ldots, \Phi_{M-1}, \Phi_M)$$
and
$$
G_U = \begin{pmatrix}
0 & G_{0,1} & G_{0,2} & G_{0,3} & \cdots & G_{0,M-1} & G_{0,M} \\
& 0 & G_{1,2} & G_{1,3} & \cdots & G_{1,M-1} & G_{1,M} \\
& & 0 & G_{2,3} & \cdots & G_{2,M-1} & G_{2,M} \\
& & & \ddots & \ddots & \vdots & \vdots \\
& & & & 0 & G_{M-2,M-1} & G_{M-2,M} \\
& & & & & 0 & G_{M-1,M} \\
& & & & & & 0
\end{pmatrix}.
$$
Now, we analyze an interesting Markov chain with finitely-many levels whose transition probability matrix is given by
$$
P = \begin{pmatrix}
P_{0,0} & P_{0,1} & P_{0,2} & P_{0,3} & P_{0,4} & \cdots & P_{0,M-1} & P_{0,M} \\
P_{1,0} & P_{1,1} & P_{1,2} & & & & & \\
P_{2,0} & P_{2,1} & P_{2,2} & P_{2,3} & & & & \\
P_{3,0} & & P_{3,2} & P_{3,3} & P_{3,4} & & & \\
\vdots & & & \ddots & \ddots & \ddots & & \\
P_{M-2,0} & & & & P_{M-2,M-3} & P_{M-2,M-2} & P_{M-2,M-1} & \\
P_{M-1,0} & & & & & P_{M-1,M-2} & P_{M-1,M-1} & P_{M-1,M} \\
P_{M,0} & & & & & & P_{M,M-1} & P_{M,M}
\end{pmatrix}.
$$
Note the special block structure of this Markov chain: using the UL-type censoring computation, we can derive a simpler expression for the UL-type RG-factorization; however, the LU-type RG-factorization cannot be simplified as effectively. It is clear that
$$P_{M,M}^{[\leq M]} = P_{M,M}.$$
Then
$$
P^{[\leq M-1]} = (P_{i,j})_{0 \leq i,j \leq M-1}
+ \begin{pmatrix} P_{0,M} \\ 0 \\ \vdots \\ 0 \\ P_{M-1,M} \end{pmatrix}
\left\{I - P_{M,M}^{[\leq M]}\right\}^{-1}
\begin{pmatrix} P_{M,0}^T \\ 0 \\ \vdots \\ 0 \\ P_{M,M-1}^T \end{pmatrix}^T
$$
and for $0 \leq k \leq M-2$,
$$
P^{[\leq k]} = \big(P_{i,j}^{[\leq k+1]}\big)_{0 \leq i,j \leq k}
+ \begin{pmatrix} P_{0,k+1}^{[\leq k+1]} \\ 0 \\ \vdots \\ 0 \\ P_{k,k+1}^{[\leq k+1]} \end{pmatrix}
\left\{I - P_{k+1,k+1}^{[\leq k+1]}\right\}^{-1}
\begin{pmatrix} \big(P_{k+1,0}^{[\leq k+1]}\big)^T \\ 0 \\ \vdots \\ 0 \\ \big(P_{k+1,k}^{[\leq k+1]}\big)^T \end{pmatrix}^T.
$$
Therefore, we can obtain the UL-type RG-factorization
$$I - P = (I - R_U)(I - \Psi_D)(I - G_L),$$
where
$$
R_U = \begin{pmatrix}
0 & R_{0,1} & R_{0,2} & R_{0,3} & \cdots & R_{0,M-1} & R_{0,M} \\
& 0 & R_{1,2} & & & & \\
& & 0 & R_{2,3} & & & \\
& & & \ddots & \ddots & & \\
& & & & 0 & R_{M-2,M-1} & \\
& & & & & 0 & R_{M-1,M} \\
& & & & & & 0
\end{pmatrix},
$$
$$\Psi_D = \mathrm{diag}(\Psi_0, \Psi_1, \Psi_2, \Psi_3, \ldots, \Psi_{M-1}, \Psi_M)$$
and
$$
G_L = \begin{pmatrix}
0 & & & & & \\
G_{1,0} & 0 & & & & \\
G_{2,0} & G_{2,1} & 0 & & & \\
G_{3,0} & & G_{3,2} & 0 & & \\
G_{4,0} & & & G_{4,3} & 0 & \\
\vdots & & & & \ddots & \ddots \\
G_{M,0} & & & & & G_{M,M-1} & 0
\end{pmatrix}.
$$
Consider another interesting Markov chain whose transition probability matrix is given by
$$
P = \begin{pmatrix}
P_{0,0} & P_{0,1} & & & & P_{0,M} \\
P_{1,0} & P_{1,1} & P_{1,2} & & & P_{1,M} \\
& P_{2,1} & P_{2,2} & P_{2,3} & & P_{2,M} \\
& & \ddots & \ddots & \ddots & \vdots \\
& & & P_{M-1,M-2} & P_{M-1,M-1} & P_{M-1,M} \\
& P_{M,1} & P_{M,2} & \cdots & P_{M,M-1} & P_{M,M}
\end{pmatrix};
$$
for this chain, the LU-type RG-factorization can be simplified more effectively by using the LU-type censoring computation.
2.6.3 The Stationary Probability Vector

If the block-structured Markov chain P with finitely-many levels given in Eq. (2.52) is irreducible and stochastic, then it must be positive recurrent. Let $\pi = (\pi_0, \pi_1, \pi_2, \pi_3, \ldots, \pi_M)$ be the stationary probability vector of the Markov chain. Then using the UL-type RG-factorization we obtain
$$
\begin{cases}
\pi_0 = \tau x_0, \\
\pi_k = \displaystyle\sum_{i=0}^{k-1} \pi_i R_{i,k}, & 1 \leq k \leq M,
\end{cases} \tag{2.53}
$$
where $x_0$ is the stationary probability vector of the censored Markov chain $\Psi_0$ to level 0 and the scalar $\tau$ is uniquely determined by $\sum_{k=0}^{M} \pi_k e = 1$.

In what follows we consider two special cases: Markov chains of GI/M/1 type and Markov chains of M/G/1 type, and derive the matrix-product solution for the two cases.

(1) For a Markov chain of GI/M/1 type with finitely-many levels, using the UL-type RG-factorization, the stationary probability vector is given as the matrix-product solution
$$
\begin{cases}
\pi_0 = \tau x_0, \\
\pi_k = \pi_0 \displaystyle\prod_{i=1}^{k} R_{i-1,i}, & 1 \leq k \leq M,
\end{cases}
$$
where $x_0$ is the stationary probability vector of the censored Markov chain $\Psi_0$ to level 0 and the scalar $\tau$ is uniquely determined by $\sum_{k=0}^{M} \pi_k e = 1$.

(2) For a Markov chain of M/G/1 type with finitely-many levels, using the LU-type RG-factorization, the stationary probability vector is given as the matrix-product solution
$$
\begin{cases}
\pi_M = \tau x_M, \\
\pi_k = \pi_M \displaystyle\prod_{i=M-1}^{k} R_{i+1,i}, & 0 \leq k \leq M-1,
\end{cases}
$$
where $x_M$ is the stationary probability vector of the censored Markov chain $\Phi_M$ to level M and the scalar $\tau$ is uniquely determined by $\sum_{k=0}^{M} \pi_k e = 1$.
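Eq. (2.53) can be verified numerically: censor from the top level down to collect the R-measures and $\Psi_0$, take $x_0$ as the stationary vector of the censored chain $\Psi_0$, build $\pi$ by the recursion, normalize, and compare with the stationary vector computed directly from $\pi(I-P) = 0$. A sketch assuming NumPy (the random stochastic matrix is only a test case):

```python
import numpy as np

rng = np.random.default_rng(3)
m, M = 2, 3
N = m * (M + 1)
P = rng.uniform(0, 1, (N, N))
P /= P.sum(axis=1, keepdims=True)           # irreducible stochastic

def blk(A, i, j):
    return A[i*m:(i+1)*m, j*m:(j+1)*m]

# UL-type censoring from level M down to 1: collect R_{i,n}
Pn = P.copy()
Rm = {}
for n in range(M, 0, -1):
    inv = np.linalg.inv(np.eye(m) - blk(Pn, n, n))
    for i in range(n):
        Rm[(i, n)] = blk(Pn, i, n) @ inv
    nxt = Pn.copy()
    for i in range(n):
        for j in range(n):
            nxt[i*m:(i+1)*m, j*m:(j+1)*m] += blk(Pn, i, n) @ inv @ blk(Pn, n, j)
    Pn = nxt
Psi0 = blk(Pn, 0, 0)                        # censored (stochastic) chain to level 0

# x0: stationary vector of Psi0, solving x(I - Psi0) = 0 with x e = 1
A_ = np.vstack([(np.eye(m) - Psi0).T, np.ones(m)])
x0 = np.linalg.lstsq(A_, np.r_[np.zeros(m), 1.0], rcond=None)[0]

# Eq. (2.53): pi_0 = tau*x0, pi_k = sum_{i<k} pi_i R_{i,k}, then normalize
pi = [x0]
for k in range(1, M + 1):
    pi.append(sum(pi[i] @ Rm[(i, k)] for i in range(k)))
pi = np.concatenate(pi)
pi /= pi.sum()

pi_direct = np.linalg.lstsq(np.vstack([(np.eye(N) - P).T, np.ones(N)]),
                            np.r_[np.zeros(N), 1.0], rcond=None)[0]
print(np.allclose(pi, pi_direct))   # True
```

The normalization at the end plays the role of the scalar $\tau$ in Eq. (2.53).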
2.7 Continuous-Time Markov Chains

In this section, the censoring technique is similarly applied to deal with any irreducible continuous-time block-structured Markov chain. The UL- and LU-type RG-factorizations are derived by means of an approach similar to that in Sections 2.2 and 2.3, respectively. Based on this, we also provide a simple analysis for the stationary probability vector.

Consider an irreducible continuous-time block-structured Markov chain $\{X_t, t \geq 0\}$ whose infinitesimal generator is given by
$$
Q = \begin{pmatrix}
D_{0,0} & D_{0,1} & D_{0,2} & \cdots \\
D_{1,0} & D_{1,1} & D_{1,2} & \cdots \\
D_{2,0} & D_{2,1} & D_{2,2} & \cdots \\
\vdots & \vdots & \vdots &
\end{pmatrix}, \tag{2.54}
$$
where $D_{k,k}$, $k \geq 0$, are $m_k \times m_k$ matrices and the sizes of the other block-entries can be determined accordingly. The state space $\Omega$ of this Markov chain is partitioned as $\Omega = \bigcup_{i=0}^{\infty} L_i$, where $L_i = \{(i,j): j = 1, 2, \ldots, m_i\}$.

For the continuous-time Markov chain Q, we first describe the censored Markov chain $Q^E$ with the censored set $E \subset \Omega$, which is different from that for the discrete-time case analyzed in Sections 2.2 and 2.3. Suppose the successive visits of $\{X_t\}$ to the subset E take place in the time intervals $[t_0, t_1], [t_2, t_3], [t_4, t_5], \ldots$. Note that $t_{2k}$ is the entering time of the Markov chain to the subset E, while $t_{2k+1}$ is the leaving time of the Markov chain from the subset E for $k \geq 1$. For $n \geq 1$, we write
$$
t' = \begin{cases}
t, & \text{for } t \in [t_0, t_1), \\
t_{2n-1} - \displaystyle\sum_{i=1}^{n-1}(t_{2i} - t_{2i-1}), & \text{for } t \in [t_{2n-1}, t_{2n}), \\
t - \displaystyle\sum_{i=1}^{n}(t_{2i} - t_{2i-1}), & \text{for } t \in [t_{2n}, t_{2n+1}).
\end{cases}
$$
Let $X_{t'}^E = X_t$ for $t \geq 0$. Then $\{X_t^E\}$ is the censored Markov chain with censoring set E. To study the continuous-time censored Markov chains, we partition the matrix Q as
$$Q = \begin{pmatrix} T & U \\ V & W \end{pmatrix}$$
according to the sets E and $E^c$. Then the infinitesimal generators of the censored chains $\{X_t^E\}$ and $\{X_t^{E^c}\}$ are respectively given by
$$Q^E = T + U(-W)^{-1}_{\min}V$$
and
$$Q^{E^c} = W + V(-T)^{-1}U.$$
Since the Markov chain defined by Eq. (2.54) is irreducible, the two truncated chains with infinitesimal generators T and W are both transient, and hence the matrices T and W are both invertible. Note that the inverse of the matrix T is ordinary, while the invertibility of the matrix W is understood in an infinite-dimensional sense. Although the matrix W of infinite size may have multiple inverses, in general we are interested in the maximal non-positive inverse $W^{-1}_{\max}$ of W, i.e., $W^{-1} \leq W^{-1}_{\max} \leq 0$ for every non-positive inverse $W^{-1}$ of W. Of course, $0 \leq (-W)^{-1}_{\min} \leq (-W)^{-1}$ for every nonnegative inverse $(-W)^{-1}$ of $-W$. Here, we omit the detailed discussion for constructing the R-, U- and G-measures, including the censoring invariance, and directly provide the following RG-factorizations for the irreducible continuous-time block-structured Markov chain.
2.7.1 The UL-Type RG-Factorization

Let
$$Q^{[\leq n]} = T + U(-W)^{-1}_{\min}V. \tag{2.55}$$
The block-entry expression of the matrix $Q^{[\leq n]}$ is written as
$$
Q^{[\leq n]} = \begin{pmatrix}
f_{0,0}^{(n)} & f_{0,1}^{(n)} & \cdots & f_{0,n}^{(n)} \\
f_{1,0}^{(n)} & f_{1,1}^{(n)} & \cdots & f_{1,n}^{(n)} \\
\vdots & \vdots & & \vdots \\
f_{n,0}^{(n)} & f_{n,1}^{(n)} & \cdots & f_{n,n}^{(n)}
\end{pmatrix}.
$$
Lemma 2.12 For $i, j \leq n-1$, we have
$$f_{i,j}^{(n-1)} = D_{i,j} + \sum_{k=n}^{\infty} f_{i,k}^{(k)}\left[-f_{k,k}^{(k)}\right]^{-1} f_{k,j}^{(k)}. \tag{2.56}$$
Proof Since
$$
Q^{[\leq (n-1)]} = \begin{pmatrix}
f_{0,0}^{(n)} & f_{0,1}^{(n)} & \cdots & f_{0,n-1}^{(n)} \\
f_{1,0}^{(n)} & f_{1,1}^{(n)} & \cdots & f_{1,n-1}^{(n)} \\
\vdots & \vdots & & \vdots \\
f_{n-1,0}^{(n)} & f_{n-1,1}^{(n)} & \cdots & f_{n-1,n-1}^{(n)}
\end{pmatrix}
+ \begin{pmatrix} f_{0,n}^{(n)} \\ f_{1,n}^{(n)} \\ \vdots \\ f_{n-1,n}^{(n)} \end{pmatrix}
\left[-f_{n,n}^{(n)}\right]^{-1}
\big(f_{n,0}^{(n)}, f_{n,1}^{(n)}, \ldots, f_{n,n-1}^{(n)}\big),
$$
we obtain
$$
f_{i,j}^{(n-1)} = f_{i,j}^{(n)} + f_{i,n}^{(n)}\left[-f_{n,n}^{(n)}\right]^{-1} f_{n,j}^{(n)}
= f_{i,j}^{(n+1)} + f_{i,n+1}^{(n+1)}\left[-f_{n+1,n+1}^{(n+1)}\right]^{-1} f_{n+1,j}^{(n+1)} + f_{i,n}^{(n)}\left[-f_{n,n}^{(n)}\right]^{-1} f_{n,j}^{(n)}
$$
$$
= \cdots = f_{i,j}^{(\infty)} + \sum_{k=n}^{\infty} f_{i,k}^{(k)}\left[-f_{k,k}^{(k)}\right]^{-1} f_{k,j}^{(k)}
= D_{i,j} + \sum_{k=n}^{\infty} f_{i,k}^{(k)}\left[-f_{k,k}^{(k)}\right]^{-1} f_{k,j}^{(k)},
$$
noting that $f_{i,j}^{(\infty)} = D_{i,j}$ for all $i, j \geq 0$. This completes the proof.
We define the U-measure as
$$U_n = f_{n,n}^{(n)}, \qquad n \geq 0,$$
the R-measure as
$$R_{i,j} = f_{i,j}^{(j)}\left[-f_{j,j}^{(j)}\right]^{-1}, \qquad 0 \leq i < j, \tag{2.57}$$
and the G-measure as
$$G_{i,j} = \left[-f_{i,i}^{(i)}\right]^{-1} f_{i,j}^{(i)}, \qquad 0 \leq j < i.$$
It is obvious that
$$R_{i,j} = f_{i,j}^{(j)}(-U_j)^{-1}, \qquad 0 \leq i < j, \tag{2.58}$$
and
$$G_{i,j} = (-U_i)^{-1} f_{i,j}^{(i)}, \qquad 0 \leq j < i. \tag{2.59}$$
The following theorem provides the Wiener-Hopf equations for the UL-type R-, U- and G-measures.

Theorem 2.13 The R-, U- and G-measures defined above satisfy the following Wiener-Hopf equations:
$$R_{i,j}(-U_j) = D_{i,j} + \sum_{k=j+1}^{\infty} R_{i,k}(-U_k)G_{k,j}, \qquad 0 \leq i < j, \tag{2.60}$$
$$(-U_i)G_{i,j} = D_{i,j} + \sum_{k=i+1}^{\infty} R_{i,k}(-U_k)G_{k,j}, \qquad 0 \leq j < i, \tag{2.61}$$
and
$$U_n = D_{n,n} + \sum_{k=n+1}^{\infty} R_{n,k}(-U_k)G_{k,n}, \qquad n \geq 0. \tag{2.62}$$
Proof We only prove Eq. (2.60), while Eq. (2.61) and Eq. (2.62) can be proved similarly. It follows from Eq. (2.58) that
$$R_{i,j}(-U_j) = f_{i,j}^{(j)}. \tag{2.63}$$
By Lemma 2.12, we have
$$f_{i,j}^{(j)} = D_{i,j} + \sum_{k=j+1}^{\infty} f_{i,k}^{(k)}\left[-f_{k,k}^{(k)}\right]^{-1} f_{k,j}^{(k)} = D_{i,j} + \sum_{k=j+1}^{\infty} R_{i,k}(-U_k)G_{k,j}, \tag{2.64}$$
which, together with Eq. (2.63), leads to the stated result.

By the Wiener-Hopf equations Eq. (2.60), Eq. (2.61) and Eq. (2.62), we construct the UL-type RG-factorization in the following theorem. The proof is clear and is omitted here.
Theorem 2.14 The infinitesimal generator Q defined in Eq. (2.54) can be factorized as follows:
$$Q = (I - R_U)U_D(I - G_L),$$
where
$$
R_U = \begin{pmatrix}
0 & R_{0,1} & R_{0,2} & R_{0,3} & \cdots \\
& 0 & R_{1,2} & R_{1,3} & \cdots \\
& & 0 & R_{2,3} & \cdots \\
& & & 0 & \cdots \\
& & & & \ddots
\end{pmatrix},
\qquad
U_D = \mathrm{diag}(U_0, U_1, U_2, U_3, \ldots)
$$
and
$$
G_L = \begin{pmatrix}
0 & & & & \\
G_{1,0} & 0 & & & \\
G_{2,0} & G_{2,1} & 0 & & \\
G_{3,0} & G_{3,1} & G_{3,2} & 0 & \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}.
$$
We consider an irreducible continuous-time block-structured Markov chain with finitely-many levels whose infinitesimal generator is given by
$$
Q = \begin{pmatrix}
D_{0,0} & D_{0,1} & \cdots & D_{0,M} \\
D_{1,0} & D_{1,1} & \cdots & D_{1,M} \\
\vdots & \vdots & & \vdots \\
D_{M,0} & D_{M,1} & \cdots & D_{M,M}
\end{pmatrix}.
$$
For $0 \leq i, j \leq k$ and $0 \leq k \leq M$, it is clear from Section 2.4 that
$$D_{i,j}^{[\leq k]} = D_{i,j} + \sum_{n=k+1}^{M} D_{i,n}^{[\leq n]}\left\{-D_{n,n}^{[\leq n]}\right\}^{-1} D_{n,j}^{[\leq n]}.$$
Note that $D_{i,j}^{[\leq M]} = D_{i,j}$ and $D_{i,j}^{[\leq 0]} = D_{i,j}^{[0]}$. Let
$$U_n = D_{n,n}^{[\leq n]}, \qquad 0 \leq n \leq M,$$
$$R_{i,j} = D_{i,j}^{[\leq j]}(-U_j)^{-1}, \qquad 0 \leq i < j \leq M,$$
and
$$G_{i,j} = (-U_i)^{-1} D_{i,j}^{[\leq i]}, \qquad 0 \leq j < i \leq M.$$
Then the UL-type RG-factorization is given by
$$Q = (I - R_U)U_D(I - G_L),$$
where
$$
R_U = \begin{pmatrix}
0 & R_{0,1} & R_{0,2} & R_{0,3} & \cdots & R_{0,M-1} & R_{0,M} \\
& 0 & R_{1,2} & R_{1,3} & \cdots & R_{1,M-1} & R_{1,M} \\
& & 0 & R_{2,3} & \cdots & R_{2,M-1} & R_{2,M} \\
& & & \ddots & \ddots & \vdots & \vdots \\
& & & & 0 & R_{M-2,M-1} & R_{M-2,M} \\
& & & & & 0 & R_{M-1,M} \\
& & & & & & 0
\end{pmatrix},
$$
$$U_D = \mathrm{diag}(U_0, U_1, U_2, U_3, \ldots, U_{M-1}, U_M)$$
and
$$
G_L = \begin{pmatrix}
0 & & & & & & \\
G_{1,0} & 0 & & & & & \\
G_{2,0} & G_{2,1} & 0 & & & & \\
G_{3,0} & G_{3,1} & G_{3,2} & 0 & & & \\
G_{4,0} & G_{4,1} & G_{4,2} & G_{4,3} & 0 & & \\
\vdots & \vdots & \vdots & \vdots & & \ddots & \\
G_{M,0} & G_{M,1} & G_{M,2} & G_{M,3} & \cdots & G_{M,M-1} & 0
\end{pmatrix}.
$$
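As in the discrete-time case, the finite continuous-time factorization is a block UL decomposition driven by censoring: eliminate level M, then $M-1$, and so on, each time taking the Schur complement of the generator. A sketch under illustrative assumptions (NumPy; the random conservative generator is only a test case, and $-U_0$ is never inverted since $U_0$ is itself a conservative generator block):

```python
import numpy as np

rng = np.random.default_rng(4)
m, M = 2, 3
N = m * (M + 1)
Q = rng.uniform(0, 1, (N, N))
np.fill_diagonal(Q, 0.0)
Q -= np.diag(Q.sum(axis=1))                  # conservative generator: Q e = 0

def blk(A, i, j):
    return A[i*m:(i+1)*m, j*m:(j+1)*m]

RU, GL, UD = np.zeros((N, N)), np.zeros((N, N)), np.zeros((N, N))
Qn = Q.copy()                                # Q^{[<=M]} = Q
for n in range(M, 0, -1):
    Un = blk(Qn, n, n)                       # U_n = D^{[<=n]}_{n,n}
    UD[n*m:(n+1)*m, n*m:(n+1)*m] = Un
    inv = np.linalg.inv(-Un)                 # (-U_n)^{-1} is nonnegative
    for i in range(n):
        RU[i*m:(i+1)*m, n*m:(n+1)*m] = blk(Qn, i, n) @ inv   # R_{i,n}
        GL[n*m:(n+1)*m, i*m:(i+1)*m] = inv @ blk(Qn, n, i)   # G_{n,i}
    nxt = Qn.copy()
    for i in range(n):
        for j in range(n):
            nxt[i*m:(i+1)*m, j*m:(j+1)*m] += blk(Qn, i, n) @ inv @ blk(Qn, n, j)
    Qn = nxt                                 # censored generator on levels < n
UD[:m, :m] = blk(Qn, 0, 0)                   # U_0: a conservative generator block

I = np.eye(N)
print(np.allclose(Q, (I - RU) @ UD @ (I - GL)))  # True
```

For $1 \leq n \leq M$ each $-U_n$ is a nonsingular M-matrix, because the censored generator still has strictly negative row sums in its diagonal block; only $-U_0$ is singular.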
2.7.2 The LU-Type RG-Factorization

Let
$$Q^{[\geq n]} = W + V(-T)^{-1}U. \tag{2.65}$$
The block-entry expression of the matrix $Q^{[\geq n]}$ is written as
$$
Q^{[\geq n]} = \begin{pmatrix}
h_{n,n}^{(n)} & h_{n,n+1}^{(n)} & h_{n,n+2}^{(n)} & \cdots \\
h_{n+1,n}^{(n)} & h_{n+1,n+1}^{(n)} & h_{n+1,n+2}^{(n)} & \cdots \\
h_{n+2,n}^{(n)} & h_{n+2,n+1}^{(n)} & h_{n+2,n+2}^{(n)} & \cdots \\
\vdots & \vdots & \vdots &
\end{pmatrix}.
$$
Since
$$
Q^{[\geq (n+1)]} = \begin{pmatrix}
h_{n+1,n+1}^{(n)} & h_{n+1,n+2}^{(n)} & h_{n+1,n+3}^{(n)} & \cdots \\
h_{n+2,n+1}^{(n)} & h_{n+2,n+2}^{(n)} & h_{n+2,n+3}^{(n)} & \cdots \\
h_{n+3,n+1}^{(n)} & h_{n+3,n+2}^{(n)} & h_{n+3,n+3}^{(n)} & \cdots \\
\vdots & \vdots & \vdots &
\end{pmatrix}
+ \begin{pmatrix} h_{n+1,n}^{(n)} \\ h_{n+2,n}^{(n)} \\ h_{n+3,n}^{(n)} \\ \vdots \end{pmatrix}
\left[-h_{n,n}^{(n)}\right]^{-1}
\big(h_{n,n+1}^{(n)}, h_{n,n+2}^{(n)}, h_{n,n+3}^{(n)}, \ldots\big),
$$
we obtain
$$
h_{i,j}^{(n+1)} = h_{i,j}^{(n)} + h_{i,n}^{(n)}\left[-h_{n,n}^{(n)}\right]^{-1} h_{n,j}^{(n)}
= h_{i,j}^{(n-1)} + h_{i,n-1}^{(n-1)}\left[-h_{n-1,n-1}^{(n-1)}\right]^{-1} h_{n-1,j}^{(n-1)} + h_{i,n}^{(n)}\left[-h_{n,n}^{(n)}\right]^{-1} h_{n,j}^{(n)}
$$
$$
= \cdots = h_{i,j}^{(0)} + \sum_{k=0}^{n} h_{i,k}^{(k)}\left[-h_{k,k}^{(k)}\right]^{-1} h_{k,j}^{(k)},
$$
noting that $h_{i,j}^{(0)} = D_{i,j}$ for all $i, j \geq 0$. Therefore, for $i, j \geq n+1$ we have
$$h_{i,j}^{(n+1)} = D_{i,j} + \sum_{k=0}^{n} h_{i,k}^{(k)}\left[-h_{k,k}^{(k)}\right]^{-1} h_{k,j}^{(k)}. \tag{2.66}$$
We define the U-measure as
$$U_n = h_{n,n}^{(n)}, \qquad n \geq 0, \tag{2.67}$$
the R-measure as
$$R_{i,j} = h_{i,j}^{(j)}\left[-h_{j,j}^{(j)}\right]^{-1}, \qquad 0 \leq j < i,$$
and the G-measure as
$$G_{i,j} = \left[-h_{i,i}^{(i)}\right]^{-1} h_{i,j}^{(i)}, \qquad 0 \leq i < j.$$
It is obvious that
$$R_{i,j} = h_{i,j}^{(j)}(-U_j)^{-1}, \qquad 0 \leq j < i, \tag{2.68}$$
and
$$G_{i,j} = (-U_i)^{-1} h_{i,j}^{(i)}, \qquad 0 \leq i < j. \tag{2.69}$$
The following theorem provides the Wiener-Hopf equations for the LU-type R-, U- and G-measures.

Theorem 2.15 The R-, U- and G-measures defined above satisfy the following Wiener-Hopf equations:
$$R_{i,j}(-U_j) = D_{i,j} + \sum_{k=0}^{j-1} R_{i,k}(-U_k)G_{k,j}, \qquad 0 \leq j < i, \tag{2.70}$$
$$(-U_i)G_{i,j} = D_{i,j} + \sum_{k=0}^{i-1} R_{i,k}(-U_k)G_{k,j}, \qquad 0 \leq i < j, \tag{2.71}$$
and
$$U_n = D_{n,n} + \sum_{k=0}^{n-1} R_{n,k}(-U_k)G_{k,n}, \qquad n \geq 0. \tag{2.72}$$
Proof We only prove Eq. (2.70), while Eq. (2.71) and Eq. (2.72) can be proved similarly. It follows from Eq. (2.68) that
$$R_{i,j}(-U_j) = h_{i,j}^{(j)}. \tag{2.73}$$
By Eq. (2.66), we obtain
$$h_{i,j}^{(j)} = D_{i,j} + \sum_{k=0}^{j-1} h_{i,k}^{(k)}\left[-h_{k,k}^{(k)}\right]^{-1} h_{k,j}^{(k)}. \tag{2.74}$$
From Eq. (2.68), Eq. (2.69) and Eq. (2.74) we obtain
$$h_{i,j}^{(j)} = D_{i,j} + \sum_{k=0}^{j-1} R_{i,k}(-U_k)G_{k,j},$$
which, together with Eq. (2.73), leads to the stated result.

By the Wiener-Hopf equations Eq. (2.70), Eq. (2.71) and Eq. (2.72), the following theorem constructs the LU-type RG-factorization. The proof is obvious and is omitted here.

Theorem 2.16 The infinitesimal generator Q defined in Eq. (2.54) can be factorized as follows:
$$Q = (I - R_L)U_D(I - G_U), \tag{2.75}$$
where
$$
R_L = \begin{pmatrix}
0 & & & & \\
R_{1,0} & 0 & & & \\
R_{2,0} & R_{2,1} & 0 & & \\
R_{3,0} & R_{3,1} & R_{3,2} & 0 & \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix},
\qquad
U_D = \mathrm{diag}(U_0, U_1, U_2, U_3, \ldots)
$$
and
$$
G_U = \begin{pmatrix}
0 & G_{0,1} & G_{0,2} & G_{0,3} & \cdots \\
& 0 & G_{1,2} & G_{1,3} & \cdots \\
& & 0 & G_{2,3} & \cdots \\
& & & 0 & \cdots \\
& & & & \ddots
\end{pmatrix}.
$$
We consider an irreducible continuous-time block-structured Markov chain with finitely-many levels whose transition probability matrix is given by
Q
D0,1 ! D0, M · ¸ D1,1 ! D1, M ¸ . # # ¸ ¸ DM ,1 ! DM , M ¸¹
§ D0,0 ¨ ¨ D1,0 ¨ # ¨¨ D © M ,0
For kİi, jİM and 0İkİM, it is clear from Section 2.3 that k
1
Di , j ¦ Di[,ın n ] ^ Dn[ı,nn ] ` Dn[ı, jn ] .
Di[,ıj k 1]
n 0
[ıM ] i, j
Note that D
[M ] i, j
D
[ı0] i, j
and D
Di , j .
Let Dn[ı,nn ] ,
Un [ı j ] i, j
0İnİ M, 1
Ri , j
D
( U j ) ,
0İ j i İ M,
Gi , j
( Ui )1 Di[,ıj i ] ,
0İi j İ M.
and
Then the UL-type RG-factorization is given by Q 122
( I RL ) UD ( I GU ),
2
Block-Structured Markov Chains
where
RL
§ 0 · ¨R ¸ 0 ¨ 1,0 ¸ ¨ R2,0 R2,1 ¸ 0 ¨ ¸ 0 ¨ R3,0 R3,1 R3,2 ¸, ¨R ¸ R4,1 R4,2 R4,3 0 ¨ 4,0 ¸ # # # % % ¨ # ¸ ¨R ¸ ! R R R R 0 M ,1 M ,2 M ,3 M , M 1 © M ,0 ¹ U D (U0 , U1 , U2 , U3 , ! , UM 1 , UM )
and
GU
§ 0 G0,1 G0,2 G0,3 ¨ 0 G1,2 G1,3 ¨ ¨ 0 G2,3 ¨ % ¨ ¨ ¨ ¨ ¨ ©
!
G0, M 1
!
G1, M 1
!
G2, M 1
% # 0 GM 2, M 1 0
G0, M · ¸ G1, M ¸ G2, M ¸ ¸ # ¸. GM 2, M ¸ ¸ GM 1, M ¸ ¸ 0 ¹
2.7.3 The Stationary Probability Vector Let ʌ (ʌ0 , ʌ1 , ʌ 2 , ...) be the stationary probability vector of the continuous-time Markov chain Q. Then ʌ Q 0 and ʌ e 1. Thus we have ʌQ We write x
ʌ ( I RU ) U D ( I GL ).
( x0 , x1 , x2 , ...) and x
ʌ ( I RU ).
Then xU D ( I GL )
0.
Let (W x0 , 0, 0, ...) be a non-zero nonnegative solution to the equation xU D ( I GL ) 0, where x0 is the stationary probability vector of the censored Markov chain U0 to level 0. Then ʌ ( I RU ) (W x0 , 0, 0, ...), where W is a constant. Therefore, the stationary probability vector of the continuous-time block-structured Markov chain is given by 123
Constructive Computation in Stochastic Models with Applications
ʌ 0 ° ® °ʌ k ¯
W x0 , k 1
¦ʌ R i
i,k
,
k ı 1,
i 0
f
where the scalar W is uniquely determined by
¦ʌ e k
1.
k 0
For a Markov chain Q of GI/M/1 type, the stationary probability vector is given by ʌ 0 ° ® °ʌ k ¯
W x0 , k
ʌ0 Ri 1,i ,
k ı 1,
i 1
where x0 is the stationary probability vector of the censored Markov chain U0 to M
level 0 and the scalar W is determined by
¦ʌ e
1 uniquely.
k
k 0
For a Markov chain Q of GI/M/1 type with finitely-many levels, using the UL-type RG-factorization, the stationary probability vector is given by ʌ 0 ° ® °ʌ k ¯
W x0 , k
ʌ0 Ri 1,i , 1 İ k İ M , i 1
where x0 is the stationary probability vector of the censored Markov chain U0 to M
level 0 and the scalar W is determined by
¦ʌ e
1 uniquely.
k
k 0
For a Markov chain of M/G/1 type with finitely-many levels, the stationary probability vector of is given by ʌ M W xM , ° k ® °ʌ k ʌ M Ri 1,i , i M 1 ¯
k ı 1,
where xM is the stationary probability vector of the censored Markov chain UM to M
level M and the scalar W is determined by
¦ʌ e k
1 uniquely.
k 0
2.8 Notes in the Literature The censored Markov chain, also called watched Markov chain, was first considered by Lévy [15 17]. Since then, the censored Markov chains have been very useful 124
2
Block-Structured Markov Chains
in the study of Markov chains. Kemeny, Snell and Knapp [11] applied the censoring technique to show that each recurrent Markov chain has a positive regular measure unique to multiplication by a scalar. Freedman [4] used the censoring technique to approximate countable Markov chains for the limiting behavior and also for more general issues. The censoring technique has been successfully applied to block-structured Markov chains and Markov renewal processes. Examples include Grassmann and Heyman [5], Latouche [13], Heyman [8], Zhao and Liu [29], Zhao, Li and Braun [31, 32], Zhao, Li and Alfa [30], Latouche and Ramaswami [14], Zhao [28], Dudin and Klimenok [2], Li and Zhao [21 25], Li and Cao [18], Li and Liu [19], Klimenok and Dudin [12] and Dudin, Kim and Klimenok [3]. It is well known now how significant the matrices R and G are in the study of Markov chains of GI/M/1 type and Markov chains of M/G/1 type, respectively. Readers may refer to Neuts [26, 27], Latouche and Ramaswami [14] and Bini, Latouche and Meini [1] for more details. Grassmann and Heyman [5] first extended the two matrices R and G to the R- and G-measures {Rk} and {Gk} for Markov chains of GI/G/1 type in which Rk Rn k , n and Gk Gn , n k for 1 İ k n. The A-measure was first introduced in Karlin and Taylor [9, 10]. After then, Zhao, Li and Braun [31, 32] and Zhao, Li and Alfa [30] provided a detailed analysis on the A- and B-measures. In the study of Markov chains and stochastic models, we always encounter some systems of linear equations which may be either finitely or infinitely dimensional. Gaussian elimination is very useful in solving the systems of linear equations. Grassmann and Heyman [5] provided a detailed interpretation on Gaussian elimination by means of the censored technique, and they used Gaussian elimination to derive the Wiener-Hopf equations, which is a key for analyzing Markov chains of GI/G/1 type. 
Li and Zhao [21, 23] and Li and Liu [19] indicated that the Wiener-Hopf equations can lead to the RG-factorizations for any irreducible Markov chain. As indicated in Li and Cao [18], the RG-factorizations can be applied in dealing with the matrix equations x ( I P ) b or ( I P ) x b for discrete-time Markov chains and xQ b or Qx b for continuous-time Markov chains. Therefore, the RG-factorizations have established a new theoretic and algorithmic framework in the study of Markov chains and stochastic models for solving systems of linear equations. An important applied example is Li and Zhao [20, 22] for discussing quasi-stationary distributions of Markov chains of GI/M/1 type and Markov chains of M/G/1 type, respectively; while more general examples will be discussed in Chapter 9 of this book. Grassmann [6] established a UL-type two-matrix factorization for the matrix I P, where P is an irreducible transition probability matrix of finite size. Grassmann and Heyman [7] gave the same factorization for an irreducible Markov chain of GI/G/1 type, while Heyman [8] extended the result to an irreducible and positive recurrent Markov chain with infinitely-many states. Based on the Wiener-Hopf equations for the transition probability matrix of GI/G/1 type, Zhao [28] obtained 125
Constructive Computation in Stochastic Models with Applications
a UL-type RG-factorization for the matrix I P for the first time. For a leveldependent QBD process, Li and Cao [18] first provided two types: UL- and LU-types of RG-factorizations. Li and Zhao [21, 23] generalized the UL-type RG-factorization to an irreducible block-structured Markov renewal process with infinitely-many states, which can immediately lead to the UL-type RG-factorization for any irreducible block-structured Markov chain. Li and Liu [19] constructed the LU-type RG-factorization for any irreducible block-structured Markov chain, which is parallel to but different from that of Li and Zhao [21]. It is worthwhile to note that the UL-type RG-factorization is very useful for computing the stationary probability vector and more generally, analyzing the stationary performance measures; while the LU-type RG-factorization is a key for calculating the transient performance measures such as the first passage time and the sojourn time, as illustrated in Chapters 6 to 11 of this book. In this chapter, we mainly refer to Zhao [28], Zhao, Li and Braun [31, 32], Li and Zhao [21, 23], Li and Cao [18] and Li and Liu [19]. At the same time, we also add some new results without publication for a more systemical organization of this chapter.
Problems 2.1 Provide a unified definition for the censored Markov chains of an irreducible Markov chain which may be either discrete-time or continuous-time. 2.2 For the Markov chain P and two state subsets E1 and E2 with E1 E2, prove that ( P E2 ) E1 P E1 . 2.3 Let ȍ be the state space of the Markov chain P. Prove that P is irreducible if and only if P E is irreducible for each subset E ȍ. f
2.4 Note that Ri , j
¦R
i, j
f
(k ) and Gi , j
k 1
¦G
i, j
(k ) given in Section 2.2, please
k 1
explain the reason why the two matrices Ri , j and Gi , j have different probabilistic meaning. 2.5 Prove the censoring invariant property: (1) Ri[,İj n ] Ri , j for 0 İ i j İ n. (2) Ri[,ıj n ]
Ri , j for n 1 İ j i.
(3) Ai[,İj n ]
Ai , j for 0 İ i, j İ n.
(4) Ai[,ıj n ]
Ai , j for i, j ı n 1.
2.6 For a Markov chain of M/G/1 type, prove that for kı1 and jı2,
l (k ) W j ,1 126
(k )
l . Gk j Gk j 1 ! Gk 1W 1,1
2
Block-Structured Markov Chains
2.7 Let § 0 C1 · ¨ ¸ 0 C2 ¨ ¸ ¸ A ¨ 0 C3 ¨ ¸ % 0 ¨ ¸ ¨ ¸ % © ¹ and
B
§0 · ¨C 0 ¸ ¨ 1 ¸ ¨ ¸. C2 0 ¨ ¸ C3 0 ¨ ¸ ¨ % % ¸¹ ©
Please compute the inverses of the two matrices I A and I B. 2.8 If an irreducible discrete-time Markov chain whose transition probability matrix is given by
P
§ P0,0 P0,1 · ¨ ¸ P P P ¨ 1,0 1,1 1,2 ¸, ¨ P2,0 ¸ P2,2 P2,3 ¨ ¸ % %¹ © #
please derive the UL- and LU-types of RG-factorizations of the matrix I P. 2.9 For an irreducible level-independent Markov chain of GI/M/1 type, derive the UL- and LU-types of RG-factorizations. 2.10 For an irreducible level-independent Markov chain of M/G/1 type, derive the UL- and LU-types of RG-factorizations. 2.11 If an irreducible discrete-time Markov chain whose transition probability matrix is given by
P
§ P0,0 P0,1 P0,2 P0,3 !· ¨ ¸ ¨ P1,0 P1,1 ¸ ¨ P2,0 ¸, P2,2 ¨ ¸ P3,3 ¨ P3,0 ¸ ¨ # ¸ % © ¹
please derive the UL- and LU-types of RG-factorizations of the matrix I P. 127
Constructive Computation in Stochastic Models with Applications
2.12 Let §0 · ¨C 0 ¸ ¨ 1 ¸ A ¨ C2 0 ¸ ¨ ¸ C3 0 ¨¨ ¸ % %¸¹ © and Ci is nonzero matrix of size m for iı1. Please provide a nonzero solution to the matrix equation x( I A) 0. 2.13 For a PH/M/1 queue, apply the UL-type RG-factorization to compute the queue length distribution at time t>0 and the probability distribution of the busy period. 2.14 In Theorem 2.10, please compute
j ( R0,1 , R0,2 , R0,3 , !) I R U
1
f
2.15 Give some concrete examples and indicate useful difference between
¦A
0, k
k 1
f
and
¦B
k ,0
.
k 1
2.16 For a QBD process whose transition probability matrix is given by
P
§ B1 B0 · ¨B A A ¸ 1 0 ¨ 2 ¸, A2 A1 A0 ¨ ¸ ¨ % % % ¸¹ ©
compute Ai, i and Bi, i for iı0.
References 1. Bini D.A., G. Latouche and B. Meini (2005). Numerical Methods for Structured Markov Chains, Oxford University Press 2. Dudin A. and V.I. Klimenok (1999). Multi-dimensional quasi-Toeplitz Markov chains. Journal of Applied Mathematics and Stochastic Analysis 12: 393 415 3. Dudin A., C. Kim and V.I. Klimenok (2008). Markov chains with hybrid repeating rows-upperHessenberg, quasi-Toeplitz structure of the block transition probability matrix. Journal of Applied Probability 45: 211 225 4. Freedman D. (1983) Approximating Countable Markov Chains (Second Edition), Springer: New York 128
2
Block-Structured Markov Chains
5. Grassmann W.K. and D.P. Heyman (1990). Equilibrium distribution of block-structured Markov chains with repeating rows. Journal of Applied Probability 27: 557 576 6. Grassmann W.K. (1993). Means and variances in Markov reward systems. In Linear Algebra, Markov Chains, and Queueing Models, C.D. Meyer and R.J. Plemmons (eds.), Springer-Verlag: New York, 193 204 7. Grassmann W.K. and D.P. Heyman (1993). Computation of steady-state probabilities for infinite-state Markov chains with repeating rows. ORSA Journal on Computing 5: 282 303 8. Heyman D.P. (1995). A decomposition theorem for infinite stochastic matrices. Journal of Applied Probability 32: 893 901 9. Karlin S. and H.W. Taylor (1975). A First Course in Stochastic Processes (Second Edition), Academic Press, INC. 10. Karlin S. and H.W. Taylor (1981). A Second Course in Stochastic Processes, Academic Press, INC. 11. Kemeny J.G., J.L. Snell and A.W. Knapp (1976). Denumerable Markov Chains (Second Edition), Springer-Verlag: NewYork 12. Klimenok V.I. and A. Dudin (2006). Multi-dimensional asymptotically quasi-Toeplitz Markov chains and their application in queueing theory. Queueing Systems 54: 245 259 13. Latouche G. (1992). Algorithms for infinite Markov chains with repeating columns. In Linear algebra, Markov chains, and queueing models, Minneapolis, MN, 231 265; IMA Vol. Math. Appl., 48, Springer: New York, 1993 14. Latouche G. and V. Ramaswami (1999). Introduction to Matrix Analytic Methods in Stochastic Models, ASA-SIAM 15. Lévy P. (1951). Systèmes markoviens et stationaires. Cas dénombrable. Ann. Sci. École Norm. Sup. 68: 327 381 16. Lévy P. (1952). Complément à l’étude des processus de Markoff. Ann. Sci. École Norm. Sup. 69: 203 212 17. Lévy P. (1958). Processus markoviens et stationaires. Cas dénombrable. Ann. Inst. H. Poincaré 18: 7 25 18. Li Q.L. and J. Cao (2004). Two types of RG-factorizations of quasi-birth-and-death processes and their applications to stochastic integral functionals. 
Stochastic Models 20: 299 340 19. Li Q.L. and L.M. Liu (2004). An algorithmic approach on sensitivity analysis of perturbed QBD processes. Queueing Systems 48: 365 397 20. Li Q.L. and Y.Q. Zhao (2002). A constructive method for finding ȕ-invariant measures for transition matrices of M/G/1 type. In Matrix Analytic Methods: Theory and Applications, G. Latouche and P.G. Taylor (eds), World Scientific: New Jersey, 237 263 21. Li Q.L. and Y.Q. Zhao (2002). The RG-factorizations in block-structured Markov renewal processes with applications. Technical Report 381, Laboratory for Research in Statistics and Probability, Carleton University and University of Ottawa, Canada, 1 40 22. Li Q.L. and Y.Q. Zhao (2003). ȕ-invariant measures for transition matrices of GI/M/1 type. Stochastic Models 19: 201 233 23. Li Q.L. and Y.Q. Zhao (2004). The RG-factorization in block-structured Markov renewal processes with applications. In Observation, Theory and Modeling of Atmospheric Variability, X. Zhu (ed), World Scientific, 545 568 129
Constructive Computation in Stochastic Models with Applications
24. Li Q.L. and Y.Q. Zhao (2005). Light-tailed asymptotics of stationary probability vectors of Markov chains of GI/G/1 type. Advances in Applied Probability 37: 1075 1093 25. Li Q.L. and Y.Q. Zhao (2005). Heavy-tailed asymptotics of stationary probability vectors of Markov chains of GI/G/1 type. Advances in Applied Probability 37: 482 509 26. Neuts M.F. (1981). Matrix-Geometric Solutions in Stochastic Models-An Algorithmic Approach, The Johns Hopkins University Press: Baltimore 27. Neuts M.F. (1989). Structured Stochastic Matrices of M/G/1 Type and Their Applications, Marcel Dekker: New York 28. Zhao Y.Q. (2000). Censoring technique in studying block-structured Markov chains. In Advances in Algorithmic Methods for Stochastic Models, G. Latouche and P.G. Taylor (eds), Notable Publications Inc., NJ., 417 433 29. Zhao Y.Q. and D. Liu (1996). The censored Markov chain and the best augmentaion. Journal of Applied Probability 33: 623 629 30. Zhao Y.Q., W. Li and A.S. Alfa (1999). Duality results for block-structured transition matrices. Journal of Applied Probability 36: 1045 1057 31. Zhao Y.Q., W. Li and W.J. Braun (1998). Infinite block-structured transition matrices and their properties. Advances in Applied Probability 30: 365 384 32. Zhao Y.Q., W. Li and W.J. Braun (2003). Censoring, factorizations and spectral analysis for transition matrices with block-repeating entries. Methodology and Computing in Applied Probability 5: 35 58
130
3
Markov Chains of GI/G/1 Type
Quan-Lin Li Department of Industrial Engineering Tsinghua University Beijing 100084, P.R. China E-mail:
[email protected] Abstract In this chapter, we simplify the R-, U- and G-measures and the RG-factorizations for Markov chains of GI/G/1 type. Also, we derive a new RG-factorization for the repeated blocks and the four basic inequalities for the boundary blocks, which are useful in spectral analysis of Markov chains of GI/G/1 type. We analyze the dual Markov chain of any irreducible blockstructured Markov chain, and specifically discuss the dual chain of a Markov chain of GI/G/1 type. Furthermore, we simplify the A- and B-measures for Markov chains of GI/G/1 type, and also express the A- and B-measures by means of the R- and G-measures, respectively. Based on the A- and B-measures, we provide spectral analysis for the R- and G-measures, and provide conditions for the state classification of Markov chains of GI/G/1 type. Keywords stochastic model, block-structured Markov chain, Markov chain of GI/G/1 type, RG-factorization, dual Markov chain, spectral analysis, state classification.
In this chapter, we simplify the R-, U- and G-measures and the RG-factorizations given in Chapter 2, for Markov chains of GI G 1 type. Two important examples: Markov chains of GI M 1 type and Markov chains of M G 1 type (e.g., see Neuts [13, 14]), are given a detailed analysis under the RG-factorization framework. Also, we derive a new RG-factorization for the repeated blocks and the four basic inequalities for the boundary blocks, which are useful in spectral analysis of Markov chains of GI G 1 type. We analyze the dual chain of a block-structured Markov chain, and provide a detailed discussion for the dual chain of a Markov chain of GI G 1 type. Furthermore, we simplify the A- and B-measures for Markov chains of GI G 1 type, and also express the A- and B-measures by means of the R- and G-measures, respectively. Based on the A- and B-measures, we provide spectral analysis for the R- and G-measures, and provide conditions for the state classification of Markov chains of GI G 1 type.
Constructive Computation in Stochastic Models with Applications
This chapter is organized as follows. Section 3.1 simplifies the UL- and LU-types of RG-factorizations, and derives a new RG-factorization for the repeated blocks and the four inequalities for the boundary blocks. Section 3.2 introduces the dual Markov chains, which are used to derive dual measures for the R- and G-measures. Section 3.3 simplifies expressions for the A- and B-measures by means of the Rand G-measures. Section 3.4 gives spectral analysis for the R- and G-measures, which lead to conditions for the state classification. Section 3.5 studies the minimal positive solution to the matrix generating function equations det( I R ( z )) 0 and det( I G ( z )) 0. On a similar line, Section 3.6 provides a simple introduction to continuous-time Markov chains of GI G 1 type, which are necessary for analyzing practical systems in many applied areas. Finally, Section 3.7 provides some notes for the references related to the results of this chapter.
3.1 Markov Chains of GI/G/1 Type This section considers Markov chains of GI G 1 type, simplifies the R- and G-measures and the RG-factorizations given in Chapter 2, and derives a new RG-factorization for the repeated blocks and the four inequalities for the boundary blocks. Consider a Markov chain of GI G 1 type whose transition probability matrix is given by
P
§ D0 ¨ ¨ D1 ¨ D2 ¨ ¨ D3 ¨ # ©
D1 A0 A1 A2 #
D2 A1 A0 A1 #
!· ¸ !¸ !¸ , ¸ !¸ ¸ ¹
D3 A2 A1 A0 #
(3.1)
where the sizes of the matrices D0 , Di , D i for i ı 1 and Aj for f j f are m0 u m0 , m0 u m , m u m0 and m u m , respectively. For the transition probability matrices, comparing Eq. (3.1) with Eq. (2.1), it is easy to see that P0 j D j for j ı 0 , Pi0 D i for i ı 1 and Pi j Aj i for i, j ı 1 . Let Wn be the southeast corner of P begainning from level n. It is clear that ) l (,1) W l (n,1) and W l (1,) W l (1, for all n ı 1. Wn W for all n ı 1. Thus, we write W n Therefore, R0, j Ri , j 132
l (,1), ( D j , D j 1 , D j 2 ,!) W
l ( Aj i , Aj i 1 , Aj i 2 , !) W
(1)
j ı 1,
, 1 İ i j.
3
Markov Chains of GI/G/1 Type
Obviously, the matrices Ri , j for 1 İ i j only depend on the difference j i . We write Ri , j as R j i for all 1 İ i j. Therefore, for k ı 1 , Rk
l ( Ak , Ak 1 , Ak 2 , !) W
(,1)
.
(3.2)
Similarly, Gi ,0 Gi j
T
l (1,) DTi , DT(i 1) , DT(i 2) , ! , W
i ı 1,
T
l (1,) AT(i j ) , AT(i j 1) , AT(i j 2) , ! , 1 İ j i. W
The matrices Gi , j for 1 İ j i only depend on the difference i j. We write Gi , j as Gi j for all 1 İ j i . Therefore, for k ı 1 , Gk
T
l (1,) ATk , AT( k 1) , AT( k 2) , ! . W
(3.3)
The following lemma is a consequence of the repeating blocks in the Markov chain of GI G 1 type. In fact, this corresponds to the censoring invariance given in Theorem 2.3. Lemma 3.1 For n ı 1 , i , j 1 , 2 , 3 , ! , n , Pn[İin, n] j
Pn[İ1(ni, n1)]1 j
Pn[İ2(ni, n2)] 2 j
!.
Proof For n ı 1 , 1 İ i , j İ n , it is easy to see that
Pn[İin, n] j
l AT , AT , ! T, Ai j ( Ai 1 , Ai 2 , !) W ( j 1) ( j 2)
(3.4)
which is independent of n ı 1. Thus Pn[İin, n] j
Pn[İ1ni, n1 ]1 j
Pn[İ2(ni, n2)] 2 j
!.
This completes the proof. Based on the censoring invariance in Lemma 3.1, we define that for 1 İ i, jİn,
)0
Pn[,İn n ] ,
(3.5)
)i
Pn[İin n] ,
(3.6)
) j
Pn[İn n ]j .
(3.7)
It is easy to see that the (r , s )th entry of )i is the transition probability of the censored chain P[İn ] from state (n i , r ) to state (n, s ), while the (r , s )th entry of ) j is the transition probability of the censored chain P[İn ] from state (n, r ) to state ( n j , s). 133
Constructive Computation in Stochastic Models with Applications
The following theorem explicitly expresses the R- and G-measures in terms of the matrices )i for f i f. Theorem 3.1 (1) For i ı 1 , Ri
)i ( I )0 ) 1.
Gj
( I )0 ) ) j .
(3.8)
(2) For j ! 1 , 1
(3.9)
Proof We only prove Eq. (3.8), while Eq. (3.9) can be proved similarly. It follows from Lemmas 3.1 and 2.5 that f
Ri
Ri[İn ]
Pn[İin,n] ¦ ª«¬ Pn[,İn n ] º»¼
Rn[İin,]n
l
) i ( I ) 0 ) 1 .
l 0
This completes the proof. Remark 3.1 Using Lemma 2.5, for the Markov chain of GI G 1 type we can provide expressions for the matrices R0, j and Gi ,0 for i, j ı 1 as follows: R0, j
I0,( jj) ( I )0 ) 1
Gi ,0
( I ) 0 ) Ii(,0i ) .
and 1
f
¦A
Theorem 3.2 If the matrix A
k
is stochastic, then lim Gi ,0 i of
k f
0.
Proof Since f
Di e
¦
Ak e İ e
k i 1
and f
¦
Ae
Ak e
e,
k f
where e is a column vector of ones with switable size, it is easy to see that lim D i e 0. Note that i of
Gi ,0
l (1,) DTi , DT(i 1) , DT( i 2) ,! W f
¦ Wl k 1
134
(1, k )
D i k 1 ,
T
Markov Chains of GI/G/1 Type
3
We obtain f
lim Gi ,0 i of
f
l (1, k )D i k 1 lim ¦ W i of
¦ Wl
k 1
(1, k )
lim D ( i k 1)
0
i of
k 1
by means of the dominated convergence theorem due to the fact that for all l (1, k )D (i k 1) e İ e . This completes the proof. k ı 1 and i ı 1, W f
Remark 3.2 (1) In the matrix P given in Eq. (3.1), if
¦A
k
is finite, then
k f
lim A k k of
lim Ak
k of
0. By using a similar analysis to the proof for Theorem 3.2, it
follows from Eq. (3.2) and Eq. (3.3) that lim Rk
lim Gk
k of
k of
0.
f
(2) In the matrix P given in Eq. (3.1),
¦D
k
is finite, so lim Dk k of
k 0
obtain lim R0, k k of
0. Thus we
0.
Let f
R0
f
¦R
0, k
,
R
k 1
¦R ; k
k 1
and f
G0
f
¦G
k ,0
k 1
,
G
¦G . k
k 1
The following theorem provides an important property for the entry structure of the four matrices R0 , R, G0 and G. Note that two important examples of this theorem were discussed in Lemma 1.2.4 of Neuts [13] for Markov chains of GI M 1 type and in Lemma 2.3.6 of Neuts [14] for Markov chains of M G 1 type. Theorem 3.3 If the Markov chain P is irreducible, then (1) each column vector of the matrix R or R0 is not zero, and (2) each row vector of the matrix G or G0 is not zero. Proof We only prove R0 , while R , G and G0 can be proved similarly. Suppose that the j0 th column of R0 was zero. Since 0 İ R0,1 İ R0 , the j0 th column of R0,1 would be zero too. Let P[İ1] be the censored matrix of the Markov chain P given in Eq. (3.1) with censoring levels 0 and 1, partition P[İ1] according to levels 0 and 1 as P[İ1]
< 00 < 01 ·¸ ¸. < 10 < 11 ¸¸¹
§ ¨ ¨ ¨¨ ©
Then 135
Constructive Computation in Stochastic Models with Applications [İ1] R0,1 < 01 ( I < 11 ) ,
1
(3.10)
1
(3.11)
and we define [İ1] R1,0 < 10 ( I < 00 ) , [İ1] noting that R0,1 [İ1] 0,1
R
R0,1 according to the censoring invariance, the j0 th column of
would be zero.
Let N ((1, j0 ); (1 j0 )) be the number of state transitions for the censored Markov chain P[İ1] to eventually return to state (1, j0 ) by going through level 0 in intermediate transitions, given that the chain starts in state (1, j0 ). Then P{N ((1, j0 ); (1, j0 )) f} ! 0 , since P[İ1] is irreducible. Let iˆ be the number of state transitions for the censored Markov chain P[İ1] to travel from level 1 to level 0. Then
P ^ N ((1, j0 ); (1, j0 )) f`
lim P{N ((1, j0 ); (1, j0 )) f 0 İ iˆ İ M }.
M of
To evaluate P{N ((1, j0 ); (1, j0 )) f, 0 İ iˆ İ M } , we consider 1 1 E ¬ª z N ((1, j0 ) (1, j0 )) 1(0İiˆİM ) ¼º İ D ¢ j0 ² ( I z< 11 ) < 10 ( I z< 00 ) M
ª
1
¦ «¬< 01 ( I z<11 ) < 10 ( I z< 00 ) i 0
where D ¢ j0 ²
( I < 11 )
1
1 º i » ¼
< 01 ( I z< 11 ) 1 ª«« j0 º»» ,
0, ..., 0, 1, 0, 0, ..., 0 · § 0,
¸ , ( I < 11 ) 1 ª« j0 º» is the j0 th column of ¨ j m j 1 zeros zeros 0 © 0 ¹
and 1() is an indicator function. It follows from Eq. (3.10) and Eq. (3.11)
that ª º P{N ((1, j0 ); (1, j0 )) f, 0 İ iˆ İ M } E « z N ((1, j0 ); (1, j0 )) 1(0İiˆİM ) » ¬
¼_z 1
M
i
[İ1] D ¢ j0 ² ( I < 11 ) 1R1,0 ¦ ª«¬ R0,1[İ1] R1,0[İ1] º»¼ R0,1[İ1] ª«« j0 º»», i 0
[İ1] [İ1] ª where R0[İ11] ª« j0 º» is the j0th column of R0,1 . Since R0,1 j º « 0»
P{N ((1, j0 ); (1, j0 )) f 0 İ iˆ İ M } 0,
0 , we would obtain
for all M ı 0,
therefore, P ^ N ((1, j0 ); (1, j0 )) f`
lim P{N ((1, j0 ); (1, j0 )) f
M of
This leads to a contradiction to the above result with 136
0 İ iˆ İ M } 0
3
Markov Chains of GI/G/1 Type
P{N ((1, j0 ); (1, j0 )) f} ! 0. This completes the proof. The following theorem provides Wiener-Hopf equations for Markov chains of GI G 1 type. Note that the Wiener-Hopf equations for any irreducible Markov chain have been constructed in Theorem 2.4. Theorem 3.4 (1) For the repeated row and i ı 1, f
Ri ( I )0 )
Ai ¦ Ri k ( I )0 ) Gk ,
(3.12)
k 1
f
( I )0 ) Gi
A i ¦ Rk ( I ) 0 ) Gi k
(3.13)
k 1
and f
A0 ¦ Rk ( I )0 )Gk .
)0
(3.14)
k 1
(2) For the boundary blocks and i ı 1, f
R0,i ( I )0 )
Di ¦ R0i k ( I )0 ) Gk ,
(3.15)
k 1
f
( I )0 ) Gi ,0
Di ¦ Rk ( I )0 ) Gi k ,0
(3.16)
k 1
and f
<0
P[0]
D0 ¦ R0, k ( I ) 0 ) Gk ,0 .
(3.17)
k 1
Proof We only prove Eq. (3.12), while Eq. (3.13) to Eq. (3.17) can be proved similarly. When n is big enough, it follows from Theorem 3.1 that 1
Ri ( I )0 ) )i ( I )0 ) ( I )0 ) )i
Pn[İin, n] ,
and Pn[İin, n]
Pn[İi(, nn 1)] Ri[İ1( n 1)] Pn[İ1,( nn 1)] º [İ( n 1)] Pn[İi(, nn 1)] Ri[İ1( n 1)] ª¬ I Pn[İ1,( nn1)] 1 ¼ G1 Pn[İi(, nn 1)] Ri 1 ( I )0 )G1 N
! Pn[İi( nn N )] ¦ Ri k ( I )0 )Gk k 1
f
Ai ¦ Ri k ( I )0 )Gk . k 1
137
Constructive Computation in Stochastic Models with Applications
This completes the proof. For any irreducible Markov chain of GI G 1 type, the UL-type RG-factorization given in Eq. (3.1) can be simplified as I P
( I RU )( I ) D )( I GL ),
(3.18)
where
RU
§ 0 R01 ¨ 0 ¨ ¨ ¨ ¨ ¨ ©
R0 2 R1
0
R03 !· ¸ R2 !¸ R1 !¸ , ¸ 0 !¸ %¸¹
)D
diag < 0 , ) 0 , )0 , )0 , !
GL
§ 0 ¨ ¨ G10 ¨ G20 ¨ ¨ G30 ¨ # ©
and
0 G1 0 G2 G1 # #
· ¸ ¸ ¸. ¸ 0 ¸ # %¸¹
Let f
A* ( z )
¦
i f
f
z i Ai , R* ( z )
¦ z i Ri
f
G* ( z)
i 1
¦z
j
Gj.
j 1
For the Markov chain of GI G 1 type, the following theorem provides a new RG-factorization for the repeated row based on the z-transformation of the repeating blocks. Note that this RG-factorization is necessary in spectral analysis of Markov chains of GI G 1 type which will be organized in Sections 3.4 and 3.5 of this chapter. Theorem 3.5 I A* ( z ) [ I R* ( z )] ( I ) 0 ) [ I G* ( z )].
Proof It follows from Eq. (3.12), Eq. (3.13) and Eq. (3.14) that R* ( z )( I ) 0 ) ( I ) 0 ) G * ( z ) ) 0 f
I1 I 2 ¦ z k R ( I ) 0 ) z k Gk , k 1
138
(3.19)
Markov Chains of GI/G/1 Type
3
where f
I1
f
¦¦z
ik
Ri k ( I )0 ) z k Gk
i 1 k 1
f
k
R* ( z )( I )0 )[ I G* ( z )] ¦¦ z i Ri ( I )0 ) z k Gk k 1 i 1
and f
I2
f
¦¦z
k
Rk ( I ) 0 ) z ( j k ) G j k
j 1 k 1
f
f
R* ( z )( I ) 0 )[ I G* ( z )] ¦¦ z j R j ( I ) 0 ) z k Gk . k 1 j k
Since f
I1 I 2 ¦ z k R( I )0 ) z k Gk
A* ( z ) R* ( z )( I ) 0 )G* ( z ),
k 1
we get R* ( z )( I )0 ) ( I )0 ) G* ( z ) ) 0
A* ( z ) R* ( z )( I ) 0 ) G * ( z ),
which is equivalent to Eq. (3.19). This completes the proof. From Theorem 3.5, we have an important relation as follows: I A ( I R )( I ) 0 )( I G ).
(3.20)
Now, we consider the boundary matrix sequence {Dk , f k f}. However, such an RG-factorization given in Theorem 3.5 does not exist; but we can find a close and useful relationship for the boundary blocks. Let f
D* ( z )
¦z D i
i
f
D* ( z )
i 1
¦z
i
D i ,
j
G j 0 .
i 1
and f
R0* ( z )
¦z R i
0 i
i 1
f
,
G0* ( z )
¦z
(3.21)
j 1
The following theorem provides upper or lower bounds for the R- and G-measures with respect to the boundary blocks. Theorem 3.6 For z ı 0 , 1
R0* ( z ) ı D* ( z ) ( I ) 0 ) ,
(3.22) 139
Constructive Computation in Stochastic Models with Applications
R0* ( z )( I )0 )[ I G * ( z )] İ D* ( z ),
(3.23)
1
G0* ( z ) ı ( I ) 0 ) D* ( z )
(3.24)
[ I R* ( z )]( I ) 0 ) G0* ( z ) İ D* ( z )
(3.25)
and
Proof We only prove Eq. (3.22) and Eq. (3.23), while Eq. (3.24) and Eq. (3.25) can be proved similarly. It follows from Eq. (3.15) that f
R0,i ( I ) 0 )
Di ¦ R0,i k ( I ) 0 )Gk ,
(3.26)
k 1
and from Eq. (3.9) that Gk
1
( I )0 ) ) k .
It is obvious that f
f
¦ R0,i k ( I )0 ) Gk
¦R
0,i k
k 1
) k ı 0.
k 1
Hence it follows from Eq. (3.26) that R0,i ( I )0 ) ı Di
and for z ! 0 , R0* ( z )( I ) 0 ) ı D* ( z ).
(3.27)
Since the Markov chain is irreducible, the spectral radius of the matrix ) 0 is sp () 0 ) 1. f
¦ ()
Hence, the matrix I )0 is invertible and ( I ) 0 ) 1
0
) k ı 0. It follows
k 0
from Eq. (3.27) that 1
R0* ( z ) ı D* ( z )( I ) 0 ) . Note that f
f
¦z ¦R i
0, i k
i 1
k 1
f
( I ) 0 ) Gk
f
¦¦z
i k
R0,i k ( I ) 0 ) z k Gk
k 1 i 1 *
*
İ R0 ( z )( I ) 0 ) G ( z ),
140
3
Markov Chains of GI/G/1 Type
it follows from Eq. (3.26) that R0* ( z )( I )0 ) İ D* ( z ) R0* ( z )( I ) 0 ) G * ( z ), simple computations lead to
R0* ( z )( I ) 0 )[ I G * ( z )] İ D* ( z ).
This completes the proof. Applying Theorem 3.6, we have the following important inequalities 1
R0 ı D ( I ) 0 ) , R0 ( I ) 0 )( I G ) İ D , 1
G0 ı ( I )0 ) D and ( I R)( I ) 0 ) G0 İ D . Although the matrix sequence {R0, k } has not such an RG-fatorization given in Theorem 3.5, we can provide an expression for the matrix R0 k for k ı 1. The following lemma is useful for this purpose. 1 Lemma 3.2 Let Bi ) i ( I ) 0 ) for i ı 1 and § I ¨ ¨ B1 ¨ B2 ¨ ¨ B3 ¨ # ©
/
I B1 B2 #
I B1 #
· ¸ ¸ ¸. ¸ I ¸ # % ¸¹
I X1 #
· ¸ ¸ ¸, ¸ I ¸ # %¸¹
Then
/
1
§ I ¨ ¨ X1 ¨ X2 ¨ ¨ X3 ¨ # ©
I X1 X2 #
where f
Xl
¦
¦
Bn1 Bn2 ! Bni ,
l ı 1.
i 1 n1 n2 ! ni l n j ı1,1İ j İi
Proof Since $\Lambda\Lambda^{-1}=I$, we obtain

$$B_k+\sum_{i=1}^{k-1}B_{k-i}X_i-X_k=0,\qquad\text{for all } k\ge 1.$$

Let $X^*(z)=\sum_{k=1}^{\infty}z^{k}X_k$ and $B^*(z)=\sum_{k=1}^{\infty}z^{k}B_k$. Then

$$X^*(z)=[I-B^*(z)]^{-1}B^*(z)=\sum_{i=1}^{\infty}[B^*(z)]^{i}=\sum_{l=1}^{\infty}z^{l}\sum_{i=1}^{\infty}\ \sum_{\substack{n_1+n_2+\cdots+n_i=l\\ n_j\ge1,\ 1\le j\le i}}B_{n_1}B_{n_2}\cdots B_{n_i}.$$

Thus we have

$$X_l=\sum_{i=1}^{\infty}\ \sum_{\substack{n_1+n_2+\cdots+n_i=l\\ n_j\ge1,\ 1\le j\le i}}B_{n_1}B_{n_2}\cdots B_{n_i},\qquad l\ge 1.$$

This completes the proof.

The following theorem characterizes the expression for the matrix $R_{0,k}$ for $k\ge1$.

Theorem 3.7 For the Markov chain of GI/G/1 type, for each $k\ge1$ we have

$$R_{0,k}=D_k(I-\Phi_0)^{-1}+\sum_{i=1}^{\infty}D_{k+i}(I-\Phi_0)^{-1}X_i. \tag{3.28}$$

Proof It follows from Eq. (3.15) and Eq. (3.9) that for all $k\ge1$,

$$R_{0,k}-\sum_{i=1}^{\infty}R_{0,k+i}\Phi_{-i}(I-\Phi_0)^{-1}=D_k(I-\Phi_0)^{-1},$$

or, in matrix form,

$$(R_{0,1},R_{0,2},R_{0,3},R_{0,4},\ldots)\,\Lambda=(F_1,F_2,F_3,F_4,\ldots),$$

where $F_k=D_k(I-\Phi_0)^{-1}$ for $k\ge1$ and $\Lambda$ is given in Lemma 3.2. Applying Lemma 3.2 yields that for all $k\ge1$,

$$R_{0,k}=F_k+\sum_{i=1}^{\infty}F_{k+i}X_i=D_k(I-\Phi_0)^{-1}+\sum_{i=1}^{\infty}D_{k+i}(I-\Phi_0)^{-1}X_i.$$
This completes the proof.

Let $\phi_R$, $\phi_{R_0}$, $\phi_G$, $\phi_{G_0}$, $\phi_A$, $\phi_{\bar A}$, $\phi_D$ and $\phi_{\bar D}$ denote the radii of convergence of $R^*(z)$, $R_0^*(z)$, $G^*(z)$, $G_0^*(z)$, $A^*(z)$, $\bar A^*(z)$, $D^*(z)$ and $\bar D^*(z)$, respectively, where

$$A^*(z)=\sum_{k=1}^{\infty}z^{k}A_k,\qquad \bar A^*(z)=\sum_{k=1}^{\infty}z^{-k}A_{-k}.$$

The following theorem provides an important relation among the radii of convergence.

Theorem 3.8 (1) $\phi_R=\phi_A$ and $\phi_G=\phi_{\bar A}$. (2) $\phi_{R_0}=\phi_D$ and $\phi_{G_0}=\phi_{\bar D}$.

Proof We only prove $\phi_R=\phi_A$ and $\phi_{R_0}=\phi_D$; the other two can be proved similarly.

We first prove $\phi_R=\phi_A$. Since $\sum_{k=1}^{\infty}A_k$ and $\sum_{k=1}^{\infty}A_{-k}$ are substochastic, $0\le\phi_{\bar A}\le 1\le\phi_A\le\infty$. At the same time, it is seen that $0\le\phi_G\le1\le\phi_R\le\infty$. It follows from Eq. (3.19) that, writing $A(z)=A_0+A^*(z)+\bar A^*(z)$,

$$A(z)=\Phi_0+R^*(z)+G^*(z)-R^*(z)\Phi_0-\Phi_0G^*(z)-R^*(z)G^*(z)+R^*(z)\Phi_0G^*(z).$$

Since $R^*(z)$ is a power series and $G^*(z)$ is a Laurent series with only negative powers of $z$, $R^*(z)$ is analytic in $|z|<\phi_R$ and $G^*(z)$ is analytic in $|z|>\phi_G$. Noting that

$$A(z)=R^*(z)\big[\,I-\Phi_0-G^*(z)+\Phi_0G^*(z)\,\big]+\Phi_0+(I-\Phi_0)G^*(z)$$

and

$$A(z)=\big[\,I-\Phi_0-R^*(z)+R^*(z)\Phi_0\,\big]G^*(z)+\Phi_0+R^*(z)(I-\Phi_0),$$

it is clear that $\phi_R=\phi_A$ and $\phi_G=\phi_{\bar A}$ by using $0\le\phi_{\bar A}\le1\le\phi_A\le\infty$ and $0\le\phi_G\le1\le\phi_R\le\infty$.

Now, we prove $\phi_{R_0}=\phi_D$. Note that for $z>0$,

$$R_0^*(z)\ \ge\ D^*(z)(I-\Phi_0)^{-1}\tag{3.29}$$

and $R_0^*(z)(I-\Phi_0)[I-G^*(z)]\le D^*(z)$.

Note that either $\phi_D=1$ or $\phi_D>1$; thus we need to discuss the following two possible cases.

Case I: $\phi_D=1$. In this case, it is easy to see from Eq. (3.29) that $\phi_{R_0}\le\phi_D=1$. Noting that $R_0^*(1)$ is finite, we have $\phi_{R_0}\ge1$. Thus $\phi_{R_0}=\phi_D=1$.

Case II: $\phi_D>1$. In this case, it is easy to see from Corollary 3.10 that $I-G^*(z)$ is invertible for any $z>1$ when $P$ is irreducible and positive recurrent. Since $G^*(z)$ is non-increasing for $z\in(1,\infty)$, we can take $\delta>0$ small enough such that for all $z>1+\delta$,

$$R_0^*(z)(I-\Phi_0)\big[I-G^*(1+\delta)\big]\le R_0^*(z)(I-\Phi_0)\big[I-G^*(z)\big]\le D^*(z);$$

hence

$$R_0^*(z)\le D^*(z)\big[I-G^*(1+\delta)\big]^{-1}(I-\Phi_0)^{-1},$$

which, together with Eq. (3.29), illustrates that for $z>1+\delta$,

$$D^*(z)(I-\Phi_0)^{-1}\ \le\ R_0^*(z)\ \le\ D^*(z)\big[I-G^*(1+\delta)\big]^{-1}(I-\Phi_0)^{-1},\tag{3.30}$$
which indicates that $\phi_{R_0}=\phi_D>1$. This completes the proof.

In the rest of this section, we discuss the LU-type R-measure $\{R_{i,j},\,0\le j<i\}$ and G-measure $\{G_{i,j},\,0\le i<j\}$, and provide expressions for the LU-type R- and G-measures. For the Markov chain of GI/G/1 type, it is easy to see that the sizes of the matrices $R_{i,0}$ for $i\ge1$ and $G_{0,j}$ for $j\ge1$ are $m\times m_0$ and $m_0\times m$, respectively, while the sizes of all the matrices $R_{i,j}$ for $1\le j<i$ and $G_{i,j}$ for $1\le i<j$ are $m\times m$. Let

$$R^*_{\cdot,j}(z)=\sum_{i=j+1}^{\infty}z^{-i}R_{i,j},\qquad j\ge0,$$

and

$$G^*_{i,\cdot}(z)=\sum_{j=i+1}^{\infty}z^{j}G_{i,j},\qquad i\ge0.$$

The following theorem and corollary provide expressions for $R^*_{\cdot,j}(z)$ for $j\ge0$ and $G^*_{i,\cdot}(z)$ for $i\ge0$. The proof is clear and is omitted here.

Theorem 3.9 (1)

$$R^*_{\cdot,0}(z)(I-\Phi_0)=\bar D^*(z)$$

and for $j\ge1$,

$$R^*_{\cdot,j}(z)(I-\Phi_j)=z^{-j}\bar A^*(z)+\sum_{k=0}^{j-1}R^*_{\cdot,k}(z)(I-\Phi_k)G_{k,j}.$$

(2)

$$(I-\Phi_0)G^*_{0,\cdot}(z)=D^*(z)$$

and for $i\ge1$,

$$(I-\Phi_i)G^*_{i,\cdot}(z)=z^{i}A^*(z)+\sum_{k=0}^{i-1}R_{i,k}(I-\Phi_k)G^*_{k,\cdot}(z).$$

For $0\le l<k$, we write

$$S_{l,k}=G_{l,k}+\sum_{1\le i_1\le k-1}G_{l,i_1}G_{i_1,k}+\sum_{1\le i_1<i_2\le k-1}G_{l,i_1}G_{i_1,i_2}G_{i_2,k}+\cdots+G_{l,l+1}G_{l+1,l+2}\cdots G_{k-1,k}$$

and

$$T_{k,l}=R_{k,l}+\sum_{1\le i_1\le k-1}R_{k,i_1}R_{i_1,l}+\sum_{1\le i_2<i_1\le k-1}R_{k,i_1}R_{i_1,i_2}R_{i_2,l}+\cdots+R_{k,k-1}R_{k-1,k-2}\cdots R_{l+1,l}.$$

Corollary 3.1 For $k\ge1$,

$$R^*_{\cdot,k}(z)(I-\Phi_k)=z^{-k}\bar A^*(z)+\bar D^*(z)S_{0,k}+\bar A^*(z)\sum_{l=1}^{k-1}z^{-l}S_{l,k}$$

and

$$(I-\Phi_k)G^*_{k,\cdot}(z)=z^{k}A^*(z)+T_{k,0}D^*(z)+\sum_{l=1}^{k-1}z^{l}T_{k,l}A^*(z).$$
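The composition-sum expression for $X_l$ in Lemma 3.2 lends itself to a quick numerical check. A minimal sketch with scalar $B_i$ (the values below are arbitrary illustrative numbers, not taken from any particular chain): the coefficients obtained from the recursion $X_k=B_k+\sum_{i=1}^{k-1}B_{k-i}X_i$ in the proof must agree with the sum over all compositions $(n_1,\ldots,n_i)$ of $l$.

```python
# Toy scalar "blocks" B_1..B_4 (arbitrary illustrative values, not from the text).
B = {1: 0.3, 2: 0.15, 3: 0.05, 4: 0.02}
L = 4

# X_l via the recursion X_k = B_k + sum_{i=1}^{k-1} B_{k-i} X_i  (Lemma 3.2 proof).
X_rec = {}
for k in range(1, L + 1):
    X_rec[k] = B[k] + sum(B[k - i] * X_rec[i] for i in range(1, k))

# X_l via the composition sum over (n_1, ..., n_i) with n_1 + ... + n_i = l.
def compositions(l):
    if l == 0:
        yield ()
        return
    for first in range(1, l + 1):
        for rest in compositions(l - first):
            yield (first,) + rest

X_comp = {}
for l in range(1, L + 1):
    total = 0.0
    for comp in compositions(l):
        term = 1.0
        for n in comp:
            term *= B[n]
        total += term
    X_comp[l] = total

for l in range(1, L + 1):
    assert abs(X_rec[l] - X_comp[l]) < 1e-12
print(X_rec[3])
```

In the matrix case the two factors in each product no longer commute, but the same comparison goes through with `B[n]` as matrices and `term` as a matrix product.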
3.2 Dual Markov Chains

In this section, we define a dual Markov chain for any irreducible Markov chain, and provide a useful relationship for the R- and G-measures between the dual Markov chain and the original Markov chain. For the Markov chain of GI/G/1 type, we further analyze the R- and G-measures, and obtain useful properties based on the dual chain.

Definition 3.1 For a Markov chain $P$, if there exists a positive left super-regular vector $\alpha=(\alpha_0,\alpha_1,\alpha_2,\ldots)$ such that $\alpha\ge\alpha P$, we write

$$\widetilde{P}=\operatorname{diag}^{-1}(\alpha)\,P^{T}\operatorname{diag}(\alpha),$$

where $\operatorname{diag}(\alpha)=\operatorname{diag}(\alpha_0,\alpha_1,\alpha_2,\ldots)$; then we call $\widetilde{P}$ a dual Markov chain of the Markov chain $P$.

It is clear that if $\widetilde{P}=(\widetilde{P}_{i,j})$ is a dual chain of the Markov chain $P=(P_{i,j})$, then

$$\widetilde{P}_{i,j}=\operatorname{diag}^{-1}(\alpha_i)\,P_{j,i}^{T}\operatorname{diag}(\alpha_j).$$

In general, the dual chain $\widetilde{P}$ can be probabilistically interpreted in terms of the time reversibility of the original chain $P$; e.g., see Kemeny, Snell and Knapp [5].

Let $\{R_{i,j}\}$ and $\{G_{i,j}\}$, and $\{\widetilde{R}_{i,j}\}$ and $\{\widetilde{G}_{i,j}\}$, be the UL-type R- and G-measures for the Markov chain $P$ and the dual chain $\widetilde{P}$, respectively. It is easy to check from Corollary 2.2 that

$$\widetilde{R}_{i,j}=\operatorname{diag}^{-1}(\alpha_i)\,G_{j,i}^{T}\operatorname{diag}(\alpha_j)$$

and

$$\widetilde{G}_{i,j}=\operatorname{diag}^{-1}(\alpha_i)\,R_{j,i}^{T}\operatorname{diag}(\alpha_j).$$

On the other hand, for the LU-type R-, U- and G-measures we have

$$\widehat{R}_{i,j}=\operatorname{diag}^{-1}(\alpha_i)\,G_{j,i}^{T}\operatorname{diag}(\alpha_j),\qquad \widehat{G}_{i,j}=\operatorname{diag}^{-1}(\alpha_i)\,R_{j,i}^{T}\operatorname{diag}(\alpha_j)$$

and

$$\widetilde{\Phi}_n=\operatorname{diag}^{-1}(\alpha_n)\,\Phi_n^{T}\operatorname{diag}(\alpha_n).$$

Let $P_{[0]}$ be the matrix obtained from the matrix $P$ by deleting the first row and the first column. The following proposition provides a relationship of the positive left super-regular vectors between $P$ and $P_{[0]}$. The proof is clear and is omitted here.

Proposition 3.1 If $\alpha=(\alpha_0,\alpha_1,\alpha_2,\ldots)$ is a positive left super-regular vector of $P$, then $\beta=(\alpha_1,\alpha_2,\alpha_3,\ldots)$ is also a positive left super-regular vector of $P_{[0]}$.

Applying the dual chain, the following theorem gives an important property for the R-measure based on $\widetilde{G}_{i,j}e\le e$ and $\widehat{G}_{i,j}e\le e$.

Theorem 3.10 If $\alpha=(\alpha_0,\alpha_1,\alpha_2,\ldots)$ is a positive left super-regular vector of $P$, then (1) $\alpha_iR_{i,j}\le\alpha_j$ for $0\le i<j$, and (2) $\alpha_i\widehat{R}_{i,j}\le\alpha_j$ for $0\le j<i$.

Proof We only prove (1), while (2) can be proved similarly.

According to the definition of the G-measure, it is easy to see that $\widetilde{G}_{j,i}e\le e$. Applying $\widetilde{G}_{j,i}=\operatorname{diag}^{-1}(\alpha_j)R_{i,j}^{T}\operatorname{diag}(\alpha_i)$, we obtain

$$\operatorname{diag}^{-1}(\alpha_j)\,R_{i,j}^{T}\operatorname{diag}(\alpha_i)\,e=\widetilde{G}_{j,i}e\le e.$$

Thus $\alpha_iR_{i,j}\le\alpha_j$ for $0\le i<j$. This completes the proof.

In what follows we study the dual chain for the Markov chain $P$ of GI/G/1 type. Let

$$P_{[0]}=\begin{pmatrix} A_0 & A_1 & A_2 & A_3 & \cdots\\ A_{-1} & A_0 & A_1 & A_2 & \cdots\\ A_{-2} & A_{-1} & A_0 & A_1 & \cdots\\ A_{-3} & A_{-2} & A_{-1} & A_0 & \cdots\\ \vdots & \vdots & \vdots & \vdots & \end{pmatrix}.$$

The following lemma provides a relation of the positive left super-regular vectors between $P_{[0]}$ and $A$. The proof is clear and is omitted here.

Lemma 3.3 If $\theta$ is a positive left super-regular vector of the Markov chain $A=\sum_{k=-\infty}^{\infty}A_k$, then $(\theta,\theta,\theta,\ldots)$ is also a positive left super-regular vector of the Markov chain $P_{[0]}$.

Remark 3.3 If the Markov chain $A$ with finitely many states is irreducible and stochastic, then the Markov chain $A$ is positive recurrent, and $\theta$ is its stationary probability vector.

By using Lemma 3.3, it is easy to see that

$$\widetilde{R}_k=\operatorname{diag}^{-1}(\theta)\,G_k^{T}\operatorname{diag}(\theta)\qquad\text{and}\qquad \widetilde{G}_k=\operatorname{diag}^{-1}(\theta)\,R_k^{T}\operatorname{diag}(\theta).$$

Definition 3.2 For the Markov chain $P$ of GI/G/1 type, we write

$$P=\begin{pmatrix} D_0 & U\\ V & P_{[0]}\end{pmatrix}\qquad\text{and}\qquad \widehat{P}=\begin{pmatrix} D_0 & U\\ V & \widehat{P}_{[0]}\end{pmatrix},$$

where

$$\widehat{P}_{[0]}=\operatorname{diag}^{-1}(\theta,\theta,\theta,\ldots)\,P_{[0]}^{T}\operatorname{diag}(\theta,\theta,\theta,\ldots).$$

Then we call $\widehat{P}$ the dual generating chain of $P$.

Theorem 3.11 If $\theta$ is a positive left super-regular vector of the Markov chain $A$, then $\theta R\le\theta$.

Proof In the dual generating chain $\widehat{P}$, $\widetilde{G}=\sum_{k=1}^{\infty}\widetilde{G}_k$ is stochastic or substochastic, that is, $\widetilde{G}e\le e$. Since $\widetilde{G}=\operatorname{diag}^{-1}(\theta)R^{T}\operatorname{diag}(\theta)$, we obtain

$$e\ \ge\ \widetilde{G}e=\operatorname{diag}^{-1}(\theta)\,R^{T}\operatorname{diag}(\theta)\,e,$$

which leads to

$$\theta^{T}=\operatorname{diag}(\theta)\,e\ \ge\ R^{T}\operatorname{diag}(\theta)\,e=R^{T}\theta^{T};$$

clearly, $\theta R\le\theta$. This completes the proof.
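Definition 3.1 is easy to exercise numerically. A minimal sketch (the $3\times3$ stochastic matrix below is illustrative only; $\alpha$ is taken as its stationary vector, so $\alpha=\alpha P$ and the super-regularity in Definition 3.1 holds with equality): the dual $\operatorname{diag}^{-1}(\alpha)P^{T}\operatorname{diag}(\alpha)$ is again a stochastic matrix, which is the time-reversal interpretation mentioned above.

```python
import numpy as np

# A small irreducible stochastic matrix (illustrative values).
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.6, 0.3, 0.1]])

# Stationary vector alpha: alpha P = alpha, sum(alpha) = 1.
# Solve (P^T - I) alpha^T = 0 with the normalization appended as an extra row.
A_sys = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
alpha, *_ = np.linalg.lstsq(A_sys, b, rcond=None)

# alpha = alpha P makes alpha left super-regular, so Definition 3.1 applies:
P_dual = np.diag(1.0 / alpha) @ P.T @ np.diag(alpha)

# The dual chain is again stochastic (row sums 1), and alpha is stationary for it too.
assert np.allclose(P_dual.sum(axis=1), 1.0)
assert np.allclose(alpha @ P_dual, alpha)
print(np.round(P_dual, 4))
```

With a strictly super-regular $\alpha$ ($\alpha\ge\alpha P$, not equal), the same construction yields a substochastic dual.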
3.3 The A- and B-Measures

In this section, we simplify the A- and B-measures, given in Section 2.5, for Markov chains of GI/G/1 type, and construct new expressions for the A- and B-measures by means of the R- and G-measures. Based on the A- and B-measures, we provide conditions for the state classification of Markov chains of GI/G/1 type.

For the Markov chain of GI/G/1 type, we write

$$R_U=\begin{pmatrix} 0 & R_1 & R_2 & R_3 & \cdots\\ & 0 & R_1 & R_2 & \cdots\\ & & 0 & R_1 & \cdots\\ & & & 0 & \ddots\\ & & & & \ddots\end{pmatrix}\qquad\text{and}\qquad G_L=\begin{pmatrix} 0 & & & &\\ G_1 & 0 & & &\\ G_2 & G_1 & 0 & &\\ G_3 & G_2 & G_1 & 0 &\\ \vdots & \vdots & \vdots & \ddots & \ddots\end{pmatrix}.$$

From (1) in Theorem 2.10, we have

$$A_{0,j}=\begin{cases} R_{0,1}, & \text{if } j=1,\\[1mm] R_{0,j}+\displaystyle\sum_{k=1}^{j-1}A_{0,k}R_{j-k}, & \text{if } j\ge2,\end{cases}\tag{3.31}$$

or the matrix expression

$$(A_{0,1},A_{0,2},A_{0,3},\ldots)(I-R_U)=(R_{0,1},R_{0,2},R_{0,3},\ldots).$$

Using (2) in Theorem 2.10, the matrix $B_{i,0}$ and the G-measure satisfy

$$B_{i,0}=\begin{cases} G_{1,0}, & \text{if } i=1,\\[1mm] G_{i,0}+\displaystyle\sum_{k=1}^{i-1}G_{i-k}B_{k,0}, & \text{if } i\ge2,\end{cases}\tag{3.32}$$

or the matrix expression

$$(I-G_L)\big(B_{1,0}^{T},B_{2,0}^{T},B_{3,0}^{T},\ldots\big)^{T}=\big(G_{1,0}^{T},G_{2,0}^{T},G_{3,0}^{T},\ldots\big)^{T}.$$

If $1\le i<j$, then using Corollary 2.6, we have

$$A_{i,j}=\begin{cases} R_1, & \text{if } j=i+1,\\[1mm] R_{j-i}+\displaystyle\sum_{k=i+1}^{j-1}A_{i,k}R_{j-k}, & \text{if } j\ge i+2,\end{cases}\tag{3.33}$$

and if $1\le j<i$, then

$$B_{i,j}=\begin{cases} G_1, & \text{if } i=j+1,\\[1mm] G_{i-j}+\displaystyle\sum_{k=j+1}^{i-1}G_{i-k}B_{k,j}, & \text{if } i\ge j+2.\end{cases}\tag{3.34}$$

At the same time, it is clear that

$$A_{i,i}=A_{i,0}R_{0,i}+\sum_{k=1}^{i-1}A_{i,k}R_{i-k}+\sum_{k=i+1}^{\infty}R_{k-i}A_{k,i}$$

and

$$B_{i,i}=G_{i,0}B_{0,i}+\sum_{k=1}^{i-1}G_{i-k}B_{k,i}+\sum_{k=i+1}^{\infty}B_{i,k}G_{k-i}.$$

Let

$$\Gamma_0^*(z)=\sum_{j=1}^{\infty}z^{j}A_{0,j},\qquad \Gamma_i^*(z)=\sum_{j=i+1}^{\infty}z^{j}A_{i,j},\quad i\ge1,$$

and

$$\Delta_0^*(z)=\sum_{i=1}^{\infty}z^{-i}B_{i,0},\qquad \Delta_j^*(z)=\sum_{i=j+1}^{\infty}z^{-i}B_{i,j},\quad j\ge1.$$

Then it respectively follows from Eq. (3.31) to Eq. (3.34) that

$$\Gamma_0^*(z)[I-R^*(z)]=R_0^*(z),\qquad \Gamma_i^*(z)[I-R^*(z)]=z^{i}R^*(z),\quad i\ge1,$$

$$[I-G^*(z)]\Delta_0^*(z)=G_0^*(z)\qquad\text{and}\qquad [I-G^*(z)]\Delta_j^*(z)=z^{-j}G^*(z),\quad j\ge1.$$

Let

$$\mathcal{A}_i=\sum_{j=i+1}^{\infty}A_{i,j},\quad i\ge0,\qquad\text{and}\qquad \mathcal{B}_j=\sum_{i=j+1}^{\infty}B_{i,j},\quad j\ge0.$$

Then

$$\mathcal{A}_0(I-R)=R_0,\qquad \mathcal{A}_i(I-R)=R,\quad i\ge1;$$

$$(I-G)\mathcal{B}_0=G_0\qquad\text{and}\qquad (I-G)\mathcal{B}_j=G,\quad j\ge1.$$
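In the scalar case ($m=m_0=1$) the recursion (3.31) and its matrix form can be cross-checked numerically on truncated sequences. A minimal sketch (the values of $R_{0,j}$ and $R_j$ below are arbitrary illustrative numbers, not derived from a particular chain): $A_{0,j}$ computed by the recursion must coincide with the solution of $(A_{0,1},A_{0,2},\ldots)(I-R_U)=(R_{0,1},R_{0,2},\ldots)$ on a finite truncation.

```python
import numpy as np

n = 8
R0 = np.array([0.4 * 0.5 ** j for j in range(1, n + 1)])   # R_{0,1..n}
R  = np.array([0.3 * 0.4 ** j for j in range(1, n + 1)])   # R_{1..n}

# A_{0,j} from the recursion (3.31).
A0 = np.zeros(n)
for j in range(1, n + 1):
    A0[j - 1] = R0[j - 1] + sum(A0[k - 1] * R[j - k - 1] for k in range(1, j))

# The same numbers from the matrix form (A_{0,1},...)(I - R_U) = (R_{0,1},...):
# I - R_U is upper-triangular Toeplitz with -R_1, -R_2, ... above the diagonal.
IRU = np.eye(n)
for j in range(n):
    for k in range(j):
        IRU[k, j] = -R[j - k - 1]
A0_matrix = np.linalg.solve(IRU.T, R0)   # solves x (I - R_U) = R0

assert np.allclose(A0, A0_matrix)
print(np.round(A0[:4], 6))
```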
We now provide conditions for the state classification of Markov chains of GI/G/1 type based on the A- and B-measures. For convenience of description, we assume that the Markov chain $P$ given in Eq. (3.1) and the matrix $A$ are irreducible and stochastic, unless stated otherwise.

Lemma 3.4 The Markov chain $P$ is recurrent if and only if for all $i\ge1$,

$$G_{i,0}e+\sum_{k=1}^{i-1}G_{i-k}e=e.$$

Proof The proof directly follows Lemma 2.9 and Theorem 2.11.

It is seen from Theorem 3.2 that if the matrix $A$ is stochastic, then $\lim_{i\to\infty}G_{i,0}=0$. Therefore, we have the following corollary.

Corollary 3.2 (1) The Markov chain $P$ is recurrent if and only if $G$ is stochastic. (2) The Markov chain $P$ is transient if and only if $G$ is strictly substochastic.

Theorem 3.12 If the Markov chain $P$ is recurrent, then $P$ is positive recurrent if and only if $R_0$ is finite and $\lim_{k\to\infty}R^{k}=0$.

Proof Suppose first that $P$ is positive recurrent. It follows from Theorem 3.8 that the matrix $\mathcal{A}_0$ is finite. Noting that $\mathcal{A}_0(I-R)=R_0$, we obtain $R_0\le\mathcal{A}_0$, which leads to the conclusion that $R_0$ is finite. Since

$$\mathcal{A}_0=R_0+\mathcal{A}_0R=R_0+R_0R+\mathcal{A}_0R^{2}=R_0+R_0R+R_0R^{2}+\mathcal{A}_0R^{3}=\cdots=R_0\sum_{k=0}^{N-1}R^{k}+\mathcal{A}_0R^{N}\ \ge\ R_0\sum_{k=0}^{N}R^{k}$$

for all $N\ge1$, we obtain $\lim_{k\to\infty}R^{k}=0$.

Suppose now that $R_0$ is finite and $\lim_{k\to\infty}R^{k}=0$. In this case, the matrix $I-R$ is invertible. Since $\mathcal{A}_0(I-R)=R_0$, we obtain

$$\mathcal{A}_0=R_0(I-R)^{-1}<\infty.$$

Therefore, $P$ is positive recurrent by means of Theorem 2.12. This completes the proof.
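The two conditions in Theorem 3.12 are easy to test numerically once $R$ and $R_0$ are available. A minimal sketch with small illustrative matrices (assumed values, not derived from any particular chain): when $\operatorname{sp}(R)<1$, the powers $R^{k}$ vanish and $\mathcal{A}_0=R_0(I-R)^{-1}$ is finite and equals the Neumann series $R_0\sum_k R^{k}$ used in the proof.

```python
import numpy as np

# Illustrative R-measure blocks (assumed values, m0 = 1, m = 2).
R  = np.array([[0.2, 0.3],
               [0.1, 0.4]])        # R = sum of R_k
R0 = np.array([[0.5, 0.2]])        # R_0 = sum of R_{0,k}, finite

# Condition: spectral radius below one, so R^k -> 0.
sp_R = max(abs(np.linalg.eigvals(R)))
assert sp_R < 1.0
assert np.allclose(np.linalg.matrix_power(R, 60), 0.0, atol=1e-12)

# Then A_0 = R_0 (I - R)^{-1} is finite and equals the Neumann series.
A0 = R0 @ np.linalg.inv(np.eye(2) - R)
series = sum(R0 @ np.linalg.matrix_power(R, k) for k in range(200))
assert np.allclose(A0, series, atol=1e-10)
print(np.round(A0, 6))
```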
Let

$$W=\begin{pmatrix} A_0 & A_1 & A_2 & A_3 & \cdots\\ A_{-1} & A_0 & A_1 & A_2 & \cdots\\ A_{-2} & A_{-1} & A_0 & A_1 & \cdots\\ A_{-3} & A_{-2} & A_{-1} & A_0 & \cdots\\ \vdots & \vdots & \vdots & \vdots & \end{pmatrix}.$$

For the Markov chain $W$, we introduce the A- and B-measures for the first block-row and the first block-column as follows: $\mathbb{A}_0=\mathbb{B}_0=I$, $\mathbb{A}_k=A_{1,k+1}$ and $\mathbb{B}_k=B_{k+1,1}$ for $k\ge1$. The following corollary provides two simple and useful relations between the A- and R-measures and the B- and G-measures, respectively. The proof is easy by using Eq. (3.33) and Eq. (3.34).

Corollary 3.3

$$\mathbb{A}_k=\sum_{i=0}^{k-1}\mathbb{A}_iR_{k-i}\tag{3.35}$$

and

$$\mathbb{B}_k=\sum_{i=0}^{k-1}G_{k-i}\mathbb{B}_i.\tag{3.36}$$

Let

$$\mathbb{A}^*(z)=\sum_{k=0}^{\infty}z^{k}\mathbb{A}_k\qquad\text{and}\qquad \mathbb{B}^*(z)=\sum_{k=0}^{\infty}z^{-k}\mathbb{B}_k.$$

Then it follows from Eq. (3.35) that

$$\mathbb{A}^*(z)[I-R^*(z)]=I\tag{3.37}$$

and from Eq. (3.36) that

$$[I-G^*(z)]\,\mathbb{B}^*(z)=I.\tag{3.38}$$

Now, we provide a useful relation between $\Phi_n$ and the A- and B-measures. This relation leads to a new expression for the R- and G-measures.

Corollary 3.4 (1) For $n\ge0$,

$$\Phi_n=\sum_{i=0}^{\infty}A_{n+i}\mathbb{B}_i\qquad\text{and}\qquad \Phi_{-n}=\sum_{i=0}^{\infty}\mathbb{A}_iA_{-(n+i)}.$$

(2) For $n\ge1$,

$$R_n(I-\Phi_0)=\sum_{i=0}^{\infty}A_{n+i}\mathbb{B}_i\qquad\text{and}\qquad (I-\Phi_0)G_n=\sum_{i=0}^{\infty}\mathbb{A}_iA_{-(n+i)}.$$

Proof It follows from Eq. (3.5) and Eq. (3.4) that

$$\Phi_0=A_0+(A_1,A_2,A_3,\ldots)\,\widehat{W}\,\big(A_{-1}^{T},A_{-2}^{T},A_{-3}^{T},\ldots\big)^{T},$$

where $\widehat{W}=(I-W)^{-1}_{\min}$. Using Eq. (2.44) we obtain

$$\widehat{W}\big(A_{-1}^{T},A_{-2}^{T},A_{-3}^{T},\ldots\big)^{T}=\big(B_{2,1}^{T},B_{3,1}^{T},B_{4,1}^{T},\ldots\big)^{T}=\big(\mathbb{B}_1^{T},\mathbb{B}_2^{T},\mathbb{B}_3^{T},\ldots\big)^{T};$$

thus it is easy to see that

$$\Phi_0=\sum_{i=0}^{\infty}A_i\mathbb{B}_i.$$

On the other hand, note that

$$(A_1,A_2,A_3,\ldots)\,\widehat{W}=(A_{1,2},A_{1,3},A_{1,4},\ldots)=(\mathbb{A}_1,\mathbb{A}_2,\mathbb{A}_3,\ldots);$$

thus we obtain

$$\Phi_0=\sum_{i=0}^{\infty}\mathbb{A}_iA_{-i}.$$

Similarly, we can easily prove the other equations by means of $R_n(I-\Phi_0)=\Phi_n$ and $(I-\Phi_0)G_n=\Phi_{-n}$ for $n\ge1$. This completes the proof.

The following lemma expresses the fundamental matrix $\widehat{W}=(\widehat{W}_{i,j})$ by means of the R-, U- and G-measures, where $\widehat{W}_{i,j}$ is a matrix of size $m\times m$.

Lemma 3.5 (1) $\widehat{W}_{1,1}=(I-\Phi_0)^{-1}$.

(2) For $i\ge2$ or $j\ge2$,

$$\widehat{W}_{i,j}=\begin{cases}\displaystyle\sum_{k=1}^{i-1}G_{i-k}\widehat{W}_{k,j}, & \text{if } i>j,\\[3mm] \displaystyle\sum_{k=1}^{j-1}\widehat{W}_{i,k}R_{j-k}, & \text{if } i<j.\end{cases}$$

Proof (1) is clear. We only need to prove the first part of (2) with $i>j$, while the other part for $i<j$ can be proved similarly. The RG-factorization of the matrix $I-W$ is given by

$$I-W=(I-R_U)(I-\Phi_D)(I-G_L),$$

which leads to

$$(I-G_L)\,\widehat{W}=(I-\Phi_D)^{-1}(I-R_U)^{-1}.\tag{3.39}$$

Hence we obtain

$$\begin{pmatrix} I & & & &\\ -G_1 & I & & &\\ -G_2 & -G_1 & I & &\\ -G_3 & -G_2 & -G_1 & I &\\ \vdots & \vdots & \vdots & \vdots & \ddots\end{pmatrix}\big(\widehat{W}_{i,j}\big)=\begin{pmatrix}(I-\Phi_0)^{-1} & * & * & \cdots\\ & (I-\Phi_0)^{-1} & * & \cdots\\ & & (I-\Phi_0)^{-1} & \cdots\\ & & & \ddots\end{pmatrix},\tag{3.40}$$

where $*$ denotes the corresponding block-entry of the matrix $(I-\Phi_D)^{-1}(I-R_U)^{-1}$. It follows from Eq. (3.40) that

$$\widehat{W}_{1,1}=(I-\Phi_0)^{-1},\qquad \widehat{W}_{i,j}-\sum_{k=1}^{i-1}G_{i-k}\widehat{W}_{k,j}=0,\quad i>j\ge1.$$

Simple matrix computations can easily lead to the desired results.

A useful relation between the fundamental matrix and the A- and B-measures is given in the following theorem.

Theorem 3.13

$$\widehat{W}_{i,j}=\widehat{W}_{i-1,j-1}+\mathbb{B}_{i-1}\widehat{W}_{1,1}\mathbb{A}_{j-1},$$

or

$$\widehat{W}_{i,j}=\begin{cases}\displaystyle\sum_{k=0}^{i-1}\mathbb{B}_k\widehat{W}_{1,1}\mathbb{A}_{k+j-i}, & \text{if } 1\le i\le j,\\[3mm] \displaystyle\sum_{k=0}^{j-1}\mathbb{B}_{k+i-j}\widehat{W}_{1,1}\mathbb{A}_k, & \text{if } 1\le j\le i.\end{cases}$$

Proof Let

$$W=\begin{pmatrix} A_0 & U\\ V & W\end{pmatrix}.$$

Then

$$\widehat{W}=\begin{pmatrix} * & *\\ * & \widehat{W}+\widehat{W}V(I-A_0-U\widehat{W}V)^{-1}U\widehat{W}\end{pmatrix}=\begin{pmatrix} * & *\\ * & \widehat{W}+\widehat{W}V(I-\Phi_0)^{-1}U\widehat{W}\end{pmatrix}.\tag{3.41}$$

Note that

$$\widehat{W}_{1,1}=(I-\Phi_0)^{-1},\qquad \widehat{W}V=\big(\mathbb{B}_1^{T},\mathbb{B}_2^{T},\mathbb{B}_3^{T},\ldots\big)^{T},\qquad U\widehat{W}=(\mathbb{A}_1,\mathbb{A}_2,\mathbb{A}_3,\ldots);$$

it follows from Eq. (3.41) that

$$\widehat{W}_{i,j}=\widehat{W}_{i-1,j-1}+\mathbb{B}_{i-1}\widehat{W}_{1,1}\mathbb{A}_{j-1}.$$

Some matrix computations can lead to the desired result.

Corollary 3.5 (1)

$$\sum_{i=1}^{\infty}\widehat{W}_{i,1}=\mathbb{B}\,\widehat{W}_{1,1}.$$

(2)

$$\sum_{j=1}^{\infty}\widehat{W}_{1,j}=\widehat{W}_{1,1}\,\mathbb{A}.$$

Proof The proof only needs to use the two relations

$$\widehat{W}_{i,1}=\mathbb{B}_{i-1}\widehat{W}_{1,1}\qquad\text{and}\qquad \widehat{W}_{1,j}=\widehat{W}_{1,1}\mathbb{A}_{j-1}.$$

This completes the proof.
The following theorem provides a useful relation between $\{R_{0,k}\}$ and $\{D_k\}$.

Theorem 3.14 If the Markov chain $P$ is irreducible and stochastic, then the condition that $\sum_{k=1}^{\infty}kD_k$ is finite implies that $R_0$ is finite.

Proof Since $\widehat{W}_{1,1}$ is finite and nonnegative, there is always a constant $b>0$ such that $\widehat{W}_{1,1}e\le be$. Therefore, for each $k\ge1$,

$$\sum_{i=k}^{\infty}\widehat{W}_{i,1}e\le\sum_{i=1}^{\infty}\widehat{W}_{i,1}e=\mathbb{B}\,\widehat{W}_{1,1}e\le be$$

due to $\mathbb{B}e\le e$. Note that

$$R_0e=\sum_{k=1}^{\infty}R_{0,k}e\ \le\ \sum_{k=1}^{\infty}\sum_{i=k}^{\infty}D_i\widehat{W}_{i,1}e\ \le\ b\sum_{k=1}^{\infty}\sum_{i=k}^{\infty}D_ie=b\sum_{i=1}^{\infty}\sum_{k=1}^{i}D_ie=b\sum_{i=1}^{\infty}iD_ie;$$

it is easy to see that the condition that $\sum_{k=1}^{\infty}kD_k$ is finite implies that $R_0$ is finite.
In general, the two matrices $R$ and $G$ are always finite for any irreducible Markov chain. However, the matrices $\mathbb{A}=\mathbb{A}^*(1)$ and $\mathbb{B}=\mathbb{B}^*(1)$ may be infinite for some irreducible Markov chains. In this case, we need sufficient conditions under which $\mathbb{A}$ or $\mathbb{B}$ is finite. If $\mathbb{A}$ is finite, then

$$\mathbb{A}(I-R)=I;\tag{3.42}$$

if $\mathbb{B}$ is finite, then

$$(I-G)\,\mathbb{B}=I.\tag{3.43}$$

The following lemma provides conditions under which the matrix $\mathbb{A}$ or $\mathbb{B}$ is finite.

Lemma 3.6 Suppose that the Markov chain $P$ of GI/G/1 type is irreducible and stochastic, and the matrix $\sum_{k=1}^{\infty}kD_k$ is finite.

(1) $P$ is positive recurrent if and only if $\mathbb{A}<\infty$,
(2) $P$ is transient if and only if $\mathbb{B}<\infty$, and
(3) $P$ is null recurrent if and only if $\mathbb{A}=\infty$ and $\mathbb{B}=\infty$.

Proof We only prove (1), while (2) and (3) can be proved similarly.

Suppose first that $P$ is positive recurrent. It follows from Theorem 3.12 that $\lim_{k\to\infty}R^{k}=0$, and $(I-R)^{-1}=\sum_{k=0}^{\infty}R^{k}<\infty$. Thus, from Eq. (3.37) we obtain

$$\mathbb{A}=\lim_{z\to1}\mathbb{A}^*(z)=\lim_{z\to1}[I-R^*(z)]^{-1}=(I-R)^{-1}<\infty.$$

Suppose now that $\mathbb{A}<\infty$. It follows from Eq. (3.42) that

$$\sum_{k=0}^{\infty}R^{k}=\mathbb{A}<\infty,$$

which leads to $\lim_{k\to\infty}R^{k}=0$; therefore, $P$ is positive recurrent by means of Theorem 3.12. This completes the proof.

The following theorem provides a necessary and sufficient condition for the state classification of Markov chains of GI/G/1 type.

Theorem 3.15 Suppose that the Markov chain $P$ of GI/G/1 type is irreducible and stochastic, and the matrix $\sum_{k=1}^{\infty}kD_k$ is finite.

(1) $P$ is positive recurrent if and only if for each $i\ge1$, $\sum_{j=1}^{\infty}\widehat{W}_{i,j}$ is finite,
(2) $P$ is transient if and only if for each $j\ge1$, $\sum_{i=1}^{\infty}\widehat{W}_{i,j}$ is finite, and
(3) $P$ is null recurrent if and only if there exists an $i\ge1$ such that $\sum_{j=1}^{\infty}\widehat{W}_{i,j}$ is infinite, and there exists a $j\ge1$ such that $\sum_{i=1}^{\infty}\widehat{W}_{i,j}$ is infinite.

Proof We only prove (1), while (2) and (3) can be proved similarly.

Using Theorem 3.13, we obtain

$$\sum_{j=1}^{\infty}\widehat{W}_{i,j}=\sum_{j=1}^{\infty}\sum_{k=0}^{i-1}\mathbb{B}_k\widehat{W}_{1,1}\mathbb{A}_{k+j-i}=\Big(\sum_{k=0}^{i-1}\mathbb{B}_k\Big)\widehat{W}_{1,1}\,\mathbb{A};$$

hence for each $i\ge1$, $\sum_{j=1}^{\infty}\widehat{W}_{i,j}$ is finite if and only if $\mathbb{A}<\infty$. Since $\mathbb{A}(I-R)=I$, $P$ is positive recurrent if and only if $\mathbb{A}<\infty$. Hence it is easy to see the desired result.
The following theorem provides a sufficient condition under which the Markov chain $P$ of GI/G/1 type is positive recurrent.

Theorem 3.16 Suppose that $P$ is irreducible and stochastic, and the matrix $\sum_{k=1}^{\infty}kD_k$ is finite. If the matrix $A$ is strictly substochastic, then $P$ is positive recurrent.

Proof It is easy to check that if the matrix $A$ is strictly substochastic, then the matrix $(I-W)^{-1}$ is finite, and for each $i\ge1$, $\sum_{j=1}^{\infty}\widehat{W}_{i,j}$ is finite. The desired result follows from Theorems 3.15 and 3.14. This completes the proof.

The following corollary provides some important relations among the A- and B-measures and the R- and G-measures.

Corollary 3.6 For the Markov chain $W$,

(1) $\mathbb{A}<\infty$ if and only if $\theta R<\theta$,
(2) $\mathbb{B}<\infty$ if and only if $Ge<e$, and
(3) $\mathbb{A}=\infty$ and $\mathbb{B}=\infty$ if and only if $\theta R=\theta$ and $Ge=e$.

Proof We only prove (1), while (2) and (3) can be proved similarly.

We construct a Markov chain of GI/G/1 type whose transition probability matrix is given by

$$P=\begin{pmatrix} D_0 & D\\ \bar D & W\end{pmatrix},$$

where $D=(D_1,D_2,D_3,\ldots)$ with $\sum_{k=1}^{\infty}kD_k<\infty$, and $\bar D=\big(D_{-1}^{T},D_{-2}^{T},D_{-3}^{T},\ldots\big)^{T}$. Since $\sum_{k=1}^{\infty}kD_k<\infty$, it is clear from Lemma 3.6 that the Markov chain $P$ is positive recurrent if and only if $\mathbb{A}<\infty$, and if and only if $\lim_{k\to\infty}R^{k}=0$, which leads to $\theta R<\theta$. This completes the proof.
3.4 Spectral Analysis

In this section, we provide a spectral analysis for the R- and G-measures of Markov chains of GI/G/1 type. Note that spectral analysis is a standard technique used in dealing with stochastic models; e.g., see Neuts [13, 14] for Markov chains of GI/M/1 type and Markov chains of M/G/1 type, respectively.

Let $H$ be a nonnegative matrix of size $m$, and $\mu_i$ the $i$th eigenvalue of $H$ for $1\le i\le m$. We write

$$\operatorname{sp}(H)=\max\{|\mu_1|,|\mu_2|,\ldots,|\mu_m|\}.$$

The following lemma first provides a spectral property for the matrices $R$ and $G$.

Lemma 3.7 (1) Let $A$ be irreducible and stochastic. Then $\operatorname{sp}(R)\le1$ and $\operatorname{sp}(G)\le1$.
(2) Let $A$ be irreducible and strictly substochastic. Then $\operatorname{sp}(R)<1$ and $\operatorname{sp}(G)<1$.

Proof We first prove (1). Let $A$ be irreducible and stochastic. Then $\operatorname{sp}(A)=1$. It follows from Eq. (3.20) that

$$I-A=(I-R)(I-\Phi_0)(I-G)$$

and

$$\det(I-A)=\det(I-R)\det(I-\Phi_0)\det(I-G).$$

Since

$$\theta(I-A)=0,\qquad (I-A)e=0,$$

it follows from $\operatorname{sp}(A)=1$ that $\theta R\le\theta$ and $Ge\le e$; thus it is clear that $\operatorname{sp}(R)\le1$ and $\operatorname{sp}(G)\le1$.

We now prove (2). If $A$ is irreducible and strictly substochastic, then $\det(I-A)\ne0$, and hence $\det(I-R)\ne0$ and $\det(I-G)\ne0$. Again, applying the condition that $\operatorname{sp}(R)\le1$ and $\operatorname{sp}(G)\le1$, it is easy to see that $\operatorname{sp}(R)<1$ and $\operatorname{sp}(G)<1$. This completes the proof.

It is seen from (2) of Lemma 3.7 that if $A$ is strictly substochastic, then $\operatorname{sp}(R)<1$ and $\operatorname{sp}(G)<1$. Thus, in the rest of this section we always assume that $A$ is irreducible and stochastic. Let $\theta$ be the stationary probability vector of the Markov chain $A$.

The following theorem provides a necessary and sufficient condition on the state classification of the Markov chain $P$ of GI/G/1 type.

Theorem 3.17 Suppose that $P$ is irreducible and stochastic, and the matrix $\sum_{k=1}^{\infty}kD_k$ is finite.

(1) $P$ is positive recurrent if and only if $\operatorname{sp}(R)<1$. In this case, $\operatorname{sp}(G)=1$.
(2) $P$ is null recurrent if and only if $\operatorname{sp}(R)=1$ and $\operatorname{sp}(G)=1$.
(3) $P$ is transient if and only if $\operatorname{sp}(G)<1$. In this case, $\operatorname{sp}(R)=1$.

Proof We only prove (1), while (2) and (3) can be proved similarly.

If $P$ is positive recurrent, then it is seen from Lemma 3.6 that $\mathbb{A}<\infty$. Since $\mathbb{A}(I-R)=I$, we obtain $\sum_{k=0}^{\infty}R^{k}=(I-R)^{-1}=\mathbb{A}<\infty$, which leads to $\operatorname{sp}(R)<1$.

On the other hand, if $P$ is positive recurrent, then $\mathbb{B}=\infty$ by using Lemma 3.6. Suppose that $\operatorname{sp}(G)<1$. Since $(I-G)\mathbb{B}=I$, we would obtain $\mathbb{B}=(I-G)^{-1}<\infty$. This contradicts $\mathbb{B}=\infty$. Thus $\operatorname{sp}(G)=1$.

If $\operatorname{sp}(R)<1$, then the matrix $I-R$ is invertible. Since $\mathbb{A}(I-R)=I$, we obtain $\mathbb{A}=(I-R)^{-1}<\infty$. Hence $P$ is positive recurrent. This completes the proof.

The following corollary provides conditions for the state classification of the Markov chain $P$ of GI/G/1 type. The proof follows Theorem 3.17 directly.

Corollary 3.7 Suppose that $P$ is irreducible and stochastic, and the matrix $\sum_{k=1}^{\infty}kD_k$ is finite.

(1) $P$ is positive recurrent if and only if $\det(I-R)\ne0$,
(2) $P$ is null recurrent if and only if $\det(I-R)=0$ and $\det(I-G)=0$, and
(3) $P$ is transient if and only if $\det(I-G)\ne0$.

The following theorem provides important properties for the matrices $R$ and $G$, which are related to the state classification.

Theorem 3.18 Suppose that $P$ is irreducible and stochastic, and the matrix $\sum_{k=1}^{\infty}kD_k$ is finite.

(1) $P$ is positive recurrent if and only if $\theta R<\theta$. In this case, $Ge=e$.
(2) $P$ is null recurrent if and only if $\theta R=\theta$ and $Ge=e$.
(3) $P$ is transient if and only if $Ge<e$. In this case, $\theta R=\theta$.

Proof We only prove (1), while (2) and (3) can be proved similarly.

It follows from Eq. (3.20) that the RG-factorization for the repeating row is $I-A=(I-R)(I-\Phi_0)(I-G)$. Since $A$ is irreducible and stochastic, it is clear that $\theta A=\theta$ and $Ae=e$. Thus we obtain

$$\theta(I-R)(I-\Phi_0)(I-G)=0\qquad\text{and}\qquad (I-R)(I-\Phi_0)(I-G)e=0.$$

By Theorem 3.17, $P$ is positive recurrent if and only if $\operatorname{sp}(R)<1$, in which case $\operatorname{sp}(G)=1$. Thus it is clear that $\theta R<\theta$ and $Ge=e$. This completes the proof.

Corollary 3.8 Suppose that $P$ is irreducible and stochastic, and the matrix $\sum_{k=1}^{\infty}kD_k$ is finite.

(1) $P$ is positive recurrent if and only if the matrix $I-\sum_{k=0}^{\infty}\Phi_k$ is invertible. In this case, the matrix $I-\sum_{k=0}^{\infty}\Phi_{-k}$ is singular.
(2) $P$ is null recurrent if and only if the two matrices $I-\sum_{k=0}^{\infty}\Phi_k$ and $I-\sum_{k=0}^{\infty}\Phi_{-k}$ are both singular.
(3) $P$ is transient if and only if the matrix $I-\sum_{k=0}^{\infty}\Phi_{-k}$ is invertible. In this case, the matrix $I-\sum_{k=0}^{\infty}\Phi_k$ is singular.

Proof In the proof, we only need to note that

$$I-R=\Big(I-\sum_{k=0}^{\infty}\Phi_k\Big)(I-\Phi_0)^{-1}$$

and

$$I-G=(I-\Phi_0)^{-1}\Big(I-\sum_{k=0}^{\infty}\Phi_{-k}\Big);$$

thus the results are clear according to Theorem 3.18. This completes the proof.

The following theorem provides conditions for the state classification of the Markov chain $P$ of GI/G/1 type. This is the same as Chapter X.3.4 in Asmussen [2] for a necessary and sufficient condition based on the mean drift.

Theorem 3.19 Suppose that $P$ is irreducible and stochastic, and the matrix $\sum_{k=1}^{\infty}kD_k$ is finite.

(1) $P$ is positive recurrent if and only if $\theta\sum_{k=1}^{\infty}kA_{-k}e>\theta\sum_{k=1}^{\infty}kA_ke$,
(2) $P$ is null recurrent if and only if $\theta\sum_{k=1}^{\infty}kA_{-k}e=\theta\sum_{k=1}^{\infty}kA_ke$, and
(3) $P$ is transient if and only if $\theta\sum_{k=1}^{\infty}kA_{-k}e<\theta\sum_{k=1}^{\infty}kA_ke$.

Proof We only prove (1), while (2) and (3) can be proved similarly.

Taking the derivative of the RG-factorization (with $A(z)=A_0+A^*(z)+\bar A^*(z)$)

$$I-A(z)=[I-R^*(z)](I-\Phi_0)[I-G^*(z)]$$

at $z=1$, we obtain

$$\theta\sum_{k=1}^{\infty}k(A_k-A_{-k})e=\theta\Big(\sum_{k=1}^{\infty}k\Phi_k\Big)(e-Ge)-(\theta-\theta R)\Big(\sum_{k=1}^{\infty}k\Phi_{-k}\Big)e.\tag{3.44}$$
Since $P$ is irreducible, the matrix $I-\Phi_0$ is invertible. Note that

$$R=\sum_{k=1}^{\infty}\Phi_k(I-\Phi_0)^{-1}\qquad\text{and}\qquad G=(I-\Phi_0)^{-1}\sum_{k=1}^{\infty}\Phi_{-k};$$

it follows from Theorem 3.3 that there is no zero column in $\sum_{k=1}^{\infty}\Phi_k$ (thus in $\sum_{k=1}^{\infty}k\Phi_k$) and there is no zero row in $\sum_{k=1}^{\infty}\Phi_{-k}$ (thus in $\sum_{k=1}^{\infty}k\Phi_{-k}$).

Suppose firstly that $P$ is positive recurrent. It follows from Theorem 3.18 that $\theta R<\theta$ and $Ge=e$. It then follows from Eq. (3.44) that

$$\theta\sum_{k=1}^{\infty}k(A_k-A_{-k})e=-(\theta-\theta R)\Big(\sum_{k=1}^{\infty}k\Phi_{-k}\Big)e<0,$$

or

$$\theta\sum_{k=1}^{\infty}kA_ke<\theta\sum_{k=1}^{\infty}kA_{-k}e.$$

Suppose now that $\theta\sum_{k=1}^{\infty}kA_{-k}e>\theta\sum_{k=1}^{\infty}kA_ke$. It is clear that $\theta\sum_{k=1}^{\infty}k(A_k-A_{-k})e<0$. Noting that $\theta\big(\sum_{k=1}^{\infty}k\Phi_k\big)(e-Ge)\ge0$, it follows from Eq. (3.44) that $(\theta-\theta R)\big(\sum_{k=1}^{\infty}k\Phi_{-k}\big)e>0$, which leads to $\theta R<\theta$; hence $P$ is positive recurrent, and $Ge=e$. This completes the proof.

We denote by $\chi(z)$, $r(z)$ and $g(z)$ the maximal eigenvalues of the matrices $A(z)=A_0+A^*(z)+\bar A^*(z)$, $R^*(z)$ and $G^*(z)$, respectively.

Corollary 3.9 Suppose that $P$ is irreducible and stochastic, and the matrices $\sum_{k=1}^{\infty}kD_k$ and $\sum_{k=1}^{\infty}kA_{-k}$ are both finite.

(1) $P$ is positive recurrent if and only if $\chi'(1)<0$. In this case, $g'(1)<0$ and $\chi'(1)=\theta(I-R)(I-\Phi_0)e\,g'(1)$.
(2) $P$ is null recurrent if and only if $\chi'(1)=0$.
(3) $P$ is transient if and only if $\chi'(1)>0$. In this case, $r'(1)>0$ and $\chi'(1)=\theta(I-\Phi_0)(I-G)e\,r'(1)$.

Proof We only prove (1), while (2) and (3) can be proved similarly.

For $z\ge0$, we denote by $\chi(z)$, $\theta^*(z)$ and $e^*(z)$ the maximal eigenvalue of the matrix $A(z)$ and the associated left and right eigenvectors, respectively. It is clear that $\theta^*(1)=\theta$ and $e^*(1)=e$, and that

$$1-\chi(z)=\theta^*(z)[I-A(z)]e^*(z),$$

which leads to

$$\chi'(1)=\theta\sum_{k=1}^{\infty}kA_ke-\theta\sum_{k=1}^{\infty}kA_{-k}e.$$

Using Theorem 3.19, it is easy to see that $\theta\sum_{k=1}^{\infty}kA_ke<\theta\sum_{k=1}^{\infty}kA_{-k}e$, that is, $\chi'(1)<0$. It follows from Eq. (3.19) that

$$1-\chi(z)=\theta^*(z)[I-R^*(z)](I-\Phi_0)[I-G^*(z)]e^*(z).\tag{3.45}$$

Since $\theta^*(z)[I-R^*(z)](I-\Phi_0)$ is the left eigenvector of the matrix $G^*(z)$, we have

$$\theta^*(z)[I-R^*(z)](I-\Phi_0)G^*(z)=g(z)\,\theta^*(z)[I-R^*(z)](I-\Phi_0),$$

or

$$1-\chi(z)=[1-g(z)]\,\theta^*(z)[I-R^*(z)](I-\Phi_0)e^*(z).\tag{3.46}$$

When $P$ is positive recurrent, taking the derivative of Eq. (3.46) at $z=1$ and using $(I-G)e=0$, we obtain

$$\chi'(1)=\theta(I-R)(I-\Phi_0)e\,g'(1);$$

thus we get

$$g'(1)=\big[\theta(I-R)(I-\Phi_0)e\big]^{-1}\chi'(1).$$

Noting that $\theta(I-R)(I-\Phi_0)e>0$ and $\chi'(1)<0$, it is clear that $g'(1)<0$. This completes the proof.

The following theorem provides a useful relationship between the Markov chain $P$ and its dual generating chain $\widehat{P}$.

Theorem 3.20 Suppose that $P$ is irreducible and stochastic, and the matrix $\sum_{k=1}^{\infty}kD_k$ is finite.

(1) $P$ is positive recurrent if and only if $\widehat{P}$ is transient,
(2) $P$ is null recurrent if and only if $\widehat{P}$ is null recurrent, and
(3) $P$ is transient if and only if $\widehat{P}$ is positive recurrent.

Proof We only prove (1), while (2) and (3) can be proved similarly.

If $A$ is irreducible and stochastic, it is clear that $\theta$ is the stationary probability vector of $\widetilde{A}=\operatorname{diag}^{-1}(\theta)A^{T}\operatorname{diag}(\theta)$. Note that $P$ and $\widehat{P}$ have the same block-entries in both the first block-row and the first block-column, and the matrix $\sum_{k=1}^{\infty}kD_k$ is finite; thus we only need to check the mean drift condition as follows:

$$\theta\sum_{k=1}^{\infty}k\big(\widetilde{A}_{-k}-\widetilde{A}_k\big)e=\theta\operatorname{diag}^{-1}(\theta)\sum_{k=1}^{\infty}k(A_k-A_{-k})^{T}\operatorname{diag}(\theta)\,e=e^{T}\sum_{k=1}^{\infty}k(A_k-A_{-k})^{T}\theta^{T}=\theta\sum_{k=1}^{\infty}k(A_k-A_{-k})e,$$

which leads to the conclusion that $P$ is positive recurrent if and only if $\widehat{P}$ is transient. This completes the proof.

The following lemma characterizes the zeros of the two equations $\det(I-R^*(z))=0$ and $\det(I-G^*(z))=0$.

Lemma 3.8 (1) All the zeros of the equation $\det(I-R^*(z))=0$ lie in the region $|z|\ge1$.
(2) All the zeros of the equation $\det(I-G^*(z))=0$ lie in the region $|z|\le1$.

Proof We only prove (1), while (2) can be proved similarly.

We denote by $z_0$ a zero of the equation $\det(I-R^*(z))=0$ and assume $|z_0|<1$. There exists a non-zero column vector $x=(x_1,x_2,\ldots,x_m)^{T}$ such that $x=R^*(z_0)x$. Let $|x|=(|x_1|,|x_2|,\ldots,|x_m|)^{T}$. Then $|x|\le R^*(|z_0|)\,|x|$, which leads to

$$\theta|x|\ \le\ \theta R^*(|z_0|)\,|x|\ <\ \theta R\,|x|\ \le\ \theta|x|.$$

This is a contradiction. This completes the proof.

The following corollary provides a necessary and sufficient condition for the state classification of the Markov chain $P$ of GI/G/1 type by means of the root distributions of the equations $\det(I-R^*(z))=0$ and $\det(I-G^*(z))=0$.

Corollary 3.10 Suppose that $P$ is irreducible and stochastic, and the matrix $\sum_{k=1}^{\infty}kD_k$ is finite.

(1) $P$ is positive recurrent if and only if all the zeros of the equation $\det(I-R^*(z))=0$ reside outside the unit circle: $|z|>1$.
(2) $P$ is null recurrent if and only if all the zeros of the equations $\det(I-R^*(z))=0$ and $\det(I-G^*(z))=0$ reside on the unit circle $|z|=1$.
(3) $P$ is transient if and only if all the zeros of the equation $\det(I-G^*(z))=0$ reside inside the unit circle: $|z|<1$.
3
Markov Chains of GI/G/1 Type
3.5 Distribution of the Minimal Positive Root In this section, we discuss distribution of the minimal positive root of the equation: det( I R* ( z )) 0 or det( I G* ( z )) 0. We first construct some useful relations between a matrix equation and its associate spectral equation. Based on the RG-factorization for the repeated row and the state classification, we analyze the following three cases.
3.5.1 The Positive Recurrence If the Markov chain P of GI G 1 type is irreducible and positive recurrent, then it follows from Eq. (3.19) that {0 z I A det( I A* ( z ))
0} {0 z I A det( I R* ( z ))
* {0 z I A det( I G* ( z ))
0} 0}, (3.47)
since I ) 0 is invertible. Hence it follows from Eq. (3.47) and Corollary 3.10 that {1 z I A det( I A* ( z ))
0} {1 z I A det( I R* ( z ))
0}
0} {0 z İ 1 det( I G* ( z ))
0}
(3.48)
and {0 z İ 1 det( I A* ( z ))
Let r_i(z), 2 ≤ i ≤ m, be the other eigenvalues of R*(z), whose maximal eigenvalue is r(z). The following theorem provides a useful relation between the maximal-eigenvalue equations and the matrix equation det(I − R*(z)) = 0.

Theorem 3.21 Suppose that the Markov chain of GI/G/1 type is irreducible and positive recurrent.
(1) min{1 < z ≤ φ_A : det(I − R*(z)) = 0} = min{1 < z ≤ φ_A : r(z) = 1}.   (3.49)
(2) min{1 < z ≤ φ_A : det(I − A*(z)) = 0} = min{1 < z ≤ φ_A : F(z) = 1}.
(3) min{1 < z ≤ φ_A : r(z) = 1} = min{1 < z ≤ φ_A : F(z) = 1}.

Proof (1) Let

Ω_0 = {1 < z ≤ φ_A : r(z) = 1}  and  Ω = {1 < z ≤ φ_A : det(I − R*(z)) = 0}.

It is clear that Ω_0 ⊆ Ω, since det(I − R*(z)) = [1 − r(z)] ∏_{i=2}^{m} [1 − r_i(z)] for z > 1. There are two possible cases for the set Ω:

Case I: Ω is not empty. In this case, it follows from Theorem 3.23 that there exists an η such that

η = min{1 < z ≤ φ_A : det(I − R*(z)) = 0} = min Ω.

Suppose that η ∉ Ω_0, i.e., r(η) ≠ 1. Since det(I − R*(η)) = 0, there must exist an r_{i_0}(η), 2 ≤ i_0 ≤ m, such that r_{i_0}(η) = 1. Noting that r(η) is the maximal eigenvalue of R*(η), we have r(η) ≥ r_{i_0}(η) = 1; by the assumption that r(η) ≠ 1, we must have r(η) > r_{i_0}(η) = 1. Let f(z) = 1 − r(z). Then f(z) is continuous on [1, η], f(1) > 0 and f(η) < 0, hence there exists a point ξ ∈ (1, η) such that f(ξ) = 0, i.e., r(ξ) = 1. Clearly, ξ ∈ Ω. This contradicts the fact that η is the minimum of Ω. Therefore r(η) = 1, i.e., η ∈ Ω_0. Since Ω_0 ⊆ Ω, min Ω ≤ min Ω_0; since η = min Ω and η ∈ Ω_0, it follows that η = min Ω = min Ω_0, which implies that Eq. (3.49) is true.
Case II: Ω is empty. In this case, Ω_0 is also empty, since Ω_0 ⊆ Ω. Hence Eq. (3.49) is true in the sense that both sides are infinite.

(2) The proof is slightly different from that of (1). In this case, let f(z) = 1 − F(z). Since F(1) = 1, we need to find a z_0 > 1 such that F(z_0) < 1 in order to apply the argument used in (1). Since the Markov chain P of GI/G/1 type is assumed to be irreducible and positive recurrent, and φ_A > 1, it follows from Proposition 4.6 in Asmussen [1] that θ Σ_{k=1}^{∞} k (A_k − A_{−k}) e < 0, which leads to

F'(1) = θ Σ_{k=1}^{∞} k (A_k − A_{−k}) e < 0.

Hence there exists a δ > 0 small enough such that F(1 + δ) < F(1) = 1. Let z_0 = 1 + δ. Then f(z_0) > 0. The rest of the proof is similar to that of (1).

(3) It follows from Eq. (3.47) that

min{1 < z ≤ φ_A : det(I − R*(z)) = 0} = min{1 < z ≤ φ_A : det(I − A*(z)) = 0}.

The proof then follows from (1) and (2). This completes the proof.
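Theorem 3.21 reduces locating the minimal root η of det(I − R*(z)) = 0 to a one-dimensional sign-change search on f(z) = 1 − r(z). The sketch below illustrates this for a hypothetical scalar chain (m = 1) with geometric R-measure R_k = c q^k, so that r(z) = R*(z) = c q z/(1 − q z) and φ_A = 1/q; the function names and parameter values are illustrative assumptions, not taken from the text.

```python
# Bisection for the minimal positive root eta of 1 - r(z) = 0 (scalar sketch).
# Assumed model: R_k = c * q**k, hence r(z) = R*(z) = c*q*z/(1 - q*z), phi_A = 1/q.
def find_eta(c=0.3, q=0.4, tol=1e-12):
    r = lambda z: c * q * z / (1.0 - q * z)   # maximal eigenvalue of R*(z); scalar here
    f = lambda z: 1.0 - r(z)
    lo, hi = 1.0, (1.0 / q) * (1.0 - 1e-9)    # search strictly inside [1, phi_A)
    assert f(lo) > 0.0 and f(hi) < 0.0        # r(1) < 1 (positive recurrence); r blows up at phi_A
    while hi - lo > tol:                      # standard bisection on the sign change
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

eta = find_eta()   # for this example, r(z) = 1 has the closed form z = 1/(q*(1+c))
```

For a matrix-valued R*(z) one would replace r(z) by its Perron-Frobenius eigenvalue; the bisection step itself is unchanged.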
3 Markov Chains of GI/G/1 Type
3.5.2 The Null Recurrence

If the Markov chain P of GI/G/1 type is irreducible and null recurrent, then

{z = 1 : det(I − A*(z)) = 0} = {z = 1 : det(I − R*(z)) = 0} ∪ {z = 1 : det(I − G*(z)) = 0}.

Furthermore, we have

{z = 1 : F(z) = 1} = {z = 1 : r(z) = 1} ∪ {z = 1 : g(z) = 1}.
3.5.3 The Transience

If the Markov chain P of GI/G/1 type is irreducible and transient, then

{1 ≤ z ≤ φ_A : det(I − A*(z)) = 0} = {1 ≤ z ≤ φ_A : det(I − R*(z)) = 0}

and

{0 < z < 1 : det(I − A*(z)) = 0} = {0 < z < 1 : det(I − G*(z)) = 0}.

Therefore, we obtain

{1 ≤ z ≤ φ_A : F(z) = 1} = {1 ≤ z ≤ φ_A : r(z) = 1}  and  {0 < z < 1 : F(z) = 1} = {0 < z < 1 : g(z) = 1}.
3.5.4 The Minimal Positive Root

Now, we consider an irreducible and positive recurrent Markov chain of GI/G/1 type, and analyze the minimal positive root of det(I − R*(z)) = 0. To do this, we first provide an expression for the generating function Π*(z) = Σ_{k=1}^{∞} z^k π_k of the stationary probability vector π = {π_k}. Since the stationary probability vector can be expressed in terms of the R-measure as

π_k = π_0 R_{0,k} + Σ_{i=1}^{k−1} π_i R_{k−i},  k ≥ 1,

we get

Π*(z) = π_0 R_0*(z) + Π*(z) R*(z),   (3.50)

which leads to

Π*(z) = π_0 R_0*(z) [I − R*(z)]^{−1}   (3.51)
whenever both sides are defined.

Theorem 3.22 Suppose that the Markov chain of GI/G/1 type is irreducible and positive recurrent, and that the matrix A = Σ_{k=−∞}^{∞} A_k is irreducible.
(1) If the set Ω = {z : 1 < |z| < φ_A, det(I − R*(z)) = 0} is not empty, then there must exist a solution η > 1 of det(I − R*(z)) = 0 such that η ≤ |z_0| for any z_0 ∈ Ω.
(2) If the set Ω = {z : 0 < |z| < 1, det(I − G*(z)) = 0} is not empty, then there must exist a solution 0 < ξ < 1 of det(I − G*(z)) = 0 such that ξ ≥ |z_0| for any z_0 ∈ Ω.

Proof We only prove (1); (2) can be proved similarly. To prove the theorem, we construct a new Markov chain based on {A_k}:

      ( D̃_0     D̃_1     D̃_2     D̃_3    … )
      ( D_{−1}  A_0     A_1     A_2    … )
P̃ =   ( D_{−2}  A_{−1}  A_0     A_1    … ),   (3.52)
      ( D_{−3}  A_{−2}  A_{−1}  A_0    … )
      (   ⋮       ⋮       ⋮       ⋮        )

where the matrices D_{−k}, k ≥ 1, and A_l, −∞ < l < ∞, are the same as those of P given in Eq. (3.1), and the matrices D̃_k, k ≥ 0, are constructed in such a way that (a) φ_D̃ ≥ φ_A, and (b) the new Markov chain P̃ is irreducible and positive recurrent. Such a construction is always possible; for example,

D̃_k = (ω^k / k!) e^{−ω} (1/m) E,  k ≥ 0,

where E is the matrix of ones of size m and ω is a positive scalar. It is easy to check that φ_D̃ = ∞; therefore Σ_{k=1}^{∞} k D̃_k = (d/dz) D̃*(z)|_{z=1} is finite, which,
together with φ_A > 1 and the fact that the Markov chain P in Eq. (3.1) is positive recurrent, implies that P̃ is positive recurrent; see Proposition 4.6 in Asmussen [1]. Since every entry of D̃_k, k ≥ 0, is positive and the Markov chain P in Eq. (3.1) is irreducible, it is obvious that P̃ is irreducible.

Let {π̃_k} be the stationary probability vector of P̃, which can be explicitly expressed in terms of the R-measure of P̃. Let {R̃_{0,k}} and {R̃_k} be the R-measure of P̃. It is easy to see from Eq. (3.52) and Eq. (3.1) that R̃_k = R_k for all k ≥ 1. Let Π̃*(z) and R̃_0*(z) be the generating functions of {π̃_k} and {R̃_{0,k}}, respectively. It follows from Eq. (3.51) that

Π̃*(z) = π̃_0 R̃_0*(z) [I − R*(z)]^{−1} = (1 / det(I − R*(z))) π̃_0 R̃_0*(z) adj(I − R*(z)),

where adj(I − R*(z)) is the adjoint of I − R*(z). It follows from Theorem 3.8 that φ_{R̃_0} = φ_D̃ ≥ φ_A and φ_R = φ_A. Let φ_Π̃ be the radius of convergence of the vector series Π̃*(z) = Σ_{k=1}^{∞} z^k π̃_k. Then it follows from a standard result in complex analysis (see Theorem 17.13 in Markushevich [12]) that z = φ_Π̃ is a singular point of Π̃*(z). Since Ω is not empty, there exists a solution z_1 with 1 < |z_1| < φ_A to the equation det(I − R*(z)) = 0. This shows that z = z_1 is a singular point of the vector complex function π̃_0 R̃_0*(z) adj(I − R*(z)) / det(I − R*(z)). Therefore, 1 < φ_Π̃ ≤ |z_1| < φ_A, according to

φ_Π̃ = min{φ_{R̃_0}, φ_R, η}.

Suppose that det(I − R*(φ_Π̃)) ≠ 0. Then π̃_0 R̃_0*(z) adj(I − R*(z)) / det(I − R*(z)) would be analytic at z = φ_Π̃, since 1 < φ_Π̃ < min{φ_A, φ_D̃}; hence Π̃*(z) would also be analytic at z = φ_Π̃. This contradicts the fact that z = φ_Π̃ is a singular point of Π̃*(z). This completes the proof.

Now, we discuss the existence of a positive solution to det(I − R*(z)) = 0, including the case where φ_A itself can be such a solution. Let r(z) be the maximal eigenvalue of R*(z) for z ∈ [1, φ_A). We extend the definition of r(z) to z = φ_A by defining r(φ_A) = lim_{z↗φ_A} r(z). Clearly, r(z) ≥ 0 for 1 ≤ z ≤ φ_A, since R*(z) ≥ 0.
Theorem 3.23 Suppose that the Markov chain of GI/G/1 type is irreducible and positive recurrent. For the equation det(I − R*(z)) = 0:
(1) if r(φ_A) > 1, then there exists a positive solution η satisfying 1 < η < φ_A;
(2) if r(φ_A) = 1 and φ_A < ∞, then φ_A is a positive solution; and
(3) if r(φ_A) < 1, then there does not exist any positive solution.

Proof Let f(z) = 1 − r(z). It is clear that r(1) < 1, hence f(1) = 1 − r(1) > 0.
(1) If r(φ_A) > 1, there are two possible cases:
Case I: r(φ_A) = ∞. In this case, there always exists a ξ with 1 < ξ < φ_A such that 1 < r(ξ) < ∞, since r(z) is continuous for 1 < z < φ_A. Thus, f(ξ) = 1 − r(ξ) < 0. Since f(z) is continuous on [1, ξ], f(1) > 0 and f(ξ) < 0, there must exist a point η ∈ (1, ξ) such that f(η) = 0.
Case II: 1 < r(φ_A) < ∞. An analysis similar to that of Case I shows that there always exists a point η ∈ (1, φ_A) satisfying f(η) = 0.
(2) If r(φ_A) = 1 and φ_A < ∞, it is obvious that z = φ_A is a solution to the equation f(z) = 0.
(3) If r(φ_A) < 1, then since f(1) ≥ f(z) ≥ f(φ_A) > 0 for 1 ≤ z ≤ φ_A, there does not exist any positive solution to det(I − R*(z)) = 0 with z > 1. This completes the proof.

Remark 3.4 Suppose that the Markov chain of GI/G/1 type is irreducible and positive recurrent. A fourth case complements Theorem 3.23: (4) if r(φ_A) = 1 and φ_A = ∞, then it is possible that there does not exist any positive solution to det(I − R*(z)) = 0 for z > 1; for example, this occurs when r(z) is strictly increasing on (a, ∞) for some a > 1.
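The trichotomy of Theorem 3.23 can be read off numerically from the value of r(z) near z = φ_A. The following scalar sketch (m = 1) contrasts case (1) with case (3) using two hypothetical R-measures; the coefficient choices are illustrative assumptions only.

```python
import math

q = 0.4                                        # phi_A = 1/q = 2.5 in both examples
def r_at(z, term, terms):
    """Partial sum of the scalar series r(z) = R*(z) = sum_{k>=1} R_k z^k."""
    return sum(term(k, z) for k in range(1, terms + 1))

# Case (1): R_k = 0.3*q**k, so R*(z) diverges as z -> phi_A and r(phi_A) = inf > 1.
geo = lambda k, z: 0.3 * (q * z)**k            # written as (q*z)**k for stability
# Case (3): R_k = 0.3*q**k / k**2, so r(phi_A) = 0.3*pi**2/6 < 1: no positive root.
poly = lambda k, z: 0.3 * (q * z)**k / k**2

r_geo = r_at(1.0 / q, geo, terms=2000)         # grows without bound as terms increase
r_poly = r_at(1.0 / q, poly, terms=200000)     # converges to 0.3 * pi**2 / 6 < 1
```

In the first case a root η of r(z) = 1 exists strictly below φ_A; in the second case there is none, matching parts (1) and (3) of the theorem.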
3.6 Continuous-time Chains

In this section, we briefly list some crucial results for continuous-time Markov chains of GI/G/1 type, as needed in practical applications. Consider a continuous-time Markov chain of GI/G/1 type whose transition rate matrix is given by

      ( B_0     B_1     B_2     B_3    … )
      ( B_{−1}  C_0     C_1     C_2    … )
Q =   ( B_{−2}  C_{−1}  C_0     C_1    … ),   (3.53)
      ( B_{−3}  C_{−2}  C_{−1}  C_0    … )
      (   ⋮       ⋮       ⋮       ⋮        )

where the sizes of the matrices B_0, B_i, B_{−i} for i ≥ 1 and C_j for −∞ < j < ∞ are m_0 × m_0, m_0 × m, m × m_0 and m × m, respectively.

Let W_n be the southeast corner of Q beginning from level n. Then it is clear that W_n = W for all n ≥ 1. For n ≥ 1 and i, j = 1, 2, 3, …, n, the censored blocks are level-shift invariant:

Q^{[≤n]}_{n−i, n−j} = Q^{[≤(n+1)]}_{n+1−i, n+1−j} = Q^{[≤(n+2)]}_{n+2−i, n+2−j} = ⋯.
We can therefore define, for 1 ≤ i, j ≤ n,

Φ_0 = Q^{[≤n]}_{n,n},   Φ_i = Q^{[≤n]}_{n−i,n},   Φ_{−j} = Q^{[≤n]}_{n,n−j}.

It is easy to see that the (r, s)th entry of Φ_i is the transition rate of the censored chain Q^{[≤n]} from state (n−i, r) to state (n, s), while the (r, s)th entry of Φ_{−j} is the transition rate of the censored chain Q^{[≤n]} from state (n, r) to state (n−j, s). The R- and G-measures can be given in terms of the matrices Φ_i for −∞ < i < ∞ as follows:

R_i = Φ_i (−Φ_0)^{−1},  i ≥ 1,

and

G_j = (−Φ_0)^{−1} Φ_{−j},  j ≥ 1.

At the same time, we can provide expressions for the matrices R_{0,j} and G_{i,0} for i, j ≥ 1 as follows:

R_{0,j} = Φ_{0,j} (−Φ_0)^{−1}

and

G_{i,0} = (−Φ_0)^{−1} Φ_{i,0}.

The censored chain of Q to level 0 is given by

Q^{[0]} = Ψ_0 = B_0 + Σ_{k=1}^{∞} R_{0,k} (−Φ_0) G_{k,0}.
For the continuous-time Markov chain of GI/G/1 type, the UL-type RG-factorization is given by

Q = (I − R_U) Φ_D (I − G_L),

where

        ( 0  R_{0,1}  R_{0,2}  R_{0,3}  … )
        (    0        R_1      R_2      … )
R_U =   (             0        R_1      … ),
        (                      0        … )
        (                               ⋱ )

Φ_D = diag(Ψ_0, Φ_0, Φ_0, Φ_0, …)

and

        ( 0                           )
        ( G_{1,0}  0                  )
G_L =   ( G_{2,0}  G_1  0             ).
        ( G_{3,0}  G_2  G_1  0        )
        (   ⋮       ⋮    ⋮        ⋱   )
The RG-factorization for the repeated row is based on the z-transformation of the repeating blocks. Let C*(z) = Σ_{k=−∞}^{∞} z^k C_k. Then

−C*(z) = [I − R*(z)] (−Φ_0) [I − G*(z)].
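As a quick sanity check of this factorization, consider a hypothetical scalar case (m = 1): an M/M/1-type generator with C_1 = λ, C_0 = −(λ + μ), C_{−1} = μ, for which the standard quantities are R = λ/μ, G = 1 and Φ_0 = −μ. The values below are assumptions for illustration, not taken from the text.

```python
# Scalar (m = 1) check of -C*(z) = [1 - R*(z)] (-Phi_0) [1 - G*(z)]
# with R*(z) = R*z and G*(z) = G/z, for an assumed M/M/1-type generator.
lam, mu = 1.0, 2.0
R, G, phi0 = lam / mu, 1.0, -mu           # known M/M/1 quantities (assumed here)

def C_star(z):
    return lam * z - (lam + mu) + mu / z  # C*(z) = lam*z + C_0 + mu/z

def factored(z):
    return (1.0 - R * z) * (-phi0) * (1.0 - G / z)

# both sides agree at any test point z != 0, e.g. z = 1.7
```

Expanding the product by hand gives (μ − λz)(1 − 1/z) = −(λz − (λ+μ) + μ/z), which is exactly −C*(z).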
We now consider conditions for the state classification. Suppose that Q and C = Σ_{k=−∞}^{∞} C_k are irreducible, Qe = Ce = 0, and the matrix Σ_{k=1}^{∞} k B_k is finite. Let θ be the stationary probability vector of the Markov chain C. Then
(1) Q is positive recurrent if and only if θ Σ_{k=1}^{∞} k C_{−k} e > θ Σ_{k=1}^{∞} k C_k e;
(2) Q is null recurrent if and only if θ Σ_{k=1}^{∞} k C_{−k} e = θ Σ_{k=1}^{∞} k C_k e;
(3) Q is transient if and only if θ Σ_{k=1}^{∞} k C_{−k} e < θ Σ_{k=1}^{∞} k C_k e.

On the other hand, we have the following useful conditions for the state classification:
(1) Q is positive recurrent if and only if sp(R) < 1; in this case, Ge = e.
(2) Q is null recurrent if and only if sp(R) = 1 and Ge = e.
(3) Q is transient if and only if Ge ≠ e; in this case, sp(R) = 1.

When the Markov chain Q is positive recurrent, the stationary probability vector π = {π_k} can be expressed in terms of the R-measure as

π_k = π_0 R_{0,k} + Σ_{i=1}^{k−1} π_i R_{k−i},  k ≥ 1,

and

Π*(z) = π_0 R_0*(z) [I − R*(z)]^{−1}.
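The mean-drift conditions above are easy to evaluate for a small example. The sketch below uses a hypothetical two-phase chain in which only C_{−1}, C_0 and C_1 are nonzero (a QBD special case), so that θ solves a 2 × 2 system in closed form; all rate values are invented for illustration.

```python
# Mean-drift state classification for an assumed two-phase CTMC (QBD special case).
C_up   = [[0.2, 0.1], [0.0, 0.3]]    # C_1   (one level up)
C_down = [[0.5, 0.0], [0.2, 0.4]]    # C_{-1} (one level down)
C_mid  = [[-1.1, 0.3], [0.1, -1.0]]  # C_0   (diagonal balances each row)

# C = C_{-1} + C_0 + C_1 must be a generator: every row sums to zero
C = [[C_down[i][j] + C_mid[i][j] + C_up[i][j] for j in range(2)] for i in range(2)]
assert all(abs(sum(row)) < 1e-12 for row in C)

# stationary vector of the 2x2 generator [[-a, a], [b, -b]] is (b, a)/(a + b)
a, b = C[0][1], C[1][0]
theta = [b / (a + b), a / (a + b)]

up   = sum(theta[i] * sum(C_up[i]) for i in range(2))    # theta (sum_k k C_k) e, only k = 1
down = sum(theta[i] * sum(C_down[i]) for i in range(2))  # theta (sum_k k C_{-k}) e
state = ("positive recurrent" if down > up
         else "null recurrent" if down == up else "transient")
```

Here the mean downward drift exceeds the upward drift, so the chain is classified as positive recurrent.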
3.7 Notes in the Literature

Grassmann and Heyman [4] first analyzed Markov chains of GI/G/1 type by means of the censoring technique, which generalized Markov chains of GI/M/1 type
in Neuts [13] and Markov chains of M/G/1 type in Neuts [14]. Zhao, Li and Braun [20] and Zhao [17] gave a detailed discussion of the R- and G-measures and the A- and B-measures for Markov chains of GI/G/1 type. Ramaswami [15, 16] provided a duality theorem between Markov chains of GI/M/1 type and Markov chains of M/G/1 type, and Asmussen and Ramaswami [3] gave probabilistic interpretations of the duality theorem. Zhao, Li and Alfa [18] extended the duality theorem to Markov chains of GI/G/1 type. Zhao, Li and Braun [20, 21] and Zhao, Li and Alfa [18] gave a detailed discussion of spectral analysis for the R- and G-measures, which leads to conditions for the state classification of Markov chains of GI/G/1 type. Spectral analysis for the R- and G-measures plays a key role in the study of stochastic models. Important examples include Chapter 1 of Neuts [13] for spectral analysis of the matrix R in Markov chains of GI/M/1 type, and Chapters 2 and 3 of Neuts [14] for spectral analysis of the matrix G in Markov chains of M/G/1 type. Zhao, Li and Braun [19] provided the RG-factorization for the repeated blocks in a Markov chain of GI/G/1 type. Li and Cao [6] provided the UL-type RG-factorization for any irreducible QBD process with either finitely many or infinitely many levels. Li and Zhao [8, 9] provided the UL-type RG-factorization for any irreducible Markov chain, while Li and Liu [7] gave the LU-type RG-factorization for any irreducible Markov chain. In this chapter, we mainly refer to Zhao, Li and Braun [19-21], Zhao, Li and Alfa [18], Zhao [17], Li and Zhao [8-11], Li and Cao [6] and Li and Liu [7].
Problems

3.1 If the Markov chain P given in Eq. (3.1) is irreducible and stochastic, and the matrix A is substochastic, prove that the Markov chain P is positive recurrent.
3.2 Prove that if the Markov chain P given in Eq. (3.1) is irreducible, then no row of the matrix G = Σ_{k=1}^{∞} G_k is zero.
3.3 Prove that φ_{G_0} = φ_D and φ_G = φ_A.
3.4 For the Markov chain P given in Eq. (3.1), compare the expression of the R-measure with that of the A-measure by means of the partition

P = ( D_0  U )
    ( V    W ).

Similarly, compare the G-measure with the B-measure.
3.5 Suppose that the Markov chains P and A are irreducible and stochastic, and the matrix Σ_{k=1}^{∞} k D_k is finite. Prove that
(1) P is positive recurrent if and only if sp(R) < 1;
(2) P is null recurrent if and only if sp(R) = 1 and Ge = e;
(3) P is transient if and only if Ge ≠ e.
3.6 For the minimal positive root η of the equation det(I − R*(z)) = 0, provide examples illustrating the four cases of η given in Section 3.5.
3.7 For an irreducible QBD process with repeated blocks A_1, A_0 and A_{−1} of size m, assume that the matrix A = A_{−1} + A_0 + A_1 is stochastic. Prove that
(1) (I − Φ_0)^{−1} ≥ 0;
(2) R = A_1 (I − Φ_0)^{−1} and G = (I − Φ_0)^{−1} A_{−1};
(3) use the structures of the matrices A_{−1}, A_0 and A_1 to analyze the structures of the two matrices R and G, respectively.
3.8 For an irreducible QBD process with repeated blocks A_1, A_0 and A_{−1} of size m, if the matrix A = A_{−1} + A_0 + A_1 is irreducible and d-periodic, discuss the periods of the two matrices R and G.
3.9 For the MAP/G/1 queue, compute the A- and B-measures {A_{0,k}} and {B_{k,0}}, respectively.
3.10 For the BMAP/PH^[X]/1 queue, provide conditions for the state classification and an expression for the stationary queue-length distribution.
3.11 For the PH/PH/1 queue, write out the R-, G-, A- and B-measures, and establish useful relations among them.
References

1. Asmussen S. (1987). Applied Probability and Queues, John Wiley & Sons
2. Asmussen S. (2003). Applied Probability and Queues (Second Edition), Springer
3. Asmussen S. and V. Ramaswami (1990). Probabilistic interpretations of some duality results for the matrix paradigms in queueing theory. Stochastic Models 6: 715-733
4. Grassmann W.K. and D.P. Heyman (1990). Equilibrium distribution of block-structured Markov chains with repeating rows. Journal of Applied Probability 27: 557-576
5. Kemeny J.G., J.L. Snell and A.W. Knapp (1976). Denumerable Markov Chains (Second Edition), Springer-Verlag: New York
6. Li Q.L. and J. Cao (2004). Two types of RG-factorizations of quasi-birth-and-death processes and their applications to stochastic integral functionals. Stochastic Models 20: 299-340
7. Li Q.L. and L.M. Liu (2004). An algorithmic approach on sensitivity analysis of perturbed QBD processes. Queueing Systems 48: 365-397
8. Li Q.L. and Y.Q. Zhao (2002). The RG-factorizations in block-structured Markov renewal processes with applications. Technical Report 381, Laboratory for Research in Statistics and Probability, Carleton University and University of Ottawa, Canada, 1-40
9. Li Q.L. and Y.Q. Zhao (2004). The RG-factorization in block-structured Markov renewal processes with applications. In Observation, Theory and Modeling of Atmospheric Variability, X. Zhu (ed), World Scientific, 545-568
10. Li Q.L. and Y.Q. Zhao (2005). Light-tailed asymptotics of stationary probability vectors of Markov chains of GI/G/1 type. Advances in Applied Probability 37: 1075-1093
11. Li Q.L. and Y.Q. Zhao (2005). Heavy-tailed asymptotics of stationary probability vectors of Markov chains of GI/G/1 type. Advances in Applied Probability 37: 482-509
12. Markushevich A.I. (1985). Theory of Functions of a Complex Variable, Chelsea Publishing Company: New York
13. Neuts M.F. (1981). Matrix-Geometric Solutions in Stochastic Models: An Algorithmic Approach, The Johns Hopkins University Press: Baltimore
14. Neuts M.F. (1989). Structured Stochastic Matrices of M/G/1 Type and Their Applications, Marcel Dekker: New York
15. Ramaswami V. (1990). A duality theorem for the matrix paradigms in queueing theory. Stochastic Models 6: 151-161
16. Ramaswami V. (1990). From the matrix-geometric to the matrix-exponential. Queueing Systems 6: 229-260
17. Zhao Y.Q. (2000). Censoring technique in studying block-structured Markov chains. In Advances in Algorithmic Methods for Stochastic Models, G. Latouche and P.G. Taylor (eds), Notable Publications Inc., NJ, 417-433
18. Zhao Y.Q., W. Li and A.S. Alfa (1999). Duality results for block-structured transition matrices. Journal of Applied Probability 36: 1045-1057
19. Zhao Y.Q., W. Li and W.J. Braun (1997). On a decomposition for infinite transition matrices. Queueing Systems 27: 127-130
20. Zhao Y.Q., W. Li and W.J. Braun (1998). Infinite block-structured transition matrices and their properties. Advances in Applied Probability 30: 365-384
21. Zhao Y.Q., W. Li and W.J. Braun (2003). Censoring, factorizations and spectral analysis for transition matrices with block-repeating entries. Methodology and Computing in Applied Probability 5: 35-58
4 Asymptotic Analysis
Quan-Lin Li
Department of Industrial Engineering, Tsinghua University, Beijing 100084, P.R. China
E-mail: [email protected]

Abstract In this chapter, we consider asymptotic behavior of the stationary probability vector of any ergodic Markov chain of GI/G/1 type, and provide conditions under which the stationary probability vector is either light-tailed or heavy-tailed by means of the RG-factorization. At the same time, we provide expressions for both the light tail and the heavy tail. Note that the conditions and expressions can be completely determined by the repeating row and the boundary row.

Keywords stochastic model, Markov chain of GI/G/1 type, RG-factorization, asymptotic analysis, light tail, geometric tail, semi-geometric tail, heavy tail, long tail, subexponential, regularly varying.
This chapter considers the asymptotic behavior of the stationary probability vector π = (π_0, π_1, π_2, …), or {π_k}, of any ergodic Markov chain of GI/G/1 type given in Eq. (3.1), including the heavy tail and the light tail. We indicate that this asymptotic behavior can be completely determined by the repeating row, through the RG-factorization, and by the boundary row, through the Wiener-Hopf equations.

This chapter is organized as follows. Section 4.1 derives a necessary and sufficient condition under which the stationary probability vector of the Markov chain of GI/G/1 type is either light-tailed or heavy-tailed. Section 4.2 discusses three singularity classes of the stationary probability vector, where the first two classes are light-tailed while the third one is heavy-tailed. For the light tail, Sections 4.3 and 4.4 provide two classes of explicit expressions: the geometric tail and the semi-geometric tail. For the heavy tail, Section 4.5 derives long-tailed asymptotics of the matrix sequence {R_k} in terms of the RG-factorization for the repeating matrix sequence. Section 4.6 gives a detailed analysis of subexponential asymptotics of the stationary probability vector. Finally, Section 4.7 provides some notes on the references related to the results of this chapter. A brief introduction to the light and heavy tails of matrix sequences is given in Appendix B.
4.1 A Necessary and Sufficient Condition
In this section, we provide a necessary and sufficient condition under which the stationary probability vector {π_k} is either light-tailed or heavy-tailed. As expected, the condition depends on the asymptotic behavior of both the repeating blocks and the boundary blocks.

We recall the R-measure expressions of the stationary probability vector {π_k}:

π_0 = τ x_0,   π_k = π_0 R_{0,k} + Σ_{i=1}^{k−1} π_i R_{k−i},  k ≥ 1,

and

Π*(z) = π_0 R_0*(z) + Π*(z) R*(z),  or  Π*(z) = π_0 R_0*(z) [I − R*(z)]^{−1}.   (4.1)
Obviously, the two radii of convergence φ_{R_0} and φ_R determine the tail classification of the stationary probability vector {π_k} and the associated asymptotic expressions. Note that the Markov chain P of GI/G/1 type is determined by the two matrix sequences {A_k} and {D_k}; thus it is necessary to relate the radii φ_{R_0} and φ_R to the radii φ_D and φ_A, respectively. The following lemma provides useful asymptotic relations between {A_k} and {R_k}, and between {D_k} and {R_{0,k}}. Note that {A_k, k ≥ 1} is light-tailed if and only if φ_A > 1, and {D_k, k ≥ 1} is light-tailed if and only if φ_D > 1.

Lemma 4.1 (1) {R_k} is light-tailed if and only if φ_A > 1.
(2) {R_{0,k}} is light-tailed if and only if φ_D > 1.

Proof We only prove (1); (2) can be proved similarly by means of φ_{R_0} = φ_D from Theorem 3.8. It is easy to check from Definition B.1 in Appendix B that a nonnegative matrix sequence {C_k} of size m × n is light-tailed if and only if φ_C > 1, where φ_C is the radius of convergence of Σ_{k=1}^{∞} z^k C_k. Thus, the conclusion of this lemma follows from φ_R = φ_A by Theorem 3.8. This completes the proof.
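Lemma 4.1 turns tail classification into a radius-of-convergence test: a nonnegative sequence {a_k} is light-tailed exactly when Σ_k (1 + ε)^k a_k is finite for some ε > 0. The following is a scalar numerical sketch with two hypothetical sequences; the names and cutoffs are illustrative assumptions.

```python
def weighted_sum(term, eps, terms=5000):
    """Partial sum of sum_{k>=1} (1+eps)^k a_k, the light-tail test of Lemma 4.1."""
    return sum(term(k, 1.0 + eps) for k in range(1, terms + 1))

light = lambda k, z: (0.5 * z)**k     # a_k = 0.5**k: radius of convergence 2 > 1
heavy = lambda k, z: z**k / k**3      # a_k = 1/k**3: radius of convergence exactly 1

s_light = weighted_sum(light, eps=0.5)   # converges: sum of (0.75)**k equals 3
s_heavy = weighted_sum(heavy, eps=0.01)  # partial sums blow up for every eps > 0
```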
Let φ_Π be the radius of convergence of Π*(z). Eq. (4.1) implies that φ_Π ≤ min{φ_{R_0}, φ_R}. Since φ_Π equals the radius of convergence of the power-series expansion of the function on the right-hand side, Eq. (4.1) shows that φ_Π depends on φ_{R_0}, φ_R and the roots of the equation det(I − R*(z)) = 0. If det(I − R*(z)) = 0 has a solution z with |z| > 1, then, by the continuity of an implicit function, there exists a solution z_0 such that η = |z_0| = min{|z| : det(I − R*(z)) = 0} > 1, since det(I − R*(1)) ≠ 0 by Corollary 3.7. Thus, we obtain

φ_Π = min{φ_{R_0}, φ_R, η}.   (4.2)
The following theorem provides a necessary and sufficient condition under which the stationary probability vector {π_k} is light-tailed.

Theorem 4.1 Suppose that the Markov chain of GI/G/1 type is irreducible and positive recurrent. Then {π_k} is light-tailed if and only if both {D_k} and {A_k} are light-tailed (that is, min{φ_A, φ_D} > 1).

Proof Suppose first that both {D_k} and {A_k} are light-tailed, so that min{φ_A, φ_D} > 1. Note that if det(I − R*(z)) = 0 has a solution, then it follows from Theorems 3.22 and 3.23 and Remark 3.4 that η = |z_0| = min{|z| : 1 < |z| ≤ φ_A, det(I − R*(z)) = 0} satisfies either 1 < η < ∞ or η = ∞. Therefore, using min{φ_A, φ_D} > 1 and 1 < η ≤ ∞, we obtain φ_Π = min{φ_{R_0}, φ_R, η} > 1, hence {π_k} is light-tailed.

Now, suppose that {π_k} is light-tailed, and assume that at least one of the two matrix sequences {D_k} and {A_k} is heavy-tailed, say {A_k}. Then φ_A = 1, which would lead to φ_Π = min{φ_{R_0}, φ_R, η} = 1, so {π_k} would be heavy-tailed. This contradicts the assumption that {π_k} is light-tailed. This completes the proof.

In what follows, we provide a necessary and sufficient condition under which the stationary probability vector {π_k} is heavy-tailed. To do this, we first need to introduce two types of convolutions of nonnegative matrix sequences.
For a sequence of nonnegative scalars {g_n} with Σ_{n=0}^{∞} g_n < ∞, two associated functions are defined as

g_{≤x} = Σ_{0 ≤ k ≤ x} g_k  and  g_{>x} = Σ_{k > x} g_k

for an arbitrary real number x ≥ 0. Specifically, for an integer n ≥ 0, g_{≤n} = Σ_{k=0}^{n} g_k and g_{>n} = Σ_{k=n+1}^{∞} g_k. For convenience, we also write g_{>n} as g_{≥n+1}. For the real function g_{≤x} associated with the sequence {g_n}, the tail of g_{≤x} is defined and expressed as \bar{g}_{≤x} = g_∞ − g_{≤x} = g_{>x} for x ≥ 0. Specifically, for an integer n ≥ 0, \bar{g}_{≤n} = g_{≥n+1}. It is clear that if {g_n} is a probability sequence, then g_{≤x} is its distribution function and \bar{g}_{≤x} is the tail of this distribution function.

In terms of the Riemann-Stieltjes integral, the convolution of two functions F(x) and G(x) is defined as

F(x) * G(x) = ∫_0^x F(x − y) dG(y).   (4.3)
We denote by [x] the integer part of a real number x. For two sequences {c_n} and {d_n}, taking F(x) = c_{≤x} and G(x) = d_{≤x} in Eq. (4.3) gives

c_{≤x} * d_{≤x} = ∫_0^x c_{≤x−y} d(d_{≤y}) = Σ_{k=0}^{[x]} ( Σ_{i=0}^{[x]−k} c_i ) d_k = Σ_{k=0}^{[x]} c_{≤[x]−k} d_k.

Specifically, for an integer n ≥ 0,

c_{≤n} * d_{≤n} = Σ_{k=0}^{n} c_{≤n−k} d_k,   (4.4)
which is referred to as the convolution associated with the two sequences {c_n} and {d_n}. Furthermore, for a sequence {c_n} we define c^{r*}_{≤n} = c^{(r−1)*}_{≤n} * c_{≤n} for r ≥ 2, with c^{1*}_{≤n} = c_{≤n}. It should be noted that the usual convolution of two sequences {c_n} and {d_n} is denoted by c_n ⊛ d_n and defined as

c_n ⊛ d_n = Σ_{k=0}^{n} c_{n−k} d_k.   (4.5)
We further define c^{r⊛}_n = c^{(r−1)⊛}_n ⊛ c_n for r ≥ 2, with c^{1⊛}_n = c_n. According to Eq. (4.4) and Eq. (4.5), it is worthwhile to note a useful relationship between the usual convolution and the associated convolution of sequences:

c_{≤n} * d_{≤n} = Σ_{k=0}^{n} c_{≤n−k} d_k = c_{≤n} ⊛ d_n   (4.6)

and

c_n ⊛ d_n = c_{≤n} * d_{≤n} − c_{≤n−1} * d_{≤n−1}.

Also, it is clear from Eq. (4.6) that

\overline{c_{≤n} * d_{≤n}} = Σ_{k=n+1}^{∞} c_k ⊛ d_k.   (4.7)
Remark 4.1 The above two convolutions can be extended to sequences {c_n : n = 0, ±1, ±2, …} and {d_n : n = 0, ±1, ±2, …} by

c_{≤n} * d_{≤n} = Σ_{i+j=n} c_{≤i} d_j  and  c_n ⊛ d_n = Σ_{i+j=n} c_i d_j,

respectively. We then obtain that, for an arbitrary integer n,

c_{≤n} * d_{≤n} = Σ_{k=−∞}^{n} c_k ⊛ d_k

and

\overline{c_{≤n} * d_{≤n}} = Σ_{k=n+1}^{∞} c_k ⊛ d_k.
For a sequence {c_n, n ≥ 0}, if we set c_{−n} = 0 for all n ≥ 1, then

c^{2*}_{≤n} = Σ_{k=0}^{n} c_{≤k} c_{n−k} = Σ_{k=0}^{∞} c_{≤k} c_{n−k}.

Specifically, if {c_n, n ≥ 0} is a probability sequence, simple computations lead to

\overline{c^{2*}_{≤n}} = 1 − c^{2*}_{≤n} = Σ_{k=0}^{∞} c_k [1 − c_{≤n−k}] = Σ_{k=0}^{∞} c_k \bar{c}_{≤n−k}.   (4.8)
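The identities (4.4)-(4.7) are easy to verify numerically. The sketch below uses two hypothetical nonnegative sequences, treated as zero beyond their listed terms; `usual` implements the ordinary convolution (4.5) and `assoc` the associated convolution (4.4).

```python
# Numerical check of the convolution identities (4.4)-(4.7) for two short sequences.
c = [0.4, 0.3, 0.2, 0.1]
d = [0.5, 0.25, 0.15, 0.1]

def usual(c, d, n):
    """(4.5): the usual convolution term (c (*) d)_n."""
    return sum(c[n - k] * d[k] for k in range(n + 1)
               if 0 <= n - k < len(c) and k < len(d))

def partial(c, n):
    """c_{<=n}; the sequence is zero beyond its listed terms."""
    return sum(c[: n + 1])

def assoc(c, d, n):
    """(4.4): the associated convolution c_{<=n} * d_{<=n}."""
    return sum(partial(c, n - k) * d[k] for k in range(min(n, len(d) - 1) + 1))

n = 4
lhs = assoc(c, d, n)
rhs = sum(usual(c, d, k) for k in range(n + 1))  # running sum of the usual convolution
# (4.6): lhs == rhs; (4.7): sum(c)*sum(d) - lhs equals the tail sum of (c (*) d)_k
```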
Now, we analyze the heavy-tailed asymptotics of the stationary probability vector {π_k}. Since the Markov chain is positive recurrent, Corollary 3.10 shows that all solutions to the equation det(I − R*(z)) = 0, if any, reside outside the unit circle, |z| > 1. This means that I − R*(z) is invertible for all |z| ≤ 1. Therefore,

Π*(z) = π_0 R_0*(z) [I − R*(z)]^{−1},   (4.9)

which implies

π_k = π_0 R_{0,k} ⊛ Σ_{n=0}^{∞} R^{n⊛}_k,   (4.10)

where R^{n⊛}_k denotes the kth term of the n-fold usual convolution of {R_k} with itself.
Therefore, the tail satisfies

\bar{π}_{≤k} = Σ_{l=k+1}^{∞} π_0 R_{0,l} ⊛ Σ_{n=0}^{∞} R^{n⊛}_l.   (4.11)
The following lemma provides an expression for the tail of the stationary probability vector {π_k}.

Lemma 4.2 For all k ≥ 1,

\bar{π}_{≤k} = π_0 \overline{R_{0,≤k} * Σ_{n=0}^{∞} R^{n⊛}_{≤k}},   (4.12)

where R_{0,≤k} = Σ_{l=1}^{k} R_{0,l}, R^{n⊛}_{≤k} = Σ_{l=1}^{k} R^{n⊛}_l and

\overline{R_{0,≤k} * Σ_{n=0}^{∞} R^{n⊛}_{≤k}} = R_0 (I − R)^{−1} − R_{0,≤k} * Σ_{n=0}^{∞} R^{n⊛}_{≤k}.

Proof Note that

Σ_{l=1}^{k} π_0 R_{0,l} ⊛ Σ_{n=0}^{∞} R^{n⊛}_l + Σ_{l=k+1}^{∞} π_0 R_{0,l} ⊛ Σ_{n=0}^{∞} R^{n⊛}_l = π_0 R_0 (I − R)^{−1};

it follows from Eq. (4.10) and Eq. (4.7) that

\bar{π}_{≤k} = π_0 R_0 (I − R)^{−1} − Σ_{l=1}^{k} π_0 R_{0,l} ⊛ Σ_{n=0}^{∞} R^{n⊛}_l
            = π_0 [ R_0 (I − R)^{−1} − R_{0,≤k} * Σ_{n=0}^{∞} R^{n⊛}_{≤k} ]
            = π_0 \overline{R_{0,≤k} * Σ_{n=0}^{∞} R^{n⊛}_{≤k}}.
This completes the proof.

The following lemma provides a useful heavy-tailed property of the matrix sequences {A_k} and {D_k}.

Lemma 4.3 (1) If φ_A = 1, then the matrix sequence {R_k} is heavy-tailed.
(2) If φ_D = 1, then the matrix sequence {R_{0,k}} is heavy-tailed.

Proof We only prove (1); (2) can be proved similarly.
Since A_k ≥ 0 for each integer k and A = Σ_{k=−∞}^{∞} A_k is stochastic, we obtain

φ_A = sup{ z ≥ 1 : Σ_{k=1}^{∞} z^k A_k < ∞ } ≥ 1.

If φ_A = 1, then Σ_{k=1}^{∞} (1 + δ)^k A_k is infinite for any δ > 0. Hence, there exists at least one pair (i_0, j_0) such that

Σ_{k=1}^{∞} e^{k ln(1+δ)} a_k(i_0, j_0) = Σ_{k=1}^{∞} (1 + δ)^k a_k(i_0, j_0) = ∞.

Let ε = ln(1 + δ). Then ε > 0 due to δ > 0. Therefore, it is clear from (1) of Definition B.3 that {A_k} is heavy-tailed. Note that φ_R = φ_A = 1; thus {R_k} is heavy-tailed. This completes the proof.

The following theorem provides a necessary and sufficient condition under which the stationary probability vector {π_k} is heavy-tailed.
Theorem 4.2 If the Markov chain of GI/G/1 type is positive recurrent, then the stationary probability vector {π_k} is heavy-tailed if and only if min{φ_A, φ_D} = 1.

Proof We first prove the necessity of the condition. Suppose that min{φ_A, φ_D} > 1, so that φ_A > 1 and φ_D > 1. It would then follow from Eq. (4.1), Π*(z) = π_0 R_0*(z)[I − R*(z)]^{−1}, that φ_Π = min{φ_{R_0}, φ_R, η} > 1; hence {π_k} would be light-tailed. This contradicts the assumption that {π_k} is heavy-tailed.

We now prove the sufficiency of the condition. Note that φ_A ≥ 1 and φ_D ≥ 1, so the assumption min{φ_A, φ_D} = 1 implies φ_A = 1 or φ_D = 1.

Case I: φ_D = 1. In this case, since

Π*(z) = π_0 R_0*(z)[I − R*(z)]^{−1} ≥ π_0 R_0*(z),

we have π_k ≥ π_0 R_{0,k} for k ≥ 1. Since the Markov chain of GI/G/1 type is irreducible and positive recurrent, the censored chain Ψ_0 to level 0 is irreducible and positive recurrent, which leads to π_0 > 0, because π_0 = τ x_0 with τ > 0 and x_0 > 0. Under the assumption φ_D = 1, {R_{0,k}} is heavy-tailed according to Lemma 4.3. Thus, there exists at least one pair (i, j) such that the sequence {r_{0,k}(i, j)} is heavy-tailed, where r_{0,k}(i, j) is the (i, j)th entry of the matrix R_{0,k} for each k ≥ 1. It is clear that π_k ≥ π_0 R_{0,k} implies

π_k ≥ (0, …, 0, π_0(i) r_{0,k}(i, j), 0, …, 0),

with the nonzero entry in the jth position (j − 1 zeros before it and m − j zeros after it), where π_0(i) is the ith entry of the positive row vector π_0. Therefore, {π_k} is heavy-tailed.

Case II: φ_D > 1 and φ_A = 1. In this case,

Π*(z) = π_0 R_0*(z)[I − R*(z)]^{−1} ≥ π_0 R_0*(z) R*(z),

which implies

π_k ≥ π_0 R_{0,k} ⊛ R_k  for all k ≥ 1.   (4.13)

Under the assumption φ_A = 1, {R_k} is heavy-tailed according to Lemma 4.3. Thus, there exists at least one pair (i_0, j_0) such that {r_k(i_0, j_0)} is heavy-tailed. The assumption that the Markov chain is irreducible and positive recurrent leads to π_0 > 0, and Theorem 3.3 leads to π_0 R_0 > 0, where R_0 = Σ_{k=1}^{∞} R_{0,k}. Therefore, there always exist an l_0 ≥ 1 and an index i_* such that π_0(i_*) > 0 and the (i_*, i_0)th element r_{0,l_0}(i_*, i_0) of R_{0,l_0} is positive. Thus, it follows from Eq. (4.13) that for k > l_0,

π_k ≥ (0, …, 0, π_0(i_*) r_{0,l_0}(i_*, i_0) r_{k−l_0}(i_0, j_0), 0, …, 0),   (4.14)

with the nonzero entry in the j_0th position. Since π_0(i_*) r_{0,l_0}(i_*, i_0) > 0 and {r_{k−l_0}(i_0, j_0)} is heavy-tailed, Eq. (4.14) implies that {π_k} is heavy-tailed. This completes the proof.
4.2 Three Asymptotic Classes of {π_k}
In this section, we discuss three asymptotic classes of the stationary probability vector {π_k}, and provide conditions for the classification based on the two matrix sequences {D_k} and {A_k} in the Markov chain of GI/G/1 type.

We define a collection ℱ of matrix sequences consisting of all nonnegative matrix sequences (not necessarily square ones) whose sums are convergent:

ℱ = { {B_k} : B_k ≥ 0, Σ_{k=0}^{∞} B_k < ∞ }.

We classify the matrix sequences in ℱ into three classes as follows.

Definition 4.1 Let φ_B be the radius of convergence of B*(z) = Σ_{k=0}^{∞} z^k B_k for {B_k} ∈ ℱ. Then,
(1) {B_k} is called class I if φ_B > 1 and B*(φ_B) is infinite;
(2) {B_k} is called class II if φ_B > 1 and B*(φ_B) is finite;
(3) {B_k} is called class III if φ_B = 1.

It is easy to see from φ_B > 1 or φ_B = 1 that {B_k} in class I or class II is light-tailed, while {B_k} in class III is heavy-tailed. When min{φ_A, φ_D} > 1, both {D_k} and {A_k} are either class I or class II. On the other hand, the condition min{φ_A, φ_D} = 1 implies that either {A_k} or {D_k} is class III, and therefore {π_k} is class III. Recall that η = ∞ if det(I − R*(z)) ≠ 0 for all |z| ≤ φ_A.

Lemma 4.4 Suppose that the Markov chain of GI/G/1 type is irreducible and positive recurrent.
(1) If 1 < η < ∞ and η ≤ φ_A, then {Σ_{n=0}^{∞} R^{n⊛}_k} is class I.
(2) If η = ∞, then {Σ_{n=0}^{∞} R^{n⊛}_k} is class I when {A_k} is class I, and {Σ_{n=0}^{∞} R^{n⊛}_k} is class II when {A_k} is class II.
(3) If φ_D > 1, then {R_{0,k}} is class I when {D_k} is class I, and {R_{0,k}} is class II when {D_k} is class II.

Proof We prove (1) and (2); (3) can be proved similarly.
(1) Assume that 1 < η < ∞ and η ≤ φ_A. In this case, the radius of convergence of Σ_{k=0}^{∞} z^k Σ_{n=0}^{∞} R^{n⊛}_k is η. When z = η, we write

[I − R*(z)]^{−1} = (1 / det(I − R*(z))) adj(I − R*(z)),

where det(I − R*(η)) = 0 and adj(I − R*(η)) ≠ 0 according to Lemma 4.6. Thus, {Σ_{n=0}^{∞} R^{n⊛}_k} is class I.
(2) Since φ_R = φ_A by Theorem 3.8, it follows from Eq. (3.19) that

Σ_{k=0}^{∞} z^k Σ_{n=0}^{∞} R^{n⊛}_k = [I − R*(z)]^{−1} = (I − Φ_0)[I − G*(z)][I − A*(z)]^{−1} = (I − Φ_0)[I − G*(z)] Σ_{n=0}^{∞} [A*(z)]^n.

Thus, the radius of convergence of Σ_{k=0}^{∞} z^k Σ_{n=0}^{∞} R^{n⊛}_k is φ_A.

If {A_k} is class I, then A*(φ_A) is infinite, hence Σ_{n=0}^{∞} [A*(φ_A)]^n is infinite. Since I − Φ_0 and I − G*(φ_A) are invertible, Σ_{k=0}^{∞} φ_A^k Σ_{n=0}^{∞} R^{n⊛}_k is infinite. Therefore, {Σ_{n=0}^{∞} R^{n⊛}_k} is class I.

If {A_k} is class II, then φ_A < ∞ and A*(φ_A) < ∞. Since η = ∞, it follows from Eq. (3.19) that I − A*(φ_A) is invertible. Thus,

Σ_{k=0}^{∞} φ_A^k Σ_{n=0}^{∞} R^{n⊛}_k = (I − Φ_0)[I − G*(φ_A)][I − A*(φ_A)]^{−1} < ∞.

Therefore, {Σ_{n=0}^{∞} R^{n⊛}_k} is class II. This completes the proof.

Theorem 4.3 Suppose that the Markov chain of GI/G/1 type is irreducible and positive recurrent.
f ½ Therefore, ®¦ Rkn* ¾ is class Ĉ. ¯n 0 ¿ This completes the proof. Theorem 4.3 Suppose that the Markov chain of GI G 1 type is irreducible and positive recurrent. 184
4 Asymptotic Analysis
(1) If $1<\eta<\infty$ and $\eta\leqslant\min\{\varphi_A,\varphi_D\}$, then $\{\pi_k\}$ is class I.

(2) If $\eta=\infty$ and $1<\varphi_A<\varphi_D$, then $\{\pi_k\}$ is class I when $\{A_k\}$ is class I; $\{\pi_k\}$ is class II when $\{A_k\}$ is class II.

(3) If $\eta=\infty$ and $\varphi_A=\varphi_D$, then $\{\pi_k\}$ is class I when at least one of $\{A_k\}$ and $\{D_k\}$ is class I; $\{\pi_k\}$ is class II when both $\{A_k\}$ and $\{D_k\}$ are class II.

(4) If $1<\eta<\infty$ and $1<\varphi_D<\eta\leqslant\varphi_A$, or $\eta=\infty$ and $1<\varphi_D<\varphi_A$, then $\{\pi_k\}$ is class I when $\{D_k\}$ is class I; $\{\pi_k\}$ is class II when $\{D_k\}$ is class II.

Proof We only prove (1), while (2), (3) and (4) can be proved similarly. Assume that $1<\eta<\infty$ and $\eta\leqslant\min\{\varphi_A,\varphi_D\}$. The radius of convergence of

$$\Pi^*(z)=\frac{1}{\det(I-R^*(z))}\,\pi_0R_0^*(z)\,\mathrm{adj}(I-R^*(z))$$

is $\eta$, $\det(I-R^*(\eta))=0$, and

$$\pi_0R_0^*(\eta)\,\mathrm{adj}(I-R^*(\eta))=\pi_0R_0^*(\eta)v(\eta)u(\eta)\prod_{i=2}^{m}[1-r_i(\eta)]\neq 0$$

according to Lemma 4.6 and Theorem 3.3; hence $\Pi^*(\eta)$ is infinite. Therefore $\{\pi_k\}$ is class I. This completes the proof.

Note that Sections 4.3 and 4.4 will analyze the light-tailed behavior of the stationary probability vector $\{\pi_k\}$, which is in classes I and II, while Sections 4.5 to 4.7 will discuss the heavy-tailed case of $\{\pi_k\}$, which is in class III.
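Definition 4.1 can be illustrated numerically. The sketch below is our own illustration, not part of the text: it estimates the radius of convergence $\varphi_B$ of $B^*(z)=\sum_k z^k b_k$ for a scalar sequence by the root test $b_k^{-1/k}$, so that a geometric sequence is seen to have $\varphi_B>1$ (light-tailed, class I or II) while a polynomially decaying sequence has $\varphi_B=1$ (heavy-tailed, class III).

```python
# Illustrative sketch (not from the text): root-test estimate of the
# radius of convergence phi_B for a scalar sequence b_k, separating the
# light-tailed classes (phi_B > 1) from the heavy-tailed class III
# (phi_B = 1) of Definition 4.1.

def radius_of_convergence(b, k=200):
    """Root-test estimate: phi_B ~ b_k ** (-1/k) for large k."""
    return b(k) ** (-1.0 / k)

geometric = lambda k: 0.5 ** k          # b_k = (1/2)^k, so phi_B = 2
polynomial = lambda k: (k + 1) ** -3.0  # b_k = (k+1)^-3, so phi_B = 1

phi_geo = radius_of_convergence(geometric)    # close to 2 -> light-tailed
phi_poly = radius_of_convergence(polynomial)  # close to 1 -> class III
```

For a matrix sequence the same estimate would be applied entrywise, and classes I and II are then distinguished by whether $B^*(\varphi_B)$ diverges.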
4.3 The Asymptotics Based on the Solution $\eta$

In this section, we assume that the minimal positive solution $\eta>1$ to $\det(I-R^*(z))=0$ satisfies $\eta<\varphi_A$ and $\eta<\varphi_D$, which, together with $\varphi_\Pi=\min\{\varphi_{R_0},\varphi_R,\eta\}$, implies that $\{\pi_k\}$ is light-tailed and its tail is determined by $\eta$ only. Based on this, we derive explicit asymptotic expressions for the light tail of the stationary probability vector $\{\pi_k\}$.
4.3.1 A is Irreducible

Note that $\varphi_R=\varphi_A$, $\varphi_{R_0}=\varphi_D$, $\eta<\varphi_A$ and $\eta<\varphi_D$; it is clear that $\varphi_\Pi=\min\{\varphi_{R_0},\varphi_R,\eta\}=\eta>1$. Thus it follows from Eq. (4.1) that for any $z$ with $1<|z|<\eta$,

$$\Pi^*(z)=\frac{1}{\det(I-R^*(z))}\,\pi_0R_0^*(z)\,\mathrm{adj}(I-R^*(z)).\qquad(4.15)$$
The analysis of the singularity of $\Pi^*(z)$ depends on that of the three functions $1/\det(I-R^*(z))$, $R_0^*(z)$ and $\mathrm{adj}(I-R^*(z))$. Since $R_0^*(z)$ and $\mathrm{adj}(I-R^*(z))$ are analytic in $|z|<\min\{\varphi_A,\varphi_D\}$, they are analytic at $z=\eta$. Noting that $z=\eta$ is a singular point of $1/\det(I-R^*(z))$, it is also a singular point of $\Pi^*(z)$. In the following, we show that when $A$ is irreducible, $z=\eta$ is a pole of order 1 of $\Pi^*(z)$.

For $z>0$, let $r(z)$ be the maximal eigenvalue of the matrix $R^*(z)$, and let $r_i(z)$ for $2\leqslant i\leqslant m$ be the other eigenvalues of $R^*(z)$. It is clear that $r(\eta)=1$ and $r_i(\eta)\neq 1$ for all $2\leqslant i\leqslant m$. We denote by $u(z)$ and $v(z)$ the Perron-Frobenius left and right eigenvectors of $R^*(z)$ for $z\geqslant 0$, respectively. It is useful that $u(z)$ and $v(z)$ can be expressed through the Perron-Frobenius left and right eigenvectors of $A^*(z)$. The following lemma characterizes $u(\eta)$ and $v(\eta)$ in terms of those of $A^*(\eta)$.

Lemma 4.5 If $s(\eta)$ and $t(\eta)$ are the Perron-Frobenius left and right eigenvectors of $A^*(\eta)$, respectively, then

(1) $u(\eta)=a\,s(\eta)$ and $v(\eta)=b\,(I-\Phi_0)[I-G^*(\eta)]t(\eta)$, where $a$ and $b$ are two free positive factors.

(2) $u(\eta)>0$ if and only if $s(\eta)>0$; $v(\eta)\geqslant 0$ with $v(\eta)\neq 0$ if and only if $t(\eta)>0$; and $u(z)e=u(z)v(z)=1$.

Proof In this proof, (1) is obvious by using Eq. (3.19), and (2) can be obtained by noting that $I-\Phi_0$ and $I-G^*(\eta)$ are both invertible, where the invertibility of $I-G^*(\eta)$ follows from $\eta>1$ and (1) of Corollary 3.10. This completes the proof.

To describe $\Pi^*(z)$ in Eq. (4.1) in detail, we need to express the adjugate matrix of $I-R^*(\eta)$ more explicitly, which is done in the following lemma.

Lemma 4.6 If the Markov chain of GI/G/1 type is irreducible and positive recurrent, then

$$\mathrm{adj}(I-R^*(\eta))=\prod_{i=2}^{m}[1-r_i(\eta)]\,v(\eta)u(\eta).$$

Proof Noting that

$$\det(I-A^*(\eta))=\det(I-R^*(\eta))\det(I-\Phi_0)\det(I-G^*(\eta))=0,$$

we have $\mathrm{rank}(I-A^*(\eta))\leqslant m-1$. Since $A^*(\eta)$ is irreducible, $1-\chi(\eta)=0$ is a simple eigenvalue of $I-A^*(\eta)$, and for any other eigenvalue $1-y(\eta)$ of $I-A^*(\eta)$, $|1-y(\eta)|>|1-\chi(\eta)|=0$. Thus $\mathrm{rank}(I-A^*(\eta))=m-1$. Since the Markov chain of GI/G/1 type is positive recurrent, it follows from Corollary 3.10 that for $\eta>1$, $I-G^*(\eta)$ is invertible. Thus, from Eq. (3.19),

$$I-R^*(\eta)=[I-A^*(\eta)][I-G^*(\eta)]^{-1}(I-\Phi_0)^{-1}.$$

Therefore

$$\mathrm{rank}(I-R^*(\eta))=\mathrm{rank}(I-A^*(\eta))=m-1$$

and $\mathrm{rank}(\mathrm{adj}(I-R^*(\eta)))=1$. Noting that $\mathrm{adj}(I-\alpha R^*(\eta))$ is continuous in $\alpha\in(0,1]$, we obtain

$$\mathrm{adj}(I-R^*(\eta))=\lim_{\alpha\uparrow 1}\mathrm{adj}(I-\alpha R^*(\eta)).$$

For $\alpha\in(0,1)$, we have

$$\mathrm{adj}(I-\alpha R^*(\eta))=\det(I-\alpha R^*(\eta))\,[I-\alpha R^*(\eta)]^{-1}=\prod_{i=2}^{m}[1-\alpha r_i(\eta)]\,(1-\alpha)[I-\alpha R^*(\eta)]^{-1}.$$

Let $T(\alpha)$ be an invertible matrix such that

$$T(\alpha)^{-1}[I-\alpha R^*(\eta)]T(\alpha)=\begin{pmatrix}1-\alpha&\\&J(\alpha)\end{pmatrix}$$

is the Jordan canonical form of $I-\alpha R^*(\eta)$, where the moduli of all the diagonal entries of $J(\alpha)$ are greater than $1-\alpha$. Since $\mathrm{rank}(I-R^*(\eta))=m-1$,

$$T(1)^{-1}[I-R^*(\eta)]T(1)=\begin{pmatrix}0&\\&J(1)\end{pmatrix}$$

and $J(1)$ is invertible. Therefore, we obtain

$$(1-\alpha)[I-\alpha R^*(\eta)]^{-1}=T(\alpha)\begin{pmatrix}1&\\&(1-\alpha)J(\alpha)^{-1}\end{pmatrix}T(\alpha)^{-1}.$$

Noting that $\lim_{\alpha\uparrow 1}(1-\alpha)J(\alpha)^{-1}=0$, we get

$$\mathrm{adj}(I-R^*(\eta))=\lim_{\alpha\uparrow 1}\mathrm{adj}(I-\alpha R^*(\eta))=\lim_{\alpha\uparrow 1}\prod_{i=2}^{m}[1-\alpha r_i(\eta)]\,(1-\alpha)[I-\alpha R^*(\eta)]^{-1}=\prod_{i=2}^{m}[1-r_i(\eta)]\,T(1)\begin{pmatrix}1&\\&0\end{pmatrix}T(1)^{-1}=\prod_{i=2}^{m}[1-r_i(\eta)]\,v(\eta)u(\eta).$$
This completes the proof.

It is clear from the proof of Lemma 4.6 that $z=\eta$ is the only root of $\det(I-R^*(z))=0$ in $|z|\leqslant\eta$. Therefore $z=\eta$ is the only singular point of $\Pi^*(z)$ in $|z|\leqslant\eta$. The following lemma further shows that the singular point $z=\eta$ is a pole of order 1 of $\Pi^*(z)$.

Lemma 4.7 Let

$$1-r(z)=(\eta-z)h(z).\qquad(4.16)$$

Then $h(\eta)>0$.

Proof It follows from Eq. (4.16) that

$$r'(z)=h(z)-(\eta-z)h'(z),$$

which yields

$$h(\eta)=r'(\eta).\qquad(4.17)$$

Noting that

$$u(z)R^*(z)=r(z)u(z),$$

we have

$$u'(z)R^*(z)+u(z)\frac{d}{dz}R^*(z)=r'(z)u(z)+r(z)u'(z).$$

Multiplying this equation by $v(z)$ on the right and using $u(z)v(z)=1$ yields

$$r'(z)=u(z)\Big[\frac{d}{dz}R^*(z)\Big]v(z).$$

Thus we obtain

$$r'(\eta)=u(\eta)\sum_{k=1}^{\infty}k\eta^{k-1}R_k\,v(\eta)=\frac{1}{\eta}\,u(\eta)\sum_{k=1}^{\infty}k\eta^{k}R_k\,v(\eta)\geqslant\frac{1}{\eta}\,u(\eta)R^*(\eta)v(\eta)=\frac{1}{\eta}\,r(\eta)=\frac{1}{\eta}>0.$$

It follows from Eq. (4.17) that $h(\eta)=r'(\eta)>0$. This completes the proof.

We are now ready to provide an expression for $\Pi^*(z)$ by means of Lemmas 4.6 and 4.7. According to Lemma 4.6, we write

$$\mathrm{adj}(I-R^*(z))=\prod_{i=2}^{m}[1-r_i(\eta)]\,v(\eta)u(\eta)+(\eta-z)H(z).$$
It follows from Eq. (4.1) and Eq. (4.16) that

$$\Pi^*(z)=\frac{1}{\eta-z}\cdot\frac{1}{h(z)\prod_{i=2}^{m}[1-r_i(z)]}\,\pi_0R_0^*(z)\Big[\prod_{i=2}^{m}[1-r_i(\eta)]\,v(\eta)u(\eta)+(\eta-z)H(z)\Big].\qquad(4.18)$$

Let

$$L(z)=\frac{\pi_0R_0^*(z)v(\eta)\prod_{i=2}^{m}[1-r_i(\eta)]}{h(z)\prod_{i=2}^{m}[1-r_i(z)]}.$$

Since $h(z)$ is continuous at $z=\eta$ and $h(\eta)>0$, there exists a $\sigma>0$ small enough such that $h(z)>0$ for all $z\in(\eta-\sigma,\eta+\sigma)$. Note that $h(z)$, $\prod_{i=2}^{m}[1-r_i(z)]$ and $R_0^*(z)$ are all analytic at $z=\eta$; it is obvious that $L(z)$ is analytic at $z=\eta$. Thus, an expansion of $L(z)$ in a power series at $z=\eta$ is given by

$$L(z)=\frac{\pi_0R_0^*(\eta)v(\eta)}{h(\eta)}+\sum_{k=1}^{\infty}\frac{L^{(k)}(\eta)}{k!}(\eta-z)^k,\qquad(4.19)$$

where $L^{(k)}(\eta)=\dfrac{d^k}{dz^k}L(z)\Big|_{z=\eta}$. It follows from Eq. (4.18) and Eq. (4.19) that

$$\Pi^*(z)=\frac{K}{\eta-z}\,u(\eta)+g(z),\qquad(4.20)$$

where

$$K=\frac{\pi_0R_0^*(\eta)v(\eta)}{h(\eta)}$$

is a constant, and

$$g(z)=\frac{\pi_0R_0^*(z)}{h(z)\prod_{i=2}^{m}[1-r_i(z)]}\,H(z)+\sum_{k=1}^{\infty}\frac{L^{(k)}(\eta)}{k!}(\eta-z)^{k-1}u(\eta)$$

is analytic at $z=\eta$, since $\dfrac{\pi_0R_0^*(z)}{h(z)\prod_{i=2}^{m}[1-r_i(z)]}$, $H(z)$ and $\sum_{k=1}^{\infty}\dfrac{L^{(k)}(\eta)}{k!}(\eta-z)^k$ are all analytic at $z=\eta$.
Theorem 4.4 Suppose that the Markov chain of GI/G/1 type is irreducible and positive recurrent. If $1<\eta<\min\{\varphi_A,\varphi_D\}$ and the matrix $A$ is irreducible, then the asymptotics of $\{\pi_k\}$ is geometric, i.e.,

$$\pi_k=K\eta^{-(k+1)}u(\eta)+O\big((\eta+\varepsilon)^{-k}\big)\,e^{T},\qquad(4.21)$$

where $\varepsilon$ is a small positive number.

Proof Based on Eq. (4.20), we first need to check that $K>0$. Then Eq. (4.21) is obviously true according to a standard result on the asymptotics of complex functions (for example, Theorem 5.2.1 in Wilf [37]). According to the assumptions made on the Markov chain of GI/G/1 type, every $\pi_k$ is positive. It is clear that $\pi_0R_0^*(\eta)\geqslant\pi_0R_0^*(1)>0$, since $\pi_0>0$ and no column vector of $R_0^*(1)$ is identically zero according to Theorem 3.3. Noting that $v(\eta)\geqslant 0$ and $v(\eta)\neq 0$, we obtain $\pi_0R_0^*(\eta)v(\eta)\geqslant\pi_0R_0^*(1)v(\eta)>0$. Now $K>0$ follows from $h(\eta)>0$. This completes the proof.
4.3.2 Markov Chains of GI/M/1 Type

As an illustrative example, we consider a Markov chain of GI/M/1 type. In this case, $R_k=0$ and $R_{0,k}=0$ for $k\geqslant 2$. Therefore $R_0^*(z)=zR_{0,1}$, $R^*(z)=zR_1$, $\varphi_{R_0}=\varphi_D=\infty$ and $\varphi_R=\varphi_A=\infty$. It follows from Eq. (4.1) and $\pi_1=\pi_0R_{0,1}$ that

$$\Pi^*(z)=z\pi_1(I-zR_1)^{-1}=\frac{1}{(1-zr)\prod_{i=2}^{m}(1-zr_i)}\,z\pi_1\,\mathrm{adj}(I-zR_1),$$

where $r$ is the maximal eigenvalue of $R_1$, which is smaller than one when the Markov chain is positive recurrent, and $r_i$, $2\leqslant i\leqslant m$, are the other eigenvalues of $R_1$. It is clear that $\eta=1/r$. Therefore $1<\eta<\min\{\varphi_A,\varphi_D\}=\infty$, and the asymptotics of $\{\pi_k\}$ is geometric.
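The geometric tail in the GI/M/1 case can be checked numerically. In the sketch below (our own example; the matrices $R_1$ and $\pi_1$ are made up and are not from the text), the tail $\pi_k=\pi_1R_1^{k-1}$ decays at rate $r=\mathrm{sp}(R_1)$, so $\eta=1/r$.

```python
# Illustrative sketch (hypothetical data): for a GI/M/1-type chain the
# stationary tail is matrix-geometric, pi_k = pi_1 R_1^{k-1}, and the
# decay rate is the maximal eigenvalue r of R_1 (so eta = 1/r).
import numpy as np

R1 = np.array([[0.3, 0.1],
               [0.2, 0.4]])            # made-up R_1 with sp(R1) < 1
pi1 = np.array([0.05, 0.03])           # made-up boundary vector pi_1

r = max(abs(np.linalg.eigvals(R1)))    # maximal eigenvalue r of R_1
pi40 = pi1 @ np.linalg.matrix_power(R1, 39)
pi41 = pi1 @ np.linalg.matrix_power(R1, 40)
ratio = pi41.sum() / pi40.sum()        # tends to r as k grows, eta = 1/r
```

Here the ratio of successive tail masses converges to $r$, which is exactly the geometric decay asserted by Theorem 4.4.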
4.3.3 Markov Chains of M/G/1 Type

The method of generating functions is one of the common methods used to study the light-tailed behavior of the stationary probability vector $\{\pi_k\}$. Equation (3.3.2) in Neuts [29] (or (2.1) in Falkenberg [11]) expresses $\Pi^*(z)$ in terms of $\sum_{k=-1}^{\infty}z^kA_k$ and another function $D^*(z)$. Note that our expression Eq. (4.1) is different from those in the literature: $\Pi^*(z)$ is expressed in terms of the two R-measure generating functions $R_0^*(z)$ and $R^*(z)$. Furthermore, the relationships between $R^*(z)$ and $A^*(z)$, and between $R_0^*(z)$ and $D^*(z)$, established in Theorems 3.5 and 3.6, enable us to obtain sharper results than those in the literature.

Following the idea in Falkenberg [11], the generating function for Markov chains of GI/G/1 type can be written as

$$\Pi^*(z)[I-A^*(z)]=\pi_0D^*(z)+\sum_{i=1}^{\infty}z^i\pi_i\sum_{j=i}^{\infty}z^{-j}A_{-j}.$$

For a Markov chain of M/G/1 type, it becomes

$$\sum_{i=1}^{\infty}z^i\pi_i\sum_{j=i}^{\infty}z^{-j}A_{-j}=\pi_1A_{-1}$$

and

$$\Pi^*(z)=[\pi_0D^*(z)+\pi_1A_{-1}][I-A^*(z)]^{-1},$$

which was used in the literature to study the asymptotic behavior, including Falkenberg [11] among others. However, for the Markov chain of GI/G/1 type it is very inconvenient or difficult to express $\sum_{i=1}^{\infty}z^i\pi_i\sum_{j=i}^{\infty}z^{-j}A_{-j}$ explicitly. This illustrates why the R-measure is effective for explicitly expressing the generating function $\Pi^*(z)$.
4.3.4 A is Reducible

After reordering the states, we assume that $A^*(z)$ is written in the normal form

$$A^*(z)=\begin{pmatrix}a_1(z)&&&&&&\\&\ddots&&&&&\\&&a_p(z)&&&&\\&&&a_{p+1}(z)&&&\\&&&&\ddots&&\\&&&&&a_{p+q}(z)&\\r_1(z)&\cdots&r_p(z)&r_{p+1}(z)&\cdots&r_{p+q}(z)&r_{p+q+1}(z)\end{pmatrix},$$

where $p+q\geqslant 1$, $a_i(z)$ for $1\leqslant i\leqslant p+q$ are irreducible and stochastic, $\eta>1$ is a solution to the equation $\det(I-a_i(z))=0$ for $1\leqslant i\leqslant p$, while $I-r_{p+q+1}(\eta)$ and $I-a_{p+j}(\eta)$ for $1\leqslant j\leqslant q$ are all invertible. It follows from Eq. (3.19) and Corollary 3.10 that for $1<z\leqslant\eta$

$$I-R^*(z)=[I-A^*(z)][I-G^*(z)]^{-1}(I-\Phi_0)^{-1}.$$

Thus we obtain

$$\det(I-R^*(z))=(\eta-z)^p\prod_{i=1}^{p}b_i(z)\prod_{j=1}^{q}\det[I-a_{p+j}(z)]\det[I-r_{p+q+1}(z)]\,[\det(I-G^*(z))]^{-1}[\det(I-\Phi_0)]^{-1},$$

where $\det(I-a_i(z))=(\eta-z)b_i(z)$ for $1\leqslant i\leqslant p$. It follows from Lemma 4.7 that $b_i(\eta)\neq 0$ for $1\leqslant i\leqslant p$. Thus, it follows from Eq. (4.1) and Eq. (4.16) that

$$\Pi^*(z)=\frac{1}{(\eta-z)^p}\,K(z),$$

where

$$K(z)=\Big\{\prod_{i=1}^{p}b_i(z)\prod_{j=1}^{q}\det[I-a_{p+j}(z)]\det[I-r_{p+q+1}(z)]\Big\}^{-1}[\det(I-G^*(z))][\det(I-\Phi_0)]\,\pi_0R_0^*(z)\,\mathrm{adj}(I-R^*(z))$$

is analytic at $z=\eta$. Let $K(z)=\sum_{k=0}^{\infty}\dfrac{K^{(k)}(\eta)}{k!}(\eta-z)^k$, where $K^{(k)}(\eta)=\dfrac{d^k}{dz^k}K(z)\Big|_{z=\eta}$. Then

$$\Pi^*(z)=\sum_{i=1}^{p}\frac{K^{(p-i)}(\eta)}{(p-i)!\,(\eta-z)^{i}}+\sum_{j=0}^{\infty}\frac{K^{(p+j)}(\eta)}{(p+j)!}(\eta-z)^{j}.$$

By a standard result on the asymptotics of complex functions (for example, Theorem 5.2.1 in Wilf [37]) we obtain

$$\pi_k=\sum_{l=1}^{p}(-1)^{l}\eta^{-(k+l)}\binom{k+l-1}{l-1}\frac{K^{(p-l)}(\eta)}{(p-l)!}+O\big((\eta+\varepsilon)^{-k}\big)\,e^{T}.\qquad(4.22)$$

Remark 4.2 If $1<\eta<\min\{\varphi_A,\varphi_D\}$, then Eq. (4.22) provides a more general result on the light tail of $\{\pi_k\}$. Let $\det(I-A^*(z))=(\eta-z)^p\theta(z)$, where $\theta(\eta)\neq 0$. Then the tail of $\{\pi_k\}$ is geometric if $p=1$; the tail of $\{\pi_k\}$ is non-geometric if $p\geqslant 2$.
4.4 The Asymptotics Based on the Boundary Matrices

In this section, we consider the role that the boundary matrices $\{D_k\}$ play in the light-tailed asymptotics of $\{\pi_k\}$. We assume that $1<\varphi_D\leqslant\eta\leqslant\varphi_A$. It is easy to see from Theorem 3.8 that $z=\varphi_D$ is then a singular point of $\Pi^*(z)$, so that the boundary matrices $\{D_k\}$ determine the asymptotics of $\{\pi_k\}$. For simplicity of description, we only discuss the cases where the singular point is either a pole of order $d$ or an algebraic singular point.

4.4.1 $\varphi_D$ is a Pole

We assume that $\varphi_D$ is a pole of order $d$ of the generating function matrix $D^*(z)$. The following lemma describes the corresponding singular point of the matrix $R_0^*(z)$.

Lemma 4.8 If $z=\varphi_D$ is a pole of order $d$ of $D^*(z)$ of size $m_0\times m$, then $z=\varphi_D$ is also a pole of order $d$ of $R_0^*(z)$, i.e.,

$$R_0^*(z)=\frac{1}{(\varphi_D-z)^d}\,S(z),\qquad(4.23)$$

where $S(z)$ is analytic in $|z|<\varphi_D+\sigma$ for some $\sigma>0$, and $S(\varphi_D)\neq 0$.

Proof If $z=\varphi_D$ is a pole of order $d$ of $D^*(z)$ of size $m_0\times m$, then there always exists a matrix $B(z)$ such that

$$D^*(z)=\frac{1}{(\varphi_D-z)^d}\,B(z),$$

where $B(z)$ is analytic in $|z|<\varphi_D+\sigma$ for some $\sigma>0$, and $B(\varphi_D)\neq 0$. It follows from Eq. (3.30) that when $\delta>0$ is small enough and $z>1+\delta$,

$$\frac{1}{(\varphi_D-z)^d}\,B(z)(I-\Phi_0)^{-1}\leqslant R_0^*(z)\leqslant\frac{1}{(\varphi_D-z)^d}\,B(z)[I-G^*(1+\delta)]^{-1}(I-\Phi_0)^{-1},$$

which shows that $z=\varphi_D$ is a pole of order $d$ of $R_0^*(z)$ with Eq. (4.23). This completes the proof.

Theorem 4.5 Assume that $1<\varphi_D<\eta\leqslant\varphi_A$, and $z=\varphi_D$ is a pole of order $d$ of $D^*(z)$. Let $R_0^*(z)$ be given by Eq. (4.23). Then, for some small $\varepsilon>0$,

$$\pi_k=\sum_{j=1}^{d}(-1)^{j}\varphi_D^{-(k+j)}\binom{k+j-1}{j-1}L_{d-j}+O\big((\varphi_D+\varepsilon)^{-k}\big)\,e^{T},$$

where

$$L_j=\frac{1}{j!}\frac{d^j}{dz^j}\big\{\pi_0S(z)[I-R^*(z)]^{-1}\big\}\Big|_{z=\varphi_D}.$$

Proof It follows from Lemma 4.8 and Eq. (4.1) that

$$\Pi^*(z)=\frac{1}{(\varphi_D-z)^d}\,\pi_0S(z)[I-R^*(z)]^{-1},$$

where $S(z)[I-R^*(z)]^{-1}$ is analytic in $|z|<\varphi_D+\sigma$ for $0<\sigma<\varphi_A-\varphi_D$. An expansion of $\pi_0S(z)[I-R^*(z)]^{-1}$ in a power series at $z=\varphi_D$ is

$$\pi_0S(z)[I-R^*(z)]^{-1}=\sum_{j=0}^{\infty}L_j(\varphi_D-z)^j.$$

Hence we obtain

$$\Pi^*(z)=\sum_{j=1}^{d}\frac{1}{(\varphi_D-z)^j}\,L_{d-j}+\sum_{i=0}^{\infty}L_{d+i}(\varphi_D-z)^i.$$

A standard result on the asymptotics of complex functions (for example, Theorem 5.2.1 in Wilf [37]) leads to the desired result. This completes the proof.

Corollary 4.1 Assume that $1<\varphi_D=\eta<\varphi_A$, $z=\varphi_D$ is a pole of order $d$ of $D^*(z)$ and the matrix $A$ is irreducible. Let $R_0^*(z)$ be given by Eq. (4.23). Then, for some small $\varepsilon>0$,

$$\pi_k=\sum_{j=1}^{d+1}(-1)^{j}\varphi_D^{-(k+j)}\binom{k+j-1}{j-1}L_{d+1-j}+O\big((\varphi_D+\varepsilon)^{-k}\big)\,e^{T}.$$

Proof Under the irreducibility assumption on the matrix $A$, $\eta$ is a simple root of the equation $\det(I-R^*(z))=0$. Therefore $z=\varphi_D$ is a pole of order $d+1$ of $\Pi^*(z)$. The rest of the proof is similar to that of Theorem 4.5. This completes the proof.

It is easy to see from Theorem 4.5 that the asymptotics of $\{\pi_k\}$ is geometric if $d=1$ and $1<\varphi_D<\eta\leqslant\varphi_A$; otherwise it is not geometric, though it is still light-tailed. Clearly, if $1<\varphi_D=\eta<\varphi_A$, then the asymptotics of $\{\pi_k\}$ is not geometric.
4.4.2 $\varphi_D$ is an Algebraic Singular Point

We assume that $z=\varphi_D$ is an algebraic singular point of $D^*(z)$ such that

$$D^*(z)=\widetilde{D}-(\varphi_D-z)^{\alpha}\widetilde{D}^*(z),\qquad(4.24)$$

where $\alpha>1$ is not an integer, $\widetilde{D}=D^*(1)+(\varphi_D-1)^{\alpha}\widetilde{D}^*(1)$, $\widetilde{D}^*(z)$ is analytic in $|z|<\varphi_D+\sigma$ for some $\sigma>0$, and $\widetilde{D}^*(\varphi_D)\neq 0$. The restriction $\alpha>1$ imposed here is to guarantee that $\sum_{k=1}^{\infty}kD_k<\infty$.
To provide the asymptotics of $\{\pi_k\}$ in this case, we need to introduce an operational principle. This principle for the continuous case was discussed in Abate and Whitt [2] (p. 186).

Heaviside Operational Principle Suppose that $f^*(z)=\sum_{k=1}^{\infty}z^kf_k$ is the generating function of a nonnegative sequence $\{f_k\}$. If $z^*$ is the radius of convergence of $f^*(z)$, and the asymptotic expansion is

$$f^*(z)=\sum_{k=1}^{\infty}a_k(z^*-z)^k+\theta^*(z),$$

where $\theta^*(z)$ is the generating function of a nonnegative sequence $\{\theta_k\}$, then $f_k\sim\theta_k$ as $k\to\infty$.

It follows from Eq. (3.30) that
(4.25)
i I z i ( z )]( I ) ) 1 W ( z) [D D 0 D
(4.26)
where D
and V ( z)
* D 1 1 i ªD i* º ¬ (I D z ) D ( z ) ¼ [ I G (1 G )] ( I ) 0 ) .
It follows from Eq. (4.25) and Eq. (4.1) that for 0 z I D , ʌ0W ( z )[ I R* ( z )]1 İ 3 * ( z ) İ ʌ0V ( z )[ I R* ( z )]1 ,
(4.27)
since ʌ0 ! 0 and [ I R* ( z )]1 ı 0. There are two possible cases: Case ć 1 ID K I A . In this case, [ I R* ( z )]1 is analytic at z I D , hence an expansion of [ I R* ( z )]1 in a power series at z I D is given by f
¦*
[ I R* ( z )]1
n
(ID z )n ,
(4.28)
n 0
where * n
1 dn n [ I R* ( z )]1_ z n ! dz ʌ0W ( z )[ I R* ( z )]1
ID
. Eq. (4.28), together with Eq. (4.27), yields f
i ( I ) )1 * (I z ) n ʌ0 D ¦ n D 0 n 0
f
D n
¦ (I D z )
i *( z )( I ) 0 )1 * n . ʌ0 D
n 0
195
Constructive Computation in Stochastic Models with Applications
Let (1) ( z ) ʌ0W ( z )[ I R* ( z )]1
f
¦z k
(1) k
k 1
and f
i ( z ) D
¦ Di (I j
D
z) j .
j 0
By Heaviside Operational Principle and Theorem 5.3.1 in Wilf [37] we obtain that for any given 1 İ m İ k and as k o f f
(1) k
¦I
( k D n ) D
n 0
m
§ k D n j 1· i j ( I ) 0 ) 1 * n ¸ ʌ0 D k 0© ¹
¦¨ j
f
§ · O ¨ ¦ ID (k D n ) k m n D 2 ʌ0 D * (1)( I ) 0 )1 * n ¸ ©n 0 ¹ f m § k D n j 1· i j ( I ) 0 ) 1 ¦ I Dn ¨ ID (k D ) ¦ ʌ0 D ¸* n k j 0 n 0 © ¹ f § · O ¨ ID (k D ) k m D 2 ʌ0 D * (1)( I ) 0 ) 1 ¦ I Dn k n * n ¸ . n 0 © ¹
Let f
] k j D
¦I
n D
n 0
§ k D n j 1· ¨ ¸ * n , Mk k © ¹
f
¦I
n D
k n * n .
n 0
Then m
k
i j ( I )0 )1] k , j ,D I D (k D ) ¦ ʌ0 D
1
j 0
O ID (k D ) k m D 2 eT M k .
(4.29)
Let ʌ0V ( z )[ I R* ( z )]1
(2) ( z )
f
¦z k
(2) k
.
k 1
A similar analysis to Eq. (4.29) gives that for any given 1 İ m İ k and as k o f , m
(2) k
i j[ I G* (1 G )]1 ( I )0 ) 1] k , j ,D ID (k D ) ¦ ʌ0 D j 0
O I 196
( k D ) D
k m D 2 eT M k .
(4.30)
4 Asymptotic Analysis
It follows from Eq. (4.27) that for k ı 1, (2) (1) k İ ʌ k İ k ,
(4.31)
It follows from Eq. (4.29), Eq. (4.30) and Eq. (4.31) that there exists a bounded matrix sequence {H k (m,D )} with ( I ) 0 ) 1 İ H k (m,D ) İ [ I G* (1 G )]1 ( I ) 0 ) 1 such that for any given 1 İ m İ k and as k o f, m
i jH k (m,D )] k , j ,D I D (k D ) ¦ ʌ0 D
ʌk
j 0
O(I Case Ĉ have
1 ID
( k D ) D
k m D 2 eTM k ).
(4.32)
K I A , and the matrix A is irreducible. In this case, we
[ I R* ( z )]1
1 S ( z ), ID z
where S ( z ) is analytic at z I D , and S (I D ) z 0. A similar analysis to Eq. (4.32) can be used to obtain that for any given 1 İ m İ k and as k o f , m
ʌk
i jH k ( m,D 1)] k , j ,D 1 I D (k D 1) ¦ ʌ0 D j 0
O I D (k D 1) k m D 1eTM k . We summarize the above results into the following theorem. Theorem 4.6 Suppose that z I D is an algebraic singular point of D * ( z ) given by Eq. (4.23). (1) If 1 ID K I A , then for any given 1 İ m İ k and as k o f, m
ʌk
i jH k (m,D )] k , j ,D I D (k D ) ¦ ʌ0 D j 0
O I (2) If 1 ID and as k o f,
( k D ) D
k m D 2 eTM k .
K I A , and A is irreducible, then for any given 1 İ m İ k m
ʌk
i jH k ( m,D 1)] k , j ,D 1 I D (k D 1) ¦ ʌ0 D j 0
O I
( k D 1) D
k m D 1eTM k . 197
4.5 Long-Tailed Asymptotics of the Sequence $\{R_k\}$

In this section, we provide long-tailed asymptotics for the matrix sequence $\{R_k\}$ when the matrix sequence $\{A_k\}$ is long-tailed. The results of this section are key to deriving the subexponential and regularly varying asymptotics of the stationary probability vector $\{\pi_k\}$ in the subsequent sections.

For a matrix sequence $\{B_k\}$, write $\overline{B}_k=\sum_{l=k+1}^{\infty}B_l$ for its tail. If there exist a scalar sequence $\{\beta_k\}$ and a finite, non-zero nonnegative matrix $W$ such that

$$\lim_{k\to\infty}\frac{\overline{B}_k}{\overline{\beta}_k}=W,$$

then $\{\beta_k\}$ and $W$ are called a uniformly dominant sequence and the associated ratio matrix, respectively. The following two lemmas are useful in determining a uniformly dominant sequence of the matrix sequence $\{R_k\}$ and the associated ratio matrix when the matrix sequence $\{A_k\}$ is long-tailed.

Lemma 4.9 If the Markov chain of GI/G/1 type is positive recurrent and
$\sum_{k=-\infty}^{\infty}|k|A_k$ is finite, then $\sum_{k=1}^{\infty}kG_k$ is finite.

Proof It follows from Eq. (3.20) that

$$\sum_{k=1}^{\infty}kA_{-k}-\sum_{k=1}^{\infty}kA_k=(I-R)(I-\Phi_0)\sum_{k=1}^{\infty}kG_k-\sum_{k=1}^{\infty}kR_k(I-\Phi_0)(I-G).$$

Since the Markov chain is positive recurrent, it follows from (1) of Corollary 3.7 that $I-R$ is invertible and $(I-G)e=0$. It is clear that $(I-\Phi_0)^{-1}(I-R)^{-1}$ is finite and non-zero. Since $\sum_{k=-\infty}^{\infty}|k|A_k$ is finite,

$$\sum_{k=1}^{\infty}kG_ke=(I-\Phi_0)^{-1}(I-R)^{-1}\Big(\sum_{k=1}^{\infty}kA_{-k}-\sum_{k=1}^{\infty}kA_k\Big)e$$

is finite. Therefore $\sum_{k=1}^{\infty}kG_k$ is finite. This completes the proof.
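The notion of a uniformly dominant sequence introduced at the beginning of this section can be verified numerically. In the sketch below (our own construction, not from the text), $B_k=p_kW$ for a heavy-tailed scalar sequence $\{p_k\}$, so the tail ratio $\overline{B}_k/\overline{p}_k$ reproduces the ratio matrix $W$ entrywise.

```python
# Illustrative sketch (ours): a uniformly dominant sequence {p_k} and
# its ratio matrix W.  With B_k = p_k * W exactly, the tail ratio
# bar(B)_k / bar(p)_k equals W entrywise.

def tail(seq, k, n=4000):
    """Truncated tail sum: sum_{j=k+1}^{n-1} seq(j)."""
    return sum(seq(j) for j in range(k + 1, n))

p = lambda k: (k + 1.0) ** -4                    # heavy-tailed scalar
W = [[1.0, 2.0], [0.5, 3.0]]                     # target ratio matrix

def B_entry(i, j):
    return lambda k: W[i][j] * p(k)              # (i,j) entry of B_k

k = 200
p_tail = tail(p, k)
ratio = [[tail(B_entry(i, j), k) / p_tail for j in range(2)]
         for i in range(2)]                      # ratio matches W
```

In general $\overline{B}_k/\overline{\beta}_k$ only converges to $W$ in the limit; the exact equality here is an artifact of choosing $B_k$ proportional to $p_k$.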
When the Markov chain of GI/G/1 type is positive recurrent, the matrix $I-R$ is invertible according to Corollary 3.7. It follows from Eq. (3.20) that

$$I-A=(I-R)(I-\Phi_0)(I-G).$$

When $A$ is irreducible and stochastic, the maximal eigenvalue of $A$ is simple and equal to one. Hence $\mathrm{rank}(I-A)=m-1$. Since the matrix $I-\Phi_0$ is invertible, we obtain $\mathrm{rank}(I-G)=m-1$, hence the maximal eigenvalue of $G$ is simple and equal to one. Letting $g_1=1$ and $g_i$, for $2\leqslant i\leqslant m$, be the $m$ eigenvalues of the nonnegative matrix $G$, we have the following lemma.

Lemma 4.10 If the Markov chain of GI/G/1 type is positive recurrent and the matrix $A$ is irreducible and stochastic, then the adjugate matrix of $I-G$ can be expressed as

$$\mathrm{adj}(I-G)=\kappa_G\,e\,\frac{\theta(I-R)(I-\Phi_0)}{\theta(I-R)(I-\Phi_0)e},\qquad(4.33)$$

where $\kappa_G=\prod_{i=2}^{m}(1-g_i)\neq 0$.

Proof Note that when the maximal eigenvalue of the matrix $G$ is simple and equal to one, we obtain for $\alpha\in(0,1)$,

$$\mathrm{adj}(I-\alpha G)=\det(I-\alpha G)(I-\alpha G)^{-1}=\prod_{i=2}^{m}(1-\alpha g_i)\,(1-\alpha)(I-\alpha G)^{-1}.$$

Let $T(\alpha)$ be an invertible matrix such that

$$T(\alpha)^{-1}(I-\alpha G)T(\alpha)=\begin{pmatrix}1-\alpha&\\&J(\alpha)\end{pmatrix},$$

which is the Jordan canonical form of the matrix $I-\alpha G$. Then

$$(1-\alpha)(I-\alpha G)^{-1}=T(\alpha)\begin{pmatrix}1&\\&(1-\alpha)J(\alpha)^{-1}\end{pmatrix}T(\alpha)^{-1}.$$

Since in the matrix

$$T(1)^{-1}(I-G)T(1)=\begin{pmatrix}0&\\&J(1)\end{pmatrix}$$

$J(1)$ is invertible due to $\mathrm{rank}(I-G)=m-1$, this implies $\lim_{\alpha\uparrow 1}(1-\alpha)J(\alpha)^{-1}=0$. Note that $\mathrm{adj}(I-\alpha G)$ is continuous for $\alpha\in(0,1]$; we get

$$\mathrm{adj}(I-G)=\lim_{\alpha\uparrow 1}\mathrm{adj}(I-\alpha G)=\lim_{\alpha\uparrow 1}\prod_{i=2}^{m}(1-\alpha g_i)\,(1-\alpha)(I-\alpha G)^{-1}=\kappa_G\,T(1)\begin{pmatrix}1&\\&0\end{pmatrix}T(1)^{-1}=\kappa_G\,e\,\frac{\theta(I-R)(I-\Phi_0)}{\theta(I-R)(I-\Phi_0)e},$$

since the vectors $e$ and $\dfrac{\theta(I-R)(I-\Phi_0)}{\theta(I-R)(I-\Phi_0)e}$ are the right and left Perron-Frobenius eigenvectors of $G$, respectively. Since $\mathrm{rank}(I-G)=m-1$, it is clear that $\mathrm{adj}(I-G)\neq 0$, which implies $\kappa_G\neq 0$. This completes the proof.

To study the long-tailed asymptotics of the matrix sequence $\{R_k\}$, we need to extend the results in Lemma 4 and Proposition 1 of Jelenković and Lazar [17] to a matrix setting; the extensions are given in the following two lemmas. All the measures involved in what follows may be signed measures. Let $\mathcal{B}(\mathbb{R})$ be the $\sigma$-algebra of Borel sets on $(-\infty,\infty)$. The convolution of two measures $\mu_1$ and $\mu_2$ is defined as

$$(\mu_1*\mu_2)(B)=\int_{(-\infty,\infty)}\mu_1(B-x)\,\mu_2(dx),\qquad B\in\mathcal{B}(\mathbb{R}),\ B-x=\{y: y+x\in B\}.$$

For $B\in\mathcal{B}(\mathbb{R})$, let $U(B)$ and $V(B)$ be two matrices of size $m\times m$ whose entries are finite measures, given as $U(B)=(u_{ij}(B))_{1\leqslant i,j\leqslant m}$ and $V(B)=(v_{ij}(B))_{1\leqslant i,j\leqslant m}$. The convolution of the two matrices $U$ and $V$ of finite measures is defined as

$$(U*V)(B)=\Big(\sum_{k=1}^{m}(u_{ik}*v_{kj})(B)\Big)_{1\leqslant i,j\leqslant m}$$

and the convolution of a matrix $U$ of finite measures and a finite scalar measure $v$ is defined as

$$(U*v)(B)=\big((u_{ij}*v)(B)\big)_{1\leqslant i,j\leqslant m}.$$
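The matrix convolution just defined can be sketched in code. The example below is our own, with made-up 2×2 terms; it evaluates the convolution on singletons $\{n\}$, where it amounts to the ordinary sequence convolution $\sum_k U_kV_{n-k}$ with matrix products inside the sum.

```python
# Illustrative sketch (ours): convolution of two matrices of measures,
# restricted to singletons {n}, where it becomes the ordinary sequence
# convolution (U*V)_n = sum_k U_k V_{n-k} with 2x2 matrix products.

def mat_mul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def conv(U, V, n):
    """n-th term of the convolution of finitely supported matrix
    sequences U, V given as dicts k -> 2x2 matrix."""
    out = [[0.0, 0.0], [0.0, 0.0]]
    for k, Uk in U.items():
        if n - k in V:
            out = mat_add(out, mat_mul(Uk, V[n - k]))
    return out

I2 = [[1.0, 0.0], [0.0, 1.0]]
U = {0: I2, 1: [[0.5, 0.0], [0.0, 0.5]]}
V = {1: [[2.0, 0.0], [0.0, 2.0]]}
C2 = conv(U, V, 2)    # only k = 1 contributes: U_1 V_1 = I
```

Supports on $(-\infty,0]$ or $[0,\infty)$, as assumed in the lemmas that follow, correspond to dictionaries whose keys are nonpositive or nonnegative integers.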
Remark 4.3 It should be noted that when $B$ is a singleton, the convolution of measures coincides with the ordinary convolution of sequences.

Lemma 4.11 Let $U$ and $\widehat{U}$ be two matrices of finite measures of size $m$ on $(\mathbb{R},\mathcal{B}(\mathbb{R}))$. If (1) $\lim_{x\to\infty}\widehat{U}([x,\infty))/\overline{F}(x)=C$, where $F(x)$ is a long-tailed distribution function and the matrix $C$ is finite, and (2) $U$ has a support on $(-\infty,0]$, then the matrix $\Gamma_1=U*\widehat{U}$ satisfies

$$\lim_{x\to\infty}\frac{\Gamma_1([x,\infty))}{\overline{F}(x)}=U((-\infty,0])\,C,$$

and the matrix $\Gamma_2=\widehat{U}*U$ satisfies

$$\lim_{x\to\infty}\frac{\Gamma_2([x,\infty))}{\overline{F}(x)}=C\,U((-\infty,0]).$$

Proof By using Lemma 4 in Jelenković and Lazar [17], we obtain

$$\lim_{x\to\infty}\frac{\Gamma_1([x,\infty))}{\overline{F}(x)}=\lim_{x\to\infty}\Big(\frac{\sum_{k=1}^{m}(u_{ik}*\hat{u}_{kj})([x,\infty))}{\overline{F}(x)}\Big)_{1\leqslant i,j\leqslant m}=\Big(\sum_{k=1}^{m}u_{ik}((-\infty,0])\,c_{kj}\Big)_{1\leqslant i,j\leqslant m}=U((-\infty,0])\,C.$$

This completes the proof.

Lemma 4.12 Let $U$ and $\widehat{U}$ be two matrices of finite measures of size $m$ on $(\mathbb{R},\mathcal{B}(\mathbb{R}))$. Assume that

(1) $\mu$ is a finite scalar measure with a support on $(-\infty,0]$ such that $\mu((-\infty,0])=0$ and $0<\int_{(-\infty,0]}x\,\mu(dx)<\infty$;

(2) $U$ has a support on $[0,\infty)$ with at least one non-zero element, and all the non-zero elements of $U$ are strictly positive on $[a,\infty)$ for some $a>0$; and

(3) $\lim_{x\to\infty}\widehat{U}([x,\infty))/\overline{F}(x)=C$, where $F(x)$ is a long-tailed distribution function and the matrix $C$ is finite.

If $\widehat{U}=\mu*U$, then

$$\lim_{x\to\infty}\frac{U([x,\infty))}{\int_{[x,\infty)}\overline{F}(y)\,dy}=\frac{1}{\int_{(-\infty,0]}x\,\mu(dx)}\,C.$$
Proof The proof is immediate from (2) of Proposition 1 in Jelenković and Lazar [17].

The following lemma provides a structural property of the matrix sequence $\{R_k\}$ when the matrix sequence $\{A_k\}$ is long-tailed.

Lemma 4.13 Suppose that the Markov chain of GI/G/1 type is positive recurrent. If $\{A_k\}$ is long-tailed with a uniformly dominant sequence $\{p_k\}$ and the associated ratio matrix $W$, then

$$\lim_{k\to\infty}\frac{\overline{R}_k}{\overline{p}_k}\geqslant W(I-\Phi_0)^{-1}.$$

Proof Note that the Markov chain of GI/G/1 type is positive recurrent; it follows from Eq. (3.12) and Eq. (3.9) that for all $k\geqslant 1$,

$$R_k=A_k(I-\Phi_0)^{-1}+\sum_{l=1}^{\infty}R_{k+l}\Phi_l(I-\Phi_0)^{-1}\geqslant A_k(I-\Phi_0)^{-1},$$

since $(I-\Phi_0)^{-1}\geqslant 0$, $R_k\geqslant 0$ and $\Phi_k\geqslant 0$ for $k\geqslant 1$. Hence, for all $k\geqslant 1$,

$$\overline{R}_k=\sum_{l=k+1}^{\infty}R_l\geqslant\sum_{l=k+1}^{\infty}A_l(I-\Phi_0)^{-1}=\overline{A}_k(I-\Phi_0)^{-1}.$$

Since $\{A_k\}$ is long-tailed and $\lim_{k\to\infty}\overline{A}_k/\overline{p}_k=W$, it is clear that

$$\lim_{k\to\infty}\frac{\overline{R}_k}{\overline{p}_k}\geqslant\lim_{k\to\infty}\frac{\overline{A}_k}{\overline{p}_k}\,(I-\Phi_0)^{-1}=W(I-\Phi_0)^{-1}.$$
Using Lemmas 4.11 and 4.12, we are able to prove the following theorem, which characterizes the long-tailed asymptotics of the matrix sequence $\{R_k\}$.

Theorem 4.7 Suppose that the Markov chain of GI/G/1 type is positive recurrent, $\varphi_A=1$ and $\sum_{k=-\infty}^{\infty}|k|A_k$ is finite. If $\{A_k\}$ is long-tailed with a uniformly dominant probability sequence $\{p_k\}$ and the associated ratio matrix $W$, then

$$\lim_{k\to\infty}\frac{\overline{R}_k}{\overline{p}_k}=\frac{We\,\theta(I-R)}{\theta(I-R)(I-\Phi_0)\sum_{j=1}^{\infty}jG_je},\qquad(4.34)$$

where $\overline{p}_k=\sum_{n=k+1}^{\infty}p_n$.

Proof It follows from Eq. (3.19) that

$$[R^*(z)-I]\det(I-G^*(z))=[A^*(z)-I]\,\mathrm{adj}(I-G^*(z))\,(I-\Phi_0)^{-1},\qquad(4.35)$$

when the matrix $I-G^*(z)$ is invertible. To evaluate the asymptotics of the coefficient matrix sequence of the generating function $R^*(z)-I$, we first analyze the asymptotics of the coefficient matrix sequence of the generating function $[A^*(z)-I]\,\mathrm{adj}(I-G^*(z))$ according to Lemma 4.11. Since $A^*(z)-I$ and $\mathrm{adj}(I-G^*(z))$ are analytic in $\varphi_A^{-1}\leqslant|z|\leqslant\varphi_A$, where $\varphi_A=1$ results from the fact that $\{A_k\}$ is long-tailed, we can write

$$A^*(z)-I=\sum_{k=-\infty}^{\infty}z^k\widetilde{A}_k,$$

where

$$\widetilde{A}_k=\begin{cases}A_k,&\text{if }k\neq 0,\\ A_0-I,&\text{if }k=0.\end{cases}$$

Since $G^*(z)=\sum_{k=1}^{\infty}z^{-k}G_k$, according to the definition of the adjugate matrix we can write

$$\mathrm{adj}(I-G^*(z))=\sum_{k=-\infty}^{0}z^kS_k\qquad(4.36)$$

and define $S_k=0$ for all $k\geqslant 1$. Let

$$[A^*(z)-I]\,\mathrm{adj}(I-G^*(z))=\sum_{k=-\infty}^{\infty}z^kQ_k.$$

Then

$$\sum_{k=-\infty}^{\infty}z^kQ_k=\sum_{k=-\infty}^{\infty}z^k\widetilde{A}_k\sum_{k=-\infty}^{0}z^kS_k,$$

which implies

$$Q_k=\sum_{i+j=k}\widetilde{A}_iS_j\qquad(4.37)$$

with $S_j=0$ for $j\geqslant 1$. Therefore we obtain, for $k\geqslant 1$,

$$\overline{Q}_k=\sum_{j=-\infty}^{0}\overline{A}_{k-j}S_j.$$

If for a matrix sequence $\{C_k\}$ we define the matrix of measures $\mu_C(B)=\sum_{k\in B}C_k$, then it follows from Remark 4.3, Eq. (4.37) and Lemma 4.11 that

$$\lim_{k\to\infty}\frac{\overline{Q}_k}{\overline{p}_k}=W\sum_{k=-\infty}^{0}S_k=W\,\mathrm{adj}(I-G).\qquad(4.38)$$

We now evaluate the asymptotics of the coefficient matrix sequence of the generating function $R^*(z)-I$ according to Eq. (3.19), Eq. (4.38) and Lemma 4.12. Let

$$\det(I-G^*(z))=\sum_{k=-\infty}^{0}z^kg_k.$$

Define $\mu(B)=\sum_{k\in B}g_k$. Since the Markov chain of GI/G/1 type is positive recurrent, applying (1) of Corollary 3.7 we have

$$\sum_{k=-\infty}^{0}g_k=\det(I-G^*(1))=\det(I-G)=0.$$

It is clear that $\sum_{k=-\infty}^{0}kg_k=\dfrac{d}{dz}\{\det(I-G^*(z))\}\Big|_{z=1}$. To compute $\sum_{k=-\infty}^{0}kg_k$, taking derivatives, elementwise, of both sides of the equation

$$\det(I-G^*(z))\,I=\mathrm{adj}(I-G^*(z))\,[I-G^*(z)]$$

leads to

$$\frac{d}{dz}\{\det(I-G^*(z))\}\Big|_{z=1}I=\mathrm{adj}(I-G)\sum_{k=1}^{\infty}kG_k+\frac{d}{dz}\{\mathrm{adj}(I-G^*(z))\}\Big|_{z=1}(I-G).\qquad(4.39)$$

Multiplying Eq. (4.39) by $\theta$ on the left and $e$ on the right, and using $\theta e=1$ and $(I-G)e=0$, it follows from Lemma 4.10 that

$$\sum_{k=-\infty}^{0}kg_k=\frac{d}{dz}\{\det(I-G^*(z))\}\Big|_{z=1}=\theta\,\mathrm{adj}(I-G)\sum_{k=1}^{\infty}kG_ke=\kappa_G\,\frac{\theta(I-R)(I-\Phi_0)\sum_{k=1}^{\infty}kG_ke}{\theta(I-R)(I-\Phi_0)e}.\qquad(4.40)$$

Note that $\sum_{k=-\infty}^{\infty}|k|A_k$ is finite; Lemma 4.9 shows that $\sum_{k=1}^{\infty}kG_k$ is finite. Thus $\sum_{k=-\infty}^{0}kg_k$ is finite and non-zero according to Lemma 4.10 and

$$\sum_{k=1}^{\infty}kG_ke\geqslant Ge=e.$$

Therefore $0<\sum_{k=-\infty}^{0}kg_k<\infty$. Since $\{A_k\}$ is long-tailed with a uniformly dominant probability sequence $\{p_k\}$ and the associated ratio matrix $W$, Lemma 4.13 implies

$$\lim_{k\to\infty}\frac{\overline{R}_k}{\overline{p}_k}\geqslant W(I-\Phi_0)^{-1}.$$

Note that $W\geqslant 0$ with $W\neq 0$, and $(I-\Phi_0)^{-1}\geqslant 0$; it is clear that $W(I-\Phi_0)^{-1}\neq 0$, since the matrix $I-\Phi_0$ is invertible. Hence there exists at least one pair $(i_0,j_0)$ such that the $(i_0,j_0)$th element of the matrix $W(I-\Phi_0)^{-1}$ is positive. Therefore Lemma 4.13 implies $\overline{r}_k(i_0,j_0)>0$ for all $k\geqslant N$, where $N$ is a large enough positive integer. Similarly, for each positive element of the matrix $W(I-\Phi_0)^{-1}$, denoted as the $(i^*,j^*)$th element, we have $\overline{r}_k(i^*,j^*)>0$ for all $k\geqslant N$. Define $U(B)=\sum_{k\in B}R_k$ and $\widehat{U}(B)=\sum_{k\in B}Q_k(I-\Phi_0)^{-1}$. It follows from Eq. (3.19), Eq. (4.38) and Lemma 4.12 that

$$\lim_{k\to\infty}\frac{\overline{R}_k}{\overline{p}_k}=\frac{W\,\mathrm{adj}(I-G)(I-\Phi_0)^{-1}}{\sum_{k=-\infty}^{0}kg_k}.\qquad(4.41)$$

Substituting Eq. (4.33) and Eq. (4.40) into Eq. (4.41) leads to the expression in Eq. (4.34). This completes the proof.
4.6 Subexponential Asymptotics of $\{\pi_k\}$

In this section, we assume that $\varphi_A=1$ and $\varphi_D>1$. Under the condition that $\{A_k\}\in\mathcal{S}^*$ and $\{D_k\}$ is light-tailed, we characterize the subexponential asymptotics of the stationary probability vector $\{\pi_k\}$. At the same time, we explicitly express a uniformly dominant sequence of $\{\pi_k\}$ and the associated ratio vector.

Since

$$\overline{\pi}_k=\pi_0\,\overline{\Big(R_{0}*\sum_{n=0}^{\infty}R^{n*}\Big)}_k$$

for $k\geqslant 1$ according to Lemma 4.2, the tail of the stationary probability vector $\{\pi_k\}$ can be expressed as the tail of a convolution of the two matrix sequences $\{R_{0,k}\}$ and $\{\sum_{n=0}^{\infty}R_k^{n*}\}$. It is well known that the convolution of two long-tailed matrix sequences may not be long-tailed. Therefore, it is possible that the vector sequence $\{\pi_k\}$ is not long-tailed, even though the matrix sequence $\{A_k\}$ is long-tailed. Based on this, the matrix sequence $\{A_k\}$ is restricted to the subexponential class (including the regularly varying class).

According to Lemma 4.2, it is crucial to characterize the subexponential asymptotics of the matrix sequence $\{\sum_{n=0}^{\infty}R_k^{n*}\}$. To do this, we need Lemma 4.3 in Asmussen, Henriksen and Klüppelberg [3], which is restated in the following lemma.

Lemma 4.14 Let $H(x)$ be a matrix of nonnegative functions such that $H=H(\infty)-H(0)$ is strictly substochastic (therefore, the spectral radius of $H$ is strictly less than one). If there exists a probability distribution $F(x)\in\mathcal{S}$ and a finite matrix $L$ such that $\lim_{x\to\infty}\overline{H}(x)/\overline{F}(x)=L$, then

$$\lim_{x\to\infty}\frac{\sum_{n=0}^{\infty}\overline{H^{n*}}(x)}{\overline{F}(x)}=(I-H)^{-1}L(I-H)^{-1}.$$
Constructive Computation in Stochastic Models with Applications
The following lemma characterizes subexponential asymptotics for the matrix sequence $\left\{\sum_{n=0}^{\infty} R_k^{*n}\right\}$.

Lemma 4.15 Suppose that the Markov chain of GI/G/1 type is positive recurrent, $\varphi_A = 1$ and $\sum_{k=-\infty}^{\infty} |k| A_k$ is finite. If $\{A_k\} \in \mathcal{S}^*$ with a uniformly dominant probability sequence $\{p_k\}$ and the associated ratio matrix $W$, then $\left\{\sum_{n=0}^{\infty} R_k^{*n}\right\} \in \mathcal{S}$ and
$$\lim_{k \to \infty} \frac{\displaystyle\sum_{n=0}^{\infty} R^{*n}_{\leqslant k}}{p_{\leqslant k}} = \frac{(I-R)^{-1} W e^{\mathrm{T}}}{\theta (I-R)(I-\Phi_0) \displaystyle\sum_{j=1}^{\infty} j G_j e}. \qquad (4.42)$$
Proof Since $\{A_k\} \in \mathcal{S}^*$, $p^{(I)}_k = \dfrac{1}{\mu_p}\, p_{\leqslant k}$ for $k \geqslant 1$, where
$$\mu_p = \sum_{k=1}^{\infty} k p_k = \sum_{k=1}^{\infty} p_{\leqslant k} < \infty$$
according to the assumption that $\sum_{k=-\infty}^{\infty} |k| A_k$ is finite. It follows from Theorem 4.7 that
$$\lim_{k \to \infty} \frac{R_{\leqslant k}}{p^{(I)}_{\leqslant k}} = \lim_{k \to \infty} \frac{R_{\leqslant k}}{p_{\leqslant k}}\, \mu_p = \mu_p L, \qquad (4.43)$$
where
$$L = \frac{W e^{\mathrm{T}} (I-R)}{\theta (I-R)(I-\Phi_0) \displaystyle\sum_{j=1}^{\infty} j G_j e}.$$
Let $\check{G}_k = \Delta^{-1} R_k^{\mathrm{T}} \Delta$ for $k \geqslant 1$, where $\Delta = \mathrm{diag}(\theta_1, \theta_2, \ldots, \theta_m)$. Since $\theta R \lneqq \theta$, it is clear that $\check{G} = \sum_{k=1}^{\infty} \check{G}_k$ is strictly substochastic, since
$$\check{G} e = \sum_{k=1}^{\infty} \check{G}_k e = \sum_{k=1}^{\infty} \Delta^{-1} R_k^{\mathrm{T}} \Delta e = \Delta^{-1} (\theta R)^{\mathrm{T}} \lneqq \Delta^{-1} \theta^{\mathrm{T}} = e.$$
It follows from Eq. (4.43) that $\lim_{k \to \infty} \dfrac{\check{G}_{\leqslant k}}{p^{(I)}_{\leqslant k}} = \mu_p \Delta^{-1} L^{\mathrm{T}} \Delta$, and from Proposition B.1 that $\{p^{(I)}_k\}$ is a probability sequence in $\mathcal{S}$. Therefore, using Lemma 4.14 we obtain
$$\lim_{k \to \infty} \frac{\displaystyle\sum_{n=0}^{\infty} \check{G}^{*n}_{\leqslant k}}{p^{(I)}_{\leqslant k}} = (I - \check{G})^{-1} \mu_p \Delta^{-1} L^{\mathrm{T}} \Delta (I - \check{G})^{-1}. \qquad (4.44)$$
Since the Markov chain of GI/G/1 type is positive recurrent, $I-R$ is invertible. It is easy to see that $(I-R)^{-1} L (I-R)^{-1} \neq 0$ due to $L \neq 0$. Therefore, it follows from Eq. (4.44) that
$$\lim_{k \to \infty} \frac{\displaystyle\sum_{n=0}^{\infty} R^{*n}_{\leqslant k}}{p^{(I)}_{\leqslant k}} = \mu_p (I-R)^{-1} L (I-R)^{-1} = \frac{\mu_p (I-R)^{-1} W e^{\mathrm{T}}}{\theta (I-R)(I-\Phi_0) \displaystyle\sum_{j=1}^{\infty} j G_j e}. \qquad (4.45)$$
Notice that
$$\overline{\sum_{n=0}^{\infty} R^{*n}}_{\leqslant k} = \sum_{n=0}^{\infty} \overline{R^{*n}}_{\leqslant k},$$
it follows from Eq. (4.45) that
$$\lim_{k \to \infty} \frac{\displaystyle\sum_{n=0}^{\infty} R^{*n}_{\leqslant k}}{p_{\leqslant k}} = \frac{1}{\mu_p} \lim_{k \to \infty} \frac{\displaystyle\sum_{n=0}^{\infty} R^{*n}_{\leqslant k}}{p^{(I)}_{\leqslant k}} = \frac{(I-R)^{-1} W e^{\mathrm{T}}}{\theta (I-R)(I-\Phi_0) \displaystyle\sum_{j=1}^{\infty} j G_j e}.$$
Therefore, $\left\{\sum_{n=0}^{\infty} R_k^{*n}\right\} \in \mathcal{S}$ according to Proposition B.4. This completes the proof.

For simplicity of description, we need the asymptotic assumption: if $\varphi_D > 1$, then the matrix sequence $\{R_{0,k}\}$ is light-tailed. We only consider the light-tailed case in which there exists a uniformly dominant sequence $\{d_k\}$ and the associated ratio matrix $D$ such that
$$\lim_{k \to \infty} \frac{R_{0,\leqslant k}}{d_{\leqslant k}} = D. \qquad (4.46)$$
The following theorem characterizes subexponential asymptotics of $\{\pi_k\}$ for the case: $\{A_k\} \in \mathcal{S}^*$ and $\{D_k\}$ is light-tailed.

Theorem 4.8 Suppose that the Markov chain of GI/G/1 type is positive recurrent, and $\sum_{k=-\infty}^{\infty} |k| A_k$ is finite. If $\varphi_D > 1$, the light-tailed $\{R_{0,k}\}$ has a uniformly dominant sequence $\{d_k\}$ with associated ratio matrix $D$, and if $\{A_k\} \in \mathcal{S}^*$ with a uniformly dominant sequence $\{p_k\}$ and associated ratio matrix $W$, then $\{\pi_k\} \in \mathcal{S}$ and
$$\lim_{k \to \infty} \frac{\pi_{\leqslant k}}{p_{\leqslant k}} = \frac{\pi_0 D (I-R)^{-1} W e^{\mathrm{T}}}{\theta (I-R)(I-\Phi_0) \displaystyle\sum_{j=1}^{\infty} j G_j e}.$$
Proof If $\{A_k\} \in \mathcal{S}^*$ with a uniformly dominant sequence $\{p_k\}$ and associated ratio matrix $W$, then
$$\lim_{k \to \infty} \frac{\displaystyle\sum_{n=0}^{\infty} R^{*n}_{\leqslant k}}{p_{\leqslant k}} = \frac{(I-R)^{-1} W e^{\mathrm{T}}}{\theta (I-R)(I-\Phi_0) \displaystyle\sum_{j=1}^{\infty} j G_j e}$$
according to Lemma 4.15. If $\varphi_D > 1$, then $\{R_{0,k}\}$ is light-tailed. When $\{R_{0,k}\}$ has a uniformly dominant sequence $\{d_k\}$ and associated ratio matrix $D$, it is clear that $d_{\leqslant k} = o\left(p^{(I)}_{\leqslant k}\right)$, since $\{p^{(I)}_k\} \in \mathcal{S}$ according to $\{A_k\} \in \mathcal{S}^*$. Therefore, it follows from Lemma 4.2 and Proposition B.7 that
$$\lim_{k \to \infty} \frac{\pi_{\leqslant k}}{p_{\leqslant k}} = \frac{\pi_0 D (I-R)^{-1} W e^{\mathrm{T}}}{\theta (I-R)(I-\Phi_0) \displaystyle\sum_{j=1}^{\infty} j G_j e}.$$
Therefore, $\{\pi_k\} \in \mathcal{S}$. This completes the proof.
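The integrated-tail sequence $p^{(I)}$ used in the proofs above decays one power more slowly than $\{p_k\}$ itself; this is easy to check numerically. The sketch below uses an illustrative sequence of our own choosing ($p_k \propto k^{-\alpha}$ with $\alpha = 3$), not one taken from the text, and verifies that the tail sum $\sum_{l > k} p_l$ behaves like $k^{-(\alpha-1)}/(\alpha-1)$ up to the normalizing constant.

```python
import numpy as np

# Numeric check of the integrated-tail asymptotic: for p_k proportional to
# k^{-alpha} (alpha = 3 here), sum_{l > k} p_l ~ k^{-(alpha-1)} / (alpha - 1).
alpha = 3.0
K = 200_000                          # truncation level of the sequence
k = np.arange(1, K + 1, dtype=float)
w = k**-alpha
p = w / w.sum()                      # probability sequence {p_k}

mu_p = (k * p).sum()                 # finite mean, since alpha >= 2
k0 = 1000
tail = p[k0:].sum()                  # sum_{l > k0} p_l
predicted = k0**-(alpha - 1.0) / ((alpha - 1.0) * w.sum())

ratio = tail / predicted             # should be close to 1
```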
4.6.1 Markov Chains of M/G/1 Type

We consider a Markov chain of M/G/1 type with $\{A_k\} \in \mathcal{S}^*$ and $\varphi_D > 1$. It is clear that $G_j = 0$ and $\Phi_j = 0$ for $j \geqslant 2$. In this case, we have
$$\lim_{k \to \infty} \frac{\pi_{\leqslant k}}{p_{\leqslant k}} = \frac{\pi_0 V (I-\Phi_0)^{-1} (I-R)^{-1} W e^{\mathrm{T}}}{\theta (I-R)(I-\Phi_0) G_1 e}.$$
4.6.2 Regularly Varying Asymptotics of $\{\pi_k\}$
If the matrix sequence $\{A_k\}$ is regularly varying and the matrix sequence $\{D_k\}$ is light-tailed, then the stationary probability vector $\{\pi_k\}$ is regularly varying. At the same time, we provide explicit expressions for a uniformly dominant sequence of $\{\pi_k\}$ and the associated ratio vector.

Theorem 4.9 Suppose that the Markov chain of GI/G/1 type is positive recurrent, and $\sum_{k=1}^{\infty} k D_k$ and $\sum_{k=-\infty}^{\infty} |k| A_k$ are both finite. If $\varphi_D > 1$ and $\{A_k\} \in \mathcal{R}_{-\alpha}$ for $\alpha \geqslant 2$, then $\{\pi_k\} \in \mathcal{R}_{-(\alpha-1)}$.

Proof If $\{A_k\} \in \mathcal{R}_{-\alpha}$ with a uniformly dominant sequence $\{p_k\}$ and associated ratio matrix $W$, then $\{p_k\} \in \mathcal{R}_{-\alpha}$. Thus $p_{\leqslant k} = k^{-\alpha} l_{\leqslant k}$ for all $k \geqslant 1$, where $\{l_k\} \in \mathcal{R}_0$. Since $\sum_{k=-\infty}^{\infty} |k| A_k$ is finite, it is clear that $\mu_p = \sum_{k=0}^{\infty} k p_k < \infty$ and $\alpha \geqslant 2$ according to Proposition 1.3.6 in Bingham, Goldie and Teugels [7]. Let $p^{(I)}_k = \dfrac{1}{\mu_p}\, p_{\leqslant k}$. It follows from Proposition 1.5.10 in Bingham, Goldie and Teugels [7] that as $k \to \infty$,
$$p^{(I)}_{\leqslant k} = \frac{1}{\mu_p} \sum_{l=k+1}^{\infty} p_{\leqslant l} = \frac{1}{\mu_p} \sum_{l=k+1}^{\infty} l^{-\alpha} l_{\leqslant l} \sim \frac{1}{\mu_p (\alpha - 1)} (k+1)^{-(\alpha-1)} l_{\leqslant k+1}.$$
Hence $\{p^{(I)}_k\} \in \mathcal{R}_{-(\alpha-1)}$. It follows from Lemma 4.15 that
$$\lim_{k \to \infty} \frac{\displaystyle\sum_{n=0}^{\infty} R^{*n}_{\leqslant k}}{p^{(I)}_{\leqslant k}} = \frac{\mu_p (\alpha - 1) (I-R)^{-1} W e^{\mathrm{T}}}{\theta (I-R)(I-\Phi_0) \displaystyle\sum_{j=1}^{\infty} j G_j e}.$$
Therefore, $\left\{\sum_{n=0}^{\infty} R_k^{*n}\right\} \in \mathcal{R}_{-(\alpha-1)}$. According to (2) of Theorem 4.8, we obtain that
$$\pi_{\leqslant k} \sim k^{-(\alpha-1)} l^{(1)}_{\leqslant k}\, \pi_0 \Psi_2 \Psi_1 \quad \text{as } k \to \infty.$$
This completes the proof.

4.7 Notes in the Literature
Asymptotic analysis of block-structured Markov chains is an interesting topic and has been studied for many years. For light-tailed asymptotic behavior of stationary probability vectors, it might have been inspired by Takahashi [35] and Neuts and Takahashi [30]. Neuts [27] provided an excellent overview of the asymptotic behavior of Markov chains of GI/M/1 type. To establish useful relations between the light-tailed asymptotics and the parameters of a concrete queueing model, Neuts [28] discussed the caudal characteristic curves for some queues by means of the matrix-geometric solution. Bean, Li and Taylor [6] analyzed the caudal characteristics for QBD processes and also for Markov chains of GI/M/1 type. Fujimoto, Takahashi and Makimoto [13] obtained an interesting result on the asymptotics of QBD processes with both infinite levels and infinite phases. Subsequent papers have been published on this theme, e.g., see Miyazawa and Zhao [25], Kroese, Scheinhardt and Taylor [20], Haque, Zhao and Liu [15], and Li, Miyazawa and Zhao [21].

In contrast to Markov chains of GI/M/1 type, it is more difficult to analyze the asymptotic behavior of stationary probability vectors of Markov chains of M/G/1 type. This difficulty is due to two basic facts: the matrix-iterative solution makes such an asymptotic analysis more difficult; and the stationary probability vectors of Markov chains of M/G/1 type can be either light-tailed or heavy-tailed. The light-tailed asymptotics of stationary probability vectors of Markov chains of M/G/1 type was studied by Falkenberg [11], Abate, Choudhury and Whitt [1], Choudhury and Whitt [9], Møller [26], Takine [36], and Li and Zhao [23]. For subexponential asymptotics of stationary queue lengths, Resnick and Samorodnitsky [31] analyzed heavy-tailed asymptotic behavior for the stationary queue length of a G/M/1 queue in terms of stochastic comparison when the arrival process is long range dependent.

Based on a property of the generating functions of regularly varying sequences, Roughan, Veitch and Rumsewicz [32] derived power law asymptotics for the stationary queue length of an M/G/1 queue with power law service times. According to the distributional version of Little's law, Asmussen, Klüppelberg and Sigman [4] studied subexponential asymptotics for the stationary queue length of a GI/G/1 queue with subexponential service times. Using the Mellin transform, Jacquet [16] provided results on polynomial tails for the stationary queue length of a single-server queue when the arrival process contains a finite or infinite number of on-off input sources. Shang, Liu and Li [33] studied the subexponential asymptotics of the stationary queue length for an M/G/1 retrial queue. Li, Liu and Shang [22] analyzed the regularly varying tail of the stationary buffer content for an infinite-buffer fluid queue driven by an M/G/1 queue. Borovkov and Korshunov [8], Jelenković and Lazar [17], Foss and Zachary [12] and Zachary [38] studied heavy-tailed asymptotics for random walks.

Some researchers have studied heavy-tailed asymptotics of the stationary probability vectors. For a Markov chain of GI/G/1 type with subexponential increments and with the repeating and boundary matrix sequences being tail-equivalent, Asmussen and Møller [5] discussed subexponential asymptotics for the stationary level process. Takine [36] and Li and Zhao [24] discussed heavy-tailed asymptotics for the stationary probability vectors of Markov chains of GI/G/1 type. Kim and Sohraby [18] studied tail behavior of the queue size and waiting time in a queue with discrete autoregressive arrivals, while Kim and Kim [19] analyzed regularly varying tails in a queue with discrete autoregressive arrivals of order p. This chapter mainly refers to Li and Zhao [23,24] and Jelenković and Lazar [17].
Problems

4.1 For the MAP/G/1 queue, prove that (1) the stationary queue length is light-tailed if and only if the service time distribution is light-tailed; and (2) the stationary queue length is subexponential if and only if the service time distribution is subexponential.

4.2 For the BMAP/M/1 queue with the BMAP expression $\{D_k\}$, if the matrix sequence $\{D_k\}$ is regularly varying of index $\alpha$ for $\alpha \geqslant 2$, then compute the tails of the stationary queue length and the stationary waiting time.

4.3 For the $M^X/M^X/1$ queue, if the bulk arrival distribution and the bulk service distribution are regularly varying of indices $\alpha$ and $\beta$ for $\alpha, \beta \geqslant 2$, respectively, then compute the tails of the stationary queue length and the stationary waiting time.

4.4 For the M/G/1 retrial queue, if the service time distribution is regularly varying of index $\alpha$ for $\alpha \geqslant 2$, then compute the tails of the stationary queue length and the stationary waiting time.

4.5 For the M/G(M/G)/1 queue with a repairable server, if the service time distribution and the repair time distribution are regularly varying of indices $\alpha$ and $\beta$ for $\alpha, \beta \geqslant 2$, respectively, then compute the tails of the stationary queue length and the stationary waiting time.

4.6 For the $M^X$/G/1 queue with server multiple vacations, if the arrival bulk size distribution, the service time distribution and the vacation time distribution are all regularly varying, then compute the tails of the stationary queue length and the stationary waiting time.

4.7 For the BMAP/G/1 queue with server single vacation, if the arrival bulk size distribution, the service time distribution and the vacation time distribution are all regularly varying, then compute the tails of the stationary queue length and the stationary waiting time.

4.8 Consider a double queue that arises when each arriving customer simultaneously places two demands handled by two servers independently. Customer arrivals form a Poisson process with rate $\lambda > 0$. Server one has regularly varying service times with probability distribution $G(x)$ of index $\alpha$ for $\alpha > 2$, while server two has exponential service times with rate $\mu > 0$. Compute the tails of the two-dimensional stationary joint queue length, the first stationary queue length and the second stationary queue length.

4.9 Consider a queueing system consisting of two parallel servers, each of which has a queue of its own. Customer arrivals form a Poisson process with rate $\lambda > 0$. On arrival the customer joins the shorter queue. When both queues have equal length, he joins the first queue with probability $\theta$ and the second one with probability $1-\theta$. Server one has regularly varying service times with probability distribution $G(x)$ of index $\alpha$ for $\alpha > 2$, while server two has exponential service times with rate $\mu > 0$. Compute the tails of the two-dimensional stationary joint queue length, the first stationary queue length and the second stationary queue length.

4.10 A queueing system consists of a server and two queues formed by two types of customers, respectively. Arrivals of the two types of customers form two Poisson processes with rates $\lambda_1, \lambda_2 > 0$. The server serves a customer in the longer queue. When both queues have equal lengths, he serves the first queue with probability $\theta$ and the second one with probability $1-\theta$. (1) If the service times for the first type of customers are regularly varying with probability distribution $G(x)$ of index $\alpha$ for $\alpha > 2$, while the service times for the second type are exponential with rate $\mu > 0$, compute the tails of the two-dimensional stationary joint queue length, the first stationary queue length and the second stationary queue length. (2) If the service times for the two types of customers are both regularly varying with probability distributions $F(x)$ of index $\alpha$ and $G(x)$ of index $\beta$ for $\alpha, \beta > 2$, respectively, compute the tails of the two-dimensional stationary joint queue length, the first stationary queue length and the second stationary queue length.

4.11 A queueing system consists of a server and two queues formed by two types of customers, respectively. Arrivals of the two types of customers form two Poisson processes with rates $\lambda_1, \lambda_2 > 0$. The server alternately serves customers between both queues, for example, queue one to queue two, queue two to queue one, and so on. (1) If the service times for the first type of customers are regularly varying with probability distribution $G(x)$ of index $\alpha$ for $\alpha > 2$, while the service times for the second type are exponential with rate $\mu > 0$, compute the tails of the two-dimensional stationary joint queue length, the first stationary queue length and the second stationary queue length. (2) If the service times for the two types of customers are both regularly varying with probability distributions $F(x)$ of index $\alpha$ and $G(x)$ of index $\beta$ for $\alpha, \beta > 2$, respectively, compute the tails of the two-dimensional stationary joint queue length, the first stationary queue length and the second stationary queue length.

4.12 We consider an irreducible QBD process with $N$ levels whose infinitesimal generator is given by
$$Q = \begin{pmatrix}
B_1 & B_0 & & & & \\
B_2 & A_1 & A_0 & & & \\
& A_2 & A_1 & A_0 & & \\
& & \ddots & \ddots & \ddots & \\
& & & A_2 & A_1 & C_0 \\
& & & & C_2 & C_1
\end{pmatrix}.$$
Let $A = A_0 + A_1 + A_2$ be irreducible and stochastic, and $\theta$ the stationary probability vector of the Markov chain $A$ with finite states. It is clear that the QBD process is positive recurrent, and its stationary probability vector is expressed as $\pi = (\pi_1, \pi_2, \ldots, \pi_N)$. Please discuss the limit $\lim_{N \to \infty} \pi_N$ for each of the following three cases: (1) $\theta A_0 e < \theta A_2 e$, (2) $\theta A_0 e > \theta A_2 e$, and (3) $\theta A_0 e = \theta A_2 e$.

4.13 We consider a BMAP/G/1/N queue. Let $\pi = (\pi_1, \pi_2, \ldots, \pi_N)$ be the stationary distribution of the queue length. Please discuss the limit $\lim_{N \to \infty} \pi_N$ for each of the following three cases: (1) $\rho < 1$, (2) $\rho > 1$, and (3) $\rho = 1$, where $\rho = \lambda/\mu$.
References

1. Abate J., G. Choudhury and W. Whitt (1994). Asymptotics for steady-state tail probabilities in structured Markov queueing models. Stochastic Models 10: 99–143
2. Abate J. and W. Whitt (1997). Asymptotics for M/G/1 low-priority waiting-time tail probabilities. Queueing Systems 25: 173–223
3. Asmussen S., L.F. Henriksen and C. Klüppelberg (1994). Large claims approximations for risk processes in a Markovian environment. Stochastic Processes and Their Applications 54: 29–43
4. Asmussen S., C. Klüppelberg and K. Sigman (1999). Sampling at subexponential times, with queueing applications. Stochastic Processes and Their Applications 79: 265–286
5. Asmussen S. and J.R. Møller (1999). Tail asymptotics for M/G/1 type queueing processes with subexponential increments. Queueing Systems 33: 153–176
6. Bean N.G., J.M. Li and P.G. Taylor (2000). Caudal characteristics of QBDs with decomposable phase spaces. In Matrix-Analytic Methods, G. Latouche and P.G. Taylor (eds.), World Scientific, 37–55
7. Bingham N.H., C.M. Goldie and J.L. Teugels (1987). Regular Variation, Cambridge University Press
8. Borovkov A.A. and D.A. Korshunov (1996). Probabilities of large deviations for one-dimensional Markov chains. I. Stationary distributions. Theory of Probability and its Applications 41: 1–24
9. Choudhury G. and W. Whitt (1994). Heavy-traffic asymptotic expansions for the asymptotic decay rates in the BMAP/G/1 queue. Stochastic Models 10: 453–498
10. Embrechts P. and E. Omey (1984). A property of longtailed distributions. Journal of Applied Probability 21: 80–87
11. Falkenberg E. (1994). On the asymptotic behavior of the stationary distribution of Markov chains of M/G/1 type. Stochastic Models 10: 75–97
12. Foss S. and S. Zachary (2002). Asymptotics for the maximum of a modulated random walk with heavy-tailed increments. In Analytic Methods in Applied Probability, Amer. Math. Soc. Transl. Ser. 207, Amer. Math. Soc.: Providence RI, 37–52
13. Fujimoto K., Y. Takahashi and N. Makimoto (1998). Asymptotic properties of stationary distributions in two-stage tandem queueing systems. Journal of the Operations Research Society of Japan 41: 118–141
14. Goldie C.M. and C. Klüppelberg (1998). Subexponential distributions. In A Practical Guide to Heavy Tails: Statistical Techniques for Analyzing Heavy Tailed Distributions, R. Adler, R. Feldman and M.S. Taqqu (eds.), Birkhäuser: Boston, 435–459
15. Haque L., Y.Q. Zhao and L. Liu (2005). Sufficient conditions for a geometric tail in a QBD process with many countable levels and phases. Stochastic Models 21: 77–99
16. Jacquet P. (2000). Traffic and queueing from an unbounded set of independent memoryless on/off sources. In Self-Similar Network Traffic and Performance Evaluation, K. Park and W. Willinger (eds.), John Wiley & Sons, 269–284
17. Jelenković P.R. and A.A. Lazar (1998). Subexponential asymptotics of a Markov-modulated random walk with queueing applications. Journal of Applied Probability 35: 325–347
18. Kim B. and K. Sohraby (2006). Tail behavior of the queue size and waiting time in a queue with discrete autoregressive arrivals. Advances in Applied Probability 38: 1116–1131
19. Kim J. and B. Kim (2007). Regularly varying tails in a queue with discrete autoregressive arrivals of order p. Queueing Systems 56: 93–102
20. Kroese D.P., W.R.W. Scheinhardt and P.G. Taylor (2004). Spectral properties of the tandem Jackson network, seen as a quasi-birth-and-death process. The Annals of Applied Probability 14: 2057–2089
21. Li H., M. Miyazawa and Y.Q. Zhao (2007). Geometric decay in a QBD process with countable background states with applications to a join-the-shortest-queue model. Stochastic Models 23: 413–438
22. Li Q.L., L. Liu and W. Shang (2008). Heavy-tailed asymptotics for a fluid model driven by an M/G/1 queue. Performance Evaluation 65: 227–240
23. Li Q.L. and Y.Q. Zhao (2005). Light-tailed asymptotics of stationary probability vectors of Markov chains of GI/G/1 type. Advances in Applied Probability 37: 1075–1093
24. Li Q.L. and Y.Q. Zhao (2005). Heavy-tailed asymptotics of stationary probability vectors of Markov chains of GI/G/1 type. Advances in Applied Probability 37: 482–509
25. Miyazawa M. and Y.Q. Zhao (2004). The stationary tail asymptotics in the GI/G/1 type queue with countably many background states. Advances in Applied Probability 36: 1231–1251
26. Møller J.R. (2001). Tail asymptotics for M/G/1 type queueing processes with light-tailed increments. Operations Research Letters 28: 181–185
27. Neuts M.F. (1981). Matrix-Geometric Solutions in Stochastic Models: An Algorithmic Approach, The Johns Hopkins University Press: Baltimore
28. Neuts M.F. (1986). The caudal characteristic curve of queues. Advances in Applied Probability 18: 221–254
29. Neuts M.F. (1989). Structured Stochastic Matrices of M/G/1 Type and Their Applications, Marcel Dekker: New York
30. Neuts M.F. and Y. Takahashi (1981). Asymptotic behavior of the stationary distributions in the GI/PH/c queue with heterogeneous servers. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 57: 441–452
31. Resnick S. and G. Samorodnitsky (1997). Performance decay in a single server exponential queueing model with long range dependence. Operations Research 45: 235–243
32. Roughan M., D. Veitch and M.P. Rumsewicz (1997). Numerical inversion of probability generating functions of power-law queues. Technical Report 0040, SERC, Software Engineering Research Centre, Carlton, Australia
33. Shang W., L. Liu and Q.L. Li (2006). Tail asymptotics for the queue length in an M/G/1 retrial queue. Queueing Systems 52: 193–198
34. Sigman K. (1999). Appendix: A primer on heavy-tailed distributions. Queueing Systems 33: 261–275
35. Takahashi Y. (1981). Asymptotic exponentiality of the tail of the waiting-time distribution in a PH/PH/c queue. Advances in Applied Probability 13: 619–630
36. Takine T. (2004). Geometric and subexponential asymptotics of Markov chains of M/G/1 type. Mathematics of Operations Research 29: 624–648
37. Wilf H.S. (1994). Generatingfunctionology (second edition), Academic Press
38. Zachary S. (2004). A note on Veraverbeke's theorem. Queueing Systems 46: 9–14
5 Markov Chains on Continuous State Space

Quan-Lin Li
Department of Industrial Engineering, Tsinghua University
Beijing 100084, P.R. China
E-mail:
[email protected]

Abstract In this chapter, we discuss Markov chains on continuous state space. We first analyze a discrete-time Markov chain on continuous state space, and then discuss a discrete-time Markov chain on a bivariate state space. Applying the censoring technique, we provide expressions for the RG-factorizations, which are used to derive the stationary probability of the Markov chain. Further, we consider a continuous-time Markov chain on continuous state space. Specifically, we deal with a continuous-time level-dependent QBD process with continuous phase variable, and provide orthonormal representations for the R-, U- and G-measures, which lead to the matrix-structured computation of the stationary probability. As an application, we introduce the continuous-phase type (CPH) distribution and the continuous-phase Markovian arrival process (CMAP), and then analyze a CMAP/CPH/1 queue. Finally, we study a piecewise deterministic Markov process, which is applied to deal with more general queues such as the GI/G/c queue.

Keywords Markov chains on continuous state space, QBD process with continuous phase variable, orthonormal representation, continuous-phase type distribution, continuous-phase Markovian arrival process, piecewise deterministic Markov process.
In this chapter, we discuss Markov chains on continuous state space, which are a useful mathematical tool in the study of stochastic models. We first analyze a discrete-time Markov chain on continuous state space, and then discuss a discrete-time Markov chain on a bivariate state space $\Omega = N \times R_+$, where $N = \{0, 1, 2, \ldots\}$ and $R_+ = [0, \infty)$. Applying the censoring technique, we provide expressions for the RG-factorizations, which are used to derive the stationary distribution. As an example, we study the GI/G/1 queue in terms of the discrete-time Markov chain on continuous state space. Further, we consider a continuous-time Markov chain on continuous state space, and specifically deal with a continuous-time level-dependent
QBD process with continuous phase variable, and provide the RG-factorizations. In $L^2([0,\infty)^2)$, which is the space of square integrable bivariate real functions, we provide orthonormal representations for the R-, U- and G-measures, which lead to the matrix structure of the RG-factorizations. Based on this, we introduce the continuous-phase type (CPH) distribution and the continuous-phase Markovian arrival process (CMAP), and then analyze a CMAP/CPH/1 queue. Finally, we study a piecewise deterministic Markov process in order to deal with more general queueing systems such as the GI/G/c queue.

This chapter is organized as follows. Section 5.1 simply defines a discrete-time Markov chain on continuous state space, and discusses its irreducibility and aperiodicity. Section 5.2 extends the censoring technique to deal with the Markov chain on the bivariate state space $\{0, 1, 2, \ldots\} \times [0, \infty)$, derives the RG-factorizations and also expresses the stationary probability distribution by means of an algebraic algorithm with orthonormal representations. Section 5.3 applies the RG-factorizations to discuss the GI/G/1 queue in terms of the Markov chain of GI/M/1 type and the Markov chain of M/G/1 type, respectively. Section 5.4 defines a continuous-time Markov chain on continuous state space, and expresses the stationary probability distribution. Section 5.5 deals with the continuous-time QBD process with continuous phase variable, provides the UL-type RG-factorization and expresses the stationary probability distribution as an operator-multiplicative solution. In Section 5.6, if the matrix of generalized density functions of the continuous-time QBD process with continuous phase variable is in $L^2([0,\infty)^2)$, then the integral equations given in Section 5.5 can be converted into the associated matrix equations. Section 5.7 introduces the CPH distribution and the continuous-phase Markovian arrival process (CMAP), and then analyzes a CMAP/CPH/1 queue. Section 5.8 studies a piecewise deterministic Markov process, which makes it possible to deal with more general queueing systems such as the GI/G/c queue. Finally, Section 5.9 provides some notes to the references on Markov chains on continuous state space.
5.1 Discrete-Time Markov Chains

In this section, we define a discrete-time Markov chain on continuous state space, and briefly discuss its irreducibility and aperiodicity.

Let $\sigma(D)$ be a Borel $\sigma$-algebra on the interval $D$; for example, $D$ is either $[0, \infty)$, $[0, a)$ or $(b, \infty)$. We first introduce the identity kernel as follows. For a nonnegative kernel $F(x, A)$ with $x \geqslant 0$ and $A \in \sigma(\Omega)$, if the bivariate function matrix $I(x, y)$ satisfies, for all $x \geqslant 0$ and $A \in \sigma(\Omega)$,
$$\int_0^{\infty} I(x, y) F(y, A) \,\mathrm{d}y = F(x, A)$$
and
$$\int_0^{\infty} F(x, y) I(y, A) \,\mathrm{d}y = F(x, A),$$
then $I(x, A)$ is called the identity kernel of $F(x, A)$. In this chapter, we always use the notation: $F^0(x, y) = I(x, y)$; and for $n \geqslant 1$,
$$F^n(x, y) = \int_0^{\infty} F^{n-1}(x, z) F(z, y) \,\mathrm{d}z.$$
It is easy to check that
$$F^n(x, y) = \int_0^{\infty} F(x, z) F^{n-1}(z, y) \,\mathrm{d}z, \quad n \geqslant 1.$$
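The kernel powers $F^n$ can be approximated numerically by discretizing $[0, \infty)$ (truncated to a finite interval) and replacing each integral with a quadrature rule; kernel composition then becomes a weighted matrix product. The sketch below uses an illustrative Gaussian-type kernel of our own choosing and confirms the identity above, i.e. that composing on the left or on the right gives the same $F^n$.

```python
import numpy as np

# Discretize a kernel F(x, y) on a midpoint grid of [0, L]; kernel powers
# become matrix products weighted by the grid step h (midpoint rule).
L, m = 10.0, 400
h = L / m
x = (np.arange(m) + 0.5) * h

# Illustrative nonnegative kernel (our own choice for this sketch).
F = np.exp(-(x[:, None] - 0.5 * x[None, :])**2)

def compose(F1, F2, h):
    """(F1 . F2)(x, y) = integral of F1(x, z) F2(z, y) dz, midpoint rule."""
    return (F1 @ F2) * h

F2 = compose(F, F, h)            # F^2
F3_left = compose(F2, F, h)      # F^{n-1} composed with F
F3_right = compose(F, F2, h)     # F composed with F^{n-1}

# The two definitions of F^n agree, by associativity of composition.
err = np.max(np.abs(F3_left - F3_right))
```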
A Markov chain is a sequence of random variables $X_0, X_1, X_2, \ldots$, taking values in the state space $\Omega$. A basic property of Markov chains is that the past is conditionally independent of the future, given the present. In the previous chapters, we have discussed Markov chains whose state space $\Omega$ is discrete, while this chapter studies a Markov chain whose state space $\Omega$ is either continuous or semi-continuous. In this case, the Markov chain is governed by a transition kernel $K(x, A)$ for $x \in \Omega$ and $A \subset \Omega$. Let $\sigma(\Omega)$ be a Borel $\sigma$-algebra on $\Omega$. Then the transition kernel $K: \Omega \times \sigma(\Omega) \to [0, 1]$ defines a Markov chain $\{X_k\}$ through the relation
$$P\{X_{k+1} \in A \mid X_k, X_{k-1}, \ldots, X_0\} = K(X_k, A).$$
It is clear that $K(x, A)$ denotes the probability to move in one step from the state $x$ into the state set $A$. The transition kernel $K(x, A)$ has two main properties: (1) $K(x, \cdot)$ is a probability measure for each $x \in \Omega$, and (2) $K(\cdot, A)$ is measurable for each $A \subset \Omega$. The two properties are explained as follows. The first property shows that $K(x, \cdot)$ defines a probability law according to which the Markov chain makes its next move, given that the Markov chain is currently at $x$. The second property indicates that we can always evaluate the probability that the Markov chain will jump into some state set $A$ from every possible state $x$.

If there exists a function $K(x, y)$ such that for all $x \in \Omega$ and $A \subset \Omega$,
$$K(x, A) = \int_A K(x, y) \,\mathrm{d}y,$$
then $K(x, y)$ is said to be a density of the transition kernel $K(x, A)$. We write $K^1(x, y) = K(x, y)$ and, for $n \geqslant 2$,
$$K^n(x, y) = \int_{\Omega} K^{n-1}(x, z) K(z, y) \,\mathrm{d}z = \int_{\Omega} K(x, z) K^{n-1}(z, y) \,\mathrm{d}z.$$
If the kernel $K(x, A)$ is well behaved, then the Markov chain has a stationary distribution $\pi(x)$ such that
$$\pi(y) = \int_{\Omega} \pi(x) K(x, y) \,\mathrm{d}x, \quad y \in \Omega, \qquad (5.1)$$
or
$$\pi(A) = \int_{\Omega} \pi(x) K(x, A) \,\mathrm{d}x, \quad A \subset \Omega. \qquad (5.2)$$
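A stationary density satisfying Eq. (5.1) can be approximated by discretizing the kernel density and iterating $\pi \leftarrow \pi K$. The kernel below (a truncated Gaussian autoregressive move) is an illustrative choice of our own, not taken from the text:

```python
import numpy as np

# Approximate the stationary density pi(y) of Eq. (5.1) by discretization
# and power iteration; the kernel is illustrative only.
L, m = 12.0, 600
h = L / m
x = (np.arange(m) + 0.5) * h

# Kernel density K(x, y): from state x, jump to y ~ Normal(0.5 x, 1),
# truncated to [0, L] and renormalized so each row integrates to one.
K = np.exp(-0.5 * (x[None, :] - 0.5 * x[:, None])**2)
K /= K.sum(axis=1, keepdims=True) * h

pi = np.full(m, 1.0 / L)              # start from the uniform density
for _ in range(200):
    pi = (pi @ K) * h                 # pi(y) = integral of pi(x) K(x, y) dx

residual = np.max(np.abs((pi @ K) * h - pi))   # fixed-point residual
mass = pi.sum() * h                   # a density integrates to one
```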
It is clear that $\pi(y)$ is a density of the stationary probability $\pi(A)$. The main requirement for the Markov chain to reach its stationary distribution is that the Markov chain is irreducible and aperiodic. Irreducibility is defined as follows: for any $x, y \in \Omega$, there always exists a positive integer $n$ such that $K^n(x, y) > 0$. In other words, the Markov chain can jump into any state from any other state in a finite number of steps. If $K(x, y) > 0$ for any $x, y \in \Omega$, then the Markov chain is said to be strongly irreducible. Aperiodicity means that there exist no subsets of the state space $\Omega$ that can only be periodically visited by the Markov chain.

Now, we extend the Markov chain on continuous state space $[0, \infty)$ to a more general Markov chain on the semi-continuous state space $\Omega = \{0, 1, 2, \ldots\} \times [0, \infty)$. For such a Markov chain, we define the transition kernel as
$$K(i, x; j, A) = P\{X_{k+1} \in (j, A) \mid X_k = (i, x)\}.$$
At the same time, the $n$th step iteration of $K(i, x; j, A)$ is given by
$$K^n(i, x; j, A) = \sum_{k=0}^{\infty} \int_0^{\infty} K^{n-1}(i, x; k, y) K(k, y; j, A) \,\mathrm{d}y, \quad n \geqslant 1.$$
The following proposition describes some properties of the transition kernel $K(i, x; j, A)$. The proof is clear and is omitted here.

Proposition 5.1 (1) $0 \leqslant K(i, x; j, A) \leqslant 1$ for $i, j \geqslant 0$, $x \geqslant 0$ and $A \in \sigma(\Omega)$;
(2) $K(i, x; j, \varnothing) = 0$ for $i, j \geqslant 0$ and $x \geqslant 0$, where $\varnothing$ is the null state set; and
(3) $\sum_{k=0}^{\infty} \int_0^{\infty} K(i, x; k, y) \,\mathrm{d}y = 1$ for $i \geqslant 0$ and $x \geqslant 0$.

Let $P_{i,j}(x, A) = K(i, x; j, A)$ and
$$P(x, A) = \begin{pmatrix}
P_{0,0}(x, A) & P_{0,1}(x, A) & P_{0,2}(x, A) & \cdots \\
P_{1,0}(x, A) & P_{1,1}(x, A) & P_{1,2}(x, A) & \cdots \\
P_{2,0}(x, A) & P_{2,1}(x, A) & P_{2,2}(x, A) & \cdots \\
\vdots & \vdots & \vdots &
\end{pmatrix}. \qquad (5.3)$$
Then the Markov chain on the semi-continuous state space $\{0, 1, 2, \ldots\} \times [0, \infty)$ is determined by the transition kernel given in Eq. (5.3). It is clear that
$$P^n(x, A) = \int_0^{\infty} P^{n-1}(x, y) P(y, A) \,\mathrm{d}y, \quad n \geqslant 1,$$
which can be expressed by means of the transition sub-kernels $K^n(i, x; j, A)$ for $i, j \geqslant 0$, $x \geqslant 0$ and $A \in \sigma(\Omega)$.

In what follows we provide some important examples of the Markov chain on the semi-continuous state space $\{0, 1, 2, \ldots\} \times [0, \infty)$. It is worthwhile to note that these examples are all the corresponding generalized versions of the discrete-time Markov chains with discrete state space studied in the previous chapters.
5.1.1 Markov Chains of GI/G/1 Type

The transition kernel in Eq. (5.3) is simplified as
$$P(x, A) = \begin{pmatrix}
D_0(x, A) & D_1(x, A) & D_2(x, A) & \cdots \\
D_{-1}(x, A) & A_0(x, A) & A_1(x, A) & \cdots \\
D_{-2}(x, A) & A_{-1}(x, A) & A_0(x, A) & \cdots \\
\vdots & \vdots & \vdots &
\end{pmatrix}.$$
5.1.2 Markov Chains of GI/M/1 Type

The transition kernel in Eq. (5.3) is simplified as
$$P(x, A) = \begin{pmatrix}
D_1(x, A) & D_0(x, A) & & & \\
D_2(x, A) & C_1(x, A) & C_0(x, A) & & \\
D_3(x, A) & C_2(x, A) & C_1(x, A) & C_0(x, A) & \\
\vdots & \vdots & \vdots & \ddots & \ddots
\end{pmatrix}.$$
5.1.3 Markov Chains of M/G/1 Type

The transition kernel in Eq. (5.3) is simplified as
$$P(x, A) = \begin{pmatrix}
D_1(x, A) & D_2(x, A) & D_3(x, A) & D_4(x, A) & \cdots \\
D_0(x, A) & C_1(x, A) & C_2(x, A) & C_3(x, A) & \cdots \\
& C_0(x, A) & C_1(x, A) & C_2(x, A) & \cdots \\
& & \ddots & \ddots & \ddots
\end{pmatrix}.$$
5.1.4 QBD Processes

The transition kernel in Eq. (5.3) is simplified as
$$P(x, A) = \begin{pmatrix}
A_1^{(0)}(x, A) & A_0^{(0)}(x, A) & & \\
A_2^{(1)}(x, A) & A_1^{(1)}(x, A) & A_0^{(1)}(x, A) & \\
& A_2^{(2)}(x, A) & A_1^{(2)}(x, A) & A_0^{(2)}(x, A) \\
& & \ddots & \ddots
\end{pmatrix}.$$
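When the phase variable is discretized, each of the block-structured kernels above becomes an ordinary block-structured stochastic matrix. The sketch below (level count, phase dimension and random blocks are our own illustrative choices) assembles a level-dependent QBD transition matrix of the kind displayed in Section 5.1.4 and checks the two defining structural properties: each row sums to one, and a level can only move to a neighboring level.

```python
import numpy as np

rng = np.random.default_rng(0)
m, levels = 3, 5                          # phase dimension, truncation level

# Level-dependent blocks A2^(n), A1^(n), A0^(n); rows are normalized at the
# end so that every row of the assembled matrix sums to one.
blocks = []
for n in range(levels):
    A2 = rng.random((m, m)) if n > 0 else None
    A1 = rng.random((m, m))
    A0 = rng.random((m, m)) if n < levels - 1 else None
    blocks.append((A2, A1, A0))

P = np.zeros((m * levels, m * levels))
for n, (A2, A1, A0) in enumerate(blocks):
    P[n*m:(n+1)*m, n*m:(n+1)*m] = A1
    if A2 is not None:
        P[n*m:(n+1)*m, (n-1)*m:n*m] = A2
    if A0 is not None:
        P[n*m:(n+1)*m, (n+1)*m:(n+2)*m] = A0
P /= P.sum(axis=1, keepdims=True)         # make the kernel stochastic

row_sums = P.sum(axis=1)
bandwidth_ok = np.allclose(P[0:m, 2*m:], 0.0)   # level 0 cannot skip level 1
```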
5.2 The RG-Factorizations

In this section, we extend the censoring technique to deal with the discrete-time Markov chain on the semi-continuous state space $\Omega=\{0,1,2,\ldots\}\times[0,\infty)$. Based on this, we derive the RG-factorizations and express the stationary probability distribution.

Let $E=\{0,1,\ldots,n\}$ and $E^{c}=\{n+1,n+2,n+3,\ldots\}$. According to the subsets $E$ and $E^{c}$, the transition kernel $P(x,A)$ is partitioned as
$$
P(x,A)=\begin{pmatrix} T(x,A) & U(x,A)\\ V(x,A) & W(x,A)\end{pmatrix}, \qquad (5.4)
$$
where the first block row and column correspond to $E$ and the second to $E^{c}$. It is clear that if $P(x,A)$ is irreducible and $V(x,\Omega)\not\equiv 0$, then each element of $\widehat{W}(x,A)=\sum_{n=0}^{\infty}W^{n}(x,A)$ is finite, where
$$
W^{0}(x,A)=I(x,A),\qquad W^{1}(x,A)=W(x,A),
$$
and for $n\ge 2$,
$$
W^{n}(x,A)=\int_{\Omega}W^{n-1}(x,z)W(z,A)\,\mathrm{d}z=\int_{\Omega}W(x,z)W^{n-1}(z,A)\,\mathrm{d}z.
$$
The matrix $\widehat{W}(x,A)$ is referred to as the fundamental matrix of $W(x,A)$.

Suppose that $\{X_{k},\,k\ge 0\}$ is an irreducible Markov chain on the semi-continuous state space $\{0,1,2,\ldots\}\times[0,\infty)$. If the successive visits of $X_{n}$ to the subset $E$ take place at the $n_{k}$th step of state transition, we write $X_{k}^{E}=X_{n_{k}}$ for $k\ge 1$. Then the sequence $\{X_{k}^{E},\,k\ge 1\}$ is called the censored chain with censoring set $E$. For convenience, we write $P^{[\le n]}(x,A)$ for the censored transition kernel $P^{E}(x,A)$ if
Constructive Computation in Stochastic Models with Applications
the censored set is $E=L_{\le n}$; in particular, $P^{[\infty]}(x,A)=P(x,A)$ and $P^{[0]}(x,A)=P^{[\le 0]}(x,A)$. Similarly, $P^{[\ge n]}(x,A)$ is the censored transition kernel with the censored set $E=L_{\ge n}$; specifically, $P^{[\ge 0]}(x,A)=P(x,A)$.

The following lemma shows that the censored chain $\{X_{k}^{E},\,k\ge 1\}$ is again a Markov chain. The proof is clear and is omitted here.

Lemma 5.1 (1) The censored chain $\{X_{k}^{E},\,k\ge 1\}$ is a Markov chain whose transition kernel is given by
$$
P^{E}(x,A)=T(x,A)+\int_{0}^{\infty}\!\!\int_{0}^{\infty}U(x,y)\widehat{W}(y,z)V(z,A)\,\mathrm{d}y\,\mathrm{d}z. \qquad (5.5)
$$
(2) The censored chain $\{X_{k}^{E^{c}},\,k\ge 1\}$ is a Markov chain whose transition kernel is given by
$$
P^{E^{c}}(x,A)=W(x,A)+\int_{0}^{\infty}\!\!\int_{0}^{\infty}V(x,y)\widehat{T}(y,z)U(z,A)\,\mathrm{d}y\,\mathrm{d}z.
$$
Note that the two censored Markov chains $\{X_{k}^{E},\,k\ge 1\}$ and $\{X_{k}^{E^{c}},\,k\ge 1\}$ have different uses, which lead to two different types of RG-factorizations.
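In the discrete analog of Lemma 5.1, Eq. (5.5) reduces to the familiar stochastic-complement formula $P^{E}=T+U(I-W)^{-1}V$, with $(I-W)^{-1}=\sum_{n\ge 0}W^{n}$ playing the role of the fundamental matrix $\widehat{W}$. A minimal numerical sketch (the 4-state chain and its partition are illustrative assumptions, not taken from the text):

```python
import numpy as np

def censor(P, keep):
    """Censored (watched) chain on the index set `keep`: T + U (I-W)^{-1} V."""
    keep = np.asarray(keep)
    out = np.setdiff1d(np.arange(P.shape[0]), keep)
    T = P[np.ix_(keep, keep)]
    U = P[np.ix_(keep, out)]
    V = P[np.ix_(out, keep)]
    W = P[np.ix_(out, out)]
    return T + U @ np.linalg.solve(np.eye(len(out)) - W, V)

def stationary(P):
    """Left Perron vector of a stochastic matrix, normalized to sum to 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

P = np.array([[0.2, 0.3, 0.4, 0.1],
              [0.1, 0.4, 0.2, 0.3],
              [0.3, 0.3, 0.2, 0.2],
              [0.2, 0.2, 0.3, 0.3]])
PE = censor(P, [0, 1])
```

The censored chain is again stochastic, and its stationary distribution is the restriction of the original stationary vector to $E$, renormalized.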
5.2.1 The UL-Type RG-Factorization

Let
$$
P^{[\le n]}(x,A)=\begin{pmatrix}
\varphi^{(n)}_{0,0}(x,A) & \varphi^{(n)}_{0,1}(x,A) & \cdots & \varphi^{(n)}_{0,n}(x,A)\\
\varphi^{(n)}_{1,0}(x,A) & \varphi^{(n)}_{1,1}(x,A) & \cdots & \varphi^{(n)}_{1,n}(x,A)\\
\vdots & \vdots & & \vdots\\
\varphi^{(n)}_{n,0}(x,A) & \varphi^{(n)}_{n,1}(x,A) & \cdots & \varphi^{(n)}_{n,n}(x,A)
\end{pmatrix},\qquad n\ge 0,
$$
be partitioned according to the levels. The following lemma provides a useful relationship among the entries of the censored Markov chains; these are essentially the Wiener-Hopf equations for the Markov chain. The proof is clear and is omitted here.

Lemma 5.2 For $n\ge 0$ and $0\le i,j\le n$,
$$
\varphi^{(n)}_{i,j}(x,A)=P_{i,j}(x,A)+\sum_{k=n+1}^{\infty}\int_{0}^{\infty}\!\!\int_{0}^{\infty}\varphi^{(k)}_{i,k}(x,y)\,\widehat{\varphi^{(k)}_{k,k}}(y,z)\,\varphi^{(k)}_{k,j}(z,A)\,\mathrm{d}y\,\mathrm{d}z.
$$
Based on Lemma 5.2, we provide expressions for the R-, U- and G-measures. For $0\le i<j$,
$$
R_{i,j}(x,A)=\int_{0}^{\infty}\varphi^{(j)}_{i,j}(x,y)\,\widehat{\varphi^{(j)}_{j,j}}(y,A)\,\mathrm{d}y.
$$
For $0\le j<i$,
$$
G_{i,j}(x,A)=\int_{0}^{\infty}\widehat{\varphi^{(i)}_{i,i}}(x,y)\,\varphi^{(i)}_{i,j}(y,A)\,\mathrm{d}y.
$$
For $n\ge 0$,
$$
\Psi_{n}(x,A)=\varphi^{(n)}_{n,n}(x,A).
$$
The following theorem provides the UL-type RG-factorization for the transition kernel $P(x,A)$ given in Eq. (5.3).

Theorem 5.1 For the Markov chain with the transition kernel $P(x,A)$ given in Eq. (5.3),
$$
I(x,A)-P(x,A)=\int_{0}^{\infty}\!\!\int_{0}^{\infty}[I(x,y)-R_{U}(x,y)]\,[I(y,z)-\Psi_{D}(y,z)]\,[I(z,A)-G_{L}(z,A)]\,\mathrm{d}y\,\mathrm{d}z, \qquad (5.6)
$$
where
$$
R_{U}(x,A)=\begin{pmatrix}
0 & R_{0,1}(x,A) & R_{0,2}(x,A) & R_{0,3}(x,A) & \cdots\\
 & 0 & R_{1,2}(x,A) & R_{1,3}(x,A) & \cdots\\
 & & 0 & R_{2,3}(x,A) & \cdots\\
 & & & 0 & \cdots\\
 & & & & \ddots
\end{pmatrix},
$$
$$
\Psi_{D}(x,A)=\operatorname{diag}\bigl(\Psi_{0}(x,A),\Psi_{1}(x,A),\Psi_{2}(x,A),\Psi_{3}(x,A),\ldots\bigr)
$$
and
$$
G_{L}(x,A)=\begin{pmatrix}
0 & & & &\\
G_{1,0}(x,A) & 0 & & &\\
G_{2,0}(x,A) & G_{2,1}(x,A) & 0 & &\\
G_{3,0}(x,A) & G_{3,1}(x,A) & G_{3,2}(x,A) & 0 &\\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}.
$$
5.2.2 The LU-Type RG-Factorization

Let
$$
P^{[\ge n]}(x,y)=\begin{pmatrix}
\eta^{(n)}_{n,n}(x,y) & \eta^{(n)}_{n,n+1}(x,y) & \eta^{(n)}_{n,n+2}(x,y) & \cdots\\
\eta^{(n)}_{n+1,n}(x,y) & \eta^{(n)}_{n+1,n+1}(x,y) & \eta^{(n)}_{n+1,n+2}(x,y) & \cdots\\
\eta^{(n)}_{n+2,n}(x,y) & \eta^{(n)}_{n+2,n+1}(x,y) & \eta^{(n)}_{n+2,n+2}(x,y) & \cdots\\
\vdots & \vdots & \vdots &
\end{pmatrix}.
$$
Then for $i,j\ge n+1$,
$$
\eta^{(n+1)}_{i,j}(x,y)=P_{i,j}(x,y)+\sum_{k=0}^{n}\int_{0}^{\infty}\!\!\int_{0}^{\infty}\eta^{(k)}_{i,k}(x,u)\,\widehat{\eta^{(k)}_{k,k}}(u,v)\,\eta^{(k)}_{k,j}(v,y)\,\mathrm{d}u\,\mathrm{d}v.
$$
We define the U-measure as
$$
\Phi_{n}(x,y)=\eta^{(n)}_{n,n}(x,y),\qquad n\ge 0,
$$
the R-measure as
$$
R_{i,j}(x,y)=\int_{0}^{\infty}\eta^{(j)}_{i,j}(x,z)\,\widehat{\Phi_{j}}(z,y)\,\mathrm{d}z,\qquad 0\le j<i,
$$
and the G-measure as
$$
G_{i,j}(x,y)=\int_{0}^{\infty}\widehat{\Phi_{i}}(x,z)\,\eta^{(i)}_{i,j}(z,y)\,\mathrm{d}z,\qquad 0\le i<j.
$$
The LU-type RG-factorization for the Markov chain $P(x,y)$ given in Eq. (5.3) is given by
$$
I(x,y)-P(x,y)=\int_{0}^{\infty}\!\!\int_{0}^{\infty}[I(x,u)-R_{L}(x,u)]\,[I(u,v)-\Phi_{D}(u,v)]\,[I(v,y)-G_{U}(v,y)]\,\mathrm{d}u\,\mathrm{d}v,
$$
where
$$
R_{L}(x,y)=\begin{pmatrix}
0 & & & &\\
R_{1,0}(x,y) & 0 & & &\\
R_{2,0}(x,y) & R_{2,1}(x,y) & 0 & &\\
R_{3,0}(x,y) & R_{3,1}(x,y) & R_{3,2}(x,y) & 0 &\\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix},
$$
$$
\Phi_{D}(x,y)=\operatorname{diag}\bigl(\Phi_{0}(x,y),\Phi_{1}(x,y),\Phi_{2}(x,y),\Phi_{3}(x,y),\ldots\bigr)
$$
and
$$
G_{U}(x,y)=\begin{pmatrix}
0 & G_{0,1}(x,y) & G_{0,2}(x,y) & G_{0,3}(x,y) & \cdots\\
 & 0 & G_{1,2}(x,y) & G_{1,3}(x,y) & \cdots\\
 & & 0 & G_{2,3}(x,y) & \cdots\\
 & & & 0 & \cdots\\
 & & & & \ddots
\end{pmatrix}.
$$
5.2.3 The Stationary Probability Distribution

Based on the UL-type RG-factorization, the following theorem provides an expression for the stationary probability distribution of the discrete-time Markov chain on a semi-continuous state space.

Theorem 5.2 The stationary probability distribution of the Markov chain with the transition kernel $P(x,A)$ given in Eq. (5.3) is given by
$$
\begin{cases}
\pi_{0}(A)=\tau x_{0}(A),\\[2pt]
\pi_{k}(A)=\displaystyle\sum_{i=0}^{k-1}\int_{0}^{\infty}\pi_{i}(x)R_{i,k}(x,A)\,\mathrm{d}x,\qquad k\ge 1,
\end{cases}\qquad (5.7)
$$
where $x_{0}(A)$ is the stationary probability distribution of the censored Markov chain to level 0 with the transition kernel $\Psi_{0}(x,A)$, and the scalar $\tau$ is uniquely determined by $\sum_{k=0}^{\infty}\int_{0}^{\infty}\pi_{k}(x)\,\mathrm{d}x=1$.

Proof If the Markov chain $P(x,A)$ given in Eq. (5.3) has the stationary probability vector $\pi(A)=(\pi_{0}(A),\pi_{1}(A),\pi_{2}(A),\ldots)$, then
$$
\int_{0}^{\infty}\pi(x)[I(x,A)-P(x,A)]\,\mathrm{d}x=0\quad\text{and}\quad\int_{0}^{\infty}\pi(x)\,\mathrm{d}x\,e=1.
$$
Based on Theorem 5.1, we have
$$
\int_{0}^{\infty}\pi(x)[I(x,A)-P(x,A)]\,\mathrm{d}x=\int_{0}^{\infty}\!\!\int_{0}^{\infty}\!\!\int_{0}^{\infty}\pi(x)[I(x,y)-R_{U}(x,y)][I(y,z)-\Psi_{D}(y,z)][I(z,A)-G_{L}(z,A)]\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z.
$$
We write
$$
x(A)=\int_{0}^{\infty}\pi(x)[I(x,A)-R_{U}(x,A)]\,\mathrm{d}x,\qquad x(A)=(x_{0}(A),x_{1}(A),x_{2}(A),\ldots).
$$
Then
$$
\int_{0}^{\infty}\!\!\int_{0}^{\infty}x(y)[I(y,z)-\Psi_{D}(y,z)][I(z,A)-G_{L}(z,A)]\,\mathrm{d}y\,\mathrm{d}z=0,
$$
which leads to
$$
\begin{cases}
\displaystyle\int_{0}^{\infty}x_{0}(y)\bigl[I(y,A)-\Psi_{0}(y,A)\bigr]\mathrm{d}y+\sum_{k=1}^{\infty}\int_{0}^{\infty}\!\!\int_{0}^{\infty}x_{k}(y)\bigl[I(y,z)-\Psi_{k}(y,z)\bigr]G_{k,0}(z,A)\,\mathrm{d}y\,\mathrm{d}z=0,\\[6pt]
\displaystyle\int_{0}^{\infty}x_{i}(y)\bigl[I(y,A)-\Psi_{i}(y,A)\bigr]\mathrm{d}y+\sum_{k=i+1}^{\infty}\int_{0}^{\infty}\!\!\int_{0}^{\infty}x_{k}(y)\bigl[I(y,z)-\Psi_{k}(y,z)\bigr]G_{k,i}(z,A)\,\mathrm{d}y\,\mathrm{d}z=0,\quad i\ge 1.
\end{cases}\qquad (5.8)
$$
Note that $\Psi_{0}(y,A)$ is the transition kernel of the censored chain to level 0. If the Markov chain with the transition kernel $P(x,A)$ given in Eq. (5.3) is positive recurrent, then $\Psi_{0}(y,A)$ is also positive recurrent; thus it has a stationary probability distribution $x_{0}(A)$. Hence there exists a stationary probability density function $x_{0}(y)$ such that
$$
\int_{0}^{\infty}x_{0}(y)[I(y,A)-\Psi_{0}(y,A)]\,\mathrm{d}y=0\quad\text{and}\quad\int_{0}^{\infty}x_{0}(y)\,\mathrm{d}y=1.
$$
It is easy to check that $(\tau x_{0}(A),0,0,\ldots)$ is a non-zero nonnegative solution to the system of equations (5.8), where $\tau$ is uniquely determined by $\sum_{k=0}^{\infty}\int_{0}^{\infty}\pi_{k}(x)\,\mathrm{d}x=1$. Solving the simplified system of linear equations
$$
\int_{0}^{\infty}\pi(x)[I(x,A)-R_{U}(x,A)]\,\mathrm{d}x=(\tau x_{0}(A),0,0,\ldots),
$$
we obtain the desired result.
In what follows we derive expressions for the stationary probability distributions of some important examples.

5.2.4 Markov Chains of GI/G/1 Type

The R-measure is given by $R_{0,k}(x,A)$ and $R_{k}(x,A)$ for $k\ge 1$, $x\ge 0$ and $A\in\sigma(\Omega)$. In this case, it follows from Theorem 5.2 that
$$
\begin{cases}
\pi_{0}(A)=\tau x_{0}(A),\\[2pt]
\pi_{k}(A)=\displaystyle\int_{0}^{\infty}\pi_{0}(x)R_{0,k}(x,A)\,\mathrm{d}x+\sum_{i=1}^{k-1}\int_{0}^{\infty}\pi_{i}(x)R_{k-i}(x,A)\,\mathrm{d}x,\qquad k\ge 1,
\end{cases}
$$
where $x_{0}(A)$ is the stationary probability distribution of the censored Markov chain to level 0 with the transition kernel $\Psi_{0}(x,A)$, and the scalar $\tau$ is uniquely determined by $\sum_{k=0}^{\infty}\int_{0}^{\infty}\pi_{k}(x)\,\mathrm{d}x=1$.
5.2.5 Markov Chains of GI/M/1 Type

The R-measure is given by $R(x,A)$ and $R_{0,1}(x,A)$, where $R(x,A)$ is the minimal nonnegative solution to the kernel equation
$$
R(x,A)=\sum_{k=0}^{\infty}\int_{0}^{\infty}R^{k}(x,y)C_{k}(y,A)\,\mathrm{d}y. \qquad (5.9)
$$
Applying the kernel $R(x,A)$, we can obtain the other kernel $R_{0,1}(x,A)$ as follows:
$$
R_{0,1}(x,A)=\int_{0}^{\infty}D_{0}(x,y)\widehat{U}(y,A)\,\mathrm{d}y, \qquad (5.10)
$$
where
$$
U(x,A)=\sum_{k=1}^{\infty}\int_{0}^{\infty}R^{k-1}(x,y)C_{k}(y,A)\,\mathrm{d}y.
$$
In this case, it follows from Theorem 5.2 and the R-measure given in Eq. (5.9) and Eq. (5.10) that
$$
\begin{cases}
\pi_{0}(A)=\tau x_{0}(A),\\[2pt]
\pi_{1}(A)=\displaystyle\int_{0}^{\infty}\pi_{0}(x)R_{0,1}(x,A)\,\mathrm{d}x,\\[2pt]
\pi_{k}(A)=\displaystyle\int_{0}^{\infty}\pi_{1}(x)R^{k-1}(x,A)\,\mathrm{d}x,\qquad k\ge 2.
\end{cases}\qquad (5.11)
$$
Note that Eq. (5.11) is the same as Theorem 2 of Tweedie [27].
5.2.6 Markov Chains of M/G/1 Type

The R-measure is given by $R_{0,k}(x,A)$ and $R_{k}(x,A)$ for $k\ge 1$, $x\ge 0$ and $A\in\sigma(\Omega)$. Thus, the expression for the stationary probability distribution is the same as that for the GI/G/1 type. However, for a Markov chain of M/G/1 type, the R-measures $R_{0,k}(x,A)$ and $R_{k}(x,A)$ for $k\ge 1$, $x\ge 0$ and $A\in\sigma(\Omega)$ can be determined by the kernel $G(x,A)$, which is the minimal nonnegative solution to the kernel equation
$$
G(x,A)=\sum_{k=0}^{\infty}\int_{0}^{\infty}C_{k}(x,y)G^{k}(y,A)\,\mathrm{d}y. \qquad (5.12)
$$
In this case, we have
$$
R_{0,k}(x,A)=\sum_{i=1}^{\infty}\int_{0}^{\infty}\!\!\int_{0}^{\infty}D_{k+i}(x,y)G^{i-1}(y,z)\widehat{U}(z,A)\,\mathrm{d}y\,\mathrm{d}z \qquad (5.13)
$$
and
$$
R_{k}(x,A)=\sum_{i=1}^{\infty}\int_{0}^{\infty}\!\!\int_{0}^{\infty}C_{k+i}(x,y)G^{i-1}(y,z)\widehat{U}(z,A)\,\mathrm{d}y\,\mathrm{d}z, \qquad (5.14)
$$
where
$$
U(x,A)=\sum_{k=1}^{\infty}\int_{0}^{\infty}C_{k}(x,y)G^{k-1}(y,A)\,\mathrm{d}y.
$$
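When the kernels are approximated on a finite grid (or when the phase is discrete), Eq. (5.12) becomes the classical M/G/1-type matrix equation $G=\sum_{k\ge 0}C_{k}G^{k}$, which can be solved by successive substitution starting from $G=0$. A sketch with small illustrative blocks (the matrices $C_0,C_1,C_2$ are assumptions chosen for demonstration; in the positive recurrent case the minimal solution $G$ is stochastic):

```python
import numpy as np

S = np.array([[0.6, 0.4],
              [0.3, 0.7]])
# Illustrative blocks: C0 + C1 + C2 is stochastic, and the mean drift
# sum_k (k-1) C_k e = (-0.4 + 0.2) e < 0, so the chain is positive recurrent.
C0, C1, C2 = 0.4 * S, 0.4 * S, 0.2 * S

# Successive substitution for G = C0 + C1 G + C2 G^2, starting from G = 0.
G = np.zeros_like(S)
for _ in range(100000):
    G_new = C0 + C1 @ G + C2 @ G @ G
    if np.max(np.abs(G_new - G)) < 1e-14:
        G = G_new
        break
    G = G_new
```

The iterates increase monotonically to the minimal nonnegative solution, the direct analog of the minimality statement below Eq. (5.12).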
5.2.7 QBD Processes

The R- and G-measures are $\{R_{k}(x,y),\,k\ge 0\}$ and $\{G_{l}(x,y),\,l\ge 1\}$, respectively. The R-measure $\{R_{k}(x,y),\,k\ge 0\}$ is the minimal nonnegative solution to the system of nonlinear kernel equations
$$
R_{k}(x,y)=A_{0}^{(k)}(x,y)+\int_{0}^{\infty}R_{k}(x,z)A_{1}^{(k+1)}(z,y)\,\mathrm{d}z+\int_{0}^{\infty}\!\!\int_{0}^{\infty}R_{k}(x,u)R_{k+1}(u,v)A_{2}^{(k+2)}(v,y)\,\mathrm{d}u\,\mathrm{d}v. \qquad (5.15)
$$
The G-measure $\{G_{k},\,k\ge 1\}$ is the minimal nonnegative solution to the system of nonlinear kernel equations
$$
G_{k}(x,y)=A_{2}^{(k)}(x,y)+\int_{0}^{\infty}A_{1}^{(k)}(x,z)G_{k}(z,y)\,\mathrm{d}z+\int_{0}^{\infty}\!\!\int_{0}^{\infty}A_{0}^{(k)}(x,u)G_{k+1}(u,v)G_{k}(v,y)\,\mathrm{d}u\,\mathrm{d}v. \qquad (5.16)
$$
The U-measure $\{U_{k}(x,y),\,k\ge 0\}$ is given by
$$
U_{k}(x,y)=A_{1}^{(k)}(x,y)+\int_{0}^{\infty}R_{k}(x,z)A_{2}^{(k+1)}(z,y)\,\mathrm{d}z=A_{1}^{(k)}(x,y)+\int_{0}^{\infty}A_{0}^{(k)}(x,z)G_{k+1}(z,y)\,\mathrm{d}z. \qquad (5.17)
$$
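For a level-independent QBD with a discrete phase, Eqs. (5.15)-(5.17) collapse to the classical matrix relations $R=A_{0}+RA_{1}+R^{2}A_{2}$, $G=A_{2}+A_{1}G+A_{0}G^{2}$ and $U=A_{1}+RA_{2}=A_{1}+A_{0}G$, together with $R=A_{0}(I-U)^{-1}$. A numerical sketch checking these identities (the blocks $A_0,A_1,A_2$ are illustrative assumptions, not from the text):

```python
import numpy as np

S = np.array([[0.6, 0.4],
              [0.3, 0.7]])
# Downward drift (weight of A2 exceeds that of A0): positive recurrent case.
A0, A1, A2 = 0.2 * S, 0.5 * S, 0.3 * S

# Minimal solution of G = A2 + A1 G + A0 G^2, from G = 0.
G = np.zeros_like(S)
for _ in range(100000):
    G_new = A2 + A1 @ G + A0 @ G @ G
    if np.max(np.abs(G_new - G)) < 1e-14:
        G = G_new
        break
    G = G_new

U = A1 + A0 @ G                         # one form of Eq. (5.17)
R = A0 @ np.linalg.inv(np.eye(2) - U)   # R recovered from U
```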
5.2.8 An Algorithmic Framework

We provide an algorithmic framework for computing the stationary probability distribution. To do this, we need to introduce an orthogonal decomposition of functions in $L^{2}([0,\infty)^{2})$. We denote by $\{\psi_{k}(x),\,k\ge 0\}$ an orthogonal basis in $L^{2}([0,\infty))$. Then
$$
\int_{0}^{\infty}\psi_{k}(x)\psi_{l}(x)\,\mathrm{d}x=\delta_{kl},
$$
where $\delta_{kl}$ is 1 or 0 according as $k=l$ or $k\neq l$, respectively. Such an orthogonal basis in $L^{2}([0,\infty))$ always exists. In the function space $L^{2}([0,\infty))$, Li, Wang and Zhou [19] gave two different orthogonal bases, while Nielsen and Ramaswami [24] provided a finite-interval orthogonal basis. We briefly describe the three orthogonal bases as follows:

5.2.8.1 A Laguerre-Polynomial Orthogonal Basis

Let
$$
\binom{n}{k}=\frac{n!}{k!\,(n-k)!},\qquad a_{k}^{(n)}=(-1)^{k}\binom{n}{k}\frac{1}{k!},\qquad n\ge 0,\ 0\le k\le n, \qquad (5.18)
$$
and
$$
l_{n}(x)=\sum_{k=0}^{n}a_{k}^{(n)}x^{k},\qquad n\ge 0. \qquad (5.19)
$$
Then the function sequence $\bigl\{\exp\{-\tfrac{1}{2}x\}\,l_{n}(x),\ n\ge 0\bigr\}$ is an orthogonal basis in $L^{2}([0,\infty))$.
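The polynomials $l_{n}$ in Eq. (5.19) are the classical Laguerre polynomials, so the claimed orthogonality can be checked numerically with Gauss-Laguerre quadrature, which evaluates $\int_{0}^{\infty}e^{-x}f(x)\,\mathrm{d}x$ exactly for polynomials of moderate degree. A quick sketch:

```python
import numpy as np
from math import comb, factorial

def l_eval(n, x):
    """l_n(x) = sum_k (-1)^k C(n,k) x^k / k!, per Eqs. (5.18)-(5.19)."""
    return sum((-1) ** k * comb(n, k) / factorial(k) * x ** k for k in range(n + 1))

# Gauss-Laguerre: integral_0^inf e^{-x} f(x) dx ~= sum_i w_i f(x_i), so the
# inner product of e^{-x/2} l_m and e^{-x/2} l_n is integral e^{-x} l_m l_n dx.
nodes, weights = np.polynomial.laguerre.laggauss(30)
N = 5
gram = np.array([[np.sum(weights * l_eval(m, nodes) * l_eval(n, nodes))
                  for n in range(N)] for m in range(N)])
```

The Gram matrix comes out as the identity, i.e. the sequence is in fact orthonormal, not merely orthogonal.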
5.2.8.2 A Hermite-Polynomial Orthogonal Basis

Let $t_{n}=\lfloor n/2\rfloor$ and
$$
H_{n}(x)=\sum_{k=0}^{t_{n}}\frac{(-1)^{k}n!}{k!\,(n-2k)!}(2x)^{n-2k}.
$$
Then the function sequence $\bigl\{\bigl(\sqrt{\pi}\,2^{n-1}n!\bigr)^{-1/2}\exp\{-x^{2}/2\}H_{n}(x),\ n\ge 0\bigr\}$ forms an orthogonal basis in $L^{2}([0,\infty))$.

5.2.8.3 A Finite-Interval Orthogonal Basis

Nielsen and Ramaswami [24] provided an orthogonal basis $\{\psi_{n}(x),\,n\ge 0\}$ in $L^{2}([0,1])$, where
$$
\psi_{n}(x)=\frac{\sqrt{2n+1}}{n!}\,\frac{\mathrm{d}^{n}}{\mathrm{d}x^{n}}[x(1-x)]^{n}.
$$

For simplicity of description, we consider the Markov chain of GI/M/1 type whose transition kernel is given by
$$
P(x,A)=\begin{pmatrix}
D_{1}(x,A) & D_{0}(x,A) & & &\\
D_{2}(x,A) & C_{1}(x,A) & C_{0}(x,A) & &\\
D_{3}(x,A) & C_{2}(x,A) & C_{1}(x,A) & C_{0}(x,A) &\\
\vdots & \vdots & \vdots & \ddots & \ddots
\end{pmatrix}.
$$
We assume that the two kernel sequences $\{D_{k}(x,y),\,k\ge 0\}$ and $\{C_{k}(x,y),\,k\ge 0\}$ are all in $L^{2}([0,\infty)^{2})$. We write
$$
C_{k}(x,y)=\sum_{m=0}^{\infty}\sum_{n=0}^{\infty}c^{(k)}_{m,n}\psi_{m}(x)\psi_{n}(y)=\Psi(x)\,C_{k}\,\Psi(y)^{\mathrm{T}} \qquad (5.20)
$$
and
$$
D_{k}(x,y)=\sum_{m=0}^{\infty}\sum_{n=0}^{\infty}d^{(k)}_{m,n}\psi_{m}(x)\psi_{n}(y)=\Psi(x)\,D_{k}\,\Psi(y)^{\mathrm{T}}, \qquad (5.21)
$$
where
$$
\Psi(x)=(\psi_{0}(x),\psi_{1}(x),\psi_{2}(x),\ldots),
$$
$$
C_{k}=\begin{pmatrix} c^{(k)}_{0,0} & c^{(k)}_{0,1} & c^{(k)}_{0,2} & \cdots\\ c^{(k)}_{1,0} & c^{(k)}_{1,1} & c^{(k)}_{1,2} & \cdots\\ c^{(k)}_{2,0} & c^{(k)}_{2,1} & c^{(k)}_{2,2} & \cdots\\ \vdots & \vdots & \vdots & \end{pmatrix}
\quad\text{and}\quad
D_{k}=\begin{pmatrix} d^{(k)}_{0,0} & d^{(k)}_{0,1} & d^{(k)}_{0,2} & \cdots\\ d^{(k)}_{1,0} & d^{(k)}_{1,1} & d^{(k)}_{1,2} & \cdots\\ d^{(k)}_{2,0} & d^{(k)}_{2,1} & d^{(k)}_{2,2} & \cdots\\ \vdots & \vdots & \vdots & \end{pmatrix}.
$$
It is easy to check that
$$
R(x,y)=\Psi(x)\,R\,\Psi(y)^{\mathrm{T}}, \qquad (5.22)
$$
where the matrix $R$ is the suitable solution to the nonlinear matrix equation
$$
R=\sum_{k=0}^{\infty}R^{k}C_{k}. \qquad (5.23)
$$
The stationary probability distribution $\pi(x)=(\pi_{0}(x),\pi_{1}(x),\pi_{2}(x),\ldots)$ is given by
$$
\pi_{k}(x)=\pi_{k}\,\Psi(x)^{\mathrm{T}},\qquad k\ge 0, \qquad (5.24)
$$
where
$$
\pi_{k}=\tau x_{0}R^{k},\qquad k\ge 0, \qquad (5.25)
$$
and the vector $x_{0}$ is a nonnegative non-zero solution to the system of equations $x_{0}\bigl[\sum_{k=1}^{\infty}R^{k-1}D_{k}-I\bigr]=0$ and $x_{0}e=1$, and the scalar $\tau$ is uniquely determined
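A compact numerical sketch of this pipeline in a truncated setting: small illustrative blocks (assumptions, with $C_k=D_k=0$ for $k\ge 3$ and $D_0=C_0$, chosen so each row of the transition kernel sums to one), $R$ from Eq. (5.23) by successive substitution, $x_0$ from the boundary equation, and $\pi_k=\tau x_0R^k$ as in Eq. (5.25):

```python
import numpy as np

S = np.array([[0.6, 0.4],
              [0.3, 0.7]])
C0, C1, C2 = 0.2 * S, 0.5 * S, 0.3 * S    # repeating blocks (C_k = 0, k >= 3)
D0, D1, D2 = C0, 0.8 * S, 0.3 * S         # illustrative boundary blocks

# Eq. (5.23): R = C0 + R C1 + R^2 C2, solved from R = 0.
R = np.zeros_like(S)
for _ in range(100000):
    R_new = C0 + R @ C1 + R @ R @ C2
    if np.max(np.abs(R_new - R)) < 1e-14:
        R = R_new
        break
    R = R_new

# Boundary equation: x0 (D1 + R D2) = x0, normalized by x0 e = 1.
B = D1 + R @ D2
w, v = np.linalg.eig(B.T)
x0 = np.real(v[:, np.argmin(np.abs(w - 1.0))])
x0 = x0 / x0.sum()

# Normalization tau * x0 (I - R)^{-1} e = 1, then pi_k = tau * x0 R^k.
tau = 1.0 / (x0 @ np.linalg.solve(np.eye(2) - R, np.ones(2)))
pi = [tau * x0 @ np.linalg.matrix_power(R, k) for k in range(4)]
```

The balance checks below confirm that the computed vector satisfies the stationary equations at the boundary and at interior levels.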
by $\sum_{k=0}^{\infty}\int_{0}^{\infty}\pi_{k}(x)\,\mathrm{d}x=1$.

5.3 The GI/G/1 Queue

In this section, we apply the RG-factorizations to discuss the GI/G/1 queue, which is constructed as either a Markov chain of GI/M/1 type or a Markov chain of M/G/1 type on the semi-continuous state space. We consider a GI/G/1 queue, where the service time and interarrival time distributions are denoted by the functions $G(t)$ and $F(t)$ with $\mu^{-1}=\int_{0}^{\infty}t\,\mathrm{d}G(t)$ and $\lambda^{-1}=\int_{0}^{\infty}t\,\mathrm{d}F(t)$, respectively.

5.3.1 Constructing a Markov Chain of GI/M/1 Type

We consider the Markov chain $\{X_{n}\}$, where $X_{n}=(N_{n},S_{n})$ for $n\ge 1$; here $N_{n}$ and $S_{n}$ are the number of customers immediately before the $n$th arrival and the residual service time immediately after the $n$th arrival, respectively. Let $\sigma_{1},\sigma_{2},\sigma_{3},\ldots$ denote a renewal process with $\sigma_{n}-\sigma_{n-1}$ having the service time distribution $G(t)$, and let $R_{t}$ denote the residual lifetime at time $t$ in this process, that is, $R_{t}=\sigma_{N(t)+1}-t$, where $N(t)$ is the number of renewals in $[0,t)$. If $R_{0}=x$, then $\sigma_{1}=x$. We write
$$
P_{n}^{t}(x,y)=P\{\sigma_{n}\le t<\sigma_{n+1},\,R_{t}\le y\mid R_{0}=x\}.
$$
Clearly, $P_{n}^{t}(x,y)$ is the probability that $n$ renewals occur in $[0,t)$ and the residual lifetime at time $t$ is in $[0,y)$, given $R_{0}=x$.

Let $Y_{n}=\sigma_{n}-\sigma_{n-1}$ with $\sigma_{0}=0$ for $n\ge 1$. Then $\{Y_{n}\}$ can be regarded as a Markov chain on the continuous state space $[0,\infty)$ whose transition kernel is given by
$$
K(x,A)=P\{Y_{n+1}\in A\mid Y_{n}=x\},
$$
which is independent of the index $n\ge 1$. We write
$$
D_{1}(x,y)=P\{Y_{n+1}\le y\mid Y_{n}=x\}\quad\text{and}\quad D_{2}(x,y)=P\{Y_{n+1}>y\mid Y_{n}=x\}.
$$
The following lemma provides an iterative relationship for the kernel sequence $\{P_{n}^{t}(x,y)\}$.

Lemma 5.3 For the kernel sequence $\{P_{n}^{t}(x,y),\,n\ge 0\}$,
$$
P_{0}^{t}(x,y)=I(x,y) \qquad (5.26)
$$
and for $n\ge 1$,
$$
P_{n}^{t}(x,y)=\int_{0}^{\infty}P_{n-1}^{t}(x,z)D_{1}(z,y)\,\mathrm{d}z+\int_{0}^{\infty}P_{n}^{t}(x,z)D_{2}(z,y)\,\mathrm{d}z. \qquad (5.27)
$$
Proof Equation (5.26) is clear. Hence we only need to prove Eq. (5.27). This is completed by the following computation:
$$
\begin{aligned}
P_{n}^{t}(x,y)&=P\{\sigma_{n}\le t<\sigma_{n+1},\,R_{t}\le y\mid R_{0}=x\}\\
&=\int_{0}^{\infty}P\{\sigma_{n}\le t<\sigma_{n+1},\,R_{t}\le y\mid Y_{2}=z,\,R_{0}=x\}\,\mathrm{d}P\{Y_{2}\le z\mid R_{0}=x\}\\
&\quad+\int_{0}^{\infty}P\{\sigma_{n}\le t<\sigma_{n+1},\,R_{t}\le y\mid Y_{2}=z,\,R_{0}=x\}\,\mathrm{d}P\{Y_{2}>z\mid R_{0}=x\}\\
&=\int_{0}^{\infty}P_{n-1}^{t}(x,z)D_{1}(z,y)\,\mathrm{d}z+\int_{0}^{\infty}P_{n}^{t}(x,z)D_{2}(z,y)\,\mathrm{d}z.
\end{aligned}
$$
This completes the proof.

Using the kernel sequence $\{P_{n}^{t}(x,y),\,n\ge 0\}$, the GI/G/1 queue can be described as a Markov chain of GI/M/1 type on the continuous state space $[0,\infty)$. In this case, we have
$$
C_{n}(x,y)=\int_{0}^{\infty}P_{n}^{t}(x,y)\,\mathrm{d}F(t),\qquad n\ge 0,
$$
and
$$
D_{n}(x,y)=G(y)\sum_{k=n+1}^{\infty}C_{k}(x,\infty),\qquad n\ge 0.
$$
Therefore, the Markov chain of GI/M/1 type is determined by the two kernel sequences $\{C_{n}(x,y)\}$ and $\{D_{n}(x,y)\}$. Let
$$
C(x,y)=\sum_{n=0}^{\infty}C_{n}(x,y).
$$
Then
$$
C(x,y)=\sum_{n=0}^{\infty}\int_{0}^{\infty}P_{n}^{t}(x,y)\,\mathrm{d}F(t)=\int_{0}^{\infty}P\{R_{t}\le y\mid R_{0}=x\}\,\mathrm{d}F(t).
$$
The bivariate function $C(x,y)$ is the transition probability kernel of the residual lifetime process sampled at the points of an independent renewal process generated by $F(t)$. If $\mu^{-1}=\int_{0}^{\infty}t\,\mathrm{d}G(t)<\infty$, then the residual lifetime process has the invariant measure $\theta(x)=\mu\int_{0}^{x}[1-G(t)]\,\mathrm{d}t$. This is summarized in the following lemma.

Lemma 5.4 $\theta(x)$ is the invariant measure of the Markov chain with transition probability kernel $C(x,y)$.

Proof We only need to check the equality $\int_{0}^{\infty}\theta(x)C(x,y)\,\mathrm{d}x=\theta(y)$. Note that $P\{R_{0}\le x\}=\theta(x)$; we have
$$
\begin{aligned}
\int_{0}^{\infty}\theta(x)C(x,y)\,\mathrm{d}x&=\int_{0}^{\infty}P\{R_{0}\le x\}\int_{0}^{\infty}P\{R_{t}\le y\mid R_{0}=x\}\,\mathrm{d}F(t)\,\mathrm{d}x\\
&=\int_{0}^{\infty}P\{R_{t}\le y\}\,\mathrm{d}F(t)\\
&=P\{R_{0}\le y\}=\theta(y).
\end{aligned}
$$
This completes the proof.

Let
$$
\beta^{t}(x)=\sum_{n=0}^{\infty}nP_{n}^{t}(x,\infty)=\sum_{n=0}^{\infty}nP\{\sigma_{n}\le t<\sigma_{n+1}\mid R_{0}=x\}=E[N(t)\mid R_{0}=x].
$$
Then
$$
\rho^{-1}=\int_{0}^{\infty}\!\!\int_{0}^{\infty}\beta^{t}(x)\,\mathrm{d}\theta(x)\,\mathrm{d}F(t)=\int_{0}^{\infty}\!\!\int_{0}^{\infty}E[N(t)\mid R_{0}=x]\,\mathrm{d}P\{R_{0}\le x\}\,\mathrm{d}F(t)=\int_{0}^{\infty}E[N(t)]\,\mathrm{d}F(t)=\mu\int_{0}^{\infty}t\,\mathrm{d}F(t)=\frac{\mu}{\lambda},
$$
where $E[N(t)]=\mu t$ and $\lambda^{-1}=\int_{0}^{\infty}t\,\mathrm{d}F(t)$. Hence $\rho=\lambda/\mu$. Obviously, if $\rho<1$, then the GI/G/1 queue is stable.

Let $R(x,y)$ be the minimal nonnegative solution to the nonlinear kernel equation
$$
R(x,y)=\sum_{n=0}^{\infty}\int_{0}^{\infty}R^{n}(x,z)C_{n}(z,y)\,\mathrm{d}z.
$$
We write
$$
\Psi_{0}(x,y)=\sum_{n=1}^{\infty}\int_{0}^{\infty}R^{n-1}(x,z)D_{n}(z,y)\,\mathrm{d}z.
$$
It is easy to check that $\Psi_{0}(x,y)$ is the transition probability kernel of the censored chain to level 0.

Lemma 5.5 Let $x_{0}(x)$ be the invariant measure of the censored Markov chain to level 0 with transition probability kernel $\Psi_{0}(x,y)$. Then $x_{0}(x)=G(x)$ for $x\ge 0$.

Proof We only need to check the equality $\int_{0}^{\infty}x_{0}(x)\Psi_{0}(x,y)\,\mathrm{d}x=x_{0}(y)$. We have
$$
\int_{0}^{\infty}x_{0}(x)\Psi_{0}(x,y)\,\mathrm{d}x=\int_{0}^{\infty}x_{0}(x)\sum_{n=1}^{\infty}\int_{0}^{\infty}R^{n-1}(x,z)D_{n}(z,y)\,\mathrm{d}z\,\mathrm{d}x=\int_{0}^{\infty}x_{0}(x)\sum_{n=1}^{\infty}\int_{0}^{\infty}R^{n-1}(x,z)\sum_{k=n+1}^{\infty}C_{k}(z,\infty)\,\mathrm{d}z\,G(y)\,\mathrm{d}x.
$$
Let
$$
c=\int_{0}^{\infty}x_{0}(x)\sum_{n=1}^{\infty}\int_{0}^{\infty}R^{n-1}(x,z)\sum_{k=n+1}^{\infty}C_{k}(z,\infty)\,\mathrm{d}z\,\mathrm{d}x.
$$
Then
$$
\int_{0}^{\infty}x_{0}(x)\Psi_{0}(x,y)\,\mathrm{d}x=cG(y).
$$
Thus we can take $x_{0}(y)=cG(y)$. Since $x_{0}(y)$ is the invariant measure of the censored Markov chain with transition probability kernel $\Psi_{0}(x,y)$, we obtain $\int_{0}^{\infty}x_{0}(y)\,\mathrm{d}y=1$. Since $\int_{0}^{\infty}x_{0}(y)\,\mathrm{d}y=c$, we have $c=1$, which leads to $x_{0}(y)=G(y)$ for $y\ge 0$. This completes the proof.

The following theorem provides the distribution of the stationary queue length for the GI/G/1 queue. The proof is clear and is omitted here.

Theorem 5.3 If $\rho<1$, then the distribution of the stationary queue length is given by
$$
\pi_{0}(x)=\kappa G(x)\quad\text{and}\quad\pi_{n}(x)=\kappa\int_{0}^{\infty}G(z)R^{n}(z,x)\,\mathrm{d}z,\qquad n\ge 1,
$$
where
$$
\kappa=\frac{1}{1+\sum_{n=1}^{\infty}\int_{0}^{\infty}\!\!\int_{0}^{\infty}G(z)R^{n}(z,x)\,\mathrm{d}z\,\mathrm{d}x}.
$$
5.3.2 Constructing a Markov Chain of M/G/1 Type

We take the Markov chain $\{X_{n}\}$, where $X_{n}=(N_{n},T_{n})$ for $n\ge 1$; here $N_{n}$ and $T_{n}$ are the number of customers immediately after the $n$th service completion and the residual interarrival time immediately before the $n$th service completion, respectively. Let $\tau_{1},\tau_{2},\tau_{3},\ldots$ denote a renewal process with $\tau_{n}-\tau_{n-1}$ having the interarrival time distribution $F(t)$, and let $R_{t}$ denote the residual lifetime at time $t$ in this process, that is, $R_{t}=\tau_{N(t)+1}-t$, where $N(t)$ is the number of renewals in $[0,t)$. If $R_{0}=x$, then $\tau_{1}=x$. We write
$$
P_{n}^{t}(x,y)=P\{\tau_{n}\le t<\tau_{n+1},\,R_{t}\le y\mid R_{0}=x\}.
$$
Clearly, $P_{n}^{t}(x,y)$ is the probability that $n$ renewals occur in $[0,t)$ and the residual lifetime at time $t$ is in $[0,y)$, given $R_{0}=x$. For the Markov chain of M/G/1 type, we write
$$
D_{0}(x,y)=\int_{0}^{\infty}P_{0}^{t}(x,y)\,\mathrm{d}G(t),
$$
$$
D_{n}(x,y)=\int_{0}^{\infty}P_{n-1}^{t}(x,y)\,\mathrm{d}G(t),\qquad n\ge 1,
$$
and
$$
C_{n}(x,y)=\int_{0}^{\infty}P_{n}^{t}(x,y)\,\mathrm{d}G(t),\qquad n\ge 0.
$$
Let
$$
C(x,y)=\sum_{n=0}^{\infty}C_{n}(x,y).
$$
Then
$$
C(x,y)=\sum_{n=0}^{\infty}\int_{0}^{\infty}P_{n}^{t}(x,y)\,\mathrm{d}G(t)=\int_{0}^{\infty}P\{R_{t}\le y\mid R_{0}=x\}\,\mathrm{d}G(t).
$$
Hence $C(x,y)$ is the transition probability kernel of the residual lifetime process sampled at the points of an independent renewal process generated by $G(t)$. If $\lambda^{-1}=\int_{0}^{\infty}t\,\mathrm{d}F(t)<\infty$, then the residual lifetime process has the invariant measure
$$
\theta(x)=\lambda\int_{0}^{x}[1-F(t)]\,\mathrm{d}t.
$$
Let
$$
\beta^{t}(x)=\sum_{n=0}^{\infty}nP_{n}^{t}(x,\infty)=E[N(t)\mid R_{0}=x].
$$
Then
$$
\rho=\int_{0}^{\infty}\!\!\int_{0}^{\infty}\beta^{t}(x)\,\mathrm{d}\theta(x)\,\mathrm{d}G(t)=\int_{0}^{\infty}E[N(t)]\,\mathrm{d}G(t)=\lambda\int_{0}^{\infty}t\,\mathrm{d}G(t)=\frac{\lambda}{\mu},
$$
where $\mu^{-1}=\int_{0}^{\infty}t\,\mathrm{d}G(t)$. Hence $\rho=\lambda/\mu$. Obviously, if $\rho<1$, then the GI/G/1 queue is stable.

Let $G(x,y)$ be the minimal nonnegative solution to the nonlinear kernel equation
$$
G(x,y)=\sum_{n=0}^{\infty}\int_{0}^{\infty}C_{n}(x,z)G^{n}(z,y)\,\mathrm{d}z.
$$
Let
$$
R_{0,k}(x,y)=\sum_{i=1}^{\infty}\int_{0}^{\infty}\!\!\int_{0}^{\infty}D_{k+i}(x,u)G^{i-1}(u,v)\widehat{U}(v,y)\,\mathrm{d}u\,\mathrm{d}v
$$
and
$$
R_{k}(x,y)=\sum_{i=1}^{\infty}\int_{0}^{\infty}\!\!\int_{0}^{\infty}C_{k+i}(x,u)G^{i-1}(u,v)\widehat{U}(v,y)\,\mathrm{d}u\,\mathrm{d}v,
$$
where
$$
U(x,y)=\sum_{k=1}^{\infty}\int_{0}^{\infty}C_{k}(x,z)G^{k-1}(z,y)\,\mathrm{d}z.
$$
We write
$$
\Psi_{0}(x,y)=\sum_{n=1}^{\infty}\int_{0}^{\infty}D_{n}(x,z)G^{n-1}(z,y)\,\mathrm{d}z.
$$
It is easy to check that $\Psi_{0}(x,y)$ is the transition probability kernel of the censored Markov chain to level 0. Let $x_{0}(x)$ be the invariant measure of the censored Markov chain with transition probability kernel $\Psi_{0}(x,y)$. If $\rho<1$, then the distribution of the stationary queue length is given by
$$
\begin{cases}
\pi_{0}(x)=\kappa x_{0}(x),\\[2pt]
\pi_{k}(x)=\displaystyle\int_{0}^{\infty}\pi_{0}(z)R_{0,k}(z,x)\,\mathrm{d}z+\sum_{i=1}^{k-1}\int_{0}^{\infty}\pi_{i}(z)R_{k-i}(z,x)\,\mathrm{d}z,\qquad k\ge 1,
\end{cases}
$$
where the scalar $\kappa$ is uniquely determined by $\sum_{k=0}^{\infty}\int_{0}^{\infty}\pi_{k}(x)\,\mathrm{d}x=1$.
5.4 Continuous-Time Markov Chains

In this section, we define and study continuous-time Markov chains on a continuous state space. We provide an expression for the stationary probability distribution of such a Markov chain when it is Harris recurrent and ergodic. For convenience of description, we always use calligraphic letters (for example, $\mathcal{A}$, $\mathcal{B}$) to denote elements of $\sigma(D)$.

To define a continuous-time Markov chain on the continuous state space $[0,\infty)$, let the real number $\lambda(x,\mathcal{B})$ denote the kernel of the Markov chain from an initial state $x$ to a state set $\mathcal{B}\in\sigma([0,\infty))$. Intuitively, we interpret $\lambda(x,\mathcal{B})$ in two possible cases:
(1) If $x\notin\mathcal{B}$, then $\lambda(x,\mathcal{B})$ is the transition rate of the Markov chain from $x$ to $\mathcal{B}$.
(2) If $x\in\mathcal{B}$, then $\lambda(x,\mathcal{B})=-\lambda(x,\mathcal{B}^{c})$, where $\mathcal{B}^{c}=[0,\infty)\setminus\mathcal{B}$.
The two cases can be well understood by means of the results for a continuous-time Markov chain on a discrete state space. We now provide conditions on the function $\lambda(x,\mathcal{B})$ such that $\lambda(x,\mathcal{B})$ for $x\ge 0$, $\mathcal{B}\in\sigma([0,\infty))$ forms the kernel of a continuous-time Markov chain on a continuous state space. These conditions resemble those for the infinitesimal generator of a continuous-time Markov chain on a discrete state space.

Definition 5.1 For the continuous-time Markov chain on the continuous state space $[0,\infty)$, $\lambda(x,\mathcal{B})$ on $[0,\infty)\times\sigma([0,\infty))$ is called the kernel of the Markov chain if
(1) for each fixed $x\ge 0$, $\lambda(x,\cdot)$ is a signed measure on $\sigma([0,\infty))$, and for each fixed $\mathcal{B}\in\sigma([0,\infty))$, $\lambda(\cdot,\mathcal{B})$ is a real-valued measurable function on $x\in[0,\infty)$;
(2) $\lambda(x,\mathcal{B})\le 0$ for $x\in\mathcal{B}$, while $0\le\lambda(x,\mathcal{B})<\infty$ for $x\notin\mathcal{B}$; and
(3) when $\mathcal{B}=\bigcup_{i=1}^{\infty}\mathcal{B}_{i}$ and $\mathcal{B}_{i}\cap\mathcal{B}_{j}=\varnothing$ for all $i\neq j$, where $\mathcal{B}_{i}\in\sigma([0,\infty))$ for $i\ge 1$, we have
$$
\lambda(x,\mathcal{B})=\sum_{i=1}^{\infty}\lambda(x,\mathcal{B}_{i}).
$$
Based on Definition 5.1, we easily obtain the following useful properties. The proof is obvious and is omitted here.

Proposition 5.2 (1) $\lambda(x,[0,x))\ge 0$ and $\lambda(x,(x,\infty))\ge 0$.
(2) For a set sequence $\{\mathcal{B}_{n}\}$ in $\sigma([0,\infty))$ with $\mathcal{B}_{n}\subset\mathcal{B}_{n+1}$ for $n\ge 1$,
$$
0\le\lambda(x,\mathcal{B}_{1})\le\lambda(x,\mathcal{B}_{2})\le\cdots\le\lambda(x,[0,x))+\lambda(x,(x,\infty))\quad\text{if }x\notin\mathcal{B}_{\infty},\ \text{where }\mathcal{B}_{\infty}=\lim_{n\to\infty}\mathcal{B}_{n},
$$
while
$$
\lambda(x,\{x\})\le\lambda(x,\mathcal{B}_{1})\le\lambda(x,\mathcal{B}_{2})\le\cdots\le\lambda(x,[0,\infty))\le 0\quad\text{if }x\in\mathcal{B}_{1}.
$$

For the continuous-time Markov chain on the continuous state space $[0,\infty)$, irreducibility and the associated conditions may be provided similarly to Tweedie [27] (p. 370). Intuitively, if the Markov chain with the kernel $\lambda(x,\mathcal{B})$ is irreducible, then there exist at least two sets $\mathcal{B}_{1}\in\sigma([0,x))$ and $\mathcal{B}_{2}\in\sigma((x,\infty))$ such that $\lambda(x,\mathcal{B}_{1})>0$ and $\lambda(x,\mathcal{B}_{2})>0$.

The negative number $\lambda(x,\{x\})$ is crucial for the classification of continuous-time Markov chains on a continuous state space. For example:
(1) if $\lambda(x,\{x\})=-[\lambda(x,[0,x))+\lambda(x,(x,\infty))]$ for all $x\ge 0$, then the Markov chain is said to be conservative;
(2) if $\lambda(x,\{x\})>-\infty$ for all $x\ge 0$ (resp. $\lambda(x,\{x\})=-\infty$ for some $x\ge 0$), then the Markov chain is said to be stable (resp. instantaneous).
Henceforth we always assume that the Markov chain is conservative, irreducible, stable and regular; see, e.g., Anderson [1].

If $\mathcal{F}\in\sigma([0,x))$ and $\mathcal{B}\in\sigma((x,\infty))$, then $\mathcal{F}\cup\{x\}\cup\mathcal{B}\in\sigma([0,\infty))$. We assume that there exists an almost everywhere nonnegative function $\lambda(x,y)$ for $x,y\ge 0$ such that
$$
\lambda(x,\mathcal{F})=\int_{\mathcal{F}}\lambda(x,y)\,\mathrm{d}y\quad\text{and}\quad\lambda(x,\mathcal{B})=\int_{\mathcal{B}}\lambda(x,y)\,\mathrm{d}y.
$$
Then, for $\mathcal{A}=\mathcal{F}\cup\{x\}\cup\mathcal{B}$ we write $\lambda(x,\mathcal{A})$ as
$$
\lambda(x,\mathcal{A})=\int_{\mathcal{F}}\lambda(x,y)\,\mathrm{d}y+\lambda(x,\{x\})+\int_{\mathcal{B}}\lambda(x,y)\,\mathrm{d}y. \qquad (5.28)
$$
We call $\lambda(x,y)$ the generalized density function of the kernel $\lambda(x,\mathcal{B})$ on $[0,\infty)\times\sigma([0,\infty))$. To understand the generalized density function $\lambda(x,y)$, we now provide a useful expression for it. Let
$$
\Lambda(x)=-[\lambda(x,[0,x))+\lambda(x,(x,\infty))].
$$
Then
$$
\lambda(x,\{x\})=\lambda(x,x)=-\Lambda(x)
$$
and
$$
\lambda(x,y)=\lambda(x,y)(1-\delta_{xy})-\Lambda(x)\delta_{xy},
$$
where $\delta_{xy}$ is equal to one or zero according as $x=y$ or $x\neq y$, respectively.

Remark 5.1 (1) For a continuous-time Markov chain on the discrete state space $\Omega=\{0,1,2,\ldots\}$ whose infinitesimal generator is given by $Q=(q_{i,j})_{i,j=0,1,2,\ldots}$, the integral expression Eq. (5.28) has the sum form
$$
\int_{\mathcal{B}}q_{i,y}\,\mathrm{d}y=\sum_{j\in\mathcal{B}}q_{i,j}.
$$
(2) We now provide a concrete example of the construction of the generalized density function $\lambda(x,y)$, which illustrates that the continuous-time Markov chain on a continuous state space is a convenient mathematical tool for studying many real systems. Let
$$
\lambda(x,y)=\begin{cases}
\dfrac{1}{(y-x+1)^{3}}, & y\ge x,\\[6pt]
\dfrac{y^{4}}{(x+1)^{3}}, & y<x.
\end{cases}
$$
Then
$$
\Lambda(x)=\int_{0}^{x}\frac{y^{4}}{(x+1)^{3}}\,\mathrm{d}y+\int_{x}^{\infty}\frac{1}{(y-x+1)^{3}}\,\mathrm{d}y=\frac{x^{5}}{5(x+1)^{3}}+\frac{1}{2}.
$$
Thus, for $x,y\ge 0$ we have
$$
\lambda(x,y)=\lambda(x,y)(1-\delta_{xy})-\Lambda(x)\delta_{xy}.
$$

For the continuous-time Markov chain $\{X(t),\,t\ge 0\}$ on the continuous state space $[0,\infty)$ whose kernel is given by $\lambda(x,\mathcal{B})$ on $[0,\infty)\times\sigma([0,\infty))$, we define its transition probability as
$$
P(t;x,\mathcal{B})=P\{X(t)\in\mathcal{B}\mid X(0)=x\},\qquad t>0.
$$
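The closed form for $\Lambda(x)$ obtained in Remark 5.1 can be verified by direct quadrature; a small numerical sketch (the grid sizes and the truncation of the upper integral at $x+10^{4}$ are numerical choices, and the neglected tail is of order $10^{-8}$):

```python
import numpy as np

def trap(f, a, b, n=200001):
    """Composite trapezoidal rule for the integral of f over [a, b]."""
    y = np.linspace(a, b, n)
    v = f(y)
    return float(np.sum((v[1:] + v[:-1]) * np.diff(y)) / 2.0)

def Lambda_numeric(x):
    below = trap(lambda y: y ** 4 / (x + 1) ** 3, 0.0, x)            # y < x branch
    above = (trap(lambda y: (y - x + 1.0) ** -3, x, x + 50.0)        # y >= x branch,
             + trap(lambda y: (y - x + 1.0) ** -3, x + 50.0, x + 1e4))
    return below + above

def Lambda_closed(x):
    return x ** 5 / (5 * (x + 1) ** 3) + 0.5
```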
We assume that the initial transition probability is $P(0;x,\mathcal{B})=1_{\mathcal{B}}(x)$, where $1_{\mathcal{B}}(x)$ stands for the indicator function, which is equal to one or zero according as $x\in\mathcal{B}$ or $x\notin\mathcal{B}$, respectively. The transition probability $P(t;x,\mathcal{B})$ can be regarded as a solution to the Kolmogorov differential equations. The Kolmogorov backward and forward equations are respectively given by
$$
\frac{\partial}{\partial t}P(t;x,\mathcal{B})=\int_{0}^{\infty}\lambda(x,y)P(t;\mathrm{d}y,\mathcal{B}) \qquad (5.29)
$$
and
$$
\frac{\partial}{\partial t}P(t;x,\mathcal{B})=\int_{0}^{\infty}P(t;x,\mathrm{d}y)\lambda(y,\mathcal{B}). \qquad (5.30)
$$

We assume that the continuous-time Markov chain with the kernel $\lambda(x,\mathcal{B})$ on $[0,\infty)\times\sigma([0,\infty))$ is Harris recurrent and ergodic (e.g., see Athreya and Ney [3]). Let $\Pi(\mathcal{B})$ for $\mathcal{B}\in\sigma([0,\infty))$ be the stationary probability distribution of the Markov chain. We also assume that there exists a generalized density function $\pi(x)\ge 0$ for $x\ge 0$ such that $\Pi(\mathcal{B})=\int_{\mathcal{B}}\pi(x)\,\mathrm{d}x$. It is worthwhile to note that such a nonnegative function $\pi(x)\in L^{2}([0,\infty))$ can be explicitly expressed below if the kernel $\lambda(x,y)\in L^{2}([0,\infty)^{2})$.

Note that $\int_{0}^{\infty}\pi(x)\lambda(x,\mathcal{B})\,\mathrm{d}x=0$ for each $\mathcal{B}\in\sigma([0,\infty))$; it is clear that $\pi(x)$ for $x\ge 0$ is a non-zero solution to the integral equation
$$
\int_{0}^{\infty}\pi(x)\lambda(x,y)\,\mathrm{d}x=0\quad\text{for each }y\ge 0, \qquad (5.31)
$$
with the normalization condition $\int_{0}^{\infty}\pi(x)\,\mathrm{d}x=1$.

Using the orthogonal basis $\{\psi_{k}(x),\,k\ge 0\}$, for $\lambda(x,y)\in L^{2}([0,\infty)^{2})$ we write
$$
\lambda(x,y)=\sum_{k=0}^{\infty}\sum_{l=0}^{\infty}a_{k,l}\psi_{k}(x)\psi_{l}(y), \qquad (5.32)
$$
where the coefficients $a_{k,l}$ for $k,l\ge 0$ are uniquely determined by the bivariate function $\lambda(x,y)$ and the orthogonal basis $\{\psi_{k}(x),\,k\ge 0\}$. Therefore, for each $\mathcal{B}\in\sigma([0,\infty))$,
$$
\lambda(x,\mathcal{B})=\sum_{k=0}^{\infty}\sum_{l=0}^{\infty}a_{k,l}\psi_{k}(x)\int_{\mathcal{B}}\psi_{l}(y)\,\mathrm{d}y.
$$
On the other hand, for $\pi(x)\in L^{2}([0,\infty))$ we have
$$
\pi(x)=\sum_{i=0}^{\infty}b_{i}\psi_{i}(x), \qquad (5.33)
$$
where the coefficients $b_{i}$ for $i\ge 0$ are uniquely determined by the function $\pi(x)$ and the orthogonal basis $\{\psi_{k}(x),\,k\ge 0\}$. It is clear that
$$
\Pi(\mathcal{B})=\int_{\mathcal{B}}\pi(x)\,\mathrm{d}x=\sum_{i=0}^{\infty}b_{i}\int_{\mathcal{B}}\psi_{i}(x)\,\mathrm{d}x. \qquad (5.34)
$$
Based on the expression Eq. (5.34), the stationary probability distribution $\Pi(\mathcal{B})$ can be given once all the unknown coefficients $b_{i}$ for $i\ge 0$ are determined. It follows from Eq. (5.31), Eq. (5.32) and Eq. (5.33) that for each $y\ge 0$,
$$
\int_{0}^{\infty}\sum_{i=0}^{\infty}b_{i}\psi_{i}(x)\sum_{k=0}^{\infty}\sum_{l=0}^{\infty}a_{k,l}\psi_{k}(x)\psi_{l}(y)\,\mathrm{d}x=0,
$$
which shows that the unknown row vector $(b_{0},b_{1},b_{2},\ldots)$ is a non-zero solution to the system of linear equations
$$
(b_{0},b_{1},b_{2},\ldots)\begin{pmatrix}
a_{0,0} & a_{0,1} & a_{0,2} & \cdots\\
a_{1,0} & a_{1,1} & a_{1,2} & \cdots\\
a_{2,0} & a_{2,1} & a_{2,2} & \cdots\\
\vdots & \vdots & \vdots &
\end{pmatrix}=0 \qquad (5.35)
$$
with the normalization condition $\sum_{i=0}^{\infty}b_{i}\int_{0}^{\infty}\psi_{i}(x)\,\mathrm{d}x=1$.
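In practice Eq. (5.35) is truncated to finitely many basis functions, and $(b_{0},\ldots,b_{N-1})$ is computed as a left null vector of the truncated coefficient matrix and then scaled by the normalization condition. A sketch on a $3\times 3$ truncation (the matrix $A$ and the weights $w_{i}=\int\psi_{i}$ are illustrative assumptions with a known answer built in):

```python
import numpy as np

b_true = np.array([0.5, 0.3, 0.2])

# Illustrative coefficient matrix with b_true A = 0: a generator-like matrix
# with off-diagonal entries a_{kl} = b_l and zero row sums.
A = np.tile(b_true, (3, 1))
np.fill_diagonal(A, 0.0)
np.fill_diagonal(A, -A.sum(axis=1))

# Left null vector of A via SVD of A^T (right-singular vector of the
# smallest singular value).
_, _, Vt = np.linalg.svd(A.T)
b = Vt[-1]

# Normalize with (illustrative) weights w_i = integral of psi_i; here w = 1.
w = np.ones(3)
b = b / (b @ w)
```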
5.5 The QBD Processes

In this section, we deal with continuous-time QBD processes with a continuous phase variable, and derive the UL-type RG-factorization. Based on this, the stationary probability distribution of the QBD process is shown to be an operator-multiplicative solution by means of the UL-type RG-factorization.

We first describe a more general continuous-time Markov chain with a discrete level and continuous phase. The kernel matrix of the Markov chain is denoted by the function matrix $B(x,\mathcal{A})$ for $x\ge 0$, $\mathcal{A}\in\sigma([0,\infty))$, with the $(i,j)$th entry $b_{i,j}(x,\mathcal{A})$ for $i,j\ge 0$. Then it is easy to see from Definition 5.1 that
(1) $b_{i,i}(x,\mathcal{A})$ is the kernel of a Markov chain for each $i\ge 0$;
(2) $b_{i,j}(x,\mathcal{A})\ge 0$ for each $i\neq j$;
(3) $\sum_{j\ge 0}b_{i,j}(x,\mathcal{A})\le 0$ for each $i\ge 0$; and
(4) $B(x,\mathcal{A})=\int_{\mathcal{A}}B(x,y)\,\mathrm{d}y$.

Continuous-time Markov chains with a discrete level and continuous phase have many important examples, including the GI/G/1 type, the GI/M/1 type, the M/G/1 type and QBD processes. For simplicity of discussion, here we mainly deal with QBD processes.

Let $\{(L_{t},P_{t}),\,t\ge 0\}$ be a continuous-time QBD process on the semi-continuous state space $\{0,1,2,\ldots\}\times[0,\infty)$. The first coordinate $L_{t}\in\{0,1,2,\ldots\}$ is called the level variable and the second coordinate $P_{t}\in[0,\infty)$ the phase variable. To write the kernel of the QBD process, we need to introduce an element $A_{j}^{(k)}(x,\mathcal{A})$ for $x\ge 0$, $\mathcal{A}\in\sigma([0,\infty))$, and for either $k=0$, $j=0,1$ or $k\ge 1$, $j=0,1,2$. It is obvious that $A_{j}^{(k)}(x,\mathcal{A})$ is the transition rate from an initial state $(k,x)$ to the state set $(k+1-j,\mathcal{A})$.

Henceforth we assume that the continuous-time QBD process with continuous phase variable is conservative, stable and regular. According to Definition 5.1, we further impose additional conditions on these elements as follows:
(1) $-\infty<A_{1}^{(k)}(x,\{x\})<0$ for $k\ge 0$, and
$$
A_{j}^{(k)}(x,\mathcal{A})\begin{cases}\le 0, & \text{if } j=1 \text{ and } x\in\mathcal{A},\\ \ge 0, & \text{otherwise.}\end{cases}
$$
(2) For each pair $(k,j)$ with either $k=0$, $j=0,1$ or $k\ge 1$, $j=0,1,2$, there exists a generalized density function $A_{j}^{(k)}(x,y)$ such that
$$
A_{j}^{(k)}(x,\mathcal{A})=\int_{\mathcal{A}}A_{j}^{(k)}(x,y)\,\mathrm{d}y.
$$
(3)
$$
\int_{0}^{\infty}\bigl[A_{1}^{(0)}(x,y)+A_{0}^{(0)}(x,y)\bigr]\mathrm{d}y=0
$$
and
$$
\int_{0}^{\infty}\bigl[A_{2}^{(l)}(x,y)+A_{1}^{(l)}(x,y)+A_{0}^{(l)}(x,y)\bigr]\mathrm{d}y=0,\qquad l\ge 1.
$$
(4) There exist two non-empty sets $\mathcal{B}_{2},\mathcal{B}_{0}\in\sigma([0,\infty))$ such that $A_{2}^{(l)}(x,\mathcal{B}_{2})>0$ for all $l\ge 1$ and $A_{0}^{(k)}(x,\mathcal{B}_{0})>0$ for all $k\ge 0$.

Based on these elements, we write the kernel matrix of the QBD process as $Q(x,\mathcal{A})=\int_{\mathcal{A}}Q(x,y)\,\mathrm{d}y$ with elementwise integrals for $\mathcal{A}\in\sigma([0,\infty))$, where the matrix $Q(x,y)$ of generalized density functions for $x,y\ge 0$ is given by
$$
Q(x,y)=\begin{pmatrix}
A_{1}^{(0)}(x,y) & A_{0}^{(0)}(x,y) & &\\
A_{2}^{(1)}(x,y) & A_{1}^{(1)}(x,y) & A_{0}^{(1)}(x,y) &\\
 & A_{2}^{(2)}(x,y) & A_{1}^{(2)}(x,y) & A_{0}^{(2)}(x,y)\\
 & & \ddots & \ddots & \ddots
\end{pmatrix}. \qquad (5.36)
$$
5 Markov Chains on Continuous State Space
(2) For the bivariate function matrix A( x, y ) for x , y ı 0 , if there exists a bivariate function matrix A( x, y ) such that for each x , y ı 0 ,
³
f 0
A( x, z ) B( z , y )dz
I ( x, y )
B( x, z ) A( z , y )dz
I ( x, y ),
and
³
f 0
then A( x, y ) is called the matrix-integral inverse of B( x, y ) , we set A( x, y ) B ¢² ( x, y ). (3) Let : be the set of all negative matrix-integral inverses of B( x, y ) . If ¢² ¢² ¢² Bmax ( x, y ): and B ¢² ( x, y ) İ Bmax ( x, y ) İ0 for all B ¢² ( x, y ) : , then Bmax ( x, y ) is called the maximal negative matrix-integral inverse of B( x, y ) .
5.5.1 The UL-Type RG-Factorization

To derive the UL-type RG-factorization, from Eq. (5.36) we write
$${}_{k}Q(x,y) = \begin{pmatrix} A^{(k)}_1(x,y) & A^{(k)}_0(x,y) & & \\ A^{(k+1)}_2(x,y) & A^{(k+1)}_1(x,y) & A^{(k+1)}_0(x,y) & \\ & A^{(k+2)}_2(x,y) & A^{(k+2)}_1(x,y) & A^{(k+2)}_0(x,y) \\ & & \ddots & \ddots & \ddots \end{pmatrix}.$$
If the continuous-time QBD process with continuous phase variable given in Eq. (5.36) is irreducible, then there must exist the maximal nonpositive matrix-integral inverse of ${}_{k}Q(x,y)$ for $k \ge 1$. Let $N_k(x,y)$ be the $(1,1)$st entry of ${}_{k}Q^{\langle -1 \rangle}_{\max}(x,y)$ for $k \ge 1$. We define the R- and G-measures as follows:
$$R_k(x,y) = -\int_0^\infty A^{(k)}_0(x,z)N_{k+1}(z,y)\,dz, \quad k \ge 0, \tag{5.37}$$
and
$$G_k(x,y) = -\int_0^\infty N_k(x,z)A^{(k)}_2(z,y)\,dz, \quad k \ge 1. \tag{5.38}$$
We write the U-measure as
$$U_l(x,y) = A^{(l)}_1(x,y) + \int_0^\infty R_l(x,z)A^{(l+1)}_2(z,y)\,dz \tag{5.39}$$
or
$$U_l(x,y) = A^{(l)}_1(x,y) + \int_0^\infty A^{(l)}_0(x,z)G_{l+1}(z,y)\,dz, \quad l \ge 0. \tag{5.40}$$
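When the phase space is finite, the kernels above reduce to matrices and the two expressions (5.39) and (5.40) for the U-measure can be checked directly. The level-independent blocks below are an illustrative example, not taken from the text:

```python
import numpy as np

# Illustrative level-independent QBD blocks (hypothetical generator):
# A0 = up, A1 = local, A2 = down; rows of A0 + A1 + A2 sum to zero.
A0 = np.array([[0.2, 0.1], [0.0, 0.3]])
A1 = np.array([[-1.0, 0.2], [0.3, -1.2]])
A2 = np.array([[0.5, 0.0], [0.2, 0.4]])

# Minimal nonnegative solution of A0 + R A1 + R^2 A2 = 0, the matrix
# analogue of Eq. (5.43), by successive substitution.
R = np.zeros((2, 2))
for _ in range(2000):
    R = -(A0 + R @ R @ A2) @ np.linalg.inv(A1)

U = A1 + R @ A2                 # U-measure, matrix form of Eq. (5.39)
G = -np.linalg.inv(U) @ A2      # G-measure via N = U^{<-1>}, cf. Eq. (5.38)

# Eq. (5.40): the same U-measure expressed through the G-measure.
assert np.allclose(U, A1 + A0 @ G)
# Matrix form of Eq. (5.44): A2 + A1 G + A0 G^2 = 0.
assert np.allclose(A2 + A1 @ G + A0 @ G @ G, 0, atol=1e-10)
```

The agreement of (5.39) and (5.40) follows from $RU = -A_0$ at the fixed point, which is exactly how the censoring argument below couples the R- and G-measures.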
Constructive Computation in Stochastic Models with Applications

It is obvious that for $k \ge 1$,
$$N_k(x,y) = \left[U_k(x,y)\right]^{\langle -1 \rangle}_{\max}, \tag{5.41}$$
which leads to
$$\int_0^\infty U_k(x,z)N_k(z,y)\,dz = \int_0^\infty N_k(x,z)U_k(z,y)\,dz = I(x,y). \tag{5.42}$$

Remark 5.2 $U_0(x,\Gamma)$ is the kernel of the censored Markov chain of the QBD process to level 0. Based on a censoring property, we need to identify the following two cases:

(1) If the QBD process is positive recurrent (resp. null recurrent), then the censored Markov chain to level 0 with the kernel $U_0(x,y)$ is also positive recurrent (resp. null recurrent), and
$$U_0(x,[0,\infty)) = \int_0^\infty U_0(x,y)\,dy = 0 \quad \text{for all } x \ge 0.$$

(2) If the QBD process is transient, then the censored Markov chain to level 0 with the kernel $U_0(x,y)$ is also transient, and
$$U_0(x,[0,\infty)) = \int_0^\infty U_0(x,y)\,dy < 0$$
for some $x \ge 0$.

The following theorem provides the minimal nonnegative solutions to the kernel equations for the R-, U- and G-measures.

Theorem 5.4 (1) The kernel sequence $\{R_k(x,y), k \ge 0\}$ is the minimal nonnegative solution to the system of integral equations
$$A^{(k)}_0(x,y) + \int_0^\infty R_k(x,z)A^{(k+1)}_1(z,y)\,dz + \int_0^\infty\!\!\int_0^\infty R_k(x,z)R_{k+1}(z,u)A^{(k+2)}_2(u,y)\,du\,dz = 0. \tag{5.43}$$

(2) The kernel sequence $\{G_k(x,y), k \ge 1\}$ is the minimal nonnegative solution to the system of integral equations
$$A^{(k)}_2(x,y) + \int_0^\infty A^{(k)}_1(x,z)G_k(z,y)\,dz + \int_0^\infty\!\!\int_0^\infty A^{(k)}_0(x,z)G_{k+1}(z,u)G_k(u,y)\,du\,dz = 0. \tag{5.44}$$
Proof We only prove Eq. (5.43), while the proof of Eq. (5.44) is similar. It follows from Eq. (5.37) that
$$\int_0^\infty R_k(x,z)U_{k+1}(z,y)\,dz = -\int_0^\infty\!\!\int_0^\infty A^{(k)}_0(x,u)N_{k+1}(u,z)U_{k+1}(z,y)\,du\,dz. \tag{5.45}$$
Applying Eq. (5.42) to the right-hand side of Eq. (5.45) leads to
$$\int_0^\infty\!\!\int_0^\infty A^{(k)}_0(x,u)N_{k+1}(u,z)U_{k+1}(z,y)\,du\,dz = A^{(k)}_0(x,y). \tag{5.46}$$
Substituting Eq. (5.39) into the left-hand side of Eq. (5.45) leads to
$$\int_0^\infty R_k(x,z)U_{k+1}(z,y)\,dz = \int_0^\infty R_k(x,z)A^{(k+1)}_1(z,y)\,dz + \int_0^\infty\!\!\int_0^\infty R_k(x,z)R_{k+1}(z,u)A^{(k+2)}_2(u,y)\,du\,dz. \tag{5.47}$$
Thus, Eq. (5.45) and Eq. (5.47), together with Eq. (5.46), illustrate that the kernel sequence $\{R_k(x,y), k \ge 0\}$ is a nonnegative solution to the system of integral equations (5.43). Furthermore, using a discussion similar to that of Chapter 1 in Neuts [23] yields that $\{R_k(x,y), k \ge 0\}$ is the minimal nonnegative solution of Eq. (5.43). This completes the proof.

For the function $R(x,y)$ for $x,y \ge 0$, the positive number $r$ is called the spectral radius of $R(x,y)$ if there exist two non-zero nonnegative functions $u(x,y)$ and $v(x,y)$ such that
$$r\,u(x,y) = \int_0^\infty u(x,z)R(z,y)\,dz$$
and
$$r\,v(x,y) = \int_0^\infty R(x,z)v(z,y)\,dz.$$
It is interesting to study the spectral radius of $R(x,y)$, the key issues of which include (1) spectral radius existence, (2) spectral radius uniqueness and (3) spectral radius structure. If the QBD process is Harris recurrent and ergodic, and $R(x,y)$ has the spectral radius $r$, then $0 < r < 1$, and $r$ is an eigenvalue of the function $R(x,y)$, if any.

For the continuous-time QBD processes with continuous phase variable, the UL-type RG-factorization is given by
$$Q(x,y) = \int_0^\infty\!\!\int_0^\infty \left[I(x,z) - R_U(x,z)\right]U_D(z,u)\left[I(u,y) - G_L(u,y)\right]du\,dz, \tag{5.48}$$
where
$$R_U(x,y) = \begin{pmatrix} 0 & R_0(x,y) & & \\ & 0 & R_1(x,y) & \\ & & 0 & R_2(x,y) \\ & & & \ddots & \ddots \end{pmatrix},$$
$$U_D(x,y) = \mathrm{diag}\left(U_0(x,y),\ U_1(x,y),\ U_2(x,y),\ \ldots\right)$$
and
$$G_L(x,y) = \begin{pmatrix} 0 & & & \\ G_1(x,y) & 0 & & \\ & G_2(x,y) & 0 & \\ & & G_3(x,y) & 0 \\ & & & \ddots & \ddots \end{pmatrix}.$$
Proof Computing the matrix product $[I(x,z) - R_U(x,z)]U_D(z,u)[I(u,y) - G_L(u,y)]$ yields that the $(k,k)$th entry, the $(k,k-1)$th entry and the $(k,k+1)$th entry of this product are
$$I(x,z)U_k(z,u)I(u,y) + R_k(x,z)U_{k+1}(z,u)G_{k+1}(u,y),$$
$$-I(x,z)U_k(z,u)G_k(u,y) \quad\text{and}\quad -R_k(x,z)U_{k+1}(z,u)I(u,y),$$
respectively. For the $(k,k)$th entry of this product, since
$$\int_0^\infty\!\!\int_0^\infty I(x,z)U_k(z,u)I(u,y)\,du\,dz = U_k(x,y)$$
and
$$\int_0^\infty\!\!\int_0^\infty R_k(x,z)U_{k+1}(z,u)G_{k+1}(u,y)\,du\,dz = -\int_0^\infty A^{(k)}_0(x,z)G_{k+1}(z,y)\,dz,$$
it follows from Eq. (5.40) that
$$\int_0^\infty\!\!\int_0^\infty \left[I(x,z)U_k(z,u)I(u,y) + R_k(x,z)U_{k+1}(z,u)G_{k+1}(u,y)\right]du\,dz = A^{(k)}_1(x,y).$$
Similarly, we can obtain
$$-\int_0^\infty\!\!\int_0^\infty I(x,z)U_k(z,u)G_k(u,y)\,du\,dz = A^{(k)}_2(x,y)$$
and
$$-\int_0^\infty\!\!\int_0^\infty R_k(x,z)U_{k+1}(z,u)I(u,y)\,du\,dz = A^{(k)}_0(x,y).$$
This completes the proof.

For the continuous-time QBD process with continuous phase variable, the following theorem uses the RG-factorization to show that the generalized density function of the stationary probability distribution is an operator-multiplicative solution.
Theorem 5.5 If the continuous-time QBD process with continuous phase variable is Harris recurrent and ergodic, then the generalized density function $\pi(y) = (\pi_0(y), \pi_1(y), \pi_2(y), \ldots)$ of its stationary probability distribution is operator-multiplicative, given by
$$\pi_k(y) = \int_0^\infty \pi_{k-1}(z)R_{k-1}(z,y)\,dz, \quad k \ge 1, \tag{5.49}$$
where $\pi_0(y) = \kappa v_0(y)$, $v_0(y)$ is the generalized density function of the stationary probability distribution of the censored chain to level 0 with the kernel $U_0(x,y)$, which is Harris recurrent and ergodic, and $\kappa$ is a positive constant such that
$$\sum_{l=0}^\infty \int_0^\infty \pi_l(x)\,dx = 1.$$
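In the finite-phase matrix analogue of Theorem 5.5, $\pi_0$ is a left null vector of the censored kernel $U_0$ and the higher levels follow by repeated multiplication with the R-measures. The boundary blocks below are hypothetical, chosen only so that the block rows of the generator sum to zero:

```python
import numpy as np

# Matrix analogue of Theorem 5.5 for a level-independent QBD with two
# phases and a boundary level (illustrative blocks, not from the text).
B1 = np.array([[-0.3, 0.0], [0.1, -0.4]])
B0 = np.array([[0.2, 0.1], [0.0, 0.3]])
A0 = B0.copy()
A1 = np.array([[-1.0, 0.2], [0.3, -1.2]])
A2 = np.array([[0.5, 0.0], [0.2, 0.4]])
B2 = A2.copy()

R = np.zeros((2, 2))
for _ in range(2000):                      # minimal solution of (5.43)
    R = -(A0 + R @ R @ A2) @ np.linalg.inv(A1)

U1 = A1 + R @ A2                           # U-measure at level 1
R0 = -B0 @ np.linalg.inv(U1)               # boundary R-measure
U0 = B1 + R0 @ B2                          # censored kernel to level 0

# pi0 U0 = 0: left null vector of U0, then pi_k = pi_{k-1} R.
w, v = np.linalg.eig(U0.T)
pi0 = np.abs(np.real(v[:, np.argmin(np.abs(w))]))
e = np.ones(2)
# normalize: pi0 e + pi0 R0 (I - R)^{-1} e = 1
pi0 /= pi0 @ e + pi0 @ R0 @ np.linalg.inv(np.eye(2) - R) @ e
pi1 = pi0 @ R0
pi2 = pi1 @ R

# Global balance at levels 0 and 1 (rows of pi Q = 0).
assert np.allclose(pi0 @ B1 + pi1 @ B2, 0, atol=1e-8)
assert np.allclose(pi0 @ B0 + pi1 @ A1 + pi2 @ A2, 0, atol=1e-8)
```

The two balance checks mirror the two steps of the proof below: level 0 is $\pi_0 U_0 = 0$, and each interior level reduces to the R-measure equation.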
Proof Note that
$$\int_0^\infty \pi(z)Q(z,y)\,dz = 0; \tag{5.50}$$
substituting Eq. (5.48) into Eq. (5.50) leads to
$$\int_0^\infty\!\!\int_0^\infty\!\!\int_0^\infty \pi(z)\left[I(z,x) - R_U(z,x)\right]U_D(x,u)\left[I(u,y) - G_L(u,y)\right]du\,dx\,dz = 0. \tag{5.51}$$
We now solve Eq. (5.51) in two steps. In the first step, we set
$$w(y) = \int_0^\infty \pi(z)\left[I(z,y) - R_U(z,y)\right]dz, \tag{5.52}$$
which is partitioned as $(w_0(y), w_1(y), w_2(y), \ldots)$ according to the levels; then Eq. (5.52) becomes
$$w_0(y) = \int_0^\infty \pi_0(z)I(z,y)\,dz = \pi_0(y) \tag{5.53}$$
and, for $k \ge 1$,
$$w_k(y) = \int_0^\infty \left[\pi_k(z)I(z,y) - \pi_{k-1}(z)R_{k-1}(z,y)\right]dz = \pi_k(y) - \int_0^\infty \pi_{k-1}(z)R_{k-1}(z,y)\,dz. \tag{5.54}$$
In the second step, we determine the sequence $\{w_k(y)\}$ in terms of another equation
$$\int_0^\infty\!\!\int_0^\infty w(x)U_D(x,u)\left[I(u,y) - G_L(u,y)\right]du\,dx = 0,$$
which is equivalent to
$$\int_0^\infty\!\!\int_0^\infty \left[w_l(x)U_l(x,u)I(u,y) - w_{l+1}(x)U_{l+1}(x,u)G_{l+1}(u,y)\right]du\,dx = 0 \tag{5.55}$$
for all $l \ge 0$. Since the QBD process is Harris recurrent and ergodic, so is the censored chain to level 0 with the kernel $U_0(x,y)$. Let $v_0(y)$ be the generalized density function of the stationary probability distribution for the censored chain to level 0 with the kernel $U_0(x,y)$. Then it is clear that $(\kappa v_0(y), 0, 0, \ldots)$ is a non-zero nonnegative solution to Eq. (5.55), where $\kappa$ is a positive constant. Therefore,
$$\begin{cases} w_0(y) = \kappa v_0(y), \\ w_k(y) = 0, & \text{for } k \ge 1. \end{cases} \tag{5.56}$$
Substituting Eq. (5.56) into Eq. (5.54) leads to
$$\pi_k(y) = \int_0^\infty \pi_{k-1}(z)R_{k-1}(z,y)\,dz, \quad \text{for } k \ge 1.$$
Using the normalization condition, the positive constant $\kappa$ is determined by $\sum_{l=0}^\infty \int_0^\infty \pi_l(x)\,dx = 1$. This completes the proof.
Remark 5.3 For the continuous-time level-independent QBD process with continuous phase variable whose matrix $Q(x,y)$ of generalized density functions is given in Eq. (5.68), by means of the uniformization technique we can transform this continuous-time QBD process into a discrete-time level-independent QBD process with continuous phase variable whose matrix of generalized density functions is given by
$$P(x,y) = \Delta(x,y) + \Omega(x,y)Q(x,y),$$
where
$$\Delta(x,y) = \mathrm{diag}\left(\delta_{xy}, \delta_{xy}, \delta_{xy}, \ldots\right)$$
and
$$\Omega(x,y) = \mathrm{diag}\left(\Lambda^{-1}(x), \Theta^{-1}(x), \Theta^{-1}(x), \ldots\right).$$
Based on this, we can compute the stationary probability distribution of the continuous-time QBD process with continuous phase variable in terms of the algorithm proposed for the corresponding discrete-time QBD process.
5.5.2 The LU-Type RG-Factorization

We now provide a computable framework for solving the following integral equation
$$\int_0^\infty \varphi(x)Q(x,y)\,dx = f(y), \tag{5.57}$$
given that $f(y) \ne 0$ for some $y \ge 0$ and $Q(x,\Gamma)$ is the kernel matrix of the continuous-time QBD process with continuous phase variable. The solution is a key for deriving expressions for some transient performance measures in stochastic models, for example, the probability distribution at time $t$, the distribution of a first passage time and the distribution of a sojourn time.

It is easy to obtain that $\varphi(x) = \int_0^\infty f(y)Q^{\langle -1 \rangle}_{\max}(y,x)\,dy$. Thus, it is necessary to provide an expression for the maximal negative matrix-integral inverse of $Q(x,y)$. For simplicity, we introduce the notation
$$f \circ g\,(x,y) = \int_0^\infty f(x,z)g(z,y)\,dz.$$
We write
$$Y^{(l)}_k(x,y) = G_l \circ G_{l-1} \circ \cdots \circ G_{l-k+1}(x,y), \quad l \ge k \ge 1,$$
and
$$Z^{(l)}_k(x,y) = R_{l+1} \circ R_{l+2} \circ \cdots \circ R_{l+k}(x,y), \quad k \ge 1,\ l \ge 0.$$

Theorem 5.6 If the continuous-time QBD process with continuous phase variable given in Eq. (5.36) is transient, then
$$Q^{\langle -1 \rangle}_{\max}(x,y) = \left[I(x,y) - G_L(x,y)\right]^{\langle -1 \rangle} \circ \left[U_D(x,y)\right]^{\langle -1 \rangle}_{\max} \circ \left[I(x,y) - R_U(x,y)\right]^{\langle -1 \rangle},$$
where
$$\left[I(x,y) - G_L(x,y)\right]^{\langle -1 \rangle} = \begin{pmatrix} I(x,y) & & & \\ Y^{(1)}_1(x,y) & I(x,y) & & \\ Y^{(2)}_2(x,y) & Y^{(2)}_1(x,y) & I(x,y) & \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix},$$
$$\left[U_D(x,y)\right]^{\langle -1 \rangle}_{\max} = \mathrm{diag}\left(\left[U_0(x,y)\right]^{\langle -1 \rangle}_{\max}, \left[U_1(x,y)\right]^{\langle -1 \rangle}_{\max}, \left[U_2(x,y)\right]^{\langle -1 \rangle}_{\max}, \ldots\right)$$
and
$$\left[I(x,y) - R_U(x,y)\right]^{\langle -1 \rangle} = \begin{pmatrix} I(x,y) & Z^{(0)}_1(x,y) & Z^{(0)}_2(x,y) & \cdots \\ & I(x,y) & Z^{(1)}_1(x,y) & \cdots \\ & & I(x,y) & \cdots \\ & & & \ddots \end{pmatrix}.$$

Proof We only need to check that
$$Q^{\langle -1 \rangle}_{\max}(x,y) \circ Q(x,y) = I(x,y)$$
and
$$Q(x,y) \circ Q^{\langle -1 \rangle}_{\max}(x,y) = I(x,y).$$
Note that the matrices $I(x,y) - G_L(x,y)$ and $I(x,y) - R_U(x,y)$ of generalized density functions are lower-triangular and upper-triangular, respectively; thus their inverses are both unique. Some integral calculations lead to the stated result.

To solve the integral equation Eq. (5.57), it is necessary to derive another useful factorization: the LU-type RG-factorization for the continuous-time QBD process with continuous phase variable. We write
$$\overline{U}_0(x,y) = A^{(0)}_1(x,y)$$
and
$$\overline{U}_k(x,y) = A^{(k)}_1(x,y) - A^{(k)}_2 \circ \left[\overline{U}_{k-1}\right]^{\langle -1 \rangle} \circ A^{(k-1)}_0(x,y), \quad k \ge 1. \tag{5.58}$$
It is easy to check that the generalized density function $\overline{U}_k(x,y)$ is invertible for $k \ge 0$. Let the sequence $\{\overline{R}_k(x,y)\}$ of generalized density functions be the unique nonnegative solution to the system of integral equations
$$A^{(k)}_2(x,y) + \overline{R}_k \circ A^{(k-1)}_1(x,y) + \overline{R}_k \circ \overline{R}_{k-1} \circ A^{(k-2)}_0(x,y) = 0, \quad k \ge 2, \tag{5.59}$$
with the boundary condition
$$\overline{R}_1(x,y) = -A^{(1)}_2 \circ \left[\overline{U}_0\right]^{\langle -1 \rangle}(x,y). \tag{5.60}$$
Similarly, let the sequence $\{\overline{G}_k(x,y)\}$ be the unique nonnegative solution to the system of integral equations
$$A^{(k)}_0(x,y) + A^{(k)}_1 \circ \overline{G}_k(x,y) + A^{(k)}_2 \circ \overline{G}_{k-1} \circ \overline{G}_k(x,y) = 0, \quad k \ge 1, \tag{5.61}$$
with the boundary condition
$$\overline{G}_0(x,y) = -\left[\overline{U}_0\right]^{\langle -1 \rangle} \circ A^{(0)}_0(x,y). \tag{5.62}$$
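The recursion (5.58) together with the boundary conditions (5.60) and (5.62) is directly checkable in the finite-phase matrix analogue, where the operator composition becomes matrix multiplication. The blocks below are an illustrative level-independent QBD with a boundary level (hypothetical, not from the text):

```python
import numpy as np

# Interior and boundary blocks of an illustrative QBD generator.
B1 = np.array([[-0.3, 0.0], [0.1, -0.4]])   # level-0 local block
B0 = np.array([[0.2, 0.1], [0.0, 0.3]])     # level-0 up block
A0 = B0.copy()
A1 = np.array([[-1.0, 0.2], [0.3, -1.2]])
A2 = np.array([[0.5, 0.0], [0.2, 0.4]])
B2 = A2.copy()                               # level-1 down block

K = 8
Ubar = [B1]                                  # Ubar_0 = A1^{(0)} = B1
for k in range(1, K):
    down = B2 if k == 1 else A2              # A2^{(k)}
    up = B0 if k == 1 else A0                # A0^{(k-1)}
    Ubar.append(A1 - down @ np.linalg.inv(Ubar[k - 1]) @ up)   # (5.58)

Rbar = [None, -B2 @ np.linalg.inv(Ubar[0])]  # (5.60)
for k in range(2, K):
    Rbar.append(-A2 @ np.linalg.inv(Ubar[k - 1]))
Gbar = [-np.linalg.inv(Ubar[0]) @ B0]        # (5.62)
for k in range(1, K):
    Gbar.append(-np.linalg.inv(Ubar[k]) @ A0)

# Interior identities (5.59) and (5.61), away from the boundary.
for k in range(3, K):
    assert np.allclose(A2 + Rbar[k] @ A1 + Rbar[k] @ Rbar[k - 1] @ A0,
                       0, atol=1e-10)
for k in range(2, K):
    assert np.allclose(A0 + A1 @ Gbar[k] + A2 @ Gbar[k - 1] @ Gbar[k],
                       0, atol=1e-10)
```

Both identities hold exactly by construction, since $\overline{U}_k = A_1 + \overline{R}_k A_0 = A_1 + A_2\overline{G}_{k-1}$ away from the boundary.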
The following theorem provides the LU-type RG-factorization for the continuous-time QBD process with continuous phase variable. The proof is obvious and is omitted here.

Theorem 5.7 For the continuous-time QBD process with continuous phase variable given in Eq. (5.36), the LU-type RG-factorization is given by
$$Q(x,y) = \left[I(x,y) - R_L(x,y)\right] \circ \overline{U}_D(x,y) \circ \left[I(x,y) - G_U(x,y)\right], \tag{5.63}$$
where
$$\overline{U}_D(x,y) = \mathrm{diag}\left(\overline{U}_0(x,y), \overline{U}_1(x,y), \overline{U}_2(x,y), \overline{U}_3(x,y), \ldots\right),$$
$$R_L(x,y) = \begin{pmatrix} 0 & & & \\ \overline{R}_1(x,y) & 0 & & \\ & \overline{R}_2(x,y) & 0 & \\ & & \overline{R}_3(x,y) & 0 \\ & & & \ddots & \ddots \end{pmatrix}$$
and
$$G_U(x,y) = \begin{pmatrix} 0 & \overline{G}_0(x,y) & & \\ & 0 & \overline{G}_1(x,y) & \\ & & 0 & \overline{G}_2(x,y) \\ & & & 0 & \ddots \\ & & & & \ddots \end{pmatrix}.$$
We write
$$X^{(l)}_k(x,y) = \overline{R}_l \circ \overline{R}_{l-1} \circ \cdots \circ \overline{R}_{l-k+1}(x,y), \quad l \ge k \ge 1, \tag{5.64}$$
and
$$Y^{(l)}_k(x,y) = \overline{G}_l \circ \overline{G}_{l+1} \circ \cdots \circ \overline{G}_{l+k-1}(x,y), \quad k \ge 1,\ l \ge 0. \tag{5.65}$$

The following theorem provides expressions for each entry of the maximal nonpositive inverse $Q^{\langle -1 \rangle}_{\max}(x,y)$. We set $Q^{\langle -1 \rangle}_{\max}(x,y) = \left(q_{m,n}(x,y)\right)_{m,n \ge 0}$, partitioned according to the levels.

Theorem 5.8 For the continuous-time QBD process with continuous phase variable given in Eq. (5.36), we have
$$q_{m,n}(x,y) = \begin{cases} \left[\overline{U}_m\right]^{\langle -1 \rangle} \circ X^{(m)}_{m-n}(x,y) + \displaystyle\sum_{i=1}^\infty Y^{(m)}_i \circ \left[\overline{U}_{i+m}\right]^{\langle -1 \rangle} \circ X^{(i+m)}_{i+m-n}(x,y), & \text{if } 0 \le n \le m-1, \\[6pt] \left[\overline{U}_m\right]^{\langle -1 \rangle}(x,y) + \displaystyle\sum_{i=1}^\infty Y^{(m)}_i \circ \left[\overline{U}_{i+m}\right]^{\langle -1 \rangle} \circ X^{(i+m)}_i(x,y), & \text{if } n = m, \\[6pt] Y^{(m)}_{n-m} \circ \left[\overline{U}_n\right]^{\langle -1 \rangle}(x,y) + \displaystyle\sum_{i=n-m+1}^\infty Y^{(m)}_i \circ \left[\overline{U}_{i+m}\right]^{\langle -1 \rangle} \circ X^{(i+m)}_{i-n+m}(x,y), & \text{if } n \ge m+1. \end{cases}$$
5.6 Structured Matrix Expressions

In this section, we show that if the matrix of generalized density functions of the continuous-time QBD process with continuous phase variable is in $L^2([0,\infty)^2)$, then the integral equations given in the above section can be converted into associated matrix equations. Therefore, some useful matrix-structured formulae, which resemble those in Section 1.3, are derived again. This leads to an algebraically algorithmic framework for calculating some performance measures of the continuous-time QBD processes with continuous phase variable.

Since $Q(x,y) \in L^2([0,\infty)^2)$, we obtain that for either $k=0$, $i=0,1$ or $k \ge 1$, $i=0,1,2$,
$$A^{(k)}_i(x,y) = \sum_{m=0}^\infty \sum_{n=0}^\infty a^{(k)}_{i;m,n}\psi_m(x)\psi_n(y) = \Psi(x)\mathbf{A}^{(k)}_i\Psi(y)^{\mathrm{T}},$$
where $\{\psi_i(x)\}$ is an orthogonal basis in $L^2([0,\infty))$,
$$\Psi(x) = (\psi_0(x), \psi_1(x), \psi_2(x), \ldots)$$
and
$$\mathbf{A}^{(k)}_i = \begin{pmatrix} a^{(k)}_{i;0,0} & a^{(k)}_{i;0,1} & a^{(k)}_{i;0,2} & \cdots \\ a^{(k)}_{i;1,0} & a^{(k)}_{i;1,1} & a^{(k)}_{i;1,2} & \cdots \\ a^{(k)}_{i;2,0} & a^{(k)}_{i;2,1} & a^{(k)}_{i;2,2} & \cdots \\ \vdots & \vdots & \vdots & \end{pmatrix}.$$
We write
$$D(x) = \mathrm{diag}\left(\Psi(x), \Psi(x), \Psi(x), \ldots\right)$$
and
$${}_k\mathbf{Q} = \begin{pmatrix} \mathbf{A}^{(k)}_1 & \mathbf{A}^{(k)}_0 & & \\ \mathbf{A}^{(k+1)}_2 & \mathbf{A}^{(k+1)}_1 & \mathbf{A}^{(k+1)}_0 & \\ & \mathbf{A}^{(k+2)}_2 & \mathbf{A}^{(k+2)}_1 & \mathbf{A}^{(k+2)}_0 \\ & & \ddots & \ddots & \ddots \end{pmatrix}.$$
Note that ${}_0Q(x,y) = Q(x,y)$ and ${}_0\mathbf{Q} = \mathbf{Q}$. It is easy to check that for $k \ge 0$,
$${}_kQ(x,y) = D(x)\left({}_k\mathbf{Q}\right)D(y)^{\mathrm{T}}.$$

Proposition 5.3 (1) The matrix-identity kernel is given by
$$I(x,y) = D(x)D(y)^{\mathrm{T}}.$$
(2) For $k \ge 1$, the maximal negative matrix-integral inverse of the matrix ${}_kQ(x,y)$ is given by
$${}_kQ^{\langle -1 \rangle}_{\max}(x,y) = D(x)\left[{}_k\mathbf{Q}^{\langle -1 \rangle}\right]D(y)^{\mathrm{T}}, \tag{5.66}$$
where ${}_k\mathbf{Q}^{\langle -1 \rangle}$ is an inverse of the matrix ${}_k\mathbf{Q}$.

Proof We only prove (1), while (2) can be proved similarly. We assume that the matrix-identity kernel is $I(x,y) = D(x)CD(y)^{\mathrm{T}}$, where the matrix $C$ is unknown. Let $B(x,y)$ be an arbitrary matrix of generalized density functions in $L^2([0,\infty)^2)$. Then
$$B(x,y) = D(x)\mathbf{B}D(y)^{\mathrm{T}}.$$
Noting that
$$\int_0^\infty I(x,z)B(z,y)\,dz = B(x,y)$$
and
$$\int_0^\infty B(x,z)I(z,y)\,dz = B(x,y),$$
we obtain
$$D(x)(I - C)\mathbf{B}D(y)^{\mathrm{T}} = 0 \quad\text{and}\quad D(x)\mathbf{B}(I - C)D(y)^{\mathrm{T}} = 0.$$
Since $\{\psi_i(x)\}$ is an orthogonal basis in $L^2([0,\infty))$ and the matrix $\mathbf{B}$ is arbitrary, we obtain $C = I$. This completes the proof.

The following proposition provides an algebraically computational framework for calculating the kernel sequences $\{R_k(x,y), k \ge 0\}$, $\{G_k(x,y), k \ge 1\}$ and $\{U_l(x,y), l \ge 0\}$. The proof is obvious and is omitted here.

Proposition 5.4 Let $\mathbf{N}_k$ be the $(1,1)$st block of the matrix ${}_k\mathbf{Q}^{\langle -1 \rangle}$, corresponding to the block structure of the matrix ${}_k\mathbf{Q}$, given in Eq. (5.66).
(1) For $k \ge 0$,
$$R_k(x,y) = \Psi(x)\mathbf{R}_k\Psi(y)^{\mathrm{T}},$$
where $\mathbf{R}_k = -\mathbf{A}^{(k)}_0\mathbf{N}_{k+1}$ satisfies the system of matrix equations
$$\mathbf{A}^{(k)}_0 + \mathbf{R}_k\mathbf{A}^{(k+1)}_1 + \mathbf{R}_k\mathbf{R}_{k+1}\mathbf{A}^{(k+2)}_2 = 0. \tag{5.67}$$
(2) For $k \ge 1$,
$$G_k(x,y) = \Psi(x)\mathbf{G}_k\Psi(y)^{\mathrm{T}},$$
where $\mathbf{G}_k = -\mathbf{N}_k\mathbf{A}^{(k)}_2$ satisfies the system of matrix equations
$$\mathbf{A}^{(k)}_2 + \mathbf{A}^{(k)}_1\mathbf{G}_k + \mathbf{A}^{(k)}_0\mathbf{G}_{k+1}\mathbf{G}_k = 0.$$
(3) For $l \ge 0$,
$$U_l(x,y) = \Psi(x)\mathbf{U}_l\Psi(y)^{\mathrm{T}},$$
where
$$\mathbf{U}_l = \mathbf{A}^{(l)}_1 + \mathbf{R}_l\mathbf{A}^{(l+1)}_2 = \mathbf{A}^{(l)}_1 + \mathbf{A}^{(l)}_0\mathbf{G}_{l+1}.$$
Furthermore,
$$\mathbf{N}_k = \mathbf{U}_k^{\langle -1 \rangle}, \quad k \ge 1,$$
while $\mathbf{U}_0$ is singular if the QBD process is recurrent; otherwise it is invertible.

Remark 5.4 The sequence $\{\mathbf{R}_k\}$ is a non-zero solution to the system of matrix equations (5.67). However, a non-zero solution to the system (5.67) may not be the sequence $\{\mathbf{R}_k\}$, because the system (5.67) possibly has different non-zero solutions, e.g., see Gail, Hantler and Taylor [13]. From the viewpoint of solutions, it is necessary but more difficult to provide conditions for identifying whether a non-zero solution to the system (5.67) is the sequence $\{\mathbf{R}_k\}$. At the same time, the sequence $\{\mathbf{G}_k\}$ has the same characteristics.

The following theorem provides a matrix-structured form for the RG-factorization, which will be useful in constructing algebraically algorithmic solutions for the continuous-time QBD processes with continuous phase variable.

Theorem 5.9
$$\mathbf{Q} = (I - \mathbf{R}_U)\mathbf{U}_D(I - \mathbf{G}_L),$$
where
$$\mathbf{U}_D = \mathrm{diag}\left(\mathbf{U}_0, \mathbf{U}_1, \mathbf{U}_2, \ldots\right),$$
$$\mathbf{R}_U = \begin{pmatrix} 0 & \mathbf{R}_0 & & \\ & 0 & \mathbf{R}_1 & \\ & & 0 & \mathbf{R}_2 \\ & & & 0 & \ddots \\ & & & & \ddots \end{pmatrix}$$
and
$$\mathbf{G}_L = \begin{pmatrix} 0 & & & \\ \mathbf{G}_1 & 0 & & \\ & \mathbf{G}_2 & 0 & \\ & & \ddots & \ddots \end{pmatrix}.$$
Proof Noting that
$$Q(x,y) = D(x)\mathbf{Q}D(y)^{\mathrm{T}}, \qquad U_D(x,y) = D(x)\mathbf{U}_D D(y)^{\mathrm{T}},$$
$$R_U(x,y) = D(x)\mathbf{R}_U D(y)^{\mathrm{T}}, \qquad G_L(x,y) = D(x)\mathbf{G}_L D(y)^{\mathrm{T}},$$
it follows from Eq. (5.48) that
$$D(x)\left[\mathbf{Q} - (I - \mathbf{R}_U)\mathbf{U}_D(I - \mathbf{G}_L)\right]D(y)^{\mathrm{T}} = 0.$$
Therefore, we obtain
$$\mathbf{Q} = (I - \mathbf{R}_U)\mathbf{U}_D(I - \mathbf{G}_L).$$
This completes the proof.

The following theorem provides an algebraically computational framework for calculating the generalized density function $\pi(x)$ of the stationary probability distribution.

Theorem 5.10 If the continuous-time QBD process with continuous phase variable is Harris recurrent and ergodic, then
$$\pi_l(y) = x_l\Psi(y)^{\mathrm{T}}, \quad l \ge 0,$$
where the row vector $x_l$ is iteratively determined by
$$x_l = x_{l-1}\mathbf{R}_{l-1}, \quad l \ge 1,$$
and $x_0$ is a non-zero solution to the equation $x_0\mathbf{U}_0 = 0$ satisfying $x_0\Psi(y)^{\mathrm{T}} > 0$ for some $y \ge 0$.

Proof It follows from Eq. (5.49) that
$$\left(x_l - x_{l-1}\mathbf{R}_{l-1}\right)\Psi(y)^{\mathrm{T}} = 0.$$
Since $\{\psi_i(x)\}$ is an orthogonal basis in $L^2([0,\infty))$, we obtain $x_l = x_{l-1}\mathbf{R}_{l-1}$. On the other hand, $\int_0^\infty \pi_0(x)U_0(x,y)\,dx = 0$ and $\sum_{k=0}^\infty \int_0^\infty \pi_k(x)\,dx = 1$ imply $x_0\mathbf{U}_0 = 0$ and $x_0 \ne 0$. Note that $\pi_0(y) > 0$ for some $y \ge 0$; hence $x_0\Psi(y)^{\mathrm{T}} > 0$ for some $y \ge 0$. This completes the proof.

We now discuss an important special case: a continuous-time level-independent QBD process with continuous phase variable whose matrix of generalized density functions is given by
$$Q(x,y) = \begin{pmatrix} B_1(x,y) & B_0(x,y) & & \\ B_2(x,y) & A_1(x,y) & A_0(x,y) & \\ & A_2(x,y) & A_1(x,y) & A_0(x,y) \\ & & \ddots & \ddots & \ddots \end{pmatrix}. \tag{5.68}$$
In this case, we have
$$R_k(x,y) = R(x,y), \quad k \ge 1,$$
and the generalized density function of the stationary probability distribution is given by
$$\begin{cases} \pi_0(x) = \kappa v_0(x), \\[3pt] \pi_1(x) = \displaystyle\int_0^\infty \pi_0(z)R_0(z,x)\,dz, \\[3pt] \pi_k(x) = \displaystyle\int_0^\infty\!\!\int_0^\infty \cdots \int_0^\infty \pi_1(z_1)R(z_1,z_2)R(z_2,z_3)\cdots R(z_{k-1},x)\,dz_1\,dz_2\cdots dz_{k-1}, & k \ge 2. \end{cases}$$
At the same time, we have
$$\begin{cases} x_1 = x_0\mathbf{R}_0, \\ x_k = x_1\mathbf{R}^{k-1}, & \text{for } k \ge 2. \end{cases}$$

Now, we provide an algorithmic framework for computing the stationary distribution of the continuous-time level-independent QBD process with continuous phase variable whose matrix of generalized density functions is given in Eq. (5.68). We assume that the QBD process is Harris recurrent and ergodic, and each entry of its matrix of generalized density functions is in $L^2([0,\infty)^2)$. To compute the stationary distribution of the QBD process, we describe an orthogonal algorithm as follows:

Step 1 Orthogonal basis and orthogonal expansion

Let the function sequence $\{\psi_n(x), n \ge 0\}$ be an orthogonal basis in $L^2([0,\infty))$. Then for an arbitrary function $f(x,y) \in L^2([0,\infty)^2)$, we have
$$f(x,y) = \sum_{m=0}^\infty \sum_{n=0}^\infty f_{m,n}\psi_m(x)\psi_n(y),$$
where
$$f_{m,n} = \int_0^\infty\!\!\int_0^\infty f(x,y)\psi_m(x)\psi_n(y)\,dx\,dy, \quad m,n \ge 0.$$

Step 2 Orthogonal expansions for kernel elements

Using the orthogonal basis $\{\psi_n(x), n \ge 0\}$ given in Step 1, we provide the orthogonal expansion for each entry of the matrix $Q(x,y)$ given in Eq. (5.68).
First, we give the orthogonal expansions for the off-diagonal entries $A_0(x,y)$, $A_2(x,y)$, $B_0(x,y)$, $B_2(x,y)$ as follows:
$$A_i(x,y) = \sum_{m=0}^\infty \sum_{n=0}^\infty a^{(i)}_{m,n}\psi_m(x)\psi_n(y), \quad i = 0,2,$$
and
$$B_i(x,y) = \sum_{m=0}^\infty \sum_{n=0}^\infty b^{(i)}_{m,n}\psi_m(x)\psi_n(y), \quad i = 0,2.$$
We take the matrix $\mathbf{A}_i$ with the $(m,n)$th entry $a^{(i)}_{m,n}$ and the matrix $\mathbf{B}_i$ with the $(m,n)$th entry $b^{(i)}_{m,n}$ for $i = 0,2$.

Then, we give the orthogonal expansions for the diagonal entries $A_1(x,y)$ and $B_1(x,y)$, respectively. To that end, we write
$$\Lambda(x) = B_0(x,[0,\infty)) + B_1(x,[0,x) \cup (x,\infty))$$
and
$$\Theta(x) = A_0(x,[0,\infty)) + A_1(x,[0,x) \cup (x,\infty)) + A_2(x,[0,\infty)).$$
Note that
$$\Theta(x) = A_0(x,[0,\infty)) + A_1(x,[0,x) \cup (x,\infty)) + B_2(x,[0,\infty))$$
due to $B_2(x,[0,\infty)) = A_2(x,[0,\infty))$. It is easy to see that for any $x,y \ge 0$,
$$A_1(x,y) = A_1(x,y)[1 - \delta_{xy}] - \Theta(x)\delta_{xy}$$
and
$$B_1(x,y) = B_1(x,y)[1 - \delta_{xy}] - \Lambda(x)\delta_{xy}.$$
Let
$$A_1(x,y)[1 - \delta_{xy}] = \sum_{m=0}^\infty \sum_{n=0}^\infty a^{(1,1)}_{m,n}\psi_m(x)\psi_n(y)$$
and
$$\Theta(x)\delta_{xy} = \sum_{m=0}^\infty \sum_{n=0}^\infty a^{(1,2)}_{m,n}\psi_m(x)\psi_n(y).$$
Then
$$A_1(x,y) = \sum_{m=0}^\infty \sum_{n=0}^\infty a^{(1)}_{m,n}\psi_m(x)\psi_n(y),$$
where
am(1), n
(1,2) am(1,1) , n am , n ,
m, n ı 0.
Therefore, we take the matrix A1 with the (m, n)th entry am(1),n . Similarly, let f
B1 ( x, y )[1 G xy ]
f
¦¦b
(1,1) m,n
\ m ( x)\ n ( y )
m 0 n 0
and f
/ ( x)G xy
f
¦¦b
(1,2) m,n
\ m ( x)\ n ( y ).
m 0 n 0
Then f
B1 ( x, y )
f
¦ ¦b
\ m ( x) \ n ( y ),
(1,2) bm(1,1) , n bm , n ,
m, n ı 0.
(1) m ,n
m 0 n 0
where bm(1), n
We take the matrix B1 with the (m, n) th entry bm(1),n . Step 3 Computation of the stationary probability distribution Let R be a non-zero solution to the matrix equation A0 RA1 R 2 A2
0 with
1 1
the setting that R lim R( N ) , where R(0) 0 and R( N ) A0 A R ( N 1) 2 A2 A11 N of
for N ı 1 . If S0 is the non-zero solution to the matrix equation S0U 0
0,
1
where U 0 B1 B0 [( A1 RA2 )] B2 , then the stationary probability distribution is given by ʌl ( y )
T
S0 R0 Rl< ( y ) ,
l ı 0, y ı 0,
where < ( y ) (\ 0 ( y ), \ 1 ( y ), \ 3 ( y ), ...) . Step 4 A truncated approximation When the sizes of the matrices A1 and / or B1 are infinite, it is necessary to truncate the matrices in order to compute the matrix R and / or the vector S0 approximately. After choosing a larger positive integer N, for i 0, 1, 2 we take
Ai
258
(i ) § a0,0 ¨ (i ) ¨ a1,0 ¨ # ¨ (i ) ¨a © N ,0
(i ) a0,1 ! a0,(i )N · ¸ (i ) ! a1,(iN) ¸ a1,1 # # ¸ ¸ (i ) (i ) ¸ aN ,1 ! aN , N ¹
5 Markov Chains on Continuous State Space
and
$$\mathbf{B}_i = \begin{pmatrix} b^{(i)}_{0,0} & b^{(i)}_{0,1} & \cdots & b^{(i)}_{0,N} \\ b^{(i)}_{1,0} & b^{(i)}_{1,1} & \cdots & b^{(i)}_{1,N} \\ \vdots & \vdots & & \vdots \\ b^{(i)}_{N,0} & b^{(i)}_{N,1} & \cdots & b^{(i)}_{N,N} \end{pmatrix}.$$
For these new matrices $\mathbf{A}_i$ and $\mathbf{B}_i$ for $i = 0,1,2$, after going to Step 3 again, we can give an approximate probability distribution below:
$$\hat{\pi}_0(y) = x_0\Psi(y)^{\mathrm{T}}, \qquad \hat{\pi}_l(y) = x_0\mathbf{R}_0\mathbf{R}^{l-1}\Psi(y)^{\mathrm{T}}, \quad l \ge 1,\ y \ge 0,$$
where $\Psi(y) = (\psi_0(y), \psi_1(y), \ldots, \psi_N(y))$, $x_0$ is a row vector of dimension $N+1$, and $\mathbf{R}_0$ and $\mathbf{R}$ are two matrices of order $N+1$.

Now, we provide a structure for the numbers $a^{(1,2)}_{m,n}$ and $b^{(1,2)}_{m,n}$ for $m,n \ge 0$. To that end, we need to note a crucial result for the $\delta$-function $\delta_{xy}$ for $x,y \ge 0$ as follows:
$$\int_0^\infty \Theta(x)\delta_{xy}\,dy = \Theta(x).$$
Let
$$\Theta(x) = \sum_{m=0}^\infty a_m\psi_m(x).$$
Then
$$a^{(1,2)}_{m,n} = \int_0^\infty\!\!\int_0^\infty \Theta(x)\delta_{xy}\psi_m(x)\psi_n(y)\,dx\,dy = \int_0^\infty \Theta(x)\psi_m(x)\psi_n(x)\,dx = \begin{cases} a_m, & \text{if } m = n, \\ 0, & \text{if } m \ne n. \end{cases}$$
Similarly, we have
$$b^{(1,2)}_{m,n} = \begin{cases} b_m, & \text{if } m = n, \\ 0, & \text{if } m \ne n, \end{cases}$$
where
$$\Lambda(x) = \sum_{m=0}^\infty b_m\psi_m(x).$$
The following theorem establishes a basic relationship among the numbers $a^{(i)}_{m,n}$ and $b^{(i)}_{m,n}$ for $i = 0,1,2$ and $m,n \ge 0$. This relationship is similar to the ordinary result that each row sum is zero in the infinitesimal generator of a continuous-time QBD process with countable phases.

Theorem 5.11 Let $c_n = \int_0^\infty \psi_n(y)\,dy$. For each given $m \ge 0$, we have
(1)
$$a_m = \sum_{n=0}^\infty c_n\left[a^{(1,1)}_{m,n} + \sum_{i=0,2} a^{(i)}_{m,n}\right].$$
(2)
$$b_m = \sum_{n=0}^\infty c_n\left[b^{(1,1)}_{m,n} + b^{(0)}_{m,n}\right].$$

Proof We only prove (1), while the proof of (2) is similar. Since the QBD process is conservative, we have
$$\int_0^\infty \left[A_0(x,y) + A_1(x,y) + A_2(x,y)\right]dy = 0.$$
Using the orthogonal expansions given in Step 2, we obtain
$$\int_0^\infty A_0(x,y)\,dy = \sum_{m=0}^\infty \psi_m(x)\sum_{n=0}^\infty c_n a^{(0)}_{m,n}, \qquad \int_0^\infty A_2(x,y)\,dy = \sum_{m=0}^\infty \psi_m(x)\sum_{n=0}^\infty c_n a^{(2)}_{m,n},$$
and the relation $A_1(x,y) = A_1(x,y)[1 - \delta_{xy}] - \Theta(x)\delta_{xy}$ leads to
$$\int_0^\infty A_1(x,y)[1 - \delta_{xy}]\,dy = \sum_{m=0}^\infty \psi_m(x)\sum_{n=0}^\infty c_n a^{(1,1)}_{m,n}$$
and
$$\int_0^\infty \Theta(x)\delta_{xy}\,dy = \sum_{m=0}^\infty \psi_m(x)a_m.$$
Thus, we obtain
$$\sum_{m=0}^\infty \psi_m(x)\left\{-a_m + \sum_{n=0}^\infty c_n\left[a^{(1,1)}_{m,n} + \sum_{i=0,2} a^{(i)}_{m,n}\right]\right\} = 0.$$
5 Markov Chains on Continuous State Space
each given m ı 0, am
ªf º ª (i ) º « ¦ cn « am(1,1) , n ¦ am , n » » . i 0,2 ¼ ¼» ¬« n 0 ¬
This completes the proof. We now extend the mean drift condition of the continuous-time level-independent QBD process with countable phases to the corresponding QBD process Q( x, y ) for x, y ı 0 with continuous phase. To do this, for x, y ı 0 we write A0 ( x, y ) A1 ( x, y ) A2 ( x, y )
A( x, y ) and U 0 ( x, y )
B1 ( x, y ) ³ B1 ( x, y ) ³
f 0 f 0
R0 ( x, z ) B2 ( z , y )dz B0 ( x, z )G1 ( z , y )dz.
We assume that the two Markov chains A( x , y ) and U 0 ( x, y ) on continuous state space are all Harris recurrent and ergodic. Let Z ( x ) be the stationary probability distribution of the Markov chain A( x, y ) . Then y ı 0 and
³
³
f 0
Z ( x) A( x, y )dx 0 for each
f 0
Z ( x)dx 1 .
Using a similar analysis on the mean drift condition of the continuous-time level-independent QBD process with countable phases, it is easy to see that the QBD process Q( x, y ) is Harris recurrent and ergodic if for all y ı 0,
³
f 0
f
Z ( x) A0 (x y )dx ³ Z ( x) A2 ( x y )dx 0
When using an orthogonal base to provide orthogonal expansions for some entries of the matrix of the generalized density functions given in Eq. (5.68), it is easy to see that the signs of the orthogonal coefficients always alternate. For this, the subtractive cancellation may be catastrophic in order to compute the matrix R in Step 3 according to the basic iterative procedure: R lim R( N ), where R(0) 0 N of
1 1
2
1 1
and R( N ) A0 A R( N 1) A2 A for N ı 1. In general, the sign alternation and the subtractive cancellation may result in the fact that the basic iterative computation is not numerically stable unless the size of the orthogonal expansion is small or otherwise is well structured. We now provide an example to indicate how the orthogonal algorithm works, which demonstrates that the algorithm is full of promise for numerical computation of the QBD process with continuous phase. We take the entries of the matrix Q( x, y ) given in Eq. (5.68) as follows: 261
Constructive Computation in Stochastic Models with Applications
B1 ( x, y ) { 1, B0 ( x, y ) { 1,
for all x, y ı 0,
1 ½ xy exp ® ( x y ) ¾ , ¯ 2 ¿ 1 1 ½ x( y 2) exp ® ( x y ) ¾ [1 G xy ] 3 ¯ 2 ¿
A0 ( x, y ) A1 ( x, y )
ª 2 º 1 ½ « 9 6(1 x) » exp ® x ¾ G xy 3 ¬ ¼ ¯ 2 ¿ and A2 ( x, y )
B2 ( x, y )
1 1 ½ ( x 1)( y 1) exp ® ( x y ) ¾ . 4 ¯ 2 ¿
It is easy to check that A0 ( x, y )
§ 1 1·§ 1 · 1 ½ 1 ½ exp ® x ¾ (1,1 x ) ¨ ¸¨ ¸ exp ® y ¾ , 1 1 1 y 2 ¯ ¿ ¯ 2 ¿ © ¹© ¹
A2 ( x, y )
§ ¨ 1 1 ½ exp ® x ¾ (1,1 x) ¨ ¯ 2 ¿ ¨1 ¨ © 2
1· ¸ 1 ½ 2 § 1 · ¸¨ ¸ exp ® y ¾ 1 ¸©1 y ¹ ¯ 2 ¿ ¸ 4 ¹
and 1· § 1 ¸ ¨ 1 ½ 1 ½ 3 § 1 · ¸¨ A1 ( x, y ) exp ® x ¾ (1,1 x ) ¨ ¸ exp ® y ¾ 1 1 y 2 ¯ ¿ ¯ 2 ¿ ¨ 1 ¸© ¹ ¨ ¸ 3 ¹ © § 2 · 0¸§ 1 · 9 1 ½ 1 ½ exp ® y ¾ exp ® x ¾ (1,1 x) ¨ 3 ¨ ¸ ¨¨ ¸ 1 y¹ ¯ 2 ¿ ¯ 2 ¿ 6 ¸© © 0 ¹ § 2 ¨ 8 3 1 ½ exp ® x ¾ (1,1 x) ¨ ¯ 2 ¿ ¨ 1 ¨ ©
1· ¸ 1 ½ 3 § 1 · ¸¨ ¸ exp ® y ¾ . 1 1 y¹ ¯ 2 ¿ 6 ¸¸ © 3¹
1 ½ 1 ½½ Note that ®exp ® x ¾ , (1 x) exp ® x ¾¾ are orthogonal bases in L2 ([0, f)), ¯ 2 ¿ ¯ 2 ¿¿ ¯ According to (1) in Remark 10, we obtain 262
5 Markov Chains on Continuous State Space
A0
§ 1 1· ¨ ¸, © 1 1 ¹
A1
§ 2 ¨ 8 3 ¨ ¨ 1 ¨ ©
1· ¸ 3 ¸, 1¸ 6 ¸ 3¹
A2
§ 2 ¨ 8 3 ¨ ¨ 1 ¨ ©
1· ¸ 3 ¸. 1¸ 6 ¸ 3¹
Let R be a non-zero solution to the matrix equation A0 RA1 R 2 A2 0 , and this solution is iteratively obtained by using step 3 of the orthogonal algorithm. Then § 1 · 1 ½ ʌl ( y ) N (r1 , r2 ) R l ¨ ¸ exp ® y ¾ , 1 y ¯ 2 ¿ © ¹ f
where (r1 , r2 )
R0 and N is constant such that
¦³ l 0
f
ʌl ( y )dy 1.
0
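The computations in this example can be reproduced numerically. The coefficient matrices below follow the reconstruction above; the assertions check the row-sum relation of Theorem 5.11 (with $c_0 = 2$, $c_1 = -2$ for this basis) and the convergence of the Step 3 iteration:

```python
import numpy as np

# Coefficient matrices of the example, in the basis
# psi_0(x) = e^{-x/2}, psi_1(x) = (1 - x) e^{-x/2}.
A0 = np.array([[1.0, -1.0], [-1.0, 1.0]])
A11 = np.array([[1.0, -1.0 / 3.0], [-1.0, 1.0 / 3.0]])  # off-diagonal part
a = np.array([29.0 / 3.0, -49.0 / 6.0])                 # Theta coefficients
A1 = A11 - np.diag(a)                                   # a^{(1)} = a^{(1,1)} - a^{(1,2)}
A2 = np.array([[1.0, -0.5], [-0.5, 0.25]])

# Theorem 5.11(1): a_m = sum_n c_n [a^{(1,1)} + a^{(0)} + a^{(2)}]_{m,n}.
c = np.array([2.0, -2.0])                               # c_n = int psi_n(y) dy
assert np.allclose((A11 + A0 + A2) @ c, a)

# Step 3: iterate R(N) = -[A0 + R(N-1)^2 A2] A1^{-1}.
A1inv = np.linalg.inv(A1)
R = np.zeros((2, 2))
for _ in range(500):
    R = -(A0 + R @ R @ A2) @ A1inv
assert np.max(np.abs(A0 + R @ A1 + R @ R @ A2)) < 1e-10
```

Note that $\mathbf{R}$ need not be entrywise nonnegative here: nonnegativity of the kernel $R(x,y)$ does not carry over to its coefficient matrix, since $\psi_1$ changes sign.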
5.7 A CMAP/CPH/1 Queue

In this section, we discuss a CMAP/CPH/1 queue in terms of the continuous-time QBD process with continuous phase variable.

5.7.1 The CPH Distribution

Let $\lambda(x,\Gamma)$ on $[0,\infty) \times \sigma([0,\infty))$ and $\lambda(x,y)$ for $x,y \ge 0$ be the kernel and the generalized density function of a continuous-time Markov chain on the continuous state space $[0,\infty)$, respectively. We assume that
$$\lambda(x,[0,\infty)) = \int_0^x \lambda(x,y)\,dy + \lambda(x,\{x\}) + \int_x^\infty \lambda(x,y)\,dy \le 0$$
and that there exists an interval $[a,b) \subset [0,\infty)$ such that
$$\lambda(x,[0,\infty)) < 0 \quad \text{for all } x \in [a,b). \tag{5.69}$$
Let $\lambda^{(0)}(x) = -\lambda(x,[0,\infty))$. Then it is easy to see that the absorbing rate of the Markov chain $\lambda(x,\Gamma)$ from state $x$ into the absorbing state is $\lambda^{(0)}(x)$ for $x \ge 0$.

Using a discussion similar to that for the PH distribution with an irreducible representation $(\alpha, T)$ of size $m$ (e.g., see Neuts [23] for more details), we write the kernel expression of the CPH distribution as $(\alpha(x), \lambda(x,y))$, where $\alpha(x) \ge 0$ for $x \ge 0$ and $\int_0^\infty \alpha(x)\,dx = 1$. A simple computation leads to the explicit expression of the CPH distribution below:
$$F(t) = 1 - \int_0^\infty \alpha(x)\exp\{\lambda(x,x)t\}\left[\int_0^\infty \exp\{\lambda(x,y)t\}\,dy\right]dx. \tag{5.70}$$
Note that
$$0 \le \exp\{\lambda(x,x)t\}\left[\int_0^\infty \exp\{\lambda(x,y)t\}\,dy\right] \le 1 \quad \text{for all } x \in [0,\infty).$$

Remark 5.5 To understand Eq. (5.70), we analyze the expression for the ordinary PH distribution with irreducible representation $(\beta, S)$ of size $m$, where $\beta = (\beta_1, \beta_2, \ldots, \beta_m)$ and $S = (s_{i,j})_{1 \le i,j \le m}$. It is easy to check that
$$F(t) = 1 - \beta\exp\{St\}e = 1 - \sum_{i=1}^m \beta_i\exp\{s_{i,i}t\}\sum_{j \ne i}\left[\exp\{(S - \Lambda)t\}\right]_{i,j},$$
where $\Lambda = \mathrm{diag}(s_{1,1}, s_{2,2}, \ldots, s_{m,m})$.

It is well known from Neuts [23] that the PH distribution with an irreducible representation $(\alpha, T)$ of size $m < \infty$ must be light-tailed. However, the CPH distribution with the kernel expression $(\alpha(x), \lambda(x,y))$ is either light-tailed or heavy-tailed. To construct the heavy-tailed case, we let
$$\alpha(x) = \begin{cases} 1, & \text{for } x \in [0,1], \\ 0, & \text{otherwise,} \end{cases}$$
$\lambda(x,y) = 0$ for all $x \ne y$ and $\lambda(x,x) < 0$ for all $x \in [0,\infty)$. Then
$$F(t) = 1 - \int_0^1 \exp\{\lambda(x,x)t\}\,dx.$$
We take $\lambda(x,x) = -\ln(a(x))$, where $a(x) \in (1,\infty)$. It is clear that
$$F(t) = 1 - \int_0^1 \frac{dx}{[a(x)]^t}.$$
When $a(x) = 1/x$, we have $F(t) = 1 - 1/(1+t)$; when $a(x) = 1/x^2$, we have $F(t) = 1 - 1/(1+2t)$. Obviously, $F(t)$ is heavy-tailed in each of the two cases.
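The heavy-tailed construction can be checked numerically: with $\alpha$ uniform on $[0,1]$, no jumps, and $\lambda(x,x) = -\ln a(x)$, the distribution is $F(t) = 1 - \int_0^1 a(x)^{-t}\,dx$, and for $a(x) = 1/x$ the quadrature matches the closed form $1 - 1/(1+t)$. A sketch:

```python
import numpy as np

# CPH distribution of the heavy-tailed example: alpha uniform on [0,1],
# lambda(x,x) = -ln a(x), no jumps, so F(t) = 1 - int_0^1 a(x)^{-t} dx.
n = 20000
h = 1.0 / n
x = (np.arange(n) + 0.5) * h           # midpoint rule on (0, 1)

def F(t, a):
    return 1.0 - h * np.sum(a(x) ** (-t))

for t in [1.0, 2.0, 5.0]:
    # a(x) = 1/x gives F(t) = 1 - 1/(1+t): a polynomially decaying tail,
    # impossible for any finite-dimensional PH distribution.
    assert abs(F(t, lambda u: 1.0 / u) - (1.0 - 1.0 / (1.0 + t))) < 1e-3
```

The tail $1 - F(t) = 1/(1+t)$ decays polynomially, in contrast to the exponential tails forced on finite PH distributions.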
5.7.2 The CMAP Let C ( x, ) and C ( x, ) D( x, ) on [0, f) u V ([0, f)) be the kernels of the two continuous-time Markov chains on continuous state space [0, f) , and C ( x, y ) and C ( x, y ) D ( x, y ) the generalized density functions of the two Markov chains. 264
5 Markov Chains on Continuous State Space
We further assume that
(1) $C(x,[0,\infty))\le 0$, and there exists an interval $[a,b)$ such that $C(x,[0,\infty))<0$ for each $x\in[a,b)$;
(2) the continuous-time Markov chain with kernel $C(x,\cdot)+D(x,\cdot)$ on $[0,\infty)\times\sigma([0,\infty))$ is Harris recurrent and ergodic. Specifically, for $x\in[0,\infty)$,
$$C(x,[0,\infty))+D(x,[0,\infty))=0.$$
We write
$$Q(x,\cdot)=\begin{pmatrix}C(x,\cdot) & D(x,\cdot) & & \\ & C(x,\cdot) & D(x,\cdot) & \\ & & C(x,\cdot) & D(x,\cdot)\\ & & & \ddots \end{pmatrix}$$
for $x\in[0,\infty)$ and $\cdot\in\sigma([0,\infty))$. The Markov process with kernel $Q(x,\cdot)$ for $x\in[0,\infty)$ and $\cdot\in\sigma([0,\infty))$ is called a Markovian arrival process on continuous state space, denoted by CMAP.

Let $N(t)$ and $J(t)$ be the arrival number and the continuous phase of the CMAP at time $t$. We write
$$P_{x,\cdot}(k,t)=P\{N(t)=k,\ J(t)\in\cdot \mid N(0)=0,\ J(0)=x\}.$$
Then the probability sequence $\{P_{x,\cdot}(k,t),\ k\ge 0\}$ for $t\ge 0$, $x\in[0,\infty)$ and $\cdot\in\sigma([0,\infty))$ satisfies the forward Chapman-Kolmogorov differential equations
$$\frac{\partial}{\partial t}P_{x,\cdot}(0,t)=\int_0^{\infty} P_{x,y}(0,t)\,C(y,\cdot)\,\mathrm{d}y$$
and, for $k\ge 1$,
$$\frac{\partial}{\partial t}P_{x,\cdot}(k,t)=\int_0^{\infty} P_{x,y}(k,t)\,C(y,\cdot)\,\mathrm{d}y+\int_0^{\infty} P_{x,y}(k-1,t)\,D(y,\cdot)\,\mathrm{d}y;$$
or the backward Chapman-Kolmogorov differential equations
$$\frac{\partial}{\partial t}P_{x,\cdot}(0,t)=\int_0^{\infty} C(x,y)\,P_{y,\cdot}(0,t)\,\mathrm{d}y$$
and, for $k\ge 1$,
$$\frac{\partial}{\partial t}P_{x,\cdot}(k,t)=\int_0^{\infty} C(x,y)\,P_{y,\cdot}(k,t)\,\mathrm{d}y+\int_0^{\infty} D(x,y)\,P_{y,\cdot}(k-1,t)\,\mathrm{d}y.$$
Let
$$P^{*}_{x,\cdot}(z,t)=\sum_{k=0}^{\infty} z^{k}\,P_{x,\cdot}(k,t).$$
Then
$$\frac{\partial}{\partial t}P^{*}_{x,\cdot}(z,t)=\int_0^{\infty} P^{*}_{x,y}(z,t)\bigl[C(y,\cdot)+zD(y,\cdot)\bigr]\mathrm{d}y$$
or
$$\frac{\partial}{\partial t}P^{*}_{x,\cdot}(z,t)=\int_0^{\infty}\bigl[C(x,y)+zD(x,y)\bigr]P^{*}_{y,\cdot}(z,t)\,\mathrm{d}y.$$
We assume that the continuous-time Markov chain with kernel $C(x,\cdot)+D(x,\cdot)$ on $[0,\infty)\times\sigma([0,\infty))$ is Harris recurrent and ergodic. Let $\theta(B)$ be the stationary probability of this Markov chain. Then the stationary arrival rate of the CMAP is given by
$$\lambda=\int_0^{\infty}\int_0^{\infty}\theta(x)\,D(x,y)\,\mathrm{d}x\,\mathrm{d}y.$$
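For intuition, the finite-phase analogue of this construction is the ordinary MAP $(C,D)$: the stationary vector $\theta$ solves $\theta(C+D)=0$ with $\theta e=1$, and the arrival rate is $\lambda=\theta D e$. A small sketch (the matrices are made up for illustration):

```python
import numpy as np

# A two-phase MAP (C, D): C governs transitions without arrivals,
# D governs transitions accompanied by an arrival.
C = np.array([[-3.0, 1.0],
              [2.0, -4.0]])
D = np.array([[1.5, 0.5],
              [1.0, 1.0]])

# Solve theta (C + D) = 0 subject to theta e = 1.
M = (C + D).T.copy()
M[-1, :] = 1.0                       # replace one balance equation by normalization
theta = np.linalg.solve(M, np.array([0.0, 1.0]))

lam = theta @ D @ np.ones(2)         # stationary arrival rate lambda = theta D e
assert abs(lam - 2.0) < 1e-12        # here theta = (2/3, 1/3), so lambda = 2
```

The continuous-phase version simply replaces the matrix-vector products by the integrals against $\theta(x)$ displayed above.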
5.7.3 The CMAP/CPH/1 Queue

Now we use the CMAP and the CPH distribution to study a single server queue, where the customer arrivals form a CMAP with the kernel expression $(C(x,y),D(x,y))$, and the service times are i.i.d. with a CPH distribution with the kernel expression $(\beta(x),P(x,y))$, where
$$P^{(0)}(x)=-\int_0^{\infty}P(x,y)\,\mathrm{d}y.$$
We denote such a queue as the CMAP/CPH/1 queue. Let $N(t)$, $I(t)$ and $J(t)$ be the number of customers in the system, the phase of the arrival process and the phase of the service process at time $t$, respectively. Then $\{(N(t),I(t),J(t)),\ t\ge 0\}$ is a continuous-time QBD process with continuous phase variable whose matrix of generalized density functions is given by
$$Q^{*}(x,y)=\begin{pmatrix}C(x,y) & D(x,y) & & \\ P^{(0)}(x)\beta(y) & C(x,y)+P(x,y) & D(x,y) & \\ & P^{(0)}(x)\beta(y) & C(x,y)+P(x,y) & D(x,y)\\ & & \ddots & \ddots \end{pmatrix}.$$
Let $\theta(x)$ be the generalized density function of the stationary probability of the continuous-time Markov chain with continuous state space $[0,\infty)$ whose generalized density function is
$$\Psi(x,y)=C(x,y)+D(x,y)+P(x,y)+P^{(0)}(x)\beta(y).$$
Then it is easy to check that this system is stable if and only if
$$\int_0^{\infty}\int_0^{\infty}\theta(x)D(x,y)\,\mathrm{d}x\,\mathrm{d}y<\int_0^{\infty}\int_0^{\infty}\theta(x)P^{(0)}(x)\beta(y)\,\mathrm{d}x\,\mathrm{d}y.$$
We denote by $R(x,y)$ the minimal nonnegative solution to the integral equation
$$D(x,y)+\int_0^{\infty}R(x,z)\bigl[C(z,y)+P(z,y)\bigr]\mathrm{d}z+\int_0^{\infty}\int_0^{\infty}R(x,z)R(z,u)P^{(0)}(u)\beta(y)\,\mathrm{d}u\,\mathrm{d}z=0.$$
Let $\varphi(x)$ be the generalized density function of the stationary probability of the censored Markov chain whose generalized density function is given by
$$U_0(x,y)=C(x,y)+\int_0^{\infty}R(x,z)P^{(0)}(z)\beta(y)\,\mathrm{d}z.$$
We write the generalized density function $\pi(y)=(\pi_0(y),\pi_1(y),\pi_2(y),\ldots)$ of the stationary probability distribution of the QBD process with the matrix $Q^{*}(x,y)$ of generalized density functions. Then
$$\begin{cases}\pi_0(y)=\kappa\,\varphi(y),\\ \pi_k(y)=\displaystyle\int_0^{\infty}\pi_{k-1}(z)R(z,y)\,\mathrm{d}z, & k\ge 1,\end{cases}$$
where $\varphi(y)$ is the stationary probability of the censored chain with kernel $U_0(x,y)$, and $\kappa$ is a positive constant such that
$$\sum_{k=0}^{\infty}\int_0^{\infty}\pi_k(y)\,\mathrm{d}y=1.$$
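In a finite-phase discretization, the kernel equation for $R$ reduces to the matrix equation $D+R(C+P)+R^{2}P^{(0)}\beta=0$, which can be solved by successive substitution $R_{n+1}=-\bigl[D+R_n^{2}P^{(0)}\beta\bigr](C+P)^{-1}$ starting from $R_0=0$. A hedged sketch (all matrices are made-up stand-ins for the level-transition blocks, chosen so that the total generator is conservative):

```python
import numpy as np

# Finite-phase stand-ins for the blocks in D + R(C+P) + R^2 P0*beta = 0.
C = np.array([[-2.5, 1.0], [1.0, -2.5]])   # phase changes without an arrival
D = np.array([[1.0, 0.5], [1.0, 0.5]])     # phase changes with an arrival
P = np.array([[-4.0, 1.0], [2.0, -5.0]])   # service phase changes
P0 = -(P @ np.ones((2, 1)))                # service completion rates (column)
beta = np.array([[0.6, 0.4]])              # service restart distribution (row)

R = np.zeros((2, 2))
for _ in range(5000):                      # successive substitution from R_0 = 0
    R_new = -(D + R @ R @ (P0 @ beta)) @ np.linalg.inv(C + P)
    if np.max(np.abs(R_new - R)) < 1e-13:
        R = R_new
        break
    R = R_new

assert np.all(R >= -1e-12)                           # R is nonnegative
assert max(abs(np.linalg.eigvals(R))) < 1.0          # spectral radius < 1: stable
assert np.max(np.abs(D + R @ (C + P) + R @ R @ (P0 @ beta))) < 1e-8
```

The iterates increase monotonically to the minimal nonnegative solution, the standard behavior of this fixed-point scheme for QBD processes.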
5.8 Piecewise Deterministic Markov Processes

In this section, we first study piecewise deterministic Markov processes (PDMPs). Then we apply the PDMP to analyze the GI/G/c queue, and provide the stationary and transient probability distributions of the queue length.
5.8.1 Semi-Dynamic Systems

A deterministic system is described as follows. Given an initial state $x_0$, the state $x_t$ of a deterministic system at time $t$ is completely determined by the initial state $x_0$ and the time $t$. We denote by $x_t=\phi(t,x_0)$ the state change of this system. Obviously, $x_t$ is a function of $x_0$ and $t$. It can be seen that the function $\phi$ satisfies the following two conditions: (1) $\phi(0,x)=x$; (2) $\phi(t,\phi(s,x))=\phi(t+s,x)$. In this case, the function $x_t=\phi(t,x_0)$ defines a dynamic system. Let $E$ denote the state space of the dynamic system, and $\mathscr{H}$ the $\sigma$-algebra generated by the Borel subsets of $E$.

Now, we define a local semi-dynamic system, which is useful in the study of the PDMPs later.

Definition 5.3 Let $(E,\mathscr{H})$ be a Polish space. A local semi-dynamic system, or a local semi-flow, on $E$ is a measurable mapping $\phi: D\to E$, where $D\subset E\times\mathbb{R}_+$, such that for each $x\in E$, $D(x)=\{t:(t,x)\in D\}=[0,c(x))$, and $\phi$ satisfies the following two conditions:
(1) $\phi(0,x)=x$;
(2) for all $x\in E$ and all $s\in D(x)$, $t\in D(\phi(s,x))$ if and only if $s+t\in D(x)$, and in this case
$$\phi(t,\phi(s,x))=\phi(t+s,x).$$

Condition (2) in Definition 5.3 can be interpreted as follows: the future of this system can be precisely predicted as long as its present state is known, independently of its history. This is the Markov property for the local semi-dynamic system, which indicates that a Markov process is a random generalization of the local semi-dynamic system.

Now, we introduce some basic concepts for the local semi-dynamic system. The function $t\mapsto\phi(t,x)$ is called a motion of the local semi-dynamic system, starting from state $x$. For each $x\in E$, the subset $E(x)=\{\phi(t,x): t\in D(x)\}$ is called a trajectory of the local semi-dynamic system, starting from state $x$. A state $x\in E$ is called an equilibrium point if $\phi(t,x)=x$ for all $t\in D(x)$. A trajectory $E(x)$ is said to be periodic if it is the trajectory of a periodic motion, that is, there exists a minimal positive number $T\in D(x)$ such that $\phi(t+T,x)=\phi(t,x)$ for all $t\in D(x)$; in this case, the state $x$ is also said to be periodic.

Theorem 5.12 If a state $x\in E$ is periodic, then $D(x)=[0,\infty)$.

Proof If $T>0$ is the period of state $x$, then $\phi(T,x)=\phi(0,x)=x$. Since $D(x)=[0,c(x))$, we obtain
$$D(x)=D(\phi(T,x))=[0,c(x)-T),$$
hence $c(x)-T=c(x)$. Note that $T>0$ and $c(x)>0$; it is clear that $c(x)=\infty$. This completes the proof.

Definition 5.4 A state $x\in E$ is called a joint point of $\phi$ if there exist two states $x_1$ and $x_2$ with
$$x_1\notin E(x_2),\qquad x_2\notin E(x_1),\qquad E(x_1)\cap E(x_2)\ne\emptyset$$
such that
$$x=\phi(t^{*}(x_1,x_2),x_1),$$
where
$$t^{*}(x_1,x_2)=\inf\{t\in D(x_1): \phi(t,x_1)\in E(x_2)\}.$$

We call $\phi$ time reversible if, for a fixed $t$, the map $\phi(t,\cdot):\{x\in E: t<c(x)\}\to E$ is invertible. Let $y=\phi(t,x)$; then $x=\phi(-t,y)$. Obviously, a time reversible semi-dynamic system has no joint point.

A semi-dynamic system is said to be smooth if $\phi(t,x)$ is differentiable in $t$. In this case, we have
$$\frac{\mathrm{d}x_t}{\mathrm{d}t}=\lim_{\Delta t\to 0}\frac{\phi(t+\Delta t,x)-\phi(t,x)}{\Delta t}=\lim_{\Delta t\to 0}\frac{\phi(\Delta t,\phi(t,x))-\phi(t,x)}{\Delta t}=\lim_{\Delta t\to 0}\frac{\phi(\Delta t,x_t)-x_t}{\Delta t}.$$
Thus, we can write
$$\frac{\mathrm{d}x_t}{\mathrm{d}t}=v(x_t),$$
where
$$v(x_t)=\lim_{\Delta t\to 0}\frac{\phi(\Delta t,x_t)-x_t}{\Delta t}.$$
Clearly, a smooth semi-dynamic system can be expressed by a stationary (autonomous) ordinary differential equation.
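As a concrete illustration (not from the text), the flow $\phi(t,x)=x\,\mathrm{e}^{-t}$ solves $\dot{x}_t=-x_t$, and both defining conditions of a semi-flow can be checked directly:

```python
import math

def phi(t, x):
    # Flow of the autonomous ODE dx/dt = -x:  phi(t, x) = x * exp(-t).
    return x * math.exp(-t)

x0, s, t = 2.0, 0.3, 1.1
# Condition (1): phi(0, x) = x.
assert phi(0.0, x0) == x0
# Condition (2), the semigroup property: phi(t, phi(s, x)) = phi(t + s, x).
assert abs(phi(t, phi(s, x0)) - phi(t + s, x0)) < 1e-12
# The vector field recovered from the flow: v(x) = lim (phi(h, x) - x)/h = -x.
h = 1e-8
assert abs((phi(h, x0) - x0) / h - (-x0)) < 1e-6
```

Here $D(x)=[0,\infty)$ for every $x$, so this particular semi-flow is global rather than merely local.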
5.8.2 The $\phi$-Memoryless Distribution Family

Let $\{F(x,\cdot): x\in E\}$ be a probability distribution family satisfying the following two conditions: (1) for a fixed $x\in E$, $F(x,\cdot)$ is a probability distribution on $\mathscr{B}(\mathbb{R}_+)$; and (2) for a fixed $A\in\mathscr{B}(\mathbb{R}_+)$, $F(\cdot,A)$ is $\mathscr{H}$-measurable.
Definition 5.5 The probability distribution family $\{F(x,\cdot): x\in E\}$ is said to be $\phi$-memoryless if
$$\overline{F}(x,s+t)=\overline{F}(x,s)\,\overline{F}(\phi(s,x),t), \tag{5.71}$$
and for each $x\in E$ the support of $F(x,\cdot)$ satisfies $\mathrm{supp}\{F(x,\cdot)\}\subseteq \overline{D(x)}$, where $\overline{F}(x,t)=F(x,[t,\infty))$.

Theorem 5.13 Suppose $\{F(x,\cdot): x\in E\}$ is a $\phi$-memoryless probability distribution family. If the function $\overline{F}(x,t)$ has a right derivative at $t=s\in\mathrm{supp}\{F(x,\cdot)\}$, then $\overline{F}(\phi(s,x),t)$ has a right derivative at $t=0$, and
$$\frac{\partial}{\partial t}\overline{F}(\phi(s,x),t)\Big|_{t=0}=\bigl[\overline{F}(x,s)\bigr]^{-1}\frac{\partial}{\partial t}\overline{F}(x,t)\Big|_{t=s}.$$

Proof It follows from Eq. (5.71) that
$$\begin{aligned}\frac{\partial}{\partial t}\overline{F}(x,t)\Big|_{t=s}&=\lim_{\Delta t\to 0^{+}}\frac{\overline{F}(x,s+\Delta t)-\overline{F}(x,s)}{\Delta t}\\&=\lim_{\Delta t\to 0^{+}}\frac{\overline{F}(x,s)\,\overline{F}(\phi(s,x),\Delta t)-\overline{F}(x,s)}{\Delta t}\\&=\overline{F}(x,s)\lim_{\Delta t\to 0^{+}}\frac{\overline{F}(\phi(s,x),\Delta t)-1}{\Delta t}\\&=\overline{F}(x,s)\,\frac{\partial}{\partial t}\overline{F}(\phi(s,x),t)\Big|_{t=0},\end{aligned}$$
since
$$\lim_{t\to 0^{+}}\overline{F}(\phi(s,x),t)=\lim_{t\to 0^{+}}\frac{\overline{F}(x,s+t)}{\overline{F}(x,s)}=1.$$
This completes the proof.

For each $x\in E$, we write
$$\lambda(x)=\begin{cases}-\dfrac{\partial}{\partial t}\overline{F}(x,t)\Big|_{t=0}, & \text{if } \dfrac{\partial}{\partial t}\overline{F}(x,t)\Big|_{t=0} \text{ exists},\\ 0, & \text{otherwise}.\end{cases}$$
The measurability of $\overline{F}(x,t)$ with respect to $x\in E$ indicates that $\lambda(x)$ is $\mathscr{H}$-measurable.

Theorem 5.14 Suppose $\{F(x,\cdot): x\in E\}$ is a $\phi$-memoryless probability distribution family. If $\overline{F}(x,t)$ is absolutely continuous in $[0,c_F(x))$ for a fixed $x\in E$, and there exist at most countably many points $t\in[0,c_F(x))$ such that $\phi(t,x)$ is a $\phi$-joint point, where $c_F(x)=\inf\{t:\overline{F}(x,t)=0\}$, then
$$\overline{F}(x,t)=\begin{cases}\exp\Bigl\{-\displaystyle\int_0^{t}\lambda(\phi(u,x))\,\mathrm{d}u\Bigr\}, & t\in[0,c_F(x)),\\ 0, & t\in[c_F(x),\infty).\end{cases}$$

Proof It follows from Theorem 5.13 that
$$\frac{\partial \overline{F}(x,t)/\partial t}{\overline{F}(x,t)}=-\lambda(\phi(t,x)),\quad \text{a.e. on } [0,c_F(x)).$$
Hence we obtain
$$\frac{\partial}{\partial t}\ln \overline{F}(x,t)=-\lambda(\phi(t,x)),\quad \text{a.e. on } [0,c_F(x)),$$
which leads to
$$\overline{F}(x,t)=\kappa\exp\Bigl\{-\int_0^{t}\lambda(\phi(u,x))\,\mathrm{d}u\Bigr\},\quad \text{a.e. on } [0,c_F(x)).$$
Note that $\overline{F}(x,0)=1$; it is clear that $\kappa=1$, and we obtain the desired result. This completes the proof.

Corollary 5.1 Let $E_e$ be the set of all equilibrium points of a semi-dynamic system $\phi$. Then there exists a nonnegative function $\lambda$ defined on $E_e$ such that for each $x\in E_e$,
$$\overline{F}(x,t)=\exp\{-\lambda(x)t\}.$$

Proof If $x\in E_e$, then $\phi(u,x)=x$ for $0\le u\le t$, and $x$ is a periodic point, which leads to $c_F(x)=\infty$. Thus we obtain
$$\overline{F}(x,t)=\exp\Bigl\{-\int_0^{t}\lambda(\phi(u,x))\,\mathrm{d}u\Bigr\}=\exp\{-\lambda(x)t\},\quad \text{a.e. on } [0,\infty).$$
This completes the proof.

Definition 5.6 Let
$$J(E)=\{\phi(t,x): \overline{F}(x,t^{-})-\overline{F}(x,t)>0,\ x\in E,\ t\in\mathbb{R}_+\}.$$
If $x\in J(E)$, the state $x$ is called an F-jump point. For any $x\in E$ and $t\in[0,c_F(x))$, we write
$$a(x,t)=a(\phi(t,x))=\frac{\overline{F}(x,t^{-})-\overline{F}(x,t)}{\overline{F}(x,t^{-})},$$
with the convention $\frac{0}{0}=0$.

Theorem 5.15 (1) If $y=\phi(t,x)\notin J(E)$, then $a(x,t)=a(y)=0$.
(2) If $y=\phi(t,x)\in J(E)$ but it is not a $\phi$-joint point, then $a(x,t)=a(y)$, which is independent of the pair $(t,x)$.

Proof (1) is clear. We only need to prove (2). If $y=\phi(t,x)\in J(E)$ but it is not a $\phi$-joint point, then for any two states $x_1,x_2\in E$ with $y=\phi(t_1,x_1)$ and $y=\phi(t_2,x_2)$, there exist a $u$ with $0<u<\min\{t_1,t_2\}$ and a state $z$ such that
$$y=\phi(u,z),\qquad z=\phi(t_1-u,x_1),\qquad z=\phi(t_2-u,x_2).$$
Hence
$$a(x_1,t_1)=\frac{\overline{F}(x_1,t_1^{-})-\overline{F}(x_1,t_1)}{\overline{F}(x_1,t_1^{-})}=\frac{\overline{F}(x_1,t_1-u)\bigl[\overline{F}(z,u^{-})-\overline{F}(z,u)\bigr]}{\overline{F}(x_1,t_1-u)\,\overline{F}(z,u^{-})}=\frac{\overline{F}(z,u^{-})-\overline{F}(z,u)}{\overline{F}(z,u^{-})}.$$
Similarly, we have
$$a(x_2,t_2)=\frac{\overline{F}(z,u^{-})-\overline{F}(z,u)}{\overline{F}(z,u^{-})}.$$
Thus, we obtain $a(x_1,t_1)=a(x_2,t_2)=a(y)$, which is independent of the pair $(t,x)$. This completes the proof.

The function $a(y)$ has the following probabilistic setting:
$$a(y)=\sup\{\overline{F}(x,t^{-})-\overline{F}(x,t): y=\phi(t,x)\}.$$
Clearly, $a(y)\le 1$. Specifically, a necessary and sufficient condition for $a(y)=1$ is that $y=\phi(t,x)\in J(E)$ and $y=\phi(c_F(x),x)$ for some $x\in E$.

Theorem 5.16 Let $F(x,\cdot)$ be of discrete type for $x\in E$, with jump times $\{t_n\}$ ordered as $t_1<t_2<\cdots$. If for each $n$, $x_n=\phi(t_n,x)$ is not a $\phi$-joint point, then
$$p_n=a(x_n)\prod_{m=1}^{n-1}\bigl[1-a(x_m)\bigr],$$
where $p_n=\overline{F}(x,t_n^{-})-\overline{F}(x,t_n)$.

Proof Note that $a(x_n)=p_n/\overline{F}(x,t_n^{-})$ and $\overline{F}(x,t_n^{-})=\sum_{m\ge n}p_m$; we obtain
$$a(x_n)=\frac{p_n}{\sum\limits_{m\ge n}p_m},$$
hence we have
$$p_n=a(x_n)\,\frac{1-a(x_{n-1})}{a(x_{n-1})}\,p_{n-1}=a(x_n)\prod_{m=1}^{n-1}\bigl[1-a(x_m)\bigr].$$
This completes the proof.
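Theorem 5.16 is the familiar discrete-hazard identity: given the jump probabilities $p_n$, the hazards are $a(x_n)=p_n/\sum_{m\ge n}p_m$, and the product formula recovers the $p_n$. A quick numerical check (with made-up jump probabilities):

```python
# Made-up jump probabilities of a discrete distribution on t_1 < t_2 < ...
p = [0.5, 0.3, 0.15, 0.05]

# Discrete hazards a_n = p_n / sum_{m >= n} p_m  (a(x_n) in Theorem 5.16).
tails = [sum(p[n:]) for n in range(len(p))]
a = [p[n] / tails[n] for n in range(len(p))]

# Product formula: p_n = a_n * prod_{m < n} (1 - a_m).
surv = 1.0
for n in range(len(p)):
    assert abs(a[n] * surv - p[n]) < 1e-12
    surv *= 1.0 - a[n]
```

Note that the last hazard equals one, matching the remark above that $a(y)=1$ exactly at the endpoint $\phi(c_F(x),x)$ of the support.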
5.8.3 Time Shift $\phi$-Invariant Transition Kernels

Let $Q(x,t,B)$ for $x\in E$, $t\in\mathbb{R}_+$ and $B\in\mathscr{H}$ satisfy: (1) for a fixed pair $(x,t)$, $Q(x,t,\cdot)$ is a probability measure on $\mathscr{H}$; (2) for a fixed $B$, $Q(\cdot,\cdot,B)$ is $\mathscr{H}\times\mathscr{B}(\mathbb{R}_+)$-measurable.

Definition 5.7 The transition kernel $Q(x,t,B)$ for $x\in E$, $t\in\mathbb{R}_+$ and $B\in\mathscr{H}$ is called time shift $\phi$-invariant if for any $x\in E$, $B\in\mathscr{H}$ and $s,t\in\mathbb{R}_+$ with $s+t\in[0,c(x))$,
$$Q(x,s+t,B)=Q(\phi(s,x),t,B).$$

We introduce a $\phi$-exit boundary as follows:
$$\partial E=\{\phi(c(x),x): x\in E\},$$
which extends the concept of the $\phi$-joint point.

Theorem 5.17 Let $Q(x,t,B)$ for $x\in E$, $t\in\mathbb{R}_+$ and $B\in\mathscr{H}$ be a time shift $\phi$-invariant transition kernel. The following two statements are true.
(1) There exists a transition kernel $q:(E\cup\partial E)\times\mathscr{H}\to[0,1]$ such that if $x\in E\cup\partial E$ is neither a $\phi$-joint point nor an equilibrium point, then for any $x_0\in E$ and $t\in[0,c(x_0))$ with $x=\phi(t,x_0)$, we have
$$Q(x_0,t,B)=q(x,B),$$
which is independent of the pair $(x_0,t)$.
(2) There exists a transition kernel $q_e: E_e\times\mathscr{H}\to[0,1]$ such that if $x\in E_e$ is an equilibrium point, then
$$Q(x,t,B)=q_e(x,B),$$
which is independent of the time $t$.

Proof (1) If $x\in E\cup\partial E$ is neither a $\phi$-joint point nor an equilibrium point, then for any $x_1,x_2\in E$, $t_1\in[0,c(x_1))$ and $t_2\in[0,c(x_2))$ satisfying $x=\phi(t_1,x_1)=\phi(t_2,x_2)$, there exists a $t_0$ with $0<t_0<t_1\wedge t_2$ such that
$$x_0=\phi(t_1-t_0,x_1)=\phi(t_2-t_0,x_2).$$
Thus we obtain
$$Q(x_1,t_1,B)=Q(\phi(t_1-t_0,x_1),t_0,B)=Q(x_0,t_0,B)=Q(\phi(t_2-t_0,x_2),t_0,B)=Q(x_2,t_2,B),$$
which shows that $Q(x_0,t,B)$ is independent of the pair $(x_0,t)$; we write it as $q(x,B)$.
(2) If $x\in E_e$ is an equilibrium point, then $\phi(s,x)=x$. Thus we obtain
$$Q(x,s+t,B)=Q(\phi(s,x),t,B)=Q(x,t,B)=Q(x,0,B),$$
which is denoted by $q_e(x,B)$. This completes the proof.
5.8.4 Piecewise Deterministic Markov Processes

PDMPs generalize Markov jump processes in three main respects. (1) The state space is no longer constrained to be countable; it may be continuous. (2) Between two jump points, the process is not restricted to remain constant, but may change deterministically. (3) The possibility of movement between jump points gives rise to a new kind of jump, which occurs immediately upon reaching a certain state. This new kind of jump is called an intrinsic jump, since it is induced exclusively by the state of the system; the other kind of jump, induced by Markovian arrivals, is called an extrinsic jump.

A PDMP $\mathscr{X}=\{X_t,\ t\in\mathbb{R}_+\}$ is a continuous-time Markov process on a Polish state space $E$, and can be determined by the following four characteristics:
(1) A flow $\phi: E\times\mathbb{R}_+\to E$, which is a local semi-dynamic system on the state space $E$.
(2) A closed set $\Delta$, which contains the states that induce the intrinsic jumps.
(3) A function $\lambda: E\to\mathbb{R}_+$ satisfying
$$\sup_{x\in E}\{\lambda(x)\}\le c<\infty.$$
The function $\lambda(x)$ indicates the intensity of an extrinsic jump occurring when the Markov process is in state $x$. At the same time,
$$\overline{F}(x,t)=\exp\Bigl\{-\int_0^{t}\lambda(\phi(u,x))\,\mathrm{d}u\Bigr\},$$
which leads to the conclusion that $\{F(x,\cdot),\ x\in E\}$ is a $\phi$-memoryless distribution family.
(4) A time shift $\phi$-invariant transition kernel $Q(x,t,\cdot)$ for $x\in E$ and $t\in\mathbb{R}_+$. Specifically, the transition measure $q: E\times\mathscr{H}_0\to[0,1]$ with $\mathscr{H}_0=\mathscr{H}\cap(E\setminus\Delta)$ describes the behavior upon (extrinsic and intrinsic) jumps.

We first define, for all $x\in E_0=E\setminus\Delta$, the deterministic time
$$t_{*}(x)=\inf\{t: \phi(t,x)\in\Delta\}$$
as the time until the set $\Delta$ is reached from a state $x\in E$. Then we define the random variable $T(x)$ as the first (intrinsic or extrinsic) jump time after starting in state $x$. It is distributed as
$$P\{T(x)>t\}=\begin{cases}\exp\Bigl\{-\displaystyle\int_0^{t}\lambda(\phi(u,x))\,\mathrm{d}u\Bigr\}, & t<t_{*}(x),\\ 0, & t\ge t_{*}(x),\end{cases}$$
for all $t\in\mathbb{R}_+$. For simplicity of description of the distribution $P\{T(x)>t\}$, we assume that the Markov process has only finitely many jumps in any finite interval.

The PDMP $\mathscr{X}$ evolves in the following way. Starting in any state $x\in E\setminus\Delta$, it changes deterministically according to the flow $\phi$ until either it enters $\Delta$, inducing an intrinsic jump, or an extrinsic jump occurs. Upon a jump, the state of $\mathscr{X}$ changes immediately according to the transition measure $Q$, leading to a state $y\in E\setminus\Delta$. Then the PDMP starts a new cycle, behaving as described until the next jump.
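The evolution just described translates directly into a simulation loop. The sketch below (an illustrative toy model, not from the text) simulates a PDMP with linear flow $\phi(t,x)=x+t$, intrinsic jumps at the boundary set $\Delta=\{x\ge 1\}$, constant extrinsic intensity $\lambda$, and a jump measure that resets the state uniformly on $[0,1/2)$:

```python
import random

def simulate_pdmp(horizon, lam=2.0, seed=0):
    """Toy PDMP: flow phi(t, x) = x + t; intrinsic jump when x reaches 1;
    extrinsic jumps at constant rate lam; both jumps reset x ~ Uniform[0, 1/2)."""
    rng = random.Random(seed)
    t, x, path = 0.0, 0.0, []
    while t < horizon:
        t_star = 1.0 - x                 # time to hit the boundary set Delta
        t_ext = rng.expovariate(lam)     # candidate extrinsic jump time
        dt = min(t_star, t_ext)          # T(x) = min of intrinsic/extrinsic times
        t += dt
        path.append((t, x + dt))         # state just before the jump
        x = 0.5 * rng.random()           # jump: reset by the transition measure
    return path

path = simulate_pdmp(100.0)
# The deterministic motion never leaves [0, 1] between jumps.
assert all(0.0 <= s <= 1.0 for _, s in path)
```

For this toy model, $P\{T(x)>t\}=\mathrm{e}^{-\lambda t}$ for $t<t_{*}(x)=1-x$ and $0$ afterwards, which is exactly the survival function displayed above with constant $\lambda$.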
5.8.5 The Stationary Distribution

We consider a PDMP $\mathscr{X}=\{X_t,\ t\in\mathbb{R}_+\}$ with characteristic triple $(\phi,F,q)$. For $x\in E$ and $A\in\mathscr{H}$, we write
$$p(x,A)=K(x,A)+L(x,A),$$
where
$$K(x,A)=\int_0^{c(x)}\mathrm{e}^{-s}\,q(\phi(s,x),A)\,F(x,\mathrm{d}s)$$
and
$$L(x,A)=\int_0^{c(x)}\mathrm{e}^{-s}\,\overline{F}(x,s)\,I_A(\phi(s,x))\,\mathrm{d}s.$$
We denote by $U$ the resolvent kernel associated with the PDMP $\mathscr{X}$, that is,
$$U(x,A)=\mathbb{E}_x\Bigl[\int_0^{\infty}I_A(X_t)\,\mathrm{e}^{-t}\,\mathrm{d}t\Bigr].$$

Lemma 5.6 If the PDMP $\mathscr{X}$ is regular, then the resolvent kernel $U$ is the minimal nonnegative solution to
$$KU(x,A)+L(x,A)=U(x,A),$$
where
$$KU(x,A)=\int_0^{c(x)}\mathrm{e}^{-s}F(x,\mathrm{d}s)\int_E q(\phi(s,x),\mathrm{d}y)\,U(y,A).$$

Proof If the PDMP $\mathscr{X}$ is regular, then the jump time sequence $\{\tau_n\}$ satisfies $\lim_{n\to\infty}\tau_n=\infty$. Let $V$ be an exponentially distributed random variable with rate 1 which is independent of the PDMP $\mathscr{X}$. We write
$$U_n(x,A)=P_x\{X_{\tau_n\wedge V}\in A\},\qquad U(x,A)=P_x\{X_V\in A\}.$$
It is seen that
$$\lim_{n\to\infty}U_n(x,A)=\lim_{n\to\infty}P_x\{X_{\tau_n\wedge V}\in A\}=P_x\{X_V\in A\}=U(x,A)$$
and
$$\begin{aligned}U_{n+1}(x,A)&=P_x\{X_{\tau_{n+1}\wedge V}\in A\}\\&=P_x\{X_{\tau_{n+1}\wedge V}\in A,\ V<\tau_1\}+P_x\{X_{\tau_{n+1}\wedge V}\in A,\ V\ge\tau_1\}\\&=L(x,A)+\mathbb{E}_x\bigl[P_x\{X_{\tau_{n+1}\wedge V}\in A \mid X_{\tau_1}\}\,I_{[V\ge\tau_1]}\bigr]\\&=L(x,A)+\mathbb{E}_x\bigl[U_n(X_{\tau_1},A)\,I_{[V\ge\tau_1]}\bigr]\\&=L(x,A)+\int_0^{c(x)}\mathrm{e}^{-s}F(x,\mathrm{d}s)\int_E q(\phi(s,x),\mathrm{d}y)\,U_n(y,A)\\&=L(x,A)+KU_n(x,A).\end{aligned}$$
We take
$$U_0(x,A)=0,\qquad U_1(x,A)=L(x,A),\qquad U_{n+1}(x,A)=L(x,A)+KU_n(x,A),\quad n\ge 1.$$
It is clear that for any $x\in E$ and $A\in\mathscr{H}$, the kernel sequence $\{U_n(x,A)\}$ is monotonically increasing in $n\ge 0$. This indicates that the resolvent kernel $U=\lim_{n\to\infty}U_n$ is the minimal nonnegative solution to $KU(x,A)+L(x,A)=U(x,A)$. This completes the proof.

For $x\in E$ and $A\in\mathscr{H}$, we write
$$R(x,A)=\frac{I_A(x)}{L(x,E)}\qquad\text{and}\qquad S(x,A)=L(x,A)\,I_A(x).$$
Lemma 5.7 Let $\alpha$ be a $\sigma$-finite measure on $(E,\mathscr{H})$. Then the following two statements are equivalent:
(1) $\alpha=c\,\nu R$, where $\nu$ is a probability measure and $c$ is a positive constant;
(2) $\alpha S(E)<\infty$.

Proof (1) $\Rightarrow$ (2):
$$\alpha S(E)=\int_E \alpha(\mathrm{d}x)\,S(x,E)=c\int_E \nu(\mathrm{d}y)\int_E R(y,\mathrm{d}x)\,S(x,E)=c\int_E \nu(\mathrm{d}y)\int_E \frac{I_{\mathrm{d}x}(y)\,L(x,E)}{L(y,E)}=c<\infty.$$
(2) $\Rightarrow$ (1): We define the probability measure
$$\nu=\frac{\alpha S}{\alpha S(E)}.$$
Hence we obtain $\alpha=c\,\nu R$, where $c=\alpha S(E)$. This completes the proof.

For $x\in E$ and $A\in\mathscr{H}$, we define
$$J(x,A)=\sum_{n=0}^{\infty}\int_A K^{n}(x,\mathrm{d}y)\,L(y,E).$$
Obviously, $J(x,E)=U(x,E)=1$ if the PDMP $\mathscr{X}$ is regular; hence $J$ is a stochastic kernel.
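In a finite-state analogue, $K$ and $L$ of Lemma 5.6 become substochastic matrices whose sum $K+L$ is stochastic, and the successive approximations $U_{n+1}=L+KU_n$ increase monotonically to $U=(I-K)^{-1}L$. A sketch with made-up matrices:

```python
import numpy as np

# Made-up substochastic pair: rows of K + L sum to one (jump before / after
# the independent exponential killing time V).
K = np.array([[0.3, 0.2], [0.1, 0.4]])
L = np.array([[0.4, 0.1], [0.2, 0.3]])

U = np.zeros((2, 2))                          # U_0 = 0
for _ in range(500):
    U = L + K @ U                             # U_{n+1} = L + K U_n, increasing

U_exact = np.linalg.solve(np.eye(2) - K, L)   # minimal solution (I - K)^{-1} L
assert np.allclose(U, U_exact)
assert np.allclose(U.sum(axis=1), 1.0)        # U is stochastic: J(x,E) = U(x,E) = 1
```

Since the spectral radius of $K$ is below one here, the iteration converges geometrically; the row sums of $U$ being one mirrors the regularity statement above.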
The following theorem provides useful relations between the existence of a stationary distribution of the PDMP $\mathscr{X}$ and the existence of a $\sigma$-finite invariant measure of the transition kernel $p$.

Theorem 5.18 The following three statements are equivalent:
(1) there exists a stationary distribution $\pi$ of the PDMP $\mathscr{X}$;
(2) there exists a probability measure $\nu$ such that the positive $\sigma$-finite measure $\mu=\nu R$ is invariant for the transition kernel $p$;
(3) there exists a positive $\sigma$-finite measure $\mu$, invariant for the transition kernel $p$, such that $\mu S(E)<\infty$.

Proof (1) $\Rightarrow$ (2). Let $\pi$ be a stationary distribution of the PDMP $\mathscr{X}$. Then $\pi U=\pi$. We define $\nu=\pi J$, which is clearly a probability measure, since
$$\pi J(E)=\pi U(E)=\pi(E)=1.$$
Let $\mu=\nu R$. Then $\mu=\sum_{n=0}^{\infty}\pi K^{n}$. Note that $\mu L=\pi$ and $\mu K+\pi=\mu$; we obtain
$$\mu p=\mu(K+L)=\mu K+\pi=\mu.$$
(2) $\Rightarrow$ (1). If there exists a probability measure $\nu$ such that the positive $\sigma$-finite measure $\mu=\nu R$ is invariant for the transition kernel $p$, and $\pi=\mu L$, then for each $A\in\mathscr{H}$,
$$\pi(A)=\int_E \frac{L(y,A)}{L(y,E)}\,\nu(\mathrm{d}y)$$
is a probability measure. Since $\mu$ is a positive $\sigma$-finite measure, there exists a partition $\{E_i\}$ of $E$ such that $\mu(E_i)<\infty$. Note that $L=p-K$ and $\mu p=\mu$; we obtain that for each $A\in\mathscr{H}$,
$$\pi U(A)=\sum_i\int_{E_i}U(x,A)\,\mu L(\mathrm{d}x)=\sum_i\int_{E_i}U(x,A)\,\mu(I-K)(\mathrm{d}x)=\sum_i\int_{E_i}L(x,A)\,\mu(\mathrm{d}x)=\pi(A).$$
Therefore, $\pi$ is a stationary distribution of the PDMP $\mathscr{X}$.
(2) $\Leftrightarrow$ (3) can easily be proved by means of Lemma 5.7. This completes the proof.

Corollary 5.2 (1) If $\pi$ is a stationary distribution of the PDMP $\mathscr{X}$, then the positive $\sigma$-finite measure $\pi JR$ is invariant for the transition kernel $p$, and $\pi JRL=\pi$. (2) If $\nu$ is a probability measure such that $\nu R$ is invariant for the transition kernel $p$, then $\nu RL$ is a stationary distribution $\pi$ of the PDMP $\mathscr{X}$, and $\nu RLJ=\nu$.
In what follows, we provide an approach to express the stationary distribution of the PDMP $\mathscr{X}$. Let $Z_0$ denote the initial state of $\mathscr{X}$, and $Z_n$ the state of $\mathscr{X}$ after the $n$th jump. Then $\Theta=\{Z_n: n\ge 0\}$ is called the Markov chain associated with $\mathscr{X}$. If $\Theta$ has a stationary distribution $\pi$ satisfying
$$\int_E\int_0^{t_{*}(x)}\mathrm{e}^{-\Lambda(x,t)}\,\mathrm{d}t\,\mathrm{d}\pi(x)<\infty,$$
where $\Lambda(x,t)=\int_0^{t}\lambda(\phi(u,x))\,\mathrm{d}u$, then a stationary distribution of $\mathscr{X}$ can be constructed as follows. We define the set $M=\{(x,t)\in E\times\mathbb{R}_+: t<t_{*}(x)\}$, and denote by $\mathscr{B}$ the Borel $\sigma$-algebra on $M$. For any set $A\in\mathscr{H}$ and measurable functions $t_1,t_2: E\to\mathbb{R}_+$ with $t_1(x)<t_2(x)\le t_{*}(x)$ for all $x\in E$, we write
$$B_A^{t_1,t_2}=\{(x,t)\in M: t_1(x)<t<t_2(x),\ x\in A\}$$
and
$$\nu_{\pi}\bigl(B_A^{t_1,t_2}\bigr)=\frac{\displaystyle\int_A\int_{t_1(x)}^{t_2(x)}\mathrm{e}^{-\Lambda(x,t)}\,\mathrm{d}t\,\mathrm{d}\pi(x)}{\displaystyle\int_E\int_0^{t_{*}(x)}\mathrm{e}^{-\Lambda(x,t)}\,\mathrm{d}t\,\mathrm{d}\pi(x)}.$$
Obviously, $\nu_{\pi}$ can be uniquely extended to a measure on $\mathscr{B}$. Using the measurable restriction of the flow function $\phi: M\to E$ to the set $M$, the measure $\nu_{\pi}\circ\phi^{-1}$ is the stationary distribution of $\mathscr{X}$.
5.8.6 The GI/G/k Queue

We consider a GI/G/k queue. Arrivals occur independently, with i.i.d. interarrival times distributed by $A$, and only single arrivals are allowed. In order to avoid multiple events (for example, an arrival and a departure) occurring at the same time instant, we assume that $A$ has a Lebesgue density $a$. There are $k$ independent and identical servers in the queueing system, and each customer has the same service time distribution $B$ at every server. The service discipline is FCFS, and the capacity of the waiting room is infinite.

5.8.6.1 State Changes Induced by a Service Event

Let $Q(t)$ and $J_i(t)$ be the queue length and the remaining service time of the $i$th server at time $t$, for $1\le i\le k$ and $t\ge 0$. Then the Markov process
$$\mathscr{Q}=\{(Q(t),J_1(t),J_2(t),\ldots,J_k(t)): t\ge 0\}$$
has the state space $E=\mathbb{Z}_+\times(\mathbb{R}_+)^{k}$. Specifically, if the $i$th server is idle, then $J_i(t)=x_i=0$ for $1\le i\le k$. For $y\in E$, we write $y=(n,x)$ with $x=(x_1,x_2,\ldots,x_k)$. We now describe the Markov process $\mathscr{Q}$ as a PDMP as follows.

(1) The flow function. A flow $\phi$ on $E$ is defined by
$$\phi_t(n,x)=\bigl(n,(x_1-t)^{+},(x_2-t)^{+},\ldots,(x_k-t)^{+}\bigr),$$
where $(x_i-t)^{+}=\max\{0,x_i-t\}$. Obviously, this flow represents the progress of the services.

(2) The first passage time. We define
$$t_{*}(x)=\begin{cases}\min\{x_i: x_i>0,\ 1\le i\le k\}, & x\ne 0,\\ \infty, & x=0.\end{cases}$$

(3) The transition measure. The transition measure $Q_S$ describes the state changes of the system when a server becomes idle. Let $x\in(\mathbb{R}_+)^{k}$ and $A=A_1\times A_2\times\cdots\times A_k\in\mathscr{B}^{k}$, where $\mathscr{B}^{k}$ is the $\sigma$-algebra of the Borel sets on $(\mathbb{R}_+)^{k}$. Then
$$Q_S((n,x),\{m\}\times A)=\begin{cases}\delta_{m,n-1}\prod\limits_{j\ne i}1_{A_j}(x_j)\,B(A_i), & n\ge 1,\ x_i=0,\\ \delta_{m,n}\,1_A(x), & n=0.\end{cases}$$

(4) The transition probability kernel. For the case $n\ge 1$, only one server can become idle at a time. Since the queue has Lebesgue-dominated single-arrival input and the servers work independently, the probability that two servers finish their work (or that an arrival and a service completion occur) at the same time instant is zero. Let $P(t;(n,x),\{m\}\times A)$ be the probability that, at time $t$ after the last arrival, the PDMP is in the state set $\{m\}\times A$, given that it was in state $(n,x)$ immediately after the last arrival. Further, let $P^{(r)}(t;(n,x),\{m\}\times A)$ denote the same probability, but restricted to the set of paths with $r$ service completions up to time $t$. Then the transition probability kernel, and hence the transient distribution between arrival instants, is given iteratively by
$$P(t;(n,x),\{m\}\times A)=\sum_{r=0}^{\infty}P^{(r)}(t;(n,x),\{m\}\times A),$$
where
$$P^{(0)}(t;(n,x),\{m\}\times A)=\begin{cases}\delta_{m,n}\,1_A(x-te), & n\ge 1,\ t<t_{*}(x),\\ \delta_{m,0}\,1_A((x-te)^{+}), & n=0,\\ 0, & \text{otherwise},\end{cases}$$
with $e=(1,1,\ldots,1)$, and
$$P^{(i+1)}(t;(n,x),\{m\}\times A)=\begin{cases}\displaystyle\int_E P^{(i)}(t-t_{*}(x);(l,y),\{m\}\times A)\,Q_S((n,x),\mathrm{d}(l,y)), & t>t_{*}(x),\\ 0, & \text{otherwise}.\end{cases}$$

5.8.6.2 State Changes Induced by an Arrival Event

We define another transition measure $Q_A$, which describes the state changes of the queueing process induced by an arrival event, as follows:
$$Q_A((n,x),\{m\}\times A)=\begin{cases}\delta_{m,n+1}\,1_A(x), & x>0,\\ \delta_{m,n+1}\prod\limits_{j\ne i}1_{A_j}(x_j)\,B(A_i), & \text{there exists } x_i=0.\end{cases}$$
Then the transient distribution of the queueing process is given iteratively by
$$P(t;(n,x),\{m\}\times A)=\sum_{r=0}^{\infty}P^{(r)}(t;(n,x),\{m\}\times A),$$
where
$$P^{(0)}(t;(n,x),\{m\}\times A)=P^{\phi}(t;(n,x),\{m\}\times A),$$
which is an initial probability, and
$$P^{(i+1)}(t;(n,x),\{m\}\times A)=\int_0^{t}\int_{(h,z)}\int_{(l,y)}P^{(i)}(t-u;(h,z),\{m\}\times A)\,P^{\phi}(u;(n,x),\mathrm{d}(l,y))\,Q_A((l,y),\mathrm{d}(h,z))\,a(u)\,\mathrm{d}u,$$
where $P^{\phi}$ denotes the kernel of the service-only process constructed in Section 5.8.6.1.
5.8.6.3 An Embedded Markov Chain of GI/M/1 Type

Let $\{t_l: l\ge 0\}$ denote the time instants of successive arrivals. Then $\{t_l: l\ge 0\}$ is a sequence of stopping times with respect to the canonical filtration of the queueing process $\{Q(t): t\ge 0\}$. We define $X_l=Q(t_l^{-})$ as the system state immediately before the $l$th arrival. Then $X=\{X_l: l\ge 0\}$ is the embedded Markov chain immediately before arrival instants, whose transition probabilities are defined by
$$P^{X}((n,x),\{m\}\times A)=P\{X_{l+1}\in\{m\}\times A \mid X_l=(n,x)\}.$$
We have
$$P^{X}((n,x),\{m\}\times A)=\int_E\int_0^{\infty}Q_A((n,x),\mathrm{d}(l,y))\,P^{\phi}(t;(l,y),\{m\}\times A)\,a(t)\,\mathrm{d}t.$$
Let
$$A_i(x,A)=P^{X}((n,x),\{n+1-i\}\times A)=\int_0^{\infty}P^{\phi}(t;(n+1,x),\{n+1-i\}\times A)\,a(t)\,\mathrm{d}t$$
and
$$B_i(x,A)=P^{X}((i,x),\{0\}\times A)=\begin{cases}\displaystyle\int_0^{\infty}P^{\phi}(t;(i+1,x),\{0\}\times A)\,a(t)\,\mathrm{d}t, & x>0,\\ \displaystyle\int_0^{\infty}\int_0^{\infty}P^{\phi}(t;(i+1,y),\{0\}\times A)\,\mathrm{d}B(y_i)\,a(t)\,\mathrm{d}t, & x_i=0,\end{cases}$$
where in the second case $y$ agrees with $x$ except that the $i$th coordinate $y_i$ is drawn from $B$, for all $n\ge i\ge 0$, $x\in(\mathbb{R}_+)^{k}$ and $A\in\mathscr{B}^{k}$. In this case, the transition probability kernel is given by
$$P^{X}=\begin{pmatrix}B_0 & A_0 & & \\ B_1 & A_1 & A_0 & \\ B_2 & A_2 & A_1 & A_0\\ \vdots & \vdots & \vdots & \vdots & \ddots\end{pmatrix}.$$
It is clear that
$$B_n\bigl(x,(\mathbb{R}_+)^{k}\bigr)=\sum_{i=n+1}^{\infty}A_i\bigl(x,(\mathbb{R}_+)^{k}\bigr)$$
for all $n\ge 0$ and $x\in(\mathbb{R}_+)^{k}$, so that each row of $P^{X}$ sums to one.

For the sequence $\{A_n(x,A),\ n\ge 0\}$ of probability kernels, let
$$\hat{A}(x,A)=\sum_{n=0}^{\infty}A_n(x,A).$$
We assume that the Markov chain with transition probability kernel $\hat{A}(x,A)$ is Harris recurrent and ergodic, and let $\theta(A)$ denote its stationary probability. Note that the kernel $\hat{A}(x,A)$ equals the transition probability kernel of the remaining service times immediately before arrival instants when there is always at least one waiting customer; the stationary distribution of the kernel $\hat{A}(x,A)$ therefore equals the $k$-fold convolution of the corresponding stationary distribution $\gamma(A)$ for one server. Hence we obtain $\theta([0,x))=\gamma^{k*}([0,x))$, where
$$\gamma([0,x))=\mu\int_0^{x}[1-B(v)]\,\mathrm{d}v$$
with $\mu^{-1}=\int_0^{\infty}[1-B(v)]\,\mathrm{d}v$. We define
$$\beta(x)=\sum_{n=1}^{\infty}n\,A_n\bigl(x,(\mathbb{R}_+)^{k}\bigr)$$
for $x\in(\mathbb{R}_+)^{k}$. Then the mean drift condition for stability reads
$$\int_{(\mathbb{R}_+)^{k}}\beta(x)\,\mathrm{d}\theta(x)>1.$$
Since
$$\mathrm{d}\theta(x)=\prod_{i=1}^{k}\mu[1-B(x_i)]\,\mathrm{d}x_i,$$
we obtain
$$\int_{(\mathbb{R}_+)^{k}}\beta(x)\,\mathrm{d}\theta(x)=\int_0^{\infty}\int_{(\mathbb{R}_+)^{k}}\sum_{n=1}^{\infty}n\,P^{\phi}\bigl(t;(n,x),\{1\}\times(\mathbb{R}_+)^{k}\bigr)\prod_{i=1}^{k}\mu[1-B(x_i)]\,\mathrm{d}x_1\,\mathrm{d}x_2\cdots\mathrm{d}x_k\,a(t)\,\mathrm{d}t=k\mu\int_0^{\infty}t\,a(t)\,\mathrm{d}t=\frac{k\mu}{\lambda},$$
where $\lambda^{-1}=\int_0^{\infty}t\,a(t)\,\mathrm{d}t$. Therefore, this queueing system is stable if and only if $\lambda<k\mu$.

Let $R(x,y)$ be the minimal nonnegative solution to the nonlinear kernel equation
$$R(x,y)=\sum_{n=0}^{\infty}\int_0^{\infty}R^{n}(x,z)\,A_n(z,y)\,\mathrm{d}z.$$
We write
$$\Psi_0(x,y)=\sum_{n=1}^{\infty}\int_0^{\infty}R^{n-1}(x,z)\,B_{n-1}(z,y)\,\mathrm{d}z.$$
It is easy to check that $\Psi_0(x,y)$ is the transition probability kernel of the chain censored in level 0. If $\lambda<k\mu$, then the stationary distribution of the embedded Markov chain with kernel $P^{X}$ is given by
$$\pi_0(x)=\kappa\,w_0(x)$$
and
$$\pi_n(x)=\kappa\int_0^{\infty}w_0(z)\,R^{n}(z,x)\,\mathrm{d}z,\quad n\ge 1,$$
where $w_0(x)$ is the stationary probability of the Markov chain with transition probability kernel $\Psi_0(x,y)$, and
$$\kappa=\frac{1}{1+\sum\limits_{n=1}^{\infty}\displaystyle\int_0^{\infty}\int_0^{\infty}w_0(z)\,R^{n}(z,x)\,\mathrm{d}z\,\mathrm{d}x}.$$
Now we compute the stationary distribution $p_m(A)$ of the queueing process. To do this, we write
$$A^{c}(t)=1-\int_0^{t}a(u)\,\mathrm{d}u$$
and
$$K(t;(n,x),\{m\}\times A)=P\{(Q(t),J_1(t),\ldots,J_k(t))\in\{m\}\times A,\ H>t \mid Q(0)=(n,x)\}=A^{c}(t)\,P(t;(n,x),\{m\}\times A),$$
where $H$ denotes the first inter-arrival time. Therefore, we obtain
$$p_m(A)=\lambda\sum_{n\ge m}\int_{(\mathbb{R}_+)^{k}}\int_0^{\infty}\pi_n(x)\,K(t;(n,x),\{m\}\times A)\,\mathrm{d}x\,\mathrm{d}t$$
for $m\ge 0$ and $A\in\mathscr{B}^{k}$.
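The stability condition $\lambda<k\mu$ obtained above is easy to evaluate for concrete distributions. A sketch with illustrative parameters (Erlang-2 interarrival times of mean one, exponential services of mean two, and $k=3$ servers):

```python
import math

# Illustrative GI/G/k data (made up for this example).
a = lambda t: 4.0 * t * math.exp(-2.0 * t)      # Erlang-2 interarrival density
B = lambda t: 1.0 - math.exp(-t / 2.0)          # exponential service distribution
k = 3

def integrate(f, hi=100.0, n=100_000):
    # plain midpoint rule on [0, hi]
    h = hi / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

lam = 1.0 / integrate(lambda t: t * a(t))       # 1 / E[interarrival time]
mu = 1.0 / integrate(lambda t: 1.0 - B(t))      # 1 / E[service time]
assert abs(lam - 1.0) < 1e-3 and abs(mu - 0.5) < 1e-3
assert lam < k * mu                             # stable: 1.0 < 1.5
```

The second integral uses $\mu^{-1}=\int_0^{\infty}[1-B(v)]\,\mathrm{d}v$, exactly as in the drift computation above.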
5.9 Notes in the Literature

Early results on discrete-time Markov chains on continuous state space were well documented in Finch [12], Athreya and Ney [3,4], Meyn and Tweedie [22], Hernández-Lerma and Lasserre [15] and Prieto-Rumeau and Hernández-Lerma [25]. As an important example, Tweedie [27] considered a discrete-time Markov chain of GI/M/1 type with continuous phase variable. He showed that the stationary probability distribution is operator-geometric. Sengupta [26] used the operator-geometric solution to study the stationary buffer occupancy distribution in a data communication model. Nielsen and Ramaswami [24] studied an orthonormal expansion for a discrete-time level-independent QBD process with continuous phase variable under appropriate regularity conditions, which leads to a computational framework for implementing Tweedie's operator-geometric solution. Breuer [5] described the infinitesimal generator for a Markov jump process on continuous state space, and provided the Kolmogorov forward and backward differential equations.

Available results for continuous-time Markov chains on continuous state space are few. From the standard theory of Markov chains (e.g., Anderson [1] and Kemeny, Snell and Knapp [17]), it is well known that the study of continuous-time Markov chains differs from that of discrete-time Markov chains, although the two can be related by means of the uniformization technique. However, it is possible that the conditions for the uniformization technique cannot be satisfied by some practical continuous-time Markov chains; see, e.g., Artalejo and Gómez-Corral [2] for some simple retrial queues. Li and Lin [18] provided a new theoretical framework for studying the continuous-time QBD process with continuous phase variable. Finally, this chapter introduces the PDMPs, which enable us to deal with more general queueing systems such as the GI/G/c queue. For the PDMPs, readers may refer to Davis [9], Gugerli [14], Costa [8], Dempster [10], Liu [20,21], Dufour and Costa [11] and Hou and Liu [16]. At the same time, Breuer [6,7] used the PDMPs to deal with some general queueing systems.

This chapter is written by means of Tweedie [27], Nielsen and Ramaswami [24], Li and Lin [18], Davis [9], Costa [8], Hou and Liu [16] and Breuer [6,7].
Problems

5.1 Compute the stationary distribution of the queue length for the G/SM/1 queue, where the interarrival time distribution is denoted by $F(x)$, and the service times form a semi-Markov process with the transition probability matrix $G(x)$ of size $k$.

5.2 Compute the stationary distribution of the queue length for the SM/G/1 queue, where the arrival process is a semi-Markov process with the transition probability matrix $F(x)$ of size $k$, and the service time distribution is denoted by $G(x)$.

5.3 Consider the SM/CPH/1 queue by means of the Markov chain of GI/M/1 type on a continuous state space.

5.4 Discuss the CMAP/G/1 queue by means of the Markov chain of M/G/1 type on a continuous state space.

5.5 Study the CMAP/CPH/1 queue by means of the continuous-time QBD process.

5.6 For a fluid queue driven by the GI/G/1 queue, apply the continuous-time Markov chain on continuous state space to analyze this fluid model.

5.7 Construct a more general fluid queue driven by a continuous-time Markov chain on continuous state space.

5.8 For the CMAP/CPH/1 queue, apply the orthogonal algorithm to compute the means of the stationary queue length and the stationary waiting time.

5.9 Define a batch Markovian arrival process with continuous phase variable, and then discuss its useful properties.

5.10 Use the PDMP to study the M/G/c retrial queue, and derive its stationary distributions of the queue length and the waiting time.

5.11 Use the PDMP to study the G/G/1 queue with server vacations whose vacation time distribution is general, and derive its stationary distributions of the queue length and the waiting time.

5.12 Use the PDMP to study the G/G/1 queue with negative customers whose interarrival time distribution is general, and derive its stationary distributions of the queue length and the waiting time.
References
1. Anderson W.J. (1991). Continuous-Time Markov Chains: An Applications-Oriented Approach, Springer-Verlag
2. Artalejo J.R. and A. Gómez-Corral (2008). Retrial Queueing Systems: A Computational Approach, Springer
3. Athreya K.B. and P. Ney (1978). A new approach to the limit theory of recurrent Markov chains. Transactions of the American Mathematical Society 245: 493–501
4. Athreya K.B. and P. Ney (1982). A renewal approach to the Perron-Frobenius theory of non-negative kernels on general state space. Mathematische Zeitschrift 179: 507–529
5. Breuer L. (2002). On Markov-additive jump processes. Queueing Systems 40: 75–91
6. Breuer L. (2003). Transient and stationary distributions for the GI/G/k queue with Lebesgue-dominated inter-arrival time distribution. Queueing Systems 45: 47–57
7. Breuer L. (2008). Continuity of the M/G/c queue. Queueing Systems 58: 321–331
8. Costa O.L.V. (1990). Stationary distributions for piecewise-deterministic Markov processes. Journal of Applied Probability 27: 60–73
9. Davis M.H.A. (1984). Piecewise-deterministic Markov processes: a general class of non-diffusion stochastic models. Journal of the Royal Statistical Society (Series B) 46: 353–388
10. Dempster M.A.H. (1991). Optimal control of piecewise-deterministic Markov processes. In Applied Stochastic Analysis, M.H.A. Davis and R.J. Elliott (eds), Gordon and Breach: New York, 303–325
11. Dufour F. and O.L.V. Costa (1999). Stability of piecewise deterministic Markov processes. SIAM Journal on Control and Optimization 37: 1483–1502
12. Finch P.D. (1963). A limit theorem for Markov chains with continuous state space. Journal of the Australian Mathematical Society 3: 351–358
13. Gail H.R., S.L. Hantler and B.A. Taylor (1994). Solutions of the basic matrix equation for M/G/1 type and G/M/1 type Markov chains. Stochastic Models 10: 1–43
14. Gugerli U.S. (1986). Optimal stopping of piecewise deterministic Markov processes. Stochastics 19: 221–236
15. Hernández-Lerma O. and J.B. Lasserre (2003). Markov Chains and Invariant Probabilities. Progress in Mathematics, 211. Birkhäuser Verlag: Basel
16. Hou Z.T. and G.X. Liu (2005). Markov Skeleton Processes and their Applications, Science Press and International Press
17. Kemeny J.G., J.L. Snell and A.W. Knapp (1976). Denumerable Markov Chains (Second Edition), Springer-Verlag: New York
18. Li Q.L. and C. Lin (2006). Continuous-time QBD processes with continuous phase variable. Computers & Mathematics with Applications 52: 1483–1510
19. Li Q.L., G.H. Wang and Y. Zhou (2004). Symmetric PH and EP distributions and their applications to the probability Hough transform. Computers & Mathematics with Applications 47: 823–844
20. Liu G.X. (1998). Piecewise deterministic Markov processes, Ph.D. Thesis, Changsha Railway University, Changsha, P.R. China
21. Liu G.X. (2002). Piecewise deterministic Markov processes and semi-dynamic systems. In Markov Processes and Control Markov Chains, J.A. Filar and Z.T. Hou (eds), Kluwer, 93–107
22. Meyn S.P. and R.L. Tweedie (1993). Markov Chains and Stochastic Stability. Springer-Verlag
23. Neuts M.F. (1981). Matrix-Geometric Solutions in Stochastic Models: An Algorithmic Approach, The Johns Hopkins University Press: Baltimore
24. Nielsen B.F. and V. Ramaswami (1997). A computational framework for a quasi birth and death process with a continuous phase variable. In ITC-15, V. Ramaswami and P.E. Wirth (eds), Elsevier Science B.V., 477–486
25. Prieto-Rumeau T. and O. Hernández-Lerma (2008). Ergodic control of continuous-time Markov chains with pathwise constraints. SIAM Journal on Control and Optimization 47: 1888–1908
26. Sengupta B. (1986). An operator geometric solution for a buffer occupancy problem. Stochastic Models 2: 251–271
27. Tweedie R.L. Operator-geometric stationary distributions for Markov chains, with application to queueing models. Advances in Applied Probability 14: 368–391
6 Block-Structured Markov Renewal Processes
Quan-Lin Li
Department of Industrial Engineering, Tsinghua University, Beijing 100084, P.R. China
E-mail: [email protected]

Abstract In this chapter, we provide the UL- and LU-types of RG-factorizations for the transition probability mass matrix of any irreducible Markov renewal process in terms of the censoring technique. Specifically, we deal with Markov renewal processes of GI/G/1 type, including the RG-factorization, the RG-factorization for the repeated blocks, the spectral analysis and the first passage time.

Keywords Markov renewal process, Markov renewal processes of GI/G/1 type, RG-factorization, spectral analysis, the first passage time.
This chapter provides the UL- and LU-types of RG-factorizations for the transition probability mass matrix of an irreducible Markov renewal process in terms of the Wiener-Hopf equations. Specifically, Markov renewal processes of GI/G/1 type are dealt with, including the RG-factorization for the repeated blocks, the spectral analysis and the first passage time. Note that the Markov renewal process is a useful mathematical tool in the study of non-Markovian stochastic models.

This chapter is organized as follows. Section 6.1 discusses the censored processes of block-structured Markov renewal processes. Sections 6.2 and 6.3 derive the UL- and LU-type RG-factorizations for the transition probability mass matrix based on the Wiener-Hopf equations, respectively. Section 6.4 deals with block-structured Markov renewal processes with finitely-many levels. Section 6.5 studies Markov renewal processes of GI/G/1 type. Section 6.6 considers spectral properties of the R- and G-measures. Section 6.7 analyzes the first passage times with effective algorithms, and also provides conditions for the state classification of Markov renewal processes of GI/G/1 type in terms of the R-, U- and G-measures. Finally, Section 6.8 gives some notes on the references on Markov renewal processes.
6.1 The Censoring Markov Renewal Processes

In this section, the censoring technique is applied to deal with an irreducible block-structured Markov renewal process. Based on the censored processes, conditions on the state classification of the Markov renewal process are provided.

We consider a Markov renewal process $\{(X_n, T_n), n \ge 0\}$ on the state space $\Omega \times [0, \infty)$ with $\Omega = \{(k, j) : k \ge 0,\ 1 \le j \le m_k\}$, where $X_n$ is the state of the process at the $n$th renewal epoch and $T_n$ is the total renewal time up to the $n$th renewal, i.e., $T_n = \sum_{i=0}^{n} \tau_i$ with $\tau_0 = 0$ and $\tau_n$ being the inter-renewal time between the $(n-1)$st and the $n$th renewal epochs for $n \ge 1$. The transition probability mass matrix of the Markov renewal process $\{(X_n, T_n), n \ge 0\}$ is given by

$$P(x) = \begin{pmatrix} P_{0,0}(x) & P_{0,1}(x) & P_{0,2}(x) & \cdots \\ P_{1,0}(x) & P_{1,1}(x) & P_{1,2}(x) & \cdots \\ P_{2,0}(x) & P_{2,1}(x) & P_{2,2}(x) & \cdots \\ \vdots & \vdots & \vdots & \end{pmatrix}, \tag{6.1}$$

where $P_{i,j}(x)$ is a matrix of size $m_i \times m_j$ whose $(r, r')$th entry is

$$(P_{i,j}(x))_{r,r'} = P\{X_{n+1} = (j, r'),\ T_{n+1} \le x + T_n \mid X_n = (i, r),\ T_n\}.$$

The matrix $P(\infty)$, defined as $\lim_{x \to \infty} P(x)$ entry-wise, is referred to as the embedded Markov chain of the Markov renewal process. Throughout this chapter, we assume that the Markov renewal process $P(x)$ is irreducible and $P(x)e \le e$ for all $x \ge 0$, where $e$ is a column vector of ones of suitable size. Because of the block-partitioned structure of $P(x)$, the Markov renewal process $\{(X_n, T_n), n \ge 0\}$ is referred to as a block-structured Markov renewal process. Many application problems can be naturally modeled as a block-structured Markov renewal process.

We first define a censored process for a Markov renewal process whose transition probability mass matrix consists of scalar entries. We then treat a block-structured Markov renewal process as a special case.

Definition 6.1 Suppose that $\{(X_n, T_n), n \ge 0\}$ is an irreducible Markov renewal process on the state space $\Omega \times [0, \infty)$, where $X_n \in \Omega = \{0, 1, 2, \ldots\}$ and $T_n \in [0, \infty)$. Let $E$ be a non-empty subset of $\Omega$. If the successive visits of $X_n$ to the subset $E$ take place at the $n_k$th step of state transition, then the inter-visit time $\tau_k^E$ between the $(k-1)$st and the $k$th visits to $E$ is given by $\tau_k^E = \tau_{n_{k-1}+1} + \tau_{n_{k-1}+2} + \cdots + \tau_{n_k}$ for $k \ge 1$. Let $X_k^E = X_{n_k}$ and $T_k^E = \sum_{i=1}^{k} \tau_i^E$ for $k \ge 1$. Then the sequence $\{(X_k^E, T_k^E), k \ge 1\}$ is called the censored process with censoring set $E$.
Throughout this chapter, we denote by $(B)_{r,r'}$ the $(r, r')$th entry of the matrix $B$, and by $B * C(x)$ (or $B(x) * C(x)$) the convolution of two matrix functions $B(x)$ and $C(x)$, i.e.,

$$B * C(x) = \int_0^x B(x - u)\, \mathrm{d}C(u).$$

We then recursively define $B^{n*}(x) = B * B^{(n-1)*}(x)$ for $n \ge 1$ with $B^{0*}(x) = I$, where $I$ is the identity matrix.
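The Stieltjes convolution just defined is easy to approximate on a uniform grid. The following sketch (our own illustration, not the book's code; all names are ours) convolves matrix distribution functions numerically and checks the scalar case, where two unit-exponential distribution functions convolve to the Erlang-2 distribution function $1 - e^{-x} - x e^{-x}$.

```python
import numpy as np

def stieltjes_conv(B, C):
    """Convolution (B*C)(x) = int_0^x B(x-u) dC(u) for matrix distribution
    functions sampled on a uniform grid 0, dx, 2*dx, ...; shapes (N, m, m)."""
    N = B.shape[0]
    dC = np.diff(C, axis=0, prepend=C[:1] * 0)   # increments of C, dC[0] = C(0)
    out = np.zeros_like(B)
    for n in range(N):
        for k in range(n + 1):
            out[n] += B[n - k] @ dC[k]
    return out

# Scalar sanity check: exponential * exponential = Erlang-2.
dx = 0.01
x = np.arange(0, 4, dx)
B = (1 - np.exp(-x)).reshape(-1, 1, 1)
conv = stieltjes_conv(B, B)
```

The approximation error is of order dx, so a fine grid is needed for accurate values.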
For convenience of description, we write $P^{[\le n]}(x)$ for the censored transition probability mass matrix $P^E(x)$ if the censoring set $E = L_{\le n}$; in particular, $P^{[\le \infty]}(x) = P(x)$ and $P^{[0]}(x) = P^{[\le 0]}(x)$. On the other hand, we write $P^{[\ge n]}(x)$ for the censored transition probability mass matrix with the censoring set $E = L_{\ge n}$; specifically, $P^{[\ge 0]}(x) = P(x)$.

Let $E^c = \Omega \setminus E$. According to the subsets $E$ and $E^c$, the transition probability mass matrix $P(x)$ is partitioned as

$$P(x) = \begin{pmatrix} T(x) & U(x) \\ V(x) & Q(x) \end{pmatrix}, \tag{6.2}$$

where the first block-row and block-column are indexed by $E$ and the second by $E^c$.
Lemma 6.1 If $P(x)$ is irreducible, then each element of $\widehat{Q}(x) = \sum_{n=0}^{\infty} Q^{n*}(x)$ is finite for $x \ge 0$.

Proof If $P(x)$ is irreducible, then $P(\infty)$ is irreducible, since $0 \le P(x) \le P(\infty)$. It is obvious that $Q(\infty)$ is strictly substochastic due to $V(\infty) \ne 0$. Hence, using Lemma 2.1 we have

$$\widehat{Q}(\infty) = \sum_{n=0}^{\infty} Q^n(\infty) = [I - Q(\infty)]^{-1} < \infty,$$

where $[I - Q(\infty)]^{-1}$ is the minimal nonnegative inverse of $I - Q(\infty)$. Since each element of $\widehat{Q}(\infty)$ is finite and $0 \le \widehat{Q}(x) \le \widehat{Q}(\infty)$ for $x > 0$, each element of $\widehat{Q}(x)$ is finite. This completes the proof.

The matrix $\widehat{Q}(x)$ is referred to as the fundamental matrix of $Q(x)$. In the following, we show that the censored process $\{(X_k^E, T_k^E), k \ge 1\}$ is again a Markov renewal process.

Theorem 6.1 The censored process $\{(X_k^E, T_k^E), k \ge 1\}$ is a Markov renewal process whose transition probability mass matrix is given by

$$P^E(x) = T(x) + U * \widehat{Q} * V(x). \tag{6.3}$$

Proof To show that $\{(X_k^E, T_k^E), k \ge 1\}$ is a Markov renewal process, we need to show that $T_{n+1}^E$ is independent of $X_0^E, X_1^E, \ldots, X_{n-1}^E, T_0^E, T_1^E, \ldots, T_n^E$,
given the state of $X_n^E$. This is clear from the fact that $\{(X_n, T_n), n \ge 0\}$ is a Markov renewal process with the strong Markov property.

The $(i, j)$th entry of the transition probability mass matrix of the Markov renewal process $\{(X_k^E, T_k^E), k \ge 1\}$ is

$$(P^E(x))_{i,j} = P\{X_{n+1}^E = j,\ T_{n+1}^E \le x + T_n^E \mid X_n^E = i,\ T_n^E\} = P\{X_1^E = j,\ T_1^E \le x \mid X_0^E = i,\ T_0^E = 0\}.$$

To explicitly express $P^E(x)$ in terms of the original transition probability mass matrix, we consider the following two possible cases:

Case I: $n_1 = 1$. In this case, $i, j \in E$, $X_1^E = X_1$, $T_1^E = \tau_1$ and

$$P\{X_1^E = j,\ T_1^E \le x \mid X_0^E = i,\ T_0^E = 0\} = (T(x))_{i,j}. \tag{6.4}$$

Case II: $n_1 = k$ for $k \ge 2$. In this case, $i, j \in E$, $X_1^E = X_k$, $T_1^E = \sum_{l=1}^{k} \tau_l$ and

$$P\{X_1^E = j,\ T_1^E \le x \mid X_0^E = i,\ T_0^E = 0\} = P\Big\{X_k = j,\ X_l \notin E \text{ for } l = 1, 2, \ldots, k-1,\ \sum_{l=1}^{k} \tau_l \le x \,\Big|\, X_0 = i,\ \tau_0 = 0\Big\} = (U * Q^{(k-2)*} * V(x))_{i,j}. \tag{6.5}$$

It follows from Eq. (6.4) and Eq. (6.5) that

$$P\{X_1^E = j,\ T_1^E \le x \mid X_0^E = i,\ T_0^E = 0\} = (T(x))_{i,j} + \sum_{k=2}^{\infty} (U * Q^{(k-2)*} * V(x))_{i,j} = (T(x))_{i,j} + (U * \widehat{Q} * V(x))_{i,j}.$$

This completes the proof.

Remark 6.1 The censored process $\{(X_k^{E^c}, T_k^{E^c}), k \ge 1\}$ is a Markov renewal process whose transition probability mass matrix is given by

$$P^{E^c}(x) = Q(x) + V * \widehat{T} * U(x).$$

As seen in Chapter 2, the UL- and LU-types of RG-factorizations are obtained by means of the two different censored processes $P^E(x)$ and $P^{E^c}(x)$, respectively.

Based on the censored renewal processes above, a probabilistic interpretation for each component in the expression Eq. (6.3) for $P^E(x)$ is available and useful. For the Markov renewal process $P(x)$, let $T_{E^c,E^c}(i, j)$ be the total renewal time until the process visits state $j \in E^c$ for the last time before entering $E$, given that the process starts in state $i \in E^c$. Formally, assume that at the $k$th transition the process visits state $j \in E^c$ for the last time before entering $E$, given that the process starts in state $i \in E^c$. Then $T_{E^c,E^c}(i, j) = \sum_{l=1}^{k} \tau_l$. Similarly, let $T_{E,E^c}(i, j)$ be the total renewal time until the process visits state $j \in E^c$ before returning to $E$, given that the process starts in state $i \in E$; $T_{E^c,E}(i, j)$ the total renewal time until the process enters $E$ and upon entering $E$ the first state visited is $j \in E$, given that the process starts in state $i \in E^c$; and $T_{E,E}(i, j)$ the total renewal time until the process enters $E$ and upon returning to $E$ the first state visited is $j \in E$, given that the process starts in state $i \in E$.

(1) $(\widehat{Q}(x))_{i,j}$ is the expected number of visits to state $j \in E^c$ before entering $E$ with $T_{E^c,E^c}(i, j) \le x$, given that the process starts in state $i \in E^c$.

(2) $(U * \widehat{Q}(x))_{i,j}$ is the expected number of visits to state $j \in E^c$ before returning to $E$ with $T_{E,E^c}(i, j) \le x$, given that the process starts in state $i \in E$.

(3) $(\widehat{Q} * V(x))_{i,j}$ is the probability that the process enters $E$, upon entering $E$ the first state visited is $j \in E$ and $T_{E^c,E}(i, j) \le x$, given that the process starts in state $i \in E^c$.

(4) $(U * \widehat{Q} * V(x))_{i,j}$ is the probability that upon returning to $E$ the first state visited is $j \in E$ and $T_{E,E}(i, j) \le x$, given that the process starts in state $i \in E$.

Define the double transformation of $n$ and $x$ for the censored Markov renewal process as
$$P^{E*}(z, s) = \big( P^{E*}_{i,j}(z, s) \big)_{i,j \in E},$$

where

$$P^{E*}_{i,j}(z, s) = \sum_{n=1}^{\infty} z^n \int_0^{\infty} e^{-sx}\, \mathrm{d}P\{X_n = j,\ T_n \le x \mid X_0 = i,\ T_0 = 0\}.$$

The single transformations $\widetilde{T}(s)$, $\widetilde{U}(s)$, $\widetilde{V}(s)$ and $\widetilde{Q}(s)$ are defined conventionally; for example, $\widetilde{T}(s) = \int_0^{\infty} e^{-sx}\, \mathrm{d}T(x)$.

The following corollary provides a useful result for studying the two-dimensional random vector $(X_1^E, T_1^E)$, the proof of which is obvious from Eq. (6.4) and Eq. (6.5). $(X_1^E, T_1^E)$ is important for the study of the Markov renewal process $\{(X_n, T_n), n \ge 1\}$. It is worthwhile to notice that an important example is analyzed in Section 2.4 of Neuts [17].
Corollary 6.1

$$P^{E*}(z, s) = z \widetilde{T}(s) + z^2 \widetilde{U}(s)\big[I - z\widetilde{Q}(s)\big]^{-1} \widetilde{V}(s),$$

where

$$\big[I - z\widetilde{Q}(s)\big]^{-1} = \sum_{n=0}^{\infty} z^n \big[\widetilde{Q}(s)\big]^n.$$

Based on Definition 6.1, we have the following two useful properties.

Property 6.1 For $E_1 \subset E_2$, $P^{E_1}(x) = (P^{E_2})^{E_1}(x)$.

Property 6.2 $P(x)$ is irreducible if and only if $P^E(x)$ is irreducible for all subsets $E$ of $\Omega$.

Now, we consider the state classification for an irreducible Markov renewal process. Çinlar [6] shows that $P(x)$ is recurrent or transient if and only if $P(\infty)$ is recurrent or transient, respectively. Therefore, we illustrate the following useful relations.

Proposition 6.1 (1) $P(x)$ is recurrent if and only if $P^E(x)$ is recurrent for every subset $E \subset \Omega$. (2) $P(x)$ is transient if and only if $P^E(x)$ is transient for every subset $E \subset \Omega$.

Proposition 6.2 If $P(x)$ is irreducible, then (1) $P(x)$ is recurrent if and only if $P^E(x)$ is recurrent for some subset $E \subset \Omega$; and (2) $P(x)$ is transient if and only if $P^E(x)$ is transient for some subset $E \subset \Omega$.

The following proposition provides a sufficient condition under which a Markov renewal process $P(x)$ is positive recurrent. The proof is easy according to the fact that $\sum_{j=0}^{\infty} \int_0^{\infty} x\, \mathrm{d}P_{i,j}(x)\, e$ is the mean total sojourn time of the Markov renewal process $P(x)$ in state $i$. It is worthwhile to note that $P(x)$ may not be positive recurrent when $P(\infty)$ is positive recurrent. This is a further result of Proposition 6.2, from the recurrent to the positive recurrent.

Proposition 6.3 The Markov renewal process $P(x)$ is positive recurrent if
(1) $P(\infty)$ is positive recurrent and
(2) $\sum_{j=0}^{\infty} \int_0^{\infty} x\, \mathrm{d}P_{i,j}(x)\, e$ is finite for all $i \ge 0$.

Remark 6.2 Remark (b) in Section 3.2 of Neuts [17] (p. 140) illustrates that condition (2) in Proposition 6.3 is strong. For example, for a Markov renewal process of M/G/1 type, the sufficient condition only requires that $\sum_{j=0}^{\infty} \int_0^{\infty} x\, \mathrm{d}P_{i,j}(x)\, e$ for $i = 0$ and $\int_0^{\infty} x\, \mathrm{d}P_{1,0}(x)$ are finite. Therefore, condition (2) in Proposition 6.3 can further be weakened.
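At $x = \infty$, the censored matrix in Eq. (6.3) reduces to the familiar stochastic complement $T + U(I - Q)^{-1}V$ of the embedded Markov chain. The following sketch (our own illustration, not the book's code) computes it and checks Property 6.1 numerically: censoring on $E_2$ and then on $E_1 \subset E_2$ agrees with censoring on $E_1$ directly.

```python
import numpy as np

def censor(P, E):
    """Embedded censored chain P^E(inf) = T + U (I - Q)^{-1} V,
    where T, U, V, Q are the blocks of P over E and its complement."""
    E = list(E)
    Ec = [i for i in range(P.shape[0]) if i not in E]
    T = P[np.ix_(E, E)]
    U = P[np.ix_(E, Ec)]
    V = P[np.ix_(Ec, E)]
    Q = P[np.ix_(Ec, Ec)]
    return T + U @ np.linalg.solve(np.eye(len(Ec)) - Q, V)

# A small irreducible stochastic matrix (rows sum to one).
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])
P1 = censor(P, [0, 1])    # censor on E2 = {0, 1}; again stochastic
P2 = censor(P1, [0])      # then censor on E1 = {0}
P3 = censor(P, [0])       # censor on E1 directly
```

That `P2` and `P3` coincide is exactly Property 6.1 evaluated at $x = \infty$.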
6.2 The UL-Type RG-Factorization

In this section, we define the UL-type R-, U- and G-measures for the Markov renewal process, and derive the UL-type RG-factorization for the transition probability mass matrix.

For $0 \le i < j$, $R_{i,j}(k, x)$ is an $m_i \times m_j$ matrix whose $(r, r')$th entry is the probability that, starting in state $(i, r)$ at time 0, the Markov renewal process makes its $k$th transition within the renewal time interval $[0, x]$ for a visit to state $(j, r')$ without visiting any state in $L_{\le (j-1)}$ during intermediate steps; that is,

$$(R_{i,j}(k, x))_{r,r'} = P\{X_k = (j, r'),\ X_l \notin L_{\le (j-1)} \text{ for } l = 1, 2, \ldots, k-1,\ T_k \le x \mid X_0 = (i, r)\}. \tag{6.6}$$

Let $R_{i,j}(x) = \sum_{k=1}^{\infty} R_{i,j}(k, x)$. Then the $(r, r')$th entry of $R_{i,j}(x)$ is the expected number of visits to state $(j, r')$ made in the renewal time interval $[0, x]$ without visiting any state in $L_{\le (j-1)}$ during intermediate steps, given that the process starts in state $(i, r)$ at time 0.

For $0 \le j < i$, $G_{i,j}(k, x)$ is an $m_i \times m_j$ matrix whose $(r, r')$th entry is the probability that, starting in state $(i, r)$ at time 0, the Markov renewal process makes its $k$th transition within the renewal time interval $[0, x]$ for a visit to state $(j, r')$ without visiting any state in $L_{\le (i-1)}$ during intermediate steps; that is,

$$(G_{i,j}(k, x))_{r,r'} = P\{X_k = (j, r'),\ X_l \notin L_{\le (i-1)} \text{ for } l = 1, 2, \ldots, k-1,\ T_k \le x \mid X_0 = (i, r)\}. \tag{6.7}$$

Let $G_{i,j}(x) = \sum_{k=1}^{\infty} G_{i,j}(k, x)$. Then the $(r, r')$th entry of $G_{i,j}(x)$ is the probability that, starting in state $(i, r)$ at time 0, the Markov renewal process makes its first visit to $L_{\le (i-1)}$ within the renewal time interval $[0, x]$ and upon entering $L_{\le (i-1)}$ it visits state $(j, r')$.

The two matrix sequences $\{R_{i,j}(x)\}$ and $\{G_{i,j}(x)\}$ are called the UL-type R- and G-measures of the Markov renewal process $P(x)$, respectively. We partition the transition probability mass matrix $P(x)$ according to the three subsets $L_{\le (n-1)}$, $L_n$ and $L_{\ge (n+1)}$ as

$$P(x) = \begin{pmatrix} T(x) & U_0(x) & U_1(x) \\ V_0(x) & T_0(x) & U_2(x) \\ V_1(x) & V_2(x) & Q_0(x) \end{pmatrix}. \tag{6.8}$$
Let

$$Q(x) = \begin{pmatrix} T_0(x) & U_2(x) \\ V_2(x) & Q_0(x) \end{pmatrix} \tag{6.9}$$

and

$$Q^{n*}(x) = \begin{pmatrix} D_{11}(n, x) & D_{12}(n, x) \\ D_{21}(n, x) & D_{22}(n, x) \end{pmatrix}, \quad n \ge 0. \tag{6.10}$$

Partition $\widehat{Q}(x) = \sum_{n=0}^{\infty} Q^{n*}(x)$ accordingly as

$$\widehat{Q}(x) = \begin{pmatrix} H_{11}(x) & H_{12}(x) \\ H_{21}(x) & H_{22}(x) \end{pmatrix}. \tag{6.11}$$

We write

$$R_n(x) = \big(R_{0,n}(x)^{\mathrm T}, R_{1,n}(x)^{\mathrm T}, R_{2,n}(x)^{\mathrm T}, \ldots, R_{n-1,n}(x)^{\mathrm T}\big)^{\mathrm T} \tag{6.12}$$

and

$$G_n(x) = \big(G_{n,0}(x), G_{n,1}(x), G_{n,2}(x), \ldots, G_{n,n-1}(x)\big). \tag{6.13}$$

The following lemma provides expressions for the matrices $R_n(x)$ and $G_n(x)$.

Lemma 6.2 For $x > 0$ and $n \ge 1$,

$$R_n(x) = U_0 * H_{11}(x) + U_1 * H_{21}(x) \tag{6.14}$$

and

$$G_n(x) = H_{11} * V_0(x) + H_{12} * V_1(x). \tag{6.15}$$

Proof We only prove Eq. (6.14); Eq. (6.15) can be proved similarly. For $0 \le i \le n-1$, we consider two possible cases for $(R_{i,n}(k, x))_{r,r'}$ as follows:

Case I: $k = 1$. In this case,

$$(R_{i,n}(k, x))_{r,r'} = P\{X_1 = (n, r'),\ T_1 \le x \mid X_0 = (i, r),\ T_0 = 0\} = (U_0(x))^{\langle i \rangle}_{r,r'}. \tag{6.16}$$

Case II: $k \ge 2$. In this case,

$$(R_{i,n}(k, x))_{r,r'} = P\{X_k = (n, r'),\ X_l \notin L_{\le (n-1)} \text{ for } l = 1, 2, \ldots, k-1,\ T_k \le x \mid X_0 = (i, r),\ T_0 = 0\} = \big(U_0 * D_{11}(k-1, x) + U_1 * D_{21}(k-1, x)\big)^{\langle i \rangle}_{r,r'}, \tag{6.17}$$

where the superscript $\langle i \rangle$ denotes the $i$th block-row.
Noting that $D_{11}(0, x) = I$ and $D_{21}(0, x) = 0$, it follows from Eq. (6.16) and Eq. (6.17) that

$$(R_{i,n}(x))_{r,r'} = \sum_{k=1}^{\infty} (R_{i,n}(k, x))_{r,r'} = (U_0(x))^{\langle i \rangle}_{r,r'} + \sum_{k=2}^{\infty} \big(U_0 * D_{11}(k-1, x) + U_1 * D_{21}(k-1, x)\big)^{\langle i \rangle}_{r,r'} = \Big( \sum_{k=0}^{\infty} \big(U_0 * D_{11}(k, x) + U_1 * D_{21}(k, x)\big) \Big)^{\langle i \rangle}_{r,r'} = \big(U_0 * H_{11}(x) + U_1 * H_{21}(x)\big)^{\langle i \rangle}_{r,r'}.$$

This completes the proof.

For the Markov renewal process $P(x)$, let $Q_n(x)$ be the southeast corner of $P(x)$ beginning from level $n$, i.e., $Q_n(x) = (P_{i,j}(x))_{i,j \ge n}$. Let $\widehat{Q}_n(x) = \sum_{k=0}^{\infty} Q_n^{k*}(x)$, and let $\widehat{Q}_n^{(k,\cdot)}(x)$ and $\widehat{Q}_n^{(\cdot,l)}(x)$ be the $k$th block-row and the $l$th block-column of $\widehat{Q}_n(x)$, respectively. The following corollary easily follows from Lemma 6.2.

Corollary 6.2 For $0 \le i < j$,

$$R_{i,j}(x) = \big(P_{i,j}(x), P_{i,j+1}(x), P_{i,j+2}(x), \ldots\big) * \widehat{Q}_j^{(\cdot,1)}(x) \tag{6.18}$$

and for $0 \le j < i$,

$$G_{i,j}(x) = \widehat{Q}_i^{(1,\cdot)}(x) * \big(P_{i,j}(x)^{\mathrm T}, P_{i+1,j}(x)^{\mathrm T}, P_{i+2,j}(x)^{\mathrm T}, \ldots\big)^{\mathrm T}. \tag{6.19}$$

It follows from Eq. (6.9) that

$$\widetilde{Q}(s) = \begin{pmatrix} \widetilde{T}_0(s) & \widetilde{U}_2(s) \\ \widetilde{V}_2(s) & \widetilde{Q}_0(s) \end{pmatrix}$$

and from Eq. (6.10) that

$$\widetilde{\widehat{Q}}(s) = \begin{pmatrix} \widetilde{H}_{11}(s) & \widetilde{H}_{12}(s) \\ \widetilde{H}_{21}(s) & \widetilde{H}_{22}(s) \end{pmatrix}.$$

From either Lemma 6.2 or Corollary 6.2 it is clear that the determination of the R- and G-measures relies on the entries of the fundamental matrix $\widehat{Q}(x)$. The following lemma provides a formula for expressing the transformation of the fundamental matrix.

Lemma 6.3 For $\mathrm{Re}(s) \ge 0$,
$$\widetilde{H}_{11}(s) = \big[I - \widetilde{T}_0(s) - \widetilde{U}_2(s)\widetilde{\widehat{Q}}_0(s)\widetilde{V}_2(s)\big]^{-1},$$
$$\widetilde{H}_{12}(s) = \big[I - \widetilde{T}_0(s) - \widetilde{U}_2(s)\widetilde{\widehat{Q}}_0(s)\widetilde{V}_2(s)\big]^{-1}\widetilde{U}_2(s)\widetilde{\widehat{Q}}_0(s),$$
$$\widetilde{H}_{21}(s) = \widetilde{\widehat{Q}}_0(s)\widetilde{V}_2(s)\big[I - \widetilde{T}_0(s) - \widetilde{U}_2(s)\widetilde{\widehat{Q}}_0(s)\widetilde{V}_2(s)\big]^{-1},$$
$$\widetilde{H}_{22}(s) = \widetilde{\widehat{Q}}_0(s) + \widetilde{\widehat{Q}}_0(s)\widetilde{V}_2(s)\big[I - \widetilde{T}_0(s) - \widetilde{U}_2(s)\widetilde{\widehat{Q}}_0(s)\widetilde{V}_2(s)\big]^{-1}\widetilde{U}_2(s)\widetilde{\widehat{Q}}_0(s).$$

Symmetrically,

$$\widetilde{H}_{11}(s) = \widetilde{\widehat{T}}_0(s) + \widetilde{\widehat{T}}_0(s)\widetilde{U}_2(s)\big[I - \widetilde{Q}_0(s) - \widetilde{V}_2(s)\widetilde{\widehat{T}}_0(s)\widetilde{U}_2(s)\big]^{-1}\widetilde{V}_2(s)\widetilde{\widehat{T}}_0(s),$$
$$\widetilde{H}_{12}(s) = \widetilde{\widehat{T}}_0(s)\widetilde{U}_2(s)\big[I - \widetilde{Q}_0(s) - \widetilde{V}_2(s)\widetilde{\widehat{T}}_0(s)\widetilde{U}_2(s)\big]^{-1},$$
$$\widetilde{H}_{21}(s) = \big[I - \widetilde{Q}_0(s) - \widetilde{V}_2(s)\widetilde{\widehat{T}}_0(s)\widetilde{U}_2(s)\big]^{-1}\widetilde{V}_2(s)\widetilde{\widehat{T}}_0(s),$$
$$\widetilde{H}_{22}(s) = \big[I - \widetilde{Q}_0(s) - \widetilde{V}_2(s)\widetilde{\widehat{T}}_0(s)\widetilde{U}_2(s)\big]^{-1},$$

where $\widetilde{\widehat{Q}}_0(s) = \sum_{n=0}^{\infty} [\widetilde{Q}_0(s)]^n$ and $\widetilde{\widehat{T}}_0(s) = \sum_{n=0}^{\infty} [\widetilde{T}_0(s)]^n$.
R n ( x)
l V ( x) º
ªU 2 Q 2 0 ¬ ¼
( l k )*
and f
G n ( x)
l
§l·
¦ ¦ ¨ k ¸T l 0 k 0
© ¹
k* 0
l V ( x) º ( x) ªU 2 Q 2 0 ¬ ¼
( l k )*
l V ( x) º .
ªV0 ( x) U 2 Q 0 1 ¬ ¼ Proof It follows from Lemmas 6.2 and 6.3 that i n( s ) R
1 i i ª i ( s ) i ( s )Q l ( s ) i ( s ) º ª I i ( s ) i ( s )Q l (s) i (s) º U U V U V T 0 0 0 1 2 2 2 «¬ 0 »¼ «¬ »¼
i (s) Q n
1 i i ª I i ( s ) i ( s )Q l ( s ) i ( s ) º ª i ( s ) i ( s )Q l (s) i (s)º . U V V U2 V1 » T 0 0 0 2 2 ¬« ¼» ¬« 0 ¼
and
The inverse transform for the above two equations immediately leads to the desired result. 297
The following theorem provides a censoring invariance for the R- and G-measures of Markov renewal processes. We denote by $R^{[\le n]}_{i,j}(x)$ and $G^{[\le n]}_{i,j}(x)$ the R- and G-measures of the censored Markov renewal process $P^{[\le n]}(x)$.

Theorem 6.3 (1) For $0 \le i < j \le n$, $R^{[\le n]}_{i,j}(x) = R_{i,j}(x)$. (2) For $0 \le j < i \le n$, $G^{[\le n]}_{i,j}(x) = G_{i,j}(x)$.

Proof We only prove (1); (2) can be proved similarly.

First, we assume that $n = j$ and $P(x)$ is partitioned according to the three subsets $L_{<n}$, $L_n$ and $L_{>n}$ as in Eq. (6.8). It follows from Theorem 6.1 that

$$P^{[\le n]}(x) = \begin{pmatrix} T(x) & U_0(x) \\ V_0(x) & T_0(x) \end{pmatrix} + \begin{pmatrix} U_1(x) \\ U_2(x) \end{pmatrix} * \widehat{Q}_0 * \big(V_1(x), V_2(x)\big) = \begin{pmatrix} T(x) + U_1 * \widehat{Q}_0 * V_1(x) & U_0(x) + U_1 * \widehat{Q}_0 * V_2(x) \\ V_0(x) + U_2 * \widehat{Q}_0 * V_1(x) & T_0(x) + U_2 * \widehat{Q}_0 * V_2(x) \end{pmatrix}. \tag{6.20}$$

Hence, simple calculations lead to

$$R^{[\le n]}_n(x) = \big[U_0(x) + U_1 * \widehat{Q}_0 * V_2(x)\big] * \sum_{l=0}^{\infty} \big[T_0(x) + U_2 * \widehat{Q}_0 * V_2(x)\big]^{l*} = \big[U_0(x) + U_1 * \widehat{Q}_0 * V_2(x)\big] * \sum_{l=0}^{\infty} \sum_{k=0}^{l} \binom{l}{k} T_0^{k*} * \big[U_2 * \widehat{Q}_0 * V_2(x)\big]^{(l-k)*}. \tag{6.21}$$

Therefore, $R^{[\le n]}_n(x) = R_n(x)$ according to Theorem 6.2.

If $n > j$, we first censor the matrix $P(x)$ in the set $L_{\le j}$, so that $R^{[\le j]}_{i,j}(x) = R_{i,j}(x)$ by the fact just proved. Next, we censor the matrix $P(x)$ in the set $L_{\le n}$. Since, according to Property 6.1, the censored matrix $P^{[\le j]}(x)$ can be obtained from the censored matrix $P^{[\le n]}(x)$, we have $R^{[\le n]}_{i,j}(x) = R^{[\le j]}_{i,j}(x) = R_{i,j}(x)$ for $j < n$. This completes the proof.

Let

$$P^{[\le n]}(x) = \begin{pmatrix} \varphi^{(n)}_{0,0}(x) & \varphi^{(n)}_{0,1}(x) & \cdots & \varphi^{(n)}_{0,n}(x) \\ \varphi^{(n)}_{1,0}(x) & \varphi^{(n)}_{1,1}(x) & \cdots & \varphi^{(n)}_{1,n}(x) \\ \vdots & \vdots & & \vdots \\ \varphi^{(n)}_{n,0}(x) & \varphi^{(n)}_{n,1}(x) & \cdots & \varphi^{(n)}_{n,n}(x) \end{pmatrix}, \quad n \ge 0,$$

be block-partitioned according to levels. The equations in the following lemma provide a relationship among the entries of censored Markov renewal processes, which are essentially the Wiener-Hopf
equations for the Markov renewal process.

Lemma 6.4 For $n \ge 0$ and $0 \le i, j \le n$,

$$\varphi^{(n)}_{i,j}(x) = P_{i,j}(x) + \sum_{k=n+1}^{\infty} \varphi^{(k)}_{i,k} * \sum_{l=0}^{\infty} \big[\varphi^{(k)}_{k,k}(x)\big]^{l*} * \varphi^{(k)}_{k,j}(x).$$

Proof Consider the censored matrix $P^{[\le n]}(x)$ based on $P^{[\le (n+1)]}(x)$. It follows from Theorem 6.1 that

$$P^{[\le n]}(x) = \begin{pmatrix} \varphi^{(n+1)}_{0,0}(x) & \cdots & \varphi^{(n+1)}_{0,n}(x) \\ \vdots & & \vdots \\ \varphi^{(n+1)}_{n,0}(x) & \cdots & \varphi^{(n+1)}_{n,n}(x) \end{pmatrix} + \begin{pmatrix} \varphi^{(n+1)}_{0,n+1}(x) \\ \vdots \\ \varphi^{(n+1)}_{n,n+1}(x) \end{pmatrix} * \sum_{l=0}^{\infty} \big[\varphi^{(n+1)}_{n+1,n+1}(x)\big]^{l*} * \big(\varphi^{(n+1)}_{n+1,0}(x), \ldots, \varphi^{(n+1)}_{n+1,n}(x)\big).$$

Therefore, by repeatedly using Theorem 6.1 we obtain

$$\varphi^{(n)}_{i,j}(x) = \varphi^{(n+1)}_{i,j}(x) + \varphi^{(n+1)}_{i,n+1} * \sum_{l=0}^{\infty} \big[\varphi^{(n+1)}_{n+1,n+1}(x)\big]^{l*} * \varphi^{(n+1)}_{n+1,j}(x) = \varphi^{(n+2)}_{i,j}(x) + \varphi^{(n+2)}_{i,n+2} * \sum_{l=0}^{\infty} \big[\varphi^{(n+2)}_{n+2,n+2}(x)\big]^{l*} * \varphi^{(n+2)}_{n+2,j}(x) + \varphi^{(n+1)}_{i,n+1} * \sum_{l=0}^{\infty} \big[\varphi^{(n+1)}_{n+1,n+1}(x)\big]^{l*} * \varphi^{(n+1)}_{n+1,j}(x) = \cdots = P_{i,j}(x) + \sum_{k=n+1}^{\infty} \varphi^{(k)}_{i,k} * \sum_{l=0}^{\infty} \big[\varphi^{(k)}_{k,k}(x)\big]^{l*} * \varphi^{(k)}_{k,j}(x),$$

where $P_{i,j}(x) = \varphi^{(\infty)}_{i,j}(x)$. This completes the proof.

The following lemma provides expressions for the R- and G-measures.

Lemma 6.5 (1) For $0 \le i < j$,

$$R_{i,j}(x) = \varphi^{(j)}_{i,j} * \sum_{l=0}^{\infty} \big[\varphi^{(j)}_{j,j}(x)\big]^{l*}.$$

(2) For $0 \le j < i$,

$$G_{i,j}(x) = \sum_{l=0}^{\infty} \big[\varphi^{(i)}_{i,i}(x)\big]^{l*} * \varphi^{(i)}_{i,j}(x).$$
Proof Applying Corollary 6.2 to the censored process $P^{[\le j]}(x)$ gives

$$R^{[\le j]}_{i,j}(x) = \varphi^{(j)}_{i,j} * \sum_{l=0}^{\infty} \big[\varphi^{(j)}_{j,j}(x)\big]^{l*}, \quad 0 \le i < j,$$

and

$$G^{[\le i]}_{i,j}(x) = \sum_{l=0}^{\infty} \big[\varphi^{(i)}_{i,i}(x)\big]^{l*} * \varphi^{(i)}_{i,j}(x), \quad 0 \le j < i.$$

The rest of the proof follows from the censoring invariance for the R- and G-measures proved in Theorem 6.3.

Let

$$\Psi_n(x) = \varphi^{(n)}_{n,n}(x), \quad n \ge 0.$$
The following theorem provides an equivalent form, to the equations in Lemma 6.4, of the Wiener-Hopf equations stated in terms of the R- and G-measures.

Theorem 6.4 (1) For $0 \le i < j$,

$$R_{i,j} * [I - \Psi_j](x) = P_{i,j}(x) + \sum_{k=j+1}^{\infty} R_{i,k} * [I - \Psi_k] * G_{k,j}(x).$$

(2) For $0 \le j < i$,

$$[I - \Psi_i] * G_{i,j}(x) = P_{i,j}(x) + \sum_{k=i+1}^{\infty} R_{i,k} * [I - \Psi_k] * G_{k,j}(x).$$

(3) For $n \ge 0$,

$$\Psi_n(x) = P_{n,n}(x) + \sum_{k=n+1}^{\infty} R_{n,k} * [I - \Psi_k] * G_{k,n}(x).$$

Proof We only prove (1); (2) and (3) can be proved similarly. It follows from (1) in Lemma 6.5 that

$$R_{i,j} * [I - \Psi_j](x) = \varphi^{(j)}_{i,j}(x).$$

Using Lemma 6.4 and Theorem 6.3 leads to

$$\varphi^{(j)}_{i,j}(x) = P_{i,j}(x) + \sum_{k=j+1}^{\infty} \varphi^{(k)}_{i,k} * \sum_{l=0}^{\infty} \big[\varphi^{(k)}_{k,k}(x)\big]^{l*} * \varphi^{(k)}_{k,j}(x) = P_{i,j}(x) + \sum_{k=j+1}^{\infty} R^{[\le k]}_{i,k} * [I - \Psi_k] * G^{[\le k]}_{k,j}(x) = P_{i,j}(x) + \sum_{k=j+1}^{\infty} R_{i,k} * [I - \Psi_k] * G_{k,j}(x).$$
This completes the proof.

Based on the Wiener-Hopf equations, the following theorem gives the UL-type RG-factorization for the transition probability mass matrix.

Theorem 6.5 For the Markov renewal process $P(x)$ given in Eq. (6.1),

$$I - P(x) = [I - R_U(x)] * [I - \Psi_D(x)] * [I - G_L(x)], \quad x \ge 0, \tag{6.22}$$

or

$$I - \widetilde{P}(s) = \big[I - \widetilde{R}_U(s)\big]\big[I - \widetilde{\Psi}_D(s)\big]\big[I - \widetilde{G}_L(s)\big], \quad \mathrm{Re}(s) \ge 0,$$

where

$$R_U(x) = \begin{pmatrix} 0 & R_{0,1}(x) & R_{0,2}(x) & R_{0,3}(x) & \cdots \\ & 0 & R_{1,2}(x) & R_{1,3}(x) & \cdots \\ & & 0 & R_{2,3}(x) & \cdots \\ & & & 0 & \cdots \\ & & & & \ddots \end{pmatrix},$$

$$\Psi_D(x) = \mathrm{diag}\big(\Psi_0(x), \Psi_1(x), \Psi_2(x), \Psi_3(x), \ldots\big)$$

and

$$G_L(x) = \begin{pmatrix} 0 & & & & \\ G_{1,0}(x) & 0 & & & \\ G_{2,0}(x) & G_{2,1}(x) & 0 & & \\ G_{3,0}(x) & G_{3,1}(x) & G_{3,2}(x) & 0 & \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$

Proof We only prove Eq. (6.22) for the entries in the first block-row and first block-column. The rest can be proved similarly.

The entry $(0, 0)$ on the right-hand side is

$$I - \Psi_0(x) + \sum_{k=1}^{\infty} R_{0,k} * [I - \Psi_k] * G_{k,0}(x),$$

which is equal to $I - P_{0,0}(x)$ according to (3) of Theorem 6.4. The entry $(0, l)$ with $l \ge 1$ on the right-hand side is

$$-R_{0,l} * [I - \Psi_l](x) + \sum_{k=l+1}^{\infty} R_{0,k} * [I - \Psi_k] * G_{k,l}(x),$$

which is equal to $-P_{0,l}(x)$ according to (1) of Theorem 6.4. Finally, to see that the entry $(l, 0)$ with $l \ge 1$ on the right-hand side is equal to the corresponding entry on the left-hand side, it follows from (2) of Theorem 6.4 that
$$-[I - \Psi_l] * G_{l,0}(x) + \sum_{k=l+1}^{\infty} R_{l,k} * [I - \Psi_k] * G_{k,0}(x) = -P_{l,0}(x).$$

This completes the proof.

In what follows we list the main results for two important examples: level-dependent Markov renewal processes of M/G/1 type, and level-dependent Markov renewal processes of GI/M/1 type.
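At $x = \infty$ the factorization of Theorem 6.5 applies to the embedded chain, and for finitely many levels everything is computable by censoring out the top level repeatedly. The following sketch (our own illustration with scalar blocks, not the book's code) builds $\Psi_D$, $R_U$ and $G_L$ this way and verifies $I - P = (I - R_U)(I - \Psi_D)(I - G_L)$ numerically.

```python
import numpy as np

def ul_rg_factorization(P):
    """UL-type RG-factorization I - P = (I - RU)(I - PsiD)(I - GL) for a
    finite irreducible stochastic matrix P with scalar levels 0..N-1."""
    N = P.shape[0]
    phi = [None] * N
    phi[N - 1] = P.copy()                  # phi[n] is the censored matrix P^{[<=n]}
    for n in range(N - 2, -1, -1):         # censor out the top level repeatedly
        A = phi[n + 1]
        phi[n] = A[:n + 1, :n + 1] + np.outer(
            A[:n + 1, n + 1], A[n + 1, :n + 1]) / (1 - A[n + 1, n + 1])
    Psi = np.array([phi[n][n, n] for n in range(N)])
    RU = np.zeros((N, N))
    GL = np.zeros((N, N))
    for j in range(N):
        for i in range(j):
            RU[i, j] = phi[j][i, j] / (1 - Psi[j])   # R_{i,j}
            GL[j, i] = phi[j][j, i] / (1 - Psi[j])   # G_{j,i}
    return RU, np.diag(Psi), GL

rng = np.random.default_rng(1)
P = rng.random((4, 4))
P /= P.sum(axis=1, keepdims=True)
RU, PsiD, GL = ul_rg_factorization(P)
I = np.eye(4)
assert np.allclose(I - P, (I - RU) @ (I - PsiD) @ (I - GL))
```

Here $R_{i,j} = \varphi^{(j)}_{i,j}(I-\Psi_j)^{-1}$ and $G_{i,j} = (I-\Psi_i)^{-1}\varphi^{(i)}_{i,j}$ are exactly Lemma 6.5 evaluated at $x = \infty$.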
6.2.1 Level-Dependent Markov Renewal Processes of M/G/1 Type

Let

$$Q_k(x) = \begin{pmatrix} P_{k,k}(x) & P_{k,k+1}(x) & P_{k,k+2}(x) & \cdots \\ P_{k+1,k}(x) & P_{k+1,k+1}(x) & P_{k+1,k+2}(x) & \cdots \\ & P_{k+2,k+1}(x) & P_{k+2,k+2}(x) & \cdots \\ & & \ddots & \ddots \end{pmatrix}, \quad k \ge 1.$$

We denote by $\big(\widehat{Q}^{(k)}_{1,1}(x)^{\mathrm T}, \widehat{Q}^{(k)}_{2,1}(x)^{\mathrm T}, \ldots\big)^{\mathrm T}$ the first block-column of the matrix $\widehat{Q}_k(x) = \sum_{l=0}^{\infty} [Q_k(x)]^{l*}$. Thus, the R- and G-measures are given by

$$R_{i,j}(x) = \sum_{l=0}^{\infty} P_{i,j+l} * \widehat{Q}^{(j)}_{l+1,1}(x), \quad 0 \le i < j, \tag{6.23}$$

$$G_k(x) \stackrel{\mathrm{def}}{=} G_{k,k-1}(x) = \widehat{Q}^{(k)}_{1,1} * P_{k,k-1}(x), \quad k \ge 1, \tag{6.24}$$

$G_{i,j}(x) = 0$ for $0 \le j \le i-2$, and

$$\Psi_k(x) = P_{k,k}(x) + \sum_{i=1}^{\infty} P_{k,k+i} * G_{k+i} * G_{k+i-1} * \cdots * G_{k+1}(x), \quad k \ge 0, \tag{6.25}$$

and

$$\widehat{Q}^{(k)}_{1,1} * [I - \Psi_k](x) = [I - \Psi_k] * \widehat{Q}^{(k)}_{1,1}(x) = I.$$

Therefore, we can obtain:

(1) For $0 \le i < j$,

$$R_{i,j}(x) = \Big[P_{i,j}(x) + \sum_{l=1}^{\infty} P_{i,j+l} * G_{j+l} * G_{j+l-1} * \cdots * G_{j+1}(x)\Big] * \widehat{\Psi}_j(x),$$

where $\widehat{\Psi}_j(x) = \sum_{l=0}^{\infty} \Psi_j^{l*}(x) = \widehat{Q}^{(j)}_{1,1}(x)$.
(2) The matrix sequence $\{G_i(x)\}$ is the minimal nonnegative solution to the system of matrix equations

$$G_i(x) = P_{i,i-1}(x) + \sum_{l=0}^{\infty} P_{i,i+l} * G_{i+l} * G_{i+l-1} * \cdots * G_i(x), \quad i \ge 1.$$

(3) In the RG-factorization, we have

$$G_L(x) = \begin{pmatrix} 0 & & & & \\ G_1(x) & 0 & & & \\ & G_2(x) & 0 & & \\ & & G_3(x) & 0 & \\ & & & \ddots & \ddots \end{pmatrix}.$$
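In the level-independent case (embedded chain, blocks $A_0, A_1, A_2, \ldots$ with $A_0$ governing the one-step down move), the system in (2) collapses to the single equation $G = A_0 + A_1 G + A_2 G^2 + \cdots$, whose minimal nonnegative solution can be found by successive substitution. A minimal sketch (ours, not the book's algorithm):

```python
import numpy as np

def mg1_G(A, tol=1e-12, max_iter=10_000):
    """Minimal nonnegative solution of G = A[0] + A[1] G + A[2] G^2 + ...
    for a level-independent M/G/1-type chain (successive substitution)."""
    m = A[0].shape[0]
    G = np.zeros((m, m))
    for _ in range(max_iter):
        Gk = np.eye(m)
        G_new = np.zeros((m, m))
        for Al in A:          # accumulate A[0] + A[1] G + A[2] G^2 + ...
            G_new += Al @ Gk
            Gk = Gk @ G
        if np.abs(G_new - G).max() < tol:
            return G_new
        G = G_new
    return G

# Toy positive-recurrent example with scalar blocks:
# g = 0.5 + 0.3 g + 0.2 g^2 has roots 1 and 2.5; the minimal one is g = 1.
A = [np.array([[0.5]]), np.array([[0.3]]), np.array([[0.2]])]
G = mg1_G(A)
```

Starting the iteration from the zero matrix guarantees monotone convergence to the minimal solution.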
6.2.2 Level-Dependent Markov Renewal Processes of GI/M/1 Type

We write

$$W_k(x) = \begin{pmatrix} P_{k,k}(x) & P_{k,k+1}(x) & & \\ P_{k+1,k}(x) & P_{k+1,k+1}(x) & P_{k+1,k+2}(x) & \\ P_{k+2,k}(x) & P_{k+2,k+1}(x) & P_{k+2,k+2}(x) & \ddots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}, \quad k \ge 1.$$

We denote by $\widehat{W}^{(k)}_{1,1}(x), \widehat{W}^{(k)}_{1,2}(x), \ldots$ the first block-row of the matrix $\widehat{W}_k(x) = \sum_{l=0}^{\infty} [W_k(x)]^{l*}$. Based on Corollary 2.2, we have

$$R_k(x) \stackrel{\mathrm{def}}{=} R_{k,k+1}(x) = P_{k,k+1} * \widehat{W}^{(k+1)}_{1,1}(x), \quad k \ge 0,$$

$R_{i,j}(x) = 0$ for $j \ge i+2$,

$$G_{i,j}(x) = \sum_{l=1}^{\infty} \widehat{W}^{(i)}_{1,l} * P_{i+l-1,j}(x), \quad 0 \le j < i,$$

and

$$\Psi_k(x) = P_{k,k}(x) + \sum_{i=1}^{\infty} R_k * R_{k+1} * \cdots * R_{k+i-1} * P_{k+i,k}(x), \quad k \ge 0.$$

The matrix sequence $\{R_k(x)\}$ is the minimal nonnegative solution to the system of matrix equations

$$R_k(x) = P_{k,k+1}(x) + \sum_{i=1}^{\infty} R_k * R_{k+1} * \cdots * R_{k+i-1} * P_{k+i,k+1}(x), \quad k \ge 0.$$

In the RG-factorization, we have

$$R_U(x) = \begin{pmatrix} 0 & R_0(x) & & & \\ & 0 & R_1(x) & & \\ & & 0 & R_2(x) & \\ & & & 0 & \ddots \\ & & & & \ddots \end{pmatrix}.$$
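In the level-independent case the system above collapses to the single equation $R = A_0 + R A_1 + R^2 A_2 + \cdots$ for the embedded GI/M/1-type chain, again solvable by successive substitution. A minimal sketch (ours, not the book's algorithm):

```python
import numpy as np

def gim1_R(A, tol=1e-12, max_iter=10_000):
    """Minimal nonnegative solution of R = A[0] + R A[1] + R^2 A[2] + ...
    for a level-independent GI/M/1-type chain (successive substitution)."""
    m = A[0].shape[0]
    R = np.zeros((m, m))
    for _ in range(max_iter):
        Rk = np.eye(m)
        R_new = np.zeros((m, m))
        for Al in A:          # accumulate A[0] + R A[1] + R^2 A[2] + ...
            R_new += Rk @ Al
            Rk = Rk @ R
        if np.abs(R_new - R).max() < tol:
            return R_new
        R = R_new
    return R

# Toy stable example with scalar blocks:
# r = 0.2 + 0.3 r + 0.5 r^2 has roots 0.4 and 1; the minimal one is r = 0.4.
A = [np.array([[0.2]]), np.array([[0.3]]), np.array([[0.5]])]
R = gim1_R(A)
```

As with the G-matrix, starting from the zero matrix yields monotone convergence to the minimal nonnegative solution.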
6.2.3 Markov Renewal Equations

We use the R-, U- and G-measures to express the block-structured Markov renewal matrix. Note that the Markov renewal equation plays an important role in the study of Markov renewal processes. Consider a block-structured Markov renewal equation

$$U(t)=H(t)+\int_0^t P({\rm d}x)\,U(t-x),\quad t\ge 0, \tag{6.26}$$

where $P(x)$ is a block-structured transition probability mass matrix and $H(x)$ is a given matrix. We partition $U(x)$ and $H(x)$ according to the levels, and denote their block-entries by $U_{i,j}(x)$ and $H_{i,j}(x)$ for $i,j\ge 0$, respectively. To solve Eq. (6.26), we need to compute the Markov renewal matrix

$$M(t)=\sum_{n=0}^{\infty}[P(t)]^{n*}. \tag{6.27}$$

It is obvious that

$$\widehat M(s)=\sum_{n=0}^{\infty}[\widehat P(s)]^{n}=[I-\widehat P(s)]^{-1}, \tag{6.28}$$

which is the minimal nonnegative inverse of $I-\widehat P(s)$ for $s\ge 0$. It follows from Eq. (6.26) and Eq. (6.28) that

$$\widehat U(s)=[I-\widehat P(s)]^{-1}\widehat H(s)=\widehat M(s)\widehat H(s). \tag{6.29}$$

It is clear from Eq. (6.29) that the computation of the inverse of the matrix $I-\widehat P(s)$ is crucial for expressing $\widehat U(s)$. For ${\rm Re}(s)\ge 0$, using the UL-type RG-factorization we obtain

$$\widehat M(s)=[I-\widehat P(s)]^{-1}=[I-\widehat G_L(s)]^{-1}[I-\widehat\Psi_D(s)]^{-1}[I-\widehat R_U(s)]^{-1}$$

or

$$M(x)=\sum_{n=0}^{\infty}[G_L(x)]^{n*}*\sum_{n=0}^{\infty}[\Psi_D(x)]^{n*}*\sum_{n=0}^{\infty}[R_U(x)]^{n*}.$$
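For a finite state space, Eq. (6.28) and Eq. (6.29) can be evaluated directly. A minimal sketch with assumed data: a 3-state example with common exponential sojourn times of rate $\lambda$, so that $\widehat P(s)=P\cdot\lambda/(\lambda+s)$ entrywise; the matrix $P$, the rate and the vector $\widehat H(s)$ are illustrative.

```python
import numpy as np

lam, s = 1.5, 0.4
P = np.array([[0.2, 0.8, 0.0],
              [0.3, 0.0, 0.7],
              [0.5, 0.5, 0.0]])            # embedded transition matrix
Phat = P * lam / (lam + s)                 # LS transform of P(x) at s
Mhat = np.linalg.inv(np.eye(3) - Phat)     # Markov renewal matrix, Eq. (6.28)
Hhat = np.array([[1.0], [0.5], [0.25]])    # transform of a given H(x)
Uhat = Mhat @ Hhat                         # solution of Eq. (6.26) via Eq. (6.29)
print(Uhat)
```

The computed $\widehat U(s)$ indeed satisfies the transformed renewal equation $\widehat U=\widehat H+\widehat P\widehat U$, and $\widehat M(s)=\sum_n[\widehat P(s)]^n$ is entrywise nonnegative.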
6.3 The LU-Type RG-Factorization In this section, we define the LU-type R-, U- and G-measures by means of another censored process, and derive the LU-type RG-factorization for the transition probability mass matrix. Let P[ın ] ( x) Q( x) V Tl U ( x),
(6.30)
where Tl ( x)
f
¦T
n*
( x).
n 0
The block-entry expression of the matrix P[ın ] ( x) is given by § Kn( n, n) ( x) Kn( n,n)1 ( x) Kn( n,n) 2 ( x) ¨ (n) K ( x) Kn( n)1, n 1 ( x) Kn( n)1, n 2 ( x) P[ın ] ( x) ¨ n( n)1, n ¨ Kn 2, n ( x) Kn( n)2, n 1 ( x) Kn( n)2, n 2 ( x) ¨¨ # # # ©
!· ¸ !¸ . !¸ ¸ % ¸¹
Lemma 6.6 For i, j ı n 1 , we have n
(k )
Ki(,nj1) ( x) Pi , j ( x) ¦ Ki(,kk) K k , k Kk( k, j) ( x).
(6.31)
k 0
Proof Since § Kn( n)1, n 1 ( x) Kn( n)1, n 2 ( x) Kn( n)1, n 3 ( x) ¨ ( n) K ( x) Kn( n)2, n 2 ( x) Kn( n)2, n 3 ( x) [ı( n 1)] P ( x) ¨ n( n)2, n 1 ¨ K ( x) K ( n) ( x) K ( n) ( x) n 3, n 2 n 3, n 3 ¨¨ n 3, n 1 # # # ©
!· ¸ !¸ !¸ ¸ %¸¹
§ Kn( n)1, n · ¨ ( n) ¸ K (n ) ¨ n 2, n ¸ K n ,n Kn( n,n)1 ( x) Kn( nn) 2 ( x) Kn( n,n)3 ( x) ! , ¨K(n) ¸ ¨¨ n 3, n ¸¸ © # ¹ 305
Constructive Computation in Stochastic Models with Applications
we obtain (n )
Ki(,nj1) ( x) Ki(,nj) ( x) Ki(,nn) K n ,n Kn( n, j) ( x) ( n 1)
Ki(,nj1) ( x) Ki(,nn1)1 K n 1, n 1 Kn( n1,1)j ( x) (n)
Ki(,nn) K n, n Kn( n, j) ( x)
! n
(k )
(k ) (k ) Ki(0) , j ( x ) ¦ Ki , k K k , k Kk , j ( x ). k 0
Note that Ki(0) Pi , j ( x) for all i , j ı 0 . This completes the proof. , j ( x) Using the censoring invariance, we define the U-measure as
) n ( x) Kn( n,n) ( x),
n ı 0,
(6.32)
the R-measure as ( j)
Ri , j ( x) Ki(, jj) K j , j ( x),
0 İ j i,
(i ) Gi , j ( x ) K i ,i Ki(,ij) ( x),
0 İ i j.
l j ( x ), Ri j ( x ) Ki(, jj) )
0 İ j i,
(6.33)
l i Ki(,ij) ( x), Gi , j ( x) )
0 İ i j.
(6.34)
and the G-measure as
It is obvious that
and
The following theorem provides the important Wiener-Hopf equations, which are satisfied by the R-, U- and G-measures. Theorem 6.6 The R-, U- and G-measures defined above satisfy the following Wiener-Hopf equations, j 1
Ri , j ( I )i ( x))
Pi , j ( x) ¦ Ri k ( I ) k ) Gk , j ( x),
0 İ j i,
(6.35)
0 İ i j,
(6.36)
k 0 i 1
( I )i ) Gi , j ( x)
Pi , j ( x) ¦ Ri k ( I ) k ) Gk , j ( x), k 0
and 306
6
Block-Structured Markov Renewal Processes
n 1
) n ( x) Pn, n ( x) ¦ Rn, k ( I U k ) Gk , n ( x),
n ı 0.
(6.37)
k 0
Proof We only prove Eq. (6.35), while Eq. (6.36) and Eq. (6.37) can be proved similarly. It follows from Eq. (6.33) that
Ri , j ( I ) j ( x)) Ki(, jj) ( x).
(6.38)
By Lemma 6.6, we have j 1
(k )
Ki(, jj) ( x) Pi , j ( x) ¦ Ki(,kk) K k , k Kk( k, j) ( x).
(6.39)
k 0
From Eq. (6.33), Eq. (6.34) and Eq. (6.39) we obtain j 1
Ki(, jj) ( x) Pi , j ( x) ¦ Ri ,k ( I ) k ) Gk , j ( x), k 0
which, together with Eq. (6.38), leads to the stated result. By the Wiener-Hopf Eq. (6.35), Eq. (6.36) and Eq. (6.37), the following theorem constructs an LU-type RG-factorization. Theorem 6.7 The Markov renewal process P( x) defined in Eq. (6.1) can be factorized as follows, I P ( x)
( I RL ( x)) ( I ) D ( x)) ( I GU ( x)),
(6.40)
where § 0 · ¨ ¸ 0 ¨ R1,0 ( x) ¸ ¸, RL ( x) ¨ R2,0 ( x) R2,1 ( x) 0 ¨ ¸ ¨ R3,0 ( x) R3,1 ( x) R3,2 ( x) 0 ¸ ¨ # ¸ # # # % © ¹ ) D ( x) diag() 0 ( x), )1 ( x), ) 2 ( x), )3 ( x), ...)
and
GU x
§ 0 G0,1 ( x) G0,2 ( x) G0,3 ( x) ¨ 0 G1,2 ( x) G1,3 ( x) ¨ ¨ 0 G2,3 ( x ) ¨ 0 ¨ ¨ ©
!· ¸ !¸ !¸ . ¸ !¸ %¸¹ 307
Constructive Computation in Stochastic Models with Applications
Proof We prove Eq. (6.40) for the block-entries of the first two block-rows. The rest can be proved similarly. For the first block-row, the entry (0, 0) is
I ) 0 ( x)
(0) I K0,0 ( x)
I P0,0 ( x),
and the entry (0, l ) for l ı 1 is, from Eq. (6.34) ( I ) 0 ) G0,l ( x)
(0) º (0) K0,(0)l ( x) ª«¬ I K0,0 » K 0,0 ¼
K0,(0)l ( x )
P0,l ( x).
For the second block-row, the entry (1, 0) is R1,0 ( I ) 0 ( x ))
(0)
(0) (0) K1,0
K 0,0 ª¬ I K0,0 ( x) º¼ (0) K1,0 ( x)
P1,0 ( x)
by Eq. (6.33). By Lemma 6.6, the entry (1, 1) is
R1,0 ( I ) 0 ) G0,1 ( x) I )1 ( x) (0)
(0)
(0) (0) º (0) K0,1 K1,0
K 0,0 ª«¬ I K0,0 x I K1,1(1) ( x) » K 0,0 ¼ (0)
(0) (0) (1) K1,0
K 0,0 K0,1 ( x) I K1,1 ( x)
I P1,1 ( x),
and the entry 1, k for k ı 2 is R1,0 ( I ) 0 ) G0, k ( x) ( I )1 ) G1, k ( x) (0)
(0)
(1)
(0) (0) º (1) º K0,(0)k ( x) ª« I K1,1 K1,(1)k ( x) K1,0
K 0,0 ª«¬ I K0,0 » K » K 0,0 1,1 ¼ ¬ ¼ (0)
(0) K1,0
K 0,0 K0,(0)k ( x) K1,(1)k ( x)
P1, k ( x).
This completes the proof.
6.4 Finite Levels In this section, as an important example we considers an irreducible Markov renewal process with finite levels, and derive the UL- and LU-types of RG-factorizations. We consider an irreducible block-structured Markov renewal process with finite levels whose transition probability mass matrix is given by 308
6
Block-Structured Markov Renewal Processes
§ P0,0 ( x ) P0,1 ( x) ! P0, M ( x) · ¨ ¸ P1,0 ( x) P1,1 ( x) ! P1, M ( x) ¸ P( x) ¨ , ¨ # ¸ # # ¨¨ ¸¸ © PM ,0 ( x) PM ,1 ( x) ! PM , M ( x) ¹
where Pi ,i ( x) is a matrix of size mi u mi for all 0 İ i İ M , and the sizes of the other blocks are determined accordingly.
6.4.1 The UL-Type RG-Factorization For 0 İ i, j İ k and 0 İ k İ M , it is clear from Section 6.2 that M
Pi[,İj k ] ( x)
Pi , j ( x)
¦
[İn ]
l n, n Pn[,İj n ] ( x). Pi[, İn n ] P
n k 1
Note that Pi[, İj M ] ( x)
Pi , j ( x) and Pi[,İj 0] ( x )
Pi[0] , j ( x) .
Let
< n ( x) Pn[,İn n ] ( x),
0İnİ M,
l j ( x), Pi[,İj j ] <
0İ j iİ M,
l i Pi[,İj i ] ( x), Gi , j ( x) <
0İ i j İ M.
Ri , j ( x) and
Then the UL-type RG-factorization is given by I P ( x)
( I RU ) ( I < D ) ( I GL ( x)),
where R0, M ( x) · § 0 R0,1 ( x) R0,2 ( x) R0,3 ( x) ! R0, M 1 ( x) ¨ ¸ 0 R1,2 ( x) R1,3 ( x) ! R1, M 1 ( x) R1, M ( x) ¸ ¨ ¨ 0 R2,3 ( x) ! R2, M 1 ( x) R2, M ( x) ¸ ¨ ¸ RU ( x) ¨ % % # # ¸, ¨ 0 RM 2, M 1 ( x) RM 2, M ( x) ¸ ¨ ¸ 0 RM 1, M ( x) ¸ ¨ ¨ ¸ 0 © ¹ < D ( x) diag(< 0 ( x),< 1 ( x),< 2 ( x),< 3 ( x), ! ,< M 1 ( x),< M ( x)) and 309
Constructive Computation in Stochastic Models with Applications
§ 0 · ¨ ¸ G x ( ) 0 ¨ 1,0 ¸ ¨ G2,0 ( x) G2,1 ( x) ¸ 0 ¨ ¸ 0 GL ( x) ¨ G3,0 ( x) G3,1 ( x) G3,2 ( x) ¸. ¨ G ( x) G ( x) G ( x) G ( x ) 0 ¸ 4,1 4,2 4,3 ¨ 4,0 ¸ # # # % % ¨ # ¸ ¨ G ( x) G ( x) G ( x) G ( x) ! G 0 ¸¹ M ,1 M ,2 M ,3 M , M 1 ( x ) © M ,0
6.4.2 The LU-Type RG-Factorization For k İ i, j İ M and 0 İ k İ M , it is clear from Section 6.3 that k
Pi[,ıj k 1] ( x)
[ın ]
n] l n, n Pn[,ıj n ] ( x). Pi , j ( x) ¦ Pi>ı
P ,n n 0
[ıM ] i, j
Note that P
( x)
[M ] i, j
P
[ı0] i, j
( x) and P
( x)
Pi , j ( x) .
Let
) n ( x) Pn[,ın n ] ( x),
0İnİ M,
l j ( x), Pi[,ıj j ] )
0İi j İ M,
l i Pi[,ıj i ] ( x), Gi , j ( x) )
0İ j i İ M.
Ri , j ( x) and
Then the LU-type RG-factorization is given by I P ( x)
( I RL ) ( I ) D ) ( I GU ( x)),
where § 0 · ¨ ¸ R x ( ) 0 ¨ 1,0 ¸ ¨ R2,0 ( x) R2,1 ( x) ¸ 0 ¨ ¸ 0 RL ( x) ¨ R3,0 ( x) R3,1 ( x) R3,2 ( x) ¸, ¨ R ( x ) R ( x) R ( x ) R ( x ) 0 ¸ 4,1 4,2 4,3 ¨ 4,0 ¸ # # # % % ¨ # ¸ ¨ R ( x ) R ( x) R ( x ) R ( x ) ! R 0 ¸¹ M ,1 M ,2 M ,3 M , M 1 ( x ) © M ,0 ) D ( x) diag() 0 ( x), )1 ( x), ) 2 ( x), )3 ( x), ! , ) M 1 ( x), ) M ( x)) 310
6
Block-Structured Markov Renewal Processes
and § 0 G0,1 ( x) G0,2 ( x) G0,3 ( x) ¨ 0 G1,2 ( x) G1,3 ( x) ¨ ¨ 0 G2,3 ( x) ¨ GU ( x) ¨ % ¨ ¨ ¨ ¨ ©
! G0, M 1 ( x) G0, M ( x) · ¸ G1, M ( x ) ¸ ! G1, M 1 ( x) G2, M ( x) ¸ ! G2, M 1 ( x) ¸ % # # ¸. 0 GM 2, M 1 ( x) GM 2, M ( x) ¸ ¸ 0 GM 1, M ( x) ¸ ¸ 0 ¹
6.5 Markov Renewal Processes of GI/G/1 Type In this section, we simplify the UL-type expressions for the R-, U- and G-measures and the RG-factorization for Markov renewal processes of GI G 1 type. Furthermore, we derive the RG-factorization for the repeated blocks and four Wiener-Hopf inequalities for the boundary blocks. Consider a Markov renewal process of GI G 1 type whose transition probability mass matrix is given by § D0 ( x) ¨ ¨ D1 ( x) P ( x) ¨ D2 ( x) ¨ ¨ D3 ( x) ¨ # ©
D1 ( x) A0 ( x) A1 ( x) A2 ( x) #
D2 ( x) D3 ( x) A1 ( x) A2 ( x) A0 ( x) A1 ( x) A1 ( x) A0 ( x) # #
!· ¸ !¸ !¸ , ¸ !¸ ¸ ¹
(6.41)
where the sizes of the matrices D0 ( x) , Di ( x) , D i ( x) for i ı 1 and Aj ( x) for f j f are m0 u m0 , m0 u m , m u m0 and m u m , respectively. Comparing Eq. (6.41) with Eq. (6.1), it is easy to yield P0, j ( x) D j ( x) for j ı 0 , Pi ,0 ( x) D i ( x) for i ı 1 and Pi , j ( x) Aj i ( x) for i , j ı 1 . It is clear l (,1) ( x) Q l (,1)( x) and Q l (1,)( x) that Qn ( x) Q ( x ) for all n ı 1 . Thus, we write Q j (1,) l ( x) for all i , j ı 1 . Therefore, Q i R0, j ( x) Ri , j ( x )
l (,1)( x ), ( D j ( x ), D j 1 ( x ), D j 2 ( x), !) Q
l ( A j i ( x ), Aj i 1 ( x), Aj i 2 ( x), !) Q
(,1)
j ı 1,
( x), 1 İ i j.
It is obvious that the matrices Ri , j ( x) for 1 İ i j only depend on the difference j i . We write Ri , j ( x) as R j i ( x) for all 1 İ i j . Therefore, for k ı 1 Rk ( x )
l ( Ak ( x ), Ak 1 ( x ), Ak 2 ( x ), !) Q
( ,1)
( x ). 311
Constructive Computation in Stochastic Models with Applications
Similarly, for i ı 1 ,
l (1,)( x) DTi ( x), DT(i 1) ( x), DT(i 2) ( x), ! Gi ,0 ( x) Q
T
and for 1 İ j i , T
l (1,)( x) AT(i j ) ( x), AT(i j 1) ( x), AT(i j 2) ( x), ! . Gi , j ( x) Q The matrices Gi , j ( x) for 1 İ j i only depend on the difference i j . We write Gi , j ( x) as Gi j ( x) for all 1 İ j i . Therefore, for k ı 1 , T
l (1,) ( x) ATk ( x ), AT( k 1) ( x), AT( k 2) ( x ), ! . Gk ( x ) Q For the Markov renewal process of GI G 1 type, the following lemma is a consequence of the repeating property of the transition probability mass matrix, which also leads to the censoring invariance. Lemma 6.7 For n ı 1 , i , j 1 , 2 , 3 , ! , n ,
Pn[İin, n] j ( x)
Pn[İ1(ni, n1)]1 j ( x)
Pn[İ2(ni, n2)] 2 j ( x) ! .
Proof For n ı 1 , i , j 1 , 2 , 3 , ! , n , it is easy to see that Pn[İin, n] j ( x)
l ( x) Ai j ( x) Ai 1 ( x), Ai 2 ( x), ! Q T
AT( j 1) ( x), AT( j 2) ( x), ! ,
which is independent of n ı 1 . Thus
Pn[İinn] j ( x)
Pn[İ1(ni n1)]1 j ( x)
Pn[İ2(ni,n2)] 2 j ( x) ! .
This completes the proof. Based on Lemma 6.7, we can define for 1 İ i, j İ n ,
)0 ( x) Pn[,İn n ] ( x), )i ( x) Pn[İin, n] ( x), ) j ( x) Pn[,İn n ]j ( x). The following theorem explicitly expresses the R- and G-measures in terms of the matrix sequence {)i ( x ) : for f i f}. Theorem 6.8 (1) For i ı 0 , f
Ri ( x) )i ( x) ¦ ) 0l * ( x). l 0
312
(6.42)
6
Block-Structured Markov Renewal Processes
(2) For j ! 1 , f
¦)
G j ( x)
l* 0
( x) ) j ( x).
(6.43)
l 0
Proof We only prove Eq. (6.42) while Eq. (6.43) can be proved similarly. It follows from Lemma 6.7 and Eq. (6.42) that f
Ri ( x)
Ri[İn ] ( x)
Pn[İin, n] ( x) ¦ ª¬ Pn[,İn n ] ( x) º¼
Rn[İin, ]n ( x)
l*
l 0
f
)i ( x) ¦ )0l* ( x). l 0
This completes the proof. The following theorem provides Wiener-Hopf equations for the repeating matrix sequence and also for the boundary matrix sequence. Theorem 6.9 (1) For i ı 1 , f
Ri ( x) [ I )0 ( x)]
Ai ( x) ¦ Ri k ( x) [ I ) 0 ( x)] Gk ( x),
(6.44)
k 1
for j ı 1 , f
[ I ) 0 ( x)] G j ( x)
A j ( x) ¦ Rk ( x) [ I ) 0 ( x)] G j k ( x)
(6.45)
k 1
and f
) 0 ( x)
A0 ( x) ¦ Rk ( x) [ I ) 0 ( x)] Gk ( x).
(6.46)
k 1
(2) For i ı 1 , f
R0,i ( x) [ I ) 0 ( x)]
Di ( x) ¦ R0,i k ( x) [ I ) 0 ( x)] Gk ( x),
(6.47)
k 1
for j ı 1 , f
[ I ) 0 ( x)] G j ,0 ( x)
D j ( x) ¦ Rk ( x) [ I ) 0 ( x)] G j k ,0 ( x)
(6.48)
k 1
and f
< 0 ( x) D0 ( x) ¦ R0, k ( x) [ I )0 ( x)] Gk ,0 ( x).
(6.49)
k 1
Proof We only prove Eq. (6.44) while Eq. (6.45) to Eq. (6.49) can be proved similarly. 313
Constructive Computation in Stochastic Models with Applications
When n is big enough, it follows from Theorem 6.8 that f
Ri ( x) [ I ) 0 ( x)] )i ( x) ¦ ) 0l * ( x) [ I ) 0 ( x)] )i ( x)
Pn[İin, n] ( x),
l 0
and Pn[İin, n] ( x)
Pn[İi(, nn 1)] ( x) Ri[İ1( n 1)] ( x) Pn[İ1,( nn 1)] ( x) [İ( n 1)] º ( x) Pn[İi(, nn 1)] ( x) Ri[İ1( n 1)] ( x) ª¬ I Pn[İ1,( nn 1)] 1 ( x ) ¼ G1
Pn[İi(, nn 1)] ( x) Ri 1 ( x) [ I ) 0 ( x)] G1 ( x) N
! Pn[İi(, nn N )] ( x) ¦ Ri k ( x) [ I )0 ( x)] Gk ( x) k 1
f
Ai ( x) ¦ Ri k ( x) [ I )0 ( x)] Gk ( x). k 1
This completes the proof. For an irreducible Markov renewal process of GI G 1 type, the UL-type RG-factorization given in Eq. (6.22) can be simplified as I P( x) [ I RU ( x)] [ I ) D ( x)] [ I GL ( x)],
(6.50)
where § 0 R0,1 ( x) R0,2 ( x) R0,3 ( x) ! · ¨ ¸ 0 R1 ( x) R2 ( x) ! ¸ ¨ RU ( x) ¨ 0 R1 ( x) ! ¸ , ¨ ¸ 0 !¸ ¨ ¨ % ¸¹ © ) D ( x) diag < 0 ( x), )0 ( x), )0 ( x), )0 ( x), ! and § 0 · ¨ ¸ 0 ¨ G1,0 ( x) ¸ ¸. GL ( x) ¨ G2,0 ( x) G1 ( x) 0 ¨ ¸ ¨ G3,0 ( x) G2 ( x) G1 ( x) 0 ¸ ¨ # # # # % ¸¹ © It is worthwhile to note that the LU-type RG-factorization given in Eq. (6.40) can not be further simplified. Thus we will not discuss the LU-type RG-factorization here. 314
6
Block-Structured Markov Renewal Processes
For the Markov renewal process of GI G 1 type, we write f
i Ai ( s )
³
i i(s) R
³
i 0( s ) e sx dAi ( x), )
0
f
0
³
f
0
i i(s) e sxd Ri ( x), G
³
f
0
e sx d)0 ( x),
e sx dGi ( x),
f
¦ z iA (s),
* i A ( z, s)
i
i
i f
i *( z , s) R
f
¦ z Ri (s), i
i
i *( z , s ) G
i 1
f
¦z
j
i j ( s ). G
j 1
The following theorem provides the RG-factorization for the repeated blocks based on the double transformations. Note that this RG-factorization is very useful for transient performance analysis such as the first passage time and the sojourn time. Theorem 6.10 * i *( z , s)]. i *( z , s )][ I ) i 0( s )][ I G Ii A ( z, s) [ I R
(6.51)
Proof It follows from Eq. (6.44), Eq. (6.45) and Eq. (6.46) that f
i i i k ( s )[ I ) i 0( s )]G i k ( s ), Ai ( s ) ¦ R
i i ( s )[ I ) i 0( s )] R
i ı 1,
k 1
f
i 0( s )]G i j (s) [I )
i i k ( s )[ I ) i 0( s )]G i j k ( s ), A j(s) ¦ R
j ı 1,
k 1
f
i 0( s ) )
i k ( x). i i k ( s )[ I ) i 0( s )]G A0 ( s ) ¦ R k 1
Therefore, we obtain i *( z , s )[ I ) i 0( s )] [ I ) i 0( s )]G i *( z , s ) ) i 0( s ) R f
i k ( s )[ I ) i 0( s )]z k G i k ( x), I1 I 2 ¦ z k R k 1
where f
I1
f
¦¦z
ik
i k ( x) i i k ( s )[ I ) i 0( s )]z k G R
i 1 k 1
i *( z , s )] i *( z , s )[ I ) i 0( s )][ I G R f
k
i k ( x) i i ( s )[ I ) i 0( s )]z k G ¦ ¦ zi R k 1 i 1
315
Constructive Computation in Stochastic Models with Applications
and f
I2
f
¦¦ z
i k (s) > I ) i 0( s ) @ z ( j k ) G i j k ( x) R
k
j 1 k 1
i *( z, s ) º i ( z , s ) > I ) i 0( s )@ ª I G R ¬ ¼ f
f
i k ( x). i j (s) > I ) i 0( s ) @ z k G ¦¦ zjR k 1 j k
Since f
i k ( x) i k ( s)[ I ) i 0( s)]z k G I1 I 2 ¦ z k R
* i A ( z, s)
k 1
i *( z , s ), i *( z , s )[ I ) i 0( s )]G R we get i *( z , s ) ) i *( z , s )[ I ) i 0( s )] [ I ) i 0( s )]G i 0( s ) R i *( z , s ), i *( z , s )[ I ) i 0( s )]G R
* i A ( z, s)
which is equivalent to Eq. (6.51). This completes the proof. Let i i(s) D
³
f 0
f
¦ z Di (s),
i * ( z , s ) D
i
i
e sxd Di ( x), i *( z , s ) D
i 1
i 0,i ( s ) R
³
i
i i ( s ), D
i 1
f 0
f
¦z
e sxd R0,i ( x),
f
e sxd Gi ,0 ( x),
i i ,0( s ) G
³
i *0( z , s ) G
¦z
0
and i *0( z , s ) R
f
¦ z Ri i
0, i
( s ),
i 1
f
j
i j ,0( s ). G
(6.52)
j 1
Theorem 6.11 For z ! 0 and s ı 0 , i *0( z , s ) ı D i * ( z , s )[ I ) i 0( s )]1, R
(6.53)
i *( z , s )] İ D i *0( z , s )[ I ) i 0( s )][ I G i * ( z , s ), R
(6.54)
i *0( z , s ) ı [ I ) i 0( s )]1D i *( z , s ) G
(6.55)
i *0( z , s ) İ D i *( z , s )][ I ) i 0( s )]G i *( z , s ). [I R
(6.56)
and
316
6
Block-Structured Markov Renewal Processes
Proof We only prove Eq. (6.53) and Eq. (6.54), while Eq. (6.55) and Eq. (6.56) can be proved similarly. It follows from Eq. (6.47) that f
i k ( s ), i 0,i ( s )[ I ) i 0( s )] D i i(s) ¦ R i 0,i k ( s )[ I ) i 0( s)]G R
(6.57)
k 1
and from Eq. (6.43) that
i k (s) [ I ) i 0( s )]1) i k ( s ). G It is obvious that for s ı 0 , f
¦ Ri 0,i k (s)[ I )i 0(s)]Gi k (s) k 1
f
¦ Ri
0,i k
i k ( s ) ı 0. ( s ))
k 1
Hence it follows from Eq. (6.57) that
i 0,i ( s )[ I ) i 0( s )] ı D i i(s) R and for z ! 0 , i *0( z , s ) > I ) i 0( s )@ ı D i * ( z , s ). R
(6.58)
Since the Markov renewal process is irreducible, the spectral radius
i 0( s )) İ sp() i 0(0)) sp()
sp() 0 (f)) 1
i 0( s)]1 ı 0. i 0( s ) is invertible and [ I ) for all s ! 0 . Furthermore, the matrix I ) Therefore, it follows from Eq. (6.19) that i *0( z , s ) ı D i * ( z s )[ I ) i 0( s )]1. R It follows from Eq. (6.58) that i *0( z , s )[ I ) i 0( s )] İ D i * ( z , s ) R i *0( z , s )[ I ) i 0( s )]G i *( z , s ), R
simple computations lead to i *0( z , s )[ I ) i 0( s )][ I G i *( z , s )] İ D i * ( z , s ). R
This completes the proof.
6.6 Spectral Analysis In this section, we provide spectral properties for the R- and G-measures of a Markov renewal process of GI G 1 type. These spectral properties are important 317
Constructive Computation in Stochastic Models with Applications
in the study of stochastic models. For simplicity, we assume that the matrix * A i A (1,0) is irreducible and stochastic. i *( z , s )) 0 and det( I G i *( z , s )) 0 , we first To discuss the equations det( I R need to provide the relations among the radii of convergence for some matrix functions. For s ı 0, we denote by I R ( s), IG ( s ), I R0 ( s), IG0 ( s ), I A ( s ), I A ( s ), i *( z , s ), i *( z , s ), G I ( s ) and I D ( s ) the convergence radii of the matrices R D
* * i *0( z , s ), G i *0( z, s ), i i * ( z , s ) and D i *( z , s ), respectively, where R A ( z , s), i A ( z , s ), D * i A( z, s)
f
¦z
k
f
¦z
* i A( z, s) A k ( s ) and i
k
i A k (s) .
k 1
k 1
Theorem 6.12 For s ı 0 , (1)
I A (s ) IR ( s ) ı 1,
0 İ I A (s ) IG (s ) İ 1;
(6.59)
(2)
ID ( s ) IR ( s), ID ( s) IG ( s). 0
0
(6.60)
Proof We first prove (1). It follows from Eq. (6.51) that i *( z , s )[ I ) i 0( s )][ I G i *( z , s )] ) i 0( s ) R i *( z , s ), i 0( s )]G [I )
(6.61)
* i i *( z , s )][ I ) i 0( s )]G i *( z , s ) ) i 0( s ) A ( z, s) [ I R i *( z , s )[ I ) i 0( s )]. R
(6.62)
* i A ( z, s)
and
Noting that IR (s ) is increasing in s ı 0 and IG ( s ) is decreasing in s ı 0 , it follows from Corollary 3.10 that I R ( s) ı IR (0) ı 1 and 0 İ IG ( s ) İ IG (0) İ 1 . It i *( z , s ) is analytic in _ z _ IR ( s ) and G i *( z , s ) is follows from Eq. (6.52) that R i 0( s ) is invertible, it follows from analytic in _ z _ ! IG ( s ) . Also, noting that I ) Eq. (6.61) that I A ( s ) IR ( s ) ı 1 and from Eq. (6.62) that 0 İ I A ( s ) IG ( s ) İ 1. Next we prove (2). We only prove the first one while the second one can be proved similarly. For s ı 0 , it is easy to see that either I D ( s) ! 1 or I D ( s ) 1 . We consider the following two possible cases: * Case Ē I D ( s ) ! 1 . In this case, since i A ( z , s ) is irreducible, we obtain that for s ! 0 and 1 z I D ( s) , i *( z , s )) sp(G i *(1,0)) sp(G 318
sp (G ) İ 1,
6
Block-Structured Markov Renewal Processes
thus it follows from Eq. (6.54) that i *0( z , s ) İ D i * ( z , s )[ I G i *( z , s )]1 ( I ) 0 ( s )) 1, R
it is clear that I R ( s ) ı I D ( s ) . On the other hand, it follows from Eq. (6.53) that IR ( s) İ ID ( s) . Hence I R ( s) ID (s ). Case ē I D ( s ) 1. In this case, it follows from Eq. (6.53) that IR ( s) İ i *0(1, s ) is always finite and has the probabilistic interpretation I D ( s ) 1. Since R in Theorem 6.11 I R ( s ) ı 1 . Hence, IR ( s) ID (s ). This completes the proof. For z ! 0 and s ! 0 , let F ( z , s ) , r ( z , s ) and g ( z , s ) be the maximal eigenvalues 0
0
0
0
0
0
* i *( z , s ) and G i *( z , s ) , respectively. of i A ( z, s) , R The following Lemma provides the useful relations among the minimal positive solutions for the R-measure and among the maximal positive solutions for the G-measure. Lemma 6.8 (1) For s ı 0 , the minimal positive solution of the equation i *( z, s)) 0 has the following relations det( I R
i *( z , s )) min{z ( s ) ı 1: det( I R *
min{z ( s ) ı 1: det( I i A ( z , s )) min{z ( s ) ı 1:1 r ( z , s )
0} min{z ( s ) ı 1:1 r ( z , s )
0},
0} min{z ( s ) ı 1:1 F ( z , s )
0},
0} min{z ( s ) ı 1:1 F ( z , s ) 0}.
i *( z , s )) 0 (2) For s ı 0, the maximal positive solution of the equation det( I G has the following relations i *( z , s )) max{z ( s ) İ 1: det( I G
0} max{z ( s ) İ 1:1 g ( z , s ) 0},
max{z ( s ) İ 1: det( I i A ( z , s )) 0} max{z ( s ) İ 1:1 F ( z , s ) 0}, max{z ( s) İ 1:1 g ( z , s ) 0} max{z ( s ) İ 1:1 F ( z , s ) 0}. *
i *( z , s )) We now determine the distribution of the roots of det( I R * i ( z , s )) 0 . det( I G Let
K
min{| z |, 1 İ| z |İ I A , det( I R * ( z ))
0}
]
max{| z |, I A İ| z |İ 1, det( I G * ( z ))
0}.
0 and
and
Note that K is the minimal positive solution of the equation det( I R* ( z )) 0 , and [ is the maximal positive solution of the equation det( I G* ( z )) 0 . Theorem 6.13 Suppose that K and [ are given in (1) and (2) of Theorem 3.22, respectively. (1) If P ( x) is positive recurrent, then for s ı 0 , 319
Constructive Computation in Stochastic Models with Applications
i *( z , s )} 0} {z :_ z _ !K} {z ( s ) : det{I R
(6.63)
and i *( z , s )} 0} {z :_ z _İ[ }. {z ( s ) : det{I G
(2) If P( x) is null recurrent, then for s ı 0 , i *( z , s )} 0} {z : [ İ_ z _İK}. i *( z , s )} 0 or det{I G {z ( s ) : det{I R
(3) If P( x) is transient, then for s ı 0 , i *( z , s )} 0} {z : _ z _ıK} {z ( s ) : det{I R
and i *( z , s )} 0} {z: _ z _ [ }. {z ( s ) : det{I G
Proof We only prove Eq. (6.63) while the other four can be proved similarly. It follows from Lemma 6.8 that i *( z , s )) min{z ( s ) ı 1: det( I R
0} min{z ( s ) ı 1:
det{I i A ( z, s)} 0} min{z ( s ) ı 1:1 F ( z, s ) *
0}.
* Since i A ( z , s ) is irreducible for z ı 0 and s ı 0 , F ( z , s ) is strictly increasing for z ı 0 and strictly decreasing for s ı 0, hence z0 ( s ) min{z ( s ) ı 1:1 F ( z , s ) 0} is increasing for s ı 0 . A similar analysis to that used for proving Theorem 3.22 leads to the fact that if P (f) is positive recurrent, then i *( z , s )} 0 , for any solution z z ( s) to equation det{I R
_ z ( s) _ı z0 ( s ) ! z0 (0) ı K. This completes the proof. The following theorem provides the positive roots for the equations i *( z , s )) 0 and det( I G i *( z , s )) 0 . det( I R Theorem 6.14 (1) If P( x) is positive recurrent, then for each s ! 0 , there i *( z0 ( s ), s )) 0 . must exist a unique z0 ( s ) with 1 K z0 ( s ) such that det(I R (2) If P( x) is transient, then for each s ! 0 , there must exist a unique z0 ( s ) i *( z0 ( s ), s )) 0. for 0 z0 ( s ) [ 1 such that det( I G Proof We only prove 1) while 2) can be proved similarly. * We write f ( z , s ) 1 F ( z , s ) . Since i A ( z , s ) is nonnegative and irreducible for z ! 0 and s ı 0 , it is obvious that F ( z, s) is strictly increasing for z ! 0 and strictly decreasing for s ı 0 . Noting that F (K ,0) 1 , we obtain that for any given s ! 0 , f (K , s ) 1 F (K , s ) ! 1 F (K ,0) 0. 320
(6.64)
6
Block-Structured Markov Renewal Processes
* On the other hand, since for any given s ! 0 , lim i A ( z, s) z / f
f, so that
F (f, s ) f, we get f (f, s ) 1 F (f, s ) 0,
(6.65)
Noting that for an arbitrarily given s ! 0 , f ( z , s ) is continuous and strictly increasing for z [K , f) , it follows from Eq. (6.64) and Eq. (6.65) that there always exists a unique positive solution z0 ( s ) (K , f) such that f ( z0 ( s ), s ) 1 F ( z0 ( s ), s ) 0. Therefore, for z ! 0 and s ı 0 it follows from Theorem 3.22 and Eq. (6.64) that z0 ( s )
min{z ( s ) ! K ! 1:1 F ( z , s )
0}
i ( z , s )) min{z ( s ) ! K ! 1: det( I R *
0}.
This completes the proof.
6.7 The First Passage Times In this section, we first provide an algorithmic framework for computing the first passage time of any irreducible Markov renewal process, and then consider the first passage times for a Markov renewal process of GI G 1 type.
6.7.1 An Algorithmic Framework Consider a Markov renewal process {( X n , Tn ), n ı 0} whose transition probability mass matrix is given in Eq. (6.1). Let E {0} and E c {1, 2,3, 4,...} . According to the subsets E and E c , the transition probability mass matrix P( x) is partitioned as E
P( x)
Ec
E § T ( x) U ( x) ·
¨ ¸. E © V ( x) Q( x) ¹
(6.66)
c
Let (D 0 , D ) be the initial probability vector of the Markov renewal process {( X n , Tn ), n ı 0} , where D (D1 , D 2 , D 3 , ...) and D 0 D e 1. We denote by f k ( x ) the joint probability that the Markov renewal process {( X n , Tn ), n ı 0} first visits state 0 at the kth step no later than time x ı 0 . It is easy to check that f 0 ( x ) D 0G ( x )
(6.67)
and f k ( x) D Q ( k 1)* ( x ) V ( x ),
k ı 1.
(6.68) 321
Constructive Computation in Stochastic Models with Applications
Taking the Laplace-Stieltjes transforms for Eq. (6.67) and Eq. (6.68), we obtain if * ( s ) D 0 0
and k 1
if * ( s ) D [Q i *( s)] Vi *( s ), k
k ı 1.
We write
i *( s, z ) F
f
¦z
k
if * ( s ). k
k 0
It is clear that 1
i *( s, z ) D 0 zD [ I z Q i *( s )] Vi *( s ). F
i *( s ) is the transition probability matrix When s ! 0 and 0 z İ 1, it is clear that zQ of a Markov chain. In this case, we can obtain the UL-type RG-factorization *
i ( s ) [ I RU ( s, z )][ I U D ( s, z )][ I GL ( s, z )], I zQ which leads to that i *( s, z ) D 0 zD [ I GL ( s, z )]1[ I U D ( s, z )]1[ I RU ( s, z )]1Vi *( s ); F and the LU-type RG-factorization
i *( s ) [ I RL ( s, z )][ I U D ( s, z )][ I GU ( s, z )], I zQ which yields that
i *( s, z ) D 0 zD [ I GU (s , z )]1[ I U D (s, z )]1[ I RL ( s, z )]1Vi *( s ). F
6.7.2 Markov Renewal Processes of GI/G/1 Type * i (s) R i (s) G i *(1, s ) and G i *(1, s ) . Then it follows from Let i A( s ) i A (1, s ) , R Eq. (6.51) that
i ( s )][ I i ( s )][ I G i ( s )]. Ii A( s ) [ I R )0
(6.69)
I A( x) [ I R( x)] [ I ) 0 ( x)] [ I G ( x)].
(6.70)
Hence, we obtain
f
Notice that G ( x)
¦ G ( x) is the distribution matrix of the first passage times k
k 1
of the Markov renewal process of GI G 1 type from Lıi to Lİ( i 1) for i ı 2 . 322
6
Block-Structured Markov Renewal Processes
Specifically, for a Markov renewal process of M G 1 type G ( x) G1 ( x) is called the matrix distribution of the fundamental period according to Chapter 2 and Chapter 3 in Neuts [17]. It is clear that if the Markov chain P (f ) is recurrent, then G (f) is stochastic; otherwise G (f) is strictly substochastic. j ( s ) we write P ( M ) (1) k d kk M j ( s ) _ . In the following, we For a matrix M k s 0 ds show how to compute conditional moments of the matrix distribution G ( x ) , defined as P k (G ) , based on Eq. (6.69). It follows from Eq. (6.69) that the first conditional moment of G ( x ) satisfies
P1 (G ) P1 ( R)( I ) 0 )( I G ) ( I R ) P1 () 0 )( I G ) ( I R )( I ) 0 ) P1 (G ), and the second conditional moment of G ( x ) satisfies
P 2 (G ) P 2 ( R)( I ) 0 )( I G ) ( I R) P 2 () 0 )( I G ) ( I R)( I ) 0 ) P 2 (G ) 2P1 ( R) P1 () 0 )( I G ) 2( I R) P1 () 0 ) P1 (G ) 2P1 ( R)( I ) 0 ) P1 (G ). Specifically, if the Markov renewal process is recurrent, then the first conditional moment of G ( x ) is simplified as 1
P1 (G ) e ( I ) 0 ) ( I R) 1P1 ( A) e, and the second conditional moment of G ( x ) is simplified as
P 2 (G )e ( I )0 ) 1( I R) 1{P 2 ( A) 2( I R) P1 ()0 ) P1 (G ) 2P1 ( R)( I )0 ) P1 (G )}e since ( I G ) e
0. f
Now, we consider the matrix distribution G0 ( x)
¦G
k ,0
( x) , which often
k 1
represents the matrix distribution of the busy period. To explicitly express G0 ( x ) , we need to compute Gk ,0 ( x) for all k ı 1 . It follows from Eq. (6.42) that f
i 0( s )]G i j ,0( s ) [I )
i j ( s) ¦ ) i k ( s )G i k j ,0( s ), D
j ı 1,
k 1
hence, §I ¨ ¨ ¨ ¨ ¨ ¨ ©
i 1,0( s ) · B1 ( s ) B2 ( s ) B3 ( s ) !·§ G ¸¨ i ¸ B1 ( s) B2 ( s) !¸¨ G 2,0( s) ¸ I i 3,0( s ) ¸ B1 ( s ) !¸¨ G I ¸¨ ¸ i 4,0( s) ¸ !¸¨ G I ¸¨ # ¸ %¹© ¹
§ F1 ( s ) · ¨ ¸ ¨ F2 ( s) ¸ ¨ F3 ( s) ¸ , ¨ ¸ ¨ F4 ( s) ¸ ¨ ¸ © # ¹
(6.71)
323
Constructive Computation in Stochastic Models with Applications
i 0( s )]1) i 0( s )]1D i i ( s ) and Fi ( s ) [ I ) i i ( s ) for i ı 1. where Bi ( s ) [ I ) Lemma 6.9 Let §I ¨ ¨ / (s) ¨ ¨ ¨ ¨ ©
B1 ( s ) B2 ( s ) B3 ( s ) ! · ¸ I B1 ( s ) B2 ( s ) ! ¸ I B1 ( s ) ! ¸ . ¸ I !¸ % ¸¹
Then for Re( s ) ı 0 , there always exists a unique inverse matrix / 1 ( s ) such that / 1 ( s )/ ( s ) / ( s )/ 1 ( s ) I , and §I ¨ ¨ / 1 ( s ) ¨ ¨ ¨ ¨ ©
X 1 (s)
X 2 (s)
I
X1 (s) I
X 3 (s) !· ¸ X 2 (s) !¸ X 1 (s) !¸ , ¸ I !¸ % ¸¹
(6.72)
where f
X l ( s)
¦ ¦
Bn1 ( s) Bn2 ( s)! Bni ( s),
l ı 1.
i 1 n1 n2 ! ni l n j ı1,1İ jİi
Proof Noting that / ( s)/ 1 ( s )
I , we obtain k 1
X k ( s ) Bk ( s ) ¦ Bi ( s ) X k i ( s )
k ı 1.
0,
(6.73)
i 1
f
Let X *( z , s )
¦z X
f
k
k
( s ) and B*( z , s )
k 1
¦z
k
Bk ( s ) . Then it follows from Eq. (6.73)
k 1
that X * ( z , s ) B* ( z , s ) B * ( z , s ) X * ( z , s )
0.
We obtain 1
f
X * ( z , s ) [ I B* ( z s )] B* ( z , s )
i
¦ [ B ( z, s)] *
i 1
f
f
¦z ¦ ¦ l
l 1
324
i 1 n1 n2 !ni l n j ı1, 1İ j İi
Bn1 ( s) Bn2 ( s)! Bni ( s ).
6
Block-Structured Markov Renewal Processes
Therefore, f
X l ( s)
¦ ¦
Bn1 ( s) Bn2 ( s)! Bni ( s),
l ı 1.
i 1 n1 n2 !ni l n j ı1, 1İ j İi
This completes the proof. Theorem 6.15 For Re( s ) ı 0 , i 0( s ) G
1
f
f
f
f
i k ( s) ¦ ¦ ¦ ¦ > I )i 0( s)@ ¦ D k 1
i 0( s )] [I )
1
k 1 l 1 i 1 n1 n2 !ni l n j ı1, 1İ j İi
i 0( s)]1) i 0( s)]1) i n ( s)[ I ) i n ( s)![ I ) i n ( s) ) 1 2 i i 0( s)]1D i k l ( s). [I ) Proof It follows from Eq. (6.71) that f
i k ,0( s ) G
Fk ( s ) ¦ X l ( s ) Fk l ( s) l 1
f
f
i 0( s )]1D i k (s) ¦ ¦ [I )
¦
i 0( s )]1 [I )
l 1 i 1 n1 n2 !ni l n j ı1, 1İ jİi 1
i 0( s )]1) i n (s) i n ( s )[ I ) i n ( s )![ I ) i 0 s ] ) ) 1 2 i i 0(s )]1D i ( k l )(s ). [I )
(6.74)
f
i 0( s ) Noting that G
¦ Gi
k ,0
( s ) , simple computations yield the proof.
k 1
Now, we provide conditions on the state classification for the Markov renewal processes of GI G 1 type. Based on the result in Theorem 6.15, simple computations can lead to the following corollary. Corollary 6.3 (1) If P( x) is recurrent, then G0 (f)e e. (2) If P( x) is transient, then G0 (f)e 9 e. i k (s) 0 , Remark 6.3 For a Markov renewal process of M G 1 type, since D 1
i 0( s ) > I ) i 0( s ) @ D i 1( s ) , which is the same as (2.4.3) in k ı 2 , it is clear that G Neuts [17] (p. 107). Theorem 6.15 extended Lemma 2.4.1 of Neuts [17] to a Markov renewal process of GI G 1 type. In what follows we express the transformation of the matrix < 0 ( x) P[0] ( x ) . It follows from Theorem 6.1 that i ( s )]1Vi ( s ). i 0 (s) U i ( s )[ I Q
(6.75) 325
Constructive Computation in Stochastic Models with Applications
w k * P[0] ( z , s )e _( z , s ) wz
Let N
N
(1,0)
. Then 2
D0 e ( D1 , D2 , D3 , !)(2 I 3Q)( I Q ) ( DT1 , DT2 , DT3 , !)T e.
Therefore, if the Markov chain is positive recurrent, it follows from Corollary 3.5 in Asmussen [2] that ʌ0
( x0N )1 x0 ,
(6.76)
where x0 is the stationary probability vector of < 0 (f) . Remark 6.4 If the Markov renewal process is of M G 1 type, Eq. (6.75) is the same as (3.2.1) in Neuts [17] (p. 135) and Eq. (6.76) is the same as (3.2.6) in Neuts [17] (p. 137). It is obvious that the state classification for the Markov chain P( f) is crucial for classifying the Markov renewal process P ( x ) as recurrent or transient. The following corollary, a consequence of Proposition 6.2, provides a sufficient condition for a recurrent Markov renewal process of GI G 1 type to be positive recurrent. Corollary 6.4 A Markov renewal process P ( x ) of GI G 1 type is positive recurrent if (1) P(f) is positive recurrent and (2)
³
f 0
f
xD k ( x)dx for all k ı 0 ,
¦³ k 1
f
0
f
xDk ( x)dx and
¦³
k f
f
0
xAk ( x)dx are
all finite.
6.8 Notes in the Literature The literature on Markov renewal processes is extensive. References, which are closely related to this chapter, include Pyke [19 21], Pyke and Schaufele [22 23], Teugels [28], Hunter [11], Çinlar [5, 7, 8], Todorovic and Gani [29], MalinovskiƱ [15], Lam [12], Johnson, Liu and Narayana [10], Rossetti and Clark [26], Vesilo [30], Ball and Milne [4]. For a comprehensive discussion on Markov renewal processes, readers may refer to a survey article by Çinlar [6]. The study of block-structured Markov renewal processes provides a useful modelling tool. Readers may refer to Neuts [17] for a study of Markov renewal processes of M G 1 type. Other references on block-structured Markov renewal processes include Neuts [16, 18], Sengupta [27], Asmussen and Ramaswami [3], Ramaswami [24 25], Zhao, Li and Alfa [31] and Hsu, Yuan and Li [9] among others. Li and Zhao [13 14] systemically studied the UL-types RG-factorization 326
for block-structured Markov renewal processes, while the LU-type RG-factorization is given in this chapter for the first time. In this chapter, we mainly refer to Li and Zhao [13, 14], Çinlar [5-8] and Neuts [17].
Problems
6.1 Prove Proposition 6.2.
6.2 Prove Proposition 6.3.
6.3 Provide a concrete example to indicate that Condition (2) of Proposition 6.3 can be further weakened.
6.4 Provide an example to indicate that the embedded Markov chain $P(\infty)$ is positive recurrent while the Markov renewal process $P(x)$ is not positive recurrent.
6.5 Prove Corollary 6.1.
6.6 Prove the censoring invariance for the block-structured Markov renewal process as follows: (1) for $0\le i<j\le n$, $R^{[\le n]}_{i,j}(x)=R_{i,j}(x)$; and (2) for $0\le j<i\le n$, $G^{[\le n]}_{i,j}(x)=G_{i,j}(x)$.
6.7 Prove Corollary 6.2.
6.8 If $P(x)$ is the transition probability mass matrix of an irreducible QBD renewal process with infinitely many levels, compute the Markov renewal matrix $M(x)$.
6.9 If the transition probability mass matrix $P(x)$ is of GI/G/1 type, compute the Markov renewal matrix $M(x)$.
6.10 Consider a MAP/G/1 queue with a repairable server, where the life time and the repair time of the server are exponential and general, respectively. Compute the reliability function by means of the UL-type RG-factorization.
6.11 For a Markov renewal process of GI/M/1 type, express $G_{i,j}(x)$ and $\Psi_j(x)$ by using the matrix $R(x)$.
6.12 In an irreducible Markov renewal process of GI/G/1 type, analyze the asymptotic behavior of the four matrix functions $G(x)$, $G_0(x)$, $R(x)$ and $R_0(x)$ with respect to each of the following three cases:
(1) The matrix function $A(x)=\sum_{k=-\infty}^{\infty}A_k(x)$ is heavy-tailed, and the matrix function $D(x)=\sum_{k=-\infty}^{\infty}D_k(x)$ is light-tailed.
(2) The matrix function $A(x)$ is light-tailed, and the matrix function $D(x)$ is heavy-tailed.
(3) The matrix functions $A(x)$ and $D(x)$ are both heavy-tailed.
6.13 In an irreducible Markov renewal process of GI/G/1 type, analyze the asymptotic behavior of the four matrix functions $G(x)$, $G_0(x)$, $R(x)$ and $R_0(x)$ with respect to each of the following three cases:
(1) The matrix sequence $\{A_k(\infty)\}$ is heavy-tailed, and the matrix sequence $\{D_k(\infty)\}$ is light-tailed.
(2) The matrix sequence $\{A_k(\infty)\}$ is light-tailed, and the matrix sequence $\{D_k(\infty)\}$ is heavy-tailed.
(3) The matrix sequences $\{A_k(\infty)\}$ and $\{D_k(\infty)\}$ are both heavy-tailed.
6.14 In a MAP/G/1 queue, if the service time distribution is heavy-tailed, analyze the asymptotic behavior of the four matrix functions $G_k(x)$ for $k\ge 1$, $G(x)$, $R_l(x)$ for $l\ge 1$ and $R(x)$, where $G(x)=\sum_{k=1}^{\infty}G_k(x)$ and $R(x)=\sum_{k=1}^{\infty}R_k(x)$.
6.15 In a stable BMAP/G/1 queue, analyze the asymptotic behavior of the stationary queue length, waiting time and busy period with respect to each of the following three cases:
(1) The service time distribution is light-tailed and the BMAP matrix descriptor $\{C_k\}$ is heavy-tailed.
(2) The service time distribution is heavy-tailed and the BMAP matrix descriptor $\{C_k\}$ is light-tailed.
(3) The service time distribution is heavy-tailed and the BMAP matrix descriptor $\{C_k\}$ is heavy-tailed.
6.16 In a MAP/G/1/N queue, when $\rho<1$ (or $\rho>1$, or $\rho=1$), analyze the asymptotic behavior $\lim_{N\to\infty}q_N$ of the stationary queue length distribution $\{q_k, 0\le k\le N\}$ with respect to each of the following two cases:
(1) The service time distribution is light-tailed.
(2) The service time distribution is heavy-tailed.
6.17 In a BMAP/G/1 queue, compute the transient distributions of the queue length and waiting time.
References
1. Asmussen S. (1987). Applied Probability and Queues, John Wiley & Sons
2. Asmussen S. (2003). Applied Probability and Queues (Second Edition), Springer
3. Asmussen S. and V. Ramaswami (1990). Probabilistic interpretations of some duality results for the matrix paradigms in queueing theory. Stochastic Models 6: 715-733
4. Ball F. and R.K. Milne (2005). Simple derivations of properties of counting processes associated with Markov renewal processes. Journal of Applied Probability 42: 1031-1043
5. Çinlar E. (1968). Some joint distributions for Markov renewal processes. The Australian Journal of Statistics 10: 8-20
6. Çinlar E. (1969). Markov renewal processes. Advances in Applied Probability 1: 123-187
7. Çinlar E. (1974). Periodicity in Markov renewal processes. Advances in Applied Probability 6: 61-78
8. Çinlar E. (1975). Introduction to Stochastic Processes, Prentice-Hall
9. Hsu G.H., X. Yuan and Q.L. Li (2000). First passage times for Markov renewal processes and application. Science in China (Series A) 43: 1238-1249
10. Johnson M.A., D. Liu and S. Narayana (1997). Burstiness descriptors for Markov renewal processes and Markovian arrival processes. Stochastic Models 13: 619-646
11. Hunter J.J. (1969). On the moments of Markov renewal processes. Advances in Applied Probability 1: 188-210
12. Lam C.T. (1993). Superposition of Markov renewal processes and applications. Advances in Applied Probability 25: 585-606
13. Li Q.L. and Y.Q. Zhao (2002). The RG-factorizations in block-structured Markov renewal processes with applications. Technical Report 381, Laboratory for Research in Statistics and Probability, Carleton University and University of Ottawa, Canada, 1-40
14. Li Q.L. and Y.Q. Zhao (2004). The RG-factorization in block-structured Markov renewal processes with applications. In Observation, Theory and Modeling of Atmospheric Variability, X. Zhu (ed), World Scientific, 545-568
15. Malinovskii V.K. (1991). Large deviations for recurrent Markov renewal processes. Theory of Probability and its Applications 36: 170-173
16. Neuts M.F. (1981). Matrix-Geometric Solutions in Stochastic Models: An Algorithmic Approach, The Johns Hopkins University Press: Baltimore
17. Neuts M.F. (1989). Structured Stochastic Matrices of M/G/1 Type and Their Applications, Marcel Dekker: New York
18. Neuts M.F. (1995). Matrix-analytic methods in stochastic models. In Advances in Queueing: Theory, Methods and Open Problems, J. Dshalalow (ed), CRC Press: Boca Raton, Florida, 265-292
19. Pyke R. (1961). Markov renewal processes: definitions and preliminary properties. Annals of Mathematical Statistics 32: 1231-1242
20. Pyke R. (1961). Markov renewal processes with finitely many states. Annals of Mathematical Statistics 32: 1243-1259
21. Pyke R. (1964). Markov renewal processes with infinitely many states. Annals of Mathematical Statistics 35: 1746-1764
22. Pyke R. and R. Schaufele (1964). Limit theorems for Markov renewal processes. Annals of Mathematical Statistics 35: 1746-1764
23. Pyke R. and R. Schaufele (1966). The existence and uniqueness of stationary measures for Markov renewal processes. Annals of Mathematical Statistics 37: 1439-1462
24. Ramaswami V. (1990). A duality theorem for the matrix paradigms in queueing theory. Stochastic Models 6: 151-161
25. Ramaswami V. (1990). From the matrix-geometric to the matrix-exponential. Queueing Systems 6: 229-260
26. Rossetti M.D. and G.M. Clark (1999). Moment solutions for the state exiting counting processes of a Markov renewal process. Methodology and Computing in Applied Probability 1: 247-275
27. Sengupta B. (1989). Markov processes whose steady state distribution is matrix-exponential with an application to the GI/PH/1 queue. Advances in Applied Probability 21: 159-180
28. Teugels J.L. (1968). Exponential ergodicity in Markov renewal processes. Journal of Applied Probability 5: 387-400
29. Todorovic P. and J. Gani (1989). A Markov renewal process imbedded in a Markov chain. Stochastic Analysis and Applications 7: 339-353
30. Vesilo R.A. (2004). Long-range dependence of Markov renewal processes. Festschrift in honour of Daryl Daley. Australian & New Zealand Journal of Statistics 46: 155-171
31. Zhao Y.Q., W. Li and A.S. Alfa (1999). Duality results for block-structured transition matrices. Journal of Applied Probability 36: 1045-1057
7 Examples of Practical Applications
Quan-Lin Li
Department of Industrial Engineering, Tsinghua University, Beijing 100084, P.R. China
E-mail: [email protected]
Abstract In this chapter, we apply the RG-factorizations to deal with practical stochastic models, and indicate concrete procedures of performance computation under a unified algorithmic framework. The processor-sharing queue is directly constructed as a block-structured Markov chain, and the fluid queue can be constructed as a block-structured Markov chain by means of the Laplace transform; the negative-customer queue and the retrial queue need to combine the supplementary variable method and the RG-factorizations such that the boundary conditions are simplified as a block-structured Markov chain or a block-structured Markov renewal process. We provide performance analysis of these practical stochastic models.
Keywords practical stochastic models, RG-factorization, supplementary variable method, processor-sharing queue, fluid queue, queue with negative customers, retrial queue.
In this chapter, we apply the RG-factorizations to deal with practical stochastic models, and indicate concrete procedures both for constructing a block-structured Markov chain and for applying the RG-factorizations to provide performance computation under a unified algorithmic framework. The processor-sharing queue is directly constructed as a block-structured Markov chain. The fluid queue can be constructed as a block-structured Markov chain by means of the Laplace transform or the Laplace-Stieltjes transform. The negative-customer queue and the retrial queue need to combine the supplementary variable method and the RG-factorizations such that the boundary conditions are simplified as a block-structured Markov chain or a block-structured Markov renewal process.
This chapter is organized as follows. Section 7.1 investigates a BMAP/M/1 processor-sharing queue with the generalized processor-sharing discipline. Section 7.2 discusses an infinite-capacity fluid queue driven by a QBD process. Section 7.3 deals with a MAP/G/1 queue with negative customers. Section 7.4 analyzes a
BMAP/G/1 retrial queue with a repairable server. Finally, Section 7.5 summarizes the references related to the results of this chapter.
7.1 Processor-Sharing Queues
In this section, we study a BMAP/M/1 processor-sharing queue with the generalized processor-sharing discipline. Applying the RG-factorizations, we provide the stable condition of the system, and express the stationary distribution of the queue length and the Laplace transform of the sojourn time.
The processor-sharing queueing model is described as follows:
The arrival process: The arrivals to the queue are modelled by a BMAP with irreducible matrix descriptor $\{D_k, k\ge 0\}$ of size $m$. We assume that $\sum_{k=1}^{\infty}kD_k$ is finite, and that $D=\sum_{k=0}^{\infty}D_k$ is the infinitesimal generator of an irreducible Markov chain with $De=0$, where $e$ is a column vector of ones of suitable size. Let $\theta$ be the stationary probability vector of $D$. Then $\lambda=\theta\sum_{k=1}^{\infty}kD_ke$ is the stationary arrival rate.
The service times: The service times $\{\chi_n, n\ge 1\}$ of the customers are assumed to be i.i.d. exponential random variables with service rate $\mu$.
The generalized processor-sharing discipline: When there are $n$ jobs in the system, the attained service of each job increases at rate $f(n)$ relative to being alone in the system. Here, $f(n)$ is a positive function such that $0<C_1\le nf(n)\le C_2$ for all $n\ge 1$, where $C_1$ and $C_2$ are two positive constants.
The independence: We assume that all the random variables defined above are mutually independent.
Remark 7.1 (1) If $f(n)=1/n$, then $C_1=C_2=1$. In this case, the generalized processor-sharing discipline becomes the ordinary processor-sharing discipline. In fact, it is easy to see that the generalized processor-sharing discipline describes a wider class of service disciplines.
(2) The condition $0<C_1<\infty$ is necessary for guaranteeing the stability of the generalized processor-sharing queue.
(3) If $C_2=\infty$, then there are two useful cases in practice: (a) $nf(n)<\infty$ for all $n\ge 1$ but $\lim_{n\to\infty}nf(n)=\infty$; and (b) there exists a finite integer $N$ such that $Nf(N)=\infty$. For the two cases, the approach given in this section still works well.
Let $q(t)$ and $I(t)$ be the number of jobs in the system and the phase of the BMAP input at time $t$, respectively. Then $\{q(t), I(t); t\ge 0\}$ is a
continuous-time level-dependent Markov chain of M/G/1 type whose infinitesimal generator is given by
$$Q=\begin{pmatrix} D_0 & D_1 & D_2 & D_3 & \cdots\\ f(1)\mu I & D_0-f(1)\mu I & D_1 & D_2 & \cdots\\ & 2f(2)\mu I & D_0-2f(2)\mu I & D_1 & \cdots\\ & & 3f(3)\mu I & D_0-3f(3)\mu I & \cdots\\ & & & \ddots & \ddots \end{pmatrix}.\qquad(7.1)$$
Let $C=\inf_{n\ge N}\{nf(n)\}$, where $N$ is a large positive integer. Then $0<C<\infty$ according to the condition $0<C_1\le nf(n)\le C_2$ for all $n\ge 1$. Based on the mean drift of the Markov chain $Q$, it is obvious that the generalized processor-sharing queue is stable if $C\mu>\lambda$, since $nf(n)\mu\ge C\mu>\lambda$ for all $n\ge N$.
For the Markov chain of M/G/1 type, let $\{G^{(k)}, k\ge 1\}$ for the G-measure be the minimal nonnegative solution to the system of matrix equations
$$kf(k)\mu I+[D_0-kf(k)\mu I]G^{(k)}+\sum_{i=1}^{\infty}D_iG^{(k+i)}G^{(k+i-1)}\cdots G^{(k)}=0,\quad k\ge 1.$$
Then for $k\ge 0$ and $l\ge 1$, the R-measure is given by
$$R_l^{(k)}=\left[D_l+D_{l+1}G^{(k+l+1)}+D_{l+2}G^{(k+l+2)}G^{(k+l+1)}+\cdots\right](-U_{k+l})^{-1},$$
and for $k\ge 0$, the U-measure is given by
$$U_k=D_0-kf(k)\mu I+\sum_{i=1}^{\infty}D_iG^{(k+i)}G^{(k+i-1)}\cdots G^{(k+1)}.$$
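The block structure of the generator in Eq. (7.1) is easy to assemble numerically. The sketch below is our own minimal illustration, not from the book: it specializes to scalar blocks ($m=1$, Poisson arrivals with $D_0=-\lambda$, $D_1=\lambda$, $D_k=0$ for $k\ge 2$) and $f(n)=1/n$, builds a truncated version of $Q$, and checks that every row sums to zero, as a generator must. The names `build_q`, `lam`, `mu` and `levels` are our hypothetical choices.

```python
def build_q(lam, mu, f, levels):
    """Truncated generator of the level-dependent M/G/1-type chain in Eq. (7.1),
    specialized to scalar blocks: D0 = -lam, D1 = lam, Dk = 0 for k >= 2."""
    Q = [[0.0] * levels for _ in range(levels)]
    for n in range(levels):
        down = n * f(n) * mu if n >= 1 else 0.0   # service completion at rate n*f(n)*mu
        up = lam if n + 1 < levels else 0.0       # single-customer arrival (truncated at top)
        if n >= 1:
            Q[n][n - 1] = down
        if n + 1 < levels:
            Q[n][n + 1] = up
        Q[n][n] = -(down + up)                    # diagonal makes each row sum to zero
    return Q

Q = build_q(lam=1.0, mu=2.0, f=lambda n: 1.0 / n, levels=6)
print([sum(row) for row in Q])   # every row sums to zero
```

With $f(n)=1/n$ the total service rate is $nf(n)\mu=\mu$, so this toy chain is simply the M/M/1 birth-death generator, which makes the construction easy to sanity-check.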
Thus, the RG-factorization is given by
$$Q=(I-R_U)U_D(I-G_L),$$
where
$$I-R_U=\begin{pmatrix} I & -R_1^{(0)} & -R_2^{(0)} & -R_3^{(0)} & \cdots\\ & I & -R_1^{(1)} & -R_2^{(1)} & \cdots\\ & & I & -R_1^{(2)} & \cdots\\ & & & I & \cdots\\ & & & & \ddots \end{pmatrix},$$
$$U_D=\mathrm{diag}(U_0,U_1,U_2,\ldots),$$
and
$$I-G_L=\begin{pmatrix} I & & & &\\ -G^{(1)} & I & & &\\ & -G^{(2)} & I & &\\ & & -G^{(3)} & I &\\ & & & \ddots & \ddots \end{pmatrix}.$$
Using the RG-factorization, the stationary probability vector $(\pi_0,\pi_1,\pi_2,\ldots)$ of the Markov chain $Q$ is given by
$$\begin{cases}\pi_0=\tau x_0,\\ \pi_k=\displaystyle\sum_{i=0}^{k-1}\pi_iR_{k-i}^{(i)}, & k\ge 1,\end{cases}$$
where $x_0$ is the stationary probability vector of the transition rate matrix $U_0$ and the scalar $\tau$ is uniquely determined by $\sum_{k=0}^{\infty}\pi_ke=1$.
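To see the recursion at work, consider again the scalar case $m=1$ with $D_0=-\lambda$, $D_1=\lambda$ and $f(n)=1/n$; these concrete choices are ours, not the book's. Then $G^{(k)}=1$, $U_k=-\mu$ for $k\ge 1$, $U_0=0$ (a one-state generator) with $x_0=1$, and $R_1^{(k)}=\lambda/\mu$, so the recursion should reproduce the geometric M/M/1 queue-length distribution $\pi_k=(1-\rho)\rho^k$ with $\rho=\lambda/\mu$. A minimal sketch:

```python
def ps_stationary(lam, mu, n_max):
    """Stationary queue length of the processor-sharing queue, specialized to
    Poisson arrivals and f(n) = 1/n, via the recursion pi_k = pi_{k-1} * R."""
    rho = lam / mu                     # scalar R-measure: R_1^{(k)} = lam/mu
    pi = [1.0]                         # unnormalized, with x_0 = 1
    for _ in range(n_max):
        pi.append(pi[-1] * rho)        # pi_k = pi_{k-1} R_1^{(k-1)}
    tau = 1.0 - rho                    # normalizes the full geometric series
    return [tau * p for p in pi]

pi = ps_stationary(lam=1.0, mu=2.0, n_max=20)
print(pi[:3])   # [0.5, 0.25, 0.125]
```

The truncation at `n_max` only affects the tail; the normalizing constant $\tau=1-\rho$ is exact for the infinite chain.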
Let $q_k=\lim_{t\to\infty}P\{q(t)=k\}$ for $k\ge 0$. Then
$$q_0=\tau,\qquad q_k=\sum_{i=0}^{k-1}\pi_iR_{k-i}^{(i)}e,\quad k\ge 1.$$
Thus, the mean of the stationary queue length is given by
$$E[q]=\sum_{k=1}^{\infty}\sum_{i=0}^{k-1}k\pi_iR_{k-i}^{(i)}e.$$
Now, we derive the Laplace transform of the complementary distribution of the sojourn time by means of the RG-factorization, and also obtain the mean of the sojourn time. Let $W_n$ denote the sojourn time experienced by a customer who finds $n-1$ customers in the system at his arrival epoch, and let $I_n$ be the phase of the BMAP input at the arrival epoch of the $n$th customer. We write
$$W_n(x,i)=P\{W_n\le x, I_n=i\}$$
and
$$W_n(x)=(W_n(x,1),W_n(x,2),\ldots,W_n(x,m)).$$
Using a standard probabilistic analysis, we show that the vector sequence $\{W_k(x)\}$ satisfies the following system of differential-difference equations
$$\frac{\mathrm{d}}{\mathrm{d}x}W_1(x)=W_1(x)[D_0-\mu f(1)I]+\sum_{l=1}^{\infty}W_{1+l}(x)D_l\qquad(7.2)$$
and, for $k\ge 2$,
$$\frac{\mathrm{d}}{\mathrm{d}x}W_k(x)=\mu(k-1)f(k)W_{k-1}(x)+W_k(x)[D_0-\mu kf(k)I]+\sum_{l=1}^{\infty}W_{k+l}(x)D_l,\qquad(7.3)$$
with the initial conditions
$$W_n(0)=\alpha\quad\text{and}\quad W_k(0)=0\ \text{for all}\ k\ne n,\qquad(7.4)$$
where $\alpha$ is a probability vector with $\alpha e=1$. Let
$$W(x)=(W_1(x),W_2(x),W_3(x),\ldots)$$
and
$$A=\begin{pmatrix} D_0-f(1)\mu I & f(2)\mu I & & &\\ D_1 & D_0-2f(2)\mu I & 2f(3)\mu I & &\\ D_2 & D_1 & D_0-3f(3)\mu I & &\\ \vdots & \ddots & \ddots & \ddots & \end{pmatrix}.\qquad(7.5)$$
Then the system of equations Eq. (7.2) to Eq. (7.5) is rewritten as
$$\frac{\mathrm{d}}{\mathrm{d}x}W(x)=W(x)A\qquad(7.6)$$
with the initial condition
$$W(0)=(\underbrace{0,0,\ldots,0}_{n-1\ \text{vectors}\ 0\ \text{of size}\ m},\alpha,0,0,\ldots).$$
Thus, we obtain $W(x)=W(0)\exp\{Ax\}$. We denote by $w^*(s)$ and $w_n^*(s)$ the Laplace transforms of the row vectors $W(x)$ and $W_n(x)$, respectively. For example,
$$w^*(s)=\int_0^{\infty}e^{-sx}W(x)\mathrm{d}x.$$
It is easy to check that
$$\int_0^{\infty}e^{-sx}\mathrm{d}W(x)=-W(0)+sw^*(s).$$
Thus, it follows from Eq. (7.6) that
$$w^*(s)(A-sI)=-W(0).\qquad(7.7)$$
Let $(A-sI)^{-1}_{\max}$ denote the maximal non-positive inverse of the matrix $A-sI$ of infinite size, in the sense that $(A-sI)^{-1}\le(A-sI)^{-1}_{\max}\le 0$ for an arbitrary inverse $(A-sI)^{-1}$ of $A-sI$. Then $w^*(s)=-W(0)(A-sI)^{-1}_{\max}$.
Let $\{G^{(k)}(s), k\ge 1\}$ for the G-measure be the minimal nonnegative solution to the system of matrix equations
$$kf(k)\mu I+\{D_0-[s+kf(k)\mu]I\}G^{(k)}(s)+D_1G^{(k+1)}(s)G^{(k)}(s)+D_2G^{(k+2)}(s)G^{(k+1)}(s)G^{(k)}(s)+\cdots=0,\quad k\ge 1.$$
For $k\ge 0$, the U-measure is given by
$$U_k(s)=D_0-[s+kf(k)\mu]I+\sum_{i=1}^{\infty}D_iG^{(k+i)}(s)G^{(k+i-1)}(s)\cdots G^{(k+1)}(s),$$
and for $k\ge 0$ and $l\ge 1$ the R-measure is given by
$$R_l^{(k)}(s)=\left[D_l+D_{l+1}G^{(k+l+1)}(s)+D_{l+2}G^{(k+l+2)}(s)G^{(k+l+1)}(s)+\cdots\right][-U_{k+l}(s)]^{-1}.$$
Therefore, we obtain
$$A-sI=[I-R_U(s)]U_D(s)[I-G_L(s)],\qquad(7.8)$$
where
$$I-R_U(s)=\begin{pmatrix} I & -R_1^{(0)}(s) & -R_2^{(0)}(s) & -R_3^{(0)}(s) & \cdots\\ & I & -R_1^{(1)}(s) & -R_2^{(1)}(s) & \cdots\\ & & I & -R_1^{(2)}(s) & \cdots\\ & & & I & \cdots\\ & & & & \ddots \end{pmatrix},$$
$$U_D(s)=\mathrm{diag}(U_0(s),U_1(s),U_2(s),U_3(s),\ldots),$$
$$I-G_L(s)=\begin{pmatrix} I & & & &\\ -G^{(1)}(s) & I & & &\\ & -G^{(2)}(s) & I & &\\ & & -G^{(3)}(s) & I &\\ & & & \ddots & \ddots \end{pmatrix}.$$
Let
$$X_1^{(l)}(s)=R_1^{(l)}(s),\quad l\ge 0,$$
$$X_{k+1}^{(l)}(s)=R_{k+1}^{(l)}(s)+R_1^{(l)}(s)X_k^{(l+1)}(s)+R_2^{(l)}(s)X_{k-1}^{(l+2)}(s)+\cdots+R_k^{(l)}(s)X_1^{(l+k)}(s),\quad l\ge 0,\ k\ge 1,$$
and
$$Y_k^{(l)}(s)=G^{(l)}(s)G^{(l-1)}(s)\cdots G^{(l-k+1)}(s),\quad l\ge k\ge 1.$$
If $\mathrm{Re}(s)>0$, then $I-R_U(s)$, $U_D(s)$ and $I-G_L(s)$ are invertible, with
$$[I-R_U(s)]^{-1}=\begin{pmatrix} I & X_1^{(0)}(s) & X_2^{(0)}(s) & X_3^{(0)}(s) & \cdots\\ & I & X_1^{(1)}(s) & X_2^{(1)}(s) & \cdots\\ & & I & X_1^{(2)}(s) & \cdots\\ & & & I & \cdots\\ & & & & \ddots \end{pmatrix},$$
$$U_D(s)^{-1}=\mathrm{diag}(U_0(s)^{-1},U_1(s)^{-1},U_2(s)^{-1},U_3(s)^{-1},\ldots)$$
and
$$[I-G_L(s)]^{-1}=\begin{pmatrix} I & & & &\\ Y_1^{(1)}(s) & I & & &\\ Y_2^{(2)}(s) & Y_1^{(2)}(s) & I & &\\ Y_3^{(3)}(s) & Y_2^{(3)}(s) & Y_1^{(3)}(s) & I &\\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$
Let $w_n^*(s)$ be the Laplace transform of the sojourn time distribution $W_n(x)$. It follows from Eq. (7.7) and Eq. (7.8) that
$$w^*(s)=(w_1^*(s),w_2^*(s),w_3^*(s),\ldots)=-(\underbrace{0,0,\ldots,0}_{n-1\ \text{vectors}\ 0\ \text{of size}\ m},\alpha,0,0,\ldots)[I-G_L(s)]^{-1}[U_D(s)]^{-1}[I-R_U(s)]^{-1}.$$
Thus, for $\mathrm{Re}(s)>0$ we obtain
$$w_n^*(s)=\alpha[-U_{n-1}(s)]^{-1}+\alpha\sum_{i=1}^{n-1}Y_i^{(n-1)}(s)[-U_{n-1-i}(s)]^{-1}X_i^{(n-1-i)}(s).$$
It follows from Eq. (7.7) that
$$E[W_n]=\alpha[-U_{n-1}(0)]^{-1}e+\alpha\sum_{i=1}^{n-1}Y_i^{(n-1)}(0)[-U_{n-1-i}(0)]^{-1}X_i^{(n-1-i)}(0)e.\qquad(7.9)$$
7.2 Fluid Queues
In this section, we study an infinite-capacity fluid queue driven by a level-dependent QBD process, and derive the stationary probability distribution of the buffer content.
Consider an infinite-capacity buffer in which the fluid input rate and the fluid output rate are influenced by a stochastic environment. The fluid model is described by a continuous-time QBD process $\{Z(t), t\ge 0\}$ whose infinitesimal generator is given by
$$Q=\begin{pmatrix} A_1^{(0)} & A_0^{(0)} & & &\\ A_2^{(1)} & A_1^{(1)} & A_0^{(1)} & &\\ & A_2^{(2)} & A_1^{(2)} & A_0^{(2)} &\\ & & \ddots & \ddots & \ddots \end{pmatrix},\qquad(7.10)$$
where the size of the matrix $A_1^{(k)}$ is $m_k\times m_k$ for $k\ge 0$, the sizes of the other block-entries are determined accordingly, and all empty entries are zero. The QBD process $\{Z(t), t\ge 0\}$ is assumed to be irreducible and positive recurrent. Let $\pi=(\pi_0,\pi_1,\pi_2,\ldots)$ be the stationary probability vector of the QBD process $\{Z(t), t\ge 0\}$, partitioned according to the levels.
Whenever the environment $Z(t)$ stays in state $(k,i)$, the net input rate of fluid (i.e., the input rate minus the output rate) is $\mu_{k,i}$ for $k\ge 0$, $1\le i\le m_k$. We assume that
$$(\mu_{0,1},\mu_{0,2},\ldots,\mu_{0,m_0})<0,\qquad(\mu_{k,1},\mu_{k,2},\ldots,\mu_{k,m_k})\ge 0,\quad k\ge 1,$$
and that there exists at least one $k_0\ge 1$ such that $(\mu_{k_0,1},\mu_{k_0,2},\ldots,\mu_{k_0,m_{k_0}})>0$.
Let $X(t)$ be the buffer content at time $t$; it cannot be negative. We write
$$\Lambda_k=\mathrm{diag}(\mu_{k,1},\mu_{k,2},\ldots,\mu_{k,m_k}),\quad k\ge 0.$$
Let
$$d=\sum_{k=0}^{\infty}\pi_k\Lambda_ke.$$
Since the change of the process $X(t)$ depends only on its rate, which in turn
changes according to the Markov chain $\{Z(t), t\ge 0\}$, it is clear that $\{(X(t),Z(t)), t\ge 0\}$ is a Markov process. The state space of the Markov process $\{(X(t),Z(t)), t\ge 0\}$ is given by
$$\Omega=\{(x,k,j): x\ge 0,\ k=0,1,2,\ldots,\ j=1,2,\ldots,m_k\}.$$
If the stochastic environment $\{Z(t), t\ge 0\}$ is ergodic, the quantity $d$ is called the mean drift of the process $\{X(t), t\ge 0\}$. When the buffer is infinite, the bivariate Markov process $\{(X(t),Z(t)), t\ge 0\}$ is ergodic under the mean drift condition $d<0$. We define
$$F(t,x,(k,j);y,(l,i))=P\{X(t)\le x, Z(t)=(k,j)\mid X(0)=y, Z(0)=(l,i)\}$$
for $x,y\ge 0$, $k,l=0,1,2,\ldots$, $1\le i\le m_l$ and $1\le j\le m_k$. Note that $F(t,x,(k,j);y,(l,i))$ is the joint conditional probability distribution of the process $\{(X(t),Z(t)), t\ge 0\}$ at time $t$. When the process $\{(X(t),Z(t)), t\ge 0\}$ is ergodic, we write
$$F_{k,j}(x)=\lim_{t\to\infty}F(t,x,(k,j);y,(l,i)),$$
which is irrelevant to the initial state $X(0)=y$ and $Z(0)=(l,i)$ according to a standard result in the theory of Markov processes. Let $\Lambda=\mathrm{diag}(\Lambda_0,\Lambda_1,\Lambda_2,\ldots)$ and $F(x)=(F_0(x),F_1(x),F_2(x),\ldots)$, where
$$F_k(x)=(F_{k,1}(x),F_{k,2}(x),\ldots,F_{k,m_k}(x)),\quad k\ge 0.$$
Using a standard probabilistic analysis, the vector function $F(x)$ for $x\ge 0$ can be shown to satisfy the system of differential equations
$$\frac{\mathrm{d}}{\mathrm{d}x}F(x)\Lambda=F(x)Q\qquad(7.11)$$
with the boundary conditions
$$F_0(0)=\pi_0\quad\text{and}\quad F_k(0)=0\ \text{for}\ k\ge 1.\qquad(7.12)$$
Now, we provide an approach for solving the system of differential equations Eq. (7.11) and Eq. (7.12). The approach is based on the RG-factorization of the matrix $Q-s\Lambda$ for an arbitrary $s\ge 0$. We write the Laplace transform of the vector function $F(x)$ as $F^*(s)$, that is,
$$F^*(s)=\int_0^{\infty}e^{-sx}F(x)\mathrm{d}x.$$
Note that $F(x)e\le 1$ for $x\ge 0$. Then
$$\int_0^{\infty}e^{-sx}\mathrm{d}F(x)=-F(0)+sF^*(s).$$
It follows from Eq. (7.11) and Eq. (7.12) that
$$F^*(s)[Q-s\Lambda]=-F(0)\Lambda=-(\pi_0\Lambda_0,0,0,0,\ldots).\qquad(7.13)$$
To solve Eq. (7.13), we first define the R- and G-measures of the matrix $Q-s\Lambda$ for an arbitrary $s\ge 0$, and then provide the RG-factorization. Let $\{R_k(s)\}$ and $\{G_k(s)\}$ for $s\ge 0$ be the minimal nonnegative solutions to the systems of matrix equations
$$A_0^{(l)}+R_l(s)\left[A_1^{(l+1)}-s\Lambda_{l+1}\right]+R_l(s)R_{l+1}(s)A_2^{(l+2)}=0,\quad l\ge 0,$$
and
$$A_0^{(k)}G_{k+1}(s)G_k(s)+\left[A_1^{(k)}-s\Lambda_k\right]G_k(s)+A_2^{(k)}=0,\quad k\ge 1,$$
respectively. Thus, we obtain
$$U_l(s)=A_1^{(l)}-s\Lambda_l+R_l(s)A_2^{(l+1)},\quad l\ge 0,$$
or
$$U_l(s)=A_1^{(l)}-s\Lambda_l+A_0^{(l)}G_{l+1}(s),\quad l\ge 0.$$
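For a level-independent QBD ($A_k^{(l)}\equiv A_k$, $\Lambda_l\equiv\Lambda$), the first system above reduces to the single quadratic equation $A_0+R(s)(A_1-s\Lambda)+R(s)^2A_2=0$, whose minimal nonnegative solution can be computed by successive substitution. The sketch below is our own illustration, not from the book: it uses scalar blocks $A_0=\lambda$, $A_1=-(\lambda+\mu)$, $A_2=\mu$ at $s=0$, where the minimal nonnegative root is known to be $\lambda/\mu$ (the other root is 1). The function name `minimal_R` and the parameter `v` (the scalar net rate standing in for $\Lambda$) are our hypothetical choices.

```python
def minimal_R(lam, mu, s=0.0, v=1.0, tol=1e-12, max_iter=100000):
    """Minimal nonnegative solution of A0 + R(A1 - s*v) + R^2 A2 = 0
    for scalar blocks A0 = lam, A1 = -(lam + mu), A2 = mu."""
    a0, a1, a2 = lam, -(lam + mu), mu
    R = 0.0                                       # start below the minimal root
    for _ in range(max_iter):
        R_new = (a0 + R * R * a2) / -(a1 - s * v)  # successive substitution
        if abs(R_new - R) < tol:
            return R_new
        R = R_new
    return R

print(minimal_R(1.0, 2.0))   # close to 0.5 (= lam/mu)
```

Starting the iteration from $R=0$ is what selects the *minimal* nonnegative root rather than the other solution of the quadratic.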
Clearly, the matrix $U_k(s)$ is invertible for $k\ge 1$, while the matrix $U_0(s)$ may not be invertible for some $s\ge 0$ due to $\Lambda_0<0$. The RG-factorization of the matrix $Q-s\Lambda$ for $s\ge 0$ is given by
$$Q-s\Lambda=[I-R_U(s)]U_D(s)[I-G_L(s)],\qquad(7.14)$$
where
$$U_D(s)=\mathrm{diag}(U_0(s),U_1(s),U_2(s),U_3(s),\ldots),$$
$$R_U(s)=\begin{pmatrix} 0 & R_0(s) & & &\\ & 0 & R_1(s) & &\\ & & 0 & R_2(s) &\\ & & & 0 & \ddots\\ & & & & \ddots \end{pmatrix}$$
and
$$G_L(s)=\begin{pmatrix} 0 & & & &\\ G_1(s) & 0 & & &\\ & G_2(s) & 0 & &\\ & & G_3(s) & 0 &\\ & & & \ddots & \ddots \end{pmatrix}.$$
Let
$$X_k^{(l)}(s)=R_l(s)R_{l+1}(s)\cdots R_{l+k-1}(s),\quad l\ge 0,\ k\ge 1,\qquad(7.15)$$
and
$$Y_k^{(l)}(s)=G_l(s)G_{l-1}(s)\cdots G_{l-k+1}(s),\quad l\ge k\ge 1.\qquad(7.16)$$
Then for $s\ge 0$,
$$[I-R_U(s)]^{-1}=\begin{pmatrix} I & X_1^{(0)}(s) & X_2^{(0)}(s) & X_3^{(0)}(s) & \cdots\\ & I & X_1^{(1)}(s) & X_2^{(1)}(s) & \cdots\\ & & I & X_1^{(2)}(s) & \cdots\\ & & & I & \cdots\\ & & & & \ddots \end{pmatrix}\qquad(7.17)$$
and
$$[I-G_L(s)]^{-1}=\begin{pmatrix} I & & & &\\ Y_1^{(1)}(s) & I & & &\\ Y_2^{(2)}(s) & Y_1^{(2)}(s) & I & &\\ Y_3^{(3)}(s) & Y_2^{(3)}(s) & Y_1^{(3)}(s) & I &\\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}.\qquad(7.18)$$
To solve Eq. (7.13), we need the following assumption: the matrix $U_0(s)$ is invertible for $s\in(a,b)$, where $0\le a<b\le\infty$. The following theorem provides the unique solution to Eq. (7.13).
Theorem 7.1 If the matrix $U_0(s)$ is invertible for $s\in(a,b)$, then
$$F_0^*(s)=-\pi_0\Lambda_0U_0(s)^{-1}$$
and
$$F_k^*(s)=-\pi_0\Lambda_0U_0(s)^{-1}R_0(s)R_1(s)\cdots R_{k-1}(s),\quad k\ge 1.$$
Proof It follows from Eq. (7.13) and Eq. (7.14) that
$$F^*(s)=-(\pi_0\Lambda_0,0,0,0,\ldots)(Q-s\Lambda)^{-1}=-(\pi_0\Lambda_0,0,0,0,\ldots)[I-G_L(s)]^{-1}U_D(s)^{-1}[I-R_U(s)]^{-1}.$$
Thus, we obtain
$$F_0^*(s)=-\pi_0\Lambda_0U_0(s)^{-1}$$
and
$$F_k^*(s)=-\pi_0\Lambda_0U_0(s)^{-1}X_k^{(0)}(s)=-\pi_0\Lambda_0U_0(s)^{-1}R_0(s)R_1(s)\cdots R_{k-1}(s),\quad k\ge 1.$$
This completes the proof.
We define
$$G(t,x;y)=P\{X(t)\le x\mid X(0)=y\},$$
which is the conditional probability distribution of the process $\{X(t), t\ge 0\}$ at time $t$. When the fluid model is ergodic, we write $F(x)=\lim_{t\to\infty}G(t,x;y)$ for all $y\ge 0$. The following corollary provides the Laplace transform of the stationary probability distribution $F(x)$ of the buffer content.
Corollary 7.1 If the matrix $U_0(s)$ is invertible for $s\in(a,b)$, then
$$F^*(s)=-\pi_0\Lambda_0U_0(s)^{-1}\left[I+\sum_{l=0}^{\infty}R_0(s)R_1(s)\cdots R_l(s)\right]e.$$
In particular, if the QBD process is level-independent, that is, $A_2^{(k)}=A_2$ for all $k\ge 2$, and $A_1^{(k)}=A_1$ and $A_0^{(k)}=A_0$ for all $k\ge 1$, then
$$F^*(s)=-\pi_0\Lambda_0U_0(s)^{-1}\left\{I+R_0(s)\left[I-R(s)\right]^{-1}\right\}e.$$
Proof Noting that $F^*(s)=\sum_{k=0}^{\infty}F_k^*(s)e$, we obtain
$$F^*(s)=F_0^*(s)e+F_1^*(s)e+F_2^*(s)e+\cdots.$$
Simple computation leads to the stated result.
Now, we express the Laplace-Stieltjes transforms of both the conditional distribution and the conditional mean of a first passage time. We define the first passage time
$$T=\inf\{t>0: X(t)=0\}.$$
It is clear that the first passage time $T$ is finite a.s. if the mean drift $d<0$. We write the conditional distribution of the first passage time $T$ as
$$B_{k,i}(t,x)=P\{T\le t\mid Z(0)=(k,i), X(0)=x\}.$$
Let
$$B_k(t,x)=(B_{k,1}(t,x),B_{k,2}(t,x),\ldots,B_{k,m_k}(t,x))$$
and
$$B(t,x)=(B_0(t,x),B_1(t,x),B_2(t,x),\ldots).$$
Then the vector function $B(t,x)$ satisfies the system of differential equations
$$\frac{\partial}{\partial t}B(t,x)-\frac{\partial}{\partial x}B(t,x)\Lambda=B(t,x)Q,\quad x>0,\qquad(7.19)$$
with the initial conditions
$$B_0(t,0)=\frac{\pi_0}{\pi_0e}\overset{\mathrm{def}}{=}\theta\ \text{ if }t\ge 0,\qquad B(0,x)=0\ \text{ if }x>0,$$
and
$$B_k(0,0)=0\quad\text{if }k\ge 1.$$
Let
$$B^*(t,\eta)=\int_0^{\infty}e^{-\eta x}\mathrm{d}_xB(t,x),\quad \eta\ge 0.$$
Then
$$\int_0^{\infty}e^{-\eta x}B(t,x)\mathrm{d}x=\frac{1}{\eta}\left[B(t,0)+B^*(t,\eta)\right].$$
It follows from Eq. (7.19) that
$$\frac{\partial}{\partial t}\left\{\frac{1}{\eta}\left[B(t,0)+B^*(t,\eta)\right]\right\}=B^*(t,\eta)\Lambda+\frac{1}{\eta}\left[B(t,0)+B^*(t,\eta)\right]Q.$$
Noting that $B(t,0)=(\theta,0,0,0,\ldots)$, we obtain
$$\frac{\partial}{\partial t}B^*(t,\eta)=B^*(t,\eta)(Q+\eta\Lambda)+(\theta,0,0,0,\ldots)Q.\qquad(7.20)$$
Let
$$\widehat{B}^*(\xi,\eta)=\int_0^{\infty}e^{-\xi t}\mathrm{d}_tB^*(t,\eta),\quad \xi\ge 0.$$
Then
$$\int_0^{\infty}e^{-\xi t}B^*(t,\eta)\mathrm{d}t=\frac{1}{\xi}\left[B^*(0,\eta)+\widehat{B}^*(\xi,\eta)\right].$$
It follows from Eq. (7.20) that
$$\widehat{B}^*(\xi,\eta)=\frac{1}{\xi}\left[B^*(0,\eta)+\widehat{B}^*(\xi,\eta)\right](Q+\eta\Lambda)+\frac{1}{\xi}(\theta,0,0,0,\ldots)Q.$$
According to the initial conditions, we obtain $B^*(0,\eta)=0$. Therefore,
$$\widehat{B}^*(\xi,\eta)(Q+\eta\Lambda-\xi I)=-(\theta A_1^{(0)},\theta A_0^{(0)},0,0,\ldots).\qquad(7.21)$$
We now study the conditional distribution of the first passage time $T$. Let $H(t)=P\{T\le t\}$ and $\widetilde{H}(\xi)=\int_0^{\infty}e^{-\xi t}\mathrm{d}H(t)$ for $\xi\ge 0$.
Theorem 7.2 Suppose that
$$P\{Z(0)=(k,i), X(0)\le x\}=\begin{cases}\phi_i(x), & \text{if }k=0,\\ 0, & \text{if }k\ge 1.\end{cases}$$
Let $\Phi(x)=(\phi_1(x),\phi_2(x),\ldots,\phi_{m_0}(x))$. Then
$$\widetilde{H}(\xi)=\int_0^{\infty}\widehat{B}_0(\xi,x)\,\mathrm{d}\Phi(x)^{\mathrm{T}},$$
where $\widehat{B}_0(\xi,x)$ is determined by
$$\widehat{B}_0^*(\xi,\eta)=\theta\left[I-(\eta\Lambda_0-\xi I)U_0(\xi,\eta)^{-1}\right],$$
with $U_0(\xi,\eta)$ the U-measure of the matrix $Q+\eta\Lambda-\xi I$.
Proof Applying the RG-factorization, it follows from Eq. (7.21) that
$$\widehat{B}_0^*(\xi,\eta)=\theta\left[I-(\eta\Lambda_0-\xi I)U_0(\xi,\eta)^{-1}\right].$$
Let the initial probability $\Phi_k(x)$ of level $k$ be equal to $\Phi(x)$ or $0$ according as $k=0$ or $k\ge 1$, respectively. By the formula of total probability we obtain
$$\widetilde{H}(\xi)=\sum_{k=0}^{\infty}\int_0^{\infty}\widehat{B}_k(\xi,x)\,\mathrm{d}\Phi_k(x)^{\mathrm{T}}=\int_0^{\infty}\widehat{B}_0(\xi,x)\,\mathrm{d}\Phi(x)^{\mathrm{T}}.$$
This completes the proof.
We write the conditional mean of the first passage time $T$ as
$$H_{k,i}(x)=E[T\mid Z(0)=(k,i), X(0)=x].$$
Let
$$H_k(x)=(H_{k,1}(x),H_{k,2}(x),\ldots,H_{k,m_k}(x))$$
and
$$H(x)=(H_0(x),H_1(x),H_2(x),\ldots).$$
Then the vector function $H(x)$ satisfies the system of differential equations
$$\frac{\mathrm{d}}{\mathrm{d}x}H(x)\Lambda=H(x)Q+e^{\mathrm{T}},\quad x>0,\qquad(7.22)$$
with the boundary conditions
$$H_0(0)=0\quad\text{and}\quad H_k(0)=\frac{\pi_k}{1-\pi_0e}\overset{\mathrm{def}}{=}\theta_k,\quad k\ge 1.\qquad(7.23)$$
Let
$$H^*(s)=\int_0^{\infty}e^{-sx}\mathrm{d}H(x),\quad s\ge 0.$$
Then
$$\int_0^{\infty}e^{-sx}H(x)\mathrm{d}x=\frac{1}{s}\left[H(0)+H^*(s)\right].$$
It follows from Eq. (7.22) and Eq. (7.23) that
$$H^*(s)(Q-s\Lambda)=-\left[(0,\theta_1,\theta_2,\ldots)+se^{\mathrm{T}}\right].$$
Therefore, we obtain
$$H^*(s)=-\left[(0,\theta_1,\theta_2,\ldots)+se^{\mathrm{T}}\right](Q-s\Lambda)^{-1}=-\left[(0,\theta_1,\theta_2,\ldots)+se^{\mathrm{T}}\right][I-G_L(s)]^{-1}U_D(s)^{-1}[I-R_U(s)]^{-1}.$$
We can further express $H^*(s)$ by means of the R-, U- and G-measures of the matrix $Q-s\Lambda$ without any difficulty.
7.3 A Queue with Negative Customers
In this section, we apply the RG-factorizations to analyze a queue with negative customers, and obtain the distributions of the stationary queue length and the busy period.
Consider a single-server FCFS queue with two types of independent arrivals, positive and negative. Positive arrivals join the queue with the intention of being served and then leaving the system. At a negative arrival epoch, the system is affected if and only if customers are present: the arrival of a negative customer removes all the customers in the system. We assume that the arrivals of positive and negative customers are MAPs with matrix descriptors $(C_1,D_1)$ and $(C_2,D_2)$, respectively. The infinitesimal generators $C_1+D_1$ and $C_2+D_2$, of sizes $m_1\times m_1$ and $m_2\times m_2$, are both irreducible; thus the two Markov chains $C_1+D_1$ and $C_2+D_2$ are positive recurrent. Let $\theta_1$ and $\theta_2$ be the stationary probability vectors of $C_1+D_1$ and $C_2+D_2$, respectively. Then $\lambda_1=\theta_1D_1e$ and $\lambda_2=\theta_2D_2e$ are the stationary arrival rates of positive and negative customers, respectively.
Further, we assume that the first customer to join the queue when the server is idle has the service time distribution
$$B_0(x)=1-\exp\left\{-\int_0^x\mu_0(v)\mathrm{d}v\right\}$$
with mean $1/\mu_0\in(0,\infty)$, while the service times of all the other customers are i.i.d. random variables with distribution function
$$B(x)=1-\exp\left\{-\int_0^x\mu(v)\mathrm{d}v\right\}$$
with mean $1/\mu\in(0,\infty)$. $B_0(x)$ and $B(x)$ may be different and are referred to as the distributions of the special and regular service times, respectively. The service process and the arrival processes of positive and negative customers are assumed to be mutually independent.
7.3.1 The Supplementary Variables
We introduce several supplementary variables to construct the differential equations for the queueing model, and then provide an approach for solving these equations. The crucial step is the connection of the boundary equations to a Markov chain of GI/G/1 type.
Let $N(t)$ be the number of customers in the system at time $t$, and let $J_1(t)$ and $J_2(t)$ be the phases of the arrivals of positive and negative customers at time $t$, respectively. We define the states of the server as
$$I(t)=\begin{cases}0, & \text{if the server is idle},\\ S, & \text{if the server is working with service time distribution }B_0(x),\\ G, & \text{if the server is working with service time distribution }B(x).\end{cases}$$
For $t>0$, we define the random variable $S(t)$ as follows:
(i) if $I(t)=S$, then $S(t)$ represents the elapsed service time received by a customer with the special service time up to time $t$;
(ii) if $I(t)=G$, then $S(t)$ represents the elapsed service time received by a customer with the regular service time up to time $t$;
(iii) if $I(t)=0$, then $S(t)$ represents the elapsed time since the last service completion during a busy period up to time $t$.
Obviously, $\{I(t),N(t),J_1(t),J_2(t),S(t): t\ge 0\}$ is a Markov process. Noting that $I(t)=0$ is equivalent to $N(t)=0$, the state space
of the process is expressed as
:
{(0, j1 , j2 , x) :1 İ j1 İ m1 ,1 İ j2 İ m2 , x ı 0} * {( S , k , j1 , j2 , x) : k ı 1,1 İ j1 İ m1 ,1 İ j2 İ m2 , x ı 0} * {(G, k , j1 , j2 , x) : k ı 1,1 İ j1 İ m1 ,1 İ j2 İ m2 , x ı 0}.
We write
\[
p_{0,i,j}(t,x)\,dx = P\{I(t)=0,\ J_1(t)=i,\ J_2(t)=j,\ x\leq S(t)<x+dx\},
\]
\[
p_{S,k,i,j}(t,x)\,dx = P\{I(t)=S,\ N(t)=k,\ J_1(t)=i,\ J_2(t)=j,\ x\leq S(t)<x+dx\},
\]
\[
p_{G,k,i,j}(t,x)\,dx = P\{I(t)=G,\ N(t)=k,\ J_1(t)=i,\ J_2(t)=j,\ x\leq S(t)<x+dx\},
\]
and
\[
p_{0,i,j}(x)=\lim_{t\to\infty}p_{0,i,j}(t,x),\qquad
p_{S,k,i,j}(x)=\lim_{t\to\infty}p_{S,k,i,j}(t,x),\qquad
p_{G,k,i,j}(x)=\lim_{t\to\infty}p_{G,k,i,j}(t,x);
\]
\[
P_0(x)=(p_{0,1,1}(x),\ldots,p_{0,1,m_2}(x),\ldots,p_{0,m_1,1}(x),\ldots,p_{0,m_1,m_2}(x)),
\]
\[
P_{S,k}(x)=(p_{S,k,1,1}(x),\ldots,p_{S,k,1,m_2}(x),\ldots,p_{S,k,m_1,1}(x),\ldots,p_{S,k,m_1,m_2}(x)),
\]
\[
P_{G,k}(x)=(p_{G,k,1,1}(x),\ldots,p_{G,k,1,m_2}(x),\ldots,p_{G,k,m_1,1}(x),\ldots,p_{G,k,m_1,m_2}(x)).
\]
It is easy to see that $P_0(x)$, $P_{S,k}(x)$ and $P_{G,k}(x)$ for $k\geq 1$ are row vectors of size $m_1m_2$. Considering the number $N(t)$ of customers in the system at time $t$, the stability of the system can be discussed in the same way as in Jain and Sigman [87]. The arrival of a negative customer removes all the customers in the system; therefore, the arrival epochs of negative customers with the irreducible MAP descriptor $(C_2,D_2)$ form positive recurrent regenerative times of the system. Clearly, $\{N(t),t\geq 0\}$ is a positive recurrent regenerative process with a unique stationary distribution; therefore, the queueing system is stable.

If the system is stable, then the system of stationary differential equations of the joint probability density $\{P_0(x),P_{S,k}(x),P_{G,k}(x),k\geq 1\}$ can be written as
\[
\frac{d}{dx}P_0(x)=P_0(x)\bigl[(C_1\oplus C_2)+(I\otimes D_2)\bigr], \tag{7.24}
\]
\[
\frac{d}{dx}P_{S,1}(x)=P_{S,1}(x)\bigl[(C_1\oplus C_2)-\mu_0(x)I\bigr], \tag{7.25}
\]
for $k\geq 2$,
\[
\frac{d}{dx}P_{S,k}(x)=P_{S,k}(x)\bigl[(C_1\oplus C_2)-\mu_0(x)I\bigr]+P_{S,k-1}(x)(D_1\otimes I), \tag{7.26}
\]
\[
\frac{d}{dx}P_{G,1}(x)=P_{G,1}(x)\bigl[(C_1\oplus C_2)-\mu(x)I\bigr], \tag{7.27}
\]
and for $k\geq 2$,
\[
\frac{d}{dx}P_{G,k}(x)=P_{G,k}(x)\bigl[(C_1\oplus C_2)-\mu(x)I\bigr]+P_{G,k-1}(x)(D_1\otimes I). \tag{7.28}
\]
The joint probability density function $\{P_0(x),P_{S,k}(x),P_{G,k}(x),k\geq 1\}$ should satisfy the boundary conditions
\[
P_0(0)=\int_0^\infty \mu_0(x)P_{S,1}(x)\,dx+\int_0^\infty \mu(x)P_{G,1}(x)\,dx
+\sum_{k=1}^\infty\Bigl[\int_0^\infty P_{S,k}(x)\,dx+\int_0^\infty P_{G,k}(x)\,dx\Bigr](I\otimes D_2), \tag{7.29}
\]
\[
P_{S,1}(0)=\int_0^\infty P_0(x)\,dx\,(D_1\otimes I), \tag{7.30}
\]
\[
P_{S,k}(0)=0,\qquad k\geq 2, \tag{7.31}
\]
and
\[
P_{G,k}(0)=\int_0^\infty \mu_0(x)P_{S,k+1}(x)\,dx+\int_0^\infty \mu(x)P_{G,k+1}(x)\,dx \tag{7.32}
\]
for $k\geq 1$, together with the normalization condition
\[
\Bigl\{\int_0^\infty P_0(x)\,dx+\sum_{k=1}^\infty\Bigl[\int_0^\infty P_{S,k}(x)\,dx+\int_0^\infty P_{G,k}(x)\,dx\Bigr]\Bigr\}e=1. \tag{7.33}
\]

7.3.2 A Markov Chain of GI/G/1 Type

We now provide an approach for solving Eq. (7.24) to Eq. (7.33); the boundary equations can be described by a Markov chain of GI/G/1 type. It follows from Eq. (7.24) that
\[
P_0(x)=P_0(0)\exp\{[C_1\oplus(C_2+D_2)]x\}. \tag{7.34}
\]
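The idle-period solution in Eq. (7.34) is a matrix-exponential expression that is straightforward to evaluate numerically. The sketch below uses small hypothetical MAP descriptors standing in for $(C_1,D_1)$ and $(C_2,D_2)$ (they are illustrative, not data from the model), assembles the Kronecker sum $C_1\oplus(C_2+D_2)$, and evaluates $P_0(x)$:

```python
import numpy as np
from scipy.linalg import expm

def kron_sum(A, B):
    """Kronecker sum A ⊕ B = A ⊗ I + I ⊗ B."""
    return np.kron(A, np.eye(B.shape[0])) + np.kron(np.eye(A.shape[0]), B)

# Hypothetical MAP descriptors: (C1, D1) for positive customers,
# (C2, D2) for negative customers; each C_i + D_i is a generator.
C1 = np.array([[-3.0, 1.0], [0.5, -2.0]]); D1 = np.array([[1.5, 0.5], [0.5, 1.0]])
C2 = np.array([[-1.0, 0.2], [0.1, -0.8]]); D2 = np.array([[0.5, 0.3], [0.2, 0.5]])

# Subgenerator governing the idle period: C1 ⊕ (C2 + D2).
T = kron_sum(C1, C2 + D2)

def P0(x, P0_at_0):
    """Density vector P0(x) = P0(0) exp{[C1 ⊕ (C2 + D2)]x} of Eq. (7.34)."""
    return P0_at_0 @ expm(T * x)
```

Since $C_1$ has strictly negative row sums here, $P_0(x)$ is nonnegative with total mass strictly decaying in $x$, as a defective density should be.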
To solve Eq. (7.25) to Eq. (7.33), we define the generating functions
\[
Q_S^*(z,x)=\sum_{k=1}^\infty z^kP_{S,k}(x),\qquad
Q_G^*(z,x)=\sum_{k=1}^\infty z^kP_{G,k}(x).
\]
It follows from Eq. (7.25) and Eq. (7.26) that
\[
\frac{\partial}{\partial x}Q_S^*(z,x)=Q_S^*(z,x)\bigl[(C_1+zD_1)\oplus C_2-\mu_0(x)I\bigr],
\]
which leads to
\[
Q_S^*(z,x)=Q_S^*(z,0)\bigl[\exp\{(C_1+zD_1)x\}\otimes\exp\{C_2x\}\bigr]\overline{B}_0(x), \tag{7.35}
\]
where $\overline{B}_0(x)=1-B_0(x)$. Similarly, it follows from Eq. (7.27) and Eq. (7.28) that
\[
Q_G^*(z,x)=Q_G^*(z,0)\bigl[\exp\{(C_1+zD_1)x\}\otimes\exp\{C_2x\}\bigr]\overline{B}(x), \tag{7.36}
\]
where $\overline{B}(x)=1-B(x)$.
To obtain expressions for the vectors $P_{S,k}(x)$ and $P_{G,k}(x)$ for $k\geq 1$, we need the conditional probabilities of the MAP with matrix descriptor $(C_1,D_1)$:
\[
P_{i,j}(n,t)=P\{K(t)=n,\ J_1(t)=j \mid K(0)=0,\ J_1(0)=i\},
\]
where $K(t)$ denotes the number of arrivals of the MAP during $[0,t)$. Let $P(n,t)=(P_{i,j}(n,t))_{m_1\times m_1}$ and $P^*(z,t)=\sum_{n=0}^\infty z^nP(n,t)$. Then it follows from Chapter 5 of Neuts [24] that
\[
P^*(z,t)=\exp\{(C_1+zD_1)t\}. \tag{7.37}
\]
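Eq. (7.37) packages the Kolmogorov equations $\frac{d}{dt}P(n,t)=P(n,t)C_1+P(n-1,t)D_1$, so the matrices $P(n,t)$ can be read off as the first block row of the exponential of a block upper-bidiagonal matrix. A minimal numerical sketch, truncating the number of arrivals at a hypothetical level $N$:

```python
import numpy as np
from scipy.linalg import expm

def map_counting_probs(C1, D1, t, N):
    """Return [P(0,t), ..., P(N-1,t)] for a MAP (C1, D1) by exponentiating
    a truncated block upper-bidiagonal matrix (C1 on the diagonal blocks,
    D1 on the superdiagonal blocks)."""
    m = C1.shape[0]
    M = np.zeros((N * m, N * m))
    for n in range(N):
        M[n*m:(n+1)*m, n*m:(n+1)*m] = C1
        if n + 1 < N:
            M[n*m:(n+1)*m, (n+1)*m:(n+2)*m] = D1
    E = expm(M * t)
    return [E[0:m, n*m:(n+1)*m] for n in range(N)]
```

For $N$ large enough relative to $t$ and the arrival rate, $\sum_n P(n,t)\approx\exp\{(C_1+D_1)t\}$, which gives a simple truncation check.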
Substituting Eq. (7.37) into Eq. (7.35) and Eq. (7.36), we have
\[
P_{S,k}(x)=\sum_{j=1}^kP_{S,j}(0)\bigl[P(k-j,x)\otimes\exp\{C_2x\}\bigr]\overline{B}_0(x) \tag{7.38}
\]
and
\[
P_{G,k}(x)=\sum_{j=1}^kP_{G,j}(0)\bigl[P(k-j,x)\otimes\exp\{C_2x\}\bigr]\overline{B}(x). \tag{7.39}
\]
Clearly, $P_{S,j}(0)=0$ for $j\geq 2$ according to Eq. (7.31), and $P_{S,1}(0)$ can be determined from Eq. (7.34) and Eq. (7.30) as
\[
P_{S,1}(0)=P_0(0)\bigl\{-[C_1\oplus(C_2+D_2)]^{-1}\bigr\}(D_1\otimes I). \tag{7.40}
\]
Therefore
\[
P_{S,k}(x)=P_0(0)L\bigl[P(k-1,x)\otimes\exp\{C_2x\}\bigr]\overline{B}_0(x), \tag{7.41}
\]
where
\[
L=\bigl\{-[C_1\oplus(C_2+D_2)]^{-1}\bigr\}(D_1\otimes I),
\]
which is a stochastic matrix. Eq. (7.40) and Eq. (7.41) provide a solution for the system of differential equations Eq. (7.24) to Eq. (7.28). Further, the boundary equations Eq. (7.29) to Eq. (7.33) are used to determine the vectors $P_0(0)$ and $P_{G,k}(0)$ for $k\geq 1$, which are more complicated. To that end, we define
\[
T^{(S)}=\int_0^\infty \exp\{(C_1+D_1)x\}\otimes\exp\{C_2x\}\,\overline{B}_0(x)\,dx,
\]
\[
A_k^{(S)}=\int_0^\infty P(k,x)\otimes\exp\{C_2x\}\,dB_0(x),\qquad
A_k^{(G)}=\int_0^\infty P(k,x)\otimes\exp\{C_2x\}\,dB(x),
\]
\[
B_k^{(G)}=\int_0^\infty P(k,x)\otimes\exp\{C_2x\}\,\overline{B}(x)\,dx,
\]
\[
H_0=L\bigl[A_0^{(S)}+T^{(S)}(I\otimes D_2)\bigr],\qquad H_k=LA_k^{(S)},\quad k\geq 1,
\]
\[
H_{-1}=A_0^{(G)}+H_{-2},\qquad H_{-2}=\sum_{j=0}^\infty B_j^{(G)}(I\otimes D_2).
\]
Then it follows from Eq. (7.29) to Eq. (7.33) that
\[
P_G\Lambda=P_G, \tag{7.42}
\]
where
\[
P_G=(P_0(0),P_{G,1}(0),P_{G,2}(0),P_{G,3}(0),\ldots)
\]
and
\[
\Lambda=\begin{pmatrix}
H_0 & H_1 & H_2 & H_3 & \cdots\\
H_{-1} & A_1^{(G)} & A_2^{(G)} & A_3^{(G)} & \cdots\\
H_{-2} & A_0^{(G)} & A_1^{(G)} & A_2^{(G)} & \cdots\\
H_{-2} & & A_0^{(G)} & A_1^{(G)} & \cdots\\
\vdots & & & & \ddots
\end{pmatrix}. \tag{7.43}
\]
Theorem 7.3 The matrix $\Lambda$ is irreducible, stochastic and positive recurrent.

Proof According to the definitions of the matrices $H_k$ and $A_k^{(G)}$ for $k\geq 0$, it is not difficult to see that $\Lambda$ is irreducible.

To prove that $\Lambda$ is stochastic, we only need to check that
\[
\sum_{k=0}^\infty H_ke=e,\qquad
H_{-1}e+\sum_{k=1}^\infty A_k^{(G)}e=e,\qquad
H_{-2}e+\sum_{k=0}^\infty A_k^{(G)}e=e.
\]
It is easy to see that $H_{-1}e+\sum_{k=1}^\infty A_k^{(G)}e=H_{-2}e+\sum_{k=0}^\infty A_k^{(G)}e$. We can obtain
\begin{align*}
\sum_{k=0}^\infty H_ke
&=L\Bigl[T^{(S)}(I\otimes D_2)+\sum_{k=0}^\infty A_k^{(S)}\Bigr]e\\
&=L\int_0^\infty \exp\{(C_1+D_1)x\}\otimes\exp\{C_2x\}\,dB_0(x)\,e
 +L\int_0^\infty \exp\{(C_1+D_1)x\}\otimes\exp\{C_2x\}\,\overline{B}_0(x)\,dx\,(I\otimes D_2)e\\
&=L\int_0^\infty \exp\{[(C_1+D_1)\oplus C_2]x\}\,dB_0(x)\,e
 +L\int_0^\infty \exp\{[(C_1+D_1)\oplus C_2]x\}\,\overline{B}_0(x)\,dx\,(I\otimes D_2)e.
\end{align*}
Noting that
\[
\int_0^\infty \exp\{[(C_1+D_1)\oplus C_2]x\}\,\overline{B}_0(x)\,dx\,(I\otimes D_2)e
=e-\int_0^\infty \exp\{[(C_1+D_1)\oplus C_2]x\}\,dB_0(x)\,e
\]
and that the matrix $L$ is stochastic, we obtain $\sum_{k=0}^\infty H_ke=e$. Similarly, we can prove that $H_{-2}e+\sum_{k=0}^\infty A_k^{(G)}e=e$.

Since $H_{-2}+\sum_{k=0}^\infty A_k^{(G)}$ is stochastic and $H_{-2}e\gneq 0$, the matrix $\sum_{k=0}^\infty A_k^{(G)}$ is substochastic. The matrix $\Lambda$ is irreducible and stochastic, and the matrix $\sum_{k=0}^\infty A_k^{(G)}$ is substochastic; thus $\Lambda$ is positive recurrent by means of Theorem 3.16.
This completes the proof.

Let $(x_0,x_1,x_2,\ldots)$ be the stationary probability vector of the matrix $\Lambda$. Then it follows from Eq. (7.42) that
\[
P_G=\alpha(x_0,x_1,x_2,\ldots),
\]
where $\alpha$ is determined by Eq. (7.33) as
\[
\alpha=\Bigl\{x_0\bigl[V+LT^{(S)}\bigr]e+\sum_{k=1}^\infty x_kT^{(G)}e\Bigr\}^{-1}
\]
with
\[
V=-\bigl[C_1\oplus(C_2+D_2)\bigr]^{-1}
\qquad\text{and}\qquad
T^{(G)}=\int_0^\infty \exp\{(C_1+D_1)x\}\otimes\exp\{C_2x\}\,\overline{B}(x)\,dx.
\]
Constructive Computation in Stochastic Models with Applications
We solve the equation ( x0 , x1 , x2 ,!) ( x0 , x1 , x2 ,!) / for ( x0 , x1 , x2 ,!) based on the censoring technique and the RG-factorization. Note that the Markov chain / is of GI G 1 type, and is analyzed in Chapter 3 in detail. Let Q /[ı1] . Then the matrix Q is of M G 1 type. We denote by G the f
¦A
(G ) i
minimal nonnegative solution to the matrix equation G
G i . Thus we
i 0
have f
A1(G ) ¦ Ai(G1) G i ,
)0
(7.44)
i 1
f
R0, j
¦H
G i [ I )0 ]1 ,
j ı 1,
(7.45)
G i 1[ I )0 ]1 ,
j ı 1.
(7.46)
i j
i 0
and f
Rj
¦A
(G) i j
i 1
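The matrix $G$ above is defined as the minimal nonnegative solution of a fixed-point equation, and the simplest way to approximate it is successive substitution starting from the zero matrix. The sketch below uses a small hypothetical truncated sequence $\{A_i\}$ of nonnegative blocks with $\sum_i A_i$ strictly substochastic; it only illustrates the iteration and is not data computed from the queueing model:

```python
import numpy as np

def minimal_G(A, tol=1e-13, max_iter=100_000):
    """Successive substitution for the minimal nonnegative solution of
    G = A[0] + A[1] @ G + A[2] @ G^2 + ...  (truncated M/G/1-type blocks)."""
    m = A[0].shape[0]
    G = np.zeros((m, m))
    for _ in range(max_iter):
        G_new = sum(A[i] @ np.linalg.matrix_power(G, i) for i in range(len(A)))
        if np.abs(G_new - G).max() < tol:
            return G_new
        G = G_new
    return G

# Hypothetical truncated block sequence (its sum is strictly substochastic).
A = [np.array([[0.30, 0.10], [0.10, 0.30]]),
     np.array([[0.20, 0.05], [0.05, 0.20]]),
     np.array([[0.10, 0.05], [0.00, 0.10]])]
G = minimal_G(A)
```

In the substochastic case the iterates increase monotonically from zero and $G$ itself is substochastic, which matches the role $G$ plays in Eqs. (7.44) to (7.46).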
The following lemma is useful for studying the generating function of the matrix sequence $\{R_k\}$.

Lemma 7.1 The matrix $I-A_G^*(1)$ is invertible, where $A_G^*(z)=\sum_{k=0}^\infty z^kA_k^{(G)}$.

Proof Let $\lambda_1,\lambda_2,\ldots,\lambda_{m_1}$ be the $m_1$ eigenvalues of the matrix $C_1+D_1$ of size $m_1\times m_1$, and $\mu_1,\mu_2,\ldots,\mu_{m_2}$ the $m_2$ eigenvalues of the matrix $C_2$ of size $m_2\times m_2$. We denote by $\lambda_{i_0}$ and $\mu_{j_0}$ the eigenvalues with largest real parts of the matrices $C_1+D_1$ and $C_2$, respectively. Since $(C_1,D_1)$ and $(C_2,D_2)$ are the irreducible matrix descriptors of the MAPs of positive and negative customers, respectively, the matrices $C_1+D_1$ and $C_2+D_2$ are infinitesimal generators of two continuous-time Markov chains. Therefore $\lambda_{i_0}=0$ due to $(C_1+D_1)e=0$, and $\mu_{j_0}<0$ due to both $C_2e\lneq 0$ and the fact that $C_2+D_2$ is irreducible. At the same time, $\operatorname{Re}(\lambda_i)<0$ for $i\neq i_0$, since $C_1+D_1$ is irreducible, and $\operatorname{Re}(\mu_j)\leq\mu_{j_0}<0$ for $j\neq j_0$. Note that the eigenvalues of the matrix $(C_1+D_1)\oplus C_2$ are $\lambda_i+\mu_j$ for $1\leq i\leq m_1$ and $1\leq j\leq m_2$ (a standard result on the Kronecker sum; see, for example, Section 2.4 in Graham [12]); the eigenvalue with largest real part of $(C_1+D_1)\oplus C_2$ is $\mu_{j_0}<0$, due to the fact that
\[
\operatorname{Re}(\lambda_i+\mu_j)=\operatorname{Re}(\lambda_i)+\operatorname{Re}(\mu_j)
\leq \operatorname{Re}(\lambda_{i_0})+\operatorname{Re}(\mu_{j_0})=\mu_{j_0}<0.
\]
Since
\[
A_G^*(1)=\int_0^\infty \exp\{[(C_1+D_1)\oplus C_2]x\}\,dB(x),
\]
the eigenvalues of the matrix $A_G^*(1)$ are $\int_0^\infty e^{(\lambda_i+\mu_j)x}\,dB(x)$ for $1\leq i\leq m_1$ and $1\leq j\leq m_2$. Using $\operatorname{Re}(\lambda_i+\mu_j)<0$, we obtain
\[
\Bigl|\int_0^\infty e^{(\lambda_i+\mu_j)x}\,dB(x)\Bigr|
\leq\int_0^\infty e^{\operatorname{Re}(\lambda_i+\mu_j)x}\,dB(x)<1
\]
for $1\leq i\leq m_1$ and $1\leq j\leq m_2$. This implies that all the $m_1m_2$ eigenvalues of the matrix $I-A_G^*(1)$ are nonzero. Therefore, the matrix $I-A_G^*(1)$ is invertible. This completes the proof.

It follows from Theorem 3.5 that
\[
zI-A_G^*(z)=[I-R^*(z)](I-\Phi_0)(zI-G). \tag{7.47}
\]
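The Kronecker-sum spectral fact used in the proof of Lemma 7.1 — that the eigenvalues of $A\oplus B$ are exactly the pairwise sums $\lambda_i+\mu_j$ — is easy to confirm numerically. The matrices below are small hypothetical stand-ins for $C_1+D_1$ and $C_2$:

```python
import numpy as np

def kron_sum(A, B):
    """Kronecker sum A ⊕ B = A ⊗ I + I ⊗ B."""
    return np.kron(A, np.eye(B.shape[0])) + np.kron(np.eye(A.shape[0]), B)

A = np.array([[-2.0, 1.0], [0.5, -1.5]])   # stand-in for C1 + D1 (hypothetical)
B = np.array([[-1.0, 0.3], [0.2, -0.9]])   # stand-in for C2 (hypothetical)

lam = np.linalg.eigvals(A)
mu = np.linalg.eigvals(B)
pairwise = np.sort_complex(np.array([l + u for l in lam for u in mu]))
direct = np.sort_complex(np.linalg.eigvals(kron_sum(A, B)))
```

Comparing `pairwise` and `direct` entrywise verifies the spectral identity for this example.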
Since $A_G^*(1)e=e-H_{-2}e\lneq e$, it is clear that $I-A_G^*(1)$ is invertible, which implies that $I-R^*(1)$, $I-\Phi_0$ and $I-G$ in Eq. (7.47) are all invertible.

Let $X^*(z)=\sum_{k=1}^\infty z^kx_k$. Then
\[
X^*(z)=\frac{1-x_0e}{x_0R_0^*(1)[I-R^*(1)]^{-1}e}\;x_0R_0^*(z)[I-R^*(z)]^{-1}, \tag{7.48}
\]
or, comparing the coefficients of $z^k$,
\[
x_k=\frac{1-x_0e}{x_0R_0^*(1)[I-R^*(1)]^{-1}e}\;x_0\Bigl(R_{0,\cdot}\ast\sum_{n=0}^\infty R^{\ast n}\Bigr)_k,\qquad k\geq 1, \tag{7.49}
\]
where $R_0^*(z)=\sum_{j=1}^\infty z^jR_{0,j}$, $R^*(z)=\sum_{j=1}^\infty z^jR_j$, and $(R_{0,\cdot}\ast R^{\ast n})_k$ denotes the $k$th term of the convolution of the sequence $\{R_{0,j}\}$ with the $n$-fold convolution of the sequence $\{R_j\}$.
The following lemma provides the G-measure and the censored chain $\Psi_0$, both of which are necessary for determining the crucial vector $x_0$ given in Eq. (7.49).

Lemma 7.2 The transition probability matrix $\Psi_0$ of the Markov chain $\Lambda$ censored to level 0 is given by
\[
\Psi_0=H_0+\Bigl(\sum_{i=0}^\infty H_{i+1}G^i\Bigr)G_{1,0}+\Bigl(\sum_{j=2}^\infty\sum_{i=0}^\infty H_{i+j}G^i\Bigr)G_{2,0}, \tag{7.50}
\]
where
\[
G_{2,0}=\Bigl[I-\sum_{l=0}^\infty\sum_{k=1}^\infty A_{k+l}^{(G)}G^{k-1}\Bigr]^{-1}H_{-2}
\]
and
\[
G_{1,0}=\Bigl[I-\sum_{i=1}^\infty A_i^{(G)}G^{i-1}\Bigr]^{-1}\Bigl\{H_{-1}+\Bigl[\sum_{j=1}^\infty\sum_{i=1}^\infty A_{i+j}^{(G)}G^{i-1}\Bigr]G_{2,0}\Bigr\}.
\]

Proof It follows from Eq. (3.17) that
\[
\Psi_0=H_0+\sum_{j=1}^\infty R_{0,j}(I-\Phi_0)G_{j,0},
\]
which leads to Eq. (7.50) by using Eq. (7.45) and $G_{i,0}=G_{2,0}$ for $i\geq 2$. It follows from Eq. (3.16) that
\[
(I-\Phi_0)G_{k,0}=H_{-k}+\sum_{j=1}^\infty R_j(I-\Phi_0)G_{j+k,0},\qquad k\geq 1
\]
(with $H_{-k}=H_{-2}$ for $k\geq 2$), which leads to
\[
(I-\Phi_0)G_{1,0}=H_{-1}+\sum_{j=1}^\infty R_j(I-\Phi_0)G_{2,0} \tag{7.51}
\]
and
\[
(I-\Phi_0)G_{2,0}=H_{-2}+\sum_{j=1}^\infty R_j(I-\Phi_0)G_{2,0} \tag{7.52}
\]
by using $G_{i,0}=G_{2,0}$ for $i\geq 2$. Eq. (7.51) and Eq. (7.52) imply
\[
\Bigl[I-\sum_{i=1}^\infty A_i^{(G)}G^{i-1}\Bigr]G_{1,0}=H_{-1}+\Bigl[\sum_{j=1}^\infty\sum_{i=1}^\infty A_{i+j}^{(G)}G^{i-1}\Bigr]G_{2,0}
\]
and
\[
\Bigl[I-\sum_{i=1}^\infty A_i^{(G)}G^{i-1}\Bigr]G_{2,0}=H_{-2}+\Bigl[\sum_{j=1}^\infty\sum_{i=1}^\infty A_{i+j}^{(G)}G^{i-1}\Bigr]G_{2,0},
\]
respectively, by applying Eq. (7.44) and Eq. (7.46). Since the matrices $I-\sum_{j=0}^\infty\sum_{i=1}^\infty A_{i+j}^{(G)}G^{i-1}$ and $I-\sum_{i=1}^\infty A_i^{(G)}G^{i-1}$ are invertible, we obtain the stated expressions for $G_{2,0}$ and $G_{1,0}$.
This completes the proof.

We now summarize the above discussions into the following theorem.

Theorem 7.4 For the stable system, let $y_0$ be the stationary probability vector of the censored Markov chain $\Psi_0$ given in Eq. (7.50). Then
\[
P_0(x)=P_0(0)\exp\{[C_1\oplus(C_2+D_2)]x\},
\]
\[
P_{S,k}(x)=P_0(0)L\bigl[P(k-1,x)\otimes\exp\{C_2x\}\bigr]\overline{B}_0(x),\qquad k\geq 1,
\]
and
\[
P_{G,k}(x)=\sum_{j=1}^kP_{G,j}(0)\bigl[P(k-j,x)\otimes\exp\{C_2x\}\bigr]\overline{B}(x),\qquad k\geq 1,
\]
where
\[
P_0(0)=\frac{x_0}{x_0[V+LT^{(S)}]e+X^*(1)T^{(G)}e}
\]
and
\[
P_{G,k}(0)=\frac{1-x_0e}{x_0[V+LT^{(S)}]e+X^*(1)T^{(G)}e}\cdot
\frac{x_0\bigl(R_{0,\cdot}\ast\sum_{n=0}^\infty R^{\ast n}\bigr)_k}{x_0R_0^*(1)[I-R^*(1)]^{-1}e},\qquad k\geq 1,
\]
with
\[
x_0=\frac{y_0}{1+y_0R_0^*(1)[I-R^*(1)]^{-1}e}.
\]

7.3.3 The Stationary Queue Length
Now we consider the distribution of the stationary queue length. Since $N(t)$ is the number of customers in the system at time $t$, we write
\[
p_k=\lim_{t\to\infty}P\{N(t)=k\},\qquad k\geq 0,
\]
\[
p_k^{(S)}=\lim_{t\to\infty}P\{N(t)=k,\ I(t)=S\},\qquad k\geq 1,
\]
and
\[
p_k^{(G)}=\lim_{t\to\infty}P\{N(t)=k,\ I(t)=G\},\qquad k\geq 1.
\]
Clearly, $p_k=p_k^{(S)}+p_k^{(G)}$ for $k\geq 1$.

Theorem 7.5 If the system is stable, then
\[
\begin{cases}
p_0=\alpha x_0Ve,\\[4pt]
p_k=\alpha\Bigl[x_0LB_{k-1}^{(S)}+\displaystyle\sum_{j=1}^kx_jB_{k-j}^{(G)}\Bigr]e,\qquad k\geq 1,
\end{cases}
\]
where
\[
B_k^{(S)}=\int_0^\infty P(k,x)\otimes\exp\{C_2x\}\,\overline{B}_0(x)\,dx,\qquad k\geq 0.
\]
Proof It follows from Eq. (7.34) that
\[
p_0=\int_0^\infty P_0(x)\,dx\,e=P_0(0)\bigl\{-[C_1\oplus(C_2+D_2)]^{-1}\bigr\}e=\alpha x_0Ve,
\]
and from Eq. (7.39) and Eq. (7.41) that
\[
p_k^{(S)}=\int_0^\infty P_{S,k}(x)\,dx\,e=P_0(0)LB_{k-1}^{(S)}e=\alpha x_0LB_{k-1}^{(S)}e
\]
and
\[
p_k^{(G)}=\int_0^\infty P_{G,k}(x)\,dx\,e=\sum_{j=1}^kP_{G,j}(0)B_{k-j}^{(G)}e=\alpha\sum_{j=1}^kx_jB_{k-j}^{(G)}e.
\]
This completes the proof.

If the system is stable and $N=\lim_{t\to\infty}N(t)$, then
\[
E[z^N]=\alpha x_0Ve+\alpha x_0Lz\int_0^\infty \exp\{[(C_1+zD_1)\oplus C_2]x\}\,\overline{B}_0(x)\,dx\,e
+\frac{\alpha(1-x_0e)}{x_0R_0^*(1)[I-R^*(1)]^{-1}e}\;x_0R_0^*(z)[I-R^*(z)]^{-1}\int_0^\infty \exp\{[(C_1+zD_1)\oplus C_2]x\}\,\overline{B}(x)\,dx\,e.
\]

7.3.4 The Busy Period
We now provide an analysis of the busy period of the negative-customer queue. Let us introduce a new absorbing state $\Delta$ and modify the Markov process in the following way: whenever the process visits a state $(0,j_1,j_2,x)$, it is absorbed into state $\Delta$. We study the absorption time of the modified process, given that it starts from the state set $\{(S,1,j_1,j_2,0): 1\leq j_1\leq m_1,\ 1\leq j_2\leq m_2\}$ with probability vector $\theta_0$ of size $m_1m_2$. An analysis similar to that for Eq. (7.24) to Eq. (7.33) can be used to obtain the following differential equations:
\[
\Bigl(\frac{\partial}{\partial t}+\frac{\partial}{\partial x}\Bigr)P_{S,1}(t,x)=P_{S,1}(t,x)\bigl[(C_1\oplus C_2)-\mu_0(x)I\bigr], \tag{7.53}
\]
for $k\geq 2$,
\[
\Bigl(\frac{\partial}{\partial t}+\frac{\partial}{\partial x}\Bigr)P_{S,k}(t,x)=P_{S,k}(t,x)\bigl[(C_1\oplus C_2)-\mu_0(x)I\bigr]+P_{S,k-1}(t,x)(D_1\otimes I), \tag{7.54}
\]
\[
\Bigl(\frac{\partial}{\partial t}+\frac{\partial}{\partial x}\Bigr)P_{G,1}(t,x)=P_{G,1}(t,x)\bigl[(C_1\oplus C_2)-\mu(x)I\bigr], \tag{7.55}
\]
and for $k\geq 2$,
\[
\Bigl(\frac{\partial}{\partial t}+\frac{\partial}{\partial x}\Bigr)P_{G,k}(t,x)=P_{G,k}(t,x)\bigl[(C_1\oplus C_2)-\mu(x)I\bigr]+P_{G,k-1}(t,x)(D_1\otimes I), \tag{7.56}
\]
with the boundary conditions
\[
P_{S,1}(t,0)=\theta_0\delta(t),\qquad P_{S,l}(t,0)=0,\quad l\geq 2, \tag{7.57}
\]
and for $k\geq 1$,
\[
P_{G,k}(t,0)=\int_0^\infty \mu_0(x)P_{S,k+1}(t,x)\,dx+\int_0^\infty \mu(x)P_{G,k+1}(t,x)\,dx, \tag{7.58}
\]
and with the initial conditions
\[
P_{S,1}(0,x)=\theta_0\delta(x),\qquad P_{S,l}(0,x)=0,\quad l\geq 2, \tag{7.59}
\]
and
\[
P_{G,k}(0,x)=0,\qquad k\geq 1, \tag{7.60}
\]
where $\delta(\cdot)$ denotes the unit impulse (Dirac delta) function.
Let
\[
\widetilde{P}_{S,k}(s,x)=\int_0^\infty e^{-st}P_{S,k}(t,x)\,dt,\qquad
\widetilde{P}_{G,k}(s,x)=\int_0^\infty e^{-st}P_{G,k}(t,x)\,dt,
\]
\[
\widetilde{Q}_S^*(s,z,x)=\sum_{k=1}^\infty z^k\widetilde{P}_{S,k}(s,x),\qquad
\widetilde{Q}_G^*(s,z,x)=\sum_{k=1}^\infty z^k\widetilde{P}_{G,k}(s,x).
\]
Then it follows from Eq. (7.53), Eq. (7.54) and Eq. (7.57) that
\[
\widetilde{Q}_S^*(s,z,x)=z\theta_0\exp\{[-sI+(C_1+zD_1)\oplus C_2]x\}\overline{B}_0(x)
=\theta_0\sum_{k=1}^\infty z^k\bigl[P(k-1,x)\otimes\exp\{C_2x\}\bigr]e^{-sx}\overline{B}_0(x).
\]
Hence,
\[
\widetilde{P}_{S,k}(s,x)=\theta_0\bigl[P(k-1,x)\otimes\exp\{C_2x\}\bigr]e^{-sx}\overline{B}_0(x),\qquad k\geq 1. \tag{7.61}
\]
It follows from Eq. (7.55) and Eq. (7.56) that
\[
\widetilde{Q}_G^*(s,z,x)=\widetilde{Q}_G^*(s,z,0)\exp\{[-sI+(C_1+zD_1)\oplus C_2]x\}\overline{B}(x)
=\sum_{k=1}^\infty z^k\sum_{j=1}^k\widetilde{P}_{G,j}(s,0)\bigl[P(k-j,x)\otimes\exp\{C_2x\}\bigr]e^{-sx}\overline{B}(x).
\]
We obtain that for $k\geq 1$,
\[
\widetilde{P}_{G,k}(s,x)=\sum_{j=1}^k\widetilde{P}_{G,j}(s,0)\bigl[P(k-j,x)\otimes\exp\{C_2x\}\bigr]e^{-sx}\overline{B}(x). \tag{7.62}
\]
Clearly, $\widetilde{P}_{S,k}(s,x)$ is explicitly expressed in terms of the given data, while $\widetilde{P}_{G,k}(s,x)$ is expressed in terms of both the given data and the vectors $\widetilde{P}_{G,j}(s,0)$ for $1\leq j\leq k$, where the $\widetilde{P}_{G,j}(s,0)$ can be determined by the boundary condition Eq. (7.58). It follows from Eq. (7.58) that for $k\geq 1$,
\[
\widetilde{P}_{G,k}(s,0)=\int_0^\infty \mu_0(x)\widetilde{P}_{S,k+1}(s,x)\,dx+\int_0^\infty \mu(x)\widetilde{P}_{G,k+1}(s,x)\,dx. \tag{7.63}
\]
Substituting Eq. (7.61) and Eq. (7.62) into Eq. (7.63) leads to
\[
\widetilde{P}_G(s)\bigl[I-\widetilde{Q}(s)\bigr]=\widetilde{\Theta}(s), \tag{7.64}
\]
where
\[
\widetilde{P}_G(s)=\bigl(\widetilde{P}_{G,1}(s,0),\widetilde{P}_{G,2}(s,0),\widetilde{P}_{G,3}(s,0),\widetilde{P}_{G,4}(s,0),\ldots\bigr),
\]
\[
\widetilde{\Theta}(s)=\bigl(\theta_0\widetilde{A}_1^{(S)}(s),\theta_0\widetilde{A}_2^{(S)}(s),\theta_0\widetilde{A}_3^{(S)}(s),\theta_0\widetilde{A}_4^{(S)}(s),\ldots\bigr)
\]
and
\[
\widetilde{Q}(s)=\begin{pmatrix}
\widetilde{A}_1^{(G)}(s) & \widetilde{A}_2^{(G)}(s) & \widetilde{A}_3^{(G)}(s) & \cdots\\
\widetilde{A}_0^{(G)}(s) & \widetilde{A}_1^{(G)}(s) & \widetilde{A}_2^{(G)}(s) & \cdots\\
 & \widetilde{A}_0^{(G)}(s) & \widetilde{A}_1^{(G)}(s) & \cdots\\
 & & \widetilde{A}_0^{(G)}(s) & \cdots\\
 & & & \ddots
\end{pmatrix}
\]
with
\[
\widetilde{A}_k^{(S)}(s)=\int_0^\infty\bigl[P(k,x)\otimes\exp\{C_2x\}\bigr]e^{-sx}\,dB_0(x),\qquad k\geq 0,
\]
and
\[
\widetilde{A}_k^{(G)}(s)=\int_0^\infty\bigl[P(k,x)\otimes\exp\{C_2x\}\bigr]e^{-sx}\,dB(x),\qquad k\geq 0.
\]
For the discrete-time Markov chain $\widetilde{Q}(s)$ of M/G/1 type without boundary, we easily obtain the R-measure $\{\widetilde{R}_k(s)\}$, the U-measure $\widetilde{\Phi}_0(s)$, and the G-measure $\widetilde{G}(s)$. Therefore, the RG-factorization of the matrix $I-\widetilde{Q}(s)$ is given by
\[
I-\widetilde{Q}(s)=\bigl[I-\widetilde{R}_U(s)\bigr]\bigl[I-\widetilde{\Phi}_D(s)\bigr]\bigl[I-\widetilde{G}_L(s)\bigr], \tag{7.65}
\]
where
\[
\widetilde{R}_U(s)=\begin{pmatrix}
0 & \widetilde{R}_1(s) & \widetilde{R}_2(s) & \widetilde{R}_3(s) & \cdots\\
 & 0 & \widetilde{R}_1(s) & \widetilde{R}_2(s) & \cdots\\
 & & 0 & \widetilde{R}_1(s) & \cdots\\
 & & & 0 & \cdots\\
 & & & & \ddots
\end{pmatrix},
\qquad
\widetilde{\Phi}_D(s)=\operatorname{diag}\bigl(\widetilde{\Phi}_0(s),\widetilde{\Phi}_0(s),\widetilde{\Phi}_0(s),\ldots\bigr),
\]
and $\widetilde{G}_L(s)$ is the block lower-triangular matrix with the blocks $\widetilde{G}(s)$ on the subdiagonal. If $\operatorname{Re}(s)>0$, then $I-\widetilde{R}_U(s)$, $I-\widetilde{\Phi}_D(s)$ and $I-\widetilde{G}_L(s)$ are invertible, and their inverses are respectively given by
\[
[I-\widetilde{R}_U(s)]^{-1}=\begin{pmatrix}
I & X_1(s) & X_2(s) & X_3(s) & \cdots\\
 & I & X_1(s) & X_2(s) & \cdots\\
 & & I & X_1(s) & \cdots\\
 & & & I & \cdots\\
 & & & & \ddots
\end{pmatrix},
\]
where
\[
X_l(s)=\sum_{i=1}^\infty\ \sum_{\substack{n_1+n_2+\cdots+n_i=l\\ n_j\geq 1,\ 1\leq j\leq i}}\widetilde{R}_{n_1}(s)\widetilde{R}_{n_2}(s)\cdots\widetilde{R}_{n_i}(s),\qquad l\geq 1,
\]
\[
[I-\widetilde{\Phi}_D(s)]^{-1}=\operatorname{diag}\bigl([I-\widetilde{\Phi}_0(s)]^{-1},[I-\widetilde{\Phi}_0(s)]^{-1},[I-\widetilde{\Phi}_0(s)]^{-1},\ldots\bigr)
\]
and
\[
[I-\widetilde{G}_L(s)]^{-1}=\begin{pmatrix}
I & & & \\
\widetilde{G}(s) & I & & \\
\widetilde{G}^2(s) & \widetilde{G}(s) & I & \\
\widetilde{G}^3(s) & \widetilde{G}^2(s) & \widetilde{G}(s) & I\\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix}.
\]
Let $\boldsymbol{B}$ be the absorption time of the modified process, given that it starts from the state set $\{(S,1,j_1,j_2,0):1\leq j_1\leq m_1,\ 1\leq j_2\leq m_2\}$ with probability vector
\[
\theta_0=\frac{\alpha x_0V(D_1\otimes I)}{\alpha x_0V(D_1\otimes I)e}.
\]
We write $B(t)=P\{\boldsymbol{B}>t\}$ and $\widetilde{B}(s)=\int_0^\infty e^{-st}B(t)\,dt$.

Theorem 7.6 If $\operatorname{Re}(s)>0$, then
\[
\widetilde{B}(s)=\theta_0\int_0^\infty \exp\{[(C_1+D_1)\oplus C_2]x\}e^{-sx}\overline{B}_0(x)\,dx\,e
+\theta_0\Bigl\{\sum_{l=1}^\infty\Bigl[\widetilde{A}_l^{(G)}(s)+\sum_{k=l+1}^\infty\widetilde{A}_k^{(G)}(s)\widetilde{G}^{\,k-l}(s)\Bigr]\Bigr\}
\bigl[I-\widetilde{\Phi}_0(s)\bigr]^{-1}\Bigl[I-\sum_{k=1}^\infty\widetilde{R}_k(s)\Bigr]^{-1}
\int_0^\infty \exp\{[(C_1+D_1)\oplus C_2]x\}e^{-sx}\overline{B}(x)\,dx\,e.
\]

Proof
By using the supplementary variable method we can obtain
\[
\widetilde{B}(s)=\int_0^\infty \widetilde{Q}_S^*(s,1,x)\,dx\,e+\int_0^\infty \widetilde{Q}_G^*(s,1,x)\,dx\,e.
\]
Noting that
\[
\int_0^\infty \widetilde{Q}_S^*(s,1,x)\,dx=\theta_0\int_0^\infty \exp\{[(C_1+D_1)\oplus C_2]x\}e^{-sx}\overline{B}_0(x)\,dx
\]
and
\[
\int_0^\infty \widetilde{Q}_G^*(s,1,x)\,dx=\widetilde{Q}_G^*(s,1,0)\int_0^\infty \exp\{[(C_1+D_1)\oplus C_2]x\}e^{-sx}\overline{B}(x)\,dx,
\]
we only need to compute $\widetilde{Q}_G^*(s,1,0)$. Since $\widetilde{Q}_G^*(s,1,0)=\sum_{k=1}^\infty\widetilde{P}_{G,k}(s,0)$, we have $\widetilde{Q}_G^*(s,1,0)=\widetilde{P}_G(s)(I,I,I,\ldots)^{\mathrm T}$. It follows from Eq. (7.64) and Eq. (7.65) that
\begin{align*}
\widetilde{Q}_G^*(s,1,0)&=\widetilde{\Theta}(s)\bigl[I-\widetilde{Q}(s)\bigr]^{-1}(I,I,I,\ldots)^{\mathrm T}\\
&=\widetilde{\Theta}(s)\bigl[I-\widetilde{G}_L(s)\bigr]^{-1}\bigl[I-\widetilde{\Phi}_D(s)\bigr]^{-1}\bigl[I-\widetilde{R}_U(s)\bigr]^{-1}(I,I,I,\ldots)^{\mathrm T}\\
&=\theta_0\sum_{l=1}^\infty\Bigl[\widetilde{A}_l^{(G)}(s)+\sum_{k=l+1}^\infty\widetilde{A}_k^{(G)}(s)\widetilde{G}^{\,k-l}(s)\Bigr]
\bigl[I-\widetilde{\Phi}_0(s)\bigr]^{-1}\Bigl[I-\sum_{k=1}^\infty\widetilde{R}_k(s)\Bigr]^{-1}.
\end{align*}
Some simple computations then lead to the desired result.

It is easy to see from Theorem 7.6 that
\[
E[\boldsymbol{B}]=\theta_0\int_0^\infty \exp\{[(C_1+D_1)\oplus C_2]x\}\overline{B}_0(x)\,dx\,e
+\theta_0\Bigl\{\sum_{l=1}^\infty\Bigl[\widetilde{A}_l^{(G)}(0)+\sum_{k=l+1}^\infty\widetilde{A}_k^{(G)}(0)\widetilde{G}^{\,k-l}(0)\Bigr]\Bigr\}
\bigl[I-\widetilde{\Phi}_0(0)\bigr]^{-1}\Bigl[I-\sum_{k=1}^\infty\widetilde{R}_k(0)\Bigr]^{-1}
\int_0^\infty \exp\{[(C_1+D_1)\oplus C_2]x\}\overline{B}(x)\,dx\,e.
\]

7.4 A Repairable Retrial Queue
In this section, we consider a BMAP/G/1 retrial queue with a server subject to breakdowns and repairs, and obtain the distribution of the stationary queue length, the stationary availability and the stationary failure frequency. The retrial queueing model is described as follows.

The arrival process: The arrivals to the retrial queue are modelled by a BMAP with irreducible matrix descriptor $\{D_k,k\geq 0\}$ of size $m$. We assume that $D=\sum_{k=0}^\infty D_k$ is the infinitesimal generator of an irreducible Markov chain with $De=0$. Let $\omega$ be the stationary probability vector of the Markov chain $D$. Then $\lambda=\omega\sum_{k=1}^\infty kD_ke$ is the stationary arrival rate of the BMAP.
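The stationary vector and the arrival rate above are simple linear-algebra computations. A sketch with small hypothetical BMAP blocks (the matrices are illustrative, not data from the text):

```python
import numpy as np

def stationary_vector(D):
    """Stationary probability vector w of an irreducible generator D:
    wD = 0 and we = 1, solved as an overdetermined least-squares system."""
    m = D.shape[0]
    A = np.vstack([D.T, np.ones((1, m))])
    b = np.zeros(m + 1); b[-1] = 1.0
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

# Hypothetical BMAP blocks: D0 (no arrival), D1 and D2 (batch sizes 1 and 2).
D0 = np.array([[-2.0, 0.4], [0.3, -1.5]])
D1 = np.array([[1.0, 0.2], [0.4, 0.5]])
D2 = np.array([[0.3, 0.1], [0.1, 0.2]])
D = D0 + D1 + D2                            # generator: rows sum to zero
w = stationary_vector(D)
lam = w @ (1 * D1 + 2 * D2) @ np.ones(2)    # stationary arrival rate
```

The arrival rate is the stationary expectation of the batch-weighted transition rates, i.e. the numerical analogue of $\lambda=\omega\sum_k kD_ke$.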
The service times: The service times $\{\chi_n,n\geq 1\}$ of the customers are assumed to be i.i.d. random variables whose distribution function is given by
\[
B(t)=P\{\chi_n\leq t\}=1-\exp\Bigl\{-\int_0^t\mu(v)\,dv\Bigr\}
\]
with $E[\chi_n]=1/\mu<\infty$.

The life time and the repair time: The life time $X$ of the server is exponential with mean $1/\alpha$, and does not change during the idle period of the server. The repair time $Y$ of the server has the distribution function
\[
V(y)=P\{Y\leq y\}=1-\exp\Bigl\{-\int_0^y\beta(v)\,dv\Bigr\}
\]
with $E[Y]=1/\beta<\infty$.

The retrial rule: We assume that there is no waiting space in the retrial queue, and the size of the orbit is infinite. If an arrival, either a primary or a retrial customer, finds no customer in the server, then it enters the server immediately and receives service; otherwise, it enters the orbit and makes a retrial at a later time. Returning customers behave independently of each other and are persistent, that is, they keep making retrials until they receive their requested service. Successive inter-retrial times $\{\xi_k,k\geq 1\}$ are i.i.d. exponential random variables with mean inter-retrial time $1/\theta$.

The service discipline: If the server is busy at the arrival epoch, then all arriving calls join the orbit. If the server is free, then one of the arriving customers begins his service and the others form sources of repeated calls.

The repair discipline: When the server fails, it enters the failure state and undergoes repair immediately. The customer who has been partially served has to wait for the server to continue the service. As soon as the repair of the server is completed, the server enters the working state immediately and continues to serve the customer. We assume that the repaired server is as good as a new one, and that the service time is cumulative.

The independence: We assume that all the random variables defined above are mutually independent.
7.4.1 The Supplementary Variables

Now, we introduce several supplementary variables to make the model Markovian and set up the system of stationary differential equations for the model.

Let $\widehat{\chi}_n$ be the generalized service time of the $n$th customer, that is, the length of time from the beginning of the service of the $n$th customer until the completion of the service. Clearly, $\widehat{\chi}_n$ includes the down time of the server due to server failures during the service period of the $n$th customer. It is easy to see that the $\widehat{\chi}_n$ for $n\geq 1$ are i.i.d. random variables with
\[
E[\widehat{\chi}_n]=\frac{1}{\mu}\Bigl(1+\frac{\alpha}{\beta}\Bigr).
\]
It follows from Theorem 3 of Dudin and Klimenok [107] or Liang and Kulkarni [119] that if
\[
\rho=\lambda E[\widehat{\chi}_n]=\frac{\lambda}{\mu}\Bigl(1+\frac{\alpha}{\beta}\Bigr)<1,
\]
then the queueing system is stable.

For the repairable BMAP/G/1 retrial queue defined above, we denote by $N(t)$ the number of sources of repeated calls at time $t$, and define the states of the server as
\[
L(t)=\begin{cases}
I, & \text{if the server is idle at time } t,\\
W, & \text{if the server is working at time } t,\\
R, & \text{if the server is under repair at time } t.
\end{cases}
\]
We introduce three variables $J(t)$, $S(t)$ and $R(t)$ representing the phase of the arrival process, the elapsed service time and the elapsed repair time at time $t$, respectively. Then $\{(L(t),N(t),J(t),S(t),R(t)): t\geq 0\}$ is a Markov process with state space
\[
\Omega=\{(I,k,j): k\geq 0,\ 1\leq j\leq m\}
\cup\{(W,k,j,x): k\geq 0,\ 1\leq j\leq m,\ x\geq 0\}
\cup\{(R,k,j,x,y): k\geq 0,\ 1\leq j\leq m,\ x\geq 0,\ y\geq 0\},
\]
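The stability condition $\rho=(\lambda/\mu)(1+\alpha/\beta)<1$ stated above is a one-line computation once the four rates are fixed. A toy check with hypothetical rates (purely illustrative values):

```python
# Hypothetical rates: lam (BMAP arrival rate), mu (service rate),
# alpha (failure rate), beta (repair rate).
lam, mu, alpha, beta = 0.8, 2.0, 0.1, 1.0

mean_generalized_service = (1.0 / mu) * (1.0 + alpha / beta)  # E[chi_hat]
rho = lam * mean_generalized_service
stable = rho < 1.0
```

Here $E[\widehat\chi]=0.55$ and $\rho=0.44<1$, so this parameter set would be stable.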
where $k$, $j$, $x$ and $y$ denote the number of customers in the orbit, the phase of the arrival process, the amount of the elapsed service time and the elapsed repair time, respectively. For $k\geq 0$, we define
\[
P_{(I,k,j)}(t)=P\{L(t)=I,\ N(t)=k,\ J(t)=j\},
\]
\[
P_{(W,k,j)}(t,x)\,dx=P\{L(t)=W,\ N(t)=k,\ J(t)=j,\ x\leq S(t)<x+dx\}
\]
and
\[
P_{(R,k,j)}(t,x,y)\,dy=P\{L(t)=R,\ N(t)=k,\ J(t)=j,\ S(t)=x,\ y\leq R(t)<y+dy\}.
\]
We write the above probabilities in vector form as
\[
P_{I,k}(t)=(P_{(I,k,1)}(t),P_{(I,k,2)}(t),\ldots,P_{(I,k,m)}(t)),
\]
\[
P_{W,k}(t,x)=(P_{(W,k,1)}(t,x),P_{(W,k,2)}(t,x),\ldots,P_{(W,k,m)}(t,x))
\]
and
\[
P_{R,k}(t,x,y)=(P_{(R,k,1)}(t,x,y),P_{(R,k,2)}(t,x,y),\ldots,P_{(R,k,m)}(t,x,y)).
\]
Since we are interested in the stationary behavior of the system, we define
\[
P_{I,k}=\lim_{t\to\infty}P_{I,k}(t),\qquad
P_{W,k}(x)=\lim_{t\to\infty}P_{W,k}(t,x),\qquad
P_{R,k}(x,y)=\lim_{t\to\infty}P_{R,k}(t,x,y).
\]
The joint probability density $\{P_{I,k},P_{W,k}(x),P_{R,k}(x,y),k\geq 0\}$ satisfies the following system of differential equations:
\[
\frac{d}{dx}P_{W,0}(x)=P_{W,0}(x)\{D_0-[\alpha+\mu(x)]I\}+\int_0^\infty \beta(y)P_{R,0}(x,y)\,dy, \tag{7.66}
\]
for $k\geq 1$,
\[
\frac{d}{dx}P_{W,k}(x)=P_{W,k}(x)\{D_0-[\alpha+\mu(x)]I\}+\sum_{i=0}^{k-1}P_{W,i}(x)D_{k-i}+\int_0^\infty \beta(y)P_{R,k}(x,y)\,dy, \tag{7.67}
\]
\[
\frac{\partial}{\partial y}P_{R,0}(x,y)=P_{R,0}(x,y)[D_0-\beta(y)I], \tag{7.68}
\]
and for $k\geq 1$,
\[
\frac{\partial}{\partial y}P_{R,k}(x,y)=P_{R,k}(x,y)[D_0-\beta(y)I]+\sum_{i=0}^{k-1}P_{R,i}(x,y)D_{k-i}, \tag{7.69}
\]
with the boundary conditions
\[
P_{I,k}(k\theta I-D_0)=\int_0^\infty \mu(x)P_{W,k}(x)\,dx,\qquad k\geq 0, \tag{7.70}
\]
\[
P_{W,k}(0)=\sum_{i=0}^kP_{I,i}D_{k+1-i}+(k+1)\theta P_{I,k+1},\qquad k\geq 0, \tag{7.71}
\]
\[
P_{R,k}(x,0)=\alpha P_{W,k}(x),\qquad k\geq 0, \tag{7.72}
\]
and the normalization condition
\[
\sum_{k=0}^\infty\Bigl[P_{I,k}+\int_0^\infty P_{W,k}(x)\,dx+\int_0^\infty\!\!\int_0^\infty P_{R,k}(x,y)\,dx\,dy\Bigr]e=1. \tag{7.73}
\]

7.4.2 A Level-Dependent Markov Chain of M/G/1 Type
Now we provide an approach for solving Eq. (7.66) to Eq. (7.73) with two crucial steps: express $P_{W,k}(x)$ and $P_{R,k}(x,y)$ in terms of the boundary probability vectors $P_{W,k}(0)$ by recognizing a new BMAP, and obtain $P_{I,k}$ and $P_{W,k}(0)$ by converting the boundary equations into a level-dependent Markov chain of M/G/1 type. Let
\[
D^*(z)=\sum_{k=0}^\infty z^kD_k,\qquad
\widehat{v}(D^*(z))=\int_0^\infty \exp\{D^*(z)y\}\,dV(y).
\]
The following lemma recognizes a new BMAP, which appears in the process of solving the system of differential equations.

Lemma 7.3 Let
\[
\Psi^*(z)=\sum_{k=0}^\infty z^k\Psi_k=D^*(z)-\alpha\bigl[I-\widehat{v}(D^*(z))\bigr]. \tag{7.74}
\]
Then $\Psi_k$ for $k\geq 0$ are the coefficient matrices of a BMAP of size $m$.

Proof To prove this lemma, we need to show that the following three conditions are satisfied: (i) the diagonal entries of $\Psi_0$ are strictly negative, the off-diagonal entries are nonnegative, and $\Psi_0$ is invertible; (ii) for $k\geq 1$, $\Psi_k\geq 0$ and $\sum_{k=1}^\infty k\Psi_k<\infty$; (iii) $\Psi=\sum_{k=0}^\infty\Psi_k$ is irreducible and $\Psi e=0$.
(1) It follows from Eq. (7.74) that
\[
\Psi_0=D_0-\alpha\bigl[I-\widehat{v}(D_0)\bigr]. \tag{7.75}
\]
It is clear from Eq. (7.75) that the off-diagonal entries of $\Psi_0$ are nonnegative due to $\widehat{v}(D_0)\geq 0$. Since the $i$th diagonal entry of the matrix $\widehat{v}(D_0)$ is the conditional probability that the BMAP returns to state $i$ with no arrival during a repair time, given that the BMAP starts in state $i$, the $i$th diagonal entry of the matrix $I-\widehat{v}(D_0)$ is nonnegative. Hence the diagonal entries of $\Psi_0$ are strictly negative according to the assumption on $D_0$. It is also easy to see from Eq. (7.75) that $\Psi_0e\lneq 0$. Therefore, the real parts of the eigenvalues of $\Psi_0$ are all negative according to the Geršgorin theorem given in Horn and Johnson [112], and so $\Psi_0$ is invertible.

(2) It is clear that for $k\geq 1$, $\frac{d^k}{dz^k}\widehat{v}(D^*(z))\big|_{z=0}\geq 0$. Since
\[
\Psi_k=D_k+\alpha\frac{1}{k!}\frac{d^k}{dz^k}\widehat{v}(D^*(z))\Big|_{z=0},
\]
$D_k\geq 0$ for $k\geq 1$ and $\alpha>0$, we obtain $\Psi_k\geq 0$ for $k\geq 1$. Using $\sum_{k=1}^\infty kD_k<\infty$, we obtain
\[
\sum_{k=1}^\infty k\Psi_k=\sum_{k=1}^\infty kD_k+\alpha\int_0^\infty \frac{d}{dz}\exp\{D^*(z)x\}\Big|_{z=1}\,dV(x)<\infty.
\]
(3) Note that
\[
\Psi=\sum_{k=0}^\infty\Psi_k=D+\alpha\widehat{v}(D)-\alpha I;
\]
it is clear that $\Psi$ is irreducible, since $D$ is irreducible and $\widehat{v}(D)\geq 0$. Noting that $De=0$ and $[I-\widehat{v}(D)]e=0$, it is obvious that $\Psi e=0$. This completes the proof.

Remark 7.2 The BMAP with coefficient matrix sequence $\{\Psi_k\}$ may be regarded as a generalized arrival process composed of the sum of two parts: the first is the original BMAP with coefficient matrix sequence $\{D_k\}$; the second is an additional BMAP with coefficient matrix sequence
\[
\Bigl\{-\alpha\bigl[I-\widehat{v}(D_0)\bigr],\ \alpha\frac{1}{k!}\frac{d^k}{dz^k}\widehat{v}(D^*(z))\Big|_{z=0},\ k=1,2,\ldots\Bigr\}.
\]
The additional BMAP is related to the server subject to breakdowns and repairs.

For the two BMAPs having coefficient matrix sequences $\{D_k\}$ and $\{\Psi_k\}$, let $K_D(t)$ and $K_\Psi(t)$ denote the numbers of arrivals in the time interval $[0,t)$, respectively, and $J_D(t)$ and $J_\Psi(t)$ the phases at time $t$, respectively. We introduce the conditional probabilities of the two BMAPs:
\[
P^D_{j,j'}(n,t)=P\{K_D(t)=n,\ J_D(t)=j' \mid K_D(0)=0,\ J_D(0)=j\}
\]
and
\[
P^\Psi_{j,j'}(n,t)=P\{K_\Psi(t)=n,\ J_\Psi(t)=j' \mid K_\Psi(0)=0,\ J_\Psi(0)=j\}.
\]
PD* ( z , t )
¦z
n
P D (n, t ) exp{D* ( z )t}
(7.76)
P< (n, t ) exp{< * ( z )t}.
(7.77)
n 0
and f
P<* ( z , t )
¦z
n
n 0
We write $\overline{B}(x)=1-B(x)$, $\overline{V}(y)=1-V(y)$,
\[
P_W^*(z,x)=\sum_{k=0}^\infty z^kP_{W,k}(x),\qquad
P_R^*(z,x,y)=\sum_{k=0}^\infty z^kP_{R,k}(x,y).
\]
Now we solve the system of matrix equations Eq. (7.66) to Eq. (7.73). It follows from Eq. (7.68) and Eq. (7.69) that
\[
\frac{\partial}{\partial y}P_R^*(z,x,y)=P_R^*(z,x,y)\bigl[D^*(z)-\beta(y)I\bigr];
\]
hence from Eq. (7.72) we obtain
\[
P_R^*(z,x,y)=P_R^*(z,x,0)\exp\{D^*(z)y\}\overline{V}(y)=\alpha P_W^*(z,x)\exp\{D^*(z)y\}\overline{V}(y). \tag{7.78}
\]
It follows from Eq. (7.66) and Eq. (7.67), together with Eq. (7.78), that
\[
\frac{\partial}{\partial x}P_W^*(z,x)=P_W^*(z,x)\bigl\{D^*(z)-\alpha\bigl[I-\widehat{v}(D^*(z))\bigr]-\mu(x)I\bigr\}.
\]
Hence,
\[
P_W^*(z,x)=P_W^*(z,0)\exp\bigl\{\bigl(D^*(z)-\alpha[I-\widehat{v}(D^*(z))]\bigr)x\bigr\}\overline{B}(x)
=P_W^*(z,0)\exp\{\Psi^*(z)x\}\overline{B}(x), \tag{7.79}
\]
which, together with Eq. (7.77), leads to
\[
P_{W,k}(x)=\sum_{i=0}^kP_{W,i}(0)P^\Psi(k-i,x)\overline{B}(x). \tag{7.80}
\]
Similarly, Eq. (7.78), together with Eq. (7.76) and Eq. (7.80), leads to
\[
P_{R,l}(x,y)=\alpha\sum_{k=0}^lP_{W,k}(x)P^D(l-k,y)\overline{V}(y)
=\alpha\sum_{k=0}^l\sum_{i=0}^kP_{W,i}(0)P^\Psi(k-i,x)P^D(l-k,y)\overline{B}(x)\overline{V}(y). \tag{7.81}
\]
Eq. (7.80) and Eq. (7.81) provide a solution for $P_{W,k}(x)$ and $P_{R,k}(x,y)$ in terms of $P_{W,k}(0)$, $k\geq 0$. In order to completely solve the system of differential equations, we still need to determine the vectors $P_{W,k}(0)$ and $P_{I,k}$ for $k\geq 0$ from the boundary equations Eq. (7.70) and Eq. (7.71), and the normalization condition Eq. (7.73). We define
\[
P_{IW}=(P_{I,0},P_{W,0}(0),P_{I,1},P_{W,1}(0),P_{I,2},P_{W,2}(0),P_{I,3},P_{W,3}(0),\ldots),
\]
\[
C_k=\int_0^\infty P^\Psi(k,x)\,dB(x),\qquad k\geq 0,
\]
\[
A_0^{(k)}=\begin{pmatrix}0 & k\theta I\\ 0 & 0\end{pmatrix},\quad k\geq 1,\qquad
A_1^{(k)}=\begin{pmatrix}-k\theta I+D_0 & D_1\\ C_0 & -I\end{pmatrix},\quad k\geq 0,
\]
\[
A_k=\begin{pmatrix}0 & D_k\\ C_{k-1} & 0\end{pmatrix},\qquad k\geq 2,
\]
and
\[
Q=\begin{pmatrix}
A_1^{(0)} & A_2 & A_3 & A_4 & \cdots\\
A_0^{(1)} & A_1^{(1)} & A_2 & A_3 & \cdots\\
 & A_0^{(2)} & A_1^{(2)} & A_2 & \cdots\\
 & & A_0^{(3)} & A_1^{(3)} & \cdots\\
 & & & \ddots & \ddots
\end{pmatrix}. \tag{7.82}
\]
According to the above definitions and the expression for $P_{W,k}(x)$ in Eq. (7.80), the boundary equations Eq. (7.70) and Eq. (7.71) can be written as
\[
P_{IW}Q=0. \tag{7.83}
\]
In what follows, we show that the matrix $Q$ is the infinitesimal generator of a continuous-time positive recurrent Markov chain. Therefore, the unique stationary probability vector $X$ of $Q$ can be used to determine the vectors $P_{W,k}(0)$ and $P_{I,k}$ for $k\geq 0$; it is clear that $P_{IW}=\gamma X$ for a positive scalar $\gamma$. Let $X=(x_0,x_1,x_2,\ldots)$, where $x_k=(x_{k,1},x_{k,2})$ for $k\geq 0$ and the size of each vector $x_{k,j}$, $k\geq 0$ and $j=1,2$, is $m$. The normalization condition Eq. (7.73) and the expressions for $P_{W,k}(x)$ in Eq. (7.80) and $P_{R,k}(x,y)$ in Eq. (7.81) lead to
\[
\gamma=\Bigl\{\sum_{k=0}^\infty\Bigl(x_{k,1}+\sum_{i=0}^kx_{i,2}F_{k-i}\Bigr)e
+\alpha\sum_{l=0}^\infty\sum_{k=0}^l\sum_{i=0}^kx_{i,2}F_{k-i}H_{l-k}e\Bigr\}^{-1},
\]
where
\[
F_k=\int_0^\infty P^\Psi(k,x)\overline{B}(x)\,dx,\qquad
H_k=\int_0^\infty P^D(k,y)\overline{V}(y)\,dy.
\]
Theorem 7.7 The matrix Q is the infinitesimal generator of a continuous-time irreducible positive recurrent Markov chain. Proof It can be easily verified that the matrix Q can be regarded as the infinitesimal generator of a continuous-time irreducible Markov chain based on the definition of Q. In what follows we only need to prove that it is positive recurrent. 368
7
Examples of Practical Applications
For each k ı 1, f
A0( k ) A1( k ) ¦ Al l 2
f
where D
¦ Dl and C l 1
§ D0 kT I ¨ C ©
D kT I · ¸, I ¹
f
¦C . l
It is clear that the transition rate matrix
l 0
f
A0( k ) A1( k ) ¦ Al is irreducible and positive recurrent. Let y1( k ) , y2( k ) be the l 2
f
stationary probability vector of A0( k ) A1( k ) ¦ Al . Then l 2
y
(k ) 1
§ D kT I , y2( k ) ¨ 0 C ©
D kT I · ¸ 0, I ¹
hence, solving this equation gives ª1 º y1( k ) « ( D0 D C ) C I » ¬ kT ¼
0.
Noting that the matrix C is stochastic and the matrix D0 D is an infinitesimal 1 generator, D0 D C C I is an irreducible infinitesimal generator of size m kT 1 for each k ı 1. Thus, for each k ı 1 the Markov chain ( D0 D C ) C I is kT 1 (D0 positive recurrent. Let w( k ) be the stationary probability vector of kT D C ) C I . Then y1( k )
w( k ) and y2( k ) 1 kT w ( k ) D e
w( k ) ( D kT I ) . 1 kT w( k ) D e
Noting that k o f, 1 ( D0 D C ) C I o C I , kT it is clear that w( k ) o w, as k o f, where w is the stationary probability vector of the irreducible infinitesimal generator C I . Thus, as k o f, y1( k )
w( k ) o 0 and y2( k ) 1 kT w( k ) D e
w( k ) ( D kT I ) o w. 1 kT w( k ) D e 369
Constructive Computation in Stochastic Models with Applications
As $k\to\infty$, simple computation leads to

$$\big(y_1^{(k)},\,y_2^{(k)}\big)A_0^{(k)}e=k\theta\,y_1^{(k)}e=\frac{k\theta}{1+k\theta+w^{(k)}De}\to1\tag{7.84}$$

and

$$\big(y_1^{(k)},\,y_2^{(k)}\big)\sum_{l=2}^{\infty}(l-1)A_le=y_1^{(k)}\sum_{l=2}^{\infty}(l-1)D_le+y_2^{(k)}\sum_{l=2}^{\infty}(l-1)C_{l-1}e\to w\sum_{l=2}^{\infty}(l-1)C_{l-1}e=\frac{\lambda}{\mu}\Big(1+\frac{\alpha}{\beta}\Big)<1,\tag{7.85}$$

due to the stable condition $\rho=\dfrac{\lambda}{\mu}\Big(1+\dfrac{\alpha}{\beta}\Big)<1$. It follows from Eq. (7.84) and Eq. (7.85) that
$$\lim_{k\to\infty}\big(y_1^{(k)},\,y_2^{(k)}\big)A_0^{(k)}e>\lim_{k\to\infty}\big(y_1^{(k)},\,y_2^{(k)}\big)\sum_{l=2}^{\infty}(l-1)A_le.$$

Thus, there always exists a positive integer $N$ big enough such that for all $k>N$,

$$\big(y_1^{(k)},\,y_2^{(k)}\big)A_0^{(k)}e>\big(y_1^{(k)},\,y_2^{(k)}\big)\sum_{l=2}^{\infty}(l-1)A_le.\tag{7.86}$$

It is easy to check that

$$\sum_{l=2}^{\infty}(l-1)A_le<\infty.\tag{7.87}$$
Therefore, it is easy to see from Eq. (7.86) and Eq. (7.87) that the continuous-time irreducible Markov chain Q is positive recurrent based on the principle of the mean drift. This completes the proof.

For the level-dependent Markov chain Q of M/G/1 type, let $\{G^{(k)}\}$ be the minimal nonnegative solution to the system of matrix equations

$$A_0^{(k)}+A_1^{(k)}G^{(k)}+A_2G^{(k+1)}G^{(k)}+A_3G^{(k+2)}G^{(k+1)}G^{(k)}+\cdots=0,\qquad k\geq1.$$

Then for $k\geq0$ and $l\geq1$,

$$R_l^{(k)}=\big[A_{l+1}+A_{l+2}G^{(k+l+1)}+A_{l+3}G^{(k+l+2)}G^{(k+l+1)}+\cdots\big]\big(-U_{k+l}^{-1}\big),$$

and for $k\geq0$,

$$U_k=A_1^{(k)}+A_2G^{(k+1)}+A_3G^{(k+2)}G^{(k+1)}+A_4G^{(k+3)}G^{(k+2)}G^{(k+1)}+\cdots.$$
Therefore,

$$Q=(I-R_U)U_D(I-G_L),\tag{7.88}$$

where

$$I-R_U=\begin{pmatrix} I & -R_1^{(0)} & -R_2^{(0)} & -R_3^{(0)} & \cdots\\ & I & -R_1^{(1)} & -R_2^{(1)} & \cdots\\ & & I & -R_1^{(2)} & \cdots\\ & & & I & \cdots\\ & & & & \ddots\end{pmatrix},$$

$$U_D=\mathrm{diag}\big(U_0,U_1,U_2,\ldots\big),$$

$$I-G_L=\begin{pmatrix} I & & & & \\ -G^{(1)} & I & & & \\ & -G^{(2)} & I & & \\ & & -G^{(3)} & I & \\ & & & \ddots & \ddots\end{pmatrix}.$$
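In the level-independent special case ($A_l^{(k)}\equiv A_l$, $G^{(k)}\equiv G$), the matrix equation above reduces to $A_0+A_1G+A_2G^2+\cdots=0$, which can be solved by the classical fixed-point iteration starting from $G=0$. The following is a minimal numerical sketch of that iteration (the function name, truncation of the block sequence, and tolerances are ours, not the book's):

```python
import numpy as np

def solve_G(A, tol=1e-12, max_iter=100_000):
    """Minimal nonnegative solution G of A[0] + A[1] G + A[2] G^2 + ... = 0
    for a level-independent M/G/1-type generator, where A[0] is the
    down-one-level block.  Fixed-point iteration from G = 0, which
    converges monotonically to the minimal nonnegative solution."""
    m = A[0].shape[0]
    G = np.zeros((m, m))
    A1_inv = np.linalg.inv(A[1])          # A[1] is a subgenerator, hence invertible
    for _ in range(max_iter):
        # G_{n+1} = (-A1)^{-1} (A0 + A2 G_n^2 + A3 G_n^3 + ...)
        S = A[0].copy()
        Gp = G
        for Al in A[2:]:
            Gp = Gp @ G                   # G^2, G^3, ...
            S = S + Al @ Gp
        G_new = -A1_inv @ S
        if np.max(np.abs(G_new - G)) < tol:
            return G_new
        G = G_new
    return G
```

For a scalar QBD with down rate $\mu$, local rate $-(\lambda+\mu)$ and up rate $\lambda$, the iteration returns $G=1$ in the stable case $\lambda<\mu$ (the chain surely returns to the level below) and $G=\mu/\lambda$ in the unstable case, matching the two roots of $\mu-(\lambda+\mu)G+\lambda G^2=0$.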
Using the RG-factorization, the stationary probability vector of Q is given by

$$\begin{cases} x_0=\tau y_0,\\[4pt] x_k=\displaystyle\sum_{i=0}^{k-1}x_iR_{k-i}^{(i)}, & k\geq1,\end{cases}$$

where $y_0$ is the stationary probability vector of the transition rate matrix $U_0$ and the scalar $\tau$ is uniquely determined by $\sum_{k=0}^{\infty}x_ke=1$.
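For a level-independent QBD (only $A_0$, $A_1$, $A_2$ nonzero, with a boundary block at level 0), the blocks $R_l^{(k)}$ vanish for $l\geq2$, so the recursion collapses to $x_k=x_{k-1}R$. A sketch of the whole pipeline under those simplifying assumptions (the boundary block `B1`, the level truncation, and all names are illustrative, not the book's notation):

```python
import numpy as np

def stationary_qbd(B1, A0, A1, A2, levels=2000):
    """Stationary vector of a level-independent QBD generator via the
    RG-factorization recursion x_k = x_{k-1} R.  Blocks: A0 down one
    level, A1 local, A2 up one level; B1 is the local block at level 0."""
    m = A0.shape[0]
    # G: minimal nonnegative solution of A0 + A1 G + A2 G^2 = 0
    G = np.zeros((m, m))
    A1_inv = np.linalg.inv(A1)
    for _ in range(200_000):
        G_new = -A1_inv @ (A0 + A2 @ G @ G)
        if np.max(np.abs(G_new - G)) < 1e-13:
            G = G_new
            break
        G = G_new
    U = A1 + A2 @ G                       # U_k for k >= 1
    R = A2 @ (-np.linalg.inv(U))          # R_1^{(k)}; higher R_l vanish for a QBD
    U0 = B1 + A2 @ G                      # boundary block U_0
    # y0: stationary probability vector of U0 (y0 U0 = 0, y0 e = 1)
    M = np.vstack([U0.T, np.ones((1, m))])
    rhs = np.zeros(m + 1); rhs[-1] = 1.0
    y0, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    # x_k = y0 R^k up to the truncation level, then normalize by tau
    xs = [y0]
    for _ in range(levels):
        xs.append(xs[-1] @ R)
    tau = 1.0 / sum(x.sum() for x in xs)
    return [tau * x for x in xs]
```

As a sanity check, the scalar M/M/1 queue ($\lambda=1$, $\mu=2$) recovers the geometric solution $x_k=(1-\rho)\rho^k$ with $\rho=1/2$.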
7.4.3 The Stationary Performance Measures

The solution of the system of differential equations for the vectors $P_{I,k}$, $P_{W,k}(x)$ and $P_{R,k}(x,y)$ for $k\geq0$ is summarized in the following theorem.

Theorem 7.8 If the system is stable, then for $k\geq0$,

$$\begin{cases} P_{I,k}=\gamma x_{k,1},\\[4pt] P_{W,k}(x)=\gamma\displaystyle\sum_{i=0}^{k}x_{i,2}P_{\Psi}(k-i,x)\overline{B}(x),\\[4pt] P_{R,k}(x,y)=\alpha\gamma\displaystyle\sum_{l=0}^{k}\sum_{i=0}^{l}x_{i,2}P_{\Psi}(l-i,x)P_{\alpha}(k-l,y)\overline{B}(x)\overline{V}(y).\end{cases}$$
Remark 7.3 The method proposed in this section can in principle be used to deal with a retrial queue with a more general total retrial rate, where the total retrial rate is a function $f(n,\theta)$, $n$ is the number of customers in the orbit and $\theta$ is a parameter. For example, $f(n,\theta)=\sum_{i=0}^{M}a_i(\theta)n^i$ or $f(n,\theta)=C_1\ln n+C_2e^{-n\theta}$. The case with linear retrial rate $f(n,\theta)=n\theta+\gamma$ was studied in Dudin and Klimenok [107].

Now, we express the distribution of the stationary queue length. If the system is stable, we write

$$p_k=\lim_{t\to\infty}P\{N(t)=k\},\qquad k\geq0.$$
Note that

$$p_k^{(I)}=\lim_{t\to\infty}P\{L(t)=I,\,N(t)=k\},\qquad k\geq0,$$
$$p_k^{(W)}=\lim_{t\to\infty}P\{L(t)=W,\,N(t)=k\},\qquad k\geq1,$$
$$p_k^{(R)}=\lim_{t\to\infty}P\{L(t)=R,\,N(t)=k\},\qquad k\geq1.$$

We have

$$p_0=p_0^{(I)},\qquad p_k=p_k^{(I)}+p_k^{(W)}+p_k^{(R)},\qquad k\geq1,$$
which leads to

$$p_0^{(I)}=P_{I,0}e,\qquad p_k^{(I)}=P_{I,k}e,\qquad k\geq1,$$
$$p_k^{(W)}=\int_0^{\infty}P_{W,k-1}(x)\,\mathrm{d}x\,e,\qquad k\geq1,$$
$$p_k^{(R)}=\int_0^{\infty}\!\!\int_0^{\infty}P_{R,k-1}(x,y)\,\mathrm{d}x\,\mathrm{d}y\,e,\qquad k\geq1.$$

Therefore, the stationary queue length distribution is given by

$$p_0=\gamma x_{0,1}e,$$
$$p_k=\gamma\Big(x_{k,1}+\sum_{i=0}^{k-1}x_{i,2}F_{k-1-i}+\alpha\sum_{j=0}^{k-1}\sum_{i=0}^{j}x_{i,2}F_{j-i}H_{k-1-j}\Big)e,\qquad k\geq1.$$
To obtain the stationary availability and the stationary failure frequency of the server, we need the following lemma.

Lemma 7.4 If the system is stable, then
(1) the probability that the server is idle is $P_I=1-\dfrac{\lambda}{\mu}\Big(1+\dfrac{\alpha}{\beta}\Big)$;
(2) the probability that the server is working is $P_W=\dfrac{\lambda}{\mu}$;
(3) the probability that the server is under repair is $P_R=\dfrac{\lambda\alpha}{\mu\beta}$.
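The three probabilities in Lemma 7.4 are exhaustive and must sum to one, which gives a quick consistency check. A minimal numerical sketch (the parameter values below are illustrative, not from the book):

```python
# Quick check of Lemma 7.4 for sample parameters:
# lam: arrival rate, mu: service rate, alpha: failure rate, beta: repair rate.
lam, mu, alpha, beta = 1.0, 4.0, 0.5, 2.0

rho = (lam / mu) * (1 + alpha / beta)   # stable condition: rho < 1
P_I = 1 - rho                           # server idle
P_W = lam / mu                          # server working
P_R = lam * alpha / (mu * beta)         # server under repair

assert rho < 1
assert abs(P_I + P_W + P_R - 1.0) < 1e-12   # idle + working + repair = 1
```

The identity $P_I+P_W+P_R=1$ holds for any parameter choice satisfying the stable condition, since $1-\frac{\lambda}{\mu}\big(1+\frac{\alpha}{\beta}\big)+\frac{\lambda}{\mu}+\frac{\lambda\alpha}{\mu\beta}=1$ algebraically.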
Proof It follows from Eq. (7.70) and Eq. (7.79) that

$$\theta z\frac{\mathrm{d}}{\mathrm{d}z}P_I^*(z)-P_I^*(z)D_0=P_W^*(z,0)\,\tilde b(\Psi^*(z)),\tag{7.89}$$

where

$$P_I^*(z)=\sum_{k=0}^{\infty}z^kP_{I,k},\qquad \tilde b(\Psi^*(z))=\int_0^{\infty}\exp\{\Psi^*(z)x\}\,\mathrm{d}B(x).$$

It follows from Eq. (7.71) that

$$P_W^*(z,0)=\frac{1}{z}P_I^*(z)\big[D^*(z)-D_0\big]+\theta\frac{\mathrm{d}}{\mathrm{d}z}P_I^*(z).\tag{7.90}$$

From Eq. (7.89) and Eq. (7.90) we obtain

$$P_W^*(z,0)=P_I^*(z)D^*(z)\big[zI-\tilde b(\Psi^*(z))\big]^{-1}.$$
Noting that

$$\int_0^{\infty}\exp\{\Psi^*(z)x\}\overline{B}(x)\,\mathrm{d}x=-\big[I-\tilde b(\Psi^*(z))\big]\big[\Psi^*(z)\big]^{-1}$$

and

$$\int_0^{\infty}\exp\{D^*(z)x\}\overline{V}(x)\,\mathrm{d}x=-\big[I-\tilde v(D^*(z))\big]\big[D^*(z)\big]^{-1},$$

using Eq. (7.78), Eq. (7.79) and Eq. (7.90) yields

$$\begin{aligned}P^*(z)&=P_I^*(z)+\int_0^{\infty}P_W^*(z,x)\,\mathrm{d}x+\int_0^{\infty}\!\!\int_0^{\infty}P_R^*(z,x,y)\,\mathrm{d}x\,\mathrm{d}y\\ &=P_I^*(z)\Big\{I-D^*(z)\big[zI-\tilde b(\Psi^*(z))\big]^{-1}\big[I-\tilde b(\Psi^*(z))\big]\big[\Psi^*(z)\big]^{-1}\\ &\qquad\qquad\times\big\{I-\alpha\big[I-\tilde v(D^*(z))\big]\big[D^*(z)\big]^{-1}\big\}\Big\}.\end{aligned}\tag{7.91}$$
For $z\geq0$, we denote by $\chi(z)$ and $e(z)$ the eigenvalue with maximal real part of the matrix $D^*(z)$ and the associated right eigenvector with the first entry normalized to one, respectively. It is obvious that $\lim_{z\to1}\chi(z)=0$, $\lim_{z\to1}\chi'(z)=\lambda$ and $\lim_{z\to1}e(z)=e$. It follows from Eq. (7.91) that

$$P^*(z)e(z)=P_I^*(z)e(z)\,\frac{z-1}{z-\tilde b\big(\chi(z)-\alpha[1-\tilde v(\chi(z))]\big)}.\tag{7.92}$$

Noting that, after some calculations,

$$\lim_{z\to1}\frac{z-1}{z-\tilde b\big(\chi(z)-\alpha[1-\tilde v(\chi(z))]\big)}=\frac{1}{1-\dfrac{\lambda}{\mu}\Big(1+\dfrac{\alpha}{\beta}\Big)}$$
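The "after some calculations" can be made explicit by a L'Hopital step. A sketch, writing $s(z)$ for the argument of $\tilde b$ and using $\chi(1)=0$, $\chi'(1)=\lambda$, and the mean service and repair times $\tilde b'(0)=1/\mu$, $\tilde v'(0)=1/\beta$ (the shorthand $s(z)$ is ours):

```latex
% Write s(z) = \chi(z) - \alpha[1 - \tilde v(\chi(z))], so that s(1) = 0 and
%   s'(1) = \chi'(1)\,\bigl[1 + \alpha\,\tilde v'(0)\bigr]
%         = \lambda\Bigl(1 + \frac{\alpha}{\beta}\Bigr).
% Since \tilde b(0) = 1, both numerator and denominator vanish at z = 1, and
\lim_{z\to1}\frac{z-1}{z-\tilde b(s(z))}
   = \frac{1}{1-\tilde b'(0)\,s'(1)}
   = \frac{1}{1-\dfrac{\lambda}{\mu}\Bigl(1+\dfrac{\alpha}{\beta}\Bigr)}.
```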
and $\lim_{z\to1}P^*(z)e(z)=1$, we obtain

$$P_I=P_I^*(1)e(1)=1-\frac{\lambda}{\mu}\Big(1+\frac{\alpha}{\beta}\Big).$$

We can similarly obtain that $P_W=\dfrac{\lambda}{\mu}$ and $P_R=\dfrac{\lambda\alpha}{\mu\beta}$. This completes the proof.

This lemma shows that we can determine the scalar function $P^*(z)e(z)$ from Eq. (7.89) to Eq. (7.92). However, we cannot explicitly obtain the vector function $P^*(z)$. This is the main reason why it is necessary for us to provide an approach for solving Eq. (7.66) to Eq. (7.73). Meanwhile, it is also easy to see the basic difficulty of using the standard method (e.g., see Subsection 1.2.2 in Falin and Templeton [109]) to deal with the BMAP/G/1 retrial queue and, more generally, with retrial queues of M/G/1 type.

Let

$$A(t)=P\{\text{the server is up at time }t\}$$

and define the stationary availability of the server as $A=\lim_{t\to\infty}A(t)$. We denote by $W_f$ the stationary failure frequency of the server.

Theorem 7.9 If the system is stable, then
(1) the stationary availability of the server is given by
$$A=1-\frac{\lambda\alpha}{\mu\beta};$$
(2) the stationary failure frequency of the server is given by
$$W_f=\frac{\lambda\alpha}{\mu}.$$
Proof Noting that

$$A=\sum_{k=0}^{\infty}\Big[P_{I,k}+\int_0^{\infty}P_{W,k}(x)\,\mathrm{d}x\Big]e=\Big[P_I^*(1)+\int_0^{\infty}P_W^*(1,x)\,\mathrm{d}x\Big]e=P_I+P_W$$

and

$$W_f=\alpha\Big[\sum_{k=0}^{\infty}\int_0^{\infty}P_{W,k}(x)\,\mathrm{d}x\Big]e=\alpha\int_0^{\infty}P_W^*(1,x)\,\mathrm{d}x\,e=\alpha P_W.$$

This completes the proof.
It is easy to see from Eq. (7.78) to Eq. (7.81) that Lemma 7.3 is key to expressing the vector generating function $P_W^*(z,x)$, which is necessary for deriving the two reliability indices, the stationary availability and the stationary failure frequency, as shown in the proof of Theorem 7.9.
7.5 Notes in the Literature

In this section, we provide a brief introduction to the literature on processor-sharing queues, fluid queues, queues with negative customers, and retrial queues.
7.5.1 The Processor-Sharing Queues

Processor-sharing queues are useful in the study of computer and communication systems. Early work on processor-sharing queues was motivated by the study of multiuser mainframe computer systems; see, e.g., Kleinrock [16], Coffman and Kleinrock [6] and Coffman, Muntz and Trotter [7]. Recent interest in processor-sharing queues is due to their applications to communication networks and web servers, for example, modeling congested links with TCP traffic and job schedulers in web servers. During the last few decades considerable attention has been paid to the study of processor-sharing queues, which has been well documented by, for example, the books of Kleinrock [17], Cohen [9] and Asmussen [1] and the survey papers of Cohen [8], Yashkov [34,35] and Yashkov and Yashkova [36].

For the M/M/1 processor-sharing queue, Coffman, Muntz and Trotter [7] and O'Donovan [26] derived the Laplace-Stieltjes transform of the sojourn time distribution. Morrison [22] and Guillemin and Boyer [13] obtained an integral representation for the complementary distribution of the sojourn time by means of the Laplace-Stieltjes transform expression and the spectral theory, respectively. Sengupta and Jagerman [31] studied moments of the sojourn time conditioned on the number of customers found by an arriving customer. Braband [3,4] discussed the waiting time distributions of the closed M/M/N processor-sharing queue. Núñez-Queija [25] studied a Markovian processor-sharing queue with a service rate that varies over time, depending on the number of customers and on the state of a stochastic environment. Masuyama and Takine [21] provided a recursive formula to compute the stationary sojourn time distribution in the MAP/M/1 processor-sharing queue. Li, Liu and Lian [20] applied the RG-factorization and a level-dependent Markov chain of M/G/1 type to study a BMAP/M/1 generalized processor-sharing queue.

For the GI/M/1 processor-sharing queue, the first two moments and the Laplace-Stieltjes transform of the sojourn time were derived in Ramaswami [27], Cohen [10], and Jagerman and Sengupta [14]. For the M/G/1 processor-sharing queue, readers may refer to Schassberger [30], Resing, Hooghiemstra and Keane [29], Yang and Knessl [33], Zwart and Boxma [37], Sericola, Guillemin and Boyer [32], Li and Lin [19], Cheung, van den Berg and Boucherie [5], and Egorova, Zwart and Boxma [11]. For processor-sharing queues with bulk arrivals, Kleinrock, Muntz and Rodemich [18] first analyzed a processor-sharing queue with bulk arrivals. Rege and Sengupta [28] studied the M/G/1 processor-sharing queue with bulk arrivals under a general job size distribution. Bansal [2] obtained an expression for the expected response time of a job as a function of its size, where the service times of jobs have a generalized hyperexponential distribution and, more generally, a distribution with rational Laplace transform. Li, Liu and Lian [20] applied the RG-factorizations to study a BMAP/M/1 generalized processor-sharing queue. Kim and Kim [15] considered concavity of the conditional mean sojourn time in the M/G/1 processor-sharing queue with batch arrivals.
7.5.2 The Fluid Queues

Fluid queues are motivated by the modelling of, for example, high-speed communication networks, transportation systems, and manufacturing systems. During the last few decades considerable attention has been paid to the study of fluid queues, which has been well documented, for example, by the survey papers of Kulkarni [51] and Ahn and Ramaswami [40]. In most studies of fluid queues, the state space of the external stochastic environment is assumed to be finite. In this case, the stationary probability distribution of the buffer content is the unique solution to a system of differential equations; see, for example, Eq. (8) in Anick, Mitra and Sondhi [41]. Until now, four methods for solving this system of differential equations have been presented, as follows:

(1) A spectral method was proposed in terms of computing the eigenvalues and eigenvectors of the coefficient matrices involved in a fluid model. Readers may refer to Anick, Mitra and Sondhi [41], van Doorn, Jagers and de Wit [65], Mitra [57], Stern and Elwalid [64], Kontovasilis and Mitrou [50], Blaabjerg, Andersson and Andersson [45], Karandikar and Kulkarni [48], Kulkarni [51], Lenin and Parthasarathy [53], Kulkarni and Tzenova [52], Adan, Resing and Kulkarni [39], and Masuyama and Takine [56]. Further, Asmussen [42] and Karandikar and Kulkarni [48] used this spectral method to analyze second-order fluid models with finite states. Rabehasaina and Sericola [60] studied a second-order Markov-modulated fluid queue with linear service rate.

(2) Sericola and Tuffin [63] proposed a stable algorithm for computing the stationary probability distribution of the buffer content. In the stable algorithm, the stationary probability distribution is first assumed to be the product of an exponential function and a power series. The determination of the stationary probability distribution then becomes the computation of the coefficients in this product. Barbot, Sericola and Telek [43] and Parthasarathy, Sericola and Vijayashree [59] further applied the stable algorithm to calculate transient distributions of fluid models, such as the distribution of the busy period.

(3) Based on the Wiener-Hopf factorization given in Barlow, Rogers and Williams [44], the stationary probability distribution of the buffer content was expressed in matrix-exponential form by Rogers [62]. Li and Zhao [55] used the RG-factorizations to study block-structured fluid models driven by level-dependent QBD processes with either infinitely-many levels or finitely-many levels.

(4) Ramaswami [61] provided the matrix-analytic method for deriving the stationary probability distribution of the buffer content. He illustrated that the stationary probability distribution is a PH distribution. The matrix-analytic method was further extended by da Silva Soares and Latouche [46], Ahn and Ramaswami [40], and van Lierde, da Silva Soares and Latouche [66].

When the state space of the external stochastic environment is countably infinite, Virtamo and Norros [67] provided a spectral method for a fluid queue driven by an M/M/1 queue, where the generalized eigenvalues are explicitly expressed by means of the Chebyshev polynomials of the second kind. Adan and Resing [38] presented a method of embedded points for deriving the stationary probability distribution of the buffer content. Parthasarathy, Vijayashree and Lenin [58] proposed a continued fraction method to derive the Laplace transform of the stationary probability distribution. Li, Liu and Shang [54] discussed the heavy-tailed behavior of the stationary probability distribution of the buffer content in a fluid queue driven by an M/G/1 queue. Konovalov [49] analyzed the stability of a fluid queue driven by a GI/G/1 queue. van Doorn and Scheinhardt [65] used orthogonal polynomials to express the stationary probability distribution of the buffer content for a fluid queue driven by an irreducible birth-death process. Li and Zhao [55] analyzed block-structured fluid models driven by level-dependent QBD processes. Guillemin and Sericola [47] provided a stationary analysis of a fluid queue driven by a countable state space Markov chain.
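As a concrete instance of the spectral method in (1), consider the simplest finite-environment case: a buffer fed by a single on-off source with output capacity $c$. The stationary equations $F'(x)D=F(x)Q$ reduce to the generalized eigenproblem $\det(Q-zD)=0$, and the buffer content decays like $e^{z^*x}$ for the negative root $z^*$. A minimal numerical sketch (all parameter values and names below are illustrative):

```python
import numpy as np

# Two-state fluid queue: environment switches off -> on at rate a and
# on -> off at rate b; the source emits at rate r while on, and the
# buffer drains at capacity c.
a, b, r, c = 1.0, 2.0, 3.0, 1.5

Q = np.array([[-a, a],
              [b, -b]])            # generator of the on-off environment
D = np.diag([-c, r - c])           # net drift per phase: -c (off), r - c (on)

assert r * a / (a + b) < c         # stability: mean input rate below capacity

# F'(x) D = F(x) Q  =>  det(Q - z D) = 0, i.e. z is an eigenvalue of D^{-1} Q
z = np.linalg.eigvals(np.linalg.inv(D) @ Q)
z_star = float(np.min(z.real))     # the negative decay rate (the other root is 0)

# Closed form for this two-state model: z* = (a(r - c) - b c) / (c(r - c)),
# so P(buffer content > x) decays like exp(z* x).
z_closed = (a * (r - c) - b * c) / (c * (r - c))
```

For the sample rates above both expressions give $z^*=-2/3$; the same eigenvalue computation scales directly to larger finite environments, where one eigenvector per negative eigenvalue enters the matrix-exponential solution.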
7.5.3 The Queues with Negative Customers

During the last decade, considerable attention has been paid to the study of queueing systems with negative arrivals. Since the introduction of the concept of negative customers by Gelenbe [77], research on queueing systems with negative arrivals has been greatly motivated by practical applications such as computers, neural networks, manufacturing systems and communication networks. For a comprehensive analysis of queueing networks with negative arrivals, readers may refer to Gelenbe and Pujolle [81], Chao, Miyazawa and Pinedo [74], and Serfozo [90].

Gelenbe [77] introduced negative arrivals to queueing networks and established the product form solution for an open queueing network. Subsequent papers have been published on this theme, among which see Gelenbe and Schassberger [82], Henderson [86], Gelenbe [78,79], Pitel [89], Harrison and Pitel [84], Chao [73], Gelenbe and Pujolle [81], Artalejo and Gómez-Corral [70], Chao, Miyazawa and Pinedo [74], Serfozo [90], Anisimov and Artalejo [68], Zhu and Zhang [92], Dudin and Semenova [76], and Shin [91]. A recent review can be found in Artalejo [69]. Gelenbe, Glynn and Sigman [80] provided necessary and sufficient conditions for stability of single-server queues with negative arrivals. They illustrated that stability conditions may depend not only upon the arrival and service rates, but also upon the distributions of the interarrival and service times. For the M/G/1 queue with negative arrivals, Harrison and Pitel [85] explicitly expressed the stability conditions. Harrison and Pitel [83] derived expressions for the Laplace transform of the sojourn time density in the M/M/1 queue with Poisson arrivals of negative customers. For the stationary workload in M/G/1 queues with negative arrivals, Boucherie and Boxma [72] studied a generalization in which a negative arrival removes a random amount of work. Jain and Sigman [87] analyzed the case where a negative arrival removes all the customers in the system; they derived a Pollaczek-Khintchine formula. Bayer and Boxma [71] provided a Wiener-Hopf analysis for an M/G/1 queue in which positive customers are removed just after a service completion time. Harrison and Pitel [85] studied the distributions of the stationary queue length for the M/G/1 queue with negative arrivals; the generating functions are expressed in terms of Fredholm integral equations of the first kind.

Artalejo and Gómez-Corral [96] extended the M/G/1 queue with negative arrivals to handle situations where the positive customers follow a retrial policy. They proved that the distributions of the stationary queue length still satisfy a Fredholm integral equation of the first kind, and provided an effective algorithm for numerically solving the Fredholm integral equation. Dudin and Nishimura [75] used the matrix-analytic method to analyze a BMAP/SM/1 queue with disasters. Li and Zhao [88] considered the MAP/G/1 queue with MAP arrivals of negative customers under two classes of removal rules.
7.5.4 The Retrial Queues

Retrial queues are an important mathematical model for telephone switching systems, digital cellular mobile networks, computer networks and so on. During the last two decades considerable attention has been paid to the study of retrial queues, which has been well documented, for example, by the survey papers of Yang and Templeton [125], Falin [108], Kulkarni and Liang [116], Artalejo [96] and Gómez-Corral [110], and by the two books of Falin and Templeton [109] and Artalejo and Gómez-Corral [98].

Retrial queues have been studied by a number of authors in terms of the matrix-analytic method. Readers may refer to Kulkarni [113,114], Kulkarni and Choi [115], Neuts and Rao [121], Liang and Kulkarni [118,119], Diamond and Alfa [104,105,106], Choi, Yang and Kim [102], He, Li and Zhao [111], Dudin and Klimenok [107], Choi, Chung and Dudin [103], Breuer, Dudin and Klimenok [99], Chakravarthy and Dudin [100,101], Li, Ying and Zhao [117], and Shang, Liu and Li [122]. Retrial queues with unreliable servers have been discussed by some researchers. Readers may refer to Kulkarni and Choi [115], Yang and Li [124], Artalejo [95], Aissani [93], Aissani and Artalejo [94], Artalejo and Gómez-Corral [97], Wang, Cao and Li [123], and Li, Ying and Zhao [117].
Problems

7.1 Consider an M/PH/1 processor-sharing queue, where the server is shared equally by all customers in the system. Compute the distribution of the sojourn time in this system. Further, provide a detailed analysis for a MAP/PH/1 processor-sharing queue.

7.2 Consider an M/M/c processor-sharing queue with each server being shared equally by all customers in its working space. Compute the distribution of the sojourn time in this system.

7.3 Consider an infinite-capacity buffer where the fluid input and output rates are controlled by the idle period and the busy period of a MAP/G/1 queue, respectively. Provide the stable condition of the fluid queue, and compute the stationary distribution of the buffer content.

7.4 Consider a finite-capacity buffer where the fluid input and output rates are controlled by the idle period and the busy period of a MAP/PH/1 queue, respectively. Provide the stable condition of the fluid queue, and compute the stationary distribution of the buffer content.

7.5 Analyze a single-server FCFS queue with two types of independent arrivals, positive and negative. Positive arrivals correspond to customers who, upon arrival, join the queue with the intention of being served and then leaving the system. At a negative arrival epoch, the system is affected if and only if customers are present. We assume that the arrivals of both positive and negative customers are MAPs with matrix descriptors $(C_1,D_1)$ and $(C_2,D_2)$, respectively, where the infinitesimal generators $C_1+D_1$ and $C_2+D_2$ of sizes $m_1\times m_1$ and $m_2\times m_2$ are irreducible and positive recurrent. The service times of the positive arrivals are i.i.d. and of phase type with irreducible representation $(\beta,S)$ of size $n$. For each of the following three cases, compute the stationary distribution of the queue length.
(1) The arrival of a negative customer removes all the customers in the system.
(2) The arrival of a negative customer removes the head customer (i.e., the customer in service).
(3) The arrival of a negative customer removes the final customer in the queue line.

7.6 Analyze a single-server FCFS queue with a finite waiting room and two types of independent arrivals, positive and negative. Positive arrivals correspond to customers who, upon arrival, join the queue with the intention of being served and then leaving the system. Each negative arrival is not served; it only occupies the waiting room until the waiting room is full. We assume that the arrivals of both positive and negative customers are MAPs with matrix descriptors $(C_1,D_1)$ and $(C_2,D_2)$, respectively, where the infinitesimal generators $C_1+D_1$ and $C_2+D_2$ of sizes $m_1\times m_1$ and $m_2\times m_2$ are irreducible and positive recurrent. The service times of the positive arrivals are i.i.d. and of phase type with irreducible representation $(\beta,S)$ of size $n$. Compute the transient distribution of the queue length.

7.7 Consider a BMAP/G1,G2/1 retrial queue with no waiting space, where the size of the orbit is infinite. If an arrival, either a primary or a retrial customer, finds no customer in the server, then it enters the server immediately and receives service; otherwise it enters the orbit and makes a retrial at a later time. Returning customers behave independently of each other and keep making retrials until they receive their requested service. Successive inter-retrial times are i.i.d. exponentially distributed random variables with retrial rate $\theta$. The arrivals to the retrial queue are modeled by a BMAP with irreducible matrix descriptor $\{D_k,k\geq0\}$ of size $m$, and the service times of the primary and retrial customers are assumed to be i.i.d. with the two different distribution functions

$$B_1(t)=1-\exp\Big\{-\int_0^t\mu_1(v)\,\mathrm{d}v\Big\}$$

and

$$B_2(t)=1-\exp\Big\{-\int_0^t\mu_2(v)\,\mathrm{d}v\Big\},$$

respectively. Provide the stable condition of the retrial queue, and compute the stationary distribution of the queue length.

7.8 Consider a GI/PH/1 retrial queue with no waiting space, where the size of the orbit is infinite. If an arrival, either a primary or a retrial customer, finds no customer in the server, then it enters the server immediately and receives service; otherwise it enters the orbit and makes a retrial at a later time. Returning customers behave independently of each other and keep making retrials until they receive their requested service. Successive inter-retrial times are i.i.d. exponentially distributed random variables with retrial rate $\theta$. The arrivals to the retrial queue are modeled by a renewal process with interarrival time distribution $F(x)$, and the service times of the primary and retrial customers are assumed to be i.i.d. PH random variables with irreducible representation $(\beta,S)$ of size $n$. Compute the transient distribution of the queue length.

7.9 Consider a GI/G/1 retrial queue with no waiting space, where the size of the orbit is infinite. If an arrival, either a primary or a retrial customer, finds no customer in the server, then it enters the server immediately and receives service; otherwise it enters the orbit and makes a retrial at a later time. Returning customers behave independently of each other and keep making retrials until they receive their requested service. Successive inter-retrial times are i.i.d. exponentially distributed random variables with retrial rate $\theta$. The arrivals to the retrial queue are modeled by a renewal process with interarrival time distribution $F(x)$, and the service times of the primary and retrial customers are assumed to be i.i.d. random variables with service time distribution $G(x)$. Compute the transient distribution of the queue length.

7.10 Consider a MAP/PH/1 retrial queue with no waiting space, where the size of the orbit is infinite. If an arrival, either a primary or a retrial customer, finds no customer in the server, then it enters the server immediately and receives service; otherwise it enters the orbit and makes a retrial at a later time. Returning customers behave independently of each other and keep making retrials until they receive their requested service. Successive inter-retrial times are i.i.d. PH distributed random variables with irreducible representation $(\alpha,T)$ of size $n$. The arrivals to the retrial queue are modeled by a MAP with irreducible matrix descriptor $(C,D)$ of size $m$, and the service times of the primary and retrial customers are assumed to be i.i.d. PH distributed random variables with irreducible representation $(\beta,S)$ of size $k$. Compute the transient distribution of the queue length.
References

Processor-Sharing Queues
1. Asmussen S. (2003). Applied Probability and Queues (Second Edition), Springer
2. Bansal N. (2003). Analysis of the M/G/1 processor-sharing queue with bulk arrivals. Operations Research Letters 31: 401-405
3. Braband J. (1994). Waiting time distributions for M/M/N processor sharing queues. Stochastic Models 10: 533-548
4. Braband J. (1995). Waiting time distributions for closed M/M/N processor sharing queues. Queueing Systems 19: 331-344
5. Cheung S.K., H. van den Berg and R.J. Boucherie (2006). Insensitive bounds for the moments of the sojourn time distribution in the M/G/1 processor-sharing queue. Queueing Systems 53: 7-18
6. Coffman E.G. and L. Kleinrock (1968). Feedback queueing models for time-shared systems. Journal of the Association for Computing Machinery 15: 549-576
7. Coffman E.G. Jr., R.R. Muntz and H. Trotter (1970). Waiting time distributions for processor-sharing systems. Journal of the Association for Computing Machinery 17: 123-130
8. Cohen J.W. (1979). The multiple phase service network with generalized processor sharing. Acta Informatica 12: 245-284
9. Cohen J.W. (1982). The Single Server Queue (Second Edition), North-Holland Publishing Co.: Amsterdam-New York
10. Cohen J.W. (1984). Letter to the editor: The sojourn time in the GI/M/1 queue with processor sharing. Journal of Applied Probability 21: 437-442
11. Egorova R., B. Zwart and O. Boxma (2006). Sojourn time tails in the M/D/1 processor sharing queue. Probability in the Engineering and Informational Sciences 20: 429-446
12. Graham A. (1981). Kronecker Products and Matrix Calculus: with Applications, Ellis Horwood Limited
13. Guillemin F. and J. Boyer (2001). Analysis of the M/M/1 queue with processor sharing via spectral theory. Queueing Systems 39: 377-397
14. Jagerman D.L. and B. Sengupta (1991). The GI/M/1 processor-sharing queue and its heavy traffic analysis. Stochastic Models 7: 379-395
15. Kim J. and B. Kim (2008). Concavity of the conditional mean sojourn time in the M/G/1 processor-sharing queue with batch arrivals. Queueing Systems 58: 57-64
16. Kleinrock L. (1967). Time-shared systems: a theoretical treatment. Journal of the Association for Computing Machinery 14: 242-261
17. Kleinrock L. (1976). Queueing Systems, Vol. II: Computer Applications, Wiley: New York
18. Kleinrock L., R. Muntz and E. Rodemich (1971). The processor-sharing queueing model for time-shared systems with bulk arrivals. Networks 1: 1-13
19. Li Q.L. and C. Lin (2006). The M/G/1 processor-sharing queue with disasters. Computers & Mathematics with Applications 51: 987-998
20. Li Q.L., L. Liu and Z.T. Lian (2005). An RG-factorization approach for a BMAP/M/1 generalized processor-sharing queue. Stochastic Models 21: 507-530
21. Masuyama H. and T. Takine (2003). Sojourn time distribution in a MAP/M/1 processor-sharing queue. Operations Research Letters 31: 406-412
22. Morrison J.A. (1985). Response-time distribution for a processor-sharing system. SIAM Journal on Applied Mathematics 45: 152-167
23. Neuts M.F. (1981). Matrix-Geometric Solutions in Stochastic Models: An Algorithmic Approach, The Johns Hopkins University Press: Baltimore
24. Neuts M.F. (1989). Structured Stochastic Matrices of M/G/1 Type and Their Applications, Marcel Dekker: New York
25. Núñez-Queija R. (2001). Sojourn times in non-homogeneous QBD processes with processor sharing. Stochastic Models 17: 61-92
26. O'Donovan T.M. (1976). Conditional response times in M/M/1 processor-sharing models. Operations Research 24: 382-385
27. Ramaswami V. (1984). The sojourn time in the GI/M/1 queue with processor sharing. Journal of Applied Probability 21: 437-442
28. Rege K.M. and B. Sengupta (1993). The M/G/1 processor-sharing queue with bulk arrivals. In Modelling and Performance Evaluation of ATM Technology, Elsevier: Amsterdam, 417-432
29. Resing J.A.C., G. Hooghiemstra and M.S. Keane (1990). The M/G/1 processor sharing queue as the almost sure limit of feedback queues. Journal of Applied Probability 27: 913-918
30. Schassberger R. (1984). A new approach to the M/G/1 processor-sharing queue. Advances in Applied Probability 16: 202-213
31. Sengupta B. and D.L. Jagerman (1985). A conditional response time of the M/M/1 processor-sharing queue. AT&T Technical Journal 64: 409-421
32. Sericola B., F. Guillemin and J. Boyer (2005). Sojourn times in the M/PH/1 processor sharing queue. Queueing Systems 50: 109-130
33. Yang Y. and C. Knessl (1993). Conditional sojourn time moments in the finite capacity GI/M/1 queue with processor-sharing service. SIAM Journal on Applied Mathematics 53: 1132-1193
34. Yashkov S.F. (1987). Processor-sharing queues: some progress in analysis. Queueing Systems 2: 1-17
35. Yashkov S.F. (1992). Mathematical problems in the theory of processor-sharing queues. Journal of Soviet Mathematics 58: 101-147
36. Yashkov S.F. and A.S. Yashkova (2007). Processor sharing: A survey of the mathematical theory. Automation and Remote Control 68: 1662-1731
37. Zwart A.P. and O.J. Boxma (2000). Sojourn time asymptotics in the M/G/1 processor sharing queue. Queueing Systems 35: 141-166
Fluid Queues
38. Adan I.J.B.F. and J.A.C. Resing (1996). Simple analysis of a fluid queue driven by an M/M/1 queue. Queueing Systems 22: 171-174
39. Adan I.J.B.F., J.A.C. Resing and V.G. Kulkarni (2003). Stochastic discretization for the long-run average reward in fluid models. Probability in the Engineering and Informational Sciences 17: 251-265
40. Ahn S. and V. Ramaswami (2003). Fluid flow models and queues: a connection by stochastic coupling. Stochastic Models 19: 325-348
41. Anick D., D. Mitra and M.M. Sondhi (1982). Stochastic theory of a data-handling system with multiple sources. The Bell System Technical Journal 61: 1871-1894
42. Asmussen S. (1995). Stationary distributions for fluid flow models with or without Brownian noise. Stochastic Models 11: 21-49
43. Barbot N., B. Sericola and M. Telek (2001). Distribution of busy period in stochastic fluid models. Stochastic Models 17: 407-427
44. Barlow M.T., L.C.G. Rogers and D. Williams (1980). Wiener-Hopf factorization for matrices. In Séminaire de Probabilités XIV, Lecture Notes in Mathematics 784, Springer: Berlin, 324-331
45. Blaabjerg S., H. Andersson and H. Andersson (1995). Approximating heterogeneous fluid queue with a birth-death fluid queue. IEEE Transactions on Communications 43: 1884-1887
46. da Silva Soares A. and G. Latouche (2002). Further results on the similarity between fluid queues and QBDs. In Matrix Analytic Methods: Theory and Applications, G. Latouche and P.G. Taylor (eds.), World Scientific, 89-106
47. Guillemin F. and B. Sericola (2007). Stationary analysis of a fluid queue driven by some countable state space Markov chain. Methodology and Computing in Applied Probability 9: 521-540
48. Karandikar R.L. and V.G. Kulkarni (1995). Second-order fluid flow models: reflected Brownian motion in a random environment. Operations Research 43: 77-88
49. Konovalov V. (1998). Fluid queue driven by a GI/G/1 queue, stable problems for stochastic models. Journal of Mathematical Sciences 91: 2917-2930
50. Kontovasilis K.P. and M.M. Mitrou (1995). Markov-modulated traffic with nearly complete decomposability characteristics and associated fluid queueing models. Advances in Applied Probability 27: 1144-1185
51. Kulkarni V.G. (1997). Fluid models for single buffer systems. In Frontiers in Queueing: Models and Applications in Science and Engineering, J.H. Dshalalow (ed.), CRC Press: Boca Raton, 321-338
52. Kulkarni V.G. and E. Tzenova (2002). Mean first passage times in fluid queues. Operations Research Letters 30: 308-318
53. Lenin R.B. and P.R. Parthasarathy (2000). A computational approach for fluid queues driven by truncated birth-death processes. Methodology and Computing in Applied Probability 2: 373-392
54. Li Q.L., L. Liu and W. Shang (2008). Heavy-tailed asymptotics for a fluid model driven by an M/G/1 queue. Performance Evaluation 65: 227-240
55. Li Q.L. and Y.Q. Zhao (2005). Block-structured fluid models driven by level-dependent QBD processes with either infinite levels or finite levels. Stochastic Analysis and Applications 23: 1087-1112
56. Masuyama H. and T. Takine (2007). Multiclass Markovian fluid queues. Queueing Systems 56: 143-155
57. Mitra D. (1988). Stochastic theory of a fluid model of producers and consumers coupled by a buffer. Advances in Applied Probability 20: 646-676
58. Parthasarathy P.R., K.V. Vijayashree and R.B. Lenin (2002). An M/M/1 driven fluid queue: continued fraction approach. Queueing Systems 42: 189-199
59. Parthasarathy P.R., B. Sericola and K.V. Vijayashree (2005). Transient analysis of a fluid queue driven by a birth and death process suggested by a chain sequence. Journal of Applied Mathematics and Stochastic Analysis 2005: 143-158
60. Rabehasaina L. and B. Sericola (2004). A second-order Markov-modulated fluid queue with linear service rate. Journal of Applied Probability 41: 758-777
61. Ramaswami V. (1999). Matrix analytic methods for stochastic fluid flows. In: ITC-16, P. Key and D. Smith (eds.), Elsevier Science B.V., 1019 1030 62. Rogers L.C.G. (1994). Fluid models in queueing theory and Wiener-Hopf factorization of Markov chains. The Annals of Applied Probability 4: 390 413 63. Sericola B. and B. Tuffin (1999). A fluid queue driven by a Markovian queue. Queueing Systems 31: 253 264 64. Stern T.E. and A.I. Elwalid (1991). Analysis of separable Markov-modulated rate models for information-handling systems. Advances in Applied Probability 23: 105 139 65. Doorn E.A. van, A.A. Jagers and J.S.J. de Wit (1987). A fluid reservoir regulated by a birth-death process. Stochastic Models 4: 457 472 66. van Lierde S., A. da Silva Soares and G. Latouche (2008). Invariant measures for fluid queues. Stochastic Models 24: 133 151 67. Virtamo J. and I. Norros (1994). Fluid queue driven by an M M 1 queue. Queueing Systems 16: 373 386
Queues with Negative Customers 68. Anisimov V.V. and J.R. Artalejo (2001). Analysis of Markov multiserver retrial queues with negative arrivals. Queueing Systems 39: 157 182 69. Artalejo J.R. (2000). G-networks: A versatile approach for work removal in queueing networks. European Journal of Operational Research 126: 233 249 70. Artalejo J.R. and A. Gómez-Corral (1998). Generalized birth and death processes with applications to queues with repeated attempts and negative arrivals. OR Spektrum 20: 223 233 71. Bayer N. and O.J. Boxma (1996). Wiener-Hopf analysis of an M G 1 queue with negative customers and of a related class of random walks. Queueing Systems 23: 301 316 72. Boucherie R.J. and O.J. Boxma (1995). The workload in the M G1 queue with work removal. Probability in the Engineering and Informational Sciences 10: 261 277 73. Chao X. (1995). A queueing network model with catastrophes and product form solution. Operations Research Letters 18: 75 79 74. Chao X., M. Miyazawa and M. Pinedo (1999). Queueing Networks: Customers, Signals, and Product Form Solutions, Wiley, Chichester 75. Dudin A. and S. Nishimura (1999). A BMAPSM 1 queueing system with Markovian arrival input of disasters. Journal of Applied Probability 36: 868 881 76. Dudin A. and O. Semenova (2004). A stable algorithm for stationary distribution calculation for a BMAPSM 1 queueing system with Markovian arrival input of disasters. Journal of Applied Probability 41: 547 556 77. Gelenbe E. (1991). Production-form queueing networks with negative and positive customers. Journal of Applied Probability 28: 656 663 78. Gelenbe E. (1993). G-networks with triggered customer movement. Journal of Applied Probability 30: 742 748 79. Gelenbe E. (1994). G-networks: a unifying model for neural and queueing networks. Annals of Operations Research 48: 433 461 385
Constructive Computation in Stochastic Models with Applications
80. Gelenbe E., P. Glynn and K. Sigman (1991). Queues with negative arrivals. Journal of Applied Probability 28: 245 250 81. Gelenbe E. and G. Pujolle (1998). Introduction to Queueing Networks (Second Edition), Wiley: Chichester 82. Gelenbe E. and R. Schassberger (1992). Stability of product form G-networks. Probability in the Engineering and Informational Sciences 6: 271 276 83. Harrison P.G. and E. Pitel (1993). Sojourn times in single-server queue with negative customers. Journal of Applied Probability 30: 943 963 84. Harrison P.G. and E. Pitel (1995). Response time distributions in tandem G-networks. Journal of Applied Probability 32: 224 247 85. Harrison P.G. and E. Pitel (1996). The M G 1 queue with negative customers. Advances in Applied Probability 28: 540 566 86. Henderson W. (1993). Queueing networks with negative customers and negative queue lengths. Journal of Applied Probability 30: 931 942 87. Jain G. and K. Sigman (1996). A Pollaczek-Khintchine formula for M/G/1 queues with disasters. Journal of Applied Probability 33: 1191 1200 88. Li Q.L. and Y.Q. Zhao (2004). A MAP/G/1 queue with negative customers. Queueing Systems 47: 5 43 89. Pitel E. (1994). Queues with Negative Customers, Ph.D. Thesis, Imperial College, London 90. Serfozo R. (1999). Introduction to Stochastic Networks, Springer-Verlag: New York 91. Shin Y.W. (2007). Multi-server retrial queue with negative customers and disasters. Queueing Systems 55: 223 237 92. Zhu Y. and Z.G. Zhang (2004). M G 1 queues with services of both positive and negative customers. Journal of Applied Probability 41: 1157 1170
Retrial Queues 93. Aissani A. (1994). A retrial queue with redundancy and unreliable server. Queueing Systems 17: 431 449 94. Aissani A. and J.R. Artalejo (1998). On the single server retrial queue subject to breakdowns. Queueing Systems 30: 309 321 95. Artalejo J.R. (1994). New results in retrial queueing systems with breakdowns of the servers. Statistics Neerlandica 48: 23 36 96. Artalejo J.R. (1999). A classified bibliography of research on retrial queues: Progress in 1990 1999. Top 7: 187 211 97. Artalejo J.R. and A. Gómez-Corral (1998). Unreliable retrial queues due to service interruptions arising from facsimile networks. Belgian Journal of Operations Research, Statistics and Computer Science 38: 31 41 98. Artalejo J.R. and A. Gómez-Corral (2008). Retrial Queueing Systems: A Computational Approach, Springer 99. Breuer L., A. Dudin and V. Klimenok (2002). A retrial BMAPPH N system. Queueing Systems 40: 433 457 386
7
Examples of Practical Applications
100. Chakravarthy S.R. and A. Dudin (2002). A multi-server retrial queue with BMAP arrivals and group services. Queueing Systems 42: 5 31 101. Chakravarthy S.R. and A. Dudin (2003). Analysis of a retrial queuing model with MAP arrivals and two types of customers. Mathematical and Computer Modelling 37: 343 363 102. Choi B.D., Y. Yang and B. Kim (1999). MAP1MAP2 M c retrial queue with guard channels and its applications to cellular networks. Top 7: 231 248 103. Choi B.D., Y.H. Chung and A. Dudin (2001). The BMAPSM 1 retrial queue with controllable operation modes. European Journal of Operational Research 131: 16 30 104. Diamond J.E. and A.S. Alfa (1995). Matrix analytic methods for M PH 1 retrial queues with buffers. Stochastic Models 11: 447 470 105. Diamond J.E. and A.S. Alfa (1998). Matrix analytic methods for MAPPH 1 retrial queues with buffers. Stochastic Models 14: 1151 1187 106. Diamond J.E. and A.S. Alfa (1999). Matrix analytic methods for multi-server retrial queues with buffers. Top 7: 249 266 107. Dudin A. and V.I. Klimenok (2000). A retrial BMAPSM 1 system with linear requests. Queueing Systems 34: 47 66 108. Falin G.I. (1990). A survey of retrial queues. Queueing Systems 7: 127 167 109. Falin G.I. and J.G.C. Templeton (1997). Retrial Queues; Chapman & Hall: London 110. Gómez-Corral A. (2006). A bibliographical guide to the analysis of retrial queues through matrix analytic techniques. Annals of Operations Research 141: 163 191 111. He Q.M., H. Li and Y.Q. Zhao (2000). Ergodicity of the BMAPPH ss k retrial queue with PH-retrial times. Queueing Systems 35: 323 347 112. Horn R.A. and C.R. Johnson (1985). Matrix Analysis, Cambridge University Press 113. Kulkarni V.G. (1983). On queueing systems with retrials. Journal of Applied Probability 20: 380 389 114. Kulkarni V.G. (1986). Expected waiting times in a multiclass batch arrival retrial queue. Journal of Applied Probability 23: 144 154 115. Kulkarni V.G. and B.D Choi (1990). 
Retrial queues with server subject to breakdowns and repairs. Queueing Systems 7: 191 208 116. Kulkarni V.G. and H.M. Liang (1997). Retrial queues revisited. In Frontiers in queueing, Probab. Stochastics Ser., CRC, Boca Raton, FL, 19 34 117. Li Q.L., Y. Ying and Y.Q. Zhao (2006). A BMAPG 1 retrial queue with a server subject to breakdowns and repairs. Annals of Operations Research 141: 233 270 118. Liang H.M. and V. G. Kulkarni (1993). Monotonicity properties of single-server retrial queues. Stochastic Models 9: 373 400 119.Liang H.M. and V. G. Kulkarni (1993). Stability condition for a single-server retrial queue. Advances in Applied Probability 25: 690 701 120. Lucantoni D.M. (1991). New results on the single server queue with a batch Markovian arrival process. Stochastic Models 7: 1 46 121. Neuts M.F. and B.M. Rao (1990). Numerical investigation of a multiserver retrial model. Queueing Systems 7: 169 190 387
Constructive Computation in Stochastic Models with Applications
122. Shang W., L. Liu and Q.L. Li (2006). Tail asymptotics for the queue length in an M G 1 retrial queue. Queueing Systems 52: 193 198 123. Wang J., J. Cao and Q.L. Li (2001). Reliability analysis of the retrial queue with server breakdowns and repairs. Queueing Systems 38: 363 380 124. Yang T. and H. Li (1994). The M G 1 retrial queue with the server subject to starting failures. Queueing Systems 16: 83 96 125. Yang T. and J.G.C. Templeton (1987). A survey on retrial queues. Queueing Systems 2: 201 233
388
8 Transient Solution
Quan-Lin Li Department of Industrial Engineering Tsinghua University Beijing 100084, P.R. China E-mail: [email protected]
Abstract In this chapter, we apply the RG-factorizations to provide a unified algorithmic framework for dealing with transient solutions in stochastic models. The transient solution includes the transient probability, the first passage time, the sojourn time and time-inhomogeneous Markov chains. Based on the first passage time, we extend the PH distribution and the MAP to the GPH distribution and the GMAP, respectively, from finite phases to infinite phases, and also study the time-inhomogeneous PH (PH(t)) distribution and the time-inhomogeneous MAP (MAP(t)). Finally, we analyze some queueing examples such as GMAP/GPH/1 and MAP(t)/PH(t)/1.

Keywords stochastic models, RG-factorization, time-inhomogeneous Markov chain, transient solution, the first passage time, the sojourn time, GPH distribution, GMAP, time-inhomogeneous PH distribution, time-inhomogeneous MAP.
In this chapter, we apply the RG-factorizations to provide a unified algorithmic framework for dealing with transient solutions in stochastic models. The transient solution includes the transient probability, the first passage time, the sojourn time, and performance measures of time-inhomogeneous Markov chains. Note that the first passage time has been dealt with in Section 6.7. This chapter extends the PH distribution and the MAP to the generalized PH (GPH) distribution and the generalized MAP (GMAP), respectively, from finitely many phases to infinitely many phases. It also studies the time-inhomogeneous PH (PH(t)) distribution and the time-inhomogeneous MAP (MAP(t)), both of which are useful in the study of time-inhomogeneous stochastic models.

This chapter is organized as follows. Section 8.1 studies the transient probability of an irreducible Markov chain which is either discrete-time or continuous-time, and develops two effective algorithms for computing the transient probability. Section 8.2 analyzes the first passage time of a Markov chain. This section gives two effective algorithms for computing the first passage time distribution for the discrete-time and continuous-time cases, respectively. Based on this, we develop the GPH distribution and the GMAP. Section 8.3 analyzes the sojourn times of an irreducible Markov chain by means of the PH distributions and the RG-factorizations. Section 8.4 discusses a time-inhomogeneous discrete-time Markov chain, and analyzes the asymptotic periodic distribution of a $d$-period Markov chain. Finally, Section 8.5 summarizes notes for the references related to the results of this chapter.
8.1 Transient Probability
In this section, we provide an algorithmic framework for computing the transient probability of an irreducible Markov chain which is either discrete-time or continuous-time. The algorithmic framework is based on the RG-factorizations.
8.1.1 Discrete-Time Markov Chains
Consider an irreducible $M$-state Markov chain $\{X_n, n \ge 0\}$ whose transition probability matrix is given by $P = (p_{i,j})_{1 \le i,j \le M}$, where $M$ is either finite or infinite. Let $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_M)$ be the initial probability vector of the Markov chain, that is, $P\{X_0 = k\} = \alpha_k$ for $1 \le k \le M$. We write $\pi_k(n) = P\{X_n = k\}$ and $\pi(n) = (\pi_1(n), \pi_2(n), \ldots, \pi_M(n))$. Thus

$$\pi(n) = \alpha P^n, \quad n \ge 0. \tag{8.1}$$
It is necessary to explain the probabilistic setting for $\pi(n)$ with $n \ge 0$ in terms of practical systems. The transient probability (8.1) can always be regarded as a forecast method for analyzing many practical issues. For example, let us consider a market percentage prediction for five electric power companies. Due to dynamic competition, it is a crucial decision-making process to provide the market percentage prediction for each company. To achieve this, we can use the transient probability to compute the market percentages. Let $p_{i,j}$ be the probability that the last period's customers of the $i$th electric power company purchase their electric power from the $j$th electric power company, and write the matrix $P = (p_{i,j})_{1 \le i,j \le 5}$. The five electric power companies are denoted as A, B, C, D and E. Based on statistical analysis, we can obtain the initial market percentage vector as

$$\pi(0) = (0.20, \, 0.18, \, 0.16, \, 0.24, \, 0.22)$$

and the transition probability matrix $P$, with rows and columns ordered as A, B, C, D, E, as

$$P = \begin{pmatrix}
0.60 & 0.10 & 0.05 & 0.15 & 0.10 \\
0.05 & 0.70 & 0.05 & 0.10 & 0.10 \\
0.10 & 0.20 & 0.50 & 0.15 & 0.05 \\
0.05 & 0.10 & 0.05 & 0.75 & 0.05 \\
0.10 & 0.05 & 0.10 & 0.10 & 0.65
\end{pmatrix}.$$

Therefore, we can predict the market percentages of the five electric power companies in the current year as

$$\pi(1) = \pi(0) P = (0.18, \, 0.22, \, 0.13, \, 0.27, \, 0.20),$$

and the market percentages in the next year as

$$\pi(2) = \pi(1) P = (0.17, \, 0.24, \, 0.12, \, 0.28, \, 0.19).$$
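The two forecasting steps above are a pair of vector-matrix products, which the following sketch reproduces numerically. The row/column ordering of $P$ (A through E) is an assumption recovered from the typeset matrix, and the figures quoted in the text are rounded, so the sketch only checks structural properties.

```python
import numpy as np

# Market-percentage forecast: pi(n) = pi(0) P^n.
# The row/column ordering A, B, C, D, E of P is an assumption recovered
# from the typeset matrix; the book's quoted figures are rounded.
pi0 = np.array([0.20, 0.18, 0.16, 0.24, 0.22])   # initial market shares

P = np.array([
    [0.60, 0.10, 0.05, 0.15, 0.10],   # A
    [0.05, 0.70, 0.05, 0.10, 0.10],   # B
    [0.10, 0.20, 0.50, 0.15, 0.05],   # C
    [0.05, 0.10, 0.05, 0.75, 0.05],   # D
    [0.10, 0.05, 0.10, 0.10, 0.65],   # E
])

def transient(pi0, P, n):
    """Compute pi(n) = pi(0) P^n by repeated vector-matrix products."""
    pi = pi0.copy()
    for _ in range(n):
        pi = pi @ P
    return pi

pi1 = transient(pi0, P, 1)   # forecast for the current year
pi2 = transient(pi0, P, 2)   # forecast for the next year
```

Because $P$ is stochastic, every forecast $\pi(n)$ remains a probability vector, which is the structural check performed here.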
When the Markov chain $P$ is positive recurrent, we denote by $\pi = (\pi_1, \pi_2, \ldots, \pi_M)$ its stationary probability vector. In this case, if $\alpha = \pi$, then $\pi(n) = \pi P^n = \pi$ for each $n \ge 0$. At the same time, it is clear that $\lim_{n \to \infty} \pi(n) = \pi$, which is independent of any initial probability vector $\alpha$.

Let $\pi^*(z) = \sum_{n=0}^{\infty} z^n \pi(n)$. Then $\pi^*(z) = \alpha (I - zP)^{-1}$. When $0 < z \le 1$, it is clear that $zP$ is either stochastic or substochastic. In this case, we can obtain the UL-type RG-factorization

$$I - zP = [I - R_U(z)][I - \Psi_D(z)][I - G_L(z)],$$

which leads to

$$\pi^*(z) = \alpha [I - G_L(z)]^{-1} [I - \Psi_D(z)]^{-1} [I - R_U(z)]^{-1};$$

and the LU-type RG-factorization

$$I - zP = [I - R_L(z)][I - U_D(z)][I - G_U(z)],$$

which yields

$$\pi^*(z) = \alpha [I - G_U(z)]^{-1} [I - U_D(z)]^{-1} [I - R_L(z)]^{-1}.$$

Let $N_i$ be the return number of the Markov chain $\{X_n, n \ge 0\}$ to state $i$, and write $\gamma_i = E[N_i \mid X_0 = i]$ and $\gamma = (\gamma_1, \gamma_2, \ldots, \gamma_M)$. It is easy to check that

$$\gamma = \sum_{n=1}^{\infty} n \pi(n) = \alpha P (I - P)^{-2} = \pi(1) [(I - P)^{-1}]^2.$$

Therefore,

(1) if the Markov chain $\{X_n, n \ge 0\}$ is transient, then

$$\gamma = \pi(1) [(I - G_L)^{-1} (I - \Psi_D)^{-1} (I - R_U)^{-1}]^2;$$

(2) if $M < \infty$, then

$$\gamma = \pi(1) [(I - G_U)^{-1} (I - U_D)^{-1} (I - R_L)^{-1}]^2.$$
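For a finite strictly substochastic (hence transient) chain, the closed form $\gamma = \pi(1)[(I - P)^{-1}]^2$ can be checked directly against partial sums of $\sum_{n \ge 1} n\pi(n)$. The 3-state matrix below is a made-up illustration, not from the text.

```python
import numpy as np

# Check gamma = sum_{n>=1} n pi(n) = pi(1) [(I - P)^{-1}]^2 on a small
# strictly substochastic P (row sums < 1, so the chain is transient).
# The numbers are hypothetical.
alpha = np.array([0.5, 0.3, 0.2])
P = np.array([[0.2, 0.3, 0.1],
              [0.1, 0.1, 0.4],
              [0.3, 0.2, 0.1]])
I = np.eye(3)

pi1 = alpha @ P
inv = np.linalg.inv(I - P)
gamma_closed = pi1 @ inv @ inv          # pi(1) [(I - P)^{-1}]^2

gamma_series = np.zeros(3)              # partial sums of sum_n n * alpha P^n
pin = alpha.copy()
for n in range(1, 300):
    pin = pin @ P                       # pi(n) = alpha P^n
    gamma_series += n * pin
```

Since the row sums are at most 0.6, the series term $n\,\alpha P^n$ decays geometrically and 300 terms are more than enough for agreement to machine precision.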
8.1.2 An Approximate Algorithm
We provide an effective algorithm for computing the transient probability $\pi(n)$, $n \ge 0$, of a Markov chain with infinitely many states. This algorithm is a key element for constructing the other three algorithms in this chapter. For the vector $\alpha$ and the matrix $P$, there always exists a monotonically nondecreasing integer sequence $\{N_k\}$ such that for a sufficiently small $\varepsilon > 0$, we have

$$\sum_{j=N_0+1}^{\infty} \alpha_j < \varepsilon \tag{8.2}$$

and

$$\sum_{j=N_n+1}^{\infty} p_{i,j} < \varepsilon, \quad 1 \le i \le N_{n-1}, \quad n \ge 1. \tag{8.3}$$
Remark 8.1 It is necessary to provide a concrete procedure for determining the integer sequence $\{N_k\}$, based on the following three steps:

Step 1 Since $\alpha e \le 1$, i.e., $\sum_{i=1}^{\infty} \alpha_i \le 1$, there exists an integer $N_0 > 1$ such that

$$\sum_{i=N_0+1}^{\infty} \alpha_i < \varepsilon.$$

Step 2 Once $N_0$ is determined, using $\sum_{j=1}^{\infty} p_{i,j} \le 1$, there exists an integer $N_1^{(i)} \ge N_0$ such that $\sum_{j=N_1^{(i)}+1}^{\infty} p_{i,j} < \varepsilon$ for $1 \le i \le N_0$. Let

$$N_1 = \max\{N_1^{(i)} : 1 \le i \le N_0\}.$$

Step 3 Once $N_k$ is determined for $k \ge 1$, using $\sum_{j=1}^{\infty} p_{i,j} \le 1$, there exists an integer $N_{k+1}^{(i)} \ge N_k$ such that $\sum_{j=N_{k+1}^{(i)}+1}^{\infty} p_{i,j} < \varepsilon$ for $1 \le i \le N_k$. Let

$$N_{k+1} = \max\{N_{k+1}^{(i)} : 1 \le i \le N_k\}.$$
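The three steps of Remark 8.1 can be sketched as follows. The chain used here (a geometric initial vector and a reflected random walk, stored on 500 states so that tails are computable) is a hypothetical stand-in for the infinite model.

```python
import numpy as np

# Sketch of the three-step construction of {N_k} in Remark 8.1.
# alpha and P are hypothetical: alpha_i = (1/2)^i and a reflected random
# walk that steps down with probability 0.6 and up with probability 0.4.
L, eps = 500, 1e-6
states = np.arange(1, L + 1)
alpha = 0.5 ** states                      # geometric initial vector

P = np.zeros((L, L))
P[0, 0] = 0.6
for i in range(1, L):
    P[i, i - 1] = 0.6                      # step down
for i in range(L - 1):
    P[i, i + 1] = 0.4                      # step up
P[L - 1, L - 1] += 0.4                     # keep the last row stochastic

def tail_index(row, eps):
    """Smallest N (states are 1-based) with sum_{j > N} row_j < eps."""
    c = np.cumsum(row)
    return int(np.searchsorted(c, row.sum() - eps)) + 1

Ns = [tail_index(alpha, eps)]              # Step 1: N_0
for _ in range(3):                         # Steps 2 and 3: N_1, N_2, N_3
    prev = Ns[-1]
    Ns.append(max(prev, max(tail_index(P[i], eps) for i in range(prev))))
```

By construction the sequence is nondecreasing and each $N_{k+1}$ dominates the tail indices of all rows $i \le N_k$, which is exactly conditions (8.2) and (8.3).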
Lemma 8.1 Let $\pi(n) = \alpha P^n$ for $n \ge 0$. Then for each $n \ge 0$, we have

$$\sum_{j=N_n+1}^{\infty} \pi_j(n) < \varepsilon.$$

Proof Since $\pi(n) e = \alpha P^n e \le 1$ for $n \ge 0$, we obtain $\sum_{j=1}^{\infty} \pi_j(n) \le 1$. Hence, we have

$$\sum_{j=N_n+1}^{\infty} \pi_j(n) = \sum_{j=N_n+1}^{\infty} \sum_{l=1}^{\infty} \pi_l(n-1) p_{l,j} = \sum_{l=1}^{\infty} \pi_l(n-1) \sum_{j=N_n+1}^{\infty} p_{l,j} < \varepsilon \sum_{l=1}^{\infty} \pi_l(n-1) \le \varepsilon.$$
This completes the proof.

For a positive integer $N$, we write

$$^{N}\pi(n) = ({}^{N}\pi_1(n), {}^{N}\pi_2(n), {}^{N}\pi_3(n), \ldots, {}^{N}\pi_{N-1}(n), {}^{N}\pi_N(n)).$$

Specifically,

$$^{N_0}\pi_j(0) = \alpha_j, \quad 1 \le j \le N_0, \tag{8.4}$$

and

$$^{N_n}\pi_j(n) = \sum_{i=1}^{N_{n-1}} {}^{N_{n-1}}\pi_i(n-1) p_{i,j}, \quad 1 \le j \le N_n, \quad n \ge 1. \tag{8.5}$$
Theorem 8.1 For $1 \le j \le N_n$, $n \ge 1$,

$$0 \le \pi_j(n) - {}^{N_n}\pi_j(n) \le \sum_{j=1}^{N_n} [\pi_j(n) - {}^{N_n}\pi_j(n)] \le \frac{1}{2} n(n+1)\varepsilon.$$

Proof This proof contains the following three steps.

Step 1 By induction, we first prove that $\pi_j(n) - {}^{N_n}\pi_j(n) \ge 0$ for $1 \le j \le N_n$, $n \ge 1$. When $n = 1$,

$$\pi_j(1) - {}^{N_1}\pi_j(1) = \sum_{i=1}^{\infty} \pi_i(0) p_{i,j} - \sum_{i=1}^{N_0} {}^{N_0}\pi_i(0) p_{i,j} = \sum_{i=N_0+1}^{\infty} \pi_i(0) p_{i,j} \ge 0,$$

since $\pi_i(0) = {}^{N_0}\pi_i(0) = \alpha_i$ for $1 \le i \le N_0$. We assume that when $n = k$, $\pi_j(k) - {}^{N_k}\pi_j(k) \ge 0$. Then for $n = k+1$,

$$\pi_j(k+1) - {}^{N_{k+1}}\pi_j(k+1) = \sum_{i=1}^{N_k} [\pi_i(k) - {}^{N_k}\pi_i(k)] p_{i,j} + \sum_{i=N_k+1}^{\infty} \pi_i(k) p_{i,j} \ge 0.$$

Therefore, we obtain that $\pi_j(n) - {}^{N_n}\pi_j(n) \ge 0$ for $1 \le j \le N_n$, $n \ge 1$.

Step 2 Since $\pi_j(n) - {}^{N_n}\pi_j(n) \ge 0$ for $1 \le j \le N_n$ and $n \ge 1$, it is clear that

$$\pi_j(n) - {}^{N_n}\pi_j(n) \le \sum_{j=1}^{N_n} [\pi_j(n) - {}^{N_n}\pi_j(n)].$$

Step 3 We prove that $\sum_{j=1}^{N_n} [\pi_j(n) - {}^{N_n}\pi_j(n)] \le \frac{1}{2} n(n+1)\varepsilon$ for $n \ge 1$ by induction. When $n = 1$,

$$\sum_{j=1}^{N_1} [\pi_j(1) - {}^{N_1}\pi_j(1)] = \sum_{j=1}^{N_1} \sum_{i=N_0+1}^{\infty} \pi_i(0) p_{i,j} = \sum_{i=N_0+1}^{\infty} \pi_i(0) \sum_{j=1}^{N_1} p_{i,j} \le \sum_{i=N_0+1}^{\infty} \alpha_i < \varepsilon.$$

We assume that when $n = k$, $\sum_{j=1}^{N_k} [\pi_j(k) - {}^{N_k}\pi_j(k)] \le \frac{1}{2} k(k+1)\varepsilon$. Then for $n = k+1$,

$$\sum_{j=1}^{N_{k+1}} [\pi_j(k+1) - {}^{N_{k+1}}\pi_j(k+1)] = \sum_{j=1}^{N_{k+1}} \left[ \sum_{i=1}^{N_k} [\pi_i(k) - {}^{N_k}\pi_i(k)] p_{i,j} + \sum_{i=N_k+1}^{\infty} \pi_i(k) p_{i,j} \right]$$
$$\le \sum_{i=1}^{N_k} [\pi_i(k) - {}^{N_k}\pi_i(k)] + \sum_{i=N_k+1}^{\infty} \pi_i(k) \le \frac{1}{2} k(k+1)\varepsilon + \varepsilon \le \frac{1}{2} (k+1)(k+2)\varepsilon.$$

Therefore, we obtain that $\sum_{j=1}^{N_n} [\pi_j(n) - {}^{N_n}\pi_j(n)] \le \frac{1}{2} n(n+1)\varepsilon$ for $n \ge 1$.
This completes the proof.

Note that the sequence $\{{}^{N_n}\pi_j(n)\}$ can be computed effectively from the vector $\alpha$ and the matrix $P$; thus ${}^{N_n}\pi_j(n)$ can be used to approximate the transient probability $\pi_j(n)$ within a sufficiently small error $\varepsilon > 0$.

Algorithm 8.1 Computation of Transient Probability
INPUT: The vector $\alpha$, the matrix $P$, and the error $\varepsilon > 0$.
COMPUTATION:
Step 1 Generate the integer sequence $\{N_k\}$ according to (8.2) and (8.3).
Step 2 Iteratively compute $\{{}^{N_n}\pi_j(n)\}$ according to (8.4) and (8.5).
OUTPUT: The approximate transient probability $\{{}^{N_n}\pi_j(n)\}$.

As an illustration, we consider a Geom/Geom/1 queue in which the customer arrivals form a Bernoulli process: during a time interval $(n, n^+)$ there is an arrival with probability $p \in (0,1)$, while there is no arrival with probability $\bar{p} = 1 - p$. The service time $S$ is geometric with distribution $P\{S = k\} = \mu \bar{\mu}^{k-1}$ for $\mu \in (0,1)$, $\bar{\mu} = 1 - \mu$ and $k \ge 1$. A customer departure can only occur in the time interval $(n^-, n)$. In this system, there is a single server, the service discipline is FCFS, and all the service times and interarrival times are independent of each other. Let $L_n$ be the number of customers in the system at time $n$ for $n \ge 0$. Then $\{L_n : n \ge 0\}$ is a discrete-time birth-death process whose transition probability matrix is given by

$$P = \begin{pmatrix}
\bar{p} & p & & \\
\bar{p}\mu & \bar{p}\bar{\mu} + p\mu & p\bar{\mu} & \\
& \bar{p}\mu & \bar{p}\bar{\mu} + p\mu & p\bar{\mu} \\
& & \ddots & \ddots & \ddots
\end{pmatrix}.$$

For this discrete-time birth-death process, it is easy to compute the transient probability $\pi(n) = \alpha P^n$ for $n \ge 0$ in terms of Algorithm 8.1.
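A minimal sketch of this computation for the Geom/Geom/1 queue follows. Instead of the adaptive levels $N_n$, it uses the simpler observation that, starting from an empty system, $\pi(n)$ is supported on levels $0, \ldots, n$ (at most one arrival per slot), so a fixed truncation at $L > n$ levels is exact for the first $n$ steps. The values $p = 0.3$ and $\mu = 0.5$ are hypothetical.

```python
import numpy as np

# Transient probabilities pi(n) = alpha P^n of the Geom/Geom/1 queue.
# Starting empty, at most one arrival occurs per slot, so pi(n) lives on
# levels 0..n and truncating at L > n levels is exact over the horizon.
p, mu = 0.3, 0.5                       # hypothetical arrival/service probs
pb, mub = 1.0 - p, 1.0 - mu            # \bar p and \bar mu
L, n_steps = 60, 50                    # truncation level and horizon

P = np.zeros((L, L))
P[0, 0], P[0, 1] = pb, p
for i in range(1, L):
    P[i, i - 1] = pb * mu
    P[i, i] = pb * mub + p * mu
    if i + 1 < L:
        P[i, i + 1] = p * mub

pi = np.zeros(L)
pi[0] = 1.0                            # alpha: start with an empty system
for _ in range(n_steps):
    pi = pi @ P                        # pi(n) = pi(n-1) P
```

Because the truncation boundary is unreachable within the horizon, no probability mass is lost, which the checks below confirm.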
8.1.3 Continuous-Time Markov Chains
We consider an irreducible continuous-time $M$-state Markov chain $\{X_t, t \ge 0\}$ whose infinitesimal generator is given by $Q = (q_{i,j})_{1 \le i,j \le M}$. Let $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_M)$ be the initial probability vector of the Markov chain. We write $h_k(t) = P\{X_t = k\}$ and $H(t) = (h_1(t), h_2(t), \ldots, h_M(t))$. It is easy to see that

$$\frac{\mathrm{d}}{\mathrm{d}t} H(t) = H(t) Q$$

with the initial condition $H(0) = \alpha$. Thus we obtain

$$H(t) = \alpha \exp\{Qt\}, \quad t \ge 0. \tag{8.6}$$

When the Markov chain $Q$ is positive recurrent, we denote by $\pi = (\pi_1, \pi_2, \ldots, \pi_M)$ its stationary probability vector. In this case, if $\alpha = \pi$, then $H(t) = \pi \exp\{Qt\} = \pi$ for each $t \ge 0$. At the same time, $\lim_{t \to \infty} H(t) = \pi$, which is independent of any initial probability vector $\alpha$.

Remark 8.2 Let $V(t) = \exp\{Qt\} e$ for $t \ge 0$, where $e$ is a column vector of ones of suitable size. Then

$$\frac{\mathrm{d}}{\mathrm{d}t} V(t) = Q V(t)$$

with the initial condition $V(0) = e$. It is clear that $H(t) e = \alpha V(t)$ for each $t \ge 0$.

Let $H^*(s) = \int_0^{\infty} e^{-st} H(t) \, \mathrm{d}t$. Then $H^*(s) = \alpha (sI - Q)^{-1}$. When $s > 0$, it is clear that $Q - sI$ is the infinitesimal generator of a continuous-time Markov chain. In this case, we can obtain the UL-type RG-factorization

$$sI - Q = [I - R_U(s)][-U_D(s)][I - G_L(s)],$$

which leads to

$$H^*(s) = \alpha [I - G_L(s)]^{-1} [-U_D(s)]^{-1} [I - R_U(s)]^{-1};$$

and the LU-type RG-factorization

$$sI - Q = [I - R_L(s)][-U_D(s)][I - G_U(s)],$$

which yields

$$H^*(s) = \alpha [I - G_U(s)]^{-1} [-U_D(s)]^{-1} [I - R_L(s)]^{-1}.$$

As an illustrative example, we consider a BMAP/PH/1 queue with a repairable server. The customer arrival process is a BMAP with irreducible matrix descriptor $\{D_k, k = 0, 1, 2, \ldots\}$ of size $m$, the service times are i.i.d. and of phase type with irreducible representation $(\beta, S)$ of size $n$, the life time of the server is exponential with mean $1/\gamma > 0$, and its repair time is of phase type. Now, we need to compute the mean of the first failure time $F$ of the server. To this end, we denote by $N(t)$, $I(t)$ and $J(t)$ the number of customers in the system and the phases of the arrival and service processes at time $t$, respectively. At the same time, let the set of all the failed states be an absorbing state. Then $\{(N(t), I(t), J(t)) : t \ge 0\}$ is an irreducible continuous-time Markov chain whose infinitesimal generator is given by

$$Q = \begin{pmatrix}
D_0 & D_1 \otimes \beta & D_2 \otimes \beta & D_3 \otimes \beta & \cdots \\
I \otimes S^0 & D_0 \oplus S - \gamma I & D_1 \otimes I & D_2 \otimes I & \cdots \\
& I \otimes (S^0 \beta) & D_0 \oplus S - \gamma I & D_1 \otimes I & \cdots \\
& & I \otimes (S^0 \beta) & D_0 \oplus S - \gamma I & \cdots \\
& & & \ddots & \ddots
\end{pmatrix}.$$
Note that the Markov chain $Q$ is transient, due to $Q^0 = -Qe = (0, \gamma e^{\mathrm{T}}, \gamma e^{\mathrm{T}}, \ldots)^{\mathrm{T}}$. It is easy to see that the first failure time $F$ is of phase type with irreducible representation $(\omega, Q)$ of infinite size, where $\omega$ is the initial probability vector. For example, we may take $\omega = (\theta, 0, 0, \ldots)$, where $\theta$ is the stationary probability vector of the Markov chain $\sum_{k=0}^{\infty} D_k$. Let the reliability function be $R(t) = P\{F > t\}$. Then $R(t) = \omega \exp\{Qt\} e$. Thus

$$E[F] = \omega (-Q)^{-1}_{\min} e = \omega (I - G_L)^{-1} (-U_D)^{-1} (I - R_U)^{-1} e.$$
Now, we provide an effective algorithm for computing the transient probability vector $H(t)$ for $t \ge 0$ when $M = \infty$. Our algorithmic analysis is classified into the following two cases.

Case I: Bounded diagonal elements

We first discuss an irreducible infinite-state Markov chain $Q$ with bounded diagonal elements, that is, $c = \sup_{k \ge 1} \{-q_{k,k}\} < \infty$. In this case, we write

$$P = I + \frac{1}{c} Q.$$

It is easy to check that $P$ is either stochastic or substochastic. At the same time, $P$ is irreducible if and only if $Q$ is irreducible. Noting that $H(t) = \alpha \exp\{Qt\}$ and $Q = cP - cI$, we obtain

$$H(t) = \alpha \exp\{(cP - cI)t\} = e^{-ct} \sum_{n=0}^{\infty} \alpha P^n \frac{(ct)^n}{n!}.$$

For the term $\alpha P^n$, we write $\pi(0) = \alpha$ and $\pi(n) = \alpha P^n$ for $n \ge 1$. Obviously, $\pi(n) = \pi(n-1) P$ for $n \ge 1$. At the same time, for $t \ge 0$ we have

$$H(t) = e^{-ct} \sum_{n=0}^{\infty} \pi(n) \frac{(ct)^n}{n!}. \tag{8.7}$$
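Equation (8.7) is the uniformization formula. On a finite chain it can be checked directly against the matrix exponential; the 3-state generator below is a made-up example used only for this numerical check.

```python
import numpy as np

# Uniformization check of Eq. (8.7): H(t) = e^{-ct} sum_n pi(n) (ct)^n / n!
# with P = I + Q/c. The 3-state generator Q is hypothetical.
Q = np.array([[-2.0,  1.5,  0.5],
              [ 1.0, -3.0,  2.0],
              [ 0.5,  0.5, -1.0]])
alpha = np.array([1.0, 0.0, 0.0])
t = 0.7

c = float(max(-np.diag(Q)))            # c = sup_k {-q_{kk}}
P = np.eye(3) + Q / c                  # stochastic uniformized matrix

H = np.zeros(3)
pin = alpha.copy()                     # pi(n) = alpha P^n
w = np.exp(-c * t)                     # Poisson weight e^{-ct}(ct)^n / n!
for n in range(200):
    H += w * pin
    pin = pin @ P
    w *= c * t / (n + 1)

# Reference value: H(t) = alpha exp{Qt} via the Taylor series of exp{Qt}.
expQt, term = np.eye(3), np.eye(3)
for n in range(1, 60):
    term = term @ (Q * t) / n
    expQt = expQt + term
H_ref = alpha @ expQt
```

The attraction of (8.7) over the raw Taylor series is that every term is nonnegative, so the partial sums are numerically stable and monotone.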
It is seen from Eq. (8.7) that the computation of $H(t)$ for $t \ge 0$ can be reduced to the approximate computation of $\pi(n) = \alpha P^n$ for $n \ge 0$. We write

$$^{N_0}\pi_j(0) = \alpha_j, \quad 1 \le j \le N_0,$$

and

$$^{N_n}\pi_j(n) = \sum_{i=1}^{N_{n-1}} {}^{N_{n-1}}\pi_i(n-1) p_{i,j}, \quad 1 \le j \le N_n, \quad n \ge 1.$$

For a sufficiently large positive integer $M$, we write

$$^{M}H(t) = e^{-ct} \sum_{n=0}^{M} {}^{N_n}\pi(n) \frac{(ct)^n}{n!}. \tag{8.8}$$
Therefore, the function sequence $\{{}^{M}H(t)\}$ converges to the transient probability $H(t)$ for each $t \ge 0$, and this convergence is uniform for $t \in [0, T]$.

Algorithm 8.2 Computation of Transient Probability
INPUT: The vector $\alpha$, the matrix $Q$, and the error $\varepsilon > 0$.
COMPUTATION:
Step 1 Generate the integer sequence $\{N_k\}$.
Step 2 Iteratively compute $\{{}^{N_n}\pi(n)\}$.
Step 3 Determine the function sequence $\{{}^{M}H(t)\}$ according to Eq. (8.8).
OUTPUT: The approximate transient probability $\{{}^{M}H(t)\}$.

We consider a BMAP/M/1 processor-sharing queue, where the customer arrivals form a BMAP with irreducible matrix descriptor $\{D_k, k \ge 0\}$ of size $m$. The service times are i.i.d. and exponential with mean $1/\mu$. The service discipline is such that when there are $n$ customers in the system, each customer receives service at rate $1/n$. We now compute the sojourn time distribution. To this end, we introduce the following description. A randomly chosen customer who finds $n$ customers in the system on arrival is called Customer $C_n$. Let $w_n$ denote the sojourn time of Customer $C_n$ for $n \ge 1$. We write $W_n(x) = P\{w_n > x\}$. Then it is easy to check that

$$\frac{\mathrm{d}}{\mathrm{d}x} W_n(x) = W_n(x) Q$$

with the initial condition $W_n(0) = a$, where $a$ is a probability vector and

$$Q = \begin{pmatrix}
D_0 - \mu I & D_1 & D_2 & D_3 & D_4 & \cdots \\
\frac{1}{2}\mu I & D_0 - \mu I & D_1 & D_2 & D_3 & \cdots \\
& \frac{2}{3}\mu I & D_0 - \mu I & D_1 & D_2 & \cdots \\
& & \frac{3}{4}\mu I & D_0 - \mu I & D_1 & \cdots \\
& & & \ddots & \ddots & \ddots
\end{pmatrix}.$$
Further, we obtain

$$W_n(x) = a \exp\{Qx\} e$$

and

$$E[w_n] = a (-Q)^{-1}_{\min} e = a (I - G_L)^{-1} (-U_D)^{-1} (I - R_U)^{-1} e.$$

Let $d_{0,j}$ be the $j$th diagonal element of the matrix $D_0$ for $1 \le j \le m$. It is easy to see that

$$c = \max_{1 \le j \le m} \{\mu - d_{0,j}\} = \mu - \min_{1 \le j \le m} \{d_{0,j}\} < \infty.$$

Thus we can use Algorithm 8.2 to compute the sequence $\{{}^{M}W(t)\}$, which approximates the sojourn time distribution $W(t)$ for each $t \ge 0$.

Case II: Unbounded diagonal elements

For continuous-time Markov chains, there exist many infinitesimal generators whose diagonal elements are unbounded. For example, we consider an M/M/1 retrial queue, where the input is a Poisson process with rate $\lambda > 0$ and the service times are i.i.d. and exponential with mean $1/\mu > 0$. When there are $n$ customers in the system, the retrial time of each customer in the orbit is exponential with mean $1/(n\theta) > 0$. Let $N(t)$ be the number of customers in the system at time $t$ and

$$I(t) = \begin{cases} 0, & \text{if the server is idle,} \\ 1, & \text{if the server is busy.} \end{cases}$$
Then $\{(N(t), I(t)) : t \ge 0\}$ is an irreducible continuous-time level-dependent QBD process whose infinitesimal generator is given by

$$Q = \begin{pmatrix}
A_{0,0} & A_{0,1} & & \\
A_{1,0} & A_{1,1} & A_{1,2} & \\
& A_{2,1} & A_{2,2} & A_{2,3} \\
& & \ddots & \ddots & \ddots
\end{pmatrix}, \tag{8.9}$$

where for $k \ge 0$,

$$A_{k,k} = \begin{pmatrix} -(\lambda + k\theta) & \lambda \\ \mu & -(\lambda + \mu) \end{pmatrix}, \quad
A_{k,k+1} = \begin{pmatrix} 0 & 0 \\ 0 & \lambda \end{pmatrix}$$

and

$$A_{k+1,k} = \begin{pmatrix} 0 & (k+1)\theta \\ 0 & 0 \end{pmatrix}.$$

It is clear that

$$c = \sup_{k \ge 0} \{\lambda + k\theta, \; \lambda + \mu\} = \infty.$$
Q
E §T ¨ E c ©U
Ec
V· W ¸¹
and the initial probability vector D is partitioned as D (D (1) ,D (2) ). Then the infinitesimal generator of the censored chain QE to set E is given by 1 T V (W )min U
QE
and the initial probability vector of the censored chain Q E is given by
DE
1 D (1) D (2) ( W ) min U.
Thus the transient probability vector of the censored Markov chain Q E is given by H N(C ) (t ) D E exp{Q E t},
t ı 0.
In order to show the approximate precision, we introduce a performance measure cN
D E ( Q E ) 1 e . 1 D (Q)min e
As the value of cN tends to one, the approximate precision tends to increase. (2) The truncated approximation We directly truncate the infinitesimal generator Q as the matrix T of size N. Let H N(T ) (t ) D (1) exp{Tt},
t ı 0.
Then we can use the function H N(T ) (t ) to approximate the transient probability H (t ) for each t ı 0. In order to indicate the approximate precision, we introduce a performance measure tN
D (1) (T ) 1 e . 1 D ( Q)min e
As the value of cN tends to one, the approximate precision tends to increase. 400
8 Transient Solution
(3) The modified approximation A modified approximation is a better idea that when changing some elements of the infinitesimal generator Q such that the modified matrix Qnew has the bounded diagonal elements, while the modified Markov chain Qnew is suitable to Algorithm 8.2. For example, a level-dependent Markov chain of GI/M/1 type may be modified as a level-independent Markov chain of GI/M/1 type, and a leveldependent Markov chain of M/G/1 type may be modified as a level-independent Markov chain of M/G/1 type. For example, we modify the QBD process Q given in (8.9) as a level-independent QBD process as follows:
Qnew
§ B1 ¨B ¨ 2 ¨ ¨ ¨¨ ©
B0 AK , K AK 1, K
AK , K 1 AK , K AK 1, K
AK , K 1 AK , K AK , K 1 % %
· ¸ ¸ ¸, ¸ ¸ % ¸¹
where K is a sufficiently large integer,
B1
§ A0,0 ¨A ¨ 1,0 ¨ ¨ ¨ ¨ ©
B0
8.2
A0,1 A1,1 %
A1,2 % % AK 2, K 3 AK 2, K 2 AK 1, K 2
§ · ¨ ¸ ¨ ¸ ¨ ¸, ¨ ¸ ¨A ¸ © K 1, K ¹
B2
· ¸ ¸ ¸, AK 2, K 1 ¸ ¸ AK 1, K 1 ¸¹
( AK 1, K ).
The First Passage Times
The first passage time for an irreducible discrete-time Markov chain with finite states is a discrete-time PH distribution, while the first passage time for an irreducible continuous-time Markov chain with finite states is a continuous-time PH distribution. Reader may refer to Chapter 2 in Neuts [39] for more details. In this section, we analyze the first passage time of an irreducible Markov chain with infinite states, which is either discrete-time or continuous-time. Note that the first passage time may be regarded as a generalized PH (GPH) distribution, which can be easily obtained from the N-state PH distribution as N o f. Based on this, many useful properties of the N-state PH distribution can be formally 401
Constructive Computation in Stochastic Models with Applications
extended to the GPH distribution. Also, we provide effective algorithms for computing the GPH distribution and its associated moments. Further, we discuss the GPH renewal processes and more generally, the generalized Markovian arrival processes (GMAPs), which have with infinite-many levels.
8.2.1
Discrete-Time GPH Distribution
Consider an irreducible discrete-time Markov chain with state space : {0,1, 2,!} whose transition probability matrix is given by
P
0 E 0 § 1 0· E ¨© T 0 T ¸¹
(8.10)
where E {1, 2,!), T 0 : 0 and Te T 0 e. We assume that state 0 is an absorbing state and all the others are transient. Let (D 0 ,D ) be the initial probability vector of the Markov chain, where D 0 D e 1. For the Markov chain given in Eq. (8.10), the probability distribution { pk , k ı 0} of the number N of state transitions until absorption into the absorbing state 0 is called a discrete-time GPH distribution with representation (D , T ). If T T 0D is the transition probability matrix of an irreducible Markov chain, then this representation (D , T ) is called an irreducible representation. In this case, we have pk
k 0, D 0 , ® k 1 0 ¯D T T , k ı 1.
f
¦z
Let P* ( z )
k
pk . Then
k 0
P* ( z ) D 0 zD ( I zT ) 1 T 0 .
When 0 z İ 1, it is clear that zT is substochastic. In this case, we can obtain the UL-type RG-factorization I zT
[ I RU ( z )][ I < D ( z )][ I GL ( z )],
which leads to P* ( z ) D 0 zD [ I GL ( z )]1[ I < D ( z )]1[ I RU ( z )]1 ;
and the LU-type RG-factorization I zT 402
[ I RL ( z )][ I U D ( z )][ I GU ( z )],
8 Transient Solution
which yields P* ( z ) D 0 zD [ I GU ( z )]1[ I U D ( z )]1[ I RL ( z)]1.
Using the generating function P* ( z ), we can obtain the mean E ( N ) D ( I T ) 1 e
and its factorial moment of order k E[ N ( N 1)! ( N k 1)] k !D T k 1 ( I T ) k e,
k ı 2.
Note that dk * P ( z )| z dz k
1
E[ N ( N 1)! ( N k 1)].
It is easy to see that the RG-factorizations can be used to compute the factorial moments by using the UL-type RG-factorization I T
( I RU )( I < D )( I GL )
or the LU-type RG-factorization I T
( I RL )( I U D )( I GU ).
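The moment formulas above are easy to check numerically on a finite truncation. The following sketch uses a hypothetical two-phase substochastic block $T$ (for a genuinely infinite $T$ one would first truncate it, as in Algorithm 8.3) and compares $E[N] = \alpha(I-T)^{-1}e$ with the mean computed directly from the probabilities $p_k$:

```python
import numpy as np

# Hypothetical two-phase discrete PH representation (alpha, T); alpha_0 = 1 - alpha e.
alpha = np.array([0.6, 0.3])          # initial probability vector over transient states
T = np.array([[0.5, 0.2],
              [0.1, 0.4]])            # substochastic transition block
e = np.ones(2)
T0 = e - T @ e                        # absorption probabilities, Te + T0 = e

I = np.eye(2)
M = np.linalg.inv(I - T)              # (I - T)^{-1}

# p_0 = alpha_0, p_k = alpha T^{k-1} T0 for k >= 1 (truncated at k = 59)
alpha0 = 1.0 - alpha @ e
p = [alpha0] + [alpha @ np.linalg.matrix_power(T, k - 1) @ T0 for k in range(1, 60)]

mean = alpha @ M @ e                  # E[N] = alpha (I - T)^{-1} e
mean_direct = sum(k * pk for k, pk in enumerate(p))
print(mean, mean_direct)              # the two values agree up to truncation error
```

The truncation error is governed by the spectral radius of $T$, which is strictly below 1 for a substochastic block, so the direct sum converges geometrically.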
Now, we list some useful properties of the discrete-time GPH distribution. The proofs are easy and thus are omitted here.

Property 8.1 Let $\{p_{k}\}$ and $\{q_{k}\}$ be two discrete-time GPH distributions with irreducible representations $(\alpha, T)$ and $(\beta, S)$, where $\alpha_{0} = 1 - \alpha e$ and $\beta_{0} = 1 - \beta e$. Then the discrete convolution $\{p_{k}\} * \{q_{k}\}$ is a discrete-time GPH distribution with irreducible representation $(\gamma, L)$, where

$$\gamma = (\alpha, \alpha_{0}\beta), \qquad \gamma_{0} = \alpha_{0}\beta_{0},$$

$$L = \begin{pmatrix} T & T^{0}\beta \\ 0 & S \end{pmatrix}, \qquad L^{0} = \begin{pmatrix} T^{0}\beta_{0} \\ S^{0} \end{pmatrix}.$$
Property 8.2 Let $\{p_{k}\}$ be a discrete-time GPH distribution with irreducible representation $(\alpha, T)$ and mean $\mu$. Then the residual distribution

$$q_{k} = \frac{1}{\mu} \sum_{j=k+1}^{\infty} p_{j}, \qquad k \geq 0,$$

is a discrete-time GPH distribution with irreducible representation $(\gamma, T)$, where

$$\gamma = \frac{1}{\mu}\, \alpha (I - T)^{-1} T, \qquad \gamma_{0} = \frac{1}{\mu}\,(1 - \alpha_{0}).$$
Property 8.3 Let $\{p_{k}\}$ and $\{q_{k}\}$ be two discrete-time GPH distributions with irreducible representations $(\alpha, T)$ and $(\beta, S)$. Then the mixture $\sum_{v=0}^{\infty} q_{v}\, p^{(v*)}$, where $p^{(v*)}$ denotes the $v$-fold convolution of $\{p_{k}\}$, is of generalized phase type with irreducible representation $(\gamma, L)$, where

$$\gamma = \alpha \otimes \beta (I - \alpha_{0} S)^{-1}, \qquad \gamma_{0} = \beta_{0} + \alpha_{0}\beta (I - \alpha_{0} S)^{-1} S^{0};$$

$$L = T \otimes I + (T^{0}\alpha) \otimes [(I - \alpha_{0} S)^{-1} S], \qquad L^{0} = T^{0} \otimes [(I - \alpha_{0} S)^{-1} S^{0}].$$
Property 8.4 Let

$$g_{k} = \sum_{n=k}^{\infty} p_{n} \binom{n}{k} p^{k} (1-p)^{n-k}, \qquad k \geq 0.$$

If $\{p_{k}\}$ is a discrete-time GPH distribution with irreducible representation $(\alpha, T)$, then $\{g_{k}\}$ is also of generalized phase type with irreducible representation $(\gamma, L)$, where

$$\gamma = \alpha p [I - (1-p)T]^{-1}, \qquad \gamma_{0} = \alpha_{0} + \alpha (1-p)[I - (1-p)T]^{-1} T^{0};$$

$$L = pT[I - (1-p)T]^{-1}, \qquad L^{0} = [I - (1-p)T]^{-1} T^{0}.$$
In what follows we provide an approximate algorithm for computing the probabilities $p_{k}$ for $k \geq 0$. For the vector $\alpha$ and the matrix $T = (T_{i,j})$, there always exists a monotonically non-decreasing integer sequence $\{N_{k}\}$ such that, for a sufficiently small $\varepsilon > 0$,

$$\sum_{j=N_{0}+1}^{\infty} \alpha_{j} < \varepsilon$$

and

$$\sum_{j=N_{n}+1}^{\infty} T_{i,j} < \varepsilon, \qquad 1 \leq i \leq N_{n-1}, \quad n \geq 1.$$

Let $\pi(n) = \alpha T^{n}$ for $n \geq 0$. Then $\pi(0) = \alpha$ and $\pi(n) = \pi(n-1)T$ for $n \geq 1$. It is clear that $p_{n} = \pi(n-1)T^{0}$ for $n \geq 1$. We write

$${}^{N_{0}}\pi_{j}(0) = \alpha_{j}, \qquad 1 \leq j \leq N_{0},$$

and

$${}^{N_{n}}\pi_{j}(n) = \sum_{i=1}^{N_{n-1}} {}^{N_{n-1}}\pi_{i}(n-1)\, T_{i,j}, \qquad 1 \leq j \leq N_{n}, \quad n \geq 1.$$

It is clear that the sequence $\{{}^{N_{n}}\pi_{j}(n), n \geq 0\}$ is used to approximate the value $\pi_{j}(n)$. Based on this, we write

$${}^{N_{n}}p(n) = \sum_{i=1}^{N_{n-1}} {}^{N_{n-1}}\pi_{i}(n-1)\, T_{i}^{0}, \qquad n \geq 1. \qquad (8.11)$$

It is easy to check that the sequence $\{{}^{N_{n}}p(n)\}$ can effectively approximate the probability $p_{n}$ at time $n$.

Algorithm 8.3 Computation of the GPH Distribution
INPUT: The vector $\alpha$, the vector $T^{0}$, the matrix $T$, the error $\varepsilon > 0$.
COMPUTATION:
Step 1 Generate the integer sequence $\{N_{k}\}$.
Step 2 Iteratively compute $\{{}^{N_{n}}\pi(n)\}$.
Step 3 Determine the sequence $\{{}^{N_{n}}p(n)\}$ according to Eq. (8.11).
OUTPUT: The approximate transient probability $\{{}^{N_{n}}p(n)\}$.

We consider a Geom/Geom/$c$ queue. The customer arrivals form a Bernoulli process: during a time interval $(n, n^{+})$ there is an arrival with probability $p \in (0,1)$, while there is no arrival with probability $\bar{p} = 1 - p$. The service time $S$ is geometric with distribution $P\{S = k\} = \mu \bar{\mu}^{k-1}$ for $\mu \in (0,1)$, $\bar{\mu} = 1 - \mu$ and $k \geq 1$. A customer departure can only occur in a time interval $(n^{-}, n)$. In this system, there are $c$ identical servers, the service discipline is FCFS, and all the service times and the interarrival times are independent of each other. Let $L_{n}$ be the number of customers in the system at time $n$ for $n \geq 0$. We define
$$W = \inf\{n : L_{n} \leq c\}.$$

Then it is clear that the random variable $W$ is the busy period of the system. It is easy to check that the busy period $W$ is of generalized phase type with irreducible representation $(\alpha, T)$, where

$$\alpha = (\alpha_{c+1}, \alpha_{c+2}, \alpha_{c+3}, \ldots),$$

$$T = \begin{pmatrix}
t_{1} & t_{0} & & & & \\
t_{2} & t_{1} & t_{0} & & & \\
\vdots & & \ddots & \ddots & & \\
t_{c+1} & t_{c} & \cdots & t_{1} & t_{0} & \\
 & t_{c+1} & \cdots & t_{2} & t_{1} & t_{0} \\
 & & \ddots & & \ddots & \ddots & \ddots
\end{pmatrix}$$

and, for $0 \leq k \leq c+1$,

$$t_{k} = \bar{p} \binom{c}{k-1} \mu^{k-1} \bar{\mu}^{\,c-k+1} + p \binom{c}{k} \mu^{k} \bar{\mu}^{\,c-k},$$

so that $t_{k}$ is the probability that the queue length changes by $1-k$ in one time slot: $t_{0} = p\bar{\mu}^{c}$ is the probability of a net increase of one, and $t_{c+1} = \bar{p}\mu^{c}$ is the probability of a net decrease of $c$. For the discrete-time Markov chain $T$, it is easy to compute the GPH distribution $p_{n}$ for $n \geq 0$ in terms of Algorithm 8.3.
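The iteration behind Algorithm 8.3 can be sketched as follows. For simplicity the sketch uses a fixed truncation level instead of the adaptive sequence $\{N_k\}$, and a hypothetical three-state substochastic matrix in place of the Geom/Geom/$c$ matrix $T$:

```python
import numpy as np

def gph_probabilities(alpha, T, T0, n_max):
    """Approximate p_n = pi(n-1) T0 by iterating pi(n) = pi(n-1) T
    on a fixed finite truncation of the representation (alpha, T)."""
    pi = alpha.copy()                  # pi(0) = alpha
    p = [1.0 - alpha.sum()]            # p_0 = alpha_0
    for _ in range(1, n_max + 1):
        p.append(float(pi @ T0))       # p_n = pi(n-1) T0
        pi = pi @ T                    # pi(n) = pi(n-1) T
    return np.array(p)

# hypothetical 3-state truncation of an infinite substochastic matrix
alpha = np.array([0.5, 0.3, 0.1])      # alpha_0 = 0.1
T = np.array([[0.4, 0.3, 0.0],
              [0.2, 0.3, 0.2],
              [0.0, 0.3, 0.5]])
T0 = np.ones(3) - T.sum(axis=1)
p = gph_probabilities(alpha, T, T0, 200)
print(p[:5], p.sum())                  # p_n decays geometrically; the total mass is close to 1
```

Each step costs one vector-matrix product, so the cost grows only linearly in the number of time points, which is the practical point of the algorithm.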
8.2.2 Continuous-Time GPH Distribution
We consider an irreducible continuous-time Markov chain with state space $\Omega = \{0, 1, 2, \ldots\}$ whose infinitesimal generator is given by

$$Q = \begin{pmatrix} 0 & 0 \\ T^{0} & T \end{pmatrix}, \qquad (8.12)$$

where the block rows and columns are indexed by state $0$ and by $E = \{1, 2, \ldots\}$, $T^{0} \geq 0$ and $Te + T^{0} = 0$. We assume that state 0 is an absorbing state and all the others are transient. Let $(\alpha_{0}, \alpha)$ be the initial probability vector of the Markov chain, where $\alpha_{0} + \alpha e = 1$. For the Markov chain given in Eq. (8.12), the probability distribution $F(x)$ of the time $X$ until absorption into the absorbing state 0 is called a continuous-time GPH distribution with representation $(\alpha, T)$. If $T + T^{0}\alpha$ is the infinitesimal generator of an irreducible Markov chain, then the representation $(\alpha, T)$ is called an irreducible representation. In this case, it is easy to check that

$$F(x) = 1 - \alpha \exp\{Tx\} e \quad \text{or} \quad \overline{F}(x) = \alpha \exp\{Tx\} e.$$

Let the Laplace-Stieltjes transform of $F(x)$ be $f^{*}(s) = \int_{0}^{\infty} \exp\{-sx\}\, \mathrm{d}F(x)$. Then

$$f^{*}(s) = \alpha_{0} + \alpha (sI - T)^{-1} T^{0}, \qquad s > 0.$$

For $s > 0$, the matrix $sI - T$ is invertible. In this case, we can obtain the UL-type RG-factorization

$$sI - T = [I - R_{U}(s)]\, U_{D}(s)\, [I - G_{L}(s)],$$

which leads to

$$f^{*}(s) = \alpha_{0} + \alpha [I - G_{L}(s)]^{-1}\, [U_{D}(s)]^{-1}\, [I - R_{U}(s)]^{-1} T^{0};$$

and the LU-type RG-factorization

$$sI - T = [I - R_{L}(s)]\, U_{D}(s)\, [I - G_{U}(s)],$$

which yields

$$f^{*}(s) = \alpha_{0} + \alpha [I - G_{U}(s)]^{-1}\, [U_{D}(s)]^{-1}\, [I - R_{L}(s)]^{-1} T^{0}.$$

It is clear that the noncentral moment of order $i$ is

$$E[X^{i}] = i!\,(-1)^{i}\, \alpha T^{-i} e, \qquad i \geq 1.$$

Note that the RG-factorizations can be used to compute the moments by means of the UL-type RG-factorization

$$-T = (I - R_{U})\, U_{D}\, (I - G_{L})$$

or the LU-type RG-factorization

$$-T = (I - R_{L})\, U_{D}\, (I - G_{U}).$$
Now, we list some useful properties for the continuous-time GPH distribution. Note that Properties 1.5 to 1.12 in Chapter 1 can directly be extended here, we only choose some of them under a new different notation. Note that the proofs of the properties are easy, and thus are omitted here. Property 8.5 If X~GPH (D , T ) and Y~GPH ( E , S ), then X Y ~ PH (J , L), where J (D ,D 0 E ) and §T T 0E · L ¨ . S ¸¹ ©0 Property 8.6 If X ~ GPH(D , T ) and Y ~ GPH( E , S ), then min { X , Y } ~ PH(D
E , T S ), and max { X , Y } ~ PH (J , L), where
J
L
(D
E , E 0D ,D 0 E ),
§T S ¨ 0 ¨ ¨ 0 ©
I
S0 T 0
T0
I · 0 ¸. ¸ S ¸¹
Property 8.7 If {sk } ~ GPH ( E , S ) and F ( x) ~ GPH (D , T ), then the infinite f
mixture G ( x)
¦s F k
k*
( x) ~ GPH (J , L), where
k 0
J
D
E ( I D 0 S )1 ,
L T
I (1 D 0 )T 0D
( I D 0 S )1 S .
Now, we provide an approximate algorithm for computing the tail function $\overline{F}(x) = \alpha \exp\{Tx\} e$ for $x \geq 0$.

Let $\pi(n) = \alpha T^{n}$ for $n \geq 0$. Then $\pi(0) = \alpha$ and $\pi(n) = \pi(n-1)T$ for $n \geq 1$. Thus we obtain

$$\overline{F}(x) = \alpha \sum_{n=0}^{\infty} T^{n} \frac{x^{n}}{n!}\, e = \sum_{n=0}^{\infty} \pi(n) \frac{x^{n}}{n!}\, e.$$

We write

$${}^{N_{0}}\pi_{j}(0) = \alpha_{j}, \qquad 1 \leq j \leq N_{0},$$

and

$${}^{N_{n}}\pi_{j}(n) = \sum_{i=1}^{N_{n-1}} {}^{N_{n-1}}\pi_{i}(n-1)\, T_{i,j}, \qquad 1 \leq j \leq N_{n}, \quad n \geq 1.$$

It is clear that the sequence $\{{}^{N_{n}}\pi_{j}(n)\}$ is used to approximate the value $\pi_{j}(n)$. For a sufficiently large positive integer $M$, we write

$${}^{M}\overline{F}(x) = \sum_{n=0}^{M} {}^{N_{n}}\pi(n) \frac{x^{n}}{n!}\, e. \qquad (8.13)$$

It is easy to check that the sequence $\{{}^{M}\overline{F}(x)\}$ converges to the transient solution $\overline{F}(x)$ for each $x \geq 0$, and this convergence is uniform for $x$ in any finite interval $[0, a]$.

Algorithm 8.4 Computation of the GPH Distribution
INPUT: The vector $\alpha$, the vector $T^{0}$, the matrix $T$, the error $\varepsilon > 0$.
COMPUTATION:
Step 1 Generate the integer sequence $\{N_{k}\}$.
Step 2 Iteratively compute $\{{}^{N_{n}}\pi(n)\}$.
Step 3 Determine the function sequence $\{{}^{M}\overline{F}(x)\}$ according to Eq. (8.13).
OUTPUT: The approximate transient probability $\{{}^{M}\overline{F}(x)\}$.

We now compute the busy period distribution of a PH/PH$^{X}$/1 queue, where the interarrival and service times are of phase type with irreducible representations $(\alpha, T)$ of size $m$ and $(\beta, S)$ of size $n$, respectively, and the service batch size has the discrete probability distribution $\{p_{n}\}$. It is clear that the busy period $F$ is of generalized phase type with irreducible representation $(\gamma, W)$, where

$$\gamma = (\alpha \otimes \beta,\; 0,\; 0,\; \ldots),$$

$$W = \begin{pmatrix}
A_{1} & A_{0} & & \\
A_{2} & A_{1} & A_{0} & \\
A_{3} & A_{2} & A_{1} & A_{0} \\
\vdots & \vdots & & \ddots & \ddots
\end{pmatrix},$$

with

$$A_{0} = (T^{0}\alpha) \otimes I, \qquad A_{1} = T \oplus S = T \otimes I + I \otimes S, \qquad A_{k} = p_{k-1}\, I \otimes (S^{0}\beta), \quad k \geq 2.$$

Let $B(x) = P\{F > x\}$. Then, using Algorithm 8.4, we can approximate $B(x)$ in terms of the sequence $\{{}^{M}B(x)\}$ for $x \in [0, a]$.
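Algorithm 8.4 amounts to summing the series $\overline{F}(x) = \sum_{n} \pi(n)\, x^{n}/n!\; e$. A minimal sketch, assuming a hypothetical two-phase generator block and a fixed truncation $M$ in place of the adaptive sequence $\{N_k\}$:

```python
import numpy as np

def gph_survival(alpha, T, x, M=60):
    """Approximate F_bar(x) = alpha exp(Tx) e by the truncated Taylor series
    sum_{n=0}^{M} pi(n) x^n / n! e with pi(n) = alpha T^n."""
    e = np.ones(T.shape[0])
    pi = alpha.copy()
    total, term_coeff = 0.0, 1.0       # term_coeff = x^n / n!
    for n in range(M + 1):
        total += term_coeff * float(pi @ e)
        pi = pi @ T                     # pi(n+1) = pi(n) T
        term_coeff *= x / (n + 1)
    return total

# hypothetical 2-phase continuous PH block: T e + T0 = 0 with T0 >= 0
alpha = np.array([0.7, 0.3])
T = np.array([[-2.0, 1.0],
              [0.5, -1.5]])
for x in (0.0, 0.5, 2.0):
    print(x, gph_survival(alpha, T, x))   # survival probabilities, decreasing in x
```

A caveat worth keeping in mind: the raw Taylor series of $\exp\{Tx\}$ can suffer cancellation for large $x$ because $T$ has negative diagonal entries; uniformization (working with $P = I + T/\lambda$) is the numerically safer variant for large horizons.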
8.2.3 GMAPs
Similar to Section 5.1, we can construct a GPH renewal process based on the GPH distribution. Now, we consider a GMAP with irreducible matrix descriptor $(C, D)$ of infinite size. Note that the GMAP is a special case of the CMAP given in Section 5.7. We write $P(k;t) = (P_{j,j'}(k;t))_{j,j' \geq 1}$, where $P_{j,j'}(k;t)$ is the conditional probability that the Markov chain $C + D$ is in phase $j'$ at time $t$ and that $k$ renewals occur in $[0, t)$, given that the Markov chain starts in phase $j$ at time 0. The matrix sequence $\{P(k;t)\}$ satisfies the forward Chapman-Kolmogorov differential equations

$$P'(0;t) = P(0;t)\,C, \qquad P'(k;t) = P(k;t)\,C + P(k-1;t)\,D, \quad k \geq 1;$$

or the backward Chapman-Kolmogorov differential equations

$$P'(0;t) = C\,P(0;t), \qquad P'(k;t) = C\,P(k;t) + D\,P(k-1;t), \quad k \geq 1.$$

At the same time, the initial condition is given by

$$P(k;0) = \begin{cases} I, & k = 0, \\ 0, & k \geq 1. \end{cases}$$

Let $P^{*}(z;t) = \sum_{k=0}^{\infty} z^{k} P(k;t)$. Then it is easy to see that

$$P^{*}(z;t) = \exp\{(C + zD)t\}.$$

If the Markov chain $C + D$ is irreducible and positive recurrent, we denote by $\theta$ the stationary probability vector of the Markov chain $C + D$. The stationary arrival rate of the GMAP is given by $\lambda = \theta D e$.

In what follows we discuss the inter-dependent structure of the GMAP. To do this, we assume that an arrival occurs at time 0. Let $\tau_{k}$ be the $k$th arrival epoch of the GMAP for $k \geq 0$, where $\tau_{0} = 0$, and let $X_{n} = \tau_{n} - \tau_{n-1}$. Then $X_{n}$ is the $n$th interarrival time of the GMAP for $n \geq 1$. In general, the random variables $X_{n}$ for $n \geq 1$ are not independent, but they are identically distributed with marginal density

$$f(t) = \theta \exp\{Ct\}\, D e.$$

Define the matrix $W(t)$ with $(i,j)$th element $W_{i,j}(t)$, the conditional probability density that an interarrival time has length $t$, terminates in phase $j$ and begins in the initial phase $i$. It is easy to check that $W(t) = \exp\{Ct\}\, D$. When the environment stochastic process $\{J(t); t \geq 0\}$ evolves from one arrival epoch to the next one, the transition probability matrix of the GMAP is given by

$$W = \int_{0}^{\infty} \exp\{Ct\}\, D\, \mathrm{d}t = -C_{\max}^{-1} D,$$

where $C_{\max}^{-1}$ denotes the maximal non-positive inverse of $C$, which can be computed through the RG-factorization as

$$C_{\max}^{-1} = (I - G_{L})^{-1}\, U_{D}^{-1}\, (I - R_{U})^{-1}.$$

It is clear that $\theta W = \theta$. The joint probability density of the two random variables $X_{l}$ and $X_{l+k}$ is given by

$$f_{k,l}(x, y) = \theta \exp\{Cx\}\, D\, W^{k-1} \exp\{Cy\}\, D e.$$

Therefore, we obtain

$$E[X_{l} X_{l+k}] = \int_{0}^{\infty}\!\!\int_{0}^{\infty} x y\, f_{k,l}(x, y)\, \mathrm{d}x\, \mathrm{d}y = \theta\, C_{\max}^{-1} W^{k} C_{\max}^{-1} e.$$

Noting that

$$E[X_{l}]\, E[X_{l+k}] = (\theta C_{\max}^{-1} e)^{2} = \theta C_{\max}^{-1} e\; \theta\, W^{k} C_{\max}^{-1} e,$$

the correlation between the two random variables $X_{l}$ and $X_{l+k}$ is given by

$$\rho_{k} = \frac{\theta\, C_{\max}^{-1} (I - e\theta)\, W^{k} C_{\max}^{-1} e}{\theta\, C_{\max}^{-1} (2I - e\theta)\, C_{\max}^{-1} e}.$$

8.2.4 Time-Inhomogeneous PH(t) Distribution
We consider a time-inhomogeneous continuous-time Markov chain with state space $\Omega = \{1, 2, \ldots, m, m+1\}$ whose infinitesimal generator is given by

$$Q(t) = \begin{pmatrix} T(t) & T^{0}(t) \\ 0 & 0 \end{pmatrix},$$

where the block rows and columns are indexed by the states $(1, 2, \ldots, m)$ and by state $m+1$, $T^{0}(t) \geq 0$ and $T(t)e + T^{0}(t) = 0$. It is clear that state $m+1$ is an absorbing state and all the others are transient. Let $(\alpha, \alpha_{m+1})$ be the initial probability vector of the Markov chain, where $\alpha e + \alpha_{m+1} = 1$. For the time-inhomogeneous Markov chain, the distribution $F(x)$ of the time $X$ until absorption into the absorbing state $m+1$ is called a time-inhomogeneous PH distribution ($\mathrm{PH}(t)$) with representation $(\alpha, T(t))$. If $T(t) + T^{0}(t)\alpha$ is the infinitesimal generator of an irreducible Markov chain, then the representation $(\alpha, T(t))$ is called an irreducible representation. In this case, we have

$$F(x) = 1 - \alpha \exp\left\{\int_{0}^{x} T(t)\, \mathrm{d}t\right\} e.$$

8.2.5 Time-Inhomogeneous MAP(t)
We consider a MAP($t$) with irreducible matrix descriptor $(C(t), D(t))$. We write $P(k;t) = (P_{j,j'}(k;t))_{j,j' \geq 1}$, where $P_{j,j'}(k;t)$ is the conditional probability that the Markov chain $C(t) + D(t)$ is in phase $j'$ at time $t$ and that $k$ renewals occur in $[0, t)$, given that the Markov chain starts in phase $j$ at time 0. The matrix sequence $\{P(k;t)\}$ satisfies the forward Chapman-Kolmogorov differential equations

$$P'(0;t) = P(0;t)\,C(t), \qquad P'(k;t) = P(k;t)\,C(t) + P(k-1;t)\,D(t), \quad k \geq 1;$$

or the backward Chapman-Kolmogorov differential equations

$$P'(0;t) = C(t)\,P(0;t), \qquad P'(k;t) = C(t)\,P(k;t) + D(t)\,P(k-1;t), \quad k \geq 1.$$

At the same time, the initial condition is given by

$$P(k;0) = \begin{cases} I, & k = 0, \\ 0, & k \geq 1. \end{cases}$$

Let $P^{*}(z;t) = \sum_{k=0}^{\infty} z^{k} P(k;t)$. Then it is easy to see that

$$P^{*}(z;t) = \exp\left\{\int_{0}^{t} [C(u) + zD(u)]\, \mathrm{d}u\right\}.$$

8.2.6 A Time-Inhomogeneous MAP(t)/PH(t)/1 Queue
We consider a time-inhomogeneous MAP($t$)/PH($t$)/1 queue, where the arrival process is a MAP($t$) with irreducible matrix descriptor $(C(t), D(t))$ and the service time distribution is of phase type with irreducible representation $(\alpha, T(t))$. The corresponding time-inhomogeneous QBD process is given by

$$Q(t) = \begin{pmatrix}
C(t) & D(t) \otimes \alpha & & \\
I \otimes T^{0}(t) & C(t) \oplus T(t) & D(t) \otimes I & \\
 & I \otimes [T^{0}(t)\alpha] & C(t) \oplus T(t) & D(t) \otimes I \\
 & & \ddots & \ddots & \ddots
\end{pmatrix}.$$

The transient probability vector of the QBD process is given by

$$\pi(t) = \pi(0) \exp\left\{\int_{0}^{t} Q(u)\, \mathrm{d}u\right\}.$$

8.3 The Sojourn Times
In this section, we study the sojourn times of an irreducible M-state Markov chain which is either discrete-time or continuous-time, and derive expressions for the probability distributions of the sojourn times. Note that M may be either finite or infinite.
8.3.1 Discrete-Time Markov Chains
Consider an irreducible discrete-time $M$-state Markov chain $\{X_{n}\}$ whose transition probability matrix is $P = (p_{i,j})_{1 \leq i,j \leq M}$. Let $\alpha = (\alpha_{1}, \alpha_{2}, \ldots, \alpha_{M})$, where $\alpha_{i} = P\{X_{0} = i\}$ for $1 \leq i \leq M$. Let $E = \{1, 2, \ldots, K\}$ and $E^{c} = \{K+1, K+2, \ldots, M\}$. Then, according to the two sets $E$ and $E^{c}$, $\alpha$ and $P$ are partitioned as

$$\alpha = (\alpha^{(1)}, \alpha^{(2)})$$

and

$$P = \begin{pmatrix} P_{1,1} & P_{1,2} \\ P_{2,1} & P_{2,2} \end{pmatrix},$$

respectively. It is easy to see that the order of the matrix $P_{2,2}$ is finite if $M < \infty$ or infinite if $M = \infty$. Since the Markov chain is irreducible, the two matrices $I - P_{1,1}$ and $I - P_{2,2}$ are both invertible. Specifically, this section only uses the minimal nonnegative inverse $(I - P_{2,2})_{\min}^{-1}$ when $M = \infty$. For simplicity of description, we still use the notation $(I - P_{2,2})^{-1}$ for the minimal nonnegative inverse $(I - P_{2,2})_{\min}^{-1}$.
Let

$$\Theta_{1} = P_{1,2}(I - P_{2,2})^{-1} P_{2,1} (I - P_{1,1})^{-1}$$

and

$$\Theta_{2} = P_{2,1}(I - P_{1,1})^{-1} P_{1,2} (I - P_{2,2})^{-1}.$$

Lemma 8.2 If the Markov chain $P$ is positive recurrent and $\pi = (\pi^{(1)}, \pi^{(2)})$ is its stationary probability vector, then $\pi^{(1)}\Theta_{1} = \pi^{(1)}$ and $\pi^{(2)}\Theta_{2} = \pi^{(2)}$.

Proof Since $\pi^{(1)}$ is proportional to the stationary probability vector of the chain $\Psi_{1} = P_{1,1} + P_{1,2}(I - P_{2,2})^{-1} P_{2,1}$ obtained by censoring $P$ in the set $E$, we obtain $\pi^{(1)}\Psi_{1} = \pi^{(1)}$, that is,

$$\pi^{(1)} \left[ P_{1,1} + P_{1,2}(I - P_{2,2})^{-1} P_{2,1} \right] = \pi^{(1)},$$

or equivalently $\pi^{(1)}(I - P_{1,1}) = \pi^{(1)} P_{1,2}(I - P_{2,2})^{-1} P_{2,1}$; multiplying on the right by $(I - P_{1,1})^{-1}$ leads to $\pi^{(1)}\Theta_{1} = \pi^{(1)}$. Similarly, we can obtain $\pi^{(2)}\Theta_{2} = \pi^{(2)}$. This completes the proof.

A sojourn of the Markov chain $\{X_{n}\}$ in the set $E$ is a state sequence $X_{m}, X_{m+1}, \ldots, X_{m+k-1}$ with $k \geq 1$, where $X_{m}, X_{m+1}, \ldots, X_{m+k-1} \in E$ while $X_{m-1}, X_{m+k} \notin E$. This sojourn lasts $k$ time units; it begins at time $m$ and finishes at time $m+k$. Let $V_{k}$ be the state of $E$ in which the $k$th sojourn of the Markov chain $\{X_{n}\}$ begins, $k \geq 1$. It is clear that the irreducibility of the Markov chain assures the existence of infinitely many sojourns in $E$ with probability 1. For each $k \geq 1$, we write

$$v_{k}(i) = P\{V_{k} = i\}, \qquad i \in E, \qquad v_{k} = (v_{k}(1), v_{k}(2), \ldots, v_{k}(K)).$$
The following theorem provides an expression for the vector $v_{k}$ for $k \geq 1$.

Theorem 8.2 (1) $v_{1} = \alpha^{(1)} + \alpha^{(2)}(I - P_{2,2})^{-1} P_{2,1}$.
(2) For $k \geq 2$,

$$v_{k} = v_{1} L^{k-1},$$

where

$$L = (I - P_{1,1})^{-1} P_{1,2} (I - P_{2,2})^{-1} P_{2,1} = (I - P_{1,1})^{-1}\, \Theta_{1}\, (I - P_{1,1}).$$
Proof (1) For $i \in E^{c}$ and $j \in E$, we write

$$h_{i,j} = P\{V_{1} = j \mid X_{0} = i\}$$

and $H = (h_{i,j})_{i \in E^{c},\, j \in E}$. It is easy to see that $H$ is the matrix of absorption probabilities from $E^{c}$ to $E$; thus we obtain

$$H = (I - P_{2,2})^{-1} P_{2,1}.$$

Therefore,

$$P\{V_{1} = j\} = \alpha_{j}^{(1)} + \sum_{i \in E^{c}} P\{X_{0} = i\}\, h_{i,j}.$$

This gives, in matrix notation,

$$v_{1} = \alpha^{(1)} + \alpha^{(2)}(I - P_{2,2})^{-1} P_{2,1}.$$

(2) Let $V_{1}'$ be a random variable representing the state of $E^{c}$ in which the first sojourn of the Markov chain $\{X_{n}\}$ in $E^{c}$ begins. We write

$$w_{i,j} = P\{V_{1}' = j \mid X_{0} = i\}, \qquad i \in E, \quad j \in E^{c},$$

and $W = (w_{i,j})_{i \in E,\, j \in E^{c}}$. Then

$$W = (I - P_{1,1})^{-1} P_{1,2}.$$

Let

$$l_{i,j} = P\{V_{2} = j \mid X_{0} = i\}, \qquad i \in E, \quad j \in E,$$

and $L = (l_{i,j})_{1 \leq i,j \leq K}$. Then

$$l_{i,j} = \sum_{k \in E^{c}} P\{V_{1}' = k \mid X_{0} = i\}\, P\{V_{2} = j \mid V_{1}' = k, X_{0} = i\} = \sum_{k \in E^{c}} P\{V_{1}' = k \mid X_{0} = i\}\, P\{V_{2} = j \mid V_{1}' = k\} = \sum_{k \in E^{c}} w_{i,k}\, h_{k,j};$$

thus we obtain $L = WH$. Note that

$$P\{V_{k} = j\} = \sum_{i \in E} P\{V_{k-1} = i\}\, P\{V_{k} = j \mid V_{k-1} = i\} = \sum_{i \in E} v_{k-1}(i)\, l_{i,j}, \qquad k \geq 2;$$

hence we obtain, in matrix notation,

$$v_{k} = v_{1} L^{k-1}, \qquad k \geq 2.$$
This completes the proof.

Remark 8.3 When the size of the set $E$ or $E^{c}$ is large or infinite, we may use the RG-factorizations to compute the matrix $L$. We write

$$I - P_{1,1} = [I - R_{U}^{(1)}][I - \Psi_{D}^{(1)}][I - G_{L}^{(1)}]$$

and

$$I - P_{2,2} = [I - R_{U}^{(2)}][I - \Psi_{D}^{(2)}][I - G_{L}^{(2)}].$$

Then

$$L = [I - G_{L}^{(1)}]^{-1}[I - \Psi_{D}^{(1)}]^{-1}[I - R_{U}^{(1)}]^{-1} P_{1,2}\; [I - G_{L}^{(2)}]^{-1}[I - \Psi_{D}^{(2)}]^{-1}[I - R_{U}^{(2)}]^{-1} P_{2,1}.$$
Now, we further simplify the two important matrices $L$ and $H$ according to the structure of the Markov chain $P$. Let $E_{1} = \{j \in E : p_{i,j} > 0 \text{ for some } i \in E^{c}\}$. Then $E_{1}$ is the set of states of $E$ that are directly accessible from $E^{c}$ in a one-step transition. We write $E_{2} = E - E_{1}$. Then, according to the two sets $E_{1}$ and $E_{2}$, the two matrices $L$ and $H$ are partitioned as

$$L = \begin{pmatrix} L_{1} & 0 \\ L_{2} & 0 \end{pmatrix} \qquad (\text{block rows and columns indexed by } E_{1}, E_{2})$$

and

$$H = (H_{1} \quad 0) \qquad (\text{rows indexed by } E^{c}, \text{ block columns by } E_{1}, E_{2}).$$

At the same time, the matrix $P$ is partitioned as

$$P = \begin{pmatrix} T_{1,1} & T_{1,2} & T_{1,3} \\ T_{2,1} & T_{2,2} & T_{2,3} \\ T_{3,1} & 0 & T_{3,3} \end{pmatrix},$$

where the three block rows and columns are indexed by $E_{1}$, $E_{2}$ and $E^{c}$. The following theorem provides expressions for the matrices $L_{1}$, $L_{2}$ and $H_{1}$ in terms of the matrices $T_{i,j}$ for $1 \leq i, j \leq 3$.

Theorem 8.3 Based on the sets $E_{1}$, $E_{2}$ and $E^{c}$, we have

$$L_{1} = \left[ I - T_{1,1} - T_{1,2}(I - T_{2,2})^{-1} T_{2,1} \right]^{-1} \left[ T_{1,3} + T_{1,2}(I - T_{2,2})^{-1} T_{2,3} \right] (I - T_{3,3})^{-1} T_{3,1},$$

$$H_{1} = (I - T_{3,3})^{-1} T_{3,1}$$

and

$$L_{2} = (I - T_{2,2})^{-1} T_{2,1} L_{1} + (I - T_{2,2})^{-1} T_{2,3} H_{1}.$$

Proof Note that $L = WH$ with $W = (I - P_{1,1})^{-1} P_{1,2}$ and $H = (I - P_{2,2})^{-1} P_{2,1}$, where $P_{1,1}$ now denotes the block of $P$ on $E = E_{1} \cup E_{2}$ and $P_{1,2}$, $P_{2,1}$, $P_{2,2}$ the remaining blocks of the two-set partition. We obtain $(I - P_{1,1})L = P_{1,2}H$ and $(I - P_{2,2})H = P_{2,1}$, that is, in block-structured notation,

$$\begin{cases} L_{1} = T_{1,1} L_{1} + T_{1,2} L_{2} + T_{1,3} H_{1}, \\ L_{2} = T_{2,1} L_{1} + T_{2,2} L_{2} + T_{2,3} H_{1}, \\ H_{1} = T_{3,1} + T_{3,3} H_{1}; \end{cases}$$

a simple computation leads to the desired results. This completes the proof.

We denote by $N_{E,r}$ the number of time units spent in the $r$th sojourn of the Markov chain $\{X_{n}\}$ in the set $E$.

Theorem 8.4 The random variable $N_{E,r}$ is of phase type with irreducible representation $(v_{r}, P_{1,1})$ for $r \geq 1$.

Proof First, we derive the distribution of the random variable $N_{E,1}$. For $i \in E$, we write
$$\beta_{k}(i) = P\{N_{E,1} = k \mid V_{1} = i\}$$

and

$$\beta_{k} = (\beta_{k}(1), \beta_{k}(2), \ldots, \beta_{k}(K)).$$

Then

$$\beta_{1}(i) = \sum_{j \in E^{c}} p_{i,j}$$

and, for $k \geq 2$,

$$\beta_{k}(i) = \sum_{l \in E} p_{i,l}\, \beta_{k-1}(l).$$

Hence, we obtain, in matrix notation,

$$\beta_{1} = (I - P_{1,1})e = P_{1,2}e$$

and, for $k \geq 2$,

$$\beta_{k} = P_{1,1}\beta_{k-1} = P_{1,1}^{k-1}\beta_{1} = P_{1,1}^{k-1}(I - P_{1,1})e.$$

Therefore,

$$P\{N_{E,1} = k\} = \sum_{i \in E} P\{V_{1} = i\}\, P\{N_{E,1} = k \mid V_{1} = i\} = v_{1} P_{1,1}^{k-1}(I - P_{1,1})e,$$

which indicates that the random variable $N_{E,1}$ is of phase type with irreducible representation $(v_{1}, P_{1,1})$.

Now, we derive the distribution of the random variable $N_{E,r}$ for $r \geq 2$. By an analysis similar to that for $N_{E,1}$, we obtain

$$P\{N_{E,r} = k\} = \sum_{i \in E} P\{V_{r} = i\}\, P\{N_{E,r} = k \mid V_{r} = i\} = v_{r} P_{1,1}^{k-1}(I - P_{1,1})e,$$

which indicates that the random variable $N_{E,r}$ is of phase type with irreducible representation $(v_{r}, P_{1,1})$. This completes the proof.

Using the moments of the discrete-time PH distribution, we have

$$E[N_{E,r}(N_{E,r}-1)\cdots(N_{E,r}-m+1)] = m!\, v_{r} P_{1,1}^{m-1}(I - P_{1,1})^{-m} e, \qquad m \geq 1.$$

Remark 8.4 The random vector $(N_{E,1}, N_{E,2}, \ldots, N_{E,r})$ is of $r$-dimensional phase type with the $r$-dimensional distribution

$$P\{N_{E,1} = n_{1}, N_{E,2} = n_{2}, \ldots, N_{E,r} = n_{r}\} = v_{1} P_{1,1}^{n_{1}-1} \prod_{i=2}^{r} \left\{ P_{1,2}(I - P_{2,2})^{-1} P_{2,1}\, P_{1,1}^{n_{i}-1} \right\} P_{1,2}\, e$$

for $n_{1}, n_{2}, \ldots, n_{r} \geq 1$.
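Theorems 8.2 and 8.4 translate directly into a small computation: build $H$, $W$ and $L$ from a partitioned transition matrix, then evaluate $v_k$ and the phase-type distribution of $N_{E,1}$. A sketch with a hypothetical four-state chain and $E$ consisting of the first two states:

```python
import numpy as np

# hypothetical 4-state transition matrix; E = first two states, E^c = last two
P = np.array([[0.2, 0.3, 0.4, 0.1],
              [0.1, 0.2, 0.3, 0.4],
              [0.3, 0.2, 0.1, 0.4],
              [0.2, 0.3, 0.3, 0.2]])
alpha = np.array([0.25, 0.25, 0.25, 0.25])

P11, P12 = P[:2, :2], P[:2, 2:]
P21, P22 = P[2:, :2], P[2:, 2:]
I2 = np.eye(2)

H = np.linalg.solve(I2 - P22, P21)       # absorption probabilities E^c -> E
W = np.linalg.solve(I2 - P11, P12)       # first entrance E -> E^c
L = W @ H                                # start of sojourn k -> start of sojourn k+1

v1 = alpha[:2] + alpha[2:] @ H           # Theorem 8.2(1)
v = [v1]
for _ in range(4):
    v.append(v[-1] @ L)                  # v_k = v_1 L^{k-1}

# N_{E,1} is discrete PH(v_1, P11): P{N_{E,1} = k} = v_1 P11^{k-1} (I - P11) e
e = np.ones(2)
dist = [float(v1 @ np.linalg.matrix_power(P11, k - 1) @ (I2 - P11) @ e)
        for k in range(1, 30)]
print(v1, sum(dist))                      # each v_k sums to 1; the PH probabilities sum to ~1
```

Because $L$ is stochastic here, every $v_k$ remains a probability vector, which is a convenient sanity check on the partitioning.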
8.3.2 Continuous-Time Markov Chains
The analysis is similar to that for the discrete-time case; therefore, only a brief description of the sojourn times of a continuous-time Markov chain is provided. Consider an irreducible continuous-time $M$-state Markov chain $\{X_{t}, t \geq 0\}$ whose infinitesimal generator is $Q = (q_{i,j})_{1 \leq i,j \leq M}$. Let $E = \{1, 2, \ldots, K\}$ and $E^{c} = \{K+1, K+2, \ldots, M\}$. Then the matrix $Q$ is partitioned as

$$Q = \begin{pmatrix} Q_{1,1} & Q_{1,2} \\ Q_{2,1} & Q_{2,2} \end{pmatrix},$$

where the two block rows and columns are indexed by $E$ and $E^{c}$. A sojourn of the Markov chain $\{X_{t}, t \geq 0\}$ in the state set $E$ is a sequence $X_{t_{m}}, X_{t_{m+1}}, \ldots, X_{t_{m+k-1}}$ for $k \geq 1$, where $t_{i}$ is the instant of the $i$th state transition, $X_{t_{m}}, X_{t_{m+1}}, \ldots, X_{t_{m+k-1}} \in E$ while $X_{t_{m-1}}, X_{t_{m+k}} \notin E$. This sojourn lasts a time length $t_{m+k} - t_{m}$; it begins at time $t_{m}$ and finishes at time $t_{m+k}$.

Let $V_{k}$ be the state of $E$ in which the $k$th sojourn of the Markov chain $\{X_{t}, t \geq 0\}$ begins. We write

$$v_{k}(i) = P\{V_{k} = i\} \qquad \text{and} \qquad v_{k} = (v_{k}(1), v_{k}(2), \ldots, v_{k}(K)).$$

Then we obtain

$$v_{1} = \alpha^{(1)} + \alpha^{(2)}(-Q_{2,2})^{-1} Q_{2,1}$$

and, for $k \geq 2$,

$$v_{k} = v_{1} L^{k-1},$$

where

$$L = (-Q_{1,1})^{-1} Q_{1,2} (-Q_{2,2})^{-1} Q_{2,1}.$$

We denote by $T_{E,k}$ the time length spent during the $k$th sojourn of the Markov chain $\{X_{t}, t \geq 0\}$ in the state set $E$. Then, for $k \geq 1$, the random variable $T_{E,k}$ is of phase type with irreducible representation $(v_{k}, Q_{1,1})$, and

$$E[T_{E,k}^{m}] = m!\, v_{k} (-Q_{1,1})^{-m} e.$$

Remark 8.5 When the size of the set $E$ or $E^{c}$ is large or infinite, we may use the RG-factorizations to compute the matrix $L$ and the mean $E[T_{E,k}]$. We write

$$Q_{1,1} = [I - R_{U}^{(1)}]\, U_{D}^{(1)}\, [I - G_{L}^{(1)}]$$

and

$$Q_{2,2} = [I - R_{U}^{(2)}]\, U_{D}^{(2)}\, [I - G_{L}^{(2)}].$$

Then

$$L = [I - G_{L}^{(1)}]^{-1}\, [U_{D}^{(1)}]^{-1}\, [I - R_{U}^{(1)}]^{-1} Q_{1,2}\; [I - G_{L}^{(2)}]^{-1}\, [U_{D}^{(2)}]^{-1}\, [I - R_{U}^{(2)}]^{-1} Q_{2,1}$$

and

$$E[T_{E,k}] = -v_{k} [I - G_{L}^{(1)}]^{-1}\, [U_{D}^{(1)}]^{-1}\, [I - R_{U}^{(1)}]^{-1} e.$$

The random vector $(T_{E,1}, T_{E,2}, \ldots, T_{E,k})$ is of $k$-dimensional phase type with the $k$-dimensional distribution function

$$P\{T_{E,1} \leq t_{1}, T_{E,2} \leq t_{2}, \ldots, T_{E,k} \leq t_{k}\} = v_{1} (-Q_{1,1})^{-1} \left[ I - \exp\{Q_{1,1} t_{1}\} \right] Q_{1,2} \prod_{i=2}^{k} \left\{ (-Q_{2,2})^{-1} Q_{2,1} (-Q_{1,1})^{-1} \left[ I - \exp\{Q_{1,1} t_{i}\} \right] Q_{1,2} \right\} e.$$
Now, we analyze a MAP/PH/1 queue, where the customer arrivals form a MAP with irreducible matrix descriptor $(D_{0}, D_{1})$ of size $m$ and the service times are of phase type with irreducible representation $(\beta, S)$ of size $n$. Let $N(t)$, $I(t)$ and $J(t)$ be the number of customers in the system and the phases of the arrival and service processes at time $t$, respectively. Then $\{(N(t), I(t), J(t)) : t \geq 0\}$ is a level-independent QBD process whose infinitesimal generator is given by

$$Q = \begin{pmatrix}
D_{0} & D_{1} \otimes \beta & & \\
I \otimes S^{0} & D_{0} \oplus S & D_{1} \otimes I & \\
 & I \otimes (S^{0}\beta) & D_{0} \oplus S & D_{1} \otimes I \\
 & & \ddots & \ddots & \ddots
\end{pmatrix}.$$

We consider the following two cases.

Case I: $E = \{\text{Level } 0\} \cup \{\text{Level } 1\}$ and $\alpha = (\alpha_{0}, \alpha_{1}, \ldots)$ partitioned by levels. Then

$$v_{1} = (\alpha_{0}, \alpha_{1}) + (\alpha_{2}, \alpha_{3}, \ldots)(-Q_{2,2})^{-1} Q_{2,1}$$

and

$$v_{k} = v_{1} \left[ (-Q_{1,1})^{-1} Q_{1,2} (-Q_{2,2})^{-1} Q_{2,1} \right]^{k-1}, \qquad k \geq 2.$$

Case II: $E = \{\text{Level } 1\} \cup \{\text{Level } 2\}$ and $\alpha = (\alpha_{0}, \alpha_{1}, \alpha_{2}, \ldots)$. We take $E_{1}^{c} = \{\text{Level } 0\}$ and $E_{2}^{c} = \{\text{Level } 3, \text{Level } 4, \ldots\}$. Then the matrix $Q$ is partitioned as

$$Q = \begin{pmatrix} Q_{1,1} & Q_{1,2} & Q_{1,3} \\ Q_{2,1} & Q_{2,2} & Q_{2,3} \\ Q_{3,1} & Q_{3,2} & Q_{3,3} \end{pmatrix},$$

where the block rows and columns are indexed by $E_{1}^{c}$, $E$ and $E_{2}^{c}$. In this case, we rearrange the blocks of $Q$ as

$$\widetilde{Q} = \begin{pmatrix} Q_{2,2} & Q_{2,1} & Q_{2,3} \\ Q_{1,2} & Q_{1,1} & Q_{1,3} \\ Q_{3,2} & Q_{3,1} & Q_{3,3} \end{pmatrix},$$

where the block rows and columns are now indexed by $E$, $E_{1}^{c}$ and $E_{2}^{c}$. Hence, we obtain

$$v_{1} = (\alpha_{1}, \alpha_{2}) + (\alpha_{0}, \alpha_{3}, \alpha_{4}, \ldots) \left[ -\begin{pmatrix} Q_{1,1} & Q_{1,3} \\ Q_{3,1} & Q_{3,3} \end{pmatrix} \right]^{-1} \begin{pmatrix} Q_{1,2} \\ Q_{3,2} \end{pmatrix}$$

and, for $k \geq 2$,

$$v_{k} = v_{1} \left[ (-Q_{2,2})^{-1} \begin{pmatrix} Q_{2,1} & Q_{2,3} \end{pmatrix} \left[ -\begin{pmatrix} Q_{1,1} & Q_{1,3} \\ Q_{3,1} & Q_{3,3} \end{pmatrix} \right]^{-1} \begin{pmatrix} Q_{1,2} \\ Q_{3,2} \end{pmatrix} \right]^{k-1}.$$

8.4 Time-Inhomogeneous Discrete-Time Models
In this section, we discuss an irreducible time-inhomogeneous discrete-time Markov chain and provide an expression for its transient probability. Further, when the Markov chain is periodic, we obtain a new expression for the asymptotic periodic distribution.

Consider an irreducible time-inhomogeneous discrete-time Markov chain $\{(X_{n}, J_{n}) : n \geq 0\}$ whose transition probability matrix is given by

$$P(n) = \begin{pmatrix}
P_{0,0}^{(n)} & P_{0,1}^{(n)} & P_{0,2}^{(n)} & \cdots \\
P_{1,0}^{(n)} & P_{1,1}^{(n)} & P_{1,2}^{(n)} & \cdots \\
P_{2,0}^{(n)} & P_{2,1}^{(n)} & P_{2,2}^{(n)} & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix}, \qquad n \geq 0,$$

where $P_{k,k}^{(n)}$ is a matrix of size $m_{k}$ for $k \geq 0$, and the sizes of all the other matrices are determined accordingly. The $(i,j)$th entry of the matrix $P_{l,k}^{(n)}$ is given by

$$\left( P_{l,k}^{(n)} \right)_{i,j} = P\{X_{n+1} = k, J_{n+1} = j \mid X_{n} = l, J_{n} = i\},$$

which depends on the time $n$ for $n \geq 0$.
8.4.1 The Transient Probability Vector

We write

$$\pi_{k,j}(n) = P\{X_{n} = k, J_{n} = j\},$$
$$\pi_{k}(n) = (\pi_{k,1}(n), \pi_{k,2}(n), \ldots, \pi_{k,m_{k}}(n)),$$
$$\pi(n) = (\pi_{0}(n), \pi_{1}(n), \pi_{2}(n), \ldots), \qquad n \geq 0.$$

It is clear that if $\pi(0) = \alpha$ is the initial probability vector, then $\pi(1) = \pi(0)P(0)$ and, for $n \geq 1$,

$$\pi(n) = \pi(n-1)P(n-1). \qquad (8.14)$$

Hence we obtain, for $1 \leq m \leq n$,

$$\pi(n) = \pi(n-m)\, P(n-m)P(n-m+1)\cdots P(n-1); \qquad (8.15)$$

in particular, we have

$$\pi(n) = \alpha\, P(0)P(1)P(2)\cdots P(n-1), \qquad n \geq 1. \qquad (8.16)$$

Based on Eq. (8.15), we introduce the notation

$$\Phi(m, n) = P(n-m)P(n-m+1)\cdots P(n-1)P(n), \qquad 1 \leq m \leq n. \qquad (8.17)$$

Then

$$\pi(n) = \pi(n-m)\, \Phi(m, n-1), \qquad 1 \leq m \leq n. \qquad (8.18)$$
The following two propositions provide some closure properties of the matrix $\Phi(m, n)$ for $1 \leq m \leq n$.

Proposition 8.1 For $1 \leq m \leq n$, we have
(1) if $P(n)$ is stochastic for each $n \geq 1$, then $\Phi(m, n)$ is also stochastic;
(2) if $P(n)$ is irreducible for each $n \geq 1$, then $\Phi(m, n)$ is also irreducible.

Proposition 8.2 For $1 \leq m \leq n$, we have
(1) if the Markov chain $P(n)$ is recurrent for each $n \geq 1$, then the Markov chain $\Phi(m, n)$ is also recurrent;
(2) if the Markov chain $P(n)$ is positive recurrent for each $n \geq 1$, then the Markov chain $\Phi(m, n)$ is also positive recurrent;
(3) if the Markov chain $P(n)$ is recurrent for each $n \geq 1$, and there exists a Markov chain $P(n-k)$ which is null recurrent for some $0 \leq k \leq m$, then the Markov chain $\Phi(m, n)$ is null recurrent;
(4) if there exists a Markov chain $P(n-k)$ which is transient for some $0 \leq k \leq m$, then the Markov chain $\Phi(m, n)$ is transient.

In what follows we analyze the block structure of the matrix $\Phi(m, n)$ for some special Markov chains, such as the QBD processes, the Markov chains of GI/M/1 type and the Markov chains of M/G/1 type.

(1) The QBD processes. If the Markov chain $P(n)$ is a QBD process for each $n \geq 1$, then the Markov chain $\Phi(m, n)$ is also a QBD process for $1 \leq m \leq n$, once the levels are suitably re-blocked. We provide a computational illustration. Let

$$P(n) = \begin{pmatrix}
A_{0,0}^{(n)} & A_{0,1}^{(n)} & & \\
A_{1,0}^{(n)} & A_{1,1}^{(n)} & A_{1,2}^{(n)} & \\
 & A_{2,1}^{(n)} & A_{2,2}^{(n)} & A_{2,3}^{(n)} \\
 & & \ddots & \ddots & \ddots
\end{pmatrix}, \qquad n \geq 1.$$

Then the product $P(n-1)P(n)$ is block five-diagonal and, after the levels are grouped in pairs, it is again block-tridiagonal:

$$P(n-1)P(n) = \begin{pmatrix}
(\ast) & (\ast) & & \\
(\ast) & (\ast) & (\ast) & \\
 & (\ast) & (\ast) & (\ast) \\
 & & \ddots & \ddots & \ddots
\end{pmatrix},$$

where each $(\ast)$ represents a $2 \times 2$ array of non-zero blocks of the original partition, which can be obtained easily.

(2) Markov chains of GI/M/1 type. If the Markov chain $P(n)$ is a Markov chain of GI/M/1 type for each $n \geq 0$, then the Markov chain $\Phi(m, n)$ is also a Markov chain of GI/M/1 type for $1 \leq m \leq n$.

(3) Markov chains of M/G/1 type. If the Markov chain $P(n)$ is a Markov chain of M/G/1 type for each $n \geq 0$, then the Markov chain $\Phi(m, n)$ is also a Markov chain of M/G/1 type for $1 \leq m \leq n$.
8.4.2 The Asymptotic Periodic Distribution

We assume that the irreducible time-inhomogeneous discrete-time Markov chain $\{(X_{n}, J_{n}) : n \geq 0\}$ is $d$-periodic, that is,

$$P(n) = P(n + kd), \qquad k \geq 1, \quad 0 \leq n \leq d-1.$$

In this case, if $m = n + kd$ for $k \geq 1$ and $0 \leq n \leq d-1$, then $P(m) = P(n)$. This indicates that the dynamic behavior of the Markov chain $\{(X_{n}, J_{n}) : n \geq 0\}$ is completely determined by the $d$ matrices $P(0), P(1), \ldots, P(d-1)$.

Let $m = n + kd$ for $k \geq 1$ and $0 \leq n \leq d-1$. Then

$$\begin{aligned}
\pi(m) &= \pi(m-1)P(m-1) \\
&= \pi(n)\, P(n)P(n+1)\cdots P(n+d-1)\; P(n+d)P(n+d+1)\cdots P(n+2d-1) \\
&\qquad \cdots\, P(n+(k-1)d)\, P(n+(k-1)d+1)\cdots P(n+kd-1) \\
&= \pi(n)\, [P(n)P(n+1)\cdots P(n+d-1)]^{k}.
\end{aligned}$$

Hence, we obtain

$$\pi(m) = \alpha\, P(0)P(1)P(2)\cdots P(n-1)\, [P(n)P(n+1)\cdots P(n+d-1)]^{k}. \qquad (8.19)$$

Let

$$\phi(n, d) = \lim_{k \to \infty} \pi(n + kd).$$

If $P(k)$ is stochastic for each $k \geq 1$, then

$$\Phi(n, n+d-1) = P(n)P(n+1)\cdots P(n+d-1)$$

is also stochastic. Under the stochastic condition, the Cesaro limit

$$\Phi^{*}(n, n+d-1) = \lim_{N \to \infty} \frac{1}{N+1} \sum_{l=0}^{N} [\Phi(n, n+d-1)]^{l}$$

always exists. Thus, we obtain

$$\phi(n, d) = \pi(n)\, \Phi^{*}(n, n+d-1) = \alpha\, P(0)P(1)P(2)\cdots P(n-1)\, \Phi^{*}(n, n+d-1). \qquad (8.20)$$

Specifically, if the Markov chain $\Phi(n, n+d-1)$ is positive recurrent, then its stationary probability vector $\theta(n, d)$ satisfies $\theta(n, d)\, \Phi(n, n+d-1) = \theta(n, d)$ and $\theta(n, d)e = 1$. Hence we have

$$\Phi^{*}(n, n+d-1) = e\, \theta(n, d).$$
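The limit in Eq. (8.20) can be approximated by Cesaro-averaging the powers of $\Phi(n, n+d-1)$; for a positive-recurrent finite chain the average converges to $e\,\theta(n, d)$. A sketch with a hypothetical $2$-periodic chain on two states:

```python
import numpy as np

# hypothetical 2-periodic chain on 2 states: P(0), P(1) repeat with period d = 2
P0 = np.array([[0.9, 0.1], [0.4, 0.6]])
P1 = np.array([[0.2, 0.8], [0.7, 0.3]])
alpha = np.array([0.5, 0.5])

Phi = P0 @ P1                           # Phi(0, d-1) = P(0) P(1)

# Cesaro average (1/(N+1)) sum_{l=0}^{N} Phi^l approximates Phi* = e theta(0, d)
N = 2000
power, cesaro = np.eye(2), np.zeros((2, 2))
for _ in range(N + 1):
    cesaro += power
    power = power @ Phi
cesaro /= N + 1

phi_0 = alpha @ cesaro                  # Eq. (8.20) with n = 0: phi(0, d) = pi(0) Phi*
theta = phi_0                            # positive-recurrent case: phi(0, d) = theta(0, d)
print(phi_0, theta @ Phi)               # theta @ Phi is approximately theta
```

For this example the stationary vector of $\Phi = P(0)P(1)$ is $(0.4, 0.6)$, and the Cesaro average reaches it at rate $O(1/N)$; for an aperiodic $\Phi$ one could instead iterate plain powers, which converge geometrically.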
In this case, it is clear that

$$\phi(n, d) = \alpha\, P(0)P(1)P(2)\cdots P(n-1)\, e\, \theta(n, d) = \theta(n, d). \qquad (8.21)$$

This shows that $\phi(n, d)$ can be expressed by the stationary probability vector $\theta(n, d)$ of the Markov chain $\Phi(n, n+d-1)$. For the limit $\phi(n, d)$ with $0 \leq n \leq d-1$, Eq. (8.20) provides an expression under the weaker condition that the matrix $P(k)$ is stochastic for each $k \geq 1$, while Eq. (8.21) provides an expression under the stronger condition that the Markov chain $\Phi(n, n+d-1)$ is positive recurrent. Therefore, it is necessary to discuss the conditions under which the Markov chain $\Phi(n, n+d-1)$ is positive recurrent, and to provide effective algorithms for computing the stationary probability vector $\theta(n, d)$. Here, we only discuss Markov chains of GI/M/1 type; the other Markov chains can be dealt with similarly by the results of Chapter 2.

If the Markov chain $P(n)$ is a level-independent Markov chain of GI/M/1 type for each $n \geq 1$, then the Markov chain $\Phi(m, n)$ is also a level-independent Markov chain of GI/M/1 type for $1 \leq m \leq n$. Let

$$P(n) = \begin{pmatrix}
B_{1}^{(n)} & B_{0}^{(n)} & & \\
B_{2}^{(n)} & A_{1}^{(n)} & A_{0}^{(n)} & \\
B_{3}^{(n)} & A_{2}^{(n)} & A_{1}^{(n)} & A_{0}^{(n)} \\
\vdots & \vdots & \vdots & \ddots & \ddots
\end{pmatrix}, \qquad n \geq 1.$$

Then the product $P(n-1)P(n)$, after the levels are grouped in pairs, is again of GI/M/1 type:

$$P(n-1)P(n) = \begin{pmatrix}
(\ast) & (\ast) & & \\
(\ast) & (\ast) & (\ast) & \\
(\ast) & (\ast) & (\ast) & (\ast) \\
\vdots & \vdots & \vdots & \ddots & \ddots
\end{pmatrix},$$

where each $(\ast)$ represents a $2 \times 2$ array of non-zero blocks of the original partition.
§ · § · ¸ ¨ ¸ ¨ ¸ © ¹ © ¹ ¸. ¸ § · § · § · ¸ ¨ ¸ ¨ ¸ ¨ ¸ © ¹ © ¹ © ¹ ¸ # # # % ¸¹
The state space of the Markov chain ( (1, n) given by
( k 1)u2 1
: (n 1 to n) ® ¯
*
P (n 1) P (n), in GI M 1 type, is
(Level i ) : k
i k u2
½ 0,1, 2,!¾ . ¿
By induction, the state space of the Markov chain P (m, n) 424
P(n m) P(n m 1)! P(n),
8 Transient Solution
in GI M 1 type, is given by °( k 1)u2
m
1
*
: (n m to n) ® ¯°
(Level i ) : k
i k u 2m
½° 0,1, 2,!¾ , 1 İ m İ n. ¿°
That is, for the level-independent Markov chain ( (m, n) of GI M 1 type, the new kth level of : (n m to n) is given by ( k 1)u2m 1
kk Level
*
(Level i ).
i k u 2m
In this case, we have § B1( m , n ) ¨ ( m,n ) B P ( m, n ) ¨ 2 ¨ B( m , n ) ¨¨ 3 © #
B0( m , n ) A1( m, n ) A2( m, n ) #
( m,n) 0
A A1( m, n ) #
A0( m, n ) #
· ¸ ¸ , 1 İ m İ n. ¸ ¸ %¸¹
When the Markov chain $P(k)$ is $d$-periodic for each $k \geq 0$, we are interested in the matrix

$$\Phi(n, n+d-1) = P(n)P(n+1)\cdots P(n+d-1), \qquad 0 \leq n \leq d-1.$$

We assume that the Markov chain $\Phi(n, n+d-1)$ is irreducible and positive recurrent. Let $\omega(n, n+d-1)$ be its stationary probability vector partitioned according to $\widehat{\text{Level}}\; k$ for $k \geq 0$,

$$\omega(n, n+d-1) = (\omega_{0}(n, n+d-1), \omega_{1}(n, n+d-1), \omega_{2}(n, n+d-1), \ldots),$$

and let $\theta(n, n+d-1)$ be the same stationary probability vector partitioned according to Level $k$ for $k \geq 0$,

$$\theta(n, n+d-1) = (\theta_{0}(n, n+d-1), \theta_{1}(n, n+d-1), \theta_{2}(n, n+d-1), \ldots).$$

Obviously, $\omega(n, n+d-1) = \theta(n, n+d-1)$. Let $K = 2^{d-1}$. Then, for $k \geq 0$,

$$\omega_{k}(n, n+d-1) = (\theta_{kK}(n, n+d-1), \theta_{kK+1}(n, n+d-1), \ldots, \theta_{(k+1)K-1}(n, n+d-1)).$$

Let $R(n, n+d-1)$ be the minimal nonnegative solution to the matrix equation

$$R(n, n+d-1) = \sum_{k=0}^{\infty} [R(n, n+d-1)]^{k}\, A_{k}^{(n, n+d-1)}.$$

Then

$$\omega_{k}(n, n+d-1) = \omega_{0}(n, n+d-1)\, [R(n, n+d-1)]^{k}, \qquad k \geq 1,$$

where $\omega_{0}(n, n+d-1)$ is the stationary probability vector of the chain

$$U_{0}(n, n+d-1) = \sum_{k=0}^{\infty} [R(n, n+d-1)]^{k}\, B_{k+1}^{(n, n+d-1)}$$

obtained by censoring $\Phi(n, n+d-1)$ in $\widehat{\text{Level}}\; 0$. In this case, we have

$$\phi(n, d) = \omega(n, n+d-1), \qquad 0 \leq n \leq d-1.$$

8.5
Notes in the Literature
Transient solutions are always useful in the study of stochastic models, for example for the first passage time, the sojourn time and the fundamental period. The exact transient solution for the queue length and the busy period in an M/M/1 queue was studied in terms of the modified Bessel functions; e.g., see Lederman and Reuter [30], Champernowne [13], Clarke [15], Grassmann [19], Massey [36], Massey and Whitt [37,38], Abate and Whitt [1,2], Parthasarathy [42], Baccelli and Massey [5], and Zhang and Coyle [55]. Griffiths, Leonenko and Williams [20] considered the transient solution to the M/E_k/1 queue. Lucantoni, Choudhury and Whitt [32] analyzed the transient performance measures of the BMAP/G/1 queue. Choudhury, Lucantoni and Whitt [14] analyzed the M_t/G_t/1 queue. Taaffe and Ong [53] discussed a nonstationary PH(t)/M(t)/s/c queue, and Ong and Taaffe [41] analyzed the PH(t)/PH(t)/1/c queue. Dormuth and Alfa [17] discussed the MAP(t)/PH(t)/1/K queue. Hsu and He [22] analyzed the distribution of the first passage time for Markov chains of GI/M/1 type. Hsu and Yuan [23] studied the transient solution of denumerable Markov chains. Hsu and Yuan [24] discussed the first passage times of Markov chains and the associated algorithms. Hsu, Yuan and Li [25] discussed the first passage time of Markov renewal processes.

Aggregated Markov chains provide natural models of partially observable stochastic systems; e.g., see Kemeny and Snell [28], Stewart [51], Fredkin and Rice [18], Ball and Sansom [9], Sumita and Rieders [52], Iordache, Bucurescu and Pascu [26], Ball [6], Ball, Milne and Yeo [7], Rubino and Sericola [45,47], Jalali and Hawkes [27], Ball and Yeo [10], Rydén [48], Ball, Milne, Tame and Yeo [8], Larget [29], and Stadje [50]. Based on the block-structured analysis, the sojourn times of Markov chains were discussed in, for example, Rubino and Sericola [46], Sericola [49] and Csenki [16]. Harrison and Lemoine [21] first studied the limits of periodic queues. Readers may further refer to Asmussen and Thorisson [4], Bambos and Walrand [11], Lemoine [31], Rolski [43,44], Whillie [54], Breuer [12], and Alfa and Margolius [3].
Baccelli and Massey [5] and Margolius [33] provided a sample path analysis for the M_t/M_t/1 queue. Margolius [34] analyzed the transient and periodic solutions of the time-inhomogeneous continuous-time QBD processes, and Margolius [35] further introduced the two crucial matrices $R(s,t)$ and $\widehat{R}(s,t)$ to express the transient and periodic solutions. This chapter is based on Hsu and Yuan [23,24], Hsu, Yuan and Li [25], Rubino and Sericola [46], Sericola [49], Csenki [16] and Alfa and Margolius [3].
Problems

8.1 Consider the classical M/M/1 queue with arrival rate λ and service rate μ. Compute the following transient performance measures: (1) the queue length distribution, (2) the waiting time distribution, and (3) the busy period distribution.

8.2 Consider the classical M/M/1 queue with arrival rate λ and service rate μ. Let ρ = λ/μ. For the three cases ρ < 1, ρ = 1 and ρ > 1, apply the censoring technique to compute the following transient performance measures: (1) the queue length distribution, and (2) the busy period distribution.

8.3 Consider the classical M/M/1 queue with arrival rate λ and service rate μ. Let ρ = λ/μ. For the three cases ρ < 1, ρ = 1 and ρ > 1, apply the directed truncation to compute the busy period distribution.

8.4 Consider the classical M/M$^X$/1 queue with arrival rate λ, service rate μ and service batch size distribution $\{a_k\}$ with $\bar{a} = \sum_{k=1}^{\infty} k a_k$. Let $\rho = \lambda/(\mu \bar{a})$. For the three cases ρ < 1, ρ = 1 and ρ > 1, apply the censoring technique to compute the busy period distribution under each of the following conditions: (1) $a_k = p^{k-1}(1-p)$ for $k \ge 1$ and $0 < p < 1$; (2) $a_k = \vartheta/k^2$ for $k \ge 1$, where $\vartheta \sum_{k=1}^{\infty} 1/k^2 = 1$.

8.5 Consider an irreducible continuous-time birth-death process $\{X_t, t \ge 0\}$ whose infinitesimal generator is given by

$$Q = \begin{pmatrix}
-\lambda & \lambda & & \\
\mu & -(\lambda+\mu) & \lambda & \\
& \mu & -(\lambda+\mu) & \lambda \\
& & \ddots & \ddots & \ddots
\end{pmatrix}.$$

Let $\alpha = (\alpha_0, \alpha_1, \alpha_2, \ldots)$ with $\alpha_i = P\{X_0 = i\}$ for $i \ge 0$, and let $B(x) = P\{W > x\}$, where $W = \inf\{t \ge 0 : X_t = 0\}$. Numerically compute the function $B(x)$ under each of the following conditions: (1) $\alpha_k = \delta_{k,1}$ for $k \ge 0$; (2) $\alpha_k = p^{k-1}(1-p)$ for $k \ge 1$ and $0 < p < 1$; (3) $\alpha_k = \vartheta/k^2$ for $k \ge 1$, where $\vartheta \sum_{k=1}^{\infty} 1/k^2 = 1$.

8.6 Consider the classical MAP/PH/1 queue with the MAP irreducible matrix descriptor $(C, D)$ of size $m$ and the PH irreducible representation $(\beta, S)$ of size $n$. Let $\theta$ be the stationary probability vector of the Markov chain $C + D$, and write $\rho = (\theta D e)(-\beta S^{-1} e)$. For the three cases ρ < 1, ρ = 1 and ρ > 1, analyze the departure process of this queue by means of the GMAP.

8.7 Consider the classical M$^X$/M$^X$/1 queue with arrival rate λ, arrival batch size distribution $\{a_k\}$ with $\bar{a} = \sum_{k=1}^{\infty} k a_k < \infty$, service rate μ, and service batch size distribution $\{b_k\}$ with $\bar{b} = \sum_{k=1}^{\infty} k b_k < \infty$. Let $\rho = (\lambda \bar{a})/(\mu \bar{b})$. For the three cases ρ < 1, ρ = 1 and ρ > 1, compute the probability vector $v_1$ of the first sojourn of this queue in the state set $E = \{2, 3, 4\}$.
References

1. Abate J. and W. Whitt (1987). Transient behavior of the M/M/1 queue: starting at the origin. Queueing Systems 2: 41-65
2. Abate J. and W. Whitt (1988). Transient behavior of the M/M/1 queue via Laplace transforms. Advances in Applied Probability 20: 145-178
3. Alfa A.S. and B.H. Margolius (2008). Two classes of time inhomogeneous Markov chains: analysis of the periodic case. Annals of Operations Research 160: 121-137
4. Asmussen S. and H. Thorisson (1987). A Markov chain approach to periodic queues. Journal of Applied Probability 24: 215-225
5. Baccelli F. and W.A. Massey (1989). A sample path analysis for the M/M/1 queue. Journal of Applied Probability 26: 418-422
6. Ball F. (1990). Aggregated Markov processes with negative exponential time interval omission. Advances in Applied Probability 22: 802-830
7. Ball F., R.K. Milne and G.F. Yeo (1991). Aggregated semi-Markov processes incorporating time interval omission. Advances in Applied Probability 20: 546-572
8. Ball F., R.K. Milne, I.D. Tame and G.F. Yeo (1997). Superposition of interacting aggregated continuous-time Markov chains. Advances in Applied Probability 29: 56-91
9. Ball F. and M. Sansom (1988). Aggregated Markov processes incorporating time interval omission. Advances in Applied Probability 23: 772-797
10. Ball F. and G.F. Yeo (1993). Lumpability and marginalisability for continuous-time Markov chains. Journal of Applied Probability 30: 518-528
11. Bambos N. and J. Walrand (1989). On queues with periodic input. Journal of Applied Probability 26: 381-389
12. Breuer L. (2001). The periodic BMAP/PH/c queue. Queueing Systems 38: 67-76
13. Champernowne D.G. (1956). An elementary method of solution of the queueing problem with a single server and constant parameters. Journal of the Royal Statistical Society (Series B) 18: 125-128
14. Choudhury G., D. Lucantoni and W. Whitt (1997). Numerical solution of piecewise-stationary M_t/G_t/1 queues. Operations Research 45: 451-463
15. Clarke A.B. (1956). A waiting line process of Markov type. Annals of Mathematical Statistics 27: 452-459
16. Csenki A. (1992). The joint distribution of sojourn times in finite Markov processes. Advances in Applied Probability 24: 141-160
17. Dormuth D.W. and A.S. Alfa (1997). Two finite-difference methods for solving MAP(t)/PH(t)/1/K queueing models. Queueing Systems 27: 55-78
18. Fredkin D.R. and J.A. Rice (1986). On aggregated Markov processes. Journal of Applied Probability 23: 208-214
19. Grassmann W.K. (1977). Transient solutions in Markovian queues. European Journal of Operational Research 1: 396-402
20. Griffiths J.D., G.M. Leonenko and J.E. Williams (2006). The transient solution to M/E_k/1 queue. Operations Research Letters 34: 349-354
21. Harrison J.M. and A.J. Lemoine (1977). Limit theorems for periodic queues. Journal of Applied Probability 14: 566-576
22. Hsu G.H. and Q.M. He (1991). The distribution of the first passage time for the Markov processes of GI/M/1 type. Stochastic Models 7: 397-417
23. Hsu G.H. and X. Yuan (1994). Transient solutions for denumerable-state Markov processes. Journal of Applied Probability 31: 635-645
24. Hsu G.H. and X. Yuan (1995). First passage times and their algorithms for Markov processes. Stochastic Models 11: 195-210
25. Hsu G.H., X. Yuan and Q.L. Li (2000). First passage times for Markov renewal processes and application. Science in China (Series A) 43: 1238-1249
26. Iordache O., I. Bucurescu and A. Pascu (1990). Lumpability in compartmental models. Journal of Mathematical Analysis and Applications 146: 306-317
27. Jalali A. and A.G. Hawkes (1992). Generalised eigenproblems arising in aggregated Markov processes allowing for time interval omission. Advances in Applied Probability 24: 302-321
28. Kemeny J.G. and J.L. Snell (1960). Finite Markov Chains, D. Van Nostrand Co., Inc.: Princeton, N.J.
29. Larget B. (1998). A canonical representation for aggregated Markov processes. Journal of Applied Probability 35: 313-324
30. Lederman W. and G.E.H. Reuter (1954). Spectral theory for the differential equations of simple birth and death processes. Phil. Trans. R. Soc. London A: 321-369
31. Lemoine A.J. (1989). Waiting time and workload in queues with periodic input. Journal of Applied Probability 26: 390-397
32. Lucantoni D.M., G.L. Choudhury and W. Whitt (1994). The transient BMAP/G/1 queue. Stochastic Models 10: 145-182
33. Margolius B.H. (1999). A sample path analysis of the M_t/M_t/c queue. Queueing Systems 31: 59-93
34. Margolius B.H. (2007). Transient and periodic solution to the time-inhomogeneous QBD process. Queueing Systems 56: 183-194
35. Margolius B.H. (2008). The matrices R and G of matrix analytic methods and the time-inhomogeneous periodic quasi-birth-death process. Queueing Systems 60: 131-151
36. Massey W.A. (1985). Asymptotic analysis of the time dependent M/M/1 queue. Mathematics of Operations Research 10: 305-327
37. Massey W.A. and W. Whitt (1993). Networks of infinite-server queues with nonstationary Poisson input. Queueing Systems 13: 183-250
38. Massey W.A. and W. Whitt (1994). Unstable asymptotics for nonstationary queues. Mathematics of Operations Research 19: 267-291
39. Neuts M.F. (1981). Matrix-Geometric Solutions in Stochastic Models: An Algorithmic Approach, The Johns Hopkins University Press: Baltimore
40. Neuts M.F. (1989). Structured Stochastic Matrices of M/G/1 Type and Their Applications, Marcel Dekker: New York
41. Ong K. and M. Taaffe (1988). Approximating nonstationary PH(t)/PH(t)/1/c queueing systems. Mathematics and Computers in Simulation 30: 441-452
42. Parthasarathy P.R. (1987). A transient solution to an M/M/1 queue: a simple approach. Advances in Applied Probability 19: 997-998
43. Rolski T. (1987). Approximation of periodic queues. Advances in Applied Probability 19: 691-707
44. Rolski T. (1989). Relationships between characteristics in periodic Poisson queues. Queueing Systems 4: 17-26
45. Rubino G. and B. Sericola (1989). On weak lumpability in Markov chains. Journal of Applied Probability 26: 446-457
46. Rubino G. and B. Sericola (1989). Sojourn time in finite Markov processes. Journal of Applied Probability 27: 744-756
47. Rubino G. and B. Sericola (1991). A finite characterization of weak lumpable Markov processes. I: the discrete time case. Stochastic Processes and their Applications 38: 195-204
48. Rydén T. (1996). On identifiability and order of continuous-time aggregated Markov chains, Markov-modulated Poisson processes, and phase-type distributions. Journal of Applied Probability 33: 640-653
49. Sericola B. (1990). Closed-form solution for the distribution of the total time spent in a subset of states of a homogeneous Markov process during a finite observation period. Journal of Applied Probability 27: 713-719
50. Stadje W. (2005). The evolution of aggregated Markov chains. Statistics & Probability Letters 74: 303-311
51. Stewart G.W. (1983). Computable error bounds for aggregated Markov chains. Journal of the Association for Computing Machinery 30: 271-285
52. Sumita U. and M. Rieders (1989). Lumpability and time reversibility and the aggregation-disaggregation method for large Markov chains. Stochastic Models 5: 63-81
53. Taaffe M. and K. Ong (1987). Approximating nonstationary PH(t)/M(t)/s/c queueing systems. Annals of Operations Research 8: 103-116
54. Whillie H. (1998). Periodic steady state of loss systems with periodic input. Advances in Applied Probability 30: 152-166
55. Zhang J. and E.J. Coyle (1996). The transient solution of time-dependent M/M/1 queues. IEEE Transactions on Information Theory 37: 1690-1696
9 Quasi-Stationary Distributions
Quan-Lin Li
Department of Industrial Engineering, Tsinghua University, Beijing 100084, P.R. China
E-mail: [email protected]

Abstract In this chapter, we study quasi-stationary distributions for block-structured Markov chains with either finitely-many levels or infinitely-many levels. We derive the RG-factorizations for the β-discounted block-structured transition matrix. We provide conditions for the state α-classification, and derive two sets of expressions for the quasi-stationary distribution. As important examples, we analyze Markov chains of M/G/1 type, Markov chains of GI/M/1 type, Markov chains of GI/G/1 type and level-dependent QBD processes.

Keywords Stochastic models, RG-factorization, quasi-stationary distribution, β-discounted block-structured transition matrix, state α-classification, decay parameter.
In this chapter, we study quasi-stationary distributions for block-structured Markov chains with either finitely-many levels or infinitely-many levels. To achieve this, we give a detailed analysis of β-discounted block-structured transition matrices in terms of the RG-factorizations. We provide conditions for the state α-classification, and derive two sets of expressions for the quasi-stationary distribution based on the UL-type RG-factorization. As important examples, we analyze Markov chains of M/G/1 type, Markov chains of GI/M/1 type, Markov chains of GI/G/1 type and level-dependent QBD processes. It is worthwhile to note that this chapter provides a new theoretical interpretation of the matrix-geometric solution for Markov chains of GI/M/1 type and the matrix-iterative solution for Markov chains of M/G/1 type, e.g., see Neuts [21,22]. Also, this chapter extends the RG-factorizations to a more general class: the β-discounted transition matrices. The results of this chapter are useful in many applied areas such as Markov reward processes, Markov decision processes, and stochastic game theory. This chapter is organized as follows. Section 9.1 uses the Perron-Frobenius theorem to compute the quasi-stationary distributions of a block-structured Markov chain with finitely-many levels. Based on the RG-factorizations, Section 9.2
provides conditions for the state α-classification and expressions for the quasi-stationary distributions of a block-structured Markov chain with infinitely-many levels. Sections 9.3 to 9.6 discuss Markov chains of M/G/1 type, Markov chains of GI/M/1 type, Markov chains of GI/G/1 type, and level-dependent QBD processes, respectively. Section 9.7 analyzes the quasi-stationary distributions of continuous-time Markov chains. Sections 9.8 and 9.9 provide two important applications: determining the decay parameter for the GPH distribution, and tail analysis for the QBD processes with infinitely-many phases. Finally, Section 9.10 summarizes notes for the references related to the results of this chapter.

Throughout this chapter, the block-structured transition probability matrix $P = (p_{(i,r),(j,s)})$, given in Eq. (2.1), is assumed to be irreducible. Let α be the radius of convergence for the matrix $P$. Then

$$\alpha = \sup\Big\{ z \ge 0 : \sum_{n=0}^{\infty} z^n p^{(n)}_{(i,r),(j,s)} < \infty \Big\},$$
where $p^{(n)}_{(i,r),(j,s)}$ is the $n$-step transition probability of the Markov chain $P$ from state $(i,r)$ to state $(j,s)$. Note that the radius α of convergence is independent of the states $(i,r)$ and $(j,s)$. It is clear that α = 1 if $P$ is recurrent, while α > 1 if $P$ is transient. When $P$ is transient, the states of $P$ can be further classified as α-recurrent or α-transient according to whether the matrix

$$\widehat{\alpha P} = \sum_{k=0}^{\infty} \alpha^k P^k$$

is infinite or finite, respectively. The matrix $\widehat{\alpha P} = \sum_{k=0}^{\infty} (\alpha P)^k$ is referred to as the fundamental matrix of $\alpha P$. If $P$ is α-recurrent, then either $\lim_{n\to\infty} \alpha^n p^{(n)}_{(i,r),(j,s)} > 0$ for all states $(i,r)$ and $(j,s)$, or $\lim_{n\to\infty} \alpha^n p^{(n)}_{(i,r),(j,s)} = 0$ for all states $(i,r)$ and $(j,s)$. In the former case, $P$ is called α-positive recurrent; in the latter case, α-null recurrent.

For $1 < \beta \le \alpha$, a nonnegative non-zero row vector $\pi(\beta)$ is said to be the quasi-stationary distribution of $P$ if $\pi(\beta) = \pi(\beta)\beta P$ and $\pi(\beta)e = 1$, where $e$ is a column vector of ones of suitable size.

This chapter focuses on the following three issues: (1) how to determine the radius α of convergence, (2) how to provide conditions for the state α-classification if α > 1, and (3) how to express the quasi-stationary distribution $\pi(\beta)$ for $1 < \beta \le \alpha$.
9.1 Finitely-Many Levels

In this section, we consider an irreducible block-structured Markov chain with finitely-many levels, provide conditions for the state α-classification and give
expressions for the quasi-stationary distribution in terms of the RG-factorizations.

We consider an irreducible block-structured Markov chain with finitely-many levels whose transition probability matrix is given by

$$P = \begin{pmatrix}
P_{0,0} & P_{0,1} & \cdots & P_{0,M} \\
P_{1,0} & P_{1,1} & \cdots & P_{1,M} \\
\vdots & \vdots & & \vdots \\
P_{M,0} & P_{M,1} & \cdots & P_{M,M}
\end{pmatrix}.$$

Then the β-discounted transition matrix $\beta P$ is given by

$$\beta P = \begin{pmatrix}
\beta P_{0,0} & \beta P_{0,1} & \cdots & \beta P_{0,M} \\
\beta P_{1,0} & \beta P_{1,1} & \cdots & \beta P_{1,M} \\
\vdots & \vdots & & \vdots \\
\beta P_{M,0} & \beta P_{M,1} & \cdots & \beta P_{M,M}
\end{pmatrix}. \tag{9.1}$$
Similar to the analysis in Section 2.6, for the discounted transition matrix $\beta P$ we can define the R-, U- and G-measures, which lead to the RG-factorizations.

9.1.1 The UL-Type RG-Factorization

For $0 \le i, j \le k$ and $0 \le k \le M$, it is clear from Section 2.6 that

$$P^{[\le k]}_{i,j}(\beta) = \beta P_{i,j} + \sum_{n=k+1}^{M} P^{[\le n]}_{i,n}(\beta)\{I - P^{[\le n]}_{n,n}(\beta)\}^{-1} P^{[\le n]}_{n,j}(\beta).$$

Note that $P^{[\le M]}_{i,j}(\beta) = \beta P_{i,j}$ and $P^{[\le 0]}_{i,j}(\beta) = P^{[0]}_{i,j}(\beta)$.
Let

$$\Psi_n(\beta) = P^{[\le n]}_{n,n}(\beta), \qquad 0 \le n \le M,$$

$$R_{i,j}(\beta) = P^{[\le j]}_{i,j}(\beta)[I - \Psi_j(\beta)]^{-1}, \qquad 0 \le i < j \le M,$$

and

$$G_{i,j}(\beta) = [I - \Psi_i(\beta)]^{-1} P^{[\le i]}_{i,j}(\beta), \qquad 0 \le j < i \le M.$$

Then the UL-type RG-factorization is given by

$$I - \beta P = [I - R_U(\beta)][I - \Psi_D(\beta)][I - G_L(\beta)],$$

where

$$R_U(\beta) = \begin{pmatrix}
0 & R_{0,1}(\beta) & R_{0,2}(\beta) & \cdots & R_{0,M}(\beta) \\
& 0 & R_{1,2}(\beta) & \cdots & R_{1,M}(\beta) \\
& & 0 & \ddots & \vdots \\
& & & \ddots & R_{M-1,M}(\beta) \\
& & & & 0
\end{pmatrix},$$

$$\Psi_D(\beta) = \mathrm{diag}\big(\Psi_0(\beta), \Psi_1(\beta), \ldots, \Psi_M(\beta)\big)$$

and

$$G_L(\beta) = \begin{pmatrix}
0 & & & & \\
G_{1,0}(\beta) & 0 & & & \\
G_{2,0}(\beta) & G_{2,1}(\beta) & 0 & & \\
\vdots & \vdots & \ddots & \ddots & \\
G_{M,0}(\beta) & G_{M,1}(\beta) & \cdots & G_{M,M-1}(\beta) & 0
\end{pmatrix}.$$
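For a small finite matrix, the UL-type RG-factorization can be checked numerically. The sketch below is an illustration under simplifying assumptions: each state is treated as its own level (all blocks are 1×1), and the substochastic matrix is hypothetical. It runs the one-step censoring recursion from the last state downward, extracts Ψ, R_U and G_L, and verifies the factorization of I − βP.

```python
import numpy as np

def ul_rg_factorization(bP):
    """UL-type RG-factorization of I - bP for a finite matrix bP,
    built from the censoring recursion with scalar (1x1) blocks."""
    M = bP.shape[0] - 1
    C = bP.copy()                      # C holds P^{[<= k]}(beta), starting at k = M
    Psi = np.zeros(M + 1)
    RU = np.zeros_like(bP)
    GL = np.zeros_like(bP)
    for k in range(M, -1, -1):
        Psi[k] = C[k, k]
        inv = 1.0 / (1.0 - Psi[k])
        for i in range(k):
            RU[i, k] = C[i, k] * inv   # R_{i,k}(beta)
            GL[k, i] = inv * C[k, i]   # G_{k,i}(beta)
        for i in range(k):             # censor out state k -> P^{[<= k-1]}
            for j in range(k):
                C[i, j] += C[i, k] * inv * C[k, j]
    return RU, Psi, GL

rng = np.random.default_rng(1)
P = rng.random((4, 4))
P = 0.9 * P / P.sum(axis=1, keepdims=True)     # hypothetical substochastic chain
beta = 1.0
RU, Psi, GL = ul_rg_factorization(beta * P)
lhs = np.eye(4) - beta * P
rhs = (np.eye(4) - RU) @ (np.eye(4) - np.diag(Psi)) @ (np.eye(4) - GL)
assert np.allclose(lhs, rhs)
```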
For the UL-type RG-factorization, the following proposition provides properties for the diagonal block matrix $\Psi_D(\beta)$. The proof is clear, and thus is omitted here.

Proposition 9.1 (1) If $1 \le \beta < \alpha$, then $I - \Psi_i(\beta)$ is invertible for $0 \le i \le M$. (2) If $P$ is α-transient, then $I - \Psi_i(\alpha)$ is invertible for $0 \le i \le M$. (3) If $P$ is α-recurrent, then $I - \Psi_i(\alpha)$ is invertible for $1 \le i \le M$ but $I - \Psi_0(\alpha)$ is singular.

Let

$$\Psi_0(\beta) = \beta P_{0,0} + \beta^2 (P_{0,1}, P_{0,2}, \ldots, P_{0,M})\big[I - \beta P^{[\ge 1]}\big]^{-1}\big(P_{1,0}^T, P_{2,0}^T, \ldots, P_{M,0}^T\big)^T.$$

Then

$$\alpha = \sup\{\beta \ge 1 : \det(I - \Psi_0(\beta)) \ne 0\}.$$
9.1.2 The LU-Type RG-Factorization

For $k \le i, j \le M$ and $0 \le k \le M$, it is clear from Section 2.6 that

$$P^{[\ge k+1]}_{i,j}(\beta) = \beta P_{i,j} + \sum_{n=0}^{k} P^{[\ge n]}_{i,n}(\beta)\{I - P^{[\ge n]}_{n,n}(\beta)\}^{-1} P^{[\ge n]}_{n,j}(\beta).$$

Note that $P^{[\ge M]}_{i,j}(\beta) = P^{[M]}_{i,j}(\beta)$ and $P^{[\ge 0]}_{i,j}(\beta) = \beta P_{i,j}$.
Let

$$\Phi_n(\beta) = P^{[\ge n]}_{n,n}(\beta), \qquad 0 \le n \le M,$$

$$R_{i,j}(\beta) = P^{[\ge j]}_{i,j}(\beta)[I - \Phi_j(\beta)]^{-1}, \qquad 0 \le j < i \le M,$$

and

$$G_{i,j}(\beta) = [I - \Phi_i(\beta)]^{-1} P^{[\ge i]}_{i,j}(\beta), \qquad 0 \le i < j \le M.$$

Then the LU-type RG-factorization is given by

$$I - \beta P = [I - R_L(\beta)][I - \Phi_D(\beta)][I - G_U(\beta)],$$

where

$$R_L(\beta) = \begin{pmatrix}
0 & & & & \\
R_{1,0}(\beta) & 0 & & & \\
R_{2,0}(\beta) & R_{2,1}(\beta) & 0 & & \\
\vdots & \vdots & \ddots & \ddots & \\
R_{M,0}(\beta) & R_{M,1}(\beta) & \cdots & R_{M,M-1}(\beta) & 0
\end{pmatrix},$$

$$\Phi_D(\beta) = \mathrm{diag}\big(\Phi_0(\beta), \Phi_1(\beta), \ldots, \Phi_M(\beta)\big)$$

and

$$G_U(\beta) = \begin{pmatrix}
0 & G_{0,1}(\beta) & G_{0,2}(\beta) & \cdots & G_{0,M}(\beta) \\
& 0 & G_{1,2}(\beta) & \cdots & G_{1,M}(\beta) \\
& & 0 & \ddots & \vdots \\
& & & \ddots & G_{M-1,M}(\beta) \\
& & & & 0
\end{pmatrix}.$$

For the LU-type RG-factorization, the following proposition provides properties for the diagonal block matrix $\Phi_D(\beta)$. The proof is clear, and thus is omitted here.

Proposition 9.2 (1) If $1 \le \beta < \alpha$, then $I - \Phi_i(\beta)$ is invertible for $0 \le i \le M$. (2) If $P$ is α-transient, then $I - \Phi_i(\alpha)$ is invertible for $0 \le i \le M$. (3) If $P$ is α-recurrent, then $I - \Phi_i(\alpha)$ is invertible for $0 \le i \le M-1$ but $I - \Phi_M(\alpha)$ is singular.

Let
$$\Phi_M(\beta) = \beta P_{M,M} + \beta^2 (P_{M,0}, P_{M,1}, \ldots, P_{M,M-1})\big[I - \beta P^{[\le M-1]}\big]^{-1}\big(P_{0,M}^T, P_{1,M}^T, \ldots, P_{M-1,M}^T\big)^T.$$

Then

$$\alpha = \sup\{\beta \ge 1 : \det(I - \Phi_M(\beta)) \ne 0\}.$$
The following corollary provides conditions for the state α-classification of the Markov chain $P$. The proof is clear, and thus is omitted here.

Corollary 9.1 (1) $P$ is α-recurrent if and only if $\det(I - \Psi_0(\alpha)) = 0$, and if and only if $\det(I - \Phi_M(\alpha)) = 0$. (2) $P$ is α-transient if and only if $\det(I - \Psi_0(\alpha)) \ne 0$, and if and only if $\det(I - \Phi_M(\alpha)) \ne 0$.

Since the Markov chain $P$ is irreducible and finite-state, the condition under which $P$ is α-recurrent indicates that $P$ is α-positive recurrent. This is due to the fact that there does not exist α-null recurrence for any finite-state, irreducible and α-recurrent Markov chain.
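Corollary 9.1 suggests a practical way to locate α for a finite chain: evaluate det(I − Ψ₀(β)), with Ψ₀(β) written in the Schur-complement form of Proposition 9.1, and find its smallest zero β ≥ 1. The sketch below is an illustration with a hypothetical substochastic matrix P = 0.9 × (uniform 3×3), for which α = 1/sp(P) = 10/9 can be checked independently; the zero is bracketed by bisection.

```python
import numpy as np

P = np.full((3, 3), 0.3)   # hypothetical: 0.9 times a uniform stochastic matrix

def det_I_minus_Psi0(beta):
    """det(I - Psi_0(beta)), with level 0 = state 0 and
    Psi_0(beta) = beta*P00 + beta^2 * P_{0,.} (I - beta*P^{[>=1]})^{-1} P_{.,0}."""
    T = P[1:, 1:]                       # P^{[>= 1]}
    u, v = P[0, 1:], P[1:, 0]
    Psi0 = beta * P[0, 0] + beta**2 * u @ np.linalg.solve(np.eye(2) - beta * T, v)
    return 1.0 - Psi0                   # determinant of a 1x1 block

# bisection: the determinant is positive below alpha and negative just above it
lo, hi = 1.0, 1.2
assert det_I_minus_Psi0(lo) > 0 and det_I_minus_Psi0(hi) < 0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if det_I_minus_Psi0(mid) > 0:
        lo = mid
    else:
        hi = mid
alpha = 0.5 * (lo + hi)

# agrees with alpha = 1/sp(P) for this finite chain
assert abs(alpha - 1.0 / np.max(np.abs(np.linalg.eigvals(P)))) < 1e-9
```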
9.1.3 State α-Classification and Quasi-Stationary Distribution

When $\pi(\beta)$ is the quasi-stationary distribution of the Markov chain $P$, then $\pi(\beta)(I - \beta P) = 0$. Using the UL-type RG-factorization, we obtain

$$\pi(\beta)[I - R_U(\beta)][I - \Psi_D(\beta)][I - G_L(\beta)] = 0.$$

Since the Markov chain $P$ is either α-positive recurrent or α-transient, we need to consider the following two cases.

Case I: $1 \le \beta < \alpha$, or $\beta = \alpha$ under which $P$ is α-transient. In this case, the three matrices $I - R_U(\beta)$, $I - \Psi_D(\beta)$ and $I - G_L(\beta)$ are all invertible; hence $\pi(\beta) = 0$. That is, the quasi-stationary distribution of $P$ does not exist.

Case II: $\beta = \alpha$ under which $P$ is α-positive recurrent. In this case, we write the quasi-stationary distribution as $\pi(\alpha) = (\pi_0(\alpha), \pi_1(\alpha), \ldots, \pi_M(\alpha))$. Using the UL-type RG-factorization, we obtain

$$\begin{cases}
\pi_0(\alpha) = \tau x_0(\alpha), \\[2pt]
\pi_k(\alpha) = \displaystyle\sum_{i=0}^{k-1} \pi_i(\alpha) R_{i,k}(\alpha), & 1 \le k \le M,
\end{cases} \tag{9.2}$$
where $x_0(\alpha)$ is the left Perron-Frobenius eigenvector of the matrix $\Psi_0(\alpha)$ and the scalar τ is uniquely determined by $\sum_{k=0}^{M} \pi_k(\alpha)e = 1$.
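For a finite irreducible substochastic matrix, the quasi-stationary distribution can also be obtained directly from the left Perron-Frobenius eigenvector of P itself: the Perron eigenvalue is the decay parameter γ = 1/α, and the normalized left eigenvector is π(α), so that π(α) = π(α)αP (cf. Remark 9.1). The sketch below is an illustration with a hypothetical 3×3 example.

```python
import numpy as np

# Hypothetical finite substochastic chain (e.g. a chain killed on leaving {0, 1, 2}).
P = np.array([[0.4, 0.3, 0.1],
              [0.2, 0.4, 0.2],
              [0.1, 0.3, 0.4]])

vals, vecs = np.linalg.eig(P.T)
k = np.argmax(vals.real)          # Perron eigenvalue of P (real, largest modulus)
gamma = vals[k].real              # decay parameter, gamma = 1/alpha
alpha = 1.0 / gamma
pi = np.real(vecs[:, k])
pi = pi / pi.sum()                # quasi-stationary distribution pi(alpha)

# pi(alpha) = pi(alpha) * alpha * P and pi(alpha) e = 1
assert np.allclose(pi @ (alpha * P), pi)
assert abs(pi.sum() - 1.0) < 1e-12
```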
By means of the LU-type RG-factorization, we obtain

$$\begin{cases}
\pi_M(\alpha) = \tau x_M(\alpha), \\[2pt]
\pi_k(\alpha) = \displaystyle\sum_{i=k+1}^{M} \pi_i(\alpha) R_{i,k}(\alpha), & 0 \le k \le M-1,
\end{cases}$$

where $x_M(\alpha)$ is the left Perron-Frobenius eigenvector of the matrix $\Phi_M(\alpha)$ and the scalar τ is uniquely determined by $\sum_{k=0}^{M} \pi_k(\alpha)e = 1$.
Remark 9.1 If $P$ is α-positive recurrent and $\pi(\alpha)$ is the quasi-stationary distribution of $P$, then $\pi(\alpha) = \pi(\alpha)\alpha P$. Thus we obtain

$$\gamma \pi(\alpha) = \pi(\alpha) P,$$

where $\gamma = \alpha^{-1}$ is called the decay parameter of the Markov chain $P$. Since $0 < \gamma < 1$ if $\alpha > 1$, and $\pi(\alpha)P^n = \gamma^n \pi(\alpha)$ for all $n \ge 1$, the decay parameter γ shows the convergence rate of $P^n \to 0$ as $n \to \infty$.

In the rest of this chapter, we shall analyze the quasi-stationary distributions of Markov chains with infinitely-many levels, which are different from those in this section.
9.2 Infinitely-Many Levels

In this section, we consider an irreducible block-structured Markov chain with infinitely-many levels, provide conditions for the state α-classification, and give expressions for the quasi-stationary distributions based on the state α-classification and the RG-factorizations.

We consider a β-discounted transition matrix as follows:

$$\beta P = \begin{pmatrix}
\beta P_{0,0} & \beta P_{0,1} & \beta P_{0,2} & \cdots \\
\beta P_{1,0} & \beta P_{1,1} & \beta P_{1,2} & \cdots \\
\beta P_{2,0} & \beta P_{2,1} & \beta P_{2,2} & \cdots \\
\vdots & \vdots & \vdots &
\end{pmatrix}, \tag{9.3}$$
where the transition probability matrix $P$ is given in Eq. (2.1). Let $E$ be a subset of the state space Ω, and $E^c = \Omega - E$. According to the subsets $E$ and $E^c$, the β-discounted transition matrix $\beta P$ is partitioned as

$$\beta P = \begin{pmatrix} \beta T & \beta U \\ \beta V & \beta W \end{pmatrix}, \tag{9.4}$$

where the first block row and column are indexed by $E$ and the second by $E^c$. The censored β-discounted transition matrix $P^{E}(\beta)$ with censoring set $E$ is given by

$$P^{E}(\beta) = \beta T + \beta U \widehat{W}(\beta) \beta V, \tag{9.5}$$

where $\widehat{W}(\beta) = \sum_{k=0}^{\infty} (\beta W)^k = (I - \beta W)^{-1}$. Similarly, we also have

$$P^{E^c}(\beta) = \beta W + \beta V (I - \beta T)^{-1} \beta U. \tag{9.6}$$

By using the two censored β-discounted transition matrices $P^{E}(\beta)$ and $P^{E^c}(\beta)$, we can organize the UL- and LU-type RG-factorizations, respectively, both of which are described by means of the censoring invariance.
9.2.1 The UL-Type RG-Factorization

Let

$$P^{[\le n]}(\beta) = \begin{pmatrix}
\phi^{(n)}_{0,0}(\beta) & \phi^{(n)}_{0,1}(\beta) & \cdots & \phi^{(n)}_{0,n}(\beta) \\
\phi^{(n)}_{1,0}(\beta) & \phi^{(n)}_{1,1}(\beta) & \cdots & \phi^{(n)}_{1,n}(\beta) \\
\vdots & \vdots & & \vdots \\
\phi^{(n)}_{n,0}(\beta) & \phi^{(n)}_{n,1}(\beta) & \cdots & \phi^{(n)}_{n,n}(\beta)
\end{pmatrix}, \qquad n \ge 0.$$

Then for $n \ge 0$ and $0 \le i, j \le n$,

$$\phi^{(n)}_{i,j}(\beta) = \beta P_{i,j} + \sum_{k=n+1}^{\infty} \phi^{(k)}_{i,k}(\beta) \sum_{l=0}^{\infty} \big[\phi^{(k)}_{k,k}(\beta)\big]^l \phi^{(k)}_{k,j}(\beta).$$

We write

$$\Psi_n(\beta) = \phi^{(n)}_{n,n}(\beta), \qquad n \ge 0,$$

$$R_{i,j}(\beta) = \phi^{(j)}_{i,j}(\beta) \sum_{l=0}^{\infty} \big[\phi^{(j)}_{j,j}(\beta)\big]^l, \qquad 0 \le i < j,$$

and

$$G_{i,j}(\beta) = \sum_{l=0}^{\infty} \big[\phi^{(i)}_{i,i}(\beta)\big]^l \phi^{(i)}_{i,j}(\beta), \qquad 0 \le j < i.$$
Thus we obtain

$$R_{i,j}(\beta) = \phi^{(j)}_{i,j}(\beta)[I - \Psi_j(\beta)]^{-1}, \qquad 0 \le i < j,$$

$$G_{i,j}(\beta) = [I - \Psi_i(\beta)]^{-1} \phi^{(i)}_{i,j}(\beta), \qquad 0 \le j < i.$$

The UL-type RG-factorization is given by

$$I - \beta P = [I - R_U(\beta)][I - \Psi_D(\beta)][I - G_L(\beta)], \tag{9.7}$$

where

$$R_U(\beta) = \begin{pmatrix}
0 & R_{0,1}(\beta) & R_{0,2}(\beta) & R_{0,3}(\beta) & \cdots \\
& 0 & R_{1,2}(\beta) & R_{1,3}(\beta) & \cdots \\
& & 0 & R_{2,3}(\beta) & \cdots \\
& & & 0 & \cdots \\
& & & & \ddots
\end{pmatrix},$$

$$\Psi_D(\beta) = \mathrm{diag}\big(\Psi_0(\beta), \Psi_1(\beta), \Psi_2(\beta), \Psi_3(\beta), \ldots\big)$$

and

$$G_L(\beta) = \begin{pmatrix}
0 & & & & \\
G_{1,0}(\beta) & 0 & & & \\
G_{2,0}(\beta) & G_{2,1}(\beta) & 0 & & \\
G_{3,0}(\beta) & G_{3,1}(\beta) & G_{3,2}(\beta) & 0 & \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}.$$

From the UL-type RG-factorization, the following proposition provides properties for the matrix $\Psi_D(\beta)$. The proof is clear, and thus is omitted here.

Proposition 9.3 (1) If $1 \le \beta < \alpha$, then $I - \Psi_i(\beta)$ is invertible for all $i \ge 0$. (2) If $P$ is α-transient, then $I - \Psi_i(\alpha)$ is invertible for all $i \ge 0$. (3) If $P$ is α-recurrent, then $I - \Psi_i(\alpha)$ is invertible for all $i \ge 1$ but $I - \Psi_0(\alpha)$ is singular.
9.2.2 Two Sets of Expressions

In the RG-factorization Eq. (9.7), the three matrices $[I - R_U(\beta)]$, $[I - \Psi_D(\beta)]$
and $[I - G_L(\beta)]$ are associative. Corollary 1.9 of Kemeny, Snell and Knapp [13] indicates that they are also associative with any nonnegative row vector π, which leads to solutions for the β-invariant measure.

Lemma 9.1 Let π be any nonnegative row vector. Then

$$\pi(I - \beta P) = \{\pi[I - R_U(\beta)]\}\{[I - \Psi_D(\beta)][I - G_L(\beta)]\} = \{\pi[I - R_U(\beta)][I - \Psi_D(\beta)]\}[I - G_L(\beta)].$$

According to the two different cases, α-recurrent and α-transient, we derive two sets of expressions for the quasi-stationary distribution of the Markov chain $P$.

The first set: $\beta = \alpha$, under which the Markov chain $P$ is α-recurrent. In this case, $\det(I - \Psi_0(\alpha)) = 0$. Thus, the system of linear equations $x_0(\alpha)[I - \Psi_0(\alpha)] = 0$ and $x_0(\alpha)e = 1$ has a nonnegative non-zero solution: the left Perron-Frobenius eigenvector of the matrix $\Psi_0(\alpha)$. The following theorem expresses the quasi-stationary distribution of the Markov chain $P$; the proof is clear from the RG-factorization.

Theorem 9.1 If the Markov chain $P$ is α-recurrent, then the quasi-stationary distribution $\pi(\alpha) = (\pi_0(\alpha), \pi_1(\alpha), \ldots)$ is given by

$$\begin{cases}
\pi_0(\alpha) = \tau x_0(\alpha), \\[2pt]
\pi_k(\alpha) = \displaystyle\sum_{i=0}^{k-1} \pi_i(\alpha) R_{i,k}(\alpha), & k \ge 1,
\end{cases}$$

where $x_0(\alpha)$ is the left Perron-Frobenius eigenvector of the matrix $\Psi_0(\alpha)$ and the scalar τ is uniquely determined by $\sum_{k=0}^{\infty} \pi_k(\alpha)e = 1$.

When the Markov chain $P$ is α-recurrent, Theorem 9.1 provides a general expression for the quasi-stationary distribution for $\beta = \alpha$. However, if $1 \le \beta < \alpha$, or $\beta = \alpha$ under which the Markov chain $P$ is α-transient, the study of the quasi-stationary distribution is different. In what follows we provide a detailed analysis for the second case.

The second set: $1 \le \beta < \alpha$, or $\beta = \alpha$ under which the Markov chain $P$ is α-transient. In this case, $\det(I - \Psi_0(\beta)) \ne 0$. Obviously, the system of linear equations $x_0(\beta)[I - \Psi_0(\beta)] = 0$ and $x_0(\beta)e = 1$ has only the zero solution. Therefore, an expression for the quasi-stationary distribution cannot be derived by a method similar to Theorem 9.1. To derive an expression for the quasi-stationary distribution, we first need to solve the linear equation $y(\beta)[I - G_L(\beta)] = 0$, that is,
$$( y_0(\beta), y_1(\beta), \ldots) \begin{pmatrix}
I & & & & \\
-G_{1,0}(\beta) & I & & & \\
-G_{2,0}(\beta) & -G_{2,1}(\beta) & I & & \\
-G_{3,0}(\beta) & -G_{3,1}(\beta) & -G_{3,2}(\beta) & I & \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix} = 0. \tag{9.8}$$
In general, it is not easy to provide an effective method to solve Eq. (9.8). For this reason, finding a nonnegative non-zero solution to Eq. (9.8) is the key to expressing the quasi-stationary distribution. As illustrations, we consider some special cases such as Markov chains of M/G/1 type, Markov chains of GI/M/1 type, Markov chains of GI/G/1 type and level-dependent QBD processes. Note that Sections 9.3 to 9.6 will show how to solve Eq. (9.8) for these special cases.

Suppose a nonnegative non-zero solution to Eq. (9.8) has been obtained. The following theorem provides the quasi-stationary distribution of the Markov chain.

Theorem 9.2 For $1 \le \beta < \alpha$, or $\beta = \alpha$ under which $P$ is α-transient, if the linear equation $y(\beta)[I - G_L(\beta)] = 0$ has a nonnegative non-zero solution $y(\beta) = (y_0(\beta), y_1(\beta), \ldots)$, then the quasi-stationary distribution $\pi(\beta) = (\pi_0(\beta), \pi_1(\beta), \ldots)$ is given by

$$\begin{cases}
\pi_0(\beta) = \tau y_0(\beta)[I - \Psi_0(\beta)]^{-1}, \\[2pt]
\pi_k(\beta) = \tau y_k(\beta)[I - \Psi_k(\beta)]^{-1} + \displaystyle\sum_{i=0}^{k-1} \pi_i(\beta) R_{i,k}(\beta), & k \ge 1,
\end{cases}$$

where the scalar τ is uniquely determined by $\sum_{k=0}^{\infty} \pi_k(\beta)e = 1$.
Proof Since $\pi(\beta)[I - R_U(\beta)][I - \Psi_D(\beta)][I - G_L(\beta)] = 0$ and $y(\beta)[I - G_L(\beta)] = 0$, we only need to solve the equation

$$(\pi_0(\beta), \pi_1(\beta), \ldots)[I - R_U(\beta)][I - \Psi_D(\beta)] = (y_0(\beta), y_1(\beta), \ldots),$$

which leads to

$$(\pi_0(\beta), \pi_1(\beta), \ldots)[I - R_U(\beta)] = (y_0(\beta), y_1(\beta), \ldots)[I - \Psi_D(\beta)]^{-1}.$$

Some simple computations yield the desired result.
9.2.3 The LU-Type RG-Factorization

Let

$$P^{[\ge n]}(\beta) = \begin{pmatrix}
\eta^{(n)}_{n,n}(\beta) & \eta^{(n)}_{n,n+1}(\beta) & \eta^{(n)}_{n,n+2}(\beta) & \cdots \\
\eta^{(n)}_{n+1,n}(\beta) & \eta^{(n)}_{n+1,n+1}(\beta) & \eta^{(n)}_{n+1,n+2}(\beta) & \cdots \\
\eta^{(n)}_{n+2,n}(\beta) & \eta^{(n)}_{n+2,n+1}(\beta) & \eta^{(n)}_{n+2,n+2}(\beta) & \cdots \\
\vdots & \vdots & \vdots &
\end{pmatrix}.$$

Then for $n \ge 0$ and $i, j \ge n+1$,

$$\eta^{(n+1)}_{i,j}(\beta) = \beta P_{i,j} + \sum_{k=0}^{n} \eta^{(k)}_{i,k}(\beta)\big[I - \eta^{(k)}_{k,k}(\beta)\big]^{-1} \eta^{(k)}_{k,j}(\beta).$$

We write

$$\Phi_n(\beta) = \eta^{(n)}_{n,n}(\beta), \qquad n \ge 0,$$

$$R_{i,j}(\beta) = \eta^{(j)}_{i,j}(\beta)\big[I - \eta^{(j)}_{j,j}(\beta)\big]^{-1}, \qquad 0 \le j < i,$$

and

$$G_{i,j}(\beta) = \big[I - \eta^{(i)}_{i,i}(\beta)\big]^{-1} \eta^{(i)}_{i,j}(\beta), \qquad 0 \le i < j.$$

The LU-type RG-factorization is given by

$$I - \beta P = [I - R_L(\beta)][I - \Phi_D(\beta)][I - G_U(\beta)], \tag{9.9}$$

where

$$R_L(\beta) = \begin{pmatrix}
0 & & & & \\
R_{1,0}(\beta) & 0 & & & \\
R_{2,0}(\beta) & R_{2,1}(\beta) & 0 & & \\
R_{3,0}(\beta) & R_{3,1}(\beta) & R_{3,2}(\beta) & 0 & \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix},$$

$$\Phi_D(\beta) = \mathrm{diag}\big(\Phi_0(\beta), \Phi_1(\beta), \Phi_2(\beta), \Phi_3(\beta), \ldots\big)$$

and

$$G_U(\beta) = \begin{pmatrix}
0 & G_{0,1}(\beta) & G_{0,2}(\beta) & G_{0,3}(\beta) & \cdots \\
& 0 & G_{1,2}(\beta) & G_{1,3}(\beta) & \cdots \\
& & 0 & G_{2,3}(\beta) & \cdots \\
& & & 0 & \cdots \\
& & & & \ddots
\end{pmatrix}.$$
From the LU-type RG-factorization, the following proposition provides an important property of the matrix $\Phi_D(\beta)$, which is a key to understanding the LU-type RG-factorization.

Proposition 9.4 If $1 \le \beta \le \alpha$, then $I - \Phi_i(\beta)$ is invertible for $i \ge 0$.

It is interesting to express the quasi-stationary distribution $\pi(\beta) = (\pi_0(\beta), \pi_1(\beta), \ldots)$ by means of the LU-type RG-factorization. Since $I - \Phi_i(\beta)$ is invertible for $1 \le \beta \le \alpha$ and $i \ge 0$, it is easy to check that the equation $z(\beta)[I - \Phi_D(\beta)][I - G_U(\beta)] = 0$ has only the zero solution $z(\beta) = 0$. Therefore, in order to express the quasi-stationary distribution, we have to solve the equation $\pi(\beta)[I - R_L(\beta)] = 0$. That is,

$$(\pi_0(\beta), \pi_1(\beta), \ldots) \begin{pmatrix}
I & & & & \\
-R_{1,0}(\beta) & I & & & \\
-R_{2,0}(\beta) & -R_{2,1}(\beta) & I & & \\
-R_{3,0}(\beta) & -R_{3,1}(\beta) & -R_{3,2}(\beta) & I & \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix} = 0. \tag{9.10}$$

It is complicated to provide an effective method to solve Eq. (9.10) in general. Therefore, the challenge is to find a nonnegative non-zero solution of Eq. (9.10), from which the quasi-stationary distribution is obtained. To achieve this, we consider a special case: a level-dependent Markov chain of M/G/1 type, for which it is easy to check that

$$R_{i,j}(\beta) = 0 \qquad \text{for } i \ge j+2.$$

In this case, Eq. (9.10) becomes

$$(\pi_0(\beta), \pi_1(\beta), \ldots) \begin{pmatrix}
I & & & \\
-R_{1,0}(\beta) & I & & \\
& -R_{2,1}(\beta) & I & \\
& & -R_{3,2}(\beta) & I \\
& & & \ddots & \ddots
\end{pmatrix} = 0, \tag{9.11}$$

which yields

$$\pi_k(\beta) = \pi_{k+1}(\beta) R_{k+1,k}(\beta), \qquad k \ge 0. \tag{9.12}$$
It is worthwhile to note that the solution structure of Eq. (9.11) or Eq. (9.12) is important in the study of the quasi-stationary distribution of an irreducible Markov chain with infinitely many levels.

Lemma 9.2 If the level-dependent Markov chain of M/G/1 type is irreducible, then for the R-measure $R_k(\beta)=R_{k,k-1}(\beta)$, $k\geq 1$, there exist a sequence of probability vectors $\{y_k(\beta),k\geq 0\}$ and a sequence of positive scalars $\{\eta_k,k\geq 0\}$ such that
$$y_{k+1}(\beta)R_{k+1}(\beta)=\eta_ky_k(\beta).$$
Proof Let $y_{N,N}(\beta)$ be an arbitrary probability vector on level $N\geq 1$. Since the Markov chain $P$ is irreducible, each state on level $k$ has a path to level $k-1$, which shows that each row of the matrix $R_k(\beta)$ is non-zero and nonnegative. Hence $y_{N,N}(\beta)R_N(\beta)$ is a convex combination of the rows of $R_N(\beta)$. We take
$$\eta_{N-1}=y_{N,N}(\beta)R_N(\beta)e.$$
Then $\eta_{N-1}>0$ and
$$y_{N,N-1}(\beta)=\frac{1}{\eta_{N-1}}y_{N,N}(\beta)R_N(\beta)$$
is a probability vector on level $N-1$. Proceeding inductively, we can obtain a sequence of probability vectors $\{y_{N,k}(\beta),0\leq k\leq N-1\}$ and a sequence of positive scalars $\{\eta_{N,k},0\leq k\leq N-1\}$ such that for $0\leq k\leq N-1$,
$$y_{N,k+1}(\beta)R_{k+1}(\beta)=\eta_{N,k}y_{N,k}(\beta).$$
By repeating the above procedure, for each $k\geq 0$ we can obtain a sequence of probability vectors $\{y_{N,k}(\beta),N\geq k\}$. Since $y_{N,k}(\beta)e=1$ for $N\geq k$, there exists a subsequence $\{y_{N_r,k}(\beta)\}$ such that the limit $\lim_{r\to\infty}y_{N_r,k}(\beta)=y_k(\beta)$ exists and is a probability vector. Fixing $k=k^*$, we can obtain a sequence of stochastic vectors $\{y_k(\beta),0\leq k\leq k^*\}$ and a sequence of positive numbers $\{\eta_k,0\leq k<k^*\}$ such that $y_{k+1}(\beta)R_{k+1}(\beta)=\eta_ky_k(\beta)$. Again by the compactness of the set of probability vectors, there exists a subsequence $\{N'_r\}$ of $\{N_r\}$ such that the limit $\lim_{r\to\infty}y_{N'_r,k^*+1}(\beta)=y_{k^*+1}(\beta)$ exists, and we have $\eta_{k^*}=y_{k^*+1}(\beta)R_{k^*+1}(\beta)e$. Therefore, the desired result follows by induction, repeating this argument infinitely many times, which can be done by means of the Axiom of Choice. This completes the proof.

The following theorem provides an expression for the quasi-stationary distribution of the level-dependent Markov chain of M/G/1 type.
Theorem 9.3 If $\sum_{k=0}^{\infty}\dfrac{\eta_{k+1}}{\eta_0\eta_1\cdots\eta_{k+1}}<\infty$, then the quasi-stationary distribution for $1\leq\beta\leq\alpha$ is given by
$$\pi_0(\beta)=\tau y_0(\beta)$$
and, for $k\geq 1$,
$$\pi_k(\beta)=\frac{\tau}{\eta_0\eta_1\cdots\eta_{k-1}}y_k(\beta),$$
where
$$\tau=\left[1+\sum_{k=0}^{\infty}\frac{1}{\eta_0\eta_1\cdots\eta_k}\right]^{-1}.$$
Proof Noting that for $k\geq 0$, $y_{k+1}(\beta)R_{k+1}(\beta)=\eta_ky_k(\beta)$, we obtain
$$\tau y_0(\beta)=\frac{\tau}{\eta_0}y_1(\beta)R_1(\beta),$$
which leads to $\pi_0(\beta)=\pi_1(\beta)R_1(\beta)$; and
$$\frac{\tau}{\eta_0}y_1(\beta)=\frac{\tau}{\eta_0\eta_1}y_2(\beta)R_2(\beta),$$
which leads to $\pi_1(\beta)=\pi_2(\beta)R_2(\beta)$. We assume that for $n=k$ we have $\pi_k(\beta)=\pi_{k+1}(\beta)R_{k+1}(\beta)$. Then for $n=k+1$ we have
$$\pi_{k+1}(\beta)=\frac{\tau}{\eta_0\eta_1\cdots\eta_k}y_{k+1}(\beta)=\frac{\tau}{\eta_0\eta_1\cdots\eta_k\eta_{k+1}}y_{k+2}(\beta)R_{k+2}(\beta)=\pi_{k+2}(\beta)R_{k+2}(\beta).$$
Therefore, by induction it is easy to see that for any nonnegative integer $n$ we have
$$\pi_n(\beta)=\pi_{n+1}(\beta)R_{n+1}(\beta),$$
which leads to
$$(\pi_0(\beta),\pi_1(\beta),\ldots)\begin{pmatrix} I & & & &\\ -R_{1,0}(\beta) & I & & &\\ & -R_{2,1}(\beta) & I & &\\ & & -R_{3,2}(\beta) & I &\\ & & & \ddots & \ddots\end{pmatrix}=0.$$
Thus we obtain
$$\pi(\beta)[I-R_L(\beta)][I-\Phi_D(\beta)][I-G_U(\beta)]=0,$$
or $\pi(\beta)(I-\beta P)=0$. Since
$$\pi(\beta)e=\left(\tau y_0(\beta),\frac{\tau}{\eta_0}y_1(\beta),\frac{\tau}{\eta_0\eta_1}y_2(\beta),\ldots\right)e=1,$$
it is clear that $\pi(\beta)=\left(\tau y_0(\beta),\frac{\tau}{\eta_0}y_1(\beta),\frac{\tau}{\eta_0\eta_1}y_2(\beta),\ldots\right)$ is the quasi-stationary distribution of the level-dependent Markov chain of M/G/1 type. This completes the proof.
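As a numerical illustration of Theorem 9.3 (ours, not from the book), the constant $\tau$ and the vectors $\pi_k$ can be assembled from truncated sequences $\{y_k\}$ and $\{\eta_k\}$; the sequences below are made-up stand-ins for those produced by Lemma 9.2:

```python
import numpy as np

# Made-up stand-ins: probability vectors y_k and positive scalars eta_k
K, m = 8, 3
rng = np.random.default_rng(2)
y = [v / v.sum() for v in rng.random((K, m))]
eta = 1.0 + rng.random(K)

# cumulative products eta_0*...*eta_{k-1}, with empty product = 1
prods = np.concatenate(([1.0], np.cumprod(eta[:-1])))
tau = 1.0 / np.sum(1.0 / prods)                  # truncated normalizer
pi = [tau / prods[k] * y[k] for k in range(K)]   # pi_k = tau/(eta_0...eta_{k-1}) y_k

assert abs(sum(v.sum() for v in pi) - 1.0) < 1e-12
```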
9.3 Markov Chains of M/G/1 Type
In this section, we consider the quasi-stationary distribution of an irreducible Markov chain of M/G/1 type. The conditions for the state $\alpha$-classification and the expressions for the quasi-stationary distribution are derived in detail.

We consider an irreducible aperiodic Markov chain of M/G/1 type whose transition matrix $P$ is given by
$$P=\begin{pmatrix} D_1 & D_2 & D_3 & D_4 & \cdots\\ D_0 & A_1 & A_2 & A_3 & \cdots\\ & A_0 & A_1 & A_2 & \cdots\\ & & A_0 & A_1 & \cdots\\ & & & \ddots & \ddots\end{pmatrix},\qquad(9.13)$$
where $D_1$ is a matrix of size $m_0\times m_0$, all $A_i$ are square matrices of finite size $m$, the sizes of the other block-entries are determined accordingly, and all empty entries are zero.
9.3.1 The UL-Type RG-Factorization

We write
$$Q=\begin{pmatrix} A_1 & A_2 & A_3 & A_4 & \cdots\\ A_0 & A_1 & A_2 & A_3 & \cdots\\ & A_0 & A_1 & A_2 & \cdots\\ & & A_0 & A_1 & \cdots\\ & & & \ddots & \ddots\end{pmatrix}.$$
Let the fundamental matrix $\widehat{\beta Q}=\sum_{n=0}^{\infty}(\beta Q)^n=(\widehat{Q}_{i,j}(\beta))_{i,j\geq 1}$ and $N(\beta)=\widehat{Q}_{1,1}(\beta)$.

We first define the G-measure: the matrices $G(\beta)$ and $G_{1,0}(\beta)$. It is clear from Eq. (2.29) that
$$G(\beta)=N(\beta)\beta A_0.\qquad(9.14)$$
The $(r,s)$th entry of $G(\beta)$ can be interpreted as the total expected discounted reward with rate $\beta$ induced by hitting state $(i,s)$ upon the process entering $L_{\leq i}$ for the first time, given that the process starts in state $(i+1,r)$.

The following lemma expresses the matrices $\widehat{Q}_{j,1}(\beta)$ for $j\geq 1$, which are key to defining the R-, U- and G-measures later.

Lemma 9.3
$$\widehat{Q}_{j,1}(\beta)=G(\beta)^{j-1}N(\beta),\quad j\geq 1.$$
Proof It follows from the skip-free-to-left property of the transition matrix $\beta Q$ that
$$\widehat{Q}_{j+1,1}(\beta)=N(\beta)\beta A_0\widehat{Q}_{j,1}(\beta),\quad j\geq 1.$$
Using this recursive expression and $N(\beta)\beta A_0=G(\beta)$ repeatedly, we can obtain the desired result.

Though the matrix $G(\beta)$ is defined as the product of $N(\beta)$ and $\beta A_0$, we usually first compute $G(\beta)$ and then determine $N(\beta)$ in terms of $G(\beta)$. The following theorem expresses $N(\beta)$ in terms of $G(\beta)$, and $N_0(\beta)$, the $(1,1)$st block-entry of the matrix $\widehat{\beta P}=\sum_{k=0}^{\infty}(\beta P)^k$, in terms of $N(\beta)$.

Theorem 9.4 For the transition matrix $P$ of M/G/1 type, we have
(1) the matrix $N(\beta)$ can be expressed as
$$N(\beta)=\left[I-\sum_{k=1}^{\infty}\beta A_kG(\beta)^{k-1}\right]^{-1},\qquad(9.15)$$
or $N(\beta)$ is the fundamental matrix for $\Psi(\beta)=\sum_{k=1}^{\infty}\beta A_kG(\beta)^{k-1}$;
(2) the matrix $N_0(\beta)$ can be expressed as
$$N_0(\beta)=[I-\Psi_0(\beta)]^{-1},\qquad(9.16)$$
where
$$\Psi_0(\beta)=\beta D_1+\sum_{k=1}^{\infty}\beta D_{k+1}G(\beta)^{k-1}N(\beta)\beta D_0,\qquad(9.17)$$
or $N_0(\beta)$ is the fundamental matrix for $\Psi_0(\beta)$.
Proof We only prove (1), while (2) can be proved similarly. Let
$$\breve{Q}=\begin{pmatrix} T & H\\ L & Q\end{pmatrix},$$
where
$$T=A_1,\quad H=(A_2,A_3,A_4,\ldots),\quad L=(A_0^T,0,0,\ldots)^T.$$
Then it is clear that $\breve{Q}=Q$; hence $N(\beta)=\widehat{Q}_{1,1}(\beta)$ is the fundamental matrix for $\beta T+\beta H\widehat{\beta Q}\beta L$. Thus we have
$$\beta T+\beta H\widehat{\beta Q}\beta L=\beta A_1+\beta H(\widehat{Q}_{1,1}(\beta)^T,\widehat{Q}_{2,1}(\beta)^T,\ldots)^T\beta A_0=\beta A_1+\sum_{k=2}^{\infty}\beta A_k\widehat{Q}_{k-1,1}(\beta)\beta A_0.$$
Noting that $N(\beta)\beta A_0=G(\beta)$ and using Lemma 9.3, this easily completes the proof.

It follows from Eq. (9.15) that the matrix $G(\beta)$ satisfies the following nonlinear matrix equation:
$$G(\beta)=\sum_{k=0}^{\infty}\beta A_kG(\beta)^k.\qquad(9.18)$$
At the same time, $G(\beta)$ is the minimal nonnegative solution to Eq. (9.18). The G-measure for the matrix $\beta P$ of M/G/1 type consists of two matrices: $G(\beta)$, defined in Eq. (9.14), and $G_{1,0}(\beta)$, defined by
$$G_{1,0}(\beta)=\widehat{Q}_{1,1}(\beta)\beta D_0=N(\beta)\beta D_0.\qquad(9.19)$$
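Because $G(\beta)$ is the minimal nonnegative solution of Eq. (9.18), it can be computed by successive substitution starting from the zero matrix. The following Python sketch is ours, not from the book; the blocks $A_0,A_1,A_2$ are a made-up example with finitely many nonzero blocks:

```python
import numpy as np

def solve_G(A_blocks, beta, tol=1e-12, max_iter=10_000):
    """Successive substitution for G = sum_k beta*A_k*G^k, Eq. (9.18)."""
    m = A_blocks[0].shape[0]
    G = np.zeros((m, m))
    for _ in range(max_iter):
        Gk = np.eye(m)
        new = np.zeros((m, m))
        for Ak in A_blocks:          # new = sum_k beta*A_k*G^k
            new += beta * Ak @ Gk
            Gk = Gk @ G
        if np.max(np.abs(new - G)) < tol:
            return new
        G = new
    raise RuntimeError("no convergence")

# hypothetical blocks A_0, A_1, A_2 (A_0 + A_1 + A_2 is stochastic)
A = [np.array([[0.3, 0.1], [0.2, 0.2]]),
     np.array([[0.2, 0.2], [0.1, 0.3]]),
     np.array([[0.1, 0.1], [0.1, 0.1]])]
G = solve_G(A, beta=0.95)
residual = G - sum(0.95 * Ak @ np.linalg.matrix_power(G, k) for k, Ak in enumerate(A))
assert np.max(np.abs(residual)) < 1e-10
assert np.all(G.sum(axis=1) < 1)     # substochastic since beta < 1
```

The iterates increase monotonically to the minimal solution, which is why the zero starting point matters.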
The $(r,s)$th entry of $G_{1,0}(\beta)$ can be interpreted as the total expected discounted reward with rate $\beta$ induced by hitting state $(0,s)$ upon the process entering level 0 for the first time, given that the process starts in state $(1,r)$.

Consider the fundamental matrix $\widehat{\beta Q}$ of $\beta Q$. Let the first block-column of $\widehat{\beta Q}$ be $(\widehat{Q}_{1,1}(\beta)^T,\widehat{Q}_{2,1}(\beta)^T,\ldots)^T$. The R-measure for the matrix $\beta P$ in Eq. (9.13) consists of two sequences of matrices $R_{0,k}(\beta)$ and $R_k(\beta)$ for $k\geq 1$, defined by
$$R_{0,k}(\beta)=\sum_{l=1}^{\infty}\beta D_{k+l}\widehat{Q}_{l,1}(\beta)\qquad(9.20)$$
and
$$R_k(\beta)=\sum_{l=1}^{\infty}\beta A_{k+l}\widehat{Q}_{l,1}(\beta).\qquad(9.21)$$
Applying Lemma 9.3 to Eq. (9.20) and Eq. (9.21), the R-measure can then be expressed, for $k\geq 1$, as
$$R_{0,k}(\beta)=\sum_{i=1}^{\infty}\beta D_{k+i}G(\beta)^{i-1}N(\beta)\qquad(9.22)$$
and
$$R_k(\beta)=\sum_{i=1}^{\infty}\beta A_{k+i}G(\beta)^{i-1}N(\beta).\qquad(9.23)$$
For the transition matrix $P$ of M/G/1 type in Eq. (9.13), the UL-type RG-factorization is given by
$$I-\beta P=[I-R_U(\beta)][I-\Psi_D(\beta)][I-G_L(\beta)],\qquad(9.24)$$
where
$$I-R_U(\beta)=\begin{pmatrix} I & -R_{0,1}(\beta) & -R_{0,2}(\beta) & -R_{0,3}(\beta) & \cdots\\ & I & -R_1(\beta) & -R_2(\beta) & \cdots\\ & & I & -R_1(\beta) & \cdots\\ & & & I & \ddots\\ & & & & \ddots\end{pmatrix},$$
$$\Psi_D(\beta)=\operatorname{diag}(\Psi_0(\beta),\Psi(\beta),\Psi(\beta),\ldots)$$
and
$$I-G_L(\beta)=\begin{pmatrix} I & & & &\\ -G_{1,0}(\beta) & I & & &\\ & -G(\beta) & I & &\\ & & -G(\beta) & I &\\ & & & \ddots & \ddots\end{pmatrix}.$$
9.3.2 The State $\alpha$-Classification

For the Markov chain of M/G/1 type, the determination of the radius $\alpha$ of convergence and the conditions on the state $\alpha$-classification are based on combining the state $\alpha$-classification of the corresponding matrix without boundaries with a special treatment of the boundary. For convenience of description, here we state two results from Kijima [14] again. Let
$$A^*(z)=\sum_{k=0}^{\infty}A_kz^k,\quad 0\leq z<z_0.\qquad(9.25)$$
We denote by $\chi(z)$ the Perron-Frobenius eigenvalue of the nonnegative matrix $A^*(z)$ for $0\leq z<z_0$.

Lemma 9.4 There always exists a unique $\gamma$ such that $\chi(z)\geq\gamma z$ for all $0<z<z_0$, and there exists some $\theta$ with $0<\theta\leq z_0$ such that $\chi(\theta)=\theta\gamma$. If $\theta=z_0$, then $\gamma=\chi(z_0)/z_0$. Otherwise, $\gamma$ and $\theta$ can be determined by solving the system of equations
$$\begin{cases}\chi(\theta)=\gamma\theta,\\ \chi'(\theta)=\gamma.\end{cases}\qquad(9.26)$$
By using this lemma, Kijima [14] was able to show the following result.

Theorem 9.5 For the transition matrix $P$ of M/G/1 type without boundaries ($D_k=A_k$ for all $k\geq 0$), if $\gamma$ is determined as in Lemma 9.4, then the radius $\alpha$ of convergence of $P$ satisfies $\alpha=1/\gamma$, and $P$ is $\alpha$-transient. In fact, $\theta$ is the maximal eigenvalue of the matrix $G(\alpha)$.

The transition matrix of M/G/1 type without boundaries is always $\alpha$-transient. However, the transition matrix of M/G/1 type with boundaries can be either $\alpha$-transient, $\alpha$-positive recurrent or $\alpha$-null recurrent. For the transition matrix $P$ of M/G/1 type given in Eq. (9.13) with boundaries, we can perform a spectral analysis of the matrix $\Psi_0(\beta)$, obtained by censoring to level 0, to derive conditions on the state $\alpha$-classification and a determination of the radius of convergence. However, it seems more convenient to reach this goal by considering the relationship between the censored matrix $\Psi_0(\beta)$ and its fundamental matrix $N_0(\beta)$.

Let $u_0(\beta)$ and $n_0(\beta)$ be the maximal eigenvalues of the censored matrix $\Psi_0(\beta)$ and its fundamental matrix $N_0(\beta)$, respectively. Then $n_0(\beta)=1/[1-u_0(\beta)]$. It follows from results of linear algebra (for example, Seneta [32]) that the first two statements of the following lemma are true; the other two follow from the definitions of the radius of convergence and of $N_0(\beta)$.

Lemma 9.5 Let $\tilde{\alpha}$ and $\alpha$ be the radii of convergence of $Q$ and $P$, respectively. In (1) and (2), assume $0<\beta\leq\tilde{\alpha}$.
(1) Both $u_0(\beta)$ and $n_0(\beta)$ are strictly increasing in $\beta$; and
(2) $u_0(\beta)<1$ if and only if $N_0(\beta)<\infty$.
(3) $N_0(\beta)<\infty$ if $\beta<\alpha$, and $N_0(\beta)=\infty$ if $\beta>\alpha$.
(4) $\alpha\leq\tilde{\alpha}$.

The state $\alpha$-classification is characterized by the following conditions.

Theorem 9.6 (1) If $u_0(\beta)<1$ for all $0<\beta\leq\tilde{\alpha}$, then $N_0(\tilde{\alpha})<\infty$ and $\alpha=\tilde{\alpha}$. Therefore, $P$ is $\alpha$-transient.
(2) If there exists a $\beta^*$ with $0<\beta^*\leq\tilde{\alpha}$ such that $u_0(\beta^*)=1$, then $\alpha=\beta^*$ and $N_0(\alpha)=\infty$. Therefore, $P$ is $\alpha$-recurrent.
Proof Based on the facts that $n_0(\beta)=1/[1-u_0(\beta)]$ and that $n_0(\beta)=\infty$ if and only if $N_0(\beta)=\infty$, we discuss the following two cases.
Case I There exists no solution to $1-u_0(\beta)=0$ for $0<\beta\leq\tilde{\alpha}$. In this case, $n_0(\tilde{\alpha})<\infty$; hence $N_0(\tilde{\alpha})<\infty$. Therefore, $\alpha\geq\tilde{\alpha}$. This, together with (4) of Lemma 9.5, implies $\alpha=\tilde{\alpha}$. Hence, $P$ is $\alpha$-transient.
Case II There exists a solution $\beta^*$ to $1-u_0(\beta)=0$ with $0<\beta^*\leq\tilde{\alpha}$. In this case, $n_0(\beta^*)=\infty$, hence there exists at least one infinite entry of $N_0(\beta^*)$. This leads to $\alpha=\beta^*\leq\tilde{\alpha}$. Therefore, $P$ is $\alpha$-recurrent. This completes the proof.
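Theorem 9.6 suggests a simple numerical procedure: since $u_0(\beta)$ is strictly increasing, the root $\beta^*$ of $u_0(\beta)=1$ can be located by bisection. The Python sketch below is ours, not from the book; it uses a made-up monotone stand-in $\Psi_0(\beta)=\beta B$ in place of Eq. (9.17), since only the monotonicity of $u_0$ matters for the procedure:

```python
import numpy as np

B = np.array([[0.4, 0.3], [0.2, 0.4]])   # hypothetical substochastic block

def u0(beta):
    """Maximal eigenvalue of the stand-in censored matrix Psi0(beta) = beta*B."""
    return max(abs(np.linalg.eigvals(beta * B)))

lo, hi = 1.0, 2.0                        # u0(1) < 1 < u0(2) for this B
for _ in range(60):                      # bisection on the monotone u0
    mid = 0.5 * (lo + hi)
    if u0(mid) < 1.0:
        lo = mid
    else:
        hi = mid
beta_star = 0.5 * (lo + hi)
assert abs(u0(beta_star) - 1.0) < 1e-8
```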
9.3.3 Two Sets of Expressions

Now, we use the UL-type RG-factorization to express the quasi-stationary distributions for the transition matrix $P$ of M/G/1 type with boundary. We present two sets of expressions: one for an $\alpha$-recurrent matrix with $\beta=\alpha$, and the other for all the other cases.

Case I $\beta=\alpha$, under which $P$ is $\alpha$-recurrent. In this case, the quasi-stationary distribution is expressed in the following theorem.

Theorem 9.7 If $P$ is $\alpha$-recurrent, then the quasi-stationary distribution is given by
$$\pi_0(\alpha)=\kappa x_0$$
and
$$\pi_k(\alpha)=\pi_0(\alpha)R_{0,k}(\alpha)+\sum_{i=1}^{k-1}\pi_i(\alpha)R_{k-i}(\alpha),\quad k\geq 1,$$
where $x_0$ is the left Perron-Frobenius vector of the matrix $\Psi_0(\alpha)$ and the constant $\kappa$ satisfies $\sum_{k=0}^{\infty}\pi_k(\alpha)e=1$.
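Before the proof, a small numerical sketch of the recursion in Theorem 9.7 (ours, not from the book): given made-up stand-ins for the R-measure of Eqs. (9.22)-(9.23) and a starting vector $\pi_0=\kappa x_0$, the $\pi_k$ are built level by level and then normalized over the truncation:

```python
import numpy as np

K, m = 6, 2
rng = np.random.default_rng(1)
R0 = {k: 0.5 ** k * rng.random((m, m)) for k in range(1, K)}   # stand-in R_{0,k}(alpha)
R = {k: 0.5 ** k * rng.random((m, m)) for k in range(1, K)}    # stand-in R_k(alpha)
pi = [rng.random(m)]                                           # pi_0 = kappa*x0

for k in range(1, K):
    term = pi[0] @ R0[k]                     # pi_0 R_{0,k}
    for i in range(1, k):                    # + sum_i pi_i R_{k-i}
        term = term + pi[i] @ R[k - i]
    pi.append(term)

total = sum(v.sum() for v in pi)
pi = [v / total for v in pi]                 # normalize sum_k pi_k e = 1 (truncated)
assert abs(sum(v.sum() for v in pi) - 1.0) < 1e-12
```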
Proof We solve $\pi(\alpha)(I-\alpha P)=0$ in two steps. In the first step, we write
$$x=\pi(\alpha)[I-R_U(\alpha)].\qquad(9.27)$$
Then
$$x[I-\Psi_D(\alpha)][I-G_L(\alpha)]=0,\qquad(9.28)$$
which is equivalent to
$$x_0[I-\Psi_0(\alpha)]-x_1[I-\Psi(\alpha)]G_{1,0}(\alpha)=0,$$
$$x_k[I-\Psi(\alpha)]-x_{k+1}[I-\Psi(\alpha)]G(\alpha)=0,\quad k\geq 1.$$
Since $P$ is $\alpha$-recurrent, it follows from Theorem 9.6 that the maximal eigenvalue of $\Psi_0(\alpha)$ is $u_0(\alpha)=1$. Therefore, for the censored discounted transition matrix $\Psi_0(\alpha)$, there exists a nonnegative non-zero $x_0$ such that $x_0[I-\Psi_0(\alpha)]=0$. Hence, $(\kappa x_0,0,0,\ldots)$ is a solution to Eq. (9.28). It follows from Eq. (9.27) that
$$\kappa x_0=\pi_0(\alpha),$$
$$\pi_0(\alpha)R_{0,k}(\alpha)+\sum_{i=1}^{k-1}\pi_i(\alpha)R_{k-i}(\alpha)-\pi_k(\alpha)=0,\quad k\geq 1.$$
Some simple computations lead to the desired results. This completes the proof.

Case II $\beta<\alpha$ for the $\alpha$-recurrent case, or $\beta\leq\alpha$ for the $\alpha$-transient case. In this case, we need to construct a nonnegative non-zero solution $y$ to $y[I-G_L(\beta)]=0$, which is described in the following lemma.

Lemma 9.6 For every $0<\beta\leq\alpha$ there exist a $\theta_\beta>0$ and a nonnegative non-zero vector $z$ such that $zG(\beta)=\theta_\beta z$.
Proof Since $G(\beta)\geq 0$, the maximal eigenvalue $\theta_\beta$ of $G(\beta)$ is nonnegative. If $\theta_\beta>0$, then the lemma is proved by choosing $z$ as the left eigenvector of $G(\beta)$ associated with $\theta_\beta$. It follows from Neuts [22], by using the irreducibility of the Markov chain $P$, that $\theta_1>0$. Therefore, $\theta_\beta\geq\theta_1>0$ for all $\beta\geq 1$, since $G(\beta)$ is increasing in $\beta$. For $0<\beta<1$, the proof also relies on the irreducibility of $P$. Suppose that there were an $s$ with $0<s<1$ such that $\theta_s=0$. Then $\theta_\beta=0$ for all $0<\beta\leq s$. Therefore, all the eigenvalues of $G(\beta)$, for $0<\beta\leq s$, would be zero according to the Perron-Frobenius theorem for nonnegative matrices. It follows from the Cayley-Hamilton theorem that
$$G^m(\beta)=0,\quad 0<\beta\leq s,\qquad(9.29)$$
where $m$ is the size of the matrix $G(\beta)$. On the other hand, according to the probabilistic interpretation of $G^m(\beta)$ and the assumption of irreducibility of $P$, $G^m(\beta)\neq 0$, which contradicts Eq. (9.29). This completes the proof.

Let
$$y=\pi(\beta)[I-R_U(\beta)][I-\Psi_D(\beta)].\qquad(9.30)$$
This is equivalent to
$$y_0=\pi_0[I-\Psi_0(\beta)],$$
$$y_1=[\pi_1-\pi_0R_{0,1}(\beta)][I-\Psi(\beta)],$$
$$y_k=\left[\pi_k-\pi_0R_{0,k}(\beta)-\sum_{i=1}^{k-1}\pi_iR_{k-i}(\beta)\right][I-\Psi(\beta)],\quad k\geq 2.$$
Since both $[I-\Psi_0(\beta)]$ and $[I-\Psi(\beta)]$ are invertible in this case, we can express $\pi_k$ in terms of $y_k$ as follows:
$$\pi_0=y_0[I-\Psi_0(\beta)]^{-1},\qquad(9.31)$$
$$\pi_1=\pi_0R_{0,1}(\beta)+y_1[I-\Psi(\beta)]^{-1},\qquad(9.32)$$
$$\pi_k=\pi_0R_{0,k}(\beta)+\sum_{i=1}^{k-1}\pi_iR_{k-i}(\beta)+y_k[I-\Psi(\beta)]^{-1},\quad k\geq 2.\qquad(9.33)$$
Now, we solve
$$y[I-G_L(\beta)]=0\qquad(9.34)$$
for a nonnegative non-zero $y$. If such a solution exists, then $\pi$ calculated by Eq. (9.31), Eq. (9.32) and Eq. (9.33) is nonnegative and non-zero. Equation (9.34) is equivalent to
$$y_0-y_1G_{1,0}(\beta)=0,$$
$$y_k-y_{k+1}G(\beta)=0,\quad k\geq 1.$$
By using Lemma 9.6 and letting $y_0=zG_{1,0}(\beta)$, we can easily check that $y=(y_0,z,z/\theta_\beta,z/\theta_\beta^2,\ldots)$ is a nonnegative non-zero solution to Eq. (9.34). Substituting $y$ into Eq. (9.31), Eq. (9.32) and Eq. (9.33), the quasi-stationary distribution is given in the following theorem.

Theorem 9.8 For $1<\beta<\alpha$, or $\beta=\alpha$ under which $P$ is $\alpha$-transient, the quasi-stationary distribution of $P$ is given by
$$\pi_0=y_0N_0(\beta)$$
and, for $k\geq 1$,
$$\pi_k=\frac{z}{\theta_\beta^{k-1}}N(\beta)+\pi_0R_{0,k}(\beta)+\sum_{i=1}^{k-1}\pi_iR_{k-i}(\beta),$$
where $y_0=zG_{1,0}(\beta)$; iterating this recursion expresses $\pi_k$ explicitly as a finite sum of products of the matrices $R_j(\beta)$, $R_{0,i}(\beta)$, $G(\beta)$, $G_{1,0}(\beta)$, $N(\beta)$ and $N_0(\beta)$, with the convention $R_0(\beta)=I$.
9.3.4 Conditions for $\alpha$-Positive Recurrence

The expressions for the quasi-stationary distribution are only based on the $\alpha$-recurrence and the $\alpha$-transience. However, it is necessary to classify the $\alpha$-recurrence as either $\alpha$-positive recurrence or $\alpha$-null recurrence. Now, we provide conditions for this classification. For simplicity of description, we assume that the matrix $A^*(1)$ is irreducible and stochastic.

Besides the quasi-stationary distribution $\pi$ provided in Theorem 9.7, we also need to similarly express the $\alpha$-invariant vector $v$ according to the system of equations $[I-\alpha P]v=0$ and $\pi v=1$. It is easy to check by the RG-factorization that
$$v_0=w_0,\qquad v_k=G(\alpha)^{k-1}G_{1,0}(\alpha)w_0,\quad k\geq 1,\qquad(9.35)$$
where $w_0$ is the unique (up to multiplication by a positive constant) solution of $[I-\Psi_0(\alpha)]w_0=0$. It follows from Theorem 9.7 that
$$\pi^*(z)=x_0R_0^*(z)[I-R^*(z)]^{-1}=x_0R_0^*(z)\sum_{n=0}^{\infty}[R^*(z)]^n,$$
which gives
$$\pi_k=x_0\left[R_{0,k}(\alpha)*\sum_{n=0}^{\infty}R_k(\alpha)^{*n}\right],\quad k\geq 1,\qquad(9.36)$$
where $*$ denotes convolution of matrix sequences in the level index, $R_0^*(z)=\sum_{k=1}^{\infty}z^kR_{0,k}(\alpha)$ and $R^*(z)=\sum_{k=1}^{\infty}z^kR_k(\alpha)$. It follows from Eq. (9.35) and Eq. (9.36) that
$$\sum_{i=0}^{\infty}\pi_iv_i=x_0v_0+x_0\sum_{k=1}^{\infty}\left[R_{0,k}(\alpha)*\sum_{n=0}^{\infty}R_k(\alpha)^{*n}\right]G(\alpha)^{k-1}G_{1,0}(\alpha)v_0.\qquad(9.37)$$
Clearly, $\sum_{i=0}^{\infty}\pi_iv_i<\infty$ if and only if
$$\sum_{k=1}^{\infty}\left[R_{0,k}(\alpha)*\sum_{n=0}^{\infty}R_k(\alpha)^{*n}\right]G(\alpha)^{k-1}<\infty.\qquad(9.38)$$
Let $g_\alpha$ and $H(\alpha)$ be the maximal eigenvalue and the associated right eigenvector of $G(\alpha)$, respectively. Since $A^*(1)$ is irreducible, we have $H(\alpha)>0$. It follows from Eq. (9.38) that
$$\sum_{k=1}^{\infty}\left[R_{0,k}(\alpha)*\sum_{n=0}^{\infty}R_k(\alpha)^{*n}\right]G(\alpha)^{k-1}H(\alpha)=\frac{1}{g_\alpha}R_0^*(g_\alpha)\sum_{n=0}^{\infty}[R^*(g_\alpha)]^nH(\alpha).$$
Then, Eq. (9.38) holds if and only if (1) $R_0^*(g_\alpha)<\infty$, (2) $R^*(g_\alpha)<\infty$, and (3) the matrix $I-R^*(g_\alpha)$ is invertible. The following lemma provides the conditions under which (1) $R_0^*(g_\alpha)<\infty$ and (2) $R^*(g_\alpha)<\infty$.

Lemma 9.7 (1) $R_0^*(g_\alpha)<\infty$ if and only if $\sum_{k=1}^{\infty}kD_kG(\alpha)^{k-1}<\infty$.
(2) $R^*(g_\alpha)<\infty$ if and only if $\sum_{k=1}^{\infty}kA_kG(\alpha)^{k-1}<\infty$.
Proof We only prove (1), while (2) can be proved similarly. It is clear that
$$R_0^*(g_\alpha)=\sum_{k=1}^{\infty}g_\alpha^kR_{0,k}(\alpha)=\sum_{k=1}^{\infty}g_\alpha^k\sum_{i=1}^{\infty}\alpha D_{k+i}G(\alpha)^{i-1}N(\alpha).$$
Hence we obtain
$$R_0^*(g_\alpha)N(\alpha)^{-1}H(\alpha)=\sum_{k=1}^{\infty}\sum_{i=1}^{\infty}\alpha g_\alpha^{k+i-1}D_{k+i}H(\alpha)=\alpha\sum_{k=2}^{\infty}(k-1)g_\alpha^{k-1}D_kH(\alpha),$$
which illustrates that $R_0^*(g_\alpha)<\infty$ if and only if $\sum_{k=1}^{\infty}kg_\alpha^{k-1}D_k<\infty$, and if and only if $\sum_{k=1}^{\infty}kD_kG(\alpha)^{k-1}<\infty$. This completes the proof.

In what follows we provide a condition under which the matrix $I-R^*(g_\alpha)$ is invertible. To this end, we need the RG-factorization for the repeated row, as follows:
$$zI-\alpha A^*(z)=[I-R^*(z)][I-\Psi(\alpha)][zI-G(\alpha)].\qquad(9.39)$$
Let $\chi(z)$ be the maximal eigenvalue of the matrix $A^*(z)$ for $z>0$. It is clear that Property 7 of $\chi(z)$ in Bean, Pollett and Taylor [3] (pp. 393-394) also holds for the transition matrix of M/G/1 type. Note that the matrix $A^*(1)$ is irreducible and stochastic; then the equation $z=\alpha\chi(z)$ has two different roots in $(0,z_0)$ if $1\leq\alpha<\tilde{\alpha}$, and one root repeated twice in $(0,z_0)$ if $\alpha=\tilde{\alpha}$, where $z_0$ is the radius of convergence of $A^*(z)$. Furthermore, the equation $\det(zI-\alpha A^*(z))=0$ has two different roots in $(0,z_0)$ if $1\leq\alpha<\tilde{\alpha}$, and one root repeated twice in $(0,z_0)$ if $\alpha=\tilde{\alpha}$.

Lemma 9.8 (1) If $\alpha<\tilde{\alpha}$, then the matrix $I-R^*(g_\alpha)$ is invertible.
(2) If $\alpha=\tilde{\alpha}$, then the matrix $I-R^*(g_\alpha)$ is singular.
Proof (1) If $\alpha<\tilde{\alpha}$, then the equation $\det(zI-\alpha A^*(z))=0$ has two different roots in $(0,z_0)$. Since
$$\{0<z<z_0:\det(zI-\alpha A^*(z))=0\}=\{0<z<z_0:\det(I-R^*(z))=0\}\cup\{0<z<z_0:\det(zI-G(\alpha))=0\}$$
and $z=g_\alpha$ is a positive root of the equation $\det(zI-G(\alpha))=0$, it is not a positive root of the equation $\det(I-R^*(z))=0$. Thus, $I-R^*(g_\alpha)$ is invertible.
(2) If $\alpha=\tilde{\alpha}$, then the equation $\det(zI-\alpha A^*(z))=0$ has one root repeated twice in $(0,z_0)$. Since $z=g_\alpha$ is a positive and simple root of the equation $\det(zI-G(\alpha))=0$, it must also be a positive and simple root of the equation $\det(I-R^*(z))=0$. Thus, $I-R^*(g_\alpha)$ is singular. This completes the proof.

In the proof of Theorem 9.9, we need a result from Theorem 6.4 of Seneta (1981), restated in the following lemma in block-partitioned form.

Lemma 9.9 Suppose $\pi=(\pi_0,\pi_1,\pi_2,\ldots)$ is a $\beta$-invariant measure and $v=(v_0^T,v_1^T,v_2^T,\ldots)^T$ is a $\beta$-invariant vector of the transition matrix $P$, partitioned according to levels. Then $P$ is $\alpha$-positive recurrent if $\pi v=\sum_{i=0}^{\infty}\pi_iv_i<\infty$, in which case $\beta=\alpha$, $\pi$ is (a multiple of) the unique $\alpha$-invariant measure of $P$ and $v$ is (a multiple of) the unique $\alpha$-invariant vector of $P$. Conversely, if $P$ is $\alpha$-positive recurrent, and $\pi$ and $v$ are respectively an invariant measure and vector, then $\pi v<\infty$.

For an $\alpha$-recurrent $P$, the following theorem further provides conditions under which $P$ is $\alpha$-positive recurrent or $\alpha$-null recurrent. The proof follows the above discussions.

Theorem 9.9 If $\sum_{k=1}^{\infty}kD_kG(\alpha)^{k-1}<\infty$, $\sum_{k=1}^{\infty}kA_kG(\alpha)^{k-1}<\infty$ and $\alpha<\tilde{\alpha}$, then the $\alpha$-recurrent Markov chain is $\alpha$-positive recurrent; otherwise, it is $\alpha$-null recurrent.

Remark 9.2 If $\alpha=1$ and $\tilde{\alpha}>1$, then the three conditions in Theorem 9.9 are the same conditions as those in Remark (b) of Neuts [22] (pp. 140-141). This is because in this situation $G(1)$ is stochastic. Therefore, (1) $\sum_{k=1}^{\infty}kD_kG(1)^{k-1}<\infty$ if and only if $\sum_{k=1}^{\infty}kD_k<\infty$; (2) $\sum_{k=1}^{\infty}kA_kG(1)^{k-1}<\infty$ if and only if $\sum_{k=1}^{\infty}kA_k<\infty$; and (3) for the recurrent matrix, $\alpha<\tilde{\alpha}$ if and only if $I-R^*(1)$ is invertible, which is equivalent to $\rho<1$.
9.4 Markov Chains of GI/M/1 Type
In this section, we consider the quasi-stationary distribution of an irreducible Markov chain of GI/G/1 type, and derive conditions for the state D -classification 457
Constructive Computation in Stochastic Models with Applications
and expressions for the quasi-stationary distribution. We consider an irreducible Markov chain of GI/M/1 type whose transition matrix is given by
P
§ D1 ¨ ¨ D2 ¨ D3 ¨ © #
D0 A1 A2 #
A0 A1 #
· ¸ ¸, ¸ ¸ %¹
A0 #
(9.40)
where D1 is a square matrix of finite size m0, all Ai are square matrices of finite size m, the sizes of the other block-entries are determined accordingly and all empty entries are zero. f
Let A
¦A
k
and
k 0
§ A1 ¨ A Q ¨ 2 ¨ A3 ¨ © #
A0 A1 A2 #
A0 A1 #
A0 #
· ¸ ¸. ¸ ¸ %¹
(9.41)
Let D be the radius of convergence of Q. It is clear that D İ D . m l ( E )) Let E Q (Q i, j i , jı1 be the fundamental matrix for the matrix E Q and write l ( E ). Also, write the (1,1)st block-entry in E m N (E ) Q P as N 0 ( E ). 1,1 The R-measure for E P of GI/M/1 type consists of two matrices, R ( E ) and R0,1 ( E ) defined by
R( E )
E A0 N ( E )
(9.42)
E D0 N ( E ).
(9.43)
and R0,1 ( E )
Clearly, R( E ) and R0,1 ( E ) are matrices of size m u m and m0 u m, respectively. The G-measure for the matrix E P consists of two sequences of matrices Gk ,0 ( E ) and Gk ( E ) for k 1,2,…, which are of size m u m0 and m u m respectively, and is defined by f
Gk ,0 ( E )
¦ Ql
( E ) E Dk i
(9.44)
( E ) E Ak i .
(9.45)
1,i
i 1
and f
Gk ( E )
¦ Ql
1,i
i 1
458
9 Quasi-Stationary Distributions
The following lemma says that all the block-entries in the first block-row in m E Q can be explicitly expressed in terms of the two matrices R( E ) and N( E ). Lemma 9.10 l (E ) Q 1, j
N ( E ) R( E ) j 1 , j ı 1.
(9.46)
Proof It is clear that
l ( E ), Q l ( E ),!) (Q 1,2 1,3
m N (E )E H E Q,
where E H ( E A0 ,0, 0,!). The repeating structure and the property of skipfree-to-right of the transition matrix E Q lead to l ( E ), Q l ( E ),!) (Q 1,2 1,3
l ( E ), Q l ( E ),!). N ( E ) E A0 ( N ( E ), Q 1,2 1,3
The proof is completed by the above recursive expression and repeatedly using E A0 N ( E ) R ( E ). It follows from the definition Eq. (9.44) and Eq. (9.45) of the G-measure and Lemma 9.10 that for k 1,2,…, f
¦ N ( E ) R( E )
Gk ,0 ( E )
i 1
E Dk i
(9.47)
E Ak i .
(9.48)
i 1
and f
Gk ( E )
¦ N ( E ) R( E )
i 1
i 1
The following theorem says that both N( E ) and N0( E ) can be expressed in terms of the matrix R( E ). Theorem 9.10 For the transition matrix of GI/M/1 type, the matrix N( E ) is expressed as N ( E ) [ I < ( E )]1 ,
(9.49)
where f
¦ R( E )
< (E )
k 1
E Ak .
(9.50)
N 0 ( E ) [ I <0 ( E )]1 ,
(9.51)
k 1
The matrix N0( E ) is expressed as
where f
<0 ( E ) E D1 ¦ E D0 N ( E ) R( E )k 1 E Dk 1 .
(9.52)
k 1
459
Constructive Computation in Stochastic Models with Applications
Proof We only prove Eq. (9.49) and Eq. (9.50), while Eq. (9.51) and Eq. (9.52) can be proved similarly. Let
§T H · ¨ L Q ¸, © ¹
( where T
A1 ,
( A0 ,0,0,!),
H
L
( A2T , A3T , A4T ,!)T .
It is clear that l ( E ) [ I < ( E )]1 , N (E ) Q 1,1 where for (
Q and T A1,
< ( E ) E T E H Em QE L
f
l (E )E A E A1 E A0 ¦ Q k 1, k 1 k 2
f
¦ R( E )
k 1
E Ak .
k 1
The last equation follows from E A0 N ( E ) R ( E ) by using Lemma 9.10. It follows from Eq. (9.42) and Eq. (9.49) that R( E ) satisfies the matrix equation f
R( E )
¦ R( E )
k
E Ak .
(9.53)
k 0
It is clear that R ( E ) is the minimal nonnegative solution to Eq. (9.53). For the matrix P of GI/M/1 type, the UL-type RG-factorization is given by
I E P [ I RU ( E )][ I < D ( E )][ I GL ( E )], where § I R0,1 ( E ) · ¨ ¸ R( E ) I ¨ ¸ [ I RU ( E )] ¨ R( E ) I ¸, ¨ % %¸ ¨ ¸ © ¹
< D ( E ) diag(<0 ( E ),< ( E ),< ( E ),!) and 460
(9.54)
9 Quasi-Stationary Distributions
I § · ¨ G ( E ) ¸ I ¨ 1,0 ¸ I [ I GL ( E )] ¨ G2,0 ( E ) G1 ( E ) ¸. ¨ G ( E ) G ( E ) G ( E ) I ¸ 2 1 ¨¨ 3,0 ¸ # # # % ¸¹ ©
9.4.1
Spectral Analysis
Now, we provide spectral analysis for the R-and G-measures. The spectral analysis is a key for expressing the quasi-stationary distribution for the Markov chain of GI/M/1 type. For convenience of description, we first assume that the matrix A is stochastic, and then extend our discussions to the strictly substochastic case. Let U Z[ , f
where [
¦ kA e and Ȧ is the stationary probability vector of the matrix A. k
k 1
Let f
A* ( z )
¦A z k
k
,
0 İ z z0 ,
(9.55)
k 0
where z0 is the radius of convergence of function A* ( z ). Let F ( z ) , u(z) and v(z) be the maximal eigenvalue and the corresponding Perron-Frobenius left and right eigenvectors of the matrix A* ( z ), respectively, and r( E ), l( E ) and h( E ) the maximal eigenvalue and the corresponding Perron-Frobenius left and right eigenvectors of the matrix R( E ), respectively. Using the same proof as in Lemma 1.3.2 of Neuts [21], we can show from Eq. (9.53) that r ( E ) EF (r ( E )) and l( E ) cu (r( E )), where c is a non-zero constant. Without loss of generality, we set c 1. It means that z r( E ) is a positive solution of z EF ( z ). Actually, based on the discussion in Bean, Pollett and Taylor [3], we know from the irreducibility of the matrix A that for the equation z E F ( z ), there exist exactly two different positive solutions if 0 E D or only one (repeated twice) if E D . We denote the two solutions by T E and T E and assume that 0 T E İ T E . The following facts are a simple summarization on T E and T E based on Lemma 2.1 in Kijima [14]. We now consider the following three cases: (1) Assume U >1 and D > 1. In this case, T D 1. In fact, for 1< E < D , 0 T1 TE TD
TD T E T1 1;
and for 0 < E <1, 0 TE T1 T1 1 T E z0 . 461
Constructive Computation in Stochastic Models with Applications
(2) Assume U < 1 and D > 1. In this case, TD >1. In fact, for 1< E < D , 1 T1 TE TD
TD T E T 1 z 0 ;
and for 0 < E < 1, 0 TE T1 1 T1 T E z0 . (3) Assume U 1. In this case, TD 1. Note that TD D F (TD ), we obtain that 1 1, since the matrix A is stochastic. For 0< E <1, D F (1)
0 TE 1 T E z0 . Figure 9.1 and Fig. 9.2 intuitively depict the above three cases, respectively. Let f
GE* ( z )
¦ z G ( E ). k
k
k 1
Figure 9.1 The case with U ! 1
For 0 E İ D ,
Lemma 9.11
zI E A* ( z ) [ zI R ( E )][ I < ( E )][ I GE* ( z )].
(9.56)
In what follows we show that T E is the maximal eigenvalue of the matrix Rmax ( E ) defined in Eq. (9.57), which is another nonnegative solution to the matrix f
equation X
¦X
k
E Ak . We show TE
r ( E ). We define
k 0
Rmax ( E )
R( E ) h( E )u (T E ) [T E I R( E )],
(9.57)
where u (T E ) and h ( E ) are selected such that u (T E )h( E ) 1, since u (T E ) >0 according to the irreducibility of the matrix A. 462
9 Quasi-Stationary Distributions
Figure 9.2 (a) The case with U 1 ; (b) The case with U
1
Lemma 9.12 (1) T E is the maximal eigenvalue of Rmax ( E ) , and u( T E ) and h ( E ) are the associated left and right eigenvectors of Rmax ( E ) , respectively. f
(2) Rmax ( E ) is a solution to the equation X
¦X
k
E Ak for 0 E İ D .
k 0
(3) Rmax ( E ) is nonnegative for satisfying Rmax ( E ) : R ( E ) for 0 E D and Rmax (D ) R (D ). Proof (1) This can be directly verified. (2) By induction, we can obtain that for all k ı 0, Rmax ( E ) k
R( E )k h( E )u (T E ) [T Ek I R( E )k ]. f
Note that EF (T E ) T E , we have u (T E )¦T Ek E Ak
T E u (T E ). Therefore,
k 0
f
¦ Rmax ( E )k E Ak k 0
f
f ªf k º k k R ( E ) E A h ( E ) u ( T ) T E A ¦ k E « ¦ E k ¦ R ( E ) E Ak » k 0 k 0 ¬k 0 ¼ R ( E ) h( E )u (T E ) [T E I R ( E )] Rmax ( E ).
(3) Since h ( E ) ı 0 and u (T E ) ı it follows from Eq. (9.57) that in order to prove that Rmax ( E ) ı R( E ), we only need to check that u (T E ) R ( E ) İ T E u (T E ). To do this, we write R0 ( E )
0
and f
RN 1 ( E )
¦R
N
( E )k E Ak , N ı
k 0
It is clear that u (T E ) R0 ( E )
0 İ T E u (T E ). 463
Constructive Computation in Stochastic Models with Applications
In what follows we assume that for any given N l, u (T E ) Rl ( E ) İ T E u (T E ),
then for N
l 1 we obtain f
f
k 0
k 0
u (T E ) Rl 1 ( E ) u (T E )¦ Rl ( E ) k E Ak İ u (T E )¦T Ek E Ak
T E u (T E ).
Therefore, by induction we know that for all N ı 0 we have u (T E ) RN ( E ) İ T E u (T E ).
(9.58)
Similar analysis to Lemma 1.2.3 in Neuts [21] leads to R(E )
lim RN ( E ),
N of
it follows from Eq. (9.58) that u (T E ) R( E ) İ T E u (T E ). For 0< E
h( E )[T E r ( E )] : 0,
we obtain h( E )u (T E ) [T E I R( E )] : 0,
hence Rmax ( E ) : R ( E ). For E D , since the equation z DF ( z ) has only one solution, we obtain TD r (D ) and u (TD ) Al (TD ) is also a left eigenvector of the matrix R(D ). Clearly, h(D )u (TD ) [TD I R(D )] 0.
Thus, Rmax (D ) R(D ). This completes the proof. The following corollary follows from the proof of the above result. Corollary 9.3 For 0 E İ D ,TE ( E ) r ( E ). Further, we show that the larger solution T E to the equation z E F ( z ) is also a solution to the equation det ( I GE* ( z )) 0. Theorem 9.11 For all 0 E İ D , z T E is a positive solution to equation det ( I GE* ( z )) 0. Proof Since the matrix I < ( E ) is invertible for 0 E İ D , it follows from Lemma 9.11 that {z ! 0 : det ( zI E A* ( z ))
0} {z ! 0 : det ( zI R( E )) *
* {z ! 0 : det ( I GE ( z )) 464
0} 0}.
9 Quasi-Stationary Distributions
Since for 0 E D ,T E ! r ( E ) and r ( E ) is the maximal eigenvalue of R ( E ), we obtain that det (T E I R( E )) z 0 Note that det (T E I E A* (T E ))
0, we obtain
*
that z T E must be a positive solution to equation det ( I GE ( z )) 0. Finally, since f ( E ) det ( I GE* (T E )) is left-continuous at E lim f ( E ) 0.
D , det ( I GD* (TD ))
E /D
Lemma 9.13 For all $0<\beta\leqslant\alpha$, the maximal eigenvalue of the nonnegative matrix $G_\beta^*(\theta_\beta)$ is equal to one.

Proof Let $g(z)$ be the maximal eigenvalue of the nonnegative matrix $G_\beta^*(z)$ for $z\in[0,z_0)$, where $z_0$ is the radius of convergence of the matrix $A^*(z)$. When $z=\theta_\beta$, we write $\lambda=g(\theta_\beta)$. Since $\det(I-G_\beta^*(\theta_\beta))=0$, the value $1$ is an eigenvalue of $G_\beta^*(\theta_\beta)$, so $\lambda\geqslant 1$. Suppose that $\lambda>1$. Noting that $g(z)$ is an increasing and continuous function for $z\in[0,z_0)$, with $g(0)=0$ and $g(\theta_\beta)=\lambda>1$, there must exist at least one point $\theta^*(\beta)\in(0,\theta_\beta)$ such that $g(\theta^*(\beta))=1$, by the intermediate value theorem for continuous functions. Thus $\det(I-G_\beta^*(\theta^*(\beta)))=0$, and so $\det(\theta^*(\beta)I-\beta A^*(\theta^*(\beta)))=0$. Therefore, $z=\theta^*(\beta)$ is a positive solution to the equation $z=\beta\chi(z)$. As shown earlier in this section, the equation $z=\beta\chi(z)$ has two different positive solutions $\theta_\beta$ and $r(\beta)$ for $0<\beta<\alpha$. For $0<\beta<\alpha$, it follows from Theorem 9.11 that $\det(I-G_\beta^*(r(\beta)))\neq 0$, so $\theta^*(\beta)\neq r(\beta)$. Based on this, $z=\theta^*(\beta)$ would be a third positive solution to the equation $z=\beta\chi(z)$, which is a contradiction. For $\beta=\alpha$, $z=\theta_\alpha=r(\alpha)$ is the unique positive solution to the equation $z=\alpha\chi(z)$; this contradicts that $\theta^*(\alpha)<\theta_\alpha$ is also a positive solution to the equation $z=\alpha\chi(z)$. Hence $\lambda=1$. This completes the proof.

The conclusion given below will be used in constructing a solution for the quasi-stationary distribution. This conclusion is a direct consequence of Theorem 9.11 and Lemma 9.13.

Corollary 9.4 For $0<\beta\leqslant\alpha$, there always exists a nonnegative non-zero row vector $x_1$ such that $x_1=x_1G_\beta^*(\theta_\beta)$, that is,
$$x_1=x_1\sum_{i=1}^{\infty}\theta_\beta^iG_i(\beta). \qquad (9.59)$$
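Numerically, the vector $x_1$ of Corollary 9.4 is a left Perron-Frobenius eigenvector and can be computed by power iteration on the left. The $2\times 2$ matrix below is a hypothetical stand-in for $G_\beta^*(\theta_\beta)$; by Lemma 9.13 that matrix has maximal eigenvalue one, so a row-stochastic matrix is the simplest illustration:

```python
import numpy as np

# Hypothetical stand-in for G*_beta(theta_beta): any irreducible nonnegative
# matrix with maximal eigenvalue 1 works; a row-stochastic matrix is simplest.
M = np.array([[0.5, 0.5],
              [0.3, 0.7]])

def left_perron_vector(M, iters=500):
    """Left power iteration: x <- x M, renormalized to sum to 1."""
    x = np.ones(M.shape[0]) / M.shape[0]
    for _ in range(iters):
        x = x @ M
        x /= x.sum()
    return x

x1 = left_perron_vector(M)
print(x1)   # fixed point of x1 = x1 M, approximately [0.375 0.625] here
```

By Perron-Frobenius theory the fixed point is unique up to a positive multiple, which matches the uniqueness of $x_1$ used later in the construction.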
In what follows we first construct a formal solution for the quasi-stationary distribution $\{\pi_k\}$. To guarantee that $\pi_k$ is finite for each $k\geqslant 0$, we need to know whether or not $\sum_{i=1}^{\infty}\theta_\beta^{i-1}G_{i,0}(\beta)$ is finite, which is discussed in the following theorem.
Constructive Computation in Stochastic Models with Applications
Theorem 9.12 Suppose that the matrix $A$ is stochastic. For $0<\beta\leqslant\alpha$,
(1) if $\rho\neq 1$, then $\sum_{i=1}^{\infty}\theta_\beta^{i-1}G_{i,0}(\beta)$ is finite;
(2) if $\rho=1$ and $\sum_{i=1}^{\infty}i^2A_i$ is finite, then $\sum_{i=1}^{\infty}\theta_\beta^{i-1}G_{i,0}(\beta)$ is finite.
Proof It follows from Eq. (9.47) that
$$\sum_{k=1}^{\infty}\theta_\beta^{k-1}G_{k,0}(\beta)=N(\beta)\sum_{k=1}^{\infty}\sum_{i=1}^{\infty}\theta_\beta^{k-1}R(\beta)^{i-1}\beta D_{k+i}=N(\beta)\sum_{k=1}^{\infty}\sum_{i=0}^{k-1}\theta_\beta^{i}R(\beta)^{k-1-i}\beta D_{k+1}.$$
Since $u(\theta_\beta)>0$, the convergence of the above sum is equivalent to the convergence of
$$u(\theta_\beta)N(\beta)^{-1}\sum_{k=1}^{\infty}\theta_\beta^{k-1}G_{k,0}(\beta)=u(\theta_\beta)\sum_{k=1}^{\infty}\sum_{i=0}^{k-1}\theta_\beta^{i}R(\beta)^{k-1-i}\beta D_{k+1}\geqslant 0.$$
Noticing that $u(\theta_\beta)R(\beta)\leqslant\theta_\beta u(\theta_\beta)$, we obtain
$$u(\theta_\beta)\sum_{k=1}^{\infty}\sum_{i=0}^{k-1}\theta_\beta^{i}R(\beta)^{k-1-i}\beta D_{k+1}\leqslant u(\theta_\beta)\sum_{k=1}^{\infty}\sum_{i=0}^{k-1}\theta_\beta^{k-1}\beta D_{k+1}=u(\theta_\beta)\sum_{k=1}^{\infty}k\theta_\beta^{k-1}\beta D_{k+1}.$$
Since $\theta_\beta$ is strictly increasing for $0<\beta\leqslant\alpha$ according to the irreducibility of the matrix $A$,
$$\sum_{k=1}^{\infty}k\theta_\beta^{k-1}\beta D_{k+1}\leqslant\sum_{k=1}^{\infty}k\theta_\alpha^{k-1}\alpha D_{k+1}.$$
Therefore, the convergence of $\sum_{k=1}^{\infty}k\theta_\alpha^{k-1}D_{k+1}$ is a sufficient condition for the convergence of $\sum_{k=1}^{\infty}\theta_\beta^{k-1}G_{k,0}(\beta)$ for $0<\beta\leqslant\alpha$.
Since $P$ is either stochastic or strictly substochastic and the matrix $A$ is stochastic, we obtain
$$D_{k+1}e\leqslant e-\sum_{i=0}^{k}A_ie=\sum_{i=k+1}^{\infty}A_ie,$$
which means that
$$\sum_{k=1}^{\infty}k\theta_\alpha^{k-1}D_{k+1}e\leqslant\sum_{k=1}^{\infty}\sum_{i=k+1}^{\infty}k\theta_\alpha^{k-1}A_ie=\sum_{i=2}^{\infty}\Bigl(\sum_{k=1}^{i-1}k\theta_\alpha^{k-1}\Bigr)A_ie. \qquad (9.60)$$
Consider the following two cases.
Case I $\rho\neq 1$. In this case, $\theta_\alpha\neq 1$ by Lemma 2.1 in Kijima [14]. Hence,
$$\sum_{k=1}^{i-1}k\theta_\alpha^{k-1}=\frac{1-i\theta_\alpha^{i-1}+(i-1)\theta_\alpha^{i}}{(1-\theta_\alpha)^2}. \qquad (9.61)$$
It is clear from Eq. (9.60) and Eq. (9.61) that if $\sum_{i=1}^{\infty}i\theta_\alpha^{i-1}A_ie$ is finite, then $\sum_{i=1}^{\infty}\theta_\alpha^{i-1}G_{i,0}(\alpha)$ is finite. Noticing that the matrix function $A^*(z)$ is analytic at $\theta_\alpha\in(0,z_0)$, we obtain that
$$\sum_{i=1}^{\infty}i\theta_\alpha^{i-1}A_i=\frac{\mathrm{d}}{\mathrm{d}z}A^*(z)\Big|_{z=\theta_\alpha}$$
is finite. Therefore, $\sum_{i=1}^{\infty}i\theta_\alpha^{i-1}A_ie$ is finite.
Case II $\rho=1$. In this case, $\theta_\alpha=1$ according to Lemma 2.1 in Kijima [14]. Therefore,
$$\sum_{k=1}^{i-1}k\theta_\alpha^{k-1}=\sum_{k=1}^{i-1}k=\frac{i(i-1)}{2}.$$
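The two partial-sum evaluations used in Cases I and II — Eq. (9.61) for $\theta_\alpha\neq 1$ and $i(i-1)/2$ for $\theta_\alpha=1$ — can be sanity-checked against direct summation:

```python
# Check the closed forms of sum_{k=1}^{i-1} k * theta**(k-1) used in Cases I and II.
def partial_sum(theta, i):
    return sum(k * theta**(k - 1) for k in range(1, i))

def closed_form(theta, i):
    if theta == 1.0:                 # Case II: theta_alpha = 1
        return i * (i - 1) / 2
    # Case I, Eq. (9.61): theta_alpha != 1
    return (1 - i * theta**(i - 1) + (i - 1) * theta**i) / (1 - theta)**2

for theta in (0.5, 1.0, 1.3):
    for i in (2, 5, 12):
        assert abs(partial_sum(theta, i) - closed_form(theta, i)) < 1e-9
print("partial-sum formulas verified")
```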
The rest of the proof is now obvious.

Remark 9.3 When the matrix $A$ is irreducible and strictly substochastic, the equation $z=\beta\chi(z)$ still has two positive solutions if $0<\beta<\alpha$, or only one solution (repeated twice) if $\beta=\alpha$. However, the series $\sum_{i=1}^{\infty}\theta_\beta^{i-1}G_{i,0}(\beta)$ may be infinite even under the assumption $\sum_{i=1}^{\infty}i^2A_i<\infty$. For example, let $A_k=\lambda b^k$ with $0<b<1$, $0<\lambda<1-b$, $k\geqslant 0$, and $D_l=\dfrac{1-b-\lambda}{2(1-b)}$ for $l\geqslant 2$. We can obtain that $\alpha=\dfrac{1}{4b\lambda}$ and $\theta_\alpha=\dfrac{1}{2b}$. It follows from Eq. (9.47) that
$$\sum_{k=1}^{\infty}\theta_\alpha^{k-1}G_{k,0}(\alpha)=\alpha N(\alpha)\,\frac{1-b-\lambda}{2(1-b)}\sum_{k=1}^{\infty}k\Bigl(\frac{1}{2b}\Bigr)^{k-1}=\infty$$
whenever $0<b\leqslant\dfrac{1}{2}$.

Remark 9.4 Let
$$\delta=\sup\Bigl\{\beta\geqslant 1:\sum_{i=1}^{\infty}\theta_\beta^{i-1}G_{i,0}(\beta)<\infty\Bigr\}.$$
Then $\delta\leqslant\alpha$. If $\sum_{k=1}^{\infty}\theta_\alpha^{k-1}G_{k,0}(\alpha)<\infty$, then $P$ is $\alpha$-transient; if $\sum_{k=1}^{\infty}\theta_\alpha^{k-1}G_{k,0}(\alpha)$ is infinite, then $P$ is $\alpha$-recurrent. It is clear that if $0<\beta<\delta$, then $\sum_{i=1}^{\infty}\theta_\beta^{i-1}G_{i,0}(\beta)<\infty$; otherwise $\sum_{i=1}^{\infty}\theta_\beta^{i-1}G_{i,0}(\beta)$ is infinite.
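The scalar example of Remark 9.3 is easy to verify directly: with $A_k=\lambda b^k$ we have $\chi(z)=\lambda/(1-bz)$, so $z=\beta\chi(z)$ becomes the quadratic $bz^2-z+\beta\lambda=0$, whose discriminant $1-4b\beta\lambda$ vanishes exactly at $\beta=\alpha$. The parameter values below are hypothetical choices satisfying $0<\lambda<1-b$:

```python
import math

b, lam = 0.3, 0.5                      # hypothetical: 0 < b < 1, 0 < lam < 1 - b
alpha = 1.0 / (4 * b * lam)            # where the discriminant 1 - 4*b*beta*lam vanishes
theta_alpha = 1.0 / (2 * b)            # the double root at beta = alpha

assert abs(1 - 4 * b * alpha * lam) < 1e-12
assert abs(b * theta_alpha**2 - theta_alpha + alpha * lam) < 1e-12

# For beta < alpha the quadratic has two distinct positive roots,
# identified in the text as r(beta) and theta_beta.
beta = 0.8 * alpha
d = math.sqrt(1 - 4 * b * beta * lam)
roots = ((1 - d) / (2 * b), (1 + d) / (2 * b))
print(alpha, theta_alpha, roots)
```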
Now, we discuss the state $\alpha$-classification of the Markov chain. To do this, let $u_0(\beta)$ and $n_0(\beta)$ be the maximal eigenvalues of the censored matrix $\Psi_0(\beta)$ and of its fundamental matrix $N_0(\beta)$, respectively. Then $n_0(\beta)=1/[1-u_0(\beta)]$.

Theorem 9.13 (1) If $u_0(\beta)<1$ for all $0<\beta\leqslant\alpha$, then $N_0(\alpha)<\infty$ and $\delta=\alpha$. In this case, $P$ is $\alpha$-transient.
(2) If there exists a $\beta^*$ with $0<\beta^*\leqslant\alpha$ such that $u_0(\beta^*)=1$, then $\delta=\beta^*$ and $N_0(\delta)$ is infinite. In this case, $P$ is $\alpha$-recurrent.

Proof Based on the facts that $n_0(\beta)=1/[1-u_0(\beta)]$ and that $n_0(\beta)=\infty$ if and only if $N_0(\beta)=\infty$, we discuss the following two cases.
Case I There exists no solution to $1-u_0(\beta)=0$ for $0<\beta\leqslant\alpha$. In this case $n_0(\alpha)<\infty$, hence $N_0(\alpha)<\infty$. Therefore $\delta\geqslant\alpha$. This, together with the fact $\delta\leqslant\alpha$, implies $\delta=\alpha$. Hence, $P$ is $\alpha$-transient.
Case II There exists a solution $\beta^*$ to $1-u_0(\beta)=0$ with $0<\beta^*\leqslant\alpha$. In this case $n_0(\beta^*)=\infty$, hence there exists at least one infinite entry of $N_0(\beta^*)$. This leads to $\delta=\beta^*\leqslant\alpha$. Therefore, $P$ is $\alpha$-recurrent. This completes the proof.
9.4.2 Two Sets of Expressions

We use the UL-type RG-factorization to express the quasi-stationary distribution $\pi(\beta)=(\pi_0(\beta),\pi_1(\beta),\pi_2(\beta),\ldots)$ for $1\leqslant\beta\leqslant\alpha$. We present two sets of expressions: one for an $\alpha$-recurrent matrix with $\beta=\alpha$, and another for all the other cases. To compute the quasi-stationary distribution, we need to verify $\pi=\pi\beta P$, that is,
(1) $\pi_0=\sum_{k=0}^{\infty}\pi_k\beta D_{k+1}$,
(2) $\pi_1=\pi_0\beta D_0+\sum_{k=1}^{\infty}\pi_k\beta A_k$, and
(3) $\pi_l=\sum_{k=0}^{\infty}\pi_{k+l-1}\beta A_k$ for $l\geqslant 2$.
The first set: $\alpha$-recurrent with $\beta=\alpha$. In this case, we solve $\pi(I-\alpha P)=0$ according to Eq. (9.54) in two steps. Let
$$x=\pi[I-R_U(\alpha)]. \qquad (9.62)$$
Then
$$x[I-\Psi_D(\alpha)][I-G_L(\alpha)]=0. \qquad (9.63)$$
In the first step, if $x=(x_0,x_1,\ldots)$ and $\pi=(\pi_0,\pi_1,\ldots)$ are partitioned according to levels, then Eq. (9.63) is equivalent to
$$x_0[I-\Psi_0(\alpha)]-\sum_{i=1}^{\infty}x_i[I-\Psi(\alpha)]G_{i,0}(\alpha)=0,$$
$$x_k[I-\Psi(\alpha)]-\sum_{i=k+1}^{\infty}x_i[I-\Psi(\alpha)]G_{i-k}(\alpha)=0,\qquad k\geqslant 1.$$
Since $P$ is $\alpha$-recurrent, it follows from (2) in Theorem 9.13 that the maximal eigenvalue of $\Psi_0(\alpha)$ is $u_0(\alpha)=1$. Therefore, for the nonnegative and irreducible $\Psi_0(\alpha)$, there exists a positive $x_0$ such that $x_0[I-\Psi_0(\alpha)]=0$. Hence, $(\tau x_0,0,0,\ldots)$ is a solution to Eq. (9.63). In the second step, to express $\pi_k$ in terms of $x_k$ we solve Eq. (9.62) and obtain
$$\pi_0=\tau x_0$$
and
$$\pi_k=\pi_0R_{0,1}(\alpha)R^{k-1}(\alpha),\qquad k\geqslant 1.$$
We prove the following theorem.

Theorem 9.14 If $P$ is $\alpha$-recurrent, then the quasi-stationary distribution for $\beta=\alpha$ is given by
$$\pi_0=\tau x_0 \qquad (9.64)$$
and
$$\pi_k=\pi_0R_{0,1}(\alpha)R^{k-1}(\alpha),\qquad k\geqslant 1, \qquad (9.65)$$
where $x_0$ is the left Perron-Frobenius vector of the matrix $\Psi_0(\alpha)$, and the constant $\tau$ is chosen so that $\sum_{k=0}^{\infty}\pi_ke=1$.
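The matrix-geometric form (9.64)-(9.65) is straightforward to evaluate numerically. The matrices below are hypothetical placeholders for $x_0$, $R_{0,1}(\alpha)$ and $R(\alpha)$; since $\mathrm{sp}(R)<1$, the tail sums to $\pi_0R_{0,1}(I-R)^{-1}e$ and $\tau$ normalizes the total mass to one:

```python
import numpy as np

# Hypothetical placeholders for x0, R_{0,1}(alpha) and R(alpha).
x0  = np.array([0.6, 0.4])
R01 = np.array([[0.2, 0.1],
                [0.1, 0.2]])
R   = np.array([[0.3, 0.1],
                [0.2, 0.2]])               # sp(R) = 0.4 < 1

e = np.ones(2)
# Choose tau so that pi0 e + sum_{k>=1} pi0 R01 R^{k-1} e = 1.
tau = 1.0 / (x0 @ e + x0 @ R01 @ np.linalg.solve(np.eye(2) - R, e))

pi0 = tau * x0
pi  = [pi0] + [pi0 @ R01 @ np.linalg.matrix_power(R, k - 1) for k in range(1, 60)]
total = sum(p @ e for p in pi)
print(total)                               # approximately 1 (truncated at k = 59)
```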
Proof We show that the $\pi_k$, $k\geqslant 0$, given in Theorem 9.14 satisfy equations (1), (2) and (3). Equation (3) holds because of Eq. (9.53):
$$\sum_{k=0}^{\infty}\pi_{k+l-1}\alpha A_k=\pi_0R_{0,1}(\alpha)R(\alpha)^{l-2}\sum_{k=0}^{\infty}R(\alpha)^k\alpha A_k=\pi_0R_{0,1}(\alpha)R(\alpha)^{l-1}=\pi_l.$$
To see that (1) holds, we use Eq. (9.52) and the fact $\pi_0\Psi_0(\alpha)=\pi_0$:
$$\sum_{k=0}^{\infty}\pi_k\alpha D_{k+1}=\pi_0\Bigl[\alpha D_1+R_{0,1}(\alpha)\sum_{k=1}^{\infty}R(\alpha)^{k-1}\alpha D_{k+1}\Bigr]=\pi_0\Psi_0(\alpha)=\pi_0.$$
Finally, (2) holds by noticing $R_{0,1}(\alpha)=\alpha D_0N(\alpha)$ and $N(\alpha)=(I-\Psi(\alpha))^{-1}$, and using Eq. (9.50):
$$\pi_0\alpha D_0+\sum_{k=1}^{\infty}\pi_k\alpha A_k=\pi_0\Bigl[\alpha D_0+R_{0,1}(\alpha)\sum_{k=1}^{\infty}R(\alpha)^{k-1}\alpha A_k\Bigr]=\pi_0\alpha D_0[I+N(\alpha)\Psi(\alpha)]=\pi_0R_{0,1}(\alpha)=\pi_1.$$
Therefore, the expression in Theorem 9.14 is, up to multiplication by a positive constant, the unique $\alpha$-invariant measure. This completes the proof.

The second set: $\alpha$-recurrent with $\beta<\alpha$, or $\alpha$-transient with $\beta\leqslant\alpha$. In this case, we need to proceed in two steps. Let
$$y=\pi[I-R_U(\beta)][I-\Psi_D(\beta)]. \qquad (9.66)$$
Then
$$y[I-G_L(\beta)]=0. \qquad (9.67)$$
In the first step, we need to solve Eq. (9.67) for a nonnegative non-zero row vector $y$. If such a solution exists, then $\pi$ can be calculated from $y$. Equation (9.67) is equivalent to
$$y_0=\sum_{i=1}^{\infty}y_iG_{i,0}(\beta) \qquad (9.68)$$
and
$$y_k=\sum_{i=1}^{\infty}y_{k+i}G_i(\beta),\qquad k\geqslant 1. \qquad (9.69)$$
We first consider a special solution to Eq. (9.69) and Eq. (9.68). Let
$$y_k=\theta_\beta^{k-1}x_1,\qquad k\geqslant 1, \qquad (9.70)$$
where $x_1$ is determined as the unique, up to constant multiples, nonnegative solution of the equation $x_1=x_1\sum_{i=1}^{\infty}\theta_\beta^iG_i(\beta)$. Then it follows from Eq. (9.68) and Eq. (9.70) that
$$y_0=x_1\sum_{i=1}^{\infty}\theta_\beta^{i-1}G_{i,0}(\beta) \qquad (9.71)$$
and from Eq. (9.69) and Eq. (9.70) that
$$x_1=x_1\sum_{i=1}^{\infty}\theta_\beta^iG_i(\beta),$$
which is the same as Eq. (9.59); and it follows from Theorem 9.12 that $y_0$ is finite. Therefore, Eq. (9.71) and Eq. (9.70) give a nonnegative non-zero solution to Eq. (9.68) and Eq. (9.69), i.e., to Eq. (9.67). In the second step, solving $y=\pi[I-R_U(\beta)][I-\Psi_D(\beta)]$ in Eq. (9.66) we obtain
$$y_0=\pi_0[I-\Psi_0(\beta)],$$
$$y_1=[-\pi_0R_{0,1}(\beta)+\pi_1][I-\Psi(\beta)],$$
$$y_k=[-\pi_{k-1}R(\beta)+\pi_k][I-\Psi(\beta)],\qquad k\geqslant 2.$$
Since both $[I-\Psi_0(\beta)]$ and $[I-\Psi(\beta)]$ are invertible in this case, we can express $\pi_k$ in terms of $y_k$:
$$\pi_0=y_0N_0(\beta), \qquad (9.72)$$
$$\pi_1=\pi_0R_{0,1}(\beta)+y_1N(\beta) \qquad (9.73)$$
and
$$\pi_k=\pi_{k-1}R(\beta)+y_kN(\beta),\qquad k\geqslant 2. \qquad (9.74)$$
Based on Eq. (9.72) to Eq. (9.74), together with Eq. (9.70) and Eq. (9.71), we can construct a formal solution for the $\beta$-invariant measure. The construction of $\pi_0$ is obvious. For the case of $0<\beta\leqslant\alpha$ with $k\geqslant 1$, we have
$$\pi_k=x_1\sum_{i=1}^{\infty}\theta_\beta^{i-1}G_{i,0}(\beta)N_0(\beta)R_{0,1}(\beta)R^{k-1}(\beta)+x_1N(\beta)\sum_{\substack{i+j=k-1\\ i\geqslant 0,\ j\geqslant 0}}\theta_\beta^iR^j(\beta). \qquad (9.75)$$
Note that the matrix $\theta_\beta I-R(\beta)$ is invertible for $0<\beta<\alpha$ and singular for $\beta=\alpha$; the expression for $\beta=\alpha$ is thus slightly different from that for $0<\beta<\alpha$, so we need to present them separately. When $0<\beta<\alpha$,
$$\sum_{\substack{i+j=k-1\\ i\geqslant 0,\ j\geqslant 0}}\theta_\beta^iR^j(\beta)=[\theta_\beta I-R(\beta)]^{-1}[\theta_\beta^kI-R^k(\beta)],$$
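This geometric-sum identity is easy to verify numerically for a small matrix (the $2\times 2$ matrix $R$ below is a hypothetical placeholder whose spectral radius lies below $\theta$):

```python
import numpy as np

# Check: sum_{i+j=k-1} theta^i R^j = (theta*I - R)^{-1} (theta^k I - R^k),
# valid whenever theta*I - R is invertible.
theta = 0.9
R = np.array([[0.3, 0.1],
              [0.2, 0.2]])        # hypothetical R(beta), sp(R) = 0.4 < theta
I = np.eye(2)

for k in (1, 2, 5, 10):
    direct = sum(theta**i * np.linalg.matrix_power(R, k - 1 - i) for i in range(k))
    closed = np.linalg.solve(theta * I - R,
                             theta**k * I - np.linalg.matrix_power(R, k))
    assert np.allclose(direct, closed)
print("identity verified")
```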
since $\theta_\beta I-R(\beta)$ is invertible. Therefore we obtain the following theorem.

Theorem 9.15 For $0<\beta<\alpha$, the quasi-stationary distribution of $P$ is given by
$$\pi_0=x_1\sum_{i=1}^{\infty}\theta_\beta^{i-1}G_{i,0}(\beta)N_0(\beta) \qquad (9.76)$$
and
$$\pi_k=V_k(\beta)+W_k(\beta),\qquad k\geqslant 1, \qquad (9.77)$$
where
$$V_k(\beta)=x_1N(\beta)[\theta_\beta I-R(\beta)]^{-1}\theta_\beta^k$$
and
$$W_k(\beta)=x_1\Bigl\{\sum_{i=1}^{\infty}\theta_\beta^{i-1}G_{i,0}(\beta)N_0(\beta)R_{0,1}(\beta)-N(\beta)[\theta_\beta I-R(\beta)]^{-1}R(\beta)\Bigr\}R^{k-1}(\beta).$$
Proof Let
$$g_1(\beta)=x_1N(\beta)[\theta_\beta I-R(\beta)]^{-1}.$$
It follows from Eq. (9.56) that
$$\theta_\beta I-\sum_{k=0}^{\infty}\theta_\beta^k\beta A_k=[\theta_\beta I-R(\beta)][I-\Psi(\beta)]\Bigl[I-\sum_{i=1}^{\infty}\theta_\beta^iG_i(\beta)\Bigr];$$
we obtain
$$g_1(\beta)\Bigl[\theta_\beta I-\sum_{k=0}^{\infty}\theta_\beta^k\beta A_k\Bigr]=x_1\Bigl[I-\sum_{i=1}^{\infty}\theta_\beta^iG_i(\beta)\Bigr]=0,$$
that is,
$$g_1(\beta)\sum_{k=0}^{\infty}\theta_\beta^k\beta A_k=\theta_\beta g_1(\beta).$$
Hence, we obtain $g_1(\beta)=\kappa u(\theta_\beta)$, where $\kappa$ is a positive constant. We set $\kappa=1$.
Let
$$g_2(\beta)=x_1\Bigl\{-N(\beta)[\theta_\beta I-R(\beta)]^{-1}R(\beta)+\sum_{i=1}^{\infty}\theta_\beta^{i-1}G_{i,0}(\beta)N_0(\beta)R_{0,1}(\beta)\Bigr\}.$$
Then it follows from Eq. (9.77) that
$$\pi_k=\theta_\beta g_1(\beta)\theta_\beta^{k-1}+g_2(\beta)R(\beta)^{k-1},\qquad k\geqslant 1.$$
We first check (3) as follows:
$$\sum_{k=0}^{\infty}\pi_{k+l-1}\beta A_k=\theta_\beta g_1(\beta)\sum_{k=0}^{\infty}\theta_\beta^{k+l-2}\beta A_k+g_2(\beta)\sum_{k=0}^{\infty}R(\beta)^{k+l-2}\beta A_k=\theta_\beta g_1(\beta)\theta_\beta^{l-1}+g_2(\beta)R(\beta)^{l-1}=\pi_l.$$
Then we check (1) as follows:
$$\sum_{k=0}^{\infty}\pi_k\beta D_{k+1}=x_1\sum_{i=1}^{\infty}\theta_\beta^{i-1}G_{i,0}(\beta)N_0(\beta)\beta D_1+g_1(\beta)\sum_{k=1}^{\infty}[\theta_\beta^kI-R(\beta)^k]\beta D_{k+1}+x_1\sum_{i=1}^{\infty}\theta_\beta^{i-1}G_{i,0}(\beta)N_0(\beta)R_{0,1}(\beta)\sum_{k=1}^{\infty}R(\beta)^{k-1}\beta D_{k+1};$$
noting that
$$R_{0,1}(\beta)=\beta D_0N(\beta),\qquad N_0(\beta)\Psi_0(\beta)=N_0(\beta)-I,$$
$$\Psi_0(\beta)=\beta D_1+\beta D_0N(\beta)\sum_{k=1}^{\infty}R(\beta)^{k-1}\beta D_{k+1},$$
we obtain
$$\sum_{k=0}^{\infty}\pi_k\beta D_{k+1}=\pi_0+x_1N(\beta)\sum_{k=1}^{\infty}\sum_{i=0}^{k-1}\theta_\beta^iR(\beta)^{k-1-i}\beta D_{k+1}-x_1N(\beta)\sum_{i=1}^{\infty}\sum_{k=1}^{\infty}\theta_\beta^{i-1}R(\beta)^{k-1}\beta D_{k+i}.$$
It is easy to check that
$$\sum_{k=1}^{\infty}\sum_{i=0}^{k-1}\theta_\beta^iR(\beta)^{k-1-i}\beta D_{k+1}=\sum_{i=1}^{\infty}\sum_{k=1}^{\infty}\theta_\beta^{i-1}R(\beta)^{k-1}\beta D_{k+i}.$$
Hence (1) is correct. Finally, we prove (2). To do this, we compute
$$\pi_0\beta D_0+\sum_{k=1}^{\infty}\pi_k\beta A_k=x_1\sum_{i=1}^{\infty}\theta_\beta^{i-1}G_{i,0}(\beta)N_0(\beta)\beta D_0+g_1(\beta)\sum_{k=1}^{\infty}\theta_\beta^k\beta A_k+g_2(\beta)\sum_{k=1}^{\infty}R(\beta)^{k-1}\beta A_k;$$
note that
$$g_1(\beta)\sum_{k=1}^{\infty}\theta_\beta^k\beta A_k=g_1(\beta)\theta_\beta-g_1(\beta)\beta A_0$$
and
$$g_2(\beta)\sum_{k=1}^{\infty}R(\beta)^{k-1}\beta A_k=g_2(\beta)\Psi(\beta)=-g_1(\beta)R(\beta)+g_1(\beta)\beta A_0+x_1\sum_{i=1}^{\infty}\theta_\beta^{i-1}G_{i,0}(\beta)N_0(\beta)R_{0,1}(\beta)\Psi(\beta);$$
we obtain
$$\pi_0\beta D_0+\sum_{k=1}^{\infty}\pi_k\beta A_k=g_1(\beta)[\theta_\beta I-R(\beta)]+x_1\sum_{i=1}^{\infty}\theta_\beta^{i-1}G_{i,0}(\beta)N_0(\beta)R_{0,1}(\beta)=\theta_\beta g_1(\beta)+g_2(\beta)=\pi_1.$$
This implies that the expression in Theorem 9.15 is a solution for the $\beta$-invariant measure. This completes the proof.

When $P$ is $\alpha$-transient with $\beta=\alpha$ (in this case, $\delta=\alpha$), a proof similar to that of Theorem 9.15 shows that a nonnegative non-zero $\alpha$-invariant measure of $P$ is given by
$$\pi_0=x_1\sum_{i=1}^{\infty}\theta_\alpha^{i-1}G_{i,0}(\alpha)N_0(\alpha)$$
and
$$\pi_k=V_k(\alpha)+W_k(\alpha),\qquad k\geqslant 1,$$
where
$$V_k(\alpha)=x_1N(\alpha)\sum_{\substack{i+j=k-1\\ i\geqslant 0,\ j\geqslant 0}}\theta_\alpha^iR^j(\alpha) \qquad (9.78)$$
and
$$W_k(\alpha)=x_1\sum_{i=1}^{\infty}\theta_\alpha^{i-1}G_{i,0}(\alpha)N_0(\alpha)R_{0,1}(\alpha)R^{k-1}(\alpha).$$
Under the assumption of the irreducibility of the matrix $A$, the maximal eigenvalue $\theta_\alpha$ of the matrix $R(\alpha)$ is simple. Let $J(\alpha)=\mathrm{diag}(\theta_\alpha,\Lambda(\alpha))$ be the Jordan normal form of $R(\alpha)$. Then the real part of each diagonal entry of $\Lambda(\alpha)$ is strictly less than $\theta_\alpha$, and there always exists an invertible matrix $T(\alpha)$ such that
$$T^{-1}(\alpha)R(\alpha)T(\alpha)=J(\alpha)=\mathrm{diag}(\theta_\alpha,\Lambda(\alpha)).$$
We write
$$T^{-1}(\alpha)=\begin{pmatrix} t_{11}(\alpha) & T_{12}(\alpha)\\ T_{21}(\alpha) & T_{22}(\alpha)\end{pmatrix},$$
where $t_{11}(\alpha)$, $T_{12}(\alpha)$, $T_{21}(\alpha)$ and $T_{22}(\alpha)$ are a scalar, a row vector of size $m-1$, a column vector of size $m-1$ and a matrix of size $(m-1)\times(m-1)$, respectively.

Lemma 9.14 For all $k\geqslant 0$,
$$\Lambda(\alpha)^k(T_{21}(\alpha),T_{22}(\alpha))=(T_{21}(\alpha),T_{22}(\alpha))R(\alpha)^k.$$
Proof A simple matrix computation leads to, for all $k\geqslant 0$,
$$\mathrm{diag}(\theta_\alpha^k,\Lambda(\alpha)^k)T^{-1}(\alpha)=\begin{pmatrix}\theta_\alpha^k(t_{11}(\alpha),T_{12}(\alpha))\\ \Lambda(\alpha)^k(T_{21}(\alpha),T_{22}(\alpha))\end{pmatrix}$$
and
$$T^{-1}(\alpha)R(\alpha)^k=\begin{pmatrix}(t_{11}(\alpha),T_{12}(\alpha))R(\alpha)^k\\ (T_{21}(\alpha),T_{22}(\alpha))R(\alpha)^k\end{pmatrix}.$$
Since $T^{-1}(\alpha)R(\alpha)^k=\mathrm{diag}(\theta_\alpha^k,\Lambda(\alpha)^k)T^{-1}(\alpha)$, we obtain the assertion for all $k\geqslant 0$. This completes the proof.

Let
$$V_k(\alpha)=x_1N(\alpha)\sum_{i=0}^{k-1}\theta_\alpha^iR(\alpha)^{k-1-i}$$
and
$$x_1N(\alpha)T(\alpha)=(\psi,\varPsi).$$
Then
$$V_k(\alpha)=(\psi,\varPsi)\begin{pmatrix}k\theta_\alpha^{k-1} & 0\\ 0 & [\theta_\alpha I-\Lambda(\alpha)]^{-1}[\theta_\alpha^kI-\Lambda(\alpha)^k]\end{pmatrix}\begin{pmatrix} t_{11}(\alpha) & T_{12}(\alpha)\\ T_{21}(\alpha) & T_{22}(\alpha)\end{pmatrix}$$
$$=k\theta_\alpha^{k-1}\psi(t_{11}(\alpha),T_{12}(\alpha))+\varPsi[\theta_\alpha I-\Lambda(\alpha)]^{-1}[\theta_\alpha^kI-\Lambda(\alpha)^k](T_{21}(\alpha),T_{22}(\alpha))$$
$$=k\theta_\alpha^{k-1}\psi(t_{11}(\alpha),T_{12}(\alpha))+\theta_\alpha^k\varPsi[\theta_\alpha I-\Lambda(\alpha)]^{-1}(T_{21}(\alpha),T_{22}(\alpha))-\varPsi[\theta_\alpha I-\Lambda(\alpha)]^{-1}(T_{21}(\alpha),T_{22}(\alpha))R(\alpha)^k,$$
by Lemma 9.14. Therefore, we obtain
$$\pi_k=W_k(\alpha)+V_k(\alpha)=k\theta_\alpha^{k-1}\psi(t_{11}(\alpha),T_{12}(\alpha))+\theta_\alpha^k\varPsi[\theta_\alpha I-\Lambda(\alpha)]^{-1}(T_{21}(\alpha),T_{22}(\alpha))-\Bigl\{\varPsi[\theta_\alpha I-\Lambda(\alpha)]^{-1}(T_{21}(\alpha),T_{22}(\alpha))R(\alpha)-x_1\sum_{i=1}^{\infty}\theta_\alpha^{i-1}G_{i,0}(\alpha)N_0(\alpha)R_{0,1}(\alpha)\Bigr\}R(\alpha)^{k-1}.$$
We summarize the above result in the following theorem.

Theorem 9.16 If $P$ is $\alpha$-transient and $\beta=\alpha$, then the quasi-stationary distribution is given by
$$\pi_0=x_1\sum_{i=1}^{\infty}\theta_\alpha^{i-1}G_{i,0}(\alpha)N_0(\alpha)$$
and, for $k\geqslant 1$,
$$\pi_k=k\theta_\alpha^{k-1}\psi(t_{11}(\alpha),T_{12}(\alpha))+\theta_\alpha^k\varPsi[\theta_\alpha I-\Lambda(\alpha)]^{-1}(T_{21}(\alpha),T_{22}(\alpha))-\Bigl\{\varPsi[\theta_\alpha I-\Lambda(\alpha)]^{-1}(T_{21}(\alpha),T_{22}(\alpha))R(\alpha)-x_1\sum_{i=1}^{\infty}\theta_\alpha^{i-1}G_{i,0}(\alpha)N_0(\alpha)R_{0,1}(\alpha)\Bigr\}R(\alpha)^{k-1}. \qquad (9.79)$$

Now, we consider the special case where $D_i=A_i$ for all $i\geqslant 0$, which corresponds to the transition matrix without boundary. In this case,
$$N_0(\beta)=N(\beta),\qquad R_{0,1}(\beta)=R(\beta),\qquad G_{i,0}(\beta)=G_i(\beta),\qquad i\geqslant 1.$$
Let $y_k=\theta_\beta^ky_0$ for $k\geqslant 0$. Then $y_0=y_0\sum_{i=1}^{\infty}\theta_\beta^iG_i(\beta)$ and $y_0=x_1/\theta_\beta$. Hence,
$$W_k(\beta)=x_1\sum_{i=1}^{\infty}\theta_\beta^{i-1}G_{i,0}(\beta)N_0(\beta)R_{0,1}(\beta)R(\beta)^{k-1}=\frac{x_1}{\theta_\beta}N(\beta)R(\beta)^k=y_0N(\beta)R(\beta)^k$$
and
$$V_k(\beta)=x_1N(\beta)\sum_{i=0}^{k-1}\theta_\beta^iR(\beta)^{k-1-i}=y_0N(\beta)\sum_{i=0}^{k-1}\theta_\beta^{i+1}R(\beta)^{k-1-i}.$$
Therefore, we obtain
$$\pi_k=W_k(\beta)+V_k(\beta)=y_0N(\beta)\sum_{i=0}^{k}\theta_\beta^iR(\beta)^{k-i}. \qquad (9.80)$$
Corollary 9.5 For the transition matrix $P$ of GI/M/1 type without boundary,
(1) if $0<\beta<\alpha$,
$$\pi_k=y_0N(\beta)[\theta_\beta I-R(\beta)]^{-1}[\theta_\beta^{k+1}I-R(\beta)^{k+1}],\qquad k\geqslant 1; \qquad (9.81)$$
(2) if $\beta=\alpha$,
$$\pi_k=k\theta_\alpha^{k-1}\psi(t_{11}(\alpha),T_{12}(\alpha))+\theta_\alpha^k\varPsi[\theta_\alpha I-\Lambda(\alpha)]^{-1}(T_{21}(\alpha),T_{22}(\alpha))-\varPsi[\theta_\alpha I-\Lambda(\alpha)]^{-1}(T_{21}(\alpha),T_{22}(\alpha))R(\alpha)^k,\qquad k\geqslant 1. \qquad (9.82)$$

For an irreducible level-independent QBD process, we have provided two different types of expressions: one from Markov chains of M/G/1 type and one from Markov chains of GI/M/1 type. It is necessary to indicate that the two expressions are identical. For simplicity of interpretation, we only consider the QBD process without boundary; either of the two expressions can be obtained directly from the other.

Lemma 9.15 For a QBD process without boundary, let $\gamma_\alpha$ and $\theta_\alpha$ be the maximal eigenvalues of the matrices $G(\alpha)$ and $R(\alpha)$, respectively. Then $\gamma_\alpha=1/\theta_\alpha$.

Proof For the QBD process and $z>0$, we denote by $\chi(z)$ and $\bar\chi(z)$ the maximal eigenvalues of the matrices $A_0+zA_1+z^2A_2$ and $z^2A_0+zA_1+A_2$, respectively. Since
$$z^2A_0+zA_1+A_2=z^2\Bigl[A_0+\frac{1}{z}A_1+\Bigl(\frac{1}{z}\Bigr)^2A_2\Bigr],$$
we obtain
$$\bar\chi(z)=z^2\chi\Bigl(\frac{1}{z}\Bigr). \qquad (9.83)$$
Noting that $\gamma_\alpha$ is the unique positive solution (repeated twice) to the equation $z=\alpha\bar\chi(z)$, it follows from Eq. (9.83) that $1/\gamma_\alpha$ satisfies $1/\gamma_\alpha=\alpha\chi(1/\gamma_\alpha)$, that is, $z=1/\gamma_\alpha$ is a positive solution to the equation $z=\alpha\chi(z)$. Since $\theta_\alpha$ is the unique positive solution (repeated twice) to the equation $z=\alpha\chi(z)$, we obtain $\theta_\alpha=1/\gamma_\alpha$. This completes the proof.

Using the Markov chain of M/G/1 type, we have
$$\pi_k=\frac{1}{\gamma_\alpha^{k-1}}\sum_{i=0}^{k-1}zG(\alpha)^iN(\alpha)R(\alpha)^i, \qquad (9.84)$$
where $z$ is the nonnegative non-zero left eigenvector of the matrix $G(\alpha)$. Noting that $\gamma_\alpha=1/\theta_\alpha$, $y_0=\theta_\alpha y_0G(\alpha)$ and $\gamma_\alpha z=zG(\alpha)$, we have $y_0=cz$ for a constant $c$. Without loss of generality, we set $c=1$, i.e., $z=y_0$. Thus, it follows from Eq. (9.84) that
$$\pi_k=y_0N(\alpha)\sum_{i=0}^{k-1}\theta_\alpha^{k-1-i}R(\alpha)^i,$$
which is the same as Eq. (9.80). Therefore, for $\beta=\alpha$ it follows that
$$\pi_k=k\theta_\alpha^{k-1}\psi(t_{11}(\alpha),T_{12}(\alpha))+\theta_\alpha^k\varPsi[\theta_\alpha I-\Lambda(\alpha)]^{-1}(T_{21}(\alpha),T_{22}(\alpha))-\varPsi[\theta_\alpha I-\Lambda(\alpha)]^{-1}(T_{21}(\alpha),T_{22}(\alpha))R(\alpha)^k,\qquad k\geqslant 1.$$
This leads to the explicit expression Eq. (9.82) based on the Markov chain of GI/M/1 type.
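Lemma 9.15 can be illustrated in the scalar case $m=1$, where $R(\alpha)$ and $G(\alpha)$ reduce to the double roots of two quadratics whose solution sets are reciprocal to each other. The rates below are hypothetical:

```python
import math

# Scalar QBD without boundary: A0 (up), A1 (stay), A2 (down), A0 + A1 + A2 = 1.
A0, A1, A2 = 0.2, 0.5, 0.3

# z = beta*(A0 + A1*z + A2*z**2) has a double root exactly at beta = alpha:
alpha = 1.0 / (A1 + 2.0 * math.sqrt(A0 * A2))

theta = (1 - alpha * A1) / (2 * alpha * A2)   # theta_alpha: double root of
                                              #   alpha*A2*z^2 + (alpha*A1 - 1)*z + alpha*A0 = 0
gamma = (1 - alpha * A1) / (2 * alpha * A0)   # gamma_alpha: double root of
                                              #   alpha*A0*z^2 + (alpha*A1 - 1)*z + alpha*A2 = 0
print(theta, gamma, theta * gamma)            # theta * gamma = 1, as Lemma 9.15 asserts
```

Substituting $z\mapsto 1/z$ maps one quadratic onto the other, which is the scalar content of Eq. (9.83).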
9.4.3 Conditions for $\alpha$-Positive Recurrence

We now provide conditions under which the Markov chain $P$ of GI/M/1 type is either $\alpha$-positive recurrent or $\alpha$-null recurrent. When the matrix $P$ of GI/M/1 type is $\alpha$-recurrent, the quasi-stationary distribution is matrix-geometric, as shown in Theorem 9.14. At the same time, the $\alpha$-invariant vector, which is a nonnegative non-zero column vector $v$ such that $v=\alpha Pv$, can be expressed as
$$v_0=w_0 \qquad (9.85)$$
and
$$v_k=G_{k,0}(\alpha)v_0+\sum_{i=1}^{k-1}G_{k-i}(\alpha)v_i,\qquad k\geqslant 1, \qquad (9.86)$$
where $w_0$ is the unique, up to multiplication by a positive constant, solution of $[I-\Psi_0(\alpha)]w_0=0$. This expression does not rely on whether or not the matrix $A$ is stochastic. Moreover, when the matrix $A$ is strictly substochastic, the equation $z=\beta\chi(z)$ still has two positive solutions if $0<\beta<\alpha$, or only one positive solution (repeated twice) if $\beta=\alpha$. Based on this fact, the following lemma provides conditions for distinguishing $\alpha$-positive recurrence from $\alpha$-null recurrence. For simplicity of description, we assume that the matrix $A^*(1)$ is irreducible and stochastic.
Lemma 9.16 If $\sum_{k=1}^{\infty}kR(\alpha)^{k-1}D_k<\infty$, $\sum_{k=1}^{\infty}kR(\alpha)^{k-1}A_k<\infty$ and $\alpha<\bar\alpha$, then an $\alpha$-recurrent Markov chain $P$ is $\alpha$-positive recurrent; otherwise, it is $\alpha$-null recurrent. Here $\bar\alpha$ denotes the critical value of $\beta$ at which the two positive solutions $\theta_\beta$ and $r(\beta)$ of the equation $z=\beta\chi(z)$ coalesce.

Proof According to Lemma 9.9, expressions for both the $\alpha$-invariant measure $\pi$ and the $\alpha$-invariant vector $v$ are needed. Let
$$V^*(z)=\sum_{k=1}^{\infty}z^kv_k,\qquad G^*(z)=\sum_{k=1}^{\infty}z^kG_k(\alpha)\qquad\text{and}\qquad G_0^*(z)=\sum_{k=1}^{\infty}z^kG_{k,0}(\alpha).$$
For convenience of description, we express the $\alpha$-invariant vector in the following way. It follows from Eq. (9.85) and Eq. (9.86) that
$$V^*(z)=[I-G^*(z)]^{-1}G_0^*(z)v_0=\sum_{n=0}^{\infty}[G^*(z)]^nG_0^*(z)v_0,$$
which gives
$$v_k=\sum_{n=0}^{\infty}G_k(\alpha)^nG_{k,0}(\alpha)v_0,\qquad k\geqslant 1. \qquad (9.87)$$
It follows from Theorem 9.14 and Eq. (9.87) that
$$\sum_{i=0}^{\infty}\pi_iv_i=x_0v_0+x_0R_{0,1}(\alpha)\sum_{k=1}^{\infty}R(\alpha)^{k-1}\Bigl(\sum_{n=0}^{\infty}G_k(\alpha)^nG_{k,0}(\alpha)\Bigr)v_0. \qquad (9.88)$$
Note that the matrix $\Psi_0(\alpha)$ is nonnegative and irreducible, so the invariant measure $x_0$ and the invariant vector $v_0$ are both positive. Since each column of the nonnegative matrix $R_{0,1}(\alpha)$ is non-zero, it is clear that $x_0R_{0,1}(\alpha)>0$. Therefore, $\sum_{i=0}^{\infty}\pi_iv_i<\infty$ if and only if
$$\sum_{k=1}^{\infty}R(\alpha)^{k-1}\Bigl[\sum_{n=0}^{\infty}G_k(\alpha)^nG_{k,0}(\alpha)\Bigr]<\infty. \qquad (9.89)$$
It is known that $r(\alpha)$ and $l(\alpha)$ are the maximal eigenvalue of $R(\alpha)$ and the associated left eigenvector, respectively. Since $A^*(1)$ is irreducible, $l(\alpha)>0$. It follows from Eq. (9.89) that
$$l(\alpha)\sum_{k=1}^{\infty}R(\alpha)^{k-1}\Bigl[\sum_{n=0}^{\infty}G_k(\alpha)^nG_{k,0}(\alpha)\Bigr]=\frac{1}{r(\alpha)}\,l(\alpha)\sum_{n=0}^{\infty}[G^*(r(\alpha))]^nG_0^*(r(\alpha)).$$
Clearly, Eq. (9.89) is true if and only if (1) $G_0^*(r(\alpha))<\infty$, (2) $G^*(r(\alpha))<\infty$, and (3) the matrix $I-G^*(r(\alpha))$ is invertible.
We first analyze the condition in (3). Note that the equation $z=\alpha\chi(z)$ has exactly two different positive solutions $z=r(\alpha)$ and $z=\theta_\alpha$ if $\alpha<\bar\alpha$, or only one solution $z=r(\alpha)=\theta_\alpha$ (repeated twice) if $\alpha=\bar\alpha$; also noting that $z=\theta_\alpha$ is the unique positive solution of the equation $\det(I-G^*(z))=0$, we obtain that if $\alpha<\bar\alpha$, then the matrix $I-G^*(r(\alpha))$ is invertible, while if $\alpha=\bar\alpha$, then the matrix $I-G^*(r(\alpha))$ is singular. Therefore, the matrix $I-G^*(r(\alpha))$ is invertible if and only if $\alpha<\bar\alpha$.
We then analyze the conditions in (1) and (2). We show that (1) $G_0^*(r(\alpha))<\infty$ if and only if $\sum_{k=1}^{\infty}kr(\alpha)^{k-1}D_k<\infty$, and (2) $G^*(r(\alpha))<\infty$ if and only if $\sum_{k=1}^{\infty}kr(\alpha)^{k-1}A_k<\infty$. We only provide details for (1), since (2) can be discussed similarly.
It follows from Eq. (9.44) that
$$G_0^*(r(\alpha))=\sum_{k=1}^{\infty}r(\alpha)^kG_{k,0}(\alpha)=\sum_{k=1}^{\infty}r(\alpha)^k\sum_{i=1}^{\infty}N(\alpha)R(\alpha)^{i-1}\alpha D_{k+i}.$$
Hence,
$$l(\alpha)N(\alpha)^{-1}G_0^*(r(\alpha))=l(\alpha)\sum_{k=1}^{\infty}\sum_{i=1}^{\infty}r(\alpha)^{k+i-1}\alpha D_{k+i}=l(\alpha)\sum_{k=1}^{\infty}kr(\alpha)^{k-1}\alpha D_k,$$
which means that $G_0^*(r(\alpha))<\infty$ if and only if $\sum_{k=1}^{\infty}kr(\alpha)^{k-1}D_k<\infty$. This completes the proof.

The following theorem and corollary provide simple conditions for classifying $\alpha$-positive recurrence and $\alpha$-null recurrence for the Markov chain of GI/M/1 type. The proof is not difficult according to Lemma 9.16 and is omitted here.

Theorem 9.17 Suppose the Markov chain $P$ of GI/M/1 type is $\alpha$-recurrent and the matrix $A$ is stochastic or strictly substochastic. $P$ is $\alpha$-positive if and only if
(1) $\alpha<\bar\alpha$, when $r(\alpha)<1$;
(2) $\alpha<\bar\alpha$, $\sum_{k=1}^{\infty}kr(\alpha)^{k-1}A_k<\infty$ and $\sum_{k=1}^{\infty}kr(\alpha)^{k-1}D_k<\infty$, when $r(\alpha)>1$; or
(3) $\alpha<\bar\alpha$, $\sum_{k=1}^{\infty}kA_k<\infty$ and $\sum_{k=1}^{\infty}kD_k<\infty$, when $r(\alpha)=1$.
When the matrix $A$ is stochastic, the conditions in the above theorem for the cases $r(\alpha)>1$ and $r(\alpha)=1$ can be further simplified, as in the following corollary.

Corollary 9.6 Suppose the Markov chain $P$ of GI/M/1 type is $\alpha$-recurrent and the matrix $A$ is stochastic.
(1) If $r(\alpha)>1$, $P$ is $\alpha$-positive if and only if $\alpha<\bar\alpha$ and $\sum_{k=1}^{\infty}kr(\alpha)^{k-1}A_k<\infty$.
(2) If $r(\alpha)=1$, $P$ is $\alpha$-positive if $\alpha<\bar\alpha$ and $\sum_{k=1}^{\infty}k^2A_k<\infty$.

Proof If the matrix $A$ is stochastic, then $D_ke\leqslant\sum_{i=k}^{\infty}A_ie$ for all $k\geqslant 2$. Therefore,
$$\sum_{k=2}^{\infty}kr(\alpha)^{k-1}D_ke\leqslant\sum_{k=2}^{\infty}\sum_{i=k}^{\infty}kr(\alpha)^{k-1}A_ie=\sum_{i=2}^{\infty}\Bigl(\sum_{k=2}^{i}kr(\alpha)^{k-1}\Bigr)A_ie$$
$$=\begin{cases}\displaystyle\sum_{i=2}^{\infty}\frac{i^2+i-2}{2}A_ie, & \text{if } r(\alpha)=1,\\[2mm] \displaystyle\sum_{i=2}^{\infty}\frac{2r(\alpha)-r(\alpha)^2-(i+1)r(\alpha)^i+ir(\alpha)^{i+1}}{(1-r(\alpha))^2}A_ie, & \text{if } r(\alpha)\neq 1.\end{cases}$$
The rest of this proof is clear. If the matrix $A$ is stochastic, then Corollary 9.6 and (1) of Theorem 9.17 imply that the state $\alpha$-classification of $P$ is independent of the boundary matrices $D_k$ for $k\geqslant 0$. This is similar to Theorem 1.3.2 in Neuts [21].
9.5 Markov Chains of GI/G/1 Type

In this section, we consider the quasi-stationary distribution of an irreducible Markov chain of GI/G/1 type, and provide conditions for the state $\alpha$-classification together with expressions for the quasi-stationary distribution. From Eq. (3.1), the $\beta$-discounted transition matrix of GI/G/1 type is written as
$$\beta P=\begin{pmatrix}\beta D_0 & \beta D_1 & \beta D_2 & \beta D_3 & \cdots\\ \beta D_{-1} & \beta A_0 & \beta A_1 & \beta A_2 & \cdots\\ \beta D_{-2} & \beta A_{-1} & \beta A_0 & \beta A_1 & \cdots\\ \beta D_{-3} & \beta A_{-2} & \beta A_{-1} & \beta A_0 & \cdots\\ \vdots & \vdots & \vdots & \vdots & \ddots\end{pmatrix}. \qquad (9.90)$$
)0 ( E ) Pn[,İn n ] ( E ), )i ( E ) Pn[İin, n] ( E ), ) j ( E ) Pn[,İn n ]j ( E ), which are independent of the number n ı 1 according to the censoring invariance. We define the R-measure R0,i ( E )
P0,[İi i ] ( E )[ I ) 0 ( E )]1 ,
Ri ( E ) )i ( E )[ I ) 0 ( E )]1 ,
i ı 1;
(9.91)
the G-measure G j ,0 ( E ) [ I ) 0 ( E )]1 Pj[,0İ j ] ( E ), G j ( E ) [ I ) 0 ( E )]1) j ( E ),
j ı 1;
(9.92)
and the U-measure f
< 0 ( E ) E D0 ¦ R0, k ( E )[ I ) 0 ( E )]Gk ,0 ( E ) k 1
and f
) 0 ( E ) E A0 ¦ Rk ( E )[ I ) 0 ( E )]Gk ( E ). k 1
481
Constructive Computation in Stochastic Models with Applications
For the Markov chain of GI/G/1 type, the UL-type RG-factorization can be simplified as
$$I-\beta P=[I-R_U(\beta)][I-\Phi_D(\beta)][I-G_L(\beta)], \qquad (9.93)$$
where
$$R_U(\beta)=\begin{pmatrix}0 & R_{0,1}(\beta) & R_{0,2}(\beta) & R_{0,3}(\beta) & \cdots\\ & 0 & R_1(\beta) & R_2(\beta) & \cdots\\ & & 0 & R_1(\beta) & \cdots\\ & & & 0 & \cdots\\ & & & & \ddots\end{pmatrix},$$
$$\Phi_D(\beta)=\mathrm{diag}(\Psi_0(\beta),\Phi_0(\beta),\Phi_0(\beta),\Phi_0(\beta),\ldots)$$
and
$$G_L(\beta)=\begin{pmatrix}0 & & & &\\ G_{1,0}(\beta) & 0 & & &\\ G_{2,0}(\beta) & G_1(\beta) & 0 & &\\ G_{3,0}(\beta) & G_2(\beta) & G_1(\beta) & 0 &\\ \vdots & \vdots & \vdots & \vdots & \ddots\end{pmatrix}.$$
Let
$$A^*(z,\beta)=\sum_{i=-\infty}^{\infty}z^i\beta A_i=\beta A^*(z),\qquad R^*(z,\beta)=\sum_{i=1}^{\infty}z^iR_i(\beta),\qquad G^*(z,\beta)=\sum_{j=1}^{\infty}z^{-j}G_j(\beta).$$
Using a similar analysis to that in Theorem 3.5, the RG-factorization for the repeated row is given by
$$I-A^*(z,\beta)=[I-R^*(z,\beta)][I-\Phi_0(\beta)][I-G^*(z,\beta)]. \qquad (9.94)$$
Suppose $A=\sum_{k=-\infty}^{\infty}A_k$ is irreducible and stochastic. Let $\theta$ be the stationary probability vector of the Markov chain $A$, and
$$\rho=\frac{\theta\sum_{k=1}^{\infty}kA_ke}{\theta\sum_{k=1}^{\infty}kA_{-k}e}.$$
Now, we provide some useful properties of the positive roots $z(\beta)$ of the equation $\det(I-A^*(z,\beta))=0$. Let $z_1(\beta)$ and $z_2(\beta)$ be the positive solutions to the equations $\det(I-R^*(z,\beta))=0$ and $\det(I-G^*(z,\beta))=0$, respectively. It is clear that $z_1(\beta)\geqslant z_2(\beta)$, based on the definitions of $R^*(z,\beta)$ and $G^*(z,\beta)$. Note that
$$\{z(\beta):\det(I-A^*(z,\beta))=0\}=\{z(\beta):\det(I-R^*(z,\beta))=0\}\cup\{z(\beta):\det(I-G^*(z,\beta))=0\}; \qquad (9.95)$$
thus $z_1(\beta)$ and $z_2(\beta)$ must be two positive solutions to the equation $\det(I-A^*(z,\beta))=0$. Let $r(z,\beta)$ and $g(z,\beta)$ be the maximal eigenvalues of the matrices $R^*(z,\beta)$ and $G^*(z,\beta)$, respectively. It follows from Eq. (9.95) that
$$\{z(\beta):\beta\chi(z)=1\}=\{z(\beta):r(z,\beta)=1\}\cup\{z(\beta):g(z,\beta)=1\}.$$
Thus, it is easy to see that $z_1(\beta)$ and $z_2(\beta)$ are the positive solutions to the equations $r(z,\beta)=1$ and $g(z,\beta)=1$, respectively.

Lemma 9.17 For $0<\beta\leqslant\alpha$, $z_1(\beta)$ is strictly decreasing, while $z_2(\beta)$ is strictly increasing. Specifically, $z_1(\alpha)=z_2(\alpha)$.

Proof Note that $R^*(z,\beta)=\sum_{i=1}^{\infty}z^iR_i(\beta)$ and
$$R_i(\beta)=\Phi_i(\beta)\sum_{n=0}^{\infty}[\Phi_0(\beta)]^n;$$
it is easy to see that $R_i(\beta)$ is monotonically increasing in $\beta\geqslant 0$ for each $i\geqslant 1$, and so is $R^*(z,\beta)$. Hence $r(z,\beta)$ is monotonically increasing in $\beta\geqslant 0$. It is then easy to check that the equation $r(z,\beta)=1$ implies that $z_1(\beta)$ is monotonically decreasing in $\beta\geqslant 0$. Similarly, $z_2(\beta)$ is monotonically increasing in $\beta\geqslant 0$. Based on this, we obtain that for $0\leqslant\beta\leqslant\alpha$,
$$z_1(\beta)\geqslant z_1(\alpha)\geqslant z_2(\alpha)\geqslant z_2(\beta).$$
Since there is a unique positive solution $z(\alpha)$ to $\det(I-A^*(z,\alpha))=0$, we have $z_1(\alpha)=z_2(\alpha)=z(\alpha)$. This completes the proof.

Theorem 9.18 (1) If $\rho<1$, then $z_1(\beta)\geqslant z_2(\beta)>1$ for $1<\beta\leqslant\alpha$, while $z_2(\beta)\leqslant 1<z_1(\beta)$ for $0<\beta\leqslant 1$.
(2) If $\rho>1$, then $z_2(\beta)\leqslant z_1(\beta)<1$ for $1<\beta\leqslant\alpha$, while $z_2(\beta)\leqslant z_2(\alpha)<1\leqslant z_1(\beta)$ for $0<\beta\leqslant 1$.
(3) If $\rho=1$, then $\alpha=1$, $z_1(\alpha)=z_2(\alpha)=1$, and $z_2(\beta)\leqslant 1\leqslant z_1(\beta)$ for $0<\beta\leqslant\alpha$.

Proof We only prove (1); (2) and (3) can be proved similarly. We construct a Markov chain of GI/G/1 type whose transition matrix is given by
$$P=\begin{pmatrix}D_0 & \sum_{i=1}^{\infty}D_i & 0 & 0 & \cdots\\ D_{-1} & A_0 & A_1 & A_2 & \cdots\\ D_{-2} & A_{-1} & A_0 & A_1 & \cdots\\ D_{-3} & A_{-2} & A_{-1} & A_0 & \cdots\\ \vdots & \vdots & \vdots & \vdots & \end{pmatrix}.$$
If $\rho<1$, then it is easy to see from the special boundary that $P$ is positive recurrent; hence $\mathrm{sp}(R)<1$ and $\mathrm{sp}(G)=1$ by Theorem 3.17. Applying Lemma 9.17, we obtain that for $1<\beta\leqslant\alpha$,
$$z_1(\beta)\geqslant z_1(\alpha)=z_2(\alpha)\geqslant z_2(\beta)>z_2(1)=\mathrm{sp}(G)=1,$$
and for $0<\beta\leqslant 1$,
$$z_1(\beta)\geqslant z_1(\alpha)>z_2(1)\geqslant\mathrm{sp}(G)=1\qquad\text{and}\qquad z_2(\beta)\leqslant z_2(1)=\mathrm{sp}(G)=1.$$
This completes the proof.

For an intuitive understanding of Lemma 9.17 and Theorem 9.18, Fig. 9.3 to Fig. 9.5 describe the structure of $z_1(\beta)$ and $z_2(\beta)$ for $0<\beta\leqslant\alpha$ in each of the three cases: $0<\rho<1$, $\rho=1$ and $\rho>1$, respectively.
Figure 9.3 The case with $0<\rho<1$

Figure 9.4 The case with $\rho=1$
Applying Lemma 9.17 and Theorem 9.18, the following three corollaries are easily derived. Note that the last one is the main drift result for Markov chains of GI/G/1 type.

Corollary 9.7 (1) $\rho<1$ if and only if $z_1(\alpha)=z_2(\alpha)>1$. (2) $\rho>1$ if and only if $z_1(\alpha)=z_2(\alpha)<1$. (3) $\rho=1$ if and only if $z_1(\alpha)=z_2(\alpha)=1$.
Figure 9.5 The case with $\rho>1$
Corollary 9.8 The radius of convergence $\alpha$ of the Markov chain of GI/G/1 type is the minimal positive solution to the equation $z_1(x)=z_2(x)$ for $x>0$.

Corollary 9.9 For $\beta=1$, we have (1) $\rho<1$ if and only if $z_1(1)>1$ and $z_2(1)=1$. (2) $\rho>1$ if and only if $z_1(1)=1$ and $z_2(1)<1$. (3) $\rho=1$ if and only if $z_1(1)=z_2(1)=1$.

We write
$$\Psi_0(\beta)=\beta D_0+\beta^2(D_1,D_2,D_3,\ldots)(I-\beta W)^{-1}_{\min}(D_{-1}^{\mathrm T},D_{-2}^{\mathrm T},D_{-3}^{\mathrm T},\ldots)^{\mathrm T}.$$
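Corollary 9.9 can be checked in the scalar random-walk case $A_1=p$, $A_0=1-p-q$, $A_{-1}=q$ (all other $A_k=0$), where $\rho=p/q$ and, at $\beta=1$, the equation $\chi(z)=1$ reduces to $pz^2-(p+q)z+q=0$ with roots $1$ and $q/p$. The probabilities below are hypothetical:

```python
import math

def roots_at_beta_one(p, q):
    """Positive roots of p*z**2 - (p + q)*z + q = 0, returned as (z2, z1), z2 <= z1."""
    d = math.sqrt((p + q)**2 - 4 * p * q)     # = |p - q|
    r_minus = ((p + q) - d) / (2 * p)
    r_plus  = ((p + q) + d) / (2 * p)
    return min(r_minus, r_plus), max(r_minus, r_plus)

z2, z1 = roots_at_beta_one(p=0.2, q=0.5)      # rho = p/q < 1
print(z1, z2)                                 # z1 = q/p = 2.5 > 1 and z2 = 1

z2, z1 = roots_at_beta_one(p=0.5, q=0.2)      # rho = p/q > 1
print(z1, z2)                                 # z1 = 1 and z2 = q/p = 0.4 < 1
```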
Let $\xi(\beta)$ be the maximal eigenvalue of the censored $\beta$-discounted transition matrix $\Psi_0(\beta)$ to level 0. Then the solution of the eigenvalue equation $\xi(\beta)=1$ determines the state $\alpha$-classification of the Markov chain of GI/G/1 type. The following theorem provides conditions for the state $\alpha$-classification.

Theorem 9.19 (1) If there exists a minimal positive solution $\zeta$ to the equation $\xi(\beta)=1$ for $\beta>0$, then $\alpha=\zeta$ and the Markov chain $P$ is $\alpha$-recurrent.
(2) If there does not exist any positive solution to the equation $\xi(\beta)=1$ for $\beta>0$, then the Markov chain $P$ is $\alpha$-transient.

Proof We first prove (1). Let
$$\begin{pmatrix}N_{0,0}(\beta) & N_{0,1}(\beta) & N_{0,2}(\beta) & \cdots\\ N_{1,0}(\beta) & N_{1,1}(\beta) & N_{1,2}(\beta) & \cdots\\ N_{2,0}(\beta) & N_{2,1}(\beta) & N_{2,2}(\beta) & \cdots\\ \vdots & \vdots & \vdots & \end{pmatrix}=\sum_{k=0}^{\infty}(\beta P)^k.$$
Then $N_{0,0}(\beta)=[I-\Psi_0(\beta)]^{-1}$. Let $\nu(\beta)$ be the maximal eigenvalue of the matrix $N_{0,0}(\beta)$. Then
$$\nu(\beta)=\frac{1}{1-\xi(\beta)},\qquad\beta>0.$$
If $\xi(\zeta)=1$, then
$$\nu(\zeta)=\frac{1}{1-\xi(\zeta)}=\infty, \qquad (9.96)$$
while
$$\nu(\beta)=\frac{1}{1-\xi(\beta)}<\infty,\qquad 0<\beta<\zeta. \qquad (9.97)$$
Note that
$$\min_{1\leqslant i\leqslant m_0}\Bigl\{\sum_{j=1}^{m_0}[N_{0,0}(\beta)]_{i,j}\Bigr\}\leqslant\nu(\beta)\leqslant\max_{1\leqslant i\leqslant m_0}\Bigl\{\sum_{j=1}^{m_0}[N_{0,0}(\beta)]_{i,j}\Bigr\}. \qquad (9.98)$$
By using Eq. (9.96) and Eq. (9.97), the right inequality of Eq. (9.98) indicates that there exists at least one pair $(i_0,j_0)$ such that $[N_{0,0}(\zeta)]_{i_0,j_0}=\infty$, while the left inequality of Eq. (9.98) indicates that for any pair $(i,j)$,
$$[N_{0,0}(\beta)]_{i,j}<\infty,\qquad 0<\beta<\zeta.$$
Based on the definition of the state $\alpha$-classification, it is seen that $\alpha=\zeta$ and the Markov chain $P$ is $\alpha$-recurrent.
Now, we prove (2). If there does not exist any positive solution to the equation $\xi(\beta)=1$ for $0<\beta\leqslant\alpha$, then
$$[N_{0,0}(\beta)]_{i,j}<\infty,\qquad 0<\beta\leqslant\alpha,$$
and
$$[N_{0,0}(\alpha+\varepsilon)]_{i,j}=\infty\qquad\text{for any }\varepsilon>0.$$
Based on the definition of the state $\alpha$-classification, it is seen that the Markov chain $P$ is $\alpha$-transient. This completes the proof.

Let
$$W=\begin{pmatrix}A_0 & A_1 & A_2 & A_3 & \cdots\\ A_{-1} & A_0 & A_1 & A_2 & \cdots\\ A_{-2} & A_{-1} & A_0 & A_1 & \cdots\\ A_{-3} & A_{-2} & A_{-1} & A_0 & \cdots\\ \vdots & \vdots & \vdots & \vdots & \end{pmatrix}.$$
The following corollary provides the state $\alpha$-classification for the Markov chain $W$. Note that it is a generalization of the state $\alpha$-classification for Markov chains of M/G/1 type or GI/M/1 type given by Kijima [14].

Corollary 9.10 The Markov chain $W$ is always $\alpha$-transient.

Proof Let
$$\Psi_0(\beta)=\beta A_0+\beta^2(A_1,A_2,A_3,\ldots)(I-\beta W)^{-1}_{\min}(A_{-1}^{\mathrm T},A_{-2}^{\mathrm T},A_{-3}^{\mathrm T},\ldots)^{\mathrm T}.$$
Then it is clear that $\xi(\beta)<1$ for all $\beta>0$. Hence the Markov chain $W$ is always $\alpha$-transient.

In what follows, we provide expressions for the quasi-stationary distribution. Applying the RG-factorization Eq. (9.93), the following theorem expresses the quasi-stationary distribution of an $\alpha$-recurrent Markov chain $P$ of GI/G/1 type. The proof is clear, and thus is omitted here.

Theorem 9.20 If the Markov chain $P$ of GI/G/1 type is $\alpha$-recurrent, then the quasi-stationary distribution $\pi(\alpha)=(\pi_0(\alpha),\pi_1(\alpha),\pi_2(\alpha),\ldots)$ is given by
$$\pi_0(\alpha)=\tau x_0(\alpha),$$
$$\pi_k(\alpha)=\sum_{i=1}^{k-1}\pi_i(\alpha)R_{k-i}(\alpha)+\pi_0(\alpha)R_{0,k}(\alpha),\qquad k\geqslant 1,$$
where $x_0(\alpha)$ is the left Perron-Frobenius vector of the matrix $\Psi_0(\alpha)$ and the scalar $\tau$ is uniquely determined by $\sum_{k=0}^{\infty}\pi_k(\alpha)e=1$.
Now, we consider $1 \leq \beta < \alpha$, or $\beta = \alpha$ under which $P$ is $\alpha$-transient. In this case, it is a key to solve the equation
\[
(y_0(\beta), y_1(\beta), y_2(\beta), \ldots)
\begin{pmatrix}
I & & & & \\
-G_{1,0}(\beta) & I & & & \\
-G_{2,0}(\beta) & -G_1(\beta) & I & & \\
-G_{3,0}(\beta) & -G_2(\beta) & -G_1(\beta) & I & \\
\vdots & \vdots & \vdots & & \ddots
\end{pmatrix}
= (0, 0, 0, \ldots),
\]
which leads to
\[
\begin{cases}
y_0(\beta) - \displaystyle\sum_{k=1}^{\infty} y_k(\beta)\, G_{k,0}(\beta) = 0, & \\
y_k(\beta) - \displaystyle\sum_{i=k+1}^{\infty} y_i(\beta)\, G_{i-k}(\beta) = 0, & k \geq 1.
\end{cases}
\tag{9.99}
\]
The following lemma provides a spectral property for the matrices $G^*(z, \beta)$ and $G_0^*(z, \beta) = \sum_{j=1}^{\infty} z^j G_{j,0}(\beta)$. Thus, the quasi-stationary distribution can be directly expressed by means of the eigenvalues and eigenvectors of the matrices $G^*(z, \beta)$ and $G_0^*(z, \beta)$.

Lemma 9.18 There exist a constant $\gamma > 0$ and two nonnegative non-zero row vectors $y$ and $y_0$ such that
\[
\begin{cases}
y = y \displaystyle\sum_{k=1}^{\infty} \gamma^k G_k(\beta), \\
y_0 = y \displaystyle\sum_{k=1}^{\infty} \gamma^k G_{k,0}(\beta).
\end{cases}
\tag{9.100}
\]
Proof For the equation $\det(I - G^*(z, \beta)) = 0$, there exists a positive root $\gamma = z_2(\beta)$; that is, $\det(I - G^*(\gamma, \beta)) = 0$. In this case, there must exist a nonnegative non-zero row vector $y$ such that $y = y\, G^*(\gamma, \beta)$, i.e., $y = y \sum_{k=1}^{\infty} \gamma^k G_k(\beta)$. Once $\gamma$ and $y$ are given, we write
\[
y_0 = y \sum_{k=1}^{\infty} \gamma^k G_{k,0}(\beta),
\]
which is the second equality of Eq. (9.100). This completes the proof.
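When the phase blocks are scalars, Lemma 9.18 reduces to a one-dimensional root search: $\sum_k \gamma^k g_k = 1$ with nonnegative coefficients $g_k$ has a unique positive root because the left side is increasing in $\gamma$. The sketch below is illustrative only — the coefficients are hypothetical, and the matrix case would replace $h(z)$ by $\det(I - G^*(z, \beta))$:

```python
def find_gamma(g, z_hi=50.0, tol=1e-12):
    """Bisection for the positive root gamma of sum_k z^k g[k-1] = 1,
    the scalar analogue of det(I - G*(z, beta)) = 0 in Lemma 9.18.
    h(z) is increasing for z > 0, so the positive root is unique."""
    h = lambda z: sum(z ** (k + 1) * gk for k, gk in enumerate(g)) - 1.0
    lo, hi = 0.0, z_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With $g_1 = 0.3$, $g_2 = 0.2$, the root solves $0.2\gamma^2 + 0.3\gamma - 1 = 0$, i.e. $\gamma \approx 1.6085$.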
9 Quasi-Stationary Distributions
The following lemma provides a nonnegative non-zero solution to Eq. (9.99). Such a solution is a key for expressing the quasi-stationary distribution.

Lemma 9.19 The row vector $Y = (y_0, y\gamma^1, y\gamma^2, y\gamma^3, \ldots)$ is a nonnegative non-zero solution to Eq. (9.99).

Proof Let $y_0(\beta) = y_0$ and $y_k(\beta) = y\gamma^k$ for $k \geq 1$. It is easy to see that
\[
y_0(\beta) - \sum_{k=1}^{\infty} y_k(\beta)\, G_{k,0}(\beta)
= y_0 - \sum_{k=1}^{\infty} y\gamma^k G_{k,0}(\beta) = 0
\]
in terms of the second equality of Eq. (9.100). At the same time, for $k \geq 1$ we have
\[
y_k(\beta) - \sum_{i=k+1}^{\infty} y_i(\beta)\, G_{i-k}(\beta)
= y\gamma^k - \sum_{i=k+1}^{\infty} y\gamma^i G_{i-k}(\beta)
= \gamma^k \Bigl[ y - y \sum_{j=1}^{\infty} \gamma^j G_j(\beta) \Bigr] = 0
\]
in terms of the first equality of Eq. (9.100). This completes the proof.

Theorem 9.21 If $1 \leq \beta < \alpha$, or $\beta = \alpha$ under which $P$ is $\alpha$-transient, then the quasi-stationary distribution $\pi(\beta) = (\pi_0(\beta), \pi_1(\beta), \pi_2(\beta), \ldots)$ is given by
\[
\begin{cases}
\pi_0(\beta) = \kappa\, y_0 N_0(\beta), \\
\pi_k(\beta) = \kappa \gamma^k y N(\beta) + \displaystyle\sum_{i=1}^{k-1} \pi_i(\beta) R_{k-i}(\beta) + \pi_0(\beta) R_{0,k}(\beta), & k \geq 1,
\end{cases}
\]
where $N_0(\beta) = [I - \Psi_0(\beta)]^{-1}$, $N(\beta) = [I - \Phi_0(\beta)]^{-1}$ and the positive constant $\kappa$ satisfies
\[
\sum_{k=0}^{\infty} \pi_k(\beta)\, e = 1.
\]
Proof Since $\pi(\beta)(I - \beta P) = 0$, it is clear from the UL-type RG-factorization that
\[
\pi(\beta)[I - R_U(\beta)][I - \Phi_D(\beta)][I - G_L(\beta)] = 0.
\]
Let
\[
Y = \pi(\beta)[I - R_U(\beta)][I - \Phi_D(\beta)]. \tag{9.101}
\]
Then $Y[I - G_L(\beta)] = 0$, which leads to $Y = \kappa (y_0, y\gamma^1, y\gamma^2, y\gamma^3, \ldots)$ in terms of Lemma 9.19. It follows from Eq. (9.101) that
\[
\pi(\beta)[I - R_U(\beta)][I - \Phi_D(\beta)] = \kappa (y_0, y\gamma^1, y\gamma^2, y\gamma^3, \ldots).
\]
Thus, we obtain
\[
\pi(\beta)[I - R_U(\beta)]
= \kappa (y_0, y\gamma^1, y\gamma^2, y\gamma^3, \ldots)[I - \Phi_D(\beta)]^{-1}
= \kappa \bigl( y_0 N_0(\beta),\; \gamma^1 y N(\beta),\; \gamma^2 y N(\beta),\; \gamma^3 y N(\beta), \ldots \bigr).
\]
Some simple computations lead to the desired result.
9.6 Level-Dependent QBD Processes
In this section, we consider the quasi-stationary distribution of an irreducible level-dependent QBD process, and provide conditions for the state $\alpha$-classification and expressions for the quasi-stationary distribution.

Consider an irreducible discrete-time QBD process whose transition probability matrix is given by
\[
P = \begin{pmatrix}
A_1^{(0)} & A_0^{(0)} & & & \\
A_2^{(1)} & A_1^{(1)} & A_0^{(1)} & & \\
& A_2^{(2)} & A_1^{(2)} & A_0^{(2)} & \\
& & A_2^{(3)} & A_1^{(3)} & A_0^{(3)} \\
& & & \ddots & \ddots & \ddots
\end{pmatrix}.
\]
Let $\alpha$ be the radius of convergence of the matrix $P$. It is clear that $\alpha \geq 1$. We write
\[
P = \begin{pmatrix} T^{(k)} & V^{(k)} \\ U^{(k)} & Q^{(k)} \end{pmatrix}, \quad k \geq 0,
\]
where
\[
Q^{(k)} = \begin{pmatrix}
A_1^{(k)} & A_0^{(k)} & & \\
A_2^{(k+1)} & A_1^{(k+1)} & A_0^{(k+1)} & \\
& A_2^{(k+2)} & A_1^{(k+2)} & A_0^{(k+2)} \\
& & \ddots & \ddots & \ddots
\end{pmatrix},
\]
and the other matrices can be determined accordingly. Obviously, $Q^{(0)} = P$. We further assume that the Markov chain $Q^{(k)}$ is irreducible for each $k \geq 0$. Let $\alpha_k$ be the radius of convergence of the matrix $Q^{(k)}$ for $k \geq 0$. Since
\[
\alpha = \sup\Bigl\{ z \geq 1 : \sum_{n=0}^{\infty} z^n p^{(n)}_{(i,r),(j,s)} < \infty,\; i, j \geq 0 \Bigr\}
\]
and
\[
\alpha_k = \sup\Bigl\{ z \geq 1 : \sum_{n=0}^{\infty} z^n p^{(n)}_{(i,r),(j,s)} < \infty,\; i, j \geq k \geq 1 \Bigr\},
\]
we obtain $\alpha_k \geq \alpha$ for $k \geq 1$.
9.6.1 The UL-Type RG-Factorization

Now, we provide the UL-type R-, U- and G-measures for the matrix $\beta P$ for $0 < \beta \leq \alpha$.

(1) The R-measure. The R-measure $R_k(\beta)$ for $k \geq 0$ is the minimal nonnegative solution to the system of equations
\[
\beta A_0^{(k)} + R_k(\beta)\, \beta A_1^{(k+1)} + R_k(\beta) R_{k+1}(\beta)\, \beta A_2^{(k+2)} = R_k(\beta), \quad k \geq 0,
\]
or
\[
A_0^{(k)} + R_k(\beta) A_1^{(k+1)} + R_k(\beta) R_{k+1}(\beta) A_2^{(k+2)} = \frac{1}{\beta} R_k(\beta), \quad k \geq 0.
\]
Specifically, since $\gamma = 1/\alpha$ is the decay parameter of the Markov chain $P$, for $\beta = \alpha$ we have
\[
A_0^{(k)} + R_k(\alpha) A_1^{(k+1)} + R_k(\alpha) R_{k+1}(\alpha) A_2^{(k+2)} = \gamma R_k(\alpha), \quad k \geq 0.
\]
(2) The G-measure. The G-measure $G_l(\beta)$ for $l \geq 1$ is the minimal nonnegative solution to the system of equations
\[
\beta A_0^{(l)} G_{l+1}(\beta) G_l(\beta) + \beta A_1^{(l)} G_l(\beta) + \beta A_2^{(l)} = G_l(\beta), \quad l \geq 1,
\]
or
\[
A_0^{(l)} G_{l+1}(\beta) G_l(\beta) + A_1^{(l)} G_l(\beta) + A_2^{(l)} = \frac{1}{\beta} G_l(\beta), \quad l \geq 1.
\]
Specifically, for the decay parameter $\gamma$ we have
\[
A_0^{(l)} G_{l+1}(\alpha) G_l(\alpha) + A_1^{(l)} G_l(\alpha) + A_2^{(l)} = \gamma G_l(\alpha), \quad l \geq 1.
\]
(3) The U-measure. The U-measure $\Psi_k(\beta)$ for $k \geq 0$ is given by
\[
\Psi_k(\beta) = \beta A_1^{(k)} + R_k(\beta)\, \beta A_2^{(k+1)}
\]
or
\[
\Psi_k(\beta) = \beta A_1^{(k)} + \beta A_0^{(k)} G_{k+1}(\beta).
\]
The following theorem provides useful properties for the U-measure $\Psi_k(\beta)$ for $k \geq 0$.

Theorem 9.22 (1) For $k \geq 1$, the matrices $I - \Psi_k(\beta)$ are all invertible for $0 < \beta \leq \alpha$.

(2) The matrix $I - \Psi_0(\beta)$ is invertible for $0 < \beta < \alpha$. The matrix $I - \Psi_0(\alpha)$ is invertible if the Markov chain $P$ is $\alpha$-transient, while $I - \Psi_0(\alpha)$ is singular if the Markov chain $P$ is $\alpha$-recurrent.

Proof (1) According to the censoring technique, it is easy to see that for $k \geq 1$ the Markov chains $\Psi_k(\beta)$ are all $\alpha$-transient. Thus, the matrices $I - \Psi_k(\beta)$ for $k \geq 1$ are all invertible for $0 < \beta \leq \alpha$.

(2) Based on the censoring technique, it is clear that if the Markov chain $P$ is $\alpha$-transient, then the Markov chain $\Psi_0(\alpha)$ is $\alpha$-transient; hence the matrix $I - \Psi_0(\alpha)$ is invertible. If the Markov chain $P$ is $\alpha$-recurrent, then the Markov chain $\Psi_0(\alpha)$ is $\alpha$-recurrent; hence the matrix $I - \Psi_0(\alpha)$ is singular. This completes the proof.

Since the Markov chain $P$ shares its key properties with the censored Markov chain $\Psi_0(\beta)$ to level 0, it is necessary to provide a detailed analysis for the matrix $\Psi_0(\beta)$ for $0 < \beta \leq \alpha$. Let
\[
N_0(\beta) = \sum_{n=0}^{\infty} [\Psi_0(\beta)]^n.
\]
Then $N_0(\beta) = [I - \Psi_0(\beta)]^{-1}$. Based on (2) in Theorem 9.22, it is easy to see that
\[
\alpha = \sup\{\beta \geq 1 : N_0(\beta) < \infty\}
\]
or
\[
\alpha = \sup\{\beta \geq 1 : \det(I - \Psi_0(\beta)) \neq 0\}.
\]
The following theorem provides conditions for the state $\alpha$-classification of the QBD process $P$. The proof is clear, and thus is omitted here.

Theorem 9.23 (1) If the matrix $N_0(\alpha)$ is finite, then $P$ is $\alpha$-transient. (2) If the matrix $N_0(\alpha)$ is infinite, then $P$ is $\alpha$-recurrent.

Based on Theorem 9.23, we provide other conditions for the state $\alpha$-classification of the QBD process $P$ as follows:

(1) If $\det(I - \Psi_0(\alpha)) \neq 0$, then $P$ is $\alpha$-transient.
(2) If $\det(I - \Psi_0(\alpha)) = 0$, then $P$ is $\alpha$-recurrent.

(3) Let $u_0(\beta)$ be the maximal eigenvalue of the matrix $\Psi_0(\beta)$. If there does not exist a positive solution to the equation $u_0(\beta) = 1$ for $\beta \geq 1$, then $P$ is $\alpha$-transient.

(4) If there exists a positive solution $\eta$ to the equation $u_0(\beta) = 1$ for $\beta \geq 1$, then $P$ is $\alpha$-recurrent.

Based on the R-, U- and G-measures, the UL-type RG-factorization is given by
\[
I - \beta P = [I - R_U(\beta)][I - \Psi_D(\beta)][I - G_L(\beta)],
\]
where
\[
R_U(\beta) = \begin{pmatrix}
0 & R_0(\beta) & & \\
& 0 & R_1(\beta) & \\
& & 0 & R_2(\beta) \\
& & & \ddots & \ddots
\end{pmatrix},
\quad
\Psi_D(\beta) = \operatorname{diag}\bigl(\Psi_0(\beta), \Psi_1(\beta), \Psi_2(\beta), \ldots\bigr)
\]
and
\[
G_L(\beta) = \begin{pmatrix}
0 & & & \\
G_1(\beta) & 0 & & \\
& G_2(\beta) & 0 & \\
& & G_3(\beta) & 0 \\
& & & \ddots & \ddots
\end{pmatrix}.
\]
Now, we provide expressions for the quasi-stationary distribution of the level-dependent QBD process. Our analysis is classified into two sets of expressions: $\alpha$-recurrent with $\beta = \alpha$; and $\alpha$-recurrent with $\beta < \alpha$, or $\alpha$-transient with $\beta \leq \alpha$.

The first set: $\alpha$-recurrent with $\beta = \alpha$. Noting that $\pi(\alpha)[I - \alpha P] = 0$, we have
\[
\pi(\alpha)[I - R_U(\alpha)][I - \Psi_D(\alpha)][I - G_L(\alpha)] = 0.
\]
Let $x(\alpha) = \pi(\alpha)[I - R_U(\alpha)]$. Then $x(\alpha)[I - \Psi_D(\alpha)][I - G_L(\alpha)] = 0$. Since the QBD process $P$ is $\alpha$-recurrent, we can obtain
\[
x(\alpha) = (\tau x_0(\alpha), 0, 0, \ldots),
\]
where $x_0(\alpha)$ is the left Perron-Frobenius vector of the matrix $\Psi_0(\alpha)$. Therefore, $\pi(\alpha)[I - R_U(\alpha)] = (\tau x_0(\alpha), 0, 0, \ldots)$, which leads to the quasi-stationary distribution of the QBD process $P$ as follows:
\[
\pi_0(\alpha) = \tau x_0(\alpha), \tag{9.102}
\]
\[
\pi_k(\alpha) = \pi_0(\alpha)\, R_0(\alpha) R_1(\alpha) \cdots R_{k-1}(\alpha), \quad k \geq 1, \tag{9.103}
\]
where the constant $\tau$ makes $\sum_{k=0}^{\infty} \pi_k(\alpha)\, e = 1$.
The second set: $\alpha$-recurrent with $\beta < \alpha$, or $\alpha$-transient with $\beta \leq \alpha$. For $\beta \leq \alpha$ under which $P$ is $\alpha$-transient, or $\beta < \alpha$ under which $P$ is $\alpha$-recurrent, it is clear that the matrix $I - \Psi_0(\beta)$ is invertible. In this case, we need to use the following lemma to compute the quasi-stationary distribution.

Lemma 9.20 For the G-measure $G_l(\beta)$, $l \geq 1$, there exist a sequence of stochastic vectors $\{z_k(\beta), k \geq 0\}$ and a sequence of positive numbers $\{\rho_k, k \geq 0\}$ such that
\[
z_{k+1}(\beta)\, G_{k+1}(\beta) = \rho_k z_k(\beta), \quad k \geq 0.
\]
Proof Let $z_{N,N}(\beta)$ be an arbitrary probability vector on level $N \geq 1$. Since the QBD process $P$ is irreducible, each state on level $k$ has a path to level $k-1$, which shows that each row of the matrix $G_k(\beta)$ is non-zero and nonnegative. Hence $z_{N,N}(\beta) G_N(\beta)$ is a convex combination of the rows of $G_N(\beta)$. We take
\[
\rho_{N,N-1} = z_{N,N}(\beta)\, G_N(\beta)\, e.
\]
Then $\rho_{N,N-1} > 0$ and
\[
z_{N,N-1}(\beta) = \frac{1}{\rho_{N,N-1}}\, z_{N,N}(\beta)\, G_N(\beta)
\]
is a probability vector on level $N-1$. Proceeding inductively, we can obtain a sequence of probability vectors $\{z_{N,k}(\beta), 0 \leq k \leq N-1\}$ and a sequence of positive scalars $\{\rho_{N,k}, 0 \leq k \leq N-1\}$ such that for $0 \leq k \leq N-1$,
\[
z_{N,k+1}(\beta)\, G_{k+1}(\beta) = \rho_{N,k}\, z_{N,k}(\beta).
\]
By repeating the above procedure, for each $k \geq 0$ we can obtain a sequence of probability vectors $\{z_{N,k}(\beta), N \geq k\}$. Since $z_{N,k}(\beta) e = 1$ for $N \geq k$, there exists a subsequence $\{z_{N_r,k}(\beta), r \geq 1\}$ such that $\lim_{r \to \infty} z_{N_r,k}(\beta) = z_k(\beta)$, and $z_k(\beta)$ is a probability vector. For any fixed $k = k^*$, we can obtain a sequence of stochastic vectors $\{z_k(\beta), 0 \leq k \leq k^*\}$ and a sequence of positive numbers $\{\rho_k, 0 \leq k \leq k^*\}$ such that $z_{k+1}(\beta) G_{k+1}(\beta) = \rho_k z_k(\beta)$. Again by the compactness of the probability vector set, there exists a subsequence $\{N'_r\}$ of $\{N_r\}$ such that $\lim_{r \to \infty} z_{N'_r, k^*+1}(\beta) = z_{k^*+1}(\beta)$, and $\rho_{k^*} = z_{k^*+1}(\beta)\, G_{k^*+1}(\beta)\, e$.
Therefore, by induction the desired result follows by repeating this argument infinitely many times, which can be done by means of the Axiom of Choice. This completes the proof.

From $\pi(\beta)[I - \beta P] = 0$, we have
\[
\pi(\beta)[I - R_U(\beta)][I - \Psi_D(\beta)][I - G_L(\beta)] = 0.
\]
Let $x(\beta) = \pi(\beta)[I - R_U(\beta)][I - \Psi_D(\beta)]$. Then $x(\beta)[I - G_L(\beta)] = 0$, which is, in block entries,
\[
x_k(\beta) = x_{k+1}(\beta)\, G_{k+1}(\beta), \quad k \geq 0.
\]
Using Lemma 9.20, we write
\[
x_0(\beta) = z_0(\beta), \qquad
x_k(\beta) = \frac{1}{\rho_0 \rho_1 \cdots \rho_{k-1}}\, z_k(\beta), \quad k \geq 1.
\]
It is easy to check that
\[
x_0(\beta) = z_0(\beta) = \frac{1}{\rho_0} z_1(\beta) G_1(\beta) = x_1(\beta) G_1(\beta),
\]
\[
x_1(\beta) = \frac{1}{\rho_0} z_1(\beta) = \frac{1}{\rho_0 \rho_1} z_2(\beta) G_2(\beta) = x_2(\beta) G_2(\beta).
\]
We assume that for $n = k$ we have
\[
x_k(\beta) = \frac{1}{\rho_0 \rho_1 \cdots \rho_{k-1}} z_k(\beta)
= \frac{1}{\rho_0 \rho_1 \cdots \rho_{k-1} \rho_k} z_{k+1}(\beta) G_{k+1}(\beta)
= x_{k+1}(\beta) G_{k+1}(\beta);
\]
then for $n = k+1$ we obtain
\[
x_{k+1}(\beta) = \frac{1}{\rho_0 \rho_1 \cdots \rho_k} z_{k+1}(\beta)
= \frac{1}{\rho_0 \rho_1 \cdots \rho_k \rho_{k+1}} z_{k+2}(\beta) G_{k+2}(\beta)
= x_{k+2}(\beta) G_{k+2}(\beta).
\]
Therefore, by induction, for any nonnegative integer $n$ we can obtain
\[
x_n(\beta) = \frac{1}{\rho_0 \rho_1 \cdots \rho_{n-1}} z_n(\beta) = x_{n+1}(\beta)\, G_{n+1}(\beta).
\]
It is clear that the vector
\[
x(\beta) = \Bigl( z_0(\beta),\; \frac{1}{\rho_0} z_1(\beta),\; \frac{1}{\rho_0 \rho_1} z_2(\beta),\; \ldots \Bigr)
\]
is a non-zero nonnegative solution to the equation $x(\beta)[I - G_L(\beta)] = 0$. Thus, we have
\[
\pi(\beta)[I - R_U(\beta)][I - \Psi_D(\beta)] = x(\beta),
\]
which leads to
\[
\pi(\beta) = x(\beta)[I - \Psi_D(\beta)]^{-1}[I - R_U(\beta)]^{-1}.
\]
Let
\[
N_l(\beta) = [I - \Psi_l(\beta)]^{-1} \quad \text{and} \quad
X_k^{(l)}(\beta) = R_l(\beta) R_{l+1}(\beta) \cdots R_{l+k-1}(\beta), \quad k \geq 1,\; l \geq 0.
\]
Then
\[
[I - \Psi_D(\beta)]^{-1} = \operatorname{diag}\bigl(N_0(\beta), N_1(\beta), N_2(\beta), \ldots\bigr)
\]
and
\[
[I - R_U(\beta)]^{-1} = \begin{pmatrix}
I & X_1^{(0)}(\beta) & X_2^{(0)}(\beta) & X_3^{(0)}(\beta) & \cdots \\
& I & X_1^{(1)}(\beta) & X_2^{(1)}(\beta) & \cdots \\
& & I & X_1^{(2)}(\beta) & \cdots \\
& & & \ddots & \ddots
\end{pmatrix}.
\]
Therefore, we obtain
\[
\pi_0(\beta) = x_0(\beta) N_0(\beta) = z_0(\beta) N_0(\beta) \tag{9.104}
\]
and for $k \geq 1$,
\[
\begin{aligned}
\pi_k(\beta) &= x_k(\beta) N_k(\beta) + \sum_{l=0}^{k-1} x_l(\beta) N_l(\beta) X_{k-l}^{(l)}(\beta) \\
&= \frac{1}{\rho_0 \rho_1 \cdots \rho_{k-1}} z_k(\beta) N_k(\beta)
+ \sum_{l=0}^{k-1} \frac{1}{\rho_0 \rho_1 \cdots \rho_{l-1}} z_l(\beta) N_l(\beta)\,
R_l(\beta) R_{l+1}(\beta) \cdots R_{k-1}(\beta).
\end{aligned}
\tag{9.105}
\]
Based on Eq. (9.102) to Eq. (9.105), we summarize the expressions for the quasi-stationary distribution of the level-dependent QBD process in the following theorem.
9 Quasi-Stationary Distributions
Theorem 9.24 (1) For $\beta = \alpha$ under which $P$ is $\alpha$-recurrent, the quasi-stationary distribution of the level-dependent QBD process is given by
\[
\pi_0(\alpha) = \tau x_0(\alpha), \qquad
\pi_k(\alpha) = \pi_0(\alpha)\, R_0(\alpha) R_1(\alpha) \cdots R_{k-1}(\alpha), \quad k \geq 1,
\]
where $x_0(\alpha)$ is the left Perron-Frobenius vector of the matrix $\Psi_0(\alpha)$ and the constant $\tau$ makes $\sum_{k=0}^{\infty} \pi_k(\alpha)\, e = 1$.

(2) For $\beta \leq \alpha$ under which $P$ is $\alpha$-transient, or $\beta < \alpha$ under which $P$ is $\alpha$-recurrent, the quasi-stationary distribution of the level-dependent QBD process is given by
\[
\pi_0(\beta) = x_0(\beta) N_0(\beta) = z_0(\beta) N_0(\beta)
\]
and for $k \geq 1$,
\[
\pi_k(\beta) = \frac{1}{\rho_0 \rho_1 \cdots \rho_{k-1}} z_k(\beta) N_k(\beta)
+ \sum_{l=0}^{k-1} \frac{1}{\rho_0 \rho_1 \cdots \rho_{l-1}} z_l(\beta) N_l(\beta)\, R_l(\beta) R_{l+1}(\beta) \cdots R_{k-1}(\beta).
\]
Remark 9.5 Since for $l \geq 0$
\[
z_l(\beta) = \frac{1}{\rho_l} z_{l+1}(\beta) G_{l+1}(\beta)
= \frac{1}{\rho_l \rho_{l+1} \cdots \rho_{k-1}} z_k(\beta)\, G_k(\beta) \cdots G_{l+2}(\beta) G_{l+1}(\beta),
\]
we obtain
\[
\pi_k(\beta) = \frac{1}{\rho_0 \rho_1 \cdots \rho_{k-1}} z_k(\beta)
\Bigl[ N_k(\beta) + \sum_{l=0}^{k-1} G_k(\beta) \cdots G_{l+2}(\beta) G_{l+1}(\beta)\, N_l(\beta)\,
R_l(\beta) R_{l+1}(\beta) \cdots R_{k-1}(\beta) \Bigr].
\tag{9.106}
\]

9.6.2 Conditions for $\alpha$-Positive Recurrence
Now, we provide a condition under which the QBD process $P$ is $\alpha$-positive recurrent. Let
\[
N(\alpha) = \sum_{k=0}^{\infty} (\alpha P)^k
\]
and let $N_{i,j}(\alpha)$ be the $(i,j)$th block-entry of the matrix $N(\alpha)$. It is clear that
\[
N_{0,0}(\alpha) = \sum_{k=0}^{\infty} [\Psi_0(\alpha)]^k
\]
and
\[
N_{1,1}(\alpha) = \sum_{k=0}^{\infty} [\Psi_1(\alpha)]^k,
\]
where
\[
\Psi_0(\alpha) = \alpha A_1^{(0)} + \alpha^2 A_0^{(0)} N_{1,1}(\alpha) A_2^{(1)}.
\]
The following theorem provides a condition under which the QBD process $P$ is $\alpha$-positive recurrent.

Theorem 9.25 Suppose the QBD process $P$ is $\alpha$-recurrent.

(1) If the matrix $\dfrac{d}{d\alpha} N_{1,1}(\alpha)$ is finite, then $P$ is $\alpha$-positive recurrent.

(2) If the matrix $\dfrac{d}{d\alpha} N_{1,1}(\alpha)$ is infinite, then $P$ is $\alpha$-null recurrent.

Proof We only prove (1), while (2) can be proved similarly. Since
\[
\Psi_0(\alpha) = \alpha A_1^{(0)} + \alpha^2 A_0^{(0)} N_{1,1}(\alpha) A_2^{(1)},
\]
we obtain
\[
\frac{d}{d\alpha} \Psi_0(\alpha) = A_1^{(0)} + 2\alpha A_0^{(0)} N_{1,1}(\alpha) A_2^{(1)}
+ \alpha^2 A_0^{(0)} \Bigl[ \frac{d}{d\alpha} N_{1,1}(\alpha) \Bigr] A_2^{(1)}.
\]
Noting that $I - \Psi_1(\alpha)$ is invertible, it is clear that the matrix $N_{1,1}(\alpha) = [I - \Psi_1(\alpha)]^{-1}$ is finite. If the matrix $\frac{d}{d\alpha} N_{1,1}(\alpha)$ is finite, then the matrix $\frac{d}{d\alpha} \Psi_0(\alpha)$ is finite.

For the QBD process $\{X_n, n \geq 0\}$, we write
\[
f^{(n)}_{k,i;k,i} = P\{X_n = (k,i),\; X_l \neq (k,i) \text{ for } 1 \leq l \leq n-1 \mid X_0 = (k,i)\}
\]
and
\[
\mu_{k,i}(\alpha) = \sum_{n=0}^{\infty} n \alpha^n f^{(n)}_{k,i;k,i}.
\]
Note that the irreducible QBD process $\{X_n, n \geq 0\}$ is $\alpha$-positive recurrent if $\mu_{k,i}(\alpha) < \infty$ and $\alpha$-null recurrent otherwise.
Note that $(\Psi_0^{(n)})_{i,j}$, the $(i,j)$th entry of $\Psi_0^{(n)}$, is the probability that the QBD process arrives at state $(0,j)$ at time $n$ for the first time, given its initial state $(0,i)$ at time 0. Let
\[
F_{0,i;0,i}(\alpha) = \sum_{n=0}^{\infty} \alpha^n f^{(n)}_{0,i;0,i}.
\]
Then
\[
\mu_{0,i}(\alpha) = \alpha \frac{d}{d\alpha} F_{0,i;0,i}(\alpha).
\]
Since
\[
\begin{aligned}
F_{0,i;0,i}(\alpha) &= (\Psi_0(\alpha))_{i,i}
+ \sum_{j_1 \neq i} (\Psi_0(\alpha))_{i,j_1} (\Psi_0(\alpha))_{j_1,i}
+ \sum_{j_1, j_2 \neq i} (\Psi_0(\alpha))_{i,j_1} (\Psi_0(\alpha))_{j_1,j_2} (\Psi_0(\alpha))_{j_2,i} + \cdots \\
&= \sum_{t=1}^{\infty} \;\sum_{\substack{j_0 = j_t = i \\ j_k \neq i,\, 1 \leq k \leq t-1}} \;\prod_{s=1}^{t} (\Psi_0(\alpha))_{j_{s-1}, j_s},
\end{aligned}
\]
we obtain
\[
\mu_{0,i}(\alpha) = \alpha \sum_{t=1}^{\infty} \sum_{r=1}^{t}
\sum_{\substack{j_0 = j_t = i \\ j_k \neq i,\, 1 \leq k \leq t-1}}
\prod_{s=1}^{r-1} (\Psi_0(\alpha))_{j_{s-1}, j_s} \cdot
\frac{d}{d\alpha} (\Psi_0(\alpha))_{j_{r-1}, j_r} \cdot
\prod_{s=r+1}^{t} (\Psi_0(\alpha))_{j_{s-1}, j_s}.
\]
Let $x = (x_1, x_2, \ldots, x_{m_0})$ be the left eigenvector of the matrix $\Psi_0(\alpha)$ with the maximal eigenvalue $\eta(\alpha) = 1$, due to the condition that the QBD process $P$ is $\alpha$-recurrent. We write
\[
U = \Delta^{-1} \Psi_0(\alpha)^T \Delta,
\]
where $\Delta = \operatorname{diag}(x_1, x_2, \ldots, x_{m_0})$. It is clear that the matrix $U$ is stochastic. The expected first return time to state $i$ of the Markov chain $U$ is given by
\[
E[D_i] = \sum_{t=1}^{\infty} t \sum_{\substack{j_0 = j_t = i \\ j_k \neq i,\, 1 \leq k \leq t-1}} \prod_{s=1}^{t} U_{j_{s-1}, j_s}
= \sum_{t=1}^{\infty} t \sum_{\substack{j_0 = j_t = i \\ j_k \neq i,\, 1 \leq k \leq t-1}} \prod_{s=1}^{t} \bigl(\Psi_0(\alpha)^T\bigr)_{j_{s-1}, j_s} < \infty.
\]
If the matrix $\frac{d}{d\alpha} N_{1,1}(\alpha)$ is finite, then there exists a large positive number $K$ such that
\[
\frac{d}{d\alpha} (\Psi_0(\alpha))_{j_{r-1}, j_r} \leq K\, (\Psi_0(\alpha))_{j_{r-1}, j_r}.
\]
Thus, we have
\[
\mu_{0,i}(\alpha) \leq \alpha K \sum_{t=1}^{\infty} t \sum_{\substack{j_0 = j_t = i \\ j_k \neq i,\, 1 \leq k \leq t-1}} \prod_{s=1}^{t} \bigl(\Psi_0(\alpha)^T\bigr)_{j_{s-1}, j_s}
= \alpha K\, E[D_i] < \infty,
\]
which indicates that the QBD process $P$ is $\alpha$-positive recurrent. This completes the proof.

Remark 9.6 Theorem 9.9 illustrates that the $\alpha$-recurrent QBD process is $\alpha$-positive recurrent if and only if $\alpha < \alpha_1$. A similar analysis shows that the $\alpha$-recurrent level-dependent QBD process is $\alpha$-positive recurrent if and only if $\alpha < \alpha_1$. Now, we compare this result with Theorem 9.25. If $\alpha < \alpha_1$, then since $\alpha < \alpha_1 \leq \alpha_2$, $N^{(2)}(z)$ is analytic at $z = \alpha$; thus $\frac{d}{dz} N^{(2)}(z)\big|_{z=\alpha} < \infty$. On the contrary, if $\frac{d}{dz} N^{(2)}(z)\big|_{z=\alpha} < \infty$, then $\alpha < \alpha_2$, since $N^{(2)}(\alpha) < \infty$ and $N^{(2)}(z)$ is increasing for $1 \leq z < \alpha_2$.
9.7 Continuous-Time Markov Chains

In this section, similar to the analysis in the discrete-time case, we study the quasi-stationary distribution of an irreducible continuous-time Markov chain with block structure, and provide conditions for the state $\alpha$-classification and expressions for the quasi-stationary distribution.

We consider an irreducible continuous-time Markov chain whose infinitesimal generator is given by
\[
Q = \begin{pmatrix}
Q_{0,0} & Q_{0,1} & Q_{0,2} & \cdots \\
Q_{1,0} & Q_{1,1} & Q_{1,2} & \cdots \\
Q_{2,0} & Q_{2,1} & Q_{2,2} & \cdots \\
\vdots & \vdots & \vdots &
\end{pmatrix},
\]
where the size of the matrix $Q_{k,k}$ is $m_k \times m_k$ for $k \geq 0$, and the sizes of the other matrices are determined accordingly.
We write $P(t) = \exp\{Qt\}$ and let $P_{l,i;k,j}(t)$ be the $(l,i;k,j)$th entry of the matrix $P(t)$, where $l$ and $k$ are the level numbers and $i$ and $j$ are the phase numbers. We define
\[
\alpha = \sup\Bigl\{ \beta \geq 1 : \int_0^{\infty} \exp\Bigl\{\frac{1}{\beta} t\Bigr\} P_{l,i;k,j}(t)\, dt < \infty \Bigr\}.
\]
It is easy to check that if the Markov chain $Q$ is irreducible, then the positive number $\alpha$ is independent of the states $(l,i)$ and $(k,j)$. In this case,
\[
\int_0^{\infty} \exp\Bigl\{\frac{1}{\beta} t\Bigr\} \exp\{Qt\}\, dt
= \int_0^{\infty} \exp\Bigl\{\Bigl[Q + \frac{1}{\beta} I\Bigr] t\Bigr\}\, dt
= \Bigl[ -\Bigl( \frac{1}{\beta} I + Q \Bigr) \Bigr]^{-1}_{\min}.
\]
If the matrix $[-(\alpha^{-1} I + Q)]^{-1}_{\min}$ is finite, then the Markov chain $Q$ is $\alpha$-transient; otherwise it is $\alpha$-recurrent. Let $\gamma = \alpha^{-1}$. Then $\gamma$ is the decay parameter of the Markov chain $Q$.

For $1 \leq \beta \leq \alpha$, if there exists a nonnegative non-zero row vector $\pi(\beta)$ such that
\[
\pi(\beta)\, \beta Q = -\pi(\beta) \tag{9.107}
\]
and
\[
\pi(\beta)\, e = 1, \tag{9.108}
\]
then $\pi(\beta)$ is called the quasi-stationary distribution of the continuous-time Markov chain $Q$.

The following proposition provides a useful relation between the matrix $P(t)$ and the quasi-stationary distribution $\pi(\beta)$. The proof is easy, and thus is omitted here.

Proposition 9.5 If the vector $\pi(\beta)$ is the quasi-stationary distribution of the continuous-time Markov chain $Q$, then
\[
\pi(\beta)\, P(t) = \pi(\beta) \exp\Bigl\{ -\frac{t}{\beta} \Bigr\}
\]
or
\[
\pi(\beta) \Bigl[ \exp\Bigl\{ \frac{t}{\beta} \Bigr\} P(t) \Bigr] = \pi(\beta).
\]
It follows from Eq. (9.107) that
\[
\pi(\beta)\, Q = -\frac{1}{\beta}\, \pi(\beta)
\]
or
\[
\pi(\beta) \Bigl( Q + \frac{1}{\beta} I \Bigr) = 0. \tag{9.109}
\]
Let
\[
\Phi(\beta) = Q + \frac{1}{\beta} I.
\]
Note that the matrix $\Phi(\beta)$ has the same irreducible block structure as the infinitesimal generator $Q$; thus it is necessary to provide the RG-factorizations for computing the quasi-stationary distribution.
9.7.1 The UL-Type RG-Factorization

For the continuous-time Markov chain $\Phi(\beta)$, we write $E = \{0, 1, 2, \ldots, n\}$ and $E^c = \{n+1, n+2, \ldots\}$. Based on the two sets $E$ and $E^c$, the matrix $\Phi(\beta)$ is partitioned as
\[
\Phi(\beta) = \begin{pmatrix} T(\beta) & V(\beta) \\ H(\beta) & W(\beta) \end{pmatrix}.
\]
Then
\[
\Phi^{[\leq n]}(\beta) = T(\beta) + V(\beta)\, [-W(\beta)]^{-1}_{\min}\, H(\beta).
\]
The block-entry expression of the matrix $\Phi^{[\leq n]}(\beta)$ is written as
\[
\Phi^{[\leq n]}(\beta) = \begin{pmatrix}
f_{0,0}^{(n)}(\beta) & f_{0,1}^{(n)}(\beta) & \cdots & f_{0,n}^{(n)}(\beta) \\
f_{1,0}^{(n)}(\beta) & f_{1,1}^{(n)}(\beta) & \cdots & f_{1,n}^{(n)}(\beta) \\
\vdots & \vdots & & \vdots \\
f_{n,0}^{(n)}(\beta) & f_{n,1}^{(n)}(\beta) & \cdots & f_{n,n}^{(n)}(\beta)
\end{pmatrix}.
\]
For $i, j \leq n-1$, we have
\[
f_{i,j}^{(n-1)}(\beta) = Q_{i,j} + \frac{1}{\beta} \delta_{i,j} I
+ \sum_{k=n}^{\infty} f_{i,k}^{(k)}(\beta) \bigl[ -f_{k,k}^{(k)}(\beta) \bigr]^{-1} f_{k,j}^{(k)}(\beta),
\]
where $\delta_{i,j}$ is the Kronecker delta. We define the U-measure as
\[
U_n(\beta) = f_{n,n}^{(n)}(\beta), \quad n \geq 0,
\]
the R-measure as
\[
R_{i,j}(\beta) = f_{i,j}^{(j)}(\beta)\, [-U_j(\beta)]^{-1}, \quad 0 \leq i < j,
\]
and the G-measure as
\[
G_{i,j}(\beta) = [-U_i(\beta)]^{-1}\, f_{i,j}^{(i)}(\beta), \quad 0 \leq j < i.
\]
The UL-type RG-factorization is given by
\[
\Phi(\beta) = [I - R_U(\beta)]\, U_D(\beta)\, [I - G_L(\beta)],
\]
where
\[
R_U(\beta) = \begin{pmatrix}
0 & R_{0,1}(\beta) & R_{0,2}(\beta) & R_{0,3}(\beta) & \cdots \\
& 0 & R_{1,2}(\beta) & R_{1,3}(\beta) & \cdots \\
& & 0 & R_{2,3}(\beta) & \cdots \\
& & & 0 & \cdots \\
& & & & \ddots
\end{pmatrix},
\]
\[
U_D(\beta) = \operatorname{diag}\bigl(U_0(\beta), U_1(\beta), U_2(\beta), U_3(\beta), \ldots\bigr)
\]
and
\[
G_L(\beta) = \begin{pmatrix}
0 & & & & \\
G_{1,0}(\beta) & 0 & & & \\
G_{2,0}(\beta) & G_{2,1}(\beta) & 0 & & \\
G_{3,0}(\beta) & G_{3,1}(\beta) & G_{3,2}(\beta) & 0 & \\
\vdots & \vdots & \vdots & & \ddots
\end{pmatrix}.
\]
In what follows we study some useful properties of the censored discounted Markov chain $U_k(\beta)$ for $k \geq 0$, which are necessary in the analysis of the quasi-stationary distribution.

Proposition 9.6 (1) For $1 \leq \beta < \alpha$, the matrix $U_k(\beta)$ is invertible for $k \geq 0$. (2) The matrix $U_k(\alpha)$ is invertible for $k \geq 1$. (3) The matrix $U_0(\alpha)$ is invertible if the Markov chain $Q$ is $\alpha$-transient, while $U_0(\alpha)$ is singular if the Markov chain $Q$ is $\alpha$-recurrent.

Based on (3) in Proposition 9.6, we have
\[
\alpha = \sup\{\beta \geq 1 : \det(U_0(\beta)) \neq 0\}.
\]
Let
\[
N_0(\beta) = [-U_0(\beta)]^{-1}.
\]
Then
\[
\alpha = \sup\{\beta \geq 1 : N_0(\beta) < \infty\}.
\]
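The factorization above can be verified numerically in the smallest nontrivial case: two scalar "levels". The numbers below are hypothetical; censoring the high level gives $U_1$, $U_0$, one R-entry and one G-entry, and multiplying the three factors back together must reproduce $\Phi(\beta)$.

```python
import numpy as np

# Two-block toy matrix standing in for Phi(beta); entries are hypothetical.
Phi = np.array([[-2.0,  1.0],
                [ 3.0, -5.0]])

I1 = np.eye(1)
P00, P01 = Phi[:1, :1], Phi[:1, 1:]
P10, P11 = Phi[1:, :1], Phi[1:, 1:]

# Censor out the high level: U1 = Phi_{11}, and the censored level 0 keeps
# U0 = Phi_{00} + Phi_{01} (-U1)^{-1} Phi_{10}.
U1 = P11
U0 = P00 + P01 @ np.linalg.inv(-U1) @ P10
R01 = P01 @ np.linalg.inv(-U1)          # R-measure entry
G10 = np.linalg.inv(-U1) @ P10          # G-measure entry

RU = np.block([[0 * I1, R01], [0 * I1, 0 * I1]])
UD = np.block([[U0, 0 * I1], [0 * I1, U1]])
GL = np.block([[0 * I1, 0 * I1], [G10, 0 * I1]])

# UL-type RG-factorization: Phi = (I - R_U) U_D (I - G_L)
recon = (np.eye(2) - RU) @ UD @ (np.eye(2) - GL)
```

Here $U_0 = -2 + 1 \cdot (1/5) \cdot 3 = -1.4$, and `recon` equals `Phi` exactly, which is the two-level instance of the UL-type factorization.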
When the Markov chain $Q$ is $\alpha$-recurrent, the following theorem provides expressions for the quasi-stationary distribution, which are similar to those in the discrete-time case.

Theorem 9.26 For $\beta = \alpha$ under which the continuous-time Markov chain $Q$ is $\alpha$-recurrent, the quasi-stationary distribution is expressed as
\[
\pi_0(\alpha) = \tau x_0(\alpha)
\]
and
\[
\pi_k(\alpha) = \sum_{i=0}^{k-1} \pi_i(\alpha)\, R_{i,k}(\alpha), \quad k \geq 1,
\]
where $x_0(\alpha)$ is a nonnegative non-zero solution to the system of equations $x_0(\alpha) U_0(\alpha) = 0$ and $x_0(\alpha) e = 1$, and the constant $\tau$ makes $\sum_{k=0}^{\infty} \pi_k(\alpha)\, e = 1$.

Proof Since
\[
\pi(\alpha)\, [Q + \alpha^{-1} I] = 0,
\]
we obtain
\[
\pi(\alpha)\, [I - R_U(\alpha)]\, U_D(\alpha)\, [I - G_L(\alpha)] = 0.
\]
Let $x = \pi(\alpha)[I - R_U(\alpha)]$. Then $x\, U_D(\alpha)[I - G_L(\alpha)] = 0$. Based on the censoring technique, it is clear that $x = (\tau x_0(\alpha), 0, 0, \ldots)$ is a nonnegative non-zero solution to $x\, U_D(\alpha)[I - G_L(\alpha)] = 0$. Thus we obtain
\[
\pi(\alpha)\, [I - R_U(\alpha)] = (\tau x_0(\alpha), 0, 0, \ldots),
\]
which leads to the desired result. This completes the proof.

In what follows we consider another type of expression for the quasi-stationary distribution, for either $\beta < \alpha$, or $\beta = \alpha$ under which the continuous-time Markov chain $Q$ is $\alpha$-transient. Note that in this case a general irreducible Markov chain with infinitely-many levels cannot be dealt with; thus we only study a continuous-time Markov chain of GI/G/1 type.

We consider an irreducible continuous-time Markov chain of GI/G/1 type whose infinitesimal generator is given by
\[
Q = \begin{pmatrix}
D_0 & D_1 & D_2 & D_3 & \cdots \\
D_{-1} & A_0 & A_1 & A_2 & \cdots \\
D_{-2} & A_{-1} & A_0 & A_1 & \cdots \\
D_{-3} & A_{-2} & A_{-1} & A_0 & \cdots \\
\vdots & \vdots & \vdots & \vdots &
\end{pmatrix}.
\]
The RG-factorization of the matrix $\Phi(\beta)$ is given by
\[
\Phi(\beta) = [I - R_U(\beta)]\, U_D(\beta)\, [I - G_L(\beta)],
\]
where
\[
R_U(\beta) = \begin{pmatrix}
0 & R_{0,1}(\beta) & R_{0,2}(\beta) & R_{0,3}(\beta) & \cdots \\
& 0 & R_1(\beta) & R_2(\beta) & \cdots \\
& & 0 & R_1(\beta) & \cdots \\
& & & 0 & \cdots \\
& & & & \ddots
\end{pmatrix},
\]
\[
U_D(\beta) = \operatorname{diag}\bigl(U_0(\beta), \Phi_0(\beta), \Phi_0(\beta), \Phi_0(\beta), \ldots\bigr)
\]
and
\[
G_L(\beta) = \begin{pmatrix}
0 & & & & \\
G_{1,0}(\beta) & 0 & & & \\
G_{2,0}(\beta) & G_1(\beta) & 0 & & \\
G_{3,0}(\beta) & G_2(\beta) & G_1(\beta) & 0 & \\
\vdots & \vdots & \vdots & & \ddots
\end{pmatrix}.
\]
For the G-measures $G_{k,0}(\beta)$ and $G_k(\beta)$ for $k \geq 1$, there exist a constant $\gamma > 0$ and two nonnegative non-zero row vectors $y$ and $y_0$ such that
\[
\begin{cases}
y = y \displaystyle\sum_{k=1}^{\infty} \gamma^k G_k(\beta), \\
y_0 = y \displaystyle\sum_{k=1}^{\infty} \gamma^k G_{k,0}(\beta).
\end{cases}
\]
Thus, using a similar analysis to that of the discrete-time case, we can obtain the following theorem, whose proof is easy and thus is omitted here.

Theorem 9.27 If $1 \leq \beta < \alpha$, or $\beta = \alpha$ under which the continuous-time Markov chain $Q$ of GI/G/1 type is $\alpha$-transient, then the quasi-stationary distribution $\pi(\beta) = (\pi_0(\beta), \pi_1(\beta), \pi_2(\beta), \ldots)$ is given by
\[
\begin{cases}
\pi_0(\beta) = \kappa\, y_0 N_0(\beta), \\
\pi_k(\beta) = \kappa \gamma^k y N(\beta) + \displaystyle\sum_{i=1}^{k-1} \pi_i(\beta) R_{k-i}(\beta) + \pi_0(\beta) R_{0,k}(\beta), & k \geq 1,
\end{cases}
\]
where $N_0(\beta) = [-U_0(\beta)]^{-1}$, $N(\beta) = [-\Phi_0(\beta)]^{-1}$ and the positive constant $\kappa$ satisfies $\sum_{k=0}^{\infty} \pi_k(\beta)\, e = 1$.
9.7.2 The LU-Type RG-Factorization

Now, we construct the LU-type RG-factorization of the continuous-time Markov chain $\Phi(\beta)$. Let
\[
\Phi^{[\geq n]}(\beta) = W(\beta) + H(\beta)\, [-T(\beta)]^{-1}\, V(\beta).
\]
The block-entry expression of the matrix $\Phi^{[\geq n]}(\beta)$ is written as
\[
\Phi^{[\geq n]}(\beta) = \begin{pmatrix}
h_{n,n}^{(n)}(\beta) & h_{n,n+1}^{(n)}(\beta) & h_{n,n+2}^{(n)}(\beta) & \cdots \\
h_{n+1,n}^{(n)}(\beta) & h_{n+1,n+1}^{(n)}(\beta) & h_{n+1,n+2}^{(n)}(\beta) & \cdots \\
h_{n+2,n}^{(n)}(\beta) & h_{n+2,n+1}^{(n)}(\beta) & h_{n+2,n+2}^{(n)}(\beta) & \cdots \\
\vdots & \vdots & \vdots &
\end{pmatrix}.
\]
For $i, j \geq n+1$, we obtain
\[
h_{i,j}^{(n+1)}(\beta) = Q_{i,j} + \frac{1}{\beta} \delta_{i,j} I
+ \sum_{k=0}^{n} h_{i,k}^{(k)}(\beta) \bigl[ -h_{k,k}^{(k)}(\beta) \bigr]^{-1} h_{k,j}^{(k)}(\beta).
\]
We define the U-measure as
\[
U_n(\beta) = h_{n,n}^{(n)}(\beta), \quad n \geq 0,
\]
the R-measure as
\[
R_{i,j}(\beta) = h_{i,j}^{(j)}(\beta)\, [-U_j(\beta)]^{-1}, \quad 0 \leq j < i,
\]
and the G-measure as
\[
G_{i,j}(\beta) = [-U_i(\beta)]^{-1}\, h_{i,j}^{(i)}(\beta), \quad 0 \leq i < j.
\]
The LU-type RG-factorization is given by
\[
\Phi(\beta) = [I - R_L(\beta)]\, U_D(\beta)\, [I - G_U(\beta)],
\]
where
\[
R_L(\beta) = \begin{pmatrix}
0 & & & & \\
R_{1,0}(\beta) & 0 & & & \\
R_{2,0}(\beta) & R_{2,1}(\beta) & 0 & & \\
R_{3,0}(\beta) & R_{3,1}(\beta) & R_{3,2}(\beta) & 0 & \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix},
\]
\[
U_D(\beta) = \operatorname{diag}\bigl(U_0(\beta), U_1(\beta), U_2(\beta), U_3(\beta), \ldots\bigr)
\]
and
\[
G_U(\beta) = \begin{pmatrix}
0 & G_{0,1}(\beta) & G_{0,2}(\beta) & G_{0,3}(\beta) & \cdots \\
& 0 & G_{1,2}(\beta) & G_{1,3}(\beta) & \cdots \\
& & 0 & G_{2,3}(\beta) & \cdots \\
& & & 0 & \cdots \\
& & & & \ddots
\end{pmatrix}.
\]

9.8 Decay Rate for the GPH Distribution
In this section, we use the quasi-stationary distribution and the RG-factorization to study the decay rate of a GPH distribution, either continuous-time or discrete-time, and show that the decay rate is determined by both the transition matrix and the initial probability vector.

9.8.1 The Discrete-Time PH Distribution with Finitely Many Phases

We first consider a discrete-time PH distribution with irreducible representation $(\alpha, T)$ of order $m$, whose random variable $X$ has the distribution
\[
p_k = P\{X = k\} = \begin{cases} \alpha_0, & k = 0, \\ \alpha T^{k-1} T^0, & k \geq 1, \end{cases} \tag{9.110}
\]
where $\alpha_0 + \alpha e = 1$ and $T^0 + Te = e$. For simplicity of exposition, we assume that the transition matrix $T$ is irreducible. Note that the matrix $T$ is irreducible if and only if $(I + T)^{m-1} > 0$. It is clear from Eq. (9.110) that
\[
P^*(z) = E[z^X] = \alpha_0 + z \alpha (I - zT)^{-1} T^0. \tag{9.111}
\]
We are now interested in the tail behavior of the PH random variable $X$:
\[
\lim_{k \to \infty} P\{X = k\} = \lim_{k \to \infty} p_k
\]
or
\[
\lim_{k \to \infty} P\{X > k\} = \lim_{k \to \infty} \bar{p}_k,
\]
where $\bar{p}_k = \sum_{l=k+1}^{\infty} p_l$. Note that
\[
\bar{p}_k = \sum_{l=k+1}^{\infty} \alpha T^{l-1} T^0 = \alpha T^k e;
\]
thus we only need to analyze the tail behavior $\lim_{k \to \infty} p_k$.
For the irreducible nonnegative matrix $T$, we define
\[
\alpha(T) = \sup\Bigl\{ \beta \geq 1 : \sum_{n=0}^{\infty} \beta^n t_{i,j}^{(n)} < \infty \Bigr\},
\]
which is independent of $i$ and $j$, where $t_{i,j}^{(n)}$ is the $(i,j)$th element of the matrix $T^n$ for $n \geq 0$. We denote by $\rho(T)$ the maximal eigenvalue of the matrix $T$. Since the matrix $T$ is of finite order $m$, we have
\[
\alpha(T) = \frac{1}{\rho(T)}. \tag{9.112}
\]
In this case, we only need to analyze the spectral radius $\rho(T)$ for studying the tail behavior $\lim_{k \to \infty} p_k$. To this end, the following two lemmas, stated in 8.4.4 and 8.4.6 of Horn and Johnson [11], are useful for describing the spectral radius $\rho(T)$.

Lemma 9.21 For the irreducible nonnegative matrix $T$, we have

(1) $\rho(T) > 0$ is an algebraically (and thus geometrically) simple eigenvalue of $T$.

(2) There must exist a positive row vector $u$ and a positive column vector $v$ such that
\[
uT = \rho(T)\, u, \quad Tv = \rho(T)\, v, \quad ue = uv = 1.
\]
Lemma 9.22 If the irreducible nonnegative matrix $T$ has $r$ eigenvalues of maximal modulus $\rho(T)$, then the $l$th such eigenvalue is given by
\[
\lambda_l = \rho(T) \exp\Bigl\{ \frac{2\pi l}{r}\, \mathrm{i} \Bigr\},
\]
where $\mathrm{i}^2 = -1$ and $0 \leq l \leq r-1$. At the same time, each eigenvalue $\lambda_l$ is an algebraically (and thus geometrically) simple eigenvalue of $T$.

If the irreducible nonnegative matrix $T$ has $r$ eigenvalues of maximal modulus $\rho(T)$, then there exists an invertible matrix $D$ such that
\[
T = D^{-1} J D,
\]
where $J$ is the Jordan canonical form of the matrix $T$, $J = \operatorname{diag}(J_1, J_2)$ and
\[
J_1 = \rho(T)\, \operatorname{diag}\Bigl( \exp\Bigl\{ \frac{2\pi l}{r}\, \mathrm{i} \Bigr\},\; l = 0, 1, \ldots, r-1 \Bigr).
\]
Hence, we obtain
\[
T^{k-1} = D^{-1} \operatorname{diag}\bigl(J_1^{k-1}, J_2^{k-1}\bigr) D
= \rho^{k-1}(T)\, D^{-1} \operatorname{diag}\bigl( \widehat{J}_1^{\,k-1}, [\rho^{-1}(T) J_2]^{k-1} \bigr) D,
\]
where
\[
\widehat{J}_1^{\,k-1} = \operatorname{diag}\Bigl( \exp\Bigl\{ (k-1) \frac{2\pi l}{r}\, \mathrm{i} \Bigr\},\; l = 0, 1, \ldots, r-1 \Bigr).
\]
It is clear that
\[
\lim_{k \to \infty} [\rho^{-1}(T) J_2]^{k-1} = 0,
\]
but the limit $\lim_{k \to \infty} \widehat{J}_1^{\,k-1}$ does not exist for $r \geq 2$. In this case, we have
\[
p_k = \alpha T^{k-1} T^0
= \rho^{k-1}(T)\, \alpha D^{-1} \operatorname{diag}\bigl( \widehat{J}_1^{\,k-1}, [\rho^{-1}(T) J_2]^{k-1} \bigr) D T^0
\leq \rho^{k-1}(T)\, \|\alpha\|\, \|D^{-1}\|\, \bigl\|\operatorname{diag}\bigl( \widehat{J}_1^{\,k-1}, [\rho^{-1}(T) J_2]^{k-1} \bigr)\bigr\|\, \|D\|\, \|T^0\|.
\]
Let
\[
\phi = \|\alpha\|\, \|D^{-1}\|\, \|\operatorname{diag}(I, 0)\|\, \|D\|\, \|T^0\|.
\]
Then as $k \to \infty$, we have
\[
p_k \leq \rho^{k-1}(T)\, \|\alpha\|\, \|D^{-1}\|\, \bigl\|\operatorname{diag}\bigl( I, [\rho^{-1}(T) J_2]^{k-1} \bigr)\bigr\|\, \|D\|\, \|T^0\| \to \rho^{k-1}(T)\, \phi. \tag{9.113}
\]
Specifically, if the irreducible nonnegative matrix $T$ has a single eigenvalue of maximal modulus $\rho(T)$, then it is easy to check that $T^{k-1} \sim \rho^{k-1}(T)\, vu$ as $k \to \infty$; thus we obtain
\[
p_k \sim \rho^{k-1}(T)\, \varphi, \quad \text{as } k \to \infty, \tag{9.114}
\]
where $\varphi = \alpha v u T^0$.

Definition 9.1 A nonnegative matrix $A$ of order $m$ is said to be primitive if it is irreducible and has only one eigenvalue of maximal modulus.

Obviously, if the matrix $T$ is primitive, then the tail behavior $\lim_{k \to \infty} p_k$ is completely determined by the spectral radius $\rho(T)$ in terms of Eq. (9.114). The following proposition provides conditions under which the matrix $T$ is primitive; the proofs may be found in 8.5.2 and 8.5.9 of Horn and Johnson [11].

Proposition 9.7 (1) The matrix $T$ of order $m$ is primitive if and only if $T^n > 0$ for some $n \geq 1$.

(2) The matrix $T$ of order $m$ is primitive if and only if $T^{m^2 - 2m + 2} > 0$.
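Eq. (9.112), Eq. (9.114) and Proposition 9.7 are easy to check numerically for a small hypothetical PH representation:

```python
import numpy as np

# Hypothetical PH representation (alpha, T) of order m = 2.
alpha = np.array([1.0, 0.0])
T = np.array([[0.2, 0.3],
              [0.4, 0.1]])
T0 = np.ones(2) - T @ np.ones(2)          # exit vector: T0 + T e = e

rho = max(abs(np.linalg.eigvals(T)))      # spectral radius; here rho(T) = 0.5

# Wielandt's bound, Proposition 9.7(2): primitive iff T^(m^2 - 2m + 2) > 0.
m = T.shape[0]
primitive = (np.linalg.matrix_power(T, m * m - 2 * m + 2) > 0).all()

# Geometric tail, Eq. (9.114): p_{k+1} / p_k -> rho(T).
p = lambda k: alpha @ np.linalg.matrix_power(T, k - 1) @ T0
ratio = p(41) / p(40)
```

The eigenvalues of this $T$ are $0.5$ and $-0.2$, so the tail ratio converges to $\rho(T) = 0.5$ at the geometric rate $|{-0.2}/0.5|$.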
9.8.2 The Discrete-Time GPH Distribution with Infinitely-Many Phases

When the size of the matrix $T$ is infinite, eigenvalues and eigenvectors of the matrix $T$ need not exist; even so, we have a similar pair of concepts: the decay rate $1/\alpha_T$ and the quasi-stationary distribution $\pi(\alpha_T)$, given by
\[
\pi(\alpha_T)\, \alpha_T T = \pi(\alpha_T)
\]
or
\[
\pi(\alpha_T)\, T = \frac{1}{\alpha_T}\, \pi(\alpha_T),
\]
which has been analyzed in the previous sections. Therefore, the tail behavior $\lim_{k \to \infty} p_k$ can be described by the radius $\alpha_T$ of convergence of the matrix $T$.

Noting that
\[
I - zT = [I - R_U(z)][I - \Psi_D(z)][I - G_L(z)],
\]
it follows from Eq. (9.111) that
\[
P^*(z) = \alpha_0 + z \alpha\, [I - G_L(z)]^{-1} [I - \Psi_D(z)]^{-1} [I - R_U(z)]^{-1}\, T^0.
\]
Let
\[
\gamma = \sup\{\beta \geq 1 : P^*(\beta) < \infty\}.
\]
It is clear that the discrete-time GPH distribution $\{p_k\}$ is heavy-tailed if and only if $\gamma = 1$, and $\{p_k\}$ is light-tailed if and only if $\gamma > 1$. In general, it is difficult to determine the decay rate $1/\gamma$ for a general GPH distribution with irreducible representation $(\alpha, T)$. In what follows we analyze two special cases for a level-dependent discrete-time QBD process.
9.8.3 The Level-Dependent QBD Processes

Let
\[
T = \begin{pmatrix}
A_1^{(1)} & A_0^{(1)} & & \\
A_2^{(2)} & A_1^{(2)} & A_0^{(2)} & \\
& A_2^{(3)} & A_1^{(3)} & A_0^{(3)} \\
& & \ddots & \ddots & \ddots
\end{pmatrix}.
\]
Then we consider the following two cases.

Case I: $A_0^{(1)} e + A_1^{(1)} e \lneq e$ and $A_0^{(k)} e + A_1^{(k)} e + A_2^{(k)} e = e$ for $k \geq 2$. In this case, it is clear that
\[
T^0 = \bigl( (A_2^{(1)} e)^T, 0^T, 0^T, \ldots \bigr)^T.
\]
We introduce the notation
\[
\prod_{j=M}^{N} A_j = A_M A_{M+1} \cdots A_N.
\]
Noting that
\[
[I - \Psi_D(z)]^{-1} [I - R_U(z)]^{-1}\, T^0 =
\begin{pmatrix}
[I - \Psi_1(z)]^{-1} A_2^{(1)} e \\ 0 \\ 0 \\ \vdots
\end{pmatrix},
\]
\[
[I - G_L(z)]^{-1} = \begin{pmatrix}
I & & & & \\
Y_1^{(2)}(z) & I & & & \\
Y_2^{(3)}(z) & Y_1^{(3)}(z) & I & & \\
Y_3^{(4)}(z) & Y_2^{(4)}(z) & Y_1^{(4)}(z) & I & \\
\vdots & \vdots & \vdots & & \ddots
\end{pmatrix}
\]
with $Y_0^{(l)}(z) = I$ and, for $1 \leq k < l$,
\[
Y_k^{(l)}(z) = G_l(z) G_{l-1}(z) \cdots G_{l-k+1}(z),
\]
and
\[
\alpha [I - G_L(z)]^{-1} = \Bigl( \alpha_1 + \sum_{k=2}^{\infty} \alpha_k\, G_k(z) G_{k-1}(z) \cdots G_2(z),\; *,\; *,\; \ldots \Bigr),
\]
where the initial probability vector $\alpha = (\alpha_1, \alpha_2, \alpha_3, \ldots)$ is partitioned according to the levels of $T$, we obtain
\[
P^*(z) = \alpha_0 + z \alpha\, [I - G_L(z)]^{-1} [I - \Psi_D(z)]^{-1} [I - R_U(z)]^{-1}\, T^0
= \alpha_0 + z \Bigl[ \alpha_1 + \sum_{k=2}^{\infty} \alpha_k\, G_k(z) G_{k-1}(z) \cdots G_2(z) \Bigr] [I - \Psi_1(z)]^{-1} A_2^{(1)} e.
\]
We write
\[
\eta = \sup\Bigl\{ \beta \geq 1 : \sum_{k=2}^{\infty} \alpha_k\, G_k(\beta) G_{k-1}(\beta) \cdots G_2(\beta)\, e < \infty \Bigr\}
\]
and
\[
\alpha_T = \sup\{\beta \geq 1 : [I - \Psi_1(\beta)]^{-1} < \infty\}.
\]
It is clear that
\[
\gamma = \min\{\eta, \alpha_T\}.
\]
If the QBD process is level-independent, then $G_k(\beta) = G(\beta)$ for $k \geq 2$. Hence we obtain
\[
\eta = \sup\Bigl\{ \beta \geq 1 : \sum_{k=2}^{\infty} \alpha_k\, G^{k-1}(\beta) < \infty \Bigr\}.
\]
Case II: $\alpha = (\alpha_1, 0, 0, \ldots)$. In this case, note that
\[
\alpha [I - G_L(z)]^{-1} [I - \Psi_D(z)]^{-1} = \bigl( \alpha_1 [I - \Psi_1(z)]^{-1}, 0, 0, \ldots \bigr),
\]
\[
[I - R_U(z)]^{-1} = \begin{pmatrix}
I & X_1^{(1)}(z) & X_2^{(1)}(z) & X_3^{(1)}(z) & \cdots \\
& I & X_1^{(2)}(z) & X_2^{(2)}(z) & \cdots \\
& & I & X_1^{(3)}(z) & \cdots \\
& & & I & \cdots \\
& & & & \ddots
\end{pmatrix}
\]
with
\[
X_0^{(l)}(z) = I, \qquad
X_k^{(l)}(z) = R_l(z) R_{l+1}(z) \cdots R_{l+k-1}(z) = \prod_{i=l}^{l+k-1} R_i(z),
\]
and
\[
[I - R_U(z)]^{-1}\, T^0 = \begin{pmatrix}
\displaystyle\sum_{k=1}^{\infty} X_{k-1}^{(1)}(z)\, T_k^0 \\ * \\ * \\ \vdots
\end{pmatrix}
= \begin{pmatrix}
T_1^0 + \displaystyle\sum_{k=2}^{\infty} R_1(z) R_2(z) \cdots R_{k-1}(z)\, T_k^0 \\ * \\ * \\ \vdots
\end{pmatrix},
\]
where $T_k^0$ denotes the $k$th block of $T^0 = e - Te$, we obtain
\[
P^*(z) = \alpha_0 + z \alpha_1 [I - \Psi_1(z)]^{-1}
\Bigl[ T_1^0 + \sum_{k=2}^{\infty} R_1(z) R_2(z) \cdots R_{k-1}(z)\, T_k^0 \Bigr].
\]
Let
\[
\eta = \sup\Bigl\{ \beta \geq 1 : \sum_{k=2}^{\infty} R_1(\beta) R_2(\beta) \cdots R_{k-1}(\beta)\, T_k^0 < \infty \Bigr\}
\]
and
\[
\alpha_T = \sup\{\beta \geq 1 : [I - \Psi_1(\beta)]^{-1} < \infty\}.
\]
Then $\gamma = \min\{\eta, \alpha_T\}$.

9.8.4 The Continuous-Time GPH Distribution
We consider a continuous-time GPH distribution with irreducible representation $(\alpha, T)$, whose analysis is similar to the discrete-time case. Although the size of the matrix T is infinite, we can determine the decay rate $1/\alpha_T$ of the matrix T, and thus the quasi-stationary distribution $\pi(\alpha_T)$ of the Markov chain T is given by
$$\pi(\alpha_T) = -\alpha_T\,\pi(\alpha_T)T \quad\text{or}\quad \pi(\alpha_T)T = -\frac{1}{\alpha_T}\pi(\alpha_T).$$
Therefore, the tail behavior of the distribution can be described by the radius of convergence $\alpha_T$ of the matrix T alone. Note that $F(x) = 1 - \alpha\exp\{Tx\}e$ and the Laplace-Stieltjes transform
$$f^*(s) = \alpha_0 + \alpha(sI - T)^{-1}T^0.$$
Since
$$T - sI = [I - R_U(s)]U_D(s)[I - G_L(s)],$$
it follows from Eq. (9.111) that
$$f^*(s) = \alpha_0 + \alpha[I - G_L(s)]^{-1}[-U_D(s)]^{-1}[I - R_U(s)]^{-1}T^0.$$
Let
$$\gamma = \sup\{s \ge 0: f^*(-s) < \infty\}.$$
It is clear that the continuous-time GPH distribution F(x) is heavy-tailed if and only if $\gamma = 0$; F(x) is light-tailed if and only if $\gamma > 0$. In general, it is difficult to determine the decay rate $\gamma$ for a general GPH distribution with irreducible representation $(\alpha, T)$. In what follows we analyze a level-dependent Markov chain of M/G/1 type and a level-dependent Markov chain of GI/M/1 type, respectively.
9.8.5 The Level-Dependent Markov Chains of M/G/1 Type

Let
$$T = \begin{pmatrix} A_1^{(1)} & A_2^{(1)} & A_3^{(1)} & A_4^{(1)} & \cdots \\ A_0^{(2)} & A_1^{(2)} & A_2^{(2)} & A_3^{(2)} & \cdots \\ & A_0^{(3)} & A_1^{(3)} & A_2^{(3)} & \cdots \\ & & \ddots & \ddots & \ddots \end{pmatrix}.$$
We assume that $\sum_{k=1}^{\infty}A_k^{(1)}e \lneq 0$ and $\sum_{k=0}^{\infty}A_k^{(l)}e = 0$ for $l \ge 2$. In this case, it is clear that
$$T^0 = \left((A_0^{(1)}e)^T, 0^T, 0^T, \ldots\right)^T.$$
Note that
$$[-U_D(s)]^{-1}[I - R_U(s)]^{-1}T^0 = \begin{pmatrix} [-U_1(s)]^{-1}A_0^{(1)}e \\ 0 \\ 0 \\ \vdots \end{pmatrix},$$
$$[I - G_L(s)]^{-1} = \begin{pmatrix} I & & & & \\ Y_2^{(2)}(s) & I & & & \\ Y_3^{(3)}(s) & Y_2^{(3)}(s) & I & & \\ Y_4^{(4)}(s) & Y_3^{(4)}(s) & Y_2^{(4)}(s) & I & \\ \vdots & \vdots & \vdots & & \ddots \end{pmatrix}$$
with, for $1 \le l \le k$,
$$Y_1^{(l)}(s) = I, \qquad Y_k^{(l)}(s) = G_l(s)G_{l-1}(s)\cdots G_{l-k+1}(s) = \prod_{i=l}^{l-k+1}G_i(s),$$
and
$$\alpha[I - G_L(s)]^{-1} = \left(\sum_{k=1}^{\infty}\alpha_k Y_k^{(k)}(s), *, *, \ldots\right) = \left(\sum_{k=1}^{\infty}\left[\alpha_k\prod_{j=k}^{1}G_j(s)\right], *, *, \ldots\right),$$
we obtain
$$f^*(s) = \alpha_0 + \alpha[I - G_L(s)]^{-1}[-U_D(s)]^{-1}[I - R_U(s)]^{-1}T^0 = \alpha_0 + \sum_{k=1}^{\infty}\left[\alpha_k\prod_{j=k}^{1}G_j(s)\right][-U_1(s)]^{-1}A_0^{(1)}e.$$
We write
$$\eta = \sup\left\{s \ge 0: \sum_{k=1}^{\infty}\alpha_k\prod_{j=k}^{1}G_j(-s)e < \infty\right\}$$
and
$$\alpha_T = \sup\left\{s \ge 0: [-U_1(-s)]^{-1} < \infty\right\}.$$
It is clear that
$$\gamma = \min\{\eta, \alpha_T\},$$
which depends on both the matrix T and the initial probability vector $\alpha = (\alpha_0, \alpha_1, \alpha_2, \ldots)$.
9.8.6 The Level-Dependent Markov Chains of GI/M/1 Type

Let
$$T = \begin{pmatrix} A_1^{(1)} & A_0^{(1)} & & \\ A_2^{(2)} & A_1^{(2)} & A_0^{(2)} & \\ A_3^{(3)} & A_2^{(3)} & A_1^{(3)} & A_0^{(3)} \\ \vdots & \vdots & \vdots & & \ddots \end{pmatrix}.$$
We assume that $\alpha = (\alpha_1, 0, 0, \ldots)$. In this case, note that
$$\alpha[I - G_L(s)]^{-1}[-U_D(s)]^{-1} = \left(\alpha_1[-U_1(s)]^{-1}, 0, 0, \ldots\right),$$
$$[I - R_U(s)]^{-1} = \begin{pmatrix} I & X_1^{(1)}(s) & X_2^{(1)}(s) & X_3^{(1)}(s) & \cdots \\ & I & X_1^{(2)}(s) & X_2^{(2)}(s) & \cdots \\ & & I & X_1^{(3)}(s) & \cdots \\ & & & I & \cdots \\ & & & & \ddots \end{pmatrix}$$
with
$$X_0^{(l)}(s) = I, \qquad X_k^{(l)}(s) = R_l(s)R_{l+1}(s)\cdots R_{l+k-1}(s) = \prod_{i=l}^{l+k-1}R_i(s).$$
Note that $T^0 = \left((T_1^0)^T, (T_2^0)^T, (T_3^0)^T, \ldots\right)^T$, we have
$$[I - R_U(s)]^{-1}T^0 = \begin{pmatrix} \sum_{k=0}^{\infty}X_k^{(1)}(s)T_{k+1}^0 \\ * \\ * \\ \vdots \end{pmatrix} = \begin{pmatrix} \sum_{k=0}^{\infty}\prod_{i=1}^{k}R_i(s)T_{k+1}^0 \\ * \\ * \\ \vdots \end{pmatrix};$$
we obtain
$$f^*(s) = \alpha_0 + \alpha_1[-U_1(s)]^{-1}\sum_{k=0}^{\infty}\prod_{i=1}^{k}R_i(s)T_{k+1}^0.$$
Let
$$\eta = \sup\left\{s \ge 0: \sum_{k=0}^{\infty}\prod_{i=1}^{k}R_i(-s)T_{k+1}^0 < \infty\right\}$$
and
$$\alpha_T = \sup\left\{s \ge 0: [-U_1(-s)]^{-1} < \infty\right\}.$$
Then $\gamma = \min\{\eta, \alpha_T\}$.
9.9 QBD Processes with Infinitely-Many Phases

In this section, we apply the quasi-stationary distribution and the UL-type RG-factorization to study the decay rate of the stationary probability vector of a discrete-time level-independent QBD process with infinitely-many phases.

Consider an irreducible discrete-time level-independent QBD process with infinitely-many phases whose transition probability matrix is given by
$$P = \begin{pmatrix} B_0 & A_0 & & \\ C_0 & B & A & \\ & C & B & A \\ & & \ddots & \ddots & \ddots \end{pmatrix}, \qquad (9.115)$$
where the sizes of the matrices $B_0$ and $B$ are infinite and the sizes of all other blocks are determined accordingly. We assume that the QBD process is irreducible and positive recurrent. Let $\pi = (\pi_0, \pi_1, \pi_2, \ldots)$ be the stationary probability vector of the QBD process. Then $\pi_k = \pi_1 R^{k-1}$ for $k \ge 1$, where the matrix R is the minimal nonnegative solution to the matrix equation $A + RB + R^2C = R$. In addition, let the matrix G be the minimal nonnegative solution to the matrix equation $AG^2 + BG + C = G$. Let U be the transition probability matrix of the censored chain of the Markov chain $\Phi$ to level 0, where
$$\Phi = \begin{pmatrix} B & A & & \\ C & B & A & \\ & C & B & A \\ & & \ddots & \ddots & \ddots \end{pmatrix}.$$
Then
$$U = B + (A, 0, 0, \ldots)[I - \Phi]^{-1}_{\min}(C^T, 0, 0, \ldots)^T = B + A(I - U)^{-1}C.$$
It is easy to see from the censoring technique that
$$R = A(I - U)^{-1} \quad\text{and}\quad G = (I - U)^{-1}C.$$
It is easy to check from the RG-factorization for the repeated row that
$$I - (\eta^{-1}A + B + \eta C) = (\eta I - R)(I - U)(\eta^{-1}I - G).$$
We assume that there exist a positive constant $\eta < 1$ and two positive vectors x and y such that

Condition 1: $x(\eta^{-1}A + B + \eta C) = x$ and $(\eta^{-1}A + B + \eta C)y = y$.
Condition 2: $xe = 1$ and $xy < \infty$.
Condition 3: $\eta^{-1}xAy \ne \eta xCy$.

Then $xR = \eta x$ and $Rz = \eta z$, where $z = (I - U)(\eta^{-1}I - G)y$. For the positive recurrent QBD process P, if Conditions 1 to 3 hold, $c = \pi_1 z < \infty$ and the QBD process $\Phi$ is irreducible, then
$$R^n \sim \eta^n zx, \qquad \text{as } n \to \infty.$$
Thus, we obtain that as $n \to \infty$,
$$\pi_n = \pi_1 R^n \sim \eta^n cx.$$
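With finitely many phases the conditions above hold automatically and $\eta$ is simply the spectral radius of R. The following hedged numerical sketch (illustrative 2-phase blocks, not from the book) computes R from $A + RB + R^2C = R$ by fixed-point iteration and confirms the geometric decay $R^n \sim \eta^n zx$ through the norm ratio of successive powers:

```python
import numpy as np

# A 2-phase positive recurrent QBD (illustrative blocks; rows of A+B+C sum to 1).
A = np.array([[0.10, 0.05],
              [0.05, 0.10]])        # one level up
B = np.array([[0.30, 0.10],
              [0.10, 0.30]])        # same level
C = np.array([[0.30, 0.15],
              [0.15, 0.30]])        # one level down

# Minimal nonnegative solution of A + R B + R^2 C = R by fixed-point iteration.
R = np.zeros_like(A)
for _ in range(2000):
    R = A + R @ B + R @ R @ C

# Decay parameter: pi_n ~ eta^n c x with eta = sp(R) < 1 in the finite-phase case.
eta = max(abs(np.linalg.eigvals(R)))

# The norm ratio of successive powers of R approaches eta, mirroring R^n ~ eta^n z x.
r = np.linalg.norm(np.linalg.matrix_power(R, 11)) / np.linalg.norm(np.linalg.matrix_power(R, 10))
print(eta, r)
```

For these symmetric blocks the scalar equation on the eigenvector (1, 1) gives the exact value $\eta = 1/3$, which the iteration reproduces.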
We consider a double QBD process P given in Eq. (9.115) whose repeated blocks are given by § a1 ¨a ¨ 2 A ¨ ¨ ¨ ©
a0 a1 a2
B
§ b1 ¨ ¨ b2 ¨ ¨ ¨ ©
b0 b1 b2
C
§ c1 ¨c ¨ 2 ¨ ¨ ¨ ©
c0 c1 c2
a0 a1 a2
· ¸ ¸ a0 ¸, ¸ a1 a0 % % %¸¹
b0 b1 b2
· ¸ ¸ b0 ¸ ¸ b1 b0 ¸ % % %¹
c0 c1 c2
· ¸ ¸ c0 ¸. ¸ c1 c0 % % % ¸¹
and
We write D(K )
A K B K 2C § J (K ) O (K ) · ¨ P (K ) J (K ) O (K ) ¸ ¨ ¸ P (K ) J (K ) O (K ) ¨ ¸ ¨ ¸ P (K ) J (K ) O (K ) ¨ ¸ % % %¹ ©
518
9 Quasi-Stationary Distributions
and § J (K ) O (K ) · ¨ P (K ) J (K ) O (K ) ¸ ¨ ¸ E (K ) ¨ P (K ) J (K ) O (K ) ¸. ¨ ¸ P (K ) J (K ) O (K ) ¨ ¸ % % % © ¹ Let D (K ) be the radius of convergence for the matrix D (K ). We denote by x and y the D (K )-invariant measure and the D (K )-invariant vector of the matrix D(K ). To determine D (K ), we first need to compute the radius D (K ) of convergence for the matrix E (K ). Let v 1/ D (K ) and F ( z ) P (K ) zJ (K ) z 2 O (K ). Then solving the system of equations F (T ) vT , ® F c(T ) v, ¯ we obtain P (K ) , °T O (K ) ® °v J (K ) 2 O (K ) P (K ). ¯ Hence we have
D (K )
1
J (K ) 2 O (K ) P (K )
.
To provide the state classification of the matrix D (K ), we consider the censored matrix of D(K ) to level 0 which is given by § J (K ) · 4O (K ) P (K ) ·¸ J (K ) O (K ) P (K ) §¨ J (K ) U 0 (K ) 1 . ¨ 1¸ ¸ K K K2 2 O (K ) P (K ) ¨ © K ¹ 2
©
¹
Thus, we obtain f
q(K ) U 0 (K ) 1
¦D K k
k
,
k 0
Note that, in Condition 2: xe 1 and xy f, this indicates that the matrix D(K ) is D (K ) -positive recurrent. That is, we consider the QBD process with infinitely-many phases in which the matrix D(K ) must be D (K )-positive recurrent. 519
Constructive Computation in Stochastic Models with Applications
The following theorem provides a sufficient condition under which the matrix D(K ) is D (K ) -positive recurrent. The proof is clear, and thus is omitted here. Theorem 9.28 If D (K ) D (K ), then the matrix D(K ) must be D (K ) -positive recurrent. In this case, D 1/ K . If the matrix D(K ) is D (K ) -positive recurrent, then the D (K )-invariant measure is given by x0
z0
and z0 R0,1 (D ) R k 1 (D ),
xk
k ı 1,
f
where z0 ! 0 such that
¦x
1, or
k
k 0
z0 R(D )
1 , 1 R0,1 (D )[1 R (D )]1 1 DJ (K ) (DJ (K ) 1)2 4D 2 O (K )P (K ) 2DP (K )
and
O (K ) R(D ); O (K )
R0,1 (D )
and the D (K )-invariant vector is given by y0
w0
and w0 G1,0 (D )G k 1 (D ),
yk
k ı 1,
f
where w0 ! 0 such that
¦x y k
k
f, or
k 0
w0 G (D ) and
520
1 , 1 G0,1 (D )[1 G (D )]1 1 DJ (K ) (DJ (K ) 1)2 4D 2 O (K )P (K ) 2DO (K )
9 Quasi-Stationary Distributions
P (K ) G (D ). P (K )
G1,0 (D ) Note that f
¦ xk yk k 0
§ O (K ) P (K ) f § P (K ) ·n 2 n · z0 w0 ¨1 R (D ) ¸ , ¦ ¨ ¸ ¨ ¸ O (K )P (K ) n 1 © O (K ) ¹ © ¹
f
we obtain that
¦x y k
k
f. if and only if
k 0
n
f
§ P (K ) · 2 n ¦ ¨ ¸ R (D ) f. n 1 © O (K ) ¹
Thus we have
P (K ) 2 R (D ) 1. O (K ) For the double QBD process, if the matrix D(K ) is D (K ) -positive recurrent with D 1/ K , then R n ~ K n zx,
as n o f.
Thus, we obtain that as n o f. ʌ1 R n ~ K n cx,
ʌn
where c
ʌ1 z f
and z
9.10
( I U )(K 1 I G ) y.
Notes in the Literature
Quasi-stationary behaviour for block-structured Markov chains is not only theoretically important, but also it is found to have many interesting applications in practical areas, including biology by Scheffer [30], Holling [10], Pakes [24] and Pollett [26]; chemistry by Oppenheim, Shuler and Weiss[23], Parsons and Pollett [25] and Pollett [27]; telecommunications by Schrijner [31]; queues by Makimoto [20], Kijima and Makimoto [15], among others. Three excellent overviews on the quasi-stationarity behaviour can be found in Schrijiner [31], Kijima and Makimoto [15] and Pollett [29]. 521
Constructive Computation in Stochastic Models with Applications
The study of the quasi-stationary behavior was initiated by Yaglom[33]. Since then, significant advances have been made through the efforts of many researchers. For the block-structured transition matrices, those works were centered in obtaining probabilistic measures for expressing the radius of convergence, the state D -classification, and the quasi-stationary distributions. Kijima [14] first analyzed the quasi-stationary distributions of the Markov chains of GI/M/1 type without the boundary and the Markov chains of M/G/1 type without the boundary through the matrix R and the matrix G, respectively. Some preliminary results for expressing the quasi-stationary distributions of the Markov chains of GI/M/1 type and M/G/1 type were obtained in Li [17]. A complete work for the quasi-stationary distributions of the Markov chains of GI/M/1 type and M/G/1 type were obtained in Li [18,19]. On the other hand, the quasi-stationarity distributions of the QBD processes can be found in Makimoto [20],Bean, Bright, Latouche, Pearce, Pollett and Taylor [1], and Bean, Pollett and Taylor [3,4]. The concept of E -invariant measures is a generalization of the quasi-stationarity distributions, e.g., see Derman [5], Harris [9], Latouche, Pearce and Taylor [16], and Gail, Hantler and Taylor [7]. In this case, ʌ can still be interpreted probabilistically in terms of the movement of particles whose initial states are governed by Poisson distributions. Readers may refer to Derman [5], Kelly [12], Latouche, Pearce and Taylor [16], and Gail, Hantler and Taylor [17]. This chapter is mainly based on Li and Zhao [18,19], Kijima [14], Bean, Pollett and Taylor [3], Fujimoto, Takahashi and Makimoto [6], Bean and Nielsen [2] and Haque, and Zhao and Liu [8]. At the same time, we have also added some new results without publication for a more systematical organization.
Problems 9.1 For a Markov chain P of GI/G/1 type, provide probabilistic interpretation on the R-, U- and G-measures of the matrix E P. 9.2 Consider a continuous-time Markov chain Q of GI/G/1 type without the boundary, prove that Q is D -transient and provide expression for the quasistationary distribution. 9.3 Construct an irreducible Markov chain with finitely many levels, which is D -transient. 9.4 For an D -recurrent Markov chain of GI/G/1 type, provide conditions under which P is D -positive recurrent. 9.5 For a PH/PH/1/N queue, compute the quasi-stationary distribution when the server is in a busy period. 9.6 For a MAP/PH/1 queue with a repairable server, compute the quasi-stationary distribution before the server fails for the first time. 9.7 For a BMAP/M X/1 queue with a repairable server, compute the quasi522
9 Quasi-Stationary Distributions
stationary distribution before the server fails for the first time. 9.8 For a level-dependent QBD process, provide expression for the quasistationary distribution in terms of the LU-type RG-factorization. 9.9 Consider a two-demand queueing system, where there are three types of customers. The first and second types of customers can enter server one and server two for their services, respectively. Each arrival of the third type of customers simultaneously places two service demands to server one and server two. The arrivals of the three types of customers form Poisson processes with arrival rates O1 , O2 and O3 , respectively. The service times at the two servers are exponential with rates P1 and P 2 , respectively. The waiting rooms before the two servers are infinite. Use the QBD process with infinitely-many phases to analyze the decay parameter for the stationary probability vector. 9.10 Consider a pre-emptive priority queue with an infinite waiting room. Two types of Poisson customers arrive independently at rate O1 for lower priority customers and at rate O2 for higher priority customers. The two types of customers require the same exponential service time at rate P . with the preemptive rule, a higher priority customer, upon arrival, passes all lower priority customers in the queue or takes over the service if a lower priority customer is currently being served. Use the QBD process with infinitely-many phases to analyze the decay parameter for the stationary probability vector.
References 1. Bean N.G., L. Bright, G. Latouche, C.E.M. Pearce, P.K. Pollett and P.G. Taylor (1997). The quasi-stationary behavior of quasi-birth-and-death processes. The Annals of Applied Probability 7: 134 155 2. Bean N.G. and B.F. Nielsen (2002). Decay rates of discrete phase-type distributions with infinitely-many phases. In Matrix-analytic methods: Theory and Applications, G. Latouche and P.G., Taylor (eds.), World Scientific, 17 38 3. Bean N.G., P.K. Pollett and P.G. Taylor (1998). The quasistationary distributions of level-independent quasi-birth-and-death processes. Stochastic Models 14: 389 406 4. Bean N.G., P.K. Pollett and P.G. Taylor (2000). Quasistationary distributions for leveldependent quasi-birth-and-death processes. Stochastic Models 16: 511 541 5. Derman C. (1955). Some contributions to the theory of denumerable Markov chains. Transactions of the American Mathematical Society 79: 541 555 6. Fujimoto K., Y. Takahashi and N. Makimoto (1998). Asymptotic properties of stationary distributions in two-stage tandem queueing systems. Journal of the Operations Research Society of Japan 41: 118 141 7. Gail H.R., S.L. Hantler and B.A. Taylor (1998). Matrix-geometric invariant measures for G/M/1 type Markov chains. Stochastic Models 14: 537 569 8. Haque L., Y.Q. Zhao and L. Liu (2005). Sufficient conditions for a geometric tail in a QBD process with many countable levels and phases. Stochastic Models 21: 77 99 523
Constructive Computation in Stochastic Models with Applications
9. Harris T.E. (1957). Transient Markov chains with stationary measures. Proceedings of the American Mathematical Society 8: 937 942 10. Holling C.S. (1973). Resilience and stability of ecological systems. Ann. Rev. Ecol. Systematics 4: 1 23 11. Horn R.A. and C.R. Johnson (1985). Matrix Analysis, Cambridge University Press 12. Kelly F.P. (1983). Invariant measures and the Q-matrix, probability, statistics and analysis. In London Math. Soc. Lecture Notes, Vol. 79, J.F.C. Kingman and G.E.H. Reuter (eds), 143 160 13. Kemeny J.G., J.L. Snell and A.W. Knapp (1976). Denumerable Markov Chains (Second Edition), Springer-Verlag: New York 14. Kijima M. (1993). Quasi-stationary distributions of single-server phase-type queues. Mathematics of Operations Research 18: 423 437 15. Kijima M. and N. Makimoto (1999). Quasi-stationary distributions of Markov chains arising from queueing processes. In Applied probability and stochastic processes, J.G. Shanthikumar and U. Sumita (eds), Kluwer Academic Publishers, 277 311 16. Latouche G., C.E.M. Pearce and P.G. Taylor (1998). Invariant measures for quasi-birthdeath processes. Stochastic Models 14: 443 460 17. Li Q.L. (1997). Stochastic Integral Functionals and Quasi-Stationary Distributions in Stochastic Models, Ph.D. Thesis, Institute of Applied Mathematics, Chinese Academy of Sciences, Beijing 100080, China 18. Li Q.L. and Y.Q. Zhao (2002). A constructive method for finding E -invariant measures for transition matrices of M/G/1 type. In Matrix Analytic Methods: Theory and Applications. G. Latouche and P.G. Taylor (eds), World Scientific: New Jersey, 237 263 19. Li Q.L. and Y.Q. Zhao (2003). E -invariant measures for transition matrices of GI/M/1 type. Stochastic Models 19: 201 233 20. Makimoto N. (1993). Quasi-stationary distributions in a PH/PH/c queue. Stochastic Models, 9: 195 212 21. Neuts M.F. (1981). Matrix-Geometric Solutions in Stochastic Models-An Algorithmic Approach, The Johns Hopkins University Press: Baltimore 22. 
Neuts M.F. (1989). Structured Stochastic Matrices of M/G/1 Type and Their Applications, Marcel Dekker: New York 23. Oppenheim I., K.E. Shuler and G.H. Weiss (1977). Stochastic theory of nonlinear rate processes with multiples stationary states. Journal of Physics A 88: 191 214 24. Pakes A.G. (1987). Limit theorems for the population size of a birth and death process allowing catastrophes. Journal of Mathematical Biology 25: 307 325 25. Parsons R.W. and P.K. Pollett (1987). Quasistationary distributions for autocatalytic reactions. Journal of Statistical Physics 46: 249 254 26. Pollett P.K. (1987). On the long-term behaviour of a population that is subject to largescale mortality or emigration. In Proceedings of the 8th National Conference of the Australian Society for Operations Research, S. Kumar (ed.), 196 207 27. Pollett P.K. (1988). Reversibility, invariance and P -invariance. Advances in Applied Probability 20: 600 621 524
9 Quasi-Stationary Distributions
28. Pollett P.K. and P.G. Taylor (1993). On the problem of establishing the existence of stationary distributions for continuous-time Markov chains. Probability in Engineering and Informational Science 7: 529 543 29. Pollett P.K. (2008). Quasi-stationary distributions: A bibliography. http://www.maths.uq.edu.au/ pkp/papers/qsds.html 30. Scheffer V.B. (1951). The rise and fall of a reindeer herd. Sci. Monthly 73: 356 362 31. Schrijner P. (1995). Quasi-Stationarity of Discrete-Time Markov Chains, Ph.D. Thesis, University of Twente, The Netherlands 32. Seneta F. (1981). Non-negative Matrices and Markov Chains, Springer-Verlag: New York 33. Yaglom A.M. (1947). Certain limit theorems of the theory of branching stochastic processes (in Russian). Doklady Akademii Nauk SSSR 56: 795 798
525
10
Markov Reward Processes
Quan-Lin Li Department of Industrial Engineering Tsinghua University Beijing 100084, P.R. China E-mail: [email protected] Abstract In this chapter, we consider reward processes of an irreducible continuous-time block-structured Markov chain. By using the RG-factorizations, we provide a unified algorithmic framework to derive expressions for conditional distributions and conditional moments of the reward processes. As an important example, we study the reward processes for an irreducible continuous-time level-dependent QBD process with either finitely-many levels or infinitely-many levels. At the same time, we provide a simple introduction to the reward processes of an irreducible discrete-time block-structured Markov chain. Keywords stochastic models, RG-factorization, reward process, accumulated reward, reward rate, the first accumulated time.
In this chapter, we consider various reward processes of an irreducible continuoustime block-structured Markov chain. By using the UL- and LU-types of RGfactorizations, we provide a unified algorithmic framework to derive expressions for conditional distributions and conditional moments of the reward processes. As an important example, we study the reward processes for an irreducible continuous-time level-dependent QBD process with either finitely-many levels or infinitely-many levels. At the same time, we provide a simple introduction to the reward processes of an irreducible discrete-time block-structured Markov chain. This chapter is organized as follows. Section 10.1 considers reward processes of an irreducible continuous-time block-structured Markov chain in terms of the UL- and LU-types of RG-factorizations. Section 10.2 deals with the transient accumulated rewards in terms of the partial differential equations. Section 10.3 computes moments of both the accumulated reward and the first accumulated time to a given reward. Section 10.4 analyzes an accumulated reward process for an irreducible continuous-time level-dependent QBD process in terms of a system of infinite-dimensional linear equations. Sections 10.5 and 10.6 study an
10
Markov Reward Processes
up-type reward process of the QBD process with finitely-many levels and with infinitely-many levels, respectively. Section 10.7 discusses a down-type reward process and a return-type reward process for the QBD process. Section 10.8 provides a simple introduction to the reward processes for an irreducible discrete-time block-structured Markov chain. Finally, Section 10.9 summarizes notes for the references related to the results of this chapter.
10.1
Continuous-Time Markov Reward Processes
In this section, we consider various reward processes of an irreducible continuoustime block-structured Markov chain in terms of the UL- and LU-types of RG-factorizations, and provide expressions for the conditional distributions and onditioncel moments of the reward processes. We consider an irreducible continuous-time block-structured Markov chain {xt , t ı 0} on the state space : {(k , j ) : k ı 0,1 İ j İ mk } whose infinitesimal generator is given by § D0,0 ¨ ¨ D1,0 ¨ D2,0 ¨ © #
D0,1 D1,1 D2,1
ʌk , j (t )
P{xt
Q
#
D0,2 ! · ¸ D1,2 ! ¸ . D2,2 ! ¸ ¸ # ¹
Let (k , j )},
ʌk (t ) (ʌ k ,1 (t ), ʌ k ,2 (t ),! , ʌ k , mk (t )) and ʌ (t )
(ʌ0 (t ), ʌ1 (t ), ʌ 2 (t ),!).
Then it is clear that
d ʌ (t ) dt
ʌ (t )Q,
(10.1)
which leads to ʌ (t )
ʌ (0) exp{Qt},
where S (0) is the initial probability vector of the Markov chain Q. If the Markov chain Q is positive recurrent, then the limit ʌ lim ʌ (t ) exists, t of and it is clear that ʌQ 0 and ʌe 1 , Where e is a column vector of ones with suitable size. Further, if ʌ ʌ , then ʌ (t ) ʌ for all t ı 0. 527
Constructive Computation in Stochastic Models with Applications
Let L(t )
t
³ ʌ ( x)dx. We call L(t ) 0
a cumulative state probability vector of the
Markov chain Q. At the same time, Lk ,i (t ) is the expected total time spent by the Markov chain Q at state (k , i ) during the time interval [0, t ). It is easy to check that d L(t ) dt
L(t )Q S (0).
(10.2)
Thus we have L(t )
Specifically, we write W
³
f 0
t
ʌ (0) ³ exp{Qx}dx.
(10.3)
0
ʌ ( x)dx. Note that lim dL(t ) dt
0, it follows from
t of
Eq. (10.2) that
W Q ʌ (0)
(10.4)
Using the UL-type RG-factorization, we obtain
W
1 ʌ (0)Qmax
1
1
ʌ (0)( I GL ) U D1 ( I RU ) .
For U D1
diag(U 01 ,U11 ,U 21 ,!),
it is worthwhile to note that U 01 is the ordinary inverse if the Markov chain Q is transient; U 01 is the group inverse U 0# if the Markov chain Q is recurrent. Note that the group inverse is given by U 01
1
(U 0 ex0 ) ex0 ,
where x0 is the sectionary probability vector of the Markov chain U 0 . On the other hand, applying the LU-type RG -factorization we obtain
W
1 ʌ (0)Qmax
1
1
ʌ (0)( I GU ) U D1( I RL ) .
Note that U 01 is the ordinary inverse of the censored matrix U 0 for any irreducible Markov chain Q. Remark 10.1 (1) If E : , then W e ¦ W k ,i is the expected total time ( k ,i )E
spent by the Markov chain Q in the state set E. (2) If Qe 9 0, then the Markov chain Q contains at least one absorbing state, hence W e
¦
( k ,i ):
528
W k ,i is the mean time to absorption.
10
Markov Reward Processes
Let f ( x) be a real function which guarantees that the random variable f ( X t ) is finite for t ı 0 a.s.. Then f ( X t ) is an instantaneous reward rate at time t. We now consider the instantaneous reward rate f ( X t ) through Eq. (10.5) to Eq. (10.9), which can be easily proved by the law of total probability.
10.1.1
The Expected Instantaneous Reward Rate at Time t
¦
E[ f ( X t )]
f
¦ʌ
ʌ k ,i (t ) f ( k , i)
( k ,i ):
k
(t ) f k
ʌ (t ) f ,
(10.5)
k 0
where fk
T
( f (k ,1), f (k , 2),!, f (k , mk )) ,
T
( f 0T , f1T , f 2T ,!) .
f
If the Markov chain Q is stable, then the limit of the expected instantaneous reward rate f ( X t ) is given by
¦: ʌ
lim E[ f ( X t )]
t of
10.1.2
f
k ,i f (k , i )
( k , i )
¦ʌ
k
fk
ʌf .
(10.6)
k 0
The nth Moment of the Instantaneous Reward Rate at Time t n
¦: ʌ
E[ f ( X t ) ]
k ,i
(t ) f (k , i) n .
(10.7)
( k ,i )
If the Markov chain Q is stable, then the limit of the nth moment of the instantaneous reward rate f ( X t ) is given by n lim E ª¬ f ( X t ) º¼
¦: ʌ
t of
10.1.3
k i
f (k i) n .
(10.8)
( k i )
The Distribution of the Instantaneous Reward Rate at Time t
Note that P{ f ( X t ) İ x}
¦
P{ f ( X t ) İ x, X t
(k , i )}
¦
P{ f ( k , i) İ x, X t
( k , i)}
¦
ʌk ,i (t ).
( k ,i ):
( k , i ):
f ( k , i )İ x ( k , i ):
529
Constructive Computation in Stochastic Models with Applications
If the Markov chain Q is stable, then
¦
lim P{ f ( X t ) İ x}
t of
10.1.4
ʌ k ,i .
(10.9)
f ( k , i )İ x ( k , i ):
The Accumulated Reward Over [0,t)
For the instantaneous reward rate f ( X t ), an accumulated reward over the time interval [0 t ) is defined as
) (t )
³
t 0
f ( X u )du.
(10.10)
We assume that the real function f ( x) makes the random variable ) (t ) f for t ı 0, a.s.. The accumulated reward process ) (t ) may represent many interesting performance measures of a stochastic model. For example, we analyze an irreducible QBD process: (1) if f (k , j ) 1 for m İ k İ n, then ) (t ) represents the sojourn time of the QBD process on levels m to n in the time interval [0, t ]. (2) If f (k , j ) is a reward associated with operating at state (k , j ) for (k , j ) : , then ) (t ) represents the total reward of the QBD process in the time interval [0, t ] .
10.1.5
The Expected Accumulated Reward Ɏ(t) Over [0,t)
Note that E[) (t )]
t E ª ³ f ( X u )du º ¬« 0 ¼» t
¦:
f ( k , i ) ³ P{ X u
¦
Lk ,i (t ) f ( k , i),
( k , i )
( k , i)}du
0
( k ,i ):
we obtain E[) (t )]
¦: L
k ,i
f
(t ) f (k , i )
( k , i )
10.1.6
k
k
L(t ) f .
(10.11)
k 0
The nth Moment of the Accumulated Reward Ɏ(t) Over [0,t) n
E[) (t ) ]
¦: L
k ,i
( k , i )
530
¦ L (t ) f
(t ) f (k , i ) n .
(10.12)
10
Markov Reward Processes
As a practical application, we use the Markov reward process to analyze a finite-buffer M M 1m queue with server breakdowns and repairs. A reward rate of 1 is assigned to all the system operational states and a reward rate of 0 is assigned to all the system failure states. The instantaneous availability of the system is E[ f ( X (t ))] and the cumulative operational time of the system in the time interval [0, t ) is E[) (t )]. The interval availability of the system in the time interval [0, t ) is E[) (t )]t , and the stationary availability of the system is
lim E[) (t )]t. On the other hand, the measures related to the time to the system
t of
first failure are of interest. To compute these measures, all the failure states are regarded as an absorbing state. In this case, the reliability function is E[ f ( X (t ))]. The lifetime of the system in the time interval [0, t ) is E[) (t )] and the mean time to the system first failure is E[) (f)].
10.2
The Transient Accumulated Rewards
In this section, we analyze the transient accumulated rewards of an irreducible continuous-time block-structured Markov chain in terms of a useful method devoloped by the partial differential equations. Based on the real function f ( x) and the Markov chain Q, the transient probability distribution of the accumulated reward ) (t ) is defined by
4 (t , x) P{) (t ) İ x}. We write ° H k , j (t , x) P{) (t ) İ x, X (t ) (k , j )}, ® °¯ H k (t , x) ( H k ,1 (t , x), H k ,2 (t , x),! , H k , mk (t , x))
(10.13)
and H (t , x) ( H 0 (t , x), H1 (t , x), H 2 (t , x),!); ' ( f k ) diag( f (k ,1), f (k ,1),! , f (k , mk )) and
' diag ' ( f 0 ), ' ( f1 ), ' ( f 2 ),! . It is clear that
4 (t , x) H (t , x)e. For the Markov reward process {) (t ), t ı 0}, it is easy to obtain the Kolmogorov’s forward equation as follows: 531
Constructive Computation in Stochastic Models with Applications
wH k , j (t , x) wt
wH k , j (t , x) wx
¦
f (k , j )
H l ,i (t , x) D(l ,i ),( k , j ) ,
(10.14)
( l ,i ):
or f
wH k (t , x) wH k (t , x) '( fk ) wt wx
¦ H (t , x) D l
l ,k
,
l 0
Therefore, we have wH (t , x) wH (t , x) ' wt wx
H (t , x)Q
(10.15)
with the initial conditions H (t ,0)
ʌ (0)G (t )
(10.16)
H (0, x)
ʌ (0)G ( x).
(10.17)
and
Remark 10.2 Let Lk , j (t , x) P{) (t ) İ x, X (t ) (k , j )} and L(t , x) H T(t , x). Then the Kolmogorov’s backward equation is given by
wLk , j (t , x) wt
f (k , j )
wLk , j (t , x)
¦
wx
D( k , j )(l ,i ) Ll ,i (t , x ),
(10.18)
( l , i ):
which leads to wL(t , x) wL(t , x) ' wt wx
QL(t , x),
(10.19)
with the initial conditions L(t ,0)
eG (t )
L(0, x)
eG ( x).
and
It is easy to see that 4 (t , x) ʌ (0) L(t , x). We introduce the necessary notation for the Laplace transform and the Laplace-Stieltjes transform as follows: f
A* ( s )
³
A ( s )
³
0
e st A(t )dt
and
532
f 0
e st dA(t ).
10
Markov Reward Processes
Now, we solve the Kolmogorov’s forward equation Eq. (10.15) with the initial conditions Eq. (10.16) and Eq. (10.17). Taking the Laplace transform of Eq. (10.15) with respect to t o s, we obtain sH * ( s, x) H (0, x)
Note that H (0, x)
dH * ( s , x ) ' dx
H * ( s, x)Q.
ʌ (0)G ( x) in Eq. (10.17), we obtain dH * ( s, x) ' dx
H * ( s, x)(Q sI ) ʌ (0).
(10.20)
Taking the Laplace-Stieltjes transform of Eq. (10.20) with respect to x o u , we obtain H *~ ( s, u )[Q ( sI u ' )] ʌ (0), 1
H *~ ( s , u )
ʌ (0)[Q ( sI u ' )]max ,
H ~~ ( s, u )
1 sʌ (0)[Q ( sI u' )]max .
which leads to
Let S ~~ ( s, u )
1
s[Q ( sI u ' )]max.
Then H ~~ ( s, u )
ʌ S ~~ ( s, u ).
Note that for s, u ı 0, the matrix Q ( sI u ' ) is the infinitesimal generator of an irreducible Markov chain, we have the UL-type RG-factorization Q ( sI u ' ) [ I RU ( s, u )]U D ( s, u )[ I GL ( s, u )], which leads to H ~~ ( s, u )
1
1
sʌ (0)[ I GL ( s, u )] U D1 ( s, u )[ I RU ( s, u )] ,
where U D1 ( s, u )
diag(U 01 (s, u ),U11 (s , u ),U 21 (s, u ),!),
On the other hand, we obtain the LU-type RG-factorization Q ( sI u ' ) [ I RL ( s, u )]U D ( s, u )[ I GU ( s, u )],
hence we have H ~~ ( s, u )
1
1
sʌ (0)[ I GU ( s, u )] U D1 ( s, u )[ I RL ( s, u )] . 533
Constructive Computation in Stochastic Models with Applications
10.3
The First Accumulated Time
In this section, we analyze the first accumulated time of the reward process to a given reward, and obtain the conditional distribution of the first accumulated time. Let * ( x) be the first accumulated time to a given reward x. Then
* ( x) min{t : ) (t )
x}.
(10.21)
We write C (t , x)
P{* ( x) İ t}.
(10.22)
Since ) (t ) is the accumulated reward over the time interval [0, t ), it is clear that the event * ( x) İ t is equivalent to the event ) (t ) ! x. Hence we get C (t , x) 1 P{) (t ) İ x} 1 4 (t , x).
(10.23)
Let the given reward W be a random variable with the distribution w( x ) P{W İ x}. Then P{* (W ) İ t}
C (t )
³
f 0
C (t , x)dw( x).
(10.24)
Now, we compute the distributions of the accumulated reward ) (t ) and the first accumulated time * ( x) to a given reward x . We define
S(l ,i ),( k , j ) (t , x)
P{) (t ) İ x, X (t )
(k , j ) _ X (0) (l , i )}
(10.25)
T(l ,i ),( k , j ) (t , x)
P{* ( x) İ t , X (t )
(k , j ) _ X (0)
(10.26)
and (l , i )}.
We write Sl ,k (t , x) S (t , x) Tl ,k (t , x) T (t , x)
( S(l ,i ),( k , j ) (t , x))1İiİm ,1İ jİm , l
k
( Sl k (t , x)) l kı0; (T(l ,i ),( k , j ) (t , x))1İiİm ,1İ jİm , l
k
(Tl , k (t , x)) l k ı0.
It is clear that
4 (t , x) ʌ (0) S (t , x)e,
(10.27)
C (t , x) ʌ (0)T (t , x)e
(10.28)
and [ S (t , x) T (t , x)]e 534
e.
10
Markov Reward Processes
The following theorem provides two useful relations for the two matrices \tilde{\tilde{S}}(s,u) and \tilde{\tilde{T}}(s,u), respectively.

Theorem 10.1

    (sI + u\Delta - Q) \tilde{\tilde{S}}(s,u) = sI    (10.29)

and

    (sI + u\Delta - Q) \tilde{\tilde{T}}(s,u) = u\Delta - Q.    (10.30)

Proof  Consider an exponentially distributed reward requirement W with parameter u. Then it follows from Eq. (10.23) and Eq. (10.24) that

    C(t,u) = \int_0^{\infty} C(t,x) \, d[1 - e^{-ux}] = u \int_0^{\infty} [1 - \Theta(t,x)] e^{-ux} \, dx = 1 - \tilde{\Theta}(t,u).

Note that C(t,u) is of phase type with irreducible representation (\pi(0), Q - u\Delta), thus we obtain

    1 - \tilde{\Theta}(t,u) = C(t,u) = 1 - \pi(0) \exp\{(Q - u\Delta)t\} e,

which leads to

    \tilde{\Theta}(t,u) = \pi(0) \exp\{(Q - u\Delta)t\} e.

It follows from Eq. (10.27) and Eq. (10.28) that

    \tilde{S}(t,u) = \exp\{(Q - u\Delta)t\},    (10.31)

which leads to Eq. (10.29). Note that

    \tilde{\tilde{T}}(s,u) = I - \tilde{\tilde{S}}(s,u),

we have

    (sI + u\Delta - Q) \tilde{\tilde{T}}(s,u) = (sI + u\Delta - Q)[I - \tilde{\tilde{S}}(s,u)] = u\Delta - Q.

This completes the proof.

It is easy to check that the matrix Q - (sI + u\Delta) for s, u \geq 0 is the infinitesimal generator of an irreducible Markov chain. Thus, we obtain

    \tilde{\tilde{S}}(s,u) = -s [Q - (sI + u\Delta)]^{-1}_{\max}    (10.32)

and

    \tilde{\tilde{T}}(s,u) = -[Q - (sI + u\Delta)]^{-1}_{\max} (u\Delta - Q).    (10.33)
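The transform \tilde{\Theta}(t,u) = \pi(0)\exp\{(Q - u\Delta)t\}e obtained in the proof is easy to evaluate numerically. The following minimal sketch uses a hypothetical two-state generator and reward rates, with a dependency-free Taylor-based matrix exponential, and checks that u = 0 recovers total probability mass one:

```python
import numpy as np

def expm(A, terms=30, squarings=10):
    """Matrix exponential via scaled Taylor series (adequate for small matrices)."""
    A = np.asarray(A, dtype=float) / (2 ** squarings)
    E, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    for _ in range(squarings):
        E = E @ E
    return E

def theta_tilde(t, u, pi0, Q, delta):
    """Theta~(t,u) = pi(0) exp{(Q - u*Delta) t} e, the transform in x of Theta(t,x)."""
    e = np.ones(Q.shape[0])
    return pi0 @ expm((Q - u * np.diag(delta)) * t) @ e

# Hypothetical 2-state generator, reward rates f(1), f(2), and initial vector.
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
delta = np.array([3.0, 0.5])
pi0 = np.array([0.6, 0.4])

print(theta_tilde(5.0, 0.0, pi0, Q, delta))   # u = 0 recovers total mass 1
print(theta_tilde(5.0, 1.0, pi0, Q, delta))   # strictly below 1 for u > 0
```

Since \tilde{\Theta}(t,u) = E[e^{-u\Phi(t)}], the value decreases in u, which gives a second sanity check.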
It is clear that

    \tilde{\tilde{T}}(s,u) = \frac{1}{s} \tilde{\tilde{S}}(s,u)(u\Delta - Q).

Note that

    [Q - (sI + u\Delta)]^{-1}_{\max} = [I - G_L(s,u)]^{-1} U_D^{-1}(s,u) [I - R_U(s,u)]^{-1}

or

    [Q - (sI + u\Delta)]^{-1}_{\max} = [I - G_U(s,u)]^{-1} U_D^{-1}(s,u) [I - R_L(s,u)]^{-1}.

Finally, we compute the first moments of the random variables \Phi(t) and \Gamma(x). It follows from Eq. (10.3) and Eq. (10.11) that

    E[\Phi(t)] = \pi(0) \int_0^t \exp\{Qx\} \, dx \, \Delta e.    (10.34)

It follows from Eq. (10.22) and Eq. (10.23) that

    P\{\Gamma(x) \leq t\} = 1 - P\{\Phi(t) \leq x\},

which leads to

    E[\Gamma(x)] = \int_0^{\infty} P\{\Phi(t) \leq x\} \, dt.

Let \varphi(x) = E[\Gamma(x)]. Then

    \tilde{\varphi}(u) = \int_0^{\infty} \tilde{\Theta}(t,u) \, dt = \int_0^{\infty} \pi(0) \exp\{(Q - u\Delta)t\} e \, dt = -\pi(0)(Q - u\Delta)^{-1}_{\max} e.    (10.35)

10.4  Computation of the Reward Moments
In this section, we provide a method for computing moments of the accumulated reward and the first accumulated time to a given reward.
10.4.1  The Moments of the Transient Accumulated Reward
It follows from Eq. (10.15) that

    \frac{\partial H(t,x)}{\partial t} + \frac{\partial H(t,x)}{\partial x} \Delta = H(t,x) Q

with the initial conditions

    H(t,0) = \pi(0)\delta(t)

and

    H(0,x) = \pi(0)\delta(x).

We obtain

    \frac{\partial \tilde{H}(t,u)}{\partial t} = \tilde{H}(t,u)(Q - u\Delta)

with the initial condition \tilde{H}(0,u) = \pi(0). Thus we have

    \tilde{H}(t,u) = \pi(0) \exp\{(Q - u\Delta)t\},

which leads to

    \tilde{\Theta}(t,u) = \pi(0) \exp\{(Q - u\Delta)t\} e.

It is clear that for n \geq 1,

    E[\Phi(t)^n] = (-1)^n \frac{\partial^n}{\partial u^n} \tilde{\Theta}(t,u) \Big|_{u=0}.

We write

    \Lambda(n,i) = \frac{\partial^n}{\partial u^n} (Q - u\Delta)^i \Big|_{u=0},   n, i \geq 1.

The following lemma provides an iterative relationship for the matrices \Lambda(n,i) for n, i \geq 1. The proof is clear, and thus is omitted here.

Lemma 10.1  For n, i \geq 0,

    \Lambda(0,0) = I,
    \Lambda(n,0) = 0,   n \geq 1,
    \Lambda(0,i) = Q^i,   i \geq 1,
    \Lambda(1,1) = -\Delta,   \Lambda(n,1) = 0,   n \geq 2,
    \Lambda(n,i) = Q\Lambda(n,i-1) - n\Delta\Lambda(n-1,i-1),   n \geq 1, i \geq 2.
The following theorem provides an expression for the nth moment E[\Phi(t)^n].

Theorem 10.2  For n \geq 1,

    E[\Phi(t)^n] = (-1)^n \pi(0) \sum_{i=0}^{\infty} \frac{t^i}{i!} \Lambda(n,i) e.    (10.36)

Proof  It follows from Eq. (10.27) that

    \tilde{\Theta}(t,u) = \pi(0) \exp\{(Q - u\Delta)t\} e = \pi(0) \sum_{i=0}^{\infty} \frac{t^i}{i!} (Q - u\Delta)^i e.

Hence, we obtain

    E[\Phi(t)^n] = (-1)^n \frac{\partial^n}{\partial u^n} \tilde{\Theta}(t,u) \Big|_{u=0} = (-1)^n \pi(0) \sum_{i=0}^{\infty} \frac{t^i}{i!} \Lambda(n,i) e.

This completes the proof.
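Theorem 10.2 turns into a short computation once the \Lambda(n,i) table is filled using the product-rule recursion \Lambda(n,i) = Q\Lambda(n,i-1) - n\Delta\Lambda(n-1,i-1) behind Lemma 10.1. The sketch below uses a hypothetical two-state generator and reward rates, and cross-checks the first moment of (10.36) against the series expansion of Eq. (10.34), E[\Phi(t)] = \sum_{i \geq 1} (t^i/i!)\, \pi(0) Q^{i-1} \Delta e:

```python
import math
import numpy as np

def lam_table(nmax, imax, Q, Delta):
    """Lambda(n,i) = d^n/du^n (Q - u*Delta)^i at u = 0, via the recursion of Lemma 10.1."""
    m = Q.shape[0]
    L = {}
    for i in range(imax + 1):
        L[(0, i)] = np.linalg.matrix_power(Q, i)
    for n in range(1, nmax + 1):
        L[(n, 0)] = np.zeros((m, m))
        for i in range(1, imax + 1):
            L[(n, i)] = Q @ L[(n, i - 1)] - n * Delta @ L[(n - 1, i - 1)]
    return L

def phi_moment(n, t, pi0, Q, Delta, K=60):
    """E[Phi(t)^n] via the series (10.36), truncated at K terms."""
    e = np.ones(Q.shape[0])
    L = lam_table(n, K, Q, Delta)
    return (-1) ** n * sum(t ** i / math.factorial(i) * (pi0 @ L[(n, i)] @ e)
                           for i in range(K + 1))

# Hypothetical data.
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
Delta = np.diag([3.0, 0.5])
pi0 = np.array([0.6, 0.4])
t, K = 2.0, 60
e = np.ones(2)
direct = sum(t ** i / math.factorial(i) *
             (pi0 @ np.linalg.matrix_power(Q, i - 1) @ Delta @ e)
             for i in range(1, K + 1))          # series form of Eq. (10.34)
print(phi_moment(1, t, pi0, Q, Delta, K), direct)
```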
10.4.2  The Moments of the First Accumulated Time
It is complicated to compute the function E[\Gamma(x)^n] for n \geq 1, thus we consider the following two cases:

Case I  f(k,i) \neq 0 for all k \geq 0 and 1 \leq i \leq m_k.

It follows from Eq. (10.15) that

    \frac{d\tilde{H}(s,x)}{dx} = \tilde{H}(s,x)(Q - sI)\Delta^{-1}

with the initial condition \tilde{H}(s,0) = \pi(0). Therefore, we get

    \tilde{H}(s,x) = \pi(0) \exp\{(Q - sI)\Delta^{-1} x\},

which leads to

    \tilde{\Theta}(s,x) = \pi(0) \exp\{(Q - sI)\Delta^{-1} x\} e.

Note that

    E[\Gamma(x)^n] = (-1)^n \frac{\partial^n}{\partial s^n} \tilde{\Theta}(s,x) \Big|_{s=0}.

Let

    M(n,i) = \frac{\partial^n}{\partial s^n} \left[ ((Q - sI)\Delta^{-1})^i \right] \Big|_{s=0},   n, i \geq 0.

Then

    M(0,0) = I,
    M(n,0) = 0,   n \geq 1,
    M(0,i) = (Q\Delta^{-1})^i,   i \geq 1,
    M(n,i) = Q\Delta^{-1} M(n,i-1) - n\Delta^{-1} M(n-1,i-1),   n, i \geq 1.

Therefore, we obtain

    E[\Gamma(x)^n] = (-1)^n \pi(0) \sum_{i=0}^{\infty} \frac{x^i}{i!} M(n,i) e.

Now, we provide another method for computing E[\Gamma(x)^n]. Let \varphi^{(n)}(x) = E[\Gamma(x)^n] and

    \breve{\varphi}^{(n)}(u) = \int_0^{\infty} e^{-ux} \, d\varphi^{(n)}(x).

Then

    \breve{\varphi}^{(n)}(u) = (-1)^n \frac{\partial^n}{\partial s^n} \tilde{\tilde{C}}(s,u) \Big|_{s=0}
    = (-1)^n \pi(0) \frac{\partial^n}{\partial s^n} \tilde{\tilde{T}}(s,u) e \Big|_{s=0}
    = (-1)^{n+1} \pi(0) \frac{\partial^n}{\partial s^n} [Q - (sI + u\Delta)]^{-1}_{\max} u\Delta e \Big|_{s=0}
    = (-1)^n \pi(0) \frac{\partial^n}{\partial s^n} \sum_{i=0}^{\infty} (-1)^i (I + su^{-1}\Delta^{-1} - u^{-1}\Delta^{-1}Q)^i e \Big|_{s=0}.

We write

    \upsilon(n,i) = \frac{\partial^n}{\partial s^n} (I + su^{-1}\Delta^{-1} - u^{-1}\Delta^{-1}Q)^i \Big|_{s=0},   n, i \geq 0.

Thus we obtain

    \breve{\varphi}^{(n)}(u) = (-1)^n \pi(0) \sum_{i=0}^{\infty} (-1)^i \upsilon(n,i) e.
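The M(n,i) recursion of Case I can be checked against a degenerate example: with a constant reward rate f \equiv c (so \Delta = cI) the reward accumulates deterministically and \Gamma(x) = x/c, whatever the generator. A minimal sketch with hypothetical numbers:

```python
import math
import numpy as np

def m_table(nmax, imax, Q, Delta):
    """M(n,i) = d^n/ds^n ((Q - sI) Delta^{-1})^i at s = 0, via the recursion above."""
    d = Q.shape[0]
    Dinv = np.linalg.inv(Delta)
    QD = Q @ Dinv
    M = {}
    for i in range(imax + 1):
        M[(0, i)] = np.linalg.matrix_power(QD, i)
    for n in range(1, nmax + 1):
        M[(n, 0)] = np.zeros((d, d))
        for i in range(1, imax + 1):
            M[(n, i)] = QD @ M[(n, i - 1)] - n * Dinv @ M[(n - 1, i - 1)]
    return M

def gamma_moment(n, x, pi0, Q, Delta, K=60):
    """E[Gamma(x)^n] by the truncated series of Case I (f(k,i) != 0)."""
    e = np.ones(Q.shape[0])
    M = m_table(n, K, Q, Delta)
    return (-1) ** n * sum(x ** i / math.factorial(i) * (pi0 @ M[(n, i)] @ e)
                           for i in range(K + 1))

# Constant rate f = c, so Gamma(x) = x/c and the first two moments are x/c, (x/c)^2.
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
c = 2.5
pi0 = np.array([0.3, 0.7])
Delta = c * np.eye(2)
print(gamma_moment(1, 4.0, pi0, Q, Delta), gamma_moment(2, 4.0, pi0, Q, Delta))
```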
Case II  f(k,i) \neq 0 for 0 \leq k \leq N and 1 \leq i \leq m_k, while f(k,i) = 0 for k \geq N+1 and 1 \leq i \leq m_k. In this case, we write

    \pi(0) = (\pi_1(0), \pi_2(0)),   \Delta = \mathrm{diag}(\Delta_1, 0)

and

    Q = \begin{pmatrix} Q_{1,1} & Q_{1,2} \\ Q_{2,1} & Q_{2,2} \end{pmatrix}

according to E = \{\text{Level } k : 0 \leq k \leq N\} and E^c = \{\text{Level } k : k \geq N+1\}. Hence

    [Q - (sI + u\Delta)]^{-1}_{\max} = \begin{pmatrix} Q_{1,1} - (sI + u\Delta_1) & Q_{1,2} \\ Q_{2,1} & Q_{2,2} - sI \end{pmatrix}^{-1}_{\max}.

Let

    Q - (sI + u\Delta) = \begin{pmatrix} A & B \\ C & D \end{pmatrix},

where A = Q_{1,1} - (sI + u\Delta_1), B = Q_{1,2}, C = Q_{2,1} and D = Q_{2,2} - sI. Then

    \begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} F^{-1} & -F^{-1}BD^{-1} \\ -D^{-1}CF^{-1} & D^{-1} + D^{-1}CF^{-1}BD^{-1} \end{pmatrix},

where F = A - BD^{-1}C. It follows from Eq. (10.28) and Eq. (10.33) that

    \tilde{\tilde{C}}(s,u) = \pi(0) \tilde{\tilde{T}}(s,u) e = -\pi(0) [Q - (sI + u\Delta)]^{-1}_{\max} u\Delta e
    = -(\pi_1(0), \pi_2(0)) \begin{pmatrix} F^{-1} & -F^{-1}BD^{-1} \\ -D^{-1}CF^{-1} & D^{-1} + D^{-1}CF^{-1}BD^{-1} \end{pmatrix} \begin{pmatrix} u\Delta_1 e \\ 0 \end{pmatrix}
    = -\pi_1(0) F^{-1} u\Delta_1 e + \pi_2(0) D^{-1}CF^{-1} u\Delta_1 e.

Let

    \tilde{\tilde{C}}_1(s,u) = -\pi_1(0) F^{-1} u\Delta_1 e   and   \tilde{\tilde{C}}_2(s,u) = \pi_2(0) D^{-1}CF^{-1} u\Delta_1 e.

Then

    \tilde{\tilde{C}}_1(s,u) = \pi_1(0) [sI + u\Delta_1 - Q_{1,1} - Q_{1,2}(sI - Q_{2,2})^{-1}Q_{2,1}]^{-1} u\Delta_1 e,

which leads to

    \tilde{C}_1(s,x) = \pi_1(0) \exp\{\xi(s)x\} e,

where

    \xi(s) = \Delta_1^{-1}Q_{1,1} + \Delta_1^{-1}Q_{1,2}(sI - Q_{2,2})^{-1}Q_{2,1} - s\Delta_1^{-1};

    \tilde{\tilde{C}}_2(s,u) = \pi_2(0)(sI - Q_{2,2})^{-1}Q_{2,1}[sI + u\Delta_1 - Q_{1,1} - Q_{1,2}(sI - Q_{2,2})^{-1}Q_{2,1}]^{-1} u\Delta_1 e,

which yields

    \tilde{C}_2(s,x) = \pi_2(0)(sI - Q_{2,2})^{-1}Q_{2,1} \exp\{\xi(s)x\} e.

We write

    E[\Gamma(x)^n] = \varphi_1^{(n)}(x) + \varphi_2^{(n)}(x),

where

    \varphi_1^{(n)}(x) = (-1)^n \frac{\partial^n}{\partial s^n} \tilde{C}_1(s,x) \Big|_{s=0}

and

    \varphi_2^{(n)}(x) = (-1)^n \frac{\partial^n}{\partial s^n} \tilde{C}_2(s,x) \Big|_{s=0}.

Note that

    \tilde{C}_1(s,x) = \pi_1(0) \sum_{i=0}^{\infty} \frac{[\xi(s)x]^i}{i!} e,

we obtain

    \varphi_1^{(n)}(x) = (-1)^n \pi_1(0) \sum_{i=0}^{\infty} \frac{x^i}{i!} L^{(n)}(i) e,

where

    L^{(n)}(i) = \frac{d^n}{ds^n} [\xi(s)]^i \Big|_{s=0}
    = \begin{cases}
    0, & i = 0, \; n \geq 1, \\
    (\Delta_1^{-1}Q_{1,1} - \Delta_1^{-1}Q_{1,2}Q_{2,2}^{-1}Q_{2,1})^i, & i \geq 0, \; n = 0, \\
    -\Delta_1^{-1} - \Delta_1^{-1}Q_{1,2}Q_{2,2}^{-2}Q_{2,1}, & i = 1, \; n = 1, \\
    -n! \, \Delta_1^{-1}Q_{1,2}Q_{2,2}^{-(n+1)}Q_{2,1}, & i = 1, \; n \geq 2, \\
    \sum_{l=0}^{n} \binom{n}{l} L^{(l)}(1) L^{(n-l)}(i-1), & i \geq 2, \; n \geq 1.
    \end{cases}
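The 2x2 block inversion via the Schur complement F = A - BD^{-1}C used above can be verified numerically on arbitrary invertible blocks; the block sizes and the random seed below are arbitrary choices, and the diagonal shift merely keeps all blocks well conditioned:

```python
import numpy as np

rng = np.random.default_rng(7)
n1, n2 = 3, 4
A = rng.normal(size=(n1, n1)) - 10 * np.eye(n1)   # diagonal dominance => invertible
D = rng.normal(size=(n2, n2)) - 10 * np.eye(n2)
B = rng.normal(size=(n1, n2))
C = rng.normal(size=(n2, n1))

Dinv = np.linalg.inv(D)
F = A - B @ Dinv @ C                               # Schur complement of D
Finv = np.linalg.inv(F)
top = np.hstack([Finv, -Finv @ B @ Dinv])
bot = np.hstack([-Dinv @ C @ Finv, Dinv + Dinv @ C @ Finv @ B @ Dinv])
block_inv = np.vstack([top, bot])

full = np.block([[A, B], [C, D]])
print(np.allclose(block_inv, np.linalg.inv(full)))
```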
Similarly, since

    \tilde{C}_2(s,x) = \pi_2(0)(sI - Q_{2,2})^{-1}Q_{2,1} \sum_{i=0}^{\infty} \frac{[\xi(s)x]^i}{i!} e = \pi_2(0) \sum_{i=0}^{\infty} \frac{x^i}{i!} (sI - Q_{2,2})^{-1}Q_{2,1}[\xi(s)]^i e,

we obtain

    \varphi_2^{(n)}(x) = (-1)^n \pi_2(0) \sum_{i=0}^{\infty} \frac{x^i}{i!} H^{(n)}(i) e,

where

    H^{(n)}(i) = \frac{d^n}{ds^n} \left\{ (sI - Q_{2,2})^{-1}Q_{2,1}[\xi(s)]^i \right\} \Big|_{s=0}
    = \begin{cases}
    -n! \, Q_{2,2}^{-(n+1)}Q_{2,1}, & i = 0, \; n \geq 0, \\
    -Q_{2,2}^{-1}Q_{2,1}(\Delta_1^{-1}Q_{1,1} - \Delta_1^{-1}Q_{1,2}Q_{2,2}^{-1}Q_{2,1})^i, & i \geq 1, \; n = 0, \\
    \sum_{l=0}^{n} \binom{n}{l} H^{(l)}(i-1) G^{(n-l)}, & i \geq 1, \; n \geq 1,
    \end{cases}

and

    G^{(n)} = \begin{cases}
    \Delta_1^{-1}Q_{1,1} - \Delta_1^{-1}Q_{1,2}Q_{2,2}^{-1}Q_{2,1}, & n = 0, \\
    -\Delta_1^{-1} - \Delta_1^{-1}Q_{1,2}Q_{2,2}^{-2}Q_{2,1}, & n = 1, \\
    -n! \, \Delta_1^{-1}Q_{1,2}Q_{2,2}^{-(n+1)}Q_{2,1}, & n \geq 2.
    \end{cases}

Therefore for n \geq 1,

    E[\Gamma(x)^n] = \varphi_1^{(n)}(x) + \varphi_2^{(n)}(x) = (-1)^n \sum_{i=0}^{\infty} \frac{x^i}{i!} \left[ \pi_1(0) L^{(n)}(i) + \pi_2(0) H^{(n)}(i) \right] e.

Since the matrix Q_{2,2} is of infinite size, the inverse of Q_{2,2} is the maximal non-positive inverse. We can use the RG-factorizations to compute the maximal non-positive inverse Q_{2,2}^{-1}.
10.5  Accumulated Reward in a QBD Process
In this section, we consider the accumulated reward process of an irreducible continuous-time QBD process. We provide an iterative solution for the Laplace transforms of the conditional moments of the accumulated reward process in terms of a system of infinite-dimensional linear equations.

Let \{x(t), t \geq 0\} be an irreducible continuous-time QBD process with infinitesimal generator Q given in Eq. (1.4). We assume that the QBD process Q is separable and Borel measurable, and that its sample functions are all lower semi-continuous from the right. Therefore, the QBD process has the strong Markov property. Intuitively, if each diagonal entry of the matrix Q is finite, then the QBD process also has the strong Markov property. We define an accumulated reward process as

    \Phi(t) = \int_0^t f(x(u)) \, du.    (10.37)
Suppose the initial state of the QBD process \{x(t), t \geq 0\} is (k,j), i.e., x(0) = (k,j). Let

    \xi_{k,j}(t) = E\left[ \int_0^t f(x(u)) \, du \mid x(0) = (k,j) \right]    (10.38)

and

    \xi_k(t) = (\xi_{k,1}(t), \xi_{k,2}(t), \ldots, \xi_{k,M_k}(t))^T,   k \geq 0.

We denote by \xi_k^*(s) the Laplace transform of the column vector \xi_k(t), that is,

    \xi_k^*(s) = \int_0^{\infty} e^{-st} \xi_k(t) \, dt,   k \geq 0.    (10.39)
For an arbitrary matrix A_l^{(k)}, we use a_{l,(i,j)}^{(k)} to denote its (i,j)th entry. The following theorem provides a system of infinite-dimensional linear equations satisfied by the vector sequence \{\xi_k^*(s)\}.

Theorem 10.3  The vector sequence \{\xi_k^*(s), k \geq 0\} satisfies the following system of infinite-dimensional linear equations,

    \xi_0^*(s) = \mathbb{F}_0(s) \left[ e + B_0 \xi_0^*(s) + B_0^{(0)} \xi_1^*(s) \right],    (10.40)

and for l \geq 1,

    \xi_l^*(s) = \mathbb{F}_l(s) \left[ e + B_2^{(l)} \xi_{l-1}^*(s) + B_l \xi_l^*(s) + B_0^{(l)} \xi_{l+1}^*(s) \right],    (10.41)

where for k \geq 0,

    B_k = B_1^{(k)} - \mathrm{diag}\left( b_{1,(1,1)}^{(k)}, b_{1,(2,2)}^{(k)}, \ldots, b_{1,(M_k,M_k)}^{(k)} \right)    (10.42)

and

    \mathbb{F}_k(s) = \mathrm{diag}\left( \frac{f(k,1)}{[s - b_{1,(1,1)}^{(k)}]^2}, \frac{f(k,2)}{[s - b_{1,(2,2)}^{(k)}]^2}, \ldots, \frac{f(k,M_k)}{[s - b_{1,(M_k,M_k)}^{(k)}]^2} \right).
Proof  We only prove Eq. (10.41), while the proof of Eq. (10.40) is similar. Let \beta = \inf\{t : x(t) \neq (l,i)\}. Then we have

    \xi_{(l,i)}(t) = E\left[ \int_0^t f(x(u)) \, du \mid x(0) = (l,i) \right]
    = E\left[ \int_0^t f(x(u)) \, du, \beta > t \mid x(0) = (l,i) \right] + E\left[ \int_0^t f(x(u)) \, du, \beta \leq t \mid x(0) = (l,i) \right].    (10.43)

When x(0) = (l,i), it is easy to check that P\{\beta > t\} = \exp\{b_{1,(i,i)}^{(l)} t\} with b_{1,(i,i)}^{(l)} < 0, and

    x(\beta) = \begin{cases}
    (l+1, j_0), & \text{with probability } b_{0,(i,j_0)}^{(l)} / (-b_{1,(i,i)}^{(l)}), \\
    (l, j_1), & \text{with probability } b_{1,(i,j_1)}^{(l)} / (-b_{1,(i,i)}^{(l)}) \text{ for } j_1 \neq i, \\
    (l-1, j_2), & \text{with probability } b_{2,(i,j_2)}^{(l)} / (-b_{1,(i,i)}^{(l)}).
    \end{cases}

Let \mathcal{E}_\beta be a \sigma-algebra consisting of all the events of the QBD process which have occurred before the Markov time \beta. By the strong Markov property of the QBD process, we obtain that for l \geq 1,

    E\left[ \int_0^t f(x(u)) \, du, \beta > t \mid x(0) = (l,i) \right] = f(l,i) \, t \exp\{b_{1,(i,i)}^{(l)} t\},    (10.44)

and

    E\left[ \int_0^t f(x(u)) \, du, \beta \leq t \mid x(0) = (l,i) \right]
    = f(l,i) \sum_{j_0=1}^{M_{l+1}} b_{0,(i,j_0)}^{(l)} \int_0^t x \exp\{b_{1,(i,i)}^{(l)} x\} \, \xi_{(l+1,j_0)}(t-x) \, dx
    + f(l,i) \sum_{j_2=1}^{M_{l-1}} b_{2,(i,j_2)}^{(l)} \int_0^t x \exp\{b_{1,(i,i)}^{(l)} x\} \, \xi_{(l-1,j_2)}(t-x) \, dx
    + f(l,i) \sum_{j_1 \neq i}^{M_l} b_{1,(i,j_1)}^{(l)} \int_0^t x \exp\{b_{1,(i,i)}^{(l)} x\} \, \xi_{(l,j_1)}(t-x) \, dx.    (10.45)

Taking the Laplace transforms on both sides of Eq. (10.44) and Eq. (10.45), it follows from Eq. (10.43) that for l \geq 1,

    \xi_l^*(s) = \mathbb{F}_l(s) \left[ e + B_2^{(l)} \xi_{l-1}^*(s) + B_l \xi_l^*(s) + B_0^{(l)} \xi_{l+1}^*(s) \right].

This completes the proof.

Let

    \hat{Q}(s) = \begin{pmatrix} \mathbb{F}_0(s) & & & \\ & \mathbb{F}_1(s) & & \\ & & \mathbb{F}_2(s) & \\ & & & \ddots \end{pmatrix} \begin{pmatrix} B_0 & B_0^{(0)} & & \\ B_2^{(1)} & B_1 & B_0^{(1)} & \\ & B_2^{(2)} & B_2 & B_0^{(2)} \\ & & \ddots & \ddots \end{pmatrix},

    \eta(s) = \begin{pmatrix} \mathbb{F}_0(s)e \\ \mathbb{F}_1(s)e \\ \mathbb{F}_2(s)e \\ \vdots \end{pmatrix}   and   \xi(s) = \begin{pmatrix} \xi_0^*(s) \\ \xi_1^*(s) \\ \xi_2^*(s) \\ \vdots \end{pmatrix}.

Then it follows from Theorem 10.3 that

    \xi(s) = \hat{Q}(s)\xi(s) + \eta(s).    (10.46)

Note that \hat{Q}(s) \geq 0 and \eta(s) \geq 0 for s > 0. Let {}^0\xi(s) = 0 and

    {}^{N+1}\xi(s) = \hat{Q}(s) \, {}^N\xi(s) + \eta(s),   N \geq 0.
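The fixed-point iteration just defined is easy to run on a level truncation. The sketch below uses a scalar-phase QBD with hypothetical rates; the truncation level L and the uniform boundary rate are simplifying assumptions of the sketch, not part of the construction above. It records the monotone growth and the geometric decay of successive differences established for this scheme:

```python
import numpy as np

lam_, mu, f = 1.0, 3.0, 0.8        # 0 <= f(k,j) <= K = 0.8 and f/(lam_+mu) < 1
L, s = 30, 0.5                     # truncation level L is an assumption
b1 = -(lam_ + mu)                  # diagonal entry of B_1^{(k)}
F = f / (s - b1) ** 2              # diagonal entry of F_k(s)

Qhat = np.zeros((L, L))
for l in range(L - 1):
    Qhat[l, l + 1] = F * lam_      # F_l(s) B_0^{(l)} block
for l in range(1, L):
    Qhat[l, l - 1] = F * mu        # F_l(s) B_2^{(l)} block
eta = F * np.ones(L)

xi = np.zeros(L)
mins, sups = [], []
for _ in range(100):
    xi_new = Qhat @ xi + eta
    mins.append(np.min(xi_new - xi))   # stays >= 0: the iterates are non-decreasing
    sups.append(np.max(xi_new - xi))   # decays geometrically
    xi = xi_new
print(min(mins), sups[0], sups[-1])
```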
Theorem 10.4  If there exists a positive number K > 0 such that 0 \leq f(k,j) \leq K for all k \geq 0, 1 \leq j \leq M_k, then for each s > 0, the vector sequence \{{}^N\xi(s), N \geq 0\} is non-decreasing and bounded above. Hence the limit

    {}^{\infty}\xi(s) = \lim_{N \to \infty} {}^N\xi(s)

exists for each s > 0. Also, {}^{\infty}\xi(s) is the minimal nonnegative solution to Eq. (10.46).

Proof  We first prove that for each s > 0, the sequence \{{}^N\xi(s)\} is non-decreasing in N \geq 0. Since f(k,j) \geq 0 for (k,j) \in \Omega, \mathbb{F}_k(s) \geq 0 for all k \geq 0. Therefore, the matrices \hat{Q}(s) and \eta(s) are all nonnegative for each s > 0. Therefore, {}^0\xi(s) = 0 and

    {}^1\xi(s) = \hat{Q}(s) \, {}^0\xi(s) + \eta(s) = \eta(s) \geq {}^0\xi(s).

Assuming that {}^N\xi(s) \geq {}^{N-1}\xi(s) for an arbitrarily given N, one can immediately show that

    {}^{N+1}\xi(s) = \hat{Q}(s) \, {}^N\xi(s) + \eta(s) \geq \hat{Q}(s) \, {}^{N-1}\xi(s) + \eta(s) = {}^N\xi(s).

Thus through induction, one can show that for each s > 0, the vector sequence \{{}^N\xi(s)\} is non-decreasing in N \geq 0.

We now prove that for each s > 0, the sequence \{{}^N\xi(s), N \geq 0\} is bounded above. Since there exists a positive number K > 0 such that 0 \leq f(k,j) \leq K for all k \geq 0, 1 \leq j \leq M_k, we obtain

    \|\xi(s)\| = \sup_{l \geq 0, 1 \leq i \leq M_l} \{\xi_{(l,i)}^*(s)\} = \sup_{l \geq 0, 1 \leq i \leq M_l} \left\{ \int_0^{\infty} e^{-st} E\left[ \int_0^t f(x(\tau)) \, d\tau \mid x(0) = (l,i) \right] dt \right\} \leq K \int_0^{\infty} t e^{-st} \, dt = K \frac{1}{s^2}.

A similar induction argument yields 0 \leq \|{}^N\xi(s)\| \leq \|\xi(s)\| \leq K/s^2 for every N = 0, 1, 2, \ldots. Therefore, the following limit must exist for all s > 0:

    {}^{\infty}\xi(s) = \lim_{N \to \infty} {}^N\xi(s).

Finally, we prove that {}^{\infty}\xi(s) is the minimal nonnegative solution to Eq. (10.46). To this end, for an arbitrary nonnegative solution \xi(s) to Eq. (10.46), we have

    0 \leq {}^N\xi(s) \leq \xi(s)   for each   N \geq 0.

Hence, we obtain 0 \leq {}^{\infty}\xi(s) \leq \xi(s), which implies that {}^{\infty}\xi(s) is the minimal nonnegative solution to Eq. (10.46). This completes the proof.

Let
    \rho(s) = \sup_{k \geq 0, 1 \leq j \leq M_k} \frac{-b_{1,(j,j)}^{(k)} f(k,j)}{[s - b_{1,(j,j)}^{(k)}]^2}.

Then

    \rho(0) = \sup_{k \geq 0, 1 \leq j \leq M_k} \frac{f(k,j)}{-b_{1,(j,j)}^{(k)}}.

Theorem 10.5  Suppose \rho(0) < 1.

(1) The vector sequence \{{}^N\xi(s)\} is uniformly convergent for s \in (0, \infty), with the convergence rate being of an exponential type.

(2) If D denotes the set of all bounded nonnegative solutions to Eq. (10.46), then D = \{{}^{\infty}\xi(s)\}.

Proof  (1) It is easy to check that

    \|\hat{Q}(s)\| \leq \rho(s) \leq \sup_{k \geq 0, 1 \leq j \leq M_k} \frac{f(k,j)}{s - b_{1,(j,j)}^{(k)}} \leq \sup_{k \geq 0, 1 \leq j \leq M_k} \frac{f(k,j)}{-b_{1,(j,j)}^{(k)}} = \rho(0) < 1.

Therefore, as N \to \infty,

    \|{}^N\xi(s) - {}^{N-1}\xi(s)\| \leq \rho(s) \|{}^{N-1}\xi(s) - {}^{N-2}\xi(s)\| \leq \cdots \leq \rho(0)^{N-1} \|{}^1\xi(s) - {}^0\xi(s)\| \to 0.

(2) Let \Psi(s) and \xi(s) be any two bounded nonnegative solutions to Eq. (10.46). Then as N \to \infty,

    \|\Psi(s) - \xi(s)\| \leq \rho(0) \|\Psi(s) - \xi(s)\| \leq \cdots \leq \rho(0)^N \|\Psi(s) - \xi(s)\| \to 0.

Thus, \Psi(s) = \xi(s). Note that {}^{\infty}\xi(s) is a bounded nonnegative solution to Eq. (10.46), it is clear that D = \{{}^{\infty}\xi(s)\}.

Remark 10.3  Note that the level-dependent QBD process with infinitesimal generator Q is equivalent to another QBD process with infinitesimal generator \theta Q, where \theta is an arbitrarily given positive number; thus the condition \rho(0) < 1 in Theorem 10.5 can be relaxed to a weaker condition:

    \sup_{k \geq 0, 1 \leq j \leq M_k} \frac{f(k,j)}{-\theta \, b_{1,(j,j)}^{(k)}} < 1.
In other words, the condition \rho(0) < 1 given in Theorem 10.5 is in principle not restrictive.

In what follows we provide an iterative relationship for the conditional moments of the accumulated reward process \Phi(t), which shows the usefulness of the maximal non-positive inverse for the related computations. We write

    m_{(k,j)}(n) = (-1)^n \frac{d^n}{ds^n} \xi_{(k,j)}^*(s) \Big|_{s=0},
    m_k(n) = (m_{(k,1)}(n), m_{(k,2)}(n), \ldots, m_{(k,M_k)}(n))^T,

and

    M(n) = (m_0(n)^T, m_1(n)^T, m_2(n)^T, \ldots)^T.

It follows from Eq. (10.38) and Eq. (10.39) that for n \geq 1,

    m_{(k,j)}(n) = E\left[ \int_0^{\infty} t^n \Phi(t) \, dt \mid x(0) = (k,j) \right].

It follows from Eq. (10.46) that

    \xi(0) = \hat{Q}(0)\xi(0) + \eta(0).

It is clear that \hat{Q}(0) - I is the infinitesimal generator of an irreducible continuous-time QBD process. Thus, there always exists an LU-type RG-factorization for the matrix \hat{Q}(0) - I. Using the LU-type RG-factorization, we obtain

    [\hat{Q}(0) - I]^{-1}_{\max} = (I - G_U)^{-1} U_D^{-1} (I - R_L)^{-1}.

Therefore

    \xi(0) = -[\hat{Q}(0) - I]^{-1}_{\max} \eta(0) = -(I - G_U)^{-1} U_D^{-1} (I - R_L)^{-1} \eta(0).

Taking the first n derivatives with respect to s on both sides of Eq. (10.46) and letting s = 0, we obtain

    M(1) = [\hat{Q}(0) - I]^{-1}_{\max} \left\{ \left[ \frac{d}{ds}\hat{Q}(s)\Big|_{s=0} \right] \xi(0) + \frac{d}{ds}\eta(s)\Big|_{s=0} \right\}

and for n \geq 2,

    M(n) = [\hat{Q}(0) - I]^{-1}_{\max} \left\{ \sum_{l=0}^{n-1} \binom{n}{l} \left[ \frac{d^{n-l}}{ds^{n-l}}\hat{Q}(s)\Big|_{s=0} \right] M(l) + \frac{d^n}{ds^n}\eta(s)\Big|_{s=0} \right\}.

10.6  An Up-Type Reward Process in Finitely-Many Levels
In this section, we consider an up-type reward process, defined in finitely-many levels, of the QBD process given in Eq. (1.17). We explicitly express the Laplace transforms of the conditional distributions of the up-type reward process and its conditional moments.

The QBD process Q, given in Eq. (1.17), can be denoted as \mathcal{Q}_0 = \{x_t(\omega), t \geq 0\} with state space \Omega = \{(k,j) : k \geq 0, 1 \leq j \leq M_k\}, where \omega is a sample path of the QBD process. It is worthwhile to note that the assumption that each diagonal element of the matrix Q is finite implies that the QBD process is a Markov jump process. Therefore, it follows from Theorem 1.1 in Chapter II of Asmussen [2] that the QBD process has the strong Markov property. We write

    \eta_k^{(j)}(\omega) = \begin{cases} \inf\{t \geq 0 : x_t(\omega) = (k,j)\}, & \text{if the } t\text{-set is not empty}, \\ \infty, & \text{otherwise}. \end{cases}    (10.47)

Let V(k,j) be a suitable non-zero nonnegative function defined on the set \Omega. We define

    \xi_k^{(j)}(\omega) = \int_0^{\eta_k^{(j)}(\omega)} V(x_t(\omega)) \, dt,   k \geq 0.    (10.48)
We now study the conditional distributions of the reward process \xi_k^{(j)}(\omega) and its conditional moments in terms of the LU-type RG-factorization. Let

    F_{(l,i),(k,j)}(x) = P_{(l,i)}\{\xi_k^{(j)}(\omega) \leq x\},
    F_{l,k}(x) = [F_{(l,i),(k,j)}(x)]_{1 \leq i \leq M_l, 1 \leq j \leq M_k}

and

    f_{l,k}^*(s) = \int_0^{\infty} e^{-sx} \, dF_{l,k}(x),   0 \leq l \leq k-1.

It is clear that the (i,j)th entry of the matrix f_{l,k}^*(s) is given by

    f_{(l,i),(k,j)}^*(s) = E_{(l,i)}\left[ \exp\left\{ -s \int_0^{\eta_k^{(j)}(\omega)} V(x_t(\omega)) \, dt \right\} \right].

The following theorem provides a system of matrix equations satisfied by the matrix sequence \{f_{l,k}^*(s), 0 \leq l \leq k-1\}.

Theorem 10.6  The matrix sequence \{f_{l,k}^*(s), 0 \leq l \leq k-1\} satisfies the system of matrix equations

    \hat{A}_1^{(0)}(s) f_{0,k}^*(s) + A_0^{(0)} f_{1,k}^*(s) = 0,    (10.49)

    A_2^{(l)} f_{l-1,k}^*(s) + \hat{A}_1^{(l)}(s) f_{l,k}^*(s) + A_0^{(l)} f_{l+1,k}^*(s) = 0,   1 \leq l \leq k-2,    (10.50)

    A_2^{(k-1)} f_{k-2,k}^*(s) + \hat{A}_1^{(k-1)}(s) f_{k-1,k}^*(s) + A_0^{(k-1)} = 0,    (10.51)

where

    \hat{A}_1^{(l)}(s) = A_1^{(l)} - s \, \mathrm{diag}(V(l,1), V(l,2), \ldots, V(l,M_l)),   l \geq 0.

Proof  We only prove Eq. (10.50), while Eq. (10.49) and Eq. (10.51) can be proved similarly. Let \tau be the first transition time of the QBD process from the state (l,i) to a nearby state: either a state in level l+1, in level l, or in level l-1. Then

    P_{(l,i)}\{\tau \leq x\} = 1 - \exp\{a_{1,(i,i)}^{(l)} x\},

hence,

    E_{(l,i)}[\exp\{-sV(l,i)\tau\}] = \frac{-a_{1,(i,i)}^{(l)}}{sV(l,i) - a_{1,(i,i)}^{(l)}} \overset{\text{def}}{=} H_i^{(l)}(s).
It is clear that \tau is a Markov time. Let \mathcal{E}_\tau be a \sigma-algebra consisting of all the events of the QBD process which have occurred before the time \tau. Applying the strong Markov property of the QBD process, we obtain

    f_{(l,i),(k,j)}^*(s) = E_{(l,i)}\left[ \exp\left\{ -s \int_0^{\eta_k^{(j)}(\omega)} V(x_t(\omega)) \, dt \right\} \right]
    = E_{(l,i)}\left[ E_{(l,i)}\left[ \exp\left\{ -s \int_0^{\eta_k^{(j)}(\omega)} V(x_t(\omega)) \, dt \right\} \Big| \mathcal{E}_\tau \right] \right]
    = E_{(l,i)}\left[ \exp\left\{ -s \int_0^{\tau} V(x_t(\omega)) \, dt \right\} E_{(l,i)}\left[ \exp\left\{ -s \int_{\tau}^{\eta_k^{(j)}(\omega)} V(x_t(\omega)) \, dt \right\} \Big| \mathcal{E}_\tau \right] \right]
    = E_{(l,i)}\left[ \exp\{-sV(l,i)\tau\} \, E_{x_\tau(\omega)}\left[ \exp\left\{ -s \int_0^{\eta_k^{(j)}(\omega)} V(x_t(\omega)) \, dt \right\} \right] \right].    (10.52)

Note that

    P_{(l,i)}\{x_\tau(\omega) = (l-1,j)\} = \frac{a_{2,(i,j)}^{(l)}}{-a_{1,(i,i)}^{(l)}},
    P_{(l,i)}\{x_\tau(\omega) = (l,j)\} = \frac{a_{1,(i,j)}^{(l)}}{-a_{1,(i,i)}^{(l)}},   i \neq j,
    P_{(l,i)}\{x_\tau(\omega) = (l+1,j)\} = \frac{a_{0,(i,j)}^{(l)}}{-a_{1,(i,i)}^{(l)}},

it follows from Eq. (10.52) that

    f_{(l,i),(k,j)}^*(s) = H_i^{(l)}(s) \sum_{j'=1}^{M_{l-1}} \frac{a_{2,(i,j')}^{(l)}}{-a_{1,(i,i)}^{(l)}} f_{(l-1,j'),(k,j)}^*(s) + H_i^{(l)}(s) \sum_{j' \neq i}^{M_l} \frac{a_{1,(i,j')}^{(l)}}{-a_{1,(i,i)}^{(l)}} f_{(l,j'),(k,j)}^*(s) + H_i^{(l)}(s) \sum_{j'=1}^{M_{l+1}} \frac{a_{0,(i,j')}^{(l)}}{-a_{1,(i,i)}^{(l)}} f_{(l+1,j'),(k,j)}^*(s),

which leads to

    A_2^{(l)} f_{l-1,k}^*(s) + \hat{A}_1^{(l)}(s) f_{l,k}^*(s) + A_0^{(l)} f_{l+1,k}^*(s) = 0.

This completes the proof.

Let

    U_0(s) = \hat{A}_1^{(0)}(s),
    U_i(s) = \hat{A}_1^{(i)}(s) - A_2^{(i)} [U_{i-1}(s)]^{-1} A_0^{(i-1)},   1 \leq i \leq k-1,

and

    G_i(s) = -[U_i(s)]^{-1} A_0^{(i)},   0 \leq i \leq k-1.

The following theorem provides expressions for the matrices f_{l,k}^*(s) for 0 \leq l \leq k-1.

Theorem 10.7  If s \geq 0, then

    f_{l,k}^*(s) = \prod_{j=l}^{k-1} G_j(s),   0 \leq l \leq k-1.
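Theorem 10.7 yields a direct algorithm: build U_i(s) bottom-up and multiply the G_i(s) factors. The sketch below uses a hypothetical two-phase, level-independent QBD with V \equiv 1; at s = 0 each f*_{l,k}(0) is stochastic, since an irreducible QBD reaches level k from below with probability one:

```python
import numpy as np

# Hypothetical level-independent blocks: up (A0), down (A2), within-level (W).
A0 = np.array([[0.4, 0.1], [0.2, 0.3]])
A2 = np.array([[0.6, 0.2], [0.1, 0.7]])
W  = np.array([[0.0, 0.3], [0.2, 0.0]])
A1i = W - np.diag((A0 + A2 + W).sum(axis=1))   # interior diagonal block
A1b = W - np.diag((A0 + W).sum(axis=1))        # level-0 block (no down-transitions)

def f_star(l, k, s, V=1.0):
    """f*_{l,k}(s) = G_l(s) ... G_{k-1}(s) as in Theorem 10.7 (constant V assumed)."""
    U = A1b - s * V * np.eye(2)                          # U_0(s)
    G = [-np.linalg.solve(U, A0)]                        # G_0(s)
    for i in range(1, k):
        U = (A1i - s * V * np.eye(2)) - A2 @ np.linalg.solve(U, A0)
        G.append(-np.linalg.solve(U, A0))
    out = np.eye(2)
    for j in range(l, k):
        out = out @ G[j]
    return out

print(f_star(0, 5, 0.0).sum(axis=1))   # row sums ~ 1: first passage is certain
print(f_star(0, 5, 0.5))               # entries shrink entrywise for s > 0
```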
Proof  Using Theorem 10.6, we have

    \mathbb{A}_{k-1}(s) \vec{f}_{k-1}^*(s) = \vec{J}_{k-1}(s),    (10.53)

where

    \vec{f}_{k-1}^*(s) = \left( f_{0,k}^*(s)^T, f_{1,k}^*(s)^T, \ldots, f_{k-2,k}^*(s)^T, f_{k-1,k}^*(s)^T \right)^T,
    \vec{J}_{k-1}(s) = \left( 0^T, 0^T, \ldots, 0^T, -[A_0^{(k-1)}]^T \right)^T

and

    \mathbb{A}_{k-1}(s) = \begin{pmatrix} \hat{A}_1^{(0)}(s) & A_0^{(0)} & & & \\ A_2^{(1)} & \hat{A}_1^{(1)}(s) & A_0^{(1)} & & \\ & \ddots & \ddots & \ddots & \\ & & A_2^{(k-2)} & \hat{A}_1^{(k-2)}(s) & A_0^{(k-2)} \\ & & & A_2^{(k-1)} & \hat{A}_1^{(k-1)}(s) \end{pmatrix}.

Since the QBD process given in Eq. (1.17) is irreducible, the QBD process \mathbb{A}_{k-1}(s) for s \geq 0 is transient, so that the matrix \mathbb{A}_{k-1}(s) is invertible. Note that

    \mathbb{A}_{k-1}(s) = [I - R_L(s)] U_D(s) [I - G_U(s)],

it follows from Eq. (10.53) that

    \vec{f}_{k-1}^*(s) = [I - G_U(s)]^{-1} [U_D(s)]^{-1} [I - R_L(s)]^{-1} \vec{J}_{k-1}(s),

where

    [I - G_U(s)]^{-1} = \begin{pmatrix} I & Y_1^{(0)} & Y_2^{(0)} & \cdots & Y_{k-1}^{(0)} \\ & I & Y_1^{(1)} & \cdots & Y_{k-2}^{(1)} \\ & & I & \cdots & Y_{k-3}^{(2)} \\ & & & \ddots & \vdots \\ & & & & I \end{pmatrix},

    [U_D(s)]^{-1} = \mathrm{diag}\left( [U_0(s)]^{-1}, [U_1(s)]^{-1}, \ldots, [U_{k-1}(s)]^{-1} \right)

and

    [I - R_L(s)]^{-1} = \begin{pmatrix} I & & & \\ X_1^{(1)} & I & & \\ X_2^{(2)} & X_1^{(2)} & I & \\ \vdots & \vdots & \vdots & \ddots \\ X_{k-1}^{(k-1)} & X_{k-2}^{(k-1)} & X_{k-3}^{(k-1)} & \cdots & I \end{pmatrix}.

After some matrix computations, we have

    f_{l,k}^*(s) = -Y_{k-1-l}^{(l)} [U_{k-1}(s)]^{-1} A_0^{(k-1)} = \prod_{j=l}^{k-1} G_j(s).

This completes the proof.

From Theorem 10.7 we easily obtain the following corollary.

Corollary 10.1  If 0 \leq l \leq k-1, then

    f_{l,k}^*(s) = f_{l,l+1}^*(s) f_{l+1,l+2}^*(s) \cdots f_{k-2,k-1}^*(s) f_{k-1,k}^*(s).
E(l ,i ) ª¬[ k( j ) (Z ) g º¼
and M l(,gk )
m
(g) ( l ,i )( k , j ) 1İiİM 1İ j İM l k
,
0 İ l İ k 1, g ı 1.
We write
< l(,gk 1)
k 1 diag( V ( l ,1), V ( l , 2), ! , V ( l , M )) if g 1, l G j (0), ° j l ® °diag(V (l ,1), V (l , 2),! , V (l , M )) gM ( g 1) , if g ı 2, l ,k l ¯
for 0 İ l İ k 1. Note that m((lg,i))( k , j )
(1) g
dg * f ( l ,i )( k , j ) ( s )_s 0 , ds g
it follows from Theorem 10.6 that the matrix sequence {M l(,gk ) , g ı 1,0 İ l İ k 1} satisfies the system of matrix equations
< 0,( gk1) A1(0) M 0,( gk) A0(0) M 1,( gk ) < l(,gk 1) A2(l ) M l(g1,) k A1(l ) M l(,gk ) A0(l ) M l(g1,) k 552
0, 0, 1 İ l İ k 2,
(10.54) (10.55)
10
< k( g1,1)k A2( k 1) M k( g 2,) k A1( k 1) M k( g1,) k
Markov Reward Processes
0.
(10.56)
To solve the system of matrix equations Eq. (10.54), Eq. (10.55) and Eq. (10.56), let the matrix sequences {Rk }, {Gl } and {U i } be the LU-type measures of the QBD process given in Eq. (1.17) for N k 2. Based on this, the matrix sequences
^ X ` and ^Y ` are defined by Eq. (1.32) and Eq. (1.33), respectively. (k ) i
(l ) j
The following corollary provides an iterative relationship for the matrix sequence ^M l(,gk ) , g ı 1,0 İ l İ k 1` .
Corollary 10.2 Gl M l(g1,) k U l1 L(l ,gk) ,
M l(,gk ) M
(g) k 1, k
U
1 k 1
ª¬<
( g 1) k 1, k
0 İ l İ k 2, ( k 1) ( g ) 2 k 2, k
A
L
º¼ ,
where for 0 İ l İ k 2, L(l ,gk) < l(,gk 1) Rl< l(g1,k1) Rl Rl 1< l(g2,1)k ! Rl Rl 1 ! R1< 0,( gk1) . Applying (2) of Theorem 1.2, the following corollary further expresses the matrix sequence ^M m( g, k) , g ı 1,0 İ m İ k 1` .
Corollary 10.3  For g \geq 1, 0 \leq m \leq k-1,

    M_{m,k}^{(g)} = -\sum_{n=0}^{m-1} \left[ U_m^{-1} X_{m-n}^{(m)} + \sum_{i=1}^{k-1-m} Y_i^{(m)} U_{i+m}^{-1} X_{i+m-n}^{(i+m)} \right] \Psi_{n,k}^{(g-1)}
    - \left[ U_m^{-1} + \sum_{i=1}^{k-1-m} Y_i^{(m)} U_{i+m}^{-1} X_i^{(i+m)} \right] \Psi_{m,k}^{(g-1)}
    - \sum_{n=m+1}^{k-1} \left[ Y_{n-m}^{(m)} U_n^{-1} + \sum_{i=n-m+1}^{k-1-m} Y_i^{(m)} U_{i+m}^{-1} X_{i-(n-m)}^{(i+m)} \right] \Psi_{n,k}^{(g-1)}
    - U_{k-1}^{-1} X_{k-1-m}^{(k-1)} \Psi_{k-1,k}^{(g-1)}.

Proof  It follows from Eq. (10.54), Eq. (10.55) and Eq. (10.56) that

    \mathbb{A}_{k-1}^* \vec{M}_{k-1}^{(g)} = -\vec{\Psi}_{k-1}^{(g-1)},

where

    \vec{M}_{k-1}^{(g)} = \left( [M_{0,k}^{(g)}]^T, [M_{1,k}^{(g)}]^T, \ldots, [M_{k-1,k}^{(g)}]^T \right)^T,
    \vec{\Psi}_{k-1}^{(g-1)} = \left( [\Psi_{0,k}^{(g-1)}]^T, [\Psi_{1,k}^{(g-1)}]^T, \ldots, [\Psi_{k-1,k}^{(g-1)}]^T \right)^T

and

    \mathbb{A}_{k-1}^* = \begin{pmatrix} A_1^{(0)} & A_0^{(0)} & & & \\ A_2^{(1)} & A_1^{(1)} & A_0^{(1)} & & \\ & \ddots & \ddots & \ddots & \\ & & A_2^{(k-2)} & A_1^{(k-2)} & A_0^{(k-2)} \\ & & & A_2^{(k-1)} & A_1^{(k-1)} \end{pmatrix}.

Since the QBD process given in Eq. (1.17) is irreducible, the matrix \mathbb{A}_{k-1}^* is invertible for k \geq 1; using (2) in Theorem 1.2, some matrix computations yield the desired result.
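The forward-backward sweep behind Corollary 10.2 can be checked against a direct solution of the block-tridiagonal system (10.54)-(10.56). Below is a sketch with hypothetical blocks, vector-valued \Psi_l, and the sign conventions U_l = A_1^{(l)} - A_2^{(l)} U_{l-1}^{-1} A_0^{(l-1)}, R_l = -A_2^{(l)} U_{l-1}^{-1}, G_l = -U_l^{-1} A_0^{(l)}:

```python
import numpy as np

rng = np.random.default_rng(1)
k, m = 5, 2
A0 = np.array([[0.4, 0.1], [0.2, 0.3]])
A2 = np.array([[0.6, 0.2], [0.1, 0.7]])
W  = np.array([[0.0, 0.3], [0.2, 0.0]])
A1i = W - np.diag((A0 + A2 + W).sum(axis=1))   # interior diagonal block
A1b = W - np.diag((A0 + W).sum(axis=1))        # level-0 block
A1 = [A1b] + [A1i] * (k - 1)
Psi = [rng.uniform(0.1, 1.0, size=m) for _ in range(k)]

# Forward sweep: U_l and L_l = Psi_l + R_l L_{l-1}.
U, Lv = [A1[0]], [Psi[0]]
for l in range(1, k):
    U.append(A1[l] - A2 @ np.linalg.solve(U[l - 1], A0))
    Lv.append(Psi[l] - A2 @ np.linalg.solve(U[l - 1], Lv[l - 1]))

# Backward substitution: M_{k-1} = -U_{k-1}^{-1} L_{k-1}, M_l = G_l M_{l+1} - U_l^{-1} L_l.
M = [None] * k
M[k - 1] = -np.linalg.solve(U[k - 1], Lv[k - 1])
for l in range(k - 2, -1, -1):
    G = -np.linalg.solve(U[l], A0)
    M[l] = G @ M[l + 1] - np.linalg.solve(U[l], Lv[l])

# Direct solve of  Psi + A* M = 0  on the stacked block-tridiagonal matrix.
T = np.zeros((k * m, k * m))
for l in range(k):
    T[l*m:(l+1)*m, l*m:(l+1)*m] = A1[l]
    if l + 1 < k:
        T[l*m:(l+1)*m, (l+1)*m:(l+2)*m] = A0
        T[(l+1)*m:(l+2)*m, l*m:(l+1)*m] = A2
M_direct = np.linalg.solve(T, -np.concatenate(Psi))
print(np.allclose(np.concatenate(M), M_direct))
```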
10.7  An Up-Type Reward Process in Infinitely-Many Levels
In this section, we consider an up-type reward process, defined in infinitely-many levels, of the QBD process given in Eq. (1.16). We provide expressions for the Laplace transforms of the conditional distributions of the up-type reward process and its conditional moments.

We define

    \eta_k(\omega) = \begin{cases} \inf\{t \geq 0 : x_t^L(\omega) = k\}, & \text{if the } t\text{-set is not empty}, \\ \infty, & \text{otherwise}, \end{cases}

where x_t^L(\omega) denotes the level number of the QBD process at time t. It is easy to see from Eq. (10.48) that

    \eta_k(\omega) = \min_{1 \leq j \leq M_k} \{\eta_k^{(j)}(\omega)\}.    (10.57)

If the initial state x_0(\omega) = (l,i) with 0 \leq l < k, then the sequence \{\eta_k(\omega)\} is monotonically increasing for k > l. Therefore, the limit \lim_{k \to \infty} \eta_k(\omega) exists, being either finite or infinite, a.s.. Let

    \eta(\omega) = \lim_{k \to \infty} \eta_k(\omega),   a.s..    (10.58)

Then

    \eta(\omega) = \begin{cases} \inf\{t \geq 0 : x_t^L(\omega) = \infty\}, & \text{if the } t\text{-set is not empty}, \\ \infty, & \text{otherwise}, \end{cases}

where x_t^L(\omega) = \infty means that the level number of the QBD process cannot be finite at time t.

We show below that \eta(\omega) is also a limit of other time sequences. To do this, we define a sequence of state jumping times as follows. Since the QBD process \mathcal{Q}_0 = \{x_t(\omega), t \geq 0\} is a Markov jump process,

    \tau_1(\omega) = \inf\{t \geq 0 : x_t(\omega) \neq x_0(\omega)\}

is the first jump time of the QBD process. Define

    \mathcal{Q}_1 = \{y_t(\omega) = x_{\tau_1 + t}(\omega), t \geq 0\}.    (10.59)

Using the strong Markov property, \mathcal{Q}_1 is also a Markov jump process with the same infinitesimal generator as that of the QBD process \mathcal{Q}_0, and

    \tau_2(\omega) = \inf\{t \geq 0 : y_t(\omega) \neq y_0(\omega)\}    (10.60)

is the second jump time of the QBD process \mathcal{Q}_0 (or the first jump time of the Markov process \mathcal{Q}_1). Similar arguments to Eq. (10.59) and Eq. (10.60) indicate that \tau_n(\omega) is the nth jump time of the QBD process \mathcal{Q}_0 for n \geq 3. It is easy to see that

    0 = \tau_0(\omega) < \tau_1(\omega) < \tau_2(\omega) < \cdots < \tau_n(\omega) < \cdots,   a.s..

Hence, the limit \lim_{n \to \infty} \tau_n(\omega) is either finite or infinite, a.s., and

    \eta(\omega) = \lim_{n \to \infty} \tau_n(\omega).

Let

    \xi_k(\omega) = \int_0^{\eta_k(\omega)} V(x_t(\omega)) \, dt,   k \geq 0,
    \xi(\omega) = \int_0^{\eta(\omega)} V(x_t(\omega)) \, dt.    (10.61)

Since \{\eta_k(\omega)\} is monotonically increasing for k > l and the function V(x_t(\omega)) is non-zero nonnegative, the sequence \{\xi_k(\omega)\} is also monotonically increasing for k > l. Hence,

    \xi(\omega) = \lim_{k \to \infty} \xi_k(\omega),   a.s..

Remark 10.4  (1) If V(k,j) \equiv 1 for k \geq 0 and 1 \leq j \leq M_k, then \xi_k^{(j)}(\omega) = \eta_k^{(j)}(\omega), \xi_k(\omega) = \eta_k(\omega) and \xi(\omega) = \eta(\omega). This implies that the reward processes become the first passage times. (2) If

    V(l,j) \equiv \begin{cases} 1, & \text{if } 0 \leq l \leq n, \\ 0, & \text{otherwise}, \end{cases}

then \xi_k(\omega) becomes the total sojourn time of the QBD process staying in levels 0 to n before it enters level k for the first time.
The following lemma provides a criterion for recognizing \xi(\omega) = \infty.

Lemma 10.2  \xi(\omega) = \infty, a.s., if and only if

    \sum_{k=0}^{\infty} \frac{V(n_k, i_k)}{-a_{1,(i_k,i_k)}^{(n_k)}} = \infty,   a.s.,

where \{(n_k, i_k)\} is the sequence of jumping states on the sample path \omega of the QBD process given in Eq. (1.16).

Based on this lemma, the following theorem provides a sufficient condition under which \xi(\omega) = \infty, a.s.. The sufficient condition is simple to verify in practice.

Theorem 10.8  Suppose that the function V(k,j) for k \geq 0, 1 \leq j \leq M_k is non-zero nonnegative. If the QBD process is recurrent, then

    \xi(\omega) = \infty,   a.s..

Proof  If the function V(k,j) for k \geq 0, 1 \leq j \leq M_k is non-zero nonnegative, then there always exists a state (k^*, j^*) such that V(k^*, j^*) > 0. Since the QBD process is recurrent, it follows from Proposition 1.2 in Asmussen [2] that there must exist a subsequence \{(n_{k_j}, i_{k_j})\} of the state sequence \{(n_k, i_k)\} on the sample path \omega such that

    n_{k_j} = k^*,   i_{k_j} = j^*,   j \geq 0.

Hence,

    \sum_{k=0}^{\infty} \frac{V(n_k, i_k)}{-a_{1,(i_k,i_k)}^{(n_k)}} \geq \sum_{j=0}^{\infty} \frac{V(n_{k_j}, i_{k_j})}{-a_{1,(i_{k_j},i_{k_j})}^{(n_{k_j})}} = \sum_{j=0}^{\infty} \frac{V(k^*, j^*)}{-a_{1,(j^*,j^*)}^{(k^*)}} = \infty,   a.s..

Using Lemma 10.2 leads to \xi(\omega) = \infty, a.s.. This completes the proof.

We now express the Laplace transforms of the conditional distributions of the reward process \xi(\omega) with \xi(\omega) < \infty a.s. and its conditional moments in terms of the UL-type RG-factorization. We write

    \varphi_{(l,i)}^*(s) = E_{(l,i)}[\exp\{-s\xi(\omega)\}],
    \Phi_l^*(s) = (\varphi_{(l,1)}^*(s), \varphi_{(l,2)}^*(s), \ldots, \varphi_{(l,M_l)}^*(s))^T,   l \geq 0.

The following theorem provides a system of vector equations satisfied by the vector sequence \{\Phi_l^*(s)\}.

Theorem 10.9  The vector sequence \{\Phi_l^*(s)\} satisfies the system of vector equations

    \hat{A}_1^{(0)}(s) \Phi_0^*(s) + A_0^{(0)} \Phi_1^*(s) = 0    (10.62)

and

    A_2^{(l)} \Phi_{l-1}^*(s) + \hat{A}_1^{(l)}(s) \Phi_l^*(s) + A_0^{(l)} \Phi_{l+1}^*(s) = 0,   l \geq 1.    (10.63)
Proof  We only prove Eq. (10.63), while Eq. (10.62) can be proved similarly. Using a similar method to that of Theorem 10.6 leads to

    \varphi_{(l,i)}^*(s) = E_{(l,i)}\left[ \exp\left\{ -s \int_0^{\eta(\omega)} V(x_t(\omega)) \, dt \right\} \right]
    = E_{(l,i)}\left[ \exp\{-sV(l,i)\tau\} \, E_{x_\tau(\omega)}\left[ \exp\left\{ -s \int_0^{\eta(\omega)} V(x_t(\omega)) \, dt \right\} \right] \right]
    = H_i^{(l)}(s) \sum_{j'=1}^{M_{l-1}} \frac{a_{2,(i,j')}^{(l)}}{-a_{1,(i,i)}^{(l)}} \varphi_{(l-1,j')}^*(s) + H_i^{(l)}(s) \sum_{j' \neq i}^{M_l} \frac{a_{1,(i,j')}^{(l)}}{-a_{1,(i,i)}^{(l)}} \varphi_{(l,j')}^*(s) + H_i^{(l)}(s) \sum_{j'=1}^{M_{l+1}} \frac{a_{0,(i,j')}^{(l)}}{-a_{1,(i,i)}^{(l)}} \varphi_{(l+1,j')}^*(s),

which can be written in matrix form as

    A_2^{(l)} \Phi_{l-1}^*(s) + \hat{A}_1^{(l)}(s) \Phi_l^*(s) + A_0^{(l)} \Phi_{l+1}^*(s) = 0.

This completes the proof.
This completes the proof. We now use the UL-type RG-factorization Eq. (1.21) to solve the system of vector Eq. (10.62) and Eq. (10.63). Based on this, we provide an expression for the vector sequence ^)l* ( s )`. We use the notation {R l ( s )}, {G k ( s)} and {U l ( s)} to denote the UL-type measures of the QBD process with infinitesimal generator, given by § 1(0) ( s ) A0(0) ¨ A(1) A0(1) 1(1) ( s ) ) ( s) ¨ 2 ¨ A2(2) 1(2) ( s ) ¨¨ % ©
A0(2) %
· ¸ ¸. ¸ ¸ %¸¹
For the matrix sequence {R l ( s )} for s ! 0, following identically Theorem 3.1 in Latouche, Pearce and Taylor [27] or Lemma 7 in Bean, Pollett and Taylor [3], we give the following lemma, which will be used later in the proof of Theorem 10.10. Lemma 10.3 For s ! 0, there exists a sequence { U k ( s), k ı 0} of positive scalars and a sequence {zk (s ), k ı 0} of non-zero nonnegative column vectors such that zk ( s )
U k ( s) R k ( s) zk 1 ( s),
k ı 0. 557
Using Lemma 10.3 and the RG-factorization Eq. (1.21), the following theorem describes the vector sequence \{\Phi_k^*(s)\}.

Theorem 10.10  For s \geq 0,

    \Phi_0^*(s) = [\breve{U}_0(s)]^{-1} \breve{R}_0(s) z_1(s),
    \Phi_1^*(s) = \breve{G}_1(s) [\breve{U}_0(s)]^{-1} \breve{R}_0(s) z_1(s) + [\breve{U}_1(s)]^{-1} z_1(s)

and for k \geq 2,

    \Phi_k^*(s) = \prod_{j=k}^{1} \breve{G}_j(s) [\breve{U}_0(s)]^{-1} \breve{R}_0(s) z_1(s) + \sum_{i=1}^{k} \prod_{j=k}^{i+1} \breve{G}_j(s) [\breve{U}_i(s)]^{-1} \prod_{l=1}^{i-1} \rho_l(s) z_i(s).

Proof  According to Eq. (1.21), the UL-type RG-factorization of the QBD process \mathcal{A}(s) for s \geq 0 is written as

    \mathcal{A}(s) = [I - \breve{R}_U(s)] \breve{U}_D(s) [I - \breve{G}_L(s)].

It follows from Theorem 10.9 that \mathcal{A}(s) \vec{\Phi}(s) = 0, where

    \vec{\Phi}(s) = (\Phi_0^*(s)^T, \Phi_1^*(s)^T, \Phi_2^*(s)^T, \ldots)^T.

Hence,

    [I - \breve{R}_U(s)] [\breve{U}_D(s)] [I - \breve{G}_L(s)] \vec{\Phi}(s) = 0.    (10.64)

We now solve the equation Eq. (10.64) in two steps. In the first step, let

    v(s) = [\breve{U}_D(s)] [I - \breve{G}_L(s)] \vec{\Phi}(s),

partitioned according to the levels as

    v(s) = (v_0(s)^T, v_1(s)^T, v_2(s)^T, \ldots)^T.

This is equivalent to

    \Phi_0^*(s) = [\breve{U}_0(s)]^{-1} v_0(s),    (10.65)
    \Phi_k^*(s) = \breve{G}_k(s) \Phi_{k-1}^*(s) + [\breve{U}_k(s)]^{-1} v_k(s),   k \geq 1.

To determine the sequence \{v_k(s)\}, in the second step we solve

    [I - \breve{R}_U(s)] v(s) = 0,    (10.66)

which is equivalent to

    v_k(s) = \breve{R}_k(s) v_{k+1}(s),   k \geq 0.    (10.67)

Using Lemma 10.3 and Eq. (10.67) leads to

    v_0(s) = \breve{R}_0(s) z_1(s),    (10.68)
    v_1(s) = z_1(s),    (10.69)
    v_k(s) = \prod_{l=1}^{k-1} \rho_l(s) z_k(s),   k \geq 2.    (10.70)

Substituting Eq. (10.68), Eq. (10.69) and Eq. (10.70) into Eq. (10.65) leads to

    \Phi_0^*(s) = [\breve{U}_0(s)]^{-1} \breve{R}_0(s) z_1(s),
    \Phi_1^*(s) = \breve{G}_1(s) [\breve{U}_0(s)]^{-1} \breve{R}_0(s) z_1(s) + [\breve{U}_1(s)]^{-1} z_1(s)

and for k \geq 2,

    \Phi_k^*(s) = \breve{G}_k(s) \Phi_{k-1}^*(s) + [\breve{U}_k(s)]^{-1} \prod_{l=1}^{k-1} \rho_l(s) z_k(s)
    = \breve{G}_k(s) \breve{G}_{k-1}(s) \Phi_{k-2}^*(s) + \breve{G}_k(s) [\breve{U}_{k-1}(s)]^{-1} \prod_{l=1}^{k-2} \rho_l(s) z_{k-1}(s) + [\breve{U}_k(s)]^{-1} \prod_{l=1}^{k-1} \rho_l(s) z_k(s)
    = \cdots
    = \prod_{j=k}^{1} \breve{G}_j(s) [\breve{U}_0(s)]^{-1} \breve{R}_0(s) z_1(s) + \sum_{i=1}^{k} \prod_{j=k}^{i+1} \breve{G}_j(s) [\breve{U}_i(s)]^{-1} \prod_{l=1}^{i-1} \rho_l(s) z_i(s).

This completes the proof.

We now compute the conditional moments of the reward process. For g \geq 1, l \geq 0, we write

    m_{(l,i)}^{(g)} = E_{(l,i)}[\xi(\omega)^g],
    M_l^{(g)} = \left( m_{(l,1)}^{(g)}, m_{(l,2)}^{(g)}, \ldots, m_{(l,M_l)}^{(g)} \right)^T

and

    \Psi_l^{(g-1)} = \begin{cases} \mathrm{diag}(V(l,1), V(l,2), \ldots, V(l,M_l)) \, \Phi_l^*(0), & \text{if } g = 1, \\ \mathrm{diag}(V(l,1), V(l,2), \ldots, V(l,M_l)) \, g M_l^{(g-1)}, & \text{if } g \geq 2. \end{cases}
Constructive Computation in Stochastic Models with Applications
Based on Theorem 10.9, some computations yield that the vector sequence $\{M_l^{(g)}\}$ satisfies the system of vector equations
$$\Psi_0^{(g-1)} + A_1^{(0)} M_0^{(g)} + A_0^{(0)} M_1^{(g)} = 0 \qquad (10.71)$$
and
$$\Psi_l^{(g-1)} + A_2^{(l)} M_{l-1}^{(g)} + A_1^{(l)} M_l^{(g)} + A_0^{(l)} M_{l+1}^{(g)} = 0, \quad g \geq 1,\ l \geq 1. \qquad (10.72)$$
Using (1) of Theorem 1.2 to solve the system of vector equations Eq. (10.71) and Eq. (10.72), we have the following corollary.

Corollary 10.4 The vector sequence $\{M_m^{(g)}, g \geq 1, m \geq 0\}$ is given by
$$\begin{aligned} M_m^{(g)} = {} & \sum_{n=0}^{m-1} \Bigl[ U_m^{-1} X_{m-n}^{(m)} + \sum_{i=1}^{\infty} Y_i^{(m)} U_{i+m}^{-1} X_{i+m-n}^{(i+m)} \Bigr] \Psi_n^{(g-1)} \\ & + \Bigl[ U_m^{-1} + \sum_{i=1}^{\infty} Y_i^{(m)} U_{i+m}^{-1} X_{i+m}^{(i+m)} \Bigr] \Psi_m^{(g-1)} \\ & + \sum_{n=m+1}^{\infty} \Bigl[ Y_{n-m}^{(m)} U_n^{-1} + \sum_{i=n-m+1}^{\infty} Y_i^{(m)} U_{i+m}^{-1} X_{i-(n-m)}^{(i+m)} \Bigr] \Psi_n^{(g-1)}. \end{aligned}$$
It is easy to see from Corollary 10.4 that $\xi(\omega) = \infty$, a.s., if and only if $M_m^{(1)}$ is infinite for some $m \geq 0$. For example, $M_m^{(1)}$ has a simple expression for a birth-death process. In this case, $M_m^{(1)} < \infty$ is a useful condition under which $\xi(\omega) < \infty$, a.s.
10.8 A Down-Type Reward Process

In this section, we consider a down-type reward process and a return-type reward process for the QBD process given in Eq. (1.16). By means of the RG-factorizations, we first express the Laplace transforms of the conditional distributions of the down-type reward process and its conditional moments. Based on the up- and down-type reward processes, we then provide a way to study the return-type reward process.

Let
$$\xi_0^{(j)}(\omega) = \int_0^{\eta_0^{(j)}(\omega)} V(x_t(\omega))\, dt,$$
which is called a down-type reward process. Given that a level-independent QBD process starts at level 1, the down-type reward process $\xi_0^{(j)}(\omega)$ is a reward-type generalization of the busy period or the fundamental period, for which we refer to Subsection 2.2 of Neuts [40] for more details.
We write
$$G_{k,(i,j)}(x) = P_{(k,i)}\bigl\{\xi_0^{(j)}(\omega) \leq x\bigr\}, \qquad G_k(x) = \bigl[G_{k,(i,j)}(x)\bigr]_{1 \leq i \leq M_k,\, 1 \leq j \leq M_0}$$
and
$$g_k^*(s) = \int_0^{\infty} e^{-sx}\, dG_k(x).$$
A similar argument to the proof of Theorem 10.6 yields the following lemma.

Lemma 10.4 The matrix sequence $\{g_k^*(s)\}$ satisfies the system of matrix equations
$$A_2^{(1)} + A_1^{(1)}(s)\, g_1^*(s) + A_0^{(1)} g_2^*(s) = 0 \qquad (10.73)$$
and
$$A_2^{(k)} g_{k-1}^*(s) + A_1^{(k)}(s)\, g_k^*(s) + A_0^{(k)} g_{k+1}^*(s) = 0, \quad k \geq 2, \qquad (10.74)$$
where $A_1^{(k)}(s)$ denotes $A_1^{(k)} - s\, \mathrm{diag}\bigl(V(k,1), \ldots, V(k,M_k)\bigr)$.
Based on Lemma 10.4, we explicitly express the sequence $\{g_k^*(s)\}$ in the following theorem.

Theorem 10.11 For $s \geq 0$,
$$g_k^*(s) = G_k(s) G_{k-1}(s) G_{k-2}(s) \cdots G_1(s), \quad k \geq 1,$$
where the sequence $\{G_k(s)\}$ is the UL-type G-measure of the QBD process whose infinitesimal generator is given by
$$\Phi(s) = \begin{pmatrix} A_1^{(1)}(s) & A_0^{(1)} & & \\ A_2^{(2)} & A_1^{(2)}(s) & A_0^{(2)} & \\ & A_2^{(3)} & A_1^{(3)}(s) & A_0^{(3)} \\ & & \ddots & \ddots & \ddots \end{pmatrix}.$$
Proof Let $\vec{g}(s) = \bigl(g_1^*(s)^T, g_2^*(s)^T, g_3^*(s)^T, \ldots\bigr)^T$. Then it follows from Lemma 10.4 that
$$\Phi(s)\vec{g}(s) = -\bigl([A_2^{(1)}]^T, 0^T, 0^T, \ldots\bigr)^T.$$
Using the UL-type RG-factorization leads to
$$\vec{g}(s) = -[I - G_L(s)]^{-1}[U_D(s)]^{-1}[I - R_U(s)]^{-1} \bigl([A_2^{(1)}]^T, 0^T, 0^T, \ldots\bigr)^T \qquad (10.75)$$
$$= \Bigl( \bigl\{[-U_1(s)]^{-1} A_2^{(1)}\bigr\}^T, \bigl\{Z_2^{(2)} [-U_1(s)]^{-1} A_2^{(1)}\bigr\}^T, \ldots \Bigr)^T,$$
where
$$Z_k^{(k)} = G_k(s) G_{k-1}(s) G_{k-2}(s) \cdots G_2(s), \quad k \geq 1.$$
Notice that $[-U_1(s)]^{-1} A_2^{(1)} = G_1(s)$, and so Eq. (10.75) gives the desired result.

We write
$$w_{k,(i,j)}^*(s) = E_{(k,i)}\Bigl[\exp\Bigl\{-s \int_0^{\eta_{k-1}^{(j)}} V(x_t)\, dt\Bigr\}\Bigr]$$
and the matrix
$$w_k^*(s) = \bigl[w_{k,(i,j)}^*(s)\bigr]_{1 \leq i \leq M_k,\, 1 \leq j \leq M_{k-1}}.$$
The following theorem provides an expression for the sequence $\{g_k^*(s)\}$ in terms of the skip-free property of the QBD process.

Theorem 10.12
$$g_k^*(s) = w_k^*(s)\, w_{k-1}^*(s) \cdots w_1^*(s), \qquad (10.76)$$
where the matrix sequence $\{w_l^*(s)\}$ satisfies the system of matrix equations
$$A_2^{(k)} + A_1^{(k)}(s)\, w_k^*(s) + A_0^{(k)} w_{k+1}^*(s)\, w_k^*(s) = 0, \quad k \geq 1. \qquad (10.77)$$
Proof A similar argument to the proof of Theorem 10.6 leads to
$$\begin{aligned} w_{k,(i,j)}^*(s) = {} & H_i^{(k)}(s)\, \frac{a_{2,(i,j)}^{(k)}}{-a_{1,(i,i)}^{(k)}} + H_i^{(k)}(s) \sum_{\substack{j'=1 \\ j' \neq i}}^{M_k} \frac{a_{1,(i,j')}^{(k)}}{-a_{1,(i,i)}^{(k)}}\, w_{k,(j',j)}^*(s) \\ & + H_i^{(k)}(s) \sum_{j'=1}^{M_{k+1}} \frac{a_{0,(i,j')}^{(k)}}{-a_{1,(i,i)}^{(k)}} \sum_{m=1}^{M_k} w_{k+1,(j',m)}^*(s)\, w_{k,(m,j)}^*(s),
\end{aligned}$$
which is equivalent to Eq. (10.77). Note that when the QBD process arrives at level 0 from the initial level $k$, it must pass through level $k-1$, level $k-2$, \ldots, level 2 and level 1. Therefore, we get
$$g_{k,(i,j)}^*(s) = \sum_{i_1=1}^{M_1} \sum_{i_2=1}^{M_2} \cdots \sum_{i_{k-1}=1}^{M_{k-1}} w_{k,(i,i_{k-1})}^*(s)\, w_{k-1,(i_{k-1},i_{k-2})}^*(s) \cdots w_{1,(i_1,j)}^*(s),$$
which is equivalent to Eq. (10.76).

For a level-independent QBD process, according to Theorem 10.12 we have the following corollary.

Corollary 10.5 If $A_2^{(l)} = A_2$ for $l \geq 2$, $A_1^{(k)} = A_1$ and $A_0^{(k)} = A_0$ for $k \geq 1$, then
$$w_l^*(s) = w^*(s), \quad l \geq 2, \qquad g_k^*(s) = w^*(s)^{k-1}\, w_1^*(s).$$
In particular,
$$g_k^*(0) = G^{k-1} G_1, \quad k \geq 1,$$
where $G_1 = w_1^*(0)$ and $G$ is the minimal nonnegative solution to the matrix equation
$$A_2 + A_1 G + A_0 G^2 = 0.$$
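The minimal nonnegative solution of such a quadratic matrix equation can be computed by successive substitution $G \leftarrow (-A_1)^{-1}(A_2 + A_0 G^2)$ starting from $G = 0$. The following sketch is our own illustration (not from the text), using the scalar birth-death instance $A_0 = \lambda$, $A_2 = \mu$, $A_1 = -(\lambda + \mu)$, for which the equation has the two roots $1$ and $\mu/\lambda$ and the iteration converges to the smaller one:

```python
# Fixed-point iteration for the minimal nonnegative solution G of
#     A2 + A1*G + A0*G^2 = 0,
# shown here for scalar (birth-death) blocks; the matrix case replaces
# the division by (-A1) with a linear solve against -A1.
def minimal_G(lam, mu, tol=1e-14, max_iter=100_000):
    A0, A1, A2 = lam, -(lam + mu), mu
    G = 0.0                                  # start below the minimal solution
    for _ in range(max_iter):
        G_new = (A2 + A0 * G * G) / (-A1)    # G <- (-A1)^{-1}(A2 + A0 G^2)
        if abs(G_new - G) < tol:
            return G_new
        G = G_new
    return G

# lam = 1, mu = 2: the quadratic G^2 - 3G + 2 = 0 has roots 1 and 2,
# so the minimal nonnegative root is 1 (return to level 0 is certain).
G = minimal_G(lam=1.0, mu=2.0)
print(G)
```

Starting from $G = 0$, the iterates increase monotonically to the minimal solution, which is the standard convergence argument for this substitution scheme.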
Remark 10.5 (1) Let $G_k = w_k^*(0)$ for $k \geq 1$. Then the matrix sequence $\{G_k\}$ is the minimal nonnegative solution to the system of matrix equations
$$A_2^{(k)} + A_1^{(k)} G_k + A_0^{(k)} G_{k+1} G_k = 0, \quad k \geq 1.$$
(2) If $V(k,j) = 1$ for all $k \geq 0$, $1 \leq j \leq M_k$, then the matrix sequence $\{G_k, k \geq 1\}$ is the same as (43) in Ramaswami and Taylor [46].

We now compute the conditional moments of the random variable $\xi_0^{(j)}(\omega)$. Let
$$m_{k,(i,j)}^{(g)} = E_{(k,i)}\bigl[\xi_0^{(j)}(\omega)^g\bigr]$$
and the matrix
$$B_k^{(g)} = \bigl[m_{k,(i,j)}^{(g)}\bigr]_{1 \leq i \leq M_k,\, 1 \leq j \leq M_0}, \quad g \geq 1.$$
We write
$$\Psi_k^{(g-1)} = \begin{cases} \mathrm{diag}\bigl(V(k,1), V(k,2), \ldots, V(k,M_k)\bigr)\, G_k G_{k-1} \cdots G_1, & \text{if } g = 1, \\ g\, \mathrm{diag}\bigl(V(k,1), V(k,2), \ldots, V(k,M_k)\bigr)\, B_k^{(g-1)}, & \text{if } g \geq 2, \end{cases}$$
where the matrix sequence $\{G_k\}$ is given in (1) of Remark 10.5. Note that
$$m_{k,(i,j)}^{(g)} = (-1)^g \frac{d^g}{ds^g}\, g_{k,(i,j)}^*(s) \Big|_{s=0};$$
it follows from Eq. (10.73) and Eq. (10.74) that the matrix sequence $\{B_m^{(g)}, g \geq 1, m \geq 1\}$ satisfies the system of matrix equations
$$\Psi_1^{(g-1)} + A_1^{(1)} B_1^{(g)} + A_0^{(1)} B_2^{(g)} = 0 \qquad (10.78)$$
and
$$\Psi_k^{(g-1)} + A_2^{(k)} B_{k-1}^{(g)} + A_1^{(k)} B_k^{(g)} + A_0^{(k)} B_{k+1}^{(g)} = 0, \quad k \geq 2. \qquad (10.79)$$
The following corollary gives an expression for the matrix sequence $\{B_m^{(g)}, g \geq 1, m \geq 1\}$.

Corollary 10.6 For $g \geq 1$, $m \geq 1$,
$$\begin{aligned} B_m^{(g)} = {} & \sum_{n=0}^{m-1} \Bigl[ U_m^{-1} X_{m-n}^{(m)} + \sum_{i=1}^{\infty} Y_i^{(m)} U_{i+m}^{-1} X_{i+m-n}^{(i+m)} \Bigr] \Psi_n^{(g-1)} \\ & + \Bigl[ U_m^{-1} + \sum_{i=1}^{\infty} Y_i^{(m)} U_{i+m}^{-1} X_{i+m}^{(i+m)} \Bigr] \Psi_m^{(g-1)} \\ & + \sum_{n=m+1}^{\infty} \Bigl[ Y_{n-m}^{(m)} U_n^{-1} + \sum_{i=n-m+1}^{\infty} Y_i^{(m)} U_{i+m}^{-1} X_{i-(n-m)}^{(i+m)} \Bigr] \Psi_n^{(g-1)}, \end{aligned}$$
where $\{R_k\}$, $\{G_l\}$ and $\{U_l\}$ are the LU-type measures of the QBD process whose infinitesimal generator is given by
$$\begin{pmatrix} A_1^{(1)} & A_0^{(1)} & & \\ A_2^{(2)} & A_1^{(2)} & A_0^{(2)} & \\ & A_2^{(3)} & A_1^{(3)} & A_0^{(3)} \\ & & \ddots & \ddots & \ddots \end{pmatrix}, \qquad (10.80)$$
and $\{X_k^{(l)}, 1 \leq k \leq l\}$ and $\{Y_k^{(l)}, k \geq 1, l \geq 0\}$ are respectively expressed by $\{R_k\}$ and $\{G_l\}$ according to Eq. (1.32) and Eq. (1.33).
Proof It follows from Eq. (10.78) and Eq. (10.79) that $\vec{B}^{(g)}$ and $\vec{\Psi}^{(g-1)}$ satisfy the system of equations determined by the generator in Eq. (10.80), where
$$\vec{B}^{(g)} = \Bigl( \bigl[B_1^{(g)}\bigr]^T, \bigl[B_2^{(g)}\bigr]^T, \bigl[B_3^{(g)}\bigr]^T, \ldots \Bigr)^T$$
and
$$\vec{\Psi}^{(g-1)} = \Bigl( \bigl[\Psi_1^{(g-1)}\bigr]^T, \bigl[\Psi_2^{(g-1)}\bigr]^T, \bigl[\Psi_3^{(g-1)}\bigr]^T, \ldots \Bigr)^T.$$
Using (1) of Theorem 1.2 to solve this system, we obtain the desired result.

Finally, we study a return-type reward process and express the Laplace transforms of its conditional distributions in terms of the up- and down-type reward processes discussed above. If the QBD process starts at state $(k,i)$ at time 0, then $\eta_k^{(j)}(\omega)$ is called the first return time to state $(k,i)$. We write
$$h_{k,(i,j)}^*(s) = E_{(k,i)}\bigl[e^{-s\, \xi_k^{(j)}(\omega)}\bigr]$$
and the matrix
$$h_k^*(s) = \bigl(h_{k,(i,j)}^*(s)\bigr)_{1 \leq i, j \leq M_k}.$$
Using the law of total probability, we obtain
$$h_0^*(s) = \bigl[-A_1^{(0)}(s)\bigr]^{-1} A_0^{(0)} w_1^*(s) \qquad (10.81)$$
and
$$h_k^*(s) = \bigl[-A_1^{(k)}(s)\bigr]^{-1} \bigl[A_2^{(k)} f_{k-1,k}^*(s) + A_0^{(k)} w_{k+1}^*(s)\bigr], \quad k \geq 1. \qquad (10.82)$$
Clearly, it is not difficult to further derive the conditional moments of the return-type reward process according to Eq. (10.81) and Eq. (10.82).
10.9 Discrete-Time Markov Reward Processes

In this section, we provide a simple introduction to the reward process of an irreducible discrete-time block-structured Markov chain. By using the UL- and LU-types of RG-factorizations, we provide expressions for the conditional distributions and conditional moments of the reward process.

Consider an irreducible discrete-time block-structured Markov chain $\{X_n, n \geq 0\}$ whose transition probability matrix is given by Eq. (2.1), or
$$P = \begin{pmatrix} P_{0,0} & P_{0,1} & P_{0,2} & \cdots \\ P_{1,0} & P_{1,1} & P_{1,2} & \cdots \\ P_{2,0} & P_{2,1} & P_{2,2} & \cdots \\ \vdots & \vdots & \vdots & \end{pmatrix}.$$
Let $\pi(0) = (\pi_0(0), \pi_1(0), \pi_2(0), \ldots)$ be the initial probability vector of the Markov chain $\{X_n, n \geq 0\}$, that is, the $(k,i)$th entry of $\pi(0)$ is given by
$$\pi_{k,i}(0) = P\{X_0 = (k,i)\}.$$
Obviously, the transient probability vector of the Markov chain at time $n$ is given by $\pi(n) = \pi(0) P^n$, $n \geq 0$.

Let $f(x)$ be a reward rate with respect to state $x$ of the Markov chain $\{X_n, n \geq 0\}$, i.e., the reward rate is $f(k,i)$ if the Markov chain $\{X_n, n \geq 0\}$ is at state $(k,i)$. Then the random variable $f(X_n)$ is the instantaneous reward rate at time $n$. Hence
$$E[f(X_n)] = \sum_{(k,i) \in \Omega} \pi_{k,i}(n)\, f(k,i) = \sum_{k=0}^{\infty} \pi_k(n) f_k = \pi(n) f \qquad (10.83)$$
and
$$E[f(X_n)^r] = \sum_{(k,i) \in \Omega} \pi_{k,i}(n)\, f(k,i)^r, \quad r \geq 2.$$
At the same time, the probability distribution of the random variable $f(X_n)$ is given by
$$P\{f(X_n) \leq x\} = \sum_{(k,i) \in \Omega:\ f(k,i) \leq x} \pi_{k,i}(n).$$
Also, the limit distribution of the random variable $f(X_n)$ is given by
$$\lim_{n \to \infty} P\{f(X_n) \leq x\} = \sum_{(k,i) \in \Omega:\ f(k,i) \leq x} \pi_{k,i}.$$
If the Markov chain $\{X_n, n \geq 0\}$ is positive recurrent and $\pi$ is the stationary probability vector of the Markov chain, then $\lim_{n \to \infty} P^n = e\pi$ and $\lim_{n \to \infty} \pi(n) = \pi$. We write $\eta = \lim_{n \to \infty} E[f(X_n)]$. It is easy to see that
$$\eta = \sum_{(k,i) \in \Omega} \pi_{k,i}\, f(k,i) = \sum_{k=0}^{\infty} \pi_k f_k = \pi f. \qquad (10.84)$$
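For a finite chain, Eqs. (10.83) and (10.84) can be evaluated directly by iterating $\pi(n) = \pi(n-1)P$ and letting $n$ grow. The sketch below is our own numerical illustration (the chain $P$, initial vector and reward vector $f$ are assumed, not from the text):

```python
# E[f(X_n)] = pi(n) f with pi(n) = pi(0) P^n, and eta = pi f in the limit.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.1, 0.4, 0.5]]
pi0 = [1.0, 0.0, 0.0]
f = [0.0, 1.0, 2.0]

def vec_mat(v, M):
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

pi_n = pi0[:]
for _ in range(200):            # pi(n) = pi(n-1) P; converges to pi
    pi_n = vec_mat(pi_n, P)
eta = dot(pi_n, f)              # approximates the stationary reward rate pi f

# sanity: pi_n is (numerically) stationary and still a probability vector
residual = max(abs(x - y) for x, y in zip(vec_mat(pi_n, P), pi_n))
print(eta, residual)
```

Since $f$ takes values in $[0, 2]$ here, the limiting reward rate $\eta$ must lie strictly between these bounds for an irreducible chain.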
Now, we analyze the accumulated reward over the times $0, 1, \ldots, N$ as follows:
$$\Phi(N) = \sum_{n=0}^{N} f(X_n). \qquad (10.85)$$
Let $L(N) = \pi(0) \sum_{n=0}^{N} P^n$ and $\xi(N) = E[\Phi(N)]$. It is easy to check that
$$\xi(N) = \sum_{(k,i) \in \Omega} L_{k,i}(N)\, f(k,i) = \sum_{k=0}^{\infty} L_k(N) f_k = L(N) f \qquad (10.86)$$
and
$$E[\Phi(N)^r] = \sum_{(k,i) \in \Omega} L_{k,i}(N)\, f(k,i)^r, \quad r \geq 2.$$
It follows from Eq. (10.86) that
$$\xi(0) = \pi(0) f, \qquad (10.87)$$
$$\xi(N) = \xi(N-1) + \pi(0) P^N f, \quad N \geq 1. \qquad (10.88)$$
Using the iterative relations Eq. (10.87) and Eq. (10.88), we can compute the reward value $\xi(N)$ for each $N \geq 0$. The following proposition provides a useful property for the limit of the sequence $\{\xi(N)\}$.
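The iterative relations (10.87) and (10.88) need only one vector-matrix product per step if the row vector $\pi(0)P^N$ is carried along. A minimal sketch (the chain and reward are illustrative choices of our own):

```python
# xi(N) = xi(N-1) + pi(0) P^N f  (Eqs. (10.87)-(10.88)), carrying the row
# vector y = pi(0) P^n along so each step costs one vector-matrix product.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.1, 0.4, 0.5]]
pi0 = [0.2, 0.3, 0.5]
f = [1.0, 0.5, 2.0]

def vec_mat(v, M):
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

N = 50
y = pi0[:]                  # y = pi(0) P^0
xi = dot(y, f)              # xi(0) = pi(0) f
for _ in range(N):
    y = vec_mat(y, P)       # y = pi(0) P^n
    xi += dot(y, f)         # xi(n) = xi(n-1) + pi(0) P^n f

# cross-check against the definition xi(N) = sum_{n=0}^N E[f(X_n)]
xi_direct, row = 0.0, pi0[:]
for _ in range(N + 1):
    xi_direct += dot(row, f)
    row = vec_mat(row, P)

print(xi, xi_direct)
```

Both computations produce the same partial sum, and for a positive reward vector the sequence grows without bound, as Proposition 10.1 below asserts.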
Proposition 10.1 If the Markov chain $P$ is positive recurrent, $\pi(0)e > 0$ and $f > 0$, then $\lim_{N \to \infty} \xi(N) = \infty$.

Proof Since the reward rate vector $f > 0$, it is easy to see that the sequence $\{\xi(N)\}$ is non-decreasing. Thus the limit $\lim_{N \to \infty} \xi(N)$ is either finite or infinite. We assume that $\xi = \lim_{N \to \infty} \xi(N) < \infty$. Taking the limit $N \to \infty$ in Eq. (10.88), and using $P^N \to e\pi$, we obtain
$$\xi = \xi + \pi(0)e\, \pi f,$$
which leads to
$$\pi(0)e\, \pi f = 0.$$
This yields a contradiction due to the fact that $\pi(0)e > 0$ and $\pi f > 0$. Therefore, $\xi = \lim_{N \to \infty} \xi(N) = \infty$. This completes the proof.
Remark 10.6 If the Markov chain $P$ is transient, then
$$L(\infty) = \pi(0)(I - P)_{\min}^{-1}.$$
Thus we obtain
$$\xi = \pi(0)(I - P)_{\min}^{-1} f.$$
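For a finite substochastic matrix $P$, the minimal inverse $(I - P)^{-1}_{\min}$ reduces to the ordinary inverse $\sum_{n \geq 0} P^n$, so the formula of Remark 10.6 can be checked against partial sums of the Neumann series. A small sketch (the transient matrix below is our own example):

```python
# Remark 10.6 for a finite substochastic P: xi = pi(0) (I - P)^{-1} f,
# with (I - P)^{-1} = sum_{n>=0} P^n (Neumann series).
P = [[0.5, 0.2],
     [0.1, 0.3]]          # row sums < 1: the chain eventually exits (transient)
pi0 = [1.0, 0.0]
f = [1.0, 2.0]

def vec_mat(v, M):
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# xi via the Neumann series: sum_n pi(0) P^n f
xi_series, row = 0.0, pi0[:]
for _ in range(2000):
    xi_series += dot(row, f)
    row = vec_mat(row, P)

# xi via the direct 2x2 inverse of I - P
a, b = 1 - P[0][0], -P[0][1]
c, d = -P[1][0], 1 - P[1][1]
det = a * d - b * c
inv = [[d / det, -b / det], [-c / det, a / det]]
xi_direct = dot(vec_mat(pi0, inv), f)

print(xi_series, xi_direct)
```

For this particular $P$, $\pi(0)(I-P)^{-1}f$ works out to $1.1/0.33 = 10/3$, and the truncated series agrees with it to machine precision.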
Proposition 10.1 provides a sufficient condition under which $\xi = \infty$. However, this condition can take many forms once it guarantees that $\lim_{N \to \infty} \pi(0) P^N f \neq 0$. For this, we should introduce a discounted reward function as follows:
$$\Phi(\beta, N) = \sum_{n=0}^{N} \beta^n f(X_n),$$
where $\beta \in (0,1]$ is a discount factor. It is clear that $\Phi(1, N) = \Phi(N)$, a.s. Let $\varphi(\beta, N) = E[\Phi(\beta, N)]$. Then $\varphi(1, N) = \xi(N)$. In this case, we can compute the sequence $\{\varphi(\beta, N)\}$ according to the following iterative relations:
$$\varphi(\beta, 0) = \pi(0) f,$$
$$\varphi(\beta, N) = \varphi(\beta, N-1) + \beta^N \pi(0) P^N f, \quad N \geq 1.$$
If $\beta^N P^N \to 0$ as $N \to \infty$, then
$$\varphi(\beta) = \varphi(\beta, \infty) = \pi(0)(I - \beta P)_{\min}^{-1} f. \qquad (10.89)$$
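For a finite chain, $(I - \beta P)^{-1}_{\min}$ is the ordinary inverse, and the iteration above computes its Neumann series $\sum_n \beta^n \pi(0) P^n f$ term by term. The following sketch (a two-state example of our own) checks the iteration against a direct solve:

```python
# phi(beta) = pi(0) (I - beta P)^{-1} f, approximated by the iterates
# phi(beta, N) = phi(beta, N-1) + beta^N pi(0) P^N f.
P = [[0.9, 0.1],
     [0.2, 0.8]]
pi0 = [1.0, 0.0]
f = [1.0, 0.0]
beta = 0.95

def vec_mat(v, M):
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# iterative relations: carry y = beta^N pi(0) P^N along
phi, y = dot(pi0, f), pi0[:]
for _ in range(2000):
    y = [beta * x for x in vec_mat(y, P)]
    phi += dot(y, f)

# direct 2x2 solve: phi = pi0 (I - beta P)^{-1} f
a, b = 1 - beta * P[0][0], -beta * P[0][1]
c, d = -beta * P[1][0], 1 - beta * P[1][1]
det = a * d - b * c
inv = [[d / det, -b / det], [-c / det, a / det]]
phi_direct = dot(vec_mat(pi0, inv), f)

print(phi, phi_direct)
```

With $\beta < 1$ the series converges geometrically, so a few thousand terms already match the direct solve to well below $10^{-6}$.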
Using the UL-type RG-factorization, we have
$$\varphi(\beta) = \pi(0)[I - G_L(\beta)]^{-1}[I - \Psi_D(\beta)]^{-1}[I - R_U(\beta)]^{-1} f,$$
or applying the LU-type RG-factorization, we obtain
$$\varphi(\beta) = \pi(0)[I - G_U(\beta)]^{-1}[I - \Phi_D(\beta)]^{-1}[I - R_L(\beta)]^{-1} f.$$
10.10
Notes in the Literature
Markov reward processes can accurately model practical systems that evolve stochastically over time. A Markov reward process consists of two elements: a Markov environment and an associated reward structure. Important examples include Naddor [38] for inventory systems, Daley [13] and Daley and Jacobs [14] for queues, Puri [41] for biological models, Kulkarni, Nicola and Trivedi [26], Masuda and Sumita [33], Donatiello and Grassi [17], Nabli and Sericola [37], and Vaidyanathan, Harper, Hunter and Trivedi [58] for communication networks. Karlin and McGregor [24], Wang [59], Puri [41,42], and McNeil [35] studied Markov reward processes for a birth-death process. Darling and Kac [15], Kesten [25], Mclean and Neuts [34], Puri [43], Howard [23], Meyer [36], Masuda and Sumita [33], and Stenberg, Manca and Silvestrov [51] analyzed Markov reward processes for Markov or semi-Markov processes. Glasserman [18,19,20], Ho and Cao [22], Cassandras [9], and Cao [7,8] studied Markov reward processes by means of infinitesimal perturbation analysis. Bobbio and Trivedi [5] provided a method for computing the first completion time distribution of a continuous-time Markov reward chain. Reibman, Smith and Trivedi [47] gave an overview of numerical approaches to the transient analysis of Markov reward models. Readers may further refer to Trivedi and Wagner [57], Reibman and Trivedi [48], Ciardo, Marie, Sericola and Trivedi [11], Ciardo, Blakemore, Chimento, Muppala and Trivedi [10], Ciardo and Trivedi [12], Rubino and Sericola [49], Masuda [32], Qureshi and Sanders [44], Mallubhatla, Pattipati and Viswanadham [31], Telek, Pfening and Fodor [54,55], Brenner and Kumar [6], Abdallah and Hamza [1], de Souza e Silva and Gail [16], Telek and Rácz [56], Bladt, Meini, Neuts and Sericola [4], Rácz [45], Telek, Horváth and Horváth [52,53], Grassmann and Luo [21], Stefanov [50], and Lisnianski [30].

In this chapter, we mainly refer to Li [28,29], Reibman, Smith and Trivedi [47], Telek, Pfening and Fodor [54,55], and Rácz [45]. At the same time, we have also included some previously unpublished results for a more systematic organization.
Problems

10.1 For a continuous-time level-dependent QBD process $\{X_t, t \geq 0\}$ with finitely-many levels, discuss the two reward processes $\{f(X_t)\}$ and $\{\Phi(t)\}$.

10.2 For a discrete-time level-dependent QBD process $\{X_n, n \geq 0\}$ with finitely-many levels, discuss the two reward processes $\{f(X_n)\}$ and $\{\Phi(n)\}$, where
$$\Phi(n) = \sum_{k=0}^{n} f(X_k).$$

10.3 For a continuous-time Markov chain of GI/M/1 type, study the reward process $\{\Phi(t)\}$.

10.4 For a continuous-time level-dependent QBD process $\{X_t, t \geq 0\}$ with infinitely-many levels, let $\Gamma(x) = \min\{t: \Phi(t) \geq x\}$ and $C(t,x) = P\{\Gamma(x) \leq t\}$; discuss the probability distribution $C(t,x)$.

10.5 For a continuous-time Markov chain of M/G/1 type, study the probability distribution $C(t,x)$. Further, extend the results to a Markov chain of GI/G/1 type.

10.6 Use the Markov reward process to study the multivariate PH distribution.

10.7 Consider an M/G/1 queue with server breakdowns and repairs, where the distributions of the life time and the repair time are exponential and general, respectively. A reward rate of 1 is assigned to all the system operational states and a reward rate of 0 is assigned to all the system failure states. Compute $E[f(X(t))]$, $E[\Phi(t)]$, $E[\Phi(t)]/t$ and $\lim_{t \to \infty} E[\Phi(t)]/t$.

10.8 Consider a PH/PH/1 queue. Let $N(t)$ be the number of customers in the system at time $t$. We define the reward rate
$$f(N(t)) = \begin{cases} 1, & 0 \leq N(t) \leq M, \\ 0, & N(t) \geq M+1. \end{cases}$$
Let $\Phi(t) = \int_0^t f(N(x))\, dx$. Compute $E[\Phi(t)]$, $E[\Phi(t)]/t$ and $\lim_{t \to \infty} E[\Phi(t)]/t$.

10.9 Consider a BMAP/G/1 queue. Let $N(t)$ be the number of customers in the system at time $t$. We define the reward rate
$$f(N(t)) = \begin{cases} 1, & 0 \leq N(t) \leq M, \\ 0, & N(t) \geq M+1. \end{cases}$$
Let $\Phi(t) = \int_0^t f(N(x))\, dx$. Compute $E[\Phi(t)]$, $E[\Phi(t)]/t$ and $\lim_{t \to \infty} E[\Phi(t)]/t$.

10.10 Consider an M/M/c retrial queue, where the retrial time is exponential. Let $N(t)$ be the number of customers in the system at time $t$. We define the reward rate
$$f(N(t)) = \begin{cases} 1, & 1 \leq N(t) \leq M, \\ 0, & N(t) = 0 \text{ or } N(t) \geq M+1. \end{cases}$$
Let $\Phi(t) = \int_0^t f(N(x))\, dx$. Compute $E[\Phi(t)]$, $E[\Phi(t)]/t$ and $\lim_{t \to \infty} E[\Phi(t)]/t$.

10.11 Consider an accumulated reward in an irreducible continuous-time Markov chain which is either finite-state or infinite-state, and provide some sufficient conditions for iteratively computing the distribution of the accumulated reward.
References
1. Abdallah H. and M. Hamza (2002). On the sensitivity analysis of the expected accumulated reward. Performance Evaluation 47: 163–179
2. Asmussen S. (1987). Applied Probability and Queues, John Wiley & Sons
3. Bean N.G., P.K. Pollett and P.G. Taylor (2000). Quasistationary distributions for level-dependent quasi-birth-and-death processes. Stochastic Models 16: 511–541
4. Bladt M., B. Meini, M.F. Neuts and B. Sericola (2002). Distributions of reward functions on continuous-time Markov chains. In Matrix Analytic Methods: Theory and Applications, G. Latouche and P.G. Taylor (eds.), World Scientific: New Jersey, 39–62
5. Bobbio A. and K.S. Trivedi (1990). Computation of the distribution of the completion time when the work requirement is a PH random variable. Stochastic Models 10: 533–548
6. Brenner A. and U.D. Kumar (1998). Performance/dependability modelling using stochastic reward models: Transient behaviour. Microelectronics and Reliability 38: 449–454
7. Cao X.R. (1994). Realization Probabilities: The Dynamics of Queuing Systems, Springer-Verlag: London
8. Cao X.R. (2007). Stochastic Learning and Optimization: A Sensitivity-Based Approach, Springer-Verlag: New York
9. Cassandras C.G. (1993). Discrete-Event Systems: Modeling and Performance Analysis, Homewood, IL, Aksen Associates
10. Ciardo G., A. Blakemore, P.F.Jr. Chimento, J.K. Muppala and K.S. Trivedi (1993). Automated generation and analysis of Markov reward models using stochastic reward nets. In Linear Algebra, Markov Chains, and Queueing Models, IMA Vol. Math. Appl., 48, Springer: New York, 145–191
11. Ciardo G., R.A. Marie, B. Sericola and K.S. Trivedi (1990). Performability analysis using semi-Markov reward processes. IEEE Transactions on Computers 39: 1251–1264
12. Ciardo G. and K.S. Trivedi (1993). A decomposition approach for stochastic reward net models. Performance Evaluation 18: 37–59
13. Daley D.J. (1969). The total waiting time in a busy period of a stable single-server queue, I. Journal of Applied Probability 6: 550–564
14. Daley D.J. and D.R.J. Jacobs (1969). The total waiting time in a busy period of a stable single-server queue, II. Journal of Applied Probability 6: 565–572
15. Darling D.A. and M. Kac (1957). On occupation times for Markoff processes. Transactions of the American Mathematical Society 84: 444–458
16. de Souza e Silva E. and H.R. Gail (1998). An algorithm to calculate transient distributions of cumulative rate and impulse based reward. Stochastic Models 14: 509–536
17. Donatiello L. and V. Grassi (1991). On evaluating the cumulative performance distribution of fault-tolerant computer systems. IEEE Transactions on Computers 40: 1301–1307
18. Glasserman P. (1988). Infinitesimal perturbation analysis of a birth and death process. Operations Research Letters 7: 43–49
19. Glasserman P. (1991). Gradient Estimation Via Perturbation Analysis, Kluwer Academic Publishers
20. Glasserman P. (1992). Derivative estimates from simulation of continuous-time Markov chains. Operations Research 40: 292–308
21. Grassmann W.K. and J. Luo (2005). Simulating Markov-reward processes with rare events. ACM Transactions on Modeling and Computer Simulation 15: 138–154
22. Ho Y.C. and X.R. Cao (1991). Perturbation Analysis of Discrete-Event Dynamic Systems, Kluwer
23. Howard R.A. (1971). Dynamic Probabilistic Systems, Vol. II: Semi-Markov and Decision Processes, John Wiley and Sons
24. Karlin S. and J. McGregor (1961). Occupation time law for birth and death processes. In Proc. 4th Berkeley Symp. Math. Statist. Prob., 2: 249–272
25. Kesten H. (1962). On occupation times for Markov and semi-Markov processes. Transactions of the American Mathematical Society 103: 82–112
26. Kulkarni V.G., V.F. Nicola and K.S. Trivedi (1987). The completion time of a job on multimode systems. Advances in Applied Probability 19: 932–954
27. Latouche G., C.E.M. Pearce and P.G. Taylor (1998). Invariant measures for quasi-birth-death processes. Stochastic Models 14: 443–460
28. Li Q.L. (1997). Stochastic Integral Functionals and Quasi-Stationary Distributions in Stochastic Models, Ph.D. Thesis, Institute of Applied Mathematics, Chinese Academy of Sciences, Beijing 100080, China
29. Li Q.L. and J. Cao (2004). Two types of RG-factorizations of quasi-birth-and-death processes and their applications to stochastic integral functionals. Stochastic Models 20: 299–340
30. Lisnianski A. (2007). The Markov reward model for a multi-state system reliability assessment with variable demand. Quality Technology & Quantitative Management 4: 265–278
31. Mallubhatla R., K.R. Pattipati and N. Viswanadham (1995). Discrete-time Markov-reward models of production systems. In Discrete Event Systems, Manufacturing Systems, and Communication Networks, Springer: New York, 149–175
32. Masuda Y. (1993). Partially observable semi-Markov reward processes. Journal of Applied Probability 30: 548–560
33. Masuda Y. and U. Sumita (1991). A multivariate reward process defined on a semi-Markov process and its first-passage-time distributions. Journal of Applied Probability 28: 360–373
34. Mclean R.A. and M.F. Neuts (1967). The integral of a step function defined in a semi-Markov process. SIAM Journal on Applied Mathematics 15: 726–737
35. McNeil D.R. (1970). Integral functionals of birth and death processes and related limiting distributions. Annals of Mathematical Statistics 41: 480–485
36. Meyer J.F. (1982). Closed form solution of performability. IEEE Transactions on Computers C-31: 648–657
37. Nabli H. and B. Sericola. Performability analysis for degradable computer systems. Computers & Mathematics with Applications 39: 217–234
38. Naddor E. (1966). Inventory Systems, Wiley: New York
39. Neuts M.F. (1981). Matrix-Geometric Solutions in Stochastic Models: An Algorithmic Approach, The Johns Hopkins University Press: Baltimore
40. Neuts M.F. (1989). Structured Stochastic Matrices of M/G/1 Type and Their Applications, Marcel Dekker: New York
41. Puri P.S. (1966). On the homogeneous birth-and-death process and its integral. Biometrika 53: 61–71
42. Puri P.S. (1968). Some further results on the homogeneous birth and death process and its integral. Mathematical Proceedings of the Cambridge Philosophical Society 64: 141–154
43. Puri P.S. (1971). A method for studying the integral functionals of stochastic processes with applications: I. Markov chains case. Journal of Applied Probability 8: 331–343
44. Qureshi M.A. and W.H. Sanders (1994). Reward model solution methods with impulse and rate rewards: an algorithm and numerical results. Performance Evaluation 20: 413–436
45. Rácz S. (2002). Numerical analysis of communication systems through Markov reward models, Ph.D. Thesis, Department of Telecommunications and Telematics, Budapest University of Technology and Economics
46. Ramaswami V. and P.G. Taylor (1996). Some properties of the rate operators in level dependent quasi-birth-and-death processes with a countable number of phases. Stochastic Models 12: 143–164
47. Reibman A., R. Smith and K.S. Trivedi (1989). Markov and Markov reward model transient analysis: an overview of numerical approaches. European Journal of Operational Research 40: 257–267
48. Reibman A. and K.S. Trivedi (1989). Transient analysis of cumulative measures of Markov model behavior. Stochastic Models 5: 683–710
49. Rubino G. and B. Sericola (1993). Sojourn times in semi-Markov reward processes: application to fault-tolerant systems modeling. Reliability Engineering & System Safety 41: 1–4
50. Stefanov V.T. (2006). Exact distributions for reward functions on semi-Markov and Markov additive processes. Journal of Applied Probability 43: 1053–1065
51. Stenberg F., R. Manca and D. Silvestrov (2007). An algorithmic approach to discrete time non-homogeneous backward semi-Markov reward processes with an application to disability insurance. Methodology and Computing in Applied Probability 9: 497–519
52. Telek M. and A. Horváth (1998). Supplementary variable approach applied to the transient analysis of Age-MRSPNs. In Proc. 3rd IEEE Annual Int. Computer Performance and Dependability Symposium, 7–9 September 1998, Durham, NC, 44–51
53. Telek M., A. Horváth and G. Horváth (2004). Analysis of inhomogeneous Markov reward models. Linear Algebra and its Applications 386: 383–405
54. Telek M., A. Pfening and G. Fodor (1998). An effective numerical method to compute the moments of the completion time of Markov reward models. Computers & Mathematics with Applications 36: 59–65
55. Telek M., A. Pfening and G. Fodor (1998). Analysis of the completion time of Markov reward models and its application. Acta Cybernetica 13: 439–452
56. Telek M. and S. Rácz (1999). Numerical analysis of large Markov reward models. Performance Evaluation 36 & 37: 95–114
57. Trivedi K.S. and R.A. Wagner (1979). A decision model for closed queueing networks. IEEE Transactions on Software Engineering 5: 328–332
58. Vaidyanathan K., R.E. Harper, S.W. Hunter and K.S. Trivedi (2002). Analysis of software rejuvenation in cluster systems using stochastic reward nets. The Journal of Mathematical Sciences 1: 123–150
59. Wang Z.K. (1961). On the distributions of functionals of birth and death processes and their applications in theory of queues. Scientia Sinica 5: 160–170
11 Sensitivity Analysis and Evolutionary Games
Quan-Lin Li Department of Industrial Engineering Tsinghua University Beijing 100084, P.R. China E-mail: [email protected] Abstract In this chapter, we present sensitivity analysis for performance measures of an irreducible perturbed Markov chain which is either discrete-time or continuous-time. By using the UL- and LU-types of RG-factorizations, we can express the nth derivatives of the performance measures, including the stationary, transient and discounted cases. Furthermore, we apply the sensitivity analysis to study symmetric evolutionary games by perturbed birth-death processes and asymmetric evolutionary games by perturbed QBD processes. Keywords stochastic models, RG-factorization, sensitivity analysis, evolutionary game, perturbed Markov chain, perturbed birth-death process, perturbed QBD process.
In this chapter, we present sensitivity analysis for performance measures of an irreducible perturbed Markov chain which is either discrete-time or continuous-time. By using the UL- and LU-types of RG-factorizations, we can express the $n$th derivatives of the performance measures, including the stationary, transient and discounted cases. Further, we apply the sensitivity analysis to study symmetric evolutionary games by perturbed birth-death processes, and asymmetric evolutionary games by perturbed QBD processes.

This chapter is organized as follows. Section 11.1 provides sensitivity analysis for the stationary performance measures of a perturbed discrete-time Markov chain with either finitely-many levels or infinitely-many levels. Also, it gives the censored structure for sensitivity analysis of a perturbed discrete-time Markov chain on either a large state space or an infinite state space. Section 11.2 obtains sensitivity analysis for a perturbed Markov chain of GI/M/1 type and a perturbed Markov chain of M/G/1 type by means of the matrix-geometric solution and the matrix-iterative solution, respectively. Section 11.3 discusses a perturbed continuous-time Markov chain. Section 11.4 derives the $n$th derivative of the conditional moments of the accumulated reward process for a QBD process. Section 11.5 discusses a perturbed MAP/PH/1 queue, including sensitivity analysis for a perturbed PH distribution and also for a perturbed MAP. Sections 11.6 and 11.7 use perturbed birth-death processes to analyze the evolutionarily stable strategy (ESS) of symmetric evolutionary games. Section 11.8 discusses the ESS of asymmetric evolutionary games in terms of perturbed QBD processes. Finally, Section 11.9 summarizes notes for the references related to the results of this chapter.
11.1 Perturbed Discrete-Time Markov Chains

In this section, we provide sensitivity analysis for performance measures of a perturbed discrete-time Markov chain with either finitely-many levels or infinitely-many levels. Applying the RG-factorizations, we can express the $n$th derivative of the performance measures for $n \geq 1$.

11.1.1 Markov Chains with Finitely-Many Levels

Consider an irreducible discrete-time block-structured Markov chain with $N$ levels whose transition probability matrix is given by
$$P = \begin{pmatrix} P_{0,0} & P_{0,1} & \cdots & P_{0,N} \\ P_{1,0} & P_{1,1} & \cdots & P_{1,N} \\ \vdots & \vdots & & \vdots \\ P_{N,0} & P_{N,1} & \cdots & P_{N,N} \end{pmatrix},$$
and introduce a perturbed real matrix or a perturbed-directional matrix as follows:
$$V = \begin{pmatrix} V_{0,0} & V_{0,1} & \cdots & V_{0,N} \\ V_{1,0} & V_{1,1} & \cdots & V_{1,N} \\ \vdots & \vdots & & \vdots \\ V_{N,0} & V_{N,1} & \cdots & V_{N,N} \end{pmatrix}.$$
Note that the block structure of the matrix $V$ is the same as that of the matrix $P$. We assume that there exists a sufficiently small number $\varepsilon > 0$ such that
$$\widetilde{P}_\varepsilon = P + \varepsilon V \qquad (11.1)$$
575
Constructive Computation in Stochastic Models with Applications
is still a transition probability matrix whose irreducibility and state classification are the same as those of the matrix P. i H are We assume that the Markov chain P and the perturbed Markov chain P both positive recurrent. In this case, it is clear that Ve 0, where e is a column vector of ones with suitable size. Let ʌ H and ʌ be the stationary probability i H and the Markov chain P, respectively. vectors of the perturbed Markov chain P We denote by f the reward rate vector with respect to the Markov chain P. In what follows we discuss the perturbed stationary performance measure KH , where KH ʌH f . Note that the stationary performance measure of the Markov chain P is given by K ʌ f limKH . H o0 It is clear that ʌH P H ʌH and SH e 1, thus we have ʌH ( P HV )
ʌH
(11.2)
and ʌH e 1.
(11.3)
Taking derivatives of the both sides of Eq. (11.2) for the variable H , we obtain d ʌH dH let H
d ʌH ( P HV ) ʌH V , dH
(11.4)
0 in Eq. (11.4), we have d ʌH _H dH
0
( I P)
ʌV .
(11.5)
It follows from Eq. (11.3) that d ʌ H _H dH
e
0,
d ʌ H _H 0 eʌ dH
0.
0
which leads to
Therefore, we obtain d ʌ H _H 0 ( I P eS ) dH
ʌV .
Note that I P eS is the fundamental matrix of the Markov chain P with finitely-many levels, and I P eS is always invertible, hence we obtain d ʌ H _H dH
1
0
ʌV ( I P eʌ ) .
(11.6)
Continuously taking derivatives of both sides of Eq. (11.2) in the variable H for n ı 2, we obtain 576
11
Sensitivity Analysis and Evolutionary Games
$$\frac{d^n}{d\varepsilon^n}\pi_\varepsilon (P + \varepsilon V) + n \frac{d^{n-1}}{d\varepsilon^{n-1}}\pi_\varepsilon V = \frac{d^n}{d\varepsilon^n}\pi_\varepsilon;$$
letting $\varepsilon = 0$ we have
$$\frac{d^n}{d\varepsilon^n}\pi_\varepsilon \Big|_{\varepsilon=0} (I - P) = n \frac{d^{n-1}}{d\varepsilon^{n-1}}\pi_\varepsilon \Big|_{\varepsilon=0} V. \tag{11.7}$$
Note that
$$\frac{d^n}{d\varepsilon^n}\pi_\varepsilon \Big|_{\varepsilon=0}\, e\pi = 0; \tag{11.8}$$
it follows from Eq. (11.7) and Eq. (11.8) that for $n \geq 2$,
$$\frac{d^n}{d\varepsilon^n}\pi_\varepsilon \Big|_{\varepsilon=0} (I - P + e\pi) = n \frac{d^{n-1}}{d\varepsilon^{n-1}}\pi_\varepsilon \Big|_{\varepsilon=0} V.$$
Therefore, we have
$$\frac{d^n}{d\varepsilon^n}\pi_\varepsilon \Big|_{\varepsilon=0} = n \frac{d^{n-1}}{d\varepsilon^{n-1}}\pi_\varepsilon \Big|_{\varepsilon=0} V (I - P + e\pi)^{-1} = n!\,\pi \left[ V (I - P + e\pi)^{-1} \right]^n. \tag{11.9}$$
Further, for the perturbed stationary performance measure we can obtain
$$\frac{d^n}{d\varepsilon^n}\eta_\varepsilon \Big|_{\varepsilon=0} = n!\,\pi \left[ V (I - P + e\pi)^{-1} \right]^n f, \quad n \geq 1.$$
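The first-derivative formula above can be checked numerically. The following sketch uses a small hypothetical 3-state chain $P$, a perturbation direction $V$ with zero row sums (so $Ve = 0$) and a reward vector $f$, all chosen for illustration only, and compares $\pi V (I-P+e\pi)^{-1} f$ against a finite difference of $\eta_\varepsilon$:

```python
import numpy as np

def stationary(P):
    """Stationary probability vector of an irreducible stochastic matrix P."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Hypothetical 3-state chain, perturbation direction (row sums zero) and rewards.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
V = np.array([[-0.1, 0.1, 0.0],
              [ 0.0, -0.2, 0.2],
              [ 0.1, 0.0, -0.1]])
f = np.array([1.0, 2.0, 4.0])

pi = stationary(P)
e = np.ones(3)
# Fundamental matrix (I - P + e*pi)^{-1}
H = np.linalg.inv(np.eye(3) - P + np.outer(e, pi))

d_eta = pi @ V @ H @ f               # d(eta_eps)/d(eps) at eps = 0

eps = 1e-6                           # finite-difference check of the formula
d_num = (stationary(P + eps * V) @ f - pi @ f) / eps
```

For this example the analytic and finite-difference values agree to several decimal places, as the theory predicts.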
Now, we provide the generalized inverse expression for the $n$th derivative $\frac{d^n}{d\varepsilon^n}\eta_\varepsilon \big|_{\varepsilon=0}$. Note that the matrix $I - P$ is singular; we write the generalized (group) inverse as
$$(I - P)^{\#} = (I - P + e\pi)^{-1} - e\pi.$$
Since
$$\frac{d}{d\varepsilon}\pi_\varepsilon \Big|_{\varepsilon=0} (I - P)(I - P)^{\#}
= \frac{d}{d\varepsilon}\pi_\varepsilon \Big|_{\varepsilon=0} (I - P)\left[(I - P + e\pi)^{-1} - e\pi\right]
= \frac{d}{d\varepsilon}\pi_\varepsilon \Big|_{\varepsilon=0} (I - P + e\pi)(I - P + e\pi)^{-1}
= \frac{d}{d\varepsilon}\pi_\varepsilon \Big|_{\varepsilon=0}$$
and, using $Ve = 0$,
$$\pi V (I - P)^{\#} = \pi V \left[(I - P + e\pi)^{-1} - e\pi\right] = \pi V (I - P + e\pi)^{-1},$$
we obtain
$$\frac{d}{d\varepsilon}\pi_\varepsilon \Big|_{\varepsilon=0} = \pi V (I - P)^{\#}.$$
Further, we can obtain that for $n \geq 2$,
$$\frac{d^n}{d\varepsilon^n}\pi_\varepsilon \Big|_{\varepsilon=0} = n!\,\pi \left[ V (I - P)^{\#} \right]^n.$$
Therefore, we obtain that for $n \geq 1$,
$$\frac{d^n}{d\varepsilon^n}\eta_\varepsilon \Big|_{\varepsilon=0} = n!\,\pi \left[ V (I - P)^{\#} \right]^n f.$$
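The group inverse $(I-P)^{\#}$ can be computed directly from its definition. The sketch below, for an arbitrary illustrative 3-state chain, verifies the three defining properties of a group inverse and the equality $\pi V (I-P)^{\#} = \pi V (I-P+e\pi)^{-1}$ used above (valid since $Ve = 0$):

```python
import numpy as np

# Group inverse (I - P)^# = (I - P + e*pi)^{-1} - e*pi for a hypothetical
# 3-state irreducible stochastic matrix P.
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.4, 0.2],
              [0.3, 0.3, 0.4]])
n = P.shape[0]
w, vl = np.linalg.eig(P.T)
pi = np.real(vl[:, np.argmax(np.real(w))])
pi = pi / pi.sum()                         # stationary vector of P
e = np.ones(n)

A = np.eye(n) - P
sharp = np.linalg.inv(A + np.outer(e, pi)) - np.outer(e, pi)

# With Ve = 0, pi V (I-P)^# and pi V (I-P+e*pi)^{-1} coincide, as in the text.
V = np.array([[-0.2, 0.2, 0.0],
              [ 0.1, -0.1, 0.0],
              [ 0.0, 0.2, -0.2]])
lhs = pi @ V @ sharp
rhs = pi @ V @ np.linalg.inv(A + np.outer(e, pi))
```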
Now, we provide a mathematical interpretation for the derivatives $\frac{d^n}{d\varepsilon^n}\eta_\varepsilon \big|_{\varepsilon=0}$, $n \geq 1$, as follows:

(1) If
$$\frac{d^n}{d\varepsilon^n}\eta_\varepsilon \Big|_{\varepsilon=0} = 0, \quad n = 1, 2, \ldots, 2k-1,$$
and
$$\frac{d^{2k}}{d\varepsilon^{2k}}\eta_\varepsilon \Big|_{\varepsilon=0} < 0 \ (\text{or} > 0),$$
then the perturbed stationary performance measure $\eta_\varepsilon$ reaches its maximal (or minimal) value at $\varepsilon = 0$. In this case, the system performance is not sensitive to the designed parameters of the system; thus such designed parameters are satisfactory for system operations.

(2) If
$$\frac{d^n}{d\varepsilon^n}\eta_\varepsilon \Big|_{\varepsilon=0} = 0, \quad n = 1, 2, \ldots, 2k,$$
and
$$\frac{d^{2k+1}}{d\varepsilon^{2k+1}}\eta_\varepsilon \Big|_{\varepsilon=0} \neq 0,$$
then the perturbed stationary performance measure $\eta_\varepsilon$ does not attain a maximal (or minimal) value at $\varepsilon = 0$. In this case, the system performance is sensitive to the designed parameters of the system, and it is necessary to redesign (or readjust) the system parameters.
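The classification above only needs the first few derivatives. A minimal sketch, for a hypothetical finite chain, computes the first two derivatives from the formula $\frac{d^n}{d\varepsilon^n}\eta_\varepsilon|_{\varepsilon=0} = n!\,\pi[V(I-P+e\pi)^{-1}]^n f$ and checks them against central finite differences; a vanishing first derivative with a negative (positive) second derivative would indicate case (1):

```python
import numpy as np

def stationary(P):
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def eta_derivs(P, V, f):
    """First and second derivatives of eta_eps at eps = 0 for a finite chain,
    from d^n(eta)/d(eps)^n = n! * pi * [V (I - P + e*pi)^{-1}]^n * f."""
    n = P.shape[0]
    e = np.ones(n)
    pi = stationary(P)
    H = np.linalg.inv(np.eye(n) - P + np.outer(e, pi))
    return pi @ V @ H @ f, 2.0 * (pi @ V @ H @ V @ H @ f)

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
V = np.array([[-0.1, 0.1, 0.0],
              [ 0.2, -0.2, 0.0],
              [ 0.0, 0.1, -0.1]])
f = np.array([3.0, 1.0, 2.0])

d1, d2 = eta_derivs(P, V, f)

eps = 1e-3                                  # central finite-difference checks
eta = lambda t: stationary(P + t * V) @ f
d1_num = (eta(eps) - eta(-eps)) / (2 * eps)
d2_num = (eta(eps) - 2 * eta(0.0) + eta(-eps)) / eps**2
```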
11.1.2
Markov Chains with Infinitely-Many Levels
Consider an irreducible discrete-time block-structured Markov chain whose transition probability matrix is given by
$$P = \begin{pmatrix} P_{0,0} & P_{0,1} & P_{0,2} & \cdots \\ P_{1,0} & P_{1,1} & P_{1,2} & \cdots \\ P_{2,0} & P_{2,1} & P_{2,2} & \cdots \\ \vdots & \vdots & \vdots & \end{pmatrix}$$
and the perturbed-directional matrix is given by
$$V = \begin{pmatrix} V_{0,0} & V_{0,1} & V_{0,2} & \cdots \\ V_{1,0} & V_{1,1} & V_{1,2} & \cdots \\ V_{2,0} & V_{2,1} & V_{2,2} & \cdots \\ \vdots & \vdots & \vdots & \end{pmatrix}.$$
Note that the block structure of $V$ is the same as that of $P$; we assume that there exists a sufficiently small number $\varepsilon > 0$ such that
$$P_\varepsilon = P + \varepsilon V \tag{11.10}$$
is still a transition probability matrix whose irreducibility and state classification are the same as those of the matrix $P$.

We now discuss the perturbed stationary performance measure $\eta_\varepsilon$ with respect to the perturbed Markov chain $P_\varepsilon$ and the reward rate vector $f$. We assume that the two Markov chains $P$ and $P_\varepsilon$ are both irreducible and positive recurrent. Then $\pi_\varepsilon P_\varepsilon = \pi_\varepsilon$, which leads to
$$\pi_\varepsilon (P + \varepsilon V) = \pi_\varepsilon. \tag{11.11}$$
Taking derivatives of both sides of Eq. (11.11) with respect to the variable $\varepsilon$, we obtain
$$\frac{d}{d\varepsilon}\pi_\varepsilon (P + \varepsilon V) + \pi_\varepsilon V = \frac{d}{d\varepsilon}\pi_\varepsilon; \tag{11.12}$$
letting $\varepsilon = 0$ we have
$$\frac{d}{d\varepsilon}\pi_\varepsilon \Big|_{\varepsilon=0} (I - P) = \pi V, \tag{11.13}$$
which leads to
$$\frac{d}{d\varepsilon}\pi_\varepsilon \Big|_{\varepsilon=0} = \pi V (I - P)^{-1}_{\min}, \tag{11.14}$$
where
$$(I - P)^{-1}_{\min} = \sum_{n=0}^{\infty} P^n.$$
In order to compute the minimal nonnegative inverse $(I - P)^{-1}_{\min}$, we apply the RG-factorizations. Using the UL-type RG-factorization, we obtain
$$(I - P)^{-1}_{\min} = (I - G_L)^{-1}(I - \Psi_D)^{-1}(I - R_U)^{-1}. \tag{11.15}$$
Note that $(I - G_L)^{-1}$ and $(I - R_U)^{-1}$ are ordinary matrix inverses, while $(I - \Psi_D)^{-1}$ contains the group inverse of a matrix of finite size, which is indicated by
$$(I - \Psi_D)^{-1} = \mathrm{diag}\left( (I - \Psi_0)^{\#}, (I - \Psi_1)^{-1}, (I - \Psi_2)^{-1}, (I - \Psi_3)^{-1}, \ldots \right), \tag{11.16}$$
where the group inverse is given by
$$(I - \Psi_0)^{\#} = (I - \Psi_0 + e x_0)^{-1} - e x_0, \tag{11.17}$$
and $x_0$ is the stationary probability vector of the finite-state Markov chain $\Psi_0$. Applying the LU-type RG-factorization, we obtain
$$(I - P)^{-1}_{\min} = (I - G_U)^{-1}(I - \Phi_D)^{-1}(I - R_L)^{-1}. \tag{11.18}$$
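The object being factorized here, $(I-P)^{-1}_{\min} = \sum_{n \geq 0} P^n$, can be checked directly in the finite substochastic case. A minimal sketch with an arbitrary strictly substochastic $2 \times 2$ block (spectral radius below one, so the series converges) compares the ordinary inverse with a truncated Neumann series:

```python
import numpy as np

# For a strictly substochastic matrix P (spectral radius < 1), the minimal
# nonnegative inverse (I - P)^{-1}_min = sum_{n>=0} P^n is the ordinary inverse;
# the RG-factorizations compute this object block by block for structured chains.
P = np.array([[0.4, 0.3],
              [0.2, 0.5]])                 # hypothetical substochastic block

inv_min = np.linalg.inv(np.eye(2) - P)

series = np.zeros((2, 2))
term = np.eye(2)
for _ in range(200):                       # truncated Neumann series sum_n P^n
    series += term
    term = term @ P
```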
Continuing to take derivatives of both sides of Eq. (11.11) with respect to the variable $\varepsilon$, we obtain
$$\frac{d^n}{d\varepsilon^n}\pi_\varepsilon (P + \varepsilon V) + n \frac{d^{n-1}}{d\varepsilon^{n-1}}\pi_\varepsilon V = \frac{d^n}{d\varepsilon^n}\pi_\varepsilon;$$
letting $\varepsilon = 0$ we have
$$\frac{d^n}{d\varepsilon^n}\pi_\varepsilon \Big|_{\varepsilon=0} (I - P) = n \frac{d^{n-1}}{d\varepsilon^{n-1}}\pi_\varepsilon \Big|_{\varepsilon=0} V,$$
which leads to
$$\frac{d^n}{d\varepsilon^n}\pi_\varepsilon \Big|_{\varepsilon=0} = n \frac{d^{n-1}}{d\varepsilon^{n-1}}\pi_\varepsilon \Big|_{\varepsilon=0} V (I - P)^{-1}_{\min} = n!\,\pi \left[ V (I - P)^{-1}_{\min} \right]^n.$$
By means of Eq. (11.15) or Eq. (11.18), we have
$$\frac{d^n}{d\varepsilon^n}\pi_\varepsilon \Big|_{\varepsilon=0} = n!\,\pi \left[ V (I - G_L)^{-1}(I - \Psi_D)^{-1}(I - R_U)^{-1} \right]^n \tag{11.19}$$
or
$$\frac{d^n}{d\varepsilon^n}\pi_\varepsilon \Big|_{\varepsilon=0} = n!\,\pi \left[ V (I - G_U)^{-1}(I - \Phi_D)^{-1}(I - R_L)^{-1} \right]^n. \tag{11.20}$$
Note that $\eta_\varepsilon = \pi_\varepsilon f$; it follows from Eq. (11.19) or Eq. (11.20) that for $n \geq 1$,
$$\frac{d^n}{d\varepsilon^n}\eta_\varepsilon \Big|_{\varepsilon=0} = n!\,\pi \left[ V (I - G_L)^{-1}(I - \Psi_D)^{-1}(I - R_U)^{-1} \right]^n f \tag{11.21}$$
or
$$\frac{d^n}{d\varepsilon^n}\eta_\varepsilon \Big|_{\varepsilon=0} = n!\,\pi \left[ V (I - G_U)^{-1}(I - \Phi_D)^{-1}(I - R_L)^{-1} \right]^n f. \tag{11.22}$$

11.1.3
The Realization Matrix and Potential Vector
Now, we discuss computations of the realization matrix $D$ of the Markov chain $P$ with either finitely-many or infinitely-many levels. Note that $D = e g^{T} - g e^{T}$, where $g$ is a potential vector; for an arbitrary constant $c$, $g + ce$ is also a potential vector. The realization matrix plays an important role in the sensitivity analysis of stochastic models based on the relationship
$$\frac{d}{d\varepsilon}\eta_\varepsilon \Big|_{\varepsilon=0} = \pi V D^{T} \pi^{T} = \pi V g.$$
For the Markov chain $P$, the potential vector $g$ satisfies the Poisson equation
$$(I - P) g = f - \pi f e. \tag{11.23}$$
Thus, using the UL-type RG-factorization, it follows from Eq. (11.23) that
$$g = (I - P)^{-1}_{\min} (f - \pi f e) = (I - G_L)^{-1}(I - \Psi_D)^{-1}(I - R_U)^{-1} (f - \pi f e) \tag{11.24}$$
or, using the LU-type RG-factorization, we have
$$g = (I - P)^{-1}_{\min} (f - \pi f e) = (I - G_U)^{-1}(I - \Phi_D)^{-1}(I - R_L)^{-1} (f - \pi f e). \tag{11.25}$$
It follows from Eq. (11.24) and Eq. (11.25) that the realization matrix $D$ is given by
$$D = e g^{T} - g e^{T} = e (f - \pi f e)^{T} \left[ (I - R_U)^{-1} \right]^{T} \left[ (I - \Psi_D)^{-1} \right]^{T} \left[ (I - G_L)^{-1} \right]^{T} - (I - G_L)^{-1}(I - \Psi_D)^{-1}(I - R_U)^{-1} (f - \pi f e) e^{T} \tag{11.26}$$
or
$$D = e g^{T} - g e^{T} = e (f - \pi f e)^{T} \left[ (I - R_L)^{-1} \right]^{T} \left[ (I - \Phi_D)^{-1} \right]^{T} \left[ (I - G_U)^{-1} \right]^{T} - (I - G_U)^{-1}(I - \Phi_D)^{-1}(I - R_L)^{-1} (f - \pi f e) e^{T}. \tag{11.27}$$

11.1.4
The Censored Structure in Sensitivity Analysis
When the perturbed discrete-time Markov chain has a large or infinite state space, the censored approximation is a useful method. Let the state space $\Omega = E \cup E^{c}$, where $E = \{0, 1, 2, \ldots, N\}$. We write
$$P = \begin{pmatrix} P_A & P_B \\ P_C & P_D \end{pmatrix}, \qquad V = \begin{pmatrix} V_A & V_B \\ V_C & V_D \end{pmatrix}$$
based on the two subsets $E$ and $E^{c}$. Hence we have
$$P_\varepsilon = \begin{pmatrix} P_A + \varepsilon V_A & P_B + \varepsilon V_B \\ P_C + \varepsilon V_C & P_D + \varepsilon V_D \end{pmatrix}.$$
Obviously, the censored chain of the perturbed Markov chain $P_\varepsilon$ to the set $E$ is given by the transition probability matrix
$$P_\varepsilon^{E} = (P_A + \varepsilon V_A) + (P_B + \varepsilon V_B) \sum_{n=0}^{\infty} (P_D + \varepsilon V_D)^n (P_C + \varepsilon V_C).$$

Lemma 11.1 Let $P_\varepsilon^{E} \approx K + \varepsilon L$. Then
$$K = P^{E} = P_A + P_B \sum_{n=0}^{\infty} P_D^n P_C$$
and
$$L = V_A + P_B \sum_{n=0}^{\infty} P_D^n V_C + V_B \sum_{n=0}^{\infty} P_D^n P_C + P_B \sum_{n=0}^{\infty} P_D^n\, V_D \sum_{n=0}^{\infty} P_D^n\, P_C.$$

Proof We write the two general forms
$$(I - P_D - \varepsilon V_D)^{-1}_{\min} = \sum_{n=0}^{\infty} (P_D + \varepsilon V_D)^n$$
and
$$(I - P_D)^{-1}_{\min} = \sum_{n=0}^{\infty} P_D^n.$$
Note that
$$P_\varepsilon^{E} = (P_A + \varepsilon V_A) + (P_B + \varepsilon V_B)(I - P_D - \varepsilon V_D)^{-1}_{\min}(P_C + \varepsilon V_C)$$
$$\approx (P_A + \varepsilon V_A) + (P_B + \varepsilon V_B)\left[ I + \varepsilon (I - P_D)^{-1}_{\min} V_D \right] (I - P_D)^{-1}_{\min}(P_C + \varepsilon V_C)$$
$$\approx P_A + P_B (I - P_D)^{-1}_{\min} P_C + \varepsilon \left[ V_A + P_B (I - P_D)^{-1}_{\min} V_C + V_B (I - P_D)^{-1}_{\min} P_C + P_B (I - P_D)^{-1}_{\min} V_D (I - P_D)^{-1}_{\min} P_C \right],$$
which leads to the stated results. This completes the proof.

Let
$$f^{E} = (f_0, f_1, f_2, \ldots, f_N)^{T} + P_B \sum_{n=0}^{\infty} P_D^n\, (f_{N+1}, f_{N+2}, f_{N+3}, \ldots)^{T},$$
and let $\pi_\varepsilon^{E}$ and $\pi^{E}$ be the stationary probability vectors of the censored Markov chains $P_\varepsilon^{E}$ and $P^{E}$, respectively. Then $\eta_\varepsilon^{E} = \pi_\varepsilon^{E} f^{E}$. Therefore, for the perturbed stationary performance measure we can obtain that for $n \geq 1$,
$$\frac{d^n}{d\varepsilon^n}\eta_\varepsilon^{E} \Big|_{\varepsilon=0} = n!\,\pi^{E} \left[ L (I - K + e\pi^{E})^{-1} \right]^n f^{E}.$$
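Lemma 11.1 can be verified numerically. The sketch below, for an arbitrary illustrative 4-state chain partitioned as $E = \{0,1\}$ and $E^c = \{2,3\}$, computes $K$ and $L$, checks that the censored chain is stochastic with stationary vector equal to the renormalized restriction of $\pi$ to $E$, and compares $L$ with a finite difference of the censored matrix:

```python
import numpy as np

def stationary(P):
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def censor(P, m):
    """Censored chain of P on E = {0,...,m-1}: P_A + P_B (I - P_D)^{-1} P_C."""
    PA, PB = P[:m, :m], P[:m, m:]
    PC, PD = P[m:, :m], P[m:, m:]
    N = np.linalg.inv(np.eye(P.shape[0] - m) - PD)
    return PA + PB @ N @ PC

P = np.array([[0.3, 0.3, 0.2, 0.2],
              [0.2, 0.4, 0.3, 0.1],
              [0.3, 0.2, 0.3, 0.2],
              [0.1, 0.3, 0.4, 0.2]])
V = np.array([[-0.1, 0.1, 0.0, 0.0],     # row sums zero, so Ve = 0
              [ 0.0, -0.1, 0.1, 0.0],
              [ 0.1, 0.0, -0.1, 0.0],
              [ 0.0, 0.1, 0.0, -0.1]])

K = censor(P, 2)
pi = stationary(P)
piE = stationary(K)

# First-order term L of Lemma 11.1.
PA, PB, PC, PD = P[:2, :2], P[:2, 2:], P[2:, :2], P[2:, 2:]
VA, VB, VC, VD = V[:2, :2], V[:2, 2:], V[2:, :2], V[2:, 2:]
N = np.linalg.inv(np.eye(2) - PD)
L = VA + PB @ N @ VC + VB @ N @ PC + PB @ N @ VD @ N @ PC

eps = 1e-6
L_num = (censor(P + eps * V, 2) - K) / eps
```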
11.1.5
The Transient Performance Measure
We provide sensitivity analysis for the transient performance measures. Let
$$\xi_\varepsilon(N) = \pi(0) \sum_{k=0}^{N} (P_\varepsilon)^k f, \quad N \geq 0. \tag{11.28}$$
Then
$$\frac{d^n}{d\varepsilon^n}\xi_\varepsilon(N) = \pi(0) \sum_{k=n}^{N} k(k-1)\cdots(k-n+1)\, (P_\varepsilon)^{k-n} V^n f, \quad 1 \leq n \leq N. \tag{11.29}$$
Hence, it is easy to see from Eq. (11.29) that
$$\frac{d^n}{d\varepsilon^n}\xi_\varepsilon(N) \Big|_{\varepsilon=0} = \pi(0) \sum_{k=n}^{N} k(k-1)\cdots(k-n+1)\, P^{k-n} V^n f, \quad 1 \leq n \leq N. \tag{11.30}$$
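As a numerical cross-check, the first derivative of $\xi_\varepsilon(N)$ can also be computed term by term from the product rule $\frac{d}{d\varepsilon}(P+\varepsilon V)^k|_{\varepsilon=0} = \sum_{j} P^j V P^{k-1-j}$, which collapses to the $n=1$ case of Eq. (11.30) when $P$ and $V$ commute. The sketch below (illustrative 2-state data) compares this expansion with a central finite difference:

```python
import numpy as np

def xi(P, V, f, pi0, N, eps):
    """Transient measure pi(0) * sum_{k=0}^{N} (P + eps*V)^k f."""
    Pe = P + eps * V
    M = np.eye(len(f))
    total = 0.0
    for _ in range(N + 1):
        total += pi0 @ M @ f
        M = M @ Pe
    return total

def dxi(P, V, f, pi0, N):
    """Exact first derivative at eps = 0 via the product rule
    d(P_eps^k)/d(eps) = sum_j P^j V P^{k-1-j}."""
    total = 0.0
    for k in range(1, N + 1):
        left = pi0.copy()
        for j in range(k):
            right = np.linalg.matrix_power(P, k - 1 - j) @ f
            total += left @ V @ right
            left = left @ P
    return total

P = np.array([[0.6, 0.4],
              [0.3, 0.7]])
V = np.array([[-0.2, 0.2],
              [ 0.1, -0.1]])
f = np.array([1.0, 3.0])
pi0 = np.array([0.5, 0.5])

d_exact = dxi(P, V, f, pi0, 5)
eps = 1e-5
d_num = (xi(P, V, f, pi0, 5, eps) - xi(P, V, f, pi0, 5, -eps)) / (2 * eps)
```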
The Discounted Performance Measure
We provide sensitivity analysis for the E -discounted performance measures. Let M ( E ) ʌ (0)( I E P )1 f . (11.31) H
H min
Then dn MH ( E ) dH n
1 n n n ! E n ʌ (0)( I E PH )min V f,
which leads to dn MH ( E ) _H dH n
1 n
0
n ! E n ʌ (0)( I E P ) min V n f .
(11.32)
Note that n
n
( I E P ) min {[ I GL ( E )]1[ I < D ( E )]1[ I RU ( E )]1}
(11.33)
or n
1
n
( I E P ) min {[ I GU ( E )] [ I ) D ( E )]1[ I RL ( E )]1} .
11.2
(11.34)
Two Important Markov Chains
In this section, we provide sensitivity analysis for the stationary performance measures of Markov chains of GI/M/1 type and Markov chains of M/G/1 type in terms of the matrix-geometric solution and the matrix-iterative solution, respectively. Using the matrix $R$ or $G$, we express the $n$th derivative of the stationary performance measures.
11.2.1
Perturbed Markov Chains of GI/M/1 Type
Consider an irreducible discrete-time Markov chain of GI/M/1 type whose transition matrix is given by
$$P = \begin{pmatrix} B_1 & B_0 & & & \\ B_2 & A_1 & A_0 & & \\ B_3 & A_2 & A_1 & A_0 & \\ B_4 & A_3 & A_2 & A_1 & A_0 \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}, \tag{11.35}$$
where $B_1$ is a square matrix of finite size $m_0$, all $A_i$ are square matrices of finite size $m$, the sizes of the other block-entries are determined accordingly, and all empty entries are zero. We define a perturbed matrix of GI/M/1 type as
$$P_\varepsilon = P + \varepsilon V, \tag{11.36}$$
where $\varepsilon > 0$ is sufficiently small, and the perturbed-directional matrix $V$ is of GI/M/1 type as follows:
$$V = \begin{pmatrix} D_1 & D_0 & & & \\ D_2 & C_1 & C_0 & & \\ D_3 & C_2 & C_1 & C_0 & \\ D_4 & C_3 & C_2 & C_1 & C_0 \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$
We assume that the irreducibility and state classification of $P_\varepsilon$ are the same as those of $P$. We provide sensitivity analysis for the stationary performance measure
$$\eta_\varepsilon = \pi_\varepsilon f = \sum_{k=0}^{\infty} \pi_\varepsilon(k) f_k, \tag{11.37}$$
where
$$\pi_\varepsilon(0) = \kappa(\varepsilon)\, x_0(\varepsilon) \tag{11.38}$$
and
$$\pi_\varepsilon(k) = \kappa(\varepsilon)\, x_0(\varepsilon) R_{0,1}(\varepsilon) R(\varepsilon)^{k-1}, \quad k \geq 1, \tag{11.39}$$
the matrix $R(\varepsilon)$ is the minimal nonnegative solution to the nonlinear matrix equation
$$R(\varepsilon) = \sum_{k=0}^{\infty} R(\varepsilon)^k (A_k + \varepsilon C_k), \tag{11.40}$$
the matrix $R_{0,1}(\varepsilon)$ is given by
$$R_{0,1}(\varepsilon) = (B_0 + \varepsilon D_0) \left[ I - \sum_{k=1}^{\infty} R(\varepsilon)^{k-1} (A_k + \varepsilon C_k) \right]^{-1}, \tag{11.41}$$
$x_0(\varepsilon)$ is the stationary probability vector of the censored chain to level 0, whose transition probability matrix is given by
$$\Psi_0(\varepsilon) = (B_1 + \varepsilon D_1) + \sum_{k=1}^{\infty} R(\varepsilon)^k (B_{k+1} + \varepsilon D_{k+1}), \tag{11.42}$$
and the positive constant $\kappa(\varepsilon)$ is given by
$$\kappa(\varepsilon) = \frac{1}{1 + x_0(\varepsilon) R_{0,1}(\varepsilon) [I - R(\varepsilon)]^{-1} e}. \tag{11.43}$$

To give the sensitivity analysis, let the matrix $R$ be the minimal nonnegative solution to the nonlinear matrix equation $R = \sum_{k=0}^{\infty} R^k A_k$. It is seen later that each expression in the sensitivity analysis will be given in terms of the minimal nonnegative solution $R$ and the matrix sequences $\{A_k\}$, $\{B_k\}$, $\{C_k\}$ and $\{D_k\}$. It is worthwhile to note that
$$R_{0,1} = B_0 \left( I - \sum_{k=1}^{\infty} R^{k-1} A_k \right)^{-1}, \qquad \Psi_0 = B_1 + \sum_{k=1}^{\infty} R^k B_{k+1},$$
$x_0$ is the stationary probability vector of the censored chain $\Psi_0$, and $\kappa = \left[ 1 + x_0 R_{0,1} (I - R)^{-1} e \right]^{-1}$.

Our computations for the sensitivity analysis are listed in the following steps.

(1) Compute $\frac{d}{d\varepsilon} R(\varepsilon) \big|_{\varepsilon=0}$. It follows from Eq. (11.40) that
$$\frac{d}{d\varepsilon} R(\varepsilon) \Big|_{\varepsilon=0} = \sum_{k=0}^{\infty} R^k C_k \left( I - \sum_{k=1}^{\infty} k R^{k-1} A_k \right)^{-1}. \tag{11.44}$$
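Step (1) can be illustrated with scalar ($1 \times 1$) blocks, where all factors commute, taking only the three blocks $A_0, A_1, A_2$ nonzero (a QBD-like special case of GI/M/1 type); the entries below are hypothetical. The sketch solves Eq. (11.40) at $\varepsilon = 0$ by successive substitution and evaluates Eq. (11.44):

```python
import numpy as np

def solve_R(A0, A1, A2, iters=5000):
    """Minimal nonnegative solution of R = A0 + R A1 + R^2 A2 (a three-block
    special case of R = sum_k R^k A_k), by successive substitution from R = 0."""
    R = np.zeros_like(A0)
    for _ in range(iters):
        R = A0 + R @ A1 + R @ R @ A2
    return R

# Scalar (1x1) blocks so that all factors commute; hypothetical entries.
A0, A1, A2 = np.array([[0.3]]), np.array([[0.3]]), np.array([[0.4]])
C0, C1, C2 = np.array([[0.1]]), np.array([[-0.1]]), np.array([[0.0]])

R = solve_R(A0, A1, A2)              # minimal root of 0.4 R^2 - 0.7 R + 0.3 = 0
# Eq. (11.44): dR = (sum_k R^k C_k) (I - sum_k k R^{k-1} A_k)^{-1}
dR = (C0 + R @ C1 + R @ R @ C2) @ np.linalg.inv(np.eye(1) - A1 - 2 * R @ A2)

eps = 1e-6                            # finite-difference check
R_eps = solve_R(A0 + eps * C0, A1 + eps * C1, A2 + eps * C2)
dR_num = (R_eps - R) / eps
```

Here the minimal solution is $R = 0.75$ and Eq. (11.44) gives $dR/d\varepsilon|_{\varepsilon=0} = 0.25$, matching the finite difference.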
(2) Compute $\frac{d}{d\varepsilon} x_0(\varepsilon) \big|_{\varepsilon=0}$. Since $x_0(\varepsilon) = x_0(\varepsilon) \Psi_0(\varepsilon)$, we obtain
$$\frac{d}{d\varepsilon} x_0(\varepsilon) \Big|_{\varepsilon=0} = x_0 \left[ D_1 + \sum_{k=1}^{\infty} R^k D_{k+1} + \frac{d}{d\varepsilon} R(\varepsilon) \Big|_{\varepsilon=0} \sum_{k=1}^{\infty} k R^{k-1} B_{k+1} \right] (I - \Psi_0)^{\#}. \tag{11.45}$$

(3) Compute $\frac{d}{d\varepsilon} R_{0,1}(\varepsilon) \big|_{\varepsilon=0}$.
$$\frac{d}{d\varepsilon} R_{0,1}(\varepsilon) \Big|_{\varepsilon=0} = D_0 \left( I - \sum_{k=1}^{\infty} R^{k-1} A_k \right)^{-1} + B_0 \left( I - \sum_{k=1}^{\infty} R^{k-1} A_k \right)^{-2} \left[ \sum_{k=1}^{\infty} R^{k-1} C_k + \frac{d}{d\varepsilon} R(\varepsilon) \Big|_{\varepsilon=0} \sum_{k=2}^{\infty} (k-1) R^{k-2} A_k \right]. \tag{11.46}$$

(4) Compute $\frac{d}{d\varepsilon} \kappa(\varepsilon) \big|_{\varepsilon=0}$.
$$\frac{d}{d\varepsilon} \kappa(\varepsilon) \Big|_{\varepsilon=0} = -\left[ 1 + x_0 R_{0,1} (I - R)^{-1} e \right]^{-2} \left\{ \frac{d}{d\varepsilon} x_0(\varepsilon) \Big|_{\varepsilon=0} R_{0,1} (I - R)^{-1} e + x_0 \frac{d}{d\varepsilon} R_{0,1}(\varepsilon) \Big|_{\varepsilon=0} (I - R)^{-1} e + x_0 R_{0,1} (I - R)^{-1} \frac{d}{d\varepsilon} R(\varepsilon) \Big|_{\varepsilon=0} (I - R)^{-1} e \right\}.$$

(5) Compute $\frac{d}{d\varepsilon} \eta_\varepsilon \big|_{\varepsilon=0}$.
$$\frac{d}{d\varepsilon} \eta_\varepsilon \Big|_{\varepsilon=0} = \frac{d}{d\varepsilon} \left\{ \kappa(\varepsilon) \left[ x_0(\varepsilon) f_0 + \sum_{k=1}^{\infty} x_0(\varepsilon) R_{0,1}(\varepsilon) R(\varepsilon)^{k-1} f_k \right] \right\} \Bigg|_{\varepsilon=0}$$
$$= \frac{d}{d\varepsilon} \kappa(\varepsilon) \Big|_{\varepsilon=0} \left( x_0 f_0 + x_0 R_{0,1} f_1 + \sum_{k=2}^{\infty} x_0 R_{0,1} R^{k-1} f_k \right) + \kappa \frac{d}{d\varepsilon} x_0(\varepsilon) \Big|_{\varepsilon=0} f_0 + \kappa \frac{d}{d\varepsilon} x_0(\varepsilon) \Big|_{\varepsilon=0} R_{0,1} f_1 + \kappa\, x_0 \frac{d}{d\varepsilon} R_{0,1}(\varepsilon) \Big|_{\varepsilon=0} f_1$$
$$+ \kappa \sum_{k=2}^{\infty} \frac{d}{d\varepsilon} x_0(\varepsilon) \Big|_{\varepsilon=0} R_{0,1} R^{k-1} f_k + \kappa \sum_{k=2}^{\infty} x_0 \frac{d}{d\varepsilon} R_{0,1}(\varepsilon) \Big|_{\varepsilon=0} R^{k-1} f_k + \kappa \sum_{k=2}^{\infty} (k-1)\, x_0 R_{0,1} \frac{d}{d\varepsilon} R(\varepsilon) \Big|_{\varepsilon=0} R^{k-2} f_k.$$
Therefore, the derivative $\frac{d}{d\varepsilon}\eta_\varepsilon \big|_{\varepsilon=0}$ can be given by Eq. (11.44), Eq. (11.45) and Eq. (11.46).
11.2.2
Perturbed Markov Chains of M/G/1 Type
Consider an irreducible discrete-time Markov chain of M/G/1 type whose transition matrix is given by
$$P = \begin{pmatrix} B_1 & B_2 & B_3 & B_4 & B_5 & \cdots \\ B_0 & A_1 & A_2 & A_3 & A_4 & \cdots \\ & A_0 & A_1 & A_2 & A_3 & \cdots \\ & & A_0 & A_1 & A_2 & \cdots \\ & & & \ddots & \ddots & \ddots \end{pmatrix}, \tag{11.47}$$
where $B_1$ is a square matrix of finite size $m_0$, all $A_i$ are square matrices of finite size $m$, the sizes of the other block-entries are determined accordingly, and all empty entries are zero. We define a perturbed matrix of M/G/1 type as
$$P_\varepsilon = P + \varepsilon V, \tag{11.48}$$
where $\varepsilon > 0$ is sufficiently small, and the perturbed-directional matrix $V$ is given by
$$V = \begin{pmatrix} D_1 & D_2 & D_3 & D_4 & D_5 & \cdots \\ D_0 & C_1 & C_2 & C_3 & C_4 & \cdots \\ & C_0 & C_1 & C_2 & C_3 & \cdots \\ & & C_0 & C_1 & C_2 & \cdots \\ & & & \ddots & \ddots & \ddots \end{pmatrix}.$$
We assume that the irreducibility and state classification of $P_\varepsilon$ are the same as those of $P$. We provide sensitivity analysis for the stationary performance measure
$$\eta_\varepsilon = \pi_\varepsilon f = \sum_{k=0}^{\infty} \pi_\varepsilon(k) f_k, \tag{11.49}$$
where
$$\pi_\varepsilon(0) = \kappa(\varepsilon)\, x_0(\varepsilon) \tag{11.50}$$
and
$$\pi_\varepsilon(k) = \pi_\varepsilon(0) R_{0,k}(\varepsilon) + \sum_{i=1}^{k-1} \pi_\varepsilon(i) R_{k-i}(\varepsilon), \quad k \geq 1, \tag{11.51}$$
the matrix $G(\varepsilon)$ is the minimal nonnegative solution to the nonlinear matrix equation
$$G(\varepsilon) = \sum_{k=0}^{\infty} (A_k + \varepsilon C_k) G(\varepsilon)^k, \tag{11.52}$$
the matrix $R_{0,k}(\varepsilon)$ is given by
$$R_{0,k}(\varepsilon) = \sum_{i=1}^{\infty} (B_{k+i} + \varepsilon D_{k+i}) G(\varepsilon)^{i-1} \left[ I - \sum_{k=1}^{\infty} (A_k + \varepsilon C_k) G(\varepsilon)^{k-1} \right]^{-1}, \tag{11.53}$$
the matrix $R_k(\varepsilon)$ is given by
$$R_k(\varepsilon) = \sum_{i=1}^{\infty} (A_{k+i} + \varepsilon C_{k+i}) G(\varepsilon)^{i-1} \left[ I - \sum_{k=1}^{\infty} (A_k + \varepsilon C_k) G(\varepsilon)^{k-1} \right]^{-1}, \tag{11.54}$$
and $x_0(\varepsilon)$ is the stationary probability vector of the censored chain to level 0, whose transition probability matrix is given by
$$\Psi_0(\varepsilon) = (B_1 + \varepsilon D_1) + \sum_{k=1}^{\infty} (B_{k+1} + \varepsilon D_{k+1}) G(\varepsilon)^{k-1} \left[ I - \sum_{k=1}^{\infty} (A_k + \varepsilon C_k) G(\varepsilon)^{k-1} \right]^{-1} (B_0 + \varepsilon D_0). \tag{11.55}$$

To give the sensitivity analysis, let the matrix $G$ be the minimal nonnegative solution to the nonlinear matrix equation $G = \sum_{k=0}^{\infty} A_k G^k$. It is seen later that each expression in the sensitivity analysis will be given in terms of the minimal nonnegative solution $G$ and the matrix sequences $\{A_k\}$, $\{B_k\}$, $\{C_k\}$ and $\{D_k\}$. It is worthwhile to note that
$$R_{0,k} = \sum_{i=1}^{\infty} B_{k+i} G^{i-1} \left( I - \sum_{k=1}^{\infty} A_k G^{k-1} \right)^{-1}, \qquad R_k = \sum_{i=1}^{\infty} A_{k+i} G^{i-1} \left( I - \sum_{k=1}^{\infty} A_k G^{k-1} \right)^{-1},$$
$$\Psi_0 = B_1 + \sum_{k=1}^{\infty} B_{k+1} G^{k-1} \left( I - \sum_{k=1}^{\infty} A_k G^{k-1} \right)^{-1} B_0$$
and $x_0$ is the stationary probability vector of the censored chain $\Psi_0$.

Our computations for the sensitivity analysis are listed in the following steps.

(1) Compute $\frac{d}{d\varepsilon} G(\varepsilon) \big|_{\varepsilon=0}$. It follows from Eq. (11.52) that
$$\frac{d}{d\varepsilon} G(\varepsilon) \Big|_{\varepsilon=0} = \left( I - \sum_{k=1}^{\infty} k A_k G^{k-1} \right)^{-1} \sum_{k=0}^{\infty} C_k G^k. \tag{11.56}$$

(2) Compute $\frac{d}{d\varepsilon} R_{0,k}(\varepsilon) \big|_{\varepsilon=0}$.
$$\frac{d}{d\varepsilon} R_{0,k}(\varepsilon) \Big|_{\varepsilon=0} = \sum_{i=1}^{\infty} D_{k+i} G^{i-1} \left( I - \sum_{k=1}^{\infty} A_k G^{k-1} \right)^{-1} + \sum_{i=1}^{\infty} (i-1) B_{k+i} \frac{d}{d\varepsilon} G(\varepsilon) \Big|_{\varepsilon=0} G^{i-2} \left( I - \sum_{k=1}^{\infty} A_k G^{k-1} \right)^{-1}$$
$$+ \sum_{i=1}^{\infty} B_{k+i} G^{i-1} \left( I - \sum_{k=1}^{\infty} A_k G^{k-1} \right)^{-2} \left[ \sum_{k=1}^{\infty} C_k G^{k-1} + \sum_{k=2}^{\infty} (k-1) A_k G^{k-2} \frac{d}{d\varepsilon} G(\varepsilon) \Big|_{\varepsilon=0} \right]. \tag{11.57}$$

(3) Compute $\frac{d}{d\varepsilon} R_k(\varepsilon) \big|_{\varepsilon=0}$.
$$\frac{d}{d\varepsilon} R_k(\varepsilon) \Big|_{\varepsilon=0} = \sum_{i=1}^{\infty} C_{k+i} G^{i-1} \left( I - \sum_{k=1}^{\infty} A_k G^{k-1} \right)^{-1} + \sum_{i=2}^{\infty} (i-1) A_{k+i} \frac{d}{d\varepsilon} G(\varepsilon) \Big|_{\varepsilon=0} G^{i-2} \left( I - \sum_{k=1}^{\infty} A_k G^{k-1} \right)^{-1}$$
$$+ \sum_{i=1}^{\infty} A_{k+i} G^{i-1} \left( I - \sum_{k=1}^{\infty} A_k G^{k-1} \right)^{-2} \left[ \sum_{k=1}^{\infty} C_k G^{k-1} + \sum_{k=2}^{\infty} (k-1) A_k G^{k-2} \frac{d}{d\varepsilon} G(\varepsilon) \Big|_{\varepsilon=0} \right]. \tag{11.58}$$

(4) Compute $\frac{d}{d\varepsilon} x_0(\varepsilon) \big|_{\varepsilon=0}$. Since $x_0(\varepsilon) = x_0(\varepsilon) \Psi_0(\varepsilon)$, we obtain
$$\frac{d}{d\varepsilon} x_0(\varepsilon) \Big|_{\varepsilon=0} = x_0 \Bigg\{ D_1 + \sum_{k=1}^{\infty} D_{k+1} G^{k-1} \left( I - \sum_{k=1}^{\infty} A_k G^{k-1} \right)^{-1} B_0 + \sum_{k=2}^{\infty} (k-1) B_{k+1} \frac{d}{d\varepsilon} G(\varepsilon) \Big|_{\varepsilon=0} G^{k-2} \left( I - \sum_{k=1}^{\infty} A_k G^{k-1} \right)^{-1} B_0$$
$$+ \sum_{k=1}^{\infty} B_{k+1} G^{k-1} \left( I - \sum_{k=1}^{\infty} A_k G^{k-1} \right)^{-2} \left[ \sum_{k=1}^{\infty} C_k G^{k-1} + \sum_{k=2}^{\infty} (k-1) A_k \frac{d}{d\varepsilon} G(\varepsilon) \Big|_{\varepsilon=0} G^{k-2} \right] B_0$$
$$+ \sum_{k=1}^{\infty} B_{k+1} G^{k-1} \left( I - \sum_{k=1}^{\infty} A_k G^{k-1} \right)^{-1} D_0 \Bigg\} (I - \Psi_0)^{\#}. \tag{11.59}$$

(5) Compute $\frac{d}{d\varepsilon} \kappa(\varepsilon) \big|_{\varepsilon=0}$. It follows from Eq. (11.50) and Eq. (11.51) that
$$\pi_\varepsilon(0) = \kappa(\varepsilon)\, x_0(\varepsilon), \qquad \pi_\varepsilon(1) = \kappa(\varepsilon)\, x_0(\varepsilon) R_{0,1}(\varepsilon)$$
and
$$\pi_\varepsilon(k) = \kappa(\varepsilon)\, x_0(\varepsilon) R_{0,k}(\varepsilon) + \kappa(\varepsilon)\, x_0(\varepsilon) \sum_{m=2}^{k} \sum_{\substack{i_1 + i_2 + \cdots + i_m = k \\ 1 \leq i_1, i_2, \ldots, i_m \leq k}} R_{0,i_1}(\varepsilon) R_{i_2}(\varepsilon) \cdots R_{i_m}(\varepsilon);$$
thus, we get
$$\kappa(\varepsilon) = \frac{1}{1 + x_0(\varepsilon) \displaystyle\sum_{k=1}^{\infty} \Bigg[ R_{0,k}(\varepsilon) + \sum_{m=2}^{k} \sum_{\substack{i_1 + i_2 + \cdots + i_m = k \\ 1 \leq i_1, i_2, \ldots, i_m \leq k}} R_{0,i_1}(\varepsilon) R_{i_2}(\varepsilon) \cdots R_{i_m}(\varepsilon) \Bigg] e},$$
which easily leads to the expression of $\frac{d}{d\varepsilon} \kappa(\varepsilon) \big|_{\varepsilon=0}$.

(6) Compute $\frac{d}{d\varepsilon} \eta_\varepsilon \big|_{\varepsilon=0}$. Note that
$$\eta_\varepsilon = \kappa(\varepsilon)\, x_0(\varepsilon) f_0 + \kappa(\varepsilon)\, x_0(\varepsilon) R_{0,1}(\varepsilon) f_1 + \kappa(\varepsilon)\, x_0(\varepsilon) \sum_{k=2}^{\infty} R_{0,k}(\varepsilon) f_k + \kappa(\varepsilon)\, x_0(\varepsilon) \sum_{k=2}^{\infty} \sum_{m=2}^{k} \sum_{\substack{i_1 + \cdots + i_m = k \\ 1 \leq i_1, \ldots, i_m \leq k}} R_{0,i_1}(\varepsilon) R_{i_2}(\varepsilon) \cdots R_{i_m}(\varepsilon) f_k.$$
Therefore, the derivative $\frac{d}{d\varepsilon}\eta_\varepsilon \big|_{\varepsilon=0}$ can be given by means of the above steps (1) to (5).
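Step (1) for M/G/1-type chains can be illustrated in the same way as for the GI/M/1 case, again with scalar blocks (where the factors commute) and with only $A_0, A_1, A_2$ nonzero; the entries are hypothetical, and the example is transient so the minimal solution of the $G$-equation is strictly less than one:

```python
import numpy as np

def solve_G(A0, A1, A2, iters=5000):
    """Minimal nonnegative solution of G = A0 + A1 G + A2 G^2 (a three-block
    special case of G = sum_k A_k G^k), by successive substitution from G = 0."""
    G = np.zeros_like(A0)
    for _ in range(iters):
        G = A0 + A1 @ G + A2 @ G @ G
    return G

# Scalar (1x1) blocks with hypothetical entries.
A0, A1, A2 = np.array([[0.2]]), np.array([[0.3]]), np.array([[0.5]])
C0, C1, C2 = np.array([[0.05]]), np.array([[0.0]]), np.array([[-0.05]])

G = solve_G(A0, A1, A2)              # minimal root of 0.5 G^2 - 0.7 G + 0.2 = 0
# Eq. (11.56): dG = (I - sum_k k A_k G^{k-1})^{-1} (sum_k C_k G^k)
dG = np.linalg.inv(np.eye(1) - A1 - 2 * A2 @ G) @ (C0 + C1 @ G + C2 @ G @ G)

eps = 1e-6                            # finite-difference check
G_eps = solve_G(A0 + eps * C0, A1 + eps * C1, A2 + eps * C2)
dG_num = (G_eps - G) / eps
```

Here $G = 0.4$ and Eq. (11.56) gives $dG/d\varepsilon|_{\varepsilon=0} = 0.14$, matching the finite difference.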
11.3
Perturbed Continuous-Time Markov Chains
In this section, we provide sensitivity analysis for the stationary performance measure of a perturbed continuous-time Markov chain which is irreducible and positive recurrent. By means of the UL- and LU-types of RG-factorizations, we express the $n$th derivative of the stationary performance measure.

We consider an irreducible perturbed continuous-time Markov chain $\{x_\varepsilon(t), t \geq 0\}$ whose infinitesimal generator is given by
$$\widetilde{Q}_\varepsilon = Q + \varepsilon V, \tag{11.60}$$
where
$$Q = \begin{pmatrix} B_{0,0} & B_{0,1} & B_{0,2} & B_{0,3} & \cdots \\ B_{1,0} & B_{1,1} & B_{1,2} & B_{1,3} & \cdots \\ B_{2,0} & B_{2,1} & B_{2,2} & B_{2,3} & \cdots \\ B_{3,0} & B_{3,1} & B_{3,2} & B_{3,3} & \cdots \\ \vdots & \vdots & \vdots & \vdots & \end{pmatrix}$$
is the infinitesimal generator of an irreducible continuous-time Markov chain with block structure, the perturbed-directional matrix
$$V = \begin{pmatrix} C_{0,0} & C_{0,1} & C_{0,2} & C_{0,3} & \cdots \\ C_{1,0} & C_{1,1} & C_{1,2} & C_{1,3} & \cdots \\ C_{2,0} & C_{2,1} & C_{2,2} & C_{2,3} & \cdots \\ C_{3,0} & C_{3,1} & C_{3,2} & C_{3,3} & \cdots \\ \vdots & \vdots & \vdots & \vdots & \end{pmatrix}$$
is a non-zero matrix with the size of each block $C_{i,j}$ equal to that of $B_{i,j}$ for $i, j \geq 0$, and $\varepsilon$ is a sufficiently small positive number such that $\widetilde{Q}_\varepsilon$ is still the infinitesimal generator of a continuous-time Markov chain whose irreducibility and state classification are the same as those of the Markov chain $Q$. We assume that if the Markov chain $Q$ is positive recurrent, then the perturbed Markov chain $\widetilde{Q}_\varepsilon$ is also positive recurrent for each sufficiently small $\varepsilon > 0$. In this case, $Ve = 0$. We assume that the two Markov chains $Q$ and $\widetilde{Q}_\varepsilon$ are both irreducible and positive recurrent. Let $\pi$ and $\pi_\varepsilon$ be the stationary probability vectors of the Markov chains $Q$ and $\widetilde{Q}_\varepsilon$, respectively. We write $\eta = \pi f$ and $\eta_\varepsilon = \pi_\varepsilon f$. In what follows we provide sensitivity analysis for the stationary performance measure $\eta_\varepsilon$.

Note that $\pi_\varepsilon$ is the unique positive solution to the system of equations $\pi_\varepsilon \widetilde{Q}_\varepsilon = 0$ and $\pi_\varepsilon e = 1$; we obtain
$$\frac{d\pi_\varepsilon}{d\varepsilon} \Big|_{\varepsilon=0} Q = -\pi V, \tag{11.61}$$
$$\frac{d^n \pi_\varepsilon}{d\varepsilon^n} \Big|_{\varepsilon=0} Q = -n \frac{d^{n-1} \pi_\varepsilon}{d\varepsilon^{n-1}} \Big|_{\varepsilon=0} V, \quad n \geq 2, \tag{11.62}$$
and
$$\frac{d^n \pi_\varepsilon}{d\varepsilon^n} \Big|_{\varepsilon=0}\, e = 0, \quad n \geq 1. \tag{11.63}$$
Since $Q$ always has the maximal nonpositive inverse $Q^{-1}_{\max}$, it follows from Eq. (11.61) and Eq. (11.62) that
$$\frac{d^n \pi_\varepsilon}{d\varepsilon^n} \Big|_{\varepsilon=0} = n!\,\pi \left[ -V Q^{-1}_{\max} \right]^n, \quad n \geq 1. \tag{11.64}$$
Using Eq. (11.63) and Eq. (11.64), we have
$$\pi \left[ -V Q^{-1}_{\max} \right]^n e = 0, \quad n \geq 1. \tag{11.65}$$
For the stationary performance measure $\eta_\varepsilon = \pi_\varepsilon f$,
$$\frac{d^n}{d\varepsilon^n}\eta_\varepsilon \Big|_{\varepsilon=0} = n!\,\pi \left[ -V Q^{-1}_{\max} \right]^n f. \tag{11.66}$$
Therefore, it follows from the UL-type RG-factorization that for $n \geq 1$,
$$\frac{d^n}{d\varepsilon^n}\eta_\varepsilon \Big|_{\varepsilon=0} = n!\,\pi \left[ -V (I - G_L)^{-1} U_D^{-1} (I - R_U)^{-1} \right]^n f, \tag{11.67}$$
or from the LU-type RG-factorization that for $n \geq 1$,
$$\frac{d^n}{d\varepsilon^n}\eta_\varepsilon \Big|_{\varepsilon=0} = n!\,\pi \left[ -V (I - G_U)^{-1} U_D^{-1} (I - R_L)^{-1} \right]^n f. \tag{11.68}$$

In what follows we briefly discuss the realization matrix of the continuous-time Markov chain. The realization matrix $D$ is expressed in terms of a potential vector $g$, which satisfies the Poisson equation
$$Q g = f - \pi f e. \tag{11.69}$$
Thus, using the UL-type RG-factorization, it follows from Eq. (11.69) that
$$g = Q^{-1}_{\max} (f - \pi f e) = (I - G_L)^{-1} U_D^{-1} (I - R_U)^{-1} (f - \pi f e) \tag{11.70}$$
or, using the LU-type RG-factorization, we have
$$g = Q^{-1}_{\max} (f - \pi f e) = (I - G_U)^{-1} U_D^{-1} (I - R_L)^{-1} (f - \pi f e). \tag{11.71}$$
The realization matrix is given by
$$D = e g^{T} - g e^{T} = e (f - \pi f e)^{T} \left[ (I - R_U)^{-1} \right]^{T} \left[ U_D^{-1} \right]^{T} \left[ (I - G_L)^{-1} \right]^{T} - (I - G_L)^{-1} U_D^{-1} (I - R_U)^{-1} (f - \pi f e) e^{T} \tag{11.72}$$
or
$$D = e g^{T} - g e^{T} = e (f - \pi f e)^{T} \left[ (I - R_L)^{-1} \right]^{T} \left[ U_D^{-1} \right]^{T} \left[ (I - G_U)^{-1} \right]^{T} - (I - G_U)^{-1} U_D^{-1} (I - R_L)^{-1} (f - \pi f e) e^{T}. \tag{11.73}$$
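For a finite irreducible generator, the derivative $\frac{d}{d\varepsilon}\pi_\varepsilon|_{\varepsilon=0}$ can be obtained without the infinite-dimensional machinery: the system $\pi' Q = -\pi V$, $\pi' e = 0$ has the closed form $\pi' = \pi V (e\pi - Q)^{-1}$, with $(e\pi - Q)^{-1}$ playing the role of the maximal nonpositive inverse above. A minimal sketch with a hypothetical 3-state generator:

```python
import numpy as np

def ctmc_stationary(Q):
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Hypothetical finite generator Q and perturbation direction V with Ve = 0.
Q = np.array([[-2.0, 1.0, 1.0],
              [ 1.0, -3.0, 2.0],
              [ 2.0, 1.0, -3.0]])
V = np.array([[-0.5, 0.5, 0.0],
              [ 0.0, 0.5, -0.5],
              [ 0.5, -0.5, 0.0]])
f = np.array([0.0, 1.0, 3.0])

pi = ctmc_stationary(Q)
e = np.ones(3)
# pi'(0) solves pi' Q = -pi V with pi' e = 0, i.e. pi' = pi V (e*pi - Q)^{-1}.
dpi = pi @ V @ np.linalg.inv(np.outer(e, pi) - Q)
d_eta = dpi @ f

eps = 1e-6                            # finite-difference check
d_num = (ctmc_stationary(Q + eps * V) - pi) @ f / eps
```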
As an important example, we consider a perturbed continuous-time level-dependent QBD process with infinitely-many levels whose infinitesimal generator is given by
$$Q(\varepsilon) = Q + \varepsilon V, \tag{11.74}$$
where $\varepsilon$ is a sufficiently small positive number,
$$Q = \begin{pmatrix} B_1^{(0)} & B_0^{(0)} & & & \\ B_2^{(1)} & B_1^{(1)} & B_0^{(1)} & & \\ & B_2^{(2)} & B_1^{(2)} & B_0^{(2)} & \\ & & B_2^{(3)} & B_1^{(3)} & B_0^{(3)} \\ & & & \ddots & \ddots \end{pmatrix} \tag{11.75}$$
and the perturbed-directional matrix
$$V = \begin{pmatrix} C_1^{(0)} & C_0^{(0)} & & & \\ C_2^{(1)} & C_1^{(1)} & C_0^{(1)} & & \\ & C_2^{(2)} & C_1^{(2)} & C_0^{(2)} & \\ & & C_2^{(3)} & C_1^{(3)} & C_0^{(3)} \\ & & & \ddots & \ddots \end{pmatrix}.$$
Let $\pi = (\pi_0, \pi_1, \pi_2, \ldots)$ be the stationary probability vector of the QBD process $Q$, partitioned according to the levels. We denote by the matrix sequence $\{R_k\}$ the minimal nonnegative solution to the system of matrix equations
$$B_0^{(k)} + R_k B_1^{(k+1)} + R_k R_{k+1} B_2^{(k+2)} = 0, \quad k \geq 0.$$
Then
$$\pi_0 = \kappa x_0 \qquad \text{and} \qquad \pi_k = \kappa x_0 R_0 R_1 \cdots R_{k-1}, \quad k \geq 1,$$
where $x_0$ is the stationary probability vector of the censored chain to level 0 with infinitesimal generator $U_0 = B_1^{(0)} + R_0 B_2^{(1)}$, and $\kappa$ is a normalization constant such that $\sum_{k=0}^{\infty} \pi_k e = 1$, i.e.,
$$\kappa = \frac{1}{1 + \displaystyle\sum_{k=0}^{\infty} x_0 R_0 R_1 \cdots R_k e}.$$
In principle, Eq. (11.68), using the LU-type RG-factorization, provides an expression for the $n$th derivative of the stationary performance measure $\eta_\varepsilon$ at $\varepsilon = 0$. It is easy to see that for small $n \geq 1$, $\eta^{(n)}$ can be further simplified using the LU-type R-, U- and G-measures. The following proposition provides an expression for $\frac{d}{d\varepsilon}\eta_\varepsilon \big|_{\varepsilon=0}$ by means of the LU-type R-, U- and G-measures.

Proposition 11.1
$$\frac{d}{d\varepsilon}\eta_\varepsilon \Big|_{\varepsilon=0} = -\left[ \pi_0 C_1^{(0)} + \pi_1 C_2^{(1)} \right] \sum_{k=0}^{\infty} Y_k^{(0)} U_k^{-1} \sum_{i=0}^{k} X_i^{(k)} f_{k+i} - \sum_{m=1}^{\infty} \left[ \pi_{m-1} C_0^{(m-1)} + \pi_m C_1^{(m)} + \pi_{m+1} C_2^{(m+1)} \right] \sum_{k=m}^{\infty} Y_{k-m}^{(m)} U_k^{-1} \sum_{i=0}^{k} X_i^{(k)} f_{k+i},$$
where
$$X_0^{(l)} = I, \quad l \geq 0; \qquad X_k^{(l)} = R_l R_{l+1} R_{l+2} \cdots R_{l+k-1}, \quad k \geq 1,\ l \geq 0,$$
and
$$Y_0^{(l)} = I, \quad l \geq 0; \qquad Y_k^{(l)} = G_l G_{l+1} G_{l+2} \cdots G_{l+k-1}, \quad k \geq 1,\ l \geq 0.$$

Proof Setting $n = 1$ in Eq. (11.68), we have
$$\frac{d}{d\varepsilon}\eta_\varepsilon \Big|_{\varepsilon=0} = -\pi V (I - G_U)^{-1} U_D^{-1} (I - R_L)^{-1} f.$$
Thus, we obtain
$$\pi V = \left( \pi_0 C_1^{(0)} + \pi_1 C_2^{(1)},\ \pi_0 C_0^{(0)} + \pi_1 C_1^{(1)} + \pi_2 C_2^{(2)},\ \pi_1 C_0^{(1)} + \pi_2 C_1^{(2)} + \pi_3 C_2^{(3)},\ \ldots \right)$$
and
$$(I - G_U)^{-1} U_D^{-1} (I - R_L)^{-1} f = \begin{pmatrix} \displaystyle\sum_{k=0}^{\infty} Y_k^{(0)} U_k^{-1} \sum_{i=0}^{k} X_i^{(k)} f_{k+i} \\[2mm] \displaystyle\sum_{k=1}^{\infty} Y_{k-1}^{(1)} U_k^{-1} \sum_{i=0}^{k} X_i^{(k)} f_{k+i} \\[2mm] \displaystyle\sum_{k=2}^{\infty} Y_{k-2}^{(2)} U_k^{-1} \sum_{i=0}^{k} X_i^{(k)} f_{k+i} \\ \vdots \end{pmatrix}.$$
Simple computations yield the stated result. This completes the proof.

Now, we provide sensitivity analysis for the transient performance measure $\gamma(t)$ for $t \geq 0$, where
$$\gamma(t) = E[\Phi(t)] = E\left[ \int_0^t f(X_u)\, du \right].$$
It is easy to see from Eq. (10.3) that
$$\gamma(t) = \pi(0) \int_0^t \exp\{Qx\}\, dx\, f. \tag{11.76}$$
Based on Eq. (11.76), for the perturbed Markov chain $\widetilde{Q}_\varepsilon$ we can write the perturbed transient performance measure as
$$\gamma_\varepsilon(t) = \pi(0) \int_0^t \exp\{\widetilde{Q}_\varepsilon x\}\, dx\, f.$$
Therefore, we obtain
$$\frac{d^n}{d\varepsilon^n}\gamma_\varepsilon(t) = \pi(0) \int_0^t x^n \exp\{\widetilde{Q}_\varepsilon x\}\, dx\, V^n f, \quad n \geq 1,$$
which leads to
$$\frac{d^n}{d\varepsilon^n}\gamma_\varepsilon(t) \Big|_{\varepsilon=0} = \pi(0) \int_0^t x^n \exp\{Qx\}\, dx\, V^n f, \quad n \geq 1. \tag{11.77}$$
Finally, we provide sensitivity analysis for the $\beta$-discounted performance measure $\varphi(\beta)$, where
$$\varphi(\beta) = E\left[ \int_0^{\infty} e^{-\beta u} f(X_u)\, du \right].$$
It is easy to see from Eq. (10.11) that
$$\varphi(\beta) = -\pi(0)(Q - \beta I)^{-1}_{\max} f. \tag{11.78}$$
Based on Eq. (11.78), for the perturbed Markov chain $\widetilde{Q}_\varepsilon$ we can write the perturbed discounted performance measure as
$$\varphi_\varepsilon(\beta) = -\pi(0)\left( \widetilde{Q}_\varepsilon - \beta I \right)^{-1}_{\max} f.$$
Therefore, we obtain
$$\frac{d^n}{d\varepsilon^n}\varphi_\varepsilon(\beta) = (-1)^{n+1} n!\,\pi(0) \left( \widetilde{Q}_\varepsilon - \beta I \right)^{-(n+1)}_{\max} V^n f, \quad n \geq 1,$$
which leads to
$$\frac{d^n}{d\varepsilon^n}\varphi_\varepsilon(\beta) \Big|_{\varepsilon=0} = (-1)^{n+1} n!\,\pi(0) (Q - \beta I)^{-(n+1)}_{\max} V^n f, \quad n \geq 1. \tag{11.79}$$

11.4
Perturbed Accumulated Reward Processes
In this section, we derive the $n$th derivative of the conditional moments of the accumulated reward process of an irreducible continuous-time perturbed QBD process.

Consider the accumulated reward process of an irreducible continuous-time perturbed QBD process. We assume that the perturbed QBD process $\{x_\varepsilon(t), t \geq 0\}$, given in Eq. (11.74), is separable and Borel measurable, and that almost all of its sample functions are lower semi-continuous from the right. Therefore, the perturbed QBD process has the strong Markov property. Similar to Eq. (10.38), we define
$$\psi_{k,j}(t, \varepsilon) = E\left[ \int_0^t f(x_\varepsilon(u))\, du \,\Big|\, x_\varepsilon(0) = (k, j) \right].$$
Let
$$\Psi_k(t, \varepsilon) = \left( \psi_{k,1}(t, \varepsilon), \psi_{k,2}(t, \varepsilon), \ldots, \psi_{k,M_k}(t, \varepsilon) \right)^{T}$$
and
$$\Psi_k^{*}(s, \varepsilon) = \int_0^{\infty} e^{-st}\, \Psi_k(t, \varepsilon)\, dt, \quad k \geq 0.$$
The sequence $\{\Psi_k^{*}(s, \varepsilon)\}$ then satisfies the following system of infinite-dimensional linear equations, as in Theorem 10.3:
$$\Psi_0^{*}(s, \varepsilon) = \Lambda_0(s, \varepsilon) \left\{ e + \Omega_0(\varepsilon) \Psi_0^{*}(s, \varepsilon) + \left[ B_0^{(0)} + \varepsilon C_0^{(0)} \right] \Psi_1^{*}(s, \varepsilon) \right\}, \tag{11.80}$$
and for $l \geq 1$,
$$\Psi_l^{*}(s, \varepsilon) = \Lambda_l(s, \varepsilon) \left\{ e + \left[ B_2^{(l)} + \varepsilon C_2^{(l)} \right] \Psi_{l-1}^{*}(s, \varepsilon) + \Omega_l(\varepsilon) \Psi_l^{*}(s, \varepsilon) + \left[ B_0^{(l)} + \varepsilon C_0^{(l)} \right] \Psi_{l+1}^{*}(s, \varepsilon) \right\}, \tag{11.81}$$
where for $k \geq 0$,
$$\Lambda_k(s, \varepsilon) = \mathrm{diag}\left( \frac{f(k,1)}{\left[ s - b^{(k)}_{1,(1,1)} - \varepsilon c^{(k)}_{1,(1,1)} \right]^2}, \ldots, \frac{f(k,M_k)}{\left[ s - b^{(k)}_{1,(M_k,M_k)} - \varepsilon c^{(k)}_{1,(M_k,M_k)} \right]^2} \right)$$
and
$$\Omega_k(\varepsilon) = B_1^{(k)} + \varepsilon C_1^{(k)} - \mathrm{diag}\left( b^{(k)}_{1,(1,1)} + \varepsilon c^{(k)}_{1,(1,1)}, \ldots, b^{(k)}_{1,(M_k,M_k)} + \varepsilon c^{(k)}_{1,(M_k,M_k)} \right).$$
Note that $\Omega_k(\varepsilon) = \Omega_k + \varepsilon \Gamma_k$, where $\Omega_k$ is given in Eq. (10.42) and
$$\Gamma_k = C_1^{(k)} - \mathrm{diag}\left( c^{(k)}_{1,(1,1)}, c^{(k)}_{1,(2,2)}, \ldots, c^{(k)}_{1,(M_k,M_k)} \right).$$
Let
$$\widetilde{Q}(s,\varepsilon) = \mathrm{diag}\left( \Lambda_0(s,\varepsilon), \Lambda_1(s,\varepsilon), \Lambda_2(s,\varepsilon), \ldots \right) \begin{pmatrix} \Omega_0(\varepsilon) & B_0^{(0)} + \varepsilon C_0^{(0)} & & \\ B_2^{(1)} + \varepsilon C_2^{(1)} & \Omega_1(\varepsilon) & B_0^{(1)} + \varepsilon C_0^{(1)} & \\ & B_2^{(2)} + \varepsilon C_2^{(2)} & \Omega_2(\varepsilon) & \ddots \\ & & \ddots & \ddots \end{pmatrix},$$
$$h(s,\varepsilon) = \begin{pmatrix} \Lambda_0(s,\varepsilon)\, e \\ \Lambda_1(s,\varepsilon)\, e \\ \Lambda_2(s,\varepsilon)\, e \\ \vdots \end{pmatrix}, \qquad \Psi(s,\varepsilon) = \begin{pmatrix} \Psi_0^{*}(s,\varepsilon) \\ \Psi_1^{*}(s,\varepsilon) \\ \Psi_2^{*}(s,\varepsilon) \\ \vdots \end{pmatrix}.$$
Then it follows from Eq. (11.80) and Eq. (11.81) that
$$\Psi(s,\varepsilon) = \widetilde{Q}(s,\varepsilon)\, \Psi(s,\varepsilon) + h(s,\varepsilon). \tag{11.82}$$
Let
$$\theta_k(s,\varepsilon) = \mathrm{diag}\left( \left[ s - b^{(k)}_{1,(1,1)} - \varepsilon c^{(k)}_{1,(1,1)} \right]^2, \ldots, \left[ s - b^{(k)}_{1,(M_k,M_k)} - \varepsilon c^{(k)}_{1,(M_k,M_k)} \right]^2 \right),$$
$$\Theta(s,\varepsilon) = \mathrm{diag}\left( \theta_0(s,\varepsilon), \theta_1(s,\varepsilon), \theta_2(s,\varepsilon), \ldots \right),$$
$$f_k = \left( f(k,1), f(k,2), \ldots, f(k,M_k) \right)^{T} \quad \text{and} \quad f = \left( f_0^{T}, f_1^{T}, f_2^{T}, \ldots \right)^{T}.$$
Then, using Eq. (11.82), we obtain
$$\Theta(s,\varepsilon)\, \Psi(s,\varepsilon) = \widetilde{Q}_1 \Psi(s,\varepsilon) + \varepsilon \widetilde{Q}_2 \Psi(s,\varepsilon) + f, \tag{11.83}$$
where
$$\widetilde{Q}_1 = \begin{pmatrix} \Omega_0 & B_0^{(0)} & & \\ B_2^{(1)} & \Omega_1 & B_0^{(1)} & \\ & B_2^{(2)} & \Omega_2 & B_0^{(2)} \\ & & \ddots & \ddots \end{pmatrix} \quad \text{and} \quad \widetilde{Q}_2 = \begin{pmatrix} \Gamma_0 & C_0^{(0)} & & \\ C_2^{(1)} & \Gamma_1 & C_0^{(1)} & \\ & C_2^{(2)} & \Gamma_2 & C_0^{(2)} \\ & & \ddots & \ddots \end{pmatrix}.$$
Let
$$\xi^{(n)}(s) = \frac{\partial^n}{\partial \varepsilon^n} \Psi(s,\varepsilon) \Big|_{\varepsilon=0}, \qquad \Theta^{(n)}(s) = \frac{\partial^n}{\partial \varepsilon^n} \Theta(s,\varepsilon) \Big|_{\varepsilon=0}, \quad n \geq 1.$$
Taking derivatives with respect to $\varepsilon$ on both sides of Eq. (11.83) and letting $\varepsilon = 0$, we have
$$\left[ \widetilde{Q}_1 - \Theta(s) \right] \xi^{(1)}(s) = \left[ \Theta^{(1)}(s) - \widetilde{Q}_2 \right] \xi(s),$$
where $\Theta(s) = \Theta(s,0)$ and $\xi(s) = \Psi(s,0)$ are given in Theorem 10.4. Hence, we obtain
$$\xi^{(1)}(s) = \left[ \widetilde{Q}_1 - \Theta(s) \right]^{-1}_{\max} \left[ \Theta^{(1)}(s) - \widetilde{Q}_2 \right] \xi(s). \tag{11.84}$$
Further, for the $n$th derivative we can obtain
$$\left[ \widetilde{Q}_1 - \Theta(s) \right] \xi^{(n)}(s) = n \left[ \Theta^{(1)}(s) - \widetilde{Q}_2 \right] \xi^{(n-1)}(s) + \sum_{l=0}^{n-2} \binom{n}{l} \Theta^{(n-l)}(s)\, \xi^{(l)}(s).$$
Therefore,
$$\xi^{(n)}(s) = n \left[ \widetilde{Q}_1 - \Theta(s) \right]^{-1}_{\max} \left[ \Theta^{(1)}(s) - \widetilde{Q}_2 \right] \xi^{(n-1)}(s) + \left[ \widetilde{Q}_1 - \Theta(s) \right]^{-1}_{\max} \sum_{l=0}^{n-2} \binom{n}{l} \Theta^{(n-l)}(s)\, \xi^{(l)}(s), \tag{11.85}$$
where
$$\left[ \widetilde{Q}_1 - \Theta(s) \right]^{-1}_{\max} = [I - G_L(s)]^{-1}\, U_D(s)^{-1}\, [I - R_U(s)]^{-1}$$
or
$$\left[ \widetilde{Q}_1 - \Theta(s) \right]^{-1}_{\max} = [I - G_U(s)]^{-1}\, U_D(s)^{-1}\, [I - R_L(s)]^{-1}.$$

11.5
A Perturbed MAP/PH/1 Queue
This section provides sensitivity analysis for the stationary queue length of a perturbed MAP/PH/1 queue. To achieve this, we first provide sensitivity analysis for the PH distribution as well as for the MAP.
11.5.1
A Perturbed PH Distribution
Consider a PH distribution $F(x)$ for $x \geq 0$ with irreducible representation $(\alpha, T)$ of order $J_1$, where $\alpha e = 1$. Thus we have
$$F(x) = P\{X \leq x\} = 1 - \alpha \exp\{Tx\}\, e,$$
where $X$ is the corresponding PH random variable. We denote by $S$ a matrix of order $J_1$ such that the perturbed matrix $T + \varepsilon S$ has negative diagonal elements and nonnegative off-diagonal elements with $(T + \varepsilon S)e \lneq 0$ for each sufficiently small $\varepsilon > 0$. Let $T^0 = -Te$ and $S^0 = -Se$. We assume that the irreducibility of the matrix $T + T^0 \alpha$ is the same as that of the matrix $(T + \varepsilon S) + (T^0 + \varepsilon S^0)\alpha$. Obviously, $(\alpha, T + \varepsilon S)$ is the irreducible representation of order $J_1$ of a perturbed PH distribution $F_\varepsilon(x)$ with
$$F_\varepsilon(x) = P\{X_\varepsilon \leq x\} = 1 - \alpha \exp\{(T + \varepsilon S)x\}\, e.$$
Let $f_\varepsilon^{*}(s)$ be the Laplace-Stieltjes transform of $F_\varepsilon(x)$. Then
$$f_\varepsilon^{*}(s) = \alpha (sI - T - \varepsilon S)^{-1} (T^0 + \varepsilon S^0).$$
We obtain
$$\frac{\partial^n}{\partial \varepsilon^n} f_\varepsilon^{*}(s) \Big|_{\varepsilon=0} = n!\,\alpha \left[ (sI - T)^{-1} S \right]^n \left[ (sI - T)^{-1} T^0 - e \right], \quad n \geq 1.$$
In addition, from $E\left[ X_\varepsilon^k \right] = (-1)^k k! \left[ \alpha (T + \varepsilon S)^{-k} e \right]$, we obtain
$$\frac{d^n}{d\varepsilon^n} E\left[ X_\varepsilon^k \right] \Big|_{\varepsilon=0} = (-1)^{k+n} (k+n-1)!\, k\, \alpha T^{-(k+n)} S^n e, \quad k \geq 1,\ n \geq 1.$$
For the PH distribution, the maximal eigenvalue of the matrix $T$ is an important quantity. In what follows we provide sensitivity analysis for this maximal eigenvalue. Let $c$ and $d$ be the left and right Perron-Frobenius eigenvectors of the matrix $T$, and $c_\varepsilon$ and $d_\varepsilon$ those of $T + \varepsilon S$, respectively. Assume $cd = 1$. We denote by $\lambda$ and $\lambda_\varepsilon$ the maximal eigenvalues of $T$ and $T + \varepsilon S$, respectively. It is easy to verify that
$$\frac{d}{d\varepsilon} c_\varepsilon \Big|_{\varepsilon=0} (\lambda I - T) = cS - \frac{d}{d\varepsilon}\lambda_\varepsilon \Big|_{\varepsilon=0}\, c.$$
Note that $(\lambda I - T)d = 0$ and $cd = 1$; we obtain
$$\frac{d}{d\varepsilon}\lambda_\varepsilon \Big|_{\varepsilon=0} = cSd.$$
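The eigenvalue-sensitivity formula $d\lambda_\varepsilon/d\varepsilon|_{\varepsilon=0} = cSd$ is easy to check numerically. A minimal sketch with a hypothetical $2 \times 2$ PH generator block $T$ and perturbation direction $S$ (chosen so that $T + \varepsilon S$ keeps a negative diagonal and nonnegative off-diagonal entries):

```python
import numpy as np

def perron(M):
    """Maximal (Perron) eigenvalue of M with left/right eigenvectors c, d
    normalized so that c d = 1."""
    w, vr = np.linalg.eig(M)
    i = np.argmax(w.real)
    lam, d = w[i].real, vr[:, i].real
    wl, vl = np.linalg.eig(M.T)
    j = np.argmax(wl.real)
    c = vl[:, j].real
    c = c / (c @ d)
    return lam, c, d

# Hypothetical PH generator block T and perturbation direction S.
T = np.array([[-3.0, 2.0],
              [ 1.0, -4.0]])
S = np.array([[ 1.0, -1.0],
              [-0.5, 0.5]])

lam, c, d = perron(T)
d_lam = c @ S @ d                    # d(lambda_eps)/d(eps) at eps = 0

eps = 1e-6                           # finite-difference check
lam_eps, _, _ = perron(T + eps * S)
d_num = (lam_eps - lam) / eps
```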
11.5.2 A Perturbed MAP

Consider a MAP with irreducible matrix descriptor $(C_1, D_1)$ of order $J_2$, where the diagonal elements of the matrix $C_1$ are all negative, the off-diagonal elements of $C_1$ are all nonnegative, each element of the matrix $D_1$ is nonnegative, and the Markov chain with infinitesimal generator $C_1 + D_1$ is irreducible and positive recurrent. Let $\theta$ be the stationary probability vector of the Markov chain with infinitesimal generator $C_1 + D_1$. Then $r = \theta D_1 e$ is the stationary arrival rate.

Let $C_2$ and $D_2$ be two matrices of order $J_2$ such that for each sufficiently small $\varepsilon > 0$, the diagonal elements of the matrix $C_1 + \varepsilon C_2$ are all negative, the off-diagonal elements of $C_1 + \varepsilon C_2$ are all nonnegative, each element of the matrix $D_1 + \varepsilon D_2$ is nonnegative and $(C_2 + D_2)e = 0$. It is clear that $(C_1 + \varepsilon C_2, D_1 + \varepsilon D_2)$ is the irreducible matrix descriptor of a perturbed MAP.

Let $\theta_\varepsilon$ be the stationary probability vector of the perturbed Markov chain with infinitesimal generator $(C_1 + D_1) + \varepsilon (C_2 + D_2)$. Then $\theta_\varepsilon \left[ (C_1 + D_1) + \varepsilon (C_2 + D_2) \right] = 0$ and $\theta_\varepsilon e = 1$. Thus, we obtain
$$\frac{d^n}{d\varepsilon^n}\theta_\varepsilon\, e = 0, \quad n \geq 1,\ \varepsilon \geq 0, \tag{11.86}$$
and
$$\frac{d}{d\varepsilon}\theta_\varepsilon \Big|_{\varepsilon=0} (C_1 + D_1) = -\theta (C_2 + D_2). \tag{11.87}$$
It follows from Eq. (11.86) and Eq. (11.87) that
$$\frac{d}{d\varepsilon}\theta_\varepsilon \Big|_{\varepsilon=0} (C_1 + D_1 - \kappa e \theta) = -\theta (C_2 + D_2),$$
where $\kappa = \min_{1 \leq i \leq J_2} \{ c_{1,(i,i)} + d_{1,(i,i)} \}$. Note that the matrix $C_1 + D_1 - \kappa e \theta$ is invertible; we have
$$\frac{d^n}{d\varepsilon^n}\theta_\varepsilon \Big|_{\varepsilon=0} = (-1)^n n!\,\theta \left[ (C_2 + D_2)(C_1 + D_1 - \kappa e \theta)^{-1} \right]^n, \quad n \geq 1.$$
Therefore, the $n$th derivative of the stationary arrival rate $r_\varepsilon = \theta_\varepsilon (D_1 + \varepsilon D_2) e$ of the perturbed MAP is given by
$$\frac{d^n}{d\varepsilon^n} r_\varepsilon \Big|_{\varepsilon=0} = (-1)^{n-1} n!\,\theta \left[ (C_2 + D_2)(C_1 + D_1 - \kappa e \theta)^{-1} \right]^{n-1} \left\{ D_2 e - \left[ (C_2 + D_2)(C_1 + D_1 - \kappa e \theta)^{-1} \right] D_1 e \right\}, \quad n \geq 1.$$
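The $n = 1$ case of the arrival-rate derivative, $\frac{d}{d\varepsilon} r_\varepsilon |_{\varepsilon=0} = \theta D_2 e - \theta (C_2 + D_2)(C_1 + D_1 - \kappa e\theta)^{-1} D_1 e$, can be checked directly. The sketch below uses a hypothetical order-2 MAP descriptor and perturbation with $(C_2 + D_2)e = 0$:

```python
import numpy as np

def ctmc_stationary(A):
    n = A.shape[0]
    M = np.vstack([A.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(M, b, rcond=None)[0]

# Hypothetical MAP (C1, D1) of order 2 and perturbation (C2, D2), (C2+D2)e = 0.
C1 = np.array([[-3.0, 1.0],
               [ 2.0, -4.0]])
D1 = np.array([[ 1.5, 0.5],
               [ 1.0, 1.0]])
C2 = np.array([[ 0.5, -0.2],
               [ 0.1, -0.3]])
D2 = np.array([[-0.4, 0.1],
               [ 0.0, 0.2]])

e = np.ones(2)
A = C1 + D1                              # generator of the phase process
theta = ctmc_stationary(A)
r = theta @ D1 @ e                       # stationary arrival rate

kap = np.diag(A).min()                   # kappa = min_i (c_{1,(i,i)} + d_{1,(i,i)})
M = (C2 + D2) @ np.linalg.inv(A - kap * np.outer(e, theta))
dr = theta @ (D2 @ e - M @ D1 @ e)       # n = 1 case of the derivative formula

eps = 1e-6                               # finite-difference check
theta_eps = ctmc_stationary(A + eps * (C2 + D2))
r_eps = theta_eps @ (D1 + eps * D2) @ e
dr_num = (r_eps - r) / eps
```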
11.5.3 A Perturbed MAP/PH/1 Queue

Consider a MAP/PH/1 queue. We denote by $q(t)$, $I(t)$ and $J(t)$ the number of customers in the system, the phase of the MAP input and the phase of the PH service time at time $t$, respectively. It is obvious that $\{(q(t), I(t), J(t)),\ t \geq 0\}$ is a level-independent QBD process whose infinitesimal generator is given by
$$\Phi_1 = \begin{pmatrix} C_1 & D_1 \otimes \alpha & & & \\ I \otimes T^0 & C_1 \oplus T & D_1 \otimes I & & \\ & I \otimes T^0 \alpha & C_1 \oplus T & D_1 \otimes I & \\ & & I \otimes T^0 \alpha & C_1 \oplus T & D_1 \otimes I \\ & & & \ddots & \ddots \end{pmatrix}. \tag{11.88}$$
Let $\theta$ and $\omega$ be the stationary probability vectors of the Markov chains $C_1 + D_1$ and $T + T^0 \alpha$, respectively. Then the QBD process with infinitesimal generator given in Eq. (11.88) is positive recurrent, null recurrent or transient according as $\theta D_1 e < \omega T^0$, $\theta D_1 e = \omega T^0$ or $\theta D_1 e > \omega T^0$, respectively.

Now, we impose an infinitesimal perturbation on both the input stream and the service times, i.e., $(C_1 + \varepsilon C_2, D_1 + \varepsilon D_2)$ and $(\alpha, T + \varepsilon S)$, respectively. The infinitesimal generator of the perturbed QBD process $\{(q_\varepsilon(t), I_\varepsilon(t), J_\varepsilon(t)),\ t \geq 0\}$ is given by
$$\Phi_\varepsilon = \Phi_1 + \varepsilon \Phi_2, \tag{11.89}$$
where $\Phi_1$ is given in Eq. (11.88) and
$$\Phi_2 = \begin{pmatrix} C_2 & D_2 \otimes \alpha & & & \\ I \otimes S^0 & C_2 \oplus S & D_2 \otimes I & & \\ & I \otimes S^0 \alpha & C_2 \oplus S & D_2 \otimes I & \\ & & I \otimes S^0 \alpha & C_2 \oplus S & D_2 \otimes I \\ & & & \ddots & \ddots \end{pmatrix}.$$
Let $R$ and $R_\varepsilon$ be the minimal nonnegative solutions to the matrix equations
$$(D_1 \otimes I) + R (C_1 \oplus T) + R^2 (I \otimes T^0 \alpha) = 0$$
and
$$\left[ (D_1 + \varepsilon D_2) \otimes I \right] + R_\varepsilon \left[ (C_1 + \varepsilon C_2) \oplus (T + \varepsilon S) \right] + R_\varepsilon^2 \left[ I \otimes (T^0 + \varepsilon S^0) \alpha \right] = 0,$$
respectively. Then
$$\frac{d}{d\varepsilon} R_\varepsilon \Big|_{\varepsilon=0} = -\left[ (D_2 \otimes I) + R (C_2 \oplus S) + R^2 (I \otimes S^0 \alpha) \right] \left[ (C_1 \oplus T) + 2R (I \otimes T^0 \alpha) \right]^{-1}.$$
Since $\varepsilon > 0$ is sufficiently small, the perturbed MAP/PH/1 queue keeps the state classification of the original MAP/PH/1 queue. We now provide sensitivity analysis for the stationary queue length in the perturbed MAP/PH/1 queue as an example. If $\theta D_1 e < \omega T^0$, then the perturbed MAP/PH/1 queue is positive recurrent. To analyze the sensitivity of the stationary queue length, we need the derivatives of the stationary probability vector of the perturbed QBD process, since $P\{L_\varepsilon = k\} = \pi_k(\varepsilon) e$ for $k \geq 0$, where $L_\varepsilon = \lim_{t \to \infty} q_\varepsilon(t)$ almost everywhere.

Let $\pi = (\pi_0, \pi_1, \pi_2, \ldots)$ and $\pi(\varepsilon) = (\pi_0(\varepsilon), \pi_1(\varepsilon), \pi_2(\varepsilon), \ldots)$ be the stationary probability vectors of the two level-independent QBD processes with infinitesimal generators $\Phi_1$ given in Eq. (11.88) and $\Phi_\varepsilon$ given in Eq. (11.89), respectively. Then
Constructive Computation in Stochastic Models with Applications
ʌk (H )
ʌ1 (H ) RHk 1 ,
k ı 2,
(11.90)
and (ʌ0 (H ), ʌ1 (H )) is the unique positive solution to the system of equations ʌ0 (H )(C1 H C2 ) ʌ1 (H ) ª¬ I
(T 0 H S 0 ) º¼
0,
(11.91)
ʌ0 (H )[( D1 H D2 )
D ] ʌ1 (H ){(C1 H C2 ) (T H S ) RH [ I
(T 0 H S 0 )]D } 0, 1
and ʌ0 (H )e ʌ1 (H )( I RH ) e 1. It is obvious that ʌl It follows from Eq. (11.91) and Eq. (11.92) that
d § d · § C1 ¨ ʌ0 (H )_H 0, ʌ1 (H )_H 0 ¸ ¨ I
T 0 dH © dH ¹©
(11.92)
ʌl (0) for all l ı 0.
D1
D · ¸ ʌ0 C2 , A, 0 C1 T R ( I
T D ) ¹
where d ª º A ʌ 0 ( D2
D ) ʌ1 «C2 S R( I
S 0D ) RH _H 0 ( I
T 0D ) » . H d ¬ ¼ Note that § C1 ¨ 0 © I
T
D1
D · ¸ 0 C1 T R ( I
T D ) ¹
1
§ A11 ¨ © A21
A12 · ¸, A22 ¹
where A11
C11 C11 ( D1
D ) B 1 ( I
T 0 )C11 ,
A12
C11 ( D1
D ) B 1 ,
A21
B 1 ( I
T 0 )C11 ,
A22
B 1 ,
B
I ª¬C1 T R( I
T 0D ) º¼ ( I
T 0 )C11 ( D1
D ),
we obtain d ʌ0 (H )_H dH d ʌ1 (H )_H dH
0
ʌ0 C2 A11 AA21 ,
0
ʌ0 C2 A12 AA22 .
(11.93)
It follows from Eq. (11.90) and Eq. (11.93) that for k ı 2,
d ʌk (H )_H dH
0
(ʌ0C2 A12 AA22 ) R k 1 (k 1)ʌ1 R k 2 [( D2
I ) R (C2 S ) 1
R 2 ( I
S 0D )][(C1 T ) 2 R( I
T 0D )] . 604
(11.94)
11
Sensitivity Analysis and Evolutionary Games
Note that E[ LH ] ʌ1 (H )( I RH ) 2 e, we obtain d E[ LH ]_H dH
2
0
3
(ʌ 0C2 A12 AA22 )( I R) e 2ʌ1 ( I R) [( D2
I ) R(C2 S ) 1
R 2 ( I
S 0D )][(C1 T ) 2 R( I
T 0D )] e.
(11.95)
The derivatives in Eq. (11.93), Eq. (11.94) and Eq. (11.95) can be obtained once matrix R is calculated numerically. As an illustration, we now consider a perturbed M M 1 queue with perturbed arrival rate O aH and perturbed service rate P bH . Let U OP 1 and U (H ) (O aH ) ( P bH ) 1. It is clear that the perturbed M M 1 queue is stable if U 1. Simple calculations show that ʌ k (H ) [1 U (H )]U (H ) k for k ı 0. Therefore, we obtain d ʌk (H )_H dH
0
U k 1[k ( k 1) U ]
a bU
P
,
(11.96)
and d2 ʌ k (H )_H dH 2
0
U k 2
a bU
P2
{[a (k 1) b U (k 1)][k (k 1) U ] (11.97)
U (k 1)(D b U )}.
Since the mean of the stationary queue length in the perturbed M M 1 queue is E[ LH ] U (H ) [1 U (H )], we have dn E[ LH ]_H dH n
11.6
0
(1)
n 1
n !(a P bO )
P O
n 1
,
n ı 1.
(11.98)
Symmetric Evolutionary Games
In this section, we apply the perturbed birth death processes to study a 2 u 2 symmetric evolutionary game, and obtain the evolutionary stable strategy (ESS) of the symmetric evolutionary game by means of the stationary probability distribution of the perturbed birth-death process. Note that a 2 u 2 evolutionary game contains 2 types of players, and each player has 2 strategies. The evolutionary game is a useful mathematical tool developed by biologists for predicting population dynamics in the context of interactions. The Evolutionary Stable Strategy (ESS) is an important concept which is characterized by a property of robustness against invaders (or mutations). More specifically, (1) if an ESS is reached, then the proportions of each population do not change 605
Constructive Computation in Stochastic Models with Applications
in time; and (2) at the ESS, the populations are immune from being invaded by other small populations. Obviously, the ESS is stronger than Nash equilibrium in which it is only requested that a single user would not benefit by a change (mutation) of its behavior. Now, it is necessary to provide a mathematical description for the ESS, which will be useful in the rest of this section later. Consider a large population of players. Each tagged individual needs to take some strategies with the strategy set K. We denote by J(p, q) the expected payoff for our tagged individual if it uses a strategy p when meeting another individual who adopts the strategy q. In general, the payoff may be regarded as a fitness, and strategies with larger fitness are expected to propagate faster in a population. Let p and q belong to a set K of available strategies. In the standard framework for evolutionary games, there are a finite number of pure strategies, and a general strategy of an individual is a probability distribution over the pure strategies. Note that J(p, q) is linear in p and q. Suppose that the whole population uses a strategy q, and a small fraction H ! 0 (called mutations) adopts another strategy p. Evolutionary forces are expected to select against p if J (q, H p (1 H )q ) ! J ( p, H p (1 H )q ).
A strategy q is said to be ESS if for every p z q there exists some H ! 0 such that the above inequality holds for all H (0, H ) . In fact, we expect that if for all p z q , J (q, q ) ! J (q, p ).
This indicates that the mutation fraction in the population will tend to decrease, since it has a lower reward which leads to a lower growth rate. Thus the strategy q is immune to the mutations. On the other hand, if for all p z q, J (q, q ) J ( p, q) and J (q, p ) ! J ( p, p), then a population using strategy q are weakly immune against a mutation using strategy p. In this case, if the mutant’s population grows, then we shall frequently have individuals with strategy q competing with the mutations, since the condition J (q, p ) ! J ( p, p ) ensures that the growth rate of the original population exceeds that of the mutations. Based on the above analysis, a strategy q is said to be ESS if for all p z q, J ( q, q) ! J ( q, p);
or J ( q, q )
J ( p, q) and J (q, p ) ! J ( p, p).
In the remainder of this section, we use the perturbed Markov chains to study the ESS for some symmetric evolutionary games and asymmetric evolutionary games. 606
11
Sensitivity Analysis and Evolutionary Games
We consider a 2 u 2 symmetric evolutionary game whose payoff matrix is given by V1
A
V2
§ (a, a) (b, c) · ¨¨ ¸¸ , © (c , b ) ( d , d ) ¹
V1 V2
(11.99)
where V 1 and V 2 denote the two strategies for each player. We assume that the populations of players of types 1 and 2 are both N. For simplicity of description, we may only study the players of type 1 with the payoff matrix, given by V1 V 2 V1 § a ¨ V 2 ¨© c
A
b· ¸. d ¸¹
(11.100)
In fact, it is seen from Eq. (11.99) that the payoff matrix for players of type 2 is given by V1 V2 V1 V2
B
§a c · ¨¨ ¸¸. b d © ¹
We denote by z (t ) the number of players of type 1 who are playing strategy V 1 at time t. Then z (t ) S {0,1, 2,! , N 1, N }. Using the total probability law, the profitability of choosing strategies V 1 and V 2 can be defined as fV 1 ( z )
z Nz a b N N
(11.101)
fV 2 ( z )
z Nz c d, N N
(11.102)
and
respectively. Note that the players switch from one strategy to another according to the bounded rationality, using the two functions fV1 ( z ) and fV 2 ( z ), the state transition rates are defined as
Oi (H ) H N max{ fV (i ) fV (i ),0}, 1
2
0 İ i İ N 1,
(11.103)
and
P i (H ) H N max{ fV (i ) fV (i ), 0}, 1 İ i İ N , 2
1
(11.104)
where H is the exogenous mutation rate, N estimates in some sense the speed at which the boundedly rational players react to their environment, thus N is called 607
Constructive Computation in Stochastic Models with Applications
the learning ability rate of the players. Based on Eq. (11.103) and Eq. (11.104), it is seen from the average meaning that {z (t ), t ı 0} is a continuous-time birth death process on state space S whose infinitesimal generator is given by
QH
§ O0 (H ) O0 (H ) · ¨ ¸ ¨ P1 (H ) J 1 (H ) O1 (H ) ¸ ¨ ¸ P 2 (H ) J 2 (H ) O2 (H ) ¨ ¸, % % % ¨ ¸ ¨ P N 1 (H ) J N 1 (H ) ON 1 (H ) ¸ ¨¨ ¸ P N (H ) P N (H ) ¸¹ ©
(11.105)
where
J i (H ) Oi (H ) Pi (H ), 1 İ i İ N 1. We assume that the perturbation parameter H ! 0 is small enough, it is clear that Oi (H ) ! 0 for 0 İ i İ N 1 and P j (H ) ! 0 for 1 İ j İ N , so that the QBD process QH is irreducible. Since the state space is finite and QH e 0, the QBD process QH is postive recurrent. Let ʌH
ʌ
H 0
, ʌ1H , ʌ 2H ,! , ʌ HN 1 , ʌ HN
be the stationary probability vector of the QBD process QH . We write
[0H
1
and
[ kH
O0 (H )O1 (H )! Ok 1 (H ) . P1 (H )P 2 (H )! P k (H )
(11.106)
Then ʌnH
[ nH N
¦[
,
0 İ n İ N.
(11.107)
H k
k 0
It follows from Eq. (11.101) and Eq. (11.102) that fV 1 ( z )
a b z b N
(11.108)
fV 2 ( z )
cd z d, N
(11.109)
and
608
11
Sensitivity Analysis and Evolutionary Games
respectively. Thus, fV 1 ( z ) and fV 2 ( z ) are both linear functions to z. Based on the two linear functions, we can distinguish three cases as follows. Case Ē The two lines cross in the interval [0, N ]. In this case, there exists a unique z* [0, N ] such that fV 1 ( z * )
fV 2 ( z * ),
which leads to z*
d b N. ( a c ) ( d b)
Let i* be the largest integer part of z* . Then i* {0,1, 2,..., N 1, N } and i* İ z * . We write N 1 ª °°N ¬ fV 2 (i ) fV1 (i ) º¼ , if i İ 2 , ® °N ª f (i ) f (i ) º , if i ! N 1 , V2 ¼ °¯ ¬ V1 2
ai
a N 1 a N 1 ! aN 1 BN
2
1
2
2
a1a2 ! a N 1 2
and
D
a0 a1 ! ai* 1 § a* · a0 a1 ! ai 1 ¨1 i ¸ ¨ a* ¸ i 1 ¹ ©
.
The following theorem describes the stationary probability vector ʌ H in Case Ē. Theorem 11.1 (1) If b İ d , then the stationary probability vector ʌ H puts probability 1 on state N when i* ( N 2) 2, on state 0 when i* ! ( N 2) 2. It puts probability 1 (1 BN ) on state 0 and BN (1 BN ) on state N when i* ( N 2) 2. (2) If b ! d , then the stationary probability vector ʌ H puts probability D on state i* and probability 1 D on state i* 1. Proof (1) If b İ d , then it follows from Eq. (11.101) and Eq. (11.102) that fV1 (0) İ fV 2 (0). Note that fV 1 ( z ) and fV 2 ( z ) are linear functions, we obtain that fV1 (i ) İ fV 2 (i) when i İ i* and fV 1 ( j ) ! fV 2 ( j ) when j ! i* . Let ai
N ª fV (i) fV (i ) º , if i İ i* , ° ¬ 2 ¼ 1 ® * °¯N ª¬ fV1 (i ) fV 2 (i ) º¼ , if i ! i . 609
Constructive Computation in Stochastic Models with Applications
Then Oi (H ) H , Pi (H ) H ai for i İ i* , and O j (H ) H a j , P j (H ) H for j ! i* . We write Aj (H )
1 (H a1 )(H a2 )! (H a j )
and B j (H )
(H ai* 1 )(H ai* 2 )! (H a j 1 ) (H a1 )(H a2 )! (H ai )
.
Therefore, we have
[0H
1
and
[ Hj
H j Aj (H ), for 1 İ j İ i* 1, °° * i for j i* , i* 1, ®H Ai* (H ), ° 2i* j 1 B j (H ), for j ı i* 2. °¯H
(11.110)
For simplicity of description, we suppress the dependence of Aj (H ) and B j (H ) on H , and write Aj (H ) Aj and B j (H ) B j . In these notation, it follows from Eq. (11.107) that
ʌ Hj
1 ° , for j 0, * * i* °1 A1H ! A * H B * H i 1 ! BN H 2i N 1 i i 2 ° °° H j Aj H , for 1 İ j İ i* 1, ® i* i* 1 2i N 1 °1 A1H ! Ai* H Bi* 2H ! BN H ° * H 2i j 1 B j H ° , for i* 2 İ j İ N . ° i* i* 1 2i* N 1 °¯1 A1H ! Ai* H Bi* 2H ! BN H
(11.111) If i* ! ( N 2) 2, it is clear from Eq. (11.111) that lim ʌ Hj H o0
1, for j 0, ® ¯0, for 1 İ j İ N .
If i ( N 2) 2, it is seen from Eq. (11.111) that 610
11
0, for 0 İ j İ N 1, ® ¯1, for j N .
lim ʌ Hj H o0
If i*
Sensitivity Analysis and Evolutionary Games
( N 2) 2, it is clear that N is odd and
[ NH
*
H 2i N 1 BN (H ) BN (H ).
Based on Eq. (11.111), we obtain ʌ0H
1 i*
*
1 A1H ! Ai* H Bi* 2H i 1 ! BN 1H BN
,
which leads to lim ʌ0H H o0
1 . 1 BN (0)
It is obvious from Eq. (11.111) that lim ʌ Hj H o0
0, 1 İ j İ N 1.
Note that ʌ NH
BN i*
1 A1H ! Ai* H Bi* 2H i
*
1
! BN 1H BN
,
we obtain BN (0) . 1 BN (0)
lim ʌ NH H o0
(2) If b ! d , then it follows from Eq. (11.101) and Eq. (11.102) that fV1 (0) ! fV 2 (0), we obtain that fV 1(i ) ! fV 2 (i ) when i İ i* and fV 1( j ) fV 2( j ) when j ! i* . Let
ai
N ª fV (i ) fV (i ) º , if i İ i* , 2 ° ¬ 1 ¼ ® * °¯N ª¬ fV 2 (i ) fV 1 (i ) º¼ , if i ı i 1.
Then Oi (H ) H ai , P i (H ) H , for i İ i* , and Oi (H ) H , P j (H ) H a j for j ı i* 1. We write
[0H
1
and 611
Constructive Computation in Stochastic Models with Applications
° (H a0 )(H a1 )! (H a j 1 ) , for 1 İ j İ i* , ° Hj ° ° (H a0 )(H a1 )! (H ai ) for j i 1, ® i* H H ( ) a ° i* 1 ° ° (H a0 )(H a1 )! (H ai ) for j ı i* 2. ° H 2i* j 1 (H a * )! (H a ) j i 1 ¯
[ Hj
(11.112)
Using Eq. (11.110), we easily obtain
lim ʌ Hj H o0
D , for j i* , ° * ®1 D , for j i 1, °0, otherwise. ¯
This completes the proof. The following theorem further provides some useful properties for the stationary probability vector ʌ H in Case Ē. Theorem 11.2 For any H ! 0, the stationary probability vector ʌ H satisfies the following properties: (1) If b İ d , then ʌ0H ! ʌ1H ! ! ! ʌiH* and ʌiH* 1 ʌiH* 2 ! ʌ HN . At the same time, ʌ Hj attains its minimum at j
i* and j
i* 1, and ʌiH*
ʌiH* 1 .
(2) If b ! d , then ʌ0H ʌ1H ! ʌiH* , ʌiH* 1 ! ʌiH* 2 ! ! ! ʌ HN , and ʌiH* İ ʌiH* 1 if and only if z * ı (2i* 1) 2. At the same time, ʌ Hj attains its maximum at j
i* if z * (2i* 1) 2 and at j
i* 1 if z * ! (2i* 1) 2.
Proof We first prove (1) For i İ i* 2, it follows from Eq. (11.110) that
[iH1
H i 1 H [iH . H ai 1 (H a1 )(H a2 )! (H ai )(H ai 1 )
Note that H (H ai 1 ) 1 when ai 1 ! 0, we obtain [iH1 [iH , which leads to 612
11
Sensitivity Analysis and Evolutionary Games
ʌ0H ! ʌ1H ! ! ! ʌiH* . For i ı i* 2, it follows from Eq. (11.110) that *
[iH1
H i 1 (H ai 1 )(H ai 2 )! (H ai ) *
H
i 1 i*
*
[iH
(H a1 )(H a2 )! (H ai* )
H ai . H
Note that (H ai ) H ! 1 when ai 1 ! 0, we obtain [iH1 ! [iH , which leads to ʌiH* 1 ʌiH* 2 ! ʌ HN . For i
i* , i * 1, it follows from Eq. (11.110) that
[iH which shows that [iH*
Hi
*
(H a1 )(H a2 )! (H ai* )
[iH 1 , which yields that ʌiH *
*
,
ʌiH* 1 .
Now, We prove (2) For i İ i* 1, it follows from Eq. (11.110) that
[iH1
(H a0 )(H a1 )! (H ai 1 )(H ai )
H i 1 [iH
H ai ! [ iH , H
thus we obtain ʌ0H ʌ1H ! ʌiH* . For i ı i* 1, it follows from Eq. (11.112) that
[iH1
(H a0 )(H a1 )! (H ai* ) *
H 2i i (H ai 1 )! (H ai 1 ) H [iH [ iH , H ai 1 *
we get ʌiH* 1 ! ʌiH* 2 ! ! ! ʌ HN . It follows from Eq. (11.112) that
[iH 1 [iH *
*
H ai . H ai 1 *
*
613
Constructive Computation in Stochastic Models with Applications
Since ai* İ ai* 1 if and only if z* ı (2i* 1) 2, ʌiH* İ ʌiH* 1 if and only if z * ı (2i* 1) 2. Hence, it is clear that ʌ Hj attains its maximum at j i* if z * (2i* 1) 2 and at j i* 1 if z * ! (2i* 1) 2. This completes the proof. Case ē The two lines do not cross in the interval [0, N ], that is, there exists no z * [0, N ] such that fV1 ( z * ) fV 2 ( z * ). In this case, note that fV 1 ( z ) and fV 2 ( z ) are both linear functions to z, we have two different classes: fV1 ( z ) ! fV 2 ( z ) and fV 1 ( z ) fV 2 ( z ). The following theorem provides some useful properties for the stationary probability vector ʌ H in Case Ĉ. Theorem 11.3 (1) If b İ d , then for any H ! 0, ʌ0H ʌ1H ! ʌ NH ,
and for 1 İ i İ N 1, H H H H * °ʌi 1 ʌi ! ʌi ʌi 1 , if i 0, ® H * H H H °¯ʌi 1 ʌi ʌi ʌi 1 , if i ! N .
At the same time, we have 0, for 0 İ i İ N 1, ® ¯1, for i N .
lim ʌiH H o0
(2) If b ! d , then for any H ! 0, ʌ0H ! ʌ1H ! ! ! ʌ HN ,
and for 1 İ i İ N 1, °ʌiH1 ʌiH ʌiH ʌiH1 , if i* 0, ® H * H H H °¯ʌi 1 ʌi ! ʌi ʌi 1 , if i ! N .
At the same time, we have lim ʌiH H o0
1, for i 0, ® ¯0, for 1 İ i İ N .
Proof We only need to prove (1), while (2) can be proved similarly. If b İ d , then for any H ! 0, fV 1 ( z ) İ fV 2 ( z ) for each z [0, N ]. Since there exists no z* [0, N ] such that fV 1 ( z* ) fV 2 ( z* ), we obtain fV1 ( z ) fV 2 ( z ) for each z [0, N ]. In this case,
ai 614
N [ fV (i ) fV (i )], 2
1
for 0 İ i İ N ,
11
Sensitivity Analysis and Evolutionary Games
which leads to that Oi (H ) H , Pi (H ) H ai for 0 İ i İ N . It follows from Eq. (11.110) that for 0 İ i İ N ,
[iH1
(H a0 )(H a1 )! (H ai 1 )(H ai )
H i 1 [iH
(H ai )
H
! [ iH ,
thus we obtain ʌ0H ʌ1H ! ʌ HN .
We consider the case with i* 0. Note that for any H ! 0, fV 1 ( z ) and fV 2 ( z ) are linear functions, and fV1 ( z ) fV 2 ( z ) for each z [0, N ], it is easy to see that ai fV 2 (i ) fV1 (i ) is increasing in i 0,1, 2,! , N . Since
[iH1 [iH
[iH
ai
H
,
it is easy to see that [iH1 [iH is increasing in i 0,1, 2,! , N . Therefore, S iH1 [iH1 [iH is increasing in i 0,1, 2,! , N . Similarly, we can analyze the case with i* ! N . The details of this proof are omitted here. Note that when Oi (H ) H , Pi (H ) H ai for 0 İ i İ N , it follows from Eq. (11.111) for 0 İ i İ N ,
ʌ Hj
1 , for j 0, ° i* i* 1 2 i* N 1 A A B B H ! H H ! H 1 * * 1 N i i 2 ° ® j H Aj H ° , for 1 İ j İ N . °1 A H ! A H i* B H i* 1 ! B H 2i* N 1 1 N i* i* 2 ¯
it is clear that lim ʌiH H o0
1, for i 0, ® ¯0, for 1 İ i İ N .
This completes the proof. Case Ĕ The two lines are identical in the interval [0, N ], that is, the two functions fV1 ( z ) and fV 2 ( z ) are equal for all z 0,1, 2,! , N . The following theorem provides some useful properties for the stationary probability vector ʌ H in Case ĉ. 615
Constructive Computation in Stochastic Models with Applications
Theorem 11.4
For any H ! 0 and each i ʌiH
0,1, 2,! , N ,
1 N
and lim ʌiH H o0
1 . N
Proof Note that the two functions fV 1 ( z ) and fV 2 ( z ) are equal for all
z 0,1, 2,!, N , thus it is seen that Oi (H ) H , Pi (H ) H for 0 İ i İ N . It follows from Eq. (11.106) that for each i 0,1, 2,! , N ,
[iH
1,
which, together with Eq. (11.107), leads to that for any H ! 0, ʌiH
1 . N
Therefore, lim ʌiH H o0
1 . N
This completes the proof. For the 2 u 2 symmetric evolutionary game under the above three cases, we can provide further analysis from the payoff matrix given in Eq. (11.100). We summarize the following three classes. Class 1 The class of a strictly dominant strategy: (1) If a ! c and b ! d , then strategy V 1 strictly dominates strategy V 2 . (2) If a c and b d , then strategy V 2 strictly dominates strategy V 1 . Class 2 The coordination class (when a ! c and b d ) with its two pure Nash equilibria (a, a) and (d , d ) and one mixed-strategy equilibrium p* (d b) [(a c) (d b)]. Class 3 When a c and b ! d , there is no pure-strategy symmetric Nash equilibrium. The game has a unique equilibrium in mixed-strategy equilibrium p* (d b) [(a c) (d b)]. It is easy to see the risk dominance of strategy V 1 (resp. strategy V 2 ), that is, a c ! d b (resp. a c d b ), is equivalent to z* N 2 (resp. z * ! N 2). The following definition provides a slightly more risk restrictive concept. Definition 11.1 Strategy V 1 is called strictly more dominant than strategy V 2 if z * ( N 1) 2. On the other hand, Strategy V 2 is called strictly more risk dominant than strategy V 1 if z * ! ( N 1) 2. 616
11
Sensitivity Analysis and Evolutionary Games
Theorem 11.5 (1) In class 1, the stationary distribution ʌ H puts probability 1 on the dominant strategy. (2) In class 2, the stationary distribution ʌ H puts probability 1 on the pure Nash equilibrium that is strictly more risk dominant. (3) In class 3, the expected value of lim ʌ H is equal to p* , that is, H o0
N
ʌH ¦ k lim H k 0
o0
k
(d b) [(a c) (d b)].
Proof (1) In class 1, when a ! c and b ! d , it follows from Eq. (11.101) and Eq. (11.102) that for each z [0, N ],
fV 1 ( z ) fV 2 ( z )
z Nz (a c) (b d ) ! 0. N N
Hence, using (2) in Theorem 11.3 we can obtain the desired result. (2) In class 2, when b d , it follows from Eq. (11.108) and Eq. (11.109) that fV1 (0) fV 2 (0), and z*
( d b) N. ( a c ) ( d b)
Note that a ! c, we obtain ( d b) 1 (a c ) ( d b) which leads to z * [0, N ]. As seen above, the inequality z* N 2 is equivalent to saying that the pure Nash equilibrium (a, a ) is risk dominant. But, in this case it might happen that i* ( N 1) 2, which does not imply by Theorem 11.1 that the stationary distribution ʌ H puts probability 1 on state N. On the contrary, if we assume that strategy V 1 is strictly more dominant than strategy V 2 , then the inequality i* ( N 1) 2 follows, which implies the desired result. Likewise, we can analyze the case with strategy V 2 which is strictly more dominant than strategy V 1 . (3) When a c and b ! d , we can obatin (i* 1) z * z * i*
D , 1D
which implies the desired result. This completes the proof. It is necessary to consider the following two borderline cases: (1) When a c and b d , the two functions fV 1 ( z ) and fV 2 ( z ) are equal for all z 0,1, 2,! , N , 617
Constructive Computation in Stochastic Models with Applications
the two strategies are indistinguishable, thus Theorem 11.4 indicates that the stationary distribution ʌ H is uniform on the state space S {0,1, 2,!, N }. (2) When a c b d , then z* 1 2, using Theorem 11.1 we obtain that the stationary distribution ʌ H puts probability 1 2 on state 0 and probability 1 2 on state N.
11.7
Constructively Perturbed Birth Death Process
In this section, we further construct the perturbed birth death process to study the ESS of the 2 u 2 symmetric evolutionary game.
11.7.1 An Embedded Chain For the 2 u 2 symmetric evolutionary game, based on Eq. (11.100) we write
z 1 Nz a b N 1 N 1
(11.113)
z N 1 z c d. N 1 N 1
(11.114)
fV 1 ( z ) and
fV 2 ( z )
It is easy to see from Eq. (11.113) that at least one player can play strategy V 1 , and from Eq. (11.114) that at least one player can play strategy V 2 . Under the two conditions, the best-response decision rule can be defined as follows: (1) when playing strategy V 1 , switch to strategy V 2 if fV 1 ( z ) fV 2 ( z 1); and (2) when playing strategy V 2 , switch to strategy V 1 if fV 1 ( z 1) ! fV 2 ( z ). From fV1 ( z ) fV 2 ( z 1), it follows from Eq. (11.113) and Eq. (11.114) that ( N 1)D * 1,
z nV1
while using fV 1 ( z 1) ! fV 2 ( z ) we have
z ! nV 2
( N 1)D * ,
where
D*
d b . acd b
Therefore, we obtain that (1) when playing strategy V 1 , switch to strategy V 2 if z nV 1 ; and (2) when playing strategy V 2 , switch to strategy V 1 if z ! nV 2 . 618
11
Sensitivity Analysis and Evolutionary Games
It is clear that nV1 nV 2 1. Hence the two thresholds nV1 and nV 2 are closely linked to the mixed equilibrium strategy D * with nV 2 N İ D * İ nV1 N . In what follows, we shall consider two different cases based on whether or nor nV1 (or nV 2 ) is an integer. Case Ē nV 2 (or nV1 nV 2 1) is not an integer. In this case, we denote by n the unique integer between nV1 and nV 2 . Obviously, n* can be taken to be an integer approximation to the mixed equilibrium strategy D * . Therefore, when playing strategy V 1 , switch to strategy V 2 if z İ n* ; and when playing strategy V 2 , switch to strategy V 1 if z ! n* . Now, we consider the dynamics. In each period, one player is sampled at uniformly random from the population and receives the opportunity with probabilities z N (for V 1 ) and ( N z ) N (for V 2 ) to revise his strategy. With probability 1 H , the selected player takes the action prescribed by the myopic best reply. With probability H for 0 H 1 2, he mutates and takes the opposite action. Let pi , j (H ) be the probability of transition from a state i to another state j. Clearly, if pi , j (H ) 0 whenever _ i j _! 1, then the Markov chain P(H ) ( pi , j (H )) is a birth death process. We take n * °°(1 H ) N , 1 İ n İ n , pn ,n 1 (H ) ® °H n , n* 1 İ n İ N , °¯ N N n * °°(1 H ) N , 0 İ n İ n 1, pn, n 1 (H ) ® °H N n , n* İ n İ N 1, °¯ N and pn, n (H ) 1 pn , n 1 (H ) pn , n 1 (H ),
0 İ n İ N.
Let ʌ (H ) (ʌ0 (H ), ʌ1 (H ), ʌ 2 (H ),! , ʌ N 1 (H ), ʌ N (H )) be the stationary probability vector of the perturbed birth death process P(H ). Then ʌ (H ) is given by
ʌn (H )
n 0, C , ° °C E n § N · , 1 İ n İ n* , ° ¨ ¸ n ® © ¹ ° °C E 2 n* n § N · , n* 1 İ n İ N , ¨ ¸ °¯ ©n¹ 619
Constructive Computation in Stochastic Models with Applications
where
H
E
1 H
and 1
*
C Theorem 11.6
n N ª º n§N· 2 n* n § N · «1 ¦ E ¨ ¸ ¦ E ¨ ¸» . © n ¹ n n* 1 © n ¹¼ ¬ n1
(1) If 2n* ! N , then 1, n 0, ® ¯0, 1 İ n İ N .
lim ʌn (H ) H o0
(2) If 2n *
N , then
lim ʌ n (H ) H o0
1 ° 2 , n 0, ° ®1 , n N, °2 ° ¯0, 1 İ n İ N 1.
(3) If 2n* N , then lim ʌn (H ) H o0
1, n 2n* , ® ¯0, others.
Case ē nV 2 (or nV1 nV 2 1) is an integer. In this case, the payoff ties occur at state nV1 and nV 2 . We denote by K a fixed probability of switching from the current strategy if a tie occurs, reflecting, e.g., un-modeled switching cost. It is easy to see that when playing strategy V 1 , player 1 switches to V 2 if 0 İ z İ nV1 1, does not change if z ı nV 1 1, and switches randomly if z nV 1 . On the other hand, when playing strategy V 2 , player 1 switches to V 1 if z ı nV 2 1, does not change if 0 İ z İ nV 2 1, and switches randomly if z nV 2 . Based on the switching probability, the birth death process P(H ) ( pi , j (H )) is described as
pn, n 1 (H )
620
n 1 İ n İ nV 2 , °(1 H ) N , ° n ° ®[(1 H )K (1 K )H ] , n nV 1 , N ° ° n nV1 1 İ n İ N , °H N , ¯
11
pn, n 1 (H )
Sensitivity Analysis and Evolutionary Games
N n 0 İ n İ nV 2 1, °(1 H ) N , ° N n ° , n nV 2 , ®[(1 H )K (1 K )H ] N ° ° N n nV1 İ n İ N 1, °¯H N ,
and pn, n (H ) 1 pn , n 1 (H ) pn , n 1 (H ),
0 İ n İ N.
Thus, the stationary probability ʌ (H ) is given by C , ° N °C E n §¨ ·¸ , ° ©n¹ ° ® nV 2 § N · °C E ¨ n ¸ , © ¹ ° ° 2 nV1 n 1 § N · ¨ n ¸, °C E © ¹ ¯
ʌ n (H )
n
0,
1 İ n İ nV 2 , n
nV1 ,
nV1 1 İ n İ N ,
where 1
C
§ · N ¨ ¸ ª nV 2 n § N · º 2 nV n 1 § N · nV 1 ¨ N ¸ «1 ¦ E ¨ ¸ E 1 ¨¨¨ ¸¸¸ ¦ E 1 » . ¨ ¸ ¨ nV ¸ © n ¹ ¼» n nV1 1 1 ¹ © ¬« n 1 © n ¹
Theorem 11.7
(1) If 2nV 1 ! N 1, then lim ʌn (H ) H o0
(2) If 2nV1
1, n 0, ® ¯0, 1 İ n İ N .
N 1, then
lim ʌ n (H ) H o0
1 ° 2 , n 0, ° ®1 , n N, °2 ° ¯0, 1 İ n İ N 1.
(3) If 2nV1 N 1, then lim ʌn (H ) H o0
°1, n 2nV1 1, ® °¯0, others. 621
Constructive Computation in Stochastic Models with Applications
11.7.2
A Probabilistic Construction
For the 2 u 2 symmetric evolutionary game, based on Eq. (11.100) we write fi
(i 1)a ( N i )b N 1
(11.115)
gi
ic ( N 1 i )d . N 1
(11.116)
and
At every step, one individual is chosen to reproduce. The probability that an individual using strategy V 1 is chosen is given by if i . if i ( N i) gi We assume that with probability H ! 0, a V 1-offspring is mutant which plays strategy V 2 instead of strategy V 1 , and with probability NH ! 0, a V 2-offspring is mutant which plays strategy V 1 . After reproduction, the offspring replaces a randomly chosen member of the population, so that the population size is constant. The process that describes the number of individuals playing strategy V 1 is a birth death process whose transition probability matrix P(H ) ( pi , j (H )) with pi j (H ) 0 for _ i j _! 1 is given by p0,1 (H ) 1 p0,0 (H ) NH , pN , N 1 (H ) 1 pN , N (H ) H ,
and for 1 İ i İ N 1, pi ,i 1 (H ) pi ,i 1 (H )
if i (1 H ) ( N i ) giNH N i , if i ( N i ) gi N if i H ( N i ) gi (1 NH ) i if i ( N i ) g i N
and pi ,i (H ) 1 pi ,i 1 (H ) pi ,i 1 (H ).
Because of the presence of mutations, the birth death process is irreducible and positive recurrent, and thus the stationary probability vector ʌ H (ʌ0H , ʌ1H , ʌ 2H ,! ,
ʌ HN 1 , ʌ HN ). Let ʌ 622
lim ʌ H with ʌi* H o0
lim ʌiH for 0 İ i İ N . H o0
11
Pˆ
Sensitivity Analysis and Evolutionary Games
We take a new birth death process with the transition probability matrix ( pˆ i , j ). We write pˆ 0,0
pˆ N , N
1,
and for 1 İ i İ N 1, pˆ i ,i 1
if i N i , if i ( N i ) gi N ( N i ) gi i ifi ( N i ) gi N
pˆ i ,i 1 and
pˆ i ,i 1 pˆ i ,i 1 pˆ i ,i 1.
It is clear that pˆ i , j
lim pi , j (H ) for 0 İ i, j İ N , which leads to Pˆ H o0
lim P(H ). H o0
Thus, ʌ is the stationary probability vector of the birth death process Pˆ . Let *
J
N 1
fi
i 1
i
g
.
(11.117)
For very small mutation rates, the moran process spends nearly all the time at one of the two states 0 and N. It is easy to see that ʌ0* and ʌ *N are the limits of the fractions of time that the moran process spends at the states “all V 1 ” and “all V 2 ”, respectively. Therefore, we obtain ʌ0*
NJ 1 NJ
(11.118)
and ʌ *N
1 . 1 NJ
(11.119)
The following lemma provides a useful bound for ʌ0* and ʌ *N . The proof is clear and is omitted here. Lemma 11.2 (1) ʌ0* ! 1 2 (or ʌ *N 1 2) if and only if J ! 1N , (2) ʌ0* 1 2 (or ʌ *N ! 1 2) if and only if J 1N , and (3) ʌ0* 1 2 (or ʌ *N 1 2) if and only if J 1N . Now, we construct a two-state Markov chain whose stationary probability vector is ʌ0* , ʌ *N . Such a constructed Markov chain can be used to study the underlying game with more than two pure strategies. For the birth death process Pˆ , the states 0 and N are absorbing and the other states are all transient. Let U1,2 623
Constructive Computation in Stochastic Models with Applications
be the probability of absorption at state 0 , if the birth death process is initially at state N 1. Thus, U1,2 is the probability that a single individual that plays strategy V 2 takes over a population where everyone else plays strategy V 1 . Define U 2,1 analogously. It is easy to check that N 1
gj
f j 1
U1,2
j
N 1
i
i 1
j 1
1 ¦
gj fj
and 1
U 2,1
N 1
i
i 1
j 1
1 ¦
gj
.
fj
It is clear that J U 2,1 U1,2 . We take a two-state Markov chain whose transition probability matrix is given by P
§1 NU 2,1 NU2,1 · ¨ ¸, 1 U1,2 ¹ © U1,2
Simple computation shows that two-state Markov chain P has stationary probability vector ʌ0* , ʌ *N . In what follows, we discuss how the limiting distribution ʌ0* , ʌ *N depends on the size N of the population and the payoff matrix. In this situation, we rewrite ʌ * as ʌ * ( N ) ʌ0* ( N ), ʌ1* ( N ), ʌ 2* ( N ),! , ʌ *N ( N ) in order to indicate the dependence on the number N. Definition 11.2 Strategy V is called to be favored by the moran process if * ʌi ( N ) ! 1 2, and Strategy V is called to be selected by the moran process if lim ʌi* ( N ) 1.
N of
To determine favored and selected strategies, we rewrite Eq. (11.117) as N 1
J (N )
[ib ( N 1 i)a] i 1 N 1
.
[ic ( N 1 i)d ] i 1
Following Lemma 11.2, the following theorem indicates how the favored and selected strategies depend on the payoff matrix. The proof is easy and is omitted here. 624
11
Sensitivity Analysis and Evolutionary Games
Theorem 11.8 Suppose N 1. (1) If b ! c and a ! d , then ʌ0* ( N ) ! 1 2 for each N. (2) If b ! c and a d , then ʌ0* ( N ) ! 1 2 may depend on the population size N. A sufficient condition for ʌ0* ( N ) ! 1 2 is given by
N 2
bc . d a
Recall that in a 2 u 2 game, a strategy is risk dominant if it is the unique best response to the distribution (1 2,1 2) for the two strategies V 1 and V 2 . Thus, strategy V 1 is risk dominant if a b ! c d ; while strategy V 2 is risk dominant if a b c d . Strategy V 1 is Pareto-dominant if a ! d ; while strategy V 2 is Pareto-dominant if a d . The following theorem provides sufficient conditions for the favored and selected strategies. Theorem 11.9 (1) If b ! c and a ! d , then lim ʌ0* ( N ) 1 . N of
(2) If b ! d ! a ! c, then lim ʌ0* ( N ) 1. N of
(3) If d ! b ! c ! a, then lim ʌ0* ( N ) N of
0.
(4) If d ! b ! a ! c or d ! a ! b ! c, then there are two pure-strategy Nash equlibibria, and lim ʌ0* ( N ) is either 1 or 0 as N of
³
less than
1 0
1
³ ln > b (a b) x@ dx is greater or 0
ln[d (c d ) x]dx. The risk dominant equilibrium need not be
selected, even if it is also Pareto-dominant. (5) If b ! c ! d ! a or b ! d ! c ! a, then there are two pure-strategy Nash equlibibria, and lim ʌ0* ( N ) is either 1 or 0 as N of
less than
³
1 0
1
³ ln[b (a b) x]dx 0
is greater or
ln[ d (c d ) x]dx.
Proof (1) The ratio of each pair of corresponding factors in $\eta(N)$ is bounded away from 1, hence $\lim_{N\to\infty} \eta(N) = \infty$, which leads to $\lim_{N\to\infty} \pi_0^*(N) = 1$.
(2) If $b > d > a > c$, then it is easy to check that
$$\eta(N) = \frac{[(N-1)b]\,[(N-2)b+a]\cdots[2b+(N-3)a]\,[b+(N-2)a]}{[c+(N-2)d]\,[2c+(N-3)d]\cdots[(N-2)c+d]\,[(N-1)c]} > 1,$$
which leads to $\lim_{N\to\infty} \pi_0^*(N) = 1$.
(3) If $d > b > c > a$, then when $N > 2 + \max\{(b-c)/(d-b),\ (b-c)/(c-a)\}$, it is easy to see that $\eta(N) < 1$, which leads to $\lim_{N\to\infty} \pi_0^*(N) = 0$.
Constructive Computation in Stochastic Models with Applications
(4) and (5) To prove the two results, we need to approximate $\eta(N)$ as $N \to \infty$. Note that
$$\eta(N) = \frac{d}{a} \prod_{j=0}^{N-1} \frac{aj + b(N-1-j)}{cj + d(N-1-j)},$$
$$d \prod_{j=0}^{N-1} \left[aj + b(N-1-j)\right] \approx d \exp\left\{ N \int_0^1 \ln[a + (b-a)x]\,dx + O(1) \right\}$$
and
$$a \prod_{j=0}^{N-1} \left[cj + d(N-1-j)\right] \approx a \exp\left\{ N \int_0^1 \ln[c + (d-c)x]\,dx + O(1) \right\}.$$
Therefore, $\lim_{N\to\infty} \eta(N)$ is equal to 0 or $\infty$ according to the comparison of the two integrals above. Based on this, (4) and (5) can be proved. This completes the proof.
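The comparison in (4) and (5) is easy to carry out in closed form, since $\int_0^1 \ln[p+(q-p)x]\,dx = (q\ln q - p\ln p)/(q-p) - 1$. A sketch with illustrative payoffs satisfying $d > b > a > c$ (case (4)):

```python
import math

def log_payoff_integral(p, q):
    """Closed form of the integral of ln[p + (q - p)x] over [0, 1],
    for p, q > 0 with p != q."""
    return (q * math.log(q) - p * math.log(p)) / (q - p) - 1.0

a, b, c, d = 2.0, 3.0, 1.0, 4.0          # d > b > a > c
I1 = log_payoff_integral(b, a)           # integral of ln[b + (a - b)x]
I2 = log_payoff_integral(d, c)           # integral of ln[d + (c - d)x]
print(I1 > I2)   # True: eta(N) -> infinity, so pi_0*(N) -> 1
```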
11.8 Asymmetric Evolutionary Games

In this section, we apply the perturbed QBD processes to study the ESS of asymmetric evolutionary games under a unified algorithmic framework. For a $2\times 2$ asymmetric evolutionary game, we discuss three cases: independent structure, dependent structure, and information interaction, and provide a numerical solution for the ESS in each case by means of the RG-factorizations. Further, we briefly discuss the ESS of a $3\times 2$ asymmetric evolutionary game.
11.8.1 A 2×2 Game with Independent Structure

Consider a $2\times 2$ asymmetric evolutionary game with independent structure, where each player has two strategies $\sigma_1$ and $\sigma_2$, the populations of players of types 1 and 2 are $M$ and $N$, and their payoff matrices are given by
$$A_1 = \begin{pmatrix} a_1 & b_1 \\ c_1 & d_1 \end{pmatrix}, \qquad A_2 = \begin{pmatrix} a_2 & b_2 \\ c_2 & d_2 \end{pmatrix},$$
respectively. Based on the populations and the payoff matrices, we can set up the transition rates of players from strategy $\sigma_i$ to $\sigma_j$ in an averaged setting. Thus, for the first type of players we take
$$f_{\sigma_1}^1(m) = \frac{a_1 m + b_1 (M-m)}{M}$$
and
$$f_{\sigma_2}^1(m) = \frac{c_1 m + d_1 (M-m)}{M}.$$
Similarly, for the second type of players we write
$$f_{\sigma_1}^2(n) = \frac{a_2 n + b_2 (N-n)}{N}$$
and
$$f_{\sigma_2}^2(n) = \frac{c_2 n + d_2 (N-n)}{N}.$$
Further, for the first type of players, the transition rate for switching from strategy $\sigma_2$ to $\sigma_1$ is defined as
$$\lambda_\varepsilon^1(i) = \varepsilon + \kappa_1 \max\{f_{\sigma_1}^1(i) - f_{\sigma_2}^1(i), 0\}, \qquad i = 0, 1, \ldots, M-1,$$
and the transition rate for switching from $\sigma_1$ to $\sigma_2$ is defined as
$$\mu_\varepsilon^1(i) = \varepsilon + \kappa_1 \max\{f_{\sigma_2}^1(i) - f_{\sigma_1}^1(i), 0\}, \qquad i = 1, 2, \ldots, M.$$
Similarly, for the second type of players, the transition rate for switching from $\sigma_2$ to $\sigma_1$ is defined as
$$\lambda_\varepsilon^2(j) = \varepsilon + \kappa_2 \max\{f_{\sigma_1}^2(j) - f_{\sigma_2}^2(j), 0\}, \qquad j = 0, 1, \ldots, N-1,$$
and the transition rate for switching from $\sigma_1$ to $\sigma_2$ is defined as
$$\mu_\varepsilon^2(j) = \varepsilon + \kappa_2 \max\{f_{\sigma_2}^2(j) - f_{\sigma_1}^2(j), 0\}, \qquad j = 1, 2, \ldots, N.$$
Note that $\varepsilon$ and $\kappa_i$ are called the exogenous mutation rate and the learning ability rate, respectively. Let $z(t) = (z_1(t), z_2(t))$, where $z_i(t)$ is the number of players of type $i$ who are playing strategy $\sigma_1$ at time $t$, for $i = 1, 2$. Note that each player switches from one strategy to another according to bounded rationality with myopia, inertia and mutation with a small probability $\varepsilon > 0$. Based on this, we can model the game as a continuous-time level-dependent QBD process $\{z(t), t \geq 0\}$ on the state space
$$S = \{(0,0), (0,1), \ldots, (0,N); (1,0), (1,1), \ldots, (1,N); \ldots; (M,0), (M,1), \ldots, (M,N)\},$$
where the first type of players is regarded as the "level" and the second type of players as the "phase". Therefore, the infinitesimal generator of this QBD process is given by
$$Q(\varepsilon) = \begin{pmatrix}
A_1^0(\varepsilon) & A_0^0(\varepsilon) & & \\
A_2^1(\varepsilon) & A_1^1(\varepsilon) & A_0^1(\varepsilon) & \\
& \ddots & \ddots & \ddots \\
& & A_2^M(\varepsilon) & A_1^M(\varepsilon)
\end{pmatrix},$$
where for $0 \leq k \leq M$,
$$A_1^k(\varepsilon) = \begin{pmatrix}
-a_\varepsilon(k,0) & \lambda_\varepsilon^2(0) & & \\
\mu_\varepsilon^2(1) & -a_\varepsilon(k,1) & \lambda_\varepsilon^2(1) & \\
& \ddots & \ddots & \ddots \\
& & \mu_\varepsilon^2(N) & -a_\varepsilon(k,N)
\end{pmatrix}$$
with, for $k = 0$,
$$a_\varepsilon(0,l) = \begin{cases}
\lambda_\varepsilon^1(0) + \lambda_\varepsilon^2(0), & l = 0, \\
\lambda_\varepsilon^1(0) + \lambda_\varepsilon^2(l) + \mu_\varepsilon^2(l), & 1 \leq l \leq N-1, \\
\lambda_\varepsilon^1(0) + \mu_\varepsilon^2(N), & l = N,
\end{cases}$$
for $k = M$,
$$a_\varepsilon(M,l) = \begin{cases}
\mu_\varepsilon^1(M) + \lambda_\varepsilon^2(0), & l = 0, \\
\mu_\varepsilon^1(M) + \lambda_\varepsilon^2(l) + \mu_\varepsilon^2(l), & 1 \leq l \leq N-1, \\
\mu_\varepsilon^1(M) + \mu_\varepsilon^2(N), & l = N,
\end{cases}$$
and for $1 \leq k \leq M-1$,
$$a_\varepsilon(k,l) = \begin{cases}
\lambda_\varepsilon^1(k) + \mu_\varepsilon^1(k) + \lambda_\varepsilon^2(0), & l = 0, \\
\lambda_\varepsilon^1(k) + \mu_\varepsilon^1(k) + \lambda_\varepsilon^2(l) + \mu_\varepsilon^2(l), & 1 \leq l \leq N-1, \\
\lambda_\varepsilon^1(k) + \mu_\varepsilon^1(k) + \mu_\varepsilon^2(N), & l = N.
\end{cases}$$
At the same time, for $0 \leq m \leq M-1$,
$$A_0^m(\varepsilon) = \lambda_\varepsilon^1(m)\,\mathrm{diag}(1, 1, \ldots, 1),$$
and for $1 \leq n \leq M$,
$$A_2^n(\varepsilon) = \mu_\varepsilon^1(n)\,\mathrm{diag}(1, 1, \ldots, 1).$$
For the QBD process, we write
$$U_M(\varepsilon) = A_1^M(\varepsilon),$$
$$U_k(\varepsilon) = A_1^k(\varepsilon) + A_0^k(\varepsilon)\left[-U_{k+1}(\varepsilon)\right]^{-1} A_2^{k+1}(\varepsilon), \qquad 0 \leq k \leq M-1,$$
and
$$R_k(\varepsilon) = A_0^k(\varepsilon)\left[-U_{k+1}(\varepsilon)\right]^{-1}, \qquad 0 \leq k \leq M-1.$$
Let
$$\pi(\varepsilon) = (\pi_0(\varepsilon), \pi_1(\varepsilon), \ldots, \pi_{M-1}(\varepsilon), \pi_M(\varepsilon))$$
with
$$\pi_k(\varepsilon) = (\pi_k^0(\varepsilon), \pi_k^1(\varepsilon), \ldots, \pi_k^{N-1}(\varepsilon), \pi_k^N(\varepsilon))$$
be the stationary probability vector of the QBD process $Q(\varepsilon)$. Then
$$\pi_k(\varepsilon) = \tau(\varepsilon)\, v_0(\varepsilon)\, R_0(\varepsilon) R_1(\varepsilon) \cdots R_{k-1}(\varepsilon), \qquad 0 \leq k \leq M,$$
where $v_0(\varepsilon)$ is the stationary probability vector of the Markov chain censored to level 0, whose generator is $U_0(\varepsilon)$, and the constant $\tau(\varepsilon)$ is given by
$$\tau(\varepsilon) = \frac{1}{1 + v_0(\varepsilon) \sum_{k=1}^{M} R_0(\varepsilon) R_1(\varepsilon) \cdots R_{k-1}(\varepsilon)\, e}.$$
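In the scalar case (a single phase), the QBD above degenerates to a level-dependent birth-death chain, and the product formula $\pi_k = \tau v_0 R_0 \cdots R_{k-1}$ must reproduce the classical detailed-balance solution $\pi_k \propto \prod_{i<k} \lambda_i/\mu_{i+1}$. The sketch below runs the $U_k/R_k$ recursion with $1\times 1$ blocks (matrix inverses become scalar reciprocals), a useful correctness check before implementing the full matrix version:

```python
def bd_stationary_qbd(lam, mu):
    """Stationary distribution of a birth-death chain on levels 0..M,
    computed via the QBD censoring recursion with 1x1 blocks:
        U_M = A1_M,
        U_k = A1_k + A0_k (-U_{k+1})^{-1} A2_{k+1},
        R_k = A0_k (-U_{k+1})^{-1},
        pi_k = tau * v0 * R_0 ... R_{k-1}.
    lam[k] is the up-rate at level k (lam[M] unused) and mu[k] the
    down-rate at level k (mu[0] unused)."""
    M = len(lam) - 1
    A1 = [-(lam[k] * (k < M) + mu[k] * (k > 0)) for k in range(M + 1)]
    U = [0.0] * (M + 1)
    U[M] = A1[M]
    R = [0.0] * M
    for k in range(M - 1, -1, -1):
        inv = 1.0 / -U[k + 1]
        U[k] = A1[k] + lam[k] * inv * mu[k + 1]
        R[k] = lam[k] * inv
    v0 = 1.0                     # one phase: the censored chain is trivial
    prods = [v0]
    for k in range(M):
        prods.append(prods[-1] * R[k])
    tau = 1.0 / sum(prods)
    return [tau * p for p in prods]

pi = bd_stationary_qbd(lam=[2.0, 1.0, 3.0, 0.0], mu=[0.0, 1.0, 2.0, 4.0])
# Detailed balance gives weights 1, 2, 1, 0.75 (normalized by 4.75).
print(pi)
```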
To compute the ESS of this game numerically, we need to study the limiting distribution
$$\pi^* = \lim_{\varepsilon \to 0} \pi(\varepsilon),$$
where
$$\tau^* = \lim_{\varepsilon \to 0} \tau(\varepsilon), \qquad v_0^* = \lim_{\varepsilon \to 0} v_0(\varepsilon), \qquad R_k^* = \lim_{\varepsilon \to 0} R_k(\varepsilon).$$
In what follows we provide two numerical examples to indicate the structure of the two probabilities $p_i(\varepsilon)$ and $q_j(\varepsilon)$ with respect to the two learning ability rates $\kappa_1$ and $\kappa_2$ and the exogenous mutation rate $\varepsilon$. To simplify the description, we write
$$p_i(\varepsilon) = P\{z_1(\infty) = i\} = \sum_{j=0}^{N} \pi_i^j(\varepsilon),$$
$$q_j(\varepsilon) = P\{z_2(\infty) = j\} = \sum_{i=0}^{M} \pi_i^j(\varepsilon).$$
Example 11.1 (The learning ability rate) In the $2\times 2$ asymmetric evolutionary game, the parameters are given by
$$A_1 = \begin{pmatrix} 20 & 0 \\ 2 & 8 \end{pmatrix}, \qquad A_2 = \begin{pmatrix} 40 & 1 \\ 0 & 10 \end{pmatrix},$$
$$M = 6, \qquad N = 10, \qquad \varepsilon = 0.5, \qquad \kappa_1 = \kappa_2 = \kappa = 0.1, 0.2, 0.4, 0.6, 0.8, 1.0.$$
Table 11.1 indicates how the two probabilities $p_i(\varepsilon)$ and $q_j(\varepsilon)$ depend on the learning ability rate $\kappa$.

Table 11.1 The role of the learning ability rate

For the first type of players:

  κ      z1=0     z1=1     z1=2     z1=3     z1=4     z1=5     z1=6
  0.1    0.0457   0.0264   0.0264   0.0299   0.0598   0.1715   0.6402
  0.2    0.0171   0.0070   0.0070   0.0088   0.0264   0.1251   0.8087
  0.4    0.0046   0.0012   0.0012   0.0018   0.0090   0.0760   0.9063
  0.6    0.0019   0.0004   0.0004   0.0006   0.0044   0.0539   0.9384
  0.8    0.0010   0.0001   0.0001   0.0003   0.0026   0.0417   0.9541
  1.0    0.0006   0.0001   0.0001   0.0002   0.0017   0.0340   0.9634

For the second type of players:

  κ      z2=0     z2=1     z2=2     z2=3     z2=4     z2=5     z2=6     z2=7     z2=8     z2=9     z2=10
  0.1    0.0000   0.0000   0.0000   0.0000   0.0000   0.0001   0.0005   0.0025   0.0155   0.1088   0.8725
  0.2    0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0004   0.0047   0.0620   0.9328
  0.4    0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0001   0.0013   0.0332   0.9654
  0.6    0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0006   0.0227   0.9767
  0.8    0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0003   0.0172   0.9825
  1.0    0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0002   0.0138   0.9859
Example 11.2 (The exogenous mutation rate) In the $2\times 2$ asymmetric evolutionary game, the parameters are given by
$$A_1 = \begin{pmatrix} 20 & 0 \\ 2 & 8 \end{pmatrix}, \qquad A_2 = \begin{pmatrix} 40 & 1 \\ 0 & 10 \end{pmatrix},$$
$$M = 6, \qquad N = 10, \qquad \kappa_1 = \kappa_2 = \kappa = 0.6, \qquad \varepsilon = 0.2, 0.6, 1.0, 1.5, 2.0, 4.0.$$
Table 11.2 indicates how the two probabilities $p_i(\varepsilon)$ and $q_j(\varepsilon)$ depend on the exogenous mutation rate $\varepsilon$.

Table 11.2 The role of the exogenous mutation rate

For the first type of players:

  ε      z1=0     z1=1     z1=2     z1=3     z1=4     z1=5     z1=6
  0.2    0.0002   0.0000   0.0000   0.0001   0.0008   0.0232   0.9757
  0.6    0.0029   0.0006   0.0006   0.0010   0.0061   0.0631   0.9257
  1.0    0.0082   0.0026   0.0026   0.0036   0.0144   0.0950   0.8737
  1.5    0.0171   0.0070   0.0070   0.0088   0.0264   0.1251   0.8087
  2.0    0.0270   0.0128   0.0128   0.0154   0.0385   0.1465   0.7469
  4.0    0.0613   0.0395   0.0395   0.0435   0.0761   0.1827   0.5573

For the second type of players:

  ε      z2=0     z2=1     z2=2     z2=3     z2=4     z2=5     z2=6     z2=7     z2=8     z2=9     z2=10
  0.2    0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0001   0.0093   0.9906
  0.6    0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0009   0.0269   0.9722
  1.0    0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0001   0.0023   0.0433   0.9543
  1.5    0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0004   0.0047   0.0620   0.9328
  2.0    0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0001   0.0009   0.0079   0.0791   0.9120
  4.0    0.0001   0.0001   0.0001   0.0001   0.0001   0.0004   0.0012   0.0050   0.0241   0.1333   0.8354
11.8.2 A 2×2 Game with Dependent Structure

Consider a $2\times 2$ asymmetric evolutionary game with dependent structure, where each player has two strategies $\sigma_1$ and $\sigma_2$, the populations of players of types 1 and 2 are $M$ and $N$, and their payoff matrices are given by
$$A_1 = \begin{pmatrix} a_1 & b_1 \\ c_1 & d_1 \end{pmatrix}, \qquad A_2 = \begin{pmatrix} a_2 & b_2 \\ c_2 & d_2 \end{pmatrix},$$
respectively. We assume that the dependent structure is expressed through four factors $\alpha_1^\lambda(i,j)$, $\alpha_1^\mu(i,j)$, $\alpha_2^\lambda(i,j)$ and $\alpha_2^\mu(i,j)$ for $0 \leq i \leq M$ and $0 \leq j \leq N$. Note that when there are $i$ players of type 1 playing strategy $\sigma_1$ and $j$ players of type 2 playing strategy $\sigma_1$, $\alpha_1^\lambda(i,j)$ and $\alpha_1^\mu(i,j)$ are the two dependence factors for the players of type 1 switching from strategy $\sigma_2$ to $\sigma_1$ and from $\sigma_1$ to $\sigma_2$, respectively; $\alpha_2^\lambda(i,j)$ and $\alpha_2^\mu(i,j)$ are explained similarly for the players of type 2. Let
$$f_{\sigma_1}^1(m) = \frac{a_1 m + b_1(M-m)}{M}, \qquad f_{\sigma_2}^1(m) = \frac{c_1 m + d_1(M-m)}{M};$$
$$f_{\sigma_1}^2(n) = \frac{a_2 n + b_2(N-n)}{N}, \qquad f_{\sigma_2}^2(n) = \frac{c_2 n + d_2(N-n)}{N}.$$
Applying the dependence factors, we can further define the transition rates for switching from strategy $\sigma_2$ to $\sigma_1$ and from $\sigma_1$ to $\sigma_2$ as follows. For the first type of players, the transition rate for switching from $\sigma_2$ to $\sigma_1$ is defined, for $i = 0, 1, \ldots, M-1$, as
$$a_\varepsilon^1(i,j) = \alpha_1^\lambda(i,j)\left[\varepsilon + \kappa_1 \max\{f_{\sigma_1}^1(i) - f_{\sigma_2}^1(i), 0\}\right] = \alpha_1^\lambda(i,j)\,\lambda_\varepsilon^1(i),$$
and the transition rate for switching from $\sigma_1$ to $\sigma_2$ is defined, for $i = 1, 2, \ldots, M$, as
$$b_\varepsilon^1(i,j) = \alpha_1^\mu(i,j)\left[\varepsilon + \kappa_1 \max\{f_{\sigma_2}^1(i) - f_{\sigma_1}^1(i), 0\}\right] = \alpha_1^\mu(i,j)\,\mu_\varepsilon^1(i).$$
Similarly, for the second type of players, the transition rate for switching from $\sigma_2$ to $\sigma_1$ is defined, for $j = 0, 1, \ldots, N-1$, as
$$a_\varepsilon^2(i,j) = \alpha_2^\lambda(i,j)\left[\varepsilon + \kappa_2 \max\{f_{\sigma_1}^2(j) - f_{\sigma_2}^2(j), 0\}\right] = \alpha_2^\lambda(i,j)\,\lambda_\varepsilon^2(j),$$
and the transition rate for switching from $\sigma_1$ to $\sigma_2$ is defined, for $j = 1, 2, \ldots, N$, as
$$b_\varepsilon^2(i,j) = \alpha_2^\mu(i,j)\left[\varepsilon + \kappa_2 \max\{f_{\sigma_2}^2(j) - f_{\sigma_1}^2(j), 0\}\right] = \alpha_2^\mu(i,j)\,\mu_\varepsilon^2(j).$$
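The dependent rates are therefore just the independent rates $\lambda_\varepsilon$, $\mu_\varepsilon$ modulated state by state by the chosen factors; when every factor equals one, the independent model of Section 11.8.1 is recovered. A sketch (the geometric factor family here is an illustrative assumption):

```python
def dependent_rates(i, j, lam_i, mu_i, f_lam, f_mu):
    """a_eps(i, j) = f_lam(i, j) * lambda_eps(i) and
    b_eps(i, j) = f_mu(i, j) * mu_eps(i)."""
    return f_lam(i, j) * lam_i, f_mu(i, j) * mu_i

# Illustrative geometric factors; alpha = beta = 0 makes both equal to 1.
alpha, beta = 0.2, 0.2
f_lam = lambda i, j: (1 + alpha) ** i * (1 + beta) ** j
f_mu = lambda i, j: (1 + alpha) ** (-i) * (1 + beta) ** (-j)

a1, b1 = dependent_rates(2, 3, lam_i=1.25, mu_i=0.5, f_lam=f_lam, f_mu=f_mu)

# With unit factors the independent rates come back unchanged.
one = lambda i, j: 1.0
print(dependent_rates(2, 3, 1.25, 0.5, one, one))   # (1.25, 0.5)
```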
Remark 11.1 The four dependence factors $\alpha_1^\lambda(i,j)$, $\alpha_1^\mu(i,j)$, $\alpha_2^\lambda(i,j)$ and $\alpha_2^\mu(i,j)$ for $0 \leq i \leq M$ and $0 \leq j \leq N$ can be chosen quite freely, for example,
$$\alpha_1^\lambda(i,j) = c_1 \alpha^i \beta^j, \quad c_1 i^2 j^3, \quad c_1 \ln(i)\ln(j), \quad c_1\left(\alpha^i + \beta^j\right), \quad c_1 \frac{\alpha^i}{\ln(j)}.$$
Specifically, if the four dependence factors are all equal to one, then this game reduces to the case with independent structure studied above.

Let $z(t) = (z_1(t), z_2(t))$, where $z_i(t)$ is the number of players of type $i$ who are playing strategy $\sigma_1$ at time $t$, for $i = 1, 2$. Then we can model this game as a continuous-time level-dependent QBD process $\{z(t), t \geq 0\}$ on the state space
$$S = \{(0,0), (0,1), \ldots, (0,N); (1,0), (1,1), \ldots, (1,N); \ldots; (M,0), (M,1), \ldots, (M,N)\},$$
and its state transition relation is given in Fig. 11.1.
Figure 11.1 The state transition relation
Based on Fig. 11.1, the infinitesimal generator of this QBD process is given by
$$Q(\varepsilon) = \begin{pmatrix}
A_1^0(\varepsilon) & A_0^0(\varepsilon) & & \\
A_2^1(\varepsilon) & A_1^1(\varepsilon) & A_0^1(\varepsilon) & \\
& \ddots & \ddots & \ddots \\
& & A_2^M(\varepsilon) & A_1^M(\varepsilon)
\end{pmatrix},$$
where for $0 \leq k \leq M$,
$$A_1^k(\varepsilon) = \begin{pmatrix}
-\psi_\varepsilon(k,0) & a_\varepsilon^2(k,0) & & \\
b_\varepsilon^2(k,1) & -\psi_\varepsilon(k,1) & a_\varepsilon^2(k,1) & \\
& \ddots & \ddots & \ddots \\
& & b_\varepsilon^2(k,N) & -\psi_\varepsilon(k,N)
\end{pmatrix}$$
with, for $k = 0$,
$$\psi_\varepsilon(0,l) = \begin{cases}
a_\varepsilon^1(0,0) + a_\varepsilon^2(0,0), & l = 0, \\
a_\varepsilon^1(0,l) + a_\varepsilon^2(0,l) + b_\varepsilon^2(0,l), & 1 \leq l \leq N-1, \\
a_\varepsilon^1(0,N) + b_\varepsilon^2(0,N), & l = N,
\end{cases}$$
for $k = M$,
$$\psi_\varepsilon(M,l) = \begin{cases}
b_\varepsilon^1(M,0) + a_\varepsilon^2(M,0), & l = 0, \\
b_\varepsilon^1(M,l) + a_\varepsilon^2(M,l) + b_\varepsilon^2(M,l), & 1 \leq l \leq N-1, \\
b_\varepsilon^1(M,N) + b_\varepsilon^2(M,N), & l = N,
\end{cases}$$
and for $1 \leq k \leq M-1$,
$$\psi_\varepsilon(k,l) = \begin{cases}
a_\varepsilon^1(k,0) + b_\varepsilon^1(k,0) + a_\varepsilon^2(k,0), & l = 0, \\
a_\varepsilon^1(k,l) + b_\varepsilon^1(k,l) + a_\varepsilon^2(k,l) + b_\varepsilon^2(k,l), & 1 \leq l \leq N-1, \\
a_\varepsilon^1(k,N) + b_\varepsilon^1(k,N) + b_\varepsilon^2(k,N), & l = N.
\end{cases}$$
At the same time, for $0 \leq m \leq M-1$,
$$A_0^m(\varepsilon) = \mathrm{diag}\left(a_\varepsilon^1(m,0), a_\varepsilon^1(m,1), \ldots, a_\varepsilon^1(m,N)\right),$$
and for $1 \leq n \leq M$,
$$A_2^n(\varepsilon) = \mathrm{diag}\left(b_\varepsilon^1(n,0), b_\varepsilon^1(n,1), \ldots, b_\varepsilon^1(n,N)\right).$$
For the QBD process, we write
$$U_M(\varepsilon) = A_1^M(\varepsilon),$$
$$U_k(\varepsilon) = A_1^k(\varepsilon) + A_0^k(\varepsilon)\left[-U_{k+1}(\varepsilon)\right]^{-1} A_2^{k+1}(\varepsilon), \qquad 0 \leq k \leq M-1,$$
and
$$R_k(\varepsilon) = A_0^k(\varepsilon)\left[-U_{k+1}(\varepsilon)\right]^{-1}, \qquad 0 \leq k \leq M-1.$$
Let
$$\pi(\varepsilon) = (\pi_0(\varepsilon), \pi_1(\varepsilon), \ldots, \pi_{M-1}(\varepsilon), \pi_M(\varepsilon))$$
with
$$\pi_k(\varepsilon) = (\pi_k^0(\varepsilon), \pi_k^1(\varepsilon), \ldots, \pi_k^{N-1}(\varepsilon), \pi_k^N(\varepsilon))$$
be the stationary probability vector of the QBD process $Q(\varepsilon)$. Then
$$\pi_k(\varepsilon) = \tau(\varepsilon)\, v_0(\varepsilon)\, R_0(\varepsilon) R_1(\varepsilon) \cdots R_{k-1}(\varepsilon), \qquad 0 \leq k \leq M,$$
where $v_0(\varepsilon)$ is the stationary probability vector of the Markov chain censored to level 0, whose generator is $U_0(\varepsilon)$, and the constant $\tau(\varepsilon)$ is given by
$$\tau(\varepsilon) = \frac{1}{1 + v_0(\varepsilon) \sum_{k=1}^{M} R_0(\varepsilon) R_1(\varepsilon) \cdots R_{k-1}(\varepsilon)\, e}.$$
To compute the ESS numerically, we need to study the limiting distribution
$$\pi^* = \lim_{\varepsilon \to 0} \pi(\varepsilon),$$
where
$$\tau^* = \lim_{\varepsilon \to 0} \tau(\varepsilon), \qquad v_0^* = \lim_{\varepsilon \to 0} v_0(\varepsilon), \qquad R_k^* = \lim_{\varepsilon \to 0} R_k(\varepsilon), \qquad 0 \leq k \leq M-1.$$
In what follows we provide a comparative numerical example to indicate how the limiting distribution $\pi^*$ depends on the dependence factors $\alpha_1^\lambda(i,j)$, $\alpha_1^\mu(i,j)$, $\alpha_2^\lambda(i,j)$ and $\alpha_2^\mu(i,j)$ for $0 \leq i \leq M$ and $0 \leq j \leq N$.

Example 11.3 (The dependence factors) In the $2\times 2$ asymmetric evolutionary game, the parameters are given by
$$A_1 = \begin{pmatrix} 20 & 0 \\ 2 & 8 \end{pmatrix}, \qquad M = N = 8, \qquad \kappa_1 = \kappa_2 = \kappa = 0.8, \qquad \varepsilon = 0.1,$$
$$\alpha_1^\lambda(i,j) = (1+\alpha)^i (1+\beta)^j, \qquad \alpha_1^\mu(i,j) = (1+\alpha)^{-i} (1+\beta)^{-j},$$
$$\alpha_2^\lambda(i,j) = (1+\psi)^i (1+\theta)^j, \qquad \alpha_2^\mu(i,j) = (1+\psi)^{-i} (1+\theta)^{-j},$$
where in the original model $\alpha = \beta = \psi = \theta = 0$, while in a comparative model $\alpha = \beta = 0.2$, $\psi = 0.6$ and $\theta = 0.4$. Figure 11.2 indicates how the two probabilities $p_i(\varepsilon)$ and $q_j(\varepsilon)$ depend on the dependence factors. It is seen that when this game has an independent structure, the two types of players can choose their dominant strategies, $P\{z_i(\infty) = 8\} = 1$ for $i = 1, 2$. However, when the dependent structure is introduced into this game, the dominant strategies become
$$P\{z_1(\infty) = 8\} = 1, \qquad P\{z_2(\infty) = 0\} = 1.$$
Figure 11.2 The role of the dependent structure
11.8.3 A 2×2 Game with Information Interaction

Consider a $2\times 2$ asymmetric evolutionary game with information interaction, where each player has two strategies $\sigma_1$ and $\sigma_2$, the populations of players of types 1 and 2 are $M$ and $N$, and their payoff matrices are given by
$$A_1 = \begin{pmatrix} a_1 & b_1 \\ c_1 & d_1 \end{pmatrix}, \qquad A_2 = \begin{pmatrix} a_2 & b_2 \\ c_2 & d_2 \end{pmatrix},$$
respectively. We define
$$f_{\sigma_1}^1(n) = \frac{a_1 n + b_1(N-n)}{N}, \qquad f_{\sigma_2}^1(n) = \frac{c_1 n + d_1(N-n)}{N};$$
$$f_{\sigma_1}^2(m) = \frac{a_2 m + b_2(M-m)}{M}$$
and
$$f_{\sigma_2}^2(m) = \frac{c_2 m + d_2(M-m)}{M},$$
so that the fitness of each type of player now depends on the composition of the opposite population.
For the first type of players, the transition rate for switching from strategy $\sigma_2$ to $\sigma_1$ is defined as
$$\lambda_\varepsilon^1(i) = \varepsilon + \kappa_1 \max\{f_{\sigma_1}^1(i) - f_{\sigma_2}^1(i), 0\}, \qquad i = 0, 1, \ldots, N-1,$$
and the transition rate for switching from $\sigma_1$ to $\sigma_2$ is defined as
$$\mu_\varepsilon^1(i) = \varepsilon + \kappa_1 \max\{f_{\sigma_2}^1(i) - f_{\sigma_1}^1(i), 0\}, \qquad i = 1, 2, \ldots, N.$$
Similarly, for the second type of players, the transition rate for switching from $\sigma_2$ to $\sigma_1$ is defined as
$$\lambda_\varepsilon^2(i) = \varepsilon + \kappa_2 \max\{f_{\sigma_1}^2(i) - f_{\sigma_2}^2(i), 0\}, \qquad i = 0, 1, \ldots, M-1,$$
and the transition rate for switching from $\sigma_1$ to $\sigma_2$ is defined as
$$\mu_\varepsilon^2(i) = \varepsilon + \kappa_2 \max\{f_{\sigma_2}^2(i) - f_{\sigma_1}^2(i), 0\}, \qquad i = 1, 2, \ldots, M.$$
Let $z(t) = (z_1(t), z_2(t))$, where $z_i(t)$ is the number of players of type $i$ who are playing strategy $\sigma_1$ at time $t$, for $i = 1, 2$. We can model this game as a continuous-time level-dependent QBD process $\{z(t), t \geq 0\}$ on the state space
$$S = \{(0,0), (0,1), \ldots, (0,M); (1,0), (1,1), \ldots, (1,M); \ldots; (N,0), (N,1), \ldots, (N,M)\},$$
and the infinitesimal generator of this QBD process is given by
$$Q(\varepsilon) = \begin{pmatrix}
A_1^0(\varepsilon) & A_0^0(\varepsilon) & & \\
A_2^1(\varepsilon) & A_1^1(\varepsilon) & A_0^1(\varepsilon) & \\
& \ddots & \ddots & \ddots \\
& & A_2^N(\varepsilon) & A_1^N(\varepsilon)
\end{pmatrix},$$
where for $0 \leq k \leq N$,
$$A_1^k(\varepsilon) = \begin{pmatrix}
-a_\varepsilon(k,0) & \lambda_\varepsilon^2(0) & & \\
\mu_\varepsilon^2(1) & -a_\varepsilon(k,1) & \lambda_\varepsilon^2(1) & \\
& \ddots & \ddots & \ddots \\
& & \mu_\varepsilon^2(M) & -a_\varepsilon(k,M)
\end{pmatrix},$$
with, for $k = 0$,
$$a_\varepsilon(0,l) = \begin{cases}
\lambda_\varepsilon^1(0) + \lambda_\varepsilon^2(0), & l = 0, \\
\lambda_\varepsilon^1(0) + \lambda_\varepsilon^2(l) + \mu_\varepsilon^2(l), & 1 \leq l \leq M-1, \\
\lambda_\varepsilon^1(0) + \mu_\varepsilon^2(M), & l = M,
\end{cases}$$
for $k = N$,
$$a_\varepsilon(N,l) = \begin{cases}
\mu_\varepsilon^1(N) + \lambda_\varepsilon^2(0), & l = 0, \\
\mu_\varepsilon^1(N) + \lambda_\varepsilon^2(l) + \mu_\varepsilon^2(l), & 1 \leq l \leq M-1, \\
\mu_\varepsilon^1(N) + \mu_\varepsilon^2(M), & l = M,
\end{cases}$$
and for $1 \leq k \leq N-1$,
$$a_\varepsilon(k,l) = \begin{cases}
\lambda_\varepsilon^1(k) + \mu_\varepsilon^1(k) + \lambda_\varepsilon^2(0), & l = 0, \\
\lambda_\varepsilon^1(k) + \mu_\varepsilon^1(k) + \lambda_\varepsilon^2(l) + \mu_\varepsilon^2(l), & 1 \leq l \leq M-1, \\
\lambda_\varepsilon^1(k) + \mu_\varepsilon^1(k) + \mu_\varepsilon^2(M), & l = M.
\end{cases}$$
At the same time, for $0 \leq m \leq N-1$,
$$A_0^m(\varepsilon) = \mathrm{diag}\left(\lambda_\varepsilon^1(0), \lambda_\varepsilon^1(1), \ldots, \lambda_\varepsilon^1(M-1), \lambda_\varepsilon^1(M)\right),$$
and for $1 \leq n \leq N$,
$$A_2^n(\varepsilon) = \mathrm{diag}\left(\mu_\varepsilon^1(0), \mu_\varepsilon^1(1), \ldots, \mu_\varepsilon^1(M-1), \mu_\varepsilon^1(M)\right).$$
For the QBD process, we write
$$U_N(\varepsilon) = A_1^N(\varepsilon),$$
$$U_k(\varepsilon) = A_1^k(\varepsilon) + A_0^k(\varepsilon)\left[-U_{k+1}(\varepsilon)\right]^{-1} A_2^{k+1}(\varepsilon), \qquad 0 \leq k \leq N-1,$$
and
$$R_k(\varepsilon) = A_0^k(\varepsilon)\left[-U_{k+1}(\varepsilon)\right]^{-1}, \qquad 0 \leq k \leq N-1.$$
Let
$$\pi(\varepsilon) = (\pi_0(\varepsilon), \pi_1(\varepsilon), \ldots, \pi_{N-1}(\varepsilon), \pi_N(\varepsilon))$$
with
$$\pi_k(\varepsilon) = (\pi_k^0(\varepsilon), \pi_k^1(\varepsilon), \ldots, \pi_k^{M-1}(\varepsilon), \pi_k^M(\varepsilon))$$
be the stationary probability vector of the QBD process $Q(\varepsilon)$. Then
$$\pi_k(\varepsilon) = \tau(\varepsilon)\, v_0(\varepsilon)\, R_0(\varepsilon) R_1(\varepsilon) \cdots R_{k-1}(\varepsilon), \qquad 0 \leq k \leq N,$$
where $v_0(\varepsilon)$ is the stationary probability vector of the Markov chain censored to level 0, whose generator is $U_0(\varepsilon)$, and the constant $\tau(\varepsilon)$ is given by
$$\tau(\varepsilon) = \frac{1}{1 + v_0(\varepsilon) \sum_{k=1}^{N} R_0(\varepsilon) R_1(\varepsilon) \cdots R_{k-1}(\varepsilon)\, e}.$$
To compute the ESS numerically, we need to study the limiting distribution
$$\pi^* = \lim_{\varepsilon \to 0} \pi(\varepsilon),$$
where
$$\tau^* = \lim_{\varepsilon \to 0} \tau(\varepsilon), \qquad v_0^* = \lim_{\varepsilon \to 0} v_0(\varepsilon), \qquad R_k^* = \lim_{\varepsilon \to 0} R_k(\varepsilon), \qquad 0 \leq k \leq N-1.$$
In what follows we provide two numerical examples to indicate how the limiting distribution $\pi^*$ depends on the information interaction.

Example 11.4 (The information interaction) In the $2\times 2$ asymmetric evolutionary game, the parameters are given by
$$A_1 = \begin{pmatrix} 20 & 10 \\ 2 & 8 \end{pmatrix}, \qquad A_2 = \begin{pmatrix} 10 & 1 \\ 0 & 40 \end{pmatrix},$$
$$M = 8, \qquad N = 10, \qquad \varepsilon = 0.1, \qquad \kappa_1 = \kappa_2 = \kappa = 0.8.$$
Figure 11.3 indicates how the two probabilities $p_i(\varepsilon)$ and $q_j(\varepsilon)$ depend on the information interaction.

Example 11.5 (The information interaction) In the $2\times 2$ asymmetric evolutionary game, the parameters are given by
$$A_1 = \begin{pmatrix} 2 & 20 \\ 10 & 0 \end{pmatrix}, \qquad A_2 = \begin{pmatrix} 12 & 1 \\ 0 & 8 \end{pmatrix},$$
$$M = 8, \qquad N = 10, \qquad \varepsilon = 0.1, \qquad \kappa_1 = \kappa_2 = \kappa = 0.8.$$
Figure 11.3 The role of the information interaction
Figure 11.4 indicates how the two probabilities $p_i(\varepsilon)$ and $q_j(\varepsilon)$ depend on the information interaction.
Figure 11.4 The role of the information interaction
11.8.4 A 3×2 Asymmetric Evolutionary Game

Consider a $3\times 2$ asymmetric evolutionary game with information interaction, where there are three types of players, each player has two strategies $\sigma_1$ and $\sigma_2$, the population of players of type $k$ is $N_k$ for $k = 1, 2, 3$, and their payoff matrices are given by
$$A_1 = \begin{pmatrix} c_{1,1,1}^1 & c_{1,1,2}^1 & c_{1,2,1}^1 & c_{1,2,2}^1 \\ c_{2,1,1}^1 & c_{2,1,2}^1 & c_{2,2,1}^1 & c_{2,2,2}^1 \end{pmatrix},$$
$$A_2 = \begin{pmatrix} c_{1,1,1}^2 & c_{1,1,2}^2 & c_{2,1,1}^2 & c_{2,1,2}^2 \\ c_{1,2,1}^2 & c_{1,2,2}^2 & c_{2,2,1}^2 & c_{2,2,2}^2 \end{pmatrix}$$
and
$$A_3 = \begin{pmatrix} c_{1,1,1}^3 & c_{1,2,1}^3 & c_{2,1,1}^3 & c_{2,2,1}^3 \\ c_{1,1,2}^3 & c_{1,2,2}^3 & c_{2,1,2}^3 & c_{2,2,2}^3 \end{pmatrix},$$
respectively. For example, $c_{1,1,2}^2$ denotes the payoff for the players of type 2 when the players of types 1, 2 and 3 play strategies $\sigma_1$, $\sigma_1$ and $\sigma_2$, respectively. For $0 \leq k \leq N_1$, $0 \leq i \leq N_2$ and $0 \leq j \leq N_3$, we define
$$f_{\sigma_1}^1(\cdot, i, j) = c_{1,1,1}^1 \frac{i}{N_2}\frac{j}{N_3} + c_{1,1,2}^1 \frac{i}{N_2}\left(1 - \frac{j}{N_3}\right) + c_{1,2,1}^1 \left(1 - \frac{i}{N_2}\right)\frac{j}{N_3} + c_{1,2,2}^1 \left(1 - \frac{i}{N_2}\right)\left(1 - \frac{j}{N_3}\right),$$
$$f_{\sigma_2}^1(\cdot, i, j) = c_{2,1,1}^1 \frac{i}{N_2}\frac{j}{N_3} + c_{2,1,2}^1 \frac{i}{N_2}\left(1 - \frac{j}{N_3}\right) + c_{2,2,1}^1 \left(1 - \frac{i}{N_2}\right)\frac{j}{N_3} + c_{2,2,2}^1 \left(1 - \frac{i}{N_2}\right)\left(1 - \frac{j}{N_3}\right);$$
$$f_{\sigma_1}^2(k, \cdot, j) = c_{1,1,1}^2 \frac{k}{N_1}\frac{j}{N_3} + c_{1,1,2}^2 \frac{k}{N_1}\left(1 - \frac{j}{N_3}\right) + c_{2,1,1}^2 \left(1 - \frac{k}{N_1}\right)\frac{j}{N_3} + c_{2,1,2}^2 \left(1 - \frac{k}{N_1}\right)\left(1 - \frac{j}{N_3}\right),$$
$$f_{\sigma_2}^2(k, \cdot, j) = c_{1,2,1}^2 \frac{k}{N_1}\frac{j}{N_3} + c_{1,2,2}^2 \frac{k}{N_1}\left(1 - \frac{j}{N_3}\right) + c_{2,2,1}^2 \left(1 - \frac{k}{N_1}\right)\frac{j}{N_3} + c_{2,2,2}^2 \left(1 - \frac{k}{N_1}\right)\left(1 - \frac{j}{N_3}\right);$$
$$f_{\sigma_1}^3(k, i, \cdot) = c_{1,1,1}^3 \frac{k}{N_1}\frac{i}{N_2} + c_{1,2,1}^3 \frac{k}{N_1}\left(1 - \frac{i}{N_2}\right) + c_{2,1,1}^3 \left(1 - \frac{k}{N_1}\right)\frac{i}{N_2} + c_{2,2,1}^3 \left(1 - \frac{k}{N_1}\right)\left(1 - \frac{i}{N_2}\right)$$
and
$$f_{\sigma_2}^3(k, i, \cdot) = c_{1,1,2}^3 \frac{k}{N_1}\frac{i}{N_2} + c_{1,2,2}^3 \frac{k}{N_1}\left(1 - \frac{i}{N_2}\right) + c_{2,1,2}^3 \left(1 - \frac{k}{N_1}\right)\frac{i}{N_2} + c_{2,2,2}^3 \left(1 - \frac{k}{N_1}\right)\left(1 - \frac{i}{N_2}\right).$$
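Each fitness above is a bilinear average of four payoffs, weighted by the fractions of $\sigma_1$-players in the two opposite populations; since the four weights sum to one, a constant payoff gives a constant fitness, which is a convenient sanity check. A sketch (hypothetical payoff values):

```python
def bilinear_fitness(c11, c12, c21, c22, x, y):
    """Average payoff when fractions x and y of the two opposite
    populations play sigma1; c_uv is the payoff against (sigma_u, sigma_v)."""
    return (c11 * x * y + c12 * x * (1 - y)
            + c21 * (1 - x) * y + c22 * (1 - x) * (1 - y))

# f_sigma1^1(., i, j): i of N2 type-2 players and j of N3 type-3
# players are currently using sigma1.
N2, N3, i, j = 10, 8, 4, 6
f = bilinear_fitness(5.0, 1.0, 3.0, 2.0, i / N2, j / N3)
print(f)   # 3.25 up to rounding

# The four weights sum to one, so constant payoffs are reproduced.
print(abs(bilinear_fitness(7.0, 7.0, 7.0, 7.0, 0.3, 0.9) - 7.0) < 1e-12)
```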
For the first type of players, the transition rate for switching from strategy $\sigma_2$ to $\sigma_1$ is defined, for $i = 0, 1, \ldots, N_2 - 1$ and $j = 0, 1, \ldots, N_3 - 1$, as
$$\lambda_\varepsilon^1(\cdot, i, j) = \varepsilon + \kappa_1 \max\{f_{\sigma_1}^1(\cdot, i, j) - f_{\sigma_2}^1(\cdot, i, j), 0\},$$
and the transition rate for switching from $\sigma_1$ to $\sigma_2$ is defined, for $i = 1, 2, \ldots, N_2$ and $j = 1, 2, \ldots, N_3$, as
$$\mu_\varepsilon^1(\cdot, i, j) = \varepsilon + \kappa_1 \max\{f_{\sigma_2}^1(\cdot, i, j) - f_{\sigma_1}^1(\cdot, i, j), 0\}.$$
For the second type of players, the transition rate for switching from $\sigma_2$ to $\sigma_1$ is defined, for $k = 0, 1, \ldots, N_1 - 1$ and $j = 0, 1, \ldots, N_3 - 1$, as
$$\lambda_\varepsilon^2(k, \cdot, j) = \varepsilon + \kappa_2 \max\{f_{\sigma_1}^2(k, \cdot, j) - f_{\sigma_2}^2(k, \cdot, j), 0\},$$
and the transition rate for switching from $\sigma_1$ to $\sigma_2$ is defined, for $k = 1, 2, \ldots, N_1$ and $j = 1, 2, \ldots, N_3$, as
$$\mu_\varepsilon^2(k, \cdot, j) = \varepsilon + \kappa_2 \max\{f_{\sigma_2}^2(k, \cdot, j) - f_{\sigma_1}^2(k, \cdot, j), 0\}.$$
For the third type of players, the transition rate for switching from $\sigma_2$ to $\sigma_1$ is defined, for $k = 0, 1, \ldots, N_1 - 1$ and $i = 0, 1, \ldots, N_2 - 1$, as
$$\lambda_\varepsilon^3(k, i, \cdot) = \varepsilon + \kappa_3 \max\{f_{\sigma_1}^3(k, i, \cdot) - f_{\sigma_2}^3(k, i, \cdot), 0\},$$
and the transition rate for switching from $\sigma_1$ to $\sigma_2$ is defined, for $k = 1, 2, \ldots, N_1$ and $i = 1, 2, \ldots, N_2$, as
$$\mu_\varepsilon^3(k, i, \cdot) = \varepsilon + \kappa_3 \max\{f_{\sigma_2}^3(k, i, \cdot) - f_{\sigma_1}^3(k, i, \cdot), 0\}.$$
Let $z(t) = (z_1(t), z_2(t), z_3(t))$, where $z_i(t)$ is the number of players of type $i$ who are playing strategy $\sigma_1$ at time $t$, for $i = 1, 2, 3$. We can model this game as a continuous-time level-dependent QBD process $\{z(t), t \geq 0\}$ on the state space
$$S = \{(0,0,0), (0,0,1), \ldots, (0,0,N_1); (0,1,0), (0,1,1), \ldots, (0,1,N_1); \ldots; (0,N_2,0), (0,N_2,1), \ldots, (0,N_2,N_1); (1,0,0), (1,0,1), \ldots, (1,0,N_1); \ldots; (N_3, N_2, N_1)\}$$
with the state transition relation shown in Fig. 11.5.
Figure 11.5 The state transition relation
For a given integer j 0,1,! , N3 , we have the transition rates OH1 (, i, j ) and PH (, i, j ) for the players of type 1, and OH2 (k , , j ) and PH2 (k , j ) for the players of type 2, thus we can construct a QBD process with the infinitesimal generator QH1,2 ( j ) with respect to the players of types 1 and 2, as indicated above. Applying the state transition relation, the infinitesimal generator of the QBD process describing the 3 u 2 asymmetric evolutionary game is given by 1
§ /1,2 (0) < H ¨ /1,2 (1) < H ¨ )H Q1,2,3 (H ) ¨ % % ¨ ¨ )H ¨¨ ©
· ¸ ¸ ¸, % ¸ /1,2 ( N3 1)
where 643
Constructive Computation in Stochastic Models with Applications
/1,2 (l )
QH1,2 (0) < H , l 0, ° 1,2 İ l İ N 3 1, ( ) < ) , 1 Q l ® H H H °Q1,2 ( N ) ) , l N3 , H 3 ¯ H
diag(OH3 (0, 0, ), OH3 (0,1, ),! , OH3 (0, N 2 , ); OH3 (1, 0, ), OH3 (1,1, ), ! , OH3 (1, N 2 , );! , OH3 ( N1 , 0, ), OH3 ( N1 ,1, ),! , OH3 ( N1 , N 2 , ))
and
)H
diag( PH3 (0,0, ), PH3 (0,1, ),! , PH3 (0, N 2 , ); PH3 (1, 0, ), PH3 (1,1, ), ! , PH3 (1, N 2 , );! , PH3 ( N1 , 0, ), PH3 ( N1 ,1, ),! , PH3 ( N1 , N 2 , )).
For the QBD process Q1,2,3 (H ), we write U N3 (H ) QH1,2 ( N 3 ) )H , for 1 İ k İ N 1, 1
U k (H ) QH1,2 (k ) < H )H < H [U k 1 (H )] )H , and 1
U 0 (H ) QH1,2 (0) < H < H [U1 (H )] )H . We write 1
Rk (H ) < H [U k 1 (H )] , 0 İ k İ N 1. Let ʌ (H )
ʌ (H ), ʌ (H ), ʌ (H ),!, ʌ 0
1
2
N3 1
(H ), ʌ N3 (H )
be the stationary probability vector of the QBD process Q1,2,3 (H ). Then ʌk (H ) W (H )v0 (H ) R0 (H ) R1 (H ) R2 (H )! Rk 1 (H ),
0 İ k İ N3 ,
where v0 (H ) is the stationary probability vector of the censored Markov chain U 0 (H ) to level 0, and the constant W (H ) is given by
W (H )
1 N3
.
1 v0 (H ) ¦ R0 (H ) R1 (H ) R2 (H )! Rk 1 (H )e k 1
To compute the ESS numerically, we need to study the limiting distribution ʌ* 644
lim ʌ H H o0
11
Sensitivity Analysis and Evolutionary Games
where
W*
11.9
lim W (H ), H o0
v0*
lim v0 (H ), H o0
Rk*
lim Rk (H ), H o0
0 İ k İ N 3 1.
Notes in the Literature
Infinitesimal perturbation analysis is a sensitivity-analytic technique for discreteevent dynamic systems using single sample path performance measures. The basic approach is to obtain unbiased or strongly consistent estimates for derivatives of the stationary performance measures. Representative early works include Suri and Zazanis [71], Glasserman [32], Ho and Cao [44], Cassandras [18], Cao [9, 14] and Bao, Cassandras and Zananis [4]. For infinitesimal perturbation analysis of Markov chains, Cao, Yuan and Qiu [17] proposed a novel approach in terms of the realization factor. Cao [10, 11, 12, 13] developed two fundamental concepts: the realization matrix and the performance potential vector, to express derivatives of the stationary performance measures. The subsequent works may be referred to Cao and Chen [15], Cao and Wan [16] and the book by Cao [14]. Perturbation analysis of Markov chains is quite extensive, e.g., see Delebecque [23], Hunter [49, 50, 51], Korolyuk and Turbin [54], Cao and Chen [14], Roberts, Rosenthal and Schwartz [65], Bielecki and Stettner [5], Altman, Avrachenkov and Núñez-Queija [2], Solan and Vieille [70], Li and Liu [57] and Gambin, Krzyanowski and Pokarowski [31]. For Perturbation analysis of queueing systems, readers may refer to Latouche [55], Gong, Pan and Cassandras [33], Chang and Nelson [19], He [40], Grassmann and Chen [35], Grassmann [34] and He and Neuts [41]. The study of evolutionary game theory was originated in mathematical biology. Maynard Smith and Price [62] first introduced an important equilibrium concept: evolutionarily stable strategy (ESS), which is used to capture the possible stable outcomes of a dynamic evolutionary process, readers may also refer to Maynard Smith [60, 61] for more details. To derive the ESS, Taylor and Jonker [77] provided a basic method: replicator dynamics, which can be described as the ordinary differential equations based on the natural selection and the associated fitness. 
Based on this, the stability of the ordinary differential equations for the replicator dynamics can be used to express the ESS. Important examples of the replicator dynamics include the study of animal behavior, population ecology, population genetics, road traffic, computer networks, industrial evolutionary structure, and complex networks etc. Since 1980s, significant advances on the replicator dynamics have been made by the efforts of many researchers. Readers may refer to, such as, Vincent [79], Hines [42], Harsanyi and Selten [36], Robson [66], Cressman [22], Björnerstedt and Weibull [7], Vega-Redondo [78], Samuelson [68], 645
Constructive Computation in Stochastic Models with Applications
Schlag [69], Hofbauer and Sigmund [47], Hofbauer [45], Hofbauer and Sandholm [46], Hart and Mas-Colell [38, 39], Hofbauer and Sigmund [48] and Suri [72]. For a stochastic evolutionary game, the replicator dynamics is not effective no longer when considering the strategic behavior over a long time horizon. To deal with the stochastic evolutionary games, perturbed Markov chains can be applied to analyze the ESS, that is, the perturbed stationary probability vector directly expresses the ESS based on its perturbed limit structure, which is the time proportion spent by this process in each population state. Foster and Young [27] gave stochastic evolutionary dynamics by means of the perturbed Markov chains whose stationary probability vector and the associated perturbed limit indicates the ESS. The subsequent works may refer to, for example, Takada and Kigami [74], Young [80, 81], Ellison [24], Kandori, Mailath and Rob [52], Nöldeke and Samuelson [63], Kandori and Rob [53], Binmore, Samuelson and Vaughan [6], Amir and Berninghaus [3], Robson and Vega-Redondo [67], Rhode and Stegeman [64], Fudenberg and Levine [28], Ellison [25], Corradi and Sarin [21], Tanaka [75], Hart [37], Blume [8], Tadj and Touzene [73], Taylor, Fudenberg, Sasaki and Nowak [76], Alos-Ferrer and Neustadt [1], Fudenberg and Imhof [29], Fudenberg, Nowakb, Taylorb and Imhof [30], Chen and Chow [20] and Fishman [26]. In this chapter, we mainly refer to Cao, Yuan and Qiu [17], Cao and Chen [15], Li and Zhao [58, 59], Li and Cao [56] and Li and Liu [57] for perturbation analysis of Markov chains; and Amir and Berninghaus [3], Tadj and Touzene [73], Alos-Ferrer and Neustadt [1], Fudenberg and Imhof [29] and Fudenberg, Nowakb, Taylorb and Imhof [30] for evolutionary games. At the same time, we also add some new results without publication for a more systematical organization of this chapter.
Problems 11.1
For a perturbed discrete-time level-independent QBD process, d (1) use the UL-type RG-factorization to compute KH _H 0 ; dH d KH _H 0 ; (2) use the LU-type RG-factorization to compute dH (3) provide numerical examples to compare the results given in (1) and (2).
11.2
For a perturbed continuous-time level-independent QBD process, d (1) use the UL-type RG-factorization to compute [H ( N ) _H 0 ; dH d MH ( E ) _H 0 for 0 E 1. (2) use the UL-type RG-factorization to compute dH
646
11
Sensitivity Analysis and Evolutionary Games
11.3 For a perturbed PH distribution and a perturbed MAP which form a MAP/ PH/1 queue, (1) Provide sensitivity analysis for the mean of stationary waiting time; (2) Provide sensitivity analysis for the mean of busy period. 11.4 Applying the matrix-geometric solution, provide numerical experiments for sensitivity analysis of a perturbed Markov chain of GI M 1 type. 11.5 Applying the matrix iterative solution, provide numerical experiments for sensitivity analysis of a perturbed Markov chain of M G 1 type. 11.6 For the perturbed MAP G 1 queue, apply the UL-type RG-factorization to provide sensitivity analysis for the mean of the stationary queue length, and give some numerical experimentations. 11.7 For the perturbed SM PH 1 queue, apply the LU-type RG-factorization to provide sensitivity analysis for the mean of the stationary queue length, and give some numerical experimentations. 11.8 Apply the RG-factorizations to provide sensitivity analysis for Markov chains of GI G 1 type. 11.9 Provide a sensitivity analysis for the BMAP with irreducible matrix descriptor {Dk , k ı 0}. 11.10 Provide a sensitivity analysis for the MMAP[K] with irreducible matrix descriptor {Dn , n N k}. 11.11 Consider the 3 u 2, 2 u 3, N u 2, 2 u M symmetric evolutionary games, and provide their ESS numerically. 11.12 Consider the 3 u 2, 2 u 3, N u 2, 2 u M asymmetric evolutionary games, and provide their ESS numerically.
Appendix
Appendix A Matrix Notation and Computation

In this appendix, we provide some notation for the Kronecker product of matrices (e.g., see Graham [1]), the Perron-Frobenius theory of nonnegative matrices (e.g., see Seneta [2]), and inverses of matrices of infinite size.
A.1 Kronecker Product
Let $A$ and $B$ be two matrices, where the $(i,j)$th element of $A$ is $a_{i,j}$. The Kronecker product of the two matrices $A$ and $B$ is defined as
$$A \otimes B = (a_{i,j} B),$$
and the Kronecker sum of the two matrices $A$ and $B$ is defined as
$$A \oplus B = A \otimes I + I \otimes B.$$
The useful properties of the Kronecker product are listed as follows:

(1) $(A + B) \otimes C = A \otimes C + B \otimes C$; $A \otimes (B + C) = A \otimes B + A \otimes C$.

(2) $(A \otimes B) \otimes C = A \otimes (B \otimes C)$.

(3) $(A \otimes B)(C \otimes D) = (AC) \otimes (BD)$.

(4) $(A \otimes B)^{T} = A^{T} \otimes B^{T}$.

(5) $(A \otimes B)^{-1} = A^{-1} \otimes B^{-1}$.

(6) $\exp\{(A \oplus B)x\} = \exp\{Ax\} \otimes \exp\{Bx\}$.

(7) $\dfrac{d}{dx}\,[\exp\{Ax\} \otimes \exp\{Bx\}] = [\exp\{Ax\} \otimes \exp\{Bx\}](A \oplus B)$.

(8) If $A\varphi = \lambda\varphi$ and $B\psi = \mu\psi$, then
$$(A \otimes B)(\varphi \otimes \psi) = \lambda\mu\,(\varphi \otimes \psi) \quad\text{and}\quad (A \oplus B)(\varphi \otimes \psi) = (\lambda + \mu)(\varphi \otimes \psi).$$
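The algebraic identities above are easy to probe numerically. The following minimal sketch, assuming NumPy is available and using hypothetical well-conditioned test matrices, checks properties (3), (4) and (5) and builds the Kronecker sum from its definition:

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical test matrices; the diagonal shift keeps them comfortably invertible.
A = rng.random((2, 2)) + 2.0 * np.eye(2)
B = rng.random((3, 3)) + 2.0 * np.eye(3)
C = rng.random((2, 2))
D = rng.random((3, 3))

def kron_sum(A, B):
    """Kronecker sum: A ⊕ B = A ⊗ I + I ⊗ B."""
    return (np.kron(A, np.eye(B.shape[0]))
            + np.kron(np.eye(A.shape[0]), B))

# Property (3): (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD)
ok3 = np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))
# Property (4): (A ⊗ B)^T = A^T ⊗ B^T
ok4 = np.allclose(np.kron(A, B).T, np.kron(A.T, B.T))
# Property (5): (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}
ok5 = np.allclose(np.linalg.inv(np.kron(A, B)),
                  np.kron(np.linalg.inv(A), np.linalg.inv(B)))
```

Checks of this kind are useful when assembling the generator of a Markov-modulated process, where the Kronecker sum of two generators appears naturally.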
A.2 Perron-Frobenius Theory
Let $\lambda_1, \lambda_2, \ldots, \lambda_{m-1}, \lambda_m$ be all the eigenvalues of the matrix $A$ of size $m$. The spectral radius of the matrix $A$ is defined as
$$\mathrm{sp}(A) = \max\{|\lambda_1|, |\lambda_2|, \ldots, |\lambda_{m-1}|, |\lambda_m|\}.$$
For simplicity of description, we also write $\rho(A) = \mathrm{sp}(A)$.

Consider a square matrix $A = (a_{i,j})_{1 \le i,j \le m}$. If $a_{i,j} \ge 0$ for each pair $(i,j)$ with $1 \le i,j \le m$, then $A$ is called nonnegative; if $a_{i,j} > 0$ for each pair $(i,j)$ with $1 \le i,j \le m$, then $A$ is called positive. The nonnegative matrix $A$ is said to be primitive if there exists a positive integer $k$ such that $A^k > 0$. Obviously, $A$ is primitive if $A > 0$. The nonnegative matrix $A$ is said to be irreducible if $(I + A)^{m-1} > 0$.

For nonnegative matrices, we list some useful properties as follows:

(1) If $0 \le A \le B$, then $0 \le \rho(A) \le \rho(B)$. If $0 < A < B$, then $0 < \rho(A) < \rho(B)$.

(2) If $A \ge 0$, then $\rho(I + A) = 1 + \rho(A)$.

(3) If $A \ge 0$, then $\rho(A)$ is an eigenvalue of the matrix $A$, and there exists a vector $x \ge 0$, $x \ne 0$, such that $Ax = \rho(A)x$ or $xA = \rho(A)x$.

(4) If $A \ge 0$ and there exists a vector $x > 0$ such that $Ax = \lambda x$, then $\lambda = \rho(A)$. If $A > 0$ and there exists a vector $x \ge 0$, $x \ne 0$, such that $Ax = \lambda x$, then $\lambda = \rho(A)$.

(5) If $A \ge 0$, $x > 0$ and $\alpha x \le Ax \le \beta x$, then $\alpha \le \rho(A) \le \beta$. If $A \ge 0$, $x > 0$ and $\alpha x < Ax < \beta x$, then $\alpha < \rho(A) < \beta$.

(6) If $A > 0$ or $A$ is primitive, then there exists a unique positive vector $x$ with $xe = 1$ such that $Ax = \rho(A)x$ or $xA = \rho(A)x$.

The following two theorems are always useful in the study of nonnegative matrices and stochastic models.

Theorem A.1 Suppose the nonnegative matrix $A$ is primitive.
(1) $\rho(A) > 0$ and $\rho(A)$ is an eigenvalue of the matrix $A$.
(2) There exist two vectors $x > 0$ and $y > 0$ such that $Ax = \rho(A)x$ and $yA = \rho(A)y$.
(3) $\rho(A) > |\lambda|$ for any eigenvalue $\lambda \ne \rho(A)$.
(4) $\rho(A)$ is a simple root of the characteristic equation $\det(\lambda I - A) = 0$.
(5) $\displaystyle\lim_{k \to \infty} \left[\frac{A}{\rho(A)}\right]^k = xy$, where the vectors $x$ and $y$ in (2) are normalized so that $yx = 1$.

Theorem A.2 Suppose the nonnegative matrix $A$ is irreducible.
(1) $\rho(A) > 0$ and $\rho(A)$ is an eigenvalue of the matrix $A$.
(2) There exist two vectors $x > 0$ and $y > 0$ such that $Ax = \rho(A)x$ and $yA = \rho(A)y$.
(3) $\rho(A)$ is a simple root of the characteristic equation $\det(\lambda I - A) = 0$.
(4) If $\rho(A) = |\lambda|$ for an eigenvalue $\lambda \ne \rho(A)$, then $\lambda = \rho(A)\exp\{2\pi i k/n\}$ with $i^2 = -1$, where $n$ is the index of imprimitivity of $A$ and $1 \le k \le n-1$. At the same time, each $\rho(A)\exp\{2\pi i k/n\}$ is a simple root of the characteristic equation $\det(\lambda I - A) = 0$.
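Part (5) of Theorem A.1 is exactly what power iteration exploits. The sketch below, assuming NumPy and a hypothetical positive stochastic matrix (so that $\rho(A)=1$), approximates the right and left Perron vectors and verifies the rank-one limit:

```python
import numpy as np

def perron_pair(A, iters=500):
    """Power iteration: approximate rho(A) and the right Perron vector
    of a primitive nonnegative matrix A."""
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= x.sum()
    rho = (A @ x)[0] / x[0]
    return rho, x

# A positive (hence primitive) stochastic matrix, so rho(A) = 1.
A = np.array([[0.5, 0.5],
              [0.2, 0.8]])
rho, x = perron_pair(A)
_, y = perron_pair(A.T)   # left Perron vector via the transpose
y = y / (y @ x)           # normalize so that y x = 1

# Theorem A.1(5): [A / rho(A)]^k converges to the rank-one matrix x y.
P = np.linalg.matrix_power(A / rho, 50)
```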
A.3 Inverses of Matrices of Infinite Size
We now compute the inverses of the two matrices $I - R_U$ and $I - G_L$, which are useful in the study of stochastic models. The following two cases are listed.

Case I: The inverse of the matrix $I - R_U$

Let
$$R_U = \begin{pmatrix} 0 & R_{0,1} & R_{0,2} & R_{0,3} & \cdots \\ & 0 & R_{1,2} & R_{1,3} & \cdots \\ & & 0 & R_{2,3} & \cdots \\ & & & 0 & \cdots \\ & & & & \ddots \end{pmatrix}.$$
Then
$$(I - R_U)^{-1} = \begin{pmatrix} I & Z_{0,1} & Z_{0,2} & Z_{0,3} & \cdots \\ & I & Z_{1,2} & Z_{1,3} & \cdots \\ & & I & Z_{2,3} & \cdots \\ & & & I & \cdots \\ & & & & \ddots \end{pmatrix},$$
where
$$Z_{a,b} = \begin{cases} \displaystyle\sum_{k=0}^{b-a-1} X^{(a,b)}_k, & b \ge a+2, \\ R_{a,b}, & b = a+1, \end{cases}$$
$$X^{(a,b)}_k = \begin{cases} R_{a,a+1} R_{a+1,a+2} \cdots R_{b-2,b-1} R_{b-1,b}, & k = 0, \\ \displaystyle\sum_{a < m_1 < m_2 < \cdots < m_k < b} Y^{(a,b)}_{m_1, m_2, \ldots, m_k}, & 1 \le k \le b-a-1, \end{cases}$$
$$Y^{(a,b)}_{m_1, m_2, \ldots, m_k} = R_{a,a+1} R_{a+1,a+2} \cdots R_{m_1 - 1,\, N(m_1)} R_{N(m_1),\, N(m_1)+1} \cdots R_{m_k - 1,\, N(m_k)} R_{N(m_k),\, N(m_k)+1} \cdots R_{b-2,b-1} R_{b-1,b},$$
$$N(m_k) = \begin{cases} m_k + 1, & m_k + 1 \ne m_{k+1}, \\ N(m_{k+1}), & m_k + 1 = m_{k+1}. \end{cases}$$

To understand the sequence $\{Y^{(a,b)}_{m_1, m_2, \ldots, m_k}\}$, we provide four examples as follows:
$$Y^{(0,5)}_{2,4} = R_{0,1} R_{1,3} R_{3,5}, \qquad Y^{(2,6)}_{3,4} = R_{2,5} R_{5,6},$$
$$Y^{(0,5)}_{2,3,4} = R_{0,1} R_{1,5}, \qquad Y^{(2,6)}_{3} = R_{2,4} R_{4,5} R_{5,6}.$$
It is necessary to consider the following two special cases: Markov chains of GI/G/1 type and level-dependent QBD processes. For a Markov chain of GI/G/1 type, let
$$\widetilde{R}_U = \begin{pmatrix} 0 & R_1 & R_2 & R_3 & \cdots \\ & 0 & R_1 & R_2 & \cdots \\ & & 0 & R_1 & \cdots \\ & & & 0 & \cdots \\ & & & & \ddots \end{pmatrix}.$$
Then
$$(I - \widetilde{R}_U)^{-1} = \begin{pmatrix} I & Z_1 & Z_2 & Z_3 & \cdots \\ & I & Z_1 & Z_2 & \cdots \\ & & I & Z_1 & \cdots \\ & & & I & \cdots \\ & & & & \ddots \end{pmatrix},$$
where
$$Z_l = \sum_{i=1}^{\infty}\;\sum_{\substack{n_1 + n_2 + \cdots + n_i = l \\ n_j \ge 1,\; 1 \le j \le i}} R_{n_1} R_{n_2} \cdots R_{n_i}.$$
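The composition-sum formula for $Z_l$ can be checked against a numerically inverted truncation of $I - \widetilde{R}_U$. A minimal sketch, assuming NumPy and hypothetical scalar "blocks" $R_n = 0.4^n$ (any summable choice works):

```python
import numpy as np

# Hypothetical scalar blocks R_1, R_2, ... with sum R_n < 1.
R = {n: 0.4 ** n for n in range(1, 8)}
L = 8  # truncation level

RU = np.zeros((L, L))            # truncated upper-triangular Toeplitz R_U
for i in range(L):
    for j in range(i + 1, L):
        RU[i, j] = R.get(j - i, 0.0)

inv = np.linalg.inv(np.eye(L) - RU)

# Z_l = sum over all compositions n_1 + ... + n_i = l of R_{n_1} ... R_{n_i},
# computed via the equivalent recursion Z_l = R_l + sum_{n=1}^{l-1} R_n Z_{l-n}.
Z = {0: 1.0}
for l in range(1, L):
    Z[l] = R.get(l, 0.0) + sum(R.get(n, 0.0) * Z[l - n] for n in range(1, l))

ok = all(np.isclose(inv[0, l], Z[l]) for l in range(L))
```

The truncation is exact here because the $(0, l)$ entry of the inverse only involves blocks with indices at most $l$.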
For a level-dependent QBD process, we write
$$R_U = \begin{pmatrix} 0 & R_0 & & & \\ & 0 & R_1 & & \\ & & 0 & R_2 & \\ & & & 0 & \ddots \\ & & & & \ddots \end{pmatrix}.$$
Then
$$(I - R_U)^{-1} = \begin{pmatrix} I & X^{(0)}_1 & X^{(0)}_2 & X^{(0)}_3 & \cdots \\ & I & X^{(1)}_1 & X^{(1)}_2 & \cdots \\ & & I & X^{(2)}_1 & \cdots \\ & & & I & \cdots \\ & & & & \ddots \end{pmatrix},$$
where
$$X^{(l)}_k = R_l R_{l+1} R_{l+2} \cdots R_{l+k-1}, \qquad k \ge 1,\; l \ge 0.$$

Case II: The inverse of the matrix $I - G_L$

Let
$$G_L = \begin{pmatrix} 0 & & & & \\ G_{1,0} & 0 & & & \\ G_{2,0} & G_{2,1} & 0 & & \\ G_{3,0} & G_{3,1} & G_{3,2} & 0 & \\ \vdots & \vdots & \vdots & & \ddots \end{pmatrix}.$$
Then
$$(I - G_L)^{-1} = \begin{pmatrix} I & & & & \\ T_{1,0} & I & & & \\ T_{2,0} & T_{2,1} & I & & \\ T_{3,0} & T_{3,1} & T_{3,2} & I & \\ \vdots & \vdots & \vdots & & \ddots \end{pmatrix},$$
where
$$T_{a,b} = \begin{cases} G_{a,b}, & a = b+1, \\ \displaystyle\sum_{k=0}^{a-b-1} H^{(a,b)}_k, & a \ge b+2, \end{cases}$$
$$H^{(a,b)}_k = \begin{cases} G_{a,a-1} G_{a-1,a-2} \cdots G_{b+2,b+1} G_{b+1,b}, & k = 0, \\ \displaystyle\sum_{a > m_1 > m_2 > \cdots > m_k > b} F^{(a,b)}_{m_1, m_2, \ldots, m_k}, & 1 \le k \le a-b-1, \end{cases}$$
$$F^{(a,b)}_{m_1, m_2, \ldots, m_k} = G_{a,a-1} G_{a-1,a-2} \cdots G_{m_1+1,\, L(m_1)} G_{L(m_1),\, L(m_1)-1} \cdots G_{m_k+1,\, L(m_k)} G_{L(m_k),\, L(m_k)-1} \cdots G_{b+2,b+1} G_{b+1,b},$$
$$L(m_k) = \begin{cases} m_k - 1, & m_k - 1 \ne m_{k+1}, \\ L(m_{k+1}), & m_k - 1 = m_{k+1}. \end{cases}$$

To understand the sequence $\{F^{(a,b)}_{m_1, m_2, \ldots, m_k}\}$, we provide two examples as follows:
$$F^{(5,0)}_{4,2} = G_{5,3} G_{3,1} G_{1,0}, \qquad F^{(5,0)}_{4,3,2} = G_{5,1} G_{1,0}.$$
It is necessary to consider the following two special cases: Markov chains of GI/G/1 type and level-dependent QBD processes. For a Markov chain of GI/G/1 type, let
$$\widetilde{G}_L = \begin{pmatrix} 0 & & & & \\ G_1 & 0 & & & \\ G_2 & G_1 & 0 & & \\ G_3 & G_2 & G_1 & 0 & \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$
Then
$$(I - \widetilde{G}_L)^{-1} = \begin{pmatrix} I & & & & \\ W_1 & I & & & \\ W_2 & W_1 & I & & \\ W_3 & W_2 & W_1 & I & \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix},$$
where
$$W_l = \sum_{i=1}^{\infty}\;\sum_{\substack{n_1 + n_2 + \cdots + n_i = l \\ n_j \ge 1,\; 1 \le j \le i}} G_{n_1} G_{n_2} \cdots G_{n_i}, \qquad l \ge 1.$$
For a level-dependent QBD process, we write
$$G_L = \begin{pmatrix} 0 & & & & \\ G_1 & 0 & & & \\ & G_2 & 0 & & \\ & & G_3 & 0 & \\ & & & \ddots & \ddots \end{pmatrix}.$$
Then
$$(I - G_L)^{-1} = \begin{pmatrix} I & & & & \\ Y^{(1)}_1 & I & & & \\ Y^{(2)}_2 & Y^{(2)}_1 & I & & \\ Y^{(3)}_3 & Y^{(3)}_2 & Y^{(3)}_1 & I & \\ \vdots & \vdots & \vdots & & \ddots \end{pmatrix},$$
where
$$Y^{(l)}_k = G_l G_{l-1} G_{l-2} \cdots G_{l-k+1}, \qquad l \ge k \ge 1.$$
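For the level-dependent QBD case the closed form is a simple product along the subdiagonal, which a finite truncation verifies directly. A sketch assuming NumPy, with hypothetical scalar entries standing in for the blocks $G_l$:

```python
import numpy as np

# Hypothetical scalar subdiagonal entries; G[l] plays the role of G_l.
G = [0.0, 0.3, 0.5, 0.2, 0.4, 0.6]
L = 6

GL = np.zeros((L, L))
for l in range(1, L):
    GL[l, l - 1] = G[l]

inv = np.linalg.inv(np.eye(L) - GL)

def Y(k, l):
    """Y_k^{(l)} = G_l G_{l-1} ... G_{l-k+1} (empty product = 1 for k = 0)."""
    out = 1.0
    for j in range(l, l - k, -1):
        out *= G[j]
    return out

# The (l, l-k) entry of (I - G_L)^{-1} should equal Y_k^{(l)}.
ok = all(np.isclose(inv[l, l - k], Y(k, l))
         for l in range(L) for k in range(l + 1))
```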
References
1. A. Graham (1981). Kronecker Products and Matrix Calculus: with Applications, Ellis Horwood Limited
2. E. Seneta (1981). Non-negative Matrices and Markov Chains, Springer-Verlag: New York
Appendix B Heavy-Tailed Distributions

In this appendix, we provide definitions and preliminary properties for light tail, heavy tail, long tail, subexponentiality and regular variation for sequences of nonnegative matrices. These preliminaries are useful in the study of block-structured stochastic models.

Following the standard definition of the light tail for a sequence of nonnegative scalars, the light tail of a sequence of nonnegative matrices is defined as follows.

Definition B.1 A sequence $\{C_k\}$ of nonnegative scalars is called light-tailed if
$$\sum_{k=1}^{\infty} C_k \exp\{\varepsilon k\} < \infty \quad \text{for some } \varepsilon > 0.$$
A sequence $\{C_k\}$ of nonnegative matrices of size $m \times n$ is called light-tailed if for all $i = 1, 2, \ldots, m$ and $j = 1, 2, \ldots, n$, the sequences $\{C_k(i,j)\}$ of nonnegative scalars are light-tailed, where $C_k(i,j)$ is the $(i,j)$th entry of $C_k$.

For the study of the light tail, readers may refer to Wilf [13] and Abate and Whitt [1] for more details, some of which are used in Sections 4.3 and 4.4. Based on the light-tailed definition, we now define the heavy tail as follows.

Definition B.2 A sequence $\{C_k\}$ of nonnegative scalars is called heavy-tailed if
$$\sum_{k=1}^{\infty} C_k \exp\{\varepsilon k\} = \infty \quad \text{for all } \varepsilon > 0.$$
A sequence $\{C_k\}$ of nonnegative matrices of size $m \times n$ is called heavy-tailed if there exists at least one pair $(i_0, j_0)$ such that the sequence $\{C_k(i_0, j_0)\}$ of nonnegative scalars is heavy-tailed.
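The two defining sums can be probed numerically: for a light-tailed sequence the sum with a small $\varepsilon$ converges, while for a heavy-tailed one it keeps growing. A small sketch using hypothetical example sequences (geometric versus power law):

```python
import math

def partial_sum(c, eps, n):
    """Partial sum of  sum_{k=1}^{n} c(k) * exp(eps * k)."""
    return sum(c(k) * math.exp(eps * k) for k in range(1, n + 1))

light = lambda k: 0.5 ** k       # geometric sequence: light-tailed
heavy = lambda k: k ** -2.0      # power-law sequence: heavy-tailed

# For eps = 0.1 < ln 2 the geometric sum has already converged ...
s200, s400 = partial_sum(light, 0.1, 200), partial_sum(light, 0.1, 400)
# ... while the power-law sum keeps growing for every eps > 0.
h200, h400 = partial_sum(heavy, 0.1, 200), partial_sum(heavy, 0.1, 400)
```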
According to Subsection 1.3 and Subsection 1.4 in Embrechts, Klüppelberg and Mikosch [6], we provide the following definitions for a sequence of nonnegative scalars to be heavy-tailed, long-tailed, subexponential and regularly varying. Throughout, $\bar{c}_n = \sum_{k > n} c_k$ denotes the tail of a sequence $\{c_n\}$, and $\{c^{2*}_n\}$ denotes the two-fold convolution of $\{c_n\}$ with itself.

Definition B.3 (1) A sequence $\{c_n\}$ of nonnegative scalars with $\sum_{n=0}^{\infty} c_n < \infty$ is called heavy-tailed if for all $\varepsilon > 0$
$$\sum_{n=0}^{\infty} c_n \exp\{\varepsilon n\} = \infty. \qquad \text{(A)}$$
Otherwise, $\{c_n\}$ is called light-tailed. Denote by $\mathcal{H}$ the class of the heavy-tailed sequences.

(2) A sequence $\{c_n\}$ of nonnegative scalars with $\sum_{n=0}^{\infty} c_n < \infty$ is called long-tailed if $\bar{c}_n > 0$ for all $n > N$, where $N$ is a large enough positive integer, and
$$\lim_{n \to \infty} \frac{\bar{c}_{n+m}}{\bar{c}_n} = 1 \quad \text{for any integer } m \ge 0. \qquad \text{(B)}$$
Denote by $\mathcal{L}$ the class of the long-tailed sequences.

(3) A probability sequence $\{c_n\}$ is called subexponential if
$$\lim_{n \to \infty} \frac{\overline{c^{2*}}_n}{\bar{c}_n} = 2. \qquad \text{(C)}$$
Denote by $\mathcal{S}$ the class of the subexponential sequences.

(4) (i) A sequence $\{l_n\}$ of nonnegative scalars with $\sum_{n=0}^{\infty} l_n < \infty$ is called slowly varying if $\bar{l}_n > 0$ for $n > N$, where $N$ is a large enough positive integer, and $\lim_{n \to \infty} \bar{l}_{[\lambda n]}/\bar{l}_n = 1$ for any $\lambda > 0$. Denote by $\mathcal{R}_0$ the class of the slowly varying sequences. (ii) A sequence $\{c_n\}$ of nonnegative scalars with $\sum_{n=0}^{\infty} c_n < \infty$ is called regularly varying with index $\alpha \in (-\infty, \infty)$ if $\bar{c}_n = n^{\alpha}\,\bar{l}_n$ for all $n \ge N$, where $\{l_n\}$ is slowly varying. Denote by $\mathcal{R}_{\alpha}$ the class of the regularly varying sequences with index $\alpha$.

Let $\{c_k\}$ be a sequence of nonnegative scalars with $\sum_{k=0}^{\infty} c_k = c < \infty$, and write $\tilde{c}_k = c_k/c$. Then $\overline{c^{2*}}_n = c^2\,\overline{\tilde{c}^{2*}}_n$, and
$$\lim_{n \to \infty} \frac{\overline{c^{2*}}_n}{\bar{c}_n} = 2c \qquad \text{(D)}$$
if and only if $\{\frac{1}{c}c_k\}$ is subexponential. According to Teugels [15], properties of a subexponential sequence also hold for a sequence of nonnegative scalars satisfying (D).

To characterize subexponential asymptotics, we need to introduce the class $\mathcal{S}^*$, where $\mathcal{S}^* \subset \mathcal{S}$. For a sequence $\{c_k\}$ of nonnegative scalars with $\mu_c = \sum_{k=0}^{\infty} k c_k < \infty$, we define
$$c^{(I)}_k = \frac{1}{\mu_c}\,\bar{c}_k.$$
Clearly, $\{c^{(I)}_k\}$ is a probability sequence.
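Conditions (B) and (C) lend themselves to a direct numerical illustration. The sketch below uses a hypothetical power-law probability sequence $p_k \propto k^{-3}$ truncated at a large $N$; at levels well below the truncation point the tail ratio in (B) is close to 1 and the two-fold convolution ratio in (C) is close to 2:

```python
# Hypothetical power-law probability sequence p_k proportional to k^{-3}.
N, n0 = 1500, 300
Z = sum(k ** -3.0 for k in range(1, N + 1))
p = [0.0] + [k ** -3.0 / Z for k in range(1, N + 1)]

def tail(n):
    """bar{p}_n = sum of p_k over k > n (within the truncation)."""
    return sum(p[n + 1:])

# (B): long tail -- the tail ratio bar{p}_{n+1} / bar{p}_n tends to 1.
ratio_long = tail(n0 + 1) / tail(n0)

# (C): subexponentiality -- tail of the 2-fold convolution over the tail -> 2.
conv = [sum(p[i] * p[j - i] for i in range(j + 1)) for j in range(n0 + 1)]
tail_conv = 1.0 - sum(conv)      # the total mass of p * p is 1
ratio_sub = tail_conv / tail(n0)
```

For a light-tailed (e.g., geometric) sequence the ratio in (B) converges to a constant strictly below 1, which is one quick way to tell the two regimes apart numerically.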
Following Klüppelberg [11], the integral tail of the sequence $\{c_k\}$ is defined as $\bar{c}^{(I)}_k$ for $k \ge 1$. Klüppelberg [11] illustrated that for $\{c_k\} \in \mathcal{S}$, it is possible that $\{c^{(I)}_k\} \notin \mathcal{S}$, and provided a useful sufficient condition under which $\{c^{(I)}_k\} \in \mathcal{S}$.

Definition B.4 A sequence $\{c_k\}$ of nonnegative scalars is in $\mathcal{S}^*$ if $\mu_c < \infty$ and
$$\lim_{k \to \infty} \frac{1}{\bar{c}_k}\sum_{l=0}^{k} \bar{c}_{k-l}\,\bar{c}_l = 2\mu_c.$$

Proposition B.1 (1) If $\{c_k\} \in \mathcal{S}^*$, then $\{c^{(I)}_k\} \in \mathcal{S}$.
(2) If $\{c_k\} \in \mathcal{R}_{-\alpha}$ for $\alpha > 1$, then $\{c^{(I)}_k\} \in \mathcal{R}_{-(\alpha-1)}$.

In (c) of Theorem 5.1 in Goldie and Klüppelberg [10], a condition on the closure under convolution for two subexponential sequences was provided, which is restated in the following proposition.

Proposition B.2 If $\{p_k\}, \{q_k\} \in \mathcal{S}$, then $\{p_k * q_k\} \in \mathcal{S}$ if and only if $\{\lambda p_k + (1-\lambda)q_k\} \in \mathcal{S}$ for all $\lambda \in (0,1)$.

Proof The proof is clear by noting the following two relationships: (1) the sequence $\{p_k\} \in \mathcal{S}$ if and only if its tail $\bar{p}_k$ is subexponential; and (2) the sequence $\{p_k * q_k\} \in \mathcal{S}$ if and only if the convolution $\{(p*q)_k\}$ has a subexponential tail, since
$$(p * q)_k = \sum_{l=0}^{k} p_l\, q_{k-l}.$$

Definition B.5 (1) (Tail-equivalence) Two sequences $\{c_k\}$ and $\{d_k\}$ of nonnegative scalars are called tail-equivalent, denoted as $\bar{c}_k \asymp \xi\,\bar{d}_k$, if
$$\lim_{k \to \infty} \frac{\bar{c}_k}{\bar{d}_k} = \xi \in (0, \infty).$$
(2) (Tail-lighter) A sequence $\{c_k\}$ of nonnegative scalars is tail-lighter than a sequence $\{d_k\}$ of nonnegative scalars, or $\{d_k\}$ is tail-heavier than $\{c_k\}$, denoted as $\bar{c}_k = o(\bar{d}_k)$, if
$$\lim_{k \to \infty} \frac{\bar{c}_k}{\bar{d}_k} = 0.$$
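Both relations in Definition B.5 reduce to a single limiting ratio of tails, which can be approximated by truncated sums. A small sketch with hypothetical sequences:

```python
# c_k = k^{-2} and d_k = 3 k^{-2} are tail-equivalent (ratio -> 1/3),
# while g_k = 2^{-k} is tail-lighter than c_k (ratio -> 0).
def tail_at(f, n, N=200_000):
    """Truncated tail: sum of f(k) over n < k <= N."""
    return sum(f(k) for k in range(n + 1, N + 1))

c = lambda k: k ** -2.0
d = lambda k: 3.0 * k ** -2.0
g = lambda k: 2.0 ** -k

n = 500
r_equiv = tail_at(c, n) / tail_at(d, n)   # ~ 1/3, a limit in (0, infinity)
r_light = tail_at(g, n) / tail_at(c, n)   # ~ 0
```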
Remark B.1 (1) It is easy to check that $\mathcal{H}$, $\mathcal{L}$ and $\mathcal{R}_{\alpha}$ are all closed with respect to tail-equivalence. (2) Teugels [15] proved that $\mathcal{S}$ is closed with respect to tail-equivalence. (3) Goldie and Klüppelberg [10] (p. 445) illustrated that $\mathcal{S}^*$ is closed with respect to tail-equivalence.

Now, we extend the above notions for sequences of nonnegative scalars to sequences of nonnegative matrices. We will abuse the notation, without any confusion, by using the same $\mathcal{H}$, $\mathcal{L}$, $\mathcal{S}$, $\mathcal{S}^*$ and $\mathcal{R}_{\alpha}$ for the classes of heavy-tailed, long-tailed, subexponential and regularly varying matrix sequences, respectively.

Definition B.6 We assume that the nonnegative matrices $B_n$ for $n \ge 1$ have the same size and $\sum_{n=0}^{\infty} B_n$ is finite.

(1) The sequence $\{B_n\}$ of nonnegative matrices is called heavy-tailed if there exists at least one entry sequence of $\{B_n\}$ which is heavy-tailed. Otherwise, $\{B_n\}$ is called light-tailed. Denote by $\mathcal{H}$ the class of the heavy-tailed matrix sequences of all sizes.

(2) The sequence $\{B_n\}$ of nonnegative matrices is called long-tailed if there exists at least one entry sequence of $\{B_n\}$ which is long-tailed and all the other entry sequences are either long-tailed or tail-lighter than a long-tailed entry sequence of $\{B_n\}$. Denote by $\mathcal{L}$ the class of the long-tailed matrix sequences of all sizes.

(3) The sequence $\{B_n\}$ of nonnegative matrices is called subexponential if there exists at least one entry sequence of $\{B_n\}$ which is subexponential and all the other entry sequences are either subexponential or tail-lighter than a subexponential entry sequence of $\{B_n\}$. Denote by $\mathcal{S}$ the class of the subexponential matrix sequences of all sizes.

(4) The sequence $\{B_n\}$ of nonnegative matrices is called regularly varying with index $\alpha \in (-\infty, \infty)$ if there exists at least one entry sequence of $\{B_n\}$ which is regularly varying with index $\alpha$ and all the other entry sequences are either regularly varying with index $\beta \in (-\infty, \alpha]$ or tail-lighter than an entry sequence in $\mathcal{R}_{\alpha}$ of $\{B_n\}$. Denote by $\mathcal{R}_{\alpha}$ the class of the regularly varying matrix sequences with index $\alpha$ of all sizes.

(5) The sequence $\{B_n\}$ of nonnegative matrices is in $\mathcal{S}^*$ if there exists at least one entry sequence of $\{B_n\}$ which is in $\mathcal{S}^*$ and all the other entry sequences are
either in $\mathcal{S}^*$ or tail-lighter than an entry sequence in $\mathcal{S}^*$ of $\{B_n\}$.

We denote by $b(i,j)$ the $(i,j)$th entry of the matrix $B$. For a sequence $\{B_k\}$ of matrices, the tail $\bar{B}_k$ is defined elementwise as $\bar{B}_k = (\bar{b}_k(i,j))$. We denote by $\Omega$ the class of heavy-tailed matrix sequences satisfying the property that for each sequence $\{B_k\}$ in $\Omega$ there exist a heavy-tailed scalar sequence $\{\beta_k\}$ and a finite, non-zero, nonnegative matrix $W$ such that
$$\lim_{k \to \infty} \frac{\bar{B}_k}{\bar{\beta}_k} = W.$$
The sequence $\{\beta_k\}$ of nonnegative scalars and the matrix $W$ are called a uniformly dominant sequence of the matrix sequence $\{B_k\}$ and the associated ratio matrix, respectively.

Proposition B.3 A heavy-tailed matrix sequence $\{B_k\}$ is in $\Omega$ if and only if there exists at least a pair $(i_0, j_0)$ such that the sequence $\{b_k(i_0,j_0)\}$ is heavy-tailed and the limit $\lim_{k \to \infty} \bar{b}_k(i,j)/\bar{b}_k(i_0,j_0)$ is either zero or a positive number for all $i$ and $j$.

Proof For the sufficiency, if there exists at least a pair $(i_0, j_0)$ such that the sequence $\{b_k(i_0,j_0)\}$ is heavy-tailed and the limit $\lim_{k \to \infty} \bar{b}_k(i,j)/\bar{b}_k(i_0,j_0)$ is either zero or a positive number for all $i$ and $j$, then the matrix
$$W = \lim_{k \to \infty} \frac{\bar{B}_k}{\bar{b}_k(i_0,j_0)}$$
is finite, non-zero and nonnegative. We take $\beta_k = b_k(i_0,j_0)$ for $k \ge 1$. This implies $\{B_k\} \in \Omega$.

For the necessity, if $\{B_k\} \in \Omega$, then there exist a heavy-tailed scalar sequence $\{\beta_k\}$ and a finite, non-zero, nonnegative matrix $W$ such that $\lim_{k \to \infty} \bar{B}_k/\bar{\beta}_k = W$. We assume that the $(i_0,j_0)$th entry $w(i_0,j_0)$ of the matrix $W$ is not zero. Then
$$\lim_{k \to \infty} \frac{\bar{b}_k(i,j)}{\bar{b}_k(i_0,j_0)} = \lim_{k \to \infty} \frac{\bar{b}_k(i,j)/\bar{\beta}_k}{\bar{b}_k(i_0,j_0)/\bar{\beta}_k} = \frac{w(i,j)}{w(i_0,j_0)}
To illustrate this, we consider the vector sequence
$$\bar{B}_k = \left(\frac{1}{k},\; \frac{1}{k}\,[1 + a\sin(2\pi\log k)]\right)$$
for $k \ge 1$. Using G.4 in Embrechts and Omey [7] (p. 81-82) yields that $\{b_k(1,2)\} \in \mathcal{S}$ while $\{b_k(1,1)\} \in \mathcal{R}_{-1}$, hence $\{B_k\} \in \mathcal{S}$. It is clear that
$$\frac{\bar{b}_k(1,2)}{\bar{b}_k(1,1)} = 1 + a\sin(2\pi\log k)$$
has neither a finite nor an infinite limit.

The following proposition provides a way of using a sequence of nonnegative scalars to characterize the tail of a sequence of nonnegative matrices. The proof is clear according to Definition B.6 and Remark B.1.

Proposition B.4 Assume a heavy-tailed matrix sequence $\{B_k\} \in \Omega$ with a uniformly dominant sequence $\{\beta_k\}$ and the associated ratio matrix $W$.
(1) $\{B_k\}$ is long-tailed if and only if $\{\beta_k\}$ is long-tailed.
(2) $\{B_k\}$ is subexponential if and only if $\{\beta_k\}$ is subexponential.
(3) $\{B_k\}$ is regularly varying with index $\alpha \in (-\infty, \infty)$ if and only if $\{\beta_k\}$ is regularly varying with index $\alpha$.
(4) $\{B_k\} \in \mathcal{S}^*$ if and only if $\{\beta_k\} \in \mathcal{S}^*$.

We provide some basic properties for heavy-tailed matrix sequences which are useful for characterizing the tail behavior of block-structured stochastic models. For simplicity, we assume that all the nonnegative matrices involved are square matrices with a common size $m$.

Proposition B.5 For two sequences $\{B_k\}$ and $\{C_k\}$ of nonnegative matrices, if (1) there exists a nonnegative invertible matrix $W$ such that $B_k \ge WC_k$ for all $k > N$, where $N$ is a large enough positive integer, and (2) $\{C_k\}$ is heavy-tailed, then $\{B_k\}$ is heavy-tailed.

Proof If $\{C_k\}$ is heavy-tailed, then there exists at least a pair $(i_0, j_0)$ such that the $(i_0,j_0)$th entry sequence $\{C_k(i_0,j_0)\}$ is heavy-tailed. Since $W$ is invertible, each column of $W$ is non-zero. For the $i_0$th column of $W$, we assume that the $(i_1,i_0)$th entry satisfies $w(i_1,i_0) > 0$. Then we obtain
$$\sum_{l=1}^{m} w(i_1,l)\,C_k(l,j_0) \ge w(i_1,i_0)\,C_k(i_0,j_0).$$
Since $B_k \ge WC_k$, we have $B_k(i_1,j_0) \ge w(i_1,i_0)\,C_k(i_0,j_0)$. Notice that $w(i_1,i_0) > 0$ and $\{C_k(i_0,j_0)\}$ is heavy-tailed. It follows from (1) in Definition B.3 that $\{B_k(i_1,j_0)\}$ is heavy-tailed. Therefore, $\{B_k\}$ is heavy-tailed according to (1) in Definition B.6.
Remark B.3 In Proposition B.5, condition (1) can be replaced, for example, by $W > 0$, or by $WB_k \ge C_k$ for all $k > N$, where $N$ is a large enough positive integer.

Proposition B.6 For two sequences $\{B_k\}$ and $\{C_k\}$ of nonnegative matrices, suppose that (1) there exist a nonnegative invertible matrix $V$ and a matrix $W \ge V$ such that $VC_k \le B_k \le WC_k$ for all $k > N$, where $N$ is a large enough positive integer, (2) $\{C_k\} \in \Omega$ and (3) $\{B_k\} \in \mathcal{L}$.
(1) If $\{C_k\} \in \mathcal{S}$, then $\{B_k\} \in \mathcal{S}$.
(2) If $\{C_k\} \in \mathcal{S}^*$, then $\{B_k\} \in \mathcal{S}^*$.
(3) If $\{C_k\} \in \mathcal{R}_{\alpha}$ for $\alpha \in (-\infty, \infty)$, then $\{B_k\} \in \mathcal{R}_{\alpha}$.

Proof We only prove (1); (2) and (3) can be proved similarly.

Under the assumptions that $\{C_k\} \in \mathcal{S}$ and $\{C_k\} \in \Omega$, it follows from Propositions B.3 and B.4 that there exists a pair $(i_0, j_0)$ such that the $(i_0,j_0)$th entry sequence $\{C_k(i_0,j_0)\} \in \mathcal{S}$ and the limit $\lim_{k \to \infty} \bar{C}_k(i,j)/\bar{C}_k(i_0,j_0)$ is equal to either zero or a positive constant. Let
$$\lim_{k \to \infty} \frac{\bar{C}_k}{\bar{C}_k(i_0,j_0)} = \Gamma$$
with the $(i,j)$th entry being $\Gamma(i,j)$. Then $\Gamma$ is finite, non-zero and nonnegative. Hence, for an arbitrarily small number $\varepsilon > 0$ there always exists a large enough positive integer $N_0$ such that
$$\Gamma - \varepsilon ee^T < \frac{\bar{C}_k}{\bar{C}_k(i_0,j_0)} < \Gamma + \varepsilon ee^T \qquad \text{(E)}$$
for all $k > N_0$. This, together with $VC_k \le B_k \le WC_k$ for all $k > N$, means that for all $k > \max\{N, N_0\}$,
$$V(\Gamma - \varepsilon ee^T)\,\bar{C}_k(i_0,j_0) \le \bar{B}_k \le W(\Gamma + \varepsilon ee^T)\,\bar{C}_k(i_0,j_0). \qquad \text{(F)}$$
We denote by $h(i,j)$ and $f(i,j)$ the $(i,j)$th entries of the matrices $H = V(\Gamma - \varepsilon ee^T)$ and $F = W(\Gamma + \varepsilon ee^T)$, respectively. Let $\Theta = \{(i,j): h(i,j) > 0\}$ and $\gamma = \max_{1 \le i,j \le m}\{\Gamma(i,j)\} = \Gamma(i^*,j^*)$. Then $\gamma > 0$ due to $\Gamma \ge 0$, $\Gamma \ne 0$. We take $0 < \varepsilon < \gamma$. Then $V(\Gamma - \varepsilon ee^T) \ne 0$, since $\Gamma(i^*,j^*) - \varepsilon > 0$ and $V$ is invertible. Since $V$ is invertible and nonnegative, each row of $V$ is non-zero. In the $i_1$th row of $V$, we assume that the $(i_1,i^*)$th entry satisfies $v(i_1,i^*) > 0$. Therefore, for a small enough $\varepsilon > 0$,
$$h(i_1,j^*) = \sum_{l=1}^{m} v(i_1,l)[\Gamma(l,j^*) - \varepsilon] \ge v(i_1,i^*)\gamma - \varepsilon\sum_{l=1}^{m} v(i_1,l) > 0,$$
which implies that the set $\Theta$ is not empty. Now, we assume the pair $(i_1,j_1) \in \Theta$; then it follows from (F) that for all $k > \max\{N, N_0\}$,
$$h(i_1,j_1)\,\bar{c}_k(i_0,j_0) \le \bar{b}_k(i_1,j_1) \le f(i_1,j_1)\,\bar{c}_k(i_0,j_0), \qquad \text{(G)}$$
which illustrates that the sequence $\{b_k(i_1,j_1)\}$ is heavy-tailed due to the two facts that $h(i_1,j_1) > 0$ and $\{c_k(i_0,j_0)\} \in \mathcal{S}$. For an arbitrary pair $(i_2,j_2) \ne (i_1,j_1)$, it is clear that for all $k > \max\{N, N_0\}$,
$$0 \le \bar{b}_k(i_2,j_2) \le f(i_2,j_2)\,\bar{c}_k(i_0,j_0). \qquad \text{(H)}$$
It follows from (H) and the left-hand side of (G) that for all $k > \max\{N, N_0\}$,
$$0 \le \bar{b}_k(i_2,j_2) \le \frac{f(i_2,j_2)}{h(i_1,j_1)}\,\bar{b}_k(i_1,j_1), \qquad \text{(I)}$$
which shows that the sequence $\{b_k(i_1,j_1)\}$ is not tail-lighter than the sequence $\{b_k(i_2,j_2)\}$ for each pair $(i_2,j_2) \ne (i_1,j_1)$. Therefore, the assumption that $\{B_k\} \in \mathcal{L}$ implies that $\{b_k(i_1,j_1)\} \in \mathcal{L}$. Noting that $\{c_k(i_0,j_0)\} \in \mathcal{S}$, $\{b_k(i_1,j_1)\} \in \mathcal{L}$, $h(i_1,j_1) > 0$ and $f(i_1,j_1) > 0$, it follows from (G) and (a) of Theorem 2.1 in Klüppelberg [11] that $\{b_k(i_1,j_1)\} \in \mathcal{S}$. Similarly, we can check that for an arbitrary pair $(i,j)$ with $1 \le i,j \le m$, the entry sequence $\{b_k(i,j)\}$ is either subexponential or tail-lighter than the subexponential entry sequence $\{b_k(i_1,j_1)\}$, according to (I). It follows from (3) in Definition B.6 that $\{B_k\} \in \mathcal{S}$.

Remark B.4 (1) In Proposition B.6, conditions (2) and (3) are necessary; refer to (a) of Theorem 2.1 in Klüppelberg [11] for details. (2) Condition (1) can be replaced, for example, by $V > 0$ and $W > 0$.

For two sequences $\{B_k\}$ and $\{C_k\}$ of matrices, $\bar{B}_k * \bar{C}_k$ is defined elementwise as
$$\bar{B}_k * \bar{C}_k = \Big(\sum_{r} \bar{b}_k(i,r) * \bar{c}_k(r,j)\Big).$$
The following three propositions characterize the tail behavior of convolutions for sequences of nonnegative matrices.

Proposition B.7 If (1) $\{p_k\} \in \mathcal{S}$, $\{q_k\}$ is any probability sequence and $\bar{q}_k = o(\bar{p}_k)$, and (2) $\bar{B}_k \sim W\,\bar{p}_k$, $\bar{C}_k \sim V\,\bar{q}_k$, then
$$\bar{B}_k * \bar{C}_k \sim WV\,\bar{p}_k.$$

Proof It is easy to check that
$$\bar{B}_k * \bar{C}_k = \Big(\sum_{r=1}^{m} \bar{b}_k(i,r) * \bar{c}_k(r,j)\Big).$$
Since $\bar{B}_k \sim W\,\bar{p}_k$ and $\bar{C}_k \sim V\,\bar{q}_k$, we obtain
$$\bar{b}_k(i,r) \sim w(i,r)\,\bar{p}_k \quad\text{and}\quad \bar{c}_k(r,j) \sim v(r,j)\,\bar{q}_k.$$
If $w(i,r) = 0$ or $v(r,j) = 0$, then we take $\bar{b}_k(i,r) * \bar{c}_k(r,j) \sim w(i,r)v(r,j)\,\bar{p}_k = 0$. If $w(i,r) \ne 0$ and $v(r,j) \ne 0$, then
$$\bar{b}_k(i,r) * \bar{c}_k(r,j) = w(i,r)v(r,j)\,\frac{\bar{b}_k(i,r)}{w(i,r)} * \frac{\bar{c}_k(r,j)}{v(r,j)}.$$
Since $\bar{b}_k(i,r)/w(i,r) \sim \bar{p}_k$ with $\{p_k\} \in \mathcal{S}$, $\bar{c}_k(r,j)/v(r,j) \sim \bar{q}_k$, and $\bar{q}_k = o(\bar{p}_k)$, it follows from Proposition 2.7 in Sigman [14] that $\bar{p}_k * \bar{q}_k \sim \bar{p}_k$, and so
$$\bar{b}_k(i,r) * \bar{c}_k(r,j) \sim w(i,r)v(r,j)\,\bar{p}_k.$$
Therefore, we obtain
$$\bar{B}_k * \bar{C}_k \sim \Big(\sum_{r=1}^{m} w(i,r)v(r,j)\,\bar{p}_k\Big) = WV\,\bar{p}_k.$$
This completes the proof.

Proposition B.8 If $\{p_k\} \in \mathcal{S}$, and two sequences $\{C^{(1)}_k\}$ and $\{C^{(2)}_k\}$ of nonnegative matrices satisfy $\bar{C}^{(l)}_k \sim H_l\,\bar{p}_k$ for $l = 1, 2$, where $H_1$ and $H_2$ are two finite, non-zero, nonnegative matrices, then
$$\bar{C}^{(1)}_k * \bar{C}^{(2)}_k \sim \big(H_1 ee^T + ee^T H_2\big)\,\bar{p}_k.$$

Proof The condition that $\bar{C}^{(l)}_k \sim H_l\,\bar{p}_k$ for $l = 1, 2$ implies
$$\bar{c}^{(1)}_k(i,r) \sim h_1(i,r)\,\bar{p}_k, \qquad \bar{c}^{(2)}_k(r,j) \sim h_2(r,j)\,\bar{p}_k.$$
Note that
$$\bar{C}^{(1)}_k * \bar{C}^{(2)}_k = \Big(\sum_{r=1}^{m} \bar{c}^{(1)}_k(i,r) * \bar{c}^{(2)}_k(r,j)\Big);$$
using Theorem 5.1 in Goldie and Klüppelberg [10] leads to
$$\bar{c}^{(1)}_k(i,r) * \bar{c}^{(2)}_k(r,j) \sim \big[h_1(i,r) + h_2(r,j)\big]\,\bar{p}_k.$$
Simple computations lead to
$$\bar{C}^{(1)}_k * \bar{C}^{(2)}_k \sim \big(H_1 ee^T + ee^T H_2\big)\,\bar{p}_k.$$
Bİk Cİk
WVk D l (1) , if D E , İk ° ° E (2) if D ! E , ®WVk lİk , ° D ª º T (1) T (2) °¯k «¬«Wee lİk ee V lİk »¼» , if D E .
Proof The first two equalities are obtained by Proposition B.7. The last one follows from a proposition in Feller [8] (p. 278).
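For the first case of Proposition B.9, a scalar numerical sketch (slowly varying parts taken to be constant, made-up indices $\alpha = 2 < \beta = 3$) shows that the heavier tail dominates the convolution:

```python
import numpy as np

K, k = 4000, 200

def rv_prob(alpha):
    """Probability sequence whose tail s_{>=k} is regularly varying of index -alpha."""
    s = np.zeros(K)
    s[1:] = np.arange(1, K, dtype=float) ** -(alpha + 1.0)
    return s / s.sum()

f = rv_prob(2.0)   # alpha = 2: the heavier tail
g = rv_prob(3.0)   # beta  = 3: the lighter tail, g_{>=k} = o(f_{>=k})

fg_tail = np.convolve(f, g)[k:].sum()
f_tail = f[k:].sum()
g_tail = g[k:].sum()

# alpha < beta: (f * g)_{>=k} ~ f_{>=k}, while g_{>=k} is negligible
print(fg_tail / f_tail)   # close to 1
print(g_tail / f_tail)    # close to 0
```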
References

1. J. Abate and W. Whitt (1997). Asymptotics for M/G/1 low-priority waiting-time tail probabilities. Queueing Systems 25: 173–223
2. P. Embrechts (1982). Subexponential distribution functions and their applications: a review. In Proc. 7th Brasov Conf. Probability Theory, VNU Science Press, pp. 125–136
3. P. Embrechts and C.M. Goldie (1980). On closure and factorization properties of subexponential and related distributions. Journal of the Australian Mathematical Society (Series A) 29: 243–256
4. P. Embrechts and C.M. Goldie (1982). On convolution tails. Stochastic Processes and Their Applications 13: 263–278
5. P. Embrechts, C.M. Goldie and N. Veraverbeke (1979). Subexponentiality and infinite divisibility. Z. Wahrscheinlichkeitstheorie Verw. Gebiete 49: 335–347
6. P. Embrechts, C. Klüppelberg and T. Mikosch (1997). Modelling Extremal Events for Insurance and Finance, Springer-Verlag
7. P. Embrechts and E. Omey (1984). A property of longtailed distributions. Journal of Applied Probability 21: 80–87
8. W. Feller (1971). An Introduction to Probability Theory and Its Applications, John Wiley & Sons, Inc.
9. C.M. Goldie (1978). Subexponential distributions and dominated-variation tails. Journal of Applied Probability 15: 440–442
10. C.M. Goldie and C. Klüppelberg (1998). Subexponential distributions. In A Practical Guide to Heavy Tails: Statistical Techniques for Analyzing Heavy Tailed Distributions, R. Adler, R. Feldman and M.S. Taqqu (eds.), Birkhäuser: Boston, pp. 435–459
11. C. Klüppelberg (1988). Subexponential distributions and integrated tails. Journal of Applied Probability 25: 132–141
12. C. Klüppelberg (1989). Subexponential distributions and characterizations of related classes. Probability Theory and Related Fields 82: 259–269
13. H.S. Wilf (1994). Generatingfunctionology (second edition), Academic Press, Inc.
14. K. Sigman (1999). Appendix: A primer on heavy-tailed distributions. Queueing Systems 33: 261–275
15. J.L. Teugels (1975). The class of subexponential distributions. The Annals of Probability 3: 1000–1011
Index
A-measure 72,99,104,105,125,173 accumulated reward 526,530,531,534,536, 542,543,547,566,570,574,597 algebraic singular point 193,194,197 asymmetric evolutionary game 626,631, 636,640,643 asymptotic periodic distribution 422 B-measure 72,104,105,173 block-structured Markov chains 1,9,10,12, 72,73,109,130,175,209,432,521 block-structured Markov renewal processes 288,326,327 block-structured stochastic model 658,663 BMAP 52,53,56,61,62,211,328,332,334, 631,365,366,380,396,398,647 boundary 25,26,28,131,132,137,139,176, 177,192,193,210,250,273,311,313,331, 339,345,346,348,349,357–359,364–366, 368,450,452,476,477,481,484,522 boundary block 25,131 boundary row 176 censoring chain 73 censoring invariance 72,76,81,82,100,103, 104,115,133,136,298,300,306,312,329, 439,481 censoring technique 1,72,73,114,125,216, 217,221,242,288,289,352,427,492,504,517 Cesaro limit 5,423 Chapman-Kolmogorov differential equation 45,51,53,54,56,265,409,411 CMAP 264–266,409 continuous phase variable 216,217,242, 243,245,246,248–250,252,254,255,263, 266,284–287 convolution 38,47,67,178,179,200,205,282, 290,403,660 CPH distribution 263,266
β-discounted 432,434,438,439,481,486, 584,597 double QBD process 518,521 down-type reward 527,560 dual chain 131,145–147 dual generating chain 148,163 dual Markov chain 131,145,146 eigenvector 163,373,438,453,455,464,477, 479,499 Erlang distribution 34,35 evolutionary game 605,607,616,618,622, 626 exponential distribution 1,13,20,21,33,34, 36,44 factorial moment 45,403 finitely-many levels 1,11,12,16,20,23–30, 32,72,73,109,111,113,114,118,122,124, 288,432,433,526,527,548,569,574–576, 581,594 first accumulated time 526,534,536,538 first passage time 72,126,249,280,288,315, 342,344,389,390,401,426 fundamental matrix 5,74,79,85,153,154, 221,290,296,433,448,449,451,458,468,476 fundamental period 323,426,560 geometric tail 176 GMAP 389,409,410,428 GPH distribution 389,390,402–407,433, 507,510,513,514 Harris recurrent 237,240,245,247,248,255, 256,260,261,265,266,282 heavy-tailed 176–178,180–183,185,210, 264,327,328,377,510,514,658,662,663 identity kernel 217,218 infinitely-many levels 1,11,23,27,29,30,32, 72,173,327,377,432,438,444,504,526,527, 554,569,574,575,579,581,594
infinitely-many phases 433,517,519,523 irreducible 3–5,8,20,23–25,27,30,32,36, 37,41,42,45,47–50,53–56,61,72–74,76, 81–83,96,97,106,107–109,113–115,118, 122,125–127,131,135–138,140,144,145, 147,151,156–170,172–174,182–186,190, 191,194,197–199,213,219,221,238,243, 263,264,288–290,293,308,314,317,318, 320,321,327,328,332,338,346,347,350–352,361,365,366,368–370,377,379–381, 389,390,396–406,408,409,411,412,416–422,425,427,428,433,443,437,438,444, 445,447,455–458,467,469,477–479,481, 482,490,494,498,500–502,504,507,508–510,513,514,517,518,522,526–528,530, 531,533,535,542,543,547,551,554,565, 570,574,575,579,585,588,592,597,600, 601,608,622,647,653 kernel 217–223,225–229,231,242,244,245, 248,249,252,253,256,263–267,273,275–278,280,282,283,286 level-dependent 10,11,30,61,66,84,88,94, 95,97,126,216,302,303,333,338,364,365, 370,377,390,399,401,432,433,442,444, 445,447,490,493,496,497,500,510,511, 514,516,523,526,547,568–570,594,627, 633,637,643,655,657 level-independent 9,10,23,24,87,89,97, 127,248,255,256,261,284,342,401,419, 424,425,477,512,517,523,560,562,602, 603,646 light-tailed 176–178,182,183,185,190,192, 194,205,207–211,215,264,327,328,510, 514,658,659,661 linear QBD-equation 29–31 long-tailed 176,198,200–202,204,205,659, 661,663 LU-type RG-factorization 25–27,30,32,90, 92,94,95,110,111,113,114,119,121,126, 173,223,224,248,250,305,307,310,314, 322,391,396,402,403,406,407,435,436, 438,443,444,506,523,528,533,547,548, 549,568,582,593–595,646,647 MAP 43,50–52,55,61,62,65,349,389,575, 601,602 MAP(t) 411,412
Markov chains 1,2,4,8–10,12,23,43,60,61, 65,67–69,72,73,75,76,81,84,87–89,94,95, 98,109,113–115,124,125,126,128–132, 135,137,145,148,150,157,159,170,172– 175,190,208–210,214,216–218,220,222, 226,227,237,238,241,261,264,284,285– 287,346,352,385,389,392,395,399,412, 417,422,424,426,428,430–433,438,442, 447,492,500,514,516,521–525,570,572, 575,579,583,584,592,603,606,645–651, 655,657 Markov chains of GI/G/1 type 130,131, 132,175,214,220,226,432,433,442,481 Markov chains of GI/M/1 type 88,89,95, 113,125,220,260,432,442,457,516,522, 585 Markov chains of M/G/1 type 10,84,87, 94,113,125,208,220,227,432,433,447, 477,514,522,588 Markov property 2,6,75,268,291,543,544, 548,550,555,597 Markov renewal equation 304 Markov renewal matrix 304,327 Markov renewal process 125,288,289, 298,302–304,311,322,326–330,426,531, 568,569 Markov renewal processes of GI/G/1 type 288,311,322 Markov renewal processes of GI/M/1 type 303 Markov renewal processes of M/G/1 type 302 matrix generating function 132 matrix-exponential distribution 1,57–60 matrix-geometric solution 69,71,98,210, 432,574,585,647,649 matrix-iterative solution 218,432,574,585 matrix-product solution 113,114 maximal eigenvalue 163,169,186,190,198, 199,451,453,455,456,461–463,465,468, 474,477,479,483,486,493,499,508,601 maximal non-positive inverse 31,32,115, 336,547 MBMAP[K] 55,57 mean drift 161,164,261,283,339,342,370 measure 125,200,233,234,236,237,273, 275,277–281,400,441,457,470,471,474, 478,479,576,583–585,588,592,593,596
memoryless probability distribution 270 memoryless property 33 minimal nonnegative inverse 74,79,290, 304,412,580 minimal nonnegative solution 24,29,30, 86–89,277,228,233,236,244,245,267,276, 277,283,303,333,336,340,352,370,425, 449,460,491,517,545,546,563,586,589, 595,603 minimal positive solution 185,319,485,486 MMAP[K] 53–55,62,67,647 MMPHP 49 MMPP 48–50 null recurrent 4,5,156,157,159,160–164, 167,172,174,320–422,433,451,457,478, 480,498,603 α-null recurrent 433,451,457,478,480,498 orthogonal 228,229,240,252,253,255–257, 260–263,285,377 orthogonal base 228,229,261 orthogonal decomposition 288 PDMP 267,274–280,285 period 3,5,13,21,128,174,268,323,328,345, 346,356,362,363,377,379,383,390,405,408, 426–431,560,570,619,647 perturbed birth death process 574,618,619 perturbed MAP 575,600,601,603,647 perturbed Markov chain 574,576,579,582, 596,597,601,647 perturbed PH distribution 575,600,647 perturbed QBD process 597,603 perturbed stationary performance measure 576–579 perturbed transient performance measure 596 perturbed-directional matrix 575,579,585, 588,592,594 PH distribution 1,33,35–37,39,40–42,45, 47,48,57,58,61,66,263,264,266,377,389, 390,401–404,406,407,411,417,433,507, 510,514,569,575,600,601,647 PH renewal process 43,44,47,49,409 PH(t) 410,411 Poisson process 13,14,16,20,21,43,44,48, 50,63,211,212,399 pole 186,188,193,194
positive recurrent 4,5,8,25,27,29,30,73,76, 96,97,108,109,113,144,147,151,156–170, 172–174,178,180,182–184,320,327,379, 380,478,500,592,603,622 α-positive recurrent 433–438,457,478,480, 498,500 potential vector 581,593,645 QBD process 1,10,12,23,266,477,542,554, 657 quasi-stationary distribution 432–434,437, 441,442,444,445,469,472,481,487,496,501, 504,523 R-measure 32,72,73,87,91,96,104,105,120, 146,172,227,359 radii of convergence 318 radius of convergence 169,177,183,185, 433,456,461,465,490,522 realization matrix 581,582,593,594,645 recurrent 4,5,8,29,76,96,107,147,162,226, 255,320,347,452,457,520,556,601,603 α-recurrent 433,437,452,492,504 reducible 3,191 regularly varying 176,198,205,209–212, 659,661,663 renewal density function 47,60 renewal process 43,44,47,49,51,57,60,126, 131,228,232,294,317,332,409 repeated block 131 repeated row 23,24,137,138,160,165,172, 456,482,517 return-type reward 527,560,564 reward moment 536 reward process 526,527,531,542,543,548, 549,560,564,565,597 reward rate 526,529,531,548,554,565,567, 569,576,579 RG-factorization 1,23,25–33,68,72,76,90, 109,110,112–115,117,119,121,122,124–126,131,138,160,173,174,217,222,223, 243,248,294,301,305,309,311,326,403,406, 434,435,439,468,502,505–507,558 RG-factorization for repeated row 517 semi-geometric tail 176 semi-dynamic system 268,269,274 sensitivity analysis 129,174,570,574,575, 581,582,584–586,588–590,592,593,596, 597,600,601,603,647
sojourn time 126,249,293,315,332,334, 337,375,376,378,379,382,383,389,398, 399,426,430,555 spectral analysis 131,132,138,158,173,175, 288,317,451,461 spectral radius 140,205,245,317,508,510 state classification 4,8,72,73,98,131,132, 148,150,157,159–161,164,165,172–174, 288,289,325,326,519,576,579,585,588, 592,603 state α-classification 432,433,437,438,440, 447,450,451,457,468,481,486,487,490, 492,500,519,522 stationary performance measure 576,585, 588,592,593,645 stationary probability 23,24,30,31,38,46, 48,51,54,56,72,73,83,96,113,123,213,224, 241,355,517,592,646 stationary probability vector 23,24,30,46, 72,96,113,123,176,182,326,586,601 stochastic model 1,33,36,37
subexponential 176,198,205–207,210,211, 213–215,658,659,660,663,665,667,668 supplementary variable 331,346,360,362, 572 symmetric evolutionary game 605,607, 616,618,622,626,631,636,640,643,647,648 time-inhomogeneous 389,390,410–412, 420,422,427,430 transient 4,5,25,27,29,32,33,389,390,405, 426,536,584,623 α-transient 433,435–437,440,441,451,452, 468,470,493,503 transient probability 72,267,389,390,397, 400,405,408,412,420,421,431,565 transient solution 389,408,426 U-measure 25,27,72,82,87,89,91,224,228, 243,306,359,506 UL-type RG-factorization 27,29,33,64,76, 83,92,109,110–115,117,119,122,124,222, 243,294,309,432,439,447,491,502 up-type reward 527,548,554 Wiener-Hopf equation 72