Founded in 1807, John Wiley & Sons is the oldest independent publishing company in the United States. With offices in North America, Europe, Australia, and Asia, Wiley is globally committed to developing and marketing print and electronic products and services for our customers’ professional and personal knowledge and understanding. The Wiley Finance series contains books written specifically for finance and investment professionals as well as sophisticated individual investors and their financial advisors. Book topics range from portfolio management to e-commerce, risk management, financial engineering, valuation, and financial instrument analysis, as well as much more. For a list of available titles, please visit our Web site at www.WileyFinance.com.
Advanced Analytical Models: Over 800 Models and 300 Applications from the Basel II Accord to Wall Street and Beyond
JOHNATHAN MUN
John Wiley & Sons, Inc.
Copyright © 2008 by Johnathan Mun. All rights reserved.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600, or on the Web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

Designations used by companies to distinguish their products are often claimed as trademarks. In all instances where John Wiley & Sons, Inc., is aware of a claim, the product names appear in initial capital or all capital letters. Readers, however, should contact the appropriate companies for more complete information regarding trademarks and registration.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993, or fax (317) 572-4002. Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our Web site at www.wiley.com.

Library of Congress Cataloging-in-Publication Data:

Mun, Johnathan.
Advanced analytical models : over 800 models and 300 applications from the Basel II Accord to Wall Street and beyond / Johnathan Mun.
p. cm. — (Wiley finance series)
Includes index.
ISBN 978-0-470-17921-5 (cloth/dvd)
1. Finance—Mathematical models. 2. Risk assessment—Mathematical models. 3. Mathematical models. 4. Computer simulation. I. Title.
HG106.M86 2008
003.3—dc22
2007039385

Printed in the United States of America.
10 9 8 7 6 5 4 3 2 1
Dedicated to my wife, Penny. Without your encouragement, advice, and support, this modeling book would never have taken off.

"Delight yourself in the Lord and He will give you the desires of your heart."
—Psalms 37:4 (NIV)
Contents
Preface

Software Applications

PART 1
Modeling Toolkit and Risk Simulator Applications

Introduction to the Modeling Toolkit Software
Introduction to Risk Simulator
Running a Monte Carlo Simulation
Using Forecast Charts and Confidence Intervals
Correlations and Precision Control
Tornado and Sensitivity Tools in Simulation
Sensitivity Analysis
Distributional Fitting: Single Variable and Multiple Variables
Bootstrap Simulation
Hypothesis Testing
Data Extraction, Saving Simulation Results, and Generating Reports
Regression and Forecasting Diagnostic Tool
Statistical Analysis Tool
Distributional Analysis Tool
Portfolio Optimization
Optimization with Discrete Integer Variables
Forecasting

1. Analytics—Central Limit Theorem
2. Analytics—Central Limit Theorem—Winning Lottery Numbers
3. Analytics—Flaw of Averages
4. Analytics—Mathematical Integration Approximation Model
5. Analytics—Projectile Motion
6. Analytics—Regression Diagnostics
7. Analytics—Ships in the Night
8. Analytics—Statistical Analysis
9. Analytics—Weighting of Ratios
10. Credit Analysis—Credit Premium
11. Credit Analysis—Credit Default Swaps and Credit Spread Options
12. Credit Analysis—Credit Risk Analysis and Effects on Prices
13. Credit Analysis—External Debt Ratings and Spread
14. Credit Analysis—Internal Credit Risk Rating Model
15. Credit Analysis—Profit Cost Analysis of New Credit
16. Debt Analysis—Asset-Equity Parity Model
17. Debt Analysis—Cox Model on Price and Yield of Risky Debt with Mean-Reverting Rates
18. Debt Analysis—Debt Repayment and Amortization
19. Debt Analysis—Debt Sensitivity Models
20. Debt Analysis—Merton Price of Risky Debt with Stochastic Asset and Interest
21. Debt Analysis—Vasicek Debt Option Valuation
22. Debt Analysis—Vasicek Price and Yield of Risky Debt
23. Decision Analysis—Decision Tree Basics
24. Decision Analysis—Decision Tree with EVPI, Minimax, and Bayes' Theorem
25. Decision Analysis—Economic Order Quantity and Inventory Reorder Point
26. Decision Analysis—Economic Order Quantity and Optimal Manufacturing
27. Decision Analysis—Expected Utility Analysis
28. Decision Analysis—Inventory Control
29. Decision Analysis—Queuing Models
30. Exotic Options—Accruals on Basket of Assets
31. Exotic Options—American, Bermudan, and European Options with Sensitivities
32. Exotic Options—American Call Option on Foreign Exchange
33. Exotic Options—American Call Options on Index Futures
34. Exotic Options—American Call Option with Dividends
35. Exotic Options—Asian Lookback Options Using Arithmetic Averages
36. Exotic Options—Asian Lookback Options Using Geometric Averages
37. Exotic Options—Asset or Nothing Options
38. Exotic Options—Barrier Options
39. Exotic Options—Binary Digital Options
40. Exotic Options—Cash or Nothing Options
41. Exotic Options—Chooser Option (Simple Chooser)
42. Exotic Options—Chooser Option (Complex Chooser)
43. Exotic Options—Commodity Options
44. Exotic Options—Currency (Foreign Exchange) Options
45. Exotic Options—Double Barrier Options
46. Exotic Options—European Call Option with Dividends
47. Exotic Options—Exchange Assets Option
48. Exotic Options—Extreme Spreads Option
49. Exotic Options—Foreign Equity–Linked Foreign Exchange Options in Domestic Currency
50. Exotic Options—Foreign Equity Struck in Domestic Currency
51. Exotic Options—Foreign Equity with Fixed Exchange Rate
52. Exotic Options—Foreign Takeover Options
53. Exotic Options—Forward Start Options
54. Exotic Options—Futures and Forward Options
55. Exotic Options—Gap Options
56. Exotic Options—Graduated Barrier Options
57. Exotic Options—Index Options
58. Exotic Options—Inverse Gamma Out-of-the-Money Options
59. Exotic Options—Jump-Diffusion Options
60. Exotic Options—Leptokurtic and Skewed Options
61. Exotic Options—Lookback with Fixed Strike (Partial Time)
62. Exotic Options—Lookback with Fixed Strike
63. Exotic Options—Lookback with Floating Strike (Partial Time)
64. Exotic Options—Lookback with Floating Strike
65. Exotic Options—Min and Max of Two Assets
66. Exotic Options—Options on Options
67. Exotic Options—Option Collar
68. Exotic Options—Perpetual Options
69. Exotic Options—Range Accruals (Fairway Options)
70. Exotic Options—Simple Chooser
71. Exotic Options—Spread on Futures
72. Exotic Options—Supershare Options
73. Exotic Options—Time Switch Options
74. Exotic Options—Trading-Day Corrections
75. Exotic Options—Two-Asset Barrier Options
76. Exotic Options—Two Asset Cash or Nothing
77. Exotic Options—Two Correlated Assets Option
78. Exotic Options—Uneven Dividend Payments Option
79. Exotic Options—Writer Extendible Option
80. Forecasting—Data Diagnostics
81. Forecasting—Econometric, Correlations, and Multiple Regression Modeling
82. Forecasting—Exponential J-Growth Curves
83. Forecasting—Forecasting Manual Computations
84. Forecasting—Linear Interpolation and Nonlinear Spline Extrapolation
85. Forecasting—Logistic S-Growth Curves
86. Forecasting—Markov Chains and Market Share
87. Forecasting—Multiple Regression
88. Forecasting—Nonlinear Extrapolation and Forecasting
89. Forecasting—Stochastic Processes, Brownian Motion, Forecast Distribution at Horizon, Jump-Diffusion, and Mean-Reversion
90. Forecasting—Time-Series ARIMA
91. Forecasting—Time-Series Analysis
92. Industry Applications—Biotech Manufacturing Strategy
93. Industry Applications—Biotech Inlicensing Drug Deal Structuring
94. Industry Applications—Biotech Investment Valuation
95. Industry Application—Banking: Integrated Risk Management, Probability of Default, Economic Capital, Value at Risk, and Optimal Bank Portfolios
96. Industry Application—Electric/Utility: Optimal Power Contract Portfolios
97. Industry Application—IT—Information Security Intrusion Risk Management
98. Industry Applications—Insurance ALM Model
99. Operational Risk—Queuing Models at Bank Branches
100. Optimization—Continuous Portfolio Allocation
101. Optimization—Discrete Project Selection
102. Optimization—Inventory Optimization
103. Optimization—Investment Portfolio Allocation
104. Optimization—Investment Capital Allocation I (Basic Model)
105. Optimization—Investment Capital Allocation II (Advanced Model)
106. Optimization—Military Portfolio and Efficient Frontier
107. Optimization—Optimal Pricing with Elasticity
108. Optimization—Optimization of a Harvest Model
109. Optimization—Optimizing Ordinary Least Squares
110. Optimization—Stochastic Portfolio Allocation
111. Options Analysis—Binary Digital Instruments
112. Options Analysis—Inverse Floater Bond
113. Options Analysis—Options-Trading Strategies
114. Options Analysis—Options-Adjusted Spreads Lattice
115. Options Analysis—Options on Debt
116. Options Analysis—Five Plain Vanilla Options
117. Probability of Default—Bond Yields and Spreads (Market Comparable)
118. Probability of Default—Empirical Model
119. Probability of Default—External Options Model (Public Company)
120. Probability of Default—Merton Internal Options Model (Private Company)
121. Probability of Default—Merton Market Options Model (Industry Comparable)
122. Project Management—Cost Estimation Model
123. Project Management—Critical Path Analysis (CPM PERT GANTT)
124. Project Management—Project Timing
125. Real Estate—Commercial Real Estate ROI
126. Risk Analysis—Integrated Risk Analysis
127. Risk Analysis—Interest Rate Risk
128. Risk Analysis—Portfolio Risk Return Profiles
129. Risk Hedging—Delta-Gamma Hedging
130. Risk Hedging—Delta Hedging
131. Risk Hedging—Effects of Fixed versus Floating Rates
132. Risk Hedging—Foreign Exchange Cash Flow Model
133. Risk Hedging—Hedging Foreign Exchange Exposure
134. Sensitivity—Greeks
135. Sensitivity—Tornado and Sensitivity Charts Linear
136. Sensitivity—Tornado and Sensitivity Nonlinear
137. Simulation—Basic Simulation Model
138. Simulation—Best Surgical Team
139. Simulation—Correlated Simulation
140. Simulation—Correlation Effects on Risk
141. Simulation—Data Fitting
142. Simulation—Debt Repayment and Amortization
143. Simulation—Demand Curve and Elasticity Estimation
144. Simulation—Discounted Cash Flow, Return on Investment, and Volatility Estimates
145. Simulation—Infectious Diseases
146. Simulation—Recruitment Budget (Negative Binomial and Multidimensional Simulation)
147. Simulation—Retirement Funding with VBA Macros
148. Simulation—Roulette Wheel
149. Simulation—Time Value of Money
150. Six Sigma—Obtaining Statistical Probabilities, Basic Hypothesis Tests, Confidence Intervals, and Bootstrapping Statistics
151. Six Sigma—One- and Two-Sample Hypothesis Tests Using t-Tests, Z-Tests, F-Tests, ANOVA, and Nonparametric Tests (Friedman, Kruskal Wallis, Lilliefors, and Runs Tests)
152. Six Sigma—Sample Size Determination and Design of Experiments
153. Six Sigma—Statistical and Unit Capability Measures, Specification Levels, and Control Charts
154. Valuation—Buy versus Lease
155. Valuation—Banking: Classified Loan Borrowing Base
156. Valuation—Banking: Break-Even Inventory with Seasonal Lending Trial Balance Analysis
157. Valuation—Banking: Firm in Financial Distress
158. Valuation—Banking: Pricing Loan Fees Model
159. Valuation—Valuation Model
160. Value at Risk—Optimized and Simulated Portfolio VaR
161. Value at Risk—Options Delta Portfolio VaR
162. Value at Risk—Portfolio Operational and Credit Risk VaR Capital Adequacy
163. Value at Risk—Right-Tail Capital Requirements
164. Value at Risk—Static Covariance Method
165. Volatility—Implied Volatility
166. Volatility—Volatility Computations (Log Returns, Log Assets, Implied Volatility, Management Assumptions, EWMA, GARCH)
167. Yield Curve—CIR Model
168. Yield Curve—Curve Interpolation BIM Model
169. Yield Curve—Curve Interpolation NS Model
170. Yield Curve—Forward Rates from Spot Rates
171. Yield Curve—Term Structure of Volatility
172. Yield Curve—U.S. Treasury Risk-Free Rates and Cubic Spline Curves
173. Yield Curve—Vasicek Model

PART 2
Real Options SLS Applications

174. Introduction to the SLS Software
     Single Asset and Single Phased Module
     Multiple Asset or Multiple Phased SLS Module
     Multinomial SLS Module
     SLS Excel Solution Module
     SLS Excel Functions Module
     Lattice Maker Module
175. Employee Stock Options—Simple American Call Option
176. Employee Stock Options—Simple Bermudan Call Option with Vesting
177. Employee Stock Options—Simple European Call Option
178. Employee Stock Options—Suboptimal Exercise
179. Employee Stock Options—Vesting, Blackout, Suboptimal, Forfeiture
180. Exotic Options—American and European Lower Barrier Options
181. Exotic Options—American and European Upper Barrier Options
182. Exotic Options—American and European Double Barrier Options and Exotic Barriers
183. Exotic Options—Basic American, European, and Bermudan Call Options
184. Exotic Options—Basic American, European, and Bermudan Put Options
185. Real Options—American, European, Bermudan, and Customized Abandonment Options
186. Real Options—American, European, Bermudan, and Customized Contraction Options
187. Real Options—American, European, Bermudan, and Customized Expansion Options
188. Real Options—Contraction, Expansion, and Abandonment Options
189. Real Options—Dual Variable Rainbow Option Using Pentanomial Lattices
190. Real Options—Exotic Chooser Options
191. Real Options—Exotic Complex Floating American and European Chooser
192. Real Options—Jump-Diffusion Option Using Quadranomial Lattices
193. Real Options—Mean-Reverting Calls and Puts Using Trinomial Lattices
194. Real Options—Multiple Assets Competing Options
195. Real Options—Path-Dependent, Path-Independent, Mutually Exclusive, Non–Mutually Exclusive, and Complex Combinatorial Nested Options
196. Real Options—Sequential Compound Options
197. Real Options—Simultaneous Compound Options
198. Real Options—Simple Calls and Puts Using Trinomial Lattices

PART 3
Real Options Strategic Case Studies—Framing the Options

199. Real Options Strategic Cases—High-Tech Manufacturing: Build or Buy Decision with Real Options
200. Real Options Strategic Cases—Oil and Gas: Farm-Outs, Options to Defer, and Value of Information
201. Real Options Strategic Cases—Pharmaceutical Development: Value of Perfect Information and Optimal Trigger Values
202. Real Options Strategic Cases—Option to Switch Inputs
203. Valuation—Convertible Warrants with a Vesting Period and Put Protection

APPENDIX A: List of Models
APPENDIX B: List of Functions
APPENDIX C: Understanding and Choosing the Right Probability Distributions
APPENDIX D: Financial Statement Analysis
APPENDIX E: Exotic Options Formulae
APPENDIX F: Measures of Risk
APPENDIX G: Mathematical Structures of Stochastic Processes

Glossary of Input Variables and Parameters in the Modeling Toolkit Software
About the DVD
About the Author
Index
Preface
Advanced Analytical Models is a large collection of advanced models with a multitude of industry and domain applications. The book is based on years of academic research and practical consulting experience, coupled with domain expert contributions. The Modeling Toolkit software that holds all the models, the Risk Simulator software, and the Real Options SLS software were all developed by the author, with over 1,000 functions, tools, and model templates among them. Trial versions are included in the accompanying DVD.

The applications covered are vast. Included are Basel II banking risk requirements (credit risk, market risk, credit spreads, default risk, value at risk, etc.), financial analysis (exotic options and valuation), risk analysis (stochastic forecasting, risk-based Monte Carlo simulation, optimization), real options analysis (strategic options and decision analysis), Six Sigma and quality initiatives, management science and statistical applications, and everything in between, such as applied statistics, manufacturing, decision analysis, operations research, optimization, forecasting, and econometrics.

This book is targeted at practitioners who require the algorithms, examples, models, and insights to solve more advanced and even esoteric problems. It does not merely talk about modeling or illustrate basic concepts and examples; it comes complete with a DVD filled with sample modeling videos, case studies, and software applications to help you get started immediately. In other words, this book dispenses with the theoretical discussions and mathematical models that are extremely hard to decipher and apply in the real business world. Instead, these theoretical models have been coded into user-friendly and powerful software, and this book shows the reader how to start applying advanced modeling techniques almost immediately. The trial software applications let you access the approximately 300 model templates and 800 functions and tools, understand the concepts, and use the embedded functions and algorithms in your own models. In addition, you can run risk-based Monte Carlo simulations and advanced forecasting methods, perform optimization on a myriad of situations, and structure and solve customized real options and financial options problems.

Each model template that comes in the Modeling Toolkit software is described in this book, in as much detail as the applications warrant. Some of the more fundamental concepts in risk analysis and real options are covered in the author's other books. It is suggested that these books, Modeling Risk: Applying Monte Carlo Simulation, Real Options Analysis, Stochastic Forecasting, and Portfolio Optimization (2006) and Real Options Analysis, Second Edition (2005), both published by John Wiley & Sons, be used as references for some of the models in this book. Those modeling issues that are, in the author's opinion, critical, whether they are basic or more advanced analytical ones, are presented in detail. As software applications change continually, it is recommended that you check the author's web site (www.realoptionsvaluation.com) frequently for any analytical updates, software upgrades, and revised or new models.

ACKNOWLEDGMENTS

A special thank you to the contributors, including Mark Benyovszky, Morton Glantz, Uriel Kusiatin, and Victor Wong.

DR. JOHNATHAN MUN
[email protected]
California, USA
Software Applications
This book covers the following software applications:

Modeling Toolkit
Over 800 functions, models, and tools and over 300 Excel and SLS templates covering the following applications:

Business analytics and statistics (CDF, ICDF, PDF, data analysis, integration)
Credit and Debt Analysis (credit default swaps, credit spread options, credit rating, debt options and pricing)
Decision Analysis (decision trees, Minimax, utility functions)
Exotic Options (over 100 types of financial and exotic options)
Forecasting (ARIMA, econometrics, EWMA, GARCH, nonlinear extrapolation, spline, time-series)
Industry Applications (banking, biotech, insurance, IT, real estate, utility)
Operations Research and Portfolio Optimization (continuous, discrete, integer, static, dynamic, and stochastic)
Options Analysis (BDT interest lattices, debt options, options trading strategies)
Portfolio Models (investment allocations, optimization, risk and return profiles)
Probability of Default and Banking Credit Risk (private, public, and retail debt, credit derivatives and swaps)
Real Options Analysis (over 100 types: abandon, barrier, contract, customized, dual asset, expand, multi-asset, multi-phased, pentanomials, quadranomials, sequential, switch, and trinomials)
Risk Hedging (delta and delta-gamma hedges, foreign exchange and interest rate risk)
Risk Simulation (correlated simulation, data fitting, Monte Carlo simulation)
Six Sigma (capability measures, control charts, hypothesis tests, measurement systems, precision, sample size)
Statistical Tools (ANOVA, two-way ANOVA, nonparametric hypothesis tests, parametric tests, principal components, variance-covariance)
Valuation (APT, buy versus lease, CAPM, caps and floors, convertibles, financial ratios, valuation models)
Value at Risk (static covariance and simulation-based VaR)
Volatility (EWMA, GARCH, implied volatility, log returns, real options volatility, probability to volatility)
Yield Curve (BIS, Cox, Merton, NS, spline, Vasicek)

Risk Simulator
Over 25 statistical distributions covering the following applications:

Applied Business Statistics (descriptive statistics, CDF/ICDF/PDF probabilities, stochastic parameter calibration)
Bootstrap Nonparametric Simulation and Hypothesis Testing (testing empirical and theoretical moments)
Correlated Simulations (simulation copulas and Monte Carlo)
Data Analysis and Regression Diagnostics (heteroskedasticity, multicollinearity, nonlinearity, outliers)
Forecasting (ARIMA, Auto-ARIMA, J-S curves, GARCH, Markov chains, multivariate regressions, stochastic processes)
Optimization (static, dynamic, stochastic)
Sensitivity Analysis (correlated sensitivity, scenario, spider, tornado)

Real Options SLS
Customizable binomial, trinomial, quadranomial, and pentanomial lattices
Lattice Makers (lattices with Monte Carlo simulation)
Super-fast super lattice algorithms (running thousands of lattice steps in seconds)
Covering the following applications:

Exotic Options Models (barriers, benchmarked, multiple assets, portfolio options)
Financial Options Models (3D dual asset exchange, single and double barriers)
Real Options Models (abandon, barrier, contract, expand, sequential compound, switching)
Specialized Options (mean-reverting, jump-diffusion, and dual asset rainbows)

Employee Stock Options Valuation Toolkit
Applied by the U.S. Financial Accounting Standards Board for FAS 123R 2004
Binomial and closed-form models
Covers:

Blackout Periods
Changing Volatility
Forfeiture Rates
Suboptimal Exercise
Multiple Vesting
PART 1
Modeling Toolkit and Risk Simulator Applications

This book covers about 300 different analytical model templates that apply up to 800 modeling functions and tools from a variety of software applications. Trial versions of these software applications are included in the book's DVD or can be downloaded directly from the Web at www.realoptionsvaluation.com. Part I of the book deals with models using the Modeling Toolkit and Risk Simulator software applications. Part II deals with real options and financial option models using the Real Options SLS software. Readers who are currently expert users of the Modeling Toolkit and Risk Simulator software may skip this section and dive directly into the models.
INTRODUCTION TO THE MODELING TOOLKIT SOFTWARE

The Modeling Toolkit software incorporates about 800 different advanced analytical models, functions, and tools, applicable in a variety of industries and applications. Appendix A lists the models available in the software as of this book's publication date. To install this software for a trial period of 30 days, insert the DVD that comes with the book or visit www.realoptionsvaluation.com and click on Downloads. Look for the Modeling Toolkit software. This software works on Windows XP or Vista and requires Excel XP, 2003, or 2007 to run. At the end of the installation process, you will be prompted for a license key. Please use this trial license:

Name: 30 Day Trial
Key: 4C55-0BA2-420E-CA84

To start the software, click on Start | Programs | Real Options Valuation | Modeling Toolkit | Modeling Toolkit. This action will start Excel. Inside Excel, you will notice a new menu item called Modeling Toolkit. This menu is self-explanatory, as the models are categorized by application domain, and each model is described in more detail in this book.

Please note that this software uses Excel macros. If you receive an error message on macros, it is because your system is set to a high security level. You need to fix this by starting Excel XP or 2003 and clicking on Tools | Macros | Security | Medium and restarting the software. If you are using Excel 2007, you can simply click on Enable Macros when prompted (or reset your security settings in Excel 2007 by clicking on the Office button located at the top left of the screen and selecting Excel Options | Trust Center | Trust Center Settings | Macro Settings | Enable All Macros).

Note that the trial version will expire in 30 days. To obtain a full corporate license, please contact the author's firm, Real Options Valuation, Inc., at [email protected] or visit the company's web site (www.realoptionsvaluation.com). Notice that after the software expiration date, some of the models that depend on the Risk Simulator or Real Options SLS software will still function, until their respective expiration dates. In addition, after the expiration date, these worksheets will still be visible, but the analytical results and functions will return null values. Finally, software versions continually change and improve, and the best recommendation is to visit the company's web site for any updated or newer software versions or details on installation and licensing. The Appendixes provide a more detailed list of all the functions, tools, and models, and the Glossary details the required variable inputs in this software.
INTRODUCTION TO RISK SIMULATOR

This section provides the novice risk analyst with an introduction to the Risk Simulator software for performing Monte Carlo simulation; a trial version of the software is included in the book's DVD. Please refer to About the DVD at the end of this book for details on obtaining the extended trial license. This section starts off by illustrating what Risk Simulator does and what steps are taken in a Monte Carlo simulation, as well as some of the more basic elements in a simulation analysis. It continues with how to interpret the results from a simulation and ends with a discussion of correlating variables in a simulation as well as applying precision and error control. Many more advanced techniques, such as ARIMA forecasts and optimization, are also discussed. Software versions with new enhancements are released continually. Please review the software's user manual and the software download site (www.realoptionsvaluation.com) for more up-to-date details on using the latest version of the software. See Modeling Risk: Applying Monte Carlo Simulation, Real Options Analysis, Stochastic Forecasting, and Portfolio Optimization (Hoboken, NJ: John Wiley & Sons, 2007), also by the author, for more technical details on using Risk Simulator.

Risk Simulator is a Monte Carlo simulation, forecasting, and optimization software. It is written in Microsoft .NET C# and functions as an Excel add-in. This software is compatible and often used with the Real Options SLS software shown in Part II of this book, also developed by the author. Stand-alone software applications in C++ are also available for implementation into other existing proprietary software or databases. The different functions or modules in both software applications are briefly described next. The Appendixes provide a more detailed list of all the functions, tools, and models.
The Simulation Module allows you to:

Run simulations in your existing Excel-based models
Generate and extract simulation forecasts (distributions of results)
Perform distributional fitting (automatically finding the best-fitting statistical distribution)
Compute correlations (maintaining relationships among multiple simulated random variables)
Identify sensitivities (creating tornado and sensitivity charts)
Test statistical hypotheses (finding statistical differences and similarities between pairs of forecasts)
Run bootstrap simulation (testing the robustness of result statistics)
Run custom and nonparametric simulations (simulations using historical data without specifying any distributions or their parameters, for forecasting without data or applying expert opinion forecasts)

The Forecasting Module can be used to generate:

Automatic time-series forecasts (with and without seasonality and trend)
Automatic ARIMA (automatically generate the best-fitting ARIMA forecasts)
Basic Econometrics (modified multivariate regression forecasts)
Box-Jenkins ARIMA (econometric forecasts)
GARCH Models (forecasting and modeling volatility)
J-Curves (exponential growth forecasts)
Markov Chains (market share and dynamics forecasts)
Multivariate regressions (modeling linear and nonlinear relationships among variables)
Nonlinear extrapolations (curve fitting)
S-Curves (logistic growth forecasts)
Spline Curves (interpolating and extrapolating missing nonlinear values)
Stochastic process forecasts (random walks, mean-reversions, jump-diffusions, and mixed processes)

The Optimization Module is used for running:

Linear and nonlinear optimization
Static optimization (without simulation), dynamic optimization (with simulation), and stochastic optimization (with simulation, run multiple times)
Discrete, continuous, and integer decision variables

The Analytical Tools include:

Correlated simulations
Data diagnostics (autocorrelation, correlation, distributive lags, heteroskedasticity, micronumerosity, multicollinearity, nonlinearity, nonstationarity, normality, outliers, partial autocorrelation, and others)
Data extraction
Data fitting
Data import and export
Distribution analysis (PDF, CDF, ICDF)
Distribution designer (creating customized distributions and Delphi simulation)
Hypothesis tests and bootstrap simulation
Sensitivity and dynamic scenario analysis
Statistical analysis (descriptive statistics, distributional fitting, hypothesis tests, nonlinear extrapolation, normality, stochastic parameter estimation, time-series forecasts, trending, and others)
Tornado and spider charts

The Real Options Super Lattice Solver (SLS) is another stand-alone software that complements Risk Simulator, used for solving simple to complex real options problems. See Part II of this book for details on this software's applications.
To install the software, insert the accompanying DVD, click on the Install Risk Simulator link, and follow the onscreen instructions. You will need to be online to download the latest version of the software. The software requires Windows XP/Vista, administrative privileges, and Microsoft .NET Framework 1.1 and 2.0 installed on the computer. Most new computers come with Microsoft .NET Framework 1.1 already preinstalled. However, if an error message pertaining to requiring .NET Framework occurs during the installation of Risk Simulator, exit the installation. Then install the relevant .NET Framework software, also included in the DVD (found in the DOT NET Framework folder). Complete the .NET installation, restart the computer, and then reinstall the Risk Simulator software. Version 1.1 of the .NET Framework is required even if your system has version 2.0/3.0, as they work independently of each other. You may also download this software on the Download page of www.realoptionsvaluation.com. See the About the DVD section at the end of this book for details on obtaining an extended trial license.

Once installation is complete, start Microsoft Excel. If the installation was successful, you should see an additional Risk Simulator item on the menu bar in Excel and a new icon bar, as shown in Figure I.1. Figure I.2 shows the icon toolbar in more detail. Please note that Risk Simulator supports multiple languages (e.g., English, Chinese, Japanese, and Spanish) and you can switch among languages by going to Risk Simulator | Languages.

FIGURE I.1 Risk Simulator menu and icon toolbar

You are now ready to start using the software for a trial period. You can obtain permanent or academic licenses from www.realoptionsvaluation.com. If you are using Windows Vista, make sure to disable User Access Control before installing the software license. To do so, click on Start | Control Panel | Classic View (on the left panel) | User Accounts | Turn User Account Control On or Off, uncheck the option Use User Account Control (UAC), and restart the computer. When restarting the computer, you will get a message that UAC is turned off. You can turn this message off by going to Control Panel | Security Center | Change the Way Security Center Alerts Me | Don't Notify Me and Don't Display the Icon.

The sections that follow provide step-by-step instructions for using the software. As the software is continually updated and improved, the examples in this book might be slightly different from the latest version downloaded from the Internet.
RUNNING A MONTE CARLO SIMULATION

Typically, to run a simulation in your existing Excel model, you must perform these five steps:

1. Start a new or open an existing simulation profile.
2. Define input assumptions in the relevant cells.
3. Define output forecasts in the relevant cells.
4. Run the simulation.
5. Interpret the results.
If desired, and for practice, open the example file called Basic Simulation Model and follow along the examples on creating a simulation. The example file can be found on the menu at Risk Simulator | Example Models.
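Readers who think in code may find it useful to see the same five-step loop stripped to its essentials. The sketch below is a minimal, hypothetical Python illustration (it is not Risk Simulator code; the normal distribution parameters for revenue are made up, and only the uniform 0.9 to 1.1 input mirrors the example used later in this section):

```python
import numpy as np

# A minimal Monte Carlo run mirroring the five steps above, outside Excel.
rng = np.random.default_rng(123)   # step 1: a "profile" fixes the seed and trial count
trials = 1000

# Step 2: input assumptions are distributions rather than single-point cell values
revenue = rng.normal(loc=2.0, scale=0.3, size=trials)   # hypothetical parameters
cost = rng.uniform(low=0.9, high=1.1, size=trials)

# Step 3: the output forecast is the model equation evaluated per trial
income = revenue - cost

# Step 4 ran implicitly through vectorization; step 5: interpret the results
print(f"mean = {income.mean():.4f}, stdev = {income.std(ddof=1):.4f}")
print(f"90% interval: [{np.percentile(income, 5):.4f}, {np.percentile(income, 95):.4f}]")
```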
1. Starting a New Simulation Profile

To start a new simulation, you must first create a simulation profile. A simulation profile contains a complete set of instructions on how you would like to run a simulation; it contains all the assumptions, forecasts, simulation run preferences, and so forth. Having profiles facilitates creating multiple scenarios of simulations; that is, using the same exact model, several profiles can be created, each with its own specific simulation assumptions, forecasts, properties, and requirements. The same analyst can create different test scenarios using different distributional assumptions and inputs, or multiple users can test their own assumptions and inputs on the same model. Instead of having to make duplicates of the model, the same model can be used and different simulations can be run through this model profiling process.
FIGURE I.2 Risk Simulator icon toolbar
FIGURE I.3 New simulation profile
Start a new simulation profile by performing these three steps:

1. Start Excel and create a new model or open an existing one. You can use the Basic Simulation Model example to follow along: Risk Simulator | Examples | Basic Simulation Model.
2. Click on Risk Simulator | New Simulation Profile.
3. Enter a title for your simulation, including all other pertinent information (Figure I.3).

The elements in the new simulation profile dialog, shown in Figure I.3, include:
Title. Specifying a simulation profile name or title allows you to create multiple simulation profiles in a single Excel model. By so doing, you can save different simulation scenario profiles within the same model without having to delete existing assumptions and change them each time a new simulation scenario is required.

Number of trials. Enter the number of simulation trials required. Running 1,000 trials means that 1,000 different iterations of outcomes based on the input assumptions will be generated. You can change this number as desired, but the input has to be a positive integer. The default number of runs is 1,000 trials.

Pause on simulation error. If checked, the simulation stops every time an error is encountered in the Excel model; that is, if your model encounters a computational error (e.g., some input values generated in a simulation trial may yield a divide-by-zero error in a spreadsheet cell), the simulation stops. This feature is important in helping audit your model to make sure there are no computational errors in your Excel model. However, if you are sure the model works, there is no need to check this preference.

Turn on correlations. If checked, correlations between paired input assumptions will be computed. Otherwise, correlations will all be set to zero and a simulation is run assuming no cross-correlations between input assumptions. Applying correlations will yield more accurate results if correlations do indeed exist and will tend to yield a lower forecast confidence if negative correlations exist.
FIGURE I.4 Change active simulation
Specify random number sequence. By definition, a simulation yields slightly different results every time it is run by virtue of the random-number generation routine in Monte Carlo simulation. This is a theoretical fact in all randomnumber generators. However, when making presentations, sometimes you may require the same results. (For example, during a live presentation you may like to show the same results that are in some pregenerated printed reports from a previous simulation run; when you are sharing models with others, you also may want the same results to be obtained every time.) If that is the case, check this preference and enter in an initial seed number. The seed number can be any positive integer. Using the same initial seed value, the same number of trials, and the same input assumptions always will yield the same sequence of random numbers, guaranteeing the same final set of results.
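The reproducibility that a fixed seed buys is easy to demonstrate with any random-number generator (a generic numpy illustration, not Risk Simulator's internal routine):

```python
import numpy as np

# Two generators seeded identically produce identical sequences, so a rerun
# with the same seed, trial count, and assumptions repeats the results exactly.
a = np.random.default_rng(seed=42).normal(size=5)
b = np.random.default_rng(seed=42).normal(size=5)
print(np.array_equal(a, b))  # True
```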
Note that once a new simulation profile has been created, you can come back later and modify your selections. In order to do this, make sure that the current active profile is the profile you wish to modify; otherwise, click on Risk Simulator | Change Simulation Profile, select the profile you wish to change, and click OK. (Figure I.4 shows an example where there are multiple profiles and how to activate, duplicate or delete a selected profile.) Then click on Risk Simulator | Edit Simulation Profile and make the required changes.
2. Defining Input Assumptions

The next step is to set input assumptions in your model. Note that assumptions can be assigned only to cells without any equations or functions (i.e., typed-in numerical values that are inputs in a model), whereas output forecasts can be assigned only to cells with equations and functions (i.e., outputs of a model). Recall that assumptions and forecasts cannot be set unless a simulation profile already exists. Follow these three steps to set new input assumptions in your model:

1. Select the cell you wish to set an assumption on (e.g., cell G8 in the Basic Simulation Model example).
2. Click on Risk Simulator | Set Input Assumption or click the Set Assumption icon in the Risk Simulator icon toolbar.
3. Select the relevant distribution you want, enter the relevant distribution parameters, and hit OK to insert the input assumption into your model (Figure I.5).

FIGURE I.5 Setting an input assumption

Several key areas are worthy of mention in the Assumption Properties. Figure I.6 shows the different areas:
Assumption Name. This optional area allows you to enter unique names for the assumptions to help track what each of the assumptions represents. Good modeling practice is to use short but precise assumption names.

Distribution Gallery. This area shows all of the different distributions available in the software. To change the views, right-click anywhere in the gallery and select large icons, small icons, or list. More than two dozen distributions are available.

Input Parameters. Depending on the distribution selected, the required relevant parameters are shown. You may either enter the parameters directly or link them to specific cells in your worksheet. Click on the Link icon to link an input parameter to a worksheet cell. Hard-coding or typing the parameters is useful when the assumption parameters are assumed not to change. Linking to worksheet cells is useful when the input parameters themselves need to be visible on the worksheets or can be changed, as in a dynamic simulation (where the input parameters themselves are linked to assumptions in the worksheets, creating a multidimensional simulation, or simulation of simulations).

FIGURE I.6 Assumption properties

Data Boundary. Typically, the average analyst does not use distributional or data boundary truncation, but boundaries exist for truncating the distributional assumptions. For instance, if a normal distribution is selected, the theoretical boundaries are between negative infinity and positive infinity. However, in practice, the simulated variable exists only within some smaller range. This range can be entered to truncate the distribution appropriately (a simple rejection-sampling sketch of this idea appears after the note below).

Correlations. Pairwise correlations can be assigned to input assumptions here. If correlations are required, remember to check the Turn on Correlations preference by clicking on Risk Simulator | Edit Simulation Profile. See the discussion on correlations later in this chapter for more details about assigning correlations and the effects correlations will have on a model.

Short Descriptions. Short descriptions exist for each of the distributions in the gallery. They explain when a certain distribution is used as well as the input parameter requirements. See the appendix section "Understanding Probability Distributions" in Modeling Risk: Applying Monte Carlo Simulation, Real Options Analysis, Stochastic Forecasting, and Portfolio Optimization (Hoboken, NJ: John Wiley & Sons, 2006), also by the author, for details about each distribution type available in the software.
Note: If you are following along with the example, continue by setting another assumption on cell G9. This time use the Uniform distribution with a minimum value of 0.9 and a maximum value of 1.1. Then proceed to defining the output forecasts in the next step.
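The Data Boundary behavior described above (truncating a theoretical distribution to a practical range) can be emulated with simple rejection sampling. The bounds and parameters below are arbitrary illustrations:

```python
import numpy as np

def truncated_normal(rng, mean, sd, lower, upper, size):
    """Sample a normal distribution truncated to [lower, upper] by rejection."""
    out = np.empty(0)
    while out.size < size:
        draws = rng.normal(mean, sd, size)                  # propose a batch
        out = np.concatenate([out, draws[(draws >= lower) & (draws <= upper)]])
    return out[:size]

rng = np.random.default_rng(7)
samples = truncated_normal(rng, mean=100, sd=15, lower=70, upper=130, size=1000)
print(samples.min(), samples.max())  # every draw lies inside the boundary
```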
3. Defining Output Forecasts

The next step is to define output forecasts in the model. Forecasts can be defined only on output cells with equations or functions. Use these three steps to define the forecasts:

1. Select the cell on which you wish to set a forecast (e.g., cell G10 in the Basic Simulation Model example).
2. Click on Risk Simulator | Set Output Forecast or click on the Set Forecast icon on the Risk Simulator icon toolbar.
3. Enter the relevant information and click OK.

Figure I.7 illustrates the set forecast properties, which include:
Forecast Name. Specify the name of the forecast cell. This is important because when you have a large model with multiple forecast cells, naming the forecast cells individually allows you to access the right results quickly. Do not underestimate the importance of this simple step. Good modeling practice is to use short but precise forecast names.

Forecast Precision. Instead of relying on a guesstimate of how many trials to run in your simulation, you can set up precision and error controls. When an error-precision combination has been achieved in the simulation, the simulation will pause and inform you of the precision achieved. Thus the number of simulation trials becomes an automated process; you do not have to guess the required number of trials to simulate. Review the section on error and precision control for more specific details (a generic stopping-rule sketch also follows this list).

Show Forecast Window. This property allows you to show or not show a particular forecast window. The default is to always show a forecast chart.

FIGURE I.7 Set output forecast
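As promised in the Forecast Precision item, here is a generic version of such a stopping rule: keep adding batches of trials until the confidence-interval half-width of the forecast mean falls below a target error. This is a standard sequential-sampling sketch, not Risk Simulator's exact algorithm, and the model and tolerances are hypothetical:

```python
import numpy as np

def simulate_until_precise(model, rng, error=0.01, z=1.96, batch=1000, max_trials=100_000):
    """Add batches of trials until the 95% CI half-width of the mean drops below `error`."""
    results = np.empty(0)
    half_width = np.inf
    while results.size < max_trials:
        results = np.concatenate([results, model(rng, batch)])
        half_width = z * results.std(ddof=1) / np.sqrt(results.size)
        if half_width < error:
            break
    return results, half_width

rng = np.random.default_rng(1)
model = lambda r, n: r.normal(1.0, 0.25, n) * r.uniform(0.9, 1.1, n)
results, hw = simulate_until_precise(model, rng)
print(f"{results.size} trials reached a 95% CI half-width of {hw:.4f}")
```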
4. Run Simulation

If everything looks right, simply click on Risk Simulator | Run Simulation or click on the Run icon on the Risk Simulator toolbar, and the simulation will proceed. You may also reset a simulation after it has run so you can rerun it (Risk Simulator | Reset Simulation or the Reset icon on the toolbar), or pause it during a run. Also, the step function (Risk Simulator | Step Simulation or the Step icon on the toolbar) allows you to simulate a single trial, one at a time, which is useful for educating others on simulation (i.e., you can show that at each trial, all the values in the assumption cells are being replaced and the entire model is recalculated each time).
5. Interpreting the Forecast Results

The final step in Monte Carlo simulation is to interpret the resulting forecast charts. Figures I.8 to I.15 show the forecast chart and the statistics generated after running the simulation. Typically, these sections of the forecast window are important in interpreting the results of a simulation:
Forecast Chart. The forecast chart shown in Figure I.8 is a probability histogram that shows the frequency counts of values occurring and the total number of trials simulated. The vertical bars show the frequency of a particular x value occurring out of the total number of trials, while the cumulative frequency (smooth line) shows the total probabilities of all values at and below x occurring in the forecast.

FIGURE I.8 Forecast chart

Forecast Statistics. The forecast statistics shown in Figure I.9 summarize the distribution of the forecast values in terms of the four moments of a distribution (a short sketch showing how these moments are computed follows this list). You can rotate between the histogram and statistics tabs by pressing the space bar.

FIGURE I.9 Forecast statistics

Preferences. The preferences tab in the forecast chart (Figure I.10) allows you to change the look and feel of the charts. For instance, if Always Show Window On Top is selected, the forecast charts will always be visible regardless of what other software is running on your computer. Semitransparent When Inactive is a powerful option used to compare or overlay multiple forecast charts at once (e.g., enable this option on several forecast charts and drag them on top of one another to visually see the similarities or differences). Histogram Resolution allows you to change the number of bins of the histogram, anywhere from 5 bins to 100 bins. Also, the Update Data Interval section allows you to control how fast the simulation runs versus how often the forecast chart is updated. That is, if you wish to see the forecast chart updated at almost every trial, this will slow down the simulation, as more memory is being allocated to updating the chart versus running the simulation. This is merely a user preference and in no way changes the results of the simulation, just the speed of completing the simulation. The Copy Chart button allows you to copy the active forecast chart for pasting into other software applications (e.g., PowerPoint or Word), and the Close All and Minimize All buttons allow you to control all opened forecast charts at once.

FIGURE I.10 Forecast chart preferences

Options. This forecast chart option (Figure I.11) allows you to show all the forecast data or to filter in or out values that fall within some specified interval or within some standard deviation that you choose. Also, you can set the precision level here for this specific forecast to show the error levels in the statistics view. See the section on precision and error control for more details.

FIGURE I.11 Forecast chart options
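The four moments reported in the statistics view can be reproduced from any vector of simulated outcomes, as the short sketch below shows (the simulated data here are a hypothetical stand-in, so the numbers will not match Figure I.9):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
income = rng.normal(1.0, 0.25, 1000) * rng.uniform(0.9, 1.1, 1000)

print("mean:              ", income.mean())           # first moment: central tendency
print("standard deviation:", income.std(ddof=1))      # second moment: spread, or risk
print("skewness:          ", stats.skew(income))      # third moment: asymmetry
print("excess kurtosis:   ", stats.kurtosis(income))  # fourth moment: tail fatness
```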
USING FORECAST CHARTS AND CONFIDENCE INTERVALS

In forecast charts, you can determine the probability of occurrence, called confidence intervals; that is, given two values, what are the chances that the outcome will fall between these two values? Figure I.12 illustrates that there is a 90% probability that the final outcome (in this case, the level of income) will be between $0.2781 and $1.3068. The two-tailed confidence interval can be obtained by first selecting Two-Tail as the type, entering the desired certainty value (e.g., 90), and hitting Tab on the keyboard. The two computed values corresponding to the certainty value will then be displayed. In this example, there is a 5% probability that income will be below $0.2781 and another 5% probability that income will be above $1.3068; that is, the two-tailed confidence interval is a symmetrical interval centered on the median or 50th percentile value. Thus, both tails will have the same probability.

FIGURE I.12 Forecast chart two-tailed confidence interval

Alternatively, a one-tail probability can be computed. Figure I.13 shows a Left-Tail selection at 95% confidence (i.e., choose Left-Tail as the type, enter 95 as the certainty level, and hit Tab on the keyboard). This means that there is a 95% probability that the income will be below $1.3068 (i.e., 95% on the left tail of $1.3068) or a 5% probability that income will be above $1.3068, corresponding perfectly with the results seen in Figure I.12.

FIGURE I.13 Forecast chart one-tailed confidence interval

In addition to evaluating the confidence interval (i.e., given a probability level and finding the relevant income values), you can determine the probability of a given income value (Figure I.14). For instance, what is the probability that income will be less than or equal to $1? To do this, select the Left-Tail probability type, enter 1 into the value input box, and hit Tab. The corresponding certainty will be computed. (In this case, there is a 64.80% probability income will be at or below $1.)

FIGURE I.14 Forecast chart left-tail probability evaluation

For the sake of completeness, you can select the Right-Tail probability type, enter the value 1 in the value input box, and hit Tab (Figure I.15). The resulting probability indicates the right-tail probability past the value 1; that is, the probability of income exceeding $1. In this case, we see that there is a 35.20% probability of income at or exceeding $1.

FIGURE I.15 Forecast chart right-tail probability evaluation

Note that the forecast window is resizable by clicking on and dragging the bottom right corner of the window. Finally, it is always advisable that before rerunning a simulation, you reset the current simulation by selecting Risk Simulator | Reset Simulation.
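All of these readings are percentile and empirical-CDF calculations on the vector of simulated forecast values. A minimal sketch follows (the simulated income below is a stand-in, so the exact dollar figures differ from the figures above):

```python
import numpy as np

rng = np.random.default_rng(3)
income = rng.normal(0.8, 0.3, 10_000)   # stand-in for the simulated income forecast

# Two-Tail at 90% certainty: the symmetric interval between the 5th and 95th percentiles
low, high = np.percentile(income, [5, 95])
print(f"90% two-tailed interval: [{low:.4f}, {high:.4f}]")

# Left-Tail: P(income <= 1) from the empirical CDF
print("P(income <= 1) =", np.mean(income <= 1))

# Right-Tail: P(income > 1) is the complement
print("P(income > 1)  =", np.mean(income > 1))
```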
CORRELATIONS AND PRECISION CONTROL

The correlation coefficient is a measure of the strength and direction of the relationship between two variables and can take on any value between –1.0 and +1.0; that is, the correlation coefficient can be decomposed into its direction or sign (positive or negative relationship between two variables) and the magnitude or strength of the relationship (the higher the absolute value of the correlation coefficient, the stronger the relationship). The correlation coefficient can be computed in several ways. The first approach is to manually compute the correlation coefficient r of a pair of variables x and y using:

$$r_{x,y} = \frac{n \sum x_i y_i - \sum x_i \sum y_i}{\sqrt{n \sum x_i^2 - \left(\sum x_i\right)^2}\;\sqrt{n \sum y_i^2 - \left(\sum y_i\right)^2}}$$
The second approach is to use Excel's CORREL function. For instance, if the 10 data points for x and y are listed in cells A1:B10, then the Excel function to use is CORREL(A1:A10, B1:B10). The third approach is to run Risk Simulator's Multi-Variable Distributional Fitting Tool, and the resulting correlation matrix will be computed and displayed.

It is important to note that correlation does not imply causation. Two completely unrelated random variables might display some correlation, but this does not imply any causation between the two (e.g., sunspot activity and events in the stock market are correlated, but there is no causation between the two).

There are two general types of correlations: parametric and nonparametric. Pearson's correlation coefficient is the most common correlation measure and usually is referred to simply as the correlation coefficient. However, Pearson's correlation is a parametric measure, which means that it requires both correlated variables to have an underlying normal distribution and that the relationship between the variables is linear. When these conditions are violated, which is often the case in Monte Carlo simulations, the nonparametric counterparts become more important. Spearman's rank correlation and Kendall's tau are the two nonparametric alternatives. The Spearman correlation is used most commonly and is most appropriate when applied in the context of Monte Carlo simulation: there is no dependence on normal distributions or linearity, meaning that correlations between variables with different distributions can be applied. In order to compute the Spearman correlation, first rank all the x and y variable values and then apply the Pearson correlation computation.

Risk Simulator uses the more robust nonparametric Spearman's rank correlation. However, to simplify the simulation process and to be consistent with Excel's correlation function, the correlation inputs required of the user are Pearson's correlation coefficients. Risk Simulator then applies its own algorithms to convert them into Spearman's rank correlations, thereby simplifying the process.
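Both measures are straightforward to verify numerically. The sketch below computes Pearson's r from the summation formula above, then obtains the Spearman coefficient by ranking the data first, with scipy's built-in functions as a cross-check (the data are arbitrary):

```python
import numpy as np
from scipy import stats

x = np.arange(1.0, 11.0)
y = x**2 + np.random.default_rng(5).normal(0, 4, 10)   # monotonic but nonlinear in x

# Manual Pearson r from the summation formula
n = len(x)
r = (n * (x * y).sum() - x.sum() * y.sum()) / np.sqrt(
    (n * (x**2).sum() - x.sum()**2) * (n * (y**2).sum() - y.sum()**2))

# Spearman rho is simply Pearson r applied to the ranks of the data
rho = stats.pearsonr(stats.rankdata(x), stats.rankdata(y))[0]

print(f"Pearson r    = {r:.4f} (scipy: {stats.pearsonr(x, y)[0]:.4f})")
print(f"Spearman rho = {rho:.4f} (scipy: {stats.spearmanr(x, y)[0]:.4f})")
```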
Applying Correlations in Risk Simulator

Correlations can be applied in Risk Simulator in several ways:
1. When defining assumptions, simply enter the correlations into the correlation grid in the Set Input Assumption dialog (Figure I.6).
2. With existing data, run the Multi-Variable Distribution Fitting Tool to perform distributional fitting and to obtain the correlation matrix between pairwise variables. If a simulation profile exists, the automatically fitted assumptions will contain the relevant correlation values.
3. With the use of a direct-input correlation matrix, click on Risk Simulator | Edit Correlations after multiple assumptions have been set, to view and edit the correlation matrix used in the simulation.
Note that the correlation matrix must be positive definite; that is, the correlation must be mathematically valid. For instance, suppose you are trying to correlate three variables: grades of graduate students in a particular year, the number of beers they
consume a week, and the number of hours they study a week. You would assume that these correlation relationships exist:
Grades and Beer:   −   The more they drink, the lower the grades (no show on exams)
Grades and Study:  +   The more they study, the higher the grades
Beer and Study:    −   The more they drink, the less they study (drunk and partying all the time)
However, if you instead input a negative correlation between Grades and Study, and the correlation coefficients have high magnitudes, the correlation matrix will be nonpositive definite. It would defy logic, correlation requirements, and matrix mathematics. Smaller coefficients, however, sometimes still can work, even with the bad logic. When a nonpositive definite or otherwise invalid correlation matrix is entered, Risk Simulator automatically informs you of the error and offers to adjust these correlations to something that is positive semidefinite while still maintaining the overall structure of the correlation relationships (the same signs as well as the same relative strengths).
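A quick way to see why the bad logic breaks the matrix is to check its eigenvalues; a correlation matrix is valid only if it is positive semidefinite (no negative eigenvalues). The sketch below uses hypothetical magnitudes for the grades-beer-study example; Risk Simulator's adjustment algorithm is not shown here.

import numpy as np

def is_valid_corr(C):
    # A correlation matrix is usable only if it is positive semidefinite
    return np.linalg.eigvalsh(C).min() >= -1e-10

# Grades, Beer, Study with the logical signs from the text (magnitudes hypothetical)
good = np.array([[ 1.0, -0.8,  0.7],
                 [-0.8,  1.0, -0.9],
                 [ 0.7, -0.9,  1.0]])

# Same magnitudes, but Grades-Study flipped to negative, defying the logic
bad = good.copy()
bad[0, 2] = bad[2, 0] = -0.7

print(is_valid_corr(good), is_valid_corr(bad))  # True False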
Effects of Correlations in Monte Carlo Simulation

Although the computations required to correlate variables in a simulation are complex, the resulting effects are fairly clear. Figure I.16 shows a simple correlation model (Correlation Risk Effects Model in the example folder). The calculation for revenue is simply price multiplied by quantity. The same model is replicated for no correlation, positive correlation (+0.9), and negative correlation (–0.9) between price and quantity. The resulting statistics are shown in Figure I.17. Notice that the standard deviation of the model without correlations is 0.1450, compared to 0.1886 for the positive correlation model and 0.0717 for the negative correlation model. That is, for simple models with positive relationships (e.g., additions and multiplications), negative correlations tend to reduce the average spread of the distribution and create a tighter and more concentrated forecast distribution as compared to positive correlations with larger average spreads. However, the mean remains relatively stable. This implies that correlations do little to change the expected value of projects but can reduce or increase a project's risk.
FIGURE I.16 Simple correlation model
FIGURE I.17 Correlation results
Recall in financial theory that negatively correlated variables, projects, or assets, when combined in a portfolio, tend to create a diversification effect where the overall risk is reduced. Therefore, we see a smaller standard deviation for the negatively correlated model. In a positively related model (e.g., A + B = C or A × B = C), a negative correlation between the inputs (A and B) reduces the risk (standard deviation and all other second moments of the distribution) of the result (C), whereas a positive correlation between the inputs increases the overall risk. The opposite is true for a negatively related model (e.g., A – B = C or A/B = C), where a positive correlation between the inputs reduces the risk and a negative correlation increases it. In more complex models, as is often the case in real-life situations, the effects will be unknown a priori and can be determined only after a simulation is run. Figure I.18 illustrates the results after running a simulation, extracting the raw data of the assumptions, and computing the correlations between the variables. The figure shows that the input assumptions are recovered in the simulation; that is, you enter +0.9 and –0.9 correlations and the resulting simulated values have the same correlations. Clearly there will be minor differences from one simulation run to another, but when enough trials are run, the resulting recovered correlations approach those that were input.

FIGURE I.18 Correlations recovered
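The diversification effect is easy to reproduce in a few lines of code. The sketch below correlates price and quantity through a Cholesky factorization of the correlation matrix; the distributional parameters are hypothetical, so the standard deviations will not match Figure I.17, but the pattern (negative correlation tightens the revenue distribution, positive correlation widens it) is the same:

import numpy as np

rng = np.random.default_rng(42)
N = 100_000

def revenue_sd(rho):
    # Draw two standard normals with correlation rho via Cholesky factorization
    L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
    zp, zq = L @ rng.standard_normal((2, N))
    price = 2.0 + 0.2 * zp       # hypothetical price assumption
    quantity = 1.0 + 0.1 * zq    # hypothetical quantity assumption
    return (price * quantity).std()

for rho in (0.0, 0.9, -0.9):
    print(f"correlation {rho:+.1f}: revenue std dev = {revenue_sd(rho):.4f}")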
TORNADO AND SENSITIVITY TOOLS IN SIMULATION

One of the powerful simulation tools in Risk Simulator is tornado analysis; it captures the static impact of each variable on the outcome of the model. That is, the tool automatically perturbs each variable in the model by a preset amount, captures the resulting fluctuation in the model's forecast or final result, and lists the perturbations ranked from the most significant to the least. Figures I.19 through I.24 illustrate the application of a tornado analysis.
FIGURE I.19 Sample discounted cash flow model

For instance, Figure I.19 is a sample discounted cash flow model where the input assumptions in the model are shown. The question is: What are the critical success drivers that affect the model's output the most? That is, what really drives the net present value of $96.63, or which input variable impacts this value the most? The tornado chart tool can be obtained through Risk Simulator | Tools | Tornado Analysis. To follow along with the first example, open the Tornado and Sensitivity Charts (Linear) file in the examples folder. Figure I.20 shows this sample model, where cell G6, containing the net present value, is chosen as the target result to be analyzed. The target cell's precedents in the model are used in creating the tornado chart. Precedents are all the input variables that affect the outcome of the model.
FIGURE I.20 Running a tornado analysis

For instance, if the model consists of A = B + C, where C = D + E, then B, D, and E are the precedents for A (C is not a precedent, as it is only an intermediate calculated value). Figure I.20 shows the testing range of each precedent variable used to estimate the target result. If the precedent variables are simple inputs, then the testing range will be a simple perturbation based on the range chosen (e.g., the default is ±10%). Each precedent variable can be perturbed at a different percentage if required. A wider range is important, as it is better able to test extreme values rather than small perturbations around the expected values. In certain circumstances, extreme values may have a larger, smaller, or unbalanced impact (e.g., nonlinearities may occur where increasing or decreasing economies of scale and scope creep in for larger or smaller values of a variable), and only a wider range will capture this nonlinear impact.
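Conceptually, the perturbation logic is a one-at-a-time sweep over the precedents. The sketch below (plain Python, with a hypothetical three-input stand-in for the discounted cash flow spreadsheet) perturbs each input ±10% around its base value and ranks the inputs by the resulting swing in the output, which is essentially what a tornado table reports:

def tornado(model, base_inputs, pct=0.10):
    """Perturb each input +/- pct one at a time; rank by the output swing."""
    rows = []
    for name, value in base_inputs.items():
        up = model({**base_inputs, name: value * (1 + pct)})
        down = model({**base_inputs, name: value * (1 - pct)})
        rows.append((name, down, up, abs(up - down)))
    return sorted(rows, key=lambda r: r[3], reverse=True)

# Hypothetical stand-in for the spreadsheet: a one-period discounted margin
model = lambda d: (d["price"] * d["quantity"] - d["investment"]) / 1.1

for row in tornado(model, {"price": 10.0, "quantity": 50.0, "investment": 200.0}):
    print(row)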
PROCEDURE

Use these three steps to create a tornado analysis:

1. Select the single output cell (i.e., a cell with a function or equation) in an Excel model (e.g., cell G6 is selected in our example).
2. Select Risk Simulator | Tools | Tornado Analysis.
3. Review the precedents and rename them as appropriate (renaming the precedents to shorter names allows a more visually pleasing tornado and spider chart) and click OK. Alternatively, click on Use Cell Address to apply cell locations as the variable names.
Results Interpretation

Figure I.21 shows the resulting tornado analysis report, which indicates that capital investment has the largest impact on net present value (NPV), followed by tax rate, average sale price and quantity demanded of the product lines, and so forth. The report contains four distinct elements:

1. A statistical summary listing the procedure performed.
2. A sensitivity table (Figure I.22) showing the starting NPV base value of $96.63 and how each input is changed (e.g., Investment is changed from $1,800 to $1,980 on the upside with a +10% swing and from $1,800 to $1,620 on the downside with a –10% swing). The resulting upside and downside values on NPV are –$83.37 and $276.63, with a total change of $360, making investment the variable with the highest impact on NPV. The precedent variables are ranked from the highest impact to the lowest.
3. A spider chart (Figure I.23) illustrating these effects graphically. The y-axis is the NPV target value, while the x-axis depicts the percentage change of each precedent value. The central point is the base case value of $96.63 at 0% change from the base value of each precedent. Positively sloped lines indicate a positive relationship or effect; negatively sloped lines indicate a negative relationship (e.g., investment is negatively sloped, which means that the higher the investment level, the lower the NPV). The absolute value of the slope indicates the magnitude of the effect, computed as the percentage change in the result given a percentage change in the precedent. A steep line indicates a higher impact on the NPV y-axis given a change in the precedent x-axis.
4. A tornado chart (Figure I.24) illustrating the results in another graphical manner, with the highest-impacting precedent listed first. The x-axis is the NPV value, with the center of the chart being the base case condition. Green (lighter) bars in the chart indicate a positive effect; red (darker) bars indicate a negative effect. Therefore, for investment, the red (darker) bar on the right side indicates a negative effect of investment on NPV; in other words, capital investment and NPV are negatively correlated. The opposite is true for price and quantity of products A to C (their green or lighter bars are on the right side of the chart).
Notes

Remember that tornado analysis is a static sensitivity analysis applied to each input variable in the model; that is, each variable is perturbed individually, and the resulting effects are tabulated. This makes tornado analysis a key component to execute before running a simulation. Capturing and identifying the most important impact drivers in the model is one of the very first steps in risk analysis.
FIGURE I.21 Tornado analysis report

The next step is to identify which of these important impact drivers are uncertain. These uncertain impact drivers are the critical success drivers of a project; the results of the model depend on them, and these are the variables that should be simulated. Do not waste time simulating variables that are not uncertain or that have little impact on the results. Tornado charts assist in identifying these critical success drivers quickly and easily. Following this example, it might be that price and quantity should be simulated, assuming that the required investment and effective tax rate are known in advance and unchanging.
FIGURE I.22 Sensitivity table
FIGURE I.23 Spider chart

FIGURE I.24 Tornado chart

Although the tornado chart is easier to read, the spider chart is important for determining whether there are any nonlinearities in the model. For instance, Figure I.25 shows another spider chart where nonlinearities are fairly evident (the lines on the graph are curved rather than straight). The example model used is Tornado and Sensitivity Charts (Nonlinear), which applies the Black-Scholes option pricing model. Such nonlinearities cannot easily be ascertained from a tornado chart; they may be important information in the model and may provide decision makers with insight into the model's dynamics. For instance, in this Black-Scholes model, the fact that stock price and strike price are nonlinearly related to the option value is important to know. This characteristic implies that the option value will not increase or decrease proportionally to changes in the stock or strike price, and that there might be interactions between these two prices as well as other variables. As another example, an engineering model depicting nonlinearities might indicate that a particular part or component, when subjected to a high enough force or tension, will break. Clearly, it is important to understand such nonlinearities.
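The curvature is easy to verify directly from the Black-Scholes formula. In the sketch below (parameters hypothetical), equal-sized percentage moves in the stock price produce unequal percentage moves in the call value, which is exactly what bends the spider chart's lines:

from math import erf, exp, log, sqrt

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Black-Scholes European call value
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

base = bs_call(100, 100, 1.0, 0.05, 0.25)
for pct in (-0.4, -0.2, 0.2, 0.4):
    value = bs_call(100 * (1 + pct), 100, 1.0, 0.05, 0.25)
    print(f"stock {pct:+.0%} -> option value {value / base - 1:+.1%}")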
FIGURE I.25 Nonlinear spider chart
SENSITIVITY ANALYSIS

A related feature is sensitivity analysis. While tornado analysis (tornado charts and spider charts) applies static perturbations before a simulation run, sensitivity analysis applies dynamic perturbations created after the simulation run. Tornado and spider charts are the results of static perturbations, meaning that each precedent or assumption variable is perturbed a preset amount one at a time, and the fluctuations in the results are tabulated. In contrast, sensitivity charts are the results of dynamic perturbations in the sense that multiple assumptions are perturbed simultaneously and their interactions in the model and correlations among variables are captured in the fluctuations of the results. Tornado charts therefore identify which variables drive the results the most and hence are suitable for simulation; sensitivity charts identify the impact on the results when multiple interacting variables are simulated together in the model. This effect is clearly illustrated in Figure I.26. Notice that the ranking of critical success drivers is similar to the tornado chart in the previous examples. However, if correlations are added between the assumptions, Figure I.27 shows a very different picture. Notice, for instance, that price erosion had little impact on NPV, but when some of the input assumptions are correlated, the interaction that exists between these correlated variables makes price erosion have more impact. Note that tornado analysis cannot capture these correlated dynamic relationships. Only after a simulation is run will such relationships become evident in a sensitivity analysis. A tornado chart's presimulation critical success factors therefore sometimes will be different from a sensitivity chart's postsimulation critical success factors. The postsimulation critical success factors should be the ones of interest, as these more readily capture the interactions of the model precedents.
FIGURE I.26 Sensitivity chart without correlations
FIGURE I.27 Sensitivity chart with correlations
PROCEDURE

Use these three steps to create a sensitivity analysis:

1. Open or create a model, define assumptions and forecasts, and run the simulation; the example here uses the Tornado and Sensitivity Charts (Linear) file.
2. Select Risk Simulator | Tools | Sensitivity Analysis.
3. Select the forecast of choice to analyze and click OK (Figure I.28).
FIGURE I.28 Running sensitivity analysis
Note that sensitivity analysis cannot be run unless assumptions and forecasts have been defined and a simulation has been run.
Results Interpretation

The results of the sensitivity analysis comprise a report and two key charts. The first is a nonlinear rank correlation chart (Figure I.29) that ranks the assumption-forecast correlation pairs from highest to lowest. These correlations are nonlinear and nonparametric, making them free of any distributional requirements (i.e., an assumption with a Weibull distribution can be compared to another with a beta distribution). The results from this chart are fairly similar to those of the tornado analysis seen previously (of course without the capital investment value, which we decided was a known value and hence was not simulated), with one special exception: Tax rate was relegated to a much lower position in the sensitivity analysis chart (Figure I.29) as compared to the tornado chart (Figure I.24). This is because, by itself, tax rate has a significant impact; once the other variables are interacting in the model, however, tax rate has less of a dominant effect.
FIGURE I.29 Rank correlation chart

This is because tax rate has a smaller distribution, as historical tax rates tend not to fluctuate very much, and also because tax rate is a straight percentage of income before taxes, whereas the other precedent variables have a larger effect on NPV. This example shows that it is important to perform sensitivity analysis after a simulation run to ascertain whether there are any interactions in the model and whether the effects of certain variables still hold. The second chart (Figure I.30) illustrates the percent variation explained; that is, of the fluctuations in the forecast, how much of the variation can be explained by each of the assumptions after accounting for all the interactions among variables? Notice that the sum of all variations explained is usually close to 100% (sometimes other elements impact the model but cannot be captured here directly), and if correlations exist, the sum may sometimes exceed 100% (due to the interaction effects, which are cumulative).

FIGURE I.30 Contribution to variance chart
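One common way to approximate these two charts from raw simulation data is to rank-correlate each assumption against the forecast and then normalize the squared rank correlations; this is a generic sketch with hypothetical assumptions, not necessarily the exact algorithm Risk Simulator uses:

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 10_000
price = rng.normal(10, 1, n)
quantity = rng.normal(50, 10, n)
tax = rng.normal(0.30, 0.01, n)      # deliberately narrow, like a stable tax rate
npv = price * quantity * (1 - tax)   # hypothetical stand-in model

inputs = {"price": price, "quantity": quantity, "tax rate": tax}
rho = {name: stats.spearmanr(x, npv)[0] for name, x in inputs.items()}
total = sum(r * r for r in rho.values())
for name, r in sorted(rho.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{name:10s} rank correlation {r:+.3f}  ~{100 * r * r / total:5.1f}% of variation")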
Notes

Tornado analysis is performed before a simulation run, while sensitivity analysis is performed after a simulation run. Spider charts in tornado analysis can reveal nonlinearities, while rank correlation charts in sensitivity analysis can account for nonlinear and distribution-free conditions.
DISTRIBUTIONAL FITTING: SINGLE VARIABLE AND MULTIPLE VARIABLES

Another powerful simulation tool is distributional fitting; that is, determining which distribution an analyst or engineer should use for a particular input variable in a model, and what its relevant distributional parameters are. If no historical data exist, then the analyst must make assumptions about the variables in question. One approach is to use the Delphi method, where a group of experts is tasked with estimating the behavior of each variable. For instance, a group of mechanical engineers can be tasked with evaluating the extreme possibilities of a spring coil's diameter through rigorous experimentation or guesstimates. These values can be used as the variable's input parameters (e.g., a uniform distribution with extreme values between 0.5 and 1.2). When testing is not possible (e.g., market share and revenue growth rate), management still can make estimates of potential outcomes and provide the best-case, most-likely, and worst-case scenarios, whereupon a triangular or custom distribution can be created. However, if reliable historical data are available, distributional fitting can be accomplished. Assuming that historical patterns hold and that history tends to repeat itself, historical data can be used to find the best-fitting distribution with its relevant parameters to better define the variables to be simulated. Clearly, adjustments to the forecast values can be made (e.g., structural shifts and adjustments) as required, to reflect future expectations. Figures I.31 through I.33 illustrate a distributional-fitting example. The next discussion uses the Data Fitting file in the examples folder.
PROCEDURE

Use these five steps to perform a distributional fitting:

1. Open a spreadsheet with existing data for fitting (e.g., use the Data Fitting example file from the Risk Simulator | Example Models menu).
2. Select the data you wish to fit, not including the variable name. (Data should be in a single column with multiple rows.)
3. Select Risk Simulator | Tools | Distributional Fitting (Single-Variable) and decide if you wish to fit to continuous or discrete distributions.
4. Select the specific distributions you wish to fit to, or keep the default where all distributions are selected, and click OK (Figure I.31).
5. Review the results of the fit, choose the relevant distribution you want, and click OK (Figure I.32).

FIGURE I.31 Single-variable distributional fitting
Results Interpretation

The null hypothesis (H0) being tested is that the fitted distribution is the same as the population distribution from which the sample data to be fitted come. Thus, if the computed p-value is lower than a critical alpha level (typically 0.10 or 0.05), then the fitted distribution is the wrong one. Conversely, the higher the p-value, the better the distribution fits the data. Roughly, you can think of the p-value as a percentage explained; that is, if the p-value is 1.00 (Figure I.32), then a normal distribution with a mean of 100.67 and a standard deviation of 10.40 explains close to 100% of the variation in the data, indicating an especially good fit. The data came from a 1,000-trial simulation in Risk Simulator based on a normal distribution with a mean of 100 and a standard deviation of 10. Because only 1,000 trials were simulated, the resulting distribution is fairly close to the specified distributional parameters and, in this case, fits with about 100% precision.
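As a sketch of the fitting logic (hypothetical data; Risk Simulator's exact test statistics may differ), the Kolmogorov-Smirnov test is a common choice: estimate the parameters, then test whether the sample could have come from the fitted distribution. Note that estimating the parameters from the same sample makes the reported p-value somewhat optimistic:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.normal(100, 10, 1000)   # mimics the 1,000-trial normal sample in the text

mu, sd = data.mean(), data.std(ddof=1)              # fitted normal parameters
stat, p = stats.kstest(data, "norm", args=(mu, sd))
print(f"fitted normal(mean={mu:.2f}, sd={sd:.2f})  KS statistic={stat:.4f}  p-value={p:.2f}")
# A high p-value means the null (data come from the fitted normal) is not rejected.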
FIGURE I.32 Distributional fitting result
Both the results (Figure I.32) and the report (Figure I.33) show the test statistic, p-value, theoretical statistics (based on the selected distribution), empirical statistics (based on the raw data), the original data (to maintain a record of the data used), and the assumption complete with the relevant distributional parameters (i.e., if you selected the option to automatically generate assumptions and if a simulation profile already exists). The results also rank all the selected distributions by how well they fit the data.
BOOTSTRAP SIMULATION

Bootstrap simulation is a simple technique that estimates the reliability or accuracy of forecast statistics or other sample raw data. Bootstrap simulation can be used to answer many confidence- and precision-based questions in simulation.
FIGURE I.33 Distributional fitting report
For instance, suppose an identical model (with identical assumptions and forecasts but without any random seeds) is run by 100 different people; the results will clearly be slightly different. The question is, if we collected all the statistics from these 100 people, how will the mean be distributed, or the median, or the skewness, or the excess kurtosis? Suppose one person has a mean value of, say, 1.50 while another has 1.52. Are these two values statistically significantly different from each other, or are they statistically similar, with the slight difference due entirely to random chance? What about 1.53? How far is far enough to say that the values are statistically different? Similarly, if a model's resulting skewness is –0.19, is this forecast distribution negatively skewed, or is it statistically close enough to zero to state that the distribution is symmetrical and not skewed? Thus, if we bootstrapped this forecast 100 times (i.e., ran a 1,000-trial simulation 100 times and collected the 100 skewness coefficients), the resulting skewness distribution would indicate how far –0.19 is from zero. If the 90% confidence interval on the bootstrapped skewness distribution contains the value zero, then we can state at a 90% confidence level that this distribution is symmetrical and not skewed, and that the value –0.19 is statistically close enough to zero. Otherwise, if zero falls outside this 90% confidence area, then the distribution is negatively skewed. The same analysis can be applied to excess kurtosis and other statistics. Essentially, bootstrap simulation is a hypothesis-testing tool. Classical methods used in the past relied on mathematical formulas to describe the accuracy of sample statistics. These methods assume that the distribution of a sample statistic approaches a normal distribution, making the calculation of the statistic's standard error or confidence interval relatively easy.
FIGURE I.34 Nonparametric bootstrap simulation

However, when a statistic's sampling distribution is not normally distributed or easily found, these classical methods are difficult to use. In contrast, bootstrapping analyzes sample statistics empirically by sampling the data repeatedly and creating distributions of the different statistics from each sampling. The classical methods of hypothesis testing are available in Risk Simulator and are explained in the next section. Classical methods provide higher power in their tests but rely on normality assumptions and can be used only to test the mean and variance of a distribution; bootstrap simulation, by comparison, provides lower power but is nonparametric and distribution-free and can be used to test any distributional statistic.
PROCEDURE

1. Run a simulation with assumptions and forecasts.
2. Select Risk Simulator | Tools | Nonparametric Bootstrap.
3. Select only one forecast to bootstrap, select the statistic(s) to bootstrap, enter the number of bootstrap trials, and click OK (Figure I.34).
Results Interpretation

Figure I.35 illustrates some sample bootstrap results. The example file used was Hypothesis Testing and Bootstrap Simulation. For instance, the 90% confidence interval for the skewness statistic is between –0.0189 and 0.0952.
The value 0 falls within this confidence interval, indicating that at a 90% confidence level the skewness of this forecast is not statistically significantly different from zero; that is, this distribution can be considered symmetrical and not skewed. Conversely, if the value 0 fell outside this interval, then the opposite would be true: The distribution is skewed (positively skewed if the forecast statistic is positive, and negatively skewed if the forecast statistic is negative).
FIGURE I.35 Bootstrap simulation results

Notes

The term bootstrap comes from the saying "to pull oneself up by one's own bootstraps" and is applicable because this method uses the distribution of the statistics themselves to analyze the statistics' accuracy. Nonparametric simulation is simply randomly picking golf balls from a large basket with replacement, where each golf ball is based on a historical data point. Suppose there are 365 golf balls in the basket (representing 365 historical data points). Imagine that the value of each golf ball picked at random is written on a large whiteboard. The results of the 365 balls picked with replacement are written in the first column of the board, with 365 rows of numbers. Relevant statistics (e.g., mean, median, mode, standard deviation) are calculated on these 365 rows. The process is then repeated, say, 5,000 times. The whiteboard will now be filled with 365 rows and 5,000 columns; hence, 5,000 sets of statistics (i.e., 5,000 means, 5,000 medians, 5,000 modes, 5,000 standard deviations, etc.) are tabulated and their distributions shown.
The relevant statistics of the statistics are then tabulated, and from these results you can ascertain how confident the simulated statistics are. Finally, bootstrap results are reliable because, according to the Law of Large Numbers and the Central Limit Theorem in statistics, the mean of the sample means is an unbiased estimator that approaches the true population mean as the sample size increases.
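The whiteboard procedure translates directly into a few lines of code. This sketch (hypothetical data) resamples 365 points with replacement 5,000 times, computes the skewness of each resample, and reads off a 90% bootstrap confidence interval:

import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(100, 10, 365)   # 365 "golf balls" (historical data points)

# 5,000 resamples with replacement, each the same size as the original data
boots = rng.choice(data, size=(5000, data.size), replace=True)
centered = boots - boots.mean(axis=1, keepdims=True)
skews = (centered**3).mean(axis=1) / boots.std(axis=1) ** 3   # skewness per resample

lo, hi = np.percentile(skews, [5, 95])   # 90% bootstrap confidence interval
verdict = "not statistically skewed" if lo <= 0 <= hi else "skewed"
print(f"90% confidence for skewness: [{lo:.4f}, {hi:.4f}] -> {verdict}")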
HYPOTHESIS TESTING

A hypothesis test is performed when testing the means and variances of two distributions to determine if they are statistically identical or statistically different from each other (i.e., to see whether the differences between the means and variances of two different forecasts are based on random chance or are, in fact, statistically significantly different). This analysis is related to bootstrap simulation, with several differences. Classical hypothesis testing uses mathematical models and is based on theoretical distributions. This means that the precision and power of the test are higher than those of bootstrap simulation's empirically based method of simulating a simulation and letting the data tell the story. However, the classical hypothesis test is applicable only for testing the means and variances of two distributions (and, by extension, standard deviations) to see if they are statistically identical or different. In contrast, nonparametric bootstrap simulation can be used to test any distributional statistic, making it more flexible; the drawback is its lower testing power. Risk Simulator provides both techniques from which to choose.
PROCEDURE

1. Run a simulation.
2. Select Risk Simulator | Tools | Hypothesis Testing.
3. Select the two forecasts to test, select the type of hypothesis test you wish to run, and click OK (Figure I.36).
Results Interpretation

A two-tailed hypothesis test is performed on the null hypothesis (H0) that the population means of the two variables are statistically identical. The alternative hypothesis (Ha) is that the population means are statistically different from each other. If the calculated p-value is less than or equal to the 0.01, 0.05, or 0.10 alpha level, the null hypothesis is rejected, which implies that the forecast means are statistically significantly different at the 1%, 5%, or 10% significance level. If the null hypothesis is not rejected, which happens when the p-values are high, the means of the two forecast distributions are statistically similar. The same analysis is performed on the variances of two forecasts at a time using the pairwise F-test. If the p-values are small, then the variances (and standard deviations) are statistically different from each other; otherwise, for large p-values, the variances are statistically identical. The example file used was Hypothesis Testing and Bootstrap Simulation.
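The sketch below reproduces both tests on hypothetical forecast samples using standard library routines; it illustrates the statistics involved, not Risk Simulator's dialog:

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
f1 = rng.normal(100, 10, 1000)   # samples from forecast 1 (hypothetical)
f2 = rng.normal(101, 12, 1000)   # samples from forecast 2 (hypothetical)

# Two-tailed t-test on the means, assuming unequal population variances
t, p_means = stats.ttest_ind(f1, f2, equal_var=False)

# Pairwise F-test on the variances (two-tailed p-value)
F = f1.var(ddof=1) / f2.var(ddof=1)
df1, df2 = f1.size - 1, f2.size - 1
p_vars = 2 * min(stats.f.cdf(F, df1, df2), stats.f.sf(F, df1, df2))

print(f"means:     t = {t:.2f}, p-value = {p_means:.4f}")
print(f"variances: F = {F:.2f}, p-value = {p_vars:.4f}")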
FIGURE I.36 Hypothesis testing
Notes

The two-variable t-test with unequal variances (the population variance of forecast 1 is expected to be different from the population variance of forecast 2) is appropriate when the forecast distributions are from different populations (e.g., data collected from two different geographical locations or two different operating business units). The two-variable t-test with equal variances (the population variance of forecast 1 is expected to be equal to the population variance of forecast 2) is appropriate when the forecast distributions are from similar populations (e.g., data collected from two different engine designs with similar specifications). The paired dependent two-variable t-test is appropriate when the forecast distributions are from exactly the same population and subjects (e.g., data collected from the same group of patients before an experimental drug was used and after the drug was applied).
DATA EXTRACTION, SAVING SIMULATION RESULTS, AND GENERATING REPORTS

The raw data of a simulation can be extracted easily using Risk Simulator's Data Extraction routine. Both assumptions and forecasts can be extracted, but a simulation must be run first. The extracted data can then be used for a variety of other analyses, and the data can be extracted to different formats for use in spreadsheets, databases, and other software products.
PROCEDURE

1. Open or create a model, define assumptions and forecasts, and run the simulation.
2. Select Risk Simulator | Tools | Data Extraction.
3. Select the assumptions and/or forecasts you wish to extract the data from and click OK.

The simulated data can be extracted to an Excel worksheet, to a flat text file (for easy import into other software applications), or as *.risksim files (which can be reopened as Risk Simulator forecast charts at a later date). Finally, you can create a simulation report of all the assumptions and forecasts in the model by going to Risk Simulator | Create Report. A sample report is shown in Figure I.37.
REGRESSION AND FORECASTING DIAGNOSTIC TOOL

This advanced analytical tool in Risk Simulator is used to determine the econometric properties of your data. The diagnostics include checking the data for heteroskedasticity, nonlinearity, outliers, specification errors, micronumerosity, stationarity and stochastic properties, normality and sphericity of the errors, and multicollinearity. Each test is described in more detail in its respective report in the model.
FIGURE I.37 Sample simulation report
FIGURE I.38 Running the data diagnostic tool
PROCEDURE
1. Open the example model (Risk Simulator | Examples | Regression Diagnostics), go to the Time-Series Data worksheet, and select the data, including the variable names (cells C5:H55).
2. Click on Risk Simulator | Tools | Diagnostic Tool.
3. Check the data and select the Dependent Variable Y from the drop-down menu.
4. Click OK when finished (Figure I.38).
A common violation in forecasting and regression analysis is heteroskedasticity; that is, the variance of the errors increases over time (see Figure I.39 for test results using the diagnostic tool). Visually, the width of the vertical data fluctuations increases or fans out over time, and typically, the coefficient of determination (R-squared coefficient) drops significantly when heteroskedasticity exists. If the variance of the dependent variable is not constant, then the error’s variance will not be constant. Unless the heteroskedasticity of the dependent variable is pronounced, its effect will not be severe: the least-squares estimates will still be unbiased, and the estimates of the slope and intercept will either be normally distributed if the errors are normally distributed, or at least normally distributed asymptotically (as the number of data points becomes large) if the errors are not normally distributed. The estimate for the variance of the slope and overall variance will be inaccurate, but the inaccuracy is not likely to be substantial if the independent-variable values are symmetric about their mean.
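A crude way to see the fanning out in code is to fit a least-squares line and compare the residual variance of the early and late halves of the sample (a Goldfeld-Quandt-style check on fabricated data; the diagnostic tool's actual test statistics may differ):

import numpy as np

rng = np.random.default_rng(11)
n = 100
x = np.arange(n, dtype=float)
errors = rng.standard_normal(n) * (0.5 + 0.05 * x)   # error spread grows over time
y = 2.0 + 0.3 * x + errors

slope, intercept = np.polyfit(x, y, 1)               # least-squares fit
resid = y - (intercept + slope * x)

# Compare residual variances of the early and late halves of the sample
early, late = resid[: n // 2].var(ddof=1), resid[n // 2 :].var(ddof=1)
print(f"variance ratio (late/early) = {late / early:.2f}  (>> 1 suggests heteroskedasticity)")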
FIGURE I.39 Results from tests of outliers, heteroskedasticity, micronumerosity, and nonlinearity
If the number of data points is small (micronumerosity), it may be difficult to detect assumption violations. With small sample sizes, assumption violations such as non-normality or heteroskedasticity of variances are difficult to detect even when they are present. With a small number of data points, linear regression offers less protection against violation of assumptions. With few data points, it may be hard to determine how well the fitted line matches the data, or whether a nonlinear function would be more appropriate. Even if none of the test assumptions are violated, a linear regression on a small number of data points may not have sufficient power to detect a significant difference between the slope and zero, even if the slope is nonzero. The power depends on the residual error, the observed variation in the independent variable, the selected significance alpha level of the test, and the number of data points. Power decreases as the residual variance increases, decreases as the significance level is decreased (i.e., as the test is made more stringent), increases as the variation in the observed independent variable increases, and increases as the number of data points increases.

Values may not be identically distributed because of the presence of outliers. Outliers are anomalous values in the data. Outliers may have a strong influence over the fitted slope and intercept, giving a poor fit to the bulk of the data points. Outliers tend to increase the estimate of residual variance, lowering the chance of rejecting the null hypothesis (i.e., creating higher prediction errors). They may be due to recording errors, which may be correctable, or they may be due to the dependent-variable values not all being sampled from the same population. Apparent outliers may also be due to the dependent-variable values being from the same, but non-normal, population. However, a point may be an unusual value in either an independent or a dependent variable without necessarily being an outlier in the scatter plot. In regression analysis, the fitted line can be highly sensitive to outliers. In other words, least squares regression is not resistant to outliers; thus, neither is the fitted-slope estimate. A point vertically removed from the other points can cause the fitted line to pass close to it, instead of following the general linear trend of the rest of the data, especially if the point is relatively far horizontally from the center of the data.

However, great care should be taken when deciding if the outliers should be removed. Although in most cases the regression results look better when outliers are removed, a priori justification must first exist. For instance, if one is regressing the performance of a particular firm's stock returns, outliers caused by downturns in the stock market should be included; these are not truly outliers, as they are inevitabilities in the business cycle. Forgoing these outliers and using the regression equation to forecast one's retirement fund based on the firm's stocks will yield incorrect results at best. In contrast, suppose the outliers are caused by a single nonrecurring business condition (e.g., a merger and acquisition) and such business structural changes are not forecast to recur; then these outliers should be removed and the data cleansed prior to running a regression analysis. The analysis here only identifies outliers; it is up to the user to determine if they should remain or be excluded.
Sometimes, a nonlinear relationship between the dependent and independent variables is more appropriate than a linear relationship. In such cases, running a linear regression will not be optimal. If the linear model is not the correct form, then the slope and intercept estimates and the fitted values from the linear regression will be biased, and the fitted slope and intercept estimates will not be meaningful.
Over a restricted range of independent or dependent variables, nonlinear models may be well approximated by linear models (this is in fact the basis of linear interpolation), but for accurate prediction a model appropriate to the data should be selected. A nonlinear transformation should first be applied to the data before running a regression. One simple approach is to take the natural logarithm of the independent variable (other approaches include taking the square root or raising the independent variable to the second or third power) and then run a regression or forecast using the nonlinearly transformed data.

Another typical issue when forecasting time-series data is whether the independent-variable values are truly independent of each other or are dependent. Dependent-variable values collected over a time series may be autocorrelated. For serially correlated dependent-variable values, the estimates of the slope and intercept will be unbiased, but the estimates of their forecasts and variances will not be reliable, and hence the validity of certain statistical goodness-of-fit tests will be flawed. For instance, interest rates, inflation rates, sales, revenues, and many other time-series data are typically autocorrelated, where the value in the current period is related to the value in a previous period, and so forth (clearly, the inflation rate in March is related to February's level, which in turn is related to January's level, and so forth). Ignoring such blatant relationships will yield biased and less accurate forecasts. In such events, an autocorrelated regression model or an ARIMA model may be better suited (Risk Simulator | Forecasting | ARIMA).

Finally, the autocorrelation functions of a series that is nonstationary tend to decay slowly (see the Nonstationary report in the model). If the autocorrelation AC(1) is nonzero, the series is first-order serially correlated. If AC(k) dies off more or less geometrically with increasing lag, the series follows a low-order autoregressive process. If AC(k) drops to zero after a small number of lags, the series follows a low-order moving-average process. Partial autocorrelation PAC(k) measures the correlation of values that are k periods apart after removing the correlation from the intervening lags. If the pattern of autocorrelation can be captured by an autoregression of order less than k, then the partial autocorrelation at lag k will be close to zero. Ljung-Box Q-statistics and their p-values at lag k have the null hypothesis that there is no autocorrelation up to order k. The dotted lines in the plots of the autocorrelations are the approximate two-standard-error bounds. If the autocorrelation is within these bounds, it is not significantly different from zero at the 5% significance level.
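The autocorrelation function itself is a one-line computation. The sketch below (fabricated AR(1) data with a 0.8 coefficient) shows the geometric decay described above and the approximate two-standard-error bound for white noise:

import numpy as np

def acf(y, k):
    """Sample autocorrelation of series y at lag k."""
    z = np.asarray(y, dtype=float) - np.mean(y)
    return float((z[k:] * z[:-k]).sum() / (z * z).sum())

rng = np.random.default_rng(9)
n = 200
series = np.zeros(n)
for t in range(1, n):
    series[t] = 0.8 * series[t - 1] + rng.standard_normal()   # AR(1) process

print([round(acf(series, k), 2) for k in range(1, 6)])   # dies off geometrically
print(round(2 / np.sqrt(n), 2))                          # ~ two-standard-error bound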
Autocorrelation measures the relationship of the dependent Y variable to its own past values. Distributive lags, in contrast, are time-lag relationships between the dependent Y variable and different independent X variables. For instance, the movement and direction of mortgage rates tend to follow the federal funds rate but at a time lag (typically one to three months). Sometimes, time lags follow cycles and seasonality (e.g., ice cream sales tend to peak during the summer months and are hence related to the previous summer's sales, 12 months in the past). The distributive lag analysis (Figure I.40) shows how the dependent variable is related to each of the independent variables at various time lags, when all lags are considered simultaneously, to determine which time lags are statistically significant and should be considered.

Another requirement in running a regression model is the assumption of normality and sphericity of the error term. If the assumption of normality is violated or outliers are present, then the linear regression goodness-of-fit test may not be the most powerful or informative test available, and this could mean the difference between detecting a linear fit or not.
FIGURE I.40 Autocorrelation and distributive lag results
FIGURE I.41 Test for normality of errors
If the errors are not independent and not normally distributed, this may indicate that the data are autocorrelated or suffer from nonlinearities or other more destructive errors. Independence of the errors can also be detected in the heteroskedasticity tests (Figure I.41). The normality test performed on the errors is a nonparametric test, which makes no assumptions about the specific shape of the population from which the sample is drawn, allowing smaller sample data sets to be analyzed. This test evaluates the null hypothesis that the sample errors were drawn from a normally distributed population, versus the alternative hypothesis that the sample is not normally distributed. If the calculated D-statistic is greater than or equal to the D-critical values at various significance levels, then reject the null hypothesis and accept the alternative hypothesis (the errors are not normally distributed). Otherwise, if the D-statistic is less than the D-critical value, do not reject the null hypothesis (the errors are normally distributed). This test relies on two cumulative frequencies: one derived from the sample data set and the second from a theoretical distribution based on the mean and standard deviation of the sample data.

Sometimes, certain types of time-series data cannot be modeled using any method except a stochastic process, because the underlying events are stochastic in nature. For instance, you cannot adequately model and forecast stock prices, interest rates, the price of oil, and other commodity prices using a simple regression model, because these variables are highly uncertain and volatile and do not follow a predefined static rule of behavior; in other words, the process is not stationary. Stationarity is checked here using the runs test, while another visual clue is found in the autocorrelation report (the ACF tends to decay slowly). A stochastic process is a sequence of events or paths generated by probabilistic laws. That is, random events can occur over time but are governed by specific statistical and probabilistic rules. The main stochastic processes include random walk or Brownian motion, mean-reversion, and jump-diffusion. These processes can be used to forecast a multitude of variables that seemingly follow random trends but are restricted by probabilistic laws. The process-generating equation is known in advance, but the actual results generated are unknown (Figure I.42). The random walk or Brownian motion process can be used to forecast stock prices, prices of commodities, and other stochastic time-series data given a drift or growth rate and a volatility around the drift path.
FIGURE I.42 Stochastic process parameter estimation
The mean-reversion process can be used to reduce the fluctuations of the random walk process by allowing the path to target a long-term value, making it useful for forecasting time-series variables that have a long-term rate, such as interest rates and inflation rates (these are long-term target rates set by regulatory authorities or the market). The jump-diffusion process is useful for forecasting time-series data when the variable can occasionally exhibit random jumps, such as oil prices or the price of electricity (discrete exogenous event shocks can make prices jump up or down). These processes can also be mixed and matched as required.

Multicollinearity exists when there is a linear relationship between the independent variables. When this occurs, the regression equation cannot be estimated at all. In near-collinearity situations, the estimated regression equation will be biased and provide inaccurate results. This situation is especially true when a stepwise regression approach is used, where the statistically significant independent variables will be thrown out of the regression mix earlier than expected, resulting in a regression equation that is neither efficient nor accurate. One quick test for the presence of multicollinearity in a multiple regression equation is whether the R-squared value is relatively high while the t-statistics are relatively low. Another quick test is to create a correlation matrix between the independent variables; a high cross-correlation indicates a potential for multicollinearity. The rule of thumb is that a correlation with an absolute value greater than 0.75 is indicative of severe multicollinearity. Another test for multicollinearity is the Variance Inflation Factor (VIF), obtained by regressing each independent variable on all the other independent variables, obtaining the R-squared value, and calculating the VIF. A VIF exceeding 2.0 can be considered an indication of severe multicollinearity; a VIF exceeding 10.0 indicates destructive multicollinearity (Figure I.43).

The Correlation Matrix lists the Pearson's product moment correlations (commonly referred to as Pearson's R) between variable pairs. The correlation coefficient ranges between –1.0 and +1.0 inclusive. The sign indicates the direction of association between the variables, while the coefficient indicates the magnitude or strength of association. Pearson's R measures only linear relationships and is less effective in measuring nonlinear ones.
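The VIF computation is exactly the regress-each-on-the-rest recipe described above, VIF_i = 1/(1 − R_i^2). The sketch below applies it to fabricated data in which x2 is nearly collinear with x1:

import numpy as np

def vif(X):
    """Variance inflation factor of each column: regress it on all the others."""
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        A = np.column_stack([np.ones(len(y)), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r2 = 1.0 - (y - A @ beta).var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(13)
x1 = rng.standard_normal(200)
x2 = 0.9 * x1 + 0.1 * rng.standard_normal(200)   # nearly collinear with x1
x3 = rng.standard_normal(200)
print([round(v, 1) for v in vif(np.column_stack([x1, x2, x3]))])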
FIGURE I.43 Multicollinearity errors
To test whether the correlations are significant, a two-tailed hypothesis test is performed, and the resulting p-values are listed. P-values less than 0.10, 0.05, and 0.01 are highlighted to indicate statistical significance. In other words, a p-value for a correlation pair that is less than a given significance value means the correlation is statistically significantly different from zero, indicating a significant linear relationship between the two variables. The Pearson's product moment correlation coefficient (R) between two variables x and y is related to the covariance (COV) measure by

R_{x,y} = \frac{COV_{x,y}}{s_x s_y}

The benefit of dividing the covariance by the product of the two variables' standard deviations (s) is that the resulting correlation coefficient is bounded between –1.0 and +1.0 inclusive. This makes the correlation a good relative measure for comparisons among different variables (particularly variables with different units and magnitudes). The Spearman rank-based nonparametric correlation is also included in the analysis. Spearman's R is related to Pearson's R in that the data are first ranked and then correlated. Rank correlations provide a better estimate of the relationship between two variables when one or both of them is nonlinear. It must be stressed that a significant correlation does not imply causation. Associations between variables in no way imply that the change of one variable causes another variable to change. When two variables are moving independently of each other but in a related path, they may be correlated, but their relationship might be spurious (e.g., a correlation between sunspots and the stock market might be strong, but one can surmise that there is no causality and that this relationship is purely spurious).
STATISTICAL ANALYSIS TOOL

Another very powerful tool in Risk Simulator is the statistical analysis tool, which determines the statistical properties of the data. The diagnostics run include checking the data for various statistical properties, from basic descriptive statistics to testing for and calibrating the stochastic properties of the data.
PROCEDURE

1. Open the example model (Risk Simulator | Example Models | Statistical Analysis), go to the Data worksheet, and select the data, including the variable names (cells C5:E55).
2. Click on Risk Simulator | Tools | Statistical Analysis (Figure I.44).
3. Check the data type: whether the data selected are from a single variable or from multiple variables arranged in columns. In our example, we assume that the data areas selected are from multiple variables. Click OK when finished.
4. Choose the statistical tests you wish to perform. The suggestion (and the default) is to choose all the tests. Click OK when finished (Figure I.45).
5. Spend some time going through the reports generated to get a better understanding of the statistical tests performed (sample reports are shown in Figures I.46 through I.49).
FIGURE I.44 Running the statistical analysis tool
FIGURE I.45 Statistical tests
FIGURE I.46 Sample statistical analysis tool report
DISTRIBUTIONAL ANALYSIS TOOL

This statistical probability tool in Risk Simulator is useful in a variety of settings. It can be used to compute the probability density function (PDF), also called the probability mass function (PMF) for discrete distributions (we will use these terms interchangeably), where, given some distribution and its parameters, we can determine the probability of occurrence of some outcome x. The cumulative distribution function (CDF) can also be computed; it is the sum of the PDF values up to and including the value x. Finally, the inverse cumulative distribution function (ICDF) is used to compute the value x given the cumulative probability of occurrence. This tool is accessible via Risk Simulator | Tools | Distributional Analysis.

As an example, Figure I.50 shows the computation of a binomial distribution (i.e., a distribution with two outcomes, such as the toss of a coin, where the outcome is either heads or tails, with some prescribed probability of each). Suppose we toss a coin two times and set the outcome heads as a success. We use the binomial distribution with Trials = 2 (tossing the coin twice) and Probability = 0.50 (the probability of success, of getting heads).
FIGURE I.47 Sample statistical analysis tool report (hypothesis testing of one variable)
Selecting the PDF and setting the range of values x from 0 to 2 with a step size of 1 (this means we are requesting the values 0, 1, and 2 for x), the resulting probabilities are provided in table and graphical form, along with the theoretical four moments of the distribution. As the possible outcomes of the two coin tosses are heads-heads, tails-tails, heads-tails, and tails-heads, the probability of getting exactly no heads is 25%, one head is 50%, and two heads is 25%. Similarly, we can obtain the exact probabilities of tossing the coin, say, 20 times, as seen in Figure I.51. The results are presented in both table and graphical formats. Figure I.52 shows the same binomial distribution, but now the CDF is computed. The CDF is simply the sum of the PDF values up to the point x. For instance, in Figure I.51, we see that the probabilities of 0, 1, and 2 heads are 0.000001, 0.000019, and 0.000181, whose sum is 0.000201, which is the value of the CDF at x = 2 in Figure I.52. Whereas the PDF computes the probability of getting exactly two heads, the CDF computes the probability of getting no more than two heads (i.e., the probabilities of 0, 1, or 2 heads). Taking the complement (i.e., 1 – 0.000201 = 0.999799, or 99.9799%) provides the probability of getting three or more heads. Using this distributional analysis tool, even more advanced distributions can be analyzed, such as the gamma, beta, negative binomial, and many others in Risk Simulator.
FIGURE I.48 Sample statistical analysis tool report (normality test)

As a further example of the tool's use with a continuous distribution and the ICDF functionality, Figure I.53 shows the standard normal distribution (a normal distribution with a mean of zero and a standard deviation of one), where we apply the ICDF to find the value of x that corresponds to a cumulative probability (CDF) of 97.50%. That is, a one-tail CDF of 97.50% is equivalent to a two-tail 95% confidence interval (there is a 2.50% probability in the right tail and 2.50% in the left tail, leaving 95% in the center or confidence interval area, which is equivalent to a 97.50% area for one tail). The result is the familiar Z-score of 1.96. Therefore, using this distributional analysis tool and the standardized scores for other distributions, the exact and cumulative probabilities of other distributions can all be obtained quickly and easily.
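The same PDF, CDF, and ICDF computations can be reproduced with any statistics library; the sketch below (using scipy as a stand-in for the tool) matches the probabilities quoted above for 20 coin tosses and recovers the 1.96 Z-score:

from scipy import stats

# Binomial distribution: Trials = 20, Probability = 0.50
coin = stats.binom(n=20, p=0.5)
print(f"PDF at x = 2: {coin.pmf(2):.6f}")        # ~0.000181, exactly 2 heads
print(f"CDF at x = 2: {coin.cdf(2):.6f}")        # ~0.000201, no more than 2 heads
print(f"complement:   {1 - coin.cdf(2):.6f}")    # ~0.999799, 3 or more heads

# ICDF of the standard normal at a one-tail cumulative probability of 97.50%
print(f"ICDF(0.975) = {stats.norm.ppf(0.975):.2f}")   # the familiar 1.96 Z-score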
PORTFOLIO OPTIMIZATION
In today's competitive global economy, companies face many difficult decisions, such as allocating financial resources, building or expanding facilities, managing inventories, and determining product-mix strategies. Such decisions might involve thousands or millions of potential alternatives, and considering and evaluating each of them would be impractical or even impossible. A model can provide valuable assistance in incorporating the relevant variables when analyzing decisions and in finding the best solutions. Models capture the most important features of a problem and present them in a form that is easy to interpret; they often provide insights that intuition alone cannot.

An optimization model has three major elements: decision variables, constraints, and an objective. In short, the optimization methodology finds the best combination or permutation of decision variables (e.g., which products to sell and which projects to execute) such that the objective is maximized (e.g., revenues and net income) or minimized (e.g., risk and costs) while still satisfying the constraints (e.g., budget and resources).

Obtaining optimal values generally requires an iterative or ad hoc search: running one iteration for an initial set of values, analyzing the results, changing one or more values, rerunning the model, and repeating the process until a satisfactory solution is found. This process can be very tedious and time-consuming even for small models, and often it is not clear how to adjust the values from one iteration to the next.
A more rigorous method systematically enumerates all possible alternatives. This approach guarantees optimal solutions if the model is correctly specified. Suppose that an optimization model depends on only two decision variables. If each variable has 10 possible values, trying each combination requires 100 iterations (10^2 alternatives). If each iteration is very short (e.g., 2 seconds), the entire process could be done in approximately three minutes of computer time. With six decision variables instead of two, however, trying all combinations requires 1,000,000 iterations (10^6 alternatives), and it is easily possible for complete enumeration to take weeks, months, or even years to carry out.
The Traveling Financial Planner
A very simple example is in order. Table I.1 illustrates the traveling financial planner problem. Suppose the traveling financial planner has to make three sales trips, to New York, Chicago, and Seattle. Further suppose that the order of arrival at each city is irrelevant. All that matters in this simple example is to find the lowest total cost of covering all three cities. Table I.1 lists the one-way flight costs between these cities.

TABLE I.1 Traveling Financial Planner

Route                   Cost
Seattle to Chicago      $325
Chicago to Seattle      $225
New York to Seattle     $350
Seattle to New York     $375
Chicago to New York     $325
New York to Chicago     $325

The problem here is cost minimization, suitable for optimization. One basic approach is an ad hoc or brute force method: manually list all six possible permutations, as seen in Table I.2.

TABLE I.2 Multiple Combinations of the Traveling Financial Planner Problem

Itinerary                       Total Cost
Seattle–Chicago–New York        $325 + $325 = $650
Seattle–New York–Chicago        $375 + $325 = $700
Chicago–Seattle–New York        $225 + $375 = $600
Chicago–New York–Seattle        $325 + $350 = $675
New York–Seattle–Chicago        $350 + $325 = $675
New York–Chicago–Seattle        $325 + $225 = $550

Three cities means 3! = 3 × 2 × 1 = 6 itinerary permutations. Five cities means 5! = 5 × 4 × 3 × 2 × 1 = 120 permutations. One hundred cities means 100! = 100 × 99 × · · · × 1 ≈ 9.3 × 10^157 permutations!

Clearly the cheapest itinerary goes from the East Coast to the West Coast: from New York to Chicago and finally on to Seattle. Here the problem is simple and can be calculated manually, as there are three cities and hence six possible itineraries. However, add two more cities and the total number of possible itineraries jumps to 120, making an ad hoc calculation fairly intimidating and time-consuming. On a larger scale, suppose there are 100 cities on the salesperson's list; the possible itineraries number as many as 9.3 × 10^157. The problem would take many years to calculate manually, which is where optimization software steps in, automating the search for the optimal itinerary.
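The brute force search in Table I.2 is small enough to reproduce in a few lines of code. Below is a minimal sketch in Python; the city names and fares come straight from Table I.1:

    from itertools import permutations

    # One-way fares from Table I.1
    fare = {("Seattle", "Chicago"): 325, ("Chicago", "Seattle"): 225,
            ("New York", "Seattle"): 350, ("Seattle", "New York"): 375,
            ("Chicago", "New York"): 325, ("New York", "Chicago"): 325}

    def cost(route):
        # Total cost of visiting the three cities in the given order
        return fare[route[0], route[1]] + fare[route[1], route[2]]

    best = min(permutations(["New York", "Chicago", "Seattle"]), key=cost)
    print(best, cost(best))   # ('New York', 'Chicago', 'Seattle') 550

With 100 cities, the same min over permutations would have to examine 100! itineraries, which is exactly why complete enumeration breaks down and smarter search algorithms are needed.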
The example illustrated up to now is a deterministic optimization problem; that is, the airline ticket prices are known ahead of time and are assumed to be constant. Now suppose the ticket prices are not constant but are uncertain, following some distribution (e.g., a ticket from Chicago to Seattle averages $325 but is never cheaper than $300 and usually does not exceed $500). The same uncertainty applies to tickets for the other cities. The problem now becomes an optimization under uncertainty, where ad hoc and brute force approaches simply do not work. Software such as Risk Simulator can take over this optimization problem and automate the entire process seamlessly. The next section discusses the terms required in an optimization under uncertainty.
The Lingo of Optimization
Before embarking on solving an optimization problem, it is vital to understand the terminology of optimization: the terms used to describe certain attributes of the optimization process. These words include decision variables, constraints, and objectives.

Decision variables are quantities over which you have control: for example, the amount of a product to make, the number of dollars to allocate among different investments, or which projects to select from among a limited set. As an example, portfolio optimization analysis includes a go or no-go decision on particular projects.
In addition, the dollar or percentage budget allocation across multiple projects can also be structured as decision variables.

Constraints describe relationships among decision variables that restrict their values. For example, a constraint might ensure that the total amount of money allocated among various investments cannot exceed a specified amount, or that at most one project from a certain group can be selected; other constraints might capture budget restrictions, timing requirements, minimum returns, or risk tolerance levels.
Objectives give a mathematical representation of the model's desired outcome, such as maximizing profit or minimizing cost, in terms of the decision variables. In financial analysis, for example, the objective may be to maximize returns while minimizing risks (maximizing the Sharpe ratio, or returns-to-risk ratio).

The solution to an optimization model provides a set of values for the decision variables that optimizes (maximizes or minimizes) the associated objective. If real business conditions were simple and the future predictable, all data in an optimization model would be constant, making the model deterministic. In many cases, however, a deterministic optimization model cannot capture all the relevant intricacies of a practical decision-making environment. When a model's data are uncertain and can only be described probabilistically, the objective will have some probability distribution for any chosen set of decision variables. You can find this probability distribution by simulating the model using Risk Simulator. An optimization model under uncertainty has several additional elements, including assumptions and forecasts. Assumptions capture the uncertainty of model data using probability distributions, whereas forecasts are the frequency distributions of possible results for the model. Forecast statistics are summary values of a forecast distribution, such as the mean, standard deviation, and variance.

Each optimization model has one objective, a variable that mathematically represents the model's goal in terms of the assumption and decision variables. The optimization process finds the optimal (minimum or maximum) value of the objective by selecting and improving different values for the decision variables. When model data are uncertain and can only be described using probability distributions, the objective itself will have some probability distribution for any set of decision variables.

Many algorithms exist to run optimization, and many different procedures exist when optimization is coupled with Monte Carlo simulation. In Risk Simulator, there are three distinct optimization procedures and optimization types, as well as different decision variable types. For instance, Risk Simulator can handle continuous decision variables (1.2535, 0.2215, and so forth), integer decision variables (e.g., 1, 2, 3, 4, and so forth), binary decision variables (1 and 0 for go and no-go decisions), and mixed decision variables (both integer and continuous variables).
On top of that, Risk Simulator can handle linear optimization (i.e., when both the objective and the constraints are linear equations and functions) as well as nonlinear optimization (i.e., when the objective and constraints are a mixture of linear and nonlinear functions and equations).

As far as the optimization process is concerned, Risk Simulator can be used to run a static optimization, that is, an optimization run on a static model in which no simulations are performed: all the inputs in the model are static and unchanging. This optimization type is applicable when the model is assumed to be known and no uncertainties exist. A static optimization can also be run first, to determine the optimal portfolio and its corresponding optimal allocation of decision variables before more advanced optimization procedures are applied. For instance, before running a stochastic optimization problem, a static optimization is first run to determine whether solutions to the optimization problem exist at all, before a more protracted analysis is performed.

Next, dynamic optimization is applied when Monte Carlo simulation is used together with optimization. Another name for such a procedure is simulation-optimization. That is, a simulation is first run; the results of the simulation are then applied in the Excel model; and then an optimization is applied to the simulated values. In other words, a simulation is run for N trials, and then an optimization process is run for M iterations until the optimal results are obtained or an infeasible set is found. Using Risk Simulator's optimization module, you can choose which forecast and assumption statistics to use and replace in the model after the simulation is run. These forecast statistics can then be applied in the optimization process. This approach is useful when you have a large model with many interacting assumptions and forecasts, and when some of the forecast statistics are required in the optimization. For example, if the standard deviation of an assumption or forecast is required in the optimization model (e.g., computing the Sharpe ratio in asset allocation and optimization problems, where we have the mean divided by the standard deviation of the portfolio), then this approach should be used.

The stochastic optimization process, in contrast, is similar to the dynamic optimization procedure, except that the entire dynamic optimization process is repeated T times. That is, a simulation with N trials is run, and then an optimization is run with M iterations to obtain the optimal results; then the process is replicated T times. The results will be a forecast chart of each decision variable with T values. In other words, a simulation is run and the forecast or assumption statistics are used in the optimization model to find the optimal allocation of decision variables. Then another simulation is run, generating different forecast statistics, and these new updated values are then optimized, and so forth. Hence, each of the final decision variables will have its own forecast chart, indicating the range of the optimal decision variables. For instance, instead of obtaining single-point estimates as in the dynamic optimization procedure, you now obtain a distribution of the decision variables, and hence a range of optimal values for each decision variable; this is known as stochastic optimization.
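To make the structure of the dynamic and stochastic procedures just described concrete, here is a skeleton of the simulate-then-optimize loop in Python with scipy. This is only an illustration of the procedure's shape, not Risk Simulator's internal algorithm, and the four-asset returns and volatilities are hypothetical:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    mu_true = np.array([0.10, 0.12, 0.08, 0.11])   # hypothetical assumptions
    sd_true = np.array([0.15, 0.20, 0.10, 0.18])

    def simulate_stats(n_trials=1000):
        # Run a simulation of N trials and collect the forecast statistics
        draws = rng.normal(mu_true, sd_true, size=(n_trials, 4))
        return draws.mean(axis=0), draws.std(axis=0)

    def optimize_weights(mu, sd):
        # Optimize the decision variables against the simulated statistics
        neg_sharpe = lambda w: -(w @ mu) / np.sqrt(np.sum((w * sd) ** 2))
        cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1},)
        res = minimize(neg_sharpe, np.full(4, 0.25), bounds=[(0, 1)] * 4,
                       constraints=cons, method='SLSQP')
        return res.x

    # Dynamic optimization: one simulation followed by one optimization
    w_dynamic = optimize_weights(*simulate_stats())

    # Stochastic optimization: repeat the whole procedure T times, yielding a
    # forecast distribution for each decision variable
    T = 50
    w_paths = np.array([optimize_weights(*simulate_stats()) for _ in range(T)])
    print(w_paths.mean(axis=0), w_paths.std(axis=0))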
Finally, an efficient frontier optimization procedure applies the concepts of marginal increments and shadow pricing in optimization. That is, what would happen to the results of the optimization if one of the constraints were relaxed slightly? Say, for instance, the budget constraint is set at $1 million. What would happen to the portfolio's outcome and optimal decisions if the constraint were instead $1.5 million, or $2 million, and so forth? This is the concept of the Markowitz efficient frontier in investment finance: if the portfolio standard deviation is allowed to increase slightly, what additional returns would the portfolio generate?
This process is similar to the dynamic optimization process, with the exception that one of the constraints is allowed to change, and with each change the simulation and optimization process is rerun. The process is best applied manually using Risk Simulator: run a dynamic or stochastic optimization, then rerun another optimization with a changed constraint, and repeat that procedure several times. This manual process is important because, by changing the constraint, the analyst can determine whether the results are similar or different, and hence whether additional analysis is warranted, or can determine how large a marginal increase in the constraint is needed to obtain a significant change in the objective and decision variables.

One item is worthy of consideration. There exist other software products that supposedly perform stochastic optimization but in fact do not. For instance, after a simulation is run, one iteration of the optimization process is generated; then another simulation is run, and the second optimization iteration is generated; and so forth. This is simply a waste of time and resources: in optimization, the model is put through a rigorous set of algorithms in which multiple iterations (from several to thousands) are required to obtain the optimal results, so generating one iteration at a time is wasteful. The same portfolio can be solved using Risk Simulator in under a minute, compared to multiple hours using such a backward approach. Such a simulation-optimization approach also typically yields bad results and is not a true stochastic optimization approach. Be extremely careful of such methodologies when applying optimization to your models.

The following are two example optimization problems. One uses continuous decision variables, while the other uses discrete integer decision variables. In either model, you can apply discrete optimization, dynamic optimization, stochastic optimization, or even efficient frontiers with shadow pricing. Therefore, for simplicity, only the model setup is illustrated, and it is up to the user to decide which optimization process to run. Also, the continuous model uses the nonlinear optimization approach (because the portfolio risk computed is a nonlinear function, and the objective is a nonlinear function of portfolio returns divided by portfolio risk), while the second example, an integer optimization, is a linear optimization model (its objective and all of its constraints are linear). Therefore, these two examples encapsulate all of the aforementioned procedures.
Example: Optimization with Continuous Decision Variables
Figure I.54 illustrates the sample continuous optimization model. The example uses the Continuous Optimization file accessed through Risk Simulator | Examples. In this example, there are 10 distinct asset classes (e.g., different types of mutual funds, stocks, or assets), and the idea is to allocate the portfolio holdings as efficiently and effectively as possible, obtaining the best bang for the buck: generating the best portfolio returns possible given the risks inherent in each asset class.

FIGURE I.54 Continuous optimization model

To truly understand the concept of optimization, we must delve more deeply into this sample model to see how the optimization process can best be applied. The model shows the 10 asset classes, and each asset class has its own set of annualized returns and annualized volatilities. These return and risk measures are
annualized values so that they can be consistently compared across different asset classes. Returns are computed using the geometric average of the relative returns, while the risks are computed using the logarithmic relative stock returns approach. See the chapters on volatility models for details on computing the annualized volatility and annualized returns of a stock or asset class.

The Allocation Weights in column E hold the decision variables: the variables that need to be tweaked and tested such that the total weight is constrained at 100% (cell E17). Typically, to start the optimization, we set these cells to a uniform value; in this case, cells E6 to E15 are each set at 10%. In addition, each decision variable may have specific restrictions on its allowed range. In this example, the lower and upper allocations allowed are 5% and 35%, as seen in columns F and G; that is, each asset class may have its own allocation boundaries. Next, column H shows the return to risk ratio, which is simply the return percentage divided by the risk percentage; the higher this value, the higher the bang for the buck. The rest of the model shows the individual asset class rankings by returns, risk, return to risk ratio, and allocation. In other words, these rankings show at a glance which asset class has the lowest risk, the highest return, and so forth.

The portfolio's total return in cell C17 is SUMPRODUCT(C6:C15, E6:E15), that is, the sum of the allocation weights multiplied by the annualized returns for each asset class. As an example, with a portfolio of four assets we have

R_P = \omega_A R_A + \omega_B R_B + \omega_C R_C + \omega_D R_D

where R_P is the return on the portfolio, R_{A,B,C,D} are the individual returns on the projects, and \omega_{A,B,C,D} are the respective weights, or capital allocations, across each project.

In addition, the portfolio's diversified risk in cell D17 is computed as

\sigma_P = \sqrt{\sum_{i=1}^{n} \omega_i^2 \sigma_i^2 + \sum_{i=1}^{n} \sum_{j=1}^{m} 2\,\omega_i \omega_j \rho_{i,j} \sigma_i \sigma_j}

Here, \rho_{i,j} are the respective cross-correlations between the asset classes; hence, if the cross-correlations are negative, there are risk diversification effects and the portfolio risk decreases. However, to simplify the computations here, we assume zero correlations among the asset classes in this portfolio risk computation, but we assume the correlations when applying simulation on the returns, as will be seen later. Therefore, instead of applying static correlations among these different asset returns, we apply the correlations in the simulation assumptions themselves, creating a more dynamic relationship among the simulated return values.

Finally, the return to risk ratio, or Sharpe ratio, is computed for the portfolio. This value is seen in cell C18 and represents the objective to be maximized in this optimization exercise. To summarize, we have the following specifications in this example model:

Objective: Maximize Return to Risk Ratio (C18)
Decision Variables: Allocation Weights (E6:E15)
Restrictions on Decision Variables: Minimum and Maximum Required (F6:G15)
Constraints: Total Allocation Weights Sum to 100% (E17)
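As a rough cross-check of this model setup, the same maximization can be sketched in Python with scipy's SLSQP solver. This is not Risk Simulator's optimizer, and the 10 returns and volatilities below are hypothetical placeholders rather than the values in the example file:

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical annualized returns and volatilities for the 10 asset classes
    ret = np.array([0.10, 0.11, 0.12, 0.09, 0.13, 0.08, 0.10, 0.12, 0.11, 0.09])
    vol = np.array([0.12, 0.17, 0.20, 0.10, 0.25, 0.08, 0.14, 0.22, 0.16, 0.11])

    def neg_sharpe(w):
        # Portfolio return: SUMPRODUCT of weights and returns
        # Portfolio risk: sqrt(sum w_i^2 s_i^2), zero cross-correlations assumed
        return -(w @ ret) / np.sqrt(np.sum((w * vol) ** 2))

    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)  # weights sum to 100%
    bnds = [(0.05, 0.35)] * 10                                # 5% to 35% per asset
    res = minimize(neg_sharpe, np.full(10, 0.10), bounds=bnds,
                   constraints=cons, method='SLSQP')
    print(res.x.round(4), -res.fun)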
The model has been preset to run the optimization (simply click on Risk Simulator | Optimization | Run Optimization). Alternatively, the following procedure shows how to re-create the optimization model.
PROCEDURE
Open the example file and start a new profile by clicking on Risk Simulator | New Profile, and give it a name.

The first step in optimization is to set the decision variables. Select cell E6 and set the first decision variable (Risk Simulator | Optimization | Set Decision); click on the link icon to select the name cell (B6), as well as the lower and upper bound values in cells F6 and G6. Then, using Risk Simulator copy, copy the cell E6 decision variable and paste it to the remaining cells E7 to E15. Make sure to use Risk Simulator copy and paste, rather than Excel's copy and paste.

The second step is to set the constraint. There is only one constraint here: the total allocation in the portfolio must sum to 100%. Click on Risk Simulator | Optimization | Constraints... and select ADD to add a new constraint. Then select cell E17 and make it equal (=) to 100%. Click OK when done.

The final step is to set the objective function and start the optimization. Select the objective cell C18, select Risk Simulator | Optimization | Set Objective, then Risk Simulator | Optimization | Run Optimization, and choose the optimization of choice (Static Optimization, Dynamic Optimization, or Stochastic Optimization). To get started, select Static Optimization. Check that the objective cell is set to C18 and select Maximize. You can now review the decision variables and constraints if required, or click OK to run the static optimization.

Once the optimization is complete, you may select Revert to return the decision variables and the objective to their original values, or select Replace to apply the optimized decision variables. Typically, Replace is chosen after the optimization is done.
Figure I.55 shows the screen shots of these procedural steps. You can add simulation assumptions on the model's returns and risk (columns C and D) and apply the dynamic optimization and stochastic optimization for additional practice.

FIGURE I.55 Running continuous optimization in Risk Simulator

Results Interpretation
The optimization's final results are shown in Figure I.56, where the optimal allocation of assets for the portfolio is seen in cells E6:E15. That is, given the restriction that each asset may fluctuate only between 5% and 35%, and that the sum of the allocations must equal 100%, the allocation that maximizes the return to risk ratio is the one seen in Figure I.56.

FIGURE I.56 Continuous optimization results

A few important points have to be noted when reviewing the results and the optimization procedures performed thus far:
The correct way to run the optimization is to maximize the bang for the buck, or the returns to risk (Sharpe) ratio, as we have done. If instead we maximized total portfolio returns, the optimal allocation result would be trivial and would not require optimization: simply allocate 5% (the minimum allowed) to the eight lowest-returning assets, 35% (the maximum allowed) to the highest-returning asset, and the remaining 25% to the asset with the second-highest returns. However, when the portfolio is allocated this way, its risk is a lot higher than when maximizing the returns to risk ratio, although the portfolio returns by themselves are higher. In contrast, one can minimize the total portfolio risk, but the returns will then be lower.
TABLE I.3 Optimization Results

Objective                         Portfolio Returns    Portfolio Risk    Returns to Risk Ratio
Maximize Returns to Risk Ratio    12.69%               4.52%             2.8091
Maximize Returns                  13.97%               6.77%             2.0636
Minimize Risk                     12.38%               4.46%             2.7754
Table I.3 illustrates the results from optimizing the three different objectives. From the table, the best approach is to maximize the returns to risk ratio: for the same amount of risk, this allocation provides the highest return; conversely, for the same amount of return, it provides the lowest possible risk. This bang-for-the-buck, or returns to risk, approach is the cornerstone of the Markowitz efficient frontier in modern portfolio theory. That is, if we constrained the total portfolio risk level and successively increased it over time, we would obtain several efficient portfolio allocations for different risk characteristics. Thus, different efficient portfolio allocations can be obtained for different individuals with different risk preferences.
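The efficient frontier idea of successively relaxing the risk constraint can also be sketched in code. Again, this is an illustration with hypothetical inputs, not the software's procedure: for each risk cap, maximize returns subject to portfolio risk staying under the cap.

    import numpy as np
    from scipy.optimize import minimize

    ret = np.array([0.09, 0.12, 0.07, 0.10])    # hypothetical returns
    vol = np.array([0.12, 0.20, 0.08, 0.15])    # hypothetical volatilities
    risk = lambda w: np.sqrt(np.sum((w * vol) ** 2))

    for cap in [0.06, 0.08, 0.10, 0.12]:        # successively relaxed risk caps
        cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1},
                {'type': 'ineq', 'fun': lambda w, c=cap: c - risk(w)})
        res = minimize(lambda w: -(w @ ret), np.full(4, 0.25),
                       bounds=[(0, 1)] * 4, constraints=cons, method='SLSQP')
        print(cap, round(-res.fun, 4))          # one point on the frontier per cap

Plotting the risk caps against the achieved returns traces out the familiar concave efficient frontier.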
OPTIMIZATION WITH DISCRETE INTEGER VARIABLES
Sometimes the decision variables are not continuous but are discrete integers (e.g., 1, 2, 3) or binary (0 and 1); that is, the optimization can act through on-off switches or go/no-go decisions. Figure I.57 illustrates a project selection model with 12 projects listed. The example uses the Discrete Optimization file, found either on the Start menu at Start | Real Options Valuation | Risk Simulator | Examples or accessed directly through Risk Simulator | Example Models. Each project, as before, has its own returns (ENPV and NPV, for expanded net present value and net present value; the ENPV is simply the NPV plus any strategic real options value), costs of implementation, risks, and so forth. If required, this model can be modified to include required full-time equivalents (FTEs) and other resources for various functions, and additional constraints can be set on these additional resources. The inputs into this model are typically linked from other spreadsheet models. For instance, each project will have its own discounted cash flow or return on investment model. The application here is to maximize the portfolio's Sharpe ratio subject to some budget allocation. Many other versions of this model can be created, for instance maximizing the portfolio returns or minimizing the risks, or adding constraints such that the total number of projects chosen cannot exceed 6. All of these variations can be run using this existing model.
FIGURE I.57 Discrete integer optimization model
The example model has been preset and is ready to run (click on Risk Simulator | Change Profile, select the optimization profile, and then click on Risk Simulator | Optimization | Run Optimization), or follow the procedure below to re-create the optimization model from scratch.
PROCEDURE
Open the example file and start a new profile by clicking on Risk Simulator | New Profile, and give it a name.

The first step in optimization is to set up the decision variables. Set the first decision variable by selecting cell J4, select Risk Simulator | Optimization | Set Decision, click on the link icon to select the name cell (B4), and select the Binary variable type. Then, using Risk Simulator copy, copy the cell J4 decision variable and paste it to the remaining cells J5 to J15. This is the best method if you have only several decision variables and can give each one a unique name for identification later. Make sure to use the Risk Simulator copy and paste, rather than Excel's copy and paste functions.

The second step is to set the constraints. There are two constraints here: the total budget allocation in the portfolio must be less than $5,000, and the total number of projects must not exceed 6. Click on Risk Simulator | Optimization | Constraints... and select ADD to add a new constraint. Then select cell D17 and make it less than or equal (≤) to 5000. Repeat by setting cell J17 ≤ 6.
The final step is to set the objective function and start the optimization. Select cell C19 and select Risk Simulator | Optimization | Set Objective; then run the optimization using Risk Simulator | Optimization | Run Optimization, choosing the optimization of choice (Static Optimization, Dynamic Optimization, or Stochastic Optimization). To get started, select Static Optimization. Check that the objective cell is the Sharpe ratio, or portfolio returns to risk ratio, and select Maximize. You can now review the decision variables and constraints if required, or click OK to run the static optimization.
Figure I.58 shows the screen shots of these procedural steps. You can add simulation assumptions on the model's ENPV and risk (columns C and D) and apply the dynamic optimization and stochastic optimization for additional practice.

FIGURE I.58 Running discrete integer optimization

Results Interpretation
Figure I.59 shows a sample optimal selection of projects that maximizes the Sharpe ratio. In contrast, one can always maximize total revenues, but, as before, this is a trivial process: simply choose the highest-returning project and go down the list until you run out of money or exceed the budget constraint. Doing so will yield theoretically undesirable projects, as the highest-yielding projects typically carry higher risks. Now, if desired, you can replicate the optimization as a stochastic or dynamic optimization by adding assumptions on the ENPV and Risk values.

FIGURE I.59 Optimal selection of projects that maximizes the Sharpe ratio

For additional hands-on examples of optimization in action, see the various chapters on optimization models.
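For a small project set like this one, the discrete selection can even be checked by complete enumeration, since 2^12 = 4,096 go/no-go combinations is a tiny search space. Below is a sketch with hypothetical ENPV, cost, and risk figures (not the values in the example file), using the same zero-correlation risk aggregation as the continuous example:

    import math
    from itertools import product

    # Hypothetical ENPVs, costs, and risks for 12 candidate projects
    enpv = [458, 1954, 1599, 2251, 849, 758, 2845, 1235, 1945, 2250, 549, 525]
    cost = [1732, 859, 1845, 1645, 458, 52, 758, 115, 125, 458, 45, 105]
    risk = [54, 55, 127, 26, 29, 245, 102, 344, 243, 215, 22, 21]

    best, best_pick = -math.inf, None
    for pick in product([0, 1], repeat=12):          # all 4,096 portfolios
        if sum(p * c for p, c in zip(pick, cost)) > 5000 or sum(pick) > 6:
            continue                                  # budget and count constraints
        r = sum(p * e for p, e in zip(pick, enpv))
        s = math.sqrt(sum(p * k * k for p, k in zip(pick, risk)))
        if s > 0 and r / s > best:
            best, best_pick = r / s, pick
    print(best_pick, round(best, 4))

With more projects, the 2^n search space explodes, which is why dedicated integer optimization algorithms are used instead of enumeration.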
FORECASTING
Forecasting is the act of predicting the future, whether based on historical data or on speculation about the future when no history exists. When historical data exist, a quantitative or statistical approach is best; when no historical data exist, a qualitative or judgmental approach is usually the only recourse.
Different Types of Forecasting Techniques
Generally, forecasting can be divided into quantitative and qualitative techniques. Qualitative forecasting is used when little or no reliable historical, contemporaneous, or comparable data exist. Several qualitative methods exist, such as the Delphi or expert opinion approach (a consensus-building forecast by field experts, marketing experts, or internal staff members); management assumptions (target growth rates set by senior management); and market research, external data, or polling and surveys (data obtained through third-party sources, industry and sector indexes, or active market research). These estimates can be either single-point estimates (an average consensus) or a set of forecast values (a distribution of forecasts). The latter can be entered into Risk Simulator as a custom distribution, and the resulting forecasts can be simulated; that is, a nonparametric simulation can be run using the estimated data points themselves as the distribution.
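A nonparametric simulation of this kind is straightforward to sketch: resample the expert estimates directly rather than fitting a named distribution. The growth estimates below are hypothetical:

    import numpy as np

    rng = np.random.default_rng(5)
    # Hypothetical expert estimates of next year's revenue growth
    estimates = [0.02, 0.03, 0.03, 0.05, 0.08, 0.10]
    sims = rng.choice(estimates, size=10_000)   # resample the raw estimates
    print(sims.mean(), np.percentile(sims, [5, 95]))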
On the quantitative side of forecasting, the available data (or the data to be forecast) can be divided into time-series data (values that have a time element to them, such as revenues of different years, inflation rates, interest rates, market share, or failure rates); cross-sectional data (values that are time-independent, such as the grade point average of sophomore students across the nation in a particular year, given each student's SAT score, IQ, and number of alcoholic beverages consumed per week); or mixed panel data (a mixture of time-series and cross-sectional data, such as predicting sales over the next 10 years given budgeted marketing expenses and market share projections; the sales data are time-series, but exogenous variables such as marketing expenses and market share exist to help model the forecast predictions).

The Risk Simulator software provides the user several forecasting methodologies:
Auto-ARIMA
Basic Econometrics
Box-Jenkins ARIMA
Custom Distributions
J-S Curves
GARCH
Markov Chains
Maximum Likelihood
Multivariate Regression
Nonlinear Extrapolation
Spline Curves
Stochastic Process Forecasting
Time-Series Analysis

In general, to create forecasts, several quick steps are required:
Start Excel and enter or open your existing historical data.
Select the data, and click on Risk Simulator and select Forecasting.
Select the relevant forecasting application (Auto-ARIMA, Basic Econometrics, Box-Jenkins ARIMA, J-S Curves, GARCH, Markov Chains, Maximum Likelihood, Multivariate Regression, Nonlinear Extrapolation, Spline Curves, Stochastic Process Forecasting, or Time-Series Analysis), and enter the relevant inputs.
The following provides a quick review of each methodology and several quick getting-started examples. More detailed descriptions and example models of each of these techniques are found throughout this book.

Auto-ARIMA
The autoregressive integrated moving average (ARIMA) model is an advanced econometric modeling technique. ARIMA looks at historical time-series data and performs back-fitting optimization routines that account for historical autocorrelation (the relationship of one value to others in time) and correct for the nonstationary characteristics of the data; the resulting predictive model learns over time by correcting its forecasting errors. Advanced knowledge of econometrics is typically required to build good predictive models using this approach. The Auto-ARIMA module automates some of the traditional ARIMA modeling by automatically testing multiple permutations of model specifications and returning the best-fitting model. Running Auto-ARIMA is similar to running regular ARIMA forecasts, except that the P, D, Q inputs are no longer required: different combinations of these inputs are automatically run and compared. See Chapter 90, "Forecasting—Time-Series ARIMA," for more technical details on running and interpreting an ARIMA model. This approach can only be used to forecast time-series data, and it can include other independent variables in its forecasts.

Basic Econometrics
Econometrics refers to a branch of business analytics, modeling, and forecasting techniques for modeling the behavior of, or forecasting, certain business or economic variables. Running the Basic Econometrics models is similar to running regular regression analysis, except that the dependent and independent variables are allowed to be modified before a regression is run. See Chapter 87, "Forecasting—Multiple Regression," for details on running regression models. This approach can be used to model relationships in, or forecast, time-series, cross-sectional, and mixed data sets.

Box-Jenkins ARIMA
A summary of this methodology is provided in the Auto-ARIMA section above. See Chapter 90, "Forecasting—Time-Series ARIMA," for details on running an ARIMA model. This approach can only be used to forecast time-series data, and it can include other independent variables in its forecasts.
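The essence of the Auto-ARIMA idea, fitting many (p, d, q) specifications and keeping the best according to an information criterion, can be sketched with the statsmodels library (an assumption here; this is not Risk Simulator's implementation):

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    y = np.cumsum(rng.normal(0.5, 1.0, 200))    # hypothetical trending series

    best = None
    for p in range(3):                          # grid over AR and MA orders
        for q in range(3):
            fit = ARIMA(y, order=(p, 1, q)).fit()
            if best is None or fit.aic < best[0]:
                best = (fit.aic, (p, 1, q), fit)

    aic, order, model = best
    print(order)                                # the best-fitting specification
    print(model.forecast(steps=5))              # five periods ahead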
Custom Distributions
Using Risk Simulator, expert opinions can be collected and a customized distribution can be generated. This forecasting technique comes in handy when the data set is small or when the goodness of fit is bad under a distributional fitting routine. See Chapter 132, "Risk Hedging—Foreign Exchange Cash Flow Model," for details on creating a custom distribution (in that chapter, a custom distribution is created to forecast foreign exchange rates). This approach can be used to forecast time-series, cross-sectional, or mixed data sets.

J-S Curves
The J-curve, or exponential growth curve, is one in which the growth of the next period depends on the current period's level and the increase is exponential. This means that over time, the values will increase significantly from one period to the next. This model is typically used to forecast biological growth and chemical reactions over time. The S-curve, or logistic growth curve, starts off like a J-curve with exponential growth rates, but over time the environment becomes saturated (e.g., market saturation, competition, overcrowding), the growth slows, and the forecast value eventually ends up at a saturation or maximum level. This model is typically used to forecast market share or the sales growth of a new product from market introduction until maturity and decline. See Chapter 82, "Forecasting—Exponential J-Growth Curves," for details on running the J-curve model, and Chapter 85, "Forecasting—Logistic S-Growth Curves," for running the S-curve model. These approaches are used to forecast time-series data.

GARCH
The generalized autoregressive conditional heteroskedasticity (GARCH) model is used to model historical volatility and to forecast future volatility levels of a marketable security (e.g., stock prices, commodity prices, oil prices). The data set has to be a time series of raw price levels. GARCH first converts the prices into relative returns and then runs an internal optimization to fit the historical data to a mean-reverting volatility term structure, while assuming that the volatility is heteroskedastic in nature (it changes over time according to some econometric characteristics). See Chapter 166, "Volatility—Volatility Computations (Log Returns, Log Assets, Implied Volatility, Management Assumptions, EWMA, GARCH)," for details on the GARCH model. This approach is used to forecast the volatility time series of a marketable security. A lot of data must be available, and the data points must all be positive.

Markov Chains
A Markov chain exists when the probability of a future state depends on a previous state and the states, linked together, form a chain that reverts to a long-run steady-state level. This approach is typically used to forecast the market share of two competitors.
The required inputs are the starting probability that a customer in the first store (the first state) will return to the same store in the next period, versus the probability of switching to a competitor's store in the next period. See Chapter 86, "Forecasting—Markov Chains and Market Share," for details. This method is used to forecast a time series of probabilistic states and the long-run steady-state condition.
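A minimal sketch of a two-state Markov chain, with hypothetical switching probabilities, shows the long-run steady state emerging after repeated multiplication by the transition matrix:

    import numpy as np

    # Hypothetical switching probabilities between two competing stores
    P = np.array([[0.90, 0.10],    # store A customers: 90% stay, 10% switch
                  [0.20, 0.80]])   # store B customers: 20% switch to A, 80% stay

    share = np.array([1.0, 0.0])   # start with everyone at store A
    for _ in range(200):
        share = share @ P          # one period of the chain
    print(share)                   # long-run steady state, roughly [0.667, 0.333]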
Maximum Likelihood
Maximum likelihood estimation (MLE) is used to forecast the probability of an event occurring given some independent variables. For instance, MLE can be used to predict whether a credit line or debt will default given the obligor's characteristics (30 years old, single, salary of $100,000 per year, total credit card debt of $10,000), or the probability that a patient will have lung cancer given that the person is male, between the ages of 50 and 60, smokes five packs of cigarettes per month, and so forth. See Chapter 118, "Probability of Default—Empirical Model," for details on running this MLE model. The data set is typically cross-sectional, and the dependent variable has to be binary (with values of 0 or 1).

Multivariate Regression
Multivariate regression is used to model the relationship structure and characteristics of a certain dependent variable as it depends on other independent exogenous variables. Using the modeled relationship, we can forecast the future values of the dependent variable. The accuracy and goodness of fit of this model can also be determined. Linear and nonlinear models can be fitted in regression analysis. See Chapter 87, "Forecasting—Multiple Regression," for details on running regression models. This methodology can be used to model and forecast time-series, cross-sectional, or mixed data.

Nonlinear Extrapolation
In nonlinear extrapolation, the underlying structure of the data to be forecast is assumed to be nonlinear over time. For instance, a data set such as 1, 4, 9, 16, 25 is considered nonlinear (these data points are from a squared function). See Chapter 88, "Forecasting—Nonlinear Extrapolation and Forecasting," for details on nonlinear extrapolation forecasts. This methodology is typically applied to forecast time-series data. Sometimes cross-sectional data can also be used, if there is a nonlinear relationship between data points arranged from small to large values.

Spline Curves
Sometimes there are missing values in a time-series data set. For instance, interest rates for years 1 to 3 may exist, followed by years 5 to 8, and then year 10. Spline curves can be used to interpolate the missing years' interest rate values based on the data that exist. Spline curves can also be used to forecast or extrapolate values of future time periods beyond the period of available data. The data can be linear or nonlinear.
See Chapter 172, “Yield Curve—U.S. Treasury Risk-Free Rates and Cubic Spline Curves,” for details. This methodology is used to back-fit and forecast time-series data only.
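A cubic spline interpolation of a yield curve with missing years can be sketched with scipy; the years and rates below are hypothetical:

    import numpy as np
    from scipy.interpolate import CubicSpline

    years = np.array([1.0, 2, 3, 5, 6, 7, 8, 10])   # years 4 and 9 are missing
    rates = np.array([3.00, 3.20, 3.45, 3.85, 4.00, 4.10, 4.15, 4.30])
    cs = CubicSpline(years, rates)
    print(cs(4), cs(9))    # interpolate the missing years
    print(cs(12))          # extrapolate beyond the last observed year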
Stochastic Process Forecasting
Some variables cannot be readily predicted using traditional means; these variables are said to be stochastic. Nonetheless, most financial, economic, and naturally occurring phenomena (e.g., the motion of molecules through the air) follow a known mathematical law or relationship. Although the resulting values are uncertain, the underlying mathematical structure is known and can be simulated using Monte Carlo risk simulation. See Chapter 89, "Forecasting—Stochastic Processes," for details on stochastic process forecasting, where we forecast using random walk, Brownian motion, mean-reverting, and jump-diffusion processes.

Time-Series Analysis
In well-behaved time-series data (typical examples include the sales revenues and cost structures of large corporations), the values tend to have up to three elements: a base value, a trend, and seasonality. Time-series analysis takes these historical data, decomposes them into these three elements, and recomposes them into future forecasts. In other words, this forecasting method, like some of the others described, first performs a back-fitting (backcast) of historical data before providing estimates of future values (forecast). See Chapter 91, "Forecasting—Time-Series Analysis," for details on time-series decomposition models. This methodology is applicable only to time-series data.
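As a small illustration of a simulated stochastic process, here is one geometric Brownian motion path (a random walk for a stock price) with hypothetical drift and volatility:

    import numpy as np

    rng = np.random.default_rng(7)
    # Hypothetical inputs: starting price, drift, volatility, daily steps
    s0, mu, sigma, dt, steps = 100.0, 0.08, 0.25, 1 / 252, 252
    z = rng.standard_normal(steps)
    path = s0 * np.exp(np.cumsum((mu - 0.5 * sigma**2) * dt
                                 + sigma * np.sqrt(dt) * z))
    print(path[-1])   # one simulated year-end price; repeat many times to simulate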
1. Analytics—Central Limit Theorem

File Name: Analytics—Central Limit Theorem
Location: Modeling Toolkit | Analytics | Central Limit Theorem
Brief Description: Illustrating the concept of the Central Limit Theorem and the Law of Large Numbers using Risk Simulator's set-assumptions functionality, where many distributions, at the limit, are shown to approach normality
Requirements: Modeling Toolkit, Risk Simulator
This example shows how the Central Limit Theorem works, using Risk Simulator and without the application of any mathematical derivations. Specifically, we look at how the normal distribution sometimes can be used to approximate other distributions, and how some distributions can be made highly flexible, as in the case of the beta distribution.

The Central Limit Theorem refers to a set of weak-convergence results in probability theory. Intuitively, they all express the fact that the sum of many independent and identically distributed random variables tends to be distributed according to a particular attractor distribution. The most important and famous result states that if the summed variables have a finite variance, the sum will be approximately normally distributed. As many real processes yield distributions with finite variance, this theorem explains the ubiquity of the normal distribution. Also, the distribution of an average tends to be normal, even when the distribution from which the average is computed is decidedly nonnormal.
DISCRETE UNIFORM DISTRIBUTION
In this model, we look at various distributions and see that over a large sample size and various parameter values, they approach normality. We start off with a highly unlikely candidate: the discrete uniform distribution, also known as the equally likely outcomes distribution, in which the distribution has a set of N elements and each element has the same probability (Figure 1.1). This distribution is related to the uniform distribution, but its elements are discrete rather than continuous. The input requirement is that minimum < maximum, with both values integers. An example is tossing a single six-sided die: the probability of rolling a 1, 2, 3, 4, 5, or 6 is exactly the same, 1/6.

FIGURE 1.1 Tossing a single die and the discrete uniform distribution (values between 1 and 6)

So, how can a distribution like this be converted into a normal distribution? The idea lies in the combination of multiple distributions. Suppose you now take a pair of dice and toss them. You would have 36 possible outcomes; that is, the first
single die can be 1 and the second die can also be 1, or perhaps 1–2, or 1–3, and so forth, up to 6–6, for 36 outcomes, as in Figure 1.2.

FIGURE 1.2 Tossing two dice (36 possible outcomes)

Now, summing the two dice, you get an interesting set of results (Figure 1.3). If you then plot these sums, you get an approximation of a normal distribution (Figure 1.4). In fact, if you threw 12 dice together, added up their values, and repeated the process many times, you would get an extremely close approximation to a discrete normal distribution.
FIGURE 1.3 Summation of two dice
FIGURE 1.4 Approximation to a normal distribution

If you add 12 continuous uniform distributions, where the results can take on any continuous value between, say, 1 and 6, you obtain a virtually perfect normal distribution.
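This dice experiment is easy to replicate by simulation; the sample mean and standard deviation of the 12-dice sums come out very close to the theoretical 42 and 5.92:

    import numpy as np

    rng = np.random.default_rng(3)
    # 100,000 tosses of 12 six-sided dice, each toss summed
    sums = rng.integers(1, 7, size=(100_000, 12)).sum(axis=1)
    print(sums.mean(), sums.std())   # close to 42 and sqrt(12 * 35/12) = 5.92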
POISSON, BINOMIAL, AND HYPERGEOMETRIC DISTRIBUTIONS
Continuing with the examples, we show that for higher values of the distributional parameters (where many trials exist), these three distributions also tend to normality. For instance, in the Other Discrete worksheet in the model, notice that as the number of trials (N) in a binomial distribution increases, the distribution tends to normal. Even with a small probability (P) value, as the number of trials N increases, normality again reigns (Figure 1.5). In fact, once N × P exceeds about 30, you can use the normal distribution to approximate the binomial distribution. This is important because, at high N values, it is very difficult to compute the exact binomial distribution values, and the normal distribution is much easier to use.

We can test this approximation by using the Distributional Analysis tool (Start | Programs | Real Options Valuation | Risk Simulator | Distribution Analysis). As an example, we test a binomial distribution with N = 5000 and P = 0.50. We compute the mean of the distribution, NP = 2500, and the standard deviation of the binomial distribution, \sqrt{NP(1-P)} = \sqrt{5000(0.5)(1-0.5)} = 35.3553. We then enter these values into the normal distribution and look at the cumulative distribution function (CDF) over some range. Sure enough, the probabilities we obtain are close, although not precisely the same (Figure 1.6). The normal distribution does in fact approximate the binomial distribution when N × P is large (compare the results in Figures 1.6 and 1.7). The examples also examine the hypergeometric and Poisson distributions, where a similar phenomenon occurs.
FIGURE 1.5 Different faces of a binomial distribution
FIGURE 1.6 Normal approximation of the binomial
FIGURE 1.7 Binomial approximation of the normal
When the input parameters are large, these distributions also revert to the normal approximation; in fact, the normal distribution can be used to approximate the Poisson and hypergeometric distributions as well. Slight differences in value will exist, because the normal is a continuous distribution whereas the binomial, Poisson, and hypergeometric are discrete distributions.
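The approximation can also be checked numerically, for example with scipy (the cutoff value of 2,550 below is an arbitrary illustration):

    import math
    from scipy import stats

    n, p = 5000, 0.50
    mu, sd = n * p, math.sqrt(n * p * (1 - p))      # 2500 and 35.3553
    print(stats.binom.cdf(2550, n, p))              # exact binomial CDF
    print(stats.norm.cdf(2550, loc=mu, scale=sd))   # normal approximation, close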
BETA DISTRIBUTION
Finally, the Beta worksheet illustrates an interesting distribution, the beta distribution. Beta is a highly flexible and malleable distribution that can be made to approximate multiple other distributions. If the two input parameters, alpha and beta, are equal, the distribution is symmetrical. If either parameter is 1 while the other is greater than 1, the distribution is triangular or J-shaped. If alpha is less than beta, the distribution is positively skewed (most of the values are near the minimum value). If alpha is greater than beta, the distribution is negatively skewed (most of the values are near the maximum value).
2. Analytics—Central Limit Theorem—Winning Lottery Numbers

File Name: Analytics—Central Limit Theorem (Winning Lottery Numbers)
Location: Modeling Toolkit | Analytics | Central Limit Theorem—Winning Lottery Numbers
Brief Description: Applying distributional fitting to past winning lottery numbers to illustrate the Central Limit Theorem and the Law of Large Numbers
Requirements: Modeling Toolkit, Risk Simulator
This fun model is used to illustrate the behavior of seemingly random events. For the best results, first review the Central Limit Theorem model in Chapter 1 before going over this example model.
As in the Central Limit Theorem model, tossing a single six-sided die yields a discrete uniform distribution with equal probabilities (1/6) for each side of the die. In contrast, when a pair of dice is tossed, there are 36 permutations of outcomes, and the sum of the 36 outcomes actually follows a discrete normal distribution; when more dice are tossed, the resulting sums are normally distributed. The same concept applies here. Suppose that in a lottery there are 6 numbers you have to choose. First, you choose 5 numbers ranging from 1 to 47, without repetition; then you choose a sixth, special number, from 1 to 27. You need to hit all 6 numbers to win the lottery jackpot. Clearly, assuming that the lottery balls are selected truly at random and fairly (i.e., the State Lottery Commission actually does a good job), then over many trials and many lottery games, the distribution of each value selected follows a discrete uniform distribution. That is, the probability of the number 1 being chosen is 1/47, the probability of the number 2 being chosen is also 1/47, and so forth. However, if the 5 balls randomly chosen between 1 and 47 are summed, an interesting phenomenon occurs. The Historical worksheet in the model shows the actual biweekly historical winning lottery numbers for the past 6 years. Summing the 5 values and performing a distributional fitting routine using Risk Simulator reveals that the distribution is indeed normal.

The probability and odds of hitting the jackpot are clearly very small. In fact, they can be computed using a combinatorial equation. The probability of selecting the 5 exact numbers out of 47 is 1 in 1,533,939. That is, we have

C_x^n = \frac{n!}{x!(n-x)!} = \frac{47!}{5!(47-5)!} = \frac{47 \times 46 \times 45 \times 44 \times 43}{5 \times 4 \times 3 \times 2 \times 1} = 1{,}533{,}939

where C represents the number of possible combinations, n is the total number of balls in the population, and x represents the number of balls chosen at a time.
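These combinatorial figures can be verified in one line each with Python's math.comb:

    from math import comb

    ways_five = comb(47, 5)    # 1,533,939 ways to choose the 5 numbers
    print(ways_five)
    print(ways_five * 27)      # 41,416,353, i.e., 1-in-41,416,353 jackpot odds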
FIGURE 2.1 Lottery winnings and payoffs
FIGURE 2.2 Lottery numbers and the discrete uniform distribution

The chance of choosing the sixth, special number is 1 in 27; hence, its probability is 1/27. Therefore, the number of ways of choosing the correct 5 numbers plus the special number is 1,533,939 × 27 = 41,416,353, so the odds are 1 in 41,416,353, a 0.000002415% probability of hitting the jackpot. In fact, from the State Lottery Commission, we see that the published odds are exactly as computed (Figure 2.1). Also as expected, performing a data-fitting routine on the raw numbers (the first 5 values, between 1 and 47) yields a discrete uniform distribution (see Figure 2.2 or the Report 2 worksheet in the model). However, when we perform a data fitting on the sum of the 5 values, we obtain an interesting result: the distribution is, as expected, normal (see Figure 2.3 or the Report worksheet in the model).
FIGURE 2.3 Sum of the lottery numbers is normally distributed
FIGURE 2.4 Simulation of lottery numbers

In fact, running the simulation for 10,000 trials using the fitted normal assumption, we obtain the forecast results seen in Figure 2.4. The theoretical distribution predicts that 90% of the time the sum of the first five winning lottery numbers will be between 71 and 167 (rounded), and 50% of the time it will be between 99 and 138 (rounded), as seen in Figures 2.4 and 2.5.

FIGURE 2.5 Confidence interval of lottery results
We then looked at the raw historical winning numbers and computed the percentage of winning sequences whose sums fall within this range (the actual statistics are shown in Figure 2.6).

FIGURE 2.6 Actual empirical statistics of lottery drawings

Sure enough, the empirical statistics of the actual winning numbers are very close to the theoretically predicted values. This means that, had you followed the statistics and picked your first 5 numbers such that their sum fell between 71 and 167, your odds would have improved significantly! This is, of course, only an academic example and does not guarantee any results, so don't go rushing out to buy lottery tickets!
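The lottery draw itself is easy to replicate by simulation: draw 5 balls from 1 to 47 without replacement and sum them. The percentiles land close to the bands quoted above:

    import numpy as np

    rng = np.random.default_rng(11)
    # 10,000 simulated games: 5 balls from 1..47 without replacement, summed
    sums = np.array([rng.choice(np.arange(1, 48), size=5, replace=False).sum()
                     for _ in range(10_000)])
    print(np.percentile(sums, [5, 25, 75, 95]))  # compare with the 90% and 50% bands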
3. Analytics—Flaw of Averages

File Name: Analytics—Flaw of Averages
Location: Modeling Toolkit | Analytics | Flaw of Averages
Brief Description: Illustrating the concept of the Flaw of Averages (where using the simple average sometimes yields incorrect answers) through the introduction of harmonic averages, geometric averages, medians, and skew
Requirements: Modeling Toolkit
This model does not require any simulations or sophisticated modeling. It is simply an illustration of various ways to look at the first moment of a distribution (measuring the central tendency and location of a distribution); that is, the mean or average value of a distribution or set of data points. The model shows how a simple arithmetic average can be wrong in certain cases and how harmonic averages, geometric averages, and medians are sometimes more appropriate.
FLAW OF AVERAGES: GEOMETRIC AVERAGE

Suppose you purchased a stock at some starting point (call it time zero) for $100.
Then, after one period (e.g., a day, a month, a year), the stock price goes up to $200 (period one), at which point you could have sold and cashed in the profits; but you do not, and you hold it for another period. Further suppose that at the end of period two the stock price drops back down to $100, and then you decide to sell. Assuming for simplicity that there are no transaction costs or hidden fees, what is your average return for these two periods?

Period    Stock Price
0         $100
1         $200
2         $100
First, let's compute it the incorrect way, using arithmetic averages:

Absolute Return from Period 0 to Period 1:    100%
Absolute Return from Period 1 to Period 2:    –50%
Average Return for both periods:               25%
That is, the return for the first holding period is (New – Old)/Old, or ($200 − $100)/$100 = 100%, which makes sense: you started with $100 and it became $200, a 100% return. Next, the second holding period return is ($100 − $200)/$200 = –50%, which also makes sense: you started with $200 and ended up with $100, losing half the value. So, the arithmetic average of 100% and –50% is (100% + [–50%])/2 = 25%. Well, clearly you did not make a 25% return; you started with $100 and ended up with $100. How can you have a 25% average return? This simple arithmetic mean approach is incorrect. The correct methodology is to use geometric average returns, applying something called relative returns:

Period    Stock Price    Relative Return
0         $100           —
1         $200           2.00
2         $100           0.50
An absolute return is simply the relative return less one. For instance, going from $10 to $11 implies an absolute return of ($11 – $10)/$10 = 10%, whereas the relative return is $11/$10 = 1.10; subtracting 1 from this value recovers the absolute return. Likewise, 1.1 means a 10% return and 0.9 means a –10% return, and so forth. The preceding table shows the two relative returns for the two periods. We then compute the geometric average:

$$\text{Geometric Average} = \sqrt[N]{\frac{X_1}{X_0}\cdot\frac{X_2}{X_1}\cdots\frac{X_N}{X_{N-1}}} - 1$$

That is, we take the Nth root of the product of the relative returns, where N is the total number of periods. Here we obtain a geometric average of 0.00%.
Alternatively, we can use Excel's formula "=POWER(2.00*0.50,1/2)–1" to obtain 0%. Note that the POWER function in Excel takes X to the power Y in "POWER(X,Y)"; the square root (N is 2 periods in this case, not including period 0) is the same as taking the product to the power of 1/2. This 0% average return for the two periods makes a lot more sense. Be careful when you see large stock or fund returns, as some may actually be computed using arithmetic averages. Wherever the data form a time series and fluctuate substantially in value, be careful when computing the series' average; the geometric average might be more appropriate. Note: For simplicity, you can also use Excel's GEOMEAN function on the relative returns and deduct one from the result: =GEOMEAN(2,0.5)–1. The GEOMEAN must be taken of the relative returns, not of the raw stock prices themselves.
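The same computation takes only a few lines of Python (a minimal sketch, using the two relative returns from the table above):

import numpy as np

prices = np.array([100.0, 200.0, 100.0])
relative = prices[1:] / prices[:-1]                        # 2.00, 0.50

arithmetic = np.mean(relative - 1)                         # 0.25, the misleading 25%
geometric = np.prod(relative) ** (1 / len(relative)) - 1   # 0.0, the true average return

print(arithmetic, geometric)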
FLAW OF AVERAGES: HARMONIC AVERAGE

Say there are three friends, Larry, Curly, and Moe, who happen to be cycling enthusiasts apart from being movie stars. Suppose each has a different level of physical fitness, and they ride their bikes at constant speeds of 10 miles per hour (mph), 20 mph, and 30 mph, respectively.

Biker    Constant Miles/Hour
Larry    10
Curly    20
Moe      30
The question is, how long will it take, on average, for all three cyclists to complete a 10-mile course? Well, let's first solve this problem the incorrect way, in order to understand why it is so easy to commit the Flaw of Averages. Computing it the wrong way, we obtain the average speed of all three bikers, that is, (10 + 20 + 30)/3 = 20 mph. So, it would take 10 miles / 20 miles per hour = 0.5 hours to complete the trek on average.

Biker      Constant Miles/Hour
Larry      10
Curly      20
Moe        30
Average:   20 miles/hour
Distance:  10 miles
Time:      0.5 hours to complete the 10-mile trek
Had we done this, we would have committed a serious mistake: the true average time is not the 0.5 hours given by the simple arithmetic average. Let us prove why this is the case. First, we show the time it takes each biker to complete 10 miles; then we simply take the average of these times.
Biker      Constant Miles/Hour    Time to Complete 10 Miles
Larry      10                     1.00 hours
Curly      20                     0.50 hours
Moe        30                     0.33 hours
Average:                          0.6111 hours
So, the true average is actually 0.6111 hours, or 36.67 minutes, not 30 minutes or 0.5 hours. How do we compute this true average? The answer lies in the harmonic average, defined as

$$\text{Harmonic Average} = \frac{N}{\sum_{i=1}^{N} (1/X_i)}$$

where N is the total number of elements (in this case, 3) and the X_i are the values of the individual elements. That is, we have these computations:

Biker         Constant Miles/Hour
Larry         10
Curly         20
Moe           30

N             3.0000
SUM (1/X)     0.1833
Harmonic      16.3636
Arithmetic    20.0000
Therefore, the harmonic average speed of 16.3636 mph means that a 10-mile trek would take 10/16.3636 = 0.6111 hours (36.67 minutes). A simple arithmetic average yields wrong results whenever you are averaging rates and ratios that are defined per unit of time.
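Here is the same calculation as a short, self-contained Python sketch:

speeds = [10.0, 20.0, 30.0]                 # constant speeds in mph

n = len(speeds)
harmonic = n / sum(1 / s for s in speeds)   # 16.3636 mph
arithmetic = sum(speeds) / n                # 20.0 mph, the misleading figure

print(harmonic, 10 / harmonic)    # 16.36..., 0.6111 hours for the 10-mile course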
FLAW OF AVERAGES: SKEWED AVERAGE

Assume that you are in a room with 10 colleagues and are tasked with figuring out the average salary of the group. You ask around the room to obtain 10 salary data points, and then quantify the group's average:

Person    Salary
1         $75,000
2         $120,000
3         $95,000
4         $69,800
5         $72,000
6         $75,000
7         $108,000
8         $115,000
9         $135,000
10        $100,000
Average   $96,480
This average is, of course, the arithmetic average, the sum of all the individual salaries divided by the number of people present. Suddenly, a senior executive enters the room and participates in the little exercise. His salary, with all the executive bonuses and perks, came to $20 million last year. What happens to the newly computed average?
Person    Salary
1         $75,000
2         $120,000
3         $95,000
4         $69,800
5         $72,000
6         $75,000
7         $108,000
8         $115,000
9         $135,000
10        $100,000
11        $20,000,000
Average   $1,905,891
The average now becomes $1.9 million. This value is clearly not representative of the central tendency, the "true average," of the distribution. Looking at the raw data, saying that the average salary of the group is $96,480 per person makes far more sense than $1.9 million per person. What happened? An outlier existed. The $20 million salary is an outlier in the distribution, skewing the distribution to the right. When there is such an obvious skew, the median is a better measure, as the median is less susceptible to outliers than the simple arithmetic average.

Median for 10 people:    $97,500
Median for 11 people:    $100,000
Thus, $100,000 is a much better representative of the group's "true average." Other approaches exist to find the "true" or "truncated" mean, including performing a single-variable statistical hypothesis t-test on the sample raw data or simply removing the outliers. However, be careful when dealing with outliers; sometimes outliers are very important data points. For instance, extreme stock price movements may actually carry significant information. Such extreme price movements may not be outliers at all but part of doing business, because extreme situations do occur (i.e., the distribution is leptokurtic, with high kurtosis) and should be modeled if the true risk profile is to be constructed. Another way to spot an outlier is to compare the mean with the median. If they are very close, the distribution is probably roughly symmetric. If the mean and median are far apart, the distribution is skewed, and the mean of a skewed distribution is typically a bad approximation of its true center.
Care should therefore be taken when you spot a high positive or negative skew. You can use Excel's SKEW function to compute the skew:

Skew for 10 people:    0.28
Skew for 11 people:    3.32
As expected, the skew is high for the 11-person group as there is an outlier and the difference between the mean and median is significant.
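The mean/median/skew comparison is easy to reproduce in code. The sketch below uses Python's standard library plus SciPy; scipy.stats.skew with bias=False uses the same sample-skewness formula as Excel's SKEW, so the values should land near the 0.28 and 3.32 reported above:

import statistics as st
from scipy.stats import skew

salaries = [75000, 120000, 95000, 69800, 72000,
            75000, 108000, 115000, 135000, 100000]
with_exec = salaries + [20_000_000]

print(st.mean(salaries), st.median(salaries))      # 96480, 97500
print(st.mean(with_exec), st.median(with_exec))    # about 1,905,891 and 100000
print(skew(salaries, bias=False), skew(with_exec, bias=False))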
4. Analytics—Mathematical Integration Approximation Model

File Name: Analytics—Mathematical Integration Approximation Model
Location: Modeling Toolkit | Analytics | Mathematical Integration Approximation Model
Brief Description: Applying simulation to estimate the area under a curve without the use of any calculus-based mathematical integration
Requirements: Modeling Toolkit, Risk Simulator
THEORETICAL BACKGROUND

There are several ways to compute the area under a curve. The best approach is mathematical integration of a function or equation. For instance, if you have the function f(x) = x³, then the area under the curve between 1 and 10 is found through the integral:

$$\int_1^{10} x^3\,dx = \left[\frac{x^4}{4}\right]_1^{10} = 2499.75$$
Similarly, any function f(x) can be solved and found this way. However, for complex functions, applying mathematical integration can be cumbersome. This is where simulation comes in. To illustrate, how would you solve a seemingly simple problem like this one?

$$\int_A^B \frac{1}{x^4 - \sin(1 - x^4)}\,dx$$
Well, dust off those old advanced calculus books and you get the solution:

$$\int_A^B \frac{dx}{x^4 - \sin(1-x^4)} = \left[-\frac{\log\!\left(\sin(x)^{1/4} - x\,(1+\sin(x))^{1/4}\right) - \log\!\left(\sin(x)^{1/4} + x\,(1+\sin(x))^{1/4}\right) - 2\arctan\!\left(\frac{x\,(1+\sin(x))^{1/4}}{\sin(x)^{1/4}}\right)}{4\,\sin(x)^{3/4}\,(1+\sin(x))^{1/4}}\right]_A^B$$
The point is, sometimes simple-looking functions get really complicated. Using Monte Carlo simulation, we can approximate the area under the curve. Note that this approach yields approximations only, not exact values. Let's see how it works. The area under a curve can be seen as the shaded area (A.T.R.U.B) in Figure 4.1, between the x-axis values of A and B and the y = f(x) curve. Looking closely at the graph, one can imagine two boxes. Specifically, if the area of interest is the shaded region A.T.R.U.B, then we can draw two imaginary boxes, A.T.U.B and T.Q.R.U. Computing the area of the first box is simple, as it is a plain rectangle. The second box is trickier, as part of the area inside it lies below the curve and part lies above it. To obtain the portion of box T.Q.R.U that lies under the curve, we run a simulation with a uniform distribution between the values A and B on the x-axis and compute the corresponding f(x) values, while at the same time simulating a uniform distribution between f(A) and f(B) on the y-axis. We then find the proportion of trials in which the simulated y value falls at or below the curve, that is, at or below f(x). Multiplying this proportion by the area of box T.Q.R.U approximates the area under the curve within that box. Adding the area of the smaller box A.T.U.B then gives the entire area under the curve.
FIGURE 4.1 Graphical representation of a mathematical integration
MODEL BACKGROUND

The analysis in the Model worksheet illustrates an approximation of a simple equation; namely, we value:

$$\int_1^{10} x^3\,dx$$

Solving this integration, we obtain the value under the curve:

$$\int_1^{10} x^3\,dx = \left[\frac{x^4}{4}\right]_1^{10} = 2499.75$$
Now we attempt to solve this using simulation through the model shown in Figure 4.2.
PROCEDURE

1. First, enter the minimum and maximum values on the x-axis. In this case, they are 1 and 10 (cells C11 and D11 in Figure 4.2). This represents the range on the x-axis we are interested in.
2. Next, compute the corresponding y-axis values (cells C12 and D12). For instance, in this example we have y = f(x) = x³, which means that for x = 1 we have y = 1³ = 1, and for x = 10 we have y = 10³ = 1000.
3. Set two uniform distribution assumptions between the minimum and maximum values, one for x and one for y (cells E11 and E12).
4. Compute the PDF equation (in this example, y = f(x) = x³) in cell E13, linking the x value in the equation to the simulated x value.
5. Create a dummy 0-1 variable, set it as "IF(SimulatedY <= PDFEquation, 1, 0)" in cell E15, and set it as a forecast cell. This means that if the simulated y value is under the curve on the graph, the cell returns the value 1, and the value 0 if it is above the curve.
FIGURE 4.2 Simulation model for mathematical integration
6. Run the simulation and manually type the simulated mean value into cell E16.
7. Compute the approximate value under the curve using "SimulatedMean*(MaxX–MinX)*(MaxY–MinY) + (MaxX–MinX)*MinY", that is, the portion of area T.Q.R.U under the curve plus the area of box A.T.U.B.
The result is 2494.11, a close approximation to 2499.75, with an error of about 0.23%. Again, this is a fairly decent result but still an approximation. This approach relieves the analyst from having to apply advanced calculus to solve difficult integration problems.
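The whole procedure condenses to a few lines of code. This Python sketch (illustrative only; the trial count and seed are arbitrary) mirrors the worksheet logic for the integral of x³ from 1 to 10:

import numpy as np

rng = np.random.default_rng(42)
a, b = 1.0, 10.0
f = lambda x: x**3
fa, fb = f(a), f(b)            # 1 and 1000, the y-axis box bounds

n = 100_000
x = rng.uniform(a, b, n)       # simulated x values
y = rng.uniform(fa, fb, n)     # simulated y values

frac_under = np.mean(y <= f(x))                         # share of box T.Q.R.U under the curve
area = frac_under * (b - a) * (fb - fa) + (b - a) * fa  # plus the lower rectangle A.T.U.B

print(area)   # close to 2499.75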
5. Analytics—Projectile Motion

File Name: Analytics—Projectile Motion
Location: Modeling Toolkit | Analytics | Projectile Motion
Brief Description: Applying simulation to compute the probabilistic distance a missile will travel, given the angle of attack, initial velocity, and probability of midflight failure
Requirements: Modeling Toolkit, Risk Simulator
This example illustrates how a physics model can be built in Excel and simulated to account for the uncertainty in inputs. The model shows the motion of a projectile (e.g., a missile) launched from the ground with a particular initial velocity at a specific angle. It computes the height and distance of the missile at various points in time and maps the motion in a graph. Further, we model the missile's probability of failure (i.e., we assume the missile can fail midflight; it is a crude projectile rather than a smart bomb, with no capability for midcourse corrections and no onboard navigational and propulsion controls). We know from physics that the location of the projectile can be mapped on an x-axis and y-axis chart, representing the distance and height of the projectile at various points in time, determined by:

$$\text{Location on x-axis:}\quad X_t = V_0\,t\cos(\theta)$$
$$\text{Location on y-axis:}\quad Y_t = V_0\,t\sin(\theta) - \tfrac{1}{2}\,g t^2$$

where t = time, V₀ = initial velocity, g = gravitational acceleration, and θ = angle of attack.
FIGURE 5.1 Assumptions in the missile trajectory model

In addition, assuming there is no midflight failure, we can compute:

$$\text{Time to reach max height} = \frac{V_0\sin(\theta)}{g} \qquad \text{Max height} = \frac{V_0^2\,[\sin(\theta)]^2}{2g}$$
$$\text{Time to impact} = \frac{2V_0\sin(\theta)}{g} \qquad \text{Max range} = \frac{V_0^2\sin(2\theta)}{g}$$
The inputs are initial velocity, angle, time step, and failure rate (Figure 5.1). The time step (delta t) is any positive value representing how granular you wish the distance and height computations to be in terms of time. For instance, if you enter 0.10 for the time step, the time period advances 0.1 hours each period. Velocity, angle, and failure rate can be uncertain and hence subject to simulation. The results are shown as a table and a graphical representation. Failure is simulated using a Bernoulli distribution (a yes-no or 0-1 distribution, with 1 representing failure and 0 representing no failure), as seen in Figure 5.2. Figure 5.3 illustrates a sample flight path or trajectory of the missile.
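The deterministic trajectory table itself is straightforward to reproduce. Below is a minimal Python sketch of the physics (the input values are illustrative placeholders, not the model's actual cell values, and the midflight-failure simulation is omitted):

import numpy as np

v0 = 100.0                 # initial velocity (units assumed consistent with g)
theta = np.radians(45.0)   # angle of attack
g = 9.81                   # gravitational acceleration
dt = 0.1                   # time step (delta t)

t_impact = 2 * v0 * np.sin(theta) / g
t = np.arange(0.0, t_impact + dt, dt)

x = v0 * t * np.cos(theta)                                    # distance at each step
y = np.maximum(v0 * t * np.sin(theta) - 0.5 * g * t**2, 0.0)  # height, floored at ground level

print(t_impact)                       # time to impact
print(v0**2 * np.sin(2 * theta) / g)  # closed-form max range
print(y.max())                        # max height, about v0^2 sin^2(theta) / (2g)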
PROCEDURE

This model has preset assumptions and forecasts and is ready to be run by following these steps:

1. Go to the Model worksheet, click on Risk Simulator | Change Profile, and select the Projectile Motion profile.
2. Run the simulation by clicking the RUN icon or Risk Simulator | Run Simulation.
3. On the forecast chart, select Right-Tail, type in 110, and hit TAB on your keyboard.
FIGURE 5.2 Missile coordinates midflight and probability of missile failure

The results indicate that there is only a 6.80% chance that you would be hit by the missile if you lived 110 miles away, assuming all the inputs are valid (Figure 5.4).

Alternatively, you can re-create the model assumptions and forecasts by following these steps:

1. Go to the Model worksheet, click on Risk Simulator | Change Profile, and select the Projectile Motion profile.
2. Select cell E4 and define it as an input assumption (Risk Simulator | Set Input Assumption), choose a distribution of your choice (e.g., Normal), and click OK. Repeat for cells E5 and E7, one at a time. Then select cell E9 and make it a Discrete Uniform distribution between 0 and 44; make cell E10 a Bernoulli distribution, click on the link icon, link it to cell E7, make sure to check the Enable Dynamic Simulations check box, and click OK (Figure 5.5).
FIGURE 5.3 Graphical representation of the missile flight path
FIGURE 5.4 Probabilistic forecast of projectile range

3. Select cell D60 and make it a forecast (Risk Simulator | Set Output Forecast).
4. Run the simulation by clicking the RUN icon or Risk Simulator | Run Simulation.
FIGURE 5.5 Simulating midflight failures
6. Analytics—Regression Diagnostics

File Name: Analytics—Regression Diagnostics
Location: Modeling Toolkit | Analytics | Regression Diagnostics
Brief Description: Most forecast models (e.g., time-series, extrapolation, ARIMA, regression, etc.) suffer from model errors because analysts neglect to check the data for the correct model assumptions (autocorrelation, heteroskedasticity, micronumerosity, multicollinearity, nonlinearity, outliers, seasonality, sphericity, stationarity, structural breaks, etc.). This model shows how to run a Diagnostic Analysis on your data before generating forecast models.
Requirements: Modeling Toolkit, Risk Simulator
This model provides a sample data set on which we can run Risk Simulator's Diagnostic Tool to determine the econometric properties of the data. The diagnostics include checking the data for heteroskedasticity, nonlinearity, outliers, specification errors, micronumerosity, stationarity and stochastic properties, normality and sphericity of the errors, and multicollinearity. Each test is described in more detail in its respective report in the model.
PROCEDURE

To run the analysis, follow these instructions:

1. Go to the Time-Series Data worksheet and select the data, including the variable names (cells C5:H55).
2. Click on Risk Simulator | Tools | Diagnostic Tool.
3. Check the data and select the Dependent Variable Y from the drop-down menu. Click OK when finished (Figure 6.1).

Spend some time reading through the reports generated from this diagnostic tool. A common violation in forecasting and regression analysis is heteroskedasticity, that is, the variance of the errors increases over time (see Figure 6.2 for test results using the diagnostic tool). Visually, the width of the vertical data fluctuations increases, or fans out, over time, and typically the coefficient of determination (the R-squared coefficient) drops significantly when heteroskedasticity exists. If the variance of the dependent variable is not constant, then the error's variance will not be constant.
FIGURE 6.1 Running the data diagnostic tool

Unless the heteroskedasticity of the dependent variable is pronounced, its effect will not be severe: the least-squares estimates will still be unbiased, and the estimates of the slope and intercept will be either normally distributed, if the errors are normally distributed, or at least asymptotically normally distributed (as the number of data points becomes large), if the errors are not normally distributed. The estimate for the variance of the slope and the overall variance will be inaccurate, but the inaccuracy is not likely to be substantial if the independent-variable values are symmetric about their mean. If the number of data points is small (micronumerosity), it may be difficult to detect assumption violations. With small sample sizes, assumption violations such as nonnormality or heteroskedasticity of variances are difficult to detect even when they are present. With a small number of data points, linear regression offers less protection against violation of assumptions, and it may be hard to determine how well the fitted line matches the data or whether a nonlinear function would be more appropriate. Even if none of the test assumptions is violated, a linear regression on a small number of data points may not have sufficient power to detect a significant difference between the slope and zero, even if the slope is nonzero. The power depends on the residual error, the observed variation in the independent variable, the selected significance (alpha) level of the test, and the number of data points. Power decreases as the residual variance increases, decreases as the significance level is decreased (i.e., as the test is made more stringent), increases as the variation in the observed independent variable increases, and increases as the number of data points increases.
FIGURE 6.2 Results from tests of outliers, heteroskedasticity, micronumerosity, and nonlinearity
Values may not be identically distributed because of the presence of outliers, which are anomalous values in the data. Outliers may have a strong influence over the fitted slope and intercept, giving a poor fit to the bulk of the data points, and they tend to increase the estimate of residual variance, lowering the chance of rejecting the null hypothesis (i.e., creating higher prediction errors). They may be due to recording errors, which may be correctable, or to not all of the dependent-variable values being sampled from the same population. Apparent outliers may also be due to the dependent-variable values coming from the same, but nonnormal, population. However, a point may be an unusual value in either an independent or dependent variable without necessarily being an outlier in the scatter plot. In regression analysis, the fitted line can be highly sensitive to outliers; in other words, least-squares regression is not resistant to outliers, and thus neither is the fitted-slope estimate. A point vertically removed from the other points can cause the fitted line to pass close to it, instead of following the general linear trend of the rest of the data, especially if the point is relatively far horizontally from the center of the data. However, great care should be taken when deciding whether outliers should be removed. Although in most cases the regression results look better when outliers are removed, a priori justification must first exist. For instance, if one is regressing the performance of a particular firm's stock returns, outliers caused by downturns in the stock market should be included; these are not truly outliers but inevitabilities in the business cycle, and discarding them and then using the regression equation to forecast one's retirement fund based on the firm's stock will yield incorrect results at best. In contrast, suppose the outliers are caused by a single nonrecurring business condition (e.g., a merger and acquisition) and such structural changes are not forecast to recur. Then these outliers should be removed and the data cleansed prior to running a regression analysis. The analysis here only identifies outliers; it is up to the user to determine whether they should remain or be excluded. Sometimes a nonlinear relationship between the dependent and independent variables is more appropriate than a linear relationship, in which case running a linear regression will not be optimal. If the linear model is not the correct form, then the slope and intercept estimates and the fitted values from the linear regression will be biased, and the fitted slope and intercept estimates will not be meaningful. Over a restricted range of independent or dependent variables, nonlinear models may be well approximated by linear models (this is in fact the basis of linear interpolation), but for accurate prediction, a model appropriate to the data should be selected. A nonlinear transformation should be applied to the data before running the regression. One simple approach is to take the natural logarithm of the independent variable (other approaches include taking the square root or raising the independent variable to the second or third power) and run the regression or forecast using the nonlinearly transformed data. Another typical issue when forecasting time-series data is whether the independent-variable values are truly independent of each other or whether they are dependent.
Dependent-variable values collected over a time series may be autocorrelated. For serially correlated dependent-variable values, the estimates of the slope and intercept will be unbiased, but the estimates of their forecasts and variances will not be reliable, and hence the validity of certain statistical goodness-of-fit tests will be flawed.
For instance, interest rates, inflation rates, sales, revenues, and many other time-series data are typically autocorrelated, where the value in the current period is related to the value in a previous period, and so forth. (Clearly, the inflation rate in March is related to February's level, which in turn is related to January's level, etc.) Ignoring such blatant relationships will yield biased and less accurate forecasts. In such events, an autocorrelated regression model or an ARIMA model may be better suited (Risk Simulator | Forecasting | ARIMA). Finally, the autocorrelation functions of a series that is nonstationary tend to decay slowly (see the Nonstationary report in the model). If the autocorrelation AC(1) is nonzero, the series is first-order serially correlated. If AC(k) dies off more or less geometrically with increasing lag k, the series follows a low-order autoregressive process; if AC(k) drops to zero after a small number of lags, the series follows a low-order moving-average process. The partial autocorrelation PAC(k) measures the correlation of values that are k periods apart after removing the correlation from the intervening lags. If the pattern of autocorrelation can be captured by an autoregression of order less than k, then the partial autocorrelation at lag k will be close to zero. The Ljung-Box Q-statistics and their p-values at lag k test the null hypothesis that there is no autocorrelation up to order k. The dotted lines in the plots of the autocorrelations are the approximate two-standard-error bounds; if the autocorrelation is within these bounds, it is not significantly different from zero at the 5% significance level. Autocorrelation measures the relationship of the dependent Y variable to its own past. Distributive lags, in contrast, are time-lag relationships between the dependent Y variable and different independent X variables. For instance, the movement and direction of mortgage rates tend to follow the Federal Funds Rate but at a time lag (typically one to three months). Sometimes time lags follow cycles and seasonality (e.g., ice cream sales tend to peak during the summer months and hence are related to last summer's sales, 12 months in the past). The distributive lag analysis (Figure 6.3) shows how the dependent variable is related to each of the independent variables at various time lags, when all lags are considered simultaneously, to determine which time lags are statistically significant and should be considered. Another requirement in running a regression model is the assumption of normality and sphericity of the error term. If the assumption of normality is violated or outliers are present, then the linear regression goodness-of-fit test may not be the most powerful or informative test available, and this could mean the difference between detecting a linear fit or not. If the errors are not independent and not normally distributed, the data might be autocorrelated or suffer from nonlinearities or other, more destructive errors. Independence of the errors can also be detected in the heteroskedasticity tests (Figure 6.4). The Normality test performed on the errors is a nonparametric test, which makes no assumptions about the specific shape of the population from which the sample is drawn, thus allowing smaller sample data sets to be analyzed. This test evaluates the null hypothesis that the sample errors were drawn from a normally distributed population against the alternate hypothesis that they were not.
If the calculated D-Statistic is greater than or equal to the D-Critical values at various significance levels, reject the null hypothesis and accept the alternate hypothesis (the errors are not normally distributed). Otherwise, if the D-Statistic is less than the D-Critical value, do not reject the null hypothesis (the errors are normally distributed).
FIGURE 6.3 Autocorrelation and distributive lag results
FIGURE 6.4 Test for normality of errors
This test relies on two cumulative frequencies: one derived from the sample data set, the other from a theoretical distribution based on the mean and standard deviation of the sample data. Sometimes certain types of time-series data cannot be modeled using any method except a stochastic process, because the underlying events are stochastic in nature. For instance, you cannot adequately model and forecast stock prices, interest rates, the price of oil, and other commodity prices using a simple regression model, because these variables are highly uncertain and volatile and do not follow a predefined static rule of behavior; in other words, the process is not stationary. Stationarity is checked here using the Runs test, while another visual clue is found in the Autocorrelation report (the autocorrelation function, ACF, tends to decay slowly). A stochastic process is a sequence of events or paths generated by probabilistic laws; that is, random events can occur over time but are governed by specific statistical and probabilistic rules. The main stochastic processes include random walk or Brownian motion, mean-reversion, and jump-diffusion. These processes can be used to forecast a multitude of variables that seemingly follow random trends but are nonetheless restricted by probabilistic laws. The process-generating equation is known in advance, but the actual results generated are unknown (Figure 6.5). The random walk Brownian motion process can be used to forecast stock prices, prices of commodities, and other stochastic time-series data given a drift or growth rate and a volatility around the drift path. The mean-reversion process can be used to reduce the fluctuations of the random walk process by allowing the path to target a long-term value, making it useful for forecasting time-series variables that have a long-term rate, such as interest rates and inflation rates (these are long-term target rates set by regulatory authorities or the market). The jump-diffusion process is useful for forecasting time-series data when the variable occasionally exhibits random jumps, such as the price of oil or of electricity (discrete exogenous event shocks can make prices jump up or down). These processes can also be mixed and matched as required. Multicollinearity exists when there is a linear relationship between the independent variables; when this occurs, the regression equation cannot be estimated at all.
FIGURE 6.5 Stochastic process parameter estimation
FIGURE 6.6 Multicollinearity errors using correlations and VIF measures

In near-collinearity situations, the estimated regression equation will be biased and provide inaccurate results. This situation is especially true when a stepwise regression approach is used, where statistically significant independent variables may be thrown out of the regression mix earlier than expected, resulting in a regression equation that is neither efficient nor accurate. One quick sign of the presence of multicollinearity in a multiple regression equation is a relatively high R-squared value paired with relatively low t-statistics. Another quick test is to create a correlation matrix between the independent variables; a high cross-correlation indicates potential multicollinearity. The rule of thumb is that a correlation with an absolute value greater than 0.75 is indicative of severe multicollinearity. Another test for multicollinearity is the Variance Inflation Factor (VIF), obtained by regressing each independent variable on all the other independent variables, obtaining the R-squared value, and calculating the VIF. A VIF exceeding 2.0 can be considered indicative of severe multicollinearity, and a VIF exceeding 10.0 indicates destructive multicollinearity (Figure 6.6). The Correlation Matrix lists the Pearson's Product Moment Correlations (commonly referred to as Pearson's R) between variable pairs. The correlation coefficient ranges between −1.0 and +1.0 inclusive; its sign indicates the direction of association between the variables, while its magnitude indicates the strength of association. Pearson's R measures only a linear relationship and is less effective in measuring nonlinear relationships. To test whether the correlations are significant, a two-tailed hypothesis test is performed and the resulting p-values are computed. P-values less than 0.10, 0.05, and 0.01 are highlighted in blue to indicate statistical significance; in other words, a p-value for a correlation pair that is less than a given significance level indicates that the correlation is statistically significantly different from zero, suggesting a significant linear relationship between the two variables. Pearson's Product Moment Correlation Coefficient between two variables x and y is related to the covariance (COV) measure by

$$R_{x,y} = \frac{\mathrm{COV}_{x,y}}{s_x s_y}$$

The benefit of dividing the covariance by the product of the two variables' standard deviations (s) is that the resulting correlation coefficient is bounded between −1.0 and +1.0 inclusive. This makes the correlation a good relative measure for comparisons across different variables (particularly variables with different units and magnitudes).
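For readers who want to compute VIFs outside of Risk Simulator, a minimal NumPy sketch follows (an illustration of the definition, not the tool's implementation; the sample data are made up, with one deliberately collinear pair):

import numpy as np

def vifs(X):
    # VIF for each column of X: regress it on the others and use 1 / (1 - R^2).
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])  # intercept + others
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r2 = 1 - ((y - A @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = x1 + rng.normal(scale=0.3, size=100)   # deliberately collinear with x1
x3 = rng.normal(size=100)
print(vifs(np.column_stack([x1, x2, x3])))  # x1 and x2 show inflated VIFs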
The Spearman rank-based nonparametric correlation is also included in the analysis. Spearman's R is related to Pearson's R in that the data are first ranked and then correlated; rank correlations provide a better estimate of the relationship between two variables when the relationship between them is nonlinear. It must be stressed that a significant correlation does not imply causation. Associations between variables in no way imply that the change in one variable causes another variable to change. Two variables that move independently of each other but along related paths may be correlated, yet their relationship might be spurious (e.g., a correlation between sunspots and the stock market might be strong, but one can surmise that there is no causality and that the relationship is purely spurious).
7. Analytics—Ships in the Night

File Name: Analytics—Ships in the Night
Location: Modeling Toolkit | Analytics | Ships in the Night
Brief Description: Simulating arrival and meeting times and figuring out the probability that two individuals will meet given a meeting time window
Requirements: Modeling Toolkit, Risk Simulator
This chapter models the arrival of two people at a prespecified location within a prespecified time window and figures out the probability that they will meet. For instance, we can simulate the probability that two people will meet given a prespecified time window (e.g., if the time to meet is between 1:30 PM and 2:00 PM, the time window is 30 minutes) and a time to wait (e.g., when each person arrives, he or she will wait up to, say, 5 minutes before leaving). The arrival time of each person is a simulated assumption following a uniform distribution between 0 and the time window (i.e., if the window is between 1:30 PM and 2:00 PM, a person can arrive exactly at 1:30 PM, defined as an arrival time of 0 minutes; at 1:31 PM, defined as an arrival time of 1 minute; and so on), as seen in Figure 7.1. The two people meet if the absolute difference between their arrival times, that is, the Absolute Value of (Second Person's Arrival Time minus First Person's Arrival Time), is less than or equal to the waiting time; otherwise, they do not meet.
FIGURE 7.1 Meeting time model
FIGURE 7.2 Summary results of probability of meeting
We simulate both arrival times without any correlations. You can add correlations to account for shared experiences upon arrival (e.g., traffic congestion is felt by both parties, who might arrive late together if they take the same route). After running the simulation, the forecast chart is typically bimodal, with one spike at zero (not meeting) and the other at one (meeting). The table and forecast chart results (Figures 7.2 and 7.3) show the results of a 1,000-trial simulation run with a seed value. The left-tail probability at or below 0.9990 is 68.50%, meaning that 68.50% of the time the two parties do not meet, while they meet 31.50% of the time. Figure 7.2 shows various combinations of outcomes to test the sensitivity of the results. The sample results in Figure 7.2 make perfect sense. For instance, if the two agree on a 30-minute window, the longer they agree to wait when they each arrive, the higher the probability of meeting (i.e., 31.50% increases to 51.60%, though notice that the probability does not double). In contrast, the wider the time window (e.g., 60 minutes), the smaller the probability of meeting given the same agreed waiting time. One interesting fact is that as long as the ratio of waiting time to time window is the same, the probability of meeting is the same (e.g., waiting 10 minutes within a 60-minute window gives the same probability as waiting 5 minutes within a 30-minute window).
FIGURE 7.3 Forecast chart of meeting versus not meeting
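This particular model also has a clean closed-form answer from geometric probability, which makes it a nice cross-check on the simulation. A minimal Python sketch (the seed and trial count are arbitrary):

import numpy as np

rng = np.random.default_rng(1)
window, wait, trials = 30.0, 5.0, 100_000

a1 = rng.uniform(0, window, trials)   # first person's arrival time
a2 = rng.uniform(0, window, trials)   # second person's arrival time

print(np.mean(np.abs(a1 - a2) <= wait))   # simulated probability of meeting, ~0.306
print(1 - (1 - wait / window) ** 2)       # exact answer: 1 - (25/30)^2 = 0.3056

Because the exact probability 1 − (1 − wait/window)² depends only on the ratio of waiting time to time window, it also explains the observation above that 10 minutes in a 60-minute window behaves the same as 5 minutes in a 30-minute window.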
PROCEDURE

To run the predefined simulation, simply:

1. Go to the Model worksheet, click on Risk Simulator | Change Profile, and select the Ships in the Night profile.
2. Click on the Run icon or click on Risk Simulator | Run Simulation.
3. Once the simulation has been completed, in the forecast chart, select Left-Tail or Right-Tail, enter 0.999 in the Value box, and hit TAB on the keyboard to obtain the certainty value.

To re-create the simulation model, simply:

1. Go to the Model worksheet, click on Risk Simulator | New Profile, and give it a name.
2. Go to cell G8 and set an assumption (click on Risk Simulator | Set Input Assumption), select the Uniform distribution, and type in 0 for the minimum value. Click on the link icon beside the maximum input parameter, link it to cell G5, and click OK. Repeat for cell G9.
3. Select cell G11, make it a forecast (click on Risk Simulator | Set Output Forecast), and click OK.
4. Click on the Run icon or click on Risk Simulator | Run Simulation.
5. Once the simulation has been completed, in the forecast chart, select Left-Tail or Right-Tail, enter 0.999 in the Value box, and hit TAB on the keyboard to obtain the certainty value.
8. Analytics—Statistical Analysis

File Name: Analytics—Statistical Analysis
Location: Modeling Toolkit | Analytics | Statistical Analysis
Brief Description: Applying the Statistical Analysis tool to determine the key statistical characteristics of your data set, including linearity, nonlinearity, normality, distributional fit, distributional moments, forecastability, trends, and the stochastic nature of the data
Requirements: Modeling Toolkit, Risk Simulator
This model provides a sample data set on which to run the Statistical Analysis tool, which checks the data for a wide range of statistical properties.
PROCEDURE

To run the analysis, follow these instructions:

1. Go to the Data worksheet and select the data, including the variable names (cells D5:F55).
2. Click on Risk Simulator | Tools | Statistical Analysis (Figure 8.1).
3. Check the data type: whether the data selected are from a single variable or from multiple variables arranged in columns. In our example, we assume that the data selected are from multiple variables. Click OK when finished.
4. Choose the statistical tests you wish to perform. The suggestion (and the default) is to choose all the tests. Click OK when finished (Figure 8.2).

Spend some time going through the reports generated to get a better understanding of the statistical tests performed. The analysis reports include the following statistical results:
Descriptive Statistics: Arithmetic and geometric means, trimmed mean (statistical outliers are excluded when computing the mean), standard error and the corresponding statistical confidence interval for the mean, median (the 50th percentile value), mode (the most frequently occurring value), range (maximum less minimum), standard deviation and variance of the sample and population, confidence interval for the population standard deviation, coefficient of variability (sample standard deviation divided by the mean), first and third quartiles (25th and 75th percentile values), skewness, and excess kurtosis.
FIGURE 8.1 Running the Statistical Analysis tool
FIGURE 8.2 Statistical tests
Distributional Fit: Fitting the data to all 24 discrete and continuous distributions in Risk Simulator to determine which theoretical distribution best fits the raw data, supported by statistical goodness-of-fit results (p-values from the Kolmogorov-Smirnov and chi-square tests).

Hypothesis Tests: Single-variable one-tail and two-tail tests of whether the raw data are statistically similar to or different from a hypothesized mean value.

Nonlinear Extrapolation: Tests the nonlinear time-series properties of the raw data to determine whether the data can be fitted to a nonlinear curve.

Normality Test: Fits the data to a normal distribution using a theoretical fitting hypothesis test to see whether the data are statistically close to a normal distribution.

Stochastic Calibration: Using the raw data, various stochastic processes are fitted (Brownian motion, jump-diffusion, mean-reversion, and random walk processes) and the levels of fit as well as the input assumptions are automatically determined.

Autocorrelation and Partial Autocorrelation: The raw data are tested to see whether they are correlated with their own past, by applying econometric estimations and tests of the autocorrelation and partial autocorrelation coefficients.

Time-Series Forecasting: The eight most commonly used time-series decomposition models are applied to determine whether the raw data set follows any trend and seasonality, and whether the time series is predictable.

Trend Analysis: A linear time trend is tested using a linear regression approach to see whether the data have any appreciable trend.
FIGURE 8.3 Sample report on descriptive statistics
Figure 8.3 shows a sample report generated by Risk Simulator that analyzes the statistical characteristics of your data set, providing all the requisite distributional moments and statistics to help you determine the specifics of your data, including the skewness and extreme events (kurtosis and outliers). Descriptions of these statistics are listed in the report for your review, and each variable has its own set of reports. Figure 8.4 shows the results of taking an existing data set and fitting it to 24 distributions. The best-fitting distribution (after Risk Simulator goes through multiple iterations of internal optimization routines and statistical analyses) is shown in the report, including the test statistics and requisite p-values indicating the level of fit. For instance, the example data set in Figure 8.4 shows a 99.54% fit to a normal distribution with a mean of 319.58 and a standard deviation of 172.91. In addition, the actual statistics from your data set are compared to the theoretical statistics of the fitted distribution, providing yet another layer of comparison.
FIGURE 8.4 Sample report on distributional fitting
FIGURE 8.5 Sample report on theoretical hypothesis tests
Using this methodology, you can take a large data set and collapse it into a few simple distributional assumptions that can be simulated, thereby vastly reducing the complexity of your model or database while at the same time adding analytical prowess to your model by including risk analysis. Sometimes you might need to determine whether a data set's statistics are significantly different from a specific value. For instance, if the mean of your data set is 0.15, is this statistically significantly different from, say, zero? What if the mean were 0.5, or 10.5? How far away from a hypothesized population value does the mean have to be before it is deemed statistically significantly different? Figure 8.5 shows a sample report of such a hypothesis test. Figure 8.6 shows the test for normality. Certain financial and business statistics depend heavily on normality (e.g., the asset distributions of option pricing models, the normality of errors in a regression analysis, and hypothesis tests using t-tests, z-tests, analysis of variance, and so forth).
FIGURE 8.6 Sample report on testing for normality
FIGURE 8.7 Sample report on nonlinear extrapolation forecast (nonlinear trend detection)
FIGURE 8.8 Sample report on trend line forecasts (linear trend detection)
This theoretical test for normality is automatically computed as part of the Statistical Analysis tool. If your data set is a time-series variable (i.e., data with an element of time attached, such as interest rates, inflation rates, or revenues that are time-dependent), then the Risk Simulator data analysis reports shown in Figures 8.7 to 8.11 will help in identifying the characteristics of this time-series behavior, including the identification of nonlinear trends (Figure 8.7) versus linear trends (Figure 8.8), or a combination of both, where there might be some trend plus nonlinear seasonality effects (Figure 8.9). Sometimes a time-series variable exhibits a relationship to its own past (autocorrelation).
FIGURE 8.9 Sample report on time-series forecasting (seasonality and trend detection)
The report shown in Figure 8.10 analyzes whether these autocorrelations are significant and useful in future forecasts, that is, whether the past can truly predict the future. Finally, Figure 8.11 illustrates the report on nonstationarity, which tests whether the variable can be readily forecast with conventional means (e.g., stock prices, interest rates, and foreign exchange rates are very difficult to forecast with conventional approaches and require stochastic process simulations). It identifies the best-fitting stochastic models, such as Brownian motion random walk, mean-reversion, and jump-diffusion processes, and provides the estimated input parameters for these forecast processes.
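The distributional-fitting idea in Figure 8.4 can be sketched with SciPy. This is purely illustrative: the data below are a synthetic stand-in, and Risk Simulator's internal fitting and ranking routines are more elaborate than a single Kolmogorov-Smirnov test:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=320, scale=173, size=50)   # stand-in for the example data set

mu, sigma = stats.norm.fit(data)                 # maximum-likelihood normal fit
d_stat, p_value = stats.kstest(data, "norm", args=(mu, sigma))

print(mu, sigma)        # fitted mean and standard deviation
print(d_stat, p_value)  # a high p-value means the normal fit is not rejected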
FIGURE 8.10 Sample report on autocorrelation (past relationship and correlation detection)
FIGURE 8.11 Sample report on stochastic parameter calibration (nonstationarity detection)
9. Analytics—Weighting of Ratios

File Name: Analytics—Weighting of Ratios
Location: Modeling Toolkit | Analytics | Weighting of Ratios
Brief Description: Illustrating how weighted averages are computed and how picking the wrong weighting factor sometimes results in wrong answers
Requirements: Modeling Toolkit
This simple example model illustrates how the choice of weighting factor is critical to obtaining the correct answer in a weighted-average approach. In Example 1, we see a portfolio of eight projects, each with its own revenue, cost, and return on investment (ROI) values (Figure 9.1). The idea is to compute the total portfolio ROI given these various projects. This first example illustrates that, given identical allocations of cost, revenue, and ROI across projects, simply averaging the individual ROIs yields the correct ROI for the portfolio. However, in Example 2 we have a set of very different projects, as is often the case in real life, with different costs, revenues, and individual ROIs. Using a simple sum or a simple average here yields grossly misleading results: the inflated sum gives an ROI of 3,900%, while the simple average per project gives 487.59%, and both are incorrect. The correct ROI for the total portfolio is 288.08% (as confirmed by summing the total costs and total revenues of all projects in the portfolio and then computing the ROI at this portfolio level), as seen in Figure 9.1. The same result can be obtained through a weighted average of the ROIs. However, choosing the right weighting factor is critical.
FIGURE 9.1 Weighting of ratios
FIGURE 9.2 Different weighting schemes
For instance, in the examples provided, we wanted to determine the total portfolio ROI, which is computed as (Returns – Cost)/Cost, meaning the denominator in this case is Cost. If we choose cost as the weighting factor, we get the correct answer of 288.08%. Had we chosen revenues as the weighting factor, we would have gotten an incorrect answer (Figure 9.2). Therefore, weighted averages and weighting factors are important when determining portfolio totals. This is a simple concept, but its importance and its potential for error are often overlooked.
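The algebra behind why cost is the right weight is easy to verify in code: weighting each project's ROI by its cost makes the individual costs cancel, leaving exactly the portfolio-level ROI. The sketch below uses made-up project figures, not the model's actual eight projects:

costs = [100.0, 200.0, 300.0, 400.0, 500.0]        # hypothetical project costs
revenues = [300.0, 500.0, 1200.0, 1600.0, 2000.0]  # hypothetical project revenues

rois = [(r - c) / c for c, r in zip(costs, revenues)]

naive = sum(rois) / len(rois)                                         # simple average: misleading
cost_weighted = sum(c * roi for c, roi in zip(costs, rois)) / sum(costs)
portfolio = (sum(revenues) - sum(costs)) / sum(costs)                 # true portfolio ROI

print(naive, cost_weighted, portfolio)   # cost_weighted equals portfolio; naive does not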
10. Credit Analysis—Credit Premium

File Name: Credit Analysis—Credit Premium
Location: Modeling Toolkit | Credit Analysis | Credit Premium
Brief Description: Determining the credit risk premium that should be charged beyond any standard interest rates, depending on the default probability of the debt holder
Requirements: Modeling Toolkit
This model is used to determine the credit risk premium that should be charged above the standard interest rate, given the default probability of the debt or credit's anticipated cash flows. Enter the years and relevant cash flows in the model, as well as the interest rate and anticipated default probability, and click Compute to determine the required credit spread (Figure 10.1). All values should be positive, and the default probability input can be determined using any of the Modeling Toolkit's default probability models. For instance, in Figure 10.1, assume a regular 10-year bond with a $1,000 face value paying a 10% coupon rate, where the prevailing interest rate is 5%. A 1% default probability implies a default risk premium spread of 0.14%, making the total interest charge 5.14%. That is, the present value of cash flows (PVCF) is the present value of the future coupon payments and the face value received at maturity, discounted at the prevailing interest rate. The sum of these values is the net present value (NPV), which is the price of the bond assuming zero default. By adding a probability of default, the cash flows are adjusted by this default rate, and a corresponding default risk premium can be determined whereby the NPV of these risk-adjusted PVCF values will equal the original no-default NPV. The results indicate that the lower the probability of default on this debt, the lower the additional default spread (interest rate premium); the higher the probability of default, the higher the required interest rate premium. Using this simple probability-adjusted cash flow model, we can determine the approximate credit spread required to obtain the same probability-adjusted cash flow as an otherwise identical debt with no default risk. This is a simple cash flow model for approximation purposes and does not account for more exotic and real-life conditions such as mean-reverting interest rates, stochastic interest rates (smiles, frowns, and other shapes with localized time-dependent volatilities), credit risk migration, risk-neutral versus empirical probabilities of default, recovery rates, changing cumulative probabilities of default, and other qualitative factors, which are addressed in later chapters and models. Nonetheless, this model provides a simple and highly intuitive approximation of the default spread required on a risky debt or credit issue, and it can be used as a benchmark against the more advanced models discussed in later chapters.
FIGURE 10.1 Simple credit spread premium computation given default probabilities
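One simple reading of this probability-adjusted logic can be scripted as follows. This is a hedged sketch, not the Modeling Toolkit's implementation: it assumes each promised cash flow is scaled by (1 − default probability), then solves for the flat spread that makes the promised cash flows, discounted at rate plus spread, match the risk-adjusted NPV. With the text's inputs it lands near the quoted 0.14%:

import numpy as np
from scipy.optimize import brentq

face, coupon_rate, r, p, years = 1000.0, 0.10, 0.05, 0.01, 10

t = np.arange(1, years + 1)
cf = np.full(years, coupon_rate * face)
cf[-1] += face                                   # face value repaid at maturity

npv_no_default = np.sum(cf / (1 + r) ** t)
npv_risk_adjusted = np.sum(cf * (1 - p) / (1 + r) ** t)   # default-adjusted cash flows

spread = brentq(lambda s: np.sum(cf / (1 + r + s) ** t) - npv_risk_adjusted, 0.0, 0.5)
print(spread)   # roughly 0.0014, i.e., about a 0.14% credit spread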
11. Credit Analysis—Credit Default Swaps and Credit Spread Options

File Name: Credit Analysis—Credit Default Swaps and Credit Spread Options
Location: Modeling Toolkit | Credit Analysis | Credit Default Swaps and Credit Spread Options
Brief Description: Examining the basics of credit derivatives
Requirements: Modeling Toolkit, Risk Simulator
A credit default swap (CDS) allows the holder of the instrument to sell a bond or debt at par value when a credit event or default occurs. This model computes the valuation of the CDS spread (Figure 11.1). A CDS does not protect against movements of the credit spread; only a credit spread option (CSO) can do that. It protects only against default. Typically, to hedge against both defaults and spread movements, CDSs and CSOs are used together. This CDS model assumes a no-arbitrage argument, and the holder of the CDS makes periodic payments to the seller (similar to periodic insurance premiums) until the maturity of the CDS or until a credit event or default occurs. Because the notional amount to be received in the event of a default is the same as the par value of the bond or debt, and the time value of money is used to determine the bond yield, this model does not require these two variables as input assumptions. In contrast, CSOs are another type of exotic debt option, where the payoff depends on a credit spread or on the price of an underlying asset that is sensitive to interest rate movements, such as floating-rate or inverse floating-rate notes and debt. A CSO call option provides a return to the holder if the prevailing reference credit spread exceeds the predetermined strike rate; the duration input variable is used to translate the percentage spread into a notional currency amount. The CSO expires when there is a credit default event. Again, note that a CSO protects only against movements in the reference spread and not against a default event (only a CDS can do that). In some cases, when the CSO covers a reference entity's underlying asset value rather than the spread itself, credit asset spread options are used instead (Figure 11.1).
FIGURE 11.1 Credit default swaps and credit spread options
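The Toolkit's exact CDS formulation is not reproduced here, but the no-arbitrage logic it describes can be sketched as the standard balancing of a protection leg against a premium leg. The flat hazard rate, recovery rate, and risk-free rate below are illustrative assumptions, not inputs of the model itself.

import math

def cds_par_spread(hazard_rate, recovery, risk_free, maturity, freq=4):
    """Par CDS spread equating the protection leg (expected default loss)
    with the premium leg (expected spread payments), assuming a flat
    default intensity and a flat risk-free rate."""
    dt = 1.0 / freq
    protection = premium = 0.0
    for i in range(1, int(maturity * freq) + 1):
        t = i * dt
        df = math.exp(-risk_free * t)                  # discount factor
        surv_prev = math.exp(-hazard_rate * (t - dt))  # survival to t - dt
        surv = math.exp(-hazard_rate * t)              # survival to t
        protection += (1 - recovery) * (surv_prev - surv) * df
        premium += dt * surv * df
    return protection / premium

# Illustrative: 2% default intensity, 40% recovery, 5% rate, 5-year CDS
print(cds_par_spread(0.02, 0.40, 0.05, 5))  # roughly 0.012, i.e., 1.2% per year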
12. Credit Analysis—Credit Risk Analysis and Effects on Prices

File Name: Credit Analysis—Credit Risk and Effects on Prices
Location: Modeling Toolkit | Credit Analysis | Credit Risk and Effects on Prices
Brief Description: Valuing the effects of credit spreads that apply to bond and debt prices
Requirements: Modeling Toolkit, Risk Simulator
Banks selling or issuing fixed income products and vehicles, or issuing debt and credit lines, need to understand the effects of interest rate and credit risks. This model is used to analyze the effects of a credit spread as applied to the price of a bond or debt. The worse the credit rating, the higher the required credit spread and the lower the market value of the bond or debt. That is, the lower the credit rating of a debt issue, the higher the probability of default and the higher the risk to the debt holder; the debt holder will therefore require a higher rate of return or yield on the debt in remuneration of this higher risk level. This additional yield is termed the credit spread (the additional interest rate above and beyond a similar debt issue with no default risk). Using a higher yield, the price of the debt is obtained by summing the present values of future coupon payments and principal repayment cash flows, and the higher the yield, the lower the value of the bond or debt. So, risk reduces the value of the bond, which means that, assuming the bond does not default, the yield to maturity is higher for a risky bond than for a zero-risk bond (i.e., with the same cash flows in the future, the cheaper the bond, the higher the yield on the bond, holding everything else constant). This model also allows you to determine the effects on the price of debt of a change in credit rating (see Figure 12.1).

The question now is, why do we analyze credit risk based on the bond market as an example? Bonds are, by financial definition, the same as debt. A bond is issued by a corporation or some government agency (debt is obtained by individuals or companies in need of external funding; that is, a bank provides a loan or credit line to these individuals or corporations). The issuer receives a discounted portion of the face value up front (for debt, the obligor receives the funds up front). A bond is purchased or held by an investor (debt is issued by a bank or financial institution, so the bank is the holder of the debt). A bond has a face value (for a debt, we call this the principal). A bond provides periodic payments to the bondholder (the borrower sends periodic payments back to the bank for a debt or credit line). A bond is highly sensitive to the outside market environment (inflation, bond market fluctuations, competition, economic environment) and interest rates (higher prevailing interest rates mean lower bond prices), and a credit line or debt is susceptible to the same pressures.
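As a minimal sketch (with illustrative numbers, not the model's own inputs), the pricing effect of a credit spread can be reproduced in a few lines:

def bond_price(face, coupon_rate, years, yield_rate):
    """Sum of discounted coupons plus the discounted principal repayment."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + yield_rate) ** t for t in range(1, years + 1))
    return pv_coupons + face / (1 + yield_rate) ** years

base_yield, credit_spread = 0.05, 0.02  # illustrative values
print(bond_price(1000, 0.10, 10, base_yield))                  # about 1,386
print(bond_price(1000, 0.10, 10, base_yield + credit_spread))  # about 1,211

The 2% spread lowers the price by roughly $175, illustrating that the worse the rating (the wider the spread), the lower the market value of the debt.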
FIGURE 12.1 Credit risk analysis

However, there are differences between a bond and regular debt or credit. A bond pays back the principal to the bondholder at maturity, whereas a debt or credit line's principal amount is typically amortized over the life of the debt. However, there are amortizing bonds that provide the same payments as debt, and balloon debt that provides the same payments as bonds. Regardless of the type, both a bond and debt have credit risk, prepayment risk, market risk, and other risks (e.g., sovereign and foreign exchange risk for international issues), and the cash flows can be determined up front; hence, the valuation approaches are very similar. In addition, in all the models introduced in this book on credit risk modeling, probability of default models, bond valuation, and debt options, we assume that these models and techniques are applicable to both regular debt and corporate bonds.
13. Credit Analysis—External Debt Ratings and Spread

File Name: Credit Analysis—Debt Rating and Spreads
Location: Modeling Toolkit | Credit Analysis | External Debt Rating and Spreads
Brief Description: Simulating and computing the credit risk spread of a particular firm based on industry standards (EBIT and interest expenses of the debt holder) to determine the risk-based spread
Requirements: Modeling Toolkit, Risk Simulator
This model is used to run a simulation on a company's creditworthiness, given that its earnings are uncertain. The goal is to determine the credit category this firm is in, given its financial standing and the industry it operates in. Assuming we have a prespecified credit category for various industries (see the various credit risk rating models in the Modeling Toolkit for examples of how to determine the creditworthiness of a customer or company using internal and external risk-based models and how the credit category and scoring table are obtained, as well as the Modeling Toolkit's probability of default models for determining the creditworthiness of an obligor), we can then determine what interest rate to charge the company for a new loan.

For instance, Figure 13.1 illustrates a manufacturing firm's earnings before interest and taxes (EBIT) as well as its current interest expenses, benchmarked against some long-term government bond rate. Using industry standards, the debt is then rated appropriately using a predefined credit ratings table, and the default spread and total interest charge are then computed. This model is very similar to the examples described in the previous two chapters in that the relevant credit spread is determined for a risky debt. In Chapter 10, given a probability of default (we discuss the applications of probability of default models in detail later in the book), we can determine the relevant credit spread. In Chapter 12, given the credit spread, we can determine the impact on the price of a bond or risky debt. In this chapter, we are interested in obtaining the spread on risky debt by looking at industry comparables and using other financial ratios such as a company's interest coverage. This approach is more broad-based and can be used assuming no other financial data are available. Each of the inputs can be simulated using Risk Simulator to determine the statistical confidence of the cost of debt or interest rate to charge the client. Please see Figure 13.2. By running a simulation, we can better estimate the relevant credit spread and its confidence interval, rather than relying on a single snapshot value of some financial ratios to determine the credit spread.
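The flavor of this simulation can be sketched in plain code. The coverage-to-rating table below is hypothetical (the model uses its own predefined industry ratings table), and a simple random-sampling loop stands in for Risk Simulator's engine.

import random

# Hypothetical coverage-to-rating table for illustration only; the model
# uses its own predefined industry ratings table.
RATING_TABLE = [  # (minimum interest coverage, rating, default spread)
    (8.5, "AAA", 0.0035), (6.5, "AA", 0.0050), (5.5, "A", 0.0085),
    (4.0, "BBB", 0.0140), (2.5, "BB", 0.0300), (1.0, "B", 0.0500),
    (float("-inf"), "CCC or below", 0.0900),
]

def rate_debt(ebit, interest_expense):
    coverage = ebit / interest_expense
    for min_coverage, rating, spread in RATING_TABLE:
        if coverage >= min_coverage:
            return rating, spread

# Simulate uncertain EBIT to get a distribution of total interest charges
risk_free = 0.05
charges = sorted(
    risk_free + rate_debt(random.normalvariate(120.0, 25.0), 20.0)[1]
    for _ in range(10000)
)
print(charges[250], charges[9750])  # approximate 95% confidence interval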
FIGURE 13.1 Simulating debt rating and spreads
FIGURE 13.2 Forecast distribution of interest rate with spreads
14. Credit Analysis—Internal Credit Risk Rating Model

File Name: Credit Analysis—Internal Credit Risk Rating Model
Location: Modeling Toolkit | Credit Analysis | Internal Credit Risk Rating Model
Brief Description: Examining a credit ratings model like the ones used by Moody's to determine the credit shortfall risk and alphabetical risk rating given the probability of default
Modeling Toolkit Functions Used: B2CreditRatingWidth, B2CreditRiskShortfall
This is an internal credit rating model similar to the ones used by Moody’s or other rating agencies. Obtain the required probability of default input from other probability of default models in the Modeling Toolkit (see the chapters on modeling probability of default later in this book), and use this model to determine the rating of this company or debt holder. To further enhance the model, the category widths of the rating table can be refined and calibrated through additional parameter-width estimates, given actual data. For instance, Figure 14.1 illustrates how a ratings table can be generated. This model shows how the previous model, the External Ratings and Spreads example, is generated. That is, by entering the desired number of total categories for the risk ratings table and the base level (which requires calibration with actual data, and typically ranges between 1.1 and 3.0), you can develop the ratings table and determine the risk rating and credit risk shortfall, given some probability of default, recovery rate, and interest charged to a company. In some instances, the base level can be calibrated using a backcasting approach, where several base-level values are tested using various probabilities of default, and the resulting risk rating is used to benchmark against other industry standards.
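The exact category-width construction behind B2CreditRatingWidth is not documented here. One plausible construction, sketched below purely for illustration, makes each successively riskier grade cover a geometrically wider default-probability band controlled by the base level; the Toolkit's actual formula may differ.

def rating_table(categories, base_level, max_pd=0.50):
    """Build default-probability bands whose widths grow geometrically by
    the base level, so riskier grades span wider probability ranges.
    Illustrative only; not necessarily the Toolkit's construction."""
    widths = [base_level ** i for i in range(categories)]
    scale = max_pd / sum(widths)
    table, lower = [], 0.0
    for grade, width in enumerate(widths, start=1):
        upper = lower + width * scale
        table.append((grade, lower, upper))
        lower = upper
    return table

for grade, low, high in rating_table(categories=10, base_level=1.5):
    print(f"Grade {grade}: default probability {low:.4%} to {high:.4%}")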
FIGURE 14.1 Internal credit risk rating model and risk table generation
15. Credit Analysis—Profit Cost Analysis of New Credit

File Name: Credit Analysis—Profit Cost Analysis of New Credit
Location: Modeling Toolkit | Credit Analysis | Profit Cost Analysis of New Credit
Brief Description: Analyzing the cost and profit from a potential credit issue based on the possibilities of nonpayment by the debt holder
Requirements: Modeling Toolkit, Risk Simulator
Modeling Toolkit Functions Used: B2CreditAcceptanceCost, B2CreditRejectionCost
This model is used to decide if new credit should be granted to a new applicant based on the requisite costs of opening the new account as well as other incremental costs. In addition, using the cost of funds and average time to receive payments as well as the probability of nonpayment or default (use the various probability of default models in the Modeling Toolkit, explained in later chapters, to determine this), we can then determine the cost of accepting and the cost of rejecting this new line of credit and the probability of breakeven. By using this model, a bank or credit-issuing firm can decide if it is more profitable to accept or reject the application as well as compute the probability of breakeven on this line of credit. Please see Figure 15.1.
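A simplified version of the accept/reject economics is sketched below. This is an illustrative formulation consistent with the description above, not necessarily the exact weighting used by B2CreditAcceptanceCost and B2CreditRejectionCost; all numbers are hypothetical.

def expected_profit(amount, margin, cost_of_funds, days_to_pay,
                    setup_cost, p_default):
    """Expected profit of granting credit: the margin earned if the obligor
    pays, less the expected principal loss on default and the carrying
    and account-setup costs."""
    carrying = amount * cost_of_funds * days_to_pay / 365.0
    return ((1 - p_default) * amount * margin
            - p_default * amount - carrying - setup_cost)

def breakeven_default_prob(amount, margin, cost_of_funds, days_to_pay,
                           setup_cost):
    """Default probability at which the expected profit is exactly zero."""
    carrying = amount * cost_of_funds * days_to_pay / 365.0
    return (amount * margin - carrying - setup_cost) / (amount * (1 + margin))

print(expected_profit(10000, 0.15, 0.06, 45, 150, 0.05))   # accept if positive
print(breakeven_default_prob(10000, 0.15, 0.06, 45, 150))  # reject above this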
FIGURE 15.1 Credit acceptance and rejection profit and cost model
16. Debt Analysis—Asset-Equity Parity Model

File Name: Debt Analysis—Asset-Equity Parity Model
Location: Modeling Toolkit | Debt Analysis | Asset-Equity Parity Model
Brief Description: Applying the asset-equity parity relationship to determine the market value of risky debt and its required return as well as the market value of asset and its volatility
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2AEPMarketValueDebt, B2AEPMarketValueAsset, B2AEPRequiredReturnDebt
This model applies the asset-equity parity assumptions, whereby given the company’s book values of debt and asset and the market value of equity and its corresponding volatility, we can determine the company’s market value of risky debt and the required return on the risky debt in the market, and impute the market value of the company’s assets and the asset’s volatility. Enter the relevant inputs and click on Compute Implied Volatility to determine the implied volatility of the asset (Figure 16.1).
FIGURE 16.1 Asset-equity options model
This model applies the basic parity assumption that the assets of a company equal any outstanding equity plus its liabilities. In publicly traded companies, the equity value and equity volatility can be computed using available stock prices, while the liabilities of the firm can be readily quantified based on financial records. However, the market value and volatility of the firm's assets are not available or easily quantified (unlike the book value of assets, which can be readily obtained through financial records). These values are required in certain financial, options, and credit analyses and can be obtained only through this asset-equity parity model. In addition, using the equity volatility and book value of debt, we can also impute the market value of debt and the required return on risky debt. This concept of asset-equity-options parity will be revisited later in the chapters on probability of default models.

Briefly, equity owners (stockholders) of a firm have the option to pay off the firm's debt to own 100% of the firm's assets. That is, assets in a public company are typically funded by debt and equity, and although debt has a higher priority for repayment (especially in default and bankruptcy situations), only equity holders can cast votes to determine the fate of the company, including the ability and option to pay off all remaining debt and own all of the firm's assets. Therefore, there is an options linkage among debt, equity, and asset levels in a company, and we can take advantage of this linkage (through some simultaneous stochastic equations modeling and optimization) to obtain the firm's market value of assets, asset volatility, market value of debt, and (discussed in later chapters) the probability of default of the firm.
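The simultaneous-equations linkage described above is the standard Merton-style system, and a minimal sketch of solving it is shown below. The Toolkit's B2AEPMarketValueAsset and related functions presumably implement something equivalent, though the exact formulation may differ; all input values are illustrative.

import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

def implied_asset_value_and_vol(equity, sigma_e, debt, r, T=1.0):
    """Solve the two Merton-style equations for the market value of assets
    (A) and asset volatility (sigma_a):
        E = A N(d1) - D exp(-rT) N(d2)
        sigma_E E = N(d1) sigma_A A
    """
    def equations(x):
        a, sig_a = x
        d1 = (np.log(a / debt) + (r + 0.5 * sig_a ** 2) * T) / (sig_a * np.sqrt(T))
        d2 = d1 - sig_a * np.sqrt(T)
        return [a * norm.cdf(d1) - debt * np.exp(-r * T) * norm.cdf(d2) - equity,
                norm.cdf(d1) * sig_a * a - sigma_e * equity]
    guess = [equity + debt, sigma_e * equity / (equity + debt)]
    return fsolve(equations, guess)

asset_value, asset_vol = implied_asset_value_and_vol(
    equity=3000.0, sigma_e=0.45, debt=10000.0, r=0.05)
print(asset_value, asset_vol)
print(asset_value - 3000.0)  # implied market value of risky debt

The market value of risky debt then follows as the implied asset value less the market value of equity.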
17. Debt Analysis—Cox Model on Price and Yield of Risky Debt with Mean-Reverting Rates

File Name: Debt Analysis—Cox Model on Price and Yield of Risky Debt with Mean-Reverting Rates
Location: Modeling Toolkit | Debt Analysis | Cox Model on Price and Yield of Risky Debt with Mean-Reverting Rates
Brief Description: Applying the Cox model to price risky debt as well as to model the yield curve, assuming that interest rates are stochastic and follow mean-reverting rates
Requirements: Modeling Toolkit, Risk Simulator
Modeling Toolkit Functions Used: B2CIRBondPrice, B2CIRBondYield
The Cox-Ingersoll-Ross (CIR) stochastic model of mean-reverting interest rates is modeled here to determine the value of a zero coupon bond and to reconstruct the yield curve (term structure of interest rates). This model assumes a stochastic and mean-reverting term structure of interest rates, with a rate of reversion as well as a long-run rate that the interest rate reverts to over time.
A CIR yield curve model shown in a later chapter uses this same model to generate the yield curve and term structure of interest rates.

Such mean-reverting models imply that the underlying asset (bond or debt) is highly susceptible to fluctuations in the prevailing interest rates of the economy. In order to model the prices of these interest-sensitive instruments, the behavior of interest rates first needs to be modeled. Interest rates are assumed to be mean-reverting; that is, there is a long-run mean rate that short-term interest rates are attracted to, tend to, or revert to, at some rate of mean reversion. For instance, in the United States, the Federal Reserve controls interest rates in the economy (by changing the discount rate at which banks can borrow from the central bank, forcing this rate to permeate throughout the economy; changing the required reserve ratio of bank holdings; or performing sales or purchases of Treasury securities through the Federal Open Market Committee) in order to control the general inflation rate and economic growth. This means that there are typically a long-term economic growth rate, an inflation rate, and, hence, an interest rate that the Federal Reserve targets. When rates are too high, they are reduced, and vice versa, which means that interest rates, although stochastic and unpredictable in nature, are not as volatile as one would expect. This mean-reverting dampening effect is modeled here.

Finally, the interest rate volatility can be easily modeled using the Volatility models in the Modeling Toolkit (see the volatility estimation chapters for details), and the market price of interest rate risk is the same concept as the market price of risk for equity: it is the prevailing interest rate for a similar type of bond or debt (with the same credit risk structure), less the risk-free rate of similar maturities. You may also use Risk Simulator to run simulations on the inputs to determine the price and yield of debt or to determine the input parameters, such as the long-run mean rate and rate of mean reversion. (Use Risk Simulator's Statistical Analysis Tool to determine these stochastic input parameters when calibrated on historical data. The Forecasting—Data Diagnostics model has examples of how to calibrate these stochastic input parameters based on historical data.) Please see Figure 17.1.
FIGURE 17.1 Cox model with underlying mean-reverting interest rates
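As a hedged sketch, the closed-form CIR zero-coupon bond price (in its risk-neutral form, ignoring the market price of risk input for simplicity) and the implied yields can be computed as follows; all parameter values are illustrative.

import math

def cir_zero_bond(r0, kappa, theta, sigma, T):
    """CIR zero-coupon bond price P(0, T) = A(T) exp(-B(T) r0), where the
    short rate follows dr = kappa (theta - r) dt + sigma sqrt(r) dW."""
    gamma = math.sqrt(kappa ** 2 + 2 * sigma ** 2)
    denom = (gamma + kappa) * (math.exp(gamma * T) - 1) + 2 * gamma
    B = 2 * (math.exp(gamma * T) - 1) / denom
    A = (2 * gamma * math.exp((kappa + gamma) * T / 2) / denom) \
        ** (2 * kappa * theta / sigma ** 2)
    return A * math.exp(-B * r0)

# Reconstruct the implied yield curve: y(T) = -ln P(0, T) / T
for T in (1, 2, 5, 10, 30):
    p = cir_zero_bond(r0=0.03, kappa=0.25, theta=0.06, sigma=0.08, T=T)
    print(T, -math.log(p) / T)

Because the short rate starts below the long-run level, the printed yields rise with maturity toward the long-run rate, which is exactly the mean-reverting dampening effect described above.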
18. Debt Analysis—Debt Repayment and Amortization

File Name: Debt Analysis—Debt Repayment and Amortization
Location: Modeling Toolkit | Debt Analysis | Debt Repayment and Amortization
Brief Description: Simulating interest rates on a mortgage and its amortization schedule to determine the potential savings on interest payments if additional payments are made each period, and when interest rates become variable and unknown over time
Requirements: Modeling Toolkit, Risk Simulator
This is an amortization model examining a debt repayment schedule. In this example, we look at a 30-year mortgage with a portion of that period based on a fixed interest rate and a subsequent period of variable rates with minimum and maximum caps. This model illustrates how the mortgage or debt is amortized and paid off over time, resulting in a final value of zero at the end of the debt's maturity (Figure 18.1). Further, this model allows for some additional periodic (monthly, quarterly, semiannual, annual) payment, which will reduce the total amount of payments, the total interest paid, and the length of time it takes to pay off the loan. Notice that initially the principal paid off is low but increases over time; because of this, the interest portion of the loan is initially high but decreases over time as the principal is paid off.

The required input parameters are highlighted in boxes, and an assumption on the uncertain interest rate is set in cell D9, with a corresponding forecast cell at I12. By entering additional payments per period, you can significantly reduce the term of the mortgage (i.e., pay it off faster) at a much lower total payment (you end up saving a lot on interest payments). A second forecast cell is set on J12 to find out the number of years it takes to pay off the loan if additional periodic payments are made.
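A stripped-down version of the schedule's arithmetic is sketched below, assuming a single fixed rate throughout (the model itself switches to a capped variable rate partway through and simulates it); the principal, rate, and extra payment are illustrative.

def amortize(principal, annual_rate, years, extra_payment=0.0):
    """Monthly amortization; returns (months to payoff, total interest paid)."""
    r = annual_rate / 12.0
    n = years * 12
    required = principal * r / (1 - (1 + r) ** -n)  # standard payment formula
    balance, months, total_interest = principal, 0, 0.0
    while balance > 0:
        interest = balance * r
        balance -= min(required + extra_payment - interest, balance)
        total_interest += interest
        months += 1
    return months, total_interest

print(amortize(300000, 0.06, 30))                     # baseline schedule
print(amortize(300000, 0.06, 30, extra_payment=500))  # shorter and cheaper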
PROCEDURE
You may either change the assumptions or keep the existing assumptions and run the simulation:
1. Click on Risk Simulator | Change Profile, select the Debt Repayment and Amortization profile, and click OK.
2. Run the simulation by clicking on Risk Simulator | Run Simulation.
FIGURE 18.1 Debt amortization table
INTERPRETATION
The resulting forecast chart on the example inputs indicates that paying an additional $500 per month can potentially save the mortgage holder between $136,985 (minimum) and $200,406 (maximum), given the assumed uncertainty fluctuations of interest rates during the variable rate period. The 90% confidence level can also be obtained, meaning that 90% of the time the total interest saved is between $145,749 and $191,192 (Figures 18.2–18.3).
FIGURE 18.2 Forecast of total interest paid
FIGURE 18.3 Forecast statistics of total interest paid
FIGURE 18.4 Forecast of payoff year
In addition, the Payoff Year with Extra Payment forecast chart shows that given the expected fluctuations of interest rates and the additional payments made per period, there is a 90% chance that the mortgage will be paid off between 20.9 and 21.8 years, with an average of 21.37 years (mean value). The quickest payoff is 20.67 years (minimum). If interest rates are on the high end, payoff may take up to 22 years (maximum), as seen in Figures 18.4 and 18.5.
FIGURE 18.5 Forecast statistics of payoff year
19. Debt Analysis—Debt Sensitivity Models

File Name: Debt Analysis—Debt Sensitivity Models
Location: Modeling Toolkit | Debt Analysis | Debt Sensitivity Models
Brief Description: Modeling a debt instrument's sensitivity to interest rates using duration and convexity methods while applying various discrete and continuous discounting measures
Requirements: Modeling Toolkit, Risk Simulator
Modeling Toolkit Functions Used: B2BondPriceDiscrete, B2BondPriceContinuous, B2BondYTMDiscrete, B2BondYTMContinuous, B2DurationDiscrete, B2DurationContinuous, B2MacaulayDuration, B2ModifiedDuration, B2ConvexityYTMDiscrete, B2ConvexityYTMContinuous, B2ConvexityDiscrete, B2ConvexityContinuous, B2BondPriceChange
This model (Figure 19.1) is used to determine a debt's interest rate sensitivity while accounting for a series of cash flow payments from the debt (e.g., a bond's coupon payments and the final face value repayment) as interest rates change over time. The debt's price, yield to maturity, duration, and convexity are computed.
FIGURE 19.1 Debt sensitivity models
Duration and convexity are sensitivity measures that describe exposure to parallel shifts in the spot interest rate yield curve, applicable to individual fixed income instruments or entire fixed income portfolios. These sensitivities cannot warn of exposure to more complex movements in the spot curve, including tilts and bends, only parallel shifts.

The idea behind duration is simple. Suppose a portfolio has a duration measure of 2.5 years. This means that the portfolio's value will decline about 2.5% for each 1% increase in interest rates, or rise about 2.5% for each 1% decrease in interest rates. Typically, a bond's duration is positive, but exotic instruments, such as mortgage-backed securities, may have negative durations, as may portfolios that short fixed income instruments or that pay fixed for floating interest payments on an interest rate swap. Inverse floaters tend to have large positive durations; their values change significantly for small changes in rates. Highly leveraged fixed income portfolios tend to have very large (positive or negative) durations. In contrast, convexity summarizes the second-most significant piece of information, the nonlinear curvature of the price-yield relationship, whereas duration measures the linear or first-approximation sensitivity.

Duration and convexity have traditionally been used as tools for immunization or asset-liability management. To avoid exposure to parallel spot curve shifts, an organization (e.g., an insurance company or defined benefit pension plan) with significant fixed income exposures might perform duration matching by structuring assets so that their duration matches the duration of its liabilities, so the two offset each other. Even more effective (but less frequently practical) is duration-convexity matching, in which assets are structured so that durations and convexities match.

The standard financial computations for duration, modified duration, and convexity are:

$$\text{Macaulay Duration} = \sum_{t=1}^{n} \frac{PVCF_t}{V_B} \times t$$

$$\text{Modified Duration} = \frac{\text{Macaulay Duration}}{1 + \dfrac{YTM}{\#\text{coupons}}}$$

$$\text{Convexity} = \frac{d^2 P}{di^2} = \frac{1}{(1+i)^2} \sum_{t=1}^{n} \frac{CF_t \,(t^2 + t)}{(1+i)^t}$$

where $PVCF_t$ is the present value of the cash flow ($CF_t$) in period $t$, $V_B$ is the value of the bond, $t$ is the time period at which the cash flow occurs, $YTM$ is the bond's yield to maturity (the internal rate of return on the bond, or the identical interest yield from holding the bond to maturity, computed based on the value of the bond at time zero), and $i$ is the interest rate used to discount and price the bond.
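These formulas translate directly into code. In the sketch below, convexity is normalized by price so that it plugs into the usual second-order price-change approximation, dP/P ≈ -D*dy + 0.5*C*dy^2; annual coupons are assumed, and the example bond is illustrative.

def bond_metrics(cash_flows, ytm):
    """Price, Macaulay duration, modified duration (annual coupons assumed),
    and price-normalized convexity for (time, cash flow) pairs."""
    price = sum(cf / (1 + ytm) ** t for t, cf in cash_flows)
    macaulay = sum(t * cf / (1 + ytm) ** t for t, cf in cash_flows) / price
    modified = macaulay / (1 + ytm)
    convexity = sum(cf * (t ** 2 + t) / (1 + ytm) ** t
                    for t, cf in cash_flows) / ((1 + ytm) ** 2 * price)
    return price, macaulay, modified, convexity

cfs = [(t, 100.0) for t in range(1, 10)] + [(10, 1100.0)]
price, mac, mod, conv = bond_metrics(cfs, 0.05)
dy = 0.01  # second-order estimate of the price change for a 1% yield rise
print(price * (-mod * dy + 0.5 * conv * dy ** 2))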
20. Debt Analysis—Merton Price of Risky Debt with Stochastic Asset and Interest

File Name: Debt Analysis—Merton Price of Risky Debt with Stochastic Asset and Interest
Location: Modeling Toolkit | Debt Analysis | Merton Price of Risky Debt with Stochastic Asset and Interest
Brief Description: Computing the market value of risky debt using the Merton option approach, assuming that interest rates are mean reverting and volatile, while further assuming that the company's internal assets are also stochastic and changing over time
Requirements: Modeling Toolkit, Risk Simulator
Modeling Toolkit Function Used: B2BondMertonBondPrice
The Merton model for risky debt computes the market value of debt while taking into account the book values of assets and debt in a company as well as the volatility of interest rates and asset value over time. The interest rate is assumed to be stochastic in nature and mean reverting, at some rate of reversion, to a long-term value (Figure 20.1). Further, the model also requires as inputs the market price of risk and the correlation of the company's asset value to the market.
FIGURE 20.1 Merton model of risky debt assuming stochastic interest and asset movements
FIGURE 20.2 Forecast distribution of price of risky debt
You can set the correlation and market price of risk to zero for indeterminable conditions, while the rate of reversion and long-run interest rate can be determined and modeled using Risk Simulator's Statistical Analysis tool. Simulation on any of the inputs can also be run using Risk Simulator to determine the risk and statistical confidence of the market price of risky debt. Please see Figure 20.2.

This model is similar in nature to the Cox mean-reverting model in that both assume the underlying debt instrument is highly sensitive to fluctuations in interest rates. The differences are that this Merton model accounts for stochastic interest rates as well as stochastic asset movements, and that the market is used to calibrate the movements of the asset value (the correlation between asset and market returns is a required input).
21. Debt Analysis—Vasicek Debt Option Valuation

File Name: Debt Analysis—Vasicek Debt Option Valuation
Location: Modeling Toolkit | Debt Analysis | Vasicek Debt Option Valuation
Brief Description: Applying the Vasicek model of debt options assuming that interest rates are stochastic, volatile, and mean-reverting
Requirements: Modeling Toolkit, Risk Simulator
Modeling Toolkit Functions Used: B2BondVasicekBondCallOption, B2BondVasicekBondPutOption
This is the Vasicek model on bond or debt options, where the underlying debt issue changes in value based on the level of prevailing interest rates; that is, the interest rates have a mean-reverting tendency, where the interest rate at any future period approaches a long-run mean interest rate with a rate of reversion and volatility around this reversion trend. Both options are European and can be exercised only at maturity. To determine other types of options, such as American and Bermudan options, use the Modeling Toolkit's Options on Debt modules to build lattices on mean-reverting interest rates and their respective option values. Please see Figure 21.1.

A few interesting behaviors can be determined from this model. Typically, a bond call option will have a lower value if the long-run mean interest rate is higher than the current spot risk-free rate and if the rate of reversion is high (this means that the spot interest rate reverts to this long-run rate quickly). The inverse situation is also true: the bond call option value is higher if the long-run rate is lower than the spot rate and the reversion rate is high. The opposite effects apply to the bond put option. Nonetheless, the reversion effect will be overshadowed by a high volatility of interest rates. Finally, at negligible maturities and interest rates (e.g., bond maturity of 0.01, option maturity of 0.001, spot risk-free rate of 0.001), the bond option values revert to the intrinsic value, or Face Value – Strike for a bond call option and Strike – Face Value for a put option.

There is also another model for solving bond call and put options, available through the functions B2BondHullWhiteBondCallOption and B2BondHullWhiteBondPutOption. These two Hull-White models are slightly different in that they do not require a long-run interest rate; rather, just the reversion rate will suffice. If we calibrate the long-run rate in the Vasicek model carefully, we will obtain the same results as in the Hull-White model. Typically, the Vasicek model is simpler to use.
FIGURE 21.1 Vasicek & Van Deventer model of pricing debt options and yield with mean-reverting rates
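For a zero-coupon underlying, the closed-form Vasicek bond option is given by Jamshidian's formula, which the Toolkit's Vasicek option functions presumably implement in some form. The sketch below (with illustrative parameters) computes the call value; the put follows from the same quantities with the normal-distribution arguments negated.

import math
from scipy.stats import norm

def vasicek_zero_bond(r0, a, b, sigma, T):
    """Vasicek zero-coupon bond price under dr = a (b - r) dt + sigma dW."""
    B = (1 - math.exp(-a * T)) / a
    A = math.exp((B - T) * (a ** 2 * b - sigma ** 2 / 2) / a ** 2
                 - sigma ** 2 * B ** 2 / (4 * a))
    return A * math.exp(-B * r0)

def vasicek_bond_call(r0, a, b, sigma, T_option, T_bond, face, strike):
    """European call on a zero coupon bond via Jamshidian's closed form."""
    p_bond = vasicek_zero_bond(r0, a, b, sigma, T_bond)
    p_option = vasicek_zero_bond(r0, a, b, sigma, T_option)
    sigma_p = (sigma / a) * (1 - math.exp(-a * (T_bond - T_option))) \
        * math.sqrt((1 - math.exp(-2 * a * T_option)) / (2 * a))
    h = math.log(face * p_bond / (strike * p_option)) / sigma_p + sigma_p / 2
    return face * p_bond * norm.cdf(h) - strike * p_option * norm.cdf(h - sigma_p)

print(vasicek_bond_call(r0=0.05, a=0.3, b=0.06, sigma=0.02,
                        T_option=1.0, T_bond=3.0, face=100.0, strike=88.0))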
22. Debt Analysis—Vasicek Price and Yield of Risky Debt

File Name: Debt Analysis—Vasicek Price and Yield of Risky Debt
Location: Modeling Toolkit | Debt Analysis | Vasicek Price and Yield of Risky Debt
Brief Description: Applying the Vasicek model to price risky debt and to compute the yield on risky debt, where the underlying interest rate structure is stochastic, volatile, and mean reverting (this model is also often used to compute and forecast yield curves)
Requirements: Modeling Toolkit, Risk Simulator
Modeling Toolkit Functions Used: B2BondVasicekBondPrice, B2BondVasicekBondYield
The Vasicek stochastic model of mean-reverting interest rates is modeled here to determine the value of a zero coupon bond as well as to reconstruct the term structure of interest rates and the interest rate yield curve. This model assumes a mean-reverting term structure of interest rates with a rate of reversion as well as a long-run rate that the interest rate reverts to over time. Use the Yield Curve Vasicek model to generate the yield curve and term structure of interest rates using this model.

You may also use Risk Simulator to run simulations on the inputs to determine the price and yield of the zero coupon debt or to determine the input parameters, such as the long-run mean rate and rate of mean reversion (use Risk Simulator's Statistical Analysis Tool to determine these values based on historical data). Please see Figure 22.1.

The previous chapter uses a modification of this Vasicek model to price debt call and put options, versus the models described in this chapter, which are used for pricing the risky bond and the yield on the bond based on stochastic and mean-reverting interest rates. If multiple maturities and their respective input parameters are obtained, the entire term structure of interest rates and yield curve can be constructed, where the underlying process of these rates is assumed to be mean reverting. (The vasicek_zero_bond sketch in the previous chapter shows the corresponding closed-form bond price; the implied yield is simply -ln P(0, T)/T, exactly as in the CIR sketch of Chapter 17.)
FIGURE 22.1 Price and yield of risky debt
23. Decision Analysis—Decision Tree Basics

File Name: Decision Analysis—Decision Tree Basics
Location: Modeling Toolkit | Decision Analysis | Decision Tree Basics
Brief Description: Creating and valuing a simple decision tree and running a Monte Carlo simulation on a decision tree
Requirements: Modeling Toolkit, Risk Simulator
This model provides an illustration of the basics of creating and solving a decision tree. This forms the basis for the more complex model (see the Decision Tree with EVPI, Minimax and Bayes' Theorem model presented in the next chapter) with expected value of perfect information, Minimax computations, Bayes' theorem, and simulation.

Briefly, this example illustrates the modeling of a decision by an oil and gas company of whether small drill bits or large drill bits should be used in an offshore oil exploration project on a secondary well (Figure 23.1). There is a chance (a 30% probability is expected) that this secondary well will become critical (if the primary well does not function properly or when oil prices start to skyrocket). If the large drill bits are used, it will cost more to set up ($20M as opposed to $10M). However, if things do become critical, the project can be completed with less of a delay than if the secondary drill bit is small. The probabilities of occurrence are tabulated in Figure 23.1, showing the weeks to completion or delays and their respective probabilities. The cost of delay per day is $17M. The table also provides the cost to run the drill bits.
DECISION TREE CONSTRUCTION AND VALUATION
Using these assumptions, a simple decision tree can be constructed. The decision tree is a graphical representation of decisions, probabilities of outcomes, and payoffs given certain outcomes, based on a chronological order. The value of each branch on the tree is then computed using backward induction or expected value analysis. The decision tree is constructed as seen in Figure 23.2. The payoffs at the terminal nodes of the decision tree are computed based on the path undertaken. For instance, $248M is computed by:

Initial Setup Cost of Large Drill Bit + Number of Days Delayed × (Cost of Large Drill Bits Per Day + Delay Cost Per Day)
FIGURE 23.1 Decision tree assumptions
FIGURE 23.2 Decision tree construction
and so forth. Then, the expected values are computed using backward induction. For instance, $289.80M is computed by:

30%($248) + 50%($286) + 20%($362) = $289.80

and the expected value at initiation for the Large Drill Bit of $120.82M is computed by:

30%($289.80) + 70%($48.40) = $120.82

The benefit of choosing the large drill bit is the difference between the $120.82M and $131.44M costs, or $10.62M in cost savings. That is, the decision with the lowest cost is to go with the large drill bit. The computations of all remaining nodes on the tree are visible in the model.
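The backward induction is easy to verify in a few lines of code; the 48.40 and 131.44 values are taken directly from the model, since the underlying inputs for those branches are not reproduced here.

# Backward induction for the large-drill-bit branch (costs in $M)
critical = 0.30 * 248 + 0.50 * 286 + 0.20 * 362  # expected cost if critical: 289.80
large = 0.30 * critical + 0.70 * 48.40           # expected cost at initiation: 120.82
small = 131.44                                   # small-bit branch value from the model
print(critical, large, small - large)            # benefit of large bits: 10.62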
MONTE CARLO SIMULATION
The inputs shown are single-point estimates, and using them as is will reduce the robustness of the model. By adding in input assumptions and probability distributions, you then can run the analysis through a Monte Carlo simulation in Risk Simulator. The highlighted values in Figures 23.1 and 23.2 are set up as simulation input assumptions, and the resulting forecast output is the benefit of choosing the large drill bit. To run the simulation, simply:
1. Click on Risk Simulator | Change Simulation Profile and select the Decision Tree Basics profile.
2. Go to the Model worksheet and click on Risk Simulator | Run Simulation or click on the Run icon.
The output forecast chart obtained after running the simulation is shown in Figure 23.3. Once the simulation is complete, go to the forecast chart, select Right Tail, type 0 in the right-tail input box, and hit TAB on the keyboard. The results show that there is a 97.50% probability that choosing large drill bits will be worth more than small drill bits (Figure 23.3); that is, the net benefit exceeds zero. You are now ready to continue on to more sophisticated decision tree analyses. Please refer to the next chapter, Decision Tree with EVPI, Minimax, and Bayes' Theorem, for an example of more advanced decision tree analytics.
FIGURE 23.3 Simulating a decision tree

PROS AND CONS OF DECISION TREE ANALYSIS
As you can see, a decision tree analysis can be somewhat cumbersome, with all the associated decision analysis techniques. In creating a decision tree, be careful of its pros and cons. Specifically:

Pros
Easy to use, construct, and value. Drawing the decision tree merely requires an understanding of the basics (e.g., chronological sequence of events, payoffs, probabilities, decision nodes, chance nodes, etc., and the valuation is simple expected values going from right to left on the decision tree).
Clear and easy exposition. A decision tree is simple to visualize and understand.
Cons
Ambiguous inputs. The analysis requires single-point estimates of many variables (many probabilities and many payoff values), and sometimes it is difficult to obtain objective or valid estimates. In a large decision tree, these errors in estimations compound over time, making the results highly ambiguous. The problem of garbage-in, garbage-out is clearly an issue here. Monte Carlo simulation with Risk Simulator helps by simulating the uncertain input assumptions thousands of times, rather than relying on single-point estimates.
Ambiguous outputs. The decision tree results versus Minimax and Maximin results produce different optimal strategies, as illustrated in the next chapter's more complex decision tree example. Using decision trees will not necessarily produce robust and consistent results. Great care should be taken.
Difficult and confusing mathematics. To make the decision tree analysis more powerful (rather than simply relying on single-point estimates), we revert to using Bayes' theorem. From the next chapter's decision tree example, using the Bayes' theorem to derive implied and posterior probabilities is not an easy task at all. At best it is confusing, and at worst, extremely complex to derive.
In theory, each payoff value faces a different risk structure on the terminal nodes. Therefore, it is theoretically correct only if the payoff values (which have to be in present values) are discounted at different risk-adjusted discount rates, corresponding to the risk structure at the appropriate nodes. Obtaining a single valid discount rate is difficult enough, let alone multiple discount rates.
The conditions, probabilities, and payoffs should be correlated to each other in real life. In a simple decision tree, this is not possible, as you cannot correlate two or more probabilities. This is where Monte Carlo simulation comes in, as you can correlate pairs of input assumptions in a simulation.
Decision trees do not solve real options. Be careful. Real options analysis requires binomial and multinomial lattices and trees to solve, not decision trees. Decision trees are great for framing strategic real options pathways but not for solving real options values. For instance, decision trees cannot be used to solve a switching option, barrier option, or option to expand, contract, and abandon. The branches on a decision tree are outcomes, while the branches on a lattice in a real option are options, not obligations. They are very different animals, so care should be taken when setting up real options problems versus decision tree problems. See Parts II and III of this book for more advanced cases involving real options problems and models.
24. Decision Analysis—Decision Tree with EVPI, Minimax, and Bayes' Theorem

File Name: Decision Analysis—Decision Tree with EVPI, Minimax and Bayes' Theorem
Location: Modeling Toolkit | Decision Analysis | Decision Tree with EVPI, Minimax and Bayes' Theorem
Brief Description: Creating and solving a decision tree, applying Monte Carlo risk simulation on a decision tree, constructing a risk profile on a decision tree, computing Expected Value of Perfect Information (EVPI), computing Minimax and Maximin analysis, computing a Bayesian analysis on posterior probabilities, and understanding the pros and cons of a decision tree analysis
Requirements: Modeling Toolkit, Risk Simulator
In this model, we create and value a decision tree using backward induction to make a decision on whether a large, medium, or small facility should be built, given uncertainties in the market. This model is intentionally set up such that it is generic and can be applied to any industry. For instance, in the oil and gas exploration and production application, "large, medium, and small" refers to the size of the drilling platform to be built; in the manufacturing industry, these sizes refer to the size of a manufacturing facility to be built; in the real estate business, they refer to the size of the condominiums or quality (expensive, moderate, or low-priced housing), and so forth. In this model, we also determine if there is value in information, whether a market research effort (the analogy in other industries would include wildcat exploratory wells, proof of concept development, initial research and development, etc.) is worth anything and if it should be performed. In addition, other approaches are used, including the Minimax and Maximin computations, together with the application of Bayes' theorem (posterior updating on probabilities).
FIGURE 24.1 Payoff table and assumed probability of outcomes
The problem is: A real estate developer (oil and gas exploration firm, manufacturing entity, research and development outfit, etc.) is faced with the decision on whether to build small, medium, or large condominiums. Clearly the decision depends on how the market will be doing over the next few years, which is an unknown variable that may have significant impact on the decision. The question is whether market research should be performed to obtain valuable information on how the market will be over the next few years, as well as what the optimal decisions should be, given that the market research results can return either a Favorable or Unfavorable outcome. In consultation with marketing professionals and subject matter experts, the information in Figure 24.1 is generated and assumed to hold. For instance, the term Probability (Strong | Favorable) is the probability that the true outcome is a strong market given that a favorable rating is obtained through the market research. The same convention is applied to the other probabilities. A decision tree is then constructed using these inputs. That is, the initial decision is whether to invest in market research. If no market research is performed, the next step is to decide if a small, medium, or large facility should be built. If market research is conducted, we wait for the results of the market research before deciding on whether to build the small, medium, or large facility.
DECISION TREE CONSTRUCTION AND SOLUTION
Using the assumptions just presented, a decision tree is built (Figure 24.2), based on a chronological order, starting from the decision (a square box denotes a decision) of whether market research should be conducted or not. If no market research is performed, the next decision (another square box) is whether a Small, Medium, or Large facility should be built. Under each decision, the uncertainty of whether a Strong or Weak market exists is modeled (uncertainty outcomes are denoted by a circle), complete with its assumed probability of occurrence and payoff under each state of nature according to the listed assumptions. Next, the same construction is repeated with an intermediate step of performing the market research and waiting on the results (Favorable or Unfavorable).
FIGURE 24.2 Decision tree construction
The decision on whether to build the Small, Medium, or Large facility now depends on the results of the market research, but all possible outcomes are enumerated, including their probabilities of occurrence and payoff levels according to the earlier assumptions. The expected values are then computed on the tree as shown in Figure 24.2 (e.g., 7.94 is computed by taking 94% × 8 + 6% × 7, and 13.46 is computed by taking 94% × 14 + 6% × 5; similarly, 18.26 is computed by taking 94% × 20 + 6% × (−9), and MAX[7.94, 13.46, 18.26] = 18.26 identifies the optimal decision given a Favorable market research outcome). The optimal path is then listed in bold on the decision tree. The expected value is then worked all the way back to obtain 15.93 for market research (77% × 18.26 + 23% × 8.15) and 14.20 for no market research (MAX[7.80, 12.20, 14.20], the optimal decision given no research). Clearly the higher expected value is 15.93; that is, perform the market research. Further, if the market research results show a Favorable condition, the optimal decision is to build a Large facility. Conversely, build a Medium facility if an Unfavorable result is obtained.
FIGURE 24.3 Probabilistic simulation assumption
MONTE CARLO SIMULATION ON DECISION TREES
The assumptions used in a decision tree are single-point estimates and may not be precise. The payoff values and probabilities are simply estimations at best. Hence, these single-point estimates can be enhanced through Risk Simulator's Monte Carlo simulation. For the basic input assumptions, subject matter experts can be tasked with obtaining a most likely single-point estimate while at the same time providing minimum and maximum estimates of each payoff and probability. These estimates are then entered into a Monte Carlo simulation (Figures 24.3–24.5). The model already has these assumptions preset. To run the simulation, simply:
1. Click on Risk Simulator | Change Profile and select the Decision Tree EVPI Minimax profile to open the simulation profile.
2. Go to the Model worksheet and click on Risk Simulator | Run Simulation or click on the Run icon to run the simulation.
The forecast result of the Expected Value of Imperfect Information (cell R30) is shown in Figure 24.6, which is also the value of the difference between the expected value of performing market research and not performing market research. The forecast chart can be used to determine the probability that performing market research is better than not having any research.
FIGURE 24.4 Payoff tables
FIGURE 24.5 Running a simulation
FIGURE 24.6 Simulation results
Further, the forecast chart shows that there is a 79.80% probability that doing market research is better than not doing any market research (select Right Tail on the forecast chart, enter 0 in the left-tail input box, and hit TAB on the keyboard). The expected value or mean of the distribution is 0.9736, indicating that the Expected Value of Imperfect Information is worth $0.9736M, which is the maximum amount of money the firm should spend, on average, to perform the market research.

Further, the forecast chart on the Bayesian Posterior Probability (cell H60) indicates that the 95% confidence interval is between 87.83% and 97.69%. This means that on a 95% statistical basis (i.e., you are 95% sure), the results from the market research provide an accuracy of between 87.83% and 97.69%. That is, if the research results indicate that the market is strong, there is an 87.83% to 97.69% level of accuracy that the market is actually strong in reality. These computations are clearly seen in the Excel model.
RISK PROFILE AND EVPI
A risk profile based on the optimal path in the decision tree can be constructed. Specifically, comparing not doing any market research versus performing the research, and looking at the optimal decision pathways, we obtain the results shown in Figure 24.7.
FIGURE 24.7 Risk profile
FIGURE 24.8 Expected value of perfect information
Notice that the expected values (EV) from the risk profile are exactly those in the decision tree (i.e., $15.93M and $14.20M), indicative of the two values from the optimal paths. Further, note that there is an 80% probability of making money ($20M) versus a 20% chance of losing money (−$9M) without the market research. In contrast, with the research, the risk profile has changed such that the chance of making a positive payoff is 95.38% (i.e., 72.38% + 8.05% + 14.95%) as compared to a 4.62% chance of possibly losing money. Clearly, doing the research adds positive value to the project.

The difference between these two expected values ($15.93M − $14.20M), or $1.73M, is the Expected Value of Imperfect Information (EVII); that is, the value of the market research on a single-point estimate basis. This is termed "Imperfect Information" because no market research is perfect or provides the true result 100% of the time; there will clearly be some uncertainty left over. Further, the expected value of perfect information (EVPI) can be obtained by computing the expected value given perfect information about the states of nature and the expected value given imperfect information, and taking their difference (Figure 24.8). EVPI assumes that the information from the market research is 100% exact and precise.
MAXIMIN AND MINIMAX ANALYSIS
Maximin means maximizing the minimum outcome. Specifically, looking at the no-research strategy and its minimum payoffs (i.e., those in the Weak state of nature), the maximum value is $7M (comparing $7M, $5M, and −$9M). Therefore, using the Maximin approach, a Small facility should be developed (Figure 24.9).
FIGURE 24.9 Maximin approach
FIGURE 24.10 Minimax approach

In contrast, using the Minimax approach of minimizing the maximum regret, we see that the Medium facility should be built (Figures 24.10–24.11). The Minimax result is computed by first obtaining the opportunity cost or regret matrix. For instance, if we build a Small facility, either a Strong or a Weak market can exist. If a Strong market exists, we obtain $8M in payoff, but had we built a Large facility instead, the payoff would have been $20M; therefore, we regret the decision, as we could have made an additional $12M. Continuing with the analysis, a regret value is obtained for each decision under each state of nature. The maximum of these regrets across all states of nature is computed for each decision, and the minimum of these maxima is then selected. In this case, the result points to $6M, or to build the Medium facility.
FIGURE 24.11 Minimax decision tree
BAYESIAN ANALYSIS
According to probability rules, we have:

$$P(A \cap B) = P(A)\,P(B \mid A)$$
$$P(B \cap A) = P(B)\,P(A \mid B)$$
$$P(A \cap B) = P(B \cap A)$$

So, we have:

$$P(A)\,P(B \mid A) = P(B)\,P(A \mid B)$$
$$P(A \mid B) = \frac{P(A)\,P(B \mid A)}{P(B)}$$

Further, since:

$$P(B) = P(A \cap B) + P(\bar{A} \cap B) = P(A)\,P(B \mid A) + P(\bar{A})\,P(B \mid \bar{A})$$

we obtain Bayes' theorem:

$$P(A \mid B) = \frac{P(A)\,P(B \mid A)}{P(A)\,P(B \mid A) + P(\bar{A})\,P(B \mid \bar{A})} = \frac{P(A)\,P(B \mid A)}{\sum_{i=1}^{n} P(A_i)\,P(B \mid A_i)}$$
The intersection and union rules can be seen graphically in Figure 24.12. Using the Bayes' theorem, and given that we have the information determined from expert opinions in consultation with the marketing professionals in Figure 24.13, we can derive the posterior probabilities. To simplify, we denote $A$ as the state of nature ($A$ for Strong and $\bar{A}$ for Weak), while we set $B$ as the Favorable ($B$) or Unfavorable ($\bar{B}$) outcome from the marketing research activities. Using these rules, we can compute the joint probability $P(B \cap A)$ and the Bayes' posterior probability $P(A \mid B)$ as seen in Figure 24.14.
FIGURE 24.12 Graphical representation of probability theory
FIGURE 24.13 Input assumptions in a Bayes’ computation
FIGURE 24.14 Bayes’ Theorem
That is, we have:

$$P(B \cap A) = P(A \cap B) = P(A)\,P(B \mid A) = (0.80)(0.90) = 0.72$$

$$P(A \mid B) = \frac{P(A)\,P(B \mid A)}{\sum_{i=1}^{n} P(A_i)\,P(B \mid A_i)} = \frac{(0.80)(0.90)}{(0.80)(0.90) + (0.20)(0.25)} = 0.94$$
This means that the market research holds significant value. If the research returns a Favorable judgment, there is a 94% probability that the true state of nature is Strong, and so forth. Using Bayes’ theorem, we can determine this posterior probability, showing that the chances of making correct decisions are significantly higher than leaving it to chance. That is, we obtain a 94% chance of being right as compared to an 80% chance if no market research is performed. The analysis can be repeated for the Unfavorable market research outcome.
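This posterior computation is easy to verify in code:

def posterior(prior_strong, p_fav_given_strong, p_fav_given_weak):
    """Bayes' theorem: P(Strong | Favorable research result)."""
    joint_strong = prior_strong * p_fav_given_strong
    joint_weak = (1 - prior_strong) * p_fav_given_weak
    return joint_strong / (joint_strong + joint_weak)

print(posterior(0.80, 0.90, 0.25))  # 0.72 / 0.77 = 0.9351, about 94%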
PROS AND CONS OF DECISION TREE ANALYSIS
As you can see, decision tree analysis can be somewhat cumbersome, with all the associated decision analysis techniques. In creating a decision tree, be careful of its pros and cons. Specifically:

Pros
Easy to use, construct, and value. Drawing the decision tree merely requires an understanding of the basics (e.g., chronological sequence of events, payoffs, probabilities, decision nodes, chance nodes, etc., and the valuation is simple expected values going from right to left on the decision tree).
Clear and easy exposition. A decision tree is simple to visualize and understand.
Cons
Ambiguous inputs. The analysis requires single-point estimates of many variables (many probabilities and many payoff values), and sometimes it is difficult to obtain objective or valid estimates. In a large decision tree, these errors in estimations compound over time, making the results highly ambiguous. The proverbial problem of garbage-in, garbage-out is clearly an issue here. Monte Carlo simulation with Risk Simulator helps by simulating the uncertain input assumptions thousands of times, rather than relying on single-point estimates.
Ambiguous outputs. The decision tree results versus Minimax and Maximin results produce different optimal strategies, as illustrated in this chapter. Using decision trees will not necessarily produce robust and consistent results. Great care should be taken.
Difficult and confusing mathematics. To make the decision tree analysis more powerful (rather than simply relying on single-point estimates), we revert to using Bayes' theorem. From the decision tree example, using Bayes' theorem to derive implied and posterior probabilities is not an easy task at all. At best it is confusing, and at worst, extremely complex to derive.
In theory, each payoff value faces a different risk structure on the terminal nodes. Therefore, it is theoretically correct only if the payoff values (which have to be in present values) are discounted at different risk-adjusted discount rates, corresponding to the risk structure at the appropriate nodes. Obtaining a single valid discount rate is difficult enough, let alone multiple discount rates.
The conditions, probabilities, and payoffs should be correlated to each other in real life. In a simple decision tree, this is not possible, as you cannot correlate two or more probabilities. This is where Monte Carlo simulation comes in, as you can correlate pairs of input assumptions in a simulation.
Decision trees do not solve real options problems. Be careful. Real options analysis requires binomial and multinomial lattices and trees to solve, not decision trees. Decision trees are great for framing strategic real options pathways but not for solving real options values. For instance, decision trees cannot be used to solve a switching option, barrier option, or option to expand, contract, and abandon. The branches on a decision tree are outcomes, while the branches on a lattice in a real option are options, not obligations. They are very different animals, so be extremely careful. In fact, Parts II and III are devoted to solving advanced real options problems, complete with explanations, examples, models, and case studies, clearly indicating that real options analysis is a lot more powerful and correct than decision trees for solving most decisions under uncertainty and risk.
25. Decision Analysis—Economic Order Quantity and Inventory Reorder Point
File Name: Decision Analysis—Economic Order Quantity and Inventory Reorder Point
Location: Modeling Toolkit | Decision Analysis | Economic Order Quantity and Inventory Reorder Point
Brief Description: Computing the optimal order size and the optimal inventory units to hold that minimize total cost and maximize profits using the EOQ models
Requirements: Modeling Toolkit, Risk Simulator
Modeling Toolkit Functions Used: B2ZEOQ, B2ZEOQOrders, B2ZEOQProbability, B2ZEOQReorderPoint, B2ZEOQExcess
This model illustrates the theory of economic order quantity (EOQ), which is in essence an internal optimization model to determine the best time and the optimal amount to order new inventory given levels of demand, uncertainty in demand, the cost of placing new orders, the cost of holding on to obsolete inventory, backorder requirements, and service level requirements. Each of these inputs can be simulated (i.e., set as input assumptions, and the resulting values set as forecasts). The model is fairly self-explanatory. Please see Figure 25.1.
FIGURE 25.1 EOQ model and inventory reorder point
The results tell the store manager how many parts to order the next time an order is placed, the approximate number of such orders per year, the probability that the part will be out of stock, the reorder point (when inventory on hand hits this level, it is time to reorder), and the excess number of parts available (the safety level), on average, even at the highest levels of demand.
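As a cross-check on these outputs, the textbook EOQ and reorder-point formulas can be computed directly. The sketch below uses hypothetical inputs, and the Modeling Toolkit's B2Z* functions may apply refinements (backorders, service levels) beyond these basic forms.

import math

annual_demand = 12_000        # units per year (hypothetical)
order_cost = 50.0             # fixed cost per order placed
holding_cost = 2.0            # cost of holding one unit for a year
lead_time_days = 5
daily_demand_sd = 8.0         # std. dev. of daily demand, for safety stock
z_service = 1.645             # z-score for an assumed 95% service level

eoq = math.sqrt(2 * annual_demand * order_cost / holding_cost)
orders_per_year = annual_demand / eoq
daily_demand = annual_demand / 365
safety_stock = z_service * daily_demand_sd * math.sqrt(lead_time_days)
reorder_point = daily_demand * lead_time_days + safety_stock

print(f"EOQ: {eoq:.0f} units, {orders_per_year:.1f} orders/year")
print(f"Reorder point: {reorder_point:.0f} units")
# Note the square-root effect discussed in the next chapter: doubling
# demand raises the EOQ by sqrt(2), about 1.41x, not 2x.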
26. Decision Analysis—Economic Order Quantity and Optimal Manufacturing
File Name: Decision Analysis—Economic Order Quantity and Optimal Manufacturing
Location: Modeling Toolkit | Decision Analysis | Economic Order Quantity and Optimal Manufacturing
Brief Description: Computing the economic order quantity (EOQ) for inventory purposes and simulating an EOQ model to obtain confidence intervals for optimal quantities to order
Requirements: Modeling Toolkit, Risk Simulator
Modeling Toolkit Functions Used: B2ZEOQ, B2ZEOQOrders, B2ZEOQProbability, B2ZEOB, B2ZEOBBatch, B2ZEOBProductionCost, B2ZEOBHoldingCost, B2ZEOBTotalCost
Economic order quantity (EOQ) models define the optimal quantity to order that minimizes total variable costs required to order and hold in inventory or the optimal quantities to manufacture. EOQ models provide several key insights. The most important one is that the optimal order quantity Q* varies with the square root of annual demand and not directly with annual demand. This fact provides an important economy of scale. For instance, if demand doubles, the optimal inventory does not double but goes up by the square root of 2, or approximately 1.4. This also means that inventory rules based on time supply are not optimal. For example, a simple rule like inventory managers having to maintain a “month’s worth” of inventory can lead to dangerously low inventory or inventory shortage situations. If demand doubles, then a month’s worth of inventory is twice as large. As noted, this is more quantity than is optimal; a doubling in demand should optimally increase inventory only by the square root of 2, or about 1.4 times. The EOQ model is a lot more than simple inventory computations, as it also computes the optimal quantity to order and the number of orders per year on average, as well as the probability that a product will be out of stock, providing a risk assessment of the probability some items will be out (Figure 26.1). The inputs
can be subjected to Monte Carlo simulation using Risk Simulator. For instance, any of the inputs, such as the demand level or the cost of placing an order, can be set as input assumptions and the EOQ as the output forecast. The EOQ model can also be extended to cover manufacturing outputs, specifically finding the optimal quantities to manufacture given assumed average demand levels, production capacity, the cost of setting up production, the holding cost of inventory, and so forth (Figure 26.2). These two models are fairly self-explanatory and use preprogrammed functions in the Modeling Toolkit software.
FIGURE 26.1 EOQ and optimal reordering
FIGURE 26.2 EOQ and optimal manufacturing
PROCEDURE
This model already has Monte Carlo simulation predefined using Risk Simulator. To run this model, simply click on Risk Simulator | Change Simulation Profile and select the EOQ Model profile, then click on Risk Simulator | Run Simulation. However, if
you wish, you can create your own simulation model:
1. Start a new simulation profile and add various assumptions on Demand, Cost of Placing an Order, Holding Cost, and Cost of Lost Sales.
2. Add forecasts to the key outputs, such as EOQ or EOQ Probability.
3. Run the simulation and interpret the resulting forecast charts as usual.
27. Decision Analysis—Expected Utility Analysis
File Name: Decision Analysis—Expected Utility Analysis
Location: Modeling Toolkit | Decision Analysis | Expected Utility Analysis
Brief Description: Constructing a utility function using risk preferences and expected value of payoffs for a decision maker, as an alternative to expected value analysis
Requirements: Modeling Toolkit
This model applies utility theory in decision analysis to determine which projects should be executed, given that the projects are risky and carry with them the potential for significant gains as well as significant losses. Using a single-point estimate or expected value given various probabilities of events occurring may not be the best approach. An alternative is to determine the risk preferences of the decision maker through the construction of a utility curve. Using the utility curve, we can then determine which project or strategy best fits the decision maker's risk preferences. The first portion of the model looks at the payoff table under three scenarios (high, moderate, and low prices) and their respective payoffs for three strategies (execute Project A, execute Project B, or do nothing). The expected value shows that Project A has the highest payoff; hence, the best decision appears to be to execute Project A. However, suppose the company is a small start-up firm and the owner or decision maker is fairly risk-averse (e.g., even a small loss of, say, $30,000 can potentially mean that the business faces bankruptcy). Under such circumstances, executing Project A may not be the best alternative. To analyze this situation, we revert to the use of utility functions. To start off, as an example, suppose we have a project where the maximum payoff across all strategies is $50,000 and the minimum is –$50,000. We construct a graduated set of payoffs between these two values and call them the guaranteed payoffs (Figure 27.1). The decision maker then determines the indifference probability for each guaranteed payoff. (The maximum is always set to 100%, whereas the minimum is always set to 0%.) Gradually moving down the payoffs, you determine the indifference levels.
For instance, say there is a coin toss game where you receive $50,000 if heads appears and lose $50,000 if tails appears. This game yields an expected value of $0. In such a scenario, what probability makes you indifferent between accepting $40,000 guaranteed and playing the risky game? In the example, the decision maker sets this chance at 97%, and so forth. As another example, the decision maker is willing to accept a sure $10,000 about 85% of the time rather than take the gamble of losing $50,000, even though the upside of $50,000 is still possible. From the risk-averse utility function generated, we can then recompute the payoff table by converting the monetary payoffs into utility payoffs. In this example, it is clear that both projects are fairly risky to the owner, and the best course of action is to stay away from both. For more examples on decision analysis, see the Decision Analysis—Decision Tree with EVPI, Minimax, and Bayes' Theorem model. In addition, for more advanced decision analysis and strategies, see the real options analysis models and cases in Parts II and III of this book.
FIGURE 27.1 Expected utility analysis
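The conversion from monetary payoffs to utility payoffs can be sketched in a few lines. The example below assumes an exponential risk-averse utility curve in place of the chapter's elicited indifference probabilities; all payoffs, probabilities, and the risk-tolerance parameter are hypothetical.

import math

LO, HI, R = -50_000, 50_000, 40_000   # payoff range and assumed risk tolerance

def utility(x):
    # exponential utility, rescaled to 0 at LO and 1 at HI
    raw = lambda v: 1 - math.exp(-v / R)
    return (raw(x) - raw(LO)) / (raw(HI) - raw(LO))

probs = [0.3, 0.4, 0.3]                       # high / moderate / low price
payoffs = {"Project A": [50_000, 20_000, -30_000],
           "Project B": [30_000, 5_000, -10_000],
           "Do nothing": [0, 0, 0]}

for name, pay in payoffs.items():
    ev = sum(p * x for p, x in zip(probs, pay))
    eu = sum(p * utility(x) for p, x in zip(probs, pay))
    print(f"{name}: expected value {ev:>9,.0f}, expected utility {eu:.3f}")
# A risk-averse utility can reorder the expected-value ranking: here the
# less volatile Project B overtakes Project A despite its lower expected value.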
28. Decision Analysis—Inventory Control
File Name: Decision Analysis—Inventory Control
Location: Modeling Toolkit | Decision Analysis | Inventory Control
Brief Description: Creating and simulating a simple inventory control model, which is a precursor to more advanced economic order quantity models
Requirements: Modeling Toolkit, Risk Simulator
This model shows a simple inventory control model, with various inputs such as order cost, carrying or inventory cost, and the projected annual demand, which is deemed to be uncertain and can be simulated. The idea of this model is to allow for the ability to view the various interactions among the input variables and the resulting orders that need to be placed per year. Figures 28.1 and 28.2 illustrate the inventory control model and its output charts. For more advanced
and complex inventory control models, see the Optimization—Inventory Optimization model, or see the other models dealing with economic order quantity applications (Decision Analysis—Economic Order Quantity and Inventory Reorder Point as well as Decision Analysis—Economic Order Quantity and Optimal Manufacturing).
FIGURE 28.1 Inventory control model
FIGURE 28.2 Inventory control charts
29. Decision Analysis—Queuing Models
File Name: Decision Analysis—Queuing Models
Location: Modeling Toolkit | Decision Analysis | Queuing Models
Brief Description: Setting up a queuing model, simulating a queuing model, and interpreting the results from a queuing model
Requirements: Modeling Toolkit, Risk Simulator
Think of how queuing models work. Consider a call center, beds waiting to be occupied in a hospital, or a checkout counter at a McDonald's fast-food restaurant. The queue is the line of people waiting to be served. (Typically, the arrival rates of patrons follow a Poisson distribution on a per-period basis, e.g., per hour or per day.) The number of checkout counters open is the number of channels in a queuing model (the rate at which the servers are able to serve the patrons typically follows an exponential distribution). The questions a queuing model answers are: How many servers or channels should there be if you do not want your patrons to wait more than X minutes? If we have Y servers, what is the probability that an arriving patron will have to wait, and what is the average wait time? And so forth. These types of models are extremely powerful when coupled with simulation, where the arrival rates and service times are variable and simulated. Imagine applications
from staffing call centers, customer service lines, and checkout counters to how many hospital beds should exist in a hospital per type of diagnostic-related group. These models are based on operations research queuing models. The single-channel queuing model and the multiple-channel queuing model (Figure 29.1 shows the multiple-channel model) assume a Poisson distribution of arrival rates and an exponential distribution of service times, the only difference between them being the number of channels. Both the M/G/1 single-channel arbitrary model and the M/G/k blocked queuing model (Figure 29.2) assume the same Poisson distribution of arrival rates but do not rely on the exponential distribution for service times. The two main differences between these two general-distribution (G) models are that the M/G/k uses multiple channels as compared to the single-channel M/G/1, and that the M/G/1 model assumes the possibility of waiting in line while the M/G/k model assumes customers will be turned away if all channels are loaded when they arrive. Spend some time with the models to see the various interacting effects of each input on the results.
FIGURE 29.1 Multiple-channel queuing model
FIGURE 29.2 M/G/k blocked queuing model
RUNNING A MONTE CARLO SIMULATION
In all of these models, the results are closed form. Hence, only the input assumptions (arrival rates and service rates) are uncertain and should be simulated. The forecasts should be set to any of the outputs of interest.
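A minimal sketch of that idea outside of Excel: draw the uncertain rates from assumed distributions and push each draw through the closed-form M/M/1 waiting-time result. The normal distributions and their parameters below are assumptions, not the model's defaults.

import random, statistics

def simulate_wq(trials=10_000):
    waits = []
    for _ in range(trials):
        lam = random.normalvariate(9.0, 1.0)       # uncertain arrival rate
        mu = random.normalvariate(12.0, 1.0)       # uncertain service rate
        if 0 < lam < mu:                           # keep only stable queues
            waits.append(lam / (mu * (mu - lam)))  # closed-form M/M/1 Wq, hours
    return sorted(waits)

waits = simulate_wq()
print(f"mean wait {statistics.mean(waits)*60:.1f} min, "
      f"95th percentile {waits[int(0.95*len(waits))]*60:.1f} min")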
30. Exotic Options—Accruals on Basket of Assets
File Name: Exotic Options—Accruals on Basket of Assets
Location: Modeling Toolkit | Real Options Models | Accruals on Basket of Assets
Brief Description: Understanding Accruals on Basket instruments, which are financial portfolios of multiple underlying assets where the holder of the instrument receives the maximum of the basket of assets or some prespecified guarantee amount
Requirements: Modeling Toolkit, Real Options SLS
Accruals on a Basket of Assets are exotic options in which there are several assets in a portfolio and the holder of the instrument receives the maximum of either the guaranteed amount or the value of any one of the assets. This instrument can be modeled as an American option (exercisable at any time up to and including maturity), a European option (exercisable only at maturity), or a Bermudan option (exercisable only at certain times). Using the multiple-asset, multiple-phased module of the Real Options SLS software, we can model the value of an accrual option (Figure 30.1). It is highly recommended that readers first familiarize themselves with the Introduction to the Real Options SLS Software chapter before attempting to solve any SLS models. Although this model uses the SLS software, it is listed under the Exotic Options section because basket accruals are considered exotic options and are solved using methodologies similar to those of other exotics. The inputs are the usual option inputs. The terminal and intermediate node equations are:
Terminal Node Equation: Max(FirstAsset,SecondAsset,ThirdAsset,FourthAsset,Guarantee)
Intermediate Node Equation: Max(FirstAsset,SecondAsset,ThirdAsset,FourthAsset,OptionOpen)
FIGURE 30.1 Accruals on basket of assets (solved using SLS)
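To see where such numbers come from, the European-exercise case can be approximated by simulation. The sketch below assumes four independent lognormal assets and hypothetical inputs; the SLS lattice in Figure 30.1 additionally captures American and Bermudan early exercise (and any asset correlations), so this is only a rough cross-check.

import math, random

S0 = [100.0, 95.0, 110.0, 90.0]     # starting values of the four assets
sigma = [0.25, 0.30, 0.20, 0.35]    # volatilities
guarantee, r, T = 100.0, 0.05, 3.0  # guaranteed floor, risk-free rate, years

def terminal_value(s, v):
    # risk-neutral GBM; assets treated as independent for simplicity
    z = random.gauss(0.0, 1.0)
    return s * math.exp((r - 0.5 * v * v) * T + v * math.sqrt(T) * z)

def accrual_value(trials=50_000):
    total = 0.0
    for _ in range(trials):
        assets = [terminal_value(s, v) for s, v in zip(S0, sigma)]
        total += max(max(assets), guarantee)    # Max(assets..., Guarantee)
    return math.exp(-r * T) * total / trials    # discounted average payoff

print(f"European accrual value: {accrual_value():.2f}")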
31. Exotic Options—American, Bermudan, and European Options with Sensitivities
File Name: Exotic Options—American, Bermudan, and European Options with Sensitivities
Location: Modeling Toolkit | Exotic Options | American, Bermudan, and European Options
Brief Description: Computing American, Bermudan, and European options with Greek sensitivities
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2BinomialAmericanCall, B2BinomialBermudanCall, B2BinomialEuropeanCall, B2BinomialAmericanPut, B2BinomialBermudanPut, B2BinomialEuropeanPut, B2GeneralizedBlackScholesCall, B2GeneralizedBlackScholesPut, B2ClosedFormAmericanCall, B2ClosedFormAmericanPut, B2CallDelta, B2CallGamma, B2CallTheta, B2CallRho, B2CallVega, B2PutDelta, B2PutGamma, B2PutTheta, B2PutRho, B2PutVega
American options can be exercised at any time up to and including maturity, European options can be exercised only at maturity, and Bermudan options can be exercised at certain times (exercisable other than during the vesting blackout period). In most cases, American ≥ Bermudan ≥ European options, except for one special case: for plain vanilla call options when there are no dividends, American = Bermudan = European call options, as it is never optimal to exercise a plain vanilla call option early when there are no dividends (Figure 31.1). However, once there is a sufficiently high dividend rate paid by the underlying stock, we clearly see that the relationship American ≥ Bermudan ≥ European applies (Figure 31.2). European options can be solved using the Generalized Black-Scholes-Merton model (a closed-form equation) as well as binomial lattices and other numerical methods. However, for American options, we cannot use the Black-Scholes model and must revert to the binomial lattice approach and closed-form approximation models. Using the binomial lattice requires an additional input variable: the number of lattice steps. The higher the number of lattice steps, the higher the precision of
the results. Typically, 100 to 1,000 steps are sufficient to achieve convergence. Note that only binomial lattices can be used to solve Bermudan options. In Parts II and III of this book, the Real Options SLS software is used to solve more advanced and exotic option vehicles with a fully customizable lattice model.
FIGURE 31.1 American, Bermudan, and European options with Greeks
FIGURE 31.2 American, Bermudan and European options with Dividends
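The lattice mechanics behind these comparisons are straightforward to sketch. The following minimal CRR binomial implementation (hypothetical inputs) checks early exercise at every node; with a dividend yield the American value exceeds the European value, and with no dividends the two coincide, exactly as discussed above.

import math

def binomial_call(S, K, T, r, q, sigma, steps=500, american=True):
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))         # up move
    d = 1.0 / u                                 # down move
    p = (math.exp((r - q) * dt) - d) / (u - d)  # risk-neutral up probability
    disc = math.exp(-r * dt)
    # option values at maturity
    vals = [max(S * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]
    # roll back from right to left through the lattice
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * vals[j + 1] + (1 - p) * vals[j])
            vals[j] = max(cont, S * u**j * d**(i - j) - K) if american else cont
    return vals[0]

# With a dividend yield q, American > European; with q = 0 they coincide.
print(binomial_call(100, 100, 1.0, 0.05, 0.04, 0.25, american=True))
print(binomial_call(100, 100, 1.0, 0.05, 0.04, 0.25, american=False))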
32. Exotic Options—American Call Option on Foreign Exchange
File Name: Exotic Options—American Call Option on Foreign Exchange
Location: Modeling Toolkit | Real Options Models | American Call Option on Foreign Exchange
Brief Description: Computing American and European options on foreign exchange
Requirements: Modeling Toolkit, Real Options SLS
A foreign exchange option (FX option or FXO) is a derivative where the owner has the right but not the obligation to exchange money denominated in one currency into another currency at a previously agreed upon exchange rate on a specified date. The FX options market is the deepest, largest, and most liquid market for options of any kind in the world. The valuation here uses the Garman-Kohlhagen model (Figure 32.1). You can use the Exotic Options—Currency (Foreign Exchange) Options model in the Modeling Toolkit software to compare the results of this Real Options SLS model. The exotic options model computes the European version using closed-form models; the Real Options SLS model showcased here computes the European and American options using a binomial lattice approach. When using the binomial lattice approach in the Real Options SLS software, it is important to set the Dividend Rate as the foreign country's risk-free rate, the PV Underlying Asset as the spot exchange rate, and the Implementation Cost as the strike exchange rate. As an example of a foreign exchange option, suppose the exchange rate of British pounds (GBP) to U.S. dollars (USD) is USD2/GBP1. Then the spot exchange rate is 2.0. Because the exchange rate is denominated in GBP (the denominator), the domestic risk-free rate is the rate in the United Kingdom, and the foreign rate is the rate in the United States. This means that the foreign exchange contract allows the holder the option to call GBP and put USD. To illustrate, suppose a U.K. firm is getting US$1M in six months, and the spot exchange rate is USD2/GBP1. If the GBP strengthens, the U.K. firm loses when it has to repatriate USD back to GBP, but it gains if the GBP weakens. If the firm hedges the foreign exchange exposure with an FXO and gets a call on GBP (put on USD), it hedges itself from foreign exchange fluctuation risks. For discussion purposes, say the timing is short, interest rates are low, and volatility is low. Getting a call option with a strike of 1.90 yields a call value of approximately
0.10 (i.e., the firm can execute the option and gain the difference of 2.00 – 1.90, or 0.10, immediately). This means that the rate now becomes USD1.90/GBP1, and it is cheaper to purchase GBP with the same USD; that is, the U.K. firm gets a higher GBP payoff. In situations where volatility is nonzero and maturity is longer, there is significant value in an FXO, which can be modeled using the Real Options SLS approach.
FIGURE 32.1 Foreign exchange option
33. Exotic Options—American Call Options on Index Futures
File Name: Exotic Options—American Call Option on Index Futures
Location: Modeling Toolkit | Real Options Models | American Call Option on Index Futures
Brief Description: Computing the value of an index option using closed-form models and binomial lattices
Requirements: Modeling Toolkit, Real Options SLS
The index option is similar to a regular option, but the underlying asset is a reference stock index, such as the Standard & Poor's 500. The analysis can be solved using a closed-form generalized Black-Scholes-Merton model. The model used in this chapter is similar to the closed-form model but applies a binomial lattice instead. The difference is that Black-Scholes can solve only European options, while the binomial lattice is capable of solving American, European, and Bermudan or mixed and customized options. Instead of using the asset value or current stock price as the input, we use the current index value. All other inputs remain the same as in other options models. Please see Figure 33.1. Index futures are an important investment vehicle, as stock indexes cannot be traded directly; futures based on stock indexes are therefore the primary vehicles for trading indexes. Index futures operate and are traded in essentially the same way as other futures. Because indexes are based on many separate stocks, index futures are settled in cash rather than in stocks. In addition, index futures allow investors to participate in the entire market without the significant cost of purchasing each underlying stock in an index. Index futures are widely used for hedging market movements and are applied in portfolios for their diversification effects.
FIGURE 33.1 American and European index futures
34. Exotic Options—American Call Option with Dividends
File Name: Exotic Options—American Call Option with Dividends
Location: Modeling Toolkit | Real Options Models | American Call Option with Dividends
Brief Description: Solving the American call option using customizable binomial lattices as well as closed-form approximation models
Requirements: Modeling Toolkit, Real Options SLS
The American call option with dividends model computes the call option in the Real Options SLS software by applying the binomial lattice methodology. The results are benchmarked against closed-form approximation models. (See Figure 34.1.) You can compare the results from this model with the European call option with dividends example. Note that when there are dividends, American options (which can be exercised early) are often worth more than Bermudan options (options that can be exercised before termination but not during blackout periods, such as vesting or contractual nontrading days), which in turn are worth more than European options (options that can be exercised only at maturity). In contrast, for plain vanilla call options without dividends, American options are worth the same as Bermudan and European options, as it is never optimal to exercise them early. This holds only for simple plain vanilla call options when no dividends exist. See Chapter 31 on modeling American, Bermudan, and European options for further comparisons of value.
FIGURE 34.1 American call option with dividends (solved in SLS using customized equations)
35. Exotic Options—Asian Lookback Options Using Arithmetic Averages
File Name: Exotic Options—Asian Arithmetic
Location: Modeling Toolkit | Exotic Options | Asian Arithmetic
Brief Description: Solving an Asian lookback option using closed-form models, where the lookback is linked to the arithmetic average of past prices
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2AsianCallwithArithmeticAverageRate, B2AsianPutwithArithmeticAverageRate
Asian options (also called Average Options) are options whose payoffs are linked to the average value of the underlying asset on a specific set of dates during the life of the option. An average rate option is a cash-settled option whose payoff is based on the difference between the average value of the underlying asset during the life of the option and a fixed strike. The arithmetic version means that the prices are simple averages rather than geometric averages. End users of currency, commodities, or energy trading tend to be exposed to average prices over time, so Asian options are attractive for them. Asian options are also popular with corporations and banks with ongoing currency exposures. These options are also attractive because they tend to be less expensive—sell at lower premiums—than comparable vanilla puts or calls. This is because the volatility in the average value of an underlying stock or asset tends to be lower than the volatility of the actual values of the underlying stock or asset. Also, in situations where the underlying asset is thinly traded or there is the potential for its price to be manipulated, an Asian option offers some protection. It is more difficult to manipulate the average value of an underlying asset over an extended period of time than it is to manipulate it just at the expiration of an option. Please see Figure 35.1.
FIGURE 35.1 Computing the Asian lookback option (arithmetic mean)
36. Exotic Options—Asian Lookback Options Using Geometric Averages
File Name: Exotic Options—Asian Geometric
Location: Modeling Toolkit | Exotic Options | Asian Geometric
Brief Description: Solving an Asian lookback option using closed-form models, where the lookback is linked to the geometric average of past prices
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2AsianCallwithGeometricAverageRate, B2AsianPutwithGeometricAverageRate
Asian options (also called Average Options) are options whose payoffs are linked to the average value of the underlying asset on a specific set of dates during the life of the option. An average rate option is a cash-settled option whose payoff is based on the difference between the average value of the underlying asset during the life of the option and a fixed strike. The geometric version means that the prices are geometric averages rather than simple arithmetic averages, providing a more conservative measure. End users of currency, commodities, or energy trading tend to be exposed to average prices over time, so Asian options are attractive for them. Asian options are also popular with corporations and banks with ongoing currency exposures. These options are attractive because they tend to be less expensive—sell at lower premiums—than comparable vanilla puts or calls. This is because the volatility in the average value of an underlying stock or asset tends to be lower than the volatility
of the actual values of the underlying stock or asset. Also, in situations where the underlying asset is thinly traded or there is the potential for its price to be manipulated, an Asian option offers some protection. It is more difficult to manipulate the average value of an underlying asset over an extended period of time than it is to manipulate it just at the expiration of an option. Please see Figure 36.1.
FIGURE 36.1 Computing the Asian lookback option (geometric mean)
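A simulation cross-check makes the arithmetic-versus-geometric comparison concrete. The sketch below values both average-rate calls by Monte Carlo under hypothetical inputs; the B2Asian* functions themselves are closed-form approximations, so small differences are expected.

import math, random

S, K, r, sigma, T, n_obs = 100.0, 100.0, 0.05, 0.30, 1.0, 12

def asian_calls(trials=50_000):
    dt = T / n_obs
    sum_arith = sum_geo = 0.0
    for _ in range(trials):
        s, total, log_total = S, 0.0, 0.0
        for _ in range(n_obs):          # monthly averaging points
            z = random.gauss(0.0, 1.0)
            s *= math.exp((r - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
            total += s
            log_total += math.log(s)
        sum_arith += max(total / n_obs - K, 0.0)
        sum_geo += max(math.exp(log_total / n_obs) - K, 0.0)
    disc = math.exp(-r * T)
    return disc * sum_arith / trials, disc * sum_geo / trials

arith, geo = asian_calls()
print(f"arithmetic {arith:.2f} > geometric {geo:.2f}")  # geometric mean is lower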
37. Exotic Options—Asset or Nothing Options
File Name: Exotic Options—Asset or Nothing
Location: Modeling Toolkit | Exotic Options | Asset or Nothing
Brief Description: Computing an asset or nothing option where, as long as the asset is above water, the holder will receive the asset at maturity
Requirements: Modeling Toolkit, Real Options SLS
Modeling Toolkit Functions Used: B2AssetOrNothingCall, B2AssetOrNothingPut
An Asset or Nothing option is exactly what it implies. At expiration, if the option is in the money, regardless of how deep it is in the money, the option holder receives the stock or asset. This means that for a call option, as long as the stock or asset price exceeds the strike at expiration, the stock is received. Conversely, for a put option, the stock is received only if the stock or asset value falls below the strike price. Please see Figure 37.1.
FIGURE 37.1 Computing the asset or nothing option
38. Exotic Options—Barrier Options
File Name: Exotic Options—Barrier Options
Location: Modeling Toolkit | Exotic Options | Barrier Options
Brief Description: Valuing various types of barrier options: up and in, down and in, up and out, and down and out call and put options
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2BarrierDownandInCall, B2BarrierUpandInCall, B2BarrierDownandInPut, B2BarrierUpandInPut, B2BarrierDownandOutCall, B2BarrierUpandOutCall, B2BarrierDownandOutPut, B2BarrierUpandOutPut
Barrier options become valuable or get knocked in the money only if a barrier (upper or lower barrier) is breached (or not), and the payout is in the form of the option on the underlying asset. Sometimes, as remuneration for the risk of not being knocked in, a specified cash rebate is paid at the end of the instrument’s maturity (at expiration), assuming that the option has not been knocked in. As an example, in an up and in call option, the instrument pays the specified cash amount at expiration if and only if the asset value does not breach the upper barrier (the asset value does not go above the upper barrier), providing the holder of the instrument with a safety net or a cash insurance. However, if the asset breaches the upper barrier, the option gets knocked in and becomes a live option. An up and out option is live only as long as the asset does not breach the upper barrier, and so forth. The term monitoring periodicities means how often during the life of the
option the asset or stock value will be monitored to see if it breaches a barrier. As an example, entering 12 implies monthly monitoring, 52 means weekly monitoring, 252 indicates monitoring on daily trading days, 365 means monitoring every calendar day, and 1,000,000 is used to approximate continuous monitoring. Please see Figure 38.1. In general, barrier options limit the potential of a regular option's profits and hence cost less than regular options without barriers. For instance, in Figure 38.1, if we assume no cash rebates, a Generalized Black-Scholes call option returns $6.50 as opposed to $5.33 for a barrier option (up and in barrier set at $115 with a stock price and strike value of $100). This is because the call option will be knocked in only if the stock price goes above this $115 barrier, thereby reducing the regular option's profits between $100 and $115. This reduction effect is even more pronounced in the up and out call option, where all the significant upside profits (above the $115 barrier) are completely truncated.
FIGURE 38.1 Computing barrier options
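The knock-in mechanics with discrete monitoring can be sketched by simulation. The example below prices an up-and-in call monitored monthly; the volatility and other inputs are hypothetical, so it will not reproduce the chapter's $5.33 figure exactly, but it shows the same value reduction relative to a plain call.

import math, random

S, K, barrier = 100.0, 100.0, 115.0
r, sigma, T, monitorings = 0.05, 0.25, 1.0, 12   # 12 = monthly monitoring

def up_and_in_call(trials=100_000):
    dt = T / monitorings
    total = 0.0
    for _ in range(trials):
        s, knocked_in = S, False
        for _ in range(monitorings):
            z = random.gauss(0.0, 1.0)
            s *= math.exp((r - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
            if s >= barrier:
                knocked_in = True       # barrier breached: option becomes live
        if knocked_in:
            total += max(s - K, 0.0)    # payoff counts only if knocked in
    return math.exp(-r * T) * total / trials

print(f"up-and-in call: {up_and_in_call():.2f}")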
39. Exotic Options—Binary Digital Options
File Name: Exotic Options—Binary Digital Options
Location: Modeling Toolkit | Exotic Options | Binary Digital Options
Brief Description: Using binary digital options, instruments that either get knocked in or out of the money depending on whether the asset value breaches or does not breach certain barriers
Requirements: Modeling Toolkit, Real Options SLS
Modeling Toolkit Functions Used: B2BinaryDownAndInCashAtExpirationOrNothingCall, B2BinaryUpAndInCashAtExpirationOrNothingCall, B2BinaryDownAndInAssetAtExpirationOrNothingCall, B2BinaryUpAndInAssetAtExpirationOrNothingCall, B2BinaryDownAndOutCashAtExpirationOrNothingCall, B2BinaryUpAndOutCashAtExpirationOrNothingCall, B2BinaryDownAndOutAssetAtExpirationOrNothingCall, B2BinaryUpAndOutAssetAtExpirationOrNothingCall, B2BinaryDownAndInCashAtExpirationOrNothingPut, B2BinaryUpAndInCashAtExpirationOrNothingPut, B2BinaryDownAndInAssetAtExpirationOrNothingPut, B2BinaryUpAndInAssetAtExpirationOrNothingPut, B2BinaryDownAndOutCashAtExpirationOrNothingPut, B2BinaryUpAndOutCashAtExpirationOrNothingPut, B2BinaryDownAndOutAssetAtExpirationOrNothingPut, B2BinaryUpAndOutAssetAtExpirationOrNothingPut
Binary exotic options (also known as Digital, Accrual, or Fairway options) become valuable only if a barrier (upper or lower) is breached (or not), and the payout can be in the form of some prespecified cash amount or the underlying asset itself. The cash or asset changes hands either at the point when the barrier is breached or at the end of the instrument's maturity (at expiration), assuming that the barrier is breached at some point prior to maturity. For instance, in the down and in cash at expiration option, the instrument pays the specified cash amount at expiration if and only if the asset value breaches the lower barrier (the asset value goes below the lower barrier), providing the holder of the instrument with a safety net or cash insurance in case the underlying asset does not perform well. With up and in options, the cash or asset is provided if the underlying asset goes above the upper barrier threshold. In up and out or down and out options, the asset or cash is paid as long as the upper or lower barrier is not breached. With at expiration options, cash and assets are paid at maturity, whereas the at hit instruments are payable at the point when the barrier is breached. Please see Figure 39.1.
FIGURE 39.1 Binary digital options
40. Exotic Options—Cash or Nothing Options
File Name: Exotic Options—Cash or Nothing
Location: Modeling Toolkit | Exotic Options | Cash or Nothing
Brief Description: Computing the cash or nothing option, where, if in the money at expiration, the holder receives some prespecified cash amount
Requirements: Modeling Toolkit, Real Options SLS
Modeling Toolkit Functions Used: B2CashOrNothingCall, B2CashOrNothingPut
A Cash or Nothing option is exactly what it implies. At expiration, if the option is in the money, regardless of how deep it is in the money, the option holder receives a predetermined cash payment. (See Figure 40.1.) This means that for a call option, as long as the stock or asset price exceeds the strike at expiration, cash is received. Conversely, for a put option, cash is received only if the stock or asset value falls below the strike price.
FIGURE 40.1 Cash or nothing options
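Both this chapter's cash-or-nothing payoff and Chapter 37's asset-or-nothing payoff have simple closed forms, sketched below with hypothetical inputs. These are the standard textbook formulas, which the B2CashOrNothing* and B2AssetOrNothing* functions should mirror; q denotes a continuous dividend yield.

import math
from statistics import NormalDist

N = NormalDist().cdf

def digital_calls(S, K, T, r, q, sigma, cash):
    d1 = (math.log(S / K) + (r - q + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    asset_or_nothing = S * math.exp(-q * T) * N(d1)   # pays the asset if S_T > K
    cash_or_nothing = cash * math.exp(-r * T) * N(d2) # pays fixed cash if S_T > K
    return asset_or_nothing, cash_or_nothing

aon, con = digital_calls(S=100, K=100, T=1.0, r=0.05, q=0.0, sigma=0.3, cash=10)
print(f"asset-or-nothing {aon:.2f}, cash-or-nothing {con:.2f}")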
41. Exotic Options—Chooser Option (Simple Chooser)
File Name: Exotic Options—Simple Chooser
Location: Modeling Toolkit | Exotic Options | Simple Chooser
Brief Description: Computing the value of an option that can become either a call or a put
Requirements: Modeling Toolkit, Real Options SLS
Modeling Toolkit Functions Used: B2ChooserBasicOption, B2GeneralizedBlackScholesCall, B2GeneralizedBlackScholesPut
A Simple Chooser option allows the holder to choose, within the chooser time, whether the option becomes a call or a put. Please see Figure 41.1. Regardless of the choice, the option has the same contractual strike price and maturity. Typically, a chooser option is cheaper than purchasing both a call and a put together; it provides the same level of hedge at a lower cost, but the strike prices of the two options are identical. The Complex Chooser option in the next chapter allows for different strike prices and maturities.
FIGURE 41.1 Simple exotic chooser option
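The simple chooser is easy to value by simulation: simulate the asset to the choose date, then take the better of the remaining Black-Scholes call or put. The sketch below assumes no dividends and hypothetical inputs; the B2ChooserBasicOption function uses a closed form instead.

import math, random
from statistics import NormalDist

N = NormalDist().cdf

def bs_value(S, K, tau, r, sigma, is_call):
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    if is_call:
        return S * N(d1) - K * math.exp(-r * tau) * N(d2)
    return K * math.exp(-r * tau) * N(-d2) - S * N(-d1)

def simple_chooser(S, K, t1, T2, r, sigma, trials=50_000):
    total = 0.0
    for _ in range(trials):
        z = random.gauss(0.0, 1.0)
        s1 = S * math.exp((r - 0.5 * sigma**2) * t1 + sigma * math.sqrt(t1) * z)
        total += max(bs_value(s1, K, T2 - t1, r, sigma, True),
                     bs_value(s1, K, T2 - t1, r, sigma, False))  # choose at t1
    return math.exp(-r * t1) * total / trials

print(f"chooser: {simple_chooser(100, 100, 0.25, 1.0, 0.05, 0.3):.2f}")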
42. Exotic Options—Chooser Option (Complex Chooser)
File Name: Exotic Options—Complex Chooser
Location: Modeling Toolkit | Exotic Options | Complex Chooser
Brief Description: Using the complex chooser option, which allows the holder to choose if it becomes a call or a put option by a specific time, while the maturity and strike price of the call and put can be different
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2ChooserComplexOption, B2GeneralizedBlackScholesCall, B2GeneralizedBlackScholesPut
A Complex Chooser option allows the holder to choose, within the chooser time, whether the option becomes a call or a put. Unlike the simple chooser, the call and put can have different strike prices and maturities. Typically, a chooser option is cheaper than purchasing both a call and a put together; it provides the same level of hedge at a lower cost, while the strike prices of the two options can differ. Please see Figure 42.1.
FIGURE 42.1 Complex chooser option
43. Exotic Options—Commodity Options
File Name: Exotic Options—Commodity Options
Location: Modeling Toolkit | Exotic Options | Commodity Options
Brief Description: Modeling and valuing a commodity option where the commodity's spot and future values are used to value the option, while the forward rates and convenience yields are assumed to be mean reverting and volatile
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2CommodityCallOptionModel, B2CommodityPutOptionModel
This model computes the values of commodity-based European call and put options, where the convenience yield and forward rates are assumed to be mean reverting, and each has its own volatilities and cross-correlations. This is a complex multifactor model with interrelationships among all variables. Please see Figure 43.1. Great care should be taken to calibrate the inputs into this model.
FIGURE 43.1 Commodity option
44. Exotic Options—Currency (Foreign Exchange) Options
File Name: Exotic Options—Currency Options
Location: Modeling Toolkit | Exotic Options | Currency Options
Brief Description: Valuing a foreign exchange currency option, typically used in hedging foreign exchange fluctuations, where the key inputs are the spot exchange rate and the contractual purchase or sale price of the foreign exchange currency for delivery in the future
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2CurrencyCallOption, B2CurrencyPutOption
A foreign exchange option (FX option or FXO) is a derivative where the owner has the right but not the obligation to exchange money denominated in one currency into another currency at a previously agreed upon exchange rate on a specified date. The FX options market is the deepest, largest, and most liquid market for options of any kind in the world. The valuation here uses the Garman-Kohlhagen model (Figure 44.1). As an example, suppose the exchange rate of British pounds (GBP) to U.S. dollars (USD) is USD2/GBP1. Then the spot exchange rate is 2.0. Because the exchange rate is denominated in GBP (the denominator), the domestic risk-free rate is the rate in the United Kingdom, and the foreign rate is the rate in the United States. This means that the foreign exchange contract allows the holder the option to call GBP and put USD. To illustrate, suppose a U.K. firm is getting US$1M in six months, and the spot exchange rate is USD2/GBP1. If the GBP strengthens, the U.K. firm loses if it has to
repatriate USD back to GBP; it gains if the GBP weakens. If the firm hedges the foreign exchange exposure with an FXO and gets a call on GBP (put on USD), it hedges itself from any foreign exchange fluctuation risks. For discussion purposes, say the timing is short, interest rates are low, and volatility is low. Getting a call option with a strike of 1.90 yields a call value of approximately 0.10 (i.e., the firm can execute the option and gain the difference of 2.00 – 1.90, or 0.10, immediately). This means that the rate now becomes USD1.90/GBP1, and it is cheaper to purchase GBP with the same USD; that is, the U.K. firm gets a higher GBP payoff.
FIGURE 44.1 Foreign exchange option
45. Exotic Options—Double Barrier Options
File Name: Exotic Options—Double Barrier Options
Location: Modeling Toolkit | Exotic Options | Double Barrier Options
Brief Description: Valuing various double barrier options, including up and in, up and out, down and in, and down and out combinations, where a call or put option is knocked in or out of the money depending on whether it breaches an upper or lower barrier
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2BarrierDoubleUpOutDownOutCall, B2BarrierDoubleUpOutDownOutPut, B2BarrierDoubleUpInDownInCall, B2BarrierDoubleUpInDownInPut
Barrier options become valuable or get knocked in the money only if a barrier (upper or lower) is breached (or not), and the payout is in the form of the option on the underlying asset. Double barrier options have two barriers, one above the current asset value (the upper barrier) and one below it (the lower barrier). Please see Figure 45.1. Either barrier has to be breached for a knock-in or knock-out event to occur. In general, barrier options limit the potential of a regular option's profits and hence cost less than regular options without barriers. As an example, in the up and out, down and out call option, the instrument is knocked out if the asset breaches either the upper or the lower barrier, but the option remains live at maturity if the asset price stays between these two barriers. The term monitoring periodicities means how often during the life of the option the asset or stock value will be monitored to see if it breaches a barrier. For example, entering 12 implies monthly monitoring, 52 means weekly monitoring, 252 indicates monitoring on daily trading days, 365 means monitoring every calendar day, and 1,000,000 is used to approximate continuous monitoring. The delta curvature can also be entered for the upper and lower barrier asset prices if the barrier option is used as a delta or gamma hedge; otherwise, set these to zero.
FIGURE 45.1 Double barrier options
46. Exotic Options—European Call Option with Dividends
File Name: Exotic Options—European Call Option with Dividends
Location: Modeling Toolkit | Real Options Models | European Call Option with Dividends
Brief Description: Using the customized binomial lattice approach to solve a European call option with dividends, where the holder of the option can exercise only at maturity, not before
Requirements: Modeling Toolkit, Real Options SLS
A European call option enables the holder to exercise the option only at maturity. An American option can be exercised at any time before as well as up to and including maturity. A Bermudan option is a mix of the two: during certain vesting and blackout periods the option cannot be exercised, but at the end of the blackout period it becomes exercisable at any time until its maturity. For a simple plain vanilla call option, all three varieties have the same value, as it is never optimal to exercise early, making all three revert to the value of a simple European option. However, this does not hold true when dividends exist. When dividends are high enough, it is typically optimal to exercise a call option
early, particularly just before the ex-dividend date, when the dividend payment reduces the value of the asset and, consequently, the value of the call option. The American call option typically is worth more than a Bermudan option, which typically is worth more than a European option, except in the special case of a simple plain vanilla call option. Figure 46.1 illustrates how a simple European call option with dividends can be modeled using a binomial lattice with 100 steps. The Terminal Equation is simply set as Max(Asset – Cost, 0), indicating the ability to execute the option at the terminal nodes at maturity if it is in the money or let it expire worthless otherwise. The Intermediate Equation is set as OptionOpen, which indicates that at any time before expiration the only thing the owner can do is keep the option open and wait until maturity to decide whether the option should be executed. Refer to Part II of this book for more details on using the Real Options SLS software in more sophisticated and customized ways.
FIGURE 46.1 European call option with dividends (using SLS and customized equations)
47. Exotic Options—Exchange Assets Option
File Name: Exotic Options—Exchange Assets
Location: Modeling Toolkit | Exotic Options | Exchange Assets
Brief Description: Valuing a European option that allows the holder to swap out one asset for another at expiration, with predetermined quantities of each asset to be swapped
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2AssetExchangeAmericanOption, B2AssetExchangeEuropeanOption
The Exchange Assets option (see Figure 47.1) provides the option holder the right at expiration (European version) to swap and give Asset 2 away and, in return, receive Asset 1, with predetermined quantities of the first and second assets. The American option allows the swap to occur at any time before as well as up to and including maturity. Clearly, the more negative the correlation between the assets, the larger the risk reduction and diversification effects and hence the higher the value of the option. Sometimes in a real options world, where the assets swapped are not financial instruments but real physical assets, this option is called a switching option. We will discuss switching options in later chapters when dealing with real options applications.
FIGURE 47.1 Exchange assets options
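The European version has a well-known Margrabe-style closed form, sketched below with hypothetical inputs. Note how the value rises as the correlation between the two assets becomes more negative, matching the diversification argument above.

import math
from statistics import NormalDist

N = NormalDist().cdf

def exchange_option(S1, S2, Q1, Q2, T, q1, q2, sig1, sig2, rho):
    """Value of receiving Q1 units of asset 1 for Q2 units of asset 2."""
    sig = math.sqrt(sig1**2 + sig2**2 - 2 * rho * sig1 * sig2)  # combined vol
    d1 = (math.log(Q1 * S1 / (Q2 * S2))
          + (q2 - q1 + 0.5 * sig**2) * T) / (sig * math.sqrt(T))
    d2 = d1 - sig * math.sqrt(T)
    return (Q1 * S1 * math.exp(-q1 * T) * N(d1)
            - Q2 * S2 * math.exp(-q2 * T) * N(d2))

for rho in (0.5, 0.0, -0.5):
    v = exchange_option(100, 100, 1, 1, 1.0, 0.0, 0.0, 0.25, 0.25, rho)
    print(f"rho {rho:+.1f}: {v:.2f}")   # value rises as correlation falls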
48. Exotic Options—Extreme Spreads Option
File Name: Exotic Options—Extreme Spreads
Location: Modeling Toolkit | Exotic Options | Extreme Spreads
Brief Description: Computing extreme spread option values, where the vehicle is divided into two segments and the option pays off the difference between the extreme values (min or max) of the asset during the two time segments
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2ExtremeSpreadCallOption, B2ExtremeSpreadPutOption, B2ExtremeSpreadReverseCallOption, B2ExtremeSpreadReversePutOption
Extreme Spread options have their maturities divided into two segments, starting from time 0 to the first time period (first segment) and from the first time period to maturity (second segment). An extreme spread call option pays the difference between the maximum asset value from the second segment and the maximum value of the first segment. Conversely, the put pays the difference between the minimum of the second segment’s asset value and the minimum of the first segment’s asset value. A reverse call pays the minimum from the first less the minimum of the second segment, whereas a reverse put pays the maximum of the first less the maximum of the second segments. Please see Figure 48.1.
c41-50
JWBK121-Mun
March 15, 2008
22:9
Char Count=
49. Exotic Options—Foreign Equity–Linked Foreign Exchange Options in Domestic Currency
205
FIGURE 48.1 Extreme spreads options
49. Exotic Options—Foreign Equity–Linked Foreign Exchange Options in Domestic Currency
File Name: Exotic Options—Foreign Equity Linked Forex Options
Location: Modeling Toolkit | Exotic Options | Foreign Equity Linked Forex
Brief Description: Computing the option where the underlying asset is in a foreign market, the exchange rate is fixed in advance to hedge the exposure risk, and the strike price is set as a foreign exchange rate rather than a price
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2EquityLinkedFXCallOptionDomesticValue, B2EquityLinkedFXPutOptionDomesticValue
Equity-Linked Foreign Exchange (forex) options (see Figure 49.1) are options whose underlying asset is in a foreign equity market. The option holder can hedge the fluctuations of the foreign exchange risk by having a strike price on the foreign
c41-50
JWBK121-Mun
206
March 15, 2008
22:9
Char Count=
MODELING TOOLKIT AND RISK SIMULATOR APPLICATIONS
exchange rate. The resulting valuation is in the domestic currency. There are three closely related models in this chapter and the next two chapters (foreign equity–linked foreign exchange option, foreign equity struck in domestic currency, and foreign equity with fixed exchange rate in domestic currency). Their similarities and differences can be summarized in these ways:
The underlying asset is denominated in a foreign currency.
The foreign exchange rate is quoted as domestic currency to foreign currency.
The option is valued in domestic currency.
The strike prices differ:
The exchange rate is the strike for the foreign equity–linked foreign exchange option.
The domestic currency is the strike for the foreign equity struck in domestic currency option.
The foreign currency is the strike for the foreign equity with fixed exchange rate in domestic currency option.
FIGURE 49.1 Equity-linked options
c41-50
JWBK121-Mun
March 15, 2008
22:9
Char Count=
50. Exotic Options—Foreign Equity Struck in Domestic Currency
207
50. Exotic Options—Foreign Equity Struck in Domestic Currency File Name: Exotic Options—Foreign Equity Struck in Domestic Currency Location: Modeling Toolkit | Exotic Options | Foreign Equity Domestic Currency Brief Description: Valuing the options on foreign equities denominated in foreign exchange currency while the strike price is in domestic currency Requirements: Modeling Toolkit Modeling Toolkit Functions Used: B2ForeignEquityDomesticCurrencyCall, B2ForeignEquityDomesticCurrencyPut
Foreign Equity Options Struck in Domestic Currency is an option on foreign equities in a foreign currency, but the strike price is in domestic currency. At expiration, assuming the option is in the money, its value will be translated back into the domestic currency. The exchange rate is the spot rate for domestic currency to foreign currency, the asset price is denominated in a foreign currency, and the strike price is in domestic currency. Please see Figure 50.1.
FIGURE 50.1 Foreign equity with the strike in domestic currency
51. Exotic Options—Foreign Equity with Fixed Exchange Rate
File Name: Exotic Options—Foreign Equity with Fixed Exchange Rates
Location: Modeling Toolkit | Exotic Options | Foreign Equity Fixed Forex
Brief Description: Valuing foreign equity options where the option is in a currency foreign to that of the underlying asset but with a risk hedging on the exchange rate
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2ForeignEquityFixedFXRateDomesticValueQuantoCall, B2ForeignEquityFixedFXRateDomesticValueQuantoPut
Quanto options, also known as foreign equity options, are traded on exchanges around the world. The options are denominated in a currency different from that of the underlying asset. The option has an expanding or contracting coverage of the foreign exchange value of the underlying asset. The valuation of these options depends on the volatilities of the underlying assets and the currency exchange rate as well as the correlation between the currency and the asset value. Please see Figure 51.1.
FIGURE 51.1 Foreign equity in domestic currency
52. Exotic Options—Foreign Takeover Options
File Name: Exotic Options—Foreign Takeover Options
Location: Modeling Toolkit | Exotic Options | Foreign Takeover Options
Brief Description: Computing a foreign takeover option, where the holder has the right to purchase some foreign exchange at a prespecified rate for the purposes of a takeover or acquisition of a foreign firm
Requirements: Modeling Toolkit
Modeling Toolkit Function Used: B2TakeoverFXOption
In a Foreign Takeover option with a foreign exchange element, if a successful takeover ensues (if the value of the foreign firm denominated in foreign currency is less than the foreign currency units required), the option holder has the right to purchase the number of foreign units at the predetermined strike price (denominated in exchange rates of the domestic currency to the foreign currency) at the option expiration date. Please see Figure 52.1.
FIGURE 52.1 Foreign exchange takeover options
53. Exotic Options—Forward Start Options
File Name: Exotic Options—Forward Start
Location: Modeling Toolkit | Exotic Options | Forward Start
Brief Description: Computing the value of an option that technically starts in the future, where the strike price is a percentage of the asset price in the future when the option starts
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2ForwardStartCallOption, B2ForwardStartPutOption
Forward Start options (Figure 53.1) start at the money or proportionally in or out of the money after some time in the future (the time to forward start). These options sometimes are used in employee stock options, where a grant is provided now but the strike price depends on the asset or stock price at some time in the future and is proportional to the stock price. The Alpha variable measures the proportionality of the option being in or out of the money. Alpha = 1 means the option starts at the money or that the strike is set exactly as the asset price at the end of the time to forward start. Alpha < 1 means that the call option starts (1–Alpha) percent in the money, or (1–Alpha) out of the money for puts. Conversely, for Alpha > 1, the call option starts (Alpha–1) out of the money and puts start (Alpha–1) in the money.
FIGURE 53.1 Forward start options
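For the no-dividend case, the forward start call has a compact closed form that follows from the homogeneity of Black-Scholes: the option granted at the forward start date is worth a fixed multiple of the then-prevailing stock price, so today's value is that multiple applied to today's price. A minimal sketch with hypothetical inputs:

import math
from statistics import NormalDist

N = NormalDist().cdf

def forward_start_call(S, alpha, t1, T, r, sigma):
    """No-dividend forward start call; strike set at alpha * S_t1 at time t1."""
    tau = T - t1                        # life of the option once it starts
    d1 = (math.log(1.0 / alpha)
          + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return S * (N(d1) - alpha * math.exp(-r * tau) * N(d2))

# alpha = 1: the strike is set at the money on the forward start date.
print(f"{forward_start_call(100, 1.0, 0.25, 1.0, 0.05, 0.3):.2f}")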
54. Exotic Options—Futures and Forward Options
File Name: Exotic Options—Futures Options
Location: Modeling Toolkit | Exotic Options | Futures and Forward Options
Brief Description: Applying the same generalities as the Black-Scholes model, but the underlying asset is a futures or forward contract, not a stock
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2FuturesForwardsCallOption, B2FuturesForwardsPutOption
The Futures option (Figure 54.1) is similar to a regular option, but the underlying asset is a futures or forward contract. Be careful here; the analysis cannot be solved using a Generalized Black-Scholes-Merton model. In many cases, options are traded on futures. A put is the option to sell a futures contract, and a call is the option to buy a futures contract. For both, the option strike price is the specified futures price at which the future is traded if the option is exercised. A futures contract is a standardized contract, typically traded on a futures exchange, to buy or sell a certain underlying instrument at a certain date in the future, at a prespecified price. The future date is called the delivery date or final settlement date. The preset price is called the futures price. The price of the underlying asset
FIGURE 54.1 Futures options
The price of the underlying asset on the delivery date is called the settlement price; it normally converges toward the futures price as the delivery date approaches. A futures contract gives the holder the obligation to buy or sell, which differs from an options contract, which gives the holder the right but not the obligation to buy or sell. In other words, the owner of an options contract may choose whether to exercise: an American-style option can be exercised on or before the expiration date, whereas a European option can be exercised only at expiration. In its fixed settlement date, a futures contract is more like a European option. Both parties of a futures contract must fulfill the contract on the settlement date: the seller delivers the commodity to the buyer, or, if it is a cash-settled future, cash is transferred from the futures trader who sustained a loss to the one who made a profit. To exit the commitment prior to the settlement date, the holder of a futures position has to offset the position, either by selling a long position or by buying back a short position, effectively closing out the futures position and its contract obligations.
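As a sketch of how such futures options are typically valued, the Black (1976) model below discounts at the risk-free rate and treats the futures price itself as the underlying, which is equivalent to the generalized model with a zero cost of carry. This is an illustrative stand-in, not necessarily the exact implementation behind B2FuturesForwardsCallOption.

import math

def N(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black76(F, X, T, r, v, is_call=True):
    """Black (1976) model for options on futures and forwards."""
    d1 = (math.log(F / X) + (v**2 / 2.0) * T) / (v * math.sqrt(T))
    d2 = d1 - v * math.sqrt(T)
    if is_call:
        return math.exp(-r * T) * (F * N(d1) - X * N(d2))
    return math.exp(-r * T) * (X * N(-d2) - F * N(-d1))

print(black76(F=100, X=95, T=0.5, r=0.05, v=0.25))                 # call on a future
print(black76(F=100, X=95, T=0.5, r=0.05, v=0.25, is_call=False))  # put on a future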
55. Exotic Options—Gap Options
File Name: Exotic Options—Gap Options
Location: Modeling Toolkit | Exotic Options | Gap Options
Brief Description: Valuing gap options, where there are two strike prices with respect to one underlying asset, the first strike acting like a barrier and the second strike price coming into play when that barrier is breached
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2GapCallOption, B2GapPutOption
Gap options are similar to Barrier options and Two Asset Correlated options in the sense that the call option is knocked in only when the underlying asset exceeds the reference Strike Price 1, at which point the option pays the asset price less Strike Price 2. Similarly, the put option is knocked in only if the underlying asset falls below the reference Strike Price 1, providing a payoff of Strike Price 2 less the underlying asset. Please see Figure 55.1.
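A minimal sketch of the gap call closed form (following Reiner and Rubinstein, 1991) is shown below; Strike Price 1 triggers the payoff and Strike Price 2 sets its size. Because the knocked-in payoff S − X2 can be negative, a gap call can be worth less than a vanilla call. The parameter names are illustrative, and b is the cost of carry.

import math

def N(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gap_call(S, X1, X2, T, r, b, v):
    """Gap call: pays S - X2 at expiration if S > X1 (X1 triggers, X2 pays)."""
    d1 = (math.log(S / X1) + (b + v**2 / 2.0) * T) / (v * math.sqrt(T))
    d2 = d1 - v * math.sqrt(T)
    return S * math.exp((b - r) * T) * N(d1) - X2 * math.exp(-r * T) * N(d2)

# Trigger strike of 50 and payoff strike of 57: the option can knock in with a
# negative intrinsic payoff, so its expected value can even be slightly negative.
print(gap_call(S=50, X1=50, X2=57, T=0.5, r=0.09, b=0.09, v=0.20))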
FIGURE 55.1 Gap options
56. Exotic Options—Graduated Barrier Options
File Name: Exotic Options—Graduated Barriers
Location: Modeling Toolkit | Exotic Options | Graduated Barriers
Brief Description: Modeling Graduated Barrier options, which are similar to barrier options but with flexible and graduated payoffs, depending on how far above or below a barrier the asset ends up at maturity
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2GraduatedBarrierDownandInCall, B2GraduatedBarrierDownandOutCall, B2GraduatedBarrierUpandInPut, B2GraduatedBarrierUpandOutPut
Graduated or Soft Barrier options are similar to standard Barrier options except that the barriers are no longer single static values but a graduated range between the lower and upper barriers, and the option is knocked in or out of the money proportionally. Both the upper and lower barriers should be either above (for up and in or up and out options) or below (for down and in or down and out options) the starting stock price or asset value. For instance, in the down and in call option, the instrument becomes knocked in, or live, at expiration if and only if the asset or stock value breaches the lower barrier (the asset value goes below the barriers). If the option to be valued is a down and in call,
then both the upper and lower barriers should be set below the starting stock price or asset value, providing a collar of graduated prices. For instance, if the upper and lower barriers are $90 and $80 and the asset price ends up at $89, a down and out option will be knocked out of 10% of its value. Standard barrier options are more difficult to delta hedge when the asset value and the barriers are close to each other; graduated barrier options are more appropriate for delta hedges, carrying less delta risk and gamma risk. Please see Figure 56.1.
FIGURE 56.1 Graduated barrier options
57. Exotic Options—Index Options
File Name: Exotic Options—Index Options
Location: Modeling Toolkit | Exotic Options | Index Options
Brief Description: Understanding index options, which are similar to regular plain vanilla options and can be solved using the Black-Scholes model, with the only difference being that the underlying asset is an index rather than a stock
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2StockIndexCallOption, B2StockIndexPutOption, B2GeneralizedBlackScholesCall, B2GeneralizedBlackScholesPut
The Index option (Figure 57.1) is similar to a regular option, but the underlying asset is a reference stock index, such as the Standard & Poor’s 500. The analysis can be solved using a Generalized Black-Scholes-Merton Model as well.
FIGURE 57.1 Index options
58. Exotic Options—Inverse Gamma Out-of-the-Money Options
File Name: Exotic Options—Inverse Gamma Out-of-the-Money Options
Location: Modeling Toolkit | Exotic Options | Inverse Gamma Out-of-the-Money Options
Brief Description: Analyzing options using an inverse gamma distribution rather than the typical normal-lognormal assumptions. This type of analytical option model is important for extremely in- or out-of-the-money options
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2InverseGammaCallOption, B2InverseGammaPutOption
This model computes the value of European call and put options using an Inverse Gamma distribution rather than the standard normal distribution. This distribution accounts for the more peaked distribution of asset returns and provides better estimates for deep out-of-the-money options. The traditional Generalized Black-Scholes-Merton model is also provided as a benchmark. Please see Figure 58.1.
FIGURE 58.1 Inverse gamma option
59. Exotic Options—Jump-Diffusion Options
File Name: Exotic Options—Jump Diffusion
Location: Modeling Toolkit | Exotic Options | Jump Diffusion
Brief Description: Assuming the underlying asset in an option follows a Poisson jump-diffusion process instead of a random-walk Brownian motion, which is applicable for underlying assets such as oil and gas commodities and the price of electricity
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2MertonJumpDiffusionCall, B2MertonJumpDiffusionPut
A jump-diffusion option is similar to a regular option except that, instead of assuming the underlying asset follows a lognormal Brownian motion process, the underlying here follows a Poisson jump-diffusion process. That is, stock or asset prices jump discretely several times per year (as observed from history), and cumulatively these jumps explain a certain percentage of the total volatility of the asset. Please see Figure 59.1.
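One common simplified closed form for this model (Merton, 1976, in the form popularized by standard references such as Haug) weights Black-Scholes values by the Poisson probability of observing i jumps. Here lambda is the expected number of jumps per year and gamma is the fraction of the total variance explained by the jumps; the toolkit's B2MertonJumpDiffusion functions may be parameterized differently, so treat this as an assumption-laden sketch.

import math

def N(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, X, T, r, v):
    d1 = (math.log(S / X) + (r + v**2 / 2.0) * T) / (v * math.sqrt(T))
    d2 = d1 - v * math.sqrt(T)
    return S * N(d1) - X * math.exp(-r * T) * N(d2)

def merton_jump_call(S, X, T, r, v, lam, gamma, n_terms=50):
    """Poisson-weighted sum of Black-Scholes values, conditional on i jumps."""
    delta2 = gamma * v**2 / lam        # variance contributed by each jump
    z2 = v**2 - lam * delta2           # diffusion (non-jump) variance
    value = 0.0
    for i in range(n_terms):
        vi = math.sqrt(z2 + delta2 * (i / T))   # volatility given i jumps
        weight = math.exp(-lam * T) * (lam * T)**i / math.factorial(i)
        value += weight * bs_call(S, X, T, r, vi)
    return value

print(merton_jump_call(S=100, X=100, T=0.5, r=0.08, v=0.25, lam=2.0, gamma=0.25))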
FIGURE 59.1 Jump-Diffusion options
60. Exotic Options—Leptokurtic and Skewed Options
File Name: Exotic Options—Leptokurtic and Skewed Options
Location: Modeling Toolkit | Exotic Options | Leptokurtic and Skewed Options
Brief Description: Computing options where the underlying asset's returns are assumed to be skewed and leptokurtic, that is, to have fat tails and to lean to one side of the distribution rather than being symmetrical
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2AltDistributionCallOption, B2AltDistributionPutOption
This model computes European call and put options using the binomial lattice approach when the underlying distribution of stock returns is not normally distributed, is not symmetrical, and has additional kurtosis and skew. Be careful when using this model to account for a high or low skew and kurtosis: certain combinations of these two coefficients yield unsolvable results. The Black-Scholes results are also included, to benchmark the effects of high kurtosis and positively or negatively skewed distributions against the normal distribution assumption on asset returns. Please see Figure 60.1.
FIGURE 60.1 Leptokurtic options
61. Exotic Options—Lookback with Fixed Strike (Partial Time)
File Name: Exotic Options—Lookback with Fixed Strike Partial Time
Location: Modeling Toolkit | Exotic Options | Lookback Fixed Strike Partial Time
Brief Description: Computing the payoff on the option, the difference between the highest or lowest attained asset price and the strike, when the strike price is predetermined
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2FixedStrikePartialLookbackCall, B2FixedStrikePartialLookbackPut
In a Fixed Strike Option with Lookback Feature (Partial Time), the strike price is predetermined, and at expiration the call option pays the maximum asset price observed between the start of the lookback period and the maturity of the option, less the strike price (see Figure 61.1). Conversely, the put pays the strike price less the minimum asset price observed over that same window.
FIGURE 61.1 Lookback options with fixed strike (partial lookback time)
62. Exotic Options—Lookback with Fixed Strike
File Name: Exotic Options—Lookback with Fixed Strike
Location: Modeling Toolkit | Exotic Options | Lookback Fixed Strike
Brief Description: Computing the value of an option where the strike price is fixed but the value at expiration is based on the underlying asset's maximum and minimum values during the option's lifetime
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2FixedStrikeLookbackCall, B2FixedStrikeLookbackPut
In a Fixed Strike Option with Lookback feature (Figure 62.1), the strike price is predetermined, and at expiration the call option pays the maximum asset price observed during the lifetime of the option, less the strike price. Conversely, the put pays the strike price less the minimum asset price observed during the lifetime of the option.
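Because the payoff depends only on the running maximum or minimum and the fixed strike, a quick Monte Carlo sketch is easy to write and useful as a sanity check against the closed-form toolkit functions. Note that discrete monitoring (here, 252 steps) slightly understates the value of a continuously monitored lookback; all inputs are illustrative.

import numpy as np

def lookback_fixed_strike_mc(S0, X, T, r, v, n_paths=20_000, n_steps=252, seed=42):
    """Monte Carlo fixed-strike lookback: call pays max(S_max - X, 0),
    put pays max(X - S_min, 0), on risk-neutral GBM paths."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_steps = (r - v**2 / 2.0) * dt + v * np.sqrt(dt) * z
    paths = S0 * np.exp(np.cumsum(log_steps, axis=1))
    disc = np.exp(-r * T)
    call = disc * np.maximum(paths.max(axis=1) - X, 0).mean()
    put = disc * np.maximum(X - paths.min(axis=1), 0).mean()
    return call, put

print(lookback_fixed_strike_mc(S0=100, X=100, T=1.0, r=0.05, v=0.30))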
FIGURE 62.1 Lookback options with fixed strike
63. Exotic Options—Lookback with Floating Strike (Partial)
File Name: Exotic Options—Lookback with Floating Strike Partial Time
Location: Modeling Toolkit | Exotic Options | Lookback Floating Strike Partial Time
Brief Description: Computing the value of an option where the strike price is not fixed but floating, and the value at expiration is based on the underlying asset's maximum or minimum value, from the lookback inception time to maturity, as the purchase or sale price
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2FloatingStrikePartialLookbackCallonMin, B2FloatingStrikePartialLookbackPutonMax
In a Floating Strike Option with Lookback feature (Partial Time), the strike price is floating: at expiration, the call option allows the holder to purchase the underlying asset at the minimum price observed from inception to the end of the lookback time (see Figure 63.1). Conversely, the put allows the option holder to sell at the maximum asset price observed from inception to the end of the lookback time.
FIGURE 63.1 Lookback options with floating strike (partial lookback)
64. Exotic Options—Lookback with Floating Strike
File Name: Exotic Options—Lookback with Floating Strike
Location: Modeling Toolkit | Exotic Options | Lookback Floating Strike
Brief Description: Computing the value of an option where the strike price is not fixed but floating, and the value at expiration is based on the underlying asset's maximum and minimum values during the option's lifetime as the purchase or sale price
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2FloatingStrikeLookbackCallonMin, B2FloatingStrikeLookbackPutonMax
In a Floating Strike Option with Lookback feature (Figure 64.1), the strike price is floating: at expiration, the call option allows the holder to purchase the underlying asset at the minimum price observed during the life of the option. Conversely, the put allows the option holder to sell at the maximum asset price observed during the life of the option.
FIGURE 64.1 Lookback options with floating strike
65. Exotic Options—Min and Max of Two Assets
File Name: Exotic Options—Min and Max of Two Assets
Location: Modeling Toolkit | Exotic Options | Min and Max of Two Assets
Brief Description: Computing the value of an option where there are two underlying assets that are correlated and have different volatilities, and where the assets' values at expiration are used as the benchmark for determining the payoff
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2CallOptionOnTheMin, B2CallOptionOnTheMax, B2PutOptionOnTheMin, B2PutOptionOnTheMax
Options on Minimum or Maximum are used when there are two assets with different volatilities (see Figure 65.1). Either the maximum or the minimum value of the two assets at expiration is used in option exercise. For instance, a call option on the minimum implies that the payoff at expiration uses the lower of Asset 1's and Asset 2's prices against the strike price of the option.
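A Monte Carlo sketch of a call on the minimum of two correlated assets is shown below; the two terminal prices are generated from correlated standard normals, and the payoff uses the lower of the two. All inputs are illustrative.

import numpy as np

def option_on_min_mc(S1, S2, X, T, r, v1, v2, rho, n_paths=200_000, seed=7):
    """Call on the minimum of two correlated assets: max(min(S1_T, S2_T) - X, 0)."""
    rng = np.random.default_rng(seed)
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)  # correlate
    s1_T = S1 * np.exp((r - v1**2 / 2) * T + v1 * np.sqrt(T) * z1)
    s2_T = S2 * np.exp((r - v2**2 / 2) * T + v2 * np.sqrt(T) * z2)
    payoff = np.maximum(np.minimum(s1_T, s2_T) - X, 0)
    return np.exp(-r * T) * payoff.mean()

print(option_on_min_mc(S1=100, S2=105, X=98, T=0.5, r=0.05, v1=0.25, v2=0.35, rho=0.4))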
FIGURE 65.1 Options on the minimum and maximum of two assets
66. Exotic Options—Options on Options
File Name: Exotic Options—Options on Options
Location: Modeling Toolkit | Exotic Options | Options on Options
Brief Description: Computing the value of an option on another option, or a compound option, where the option provides the holder the right to buy or sell a subsequent option at the expiration of the first option
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2CompoundOptionsCallonCall, B2CompoundOptionsCallonPut, B2CompoundOptionsPutonCall, B2CompoundOptionsPutonPut
Options on Options, sometimes known as Compound Options, give the holder the right to buy (call on an option) or sell (put on an option) another option in the future (Figure 66.1). For instance, a put on call means that the holder has the right to sell a call option at some future date for a specified strike price (the strike price of the option on the option). The time over which this right to sell exists is called the maturity of the option on the option. The maturity
of the underlying means the maturity of the option to be bought or sold in the future, measured from now.
FIGURE 66.1 Compound options on options
67. Exotic Options—Option Collar
File Name: Exotic Options—Option Collar
Location: Modeling Toolkit | Exotic Options | Options Collar
Brief Description: Computing the call-put collar strategy, that is, short a call and long a put at different strike prices such that the hedge is costless and effective
Requirements: Modeling Toolkit
The call and put collar strategy requires that one stock be purchased, one call be sold, and one put be purchased (Figure 67.1). The idea is that the proceeds from the call sold are sufficient to cover the cost of the put purchased. Therefore, given a specific set of stock price, option maturity, risk-free rate, volatility, and dividend inputs, you can impute the required strike price of the call if you know which put to purchase (and its strike price), or the required strike price of the put if you know which call to sell (and its strike price).
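The imputation described above can be sketched numerically: fix the put strike, compute its premium, and solve for the call strike whose premium matches, so that the collar is costless. The sketch below uses Black-Scholes pricing with a continuous dividend yield q and a root finder; all input values are illustrative.

import math
from scipy.optimize import brentq

def N(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_price(S, X, T, r, q, v, is_call):
    d1 = (math.log(S / X) + (r - q + v**2 / 2.0) * T) / (v * math.sqrt(T))
    d2 = d1 - v * math.sqrt(T)
    if is_call:
        return S * math.exp(-q * T) * N(d1) - X * math.exp(-r * T) * N(d2)
    return X * math.exp(-r * T) * N(-d2) - S * math.exp(-q * T) * N(-d1)

def costless_collar_call_strike(S, put_strike, T, r, q, v):
    """Find the call strike whose premium matches the put premium (zero net cost)."""
    put_premium = bs_price(S, put_strike, T, r, q, v, is_call=False)
    f = lambda Xc: bs_price(S, Xc, T, r, q, v, is_call=True) - put_premium
    return brentq(f, S, 5 * S)   # search for the call strike above the spot

# Protect a $100 stock with a $90 put; sell a call at the imputed strike
print(costless_collar_call_strike(S=100, put_strike=90, T=1.0, r=0.05, q=0.0, v=0.30))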
FIGURE 67.1 Creating a call and put collar strategy
68. Exotic Options—Perpetual Options
File Name: Exotic Options—Perpetual Options
Location: Modeling Toolkit | Exotic Options | Perpetual Options
Brief Description: Computing the value of an American option that has a perpetual life, where the underlying is a dividend-paying asset
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2PerpetualCallOption, B2PerpetualPutOption
Perpetual call and put options are American options with continuous dividends that can be exercised at any time but have an infinite life. Clearly a perpetual European option (exercisable only at termination, which never arrives) has zero value; hence only American options are viable perpetual options. American closed-form approximations with 100-year maturities are also provided in the model to benchmark the results. Please see Figure 68.1.
FIGURE 68.1 Perpetual American options
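A sketch of the closed-form perpetual American option, as given in standard references (e.g., Haug, The Complete Guide to Option Pricing Formulas), is shown below. Here b is the cost of carry (b = r − q for a dividend yield q), and the call form assumes a dividend-paying asset, consistent with this chapter; verify against the toolkit's B2Perpetual functions before relying on it.

import math

def perpetual_american(S, X, r, b, v, is_call=True):
    """Closed-form perpetual American option (assumes b < r for the call)."""
    h = b / v**2 - 0.5
    root = math.sqrt(h**2 + 2 * r / v**2)
    if is_call:
        y1 = 0.5 - b / v**2 + root
        return X / (y1 - 1) * ((y1 - 1) / y1 * S / X) ** y1
    y2 = 0.5 - b / v**2 - root
    return X / (1 - y2) * ((y2 - 1) / y2 * S / X) ** y2

# Perpetual American call and put on an asset with a 4% yield (b = 0.08 - 0.04)
print(perpetual_american(S=100, X=100, r=0.08, b=0.04, v=0.30))
print(perpetual_american(S=100, X=100, r=0.08, b=0.04, v=0.30, is_call=False))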
69. Exotic Options—Range Accruals (Fairway Options)
File Name: Real Options Models—Range Accruals
Location: Modeling Toolkit | Real Options Models | Range Accruals
Brief Description: Computing the value of Fairway options or Range Accrual options, where the option pays a specified return if the underlying asset is within a range but pays something else if it is outside the range at any time during its maturity
Requirements: Modeling Toolkit and Real Options SLS
A Range Accrual option is also called a Fairway option. Here the option pays a certain return if the asset value stays within a certain range (between the upper and lower barriers) but pays a different amount or return if the asset value falls outside this range at any time before and up to maturity. The name Fairway option comes from golf: if the ball stays within the fairway (a narrow path), it is in play; if it goes outside, a penalty might be imposed (in this case, a lower return). Such options and instruments can be solved using the Real Options SLS software, as seen in Figure 69.1, using the Custom Option approach, where we enter the terminal equation as:

If (Asset >= LowerBarrier & Asset <= UpperBarrier, Asset * (1 + InsideReturn), Asset * (1 + OutsideReturn))

If we wish to solve a European option, we enter this as the intermediate equation:

OptionOpen

If we are solving an American option, the intermediate equation is:

If (Asset >= LowerBarrier & Asset <= UpperBarrier, Max(Asset * (1 + InsideReturn), OptionOpen), Max(Asset * (1 + OutsideReturn), OptionOpen))
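The terminal equation above can be mirrored in a few lines of Python to check the payoff logic at maturity (the lattice itself, with the intermediate OptionOpen logic, is handled by Real Options SLS); the input values are illustrative.

def fairway_terminal_payoff(asset, lower, upper, inside_ret, outside_ret):
    """Terminal payoff mirroring the SLS custom equation: the asset accrues
    inside_ret within the barriers, outside_ret otherwise."""
    if lower <= asset <= upper:
        return asset * (1 + inside_ret)
    return asset * (1 + outside_ret)

print(fairway_terminal_payoff(asset=100, lower=90, upper=110,
                              inside_ret=0.08, outside_ret=0.01))  # in range -> 108.0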
FIGURE 69.1 Range accruals or fairway options
70. Exotic Options—Simple Chooser
File Name: Exotic Options—Simple Chooser
Location: Modeling Toolkit | Exotic Options | Simple Chooser
Brief Description: Computing the value of an option where the holder can decide whether it is a call or a put at some future time period; this option is similar to purchasing a call and a put together but costs less
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2ChooserBasicOption, B2GeneralizedBlackScholesCall, B2GeneralizedBlackScholesPut
A simple Chooser option (Figure 70.1) gives the holder the ability to choose whether the option is a call or a put within the chooser time. Regardless of the choice, the option has the same contractual strike price and maturity. A chooser option is typically cheaper than purchasing both a call and a put together while providing a similar level of hedge, but the strike prices and maturities of both embedded options are identical. The Complex Chooser option allows for different strike prices and maturities.
FIGURE 70.1 Simple chooser option
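Because the holder simply takes the more valuable of the embedded call and put at the choose date, a direct way to sketch the value is to simulate the asset to the choose date, apply Black-Scholes for the remaining life, and discount back. The inputs below are illustrative.

import numpy as np
from scipy.stats import norm

def bs(S, X, tau, r, v, is_call):
    d1 = (np.log(S / X) + (r + v**2 / 2) * tau) / (v * np.sqrt(tau))
    d2 = d1 - v * np.sqrt(tau)
    if is_call:
        return S * norm.cdf(d1) - X * np.exp(-r * tau) * norm.cdf(d2)
    return X * np.exp(-r * tau) * norm.cdf(-d2) - S * norm.cdf(-d1)

def simple_chooser_mc(S0, X, t_choose, T, r, v, n_paths=200_000, seed=11):
    """At t_choose the holder keeps whichever of the call or put (same strike X,
    same final maturity T) is worth more; discount that value back to today."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    S_t = S0 * np.exp((r - v**2 / 2) * t_choose + v * np.sqrt(t_choose) * z)
    tau = T - t_choose
    value_at_choice = np.maximum(bs(S_t, X, tau, r, v, True),
                                 bs(S_t, X, tau, r, v, False))
    return np.exp(-r * t_choose) * value_at_choice.mean()

print(simple_chooser_mc(S0=50, X=50, t_choose=0.25, T=0.5, r=0.08, v=0.25))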
71. Exotic Options—Spread on Futures
File Name: Exotic Options—Spread on Futures
Location: Modeling Toolkit | Exotic Options | Spread on Futures
Brief Description: Computing the value of an option where the underlying assets are two different futures contracts, whose spread (the difference between the futures) is used as the benchmark for determining the value of the option
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2FuturesSpreadCall, B2FuturesSpreadPut
A Spread on Futures option (Figure 71.1) is an option whose payoff depends on the difference between two futures values at expiration. That is, the spread is Futures 1 − Futures 2, the call payoff is Spread − Strike, and the put payoff is Strike − Spread.
FIGURE 71.1 Spread on futures options
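A quick Monte Carlo sketch of the spread call is shown below; under the risk-neutral measure each futures price is driftless, and the two contracts are correlated. The inputs are illustrative.

import numpy as np

def futures_spread_call_mc(F1, F2, X, T, r, v1, v2, rho, n_paths=200_000, seed=3):
    """Call on the spread of two futures: max((F1_T - F2_T) - X, 0),
    with driftless (zero cost of carry) correlated futures prices."""
    rng = np.random.default_rng(seed)
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
    f1_T = F1 * np.exp(-v1**2 / 2 * T + v1 * np.sqrt(T) * z1)
    f2_T = F2 * np.exp(-v2**2 / 2 * T + v2 * np.sqrt(T) * z2)
    payoff = np.maximum((f1_T - f2_T) - X, 0)
    return np.exp(-r * T) * payoff.mean()

print(futures_spread_call_mc(F1=28, F2=20, X=7, T=0.25, r=0.05,
                             v1=0.29, v2=0.36, rho=0.42))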
72. Exotic Options—Supershare Options
File Name: Exotic Options—Supershare Options
Location: Modeling Toolkit | Exotic Options | Supershares
Brief Description: Computing the value of a hybrid of a double-barrier fairway option that pays a percentage proportional to the level at which it is in the money within these barriers
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2SuperShareOptions
Typically, Supershare options are traded or embedded in supershare funds. These options are related to down-and-out, up-and-out double-barrier options: the option has value only if the stock or asset price stays between the upper and lower barriers, and at expiration it provides a payoff equal to the stock or asset price divided by the lower strike price (S/X Lower). Please see Figure 72.1.
FIGURE 72.1 Supershare options
73. Exotic Options—Time Switch Options
File Name: Exotic Options—Time Switch Options
Location: Modeling Toolkit | Exotic Options | Time Switch
Brief Description: Computing the value of an option whose value depends on how many times a barrier is breached during the life of the option
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2TimeSwitchOptionCall, B2TimeSwitchOptionPut
In a Time Switch option (Figure 73.1), the holder receives the Accumulated Amount × Time Steps for each period in which the asset price exceeds the strike price for a call option (or falls below the strike price for a put option). The time step is how often the asset price is checked to see whether the strike threshold has been breached (typically, for a one-year option with 252 trading days, set DT to 1/252). Sometimes the option has already accumulated amounts in the past, or carries a minimum guaranteed payment agreed to in the contract, measured by the number of time units already fulfilled (typically set to 0).
FIGURE 73.1 Time switch options
74. Exotic Options—Trading-Day Corrections
File Name: Exotic Options—Trading Day Corrections
Location: Modeling Toolkit | Exotic Options | Trading Day Corrections
Brief Description: Computing the value of plain vanilla call and put options corrected for the number of trading days and calendar days left in their maturity
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2TradingDayAdjustedCall, B2TradingDayAdjustedPut, B2GeneralizedBlackScholesCall, B2GeneralizedBlackScholesPut
An option with a trading-day correction takes a typical option and corrects it for varying volatilities; specifically, volatility tends to be higher on trading days than on nontrading days. The trading-days ratio is the number of trading days left until maturity divided by the total number of trading days per year (typically between 250 and 252). The calendar-days ratio is the number of calendar days left until maturity divided by the total number of days per year (365). Please see Figure 74.1. Typically, the adjustment lowers the option value. In addition, if the trading-days ratio and the calendar-days ratio are identical, the adjustment is zero and the option value reverts to the generalized Black-Scholes result, because all of the remaining days are assumed to be trading days.
FIGURE 74.1 Options with trading-day corrections
75. Exotic Options—Two-Asset Barrier Options
File Name: Exotic Options—Two Asset Barrier
Location: Modeling Toolkit | Exotic Options | Two Asset Barrier
Brief Description: Computing the value of an option with two underlying assets, one of which is a reference benchmark and the other of which is used to compute the value of the option payoff
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2TwoAssetBarrierDownandInCall, B2TwoAssetBarrierUpandInCall, B2TwoAssetBarrierDownandInPut, B2TwoAssetBarrierUpandInPut, B2TwoAssetBarrierDownandOutCall, B2TwoAssetBarrierUpandOutCall, B2TwoAssetBarrierDownandOutPut, B2TwoAssetBarrierUpandOutPut
Two-Asset Barrier options become valuable, or are knocked in the money, only if a barrier (upper or lower) is breached (or, for knock-out variants, not breached), and the payout is in the form of an option on the first underlying asset. In general, a barrier option limits the profit potential of a regular option and hence costs less than a regular option without barriers. A barrier option thus is a better bet for companies trying to hedge a specific price movement, as it tends to be cheaper. Please see Figure 75.1. As an example, in the up and in call option, the instrument is knocked in if the asset breaches the upper barrier but is worthless if this barrier is never breached. The payoff on this option is the value of the first underlying asset (Asset 1) less the strike price, while whether the barrier is breached depends on whether the second, reference asset (Asset 2) crosses the barrier. The monitoring Periodicity refers to how often during the life of the option the asset or stock value is checked to see whether it breaches a barrier. For example, entering 12 implies monthly monitoring, 52 weekly, 252 daily trading, 365 daily calendar, and 1,000,000 continuous monitoring.
FIGURE 75.1 Two-asset barrier options
76. Exotic Options—Two Asset Cash or Nothing
File Name: Exotic Options—Two Asset Cash or Nothing
Location: Modeling Toolkit | Exotic Options | Two Asset Cash
Brief Description: Computing the value of an option that pays a prespecified amount of cash as long as the option stays in the money at expiration, regardless of how valuable the intrinsic value is at maturity
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2TwoAssetCashOrNothingCall, B2TwoAssetCashOrNothingPut, B2TwoAssetCashOrNothingUpDown, B2TwoAssetCashOrNothingDownUp
Cash-or-nothing options pay a prespecified amount of cash at expiration as long as the option is in the money, without regard to how deeply. In a two-asset cash-or-nothing option, both assets must be in the money before cash is paid out: for call options, both asset values must be above their respective strike prices, and for puts, both must be below their respective strike prices. For the up-down option, the first asset must be above the first strike price
and the second asset must be below the second strike price. Conversely, the down-up option pays out cash only if, at expiration, the first asset is below the first strike and the second asset is above the second strike. Please see Figure 76.1.
FIGURE 76.1 Two-asset cash-or-nothing options
77. Exotic Options—Two Correlated Assets Option
File Name: Exotic Options—Two Correlated Assets
Location: Modeling Toolkit | Exotic Options | Two Asset Correlated
Brief Description: Computing the value of an option that depends on two assets: a benchmark asset that determines whether the option is in the money and another asset that determines the payoff on the option at expiration
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2TwoAssetCorrelationCall, B2TwoAssetCorrelationPut
The Two Correlated Asset option uses two underlying assets, Asset 1 and Asset 2. Typically, Asset 1 is the benchmark asset (e.g., a stock index or a comparable market-traded stock).
FIGURE 77.1 Two correlated asset options
If at expiration Asset 1's value exceeds Strike 1's value, the option is knocked in the money, and the payoff on the call option is Asset 2 − Strike 2 (for the put, the payoff is Strike 2 − Asset 2, and Asset 1 must be below Strike 1). Please see Figure 77.1. A high positive correlation between the reference and underlying assets increases the option value: the stronger the comovement, the higher the chance that when the reference asset is in the money, the underlying asset's payoff is in the money as well. Negative correlations reduce the option value, because even when the reference asset is in the money, the underlying asset's payoff is likely to be only slightly in the money or completely out of the money. Correlation is typically restricted to between −0.9999 and +0.9999; with perfect correlation (either positive or negative), there is no point in issuing such an option, as a regular option model will suffice. Two Correlated Asset options sometimes are used as performance-based payoff options. For instance, the first, or reference, asset can be a company's revenues, its profit margin, an external index such as the Standard & Poor's 500, or some reference price in the market such as the price of gold. The option is then live only if the benchmark value exceeds a prespecified threshold. This European option is exercisable only at expiration. To model an American option (exercisable at any time up to and including expiration) or a Bermudan option (exercisable at specific periods only, with blackout and vesting periods where the option cannot be exercised), use the Real Options SLS software.
78. Exotic Options—Uneven Dividend Payments Option
File Name: Exotic Options Uneven Dividend Payments
Location: Modeling Toolkit | Exotic Options | Uneven Dividends
Brief Description: Computing the value of plain vanilla call and put options when the dividend stream of the underlying asset comes in uneven payments over time
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2GeneralizedBlackScholesCallCashDividends, B2GeneralizedBlackScholesPutCashDividends, B2GeneralizedBlackScholesCall, B2GeneralizedBlackScholesPut
Sometimes dividends are paid in various lump sums that are not consistent throughout the life of the option. Accounting for only the average dividend yield rather than the actual uneven cash flows will yield incorrect valuations in the Black-Scholes paradigm. Please see Figure 78.1.
FIGURE 78.1 Options with uneven dividend payments
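One standard way to handle discrete, uneven cash dividends, sketched below, is the escrowed-dividend adjustment: subtract the present value of the dividends falling within the option's life from the spot price and then apply Black-Scholes. This is an illustrative approximation; the toolkit's cash-dividend functions may apply a different adjustment.

import math

def N(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call_with_cash_dividends(S, X, T, r, v, dividends):
    """Escrowed-dividend approximation for a European call.
    `dividends` is a list of (time_in_years, cash_amount) pairs."""
    pv_div = sum(d * math.exp(-r * t) for t, d in dividends if t <= T)
    S_adj = S - pv_div                     # spot net of escrowed dividends
    d1 = (math.log(S_adj / X) + (r + v**2 / 2) * T) / (v * math.sqrt(T))
    d2 = d1 - v * math.sqrt(T)
    return S_adj * N(d1) - X * math.exp(-r * T) * N(d2)

# Uneven dividends: $1.00 in 3 months and $0.50 in 9 months
print(call_with_cash_dividends(S=100, X=95, T=1.0, r=0.05, v=0.25,
                               dividends=[(0.25, 1.00), (0.75, 0.50)]))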
79. Exotic Options—Writer Extendible Option
File Name: Exotic Options—Writer Extendible
Location: Modeling Toolkit | Exotic Options | Writer Extendible
Brief Description: Computing the value of options that can be extended beyond the original maturity date if the option is out of the money, providing an insurance policy for the option holder and thereby typically costing more than conventional options
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2WriterExtendibleCallOption, B2WriterExtendiblePutOption
Writer Extendible options can be seen as insurance policies in case the option finishes worthless at the original maturity. Specifically, the call or put option is extended automatically beyond the initial maturity date to an extended date with a new strike price, assuming that at the original maturity the option is out of the money and worthless. This extendibility provides a safety net of additional time for the holder of the option. Please see Figure 79.1.
FIGURE 79.1 Writer extendible options
80. Forecasting—Data Diagnostics
File Name: Forecasting—Data Diagnostics
Location: Modeling Toolkit | Forecasting | Data Diagnostics
Brief Description: Using Risk Simulator to run diagnostics on your data before generating forecast models, including checking for heteroskedasticity, nonlinearity, outliers, specification errors, micronumerosity, stationarity and stochastic properties, normality and sphericity of the errors, and multicollinearity
Requirements: Modeling Toolkit, Risk Simulator
This example model provides a sample data set on which we can run Risk Simulator's Diagnostic Tool to determine the econometric properties of the data. The diagnostics include checking the data for heteroskedasticity, nonlinearity, outliers, specification errors, micronumerosity, stationarity and stochastic properties, normality and sphericity of the errors, and multicollinearity. Each test is described in more detail in its respective report.
PROCEDURE
To run the analysis, follow these instructions:
1. Go to the Time-Series Data worksheet and select the data, including the variable names (cells C5:H55), as seen in Figure 80.1.
2. Click on Risk Simulator | Tools | Diagnostic Tool.
3. Check the data and select the dependent variable from the drop-down menu. Click OK when finished.

FIGURE 80.1 Running a diagnostic analysis on your data set

Spend some time reading through the reports generated from this diagnostic tool. A common violation in forecasting and regression analysis is heteroskedasticity, that is, the variance of the errors increases over time. Visually, the width of the vertical data fluctuations increases, or fans out, over time, and typically the coefficient of determination (the R-squared coefficient) drops significantly when heteroskedasticity exists. If the variance of the dependent variable is not constant, then the error's variance will not be constant. Unless the heteroskedasticity of the dependent variable is pronounced, its effect will not be severe: the least-squares estimates will still be unbiased, and the estimates of the slope and intercept will be either normally distributed, if the errors are normally distributed, or at least asymptotically normally distributed (as the number of data points becomes large) if the errors are not normally distributed. The estimates for the variance of the slope and the overall variance will be inaccurate, but the inaccuracy is not likely to be substantial if the independent-variable values are symmetric about their mean.

If the number of data points is small (micronumerosity), it may be difficult to detect assumption violations. With small samples, assumption violations such as nonnormality or heteroskedasticity of variances are difficult to detect even when they are present. With a small number of data points, linear regression offers less protection against violation of assumptions: it may be hard to determine how well the fitted line matches the data or whether a nonlinear function would be more appropriate. Even if none of the test assumptions is violated, a linear regression on a small number of data points may not have sufficient power to detect a significant difference between the slope and zero, even if the slope is nonzero. The power depends on the residual error, the observed variation in the independent variable, the selected significance (alpha) level of the test, and the number of data points. Power decreases as the residual variance increases, decreases as the significance level is made more stringent, increases as the variation in the observed independent variable increases, and increases as the number of data points increases.
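Outside the toolkit, the heteroskedasticity check described above can be sketched with a standard test such as Breusch-Pagan. The data below are synthetic stand-ins for the worksheet's variables, constructed so that the error variance grows with the first regressor.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

# Illustrative data standing in for the Time-Series Data worksheet (C5:H55)
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
beta = np.array([1.5, -0.8, 0.3])
y = 2 + X @ beta + rng.normal(scale=1 + np.abs(X[:, 0]), size=50)  # fanning errors

exog = sm.add_constant(X)                       # add the intercept column
resid = sm.OLS(y, exog).fit().resid
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(resid, exog)
print(f"Breusch-Pagan LM p-value: {lm_pvalue:.4f}")  # small p -> heteroskedastic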
Values may not be identically distributed because of the presence of outliers, which are anomalous values in the data. Outliers may have a strong influence over the fitted slope and intercept, giving a poor fit to the bulk of the data points, and they tend to increase the estimate of residual variance, lowering the chance of rejecting the null hypothesis (i.e., creating higher prediction errors). They may be due to recording errors, which may be correctable, or they may be due to the dependent-variable values not all being sampled from the same population. Apparent outliers may also be due to the dependent-variable values being from the same, but nonnormal, population. However, a point may be an unusual value in either an independent or a dependent variable without necessarily being an outlier in the scatter plot. In regression analysis, the fitted line can be highly sensitive to outliers: least-squares regression is not resistant to outliers, and thus neither is the fitted-slope estimate. A point vertically removed from the other points can cause the fitted line to pass close to it instead of following the general linear trend of the rest of the data, especially if the point is relatively far horizontally from the center of the data. Great care should therefore be taken when deciding whether outliers should be removed. Although in most cases the regression results look better when outliers are removed, a priori justification must first exist. For instance, if you are regressing the performance of a particular firm's stock returns, outliers caused by downturns in the stock market should be included; these are not truly outliers, as they are inevitabilities in the business cycle. Forgoing these outliers and using the regression equation to forecast your retirement fund based on the firm's stock will yield incorrect results at best. In contrast, suppose the outliers are caused by a single nonrecurring business condition (e.g., a merger and acquisition) and such structural changes are not forecast to recur; then these outliers should be removed and the data cleansed prior to running a regression analysis. The analysis here only identifies outliers; it is up to the user to determine whether they should remain or be excluded.

Sometimes a nonlinear relationship between the dependent and independent variables is more appropriate than a linear relationship. In such cases, running a linear regression will not be optimal. If the linear model is not the correct form, the slope and intercept estimates and the fitted values from the linear regression will be biased, and the fitted slope and intercept estimates will not be meaningful. Over a restricted range of independent or dependent variables, nonlinear models may be well approximated by linear models (this is, in fact, the basis of linear interpolation), but for accurate prediction, a model appropriate to the data should be selected. A nonlinear transformation should be applied to the data before running a regression. One simple approach is to take the natural logarithm of the independent variable (other approaches include taking its square root or raising it to the second or third power) and then run a regression or forecast using the nonlinearly transformed data. The results from running these tests are seen in Figure 80.2.

Another typical issue when forecasting time-series data is whether the variables are truly independent of each other over time. Dependent-variable values collected over a time series may be autocorrelated. For serially correlated dependent-variable values, the estimates of the slope and intercept will be unbiased, but the estimates of their forecasts and variances will not be reliable, and hence the validity of certain statistical goodness-of-fit tests will be flawed. For instance, interest rates, inflation rates, sales, revenues, and many other time-series data typically are autocorrelated, where the value in the current period is related to the value in a previous period, and so forth. (Clearly, the inflation rate in March is related to February's level, which in turn is related to January's level, etc.) Ignoring such blatant relationships will yield biased and less accurate forecasts.
FIGURE 80.2 Heteroskedasticity, micronumerosity, outliers, and nonlinearity results
In such events, an autocorrelated regression model or an ARIMA model may be better suited (you can run an ARIMA model through the menu selection Risk Simulator | Forecasting | ARIMA); please refer to the advanced ARIMA forecasting chapter for details. Finally, the autocorrelation functions of a nonstationary series tend to decay slowly (see the Nonstationary report). If the autocorrelation AC(1) is nonzero, the series is first-order serially correlated. If AC(k) dies off more or less geometrically with increasing lag, the series follows a low-order autoregressive process; if AC(k) drops to zero after a small number of lags, the series follows a low-order moving-average process. The partial autocorrelation PAC(k) measures the correlation of values that are k periods apart after removing the correlation from the intervening lags; if the pattern of autocorrelation can be captured by an autoregression of order less than k, then the partial autocorrelation at lag k will be close to zero. The Ljung-Box Q-statistics and their p-values at lag k test the null hypothesis that there is no autocorrelation up to order k. The dotted lines in the plots of the autocorrelations are the approximate two-standard-error bounds: if the autocorrelation is within these bounds, it is not significantly different from zero at the 5% significance level. Autocorrelation measures the relationship of the dependent Y variable to its own past values. Distributive Lags, in contrast, are time-lag relationships between the dependent Y variable and different independent X variables. For instance, the movement and direction of mortgage rates tend to follow the Federal Funds Rate but at a time lag (typically one to three months). Sometimes time lags follow cycles and seasonality (e.g., ice cream sales tend to peak during the summer months and hence are related to last summer's sales, 12 months in the past). The distributive lag analysis in Figure 80.3 shows how the dependent variable is related to each of the independent variables at various time lags, when all lags are considered simultaneously, to determine which time lags are statistically significant and should be considered.

Another requirement in running a regression model is the assumption of normality and sphericity of the error term. If the assumption of normality is violated or outliers are present, then the linear regression goodness-of-fit test may not be the most powerful or informative test available, and this could mean the difference between detecting a linear fit or not. If the errors are not independent and not normally distributed, the data might be autocorrelated or suffer from nonlinearities or other, more destructive errors. Independence of the errors can also be detected in the heteroskedasticity tests. The Normality test performed on the errors is a nonparametric test, which makes no assumptions about the specific shape of the population from which the sample is drawn, allowing smaller sample data sets to be analyzed. This test evaluates the null hypothesis that the sample errors were drawn from a normally distributed population against the alternate hypothesis that they were not (Figure 80.4). If the calculated D Statistic is greater than or equal to the D Critical values at various significance levels, reject the null hypothesis and accept the alternate hypothesis (the errors are not normally distributed).
Otherwise, if the D Statistic is less than the D Critical value, do not reject the null hypothesis (the errors are normally distributed). This test relies on two cumulative frequencies: one derived from the sample data set, the second derived from a theoretical distribution based on the mean and standard deviation of the sample data.
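The D statistic described here corresponds to a Kolmogorov-Smirnov-type test, which can be sketched as follows (synthetic errors stand in for the model residuals). Strictly speaking, because the mean and standard deviation are estimated from the same sample, Lilliefors-corrected critical values (statsmodels.stats.diagnostic.lilliefors) are more appropriate.

import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(1)
errors = rng.standard_normal(50)           # stand-in for regression residuals
d_stat, p_value = kstest(errors, 'norm',
                         args=(errors.mean(), errors.std(ddof=1)))
print(f"D statistic = {d_stat:.4f}, p-value = {p_value:.4f}")
# A large D (small p-value) rejects normality of the errors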
FIGURE 80.3 Autocorrelation and distributive lags
FIGURE 80.4 Normality of errors
Sometimes certain types of time-series data cannot be modeled using any method except a stochastic process, because the underlying events are stochastic in nature. For instance, you cannot adequately model and forecast stock prices, interest rates, the price of oil, and other commodity prices using a simple regression model, because these variables are highly uncertain and volatile and do not follow a predefined static rule of behavior; in other words, the processes are not stationary. Stationarity is checked here using the Runs Test, while another visual clue is found in the Autocorrelation report (the ACF, or autocorrelation function, of a nonstationary series tends to decay slowly). A stochastic process is a sequence of events or paths generated by probabilistic laws: random events can occur over time but are governed by specific statistical and probabilistic rules. The main stochastic processes include the Random Walk (Brownian motion), mean reversion, and jump diffusion. These processes can be used to forecast a multitude of variables that seemingly follow random trends but are nonetheless restricted by probabilistic laws. The process-generating equation is known in advance, but the actual results generated are unknown. The Random-Walk Brownian motion process can be used to forecast stock prices, commodity prices, and other stochastic time-series data, given a drift or growth rate and a volatility around the drift path. The mean-reversion process reduces the fluctuations of the random-walk process by allowing the path to target a long-term value, making it useful for forecasting time-series variables that have a long-term rate, such as interest rates and inflation rates (these are long-term target rates set by regulatory authorities or the market). The jump-diffusion process is useful for forecasting time-series data when the variable occasionally exhibits random jumps, such as the price of oil or electricity (discrete exogenous event shocks can make prices jump up or down). These processes can also be mixed and matched as required. Figure 80.5 illustrates the results from Risk Simulator's Data Diagnostic Tool used to determine the stochastic parameters of the data set: the probability of a stochastic fit, as opposed to conventional models, and the relevant input parameters for these stochastic models. It is up to the user to determine whether the probability of fit is significant enough to warrant using these stochastic processes.
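As an illustration of the first of these processes, the sketch below simulates random-walk (geometric Brownian motion) paths from a drift and a volatility; mean-reversion and jump-diffusion variants add a pull toward a long-term level and Poisson-timed shocks, respectively. All inputs are illustrative.

import numpy as np

def simulate_gbm(S0, mu, sigma, T=1.0, n_steps=252, n_paths=5, seed=2):
    """Simulate geometric Brownian motion paths: dS/S = mu dt + sigma dW."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    increments = (mu - sigma**2 / 2) * dt + sigma * np.sqrt(dt) * z
    return S0 * np.exp(np.cumsum(increments, axis=1))

paths = simulate_gbm(S0=100, mu=0.05, sigma=0.25)
print(paths[:, -1])   # terminal value of each simulated path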
FIGURE 80.5 Stochastic processes
Multicollinearity exists when there is a linear relationship between the independent variables; when this occurs, the regression equation cannot be estimated at all. In near-collinearity situations, the estimated regression equation will be biased and will provide inaccurate results. This situation is especially true when a stepwise regression approach is used, where statistically significant independent variables will be thrown out of the regression mix earlier than expected, resulting in a regression equation that is neither efficient nor accurate. One quick test for the presence of multicollinearity in a multiple regression equation is a relatively high R-squared value coupled with relatively low t-statistics. Another quick test is to create a correlation matrix between the independent variables, as a high cross-correlation indicates potential multicollinearity; the rule of thumb is that a correlation with an absolute value greater than 0.75 is indicative of severe multicollinearity. Another test for multicollinearity is the Variance Inflation Factor (VIF), obtained by regressing each independent variable on all the other independent variables, obtaining the R-squared value, and calculating the VIF as 1/(1 − R-squared) (Figure 80.6). A VIF exceeding 2.0 can be considered an indication of severe multicollinearity, and a VIF exceeding 10.0 indicates destructive multicollinearity. The correlation matrix lists the Pearson's product moment correlations (commonly referred to as the Pearson's R) between variable pairs. The correlation coefficient ranges between −1.0 and +1.0 inclusive.
FIGURE 80.6 Correlation and variance inflation factors
The sign indicates the direction of association between the variables, while the magnitude of the coefficient indicates the strength of association. The Pearson's R measures only a linear relationship and is less effective in measuring nonlinear relationships. To test whether the correlations are significant, a two-tailed hypothesis test is performed, and the resulting p-values are listed as shown in Figure 80.6. P-values less than 0.10, 0.05, and 0.01 are highlighted to indicate statistical significance. In other words, a correlation pair with a p-value less than a given significance level is statistically significantly different from zero, indicating a significant linear relationship between the two variables. The Pearson's R between two variables x and y is the covariance scaled by the standard deviations, R(x,y) = COV(x,y)/(s_x s_y). The benefit of dividing the covariance by the product of the two variables' standard deviations (s) is that the resulting correlation coefficient is bounded between −1.0 and +1.0 inclusive, which makes the correlation a good relative measure for comparisons across different variables (particularly those with different units and magnitudes). The Spearman rank-based nonparametric correlation is also included in the report. The Spearman's R is related to the Pearson's R in that the data are first ranked and then correlated; rank correlations provide a better estimate of the relationship between two variables when one or both of them is nonlinear. It must be stressed that a significant correlation does not imply causation. Associations between variables in no way imply that the change in one variable causes another variable to change. Two variables moving independently of each other but along related paths may be correlated, but their relationship might be spurious (e.g., a correlation between sunspots and the stock market might be strong, but one can surmise that there is no causality and that the relationship is purely spurious).
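The correlation and VIF diagnostics described above can be sketched outside the toolkit as follows; the three regressors are synthetic, with the second deliberately built to be nearly collinear with the first so that the VIF flags it.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(5)
x1 = rng.normal(size=50)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=50)   # nearly collinear with x1
x3 = rng.normal(size=50)
X = np.column_stack([x1, x2, x3])

print("Pearson r, p-value:", pearsonr(x1, x2))
print("Spearman r, p-value:", spearmanr(x1, x2))

exog = sm.add_constant(X)
for i in range(1, exog.shape[1]):           # skip the constant column
    print(f"VIF for x{i}: {variance_inflation_factor(exog, i):.2f}")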
81. Forecasting—Econometric, Correlations, and Multiple Regression Modeling
File Name: Forecasting—Econometric, Correlations, and Multiple Regression Modeling
Location: Modeling Toolkit | Forecasting | Econometric, Correlations and Multiple Regression Modeling
Brief Description: Illustrating how to use Risk Simulator for running a multiple regression analysis and econometric modeling
Requirements: Modeling Toolkit, Risk Simulator
This example shows how a multiple regression can be run using Risk Simulator. The raw data are arranged in the Cross-Sectional Data worksheet, which contains cross-sectional data on 50 months of a bank's key risk indicators as they pertain to Basel II's operational risk analysis guidance. The idea is to see whether there is a relationship between the Impact Losses ($M) per month and these explanatory variables, using multiple regression, econometric modeling, and correlation analysis.
PROCEDURE
To run this model, simply:
1. In the Cross-Sectional Data worksheet, select the area C5:H55 (Figure 81.1).
2. Select Risk Simulator | Forecasting | Multiple Regression.
3. Choose Impact Losses as the dependent variable in the regression and click OK to run the regression (Figure 81.2).
MODEL RESULTS ANALYSIS
Note that more advanced regressions, such as lag regressors, stepwise regression, and nonlinear regressions, can also be run using Risk Simulator. For more details on running such regressions, as well as the interpretation of the regression report, refer to Modeling Risk, 2nd Edition (Hoboken, NJ: John Wiley & Sons, 2006), by Dr. Johnathan Mun.
FIGURE 81.1 Econometric data set
RESULTS SUMMARY
Refer to the Report worksheet for details on the regression output. The report explains the interpretation of specific statistical results and provides these elements: the multiple regression and analysis of variance output, including coefficients of determination, hypothesis test results (the single-variable t-test and the multiple-variable F-test), computed coefficients for each regressor, a fitted chart, and much more. Additional econometric analyses can also be performed. For instance, if a nonlinear relationship is appropriate here, simply follow the three-step procedure described, but remember to check the Nonlinear Regression box before clicking OK. Interactions between variables might also exist in this model, and the data variables can be modified before further regression models are run. In such instances, combine various independent variables to create new variables (e.g., X1, X2, ..., Xn, plus interaction terms such as X1X2, X1X3, and X1X4) and then run a Stepwise Regression. In addition, the Basic Econometrics tool in Risk Simulator can be used to modify these dependent and independent variables on the fly before a regression is run.
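For readers who want to reproduce a comparable multiple regression outside of Risk Simulator, a sketch using ordinary least squares is shown below. The data are synthetic stand-ins for the worksheet's 50 rows; the variable names and coefficients are hypothetical, not the model's actual key risk indicators.

import numpy as np
import statsmodels.api as sm

# Illustrative stand-in for the Cross-Sectional Data worksheet (C5:H55)
rng = np.random.default_rng(8)
kri = rng.normal(size=(50, 5))                       # five key risk indicators
beta = np.array([0.6, -0.3, 0.2, 0.0, 0.4])          # hypothetical coefficients
impact_losses = 1.0 + kri @ beta + rng.normal(scale=0.5, size=50)

exog = sm.add_constant(kri)
model = sm.OLS(impact_losses, exog).fit()
print(model.summary())   # coefficients, t-tests, F-test, R-squared, and so on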
FIGURE 81.2 Running a multiple regression

Finally, the previous chapter on Forecasting—Data Diagnostics provides the critical modeling prerequisites to check before running any multivariate regression analysis, to make sure that none of the required assumptions has been violated in the data set and to check the modeling and predictive quality of the data. Sometimes the variables themselves require tweaking before a multiple regression is run; this is the basics of econometrics. Using Risk Simulator, this process can be applied quickly, and the variables can be manipulated (log, addition, subtraction, multiplication, lags, and so forth, using the Basic Econometrics tool) and modeled. The procedure is described next.
PROCEDURE
To run a basic econometrics model, simply:
1. In the Cross-Sectional Data worksheet, select the area C5:H55 (Figure 81.1).
2. Select Risk Simulator | Forecasting | Basic Econometrics.
3. Type in the required single dependent variable and the multiple independent variables with the required modifications, separated by semicolons (Figure 81.3). Click Show Results to view the results of this model, make additional modifications as required, and, when done, click OK to generate the full model report, as seen in Figure 81.4.
FIGURE 81.3 Basic econometrics results
FIGURE 81.4 Basic econometrics report
82. Forecasting—Exponential J-Growth Curves

File Name: Forecasting—Exponential J-Growth Curves
Location: Modeling Toolkit | Forecasting | Exponential J-Growth Curves
Brief Description: Illustrating how to use Risk Simulator to run a simulation model on the exponential J-growth curve where the growth rate is uncertain over time, to determine the probabilistic outcomes at some point in the future
Requirements: Modeling Toolkit, Risk Simulator
In mathematics, a quantity that grows exponentially is one whose growth rate is always proportional to its current size. Such growth is said to follow an exponential law. This implies that for any exponentially growing quantity, the larger the quantity gets, the faster it grows, and that the relationship between the size of the dependent variable and its rate of growth is governed by a strict law of the simplest kind: direct proportion. Any exponentially growing number will eventually grow larger than any other number that grows at only a constant rate for the same amount of time. Figure 82.1 shows an exponential growth curve.
PROCEDURE You can run the preset simulation by clicking on Risk Simulator | Change Profile, selecting the J Curve profile, and then running the simulation by going to Risk Simulator | Run Simulation. Alternatively, you can set up your own profile or change the underlying simulation assumptions as well as the input parameters. The Growth Rate and Initial Value are the inputs in the model that determine the characteristics of the J curve. In addition, you can enter a Lookup Year to determine the value for that particular year, where the corresponding value for that year is set as a forecast. Using the initial model inputs and running the simulation yields the forecast charts shown in Figure 82.2, where the expected value at Year 50 is 1717.63 units. An alternative approach is to go to Risk Simulator | Forecasting | J-S Curves to generate forecasts of exponential J-growth curves.
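As a rough cross-check outside of Excel, the sketch below simulates the same kind of J curve with an uncertain growth rate; the initial value and the growth-rate distribution are hypothetical placeholders, not the model's actual assumptions.

```python
# Minimal sketch: exponential J curve with an uncertain growth rate,
# evaluated at a lookup year. All inputs are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
initial_value, lookup_year, trials = 100.0, 50, 10_000
growth = rng.normal(0.06, 0.01, trials)  # uncertain annual growth rate

values = initial_value * (1.0 + growth) ** lookup_year
print(f"expected value at Year {lookup_year}: {values.mean():.2f}")
```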
FIGURE 82.1 Exponential J curve
FIGURE 82.2 Simulated results of a J curve
83. Forecasting—Forecasting Manual Computations

File Name: Forecasting—Forecasting Manual Computations
Location: Modeling Toolkit | Forecasting | Forecasting Manual Computations
Brief Description: Illustrating how to use Risk Simulator for running optimizations to determine the optimal forecasting parameters in several time-series decomposition models
Requirements: Modeling Toolkit, Risk Simulator
This example model shows how the manual computations behind the eight time-series decomposition and forecasting techniques in Risk Simulator are performed. We illustrate the step-by-step computations of backcast and forecast values and use Risk Simulator's optimization routines to find the optimal forecast parameters (e.g., Alpha, Beta, and Gamma) that yield the lowest error. The eight methods covered in this model are shown in Figure 83.1.
FIGURE 83.1 Eight time-series decomposition forecasting models
PROCEDURE The manual computations are shown in each of the worksheets in the model. In some of the methods, optimization is required to find the best input parameters to backcast-fit and forecast-fit the data going forward. In each method requiring optimization (six of the eight methods, excluding the Single Moving Average and Double Moving Average approaches), a new profile has already been created and the relevant objectives and decision variables have been set up. To run the model, follow these two steps: 1. Go to the relevant worksheet and change the profile to match the worksheet (Risk Simulator | Change Profile), finding the matching profile for the model. 2. Change the decision variables (Alpha, Beta, and Gamma) in the model to some starting point (e.g., 0.1) so that you can tell whether anything has changed at the end of the optimization procedure. Then run the optimization (Risk Simulator | Optimization | Run Optimization). To re-create the optimization, follow these four steps instead: 1. Go to the relevant worksheet, create a new simulation profile (Risk Simulator | New Profile), and give it a name. 2. Select the RMSE (root mean squared error) cell, make it an objective (Risk Simulator | Optimization | Set Objective), and set it to Minimize. 3. Select the input parameters one at a time (e.g., Alpha, Beta, or Gamma), make each a decision variable (Risk Simulator | Set Decision), and make it a Continuous Variable between 0 and 1 (the input parameters Alpha, Beta, and Gamma can take values only between 0 and 1, inclusive). 4. Run the optimization (Risk Simulator | Optimization | Run Optimization) and choose Static Optimization. A minimal sketch of what this optimization does under the hood follows.
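The sketch below fits the Alpha of a single exponential smoothing model by minimizing RMSE; the data points are hypothetical, and the actual worksheets cover the full set of decomposition methods rather than this single method.

```python
# Minimal sketch: find the Alpha (0 to 1) that minimizes the RMSE of a
# single exponential smoothing backcast. The data are hypothetical.
import numpy as np
from scipy.optimize import minimize_scalar

y = np.array([265.2, 246.4, 282.5, 318.5, 280.0, 307.4, 341.2, 335.9])

def rmse(alpha: float) -> float:
    fit = np.empty_like(y)
    fit[0] = y[0]  # seed the backcast with the first observation
    for t in range(1, len(y)):
        # forecast for period t from data available through period t-1
        fit[t] = alpha * y[t - 1] + (1 - alpha) * fit[t - 1]
    return float(np.sqrt(np.mean((y[1:] - fit[1:]) ** 2)))

res = minimize_scalar(rmse, bounds=(0.0, 1.0), method="bounded")
print(f"optimal Alpha = {res.x:.4f}, RMSE = {res.fun:.4f}")
```

The Holt (Alpha, Beta) and Holt-Winters (Alpha, Beta, Gamma) methods extend the same idea, with the optimizer searching over two or three parameters instead of one.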
84. Forecasting—Linear Interpolation and Nonlinear Spline Extrapolation

File Name: Forecasting—Linear Interpolation
Location: Modeling Toolkit | Forecasting | Linear Interpolation and Modeling Toolkit | Yield Curve | Spline Interpolation and Extrapolation
Brief Description: Illustrating how to compute a linear and nonlinear interpolation/extrapolation of missing values in a time-series forecast model
Requirements: Modeling Toolkit, Risk Simulator
Modeling Toolkit Functions Used: B2InterpolationLinear, B2Cubicspline
Sometimes interest rates or any type of time-dependent rates may have missing values. For instance, the Treasury rates for Years 1, 2, and 3 exist, and then jump to Year 5, skipping Year 4. Using linear interpolation (i.e., assuming the rates during the missing periods are linearly related to the rates around them), we can determine and fill in, or interpolate, the missing values. Figure 84.1 illustrates how to use this function. In contrast, the cubic spline polynomial interpolation and extrapolation model is used to fill in the gaps of missing values (interpolation) and to forecast outside of the known values (extrapolation) when the underlying structure is nonlinear. For example, we can apply this approach to spot yields and the term structure of interest rates, whereby the model can be used both to interpolate missing data points within a time series of interest rates (as well as other macroeconomic variables such as inflation rates, commodity prices, or market returns) and to extrapolate outside the given or known range, which is useful for forecasting purposes. Figure 84.2 illustrates the cubic spline model. The cubic spline extrapolation approach can be applied either by using the Modeling Toolkit function or Risk Simulator's Cubic Spline module, as seen in Figure 84.3. The example file using this module can be accessed through Risk Simulator | Examples | Advanced Forecasting Models, in the Cubic Spline worksheet. Figure 84.4 shows the results of the cubic spline module.
FIGURE 84.1 Interpolating missing values in a linear time-series
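For readers who prefer to verify these calculations in code, here is a minimal sketch of both techniques; the years and rates are hypothetical, not the model's data.

```python
# Minimal sketch: linear interpolation of a missing Year 4 rate, and cubic
# spline interpolation/extrapolation on the same curve. Rates are hypothetical.
import numpy as np
from scipy.interpolate import CubicSpline

known_years = np.array([1.0, 2.0, 3.0, 5.0])               # Year 4 is missing
known_rates = np.array([0.0430, 0.0442, 0.0451, 0.0462])   # hypothetical rates

# Linear interpolation assumes the missing rate lies on a straight line
rate_y4_linear = np.interp(4.0, known_years, known_rates)

# Cubic spline handles nonlinear structure and can also extrapolate
spline = CubicSpline(known_years, known_rates, extrapolate=True)
print(f"linear Year 4: {rate_y4_linear:.4%}")
print(f"spline Year 4: {float(spline(4.0)):.4%}")
print(f"spline Year 7: {float(spline(7.0)):.4%}  (extrapolated)")
```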
FIGURE 84.2 Interpolating missing values and forecast extrapolation in a nonlinear time-series
FIGURE 84.3 Cubic Spline forecast using Risk Simulator
FIGURE 84.4 Cubic Spline forecast report
85. Forecasting—Logistic S-Growth Curves

File Name: Forecasting—Logistic S-Growth Curves
Location: Modeling Toolkit | Forecasting | Logistic S-Growth Curves
Brief Description: Illustrating how to run a simulation model on the logistic S-growth curve where the growth rate is uncertain over time, to determine the probabilistic outcomes at some point in the future
Requirements: Modeling Toolkit, Risk Simulator
A logistic function or logistic curve models the S curve of growth of some variable. The initial stage of growth is approximately exponential; then, as competition arises, growth slows, and at maturity, growth stops (Figure 85.1). These functions find applications in a range of fields, from biology to economics. For example, in the development of an embryo, a fertilized ovum splits, and the cell count grows: 1, 2, 4, 8, 16, 32, 64, and so forth. This is exponential growth. But the fetus can grow only as large as the uterus can hold; thus other factors start slowing down the increase in the cell count, and the rate of growth slows (but the baby is still growing, of course). After a suitable time, the child is born and keeps growing. Ultimately, the cell count is stable; the person's height is constant; the growth has stopped, at maturity. The same principles can be applied to the population growth of animals or humans and to the market penetration and revenues of a product, where there is an initial growth spurt in market penetration but, over time, growth slows due to competition and eventually the market matures and declines.

FIGURE 85.1 Logistic S curve
PROCEDURE You can run the preset simulation by clicking on Risk Simulator | Change Profile, selecting the S Curve profile, and then running the simulation by going to Risk Simulator | Run Simulation. Alternatively, you can set up your own profile or change the underlying simulation assumptions as well as the input parameters. The Growth Rate, Maximum Capacity, and Initial Value are the inputs in the model that determine the characteristics of the S curve. In addition, you can enter a Lookup Year to determine the value for that particular year, where the corresponding value for that year is set as a forecast. Using the initial model inputs and running the simulation yields the forecast charts seen in Figure 85.2, where the expected value at Year 30 is 2054.88 units. Alternatively, you can use Risk Simulator's S-curve forecasting module to automatically generate forecast values (Risk Simulator | Forecasting | J-S Curve).
FIGURE 85.2 Simulated results on the logistic S curve
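For reference, the sketch below reproduces the same kind of S-curve simulation in code, using one common parameterization of the logistic function; the capacity, initial value, and growth-rate distribution are hypothetical, not the model's inputs.

```python
# Minimal sketch: logistic S curve with an uncertain growth rate, evaluated
# at a lookup year. All inputs are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
capacity, initial_value = 2500.0, 100.0
lookup_year, trials = 30, 10_000
growth = rng.normal(0.25, 0.05, trials)  # uncertain annual growth rate

# value(t) = K / (1 + ((K - N0) / N0) * exp(-g * t))
ratio = (capacity - initial_value) / initial_value
values = capacity / (1.0 + ratio * np.exp(-growth * lookup_year))
print(f"expected value at Year {lookup_year}: {values.mean():.2f}")
```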
86. Forecasting—Markov Chains and Market Share

File Name: Forecasting—Markov Chains and Market Share
Location: Modeling Toolkit | Forecasting | Markov Chains and Market Share
Brief Description: Illustrating how to run a simulation model on Markov chains to determine path-dependent movements of market share and the long-term steady-state effects of market share
Requirements: Modeling Toolkit, Risk Simulator
The Markov process is useful for studying the evolution of systems over multiple and repeated trials in successive time periods. The system's state at a particular time is unknown, and we are interested in the probability that a particular state exists. For instance, Markov chains are used to compute the probability that a particular machine or piece of equipment will continue to function in the next time period, or that a consumer purchasing Product A will continue to purchase Product A in the next period rather than switch to a competing Brand B.
PROCEDURE 1. When opening this model, make sure to select Enable Macros when prompted (you will get this prompt if your security settings are set to Medium, i.e., click on Tools | Macros | Security | Medium when in Excel). The Model worksheet shows the Markov process, presented as a chain of events, also known as a Markov chain. 2. Enter the required inputs (e.g., the transition probabilities or probabilities of switching, the initial store location, and the number of periods to forecast using the Markov process). Click Compute when ready. Note: The probabilities are typically between 5% and 95%, and the Checksum total probabilities must equal 100%. These transition probabilities represent the probabilities that a customer will visit another store in the next period. For instance, 90% indicates that the customer is currently a patron of Store A and that there is a 90% probability that s/he will stay at Store A in the next period and a 10% probability that s/he will visit the other store (Store B) in the next period. Therefore, the total probability must equal 100% (Figure 86.1). 3. Select the location of the customer at time zero, that is, right now: Store A or Store B. 4. Finally, enter the number of periods to forecast, typically between 5 and 50 periods. You cannot enter a value less than 1 or more than 1000.
FIGURE 86.1 Markov chain input assumptions
RESULTS The results from a Markov process include a chain of events indicating the transition from one store to the next. For instance, if there are 1,000 customers in total, at Period 5 there will be, on average, 723 customers in Store A versus 277 in Store B, and so forth. Over time, a steady state occurs, and the Steady State Probability results indicate what would happen if the analysis were performed over a much longer period. These steady-state probabilities are also indicative of the respective market shares. In other words, over time and at equilibrium, Store A has a 66.67% market share compared to Store B's 33.33% (Figure 86.2). This analysis can be extended to, say, a machine functioning over time. In the example given, say a machine that is functioning in the current period will continue to function in the next period about 90% of the time. Then the probability that the machine will still be functioning in five periods is 72.27%, or 66.67% in the long run. Alternatively, you can click on Risk Simulator | Forecasting | Markov Chain to automatically generate a Markov forecast and report.
FIGURE 86.2 Markov chain results with steady-state effects
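The arithmetic behind these results is compact enough to verify directly, as in the sketch below; note that the 20% Store B-to-A switching probability is inferred from the steady state reported above rather than stated explicitly, so treat it as an assumption.

```python
# Minimal sketch: evolve a two-store Markov chain and compute its steady
# state. P(A->A) = 0.90 per the text; P(B->A) = 0.20 is inferred.
import numpy as np

P = np.array([[0.90, 0.10],   # from Store A: stay, switch to B
              [0.20, 0.80]])  # from Store B: switch to A, stay

state = np.array([1.0, 0.0])  # the customer starts at Store A
for _ in range(5):
    state = state @ P
print("period 5:", state.round(4))        # about [0.7227, 0.2773]

# Steady state: left eigenvector of P for eigenvalue 1, normalized to sum to 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
print("steady state:", pi.round(4))       # about [0.6667, 0.3333]
```

Under these assumptions, the period-5 probabilities match the 723-versus-277 customer split, and the steady state matches the 66.67%/33.33% long-run market shares.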
87. Forecasting—Multiple Regression

File Name: Forecasting—Multiple Regression
Location: Modeling Toolkit | Forecasting | Multiple Regression
Brief Description: Illustrating how to run a multiple or multivariate regression analysis
Requirements: Modeling Toolkit, Risk Simulator
This example shows how a multiple regression can be run using Risk Simulator. The raw data are arranged in the Cross-Sectional Data worksheet, which contains cross-sectional data on all 50 U.S. states: the number of aggravated assaults (in thousands) per year, the number of bachelor's degrees awarded per year, police expenditure per capita, population size in millions, population density (persons per square mile), and unemployment rate. The idea is to use multiple regression analysis to see if there is a relationship between the number of aggravated assaults per year and these explanatory variables.
MULTIPLE REGRESSION ANALYSIS To run this model, simply: 1. In the Cross Sectional Data worksheet, select the area C5:H55. 2. Select Risk Simulator | Forecasting | Multiple Regression. 3. Choose Aggravated Assault as the dependent variable in the regression and click on OK (Figure 87.1). Note that more advanced regressions, such as lag regressors, stepwise regression, and nonlinear regressions, can also be run using Risk Simulator. For details on running such regressions as well as on interpreting the results, refer to Modeling Risk, 2nd Edition, by Dr. Johnathan Mun (Hoboken, NJ: John Wiley & Sons, 2006).
FIGURE 87.1 Running a regression

RESULTS SUMMARY Refer to the Report worksheet for details on the regression output. The worksheet has more details on the interpretation of specific statistical results. It provides these elements: multiple regression and analysis of variance output, including coefficients of determination, hypothesis test results (single variable t-test and multiple variable F-test), computed coefficients for each regressor, fitted chart, and much more. Also, remember to look at the Data Diagnostics models chapter for running analytical and econometric diagnostics, checking for multicollinearity, autocorrelation, micronumerosity, distributive lags, and other technical regression issues before running a regression analysis. The data set was set up intentionally in such a way that some of the independent variables are significant while others are not statistically significant (you can tell this from the computed t-test p-values in the report), and some of the variables are nonlinearly related to the dependent variable. Therefore, try out several variations of your model to find the best fit. Finally, do not forget to put the data set through the data diagnostics tool in Risk Simulator; see the Forecasting—Data Diagnostics chapter for how to do this.
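As a companion to the report interpretation described above, the sketch below shows the analogous p-value check in code; the data are simulated so that only one regressor is truly significant, mirroring how the example data set was intentionally constructed. The variable names are hypothetical.

```python
# Minimal sketch: fit a multiple regression on simulated data and screen
# regressors by their t-test p-values. Only police_spend truly matters here.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 50
df = pd.DataFrame({
    "degrees": rng.normal(25, 5, n),
    "police_spend": rng.normal(15, 3, n),
    "density": rng.normal(200, 40, n),
})
df["assaults"] = 30 - 1.2 * df["police_spend"] + rng.normal(0, 2, n)

X = sm.add_constant(df[["degrees", "police_spend", "density"]])
fit = sm.OLS(df["assaults"], X).fit()
print(fit.pvalues.round(4))  # regressors with large p-values are candidates to drop
```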
88. Forecasting—Nonlinear Extrapolation and Forecasting

File Name: Forecasting—Nonlinear Extrapolation
Location: Modeling Toolkit | Forecasting | Nonlinear Extrapolation
Brief Description: Illustrating how to forecast with a nonlinear extrapolation and comparing the results with more conventional time-series decomposition forecasts with trends and seasonality effects
Requirements: Modeling Toolkit, Risk Simulator
This model provides some historical data on sales revenues for a firm. The goal of this exercise is to use Risk Simulator to run the Nonlinear Extrapolation tool and forecast the revenues for the next several periods. The data are located in the Time-Series Data worksheet and are arranged by months, from January 2004 to December 2004. As the data are time-series in nature, we can apply extrapolation to forecast the results.
NONLINEAR EXTRAPOLATION Note that the nonlinear extrapolation methodology involves making statistical projections by using historical trends that are projected for a specified period of time into the future. It is used only for time-series forecasts. Extrapolation is fairly reliable, relatively simple, and inexpensive. However, extrapolation, which assumes that recent and historical trends will continue, produces large forecast errors if discontinuities occur within the projected time period. To run this model, simply: 1. Go to the Time-Series Data worksheet. 2. Select the Sales Revenue data series (cells H11:H22) and select Risk Simulator | Forecasting | Nonlinear Extrapolation (Figure 88.1). 3. Extrapolate for 6 periods using the Automatic Selection option. 4. Repeat the process by selecting the Net Income data series (cells M11:M22). 5. Select cells H11:H22 again, but this time run the Time-Series Analysis tool (select Risk Simulator | Forecasting | Time-Series Analysis) with a forecast period of 6 and seasonality of 6. 6. Repeat Step 5's time-series analysis on cells M11:M22. 7. Compare the results from extrapolation and time-series analysis. A minimal sketch of polynomial extrapolation appears after Figure 88.1.
FIGURE 88.1 Nonlinear extrapolation
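The sketch below shows one simple form of nonlinear extrapolation, a fitted polynomial trend projected forward; Risk Simulator's tool selects among functional forms automatically, so this is an illustration of the idea, not its exact algorithm, and the 12 monthly revenue figures are hypothetical.

```python
# Minimal sketch: fit a quadratic trend to 12 months of revenue and
# extrapolate 6 periods ahead. The data are hypothetical.
import numpy as np

revenue = np.array([684.2, 584.1, 765.4, 892.3, 885.4, 677.0,
                    1006.6, 1122.1, 1163.4, 993.2, 1312.5, 1545.3])
t = np.arange(1, len(revenue) + 1)

coeffs = np.polyfit(t, revenue, deg=2)            # quadratic trend
future_t = np.arange(len(revenue) + 1, len(revenue) + 7)
print(np.polyval(coeffs, future_t).round(1))      # 6-period extrapolation
```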
MODEL RESULTS ANALYSIS For your convenience, the analysis Report worksheets are included in the model. A fitted chart and forecast values are provided, as well as the error measures and a statistical summary of the methodology. Notice that when the historical data and future expectations are such that growth rates are nonlinear and smooth, extrapolation works better, as in the case of the sales revenues; compare the graphs visually and look at the corresponding root mean squared error values (denoted as RMSE in the report), where the smaller this error, the better the model fits and forecasts the data. When seasonality occurs, as in the case of net income, time-series analysis performs better.
89. Forecasting—Stochastic Processes, Brownian Motion, Forecast Distribution at Horizon, Jump-Diffusion, and Mean-Reversion

File Name: Forecasting—Stochastic Processes and others
Location: Modeling Toolkit | Forecasting | Stochastic Processes, as well as various models such as the Brownian Motion, Jump-Diffusion, Forecast Distribution at Horizon, and Mean-Reverting Stochastic Models
Brief Description: Illustrating how to simulate stochastic processes (Brownian motion random walk, mean reversion, jump diffusion, and mixed models)
Requirements: Modeling Toolkit, Risk Simulator
A stochastic process is a sequence of events or paths generated by probabilistic laws. That is, random events can occur over time but are governed by specific statistical and probabilistic rules. The main stochastic processes include random walk or Brownian motion, mean reversion, and jump diffusion. These processes can be used to forecast a multitude of variables that seemingly follow random trends but are restricted by probabilistic laws, including time-series data such as stock prices, interest rates, inflation rates, oil prices, electricity prices, and commodity prices. We can use Risk Simulator's Stochastic Process module to simulate and create such processes.
STOCHASTIC PROCESS FORECASTING To run this model, simply: 1. Select Risk Simulator | Forecasting | Stochastic Processes. 2. Enter a set of relevant inputs or use the existing inputs as a test case (Figure 89.1). 3. Select the relevant process to simulate. 4. Click on Update Chart to view the updated computation of a single path or click OK to create the process.
FIGURE 89.1 Running a stochastic process forecast

MODEL RESULTS ANALYSIS For your convenience, the analysis Report worksheet is included in the model. A stochastic time-series chart and forecast values are provided in the report, as well as each step's time period, mean, and standard deviation of the forecast (Figure 89.2). The mean values can be used as the single-point estimate, or assumptions can be manually generated for the desired time period. That is, after finding the appropriate time period, create an assumption with a normal distribution and the appropriate computed mean and standard deviation. A sample chart with 10 iteration paths is included to graphically illustrate the behavior of the forecasted process. Clearly, the key is to calibrate the inputs to a stochastic process forecast model. The input parameters can be obtained easily through some econometric modeling of historical data; the Data Diagnostic and Statistical Analysis models show how to use Risk Simulator to compute these input parameters. See Dr. Johnathan Mun's Modeling Risk, 2nd Edition (Hoboken, NJ: John Wiley & Sons, 2006), for the technical details on obtaining these parameters. The Risk Simulator tool is useful for quickly generating large numbers of stochastic process forecasts. Alternatively, if the inputs are uncertain and need to be simulated, use Modeling Toolkit's examples: Brownian Motion Stochastic Process, Jump-Diffusion Stochastic Process, or Mean-Reverting Stochastic Process (all located in the Modeling Toolkit | Forecasting menu). Finally, a deterministic model of stock price forecasts assuming a random walk process is also available (Modeling Toolkit | Forecasting | Forecast Distribution at Horizon), where stock price forecasts are provided for a future date.
FIGURE 89.2 Stochastic process forecast results
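For readers who want to see one of these processes generated directly, here is a minimal sketch of a geometric Brownian motion (random walk) forecast, with the mean and standard deviation at the horizon computed the way the report tabulates them per period; the drift, volatility, and step size are hypothetical.

```python
# Minimal sketch: simulate geometric Brownian motion paths and report the
# mean and standard deviation at the horizon. Parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
s0, mu, sigma = 100.0, 0.05, 0.25       # start value, drift, volatility (annual)
dt, steps, paths = 1 / 12, 60, 10_000   # monthly steps over 5 years

z = rng.standard_normal((paths, steps))
log_increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
prices = s0 * np.exp(np.cumsum(log_increments, axis=1))

print("mean at horizon:", round(prices[:, -1].mean(), 2))
print("std  at horizon:", round(prices[:, -1].std(), 2))
```

Mean-reversion and jump-diffusion processes add a pull-to-equilibrium term and a Poisson jump term, respectively, to the same simulation loop.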
90. Forecasting—Time-Series ARIMA

File Name: Forecasting—Time-Series ARIMA
Location: Modeling Toolkit | Forecasting | Time-Series ARIMA
Brief Description: Illustrating how to run an econometric model called the Box-Jenkins ARIMA, which stands for autoregressive integrated moving average, an advanced forecasting technique that takes into account historical fluctuations, trends, seasonality, cycles, prediction errors, and nonstationarity of the data
Requirements: Modeling Toolkit, Risk Simulator
The Data worksheet in the model contains some historical time-series data on money supply in the United States, denoted M1, M2, and M3. M1 is the most liquid form of money (cash, coins, savings accounts, etc.); M2 and M3 are less liquid forms of money (bearer bonds, certificates of deposit, etc.). These data sets are useful examples of long-term historical time-series data where ARIMA can be applied. Briefly, ARIMA econometric modeling takes into account historical data and decomposes it into an autoregressive (AR) process, where there is a memory of past events (e.g., the interest rate this month is related to the interest rate last month, etc., with a decreasing memory lag); an integrated (I) process, which accounts for stabilizing or making the data stationary and ergodic, making it easier to forecast; and a moving average (MA) of the forecast errors, such that the longer the historical data series, the more accurate the forecasts will be, as the model learns over time. ARIMA models therefore have three model parameters, one for the AR(p) process, one for the I(d) process, and one for the MA(q) process, all combined and interacting among each other and recomposed into the ARIMA (p,d,q) model. There are many reasons why an ARIMA model is superior to common time-series analysis and multivariate regressions. The common finding in time-series analysis and multivariate regression is that the error residuals are correlated with their own lagged values. This serial correlation violates the standard assumption of regression theory that disturbances are not correlated with other disturbances. The primary problems associated with serial correlation are:
Regression analysis and basic time-series analysis are no longer efficient among the different linear estimators. However, as the error residuals can help to predict current error residuals, we can take advantage of this information to form a better prediction of the dependent variable using ARIMA.
Standard errors computed using the regression and time-series formulas are not correct and are generally understated.
If there are lagged dependent variables set as the regressors, regression estimates are biased and inconsistent but can be fixed using ARIMA.
Autoregressive integrated moving average, or ARIMA(p,d,q), models are the extension of the AR model that uses three components for modeling the serial correlation in the time-series data. The first component is the autoregressive (AR) term. The AR(p) model uses the p lags of the time series in the equation. An AR(p) model has the form: yt = a1yt−1 + . . . + apyt−p + et. The second component is the integration (d) order term. Each integration order corresponds to differencing the time series: I(1) means differencing the data once, and I(d) means differencing the data d times. The third component is the moving average (MA) term. The MA(q) model uses the q lags of the forecast errors to improve the forecast. An MA(q) model has the form: yt = et + b1et−1 + . . . + bqet−q. Finally, an ARMA(p,q) model has the combined form: yt = a1yt−1 + . . . + apyt−p + et + b1et−1 + . . . + bqet−q.

In interpreting the results of an ARIMA model, most of the specifications are identical to those of multivariate regression analysis. However, several additional sets of results are specific to the ARIMA analysis. The first is the addition of the Akaike Information Criterion (AIC) and Schwarz Criterion (SC), which are often used in ARIMA model selection and identification. That is, AIC and SC are used to determine whether a particular model with a specific set of p, d, and q parameters is a good statistical fit. SC imposes a greater penalty for additional coefficients than the AIC, but generally the model with the lowest AIC and SC values should be chosen.

Finally, an additional set of results, called the autocorrelation (AC) and partial autocorrelation (PAC) statistics, is provided in the ARIMA report. For instance, if autocorrelation AC(1) is nonzero, the series is first-order serially correlated. If AC dies off more or less geometrically with increasing lags, the series follows a low-order autoregressive process. If AC drops to zero after a small number of lags, the series follows a low-order moving-average process. In contrast, PAC measures the correlation of values that are k periods apart after removing the correlation from the intervening lags. If the pattern of autocorrelation can be captured by an autoregression of order less than k, then the partial autocorrelation at lag k will be close to zero. The Ljung-Box Q-statistics and their p-values at lag k are also provided, where the null hypothesis being tested is that there is no autocorrelation up to order k. The dotted lines in the plots of the autocorrelations are the approximate two-standard-error bounds. If the autocorrelation is within these bounds, it is not significantly different from zero at approximately the 5% significance level.

Finding the right ARIMA model takes practice and experience. The AC, PAC, SC, and AIC statistics are highly useful diagnostic tools to help identify the correct model specification. Finally, the ARIMA parameter results are obtained using sophisticated optimization and iterative algorithms, which means that although the functional forms look like those of a multivariate regression, they are not the same. ARIMA is a much more computationally intensive and advanced econometric approach.
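The sketch below mirrors this model-selection logic in code, fitting several candidate (p,d,q) specifications to a simulated nonstationary series and ranking them by AIC; the candidate set and series are hypothetical, and this illustrates the idea rather than Risk Simulator's implementation.

```python
# Minimal sketch: fit candidate ARIMA(p,d,q) models and rank them by AIC,
# then forecast with the best one. The series is simulated.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = pd.Series(np.cumsum(0.5 + rng.standard_normal(240)))  # random walk with drift

candidates = [(1, 1, 0), (0, 1, 1), (1, 1, 1), (2, 1, 2)]
fits = {order: ARIMA(y, order=order).fit() for order in candidates}

for order, fit in sorted(fits.items(), key=lambda kv: kv[1].aic):
    print(order, round(fit.aic, 2))       # lowest AIC first

best = min(fits.values(), key=lambda f: f.aic)
print(best.forecast(steps=12).round(2))   # 12-period forecast
```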
RUNNING A BOX-JENKINS ARIMA FORECAST To run this model, simply: 1. Go to the Data worksheet and select Risk Simulator | Forecasting | ARIMA. 2. Click on the LINK icon beside the Time Series Variable input box, and link in C7:C442.
FIGURE 90.1 Running a Box-Jenkins ARIMA model
3. Enter the relevant P, D, Q inputs, forecast periods, maximum iterations, and so forth (Figure 90.1), and click OK. A nice feature of Risk Simulator is its AUTO-ARIMA module. That is, instead of requiring advanced econometric knowledge, the AUTO-ARIMA module automatically tests the most commonly used model specifications and ranks them from best fit to worst fit. Figure 90.2 illustrates the results generated from the AUTO-ARIMA module in Risk Simulator, and Figure 90.3 shows the best-fitting ARIMA model report. Finally, please note that if the Exogenous Variable input is used, the number of data points in this variable less the number of data points in the Time-Series Variable has to exceed the number of Forecast Periods.
FIGURE 90.2 AUTO-ARIMA results
FIGURE 90.3 Best-Fitting AUTO-ARIMA results
91. Forecasting—Time-Series Analysis

File Name: Forecasting—Time-Series Analysis
Location: Modeling Toolkit | Forecasting | Time Series Analysis
Brief Description: Illustrating how to run time-series analysis forecasts, which take into account historical base values, trends, and seasonalities to project the future
Requirements: Modeling Toolkit, Risk Simulator
The historical sales revenue data are located in the Time-Series Data worksheet in the model. The data are quarterly sales revenue from Q1 2000 to Q4 2004. The data exhibit quarterly seasonality, which means that the seasonality is four (there are four quarters in one year or one cycle). Time-series forecasting decomposes the historical data into the baseline, trend, and seasonality, if any. The models then apply an optimization procedure to find the alpha, beta, and gamma parameters for the baseline, trend, and seasonality coefficients and then recompose them into a forecast. In other words, this methodology first applies a backcast to find the best-fitting model and best-fitting parameters of the model that minimizes forecast errors and then proceeds to forecast the future based on the historical data that exist. This of course assumes that the same baseline growth, trend, and seasonality hold going forward. Even if they do not—say, when there exists a structural shift (e.g., company goes global, has a merger, spin-off, etc.)—the baseline forecasts can be computed and then the required adjustments can be made to the forecasts.
PROCEDURE To run this model, simply: 1. Select the historical data (cells H11:H30). 2. Select Risk Simulator | Forecasting | Time-Series Analysis. 3. Select Auto Model Selection, Forecast 4 Periods and Seasonality 4 Periods (Figure 91.1). Note that you can select Create Simulation Assumptions only if an existing Simulation Profile exists. (If not, click on Risk Simulator, New Simulation Profile, and then run the time-series forecast per steps 1 to 3, but remember to check the Create Simulation Assumptions box.)
FIGURE 91.1 Running a time-series analysis forecast
MODEL RESULTS ANALYSIS For your convenience, the analysis Report and Methodology worksheets are included in the model. A fitted chart and forecast values are provided in the report, as well as the error measures and a statistical summary of the methodology (Figure 91.2). The Methodology worksheet provides the statistical results from all eight time-series methodologies.

Several different types of errors can be calculated for time-series forecast methods, including the mean-squared error (MSE), root mean-squared error (RMSE), mean absolute deviation (MAD), and mean absolute percent error (MAPE). The MSE is an absolute error measure that squares the errors (the differences between the actual historical data and the forecast-fitted data predicted by the model) to keep the positive and negative errors from canceling each other out. This measure also tends to exaggerate large errors by weighting them more heavily than smaller errors, which can help when comparing different time-series models. The MSE is calculated by simply taking the average of the squared errors. RMSE is the square root of the MSE and is the most popular error measure, also known as the quadratic loss function; because the errors are squared before averaging, RMSE is most appropriate when the cost of a forecast error grows disproportionately with the size of the error. MAD is an error statistic that averages the distance (the absolute value of the difference between the actual historical data and the forecast-fitted data predicted by the model) between each pair of actual and fitted forecast data points. MAD is calculated by taking the average of |Error| and is most appropriate when the cost of forecast errors is proportional to the absolute size of the forecast errors.
FIGURE 91.2 Time-series analysis results
FIGURE 91.3 Error estimates
MAPE is a relative error statistic measured as an average percent error of the historical data points and is most appropriate when the cost of the forecast error is more closely related to the percentage error than to the numerical size of the error. This error estimate is calculated by taking the average of |Yt − Ŷt| / Yt, where Yt is the historical data at time t and Ŷt is the fitted or predicted data point at time t using this time-series method. Finally, an associated measure is the Theil's U statistic, which measures the naivety of the model's forecast. That is, if the Theil's U statistic is less than 1.0, the forecast method used provides an estimate that is statistically better than guessing. Figure 91.3 provides the mathematical details of each error estimate.
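The error measures just described reduce to a few lines of arithmetic, as in the sketch below; the actual and fitted values are hypothetical, and the Theil's U shown is one common formulation (model RMSE relative to the RMSE of a naive no-change forecast).

```python
# Minimal sketch: MSE, RMSE, MAD, MAPE, and one form of Theil's U for a
# hypothetical actual-versus-fitted series.
import numpy as np

actual = np.array([684.2, 584.1, 765.4, 892.3, 885.4, 677.0])
fitted = np.array([667.3, 610.9, 742.8, 847.1, 901.2, 715.6])

err = actual - fitted
mse = np.mean(err**2)
rmse = np.sqrt(mse)
mad = np.mean(np.abs(err))
mape = np.mean(np.abs(err) / actual)

# Naive benchmark: forecast each period with the previous actual value
naive_err = actual[1:] - actual[:-1]
theil_u = rmse / np.sqrt(np.mean(naive_err**2))

print(f"MSE={mse:.1f} RMSE={rmse:.2f} MAD={mad:.2f} "
      f"MAPE={mape:.2%} Theil's U={theil_u:.3f}")
```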
92. Industry Applications—Biotech Manufacturing Strategy

File Name: Biotech—Biotech Manufacturing Strategy
Location: Modeling Toolkit | Industry Applications | Biotech—Manufacturing Strategy
Brief Description: Illustrating how to set up various manufacturing strategies in the biotech industry
Requirements: Modeling Toolkit, Risk Simulator
Special Contribution: This model was contributed by Uriel Kusiatin, a senior vice president of SKS Consulting (www.sksconsulting.us), a turnaround and strategy advisory firm that works with small-cap companies and their investors in the telecommunications, high-tech, and life sciences industries. Uriel has applied real options, Monte Carlo simulation, and optimization techniques to research and development portfolio decisions, licensing opportunities, and major capital investments. He regularly presents at industry conferences and has guest lectured at the Wharton School and MIT's Sloan School of Management on the applications of real options analysis in the life sciences industry. He also coteaches a course on real options analysis in the Executive Masters in Technology Management program cosponsored by the School of Engineering and Applied Sciences and the Wharton School of the University of Pennsylvania. Mr. Kusiatin holds an MBA from the Wharton School and a B.Sc. in industrial engineering from the Engineering Academy of Denmark.
This model is used to evaluate two competing strategies for building biotech manufacturing capacity for a drug entering Phase I clinical trials. The company evaluating the investment decision intends to launch the drug into a high-price/low-volume market initially but has identified an additional low-price/high-volume market that it could address, given the right business conditions (net present value > 0). Expanding into this market will require three times the capacity required for the primary high-price/low-volume market. In addition, the company faces significant technical risks as it moves through lengthy and expensive Food and Drug Administration–mandated clinical trials; the company has the option to abandon further investments at any time should business conditions (such as additional competitors) become unfavorable. Given these many uncertainties and opportunities, which alternative should the company choose?
FIGURE 92.1 Biotech manufacturing model summary results
Monte Carlo simulation is used to simulate commercial uncertainty as well as the probabilities of successfully launching a drug currently in development, and to determine the manufacturing facility's capacity. A binomial approach is then used to value the inherent flexibility (strategic real options) of each alternative. For the specific details, refer to the model (Figure 92.1).
93. Industry Applications—Biotech Inlicensing Drug Deal Structuring

File Name: Biotech—Inlicensing Drug Deal Structuring
Location: Modeling Toolkit | Industry Applications | Biotech—Inlicensing Drug Deal Structuring
Brief Description: Illustrating how to identify and negotiate the ideal inlicensing terms for a compound in the biotech industry
Requirements: Modeling Toolkit, Risk Simulator
Special Contribution: This model was contributed by Uriel Kusiatin, a senior vice president of SKS Consulting (www.sksconsulting.us), a turnaround and strategy advisory firm that works with small-cap companies and their investors in the telecommunications, high-tech, and life sciences industries. Uriel has applied real options, Monte Carlo simulation, and optimization techniques to research and development portfolio decisions, licensing opportunities, and major capital investments. He regularly presents at industry conferences and has guest lectured at the Wharton School and MIT's Sloan School of Management on the applications of real options analysis in the life sciences industry. He also coteaches a course on real options analysis in the Executive Masters in Technology Management program cosponsored by the School of Engineering and Applied Sciences and the Wharton School of the University of Pennsylvania. Mr. Kusiatin holds an MBA from the Wharton School and a B.Sc. in industrial engineering from the Engineering Academy of Denmark.
This model is used to identify optimal deal terms for a company preparing to negotiate the inlicensing of a compound that has successfully completed Phase I clinical trials. The company needs to identify the combination of up-front payments, milestone payments, research and development funding, and royalty fees it should agree to that maximizes net present value (NPV) while reducing risk. Monte Carlo simulation is used to simulate commercial uncertainty as well as the probabilities of successfully launching the drug currently in development. Stochastic optimization is used to optimize deal NPV given budgetary constraints and risk tolerance thresholds in an environment of commercial and technical uncertainty. A binomial approach is then used to value the built-in strategic real options of the contingency-based deal terms. For more details, please refer to the model (Figure 93.1).
FIGURE 93.1 Inlicensing model summary
94. Industry Applications—Biotech Investment Valuation

File Name: Biotech—Biotech Investment Valuation
Location: Modeling Toolkit | Industry Applications | Biotech—Investment Valuation
Brief Description: Illustrating how to evaluate the decision to invest in the development of a drug given commercial uncertainties
Requirements: Modeling Toolkit, Risk Simulator
Special Contribution: This model was contributed by Uriel Kusiatin, a senior vice president of SKS Consulting (www.sksconsulting.us), a turnaround and strategy advisory firm that works with small-cap companies and their investors in the telecommunications, high-tech, and life sciences industries. Uriel has applied real options, Monte Carlo simulation, and optimization techniques to research and development portfolio decisions, licensing opportunities, and major capital investments. He regularly presents at industry conferences and has guest lectured at the Wharton School and MIT's Sloan School of Management on the applications of real options analysis in the life sciences industry. He also coteaches a course on real options analysis in the Executive Masters in Technology Management program cosponsored by the School of Engineering and Applied Sciences and the Wharton School of the University of Pennsylvania. Mr. Kusiatin holds an MBA from the Wharton School and a B.Sc. in industrial engineering from the Engineering Academy of Denmark.
This model is used to evaluate a decision to invest in the development of a drug given commercial uncertainties and technical risks. Monte Carlo simulation is used to simulate commercial uncertainty as well as the probabilities of successfully completing clinical trial phases for a drug currently in development. For specific details, please refer to the model (Figure 94.1).
FIGURE 94.1 Staged-gate investment process in drug development
95. Industry Application—Banking: Integrated Risk Management, Probability of Default, Economic Capital, Value at Risk, and Optimal Bank Portfolios

File Names: Multiple files (see chapter for details on example files used)
Location: Various places in the Modeling Toolkit
Brief Description: Illustrating multiple models in computing a bank's economic capital, value at risk, loss given default, and probability of default
Requirements: Modeling Toolkit, Risk Simulator
Modeling Toolkit Functions: B2ProbabilityDefaultMertonImputedAssetValue, B2ProbabilityDefaultMertonImputedAssetVolatility, B2ProbabilityDefaultMertonII, B2ProbabilityDefaultMertonDefaultDistance, B2ProbabilityDefaultMertonRecoveryRate, B2ProbabilityDefaultMertonMVDebt
With the new Basel II Accord, internationally active banks are now allowed to compute their own risk capital requirements using the internal ratings–based (IRB) approach. Not only is adequate risk capital analysis important as a compliance obligation, but it also gives banks the ability to optimize their capital by computing and allocating risks, performing performance measurements, executing strategic decisions, increasing competitiveness, and enhancing profitability. This chapter discusses the various approaches required to implement an IRB method, along with step-by-step models and methodologies for implementing and valuing economic capital, value at risk, probability of default, and loss given default, the key ingredients of an IRB approach, through the use of advanced analytics such as Monte Carlo and historical risk simulation, portfolio optimization, stochastic forecasting, and options analysis. It shows the use of the Risk Simulator and Modeling Toolkit software in computing and calibrating these critical input parameters. Instead of dwelling on theory or revamping what has already been written many times, this chapter focuses solely on the practical modeling applications of the key ingredients of the Basel II Accord. Specifically, these topics are addressed:
Probability of Default (structural and empirical models for commercial versus retail banking)
Loss Given Default and Expected Losses
Economic Capital and Portfolio Value at Risk (structural and risk-based simulation)
Portfolio Optimization
Hurdle Rates and Required Rates of Return
Please note that several other white papers exist and are available by request (send an e-mail request to [email protected]). They discuss such topics as:

White Paper: portfolio optimization, project selection, and optimal investment allocation
White Paper: credit analysis
White Paper: interest rate risk, foreign exchange risk, volatility estimation, and risk hedging
White Paper: exotic options and credit derivatives
To follow along with the analyses in this chapter, we assume that the reader already has Risk Simulator, Real Options SLS, and Modeling Toolkit (Basel II Toolkit) installed and is somewhat familiar with the basic functions of each software program. If not, please refer to www.realoptionsvaluation.com (click on Download) to watch the Getting Started videos, read some of the Getting Started case studies, or install the latest trial versions of these software programs. Alternatively, refer to Part I of this book for a primer on using these programs. Each topic discussed starts with a basic introduction to the appropriate methodologies, followed by practical hands-on modeling approaches and examples.
PROBABILITY OF DEFAULT Probability of default measures the degree of likelihood that the borrower of a loan or debt (the obligor) will be unable to make the necessary scheduled repayments on the debt, thereby defaulting on the debt. Should the obligor be unable to pay, the debt is in default, and the lenders of the debt have legal avenues to attempt a recovery of the debt or at least partial repayment of the entire debt. The higher the default probability a lender estimates a borrower to have, the higher the interest rate the lender will charge the borrower as compensation for bearing the higher default risk. Probability of default models are categorized as structural or empirical. Structural models look at a borrower’s ability to pay based on market data, such as equity prices, market and book values of asset and liabilities, as well as the volatility of these variables. Hence these structural models are used predominantly to estimate the probability of default of companies and countries, most applicable within the areas of commercial and industrial banking. In contrast, empirical models or credit scoring models are used to quantitatively determine the probability that a loan or loan holder will default, where the loan holder is an individual, by looking at historical portfolios of loans held and assessing individual characteristics (e.g., age, educational level, debt to income ratio, etc.). This second approach is more applicable to the retail banking sector.
Structural Models of Probability of Default Probability of default models assess the likelihood of default by an obligor. They differ from regular credit scoring models in several ways. First of all, credit scoring models usually are applied to smaller credits (individuals or small businesses), whereas default models are applied to larger credits (corporations or countries). Credit scoring models are largely statistical, regressing instances of default against various risk indicators, such as an obligor's income, home renter or owner status, years at a job, educational level, debt to income ratio, and so forth (discussed later in this chapter). Structural default models, in contrast, directly model the default process and typically are calibrated to market variables, such as the obligor's stock price, asset value, debt book value, or the credit spread on its bonds. Default models find many applications within financial institutions. They are used to support credit analysis and to determine the probability that a firm will default, to value counterparty credit risk limits, or to apply financial engineering techniques in developing credit derivatives or other credit instruments.

The first model illustrated in this chapter is used to solve the probability of default of a publicly traded company with equity and debt holdings, accounting for its volatilities in the market (Figure 95.1). This model is currently used by KMV and Moody's to perform credit risk analysis. This approach assumes that the book value of assets and the asset volatility are unknown and solved in the model; that the company is relatively stable; and that the growth rate of the company's assets is stable over time (e.g., the company is not in start-up mode). The model uses several simultaneous equations in options valuation coupled with optimization to obtain the implied underlying asset's market value and volatility in order to compute the probability of default and distance to default for the firm.
Illustrative Example: Structural Probability of Default Models on Public Firms It is assumed that at this point the reader is well versed in running simulations and optimizations in Risk Simulator. The example model used is the Probability of Default—External Options model, which can be accessed through Modeling Toolkit | Prob of Default | External Options Model (Public Company). To run this model (Figure 95.1), enter the required inputs:
Market value of equity (obtained from market data on the firm's capitalization, i.e., stock price times number of shares outstanding)
Market equity volatility (computed in the Volatility or LPVA worksheets in the model)
Book value of debt and liabilities (the firm's book value of all debt and liabilities)
Risk-free rate (the prevailing country's risk-free interest rate for the same maturity)
Anticipated growth rate of the company (the expected annualized cumulative growth rate of the firm's assets, which can be estimated using historical data over a long period of time, making this approach more applicable to mature companies than to start-ups)
Debt maturity (the debt maturity to be analyzed, or enter 1 for the annual default probability)
The comparable option parameters are shown in cells G18 to G23. All these comparable inputs are computed except for the Asset Value (the market value of assets) and the Volatility of Asset. You will need to input some rough estimates of these two as a starting point so that the analysis can be run. The rule of thumb is to set the volatility of the asset in G22 to be one-fifth to half of the volatility of equity computed in G10, and the market value of assets (G19) to be approximately the sum of the market value of equity and the book value of liabilities and debt (G9 and G11). Then an optimization needs to be run in Risk Simulator to obtain the desired outputs. To do this, set Asset Value and Volatility of Asset as the decision variables (make them continuous variables with a lower limit of 1% for volatility and $1 for asset value, as both these inputs can take on only positive values). Set cell G29, the absolute error value, as the objective to minimize. Finally, the constraint is that cell H33, the implied volatility in the default model, is set to exactly equal the numerical value of the equity volatility in cell G10. Run a static optimization using Risk Simulator.

FIGURE 95.1 Default probability model setup

If the model has a solution, the absolute error value in cell G29 will revert to zero (Figure 95.2). From here, the probability of default (measured in percent) and the distance to default (measured in standard deviations) are computed in cells G39 and G41. The relevant credit spread required can then be determined using the Credit Analysis—Credit Premium model or some other credit spread table (such as the Internal Credit Risk Rating model). The results indicate that the company has a 0.56% probability of default at 2.54 standard deviations to default, indicating good creditworthiness (Figure 95.2).

A simpler approach is to use the Modeling Toolkit functions instead of manually running the optimization. These functions have intelligent optimization routines embedded in them. For instance, the B2ProbabilityDefaultMertonImputedAssetValue and B2ProbabilityDefaultMertonImputedAssetVolatility functions perform multiple internal optimization routines of simultaneous stochastic equations to obtain their respective results, which are then used as inputs into the B2ProbabilityDefaultMertonII function to compute the probability of default. See the model for more specific details.

FIGURE 95.2 Default probability of a publicly traded entity
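The simultaneous-equations step can also be sketched directly. In the Merton framework, equity is valued as a call option on the firm's assets, and the two unknowns (asset value and asset volatility) are solved jointly from the observed equity value and equity volatility. The inputs below are hypothetical, and this compact sketch only illustrates the approach; it is not a reproduction of the worksheet's or the B2 functions' internals.

```python
# Minimal sketch of the Merton structural model: solve simultaneously for
# implied asset value and volatility, then compute the distance to default
# and probability of default. All inputs are hypothetical.
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

E, sigma_E = 3000.0, 0.45   # market value of equity ($M), equity volatility
D = 10000.0                 # book value of debt and liabilities ($M)
r, T, g = 0.05, 1.0, 0.07   # risk-free rate, horizon (years), asset growth rate

def equations(x):
    V, sigma_V = x
    d1 = (np.log(V / D) + (r + 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))
    d2 = d1 - sigma_V * np.sqrt(T)
    eq1 = V * norm.cdf(d1) - D * np.exp(-r * T) * norm.cdf(d2) - E  # equity = call on assets
    eq2 = V * norm.cdf(d1) * sigma_V - E * sigma_E                  # volatility linkage
    return [eq1, eq2]

# Rule-of-thumb starting point, as in the text: asset value near E + D and
# asset volatility at a fraction of the equity volatility
V, sigma_V = fsolve(equations, x0=[E + D, 0.3 * sigma_E])

dd = (np.log(V / D) + (g - 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))
print(f"distance to default: {dd:.2f} std devs, PD: {norm.cdf(-dd):.2%}")
```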
Illustrative Example: Structural Probability of Default Models on Private Firms Several other structural models exist for computing the probability of default of a firm. Specific models are used depending on the need and availability of data. In the previous example, the firm is a publicly traded firm, with stock prices and equity volatility that can be readily obtained from the market. In the next example, we assume that the firm is privately held, meaning that there would be no market equity data available. This example essentially computes the probability of default or the point of default for the company when its liabilities exceed its assets, given the asset’s growth rates and volatility over time (Figure 95.3). Before using this model, first review the model on external publicly traded company. Similar methodological parallels exist between these two models, and this example builds on the knowledge and expertise of the previous example. In Figure 95.3, the example firm with an asset value of $12M and a debt book value of $10M with significant growth rates of its internal assets and low volatility returns a 0.67% probability of default. Instead of relying on the valuation of the firm, external market benchmarks can be used, if such data are available. In Figure 95.4, we see that additional input assumptions are required, such as the market fluctuation (market returns and volatility) and relationship (correlation between
FIGURE 95.2 Default probability of a publicly traded entity
FIGURE 95.3 Default probability of a privately held entity

The model used is the Probability of Default—Merton Market Options model, accessible from Modeling Toolkit | Prob of Default | Merton Market Options Model (Industry Comparable).
Empirical Models of Probability of Default

As mentioned, empirical models of probability of default are used to compute an individual's default probability, applicable within the retail banking arena, where empirical, actual historical, or comparable data exist on past credit defaults. The data set in Figure 95.5 represents a sample of several thousand previous loans, credit, or debt issues. The data show whether each loan defaulted (0 for no default, 1 for default) as well as the specifics of each loan applicant: age, education level (1 to 3, indicating high school, university, or graduate professional education), years with current employer, and so forth. The idea is to model these empirical data to see which variables affect the default behavior of individuals, using Risk Simulator's Maximum Likelihood Model. The resulting model helps the bank or credit issuer compute the expected probability of default of an individual credit holder with specific characteristics.
FIGURE 95.4 Default probability of a privately held entity calibrated to market fluctuations
FIGURE 95.5 Empirical analysis of probability of default
Illustrative Example: Applying Empirical Models of Probability of Default

The example file is Probability of Default—Empirical and can be accessed through Modeling Toolkit | Prob of Default | Empirical (Individuals). To run the analysis, select the data (including the headers) and make sure that the data have the same length for all variables, without any missing or invalid data points. Then, using Risk Simulator, click on Risk Simulator | Forecasting | Maximum Likelihood Models. A sample set of results is provided in the MLE worksheet, complete with detailed instructions on how to compute the expected probability of default of an individual. The Maximum Likelihood Estimation (MLE) approach on a binary multivariate logistic analysis is used to model the dependent variable and determine the expected probability of belonging to a certain group. For instance, given a set of independent variables (e.g., age, income, and education level of credit card or mortgage loan holders), we can model the probability of default using MLE. A typical linear regression model is invalid here because the errors are heteroskedastic and nonnormal, and the resulting estimated probabilities will sometimes be above 1 or below 0. MLE analysis handles these problems using an iterative optimization routine. The computed results show the coefficients of the estimated MLE intercept and slopes.
FIGURE 95.6 MLE results

The coefficients estimated are actually logarithmic odds ratios and cannot be interpreted directly as probabilities; a quick but simple computation is required first. To estimate the probability of success of belonging to a certain group (e.g., predicting whether a debt holder will default given the amount of debt held), simply compute the estimated Y value using the MLE coefficients. Figure 95.6 illustrates that an individual with 8 years at the current employer and at the current address, a low 3% debt-to-income ratio, and $2,000 in credit card debt has a log odds ratio of −3.1549. The probability is recovered from the log odds ratio by computing:

P = exp(estimated Y) / [1 + exp(estimated Y)] = exp(−3.1549) / [1 + exp(−3.1549)] = 0.0409

So such a person has a 4.09% chance of defaulting on the new debt. Using this probability of default, you can then use the Credit Analysis—Credit Premium model to determine the additional credit spread to charge this person, given this default level and the customized cash flows anticipated from this debt holder.
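A small sketch of the inverse-logit computation above follows. The −3.1549 log odds ratio is the figure from the worked example; in practice it would come from the fitted MLE intercept and slope coefficients.

    # Map a log odds ratio to a probability via the inverse logit.
    import math

    def default_probability(log_odds):
        """Inverse logit: probability implied by a log odds ratio."""
        return math.exp(log_odds) / (1.0 + math.exp(log_odds))

    print(f"{default_probability(-3.1549):.4f}")  # prints 0.0409, a 4.09% chance of default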
LOSS GIVEN DEFAULT

As shown previously, probability of default is a key parameter for computing the credit risk of a portfolio. In fact, the Basel II Accord requires that the probability of default, as well as other key parameters such as the loss given default (LGD) and exposure at default (EAD), be reported. The reason is that a bank's expected loss is:

Expected Losses = (Probability of Default) × (Loss Given Default) × (Exposure at Default)

or simply:

EL = PD × LGD × EAD
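A one-line worked instance of this identity follows; the PD reuses the 0.56% from the earlier Merton example, while the LGD and EAD figures are hypothetical.

    # EL = PD x LGD x EAD with illustrative inputs.
    pd_, lgd, ead = 0.0056, 0.40, 10_000_000      # 0.56% PD, 40% LGD, $10M exposure
    print(f"Expected loss: ${pd_ * lgd * ead:,.0f}")   # Expected loss: $22,400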
PD and LGD are both percentages, whereas EAD is a value. Having shown how to compute PD, we now turn to some estimations of LGD. There are several methods used to estimate LGD. The first is a simple empirical approach where we set LGD = 1 − Recovery Rate. That is, whatever is not recovered at default is the loss at default, computed as the charge-off (net of recovery) divided by the outstanding balance:

LGD = 1 − Recovery Rate

or

LGD = Charge-offs (Net of Recovery) / Outstanding Balance at Default

Therefore, if market data or historical information are available, LGD can be segmented by various market conditions, types of obligor, and other pertinent segmentations, and can then be readily read off a chart. A second approach to estimating LGD is more attractive: if the bank has the information available, it can run econometric models to create the best-fitting model under an ordinary least squares (OLS) approach. With this approach, a single model can be determined and calibrated, and the same model can be applied under various conditions, with no data mining required. However, in most econometric models, a normal transformation must be performed first. Suppose the bank has some historical LGD data (Figure 95.7); the best-fitting distribution can be found using Risk Simulator (select the historical data, then click on Risk Simulator | Tools | Distributional Fitting (Single Variable) to perform the fitting routine). The result is a beta distribution for the thousands of LGD values. Then, using the Distribution Analysis tool in Risk Simulator, obtain the theoretical mean and standard deviation of the fitted distribution (Figure 95.8), and transform the LGD variable using the B2NormalTransform function in the Modeling Toolkit software. For instance, the value 49.69% will be transformed and normalized to 28.54% (a sketch of this fitting-and-transformation workflow appears after the list below). Using this newly transformed data set, you can run nonlinear econometric models to determine LGD. For instance, a partial list of independent variables that might be significant for a bank in determining and forecasting the LGD value might include:
- Debt to capital ratio
- Profit margin
- Revenue
- Current assets to current liabilities
- Risk rating at default and one year before default
- Industry
- Authorized balance at default
- Collateral value
- Facility type
- Tightness of covenant
- Seniority of debt
- Operating income to sales ratio (and other efficiency ratios)
- Total assets, total net worth, total liabilities
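The sketch below illustrates the fitting-and-transformation workflow just described: fit a beta distribution to historical LGD observations, then map each LGD value onto a normal scale before running econometric models. The internals of B2NormalTransform are not public, so a quantile-to-quantile mapping (beta CDF followed by a normal inverse CDF) is assumed here, and the output will not match the Toolkit's 28.54% figure exactly; the historical data are also simulated stand-ins.

    # Fit a beta distribution to LGD data, then normal-transform a value.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    lgd = rng.beta(2.0, 3.0, size=5000)          # stand-in for historical LGD data

    a, b, loc, scale = stats.beta.fit(lgd, floc=0, fscale=1)   # fit a beta on [0, 1]
    mean, var = stats.beta.stats(a, b, moments="mv")           # theoretical mean, variance

    def normal_transform(x):
        """Map an LGD value to a normal scale via its fitted beta quantile."""
        u = stats.beta.cdf(x, a, b)
        return stats.norm.ppf(u, loc=float(mean), scale=float(np.sqrt(var)))

    print(normal_transform(0.4969))              # transformed value for a 49.69% LGD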
FIGURE 95.7 Fitting historical LGD data
ECONOMIC CAPITAL AND VALUE AT RISK

Economic capital is critical to a bank because it links a bank's earnings and returns to the risks that are specific to a business line or business opportunity. In addition, these economic capital measurements can be aggregated into a portfolio of holdings. Value at Risk (VaR) is used to understand how the entire organization is affected by the various risks of each holding, aggregated into a portfolio after accounting for cross-correlations among the holdings. VaR measures the maximum possible loss at some predefined probability level (e.g., 99.90%) over some holding period or time horizon (e.g., 10 days). Senior management at the bank usually selects the probability level or confidence interval, which reflects the board's risk appetite; stated another way, the probability level can be defined as the bank's desired probability of surviving per year. The holding period is usually chosen to coincide with the time it takes to liquidate a loss position. VaR can be computed in several ways. Two main families of approaches exist: structural closed-form models and Monte Carlo risk simulation. We will showcase both methods, starting with the structural models.
FIGURE 95.8 Distributional Analysis tool
The second and much more powerful approach is Monte Carlo risk simulation. Instead of simply correlating individual business lines or assets, entire probability distributions can be correlated using mathematical copulas and simulation algorithms in Risk Simulator. In addition, tens to hundreds of thousands of scenarios can be generated by simulation, providing a very powerful stress-testing mechanism for estimating VaR. Distributional-fitting methods are applied to reduce the thousands of data points into their appropriate probability distributions, allowing the modeling to be handled with greater ease.
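A minimal sketch of this simulation approach follows: correlated risks are generated with a normal copula via Cholesky decomposition, and the VaR is read off the simulated loss distribution. The positions, volatilities, and correlation matrix are hypothetical stand-ins for the fitted distributions a full Risk Simulator run would use.

    # Simulation-based portfolio VaR with correlated normal risks.
    import numpy as np

    rng = np.random.default_rng(42)
    positions = np.array([10e6, 5e6, 8e6])       # dollar holdings per asset
    vols = np.array([0.20, 0.35, 0.25])          # annualized volatilities
    corr = np.array([[1.0, 0.3, 0.1],
                     [0.3, 1.0, 0.4],
                     [0.1, 0.4, 1.0]])
    horizon = 10 / 252                           # 10-day holding period, in years

    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((100_000, 3)) @ L.T  # correlated standard normals
    pnl = (z * vols * np.sqrt(horizon) * positions).sum(axis=1)

    var_99 = -np.percentile(pnl, 1)              # 99% VaR read off the left tail
    print(f"10-day 99% VaR: ${var_99:,.0f}")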
Illustrative Example: Structural VaR Models

The first VaR example model used is Value at Risk—Static Covariance Method, accessible through Modeling Toolkit | Value at Risk | Static Covariance Method. This model computes the portfolio's VaR at a given percentile for a specific holding period, after accounting for the cross-correlation effects between the assets (Figure 95.9). The daily volatility is the annualized volatility divided by the square root of the number of trading days per year. Typically, positive correlations carry a higher VaR than zero-correlation asset mixes, whereas negative correlations reduce the total risk of the portfolio through the diversification effect (Figures 95.9 and 95.10).
FIGURE 95.9 Computing VaR using the structural covariance model

The approach used is a portfolio VaR with correlated inputs, where the portfolio holds multiple assets with different amounts and volatilities, and each asset is correlated with the others. The covariance or correlation structural model is used to compute the VaR for a given holding period or horizon and percentile value (typically 10 days at 99% confidence). Of course, the example illustrates only a few assets, business lines, or credit lines for simplicity's sake. Nonetheless, using
FIGURE 95.10 Effects of different correlations
the VaR function (B2VaRCorrelationMethod) in the Modeling Toolkit, many more lines, assets, or businesses can be modeled.
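For comparison with the simulation sketch earlier, here is the closed-form variance-covariance calculation the structural model performs, with the same hypothetical inputs; the actual B2VaRCorrelationMethod signature is not reproduced here.

    # Closed-form variance-covariance VaR.
    import numpy as np
    from scipy.stats import norm

    amounts = np.array([10e6, 5e6, 8e6])                       # asset holdings, dollars
    daily_vols = np.array([0.20, 0.35, 0.25]) / np.sqrt(252)   # annual vol -> daily vol
    corr = np.array([[1.0, 0.3, 0.1],
                     [0.3, 1.0, 0.4],
                     [0.1, 0.4, 1.0]])

    dollar_vols = amounts * daily_vols
    cov = corr * np.outer(dollar_vols, dollar_vols)            # dollar covariance matrix
    portfolio_sd = np.sqrt(np.ones(3) @ cov @ np.ones(3))

    days, confidence = 10, 0.99
    var = norm.ppf(confidence) * portfolio_sd * np.sqrt(days)
    print(f"{days}-day {confidence:.0%} VaR: ${var:,.0f}")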
Illustrative Example: VaR Using Monte Carlo Simulation

The model used is Value at Risk—Portfolio Operational and Credit Risk VaR Capital Adequacy and is accessible through Modeling Toolkit | Value at Risk | Portfolio Operational and Credit Risk VaR Capital Adequacy. This model shows how operational risk and credit risk parameters are fitted to statistical distributions, and how the resulting distributions are modeled in a portfolio of liabilities to determine the VaR (99.50th percentile certainty) for the capital requirement under Basel II. It is assumed that the historical data of the operational risk impacts (Historical Data worksheet) are obtained through econometric modeling of the key risk indicators. The Distributional Fitting Report worksheet is the result of running a distributional fitting routine in Risk Simulator to obtain the appropriate distribution for the operational risk parameter. Using the resulting distributional parameters, we model each liability's capital requirements within an entire portfolio. Correlations can also be entered, if required, between pairs of liabilities or business units. The resulting Monte Carlo simulation results show the VaR capital requirements. Note that an appropriate, empirically based VaR cannot be obtained unless distributional fitting and risk-based simulations are first run; the VaR is obtained only by running simulations. To perform the distributional fitting, follow these six steps:

1. In the Historical Data worksheet (Figure 95.11), select the data area (cells C5:L104) and click on Risk Simulator | Tools | Distributional Fitting (Single Variable).
2. Browse through the fitted distributions, select the best-fitting distribution (in this case, the exponential distribution in Figure 95.12), and click OK.
3. You may now set the assumptions on the Operational Risk Factors with the exponential distribution (fitted results show Lambda = 1) in the Credit Risk worksheet. Note that the assumptions have already been set for you in advance. You may set an assumption by going to cell F27 and clicking on Risk Simulator | Set Input Assumption, selecting the Exponential distribution, entering 1 for the Lambda value, and clicking OK. Continue this process for the remaining cells in column F, or simply perform a Risk Simulator Copy and Risk Simulator Paste on the remaining cells.
   A. Note that since the cells in column F have assumptions set, you will first have to clear them if you wish to reset and copy/paste parameters. You can do so by first selecting cells F28:F126 and clicking on the Remove Parameter icon, or select Risk Simulator | Remove Parameter.
   B. Then select cell F27, click on the Risk Simulator Copy icon or select Risk Simulator | Copy Parameter, and then select cells F28:F126 and click on the Risk Simulator Paste icon or select Risk Simulator | Paste Parameter.
4. Next, set additional assumptions, such as the probability of default using the Bernoulli distribution (column H) and the loss given default (column J). Repeat the procedure in step 3 if you wish to reset the assumptions.
FIGURE 95.11 Sample historical bank loans
FIGURE 95.12 Data-fitting results

5. Run the simulation by clicking on the Run icon or clicking on Risk Simulator | Run Simulation.
6. Obtain the Value at Risk by going to the forecast chart once the simulation is done running, selecting Left-Tail, and typing in 99.50. Hit Tab on the keyboard to enter the confidence value and obtain the VaR of $25,959 (Figure 95.13).

Another example of VaR computation is shown next, using the model Value at Risk—Right Tail Capital Requirements, available through Modeling Toolkit | Value at Risk | Right Tail Capital Requirements.
FIGURE 95.13 Simulated forecast results and the 99.50% value at risk

This model shows the capital requirements per Basel II (99.95th percentile capital adequacy based on a specific holding period's VaR). Without running risk-based historical and Monte Carlo simulation in Risk Simulator, the required capital is $37.01M (Figure 95.14), as compared to only $14.00M using a correlated simulation (Figure 95.15). The difference is due to the cross-correlations between assets and business lines, which can be modeled only using Risk Simulator. The lower VaR is preferred, as banks are then required to hold less capital and can reinvest the freed capital in various profitable ventures, thereby generating higher profits.
FIGURE 95.14 Right-tail VaR model
FIGURE 95.15 Simulated results of the portfolio VaR
To run the model, follow these three steps:

1. Click on Risk Simulator | Run Simulation. If you had other models open, make sure you first click on Risk Simulator | Change Simulation Profile and select the Tail VaR profile before starting.
2. When the simulation is complete, select Left-Tail in the forecast chart, enter 99.95 in the Certainty box, and hit Tab on the keyboard to obtain the Value at Risk of $14.00M for this correlated simulation.
3. Note that the assumptions have already been set for you in advance in the model, in cells C6:C15. However, you may set them again by going to cell C6, clicking on Risk Simulator | Set Input Assumption, selecting your distribution of choice (or using the default normal distribution, or performing a distributional fitting on historical data), and then clicking OK. Continue this process for the remaining cells in column C. You may also decide to first remove the parameters of these cells in column C and set your own distributions. Further, correlations can be set manually when assumptions are set (Figure 95.16) or by going to Risk Simulator | Edit Correlations (Figure 95.17) after all the assumptions are set.

If risk simulation were not run, the VaR or economic capital required would have been $37M, as opposed to only $14M. All cross-correlations between business lines have been modeled, as are stress and scenario tests, and thousands and thousands
FIGURE 95.16 Setting correlations one at a time
FIGURE 95.17 Setting correlations using the correlation matrix routine
of possible iterations are run. Individual risks are now aggregated into a cumulative portfolio-level VaR.
EFFICIENT PORTFOLIO ALLOCATION AND ECONOMIC CAPITAL VaR

As a side note, performing portfolio optimization can actually reduce a portfolio's VaR. We start by introducing the concept of stochastic portfolio optimization through an illustrative hands-on example. Then, using this portfolio optimization technique, we apply it to four business lines or assets to compute the VaR of an unoptimized versus an optimized portfolio of assets and see the difference in computed VaR. You will note that, at the end, the optimized portfolio bears less risk and has a lower required economic capital.
Illustrative Example: Stochastic Portfolio Optimization

The optimization model used to illustrate the concepts of stochastic portfolio optimization is Optimization—Stochastic Portfolio Allocation and can be accessed via Modeling Toolkit | Optimization | Stochastic Portfolio Allocation. This model shows four asset classes with different risk and return characteristics. The idea is to find the best portfolio allocation such that the portfolio's bang for the buck, or returns-to-risk ratio, is maximized. That is, optimization is used to allocate 100% of an individual's investment among several different asset classes (e.g., different types of mutual funds or investment styles: growth, value, aggressive growth, income, global, index, contrarian, momentum, etc.). This model is different from others in that several simulation assumptions (risk and return values for each asset) exist, as seen in Figure 95.18. That is, a simulation is run, then optimization is executed, and the entire process is repeated multiple times to obtain distributions of each decision variable. The entire analysis can be automated using stochastic optimization.
FIGURE 95.18 Asset allocation model ready for stochastic optimization
In order to run an optimization, several key specifications on the model have to be identified first:

- Objective: Maximize the return-to-risk ratio (C12)
- Decision variables: Allocation weights (E6:E9)
- Restrictions on decision variables: Minimum and maximum required (F6:G9)
- Constraints: Total portfolio allocation weights of 100% (E11 is set to 100%)
- Simulation assumptions: Return and risk values (C6:D9)

The model shows the various asset classes. Each asset class has its own set of annualized returns and annualized volatilities. These return and risk measures are annualized so that they can be compared consistently across different asset classes. Returns are computed using the geometric average of the relative returns, while the risks are computed using the logarithmic relative stock returns approach. Column E holds the allocation weights, the decision variables that are tweaked and tested such that the total weight is constrained at 100% (cell E11). Typically, to start the optimization, we set these cells to a uniform value; in this case, cells E6 to E9 are set at 25% each. In addition, each decision variable may have specific restrictions on its allowed range. In this example, the lower and upper allocations allowed are 10% and 40%, as seen in columns F and G, meaning that each asset class can have its own allocation boundaries. Column H shows the return-to-risk ratio, which is simply the return percentage divided by the risk percentage; the higher this value, the higher the bang for the buck. The remaining sections of the model show the individual asset class rankings by return, risk, return-to-risk ratio, and allocation. In other words, these rankings show at a glance which asset class has the lowest risk, the highest return, and so forth.
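A static sketch of this allocation problem follows: maximize the portfolio return-to-risk ratio subject to the weights summing to 100% and the 10%-40% bounds per asset class. The return and risk figures are hypothetical stand-ins for cells C6:D9, and cross-correlations are ignored for brevity.

    # Maximize the portfolio return-to-risk ratio under bounds and a budget constraint.
    import numpy as np
    from scipy.optimize import minimize

    returns = np.array([0.1021, 0.1118, 0.1076, 0.0919])   # annualized returns (hypothetical)
    risks = np.array([0.1212, 0.1625, 0.1316, 0.0890])     # annualized volatilities (hypothetical)

    def neg_return_to_risk(w):
        port_ret = w @ returns
        port_risk = np.sqrt(w @ np.diag(risks ** 2) @ w)   # no cross-correlations assumed
        return -port_ret / port_risk                       # negate so minimizing maximizes the ratio

    res = minimize(neg_return_to_risk, x0=np.full(4, 0.25),
                   bounds=[(0.10, 0.40)] * 4,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    print(res.x.round(4), "ratio:", round(-res.fun, 4))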
RUNNING AN OPTIMIZATION

To run this model, simply click on Risk Simulator | Optimization | Run Optimization. Alternatively, and for practice, you can set up the model using these seven steps:

1. Start a new profile (Risk Simulator | New Profile).
2. For stochastic optimization, set distributional assumptions on the risk and returns for each asset class. That is, select cell C6, set an assumption (Risk Simulator | Set Input Assumption), and make your own assumption as required. Repeat for cells C7 to D9.
3. Select cell E6 and define the decision variable (Risk Simulator | Optimization | Decision Variables, or click on the Define Decision icon), make it a Continuous Variable, and then link the decision variable's name and minimum/maximum required values to the relevant cells (B6, F6, G6).
4. Then use the Risk Simulator Copy on cell E6, select cells E7 to E9, and use the Risk Simulator Paste (Risk Simulator | Copy Parameter and Risk Simulator | Paste Parameter, or use the Risk Simulator copy and paste icons). Make sure you do not use Excel's copy and paste.
5. Next, set up the optimization's constraints by selecting Risk Simulator | Optimization | Constraints, selecting ADD, selecting cell E11, and making it equal 100% (the total allocation; do not forget the % sign).
6. Select cell C12, the objective to be maximized, and make it the objective: Risk Simulator | Optimization | Set Objective, or click on the O icon.
7. Run the optimization by going to Risk Simulator | Optimization | Run Optimization. Review the different tabs to make sure that all the required inputs in steps 2 and 3 are correct. Select Stochastic Optimization and let it run for 500 trials repeated 20 times.
You may also try other optimization routines where:
- Static optimization is run on a static model, where no simulations are performed. This optimization type is applicable when the model is assumed to be known and no uncertainties exist. A static optimization can also be run first, to determine the optimal portfolio and its corresponding optimal allocation of decision variables before more advanced optimization procedures are applied. For instance, before running a stochastic optimization problem, a static optimization is run to determine whether solutions to the optimization problem exist before a more protracted analysis is performed.
- Dynamic optimization is applied when Monte Carlo simulation is used together with optimization. Another name for this procedure is simulation-optimization. That is, a simulation is run for N trials, and then an optimization process is run for M iterations until the optimal results are obtained or an infeasible set is found. Using Risk Simulator's Optimization module, you can choose which forecast and assumption statistics to use and replace in the model after the simulation is run; these forecast statistics can then be applied in the optimization process. This approach is useful when you have a large model with many interacting assumptions and forecasts, and when some of the forecast statistics are required in the optimization.
- Stochastic optimization is similar to the dynamic optimization procedure except that the entire dynamic optimization process is repeated T times. The result is a forecast chart of each decision variable with T values. In other words, a simulation is run and the forecast or assumption statistics are used in the optimization model to find the optimal allocation of decision variables. Then another simulation is run, generating different forecast statistics, and these new updated values are then optimized, and so forth. Hence, each of the final decision variables will have its own forecast chart, indicating the range of the optimal decision variables. For instance, instead of obtaining single-point estimates as in the dynamic optimization procedure, you can now obtain a distribution of the decision variables, and hence a range of optimal values for each decision variable; this is known as stochastic optimization.
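A conceptual sketch of the stochastic optimization loop follows: simulate the return/risk inputs, solve the allocation problem, and repeat T times to build a distribution for each decision variable. It reuses the setup of the earlier allocation sketch; the perturbation sizes are hypothetical.

    # Stochastic optimization: repeat (simulate inputs -> optimize weights) T times.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    base_returns = np.array([0.1021, 0.1118, 0.1076, 0.0919])
    base_risks = np.array([0.1212, 0.1625, 0.1316, 0.0890])
    weight_draws = []

    for _ in range(20):                                    # T = 20 repetitions
        sim_ret = rng.normal(base_returns, 0.01)           # simulation step
        sim_risk = rng.normal(base_risks, 0.005)

        def neg_ratio(w):
            return -(w @ sim_ret) / np.sqrt(w @ np.diag(sim_risk ** 2) @ w)

        res = minimize(neg_ratio, np.full(4, 0.25), bounds=[(0.10, 0.40)] * 4,
                       constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
        weight_draws.append(res.x)                         # optimization step

    weights = np.array(weight_draws)
    print("mean weights:", weights.mean(axis=0).round(4))
    print("std devs:   ", weights.std(axis=0).round(4))    # spread of the optimal allocations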
FIGURE 95.19 Simulated results from the stochastic optimization

Stochastic optimization is performed by first running a simulation and then running the optimization, and then repeating the whole analysis multiple times. The result is a distribution of each decision variable rather than a single-point estimate (Figure 95.19). This means that instead of saying you should invest 30.57% in Asset 1, the optimal decision is to invest between 30.10% and 30.99%, as long as the total portfolio sums to 100%. This way, the optimization results provide
management or decision makers a range of flexibility in the optimal decisions. Refer to Chapter 11 of Modeling Risk, 2nd Edition, by Dr. Johnathan Mun (Hoboken, NJ: John Wiley & Sons, 2006), for more detailed explanations about this model, the different optimization techniques, and an interpretation of the results. Chapter 11’s appendix also details how the risk and return values are computed.
Illustrative Example: Portfolio Optimization and Portfolio VaR

Now that we understand the concepts of optimized portfolios, let us see what the effects on computed economic capital are, through the use of a correlated portfolio VaR. This model uses Monte Carlo simulation and optimization routines in Risk Simulator to minimize the VaR of a portfolio of assets (Figure 95.20). The file used is Value at Risk—Optimized and Simulated Portfolio VaR, accessible via Modeling Toolkit | Value at Risk | Optimized and Simulated Portfolio VaR. In this example, we intentionally used only four asset classes to illustrate the effects of an
FIGURE 95.20 Computing VaR with simulation
FIGURE 95.21 Nonoptimized VaR
optimized portfolio. In real life, this can be extended to cover a multitude of asset classes and business lines. Here we illustrate the use of a left-tail VaR, as opposed to a right-tail VaR, but the concepts are similar. First, simulation is used to determine the 90% left-tail VaR. A 90% left-tail probability means that there is a 10% chance that losses will exceed this VaR for the specified holding period. With an equal allocation of 25% across the four asset classes, the VaR is determined using simulation (Figure 95.21). The annualized
returns are uncertain and hence simulated. The VaR is then read off the forecast chart. Next, optimization is run to find the best portfolio allocation, subject to the 100% allocation across the four asset classes, that maximizes the portfolio's bang for the buck (returns-to-risk ratio). The resulting optimized portfolio is then simulated once again, and the new VaR is obtained (Figure 95.22). The VaR of the optimized portfolio is much lower than that of the unoptimized portfolio.
FIGURE 95.22 Optimal portfolio’s VaR through optimization and simulation returns a much lower capital requirement
HURDLE RATES AND DISCOUNT RATES

Another related item in the discussion of risk in the context of the Basel II Accord is the issue of hurdle rates, the required rate of return on investment that is sufficient to justify the amount of risk undertaken in the portfolio. There is a nice theoretical connection between uncertainty and volatility whereby the discount rate of a specific risk portfolio can be obtained. In a financial model, the old axiom of high risk, high return is captured through the discount rate: the higher the risk of a project, the higher the discount rate should be to risk-adjust the project so that all projects are comparable. There are two methods of computing the hurdle rate. The first is an internal model, where the VaR of the portfolio is computed first. This economic capital is then compared to the market risk premium. That is, we have:

Hurdle Rate = (Market Return − Risk-Free Return) / Risk Capital
That is, assuming a similar set of comparable investments can be found in the market, based on tradable assets, the market return is obtained. Using the bank's internal cash flow models, all future cash flows can be discounted at the risk-free rate to determine the risk-free return. Finally, the difference is divided by the VaR risk capital to determine the required hurdle rate. This concept is very similar to the capital asset pricing model (CAPM), which is often used to compute the appropriate discount rate for a discounted cash flow model. Weighted average cost of capital, hurdle rates, multiple asset pricing models, and arbitrage pricing models are alternatives based on similar principles. The second approach is the use of the CAPM itself to determine the hurdle rate.
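For the first, internal-model approach, a quick numeric instance follows; all dollar figures are hypothetical.

    # Internal-model hurdle rate: excess return divided by VaR-based capital.
    market_return = 12_000_000      # return on a comparable set of tradable assets
    risk_free_return = 5_000_000    # cash flows discounted at the risk-free rate
    risk_capital = 35_000_000       # VaR-based economic capital
    hurdle_rate = (market_return - risk_free_return) / risk_capital
    print(f"Hurdle rate: {hurdle_rate:.2%}")   # Hurdle rate: 20.00%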
96. Industry Application—Electric/Utility
Optimal Power Contract Portfolios
File Name: Electric Utility—Electricity Contract Risk
Location: Modeling Toolkit | Industry Application | Electric Utility—Electricity Contract Risk
Brief Description: Modeling electric utility contracts under uncertainty and performing portfolio optimization to obtain the efficient portfolio allocation
Requirements: Modeling Toolkit, Risk Simulator
Special Credits: This case study was contributed by Elio Cuneo Hervieux, CRM, an electrical civil engineer with an MBA in finance. He is the energy supply contracts manager in an energy generation company in northern Chile, an area in which important mining companies are located. Besides being responsible for looking after the correct application of the contractual agreements with clients, he specializes in the analysis and definition of risk metrics for each contract and for the company's portfolio. He can be contacted at [email protected].
Electricity is generated through different production methods, either hydropower (reservoirs and rivers) or thermal methods (where a great variety of technologies exists, depending on the type of fuel used). A common characteristic of the units that produce electricity at competitive prices is that they are very capital intensive. In addition, the inputs used to generate energy may show important price variations, as is the case with thermal power stations. Another potential source of volatility that should be considered is hydrology, specifically, the availability of water for hydropower generation. In Chilean electricity supply contracts, two items make up the electricity billing: the first is associated with power levels, and the second with energy levels. Billings associated with power levels are related to the peak of the client's demand, expressed in US$/kW-month (U.S. dollars per kilowatt, per month of usage). The amount is also related to the investments in generation made by the energy producer, developed at the client's request, or an alternative value is used in accordance with the unit price of power traded in the respective markets; the latter case corresponds to the electricity market in Chile. Billings associated with energy levels are related to the type of fuels used for energy generation, or to a future projection of spot market prices, or to a mixture of both.
Since billings to the client consider these two key variables, to obtain the prospective margins and the associated profitability it is necessary to assign different weights to each variable. In practice, there is no consensus in this respect. From the point of view of the electricity sale, the margin can be divided into two components with different risk characteristics: the margin for power and the margin for energy.

1. Margin for power. Once the power rate for the client is fixed, the respective margin for this variable is determined by the level of the client's maximum demand with respect to the cost levels at which power is traded in the electricity market. If the client maintains a stable level of maximum demand, the margin will be maintained.
2. Margin for energy. This margin corresponds to the difference between the income associated with energy and the costs incurred during energy production. For example, for an energy generator using only one type of fuel, the energy rate would be updated according to the price variations of that input. If the producer maintains a generation matrix with units that use diverse types of fuels (a diversified fuel matrix), the energy rate would be updated using a polynomial function that reflects the variation in input prices according to the percentage with which each input participates in the generation portfolio. In terms of risk, this last case presents a particularity: the polynomial rate-updating function is expected to represent the way production costs move, so that the updated rate provides a proper hedge.

In analyzing the risk of the polynomial function for updating the energy rate for a hypothetical company, we assume the use of typical input prices as well as standard technical characteristics for thermal electricity generation stations.
EFFICIENT FRONTIER OF GENERATION

According to the theory of efficient investment portfolios, when a portfolio of different assets with diverse profitability and risk levels exists, the most efficient investment portfolio is obtained when the selected combination of assets (given by the percentage of investment allocated to each asset) is located at some point along the efficient frontier of all feasible portfolio combinations. In the case of electricity generation, the same output can be generated from different inputs. The risks associated with generation costs can be analyzed using efficient frontier theory, where each input represents an asset, the production cost represents profitability, and the risks are associated with the volatility of each input's price. The efficient frontier of generation (EFG) should be representative of the operation of the power stations over a certain period of time; typically, 12 months is appropriate, because it is necessary to consider the periods when units are out of service for programmed maintenance or forced outage (FOR), and when it is necessary to buy backup energy from third parties. Usually this energy has risk characteristics different from those of the energy generated by a unit that is forcibly stopped.
TABLE 96.1 Technical Aspects

Fuel Type     Net MW   Heat Rate         COYM           FOR   Maintenance
Natural Gas   220      7.50 MMBTU/MWh    US$ 2.00/MWh   7%    35 Days/yr.
Coal          150      0.41 Ton/MWh      US$ 2.00/MWh   8%    40 Days/yr.
TABLE 96.2 Economic Variables

Fuel Type     Volatility   Fuel Price        COYM           Variable Cost
Natural Gas   20%          US$ 2.00/MMBTU    US$ 2.00/MWh   US$ 17.00/MWh
Coal          30%          US$ 60.00/Ton     US$ 2.00/MWh   US$ 26.60/MWh
Spot Market   60%          —                 —              US$ 30.00/MWh
If we add the fact that input prices have their own volatility, besides the changes that the industry is exposed to in the local market, it is clear that the EFG results will be dynamic rather than stationary. As an example of obtaining the EFG, let us consider a company that has generating units with the characteristics indicated in Tables 96.1 and 96.2. The EFG for the generation assets is obtained in the example model as well as in Figure 96.1, which summarizes the results. From Figure 96.1, it is interesting to highlight these points:
- Under normal operating conditions with the two power stations in service, the EFG moves between points A and B.
- In any electricity supply contract, the updated energy rate should lie above the line that unites points A and B.
FIGURE 96.1 Typical generation portfolio curve with mix natural gas, coal, and marginal cost (CMg)
- The EFG for the assets sustains a change, represented by the curve that unites points A and C and, similarly, the curve that unites points B and C. This change originates from the fact that the units must undergo maintenance (the generation cost of the unit that goes out of service is replaced by the purchase of the same energy block in the spot market, or of a similar block through contracts with third parties).
- The EFG curves clearly indicate the changes in risk position that the hypothetical company sustains when units are out of service, compared to the controlled risk condition when the units operate normally.

For the hypothetical company, and considering the previously stated technical and economic variables at the operational points of each curve, we find:
Frontier    Average Cost      Volatility
NG - Coal   US$ 20.89/MWh     18.63%
Coal - CMg  US$ 22.27/MWh     27.08%
NG - CMg    US$ 28.62/MWh     45.99%
The EFG curves shown in Figure 96.1 represent a static operating situation for the units in the market, which allows you to identify the aspects that must be taken into account when fixing the energy rate upgrade scheme against the risk levels the company may face. The case in point corresponds to the risk that the hypothetical company faces when one of its units is in programmed maintenance; this risk can notably affect the annual margins for energy.

Since the EFG of the generation assets represents a static operating situation of the power stations, it is necessary to run stochastic simulations that stress test the indexation polynomial considered for the energy rate, in order to detect possible scenarios where the originally estimated margin is not reached. If such scenarios exist, there is a certain probability of obtaining lower margins than originally estimated.
ILLUSTRATIVE EXAMPLE

To illustrate, suppose there are two energy rate-upgrade outlines for a hypothetical client whose demand reaches the 370 MW level, with a monthly load factor of about 90%, conditional on the typical electricity demand of a mining company. To cover this client's demand, there are two possible rate-upgrade outlines, each associated with the kind of fuel used for electricity production and with the generation assets of the hypothetical company. The analysis seeks to compare the two rate-upgrade outlines, determining the impact of each on the prospective annual margins as well as its risks. The goal of the analysis is to generate a recommendation on which outline the electricity-producing company should choose.
Outline 1

This first upgrade outline is typical of electricity supply based on thermal stations and considers, as the basis for the energy rate upgrade, the variation in the prices of the inputs with which the electricity is generated. The shared percentages considered correspond to the participation of the different inputs used by the electricity producer in the generation process (here, natural gas and coal).

Block 1: 59.45%, 220 MW of the client's energy consumption, with rates based on the use of natural gas, determined as:

EG(m) = EG0 × (PGm / PG0)

Block 2: 40.55%, 150 MW of the client's energy consumption, with rates based on the use of coal, determined as:

EC(m) = EC0 × (PCm / PC0)
Outline 2

Besides considering the variation in input prices, this outline deals with the effects of generation units being unavailable due to programmed maintenance or forced outages. In practical terms, it considers the EFG associated with the generation assets of the hypothetical company.

Block 1: 49.60%, 184 MW of the client's energy consumption, with rates based on the use of natural gas, determined as:

EG(m) = EG0 × (PGm / PG0)

Block 2: 32.85%, 122 MW of the client's energy consumption, with rates based on the use of coal, determined as:

EC(m) = EC0 × (PCm / PC0)

Block 3: 17.55%, 64 MW of the client's energy consumption, with rates based on the use of energy purchased in the spot market, determined as:

ECMg(m) = ECMg0 × (CMgm / CMg0)
where we define the variables as:

EG0 = Base value of the energy rate for Block 1 of the client's consumption, considering generation from natural gas, in US$/MWh
EC0 = Base value of the energy rate for Block 2 of the client's consumption, considering generation from coal, in US$/MWh
TABLE 96.3 Resolution of the considered percentages

Unit        MW    FOR   Maint Days   Net days   MW days   %
NG          220   7%    35/yr.       304        66,979    49.60%
Coal        150   8%    40/yr.       296        44,370    32.85%
Spot days                            130        23,701    17.55%
Total       370                                 135,050   100.00%
ECMg0 = Base value of the energy rate for Block 3 of the client's consumption, considering purchases in the spot market, in US$/MWh
PG0 = Base price of natural gas, in US$/MMBTU
PC0 = Base price of coal, base 6000 kcal, in US$/Ton
CMg0 = Base value of the spot market price, in US$/MWh
EG(m) = Value of the Block 1 energy rate updated to period m
EC(m) = Value of the Block 2 energy rate updated to period m
ECMg(m) = Value of the Block 3 energy rate updated to period m
PGm = Natural gas price valid for period m, in US$/MMBTU
PCm = Coal price valid for period m, in US$/Ton
CMgm = Spot market price valid for period m, in US$/MWh

The percentages associated with each input were determined by considering the effects of the programmed maintenance days for each unit as well as the forced outage percentages associated with each unit. Table 96.3 summarizes the resolution of the considered percentages. For the case study, the numerical values listed in Table 96.4 are indicated for the different variables that make up the energy rate upgrade polynomial. Risk Simulator was used to generate and run the stochastic risk simulations of the impact of the indexation outlines on the portfolio energy margin. The parameters of the different variables that represent risks are set as probability distributions, as shown in Table 96.5.
TABLE 96.4 Polynomial upgrade variable values

Variable   No. 1   No. 2
EG0        20.40   20.40
EC0        31.92   31.92
ECMg0      —       36.00
PG0        2.00    2.00
PC0        60.00   60.00
CMg0       —       30.00
TABLE 96.5 Distributional assumptions for simulation

Items      Minimum   Mid     Maximum   Distribution
NG         1.8       2.0     3.0       Triangular
FOB Coal   30        45      60        Triangular
Freight    10        22.5    35        Triangular
Bunkers    240       302     360       Triangular
Diesel     400       521     650       Triangular
MW SEP     2,828     2,900   2,973     Triangular
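A hedged sketch of the simulation inputs follows: draw fuel and spot prices from triangular assumptions (the NG and FOB Coal rows of Table 96.5), apply the Outline 2 indexation blocks with the base values of Table 96.4, and read off percentiles of the upgraded rate. The spot-price distribution is a hypothetical stand-in, and the full model's margin and cost mechanics are reduced here to the rate-indexation step alone.

    # Simulate the Outline 2 upgraded energy rate under triangular price assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000

    pg = rng.triangular(1.8, 2.0, 3.0, n)        # natural gas price, US$/MMBTU
    pc = rng.triangular(30.0, 45.0, 60.0, n)     # FOB coal price, US$/Ton
    cmg = rng.triangular(20.0, 30.0, 45.0, n)    # spot price: hypothetical triangular

    eg = 20.40 * pg / 2.00                       # Block 1 rate, EG(m)
    ec = 31.92 * pc / 60.00                      # Block 2 rate, EC(m)
    ecmg = 36.00 * cmg / 30.00                   # Block 3 rate, ECMg(m)
    rate = 0.4960 * eg + 0.3285 * ec + 0.1755 * ecmg

    print(np.percentile(rate, [5, 50, 95]))      # upgraded energy rate, US$/MWh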
The results for the portfolio's energy margins, expressed in US$/MWh under each energy rate-upgrade outline, are summarized next. Table 96.6 shows the statistics so readers can visualize the differences in risk terms for the hypothetical company when comparing one rate-upgrade outline against the other. The statistics obtained from the simulations for each upgrade outline provide information on the different risk characteristics the hypothetical company may face in the energy sale.
- In terms of mid values, Outline 2 offers a better prospective value for the portfolio margin than Outline 1.
- In terms of risk, Outline 2 is more attractive for the hypothetical company (lower standard deviation, coefficient of variation, and range).
- Outline 2 has improved statistics because the risks associated with periodic unit maintenance and nonprogrammed outages are attenuated when the energy rate upgrade also reflects a percentage of energy bought in the spot market, consistent with the operating conditions of the power stations.

TABLE 96.6 Risk simulation forecast statistics and results

                     Outline 1   Outline 2
Mean                 4.038       6.289
Median               4.324       6.264
Standard Deviation   1.515       1.458
Variance             2.295       2.125
Average Deviation    1.110       1.112
Coef. of Variation   0.375       0.232
Maximum              7.485       11.284
Minimum              −6.051      −0.718
Range                13.537      12.002
Skewness             −1.446      −0.312
Kurtosis             3.620       1.408
25% Percentile       3.424       5.429
75% Percentile       5.027       7.178
FIGURE 96.2 Gross Margin, percentile curve, Outline No. 1 and No. 2
An alternative way to evaluate the energy rate-upgrade outlines is to consider the percentiles associated with the margins obtained from the stochastic simulations. For example, if the hypothetical company determines that the margin of interest on an annual basis is US$4/MWh, which is the more attractive strategy? Figure 96.2 illustrates the two strategies graphically. If a minimum annual margin of US$4/MWh is required in commercial terms, upgrade outline No. 2 is more convenient: the results indicate that it exceeds the required margin with a probability of 95% or above, whereas outline No. 1 has an associated probability of only 60%. This comparison illustrates one method by which contracts of this type can be analyzed. If a portfolio of N commercial agreements exists, then based on the methods discussed in this chapter, it is possible to determine the probabilities that the margins originally estimated at contract signing will be exceeded.
CONCLUSION Based on our analysis, we can conclude:
- The efficient frontier of generation (EFG) is the only way to obtain analytically valid and correct results when determining the most efficient energy rate-upgrade outline for the hypothetical company.
- A valid risk analysis of the impact on an industry's electric power sale margins, in a world of highly volatile input markets, can be performed only with simulation and optimization techniques.
- Even though the examples are based on the analysis of the company's portfolio, analyses should include studies of the risks associated with each contract, in order to
recognize the contribution of each contract to the total risk of the company's energy margins and the need to take corrective actions.
- It is advisable to carry out these analyses on a permanent basis for risk administration, analyzing existing contracts as well as future client contracts. The effects of existing contracts on the portfolio can be determined using a similar approach.
- Even though the developed example compares two rate-upgrade outlines over a twelve-month projection, similar analyses can be developed to visualize the impact on the company's net present value (VAN), considering a longer horizon with other risk factors (for example, a variable FOR rate, maintenance days different from those originally estimated, changes in input markets, unavailability of input supply as in the case of Chile, etc.).
97. Industry Application—IT—Information Security Intrusion Risk Management

File Name: Industry Application—IT Risk Management Investment Decision
Location: Modeling Toolkit | Industry Application | IT Risk Management Investment Decision
Brief Description: The case study and model illustrated in this chapter look at an information systems security attack profile and provide decision analysis and support on the required optimal investment
Requirements: Modeling Toolkit, Risk Simulator
Special Credits: This model was contributed by Mark A. Benyovszky, managing director of Zero Delta Center for Enterprise Alignment. Zero Delta CfEA is a research and development organization that specializes in helping companies align their strategic and tactical efforts. Mr. Benyovszky may be reached at [email protected] or +1.312.235.2390. Additional information about Zero Delta can be found at www.zerodelta.org.
Organizations of all sizes rely upon technology to support a wide range of business processes that span the spectrum from "back-office" finance and accounting, to "mid-office" manufacturing, distribution, and other operational support functions, to "front-office" sales, marketing, and customer support functions. As a general rule of thumb, larger organizations have more complex system environments and significantly greater volumes of data, along with a wide range of different types of information. Looking across industries, there are different degrees of sensitivity in both the systems and the information that are employed. For example, financial and
insurance companies store critical and very sensitive information (financial transactions and personal medical histories) about their customers, and an energy company engaged in gas transmission and distribution relies upon critical technology systems that control the flow of gas through complex pipeline networks. Regardless of the specific industry an organization is in or the size of the company, the underlying technology systems and the data and information they consume and produce are significant business assets. Like any asset, they must be protected, and in order to protect these assets, we must understand what their individual and collective risk profiles look like. Protecting these assets is of paramount concern. Technology systems are interconnected across private, semi-private, and public networks. Every second (perhaps you prefer nanoseconds, picoseconds, or attoseconds, depending upon your "geekiness factor") of every day, information moves across these networks; most of the time the information moves about intentionally, but on other occasions it does not. We can think of this information and these systems in the context of an information security asset portfolio. It is important to quantify the value of each class of system or set of information, which helps us understand, according to a scale of sensitivity, which assets require greater protection. Higher-value assets are likely to be greater targets for attack (based on the basic risk/reward equation). We can then apply various methods against the portfolio to determine the composite risk level of the portfolio (a high-level view), the risk profiles of categories of assets, and individual asset risk profiles (a detailed view). This approach enables us to gain a better grasp of our information and technology asset portfolio, and provides us with the ability to determine how much to invest to protect each class of assets. While the specific approaches and processes required to perform this initial portfolio structuring are beyond the scope of this case study, determining the probabilities of events occurring against these assets, and what the resultant outcomes are likely to be, is at the center of our discussion. This case study assumes that the structuring process has already been completed. Specifically, there are five steps to undergo:

Step 1: Create Environment Details
Step 2: Create Attack Models
Step 3: Create Attack Scenarios
Step 4: Determine Financial Impact
Step 5: Arrive at Investment Decision

Now, let us get on with the heart of our discussion. Monte Carlo simulation provides us with an effective way to estimate the losses associated with a given attack, and it addresses the "flaw of averages" problem that plagues many single-point estimates or estimates based upon standard averages. For the sake of this discussion, we will explore how we applied this approach to a large gas transmission and distribution company. The company (which we will refer to as Acme T&D) is one of the largest natural gas transmission and distribution companies in the world. Acme T&D has an extensive gas pipeline network that
supplies natural gas to wholesalers and retailers in some of the largest markets throughout North America. Energy companies fit into a unique category of organizations that use technology at the core of their business operations. Acme T&D relies upon extensive industrial control systems known in the industry as SCADA (Supervisory Control and Data Acquisition) and PCM (Process Control Monitoring) systems. These systems are composed of a number of devices distributed throughout the gas pipeline network; these components are used to control the flow of gas through the network. They supply critical information, such as gas flow rate, gas temperature, and pressure at various points in the network, to a system operator, who then makes decisions to keep the pipeline running at an operationally and economically efficient level, always supplying gas where and when it is needed in a dynamic environment that changes constantly. These systems are critical not only to the operations of Acme T&D but also to the greater infrastructure of the United States. If the transmission and distribution of natural gas is interrupted for a significant period of time, it can have "downstream" effects that could be economically devastating (the suspended operations of manufacturing companies that rely upon natural gas) or personally devastating (lack of gas to run a furnace in the cold of winter). Clearly, these SCADA systems would be categorized as business-critical assets, with the highest priority placed on their protection.
STEP 1: CREATE ENVIRONMENT DETAILS

When we consider the extent to which an attack will cause damage, we must identify the factors that drive the top end of our model. These factors will be different for each company (with similarities among companies within the same industry). For Acme T&D, our greatest concerns from an operational perspective are the count and types of networks in the environment, and employee productivity (we account separately for how operations are impacted when a threat affects a SCADA network). The reason for using employee productivity as a factor is that when networks are down or systems are unreachable (for whatever reason), employees are directly impacted (we use it in this example because of its universal relevance across industry domains).
ACME T&D Network Counts
Enterprise Network Count: 16
SCADA Network Count: 4
PCN Network Count: 1
Total Networks: 21
As an aside, and as previously alluded to, the factors that drive the model will change based upon industry characteristics. For example, a financial institution may
wish to use the economic losses associated with stolen credit card data as a primary factor to drive the model, in addition to employee productivity losses, and so forth.

Acme T&D has approximately 10,000 employees. We must determine the payroll expenses (fully burdened) per hour. We are simplifying this model intentionally—it is not likely that 10,000 employees are working all at once (e.g., some percentage of employees may be on a shift rotation schedule). A sample computation is shown next:

Total Employee Cost/Hour = Employee Count × Salary / 2,000

where 2,000 is the number of hours worked per employee each calendar year (2,080 less 80 hours for holidays), and the Salary input is the fully burdened average salary for all employees.

The model is based upon various types of attack. We determine the probability that each attack will occur and to what extent it will cause damage (economic and operational) to the organization. We then create a separate model (our attack portfolio), which will allow us to simulate multiple attacks occurring against different networks in the environment and the resultant impacts in aggregate. We classify attacks based upon two variables—the frequency and impact of the attack. An attack as profiled in Class I is considered an average attack. An average attack could be a low-impact worm, a Trojan horse, or a virus that may affect various network systems and employee computers. Acme T&D has a variety of tools deployed in the network to mitigate these types of attacks; however, as stated earlier, no tool is 100% effective. This is where the value of Monte Carlo simulation is realized.

Minimum: 0.7
Most Likely: 0.8
Maximum: 1.0
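The arithmetic above is easy to mirror outside the spreadsheet. The following Python sketch (the book's live model is an Excel/Risk Simulator workbook, so this is an illustration, not the model itself) computes the hourly employee cost and samples the minimum/most likely/maximum values just quoted, under the assumption that they parameterize a triangular distribution:

```python
# A minimal sketch, assuming the 0.7/0.8/1.0 values above bound a
# triangular distribution; headcount and salary are the figures used
# elsewhere in this case study.
import numpy as np

employee_count = 10_000   # Acme T&D headcount
avg_salary = 80_000       # fully burdened average salary
hours_per_year = 2_000    # 2,080 hours less 80 hours of holidays

total_cost_per_hour = employee_count * avg_salary / hours_per_year
print(f"Total employee cost/hour: ${total_cost_per_hour:,.2f}")

# Sample the triangular assumption (min 0.7, most likely 0.8, max 1.0).
rng = np.random.default_rng(42)
draws = rng.triangular(left=0.7, mode=0.8, right=1.0, size=10_000)
print(f"Simulated mean: {draws.mean():.3f}")  # ~0.833 = (0.7+0.8+1.0)/3
```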
Now we construct the remaining elements of the model. We will use standard (and fairly conservative) estimates for the probability of occurrence of an attack. Table 97.1 illustrates how the top-end of the model comes together. We place the attack types across the columns of the model and we create the network structure and impact structure components.
STEP 2: CREATE ATTACK MODELS We must first create a common attack model and categorize the different types of attacks that exist. The classes of attacks are based upon the severity level of the attack (from average to extreme). We also indicate the extent of damage that an attack produces and the recovery details associated with each class of attack. This classification structure provides us with a basic framework we can leverage throughout the analysis exercise. We have five classes of attacks structured in our model. The descriptors are qualitative in nature (Table 97.1).
TABLE 97.1 Qualitative assessments of attack classes

Attack Class | Severity Level of Attack | Type of Attack | Extent of Damage | Recovery Approach
Class I | Average | Benign worm, Trojan horse, virus, or equivalent. | Limited. Most damage occurs at host level. | Mostly automated, but may require some human intervention.
Class II | Slightly Above Average | Worm, Trojan horse, virus, or equivalent designed to create some damage or consume resources. | Limited. Damage can occur at the host and network level. | Human intervention is required. Humans use tools that require interaction and expertise.
Class III | Moderately Above Average | Worm, Trojan horse, or equivalent designed to create significant damage and consume resources. | Noticeable damage at host and network levels. Automated tools have limited effect against the attacker. | Significant human intervention is required. Personnel require physical access to host machines and network environments.
Class IV | Significantly Above Average | Concentrated attack by hacker using a variety of tools and techniques to compromise systems. | Significant damage to important/sensitive data. May also include damage to host machines as Trojans and other tools are used to circumvent detection and mitigation techniques. | Extensive human intervention is required. Data and systems recovery necessary. Multiple techniques and methods necessary to fully recover.
Class V | Extreme Case | Concentrated attack by hacker or groups of hackers who are trying to compromise information/systems and have malicious intent. | Critical damage to important/sensitive information. Irreversible damage to systems/hardware. | Extensive human intervention is required. External "experts" required to assess and recover environment.
We create current state and future state models for the classes of attacks. This is performed for comparison purposes and is an important aspect of the overall analysis. The current state model is based upon the technology and approaches that are currently in use (our preexisting investments) to detect, mitigate, and recover from each respective type of attack. The future state model is based upon a set of new technologies (our future investments) that can be deployed in the environment to enhance its security, mitigate a wider range of attacks, and recover more rapidly from various types of attacks.
These types of attacks will be consistent across our current and future state models. A number of variables are part of our attack models:

% of Network Impacted
% of Employees Impacted
Productivity Loss (hours/employee)
Costs to Recover Employees
Hours to Recover Employees

Note that the models are populated with static values that are single-point estimates and averages. For example, a Class I attack in the current state attack model has a 10% Network Impacted value and a 5-hour Productivity Loss value. How can we be absolutely certain that a Class I attack will always impact 10% of the networks and result in a productivity loss of 5 hours per employee (along with the other variables included in the model)? We cannot be certain, at least not with a reliable degree of confidence. As such, any analysis based upon single-point estimates or averages is flawed. Monte Carlo simulation allows us to refine our assumptions and provides us with a mechanism to perturb these variables in a dynamic fashion.

While we have solved the problem of dealing with averages, we are faced with a new challenge: What are the appropriate ranges to use to perturb the values, and how should these perturbations behave throughout the simulation? To gather these values we leveraged the Delphi method. Following this approach, we interviewed a number of technical experts in the environment who had knowledge of prior attacks and the extent to which tools were used to mitigate them. The expert panel provided the details necessary to determine how the model variables might behave, along with appropriate upper and lower boundary values.

Figure 97.1 illustrates how we have adapted the % of Network Impacted value for a Class I attack. The original value was based upon an average of 10%. Upon closer inspection and after some discussion, our panel of experts determined that such an attack is unlikely to impact less than 10% of the network and may in fact impact a greater percentage of the network before it is identified and terminated. Using Monte Carlo simulation, we create an assumption for this value and select a normal distribution. We truncate the left side (or minimum value) of the distribution to take into account the 10% "floor" and provide some movement towards the right side (maximum or ceiling value) of the distribution. We set the mean to 10% and the standard deviation to 5%. The resultant distribution indicates a minimum value of 10%, a mean of 10% (our average), and a maximum value of approximately 25%. We have introduced into our model a very powerful feature: the model better reflects reality by taking into account the uncertainty associated with this value. We use this same approach for the other values, selecting and adjusting the distributions accordingly. To further illustrate this point, Figure 97.2 is taken from the Class V attack column.
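For readers who want to experiment outside Risk Simulator, the truncated normal assumption just described can be sketched in a few lines of Python (scipy's parameterization is an assumption of this sketch, not part of the book's model). Note that truncating the left tail at the 10% floor pushes the realized average of the draws somewhat above 10%; the 10% is the mean parameter of the untruncated normal.

```python
# A hedged sketch of the "% of Network Impacted" assumption for a Class I
# attack: normal with mean 10% and standard deviation 5%, left-truncated
# at the 10% floor.
import numpy as np
from scipy.stats import truncnorm

mu, sigma, floor = 0.10, 0.05, 0.10
a = (floor - mu) / sigma              # standardized lower bound (0 here)
dist = truncnorm(a=a, b=np.inf, loc=mu, scale=sigma)

samples = dist.rvs(size=100_000, random_state=1)
print(f"minimum: {samples.min():.3f}")                           # pinned at ~0.10
print(f"99.9th percentile: {np.percentile(samples, 99.9):.3f}")  # ~0.26
```

The 99.9th percentile landing near 25% is consistent with the "maximum value of approximately 25%" quoted above.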
FIGURE 97.1 Truncated percent of Network Impacted simulation assumption

FIGURE 97.2 Percent of Network Impacted simulation assumption of a Class V attack

FIGURE 97.3 Forecast
A Class V attack is considered an extreme event. The probability of occurrence is very low, and the damage caused is expected to be extreme or catastrophic in nature. An analogous event would be a volcano eruption or an earthquake (which may trigger a tsunami, for example, if it occurs in a susceptible place in the South Pacific) that occurs once every hundred years. The Gumbel Maximum Distribution is ideally suited to this type of catastrophic event. This distribution is positively skewed and is designed to produce a higher probability of lower values and a lower probability of extreme values. We set the Alpha value to 70 and the Beta to 10, which results in a mean of 75.7722 and a standard deviation of 12.8255. It is important to note the third and fourth moments of this distribution: the skewness coefficient is 1.1395 (indicating the positively skewed nature of the distribution) and the kurtosis coefficient is 2.400 (indicating the extent to which extreme events should occur in the distribution). This distribution model better reflects reality vis-à-vis extreme attacks. We can see in Figure 97.3 that there are higher probabilities to the left of the mean than to the right; however, the model has taken into account the extreme values to the far right of the median. The original analysis, which was based upon standard averages, indicated total financial losses of $21,741,176 for this scenario. If we follow our "1 in 3" approach, the number is adjusted downward to $19,074,729, or by a little over 12%. As you explore the model in more detail you will note the use of various distributions for each class of attack. We adjust these figures for each scenario to take into account the greater variability of more advanced and staged attacks. We know that as attacks gain in sophistication, there are more "unknowns" with respect to how far-reaching an attack will be or to what extent it will cause damage. Hence, the mean and standard deviation parameters can be adjusted to take this variability into account.
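The quoted moments are easy to verify. A small sketch, assuming the model's Alpha and Beta correspond to the location and scale parameters of scipy's right-skewed Gumbel distribution:

```python
# Verifying the Gumbel (maximum extreme value) moments quoted in the text.
from scipy.stats import gumbel_r

alpha, beta = 70, 10  # assumed to map to scipy's loc and scale
mean, var, skew, ex_kurt = gumbel_r.stats(loc=alpha, scale=beta, moments="mvsk")
print(f"mean            = {mean:.4f}")      # 75.7722 (alpha + 0.5772*beta)
print(f"stdev           = {var**0.5:.4f}")  # 12.8255 (beta*pi/sqrt(6))
print(f"skewness        = {skew:.4f}")      # 1.1395, positively skewed
print(f"excess kurtosis = {ex_kurt:.4f}")   # 2.4, matching the text's 2.400
```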
MODEL RESULTS

Impact to Operational Productivity

We have determined that the average fully burdened salary per employee is $80,000. For Scenario I, we estimate that an attack impacting an employee results in 5 hours of lost productivity, and each hour of lost productivity costs Acme T&D $39.22 per employee. The attack profile modeled in Scenario I, in which 10% of the networks and 10% of employees are impacted, results in a total productivity loss of $196,078.43 (Table 97.2).
Recovery Costs

Attacks generally result in some form of damage (more often than not the damage is nonphysical in nature), and it is often necessary to deploy technical personnel to remediate the impacted environments and systems. There are two dimensions to this remediation: network remediation (resetting/reconfiguring network routers, switches, firewalls, etc.) and "client" remediation ("ghosting" client machines, patching software, reinstalling/reconfiguring software, etc.). Our model takes into account the number of resources and the time necessary to recover the networks, and the same for recovering employees. For Scenario I these costs are $4,800 and $50,000, respectively.
Total Impact

We now sum up all of the separate loss components of the model:

Total Loss = Loss (Productivity) + Loss (Network Recovery) + Loss (Employee Recovery)

For Scenario I, we have total losses of $147,647.

TABLE 97.2 Modeling results from Scenario I

Lost Revenues
Impact to Operational Productivity: $196,078.43
Assumption—Avg. Salary/Employee (fully burdened): $80,000
Assumption—Total Time to Fully Recover/Employee (hours): 5
Productivity Cost/Hour: $39.22
Costs to Recover/Employee: $50
Assumption (hours to recover): 1
Costs to Recover Networks: $4,800
Assumption—Hours to Recover: 12
Resources per Network: 5
Cost per Hour: $50
Total Costs to Recover Employees: $50,000
Total Costs to Recover Networks: $4,800
Total Impact: $246,078.43
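As a sanity check, the Scenario I components in Table 97.2 can be recomputed directly; the sketch below uses the table's own inputs (the small gap on the productivity figure comes from the table's rounded $39.22 hourly rate):

```python
# Back-of-the-envelope recomputation of Table 97.2 (illustrative only;
# the book's live model is an Excel spreadsheet).
employees = 10_000
pct_employees_hit = 0.10            # Scenario I assumption
hours_lost_per_employee = 5
productivity_cost_per_hour = 39.22  # rounded rate from Table 97.2

loss_productivity = (employees * pct_employees_hit
                     * hours_lost_per_employee * productivity_cost_per_hour)
loss_employee_recovery = employees * pct_employees_hit * 1 * 50  # 1 hr at $50

print(f"Productivity loss:  ${loss_productivity:,.2f}")       # ~$196,100
print(f"Employee recovery:  ${loss_employee_recovery:,.2f}")  # $50,000
```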
In the model, there are four additional scenarios. For each scenario we adjust the assumptions to better fit the attack profiles; the percentage of networks down and employees impacted increases with each scenario.

Exposing the Flaw of Averages

 | Class I Attack | Class II Attack | Class III Attack | Class IV Attack | Class V Attack
Total Impact (original) | 147,647 | 616,471 | 1,933,235 | 5,223,529 | 21,741,176
Total Impact (revised) | 310,616 | 714,145 | 1,679,616 | 7,507,908 | 23,817,256
Variance (%) | 210.38% | 115.84% | 86.88% | 143.73% | 109.55%
The next step of our modeling efforts involves creating a “portfolio of attacks.” This step will provide us with the answer to the question “how much should Acme T&D invest in security solutions to mitigate the risks associated with the attacks profiled?”
STEP 3: CREATE ATTACK SCENARIOS

Now that we have determined the estimated costs associated with different types of attacks, we are ready to move on to creating the attack scenarios. The attack scenarios will provide us with the total losses realized during a specified period of time. We have created six attack scenarios, each considering the occurrence of different types of attacks over a five-year period. By creating different scenarios, we can consider different "foreseeable futures." This approach allows an organization to determine how it wishes to "view the world" from a risk planning and risk mitigation standpoint. The degree to which an organization will tolerate risk varies greatly: some organizations are more tolerant of risk and will invest less in mitigating technologies and approaches, while other organizations that are more risk averse will invest substantially more in order to reduce their risk profile. One can think of this type of investment as an insurance policy—juggling "premium" with "payout" or viewing it from a strategic real options perspective of risk mitigation. The scenarios provide us with a landscape view, from lowest to highest possible losses.

We will explore two different approaches to determining the probability of attacks occurring across a specified timeline. The first approach involves the use of the Delphi method. We interview a number of subject matter and technical experts who are asked to produce five different likely scenarios for various attack profiles. We provide some guidance and suggest to each expert that the scenarios should range from a "most likely scenario" to a "least likely scenario." This team of experts is then brought together to discuss the ranges across the spectrum. After various conversations and some debate, the team collectively determines to reduce the total of 25 scenarios (5 experts × 5 scenarios) to the "final 5." These scenarios are reflected as Scenarios I through V on the Attack Scenarios spreadsheet. Figure 97.4 illustrates a Scenario I attack profile. On our defined scale of "least likely to most likely," this scenario is the most likely to be realized. The experts provided the count of each type of attack that occurs within our 5-year period and further determined the years in which they will occur.
FIGURE 97.4 Scenario I attack profile of a future state
We have carried over our financial impact information from our previous exercise; for each class of attack we have current state and future state impact costs. The first section of the model includes the classes of attacks. For Scenario I, we have determined that three Class I attacks will occur, in years 1, 3, and 4, and that one Class II attack will occur in year 5. The second section of the model includes the impact values from the attack models, both current state and future state. These values are computed for each year and are summed in the totals column. The variance value indicates the percentage reduction from current state to future state loss values; by investing in the proposed technologies we can reduce the total losses for this scenario by 73.34%. The risk adjustment value is the difference between the current state impact and future state impact values. This value is carried over to the next step of our analysis. We use this same model to create the other attack scenarios. Figure 97.5 illustrates the Scenario IV attack profile, which represents the opposite end of the spectrum. In this scenario the company is successful in preventing all classes of attacks until year 5, when a Class V attack occurs. This is the infamous "hacker with malicious intent" scenario, wherein a hacker sets out with a concentrated effort to circumvent intrusion management technologies and a specific desire to cause significant harm to the organization. For Acme T&D this scenario could perhaps
FIGURE 97.5 Scenario IV attack profile of a future state
reflect the sentiments of a terrorist who desires to gain access to the critical gas pipeline systems in order to cause a catastrophic failure of the pipeline network. One could argue that such an approach to determining these probabilities lacks "scientific rigor" or can be significantly biased, either intentionally or unintentionally. Consider the technical expert who firmly believes that his skills are second to none with respect to effectively deploying and managing an armory of intrusion management technology. He may be biased to create scenarios that reflect the conservative end of the spectrum, significantly coloring the reality of the environment or threat landscape. If an organization were to pin its decision on this approach, a crafty hacker with superior skills may easily circumvent these technologies and successfully realize his attack objectives. Conversely, consider the "doomsday" character who is constantly pondering the worst case scenario and has a strong voice in the room. He or she may be overly aggressive with the attack scenarios, creating unrealistic models that result in "doom and gloom." How can one test for these biases? Is there a way to independently determine the probabilities and likelihoods of events? Indeed there is, and it is again found in Monte Carlo simulation.
Scenario VI represents our independent attack scenario. You may consider this the "control model": our unbiased "expert" who is neutral to all biases. The probabilities of occurrence are factually driven and leverage a distribution model focused on the discrete nature of these events—an event either happens or it does not. The Poisson distribution provides us with the ability to address the unique aspects of occurrence probabilities. Figure 97.6 illustrates how we can leverage the Poisson distribution for the Class I attack. For a Class I attack we set the Lambda value to 1.5984, which creates a distribution model that ranges from 0 to 6. Note the probability scale on the left side of the model. We can see that this Lambda value results in a 20% nonoccurrence outcome; in other words, 80% of the time a Class I attack will occur at least one time (exactly once at a rate of approximately 33%) and may occur up to 6 times within our time interval at a small residual rate. Compare this to a Poisson distribution model for a Class V extreme attack, for which we set the Lambda value to 0.0012. This results in a distribution model where the event will not occur about 99.88% of the time; there is only about a 0.12% chance that the event will occur in any given trial.

You may wonder why, if Monte Carlo simulation can be used reliably to arrive at probabilities of occurrence, we choose to use two different methods for determining probabilities. There are three primary reasons for doing so:
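These probabilities follow directly from the Poisson mass function; a quick check (a sketch, not part of the book's model):

```python
# Checking the Poisson occurrence probabilities quoted above.
from scipy.stats import poisson

class1 = poisson(mu=1.5984)
print(f"P(no Class I attack)   = {class1.pmf(0):.4f}")  # ~0.2022 -> ~20%
print(f"P(exactly one attack)  = {class1.pmf(1):.4f}")  # ~0.3232 -> ~33%

class5 = poisson(mu=0.0012)
print(f"P(no Class V attack)   = {class5.pmf(0):.6f}")      # ~0.998801
print(f"P(at least one attack) = {1 - class5.pmf(0):.6f}")  # ~0.001199
```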
1. To reduce the "fear of the black box" phenomenon. People who are not familiar with analytical techniques or the details associated with statistical methods have a tendency to treat analysis and the resultant outputs as "black box" generated values. There is a natural fear to distrust the unknown. By leveraging both statistical methods and expert opinion interviews, the layman observing the analysis and output can rationalize in his or her mind how the results were generated and how they were validated or refuted. It also provides an avenue for the layman to provide input (vis-à-vis his or her own opinions) into the equation.
2. To spur additional dialogue and debate. The interview process inherently spurs additional dialogue among the expert panel. My experience has been that the greater the divergence in opinions, the more debate occurs, which results in more robust and more refined models. The process may require more work, but, more often than not, the value of the outcome is greater than the additional effort.
3. As a "litmus test" of expert opinions. Conversely, if we relied solely on the input of expert opinions without thinking through and modeling out the statistical side of the equation, we may fall victim to tunnel vision.
While it is beyond the scope of this case study, these models could be further enhanced by creating forecasts for different types of attacks and determining the probabilities of becoming a victim of each attack. These enhancements could be realized by using historical data (what is published in the public domain, along with an organization's own data). For the purposes of simplicity, we leveraged the Delphi method to create the various attack scenarios. The attack scenario total impact values range from $1,547,895 to $23,791,472, which represents quite a significant range. How do we determine how much to invest to mitigate the risks associated with the attacks?
FIGURE 97.6 Poisson distribution assumption
STEP 4: DETERMINE FINANCIAL IMPACT

We are now ready to explore different investment scenarios to offset the risks of attacks. With more reliable estimates for the various classes of attacks in hand, we can run this financial information through a classical Net Present Value (NPV) and Discounted Cash Flow (DCF) analysis. Our NPV/DCF analysis also will have six different scenarios that follow the same scenario structure as those previously defined. We follow this same approach through the entire analysis; it allows us to see multiple sides of the problem and arrive at a more reliable outcome. We will return to our original investment estimate (as provided by the client) of $2,000,000, which was previously arrived at through a variety of network and systems analyses. This amount reflects the investment necessary to upgrade and enhance the intrusion management systems currently distributed throughout the environment. At a high level, this investment will result in:
The replacement of Intrusion Detection Systems (IDS) with Intrusion Prevention Systems (IPS).
An increased deployment of IPS devices at additional points throughout the network—from network perimeter to network core.
The deployment of Network Behavior Analysis (NBA) solutions at various points throughout the network, along with the data collection and analysis engines necessary to detect anomalies and suspicious network activities.
The logical question is, "Does a $2,000,000 investment adequately address the risks associated with the attacks and their likelihood of occurrence in this environment?" Add to this, "Is it too much or too little?" If you recall from the previous steps, we created two different aspects of our models: current state and future state views. The basic premise of our argument is that no technology or set of technologies can provide 100% protection from all types of attacks. However, we can intelligently place technology throughout the environment that mitigates these attacks, and these technologies will have varying degrees of success with respect to eliminating an attack altogether, significantly reducing the damage produced, or shortening the time necessary to recover from an attack. What is important to us, then, is the reduction of losses, and the investment decision is how much we should invest to achieve that reduction. This is the basis behind our current state and future state views. We now move on to create our DCF and NPV analysis scenarios. Figure 97.7 illustrates Scenario I. We create a 5-year time horizon and determine the timing and intensity of capital investments (the intrusion management technology solutions). The Risk Adjustment value is the difference of the Current State Impact less the Future State Impact for each year in this scenario (as modeled previously during the attack scenario step). We compute our net cash flows for each year, sum up the values, and then apply our NPV and Internal Rate of Return (IRR) calculations (note: we use the MIRR function in Excel, which better handles the negative cash flows).
FIGURE 97.7 DCF and NPV analysis
We also have unknowns associated with this model: we do not know a few critical inputs precisely, and we must account for these uncertainties. The following are the additional inputs required to run the NPV analysis:

DCF/NPV Input Parameters
Discount Rate: 10%
Finance Rate: 5%
Reinvestment Rate: 7%
Equipment Annual Maintenance: 15.00%
We define these parameters as follows:

Discount Rate. The standard discount rate on the cash flows.
Finance Rate. The cost of capital or financing rate used to acquire the desired assets.
Reinvestment Rate. The return on the free cash flows that are reinvested.
Equipment Annual Maintenance. The annual maintenance and service fees associated with keeping the various technology solutions current (software upgrades, signature updates).

We apply Monte Carlo-based distributions to each value. For example, we may have varying degrees of success negotiating annual maintenance fees on the various equipment we decide to purchase. For this value we use a normal distribution with the mean set to 15% (the industry average for a company of Acme T&D's size) and the standard deviation set to 1.5%, which gives us a range between just less than 12% and slightly more than 18% (both of which are realistic outer limits).

Next, Figure 97.8 represents what the expert team believed to be the "planning case"—in other words, the team agreed that they should plan their efforts and investments based upon this scenario. This is also the scenario we use for our "unbiased expert."
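Outside Excel, the same MIRR/NPV mechanics can be reproduced with the numpy-financial package, whose mirr() mirrors Excel's MIRR function. The cash flows below are hypothetical placeholders, not the model's actual figures:

```python
# A sketch of the DCF logic: a year-0 capital outlay followed by five years
# of risk-adjusted benefits net of maintenance (illustrative values only).
import numpy_financial as npf

cash_flows = [-2_000_000, 400_000, 900_000, 1_200_000, 1_500_000, 2_600_000]

finance_rate = 0.05    # cost of capital applied to negative flows
reinvest_rate = 0.07   # return earned on reinvested positive flows
discount_rate = 0.10   # standard discount rate for the NPV

print(f"MIRR = {npf.mirr(cash_flows, finance_rate, reinvest_rate):.2%}")
print(f"NPV  = ${npf.npv(discount_rate, cash_flows):,.2f}")
```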
FIGURE 97.8 DCF and NPV analysis on Scenario VI
Based upon this case we are expecting a total of eight attacks during a five-year period. Our current state model suggests that we would incur losses of $11,650,567 in this scenario; our future state model suggests losses of $4,600,118. As mentioned above, our DCF/NPV analysis is concerned with the net difference, which in this case is $6,990,449. The model takes into account when these losses occur and when the difference is realized. We then compute the NPV and IRR values. Using the $2,000,000 assumption as our capital investment in intrusion management technologies, we can see that this scenario results in a positive NPV of $2,228,925.15 and a 35.32% IRR on our investment. Clearly, this model supports a $2,000,000 investment. In isolation, this model would suggest that we could nearly double the initial investment and still have a positive NPV and IRR (the NPV turns negative at a year-1 expense of $3,666,000, following standard computations for all other variables in the model).
STEP 5: ARRIVE AT INVESTMENT DECISION

We are now near the end of our analysis. We have a solid understanding of what our current and future risks are vis-à-vis the losses we are likely to incur across a variety of attack scenarios. We know that a $2,000,000 investment is within the range of reason. However, we also know that we could invest more and, as a result, further reduce our risk of losses. Alternatively, we could invest less and rely upon the relatively low probability of being the target of a severe or catastrophic event. We are at a crossroads: there is no absolute right or wrong decision for all or any organization. The decision makers in your organization must make the right choice based upon all of the available facts and expert opinion, and in light of the organization's culture. Consider that the analysis is relatively conservative in nature, and that the most conservative and least biased model (the model generated by our "unbiased expert") suggests that 80% of the time losses will be greater than $1,857,474
FIGURE 97.9 Risk tolerance levels
(current state); if we implement our proposed future state technology plan, these losses drop to $267,792, a total loss reduction of $1,589,742. Following the same mode of thinking on the better-than-even side of the bet, 51% of the time losses will be greater than $2,570,762 (current state) and $401,688 (future state), yielding $2,169,074 in loss reductions. Figure 97.9 illustrates an example set of risk tolerance and required investment levels, and the resulting simulation forecast distributions shown in Figure 97.10 further illustrate the probabilistic levels of these risk tolerances.
FIGURE 97.10 Simulation forecast risk tolerance levels
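Risk-tolerance statements of the form "80% of the time losses exceed X" are read directly off the simulated forecast distribution: the level exceeded with probability p is the (1 - p) quantile. A sketch with placeholder loss distributions (standing in for the model's actual forecast outputs):

```python
# Reading exceedance levels off simulated loss forecasts. The lognormal
# parameters are illustrative placeholders, not the case study's values.
import numpy as np

rng = np.random.default_rng(7)
current_losses = rng.lognormal(mean=15.0, sigma=0.6, size=100_000)
future_losses = rng.lognormal(mean=13.0, sigma=0.6, size=100_000)

for exceed_prob in (0.80, 0.51):
    q = (1 - exceed_prob) * 100          # exceedance level = (1-p) quantile
    cur = np.percentile(current_losses, q)
    fut = np.percentile(future_losses, q)
    print(f"{exceed_prob:.0%} of trials exceed: current ${cur:,.0f}, "
          f"future ${fut:,.0f}, reduction ${cur - fut:,.0f}")
```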
98. Industry Applications—Insurance ALM Model

File Name: Insurance—Asset Liability Management (ALM)
Location: Modeling Toolkit | Industry Applications | Insurance ALM Model
Brief Description: Illustrating how to perform a basic asset liability model by optimizing asset allocation for a portfolio of insurance policies while minimizing the volatility of the insurance surplus
Requirements: Modeling Toolkit, Risk Simulator
Special Credits: This model was contributed by Victor Wong, the managing director of Real Actuarial Consulting Limited. He is a fellow of the Society of Actuaries (FSA), a fellow of the Canadian Institute of Actuaries (FCIA), a Chartered Financial Analyst (CFA) charterholder, and certified in Risk Management (CRM).
This is a simplified Asset Liability Management (ALM) model for a portfolio of insurance endowment products. The key risk factor that requires modeling is the interest yield curve, which in turn affects the asset and liability cash flows and market values. The objective of the model is to minimize the surplus volatility. The decision variables are the asset allocation into various asset classes. This and the next two chapters discuss the concepts of ALM and pension benefits as they pertain to insurance analysis.
ASSET LIABILITY MANAGEMENT ALM is a financial technique that can help companies to manage the mismatch of asset and liability and/or cash flow risks. The mismatched risks are due to different underlying factors that cause the assets and liabilities to move in different directions with different magnitudes. Asset liability risk is a leveraged form of risk. The capital of most financial institutions is small relative to the firm’s assets or liabilities, so small percentage changes in assets or liabilities can translate into large percentage changes in capital. Typically, companies such as banks, insurance companies, and pension funds (or their corporate sponsors) adopt such techniques to help them better manage their mismatched asset/liability risks (more particularly the interest rate risks) and to ensure that their capital will not be depleted in changing demographic and economic environments. Techniques for assessing asset liability risk include gap analysis and duration analysis. These analyses facilitated techniques of gap management and duration
matching of assets and liabilities. Both approaches worked well if assets and liabilities comprised fixed cash flows. However, the increasing use of options, such as embedded prepayment risks in mortgages or callable debt, posed problems that these traditional analyses could not address. Thus, Monte Carlo simulation techniques are more appropriate to address the increasingly complex financial markets. Today, financial institutions also make use of over-the-counter (OTC) derivatives to structure hedging strategies and securitization techniques to remove assets and liabilities from their balance sheet, therefore eliminating asset liability risk and freeing up capital for new business. The scope of ALM activities has broadened to other nonfinancial industries. Today, companies need to address interest rate exposures, commodity price risks, liquidity risk, and foreign exchange risk.
EMBEDDED OPTIONS IN FINANCIAL INSTRUMENTS

Traditionally, ALM was used as a tool to protect the capital/surplus from movements of assets/liabilities against a certain risk (e.g., a parallel shift in the yield curve). In theory, ALM enables the financial institution to remove certain volatility risks. For banks and insurers, ALM can potentially lower regulatory capital requirements, as less capital is needed to protect against unforeseen risks. For pension sponsors, ALM also can reduce the plan's funding requirements and accounting costs by locking into a certain level of return. Cash Flow Matching (or Immunization) is one of the ALM methods in which both asset and liability cash flows are matched exactly, such that any movement in the yield curve would be irrelevant for the entity. However, most financial instruments today rarely have fixed cash flows; thus, cash flow matching would require frequent portfolio rebalancing, which is prohibitively expensive. Due to the shortcomings of cash flow matching, duration matching was used to manage the mismatch risks (Figure 98.1). Typical duration matching finds an optimal asset allocation portfolio in which the asset duration matches the liability duration, where asset and liability duration is defined as the amount of change in the market value of assets/liabilities when the yield curve shifts by 100 basis points. The obvious shortcomings of duration matching are that the yield curve rarely shifts in a parallel fashion and that the linear approximation (asset and liability duration) works well only for small changes to the yield curve. Today's financial assets and liabilities have embedded options that significantly affect the timing of cash flows, sensitivity to changes in market rates, and total return. Examples of embedded options in various financial institutions include:
Insurance policies. Guaranteed rates, cash surrender values, policy loans, dividends/bonuses
Banks. Prepayment option to borrowers, overdraft, early withdrawal
Pension plans. Early retirement, cash-out option, DC conversion
Assets. Callable options, prepayment options, abandon option (credit/bankruptcy)
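The duration definition above is straightforward to compute numerically: reprice the cash flows under a 100-basis-point parallel shift in each direction and take the normalized price change. A minimal sketch, with illustrative cash flows and a flat yield assumed for simplicity:

```python
# Effective duration via a 100 bp parallel shift (illustrative inputs).
def present_value(cash_flows, rate):
    """Discount annual cash flows at a flat yield."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

asset_cfs = [50, 50, 50, 1050]          # e.g., a 4-year 5% coupon bond
liability_cfs = [30, 30, 30, 30, 1030]  # e.g., a 5-year obligation

y, dy = 0.05, 0.01                      # flat 5% yield, 100 bp shift
for name, cfs in (("Asset", asset_cfs), ("Liability", liability_cfs)):
    pv = present_value(cfs, y)
    dur = (present_value(cfs, y - dy) - present_value(cfs, y + dy)) / (2 * pv * dy)
    print(f"{name}: PV = {pv:,.2f}, effective duration = {dur:.2f}")
```

A gap between the two duration figures is exactly the mismatch that duration matching tries to optimize away.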
FIGURE 98.1 Insurance ALM duration matching
Figure 98.1 illustrates the effects of compound embedded options and the sensitivity of a life insurer's liabilities to changes in interest rates. Other variations of traditional ALM models include convexity matching (second derivative) and dynamic matching (frequent rebalancing). These variations attempt to increase the precision of the changes in market values of the assets and liabilities, compensating for the effects of the embedded options. Traditional ALM using cash flow/duration matching is not an effective method to protect capital, as these models do not recognize the risks of embedded options; furthermore, the trading costs of rebalancing the portfolios are prohibitively expensive. The simulation approach to asset/liability models is a better way to protect capital: it captures the impact of embedded options across many possible scenarios and finds the optimal portfolio that minimizes the volatility of the surplus. More advanced approaches consider the downside risk only. Most financial institutions would like to guard against the risk of reducing capital/surpluses; an increase in capital/surpluses is actually a good thing. As a result, a slightly higher volatility in the entity's capital may be acceptable as long as the potentially higher yields outweigh the level of downside risk undertaken.
IMPLEMENTING ALM Six steps are taken to implement an effective ALM: 1. Set ALM objectives. First of all, the bank, insurer, or pension fund sponsor needs to decide on its ALM objectives. The objectives may be affected by the organization’s desires, goals, and positioning in relation to its stakeholders, regulators, competition, and external rating agencies. Would it be simply minimizing the volatility of surpluses? Would a higher yield be more desirable, and if so, what is the maximum level of risk that can be undertaken?
FIGURE 98.2 Optimization in ALM
2. Determine risk factors and cash flow structure. The ALM manager/analyst needs to capture the various risks the entity carries and take into account the complex interactions between the asset and liability cash flows. Risk factors may include market, interest rate, and credit risks as well as contractual obligations that behave like options.
3. Consider available hedging solutions. While diversification can reduce nonsystematic risks, financial institutions often carry firm-specific risks that cannot be diversified easily. The organization needs to evaluate the appropriateness of various hedging solutions, including types of assets, use of hedging instruments (derivatives, interest rate options, credit options, pricing, reinsurance), and use of capital market instruments (securitization, acquisition, and sale of business lines).
4. Model the risk factors. Modeling the underlying risk factors may not be trivial. If the ALM manager's concern is the interest rate risks, then modeling the yield curve would be critical.
5. Set decision variables. For different financial institutions, the decision variables would differ. For instance, an insurer needs to set a decision on asset allocation to each qualified investment, the level of dividends/bonuses paid to policyholders, the amount of new business undertaken, pricing, and so on.
6. Set constraints. Typically financial institutions are heavily regulated (reserve and capital requirements). More recently, accounting requirements (or profits) also have become increasingly important. These constraints need to be modeled to ensure that the final solution can meet the regulatory and accounting requirements.

Figure 98.2 shows a typical ALM modeling flowchart for an insurance company that aims to minimize its interest rate risks.
TABLE 98.1 New versus old approaches in ALM risk management

Risk Minimization (Old Way) | Real Options Approach (New Way)
All risks are undesirable | Downside risks are undesirable; missing upside potential is also undesirable; there may be value in waiting
Minimize volatility to capital | Formulate strategies to hedge capital depletion, while retaining upside potentials
Limit asset allocation to match liability duration, thus reducing asset return potentials | Value, analyze, and rank/optimize alternative "protection" strategies against return/risk objectives from an organization's perspective; trial portfolios/proof of concept; redesign of products with embedded options
REAL OPTIONS APPLICATIONS TO ALM

Real options have useful applications in the evaluation of various hedging strategies in an ALM context. Traditional analysis focuses on minimizing surplus volatility as a fixed strategy with periodic rebalancing (Table 98.1). However, today's business decisions are much more dynamic, with increasingly complex hedging instruments and strategies available to management. Each business strategy has its associated costs, risks, and benefits. Business strategies that can be implemented when needed enhance business flexibility to guard against undesirable events, thereby enhancing their overall value to the organization (Table 98.2). Real options can determine the risk-adjusted strategic value to the organization, which can then be ranked and optimized according to business objectives and available resources.
TABLE 98.2 Asset versus liability strategies

Asset-Based Strategies | Liability-Based Strategies
Asset allocation | Pricing margin/reserves
Structured products (derivatives, swaps, credit options, interest rate options, etc.) | Reinsurance
Alpha and beta approach to asset allocation | Securitization
Timing | Acquisition/sale of business lines
 | Redesigning/reengineering products
 | Timing

SUMMARY

Traditional ALM models are no longer sufficient in protecting the company's limited capital due to the presence of embedded options in both assets and liabilities. The simulation approach can provide a more realistic picture by taking into account the
risks of embedded options and other risk factors simultaneously. More advanced models today focus on downside risks with moderate risk taking to enhance yields. Optimization algorithms can then be used to maximize return/risk objectives within set constraints and decision variables. Formulating real options strategies can enhance corporate value by incorporating future risks and management flexibility in decision analysis. This book highlights the potential applications of real options in an ALM context. However, a comprehensive analysis of this topic is beyond our scope.
99. Operational Risk—Queuing Models at Bank Branches

File Name: Banking—Queuing Models at Bank Branches
Location: Modeling Toolkit | Banking Models | Queuing Models
Brief Description: Illustrating how to set up a queuing model, run a Monte Carlo simulation on a queuing model, and interpret the results of a queuing model
Requirements: Modeling Toolkit, Risk Simulator
Modeling Toolkit Functions Used: B2QueuingSCProbNoCustomer, B2QueuingSCAveCustomersWaiting, B2QueuingSCAveCustomersinSystem, B2QueuingSCAveTimeWaiting, B2QueuingSCAveTimeinSystem, B2QueuingSCProbHaveToWait, B2QueuingSCAProbNoCustomer, B2QueuingSCAAveCustomersWaiting, B2QueuingSCAAveCustomersinSystem, B2QueuingSCAAveTimeWaiting, B2QueuingSCAAveTimeinSystem, B2QueuingSCAProbHaveToWait, B2QueuingMCProbNoCustomer, B2QueuingMCAveCustomersWaiting, B2QueuingMCAveCustomersinSystem, B2QueuingMCAveTimeWaiting, B2QueuingMCAveTimeinSystem, B2QueuingMCProbHaveToWait, B2QueuingMGKProbBusy, B2QueuingMGKAveCustomersinSystem
MODEL BACKGROUND

Think of how queuing models work: consider a customer service call center, a bank teller's waiting line, or the waiting line at an ATM. The queue is the line of people waiting to get served. Typically, the arrival rates of patrons follow a Poisson distribution on a per-period basis (per hour, per day, etc.). The number of checkout counters open is the number of channels in the queuing model. The rate at which servers are able to serve patrons typically follows an exponential distribution. The questions that a queuing model answers are how many servers or channels there
should be if we do not want patrons to wait more than X minutes, or, if we have Y servers, what the probability is that a patron arriving will have to wait and what the average wait time is. These types of models are extremely powerful when coupled with simulation, where the arrival rates and service times are variable and simulated. Imagine applications ranging from staffing call centers, customer service lines, and checkout counters to determining how many hospital beds should exist in a hospital per type of diagnostic-related group, and the like. These models are based on operations research queuing models. The single-channel queuing model and the multiple-channel queuing model assume a Poisson distribution of arrival rates and an exponential distribution of service times, with the only difference between them being the number of channels. Both the M/G/1 single-channel arbitrary model and the M/G/K blocked queuing model assume the same Poisson distribution of arrival rates but do not rely on the exponential distribution for service times. The two main differences between these two general-distribution (G) models are that the M/G/K uses multiple channels as compared to the single-channel M/G/1, and that the M/G/1 model assumes the possibility of waiting in line while the M/G/K model assumes customers will be turned away if the channels are loaded when they arrive.
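For intuition on what the Modeling Toolkit functions return, the single-channel (M/M/1) case has simple closed-form results; the sketch below implements the standard textbook formulas (a generic illustration, not the B2Queuing function internals):

```python
# Closed-form M/M/1 metrics: Poisson arrivals, exponential service,
# a single server.
def mm1_metrics(arrival_rate, service_rate):
    rho = arrival_rate / service_rate  # utilization; must be < 1
    assert rho < 1, "queue is unstable"
    L = rho / (1 - rho)                # average customers in system
    Lq = rho ** 2 / (1 - rho)          # average customers waiting
    return {
        "P(no customers)": 1 - rho,
        "avg in system": L,
        "avg waiting": Lq,
        "avg time in system": L / arrival_rate,  # W = L / lambda
        "avg time waiting": Lq / arrival_rate,   # Wq = Lq / lambda
        "P(have to wait)": rho,
    }

# e.g., 9 patrons arriving per hour, one teller serving 12 per hour:
print(mm1_metrics(arrival_rate=9, service_rate=12))
```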
RUNNING A MONTE CARLO SIMULATION In all of these models, the results are closed form. Hence, only the input assumptions (arrival rates and service rates) are uncertain and should be simulated. The forecast results should be any of the outputs of interest. Please see Figures 99.1 and 99.2.
FIGURE 99.1 MG1: Single-channel arbitrary queuing model
FIGURE 99.2 Multiple-channel queuing model
100. Optimization—Continuous Portfolio Allocation

File Name: Optimization—Continuous Portfolio Allocation
Location: Modeling Toolkit | Optimization | Continuous Portfolio Allocation
Brief Description: Illustrating how to run an optimization on continuous decision variables, viewing and interpreting optimization results
Requirements: Modeling Toolkit, Risk Simulator
This model shows 10 asset classes with different risk and return characteristics. The idea here is to find the best portfolio allocation such that the portfolio's bang for the buck, or returns-to-risk ratio, is maximized—that is, to allocate 100% of an individual's investment portfolio among several different asset classes (e.g., different types of mutual funds or investment styles: growth, value, aggressive growth, income, global, index, contrarian, momentum, etc.). In order to run an optimization, several key specifications on the model have to first be identified:

Objective: Maximize Return to Risk ratio (C18)
Decision Variables: Allocation weights (E6:E15)
Restrictions on Decision Variables: Minimum and maximum required (F6:G15)
Constraints: Portfolio total allocation weights 100% (E17 is set to 100%)
FIGURE 100.1 Asset allocation optimization model

The model shows the 10 asset classes. Each asset class has its own set of annualized returns and risks, measured by annualized volatilities (Figure 100.1). These return and risk measures are annualized values such that they can be consistently compared across different asset classes. Returns are computed using the geometric average of the relative returns, while the risks are computed using the annualized standard deviation of the logarithmic relative historical stock returns approach. See the chapters on Volatility models for detailed calculations. The allocation weights in column E hold the decision variables, which are the variables that need to be tweaked and tested such that the total weight is constrained at 100% (cell E17). Typically, to start the optimization, we will set these cells to a uniform value; in this case, cells E6 to E15 are set at 10% each. In addition, each decision variable may have specific restrictions in its allowed range. In this example, the lower and upper allocations allowed are 5% and 35%, as seen in columns F and G. This setting means that each asset class may have its own allocation boundaries. Next, column H shows the return to risk ratio for each asset class, which is simply the return percentage divided by the risk percentage; the higher this value, the higher the bang for the buck. The remaining sections of the model show the individual asset class rankings by returns, risk, return to risk ratio, and allocation. In other words, these rankings show at a glance which asset class has the lowest risk or the highest return, and so forth.
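The same constrained problem can be expressed outside Risk Simulator. Here is a sketch using scipy.optimize, with placeholder return/risk inputs (not the Figure 100.1 values) and a simplified risk aggregation (a weighted sum of risks rather than a full covariance treatment):

```python
# Maximize the portfolio return-to-risk ratio subject to 5%-35% bounds
# per asset class and weights summing to 100%.
import numpy as np
from scipy.optimize import minimize

returns = np.array([0.10, 0.12, 0.08, 0.15, 0.09, 0.11, 0.14, 0.07, 0.13, 0.10])
risks   = np.array([0.12, 0.18, 0.09, 0.25, 0.11, 0.16, 0.22, 0.08, 0.20, 0.14])

def neg_return_to_risk(w):
    return -(w @ returns) / (w @ risks)

n = len(returns)
w0 = np.full(n, 1 / n)                 # start uniform at 10% each
bounds = [(0.05, 0.35)] * n            # per-class allocation limits
constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]

res = minimize(neg_return_to_risk, w0, bounds=bounds, constraints=constraints)
print("weights:", np.round(res.x, 4), " ratio:", -res.fun)
```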
RUNNING AN OPTIMIZATION

To run this model, simply click on Risk Simulator | Optimization | Run Optimization. Alternatively, for practice, you can try to set up the model again (the steps are illustrated in Figure 100.2):

1. Start a new profile (Risk Simulator | New Profile) and give it a name.
2. Select cell E6 and define the decision variable (Risk Simulator | Optimization | Set Decision, or click on the Set Decision D icon), make it a Continuous Variable, and then link the decision variable's name and minimum/maximum required to the relevant cells (B6, F6, G6).
3. Then use the Risk Simulator Copy on cell E6, select cells E7 to E15, and use Risk Simulator's Paste (Risk Simulator | Copy Parameter and Risk Simulator | Paste Parameter, or use the copy and paste icons). To rerun the optimization, type in 10% for all decision variables. Make sure you do not use the regular Excel copy/paste.
4. Next, set up the optimization's constraints by selecting Risk Simulator | Optimization | Constraints, selecting ADD, selecting cell E17, and making it (==) equal 100% (for total allocation; remember to insert the % sign).
5. Select cell C18 as the objective to be maximized (Risk Simulator | Optimization | Set Objective).
6. Select Risk Simulator | Optimization | Run Optimization. Review the different tabs to make sure that all the required inputs in steps 2 to 4 are correct.
7. You may now select the optimization method of choice and click OK to run the optimization.
a. Static Optimization is an optimization that is run on a static model, where no simulations are run. This optimization type is applicable when the model
is assumed to be known and no uncertainties exist. Also, a static optimization can be run first to determine the optimal portfolio and its corresponding optimal allocation of decision variables before applying more advanced optimization procedures. For instance, before running a stochastic optimization problem, first run a static optimization to determine whether solutions to the optimization problem exist before performing a more protracted analysis.
b. Dynamic Optimization is applied when Monte Carlo simulation is used together with optimization. Another name for such a procedure is simulation optimization. In other words, a simulation is run for N trials and then an optimization process is run for M iterations until the optimal results are obtained
or an infeasible set is found. That is, using Risk Simulator's Optimization module, you can choose which forecast and assumption statistics to use and replace in the model after the simulation is run. Then you can apply these forecast statistics in the optimization process. This approach is useful when you have a large model with many interacting assumptions and forecasts, and when some of the forecast statistics are required in the optimization.
c. Stochastic Optimization is similar to the dynamic optimization procedure except that the entire dynamic optimization process is repeated T times. The results will be a forecast chart of each decision variable with T values. In other words, a simulation is run and the forecast or assumption statistics are used in the optimization model to find the optimal allocation of decision variables. Then another simulation is run, generating different forecast statistics, and these new updated values are optimized, and so forth. Hence, each of the final decision variables will have its own forecast chart, indicating the range of the optimal decision variables. For instance, instead of obtaining single-point estimates in the dynamic optimization procedure, you can now obtain a distribution of the decision variables and, hence, a range of optimal values for each decision variable, also known as a stochastic optimization.

Note: If you are to run either a dynamic or stochastic optimization routine, make sure that you first define the assumptions in the model. That is, make sure that some of the cells in C6:D15 are assumptions. The model setup is illustrated in Figure 100.2.

FIGURE 100.2 Optimization model setup
RESULTS INTERPRETATION Briefly, the optimization results show the percentage allocation for each asset class (or projects or business lines, etc.) that would maximize the portfolio’s bang for buck (i.e., the allocation that would provide the highest returns subject to the least amount of risk). In other words, for the same amount of risk, what is the highest amount of returns that can be generated, or for the same amount of returns, what is the least amount of risk that can be obtained? See Figure 100.3. This is the concept of the Markowitz efficient portfolio analysis. For a comparable example, see Chapter 106, Military Portfolio and Efficient Frontier.
FIGURE 100.3 Optimization results
101. Optimization—Discrete Project Selection

File Name: Optimization—Discrete Project Selection
Location: Modeling Toolkit | Optimization | Discrete Project Selection
Brief Description: Illustrating how to run an optimization on discrete integer decision variables in project selection in order to choose the best projects in a portfolio given a large variety of project options subject to risk, return, budget, and other constraints
Requirements: Modeling Toolkit, Risk Simulator
This model shows 12 different projects with different risk and return characteristics. The idea here is to find the best project mix in the portfolio: the combination of projects that maximizes the total returns after considering the risks and returns of each project, subject to the project-count and budget constraints. Figure 101.1 illustrates the model.

FIGURE 101.1 Discrete project selection model

Objective: Maximize Total Portfolio Returns (C17) or Sharpe returns to risk ratio (C19)
Decision Variables: Allocation or Go/No-Go decision (I4:I15)
Restrictions on Decision Variables: Binary decision variables (0 or 1)
Constraints: Total Cost (D17) is less than $5,000 and less than or equal to 6 projects selected (I17)
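Because there are only 12 binary variables (4,096 combinations), the discrete problem can even be checked by brute force; the sketch below mirrors the constraints above with placeholder returns and costs (not the Figure 101.1 values):

```python
# Exhaustive 0/1 project selection under a $5,000 budget and a maximum
# of 6 projects (illustrative inputs).
from itertools import product

returns = [410, 380, 250, 500, 320, 280, 450, 300, 260, 390, 270, 310]
costs   = [700, 650, 420, 900, 550, 480, 800, 520, 440, 680, 460, 530]

best_return, best_pick = 0, None
for pick in product((0, 1), repeat=len(returns)):
    cost = sum(c for c, x in zip(costs, pick) if x)
    if cost <= 5000 and sum(pick) <= 6:
        total = sum(r for r, x in zip(returns, pick) if x)
        if total > best_return:
            best_return, best_pick = total, pick

print(f"Best portfolio return: {best_return}, selection: {best_pick}")
```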
RUNNING AN OPTIMIZATION

To run this preset model, simply run the optimization (Risk Simulator | Optimization | Run Optimization) or, for practice, set up the model yourself:

1. Start a new profile (Risk Simulator | New Profile) and give it a name.
2. In this example, all the allocations are required to be binary (0 or 1) values, so first select cell I4 in the Integer Optimization worksheet and define it as a decision variable (Risk Simulator | Optimization | Set Decision or click on the Set Decision icon), making it a Binary Variable. This setting automatically sets the minimum to 0 and the maximum to 1, so the variable can take on only a value of 0 or 1. Then use the Risk Simulator Copy on cell I4, select cells I5 to I15, and use Risk Simulator's Paste (Risk Simulator |
FIGURE 101.1 Discrete project selection model
Copy Parameter and Risk Simulator | Paste Parameter or use the Risk Simulator copy and paste icons, NOT the Excel copy/paste). 3. Next, set up the optimization's constraints by selecting Risk Simulator | Optimization | Constraints and selecting ADD. Then link to cell D17 and make it <=5000, select ADD one more time, click on the link icon, point to cell I17, and set it to <=6. 4. Select cell C19, the objective to be maximized, and select Risk Simulator | Optimization | Set Objective; then select Risk Simulator | Optimization | Run Optimization. Review the different tabs to make sure that all the required inputs in steps 2 and 3 are correct. 5. You may now select the optimization method of choice and click OK to run the optimization: a. Static Optimization is an optimization that is run on a static model, where no simulations are run. This optimization type is applicable when the model is assumed to be known and no uncertainties exist. Also, a static optimization can be run first to determine the optimal portfolio and its corresponding optimal allocation of decision variables before applying more advanced optimization procedures. For instance, before running a stochastic optimization problem, first run a static optimization to determine if there exist solutions to the optimization problem before performing a more protracted analysis. b. Dynamic Optimization is applied when Monte Carlo simulation is used together with optimization. Another name for such a procedure is simulation optimization. In other words, a simulation is run for N trials and then an optimization process is run for M iterations until the optimal results are obtained or an infeasible set is found. That is, using Risk Simulator's Optimization module, you can choose which forecast and assumption statistics to use and replace in the model after the simulation is run. Then you can apply these forecast statistics in the optimization process. This approach is useful when you have a large model with many interacting assumptions and forecasts, and when some of the forecast statistics are required in the optimization. c. Stochastic Optimization is similar to the dynamic optimization procedure except that the entire dynamic optimization process is repeated T times. The results will be a forecast chart of each decision variable with T values. In other words, a simulation is run and the forecast or assumption statistics are used in the optimization model to find the optimal allocation of decision variables. Then another simulation is run, generating different forecast statistics, and these new updated values are optimized, and so forth. Hence, each of the final decision variables will have its own forecast chart, indicating the range of the optimal decision variables. For instance, instead of obtaining single-point estimates in the dynamic optimization procedure, you can now obtain a distribution of the decision variables and, hence, a range of optimal values for each decision variable, also known as a stochastic optimization. Note: If you plan to run either a dynamic or stochastic optimization routine, make sure that you define the assumptions first in the model. That is, make sure that some of the cells in C4:D15 are assumptions. The model setup is illustrated in Figure 101.2.
FIGURE 101.2 Setting up a project selection optimization problem
VIEWING AND INTERPRETING FORECAST RESULTS In addition, you can create a Markowitz efficient frontier by running the optimization, then resetting the budget and number-of-projects constraints to a higher level and rerunning the optimization. You can do this several times to obtain the risk-return efficient frontier. For a more detailed example, see Chapter 106, Military Portfolio and Efficient Frontier model.
FIGURE 101.2 (Continued)
102. Optimization—Inventory Optimization File Name: Optimization—Inventory Optimization Location: Modeling Toolkit | Optimization | Inventory Optimization Brief Description: Illustrating how to run an optimization on manufacturing inventory to maximize profits and minimize costs subject to manufacturing capacity constraints Requirements: Modeling Toolkit, Risk Simulator
This model is used to find the optimal allocation of parts in a manufacturer’s portfolio of inventory. In the model, eight different parts are shown, with their
respective prices per unit (Figure 102.1). The demand for these parts is divided into four phases or cycles. For example, in the automotive aftermarket, certain parts are required more or less at certain times in a car's life cycle (e.g., fewer parts may be needed during the first three years of a car's life than perhaps in years 4 to 6, or years 7 to 10, versus over 10 years); the same applies to parts of a machine, or to the inventory in a retail shop across the four seasons of a year, and so forth. The demand levels in these cycles for each part are simulated (assumptions have already been preset in the model) and can be based on expert opinions, expectations, forecasts, or historical data (e.g., using distributional fitting methods in Risk Simulator). In this model, if the manufacturer makes a higher quantity of a product than it can sell, there will be an added holding cost per unit (storage cost, opportunity cost, cost of money, etc.). Thus, it is less profitable to make too many units, but too few units means lost sales. In other words, if the manufacturer makes too much, it loses money because it could have made some more of something else, but if it makes too
FIGURE 102.1 Inventory model before optimization
little of a product, it forgoes the additional sales and profit it could have made. So an optimal inventory and manufacturing quantity is required. This model is used to find the best allocation of parts to manufacture given the uncertainty in demand and, at the same time, to maximize profits while minimizing cost. However, the manufacturer has constraints. It cannot manufacture too many units of a single part due to resource, budget, and manufacturing constraints (size of factory, cycle time, manufacturing time, cost considerations, etc.). For example, say the firm cannot produce more than 100,000 units per part, and during each manufacturing or seasonal cycle, it cannot produce more than 250,000 units, and the total parts produced are set at no more than 800,000 units. In short, the problem can be summarized as:
Objective: Maximize total net revenues
Problem: Find the best allocation of parts to maximize net revenues; surpluses and shortages are expensive
Constraints: Each part cannot exceed 100,000 units; each cycle cannot exceed 250,000 units; each part during each cycle should be between 15,000 and 35,000 units; total parts cannot exceed 800,000 units
Assumptions: Surplus units have a holding inventory cost of $1 each
One very simple allocation is to produce an equal amount (e.g., 25,000 units per part, per cycle, thereby satisfying all the constraints), as illustrated in Figure 102.1. The total net revenue from such a simple allocation is found to be $105,082,000.
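For readers who want to see the structure of this problem outside the spreadsheet, the following Python sketch states the same objective and constraints with scipy. The prices, the fixed expected demands, and the smooth-solver choice (SLSQP on a kinked min/max objective) are all simplifying assumptions of this illustration; the actual model simulates demand with Risk Simulator rather than fixing it.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_parts, n_cycles = 8, 4

# Hypothetical prices per unit and expected demand (thousands of units)
# per part and cycle; the real model simulates demand in Risk Simulator.
price  = rng.uniform(20, 60, n_parts)
demand = rng.uniform(18, 32, (n_parts, n_cycles))
HOLDING_COST = 1.0            # $1 holding cost per surplus unit

def neg_profit(q_flat):
    q = q_flat.reshape(n_parts, n_cycles)
    sold    = np.minimum(q, demand)          # cannot sell more than demanded
    surplus = np.maximum(q - demand, 0.0)    # overproduction is penalized
    return -(price[:, None] * sold - HOLDING_COST * surplus).sum()

x0 = np.full(n_parts * n_cycles, 25.0)           # the "equal allocation" start
bounds = [(15.0, 35.0)] * (n_parts * n_cycles)   # 15,000-35,000 per part/cycle
cons = (
    [{"type": "ineq",                             # each cycle total <= 250
      "fun": lambda q, j=j: 250.0 - q.reshape(n_parts, n_cycles)[:, j].sum()}
     for j in range(n_cycles)]
    + [{"type": "ineq",                           # each part total <= 100
        "fun": lambda q, i=i: 100.0 - q.reshape(n_parts, n_cycles)[i, :].sum()}
       for i in range(n_parts)]
    + [{"type": "ineq", "fun": lambda q: 800.0 - q.sum()}]  # grand total <= 800
)

res = minimize(neg_profit, x0, bounds=bounds, constraints=cons, method="SLSQP")
print("optimized total net revenue:", -res.fun)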
OPTIMIZATION PROCEDURE (PREDEFINED) In contrast, an optimization can be set up to solve this problem. The optimization has already been set up in the model and is ready to run. To run it: 1. Go to the Optimization Model worksheet and click on Risk Simulator | Change Profile and select the Inventory Optimization profile. 2. Click on the Run Optimization icon or click on Risk Simulator | Optimization | Run Optimization and click OK. 3. Click Replace when optimization is completed. The results show the optimal allocation of manufactured parts that maximizes the total net profit, increasing it from $105.082 million to $110.116 million. Clearly optimization has created value. Please see Figure 102.2. 4. Using these optimal allocations, run a simulation (the assumptions and forecasts have been predefined in the model) by clicking on the RUN icon or select Risk Simulator | Run Simulation. 5. On the Net Revenues forecast chart, select Two-Tail and type in 90 in the Certainty box and hit TAB on the keyboard to obtain the 90% confidence interval (between $105 million and $110 million). View the Statistics tab to obtain the mean expected value of $108 million for the net revenues.
FIGURE 102.2 Optimized results
FIGURE 102.3 Optimized net revenues
6. Select Right-Tail and type in 105082 in the Value box to find the probability that this optimal allocation's portfolio of manufactured products (Figure 102.3) will exceed the simple equal allocation's net revenues seen in Figure 102.1. The results indicate that there is a 95.40% probability that by using this optimal portfolio, the manufacturer will make more net profits than going with a simple equal allocation (Figure 102.3).
OPTIMIZATION PROCEDURE (MANUAL) To set up the model again from scratch, follow the next instructions. 1. Go to the Optimization Model worksheet and click on Risk Simulator | New Profile and give the profile a new name. You may have to reset the decision variables to some starting value (i.e., make the cells C13:J16 all 25 as a starting point), so that you can immediately see when the optimization generates new values. 2. Click on cell K23 and make it the objective to maximize (Risk Simulator | Optimization | Set Objective, and select Maximize). 3. Click on cell C13 and make it a decision variable (Risk Simulator | Optimization | Set Decision). Select Continuous (a continuous variable is appropriate because the model's values are in thousands of units and thousands of dollars) and set the bounds to between 15 and 35 (you can set your own bounds, but these are typically constraints set by the manufacturer) to signify that the manufacturer can make only between 15,000 and 35,000 units of this part per cycle due to resource, budget, or machine constraints.
FIGURE 102.4 Setting constraints one at a time
4. Set the constraints by clicking on Risk Simulator | Optimization | Constraints, and select ADD to add a new constraint. Add the three additional constraints in the model, namely, cells K13 to K16 <=250 for each cell, C18 to J18 <=100 for each cell, and K18 <=800. Do this one constraint at a time (Figure 102.4). Clearly you can put in your own constraints as required. 5. Click on the Run Optimization icon or click on Risk Simulator | Optimization | Run Optimization. Look through the tabs to make sure everything is set up correctly. Click on the Method tab, select Static Optimization, and click OK to run the optimization. 6. Click Replace when optimization is completed. The results illustrate that the optimal allocation of manufactured parts maximizes net profit, increasing it from $105.082 million to $110.116 million. Clearly optimization has created value. 7. Using these optimal allocations, run a simulation (the assumptions and forecasts have been predefined in the model) by clicking on the RUN icon or select Risk Simulator | Run Simulation. 8. On the Net Revenues forecast chart, select Two-Tail and type in 90 in the Certainty box and hit TAB on the keyboard to obtain the 90% confidence interval (between $105 million and $110 million). View the Statistics tab to obtain the mean expected value of $108 million for the net revenues. 9. Select Right-Tail and type in 105082 in the Value box to find the probability that this optimal allocation's portfolio of manufactured products will exceed the simple equal allocation's net revenues seen previously.
103. Optimization—Investment Portfolio Allocation File Name: Optimization—Investment Portfolio Allocation Location: Modeling Toolkit | Optimization | Investment Portfolio Allocation Brief Description: Illustrating how to run an optimization on investment decision variables in project allocation and new product mix Requirements: Modeling Toolkit, Risk Simulator
This model looks at a set of new product lines that a company is considering investing in, and the respective decisions on the amounts to be invested, in order to maximize profits (Figure 103.1). Each proposed new product line has its own estimated operating net returns and allowed investment range (lower and upper bounds). The idea is to maximize the total expected returns on the portfolio of investments subject to some budget constraint.
FIGURE 103.1 Investment allocation model
104. Optimization––Investment Capital Allocation I (Basic Model) File Name: Optimization—Capital Investments (Part A).xls Location: Modeling Toolkit | Optimization | Investment Capital Allocation— Part I Brief Description: This model is used to find the optimal levels of capital investments in a strategic plan based on the different risk and return characteristics of each type of product line. This is the first of a two-part model. Requirements: Modeling Toolkit, Risk Simulator
Companies restructure their product mix to boost sales and profits, increase shareholder value, or survive when the corporate structure becomes impaired. In successful restructurings, management not only actualizes lucrative new projects, but also abandons existing projects when they no longer yield sufficient returns, thereby channeling resources to more value-creating uses. At one level, restructuring can be viewed as changes in financing structures and management. At another level, it may be operational—in response to production overhauls, market trends, technology, and industry or macroeconomic disturbances. For banks called on to finance corporate restructurings, things are a bit different. For example, most loans provide a fixed return over fixed periods that are dependent on interest rates and the borrower's ability to pay. A good loan will be repaid on time and in full. Hopefully, the bank's cost of funds will be low, with the deal providing attractive risk-adjusted returns. If the borrower's business excels, the bank will not participate in upside corporate values (except for a vicarious pleasure in the firm's success). However, if a borrower ends up financially distressed, lenders share much and perhaps most of the pain. Two disparate goals—controlling default (credit) risk, the bank's objective, and value maximization, a traditional corporate aspiration—are often at odds, particularly if borrowers want the money to finance excessively aggressive projects. In the vast majority of traditional credit analyses, where the spotlight focuses on deterministically drawn projections, hidden risks are often exceedingly difficult to uncover. Devoid of viable projections, bankers will time and again fail to bridge gaps between their agendas and client aspirations. This chapter and the next offer ways for bankers to advance both their analytics and their communication skills so that senior bank officials and clients alike can "get the deal done" while ensuring that risk/reward agendas are set in equilibrium. Undeniably, the direct way to achieve results is to take a stochastic view of strategic plans rather than rely inappropriately on deterministic base-case/conservative scenarios.
FIGURE 104.1 Investment allocation model
ABC Bank is asked to approve a $3,410,000 loan facility for the hypothetical firm RI Furniture Manufacturing LTD. Management wants to restructure four of its operating subsidiaries. In support of the facility, the firm supplied the bank with deterministic base-case and conservative consolidating and consolidated projections—income statement, balance sheet, and cash flows. On the basis of the deterministic consolidating projections, bankers developed the stochastic spreadsheet depicted in Figure 104.1. This spreadsheet includes the maximum/minimum investment ranges supporting restructuring in each of the four product lines, and Risk Simulator is applied to run an optimization on the optimal amounts to invest in each product line. The annual operating returns are also set as assumptions in a stochastic optimization run. To run the optimization in the preset model, click on Risk Simulator | Optimization | Run Optimization or click on the Run Optimization icon and select Stochastic Optimization. Figure 104.2 illustrates the results without (before) and with (after) optimization. Notice that the expected returns have increased through an optimal allocation of the investments in the four business units. Figure 104.3 shows the results of two simulation runs (a simulation is run on the original before-optimization allocation, and the simulation is then rerun after the optimization). As expected, the mean or expected return is higher with the optimization, but because we set only the total returns as the objective, risk also went up (as measured by standard deviation) by virtue of higher returns necessitating higher risks. However, at closer inspection, the coefficient of variation, computed as the standard deviation divided by the mean (a risk-to-return ratio), actually stayed relatively constant (the small variations are due to random effects of simulation, which will dissipate with more simulation trials).
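The coefficient-of-variation comparison described above is easy to reproduce: divide the standard deviation of each simulated forecast by its mean. The sample statistics in this short Python sketch are hypothetical stand-ins chosen only to mimic the narrative (higher mean, higher standard deviation, roughly constant CV), not the model's actual figures.

import numpy as np

rng = np.random.default_rng(7)

# Hypothetical simulated return forecasts (the model's are generated by
# Risk Simulator); the "after" case has a higher mean and a higher
# standard deviation, but a roughly unchanged coefficient of variation.
before = rng.normal(100_000, 4_300, 10_000)
after  = rng.normal(110_000, 4_730, 10_000)

for label, sample in (("before optimization", before),
                      ("after optimization", after)):
    cv = sample.std(ddof=1) / sample.mean()   # risk-to-return ratio
    print(f"{label}: mean={sample.mean():,.0f}  "
          f"sd={sample.std(ddof=1):,.0f}  CV={cv:.4f}")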
FIGURE 104.2 Results from before and after an optimization run
FIGURE 104.3 Simulation results from before and after an optimization
Typically, at this point, as a prudent next step, the banker will have to discuss this first optimization run with the firm’s management on three levels: the maximum expected return, the optimal investments/loan facility allocation, and the risk of expected returns. If the risk level is unacceptable, the standard deviation must be reduced to preserve credit grade integrity. In the next chapter, we take this simple model and add more levels of complexity by adding in an efficient frontier model and credit risk contribution effects on the portfolio.
105. Optimization—Investment Capital Allocation II (Advanced Model) File Name: Optimization—Capital Investments (Part B).xls Location: Modeling Toolkit | Optimization | Investment Capital Allocation— Part II Brief Description: This model is used to find the optimal levels of capital investments in a strategic plan based on the different risk and return characteristics of each type of product line as applied in an efficient frontier, and the contribution to the entire portfolio's credit risk is determined. This is the second of a two-part model. Requirements: Modeling Toolkit, Risk Simulator
This model, Part B of the Capital Investments example, is the second of two parts, continued from the previous chapter. To get started, please first review Part A before attempting to run this follow-up model. This model puts a twist on the optimization procedures performed in Part A by now incorporating risk and the return-to-risk ratio (the Sharpe Ratio, a Nobel Prize-winning concept). As can be seen in Figure 105.1, the model is the same as before, with the exception of the inclusion of Column F, which is a measure of risk. This risk measure can be a probability of failure or a volatility computation. Typically, for a project, the higher the expected value, the higher the risk. Therefore, with the inclusion of this risk value, we can now compute not only the expected returns for the portfolio, but also the expected risk for the entire portfolio, weighted to the various investment levels. Instead of simply maximizing returns in the portfolio (Opt Run 1 in Figure 105.1, based on the previous chapter's model), which will, by definition, create a potentially higher-risk portfolio (high risk equals high return), we may also want to maximize the Sharpe Ratio (the portfolio returns-to-risk ratio), which will in turn provide the maximum levels of return subject to the least risk, or, for the same risk, provide the highest returns, yielding an optimal point on the Markowitz Efficient Frontier for this portfolio (Opt Run 2 in Figure 105.1). As can be seen in the optimization
FIGURE 105.1 Revised capital investment model with the inclusion of stochastic risk elements
results table, Opt Run 1 provides the highest returns ($246,072) as opposed to the original value of $231,058. Nonetheless, in Opt Run 2, where we maximize the Sharpe Ratio instead, we get a slightly lower return ($238,816), but the total risk for the entire portfolio is reduced to 4.055% instead of 4.270%. A simulation is run on the original investment allocation, Opt Run 1, and Opt Run 2, and the results are shown in Figure 105.2. Here, we clearly see that the slightly lower returns come with a reduced level of risk, which is good for the bank. To run this predefined optimization model, simply click on Risk Simulator | Optimization | Run Optimization and click OK. To change the objective from maximizing the returns-to-risk ratio to maximizing returns, simply click on the Objective O icon in the Risk Simulator toolbar or click on Risk Simulator | Optimization | Set Objective and link it to either cell C20 for the Sharpe Ratio or cell C17 for Returns, and then run the optimization. In addition, the total investment budget allowed can be changed to analyze what happens to the returns and risk of the portfolio. For instance, Figure 105.3 illustrates the results from such an analysis and the resulting expected risk and return values. In order to better understand each point's risk structure, an optimization is carried out and a simulation is run. Further down, two sample extreme cases are shown, where $2.91M versus $3.61M are invested. From the results, one can see that the higher the risk (higher range of outcomes), the higher the returns (expected values are higher and
FIGURE 105.2 Simulation results from optimization runs
the probability of beating the original expected value is also higher). Such analyses provide the bank the ability to better analyze the risk and return characteristics. Many additional analyses can be applied in this model. For instance, we can apply the Probability of Default computations using implied Asset Value and implied Volatility to obtain the Cumulative Default Probability, so that the bank can understand the risk of this deal and decide (based on the portfolio of deals) what the threshold of lending should be. For instance, if the bank does not want anything above a 3.5% probability of default for a 5-year cumulative loan, then $3.41M is the appropriate loan value threshold. In addition, Value at Risk for a portfolio of loans can also be determined before and after this new loan, so the bank can decide whether absorbing this new loan is possible, along with the effects on the entire portfolio's VaR
FIGURE 105.3 Efficient frontier results
FIGURE 105.4 Probability of default and contributions to portfolio Value at Risk
valuation and capital adequacy. Figure 105.4 shows some existing loans (grouped by tranches and types), and the new loan request. It is up to management to decide if the additional hit to capital requirements is reasonable. For details on credit risk analysis and Value at Risk models, please refer to the relevant chapters in this book.
106. Optimization—Military Portfolio and Efficient Frontier File Name: Optimization—Military Portfolio and Efficient Frontier Location: Modeling Toolkit | Optimization | Military Portfolio and Efficient Frontier Brief Description: Illustrating how to run an optimization on discrete binary integer decision variables in project selection and mix, view and interpret optimization results, create additional qualitative constraints to the optimization model, and generate an efficient frontier by applying optimization on changing constraints Requirements: Modeling Toolkit, Risk Simulator
This model shows 20 different projects with different risk and return characteristics as well as several qualitative measures, such as strategic score, military readiness score, tactical score, and comprehensive score (see Figure 106.1). These scores are obtained from subject matter experts—decision makers, leaders, and managers of organizations—whose expert opinions are gathered through the double-blind Delphi method. After being scrubbed (e.g., extreme values are eliminated, large data variations are analyzed, and multiple iterations of the Delphi method are performed), their scores can be entered into a Distributional Fitting routine to find the best-fitting distribution or used to develop a Custom distribution for each project in Risk Simulator. The central idea of this model is to find the best portfolio allocation such that the portfolio's total comprehensive strategic score and profits are maximized. That is, it is used to find the best project mix in the portfolio that maximizes the total Profit*Score measure, where profit points to the portfolio-level net returns after considering the risks and costs of each project, while the score measures the total comprehensive score of the portfolio, all the while being subject to constraints on the number of projects, the budget, full-time equivalence (FTE) resources, and strategic ranking.
Objective: Maximize total portfolio returns times the portfolio comprehensive score (C28)
Decision Variables: Allocation or go/no-go decision (J5:J24)
Restrictions on Decision Variables: Binary decision variables (0 or 1)
FIGURE 106.1 Military portfolio optimization
Constraints:
Total Cost (E26) is less than or equal to $3800 (in thousands or millions of dollars)
Less than or equal to 10 projects selected (J26) in the entire portfolio
Full-time equivalence resources have to be less than or equal to 80 (M26)
Total Strategic Ranking for the entire portfolio must be less than or equal to 100 (F26)
RUNNING AN OPTIMIZATION To run this preset model, simply open the profile (Risk Simulator | Change Profile) and select Military Portfolio and Efficient Frontier. Then run the optimization (Risk Simulator | Optimization | Run Optimization) or, for practice, set up the model yourself. 1. Start a new profile (Risk Simulator | New Profile) and give it a name. 2. In this example, all the allocations are required to be binary (0 or 1) values, so first select cell J5 in the Integer Optimization worksheet and define it as a decision variable (Risk Simulator | Optimization | Set Decision or click on the Set Decision icon) and make it a Binary Variable. This setting automatically sets the minimum to 0 and the maximum to 1, so the variable can take on only a value of 0 or 1. Then use the Risk Simulator Copy on cell J5, select cells J6 to J24, and use Risk Simulator's Paste (Risk Simulator | Copy Parameter and Risk Simulator | Paste Parameter or use the Risk Simulator copy and paste icons, NOT the Excel copy/paste). 3. Next, set up the optimization's constraints by selecting Risk Simulator | Optimization | Constraints and selecting ADD. Then link to cell E26 and make it <=3800, select ADD one more time, click on the link icon, point to cell J26, and set it to <=10. Continue by adding the other constraints (cell M26 <=80 and F26 <=100). 4. Select cell C28, the objective to be maximized, and select Risk Simulator | Optimization | Set Objective; then select Risk Simulator | Optimization | Run Optimization. Review the different tabs to make sure that all the required inputs in steps 2 and 3 above are correct. 5. You may now select the optimization method of choice and click OK to run the optimization. The model setup is illustrated in Figure 106.2. Note: If you plan to run either a dynamic or stochastic optimization routine, make sure that you first define the assumptions in the model. That is, make sure that some of the cells in C5:C24 and E5:F24 are assumptions. The suggestion for this model is to run a discrete optimization.
PORTFOLIO EFFICIENT FRONTIER Clearly, running the optimization procedure will yield an optimal portfolio of projects where the constraints are satisfied. This represents a single optimal portfolio
FIGURE 106.2 Setting up an optimization model
FIGURE 106.2 (Continued)
FIGURE 106.3 Efficient frontier
point on the efficient frontier (e.g., Portfolio B on the chart in Figure 106.3). Then, by subsequently changing some of the constraints—for instance, by increasing the budget and allowed projects—we can rerun the optimization to produce another optimal portfolio given these new constraints. Therefore, a series of optimal portfolio allocations can be determined and graphed. This graphical representation of all optimal portfolios is called the portfolio efficient frontier. At this juncture, each point represents a portfolio allocation; for instance, Portfolio B might represent projects 1, 2, 5, 6, 7, 8, 10, 15, and so forth, while Portfolio C might represent projects 2, 6, 7, 9, 12, 15, and so forth, each resulting in different tactical, military, or comprehensive scores and portfolio returns. It is up to the decision maker to decide which portfolio represents the best decision and if sufficient resources exist to execute these projects. Typically, in an efficient frontier analysis, you would select projects where the marginal increase in benefits is positive and the slope is steep. In the example shown, you would select Portfolio D rather than Portfolio E, as the marginal increase is negative on the y-axis (e.g., Tactical Score). That is, spending too much money may actually reduce the overall tactical score, and hence this portfolio should not be selected. Also, in comparing Portfolios A and B, you would be more inclined to choose B, as the slope is steep and the same increase in budget requirements (x-axis) would return a much higher percentage Tactical Score (y-axis). The decision to choose between Portfolios C and D would depend on available resources and the decision maker deciding if the added benefits warrant and justify the added budget and costs. To further enhance the analysis, you can obtain the optimal portfolio allocations for C and D and then run a simulation on each optimal portfolio to determine the probability that D will exceed C in value, and whether this probability of occurrence justifies the added costs.
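The budget-sweep procedure that traces out the frontier can be sketched as a loop: solve the constrained selection problem at one budget, record the optimal score, raise the budget, and repeat. The Python sketch below uses 12 hypothetical projects with hypothetical costs and scores (the real model has 20 projects plus FTE and ranking constraints) so that brute-force enumeration stays fast.

from itertools import product

# Hypothetical project costs and comprehensive scores, stand-ins for
# the worksheet values.
costs  = [310, 460, 120, 540, 390, 260, 480, 610, 220, 350, 290, 430]
scores = [4.1, 6.0, 1.9, 6.8, 5.2, 3.4, 5.9, 7.5, 2.8, 4.6, 3.7, 5.5]

def best_portfolio(budget, max_projects=10):
    """Brute-force the score-maximizing project mix under a budget."""
    best = (float("-inf"), None)
    for mix in product((0, 1), repeat=len(costs)):
        if sum(mix) > max_projects:
            continue
        if sum(c * x for c, x in zip(costs, mix)) > budget:
            continue
        score = sum(s * x for s, x in zip(scores, mix))
        best = max(best, (score, mix))
    return best

# Sweep the budget constraint to trace out the efficient frontier.
for budget in range(1500, 4001, 500):
    score, mix = best_portfolio(budget)
    print(f"budget {budget}: score {score:.1f}, "
          f"projects {[i + 1 for i, x in enumerate(mix) if x]}")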
107. Optimization—Optimal Pricing with Elasticity File Name: Optimization—Optimal Pricing with Elasticity Location: Modeling Toolkit | Optimization | Optimal Pricing with Elasticity Brief Description: Illustrating price optimization that maximizes total revenues based on price elasticity of demand Requirements: Modeling Toolkit, Risk Simulator
This model is used to find the optimal pricing levels that will maximize revenues through the use of historical elasticity levels. The price elasticity of demand is a basic concept in microeconomics, which can be briefly described as the percentage change of quantity divided by the percentage change in prices. For example, if, in response to a 10% fall in the price of a good, the quantity demanded increases by 20%, the price elasticity of demand would be 20%/(−10%) = −2. In general, a fall in the price of a good is expected to increase the quantity demanded, so the price elasticity of demand is negative, but in some literature, the negative sign is omitted for simplicity (denoting only the absolute value of the elasticity). We can use this fact in several ways, including the point elasticity, obtained by taking the first derivative of the inverse of the demand function and multiplying it by the ratio of price to quantity at a particular point on the demand curve:

$$\varepsilon_d = \frac{\delta Q}{\delta P} \cdot \frac{P}{Q}$$

where ε_d = price elasticity of demand, P = price, and Q = quantity demanded.

Instead of using instantaneous point elasticities, this example uses the discrete version, where we define elasticity as:

$$\varepsilon_d = \frac{\%\Delta Q}{\%\Delta P} = \frac{Q_2 - Q_1}{(Q_2 + Q_1)/2} \div \frac{P_2 - P_1}{(P_2 + P_1)/2} = \frac{Q_2 - Q_1}{Q_2 + Q_1} \cdot \frac{P_2 + P_1}{P_2 - P_1}$$
To further simplify things, we assume that in a category of hotel rooms, cruise ship tickets, airline tickets, or any other products with various categories (e.g.,
FIGURE 107.1 Sample historical pricing
standard room, executive room, suite, etc.), there is an average price and average quantity of units sold per period. Therefore, we can further simplify the equation to:

$$\varepsilon_d = \frac{Q_2 - Q_1}{\bar{Q}} \div \frac{P_2 - P_1}{\bar{P}} = \frac{\bar{P}}{\bar{Q}} \cdot \frac{Q_2 - Q_1}{P_2 - P_1}$$

where P̄ = average price and Q̄ = average quantity. If we have in each category the average price and average quantity sold, as in the model, we can compute the expected quantity sold given a new price if we have the historical elasticity of demand values. Please see Figure 107.1. In other words, if we take:

$$Q_1 - \varepsilon_d (P_2 - P_1)\frac{\bar{Q}}{\bar{P}} = Q_2$$

we would get:

$$Q_1 - \frac{\bar{P}}{\bar{Q}} \cdot \frac{Q_2 - Q_1}{P_2 - P_1}\,(P_2 - P_1)\,\frac{\bar{Q}}{\bar{P}} = Q_2$$

To illustrate, suppose the price elasticity of demand for a single room during high season at a specific hotel property is 3.15 (we use the absolute value), where the average price last season was $750 and the average quantity of rooms sold was 200 units. What would happen if prices were to change from $750 (P1) to $800 (P2)? That is, what would happen to the quantity sold from 200 units (Q1)? Please see Figure 107.2. Please note that ε_d is a negative value, but we treat it as positive here to be consistent with the economic literature. Using the last equation, we compute the newly predicted quantity demanded at $800 per night to be:

$$Q_2 = Q_1 - \varepsilon_d (P_2 - P_1)\frac{\bar{Q}}{\bar{P}} = 200 - 3.15\,(800 - 750)\,\frac{200}{750} = 158$$
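The arithmetic above, and the demand-curve reconstruction discussed below, can be verified with a few lines of Python. The function simply restates the discrete-elasticity prediction equation, with the elasticity entered as an absolute value as in the chapter.

def predicted_quantity(q1, p1, p2, elasticity, avg_p, avg_q):
    """Discrete-elasticity prediction: Q2 = Q1 - e*(P2 - P1)*(Qbar/Pbar).
    `elasticity` is entered as an absolute value, as in the chapter."""
    return q1 - elasticity * (p2 - p1) * (avg_q / avg_p)

# The chapter's single-room example: e = 3.15, Pbar = 750, Qbar = 200.
print(predicted_quantity(q1=200, p1=750, p2=800,
                         elasticity=3.15, avg_p=750, avg_q=200))  # -> 158.0

# Reconstructing the demand curve by sweeping the price level:
for p2 in range(700, 901, 50):
    q2 = predicted_quantity(200, 750, p2, 3.15, 750, 200)
    print(f"price {p2}: predicted quantity {q2:.0f}")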
FIGURE 107.2 Elasticity simulation
The higher the price, the lower the quantity demanded, and vice versa. Indeed, the entire demand curve can be reconstructed by applying different price levels. For instance, the demand curve for the single room is reconstructed in Figure 107.3. Note: See the Simulation—Demand Curve and Elasticity Estimation model for examples on how to obtain elasticity measures.
OPTIMIZATION PROCEDURES Using the principles of price elasticity of demand, we can now figure out the optimal pricing structure of these hotel rooms by setting:
Objective: Maximize total revenues
Constraints: Number of rooms available per type
Decision Variables: Price to charge for each type of room
FIGURE 107.3 Reconstructed demand curve for single room
FIGURE 107.4 Distribution of stochastic optimization decision variable
This model already has the optimization set up. To run it directly: 1. Go to the Model worksheet and click on Risk Simulator | Change Profile and choose the Optimal Pricing with Elasticity profile. 2. Click on the Run Optimization icon or click on Risk Simulator | Optimization | Run Optimization. 3. Select the Method tab and select either Static Optimization if you wish to view the resulting optimal prices, or Stochastic Optimization to run simulation with optimization multiple times to obtain a range of optimal prices. The results from a stochastic optimization routine are seen in the Report worksheet. In addition, several forecast charts will be visible once the stochastic optimization routine completes. For instance, looking at the Executive suites, select Two-Tail, type 90 in the Certainty box, and hit TAB on the keyboard to obtain the 90% confidence level (e.g., the optimal price to charge for the season is between $991 and $993 per night). Please see Figure 107.4. To reset the model manually, perform the next steps. 1. Go to the Model worksheet and click on Risk Simulator | New Profile and give the new profile a name. 2. Reset the values on prices. That is, enter 800 for cells H7 and H8 and 1000 for cells H9 to H12. This sets initial starting prices that are easy to remember and easy to distinguish from the optimized price levels obtained later. 3. Set the objective. Select cell J15 and click on the O (set objective) icon or click Risk Simulator | Optimization | Set Objective.
4. Set the decision variables. Select cell H7 and click on the D (set decision variable) icon or click Risk Simulator | Optimization | Set Decision. Select Continuous and give it the relevant lower and upper bounds (see columns M and N) or click on the link icons and link the lower and upper bounds (cells M7 and N7). 5. Set the constraints. Click on the C icon or Risk Simulator | Optimization | Constraints and add the capacity constraints on the maximum number of available rooms (e.g., click ADD, link cell I7, and make it <=250, and so forth). 6. Click on the Run Optimization icon or click on Risk Simulator | Optimization | Run Optimization. 7. Select the Method tab and select either Static Optimization if you wish to view the resulting optimal prices, or Stochastic Optimization to run simulation with optimization multiple times to obtain a range of optimal prices. But remember that in order to run a stochastic optimization procedure, you need to have assumptions set up. Select cell G7 and click on Risk Simulator | Set Input Assumption and set an assumption of your choice or choose Normal distribution and use the default values. Repeat for cells G8 to G12, one at a time; then you can run the stochastic optimization.
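Since the demand in each room category depends only on that category's own price in this formulation, the revenue-maximizing price can also be found with a simple one-dimensional search per category, capping the predicted quantity at the room capacity. In the Python sketch below, the categories, elasticities, and capacities are hypothetical placeholders for the worksheet inputs.

import numpy as np

def predicted_quantity(p, p1, q1, elasticity, avg_p, avg_q):
    # Discrete-elasticity demand prediction, as derived in the chapter.
    return q1 - elasticity * (p - p1) * (avg_q / avg_p)

# Hypothetical room categories: (last price, last quantity, |elasticity|,
# average price, average quantity, capacity). The real model reads these
# from the worksheet and simulates the elasticities.
rooms = {
    "Standard":  (750, 200, 3.15, 750, 200, 250),
    "Executive": (950, 120, 2.40, 950, 120, 150),
}

for name, (p1, q1, e, avg_p, avg_q, cap) in rooms.items():
    prices = np.linspace(0.8 * p1, 1.3 * p1, 501)          # search range
    q = np.clip(predicted_quantity(prices, p1, q1, e, avg_p, avg_q), 0, cap)
    revenue = prices * q
    best = np.argmax(revenue)
    print(f"{name}: optimal price ~{prices[best]:,.0f}, "
          f"quantity {q[best]:.0f}, revenue {revenue[best]:,.0f}")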
108. Optimization—Optimization of a Harvest Model File Name: Optimization—Optimization of a Harvest Model Location: Modeling Toolkit | Optimization | Optimization of a Harvest Model Brief Description: Illustrating how to find the optimal harvest rate for a population that maximizes return Requirements: Modeling Toolkit, Risk Simulator
This is a population dynamics and harvest model, where, given an initial population and the growth rate of the population, we can determine the optimal harvest rate that maximizes the return on investment, given a specific cost of capital and carrying capacity of the population. For instance, this can be a model on a herd of cattle or a forest of trees, where there is a carrying or maximum capacity of the population. The population dynamics can be summarized as:

Percentage Growth_t = Growth Rate × (1 − Population_{t−1} ÷ Carrying Capacity)
Births_t = Percentage Growth_t × Population_{t−1}
Continuing Population_t = Population_{t−1} − Harvest_t
Population_t = Births_t + Continuing Population_t
FIGURE 108.1 Harvest model
A sample set of inputs is provided, along with the model computations, and the time-series chart shows the population dynamics and interactions. Typically, if there is no excess harvesting, the continuing population graph should stabilize over time. Please see Figures 108.1 and 108.2.
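The population recursion above translates directly into a short simulation, and a one-dimensional search over harvest levels recovers the shape of the optimization. All parameter values in this Python sketch are hypothetical stand-ins for the worksheet inputs, and the grid search stands in for Risk Simulator's optimizer.

import numpy as np

def property_value(harvest, pop0=1000.0, growth=0.25, capacity=2000.0,
                   price=2.0, discount=0.10, years=50):
    """Simulate the population recursion and discount the harvest proceeds.
    All parameters are hypothetical stand-ins for the worksheet inputs."""
    pop, value = pop0, 0.0
    for t in range(1, years + 1):
        pct_growth = growth * (1.0 - pop / capacity)   # logistic growth rate
        births = pct_growth * pop                      # births from P_{t-1}
        taken = min(harvest, pop)                      # cannot overharvest
        pop = (pop - taken) + births                   # continuing + births
        value += price * taken / (1 + discount) ** t   # discounted proceeds
    return value

harvests = np.linspace(0, 300, 301)
values = [property_value(h) for h in harvests]
best = int(np.argmax(values))
print(f"optimal annual harvest ~{harvests[best]:.0f} "
      f"(total value {values[best]:,.0f})")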
PROCEDURE The model has been preset with optimization parameters and simulation assumptions. To run this model, simply: 1. Go to the Model worksheet and click on Risk Simulator | Change Profile and select the Population and Harvest Model profile. 2. Click on the Run Optimization icon or Risk Simulator | Optimization | Run Optimization, and click Replace when the optimization routine is complete.
FIGURE 108.2 Harvest model chart
FIGURE 108.3 Optimized results
You will see the optimal annual harvest is 125.80 thousand, providing the highest total value of the property (Figure 108.3). In fact, after the optimization is run, run a simulation on the optimized annual harvest. You can see that there is a 53% certainty that this optimized harvest level will result in values greater than those expected without optimization (Figure 108.4). To replicate the model and create it from scratch, follow these instructions. 1. Go to the Model worksheet and click on Risk Simulator | New Profile and give the profile a name. 2. Set the objective. Select cell H10 (Total Value of Property) and click on the O icon or Risk Simulator | Optimization | Set Objective (Figure 108.5).
FIGURE 108.4 Total property value forecast chart
FIGURE 108.5 Setting optimization objective
3. Set the decision variable. Select cell D7 (Annual Harvest) and click on the D icon or Risk Simulator | Optimization | Set Decision. Select Continuous, click on the link icons, and link the lower and upper bounds to D12 and D13 (Figure 108.6). A continuous decision variable can be used because the number of trees modeled is in thousands of units. 4. Reset the decision variable. Change the value in cell D7 to some starting value, for example, 100. 5. Prepare to run the optimization. Click on the Run Optimization icon or Risk Simulator | Optimization | Run Optimization (Figure 108.7). 6. Run the optimization. Click on the Method tab, select Static Optimization, and click OK. Click Replace when the optimization routine is complete. You will see the optimal annual harvest is 125.80, providing the highest total value of the property.
FIGURE 108.6 Continuous decision variables
FIGURE 108.7 Optimization run
109. Optimization—Optimizing Ordinary Least Squares File Name: Optimization—Optimizing Ordinary Least Squares Location: Modeling Toolkit | Optimization | Optimizing Ordinary Least Squares Brief Description: Illustrating how to solve a simple bivariate regression model using the ordinary least squares approach, using Risk Simulator’s optimization and regression analysis tools Requirements: Modeling Toolkit, Risk Simulator
It is assumed that the user is knowledgeable about the fundamentals of regression analysis. The general bivariate linear regression equation takes the form:

$$Y = \beta_0 + \beta_1 X + \varepsilon$$
where β₀ = intercept, β₁ = slope, and ε = error term. It is bivariate as there are only two variables, a Y or dependent variable and an X or independent variable, where X is also known as the regressor (sometimes a bivariate regression is also known as a univariate regression as there is only a single independent variable X). The dependent variable is so named because it depends on the independent variable; for example, sales revenue depends on the amount of marketing costs expended on a product's advertising and promotion, making the dependent variable sales and the independent variable marketing costs. An example of a bivariate regression is seen as simply inserting the best-fitting line through a set of data points in a two-dimensional plane, as seen in the accompanying graphs. In other cases, a multivariate regression can be performed, where there are multiple or k number of independent X variables or regressors, and the general regression equation takes the form:

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 + \dots + \beta_k X_k + \varepsilon$$

However, fitting a line through a set of data points in a scatter plot may result in numerous possible lines. The best-fitting line is defined as the single unique line that minimizes the total vertical errors (i.e., the sum of the absolute distances between the actual data points and the estimated line). To find the best-fitting unique line that minimizes the errors, a more sophisticated approach is applied, using regression analysis. Regression analysis therefore finds the unique best-fitting line by requiring that the total errors be minimized, or by calculating:

$$\min \sum_{i=1}^{n} (Y_i - \hat{Y}_i)^2$$

where Ŷᵢ = predicted value and Yᵢ = actual value. Only one unique line minimizes this sum of squared errors. The errors (vertical distances between the actual data and the predicted line) are squared to prevent the negative errors from canceling out the positive errors. Solving this minimization problem with respect to the slope and intercept requires calculating the first derivatives and setting them equal to zero:

$$\frac{d}{d\beta_0} \sum_{i=1}^{n} (Y_i - \hat{Y}_i)^2 = 0 \quad \text{and} \quad \frac{d}{d\beta_1} \sum_{i=1}^{n} (Y_i - \hat{Y}_i)^2 = 0$$
which yields the bivariate regression’s least squares equations: n
β1 =
i=1
n
(Xi − X)(Yi − Y) n
= (Xi − X)2
i=1
n
Xi Yi −
i=1 n i=1
n
Yi
i=1
n
Xi2 −
Xi
i=1
2
n
Xi
i=1
n
β0 = Y − β1 X
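These least squares equations translate directly into code. The Python sketch below computes the slope and intercept from the summation formulas on hypothetical data and cross-checks them against numpy's built-in least squares fit; it mirrors the manual computations in region B44:G66 in spirit, not with the model's actual data.

import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 22)                       # hypothetical regressor
y = 2500 + 90 * x + rng.normal(0, 400, 22)       # hypothetical response

n = len(x)
beta1 = (np.sum(x * y) - np.sum(x) * np.sum(y) / n) / \
        (np.sum(x**2) - np.sum(x)**2 / n)        # least squares slope
beta0 = y.mean() - beta1 * x.mean()              # least squares intercept

# Cross-check against numpy's built-in least squares fit.
check1, check0 = np.polyfit(x, y, 1)             # returns [slope, intercept]
assert np.allclose([beta0, beta1], [check0, check1])
print(f"intercept = {beta0:.2f}, slope = {beta1:.2f}")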
MODEL BACKGROUND In the model, we have a set of Y and X values, and we need to find the slope and intercept coefficients that minimize the sum of squared errors of the residuals, which will hence yield the ordinary least squares (OLS) unique line and regression equation. We start off with some initial inputs (say, 3000 for both the intercept and slope; these are simply placeholders for the values we will solve for). Then we compute the predicted values using the initial slope and intercept, and then the residual error between the predicted and actual Y values (Figure 109.1). These residuals are then squared and summed to obtain the sum of squared residuals. In order to get the unique line that minimizes the sum of the squared residual errors, we employ an optimization process, then compare the results to manually computed values, and finally reconfirm the results using Risk Simulator's regression module.
Procedure 1. Click on Risk Simulator | Change Profile and select OLS Optimization to open the preset profile. 2. Go to the Model worksheet and click on Risk Simulator | Optimization | Run Optimization and click RUN. To obtain a much higher level of precision, rerun the optimization a second time to make sure the solution converges. This matters because the starting points chosen (the placeholder values of 3000) sometimes require a second or third optimization run before the solution converges.
FIGURE 109.1 Optimizing ordinary least squares approach
If you wish to re-create the optimization, you can attempt to do so by following the next instructions. The optimization setup is also illustrated in Figure 109.2. 1. Start a new profile (Risk Simulator | New Profile) and give it a name. Change the cells F37 and F38 back to some starting point, such as 3000 and 3000. This way, when the results are obtained, you see the new values. 2. Select cell F39 and make it the Objective to minimize (this is the total sum of squared errors) by clicking on Risk Simulator | Optimization | Set Objective, make sure F39 is selected, and click Minimize.
FIGURE 109.2 Setting up the least squares optimization
FIGURE 109.3 Optimized OLS results
3. Set the decision variables as the intercept and slope (cells F37 and F38, one at a time) by first selecting cell F37 and clicking on Risk Simulator | Optimization | Set Decision or click on the D icon. Put in some large lower and upper bounds, such as −100000 and 100000. Repeat for cell F38. 4. Click on Risk Simulator | Optimization | Run Optimization or click on the Run Optimization icon. Make sure to select Static Optimization and click Run. The optimized results are shown in Figure 109.3. The intercept is 2489.16 and the slope is 91.98, which means that the predicted regression equation is Y = 2489.16 + 91.98X. We can also recover these values by computing the coefficients manually. The region B44 to G66 shows the manual computations, and the results are shown in cells C71 and C72, which are identical to those computed using the optimization procedure (Figure 109.4). The third and perhaps easiest alternative is to use Risk Simulator's regression module. Perform the regression using these steps: 1. Select the Y and X values (including the header). That is, select the area B12:C34. 2. Click on Risk Simulator | Forecasting | Multiple Regression Analysis. 3. Make sure Y is selected as the dependent variable and click OK. The results are generated as the Report worksheet in this file. Clearly the regression report is much more comprehensive than what we have done, complete with analytics and statistical results. The regression results using Risk Simulator show the same values as we have computed using the two methods shown in Figure 109.5. Spend some time reading through the regression report. Using Risk Simulator is a lot simpler and provides more comprehensive results. In fact, the t-test shows that X is statistically not significant and hence is a lousy predictor of Y. This is a fact that we cannot determine using the first two methods. Moreover, when you have multiple independent X variables in a multivariate regression, you cannot compute the results manually using the second method. The only recourse and best alternative is to use this regression module.
FIGURE 109.4 Manual computations
FIGURE 109.5 Risk Simulator results
110. Optimization—Stochastic Portfolio Allocation File Name: Optimization—Stochastic Portfolio Allocation Location: Modeling Toolkit | Optimization | Stochastic Portfolio Allocation Brief Description: Illustrating how to run a stochastic optimization on continuous decision variables with simulation and interpreting optimization results Requirements: Modeling Toolkit, Risk Simulator
This model shows four asset classes with different risk and return characteristics. The idea here is to find the best portfolio allocation such that the portfolio's bang for the buck, or returns-to-risk ratio, is maximized. That is, the goal is to allocate 100% of an individual's investment among several different asset classes (e.g., different types of mutual funds or investment styles: growth, value, aggressive growth, income, global, index, contrarian, momentum, etc.). This model is different from others in that there exist several simulation assumptions (risk and return values for each asset), as seen in Figure 110.1. A simulation is run, then optimization is executed, and the entire process is repeated multiple times to obtain distributions of each decision variable. The entire analysis can be automated using stochastic optimization. In order to run an optimization, several key specifications on the model have to be identified first:
Objective: Maximize Return to Risk ratio (C12)
Decision Variables: Allocation weights (E6:E9)
Restrictions on Decision Variables: Minimum and maximum required (F6:G9)
Constraints: Portfolio total allocation weights 100% (E11 is set to 100%)
Simulation Assumptions: Return and risk values (C6:D9)
FIGURE 110.1 Asset allocation model ready for stochastic optimization
The model shows the various asset classes. Each asset class has its own set of annualized returns and annualized volatilities. These return and risk measures are annualized values such that they can be consistently compared across different asset classes. Returns are computed using the geometric average of the relative returns, while the risks are computed using the logarithmic relative stock returns approach. See the various chapters on Volatility models for computational details. The allocation weights in column E hold the decision variables, which are the variables that need to be tweaked and tested such that the total weight is constrained at 100% (cell E11). Typically, to start the optimization, we will set these cells to a uniform value; in this case, cells E6 to E9 are set at 25% each. In addition, each decision variable may have specific restrictions in its allowed range. In this example, the lower and upper allocations allowed are 10% and 40%, as seen in columns F and G. This setting means that each asset class may have its own allocation boundaries. Next, column H shows the return-to-risk ratio, which is simply the return percentage divided by the risk percentage for each asset; the higher this value, the higher the bang for the buck. The remaining parts of the model show the individual asset class rankings by returns, risk, return-to-risk ratio, and allocation. In other words, these rankings show at a glance which asset class has the lowest risk, or the highest return, and so forth.
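The structure of this allocation problem (maximize the portfolio return-to-risk ratio over bounded weights that sum to 100%) can be sketched with scipy's SLSQP solver. In the Python sketch below, the return and volatility figures are hypothetical, and the simple weighted-volatility risk measure is an assumption of this illustration; the model's own risk computation may differ.

import numpy as np
from scipy.optimize import minimize

# Hypothetical annualized returns and volatilities for four asset classes;
# the model's cells C6:D9 hold the real (simulated) values.
ret = np.array([0.108, 0.092, 0.134, 0.078])
vol = np.array([0.172, 0.121, 0.251, 0.094])

def neg_return_to_risk(w):
    # Portfolio return over portfolio risk; risk here is a simple
    # weighted-volatility measure rather than a full covariance matrix
    # (an assumption of this sketch).
    return -(w @ ret) / (w @ vol)

bounds = [(0.10, 0.40)] * 4                 # min/max allocation per class
cons = {"type": "eq", "fun": lambda w: w.sum() - 1.0}   # weights sum to 100%
res = minimize(neg_return_to_risk, x0=np.full(4, 0.25),
               bounds=bounds, constraints=cons, method="SLSQP")
print("optimal weights:", np.round(res.x, 4), " ratio:", -res.fun)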
RUNNING AN OPTIMIZATION To run this model, simply click on Risk Simulator | Optimization | Run Optimization. Alternatively, and for practice, you can set up the model using the approach that follows. 1. Start a new profile (Risk Simulator | New Profile). 2. For stochastic optimization, set distributional assumptions on the risk and returns for each asset class. That is, select cell C6 and set an assumption (Risk Simulator | Set Input Assumption) and make your own assumption as required. Repeat for cells C7 to D9.
3. Select cell E6, and define the decision variable (Risk Simulator | Optimization | Set Decision or click on the Set Decision icon), make it a Continuous Variable, and then link the decision variable's name and minimum/maximum required to the relevant cells (B6, F6, G6). 4. Then use the Risk Simulator copy on cell E6, select cells E7 to E9, and use Risk Simulator's paste (Risk Simulator | Copy Parameter and Risk Simulator | Paste Parameter, or use the copy and paste icons). Remember not to use Excel's copy/paste functions. 5. Next, set up the optimization's constraints by selecting Risk Simulator | Optimization | Constraints, selecting ADD, selecting cell E11, and making it equal to 100% (total allocation; do not forget the % sign). 6. Select cell C12, the objective to be maximized, and make it the objective: Risk Simulator | Optimization | Set Objective or click on the O icon. 7. Run the optimization by going to Risk Simulator | Optimization | Run Optimization. Review the different tabs to make sure that all the required inputs in steps 2 and 3 are correct. Select Stochastic Optimization and let it run for 500 trials repeated 20 times. (Figure 110.2 illustrates these setup steps.)
FIGURE 110.2 Setting up the stochastic optimization
FIGURE 110.2 (Continued)
VIEWING AND INTERPRETING FORECAST RESULTS Stochastic optimization is performed when a simulation is first run and then the optimization is run. Then the whole analysis is repeated multiple times. The result is a distribution of each decision variable rather than a single-point estimate (Figure 110.3). This means that instead of saying you should invest 30.57% in Asset 1, the optimal decision is to invest between 30.10% and 30.99% as long as the total portfolio sums to 100%. This way the results provide management or decision makers a range of flexibility in the optimal decisions.
FIGURE 110.3 Simulated results from the stochastic optimization approach
111. Options Analysis—Binary Digital Instruments

File Name: Options Analysis—Binary Digital Instruments
Location: Modeling Toolkit | Options Analysis | Binary Digital Instruments
Brief Description: Illustrating investment vehicles whereby the holder will be paid some cash amount or be provided the underlying asset as long as the value of the asset does not breach some specified barriers
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2BinaryDownAndInCashAtExpirationOrNothing, B2BinaryUpAndInCashAtExpirationOrNothing, B2BinaryDownAndInAssetAtExpirationOrNothing, B2BinaryUpAndInAssetAtExpirationOrNothing, B2BinaryDownAndOutCashAtExpirationOrNothing, B2BinaryUpAndOutCashAtExpirationOrNothing, B2BinaryDownAndOutAssetAtExpirationOrNothing, B2BinaryUpAndOutAssetAtExpirationOrNothing, B2BinaryDownAndInCashAtHitOrNothing, B2BinaryUpAndInCashAtHitOrNothing, B2BinaryDownAndInAssetAtHitOrNothing, B2BinaryUpAndInAssetAtHitOrNothing
Binary exotic instruments (also known as Digital, Accrual, or Fairway instruments) become valuable only if a barrier (upper or lower) is breached (or not breached, depending on the instrument), and the payout can be some prespecified cash amount or the underlying asset itself. The cash or asset changes hands either at the point when the barrier is breached or at the end of the instrument's maturity (at expiration), assuming that the barrier is breached at some point prior to maturity.

For instance, down and in cash at expiration instruments pay the specified cash amount at expiration if and only if the asset value breaches the lower barrier (the asset value goes below the lower barrier), providing the holder of the instrument a safety net or cash insurance in case the underlying asset does not perform well. In up and in instruments, the cash or asset is provided if the underlying asset goes above the upper barrier threshold. In up and out or down and out instruments, the asset or cash is paid as long as the barrier is not breached. In at expiration instruments, cash and assets are paid at maturity, whereas at hit instruments are payable at the point when the barrier is breached. Figure 111.1 shows a sample set of inputs in a binary digital instrument.
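As a rough illustration of how such an instrument can be valued, the Monte Carlo sketch below prices a down-and-in, cash-at-expiration-or-nothing instrument under a geometric Brownian motion assumption. All inputs are hypothetical; the simulation merely approximates what the closed-form B2Binary* functions return.

```python
# Sketch: Monte Carlo value of a down-and-in, cash-at-expiration-or-nothing
# instrument under geometric Brownian motion (hypothetical inputs).
import numpy as np

S0, barrier, cash = 100.0, 90.0, 10.0   # spot price, lower barrier, cash payout
r, sigma, T = 0.05, 0.25, 1.0           # risk-free rate, volatility, maturity (years)
steps, paths = 252, 20_000

rng = np.random.default_rng(42)
dt = T / steps
z = rng.standard_normal((paths, steps))
log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
S = S0 * np.exp(log_paths)

knocked_in = (S <= barrier).any(axis=1)    # lower barrier breached at some point
payoff = np.where(knocked_in, cash, 0.0)   # cash paid at expiration only if knocked in
print(f"Instrument value: {np.exp(-r * T) * payoff.mean():.4f}")
```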
FIGURE 111.1 Binary digital options
112. Options Analysis—Inverse Floater Bond

File Name: Options Analysis—Inverse Floater Bond Lattice
Location: Modeling Toolkit | Options Analysis | Inverse Floater Bond Lattice
Brief Description: Illustrating a bond or debt with a coupon payment that is inversely related to interest rates
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2BDTInterestRateLattice, B2BDTFloatingCouponPriceLattice, B2BDTNoncallableDebtPriceLattice, B2BDTFloatingCouponPriceValue, B2BDTNoncallableDebtPriceValue
A floating coupon bond is a bond or other type of debt whose coupon rate has a direct or inverse relationship to short-term interest rates; it can be constructed using modified interest-rate-based binomial lattices. With an inverse floater, as interest rates rise, the coupon rate falls. When short-term interest rates fall, an inverse floater holder benefits in two ways: The bond appreciates in price, and the yield increases. The opposite is true for a direct floater bond. Alternatively, coupons could be allowed to float based on some external benchmark.

The only way such inverse floaters can be valued is through the use of modified binomial lattices, where the short rate or spot interest rate can be simulated using a discrete step approach in a lattice. Please note that a regular binomial or trinomial lattice used to model stock prices and options cannot be used. Those stock price lattices assume a Brownian motion process, whereas the modified lattices used in this chapter (based on the Black-Derman-Toy methodology) are calibrated to the term structure of interest rates with localized and changing volatilities over time. At each step in the lattice, the yield curve or term structure of interest rates is mirrored through a series of interest rates and instantaneous volatilities changing over time. The inverse rate, or the amount of coupon rate decrease per percent increase in interest rate, is an input to compute this inverse floater. The result is the value of the inverse floater hedge. It is also assumed that this bond is callable, and the callable price and time step are also required inputs.

For instance, Figures 112.1 and 112.2 show the input parameters required for solving this model (e.g., face value of the bond or principal of the debt, maturity, total steps in the lattice, current spot risk-free interest rate, the current coupon rate, market price of the bond, and whether the bond is callable at certain steps and at a certain predetermined callable price). Further, the period-specific coupon payments, risk-free or external interest rate, and the interest rate volatilities can be entered as inputs. These three variables are entered as a time series (creating different interest rate yield curves with specific shapes and localized volatilities) in rows 13 to 15 of the model.
FIGURE 112.1 Inverse floater bond lattice (part 1)
FIGURE 112.2 Inverse floater bond lattice (part 2)
The first step is to create the short rate lattice, that is, the fluctuations of the interest rates based on the term structure of interest rates and localized volatilities entered in rows 14 and 15. To make sure that the interest rate lattice is built correctly, simply set all the volatilities to zero and use some simple interest rates such as those seen in Figure 112.3. Here we see that step 1's lattice rate of 3% is the sum of step 0's and step 1's spot rates (1% and 2%), step 2's lattice rate of 5% is the sum of step 1's and step 2's rates (2% and 3%), and so forth. This indicates that the interest rate lattice is calibrated correctly for interest rates.

Next, we can check that the interest rate lattice is calibrated correctly for the localized volatility by putting in nonzero annualized volatilities and making the number of years match the number of lattice steps (this means each step taken on the lattice is one year). Figure 112.4 shows this calibration (cell C5 = C6 = 12, row 15 shows all nonzero volatilities, and the volatility checks in row 19 are simply the natural logarithms of the ratios of the upper and lower values on the lattice, divided by 2). As an example, the formula in cell D19 is LN(D21/D22)/2, which yields 2%, the input local volatility for that step. The remaining columns in row 19 match the input volatilities, indicating that the interest rate lattice is calibrated correctly for the yield curve and its corresponding volatilities.

To create this interest rate lattice, use the B2BDTInterestRateLattice function and link the inputs to the relevant locations in the spreadsheet. Figure 112.5 shows the function used and the input links for cell C21, the first input in the interest rate lattice. Once this function is completed, copy cell C21, select the 13 × 13 matrix (we need a 13 × 13 matrix because we have a 12-step lattice starting at time 0, yielding a total of 13 steps), and paste the values in this area (C21:O33). Then make sure the entire area is selected, press F2, and then hold down Shift + Control and hit Enter. This will update the matrix to the results seen in Figure 112.1.

Replicate these steps for the second matrix using the B2BDTFloatingCouponPriceLattice function, which computes the price of a bond with floating and changing coupon payments. Finally, generate the third lattice using B2BDTNoncallableDebtPriceLattice for a noncallable straight bond value without any floating or changing coupons. The prices of these bonds are read directly off the lattices at step 0—for example, $108.79 and $97.59 for the two bonds, respectively. The difference in price ($11.19 in cell K5) comes from the hedging effects of the floating coupons.

In addition, instead of having to build complete lattices, you can use the B2BDTFloatingCouponPriceValue and B2BDTNoncallableDebtPriceValue functions to obtain the values of the bonds (see cells F35 and F51); the difference is the value of the floating coupon hedge. These "value" functions come in very handy, as the results can be obtained immediately without having to re-create cumbersome lattice models. In Chapters 114 and 115, we discuss other lattice models used to value bond options, but the same modeling paradigm discussed in this chapter holds, including the use of lattice versus value functions and the methodology for creating a lattice matrix in Excel.
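The two calibration checks just described can be expressed in a few lines of code. The sketch below uses the 1%, 2%, and 3% spot rates from the example above and a hypothetical pair of adjacent lattice nodes for the volatility check (the LN(D21/D22)/2 formula in cell D19).

```python
# Sketch of the two calibration checks (spot rates from the text's example;
# the adjacent lattice nodes for the volatility check are hypothetical).
import math

# Zero-volatility check: each step's lattice rate equals the sum of that
# step's and the prior step's spot rates (1% and 2% give 3%; 2% and 3% give 5%).
spot_rates = [0.01, 0.02, 0.03]
lattice_rates = [spot_rates[0]] + [spot_rates[i - 1] + spot_rates[i]
                                   for i in range(1, len(spot_rates))]
print("Zero-volatility lattice rates:", lattice_rates)   # [0.01, 0.03, 0.05]

# Local-volatility check: half the log ratio of the upper and lower nodes
# recovers the input volatility for that step (cell D19's LN(D21/D22)/2).
rate_up, rate_down = 0.0312, 0.0300   # hypothetical adjacent lattice nodes
print(f"Implied local volatility: {math.log(rate_up / rate_down) / 2:.4f}")   # about 0.02
```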
FIGURE 112.3 Calibrating an interest rate lattice
FIGURE 112.4 Calibrating an interest rate lattice
FIGURE 112.5 Creating the interest rate lattice matrix
113. Options Analysis—Options-Trading Strategies

File Name: Options Analysis—Option Strategies
Location: Modeling Toolkit | Options Analysis | Options Trading Strategies
Brief Description: Illustrating various options-trading strategies, such as covered calls, protective puts, bull and bear spreads, straddles, and strangles
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2OptionStrategyLongCoveredCall, B2OptionStrategyWriteCoveredCall, B2OptionStrategyLongProtectivePut, B2OptionStrategyWriteProtectivePut, B2OptionStrategyLongBullDebitSpread, B2OptionStrategyLongBearDebitSpread, B2OptionStrategyLongBullCreditSpread, B2OptionStrategyLongBearCreditSpread, B2OptionStrategyLongStraddle, B2OptionStrategyWriteStraddle, B2OptionStrategyLongStrangle, B2OptionStrategyWriteStrangle
We will start by discussing a simple option example. Suppose ABC's stock is currently trading at $100; you can either purchase one share of ABC stock or purchase an options contract on ABC. Let's see what happens. If you purchase the stock at $100 and in three months the stock goes up to $110, you made $10 or 10%; if it goes down to $90, you lost $10 or 10%. This is simple enough.

Now suppose you purchased a European call option with a strike price of $100 (at-the-money) and a 3-month expiration at a cost of $5. If the underlying stock price goes up to $110, you execute the call option, buy the stock at $100 contractually, and sell the stock in the market at $110. After the cost of the option, you net $110 − $100 − $5 = $5. The cost of the option was $5 and you made $5, so the return was 100%. If the stock goes down to $90, you let the option expire worthless and lose only the $5 paid for the option, rather than the $10 loss from holding the stock. In addition, if the stock goes to $0, you lose only $5 if you held the option versus losing $100 if you held the stock! This shows the limited downside and leveraged upside of the option—the concept of hedging. Through this leverage you can also purchase many options. For instance, if you bought 20 options at $5 each, it would cost you $100, the same as holding one share of stock. If the stock stays at $100 at maturity, you lose nothing on the stock, but you would lose 100% of your option premium. This is the concept of speculation!

Now, let's complicate things. We can represent the payoffs on call and put options graphically, as seen in Figure 113.1. For instance, take the call option with a strike price of $10 purchased when the stock price was also $10. The premium paid to buy the option was $2. Therefore, if at maturity the stock price goes up to $15, the call option is executed (buy the stock at the contractual price of $10, turn around and sell the stock at the market price of $15, make $5 in gross profits, and deduct the $2 cost of the option, which yields a net profit of $3). Following the same logic, the remaining payoff schedule can be determined as shown in Figure 113.1. Note that the breakeven point is when the stock price is $12 (strike price plus the option premium). If the stock price falls to $10 or below, the net is −$2 in losses because the option will never be exercised and the loss is the premium paid on the option. We can now chart the payoff schedules for buying (long) and selling (short) simple calls and puts. The "hockey stick" charts shown in Figure 113.1 are the various option payoff schedules.

Next, we can combine these simple calls and puts in a portfolio to create options-trading strategies. An options-trading strategy refers to the use of multiple investment vehicles, including call options, put options, and the underlying stock. These vehicles can be purchased or sold at various strike prices to create strategies that take advantage of certain market conditions. For instance, Figure 113.2 shows a covered call strategy, which comprises a long stock and short call combination. The position requires buying a stock and simultaneously selling a call option on that stock. The colored charts in the Excel model provide a better visual than those shown in this chapter; please refer to the model for more details.
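A minimal sketch of the long call payoff schedule just described (strike $10, premium $2) follows; it reproduces the numbers behind the hockey-stick chart in Figure 113.1.

```python
# Sketch: the long call payoff schedule behind the hockey-stick chart
# (strike $10, premium $2, as in the example above).
import numpy as np

strike, premium = 10.0, 2.0
stock_at_maturity = np.arange(5.0, 16.0)   # range of terminal stock prices

net_profit = np.maximum(stock_at_maturity - strike, 0.0) - premium
for s, p in zip(stock_at_maturity, net_profit):
    print(f"Stock at maturity ${s:>5.2f} -> net profit ${p:>5.2f}")
# Breakeven at $12 (strike plus premium); net is -$2 at or below $10.
```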
Just as in previous chapters, the functions used are matrices, where the first row is always the stock price at maturity; the second and third rows are the individual positions' net returns (e.g., in the case of the covered call, the second row represents the returns on the stock and the third row represents the short call position); and the fourth row is always the net total profit of the positions combined. As usual, apply the relevant Modeling Toolkit equations, copy and paste the cell's functions to a matrix comprising 4 rows by 18 columns, select the entire matrix area, hold down Shift + Control, and hit Enter to fill in the entire matrix.
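The sketch below mimics that four-row layout for a long covered call; the strike, premium, and price grid are hypothetical, and the rows correspond to the matrix structure just described.

```python
# Sketch: the four-row payoff matrix for a long covered call (long stock
# plus short call). Strike, premium, and the price grid are hypothetical.
import numpy as np

stock_start, strike, premium = 100.0, 105.0, 3.0
stock_at_maturity = np.linspace(85.0, 120.0, 18)             # row 1: terminal stock price

stock_pnl = stock_at_maturity - stock_start                  # row 2: long stock position
short_call_pnl = premium - np.maximum(stock_at_maturity - strike, 0.0)   # row 3: short call
net_pnl = stock_pnl + short_call_pnl                         # row 4: combined net profit

matrix = np.vstack([stock_at_maturity, stock_pnl, short_call_pnl, net_pnl])
print(matrix.round(2))   # 4 rows by 18 columns, mirroring the Toolkit layout
```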
FIGURE 113.1 Options payoff schedules
FIGURE 113.2 Long covered call position
There are also two optional input parameters, the starting and ending points on the plot. These values refer to the starting and ending stock prices at maturity to analyze. If left empty, the software will automatically decide on these values.

The model also illustrates other options-trading strategies. Figures 113.3 and 113.4 show the graphical representations of these positions, and the following list describes these strategies and how they are obtained.

A long covered call position is a combination of buying the stock (underlying asset) and selling a call on the same asset, typically at a different strike price than the starting stock price. Writing a covered call is a combination of selling the stock (underlying asset) and buying a call on the same asset, typically at a different strike price than the starting stock price.

A long protective put position is a combination of buying the stock (underlying asset) and buying a put on the same asset, typically at a different strike price than the starting stock price. Writing a protective put is a combination of selling the stock (underlying asset) and selling a put on the same asset, typically at a different strike price than the starting stock price.

A long bull spread position is the same as writing a bearish spread; both are designed to be profitable if the underlying security rises in price. A bullish debit spread involves purchasing a call and selling a further out-of-the-money call. A bullish credit spread involves selling a put and buying a further out-of-the-money put.

A long bearish spread position takes advantage of a falling market. The strategy involves the simultaneous purchase and sale of options, and both puts and calls can be used. Typically, a higher strike price option is purchased and a lower strike price option is sold. The options should have the same expiration date.

A long straddle position requires the purchase of an equal number of puts and calls with identical strike price and expiration. A straddle provides the opportunity to profit from a prediction about the future volatility of the market; long straddles are used to profit from high volatility when the direction of the move is unknown. Writing a straddle position requires the sale of an equal number of puts and calls with identical strike price and expiration; written straddles are used to profit from low volatility when the direction of the move is unknown.

A long strangle requires the purchase of an out-of-the-money call and the purchase of an out-of-the-money put with a similar expiration (or within the same month), and profits from significant volatility in either direction of the stock's move. Writing a strangle requires the sale of an out-of-the-money call and the sale of an out-of-the-money put with a similar expiration (or within the same month), and profits from low volatility in either direction of the stock's move.
FIGURE 113.3 Options strategies I
FIGURE 113.4 Options strategies II
114. Options Analysis—Options-Adjusted Spreads Lattice

File Name: Options Analysis—Options Adjusted Spreads On Debt
Location: Modeling Toolkit | Options Analysis | Options Adjusted Spreads on Debt
Brief Description: Illustrating option-adjusted spreads (OAS), which value the spread between a straight bond and a bond with some embedded options (such as callability covenants), using modified binomial lattices that assume changing interest rates and volatilities over time
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2BDTInterestRateLattice, B2BDTNoncallableDebtPriceLattice, B2BDTCallableDebtPriceLattice
Certain types of debt come with an option-embedded provision; for instance, a bond might be callable if the market price exceeds a certain value (when prevailing interest rates drop, making it more profitable for the issuing company to call the debt and reissue new debt at the lower rate), or have a prepayment allowance, as in mortgages or lines of credit. Use this model to compute the option-adjusted spread, that is, the additional premium that should be charged on the option provision.

Figure 114.1 illustrates the OAS lattice model with a set of sample inputs. The interest rates are assumed to be changing over time, with changing instantaneous volatilities, as in real life, where the yield curve or term structure of interest rates changes over time and has different volatilities at different ends of the curve; the entire yield curve may shift, twist, or bend as maturities lengthen. (See the Term Structure of Volatility chapter for details on how to compute interest rate volatilities and the typical characteristics of volatility smiles and frowns.)

Setting up this model is very similar to the inverse floater bond model shown in Chapter 112. Please review that chapter for how to create the lattice matrix, as well as for information on calibrating the underlying interest rate lattice to make sure it models the term structure of interest rates and the corresponding localized volatilities of interest rates. Note that one of the key inputs in the model is the call price of the bond and the time to call, translated into time steps. For instance, in a four-year debt modeled using an eight-step lattice, if the bond is callable starting in the third year, then the callable step is set as 6. Note that this model is the same as Chapter 112's model; the main difference is that this model allows you to generate your own modified binomial lattices to any number of steps. (Be careful: The higher the number of steps, the longer it takes to generate and compute the lattices, depending on your computer's processor speed.)
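To convey the intuition only, the sketch below solves for a spread by root-finding: the constant amount that, when added to the discount rate, reprices the option-embedded bond to its observed market price. A flat base curve and annual coupons are simplifying assumptions, and all figures are hypothetical; the Toolkit's lattice functions handle the full term structure and its volatilities.

```python
# Sketch: constant spread that reprices an option-embedded bond to its
# observed market price (flat curve, annual coupons; hypothetical inputs).
from scipy.optimize import brentq

face, coupon_rate, years = 100.0, 0.06, 4
base_rate = 0.05                 # flat base discount rate
market_price = 101.50            # observed price of the option-embedded bond

def price_with_spread(spread):
    cash_flows = [face * coupon_rate] * (years - 1) + [face * (1 + coupon_rate)]
    return sum(cf / (1 + base_rate + spread) ** (t + 1)
               for t, cf in enumerate(cash_flows))

oas = brentq(lambda s: price_with_spread(s) - market_price, -0.05, 0.10)
print(f"Option-adjusted spread: {oas:.4%}")   # roughly 0.6% for these inputs
```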
FIGURE 114.1 Option-adjusted spreads lattice maker
115. Options Analysis—Options on Debt

File Name: Options Analysis—Options on Debt
Location: Modeling Toolkit | Options Analysis | Options on Debt
Brief Description: Computing options on debt using the binomial lattice approach, assuming that the yield curve and term structure of interest rates change over time (both interest rates and the volatility of interest rates are changing over time)
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2BDTInterestRateLattice, B2BDTZeroPriceLattice, B2BDTAmericanCallonDebtLattice, B2BDTAmericanPutonDebtLattice, B2BDTEuropeanCallonDebtLattice, B2BDTEuropeanPutonDebtLattice, B2BDTAmericanCallonDebtValue, B2BDTAmericanPutonDebtValue, B2BDTEuropeanCallonDebtValue, B2BDTEuropeanPutonDebtValue
This model is used to compute American and European options on debt using a binomial lattice modified to account for the changing volatilities and term structure of interest rates over the life of the option (Figure 115.1). It provides a sample template for building interest rate pricing lattices using the Modeling Toolkit functions and is set up to compute only up to 10 steps. Using this template, you can generate additional steps as required, as well as create customized lattices to value any type of embedded-option debt instrument. You can also use the Real Options SLS software to value many types of options and options-embedded instruments.

The modified lattices introduced in Chapters 112 and 114 are specific to financial instruments that depend on interest rates, where the underlying asset's fundamental values are driven by interest rate fluctuations. Please see Chapter 112 for instructions on setting up these lattice matrices and on calibrating them for the term structure of interest rates and its localized interest rate volatilities. The model also shows how to compute the American and European call and put options on the bond. Note that the entire lattices can be built using the "lattice" functions, or the value of the bond option can be computed simply by using the "value" functions in the model.
FIGURE 115.1 Generating binomial lattices on debt options
116. Options Analysis—Five Plain Vanilla Options

File Names: Options Analysis—Plain Vanilla Call Options (I–IV) and Plain Vanilla Put Options
Location: Modeling Toolkit | Options Analysis | Plain Vanilla Options (I–IV and Put Options)
Brief Description: Solving plain vanilla call and put option examples with the Real Options SLS software by applying advanced binomial lattices
Requirements: Real Options SLS, Modeling Toolkit
This chapter shows five plain vanilla call and put options solved using Real Options SLS. These are plain vanilla options, as they are the simplest calls and puts, without any exotic add-ons.

A simple European call option is computed in this example using SLS. To follow along, start this example file by selecting Start | Programs | Real Options Valuation | Real Options SLS | Real Options SLS. Then click on the first icon, Create a New Single Asset Model, and when the single asset SLS opens, click on File | Examples. This lists the example files that come with the software for the single-asset lattice solver. Start the Plain Vanilla Call Option I example. This example file will be loaded into the SLS software, as seen in Figure 116.1. Alternatively, you can start the Modeling Toolkit software, click on Modeling Toolkit in Excel, select Options Analysis, and select the appropriate Options Analysis files. As this chapter uses the Real Options SLS software, it is recommended that readers unfamiliar with real options analysis or financial options review the introductory materials in Chapter 174 (Introduction to the SLS Software) before proceeding with this chapter.

The starting PV Underlying Asset or starting stock price is $100, and the Implementation Cost or strike price is $100 with a 5-year maturity. The annualized risk-free rate of return is 5%, and the historical, comparable, or future expected annualized volatility is 10%. Click on RUN (or Alt-R) and a 100-step binomial lattice is computed. The results indicate a value of $23.3975 for both the European and American call options. Benchmark values using Black-Scholes and Closed-Form American approximation models, as well as standard plain vanilla Binomial American and Binomial European call and put options with 1,000-step binomial lattices, are also computed. Notice that only the American and European options are selected, and the computed results are for these simple plain vanilla American and European call options. The benchmark results use both closed-form models (Black-Scholes and Closed-Form Approximation models) and 1,000-step binomial lattices on plain vanilla options. You can change the steps to 1,000 in the basic inputs section (or open and use the Plain Vanilla Call Option II example model) to verify that the answers computed are equivalent to the benchmarks, as seen in Figure 116.2.
FIGURE 116.1 SLS results of simple European and American call options

Notice that, of course, the values computed for the American and European options are identical to each other and identical to the benchmark values of $23.4187, as it is never optimal to exercise a standard plain vanilla call option early if there are no dividends. Be aware that the higher the number of lattice steps, the longer it takes to compute the results. It is advisable to start with lower lattice steps to make sure the analysis is robust and then progressively increase the lattice steps to check for convergence in the results. See Chapter 6 of Real Options Analysis, 2nd Edition by Dr. Johnathan Mun (Hoboken, NJ: John Wiley & Sons, 2006) for convergence criteria regarding how many lattice steps are required for a robust option valuation. As a rule of thumb, convergence typically occurs between 100 and 1,000 steps.

Alternatively, you can enter Terminal and Intermediate Node Equations for a call option to obtain the same results. Notice that using 100 steps and creating your own Terminal Node Equation of Max(Asset-Cost,0) and Intermediate Node Equation of Max(Asset-Cost,OptionOpen) will yield the same answer. When entering your own equations, make sure to first check Custom Option.
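For readers who want to see the mechanics behind these numbers, the sketch below implements a standard Cox-Ross-Rubinstein binomial lattice with exactly the terminal and intermediate node logic described above. It is an independent reimplementation, not the SLS code itself, but it produces values close to the $23.3975 and $23.4187 figures quoted.

```python
# Sketch: Cox-Ross-Rubinstein binomial lattice for a plain vanilla call,
# mirroring the node equations described above.
import math

def crr_call(S, K, T, r, sigma, steps, american=False):
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))      # up factor
    d = 1.0 / u                              # down factor
    p = (math.exp(r * dt) - d) / (u - d)     # risk-neutral up probability
    disc = math.exp(-r * dt)

    # Terminal node equation: Max(Asset - Cost, 0)
    values = [max(S * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]

    # Backward induction; intermediate node equation Max(Asset - Cost, OptionOpen)
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            option_open = disc * (p * values[j + 1] + (1 - p) * values[j])
            asset = S * u**j * d**(i - j)
            values[j] = max(asset - K, option_open) if american else option_open
    return values[0]

# S = K = $100, T = 5 years, r = 5%, sigma = 10%
print(f"{crr_call(100, 100, 5, 0.05, 0.10, 100):.4f}")    # close to the $23.3975 above
print(f"{crr_call(100, 100, 5, 0.05, 0.10, 1000):.4f}")   # close to the $23.4187 benchmark
```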
FIGURE 116.2 SLS comparing results with benchmarks

Figure 116.3 illustrates how the analysis is done. The example file used here is Plain Vanilla Call Option III. Notice that the value $23.3975 in Figure 116.3 agrees with the value in Figure 116.1. The Terminal Node Equation is the computation that occurs at maturity, while the Intermediate Node Equation is the computation that occurs at all periods prior to maturity and is computed using backward induction. The term "OptionOpen" (one word, without spaces) represents "keeping the option open" and is often used in the Intermediate Node Equation when analytically representing the fact that the option is not executed but kept open for possible future execution. Therefore, in Figure 116.3, the Intermediate Node Equation Max(Asset-Cost,OptionOpen) represents the profit-maximization decision of either executing the option or leaving it open for possible future execution. In contrast, the Terminal Node Equation of Max(Asset-Cost,0) represents the profit-maximization decision at maturity of either executing the option if it is in the money or allowing it to expire worthless if it is at the money or out of the money.

In addition, you can create an Audit worksheet in Excel to view a sample 10-step binomial lattice by checking the box Generate Audit Worksheet. For instance, loading the example file Plain Vanilla Call Option I and selecting the box creates a worksheet as seen in Figure 116.4. Several items on this audit worksheet are noteworthy.
FIGURE 116.3 Custom equation inputs
The audit worksheet shows the first 10 steps of the lattice, regardless of how many steps you enter. That is, if you enter 1,000 steps, only the first 10 steps are generated. If a complete lattice is required, simply enter 10 steps in the SLS and the full 10-step lattice will be generated.

The Intermediate Computations and Results are for the Super Lattice, based on the number of lattice steps entered, not the 10-step lattice generated. To obtain the Intermediate Computations for 10-step lattices, simply rerun the analysis inputting 10 as the lattice steps. This way, the audit worksheet generated will be for a 10-step lattice, and the results from SLS will be comparable (Figure 116.5).

The worksheet provides values only, as it is assumed that the user entered the terminal and intermediate node equations, so there is no need to re-create these equations in Excel. The user can always reload the SLS file and view the equations, or print out the form if required (by clicking on File | Print).
FIGURE 116.4 SLS-generated audit worksheet
The software also allows you to save or open analysis files. That is, all the inputs in the software will be saved and can be retrieved for future use. The results are not saved, because you may accidentally delete or change an input and the results would no longer be valid; in addition, rerunning the super lattice computations takes only a few seconds. It is always advisable to rerun the model when opening an old analysis file.

You may also enter Blackout Steps, which are the steps on the super lattice that have different behaviors from the terminal or intermediate steps. For instance, you can enter 1,000 as the lattice steps and enter 0-400 as the blackout steps, together with some Blackout Equation (e.g., OptionOpen).
FIGURE 116.5 SLS results with a 10-step lattice
This means that for the first 400 steps, the option holder can only keep the option open. Other examples include entering 1, 3, 5, 10 if these are the lattice steps where blackout periods occur. You will have to calculate the relevant steps within the lattice where the blackout exists. For instance, if the blackout exists in years 1 and 3 on a 10-year, 10-step lattice, then steps 1 and 3 will be the blackout dates. This blackout step feature comes in handy when analyzing options with holding periods, vesting periods, or periods when the option cannot be executed. Employee stock options have blackout and vesting periods, and certain contractual real options have periods during which the option cannot be executed (e.g., cooling-off periods or proof-of-concept periods).

If equations are entered into the Terminal Node Equation box and American, European, or Bermudan Options are chosen, the Terminal Node Equation you entered will be the one used in the super lattice for the terminal nodes. However, for the intermediate nodes, the American option assumes the same Terminal Node Equation plus the ability to keep the option open; the European option assumes that the option can only be kept open and not executed; and the Bermudan option assumes that during the blackout lattice steps, the option will be kept open and cannot be executed.
If you also enter the Intermediate Node Equation, first choose the Custom Option; otherwise you cannot use the Intermediate Node Equation box. The Custom Option result uses all the equations you have entered in the Terminal, Intermediate, and Intermediate with Blackout sections.

The Custom Variables list is where you can add, modify, or delete custom variables—the variables that are required beyond the basic inputs. For instance, when running an abandonment option, you need the salvage value. You can add this in the Custom Variables list, provide it a name (a variable name must be a single word), the appropriate value, and the starting step when this value becomes effective. That is, if you have multiple salvage values (i.e., if salvage values change over time), you can enter the same variable name (e.g., salvage) several times, but each time its value changes, and you can specify when the appropriate salvage value becomes effective. For instance, in a 10-year, 100-step super lattice problem where there are two salvage values—$100 occurring within the first 5 years, increasing to $150 at the beginning of Year 6—you can enter two salvage variables with the same name: $100 with a starting step of 0 and $150 with a starting step of 51. Be careful here, as Year 6 starts at step 51, not 61. That is, for a 10-year option with a 100-step lattice, we have: Steps 1–10 = Year 1; Steps 11–20 = Year 2; Steps 21–30 = Year 3; Steps 31–40 = Year 4; Steps 41–50 = Year 5; Steps 51–60 = Year 6; Steps 61–70 = Year 7; Steps 71–80 = Year 8; Steps 81–90 = Year 9; and Steps 91–100 = Year 10. Incorporating 0 as a blackout step indicates that the option cannot be executed immediately.

Finally, using the example file Plain Vanilla Call Option IV, we see that when a dividend exists, the American option, with its ability for early exercise, is worth more than the Bermudan, which allows early exercise except during blackout periods (e.g., Blackout Steps of 0–95 for a 100-step lattice on a 5-year maturity model means that for the first 4.75 years the option cannot be executed), which in turn is worth more than the European option, which can be exercised only at expiration (Figure 116.6).
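The step bookkeeping in that salvage example reduces to a one-line formula, sketched below for the 10-steps-per-year case.

```python
# Sketch: first lattice step of a given year on a 10-year, 100-step lattice
# (10 steps per year), as used when setting a custom variable's starting step.
def first_step_of_year(year, steps_per_year=10):
    return (year - 1) * steps_per_year + 1

print(first_step_of_year(6))   # 51: the $150 salvage starting in Year 6 begins at step 51
print(first_step_of_year(1))   # 1:  Year 1 spans steps 1-10
```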
FIGURE 116.6 American, Bermudan, and European options
117. Probability of Default—Bond Yields and Spreads (Market Comparable)

File Name: Probability of Default—Yields and Spreads
Location: Modeling Toolkit | Probability of Default | Bond Yields and Spreads (Market Comparable)
Brief Description: Generating the probability of default on a bond or debt from the yields of market-traded instruments (bonds)
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2ProbabilityDefaultCumulativeBondYieldApproach, B2ProbabilityDefaultHazardRate, B2ProbabilityDefaultCumulativeSpreadApproach, B2ProbabilityDefaultAdjustedBondYield, B2ProbabilityDefaultCorrelation, B2ProbabilityDefaultAverageDefaults, B2ProbabilityDefaultPercentileDefaults
This chapter looks at the computation of the probability of default on a bond or debt from market-traded instruments. Probability of default measures the degree of likelihood that the borrower of a loan or debt (an obligor) will be unable to make the necessary scheduled repayments on the debt, thereby defaulting on it. If the obligor is unable to pay, the debt is in default, and the lenders have legal avenues to attempt a recovery of the debt, or at least partial repayment of the entire debt. The higher the default probability a lender estimates a borrower to have, the higher the interest rate the lender will charge the borrower as compensation for bearing the higher default risk.

Probability of default models are categorized as structural or empirical. Structural models compute the probability of default based on market data (e.g., bond yields and spreads, equity prices, market and book values of assets and liabilities, as well as the volatility of these variables). In contrast, empirical models or credit scoring models are used to quantitatively determine the probability that a loan or loan holder will default, where the loan holder is an individual, by looking at historical portfolios of loans held, where individual characteristics are assessed (e.g., age, educational level, debt-to-income ratio, etc.). The bond yield comparable approach is clearly a structural model. In these models, given the annualized spot risk-free yields over time, the corresponding corporate bond yields (both are zero-coupon bonds), and the expected recovery rate upon default, we can compute the implied cumulative default probability, the default probability in a particular year, and the hazard rates for each year (Figure 117.1).
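The sketch below illustrates one common simplification of this bond yield approach: treating the risky-to-riskless zero-coupon price ratio as (1 − PD) + PD × recovery and solving for the cumulative default probability and hazard rate each year. The yields are hypothetical, and the B2 functions may differ in their exact treatment.

```python
# Sketch: cumulative default probabilities and hazard rates backed out of
# hypothetical zero-coupon risk-free and corporate yields plus a recovery
# rate, using risky/riskless price ratio = (1 - PD) + PD * recovery.
import math

risk_free = [0.050, 0.051, 0.052, 0.053]   # annualized spot yields, years 1-4
corporate = [0.065, 0.067, 0.069, 0.071]
recovery = 0.40

prev = 0.0
for t, (rf, yc) in enumerate(zip(risk_free, corporate), start=1):
    price_ratio = math.exp(-(yc - rf) * t)           # risky price / risk-free price
    cum_pd = (1.0 - price_ratio) / (1.0 - recovery)  # cumulative PD through year t
    hazard = (cum_pd - prev) / (1.0 - prev)          # conditional default in year t
    print(f"Year {t}: cumulative PD {cum_pd:.2%}, hazard rate {hazard:.2%}")
    prev = cum_pd
```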
FIGURE 117.1 Bond yield approach to probability of default
In addition, the cumulative default probability between two years can be computed using the same bond yield approach. Sometimes only the risky bond or debt's spread (the premium charged above the risk-free rate) is known; using this credit spread, the cumulative probability of default can also be determined. To obtain the probability of default in a given year, compute two cumulative default probabilities and take the difference.

This model also exemplifies the Credit Risk Plus method employed by Credit Suisse Financial Products, which is used to compute the average number of credit defaults per period, given the total number of credit exposures in the portfolio and the average cumulative probability of default, as well as the percentile Value at Risk for the portfolio. The Credit Risk Plus method can also be used to compute the percentile, given some estimated average number of defaults per period.

The downside to this method is the need for bond yield data that are comparable to the debt or bond being analyzed. The subsequent chapters look at other structural and empirical models that do not require a bond market.
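A sketch of the Credit Risk Plus-style calculation follows, using the standard Poisson approximation for the number of defaults in a large portfolio of small exposures; the portfolio figures are hypothetical.

```python
# Sketch: Credit Risk Plus-style default counts via the Poisson
# approximation (hypothetical portfolio figures).
from scipy.stats import poisson

exposures = 1_000      # number of credits in the portfolio
avg_pd = 0.02          # average probability of default per period

mean_defaults = exposures * avg_pd
print(f"Average defaults per period: {mean_defaults:.1f}")
print(f"95th percentile default count: {poisson.ppf(0.95, mean_defaults):.0f}")
```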
118. Probability of Default—Empirical Model

File Name: Probability of Default—Empirical
Location: Modeling Toolkit | Prob of Default | Empirical (Individuals)
Brief Description: Computing the probability of default on loans to individuals, given some historical data on existing loans (age, educational level, years at employment, etc.) and applying a maximum likelihood estimation approach
Requirements: Modeling Toolkit, Risk Simulator
Probability of default measures the degree of likelihood that the borrower of a loan or debt (also called an obligor) will be unable to make the necessary scheduled repayments on the debt, thereby defaulting on it. Should the obligor be unable to pay, the debt is in default, and the lenders of the debt have legal avenues to attempt a recovery of the debt or at least partial repayment of the entire debt. The higher the default probability a lender estimates a borrower to have, the higher the interest rate the lender will charge the borrower as compensation for bearing the higher default risk. Probability of default models are categorized as structural or empirical. Structural models are presented over the next few chapters, which look at a borrower’s ability to pay based on market data such as equity prices, market and book values of asset and liabilities, as well as the volatility of these variables. Hence they are used predominantly to estimate the probability of default of companies and countries.
FIGURE 118.1 Empirical analysis of probability of default
In contrast, empirical models or credit scoring models, as presented here, are used to quantitatively determine the probability that a loan or loan holder will default, where the loan holder is an individual, by looking at historical portfolios of loans held, where individual characteristics are assessed (e.g., age, educational level, debt-to-income ratio, etc.). Other default models in the Modeling Toolkit handle corporations using market comparables and asset/liability valuations, as seen in the next chapter.

The data here represent a sample of several hundred previous loans, credit, or debt issues. The data show whether each loan had defaulted or not, as well as the specifics of each loan applicant's age, education level (1–3 indicating high school, university, or graduate professional education), years with current employer, and so forth (Figure 118.1). The idea is to model these empirical data to see which variables affect the default behavior of individuals, using Risk Simulator's Maximum Likelihood Models. The resulting model helps the bank or credit issuer compute the expected probability of default of an individual credit holder with specific characteristics.

To run the analysis, select the data (including the headers) and make sure that the data have the same length for all variables, without any missing or invalid data points. Then click on Risk Simulator | Forecasting | Maximum Likelihood Models. A sample set of results is provided in the MLE worksheet in the model, complete with detailed instructions on how to compute the expected probability of default of an individual.

Maximum Likelihood Estimation (MLE) on a binary multivariate logistic analysis is used to model dependent variables and determine the expected probability of success of belonging to a certain group.
For instance, given a set of independent variables (e.g., age, income, education level of credit card or mortgage loan holders), we can model the probability of default using MLE. A typical regression model is invalid because the errors are heteroskedastic and nonnormal, and the resulting estimated probabilities sometimes will be above 1 or below 0. MLE analysis handles these problems using an iterative optimization routine.

Use MLE when the data are ungrouped (there is only one dependent variable, and its values are either 1 for success or 0 for failure). If the data are grouped into unique categories (for instance, the dependent variables are actually two variables, one for the Total Events T and another for the Successful Events S), and if, based on historical data for a given level of income, age, education level, and so forth, data on how many loans had been issued (T) and, of those, how many defaulted (S) are available, then use the grouped approach and apply the Weighted Least Squares or Unweighted Least Squares method instead.

The coefficients provide the estimated MLE intercept and slopes. For instance, the coefficients are estimates of the true population b values in the equation Y = b0 + b1 X1 + b2 X2 + ... + bn Xn. The standard error measures how accurate the predicted coefficients are, and the Z-statistics are the ratios of each predicted coefficient to its standard error. The Z-statistic is used in hypothesis testing, where we set the null hypothesis (Ho) such that the real mean of the coefficient is equal to zero, and the alternate hypothesis (Ha) such that the real mean of the coefficient is not equal to zero. The Z-test is important because it determines whether each coefficient is statistically significant in the presence of the other regressors, that is, whether a regressor or independent variable should remain in the model or be dropped. The smaller the p-value, the more significant the coefficient. The usual significance levels for the p-value are 0.01, 0.05, and 0.10, corresponding to the 99%, 95%, and 90% confidence levels.

The coefficients estimated are actually the logarithmic odds ratios and cannot be interpreted directly as probabilities; a quick computation is first required. The approach is simple. To estimate the probability of success of belonging to a certain group (e.g., predicting if a debt holder will default given the amount of debt held), simply compute the estimated Y value using the MLE coefficients. For instance, if the model is, say, Y = -2.1 + 0.005 (Debt in thousands), then someone with a $100,000 debt has an estimated Y of -2.1 + 0.005(100) = -1.6. Then compute the inverse antilog of the odds ratio:

P = exp(estimated Y) / [1 + exp(estimated Y)] = exp(-1.6) / [1 + exp(-1.6)] = 0.1679

Such a person has a 16.79% chance of defaulting on the new debt. Using this probability of default, you can then use the Credit Analysis—Credit Premium model to determine the additional credit spread to charge this person, given this default level and the customized cash flows anticipated from this debt holder.
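This last computation is easy to script, as the sketch below shows using the same illustrative coefficients.

```python
# Sketch: converting MLE logistic coefficients into a default probability,
# reproducing the worked example above (Y = -2.1 + 0.005 * debt-in-thousands).
import math

def default_probability(debt_thousands, b0=-2.1, b1=0.005):
    y = b0 + b1 * debt_thousands              # estimated Y from the coefficients
    return math.exp(y) / (1.0 + math.exp(y))  # inverse antilog of the odds ratio

print(f"{default_probability(100):.2%}")      # about 16.8% for a $100,000 debt
```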
119. Probability of Default—External Options Model (Public Company)

File Name: Probability of Default—External Options Model
Location: Modeling Toolkit | Prob of Default | External Options Model (Public Company)
Brief Description: Computing the probability of default and distance to default of a publicly traded company by decomposing the firm's book and market values of liabilities, assets, and volatility using a simultaneous-equations options model with optimization
Requirements: Modeling Toolkit, Risk Simulator
Modeling Toolkit Functions Used: B2MertonDefaultProbabilityII, B2MertonDefaultDistance, B2BlackScholesCall, B2ProbabilityDefaultMertonImputedAssetValue, B2ProbabilityDefaultMertonImputedAssetVolatility, B2ProbabilityDefaultMertonRecoveryRate, B2ProbabilityDefaultMertonMVDebt
Probability of default models are a category of models that assess the likelihood of default by an obligor. They differ from regular credit scoring models in several ways. First, credit scoring models usually are applied to smaller credits—individuals or small businesses—whereas default models are applied to larger credits—corporations or countries. Credit scoring models are largely statistical, regressing instances of default against various risk indicators, such as an obligor's income, home renter or owner status, years at a job, educational level, debt-to-income ratio, and so forth. An example of such a model was shown in the previous chapter, Probability of Default—Empirical Model, where the maximum likelihood approach is used on an advanced econometric regression model. Default models, in contrast, directly model the default process and typically are calibrated to market variables, such as the obligor's stock price, asset value, book value of debt, or credit spread on its bonds. Default models find many applications within financial institutions. They are used to support credit analysis and to find the probability that a firm will default, to value counterparty credit risk limits, or to apply financial engineering techniques in developing credit derivatives or other credit instruments.

This model is used to solve the probability of default of a publicly traded company with equity and debt holdings, accounting for its volatilities in the market (Figure 119.1). This model is currently used by KMV and Moody's to perform credit risk analysis. The approach assumes that the book value of the asset and the asset volatility are unknown and solved for in the model, and that the company is relatively stable and the growth rate of its assets is stable over time (e.g., not in start-up mode).
FIGURE 119.1 Default probability model setup
The model uses several simultaneous equations in options valuation, coupled with optimization, to obtain the implied underlying asset's market value and volatility, which in turn are used to compute the probability of default and distance to default for the firm. It is assumed that, at this point, the reader is well versed in running simulations and optimizations in Risk Simulator. If not, it is suggested that the reader first spend some time on the Basic Simulation model as well as the Continuous Portfolio Optimization chapters before proceeding with the procedures discussed next.

To run this model, enter the required inputs: the market value of equity (obtained from market data on the firm's capitalization, i.e., stock price times number of shares outstanding), the volatility of equity (computed in the Volatility or LPVA worksheets in the model), the book value of debt and liabilities (the firm's book value of all debt and liabilities), the risk-free rate (the prevailing country's risk-free interest rate for the same maturity), the anticipated growth rate of the company (the expected annualized cumulative growth rate of the firm's assets, estimated using historical data over a long period of time, making this approach more applicable to mature companies than to start-ups), and the debt maturity (the debt maturity to be analyzed, or enter 1 for the annual default probability). The comparable option parameters are shown in cells G18 to G23. All these comparable inputs are computed except for the Asset Value (the market value of assets) and the Volatility of Asset. You will need to input some rough estimates as a starting point so that the analysis can be run. The rule of thumb is to set the volatility of the asset in G22 to be one-fifth to half of the volatility of equity computed in G10, and the market value of assets (G19) to be approximately the sum of the market value of equity and the book value of liabilities and debt (G9 and G11).

Then you need to run an optimization in Risk Simulator to obtain the desired outputs. To do this, set Asset Value and Volatility of Asset as the decision variables (make them continuous variables with a lower limit of 1% for volatility and $1 for asset value, as both of these inputs can take on only positive values). Set cell G29, the absolute error value, as the objective to minimize. Finally, the constraint is such that cell H33, the implied volatility in the default model, is set to exactly equal the numerical value of the equity volatility in cell G10. Run a static optimization using Risk Simulator. If the model has a solution, the absolute error value in cell G29 will revert to zero (Figure 119.2). From here, the probability of default (measured in percent) and the distance to default (measured in standard deviations) are computed in cells G39 and G41. The relevant credit spread required can then be determined using the Credit Analysis—Credit Premium model or some other credit spread table (such as the Internal Credit Risk Rating model).

The simpler alternative is to use Modeling Toolkit's prebuilt, internally optimized functions (some artificial intelligence integrated with optimization of simultaneous stochastic equations is embedded to solve the problem): use B2ProbabilityDefaultMertonImputedAssetValue (cell I21) and B2ProbabilityDefaultMertonImputedAssetVolatility (cell I24) to obtain the inputs for the probability of default (B2ProbabilityDefaultMertonII in cell G39).
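The two simultaneous equations being solved by that optimization are the standard Merton relations, sketched below with SciPy's root finder and hypothetical inputs; the starting point follows the rule of thumb given above. This is an illustrative reimplementation, not the Toolkit's internal routine.

```python
# Sketch: the two simultaneous Merton equations solved with SciPy's root
# finder (hypothetical inputs, in $ thousands).
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

E, sigma_E = 3_000.0, 0.45      # market value and volatility of equity
D, r, T = 2_000.0, 0.05, 1.0    # book value of debt, risk-free rate, horizon
growth = 0.05                   # anticipated annualized asset growth rate

def merton_equations(x):
    A, sigma_A = x
    d1 = (np.log(A / D) + (r + 0.5 * sigma_A**2) * T) / (sigma_A * np.sqrt(T))
    d2 = d1 - sigma_A * np.sqrt(T)
    eq_value = A * norm.cdf(d1) - D * np.exp(-r * T) * norm.cdf(d2) - E   # equity as a call
    eq_vol = (A / E) * norm.cdf(d1) * sigma_A - sigma_E                   # volatility link
    return [eq_value, eq_vol]

# Starting point per the rule of thumb above: asset value ~ equity + debt,
# asset volatility a fraction of equity volatility.
A, sigma_A = fsolve(merton_equations, [E + D, 0.5 * sigma_E])

dd = (np.log(A / D) + (growth - 0.5 * sigma_A**2) * T) / (sigma_A * np.sqrt(T))
print(f"Implied asset value {A:,.0f} and asset volatility {sigma_A:.2%}")
print(f"Distance to default {dd:.2f} sigmas; default probability {norm.cdf(-dd):.2%}")
```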
FIGURE 119.2 Default probability model results
120. Probability of Default—Merton Internal Options Model (Private Company)

File Name: Probability of Default—Merton Internal Model
Location: Modeling Toolkit | Prob of Default | Merton Internal Model (Private Company)
Brief Description: Computing the probability of default and distance to default of a privately held company by decomposing the firm's book value of liabilities, assets, and volatility
Requirements: Modeling Toolkit, Risk Simulator
Modeling Toolkit Functions Used: B2MertonDefaultProbabilityII, B2MertonDefaultDistance
This is a structural model that employs the options approach to compute the probability of default and distance to default of a company, assuming that the book values of assets and debt are known, as are the asset volatility and the anticipated annual growth rate. If the book value of assets or the volatility of assets is not known and the company is publicly traded, use the External Options Model in the previous chapter instead. This model assumes these inputs are known, or that the company is privately held and not traded. It essentially computes the probability of default, or the point of default, for the company when its liabilities exceed its assets, given the assets' growth rates and volatility over time (Figure 120.1). It is recommended that the reader review the previous chapter's model on the externally traded public company before using this model; methodological parallels exist between the two models, and this chapter builds on the knowledge and expertise of the last one.
FIGURE 120.1 Credit default risk model with options modeling
121. Probability of Default—Merton Market Options Model (Industry Comparable)

File Name: Probability of Default—Merton Market Options Model
Location: Modeling Toolkit | Prob of Default | Merton Market Options Model (Industry Comparable)
Brief Description: Computing the probability of default and distance to default of a publicly or privately held company by decomposing the firm's book value of liabilities, assets, and volatility, while benchmarking the company's returns to some external benchmark or index
Requirements: Modeling Toolkit, Risk Simulator
Modeling Toolkit Functions Used: B2MertonDefaultProbability
This model computes the probability of default for both public and private companies using an index or set of comparables (the market), assuming that the company's asset and debt book values are known, as is the assets' annualized volatility. Based on this volatility and the correlation of the company's assets to the market, we can determine the probability of default. This model is similar to the models in the last two chapters, which should be reviewed before attempting to run the model illustrated here, as the theoretical constructs of this model are derived from those two structural models. Please see Figure 121.1.
FIGURE 121.1 Merton market options model
122. Project Management—Cost Estimation Model

File Name: Project Management—Cost Estimation Model
Location: Modeling Toolkit | Project Management | Cost Estimation Model
Brief Description: Illustrating how to use Risk Simulator to run a Monte Carlo simulation on a multiphased project to forecast costs and cost overruns, simulate discrete events (success/failure and go/no-go outcomes), link these events, and link the input parameters of a simulation assumption to calculated cells
Requirements: Modeling Toolkit, Risk Simulator
This is a cost estimation model used to forecast the potential cost and cost overruns of a multiphased project using simulation, where the later phases of the project depend on the earlier phases; that is, you cannot proceed to the second phase unless the first phase is successful, and so forth. In this example, the model estimates the total cost of a project with five phases, where each phase may have several subphases. For each phase or subphase, the minimum, maximum, and most likely estimates of costs are obtained and simulated (Figure 122.1).

These cost estimates depend on the forecasts of weeks required in each phase and the cost per week, and the costs are simulated through parameter linking to triangular distributions. For instance, the cells in column G are distributional assumptions linked to columns H, I, and J. In addition, the estimates of minimum, likely, and maximum costs in these columns depend on the estimates of time to completion and cost per week in columns L to O. Cost overruns in each phase and their chances are also simulated in column P.

A probability of success, estimated based on past experience and subject matter expert estimates, is listed in column Q. Using these projections of project success or failure, Bernoulli distributions are set up in column R to simulate the discrete event of a success (1) or failure (0). We then link these events in column S, where the current phase continues only if the previous phase is successful, assuming that it too is successful; otherwise, if the previous phase was a failure, all future phases are canceled and the project stops at that point. Finally, the values in column K are the expected values after accounting for the simulated results of time to completion, cost per week, overrun estimates, and success or failure of each phase.
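The core of this model, triangular cost draws gated by linked Bernoulli success events, can be sketched in a few lines, as below; the phase parameters are hypothetical.

```python
# Sketch: phase-linked cost simulation with triangular cost assumptions and
# Bernoulli success/failure gates; a failed phase cancels all later phases.
# Phase parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
phases = [  # (min cost, most likely cost, max cost, probability of success)
    (50_000, 60_000, 80_000, 0.95),
    (60_000, 75_000, 100_000, 0.90),
    (80_000, 100_000, 140_000, 0.85),
    (70_000, 90_000, 120_000, 0.90),
    (60_000, 70_000, 95_000, 0.95),
]

trials = 10_000
total_cost = np.zeros(trials)
alive = np.ones(trials, dtype=bool)                  # project still running
for low, mode, high, p_success in phases:
    cost = rng.triangular(low, mode, high, trials)
    total_cost += np.where(alive, cost, 0.0)         # pay only if this phase is reached
    alive &= rng.random(trials) < p_success          # a failure cancels later phases

print(f"Expected total cost: ${total_cost.mean():,.0f}")
print(f"Budget for 95% certainty: ${np.percentile(total_cost, 95):,.0f}")
```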
FIGURE 122.1 Cost estimation of a multiphased project
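The phase-gating logic in the model can be condensed into a few lines of code. The sketch below is a simplified stand-in, with hypothetical phase costs and success probabilities rather than the workbook's actual cells: each phase's cost is a triangular draw, a Bernoulli draw decides success or failure, and later phases are attempted only if every earlier phase succeeded.

```python
# Simplified phase-gated cost simulation (all numbers hypothetical).
import random

phases = [  # (min cost, most likely cost, max cost, probability of success)
    (40, 50, 65, 0.95),
    (80, 100, 140, 0.90),
    (60, 75, 90, 0.85),
    (150, 200, 300, 0.80),
    (90, 120, 180, 0.75),
]

def one_trial():
    total = 0.0
    for lo, ml, hi, p_success in phases:
        total += random.triangular(lo, hi, ml)   # cost of attempting this phase
        if random.random() > p_success:          # phase fails: project stops here
            break
    return total

costs = sorted(one_trial() for _ in range(10_000))
print("Expected total cost:", round(sum(costs) / len(costs), 1))
print("95th percentile funding requirement:", round(costs[int(0.95 * len(costs))], 1))
```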
PROCEDURE

To run the model, simply:

1. Go to the Model worksheet, click on Risk Simulator | Change Profile, and select the Cost Estimation Model profile.
2. Run the simulation by clicking on Risk Simulator | Run Simulation.

The forecast chart results (Figure 122.2) show a trimodal distribution (a distribution with three peaks) because of the discrete-event simulation we ran (i.e., the success and failure probabilities). The right-tail probability past the single-point estimate is found to be 44.50%. This means that there is a 44.50% probability that the expected total cost of this project, $385,555, is not going to be enough (click on Right-Tail, enter 385555, and hit Tab on the keyboard). Therefore, to be 95% certain there are adequate funds to cover the cost of the project, you need $457,006 (click on Left-Tail, enter 95 in the certainty box, and hit Tab on the keyboard).

FIGURE 122.2 Simulated trimodal cost outputs
123. Project Management—Critical Path Analysis (CPM PERT GANTT) File Name: Project Management—Critical Path Analysis (CPM PERT GANTT) Location: Modeling Toolkit | Project Management | Critical Path Analysis (CPM PERT GANTT) Brief Description: Illustrating how to use Risk Simulator for running simulations on time to completion of a project and critical path determination, creating GANTT charts and CPM/PERT models, simulating dates and binary variables, and linking input assumptions Requirements: Modeling Toolkit, Risk Simulator
This model illustrates how a project's critical path and completion times can be modeled using Risk Simulator. The typical inputs of minimum, most likely, and maximum duration of each process or subproject can be modeled using the triangular distribution. This distribution is typically used in the Critical Path Method (CPM) and the Program Evaluation and Review Technique (PERT) in project management, together with the beta distribution. In other words, the assumptions in the model can also take beta and gamma distributions rather than the triangular distribution; however, the beta and gamma distributions are more complex for project managers to use, hence the popularity of the triangular distribution. This is a two-part model: the first part is a CPM critical path analysis to determine the critical path as well as the expected completion time of a project (Figure 123.1); the second part covers the same project, but here we are interested in the actual potential completion date, with a GANTT chart analysis (Figure 123.2).
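To make the critical-path simulation concrete, the toy sketch below simulates a hypothetical four-task network (far smaller than the model's 20-task network) with triangular durations, records the simulated finish time, and tracks a binary critical-path indicator exactly as the model's 0/1 forecasts in column N do.

```python
# Toy critical-path simulation (hypothetical 4-task network).
import random

def simulate_once():
    a = random.triangular(5, 12, 8)    # Task A: start
    b = random.triangular(3, 9, 5)     # Task B: after A
    c = random.triangular(6, 15, 10)   # Task C: after A, parallel to B
    d = random.triangular(2, 6, 4)     # Task D: after both B and C
    finish = a + max(b, c) + d
    b_critical = b >= c                # 1/0 indicator: is B on the critical path?
    return finish, b_critical

trials = [simulate_once() for _ in range(10_000)]
finishes = sorted(t[0] for t in trials)
print("95% certainty finish time:", round(finishes[int(0.95 * len(finishes))], 1))
print("P(Task B on critical path):", sum(t[1] for t in trials) / len(trials))
```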
PROCEDURE

1. The cells in column G in the model already have simulation assumptions set. To view the assumptions, select cell G5 and click on Risk Simulator | Set Input Assumption. You will see that a triangular distribution is set, where the input parameters (minimum, most likely, and maximum) are linked to the corresponding cells in columns D to F (Figure 123.3). Note that when you click on the input parameters, you will see the links to D5, E5, and F5. Notice also that the input parameter boxes are now colored light green, indicating that the parameters are linked to a cell in the spreadsheet. If you entered or hard-coded a parameter, the box would be white. You can link a parameter input to a cell in a worksheet by clicking on the link icon (beside the parameter input box) and then clicking on the relevant cell in the worksheet.

2. For simplicity, other assumptions have already been preset for you, including the completion percentage in column E of the second part of the model, where we use the beta distribution instead. Further, several forecasts have been set, including the Finish Time (L25), Ending Date (F63), and critical path binary values (N5:N24) for the various tasks. All that needs to be done now is to run the simulation: Risk Simulator | Run Simulation.

FIGURE 123.1 Critical path model and time to market analysis

FIGURE 123.2 GANTT chart

FIGURE 123.3 Setting the input assumptions
RESULTS INTERPRETATION

Three results are noteworthy.

1. Looking at the Finish Time forecast chart after the simulation is completed, select Left-Tail, enter 95 in the Certainty box, and hit TAB on the keyboard (Figure 123.4). This shows a value of 301.71, which means that there is a 95% chance that the project will be completed in slightly under 302 days, or only a 5% chance that the project will take longer than that. Similarly, you can enter 285 in the Value box and hit TAB. The forecast chart returns a value of 69.60%, indicating that there is only a 69.60% chance that the project can finish within 285 days, or a 30.40% chance the project will take longer than 285 days. Other values can be entered similarly.

2. Select one of the tasks (e.g., Task 10). The forecast chart is bimodal, with two peaks, one at 0 and another at 1, indicating "NO" and "YES" on whether this particular task ends up being on the critical path (Figure 123.5). In this example, select the Left-Tail, type in 0.99, and hit TAB on the keyboard. Notice that the certainty value is 58.20%, indicating that this task has a 58.20% chance of not being on the critical path (the value 0) or a 41.80% chance of being on the critical path (the value 1). This means that Task 10 is potentially very critical. In contrast, look at Task 9's forecast chart. The Left-Tail certainty is 100% at the value 0.99, indicating that 100% of the time this task is not going to be on the critical path, and it is therefore of less worry to the project manager.

3. Finally, the Ending Date forecast can be used to determine the potential ending date of the project. Looking at the forecast chart output, select the Left-Tail, enter 95, and hit TAB (Figure 123.6). The corresponding 95th percentile returns a value of 39301. This is the numeric serial for a specific date in Excel. To convert it into a familiar date, simply type this number in any empty cell, click on Format | Cells (or hit Ctrl-1), and select Number | Date; the cell value changes to August 7, 2007. This means that there is a 95% chance that the project will be completed by August 7, 2007.

You can now experiment with other values and certainty levels as well as modify the model to suit your needs and project specifications. In addition, you can use the same methodology to simulate binary values (variables with 0/1 values) and dates.
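The Format Cells step can also be reproduced programmatically. The small helper below converts an Excel date serial such as 39301 into a calendar date, assuming the workbook uses Excel's default 1900 date system.

```python
# Convert an Excel date serial to a calendar date (1900 date system assumed).
from datetime import date, timedelta

def excel_serial_to_date(serial: int) -> date:
    # Excel's day 1 is January 1, 1900, but Excel wrongly treats 1900 as a
    # leap year, so the effective epoch for serials after Feb 1900 is
    # December 30, 1899.
    return date(1899, 12, 30) + timedelta(days=serial)

print(excel_serial_to_date(39301))  # 2007-08-07, matching the text
```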
FIGURE 123.4 Forecast results
FIGURE 123.5 Task forecast
FIGURE 123.6 GANTT chart forecast end date
124. Project Management—Project Timing File Name: Project Management—Project Timing Location: Modeling Toolkit | Project Management | Project Timing Brief Description: Illustrating how to use Risk Simulator for simulating a project’s completion date and how to revert the simulated results back to dates Requirements: Modeling Toolkit, Risk Simulator
This project management model (Figure 124.1) is used to illustrate how the completion dates of a project can be simulated and the resulting values converted back into a regular date format. The model assumes that the tasks are all sequential. For nonsequential projects and those with parallel paths, see the Project Management—Critical Path Analysis (CPM PERT GANTT) model instead. The input assumptions used for simulation are listed under the Assumption column and are all set as triangular distributions, a typical distribution used in project management and related to the PERT distribution. These assumptions are then linked to the minimum (Min), most likely (ML), and maximum (Max) values of completion in weeks for each task and subtask. The final completion date is set as the output forecast.

Because simulated dates are returned as numerical values, you can use the forecast chart to determine the date by which the project will be completed with, say, 90% certainty. In this example, the 90% Left-Tail confidence level shows a value of 39210.046 (Figure 124.2). Entering this value in any cell in Excel and then changing the properties of the cell to show dates rather than a numerical value (right-click on the cell, select Format Cells | Date, and choose the date format of your choice) reveals that the 90th percentile completion date is May 8, 2007.

FIGURE 124.1 Project timing model

FIGURE 124.2 Probabilistic project completion dates
125. Real Estate—Commercial Real Estate ROI File Name: Real Estate—Commercial ROI Location: Modeling Toolkit | Industry Applications | Real Estate—Commercial ROI Brief Description: Illustrating how to use Risk Simulator for simulating a financial ROI analysis for real estate projects and running scenario analysis on the optimal holding period of the asset that maximizes net present value Requirements: Modeling Toolkit, Risk Simulator
This model is used to value the profit and loss from a real estate deal using industry-standard inputs such as cap rates, vacancy and collection factors, assumed growth rates, and Consumer Price Index (CPI) adjusted pricing. The pro forma income statement and cash flow statement, together with the relevant salvage value from the sale of the property and the amortization table of the mortgage, are also included. The model uses function calls from the Modeling Toolkit and is simulated using Risk Simulator. Please see Figure 125.1.
PROCEDURE

To run this model, simply click on Risk Simulator | Run Simulation. However, if you wish, you can create your own simulation model:

1. Start a new simulation profile and add various assumptions to cells E5:E13 and L5:L8. You can make them all triangular distributions and link the input parameters to the relevant cells (e.g., for Gross Rent the First Year, link Min to F5, Likely to E5, and Max to G5). Define the assumption by first making sure there is a simulation profile (Risk Simulator | New Profile), then selecting the relevant cells and clicking on Risk Simulator | Set Input Assumption.

2. You can then select the predefined cell, click on Risk Simulator | Copy Parameter, hit the Escape key once (so that only the assumptions, not the values, will be copied), select the other cells to copy to (e.g., E6:E13 and, later, L5:L8), and click on Risk Simulator | Paste Parameter.

3. Add forecasts to the key outputs, such as Net Present Value (Risk Simulator | Set Output Forecast).

4. Run the simulation (Risk Simulator | Run Simulation) and interpret the results.
FIGURE 125.1 Commercial real estate ROI analysis
FIGURE 125.2 Scenario analysis tool
In addition, a tornado sensitivity analysis and a scenario analysis can be run on the model. The generated reports for both techniques are included in the model. To run these analyses, follow these instructions.
Tornado analysis. Select cell I15 (Net Present Value), start the tornado analysis (Risk Simulator | Tools | Tornado Analysis), select the Use Cell Address checkbox, and click OK. A report will be generated. Interpret the tornado results as usual.

Scenario analysis. Start the scenario analysis tool (Risk Simulator | Tools | Scenario Analysis) and enter the relevant inputs as seen in Figure 125.2.
The scenario report worksheet will be generated, listing all the possible combinations of Holding Periods and Mortgage Maturities as well as the resulting NPV (Figure 125.3).
FIGURE 125.3 Scenario analysis results table
126. Risk Analysis—Integrated Risk Analysis File Name: Risk Analysis—Integrated Risk Analysis Location: Modeling Toolkit | Risk Analysis | Integrated Risk Analysis Brief Description: Integrating Monte Carlo simulation, forecasting, portfolio optimization, and real options analysis into a single unified model and showing how one plays off another Requirements: Modeling Toolkit, Risk Simulator
One of the main questions in risk analysis is how the individual software applications in our Risk Simulator and Real Options SLS software suite can be applied in concert with one another. This example illustrates how an integrated risk management model can be constructed using the Real Options SLS and Risk Simulator’s Monte Carlo Simulation, Forecasting, and Optimization tools. Figure 126.1 shows an integrated model process of how time-series forecasting, returns on investment, Monte Carlo simulation, real options, and portfolio optimization are linked within one nicely integrated analytical package. Refer to Dr. Johnathan Mun’s Modeling Risk (2006) and Real Options Analysis, 2nd Edition (2005), both published by John Wiley & Sons, for more hands-on details. This model is meant to quickly illustrate the practical applications of these approaches.
FORECASTING

In the Forecasting worksheet (Figure 126.2), first create a profile (Risk Simulator | New Profile) and give it a name, then select the data area G7:G26 and click on Risk Simulator | Forecasting | Time-Series Analysis. Suppose you know from experience that the seasonality of the revenue stream peaks every four years. Enter a seasonality of 4 and set the number of forecast periods to 5. The Forecasting tool automatically selects the best-fitting time-series model and its associated parameters through backcast-fitting techniques and optimization. Running this module on the historical data generates a report with the five-year revenue forecasts (Forecast Report worksheet), complete with Risk Simulator assumptions (cells D43:D47). These output forecasts then become the inputs into the Valuation Model worksheet (cells D19:H19).
FIGURE 126.1 Integrated risk analysis approach
FIGURE 126.2 Stochastic time-series forecasting
MONTE CARLO SIMULATION

The Valuation Model worksheet shows a simple discounted cash flow (DCF) model that calculates the relevant net present value (NPV) and internal rate of return (IRR) of a project or strategy. Notice that the output of the Forecast Report worksheet becomes an input into this Valuation Model worksheet. For instance, cells D19:H19 are linked from the forecast worksheet. The model uses two discount rates for the two different sources of risk: a market risk-adjusted discount rate of 12% for the risky cash flows and a 5% cost of capital to account for the private risk of the capital investment costs. Further, different discounting conventions are included (midyear, end-year, continuous, and discrete discounting), as well as a terminal value calculation assuming a constant-growth model. Finally, the capital implementation costs are separated out of the model (row 46), but the regular costs of doing business (direct costs, cost of goods sold, operating expenses, etc.) are included in the computation of free cash flows. That is, this project has two phases: the first phase costs $5M and the second, larger phase costs $2,000M.

The static NPV shows a positive value of $123.26M while the IRR is 15.68%, with the NPV exceeding zero and the IRR exceeding the firm-specific hurdle rate of 12%. Therefore, at first pass, the project seems to be profitable and justifiable. However, risk has not been considered: without applying risk analysis, you cannot determine the chances that this NPV and IRR will actually occur. Risk analysis is done by selecting the cells you wish to simulate (e.g., D20) and clicking on Risk Simulator | Set Input Assumption (Figure 126.3). For illustration purposes, select the triangular distribution and link the input parameters to the appropriate cells. Continue the process to define input assumptions on all other relevant cells. Then select the output cells, such as NPV and IRR (cells D14, D15, H14, and H15), and set them as forecasts (i.e., select each cell one at a time, click on Risk Simulator | Set Output Forecast, and enter the relevant variable names). Note that all assumptions and forecasts have already been set for you in advance.

FIGURE 126.3 Applying Monte Carlo risk-based simulation on the model
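A stylized sketch of this risk-based DCF is shown below. The cash flow range, horizon, and phase timing are hypothetical stand-ins for the worksheet's values; the point is simply that the NPV becomes a distribution once the inputs are simulated, with the two risk sources discounted at their separate rates as described.

```python
# Stylized risk-simulated DCF (hypothetical numbers, not the workbook's cells):
# risky cash flows discounted at 12%, capital costs at a 5% cost of capital.
import random

def simulate_npv():
    rate_cf, rate_cost = 0.12, 0.05
    pv_cash_flows = sum(random.triangular(300, 750, 520) / (1 + rate_cf) ** t
                        for t in range(1, 6))          # five simulated annual flows
    pv_costs = 5 / (1 + rate_cost) ** 1 + 2000 / (1 + rate_cost) ** 3
    return pv_cash_flows - pv_costs                    # Phase I and Phase II costs

sims = sorted(simulate_npv() for _ in range(10_000))
print("Mean NPV ($M):", round(sum(sims) / len(sims), 2))
print("P(NPV > 0):", sum(s > 0 for s in sims) / len(sims))
```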
REAL OPTIONS ANALYSIS

The next stop is real options analysis. In the previous simulation applications, risk was quantified and compared across multiple projects. The problem is, so what? That is, we have quantified the different levels of risk in different projects. Some are highly risky, some are somewhat risky, and some are not risky at all. In addition, the relevant returns are quite variable compared to the risk levels. Real options analysis takes this to the next step. Looking at a specific project, real options analysis identifies whether there are ways to mitigate the downside risks while taking advantage of the upside uncertainty. In applying Risk Simulator, we have only quantified uncertainties, not risks! Downside uncertainty, if it is real and affects the firm, becomes risk, while upside uncertainty is a plus that firms should try to capitalize on. At this point, we can either solve real options problems using Real Options SLS's multiple asset module (install Real Options SLS and click on Start | Real Options Valuation | Real Options SLS | Real Options SLS) or apply its analytics directly in Excel by using the SLS functions (Figures 126.5 and 126.6). The Real Options SLS software is used to obtain a quick answer, but if a distribution of option values is required, then use the SLS functions and link all the inputs into the function; we use the latter in this example.

The real options model's inputs are the outputs from the DCF model in the previous step. For instance, the free cash flows are computed previously in the DCF. The Expanded NPV value in cell C21 in the Real Options worksheet is obtained through the SLS function call, and simulation was also used to obtain the volatility required in the real options analysis. The Real Options SLS results (Figure 126.4) indicate an Expanded NPV of $388M, while the NPV was $123M. This means that there is an additional $265M of expected value in creating a two-staged development of this project rather than jumping in first and taking all the risk. That is, NPV analysis assumed that you would definitely invest all $5M in year 1 and $2,000M in year 3 regardless of the outcome (see Figure 126.7). This view of NPV analysis is myopic. Real options analysis, however, assumes that if all goes well in Phase I ($5M), then you continue on to Phase II ($2,000M). Therefore, due to the volatility and uncertainty in the project (as obtained by Monte Carlo simulation), there is a chance that the $2,000M may never be spent, as Phase II never materializes (due to a bad outcome in Phase I). In addition, there is a chance that the present value of free cash flows may exceed $1,849M. The expected value of all these happenstances yields an Expanded NPV of $388M (the average potential after a two-phased stage-gate development process is implemented). It is better to perform a stage-gate development process than to take all the risks immediately and invest everything. Figures 126.5 and 126.6 illustrate how the analysis can be performed in the Excel model by using function calls.

Notice that in the real options world, executing Phase II is an option, not a requirement or obligation, while in the DCF world all investments have been decreed in advance and must occur (Figure 126.7). Therefore, management has the flexibility to abandon the project after Phase I and not continue to Phase II if the outcome of the first phase is bad. Based on the volatility calculated in the real options model, there is a chance Phase II will never be executed, a chance that the cash flows could be higher or lower, and a chance that Phase II will be executed if the cash flows make it profitable. Therefore, the net expected value of the project after considering all these potential avenues is the option value calculated previously. In this example, we also assumed a 3% annualized dividend yield, indicating that by stage-gating the development and taking our time, the firm loses about 3% of its PV Asset per year (lost revenues and opportunity costs as well as lower market share due to waiting and deferring action). Note that as dividends increase, the value of the option to wait and defer action goes down. In fact, the option value reverts to the NPV value when the dividend exceeds 32%. That is, the NPV or value of executing now is $123.26M (i.e., $1,849.43M − $1,721.41M − $4.76M). So, at any dividend yield exceeding 32%, the optimal decision is to execute immediately rather than to wait and phase the development.
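As a rough cross-check of the Expanded NPV, Phase II can be approximated as a European call on the project's PV of cash flows, with the Phase II cost as the strike and the 3% dividend yield as the cost of waiting. This is an approximation under assumed inputs (the 35% volatility below is hypothetical), not the SLS super lattice solution, but it lands close to the $388M reported.

```python
# Black-Scholes approximation of the two-phase option (an assumption, not the
# SLS lattice): asset $1,849.43M, Phase II strike $2,000M in year 3,
# r = 5% continuous, dividend yield 3%, hypothetical 35% volatility.
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf

def bs_call(S, K, T, r, sigma, q):
    d1 = (log(S / K) + (r - q + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * exp(-q * T) * N(d1) - K * exp(-r * T) * N(d2)

phase2_option = bs_call(1849.43, 2000, 3, 0.05, 0.35, 0.03)
expanded_npv = phase2_option - 5 * exp(-0.05)   # net of the $5M Phase I cost
print(f"Approximate Expanded NPV: ${expanded_npv:.0f}M")  # roughly $386M here
```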
FIGURE 126.4 Applying real options on a two-phased project

FIGURE 126.5 Linking real options in the spreadsheet

FIGURE 126.6 Real options function call in Excel

FIGURE 126.7 Graphical depiction of a two-phased sequential compound option model

OPTIMIZATION

The next step is portfolio optimization; that is, how to efficiently and effectively allocate a limited budget (budget and human resource constraints) across many possible projects while simultaneously accounting for their uncertainties, risks, and strategic flexibility, all the while maximizing the portfolio's ENPV subject to the least amount of risk (Sharpe ratio). To this point we have shown how a single integrated model can be built utilizing simulation, forecasting, and real options. Figure 126.8 illustrates a summary of 12 projects (here we assume that you have re-created the simulation, forecasting, and real options models for each of the other 11 projects), complete with their expanded NPV, NPV, cost, risk, return-to-risk ratio, and profitability index. To simplify our analysis and to illustrate the power of optimization under uncertainty, the returns and risk values for projects B to L are simulated instead of rebuilding the model 11 more times.

FIGURE 126.8 Portfolio of 12 projects with different risk-return profiles
Then the individual projects' human resource requirements, as measured by full-time equivalents (FTEs), are included. We simplify the problem by listing only the three major human resource requirements: engineers, managers, and salespeople. The FTE salary costs are simulated using Risk Simulator. Finally, decision variables (the Decision Variables in column J in the model) are constructed, with a binary variable of 1 or 0 for go and no-go decisions. The model also shows all 12 projects and their rankings sorted by returns, risk, cost, returns-to-risk ratio, and profitability index. Clearly, it is fairly difficult to determine simply by looking at this matrix which projects are the best to select for the portfolio. The model also shows this matrix in graphical form, where the size of each ball is the cost of implementation, the x-axis shows the total returns including flexibility, and the y-axis shows the project risk as measured by the volatility of each project's cash flows. Again, it is fairly difficult to choose the right combination of projects at this point; optimization is required to assist in our decision making.

To set up the optimization process, first select each of the decision variables in column J and set the relevant decision variables by clicking on Risk Simulator | Set Decision. Alternatively, you can create one decision variable and copy/paste it. The linear constraint is that the total budget value for the portfolio has to be less than $5,000. Next, select the Portfolio Sharpe Ratio (D22) as the objective to maximize. The next sections illustrate the results of the optimization under uncertainty and their interpretation. Further, we can add additional constraints if required, such as requiring the total FTE cost not to exceed $9,500,000. We can then vary the constraint on the number of projects allowed, marginally increasing it to see what additional returns and risks the portfolio will take on. In each run, with a variable constraint inserted, an efficient frontier is generated. This efficient frontier is generated by connecting a sequence of optimal portfolios under different risk levels. That is, for a particular risk level, points on the frontier are the combinations of projects that maximize the returns of the portfolio subject to the lowest risk. Similarly, these are the points where, given a set of return requirements, the combinations of projects provide the least amount of portfolio risk.
OPTIMIZATION PARAMETERS

For the first run, these parameters were used:

Objective: Maximize Portfolio Returns
Constraint: Total budget allocation = $5,000
Constraint: Total number of projects not to exceed 6 (we then vary this from 5 to 9 to create an investment efficient frontier)

The results are shown in Figure 126.10, including a portfolio efficient frontier (Figure 126.9), where, on the frontier, each portfolio combination of projects yields the maximum returns (portfolio NPV) subject to the minimum portfolio risk. Clearly the first portfolio (with the 5-project constraint) is not a desirable outcome due to its low returns, so the obvious candidates are the other portfolios. It is now up to management to determine which risk-return combination it wants: depending on the risk appetite of the decision makers, the budget availability, and the capability of handling more projects, these five portfolio combinations are the optimal portfolios that maximize returns subject to the least risk, considering the uncertainties (simulation), strategic flexibility (real options), and uncertain future outcomes (forecasting). Of course, the results here hold only for a specific set of input assumptions and constraints; you may change any of the input assumptions to obtain different variants of optimal portfolio allocations. Please see Figures 126.9 and 126.10.

Many observations can be made from the summary of results. While all of the optimal portfolios are constrained to be under a total cost budget and a number of projects less than or equal to some value (in the efficient frontier, we change this constraint from 5 to 9 projects), all of these combinations of projects use the budget most effectively, generating the highest ENPV with the least amount of risk (the Sharpe ratio is maximized) for the entire portfolio. However, the farther to the right we go on the efficient frontier chart, the higher the level of risk (a wider range for ENPV, a higher volatility coefficient, and more projects required, making the portfolio riskier and more costly to implement) and the higher the potential returns. These values can be obtained from the forecast charts in Risk Simulator (not shown) by setting the optimal project selections and running a simulation each time to view the ENPV and IRR forecast charts. It is at this point that the decision maker has to decide which risk-return profile to undertake. All other combinations of projects in a portfolio are by definition suboptimal to these, given the same constraints, and should not be entertained. Hence, from the 2^12 = 4,096 possible go/no-go portfolio combinations, we have now isolated the decision down to a handful of best portfolios.

Finally, we can again employ a high-level portfolio real option on the decision. That is, because all candidate portfolios require the implementation of projects B, D, H, I, and J, do these first! Then leave the option open to execute the remaining projects (F, K, L) depending on which portfolio management decides to go with later. By doing so, we buy additional time (a portfolio-level option to defer) before making the final determination. If something happens during development, a project can be terminated (option to abandon), or the analysis can be rerun, different projects added, and a new optimal portfolio obtained (option to switch).

FIGURE 126.9 Efficient frontier

FIGURE 126.10 Project selection by multiple criteria
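With only 12 binary decision variables, the first run's optimization can even be brute-forced for intuition: enumerate all 2^12 go/no-go combinations, discard those violating the budget and project-count constraints, and keep the portfolio with the best Sharpe ratio. The sketch below does exactly that with hypothetical project returns, risks, and costs, and it simplifies by treating the projects as independent.

```python
# Brute-force portfolio selection (hypothetical inputs, independence assumed;
# the model itself uses simulated, correlated project values).
from itertools import product

returns = [120, 95, 180, 60, 150, 80, 110, 200, 90, 70, 130, 160]      # ENPV
risks   = [30, 25, 60, 10, 45, 20, 35, 70, 15, 12, 40, 55]             # volatility
costs   = [500, 400, 900, 300, 700, 350, 450, 1000, 380, 320, 600, 800]

best = None
for picks in product([0, 1], repeat=12):
    cost = sum(c for c, p in zip(costs, picks) if p)
    if cost > 5000 or sum(picks) > 6:        # budget and project-count constraints
        continue
    ret = sum(r for r, p in zip(returns, picks) if p)
    risk = sum(s * s for s, p in zip(risks, picks) if p) ** 0.5  # variances add
    sharpe = ret / risk if risk else 0.0
    if best is None or sharpe > best[0]:
        best = (sharpe, picks, ret, cost)

sharpe, picks, ret, cost = best
chosen = [chr(ord("A") + i) for i, p in enumerate(picks) if p]
print(f"Best Sharpe {sharpe:.2f}: projects {chosen}, ENPV {ret}, cost {cost}")
```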
127. Risk Analysis—Interest Rate Risk File Name: Risk Analysis—Interest Rate Risk Location: Modeling Toolkit | Risk Analysis | Interest Rate Risk Brief Description: Applying duration and convexity measures to account for a bond’s sensitivity and how interest rate shifts can affect the new bond price, and how this new bond price can be approximated using these sensitivity measures Requirements: Modeling Toolkit, Risk Simulator Modeling Toolkit Functions Used: B2BondPriceDiscrete, B2ModifiedDuration, B2ConvexityDiscrete
Banks selling fixed income products need to understand interest rate risk. This model uses duration and convexity to show how fixed income products react under various market conditions. To compare the effects of interest rate and credit risks on fixed income investments, the model uses modified duration and convexity (discrete discounting) to analyze the effect of a change in interest rates on the value of a bond or debt.

Duration and convexity are sensitivity measures that describe exposure to parallel shifts in the spot interest rate yield curve, applicable to individual fixed income instruments or entire fixed income portfolios. These sensitivities cannot warn of exposure to more complex movements in the spot curve, such as tilts and bends, only parallel shifts. The idea behind duration is simple. Suppose a portfolio has a duration of 2.5 years. Then the portfolio's value will decline about 2.5% for each 1% increase in interest rates, or rise about 2.5% for each 1% decrease in interest rates. Typically, a bond's duration is positive, but exotic instruments, such as mortgage-backed securities, may have negative durations, as may portfolios that short fixed income instruments or pay fixed for floating on an interest rate swap. Inverse floaters tend to have large positive durations; their values change significantly for small changes in rates. Highly leveraged fixed income portfolios tend to have very large (positive or negative) durations.

In contrast, convexity summarizes the second most significant piece of information, the nonlinear curvature of the price-yield relationship, whereas duration measures the linear or first-order sensitivity. Duration and convexity have traditionally been used as tools for immunization or asset-liability management. To avoid exposure to parallel spot curve shifts, an organization (e.g., an insurance company or defined benefit pension plan) with significant fixed income exposures might perform duration matching by structuring its assets so that their duration matches the duration of its liabilities (so the two offset each other). Even more effective (but less frequently practical) is duration-convexity matching, in which assets are structured so that both durations and convexities match. Please see Figure 127.1.
FIGURE 127.1 Interest rate risk
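The sketch below reproduces the basic discrete-discounting bond arithmetic behind the model, approximating modified duration and convexity by finite differences and then using them in the standard second-order price approximation, ΔP/P ≈ −D·Δy + ½·C·Δy². The Toolkit's B2BondPriceDiscrete, B2ModifiedDuration, and B2ConvexityDiscrete functions are the workbook equivalents; the bond inputs here are hypothetical.

```python
# Bond price, modified duration, and convexity via finite differences
# (hypothetical 10-year, 6% annual coupon bond at a 5% yield).
def bond_price(face, coupon_rate, ytm, years):
    cfs = [face * coupon_rate] * years
    cfs[-1] += face                                   # principal at maturity
    return sum(cf / (1 + ytm) ** t for t, cf in enumerate(cfs, start=1))

def modified_duration(face, c, y, n, dy=1e-4):
    up, dn = bond_price(face, c, y + dy, n), bond_price(face, c, y - dy, n)
    return -(up - dn) / (2 * dy * bond_price(face, c, y, n))

def convexity(face, c, y, n, dy=1e-4):
    up, dn = bond_price(face, c, y + dy, n), bond_price(face, c, y - dy, n)
    p = bond_price(face, c, y, n)
    return (up + dn - 2 * p) / (dy ** 2 * p)

p0 = bond_price(1000, 0.06, 0.05, 10)
d, cx = modified_duration(1000, 0.06, 0.05, 10), convexity(1000, 0.06, 0.05, 10)
dy = 0.01                                             # +100 bp parallel shift
approx = p0 * (1 - d * dy + 0.5 * cx * dy ** 2)
exact = bond_price(1000, 0.06, 0.06, 10)
print(f"Price {p0:.2f}, duration {d:.2f}, convexity {cx:.2f}")
print(f"Price after +100bp: approx {approx:.2f} vs exact {exact:.2f}")
```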
128. Risk Analysis—Portfolio Risk Return Profiles File Name: Risk Analysis—Portfolio Risk and Return Profile Location: Modeling Toolkit | Risk Analysis | Portfolio Risk and Return Profile Brief Description: Computing the risk and return on a portfolio of multiple assets given each asset’s own risk and return as well as its respective pairwise covariance Requirements: Modeling Toolkit Modeling Toolkit Functions Used: B2PortfolioReturns, B2PortfolioVariance, B2PortfolioRisk
This model computes the portfolio level of returns and risks given the percent allocated to various assets, the expected returns and risks of the individual assets, and the variance-covariance matrix of the asset mix. You can use the Variance-Covariance tool in the Modeling Toolkit to compute this matrix if you have the raw stock returns data. Please see Figure 128.1. For instance, if raw data on various assets exist (Figure 128.2), simply select the data area and run the Variance-Covariance tool under the Modeling Toolkit | Tools menu item. The generated results are shown in the computed Variance-Covariance worksheet. This is a very handy tool indeed, as the portfolio risk computation can sometimes be rather cumbersome. The portfolio risk is:

$$\sigma_p = \sqrt{\sum_{k=1}^{n} w_k^2 \sigma_k^2 + \sum_{i=1}^{n} \sum_{j=i+1}^{n} 2\, w_i w_j \rho_{i,j} \sigma_i \sigma_j}$$

where the volatility (σ) of each asset k is squared, multiplied by its weight squared, and summed, and this is added to the sum over all pairwise correlations among the assets (ρ), multiplied by their respective weights (w) and volatilities (σ). For the example model, this equation expands to 21 cross terms and 7 squared terms, creating a relatively complicated equation to compute manually. Using the single B2PortfolioRisk function, the process is greatly simplified.
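In matrix form, the expression under the square root is simply w'Σw, which is what B2PortfolioRisk evaluates. The sketch below, with hypothetical weights, returns, volatilities, and correlations, builds the variance-covariance matrix and computes the portfolio return and risk.

```python
# Portfolio return and risk from a variance-covariance matrix
# (hypothetical four-asset mix).
import numpy as np

w = np.array([0.25, 0.25, 0.30, 0.20])            # portfolio weights
mu = np.array([0.08, 0.11, 0.06, 0.09])           # expected returns per asset
vols = np.array([0.15, 0.22, 0.10, 0.18])         # annualized volatilities
corr = np.array([[1.0, 0.3, 0.1, 0.2],
                 [0.3, 1.0, 0.2, 0.4],
                 [0.1, 0.2, 1.0, 0.1],
                 [0.2, 0.4, 0.1, 1.0]])
cov = np.outer(vols, vols) * corr                 # variance-covariance matrix

port_return = w @ mu
port_risk = np.sqrt(w @ cov @ w)                  # expands to the formula above
print(f"Portfolio return {port_return:.2%}, portfolio risk {port_risk:.2%}")
```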
FIGURE 128.1 Portfolio risk-return profile using variance-covariance matrix
FIGURE 128.2 Raw stock returns data
129. Risk Hedging—Delta-Gamma Hedging File Name: Risk Hedging—Delta-Gamma Hedge Location: Modeling Toolkit | Risk Hedging | Delta-Gamma Hedge Brief Description: Setting up a delta-gamma riskless and costless hedge in determining the number of call options to sell and buy, the number of common stocks to buy, and the borrowing amount required to set up a perfect arbitrage-free hedge Requirements: Modeling Toolkit Modeling Toolkit Functions Used: B2DeltaGammaHedgeCallSold, B2DeltaGammaHedgeSharesBought, B2DeltaGammaHedgeMoneyBorrowed
The delta-gamma hedge provides a hedge against larger changes in the asset value. It is constructed by buying some equity shares and a call option, funded by borrowing some amount of money and selling a call option at a different strike price. The net transaction is costless (a zero-sum position), making this hedge completely effective in generating both a zero delta and a zero gamma for the portfolio. In a plain delta hedge, only the total portfolio delta is zero (e.g., to offset a positive delta on some underlying assets, call options are sold to generate sufficient negative delta to completely offset the existing deltas and produce a zero-delta portfolio). The problem with delta-neutral portfolios is that secondary changes (i.e., larger shocks) are not hedged. Delta-gamma hedged portfolios, in contrast, hedge both delta and gamma risk, making them considerably more expensive to construct. The typical problem with such a hedging vehicle is that, in larger quantities, buying and selling the additional options or underlying assets may itself move the market prices of the very instruments used to perform the hedge. Therefore, a dynamic hedge, or continuously rebalanced hedge portfolio, is typically required. Please see Figure 129.1.

FIGURE 129.1 Delta-gamma hedging
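The two neutrality conditions can be solved directly. The sketch below, with hypothetical Black-Scholes inputs, hedges one call sold at strike 100 by buying calls at strike 110 (to cancel gamma) plus shares (to cancel the residual delta); the gamma formula matches the closed form derived in the Greeks chapter later in this book.

```python
# Solving the delta-gamma neutrality conditions (hypothetical inputs).
from math import log, sqrt, exp, pi
from statistics import NormalDist

N = NormalDist().cdf

def call_delta_gamma(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    delta = N(d1)
    gamma = exp(-0.5 * d1 ** 2) / (S * sigma * sqrt(2 * pi * T))
    return delta, gamma

S, T, r, sigma = 100.0, 0.5, 0.05, 0.30
d1_, g1 = call_delta_gamma(S, 100, T, r, sigma)   # short one call at K1 = 100
d2_, g2 = call_delta_gamma(S, 110, T, r, sigma)   # hedge with calls at K2 = 110

n_calls = g1 / g2                # zero gamma: -g1 + n_calls * g2 = 0
n_shares = d1_ - n_calls * d2_   # zero delta: -d1_ + n_calls * d2_ + n_shares = 0
print(f"Per call sold: buy {n_calls:.3f} calls at K=110 and {n_shares:.3f} shares")
```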
130. Risk Hedging—Delta Hedging File Name: Risk Hedging—Delta Hedge Location: Modeling Toolkit | Risk Hedging | Delta Hedge Brief Description: Setting up a delta riskless and costless hedge in determining the number of call options to sell, the number of common stocks to buy, and the borrowing amount required to set up a costless hedge Requirements: Modeling Toolkit Modeling Toolkit Functions Used: B2DeltaHedgeCallSold, B2DeltaHedgeSharesBought, B2DeltaHedgeMoneyBorrowed
The delta hedge provides a hedge against small changes in the asset value by buying some equity shares of the asset and financing them by selling a call option and borrowing some money. The net should be a zero-sum transaction, providing a hedge where the portfolio's delta is zero. For instance, an investor computes the portfolio delta of some underlying asset and offsets this delta by buying or selling additional instruments such that the new instruments offset the delta of the existing underlying assets. Typically, suppose an investor holds a long position in some stock or a commodity such as gold, creating a positive delta for the asset. To offset this, she sells some calls to generate negative delta, such that the quantity of call options sold on the gold is sufficient to offset the delta in the portfolio. Please see Figure 130.1.
FIGURE 130.1 Delta hedging
131. Risk Hedging—Effects of Fixed versus Floating Rates File Name: Risk Hedging—Effects of Fixed versus Floating Rates Location: Modeling Toolkit | Risk Hedging | Effects of Fixed versus Floating Rates Brief Description: Setting up various levels of hedging to determine the impact on earnings per share Requirements: Modeling Toolkit
This model illustrates the impact on financial earnings and earnings before interest and taxes (EBIT) of a hedged versus an unhedged position. The hedge is constructed through interest rate swap payments. Various swap scenarios (different combinations of fixed-rate versus floating-rate debt) can be generated and tested in this model to determine the impact on earnings per share (EPS) and other financial metrics.

The model looks at several scenarios of changes in interest rates (row 23 in the model), where a fixed rate versus a floating rate is applied (rows 26 and 27) based on the proportion of fixed to floating debt (rows 16 and 17). The fixed rate stays fixed under all conditions, whereas the floating rate is pegged to LIBOR and a predetermined swap rate (where a floating rate is swapped for a fixed rate and vice versa) as well as to changes in the prevailing interest rate in the economy. For example, the floating rate equals the fixed rate adjusted for the swap rate plus LIBOR and the change in the prevailing market rate (see row 27 of the model). The model can then be simulated by adding assumptions for LIBOR and the change in prevailing interest rates, to determine the probabilistic impact of a fixed versus floating rate regime on the company's earnings. The foreign exchange cash flow hedge model shown in the next chapter goes into more detail on the hedging of foreign exchange exposure through the use of risk simulation. Please see Figure 131.1.
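The blended interest expense described above reduces to a one-line formula. The sketch below, with hypothetical debt levels and rates, shows how the expense responds to parallel shifts in prevailing rates under a 60/40 fixed-to-floating mix.

```python
# Blended interest expense under a fixed/floating debt mix (hypothetical rates).
def interest_expense(debt, fixed_share, fixed_rate, libor, swap_spread, rate_shift):
    floating_rate = libor + swap_spread + rate_shift   # floating leg, as described
    fixed_portion = debt * fixed_share * fixed_rate
    floating_portion = debt * (1 - fixed_share) * floating_rate
    return fixed_portion + floating_portion

for shift in (-0.01, 0.0, 0.01, 0.02):                 # prevailing-rate scenarios
    expense = interest_expense(1_000_000, 0.60, 0.065, 0.05, 0.012, shift)
    print(f"Rate shift {shift:+.0%}: interest expense {expense:,.0f}")
```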
FIGURE 131.1 Impacts of an unhedged versus hedged position
132. Risk Hedging—Foreign Exchange Cash Flow Model File Name: Risk Hedging—Foreign Exchange Cash Flow Model Location: Modeling Toolkit | Risk Hedging | Foreign Exchange Cash Flow Model Brief Description: Illustrating how to use Risk Simulator for simulating the foreign exchange rate to determine whether a hedged fixed exchange rate or floating unhedged rate is worth more Requirements: Modeling Toolkit, Risk Simulator
This cash flow model is used to illustrate the effects of hedging foreign exchange rates. The tornado sensitivity analysis shows that the foreign exchange rate, or forex, has the highest effect on the profitability of the project. Suppose for the moment that the project undertaken is in a foreign country (FC), the values obtained are denominated in FC currency, and the parent company is in the United States and requires that the net revenues be repatriated. The questions we ask here are: What is the appropriate forex rate to hedge at, and what are the appropriate costs for that particular rate? Banks will be able to provide your firm with the appropriate pricing structure for various exchange forward rates, but by using the model here, you can determine the added value of the hedge and hence decide whether the value added exceeds the cost of obtaining the hedge. This model is already preset for you to run a simulation on. The Forex Data worksheet shows historical exchange rates between the FC and the U.S. dollar. Using these values, we can create a custom distribution (we simply used the rounded values in our illustration), which is already preset in this example model. However, should you wish to replicate the model setup, follow these steps:
1. Start a new profile (Risk Simulator | New Profile) and give it an appropriate name.

2. Go to the Forex Data worksheet, select the data in cells K6:K490, and click on Edit | Copy or Ctrl+C.

3. Select an empty cell (e.g., cell K4), click on Risk Simulator | Set Input Assumption, and select Custom Distribution.

4. Click on Paste to paste the data into the custom distribution, then Update Chart to view the results on the chart. Then click File | Save and save the newly created distribution to your hard drive.
5. Go to the Model worksheet and select the Forex cell (J9). Click on Risk Simulator | Set Input Assumption, choose Custom, then click on Open a distribution and select the previously saved custom distribution.

6. You may continue to set assumptions across the entire model and set the NPV cell as a forecast (Risk Simulator | Set Output Forecast).

7. Run the simulation with the custom distribution to denote an unhedged position. You can then rerun the simulation, but this time, delete the custom distribution (use the Delete Simulation Parameter icon, not Excel's delete function or the keyboard's Delete key) and enter the relevant hedged exchange rate, indicating a fixed rate. You may create a report after each simulation to compare the results. A smarter approach is to duplicate the simulation profile and delete this input assumption on the duplicated profile. You can then switch between profiles to run the hedged and unhedged positions.

From the sample analysis, see Table 132.1.

TABLE 132.1 Hedged versus unhedged risk simulation results

                   Mean ($000)   Stdev ($000)   90% Confidence ($000)   CV (%)
Unhedged           2292.82       157.94         2021 to 2550            6.89
Hedged at 0.85     2408.81       132.63         2199 to 2618            5.51
Hedged at 0.83     2352.13       129.51         2147 to 2556            5.51
Hedged at 0.80     2267.12       124.83         2069 to 2463            5.51

From this table, several things are evident:
The higher the hedged exchange rate, the more profitable the project (e.g., 0.85 USD/FC is worth more than 0.80 USD/FC).

The relative risk ratio, computed as the coefficient of variation (CV, the standard deviation divided by the mean), is the same regardless of the exchange rate, as long as the position is hedged.

The CV is lower for hedged positions than for unhedged positions, indicating that the relative risk is reduced by hedging.

The exchange rate hedge should be set above 0.80, so that the hedged position is worth more than the unhedged one.

In comparing a hedged versus an unhedged position, we can determine how much the hedge is worth; for instance, hedging at 0.85 USD/FC means that on average the hedge is worth $115,990 (computed as $2,408.81 − $2,292.82, denominated in thousands). As long as the cost of the hedge is less than this amount, it is a good idea to pursue the hedge. In fact, hedging at 0.83 USD/FC returns a lower expected value for the project, making that hedge level less optimal.

Finally, the higher the hedged rate, the lower the risk to the project (as evident from the standard deviation), but a higher hedge costs more, and the net expected value of the project becomes lower.

See Figures 132.1–132.4.
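The comparison in Table 132.1 can be mimicked compactly: resample historical exchange rates for the unhedged position (the custom distribution) and apply a fixed rate for the hedged one. The rates and cash flow below are hypothetical, and a single-period repatriation is assumed in place of the full cash flow model.

```python
# Hedged vs. unhedged repatriation sketch (hypothetical FX history and flows).
import random
import statistics as st

historical_fx = [0.78, 0.80, 0.81, 0.83, 0.85, 0.86, 0.84, 0.82, 0.79, 0.88]
net_revenue_fc = 2800  # net cash flow in foreign currency (thousands)

def project_value(fx_rate):
    return net_revenue_fc * fx_rate  # repatriated USD value (thousands)

unhedged = [project_value(random.choice(historical_fx)) for _ in range(10_000)]
hedged = project_value(0.85)

cv = st.stdev(unhedged) / st.mean(unhedged)
print(f"Unhedged mean {st.mean(unhedged):.0f}, stdev {st.stdev(unhedged):.0f}, CV {cv:.2%}")
print(f"Hedged at 0.85: {hedged:.0f}; hedge worth up to "
      f"{hedged - st.mean(unhedged):.0f} before hedging costs")
```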
FIGURE 132.1 Hedging cash flow models
FIGURE 132.2 Unhedged cash flow model
FIGURE 132.3 Hedged at 0.85 USD/FC
FIGURE 132.4 Hedged at 0.83 USD/FC
133. Risk Hedging—Hedging Foreign Exchange Exposure File Name: Risk Hedging—Hedging Forex Exposure Location: Modeling Toolkit | Risk Hedging | Foreign Exchange Exposure Hedging Brief Description: Illustrating how to use Risk Simulator for simulating foreign exchange rates to determine the value of a hedged currency option position Requirements: Modeling Toolkit, Risk Simulator
This model is used to simulate possible foreign exchange spot and future prices and their effects on the cash flow statement of a company under a freely floating exchange rate versus using currency options to hedge the foreign exchange exposure. Please see Figure 133.1. Row 5 of the model has assumptions preset for simulating foreign exchange spot rates, whereas row 6 has fixed exchange rates. To hedge fluctuations in the foreign exchange rates, currency forward put options are purchased (modeled in row 13 using the B2CurrencyForwardPutOption function). The cost of the hedge is simply the put valuation times the quantity of options purchased (cells B14 and H26). The income from this hedge is the put option's strike less the prevailing foreign exchange rate at maturity, times the open position (cell H27). The difference is, of course, the profit or loss from hedging. Finally, the loss distribution is simply the difference between the value of the project at the time the hedge is put in place and its value at maturity. In an unhedged position, this value is a simple difference; in a hedged position, the cost of the hedge as well as the profits from the hedge are included in the computation (cells H31 and H32). The model can then be simulated and its risks quantified. Figure 133.2 shows the Value at Risk (VaR) of a hedged versus an unhedged position: clearly the right-tail VaR of the loss distribution is higher without the currency options hedge. Figure 133.3 shows that a currency-hedged position has lower risk, a lower risk-to-return ratio, higher returns, and less swing in outcomes than an exposed position. Finally, Figure 133.4 shows the hedging effectiveness, that is, how often the hedge is in the money and becomes usable.
FIGURE 133.1 Hedging currency exposures with currency options
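For reference, the currency put used in the hedge can be priced with a Garman-Kohlhagen style formula, which is presumably what B2CurrencyForwardPutOption wraps (an assumption on our part). All inputs below are hypothetical.

```python
# Garman-Kohlhagen currency put sketch (hypothetical rates and volatility).
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf

def currency_put(spot, strike, T, r_domestic, r_foreign, sigma):
    d1 = (log(spot / strike) + (r_domestic - r_foreign + 0.5 * sigma ** 2) * T) \
         / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return (strike * exp(-r_domestic * T) * N(-d2)
            - spot * exp(-r_foreign * T) * N(-d1))

premium = currency_put(spot=0.84, strike=0.83, T=1.0,
                       r_domestic=0.05, r_foreign=0.04, sigma=0.12)
open_position = 2500  # thousands of FC to repatriate
print(f"Premium per unit {premium:.4f}; total hedge cost {premium * open_position:.1f}")
```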
FIGURE 133.2 Values at Risk of hedged versus unhedged positions
FIGURE 133.3 Forecast statistics of the loss distribution
FIGURE 133.4 Hedging effectiveness
134. Sensitivity—Greeks File Name: Sensitivity—Greeks Location: Modeling Toolkit | Sensitivity | Greeks Brief Description: Computing an option’s instantaneous sensitivity parameters, or Greeks Requirements: Modeling Toolkit, Risk Simulator Modeling Toolkit Functions Used: B2CallDelta, B2CallGamma, B2CallTheta, B2CallRho, B2CallVega, B2PutDelta, B2PutGamma, B2PutTheta, B2PutRho, B2PutVega
This model illustrates the option Greeks, which determine the instantaneous sensitivities of an option's value to each of its inputs. Be careful: these values are based on partial derivatives for instantaneous changes and may not represent the change resulting from, say, some discrete percentage move in an input. To determine the sensitivities to each input by varying one input at a time, use Risk Simulator's tornado tool or see the Sensitivity—Tornado and Sensitivity Nonlinear model instead. Greeks are sometimes useful for setting up delta and delta-gamma hedges, to reduce or completely eliminate the risk in a portfolio of options and underlying assets. Please see Figure 134.1.
FIGURE 134.1 Sensitivity analysis—option Greeks
Using the corollary outputs generated by options theory, we can obtain the Greeks (namely, Delta, Gamma, Rho, Theta, Vega, and Xi) as a form of sensitivity analysis. By definition, sensitivity analysis, or stress testing, looks at the change in the option price given a change of one unit in each of the underlying variables. In our case, these sensitivities reflect the instantaneous changes in the value of the option given a unit change in a particular variable, ceteris paribus. In other words, we can form a sensitivity table by simply looking at the corresponding values of Delta, Gamma, Rho, Theta, Vega, and Xi. Delta provides the change in the value of the option given a unit change in the present value of the underlying asset's cash flow series. Gamma provides the rate of change in Delta given a unit change in the underlying asset's cash flow series. Rho provides the change in the value of the option given a one-unit change in the interest rate, Theta looks at the change per unit of time, Vega looks at the change per unit of volatility, and Xi looks at the change per unit of cost (the strike). Together, these measures provide a fairly comprehensive view of how the value of the option changes, thereby providing a test of the sensitivity of the option's value. A worst-case, nominal-case, and best-case scenario can then be constructed. The sensitivity table not only provides a good test of the robustness of the results but also provides great insight into the value drivers of the firm (i.e., which variables have the most impact on the firm's bottom line). The following provides the derivations of these sensitivity measures for a European option without dividend payments. In actual options analysis, it might be easier to compute the sensitivities based on a percentage change in each input rather than instantaneous changes. Nonetheless, these Greeks are useful when modeling simultaneous hedging strategies (e.g., the gamma and delta-gamma hedges described previously in this book).
Call Delta

Starting from the Black-Scholes call value

$$C_t = S_t N(d_1) - Xe^{-rT}N(d_2), \qquad d_1 = \frac{\ln\left(\frac{S_t}{X}\right) + \left(r + \frac{1}{2}\sigma^2\right)T}{\sigma\sqrt{T}}, \qquad d_2 = d_1 - \sigma\sqrt{T},$$

the call Delta is defined as the change in call value for a change in the underlying asset value, that is, the partial derivative $\partial C_t/\partial S_t$ at an instantaneous time $t$. Differentiating gives

$$\frac{\partial C_t}{\partial S_t} = N(d_1) + S_t\,\frac{e^{-\frac{1}{2}d_1^2}}{\sqrt{2\pi}}\,\frac{\partial d_1}{\partial S_t} - Xe^{-rT}\,\frac{e^{-\frac{1}{2}d_2^2}}{\sqrt{2\pi}}\,\frac{\partial d_2}{\partial S_t}.$$

Because $\partial d_1/\partial S_t = \partial d_2/\partial S_t = 1/(S_t\sigma\sqrt{T})$ and $Xe^{-rT}e^{-\frac{1}{2}d_2^2} = S_t e^{-\frac{1}{2}d_1^2}$ (substitute $d_2 = d_1 - \sigma\sqrt{T}$ and expand), the last two terms cancel, leaving

$$\text{Delta} = \frac{\partial C_t}{\partial S_t} = N(d_1)$$

Call Gamma

Gamma is the rate of change of Delta with respect to the underlying asset value:

$$\text{Gamma} = \frac{\partial}{\partial S_t}\left(\frac{\partial C_t}{\partial S_t}\right) = \frac{\partial N(d_1)}{\partial S_t} = \frac{e^{-\frac{1}{2}d_1^2}}{\sqrt{2\pi}}\,\frac{\partial d_1}{\partial S_t} = \frac{e^{-\frac{1}{2}d_1^2}}{S_t\,\sigma\sqrt{2\pi T}}$$

Call Rho

$$\text{Rho} = \frac{\partial C_t}{\partial r} = S_t\frac{\partial N(d_1)}{\partial r} + XTe^{-rT}N(d_2) - Xe^{-rT}\frac{\partial N(d_2)}{\partial r}.$$

Since $\partial d_1/\partial r = \partial d_2/\partial r$, the first and last terms cancel by the same identity as before, so

$$\text{Rho} = XTe^{-rT}N(d_2)$$

Call Theta

Differentiating with respect to calendar time and using $\frac{\partial d_2}{\partial t} = \frac{\partial d_1}{\partial t} + \frac{\sigma}{2\sqrt{T}}$, the density terms collapse by the same identity to $-S_t\sigma e^{-\frac{1}{2}d_1^2}/(2\sqrt{2\pi T})$, while differentiating the discount factor contributes $-rXe^{-rT}N(d_2)$, giving the time decay

$$\text{Theta} = -\frac{S_t\,\sigma\, e^{-\frac{1}{2}d_1^2}}{2\sqrt{2\pi T}} - rXe^{-rT}N(d_2)$$

Call Vega

$$\text{Vega} = \frac{\partial C_t}{\partial \sigma} = S_t\frac{e^{-\frac{1}{2}d_1^2}}{\sqrt{2\pi}}\frac{\partial d_1}{\partial \sigma} - Xe^{-rT}\frac{e^{-\frac{1}{2}d_2^2}}{\sqrt{2\pi}}\frac{\partial d_2}{\partial \sigma} = S_t\frac{e^{-\frac{1}{2}d_1^2}}{\sqrt{2\pi}}\left(\frac{\partial d_1}{\partial \sigma} - \frac{\partial d_2}{\partial \sigma}\right),$$

and since $\partial d_1/\partial \sigma - \partial d_2/\partial \sigma = \sqrt{T}$,

$$\text{Vega} = \frac{S_t\sqrt{T}\, e^{-\frac{1}{2}d_1^2}}{\sqrt{2\pi}}$$

Call Xi

Xi is the sensitivity to the strike (cost). Differentiating $C_t = S_tN(d_1) - Xe^{-rT}N(d_2)$ with respect to $X$, the density terms cancel exactly as in the Delta derivation, leaving only the direct term:

$$\text{Xi} = \frac{\partial C_t}{\partial X} = -e^{-rT}N(d_2)$$
135. Sensitivity—Tornado and Sensitivity Charts Linear File Name: Sensitivity—Tornado and Sensitivity Linear Location: Modeling Toolkit | Sensitivity | Tornado and Sensitivity Linear Brief Description: Illustrating how to use Risk Simulator to run a presimulation sensitivity analysis (tornado and spider charts) and run a postsimulation sensitivity analysis (sensitivity charts) Requirements: Modeling Toolkit, Risk Simulator
This example illustrates a simple discounted cash flow model and shows how sensitivity analysis can be performed prior to running a simulation and after a simulation is run. Tornado and spider charts are static sensitivity analysis tools useful for determining which variables impact the key results the most. That is, each precedent variable is perturbed a set amount and the key result is analyzed to determine which input variables are the critical success factors with the most impact. In contrast, sensitivity charts are dynamic, in that all precedent variables are perturbed together in a simultaneous fashion (the effects of autocorrelations, cross-correlations, and interactions are all captured in the resulting sensitivity chart). Therefore, a static tornado analysis is run before a simulation, while a sensitivity analysis is run after a simulation.

The precedents in a model are used in creating the tornado chart. Precedents are all the input variables that affect the outcome of the model. For instance, if the model consists of A = B + C, where C = D + E, then B, D, and E are the precedents for A (C is not a precedent, as it is only an intermediate calculated value). The testing range of each precedent variable can be set when running a tornado analysis and is used to estimate the target result. If the precedent variables are simple inputs, then the testing range will be a simple perturbation based on the range chosen (e.g., the default is ±10%). Each precedent variable can be perturbed at different percentages if required. A wider range is important, as it is better able to test extreme values rather than small perturbations around the expected values. In certain circumstances, extreme values may have a larger, smaller, or unbalanced impact (e.g., nonlinearities may occur where increasing or decreasing economies of scale and scope creep in for larger or smaller values of a variable), and only a wider range will capture this nonlinear impact.
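Conceptually, the tornado tool's loop is simple, as the sketch below shows: perturb one precedent at a time by ±10%, re-evaluate the model, and rank inputs by the output swing. The toy NPV function and base values are hypothetical stand-ins for the DCF worksheet.

```python
# One-at-a-time tornado analysis (toy model, hypothetical base values).
def npv_model(inputs):
    # Toy DCF: five years of price * quantity discounted at 12%, minus investment.
    return sum(inputs["price"] * inputs["quantity"] / 1.12 ** t
               for t in range(1, 6)) - inputs["investment"]

base = {"price": 10.0, "quantity": 50.0, "investment": 1800.0}
swings = []
for name in base:
    hi, lo = dict(base), dict(base)
    hi[name] *= 1.10   # +10% perturbation
    lo[name] *= 0.90   # -10% perturbation
    swings.append((name, npv_model(lo), npv_model(hi)))

# Widest output range first, exactly how the tornado chart orders its bars
for name, down, up in sorted(swings, key=lambda s: -abs(s[2] - s[1])):
    print(f"{name:>10}: downside {down:8.2f}, upside {up:8.2f}, "
          f"range {abs(up - down):7.2f}")
```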
CREATING A TORNADO AND SENSITIVITY CHART

To run this model, simply:

1. Go to the DCF Model worksheet and select the NPV result (cell G6) as seen in Figure 135.1.
2. Select Risk Simulator | Tools | Tornado Analysis (or click on the Tornado Chart icon).
3. Check that the software's intelligent naming is correct or, better still, click on Use Cell Address to use the cell address as the name for each precedent variable, and click OK (Figure 135.2).
FIGURE 135.1 Discounted cash flow model
FIGURE 135.2 Setting up a tornado analysis
Interpreting the Results

The report generated illustrates the sensitivity table (the starting base value of the key variable as well as the perturbed values and the precedents). The precedent with the highest impact (range of output) is listed first. The tornado chart illustrates this analysis graphically (Figure 135.3). The spider chart performs the same analysis but also accounts for nonlinear effects; that is, if the input variables have a nonlinear effect on the output variable, the lines on the spider chart will be curved. To see such curvature, rerun the analysis on the Black-Scholes model sheet in the tornado and sensitivity charts (nonlinear) example file (see the next chapter for details).

The sensitivity table shows the starting NPV base value of $96.63 and how each input is changed (e.g., Investment is changed from $1,800 to $1,980 on the upside with a +10% swing, and from $1,800 to $1,620 on the downside with a −10% swing). The resulting values on NPV are −$83.37 and $276.63, with a total range of $360, making Investment the variable with the highest impact on NPV. The precedent variables are ranked from the highest impact to the lowest impact.

The spider chart illustrates these effects graphically. The y-axis is the NPV target value, while the x-axis depicts the percentage change in each precedent value (the central point is the base case value of $96.63 at a 0% change from the base value of each precedent). Positively sloped lines indicate a positive relationship or effect, while negatively sloped lines indicate a negative relationship (e.g., Investment is negatively sloped, which means that the higher the investment level, the lower the NPV). The absolute value of the slope indicates the magnitude of the effect, computed as the percentage change in the result given a percentage change in the precedent (a steep line indicates a higher impact on the NPV y-axis given a change in the precedent x-axis).
FIGURE 135.3 Tornado analysis result
FIGURE 135.3 (Continued) The tornado chart ranks the precedents by their impact on NPV, from highest to lowest: Investment, Tax Rate, A Price, B Price, A Quantity, B Quantity, C Price, C Quantity, Discount Rate, Price Erosion, Sales Growth, Depreciation, Interest, Amortization, Cap Ex, and Working Capital.
The tornado chart illustrates the results in another graphical manner, with the highest-impact precedent listed first. The x-axis is the NPV value, and the center of the chart is the base case condition. Green (lighter) bars in the chart indicate a positive effect, while red (darker) bars indicate a negative effect. For Investment, the red (darker) bar is on the right side, indicating that higher investment lowers NPV; in other words, capital investment and NPV are negatively correlated. The opposite is true for the price and quantity of products A to C (their green or lighter bars are on the right side of the chart).
Although the tornado chart is easier to read, the spider chart is important for determining whether there are any nonlinearities in the model. Such nonlinearities are harder to identify in a tornado chart and may be important information in the model or provide decision makers important insight into the model's dynamics. For instance, in the Black-Scholes model (see the next chapter on nonlinear sensitivity analysis), the fact that stock price and strike price are nonlinearly related to the option value is important to know. This characteristic implies that option value will not increase or decrease proportionally to changes in stock or strike price, and that there might be some interactions between these two prices as well as other variables. As another example, an engineering model depicting nonlinearities might indicate that a particular part or component, when subjected to a high enough force or tension, will break. Clearly, it is important to understand such nonlinearities.

Remember that tornado analysis is a static sensitivity analysis applied to each input variable in the model; that is, each variable is perturbed individually and the resulting effects are tabulated. This makes tornado analysis a key step to execute before running a simulation: it is one of the very first steps in risk analysis, in which the most important impact drivers in the model are captured and identified. The next step is to identify which of these important impact drivers are uncertain. These uncertain impact drivers are the critical success drivers of a project, on which the results of the model depend, and they are the variables that should be simulated. Do not waste time simulating variables that are not uncertain or that have little impact on the results. Tornado charts assist in identifying these critical success drivers quickly and easily. Following this example, it might be that only price and quantity should be simulated, assuming that the required investment and effective tax rate are both known in advance and unchanging.
CREATING A SENSITIVITY CHART

To run this model, simply:

1. Create a new simulation profile (Risk Simulator | New Profile).
2. Set the relevant assumptions and forecasts on the DCF Model worksheet.
3. Run the simulation (Risk Simulator | Run Simulation).
4. Select Risk Simulator | Tools | Sensitivity Analysis.
Interpreting the Results

Notice that if correlations are turned off, the results of the sensitivity chart are similar to those of the tornado chart. Now reset the simulation and turn on correlations (select Risk Simulator | Reset Simulation, then select Risk Simulator | Edit Profile and check Apply Correlations, and then Risk Simulator | Run Simulation), and repeat the earlier steps for creating a sensitivity chart. Notice that when correlations are applied, the resulting analysis may be different due to the interactions among variables. Of course, you will need to set the relevant correlations among the assumptions. Sometimes the chart axis variable names can be too long (Figure 135.4). If that happens, simply truncate or rename some of the long variable names to something more concise and rerun the analysis; the charts will then be more visually appealing.
FIGURE 135.4 Dynamic sensitivity analysis results
The results of the sensitivity analysis comprise a report and two key charts. The first is a nonlinear rank correlation chart that ranks the assumption-forecast correlation pairs from highest to lowest. These correlations are nonlinear and nonparametric, making them free of any distributional requirements (i.e., an assumption with a Weibull distribution can be compared to another with a beta distribution). The results from this chart are fairly similar to those of the tornado analysis seen previously (without the capital investment value, of course, which we decided was a known value and hence was not simulated), with one special exception: tax rate was relegated to a much lower position in the sensitivity analysis chart than in the tornado chart. By itself, tax rate has a significant impact, but once the other variables interact in the model, it has a less dominant effect. This is because tax rate has a narrow distribution (historical tax rates tend not to fluctuate much) and because tax rate is a straight percentage of income before taxes, where other precedent variables have a larger effect on NPV. This example shows that performing sensitivity analysis after a simulation run is important to ascertain whether there are any interactions in the model and whether the effects of certain variables still hold.

The second chart illustrates the percent variation explained; that is, of the fluctuations in the forecast, how much of the variation can be explained by each of the assumptions after accounting for all the interactions among variables? Notice that the sum of all variations explained is usually close to 100% (sometimes other elements impact the model but cannot be captured here directly), and if correlations exist, the sum may sometimes exceed 100% (because the interaction effects are cumulative).
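To make the two charts concrete, the sketch below computes both measures from raw simulation data. It is a simplification under stated assumptions: the simulated arrays are hypothetical stand-ins for Risk Simulator's extracted data, Spearman's rank correlation stands in for the tool's nonlinear, nonparametric measure, and normalized squared rank correlations give a rough "percent variation explained."

```python
# Sketch of a postsimulation sensitivity measure: rank (Spearman) correlation
# between each simulated assumption and the forecast, plus an approximate
# "percent variation explained" from the squared rank correlations.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 5000
price = rng.normal(10, 1, n)                      # hypothetical simulated assumptions
quantity = rng.normal(50, 8, n)
tax_rate = rng.triangular(0.35, 0.40, 0.45, n)
npv = price * quantity * (1 - tax_rate) - 400     # toy forecast

assumptions = {"price": price, "quantity": quantity, "tax_rate": tax_rate}
rho = {k: spearmanr(v, npv)[0] for k, v in assumptions.items()}

total = sum(r ** 2 for r in rho.values())
for k in sorted(rho, key=lambda k: abs(rho[k]), reverse=True):
    print(f"{k:>9}: rank corr = {rho[k]:+.3f}, "
          f"~{100 * rho[k] ** 2 / total:.1f}% of variation explained")
```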
136. Sensitivity—Tornado and Sensitivity Nonlinear

File Name: Sensitivity—Tornado and Sensitivity Nonlinear
Location: Modeling Toolkit | Sensitivity | Tornado and Sensitivity Nonlinear
Brief Description: Illustrating how to use Risk Simulator to run a presimulation sensitivity analysis (tornado and spider charts) and a postsimulation sensitivity analysis (sensitivity charts)
Requirements: Modeling Toolkit, Risk Simulator
This example illustrates a simple option valuation model and shows how sensitivity analysis can be performed prior to running a simulation and after a simulation is run. Tornado and spider charts are static sensitivity analysis tools useful for determining which variables impact the key results the most. That is, each precedent variable is perturbed a set percentage amount, and the key result is analyzed to determine which input variables are the critical success factors with the most impact. In contrast, sensitivity charts are dynamic, in that all precedent variables are perturbed together in a simultaneous fashion. The effects of autocorrelations, cross-correlations, and interactions are all captured in the resulting sensitivity chart. Therefore, a tornado analysis is done before a simulation, whereas a sensitivity analysis is done after a simulation. Please review the previous chapter for more theoretical details on tornado and sensitivity analysis before proceeding.
CREATING A TORNADO AND SPIDER CHART

To run this model, simply:

1. Go to the Black-Scholes Model worksheet and select the Black-Scholes result (cell E13).
2. Select Risk Simulator | Tools | Tornado Analysis (or click on the Tornado Chart icon).
3. Check that the software's intelligent naming is correct for the precedent values.
4. Change the upper and lower test values from 10% to 80% on all variables and click OK (Figure 136.1).
FIGURE 136.1 Setting up a Tornado analysis
Interpreting the Results

The report generated illustrates the sensitivity table (the starting base value of the key variable as well as the perturbed values and the precedents). The precedent with the highest impact (range of output) is listed first. The tornado chart illustrates this analysis graphically. The spider chart performs the same analysis but also accounts for nonlinear effects; that is, if the input variables have a nonlinear effect on the output variable, the lines on the spider chart will be curved. Notice that the stock price and strike price have the highest effects (positive and negative, respectively) and that these inputs are nonlinear. Positively sloped lines indicate positive relationships, while negatively sloped lines indicate negative relationships. The steeper the slope of the line, the higher the impact on the bottom-line value. In this special case, where the option pricing model is nonlinear, you will see that the tornado chart is not symmetrical and the spider chart has a few lines that are curved. These are key characteristics of nonlinear models: a given percentage change in one variable will not always produce a proportional percentage change in the forecast result. To understand the variables in a model, it is important to identify such nonlinearities (Figure 136.2).
CREATING A SENSITIVITY CHART

To run this model, simply:

1. Create a new simulation profile (Risk Simulator | New Profile).
2. Set some assumptions on the inputs (with correlations) and set the output forecast.
3. Run the simulation (Risk Simulator | Run Simulation).
4. Select Risk Simulator | Tools | Sensitivity Analysis.
Interpreting the Results

Notice that if correlations are turned off, the results of the sensitivity chart are similar to those of the tornado chart. Now reset the simulation and turn on correlations (select Risk Simulator | Reset Simulation, then select Risk Simulator | Edit Profile and check Apply Correlations, then Risk Simulator | Run Simulation), and repeat the steps for creating a sensitivity chart (Figure 136.3). Notice that when correlations are applied, the resulting analysis may be slightly different due to the interactions among variables (Figure 136.4). With correlations turned on, Stock Price is reduced to a less prominent position because it is correlated to Dividend Yield, a less dominant variable. Note also that in the sensitivity chart, Volatility has a larger effect once the interactions among variables have all been accounted for. Volatility in real life is a key driver of option value, and the analysis here proves this fact.
FIGURE 136.2 Nonlinear and asymmetrical tornado chart
FIGURE 136.2 (Continued)
FIGURE 136.3 Dynamic sensitivity analysis charts
FIGURE 136.4 Dynamic sensitivity charts
137. Simulation—Basic Simulation Model

File Name: Simulation—Basic Simulation Model
Location: Modeling Toolkit | Simulation | Basic Simulation Model
Brief Description: Illustrating how to use Risk Simulator to run a Monte Carlo simulation, view and interpret forecast results, set seed values, set run preferences, extract simulation data, and create new simulation profiles and switch among them
Requirements: Modeling Toolkit, Risk Simulator
The model in the Static and Dynamic Model worksheet illustrates a very simple model with two input assumptions (revenue and cost) and an output forecast (income). The model on the left is a static model with single-point estimates; the model on the right is a dynamic model on which Monte Carlo assumptions and forecasts can be created. After running the simulation, the results can be extracted and further analyzed. In this model we can also learn to set different simulation preferences, run a simulation with error and precision controls, and set seed values. This should be the first model you run to get started with Monte Carlo simulation. Figure 137.1 shows this basic model, which is a very simplistic model of revenue minus cost to equal income. On the static model, the input and output values are unchanging or static. This means that if revenue is $2 and cost is $1, then the income must be $2 − $1, or $1. We replicate the same model on the right and call it a dynamic model, where we will run a simulation.
FIGURE 137.1 World’s simplest model
FIGURE 137.2 Risk Simulator toolbar
RUNNING A MONTE CARLO SIMULATION

To run this model, simply:

1. Select Risk Simulator | New Simulation Profile (or click on the New Profile icon in Figure 137.2) and provide it with a name.
2. Select cell G8 and click on Risk Simulator | Set Input Assumption (or click on the Set Input Assumption icon in Figure 137.2).
3. Select Triangular Distribution, set the Min = 1.50, Most Likely = 2.00, and Max = 2.25, and hit OK (Figure 137.3).
4. Select cell G9 and set another input assumption. This time use Uniform Distribution with Min = 0.85 and Max = 1.25.
5. Select cell G10 and set that cell as the output forecast by clicking on Risk Simulator | Set Output Forecast.
6. Select Risk Simulator | Run Simulation (or click on the Run icon in Figure 137.2) to start the simulation.
Viewing and Interpreting Forecast Results

The forecast chart is shown while the simulation is running; once the simulation is completed, the forecast chart can be used (Figure 137.4). The forecast chart has several tabs, including Histogram, Statistics, Preferences, and Options. Of particular interest are the first two tabs. For instance, the Histogram tab shows the output forecast's probability distribution in the form of a histogram, where specific values can be determined using the certainty boxes. For example, select Two-Tail, enter 90 in the certainty box, and hit Tab on the keyboard. The 90% confidence interval is shown (0.5273 and 1.1739), meaning that there is a 5% chance that the income will fall below $0.5273 and another 5% chance that it will be above $1.1739. Alternatively, you can select Left-Tail, enter 1.0 in the input box, and hit Tab to see that the left-tail certainty is 74.30%, indicating that there is a 74.30% chance that the income will fall below $1.0 (alternatively, there is a 25.70% chance that income will exceed $1.0).

The Statistics tab (Figure 137.5) illustrates the statistical results of the forecast variable. Refer to the Appendix on the different measures of risk for more details on how to interpret these statistics and their specific uses in risk analysis and risk management. Note that your results will not be exactly the same as those illustrated here, because a simulation (random number generation) was run; by definition, the results will not be exactly the same every time. However, if a seed value is set (see the next section), the results will be identical in every single run.
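As a cross-check, here is a minimal Python sketch of the same income model. The distribution parameters come from the steps above and the fixed seed mirrors the next section; the exact percentiles will differ slightly from the figures in the text because this is a different random number generator.

```python
# World's simplest simulation model: income = revenue - cost, with a fixed
# seed so repeated runs reproduce the same results.
import numpy as np

rng = np.random.default_rng(999)                  # seed value, as in the text
n = 10_000
revenue = rng.triangular(1.50, 2.00, 2.25, n)     # Triangular(min, most likely, max)
cost = rng.uniform(0.85, 1.25, n)                 # Uniform(min, max)
income = revenue - cost

lo, hi = np.percentile(income, [5, 95])           # two-tail 90% confidence interval
print(f"90% confidence interval: ({lo:.4f}, {hi:.4f})")
print(f"P(income < 1.0) = {np.mean(income < 1.0):.2%}")  # left-tail certainty
```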
FIGURE 137.3 Setting up an assumption
FIGURE 137.4 Forecast charts

Setting a seed value is important especially when you wish to obtain the same values in each simulation run (e.g., you need the live model to return the same results as a printed report during a presentation).
SETTING A SEED VALUE

1. Reset the simulation by selecting Risk Simulator | Reset Simulation.
2. Select Risk Simulator | Edit Simulation Profile.
3. Select the check box for random number sequence.
FIGURE 137.5 Statistics tab
4. Enter a seed value (e.g., 999) and hit OK (Figure 137.6).
5. Run the simulation and note the results; then rerun the simulation a few times and verify that the results are the same as those obtained earlier.

Note that the random number sequence or seed number has to be a positive integer value. Running the same model with the same assumptions and forecasts, an identical seed value, and the same number of trials will always yield the same results. The number of simulation trials to run can be set in the same run properties box.
FIGURE 137.6 Setting a random number sequence seed
FIGURE 137.7 Data extraction
SETTING RUN PREFERENCES

The run preferences dialog box allows you to specify the number of trials to run in a particular simulation. In addition, the simulation can be paused if a computational error in Excel is encountered (e.g., #NUM or #ERROR). Correlations can also be specified between pairs of input assumptions (Figure 137.3), and if Turn on Correlations is selected, these specified correlations will be applied in the simulation.
EXTRACTING SIMULATION DATA

The simulation's assumption and forecast data are stored in memory until the simulation is reset or Excel is closed. If required, these raw data can be extracted into a separate Excel spreadsheet. To extract the data, simply:

1. Run the simulation.
2. After the simulation is completed, select Risk Simulator | Tools | Extract Data.
3. Choose the relevant assumptions or forecasts to extract and click OK (Figure 137.7).
FIGURE 137.8 Switching among profiles
CREATING NEW AND SWITCHING AMONG SIMULATION PROFILES

The same model can have multiple simulation profiles in Risk Simulator. That is, different users of the same model can create their own simulation assumptions, forecasts, run preferences, and so forth. All these preferences are stored in separate simulation profiles, and each profile can be run independently. This powerful feature allows multiple users to run the same model their own way, or allows the same user to run the model under different simulation conditions, thereby allowing for scenario analysis on Monte Carlo simulation. To create different profiles and switch among them, simply:

1. Create several new profiles by clicking on Risk Simulator | New Simulation Profile.
2. Add the relevant assumptions and forecasts, or change the run preferences as desired, in each simulation profile.
3. Switch among the profiles by clicking on Risk Simulator | Change Active Simulation (Figure 137.8).

Please refer to Part I for a more detailed description and sample getting-started exercises on running simulations.
138. Simulation—Best Surgical Team

File Name: Simulation—Best Surgical Team
Location: Modeling Toolkit | Simulation | Best Surgical Team
Brief Description: Illustrating how to use Risk Simulator to simulate the chances of a successful surgery and understand the distribution of successes, using the Distribution Analysis tool to compute the cumulative distribution function (CDF) and the inverse CDF (ICDF) of the population distribution, and using Bootstrap Simulation to obtain the statistical confidence of success rates
Requirements: Modeling Toolkit, Risk Simulator
In this model, we analyze six hospitals and look at their respective successes and failures for a specific type of surgical procedure. This historical information is tabulated in the Model worksheet and is summarized in Figure 138.1. For instance, Boston General Hospital had 1,254 surgical cases of a specific nature within the past five years; of those cases, 1,024 were considered successful while 230 were not (based on predefined requirements for failures and successes). Therefore, the sample success rate is 81.66%. The problem is that this is only a small sample and is not indicative of the entire population (i.e., all possible outcomes).

In this example, we simply assume that history repeats itself. Because there are dozens of surgeons performing the same surgery and the tenure of a surgeon extends multiple years, we assume this is adequate and that any small change in staffing does not necessarily create a significant change in the success rates. Otherwise, more data should be collected, and multivariate regressions and other econometric modeling should be performed to determine whether the composition of surgeons and other medical practitioners contributes in a statistically significant way to the rates of success. Here, we use simulation as well as distributional and probabilistic models to predict the successful outcomes, as will be seen next. In addition, jump diffusion or new medical technology advances might also contribute to a higher success rate. The advent of these technologies and their corresponding contributions to success rates can be modeled using the custom distribution, through solicitation from subject matter experts in the area.
FIGURE 138.1 Historical surgical success rates
Nonetheless, for this example, we look at using historical data to forecast the potential current success rate and to determine the entire population's distribution of success rates.
MONTE CARLO SIMULATION

Success rates are distributed following a beta distribution, where the alpha parameter is the number of successes in a sample and the beta parameter is the number of failures. For instance, the Boston hospital's distribution of success rates is a Beta (1024, 230). Using Risk Simulator's set input assumption (Figure 138.2), you can see that the theoretical mean is 0.8166, equivalent to the sample success rate of 81.66% computed previously. Just knowing the sample mean is insufficient. By using simulation, we can now reconstitute the entire population based on the sample success rate. For instance, running a simulation on the model yields the forecast results shown in Figures 138.3 and 138.4 for this hospital. After running 1,000 trials, the expected value is 81.66%. In addition, we now have a slew of statistics associated with the success rate at Boston General. The theoretical success rate is between 77.88% and 85.24% and is fairly symmetrical, with a skew close to zero (Figure 138.4). Further, the 90% confidence puts the success rate at between 79.90% and 83.41% (Figure 138.3). You can now perform a comparison among all the hospitals surveyed and see which hospital is the safest place to have the procedure done.
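A quick sketch of the same idea outside Risk Simulator, using NumPy's beta sampler (the seed and trial count are arbitrary choices), reproduces the mean and the 90% confidence interval:

```python
# Reconstituting the population success-rate distribution for Boston General:
# successes = 1024, failures = 230, so rates follow Beta(1024, 230).
import numpy as np

rng = np.random.default_rng(7)
rates = rng.beta(1024, 230, size=100_000)

print(f"mean success rate : {rates.mean():.2%}")            # ~81.66%
lo, hi = np.percentile(rates, [5, 95])
print(f"90% confidence    : ({lo:.2%}, {hi:.2%})")          # ~(79.9%, 83.4%)
print(f"min/max simulated : ({rates.min():.2%}, {rates.max():.2%})")
```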
PROCEDURES FOR RUNNING MONTE CARLO SIMULATION

This model already has assumptions and forecasts predefined for you. To run the predefined model, simply:

1. Go to the Model worksheet, click on Risk Simulator | Change Profile, and select the Best Surgical Team profile.
2. Click on the RUN icon or select Risk Simulator | Run Simulation.
3. Select the forecast chart of your choice (e.g., Boston) and use the Two-Tail analysis. Type 90 in the Certainty box and hit TAB on the keyboard.
4. You can enter other certainty values and hit TAB to obtain other confidence results. Also, click on the Statistics tab to view the distributional statistics.

To re-create the model's assumptions and forecasts, simply:

1. Go to the Model worksheet, click on Risk Simulator | New Profile, and give it a new profile name of your choice.
2. Select cell G4 and click on Risk Simulator | Set Input Assumption. Select the BETA distribution, enter the relevant parameters (e.g., type in 1024 for Alpha and 230 for Beta), and click OK. Repeat for the other cells in G5 to G9.
FIGURE 138.2 Set input assumption
FIGURE 138.3 Boston General Hospital forecast output

3. Select H4, click on Risk Simulator | Set Output Forecast, and give it a relevant name (e.g., Boston); repeat by setting the rest of the cells in H5 to H9 as forecast cells.
4. Click on the RUN icon or select Risk Simulator | Run Simulation.
5. Select the forecast chart of your choice (e.g., Boston) and use the Two-Tail analysis. Type 90 in the Certainty box and hit TAB on the keyboard.
6. You can enter other certainty values and hit TAB to obtain other confidence results. Also, click on the Statistics tab to view the distributional statistics.
FIGURE 138.4 Boston General Hospital forecast statistics
DISTRIBUTION ANALYSIS

In addition, because we are now able to reconstitute the entire distribution, we can use this information to compute and view the various confidence levels and their corresponding success rates using the Distribution Analysis tool. For instance, selecting the beta distribution, entering the relevant input parameters, and choosing the cumulative distribution function, we can now see that, say, an 81.5% success rate has a CDF of about 43.6% (Figure 138.5). This means that you have a 43.6% chance that the success rate is less than or equal to 81.5%, or a 56.4% chance that your success rate will be at least 81.5%, and so forth. These values are theoretical. An alternative approach is to use the forecast chart and compute the left-tail probability (CDF). Here the empirical results show that the probability of getting less than an 81.5% success rate is 43.9% (Figure 138.6), close to the theoretical value. Clearly the theoretical results are more reliable and accurate; nonetheless, the empirical results approach the theoretical when enough trials are run in the simulation. Conversely, you can compute the inverse CDF (ICDF) of the distribution.
FIGURE 138.5 Distributional analysis tool
FIGURE 138.6 Empirical results
For instance, the 10% CDF shows an ICDF value of 80.24%, which means that you have a 10% chance of getting a success rate below or equal to 80.24% or, from a more positive point of view, a 90% chance that the success rate is at least 80.24% (Figure 138.7).
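The same theoretical CDF and ICDF values can be verified in a few lines with scipy's beta distribution (the exact digits in the text come from Risk Simulator's Distribution Analysis tool):

```python
# Theoretical CDF and ICDF of the Beta(1024, 230) success-rate distribution,
# mirroring the Distribution Analysis tool's exact probabilities.
from scipy.stats import beta

dist = beta(1024, 230)
print(f"CDF at 81.5% success rate : {dist.cdf(0.815):.1%}")   # ~43.6%
print(f"ICDF at 10% probability   : {dist.ppf(0.10):.2%}")    # ~80.24%
```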
PROCEDURES FOR RUNNING A DISTRIBUTION ANALYSIS

1. Start the Distribution Analysis tool (click on Risk Simulator | Tools | Distribution Analysis).
2. Select the beta distribution, enter 1024 for the alpha and 230 for the beta parameter, and select CDF as the analysis type. Then select Range of Values, enter 0.8 and 1 as the lower and upper bounds with a step of 0.005, and click RUN.
3. Now change the analysis type to ICDF and the bounds to between 0.05 and 1 with a step size of 0.005, and click RUN.
BOOTSTRAP SIMULATION

A bootstrap simulation can also be run on the resulting forecast. For instance, after running a simulation on the model, a bootstrap is run on Boston General on its mean, maximum, and minimum values (Figure 138.8). The results indicate, with 90% statistical confidence, that the average success rate of the true population is between 81.6% and 81.7%, and that the lowest or minimum success rate you can be assured of, at 90% confidence, is between 77.8% and 78.5%. In other words, with 90% confidence, the lowest and most conservative success rate is between 77.8% and 78.5%.
FIGURE 138.7 CDF and ICDF of exact probabilities
For additional exercise, change the confidence levels to 95% and 99% and analyze the results. Repeat for all the other hospitals.
PROCEDURES FOR BOOTSTRAP SIMULATION

To run a bootstrap simulation, follow these steps:

1. Run a simulation based on the steps outlined earlier. You cannot run a bootstrap unless you first run a simulation.
2. Click on Risk Simulator | Tools | Nonparametric Bootstrap, select the hospital of your choice (e.g., Boston), select the statistics of your choice (e.g., mean, maximum, minimum), enter 5000 as the number of bootstrap trials, and click OK.
3. Select Two-Tail on the statistics forecast chart of your choice (e.g., Mean), type 90 in the Certainty box, and hit TAB on the keyboard to obtain the 90% statistical confidence interval of that statistic. Repeat for other statistics, or use different tails and confidence levels (Figures 138.9 and 138.10).
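Conceptually, the nonparametric bootstrap simply resamples the simulated forecast with replacement and recomputes the statistic of interest on each resample. A minimal sketch under stated assumptions (the forecast data here are hypothetical, simulated from the same Beta(1024, 230) assumption):

```python
# Nonparametric bootstrap sketch: resample the simulated forecast values with
# replacement and collect a statistic (here the mean) from each resample to
# build its sampling distribution and a 90% confidence interval.
import numpy as np

rng = np.random.default_rng(11)
forecast = rng.beta(1024, 230, size=1000)        # stands in for the simulated forecast

boot_means = np.array([
    rng.choice(forecast, size=forecast.size, replace=True).mean()
    for _ in range(5000)                          # 5,000 bootstrap trials, as in step 2
])
lo, hi = np.percentile(boot_means, [5, 95])
print(f"90% confidence for the mean: ({lo:.2%}, {hi:.2%})")
```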
FIGURE 138.8 Bootstrap simulation tool
FIGURE 138.9 Bootstrapped mean values
FIGURE 138.10 Bootstrapped minimum values
139. Simulation—Correlated Simulation

File Name: Simulation—Correlated Simulation
Location: Modeling Toolkit | Simulation | Correlated Simulation
Brief Description: Illustrating how to use Risk Simulator to run a simulation with correlated variables and to extract data and determine the final correlations
Requirements: Modeling Toolkit, Risk Simulator
This model illustrates how to set up basic correlated variables in a Monte Carlo simulation, extract the data, and confirm that the resulting correlations closely match the input correlations in the model.
CORRELATED MODEL

Simply open the existing profile, run the simulation, extract the data, and compute a correlation matrix. The results for 5,000 trials are provided in this example. Note that the input pairwise correlations were all 0.50.
FIGURE 139.1 Example correlated simulation model
The resulting correlations computed from the extracted data are very close to the required input correlations. Figure 139.1 illustrates the sample correlated model, and Figure 139.2 shows the simulated results that have been extracted. A correlation analysis was run on the results using Excel, and it indicates that the empirical simulated data match the correlations required and entered in the model. This model therefore illustrates how correlations can be inputted in a simulation model; the next chapter shows the effects of correlations on a model, specifically on its risk and return profile.

For the sake of completeness, Figure 139.3 shows how pairwise correlations can be set in Risk Simulator. Simply select any cell with a predefined assumption, click on Risk Simulator | Set Input Assumption, and enter the correlations in the set assumption dialog box (see the highlighted section in Figure 139.3). Be aware that entering correlations this way requires manual pairwise inputs. That is, suppose there are three assumptions in the model: A, B, and C. There will be three pairs of correlations: AB, AC, and BC. In order to insert these three pairwise correlations, you have to select variable A and manually type in the correlations for AB and AC (inserting a correlation between A and B within the A variable automatically updates variable B to show this same correlation back to variable A), and then select variable B to correlate it to variable C. Alternatively, Figure 139.4 shows how all pairwise correlations can be entered in a matrix. This matrix is available once assumptions have been set, by going to Risk Simulator | Edit Correlations. You can also select a predefined correlation matrix in Excel and click on the Paste button in the edit correlations dialog to update the matrix.
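Under the hood, one common way to induce such pairwise correlations is a Gaussian copula: draw correlated standard normals from the target correlation matrix and map them through the desired marginal distributions. The sketch below illustrates the technique under that assumption (the marginal distributions are hypothetical, and this is not necessarily Risk Simulator's internal algorithm):

```python
# Gaussian copula sketch: draw correlated standard normals from the target
# correlation matrix (0.50 pairwise, as in this model), map them to uniforms,
# then to arbitrary marginals; finally verify the resulting correlations.
import numpy as np
from scipy.stats import norm, triang, uniform

rho = 0.50
target = np.full((3, 3), rho)
np.fill_diagonal(target, 1.0)

rng = np.random.default_rng(3)
z = rng.multivariate_normal(mean=np.zeros(3), cov=target, size=5000)
u = norm.cdf(z)                                   # correlated uniforms

# Hypothetical marginal distributions for the three assumptions
a = uniform(loc=0.85, scale=0.40).ppf(u[:, 0])
b = triang(c=0.5, loc=1.5, scale=0.75).ppf(u[:, 1])
c = uniform(loc=9.0, scale=2.0).ppf(u[:, 2])

print(np.round(np.corrcoef([a, b, c]), 3))        # close to 0.50 off-diagonal
```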
FIGURE 139.2 Computed empirical correlations
FIGURE 139.3 Setting pairwise correlations one at a time
FIGURE 139.4 Setting pairwise correlations in a matrix
140. Simulation—Correlation Effects on Risk

File Name: Simulation—Correlation Effects Model
Location: Modeling Toolkit | Simulation | Correlation Effects Model
Brief Description: Illustrating how to use Risk Simulator to create correlated simulations and compare the results between correlated and uncorrelated models, as well as to extract and manually compute (verify) the correlations of the assumptions
Requirements: Modeling Toolkit, Risk Simulator
This model illustrates the effects of correlated simulation compared to uncorrelated simulation; that is, when a pair of simulated assumptions is uncorrelated, positively correlated, or negatively correlated, the results can sometimes be very different. In addition, the raw data of the simulated assumptions are extracted after the simulation, and manual computations of their pairwise correlations are performed. The results indicate that the correlations hold after the simulation.
RUNNING A MONTE CARLO SIMULATION

To run this model, simply:

1. Click on the Correlation model and replicate the assumptions and forecasts.
2. Run the simulation by clicking on the Run icon or Risk Simulator | Run Simulation.
Viewing and Interpreting Forecast Results

The resulting simulation statistics indicate that the negatively correlated variables provide a tighter or smaller standard deviation, or overall risk level, on the model (see Figure 140.1). This relationship exists because negative correlations provide a diversification effect on the variables and hence tend to make the standard deviation slightly smaller. Thus, we need to make sure to input correlations when there indeed are correlations between variables; otherwise this interacting effect will not be accounted for in the simulation.

The positive correlation model has a larger standard deviation, as a positive correlation tends to make both variables travel in the same direction, which widens the extreme ends and hence increases the overall risk. Therefore, the model without any correlations has a standard deviation between those of the positive and negative correlation models. Notice that the expected value or mean does not change much. In fact, if sufficient simulation trials are run, the theoretical and empirical values of the mean remain the same: the first moment (central tendency) does not change with correlations; only the second moment (spread) changes.
FIGURE 140.1 Effects on forecast statistics when variables are correlated (positive, zero, and negative correlations)
FIGURE 140.1 (Continued)
Note that this characteristic exists only in simple models with a positive relationship. A Price × Quantity model (as is a Price + Quantity model) is considered a "positive" relationship model, where a negative correlation decreases the range and a positive correlation increases the range. The opposite is true for negative relationship models. For instance, Price ÷ Quantity or Price − Quantity would be a negative relationship model, where a positive correlation reduces the range of the forecast variable and a negative correlation increases the range. Finally, for more complex models (e.g., larger models with multiple variables interacting through positive and negative relationships and sometimes with positive and negative correlations), the results are hard to predict and cannot be determined theoretically; only by running a simulation can the true range of outcomes be determined. In such a scenario, tornado analysis and sensitivity analysis would be more appropriate.
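The diversification effect is easy to reproduce: simulate a two-variable "positive relationship" model (a simple sum) under positive, zero, and negative correlation and compare the moments. A minimal sketch, with arbitrary normal assumptions:

```python
# Demonstrating the correlation effect on spread: the mean of a sum is
# unaffected by correlation, but its standard deviation is not.
import numpy as np

rng = np.random.default_rng(5)
n = 50_000
for rho in (0.5, 0.0, -0.5):
    cov = [[1.0, rho], [rho, 1.0]]
    a, b = rng.multivariate_normal([10, 10], cov, size=n).T
    total = a + b                       # a simple "positive relationship" model
    print(f"rho={rho:+.1f}: mean={total.mean():6.2f}, stdev={total.std():.3f}")
# Expected: identical means; stdev = sqrt(2 + 2*rho) -> 1.732, 1.414, 1.000
```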
141. Simulation—Data Fitting

File Name: Simulation—Data Fitting
Location: Modeling Toolkit | Simulation | Data Fitting
Brief Description: Illustrating how to use Risk Simulator to fit a single variable to existing data, fit multiple variables to existing data, and simulate, extract data, and refit to distributions
Requirements: Modeling Toolkit, Risk Simulator
This example illustrates how existing sample data can be used to find the statistically best-fitting distribution. By doing so, we also confirm the simulation results through the distributional fitting routine (i.e., we simulate a particular distribution, extract its raw data, and refit the simulated data back to all distributions).
RUNNING A SINGLE FIT

To run this model, simply:

1. Go to the Raw Data worksheet in the model and select cells C2:C201.
2. Click on Risk Simulator | Tools | Distributional Fitting (Single Variable).
3. Make sure Fit Continuous Distributions is selected and all distributions are checked, then click OK (Figure 141.1).
4. The resulting fit of all distributions is shown. Select the best fit (ranked first), view the statistics, and click OK (Figure 141.2).
5. A report will be generated indicating all the relevant statistics as well as the data used for fitting (for future reference). Note that if a profile exists and the Automatically Generate Assumption option is selected (Figure 141.2), the report will contain an assumption matching the best fit; otherwise, only the type of distribution and its relevant input assumptions are provided. You can repeat this exercise on the remaining data points provided.
Notes

You can replicate the fitting routine on multiple variables simultaneously (the variables must be arranged in contiguous columns) instead of one at a time: select the area C2:E201 and run the multiple-variable fitting routine by selecting Risk Simulator | Tools | Distributional Fitting (Multi-Variable). However, there are several key points to remember. First, more data implies a better statistical fit; do not fit to very few data points and expect a good fit.
FIGURE 141.1 Fitting to multiple distributions Second, only positive discrete data (integers) can be fitted to discrete distributions. When negative values or continuous data exist, you should always fit to continuous distributions (first transpose them to the positive region). Third, certain distributions are related to other distributions through their statistical properties. For example, a t-distribution becomes a normal distribution when degrees of freedom is high; a Poisson distribution can be used to approximate a binomial; or a normal can be used to approximate a Poisson, hypergeometric, and binomial. In addition, the beta and gamma distributions are highly flexible. For instance, in the beta distribution, if the two inputs parameters, alpha and beta, are equal, the distribution is symmetrical. If either parameter is 1 while the other parameter is greater than 1, the distribution is triangular or J-shaped. If alpha is less than beta, the distribution is said to be positively skewed (most of the values are near the minimum value). If alpha is greater than beta, the distribution is negatively skewed (most of the values are near the maximum value). There are many other such relationships. Just because the fit is not exactly to the distribution expected does not mean the data are bad or the routine is incorrect. It simply means that another distribution is better suited for the data. When performing a multiple variable fitting, make sure to select the right distribution type for the variables (e.g., continuous for the first two and discrete for the third distribution in our model). Further, do not select the variable name when fitting; instead, enter the variable name in the fitting dialog box. Also, if you have an existing simulation profile (Risk Simulator | New Simulation Profile), you can
FIGURE 141.2 Distributional fitting results
These assumptions will also include correlations. The question then is what correlation coefficient is significant for this data set (i.e., is a correlation of 0.0151 significant or merely a residual effect of randomness that should not be incorporated? What about 0.016 or 0.15?). Risk Simulator automatically computes the statistical significance cutoff level (in this case, any correlation above an absolute value of 0.0910 is statistically significant). If you select this option, the software will ignore all correlations below this significance level. Please see Figure 141.3.
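As a rough stand-in for the single-variable fitting routine, the sketch below fits a handful of candidate continuous distributions by maximum likelihood and ranks them by the Kolmogorov-Smirnov statistic (one of several goodness-of-fit measures such a tool might use; the data column here is hypothetical):

```python
# Single-variable distributional fitting sketch: fit several candidate
# continuous distributions by maximum likelihood and rank by the
# Kolmogorov-Smirnov statistic (a simplification of the tool's ranking).
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
data = rng.normal(loc=100, scale=15, size=200)   # hypothetical raw data column

candidates = ["norm", "lognorm", "gamma", "uniform", "expon"]
fits = []
for name in candidates:
    dist = getattr(stats, name)
    params = dist.fit(data)                       # maximum likelihood estimates
    ks = stats.kstest(data, name, args=params)
    fits.append((ks.statistic, ks.pvalue, name))

for ks_stat, p, name in sorted(fits):             # best (smallest KS) first
    print(f"{name:>8}: KS = {ks_stat:.4f}, p-value = {p:.3f}")
```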
FIGURE 141.3 Multiple variables simultaneous fitting
142. Simulation—Debt Repayment and Amortization

File Name: Simulation—Debt Repayment and Amortization
Location: Modeling Toolkit | Simulation | Debt Repayment and Amortization
Brief Description: Illustrating how to use Risk Simulator to simulate interest rates in a mortgage and amortization schedule to determine the potential savings on interest payments if additional payments are made each period and when the interest rates can become variable and unknown over time
Requirements: Modeling Toolkit, Risk Simulator
This is an amortization model examining a debt repayment schedule. In this example, we look at a 30-year mortgage with a portion of that period on a fixed interest rate and a subsequent period of variable rates with minimum and maximum caps.
This model illustrates how the mortgage or debt is amortized and paid off over time, resulting in a final value of zero at the end of the debt's maturity. Further, this model allows for an additional periodic (monthly, quarterly, semiannual, or annual) payment, which reduces the total amount of payments, the total interest paid, and the length of time it takes to pay off the loan. Notice that initially the principal paid off is low but increases over time; because of this, the interest portion of the loan is initially high but decreases over time as the principal is paid off. The required input parameters of the model are highlighted in boxes, and an assumption on the uncertain interest rate is set in cell D8, with a corresponding forecast cell at I12. By entering additional payments per period, you can significantly reduce the term of the mortgage (i.e., pay it off faster) at a much lower total payment (you end up saving a lot on interest payments). A second forecast cell is set on J12 to find the number of years it takes to pay off the loan if additional periodic payments are made.
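The amortization mechanics themselves are straightforward to sketch. The version below assumes a single fixed rate (the model's fixed-then-variable structure, rate caps, and simulated rate assumption are omitted) and hypothetical principal and rate values, reporting the interest saved and the shortened payoff from a $500 extra monthly payment:

```python
# Amortization sketch: fixed-rate schedule with an optional extra monthly
# payment, reporting total interest and payoff time (simplified: a single
# fixed rate rather than the model's fixed-then-variable structure).
def amortize(principal, annual_rate, years, extra=0.0):
    r = annual_rate / 12
    payment = principal * r / (1 - (1 + r) ** (-years * 12))
    balance, interest, months = principal, 0.0, 0
    while balance > 0:
        i = balance * r                       # interest accrued this month
        pay = min(payment + extra, balance + i)
        balance -= pay - i
        interest += i
        months += 1
    return interest, months / 12

base_int, base_yrs = amortize(250_000, 0.06, 30)            # hypothetical loan
extra_int, extra_yrs = amortize(250_000, 0.06, 30, extra=500)
print(f"interest saved: ${base_int - extra_int:,.0f}")
print(f"payoff: {base_yrs:.1f} -> {extra_yrs:.1f} years")
```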
PROCEDURE

You can either change the assumptions or keep the existing assumptions and run the simulation:

1. Click on Risk Simulator | Change Profile, select the Debt Repayment and Amortization profile, and click OK.
2. Run the simulation by clicking on Risk Simulator | Run Simulation.
Interpretation

The resulting forecast charts (Figure 142.1) for the example inputs indicate that paying an additional $500 per month can save the mortgage holder at least $136,985 (minimum) and at most $200,406 (maximum) in interest, given the assumed uncertainty fluctuations of interest rates during the variable-rate period. The 90% confidence level can also be obtained, meaning that 90% of the time, the total interest saved is between $145,749 and $191,192. In addition, the Payoff Year with Extra Payment forecast chart (Figure 142.1) shows that, given the expected fluctuations of interest rates and the additional payments made per period, there is a 90% chance that the mortgage will be paid off between 20.9 and 21.8 years, with an average of 21.37 years (mean value). The quickest payoff is 20.67 years (minimum); if interest rates are on the high end, the payoff may take up to 22 years (maximum).
FIGURE 142.1 Interest paid and payoff year on long-term debt
FIGURE 142.1 (Continued)
143. Simulation—Demand Curve and Elasticity Estimation

File Name: Simulation—Demand Curve and Elasticity Estimation
Location: Modeling Toolkit | Simulation | Demand Curve and Elasticity Estimation
Brief Description: Illustrating how to use Risk Simulator to determine the price elasticity of demand from a historical set of price and quantity-sold data for linear and nonlinear functions
Requirements: Modeling Toolkit, Risk Simulator
This model illustrates how the price elasticity of demand can be computed from historical sale price and quantity sold. The resulting elasticity measure is then used in the Optimization—Optimal Pricing with Elasticity model to determine the optimal prices to charge for a product in order to maximize total revenues. That is, this model finds the optimal pricing levels that maximize revenues through the use of historical elasticity levels.

The price elasticity of demand is a basic concept in microeconomics; it can be briefly described as the percentage change in quantity demanded divided by the percentage change in price. For example, if, in response to a 10% fall in the price of a good, the quantity demanded increases by 20%, the price elasticity of demand would be 20%/(−10%) = −2. In general, a fall in the price of a good is expected to increase the quantity demanded, so the price elasticity of demand is negative. In some literature, the negative sign is omitted for simplicity (denoting only the absolute value of the elasticity).

We can model or value elasticity in several ways, including the point elasticity, obtained by taking the first derivative of the inverse of the demand function and multiplying it by the ratio of price to quantity at a particular point on the demand curve:

$$\varepsilon_d = \frac{\delta Q}{\delta P} \cdot \frac{P}{Q}$$

where $\varepsilon_d$ is the price elasticity of demand, $Q$ is the quantity demanded, and $P$ is the price. Instead of using instantaneous point elasticities, this example uses the discrete version, where we define elasticity as:

$$\varepsilon_d = \frac{\%\Delta Q}{\%\Delta P} = \frac{Q_2 - Q_1}{(Q_2 + Q_1)/2} \div \frac{P_2 - P_1}{(P_2 + P_1)/2} = \frac{Q_2 - Q_1}{Q_2 + Q_1} \cdot \frac{P_2 + P_1}{P_2 - P_1}$$
To further simplify things, we assume that within a product category with several classes (e.g., hotel rooms, cruise ship tickets, or airline tickets, with classes such as standard room, executive room, and suite), there is an average price and an average quantity of units sold per period. Therefore, we can further simplify the equation to:

$$\varepsilon_d = \frac{Q_2 - Q_1}{\bar{Q}} \div \frac{P_2 - P_1}{\bar{P}} = \frac{(Q_2 - Q_1)/\bar{Q}}{(P_2 - P_1)/\bar{P}}$$

where we now use the average price and average quantity demanded, $\bar{P}$ and $\bar{Q}$. If we have the average price and average quantity sold in each category, as in the model, we can compute the expected quantity sold given a new price, provided we have the historical elasticity of demand values.

The example model (Figure 143.1) shows how a nonlinear price-quantity relationship yields a static and unchanging elasticity, whereas a linear price-quantity relationship typically yields changing elasticities over the entire price range. Nonlinear extrapolation and linear regression are run to determine whether the price-quantity relationship is linear or nonlinear. For the nonlinear relationship, a loglinear regression can be run to determine the single elasticity value (Figure 143.2). Elasticities of a linear relationship have to be determined at every price level using the previous equations.
FIGURE 143.1 Nonlinear demand curve with a static elasticity versus linear demand curve with changing elasticity
FIGURE 143.2 Nonlinear demand curve’s elasticity estimated using a nonlinear regression
144. Simulation—Discounted Cash Flow, Return on Investment, and Volatility Estimates

File Name: Simulation—DCF, ROI and Volatility
Location: Modeling Toolkit | Simulation | DCF, ROI and Volatility
Brief Description: Illustrating how to use Risk Simulator to run a Monte Carlo simulation in a financial return on investment (ROI) analysis, incorporate various correlations among variables (time-series and cross-sectional relationships), and calculate forward-looking volatility based on the present value asset approach
Requirements: Modeling Toolkit, Risk Simulator
This is a generic discounted cash flow (DCF) model showing the net present value (NPV), internal rate of return (IRR), and return on investment (ROI) calculations. This model can be adapted to any industry and is used to illustrate how a Monte Carlo simulation can be applied to actual investment decisions and capital budgeting. In addition, the project’s or asset’s volatility is also computed in this example model. Volatility is a measure of risk and is a key input into financial option and real options analysis.
PROCEDURE

To run this model, simply click on Risk Simulator | Run Simulation. However, if you wish, you can create your own simulation model:

1. Start a new simulation profile and add various assumptions on the price and quantity inputs.
2. Remember to add correlations between the different years of price and quantity (autocorrelation).
3. Continue to add correlations between price and quantity (cross-correlations).
4. Add forecasts to the key outputs (NPV and the Intermediate X Variable, which is used to obtain the project's volatility).
Results Interpretation

If you run the existing simulation profile, you will obtain two forecast charts: the NPV and Intermediate X forecasts.
A. From the NPV forecast chart (Figure 144.1), select Two-Tail, enter 95 in the Certainty box, and hit Tab on the keyboard. The results show that the 95% confidence interval for the NPV falls between $2,429 and $3,485. If you click on the Statistics tab, the expected value or mean shows the value $2,941. Alternatively, select Left-Tail, enter 5 in the Certainty box, and hit Tab. The results indicate that the 5% worst-case scenario still yields a positive NPV of $2,488, indicating that this is a good project. You can now compare this with other projects to decide which project should be chosen. Finally, because the probabilities of the left and right tails in the same forecast distribution are complementary, if you select Right-Tail, enter 2.5 in the Certainty box, and hit Tab, the result is $3,485, identical to the 95% confidence level's right-tail value (a 95% confidence interval means there is a 2.5% probability in each tail).
FIGURE 144.1 NPV forecast results
FIGURE 144.1 (Continued)
B. The Intermediate X Variable is used to compute the project's volatility. See the chapter on volatility models for details on computing the volatility of projects using several different methods. To obtain a volatility measure, first modify the model as appropriate and then click on Set Static CF in the model (near cell B46) to generate a static cash flow. Then run the simulation and view the Intermediate Variable X's forecast chart (Figure 144.2). Go to the Statistics tab and obtain the standard deviation, then annualize it by multiplying the value by the square root of the number of periods per year.
FIGURE 144.2 Forecast results of the X variable
In this case, the annualized volatility is 6.45%, as the periodicity of the cash flows in the model is annual; otherwise, multiply the standard deviation by the square root of 4 if quarterly data are used, or the square root of 12 if monthly data are used, and so forth. See the chapter on volatility modeling later in this book for the different volatility estimation techniques in more detail.
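The annualization step itself is a one-liner. The sketch below simply shows the multiplier for different periodicities, treating 0.0645 as the per-period standard deviation in each case purely for illustration:

```python
# Annualizing a per-period standard deviation: multiply by the square root
# of the number of periods per year.
import math

period_stdev = 0.0645   # per-period standard deviation (annual in the text's example)
for periods_per_year, label in [(1, "annual"), (4, "quarterly"), (12, "monthly")]:
    print(f"{label:>9} cash flows: annualized = "
          f"{period_stdev * math.sqrt(periods_per_year):.2%}")
```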
145. Simulation—Infectious Diseases

File Name: Simulation—Infectious Diseases
Location: Modeling Toolkit | Simulation | Infectious Diseases
Brief Description: Illustrating how to build an epidemic disease model and use Risk Simulator to run a Monte Carlo simulation on the epidemic model to determine survival probabilities
Requirements: Modeling Toolkit, Risk Simulator
Remember the science fiction movies where a terrorist cell sets off a chemical or biological weapon in a major cosmopolitan area, and the heroes are in a high-tech situation room with the president and high-ranking military officials, looking at a large screen showing scenarios of how many people will perish within a day, within two days, and so forth? The model in this chapter briefly illustrates how an epidemic can spread within a population and how the uncertain input assumptions (as in any rampant disease) can be simulated (Figure 145.1). By simulating these assumptions, we can obtain the number of people who might perish if the disease becomes an epidemic. The input parameters include the total population size in the infected area, the probability of contact with an infected person, the initial number of infected individuals, and the potential healing rate. The numbers of susceptible individuals, infected individuals, and individuals removed from the infected list (cured) are shown over several time periods. The output forecast of interest is the total number of people who perish in the epidemic. An epidemic chart is included to help you visualize the spread of the disease over time. You can increase or decrease the contact probability and healing rate by clicking on the up/down spinners to see the effects on this chart (Figure 145.2) and on how quickly or slowly the disease spreads over time. The rest of the model is fairly self-explanatory.
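For readers who want the skeleton of such a model, here is a minimal discrete-time susceptible-infected-removed (SIR) sketch under simple assumptions (all parameter values are hypothetical, and the actual worksheet's formulas may differ):

```python
# A minimal discrete-time susceptible-infected-removed (SIR) sketch of the
# epidemic spreadsheet: contact probability drives new infections, and the
# healing rate moves people off the infected list (all inputs hypothetical).
def epidemic(population=10_000, infected=10, contact_prob=0.30,
             healing_rate=0.10, periods=50):
    susceptible, removed = population - infected, 0.0
    history = []
    for _ in range(periods):
        # New infections scale with contacts between infected and susceptible
        new_cases = contact_prob * infected * susceptible / population
        healed = healing_rate * infected
        susceptible -= new_cases
        infected += new_cases - healed
        removed += healed
        history.append((susceptible, infected, removed))
    return history

for t, (s, i, r) in enumerate(epidemic()[::10]):
    print(f"period {t * 10:3d}: susceptible={s:8.0f} infected={i:8.0f} removed={r:8.0f}")
```

In a simulation run, the contact probability and healing rate would be the uncertain input assumptions, and the final removed or perished count the output forecast.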
FIGURE 145.1 Simulating an infectious diseases epidemic model
FIGURE 145.2 Epidemic interactive model
146. Simulation—Recruitment Budget (Negative Binomial and Multidimensional Simulation)

File Name: Simulation—Recruitment Budget (Negative Binomial and Multidimensional Simulation)
Location: Modeling Toolkit | Simulation | Recruitment Budget (Negative Binomial and Multidimensional Simulation)
Brief Description: Illustrating how to use Risk Simulator to apply the Negative Binomial distribution to compute the required budget to obtain the desired outcomes given probabilities of failure, use the Distribution Analysis tool to determine the theoretical probabilities of success, and apply a Multidimensional Simulation approach where the inputs into a simulation are themselves uncertain
Requirements: Modeling Toolkit, Risk Simulator
The negative binomial distribution is useful for modeling the distribution of the number of additional trials required on top of the number of successful occurrences required (R). For instance, in order to close a total of 10 sales opportunities, how many extra sales calls would you need to make above 10 calls, given some probability of success in each call? The x-axis shows the number of additional calls required, or the number of failed calls. The number of trials is not fixed; the trials continue until the Rth success, and the probability of success is the same from trial to trial.
FIGURE 146.1 Sample negative binomial distribution

Probability of success (P) and number of successes required (R) are the distributional parameters. Such a model can be applied in a multitude of situations, such as the cost of cold sales calls, the budget required to train military recruits, and so forth. The simple example shown illustrates how a negative binomial distribution works. Suppose that a salesperson is tasked with making cold calls, and a resulting sale is considered a success while no sale means a failure. Suppose that historically the proportion of sales to all calls is 30%. We can model this using the negative binomial distribution by setting the successes required (R) equal to 2 and the probability of success (P) to 0.3. For instance, as illustrated in Figure 146.1, say the salesperson makes two calls. The success rate is 30% per call, and the success or failure of one call is statistically independent of another. There are four possible outcomes: SS, SF, FS, FF, where S stands for success and F for failure. The probability of SS is 0.3 × 0.3, or 9%; SF and FS are 0.3 × 0.7, or 21% each; and FF is 0.7 × 0.7, or 49%. (The total probability is 100%.) Therefore, there is a 9% chance that two calls are sufficient and no additional calls are required to get the two successes required. In other words, X = 0 has a probability of 9% if we define X as the additional calls required beyond the two calls. Extending this to three calls, we have many possible outcomes, but the key outcomes we care about are those with exactly two successful calls, which we can define as these combinations: SSS, SSF, SFS, and FSS, and their respective probabilities are computed (e.g., FSS is computed as 0.7 × 0.3 × 0.3 = 6.30%). Now, the sequences SSS and SSF do not require a third call because the first two calls have already been successful. Further, SFF, FSF, FFS, and FFF all fail the required two-call success, as they have either zero or one successful call. The total probability of the situations requiring a third call to obtain exactly two successful calls out of three is therefore 12.60%. Clearly, doing this exercise for a large combinatorial problem with many required successes is difficult and intractable. However, we can obtain the same results using Risk Simulator. For instance, creating a new profile, setting an input assumption, and choosing the negative binomial distribution with R = 2 and P = 0.3 provides the chart in Figure 146.2. Notice that the probability of X = 0 is exactly what we had computed, 9%, and X = 1 yields 12.60%. A brute-force enumeration, as sketched below, confirms the same arithmetic.
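Here is an illustrative check in plain Python (independent of Risk Simulator) that enumerates all three-call sequences and sums the probabilities of those needing the third call, reproducing the 12.60% figure:

```python
from itertools import product

P_SUCCESS = 0.3

# A third call is needed exactly when the first two calls contain one success
# and the third call delivers the second success.
prob_x_equals_1 = 0.0
for seq in product("SF", repeat=3):
    needs_third = seq[:2].count("S") == 1 and seq[2] == "S"
    if needs_third:
        p = 1.0
        for outcome in seq:
            p *= P_SUCCESS if outcome == "S" else (1 - P_SUCCESS)
        prob_x_equals_1 += p

print(f"P(X = 1) = {prob_x_equals_1:.4f}")  # 0.1260, matching the 12.60% above
```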
FIGURE 146.2 Negative binomial distribution
FIGURE 146.3 Negative binomial distribution's theoretical PDF properties

It is easier to use Risk Simulator's Distribution Analysis tool (Figure 146.3) to obtain the same results with a table of numerical values. If you choose the PDF type, you obtain the exact probabilities. That is, there is a 12.6% probability that you will need to make three calls (or one additional call on top of the required two calls), 13.23% with four calls, and so forth. If you select CDF (Figure 146.4), the cumulative probabilities will be computed. For instance, the CDF value when X = 1 is 9% + 12.6%, or 21.6%, which means that 21.6% of the time, making three calls (or X = 1 extra call beyond the required two) is sufficient to get two successful calls. To be, say, 80% sure, you should be prepared to make up to seven additional calls, or nine calls total, but you can stop once two successes have been achieved.
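For readers outside Excel, the same PDF/CDF table can be reproduced with scipy's negative binomial, which (like the text) counts the extra failed calls before the Rth success. This is an illustrative sketch, not the Modeling Toolkit's code:

```python
from scipy.stats import nbinom

R, P = 2, 0.3   # successes required and probability of success per call

# scipy's nbinom counts failures (extra calls) before the Rth success,
# matching the X axis in the text.
for x in range(4):
    print(f"P(X = {x}) = {nbinom.pmf(x, R, P):.4f}   CDF = {nbinom.cdf(x, R, P):.4f}")

# Smallest number of extra calls needed to be at least 80% sure of R successes:
extra = int(nbinom.ppf(0.80, R, P))
print(f"80% certainty requires {extra} extra calls ({R + extra} calls total)")
```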
PROCEDURE FOR COMPUTING PROBABILITIES IN A NEGATIVE BINOMIAL DISTRIBUTION

1. Start the Distribution Analysis tool (click on Risk Simulator | Tools | Distribution Analysis).
2. Select Negative Binomial from the Distribution drop list, and enter 2 for successes required and 0.3 for the probability of success; then select Range of Values, enter 0 and 10 for the Lower and Upper Bounds with 1 as the Step Size, and click RUN.
FIGURE 146.4 Negative binomial distribution’s theoretical CDF properties
MULTIDIMENSIONAL SIMULATION

The Model worksheet shows a simple recruitment budget example, where the task is to determine the total cost required to obtain the required number of qualified and trained recruits, when, in fact, only a small percentage of recruits will successfully complete the training course. The required weeks of training and cost per week are also used as inputs. The question is, how much money (budget) is required to train these recruits given these uncertainties? In Figure 146.5, G9 is a Risk Simulator assumption with a negative binomial distribution with R = 10 (linked to cell G4) and P = 0.25 (linked to cell G5). Cell G12 is the model's forecast. You can now run a simulation and obtain the distribution forecast of the required budget. In contrast, if the 25% rate is an unknown and uncertain input, you can set it as an assumption, make the value in cell G9 a link to the 25% cell, and check the box Enable Dynamic Simulation. This means that the value 25% is an assumption and changes with each trial, and the number of recruits failing is also an assumption but is linked to this changing 25% value. This is termed a multidimensional simulation model (Figure 146.6).
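Conceptually, the two layers of uncertainty can be sketched as follows; the training length and weekly cost below are hypothetical placeholders (the model's actual cell values may differ), so the dollar figures will only roughly echo the results quoted in the text.

```python
import numpy as np

rng = np.random.default_rng(7)
TRIALS = 100_000
R = 10                              # qualified recruits required
WEEKS, COST_PER_WEEK = 2, 2_500     # hypothetical training length and weekly cost

# Outer (dynamic) uncertainty: the success probability itself is unknown.
p = rng.uniform(0.20, 0.30, TRIALS)
# Inner uncertainty: extra (failed) recruits before the Rth success, given p.
failures = rng.negative_binomial(R, p)
budget = (R + failures) * WEEKS * COST_PER_WEEK

print(f"Expected budget: ${budget.mean():,.0f}")
lo, hi = np.percentile(budget, [5, 95])
print(f"90% confidence:  ${lo:,.0f} to ${hi:,.0f}")
```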
FIGURE 146.5 Recruitment model
The results show that, at a 90% confidence level, the required budget should be between $115,000 and $315,000 (Figure 146.7), with an expected value of $200,875 (Figure 146.8).
PROCEDURE FOR RUNNING A MULTIDIMENSIONAL SIMULATION

The model is also preset with assumptions and forecasts. To run this model:

1. Go to the Model worksheet and click on Risk Simulator | Change Profile and select the Recruitment Budget profile.
2. Click on the RUN icon or Risk Simulator | Run Simulation.
3. On the forecast chart, select Two-Tail, enter 90 in the Certainty box, and hit TAB on the keyboard.

To re-create the assumptions and forecasts:

1. Go to the Model worksheet and click on Risk Simulator | New Profile and give the profile a name.
2. Select cell G5 and click on Risk Simulator | Set Input Assumption. Choose a Uniform distribution and enter 0.2 and 0.3 as the min and max values. (You may enter your own values if required, as long as they are positive and the maximum exceeds the minimum.)
3. Select cell G9 and click on Risk Simulator | Set Input Assumption. Choose a Negative Binomial distribution, click on the Link icon for Successes Required, and link it to cell G4. Repeat for the Probability input by linking to cell G5.
4. Click on the checkbox Enable Dynamic Simulation and click OK.
5. Select cell G12 and click on Risk Simulator | Set Output Forecast and click OK.
6. Click on the RUN icon or Risk Simulator | Run Simulation.
7. On the forecast chart, select Two-Tail, enter 90 in the Certainty box, and hit TAB on the keyboard.
FIGURE 146.6 Multidimensional simulation assumptions
FIGURE 146.7 Statistical confidence levels
FIGURE 146.8 Simulated statistics
147. Simulation—Retirement Funding with VBA Macros

File Name: Simulation—Retirement Funding with VBA Macros
Location: Modeling Toolkit | Simulation | Retirement Funding with VBA Macros
Brief Description: Illustrating how to use Risk Simulator to run Monte Carlo simulation and capture the output results, and how to run Visual Basic for Applications (VBA) macros in your models together with Monte Carlo simulation
Requirements: Modeling Toolkit, Risk Simulator
This model illustrates a simplified retirement amortization computation allowing for inflation adjustments. That is, given how much you currently have, the expected rate of return on your investment vehicles, the expected inflation rates, your current salary, and the expected amount you wish to have starting at retirement, this model computes how much annual investment is required to sufficiently fund your retirement (Figure 147.1). The required inputs and computed outputs are summarized here.

Required Inputs
Initial Savings Amount: How much you have currently
Annual Required Rate of Return: Expected annualized rate of return on the investment vehicle
Current Age: Your current age right now, corresponding to the initial savings
Retirement Age: Your proposed age at retirement
Average Terminal Age: Expected age at death (life expectancy)
Annual Salary: Your current salary level
Replacement Ratio: Percentage of your current salary required at retirement
Inflation Rate: Annualized inflation rate expected
Terminal Wealth Goal: Typically $0 (complete amortization)

Computed Results
Terminal Wealth: How much you have left at terminal age
Annual Contribution: Required annual contribution to have sufficient funds for retirement
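The worksheet's exact VBA logic is not reproduced here, but a simplified sketch of the same kind of calculation follows: grow the replacement salary to retirement at the inflation rate, discount the inflation-growing withdrawals back to the retirement date, and solve for the level contribution whose future value fills the gap. All input values below are hypothetical, and the model's actual amortization may differ in timing conventions.

```python
def required_annual_contribution(savings, ret, cur_age, ret_age, term_age,
                                 salary, replace_ratio, inflation,
                                 terminal_goal=0.0):
    """Simplified inflation-adjusted retirement funding calculation."""
    years_to_ret = ret_age - cur_age
    years_in_ret = term_age - ret_age
    # First-year retirement need: replacement salary grown to retirement.
    need0 = salary * replace_ratio * (1 + inflation) ** years_to_ret
    # Nest egg needed at retirement: PV of inflation-growing withdrawals
    # plus the PV of any terminal wealth goal.
    pv_withdrawals = sum(need0 * (1 + inflation) ** t / (1 + ret) ** t
                         for t in range(years_in_ret))
    nest_egg = pv_withdrawals + terminal_goal / (1 + ret) ** years_in_ret
    # Future value of current savings at retirement.
    fv_savings = savings * (1 + ret) ** years_to_ret
    # Level end-of-year contribution whose future value fills the gap.
    annuity_factor = ((1 + ret) ** years_to_ret - 1) / ret
    return (nest_egg - fv_savings) / annuity_factor

# Hypothetical inputs, not the model's actual cell values:
c = required_annual_contribution(50_000, 0.08, 35, 65, 85, 80_000, 0.70, 0.03)
print(f"Required annual contribution: ${c:,.0f}")
```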
FIGURE 147.1 Retirement funding model (inflation adjusted)
MONTE CARLO SIMULATION

As the rate of return on investment and the inflation rate are both uncertain and vary, these two variables are defined as input assumptions. The output forecast is the required annual contribution. This model already has a simulation profile, assumptions, forecasts, and VBA macros created. To run this model, simply:

1. Go to the Payments Per Period worksheet.
2. Select Risk Simulator | New Profile and define assumptions on Rate of Return (E6) and Inflation (I8), and then click on the RUN icon or Risk Simulator | Run Simulation.
Model Analysis

You can re-create the model's simulation and VBA analysis by repeating these steps:

1. Press Alt-F11 to get to the Excel VBA environment (Figure 147.2).
2. Double-click on Sheet 1, and you will see the VBA macro code used to compute the Annual Contribution. Notice that the code is listed under the private subroutine Worksheet_Change. This setup means that every time a value changes in the worksheet (i.e., when a simulation trial occurs), the macro is executed and the value of the annual contribution is recomputed in each simulation trial.
3. Exit the VBA environment and go back to the Payments Per Period worksheet.
4. Create a New Simulation Profile and give it an appropriate name.
5. Define assumptions on the annual required rate of return and the inflation rate.
6. Define a forecast on the annual contribution.
7. Run the simulation.
FIGURE 147.2 Visual Basic for Applications (VBA) code with simulation
148. Simulation—Roulette Wheel

File Name: Simulation—Roulette Wheel
Location: Modeling Toolkit | Simulation | Roulette Wheel
Brief Description: Illustrating how to use Risk Simulator to run Monte Carlo simulation on a roulette wheel
Requirements: Modeling Toolkit, Risk Simulator
This is a fun model that illustrates how Visual Basic for Applications can be used in conjunction with Risk Simulator to generate random simulations of a roulette wheel and the subsequent payoffs. To run the model, go to the Roulette worksheet and click on Reset to get started. You can then click on the Step Simulation icon (the icon that looks like a play button between the Run and Reset icons). You can do this one click at a time or just run the entire simulation to see what the payoff looks like. A roulette wheel has 38 slots or possible outcomes. The first 36 slots represent the 36 numbers, and each number is assigned either a red or black color (Figure 148.1). Values 37 and 38 represent the 0 and 00 outcomes; 37 is a value you can bet on, but 38 means the house wins. Further, in a real roulette game, you can bet on even or odd values, the first, second, or third set of 12 numbers, and so forth. However, for simplicity, we only allow the possibilities of betting on a number and a color. To run more trials, simply change the number of bets (cell M3) and edit the simulation profile to change the number of trials to equal this new number of bets (Risk Simulator | Edit Profile).
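A stripped-down Python analogue of the wheel's economics (not the workbook's VBA) is sketched below. The color assignment and the number bet on are hypothetical choices for illustration, but the payoff rules (35:1 on a single number, 1:1 on a color, 38 slots with 37/38 as the zero outcomes) follow the description above.

```python
import numpy as np

rng = np.random.default_rng(38)
BETS, STAKE = 1_000, 1.0
RED = set(range(1, 37, 2))          # hypothetical color map: odd numbers as red

spins = rng.integers(1, 39, BETS)   # 1..36 are numbers; 37 = 0, 38 = 00
# Bet on a single number (pays 35:1) and on red (pays 1:1); 37 and 38 lose both bets.
number_pnl = np.where(spins == 17, 35 * STAKE, -STAKE)
color_pnl = np.where(np.isin(spins, list(RED)), STAKE, -STAKE)

print(f"Number bet average payoff per spin: {number_pnl.mean():+.3f}")
print(f"Color bet average payoff per spin:  {color_pnl.mean():+.3f}")
# Both converge to the house edge of -2/38, or about -5.3%, per unit staked.
```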
FIGURE 148.1 Roulette wheel equivalent
149. Simulation—Time Value of Money

File Name: Simulation—Time Value of Money
Location: Modeling Toolkit | Simulation | Time Value of Money
Brief Description: Illustrating how to solve time value of money problems and how to add simulation to time value of money problems to account for uncertainties
Requirements: Modeling Toolkit, Risk Simulator
Time value of money (TVM) is one of the most fundamental yet important concepts in the world of finance and decision analysis. It provides a robust framework to value projects and decide the best course of action among multiple projects on an equal footing. For instance, one project might have a 10-year life providing high cash flow returns in the future but cost more, while another costs less and returns slightly lower cash flows. By applying TVM methods, the net present value (NPV) of a project can be easily determined and the correct and optimal course of action can be taken. The examples start off easy with the basics of future value and present value, then move on to annuities with equal payments per period. Then we have different-period analysis (monthly versus annual payments), NPV, internal rate of return, and return on investment analysis. A TVM amortization table example is also presented. Clearly these are simple, getting-started examples. Also, see the Discounted Cash Flow, Return on Investment, and Volatility Estimates chapter for an example of a more protracted NPV analysis with various end-of-period, beginning-of-period, discrete discounting, and continuous discounting approaches. Some quick manual computations are shown here. The examples in the Models worksheet use Excel functions as well as manual computations. Figure 149.1 illustrates the basic theoretical concepts of the time value of money. Figure 149.2 illustrates some worked-out examples in the model. The Excel worksheet model provides more step-by-step examples of these applications.
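For orientation, the core discounting identities can be checked in a few lines of Python; the rate and cash flows below are hypothetical, and the formulas are the standard ones (FV = PV(1 + r)^n, the annuity present value, and NPV as a discounted sum):

```python
rate = 0.10
cash_flows = [-1_000, 300, 400, 500, 600]   # hypothetical project, year 0 outlay

fv = 1_000 * (1 + rate) ** 5                          # future value of a lump sum
pv = 1_000 / (1 + rate) ** 5                          # present value of a future sum
annuity_pv = 100 * (1 - (1 + rate) ** -10) / rate     # PV of $100/yr for 10 years
npv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

print(f"FV: {fv:,.2f}  PV: {pv:,.2f}  Annuity PV: {annuity_pv:,.2f}  NPV: {npv:,.2f}")
```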
FIGURE 149.1 Underlying theory of time value of money
FIGURE 149.2 Applying time value of money
150. Six Sigma—Obtaining Statistical Probabilities, Basic Hypothesis Tests, Confidence Intervals, and Bootstrapping Statistics

File Names: Six Sigma—Hypothesis Testing and Bootstrap Simulation, and Six Sigma—Probabilities and Hypothesis Tests (CDF, PDF, ICDF)
Location: Modeling Toolkit | Six Sigma | Hypothesis Testing and Bootstrap Simulation, and Probabilities and Hypothesis Tests (CDF, PDF, ICDF)
Brief Description: Illustrating how to use Risk Simulator's Distributional Analysis and Modeling Toolkit's probability functions to obtain exact probabilities of events, confidence intervals, and hypothesis testing for quality control, and how to use Risk Simulator to run hypothesis tests after a simulation run, generate a hypothesis test with raw data, understand the concept of random seeds, and run a nonparametric bootstrap simulation to obtain the confidence intervals of the statistics
Requirements: Modeling Toolkit, Risk Simulator
COMPUTING THEORETICAL PROBABILITIES OF EVENTS FOR SIX SIGMA QUALITY CONTROL

In this chapter, we use Risk Simulator's Distributional Analysis tool and Modeling Toolkit's functions to obtain exact probabilities of the occurrence of events for quality control purposes. These are illustrated through some simple discrete distributions. The chapter also provides some continuous distributions for the purposes of theoretical hypothesis tests. Then hypothesis testing on empirically simulated data is presented, where we use theoretical distributions to simulate empirical data and run hypothesis tests. The next chapter goes into more detail on using the Modeling Toolkit's modules on one-sample and two-sample hypothesis tests using t-tests and z-tests for values and proportions, analysis of variance (ANOVA) techniques, and some powerful nonparametric tests for small sample sizes. This chapter is a precursor and provides the prerequisite knowledge for the materials presented in the next chapter.
A. Binomial Distribution

The binomial distribution describes the number of times a particular event occurs in a fixed number of trials, such as the number of heads in 10 flips of a coin or
the number of defective items out of 50 items chosen. For each trial, only two outcomes are possible, and they are mutually exclusive. The trials are independent; what happens in the first trial does not affect the next trial, and the probability of an event occurring remains the same from trial to trial. Probability of success (p) and the number of total trials (n) are the distributional parameters. The number of successful trials is denoted x (the x-axis of the probability distribution graph). The input requirements of the distribution are: probability of success > 0 and < 1 (for example, p ≥ 0.0001 and p ≤ 0.9999), and number of trials ≥ 1, integer, and ≤ 1,000.

Example: If the probability of obtaining a part that is defective is 50%, what is the probability that in selecting 4 parts at random, there will be no defective part, or 1 defective part, or 2 defective parts, and so forth? Re-create the probability mass function or probability density function (PDF):

Probability of no defects P(x = 0): 6.25% using B2DistributionPDFBinomial(4,0.5,0) or computed:

$$C_0^4(0.5)^0(0.5)^{4-0} = \frac{4!}{0!(4-0)!}(0.5)^0(0.5)^{4-0} = \frac{1}{16} = 6.25\%$$

where we define P(x = 0) as the probability (P) of the number of successes of an event (x), and C as the mathematical combination.

Probability of one defect P(x = 1): 25.00% using B2DistributionPDFBinomial(4,0.5,1) or computed:

$$C_1^4(0.5)^1(0.5)^{4-1} = \frac{4!}{1!(4-1)!}(0.5)^1(0.5)^3 = \frac{4}{16} = 25\%$$

Probability of two defects P(x = 2): 37.50% using B2DistributionPDFBinomial(4,0.5,2) or computed:

$$C_2^4(0.5)^2(0.5)^{4-2} = \frac{4!}{2!(4-2)!}(0.5)^2(0.5)^2 = \frac{6}{16} = 37.50\%$$

Probability of three defects P(x = 3): 25.00% using B2DistributionPDFBinomial(4,0.5,3) or computed:

$$C_3^4(0.5)^3(0.5)^{4-3} = \frac{4!}{3!(4-3)!}(0.5)^3(0.5)^1 = \frac{4}{16} = 25\%$$

Probability of four defects P(x = 4): 6.25% using B2DistributionPDFBinomial(4,0.5,4) or computed:

$$C_4^4(0.5)^4(0.5)^{4-4} = \frac{4!}{4!(4-4)!}(0.5)^4(0.5)^0 = \frac{1}{16} = 6.25\%$$
Total probabilities: 100.00%

In addition, you can sum the probabilities to obtain the cumulative distribution function (CDF):

Probability of no defects P(x = 0): 6.25% using B2DistributionCDFBinomial(4,0.5,0), or computed as P(x = 0)
Probability of up to 1 defect P(x ≤ 1): 31.25% using B2DistributionCDFBinomial(4,0.5,1), or computed as P(x = 0) + P(x = 1)
Probability of up to 2 defects P(x ≤ 2): 68.75% using B2DistributionCDFBinomial(4,0.5,2), or computed as P(x = 0) + P(x = 1) + P(x = 2)
Probability of up to 3 defects P(x ≤ 3): 93.75% using B2DistributionCDFBinomial(4,0.5,3), or computed as P(x = 0) + P(x = 1) + P(x = 2) + P(x = 3)
Probability of up to 4 defects P(x ≤ 4): 100.00% using B2DistributionCDFBinomial(4,0.5,4), or computed as P(x = 0) + P(x = 1) + P(x = 2) + P(x = 3) + P(x = 4)
The same analysis can be performed using the Distribution Analysis tool in Risk Simulator. For instance, start the tool by clicking on Risk Simulator | Tools | Distributional Analysis, select Binomial, enter 4 for Trials and 0.5 for Probability, then select PDF as the type of analysis with a range of 0 to 4 and a step size of 1. The resulting table and PDF distribution are exactly as computed using the Modeling Toolkit functions, as seen in Figure 150.1. In addition, the four distributional moments can be determined using the tool as well as the B2 functions:
Mean or Average: 2.00 using B2DistributionBinomialMean(4,0.5)
Standard Deviation: 1.00 using B2DistributionBinomialStdev(4,0.5)
Skewness Coefficient: 0.00 using B2DistributionBinomialSkew(4,0.5)
Kurtosis (Excess): –0.50 using B2DistributionBinomialkurtosis(4,0.5)
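As a cross-check outside Excel, scipy reproduces both the probability table and the four moments; this is an independent verification sketch, not the B2 functions themselves:

```python
from scipy.stats import binom

n, p = 4, 0.5
dist = binom(n, p)
for x in range(5):
    print(f"P(x = {x}) = {dist.pmf(x):.4f}   P(x <= {x}) = {dist.cdf(x):.4f}")

# Mean 2.0, variance 1.0 (stdev 1.0), skew 0.0, excess kurtosis -0.5:
mean, var, skew, kurt = binom.stats(n, p, moments="mvsk")
print(f"mean={mean}, stdev={var ** 0.5}, skew={skew}, excess kurtosis={kurt}")
```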
In addition, typically, for discrete distributions, the exact probabilities are called probability mass functions (PMFs); they are called probability density functions (PDFs) for continuous distributions. However, in this book, we use both terms interchangeably. Also, this chapter highlights only some of the examples illustrated in the model. To view more detailed examples, please see the Excel model in the Modeling Toolkit.
B. Poisson Distribution

The Poisson distribution describes the number of times an event occurs in a given space or time interval, such as the number of telephone calls per minute or the number of errors per page in a document. The number of possible occurrences in any interval is unlimited, and the occurrences are independent. The number of occurrences in one interval does not affect the number of occurrences in other intervals, and the average number of occurrences must remain the same from interval to interval. Rate, or Lambda, is the only distributional parameter. The input requirement for the distribution is Rate > 0 and ≤ 1,000.
FIGURE 150.1 Distributional analysis for a binomial PDF
Example: A tire service center has the capacity to service 6 customers in an hour. From prior experience, on average 3 customers show up per hour. The owner is afraid that there might be insufficient manpower to handle an overcrowding of more than 6 customers. What is the probability that there will be exactly 6 customers? What about 6 or more customers? Using the Distributional Analysis tool, we see that the PDF value for exactly 6 customers is 5.04% (Figure 150.2), and the probability of 6 or more is 1 minus the CDF probability of 5 or fewer, which is 1 – 91.61%, or 8.39% (Figure 150.3).
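The same two numbers fall out of scipy's Poisson functions; this is an independent check, not the Modeling Toolkit code:

```python
from scipy.stats import poisson

rate = 3  # average customers per hour
print(f"P(exactly 6) = {poisson.pmf(6, rate):.4f}")       # 0.0504
print(f"P(6 or more) = {1 - poisson.cdf(5, rate):.4f}")   # 0.0839
```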
C. Normal Distribution

The normal distribution is the most important distribution in probability theory because it describes many natural phenomena, such as people's IQs or heights. Decision makers can use the normal distribution to describe uncertain variables, such as the inflation rate or the future price of gasoline. Some value of the uncertain variable is the most likely (the mean of the distribution), the uncertain variable could as likely be above the mean as it could be below the mean (symmetrical about the mean), and the uncertain variable is more likely to be in the vicinity of the mean than
FIGURE 150.2 PDF on a Poisson
farther away. Mean (µ) and standard deviation (σ) are the distributional parameters. The input requirements are: the mean can take on any value, and the standard deviation must be a positive value. Example: You observe that in the past, on average, your manufactured batteries last for 15 months with a standard deviation of 1.5 months. Assume that battery life is normally distributed. If a battery is randomly selected, find the probability that it has a life of less than 16.5 months or over 16.5 months. Using the tool, we obtain the CDF at X = 16.5 months as 84.13%, which means there is an 84.13% chance that a battery lasts up to 16.5 months and a 1 – 0.8413, or 15.87%, chance it lasts over 16.5 months (Figure 150.4). The same value of 84.13% can be obtained using the function B2DistributionCDFNormal(15,1.5,16.5) (Figure 150.5). Example: Alternatively, if you wish to provide a 12-month warranty on your batteries (i.e., if the battery dies before 12 months, you will give a full refund), what are the chances that you may have to provide this refund? Using the tool, we find that the CDF for X = 12 is 2.28%, the chance that a refund will have to be issued (Figure 150.6).
FIGURE 150.3 CDF on a Poisson
So far we have been computing the probabilities of events occurring using the PDF and CDF functions and tools. We can also reverse the analysis and obtain the X values given some probability, using the inverse cumulative distribution function (ICDF), as seen next. Example: If the refund probability just calculated is too high (hence, too costly for you), and you wish to minimize the cost by reducing the probability of having to refund your customers down to 1%, what would be a suitable warranty period (in months)? The answer is that providing anything less than an 11.51-month warranty will most likely result in a refund probability of 1% or less. To obtain this result, we use the ICDF analysis in the Distributional Analysis tool (Figure 150.7). Alternatively, we can use the Modeling Toolkit function B2DistributionICDFNormal(15,1.5,0.01) to obtain 11.510478 (Figure 150.8).
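All three battery computations (the two CDFs and the ICDF) can be verified with scipy's normal distribution; this is an illustrative check only:

```python
from scipy.stats import norm

mu, sigma = 15, 1.5   # battery life in months
print(f"P(life <= 16.5) = {norm.cdf(16.5, mu, sigma):.4f}")           # 0.8413
print(f"P(life >  16.5) = {norm.sf(16.5, mu, sigma):.4f}")            # 0.1587
print(f"P(refund, 12-mo warranty) = {norm.cdf(12, mu, sigma):.4f}")   # 0.0228
print(f"Warranty for a 1% refund rate: {norm.ppf(0.01, mu, sigma):.2f} months")  # 11.51
```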
HYPOTHESIS TESTS IN A THEORETICAL SITUATION

This section illustrates how to continue using the Distributional Analysis tool to simplify theoretical hypothesis tests.
FIGURE 150.4 CDF of a normal distribution
FIGURE 150.5 CDF of normal using function calls
FIGURE 150.6 Probability of a guarantee refund
Example: Sometimes we need to obtain certain X values given a certainty and probability level for the purposes of hypothesis testing. This is where the ICDF comes in handy. For instance, suppose a light bulb manufacturer needs to test whether its bulbs can last on average 1,000 burning hours. If the plant manager randomly samples 100 light bulbs and finds that the sample average is 980 hours with a standard deviation of 80 hours, at a 5% significance level (two-tails), do the light bulbs last an average of 1,000 hours? There are several methods to solve this problem, including the use of confidence intervals, Z-scores, and p-values. For example, we are testing the null hypothesis H0: Population Mean = 1,000 and the alternate hypothesis Ha: Population Mean is NOT 1,000. Using the Z-score approach, we first obtain the Z-score equivalent to a two-tail alpha of 5% (which means one tail is 2.5%; using the Distributional Analysis tool, we get Z = 1.96 at a CDF of 97.50%, equivalent to a one-tail p-value of 2.5%). Using the Distributional Analysis tool, set the distribution to normal with a mean of zero and a standard deviation of 1 (this is the standard normal Z distribution). Then compute the ICDF for 0.975 or 97.5% CDF, which provides an X value of 1.9599, or 1.96 (Figure 150.9).
FIGURE 150.7 Obtaining the ICDF
FIGURE 150.8 Function call for ICDF
FIGURE 150.9 Standard normal Z-score
Using the confidence interval formula, we get:

$$\mu \pm Z\frac{s}{\sqrt{n}} = 1000 \pm 1.96\frac{80}{\sqrt{100}} = 1000 \pm 15.68$$

This means that the statistical confidence interval is between 984.32 and 1015.68. As the sample mean of 980 falls outside this confidence interval, we reject the null hypothesis and conclude that the true population mean is different from 1,000 hours. A much quicker and simpler approach is to use the Distributional Analysis tool directly. Seeing that we are performing a statistical sampling test, we first convert the standard deviation into the standard error of the sample mean:

$$\frac{s}{\sqrt{n}} = \frac{80}{\sqrt{100}} = 8$$
FIGURE 150.10 Obtaining p-values using Distributional Analysis Tool
Then we can find the CDF relating to the sample mean of 980. We see that the CDF p-value is 0.0062, less than the alpha of 0.025 one-tail (or 0.05 two-tail), which means we reject the null hypothesis and conclude that the population mean is statistically significantly different from the 1,000 hours tested (Figure 150.10). Yet another alternative is to use the ICDF method with the mean and the sampling-adjusted standard deviation and compute the X values corresponding to the 2.5% and 97.5% levels. The results indicate that the 95% two-tail confidence interval is between 984.32 and 1,015.68, as computed previously. Hence, 980 falls outside this range; this means that the sample value of 980 is statistically far away from the hypothesized population value of 1,000 (i.e., based on a statistical sampling test, the unknown true population mean can be determined to be not equal to 1,000). Please see Figure 150.11. Note that we divide the standard deviation by the square root of N because the statistic being tested is a sample mean, and the standard error of a sample mean is s/√N.

Example: In another example, suppose it takes on average 20 minutes, with a standard deviation of 12 minutes, to complete a certain manufacturing task. Based on a sample of 36 workers, what is the probability that you will find someone completing the task in between 18 and 24 minutes?
FIGURE 150.11 Computing statistical confidence intervals

Again, we adjust the sampling standard deviation to be 12 divided by the square root of 36, or 2. The CDF values for 18 and 24 are 15.86% and 97.72%, respectively, yielding a difference of 81.86%, which is the probability of finding someone taking between 18 and 24 minutes to complete the task. Please see Figure 150.12.

Example: Sometimes, when the sample size is small, we need to revert to using the Student's T distribution. For instance, suppose a plant manager studies the life of a particular battery and samples 10 units. The sample mean is 5 hours with a sample standard deviation of 1 hour. What is the 95% confidence interval for the battery life? Using the T distribution, we set the degrees of freedom as n − 1, or 9, with a mean location of 0 for a standard T distribution. The ICDF for 0.975 or 97.5% (5% two-tail means 2.5% in one tail, creating a complement of 97.5%) is equivalent to 2.262 (Figure 150.13). So, the 95% statistical confidence interval is:

$$\bar{x} \pm t\frac{s}{\sqrt{n}} = 5 \pm 2.262\frac{1}{\sqrt{10}} = 5 \pm 0.71$$

Therefore, the confidence interval is between 4.29 and 5.71.
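The z-based interval, the one-tail p-value, the task-time probability, and the small-sample t interval can all be reproduced in a few lines with scipy (an illustrative check; tiny differences from the hand calculations are rounding):

```python
from math import sqrt
from scipy.stats import norm, t

# Light bulb z-test: n = 100, sample mean 980, s = 80, hypothesized mean 1,000.
se = 80 / sqrt(100)
lo, hi = norm.ppf([0.025, 0.975], loc=1000, scale=se)
print(f"95% CI: {lo:.2f} to {hi:.2f}")                              # 984.32 to 1015.68
print(f"One-tail p-value at 980: {norm.cdf(980, 1000, se):.4f}")    # 0.0062

# Task-time example: P(18 <= time <= 24) with standard error 12/sqrt(36) = 2.
print(f"P(18 to 24 min): {norm.cdf(24, 20, 2) - norm.cdf(18, 20, 2):.4f}")

# Small-sample battery example: t interval with n = 10, mean 5, s = 1.
t_crit = t.ppf(0.975, df=9)                                         # about 2.262
half = t_crit * 1 / sqrt(10)
print(f"95% t interval: {5 - half:.2f} to {5 + half:.2f}")
```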
FIGURE 150.12 Sampling confidence interval
HYPOTHESIS TESTS IN AN EMPIRICAL SIMULATION

This next example shows how two different forecasts or sets of data can be tested against one another to determine if they have the same means and variances. That is, if the first distribution has a mean of 100, how far away does the mean of the second distribution have to be such that they are considered statistically different? The example illustrates two models (A and B) with the same calculations (see the Simulation Model worksheet), where the income is revenue minus cost. Both sets of models have the same inputs and the same distributional assumptions on the inputs, and the simulation is run on the random seed of 123456. Two major items are noteworthy. The first is that the means and variances (as well as standard deviations) are slightly different. These differences raise the question as to whether the means and variances of these two distributions are identical. A hypothesis test can be applied to answer this first question. A nonparametric bootstrap simulation can also be applied to test the other statistics to see if they are statistically valid. The second item of interest is that the results from A and B are different although the input assumptions are identical and an overall random seed has been applied (Figure 150.14). The different results occur because, with a random seed applied,
FIGURE 150.13 Standard T distribution
each distribution is allowed to vary independently as long as it is not correlated to another variable. This is a key and useful fact in Monte Carlo simulation.
RUNNING A HYPOTHESIS TEST

To run this model, simply:

1. Open the existing simulation profile by selecting Risk Simulator | Change Simulation Profile. Choose the Hypothesis Testing profile.
2. Select Risk Simulator | Run Simulation or click on the RUN icon.
3. After the simulation run is complete, select Risk Simulator | Tools | Hypothesis Test.
4. Make sure both forecasts are selected and click OK.

The report and results are provided in Figure 150.15. The results indicate that the p-value for the t-test is higher than 0.05, indicating that both means are statistically identical and that any variations are due to random white noise. Further, the p-value is also high for the F-test, indicating that both variances are also statistically identical to one another.
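The same pair of tests can be run on any two forecast series with scipy; the two samples below are hypothetical stand-ins for the extracted forecast data of models A and B, and the F-test is hand-rolled because scipy exposes the F distribution rather than a one-call variance-equality test:

```python
import numpy as np
from scipy.stats import ttest_ind, f

rng = np.random.default_rng(123456)
# Hypothetical stand-ins for the two simulated income forecasts A and B.
a = rng.normal(100, 10, 1_000)
b = rng.normal(100, 10, 1_000)

t_stat, t_p = ttest_ind(a, b)
print(f"t-test p-value: {t_p:.4f}")   # large p => means statistically identical

# Two-tailed F-test for equality of variances.
f_stat = a.var(ddof=1) / b.var(ddof=1)
p_one = f.sf(f_stat, len(a) - 1, len(b) - 1)
f_p = 2 * min(p_one, 1 - p_one)
print(f"F-test p-value: {f_p:.4f}")   # large p => variances statistically identical
```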
FIGURE 150.14 Simulation results
FIGURE 150.15 Hypothesis testing results
RUNNING A NONPARAMETRIC BOOTSTRAP SIMULATION

The preceding hypothesis test is a theoretical test and is thus more accurate than empirical tests (e.g., bootstrap simulation). However, such theoretical tests do not exist for the other statistics; hence, an empirical approach is required, namely, nonparametric bootstrap simulation. To run the bootstrap simulation, simply reset and rerun the simulation; then, once the simulation is complete, click on Risk Simulator | Tools | Nonparametric Bootstrap. Choose one of the forecasts (only one forecast can be chosen at a time when running bootstrap simulation), select the statistics of interest, and click OK (Figure 150.16). The resulting forecast charts are empirical distributions of the statistics. By typing 90 in the Certainty box and hitting Tab on the keyboard, the 90% confidence interval is displayed for each statistic. For instance, the skewness interval is between 0.0452
FIGURE 150.16 Nonparametric bootstrap simulation
and 0.2877, indicating that the value zero is outside this interval; that is, at the 90% two-tail confidence level (or a significance of 0.10 two-tailed), model A has a statistically significant positive skew. Clearly, the higher the number of bootstrap trials, the more accurate the results (recommended trials are between 1,000 and 10,000). Think of bootstrap simulation this way: Imagine you have 100 people running the same exact model without any seed values. At the end of the simulations, each person will have a set of means, standard deviations, skewness, and kurtosis values. Clearly, some people will have exactly the same results while others will be slightly off, by virtue of random simulation. The question is, how close or variable is the mean or any of these statistics? To answer that question, you collect all 100 means, show the distribution, and figure out the 90% confidence level (Figure 150.17). This is what bootstrapping does. It creates alternate realities of hundreds or thousands of runs of the same model to see how accurate the statistic is and what its spread looks like. This also allows us to perform hypothesis tests to see whether the statistic of interest is statistically significant.
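A minimal Python sketch of the same procedure follows: resample the forecast data with replacement, recompute the statistic on each resample, and read the percentiles of the resulting empirical distribution. The forecast data here are hypothetical:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
forecast = rng.lognormal(mean=4.6, sigma=0.05, size=1_000)  # hypothetical forecast data

# Nonparametric bootstrap: resample with replacement and recompute the statistic.
boot_skew = np.array([skew(rng.choice(forecast, size=forecast.size, replace=True))
                      for _ in range(5_000)])
lo, hi = np.percentile(boot_skew, [5, 95])
print(f"90% bootstrap CI for skewness: {lo:.4f} to {hi:.4f}")
print("Zero outside the interval => statistically significant skew"
      if lo > 0 or hi < 0 else
      "Zero inside the interval => skew not significant")
```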
FIGURE 150.17 Bootstrap simulation’s forecast results
151. Six Sigma—One- and Two-Sample Hypothesis Tests Using t-Tests, Z-Tests, F-Tests, ANOVA, and Nonparametric Tests (Friedman, Kruskal-Wallis, Lilliefors, and Runs Tests)

File Name: Six Sigma—Hypothesis Testing (Advanced Techniques)
Location: Modeling Toolkit | Six Sigma | Hypothesis Testing (Advanced Techniques)
Brief Description: Illustrating how to use Modeling Toolkit's Statistical Tools to run single-variable, two-variable, and multivariate hypothesis tests using parametric and nonparametric techniques
Requirements: Modeling Toolkit, Risk Simulator
This chapter is a follow-up to the last chapter on hypothesis testing. As a primer to the more advanced applications described here, please first review the materials in the last chapter. Here we continue the quest of quality control and sampling, testing one, two, and multiple different variables to see if they are statistically significantly similar to or different from each other or from some hypothesized value. The methodologies introduced in this chapter are more advanced than those presented in the last chapter but build on similar concepts. All the statistical tests shown in this chapter are available in the Statistical Tools section of the Modeling Toolkit.

A hypothesis test is a statistical test of the characteristics of a population by testing a small sample collected. In most cases, the population to be studied might be too large, difficult, or expensive to be completely sampled (e.g., all 100 million registered voters in the United States in a particular election). Hence, a smaller sample (e.g., a random sample of 1,100 voters from 20 cities) is collected, and the sample statistics are tabulated. Then, using hypothesis tests, the characteristics of the entire population can be inferred from this small sample. Modeling Toolkit allows the user to run one-variable, two-variable, and multiple-variable hypothesis tests using t-tests, Z-tests, analysis of variance (ANOVA), and advanced nonparametric techniques.

To perform a hypothesis test, first set up the null hypothesis (H0) and the alternate hypothesis (Ha). Here are four quick rules:

1. Always set up the alternate hypothesis first, then the null hypothesis.
2. The alternate hypothesis always has these signs: > or < or ≠.
3. The null hypothesis always has these signs: ≥ or ≤ or =.
4. If the alternate hypothesis is ≠, it is a two-tailed test; if <, it is a left-tailed (one-tailed) test; if >, it is a right-tailed (one-tailed) test.
If the p-value is less than the significance level being tested (the significance level α is selected by the user and is usually 0.10, 0.05, or 0.01), reject the null hypothesis and accept the alternate hypothesis.
TWO-TAILED HYPOTHESIS TEST

A two-tailed hypothesis tests the null hypothesis of whether the population mean, median, or other statistic of the sample data set is statistically identical to the hypothesized mean, median, or specified statistic. The alternative hypothesis is that the real population mean, median, or statistic is statistically different from the hypothesized mean, median, or statistic when tested using the sample data set. If the calculated p-value is less than or equal to the alpha significance value, then reject the null hypothesis and accept the alternate hypothesis. Otherwise, if the p-value is higher than the alpha significance value, do not reject the null hypothesis.
RIGHT-TAILED HYPOTHESIS TEST

A right-tailed hypothesis tests the null hypothesis such that the population mean, median, or other statistic of the sample data set is statistically less than or equal to the hypothesized mean, median, or specified statistic. The alternative hypothesis is that the real population mean, median, or statistic is statistically greater than the hypothesized mean, median, or statistic when tested using the sample data set. If the calculated p-value is less than or equal to the alpha significance value, then reject the null hypothesis and accept the alternate hypothesis. Otherwise, if the p-value is higher than the alpha significance value, do not reject the null hypothesis.
LEFT-TAILED HYPOTHESIS TEST

A left-tailed hypothesis tests the null hypothesis such that the population mean, median, or other statistic of the sample data set is statistically greater than or equal to the hypothesized mean, median, or specified statistic. The alternative hypothesis is that the real population mean, median, or statistic is statistically less than the hypothesized mean, median, or statistic when tested using the sample data set. If the calculated p-value is less than or equal to the alpha significance value, then reject the null hypothesis and accept the alternate hypothesis. Otherwise, if the p-value is higher than the alpha significance value, do not reject the null hypothesis.

Example: A lightbulb manufacturing company wants to test the hypothesis that its bulbs can last, on average, at least 1,000 burning hours. Hence, the company randomly selects 20 sample bulbs. The burning hours are shown in Figure 151.1. At a 5% significance level, what is the conclusion? Figure 151.1 illustrates the sample data of 20 lightbulbs randomly selected and tested using the one-variable t-test, and Figure 151.2 illustrates the results obtained. The hypothesis tested is, of course, a one-tailed (right-tailed) test. As the sample size is less than 30, we use a one-variable t-test. Figure 151.1 also illustrates the data setup and requirements to run a one-variable t-test. Clearly, the null hypothesis tested is that the population average life of the lightbulbs is ≤ 1,000 hours, while the alternate hypothesis is that the population
FIGURE 151.1 One-variable t-test
FIGURE 151.2 Hypothesis test results
life of the lightbulbs is > 1,000 hours. The calculated p-value for the right tail is 0.0220, which is less than the 5% significance (α) level. Hence, we reject the null hypothesis and conclude that the population average life of the lightbulbs exceeds 1,000 hours.
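For reference, the same right-tailed one-sample test looks like this in scipy (version 1.6 or later for the alternative keyword); the 20 burning-hour values below are hypothetical stand-ins for the data in Figure 151.1, so the t statistic and p-value will differ from the 0.0220 reported above:

```python
import numpy as np
from scipy.stats import ttest_1samp

# Hypothetical burning hours for 20 sampled bulbs (not the model's actual data).
hours = np.array([1040, 1025, 995, 1080, 1010, 1035, 990, 1060, 1045, 1000,
                  1020, 1070, 985, 1055, 1030, 1015, 1065, 1005, 1050, 1025])

t_stat, p_right = ttest_1samp(hours, popmean=1000, alternative="greater")
print(f"t = {t_stat:.4f}, right-tail p-value = {p_right:.4f}")
if p_right < 0.05:
    print("Reject H0: average bulb life exceeds 1,000 hours")
```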
ONE-VARIABLE TESTING FOR MEANS (t-TEST)

This one-variable t-test of means is appropriate when the population standard deviation is not known but the sampling distribution is assumed to be approximately normal. (The t-test is used when the sample size is less than 30.) This t-test can be applied to three types of hypothesis tests (a two-tailed test, a right-tailed test, and a left-tailed test) to examine, based on the sample data set, whether the population mean is equal to, less than, or greater than some hypothesized mean. If the calculated p-value is less than or equal to the significance level in the test, then reject the null hypothesis and conclude that the true population mean is not equal to (two-tail test), less than (left-tail test), or greater than (right-tail test) the hypothesized mean based on the sample tested. Otherwise, the true population mean is statistically similar to the hypothesized mean.
Data Requirements

For the one-variable t-tests, create data tables such as the one in Figure 151.3, and select the data area one variable at a time (e.g., to test Variable One, select the data from A2 to A29, or data points 10 to 66). Repeat this process for other variables. To extend the data set, just add more observations (rows) or more variables (columns), as illustrated in Figures 151.3 and 151.4.
ONE-VARIABLE TESTING FOR MEANS (Z-TEST)

The one-variable Z-test is appropriate when the population standard deviation is known and the sampling distribution is assumed to be approximately normal. This applies when the number of data points exceeds 30. This Z-test can be applied to three types of hypothesis tests (a two-tailed test, a right-tailed test, and a left-tailed test) to examine, based on the sample data set, whether the population mean is equal to, less than, or greater than some hypothesized mean. If the calculated p-value is less than or equal to the significance level in the test, then reject the null hypothesis and conclude that the true population mean is not equal to (two-tail test), less than (left-tail test), or greater than (right-tail test) the hypothesized mean based on the sample tested. Otherwise, the true population mean is statistically similar to the hypothesized mean.
Data Requirements

For the one-variable Z-test, create data tables such as the one in Figure 151.5, and select the data area one variable at a time (e.g., to test Variable Eleven, select the data from A2 to A34, or data points 10 to 58). Repeat this process for other variables. To extend the data set, just add more observations (rows) or more variables (columns),
FIGURE 151.3 Data requirements for a one-variable t-test
FIGURE 151.4 Running a one-variable t-test
FIGURE 151.5 Data requirements for a one-variable Z-test
but make sure that you have at least 30 data points to run a Z-test (Figures 151.5 and 151.6).
ONE-VARIABLE TESTING FOR PROPORTIONS (Z-TEST)

The one-variable Z-test for proportions is appropriate when the sampling distribution is assumed to be approximately normal. This applies when the number of data points exceeds 30 and when the number of data points, N, multiplied by the hypothesized population proportion mean, P, is greater than or equal to five, NP ≥ 5. The
FIGURE 151.6 Running a one-variable Z-test
data used in the analysis have to be proportions and be between 0 and 1. This Z-test can be applied to three types of hypothesis tests: a two-tailed test, a right-tailed test, and a left-tailed test, to examine based on the sample data set if the population mean is equal to, less than, or greater than the hypothesized mean. If the calculated p-value is less than or equal to the significance level in the test, then reject the null hypothesis and conclude that the true population mean is not equal to (two-tail test), less than (left-tail test), or greater than (right-tail test) the hypothesized mean based on the sample tested. Otherwise, the true population mean is statistically similar to some hypothesized mean. For data requirements, see the preceding section. However, make sure that the data are proportions between 0 and 1 (not including 1).
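A hand-rolled one-sample proportion z-test is sketched below (the counts are hypothetical, and the normal approximation requires NP ≥ 5, as noted above):

```python
from math import sqrt
from scipy.stats import norm

# One-sample z-test for a proportion: hypothetical sample of 200 observations
# with a sample proportion of 0.46, testing the hypothesized proportion 0.50.
n, p_hat, p0 = 200, 0.46, 0.50
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
p_two_tail = 2 * norm.sf(abs(z))
print(f"z = {z:.4f}, two-tail p-value = {p_two_tail:.4f}")
```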
TWO VARIABLES WITH DEPENDENT MEANS (t-TEST)

The two-variable dependent t-test is appropriate when the population standard deviation is not known but the sampling distribution is assumed to be approximately normal. The t-test is used when the sample size is less than 30. In addition, this test is specifically formulated for testing the same or similar samples before and after an event (e.g., measurements taken before a medical treatment are compared against those measurements taken after the treatment to see if there is a difference). This t-test can be applied to three types of hypothesis tests: a two-tailed test, a right-tailed test, and a left-tailed test.

Suppose that a new heart medication was administered to 100 patients (N = 100), and the heart rates before and after the medication was administered were measured. The Two Dependent Variables t-Test can be applied to test whether the new medication is effective by testing to see if there is a statistically significant difference between the "before" and "after" averages. The Dependent Variables test is used here because there is only a
single sample collected (the heartbeats of the same patients were measured before and after administration of the new drug). The two-tailed null hypothesis tests that the true population mean of the difference between the two variables is zero, versus the alternate hypothesis that the difference is statistically different from zero. The right-tail null hypothesis test is such that the difference in the population means (first mean less second mean) is statistically less than or equal to zero (which is identical to saying that the mean of the first sample is less than or equal to the mean of the second sample). The alternative hypothesis is that the mean difference of the real population is statistically greater than zero when tested using the sample data set (which is identical to saying that the mean of the first sample is greater than the mean of the second sample). The left-tail null hypothesis test is such that the difference in the population means (first mean less second mean) is statistically greater than or equal to zero (which is the same as saying that the mean of the first sample is greater than or equal to the mean of the second sample). The alternative hypothesis is that the mean difference of the real population is statistically less than zero when tested using the sample data set (which is identical to saying that the mean of the first sample is less than the mean of the second sample). If the calculated p-value is less than or equal to the significance level in the test, reject the null hypothesis and conclude that the true difference of the population means is not equal to (two-tail test), less than (left-tail test), or greater than (right-tail test) zero based on the sample tested. Otherwise, the true population mean is statistically similar to the hypothesized mean.
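In scipy, this before/after design maps to a paired (related-samples) t-test; the heart-rate data below are hypothetical:

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(42)
before = rng.normal(80, 8, 100)           # hypothetical heart rates before treatment
after = before - rng.normal(3, 4, 100)    # same patients after the medication

t_stat, p_two_tail = ttest_rel(before, after)
print(f"t = {t_stat:.4f}, two-tail p-value = {p_two_tail:.4f}")
# A small p-value means the before/after means differ, i.e., the drug has an effect.
```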
Data Requirements

For the two-variable t-tests, create data tables such as the one shown in Figure 151.7, and select the data area two variables at a time (e.g., to test Variables Eight and Nine together, select the data from H2 to I29, or data points 10 to 66). Repeat this process for other pairs of variables. To extend the existing data set, just add more observations (rows) or more variables (columns). Please see Figures 151.7 and 151.8.
TWO VARIABLES THAT ARE INDEPENDENT WITH EQUAL VARIANCES (t-TEST)

The two-variable t-test with equal variances is appropriate when the population standard deviation is not known but the sampling distribution is assumed to be approximately normal. The t-test is used when the sample size is less than 30. In addition, the two independent samples are assumed to have similar variances. For illustration, suppose that a new engine design is tested against an existing engine design to see if there is a statistically significant difference between the two. The t-Test on Two (Independent) Variables with Equal Variances can be applied. This test is used here because two distinctly different samples are collected (new engine and existing engine), but the variances of both samples are assumed to be similar (the means may or may not be similar, but the fluctuations around the mean are assumed to be similar).
FIGURE 151.7 Data requirements for a two-variable dependent means t-test
This t-test can be applied to three types of hypothesis tests: a two-tailed test, a right-tailed test, and a left-tailed test. A two-tailed hypothesis tests the null hypothesis H0 such that the mean difference of the population between the two variables is statistically identical to the hypothesized mean differences (HMD). If HMD is set to zero, this is the same as saying that the first mean equals the second mean. The alternative hypothesis is that the difference between the real population means is statistically different from the hypothesized mean differences when tested using the sample data set. If HMD is set to zero, this is the same as saying that the first mean does not equal the second mean in the alternative hypothesis. A right-tailed hypothesis tests the null hypothesis H0 such that the population mean difference between the two variables is statistically less than or equal to the hypothesized mean differences. If HMD is set to zero, this is the same as saying that the first mean is less than or equals the second mean. The alternative hypothesis is that the real difference between population means is statistically greater than the hypothesized mean differences when tested using the sample data set. If HMD is set to zero, this is the same as saying that the first mean is greater than the second mean. A left-tailed hypothesis tests the null hypothesis H0 such that the difference between the population means of the two variables is statistically greater than or
FIGURE 151.8 Running a two-variable dependent means t-test
equal to the hypothesized mean differences. If HMD is set to zero, this is the same as saying that the first mean is greater than or equals the second mean. The alternative hypothesis is that the real difference between population means is statistically less than the hypothesized mean difference when tested using the sample data set. If HMD is set to zero, this is the same as saying that the first mean is less than the second mean. If the calculated p-value is less than or equal to the significance level in the test, reject the null hypothesis and conclude that the true population difference of the population means is not equal to (two-tail test), less than (left-tail test), or greater than (right-tail test) HMD based on the sample tested. Otherwise, the true difference of the population means is statistically similar to the HMD. For data requirements, see the section called “Two Variables with Dependent Means (t-Test).”
TWO VARIABLES THAT ARE INDEPENDENT WITH UNEQUAL VARIANCES (t-TEST) The two-variable t-test with unequal variances (the population variance of Sample 1 is expected to be different from the population variance of Sample 2) is appropriate when the population standard deviation is not known but the sampling distribution is assumed to be approximately normal. The t-test is used when the sample size is less than 30. In addition, the variances of the two independent samples are assumed to be dissimilar. To illustrate, suppose that a new customer relationship management (CRM) process is being evaluated for its effectiveness. The customer satisfaction rankings between two hotels (one with and the other without CRM implemented) are collected. The t-Test on Two (Independent) Variables with Unequal Variances can be applied. This test is used here because there are two distinctly different samples collected (customer survey results of two different hotels), and the variances of both samples are assumed to be dissimilar (due to the difference in geographic location plus the different demographics and psychographics of customers at both properties). This t-test can be applied to three types of hypothesis tests: a two-tailed test, a right-tailed test, and a left-tailed test. A two-tailed hypothesis tests the null hypothesis H0 such that the population mean difference between the two variables is statistically identical to the hypothesized mean difference. If HMD is set to zero, this is the same as saying that the first mean equals the second mean. The alternative hypothesis is that the real difference between the population means is statistically different from the hypothesized mean difference when tested using the sample data set. If HMD is set to zero, this is the same as saying that the first mean does not equal the second mean. A right-tailed hypothesis tests the null hypothesis H0 such that the population mean difference between the two variables is statistically less than or equal to the hypothesized mean difference. If HMD is set to zero, this is the same as saying that the first mean is less than or equal to the second mean. The alternative hypothesis is that the real difference between the population means is statistically greater than the hypothesized mean difference when tested using the sample data set. If HMD is set to zero, this is the same as saying that the first mean is greater than the second mean. A left-tailed hypothesis tests the null hypothesis H0 such that the difference between the population means of the two variables is statistically greater than or equal to the hypothesized mean difference. If HMD is set to zero, this is the same as saying that the first mean is greater than or equal to the second mean. The alternative hypothesis is that the real difference between the population means is statistically less than the hypothesized mean difference when tested using the sample data set. If HMD is set to zero, this is the same as saying that the first mean is less than the second mean. If the calculated p-value is less than or equal to the significance level in the test, reject the null hypothesis and conclude that the true difference between the population means is not equal to (two-tailed test), less than (left-tailed test), or greater than (right-tailed test) the hypothesized mean difference, based on the sample tested. Otherwise, the true difference between the population means is statistically similar to the hypothesized mean difference.
For data requirements, see the section called “Two Variables with Dependent Means (t-Test).”
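A comparable sketch for the unequal-variance case, again using SciPy with made-up satisfaction rankings; setting equal_var=False runs Welch's form of the test, which does not pool the variances and adjusts the degrees of freedom:

```python
from scipy import stats

# Hypothetical customer satisfaction rankings at the two hotels.
hotel_with_crm = [8.2, 7.9, 8.8, 9.1, 8.5, 8.7, 8.3, 9.0]
hotel_without_crm = [7.1, 6.5, 8.9, 5.8, 7.7, 6.2, 8.4, 5.9]

# equal_var=False: Welch's t-test with the Welch-Satterthwaite df adjustment.
t_stat, p_value = stats.ttest_ind(hotel_with_crm, hotel_without_crm, equal_var=False)
print(t_stat, p_value)
```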
TWO VARIABLES THAT ARE INDEPENDENT TESTING FOR MEANS (Z-TEST) The two-variable Z-test is appropriate when the population standard deviations are known for the two samples, and the sampling distribution of each variable is assumed to be approximately normal. This applies when the number of data points of each variable exceeds 30. To illustrate, suppose that market research was conducted on two different markets and the samples collected are large (N must exceed 30 for both variables). The researcher is interested in testing whether there is a statistically significant difference between the two markets. Further suppose that such a market survey has been performed many times in the past and the population standard deviations are known. A two independent variables Z-test can be applied because the sample size exceeds 30 in each market and the population standard deviations are known. This Z-test can be applied to three types of hypothesis tests: a two-tailed test, a right-tailed test, and a left-tailed test. A two-tailed hypothesis tests the null hypothesis H0 such that the difference between the two population means is statistically identical to the hypothesized mean. The alternative hypothesis is that the real difference between the two population means is statistically different from the hypothesized mean when tested using the sample data set. A right-tailed hypothesis tests the null hypothesis H0 such that the difference between the two population means is statistically less than or equal to the hypothesized mean. The alternative hypothesis is that the real difference between the two population means is statistically greater than the hypothesized mean when tested using the sample data set. A left-tailed hypothesis tests the null hypothesis H0 such that the difference between the two population means is statistically greater than or equal to the hypothesized mean. The alternative hypothesis is that the real difference between the two population means is statistically less than the hypothesized mean when tested using the sample data set.
Data Requirements For the two independent variables Z-test, create data tables such as the one in Figure 151.9, and select the data area two variables at a time (e.g., select the data from A2 to B34, or data points 10 to 65). To extend the data set, just add more observations (rows) or more variables (columns), but make sure that you have at least 30 data points in each variable to run a Z-test. Please see Figure 151.10.
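Because the population standard deviations are treated as known, the Z statistic is easy to reproduce by hand; a sketch with illustrative summary statistics (the function name and inputs are hypothetical, not Toolkit names):

```python
import math
from scipy.stats import norm

def two_sample_z(x1_bar, x2_bar, sigma1, sigma2, n1, n2, hmd=0.0):
    """Z statistic for the difference of two means with known population sigmas."""
    se = math.sqrt(sigma1**2 / n1 + sigma2**2 / n2)
    z = (x1_bar - x2_bar - hmd) / se
    # two-tailed, right-tailed, and left-tailed p-values, respectively
    return z, 2 * norm.sf(abs(z)), norm.sf(z), norm.cdf(z)

z, p_two, p_right, p_left = two_sample_z(10.3, 9.8, 2.1, 1.9, 45, 50)
print(z, p_two, p_right, p_left)
```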
TWO VARIABLES THAT ARE INDEPENDENT TESTING FOR PROPORTIONS (Z-TEST) The two-variable Z-test on proportions is appropriate when the sampling distribution is assumed to be approximately normal. This applies when the number of data points of both samples exceeds 30. Further, the data should all be proportions and be between 0 and 1.
FIGURE 151.9 Data requirements for a two-variable independent Z-test
To illustrate, suppose that brand research was conducted on two different headache pills and the samples collected are large (N must exceed 30 for both variables). The researcher is interested in testing whether there is a statistically significant difference between the proportions of headache sufferers in the two samples, each using a different headache medication. A two independent variables Z-test for proportions can be applied because the sample size exceeds 30 for each sample and the data collected are proportions.

FIGURE 151.10 Running a two-variable independent Z-test

This Z-test can be applied to three types of hypothesis tests: a two-tailed test, a right-tailed test, and a left-tailed test. A two-tailed hypothesis tests the null hypothesis H0 such that the difference in the population proportions is statistically identical to the hypothesized difference. If the hypothesized difference is set to zero, the null hypothesis tests whether the population proportions of the two samples are identical. The alternative hypothesis is that the real difference in population proportions is statistically different from the hypothesized difference when tested using the sample data set. A right-tailed hypothesis tests the null hypothesis H0 such that the difference in the population proportions is statistically less than or equal to the hypothesized difference. If the hypothesized difference is set to zero, the null hypothesis tests whether the population proportion of Sample 1 is less than or equal to the population proportion of Sample 2. The alternative hypothesis is that the real difference in population proportions is statistically greater than the hypothesized difference when tested using the sample data set. A left-tailed hypothesis tests the null hypothesis H0 such that the difference in the population proportions is statistically greater than or equal to the hypothesized difference. If the hypothesized difference is set to zero, the null hypothesis tests whether the population proportion of Sample 1 is greater than or equal to the population proportion
of Sample 2. The alternative hypothesis is that the real difference in population proportions is statistically less than the hypothesized difference when tested using the sample data set.
Data Requirements For the two independent variables Z-test for proportions, create data tables such as the one in Figure 151.11, and select the data two variables at a time (e.g., select the data from D2 to E34, or data points 0.0360 to 0.0300). To extend the data set, just add more observations (rows) or more variables (columns), but make sure that the data are proportions between 0 and 1 (not inclusive of 1). Finally, make sure that you have at least 30 data points to run a Z-test (Figures 151.11 and 151.12).

FIGURE 151.11 Data requirements for a two-variable Z-test for proportions

FIGURE 151.12 Running a two-variable Z-test for proportions
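For a hypothesized difference of zero, the pooled-proportion form of the statistic can be sketched as follows (the counts are hypothetical):

```python
import math
from scipy.stats import norm

def two_proportion_z(x1, n1, x2, n2):
    """Z-test for two proportions with a hypothesized difference of zero."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)          # pooled proportion under H0: p1 = p2
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))           # statistic and two-tailed p-value

# Hypothetical counts of sufferers reporting relief in each sample.
print(two_proportion_z(x1=48, n1=200, x2=33, n2=180))
```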
TWO VARIABLES THAT ARE INDEPENDENT TESTING FOR VARIANCES (F-TEST) The two-variable F-test analyzes the variances from two samples (the population variance of Sample 1 is tested with the population variance of Sample 2 to see if they are equal). This test is appropriate when the population standard deviation is not known and the sampling distribution is assumed to be approximately normal.
The measurement of variation is a key issue in Six Sigma and quality control applications. In this illustration, suppose that the variation or variance around the units produced in a manufacturing process is compared to that of another process to determine which process is more variable and, hence, less predictable in quality. This F-test typically is applied to a single type of hypothesis test: a two-tailed test. A two-tailed hypothesis tests the null hypothesis H0 such that the population variances of the two variables are statistically identical. The alternative hypothesis is that the population variances are statistically different from one another when tested using the sample data set. If the calculated p-value is less than or equal to the significance level in the test, reject the null hypothesis and conclude that the true population variances of the two variables are not statistically equal to one another. Otherwise, the true population variances are statistically similar to each other. For data requirements, see the section called "Two Variables that Are Independent Testing for Proportions (Z-Test)."
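A minimal sketch of the two-tailed F-test, assuming NumPy and SciPy are available (the process data are hypothetical):

```python
import numpy as np
from scipy.stats import f

def variance_f_test(sample1, sample2):
    """Two-tailed F-test that two samples come from equal-variance populations."""
    s1, s2 = np.var(sample1, ddof=1), np.var(sample2, ddof=1)
    F = s1 / s2                                   # ratio of sample variances
    df1, df2 = len(sample1) - 1, len(sample2) - 1
    p_two_tailed = 2 * min(f.cdf(F, df1, df2), f.sf(F, df1, df2))
    return F, min(p_two_tailed, 1.0)

process_a = [10.1, 9.8, 10.4, 10.0, 9.7, 10.3, 9.9, 10.2]
process_b = [10.6, 9.1, 11.2, 8.8, 10.9, 9.3, 11.0, 9.0]   # visibly more variable
print(variance_f_test(process_a, process_b))
```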
BASICS OF NONPARAMETRIC METHODOLOGIES Nonparametric techniques make no assumptions about the specific shape of the distribution from which the sample is drawn. This is different from parametric hypothesis tests, such as ANOVA or t-tests, where the sample is assumed to be drawn from a population that is normally or approximately normally distributed. If normality can be assumed, the power of the test is higher due to this normality restriction; however, if flexibility on distributional requirements is needed, nonparametric techniques are superior. In general, nonparametric methodologies provide these advantages over parametric tests:
Normality or approximate normality does not have to be assumed.
Fewer assumptions about the population are required (i.e., nonparametric tests do not require that the population assume any specific distribution).
Smaller sample sizes can be analyzed.
Samples with nominal and ordinal scales of measurement can be tested.
Sample variances do not have to be equal, as is required in parametric tests.

However, several caveats are worthy of mention:

Compared to parametric tests, nonparametric tests use data less efficiently.
The power of nonparametric tests is lower than that of parametric tests.
Therefore, if all the required assumptions are satisfied, it is better to use parametric tests. In reality, however, it may be difficult to justify these distributional assumptions, or small sample sizes may exist, requiring nonparametric tests. Thus, nonparametric tests should be used when the data are nominal or ordinal, or when the data are interval or ratio but the normality assumption is not met. The next sections list the nonparametric tests available for use in the software.
CHI-SQUARE GOODNESS OF FIT TEST The chi-square test for goodness of fit is used to examine whether a sample data set could have been drawn from a population having a specified probability distribution. The probability distribution tested here is the normal distribution. The null hypothesis tested is that the sample is randomly drawn from the normal distribution, versus the alternate hypothesis that the sample is not from a normal distribution. If the calculated p-value is less than or equal to the alpha significance value, reject the null hypothesis and accept the alternate hypothesis. Otherwise, if the p-value is higher than the alpha significance value, do not reject the null hypothesis.
Data Requirements For the chi-square goodness of fit test, create data tables such as the one shown in Figure 151.13, and select the data area (e.g., select the data from D6 to E13, or data points 800 to 4). To extend the data set, just add more observations (rows), as seen in Figures 151.13 and 151.14. For testing other distributions, use Risk Simulator’s distributional fitting tools instead.
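A sketch of the underlying computation, binning the data into equal-probability bins under a normal fitted to the sample (the bin count and data are illustrative choices, not necessarily the Toolkit's exact procedure):

```python
import numpy as np
from scipy.stats import norm, chi2

def chi_square_normality(data, n_bins=8):
    """Chi-square goodness-of-fit against a normal fitted to the sample."""
    data = np.asarray(data, dtype=float)
    n, mu, sigma = len(data), data.mean(), data.std(ddof=1)
    # interior cut points giving equal-probability bins under the fitted normal
    cuts = norm.ppf(np.arange(1, n_bins) / n_bins, loc=mu, scale=sigma)
    observed = np.bincount(np.searchsorted(cuts, data), minlength=n_bins)
    expected = np.full(n_bins, n / n_bins)
    stat = ((observed - expected) ** 2 / expected).sum()
    dof = n_bins - 1 - 2                      # two estimated parameters (mu, sigma)
    return stat, chi2.sf(stat, dof)

print(chi_square_normality(np.random.default_rng(1).normal(50, 5, 200)))
```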
FIGURE 151.13 Data requirements for the chi-square goodness of fit test

FIGURE 151.14 Running a chi-square goodness of fit test

CHI-SQUARE TEST OF INDEPENDENCE The chi-square test for independence examines two variables to see if there is some statistical relationship between them. This test is not used to find the exact nature of the relationship between the two variables but simply to test whether they could be independent of each other. The null hypothesis tested is that the variables are independent of each other, versus the alternate hypothesis that they are not. The chi-square test compares a table of observed frequencies against a table of expected frequencies, and the amount of disparity between the two tables is summarized in the chi-square test statistic. The observed frequencies reflect the cross-classification for members of a single sample, and the table of expected frequencies is constructed under the assumption that the null hypothesis is true.
Data Requirements For the chi-square test of independence, create data tables such as the one in Figure 151.15, and select the data area two variables at a time (e.g., select the data from A2 to B29, or data points 10 to 110). To extend the data set, just add more observations (rows), as seen in Figure 151.15.

FIGURE 151.15 Data requirements and running the chi-square test of independence
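The observed-versus-expected comparison the text describes is exactly what scipy.stats.chi2_contingency performs; a sketch with a hypothetical contingency table:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical cross-classification of one sample on two attributes.
observed = np.array([[20, 30, 25],
                     [30, 20, 25]])

# Expected frequencies are built under the independence null hypothesis.
stat, p_value, dof, expected = chi2_contingency(observed)
print(stat, p_value, dof)   # small p-value: reject independence
```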
CHI-SQUARE POPULATION VARIANCE TEST The chi-square test for population variance is used for hypothesis testing and confidence interval estimation for a population variance. The population variance of a sample is typically unknown; hence the need for quantifying this confidence interval. The population is assumed to be normally distributed. See the Excel model for details of this test.
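A sketch of the standard chi-square statistic and confidence interval for a population variance (the inputs are hypothetical; see the Excel model for the Toolkit's exact layout):

```python
from scipy.stats import chi2

def variance_test(sample_var, n, sigma0_sq, alpha=0.05):
    """Chi-square test and confidence interval for a population variance."""
    stat = (n - 1) * sample_var / sigma0_sq
    p_two_tailed = 2 * min(chi2.cdf(stat, n - 1), chi2.sf(stat, n - 1))
    ci = ((n - 1) * sample_var / chi2.ppf(1 - alpha / 2, n - 1),
          (n - 1) * sample_var / chi2.ppf(alpha / 2, n - 1))
    return stat, min(p_two_tailed, 1.0), ci

print(variance_test(sample_var=4.2, n=25, sigma0_sq=3.0))
```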
FRIEDMAN'S TEST The Friedman test is a form of nonparametric test that makes no assumptions about the specific shape of the population from which the sample is drawn, allowing smaller sample data sets to be analyzed. This method is the extension of the Wilcoxon signed-rank test for paired samples. The corresponding parametric test is the randomized block multiple treatment ANOVA, but unlike the ANOVA, the Friedman test does not require that the data set be randomly sampled from normally distributed populations with equal variances. See later sections of this chapter for details on these tests. The Friedman test uses a two-tailed hypothesis test where the null hypothesis is that the population medians of each treatment are statistically identical to the rest of the group; that is, there is no treatment effect among the different treatment groups. The alternative hypothesis is that the real population medians are statistically different from one another when tested using the sample data set; that is, the medians are statistically different, which means that there is a statistically significant effect among the different treatment groups. If the calculated p-value is less than or equal to the alpha significance value, reject the null hypothesis and accept the alternate hypothesis. Otherwise, if the p-value is higher than the alpha significance value, do not reject the null hypothesis.
Data Requirements For the Friedman test, create data tables such as the one shown in Figure 151.16, and select the data area (e.g., select the data from C22 to F32, or data points Treatment 1 to 80). If selecting the headers, remember to select "Treat first row as headers" (Figure 151.16). To extend the data set, just add more observations (rows).
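A quick cross-check of the Friedman test using SciPy, with hypothetical treatment scores aligned by block (row):

```python
from scipy.stats import friedmanchisquare

# Hypothetical scores: one list per treatment, aligned by block.
treatment_1 = [72, 65, 80, 75, 68, 78, 70, 74, 69, 80]
treatment_2 = [70, 60, 78, 73, 66, 74, 68, 72, 65, 77]
treatment_3 = [75, 68, 82, 78, 71, 80, 73, 77, 72, 83]

stat, p_value = friedmanchisquare(treatment_1, treatment_2, treatment_3)
print(stat, p_value)   # small p-value: at least one treatment median differs
```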
FIGURE 151.16 Data requirements and running the Friedman test for randomized block design
KRUSKAL-WALLIS TEST The Kruskal-Wallis test is a form of nonparametric test that makes no assumptions about the specific shape of the population from which the sample is drawn, allowing smaller sample data sets to be analyzed. This method is the extension of the Wilcoxon rank-sum (Mann-Whitney) test for two independent samples, comparing the statistical properties of more than two independent samples. The corresponding parametric test is the one-way ANOVA, but unlike the ANOVA, the Kruskal-Wallis test does not require that the data set be randomly sampled from normally distributed populations with equal variances. The Kruskal-Wallis test is a two-tailed hypothesis test where the null hypothesis is that the population medians of each treatment are statistically identical to the rest of the group; that is, there are no different effects among the different treatment groups. The alternative hypothesis is that the real population medians are statistically different from one another when tested using the sample data set; that is, the medians are statistically different, which means that there is a statistically significant effect among the different treatment groups. If the calculated p-value is less than or equal to the alpha significance value, reject the null hypothesis and accept the alternate hypothesis. Otherwise, if the p-value is higher than the alpha significance value, do not reject the null hypothesis. The benefit of the Kruskal-Wallis test is that it can be applied to ordinal, interval, and ratio data, whereas ANOVA is applicable only to interval and ratio data. The Kruskal-Wallis test can also be run with fewer data points. To illustrate, suppose that three different drug indications (T = 3) were developed and tested on 100 patients each (N = 100). The Kruskal-Wallis test can be applied to test whether these three drugs are all statistically equally effective. If the calculated p-value is less than or equal to the significance level used in the test, reject the null hypothesis and conclude that there is a significant difference among the different treatments. Otherwise, the treatments are all equally effective.
Data Requirements For the Kruskal-Wallis test, create data tables such as the one in Figure 151.17, and select the data area (e.g., select the data from C40 to F50, or data points Treatment 1 to 80). To extend the data set, just add more observations (rows) or more treatment variables to compare (columns). If selecting the variable names, remember to select "Treat first row as headers."

FIGURE 151.17 Data requirements and running the Kruskal-Wallis test for a nonparametric one-way ANOVA equivalence
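A quick cross-check with SciPy, using hypothetical drug-response data (shortened from the N = 100 illustration):

```python
from scipy.stats import kruskal

# Hypothetical patient responses for the three drug indications.
drug_a = [4.1, 3.8, 5.0, 4.4, 3.9, 4.7]
drug_b = [3.2, 2.9, 3.8, 3.5, 3.0, 3.6]
drug_c = [4.0, 4.3, 3.7, 4.5, 4.2, 3.9]

stat, p_value = kruskal(drug_a, drug_b, drug_c)
print(stat, p_value)   # reject H0 if p <= the chosen significance level
```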
LILLIEFORS TEST The Lilliefors test is a form of nonparametric test that makes no assumptions about the specific shape of the population from which the sample is drawn, allowing for smaller sample data sets to be analyzed. This test evaluates the null hypothesis of whether the data sample was drawn from a normally distributed population versus an alternate hypothesis that the data sample is not normally distributed. If the calculated p-value is less than or equal to the alpha significance value, reject the null hypothesis and accept the alternate hypothesis. Otherwise, if the p-value is higher than the alpha significance value, do not reject the null hypothesis. This test relies on two cumulative frequencies: one derived from the sample data set, the second derived from a theoretical distribution based on the mean and standard deviation of the sample data. An alternative to this test is the Chi-Square test for normality, which requires more data points to run than the Lilliefors test.
Data Requirements For the Lilliefors test for normality, create data tables such as the one shown in Figure 151.18, and select the data area one variable at a time (e.g., select the data from A2 to A29, or data points 10 to 66). To extend the data set, just add more observations (rows).

FIGURE 151.18 Data requirements and running the nonparametric Lilliefors test for normality
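Outside the Toolkit, the statsmodels library implements this test directly; a sketch with simulated data:

```python
import numpy as np
from statsmodels.stats.diagnostic import lilliefors

# Simulated sample; in practice, substitute the collected observations.
data = np.random.default_rng(7).normal(loc=40, scale=6, size=28)

ks_stat, p_value = lilliefors(data, dist='norm')
print(ks_stat, p_value)   # p <= alpha: reject the normality null hypothesis
```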
RUNS TEST The runs test is a form of nonparametric test that makes no assumptions about the specific shape of the population from which the sample is drawn, allowing for smaller sample data sets to be analyzed. This test evaluates the randomness of a series of observations by analyzing the number of runs it contains. A run is a consecutive appearance of one or more observations that are similar. The null hypothesis tested is whether the data sequence is random versus the alternate hypothesis that the data sequence is not random. If the calculated p-value is less than or equal to the alpha significance value, reject the null hypothesis and accept the alternate hypothesis. Otherwise, if the p-value is higher than the alpha significance value, do not reject the null hypothesis.
Data Requirements For the runs test for randomness, create data tables such as the one shown in Figure 151.19, and select the data area multiple variables at a time (e.g., select the data from A2 to J29, or data points 10 to 66). To extend the data set, just add more observations (rows) and variables (columns).

FIGURE 151.19 Data requirements for the nonparametric runs test for randomness
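A sketch of the underlying computation using the common above/below-median definition of a run and the normal approximation (the Toolkit's exact variant may differ):

```python
import statistics
from scipy.stats import norm

def runs_test(series):
    """Runs test for randomness around the sample median (normal approximation)."""
    med = statistics.median(series)
    signs = [x > med for x in series if x != med]   # drop ties with the median
    n1 = sum(signs)                                 # points above the median
    n2 = len(signs) - n1                            # points below the median
    runs = 1 + sum(a != b for a, b in zip(signs, signs[1:]))
    mu = 2 * n1 * n2 / (n1 + n2) + 1
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    z = (runs - mu) / var ** 0.5
    return z, 2 * norm.sf(abs(z))

# A strictly alternating series yields too many runs (non-random).
print(runs_test([10, 14, 11, 15, 9, 16, 12, 17, 10, 18, 13, 19]))
```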
WILCOXON SIGNED-RANK TEST (ONE VARIABLE) The single-variable Wilcoxon signed-rank test is a form of nonparametric test that makes no assumptions about the specific shape of the population from which the sample is drawn, allowing for smaller sample data sets to be analyzed. This method looks at whether a sample data set could have been randomly drawn from a particular population whose median is being hypothesized. The corresponding parametric test is the one-sample t-test, which should be used if the underlying population is assumed to be normal, providing a higher statistical power on the test. The Wilcoxon signed-rank test can be applied to three types of hypothesis tests: a two-tailed test, a right-tailed test, and a left-tailed test. If the calculated Wilcoxon statistic is outside the critical limits for the specific significance level in the test, reject the null hypothesis and conclude that the true population median is not equal to (two-tail test), less than (left-tail test), or greater than (right-tail test) the hypothesized median based on the sample tested. Otherwise, the true population median is statistically similar to the hypothesized median. For data requirements, see the section called “One-Variable Testing for Means (t-Test).”
WILCOXON SIGNED-RANK TEST (TWO VARIABLES) The Wilcoxon signed-rank test for paired variables is a form of nonparametric test that makes no assumptions about the specific shape of the population from which the
sample is drawn, allowing for smaller sample data sets to be analyzed. This method looks at whether the median of the differences between the two paired variables is equal. This test is specifically formulated for testing the same or similar samples before and after an event (e.g., measurements taken before a medical treatment are compared against those measurements taken after the treatment to see if there is a difference). The corresponding parametric test is the two-sample t-test with dependent means, which should be used if the underlying population is assumed to be normal, providing a higher statistical power on the test. The Wilcoxon signed-rank test can be applied to three types of hypothesis tests: a two-tailed test, a right-tailed test, and a left-tailed test. To illustrate, suppose that a new engine design is tested against an existing engine design to see if there is a statistically significant difference between the two. The paired variable Wilcoxon signed-rank test can be applied. If the calculated Wilcoxon statistic is outside the critical limits for the specific significance level in the test, reject the null hypothesis and conclude that the difference between the true population medians is not equal to (two-tail test), less than (left-tail test), or greater than (right-tail test) the hypothesized median difference based on the sample tested. Otherwise, the true population median is statistically similar to the hypothesized median. For data requirements, see the section called “Two Dependent Variables Testing for Means (Z-Test).”
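A minimal sketch of the paired test using SciPy, with hypothetical before/after engine measurements:

```python
from scipy.stats import wilcoxon

# Hypothetical measurements on the same engines before and after a change.
before = [95.2, 94.8, 96.1, 95.5, 94.9, 95.7, 96.0, 95.1, 94.6, 95.4]
after  = [94.1, 94.9, 95.0, 94.7, 94.2, 94.8, 95.1, 94.4, 94.0, 94.6]

# Tests whether the median of the paired differences is zero (two-tailed);
# pass alternative='greater' or 'less' for the one-tailed variants.
stat, p_value = wilcoxon(before, after)
print(stat, p_value)
```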
SINGLE-FACTOR MULTIPLE TREATMENTS ANOVA The one-way ANOVA for single factor with multiple treatments is an extension of the two-variable t-test, looking at multiple variables simultaneously. The ANOVA is appropriate when the sampling distribution is assumed to be approximately normal. ANOVA can be applied only to the two-tailed hypothesis test. A two-tailed hypothesis tests the null hypothesis such that the population means of each treatment are statistically identical to the rest of the group, which means that there is no effect among the different treatment groups. The alternative hypothesis is such that the real population means are statistically different from one another when tested using the sample data set. To illustrate, suppose that three different drug indications (T = 3) were developed and tested on 100 patients each (N = 100). The one-way ANOVA can be applied to test if these three drugs are all equally effective statistically. If the calculated p-value is less than or equal to the significance level used in the test, reject the null hypothesis and conclude that there is a significant difference among the different treatments. Otherwise, the treatments are all equally effective.
Data Requirements For the One-Way ANOVA module, create tables such as the one in Figure 151.20, and select the data area (e.g., the values 15 to 25 in the figure). You can extend the data by adding rows of observations and columns of treatments.

FIGURE 151.20 Data requirements and running a one-way ANOVA
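A quick cross-check of the one-way ANOVA using SciPy, with hypothetical (shortened) drug data:

```python
from scipy.stats import f_oneway

# Hypothetical responses for the three drug indications.
drug_1 = [23, 25, 21, 24, 26, 22, 25, 23]
drug_2 = [20, 19, 22, 18, 21, 20, 19, 21]
drug_3 = [24, 26, 25, 27, 23, 26, 24, 25]

f_stat, p_value = f_oneway(drug_1, drug_2, drug_3)
print(f_stat, p_value)   # p <= alpha: at least one treatment mean differs
```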
RANDOMIZED BLOCK MULTIPLE TREATMENTS ANOVA The one-way randomized block ANOVA is appropriate when the sampling distribution is assumed to be approximately normal and when there exists a block variable for which ANOVA will control (the effects of this variable are blocked by controlling for it in the experiment). ANOVA can be applied only to the two-tailed hypothesis test. This analysis can test for the effects of the Treatments as well as the effectiveness of the Control or Block variable. If the calculated p-value for the treatment is less than or equal to the significance level used in the test, reject the null hypothesis and conclude that there is a significant difference among the different Treatments. If the calculated p-value for the Block variable is less than or equal to the significance level used in the test, reject the null hypothesis and conclude that there is a significant difference among the different Block variables. To illustrate, suppose that three different headlamp designs (B = 3) were developed and tested on four groups of volunteer drivers grouped by their age (T = 4). The one-way randomized block ANOVA can be applied to test whether these three headlamps are all statistically equally effective when tested using the volunteers' driving test grades. This test can determine whether the differences occur because of the treatment (the type of headlamp determines differences in driving test scores) or because of the block or controlled variable (age may yield different driving abilities).
Data Requirements For the randomized block ANOVA module, create tables such as the one in Figure 151.21, and select the data area (e.g., the values 90 to 73 in the figure). You can extend the data by adding rows of blocks and columns of treatments.

FIGURE 151.21 Data requirements and running a randomized block ANOVA
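A sketch of the equivalent additive two-factor model using statsmodels, with hypothetical grades laid out to mirror the blocks-by-treatments table the text describes:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical driving-test grades: 4 treatments crossed with 3 blocks.
df = pd.DataFrame({
    "grade":     [90, 85, 88, 83, 92, 87, 89, 84, 81, 79, 80, 73],
    "treatment": ["T1", "T2", "T3", "T4"] * 3,
    "block":     ["B1"] * 4 + ["B2"] * 4 + ["B3"] * 4,
})

# Additive model: tests the treatment effect while controlling for the block.
model = ols("grade ~ C(treatment) + C(block)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # F and p-values for treatment and block
```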
TWO-WAY ANOVA The Two-Way ANOVA is an extension of the single-factor and Randomized Block ANOVA by simultaneously examining the effects of two factors on the dependent variable, along with the effects of interactions between the different levels of these two factors. Unlike the randomized block design, this model examines the interactions between different levels of the factors, or independent variables. In a two-factor experiment, interaction exists when the effect of a level for one factor depends on which level of the other factor is present. There are three sets of null and alternate hypotheses to be tested in the two-way analysis of variance. The first test is on the first independent variable, where the null hypothesis is that no level of the first factor has an effect on the dependent variable. The alternate hypothesis is that there is at least one level of the first factor having an effect on the dependent variable. If the calculated p-value is less than or equal to the alpha significance value, reject the null hypothesis and accept the alternate hypothesis. Otherwise, if the p-value is higher than the alpha significance value, do not reject the null hypothesis.
The second test is on the second independent variable, where the null hypothesis is that no level of the second factor has an effect on the dependent variable. The alternate hypothesis is that at least one level of the second factor has an effect on the dependent variable. If the calculated p-value is less than or equal to the alpha significance value, reject the null hypothesis and accept the alternate hypothesis. Otherwise, if the p-value is higher than the alpha significance value, do not reject the null hypothesis. The third test is on the interaction of the first and second independent variables, where the null hypothesis is that there are no interacting effects between levels of the first and second factors. The alternate hypothesis is that there is at least one combination of levels of the first and second factors having an effect on the dependent variable. If the calculated p-value is less than or equal to the alpha significance value, reject the null hypothesis and accept the alternate hypothesis. Otherwise, if the p-value is higher than the alpha significance value, do not reject the null hypothesis.

FIGURE 151.22 Data requirements and running a two-way ANOVA
Data Requirements For the Two-Way ANOVA module, create tables such as the one in Figure 151.22, and select the data area (e.g., the values 804 to 835 in the figure). You can extend the data by adding rows of factors and columns of treatments. Note that the number of replications in the table is 2 (i.e., two rows of observations per Factor A type). Of course, you can increase the number of replications as required. The number of replications has to be consistent if you wish to extend the data set.
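A sketch of the three tests using statsmodels; the formula C(factor_a) * C(factor_b) expands to both main effects plus the interaction term (the data and replication count below are hypothetical):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical 2x2 design with two replications per cell.
df = pd.DataFrame({
    "y":        [804, 810, 835, 828, 812, 808, 820, 825],
    "factor_a": ["A1", "A1", "A1", "A1", "A2", "A2", "A2", "A2"],
    "factor_b": ["B1", "B1", "B2", "B2", "B1", "B1", "B2", "B2"],
})

# Main effects of each factor and their interaction, each with its own p-value.
model = ols("y ~ C(factor_a) * C(factor_b)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```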
152. Six Sigma—Sample Size Determination and Design of Experiments File Names: Six Sigma—Sample Size Correlation; Six Sigma—Sample Size DPU; Six Sigma—Sample Size Mean; Six Sigma—Sample Size Proportion; Six Sigma—Sample Size Sigma; Six Sigma—Delta Precision; Six Sigma—Design of Experiments and Combinatorics Location: Modeling Toolkit | Six Sigma Brief Description: Illustrating how to obtain the required sample size for performing hypothesis tests, from means to standard deviations and proportions Requirements: Modeling Toolkit, Risk Simulator Modeling Toolkit Functions Used: B2SixSigmaSampleSize, B2SixSigmaSampleSizeProportion, B2SixSigmaSampleSizeZeroCorrelTest, B2SixSigmaSampleSizeDPU, B2SixSigmaSampleSizeStdev, B2SixSigmaDeltaPrecision
In performing quality controls and hypothesis testing, the size of the samples collected is of paramount importance. Theoretically, it would be impossible or too expensive and impractical to collect information and data on the entire population to be tested (e.g., all outputs from a large manufacturing facility). Therefore, statistical sampling is required. The question is: What size sample is sufficient? The size of a statistical sample is the number of repeated measurements that are collected. It is typically denoted n, a positive integer. Sample size determination is critical in Six
Sigma and quality analysis, as different sample sizes lead to different accuracies and precisions of measurement. This can be seen in such statistical rules as the Law of Large Numbers and the Central Limit Theorem. With all else being equal, a larger sample size n leads to increased precision in estimates of various properties of the population. The question that always arises is: How many sample data points are required? The answer is that it depends on the error tolerances and precision required in the analysis. This model is used to compute the minimum required sample size given the required error, precision, and variance levels. There are five different sample size determination models in the Modeling Toolkit. The first model (Six Sigma—Sample Size Mean) computes the minimum required sample size for a hypothesis test (one- or two-tailed test), where the required errors and precisions are stated, as is the sample standard deviation (Figure 152.1). Alpha Error is the Type I error, also known as the significance level in a hypothesis test. It measures the probability of not having the true population mean included in the confidence interval of the sample; that is, it computes the probability of rejecting a true hypothesis. 1 – Alpha is, of course, the confidence level, or the probability that the true population mean resides in the sample confidence interval. Beta Error is the Type II error: the probability of accepting a false hypothesis, or the probability of not being able to detect changes in the mean. 1 – Beta is the power of the test. Delta Precision is the accuracy or precision with which the mean is to be estimated. For instance, a 0.10% Delta with a 5% Alpha for two tails means that the estimated mean is plus or minus 0.10%, at a 90% (1 – 2 × Alpha) confidence level. Finally, the Sigma Sample is the sample standard deviation of the data set. The remaining models are very similar in that they determine the appropriate sample size to test the standard deviation or sigma levels given some Alpha and Beta levels (Six Sigma—Sample Size Sigma), to test proportions (Six Sigma—Sample Size Proportion), or to test defects per unit (Six Sigma—Sample Size DPU). In addition, the Six Sigma—Sample Size Correlation model is used to determine the minimum required sample size to perform a hypothesis test on correlations. Finally, the Six Sigma—Delta Precision model works backward: given some Alpha and Beta errors and a given sample size, the Delta precision level is computed instead.

FIGURE 152.1 Sample size determination for the mean
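A sketch of the underlying sample-size formula for the mean, following the text's per-tail Alpha convention (the function name and default values are illustrative, not the Toolkit's):

```python
import math
from scipy.stats import norm

def sample_size_mean(sigma, delta, alpha=0.05, beta=0.10):
    """Minimum n to detect a mean shift of size delta.
    Follows the text's convention that Alpha is a per-tail error, so a
    two-tailed test at alpha = 0.05 corresponds to 90% (1 - 2 x Alpha) confidence."""
    z_a = norm.ppf(1 - alpha)    # per-tail critical value
    z_b = norm.ppf(1 - beta)     # the power of the test is 1 - beta
    return math.ceil(((z_a + z_b) * sigma / delta) ** 2)

print(sample_size_mean(sigma=10, delta=2))   # 215 with these inputs
```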
DESIGN OF EXPERIMENTS AND COMBINATORICS Another issue related to sampling and testing or experimentation is that of generating the relevant combinations and permutations of experimental samples to test. Say you have five different projects and need to enumerate all the combinations of projects that can occur. For instance, the value 00000 means no projects are chosen; 10000 means project 1 is chosen and no others; 01000 means project 2 is chosen and no others; and so forth. The result can also include multiple projects, such as 10101, where projects 1, 3, and 5 are chosen. Clearly, with more projects in the portfolio, the combinations can become rather intractable. This is where combinatorial math comes in. We will use Excel to determine the combinations.
PROCEDURE 1. Start Excel's Analysis Toolpak (Tools | Add-Ins | Analysis Toolpak). 2. Go to the Combinatorics model and view the examples. The example shown illustrates how to convert a five-project or five-asset portfolio into all possible combinations by using the DEC2BIN function in Excel. The DEC2BIN function stands for decimal-to-binary: it takes a decimal value and converts it into the equivalent binary string of 0s and 1s. If there are five projects in a portfolio, then there are 2^5 = 32 combinations. As binary coding starts from 0, we can number these outcomes from 0, 1, 2, all the way to 31 (providing 32 possible outcomes). We then convert these to a five-digit binary sequence using the function DEC2BIN(0,5), which yields 00000. As another example, DEC2BIN(10,5) yields 01010. By creating this combinatorial matrix, we see all the possible combinations for five projects. We can then pick out the ones required. For example, say we need to select two out of five projects and need all the possible combinations. We know that there are 10 such combinations, and by using the example model, we can determine exactly what they are. Figure 152.2 shows an example; a Python sketch of the same enumeration follows. Using this approach, we can add in Risk Simulator to randomly select the projects and cycle through all possible combinations of projects that can be executed in time. As an example, say you have five projects in a portfolio and you would like to randomly choose three projects to execute in a sequence of three years. In each year you cannot execute more than three projects, and no project can repeat itself. In addition, you wish to run all possible combinations of possibilities. Such an example is also included in the model and is self-explanatory.
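The Python equivalent of the DEC2BIN enumeration referenced above (illustrative only; the model itself uses Excel):

```python
from itertools import combinations

n = 5

# All 2**5 = 32 on/off portfolios, mirroring Excel's DEC2BIN(k, 5).
for k in range(2 ** n):
    print(format(k, f"0{n}b"))       # format(10, "05b") -> "01010"

# Just the 10 ways of choosing exactly 2 projects out of 5.
for chosen in combinations(range(1, n + 1), 2):
    print(chosen)                    # e.g., (1, 3) means projects 1 and 3
```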
FIGURE 152.2 Combinatorics results
153. Six Sigma—Statistical and Unit Capability Measures, Specification Levels, and Control Charts File Names: Six Sigma—Statistical Capability; Six Sigma—Unit Capability; Six Sigma—Control Charts Location: Modeling Toolkit | Six Sigma Brief Description: Illustrating how to obtain the various capability measures, such as the capability index (Cpk and Cp), yields, defects per million opportunities (DPMO), and others Requirements: Modeling Toolkit, Risk Simulator Modeling Toolkit Functions Used: B2SixSigmaStatCP, B2SixSigmaStatCPK, B2SixSigmaStatDPU, B2SixSigmaStatDPMO, B2SixSigmaStatProcessSigma, B2SixSigmaUnitDPU, B2SixSigmaUnitDPMO, B2SixSigmaUnitYield, B2SixSigmaUnitCPK, B2SixSigmaUnitProcessSigma
CAPABILITY MEASURES AND SPECIFICATION LEVELS Capability measures in Six Sigma are the central analytics in quality control. Defective Proportion Units (DPUs) are the number of defects observed with respect to the total opportunities or outcomes, and they can be measured with respect to a million opportunities (defective parts per million [DPPM], also known as defects per million opportunities [DPMO]). The Process Capability Index (Cpk) takes into account an off-centered distribution and analyzes it as a centered process producing a similar level of defects (defined as the ratio between the permissible deviation, measured from the mean value to the nearest specification limit of acceptability, and the actual one-sided 3-sigma spread of the process). Simplistically, it measures how many times you can fit three standard deviations of the process between the mean of the process and the nearest specification limit. Assuming that the process is stable and predictable, if you can do this once, Cpk is 1, and your process probably needs attention. If you can do it 1.5 times, your process is excellent, and you are on the path to being able to discontinue final inspection. If you can do it 2 times, you have an outstanding process. If Cpk is negative, the process mean is outside the specification limits. A Cpk of at least 1.33 is desired. Finally, the sigma of the process is determined. Typically, this process sigma value can be used to compare before-and-after effects of some quality improvement program or to see if the process is up to par in terms of a target sigma level.
FIGURE 153.1 Six Sigma capability modeling

This statistical capability model uses as inputs the statistical properties of the process, including the observed mean and standard deviation (sigma) as well as predefined lower and upper specification limits (LSLs and USLs). In contrast, the Unit Capability model computes some of the same capability measures but uses the actual number of defects and the population size. In addition, the Cp index is computed, which relates to the potential of the process to meet specification, regardless of where the process distribution falls. The process may not be performing within specifications, but the index tells you how the process will perform if you can shift the distribution mean to the desired target (without changing the distribution). If you have a Cp of 2.0, the process has the potential to be a Six Sigma process. In contrast, Cpk actually compares the distribution of the data to the specification limits to determine its actual performance. If you have a Cpk of 2.0, you have a Six Sigma process. As discussed, a Cpk of at least 1.33 is desired, and 1.5 is excellent. Please see Figure 153.1.
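The two indexes reduce to one-line formulas; a sketch with a hypothetical process:

```python
def cp(usl, lsl, sigma):
    """Potential capability: specification width over six process sigmas."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mu, sigma):
    """Actual capability: distance from the mean to the nearest spec limit,
    in units of three process sigmas (negative if the mean is out of spec)."""
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Hypothetical process: mean 10.2, sigma 0.25, specs 9.5 to 11.0.
print(cp(11.0, 9.5, 0.25))           # 1.00 -> capable only if perfectly centered
print(cpk(11.0, 9.5, 10.2, 0.25))    # 0.93 -> off-center, needs attention
```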
QUALITY CONTROL CHARTS Sometimes the specification limits are not set; instead, statistical control limits are computed based on the actual data collected (e.g., the number of defects in a manufacturing line). For instance, in Figure 153.2, we see 20 samples taken at various times of a manufacturing process. The number of units sampled varied over time, and the number of defective parts was also gathered. The upper control limit (UCL) and lower control limit (LCL) are computed, as are the central line (CL) and other sigma levels (a sketch of the p-chart limit computation appears after the rules below). The resulting chart is called a control chart, and if the process is out of control, the actual defect line will fall outside of the UCL and LCL lines. Typically, when the LCL is a negative value, we set the floor at zero, as illustrated in Figure 153.2.

FIGURE 153.2 Example quality control p-chart

In the interpretation of a control chart, by adding in the +/− 1 and 2 sigma lines, we can divide the chart into several areas or zones, as illustrated in Figure 153.3.

FIGURE 153.3 Interpreting control charts

Typically the process is considered out of control if, on the control chart:
One point is beyond Area A.
Two out of three consecutive points are in Area A or beyond.
Four out of five consecutive points are in Area B or beyond.
Eight consecutive points are in Area C or beyond.
Additionally, a potential structural shift can be detected if any one of these occurs:
At least 10 out of 11 sequential points are on one side of the CL.
At least 12 out of 14 sequential points are on one side of the CL.
At least 14 out of 17 sequential points are on one side of the CL.
At least 16 out of 20 sequential points are on one side of the CL.
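The p-chart limit computation referenced above can be sketched as follows (the defect counts and sample sizes are hypothetical):

```python
import math

def p_chart_limits(defects, sample_sizes, n_sigma=3):
    """Center line and control limits for a p-chart with varying sample sizes."""
    p_bar = sum(defects) / sum(sample_sizes)        # CL: overall defect proportion
    limits = []
    for n in sample_sizes:
        se = math.sqrt(p_bar * (1 - p_bar) / n)
        lcl = max(0.0, p_bar - n_sigma * se)        # floor negative LCLs at zero
        limits.append((lcl, p_bar + n_sigma * se))
    return p_bar, limits

cl, lims = p_chart_limits(defects=[4, 6, 3, 8, 5], sample_sizes=[50, 60, 45, 70, 55])
print(cl, lims)
```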
The Modeling Toolkit presents several control chart types, and each type is used under different circumstances:
X chart. This chart is used when the variable has raw data values, there are multiple measurements in a sample experiment, multiple experiments are run, and the average of the collected data is of interest.
R chart. This chart is used when the variable has raw data values, there are multiple measurements in a sample experiment, multiple experiments are run, and the range of the collected data is of interest.
XmR chart. This chart is used when the variable has raw data values, a single measurement is taken in each sample experiment, multiple experiments are run, and the actual value of the collected data is of interest.
c151-160
JWBK121-Mun
March 17, 2008
13:9
Char Count=
154. Valuation—Buy versus Lease
631
P-chart. This chart is used when the variable of interest is an attribute (e.g., defective or nondefective), the data collected are in proportions of defects (or number of defects in a specific sample), there are multiple measurements in a sample experiment, multiple experiments are run, the average proportion of defects of the collected data is of interest, and the number of samples collected in each experiment differs.
NP-chart. This chart is used when the variable of interest is an attribute (e.g., defective or nondefective), the data collected are in proportions of defects (or number of defects in a specific sample), there are multiple measurements in a sample experiment, multiple experiments are run, the average proportion of defects of the collected data is of interest, and the number of samples collected in each experiment is constant for all experiments.
C-chart. This chart is used when the variable of interest is an attribute (e.g., defective or nondefective), the data collected are in total number of defects (actual count in units), there are multiple measurements in a sample experiment, multiple experiments are run, the average number of defects of the collected data is of interest, and the number of samples collected in each experiment is the same.
U-chart. This chart is used when the variable of interest is an attribute (e.g., defective or nondefective), the data collected are in total number of defects (actual count in units), there are multiple measurements in a sample experiment, multiple experiments are run, the average number of defects of the collected data is of interest, and the number of samples collected in each experiment differs.
154. Valuation—Buy versus Lease File Name: Valuation—Buy versus Lease Location: Modeling Toolkit | Valuation | Buy versus Lease Brief Description: Illustrating how to compute the net advantage to leasing by comparing the cash flow differential between purchasing an asset or equipment versus leasing it Requirements: Modeling Toolkit, Risk Simulator
Leasing versus buying decisions have plagued decision makers for the longest time. A lease is simply a contractual agreement between a lessee and lessor. This agreement specifies that the lessee has the right to use the asset in return for a periodic payment to the lessor who owns the asset. Typically the lessor is the equipment or asset manufacturer or a third-party leasing company that purchases the asset from the manufacturer in order to provide a lease to the user. Leases usually are divided into two categories: operating and financial. An operating lease is a lease for a certain period of time that is typically less than the economic life of the asset, which
means that the lease is not fully amortized (the total lease payments received do not cover the cost of the asset). In addition, the lessor or owner of the asset has to cover the maintenance of the asset throughout the life of the lease, and the lease can be terminated early (an option to abandon) whereby the lessor has the ability to sell the asset at some residual value to recoup the value of the initial asset cost. A financial lease, in contrast, is usually fully amortized and the lease cannot be canceled. In addition, no maintenance is provided for the lessee. To entice the lessee into a financial lease, several provisions can be included in the lease. In a sale and leaseback provision, the lessee owns the asset but sells it to the lessor, receives a cash payment up front, and leases the asset back from the lessor for periodic lease payments. In leveraged leases, a third party finances the lessor and obtains periodic payments from the lessor, part of which go toward paying off the third-party financing. Other types of leases exist, and exotic instruments and options can be embedded in the contract. For financial reporting purposes, certain leases are classified as capital leases, which appear on the balance sheet (the present value of future lease payments appears as a liability, with a counterbalance on the asset side for the same amount), whereas operating leases do not appear on the balance sheet and are captured only in the income statement. Therefore, debt displacement is sometimes a hidden cost of leases and should be considered as part of the overall optimal capital allocation in a company. Figure 154.1 illustrates how to compare a purchase against a lease, specifically looking at the net advantage to leasing. In the first scenario, the asset or equipment is purchased; the inputs are the cost of the machine if purchased immediately, the potential operating savings of having the machine, the corporate tax rate, and the before-tax discount rate. Using these values, the after-tax operating savings are computed over the life of the lease, as is the tax shield or tax savings from depreciation by owning the equipment (assuming simple straight-line depreciation, or you may insert your own accelerated depreciation schedule if required), and the net cash flow for purchasing the equipment is computed. In the lease scenario, the lease payments are listed and the tax savings from the lease are also included. The after-tax operating savings are added to this tax savings to obtain the net cash flow of leasing. To compute the net advantage to leasing (NAL), the net of the cash flows between the lease and purchase scenarios is computed and discounted back to time zero using an after-tax discount rate. If the NAL is positive, leasing is the better decision; conversely, if the NAL is negative, purchasing the equipment is the better decision. In addition, sometimes leases are based on fluctuating interest rates that can be uncertain, or the cost of funds can fluctuate. These inputs can be subjected to simulation, and the resulting NAL is set as the forecast. The probability that the NAL is positive can then be determined; this is the probability that the lease will be more advantageous than the purchase.
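A sketch of the NAL computation as just described: discount the lease-minus-buy cash-flow differential at the after-tax rate (the cash flows below are hypothetical, not taken from Figure 154.1):

```python
def net_advantage_to_leasing(buy_cash_flows, lease_cash_flows, after_tax_rate):
    """NPV at time zero of the (lease minus buy) cash-flow differential.
    Both lists start at year 0; a positive NAL favors leasing."""
    diff = (l - b for l, b in zip(lease_cash_flows, buy_cash_flows))
    return sum(cf / (1 + after_tax_rate) ** t for t, cf in enumerate(diff))

# Hypothetical net cash flows for each scenario (year 0 through year 4).
buy = [-100000, 22000, 22000, 22000, 22000]
lease = [0, -4000, -4000, -4000, -4000]
nal = net_advantage_to_leasing(buy, lease, after_tax_rate=0.06)
print(nal)   # > 0: lease; < 0: buy
```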
FIGURE 154.1 Buy versus lease scenario cash flows
155. Valuation—Banking: Classified Loan Borrowing Base File Name: Banking—Classified Loan Borrowing Base Location: Modeling Toolkit | Banking Models | Classified Loan Borrowing Base Brief Description: Illustrating how to value and classify a loan based on accounts receivable valuation Requirements: Modeling Toolkit, Risk Simulator Modeling Toolkit Functions Used: B2CreditAcceptanceCost, B2CreditRejectionCost Special Credits: This model was contributed by Professor Morton Glantz. He is a financial consultant, educator, and advisor to corporate financial executives, government ministers, privatization managers, investment and commercial bankers, public accounting firms, members of merger and acquisition teams, strategic planning executives, management consultants, attorneys, and representatives of foreign governments and international banks. As a senior officer of JP Morgan Chase, he specialized in credit analysis and credit risk management, risk grading systems, valuation models, and professional training. He is on the finance faculty at the Fordham Graduate School of Business, has appeared in the Harvard University International Directory of Business and Management Scholars and Research, and earned the Fordham University Dean's Award for Faculty Excellence on three occasions, a school record. His areas of expertise include advanced forecasting methods, advanced credit risk analysis, credit portfolio risk management, essentials of corporate finance, corporate valuation modeling and analysis, credit risk analysis and modeling, project finance, and corporate failure. He has authored a number of books, including: Credit Derivatives: Techniques to Manage Credit Risk for Financial Professionals (with Erik Banks and Paul Siegel) (New York: McGraw-Hill, 2006); Optimal Trading Strategies (New York: AMACOM, 2003) (coauthor: Dr. Robert Kissell); Managing Bank Risk: An Introduction to Broad-Base Credit Engineering (New York: Academic Press/Elsevier, 2002), which received the RISKBOOK.COM Award: Best Finance Books of 2003; Scientific Financial Management (New York: AMACOM, 2000); Loan Risk Management (New York: McGraw-Hill, 1995); and The Bankers Handbook on Credit Risk, with Dr. Johnathan Mun (New York: Academic Press/Elsevier, 2008).
The key characteristic of a borrowing base workout loan is that the amount a borrower may borrow at any one time is restricted by both the total exposure allowed under the applicable workout agreement and the borrowing base. Borrowing base loans are not limited to workout loans. This is in contrast to traditional loans where, as long as the borrower complies with loan covenants, he or she may draw down the entire loan facility. The borrowing base in an asset-based loan may consist of accounts receivable, inventory, equipment, and even real estate. The quality of the collateral, and the ease with which it can be converted into cash, influences the amount (the level of cash advance) approved. This model assesses the creditworthiness of a borrower, given that the loan is backed by the accounts receivable of the debt holder's firm. In this model, we can determine the probability that the loan will be paid back and decide whether the loan should be issued. Please see Figure 155.1.

FIGURE 155.1 Borrowing base accounts receivable model

This model is similar to accept-reject decisions in capital budgeting, whereby the specific application introduced here is in the area of "problem loans," where the loan
workout exercise is to control the distressed-borrower approval based on orders sold on credit, such as accounts receivable. The model reveals the relationship between marginal returns and marginal costs of approving an order and granting credit terms. For example, if acceptance costs are below rejection costs on an incremental order, the opportunity cost of rejecting the credit is too much; thus the credit would be accepted. The model is a powerful tool in problem loan turnarounds. For example, the banker would allow the distressed borrower to approve a weak credit order if the incremental cost of production was very low because even if the receivable went bad, the borrower would not lose much if the incremental cost of manufacturing a widget was $1.00 while the widget was sold for $100.00. The model can be made stochastic because there are assumption variables, forecast variables, and possibly constraints. The Acceptance/Rejection Costs part of the model serves as a framework for evaluating decisions on changing credit policies (Figure 155.2).
FIGURE 155.2 Acceptance and rejection costs of new credit
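To make the marginal-cost logic concrete, here is a minimal Python sketch of the accept-reject rule described above. All numbers are hypothetical, and the expected-cost formulas are a simplified stand-in for the B2CreditAcceptanceCost and B2CreditRejectionCost functions, whose exact signatures are not reproduced here.

```python
# Hypothetical order-level accept/reject decision on a credit sale.
# Simplified stand-in for the Modeling Toolkit credit cost functions.
unit_cost = 1.00      # incremental cost of manufacturing the widget
unit_price = 100.00   # price of the widget sold on credit
p_default = 0.30      # assumed probability the receivable goes bad

# Expected cost of accepting: the production cost is lost if the buyer defaults
acceptance_cost = p_default * unit_cost

# Expected cost of rejecting: the margin is forgone if the buyer would have paid
rejection_cost = (1 - p_default) * (unit_price - unit_cost)

decision = "accept" if acceptance_cost < rejection_cost else "reject"
print(f"acceptance cost ${acceptance_cost:.2f} vs. "
      f"rejection cost ${rejection_cost:.2f}: {decision}")
```

Even with a 30% default probability, the $0.30 expected acceptance cost is dwarfed by the $69.30 expected opportunity cost of rejection, which is exactly the $1.00-cost versus $100.00-price intuition in the text.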
156. Valuation—Banking: Break-Even Inventory with Seasonal Lending Trial Balance Analysis

File Name: Banking—Break Even Loan Inventory
Location: Modeling Toolkit | Valuation | Banking Models—Classified Break Even Loan Inventory
Brief Description: Illustrating how to value the creditworthiness of a line of credit or debt based on the company's loan inventory
Requirements: Modeling Toolkit, Risk Simulator
Special Credits: This model was contributed by Professor Morton Glantz; see Chapter 155 for his full biography and publications.
Trial balances take snapshots of general ledger accounts for assets, liabilities, and owner's equity at any point in the season. Lenders use the break-even inventory method, derived from a trial balance, to detect liquidity problems that compromise repayment. Liquidity problems stem from inventory devaluations (write-downs), and trouble can surface quickly, two or three months into the season, often on the heels of disappointing bookings. At this point, inventory value usually falls below recorded book value. Trial balances are associated with the periodic inventory method, where inventory is counted and valued at the lower of cost or market periodically, as opposed to the perpetual inventory system, where inventory is counted and valued frequently. Because ending inventory determines cost of goods sold and ultimately profits, bankers should know inventory value throughout the season. Examples of perpetual inventory include items scanned into the computer at the cash register and high-ticket items that are painstakingly monitored. Of the two methods, periodic inventory is more uncertain, rendering seasonal analysis indefinable at best, if not impossible, unless break-even techniques are employed effectively. Though accountants have not appraised inventory value under periodic inventory, management certainly has. Thus, lenders derive break-even inventory by first asking management for estimated lower-of-cost-or-market inventory values. They then compute the spread between the two values, break-even and estimated. For example, in Figure 156.1, if the inventory required to break even is $7,500 while management's estimated inventory is $6,000, the net loss for the period is $1,500. Interim losses caused by large inventory write-downs reduce the chances of a short-term loan cleanup. The logic is that write-downs follow production problems, poor strategic planning, stockpiling, canceled orders, overly optimistic sales projections, and flawed cash budgets.
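The spread computation itself is simple arithmetic; a short sketch using the figures from the example (nothing here goes beyond what the text states):

```python
break_even_inventory = 7_500.0   # inventory value required to break even (trial balance)
management_estimate = 6_000.0    # management's lower-of-cost-or-market estimate

# The shortfall between break-even and estimated inventory is the interim net loss
net_loss = break_even_inventory - management_estimate
print(f"Net loss for the period: ${net_loss:,.0f}")   # $1,500
```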
FIGURE 156.1 Inventory valuation model
157. Valuation—Banking: Firm in Financial Distress

File Name: Banking—Firm in Financial Distress
Location: Modeling Toolkit | Banking Models | Firm in Financial Distress
Brief Description: Illustrating how to value the continued operations of a firm under distress as well as to ascertain its creditworthiness
Requirements: Modeling Toolkit, Risk Simulator
Special Credits: This model was contributed by Professor Morton Glantz; see Chapter 155 for his full biography and publications.
In this model, the publicly available McKinsey discounted cash flow (DCF) model is employed to estimate the probability that the equity value of a company falls below zero. (This is set as an output forecast variable in a simulation.) A sample snapshot of the model is shown in Figure 157.1.
FIGURE 157.1 Firm in financial distress
When equity value falls below zero, the firm has an excess of economic liabilities above and beyond its economic assets (the assets are underwater). Using this approach, bankers, under the Basel II requirements, can internally assess the expected default frequency (EDF), or obligor risk, using their own assumptions and projections of the company's financial health. Finally, positive economic equity implies a call option whereby the shareholders can purchase the assets from the creditors by paying off all the loans and continue the business well past the residual period. Negative economic equity implies a put option whereby the shareholders put, or sell, the assets to the creditors and can walk away from the bankrupt business, exactly as if they had purchased an out-of-the-money option and walked away losing only their initial investment.
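As a rough sketch of the default-frequency idea (not the McKinsey DCF model itself), the following Python fragment simulates an uncertain economic asset value against a fixed debt load and reads off the share of trials in which economic equity falls below zero. The distribution and all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000

# Hypothetical simulated economic asset value ($M) versus a fixed debt load ($M)
assets = rng.lognormal(mean=np.log(120.0), sigma=0.35, size=n_trials)
debt = 100.0

equity = assets - debt                 # economic equity per trial
edf = float((equity < 0.0).mean())     # fraction of trials with equity below zero
print(f"Estimated expected default frequency: {edf:.2%}")
```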
158. Valuation—Banking: Pricing Loan Fees Model

File Name: Banking—Stochastic Loan Pricing Model
Location: Modeling Toolkit | Banking Models | Stochastic Loan Pricing Model
Brief Description: This valuation model illustrates how to price a bank loan based on commercial lending practices
Requirements: Modeling Toolkit, Risk Simulator
Special Credits: This model was contributed by Professor Morton Glantz; see Chapter 155 for his full biography and publications.
Stochastic models (Figure 158.1) produce an entire range of results and confidence levels feasible for the pricing run. Monte Carlo simulation fosters a more realistic pricing environment because it captures elements of uncertainty too complex to be solved with deterministic pricing models.
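A minimal sketch of what a stochastic pricing run produces, assuming hypothetical distributions for fee income and funding costs (these inputs are illustrative, not the model's actual assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 50_000
loan = 1_000_000.0

spread_income = 0.060 * loan                                   # contractual spread
fee_income = rng.triangular(0.005, 0.010, 0.020, n_trials) * loan
funding_cost = rng.normal(0.045, 0.005, n_trials) * loan       # uncertain cost of funds

net_yield = (spread_income + fee_income - funding_cost) / loan
lo, hi = np.percentile(net_yield, [5, 95])
print(f"mean yield {net_yield.mean():.2%}; 90% confidence band {lo:.2%} to {hi:.2%}")
```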
FIGURE 158.1 Pricing of loan fees
159. Valuation—Valuation Model

File Name: Valuation—Valuation Model, Valuation—Financial Statements, Valuation—Financial Ratios Analysis
Location: Modeling Toolkit | Valuation | Valuation Model, Financial Statements, Financial Ratios Analysis
Brief Description: These valuation models are based on standard discounted cash flow and financial statement approaches; they are useful for generating standard financial statements and computing financial ratios, and they have been modified to account for the earnings per share of a company under uncertainty
Requirements: Modeling Toolkit, Risk Simulator
In this model, we run a Monte Carlo simulation on the forecast projections of a company's profitability, as modeled in an income statement and an earnings-per-share valuation. Figure 159.1 shows a simple screenshot of the Valuation model, where the boxed cells are the input assumptions in the valuation of the firm. The model takes these input assumptions as starting points and forecasts the income statement, balance sheet, cash flow statement, and some financial ratios. Using these financial statements, the valuation of the firm is obtained (Figure 159.2). The valuation flows through these financial statements to obtain the net operating profit after taxes (NOPAT). From this NOPAT value, adjustments on tax-related items and non-cash expenses are added back to determine the free cash flows available to the firm. The company's present value of operations (PVO) is then computed as the sum of the present values of these future cash flows. We then add to the PVO the continuing value (the value of the firm to perpetuity), assuming some growth rate and weighted average cost of capital. The total is called the enterprise value, which is the valuation of the firm. For a more detailed look at the three key financial statements, refer to the Modeling Toolkit | Valuation | Financial Statements model. These models also provide a list of typical financial ratios. Please refer to Appendix B for details and definitions of these ratios (see the functions list with a B2Ratios prefix).
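A minimal sketch of the enterprise-value arithmetic described above, with hypothetical free cash flows, growth rate, and cost of capital:

```python
# Hypothetical free cash flows ($M) after NOPAT adjustments, years 1 through 5
fcf = [120.0, 135.0, 150.0, 162.0, 170.0]
wacc = 0.10   # weighted average cost of capital
g = 0.03      # perpetuity growth rate for the continuing value

# Present value of operations (PVO): discount the explicit-period cash flows
pvo = sum(cf / (1 + wacc) ** t for t, cf in enumerate(fcf, start=1))

# Continuing value: growing perpetuity beyond year 5, discounted back to today
continuing = fcf[-1] * (1 + g) / (wacc - g)
pv_continuing = continuing / (1 + wacc) ** len(fcf)

enterprise_value = pvo + pv_continuing
print(f"PVO = {pvo:.1f}, PV of continuing value = {pv_continuing:.1f}, "
      f"enterprise value = {enterprise_value:.1f}")
```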
FIGURE 159.1 Simulation and valuation on income statements and earnings per share
FIGURE 159.2 Valuation results
160. Value at Risk—Optimized and Simulated Portfolio VaR

File Name: Value at Risk—Optimized and Simulated Portfolio VaR
Location: Modeling Toolkit | Value at Risk | Optimized and Simulated Portfolio VaR
Brief Description: Computing the Value at Risk (VaR) of a portfolio of assets, using Monte Carlo simulation to determine the tail-end confidence interval from which the VaR is read
Requirements: Modeling Toolkit, Risk Simulator
This model uses Monte Carlo simulation and optimization routines in Risk Simulator to minimize the Value at Risk (VaR) of a portfolio of assets (Figure 160.1). First, simulation is used to determine the 90% left-tail VaR. (This means that there is a 10% chance that losses will exceed this VaR for a specified holding period.) With an equal allocation of 25% across the four asset classes, the VaR is determined using simulation (Figure 160.2). The annualized returns are uncertain and hence simulated, and the VaR is then read off the forecast chart. Next, optimization is run to find the best portfolio, subject to a 100% total allocation across the four asset classes, that maximizes the portfolio's bang for the buck (returns-to-risk ratio). The resulting optimized portfolio is then simulated once again, and the new VaR is obtained (Figure 160.3). The VaR of this optimized portfolio is much lower than that of the nonoptimized portfolio. For a more detailed end-to-end application of VaR and how this methodology is applied in a banking environment for obtaining capital adequacy measurements (Basel II's advanced internal risk-based method), please see Chapter 95 on an integrated Basel II application.
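The following Python sketch mirrors the two-stage logic (simulate the VaR at equal weights, then optimize the returns-to-risk ratio and re-simulate). The four asset classes, their returns, and their volatilities are hypothetical, and cross-correlations are ignored for brevity.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
mu = np.array([0.08, 0.10, 0.12, 0.07])      # annualized returns (hypothetical)
sigma = np.array([0.12, 0.18, 0.25, 0.10])   # annualized volatilities (hypothetical)
n_trials = 100_000

def left_tail_var(weights, confidence=0.90):
    """Simulated 90% left-tail VaR: loss exceeded with 10% probability."""
    sims = rng.normal(mu, sigma, size=(n_trials, 4)) @ weights
    return -np.percentile(sims, (1 - confidence) * 100)

def neg_return_to_risk(weights):
    # Negative returns-to-risk ratio (uncorrelated assets assumed)
    return -(mu @ weights) / np.sqrt(weights @ (sigma**2 * weights))

constraints = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
bounds = [(0.0, 1.0)] * 4
equal = np.full(4, 0.25)

result = minimize(neg_return_to_risk, equal, bounds=bounds, constraints=constraints)
print(f"VaR, equal weights:     {left_tail_var(equal):.4f}")
print(f"VaR, optimized weights: {left_tail_var(result.x):.4f}")
```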
FIGURE 160.1 Computing Value at Risk with simulation
FIGURE 160.2 Nonoptimized Value at Risk
FIGURE 160.3 Optimal portfolio’s Value at Risk through optimization and simulation
161. Value at Risk—Options Delta Portfolio VaR

File Name: Value at Risk—Options Delta Portfolio VaR
Location: Modeling Toolkit | Value at Risk | Options Delta Portfolio VaR
Brief Description: Computing the Value at Risk (VaR) of a portfolio of options using closed-form structural models to determine the VaR, after accounting for the cross-correlations among the various options
Requirements: Modeling Toolkit, Risk Simulator
Modeling Toolkit Functions Used: B2VaROptions, B2CallDelta
There are two examples in this Value at Risk (VaR) model. The first example illustrates how to compute the portfolio VaR of a group of options, given the deltas of the options. The option delta is the instantaneous sensitivity of the option's value to the stock price. (See the Sensitivity—Greeks chapter for more details on computing an option's delta sensitivity measure.) Given the underlying stock price of each option, the quantity of options held in the portfolio, each option's delta level, and the daily volatility, we can compute the portfolio's VaR measure. In addition, given a specific holding period or risk horizon (typically between 1 and 10 days) and the required percentile (typically between 90% and 99.95%), we can compute the portfolio's VaR for that holding period and percentile. If the option's delta is not available, use the B2CallDelta function instead, or see the Given Option Inputs worksheet. Please see Figures 161.1 and 161.2.
FIGURE 161.1 Delta options VaR with known options delta values
FIGURE 161.2 Delta options VaR with options input parameters
The second example illustrates how to compute the portfolio VaR of a group of options given the option inputs (stock price, strike price, risk-free rate, maturity, annualized volatility, and dividend rate). The option delta is then computed using the B2CallDelta function; as before, the delta measure is the instantaneous sensitivity of the option's value to the stock price. Given the underlying stock price of each option, the quantity of options held in the portfolio, each option's delta level, and the daily volatility, we can compute the portfolio's VaR measure for a specified holding period and percentile. The daily volatility is computed by dividing the annualized volatility by the square root of trading days per year. Please see Figure 161.2. For a more detailed end-to-end application of VaR and how this methodology is applied in a banking environment for obtaining capital adequacy measurements (Basel II's advanced internal risk-based method), please see Chapter 95 on an integrated Basel II application.
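A delta-normal sketch of the computation: each position's dollar sensitivity is quantity times delta times stock price times daily volatility, and the correlated exposures aggregate through a quadratic form. This is the textbook approximation, not necessarily the exact formula inside B2VaROptions, and all portfolio inputs here are hypothetical.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical three-option portfolio
S = np.array([100.0, 50.0, 80.0])            # underlying stock prices
quantity = np.array([1_000, -500, 2_000])    # options held (negative = short)
delta = np.array([0.55, 0.40, 0.62])         # option deltas
daily_vol = np.array([0.015, 0.020, 0.018])  # daily volatilities of the underlyings

corr = np.array([[1.0, 0.3, 0.2],
                 [0.3, 1.0, 0.4],
                 [0.2, 0.4, 1.0]])           # cross-correlations among underlyings

horizon_days = 10
percentile = 0.99

# Dollar exposure of each position to a one-sigma daily move
exposure = quantity * delta * S * daily_vol

z = norm.ppf(percentile)
var = z * np.sqrt(horizon_days) * np.sqrt(exposure @ corr @ exposure)
print(f"{percentile:.0%}, {horizon_days}-day portfolio VaR: ${var:,.0f}")
```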
162. Value at Risk—Portfolio Operational and Credit Risk VaR Capital Adequacy

File Name: Value at Risk—Portfolio Operational and Capital Adequacy
Location: Modeling Toolkit | Value at Risk | Portfolio Operational and Capital Adequacy
Brief Description: Illustrating how to use Risk Simulator for distributional fitting (finding the best-fitting distributions for the credit risk and operational risk parameters) and for running Monte Carlo simulation on these variables to determine the total capital required under a 99.50% Value at Risk
Requirements: Modeling Toolkit, Risk Simulator
This model shows how operational risk and credit risk parameters are fitted to statistical distributions, and how the resulting distributions are modeled in a portfolio of liabilities to determine the Value at Risk (VaR) (the 99.50th percentile certainty) for the capital requirement under Basel II. It is assumed that the historical data of the operational risk impacts (Historical Data worksheet) are obtained through econometric modeling of the Key Risk Indicators. The Distributional Fitting Report worksheet is the result of running a distributional fitting routine in Risk Simulator to obtain the appropriate distribution for the operational risk parameter. Using the resulting distributional parameters, we model each liability's capital requirements within an entire portfolio. Correlations between pairs of liabilities or business units can also be entered if required. The resulting Monte Carlo simulation results show the VaR capital requirements. Note that an appropriate empirically based VaR cannot be obtained unless distributional fitting and risk-based simulations are first run; only by running simulations will the VaR be obtained. For a more detailed end-to-end application of VaR and how this methodology is applied in a banking environment for obtaining capital adequacy measurements (Basel II's advanced internal risk-based method), please see Chapter 95 on an integrated Basel II application.
PROCEDURE
To perform distributional fitting, follow these steps:
1. In the Historical Data worksheet (Figure 162.1), select the data area (cells C5:L104) and click on Risk Simulator | Tools | Distributional Fitting (Single Variable).
2. Browse through the fitted distributions, select the best-fitting distribution (in this case, the exponential distribution in Figure 162.2), and click OK.
FIGURE 162.1 Sample historical bank loans
FIGURE 162.2 Data fitting results
Running a Simulation in the Portfolio Model
1. You may now set the assumptions on the Operational Risk Factors with the exponential distribution (the fitted results show Lambda = 1) in the Credit Risk worksheet. Note that the assumptions have already been set for you in advance. You may set them by going to cell F27, clicking on Risk Simulator | Set Input Assumption, selecting the Exponential distribution, entering 1 for the Lambda value, and clicking OK. Continue this process for the remaining cells in column F, or simply perform a Risk Simulator Copy and Risk Simulator Paste on the remaining cells.
a. Note that because the cells in column F have assumptions set, you will first have to clear them if you wish to reset and copy/paste parameters. You can do so by first selecting cells F28:F126 and clicking on the Remove Parameter icon, or select Risk Simulator | Remove Parameter.
b. Then select cell F27, click on Risk Simulator Copy or select Risk Simulator | Copy Parameter, then select cells F28:F126 and click on Risk Simulator Paste or select Risk Simulator | Paste Parameter.
2. Next, additional assumptions can be set, such as the probability of default using the Bernoulli distribution (column H) and the Loss Given Default (column J). Repeat the procedure in step 1 if you wish to reset the assumptions.
3. Run the simulation by clicking on the Run icon or clicking on Risk Simulator | Run Simulation.
4. Obtain the VaR by going to the forecast chart once the simulation is done running, selecting Left-Tail, and typing in 99.50. Hit Tab on the keyboard to enter the confidence value and obtain the VaR of $25,959 (Figure 162.3).
FIGURE 162.3 Simulated forecast results
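For readers without Risk Simulator, the fit-then-simulate logic can be sketched in a few lines of Python. The exponential fit uses the maximum-likelihood estimate (lambda equals one over the sample mean), and the portfolio structure and data here are hypothetical stand-ins for the worksheet's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the Historical Data worksheet: observed operational-risk impacts
historical = rng.exponential(scale=1.0, size=1_000)
lam = 1.0 / historical.mean()   # MLE of the exponential rate parameter

# Simulate total losses across a hypothetical portfolio of 100 business units
n_trials, n_units = 100_000, 100
losses = rng.exponential(scale=1.0 / lam, size=(n_trials, n_units)).sum(axis=1)

# 99.50th percentile of simulated total loss as the VaR capital requirement
var_9950 = np.percentile(losses, 99.5)
print(f"99.50% VaR capital requirement: {var_9950:,.1f}")
```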
163. Value at Risk—Right-Tail Capital Requirements

File Name: Value at Risk—Right-Tail Capital Requirements
Location: Modeling Toolkit | Value at Risk | Right-Tail Capital Requirements
Brief Description: Illustrating how to use Risk Simulator for running a risk-based, correlated Monte Carlo simulation to obtain the right-tailed Value at Risk for Basel II requirements
Requirements: Modeling Toolkit, Risk Simulator
This model shows the capital requirements under Basel II (99.95th percentile capital adequacy based on a specific holding period's Value at Risk [VaR]). Without running risk-based historical and Monte Carlo simulations using Risk Simulator, the required capital is $37.01M (Figure 163.1), compared to only $14.00M using a correlated simulation (Figure 163.2). The difference is due to the cross-correlations between assets and business lines, which can only be modeled using Risk Simulator. This lower VaR is preferred, as banks are then required to hold less capital and can reinvest the difference in various profitable ventures, thereby generating higher profits. For a more detailed end-to-end application of VaR and how this methodology is applied in a banking environment for obtaining capital adequacy measurements (Basel II's advanced internal risk-based method), please see Chapter 95 on an integrated Basel II application.
PROCEDURE
1. To run the model, click on Risk Simulator | Run Simulation. (If you had other models open, make sure you first click on Risk Simulator | Change Simulation Profile and select the Tail VaR profile before starting.)
2. When the simulation is complete, select Left-Tail in the forecast chart, enter 99.95 in the Certainty box, and hit TAB on the keyboard to obtain the value of $14.00M VaR for this correlated simulation.
3. Note that the assumptions have already been set for you in advance in the model, in cells C6:C15. However, you may set them again by going to cell C6, clicking on Risk Simulator | Set Input Assumption, and selecting your distribution of choice, or use the default Normal Distribution (as a quick example), or perform a distributional fitting on historical data, and click OK.
FIGURE 163.1 Right-tail VaR model
FIGURE 163.2 Simulated results of the portfolio VaR
Continue this process for the remaining cells in column C. You may also decide to first Remove Parameters on these cells in column C and set your own distributions. Further, correlations can be set manually as each assumption is set (Figure 163.3) or by going to Risk Simulator | Edit Correlations (Figure 163.4) after all the assumptions are set.
FIGURE 163.3 Setting correlations one at a time
FIGURE 163.4 Setting correlations using the correlation matrix routine
164. Value at Risk—Static Covariance Method

File Name: Value at Risk—Static Covariance Method
Location: Modeling Toolkit | Value at Risk | Static Covariance Method
Brief Description: Computing a static Value at Risk (VaR) using a correlation and covariance method to obtain the portfolio's VaR, after accounting for the individual components' returns, volatilities, cross-correlations, holding period, and VaR percentile
Requirements: Modeling Toolkit, Risk Simulator
Modeling Toolkit Function Used: B2VaRCorrelationMethod
This model is used to compute the portfolio's Value at Risk (VaR) at a given percentile for a specific holding period, after accounting for the cross-correlation effects among the assets (Figure 164.1). The daily volatility is the annualized volatility divided by the square root of trading days per year. Typically, positive correlations tend to carry a higher VaR than zero-correlation asset mixes, whereas negative correlations reduce the total risk of the portfolio through the diversification effect (Figure 164.2).

FIGURE 164.1 Computing Value at Risk using the covariance method

The approach used is a portfolio VaR with correlated inputs, where the portfolio has multiple asset holdings with different amounts and volatilities, and each asset is correlated with the others.

FIGURE 164.2 Value at Risk results with different correlation levels
The covariance or correlation structural model is used to compute the VaR given a holding period or horizon and a percentile value (typically 10 days at 99% confidence). For a more detailed end-to-end application of VaR and how this methodology is applied in a banking environment for obtaining capital adequacy measurements (Basel II's advanced internal risk-based method), please see Chapter 95 on an integrated Basel II application.
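A compact sketch of the covariance method, run at three correlation levels to echo Figure 164.2 (all holdings and volatilities are hypothetical):

```python
import numpy as np
from scipy.stats import norm

amounts = np.array([1_000_000.0, 2_000_000.0, 1_500_000.0])  # dollar holdings
annual_vol = np.array([0.20, 0.15, 0.25])
daily_vol = annual_vol / np.sqrt(252)      # as described in the text

horizon_days, confidence = 10, 0.99
z = norm.ppf(confidence)
dollar_sigma = amounts * daily_vol         # one-sigma daily dollar moves

for rho in (0.8, 0.0, -0.4):               # positive, zero, negative correlation
    corr = np.full((3, 3), rho)
    np.fill_diagonal(corr, 1.0)
    var = z * np.sqrt(horizon_days) * np.sqrt(dollar_sigma @ corr @ dollar_sigma)
    print(f"rho = {rho:+.1f}: 99% 10-day VaR = ${var:,.0f}")
```

The run confirms the diversification effect: the same portfolio carries the largest VaR under positive correlation and the smallest under negative correlation.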
165. Volatility—Implied Volatility

File Name: Volatility—Implied Volatility
Location: Modeling Toolkit | Volatility | Implied Volatility
Brief Description: Computing the implied volatilities using an internal optimization routine, given the fair market values of a call or put option as well as all their required inputs
Requirements: Modeling Toolkit, Risk Simulator
Modeling Toolkit Functions Used: B2ImpliedVolatilityCall, B2ImpliedVolatilityPut
This implied volatility computation is based on an internal iterative optimization, which means it will work under typical conditions (i.e., without extreme volatility values that are too small or too large). It is always good modeling technique to recheck the imputed volatility, plugging it back into an options model to make sure the answers coincide, before adding more sophistication to the model. That is, given all the inputs in an option analysis as well as the option value, the volatility can be imputed. Please see Figure 165.1.
FIGURE 165.1 Getting the implied volatility from options
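The iterative idea can be sketched with a simple bisection search on the Black-Scholes call price, which is monotonically increasing in volatility; the rechecking step in the text corresponds to the final reprice. Input values are hypothetical, and this is not the Modeling Toolkit's internal routine.

```python
import math

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call (no dividends)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    """Bisection: works under typical conditions, i.e., vol not too extreme."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

vol = implied_vol(price=10.45, S=100, K=100, T=1.0, r=0.05)
print(f"implied volatility = {vol:.4f}")                    # about 0.20
print(f"recheck: repriced call = {bs_call(100, 100, 1.0, 0.05, vol):.2f}")
```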
166. Volatility—Volatility Computations (Log Returns, Log Assets, Implied Volatility, Management Assumptions, EWMA, GARCH)

File Name: Volatility—Volatility Computations
Location: Modeling Toolkit | Volatility, and applying: EWMA Model, GARCH Model, Implied Volatility, Log Asset Returns, Log Cash Flow Returns, and Probability to Volatility Models
Brief Description: Using Risk Simulator to apply Monte Carlo simulation in order to compute a project's volatility measure
Requirements: Modeling Toolkit, Risk Simulator
Modeling Toolkit Functions Used: B2Volatility and B2GARCH
There are several ways to estimate the volatility used in the option models. A description of the five most common and valid approaches follows.
1. Logarithmic cash flow returns approach or logarithmic stock price returns approach. This method is used mainly for computing the volatility of liquid and tradable assets, such as stocks in financial options. However, it is sometimes used for other traded assets, such as the price of oil or electricity. The drawback is that discounted cash flow models with only a few cash flows generally will overstate the volatility, and this method cannot be used when negative cash flows occur. Thus, this volatility approach is applicable only to financial instruments, not to real options analysis. The benefits include its computational ease, transparency, and modeling flexibility. In addition, no simulation is required to obtain a volatility estimate. The approach simply takes the annualized standard deviation of the logarithmic relative returns of the time-series data as the proxy for volatility. The Modeling Toolkit function B2Volatility is used to compute this volatility; the time series of stock prices can be arranged in either chronological or reverse chronological order when using the function. See the Log Cash Flow Returns example model in the Volatility section of Modeling Toolkit for details.
2. Exponentially weighted moving average (EWMA) models. This approach is similar to the logarithmic cash flow returns approach: it uses the B2Volatility function to compute the annualized standard deviation of the natural logarithms of relative stock returns. The difference here is that the most recent value has a higher weight than values farther in the past.
A lambda, or weight variable, is required (typically, industry standards set this at 0.94). The most recent period is weighted at this lambda value, the period before that at (1 - lambda), and so forth. See the EWMA example model in the Volatility section of Modeling Toolkit for details.
3. Logarithmic present value returns approach. This approach is used mainly when computing the volatility of assets with cash flows. A typical application is in real options. The drawback of this method is that simulation is required to obtain a single volatility, and it is not applicable to highly traded liquid assets, such as stock prices. The benefits include the ability to accommodate certain negative cash flows and a more rigorous analysis than the logarithmic cash flow returns approach, providing a more accurate and conservative estimate of volatility when assets are analyzed. In addition, within, say, a cash flow model, you can set up multiple simulation assumptions (insert any types of risks and uncertainties, such as related assumptions, correlated distributions, nonrelated inputs, multiple stochastic processes, etc.) and allow the model to distill all the interacting risks and uncertainties in these simulations; you then obtain the single-value volatility, which represents the integrated risk of the project. See the Log Asset Returns example model in the Volatility section of Modeling Toolkit for details.
4. Management assumptions and guesses. This approach is used for both financial options and real options. The drawback is that the volatility estimates are very unreliable and are only subjective best guesses. The benefit of this approach is its simplicity: using this method, you can easily explain to management the concept of volatility, both in execution and interpretation. Most people understand what probability is but have a hard time understanding what volatility is; using this approach, you can impute one from the other. See the Probability to Volatility example model in the Volatility section of Modeling Toolkit for details.
5. Generalized autoregressive conditional heteroskedasticity (GARCH) models. These methods are used mainly for computing the volatility of liquid and tradable assets, such as stocks in financial options, though they are sometimes used for other traded assets, such as the price of oil or electricity. The drawbacks are that these models require a lot of data and advanced econometric modeling expertise, and they are highly susceptible to user manipulation. The benefit is that rigorous statistical analysis is performed to find the best-fitting volatility curve, providing different volatility estimates over time. The EWMA model is a simple weighting model; the GARCH model is a more advanced analytical and econometric model that requires advanced algorithms, such as the generalized method of moments, to obtain the volatility forecasts.
This chapter provides a high-level review of these methods, focusing on the application rather than the theory.
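Before turning to the spreadsheet procedure, here is a minimal sketch of approaches 1 and 2 on a made-up monthly price series: the plain annualized log-returns volatility, and an EWMA variance with the industry-standard lambda of 0.94. All prices are hypothetical.

```python
import numpy as np

# Hypothetical monthly closing prices in chronological order
prices = np.array([100.0, 103.0, 99.0, 104.0, 108.0, 105.0,
                   111.0, 109.0, 115.0, 118.0, 114.0, 121.0])
log_returns = np.diff(np.log(prices))
periods_per_year = 12

# Approach 1: annualized standard deviation of logarithmic relative returns
vol_log = log_returns.std(ddof=1) * np.sqrt(periods_per_year)

# Approach 2: EWMA variance, weighting recent observations more heavily
lam = 0.94
ewma_var = log_returns[0] ** 2
for r in log_returns[1:]:
    ewma_var = lam * ewma_var + (1.0 - lam) * r ** 2
vol_ewma = np.sqrt(ewma_var * periods_per_year)

print(f"log-returns volatility: {vol_log:.2%}, EWMA volatility: {vol_ewma:.2%}")
```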
PROCEDURE
1. For the Log Cash Flow or Log Returns approach, make sure that your data are all positive. (This approach cannot handle negative values.) The Log Cash Flow Approach worksheet illustrates an example computation using downloaded Microsoft stock prices (Figure 166.1). You can perform Edit | Copy and Edit | Paste Special | Values Only using your own stock closing prices to compute the volatility. The B2Volatility function is used to compute the volatility. Enter the positive integer for periodicity as appropriate (e.g., to annualize the volatility, enter 12 for monthly stock prices, 52 for weekly stock prices, etc.), as seen in the example model.

FIGURE 166.1 Historical stock prices and volatility estimates

2. For the Log Present Value approach, negative cash flows are allowed. In fact, this approach is preferred to the Log Cash Flow approach when modeling volatilities in a real options world. First, set some assumptions in the model or use the preset assumptions as is. The Intermediate X Variable is used to compute the project's volatility (Figure 166.2). Run the simulation and view the Intermediate Variable X's forecast chart. Go to the Statistics tab and obtain the Standard Deviation (Figure 166.3). Annualize it by multiplying the value by the square root of the number of periods per year. (In this case, the annualized volatility is 11.64% because the periodicity is annual; otherwise, multiply the standard deviation by the square root of 4 if quarterly data are used, the square root of 12 if monthly data are used, etc.)
3. The Volatility to Probability approach can provide rough estimates of volatility, or you can use it to explain the concept of volatility to senior management (Figure 166.4). To illustrate, say your model has an expected value of $100M, and the best-case scenario, as modeled or anticipated by subject matter experts or senior management, is $150M with a 10% chance of exceeding this value. We compute the implied volatility as 39.02%. Clearly this is a rough estimate, but it is a good start if you do not wish to perform elaborate modeling and simulation to obtain a volatility measure. Further, this approach can be reversed: instead of getting volatility from probability, you can get probability from volatility.
FIGURE 166.2 Using the PV Asset approach to model volatility
FIGURE 166.3 Volatility estimates using the PV Asset approach and simulation
FIGURE 166.4 Probability to volatility approximation approach
This reversed method is very powerful in explaining the concept of volatility to senior management. To illustrate, we use the worst-case scenario model next. Suppose you run a simulation model, obtain an annualized volatility of 35%, and need to explain what this means to management. A 35% volatility does not mean that there is a 35% chance of something happening, nor does it mean that the expected value can go up 35%. Management has a hard time understanding volatility, whereas probability is a simpler concept. For instance, if you state that there is a 10% chance of something happening (such as a product being successful in the market), management understands this to mean that 1 out of 10 products will be a superstar. You can take advantage of this fact and impute the probability from the analytically computed volatility to explain the concept in a simplified manner. Using the Probability to Volatility Worst-Case Scenario model, assume that the worst case is defined as the 10% left tail (you can change this if you wish) and that your analytical model provides a 35% volatility. Simply click on Tools | Goal Seek and set cell F24 (the volatility cell) to change to the value 35% (your computed volatility) by changing the Alternate Worst-Case Scenario cell (F21). Clicking OK returns the value $55.15 in cell F21. This means that a 35% volatility can be described as a project with an expected net present value (NPV) of $100M that is risky enough for the worst-case scenario to occur less than 10% of the time; if it does occur, it reduces the project's NPV to $55.15M. In other words, there is a 1 in 10 chance the project will be below $55.15M, or 9 out of 10 times it will make at least $55.15M (Figure 166.5).
4. For the EWMA and GARCH approaches, time-specific multiple volatility forecasts are obtained; that is, a term structure of volatility can be determined using these approaches. GARCH models are used mainly in analyzing financial time-series data to ascertain their conditional variances and volatilities. These volatilities are then used to value the options as usual, but the amount of historical data necessary for a good volatility estimate remains significant: usually several dozen, and even up to hundreds, of data points are required to obtain good GARCH estimates. GARCH is a term that incorporates a family of models that can take on a variety of forms, known as GARCH(p,q), where p and q are positive integers that define the resulting GARCH model and its forecasts. In most cases for financial instruments, a GARCH(1,1) is sufficient and is most generally used.
FIGURE 166.5 Sample computation using the volatility to probability approach

For instance, a GARCH(1,1) model takes the form

$y_t = x_t \gamma + \varepsilon_t$
$\sigma_t^2 = \omega + \alpha \varepsilon_{t-1}^2 + \beta \sigma_{t-1}^2$
where the first equation's dependent variable ($y_t$) is a function of exogenous variables ($x_t$) with an error term ($\varepsilon_t$). The second equation estimates the variance (squared volatility, $\sigma_t^2$) at time t, which depends on a historical mean ($\omega$); news about volatility from the previous period, measured as a lag of the squared residual from the mean equation ($\varepsilon_{t-1}^2$); and volatility from the previous period ($\sigma_{t-1}^2$). The exact modeling specification of a GARCH model is beyond the scope of this book. Suffice it to say that detailed knowledge of econometric modeling (model specification tests, structural breaks, and error estimation) is required to run a GARCH model, making it less accessible to the general analyst. The other problem with GARCH models is that the model usually does not provide a good statistical fit. That is, it is impossible to predict the stock market and, of course, equally hard if not harder to predict a stock's volatility over time. Figure 166.6 shows a GARCH(1,1) on a sample set of historical stock prices using the B2GARCH model. To create GARCH forecasts, start the Modeling Toolkit and open the Volatility | GARCH model. You will see a model that resembles Figure 166.7. Follow these procedures to create the volatility estimates:
1. Enter the stock prices in chronological order (e.g., cells I6:I17).
2. Use the B2GARCH function call in Modeling Toolkit. For instance, cell K3 has the function call "B2GARCH($I$6:$I$17,$I$19,1,3)", where the stock price inputs are in cells I6:I17, the periodicity is 12 (i.e., there are 12 months in a year, to obtain annualized volatility forecasts), the predictive base is 1, and we forecast three periods into the future. Because we will copy and paste the function down the column, make sure that absolute addressing is used (i.e., $I$6), not relative addressing (I6).
FIGURE 166.6 Sample GARCH (1,1) model
3. Copy cell K3 and paste the function onto cells K3:K20 (select cell K3 and drag the fill handle to copy the function down the column). You do this because the first three values are the GARCH estimated parameters Alpha, Beta, and Gamma, and at the bottom (e.g., cells K18:K20) are the forecast values.
4. With the entire column selected (cells K3:K20), hit F2 on the keyboard, then hold down Shift+Ctrl and hit Enter. This will update the entire matrix with GARCH forecasts.
FIGURE 166.7 Setting up a GARCH model
FIGURE 166.8 Running a GARCH model in Risk Simulator
Alternatively, and probably simpler, you can use Risk Simulator's Forecast GARCH module, as seen in Figure 166.8: select the data location of the raw stock prices and enter the required parameters (described below). The result is a GARCH volatility report, seen in Figure 166.9. Note that the GARCH function has several inputs:
Stock Prices: The time series of stock prices in chronological order. Typically, dozens of data points are required for a decent volatility forecast.
Periodicity: A positive integer indicating the number of periods per year (e.g., 12 for monthly data, 252 for daily trading data, and so forth), assuming you wish to annualize the volatility. To obtain periodic volatility, enter 1.
Predictive Base: The number of periods back (of the time-series data) to use as a base to forecast volatility. The higher this number, the longer the historical base used to forecast future volatility.
Forecast Period: A positive integer indicating how many future periods beyond the historical stock prices you wish to forecast.
Variance Targeting: Set to False by default (even if you do not enter anything here) but can be set to True. False means the omega variable is automatically optimized and computed. The suggestion is to leave this variable empty. If you wish to create mean-reverting volatility with variance targeting, set this variable to True.
P: The number of previous lags on the mean equation.
Q: The number of previous lags on the variance equation.
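Once the GARCH(1,1) parameters are in hand (from B2GARCH or the Forecast GARCH module), forecasting follows a simple recursion: multi-step variance forecasts decay geometrically toward the long-run level omega / (1 - alpha - beta). A sketch with hypothetical fitted parameters and stand-in returns (this is not the Modeling Toolkit's estimation code, only the forecast arithmetic):

```python
import numpy as np

rng = np.random.default_rng(3)
returns = rng.normal(0.0, 0.05, size=60)   # stand-in for demeaned monthly log returns

omega, alpha, beta = 1e-4, 0.08, 0.90      # hypothetical fitted GARCH(1,1) parameters

# Filter the conditional variance through the historical sample
var_t = returns.var()
for r in returns:
    var_t = omega + alpha * r**2 + beta * var_t

# k-step-ahead forecasts revert toward the long-run variance
long_run = omega / (1.0 - alpha - beta)
for k in (1, 2, 3):
    fvar = long_run + (alpha + beta) ** k * (var_t - long_run)
    print(f"{k}-period-ahead annualized vol: {np.sqrt(fvar * 12):.2%}")
```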
FIGURE 166.9 GARCH forecast results
167. Yield Curve—CIR Model

File Name: Yield Curve—CIR Model
Location: Modeling Toolkit | Yield Curve | CIR Model
Brief Description: Estimating and modeling the term structure of interest rates and approximating the yield curve, assuming that the interest rates are mean-reverting
Requirements: Modeling Toolkit, Risk Simulator
Modeling Toolkit Function Used: B2CIRBondYield
The yield curve is the time-series relationship between interest rates and the time to maturity of some underlying debt. The more formal mathematical description of this relationship is called the term structure of interest rates. The yield curve can take on various shapes. In the normal yield curve, yields rise as maturity lengthens, and the yield curve is positively sloped, reflecting investor expectations for the economy to grow in the future (and hence an expectation that inflation rates will rise in the future). An inverted yield curve occurs when long-term yields fall below short-term yields: long-term investors will settle for lower yields now if they think the economy will slow or even decline in the future. This situation is indicative of a worsening economy in the future (and hence an expectation that inflation will remain low in the future). Another potential situation is a flat yield curve, signaling uncertainty in the economy. The yield curve also can be humped or show a smile or a frown. The yield curve can change shape over time through a twist or bend, a parallel shift, or a movement of one end versus the other. Because the yield curve is related to inflation rates, and central banks in most countries have the ability to control monetary policy to target inflation rates, inflation rates are mean-reverting in nature. This also implies that interest rates are mean-reverting, as well as stochastically changing over time. This chapter shows the Cox-Ingersoll-Ross (CIR) model, which is used to compute the term structure of interest rates and the yield curve. The CIR model assumes a mean-reverting stochastic interest rate; the rate of reversion and the long-run mean rate can be determined using Risk Simulator's Statistical Analysis Tool. If the long-run rate is higher than the current short rate, the yield curve is upward sloping, and vice versa. Please see Figure 167.1.
FIGURE 167.1 CIR model on yield curve construction
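The CIR model admits a closed-form zero-coupon bond price, from which the yield at each maturity follows. The sketch below uses the standard textbook formulas (the B2CIRBondYield signature is not reproduced here) and hypothetical parameters in which the short rate sits below the long-run mean, producing an upward-sloping curve.

```python
import numpy as np

def cir_yield(r0, kappa, theta, sigma, tau):
    """Zero-coupon yield under CIR: P = A * exp(-B * r0), yield = -ln(P) / tau."""
    gamma = np.sqrt(kappa**2 + 2.0 * sigma**2)
    e = np.exp(gamma * tau) - 1.0
    denom = (gamma + kappa) * e + 2.0 * gamma
    B = 2.0 * e / denom
    A = (2.0 * gamma * np.exp((kappa + gamma) * tau / 2.0) / denom) ** (
        2.0 * kappa * theta / sigma**2)
    return -np.log(A * np.exp(-B * r0)) / tau

# Hypothetical mean-reversion speed, long-run rate, and rate volatility
for tau in (1, 2, 5, 10, 30):
    y = cir_yield(r0=0.03, kappa=0.20, theta=0.05, sigma=0.05, tau=tau)
    print(f"{tau:>2}-year yield: {y:.4%}")
```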
168. Yield Curve—Curve Interpolation BIM Model

File Name: Yield Curve—Curve Interpolation BIM
Location: Modeling Toolkit | Yield Curve | Curve Interpolation BIM
Brief Description: Estimating and modeling the term structure of interest rates and yield curve approximation using a curve interpolation method
Requirements: Modeling Toolkit, Risk Simulator
Modeling Toolkit Function Used: B2YieldCurveBIM
FIGURE 168.1 BIM model
A number of alternative methods exist for estimating the term structure of interest rates and the yield curve. Some are fully specified stochastic term structure models, while others are simply interpolation models. The former include the CIR and Vasicek models (illustrated in other chapters of this book); the latter include interpolation models such as the Bliss or Nelson-Siegel approaches. This chapter looks at the Bliss interpolation model (BIM) (Figure 168.1) for generating the term structure of interest rates and yield curve estimation. Some econometric modeling techniques are required to calibrate the values of several input parameters in this model. The Bliss approach modifies the Nelson-Siegel method by adding an additional generalized parameter. Virtually any yield curve shape can be interpolated using these models, which are widely used at banks around the world.
169. Yield Curve—Curve Interpolation NS Model

File Name: Yield Curve—Curve Interpolation NS
Location: Modeling Toolkit | Yield Curve | Curve Interpolation NS
Brief Description: Estimating and modeling the term structure of interest rates and yield curve approximation using the NS curve interpolation method
Requirements: Modeling Toolkit, Risk Simulator
Modeling Toolkit Function Used: B2YieldCurveNS
This is the Nelson-Siegel (NS) interpolation model for generating the term structure of interest rates and yield curve estimation. Some econometric modeling techniques are required to calibrate the values of several input parameters in this model. Just like the Bliss model in the previous chapter, the NS model (Figure 169.1) is purely an interpolation model with four estimated parameters. If properly calibrated, it can be made to fit almost any yield curve shape. Calibrating the inputs of the NS model requires facility with econometric modeling and error optimization techniques. Typically, if some interest rates already exist, a better approach is to use the spline interpolation method (see Chapter 172 for details).
FIGURE 169.1 NS model
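The NS functional form itself is compact; a sketch with four hypothetical calibrated parameters (level, slope, curvature, and decay) that generate a normal, upward-sloping curve:

```python
import numpy as np

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel yield at maturity tau; four parameters to calibrate."""
    x = tau / lam
    loading = (1.0 - np.exp(-x)) / x
    return beta0 + beta1 * loading + beta2 * (loading - np.exp(-x))

# Hypothetical calibrated parameters (in practice, fitted by error optimization)
beta0, beta1, beta2, lam = 0.06, -0.02, 0.01, 2.0
for t in (0.25, 1, 2, 5, 10, 30):
    print(f"{t:>5}-year yield: {nelson_siegel(t, beta0, beta1, beta2, lam):.4%}")
```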
170. Yield Curve—Forward Rates from Spot Rates

File Name: Yield Curve—Forward Rates from Spot Rates
Location: Modeling Toolkit | Yield Curve | Forward Rates from Spot Rates
Brief Description: Determining the implied forward rate given two spot rates using a bootstrap model
Requirements: Modeling Toolkit, Risk Simulator
Modeling Toolkit Function Used: B2ForwardRate
Given two spot rates (from Year 0 to some future time periods), you can determine the implied forward rate between these two time periods. For instance, if the spot rate from Year 0 to Year 1 is 8%, and the spot rate from Year 0 to Year 2 is 7% (both yields are known currently), the implied forward rate from Year 1 to Year 2 (that will occur based on current expectations) is 6%. Please see Figure 170.1.
FIGURE 170.1 Forward rate extrapolation
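The bootstrap arithmetic behind the example, assuming annual compounding (the text's 6% is the rounded value):

```python
# Spot rates known today (annual compounding)
s1, t1 = 0.08, 1   # Year 0 to Year 1
s2, t2 = 0.07, 2   # Year 0 to Year 2

# Implied forward rate between t1 and t2: total growth over t2 years must equal
# growth at s1 for t1 years times growth at the forward rate for (t2 - t1) years
forward = ((1 + s2) ** t2 / (1 + s1) ** t1) ** (1.0 / (t2 - t1)) - 1.0
print(f"Implied forward rate, Year {t1} to Year {t2}: {forward:.2%}")   # ~6.01%
```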
171. Yield Curve—Term Structure of Volatility

File Name: Yield Curve—Term Structure of Volatility
Location: Modeling Toolkit | Yield Curve | Term Structure of Volatility
Brief Description: Illustrating the term structure of interest rate volatilities, using not only the interest rates but also their underlying volatilities
Requirements: Modeling Toolkit, Risk Simulator
This model illustrates the term structure of volatility of interest rates. Interest rate volatilities change over time and, depending on the maturity, also are different during the same time period. Are shorter-term rates more volatile, or long-term rates more volatile? It depends on the liquidity preference, trading market depth, economic outlook, and so forth. Using multiple years of historical data, spot interest rates are analyzed and their respective volatilities are tabulated in the model, and the results are shown in Figures 171.1 and 171.2.
FIGURE 171.1 Term structure of average volatility
FIGURE 171.2 Term structure of volatility by maturity

Notice that longer-term yields tend to have smaller volatilities than shorter-term, more liquid, and highly traded instruments (Figure 171.2), such as one-year Treasury bills. Note that in the Bond Options models, we use BDT lattices to model the term structure of interest rates, accounting for different spot rates and local volatilities over time. In contrast, this chapter uses actual empirical data to determine the historical term structure of interest rates and its local volatilities.
172. Yield Curve—U.S. Treasury Risk-Free Rates and Cubic Spline Curves

File Name: Yield Curve—U.S. Treasury Risk-Free Rates and Yield Curve—Spline Interpolation and Extrapolation
Location: Modeling Toolkit | Yield Curve | U.S. Treasury Risk-Free Rates and Modeling Toolkit | Yield Curve | Spline Interpolation and Extrapolation
Brief Description: Illustrating how to use Risk Simulator for applying distributional fitting and computing volatilities of risk-free rates, as well as interpolating and extrapolating missing interest rates using Cubic Spline curves
Requirements: Modeling Toolkit, Risk Simulator
This model shows the daily historical yield of the U.S. Treasury securities, from 1 month to 30 years, for the years 1990 to 2006. The volatilities of these
time-series yields are also computed, and they are then put through a distributional fitting routine in Risk Simulator to determine if they can be fitted to a particular distribution and hence used in a model elsewhere. There are several ways to compute the volatility of a time series of data, including:
Logarithmic Cash Flow or Stock Returns Approach. This method is used mainly for computing the volatility of liquid and tradable assets, such as stocks in financial options; however, it is sometimes used for other traded assets, such as the price of oil and electricity. The drawback is that discounted cash flow models with only a few cash flows generally will overstate the volatility, and this method cannot be used when negative cash flows occur. The benefits include the method's computational ease, transparency, and modeling flexibility. In addition, no simulation is required to obtain a volatility estimate.
Logarithmic Present Value Returns Approach. This approach is used mainly when computing the volatility of assets with cash flows. A typical application is in real options. The drawback of this method is that simulation is required to obtain a single volatility, and it is not applicable to highly traded liquid assets, such as stock prices. The benefits include the ability to accommodate certain negative cash flows and a more rigorous analysis than the logarithmic cash flow returns approach, providing a more accurate and conservative estimate of volatility when assets are analyzed.
Management Assumptions and Guesses. This approach is used for both financial options and real options. The drawback is that the volatility estimates are very unreliable and are only subjective best guesses. The benefit of this approach is its simplicity: using this method, you can easily explain to management the concept of volatility, both in execution and interpretation.
Generalized Autoregressive Conditional Heteroskedasticity (GARCH) Models. These methods are used mainly for computing the volatility of liquid and tradable assets, such as stocks in financial options, though they are sometimes used for other traded assets, such as the price of oil or electricity. The drawbacks are that these models require a lot of data and advanced econometric modeling expertise, and they are highly susceptible to user manipulation. The benefit is that rigorous statistical analysis is performed to find the best-fitting volatility curve, providing different volatility estimates over time.
This model applies the first method. For technical details on volatility estimates, please refer to Chapter 7A of Dr. Johnathan Mun’s Real Options Analysis, 2nd Edition (Hoboken, NJ: John Wiley & Sons, 2005). That section goes into the theory and step-by-step interpretation of the method. The Risk-Free Rate Volatility worksheet shows the computed annualized volatilities for each term structure, together with the average, median, minimum, and maximum values. These volatilities are also fitted to continuous distributions, and the results are shown in the Fitting Volatility worksheet. Notice that longer-term yields tend to have smaller volatilities than shorter-term, more liquid, and highly traded instruments.
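For readers who want to replicate the logarithmic returns calculation outside the worksheet, here is a minimal Python sketch of the approach. The daily yields shown are hypothetical placeholders (the model's actual data reside in the 1990 to 2006 worksheets), and the choice of 252 trading periods per year is an assumption:

```python
import numpy as np

def annualized_volatility(rates, periods_per_year=252):
    """Logarithmic returns approach: take the log of period-over-period
    ratios, then scale the sample standard deviation by the square root
    of the number of periods per year."""
    rates = np.asarray(rates, dtype=float)
    log_returns = np.log(rates[1:] / rates[:-1])
    return log_returns.std(ddof=1) * np.sqrt(periods_per_year)

# Example: daily 3-month T-bill yields (hypothetical values, in percent)
daily_yields = [4.98, 5.01, 4.99, 5.03, 5.02, 5.05, 5.04]
print(f"Annualized volatility: {annualized_volatility(daily_yields):.2%}")
```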
PROCEDURE

The volatilities and their distributions have already been determined in this model. To review them, follow these instructions:

1. Select any of the worksheets (1990 to 2006) and look at the historical risk-free rates as well as how the volatilities are computed using the logarithmic returns approach.
2. Go to the Fitting Volatility worksheet and view the resulting fitted distributions, their p-values, theoretical versus empirical values, and their cross-correlations.
3. Notice that in most cases the p-values are quite high. For example, the p-value for the 3-month volatilities for the past 17 years is 0.9853; that is, roughly 98.53% of the fluctuations in the actual data can be accounted for by the fitted lognormal distribution, which is indicative of an extremely good fit. We can use this information to model and simulate short-term volatilities very robustly. In addition, the cross-correlations indicate that yields of closely related terms are highly correlated, but the correlation decreases over time. For instance, the 3-month volatilities tend to be highly correlated to the 6-month volatilities but have only negligible correlations to longer-term (e.g., 10-year) volatilities. This is highly applicable for simulating and modeling options-embedded instruments, such as options-adjusted spreads, which require the term structure of interest rates and the volatility structure. Please see Figure 172.1.

To replicate the distributional fitting routine, follow these instructions. You can perform a single-variable fitting and replicate the steps for multiple variables, or perform a multiple-fitting routine once. To perform individual fits:

1. Start a new profile by clicking on Risk Simulator | New Simulation Profile and give it a name.
2. Go to the Risk-Free Rate Volatility worksheet and select a data column (e.g., select cells K6:K22).
3. Start the single-fitting procedure by clicking on Risk Simulator | Tools | Distributional Fitting (Single-Variable).
4. Select Fit to Continuous Distributions, make sure all distributions are checked (the default), and click OK.
5. Review the results. Note that the best-fitting distribution is listed or ranked first, complete with its p-value. A high p-value is considered statistically to be a good fit: the null hypothesis that the fitted distribution is the right distribution cannot be rejected, so the higher the p-value, the better the fit. Alternatively, you can roughly think of the p-value as the percentage fit; for example, the Gumbel Maximum Distribution fits the 10-year volatilities at about 95.77%. You can also review the theoretical and empirical statistics to see how closely the theoretical distribution matches the actual empirical data. If you click OK, a single-fit report will be generated together with the assumption. Please see Figure 172.2.
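Risk Simulator performs this fitting and ranking across many distributions internally. Purely as an illustration of the underlying idea, the following Python sketch fits a single lognormal distribution to a set of volatilities and reports a Kolmogorov-Smirnov p-value. The volatility values are hypothetical, and note that a textbook KS test assumes prespecified parameters, so this is only an approximation of a commercial fitting routine:

```python
import numpy as np
from scipy import stats

# Hypothetical annualized volatilities for one maturity (17 yearly values)
vols = np.array([0.18, 0.22, 0.15, 0.19, 0.25, 0.17, 0.21, 0.16, 0.23,
                 0.20, 0.14, 0.19, 0.24, 0.18, 0.22, 0.17, 0.20])

# Fit a lognormal distribution by maximum likelihood, then test the fit
# with a Kolmogorov-Smirnov test; a high p-value means the null hypothesis
# (the data come from the fitted distribution) is not rejected.
params = stats.lognorm.fit(vols)
ks_stat, p_value = stats.kstest(vols, "lognorm", args=params)
print(f"KS statistic: {ks_stat:.4f}, p-value: {p_value:.4f}")
```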
FIGURE 172.1 P-values and cross-correlations of fitting routines
FIGURE 172.1 (Continued)
Alternatively, you can perform a multiple-variable fitting routine to fit multiple variables at the same time. The catch is that the data must be complete; in other words, there must be no gaps in the area selected. Looking at the Risk-Free Rate Volatility worksheet, there are gaps in certain years: the 1-month Treasury Bill, 20-year Treasury Note, and 30-year Treasury Bond were not issued in some periods, hence the empty cells. You may have to perform a multivariable-fitting routine on the 3-month to 10-year volatilities and perform single-variable fits on the remaining 1-month, 20-year, and 30-year issues. To perform a multivariable fit, follow these instructions:

1. Make sure you have already created a simulation profile. Then select the cells D6:K22 and start the multifit routine: Risk Simulator | Tools | Distributional Fitting (Multi-Variable).
2. Rename each of the variables if desired, and make sure that the distribution types are all set to Continuous. Click OK when ready. A multiple distribution fitting report will then be generated (see the Fitting Volatility worksheet). Please see Figure 172.3.
CUBIC SPLINE CURVES

The cubic spline polynomial interpolation and extrapolation model is used to fill in the gaps of missing values in a time-series data set. In this chapter, we illustrate the model's application to spot yields and the term structure of interest rates. The model can be used both to interpolate missing data points within a time series of interest rates (as well as other macroeconomic variables, such as inflation rates, commodity prices, or market returns) and to extrapolate outside the given or known range, which is useful for forecasting purposes. To illustrate, Figure 172.4 shows the published zero coupon government bonds (risk-free rates) issued by the U.S. Department of the Treasury. The values provided are 1-month, 3-month, 6-month, and 1 to 3 years; the values then skip to years 5, 7, 10, 20, and 30. We can apply the cubic spline methodology to interpolate the missing values from 1 month to 30 years, and to extrapolate beyond 30 years.
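Before turning to the Modeling Toolkit implementation, the following minimal Python sketch illustrates the same idea with a natural cubic spline. The maturities and yields are hypothetical stand-ins for the published rates in Figure 172.4, and the B2CubicSpline function's internal boundary conditions may differ:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Published points on the curve: maturities in years and yields in percent
# (hypothetical values standing in for the Treasury rates in Figure 172.4)
maturities = np.array([1/12, 0.25, 0.5, 1, 2, 3, 5, 7, 10, 20, 30])
yields     = np.array([4.90, 4.95, 5.00, 4.98, 4.85, 4.80,
                       4.78, 4.82, 4.85, 5.10, 4.95])

# Natural cubic spline; extrapolate=True lets us query beyond year 30
spline = CubicSpline(maturities, yields, bc_type="natural", extrapolate=True)

# Interpolate missing points and extrapolate out to 50 years
for t in [4, 15, 25, 40, 50]:
    print(f"{t:>2}-year rate: {spline(t):.3f}%")
```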
FIGURE 172.2 Distributional fitting results
Figure 172.5 shows the interest rates plotted as a yield curve, and Figure 172.6 shows how to run the cubic spline forecasts. You can use either Modeling Toolkit's B2CubicSpline function or Risk Simulator | Forecasting | Cubic Spline. The Known X Values are the values on the x-axis (in this example we are interpolating interest rates, a time series of values, making the x-axis time). The Known Y Values are the published interest rates. With this information, we can determine the required Y values (the missing interest rates) by providing the required X values (the time periods for which we want to predict the interest rates).
FIGURE 172.3 Multiple-fitting result
Figure 172.7 shows the results from a cubic spline forecast. The entire term structure of interest rates for every six months is obtained, from the period of 6 months to 50 years. The interest rates obtained up to year 30 are interpolated. Interest rates beyond year 30 are extrapolated from the original known data set. Notice that the time-series chart shows a nonlinear polynomial function that is obtained using spline curve methodology.
FIGURE 172.4 Sample U.S. Treasury risk-free interest rates
FIGURE 172.5 Yield curve of published rates
FIGURE 172.6 Modeling cubic spline
FIGURE 172.7 Cubic spline results
173. Yield Curve—Vasicek Model

File Name: Yield Curve—Vasicek Model
Location: Modeling Toolkit | Yield Curve | Vasicek Model
Brief Description: Creating the term structure of interest rates and reconstructing the yield curve, assuming the underlying interest rates are mean-reverting and stochastic
Requirements: Modeling Toolkit, Risk Simulator
Modeling Toolkit Function Used: B2VasicekBondYield
This is the Vasicek model used to compute the term structure of interest rates and the yield curve. The Vasicek model assumes a mean-reverting stochastic interest rate; the rate of reversion and the long-run mean rate can be determined using Risk Simulator's Statistical Analysis Tool. If the long-run rate is higher than the current short rate, the yield curve is upward sloping, and vice versa.

The yield curve is the relationship between interest rates and the time to maturity of some underlying debt. The more formal mathematical description of this relationship is called the term structure of interest rates. The yield curve can take on various shapes. In the normal yield curve, yields rise as maturity lengthens and the yield curve is positively sloped, reflecting investor expectations for the economy to grow in the future (and hence an expectation that inflation rates will rise in the future). An inverted yield curve occurs when long-term yields fall below short-term yields; long-term investors will settle for lower yields now if they think the economy will slow or even decline in the future, indicative of a worsening economic situation (and hence an expectation that inflation will remain low in the future). Another potential situation is a flat yield curve, signaling uncertainty in the economy. The yield curve also can be humped or show a smile or a frown, and over time it can change in shape through a twist or bend, a parallel shift, or a movement on one end versus another.

As the yield curve is related to inflation rates, and as central banks in most countries have the ability to control monetary policy in order to target inflation rates, inflation rates are mean-reverting in nature. This also implies that interest rates are mean-reverting as well as stochastically changing over time. In a 1977 paper, Czech mathematician Oldrich Vasicek proved that bond prices on a yield curve over time and various maturities are driven by the short end of the yield curve, that is, the short-term interest rates, using a risk-neutral martingale measure. Vasicek assumed the mean-reverting Ornstein-Uhlenbeck process in his work; hence the resulting Vasicek model requires that a mean-reverting interest rate process be modeled. The rate of mean reversion and the long-run mean rate are both inputs in the Vasicek model. Please see Figure 173.1.
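As a rough companion to the B2VasicekBondYield function, the sketch below computes continuously compounded zero-coupon yields from the standard closed-form Vasicek bond price. The input values are hypothetical, and the Modeling Toolkit function's exact parameterization may differ:

```python
import numpy as np

def vasicek_yield(r0, a, theta, sigma, tau):
    """Continuously compounded zero-coupon yield under the Vasicek model
    dr = a*(theta - r)*dt + sigma*dW, where a is the speed of mean
    reversion, theta the long-run rate, and sigma the rate volatility."""
    B = (1 - np.exp(-a * tau)) / a
    A = np.exp((theta - sigma**2 / (2 * a**2)) * (B - tau)
               - sigma**2 * B**2 / (4 * a))
    price = A * np.exp(-B * r0)          # zero-coupon bond price
    return -np.log(price) / tau          # yield for maturity tau

# Hypothetical inputs: short rate 4%, long-run rate 6%, reversion speed
# 0.3, volatility 1%; the curve slopes upward because theta > r0
for t in [1, 2, 5, 10, 20, 30]:
    print(f"{t:>2}-year yield: {vasicek_yield(0.04, 0.3, 0.06, 0.01, t):.4%}")
```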
FIGURE 173.1 Using the Vasicek model to generate a yield curve
PART 2

Real Options SLS Applications
174. Introduction to the SLS Software

File Names: Plain Vanilla Call Option I, Plain Vanilla Call Option II, Plain Vanilla Call Option III, Plain Vanilla Call Option IV, Simple Two-Phased Sequential Compound Option, Simple Calls and Puts Using Trinomial Lattices
Location: Modeling Toolkit | Real Options Models
Brief Description: Using the Real Options SLS software to solve single-asset and multiple-asset, multiple-phased options using closed-form models and binomial, trinomial, and multinomial lattices
Requirements: Modeling Toolkit, Real Options SLS
The Real Options Super Lattice Software (SLS) comprises several modules, including the Single Asset and Single Phased SLS, Multiple Asset or Multiple Phased SLS, Multinomial Lattice Solver, SLS Excel Solution, and SLS Functions modules (Figure 174.1). These modules are highly powerful and customizable binomial and multinomial lattice solvers and can be used to solve many types of options, including the three main families: real options, which deal with physical and intangible assets; financial options, which deal with financial assets and the investments of such assets; and employee stock options, which deal with financial assets provided to employees within a corporation. This text illustrates some sample real options, financial options, and employee stock options applications that users will encounter most frequently.
The Single Asset and Single Phased module is used primarily for solving options with a single underlying asset using binomial lattices. Even highly complex options with a single underlying asset can be solved using the SLS. The Multiple Phased or Multiple Asset module is used for solving options with multiple underlying assets and sequential compound options with multiple phases using binomial lattices. Highly complex options with multiple underlying assets and phases can be solved using the SLS. The Multinomial Lattice Solver module uses multinomial lattices (trinomial, quadranomial, pentanomial) to solve specific options that cannot be solved using binomial lattices. The SLS Excel Solution implements the SLS computations within the Excel environment, allowing users to access the SLS Functions directly in Excel. This feature facilitates model building, formula and value linking and embedding, and running simulations, and provides the user sample templates to create such models.
FIGURE 174.1 Real Options SLS software
The SLS Functions are additional real options and financial options models accessible directly through Excel. This facilitates model building, linking and embedding, and running simulations. The Real Options SLS software supports multiple languages (e.g., English, Chinese, Japanese, and Spanish), and you can switch between languages on the main user interface (Figure 174.1) by selecting the language at the bottom left of the software. Please be aware that if your Windows environment does not have the foreign language pack installed, switching between languages may yield unrecognizable characters.
The SLS software was created by Dr. Johnathan Mun and accompanies the materials presented at different training courses on real options, simulation, and employee stock options valuation that he teaches. While the software and its models are based on his books, the training courses cover the material in more depth, including solving sample business cases and framing real options of actual cases. It is highly suggested that readers familiarize themselves with the fundamental concepts of real options by watching some of the Getting Started videos in the accompanying DVD or reading chapters 1 to 7 of Real Options Analysis, 2nd Edition (Hoboken, NJ:
John Wiley & Sons, 2006), prior to attempting an in-depth real options analysis using this software. Note: The first edition of Real Options Analysis: Tools and Techniques published in 2002 shows the Real Options Analysis Toolkit software, a precursor to the Real Options SLS (Super Lattice Solver). The Real Options SLS supersedes the Real Options Analysis Toolkit by providing these enhancements:
- Verifying and fixing all inconsistencies, computation errors, and bugs
- Allowance for changing input parameters over time (customized options)
- Allowance for changing volatilities over time
- Incorporation of Bermudan (vesting and blackout periods) and Customized Options
- Flexible modeling capabilities in creating or engineering your own customized options
- Super-speed implementation of the lattice computations, increasing the computation speed by about 1,000 times
- General enhancements to accuracy, precision, and analytical prowess
Readers are advised to focus on using the Real Options SLS as it provides many powerful enhancements and analytical flexibility over the Real Options Analysis Toolkit software.
SINGLE ASSET AND SINGLE PHASED MODULE

Figure 174.2 illustrates the Single Asset and Single Phased SLS module. After installing the software, you can access the SLS by clicking on Start | Programs | Real Options Valuation | Real Options SLS | Real Options SLS and choosing Create a New Single Asset Option Model. The module has several sections: Option Type, Basic Inputs, Custom Equations, Custom Variables, Benchmark, Result, and Create Audit Worksheet.

To help you get started, several simple examples are in order. A simple European call option is computed in this example using SLS. To follow along, start this example file by selecting Start | Programs | Real Options Valuation | Real Options SLS | Real Options SLS. Then click on the Create a New Single Asset Model icon (Figure 174.1). When the Single Asset SLS opens (Figure 174.2), click on File | Examples to see the list of example files that come with the software for the single-asset SLS. Start the Plain Vanilla Call Option I example. This example file will be loaded into the SLS software, as seen in Figure 174.3. The starting PV Underlying Asset or starting stock price is $100, and the Implementation Cost or strike price is $100 with a 5-year maturity. The annualized risk-free rate of return is 5%, and the historical, comparable, or future expected annualized volatility is 10%. Click on RUN (or Alt-R, or hit Enter), and a 100-step binomial lattice is computed. The results indicate a value of $23.3975 for both the European and American call options. Benchmark values using Black-Scholes and Closed-Form American approximation models, as well as standard plain vanilla Binomial American and Binomial European Call and Put Options with 1,000-step binomial lattices, are also computed.
FIGURE 174.2 Single Asset SLS
Notice that only the American and European Options are selected, and the computed results are for these simple plain vanilla American and European call options. The benchmark results use both closed-form models (Black-Scholes and Closed-Form Approximation models) and 1,000-step binomial lattices on plain vanilla options. You can change the steps to 1,000 in the basic inputs section to verify that the answers computed are equivalent to the benchmarks seen in Figure 174.4. Notice that, of course, the values computed for the American and European options are identical to each other and to the benchmark values of $23.4187, as it is never optimal to exercise a standard plain vanilla call option early if there are no dividends. Be aware that the higher the number of lattice steps, the longer it takes to compute the results. It is advisable to start with a lower number of lattice steps to make sure the analysis is robust and then progressively increase the steps to check for convergence of the results. See Chapter 6 of Real Options Analysis, 2nd Edition for more details on how many lattice steps are required for a robust option valuation. As a rule of thumb, however, convergence typically occurs between 100 and 1,000 steps. Alternatively, you can enter Terminal and Intermediate Node Equations for a call option to obtain the same results.
FIGURE 174.3 SLS results of simple European and American call options

Notice that using 100 steps and creating your own Terminal Node Equation of Max(Asset-Cost,0) and Intermediate Node Equation of Max(Asset-Cost,OptionOpen) will yield the same answer. When entering your own equations, make sure that Custom Option is first checked. Figure 174.5 illustrates how the analysis is done; the example file used here is Plain Vanilla Call Option III. Notice that the value $23.3975 in Figure 174.5 agrees with the value in Figure 174.3. The Terminal Node Equation is the computation that occurs at maturity, while the Intermediate Node Equation is the computation that occurs at all periods prior to maturity and is computed using backward induction. The term "OptionOpen" (one word, without spaces) represents "keeping the option open" and often is used in the Intermediate Node Equation when analytically representing the fact that the option is not executed but kept open for possible future execution. Therefore, in Figure 174.5, the Intermediate Node Equation Max(Asset-Cost,OptionOpen) represents the profit maximization decision of either executing the option at a particular point in time on the lattice or leaving it open for possible future execution. In contrast, the Terminal Node Equation of Max(Asset-Cost,0) represents the profit maximization decision at maturity of either executing the option if it is in the money or allowing it to expire worthless if it is at the money or out of the money.
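For readers who want to see the mechanics behind these node equations, here is a minimal Python sketch of a Cox-Ross-Rubinstein binomial lattice using Max(Asset-Cost,0) at the terminal nodes and Max(Asset-Cost,OptionOpen) at the intermediate nodes. It is an illustration under standard CRR assumptions, not the SLS implementation itself:

```python
import numpy as np

def binomial_call(S, K, T, r, sigma, steps, american=True):
    """Cox-Ross-Rubinstein binomial lattice for a call option.
    Terminal node equation: Max(Asset - Cost, 0).
    Intermediate node equation: Max(Asset - Cost, OptionOpen), where
    OptionOpen is the discounted risk-neutral expectation of the two
    successor nodes (a European option uses OptionOpen only)."""
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt))        # up factor
    d = 1 / u                              # down factor
    p = (np.exp(r * dt) - d) / (u - d)     # risk-neutral probability
    disc = np.exp(-r * dt)

    j = np.arange(steps + 1)
    values = np.maximum(S * u**j * d**(steps - j) - K, 0.0)  # terminal nodes
    for n in range(steps - 1, -1, -1):
        values = disc * (p * values[1:] + (1 - p) * values[:-1])  # OptionOpen
        if american:
            j = np.arange(n + 1)
            values = np.maximum(S * u**j * d**(n - j) - K, values)
    return values[0]

# The chapter's inputs: 100 steps gives approximately $23.40, and
# 1,000 steps converges toward the $23.42 closed-form benchmark.
print(binomial_call(100, 100, 5, 0.05, 0.10, 100))
print(binomial_call(100, 100, 5, 0.05, 0.10, 1000))
```

With no dividends, the American and European values coincide, mirroring the SLS results above.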
FIGURE 174.4 Comparing results with benchmarks
In addition, you can create an Audit Worksheet in Excel to view a sample 10-step binomial lattice by checking the box Generate Audit Worksheet. For instance, loading the example file Plain Vanilla Call Option I and selecting the box creates a worksheet as seen in Figure 174.6. Several items on this audit worksheet are noteworthy:
- The Audit Worksheet generated will show the first 10 steps of the lattice, regardless of how many steps you enter. That is, if you enter 1,000 steps, the first 10 steps will be generated. If a complete lattice is required, simply enter 10 steps in the SLS and the full 10-step lattice will be generated instead.
- The Intermediate Computations and Results are for the Super Lattice, based on the number of lattice steps entered, not on the 10-step lattice generated. To obtain the Intermediate Computations for 10-step lattices, simply rerun the analysis inputting 10 as the lattice steps. This way the Audit Worksheet generated will be for a 10-step lattice, and the results from SLS will now be comparable (Figure 174.7).
FIGURE 174.5 Custom equation inputs
- The worksheet provides only values, as it is assumed that you entered the terminal and intermediate node equations; hence there is no need to re-create these equations. You can always reload the SLS file and view the equations, or print out the form if required (by clicking on File | Print).
The software also allows you to save or open analysis files. That is, all the inputs in the software will be saved and can be retrieved for future use. The results are not saved, because you may accidentally delete or change an input and the results would no longer be valid. In addition, rerunning the Super Lattice computations takes only a few seconds, and it is always advisable to rerun the model when opening an old analysis file. You may also enter Blackout Steps, steps on the Super Lattice that behave differently from the terminal or intermediate steps. For instance, you can enter 1,000 as the lattice steps, 0-400 as the Blackout Steps, and some Blackout Equation (e.g., OptionOpen); this means that for the first 400 steps, the option holder can only keep the option open. Another example is entering 1, 3, 5, 10 if these are the lattice steps where blackout periods occur.
FIGURE 174.6 SLS-generated audit worksheet
You will have to calculate the relevant steps within the lattice where the blackout exists. For instance, if the blackout exists in years 1 and 3 on a 10-year, 10-step lattice, then steps 1 and 3 will be the blackout dates. This blackout step feature comes in handy when analyzing options with holding periods, vesting periods, or periods during which the option cannot be executed. Employee stock options have blackout and vesting periods, and certain contractual real options have periods during which the option cannot be executed (e.g., cooling-off periods or proof-of-concept periods). If equations are entered into the Terminal Node Equation box and American, European, or Bermudan options are chosen, the Terminal Node Equation you entered will be the one used in the Super Lattice for the terminal nodes.
FIGURE 174.7 SLS results with a 10-step lattice
However, for the intermediate nodes, the American option assumes the same Terminal Node Equation plus the ability to keep the option open; the European option assumes that the option can only be kept open and not executed; and the Bermudan option assumes that during the blackout lattice steps, the option will be kept open and cannot be executed. If you also enter the Intermediate Node Equation, you should choose the Custom Option first (otherwise you cannot use the Intermediate Node Equation box). The Custom Option result uses all the equations you have entered in the Terminal, Intermediate, and Intermediate with Blackout sections. The Custom Variables list is where you can add, modify, or delete custom variables (the variables that are required beyond the basic inputs). For instance, when running an abandonment option, you need the salvage value. You can add this in the Custom Variables list, giving it a name (a variable name must be a single word), the appropriate value, and the starting step when this value becomes effective. That is, if you have multiple salvage values (i.e., if salvage values change over time), you can enter the same variable name (e.g., salvage) several times, each time with a different value, and specify when each value becomes effective. For instance, in a 10-year, 100-step lattice problem where there are two
salvage values ($100 within the first 5 years, increasing to $150 at the beginning of Year 6), you can enter two salvage variables with the same name: $100 with a starting step of 0 and $150 with a starting step of 51. Be careful here, as Year 6 starts at step 51, not 61. That is, for a 10-year option with a 100-step lattice, we have: Steps 1–10 = Year 1; Steps 11–20 = Year 2; Steps 21–30 = Year 3; Steps 31–40 = Year 4; Steps 41–50 = Year 5; Steps 51–60 = Year 6; Steps 61–70 = Year 7; Steps 71–80 = Year 8; Steps 81–90 = Year 9; and Steps 91–100 = Year 10. Finally, incorporating 0 as a blackout step indicates that the option cannot be executed immediately.
MULTIPLE ASSET OR MULTIPLE PHASED SLS MODULE

The Multiple Asset or Multiple Phased module (MSLS) is an extension of the Single Asset and Single Phased module in that the MSLS can be used to solve options with multiple underlying assets and/or multiple phases. The MSLS allows the user to enter multiple underlying assets as well as multiple valuation lattices (Figure 174.8). These valuation lattices can call user-defined Custom Variables. Some examples of the types of options that the MSLS can be used to solve include:
- Sequential Compound Options (two-, three-, and multiple-phased sequential options)
- Simultaneous Compound Options (multiple assets with multiple simultaneous options)
FIGURE 174.8 Multiple (Asset and Phase) SLS
- Chooser and Switching Options (choosing among several options and underlying assets)
- Floating Options (choosing between calls and puts)
- Multiple Asset Options (3D binomial option models)
- Many other customizable options
The MSLS software has several areas, including a Maturity and a Comment area. The Maturity value is a global value for the entire option, regardless of how many underlying or valuation lattices exist. The Comment field is for your personal notes describing the model you are building. There is also a Blackout and Vesting Period Steps section and a Custom Variables list similar to those in the SLS, and the MSLS also allows you to create Audit Worksheets.

To illustrate the power of the MSLS, a simple example is in order. Click on Start | Programs | Real Options Valuation | Real Options SLS | Real Options SLS and select the second icon, Create a New Multiple Asset Option Model (Figure 174.1). Then click on File | Examples | Simple Two Phased Sequential Compound Option to load our first example. Figure 174.9 shows the MSLS example loaded. In this simple example, a single underlying asset is created with two valuation phases. The strategy tree for this option is seen in Figure 174.10. The project is executed in two phases: the first phase, within the first year, costs $5 million, while the second phase occurs within two years, but only after the first phase is executed, and costs $80 million, both in present value dollars. The PV Asset of the project is $100 million (the net present value [NPV] therefore is $15 million), and it faces 30% volatility in its cash flows. The computed strategic value using the MSLS is $27.67 million, indicating that there is $12.67 million in option value. That is, spreading out and staging the investment into two phases has significant value (an expected value of $12.67 million, to be exact). See the sections on compound options in Chapter 10 of Real Options Analysis for more examples and results interpretation.
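The MSLS solves this with linked valuation lattices. Purely as an illustration of the underlying logic, the sketch below rolls a phase-2 option (cost $80 million at year 2) back to a phase-1 decision (cost $5 million at year 1) on a single binomial lattice. The 5% risk-free rate is an assumption not stated in the text, so the printed value will only be roughly in line with the $27.67 million reported above:

```python
import numpy as np

# Two-phased sequential compound option sketch: PV Asset $100M, 30%
# volatility; Phase 1 costs $5M at year 1, Phase 2 costs $80M at year 2.
S, sigma, r = 100.0, 0.30, 0.05      # r is an assumed risk-free rate
T1, K1, T2, K2 = 1.0, 5.0, 2.0, 80.0
steps = 200                          # lattice steps out to year 2
dt = T2 / steps
u = np.exp(sigma * np.sqrt(dt)); d = 1 / u
p = (np.exp(r * dt) - d) / (u - d); disc = np.exp(-r * dt)
n1 = int(T1 / dt)                    # lattice step at which Phase 1 matures

# Phase 2 option is valued first: payoff Max(Asset - 80, 0) at year 2
j = np.arange(steps + 1)
values = np.maximum(S * u**j * d**(steps - j) - K2, 0.0)

# Roll back; at the Phase 1 decision step, the holder pays $5M for the
# Phase 2 option or abandons: Max(Phase2Value - 5, 0)
for n in range(steps - 1, -1, -1):
    values = disc * (p * values[1:] + (1 - p) * values[:-1])
    if n == n1:
        values = np.maximum(values - K1, 0.0)

print(f"Strategic value: ${values[0]:.2f}M")  # roughly in line with $27.67M
```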
MULTINOMIAL SLS MODULE

The Multinomial Lattice Solver (MNLS) is the third module of the Real Options SLS software. The MNLS applies multinomial lattices, in which multiple branches stem from each node, such as trinomials (three branches), quadranomials (four branches), and pentanomials (five branches). Figure 174.11 illustrates the MNLS module. The module has a Basic Inputs section, where all of the common inputs for the multinomials are listed, and four lattice types covering four different multinomial applications, each with the additional required inputs depending on the type chosen.

Figure 174.12 shows an example call and put option computation using trinomial lattices. To follow along, start the third module from the Real Options SLS software main screen (click on the third icon, Create a New Multinomial Option Model), open the example file Simple Calls and Puts using Trinomial Lattices under the File | Examples menu, change the dividend rate to 0%, and hit Enter.
FIGURE 174.9 MSLS solution to a simple two-phased Sequential Compound Option
FIGURE 174.10 Strategy tree for two-phased sequential compound option
FIGURE 174.11 Multinomial Lattice Solver
FIGURE 174.12 Simple calls and puts using trinomial lattices
Note that the results shown in Figure 174.12 using a 50-step lattice are equivalent to the results shown in Figure 174.3 using a 100-step binomial lattice. In fact, a trinomial lattice or any other multinomial lattice provides identical answers to the binomial lattice at the limit, but convergence is achieved faster at lower steps. To illustrate, Table 174.1 shows how the trinomial lattice, for a certain set of input assumptions, yields the correct option value with fewer steps than a binomial lattice requires. Because both yield identical results at the limit but trinomials are much more difficult to calculate and take longer to compute, the binomial lattice usually is used instead. However, a trinomial is required under one special circumstance: when the underlying asset follows a mean-reverting process.
TABLE 174.1 Binomial versus trinomial lattices

Steps               5        10       100      1,000    5,000
Binomial Lattice    $30.73   $29.22   $29.72   $29.77   $29.78
Trinomial Lattice   $29.22   $29.50   $29.75   $29.78   $29.78
With the same logic, quadranomials and pentanomials yield identical results as the binomial lattice with the exception that these multinomial lattices can be used to solve these special limiting conditions:
- Trinomials. Results are identical to binomials and are most appropriate for solving mean-reverting underlying assets (a sketch of a trinomial lattice follows this list).
- Quadranomials. Results are identical to binomials and are most appropriate for solving options whose underlying assets follow jump-diffusion processes.
- Pentanomials. Results are identical to binomials and are most appropriate for solving two underlying assets that are combined, called rainbow options (e.g., price and quantity are multiplied to obtain total revenues, where price and quantity each follow a different underlying lattice with its own volatility, and the two underlying parameters could be correlated to one another).
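The MNLS internals are not published here; purely as an illustration of how a trinomial lattice works, the sketch below prices the same plain vanilla call on a Boyle-style trinomial tree, where three branches (up, middle, down) stem from each node:

```python
import numpy as np

def trinomial_call(S, K, T, r, sigma, steps, american=False):
    """Boyle-style trinomial lattice for a call option; three branches
    per node, so there are 2n + 1 nodes at step n."""
    dt = T / steps
    u = np.exp(sigma * np.sqrt(2 * dt)); d = 1 / u
    a = np.exp(r * dt / 2)
    b = np.exp(sigma * np.sqrt(dt / 2))
    pu = ((a - 1 / b) / (b - 1 / b)) ** 2   # up probability
    pd = ((b - a) / (b - 1 / b)) ** 2       # down probability
    pm = 1 - pu - pd                        # middle probability
    disc = np.exp(-r * dt)

    k = np.arange(-steps, steps + 1)
    values = np.maximum(S * u**k - K, 0.0)  # terminal nodes
    for n in range(steps - 1, -1, -1):
        values = disc * (pu * values[2:] + pm * values[1:-1] + pd * values[:-2])
        if american:
            k = np.arange(-n, n + 1)
            values = np.maximum(S * u**k - K, values)
    return values[0]

# Same inputs as the plain vanilla call earlier; a 50-step trinomial is
# already close to the ~$23.42 benchmark.
print(trinomial_call(100, 100, 5, 0.05, 0.10, 50))
```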
SLS EXCEL SOLUTION MODULE

The SLS software also allows you to create your own models in Excel using customized functions. This is an important feature, because certain models may require linking from other spreadsheets or databases (e.g., when running certain Excel macros and functions, when certain inputs need to be simulated first, or when inputs may change over the course of modeling your options). This Excel compatibility allows you to innovate within the Excel spreadsheet environment with complete flexibility. Specifically, the sample worksheet included in the software solves the SLS, MSLS, and Changing Volatility models.

To illustrate, Figure 174.13 shows a Customized Abandonment Option solved using SLS. The same problem can be solved using the SLS Excel Solution module by clicking on Start | Programs | Real Options Valuation | Real Options SLS | SLS Excel Solution; Excel will then start with the template loaded. If you are using Excel XP or 2003, make sure to set Excel's macro security to medium or below (Tools | Macro | Security | Medium). If you are using Excel 2007, make sure the macro security settings are not blocking macros; otherwise the functions will not work properly. (To reset your security settings in Excel 2007, click on the Office button at the top left of the screen and select Excel Options | Trust Center | Trust Center Settings | Macro Settings | Enable All Macros.) The sample solution is seen in Figure 174.14. Notice the same results using the SLS and the SLS Excel Solution file. You can reuse the template by simply clicking on File | Save As in Excel and using the new file for your own modeling needs.

Similarly, the MSLS can be solved using the SLS Excel Solver. Figure 174.15 shows a complex multiple-phased sequential compound option solved using the SLS Excel Solver. The results shown here are identical to the results generated from the MSLS module (example file: Multiple Phased Complex Sequential Compound Option). One small note of caution here: If you add or reduce the number of option valuation lattices, make sure you change the function's link for the MSLS Result to incorporate the right number of rows; otherwise the analysis will not compute properly. For example, the default shows three option valuation lattices.
FIGURE 174.13 Customized Abandonment Option using SLS
FIGURE 174.14 Customized Abandonment Option using SLS Excel solution file
FIGURE 174.15 Complex Sequential Compound Option using SLS Excel solution file
FIGURE 174.16 Changing Volatility and Risk-Free Rate Option
By selecting the MSLS Results cell in the spreadsheet and clicking on Insert | Function, you will see that the function links to cells A24:H26 (these three rows are the OVLattices input in the function). If you add another option valuation lattice, change the link to A24:H27, and so forth. You can also leave the list of custom variables as is; the results will not be affected if these variables are not used in the custom equations.

Finally, Figure 174.16 shows a Changing Volatility and Changing Risk-Free Rate Option. In this model, the volatility and risk-free yields are allowed to change over time, and a nonrecombining lattice is required to solve the option. In most cases, it is recommended that you create option models without the changing volatility term structure; getting a single volatility is difficult enough, let alone a series of changing volatilities over time. If different volatilities that are uncertain need to be modeled, run a Monte Carlo simulation using the Risk Simulator software on volatilities instead. This model should be used only when the volatilities are modeled robustly, are rather certain, and change over time. The same advice applies to a changing risk-free rate term structure.
SLS EXCEL FUNCTIONS MODULE

The software also provides a series of SLS Functions that are directly accessible in Excel. To illustrate their use, start the SLS Functions by clicking on Start | Programs | Real Options Valuation | Real Options SLS | SLS Functions, and Excel will start. When in Excel, you can click on the function wizard icon or simply select an empty cell and click on Insert | Function. While in Excel's equation wizard, select either the All category or Real Options Valuation, the name of the company that developed the software. Here you will see a list of SLS Functions (with SLS prefixes) that are ready for use in Excel. Figure 174.17 shows the Excel equation wizard.
FIGURE 174.17 Excel's equation wizard

Suppose you select the first function, SLSBinomialAmericanCall, and hit OK. Figure 174.18 shows how the function can be linked to an existing Excel model. The values in cells B1 to B7 can be linked from other models or spreadsheets, can be created using Excel's Visual Basic for Applications (VBA) macros, or can be dynamic and changing, as when running a simulation. Another quick note of caution here:
FIGURE 174.18 Using SLS Functions in Excel
Certain SLS Functions require many input variables, and Excel’s equation wizard can show only five variables at a time. Therefore, remember to scroll down the list of variables by clicking on the vertical scroll bar to access the rest of the variables.
LATTICE MAKER MODULE

Finally, the full version of the software comes with an advanced binomial Lattice Maker module. This module is capable of generating binomial lattices and decision lattices with visible formulas in an Excel spreadsheet. Figure 174.19 illustrates an example option generated using this module.
FIGURE 174.19 Lattice Maker
The illustration shows the module inputs; you can obtain this module by clicking on the Create a Lattice icon on the main Real Options SLS software screen (Figure 174.1). Enter the relevant inputs and click OK to run the Lattice Maker (use the sample inputs shown in Figure 174.19 as an example). Notice that the visible equations are linked to the existing spreadsheet, which means this module will come in handy when running Monte Carlo simulations or linking to and from other spreadsheet models. The results also can be used as a presentation and learning tool to peer inside the analytical black box of binomial lattices. Last but not least, this module also has a decision lattice with specific decision nodes indicating expected optimal times of execution of certain options. The results generated from this module are identical to those generated using the SLS and Excel functions, but they have the added advantage of a visible lattice. (Lattices of up to 200 steps can be generated using this module.)
175. Employee Stock Options—Simple American Call Option

File Name: Employee Stock Options—Simple American Call Option
Location: Modeling Toolkit | Real Options Models
Brief Description: Computing the American Call Option (the option is exercisable at any time up to and including maturity) using binomial lattices and closed-form solutions
Requirements: Modeling Toolkit, Real Options SLS
Figure 175.1 shows how an American Call Option can be computed using the Real Options SLS software. American options are, of course, exercisable at any time up to and including maturity. Refer to the previous chapter for how to use the SLS software and how to calibrate the number of lattice steps required in the analysis to obtain convergence of results.
FIGURE 175.1 American Call Option
176. Employee Stock Options—Simple Bermudan Call Option with Vesting

File Name: Employee Stock Options—Simple Bermudan Call Option
Location: Modeling Toolkit | Real Options Models
Brief Description: Computing the Bermudan Call Option (the option is exercisable at any time up to and including maturity except during certain blackout periods) using binomial lattices and closed-form solutions
Requirements: Modeling Toolkit, Real Options SLS
Figure 176.1 illustrates the Real Options SLS model Employee Stock Options—Simple Bermudan Call Option with Vesting, showing how an employee stock
FIGURE 176.1 SLS results of a vesting call option

option (ESO) with a vesting period and blackout dates can be modeled. Enter the blackout steps (0-39). Because the blackout dates input box has been used, you will need to enter the terminal node equation (TE), intermediate node equation (IE), and intermediate node equation during vesting and blackout periods (IEV). Enter Max(Stock-Strike,0) for the TE, Max(Stock-Strike,0,OptionOpen) for the IE, and OptionOpen for the IEV (example file used: ESO Vesting). This means the option is either executed or left to expire worthless at termination; can be exercised early or kept open at the intermediate nodes; and can only be kept open, with no exercise allowed, during the intermediate steps when blackouts or vesting occur. The result is $49.73 (Figure 176.1), which can be corroborated with the ESO Valuation Toolkit (Figure 176.2). The ESO Valuation Toolkit is another software tool developed by the author at Real Options Valuation, Inc., specifically designed to solve ESO problems following the 2004 FAS 123R; in fact, it was used by the Financial Accounting Standards Board to model the valuation example in its final FAS 123R Statement in December 2004. Before starting with ESO valuations, it is suggested that you refer to Valuing Employee Stock Options (Hoboken, NJ: John Wiley & Sons, 2004) by Dr. Johnathan Mun as a primer. Figure 176.2 illustrates a comparable set of results using the Employee Stock Options Valuation Toolkit. (See www.realoptionsvaluation.com for more details.)
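In lattice terms, the IEV simply replaces the exercise test with the continuation value during the vesting and blackout steps. The self-contained Python sketch below illustrates the rule; the inputs are hypothetical stand-ins, not the model file's actual parameters:

```python
import numpy as np

# Bermudan call with vesting: blackout steps 0-39 of a 100-step lattice
# use OptionOpen (continuation) only; all other steps allow early exercise.
# Inputs are hypothetical; the chapter's exact ESO inputs are in the model file.
S, K, T, r, sigma, steps = 100.0, 100.0, 10.0, 0.05, 0.45, 100
blackout = set(range(0, 40))

dt = T / steps
u = np.exp(sigma * np.sqrt(dt)); d = 1 / u
p = (np.exp(r * dt) - d) / (u - d); disc = np.exp(-r * dt)

j = np.arange(steps + 1)
values = np.maximum(S * u**j * d**(steps - j) - K, 0.0)  # TE: Max(Stock-Strike,0)
for n in range(steps - 1, -1, -1):
    values = disc * (p * values[1:] + (1 - p) * values[:-1])  # OptionOpen
    if n not in blackout:  # IE: Max(Stock-Strike, OptionOpen); else IEV: keep open
        j = np.arange(n + 1)
        values = np.maximum(S * u**j * d**(n - j) - K, values)
print(f"Bermudan ESO value: ${values[0]:.2f}")
```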
FIGURE 176.2 ESO Valuation Toolkit results of a vesting call option
177. Employee Stock Options—Simple European Call Option

File Name: Employee Stock Options—Simple European Call Option
Location: Modeling Toolkit | Real Options Models
Brief Description: Computing the European Call Option (the option is exercisable only at maturity) using binomial lattices and closed-form solutions
Requirements: Modeling Toolkit, Real Options SLS
The European Call Option can be exercised only at maturity. In the typical case where the underlying stock pays a dividend, an American Call Option is worth more than a Bermudan Call Option, which in turn is worth more than a European Call Option, due to each preceding option's greater ability to exercise early. Simple European options can be computed using closed-form models (e.g., Black-Scholes) as well as binomial lattices, and the results are identical when sufficient lattice steps are applied. However, for more complex option models, only binomial lattices are appropriate for the valuation, as will be seen in later chapters and models. Please see Figure 177.1.
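Since this chapter mentions the closed-form alternative, here is a minimal Black-Scholes sketch for the European call, using the same plain vanilla inputs as Chapter 174; it reproduces the approximately $23.42 benchmark quoted there:

```python
import numpy as np
from scipy.stats import norm

def black_scholes_call(S, K, T, r, sigma, q=0.0):
    """Closed-form European call with continuous dividend yield q."""
    d1 = (np.log(S / K) + (r - q + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * np.exp(-q * T) * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Plain vanilla inputs from Chapter 174: approximately $23.42
print(f"${black_scholes_call(100, 100, 5, 0.05, 0.10):.4f}")
```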
FIGURE 177.1 European Call Option
178. Employee Stock Options—Suboptimal Exercise

File Name: Employee Stock Options—Suboptimal Exercise
Location: Modeling Toolkit | Real Options
Brief Description: Valuing an employee stock option where suboptimal exercise behavior is considered and modeled
Requirements: Modeling Toolkit, Real Options SLS
This model shows how suboptimal exercise behavior multiples can be included in the analysis and how the custom variables list can be used, as seen in Figure 178.1
FIGURE 178.1 SLS results of a call option with suboptimal exercise behavior
(example file used: ESO Suboptimal Behavior). The terminal equation (TE) is the same as in the previous example, but the intermediate equation (IE) assumes that the option will be suboptimally executed if the stock price in some future state exceeds the suboptimal exercise threshold times the strike price. Notice that the intermediate equation during vesting periods (IEV) is not used, because we did not assume any vesting or blackout periods. Also, the suboptimal exercise multiple is listed in the custom variables list with the relevant value of 1.85 and a starting step of 0. This means that 1.85 is applicable from step 0 in the lattice all the way through to step 100. The results again are verified through the ESO Toolkit (Figure 178.2).
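In lattice terms, the suboptimal exercise multiple converts the intermediate node equation into a threshold rule. The self-contained Python sketch below shows the modification; the inputs are hypothetical, not the model file's:

```python
import numpy as np

# Suboptimal exercise: the option is exercised as soon as the stock price
# reaches 1.85 times the strike; below the threshold it is kept open.
# Inputs are hypothetical; the chapter's exact inputs are in the model file.
S, K, T, r, sigma, steps, multiple = 100.0, 100.0, 10.0, 0.05, 0.45, 100, 1.85

dt = T / steps
u = np.exp(sigma * np.sqrt(dt)); d = 1 / u
p = (np.exp(r * dt) - d) / (u - d); disc = np.exp(-r * dt)

j = np.arange(steps + 1)
values = np.maximum(S * u**j * d**(steps - j) - K, 0.0)  # TE
for n in range(steps - 1, -1, -1):
    values = disc * (p * values[1:] + (1 - p) * values[:-1])  # OptionOpen
    stock = S * u**np.arange(n + 1) * d**(n - np.arange(n + 1))
    # IE: IF(Stock >= multiple*Strike, Stock - Strike, OptionOpen)
    values = np.where(stock >= multiple * K, stock - K, values)
print(f"ESO value with suboptimal exercise: ${values[0]:.2f}")
```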
FIGURE 178.2 ESO Toolkit results of a call option accounting for suboptimal behavior
179. Employee Stock Options—Vesting, Blackout, Suboptimal, Forfeiture

File Name: Employee Stock Options—Vesting, Blackout, Suboptimal, Forfeiture
Location: Modeling Toolkit | Real Options
Brief Description: Computing an employee stock option after considering the real-life elements of vesting, blackout periods, suboptimal exercise behavior, and forfeiture rates, which usually significantly lower the value of the option to be expensed; such options can be solved only using a customized Real Options SLS approach (closed-form models are inadequate and inappropriate)
Requirements: Modeling Toolkit, Real Options SLS
This model incorporates the element of forfeiture, as seen in Figure 179.1 (example file used: ESO Vesting, Blackout, Suboptimal, Forfeiture). If the option is vested and the prevailing stock price exceeds the suboptimal threshold times the strike price, the option is executed summarily and suboptimally. If it is vested but the threshold is not exceeded, the option is executed only if postvesting forfeiture occurs; otherwise the option is kept open. This means that the intermediate step is a probability-weighted average of these occurrences. Finally, when an employee forfeits the option during the vesting period, all options are forfeited, at a prevesting forfeiture rate. In this example, we assume identical pre- and postvesting forfeiture rates so that we can verify the results using the ESO Toolkit (Figure 179.2). In other cases, different rates may be assumed.
FIGURE 179.1 SLS results of a call option accounting for vesting, forfeiture, suboptimal behavior, and blackout periods
FIGURE 179.2 ESO Toolkit results after accounting for vesting, forfeiture, suboptimal behavior, and blackout periods
180. Exotic Options—American and European Lower Barrier Options

File Names: Exotic Options—Barrier Option—Down and In Lower Barrier Call, Exotic Options—Barrier Option—Down and Out Lower Barrier Call
Location: Modeling Toolkit | Real Options
Brief Description: Computing lower barrier options; that is, options that either get kicked in the money or out of the money when they breach a lower barrier
Requirements: Modeling Toolkit, Real Options SLS
The Lower Barrier option measures the strategic value of an option (this applies to both calls and puts) that comes either in the money or out of the money when the asset value hits an artificial lower barrier that is currently lower than the asset value. Therefore, a Down and In option (for both calls and puts) becomes live if the asset value hits the lower barrier. Conversely, a Down and Out option is live only when the lower barrier is not breached. Examples of this option include contractual agreements whereby some event or clause is triggered if the lower barrier is breached. The value of a barrier option is lower than that of a standard option, as the barrier option is valuable only within a smaller price range than the standard option. The holder of a barrier option loses some of the traditional option value; therefore, a barrier option should sell at a lower price than a standard option. An example would be a contractual agreement whereby the writer of the contract can get into or out of certain obligations if the asset or project value breaches a barrier.

Figure 180.1 shows a Lower Barrier option for a Down and In call. Notice that the value is only $7.3917, much lower than the regular American call option value of $42.47. This is because the barrier is set low, at $90: all of the upside potential of the regular call option is reduced significantly, as the option becomes live only if the asset value falls below this lower barrier of $90 (example file used: Barrier Option—Down and In Lower Barrier Call). To make such a Lower Barrier option binding, the lower barrier level must be below the starting asset value but above the implementation cost. If the barrier level is above the starting asset value, it becomes an Upper Barrier option. If the lower barrier is below the implementation cost, the option will be worthless under all conditions. When the lower barrier level is between the implementation cost and the starting asset value, the option is potentially worth something; however, its value depends on volatility.
FIGURE 180.1 Down and In Lower American Barrier Option
Using the same parameters as in Figure 180.1 and changing the volatility illustrates what happens:
- At a volatility of 75%, the option value is $4.34.
- At a volatility of 25%, the option value is $3.14.
- At a volatility of 5%, the option value is $0.01.
The lower the volatility, the lower the probability that the asset value will fluctuate enough to breach the lower barrier and bring the option to life. By balancing volatility with the threshold lower barrier, you can create optimal trigger values for barriers. Figure 180.2 shows the Lower Barrier option for a Down and Out call. Here, if the asset value breaches this lower barrier, the option is worthless; it is valuable only when the lower barrier is not breached. As call options have higher values when the asset value is high and lower values when the asset value is low, this Lower Barrier Down and Out call option is worth almost the same as the regular American option.
FIGURE 180.2 Down and Out Lower American Barrier Option
The higher the barrier, the lower the value of the Lower Barrier option will be (example file used here: Barrier Option—Down and Out Lower Barrier Call). For instance:
- At a lower barrier of $90, the option value is $42.19.
- At a lower barrier of $100, the option value is $41.58.
Figures 180.1 and 180.2 illustrate American Barrier Options. To change these into European Barrier Options, set the Intermediate Node Equation to OptionOpen. In addition, for certain types of contractual options, vesting and blackout periods can be imposed. For solving such Bermudan Barrier Options, keep the same Intermediate Node Equation as the American Barrier Options but set the Intermediate Node Equation During Blackout and Vesting Periods to OptionOpen and insert the corresponding blackout and vesting period lattice steps. Finally, if the barrier is a changing target over time, put in several custom variables named Barrier with the different values and starting lattice steps.
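A barrier check is a one-line change to the standard binomial lattice: any node at or below the barrier is knocked out. The Python sketch below illustrates a Down and Out call; the inputs are hypothetical, and the chapter's exact parameters live in the example files:

```python
import numpy as np

def down_and_out_call(S, K, H, T, r, sigma, steps, american=True):
    """Binomial sketch of a Down and Out call: any node at or below the
    lower barrier H is knocked out (worth zero); all other nodes follow
    the usual terminal/intermediate node equations."""
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt)); d = 1 / u
    p = (np.exp(r * dt) - d) / (u - d); disc = np.exp(-r * dt)

    j = np.arange(steps + 1)
    stock = S * u**j * d**(steps - j)
    values = np.where(stock <= H, 0.0, np.maximum(stock - K, 0.0))
    for n in range(steps - 1, -1, -1):
        values = disc * (p * values[1:] + (1 - p) * values[:-1])
        j = np.arange(n + 1)
        stock = S * u**j * d**(n - j)
        if american:
            values = np.maximum(stock - K, values)
        values = np.where(stock <= H, 0.0, values)  # barrier breached: dead
    return values[0]

# Hypothetical inputs echoing Figure 180.2: barrier $90 below spot $100
print(f"${down_and_out_call(100, 100, 90, 5, 0.05, 0.25, 100):.2f}")
```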
181. Exotic Options—American and European Upper Barrier Options

File Names: Exotic Options—Barrier Option—Up and In Upper Barrier Call, Exotic Options—Barrier Option—Up and Out Upper Barrier Call
Location: Modeling Toolkit | Real Options
Brief Description: Computing upper barrier options; that is, options that either get kicked in the money or out of the money when they breach an upper barrier
Requirements: Modeling Toolkit, Real Options SLS
The Upper Barrier option measures the strategic value of an option (this applies to both calls and puts) that comes either in the money or out of the money when the asset value hits an artificial upper barrier that is currently higher than the asset value. Therefore, in an Up and In option (for both calls and puts), the option becomes live if the asset value hits the upper barrier. Conversely, for the Up and Out option, the option is live only when the upper barrier is not breached. This is very similar to the Lower Barrier option, but now the barrier is above the starting asset value, and for a binding barrier option, the implementation cost is typically lower than the upper barrier. That is, the upper barrier is usually greater than the implementation cost and also greater than the starting asset value.

Examples of this option include contractual agreements whereby some event or clause is triggered if the upper barrier is breached. The values of barrier options are typically lower than those of standard options, as the barrier option has value within a smaller price range than the standard option. The holder of a barrier option loses some of the traditional option value; therefore, a barrier option should sell at a lower price than a standard option. An example would be a contractual agreement whereby the writer of the contract can get into or out of certain obligations if the asset or project value breaches a barrier.

The Up and In Upper American Barrier option has a slightly lower value than a regular American call option, as seen in Figure 181.1, because some of the option value is lost when the asset is less than the barrier but greater than the implementation cost. Clearly, the higher the upper barrier, the lower the Up and In Barrier option value will be, as more of the option value is lost due to the inability to execute when the asset value is below the barrier (example file used: Barrier Option—Up and In Upper Barrier Call). For instance:
- When the upper barrier is $110, the option value is $41.22.
- When the upper barrier is $120, the option value is $39.89.
FIGURE 181.1 Up and In Upper American Barrier option
In contrast, an Up and Out Upper American Barrier option is worth a lot less because this barrier truncates the option’s upside potential. Figure 181.2 shows the computation of such an option. Clearly, the higher the upper barrier, the higher the option value will be (example file used: Barrier Option—Up and Out Upper Barrier Call). For instance:
- When the upper barrier is $110, the option value is $23.69.
- When the upper barrier is $120, the option value is $29.59.
Finally, note the issues of nonbinding barrier options. Examples of nonbinding options are:

- Up and Out Upper Barrier calls: when the upper barrier is less than or equal to the implementation cost, the option is worthless.
- Up and In Upper Barrier calls: when the upper barrier is less than or equal to the implementation cost, the option value reverts to that of a simple call option.
FIGURE 181.2 Up and Out Upper American Barrier option
Examples of Upper Barrier options are contractual options. Typical examples are:

- A manufacturer contractually agrees not to sell its products at prices higher than a prespecified upper barrier price level.
- A client agrees to pay the market price of a good or product up to a certain amount, with the contract becoming void if the price exceeds some ceiling.
Figures 181.1 and 181.2 illustrate American Barrier options. To change these into European Barrier options, set the Intermediate Node Equation to OptionOpen. In addition, for certain types of contractual options, vesting and blackout periods can be imposed. For solving such Bermudan Barrier options, keep the same Intermediate Node Equation as the American Barrier options but set the Intermediate Node Equation During Blackout and Vesting Periods to OptionOpen and insert the corresponding blackout and vesting period lattice steps. Finally, if the barrier is a changing target over time, enter several Custom Variables named Barrier with the different values and starting lattice steps.
182. Exotic Options—American and European Double Barrier Options and Exotic Barriers

File Names: Exotic Options—Barrier Option—Up and In, Down and In Double Barrier Call, Exotic Options—Barrier Option—Up and Out, Down and Out Double Barrier Call
Location: Modeling Toolkit | Real Options Models
Brief Description: Computing American and European Double Barrier options, where the option is knocked in the money or out of the money if the underlying asset or stock breaches an upper or lower barrier
Requirements: Modeling Toolkit, Real Options SLS
The Double Barrier option is solved using the binomial lattice. This model measures the strategic value of an option (this applies to both calls and puts) that comes either in the money or out of the money when the asset value hits either the artificial upper or lower barrier. Therefore, an Up and In and Down and In option (for both calls and puts) becomes live if the asset value hits either the upper or lower barrier. Conversely, an Up and Out and Down and Out option is live only when neither the upper nor the lower barrier is breached. Examples of this option include contractual agreements whereby some event or clause is triggered if the upper barrier is breached. The value of a barrier option is lower than that of a standard option, as the barrier option has value within a smaller price range than the standard option. The holder of a barrier option loses some of the traditional option value; therefore, such options should be worth less than standard options.

Figure 182.1 illustrates an American Up and In, Down and In Double Barrier option, which is a combination of the upper and lower barrier options shown previously. The same logic applies to this double barrier option. Figure 182.1 illustrates the American Barrier option solved using the SLS. To change this into a European Barrier option, set the Intermediate Node Equation to OptionOpen. In addition, for certain types of contractual options, vesting and blackout periods can be imposed. For solving such Bermudan Barrier options, keep the same Intermediate Node Equation as the American Barrier options but set the Intermediate Node Equation During Blackout and Vesting Periods to OptionOpen and insert the corresponding blackout and vesting period lattice steps. Finally, if the barrier is a changing target over time, enter several Custom Variables named Barrier with the different values and starting lattice steps.
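As a rough sketch of the double barrier logic (again keying the knock-in off the current node's asset value, as the SLS node equations do, and using purely hypothetical inputs), an American Up and In, Down and In call can be approximated as follows; the option is live whenever the asset value sits outside the band formed by the two barriers.

import math

# Hypothetical inputs for illustration only.
S, COST, LOWER, UPPER = 100.0, 100.0, 80.0, 120.0
T, VOL, RF, N = 5.0, 0.25, 0.05, 100

dt = T / N
u, d = math.exp(VOL * math.sqrt(dt)), math.exp(-VOL * math.sqrt(dt))
p, disc = (math.exp(RF * dt) - d) / (u - d), math.exp(-RF * dt)

knocked_in = lambda a: a > UPPER or a < LOWER  # live outside the barrier band
v = [max(a - COST, 0.0) if knocked_in(a) else 0.0
     for a in (S * u**j * d**(N - j) for j in range(N + 1))]
for i in range(N - 1, -1, -1):
    for j in range(i + 1):
        a = S * u**j * d**(i - j)
        option_open = disc * (p * v[j + 1] + (1 - p) * v[j])
        v[j] = max(a - COST, option_open) if knocked_in(a) else option_open
print(f"Up and In, Down and In double barrier call: {v[0]:.2f}")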
FIGURE 182.1 Up and In, Down and In Double Barrier option
Exotic Barrier options exist when other options are combined with barriers. For instance, an option to expand can be executed only if the PV Asset exceeds some threshold, or a contraction option to outsource manufacturing can be executed only when the asset value falls below some breakeven point. Again, such options can be easily modeled using the SLS.
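A hedged sketch of one such exotic barrier follows: an expansion option that can be exercised only while the asset value exceeds a threshold. All inputs (asset value, expansion factor, cost, and threshold) are hypothetical, chosen only to show the gating logic.

import math

# Hypothetical inputs: expand by a factor of 1.3 at a cost of 20, but only
# while the asset value exceeds the 150 threshold.
S, FACTOR, COST, THRESH = 100.0, 1.3, 20.0, 150.0
T, VOL, RF, N = 5.0, 0.25, 0.05, 100

dt = T / N
u, d = math.exp(VOL * math.sqrt(dt)), math.exp(-VOL * math.sqrt(dt))
p, disc = (math.exp(RF * dt) - d) / (u - d), math.exp(-RF * dt)

expand = lambda a: FACTOR * a - COST  # value if the expansion is executed
v = [max(a, expand(a)) if a > THRESH else a
     for a in (S * u**j * d**(N - j) for j in range(N + 1))]
for i in range(N - 1, -1, -1):
    for j in range(i + 1):
        a = S * u**j * d**(i - j)
        option_open = disc * (p * v[j + 1] + (1 - p) * v[j])
        # The expansion can be exercised only when the exotic barrier is breached.
        v[j] = max(expand(a), option_open) if a > THRESH else option_open
print(f"Strategic value with threshold-gated expansion: {v[0]:.2f}")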
EXERCISE: BARRIER OPTIONS

Barrier options are combinations of call and put options such that they become in the money or out of the money when the asset value breaches an artificial barrier. Standard single upper barrier options can be call up and in, call up and out, put up and in, and put up and out. Standard single lower barrier options can be call down and in, call down and out, put down and in, and put down and out. Double Barrier options are combinations of standard single upper and lower barriers. Assume that the PV Asset is $200, maturity is 5 years, the risk-free rate is 5%, there are no dividends, and volatility is 25%.
1. For each of the barrier option types (upper, lower, and double barrier options), determine:
   a. What are some examples of nonbinding inputs? That is, show the combinations of cost and barrier levels that make the options nonbinding.
   b. How do you know the option is nonbinding?
   c. Show some binding inputs and explain the differences between binding and nonbinding results.
2. Run the Double Barrier option, change each input parameter, and explain the effects on the Up and In and Down and In Call option, the Up and In and Down and In Put option, the Up and Out and Down and Out Call option, and the Up and Out and Down and Out Put option. Explain your observations when the barrier levels change or when volatility increases.
   a. Replicate the analysis using a standard lower barrier option.
   b. Replicate the analysis using a standard upper barrier option.
3. Suppose a chemical manufacturer executes a sales contract with a customer and agrees to charge $8/pound for a particular reagent if the customer agrees to purchase 1 million pounds per year for the next 5 years. The market price of the reagent is currently $10/pound, so the customer receives a $2 million discount per year. At the same time, if the price is always set at $8/pound, the manufacturer might lose out on being able to sell at higher market prices. Therefore, an agreement is struck whereby the contract holds as long as the market price is below $11/pound; if the prevailing price exceeds this threshold, the product is sold at the market price. Assume a 25% annualized volatility and a 5% risk-free rate over the next 5 years.
   a. How much is such a price protection contract worth to the customer if this upper barrier does not exist?
   b. How much is such a price protection contract worth to the customer if this upper barrier is implemented?
   c. What happens when the upper threshold is increased?
   d. What happens when the price volatility of this compound is lower than 25%? Are the price differentials between the option with and without the barrier constant for various volatility levels? In other words, is the barrier option a linear function of volatility?
183. Exotic Options—Basic American, European, and Bermudan Call Options

File Name: Basic American, European, versus Bermudan Call Options
Location: Modeling Toolkit | Real Options
Brief Description: Comparing and valuing the three different types of options—American (exercisable at any time), Bermudan (exercisable at certain times), and European (exercisable only at expiration)—with and without dividends
Requirements: Modeling Toolkit, Real Options SLS
Figure 183.1 shows the computation of basic American, European, and Bermudan options without dividends, while Figure 183.2 shows the computation of the same options but with a dividend yield.

FIGURE 183.1 Simple American, Bermudan, and European options without dividends

FIGURE 183.2 Simple American, Bermudan, and European options with dividends

Of course, European options can be executed only at termination and not before; in American options, early exercise is allowed at any time; and in Bermudan options, early exercise is allowed except during blackout or vesting periods. Notice that the results for the three options without dividends are identical for simple call options, but they differ when dividends exist. When dividends are included, the American call value is greater than or equal to the Bermudan value, which in turn is greater than or equal to the European value in most basic cases, as seen in Figure 183.2. Of course, these generalities apply only to plain vanilla call options and do not necessarily extend to other exotic options (e.g., Bermudan options with vesting and suboptimal exercise behavior multiples sometimes carry a higher value when blackouts and vesting occur than regular American options with the same suboptimal exercise parameters).
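This ordering under dividends can be verified with a small binomial lattice. The Python sketch below uses illustrative inputs that match the exercise appearing after the put option chapter (asset and strike of $100, five-year maturity, 25% volatility, 5% risk-free rate, a 3% dividend yield, and a four-year vesting period, i.e., blackout steps 0-80 on a 100-step lattice); it is a minimal illustration, not a replacement for the SLS module.

import math

S, K, T, VOL, RF, DIV, N = 100.0, 100.0, 5.0, 0.25, 0.05, 0.03, 100
BLACKOUT = range(0, 81)  # Bermudan vesting: steps 0-80 of the 100-step lattice

dt = T / N
u, d = math.exp(VOL * math.sqrt(dt)), math.exp(-VOL * math.sqrt(dt))
p = (math.exp((RF - DIV) * dt) - d) / (u - d)  # risk-neutral probability with dividends
disc = math.exp(-RF * dt)

def call_value(style):
    v = [max(S * u**j * d**(N - j) - K, 0.0) for j in range(N + 1)]
    for i in range(N - 1, -1, -1):
        for j in range(i + 1):
            option_open = disc * (p * v[j + 1] + (1 - p) * v[j])
            asset = S * u**j * d**(i - j)
            exercisable = style == "American" or (style == "Bermudan" and i not in BLACKOUT)
            v[j] = max(asset - K, option_open) if exercisable else option_open
    return v[0]

for style in ("American", "Bermudan", "European"):
    print(f"{style:8s} call: {call_value(style):.2f}")

With DIV set to zero, all three values coincide, consistent with the no-dividend result in Figure 183.1.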
184. Exotic Options—Basic American, European, and Bermudan Put Options

File Name: Basic American, European, versus Bermudan Put Options
Location: Modeling Toolkit | Real Options
Brief Description: Comparing and valuing the three different types of options—American (exercisable at any time), Bermudan (exercisable at certain times), and European (exercisable only at expiration)—with and without dividends
Requirements: Modeling Toolkit, Real Options SLS
The American and European Put options without dividends are calculated using the SLS in Figure 184.1.

FIGURE 184.1 American and European Put option using SLS

The sample results of this calculation indicate the strategic
value of the project's net present value (NPV) and provide an option to sell the project within the specified maturity in years. There is a chance that the project value can significantly exceed the single-point estimate of the PV Asset Value (measured by the present value of all uncertain future cash flows discounted at the risk-adjusted rate of return) or fall significantly below it. Hence the option to defer and wait until some of the uncertainty is resolved through the passage of time is worth more than executing immediately. The value of being able to wait before executing the option and selling the project at the Implementation Cost in present values is the value of the option. The NPV of executing immediately is simply the Implementation Cost less the Asset Value, or $0. The option value of being able to wait and defer the sale of the asset, selling only if conditions turn bad, is the difference between the calculated total strategic value and the NPV: $24.42 for the American option and $20.68 for the European option.

The American Put option is worth more than the European Put option even when no dividends exist, contrary to the call options seen previously. For simple call options, when no dividends exist, it is never optimal to exercise early. However, it may sometimes be optimal to exercise put options early, regardless of whether dividend yields exist. In fact, a dividend yield will decrease the value of a call option but increase the value of a put option, because when dividends are paid out, the value of the asset decreases. Thus, the call option will be worth less and the put option will be worth more. The higher the dividend yield, the earlier the call option should be exercised and the later the put option should be exercised. The put option can be solved by setting the Terminal Node Equation to Max(Cost-Asset,0), as seen in Figure 184.1 (example file used: Plain Vanilla Put Option).

Puts have a similar result as calls in that when dividends are included, the basic put option values for American are greater than or equal to Bermudan, which in turn are greater than or equal to European options in most basic cases. You can confirm this by simply setting the dividend rate at 3% and the blackout steps at 0–80 and rerunning the SLS module. Recall that a higher dividend means a higher put option value but a lower abandonment option value (see the next chapter on abandonment real options). Thus, a put option is not exactly identical to an abandonment option. A high dividend means the abandonment option's cost of waiting and holding on to the option is high (e.g., not selling a piece of land means having to pay the taxes, insurance, and maintenance on it, reducing the value of waiting before abandoning). However, for a put option, the asset price or value in the market decreases because of this dividend, making the put option (the ability to sell the asset at a predetermined contractual price) worth more. Dividends in the abandonment option affect the holding cost of the option, but dividends in the put option affect the underlying asset value. Therefore, care has to be taken to choose the relevant option model.
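The same lattice scaffolding handles the put by swapping in the Max(Cost-Asset,0) payoff. The sketch below uses assumed inputs (asset and cost of $100, five years, 25% volatility, 5% risk-free rate); the $24.42 and $20.68 values quoted above come from the example file's own parameters, which are not restated here, so the printed numbers will differ. It does, however, confirm that the American put exceeds the European put even without dividends.

import math

# Assumed inputs for illustration; not the example file's parameters.
S, COST, T, VOL, RF, N = 100.0, 100.0, 5.0, 0.25, 0.05, 100
dt = T / N
u, d = math.exp(VOL * math.sqrt(dt)), math.exp(-VOL * math.sqrt(dt))
p, disc = (math.exp(RF * dt) - d) / (u - d), math.exp(-RF * dt)

def put_value(american):
    # Terminal Node Equation: Max(Cost-Asset, 0)
    v = [max(COST - S * u**j * d**(N - j), 0.0) for j in range(N + 1)]
    for i in range(N - 1, -1, -1):
        for j in range(i + 1):
            option_open = disc * (p * v[j + 1] + (1 - p) * v[j])
            asset = S * u**j * d**(i - j)
            # American: Max(Cost-Asset, OptionOpen); European: OptionOpen only
            v[j] = max(COST - asset, option_open) if american else option_open
    return v[0]

print(f"American put: {put_value(True):.2f}")
print(f"European put: {put_value(False):.2f}")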
EXERCISE: AMERICAN, BERMUDAN, AND EUROPEAN OPTIONS

These exercises allow us to compare the results of a European option with American and Bermudan options. In addition, the Black-Scholes model is used to compare the results. The Black-Scholes equation is applicable for analyzing European-type options—that is, options that can be executed only at maturity and not before.
The original Black-Scholes model cannot solve an option problem when there are dividend payments. However, an extension of the Black-Scholes model, termed the Generalized Black-Scholes model, can accommodate a continuous dividend payout for a European option. Perform the exercises that follow. Assume that a call option's asset value and strike price are both $100, subject to 25% volatility. The maturity of this option is 5 years, and the corresponding risk-free rate for a similar maturity is 5%. Finally, for the Bermudan option, assume a 4-year vesting period.

1. Using the Single Asset SLS software, calculate the American, European, and Bermudan call options using a 100-step lattice.
2. Compare your results using 5, 10, 50, 100, 300, 500, and 1,000 steps in the SLS software. Explain what happens as the number of steps gets higher.
3. Now assume that a continuous dividend payout yielding 3% exists. What happens to the value of the option?
4. Show that the value of an American call option is identical to the European call option when no dividends are paid. That is, it is never optimal to execute an American call option early when no dividend payouts exist. Now consider the Bermudan option. What generalities can you come up with?
5. Repeat question 4 for the put option. Here you will see a very different set of results. What generalities can you come up with?
6. Show that when a 3% dividend yield exists, the value of the American call option exceeds the value of the European option. Why is this so? How does the Bermudan option compare? What happens when the blackout steps are significant in the Bermudan option (e.g., increase the vesting period from 4 to 4.5 years and then to 5 years)? What happens when the blackout vesting period is identical to the maturity of the option? How do you model this in the software?
7. Repeat question 6 for the put option. Here you will see a very different set of results. What generalities can you come up with?
185. Real Options—American, European, Bermudan, and Customized Abandonment Options

File Names: Real Options—Abandonment American Option, Real Options—Abandonment Bermudan Option, Real Options—Abandonment Customized Option, Real Options—Abandonment European Option
Location: Modeling Toolkit | Real Options Models
Brief Description: Computing the abandonment option in American, Bermudan, and European flavors, as well as with customized inputs and parameters that mirror real-life situations, using closed-form models and customized binomial lattices
Requirements: Modeling Toolkit, Real Options SLS
The abandonment option looks at the value of a project's or asset's flexibility in being abandoned over the life of the option. As an example, suppose that a firm owns a project or asset and that, based on traditional discounted cash flow (DCF) models, it estimates the present value of the asset (PV Underlying Asset) to be $120M. For the abandonment option, this is the net present value (NPV) of the project or asset. Monte Carlo simulation using the Risk Simulator software indicates that the Volatility of this asset value is significant, estimated at 25%. Under these conditions, there is a lot of uncertainty as to the success or failure of this project. The calculated volatility captures the different sources of uncertainty and risk in the DCF model, including price uncertainty, probability of success, competition, cannibalization, and so forth. Thus, the value of the project might be significantly higher or lower than the expected value of $120M.

Suppose an abandonment option is created whereby a counterparty is found and a contract is signed that lasts five years (Maturity), such that for some monetary consideration now, the firm has the ability to sell the asset or project to the counterparty at any time within these five years (indicative of an American option) for a specified Salvage of $90M. The counterparty agrees to this $30M discount and signs the contract.

What has just occurred is that the firm bought itself a $90M insurance policy. That is, if the asset or project value increases above its current value, the firm may decide to continue funding the project or sell it off in the market at the prevailing fair market value. Alternatively, if the value of the asset or project falls below the $90M threshold, the firm has the right to execute the option and sell the asset to the counterparty at $90M. In other words, a safety net of sorts has been erected to prevent the value of the asset from falling below this salvage level. How much is this safety net or insurance policy worth? You can create a competitive advantage in negotiation if
the counterparty does not have the answer and you do. Further assume that the five-year Treasury note risk-free rate (zero coupon) is 5%, from the U.S. Department of the Treasury (www.treas.gov/offices/domestic-finance/debt-management/interest-rate/yield-hist.html).

FIGURE 185.1 American Abandonment Option

The American Abandonment Option results in Figure 185.1 show a value of $125.48M, indicating that the option value is $5.48M, as the present value of the asset is $120M. Hence, the maximum value you should be willing to pay for the contract on average is $5.48M. This expected value weights the continuous probabilities that the asset value exceeds $90M against those where it does not (where the abandonment option is valuable), as well as the optimal timing of executing the abandonment, such that the expected value is $5.48M.

In addition, some experimentation can be conducted. Changing the salvage value to $30M (a $90M discount from the starting asset value) yields a result of $120M, or $0M for the option. This result means that the option or contract is worthless, because the safety net is set so low that it will never be utilized. Conversely, setting the salvage level to three times the prevailing asset value, or $360M,
yields a valuation result of $360M, which means that there is no option value: there is no value in waiting and having this option; simply execute the option immediately and sell the asset if someone is willing to pay three times the value of the project right now. Thus, you can keep changing the salvage value until the option value disappears, indicating that the optimal trigger value has been reached. For instance, if you enter $166.80 as the salvage value, the abandonment option analysis yields a result of $166.80, indicating that at this price and above, the optimal decision is to sell the asset immediately, given the assumed volatility and the other input parameters. At any lower salvage value there is option value; at any higher salvage value there is none. This break-even salvage point is the optimal trigger value. Once the market price of this asset exceeds this value, it is optimal to abandon.

Finally, adding a Dividend Rate—the cost of waiting before abandoning the asset (e.g., the annualized taxes and maintenance fees that have to be paid if you keep the asset rather than selling it, measured as a percentage of the present value of the asset)—will decrease the option value. Hence, the break-even trigger point, where the option becomes worthless, can also be found by successively choosing higher dividend levels. This break-even point again illustrates the trigger value at which the option should optimally be executed immediately, but this time with respect to a dividend yield. That is, if the cost of carry or holding on to the option—the option's leakage value—is too high, do not wait; execute the option immediately.

Other applications of the abandonment option include buy-back lease provisions in a contract (guaranteeing a specified asset value), asset preservation flexibility, insurance policies, walking away from a project and selling off its intellectual property, and the purchase price of an acquisition. To illustrate, here are some additional examples of the abandonment option and sample exercises.
An aircraft manufacturer sells its planes of a particular model in the primary market for, say, $30M each to various airline companies. Airlines are usually risk-averse and may find it hard to justify buying an additional plane with all the uncertainties in the economy, demand, price competition, and fuel costs. As uncertainties become resolved over time through actions and events, airline carriers may have to reallocate and reroute their existing portfolio of planes globally, and an excess plane on the tarmac is very costly. The airline can sell the excess plane in the secondary market, where smaller regional carriers buy used planes, but the price there is subject to significant volatility, of, say, 45%, and may fluctuate wildly between $10M and $25M for this class of aircraft. The aircraft manufacturer can reduce the airline's risk by providing a buy-back provision or abandonment option, whereby at any time within the next five years, the manufacturer agrees to buy back the plane at a guaranteed residual salvage price of $20M, at the request of the airline. The corresponding risk-free rate for the next five years is 5%. This reduces the airline's downside risk, chopping off the left tail of the price fluctuation distribution and shifting the expected value to the right. This abandonment option provides risk reduction and value enhancement to the airline. Applying the abandonment option in SLS using a 100-step binomial lattice, we find that this option is worth $3.52M. If the airline is the smarter counterparty, calculates this value, and
gets this buy-back provision for free as part of the deal, the aircraft manufacturer has just left over 10% of its aircraft value on the negotiation table. Information and knowledge are highly valuable in this case.

A high-tech disk drive manufacturer is thinking of acquiring a small start-up firm with a new microdrive technology (a super-fast and high-capacity pocket hard drive) that may revolutionize the industry. The start-up is for sale, and its asking price is $50M, based on an NPV fair market value analysis performed by third-party valuation consultants. The manufacturer can either develop the technology itself or acquire it through the purchase of the firm. The question is: How much is this firm worth to the manufacturer, and is $50M a good price? Based on internal analysis by the manufacturer, the NPV of the microdrive is expected to be $45M, with a cash flow volatility of 40%, and it would take another three years before the microdrive technology is successful and goes to market. Assume that the three-year risk-free rate is 5%. In addition, it would cost the manufacturer $45M in present value to develop the drive internally. Under an NPV analysis alone, the manufacturer should build the drive itself. However, the start-up also has an abundance of intellectual property (patents and proprietary technologies) as well as physical assets (buildings and manufacturing facilities) that could be sold in the market for up to $40M if this specific microdrive does not work. Including this abandonment option, the analysis together with the NPV yields $51.83M, making the purchase of the start-up worth more than developing the technology internally and justifying the $50M purchase price. See the chapter on the expansion option for more examples of how this start-up's technology can be used as a platform to develop newer technologies that can be worth far more than the abandonment option alone.
REAL OPTIONS—ABANDONMENT AMERICAN OPTION

Figure 185.1 shows the results of a simple abandonment option with a 10-step lattice, as discussed, while Figure 185.2 shows the audit sheet that is generated from this analysis. Figure 185.3 shows the same abandonment option but with a 100-step lattice. To follow along, open the example file by clicking on Start | Programs | Real Options Valuation | Real Options SLS | Real Options SLS | Single Asset Option Model and then click on File | Examples | Abandonment American Option. Notice that the 10-step lattice yields $125.48 while the 100-step lattice yields $125.45, indicating that the lattice results have converged. The Terminal Node Equation is Max(Asset,Salvage), which means the decision at maturity is either to execute the option, selling the asset and receiving the salvage value, or not to execute, holding on to the asset. The Intermediate Node Equation is Max(Salvage,OptionOpen), indicating that before maturity, the decision is either to execute early in this American option—abandoning and receiving the salvage value—or to hold on to the asset and keep the option open for potential future execution, denoted simply as OptionOpen.
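These node equations translate almost line for line into a small binomial lattice. The Python sketch below uses the inputs given above (PV Asset $120M, Salvage $90M, five years, 25% volatility, 5% risk-free rate) and should land near the $125.45 reported for the 100-step lattice; any small discrepancy would come from implementation details of the SLS lattice.

import math

S, SALVAGE, T, VOL, RF, N = 120.0, 90.0, 5.0, 0.25, 0.05, 100
dt = T / N
u, d = math.exp(VOL * math.sqrt(dt)), math.exp(-VOL * math.sqrt(dt))
p, disc = (math.exp(RF * dt) - d) / (u - d), math.exp(-RF * dt)

# Terminal Node Equation: Max(Asset, Salvage)
v = [max(S * u**j * d**(N - j), SALVAGE) for j in range(N + 1)]
for i in range(N - 1, -1, -1):
    for j in range(i + 1):
        option_open = disc * (p * v[j + 1] + (1 - p) * v[j])
        # Intermediate Node Equation: Max(Salvage, OptionOpen)
        v[j] = max(SALVAGE, option_open)

print(f"American abandonment option: {v[0]:.4f}")   # total strategic value
print(f"Option value above the $120M asset: {v[0] - S:.4f}")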
FIGURE 185.2 Audit sheet for the abandonment option
REAL OPTIONS—ABANDONMENT EUROPEAN OPTION

Figure 185.4 shows the European version of the abandonment option, where the Intermediate Node Equation is simply OptionOpen, as early execution is prohibited before maturity. Of course, being able to execute the option only at maturity is worth less ($124.5054 compared to $125.4582) than being able to exercise earlier. The example files used are Abandonment American Option and Abandonment European Option. For example, the aircraft manufacturer in the previous case can agree
to a buy-back provision that can be exercised at any time by the airline customer versus only at a specific date at the end of five years—the former American option will clearly be worth more than the latter European option.

FIGURE 185.3 American, European, and Bermudan abandonment option with 100-step lattice

FIGURE 185.4 European abandonment option with 100-step lattice
REAL OPTIONS—ABANDONMENT BERMUDAN OPTION

Sometimes a Bermudan option is appropriate, as when there is a vesting or blackout period during which the option cannot be executed. For instance, the contract may stipulate that, for the five-year abandonment buy-back contract, the airline customer cannot execute the abandonment option within the first 2.5 years. This is shown in Figure 185.5 using a Bermudan option with a 100-step lattice on five years, where the blackout steps are 0–50. This means that during the first 50 steps (as well as right now, or step 0), the option cannot be executed. This is modeled by inserting OptionOpen into the Intermediate Node Equation During Blackout and
Vesting Periods. This forces the option holder to keep the option open during the vesting period, preventing execution during the blackout. Figure 185.5 shows that the American option is worth more than the Bermudan option, which is worth more than the European option, by virtue of each option type's ability to execute early and the frequency of execution possibilities.

FIGURE 185.5 Bermudan abandonment option with 100-step lattice
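The Bermudan variant needs only one extra condition in the backward induction: during the blackout steps, the node value is forced to OptionOpen. A minimal sketch, reusing the abandonment inputs from above:

import math

S, SALVAGE, T, VOL, RF, N = 120.0, 90.0, 5.0, 0.25, 0.05, 100
BLACKOUT_END = 50  # steps 0-50 cannot be executed (the 2.5-year vesting period)
dt = T / N
u, d = math.exp(VOL * math.sqrt(dt)), math.exp(-VOL * math.sqrt(dt))
p, disc = (math.exp(RF * dt) - d) / (u - d), math.exp(-RF * dt)

v = [max(S * u**j * d**(N - j), SALVAGE) for j in range(N + 1)]
for i in range(N - 1, -1, -1):
    for j in range(i + 1):
        option_open = disc * (p * v[j + 1] + (1 - p) * v[j])
        # OptionOpen during the blackout; Max(Salvage, OptionOpen) otherwise
        v[j] = option_open if i <= BLACKOUT_END else max(SALVAGE, option_open)
print(f"Bermudan abandonment option: {v[0]:.4f}")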
REAL OPTIONS—ABANDONMENT CUSTOMIZED OPTION

Sometimes the salvage value of the abandonment option may change over time. To illustrate, in the previous example of the acquisition of a start-up firm, the value of the intellectual property will most probably increase over time because of continued research and development (R&D) activities, thereby changing the salvage values over time. An example is seen in Figure 185.6, where there are five salvage values over the five-year abandonment option. This can be modeled using custom variables. Type in the input variables one at a time, as seen in Figure 185.6's Custom Variables list. Notice that the same variable name (Salvage) is used, but the values change over
time, and the starting steps represent when these different values become effective. For instance, the salvage value of $90 applies from step 0 until the next salvage value of $95 takes over at step 21. This means that for a five-year option with a 100-step lattice, the first year, including the current period (steps 0 to 20), has a salvage value of $90, which then increases to $95 in the second year (steps 21 to 40), and so forth. Notice that as the value of the firm's intellectual property increases over time, the option valuation result also increases, which makes logical sense. You can also model in a blackout vesting period for the first six months (steps 0–10 in the blackout area). Such a blackout is very typical of contractual abandonment options, where the option cannot be executed during specified periods (a cooling-off period).

FIGURE 185.6 Customized abandonment option
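Time-varying salvage values amount to a step-function lookup inside the same loop. In the sketch below, the $90 and $95 values and their starting steps come from the text; the remaining three values are assumptions (kept rising in $5 increments), since the text only says the schedule continues, and the six-month blackout (steps 0-10) is included.

import math

S, T, VOL, RF, N = 120.0, 5.0, 0.25, 0.05, 100
# (starting step, salvage value): $90 and $95 are from the text; the later
# values are assumptions for illustration.
SCHEDULE = [(0, 90.0), (21, 95.0), (41, 100.0), (61, 105.0), (81, 110.0)]
BLACKOUT_END = 10  # six-month vesting: steps 0-10

def salvage_at(step):
    return [val for start, val in SCHEDULE if start <= step][-1]

dt = T / N
u, d = math.exp(VOL * math.sqrt(dt)), math.exp(-VOL * math.sqrt(dt))
p, disc = (math.exp(RF * dt) - d) / (u - d), math.exp(-RF * dt)

v = [max(S * u**j * d**(N - j), salvage_at(N)) for j in range(N + 1)]
for i in range(N - 1, -1, -1):
    for j in range(i + 1):
        option_open = disc * (p * v[j + 1] + (1 - p) * v[j])
        v[j] = option_open if i <= BLACKOUT_END else max(salvage_at(i), option_open)
print(f"Customized abandonment option: {v[0]:.4f}")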
EXERCISE: OPTION TO ABANDON

Suppose a pharmaceutical company is developing a particular drug. However, due to the uncertain nature of the drug's development progress, market demand, success
in human and animal testing, and Food and Drug Administration approval, management has decided that it will create a strategic abandonment option. That is, at any period within the next five years of development, management can review the progress of the R&D effort and decide whether to terminate the drug development program. After five years, the firm would have either succeeded or completely failed in its drug development initiative, and there exists no option value after that time period. If the program is terminated, the firm can potentially sell off its intellectual property rights to the drug in question to another pharmaceutical firm with which it has a contractual agreement. This contract with the other firm is exercisable at any time within this time period, at the whim of the firm owning the patents. Using a traditional DCF model, you find the present value of the expected future cash flows discounted at an appropriate market risk-adjusted discount rate to be $150 million. Using Monte Carlo simulation, you find the implied volatility of the logarithmic returns on future cash flows to be 30%. The risk-free rate on a riskless asset for the same time frame is 5%, and you understand from the intellectual property officer of the firm that the value of the drug’s patent is $100 million contractually, if sold within the next five years. For simplicity, you assume that this
$100 million salvage value is fixed for the next five years. You attempt to calculate how much this abandonment option is worth and how much this drug development effort on the whole is worth to the firm. By virtue of having this safety net of being able to abandon drug development, the value of the project is worth more than its net present value. You decide to use a closed-form approximation of an American put option, because the option to abandon drug development can be exercised at any time up to and including the expiration date. You also decide to confirm the value of the closed-form analysis with a binomial lattice calculation. With these assumptions, do the next exercises, answering the questions posed:

1. Solve the abandonment option problem manually using a 10-step lattice and confirm the results by generating an audit sheet using the SLS software.
2. Choose the correct answer for these questions.
   a. Increases in maturity (increase/decrease) an abandonment option value.
   b. Increases in volatility (increase/decrease) an abandonment option value.
   c. Increases in asset value (increase/decrease) an abandonment option value.
   d. Increases in risk-free rate (increase/decrease) an abandonment option value.
   e. Increases in dividend (increase/decrease) an abandonment option value.
   f. Increases in salvage value (increase/decrease) an abandonment option value.
3. Apply 100 steps using the software's binomial lattice.
   a. How different are the results compared to the 10-step lattice?
   b. How close are the closed-form results compared to the 100-step lattice?
4. Apply a 3% continuous dividend yield to the 100-step lattice.
   a. What happens to the results?
   b. Does a dividend yield increase or decrease the value of an abandonment option? Why?
5. Assume that the salvage value increases at a 10% annual rate. Show how this can be modeled using the software's Custom Variables list.
6. Explain the differences in results when using the Black-Scholes and American put option approximations in the benchmark section of the Single Asset SLS software.
186. Real Options—American, European, Bermudan, and Customized Contraction Options

File Names: Real Options—Contraction American and European Option, Real Options—Contraction Bermudan Option, Real Options—Contraction Customized Option
Location: Modeling Toolkit | Real Options Models
Brief Description: Computing the contraction option (American, Bermudan, and European) with customizable and changing input parameters, using closed-form models and custom binomial lattices
Requirements: Modeling Toolkit, Real Options SLS
A contraction option evaluates the flexibility value of being able to reduce production output or to contract the scale and scope of a project when conditions are not as amenable, thereby reducing the value of the asset or project by a contraction factor, but at the same time creating some cost savings. As an example, suppose you work for a large aeronautical manufacturing firm that is unsure of the technological efficacy and market demand for its new fleet of long-range supersonic jets. The firm decides to hedge itself through the use of strategic options, specifically an option to contract 10% of its manufacturing facilities at any time within the next five years (i.e., the contraction factor is 0.9). Suppose that the firm has a current operating structure whose static valuation of future profitability using a discounted cash flow (DCF) model (in other words, the present value of the expected future cash flows discounted at an appropriate market risk-adjusted discount rate) is found to be $1,000M (PV Asset). Using Monte Carlo simulation, you calculate the implied volatility of the logarithmic returns of the asset value of the projected future cash flows to be 30%. The risk-free rate on a riskless asset (five-year U.S. Treasury Note with zero coupons) is found to be yielding 5%. Further, suppose the firm has the option to contract 10% of its current operations at any time over the next five years, thereby creating an additional $50 million in savings after this contraction. These terms are arranged through a legal contractual agreement with one of the firm’s vendors, which had agreed to take up the firm’s excess capacity and space. At the same time, the firm can scale back and lay off part of its existing workforce to obtain this level of savings (in present values). The results indicate that the strategic value of the project is $1,001.71M (using a 10-step lattice as seen in Figure 186.1), which means that the net present value (NPV) currently is $1,000M, and the additional $1.71M comes from this contraction option. This result is obtained because contracting now yields 90% of $1,000M +
$50M, or $950M, which is less than staying in business and obtaining $1,000M. Therefore, the optimal decision is not to contract immediately but to keep the ability to do so open for the future. Hence, comparing the $1,000M value of this optimal decision to the $1,001.71M value of being able to contract, the option to contract is worth $1.71M. This is the maximum amount the firm should be willing to spend to obtain this option (contractual fees and payments to the vendor counterparty).

In contrast, if Savings were $200M instead, the strategic project value would become $1,100M: starting at $1,000M and contracting 10% to $900M while keeping the $200M in savings yields $1,100M in total value. The additional option value is $0M, which means it is optimal to execute the contraction option immediately, as there is no value in waiting to contract. Instead of asking the vendor to wait, the firm is better off executing the contraction option now and capturing the savings.

Other applications include shelving a research and development project by spending a little to keep it going while reserving the right to come back to it should conditions improve; the value of synergy in a merger and acquisition where some management personnel are let go to create additional savings; reducing the scope and size of a production facility; reducing production rates; a joint venture or alliance; and so forth. To illustrate, here are some additional examples of and exercises for the contraction option.
A large oil and gas company is embarking on a deep-sea drilling platform that will cost the company billions to implement. A DCF analysis is run and the NPV is found to be $500M over the next 10 years of economic life of the offshore rig. The 10-year risk-free rate is 5%, and the volatility of the project is found to be at an annualized 45% using historical oil prices as a proxy. If the expedition is highly successful (oil prices are high and production rates are soaring), then the company will continue its operations. However, if things are not looking too good (oil prices are low or moderate and production is only decent), it is very difficult for the company to abandon operations (why lose everything when net income is still positive although not as high as anticipated, not to mention the environmental and legal ramifications of simply abandoning an oil rig in the middle of the ocean). Hence, the oil company decides to hedge its downside risk through an American Contraction Option. The oil company was able to find a smaller oil and gas company (a former partner on other explorations) interested in a joint venture. The joint venture is structured such that the oil company pays this smaller counterparty a lump sum right now for a 10-year contract whereby at any time and at the oil company’s request, the smaller counterparty will have to take over all operations of the offshore oil rig (i.e., taking over all operations and hence all relevant expenses) and keep 30% of the net revenues generated. The counterparty is in agreement because it does not have to spend the billions of dollars required to implement the rig in the first place, and it actually obtains some cash up front for this contract to assume the downside risk. The oil company is also in agreement because it reduces its own risks if oil
prices are low and production is not up to par, and it ends up saving over $75M in present value of total overhead expenses, which it can reallocate and invest elsewhere. In this example, the contraction option using a 100-step lattice is valued at $14.24M using SLS. This means that the maximum amount the counterparty should be paid must not exceed this amount. Of course, the option analysis can be further refined by analyzing the actual savings on a present value basis. For instance, if the option is exercised within the first five years, the savings are $75M, but if exercised during the last five years, the savings are only $50M. The revised option value is then $10.57M.

A manufacturing firm is interested in outsourcing its manufacturing of children's toys to a small province in China. By doing so, it will generate overhead savings of over $20M in present value over the economic life of the toys. However, outsourcing internationally will mean lower quality control, delayed shipping, added importing costs, and the added risks of unfamiliarity with local business practices. In addition, the firm will consider outsourcing only if the quality of the workmanship meets its stringent standards. The NPV of this particular line of toys is $100M with a 25% volatility. The firm's executives decide to purchase a contraction option by locating a small manufacturing firm in China and spending some resources to try out a small-scale proof of concept (thereby reducing the uncertainties of quality, knowledge, import-export issues, health-related issues like checking for lead in the toys, etc.). If successful, the firm will agree to give this small Chinese manufacturer 20% of its net income as remuneration for its services, plus some start-up fees. The question is: How much is this option to contract worth; that is, how much should the firm be willing to pay, on average, to cover the initial start-up fees plus the costs of this proof-of-concept stage? A contraction option valuation using SLS shows that the option is worth $1.59M, assuming a 5% risk-free rate for the one-year test period. As long as the total costs for the pilot test are less than $1.59M, it is optimal to obtain this option, especially if it means potentially saving more than $20M.
REAL OPTIONS—CONTRACTION AMERICAN AND EUROPEAN OPTION

Figure 186.1 illustrates a simple 10-step Contraction Option, while Figure 186.2 shows the same option using 100 lattice steps (example file used: Contraction American and European Option).

FIGURE 186.1 American and European options to contract with a 10-step lattice

FIGURE 186.2 American and European options to contract with a 100-step lattice
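The contraction lattice follows the same pattern as the abandonment lattice, with the exercise payoff being the contracted asset plus the savings. The sketch below uses the inputs from the aeronautical example (PV Asset $1,000M, contraction factor 0.9, savings $50M, five years, 30% volatility, 5% risk-free rate) on a 10-step lattice and should reproduce the $1,001.71M strategic value reported above, up to small implementation differences.

import math

S, CF, SAVINGS, T, VOL, RF, N = 1000.0, 0.9, 50.0, 5.0, 0.30, 0.05, 10
dt = T / N
u, d = math.exp(VOL * math.sqrt(dt)), math.exp(-VOL * math.sqrt(dt))
p, disc = (math.exp(RF * dt) - d) / (u - d), math.exp(-RF * dt)

contract = lambda a: CF * a + SAVINGS  # value if the firm contracts
# Terminal: Max(Asset, Contraction*Asset + Savings)
v = [max(a, contract(a)) for a in (S * u**j * d**(N - j) for j in range(N + 1))]
for i in range(N - 1, -1, -1):
    for j in range(i + 1):
        option_open = disc * (p * v[j + 1] + (1 - p) * v[j])
        # Intermediate: Max(Contraction*Asset + Savings, OptionOpen)
        v[j] = max(contract(S * u**j * d**(i - j)), option_open)

print(f"Strategic project value:  {v[0]:.2f}")
print(f"Contraction option value: {v[0] - S:.2f}")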
REAL OPTIONS—CONTRACTION BERMUDAN OPTION

Figure 186.3 illustrates a five-year Bermudan Contraction Option with a four-year vesting period (blackout steps of 0 to 80 on a five-year, 100-step lattice), where for the first four years the option holder can only keep the option open and cannot execute it (example file used: Contraction Bermudan Option).

FIGURE 186.3 Bermudan option to contract with blackout vesting periods
REAL OPTIONS—CONTRACTION CUSTOMIZED OPTION

Figure 186.4 shows a customized option in which there is a blackout period and the savings from contracting change over time (example file used: Contraction Customized Option). These results are for the aeronautical manufacturing example.

FIGURE 186.4 Customized option to contract with changing savings levels
EXERCISE: OPTION TO CONTRACT

You work for a large automobile spare parts manufacturing firm that is unsure of the technological efficacy and market demand of its products. The firm decides to hedge itself through the use of strategic options, specifically an option to contract 50% of its manufacturing facilities at any time within the next five years. Suppose the firm has a current operating structure whose static valuation of future profitability using a DCF model (in other words, the present value of the expected future cash flows discounted at an appropriate market risk-adjusted
discount rate) is found to be $1 billion. Using Monte Carlo simulation, you calculate the implied volatility of the logarithmic returns on the projected future cash flows to be 50%. The risk-free rate on a riskless asset for the next five years is found to be yielding 5%. Suppose the firm has the option to contract 50% of its current operations at any time over the next five years, thereby creating an additional $400 million in savings after this contraction. This is done through a legal contractual agreement with one of its vendors, which has agreed to take up the firm's excess capacity and space, and the firm can scale back its existing workforce to obtain this level of savings. A closed-form American approximation model can be used, because the option to contract the firm's operations can be exercised at any time up to the expiration date; the result can be confirmed with a binomial lattice calculation. Do the next exercises, answering the questions that are posed.

1. Solve the contraction option problem manually using a 10-step lattice and confirm the results by generating an audit sheet using the software.
2. Modify the continuous dividend payout rate until the option breaks even. What observations can you make at this break-even point?
3. Use the Closed-Form American Approximation Model in the benchmark area of the software by using the corresponding put option. To do this appropriately, you will need to rerun the model with modified input parameters. What are these required input parameters?
4. How can you use the American Abandonment Option as a benchmark to estimate the contraction option? If it is used, are the resulting option values comparable?
5. Change the contraction factor to 0.7 and answer question 4 again. Why are the answers different? Suppose the initial estimate of $400 million in savings is applicable only if the contraction option is executed immediately, and that, due to opportunity costs and the time value of money, the $400 million goes down by $10 million each year. What happens to the value of this option, and how much is it worth now?
187. Real Options—American, European, Bermudan, and Customized Expansion Options

File Names: Real Options—Expansion American and European Option, Real Options—Expansion Bermudan Option, Real Options—Expansion Customized Option
Location: Modeling Toolkit | Real Options Models
Brief Description: Computing expansion options of multiple flavors (American, Bermudan, and European) with customizable and changing inputs, using closed-form models and customized binomial lattices
Requirements: Modeling Toolkit, Real Options SLS
The expansion option values the flexibility to expand from a current existing state to a larger or expanded state. Therefore, an existing state or condition must be present in order to use the expansion option; that is, there must be a base case on which to expand. If there is no base case state, then the simple Execution Option (calculated using the simple call option) is more appropriate, where the issue at hand is whether to execute a project immediately or to defer execution.

As an example, suppose a growth firm has a static valuation of future profitability using a discounted cash flow (DCF) model (in other words, the present value of the expected future cash flows discounted at an appropriate market risk-adjusted discount rate) that is found to be $400 million (PV Asset). Using Monte Carlo simulation, you calculate the implied Volatility of the logarithmic returns on the assets, based on the projected future cash flows, to be 35%. The risk-free rate on a riskless asset (five-year U.S. Treasury note with zero coupons) for the next five years is found to be 7%. Further suppose that the firm has the option to expand and double its operations by acquiring its competitor for a sum of $250 million (Implementation Cost) at any time over the next five years (Maturity). What is the total value of this firm, assuming that you account for this expansion option?

The results in Figure 187.1 indicate that the strategic project value is $638.73M (using a 10-step lattice), which means that the expansion option value is $88.73M. This result is obtained because the net present value (NPV) of executing immediately is $400M × 2 – $250M, or $550M. Thus, $638.73M less $550M is $88.73M, the value of the ability to defer and to wait and see before executing the expansion option. The example file used is Expansion American and European Option.

FIGURE 187.1 American and European options to expand with a 100-step lattice

Increase the dividend rate to, say, 2% and notice that both the American and European Expansion Options are now worth less, and that the American Expansion Option is worth more than the European version by virtue of the American option's
ability to be executed early (Figure 187.2). The dividend rate is the cost of waiting to expand, of deferring rather than executing. That is, this rate is the opportunity cost of waiting to execute the option and the cost of holding the option. If the dividend is high, the value of deferring shrinks and the option to wait is worth less.

FIGURE 187.2 American and European options to expand with a dividend rate

Increase the Dividend Rate to 4.9% and see that the binomial lattice's custom option result reverts to $550M, the static, expand-now scenario, indicating that the option is worthless (Figure 187.3). This result means that if the cost of waiting as a proportion of the asset value (as measured by the dividend rate) is too high, then execute now and stop wasting time deferring the expansion decision! Of course, this decision can be reversed if the volatility is significant enough to compensate for the cost of waiting; that is, it might be worth waiting to see how the uncertainty resolves even if the cost to wait is high. (A minimal sketch of this dividend trigger appears after the examples and figures below.)

FIGURE 187.3 Dividend rate optimal trigger value

Other applications of this option abound. To illustrate, here are some additional examples of and exercises for the expansion option.
Suppose a pharmaceutical firm is thinking of developing a new type of insulin that can be inhaled and the drug will be absorbed directly into the bloodstream. A novel and honorable idea. Imagine what this means to diabetics, who will
no longer need painful and frequent injections. The problem is, this new type of insulin requires a brand-new development effort. If the uncertainties of the market, competition, drug development, and Food and Drug Administration approval are high, perhaps a base insulin drug that can be ingested is developed first. The ingestible version is a required precursor to the inhaled version. The pharmaceutical firm can decide either to take the risk and fast-track development into the inhaled version or to buy an option to defer, to first wait and see if the ingestible version works. If this precursor works, then the firm has the option to expand into the inhaled version. How much should the firm be willing to spend on performing additional tests on the precursor and under what circumstances should the inhaled version be implemented directly? Suppose the intermediate precursor development work yields an NPV of $100M, but at any time within the next two years, an additional $50M can be invested into the precursor to develop it into the inhaled version, which will triple the NPV. However, after modeling the risk of technical success and uncertainties in the market (competitive threats, sales, and pricing structure), the annualized volatility of the cash flows using the logarithmic present value returns approach comes to 45%. Suppose the
risk-free rate is 5% for the two-year period. Using the SLS, the analysis yields $254.95M, indicating that the option to wait and defer is worth $4.95M over the $250M NPV of executing now. In playing with several scenarios, the break-even point is found at a dividend yield of 1.34%. This means that if the cost of waiting (net revenues lost by pursuing the smaller market rather than the larger one, and market share lost by delaying) exceeds $1.34M per year, it is not optimal to wait, and the pharmaceutical firm should invest in the inhaled version immediately; the returns generated each year by waiting would not sufficiently cover the risks incurred.

An oil and gas company is currently deciding on a deep-sea exploration and drilling project. The platform provides an expected NPV of $1,000M. This project is fraught with risks (the price of oil and the production rate are both uncertain), and the annualized volatility is computed to be 55%. The firm is thinking of purchasing an expansion option by spending an additional $10M to build a slightly larger platform that it does not currently need. If the price of oil is high, or when the production rate is low, the firm can execute this expansion option and
execute additional drilling to obtain more oil to sell at the higher price, which will cost another $50M, thereby increasing the NPV by 20%. The economic life of this platform is 10 years, and the risk-free rate for the corresponding term is 5%. Is obtaining this slightly larger platform worth it? Using the SLS, the option value is $27.12M when applying a 100-step lattice, so the $10M option cost is worth it. However, this expansion option will not be worth it if annual dividends exceed 0.75%, or $7.5M a year—the annual net revenues lost by waiting and not drilling, as a percentage of the base-case NPV.

Figure 187.4 shows a Bermudan Expansion Option with certain vesting and blackout steps, while Figure 187.5 shows a Customized Expansion Option that accounts for the expansion factor changing over time. Of course, other flavors of customizing the expansion option exist, including changing the implementation cost to expand, and so forth.
FIGURE 187.4 Bermudan expansion option
FIGURE 187.5 Customized expansion option
EXERCISE: OPTION TO EXPAND

As another example, suppose a growth firm exists whose valuation of future profitability using a DCF model (in other words, the present value of the expected future cash flows discounted at an appropriate market risk-adjusted discount rate) is found to be $400 million. Using Monte Carlo simulation, you calculate the implied volatility of the logarithmic returns on the projected future cash flows to be 35%. The risk-free rate on a riskless asset for the next five years is found to be yielding 7%. Suppose that the firm has the option to expand and double its operations by acquiring its competitor for a sum of $250 million at any time over the next five years. What is the total value of this firm if you account for this expansion option? You decide to use a closed-form approximation of an American call option as a benchmark, because the option to expand the firm's operations can be exercised at any time up to the expiration date. You also decide to confirm the value of the
closed-form analysis with a binomial lattice calculation. Do the next exercises, answering the questions.

1. Solve the expansion option problem manually using a 10-step lattice and confirm the results by generating an audit sheet using the software.
2. Rerun the expansion option problem using the software for 100 steps, 300 steps, and 1,000 steps. What are your observations?
3. Show how you would use the Closed-Form American Approximation model to estimate and benchmark the results from an expansion option. How comparable are the results?
4. Show the different levels of expansion factors that still yield the same expanded asset value of $800 (a numerical sketch of this question follows the list). Explain why, when the expansion factor changes, the Black-Scholes and Closed-Form American Approximation models are insufficient to capture the change in value.
   a. Use an expansion factor of 2.00 and an asset value of $400.00 (yielding an expanded asset value of $800).
   b. Use an expansion factor of 1.25 and an asset value of $640.00 (yielding an expanded asset value of $800).
   c. Use an expansion factor of 1.50 and an asset value of $533.34 (yielding an expanded asset value of $800).
   d. Use an expansion factor of 1.75 and an asset value of $457.14 (yielding an expanded asset value of $800).
5. Add a dividend yield and see what happens. Explain your findings.
   a. What happens when the dividend yield equals or exceeds the risk-free rate?
   b. What happens to the accuracy of closed-form solutions, such as the Black-Scholes and Closed-Form American Approximation models, when used as benchmarks?
6. What happens to the decision to expand if a dividend yield exists?

Now suppose that although the firm has an annualized volatility of 35%, the competitor has a volatility of 45%. This means that the expansion factor of this option changes over time, in line with the volatilities. In addition, suppose the implementation cost is a constant 120% of the existing firm's asset value at any point in time. Show how this problem can be solved using the Multiple Asset SLS. Is there option value in such a situation?
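For question 4, a small numerical sketch makes the point directly (the helper function and its names are hypothetical, not part of the Modeling Toolkit): each factor-and-asset pair below yields the same $800 expanded asset value, yet the lattice values differ because the unexpanded asset and the exercise payoff differ, which is precisely what a single-strike closed-form model cannot capture.

```python
import math

def expand_value(S0, factor, cost=250.0, T=5.0, r=0.07, sigma=0.35,
                 steps=100):
    """American expansion option for exercise 4 (hypothetical helper)."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt)); d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)
    disc = math.exp(-r * dt)
    V = [max(factor * S - cost, S)
         for S in (S0 * u**j * d**(steps - j) for j in range(steps + 1))]
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            S = S0 * u**j * d**(i - j)
            V[j] = max(factor * S - cost,
                       disc * (p * V[j + 1] + (1 - p) * V[j]))
    return V[0]

# Same expanded asset value (factor * asset = $800) in every case,
# yet the strategic values differ:
for f, s in ((2.00, 400.00), (1.25, 640.00), (1.50, 533.34), (1.75, 457.14)):
    print(f"factor {f:.2f}, asset {s:7.2f} -> {expand_value(s, f):.2f}")
```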
188. Real Options—Contraction, Expansion, and Abandonment Options File Names: Real Options—Expand Contract Abandon American and European Option, Real Options—Expand Contract Abandon Bermudan Option, Real Options—Expand Contract Abandon Customized Option I, Real Options—Expand Contract Abandon Customized Option II Location: Modeling Toolkit | Real Options Models Brief Description: Modeling a combination of mutually exclusive chooser options (contraction, expansion, and abandonment) with customizable inputs Requirements: Modeling Toolkit, Real Options SLS
The contraction, expansion, and abandonment option applies when a firm has three competing and mutually exclusive options on a single project to choose from at different times up to the time of expiration. Be aware that this is a mutually exclusive set of options. That is, you cannot execute any combinations of expansion, contraction, or abandonment at the same time. Only one option can be executed at any time. For mutually exclusive options, use a single model to compute the option value as seen in Figure 188.1 (Expand Contract Abandon American and European Option). However, if the options are not mutually exclusive, calculate them individually in different models and add up the values for the total value of the strategy. Figure 188.2 illustrates a Bermudan option with the same parameters but with certain blackout periods (example file used: Expand Contract Abandon Bermudan Option), while Figure 188.3 (example file used: Expand Contract Abandon Customized Option I) illustrates a more complex Custom Option where, during some earlier period of vesting, the option to expand does not exist yet. Perhaps the technology being developed is not yet mature enough to be expanded into some spin-off technology. In addition, during the postvesting period but prior to maturity, the option to contract or abandon does not exist. Perhaps the technology is now being reviewed for spin-off opportunities. Figure 188.4 uses the same example as Figure 188.3, but now the input parameters (salvage value) are allowed to change over time, perhaps accounting for the increase in project, asset, or firm value if abandoned at different times (example file used: Expand Contract Abandon Customized Option II).
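To make the mutual exclusivity concrete, here is a minimal Python sketch (not the SLS implementation) that embeds all three actions in a single binomial lattice, so only the single best action, or waiting, is taken at each node. The inputs are borrowed from the exercise that follows.

```python
import math

def chooser(S0=100.0, T=5.0, r=0.05, sigma=0.15, steps=100,
            exp_f=1.3, exp_cost=20.0,   # expand 30% for a $20M cost
            con_f=0.9, savings=25.0,    # contract 10% for $25M in savings
            salvage=100.0):             # abandon for a $100M salvage value
    """Mutually exclusive expand/contract/abandon in ONE lattice: only
    the single best action (or waiting) is taken at each node."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt)); d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)
    disc = math.exp(-r * dt)

    V = [max(S, exp_f * S - exp_cost, con_f * S + savings, salvage)
         for S in (S0 * u**j * d**(steps - j) for j in range(steps + 1))]
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            S = S0 * u**j * d**(i - j)
            cont = disc * (p * V[j + 1] + (1 - p) * V[j])
            V[j] = max(exp_f * S - exp_cost, con_f * S + savings,
                       salvage, cont)
    return V[0]

print(f"Combined chooser strategic value: {chooser():.2f}")
```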
EXERCISE: OPTION TO CHOOSE—CONTRACTION, EXPANSION, ABANDONMENT (DOMINANT OPTION)

Suppose a large manufacturing firm decides to hedge itself through the use of strategic options. Specifically, it has the option to choose among three strategies: expanding its current manufacturing operations, contracting its manufacturing operations, or
FIGURE 188.1 American, European, and custom options to expand, contract, and abandon (mutually exclusive options)
completely abandoning its business unit at any time within the next five years. Suppose the firm has a current operating structure whose static valuation of future profitability using a discounted cash flow (DCF) model (in other words, the present value of the future cash flows discounted at an appropriate market risk-adjusted discount rate) is found to be $100 million. Using Monte Carlo simulation, you calculate the implied volatility of the logarithmic returns on the projected future cash flows to be 15%. The risk-free rate on a riskless asset for the next five years is found to be yielding 5% annualized returns. Suppose the firm has the option to contract 10% of its current operations at any time over the next five years, thereby creating an additional $25 million in savings after this contraction. The expansion option will increase the firm's operations by 30%, with a $20 million implementation cost. Finally, by abandoning its operations, the firm can sell its intellectual property for $100 million. Do the next exercises, answering the questions.

1. Solve the chooser option problem manually using a 10-step lattice and confirm the results by generating an audit sheet using the SLS software.
FIGURE 188.2 Bermudan option to expand, contract, and abandon
2. Recalculate the option value in question 1 accounting only for an expansion option.
3. Recalculate the option value in question 1 accounting only for a contraction option.
4. Recalculate the option value in question 1 accounting only for an abandonment option.
5. Compare the sum of the three individual options in questions 2 to 4 with the result obtained in question 1 using the chooser option.
   a. Why are the results different?
   b. Which value is correct?
6. Prove that if there are many interacting options but a single dominant strategy, the project's option value approaches this dominant strategy's value. That is, perform the next steps, then compare and explain the results.
   a. Reduce the expansion cost to $1.
   b. Increase the contraction savings to $100.
FIGURE 188.3 Custom options with mixed expand, contract, and abandon capabilities
   c. Increase the salvage value to $150.
   d. What inferences can you make based on these results?
7. Solve this Contraction and Abandonment option: asset value of $100, five-year economic life, 5% annualized risk-free rate of return, 25% annualized volatility, 25% contraction with a $25 savings, and a $70 abandonment salvage value.
8. Show and explain what happens when the salvage value of abandonment far exceeds any chance of a contraction. For example, set the salvage value at $200. In contrast, set the salvage value back to $70 and increase the contraction savings to $100. What happens to the value of the project?
9. Solve just the Contraction Option in isolation. That is, set the contraction savings to $25 and explain what happens. Change the savings to $100 and explain the change in results. What can you infer from dominant option strategies?
10. Solve just the abandonment option in isolation. That is, set the salvage value to $70 and explain what happens. Change the salvage value to $200 and explain the change in results. What can you infer from dominant option strategies?
FIGURE 188.4 Custom options with mixed expand, contract, and abandon capabilities with changing input parameters
189. Real Options—Dual Variable Rainbow Option Using Pentanomial Lattices File Name: Real Options—Dual-Asset Rainbow Option Pentanomial Lattice Location: Modeling Toolkit | Real Options Models Brief Description: Solving a rainbow option made up of two correlated underlying assets by applying a pentanomial lattice Requirements: Modeling Toolkit, Real Options SLS
The dual variable rainbow option for both American and European options requires the pentanomial lattice approach. Rainbows on the horizon after a rainy
FIGURE 189.1 Two binomial lattices (asset prices and quantity)
day comprise various colors of the light spectrum, and although rainbow options are not as colorful as their physical counterparts, they get their name from the fact that they have two or more underlying assets rather than one. In contrast to standard options, the value of a rainbow option is determined by the behavior of two or more underlying elements and by the correlation between them. That is, the value of a rainbow option is determined by the performance of two or more underlying asset elements. This particular model is appropriate when there are two underlying variables in the option (e.g., price of asset and quantity), each fluctuating at a different volatility while possibly being correlated (Figure 189.1). These two variables are usually correlated in the real world, and the underlying asset value is the product of price and quantity. Due to the different volatilities, a pentanomial, or five-branch, lattice is used to capture all possible combinations of products (Figure 189.2). Be aware that certain combinations of inputs may yield an unsolvable lattice with negative implied probabilities. If that result occurs, a message will appear. Try a different combination of inputs as
FIGURE 189.2 Pentanomial lattice (combining two binomial lattices)
FIGURE 189.3 Pentanomial lattice solving a Dual-Asset Rainbow Option
well as higher lattice steps to compensate. Figure 189.3 illustrates the pentanomial model in SLS. Notice that a high positive correlation increases both call and put option values, because when both underlying elements move in the same direction, the overall portfolio volatility is higher: price and quantity can fluctuate at high-high and low-low levels, generating a higher overall underlying asset value. In contrast, negative correlations reduce both call and put option values for the opposite reason, due to the portfolio diversification effect of negatively correlated variables. Of course, correlation here is bounded between −1 and +1 inclusive. If a real options problem has more than two underlying assets, either use the SLS or use Risk Simulator to simulate the underlying assets' trajectories and capture their interacting effects in a discounted cash flow model. See the varied examples in SLS using pentanomial lattices to solve 3D dual-asset options (e.g., spreads, exchange, dual strike, and portfolio options).
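The correlation effect described above is easy to verify with a quick Monte Carlo cross-check on a European payoff, even though the chapter's American-style rainbow requires the pentanomial lattice. In the sketch below the underlying is price times quantity; treating quantity as a driftless lognormal variable is an assumption of this sketch, and all inputs are illustrative.

```python
import math, random

def rainbow_mc(P0=100.0, Q0=1.0, strike=100.0, T=1.0, r=0.05,
               sig_p=0.30, sig_q=0.20, rho=0.5, n=100_000, seed=42):
    """European call on (price x quantity) with correlated lognormal
    drivers; quantity is assumed driftless in this sketch."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        P = P0 * math.exp((r - 0.5 * sig_p**2) * T + sig_p * math.sqrt(T) * z1)
        Q = Q0 * math.exp(-0.5 * sig_q**2 * T + sig_q * math.sqrt(T) * z2)
        total += max(P * Q - strike, 0.0)
    return math.exp(-r * T) * total / n

# Higher correlation -> higher combined volatility -> higher option value
for rho in (-0.8, 0.0, 0.8):
    print(f"rho {rho:+.1f}: call ~ {rainbow_mc(rho=rho):.2f}")
```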
190. Real Options—Exotic Chooser Option File Name: Exotic Options—Exotic Chooser Option Location: Modeling Toolkit | Real Options Models Brief Description: Illustrating Chooser Options, which are single options that allow the holder to decide if options become calls or puts at some future time, essentially creating two options in one, where the cost of the single option is lower than the cost of two options Requirements: Modeling Toolkit, Real Options SLS
Many types of user-defined and exotic options can be solved using the SLS software. For instance, Figure 190.1 shows a simple Exotic Chooser Option. In this
FIGURE 190.1 American and European Exotic Chooser Option using SLS
simple analysis, the option holder has two options, a call and a put. Instead of having to purchase or obtain two separate options, a single option is obtained that allows the holder to choose whether it will become a call or a put, thereby reducing the total cost relative to obtaining two separate options. For instance, with the same input parameters in Figure 190.1, the American Chooser Option is worth $6.7168, as compared to $4.87 for the call and $2.02 for the put ($6.89 total cost for two separate options).
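The economics of the chooser are easy to reproduce with two option lattices rolled back to the choice date. The sketch below prices a European-style simple chooser (the choice is made at a single date, unlike the American chooser in Figure 190.1) under illustrative inputs of my own; it demonstrates that the chooser is worth less than buying the call and the put separately.

```python
import math

def roll_back(values, i_from, i_to, p, disc):
    """Discount one layer of lattice values from step i_from back to i_to."""
    for i in range(i_from - 1, i_to - 1, -1):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(i + 1)]
    return values

def simple_chooser(S0=100.0, K=100.0, t_choose=0.5, T=1.0,
                   r=0.05, sigma=0.25, steps=200):
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt)); d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)
    disc = math.exp(-r * dt)
    m = round(t_choose / dt)  # the choice date expressed as a lattice step

    S_T = [S0 * u**j * d**(steps - j) for j in range(steps + 1)]
    call = roll_back([max(S - K, 0.0) for S in S_T], steps, m, p, disc)
    put = roll_back([max(K - S, 0.0) for S in S_T], steps, m, p, disc)
    # At the choice date the holder keeps whichever leg is worth more
    chooser = roll_back([max(c, q) for c, q in zip(call, put)],
                        m, 0, p, disc)[0]
    return chooser, roll_back(call, m, 0, p, disc)[0], \
           roll_back(put, m, 0, p, disc)[0]

ch, c, q = simple_chooser()
print(f"chooser {ch:.4f} < call {c:.4f} + put {q:.4f} = {c + q:.4f}")
```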
191. Real Options—Exotic Complex Floating American and European Chooser File Names: Exotic Complex Floating European Chooser, Exotic Complex Floating American Chooser Location: Modeling Toolkit | Real Options Models Brief Description: Illustrating Chooser Options, which are single options that allow the holder to decide if options become calls or puts at some future time, essentially creating two options in one, where the cost of the single option is lower than the cost of two options Requirements: Modeling Toolkit, Real Options SLS
A more complex Chooser Option can be constructed using the SLS software, as seen in Figures 191.1 and 191.2. Figure 191.1 shows a European version where the selection of a call or put can occur only at maturity. Figure 191.2 shows an American version where the selection can occur at any time up to and including maturity. In these examples, the execution costs of the call and the put are set at different levels. An interesting example of a complex Chooser Option is a firm developing a highly uncertain and risky new technology. The firm tries to hedge its downside as well as capitalize on its upside by creating a Chooser Option: it can decide to build the technology itself once the research and development phase is complete, or it can sell the intellectual property of the technology, each at a different cost. To further complicate matters, the SLS software can easily and quickly solve the situation where the build and sell alternatives each carry a different volatility and time to choose.
FIGURE 191.1 Complex European Exotic Chooser Option using SLS
FIGURE 191.2 Complex American Exotic Chooser Option using SLS
192. Real Options—Jump-Diffusion Option Using Quadranomial Lattices File Name: Real Options—Jump-Diffusion Calls and Puts Using Quadranomial Lattices Location: Modeling Toolkit | Real Options Models Brief Description: Computing the options where the underlying asset follows a jump-diffusion process (e.g., oil and electricity prices), using quadranomial lattices Requirements: Modeling Toolkit, Real Options SLS
The Jump-Diffusion Calls and Puts for both American and European options apply the Quadranomial Lattice approach. This model is appropriate when the underlying variable in the option follows a jump-diffusion stochastic process. Figure 192.1 illustrates an underlying asset modeled using a jump-diffusion process. Jumps are commonplace in certain business variables, such as the price of oil and the price of gas, where prices take sudden and unexpected jumps (e.g., during a war). The underlying variable's frequency of jumps is denoted its jump rate, and the magnitude of each jump is its jump intensity. The Binomial Lattice is able to capture only a stochastic process without jumps (e.g., Brownian Motion and Random Walk processes). When there is a probability of a jump (albeit a small probability that follows a Poisson distribution), additional branches are required. The Quadranomial Lattice (four branches on each node) is used to capture these jumps, as seen in Figure 192.2.
FIGURE 192.1 Jump-diffusion process
FIGURE 192.2 Quadranomial Lattice
Be aware that due to the complexity of the models, some calculations with higher lattice steps may take slightly longer to compute. Furthermore, certain combinations of inputs may yield negative implied risk-neutral probabilities and result in a noncomputable lattice. In that case, make sure the inputs are correct (e.g., the jump intensity has to exceed 1, where 1 implies no jumps; check for erroneous combinations of jump rates, jump sizes, and lattice steps). The probability of a jump can be computed as the product of the jump rate and the time-step δt. Figure 192.3 illustrates a sample quadranomial jump-diffusion option analysis (example file used: Jump Diffusion Calls and Puts Using Quadranomial Lattices). Notice that the jump-diffusion call and put options are worth more than regular calls and puts. With positive jumps (a 10% probability per year and an average jump size of 1.50 times the previous value), the movements of the underlying asset are more significant than without any jumps. This makes both the call and the put worth more, even at the same volatility (i.e., $34.69 compared to $31.99 and $15.54 compared to $13.14).
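The jump mechanics can be cross-checked with a small simulation: in each time step the asset diffuses, and with probability equal to the jump rate times δt it is multiplied by the jump intensity. The sketch below prices a European call this way under illustrative inputs of my own; whether the risk-neutral drift is compensated for the expected jump, as done here, is a modeling choice, and the SLS quadranomial convention may differ.

```python
import math, random

def jump_diffusion_call(S0=100.0, K=100.0, T=5.0, r=0.05, sigma=0.25,
                        jump_rate=0.10, jump_size=1.50,
                        steps=100, n=20_000, seed=1):
    """European call under diffusion plus discrete jumps: in each step
    the asset jumps by `jump_size` with probability jump_rate * dt."""
    rng = random.Random(seed)
    dt = T / steps
    comp = jump_rate * (jump_size - 1.0)  # compensator keeping the drift risk-neutral
    total = 0.0
    for _ in range(n):
        S = S0
        for _ in range(steps):
            S *= math.exp((r - comp - 0.5 * sigma**2) * dt
                          + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
            if rng.random() < jump_rate * dt:  # Poisson jump arrival
                S *= jump_size
        total += max(S - K, 0.0)
    return math.exp(-r * T) * total / n

print(f"Jump-diffusion call ~ {jump_diffusion_call():.2f}")
print(f"No-jump call        ~ {jump_diffusion_call(jump_rate=0.0):.2f}")
```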
FIGURE 192.3 Quadranomial Lattice results on jump-diffusion options
193. Real Options—Mean-Reverting Calls and Puts Using Trinomial Lattices File Names: Real Options—Mean-Reverting Calls and Puts Using Trinomial Lattices Location: Modeling Toolkit | Real Options Models Brief Description: Computing the options where the underlying asset follows a mean-reverting process (e.g., interest rates), using trinomial lattices Requirements: Modeling Toolkit, Real Options SLS
The Mean-Reversion option in the SLS software calculates both the American and European options when the underlying asset value is mean-reverting. A mean-reverting stochastic process reverts back to the long-term mean value (Long-Term Rate Level) at a particular speed of reversion (Reversion Rate). Examples of variables following a mean-reversion process include inflation rates, interest rates, gross domestic product growth rates, optimal production rates, and the price of natural gas. Variables such as these succumb either to natural tendencies or to economic and business conditions that pull them back toward a long-term level whenever actual values stray too far above or below it. For instance, monetary and fiscal policy will prevent the economy from fluctuating too widely, and such policies tend to target a specific long-term rate or level. Figure 193.1 illustrates a regular stochastic process (dotted line) versus a mean-reversion process (solid line). Clearly the mean-reverting process, with its dampening effect, will have a lower level of uncertainty than the regular process with the same volatility measure.
FIGURE 193.1 Mean-reversion in action
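The dampening effect in Figure 193.1 can be reproduced with a few lines of simulation: the discretized process below adds a pull of size kappa times the gap to the long-term level in each step, so setting kappa to zero recovers the regular random walk. All inputs are illustrative, and the parameterization is a sketch rather than the SLS trinomial calibration.

```python
import math, random

def simulate(S0=100.0, long_term=100.0, kappa=0.0, sigma=0.25,
             T=5.0, steps=250, seed=7):
    """One path: kappa = 0 gives an ordinary random walk; kappa > 0
    pulls the level back toward `long_term` every step."""
    rng = random.Random(seed)
    dt = T / steps
    path = [S0]
    for _ in range(steps):
        S = path[-1]
        pull = kappa * (long_term - S) * dt  # mean-reversion drift
        shock = sigma * S * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(S + pull + shock)
    return path

regular = simulate(kappa=0.0)
reverting = simulate(kappa=2.0)  # strong pull toward the long-term level
print(f"regular   range: {min(regular):.1f} to {max(regular):.1f}")
print(f"reverting range: {min(reverting):.1f} to {max(reverting):.1f}")
```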
FIGURE 193.2 Comparing mean-reverting calls and puts to regular calls and puts
Figure 193.2 shows the mean-reverting call option computed using the SLS Multinomial Lattice module. Using this module, we can compute additional call and put results from a regular option modeled using the Trinomial Lattice versus calls and puts assuming a mean-reverting (MR) tendency of the underlying asset using the Mean-Reverting Trinomial Lattice. Based on the results (see Table 193.1), several items are worthy of notice:
The MR Calls are less than or equal to regular Calls because of the dampening effect of the mean-reversion asset. The MR asset value will not increase as high as the regular asset value. Conversely, the MR Puts are greater than or equal to regular Puts because the asset value will not rise as high, indicating that there will be a higher chance that the asset value will hover around the PV Asset, and a higher probability it will be below the PV Asset, making the put option more valuable.
TABLE 193.1 Regular calls and puts versus mean-reverting options solved using trinomial lattices

                 Regular    Mean-Reverting
American Call    $31.99     $18.62
American Put     $13.14     $18.76
European Call    $31.99     $13.58
European Put     $ 9.87     $15.26
With the dampening effect, the MR Call and MR Put ($18.62 and $18.76) are more symmetrical in value than a regular call and put ($31.99 and $13.14). The regular American Call equals the regular European Call without dividends because, without dividends, it is never optimal to execute early. However, because of the mean-reverting tendency, being able to execute early is valuable, especially before the asset value decreases. So we see that MR American Calls are greater than or equal to MR European Calls, but of course both are less than the regular Call. Other items of interest in mean-reverting options include:
The higher (lower) the long-term rate level, the higher (lower) the call options. The higher (lower) the long-term rate level, the lower (higher) the put options.
Finally, be careful when modeling mean-reverting options. Higher lattice steps usually are required, and certain combinations of reversion rates, long-term rate level, and lattice steps may yield unsolvable trinomial lattices. When this occurs, the software will return error messages.
194. Real Options—Multiple Assets Competing Options File Names: Real Options—Multiple Assets Competing Options (3D Binomial) Location: Modeling Toolkit | Real Options Models Brief Description: Computing a three-dimensional binomial option where there are multiple underlying assets and the option holder has the ability to purchase the asset with the highest value Requirements: Modeling Toolkit, Real Options SLS
This model illustrates the power of real options analysis and the ability of the Real Options SLS software to solve customizable options. This two-underlying-asset
FIGURE 194.1 Multiple Assets Competing Option
model is sometimes called a 3D binomial model, as there are multiple underlying assets (Figure 194.1). The problem is solved using the multiple-asset, multiple-phase Multiple Asset SLS module in the SLS software. The critical pieces are the terminal and intermediate equations. In this example, we solve a call option on the maximum; that is, given that there are multiple underlying assets, this option allows the holder to execute and purchase whichever asset is the highest at execution. The option can be an American, Bermudan, or European option. To create an American option, use:

1. Terminal equation: Max(FirstAsset-Cost,SecondAsset-Cost,0)
2. Intermediate equation: Max(FirstAsset-Cost,SecondAsset-Cost,OptionOpen)

To create a Bermudan option, simply add the Blackout and Vesting Period Steps to cover the appropriate time periods in the lattice when the option cannot be executed. For instance, for a five-year project solved using a 100-step lattice, if the blackout or vesting period is the first three years (the option cannot be executed in Years 0 to 3, including the current time zero), set the blackout steps to 0–60, and make sure you type OptionOpen for the blackout equation. To create a European option, keep the same terminal equation but change the intermediate equation to OptionOpen. Notice that 3D options can also be solved using pentanomial lattices in SLS; see Chapter 189 for details.
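As a sanity check on the terminal equation Max(FirstAsset-Cost,SecondAsset-Cost,0), the European version of this call-on-maximum can be priced by simulating two correlated lognormal assets, as in the sketch below. This is a cross-check under illustrative inputs of my own, not the 3D binomial lattice itself.

```python
import math, random

def call_on_max(S1=100.0, S2=100.0, cost=100.0, T=3.0, r=0.05,
                sig1=0.25, sig2=0.35, rho=0.3, n=100_000, seed=3):
    """European payoff Max(FirstAsset-Cost, SecondAsset-Cost, 0) on two
    correlated lognormal assets."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        a = S1 * math.exp((r - 0.5 * sig1**2) * T + sig1 * math.sqrt(T) * z1)
        b = S2 * math.exp((r - 0.5 * sig2**2) * T + sig2 * math.sqrt(T) * z2)
        total += max(a - cost, b - cost, 0.0)
    return math.exp(-r * T) * total / n

print(f"Call on the max of two assets ~ {call_on_max():.2f}")
```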
195. Real Options—Path-Dependent, Path-Independent, Mutually Exclusive, Non–Mutually Exclusive, and Complex Combinatorial Nested Options

Sequential compound options are Path-Dependent Options, where one phase depends on the success of another, in contrast to Path-Independent Options, such as those solved using the SLS. These options can be mutually exclusive or non–mutually exclusive. In all these types of options, there might be multiple underlying assets (e.g., Japan has a different risk-return or profitability-volatility profile from the United Kingdom or Australia). You can build multiple underlying asset lattices this way using the SLS and combine them in various ways, depending on the options. Some examples of path-dependent versus path-independent and mutually exclusive versus non–mutually exclusive options follow.
Path-Independent and Mutually Exclusive Options. Use the SLS to solve these types of options by combining all the options into a single valuation lattice. Examples include the options to expand, contract, and abandon. These are mutually exclusive if you cannot both expand into a different country while abandoning
and selling the company. They are path independent if there are no restrictions on timing (i.e., you can expand, contract, and abandon at any time within the confines of the maturity period).

Path-Independent and Non–Mutually Exclusive Options. Use the SLS to solve these types of options by running each of the non–mutually exclusive options one at a time. Examples include the option to expand your business into Japan, the United Kingdom, and Australia. These are not mutually exclusive if you can choose to expand into any combination of countries (e.g., Japan only, Japan and the United Kingdom, the United Kingdom and Australia, etc.). They are path independent if there are no restrictions on timing (i.e., you can expand to any country at any time within the maturity of the option). Add the individual option values to obtain the total option value for expansion.

Path-Dependent and Mutually Exclusive Options. Use the SLS software to solve these types of options by combining all the options into one valuation lattice. Examples include the option to expand into the three countries, Japan, the United Kingdom, and Australia. However, this time the expansions are mutually exclusive and path dependent. That is, you can expand into only one country at a time, but at certain periods you can expand only into certain countries (e.g., Japan is optimal only in three years due to current economic conditions, export restrictions, etc., as compared to the United Kingdom expansion, which can be executed right now).

Path-Dependent and Non–Mutually Exclusive Options. Use the SLS software to solve these. These are typically simple sequential compound options with multiple phases. If more than one non–mutually exclusive option exists, rerun the SLS for each option. Examples include the ability to enter Japan in Years 0–3, Australia in Years 3–6, and the United Kingdom at any time between Years 0–10. The entry strategies are not mutually exclusive if you can enter more than one country, and they are path dependent because they are time dependent.

Nested Combinatorial Options. These are the most complicated options and can take a combination of any of the four preceding types. In addition, the options are nested within one another, in that the expansion into Japan can come only after Australia and cannot be executed without heading to Australia first. Also, Australia plus the United Kingdom is acceptable, but you cannot expand into both the United Kingdom and Japan (e.g., certain trade restrictions, antitrust issues, competitive considerations, strategic issues, or restrictive agreements with alliances). For such options, draw all the scenarios on a strategy tree and use IF, AND, OR, and MAX statements in SLS to solve the option. That is, if you enter the United Kingdom, that is it; but if you enter Australia, you can still enter Japan or the United Kingdom, though not both Japan and the United Kingdom.
196. Real Options—Sequential Compound Options File Names: Real Options—Simple Two-Phased Sequential Compound Option, Real Options—Multiple-Phased Sequential Compound Option, Real Options—Multiple-Phased Complex Sequential Compound Option Location: Modeling Toolkit | Real Options Models Brief Description: Computing Sequential Compound Options with multiple phases and complex interrelationships between phases and customizable and changing input parameters Requirements: Modeling Toolkit, Real Options SLS
Sequential Compound Options are applicable for research and development investments or any other investments that have multiple stages. The Real Options SLS software is required for solving Sequential Compound Options. The easiest way to understand this option is to start with a two-phased example, as seen in Figure 196.1. In a two-phased example, management has the ability to decide if Phase II (PII)
FIGURE 196.1 Graphical representation of a Two-Phased Sequential Compound Option
should be implemented after obtaining the results from Phase I (PI). For example, a pilot project or market research in PI indicates that the market is not yet ready for the product, hence PII is not implemented. All that is lost is the PI sunk cost, not the entire investment cost of both PI and PII. The next example illustrates how the option is analyzed.
REAL OPTIONS—SIMPLE TWO-PHASED SEQUENTIAL COMPOUND OPTION

The illustration in Figure 196.1 is valuable in explaining and communicating to senior management the aspects of an American sequential compound option and its inner workings. In the illustration, the Phase I investment of –$5M (in present value dollars) in Year 1 is followed by a Phase II investment of –$80M (in present value dollars) in Year 2. It is hoped that positive net free cash flows (CF) will follow in Years 3 to 6, yielding a PV Asset of $100M (CF discounted at, say, a 9.7% discount or hurdle rate), and the volatility of these CFs is 30%. At a 5% risk-free rate, the strategic value is calculated at $27.67M, as seen in Figure 196.2 using a 100-step lattice. This means the strategic option value of being able to defer investments and to wait and see until more information becomes available and uncertainties become resolved is worth $12.67M, because the NPV (net present value) is worth $15M ($100M – $5M – $80M). In other words, the expected value of perfect information is worth $12.67M, which indicates that if market research can be used to obtain credible information to decide if this project is a good one, the maximum the firm should be willing to spend in Phase I is on average no more than $17.67M (i.e., $12.67M + $5M) if PI is part of the market research initiative, or simply $12.67M otherwise. If the cost to obtain the credible information exceeds this value, then it is optimal to take the risk and execute the entire project immediately at $85M. (Example file used: Simple Two-Phased Sequential Compound Option.) In contrast, if the volatility decreases (uncertainty and risk are lower), the strategic option value decreases. In addition, when the cost of waiting (as described by the dividend rate as a percentage of the asset value) increases, it is better not to defer and wait that long. Therefore, the higher the dividend rate, the lower the strategic option value. For instance, at an 8% dividend rate and 15% volatility, the resulting value reverts to the NPV of $15M, which means that the option value is zero and that it is better to execute immediately, as the cost of waiting far outstrips the value of being able to wait given the level of volatility (uncertainty and risk). Finally, if risks and uncertainty increase significantly, even with a high cost of waiting (e.g., a 7% dividend rate at 30% volatility), it is still valuable to wait. This model provides the decision maker with a view into the optimal balance between waiting for more information (the expected value of perfect information) and the cost of waiting. You can analyze this balance by creating strategic options to defer investments through development stages where, at every stage, the project is reevaluated as to whether it is beneficial to proceed to the next phase. Based on the input assumptions used in this model, the sequential compound option results show the strategic value of the project, and the NPV is simply the PV Asset less the Implementation Costs of both phases. In other words, the strategic option value is the difference between the calculated strategic value and the NPV. It is
FIGURE 196.2 Solving a Two-Phased Sequential Compound Option using SLS
recommended that you vary the volatility and dividend inputs to determine their interactions: specifically, where the break-even points are for different combinations of volatilities and dividends. Using this information, you can make better go or no-go decisions (e.g., break-even volatility points can be traced back into the discounted cash flow model to estimate the probability of crossing over the point at which the ability to wait becomes valuable). Note that the investment costs of $80M and $5M are in present values for several reasons. First, only the PV Asset undergoes an underlying-asset lattice evolution to future values; costs do not undergo such an evolution and are hence kept in present values. Also, in a five-year option, for example, an implementation that can occur at any time within those five years would have very different actual implementation costs; to make these costs comparable, we use their present values. Thus, an $80M cost can occur at any time within some specified time period, and the present value of that cost is still $80M regardless of when it hits. Finally, the risk-free rate discounts the expected value of keeping the option open, not the cost. So there is no double-discounting, and using the present value of these costs is correct.
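The two-phased example can be approximated outside the SLS with two nested lattices: an American option to invest the $80M Phase II cost, stacked under a Phase I go/no-go decision. The sketch below simplifies by making the $5M Phase I decision exactly at Year 1 (the SLS example is fully American, so small differences are expected), and it lands near the $27.67M strategic value quoted above.

```python
import math

def sequential_compound(S0=100.0, c1=5.0, c2=80.0, t1=1.0, T=2.0,
                        r=0.05, sigma=0.30, steps=100):
    """Two-phased sequential compound sketch: a Phase I go/no-go (cost c1)
    at t1 that buys an American option to invest c2 any time up to T."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt)); d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)
    disc = math.exp(-r * dt)
    m = round(t1 / dt)  # lattice step corresponding to the Phase I decision

    # Phase II: American option on the project with strike c2
    V = [max(S0 * u**j * d**(steps - j) - c2, 0.0) for j in range(steps + 1)]
    for i in range(steps - 1, m - 1, -1):
        for j in range(i + 1):
            S = S0 * u**j * d**(i - j)
            cont = disc * (p * V[j + 1] + (1 - p) * V[j])
            V[j] = max(S - c2, cont)

    # Phase I at t1: pay c1 only if the Phase II option is worth more
    V = [max(v - c1, 0.0) for v in V[:m + 1]]
    for i in range(m - 1, -1, -1):
        V = [disc * (p * V[j + 1] + (1 - p) * V[j]) for j in range(i + 1)]
    return V[0]

print(f"Strategic value ~ {sequential_compound():.2f} "
      f"(book: $27.67M; NPV = 100 - 85 = 15)")
```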
REAL OPTIONS—MULTIPLE-PHASED SEQUENTIAL COMPOUND OPTION

The sequential compound option can similarly be extended to multiple phases with the use of the SLS software. A graphical representation of a multiphased or stage-gate investment is seen in Figure 196.3. The example illustrates a 10-phase project where, at every phase, management has the option and flexibility either to continue
FIGURE 196.3 Multiple-Phased Sequential Compound Option
to the next phase if everything goes well or to terminate the project otherwise. Based on the input assumptions, the results in the SLS indicate the calculated strategic value of the project, while the NPV of the project is simply the PV Asset less all Implementation Costs (in present values) if implementing all phases immediately. With the strategic option value of being able to defer and wait before implementing future phases (because, given the volatility, there is a possibility that the asset value will be significantly higher), the strategic value of the project is higher; the ability to wait before making the future investment decisions is the option value, or the strategic value of the project less the NPV. Figure 196.4 shows the results using the SLS software. Notice that, due to the backward induction process used, the analytical convention is to start with the last phase and work all the way back to the first phase (example file used: Multiple-Phased Sequential Compound Option). In NPV terms the project is worth –$500. However, the total strategic value of the stage-gate investment option is worth $41.78. This means that although the investment looks bad on an NPV basis, in reality, by hedging the risks and uncertainties through sequential investments, the option holder can pull out at any time and does not have to keep investing unless things look promising. If after the first phase things look bad, the option holder can pull out and stop investing, and the maximum loss will be $100 (Figure 196.4), not the entire $1,500 investment. If, however, things look promising, the option holder can continue to invest in stages. The expected value of the investments in present values, after accounting for the probability that things will look bad (and investment stops) versus looking great (and investment continues), averages out to $41.78. Notice that the option valuation result will always be greater than or equal to zero (e.g., try reducing the volatility to 5% and increasing the dividend yield to 8% for all phases). When the option value is very low or zero, it is not optimal to defer investments, and the stage-gate investment process is not optimal here: the cost of waiting is too high (high dividend) or the uncertainties in the cash flows are low (low volatility); hence, invest if the NPV is positive. In such a case, although you obtain a zero value for the option, the analytical interpretation is significant: a zero or very low value indicates an optimal decision not to wait.
REAL OPTIONS—MULTIPLE-PHASED COMPLEX SEQUENTIAL COMPOUND OPTION

The sequential compound option can be further complicated by adding customized options at each phase, as illustrated in Figure 196.5. In the figure, at every phase there may be different combinations of mutually exclusive options, including the flexibility to stop investing, abandon and salvage the project in return for some value, expand the scope of the project into another project (e.g., spin off projects and expand into different geographical locations), contract the scope of the project resulting in some savings, or continue on to the next phase. This seemingly complicated option can be solved very easily using SLS, as seen in Figure 196.6 (example file used: Real Options—Multiple-Phased Complex Sequential Compound Option). To illustrate, Figure 196.6's path-dependent sequential option uses the SLS inputs shown in Table 196.1.
FIGURE 196.4 Solving a Multi-Phased Sequential Compound Option using SLS
FIGURE 196.5 Graphical representation of a Multiple-Phased Complex Sequential Compound Option
Exercise: Sequential Compound Option

A Sequential Compound Option exists when a project has multiple phases and latter phases depend on the success of previous ones. Suppose a project has two phases, of which the first has a one-year expiration and costs $500 million. The second phase's expiration is three years and costs $700 million. Suppose that the implied volatility of the logarithmic returns on the projected future cash flows is calculated to be 20%. The risk-free rate on a riskless asset for the next three years is found to be yielding 7.7%. The static valuation of future profitability using a discounted cash flow model (in other words, the present value of the future cash flows discounted at an appropriate market risk-adjusted discount rate) is found to be $1 billion. Do the next exercises, answering the questions.

1. Solve the Sequential Compound Option problem manually using a 10-step lattice, and confirm the results by generating an audit sheet using the software.
2. Change the sequence of the costs. That is, set the first phase's cost to $700 and the second phase's cost to $500 (both in present values). Compare your results. Explain what happens.
3. Now suppose the total cost of $1,200 (in present values) is spread out equally over six phases in six years. How would the analysis results differ?
4. Suppose that the project is divided into three phases with the options in each phase as described. Solve the option using the Multiple Asset SLS.
   a. Phase 3 occurs in three years; the implementation cost is $300, but the project can be abandoned and salvaged to receive $300 at any time within the first year, $350 within the second year, and $400 within the third year.
   b. Phase 2 occurs in two years, and the project can be spun off to a partner. In doing this, the partner keeps 15% of the NPV and saves the firm $200.
   c. Phase 1 occurs in one year, and the only thing that can be done is to invest the $300 implementation cost and continue to the next phase.
FIGURE 196.6 Solving a Multiple-Phased Complex Sequential Compound Option using SLS
TABLE 196.1 SLS equations for a sample complex sequential compound option

Phase 3: Terminal: Max(Underlying*Expansion-Cost,Underlying,Salvage)
         Intermediate: Max(Underlying*Expansion-Cost,Salvage,OptionOpen)
         Steps: 50

Phase 2: Terminal: Max(Phase3,Phase3*Contract+Savings,Salvage,0)
         Intermediate: Max(Phase3*Contract+Savings,Salvage,OptionOpen)
         Steps: 30

Phase 1: Terminal: Max(Phase2,Salvage,0)
         Intermediate: Max(Salvage,OptionOpen)
         Steps: 10
197. Real Options—Simultaneous Compound Options File Names: Real Options—Simple Two-Phased Simultaneous Compound Option, Real Options—Multiple-Phased Simultaneous Compound Option Location: Modeling Toolkit | Real Options Models Brief Description: Computing Simultaneous Compound Options with multiple options executed simultaneously with complex interrelationships between phases and customizable and changing input parameters Requirements: Modeling Toolkit, Real Options SLS
The Simultaneous Compound Option evaluates a project's strategic value when the value of the project depends on the success of two or more investment initiatives executed simultaneously in time. The Sequential Compound Option evaluates these investments in stages, one after another over time, while the Simultaneous Compound Option evaluates them in concurrence. Clearly, the Sequential Compound Option is worth more than the Simultaneous Compound Option by virtue of staging the investments. Note that the Simultaneous Compound Option acts like a regular execution call option; hence, the American Call Option is a good benchmark for such an option.
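Following that benchmark logic, a quick sketch is to price an American call whose strike is the combined cost of the simultaneous initiatives. The inputs below are taken from the exercise at the end of this chapter ($1 billion asset, $900M plus $500M costs, 25% volatility, 7.7% risk-free rate, five years); treating the two costs as one combined strike is the simplifying assumption here, not an SLS convention.

```python
import math

def american_call(S0=1000.0, K=1400.0, T=5.0, r=0.077, sigma=0.25,
                  steps=100):
    """American call benchmark with the strike set to the combined
    cost of the two simultaneous initiatives ($900M + $500M)."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt)); d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)
    disc = math.exp(-r * dt)
    V = [max(S0 * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            S = S0 * u**j * d**(i - j)
            V[j] = max(S - K, disc * (p * V[j + 1] + (1 - p) * V[j]))
    return V[0]

print(f"American call benchmark ~ {american_call():.2f}")
```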
REAL OPTIONS—TWO-PHASED SIMULTANEOUS COMPOUND OPTION

Figure 197.1 shows how a Simultaneous Compound Option can be solved using the SLS (example file used: Simple Two-Phased Simultaneous Compound Option). As in the Sequential Compound Option analysis, the existence of an option value implies
FIGURE 197.1 Solving a Simultaneous Compound Option using SLS
that the ability to defer and wait for additional information prior to executing is valuable due to the significant uncertainties and risks as measured by volatility. However, when the cost of waiting as measured by the dividend rate is high, the option to wait and defer becomes less valuable, until the break-even point where the option value equals zero and the strategic project value equals the NPV of the project. This break-even point provides the decision maker with valuable insights into the interactions between the levels of uncertainty inherent in the project and the cost of waiting to execute.
REAL OPTIONS—MULTIPLE-PHASED SIMULTANEOUS COMPOUND OPTION

The same analysis can be extended to multiple-investment Simultaneous Compound Options, as seen in Figure 197.2 (example file used: Multiple-Phased Simultaneous Compound Option).
Exercise: Simultaneous Compound Option

In a Compound Option analysis, the value of the option depends on the value of another option. For instance, a pharmaceutical company currently going through a particular Food and Drug Administration (FDA) drug approval process has to go through human trials. The success of the FDA approval depends heavily on the success of the human testing, both occurring at the same time. Suppose that the former costs $900 million and the latter $500 million. Further suppose that both phases occur simultaneously and take five years to complete. Using Monte Carlo simulation, you calculate the implied volatility of the logarithmic returns on the projected future cash flows to be 25%. The risk-free rate on a riskless asset for the next five years is found to be yielding 7.7%. The drug development effort's static valuation of future profitability using a discounted cash flow (DCF) model (in other words, the present value of the future cash flows discounted at an appropriate market risk-adjusted discount rate) is found to be $1 billion. Do the next exercises, answering the questions.

1. Solve the Simultaneous Compound Option problem manually using a 10-step lattice, and confirm the results by generating an Audit Sheet using the software.
2. Swap the implementation costs such that the first cost is $500 and the second cost is $900. Is the resulting option value similar or different? Why?
3. What happens when part of the cost of the first option is allocated to the second option? For example, make the first cost $450 and the second cost $950. Does the result change? Explain.
4. Show how a Closed-Form American Approximation Model can be used to benchmark the results from a Simultaneous Compound Option.
5. Show how a Sequential Compound Option can also be used to calculate, or at least approximate, the Simultaneous Compound Option result.
6. Assume that the total $1,400 implementation cost (in present values) is now distributed equally across seven simultaneous projects. Does the result change? Why or why not?
FIGURE 197.2 Solving a multiple investment Simultaneous Compound Option using SLS
198. Real Options—Simple Calls and Puts Using Trinomial Lattices File Names: Real Options—Simple Calls and Puts Using Trinomial Lattices Location: Modeling Toolkit | Real Options Models Brief Description: Solving Simple Call and Put options using Trinomial Lattices and comparing with results from Binomial Lattices Requirements: Modeling Toolkit, Real Options SLS
Building and solving Trinomial Lattices is similar to building and solving Binomial Lattices, complete with the up/down jumps and risk-neutral probabilities, but it is more complicated due to the greater number of branches stemming from each node. At the limit, both the binomial and trinomial lattices yield the same result, as seen in Table 198.1. However, the lattice-building complexity is much higher for trinomial or other multinomial lattices. The main reason to use a Trinomial Lattice is that it converges to the correct option value more quickly than a binomial lattice. In the table, notice how the Trinomial Lattice reaches the correct option value with fewer steps than the Binomial Lattice requires (1,000 as compared to 5,000). Because both yield identical results at the limit, but trinomials are much more difficult to calculate and take longer to compute, the Binomial Lattice usually is used instead; a Trinomial Lattice is truly required only when the underlying asset follows a mean-reverting process. Table 198.1 shows the convergence of Trinomial and Binomial lattices. Figure 198.1 shows another example. To run this model, use the SLS software and click on Create a New Multinomial Option Model. The computed American Call is $31.99 using a 5-step trinomial, identical to the 10-step binomial lattice seen in Figure 198.2. Therefore, due to the simpler computation and the speed of computation, the SLS software uses Binomial Lattices instead of trinomials or other multinomial lattices, except when the underlying asset of the option follows a mean-reversion tendency.
TABLE 198.1 Sample convergence of results between binomials and trinomials

Steps               5        10       100      1,000    5,000
Binomial Lattice    $30.73   $29.22   $29.72   $29.77   $29.78
Trinomial Lattice   $29.22   $29.50   $29.75   $29.78   $29.78
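The convergence pattern in Table 198.1 can be reproduced with side-by-side implementations. The sketch below pairs a CRR binomial lattice with a standard Boyle-style trinomial for a European call; the inputs are illustrative (the table's own inputs are not listed), so the values will not match the table exactly, but the trinomial settles near its limit in fewer steps, which is the point.

```python
import math

def crr_call(S0, K, T, r, sigma, steps):
    """European call on a CRR binomial lattice."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt)); d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)
    disc = math.exp(-r * dt)
    V = [max(S0 * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]
    for i in range(steps - 1, -1, -1):
        V = [disc * (p * V[j + 1] + (1 - p) * V[j]) for j in range(i + 1)]
    return V[0]

def trinomial_call(S0, K, T, r, sigma, steps):
    """European call on a Boyle-style trinomial lattice (up/middle/down)."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(2.0 * dt))
    a = math.exp(r * dt / 2.0)
    b = math.exp(sigma * math.sqrt(dt / 2.0))
    pu = ((a - 1.0 / b) / (b - 1.0 / b)) ** 2
    pd = ((b - a) / (b - 1.0 / b)) ** 2
    pm = 1.0 - pu - pd
    disc = math.exp(-r * dt)
    V = [max(S0 * u ** (j - steps) - K, 0.0) for j in range(2 * steps + 1)]
    for i in range(steps - 1, -1, -1):
        V = [disc * (pu * V[j + 2] + pm * V[j + 1] + pd * V[j])
             for j in range(2 * i + 1)]
    return V[0]

for n in (5, 10, 100, 1000):
    print(n, round(crr_call(100, 100, 5, 0.05, 0.25, n), 2),
          round(trinomial_call(100, 100, 5, 0.05, 0.25, n), 2))
```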
FIGURE 198.1 Simple Trinomial Lattice solution
FIGURE 198.2 10-step Binomial Lattice comparison result
PART 3

Real Options Strategic Case Studies—Framing the Options
199. Real Options Strategic Cases—High-Tech Manufacturing: Build or Buy Decision with Real Options File Names: Real Options Strategic Cases—High-Tech Manufacturing’s Strategy A, Real Options Strategic Cases—High-Tech Manufacturing’s Strategy B, Real Options Strategic Cases—High-Tech Manufacturing’s Strategy C Location: Modeling Toolkit | Real Options Models Brief Description: Framing and solving a real options problem of choosing between build and buy strategies when the underlying project is filled with risk and uncertainty Requirements: Modeling Toolkit, Real Options SLS
Microtech, Inc. is a billion-dollar high-tech manufacturing firm interested in developing a new state-of-the-art, high-capacity micro hard drive that fits into the palm of your hand. The problem is that the larger the capacity, the larger the magnetic disk has to be and the faster it has to spin. Making small hard drives is hard enough, let alone a high-capacity hard drive that is reliable and state of the art. The risks the firm faces include market risks (will this product sell, will it sell enough, and at what price?) and private risks (will the technology work, and can we develop it fast enough, ahead of the competition?), both of which are significant enough to derail the project. In performing its due diligence, the vice president of the firm's advanced emerging technologies group found a small start-up that is currently developing such a technology and is approximately three-quarters of the way there. This start-up will initiate a patent process in the next few weeks. Microtech would like to consider acquiring the start-up before the patent process starts. The start-up has shown interest in being acquired and, based on preliminary discussions, has requested $50M for the firm. The question is, should Microtech acquire the firm and mitigate some development risk while still facing the market risk and some residual development risk? After all, the start-up has the technology only partially completed. Then again, through the acquisition, Microtech can take a potential rival out of the picture and even reduce the chance of its competitors acquiring this firm's technologies. How much is this firm really worth, compared to its asking price of $50M? What options exist to mitigate some of the market and development risks? Can Microtech take advantage of additional opportunities in the market through the acquisition? The finance staff at Microtech, with the assistance of several external consultants, began to collect all the relevant data and created a discounted cash flow (DCF) model. The best-guess present value of the benefits from the firm is $100M. This means that
the net present value (NPV) of buying the firm is $50M after accounting for the acquisition cost of $50M. The DCF model also captures the probability of technical success (using several binomial distributions, with their relevant probabilities of success in several phases multiplied together) as well as the market positioning (triangular distributions were used to simulate the different market conditions in the future). Using the Risk Simulator software, the Monte Carlo simulation's resulting annualized volatility is found to be 25%, a somewhat moderate level of risk. The finance staff also created a second DCF against which to compare the result. This second DCF models the scenario of building the technology in-house. The total development cost in present value terms will be $40M, considerably less than the acquisition cost of $50M. At first glance, Microtech might be better off building the technology in-house, providing an NPV of $60M. However, the volatility is found to be 30%, as it is riskier to develop the technology from scratch than to buy a firm with the technology almost completed. The question now becomes: What, if any, strategic real options exist for Microtech to consider? Is the NPV analysis sufficient to justify building in-house? What about all the market and private risks in the project? If acquiring the firm, are there any options to protect Microtech from failure? If building the technology itself, are there any options to mitigate the development risks? Microtech then proceeded to perform some real options framing exercises with its executives, facilitated by an external real options expert consultant. The questions raised included what risks exist and how they can be reduced. The real options consultant then put a real options framework around the discussions and came up with a preliminary strategy tree (Figure 199.1). For the first pass, four main options were conceived: mitigate the development risk of building in-house; mitigate the risk of the market; mitigate the risk of failure if acquiring the firm; and take advantage of the upside whenever possible. These options are compiled into path-dependent strategies in the figure. Strategy A is to develop the technology in-house but to mitigate the research and development (R&D) risk through a stage-gate investment process, where the total $40M required investment is spread into four steps of $10M each (in present values). At any phase of the development, the fate of the next stage is determined (i.e., whether the R&D initiative should continue depends on the outcome of the current phase). The investment can be terminated at any time, and the maximum loss will be the total investment up to that point. That is, if R&D shows bad results after one year, the initiative is abandoned, the firm exits the project, and the maximum loss is $10M, not the entire $40M as defined in the DCF model. The questions are: How much is this strategic path worth? Is stage-gating the process worth it? Strategy B is to develop the technology but hedge the market risk. That is, a preliminary Phase I market research effort is performed for $5M in the first year to obtain competitive intelligence on Microtech's competitors and to see if the market and complementary technologies exist such that the microdrive will indeed be successful. Then, depending on the results of this market research, Phase II's R&D initiative will or will not be executed.
Because the market research takes an entire year to complete, further stage-gating the R&D initiative is not an option, as it would significantly delay the product launch; Phase II is therefore a full-scale R&D effort. In this strategic path, although the market risk is mitigated through market research, the development risk still exists. Hence, a contraction option is conceived: Microtech finds a counterparty to assume the manufacturing risks by signing a two-year contract whereby, at any time within the next two years, Microtech can have this counterparty take over the development work on the microdrives' rotational latency and seek times during the R&D process. The counterparty shares in 30% of the net profits without undertaking any R&D costs. Microtech still assumes the entire $40M R&D cost but mitigates its highest development risks and also saves $10M (in present values) by not having to build up its own manufacturing competencies through hiring outside consultants and purchasing new equipment. The questions are: How much is this strategic path worth? Is the market research valuable? How much of its net profits should Microtech share with the counterparty?

FIGURE 199.1 Strategy tree on build versus buy
Strategy C is to purchase the start-up firm for $50M. By acquiring the firm, however, Microtech obtains additional options. Specifically, if the technology or market does not work out as expected, Microtech can sell the start-up (its intellectual property, patents, technology, assets, buildings, etc.) for an estimated salvage value of $25M within the first year. As the content of the start-up's intellectual property is expected to increase over time because of added development efforts, the salvage value is expected to increase by $1M each year.

If the technology is successful within the next five years, other products can be created from this microdrive base platform technology. The microdrive is applicable not only to laptops: with additional funding of $5M, the technology can be adapted into handheld global positioning system (GPS) map locators for cars and travel enthusiasts, personal pocket-size hard drives (where people can carry an entire computer on a key chain and simply plug it into a monitor to bring up the virtual computer), MP3 players, and a multitude of other products, which, by Microtech's estimates, will increase the NPV of the microdrive by 35%. However, this expansion option exists only in Strategy C, as time to market is crucial for these new products and the start-up already has three-quarters of the technology completed, speeding Microtech's time to market tremendously. The questions are: How much is the start-up actually worth to Microtech? Is $50M too high a price to pay for the company?

Figure 199.1 shows these three strategic paths and the relevant information on each strategy branch.
REAL OPTIONS STRATEGIC CASES—HIGH-TECH MANUFACTURING'S STRATEGY A

Strategy A's total strategic value is $64.65M, computed using the Multiple Asset SLS software with 100-step lattices, as seen in Figure 199.2. The NPV is $60M, indicating that the option value is $4.65M. This means there is definite value in stage-gating the R&D initiative to hedge downside risks. To follow along, open the SLS file: Strategic Cases—High-Tech Manufacturing's Strategy A.

When the annualized dividend rate exceeds 2.5%, the option value becomes zero and the total strategic value reverts to the NPV of $60M, as seen in Figure 199.3. This means that stage-gating is valuable as long as the maximum losses per year from spending more time and putting off development (lost market share and opportunity losses of net revenues from sales) do not exceed $2.5M (2.5% of $100M). Otherwise, the financial leakage is too severe, the added risk of a full-scale effort is worth taking, and the $40M should be spent immediately on a large-scale development effort. To see this, progressively modify each valuation phase's dividend rate from 0% to 2.5%.
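For readers who want to see the lattice mechanics outside of the SLS software, the sketch below prices a simplified multiphase sequential compound option in Python and sweeps the dividend rate. The $100M underlying asset, the 30% in-house volatility, and the four $10M phases come from the case; the gate timing (years 1 through 4) and the 5% risk-free rate are illustrative assumptions, so the values only approximate, and do not replicate, the SLS results.

```python
import numpy as np

def sequential_compound(S0, sigma, r, q, gate_times, gate_costs, steps=100):
    """Multiphase sequential compound option on a CRR binomial lattice.
    At each gate the firm pays the phase cost to continue or abandons
    (value floored at zero); q is the annual dividend-style leakage."""
    T = gate_times[-1]                       # last gate is the final investment
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp((r - q) * dt) - d) / (u - d)  # risk-neutral probability
    disc = np.exp(-r * dt)
    gates = {int(round(t / dt)): c for t, c in zip(gate_times, gate_costs)}

    j = np.arange(steps + 1)
    S = S0 * u ** (steps - j) * d ** j        # asset values at the last gate
    V = np.maximum(S - gates[steps], 0.0)     # final investment decision
    for i in range(steps - 1, -1, -1):
        V = disc * (p * V[:-1] + (1 - p) * V[1:])  # backward induction
        if i in gates:                        # intermediate go/no-go gate
            V = np.maximum(V - gates[i], 0.0)
    return V[0]

# Sweep the dividend (leakage) rate, as suggested in the text.
for q in (0.0, 0.01, 0.02, 0.025, 0.03):
    v = sequential_compound(100.0, 0.30, 0.05, q, [1, 2, 3, 4], [10] * 4)
    print(f"dividend {q:4.1%}: strategic value ~ ${v:6.2f}M (static NPV $60M)")
```

Running the sweep shows the strategic value eroding toward the static NPV as the annual leakage grows, which is the break-even behavior described above.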
FIGURE 199.2 Value of Strategy A

FIGURE 199.3 Strategy A's break-even point

REAL OPTIONS STRATEGIC CASES—HIGH-TECH MANUFACTURING'S STRATEGY B

The total strategic value of Strategy B is $75.90M (rounded), as seen in Figure 199.4. The NPV is $55M (computed by taking $100M − $5M − $40M), which means that the options are valued at $20.90M. So, thus far, Strategy B is the better strategic path, with a value of $75.90M. In addition, Figure 199.5 shows the strategic value without the contraction option, worth $59.06M ($4.06M option value to stage-gate with market research plus the $55M NPV). Thus, the contraction option with the counterparty to hedge the downside technical risk is worth $16.84M ($75.90M − $59.06M). A further analysis can be performed by changing the contraction factor (how much is allocated to the counterparty) and the amount of savings, as seen in Figure 199.6.
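The contraction piece of Strategy B can be approximated on its own with a simple American-style lattice, sketched below. The 70% retained share (the counterparty takes 30% of net profits), the $10M savings, the two-year contract window, and the $100M project PV come from the case; the 5% risk-free rate and the stand-alone treatment of the contraction right are assumptions, so this captures the flavor of the analysis rather than the $16.84M SLS result.

```python
import numpy as np

def contraction_option(S0, sigma, r, q, T, keep, savings, steps=100):
    """American option to contract: at any node within T years the firm can
    hand over development, keeping `keep` of the project value plus fixed
    savings; otherwise it continues with the full project value."""
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt)); d = 1.0 / u
    p = (np.exp((r - q) * dt) - d) / (u - d)
    disc = np.exp(-r * dt)
    j = np.arange(steps + 1)
    S = S0 * u ** (steps - j) * d ** j
    V = np.maximum(S, keep * S + savings)              # decision at expiration
    for i in range(steps - 1, -1, -1):
        j = np.arange(i + 1)
        S = S0 * u ** (i - j) * d ** j
        V = np.maximum(disc * (p * V[:-1] + (1 - p) * V[1:]),
                       keep * S + savings)             # contract at any time
    return V[0]

v = contraction_option(100.0, 0.30, 0.05, 0.0, 2.0, keep=0.70, savings=10.0)
print(f"project value with contraction right ~ ${v:.2f}M "
      f"(incremental option value ~ ${v - 100.0:.2f}M)")
```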
REAL OPTIONS STRATEGIC CASES—HIGH-TECH MANUFACTURING'S STRATEGY C

Finally, the total strategic value for Strategy C (Figure 199.7) is $131.12M (rounded) less the $50M purchase price of the start-up company, or a net strategic value of $81.12M. That is, Microtech should be willing to pay no more than $55.22M for the start-up (i.e., $50M + $81.12M − $75.90M); otherwise it is better off pursuing Strategy B and building the technology itself. Thus, the optimal strategy is to purchase the start-up company and go to market quickly, with the ability to abandon and sell the start-up should things fail, or to invest an additional R&D sum later on to develop spin-off technologies.

Had real options analysis not been performed, Microtech would have chosen to develop the technology itself immediately and spend $40M, as that strategy yields the highest NPV when real options and risk mitigation are not considered. Microtech would have made a serious decision blunder and taken unnecessary risks. By performing the real options analysis, additional spin-off products and opportunities surface, which prove to be highly valuable.
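The customized option in Strategy C (abandon for a growing salvage value, or expand by 35% for $5M) can likewise be sketched as a single backward-induction lattice. The salvage and expansion terms, the 25% acquisition volatility, and the $100M PV asset come from the case; the five-year horizon, the 5% risk-free rate, and the salvage growth schedule are assumptions, so the result only approximates the $131.12M SLS value.

```python
import numpy as np

def strategy_c_option(S0, sigma, r, q, T, steps=100):
    """Customized American option: at any node, abandon for a salvage value
    that starts at $25M and grows $1M per year (an assumed schedule), or
    pay $5M to expand the project value by 35%; otherwise keep it open."""
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt)); d = 1.0 / u
    p = (np.exp((r - q) * dt) - d) / (u - d)
    disc = np.exp(-r * dt)

    def S(i):                                    # asset values at step i
        j = np.arange(i + 1)
        return S0 * u ** (i - j) * d ** j

    salvage = lambda t: 25.0 + 1.0 * t           # growing salvage value
    expand = lambda s: 1.35 * s - 5.0            # 35% expansion for $5M

    s = S(steps)
    V = np.maximum.reduce([s, np.full_like(s, salvage(T)), expand(s)])
    for i in range(steps - 1, -1, -1):
        V = disc * (p * V[:-1] + (1 - p) * V[1:])
        s = S(i)
        V = np.maximum.reduce([V, np.full_like(s, salvage(i * dt)), expand(s)])
    return V[0]

print(f"Strategy C strategic value ~ ${strategy_c_option(100.0, 0.25, 0.05, 0.0, 5.0):.2f}M")
```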
FIGURE 199.4 Value of Strategy B
FIGURE 199.5 Strategy B without contraction
FIGURE 199.6 Decision table on savings and contraction factors
FIGURE 199.7 Value of Strategy C
200. Real Options Strategic Cases—Oil and Gas: Farm-Outs, Options to Defer, and Value of Information

File Names: Real Options Strategic Cases—Oil and Gas—Strategy A, Real Options Strategic Cases—Oil and Gas—Strategy B
Location: Modeling Toolkit | Real Options Models
Brief Description: Framing and solving a real options problem of choosing among various flexibilities of farm-outs and outsourcing, deferring decisions, and the value of information, when the underlying project is filled with risk and uncertainty
Requirements: Modeling Toolkit, Real Options SLS
An oil and gas company, NewOil, is in the process of exploring a new field development. It intends to start drilling and exploring a region in Alaska for oil. Preliminary geologic and aerial surveys indicate that there is an optimal area for drilling. However, NewOil faces two major sources of uncertainty: market uncertainties (oil price volatility and economic conditions) and private uncertainties (geology, porosity, oil pressure, reservoir size, etc.). Using comparable analysis and oil price forecasts, the development of this new oil field is expected to yield a present value (PV) of $200M, but drilling may cost up to $100M (in present values). The firm is trying to see if any strategic options exist to mitigate the market and private risks, as well as to find the optimal strategic path.

Figure 200.1 represents the outcome of a strategic brainstorming activity at NewOil. NewOil can simply take the risk and start drilling, shown as Strategy C. In this case, the NPV is found to be $100M, but there is a tremendous amount of risk in this strategy. To reduce the private risk, either a three-dimensional (3D) seismic study can be performed or a series of test wells can be drilled. The 3D-seismic study will cost about $5M and take only 0.5 years to complete; the information obtained is fairly reliable but still contains significant uncertainty. In contrast, test wells will cost NewOil $10M and take 2 years to complete, but they provide more solid and accurate information than seismic studies, as the quality of the oil, the pressure, caps, porosity, and other geologic factors become known over time as more test wells are drilled.

Finally, to reduce the market risk, specifically oil price declines and volatility, NewOil has decided to execute a joint venture with LocalOil, a small second-tier oil and gas engineering company specializing in managing and running oil rigs. NewOil will provide LocalOil 49% of its gross profits from the oil field provided that LocalOil takes over the entire drilling operation at the request of NewOil. LocalOil benefits from a now fully developed oil field without any investments of its own, while NewOil benefits by pulling out its field personnel and saving $30M in total operating expenses during the remaining economic life of the oil rig, while still capturing a 51% share of the field's production.

FIGURE 200.1 Strategy tree for oil and gas

The question now is: Which strategic path in Figure 200.1 is optimal for NewOil? Should it take some risk and execute the seismic study, or completely reduce the private risk through a series of test wells? Perhaps the risk is small enough that the opportunity losses of putting off immediate development far surpass the risks taken, and the oil field should be developed immediately.

Figure 200.2 shows the valuation of Strategy A. The NPV of Strategy A is $90M, which means that the option value is $33.74M ($123.74M − $90M). This means that putting off development by drilling some test wells, and farming out operations in the future should oil prices turn out lower than NewOil expects, is worth a lot. This total strategic value of $123.74M is now compared with the total strategic value of Strategy B, valued at $129.58M (Figure 200.3). The cheaper and faster seismic study in Strategy B brings with it a higher volatility (Strategy B has a 35% volatility compared to 30% for Strategy A), and having the ability to farm out development of the field is worth more under such circumstances. In addition, the added dividend outflow in Strategy A reduces the option value of deferring and obtaining more valid information through drilling test wells. Thus, the higher the cost of waiting and holding on to this option, the lower the option value.
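The interplay between volatility and the cost of waiting can be illustrated with a plain American deferral option, sketched below. The $200M field PV, the $100M drilling cost, and the 30% and 35% volatilities come from the case; the five-year horizon, the 5% risk-free rate, and the dividend values swept are illustrative assumptions.

```python
import numpy as np

def american_call(S0, X, T, r, q, sigma, steps=100):
    """American deferral option on a CRR lattice: the right (not the
    obligation) to pay the drilling cost X for a field worth S0 in PV;
    q plays the role of the cost of waiting (forgone production)."""
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt)); d = 1.0 / u
    p = (np.exp((r - q) * dt) - d) / (u - d)
    disc = np.exp(-r * dt)
    j = np.arange(steps + 1)
    V = np.maximum(S0 * u ** (steps - j) * d ** j - X, 0.0)
    for i in range(steps - 1, -1, -1):
        j = np.arange(i + 1)
        V = np.maximum(disc * (p * V[:-1] + (1 - p) * V[1:]),
                       S0 * u ** (i - j) * d ** j - X)   # exercise any time
    return V[0]

for sigma in (0.30, 0.35):
    for q in (0.00, 0.03, 0.06):
        v = american_call(200.0, 100.0, 5.0, 0.05, q, sigma)
        print(f"vol {sigma:.0%}, dividend {q:.0%}: deferral value ~ ${v:7.2f}M")
```

The printout makes both effects visible: higher volatility raises the deferral value, while a higher dividend-style cost of waiting lowers it.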
FIGURE 200.2 Value of Strategy A
FIGURE 200.3 Value of Strategy B
201. Real Options Strategic Cases—Pharmaceutical Development: Value of Perfect Information and Optimal Trigger Values

File Names: Real Options—Strategic Cases—R&D Stage-Gate Process A, Real Options—Strategic Cases—R&D Stage-Gate Process B
Location: Modeling Toolkit | Real Options Models
Brief Description: Framing and valuing a Real Options application of performing stage-gate development processes versus fast-tracking a development, and the value of information, when the underlying project is filled with risk and uncertainty
Requirements: Modeling Toolkit, Real Options SLS
Suppose BioGen, a large multibillion-dollar pharmaceutical firm, is thinking of developing a new type of insulin that can be inhaled and absorbed directly into the bloodstream. This is indeed a novel and honorable idea. Imagine what this means to diabetics who would no longer need painful and frequent injections. The problem is that this new type of insulin requires a brand-new development effort. If the uncertainties of the market, competition, drug development, and Food and Drug Administration approval are high, perhaps a base insulin drug that can be ingested should be developed first. The ingestible version is a required precursor to the inhaled version. BioGen can decide either to take the risk and fast-track development of the inhaled version, or to buy an option to defer: wait and see whether the ingestible version works first. If this precursor works, the firm has the option to expand into the inhaled version. How much should BioGen be willing to spend on performing additional tests on the precursor, and under what circumstances should the inhaled version be pursued directly?

Suppose that BioGen needs to spend $100M on developing the inhaled version. If successful, the expected net present value (NPV) is $24M (i.e., the $124M present value [PV] asset less the $100M PV development cost). The probability of technical success, market prices, revenues, and operating expenses are all simulated in the discounted cash flow model; the resulting cash flow stream has a volatility of 22%. In contrast, if BioGen first develops the ingestible version, it will cost $10M and take an entire year to develop, forcing the later-phase development of the inhaled version to start one year later. Because this inhaled version uses similar precursors, the development cost is only $95M, not $100M. However, by being one year late to market, the PV asset of doing an ingestible version before attempting the inhaled version will be reduced to $120M. This means that the ingestible-inhaled strategy will yield an NPV of $15M ($120M − $10M − $95M). Figure 201.1 shows these two competing strategies.

FIGURE 201.1 Real Options strategy tree for pharmaceutical development

Clearly, under an NPV analysis, the best approach is to pursue the inhaled version directly. However, when a Real Options analysis is performed by applying a two-phased Sequential Compound Option, the total strategic value is found to be $27.24M, as seen in Figure 201.2. The strategic option value of being able to defer investments and wait until more information becomes available and uncertainties are resolved is worth $12.24M, because the NPV is worth $15M. In other words, the expected value of perfect information is $12.24M. This amount indicates that, for the intermediate phase of developing an ingestible precursor that can be used to obtain credible information on whether further development is warranted, the firm should on average be willing to spend no more than $22.24M (i.e., $12.24M + $10M). If the cost of obtaining this credible information exceeds that value, it is optimal to take the risk and execute the entire project immediately at $100M (Strategy B).

FIGURE 201.2 Value of Strategy A

In contrast, if the volatility decreases (uncertainty and risk are lower), the strategic option value decreases. In addition, when the cost of waiting (described by the dividend rate as a percentage of the asset value) increases, it is better not to defer and wait that long. Therefore, the higher the dividend rate, the lower the strategic option value. For instance, at a 17.20% dividend rate and 22% volatility, the resulting value reverts to the NPV of $15M (Figure 201.3), which means that the option value is zero and it is better to execute immediately, as the cost of waiting far outstrips the value of being able to wait given the level of volatility (uncertainty and risk). Finally, if risks and uncertainty increase significantly, even with a high cost of waiting (e.g., a 17.20% dividend rate at 30% volatility), it is still valuable to wait.

FIGURE 201.3 Sequential Compound Option's break-even point

This model provides the decision maker with a view into the optimal balance between waiting for more information (the expected value of perfect information) and the cost of waiting. You can analyze this balance by creating strategic options to defer investments through development stages where at every stage the project is
reevaluated as to whether it is beneficial to proceed to the next phase. You can vary the volatility and dividend inputs to determine their interactions—specifically, where the break-even points are for different combinations of volatilities and dividends. Using this information, firms can make better go or no-go decisions (e.g., break-even volatility points can be traced back into the discounted cash flow model to estimate the probability of crossing over, and this ability to wait becomes valuable).
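A minimal two-phase Sequential Compound Option can be reproduced with the backward-induction sketch below. The $120M PV asset, the 22% volatility, and the $10M and $95M phase costs come from the case; the gate timing (years 1 and 2) and the 5% risk-free rate are assumptions, so the output approximates rather than replicates the $27.24M SLS result.

```python
import numpy as np

def two_phase_sequential(S0, sigma, r, q, t1, c1, t2, c2, steps=100):
    """Two-phase sequential compound option: pay c1 at t1 (the ingestible
    precursor) to keep alive the option to pay c2 at t2 and launch the
    inhaled version, whose PV asset is S0; q is the cost of waiting."""
    dt = t2 / steps
    u = np.exp(sigma * np.sqrt(dt)); d = 1.0 / u
    p = (np.exp((r - q) * dt) - d) / (u - d)
    disc = np.exp(-r * dt)
    g1 = int(round(t1 / dt))                      # step of the first gate
    j = np.arange(steps + 1)
    V = np.maximum(S0 * u ** (steps - j) * d ** j - c2, 0.0)   # launch gate
    for i in range(steps - 1, -1, -1):
        V = disc * (p * V[:-1] + (1 - p) * V[1:])
        if i == g1:
            V = np.maximum(V - c1, 0.0)           # go/no-go on the precursor
    return V[0]

v = two_phase_sequential(120.0, 0.22, 0.05, 0.0, 1.0, 10.0, 2.0, 95.0)
print(f"strategic value ~ ${v:.2f}M versus the $15M static NPV")
```

Raising the q input in this sketch reproduces the qualitative break-even behavior described above: as the cost of waiting grows, the value collapses toward the static NPV.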
202. Real Options Strategic Cases—Option to Switch Inputs

File Names: Real Options—Strategic Cases—Switching Option's Strategy A, Real Options—Strategic Cases—Switching Option's Strategy B
Location: Modeling Toolkit | Real Options Models
Brief Description: Framing and valuing a Real Options application of the ability to switch use when the underlying project is filled with risk and uncertainty
Requirements: Modeling Toolkit, Real Options SLS
This strategic case is a Switching Option (an electric generating plant can be retrofitted to take both coke and coal as inputs, as opposed to the current situation where only one type of input can be used) and can be summarized in the strategy tree shown in Figure 202.1.

FIGURE 202.1 Strategy tree for Switching Options

In Strategy A, signing a contract that comes into play only when prices hit a lower level is a down-and-in barrier call option with a lower barrier. The present value (PV) of the underlying asset is the current net present value (NPV) of staying on course, and the lower barrier is the NPV obtained at the lower price level where the contract is executed. Signing the contract is worth more than Strategy B's ability to switch; hence, Strategy A should be executed. Besides, this is a lot cheaper than retrofitting the plant to take different inputs. The total strategic value is positive, as the option value of this contract is $6.22M (Figure 202.2) while the cost of executing the contract to buy the option is $5M.

FIGURE 202.2 Switching Option Strategy A

In Strategy B, the ability to switch back and forth has value even if both input fuels are currently at the same price, although the NPV analysis shows no value in the ability to switch. If using coal right now has an NPV of $10M and coke also has an NPV of $10M because their price levels are currently the same, then the act of switching incurs a $0.5M cost, and switching now yields an NPV of −$0.50M. Obviously, the ability to switch is useless in an NPV paradigm. However, in a real options paradigm, having the ability to switch, without necessarily exercising it right now, is worth a lot. For instance, coal and coke have different price volatilities and may or may not be correlated (a negative correlation increases the Switching Option's value, as the new input acts like a hedging vehicle). The price movements of both inputs are highly volatile, and there is a chance that the first input's prices will exceed the second input's, such that switching becomes optimal. The strategic value is $5.49M (regardless of whether the switch is from coal with 35% volatility to coke with 55% volatility, or vice versa). However, the cost of obtaining this option (retrofitting the plant) is $7.25M, so the total strategic value is negative, making this strategy suboptimal (Figure 202.3).
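The correlation effect can be seen in closed form with Margrabe's exchange-option formula, a European-style analogue of the option to switch one input for another (it ignores the $0.5M switching cost and the ability to switch repeatedly). The $10M values and the 35% and 55% volatilities come from the case; the two-year horizon is an assumption for illustration.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def margrabe_switch(S1, S2, T, sigma1, sigma2, rho):
    """Margrabe's formula: European option to exchange asset 2 for asset 1.
    The effective volatility shrinks as the correlation rho rises, which is
    why a negative correlation makes the switch more valuable."""
    sigma = sqrt(sigma1**2 + sigma2**2 - 2.0 * rho * sigma1 * sigma2)
    d1 = (log(S1 / S2) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S1 * N(d1) - S2 * N(d2)

for rho in (-0.5, 0.0, 0.5):
    v = margrabe_switch(10.0, 10.0, 2.0, 0.35, 0.55, rho)
    print(f"correlation {rho:+.1f}: one-shot switch value ~ ${v:.2f}M")
```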
FIGURE 202.3 Switching Option Strategy B
203. Valuation—Convertible Warrants with a Vesting Period and Put Protection

File Names: Valuation—Warrant—Combined Value, Valuation—Warrant—Put Only, Valuation—Warrant—Warrant Only
Location: Modeling Toolkit | Real Options Models
Brief Description: Framing and valuing issues of warrants and protective warrants with a put provision
Requirements: Modeling Toolkit, Real Options SLS
This case study provides a sample application of the Real Options SLS software to valuing a warrant (an instrument that can be converted into a stock, similar to a Call Option) that has a protective Put Option associated with it. The analysis applies both a customized Binomial Lattice and a closed-form Black-Scholes model for comparison and benchmarking purposes. The valuation analysis was based on an actual consulting project, but all proprietary information has been modified except for certain basic publicly available information; the accuracy of the results depends on this factual information at the time of valuation. This case details the input assumptions used as well as some benchmark due diligence to check the results. Certain technical details have been left out to simplify the case.

The client company recently acquired a small information technology firm. The acquisition consisted of both cash and warrants, which are convertible into stocks. Because the client's stocks are fairly volatile, the acquired firm negotiated a protective put to hedge its downside risks. In return for providing this protective put, the client requested that the warrant be exercisable only if the target firm is solvent and its gross margins exceed 33% and are no less than $10 million. This problem cannot be solved using a Black-Scholes model because there are dividends, a vesting period, and a threshold price at which the put protection can be exercised, and because the put cannot be exercised unless the warrant is converted into a stock, and only when the stock price is below $33. To summarize, the list below shows the assumptions and requirements in this exotic warrant:
Stock price on grant date: $30.12
Warrant strike price: $15.00
Warrant maturity: 10.00 years (grant date)
Risk-free rate: 4.24% (grant date)
Volatility: 29.50%
Dividend rate: 0.51%
Put threshold price: $33.00
Vesting for warrant: 3 years
Vesting for put option: 5 years
Further, these additional requirements were modeled:
1. The protective put option can be exercised only if the warrant is exercised.
2. The put option can be exercised only if the stock price is below $33.00 at the time of exercise.
3. The warrant can be exercised only if the recipient's gross margin equals or exceeds 33% and is no less than $10 million. A simulation forecast puts an 85% to 90% uniform probability of occurrence for this event.
4. The warrant can be exercised only if the recipient is solvent. Another simulation forecast puts a 90% to 95% uniform probability of occurrence for this event.
5. The protective put payout is the maximum of the spread between the put threshold price and the client's common stock price, or the warrant price.
The risk-free rate is based on the 10-year U.S. Treasury note. The volatility estimate is based on the client's historical weekly closing prices for the past one, two, and three years. The volatilities are estimated using the standard deviation of the natural logarithmic relative returns, annualized, and then averaged. The dividend rate is assumed to be 0.51% based on available market data on client shares. The total probability of exceeding the gross margin threshold and meeting the solvency requirement is 80.9375%, calculated as the product of the midpoint probability estimates of the two independent events (87.50% times 92.50%), both of which were based on simulation forecasts using historical financial data. The only practical way to value such a protective put on a warrant is with Binomial Lattices; a Black-Scholes model is used to benchmark the results.
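As an illustration of that volatility procedure, the sketch below annualizes the standard deviation of weekly logarithmic relative returns. The price series here is hypothetical, since the client's actual closing prices are proprietary.

```python
import numpy as np

# Hypothetical weekly closing prices; the actual analysis used one, two, and
# three years of the client's weekly closes and averaged the three estimates.
prices = np.array([28.40, 29.10, 28.75, 30.20, 30.90, 30.05,
                   31.40, 32.00, 31.15, 30.70, 31.55, 32.10])
log_returns = np.log(prices[1:] / prices[:-1])   # natural log relative returns
weekly_vol = log_returns.std(ddof=1)             # sample standard deviation
annualized_vol = weekly_vol * np.sqrt(52)        # 52 weeks per year
print(f"annualized volatility estimate ~ {annualized_vol:.2%}")
```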
WARRANT VALUATION

In order to solve the warrant part of this exotic vehicle, high-level analysis rules need to be created:

1. If the period ≥ 3 years, then at maturity, the profit-maximizing decision is: Max (exercise the warrant, accounting for the probability that the requirements are met; or let the warrant expire worthless).
2. If the period ≥ 3 years, then prior to maturity, the profit-maximizing decision is: Max (exercise the warrant, accounting for the probability that the requirements are met; or keep the option open for future execution).
3. If the period < 3 years, then hold on to the warrant, as no execution is allowed.
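These rules translate directly into a backward-induction lattice. The sketch below is a simplified stand-alone version of the warrant leg only (it omits the put protection and the nested-option logic), using the case inputs and weighting each exercise payoff by the 80.9375% probability that the requirements are met.

```python
import numpy as np

def vesting_warrant(S0, X, T, r, q, sigma, vest, p_req, steps=100):
    """Bermudan-style warrant per the rules above: exercise is allowed only
    after the vesting period, and every exercise payoff is weighted by
    p_req, the probability that the solvency and gross-margin requirements
    are met (80.9375% here)."""
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt)); d = 1.0 / u
    p = (np.exp((r - q) * dt) - d) / (u - d)
    disc = np.exp(-r * dt)
    j = np.arange(steps + 1)
    S = S0 * u ** (steps - j) * d ** j
    V = p_req * np.maximum(S - X, 0.0)            # rule 1: maturity decision
    for i in range(steps - 1, -1, -1):
        j = np.arange(i + 1)
        S = S0 * u ** (i - j) * d ** j
        V = disc * (p * V[:-1] + (1 - p) * V[1:])
        if i * dt >= vest:                        # rule 2: exercise or hold
            V = np.maximum(V, p_req * (S - X))
        # rule 3: before vesting, only holding is allowed
    return V[0]

value = vesting_warrant(30.12, 15.00, 10.0, 0.0424, 0.0051, 0.295,
                        vest=3.0, p_req=0.809375)
print(f"warrant value ~ ${value:.2f}")   # should land near the $15.98 result
```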
PROTECTIVE PUT OPTION VALUATION

The same is done for the Protective Put Option:

1. If the period ≥ 5 years, then at maturity, the profit-maximizing decision is: Max (if the stock price is < $33, exercise the warrant and collect the Protective Put payout, accounting for the probability that the requirements are met; or let the warrant and Put Option expire worthless).
2. If the period ≥ 5 years, then prior to maturity, the profit-maximizing decision is: Max (if the stock price is < $33, exercise the warrant and collect the Protective Put payout, accounting for the probability that the requirements are met; or keep the option open for future execution).
3. If the period < 5 years, then hold on to the Put Option, as no execution is allowed.

The binomial model used is a combination Bermudan vesting nested option: all the requirements (vesting periods, threshold price, probability of solvency, probability of exceeding the gross margin requirements) have to be met before the warrant can be executed, and the Put Option can be executed only if the warrant is executed. The warrant, however, can be executed even if the Protective Put is not.
ANALYTICAL RESULTS

A summary of the results follows. The results start with a decomposition of the Warrant call and the Protective Put, valued independently. These results are then compared to benchmarks to ascertain their accuracy and model reliability. Then a combination of both instruments is valued in a mutually exclusive Nested Option model. The result of interest is the Combined Option model, but such a model can be obtained only by first decomposing the parts. The analysis was performed using the SLS software. To follow along, you can start the Single Asset SLS software and load the relevant example files: Valuation—Warrant—Combined Value, Valuation—Warrant—Put Only, and Valuation—Warrant—Warrant Only.
A. Warrant at Grant Date
Naive Black-Scholes (benchmark): $19.71
Adjusted Black-Scholes (benchmark): $15.95 (probability-adjusted)
Binomial Lattice (100 steps): $15.98 (using SLS)
As can be seen, the Binomial Lattice for the Warrant converges to the Black-Scholes results. The reason for this convergence is that the dividend rate is low, making early exercise suboptimal, so the warrant is worth only slightly more than a simple European Option. See Figure 203.1 for the details.
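The benchmarks themselves are straightforward to reproduce. The sketch below computes the generalized Black-Scholes call on the case inputs and then applies the 80.9375% probability adjustment.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def bs_call(S, X, T, r, q, sigma):
    """Generalized Black-Scholes call with a continuous dividend yield."""
    d1 = (log(S / X) + (r - q + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S * exp(-q * T) * N(d1) - X * exp(-r * T) * N(d2)

naive = bs_call(30.12, 15.00, 10.0, 0.0424, 0.0051, 0.295)
print(f"naive Black-Scholes benchmark:     ${naive:.2f}")             # ~ $19.71
print(f"probability-adjusted (80.9375%):   ${naive * 0.809375:.2f}")  # ~ $15.95
```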
FIGURE 203.1 Warrant valuation at grant date
B. Protective Put Option at Grant Date
Static protection value (total): $1.5 million (100,000 warrants granted)
Static protection value (per warrant): $15.00 (guaranteed minimum)
Adjusted static protection value: $12.14 (probability-adjusted benchmark)
Binomial Lattice (100 steps): $12.08 (using SLS)
The analysis can be seen in Figure 203.2. The analysis up to this point decomposes the Warrant call and the Protective Put options, and their values are comparable to the static benchmarks, indicating that the models are specified correctly and the results are accurate. However, the warrant cannot be valued separately from the Protective Put, because the two are combined into one instrument. Valuing them separately implies that at certain points and conditions in time, the holder could exercise both the call and the put protection together with another call, which constitutes double-counting. Thus, under the mutually exclusive condition (either the call is exercised alone, or the Protective Put is exercised together with the call, but not both), a combination valuation is performed. The results are shown next.
FIGURE 203.2 Protective Put at grant date
C. Combination of Warrant and Protective Put Option at Grant Date
Black-Scholes Call Option: $19.71 (benchmark)
Black-Scholes Put Option: $0.91 (benchmark)
Combination of both Black-Scholes: $20.62 (sum of both benchmarks)
Binomial Lattice (100 steps): $22.37
Figure 203.3 illustrates the analysis performed. Using the Black-Scholes Call and Put Option models as benchmarks, the sought-after result of $22.37 is seen to be valid, given that the decompositions of the model are also valid. The total combination value has to exceed the Black-Scholes sum because the Warrant-Put is an American Option (with vesting requirements). To summarize, the analysis could not have been completed without the Single Asset SLS software; even for such complicated instruments, the software makes pricing relatively straightforward.
FIGURE 203.3 Combined Warrant with Protective Put at grant date
APPENDIX A

List of Models
What follows is a list of the models included in the Modeling Toolkit. As software programs are continually upgraded and improved, please check the user manual for the latest list of models available after this book's printing date. This toolkit comprises over 800 analytical models and functions (see Appendix B for the list of functions), with about 300 analytical model Excel/SLS templates and example spreadsheets (see the following list) covering the areas of risk analysis, simulation, forecasting, Basel II risk analysis, credit and default risk, statistical models, and much more. This toolkit is a set of mathematically sophisticated models written in C++ and linked into Excel spreadsheets.
Analytics
1. Central Limit Theorem
2. Central Limit Theorem (Lottery Numbers)
3. Flaw of Averages
4. Mathematical Integration
5. Parametric and Nonparametric Hypothesis Tests Data Set
6. Projectile Motion
7. Regression Diagnostics
8. Ships in the Night
9. Statistical Analysis
10. Weighting of Ratios
Banking Models
11. Audit of Construction Lending
12. Banker's Construction Budget
13. Classified Breakeven Loan Inventory
14. Classified Loan Borrowing Base
15. Classified Loan Cash Budget and Overdraft Facilities
16. Federal Reserve Camels Rating System
17. Firm in Financial Distress
18. Project Finance Risk Rating
19. Queuing Models
20. Reconciling Enron's Cash Flow
21. Risk Rating Model
22. Sample Cash Flow Model
23. Sensitivity Projections
24. Stochastic Loan Pricing Model
25. Valuation and Appraisal
Credit Analysis
26. Credit Premium
27. Credit Default Swaps and Credit Spread Options
28. Credit Default Swaps (Counterparty Default and Correlations)
29. Credit Risk and Effects on Prices
30. External Debt Rating and Spreads
31. Internal Credit Risk Rating Model
32. Profit Cost Analysis of New Credit
Debt Analysis
33. Asset Equity Parity Model
34. Cox Model on Price and Yield of Risky Debt with Mean Reverting Rates
35. Debt Repayment and Amortization
36. Debt Sensitivity Models
37. Merton Price of Risky Debt with Stochastic Asset and Interest
38. Vasicek Debt Option Valuation
39. Vasicek Price and Yield of Risky Debt
Decision Analysis
40. Decision Tree Basics
41. Decision Tree—EVPI, Minimax, and Bayes Theorem
42. Economic Order Quantity and Inventory Reorder Point
43. Economic Order Quantity and Optimal Manufacturing
44. Expected Utility Analysis
45. Inventory Control
46. Queuing Models
Exotic Options
47. American, Bermudan and European Options
48. Asian Arithmetic
49. Asian Geometric
50. Asset or Nothing
51. Barrier Options
52. Binary Digital Options
53. Cash or Nothing
54. Credit Spread Options
55. Commodity Options
56. Complex Chooser
57. Currency Options
58. Double Barrier Options
59. Exchange Assets
60. Extreme Spreads
61. Foreign Equity Linked Forex
62. Foreign Equity Domestic Currency
63. Foreign Equity Fixed Forex
64. Foreign Takeover Options
65. Forward Start
66. Futures and Forward Options
67. Gap Options
68. Graduated Barriers
69. Implied Trinomial Lattices
70. Index Options
71. Inverse Gamma Out-of-the-Money Options
72. Jump Diffusion
73. Leptokurtic and Skewed Options
74. Lookback Fixed Strike Partial Time
75. Lookback Fixed Strike
76. Lookback Floating Strike Partial Time
77. Lookback Floating Strike
78. Min and Max of Two Assets
79. Options on Options
80. Options Collar
81. Perpetual Options
82. Simple Chooser
83. Spread on Futures
84. Supershares
85. Time Switch
86. Trading Day Corrections
87. Two-Asset 3D Options
88. Two Assets Barrier
89. Two Assets Cash
90. Two Assets Correlated
91. Uneven Dividends
92. Writer Extendible
Forecasting
93. Brownian Motion Stochastic Process
94. Data Diagnostics
95. Econometrics, Correlations, and Multiple Regression Modeling
96. Exponential J-Growth Curves
97. Forecast Distribution at Horizon
98. Forecasting Manual Computations
99. Jump-Diffusion Stochastic Process
100. Linear Interpolation
101. Logistic S-Growth Curves
102. Markov Chains and Market Share
103. Mean-Reverting Stochastic Process
104. Multiple Regression
105. Nonlinear Extrapolation
106. Stochastic Processes
107. Time-Series ARIMA
108. Time-Series Analysis
Industry Applications
109. Biotech—Manufacturing Strategy
110. Biotech—In-licensing and Deal Structuring
111. Biotech—Investment Valuation
112. Electric Utility—Efficient Frontier Generation
113. Electric Utility—Electricity Contract Risk
114. Information Technology—Forecasting Utilization
115. Information Technology—IT Security Decision Analysis
116. IT—Risk Management Investment Decision
117. Real Estate—Commercial ROI
Optimization
118. Continuous Portfolio Allocation
119. Discrete Project Selection
120. Inventory Optimization
121. Investment Portfolio Allocation
122. Investment Portfolio Allocation (Part I)
123. Investment Portfolio Allocation (Part II)
124. Military Portfolio and Efficient Frontier
125. Optimal Pricing with Elasticity
126. Optimization of a Harvest Model
127. Optimizing Ordinary Least Squares
128. Stochastic Portfolio Allocation
Options Analysis
129. Binary Digital Instruments
130. Inverse Floater Bond Lattice
131. Options Trading Strategies
132. Options Adjusted Spreads on Debt
133. Options on Debt
Probability of Default
134. Bond Yields and Spreads (Market Comparable)
135. Empirical (Individuals)
136. External Options Model (Public Company)
137. Merton Internal Model (Private Company)
138. Merton Market Options Model (Industry Comparable)
Project Management
139. Cost Estimation Model
140. Critical Path Analysis (CPM, PERT, GANTT)
141. Project Timing
Real Options SLS
142. Employee Stock Options—Simple American Call
143. Employee Stock Options—Simple Bermudan Call with Vesting
144. Employee Stock Options—Simple European Call
145. Employee Stock Options—Suboptimal Exercise
146. Employee Stock Options—Vesting and Suboptimal Exercise
147. Employee Stock Options—Vesting, Blackout, Suboptimal, Forfeiture
148. Exotic Options—Accruals on Basket of Assets
149. Exotic Options—American Call Option with Dividends
150. Exotic Options—American Call Option on Foreign Exchange
151. Exotic Options—American Call Option on Index Futures
152. Exotic Options—Barrier Option—Down and In Lower Barrier
153. Exotic Options—Barrier Option—Down and Out Lower Barrier
154. Exotic Options—Barrier Option—Up and In Upper Barrier Call
155. Exotic Options—Barrier Option—Up and In, Down and In Double Barrier Call
156. Exotic Options—Barrier Option—Up and Out Upper Barrier Call
157. Exotic Options—Barrier Option—Up and Out, Down and Out Double Barrier Call
158. Exotic Options—Basic American, European, versus Bermudan Call Options
159. Exotic Options—Chooser Option
160. Exotic Options—Equity Linked Notes
161. Exotic Options—European Call Option with Dividends
162. Exotic Options—Range Accruals
163. Options Analysis—Plain Vanilla Call Option I
164. Options Analysis—Plain Vanilla Call Option II
165. Options Analysis—Plain Vanilla Call Option III
166. Options Analysis—Plain Vanilla Call Option IV
167. Options Analysis—Plain Vanilla Put Option
168. Real Options—Abandonment American Option
169. Real Options—Abandonment Bermudan Option
170. Real Options—Abandonment Customized Option
171. Real Options—Abandonment European Option
172. Real Options—Contraction American and European Option
173. Real Options—Contraction Bermudan Option
174. Real Options—Contraction Customized Option
175. Real Options—Dual-Asset Rainbow Option Pentanomial Lattice
176. Real Options—Excel-Based Options Models
177. Real Options—Exotic Complex Floating American Chooser
178. Real Options—Exotic Complex Floating European Chooser
179. Real Options—Expand Contract Abandon American and European Option
180. Real Options—Expand Contract Abandon Bermudan Option
181. Real Options—Expand Contract Abandon Customized Option I
182. Real Options—Expand Contract Abandon Customized Option II
183. Real Options—Expansion American and European Option
184. Real Options—Expansion Bermudan Option
185. Real Options—Expansion Customized Option
186. Real Options—Jump Diffusion Calls and Puts Using Quadranomial Lattices
187. Real Options—Mean-Reverting Calls and Puts Using Trinomial Lattices
188. Real Options—Multiple Asset Competing Options (3D Binomial)
189. Real Options—Multiple-Phased Complex Sequential Compound Option
190. Real Options—Multiple-Phased Sequential Compound Option
191. Real Options—Multiple-Phased Simultaneous Compound Option
192. Real Options—Simple Calls and Puts Using Trinomial Lattices
193. Real Options—Simple Two-Phased Sequential Compound Option
194. Real Options—Simple Two-Phased Simultaneous Compound Option
195. Real Options—Strategic Cases—High-Tech Manufacturing Strategy A
196. Real Options—Strategic Cases—High-Tech Manufacturing Strategy B
197. Real Options—Strategic Cases—High-Tech Manufacturing Strategy C
198. Real Options—Strategic Cases—Oil and Gas—Strategy A
199. Real Options—Strategic Cases—Oil and Gas—Strategy B
200. Real Options—Strategic Cases—R&D Stage-Gate Process A
201. Real Options—Strategic Cases—R&D Stage-Gate Process B
202. Real Options—Strategic Cases—Switching Option's Strategy A
203. Real Options—Strategic Cases—Switching Option's Strategy B
204. Trinomial Lattices—American Call Option
205. Trinomial Lattices—American Put Option
206. Trinomial Lattices—European Call Option
207. Trinomial Lattices—European Put Option
208. Trinomial Lattices—Mean-Reverting American Call Option
209. Trinomial Lattices—Mean-Reverting American Put Option
210. Trinomial Lattices—Mean-Reverting European Call Option
211. Trinomial Lattices—Mean-Reverting European Put Option
212. Trinomial Lattices—Mean-Reverting American Abandonment Option
213. Trinomial—Mean-Reverting American Contraction
214. Trinomial—Mean-Reverting American Expansion
215. Trinomial—Mean-Reverting American Abandonment, Contraction, Expansion
216. Trinomial—Mean-Reverting Bermudan Abandonment, Contraction, Expansion
217. Trinomial—Mean-Reverting Customized Abandonment, Contraction, Expansion
218. Trinomial—Mean-Reverting European Abandonment, Contraction, Expansion
219. Quadranomial Lattices—Jump Diffusion American Call Option
220. Quadranomial Lattices—Jump Diffusion American Put Option
221. Quadranomial Lattices—Jump Diffusion European Call Option
222. Quadranomial Lattices—Jump Diffusion European Put Option
223. Pentanomial Lattices—American Rainbow Call Option
224. Pentanomial Lattices—American Rainbow Put Option
225. Pentanomial Lattices—Dual Reverse Strike American Call (3D Binomial)
226. Pentanomial Lattices—Dual Reverse Strike American Put (3D Binomial)
227. Pentanomial Lattices—Dual Strike American Call (3D Binomial)
228. Pentanomial Lattices—Dual Strike American Put (3D Binomial)
229. Pentanomial Lattices—European Rainbow Call Option
230. Pentanomial Lattices—European Rainbow Put Option
231. Pentanomial Lattices—Exchange of Two Assets American Put (3D Binomial)
232. Pentanomial Lattices—Maximum of Two Assets American Call (3D Binomial)
233. Pentanomial Lattices—Maximum of Two Assets American Put (3D Binomial)
234. Pentanomial Lattices—Minimum of Two Assets American Call (3D Binomial)
235. Pentanomial Lattices—Minimum of Two Assets American Put (3D Binomial)
236. Pentanomial Lattices—Portfolio American Call (3D Binomial)
237. Pentanomial Lattices—Portfolio American Put (3D Binomial)
238. Pentanomial Lattices—Spread of Two Assets American Call (3D Binomial)
239. Pentanomial Lattices—Spread of Two Assets American Put (3D Binomial)
Risk Analysis
240. Integrated Risk Analysis
241. Interest Rate Risk
242. Portfolio Risk and Return Profile
Risk Hedging
243. Delta Gamma Hedge
244. Delta Hedge
245. Effects of Fixed versus Floating Rates
246. Foreign Exchange Cash Flow Model
247. Foreign Exchange Exposure Hedging
Sensitivity
248. Greeks
249. Tornado and Sensitivity Charts Linear
250. Tornado and Sensitivity Nonlinear
Simulation
251. Basic Simulation Model
252. Best Surgical Team
253. Correlated Simulation
254. Correlation Effects Model
255. Data Fitting
256. Debt Repayment and Amortization
257. Demand Curve and Elasticity Estimation
258. DCF, ROI, and Volatility
259. Infectious Diseases
260. Recruitment Budget (Negative Binomial and Multidimensional Simulation)
261. Retirement Funding with VBA Macros
262. Roulette Wheel
263. Time Value of Money
Six Sigma
264. Control Charts (c, n, p, u, X, XmR, R)
265. Delta Precision
266. Design of Experiments and Combinatorics
267. Hypothesis Testing and Bootstrap Simulation
268. Hypothesis Testing (Advanced Techniques)
269. Probabilities and Hypothesis Test
270. Sample Size Correlation
271. Sample Size DPU
272. Sample Size Mean
273. Sample Size Proportion
274. Sample Size Sigma
275. Statistical Capability Measures
276. Unit Capability Measures
Valuation
277. APT, Beta, and CAPM
278. Buy versus Lease
279. Caps and Floors
280. Convertible Bond
281. Financial Ratios Analysis
282. Financial Statements Analysis
283. Valuation Model
284. Valuation—Warrant—Combined Value
285. Valuation—Warrant—Put Only
286. Valuation—Warrant—Warrant Only
Value at Risk
287. Optimized and Simulated Portfolio VaR
288. Options Delta Portfolio VaR
289. Portfolio Operational and Capital Adequacy
290. Right-Tail Capital Requirements
291. Static Covariance Method
Volatility
292. EWMA Model
293. GARCH Model
294. Implied Volatility
295. Log Asset Returns
296. Log Cash Flow Returns
297. Probability to Volatility
Yield Curve
298. CIR Model
299. Curve Interpolation BIM
300. Curve Interpolation NS
301. Forward Rates from Spot Rates
302. Spline Interpolation and Extrapolation
303. Term Structure of Volatility
304. U.S. Treasury Risk-Free Rate
305. Vasicek Model
APPENDIX B

List of Functions
What follows is a comprehensive list of the functions in the Modeling Toolkit that can be accessed either through the analytical DLL libraries or in Excel. Please keep checking back at the web site for a more updated list; the software is continually evolving, and newer applications and models are constantly added. Finally, the Risk Simulator tools applicable when using the Modeling Toolkit are also listed at the end.
1. B2AEPMarketValueAsset: Market Value of Asset using the Asset-Equity Parity Model.
2. B2AEPMarketValueDebt: Market Value of Debt using the Asset-Equity Parity Model.
3. B2AEPRequiredReturnDebt: Required Return on Risky Debt using the Asset-Equity Parity Model.
4. B2AltDistributionCallOption: Computes the European call option for an underlying asset whose returns distribution exhibits skew and kurtosis and is not perfectly normal. May return an error for unsolvable inputs.
5. B2AltDistributionPutOption: Computes the European put option for an underlying asset whose returns distribution exhibits skew and kurtosis and is not perfectly normal. May return an error for unsolvable inputs.
6. B2AnnuityRate: Returns the percentage equivalent of the required periodic payment on an annuity (e.g., mortgage payments, loan repayment), expressed as a percentage of the total principal at initiation.
7. B2AsianCallwithArithmeticAverageRate: An average rate option is a cash-settled option whose payoff is based on the difference between the arithmetic average value of the underlying during the life of the option and a fixed strike.
8. B2AsianCallwithGeometricAverageRate: An average rate option is a cash-settled option whose payoff is based on the difference between the geometric average value of the underlying during the life of the option and a fixed strike.
9. B2AsianPutwithArithmeticAverageRate: An average rate option is a cash-settled option whose payoff is based on the difference between a fixed strike and the arithmetic average value of the underlying during the life of the option.
10. B2AsianPutwithGeometricAverageRate: An average rate option is a cash-settled option whose payoff is based on the difference between a fixed strike and the geometric average value of the underlying during the life of the option.
11. B2AssetExchangeAmericanOption: The option holder has the right, up to and including expiration, to swap out Asset 2 and receive Asset 1, with predetermined quantities.
12. B2AssetExchangeEuropeanOption: The option holder has the right at expiration to swap out Asset 2 and receive Asset 1, with predetermined quantities.
13. B2AssetOrNothingCall: At expiration, if in the money, the option holder receives the stock or asset. For a call option, as long as the stock or asset price exceeds the strike at expiration, the stock is received.
14. B2AssetOrNothingPut: At expiration, if in the money, the option holder receives the stock or asset. For a put option, the stock is received only if the stock or asset value falls below the strike price.
15. B2BarrierDoubleUpInDownInCall: Valuable or knocked in the money only if either barrier (upper or lower) is breached (i.e., asset value is above the upper or below the lower barrier), and the payout is in the form of a call option on the underlying asset.
16. B2BarrierDoubleUpInDownInPut: Valuable or knocked in the money only if either barrier (upper or lower) is breached (i.e., asset value is above the upper or below the lower barrier), and the payout is in the form of a put option on the underlying asset.
17. B2BarrierDoubleUpOutDownOutCall: Valuable or stays in the money only if neither barrier (upper nor lower) is breached, and the payout is in the form of a call option on the underlying asset.
18. B2BarrierDoubleUpOutDownOutPut: Valuable or stays in the money only if neither barrier (upper nor lower) is breached, and the payout is in the form of a put option on the underlying asset.
19. B2BarrierDownandInCall: Becomes valuable or knocked in the money if the lower barrier is breached, and the payout is the call option on the underlying asset. Sometimes cash is paid at maturity, assuming that the option has not been knocked in.
20. B2BarrierDownandInPut: Becomes valuable or knocked in the money if the lower barrier is breached, and the payout is the put option on the underlying asset. Sometimes cash is paid at maturity, assuming that the option has not been knocked in.
21. B2BarrierDownandOutCall: Valuable or in the money only if the lower barrier is not breached, and the payout is the call option on the underlying asset. Sometimes cash is paid at maturity, assuming that the option has not been knocked out.
22. B2BarrierDownandOutPut: Valuable or in the money only if the lower barrier is not breached, and the payout is the put option on the underlying asset. Sometimes cash is paid at maturity, assuming that the option has not been knocked out.
23. B2BarrierUpandInCall: Becomes valuable or knocked in the money if the upper barrier is breached, and the payout is the call option on the underlying asset. Sometimes cash is paid at maturity, assuming that the option has not been knocked in.
24. B2BarrierUpandInPut: Becomes valuable or knocked in the money if the upper barrier is breached, and the payout is the put option on the underlying asset. Sometimes cash is paid at maturity, assuming that the option has not been knocked in.
25. B2BarrierUpandOutCall: Valuable or in the money only if the upper barrier is not breached, and the payout is the call option on the underlying asset. Sometimes cash is paid at maturity, assuming that the option has not been knocked out.
26. B2BarrierUpandOutPut: Valuable or in the money only if the upper barrier is not breached, and the payout is the put option on the underlying asset. Sometimes cash is paid at maturity, assuming that the option has not been knocked out.
27. B2BDTAmericanCallonDebtLattice: Computes the American call option on interest-based instruments and debt or bonds, and creates the entire pricing lattice.
28. B2BDTAmericanCallonDebtValue: Computes the American call option value on interest-based instruments and debt or bonds, and returns only one value instead of the entire lattice.
29. B2BDTAmericanPutonDebtLattice: Computes the American put option on interest-based instruments and debt or bonds, and creates the entire pricing lattice.
30. B2BDTAmericanPutonDebtValue: Computes the American put option value on interest-based instruments and debt or bonds, and returns only one value instead of the entire lattice.
31. B2BDTCallableDebtPriceLattice: Computes the revised price lattice of a callable debt such that the options adjusted spread can be imputed. Allows for changing interest rates and interest volatilities over time.
32. B2BDTCallableDebtPriceValue: Computes the present value of a coupon bond/debt that is callable, to see the differences in value from a noncallable debt, and returns only one value instead of the entire lattice.
33. B2BDTCallableSpreadValue: Computes the option adjusted spread (i.e., the additional premium that should be charged on the callable option provision).
34. B2BDTEuropeanCallonDebtLattice: Computes the European call option on interest-based instruments and debt or bonds, and creates the entire pricing lattice.
35. B2BDTEuropeanCallonDebtValue: Computes the European call option value on interest-based instruments and debt or bonds, and returns only one value instead of the entire lattice.
36. B2BDTEuropeanPutonDebtLattice: Computes the European put option on interest-based instruments and debt or bonds, and creates the entire pricing lattice.
37. B2BDTEuropeanPutonDebtValue: Computes the European put option value on interest-based instruments and debt or bonds, and returns only one value instead of the entire lattice.
38. B2BDTFloatingCouponPriceLattice: Value of the floater bond's lattice (the coupon rate is floating and can be directly or inversely related to interest rates; e.g., rates drop, the coupon increases, the bond appreciates in price, and the yield increases).
39. B2BDTFloatingCouponPriceValue: Value of the floater bond (the coupon rate is floating and can be directly or inversely related to interest rates; e.g., rates drop, the coupon increases, the bond appreciates in price, and the yield increases).
40. B2BDTNoncallableDebtPriceLattice: Computes the pricing lattice of a coupon bond/debt that is not callable, to see the differences in value from a callable debt.
41. B2BDTNoncallableDebtPriceValue: Computes the present value of a coupon bond/debt that is not callable, to see the differences in value from a callable debt.
42. B2BDTInterestRateLattice: Computes the short rate interest lattice based on a term structure of interest rates and changing interest volatilities, as a means to compute option values.
43. B2BDTNonCallableSpreadValue: Computes the straight spread on a bond that is noncallable in order to compare it with the option provision of an option adjusted spread model.
44. B2BDTZeroPriceLattice: Computes the straight price lattice of zero bonds based on a term structure of interest rates and changing interest volatilities, as a means to compute interest-based option values.
45. B2BDTZeroPriceLattice2: Computes the straight price lattice of zero bonds based on a term structure of interest rates and changing interest volatilities, as a means to compute interest-based option values. Returns the same results as the B2BDTZeroPriceLattice function but requires interest rates and interest volatilities as inputs, rather than the entire interest rate lattice.
46. B2BDTZeroPriceValue: Computes the straight price of zero bonds at time zero, based on a term structure of interest rates and changing interest volatilities, as a means to compute interest-based option values.
47. B2BinaryDownAndInAssetAtExpirationOrNothing: Binary digital instrument receiving the asset at expiration, only if a corresponding asset hits a lower barrier, or receiving nothing otherwise. DT is monitoring steps: 1/12 monthly, 1/52 weekly, 1/250 daily, 0 continuously.
48. B2BinaryDownAndInAssetAtExpirationOrNothingCall: Binary digital call option receiving the asset at expiration if the asset hits a lower barrier, or receiving nothing otherwise. DT is monitoring steps: 1/12 monthly, 1/52 weekly, 1/250 daily, 0 continuously.
49. B2BinaryDownAndInAssetAtExpirationOrNothingPut: Binary digital put option receiving the asset at expiration if the asset hits a lower barrier, or receiving nothing otherwise. DT is monitoring steps: 1/12 monthly, 1/52 weekly, 1/250 daily, 0 continuously.
50. B2BinaryDownAndInAssetAtHitOrNothing: Binary digital instrument receiving the asset when it hits a lower barrier, or receiving nothing otherwise. DT is monitoring steps: 1/12 monthly, 1/52 weekly, 1/250 daily, 0 continuously.
51. B2BinaryDownAndInCashAtExpirationOrNothing: Binary digital instrument receiving a cash amount at expiration, only if a corresponding asset hits a lower barrier, or receiving nothing otherwise. DT is monitoring steps: 1/12 monthly, 1/52 weekly, 1/250 daily, 0 continuously.
52. B2BinaryDownAndInCashAtExpirationOrNothingCall: Binary digital call option receiving the cash at expiration if the asset hits a lower barrier, or receiving nothing otherwise. DT is monitoring steps: 1/12 monthly, 1/52 weekly, 1/250 daily, 0 continuously.
53. B2BinaryDownAndInCashAtExpirationOrNothingPut: Binary digital put option receiving the cash at expiration if the asset hits a lower barrier, or receiving nothing otherwise. DT is monitoring steps: 1/12 monthly, 1/52 weekly, 1/250 daily, 0 continuously.
54. B2BinaryDownAndInCashAtHitOrNothing: Binary digital instrument receiving a cash amount when a corresponding asset hits a lower barrier, or receiving nothing otherwise. DT is monitoring steps: 1/12 monthly, 1/52 weekly, 1/250 daily, 0 continuously.
55. B2BinaryDownAndOutAssetAtExpirationOrNothing: Binary digital instrument receiving the asset at expiration, only if a corresponding asset does not hit a lower barrier, or receiving nothing otherwise. DT is monitoring steps: 1/12 monthly, 1/52 weekly, 1/250 daily, 0 continuously.
56. B2BinaryDownAndOutAssetAtExpirationOrNothingCall: Binary digital call option receiving the asset at expiration, only if a corresponding asset does not hit a lower barrier, or receiving nothing otherwise. DT is monitoring steps: 1/12 monthly, 1/52 weekly, 1/250 daily, 0 continuously.
57. B2BinaryDownAndOutAssetAtExpirationOrNothingPut Binary digital put options receiving the asset at expiration, only if a corresponding asset does not hit a lower barrier or receives nothing otherwise. DT is monitoring steps: 1/12 monthly, 1/52 weekly, 1/250 daily, 0 continuously. 58. B2BinaryDownAndOutCashAtExpirationOrNothing Binary digital instrument receiving a cash amount at expiration, only if a corresponding asset does not hit a lower barrier or receives nothing otherwise. DT is monitoring steps: 1/12 monthly, 1/52 weekly, 1/250 daily, 0 continuously. 59. B2BinaryDownAndOutCashAtExpirationOrNothingCall Binary digital call option receiving a cash amount at expiration, only if a corresponding asset does not hit a lower barrier or receives nothing otherwise. DT is monitoring steps: 1/12 monthly, 1/52 weekly, 1/250 daily, 0 continuously. 60. B2BinaryDownAndOutCashAtExpirationOrNothingPut Binary digital put option receiving a cash amount at expiration, only if a corresponding asset does not hit a lower barrier or receives nothing otherwise. DT is monitoring steps: 1/12 monthly, 1/52 weekly, 1/250 daily, 0 continuously. 61. B2BinaryUpAndInAssetAtExpirationOrNothing Binary digital instrument receiving the asset at expiration, only if a corresponding asset hits an upper barrier or receives nothing otherwise. DT is monitoring steps: 1/12 monthly, 1/52 weekly, 1/250 daily, 0 continuously. 62. B2BinaryUpAndInAssetAtExpirationOrNothingCall Binary digital call option receiving the asset at expiration if the asset hits an upper barrier or receives nothing otherwise. DT is monitoring steps: 1/12 monthly, 1/52 weekly, 1/250 daily, 0 continuously. 63. B2BinaryUpAndInAssetAtExpirationOrNothingPut Binary digital put option receiving the asset at expiration if the asset hits an upper barrier or receives nothing otherwise. DT is monitoring steps: 1/12 monthly, 1/52 weekly, 1/250 daily, 0 continuously. 64. B2BinaryUpAndInAssetAtHitOrNothing Binary digital instrument receiving the asset when it hits an upper barrier or receives nothing otherwise. DT is monitoring steps: 1/12 monthly, 1/52 weekly, 1/250 daily, 0 continuously. 65. B2BinaryUpAndInCashAtExpirationOrNothing Binary digital instrument receiving a cash amount at expiration, only if a corresponding asset hits an upper barrier or receives nothing otherwise. DT is monitoring steps: 1/12 monthly, 1/52 weekly, 1/250 daily, 0 continuously. 66. B2BinaryUpAndInCashAtExpirationOrNothingCall Binary digital call option receiving the cash at expiration if the asset hits an upper barrier or receives nothing otherwise. DT is monitoring steps: 1/12 monthly, 1/52 weekly, 1/250 daily, 0 continuously. 67. B2BinaryUpAndInCashAtExpirationOrNothingPut Binary digital put option receiving the cash at expiration if the asset hits an upper barrier or receives nothing otherwise. DT is monitoring steps: 1/12 monthly, 1/52 weekly, 1/250 daily, 0 continuously.
68. B2BinaryUpAndInCashAtHitOrNothing Binary digital instrument receiving a cash amount when a corresponding asset hits an upper barrier or receives nothing otherwise. DT is monitoring steps: 1/12 monthly, 1/52 weekly, 1/250 daily, 0 continuously. 69. B2BinaryUpAndOutAssetAtExpirationOrNothing Binary digital instrument receiving the asset at expiration, only if a corresponding asset does not hit an upper barrier or receives nothing otherwise. DT is monitoring steps: 1/12 monthly, 1/52 weekly, 1/250 daily, 0 continuously. 70. B2BinaryUpAndOutAssetAtExpirationOrNothingCall Binary digital call options receiving the asset at expiration, only if a corresponding asset does not hit an upper barrier or receives nothing otherwise. DT is monitoring steps: 1/12 monthly, 1/52 weekly, 1/250 daily, 0 continuously. 71. B2BinaryUpAndOutAssetAtExpirationOrNothingPut Binary digital put options receiving the asset at expiration, only if a corresponding asset does not hit an upper barrier or receives nothing otherwise. DT is monitoring steps: 1/12 monthly, 1/52 weekly, 1/250 daily, 0 continuously. 72. B2BinaryUpAndOutCashAtExpirationOrNothing Binary digital instrument receiving a cash amount at expiration, only if a corresponding asset does not hit an upper barrier or receives nothing otherwise. DT is monitoring steps: 1/12 monthly, 1/52 weekly, 1/250 daily, 0 continuously. 73. B2BinaryUpAndOutCashAtExpirationOrNothingCall Binary digital call option receiving a cash amount at expiration, only if a corresponding asset does not hit an upper barrier or receives nothing otherwise. DT is monitoring steps: 1/12 monthly, 1/52 weekly, 1/250 daily, 0 continuously. 74. B2BinaryUpAndOutCashAtExpirationOrNothingPut Binary digital put option receiving a cash amount at expiration, only if a corresponding asset does not hit an upper barrier or receives nothing otherwise. DT is monitoring steps: 1/12 monthly, 1/52 weekly, 1/250 daily, 0 continuously. 75. B2Binomial3DAmericanDualStrikeCallOption Returns the American option with the payoff [Max(Q2 S2 – X2 , Q1 S1 – X1 )] and valued using a 3D binomial lattice model. 76. B2Binomial3DAmericanDualStrikePutOption Returns the American option with the payoff [Max(X2 – Q2 S2 , X1 – Q1 S1 )] and valued using a 3D binomial lattice model. 77. B2Binomial3DEuropeanDualStrikeCallOption Returns the European option with the payoff [Max(Q2 S2 – X2 , Q1 S1 – X1 )] and valued using a 3D binomial lattice model. 78. B2Binomial3DEuropeanDualStrikePutOption Returns the European option with the payoff [Max(X2 – Q2 S2 , X1 – Q1 S1 )] and valued using a 3D binomial lattice model. 79. B2Binomial3DAmericanExchangeOption Returns the American and European call and put option (same values exist for all types) with the payoff (Q2 S2 – Q1 S1 ) and valued using a 3D binomial lattice model.
80. B2Binomial3DAmericanMaximumTwoAssetsCallOption Returns the American option with the payoff [Max(Q2 S2 , Q1 S1 ) – X] and valued using a 3D binomial lattice model. 81. B2Binomial3DAmericanMaximumTwoAssetsPutOption Returns the American option with the payoff [X – Max(Q2 S2 , Q1 S1 )] and valued using a 3D binomial lattice model. 82. B2Binomial3DEuropeanMaximumTwoAssetsCallOption Returns the European option with the payoff [Max(Q2 S2 , Q1 S1 ) – X] and valued using a 3D binomial lattice model. 83. B2Binomial3DEuropeanMaximumTwoAssetsPutOption Returns the European option with the payoff [X – Max(Q2 S2 , Q1 S1 )] and valued using a 3D binomial lattice model. 84. B2Binomial3DAmericanMinimumTwoAssetsCallOption Returns the American option with the payoff [Min(Q2 S2 , Q1 S1 ) – X] and valued using a 3D binomial lattice model. 85. B2Binomial3DAmericanMinimumTwoAssetsPutOption Returns the American option with the payoff [X – Min(Q2 S2 , Q1 S1 )] and valued using a 3D binomial lattice model. 86. B2Binomial3DEuropeanMinimumTwoAssetsCallOption Returns the European option with the payoff [Min(Q2 S2 , Q1 S1 ) – X] and valued using a 3D binomial lattice model. 87. B2Binomial3DEuropeanMinimumTwoAssetsPutOption Returns the European option with the payoff [X – Min(Q2 S2 , Q1 S1 )] and valued using a 3D binomial lattice model. 88. B2Binomial3DAmericanPortfolioCallOption Returns the American option with the payoff (Q2 S2 + Q1 S1 – X) and valued using a 3D binomial lattice model. 89. B2Binomial3DAmericanPortfolioPutOption Returns the American option with the payoff [X – (Q2 S2 + Q1 S1 )] and valued using a 3D binomial lattice model. 90. B2Binomial3DEuropeanPortfolioCallOption Returns the European option with the payoff (Q2 S2 + Q1 S1 – X) and valued using a 3D binomial lattice model. 91. B2Binomial3DEuropeanPortfolioPutOption Returns the European option with the payoff [X – (Q2 S2 + Q1 S1 )] and valued using a 3D binomial lattice model. 92. B2Binomial3DAmericanReverseDualStrikeCallOption Returns the American option with the payoff [Max(X2 – Q2 S2 , Q1 S1 – X1 )] and valued using a 3D binomial lattice model. 93. B2Binomial3DAmericanReverseDualStrikePutOption Returns the American option with the payoff [Max(Q2 S2 – X2 , X1 – Q1 S1 )] and valued using a 3D binomial lattice model. 94. B2Binomial3DEuropeanReverseDualStrikeCallOption Returns the European option with the payoff [Max(X2 – Q2 S2 , Q1 S1 – X1 )] and valued using a 3D binomial lattice model.
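A two-asset ("3D") binomial lattice of the kind these functions describe can be sketched with the Boyle-Evnine-Gibbs joint-move probabilities. The helper below is a hedged illustration under our own naming and defaults, not the toolkit's routine; it prices the American call on the maximum of two assets (the payoff of function 80), and the step count n must be large enough that all four branch probabilities stay between 0 and 1:

    import math

    def american_max_call_3d(S1, S2, X, T, r, q1, q2, s1, s2, rho, Q1=1.0, Q2=1.0, n=50):
        dt = T / n
        u1, d1 = math.exp(s1 * math.sqrt(dt)), math.exp(-s1 * math.sqrt(dt))
        u2, d2 = math.exp(s2 * math.sqrt(dt)), math.exp(-s2 * math.sqrt(dt))
        m1 = (r - q1 - 0.5 * s1 * s1) * math.sqrt(dt) / s1
        m2 = (r - q2 - 0.5 * s2 * s2) * math.sqrt(dt) / s2
        # Boyle-Evnine-Gibbs probabilities for the joint up/down moves
        puu = 0.25 * (1 + rho + m1 + m2)
        pud = 0.25 * (1 - rho + m1 - m2)
        pdu = 0.25 * (1 - rho - m1 + m2)
        pdd = 0.25 * (1 + rho - m1 - m2)
        disc = math.exp(-r * dt)
        payoff = lambda a, b: max(max(a, b) - X, 0.0)
        V = [[payoff(Q1 * S1 * u1 ** i * d1 ** (n - i), Q2 * S2 * u2 ** j * d2 ** (n - j))
              for j in range(n + 1)] for i in range(n + 1)]
        for t in range(n - 1, -1, -1):          # backward induction
            for i in range(t + 1):
                for j in range(t + 1):
                    cont = disc * (puu * V[i + 1][j + 1] + pud * V[i + 1][j] +
                                   pdu * V[i][j + 1] + pdd * V[i][j])
                    ex = payoff(Q1 * S1 * u1 ** i * d1 ** (t - i),
                                Q2 * S2 * u2 ** j * d2 ** (t - j))
                    V[i][j] = max(cont, ex)     # American early exercise
        return V[0][0]

    print(american_max_call_3d(100, 100, 95, 1.0, 0.05, 0.0, 0.0, 0.2, 0.3, 0.5))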
95. B2Binomial3DEuropeanReverseDualStrikePutOption Returns the European option with the payoff [Max(Q2 S2 – X2 , X1 – Q1 S1 )] and valued using a 3D binomial lattice model. 96. B2Binomial3DAmericanSpreadCallOption Returns the American option with the payoff (Q1 S1 – Q2 S2 – X) and valued using a 3D binomial lattice model. 97. B2Binomial3DAmericanSpreadPutOption Returns the American option with the payoff (X + Q2 S2 – Q1 S1 ) and valued using a 3D binomial lattice model. 98. B2Binomial3DEuropeanSpreadCallOption Returns the European option with the payoff (Q1 S1 – Q2 S2 – X) and valued using a 3D binomial lattice model. 99. B2Binomial3DEuropeanSpreadPutOption Returns the European option with the payoff (X + Q2 S2 – Q1 S1 ) and valued using a 3D binomial lattice model. 100. B2BinomialAdjustedBarrierSteps Computes the correct binomial lattice steps to use for convergence and barrier matching when running a barrier option. 101. B2BinomialAmericanCall Returns the American call option with a continuous dividend yield using a binomial lattice, where the option can be exercised at any time up to and including maturity. 102. B2BinomialAmericanPut Returns the American put option with a continuous dividend yield using a binomial lattice, where the option can be exercised at any time up to and including maturity. 103. B2BinomialBermudanCall Returns the Bermudan call option with a continuous dividend yield using a binomial lattice, where the option can be exercised at any time up to and including maturity except during the vesting period. 104. B2BinomialBermudanPut Returns the Bermudan put option with a continuous dividend yield using a binomial lattice, where the option can be exercised at any time up to and including maturity except during the vesting period. 105. B2BinomialEuropeanCall Returns the European call option with a continuous dividend yield using a binomial lattice, where the option can be exercised only at maturity. 106. B2BinomialEuropeanPut Returns the European put option with a continuous dividend yield using a binomial lattice, where the option can be exercised only at maturity. 107. B2BlackCallOptionModel Returns the Black model (modified Black-Scholes-Merton) for forward contracts and interest-based call options. 108. B2BlackPutOptionModel Returns the Black model (modified Black-Scholes-Merton) for forward contracts and interest-based put options.
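For reference, the Black (1976) model behind functions 107 and 108 has a compact closed form. This is the standard textbook rendering, not the toolkit's code:

    import math
    from statistics import NormalDist

    N = NormalDist().cdf

    def black76(F, X, T, r, sigma, call=True):
        # Black model: the forward/futures price F replaces the spot, and the
        # result is discounted from expiration at the risk-free rate r.
        d1 = (math.log(F / X) + 0.5 * sigma * sigma * T) / (sigma * math.sqrt(T))
        d2 = d1 - sigma * math.sqrt(T)
        if call:
            return math.exp(-r * T) * (F * N(d1) - X * N(d2))
        return math.exp(-r * T) * (X * N(-d2) - F * N(-d1))

    print(black76(F=100, X=95, T=0.5, r=0.05, sigma=0.25))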
109. B2BlackFuturesCallOption Computes the value of a commodities futures call option given the value of the futures contract. 110. B2BlackFuturesPutOption Computes the value of a commodities futures put option given the value of the futures contract. 111. B2BlackScholesCall European call option using the Black-Scholes-Merton model. 112. B2BlackScholesProbabilityAbove Computes the expected probability the stock price will rise above the strike price under a Black-Scholes paradigm. 113. B2BlackScholesPut European put option using the Black-Scholes-Merton model. 114. B2BondCIRBondDiscountFactor Returns the discount factor on a bond or risky debt using the Cox-Ingersoll-Ross model, accounting for mean-reverting interest rates. 115. B2BondCIRBondPrice Cox-Ingersoll-Ross model on Zero Coupon Bond Pricing assuming no arbitrage and mean-reverting interest rates. 116. B2BondCIRBondYield Cox-Ingersoll-Ross model on Zero Coupon Bond Yield assuming no arbitrage and mean-reverting interest rates. 117. B2BondConvexityContinuous Returns the debt’s Convexity or second order sensitivity using a series of cash flows and current interest rate, with continuous discounting. 118. B2BondConvexityDiscrete Returns the debt’s Convexity or second order sensitivity using a series of cash flows and current interest rate, with discrete discounting. 119. B2BondConvexityYTMContinuous Returns the debt’s Convexity or second order sensitivity using an internal Yield to Maturity of the cash flows, with continuous discounting. 120. B2BondConvexityYTMDiscrete Returns the debt’s Convexity or second order sensitivity using an internal Yield to Maturity of the cash flows, with discrete discounting. 121. B2BondDurationContinuous Returns the debt’s first order sensitivity Duration measure using continuous discounting. 122. B2BondDurationDiscrete Returns the debt’s first order sensitivity Duration measure using discrete discounting. 123. B2BondHullWhiteBondCallOption Values a European call option on a bond where the interest rates are stochastic and mean-reverting. Make sure Bond Maturity > Option Maturity.
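The plain-vanilla values in functions 111 and 113, together with one common reading of function 112 (we assume here that the probability above the strike is the risk-neutral probability N(d2) that the terminal price exceeds the strike), can be reproduced as follows:

    import math
    from statistics import NormalDist

    N = NormalDist().cdf

    def bsm(S, X, T, r, q, sigma):
        # Black-Scholes-Merton with a continuous dividend yield q; also returns
        # N(d2), the risk-neutral probability that S ends above X at expiration.
        d1 = (math.log(S / X) + (r - q + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
        d2 = d1 - sigma * math.sqrt(T)
        call = S * math.exp(-q * T) * N(d1) - X * math.exp(-r * T) * N(d2)
        put = X * math.exp(-r * T) * N(-d2) - S * math.exp(-q * T) * N(-d1)
        return call, put, N(d2)

    print(bsm(S=100, X=100, T=1.0, r=0.05, q=0.0, sigma=0.2))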
124. B2BondHullWhiteBondPutOption Values a European put option on a bond where the interest rates are stochastic and mean-reverting. Make sure Bond Maturity > Option Maturity. 125. B2BondMacaulayDuration Returns the debt’s first order sensitivity Macaulay Duration measure. 126. B2BondMertonBondPrice Bond price using Merton Stochastic Interest and Stochastic Asset Model. 127. B2BondModifiedDuration Returns the debt’s first order sensitivity Modified Duration measure. 128. B2BondPriceContinuous Returns the bond price of a cash flow series given the time and discount rate, using continuous discounting. 129. B2BondPriceDiscrete Returns the bond price of a cash flow series given the time and discount rate, using discrete discounting. 130. B2BondVasicekBondCallOption Values a European call option on a bond where the interest rates are stochastic and mean-reverting to a long-term rate. Make sure Bond Maturity > Option Maturity. 131. B2BondVasicekBondPrice Vasicek Zero Coupon Price assuming no arbitrage and mean-reverting interest rates. 132. B2BondVasicekBondPutOption Values a European put option on a bond where the interest rates are stochastic and mean-reverting to a long-term rate. Make sure Bond Maturity > Option Maturity. 133. B2BondVasicekBondYield Vasicek Zero Coupon Yield assuming no arbitrage and mean-reverting interest rates. 134. B2BondYTMContinuous Returns bond’s Yield to Maturity assuming continuous discounting. 135. B2BondYTMDiscrete Returns bond’s Yield to Maturity assuming discrete discounting. 136. B2CallDelta Returns the option valuation sensitivity Delta (a call option value’s sensitivity to changes in the asset value). 137. B2CallGamma Returns the option valuation sensitivity Gamma (a call option value’s sensitivity to changes in the Delta value). 138. B2CallOptionOnTheMax The maximum values at expiration of both assets are used in option exercise, where the call option payoff at expiration is the maximum price between Asset 1 and Asset 2 against the strike price.
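The price, duration, and convexity measures in functions 117 through 129 reduce to weighted sums of discounted cash flows. A minimal sketch with discrete annual discounting (the helper name is ours):

    def bond_analytics(cashflows, times, y):
        # Price, Macaulay duration, modified duration, and convexity of a
        # cash-flow series at yield y, with discrete (annual) compounding.
        pv = [cf / (1 + y) ** t for cf, t in zip(cashflows, times)]
        price = sum(pv)
        macaulay = sum(t * v for t, v in zip(times, pv)) / price
        modified = macaulay / (1 + y)
        convexity = sum(t * (t + 1) * v for t, v in zip(times, pv)) / (price * (1 + y) ** 2)
        return price, macaulay, modified, convexity

    # Three-year 5% annual-coupon bond at a 6% yield
    print(bond_analytics([5, 5, 105], [1, 2, 3], 0.06))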
139. B2CallOptionOnTheMin The minimum values at expiration of both assets are used in option exercise, where the call option payoff at expiration is the minimum price between Asset 1 and Asset 2 against the strike price. 140. B2CallRho Returns the option valuation sensitivity Rho (a call option value’s sensitivity to changes in the interest rate). 141. B2CallTheta Returns the option valuation sensitivity Theta (a call option value’s sensitivity to changes in the maturity). 142. B2CallVega Returns the option valuation sensitivity Vega (a call option value’s sensitivity to changes in the volatility). 143. B2CashOrNothingCall At expiration, if the option is in the money, the option holder receives a predetermined cash payment. For a call option, as long as the stock or asset price exceeds the strike at expiration, cash is received. 144. B2CashOrNothingPut At expiration, if the option is in the money, the option holder receives a predetermined cash payment. For a put option, cash is received only if the stock or asset value falls below the strike price. 145. B2ChooserBasicOption Holder chooses whether the option is a call or a put by the chooser time, with the same strike price and maturity. Typically cheaper than buying a call and a put together while providing the same level of hedge. 146. B2ChooserComplexOption Holder gets to choose whether the option is a call or a put within the Chooser Time, with different strike prices and maturities. Typically cheaper than buying a call and a put, while providing the same level of hedge. 147. B2ClosedFormAmericanCall Returns the American option approximation model with a continuous dividend yield call option. 148. B2ClosedFormAmericanPut Returns the American option approximation model with a continuous dividend yield put option. 149. B2CoefficientofVariationPopulation Computes the population coefficient of variation (standard deviation of the population divided by the mean), to obtain a relative measure of risk and dispersion. 150. B2CoefficientofVariationSample Computes the sample coefficient of variation (standard deviation of the sample divided by the mean), to obtain a relative measure of risk and dispersion. 151. B2CommodityCallOptionModel Computes the value of a commodity-based call option based on spot and futures market, and accounting for volatility of the forward rate.
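The closed-form sensitivities behind functions 136 through 142 are, for a call on a non-dividend-paying asset (the dividend-free case is our simplifying assumption):

    import math
    from statistics import NormalDist

    nd = NormalDist()
    N, npdf = nd.cdf, nd.pdf

    def call_greeks(S, X, T, r, sigma):
        # Delta, Gamma, Vega, Theta (per year), and Rho of a Black-Scholes call.
        d1 = (math.log(S / X) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
        d2 = d1 - sigma * math.sqrt(T)
        delta = N(d1)
        gamma = npdf(d1) / (S * sigma * math.sqrt(T))
        vega = S * npdf(d1) * math.sqrt(T)
        theta = -S * npdf(d1) * sigma / (2 * math.sqrt(T)) - r * X * math.exp(-r * T) * N(d2)
        rho = X * T * math.exp(-r * T) * N(d2)
        return delta, gamma, vega, theta, rho

    print(call_greeks(S=100, X=100, T=1.0, r=0.05, sigma=0.2))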
152. B2CommodityPutOptionModel Computes the value of a commodity-based put option based on spot and futures market, and accounting for volatility of the forward rate. 153. B2CompoundOptionsCallonCall A compound option allowing the holder to buy (call) a call option with some maturity, in the future within the option maturity period, for a specified strike price on the option. 154. B2CompoundOptionsCallonPut A compound option allowing the holder to buy (call) a put option with some maturity, in the future within the option maturity period, for a specified strike price on the option. 155. B2CompoundOptionsPutonCall A compound option allowing the holder to sell (put) a call option with some maturity, in the future within the option maturity period, for a specified strike price on the option. 156. B2CompoundOptionsPutonPut A compound option allowing the holder to sell (put) a put option with some maturity, in the future within the option maturity period, for a specified strike price on the option. 157. B2ConvenienceYield The convenience yield is simply the rate differential between a non-arbitrage futures and spot price and a real-life fair market value of the futures price. 158. B2ConvertibleBondAmerican Computes the value of an American convertible bond using binomial lattices, and accounting for the stock’s volatility and dividend yield, as well as the bond’s credit spread above risk-free. 159. B2ConvertibleBondEuropean Computes the value of a European convertible bond using binomial lattices, and accounting for the stock’s volatility and dividend yield, as well as the bond’s credit spread above risk-free. 160. B2CreditAcceptanceCost Computes the risk-adjusted cost of accepting a new credit line with a probability of default. 161. B2CreditAssetSpreadCallOption Provides protection from an increase in spread but ceases to exist if the underlying asset defaults and the option is based on the price of the asset. 162. B2CreditAssetSpreadPutOption Provides protection from a decrease in spread but ceases to exist if the underlying asset defaults and the option is based on the price of the asset. 163. B2CreditDefaultSwapSpread Returns the valuation of a credit default swap (CDS) spread, allowing the holder to sell a bond/debt at par value when a credit event occurs. 164. B2CreditDefaultSwapCorrelatedBondandSwapPrice Computes the valuation of a bond with a credit default swap where both parties are correlated and each has a probability of default and possible recovery
rates. At default, the holder receives the notional principal or par value of the bond. 165. B2CreditDefaultSwapCorrelatedBondPrice Computes the valuation of a bond without any credit default swap where the bond or debt has a probability of default and possible recovery rate. 166. B2CreditDefaultSwapCorrelatedSwapPrice Computes the price of a credit default swap where both parties are correlated and each has a probability of default and possible recovery rates. At default, the holder receives the notional principal or par value of the bond. 167. B2CreditRatingWidth Computes the credit ratings width to generate the credit ratings table. 168. B2CreditRejectionCost Computes the risk-adjusted cost of rejecting a new credit line with a probability of default. 169. B2CreditRiskShortfall Returns the Credit Risk Shortfall given probability of default and recovery rates. 170. B2CreditSpreadCallOption Provides protection from an increase in spread but ceases to exist if the underlying asset defaults. Only credit default swaps can cover default events. Credit spread options (CSOs) are sometimes combined with CDSs. 171. B2CreditSpreadPutOption Provides protection from a decrease in spread but ceases to exist if the underlying asset defaults. Only credit default swaps can cover default events (CSOs are sometimes combined with CDSs). 172. B2CubicSpline Interpolates and extrapolates (linear and nonlinear) the unknown Y values (based on the required X value) given some series of known X and Y values, and can be used to interpolate inside the data sample or extrapolate outside the known sample. 173. B2CurrencyCallOption Option to exchange foreign currency into domestic currency by buying domestic currency (selling foreign currency) at a set exchange rate on a specified date. Exchange rate is foreign currency to domestic currency. 174. B2CurrencyForwardCallOption Computes the value of a currency forward call option. 175. B2CurrencyForwardPutOption Computes the value of a currency forward put option. 176. B2CurrencyPutOption Option to exchange domestic currency into foreign currency by selling domestic currency (buying foreign currency) at a set exchange rate on a specified date. Exchange rate is foreign currency to domestic currency. 177. B2DeltaGammaHedgeCallBought Computes the total amount of call values that has to be bought to perform a Delta-Gamma neutral hedge. Returns a negative value indicating cash outflow.
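For function 172, an equivalent interpolation and extrapolation can be cross-checked against SciPy's CubicSpline (a cross-check, not the toolkit's routine):

    import numpy as np
    from scipy.interpolate import CubicSpline

    # Known points, e.g., a yield curve; the spline interpolates inside the
    # sample and, with extrapolate=True, extends outside the known X range.
    x = np.array([1.0, 2.0, 3.0, 5.0, 10.0])
    y = np.array([0.040, 0.043, 0.045, 0.047, 0.050])
    spline = CubicSpline(x, y, extrapolate=True)
    print(spline(4.0), spline(12.0))   # interpolated, then extrapolated, values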
178. B2DeltaGammaHedgeCallSold Computes the single unit of call value that has to be sold to perform a Delta-Gamma neutral hedge. Returns a positive value indicating cash inflow. 179. B2DeltaGammaHedgeMoneyBorrowed Computes the amount of money that has to be borrowed to perform a Delta-Gamma neutral hedge. Returns a positive value indicating cash inflow. 180. B2DeltaGammaHedgeSharesBought Computes the total value of stocks that have to be bought to perform a Delta-Gamma neutral hedge. Returns a negative value indicating cash outflow. 181. B2DeltaHedgeCallSold Computes the single unit of call value that has to be sold to perform a Delta-neutral hedge. Returns a positive value indicating cash inflow. 182. B2DeltaHedgeMoneyBorrowed Computes the amount of money that has to be borrowed to perform a Delta-neutral hedge. Returns a positive value indicating cash inflow. 183. B2DeltaHedgeSharesBought Computes the total value of stocks that have to be bought to perform a Delta-neutral hedge. Returns a negative value indicating cash outflow.
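The sign conventions in functions 177 through 183 become clear once the Delta-neutral position is written out: sell one call (cash inflow), buy Delta shares (cash outflow), and borrow the balance. A sketch under Black-Scholes assumptions with no dividends (helper name ours):

    import math
    from statistics import NormalDist

    N = NormalDist().cdf

    def delta_hedge(S, X, T, r, sigma):
        d1 = (math.log(S / X) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
        d2 = d1 - sigma * math.sqrt(T)
        call_sold = S * N(d1) - X * math.exp(-r * T) * N(d2)   # positive: cash inflow
        shares_bought = -N(d1) * S                             # negative: cash outflow
        money_borrowed = -(call_sold + shares_bought)          # positive: borrowing balances
        return call_sold, shares_bought, money_borrowed

    print(delta_hedge(S=100, X=100, T=1.0, r=0.05, sigma=0.2))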
and the tail points to the right, whereas negative skew means average is less than median and the tail points to the left. 191. B2DistributionBetaStdev Returns the Beta distribution’s theoretical standard deviation (second moment), measuring the width and average dispersion of all points around the mean. 192. B2DistributionBinomialKurtosis Returns the Binomial distribution’s theoretical excess kurtosis (fourth moment), measuring the peakedness of the distribution and its extreme tail events. An excess kurtosis of 0 implies a normal tail. 193. B2DistributionBinomialMean Returns the Binomial distribution’s theoretical mean or expected value (first moment), measuring the central tendency of the distribution. 194. B2DistributionBinomialSkew Returns the Binomial distribution’s theoretical skew (third moment), measuring the direction of the distribution’s tail. Positive skew means average exceeds median and the tail points to the right, whereas negative skew means average is less than median and the tail points to the left. 195. B2DistributionBinomialStdev Returns the Binomial distribution’s theoretical standard deviation (second moment), measuring the width and average dispersion of all points around the mean. 196. B2DistributionCauchyKurtosis Returns the Cauchy distribution’s theoretical excess kurtosis (fourth moment), measuring the peakedness of the distribution and its extreme tail events. An excess kurtosis of 0 implies a normal tail. 197. B2DistributionCauchyMean Returns the Cauchy distribution’s theoretical mean or expected value (first moment), measuring the central tendency of the distribution. 198. B2DistributionCauchySkew Returns the Cauchy distribution’s theoretical skew (third moment), measuring the direction of the distribution’s tail. Positive skew means average exceeds median and the tail points to the right, whereas negative skew means average is less than median and tail points to left. 199. B2DistributionCauchyStdev Returns the Cauchy distribution’s theoretical standard deviation (second moment), measuring the width and average dispersion of all points around the mean. 200. B2DistributionChiSquareKurtosis Returns the Chi-Square distribution’s theoretical excess kurtosis (fourth moment), measuring the peakedness of the distribution and its extreme tail events. An excess kurtosis of 0 implies a normal tail. 201. B2DistributionChiSquareMean Returns the Chi-Square distribution’s theoretical mean or expected value (first moment), measuring the central tendency of the distribution.
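The four theoretical moments these B2Distribution functions report can be cross-checked against SciPy, which likewise quotes excess kurtosis (0 for the Normal distribution):

    from scipy import stats

    # mvsk = mean, variance, skew, excess kurtosis
    for name, dist in [("Beta(2,5)", stats.beta(2, 5)),
                       ("Binomial(10,0.3)", stats.binom(10, 0.3)),
                       ("Lognormal(s=0.5)", stats.lognorm(0.5))]:
        mean, var, skew, exkurt = dist.stats(moments="mvsk")
        print(name, float(mean), float(var) ** 0.5, float(skew), float(exkurt))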
202. B2DistributionChiSquareSkew Returns the Chi-Square distribution’s theoretical skew (third moment), measuring the direction of the distribution’s tail. Positive skew means average exceeds median and the tail points to the right, whereas negative skew means average is less than median and tail points to left. 203. B2DistributionChiSquareStdev Returns the Chi-Square distribution’s theoretical standard deviation (second moment), measuring the width and average dispersion of all points around the mean. 204. B2DistributionDiscreteUniformKurtosis Returns the Discrete Uniform distribution’s theoretical excess kurtosis (fourth moment), measuring the peakedness of the distribution and its extreme tail events. An excess kurtosis of 0 implies a normal tail. 205. B2DistributionDiscreteUniformMean Returns the Discrete Uniform distribution’s theoretical mean or expected value (first moment), measuring the central tendency of the distribution. 206. B2DistributionDiscreteUniformSkew Returns the Discrete Uniform distribution’s theoretical skew (third moment), measuring the direction of the distribution’s tail. Positive skew means average exceeds median and the tail points to the right, whereas negative skew means average is less than median and tail points to left. 207. B2DistributionDiscreteUniformStdev Returns the Discrete Uniform distribution’s theoretical standard deviation (second moment), measuring the width and average dispersion of all points around the mean. 208. B2DistributionExponentialKurtosis Returns the Exponential distribution’s theoretical excess kurtosis (fourth moment), measuring the peakedness of the distribution and its extreme tail events. An excess kurtosis of 0 implies a normal tail. 209. B2DistributionExponentialMean Returns the Exponential distribution’s theoretical mean or expected value (first moment), measuring the central tendency of the distribution. 210. B2DistributionExponentialSkew Returns the Exponential distribution’s theoretical skew (third moment), measuring the direction of the distribution’s tail. Positive skew means average exceeds median and the tail points to the right, whereas negative skew means average is less than median and tail points to left. 211. B2DistributionExponentialStdev Returns the Exponential distribution’s theoretical standard deviation (second moment), measuring the width and average dispersion of all points around the mean. 212. B2DistributionFKurtosis Returns the F distribution’s theoretical excess kurtosis (fourth moment), measuring the peakedness of the distribution and its extreme tail events. An excess kurtosis of 0 implies a normal tail.
213. B2DistributionFMean Returns the F distribution’s theoretical mean or expected value (first moment), measuring the central tendency of the distribution. 214. B2DistributionFSkew Returns the F distribution’s theoretical skew (third moment), measuring the direction of the distribution’s tail. Positive skew means average exceeds median and the tail points to the right, whereas negative skew means average is less than median and tail points to left. 215. B2DistributionFStdev Returns the F distribution’s theoretical standard deviation (second moment), measuring the width and average dispersion of all points around the mean. 216. B2DistributionGammaKurtosis Returns the Gamma distribution’s theoretical excess kurtosis (fourth moment), measuring the peakedness of the distribution and its extreme tail events. An excess kurtosis of 0 implies a normal tail. 217. B2DistributionGammaMean Returns the Gamma distribution’s theoretical mean or expected value (first moment), measuring the central tendency of the distribution. 218. B2DistributionGammaSkew Returns the Gamma distribution’s theoretical skew (third moment), measuring the direction of the distribution’s tail. Positive skew means average exceeds median and the tail points to the right, whereas negative skew means average is less than median and tail points to left. 219. B2DistributionGammaStdev Returns the Gamma distribution’s theoretical standard deviation (second moment), measuring the width and average dispersion of all points around the mean. 220. B2DistributionGeometricKurtosis Returns the Geometric distribution’s theoretical excess kurtosis (fourth moment), measuring the peakedness of the distribution and its extreme tail events. An excess kurtosis of 0 implies a normal tail. 221. B2DistributionGeometricMean Returns the Geometric distribution’s theoretical mean or expected value (first moment), measuring the central tendency of the distribution. 222. B2DistributionGeometricSkew Returns the Geometric distribution’s theoretical skew (third moment), measuring the direction of the distribution’s tail. Positive (negative) skew means average exceeds (is less than) median and the tail points to the right (left). 223. B2DistributionGeometricStdev Returns the Geometric distribution’s theoretical standard deviation (second moment), measuring the width and average dispersion of all points around the mean. 224. B2DistributionGumbelMaxKurtosis Returns the Gumbel Max distribution’s theoretical excess kurtosis (fourth moment), measuring the peakedness of the distribution and its extreme tail events. An excess kurtosis of 0 implies a normal tail.
225. B2DistributionGumbelMaxMean Returns the Gumbel Max distribution’s theoretical mean or expected value (first moment), measuring the central tendency of the distribution. 226. B2DistributionGumbelMaxSkew Returns the Gumbel Max distribution’s theoretical skew (third moment), measuring the direction of the distribution’s tail. Positive (negative) skew means average exceeds (is less than) median and the tail points to the right (left). 227. B2DistributionGumbelMaxStdev Returns the Gumbel Max distribution’s theoretical standard deviation (second moment), measuring the width and average dispersion of all points around the mean. 228. B2DistributionGumbelMinKurtosis Returns the Gumbel Min distribution’s theoretical excess kurtosis (fourth moment), measuring the peakedness of the distribution and its extreme tail events. An excess kurtosis of 0 implies a normal tail. 229. B2DistributionGumbelMinMean Returns the Gumbel Min distribution’s theoretical mean or expected value (first moment), measuring the central tendency of the distribution. 230. B2DistributionGumbelMinSkew Returns the Gumbel Min distribution’s theoretical skew (third moment), measuring the direction of the distribution’s tail. Positive (negative) skew means average exceeds (is less than) median and the tail points to the right (left). 231. B2DistributionGumbelMinStdev Returns the Gumbel Min distribution’s theoretical standard deviation (second moment), measuring the width and average dispersion of all points around the mean. 232. B2DistributionHypergeometricKurtosis Returns the Hypergeometric distribution’s theoretical excess kurtosis (fourth moment), measuring the peakedness of the distribution and its extreme tail events. An excess kurtosis of 0 implies a normal tail. 233. B2DistributionHypergeometricMean Returns the Hypergeometric distribution’s theoretical mean or expected value (first moment), measuring the central tendency of the distribution. 234. B2DistributionHypergeometricSkew Returns the Hypergeometric distribution’s theoretical skew (third moment), measuring the direction of the distribution’s tail. Positive (negative) skew means average exceeds (is less than) median and the tail points to the right (left). 235. B2DistributionHypergeometricStdev Returns the Hypergeometric distribution’s theoretical standard deviation (second moment), measuring the width and average dispersion of all points around the mean. 236. B2DistributionLogisticKurtosis Returns the Logistic distribution’s theoretical excess kurtosis (fourth moment), measuring the peakedness of the distribution and its extreme tail events. An excess kurtosis of 0 implies a normal tail.
237. B2DistributionLogisticMean Returns the Logistic distribution’s theoretical mean or expected value (first moment), measuring the central tendency of the distribution. 238. B2DistributionLogisticSkew Returns the Logistic distribution’s theoretical skew (third moment), measuring the direction of the distribution’s tail. Positive (negative) skew means average exceeds (is less than) median and the tail points to the right (left). 239. B2DistributionLogisticStdev Returns the Logistic distribution’s theoretical standard deviation (second moment), measuring the width and average dispersion of all points around the mean. 240. B2DistributionLognormalKurtosis Returns the Lognormal distribution’s theoretical excess kurtosis (fourth moment), measuring the peakedness of the distribution and its extreme tail events. An excess kurtosis of 0 implies a normal tail. 241. B2DistributionLognormalMean Returns the Lognormal distribution’s theoretical mean or expected value (first moment), measuring the central tendency of the distribution. 242. B2DistributionLognormalSkew Returns the Lognormal distribution’s theoretical skew (third moment), measuring the direction of the distribution’s tail. Positive (negative) skew means average exceeds (is less than) median and the tail points to the right (left). 243. B2DistributionLognormalStdev Returns the Lognormal distribution’s theoretical standard deviation (second moment), measuring the width and average dispersion of all points around the mean. 244. B2DistributionNegativeBinomialKurtosis Returns the Negative Binomial distribution’s theoretical excess kurtosis (fourth moment), measuring the peakedness of the distribution and its extreme tail events. An excess kurtosis of 0 implies a normal tail. 245. B2DistributionNegativeBinomialMean Returns the Negative Binomial distribution’s theoretical mean or expected value (first moment), measuring the central tendency of the distribution. 246. B2DistributionNegativeBinomialSkew Returns the Negative Binomial distribution’s theoretical skew (third moment), measuring the direction of the distribution’s tail. Positive (negative) skew means average exceeds (is less than) median and the tail points to the right (left). 247. B2DistributionNegativeBinomialStdev Returns the Negative Binomial distribution’s theoretical standard deviation (second moment), measuring the width and average dispersion of all points around the mean. 248. B2DistributionNormalKurtosis Returns the Normal distribution’s theoretical excess kurtosis (fourth moment), measuring the peakedness of the distribution and its extreme tail events. An excess kurtosis of 0 implies a normal tail.
249. B2DistributionNormalMean Returns the Normal distribution’s theoretical mean or expected value (first moment), measuring the central tendency of the distribution. 250. B2DistributionNormalSkew Returns the Normal distribution’s theoretical skew (third moment), measuring the direction of the distribution’s tail. Positive (negative) skew means average exceeds (is less than) median and the tail points to the right (left). 251. B2DistributionNormalStdev Returns the Normal distribution’s theoretical standard deviation (second moment), measuring the width and average dispersion of all points around the mean. 252. B2DistributionParetoKurtosis Returns the Pareto distribution’s theoretical excess kurtosis (fourth moment), measuring the peakedness of the distribution and its extreme tail events. An excess kurtosis of 0 implies a normal tail. 253. B2DistributionParetoMean Returns the Pareto distribution’s theoretical mean or expected value (first moment), measuring the central tendency of the distribution. 254. B2DistributionParetoSkew Returns the Pareto distribution’s theoretical skew (third moment), measuring the direction of the distribution’s tail. Positive (negative) skew means average exceeds (is less than) median and the tail points to the right (left). 255. B2DistributionParetoStdev Returns the Pareto distribution’s theoretical standard deviation (second moment), measuring the width and average dispersion of all points around the mean. 256. B2DistributionPoissonKurtosis Returns the Poisson distribution’s theoretical excess kurtosis (fourth moment), measuring the peakedness of the distribution and its extreme tail events. An excess kurtosis of 0 implies a normal tail. 257. B2DistributionPoissonMean Returns the Poisson distribution’s theoretical mean or expected value (first moment), measuring the central tendency of the distribution. 258. B2DistributionPoissonSkew Returns the Poisson distribution’s theoretical skew (third moment), measuring the direction of the distribution’s tail. Positive (negative) skew means average exceeds (is less than) median and the tail points to the right (left). 259. B2DistributionPoissonStdev Returns the Poisson distribution’s theoretical standard deviation (second moment), measuring the width and average dispersion of all points around the mean. 260. B2DistributionRayleighKurtosis Returns the Rayleigh distribution’s theoretical excess kurtosis (fourth moment), measuring the peakedness of the distribution and its extreme tail events. An excess kurtosis of 0 implies a normal tail.
261. B2DistributionRayleighMean Returns the Rayleigh distribution’s theoretical mean or expected value (first moment), measuring the central tendency of the distribution. 262. B2DistributionRayleighSkew Returns the Rayleigh distribution’s theoretical skew (third moment), measuring the direction of the distribution’s tail. Positive (negative) skew means average exceeds (is less than) median and the tail points to the right (left). 263. B2DistributionRayleighStdev Returns the Rayleigh distribution’s theoretical standard deviation (second moment), measuring the width and average dispersion of all points around the mean. 264. B2DistributionTKurtosis Returns the Student’s T distribution’s theoretical excess kurtosis (fourth moment), measuring the peakedness of the distribution and its extreme tail events. An excess kurtosis of 0 implies a normal tail. 265. B2DistributionTMean Returns the Student’s T distribution’s theoretical mean or expected value (first moment), measuring the central tendency of the distribution. 266. B2DistributionTSkew Returns the Student’s T distribution’s theoretical skew (third moment), measuring the direction of the distribution’s tail. Positive (negative) skew means average exceeds (is less than) median and the tail points to the right (left). 267. B2DistributionTStdev Returns the Student’s T distribution’s theoretical standard deviation (second moment), measuring the width and average dispersion of all points around the mean. 268. B2DistributionTriangularKurtosis Returns the Triangular distribution’s theoretical excess kurtosis (fourth moment), measuring the peakedness of the distribution and its extreme tail events. An excess kurtosis of 0 implies a normal tail. 269. B2DistributionTriangularMean Returns the Triangular distribution’s theoretical mean or expected value (first moment), measuring the central tendency of the distribution. 270. B2DistributionTriangularSkew Returns the Triangular distribution’s theoretical skew (third moment), measuring the direction of the distribution’s tail. Positive (negative) skew means average exceeds (is less than) median and the tail points to the right (left). 271. B2DistributionTriangularStdev Returns the Triangular distribution’s theoretical standard deviation (second moment), measuring the width and average dispersion of all points around the mean. 272. B2DistributionUniformKurtosis Returns the Uniform distribution’s theoretical excess kurtosis (fourth moment), measuring the peakedness of the distribution and its extreme tail events. An excess kurtosis of 0 implies a normal tail.
273. B2DistributionUniformMean Returns the Uniform distribution’s theoretical mean or expected value (first moment), measuring the central tendency of the distribution. 274. B2DistributionUniformSkew Returns the Uniform distribution’s theoretical skew (third moment), measuring the direction of the distribution’s tail. Positive (negative) skew means average exceeds (is less than) median and the tail points to the right (left). 275. B2DistributionUniformStdev Returns the Uniform distribution’s theoretical standard deviation (second moment), measuring the width and average dispersion of all points around the mean. 276. B2DistributionWeibullKurtosis Returns the Weibull distribution’s theoretical excess kurtosis (fourth moment), measuring the peakedness of the distribution and its extreme tail events. An excess kurtosis of 0 implies a normal tail. 277. B2DistributionWeibullMean Returns the Weibull distribution’s theoretical mean or expected value (first moment), measuring the central tendency of the distribution. 278. B2DistributionWeibullSkew Returns the Weibull distribution’s theoretical skew (third moment), measuring the direction of the distribution’s tail. Positive (negative) skew means average exceeds (is less than) median and the tail points to the right (left). 279. B2DistributionWeibullStdev Returns the Weibull distribution’s theoretical standard deviation (second moment), measuring the width and average dispersion of all points around the mean. 280. B2DistributionCDFBernoulli Computes the Bernoulli distribution’s theoretical Cumulative Distribution Function (CDF)—that is, the cumulative probability of the distribution at all points less than or equal to X. 281. B2DistributionCDFBeta Computes the Beta distribution’s theoretical Cumulative Distribution Function (CDF)—that is, the cumulative probability of the distribution at all points less than or equal to X. 282. B2DistributionCDFBinomial Computes the Binomial distribution’s theoretical Cumulative Distribution Function (CDF)—that is, the cumulative probability of the distribution at all points less than or equal to X. 283. B2DistributionCDFChiSquare Computes the Chi-Square distribution’s theoretical Cumulative Distribution Function (CDF)—that is, the cumulative probability of the distribution at all points less than or equal to X. 284. B2DistributionCDFDiscreteUniform Computes the Discrete Uniform distribution’s theoretical Cumulative Distribution Function (CDF)—that is, the cumulative probability of the distribution at all points less than or equal to X.
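The CDF family maps an X value to a cumulative probability; in SciPy the same quantities are the distributions' cdf methods (again a cross-check, not the toolkit's code):

    from scipy import stats

    print(stats.norm(loc=0, scale=1).cdf(1.645))        # roughly 0.95
    print(stats.binom(n=10, p=0.3).cdf(4))              # P(at most 4 successes)
    print(stats.weibull_min(c=1.5, scale=2.0).cdf(3.0)) # Weibull CDF at X = 3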
285. B2DistributionCDFExponential Computes the Exponential distribution’s theoretical Cumulative Distribution Function (CDF)—that is, the cumulative probability of the distribution at all points less than or equal to X. 286. B2DistributionCDFFDist Computes the F distribution’s theoretical Cumulative Distribution Function (CDF)—that is, the cumulative probability of the distribution at all points less than or equal to X. 287. B2DistributionCDFGamma Computes the Gamma distribution’s theoretical Cumulative Distribution Function (CDF)—that is, the cumulative probability of the distribution at all points less than or equal to X. 288. B2DistributionCDFGeometric Computes the Geometric distribution’s theoretical Cumulative Distribution Function (CDF)—that is, the cumulative probability of the distribution at all points less than or equal to X. 289. B2DistributionCDFGumbelMax Computes the Gumbel Max distribution’s theoretical Cumulative Distribution Function (CDF)—that is, the cumulative probability of the distribution at all points less than or equal to X. 290. B2DistributionCDFGumbelMin Computes the Gumbel Min distribution’s theoretical Cumulative Distribution Function (CDF)—that is, the cumulative probability of the distribution at all points less than or equal to X. 291. B2DistributionCDFLogistic Computes the Logistic distribution’s theoretical Cumulative Distribution Function (CDF)—that is, the cumulative probability of the distribution at all points less than or equal to X. 292. B2DistributionCDFLognormal Computes the Lognormal distribution’s theoretical Cumulative Distribution Function (CDF)—that is, the cumulative probability of the distribution at all points less than or equal to X. 293. B2DistributionCDFNormal Computes the Normal distribution’s theoretical Cumulative Distribution Function (CDF)—that is, the cumulative probability of the distribution at all points less than or equal to X. 294. B2DistributionCDFPareto Computes the Pareto distribution’s theoretical Cumulative Distribution Function (CDF)—that is, the cumulative probability of the distribution at all points less than or equal to X. 295. B2DistributionCDFPoisson Computes the Poisson distribution’s theoretical Cumulative Distribution Function (CDF)—that is, the cumulative probability of the distribution at all points less than or equal to X. 296. B2DistributionCDFRayleigh Computes the Rayleigh distribution’s theoretical Cumulative Distribution
Function (CDF)—that is, the cumulative probability of the distribution at all points less than or equal to X. 297. B2DistributionCDFStandardNormal Computes the Standard Normal distribution’s theoretical Cumulative Distribution Function (CDF)—that is, the cumulative probability of the distribution at all points less than or equal to X. 298. B2DistributionCDFTDist Computes the Student’s T distribution’s theoretical Cumulative Distribution Function (CDF)—that is, the cumulative probability of the distribution at all points less than or equal to X. 299. B2DistributionCDFTriangular Computes the Triangular distribution’s theoretical Cumulative Distribution Function (CDF)—that is, the cumulative probability of the distribution at all points less than or equal to X. 300. B2DistributionCDFUniform Computes the Uniform distribution’s theoretical Cumulative Distribution Function (CDF)—that is, the cumulative probability of the distribution at all points less than or equal to X. 301. B2DistributionCDFWeibull Computes the Weibull distribution’s theoretical Cumulative Distribution Function (CDF)—that is, the cumulative probability of the distribution at all points less than or equal to X. 302. B2DistributionICDFBernoulli Computes the Bernoulli distribution’s theoretical Inverse Cumulative Distribution Function (ICDF); that is, given the cumulative probability between 0 and 1 and the distribution’s parameters, the function returns the relevant X value.
303. B2DistributionICDFBeta Computes the Beta distribution’s theoretical Inverse Cumulative Distribution Function (ICDF); that is, given the cumulative probability between 0 and 1 and the distribution’s parameters, the function returns the relevant X value. 304. B2DistributionICDFBinomial Computes the Binomial distribution’s theoretical Inverse Cumulative Distribution Function (ICDF); that is, given the cumulative probability between 0 and 1 and the distribution’s parameters, the function returns the relevant X value. 305. B2DistributionICDFChiSquare Computes the Chi-Square distribution’s theoretical Inverse Cumulative Distribution Function (ICDF); that is, given the cumulative probability between 0 and 1 and the distribution’s parameters, the function returns the relevant X value. 306. B2DistributionICDFDiscreteUniform Computes the Discrete Uniform distribution’s theoretical Inverse Cumulative Distribution Function (ICDF); that is, given the cumulative probability between 0 and 1 and the distribution’s parameters, the function returns the relevant X value.
307. B2DistributionICDFExponential Computes the Exponential distribution’s theoretical Inverse Cumulative Distribution Function (ICDF); that is, given the cumulative probability between 0 and 1 and the distribution’s parameters, the function returns the relevant X value. 308. B2DistributionICDFFDist Computes the F distribution’s theoretical Inverse Cumulative Distribution Function (ICDF); that is, given the cumulative probability between 0 and 1 and the distribution’s parameters, the function returns the relevant X value. 309. B2DistributionICDFGamma Computes the Gamma distribution’s theoretical Inverse Cumulative Distribution Function (ICDF); that is, given the cumulative probability between 0 and 1 and the distribution’s parameters, the function returns the relevant X value. 310. B2DistributionICDFGeometric Computes the Geometric distribution’s theoretical Inverse Cumulative Distribution Function (ICDF); that is, given the cumulative probability between 0 and 1 and the distribution’s parameters, the function returns the relevant X value. 311. B2DistributionICDFGumbelMax Computes the Gumbel Max distribution’s theoretical Inverse Cumulative Distribution Function (ICDF); that is, given the cumulative probability between 0 and 1 and the distribution’s parameters, the function returns the relevant X value. 312. B2DistributionICDFGumbelMin Computes the Gumbel Min distribution’s theoretical Inverse Cumulative Distribution Function (ICDF); that is, given the cumulative probability between 0 and 1 and the distribution’s parameters, the function returns the relevant X value. 313. B2DistributionICDFLogistic Computes the Logistic distribution’s theoretical Inverse Cumulative Distribution Function (ICDF); that is, given the cumulative probability between 0 and 1 and the distribution’s parameters, the function returns the relevant X value. 314. B2DistributionICDFLognormal Computes the Lognormal distribution’s theoretical Inverse Cumulative Distribution Function (ICDF); that is, given the cumulative probability between 0 and 1 and the distribution’s parameters, the function returns the relevant X value. 315. B2DistributionICDFNormal Computes the Normal distribution’s theoretical Inverse Cumulative Distribution Function (ICDF); that is, given the cumulative probability between 0 and 1 and the distribution’s parameters, the function returns the relevant X value. 316. B2DistributionICDFPareto Computes the Pareto distribution’s theoretical Inverse Cumulative Distribution Function (ICDF); that is, given the cumulative probability between 0 and 1 and the distribution’s parameters, the function returns the relevant X value.
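The ICDF family is the inverse mapping, SciPy's ppf; for continuous distributions the two are exact inverses of each other:

    from scipy import stats

    d = stats.lognorm(0.5)       # lognormal with shape parameter 0.5
    x = d.ppf(0.975)             # X value at the 97.5th percentile
    print(x, d.cdf(x))           # round trip: cdf(ppf(p)) recovers p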
317. B2DistributionICDFPoisson Computes the Poisson distribution’s theoretical Inverse Cumulative Distribution Function (ICDF); that is, given the cumulative probability between 0 and 1 and the distribution’s parameters, the function returns the relevant X value. 318. B2DistributionICDFRayleigh Computes the Rayleigh distribution’s theoretical Inverse Cumulative Distribution Function (ICDF); that is, given the cumulative probability between 0 and 1 and the distribution’s parameters, the function returns the relevant X value. 319. B2DistributionICDFStandardNormal Computes the Standard Normal distribution’s theoretical Inverse Cumulative Distribution Function (ICDF); that is, given the cumulative probability between 0 and 1 and the distribution’s parameters, the function returns the relevant X value. 320. B2DistributionICDFTDist Computes the Student’s T distribution’s theoretical Inverse Cumulative Distribution Function (ICDF); that is, given the cumulative probability between 0 and 1 and the distribution’s parameters, the function returns the relevant X value. 321. B2DistributionICDFTriangular Computes the Triangular distribution’s theoretical Inverse Cumulative Distribution Function (ICDF); that is, given the cumulative probability between 0 and 1 and the distribution’s parameters, the function returns the relevant X value. 322. B2DistributionICDFUniform Computes the Uniform distribution’s theoretical Inverse Cumulative Distribution Function (ICDF); that is, given the cumulative probability between 0 and 1 and the distribution’s parameters, the function returns the relevant X value. 323. B2DistributionICDFWeibull Computes the Weibull distribution’s theoretical Inverse Cumulative Distribution Function (ICDF); that is, given the cumulative probability between 0 and 1 and the distribution’s parameters, the function returns the relevant X value. 324. B2DistributionPDFBernoulli Computes the Bernoulli distribution’s theoretical Probability Density Function (PDF). The PDF of a discrete distribution returns the exact probability mass function or probability of occurrence, but the PDFs of continuous distributions are only theoretical values and not exact probabilities. 325. B2DistributionPDFBeta Computes the Beta distribution’s theoretical Probability Density Function (PDF). The PDF of a discrete distribution returns the exact probability mass function or probability of occurrence, but the PDFs of continuous distributions are only theoretical values and not exact probabilities. 326. B2DistributionPDFBinomial Computes the Binomial distribution’s theoretical Probability Density Function (PDF). The PDF of a discrete distribution returns the exact probability mass
function or probability of occurrence, but the PDFs of continuous distributions are only theoretical values and not exact probabilities.
327. B2DistributionPDFChiSquare Computes the Chi-Square distribution’s theoretical Probability Density Function (PDF). The PDF of a discrete distribution returns the exact probability mass function or probability of occurrence, but the PDFs of continuous distributions are only theoretical values and not exact probabilities.
328. B2DistributionPDFDiscreteUniform Computes the Discrete Uniform distribution’s theoretical Probability Density Function (PDF). The PDF of a discrete distribution returns the exact probability mass function or probability of occurrence, but the PDFs of continuous distributions are only theoretical values and not exact probabilities.
329. B2DistributionPDFExponential Computes the Exponential distribution’s theoretical Probability Density Function (PDF). The PDF of a discrete distribution returns the exact probability mass function or probability of occurrence, but the PDFs of continuous distributions are only theoretical values and not exact probabilities.
330. B2DistributionPDFFDist Computes the F distribution’s theoretical Probability Density Function (PDF). The PDF of a discrete distribution returns the exact probability mass function or probability of occurrence, but the PDFs of continuous distributions are only theoretical values and not exact probabilities.
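As an illustration of why a continuous PDF is a density rather than a probability, the Exponential density f(x) = lambda * e^(-lambda * x) can exceed 1 when lambda is large. A minimal Python sketch (illustrative, not the worksheet function):

```python
import math

def exponential_pdf(x, rate):
    """Density of the Exponential distribution: rate * exp(-rate * x) for
    x >= 0. This is a density, not a probability, so it can exceed 1."""
    return rate * math.exp(-rate * x) if x >= 0.0 else 0.0

print(exponential_pdf(0.5, 2.0))  # about 0.736
print(exponential_pdf(0.0, 4.0))  # 4.0, a density value greater than 1
```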
331. B2DistributionPDFGamma Computes the Gamma distribution’s theoretical Probability Density Function (PDF). The PDF of a discrete distribution returns the exact probability mass function or probability of occurrence, but the PDFs of continuous distributions are only theoretical values and not exact probabilities.
332. B2DistributionPDFGeometric Computes the Geometric distribution’s theoretical Probability Density Function (PDF). The PDF of a discrete distribution returns the exact probability mass function or probability of occurrence, but the PDFs of continuous distributions are only theoretical values and not exact probabilities.
333. B2DistributionPDFGumbelMax Computes the Gumbel Max distribution’s theoretical Probability Density Function (PDF). The PDF of a discrete distribution returns the exact probability mass function or probability of occurrence, but the PDFs of continuous distributions are only theoretical values and not exact probabilities.
334. B2DistributionPDFGumbelMin Computes the Gumbel Min distribution’s theoretical Probability Density Function (PDF). The PDF of a discrete distribution returns the exact probability mass function or probability of occurrence, but the PDFs of continuous distributions are only theoretical values and not exact probabilities.
335. B2DistributionPDFLogistic Computes the Logistic distribution’s theoretical Probability Density Function (PDF). The PDF of a discrete distribution returns the exact probability mass
function or probability of occurrence, but the PDFs of continuous distributions are only theoretical values and not exact probabilities.
336. B2DistributionPDFLognormal Computes the Lognormal distribution’s theoretical Probability Density Function (PDF). The PDF of a discrete distribution returns the exact probability mass function or probability of occurrence, but the PDFs of continuous distributions are only theoretical values and not exact probabilities.
337. B2DistributionPDFNormal Computes the Normal distribution’s theoretical Probability Density Function (PDF). The PDF of a discrete distribution returns the exact probability mass function or probability of occurrence, but the PDFs of continuous distributions are only theoretical values and not exact probabilities.
338. B2DistributionPDFPareto Computes the Pareto distribution’s theoretical Probability Density Function (PDF). The PDF of a discrete distribution returns the exact probability mass function or probability of occurrence, but the PDFs of continuous distributions are only theoretical values and not exact probabilities.
339. B2DistributionPDFPoisson Computes the Poisson distribution’s theoretical Probability Density Function (PDF). The PDF of a discrete distribution returns the exact probability mass function or probability of occurrence, but the PDFs of continuous distributions are only theoretical values and not exact probabilities.
340. B2DistributionPDFRayleigh Computes the Rayleigh distribution’s theoretical Probability Density Function (PDF). The PDF of a discrete distribution returns the exact probability mass function or probability of occurrence, but the PDFs of continuous distributions are only theoretical values and not exact probabilities.
341. B2DistributionPDFStandardNormal Computes the Standard Normal distribution’s theoretical Probability Density Function (PDF). The PDF of a discrete distribution returns the exact probability mass function or probability of occurrence, but the PDFs of continuous distributions are only theoretical values and not exact probabilities.
342. B2DistributionPDFTDist Computes the Student’s T distribution’s theoretical Probability Density Function (PDF). The PDF of a discrete distribution returns the exact probability mass function or probability of occurrence, but the PDFs of continuous distributions are only theoretical values and not exact probabilities.
343. B2DistributionPDFTriangular Computes the Triangular distribution’s theoretical Probability Density Function (PDF). The PDF of a discrete distribution returns the exact probability mass function or probability of occurrence, but the PDFs of continuous distributions are only theoretical values and not exact probabilities.
344. B2DistributionPDFUniform Computes the Uniform distribution’s theoretical Probability Density Function (PDF). The PDF of a discrete distribution returns the exact probability mass
function or probability of occurrence, but the PDFs of continuous distributions are only theoretical values and not exact probabilities.
345. B2DistributionPDFWeibull Computes the Weibull distribution’s theoretical Probability Density Function (PDF). The PDF of a discrete distribution returns the exact probability mass function or probability of occurrence, but the PDFs of continuous distributions are only theoretical values and not exact probabilities.
346. B2EquityLinkedFXCallOptionDomesticValue Values a call option whose underlying asset is in a foreign equity market, where the foreign exchange risk is hedged by a strike price on the foreign exchange rate. The resulting valuation is in the domestic currency.
347. B2EquityLinkedFXPutOptionDomesticValue Values a put option whose underlying asset is in a foreign equity market, where the foreign exchange risk is hedged by a strike price on the foreign exchange rate. The resulting valuation is in the domestic currency.
348. B2EWMAVolatilityForecastGivenPastPrices Computes the annualized volatility forecast of the next period, given a series of historical prices and the corresponding weight placed on the previous volatility estimate (a sketch of the EWMA recursion follows this list).
349. B2EWMAVolatilityForecastGivenPastVolatility Computes the annualized volatility forecast of the next period, given the previous period’s volatility and the change in stock returns in the previous period.
350. B2ExtremeSpreadCallOption The maturity is divided into two segments, and the call option pays the difference between the maximum asset value of segment two and the maximum of segment one.
351. B2ExtremeSpreadPutOption The maturity is divided into two segments, and the put option pays the difference between the minimum of segment two’s asset value and the minimum of segment one’s asset value.
352. B2ExtremeSpreadReverseCallOption The maturity is divided into two segments, and a reverse call pays the minimum of segment one less the minimum of segment two.
353. B2ExtremeSpreadReversePutOption The maturity is divided into two segments, and a reverse put pays the maximum of segment one less the maximum of segment two.
354. B2FiniteDifferenceAmericanCall Computes the American call option using finite differencing methods, as an alternative to simulation, closed-form approximation models, and lattices.
355. B2FiniteDifferenceAmericanPut Computes the American put option using finite differencing methods, as an alternative to simulation, closed-form approximation models, and lattices.
356. B2FiniteDifferenceEuropeanCall Computes the European call option using finite differencing methods, as an alternative to simulation, closed-form approximation models, and lattices.
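The EWMA forecasts in items 348 and 349 follow the standard exponentially weighted moving average recursion, var_t = lambda * var_(t-1) + (1 - lambda) * r_(t-1)^2. The Python below is a sketch under assumptions of our own (log returns, a 0.94 weight as in RiskMetrics, 252 trading periods per year, and seeding the recursion with the first squared return); the toolkit’s exact conventions may differ.

```python
import math

def ewma_volatility_forecast(prices, lam=0.94, periods_per_year=252):
    """Next-period annualized volatility via the EWMA recursion:
    var_t = lam * var_{t-1} + (1 - lam) * r_{t-1}^2."""
    returns = [math.log(prices[i] / prices[i - 1]) for i in range(1, len(prices))]
    var = returns[0] ** 2  # illustrative seed for the recursion
    for r in returns[1:]:
        var = lam * var + (1.0 - lam) * r * r
    return math.sqrt(var * periods_per_year)

print(ewma_volatility_forecast([100, 101, 99.5, 102, 101.2]))
```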
357. B2FiniteDifferenceEuropeanPut Computes the European put option using finite differencing methods, as an alternative to simulation, closed-form approximation models, and lattices.
358. B2FixedStrikeLookbackCall Strike price is fixed, and at expiration the payoff is the maximum observed asset price less the strike price during the lifetime of the option.
359. B2FixedStrikeLookbackPut Strike price is fixed, and at expiration the payoff is the strike price less the lowest observed asset price during the lifetime of the option.
360. B2FixedStrikePartialLookbackCall Strike price is fixed, and at expiration the payoff is the maximum observed asset price less the strike price during the period from the start of the lookback to the maturity of the option.
361. B2FixedStrikePartialLookbackPut Strike price is fixed, and at expiration the payoff is the strike price less the lowest observed asset price during the period from the start of the lookback to the maturity of the option.
362. B2FloatingStrikeLookbackCallonMin Strike price is floating, and at expiration the payoff on the call option is being able to purchase the underlying asset at the minimum observed price during the life of the option.
363. B2FloatingStrikeLookbackPutonMax Strike price is floating, and at expiration the payoff on the put option is being able to sell the underlying asset at the maximum observed asset price during the life of the option.
364. B2FloatingStrikePartialLookbackCallonMin Strike price is floating, and at expiration the payoff on the call option is being able to purchase the underlying at the minimum observed asset price from inception to the end of the lookback time.
365. B2FloatingStrikePartialLookbackPutonMax Strike price is floating, and at expiration the payoff on the put option is being able to sell the underlying at the maximum observed asset price from inception to the end of the lookback time.
366. B2ForecastBrownianMotionSimulatedSeries Computes the entire time-series of Brownian motion stochastic process forecast values.
367. B2ForecastDistributionValue Computes the forecast price of an asset in the future, assuming the asset follows a Brownian motion random walk, and returns the forecast price given the cumulative probability level.
368. B2ForecastDistributionValuePercentile Computes the cumulative probability or percentile of an asset in the future, assuming the asset follows a Brownian motion random walk, and returns the forecast cumulative percentile given the future price.
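Items 367 and 368 invert one another: under a Brownian motion random walk, the price percentile at horizon T is S_T(p) = S_0 * exp((mu - sigma^2/2) * T + sigma * sqrt(T) * z_p). A Python sketch under the assumption of annualized drift and volatility inputs (illustrative, not the worksheet function):

```python
from math import exp, sqrt
from statistics import NormalDist

def forecast_distribution_value(s0, mu, sigma, t, prob):
    """Future price percentile under geometric Brownian motion:
    S0 * exp((mu - sigma^2/2) * t + sigma * sqrt(t) * z_p)."""
    z = NormalDist().inv_cdf(prob)
    return s0 * exp((mu - 0.5 * sigma ** 2) * t + sigma * sqrt(t) * z)

print(forecast_distribution_value(100, 0.10, 0.25, 1.0, 0.95))  # upper tail
print(forecast_distribution_value(100, 0.10, 0.25, 1.0, 0.50))  # median path
```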
369. B2ForecastDistributionReturns Computes the forecast return of an asset in the future, assuming the asset follows a Brownian motion random walk, and returns the forecast percent return given the cumulative probability level.
370. B2ForecastDistributionReturnsPercentile Computes the cumulative probability or percentile of an asset’s returns in the future, assuming the asset follows a Brownian motion random walk, and returns the forecast cumulative percentile given the return.
371. B2ForecastJumpDiffusionSimulatedSeries Computes the entire time-series of a jump-diffusion stochastic process forecast values.
372. B2ForecastMeanReversionSimulatedSeries Computes the entire time-series of a mean-reverting stochastic process forecast values.
373. B2ForecastIncrementalFinancialNeeds Computes the incremental funds required to cover the projected organic sales growth of the company based on the projected year’s financials.
374. B2ForecastIncrementalPercentSalesGrowthFinancedExternally Computes the incremental funds as a percent of sales growth that is required from external funding to cover the projected organic sales growth of the company.
375. B2ForeignEquityDomesticCurrencyCall Computes the value of a foreign-based equity call option struck in a domestic currency and accounting for the exchange rate volatility.
376. B2ForeignEquityDomesticCurrencyPut Computes the value of a foreign-based equity put option struck in a domestic currency and accounting for the exchange rate volatility.
377. B2ForeignEquityFixedFXRateDomesticValueQuantoCall Quanto call options are denominated in another currency than the underlying asset, with expanding or contracting protection coverage of the foreign exchange rates.
378. B2ForeignEquityFixedFXRateDomesticValueQuantoPut Quanto put options are denominated in another currency than the underlying asset, with expanding or contracting protection coverage of the foreign exchange rates.
379. B2ForwardRate Computes the Forward Interest Rate given two Spot Rates.
380. B2ForwardStartCallOption Starts proportionally in or out of the money in the future. Alpha < 1: call starts (1 – A)% in the money, put starts (1 – A)% out of the money. Alpha > 1: call (A – 1)% out of the money, put (A – 1)% in the money.
381. B2ForwardStartPutOption Starts proportionally in or out of the money in the future. Alpha < 1: call starts (1 – A)% in the money, put starts (1 – A)% out of the money. Alpha > 1: call (A – 1)% out of the money, put (A – 1)% in the money.
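For item 379, the no-arbitrage forward rate between times t1 and t2 falls out of the two spot rates; with continuous compounding it is f = (r2*t2 - r1*t1)/(t2 - t1). The compounding convention below is an assumption for illustration only; the worksheet function’s convention is not stated in this list.

```python
def forward_rate(r1, t1, r2, t2):
    """Implied forward rate between t1 and t2 from two continuously
    compounded spot rates (zero-coupon no-arbitrage relation)."""
    return (r2 * t2 - r1 * t1) / (t2 - t1)

print(forward_rate(0.04, 1.0, 0.05, 2.0))  # 0.06
```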
382. B2FuturesForwardsCallOption Similar to a regular option, but the underlying asset is a futures or a forward contract. A call option is the option to buy a futures contract, with the specified strike price at which the futures is traded if the option is exercised.
383. B2FuturesForwardsPutOption Similar to a regular option, but the underlying asset is a futures or a forward contract. A put option is the option to sell a futures contract, with the specified strike price at which the futures is traded if the option is exercised.
384. B2FuturesSpreadCall The payoff of a spread option is the difference between the two futures’ values at expiration. The spread is Futures 1 – Futures 2, and the call payoff is Spread – Strike.
385. B2FuturesSpreadPut The payoff of a spread option is the difference between the two futures’ values at expiration. The spread is Futures 1 – Futures 2, and the put payoff is Strike – Spread.
386. B2GARCH Computes the forward-looking volatility forecast using the generalized autoregressive conditional heteroskedasticity (p, q) model, where future volatilities are forecast based on historical price levels and information.
387. B2GapCallOption The call option is knocked in if the asset exceeds the reference Strike 1, providing a payoff of the underlying asset price less Strike 2.
388. B2GapPutOption The put option is knocked in only if the underlying asset is less than the reference Strike 1, providing a payoff of Strike 2 less the underlying asset value.
389. B2GeneralizedBlackScholesCall Returns the Black-Scholes model with a continuous dividend yield call option.
390. B2GeneralizedBlackScholesCallCashDividends Modification of the Generalized Black-Scholes model to solve European call options, assuming a series of dividend cash flows that may be even or uneven. A series of dividend payments and time are required.
391. B2GeneralizedBlackScholesPut Returns the Black-Scholes model with a continuous dividend yield put option.
392. B2GeneralizedBlackScholesPutCashDividends Modification of the Generalized Black-Scholes model to solve European put options, assuming a series of dividend cash flows that may be even or uneven. A series of dividend payments and time are required.
393. B2GraduatedBarrierDownandInCall Barriers are graduated ranges between lower and upper values. The option is knocked in the money proportionally depending on how low the asset value is in the range.
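Item 389’s model is the familiar generalized Black-Scholes formula with a continuous dividend yield q: c = S e^(-qT) N(d1) - K e^(-rT) N(d2). A self-contained sketch (illustrative, not the add-in’s code):

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf  # standard normal CDF

def generalized_black_scholes_call(s, k, t, r, q, sigma):
    """European call with continuous dividend yield q:
    S e^{-qT} N(d1) - K e^{-rT} N(d2)."""
    d1 = (log(s / k) + (r - q + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * exp(-q * t) * N(d1) - k * exp(-r * t) * N(d2)

print(generalized_black_scholes_call(100, 100, 1.0, 0.05, 0.02, 0.25))
```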
394. B2GraduatedBarrierDownandOutCall Barriers are graduated ranges between lower and upper values. The option is knocked out of the money proportionally depending on how low the asset value is in the range.
395. B2GraduatedBarrierUpandInPut Barriers are graduated ranges between lower and upper values. The option is knocked in the money proportionally depending on how high the asset value is in the range.
396. B2GraduatedBarrierUpandOutPut Barriers are graduated ranges between lower and upper values. The option is knocked out of the money proportionally depending on how high the asset value is in the range.
397. B2ImpliedVolatilityBestCase Computes the implied volatility given an expected value of an asset, along with an alternative best-case scenario value and its corresponding percentile (must be above 50%).
398. B2ImpliedVolatilityCall Computes the implied volatility in a European call option given all the input parameters and the option value.
399. B2ImpliedVolatilityPut Computes the implied volatility in a European put option given all the input parameters and the option value.
400. B2ImpliedVolatilityWorstCase Computes the implied volatility given an expected value of an asset, along with an alternative worst-case scenario value and its corresponding percentile (must be below 50%).
401. B2InterestAnnualtoPeriodic Computes the periodic compounding rate based on the annualized compounding interest rate per year.
402. B2InterestCaplet Computes the interest rate caplet (sum all the caplets into the total value of the interest rate cap) and acts like an interest rate call option.
403. B2InterestContinuousToDiscrete Returns the corresponding discrete compounding interest rate, given the continuous compounding rate.
404. B2InterestContinuousToPeriodic Computes the periodic compounding interest rate based on a continuous compounding rate.
405. B2InterestDiscreteToContinuous Returns the corresponding continuous compounding interest rate, given the discrete compounding rate.
406. B2InterestFloorlet Computes the interest rate floorlet (sum all the floorlets into the total value of the interest rate floor) and acts like an interest rate put option.
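The conversions in items 403 and 405 are exact: a continuous rate r_c and an annual discrete rate r_d are linked by r_d = e^(r_c) - 1 and r_c = ln(1 + r_d). A short sketch:

```python
from math import exp, log

def continuous_to_discrete(rc):
    """Annual discretely compounded rate equivalent to a continuous rate."""
    return exp(rc) - 1.0

def discrete_to_continuous(rd):
    """Continuously compounded rate equivalent to an annual discrete rate."""
    return log(1.0 + rd)

rd = continuous_to_discrete(0.05)
print(rd)                          # about 0.05127
print(discrete_to_continuous(rd))  # back to 0.05
```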
407. B2InterestPeriodictoAnnual Computes the annualized compounding interest rate per year based on a periodic compounding rate.
408. B2InterestPeriodictoContinuous Computes the continuous compounding rate based on the periodic compounding interest rate.
409. B2InverseGammaCallOption Computes the European call option assuming an inverse Gamma distribution, rather than a normal distribution, and is important for deep out-of-the-money options.
410. B2InverseGammaPutOption Computes the European put option assuming an inverse Gamma distribution, rather than a normal distribution, and is important for deep out-of-the-money options.
411. B2IRRContinuous Returns the continuously discounted Internal Rate of Return for a cash flow series with its respective cash flow times in years.
412. B2IRRDiscrete Returns the discretely discounted Internal Rate of Return for a cash flow series with its respective cash flow times in years.
413. B2LinearInterpolation Interpolates and fills in the missing values of a time series.
414. B2MarketPriceRisk Computes the market price of risk used in a variety of options analyses, using market return, risk-free return, volatility of the market, and correlation between the market and the asset.
415. B2MathGammaLog Returns the result from a Log Gamma function.
416. B2MathIncompleteBeta Returns the result from an Incomplete Beta function.
417. B2MathIncompleteGammaP Returns the result from an Incomplete Gamma P function.
418. B2MathIncompleteGammaQ Returns the result from an Incomplete Gamma Q function.
419. B2MatrixMultiplyAxB Multiplies two compatible matrices, such as M × N and N × M, to create an M × M matrix. Copy and paste the function to the entire matrix area and use Ctrl+Shift+Enter to obtain the matrix.
420. B2MatrixMultiplyAxTransposeB Multiplies the first matrix with the transpose of the second matrix (multiplies M × N with M × N matrix by transposing the second matrix to N × M, generating an M × M matrix). Copy and paste the function to the entire matrix area and use Ctrl+Shift+Enter to obtain the matrix.
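The matrix functions in items 419 through 422 are entered as Excel array formulas; the underlying linear algebra is plain. A Python sketch of the A × transpose(B) case using lists of rows (illustrative only):

```python
def transpose(m):
    """Transpose an M x N matrix (list of rows) into N x M."""
    return [list(row) for row in zip(*m)]

def matmul(a, b):
    """Multiply an M x N matrix by an N x P matrix."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

a = [[1, 2, 3], [4, 5, 6]]      # 2 x 3
print(matmul(a, transpose(a)))   # A x A', a 2 x 2 result: [[14, 32], [32, 77]]
```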
421. B2MatrixMultiplyTransposeAxB Multiplies the transpose of the first matrix with the second matrix (multiplies M × N with M × N matrix by transposing the first matrix to N × M, generating an N × N matrix). Copy and paste the function to the entire matrix area and use Ctrl+Shift+Enter to obtain the matrix.
422. B2MatrixTranspose Transposes a matrix from M × N to N × M. Copy and paste the function to the entire matrix area and use Ctrl+Shift+Enter to obtain the matrix.
423. B2MertonJumpDiffusionCall Call value of an underlying whose asset returns are assumed to follow a Poisson Jump Diffusion process; that is, prices jump several times a year, and cumulatively these jumps explain a percentage of the total asset volatility.
424. B2MertonJumpDiffusionPut Put value of an underlying whose asset returns are assumed to follow a Poisson Jump Diffusion process; that is, prices jump several times a year, and cumulatively these jumps explain a percentage of the total asset volatility.
425. B2NormalTransform Converts values into a normalized distribution.
426. B2NPVContinuous Returns the Net Present Value of a cash flow series given the time and discount rate, using continuous discounting.
427. B2NPVDiscrete Returns the Net Present Value of a cash flow series given the time and discount rate, using discrete discounting.
428. B2OptionStrategyLongBearCreditSpread Returns the matrix [stock price, buy put, sell put, profit] of a long bearish credit spread (buying a higher strike put with a high price and selling a lower strike put with a low price).
429. B2OptionStrategyLongBullCreditSpread Returns the matrix [stock price, buy put, sell put, profit] of a bullish credit spread (buying a lower strike put at a low price and selling a higher strike put at a high price).
430. B2OptionStrategyLongBearDebitSpread Returns the matrix [stock price, buy call, sell call, profit] of a long bearish debit spread (buying a higher strike call with a low price and selling a lower strike call with a high price).
431. B2OptionStrategyLongBullDebitSpread Returns the matrix [stock price, buy call, sell call, profit] of a bullish debit spread (buying a lower strike call at a high price and selling a further out-of-the-money higher strike call at a low price).
432. B2OptionStrategyLongCoveredCall Returns the matrix [stock price, buy stock, sell call, profit] of a long covered call position (buying the stock and selling a call of the same asset).
433. B2OptionStrategyLongProtectivePut Returns the matrix [stock price, buy stock, buy put, profit] of a long protective put position (buying the stock and buying a put of the same asset).
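For the strategy functions, the returned matrix traces the profit profile across terminal stock prices. Below is a sketch of a long covered call (item 432) under one plausible reading of the columns; the entry price, strike, and premium parameter names are our own:

```python
def long_covered_call(prices, s0, strike, premium):
    """Profit rows [stock price, stock P&L, short call P&L, total] for a
    covered call: long stock at s0, short a call at `strike` for `premium`."""
    rows = []
    for s in prices:
        stock = s - s0
        short_call = premium - max(s - strike, 0.0)
        rows.append([s, stock, short_call, stock + short_call])
    return rows

for row in long_covered_call([80, 90, 100, 110, 120], 100, 105, 3.0):
    print(row)  # upside is capped at strike - s0 + premium = 8.0
```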
434. B2OptionStrategyLongStraddle Returns the matrix [stock price, buy call, buy put, profit] of a long straddle position (buying an equal number of puts and calls with identical strike price and expiration) to profit from high volatility.
435. B2OptionStrategyLongStrangle Returns the matrix [stock price, buy call, buy put, profit] of a long strangle (buying a higher strike call at a low price and buying a lower strike put at a low price—close expirations) to profit from high volatility.
436. B2OptionStrategyWriteCoveredCall Returns the matrix [stock price, sell stock, buy call, profit] of writing a covered call (selling the stock and buying a call of the same asset).
437. B2OptionStrategyWriteProtectivePut Returns the matrix [stock price, sell stock, sell put, profit] of writing a protective put position (selling the stock and selling a put of the same asset).
438. B2OptionStrategyWriteStraddle Returns the matrix [stock price, sell call, sell put, profit] of writing a straddle position (selling an equal number of puts and calls with identical strike price and expiration) to profit from low volatility.
439. B2OptionStrategyWriteStrangle Returns the matrix [stock price, sell call, sell put, profit] of writing a strangle (sell a higher strike call at a low price and sell a lower strike put at a low price—close expirations) to profit from low volatility.
440. B2Payback Computes the payback period given some initial investment and subsequent cash flows.
441. B2PerpetualCallOption Computes the American perpetual call option. Note that it returns an error if dividend is 0% (this is because the American option reverts to European and a perpetual European has no value).
442. B2PerpetualPutOption Computes the American perpetual put option. Note that it returns an error if dividend is 0% (this is because the American option reverts to European and a perpetual European has no value).
443. B2PortfolioReturns Computes the portfolio weighted average expected returns given individual asset returns and allocations.
444. B2PortfolioRisk Computes the portfolio risk given individual asset allocations and variance-covariance matrix.
445. B2PortfolioVariance Computes the portfolio variance given individual asset allocations and variance-covariance matrix. Take the square root of the result to obtain the portfolio risk.
446. B2ProbabilityDefaultAdjustedBondYield Computes the required risk-adjusted yield (premium spread plus risk-free rate) to charge given the cumulative probability of default.
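Items 444 and 445 compute w′Σw from the allocation vector and the variance-covariance matrix; portfolio risk is its square root. A dependency-free sketch:

```python
def portfolio_variance(weights, cov):
    """Portfolio variance w' * Sigma * w from asset allocations and a
    variance-covariance matrix (list of rows)."""
    n = len(weights)
    return sum(weights[i] * cov[i][j] * weights[j]
               for i in range(n) for j in range(n))

cov = [[0.04, 0.01], [0.01, 0.09]]
w = [0.6, 0.4]
var = portfolio_variance(w, cov)
print(var, var ** 0.5)  # 0.0336 and a portfolio risk of about 18.33%
```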
447. B2ProbabilityDefaultAverageDefaults Credit Risk Plus’ average number of credit defaults per period using total portfolio credit exposures, average cumulative probability of default, and percentile Value at Risk for the portfolio.
448. B2ProbabilityDefaultCorrelation Computes the correlations of default probabilities given the probabilities of default of each asset and the correlation between their equity prices. The result is typically much smaller than the equity correlation.
449. B2ProbabilityDefaultCumulativeBondYieldApproach Computes the cumulative probability of default from Year 0 to Maturity using a comparable zero bond yield versus a zero risk-free yield and accounting for a recovery rate.
450. B2ProbabilityDefaultCumulativeSpreadApproach Computes the cumulative probability of default from Year 0 to Maturity using a comparable risky debt’s spread (premium) versus the risk-free rate and accounting for a recovery rate.
451. B2ProbabilityDefaultHazardRate Computes the hazard rate for a specific year (in survival analysis) using a comparable zero bond yield versus a zero risk-free yield and accounting for a recovery rate.
452. B2ProbabilityDefaultMertonDefaultDistance Distance to Default (does not require market returns and correlations but requires the internal growth rates).
453. B2ProbabilityDefaultMertonI Probability of Default (without regard to Equity Value or Equity Volatility, but requires asset, debt, and market values).
454. B2ProbabilityDefaultMertonII Probability of Default (does not require market returns and correlations but requires the internal asset value and asset volatility).
455. B2ProbabilityDefaultMertonImputedAssetValue Returns the imputed market value of asset given external equity value, equity volatility, and other option inputs. Used in the Merton probability of default model.
456. B2ProbabilityDefaultMertonImputedAssetVolatility Returns the imputed volatility of asset given external equity value, equity volatility, and other option inputs. Used in the Merton probability of default model.
457. B2ProbabilityDefaultMertonMVDebt Computes the market value of debt (for risky debt) in the Merton-based simultaneous options model.
458. B2ProbabilityDefaultMertonRecoveryRate Computes the rate of recovery in percent for risky debt in the Merton-based simultaneous options model.
459. B2ProbabilityDefaultPercentileDefaults Credit Risk Plus method to compute the percentile given some estimated average number of defaults per period.
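The Merton-type measures in items 452 to 456 treat equity as an option on the firm’s assets. One common formulation of the distance to default, sketched below with illustrative parameter names, is DD = [ln(V/D) + (mu - sigma^2/2)T] / (sigma * sqrt(T)), with default probability N(-DD); the toolkit’s variants differ in which inputs they require.

```python
from math import log, sqrt
from statistics import NormalDist

def merton_distance_to_default(asset_value, debt, growth, asset_vol, horizon):
    """Distance to default and the implied default probability N(-DD) in a
    Merton-style structural model (one common formulation)."""
    dd = (log(asset_value / debt) + (growth - 0.5 * asset_vol ** 2) * horizon) \
         / (asset_vol * sqrt(horizon))
    return dd, NormalDist().cdf(-dd)

print(merton_distance_to_default(120.0, 100.0, 0.08, 0.25, 1.0))
```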
460. B2PropertyDepreciation Value of the periodic depreciation allowed on a commercial real estate project, given the percent of price going to improvement and the allowed recovery period.
461. B2PropertyEquityRequired Value of the required equity down payment on a commercial real estate project, given the valuation of the project.
462. B2PropertyLoanAmount Value of the required mortgage amount on a commercial real estate project, given the value of the project and the loan-to-value ratio (the percentage of the project’s value that the loan represents).
463. B2PropertyValuation Value of a commercial real estate property assuming Gross Rent, Vacancy, Operating Expenses, and the Cap Rate at Purchase Date (Net Operating Income/Sale Price).
464. B2PutCallParityCalltoPut Computes the European put option value given the value of a corresponding European call option with identical input assumptions.
465. B2PutCallParityCalltoPutCurrencyOptions Computes the European currency put option value given the value of a corresponding European currency call option on futures and forwards with identical input assumptions.
466. B2PutCallParityCalltoPutFutures Computes the value of a European put option on futures and forwards given the value of a corresponding European call option on futures and forwards with identical input assumptions.
467. B2PutCallParityPuttoCall Computes the European call option value given the value of a corresponding European put option with identical input assumptions.
468. B2PutCallParityPuttoCallCurrencyOptions Computes the value of a European currency call option given the value of a corresponding European currency put option on futures and forwards with identical input assumptions.
469. B2PutCallParityPuttoCallFutures Computes the value of a European call option on futures and forwards given the value of a corresponding European put option on futures and forwards with identical input assumptions.
470. B2PutDelta Returns the option valuation sensitivity Delta (a put option value’s sensitivity to changes in the asset value).
471. B2PutGamma Returns the option valuation sensitivity Gamma (a put option value’s sensitivity to changes in the Delta value).
472. B2PutOptionOnTheMax The maximum values at expiration of both assets are used in option exercise,
where the put option payoff at expiration is the strike price against the maximum price between Asset 1 and Asset 2.
473. B2PutOptionOnTheMin The minimum values at expiration of both assets are used in option exercise, where the put option payoff at expiration is the strike price against the minimum price between Asset 1 and Asset 2.
474. B2PutRho Returns the option valuation sensitivity Rho (a put option value’s sensitivity to changes in the interest rate).
475. B2PutTheta Returns the option valuation sensitivity Theta (a put option value’s sensitivity to changes in the maturity).
476. B2PutVega Returns the option valuation sensitivity Vega (a put option value’s sensitivity to changes in the volatility).
477. B2QueuingMCAveCustomersinSystem Average number of customers in the system, using a multiple-channel queuing model assuming a Poisson arrival rate with Exponential distribution of service times.
478. B2QueuingMCAveCustomersWaiting Average number of customers in the waiting line, using a multiple-channel queuing model assuming a Poisson arrival rate with Exponential distribution of service times.
479. B2QueuingMCAveTimeinSystem Average time a customer spends in the system, using a multiple-channel queuing model assuming a Poisson arrival rate with Exponential distribution of service times.
480. B2QueuingMCAveTimeWaiting Average time a customer spends in the waiting line, using a multiple-channel queuing model assuming a Poisson arrival rate with Exponential distribution of service times.
481. B2QueuingMCProbHaveToWait Probability an arriving customer has to wait, using a multiple-channel queuing model assuming a Poisson arrival rate with Exponential distribution of service times.
482. B2QueuingMCProbNoCustomer Probability that no customers are in the system, using a multiple-channel queuing model assuming a Poisson arrival rate with Exponential distribution of service times.
483. B2QueuingMGKAveCustomersinSystem Average number of customers in the system, using a multiple-channel queuing model assuming a Poisson arrival rate with unknown distribution of service times.
484. B2QueuingMGKCostPerPeriod Total cost per time period, using a multiple-channel queuing model assuming a Poisson arrival rate with unknown distribution of service times.
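The multiple-channel results in items 477 to 482 are the classic M/M/c queue (Poisson arrivals, exponential service, c parallel servers), built on the Erlang C probability of waiting. A sketch of the standard formulas:

```python
from math import factorial

def mmc_metrics(lam, mu, c):
    """Standard M/M/c queue metrics; requires lam < c * mu."""
    a = lam / mu                  # offered load
    rho = a / c                   # per-server utilization
    p0 = 1.0 / (sum(a ** n / factorial(n) for n in range(c))
                + a ** c / (factorial(c) * (1.0 - rho)))
    prob_wait = a ** c / (factorial(c) * (1.0 - rho)) * p0  # Erlang C
    lq = prob_wait * rho / (1.0 - rho)
    return {"P(empty)": p0, "P(wait)": prob_wait, "Lq": lq,
            "L": lq + a, "Wq": lq / lam, "W": lq / lam + 1.0 / mu}

print(mmc_metrics(4.0, 3.0, 2))  # two servers, lambda = 4, mu = 3
```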
485. B2QueuingMGKProbBusy Probability a channel will be busy, using a multiple-channel queuing model assuming a Poisson arrival rate with unknown distribution of service times.
486. B2QueuingSCAAveCustomersinSystem Average number of customers in the system, using an MG1 single-channel arbitrary queuing model assuming a Poisson arrival rate with unknown distribution of service times.
487. B2QueuingSCAAveCustomersWaiting Average number of customers in the waiting line, using an MG1 single-channel arbitrary queuing model assuming a Poisson arrival rate with unknown distribution of service times.
488. B2QueuingSCAAveTimeinSystem Average time a customer spends in the system, using an MG1 single-channel arbitrary queuing model assuming a Poisson arrival rate with unknown distribution of service times.
489. B2QueuingSCAAveTimeWaiting Average time a customer spends in the waiting line, using an MG1 single-channel arbitrary queuing model assuming a Poisson arrival rate with unknown distribution of service times.
490. B2QueuingSCAProbHaveToWait Probability an arriving customer has to wait, using an MG1 single-channel arbitrary queuing model assuming a Poisson arrival rate with unknown distribution of service times.
491. B2QueuingSCAProbNoCustomer Probability that no customers are in the system, using an MG1 single-channel arbitrary queuing model assuming a Poisson arrival rate with unknown distribution of service times.
492. B2QueuingSCAveCustomersinSystem Average number of customers in the system, using a single-channel queuing model.
493. B2QueuingSCAveCustomersWaiting Returns the average number of customers in the waiting line, using a single-channel queuing model.
494. B2QueuingSCAveTimeinSystem Average time a customer spends in the system, using a single-channel queuing model.
495. B2QueuingSCAveTimeWaiting Average time a customer spends in the waiting line, using a single-channel queuing model.
496. B2QueuingSCProbHaveToWait Probability an arriving customer has to wait, using a single-channel queuing model.
497. B2QueuingSCProbNoCustomer Returns the probability that no customers are in the system, using a single-channel queuing model.
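The single-channel versions in items 492 to 497 reduce to the textbook M/M/1 results, all driven by the utilization ratio lambda/mu:

```python
def mm1_metrics(lam, mu):
    """Textbook M/M/1 queue results, valid when lam < mu."""
    rho = lam / mu  # utilization; also the probability an arrival must wait
    return {"P(empty)": 1.0 - rho, "P(wait)": rho,
            "L": lam / (mu - lam),
            "Lq": lam ** 2 / (mu * (mu - lam)),
            "W": 1.0 / (mu - lam),
            "Wq": lam / (mu * (mu - lam))}

print(mm1_metrics(2.0, 3.0))
```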
498. B2RatiosBasicEarningPower Computes the basic earning power (BEP) by accounting for earnings before interest and taxes (EBIT) and the amount of total assets employed.
499. B2RatiosBetaLevered Computes the levered beta from an unlevered beta level after accounting for the tax rate, total debt, and equity values.
500. B2RatiosBetaUnlevered Computes the unlevered beta from a levered beta level after accounting for the tax rate, total debt, and equity values.
501. B2RatiosBookValuePerShare Computes the book value per share (BV) by accounting for the total common equity amount and number of shares outstanding.
502. B2RatiosCapitalCharge Computes the capital charge value (typically used to compute the economic profit of a project).
503. B2RatiosCAPM Computes the capital asset pricing model’s required rate of return in percent, given some benchmark market return, beta risk coefficient, and risk-free rate.
504. B2RatiosCashFlowtoEquityLeveredFirm Cash flow to equity for a levered firm (accounting for operating expenses, taxes, depreciation, amortization, capital expenditures, change in working capital, preferred dividends, principal repaid, and new debt issues).
505. B2RatiosCashFlowtoEquityUnleveredFirm Cash flow to equity for an unlevered firm (accounting for operating expenses, taxes, depreciation, amortization, capital expenditures, and change in working capital).
506. B2RatiosCashFlowtoFirm Cash flow to the firm (accounting for earnings before interest and taxes [EBIT], tax rate, depreciation, capital expenditures, and change in working capital).
507. B2RatiosCashFlowtoFirm2 Cash flow to the firm (accounting for net operating profit after taxes [NOPAT], depreciation, capital expenditures, and change in working capital).
508. B2RatiosContinuingValue1 Computes the continuing value based on a constant growth rate of free cash flows to perpetuity using a Gordon Growth Model.
509. B2RatiosContinuingValue2 Computes the continuing value based on a constant growth rate of free cash flows to perpetuity using net operating profit after taxes (NOPAT), return on invested capital (ROIC), growth rate, and current free cash flow.
510. B2RatiosCostEquity Computes the cost of equity (as used in a CAPM model) using the dividend rate, growth rate of dividends, and current equity price.
511. B2RatiosCurrentRatio Computes the current ratio by accounting for the individual asset and liabilities.
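Item 508’s Gordon Growth continuing value capitalizes next year’s free cash flow at the spread between the discount rate and the perpetual growth rate: CV = FCF * (1 + g) / (WACC - g). A sketch:

```python
def continuing_value_gordon(fcf, growth, wacc):
    """Gordon Growth continuing value: FCF * (1 + g) / (WACC - g).
    Only meaningful when wacc > growth."""
    if wacc <= growth:
        raise ValueError("WACC must exceed the perpetual growth rate")
    return fcf * (1.0 + growth) / (wacc - growth)

print(continuing_value_gordon(100.0, 0.03, 0.09))  # about 1716.67
```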
512. B2RatiosDaysSalesOutstanding Computes the days sales outstanding by looking at the accounts receivable value, total annual sales, and number of days per year.
513. B2RatiosDebtAssetRatio Computes the debt-to-asset ratio by accounting for the total debt and total asset values.
514. B2RatiosDebtEquityRatio Computes the debt-to-equity ratio by accounting for the total debt and total common equity levels.
515. B2RatiosDebtRatio1 Computes the debt ratio by accounting for the total debt and total asset values.
516. B2RatiosDebtRatio2 Computes the debt ratio by accounting for the total equity and total asset values.
517. B2RatiosDividendsPerShare Computes the dividends per share (DPS) by accounting for the dividend payment amount and number of shares outstanding.
518. B2RatiosEarningsPerShare Computes the earnings per share (EPS) by accounting for the net income amount and number of shares outstanding.
519. B2RatiosEconomicProfit1 Computes the economic profit using invested capital, return on invested capital (ROIC), and weighted average cost of capital (WACC).
520. B2RatiosEconomicProfit2 Computes the economic profit using net operating profit after taxes (NOPAT), return on invested capital (ROIC), and weighted average cost of capital (WACC).
521. B2RatiosEconomicProfit3 Computes the economic profit using net operating profit after taxes (NOPAT) and capital charge.
522. B2RatiosEconomicValueAdded Computes the economic value added using earnings before interest and taxes (EBIT), total capital employed, tax rate, and weighted average cost of capital (WACC).
523. B2RatiosEquityMultiplier Computes the equity multiplier (the ratio of total assets to total equity).
524. B2RatiosFixedAssetTurnover Computes the fixed asset turnover by accounting for the annual sales levels and net fixed assets.
525. B2RatiosInventoryTurnover Computes the inventory turnover using sales and inventory levels.
526. B2RatiosMarketBookRatio1 Computes the market to book value (BV) per share by accounting for the share price and the book value per share.
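Items 519 to 522 are algebraic rearrangements of the same idea: profit in excess of the cost of the capital employed. In the EVA form of item 522, sketched below, that is after-tax operating profit less the capital charge:

```python
def economic_value_added(ebit, tax_rate, capital_employed, wacc):
    """EVA: after-tax operating profit EBIT * (1 - t) less the capital
    charge WACC * capital employed."""
    return ebit * (1.0 - tax_rate) - wacc * capital_employed

print(economic_value_added(200.0, 0.35, 1000.0, 0.09))  # 40.0
```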
527. B2RatiosMarketBookRatio2 Computes the market to book value per share by accounting for the share price, total common equity value, and number of shares outstanding.
528. B2RatiosMarketValueAdded Computes the market value added by accounting for the stock price, total common equity, and number of shares outstanding.
529. B2RatiosNominalCashFlow Computes the nominal cash flow amount assuming some inflation rate, real cash flow, and the number of years in the future.
530. B2RatiosNominalDiscountRate Computes the nominal discount rate assuming some inflation rate and real discount rate.
531. B2RatiosPERatio1 Computes the price-to-earnings (P/E) ratio using stock price and earnings per share (EPS).
532. B2RatiosPERatio2 Computes the price-to-earnings (P/E) ratio using stock price, net income, and number of shares outstanding.
533. B2RatiosPERatio3 Computes the price-to-earnings (P/E) ratio using growth rates, rate of return, and discount rate.
534. B2RatiosProfitMargin Computes the profit margin by taking the ratio of net income to annual sales.
535. B2RatiosQuickRatio Computes the quick ratio by accounting for the individual assets and liabilities.
536. B2RatiosRealCashFlow Computes the real cash flow amount assuming some inflation rate, nominal cash flow (Nominal CF), and the number of years in the future.
537. B2RatiosRealDiscountRate Computes the real discount rate assuming some inflation rate and nominal discount rate.
538. B2RatiosReturnonAsset1 Computes the return on assets using net income amount and total assets employed.
539. B2RatiosReturnonAsset2 Computes the return on assets using net profit margin percentage and total asset turnover ratio.
540. B2RatiosReturnonEquity1 Computes return on equity using net income and total common equity values.
541. B2RatiosReturnonEquity2 Computes return on equity using return on assets (ROA), total assets, and total equity values.
542. B2RatiosReturnonEquity3 Computes return on equity using net income, total sales, total assets, and total common equity values.
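The ROA and ROE variants in items 539 and 541 are the DuPont decomposition: ROA is net profit margin times total asset turnover, and ROE scales ROA by the equity multiplier (total assets over total equity). A sketch:

```python
def return_on_assets(net_profit_margin, total_asset_turnover):
    """DuPont ROA: net profit margin times total asset turnover."""
    return net_profit_margin * total_asset_turnover

def return_on_equity(roa, total_assets, total_equity):
    """ROE from ROA and the equity multiplier (assets / equity)."""
    return roa * total_assets / total_equity

roa = return_on_assets(0.05, 1.8)
print(roa, return_on_equity(roa, 1000.0, 400.0))  # 0.09 and 0.225
```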
543. B2RatiosReturnonEquity4 Computes return on equity using net profit margin, total asset turnover, and equity multiplier values.
544. B2RatiosROIC Computes the return on invested capital (typically used for computing economic profit) accounting for change in working capital; property, plant, and equipment (PPE); and other assets.
545. B2RatiosShareholderEquity Computes the common shareholder’s equity after accounting for total assets, total liabilities, and preferred stocks.
546. B2RatiosTimesInterestEarned Computes the times interest earned ratio by accounting for earnings before interest and taxes (EBIT) and the amount of interest payment.
547. B2RatiosTotalAssetTurnover Computes the total asset turnover by accounting for the annual sales levels and total assets.
548. B2RatiosWACC1 Computes the weighted average cost of capital (WACC) using market values of debt, preferred equity, and common equity, as well as their respective costs.
549. B2RatiosWACC2 Computes the weighted average cost of capital (WACC) using market values of debt and of common equity, as well as their respective costs.
550. B2ROBinomialAmericanAbandonContract Returns the American option to abandon and contract using a binomial lattice model.
551. B2ROBinomialAmericanAbandonContractExpand Returns the American option to abandon, contract, and expand using a binomial lattice model.
552. B2ROBinomialAmericanAbandonExpand Returns the American option to abandon and expand using a binomial lattice model.
553. B2ROBinomialAmericanAbandonment Returns the American option to abandon using a binomial lattice model.
554. B2ROBinomialAmericanCall Returns the American call option with dividends using a binomial lattice model.
555. B2ROBinomialAmericanChangingRiskFree Returns the American call option with dividends and assuming the risk-free rate changes over time, using a binomial lattice model.
556. B2ROBinomialAmericanChangingVolatility Returns the American call option with dividends and assuming the volatility changes over time, using a binomial lattice model. Use a small number of steps, or it will take a long time to compute.
557. B2ROBinomialAmericanContractExpand Returns the American option to contract and expand using a binomial lattice model.
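The binomial real options functions beginning at item 550 all rest on backward induction through a lattice. The sketch below is a plain Cox-Ross-Rubinstein American option, not the toolkit’s implementation, but it shows the mechanics the abandon/contract/expand variants build on (those simply replace the early-exercise payoff with the relevant strategic payoff):

```python
from math import exp, sqrt

def crr_american(s, k, t, r, q, sigma, steps, is_call=True):
    """American option on a Cox-Ross-Rubinstein binomial lattice with a
    continuous dividend yield q, valued by backward induction."""
    dt = t / steps
    u = exp(sigma * sqrt(dt))
    d = 1.0 / u
    p = (exp((r - q) * dt) - d) / (u - d)  # risk-neutral up probability
    disc = exp(-r * dt)
    sign = 1.0 if is_call else -1.0
    values = [max(sign * (s * u ** j * d ** (steps - j) - k), 0.0)
              for j in range(steps + 1)]   # terminal payoffs
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1.0 - p) * values[j])
            exercise = max(sign * (s * u ** j * d ** (i - j) - k), 0.0)
            values[j] = max(cont, exercise)  # early exercise check
    return values[0]

print(crr_american(100, 100, 1.0, 0.05, 0.03, 0.25, 200))
```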
558. B2ROBinomialAmericanContraction Returns the American option to contract using a binomial lattice model.
559. B2ROBinomialAmericanCustomCall Returns the American custom call option with changing inputs, vesting periods, and suboptimal exercise multiple using a binomial lattice model.
560. B2ROBinomialAmericanExpansion Returns the American option to expand using a binomial lattice model.
561. B2ROBinomialAmericanPut Returns the American put option with dividends using a binomial lattice model.
562. B2ROBinomialBermudanAbandonContract Returns the Bermudan option to abandon and contract using a binomial lattice model, where there is a vesting/blackout period during which the option cannot be executed.
563. B2ROBinomialBermudanAbandonContractExpand Returns the Bermudan option to abandon, contract, and expand, using a binomial lattice model, where there is a vesting/blackout period during which the option cannot be executed.
564. B2ROBinomialBermudanAbandonExpand Returns the Bermudan option to abandon and expand using a binomial lattice model, where there is a vesting/blackout period during which the option cannot be executed.
565. B2ROBinomialBermudanAbandonment Returns the Bermudan option to abandon using a binomial lattice model, where there is a vesting/blackout period during which the option cannot be executed.
566. B2ROBinomialBermudanCall Returns the Bermudan call option with dividends, where there is a vesting/blackout period during which the option cannot be executed.
567. B2ROBinomialBermudanContractExpand Returns the Bermudan option to contract and expand, using a binomial lattice model, where there is a vesting/blackout period during which the option cannot be executed.
568. B2ROBinomialBermudanContraction Returns the Bermudan option to contract using a binomial lattice model, where there is a vesting/blackout period during which the option cannot be executed.
569. B2ROBinomialBermudanExpansion Returns the Bermudan option to expand using a binomial lattice model, where there is a vesting/blackout period during which the option cannot be executed.
570. B2ROBinomialBermudanPut Returns the Bermudan put option with dividends, where there is a vesting/blackout period during which the option cannot be executed.
571. B2ROBinomialEuropeanAbandonContract Returns the European option to abandon and contract, using a binomial lattice model, where the option can be executed only at expiration.
572. B2ROBinomialEuropeanAbandonContractExpand Returns the European option to abandon, contract, and expand, using a binomial lattice model, where the option can be executed only at expiration.
573. B2ROBinomialEuropeanAbandonExpand Returns the European option to abandon and expand, using a binomial lattice model, where the option can be executed only at expiration.
574. B2ROBinomialEuropeanAbandonment Returns the European option to abandon using a binomial lattice model, where the option can be executed only at expiration.
575. B2ROBinomialEuropeanCall Returns the European call option with dividends, where the option can be executed only at expiration.
576. B2ROBinomialEuropeanContractExpand Returns the European option to contract and expand, using a binomial lattice model, where the option can be executed only at expiration.
577. B2ROBinomialEuropeanContraction Returns the European option to contract using a binomial lattice model, where the option can be executed only at expiration.
578. B2ROBinomialEuropeanExpansion Returns the European option to expand using a binomial lattice model, where the option can be executed only at expiration.
579. B2ROBinomialEuropeanPut Returns the European put option with dividends, where the option can be executed only at expiration.
580. B2ROJumpDiffusionCall Returns the closed-form model for a European call option whose underlying asset follows a Poisson Jump Diffusion process.
581. B2ROJumpDiffusionPut Returns the closed-form model for a European put option whose underlying asset follows a Poisson Jump Diffusion process.
582. B2ROMeanRevertingCall Returns the closed-form model for a European call option whose underlying asset follows a mean-reversion process.
583. B2ROMeanRevertingPut Returns the closed-form model for a European put option whose underlying asset follows a mean-reversion process.
584. B2ROPentanomialAmericanCall Returns the Rainbow American call option with two underlying assets (these are typically price and quantity, and are multiplied together to form a new combinatorial pentanomial lattice).
585. B2ROPentanomialAmericanPut Returns the Rainbow American put option with two underlying assets (these are typically price and quantity, and are multiplied together to form a new combinatorial pentanomial lattice).
586. B2ROPentanomialEuropeanCall Returns the Rainbow European call option with two underlying assets (these are typically price and quantity, and are multiplied together to form a new combinatorial pentanomial lattice).
587. B2ROPentanomialEuropeanPut Returns the Rainbow European put option with two underlying assets (these are typically price and quantity, and are multiplied together to form a new combinatorial pentanomial lattice).
588. B2ROQuadranomialJumpDiffusionAmericanCall Returns the American call option whose underlying asset follows a Poisson Jump Diffusion process, using a combinatorial quadranomial lattice.
589. B2ROQuadranomialJumpDiffusionAmericanPut Returns the American put option whose underlying asset follows a Poisson Jump Diffusion process, using a combinatorial quadranomial lattice.
590. B2ROQuadranomialJumpDiffusionEuropeanCall Returns the European call option whose underlying asset follows a Poisson Jump Diffusion process, using a combinatorial quadranomial lattice.
591. B2ROQuadranomialJumpDiffusionEuropeanPut Returns the European put option whose underlying asset follows a Poisson Jump Diffusion process, using a combinatorial quadranomial lattice.
592. B2ROStateAmericanCall Returns the American call option using a state jump function, where the up and down states can be asymmetrical, solved in a lattice model.
593. B2ROStateAmericanPut Returns the American put option using a state jump function, where the up and down states can be asymmetrical, solved in a lattice model.
594. B2ROStateBermudanCall Returns the Bermudan call option using a state jump function, where the up and down states can be asymmetrical, solved in a lattice model, and where the option cannot be exercised during certain vesting/blackout periods.
595. B2ROStateBermudanPut Returns the Bermudan put option using a state jump function, where the up and down states can be asymmetrical, solved in a lattice model, and where the option cannot be exercised during certain vesting/blackout periods.
596. B2ROStateEuropeanCall Returns the European call option using a state jump function, where the up and down states can be asymmetrical, solved in a lattice model, and where the option can be exercised only at maturity.
597. B2ROStateEuropeanPut Returns the European put option using a state jump function, where the up and down states can be asymmetrical, solved in a lattice model, and where the option can be exercised only at maturity.
598. B2ROTrinomialAmericanCall Returns the American call option with dividend, solved using a trinomial lattice.
599. B2ROTrinomialAmericanMeanRevertingCall Returns the American call option with dividend, assuming the underlying asset is mean-reverting, and solved using a trinomial lattice.
600. B2ROTrinomialAmericanMeanRevertingPut Returns the American put option with dividend, assuming the underlying asset is mean-reverting, and solved using a trinomial lattice.
601. B2ROTrinomialAmericanPut Returns the American put option with dividend, solved using a trinomial lattice.
602. B2ROTrinomialBermudanCall Returns the Bermudan call option with dividend, solved using a trinomial lattice, where during certain vesting/blackout periods the option cannot be exercised.
603. B2ROTrinomialBermudanPut Returns the Bermudan put option with dividend, solved using a trinomial lattice, where during certain vesting/blackout periods the option cannot be exercised.
604. B2ROTrinomialEuropeanCall Returns the European call option with dividend, solved using a trinomial lattice, where the option can be exercised only at maturity.
605. B2ROTrinomialEuropeanMeanRevertingCall Returns the European call option with dividend, solved using a trinomial lattice, assuming the underlying asset is mean-reverting, and where the option can be exercised only at maturity.
606. B2ROTrinomialEuropeanMeanRevertingPut Returns the European put option with dividend, solved using a trinomial lattice, assuming the underlying asset is mean-reverting, and where the option can be exercised only at maturity.
607. B2ROTrinomialEuropeanPut Returns the European put option with dividend, solved using a trinomial lattice, where the option can be exercised only at maturity.
608. B2SCurveValue Computes the S-Curve extrapolation’s next forecast value based on previous value, growth rate, and maximum capacity levels.
609. B2SCurveValueSaturation Computes the S-Curve extrapolation’s saturation level based on previous value, growth rate, and maximum capacity levels.
610. B2SemiStandardDeviationPopulation Computes the semi-standard deviation of the population; that is, only the values below the mean are used to compute an adjusted population standard deviation, a more appropriate measure of downside risk.
611. B2SemiStandardDeviationSample Computes the semi-standard deviation of the sample; that is, only the values below the mean are used to compute an adjusted sample standard deviation, a more appropriate measure of downside risk.
612. B2SharpeRatio Computes the Sharpe Ratio (returns-to-risk ratio) based on a series of stock prices of an asset and a market benchmark series of prices.
613. B2SimulateBernoulli Returns simulated random numbers from the Bernoulli distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution.
614. B2SimulateBeta Returns simulated random numbers from the Beta distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution.
615. B2SimulateBinomial Returns simulated random numbers from the Binomial distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution.
616. B2SimulateChiSquare Returns simulated random numbers from the Chi-Square distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution.
617. B2SimulatedEuropeanCall Returns the Monte Carlo simulated European call option (only European options can be approximated well with simulation). This function is volatile.
618. B2SimulatedEuropeanPut Returns the Monte Carlo simulated European put option (only European options can be approximated well with simulation). This function is volatile.
619. B2SimulateDiscreteUniform Returns simulated random numbers from the Discrete Uniform distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution.
620. B2SimulateExponential Returns simulated random numbers from the Exponential distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution.
621. B2SimulateFDist Returns simulated random numbers from the F distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution.
622. B2SimulateGamma Returns simulated random numbers from the Gamma distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution.
623. B2SimulateGeometric Returns simulated random numbers from the Geometric distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution.
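The B2Simulate* functions take a random input parameter such as RAND() and, in effect, map it through the distribution’s inverse CDF. The sketch below mimics that inverse-transform pattern for the Exponential case; passing u explicitly mirrors feeding RAND() into the worksheet function.

```python
import math
import random

def simulate_exponential(rate, u=None):
    """One Exponential draw by inverse transform: X = -ln(1 - U) / rate,
    where U is uniform on [0, 1)."""
    if u is None:
        u = random.random()  # stand-in for Excel's RAND()
    return -math.log(1.0 - u) / rate

print([round(simulate_exponential(2.0), 3) for _ in range(5)])
```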
624. B2SimulateGumbelMax Returns simulated random numbers from the Gumbel Max distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution. 625. B2SimulateGumbelMin Returns simulated random numbers from the Gumbel Min distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution. 626. B2SimulateLogistic Returns simulated random numbers from the Logistic distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution. 627. B2SimulateLognormal Returns simulated random numbers from the Lognormal distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution. 628. B2SimulateNormal Returns simulated random numbers from the Normal distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution. 629. B2SimulatePareto Returns simulated random numbers from the Pareto distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution. 630. B2SimulatePoisson Returns simulated random numbers from the Poisson distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution. 631. B2SimulateRayleigh Returns simulated random numbers from the Rayleigh distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution. 632. B2SimulateStandardNormal Returns simulated random numbers from the Standard Normal distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution. 633. B2SimulateTDist Returns simulated random numbers from the Student’s T distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution. 634. B2SimulateTriangular Returns simulated random numbers from the Triangular distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution.
635. B2SimulateUniform Returns simulated random numbers from the Uniform distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution. 636. B2SimulateWeibull Returns simulated random numbers from the Weibull distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution. 637. B2SixSigmaControlCChartCL Computes the center line in a control c-chart. C-charts are applicable when only the number of defects is important. 638. B2SixSigmaControlCChartDown1Sigma Computes the lower 1 sigma limit in a control c-chart. C-charts are applicable when only the number of defects is important. 639. B2SixSigmaControlCChartDown2Sigma Computes the lower 2 sigma limit in a control c-chart. C-charts are applicable when only the number of defects is important. 640. B2SixSigmaControlCChartLCL Computes the lower control limit in a control c-chart. C-charts are applicable when only the number of defects is important. 641. B2SixSigmaControlCChartUCL Computes the upper control limit in a control c-chart. C-charts are applicable when only the number of defects is important. 642. B2SixSigmaControlCChartUp1Sigma Computes the upper 1 sigma limit in a control c-chart. C-charts are applicable when only the number of defects is important. 643. B2SixSigmaControlCChartUp2Sigma Computes the upper 2 sigma limit in a control c-chart. C-charts are applicable when only the number of defects is important. 644. B2SixSigmaControlNPChartCL Computes the center line in a control np-chart. NP-charts are applicable when proportions of defects are important, and where in each experimental subgroup the number of sample sizes is constant. 645. B2SixSigmaControlNPChartDown1Sigma Computes the lower 1 sigma limit in a control np-chart. NP-charts are applicable when proportions of defects are important, and where in each experimental subgroup the number of sample sizes is constant. 646. B2SixSigmaControlNPChartDown2Sigma Computes the lower 2 sigma limit in a control np-chart. NP-charts are applicable when proportions of defects are important, and where in each experimental subgroup the number of sample sizes is constant. 647. B2SixSigmaControlNPChartLCL Computes the lower control limit in a control np-chart. NP-charts are applicable when proportions of defects are important, and where in each experimental subgroup the number of sample sizes is constant.
648. B2SixSigmaControlNPChartUCL Computes the upper control limit in a control np-chart. NP-charts are applicable when proportions of defects are important, and where in each experimental subgroup the number of sample sizes is constant. 649. B2SixSigmaControlNPChartUp1Sigma Computes the upper 1 sigma limit in a control np-chart. NP-charts are applicable when proportions of defects are important, and where in each experimental subgroup the number of sample sizes is constant. 650. B2SixSigmaControlNPChartUp2Sigma Computes the upper 2 sigma limit in a control np-chart. NP-charts are applicable when proportions of defects are important, and where in each experimental subgroup the number of sample sizes is constant. 651. B2SixSigmaControlPChartCL Computes the center line in a control p-chart. P-charts are applicable when proportions of defects are important, and where in each experimental subgroup the number of sample sizes might be different. 652. B2SixSigmaControlPChartDown1Sigma Computes the lower 1 sigma limit in a control p-chart. P-charts are applicable when proportions of defects are important, and where in each experimental subgroup the number of sample sizes might be different. 653. B2SixSigmaControlPChartDown2Sigma Computes the lower 2 sigma limit in a control p-chart. P-charts are applicable when proportions of defects are important, and where in each experimental subgroup the number of sample sizes might be different. 654. B2SixSigmaControlPChartLCL Computes the lower control limit in a control p-chart. P-charts are applicable when proportions of defects are important, and where in each experimental subgroup the number of sample sizes might be different. 655. B2SixSigmaControlPChartUCL Computes the upper control limit in a control p-chart. P-charts are applicable when proportions of defects are important, and where in each experimental subgroup the number of sample sizes might be different. 656. B2SixSigmaControlPChartUp1Sigma Computes the upper 1 sigma limit in a control p-chart. P-charts are applicable when proportions of defects are important, and where in each experimental subgroup the number of sample sizes might be different. 657. B2SixSigmaControlPChartUp2Sigma Computes the upper 2 sigma limit in a control p-chart. P-charts are applicable when proportions of defects are important, and where in each experimental subgroup the number of sample sizes might be different. 658. B2SixSigmaControlRChartCL Computes the center line in a control R-chart. R-charts are used when the number of defects is important; in each subgroup experiment multiple measurements are taken, and the range of the measurements is the variable plotted.
659. B2SixSigmaControlRChartLCL Computes the lower control limit in a control R-chart. R-charts are used when the number of defects is important; in each subgroup experiment multiple measurements are taken, and the range of the measurements is the variable plotted. 660. B2SixSigmaControlRChartUCL Computes the upper control limit in a control R-chart. R-charts are used when the number of defects is important; in each subgroup experiment multiple measurements are taken, and the range of the measurements is the variable plotted. 661. B2SixSigmaControlUChartCL Computes the center line in a control u-chart. U-charts are applicable when the number of defects is important, and where in each experimental subgroup the number of sample sizes is the same. 662. B2SixSigmaControlUChartDown1Sigma Computes the lower 1 sigma limit in a control u-chart. U-charts are applicable when the number of defects is important, and where in each experimental subgroup the number of sample sizes is the same. 663. B2SixSigmaControlUChartDown2Sigma Computes the lower 2 sigma limit in a control u-chart. U-charts are applicable when the number of defects is important, and where in each experimental subgroup the number of sample sizes is the same. 664. B2SixSigmaControlUChartLCL Computes the lower control limit in a control u-chart. U-charts are applicable when the number of defects is important, and where in each experimental subgroup the number of sample sizes is the same. 665. B2SixSigmaControlUChartUCL Computes the upper control limit in a control u-chart. U-charts are applicable when the number of defects is important, and where in each experimental subgroup the number of sample sizes is the same. 666. B2SixSigmaControlUChartUp1Sigma Computes the upper 1 sigma limit in a control u-chart. U-charts are applicable when the number of defects is important, and where in each experimental subgroup the number of sample sizes is the same. 667. B2SixSigmaControlUChartUp2Sigma Computes the upper 2 sigma limit in a control u-chart. U-charts are applicable when the number of defects is important, and where in each experimental subgroup the number of sample sizes is the same. 668. B2SixSigmaControlXChartCL Computes the center line in a control X-chart. X-charts are used when the number of defects is important; in each subgroup experiment multiple measurements are taken, and the average of the measurements is the variable plotted. 669. B2SixSigmaControlXChartLCL Computes the lower control limit in a control X-chart. X-charts are used when
the number of defects is important; in each subgroup experiment multiple measurements are taken, and the average of the measurements is the variable plotted.
670. B2SixSigmaControlXChartUCL Computes the upper control limit in a control X-chart. X-charts are used when the number of defects is important; in each subgroup experiment multiple measurements are taken, and the average of the measurements is the variable plotted.
671. B2SixSigmaControlXMRChartCL Computes the center line in a control XmR-chart. XmR-charts are used when the number of defects is important; there is only a single measurement for each sample, and a time series of moving ranges is the variable plotted.
672. B2SixSigmaControlXMRChartLCL Computes the lower control limit in a control XmR-chart. XmR-charts are used when the number of defects is important; there is only a single measurement for each sample, and a time series of moving ranges is the variable plotted.
673. B2SixSigmaControlXMRChartUCL Computes the upper control limit in a control XmR-chart. XmR-charts are used when the number of defects is important; there is only a single measurement for each sample, and a time series of moving ranges is the variable plotted.
674. B2SixSigmaDeltaPrecision Computes the error precision given specific levels of Type I and Type II errors, as well as the sample size and variance.
675. B2SixSigmaSampleSize Computes the required minimum sample size given Type I and Type II errors, as well as the required precision of the mean and the error tolerances.
676. B2SixSigmaSampleSizeDPU Computes the required minimum sample size given Type I and Type II errors, as well as the required precision of the defects per unit and the error tolerances.
677. B2SixSigmaSampleSizeProportion Computes the required minimum sample size given Type I and Type II errors, as well as the required precision of the proportion of defects and the error tolerances.
678. B2SixSigmaSampleSizeStdev Computes the required minimum sample size given Type I and Type II errors, as well as the required precision of the standard deviation and the error tolerances. 679. B2SixSigmaSampleSizeZeroCorrelTest Computes the required minimum sample size to test whether a correlation is statistically significant at an alpha of 0.05 and beta of 0.10. 680. B2SixSigmaStatCP Computes the potential process capability index Cp given the actual mean and sigma of the process, including the upper and lower specification limits. 681. B2SixSigmaStatCPK Computes the process capability index Cpk given the actual mean and sigma of the process, including the upper and lower specification limits.
682. B2SixSigmaStatDPMO Computes the defects per million opportunities (DPMO) given the actual mean and sigma of the process, including the upper and lower specification limits. 683. B2SixSigmaStatDPU Computes the proportion of defects per unit (DPU) given the actual mean and sigma of the process, including the upper and lower specification limits. 684. B2SixSigmaStatProcessSigma Computes the process sigma level given the actual mean and sigma of the process, including the upper and lower specification limits. 685. B2SixSigmaStatYield Computes the nondefective parts or the yield of the process, given the actual mean and sigma of the process, including the upper and lower specification limits. 686. B2SixSigmaUnitCPK Computes the process capability index Cpk given the actual counts of defective parts and the total opportunities in the population. 687. B2SixSigmaUnitDPMO Computes the defects per million opportunities (DPMO) given the actual counts of defective parts and the total opportunities in the population. 688. B2SixSigmaUnitDPU Computes the proportion of defects per unit (DPU) given the actual counts of defective parts and the total opportunities in the population. 689. B2SixSigmaUnitProcessSigma Computes the process sigma level given the actual counts of defective parts and the total opportunities in the population. 690. B2SixSigmaUnitYield Computes the nondefective parts or the yield of the process given the actual counts of defective parts and the total opportunities in the population. 691. B2StandardNormalBivariateCDF Given the two Z-scores and correlation, returns the value of the bivariate standard normal (means of zero, variances of 1) cumulative distribution function. 692. B2StandardNormalCDF Given the Z-score, returns the value of the standard normal (mean of zero, variance of 1) cumulative distribution function. 693. B2StandardNormalInverseCDF Computes the inverse cumulative distribution function of a standard normal distribution (mean of zero, variance of 1). 694. B2StandardNormalPDF Given the Z-score, returns the value of the standard normal (mean of zero, variance of 1) probability density function. 695. B2StockIndexCallOption Similar to a regular call option but the underlying asset is a reference stock index such as the Standard & Poor’s 500. The analysis can be solved using a Generalized Black-Scholes-Merton model as well.
696. B2StockIndexPutOption Similar to a regular put option but the underlying asset is a reference stock index such as the Standard & Poor's 500. The analysis can be solved using a Generalized Black-Scholes-Merton model as well. 697. B2SuperShareOptions The option has value only if the stock or asset price is between the upper and lower barriers, and at expiration provides a payoff equivalent to the stock or asset price divided by the lower strike price (S/X Lower). 698. B2SwaptionEuropeanPayer European Call Interest Swaption, where the holder has the right to enter into a swap to pay fixed and receive floating interest payments. 699. B2SwaptionEuropeanReceiver European Put Interest Swaption, where the holder has the right to enter into a swap to receive fixed and pay floating interest payments. 700. B2TakeoverFXOption At a successful takeover (foreign firm value in foreign currency is less than the foreign currency units), the option holder can purchase the foreign units at a predetermined strike price (in exchange rates of the domestic to foreign currency). 701. B2TimeSwitchOptionCall Holder gets AccumAmount × TimeSteps each time asset > strike for a call. TimeSteps is the frequency at which the asset price is checked as to whether the strike is breached (e.g., for 252 trading days, set DT as 1/252). 702. B2TimeSwitchOptionPut Holder gets AccumAmount × TimeSteps each time asset < strike for a put. TimeSteps is the frequency at which the asset price is checked as to whether the strike is breached (e.g., for 252 trading days, set DT as 1/252). 703. B2TradingDayAdjustedCall Call option corrected for varying volatilities (higher on trading days than on nontrading days). Trading Days Ratio is the number of trading days left until maturity divided by total trading days per year (between 250 and 252). 704. B2TradingDayAdjustedPut Put option corrected for varying volatilities (higher on trading days than on nontrading days). Trading Days Ratio is the number of trading days left until maturity divided by total trading days per year (between 250 and 252). 705. B2TrinomialImpliedArrowDebreuLattice Computes the complete set of implied Arrow-Debreu prices in an implied trinomial lattice using actual observed data. Copy and paste the function and use Ctrl+Shift+Enter to obtain the matrix. 706. B2TrinomialImpliedArrowDebreuValue Computes the single value of implied Arrow-Debreu price (for a specific step/column and up-down event/row) in an implied trinomial lattice using actual observed data. 707. B2TrinomialImpliedCallOptionValue Computes the European call option using an implied trinomial lattice approach, taking into account actual observed inputs.
708. B2TrinomialImpliedDownProbabilityLattice Computes the complete set of implied DOWN probabilities in an implied trinomial lattice using actual observed data. Copy and paste the function and use Ctrl+Shift+Enter to obtain the matrix. 709. B2TrinomialImpliedDownProbabilityValue Computes the single value of implied DOWN probability (for a specific step/column and up-down event/row) in an implied trinomial lattice using actual observed data. 710. B2TrinomialImpliedLocalVolatilityLattice Computes the complete set of implied local probabilities in an implied trinomial lattice using actual observed data. Copy and paste the function and use Ctrl+Shift+Enter to obtain the matrix. 711. B2TrinomialImpliedLocalVolatilityValue Computes the single value of implied localized volatility (for a specific step/column and up-down event/row) in an implied trinomial lattice using actual observed data. 712. B2TrinomialImpliedUpProbabilityLattice Computes the complete set of implied UP probabilities in an implied trinomial lattice using actual observed data. Copy and paste the function and use Ctrl+Shift+Enter to obtain the matrix. 713. B2TrinomialImpliedUpProbabilityValue Computes the single value of implied UP probability (for a specific step/column and up-down event/row) in an implied trinomial lattice using actual observed data. 714. B2TrinomialImpliedPutOptionValue Computes the European put option using an implied trinomial lattice approach, taking into account actual observed inputs. 715. B2TwoAssetBarrierDownandInCall Valuable or knocked in the money only if the lower barrier is breached (reference Asset 2 goes below the barrier), and the payout is in the option on Asset 1 less the strike price. 716. B2TwoAssetBarrierDownandInPut Valuable or knocked in the money only if the lower barrier is breached (reference Asset 2 goes below the barrier), and the payout is in the option on the strike price less the Asset 1 value. 717. B2TwoAssetBarrierDownandOutCall Valuable or stays in-the-money only if the lower barrier is not breached (reference Asset 2 does not go below the barrier), and the payout is in the option on Asset 1 less the strike price. 718. B2TwoAssetBarrierDownandOutPut Valuable or stays in the money only if the lower barrier is not breached (reference Asset 2 does not go below the barrier), and the payout is in the option on the strike price less the Asset 1 value. 719. B2TwoAssetBarrierUpandInCall Valuable or knocked in the money only if the upper barrier is breached
(reference Asset 2 goes above the barrier), and the payout is in the option on Asset 1 less the strike price.
720. B2TwoAssetBarrierUpandInPut Valuable or knocked in the money only if the upper barrier is breached (reference Asset 2 goes above the barrier), and the payout is in the option on the strike price less the Asset 1 value.
721. B2TwoAssetBarrierUpandOutCall Valuable or stays in the money only if the upper barrier is not breached (reference Asset 2 does not go above the barrier), and the payout is in the option on Asset 1 less the strike price.
722. B2TwoAssetBarrierUpandOutPut Valuable or stays in the money only if the upper barrier is not breached (reference Asset 2 does not go above the barrier), and the payout is in the option on the strike price less the Asset 1 value.
723. B2TwoAssetCashOrNothingCall Pays cash at expiration as long as both assets are in the money. For call options, both asset values must be above their respective strike prices.
724. B2TwoAssetCashOrNothingDownUp Cash will be paid only if at expiration the first asset is below the first strike, and the second asset is above the second strike.
725. B2TwoAssetCashOrNothingPut Pays cash at expiration as long as both assets are in the money. For put options, both assets must be below their respective strike prices. 726. B2TwoAssetCashOrNothingUpDown Cash will be paid only if the first asset is above the first strike price, and the second asset is below the second strike price at maturity. 727. B2TwoAssetCorrelationCall Asset 1 is the benchmark asset, whereby if at expiration Asset 1's value exceeds Strike 1's value, then the call option is knocked in the money, and the payoff on the option is Asset 2 – Strike 2; otherwise the option becomes worthless. 728. B2TwoAssetCorrelationPut Asset 1 is the benchmark asset, whereby if at expiration Asset 1's value is below Strike 1's value, then the put option is knocked in the money, and the payoff on the option is Strike 2 – Asset 2; otherwise the option becomes worthless. 729. B2VaRCorrelationMethod Computes the Value at Risk using the Variance-Covariance and Correlation method, accounting for a specific VaR percentile and holding period. 730. B2VaROptions Computes the Value at Risk of a portfolio of correlated options. 731. B2Volatility Returns the Annualized Volatility of time-series cash flows. Enter the number of periods in a cycle to annualize the volatility (1 = annual, 4 = quarterly, 12 = monthly data).
732. B2VolatilityImpliedforDefaultRisk Used only when computing the implied volatility required for optimizing an option model to compute the probability of default. 733. B2WarrantsDilutedValue Returns the value of a warrant (like an option) that is convertible to stock while accounting for dilution effects based on the number of shares and warrants outstanding. 734. B2WriterExtendibleCallOption The call option is extended beyond the initial maturity to an extended date with a new extended strike if at maturity the option is out of the money, providing a safety net of time for the option holder. 735. B2WriterExtendiblePutOption The put option is extended beyond the initial maturity to an extended date with a new extended strike if at maturity the option is out of the money, providing a safety net of time for the option holder. 736. B2YieldCurveBIM Returns the Yield Curve at various points in time using the Bliss model. 737. B2YieldCurveNS Returns the Yield Curve at various points in time using the Nelson-Siegel approach. 738. B2ZEOB Returns the Economic Order Batch or the optimal quantity to be manufactured in each production batch. 739. B2ZEOBBatch Returns the Economic Order Batch analysis' optimal number of batches to be manufactured per year. 740. B2ZEOBHoldingCost Returns the Economic Order Batch analysis' cost of holding excess units per year if manufactured at the optimal level. 741. B2ZEOBProductionCost Returns the Economic Order Batch analysis' total cost of setting up production per year if manufactured at the optimal level. 742. B2ZEOBTotalCost Returns the Economic Order Batch analysis' total cost of production and holding costs per year if manufactured at the optimal level. 743. B2ZEOQ Economic Order Quantity's order size on each order. 744. B2ZEOQExcess Economic Order Quantity's excess safety stock level. 745. B2ZEOQOrders Economic Order Quantity's number of orders per year. 746. B2ZEOQProbability Economic Order Quantity's probability of being out of stock.
747. B2ZEOQReorderPoint Economic Order Quantity's reorder point. (A brief worked sketch of the Economic Order Quantity computations in functions 743 through 747 follows these lists.)

The following lists the statistical and analytical tools in the Modeling Toolkit:
748. Statistical Tool: Chi-Square Goodness of Fit Test
749. Statistical Tool: Chi-Square Independence Test
750. Statistical Tool: Chi-Square Population Variance Test
751. Statistical Tool: Dependent Means (T)
752. Statistical Tool: Friedman's Test
753. Statistical Tool: Independent and Equal Variances (T)
754. Statistical Tool: Independent and Unequal Variances (T)
755. Statistical Tool: Independent Means (Z)
756. Statistical Tool: Independent Proportions (Z)
757. Statistical Tool: Independent Variances (F)
758. Statistical Tool: Kruskal-Wallis Test
759. Statistical Tool: Lilliefors Test
760. Statistical Tool: Principal Component Analysis
761. Statistical Tool: Randomized Block Multiple Treatments
762. Statistical Tool: Runs Test
763. Statistical Tool: Single Factor Multiple Treatments
764. Statistical Tool: Testing Means (T)
765. Statistical Tool: Testing Means (Z)
766. Statistical Tool: Testing Proportions (Z)
767. Statistical Tool: Two-Way ANOVA
768. Statistical Tool: Variance-Covariance Matrix
769. Statistical Tool: Wilcoxon Signed-Rank Test (One Variable)
770. Statistical Tool: Wilcoxon Signed-Rank Test (Two Variables)
771. Valuation Tool: Lattice Maker for Debt
772. Valuation Tool: Lattice Maker for Yield

The following lists Risk Simulator tools/applications that are used in the Modeling Toolkit:
773. Monte Carlo Simulation Using 25 Statistical Distributions
774. Monte Carlo Simulation: Simulations with Correlations
775. Monte Carlo Simulation: Simulations with Precision Control
776. Monte Carlo Simulation: Simulations with Truncation
777. Stochastic Forecasting: Basic Econometrics
778. Stochastic Forecasting: Box-Jenkins ARIMA and Auto ARIMA
779. Stochastic Forecasting: Cubic Spline
780. Stochastic Forecasting: GARCH
781. Stochastic Forecasting: J and S Curves
782. Stochastic Forecasting: Markov Chains
783. Stochastic Forecasting: Maximum Likelihood
784. Stochastic Forecasting: Nonlinear Extrapolation
785. Stochastic Forecasting: Regression Analysis
786. Stochastic Forecasting: Stochastic Processes
787. Stochastic Forecasting: Time-Series Analysis
788. Portfolio Optimization: Discrete Binary Decision Variables
789. Portfolio Optimization: Discrete and Continuous Decision Variables
790. Portfolio Optimization: Discrete Decision Variables
791. Portfolio Optimization: Static Optimization
792. Portfolio Optimization: Dynamic Optimization
793. Portfolio Optimization: Stochastic Optimization
794. Simulation Tools: Bootstrap Simulation
795. Simulation Tools: Custom Historical Simulation
796. Simulation Tools: Data Diagnostics
797. Simulation Tools: Distributional Analysis
798. Simulation Tools: Multiple Correlated Data Fitting
799. Simulation Tools: Scenario Analysis
800. Simulation Tools: Sensitivity Analysis
801. Simulation Tools: Single Data Fitting
802. Simulation Tools: Statistical Analysis
803. Simulation Tools: Tornado Analysis

The following lists Real Options SLS tools/applications that are used in the Modeling Toolkit:
804. Audit Sheet Functions
805. Changing Volatility and Risk-Free Rates Model
806. Lattice Maker
807. SLS Single Asset and Single Phase: American Options
808. SLS Single Asset and Single Phase: Bermudan Options
809. SLS Single Asset and Single Phase: Customized Options
810. SLS Single Asset and Single Phase: European Options
811. SLS Multiple Asset and Multiple Phases
812. SLS Multinomial Lattices: Pentanomials
813. SLS Multinomial Lattices: Quadranomials
814. SLS Multinomial Lattices: Trinomials
815. SLS Multinomial Lattices: Trinomials Mean-Reversion
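As referenced at function 747, the following is a minimal sketch of the classic Economic Order Quantity computations that underlie functions 743 through 747 (B2ZEOQ and its companions). The worksheet signatures of the B2 functions are not reproduced here; the parameter names (annual_demand, cost_per_order, holding_cost_per_unit) are assumptions of this sketch, not Modeling Toolkit arguments.

import math

def eoq_order_size(annual_demand, cost_per_order, holding_cost_per_unit):
    # Classic EOQ: the order size that balances annual ordering costs
    # against annual holding costs, Q* = sqrt(2 * D * S / H).
    return math.sqrt(2.0 * annual_demand * cost_per_order / holding_cost_per_unit)

def eoq_orders_per_year(annual_demand, cost_per_order, holding_cost_per_unit):
    # Orders per year at the optimal order size (compare function 745).
    return annual_demand / eoq_order_size(annual_demand, cost_per_order, holding_cost_per_unit)

# Example: 10,000 units of annual demand, $75 per order, and $2 per unit-year
# holding cost give an order size of about 866 units, or roughly 11.5 orders per year.
print(eoq_order_size(10_000, 75, 2))
print(eoq_orders_per_year(10_000, 75, 2))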
APPENDIX C

Understanding and Choosing the Right Probability Distributions

Plotting data is one method for selecting a probability distribution. The following steps provide another process for selecting probability distributions that best describe the uncertain variables in your spreadsheets. To select the correct probability distribution, use the following steps:
1. Look at the variable in question. List everything you know about the conditions surrounding this variable. You might be able to gather valuable information about the uncertain variable from historical data. If historical data are not available, use your own judgment, based on experience, listing everything you know about the uncertain variable.
2. Review the descriptions of the probability distributions.
3. Select the distribution that characterizes this variable. A distribution characterizes a variable when the conditions of the distribution match those of the variable.

Alternatively, if you have historical, comparable, contemporaneous, or forecast data, you can use Risk Simulator's distributional fitting modules to find the best statistical fit for your existing data. This fitting process will apply some advanced statistical techniques to find the best distribution and its relevant parameters that describe the data.
PROBABILITY DENSITY FUNCTIONS, CUMULATIVE DISTRIBUTION FUNCTIONS, AND PROBABILITY MASS FUNCTIONS

In mathematics and Monte Carlo simulation, a probability density function (PDF) represents a continuous probability distribution in terms of integrals. If a probability distribution has a density of f(x), then intuitively the infinitesimal interval of [x, x + dx] has a probability of f(x) dx. The PDF therefore can be seen as a smoothed version of a probability histogram; that is, by providing an empirically large sample of a continuous random variable repeatedly, the histogram using very narrow ranges
will resemble the random variable's PDF. The probability of the interval between $[a, b]$ is given by $\int_a^b f(x)\,dx$, which means that the total integral of the function $f$ must be 1.0. It is a common mistake to think of $f(a)$ as the probability of $a$. This is incorrect. In fact, $f(a)$ can sometimes be larger than 1 (consider a uniform distribution between 0.0 and 0.5; the random variable $x$ within this distribution will have $f(x)$ greater than 1). The probability in reality is the function $f(x)\,dx$ discussed previously, where $dx$ is an infinitesimal amount.

The cumulative distribution function (CDF) is denoted as $F(x) = P(X \le x)$, indicating the probability of $X$ taking on a value less than or equal to $x$. Every CDF is monotonically increasing, is continuous from the right, and at the limits has the following properties:

$\lim_{x \to -\infty} F(x) = 0$ and $\lim_{x \to +\infty} F(x) = 1$

Further, the CDF is related to the PDF by

$F(b) - F(a) = P(a \le X \le b) = \int_a^b f(x)\,dx$

where the PDF function $f$ is the derivative of the CDF function $F$.

In probability theory, a probability mass function or PMF gives the probability that a discrete random variable is exactly equal to some value. The PMF differs from the PDF in that the values of the latter, defined only for continuous random variables, are not probabilities; rather, its integral over a set of possible values of the random variable is a probability. A random variable is discrete if its probability distribution is discrete and can be characterized by a PMF. Therefore, $X$ is a discrete random variable if

$\sum_u P(X = u) = 1$

as $u$ runs through all possible values of the random variable $X$.
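As a quick numerical illustration of the PDF-CDF relationship above, the minimal Python sketch below integrates a standard normal PDF over an interval with the trapezoidal rule (the step count is an arbitrary choice of this sketch) and recovers the familiar probability:

import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    # Standard normal density, matching the normal f(x) given later in this appendix.
    return math.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2)) / (math.sqrt(2.0 * math.pi) * sigma)

def integrate(f, a, b, steps=10_000):
    # Trapezoidal approximation of F(b) - F(a) = integral of f from a to b.
    h = (b - a) / steps
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, steps))
    return total * h

# P(-1.96 <= X <= 1.96) for a standard normal is approximately 0.95
print(integrate(normal_pdf, -1.96, 1.96))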
DISCRETE DISTRIBUTIONS

Following is a detailed listing of the different types of probability distributions that can be used in Monte Carlo simulation. This listing is included in the appendix for the reader's reference.
Bernoulli or Yes/No Distribution

The Bernoulli distribution is a discrete distribution with two outcomes (e.g., heads or tails, success or failure, 0 or 1). The Bernoulli distribution is the binomial distribution with one trial and can be used to simulate Yes/No or Success/Failure conditions. This distribution is the fundamental building block of other more complex distributions. For instance:

Binomial distribution: Bernoulli distribution with a higher number of total trials n; computes the probability of x successes within this total number of trials.
Geometric distribution: Bernoulli distribution with a higher number of trials; computes the number of failures required before the first success occurs.
Negative binomial distribution: Bernoulli distribution with a higher number of trials; computes the number of failures before the xth success occurs.

The mathematical constructs for the Bernoulli distribution are as follows:

$P(x) = \begin{cases} 1-p & \text{for } x = 0 \\ p & \text{for } x = 1 \end{cases}$ or $P(x) = p^x(1-p)^{1-x}$

$\text{Mean} = p$
$\text{Standard Deviation} = \sqrt{p(1-p)}$
$\text{Skewness} = \frac{1-2p}{\sqrt{p(1-p)}}$
$\text{Excess Kurtosis} = \frac{6p^2 - 6p + 1}{p(1-p)}$

The probability of success (p) is the only distributional parameter. Also, it is important to note that there is only one trial in the Bernoulli distribution, and the resulting simulated value is either 0 or 1.

Input requirements: Probability of success > 0 and < 1 (that is, 0.0001 ≤ p ≤ 0.9999)
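The Bernoulli draw is simple to reproduce directly. The sketch below (plain Python; the function name is our own, not a Toolkit signature) mirrors the pattern of B2SimulateBernoulli described in Appendix B, mapping a uniform random input (the role played by RAND() in the worksheet) to a 0/1 outcome:

import random

def simulate_bernoulli(p, u=None):
    # u plays the role of the RAND() input parameter in the worksheet
    # functions: a uniform(0,1) draw maps to 1 with probability p, else 0.
    if u is None:
        u = random.random()
    return 1 if u < p else 0

# Example: the mean of many draws with p = 0.3 converges to p
draws = [simulate_bernoulli(0.3) for _ in range(100_000)]
print(sum(draws) / len(draws))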
Binomial Distribution

The binomial distribution describes the number of times a particular event occurs in a fixed number of trials, such as the number of heads in 10 flips of a coin or the number of defective items out of 50 items chosen. The three conditions underlying the binomial distribution are:

1. For each trial, only two outcomes are possible that are mutually exclusive.
2. The trials are independent; what happens in the first trial does not affect the next trial.
3. The probability of an event occurring remains the same from trial to trial.

The mathematical constructs for the binomial distribution are as follows:

$P(x) = \frac{n!}{x!(n-x)!}\,p^x(1-p)^{(n-x)}$ for $n > 0$; $x = 0, 1, 2, \ldots, n$; and $0 < p < 1$

$\text{Mean} = np$
$\text{Standard Deviation} = \sqrt{np(1-p)}$
$\text{Skewness} = \frac{1-2p}{\sqrt{np(1-p)}}$
$\text{Excess Kurtosis} = \frac{6p^2 - 6p + 1}{np(1-p)}$
The probability of success (p) and the integer number of total trials (n) are the distributional parameters. The number of successful trials is denoted x. It is important to note that probability of success (p) of 0 or 1 are trivial conditions and do not require any simulations, and, hence, are not allowed in the software. Input requirements: Probability of success > 0 and < 1 (that is, 0.0001 ≤ p ≤ 0.9999). Number of trials ≥ 1 or positive integers and ≤ 1,000 (for larger trials, use the normal distribution with the relevant computed binomial mean and standard deviation as the normal distribution’s parameters).
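A minimal sketch (names our own) that evaluates the PMF directly and draws binomial values as sums of Bernoulli trials, echoing the building-block relationship noted under the Bernoulli distribution:

import math
import random

def binomial_pmf(n, p, x):
    # Direct evaluation of P(x) = n!/(x!(n-x)!) * p^x * (1-p)^(n-x)
    return math.comb(n, x) * p ** x * (1.0 - p) ** (n - x)

def simulate_binomial(n, p):
    # A binomial draw is the sum of n independent Bernoulli trials.
    return sum(1 for _ in range(n) if random.random() < p)

# Example: probability of exactly 5 heads in 10 fair coin flips, about 0.246
print(binomial_pmf(10, 0.5, 5))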
Discrete Uniform

The discrete uniform distribution is also known as the equally likely outcomes distribution, where the distribution has a set of N elements, and each element has the same probability. This distribution is related to the uniform distribution, but its elements are discrete and not continuous. The mathematical constructs for the discrete uniform distribution are as follows:

$P(x) = \frac{1}{N}$
$\text{Mean} = \frac{N+1}{2}$ (ranked value)
$\text{Standard Deviation} = \sqrt{\frac{(N-1)(N+1)}{12}}$ (ranked value)
$\text{Skewness} = 0$ (that is, the distribution is perfectly symmetrical)
$\text{Excess Kurtosis} = \frac{-6(N^2+1)}{5(N-1)(N+1)}$ (ranked value)
Input requirements: Minimum < Maximum and both must be integers (negative integers and zero are allowed)
Geometric Distribution

The geometric distribution describes the number of trials until the first successful occurrence, such as the number of times you need to spin a roulette wheel before you win. The three conditions underlying the geometric distribution are:

1. The number of trials is not fixed.
2. The trials continue until the first success.
3. The probability of success is the same from trial to trial.

The mathematical constructs for the geometric distribution are as follows:

$P(x) = p(1-p)^{x-1}$ for $0 < p < 1$ and $x = 1, 2, \ldots, n$

$\text{Mean} = \frac{1}{p} - 1$
$\text{Standard Deviation} = \sqrt{\frac{1-p}{p^2}}$
$\text{Skewness} = \frac{2-p}{\sqrt{1-p}}$
$\text{Excess Kurtosis} = \frac{p^2 - 6p + 6}{1-p}$

The probability of success (p) is the only distributional parameter. The number of successful trials simulated is denoted x, which can only take on positive integers.

Input requirements: Probability of success > 0 and < 1 (that is, 0.0001 ≤ p ≤ 0.9999). It is important to note that probability of success (p) of 0 or 1 are trivial conditions and do not require any simulations, and, hence, are not allowed in the software.
Hypergeometric Distribution

The hypergeometric distribution is similar to the binomial distribution in that both describe the number of times a particular event occurs in a fixed number of trials. The difference is that binomial distribution trials are independent, whereas hypergeometric distribution trials change the probability for each subsequent trial and are called trials without replacement. For example, suppose a box of manufactured parts is known to contain some defective parts. You choose a part from the box, find it is defective, and remove the part from the box. If you choose another part from the box, the probability that it is defective is somewhat lower than for the first part because you have removed a defective part. If you had replaced the defective part, the probabilities would have remained the same, and the process would have satisfied the conditions for a binomial distribution.
The three conditions underlying the hypergeometric distribution are:

1. The total number of items or elements (the population size) is a fixed number, a finite population. The population size must be less than or equal to 1,750.
2. The sample size (the number of trials) represents a portion of the population.
3. The known initial probability of success in the population changes after each trial.

The mathematical constructs for the hypergeometric distribution are as follows:

$P(x) = \dfrac{\dfrac{(N_x)!}{x!(N_x-x)!}\,\dfrac{(N-N_x)!}{(n-x)!(N-N_x-n+x)!}}{\dfrac{N!}{n!(N-n)!}}$ for $x = \operatorname{Max}(n-(N-N_x), 0), \ldots, \operatorname{Min}(n, N_x)$

$\text{Mean} = \frac{N_x n}{N}$
$\text{Standard Deviation} = \sqrt{\frac{(N-N_x)N_x n(N-n)}{N^2(N-1)}}$
$\text{Skewness} = \frac{(N-2N_x)(N-2n)}{N-2}\sqrt{\frac{N-1}{(N-N_x)N_x n(N-n)}}$
$\text{Excess Kurtosis} = \frac{V(N, N_x, n)}{(N-N_x)N_x n(-3+N)(-2+N)(-N+n)}$, where

$V(N, N_x, n) = (N-N_x)^3 - (N-N_x)^5 + 3(N-N_x)^2 N_x - 6(N-N_x)^3 N_x + (N-N_x)^4 N_x + 3(N-N_x)N_x^2 - 12(N-N_x)^2 N_x^2 + 8(N-N_x)^3 N_x^2 + N_x^3 - 6(N-N_x)N_x^3 + 8(N-N_x)^2 N_x^3 + (N-N_x)N_x^4 - N_x^5 - 6(N-N_x)^3 N_x + 6(N-N_x)^4 N_x + 18(N-N_x)^2 N_x n - 6(N-N_x)^3 N_x n + 18(N-N_x)N_x^2 n - 24(N-N_x)^2 N_x^2 n - 6(N-N_x)^3 n - 6(N-N_x)N_x^3 n + 6N_x^4 n + 6(N-N_x)^2 n^2 - 6(N-N_x)^3 n^2 - 24(N-N_x)N_x n^2 + 12(N-N_x)^2 N_x n^2 + 6N_x^2 n^2 + 12(N-N_x)N_x^2 n^2 - 6N_x^3 n^2$

The number of items in the population (N), trials sampled (n), and number of items in the population that have the successful trait (Nx) are the distributional parameters. The number of successful trials is denoted x.

Input requirements:
Population ≥ 2 and integer
Trials > 0 and integer
Successes > 0 and integer
Population > Successes
Trials < Population
Population < 1,750
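The trials-without-replacement story translates directly into a simulation, which can be easier to reason about than the closed-form moments above. A minimal sketch (names our own):

import random

def simulate_hypergeometric(N, Nx, n):
    # Sample n items without replacement from a population of N items,
    # Nx of which carry the successful trait; return the successes drawn.
    population = [1] * Nx + [0] * (N - Nx)
    return sum(random.sample(population, n))

# Example: 1,000 parts with 50 defective, sampling 20; the sample mean
# converges to n * Nx / N = 1.0
draws = [simulate_hypergeometric(1000, 50, 20) for _ in range(10_000)]
print(sum(draws) / len(draws))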
Negative Binomial Distribution

The negative binomial distribution is useful for modeling the distribution of the number of trials until the rth successful occurrence, such as the number of sales calls you need to make to close a total of 10 orders. It is essentially a superdistribution of the geometric distribution. This distribution shows the probabilities of each number of trials in excess of r to produce the required success r. The three conditions underlying the negative binomial distribution are:

1. The number of trials is not fixed.
2. The trials continue until the rth success.
3. The probability of success is the same from trial to trial.

The mathematical constructs for the negative binomial distribution are as follows:

$P(x) = \frac{(x+r-1)!}{(r-1)!\,x!}\,p^r(1-p)^x$ for $x = r, r+1, \ldots$; and $0 < p < 1$

$\text{Mean} = \frac{r(1-p)}{p}$
$\text{Standard Deviation} = \sqrt{\frac{r(1-p)}{p^2}}$
$\text{Skewness} = \frac{2-p}{\sqrt{r(1-p)}}$
$\text{Excess Kurtosis} = \frac{p^2 - 6p + 6}{r(1-p)}$

The probability of success (p) and required successes (r) are the distributional parameters.

Input requirements: Successes required must be positive integers > 0 and < 8,000. Probability of success > 0 and < 1 (that is, 0.0001 ≤ p ≤ 0.9999). It is important to note that probability of success (p) of 0 or 1 are trivial conditions and do not require any simulations, and, hence, are not allowed in the software.
Poisson Distribution

The Poisson distribution describes the number of times an event occurs in a given interval, such as the number of telephone calls per minute or the number of errors per page in a document. The three conditions underlying the Poisson distribution are:

1. The number of possible occurrences in any interval is unlimited.
2. The occurrences are independent. The number of occurrences in one interval does not affect the number of occurrences in other intervals.
3. The average number of occurrences must remain the same from interval to interval.
The mathematical constructs for the Poisson distribution are as follows:

$P(x) = \frac{e^{-\lambda}\lambda^x}{x!}$ for $x$ and $\lambda > 0$

$\text{Mean} = \lambda$
$\text{Standard Deviation} = \sqrt{\lambda}$
$\text{Skewness} = \frac{1}{\sqrt{\lambda}}$
$\text{Excess Kurtosis} = \frac{1}{\lambda}$

Rate (λ) is the only distributional parameter.

Input requirements: Rate > 0 and ≤ 1,000 (that is, 0.0001 ≤ rate ≤ 1,000)
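A minimal simulation sketch using Knuth's classic method, which multiplies uniform draws until the running product falls below e^(−λ); an illustration only, not the Toolkit's implementation:

import math
import random

def simulate_poisson(lam):
    # Knuth's method: count how many uniform factors are needed before
    # the running product drops below exp(-lambda).
    limit = math.exp(-lam)
    k, prod = 0, random.random()
    while prod > limit:
        k += 1
        prod *= random.random()
    return k

# Example: lambda = 4; the sample mean and variance both converge to 4
draws = [simulate_poisson(4.0) for _ in range(50_000)]
print(sum(draws) / len(draws))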
CONTINUOUS DISTRIBUTIONS

Beta Distribution

The beta distribution is very flexible and is commonly used to represent variability over a fixed range. One of the more important applications of the beta distribution is its use as a conjugate distribution for the parameter of a Bernoulli distribution. In this application, the beta distribution is used to represent the uncertainty in the probability of occurrence of an event. It is also used to describe empirical data and predict the random behavior of percentages and fractions, as the range of outcomes is typically between 0 and 1.

The value of the beta distribution lies in the wide variety of shapes it can assume when you vary the two parameters, alpha and beta. If the parameters are equal, the distribution is symmetrical. If either parameter is 1 and the other parameter is greater than 1, the distribution is J-shaped. If alpha is less than beta, the distribution is said to be positively skewed (most of the values are near the minimum value). If alpha is greater than beta, the distribution is negatively skewed (most of the values are near the maximum value). The mathematical constructs for the beta distribution are as follows:

$f(x) = \dfrac{x^{(\alpha-1)}(1-x)^{(\beta-1)}}{\left[\dfrac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}\right]}$ for $\alpha > 0$; $\beta > 0$; $x > 0$

$\text{Mean} = \frac{\alpha}{\alpha+\beta}$
$\text{Standard Deviation} = \sqrt{\frac{\alpha\beta}{(\alpha+\beta)^2(1+\alpha+\beta)}}$
$\text{Skewness} = \frac{2(\beta-\alpha)\sqrt{1+\alpha+\beta}}{(2+\alpha+\beta)\sqrt{\alpha\beta}}$
$\text{Excess Kurtosis} = \frac{3(\alpha+\beta+1)[\alpha\beta(\alpha+\beta-6)+2(\alpha+\beta)^2]}{\alpha\beta(\alpha+\beta+2)(\alpha+\beta+3)} - 3$

Alpha (α) and beta (β) are the two distributional shape parameters, and Γ is the gamma function. The two conditions underlying the beta distribution are:

1. The uncertain variable is a random value between 0 and a positive value.
2. The shape of the distribution can be specified using two positive values.

Input requirements: Alpha and beta > 0 and can be any positive value
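The conjugate-prior application mentioned above has a particularly compact form: with a Beta(α, β) prior on a Bernoulli probability, observing s successes and f failures yields a Beta(α + s, β + f) posterior. A minimal sketch of that standard update (names our own):

def beta_posterior(alpha, beta, successes, failures):
    # Conjugate update for a Bernoulli likelihood:
    # Beta(alpha, beta) prior + data -> Beta(alpha + s, beta + f) posterior.
    return alpha + successes, beta + failures

def beta_mean(alpha, beta):
    # Mean of the beta distribution, alpha / (alpha + beta).
    return alpha / (alpha + beta)

# Example: a flat Beta(1, 1) prior and 7 successes in 10 trials give a
# posterior mean of 8/12, about 0.667
a, b = beta_posterior(1, 1, 7, 3)
print(beta_mean(a, b))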
Cauchy Distribution or Lorentzian Distribution or Breit–Wigner Distribution

The Cauchy distribution, also called the Lorentzian distribution or Breit–Wigner distribution, is a continuous distribution describing resonance behavior. It also describes the distribution of horizontal distances at which a line segment tilted at a random angle cuts the x-axis. The mathematical constructs for the Cauchy or Lorentzian distribution are as follows:

$f(x) = \frac{1}{\pi}\,\frac{\gamma/2}{(x-m)^2 + \gamma^2/4}$

The Cauchy distribution is a special case in that its theoretical moments (mean, standard deviation, skewness, and kurtosis) are all undefined. Mode location (m) and scale (γ) are the only two parameters in this distribution. The location parameter specifies the peak or mode of the distribution, while the scale parameter specifies the half-width at half-maximum of the distribution. The Cauchy distribution is also the Student's t distribution with only 1 degree of freedom, and it can be constructed by taking the ratio of two standard normal distributions (normal distributions with a mean of zero and a variance of one) that are independent of one another.

Input requirements:
Location can be any value
Scale > 0 and can be any positive value
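The ratio construction mentioned above doubles as a simulator. In the minimal sketch below, the ratio of two independent standard normals gives a standard Cauchy draw, which is then shifted by the mode m and stretched by γ/2 to match the half-width parameterization used here:

import random

def simulate_cauchy(m=0.0, gamma=1.0):
    # Ratio of two independent standard normals is standard Cauchy;
    # scaling by gamma/2 matches the density f(x) given above.
    z1 = random.gauss(0.0, 1.0)
    z2 = random.gauss(0.0, 1.0)
    return m + (gamma / 2.0) * (z1 / z2)

# The sample median converges to the mode m even though the mean is undefined.
draws = sorted(simulate_cauchy(5.0, 2.0) for _ in range(100_001))
print(draws[len(draws) // 2])  # close to 5.0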
Chi-Square Distribution

The chi-square distribution is a probability distribution used predominantly in hypothesis testing, and is related to the gamma distribution and the standard normal distribution. For instance, the sums of squares of independent standard normal distributions are distributed as a chi-square (χ²) with k degrees of freedom:

$Z_1^2 + Z_2^2 + \cdots + Z_k^2 \overset{d}{\sim} \chi_k^2$

The mathematical constructs for the chi-square distribution are as follows:

$f(x) = \frac{2^{-k/2}}{\Gamma(k/2)}\,x^{k/2-1}e^{-x/2}$ for all $x > 0$

$\text{Mean} = k$
$\text{Standard Deviation} = \sqrt{2k}$
$\text{Skewness} = 2\sqrt{\frac{2}{k}}$
$\text{Excess Kurtosis} = \frac{12}{k}$

The gamma function is written as Γ. Degrees of freedom k is the only distributional parameter. The chi-square distribution can also be modeled using a gamma distribution by setting the shape parameter as k/2 and the scale as 2S², where S is the scale.

Input requirements: Degrees of freedom > 1 and must be an integer < 1,000
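The defining relation above gives an immediate, if brute-force, simulator (a minimal sketch, not the Toolkit's implementation):

import random

def simulate_chi_square(k):
    # Sum of k squared independent standard normal draws.
    return sum(random.gauss(0.0, 1.0) ** 2 for _ in range(k))

# Example: k = 10; the sample mean converges to k = 10 and the
# sample variance to 2k = 20
draws = [simulate_chi_square(10) for _ in range(50_000)]
print(sum(draws) / len(draws))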
Exponential Distribution

The exponential distribution is widely used to describe events recurring at random points in time, such as the time between failures of electronic equipment or the time between arrivals at a service booth. It is related to the Poisson distribution, which describes the number of occurrences of an event in a given interval of time. An important characteristic of the exponential distribution is the "memoryless" property, which means that the future lifetime of a given object has the same distribution, regardless of the time it existed. In other words, time has no effect on future outcomes. The mathematical constructs for the exponential distribution are as follows:

$f(x) = \lambda e^{-\lambda x}$ for $x \ge 0$; $\lambda > 0$

$\text{Mean} = \frac{1}{\lambda}$
$\text{Standard Deviation} = \frac{1}{\lambda}$
$\text{Skewness} = 2$ (this value applies to all success rate λ inputs)
$\text{Excess Kurtosis} = 6$ (this value applies to all success rate λ inputs)
Success rate (λ) is the only distributional parameter. The number of successful trials is denoted x. The condition underlying the exponential distribution is: 1. The exponential distribution describes the amount of time between occurrences. Input requirements: Rate > 0 and ≤ 300
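One common way to draw from this distribution is to invert the CDF 1 − e^(−λx); a minimal sketch (names our own):

import math
import random

def simulate_exponential(rate, u=None):
    # Inverse-transform sampling: solve u = 1 - exp(-rate * x) for x.
    if u is None:
        u = random.random()
    return -math.log(1.0 - u) / rate

# Example: rate = 0.5; the sample mean converges to 1/rate = 2
draws = [simulate_exponential(0.5) for _ in range(100_000)]
print(sum(draws) / len(draws))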
Extreme Value Distribution or Gumbel Distribution

The extreme value distribution (Type 1) is commonly used to describe the largest value of a response over a period of time, for example, in flood flows, rainfall, and earthquakes. Other applications include the breaking strengths of materials, construction design, and aircraft loads and tolerances. The extreme value distribution is also known as the Gumbel distribution. The mathematical constructs for the extreme value distribution are as follows:

$f(x) = \frac{1}{\beta}\,z\,e^{-z}$ where $z = e^{-\frac{x-m}{\beta}}$ for $\beta > 0$; and any value of $x$ and $m$

$\text{Mean} = m + 0.577215\beta$
$\text{Standard Deviation} = \sqrt{\frac{1}{6}\pi^2\beta^2}$
$\text{Skewness} = \frac{12\sqrt{6}\,(1.2020569)}{\pi^3} = 1.13955$ (this applies for all values of mode and scale)
$\text{Excess Kurtosis} = 5.4$ (this applies for all values of mode and scale)

Mode (m) and scale (β) are the distributional parameters. There are two standard parameters for the extreme value distribution: mode and scale. The mode parameter is the most likely value for the variable (the highest point on the probability distribution). The scale parameter is a number greater than 0. The larger the scale parameter, the greater the variance.

Input requirements:
Mode can be any value
Scale > 0
F Distribution or Fisher–Snedecor Distribution

The F distribution, also known as the Fisher–Snedecor distribution, is another continuous distribution used most frequently for hypothesis testing. Specifically, it is used to test the statistical difference between two variances in analysis of variance tests and likelihood ratio tests. The F distribution with the numerator degree of
freedom n and denominator degree of freedom m is related to the chi-square distribution in that:

$\dfrac{\chi_n^2/n}{\chi_m^2/m} \overset{d}{\sim} F_{n,m}$ or $f(x) = \dfrac{\Gamma\left(\frac{n+m}{2}\right)\left(\frac{n}{m}\right)^{n/2} x^{n/2-1}}{\Gamma\left(\frac{n}{2}\right)\Gamma\left(\frac{m}{2}\right)\left[\frac{n}{m}x + 1\right]^{(n+m)/2}}$

$\text{Mean} = \frac{m}{m-2}$
$\text{Standard Deviation} = \sqrt{\frac{2m^2(m+n-2)}{n(m-2)^2(m-4)}}$ for all $m > 4$
$\text{Skewness} = \frac{2(m+2n-2)}{m-6}\sqrt{\frac{2(m-4)}{n(m+n-2)}}$
$\text{Excess Kurtosis} = \frac{12(-16 + 20m - 8m^2 + m^3 + 44n - 32mn + 5m^2n - 22n^2 + 5mn^2)}{n(m-6)(m-8)(n+m-2)}$
The numerator degree of freedom n and denominator degree of freedom m are the only distributional parameters. Input requirements: Degrees of freedom numerator and degrees of freedom denominator both > 0 integers
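The chi-square relation above translates directly into a simulator (a minimal sketch; for production work a statistics library routine would be preferable):

import random

def chi_square_draw(k):
    # Sum of k squared independent standard normal draws.
    return sum(random.gauss(0.0, 1.0) ** 2 for _ in range(k))

def simulate_f(n, m):
    # Ratio of two independent chi-square draws, each divided by its
    # degrees of freedom, per the defining relation above.
    return (chi_square_draw(n) / n) / (chi_square_draw(m) / m)

# Example: n = 5, m = 10; the sample mean converges to m/(m-2) = 1.25
draws = [simulate_f(5, 10) for _ in range(50_000)]
print(sum(draws) / len(draws))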
Gamma Distribution (Erlang Distribution)

The gamma distribution applies to a wide range of physical quantities and is related to other distributions: lognormal, exponential, Pascal, Erlang, Poisson, and chi-square. It is used in meteorological processes to represent pollutant concentrations and precipitation quantities. The gamma distribution is also used to measure the time between the occurrence of events when the event process is not completely random. Other applications of the gamma distribution include inventory control, economic theory, and insurance risk theory.

The gamma distribution is most often used as the distribution of the amount of time until the rth occurrence of an event in a Poisson process. When used in this fashion, the three conditions underlying the gamma distribution are:

1. The number of possible occurrences in any unit of measurement is not limited to a fixed number.
2. The occurrences are independent. The number of occurrences in one unit of measurement does not affect the number of occurrences in other units.
3. The average number of occurrences must remain the same from unit to unit.

The mathematical constructs for the gamma distribution are as follows:

$f(x) = \dfrac{\left(\frac{x}{\beta}\right)^{\alpha-1} e^{-\frac{x}{\beta}}}{\Gamma(\alpha)\beta}$ with any value of $\alpha > 0$ and $\beta > 0$

$\text{Mean} = \alpha\beta$
$\text{Standard Deviation} = \sqrt{\alpha\beta^2}$
$\text{Skewness} = \frac{2}{\sqrt{\alpha}}$
$\text{Excess Kurtosis} = \frac{6}{\alpha}$

Shape parameter alpha (α) and scale parameter beta (β) are the distributional parameters, and Γ is the gamma function. When the alpha parameter is a positive integer, the gamma distribution is called the Erlang distribution, used to predict waiting times in queuing systems, where the Erlang distribution is the sum of independent and identically distributed random variables each having a memoryless exponential distribution. Setting n as the number of these random variables, the mathematical construct of the Erlang distribution is:

$f(x) = \frac{x^{n-1}e^{-x}}{(n-1)!}$ for all $x > 0$ and all positive integers of $n$

Input requirements:
Scale beta > 0 and can be any positive value
Shape alpha ≥ 0.05 and any positive value
Location can be any value
Logistic Distribution

The logistic distribution is commonly used to describe growth, that is, the size of a population expressed as a function of a time variable. It also can be used to describe chemical reactions and the course of growth for a population or individual. The mathematical constructs for the logistic distribution are as follows:

$f(x) = \dfrac{e^{\frac{\mu-x}{\alpha}}}{\alpha\left[1+e^{\frac{\mu-x}{\alpha}}\right]^2}$ for any value of $\alpha$ and $\mu$

$\text{Mean} = \mu$
$\text{Standard Deviation} = \sqrt{\frac{1}{3}\pi^2\alpha^2}$
$\text{Skewness} = 0$ (this applies to all mean and scale inputs)
$\text{Excess Kurtosis} = 1.2$ (this applies to all mean and scale inputs)

Mean (µ) and scale (α) are the distributional parameters. There are two standard parameters for the logistic distribution: mean and scale. The mean parameter is the average value, which for this distribution is the same as the mode, because this distribution is symmetrical. The scale parameter is a number greater than 0. The larger the scale parameter, the greater the variance.
Input requirements:
Scale > 0 and can be any positive value
Mean can be any value
Lognormal Distribution

The lognormal distribution is widely used in situations where values are positively skewed, for example, in financial analysis for security valuation or in real estate for property valuation, and where values cannot fall below zero. Stock prices are usually positively skewed rather than normally (symmetrically) distributed. Stock prices exhibit this trend because they cannot fall below the lower limit of zero but might increase to any price without limit. Similarly, real estate prices illustrate positive skewness and are lognormally distributed as property values cannot become negative. The three conditions underlying the lognormal distribution are:

1. The uncertain variable can increase without limits but cannot fall below zero.
2. The uncertain variable is positively skewed, with most of the values near the lower limit.
3. The natural logarithm of the uncertain variable yields a normal distribution.

Generally, if the coefficient of variability is greater than 30 percent, use a lognormal distribution. Otherwise, use the normal distribution. The mathematical constructs for the lognormal distribution are as follows:

$f(x) = \dfrac{1}{x\sqrt{2\pi}\ln(\sigma)}\,e^{-\frac{[\ln(x)-\ln(\mu)]^2}{2[\ln(\sigma)]^2}}$ for $x > 0$; $\mu > 0$ and $\sigma > 0$

$\text{Mean} = \exp\left(\mu + \frac{\sigma^2}{2}\right)$
$\text{Standard Deviation} = \sqrt{\exp(\sigma^2 + 2\mu)[\exp(\sigma^2) - 1]}$
$\text{Skewness} = \sqrt{\exp(\sigma^2) - 1}\,(2 + \exp(\sigma^2))$
$\text{Excess Kurtosis} = \exp(4\sigma^2) + 2\exp(3\sigma^2) + 3\exp(2\sigma^2) - 6$

Mean (µ) and standard deviation (σ) are the distributional parameters.

Input requirements: Mean and standard deviation both > 0 and can be any positive value

Lognormal Parameter Sets. By default, the lognormal distribution uses the arithmetic mean and standard deviation. For applications for which historical data are available, it is more appropriate to use either the logarithmic mean and standard deviation, or the geometric mean and standard deviation.
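Condition 3 above runs in both directions: exponentiating a normal draw yields a lognormal draw. A minimal sketch using the logarithmic parameter set (not the arithmetic one, per the note above):

import math
import random

def simulate_lognormal(log_mean, log_sd):
    # Exponentiate a normal draw; log_mean and log_sd are the
    # logarithmic (not arithmetic) mean and standard deviation.
    return math.exp(random.gauss(log_mean, log_sd))

# Example: log_mean = 0, log_sd = 0.5; the sample mean converges to
# exp(0 + 0.5**2 / 2), about 1.133
draws = [simulate_lognormal(0.0, 0.5) for _ in range(100_000)]
print(sum(draws) / len(draws))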
Normal Distribution

The normal distribution is the most important distribution in probability theory because it describes many natural phenomena, such as people's IQs or heights. Decision makers can use the normal distribution to describe uncertain variables such as the inflation rate or the future price of gasoline. The three conditions underlying the normal distribution are:

1. Some value of the uncertain variable is the most likely (the mean of the distribution).
2. The uncertain variable could as likely be above the mean as it could be below the mean (symmetrical about the mean).
3. The uncertain variable is more likely to be in the vicinity of the mean than further away.

The mathematical constructs for the normal distribution are as follows:

$f(x) = \dfrac{1}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{(x-\mu)^2}{2\sigma^2}}$ for all values of $x$ and $\mu$; while $\sigma > 0$

$\text{Mean} = \mu$
$\text{Standard Deviation} = \sigma$
$\text{Skewness} = 0$ (this applies to all inputs of mean and standard deviation)
$\text{Excess Kurtosis} = 0$ (this applies to all inputs of mean and standard deviation)

Mean (µ) and standard deviation (σ) are the distributional parameters.

Input requirements:
Standard deviation > 0 and can be any positive value
Mean can be any value
Pareto Distribution

The Pareto distribution is widely used for the investigation of distributions associated with such empirical phenomena as city population sizes, the occurrence of natural resources, the size of companies, personal incomes, stock price fluctuations, and error clustering in communication circuits.

The mathematical constructs for the Pareto are as follows:

$$f(x) = \frac{\beta L^{\beta}}{x^{1+\beta}} \quad \text{for } x > L$$

$$\text{Mean} = \frac{\beta L}{\beta - 1}$$
$$\text{Standard Deviation} = \sqrt{\frac{\beta L^2}{(\beta - 1)^2(\beta - 2)}}$$
$$\text{Skewness} = \sqrt{\frac{\beta - 2}{\beta}}\left[\frac{2(\beta + 1)}{\beta - 3}\right]$$

$$\text{Excess Kurtosis} = \frac{6(\beta^3 + \beta^2 - 6\beta - 2)}{\beta(\beta - 3)(\beta - 4)}$$
Location (L) and shape (β) are the distributional parameters. There are two standard parameters for the Pareto distribution: location and shape. The location parameter is the lower bound for the variable. After you select the location parameter, you can estimate the shape parameter. The shape parameter is a number greater than 0, usually greater than 1. The larger the shape parameter, the smaller the variance and the thicker the right tail of the distribution.

Input requirements:
Location > 0 and can be any positive value
Shape ≥ 0.05
Student's t Distribution

The Student's t distribution is the most widely used distribution in hypothesis testing. This distribution is used to estimate the mean of a normally distributed population when the sample size is small, and to test the statistical significance of the difference between two sample means or confidence intervals for small sample sizes.

The mathematical constructs for the t distribution are as follows:

$$f(t) = \frac{\Gamma[(r+1)/2]}{\sqrt{r\pi}\,\Gamma[r/2]}\left(1 + \frac{t^2}{r}\right)^{-(r+1)/2} \quad \text{where } t = \frac{x - \bar{x}}{s} \text{ and } \Gamma \text{ is the gamma function}$$

Mean = 0 (this applies to all degrees of freedom r, except if the distribution is shifted to another nonzero central location)

$$\text{Standard Deviation} = \sqrt{\frac{r}{r-2}}$$

Skewness = 0 (this applies to all degrees of freedom r)

$$\text{Excess Kurtosis} = \frac{6}{r-4} \quad \text{for all } r > 4$$
Degree of freedom r is the only distributional parameter. The t distribution is related to the F-distribution as follows: The square of a value of t with r degrees of freedom is distributed as F with 1 and r degrees of freedom. The overall shape of the probability density function of the t distribution also resembles the bell shape of a normally distributed variable with mean 0 and variance 1, except that it is a bit lower and wider or is leptokurtic (fat tails at the ends and peaked center). As the number of degrees of freedom grows (say,
above 30), the t distribution approaches the normal distribution with mean 0 and variance 1.

Input requirements:
Degrees of freedom ≥ 1 and must be an integer
Triangular Distribution

The triangular distribution describes a situation where you know the minimum, maximum, and most likely values to occur. For example, you could describe the number of cars sold per week when past sales show the minimum, maximum, and usual number of cars sold.

The three conditions underlying the triangular distribution are:

1. The minimum number of items is fixed.
2. The maximum number of items is fixed.
3. The most likely number of items falls between the minimum and maximum values, forming a triangular-shaped distribution, which shows that values near the minimum and maximum are less likely to occur than those near the most likely value.

The mathematical constructs for the triangular distribution are as follows:
$$f(x) = \begin{cases} \dfrac{2(x - \text{Min})}{(\text{Max} - \text{Min})(\text{Likely} - \text{Min})} & \text{for Min} < x < \text{Likely} \\[2ex] \dfrac{2(\text{Max} - x)}{(\text{Max} - \text{Min})(\text{Max} - \text{Likely})} & \text{for Likely} < x < \text{Max} \end{cases}$$

$$\text{Mean} = \frac{1}{3}(\text{Min} + \text{Likely} + \text{Max})$$

$$\text{Standard Deviation} = \sqrt{\frac{1}{18}\left(\text{Min}^2 + \text{Likely}^2 + \text{Max}^2 - \text{Min}\,\text{Max} - \text{Min}\,\text{Likely} - \text{Max}\,\text{Likely}\right)}$$

$$\text{Skewness} = \frac{\sqrt{2}\,(\text{Min} + \text{Max} - 2\text{Likely})(2\text{Min} - \text{Max} - \text{Likely})(\text{Min} - 2\text{Max} + \text{Likely})}{5\left(\text{Min}^2 + \text{Max}^2 + \text{Likely}^2 - \text{Min}\,\text{Max} - \text{Min}\,\text{Likely} - \text{Max}\,\text{Likely}\right)^{3/2}}$$

Excess Kurtosis = −0.6 (this applies to all inputs of Min, Max, and Likely)

Minimum value (Min), most likely value (Likely), and maximum value (Max) are the distributional parameters.

Input requirements:
Min ≤ Likely ≤ Max, where each can take any value
Min < Max
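As a quick check of the constructs above, the closed-form mean and standard deviation can be compared against simulation using Python's built-in triangular sampler (the Min, Likely, and Max inputs here are hypothetical):

```python
import math
import random

def triangular_stats(mn, likely, mx):
    """Closed-form mean and standard deviation of the triangular distribution."""
    mean = (mn + likely + mx) / 3
    var = (mn**2 + likely**2 + mx**2 - mn*mx - mn*likely - mx*likely) / 18
    return mean, math.sqrt(var)

mn, likely, mx = 10, 20, 40
draws = [random.triangular(mn, mx, likely) for _ in range(100_000)]
sim_mean = sum(draws) / len(draws)
print(triangular_stats(mn, likely, mx), round(sim_mean, 2))
```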
Uniform Distribution

With the uniform distribution, all values fall between the minimum and maximum and occur with equal likelihood.

The three conditions underlying the uniform distribution are:

1. The minimum value is fixed.
2. The maximum value is fixed.
3. All values between the minimum and maximum occur with equal likelihood.

The mathematical constructs for the uniform distribution are as follows:

$$f(x) = \frac{1}{\text{Max} - \text{Min}} \quad \text{for all values such that Min} < \text{Max}$$

$$\text{Mean} = \frac{\text{Min} + \text{Max}}{2}$$

$$\text{Standard Deviation} = \sqrt{\frac{(\text{Max} - \text{Min})^2}{12}}$$
Skewness = 0 (this applies to all inputs of Min and Max)
Excess Kurtosis = −1.2 (this applies to all inputs of Min and Max)

Maximum value (Max) and minimum value (Min) are the distributional parameters.

Input requirements:
Min < Max, where each can take any value
Weibull Distribution (Rayleigh Distribution)

The Weibull distribution describes data resulting from life and fatigue tests. It is commonly used to describe failure time in reliability studies as well as the breaking strengths of materials in reliability and quality control tests. Weibull distributions are also used to represent various physical quantities, such as wind speed. The Weibull distribution is a family of distributions that can assume the properties of several other distributions. For example, depending on the shape parameter you define, the Weibull distribution can be used to model the exponential and Rayleigh distributions, among others. The Weibull distribution is very flexible. When the Weibull shape parameter is equal to 1.0, the Weibull distribution is identical to the exponential distribution. The Weibull location parameter lets you set up an exponential distribution to start at a location other than 0.0. When the shape parameter is less than 1.0, the Weibull distribution becomes a steeply declining curve. A manufacturer might find this effect useful in describing part failures during a burn-in period.
The mathematical constructs for the Weibull distribution are as follows:

$$f(x) = \frac{\alpha}{\beta}\left[\frac{x}{\beta}\right]^{\alpha-1} e^{-\left(\frac{x}{\beta}\right)^{\alpha}}$$

$$\text{Mean} = \beta\,\Gamma(1 + \alpha^{-1})$$

$$\text{Standard Deviation} = \sqrt{\beta^2\left[\Gamma(1 + 2\alpha^{-1}) - \Gamma^2(1 + \alpha^{-1})\right]}$$

$$\text{Skewness} = \frac{2\Gamma^3(1 + \alpha^{-1}) - 3\Gamma(1 + \alpha^{-1})\,\Gamma(1 + 2\alpha^{-1}) + \Gamma(1 + 3\alpha^{-1})}{\left[\Gamma(1 + 2\alpha^{-1}) - \Gamma^2(1 + \alpha^{-1})\right]^{3/2}}$$

$$\text{Excess Kurtosis} = \frac{-6\Gamma^4(1 + \alpha^{-1}) + 12\Gamma^2(1 + \alpha^{-1})\,\Gamma(1 + 2\alpha^{-1}) - 3\Gamma^2(1 + 2\alpha^{-1}) - 4\Gamma(1 + \alpha^{-1})\,\Gamma(1 + 3\alpha^{-1}) + \Gamma(1 + 4\alpha^{-1})}{\left[\Gamma(1 + 2\alpha^{-1}) - \Gamma^2(1 + \alpha^{-1})\right]^2}$$

Location (L), shape (α), and scale (β) are the distributional parameters, and Γ is the gamma function.

Input requirements:
Scale > 0 and can be any positive value
Shape ≥ 0.05
Location can take on any value
APPENDIX D

Financial Statement Analysis
This appendix provides some basic financial statement analysis concepts used in financial modeling chapters throughout the book. The focus of this appendix is on calculating the free cash flows used under different scenarios, including making appropriate adjustments under levered and unlevered operating conditions. Although many versions of free cash flows exist, these calculations are examples of more generic free cash flows applicable under most circumstances. An adjustment for inflation and the calculation of terminal cash flows are also presented here. Finally, a market multiple approach that uses price-to-earnings ratios is also briefly discussed.
FREE CASH FLOW CALCULATIONS

Below is a list of some generic financial statement definitions used to generate free cash flows based on GAAP (generally accepted accounting principles):
Gross Profits = Revenues − Cost of Goods Sold

Earnings Before Interest and Taxes = Gross Profits − Selling Expenses − General and Administrative Costs − Depreciation − Amortization

Earnings Before Taxes = Earnings Before Interest and Taxes − Interest

Net Income = Earnings Before Taxes − Taxes

Free Cash Flow to Equity = Net Income + Depreciation + Amortization − Capital Expenditures ± Change in Net Working Capital − Principal Repayments + New Debt Proceeds − Preferred Dividends − Interest (1 − Tax Rate)

Free Cash Flow to the Firm = EBIT (1 − Tax Rate) + Depreciation + Amortization − Capital Expenditures ± Change in Net Working Capital = Free Cash Flow to Equity + Principal Repayments − New Debt Proceeds + Preferred Dividends + Interest (1 − Tax Rate)
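The waterfall above maps directly into code. The following is a minimal Python sketch of the unlevered Free Cash Flow to the Firm calculation (all argument names are hypothetical, chosen for readability):

```python
def free_cash_flow_to_firm(revenues, cogs, selling, general_admin,
                           depreciation, amortization,
                           tax_rate, capex, change_in_nwc):
    """Free Cash Flow to the Firm per the GAAP waterfall defined above."""
    gross_profit = revenues - cogs
    ebit = gross_profit - selling - general_admin - depreciation - amortization
    nopat = ebit * (1 - tax_rate)  # EBIT after taxes, before financing effects
    return nopat + depreciation + amortization - capex - change_in_nwc

# Hypothetical example: $10M revenues, $4M COGS, $1M each of selling and G&A,
# $0.5M each of depreciation and amortization, 35% taxes
print(free_cash_flow_to_firm(10e6, 4e6, 1e6, 1e6, 0.5e6, 0.5e6,
                             0.35, capex=1.2e6, change_in_nwc=0.3e6))
```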
FREE CASH FLOW TO A FIRM

An alternative version of the free cash flow for an unlevered firm can be defined as:
Free Cash Flow = Earnings Before Interest and Taxes [1 − Effective Tax Rate] + Depreciation + Amortization − Capital Expenditures ± Change in Net Working Capital.
LEVERED FREE CASH FLOW

For a levered firm, the free cash flow becomes:
Free Cash Flow = Net Income + α [Depreciation + Amortization] ± α [Change in Net Working Capital] − α [Capital Expenditures] − Principal Repayments + New Debt Proceeds − Preferred Debt Dividends
where α is the equity-to-total-capital ratio; and (1 − α) is the debt ratio.
INFLATION ADJUSTMENT

The following adjustments show an inflationary adjustment for free cash flows and discount rates from nominal to real conditions:
$$\text{Real } CF = \frac{\text{Nominal } CF}{(1 + E[\pi])}$$

$$\text{Real } \rho = \frac{1 + \text{Nominal } \rho}{(1 + E[\pi])} - 1$$

where CF is the cash flow series; π is the inflation rate; E[π] is the expected inflation rate; and ρ is the discount rate.
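These two adjustments are one-liners in code; a small sketch with illustrative inputs:

```python
def real_cash_flow(nominal_cf, expected_inflation):
    """Deflate a nominal cash flow into real (inflation-adjusted) terms."""
    return nominal_cf / (1 + expected_inflation)

def real_discount_rate(nominal_rate, expected_inflation):
    """Convert a nominal discount rate into a real rate."""
    return (1 + nominal_rate) / (1 + expected_inflation) - 1

print(real_cash_flow(105.0, 0.05))                 # 100.0
print(round(real_discount_rate(0.12, 0.05), 4))    # 0.0667
```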
TERMINAL VALUE

The following are commonly accepted ways of getting terminal free cash flows under zero growth, constant growth, and supernormal growth assumptions:
Zero Growth Perpetuity:

$$\sum_{t=1}^{\infty} \frac{FCF_t}{[1 + WACC]^t} = \frac{FCF_T}{WACC}$$

Constant Growth:

$$\sum_{t=1}^{\infty} \frac{FCF_{t-1}(1 + g_t)}{[1 + WACC]^t} = \frac{FCF_{T-1}(1 + g_T)}{WACC - g_T} = \frac{FCF_T}{WACC - g_T}$$
Punctuated Supernormal Growth:

$$\sum_{t=1}^{N} \frac{FCF_t}{[1 + WACC]^t} + \frac{FCF_N(1 + g_N)}{[WACC - g_N][1 + WACC]^N}$$

$$WACC = \omega_e k_e + \omega_d k_d(1 - \tau) + \omega_{pe} k_{pe}$$

where FCF is the free cash flow series; WACC is the weighted average cost of capital; g is the growth rate of free cash flows; t is the individual time period; T is the terminal time at which a forecast is available; N is the time when a punctuated growth rate occurs; ω is the respective weight on each capital component; k_e is the cost of common equity; k_d is the cost of debt; k_pe is the cost of preferred equity; and τ is the effective tax rate.
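A short sketch of the WACC and constant-growth terminal value computations above (the weights and rates are hypothetical):

```python
def wacc(w_e, k_e, w_d, k_d, tau, w_pe=0.0, k_pe=0.0):
    """Weighted average cost of capital across common equity, debt, and preferred equity."""
    return w_e * k_e + w_d * k_d * (1 - tau) + w_pe * k_pe

def terminal_value(fcf_next, discount_rate, g):
    """Constant-growth terminal value; requires the discount rate to exceed g."""
    assert discount_rate > g, "discount rate must exceed the terminal growth rate"
    return fcf_next / (discount_rate - g)

rate = wacc(w_e=0.6, k_e=0.15, w_d=0.4, k_d=0.08, tau=0.35)
print(round(rate, 4))                                    # 0.1108
print(round(terminal_value(100 * 1.03, rate, 0.03), 2))  # FCF of 100 grown at 3%
```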
PRICE-TO-EARNINGS MULTIPLES APPROACH

Related concepts in valuation are the uses of market multiples. An example is using the price-to-earnings multiple, which is a simple derivation of the constant growth model shown above, breaking it down into dividends per share (DPS) and earnings per share (EPS) components. The derivation starts with the constant growth model:

$$P_0 = \frac{DPS_0(1 + g_n)}{k_e - g_n} = \frac{DPS_1}{k_e - g_n}$$

We then use the fact that the dividend per share next period (DPS₁) is the earnings per share current period multiplied by the payout ratio (PR), defined as the ratio of dividends per share to earnings per share, which is assumed to be constant, multiplied by one plus the growth rate (1 + g) of earnings:

$$DPS_1 = EPS_0[PR](1 + g_n)$$

Similarly, the earnings per share the following period is the same as the earnings per share this period multiplied by one plus the growth rate:

$$EPS_1 = EPS_0(1 + g_n)$$
Substituting the earnings per share model for the dividends per share in the constant growth model, we get the pricing relationship:

$$P_0 = \frac{EPS_0[PR](1 + g_n)}{k_e - g_n}$$

Because we are using price-to-earnings ratios, we can divide the pricing relationship by earnings per share to obtain an approximation of the price-to-earnings ratio (PE):

$$\frac{P_0}{EPS_1} = \frac{[PR]}{k_e - g_n} \approx PE_1$$

Assuming that the PE and EPS ratios are fairly stable over time, we can estimate the current price by forecasting the next-period EPS:

$$P_0 = EPS_1[PE_1]$$

Using PE ratios raises several issues. PE ratios change across different markets, so if a firm serves multiple markets, it is difficult to find an adequate weighted average PE ratio. PE ratios may not be stable through time and are most certainly not stable across firms; if more efficient firms are averaged with less efficiently run firms, the average PE ratio may be skewed. In addition, market overreaction and speculation, particularly among high-growth firms, can produce an overinflated PE ratio. Furthermore, not all firms are publicly held, some firms may not have a PE ratio, and if valuation of individual projects is required, PE ratios may not be adequate because it is difficult to isolate a specific investment's profitability and its corresponding PE ratio. Similar approaches include using other proxy multiples, such as Business Enterprise Value to Earnings, Price to Book, and Price to Sales, with similar methods and applications.
DISCOUNTING CONVENTIONS

In using discounted cash flow analysis, several conventions require consideration: continuous versus discrete discounting, midyear versus end-of-year convention, and beginning versus end-of-period discounting.
Continuous versus Discrete Periodic Discounting

The discounting convention is important when performing a discounted cash flow analysis. Using the same compounding period principle, future cash flows can be discounted using the effective annualized discount rate. For instance, suppose an annualized discount rate of 30 percent is used on a $100 cash flow. Depending on the compounding periodicity, the calculated present value and future value differ (see Table D.1). To illustrate this point further, a $100 deposit in a 30 percent interest-bearing account will yield $130 at the end of one year if the interest compounds once a year.
TABLE D.1 Continuous versus Periodic Discrete Discounting

Periodicity    Periods/Year    Interest Factor    Future Value    Present Value
Annual                1            30.00%           $130.00          $76.92
Quarterly             4            33.55%           $133.55          $74.88
Monthly              12            34.49%           $134.49          $74.36
Daily               365            34.97%           $134.97          $74.09
Continuous            ∞            34.99%           $134.99          $74.08
However, if interest is compounded quarterly, the deposit value increases to $133.55 due to the additional interest-on-interest compounding effects. For instance,

Value at the end of the first quarter = $100.00(1 + 0.30/4)¹ = $107.50
Value at the end of the second quarter = $107.50(1 + 0.30/4)¹ = $115.56
Value at the end of the third quarter = $115.56(1 + 0.30/4)¹ = $124.23
Value at the end of the fourth quarter = $124.23(1 + 0.30/4)¹ = $133.55

That is, the annualized discount rate for different compounding periods is its effective annualized rate, calculated as

$$\left(1 + \frac{discount}{periods}\right)^{periods} - 1$$

For the quarterly compounding interest rate, the effective annualized rate is

$$\left(1 + \frac{30.00\%}{4}\right)^4 - 1 = 33.55\%$$

Applying this rate for the year, we have $100(1 + 0.3355) = $133.55. This analysis can be extended for monthly, daily, or any other periodicities. In addition, if the interest rate is assumed to be continuously compounding, the continuous effective annualized rate should be used, where

$$\lim_{periods \to \infty}\left(1 + \frac{discount}{periods}\right)^{periods} - 1 = e^{discount} - 1$$

For instance, the 30 percent interest rate compounded continuously yields e^0.3 − 1 = 34.99%. Notice that as the number of compounding periods increases, the effective interest rate increases until it approaches the limit of continuous compounding. The annual, quarterly, monthly, and daily compounding is termed discrete periodic compounding, as compared to the continuous compounding approach using the exponential function. In summary, the higher the number of compounding periods, the higher the future value and the lower the present value of a cash flow payment. When applied to discounted cash flow analysis, if the discount rate calculated using a weighted average cost of capital is continuously compounding (e.g., interest payments and cost of capital are continuously compounding), then the net present value calculated may be overoptimistic if discounted discretely.
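The effective annualized rates in Table D.1 can be reproduced with a few lines of Python (a sketch; the 30 percent rate and $100 cash flow come from the example above):

```python
import math

def effective_annual_rate(rate, periods_per_year):
    """Discrete compounding: (1 + r/m)^m - 1."""
    return (1 + rate / periods_per_year) ** periods_per_year - 1

def continuous_effective_rate(rate):
    """Limit of discrete compounding as the periods go to infinity: e^r - 1."""
    return math.exp(rate) - 1

r = 0.30
for periods, label in [(1, "Annual"), (4, "Quarterly"), (12, "Monthly"), (365, "Daily")]:
    eff = effective_annual_rate(r, periods)
    print(f"{label:10s} {eff:7.2%}  FV = {100 * (1 + eff):7.2f}")
print(f"{'Continuous':10s} {continuous_effective_rate(r):7.2%}  FV = {100 * math.exp(r):7.2f}")
```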
FIGURE D.1 Full-year versus midyear discounting. The figure shows a time line from Year 0 to Year 5 with WACC = 20%, Investment = −$1,000 at Year 0, and FCF₁ = $500, FCF₂ = $600, FCF₃ = $700, FCF₄ = $800, FCF₅ = $900 at the ends of Years 1 through 5.

$$NPV = -\$1{,}000 + \frac{\$500}{(1 + 0.2)^1} + \frac{\$600}{(1 + 0.2)^2} + \frac{\$700}{(1 + 0.2)^3} + \frac{\$800}{(1 + 0.2)^4} + \frac{\$900}{(1 + 0.2)^5} = \$985$$
Full-Year versus Midyear Convention

In the conventional discounted cash flow approach, cash flows occurring in the future are discounted back to the present value and summed to obtain the net present value of a project. These cash flows are usually attached to a particular period in the future, measured usually in years, quarters, or months. The time line in Figure D.1 illustrates a sample series of cash flows over the next five years, with an assumed 20 percent discount rate. Because the cash flows are attached to an annual time line, they are usually assumed to occur at the end of each year. That is, $500 will be recognized at the end of the first full year, $600 at the end of the second year, and so forth. This is termed the full-year discounting convention. However, under usual business conditions, cash flows tend to accrue throughout the entire year and do not arrive in a single lump sum at the end of the year. Instead, the midyear convention may be applied. That is, the $500 cash flow gets accrued over the entire first year and should be discounted at 0.5 years, rather than 1.0 years. Using this midpoint supposes that the $500 cash flow comes in equally over the entire year.

$$NPV = -\$1{,}000 + \frac{\$500}{(1 + 0.2)^{0.5}} + \frac{\$600}{(1 + 0.2)^{1.5}} + \frac{\$700}{(1 + 0.2)^{2.5}} + \frac{\$800}{(1 + 0.2)^{3.5}} + \frac{\$900}{(1 + 0.2)^{4.5}} = \$1{,}175$$
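A small Python sketch contrasting the two conventions and reproducing the two NPVs above:

```python
def npv(rate, cash_flows, midyear=False):
    """NPV of year-end cash flows; midyear=True discounts each flow half a year earlier."""
    shift = 0.5 if midyear else 0.0
    return sum(cf / (1 + rate) ** (t + 1 - shift)
               for t, cf in enumerate(cash_flows))

flows = [500, 600, 700, 800, 900]
print(round(-1000 + npv(0.20, flows), 2))                # 985.92, the ~$985 full-year NPV
print(round(-1000 + npv(0.20, flows, midyear=True), 2))  # 1175.47, the ~$1,175 midyear NPV
```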
End-of-Period versus Beginning-of-Period Discounting

Another key issue in discounting involves the use of end-of-period versus beginning-of-period discounting. Suppose the cash flow series are generated on a time line such as in Figure D.2.

FIGURE D.2 End-of-period versus beginning-of-period discounting. The figure shows a time line over Years 2002 through 2005 with WACC = 20%, Investment = −$1,000, FCF₁ = $500, FCF₂ = $600, and FCF₃ = $700.
Further suppose that the valuation date is January 1, 2002. The $500 cash flow can occur either at the beginning of the first year (January 1, 2003) or at the end of the first year (December 31, 2003). The former requires the discounting of one year and the latter, the discounting of two years. If the cash flows are assumed to roll in equally over the year—that is, from January 1, 2002, to January 1, 2003—the discounting should only be for 0.5 years. In contrast, suppose that the valuation date is December 31, 2002, and the cash flow series occurs at January 1, 2003, or December 31, 2003. The former requires no discounting, while the latter requires a one-year discounting using an end-of-year discounting convention. In the midyear convention, the cash flow occurring on December 31, 2003, should be discounted at 0.5 years.
APPENDIX E

Exotic Options Formulae
BLACK AND SCHOLES OPTION MODEL—EUROPEAN VERSION

This is the famous Nobel Prize–winning Black-Scholes model without any dividend payments. It is the European version, where an option can only be executed at expiration and not before. Although it is simple enough to use, care should be taken in estimating its input variable assumptions, especially that of volatility, which is usually difficult to estimate. However, the Black-Scholes model is useful in generating ballpark estimates of the true real options value, especially for more generic-type calls and puts. For more complex real options analysis, different types of exotic options are required.
Definitions of Variables

S  present value of future cash flows ($)
X  implementation cost ($)
r  risk-free rate (%)
T  time to expiration (years)
σ  volatility (%)
Φ  cumulative standard-normal distribution
Computation

$$\text{Call} = S\,\Phi\!\left[\frac{\ln(S/X) + (r + \sigma^2/2)T}{\sigma\sqrt{T}}\right] - Xe^{-rT}\,\Phi\!\left[\frac{\ln(S/X) + (r - \sigma^2/2)T}{\sigma\sqrt{T}}\right]$$

$$\text{Put} = Xe^{-rT}\,\Phi\!\left[-\frac{\ln(S/X) + (r - \sigma^2/2)T}{\sigma\sqrt{T}}\right] - S\,\Phi\!\left[-\frac{\ln(S/X) + (r + \sigma^2/2)T}{\sigma\sqrt{T}}\right]$$
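For readers who prefer to generate those ballpark estimates in code, here is a compact Python sketch of the two formulas above, using only the standard library:

```python
from math import log, sqrt, exp
from statistics import NormalDist

PHI = NormalDist().cdf  # cumulative standard-normal distribution

def black_scholes(S, X, r, T, sigma):
    """European Black-Scholes call and put without dividends."""
    d1 = (log(S / X) + (r + sigma**2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    call = S * PHI(d1) - X * exp(-r * T) * PHI(d2)
    put = X * exp(-r * T) * PHI(-d2) - S * PHI(-d1)
    return call, put

print(black_scholes(S=100, X=100, r=0.05, T=1.0, sigma=0.25))
```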
BLACK AND SCHOLES WITH DRIFT (DIVIDEND)—EUROPEAN VERSION

This is a modification of the Black-Scholes model and assumes a fixed dividend payment rate of q in percent. This can be construed as the opportunity cost of holding the option rather than holding the underlying asset.
Definitions of Variables

S  present value of future cash flows ($)
X  implementation cost ($)
r  risk-free rate (%)
T  time to expiration (years)
σ  volatility (%)
Φ  cumulative standard-normal distribution
q  continuous dividend payout or opportunity cost (%)
Computation

$$\text{Call} = Se^{-qT}\,\Phi\!\left[\frac{\ln(S/X) + (r - q + \sigma^2/2)T}{\sigma\sqrt{T}}\right] - Xe^{-rT}\,\Phi\!\left[\frac{\ln(S/X) + (r - q - \sigma^2/2)T}{\sigma\sqrt{T}}\right]$$

$$\text{Put} = Xe^{-rT}\,\Phi\!\left[-\frac{\ln(S/X) + (r - q - \sigma^2/2)T}{\sigma\sqrt{T}}\right] - Se^{-qT}\,\Phi\!\left[-\frac{\ln(S/X) + (r - q + \sigma^2/2)T}{\sigma\sqrt{T}}\right]$$
BLACK AND SCHOLES WITH FUTURE PAYMENTS—EUROPEAN VERSION

Here, cash flow streams may be uneven over time, and we should allow for different discount rates (the risk-free rate should be used) for all future times, perhaps allowing for the flexibility of the forward risk-free yield curve.
Definitions of Variables

S    present value of future cash flows ($)
X    implementation cost ($)
r    risk-free rate (%)
T    time to expiration (years)
σ    volatility (%)
Φ    cumulative standard-normal distribution
q    continuous dividend payout or opportunity cost (%)
CFᵢ  cash flow at time i
Computation

$$S^* = S - CF_1e^{-rt_1} - CF_2e^{-rt_2} - \cdots - CF_ne^{-rt_n} = S - \sum_{i=1}^{n} CF_ie^{-rt_i}$$

$$\text{Call} = S^*e^{-qT}\,\Phi\!\left[\frac{\ln(S^*/X) + (r - q + \sigma^2/2)T}{\sigma\sqrt{T}}\right] - Xe^{-rT}\,\Phi\!\left[\frac{\ln(S^*/X) + (r - q - \sigma^2/2)T}{\sigma\sqrt{T}}\right]$$

$$\text{Put} = Xe^{-rT}\,\Phi\!\left[-\frac{\ln(S^*/X) + (r - q - \sigma^2/2)T}{\sigma\sqrt{T}}\right] - S^*e^{-qT}\,\Phi\!\left[-\frac{\ln(S^*/X) + (r - q + \sigma^2/2)T}{\sigma\sqrt{T}}\right]$$
CHOOSER OPTIONS (BASIC CHOOSER)

This is the payoff for a simple chooser option, which is valid only when t₁ < T₂. It is assumed that the holder has the right to choose either a call or a put with the same strike price at time t₁ and with the same expiration date T₂. For different values of strike prices at different times, a complex variable chooser option is required.
Definitions of Variables

S   present value of future cash flows ($)
X   implementation cost ($)
r   risk-free rate (%)
t₁  time to choose between a call or put (years)
T₂  time to expiration (years)
σ   volatility (%)
Φ   cumulative standard-normal distribution
q   continuous dividend payments (%)
Computation

$$\begin{aligned}
\text{Option Value} ={}& Se^{-qT_2}\,\Phi\!\left[\frac{\ln(S/X) + (r - q + \sigma^2/2)T_2}{\sigma\sqrt{T_2}}\right] - Xe^{-rT_2}\,\Phi\!\left[\frac{\ln(S/X) + (r - q + \sigma^2/2)T_2}{\sigma\sqrt{T_2}} - \sigma\sqrt{T_2}\right] \\
&- Se^{-qT_2}\,\Phi\!\left[\frac{-\ln(S/X) + (q - r)T_2 - t_1\sigma^2/2}{\sigma\sqrt{t_1}}\right] + Xe^{-rT_2}\,\Phi\!\left[\frac{-\ln(S/X) + (q - r)T_2 - t_1\sigma^2/2}{\sigma\sqrt{t_1}} + \sigma\sqrt{t_1}\right]
\end{aligned}$$
COMPLEX CHOOSER

The holder of the option has the right to choose between a call and a put at different times (T_C and T_P) with different strike levels (X_C and X_P) for the call and put. Note that some of these equations cannot be readily solved using Excel spreadsheets. Instead, due to the recursive methods used to solve certain bivariate distributions and critical values, the use of programming scripts is required.
Definitions of Variables

S   present value of future cash flows ($)
X   implementation cost ($)
r   risk-free rate (%)
T   time to expiration (years) for call (T_C) and put (T_P)
σ   volatility (%)
Φ   cumulative standard-normal distribution
Φ₂  cumulative bivariate-normal distribution
q   continuous dividend payout (%)
I   critical value solved recursively
Z   intermediate variables (Z₁ and Z₂)
Computation

First, solve recursively for the critical value I as follows:

$$\begin{aligned}
0 ={}& Ie^{-q(T_C - t)}\,\Phi\!\left[\frac{\ln(I/X_C) + (r - q + \sigma^2/2)(T_C - t)}{\sigma\sqrt{T_C - t}}\right] - X_Ce^{-r(T_C - t)}\,\Phi\!\left[\frac{\ln(I/X_C) + (r - q + \sigma^2/2)(T_C - t)}{\sigma\sqrt{T_C - t}} - \sigma\sqrt{T_C - t}\right] \\
&+ Ie^{-q(T_P - t)}\,\Phi\!\left[\frac{-\ln(I/X_P) + (q - r - \sigma^2/2)(T_P - t)}{\sigma\sqrt{T_P - t}}\right] - X_Pe^{-r(T_P - t)}\,\Phi\!\left[\frac{-\ln(I/X_P) + (q - r - \sigma^2/2)(T_P - t)}{\sigma\sqrt{T_P - t}} + \sigma\sqrt{T_P - t}\right]
\end{aligned}$$

Then, using the I value, calculate

$$d_1 = \frac{\ln(S/I) + (r - q + \sigma^2/2)t}{\sigma\sqrt{t}} \quad \text{and} \quad d_2 = d_1 - \sigma\sqrt{t}$$

$$y_1 = \frac{\ln(S/X_C) + (r - q + \sigma^2/2)T_C}{\sigma\sqrt{T_C}} \quad \text{and} \quad y_2 = \frac{\ln(S/X_P) + (r - q + \sigma^2/2)T_P}{\sigma\sqrt{T_P}}$$

$$\rho_1 = \sqrt{t/T_C} \quad \text{and} \quad \rho_2 = \sqrt{t/T_P}$$

$$\begin{aligned}
\text{Option Value} ={}& Se^{-qT_C}\,\Phi_2(d_1;\ y_1;\ \rho_1) - X_Ce^{-rT_C}\,\Phi_2(d_2;\ y_1 - \sigma\sqrt{T_C};\ \rho_1) \\
&- Se^{-qT_P}\,\Phi_2(-d_1;\ -y_2;\ \rho_2) + X_Pe^{-rT_P}\,\Phi_2(-d_2;\ -y_2 + \sigma\sqrt{T_P};\ \rho_2)
\end{aligned}$$
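The recursive step is an ordinary one-dimensional root search. The following sketch solves for the critical value I with SciPy's brentq root finder (this assumes SciPy is available; all numeric inputs are hypothetical):

```python
from math import log, sqrt, exp
from statistics import NormalDist
from scipy.optimize import brentq  # assumption: SciPy is installed

PHI = NormalDist().cdf

def critical_value(lo, hi, XC, XP, r, q, sigma, t, TC, TP):
    """Find I at the choice date t where the remaining call and put values are equal."""
    def f(I):
        dC = (log(I / XC) + (r - q + sigma**2 / 2) * (TC - t)) / (sigma * sqrt(TC - t))
        dP = (-log(I / XP) + (q - r - sigma**2 / 2) * (TP - t)) / (sigma * sqrt(TP - t))
        call = (I * exp(-q * (TC - t)) * PHI(dC)
                - XC * exp(-r * (TC - t)) * PHI(dC - sigma * sqrt(TC - t)))
        put = (XP * exp(-r * (TP - t)) * PHI(dP + sigma * sqrt(TP - t))
               - I * exp(-q * (TP - t)) * PHI(dP))
        return call - put  # zero at the critical value
    return brentq(f, lo, hi)

I = critical_value(1.0, 1000.0, XC=100, XP=95, r=0.05, q=0.0,
                   sigma=0.30, t=0.5, TC=1.0, TP=1.0)
print(round(I, 4))
```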
COMPOUND OPTIONS ON OPTIONS

The value of a compound option is based on the value of another option. That is, the underlying variable for the compound option is another option. Again, solving this model requires programming capabilities.
Definitions of Variables

S   present value of future cash flows ($)
r   risk-free rate (%)
σ   volatility (%)
Φ   cumulative standard-normal distribution
Φ₂  cumulative bivariate-normal distribution
q   continuous dividend payout (%)
I   critical value solved recursively
X₁  strike for the underlying ($)
X₂  strike for the option on the option ($)
t₁  expiration date for the option on the option (years)
T₂  expiration for the underlying option (years)
Computation

First, solve for the critical value of I using

$$X_2 = Ie^{-q(T_2 - t_1)}\,\Phi\!\left[\frac{\ln(I/X_1) + (r - q + \sigma^2/2)(T_2 - t_1)}{\sigma\sqrt{T_2 - t_1}}\right] - X_1e^{-r(T_2 - t_1)}\,\Phi\!\left[\frac{\ln(I/X_1) + (r - q - \sigma^2/2)(T_2 - t_1)}{\sigma\sqrt{T_2 - t_1}}\right]$$

Solve recursively for the value I above and then input it into

$$\begin{aligned}
\text{Call on call} ={}& Se^{-qT_2}\,\Phi_2\!\left[\frac{\ln(S/X_1) + (r - q + \sigma^2/2)T_2}{\sigma\sqrt{T_2}};\ \frac{\ln(S/I) + (r - q + \sigma^2/2)t_1}{\sigma\sqrt{t_1}};\ \sqrt{t_1/T_2}\right] \\
&- X_1e^{-rT_2}\,\Phi_2\!\left[\frac{\ln(S/X_1) + (r - q + \sigma^2/2)T_2}{\sigma\sqrt{T_2}} - \sigma\sqrt{T_2};\ \frac{\ln(S/I) + (r - q + \sigma^2/2)t_1}{\sigma\sqrt{t_1}} - \sigma\sqrt{t_1};\ \sqrt{t_1/T_2}\right] \\
&- X_2e^{-rt_1}\,\Phi\!\left[\frac{\ln(S/I) + (r - q + \sigma^2/2)t_1}{\sigma\sqrt{t_1}} - \sigma\sqrt{t_1}\right]
\end{aligned}$$
EXCHANGE ASSET FOR ASSET OPTION

The exchange asset for asset option is a good application in a merger and acquisition situation, when a firm exchanges one stock for another firm's stock as a means of payment.
Definitions of Variables

S   present value of future cash flows ($) for Asset 1 (S₁) and Asset 2 (S₂)
X   implementation cost ($)
Q   quantity of Asset 1 to be exchanged for quantity of Asset 2
r   risk-free rate (%)
T   time to expiration (years)
σ   volatility (%) of Asset 1 (σ₁) and Asset 2 (σ₂)
σ*  portfolio volatility after accounting for the assets' correlation ρ
Φ   cumulative standard-normal distribution
q₁  continuous dividend payout (%) for Asset 1
q₂  continuous dividend payout (%) for Asset 2
Computation

$$\sigma^* = \sqrt{\sigma_1^2 + \sigma_2^2 - 2\rho\sigma_1\sigma_2}$$

$$\text{Option} = Q_1S_1e^{-q_1T}\,\Phi\!\left[\frac{\ln(Q_1S_1/Q_2S_2) + (q_2 - q_1 + \sigma^{*2}/2)T}{\sigma^*\sqrt{T}}\right] - Q_2S_2e^{-q_2T}\,\Phi\!\left[\frac{\ln(Q_1S_1/Q_2S_2) + (q_2 - q_1 - \sigma^{*2}/2)T}{\sigma^*\sqrt{T}}\right]$$
FIXED STRIKE LOOK-BACK OPTION

The strike price is fixed in advance, and at expiration the call option pays out the maximum of the difference between the highest observed price in the option's lifetime and the strike X, and 0; that is, Call = Max[S_MAX − X, 0]. A put at expiration pays out the maximum of the difference between the fixed strike X and the minimum observed price, and 0; that is, Put = Max[X − S_MIN, 0].
Definitions of Variables

S  present value of future cash flows ($)
X  implementation cost ($)
r  risk-free rate (%)
T  time to expiration (years)
σ  volatility (%)
Φ  cumulative standard-normal distribution
q  continuous dividend payout (%)
Computation

Under the fixed strike look-back call option, when X > S_MAX the call option is

$$\begin{aligned}
\text{Call} ={}& Se^{-qT}\,\Phi\!\left[\frac{\ln(S/X) + (r - q + \sigma^2/2)T}{\sigma\sqrt{T}}\right] - Xe^{-rT}\,\Phi\!\left[\frac{\ln(S/X) + (r - q + \sigma^2/2)T}{\sigma\sqrt{T}} - \sigma\sqrt{T}\right] \\
&+ Se^{-rT}\frac{\sigma^2}{2(r - q)}\left\{-\left(\frac{S}{X}\right)^{-\frac{2(r-q)}{\sigma^2}}\Phi\!\left[\frac{\ln(S/X) + (r - q + \sigma^2/2)T}{\sigma\sqrt{T}} - \frac{2(r - q)}{\sigma}\sqrt{T}\right] + e^{(r-q)T}\,\Phi\!\left[\frac{\ln(S/X) + (r - q + \sigma^2/2)T}{\sigma\sqrt{T}}\right]\right\}
\end{aligned}$$

However, when X < S_MAX the call option is

$$\begin{aligned}
\text{Call} ={}& e^{-rT}(S_{MAX} - X) + Se^{-qT}\,\Phi\!\left[\frac{\ln(S/S_{MAX}) + (r - q + \sigma^2/2)T}{\sigma\sqrt{T}}\right] - S_{MAX}e^{-rT}\,\Phi\!\left[\frac{\ln(S/S_{MAX}) + (r - q + \sigma^2/2)T}{\sigma\sqrt{T}} - \sigma\sqrt{T}\right] \\
&+ Se^{-rT}\frac{\sigma^2}{2(r - q)}\left\{-\left(\frac{S}{S_{MAX}}\right)^{-\frac{2(r-q)}{\sigma^2}}\Phi\!\left[\frac{\ln(S/S_{MAX}) + (r - q + \sigma^2/2)T}{\sigma\sqrt{T}} - \frac{2(r - q)}{\sigma}\sqrt{T}\right] + e^{(r-q)T}\,\Phi\!\left[\frac{\ln(S/S_{MAX}) + (r - q + \sigma^2/2)T}{\sigma\sqrt{T}}\right]\right\}
\end{aligned}$$
FLOATING STRIKE LOOK-BACK OPTIONS

Floating strike look-back options give the call holder the option to buy the underlying security at the lowest observable price and the put holder the option to sell at the highest observable price. That is, we have Call = Max(S − S_MIN, 0) and Put = Max(S_MAX − S, 0).
Definitions of Variables

S  present value of future cash flows ($)
X  implementation cost ($)
r  risk-free rate (%)
T  time to expiration (years)
σ  volatility (%)
Φ  cumulative standard-normal distribution
q  continuous dividend payout (%)
Computation
$$\begin{aligned}
\text{Call} ={}& Se^{-qT}\,\Phi\!\left[\frac{\ln(S/S_{MIN}) + (r - q + \sigma^2/2)T}{\sigma\sqrt{T}}\right] - S_{MIN}e^{-rT}\,\Phi\!\left[\frac{\ln(S/S_{MIN}) + (r - q + \sigma^2/2)T}{\sigma\sqrt{T}} - \sigma\sqrt{T}\right] \\
&+ Se^{-rT}\frac{\sigma^2}{2(r - q)}\left\{\left(\frac{S}{S_{MIN}}\right)^{-\frac{2(r-q)}{\sigma^2}}\Phi\!\left[\frac{-\ln(S/S_{MIN}) - (r - q + \sigma^2/2)T}{\sigma\sqrt{T}} + \frac{2(r - q)}{\sigma}\sqrt{T}\right] - e^{(r-q)T}\,\Phi\!\left[\frac{-\ln(S/S_{MIN}) - (r - q + \sigma^2/2)T}{\sigma\sqrt{T}}\right]\right\}
\end{aligned}$$

$$\begin{aligned}
\text{Put} ={}& S_{MAX}e^{-rT}\,\Phi\!\left[\frac{-\ln(S/S_{MAX}) - (r - q + \sigma^2/2)T}{\sigma\sqrt{T}} + \sigma\sqrt{T}\right] - Se^{-qT}\,\Phi\!\left[\frac{-\ln(S/S_{MAX}) - (r - q + \sigma^2/2)T}{\sigma\sqrt{T}}\right] \\
&+ Se^{-rT}\frac{\sigma^2}{2(r - q)}\left\{-\left(\frac{S}{S_{MAX}}\right)^{-\frac{2(r-q)}{\sigma^2}}\Phi\!\left[\frac{\ln(S/S_{MAX}) + (r - q + \sigma^2/2)T}{\sigma\sqrt{T}} - \frac{2(r - q)}{\sigma}\sqrt{T}\right] + e^{(r-q)T}\,\Phi\!\left[\frac{\ln(S/S_{MAX}) + (r - q + \sigma^2/2)T}{\sigma\sqrt{T}}\right]\right\}
\end{aligned}$$
FORWARD START OPTIONS

Definitions of Variables

S   present value of future cash flows ($)
X   implementation cost ($)
r   risk-free rate (%)
t₁  time when the forward start option begins (years)
T₂  time to expiration of the forward start option (years)
σ   volatility (%)
Φ   cumulative standard-normal distribution
q   continuous dividend payout (%)
Computation
$$\begin{aligned}
\text{Call} ={}& Se^{-qt_1}e^{-q(T_2 - t_1)}\,\Phi\!\left[\frac{\ln(1/\alpha) + (r - q + \sigma^2/2)(T_2 - t_1)}{\sigma\sqrt{T_2 - t_1}}\right] \\
&- Se^{-qt_1}\alpha e^{-r(T_2 - t_1)}\,\Phi\!\left[\frac{\ln(1/\alpha) + (r - q + \sigma^2/2)(T_2 - t_1)}{\sigma\sqrt{T_2 - t_1}} - \sigma\sqrt{T_2 - t_1}\right]
\end{aligned}$$

$$\begin{aligned}
\text{Put} ={}& Se^{-qt_1}\alpha e^{-r(T_2 - t_1)}\,\Phi\!\left[\frac{-\ln(1/\alpha) - (r - q + \sigma^2/2)(T_2 - t_1)}{\sigma\sqrt{T_2 - t_1}} + \sigma\sqrt{T_2 - t_1}\right] \\
&- Se^{-qt_1}e^{-q(T_2 - t_1)}\,\Phi\!\left[\frac{-\ln(1/\alpha) - (r - q + \sigma^2/2)(T_2 - t_1)}{\sigma\sqrt{T_2 - t_1}}\right]
\end{aligned}$$
where α is the multiplier constant. Note: If the option starts at X percent out-of-the-money, α will be (1 + X). If it starts at-the-money, α will be 1.0, and (1 − X) if in-the-money.
GENERALIZED BLACK-SCHOLES MODEL

Definitions of Variables

S  present value of future cash flows ($)
X  implementation cost ($)
r  risk-free rate (%)
T  time to expiration (years)
σ  volatility (%)
Φ  cumulative standard-normal distribution
b  carrying cost (%)
q  continuous dividend payout (%)
Computation
$$\text{Call} = Se^{(b-r)T}\,\Phi\!\left[\frac{\ln(S/X) + (b + \sigma^2/2)T}{\sigma\sqrt{T}}\right] - Xe^{-rT}\,\Phi\!\left[\frac{\ln(S/X) + (b - \sigma^2/2)T}{\sigma\sqrt{T}}\right]$$

$$\text{Put} = Xe^{-rT}\,\Phi\!\left[-\frac{\ln(S/X) + (b - \sigma^2/2)T}{\sigma\sqrt{T}}\right] - Se^{(b-r)T}\,\Phi\!\left[-\frac{\ln(S/X) + (b + \sigma^2/2)T}{\sigma\sqrt{T}}\right]$$
Notes:

b = 0        Futures options model
b = r − q    Black-Scholes with dividend payment
b = r        Simple Black-Scholes formula
b = r − r*   Foreign currency options model
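A sketch of the generalized model in Python; the Notes table above maps directly into the carrying-cost argument b:

```python
from math import log, sqrt, exp
from statistics import NormalDist

PHI = NormalDist().cdf  # cumulative standard-normal distribution

def generalized_black_scholes(S, X, r, b, T, sigma):
    """Generalized Black-Scholes with carrying cost b.
    b = r: simple Black-Scholes; b = r - q: continuous dividend q;
    b = 0: futures options; b = r - rf: foreign currency options."""
    d1 = (log(S / X) + (b + sigma**2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    call = S * exp((b - r) * T) * PHI(d1) - X * exp(-r * T) * PHI(d2)
    put = X * exp(-r * T) * PHI(-d2) - S * exp((b - r) * T) * PHI(-d1)
    return call, put

# b = r - q reproduces the dividend-adjusted European model shown earlier
print(generalized_black_scholes(S=100, X=100, r=0.05, b=0.05 - 0.02, T=1.0, sigma=0.25))
```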
OPTIONS ON FUTURES

The underlying security is a forward or futures contract with initial price F. Here, the value of F is the forward or futures contract's initial price, replacing S with F, as well as calculating its present value.
Definitions of Variables

X  implementation cost ($)
F  futures single-point cash flows ($)
r  risk-free rate (%)
T  time to expiration (years)
σ  volatility (%)
Φ  cumulative standard-normal distribution
q  continuous dividend payout (%)
Computation

$$\text{Call} = Fe^{-rT}\,\Phi\!\left[\frac{\ln(F/X) + (\sigma^2/2)T}{\sigma\sqrt{T}}\right] - Xe^{-rT}\,\Phi\!\left[\frac{\ln(F/X) - (\sigma^2/2)T}{\sigma\sqrt{T}}\right]$$

$$\text{Put} = Xe^{-rT}\,\Phi\!\left[-\frac{\ln(F/X) - (\sigma^2/2)T}{\sigma\sqrt{T}}\right] - Fe^{-rT}\,\Phi\!\left[-\frac{\ln(F/X) + (\sigma^2/2)T}{\sigma\sqrt{T}}\right]$$
SPREAD OPTION

The payoff on a spread option depends on the spread between the two futures contracts less the implementation cost.
Definitions of Variables

X   implementation cost ($)
r   risk-free rate (%)
T   time to expiration (years)
σ   volatility (%) of futures contract 1 (σ₁) and futures contract 2 (σ₂)
Φ   cumulative standard-normal distribution
F₁  price for futures contract 1
F₂  price for futures contract 2
ρ   correlation between the two futures contracts
Computation

First, calculate the portfolio volatility:

$$\sigma = \sqrt{\sigma_1^2 + \left[\sigma_2\frac{F_2}{F_2 + X}\right]^2 - 2\rho\sigma_1\sigma_2\frac{F_2}{F_2 + X}}$$

Then, obtain the call and put option values:

$$\text{Call} = (F_2 + X)e^{-rT}\left\{\frac{F_1}{F_2 + X}\,\Phi\!\left[\frac{\ln\!\left(\frac{F_1}{F_2 + X}\right) + (\sigma^2/2)T}{\sigma\sqrt{T}}\right] - \Phi\!\left[\frac{\ln\!\left(\frac{F_1}{F_2 + X}\right) + (\sigma^2/2)T}{\sigma\sqrt{T}} - \sigma\sqrt{T}\right]\right\}$$

$$\text{Put} = (F_2 + X)e^{-rT}\left\{\Phi\!\left[\frac{-\ln\!\left(\frac{F_1}{F_2 + X}\right) - (\sigma^2/2)T}{\sigma\sqrt{T}} + \sigma\sqrt{T}\right] - \frac{F_1}{F_2 + X}\,\Phi\!\left[\frac{-\ln\!\left(\frac{F_1}{F_2 + X}\right) - (\sigma^2/2)T}{\sigma\sqrt{T}}\right]\right\}$$
DISCRETE TIME SWITCH OPTIONS

The discrete time switch option holder receives an amount equivalent to AΔt at maturity T for each time interval Δt during which the corresponding asset price S_{iΔt} has exceeded the strike price X. The put option provides a similar payoff every time S_{iΔt} is below the strike price.
Definitions of Variables

S  present value of future cash flows ($)
X  implementation cost ($)
r  risk-free rate (%)
T  time to expiration (years)
σ  volatility (%)
Φ  cumulative standard-normal distribution
b  carrying cost (%), usually the risk-free rate less any continuous dividend payout rate
Computation
$$\text{Call} = Ae^{-rT}\,\Delta t\sum_{i=1}^{n}\Phi\!\left[\frac{\ln(S/X) + (b - \sigma^2/2)i\Delta t}{\sigma\sqrt{i\Delta t}}\right]$$

$$\text{Put} = Ae^{-rT}\,\Delta t\sum_{i=1}^{n}\Phi\!\left[\frac{-\ln(S/X) - (b - \sigma^2/2)i\Delta t}{\sigma\sqrt{i\Delta t}}\right]$$
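Because the value is just a discounted sum of per-interval probabilities, the model codes up directly. A sketch with hypothetical inputs, taking Δt = T/n:

```python
from math import log, sqrt, exp
from statistics import NormalDist

PHI = NormalDist().cdf

def discrete_time_switch(S, X, r, b, sigma, T, n, A, call=True):
    """Sum the n per-interval exceedance probabilities, weight by A*dt,
    and discount the total back from maturity T."""
    dt = T / n
    sign = 1 if call else -1  # the put flips the sign of the numerator
    total = sum(PHI(sign * (log(S / X) + (b - sigma**2 / 2) * i * dt)
                    / (sigma * sqrt(i * dt)))
                for i in range(1, n + 1))
    return A * exp(-r * T) * dt * total

# Hypothetical inputs: accrual rate A = 1, 26 biweekly observations over one year
print(round(discrete_time_switch(S=100, X=95, r=0.05, b=0.05,
                                 sigma=0.30, T=1.0, n=26, A=1.0), 4))
```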
TWO-CORRELATED-ASSETS OPTION

The payoff on this option depends on whether the other correlated option is in-the-money. This is the continuous counterpart to a correlated quadranomial model.
Definitions of Variables

S   present value of future cash flows ($) for Asset 1 (S₁) and Asset 2 (S₂)
X   implementation cost ($) for Asset 1 (X₁) and Asset 2 (X₂)
r   risk-free rate (%)
T   time to expiration (years)
σ   volatility (%) of Asset 1 (σ₁) and Asset 2 (σ₂)
Φ₂  cumulative bivariate-normal distribution function
ρ   correlation (%) between the two assets
q₁  continuous dividend payout for the first asset (%)
q₂  continuous dividend payout for the second asset (%)
Computation

$$\begin{aligned}
\text{Call} ={}& S_2e^{-q_2T}\,\Phi_2\!\left[\frac{\ln(S_2/X_2) + (r - q_2 - \sigma_2^2/2)T}{\sigma_2\sqrt{T}} + \sigma_2\sqrt{T};\ \frac{\ln(S_1/X_1) + (r - q_1 - \sigma_1^2/2)T}{\sigma_1\sqrt{T}} + \rho\sigma_2\sqrt{T};\ \rho\right] \\
&- X_2e^{-rT}\,\Phi_2\!\left[\frac{\ln(S_2/X_2) + (r - q_2 - \sigma_2^2/2)T}{\sigma_2\sqrt{T}};\ \frac{\ln(S_1/X_1) + (r - q_1 - \sigma_1^2/2)T}{\sigma_1\sqrt{T}};\ \rho\right]
\end{aligned}$$

$$\begin{aligned}
\text{Put} ={}& X_2e^{-rT}\,\Phi_2\!\left[\frac{-\ln(S_2/X_2) - (r - q_2 - \sigma_2^2/2)T}{\sigma_2\sqrt{T}};\ \frac{-\ln(S_1/X_1) - (r - q_1 - \sigma_1^2/2)T}{\sigma_1\sqrt{T}};\ \rho\right] \\
&- S_2e^{-q_2T}\,\Phi_2\!\left[\frac{-\ln(S_2/X_2) - (r - q_2 - \sigma_2^2/2)T}{\sigma_2\sqrt{T}} - \sigma_2\sqrt{T};\ \frac{-\ln(S_1/X_1) - (r - q_1 - \sigma_1^2/2)T}{\sigma_1\sqrt{T}} - \rho\sigma_2\sqrt{T};\ \rho\right]
\end{aligned}$$
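The only nonstandard ingredient here is the cumulative bivariate normal Φ₂, which SciPy supplies via multivariate_normal. A sketch of the call formula above (assuming SciPy is available; the numeric inputs are hypothetical):

```python
from math import log, sqrt, exp
import numpy as np
from scipy.stats import multivariate_normal  # assumption: SciPy is installed

def phi2(a, b, rho):
    """Cumulative bivariate standard-normal distribution."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    return float(multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([a, b]))

def two_asset_correlation_call(S1, S2, X1, X2, r, q1, q2, s1, s2, rho, T):
    """Call paying S2 - X2 at T only if S1 > X1, per the formula above."""
    y1 = (log(S1 / X1) + (r - q1 - s1**2 / 2) * T) / (s1 * sqrt(T))
    y2 = (log(S2 / X2) + (r - q2 - s2**2 / 2) * T) / (s2 * sqrt(T))
    return (S2 * exp(-q2 * T) * phi2(y2 + s2 * sqrt(T), y1 + rho * s2 * sqrt(T), rho)
            - X2 * exp(-r * T) * phi2(y2, y1, rho))

print(round(two_asset_correlation_call(100, 100, 95, 105, r=0.05, q1=0.0, q2=0.0,
                                       s1=0.20, s2=0.30, rho=0.5, T=1.0), 4))
```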
APPENDIX F

Measures of Risk
TAMING THE BEAST

Risky ventures are the norm in the daily business world. The mere mention of names such as George Soros, John Meriwether, Paul Reichmann, and Nicholas Leeson, or firms such as Long Term Capital Management, Metallgesellschaft, Barings Bank, Bankers Trust, Daiwa Bank, Sumitomo Corporation, Merrill Lynch, and Citibank brings a shrug of disbelief and fear. These names are some of the biggest in the world of business and finance. Their claim to fame is not simply being the best and brightest individuals or being the largest and most respected firms, but bearing the stigma of being involved in highly risky ventures that turned sour almost overnight.

George Soros was and still is one of the most respected names in high finance; he is known globally for his brilliance and exploits. Paul Reichmann was a reputable and brilliant real estate and property tycoon. Between the two of them, nothing was impossible, but when they ventured into investments in Mexican real estate, the wild fluctuations of the peso in the foreign exchange market were nothing short of a disaster. During late 1994 and early 1995, the peso hit an all-time low, and their ventures went from bad to worse; the one thing they did not expect was that the situation would become a lot worse before it was all over, and billions would be lost as a consequence.

Long Term Capital Management was headed by Meriwether, one of the rising stars on Wall Street, with a slew of superstars on its management team, including two Nobel laureates in finance and economics (Robert Merton and Myron Scholes). The firm was also backed by giant investment banks. A firm that seemed indestructible blew up with billions of dollars in the red, shaking the international investment community with repercussions throughout Wall Street as individual investors started to lose faith in large hedge funds and wealth-management firms, forcing the eventual massive bailout organized by the Federal Reserve.

Barings was one of the oldest banks in England. It was so respected that even Queen Elizabeth II herself held a private account with it. This multibillion-dollar institution was brought down single-handedly by Nicholas Leeson, an employee halfway around the world. Leeson was a young and brilliant investment banker who headed up Barings' Singapore branch. His illegally doctored track record showed significant investment profits, which gave him more leeway and trust from the home office over time. He was able to cover his losses through fancy accounting and by taking significant amounts of risk. His speculations in the Japanese yen went south
and he took Barings down with him; the top echelon in London never knew what hit them. Had any of the managers in the boardroom at their respective headquarters bothered to look at the risk profile of their investments, they would surely have made a very different decision much earlier on, preventing what became major embarrassments in the global investment community. If the projected returns had been adjusted for risks, that is, if someone had asked what levels of risk were required to attain such seemingly extravagant returns, it would have been sensible not to proceed.

Risks occur in everyday life that do not require investments in the multimillions. For instance, when would one purchase a house in a fluctuating housing market? When would it be more profitable to lock in a fixed-rate mortgage rather than keep a floating variable rate? What are the chances that there will be insufficient funds at retirement? What about the potential personal property losses when a hurricane hits? How much accident insurance is considered sufficient? How much is a lottery ticket actually worth?

Risk permeates all aspects of life, and one can never avoid taking or facing risks. What we can do is understand risks better through a systematic assessment of their impacts and repercussions. This assessment framework must also be capable of measuring, monitoring, and managing risks; otherwise, simply noting that risks exist and moving on is not optimal. This book provides the tools and framework necessary to tackle risks head-on. Only with the added insights gained through a rigorous assessment of risk can one actively manage and monitor risk.
Risks permeate every aspect of business, but we do not have to be passive participants. What we can do is develop a framework to better understand risks through a systematic assessment of their impacts and repercussions. This framework also must be capable of measuring, monitoring, and managing risks.
THE BASICS OF RISK

Risk can be defined simply as any uncertainty that affects a system in an unknown fashion, where the ramifications are also unknown but bring with them great fluctuation in value and outcome. In every instance, for risk to be evident, the following generalities must exist:
Uncertainties and risks have a time horizon.
Uncertainties exist in the future and will evolve over time.
Uncertainties become risks if they affect the outcomes and scenarios of the system.
These changing scenarios' effects on the system can be measured.
The measurement has to be set against a benchmark.
Risk is never instantaneous. It has a time horizon. For instance, a firm engaged in a risky research and development venture will face significant amounts of risk but only until the product is fully developed or has proven itself in the market. These risks are caused by uncertainties in the technology of the product under research, uncertainties about the potential market, uncertainties about the level of competitive threats and substitutes, and so forth. These uncertainties will change over the course of the company’s research and marketing activities—some uncertainties will increase while others will most likely decrease through the passage of time, actions, and events. However, only the uncertainties that affect the product directly will have any bearing on the risks of the product being unsuccessful. That is, only uncertainties that change the possible scenario outcomes will make the product risky (e.g., market and economic conditions). Finally, risk exists if it can be measured and compared against a benchmark. If no benchmark exists, then perhaps the conditions just described are the norm for research and development activities, and thus the negative results are to be expected. These benchmarks have to be measurable and tangible, for example, gross profits, success rates, market share, time to implementation, and so forth.
Risk is any uncertainty that affects a system in an unknown fashion and its ramifications are unknown, but it brings great fluctuation in value and outcome. Risk has a time horizon, meaning that uncertainty evolves over time, which affects measurable future outcomes and scenarios with respect to a benchmark.
THE NATURE OF RISK AND RETURN

Nobel laureate Harry Markowitz's groundbreaking research into the nature of risk and return has revolutionized the world of finance. His seminal work, which is now known all over the world as the Markowitz Efficient Frontier, looks at the nature of risk and return. Markowitz did not look at risk as the enemy but as a condition that should be embraced and balanced out through its expected returns. The concept of risk and return was then refined through later works by William Sharpe and others, who stated that a heightened risk necessitates a higher return, as elegantly expressed through the capital asset pricing model (CAPM), where the required rate of return on a marketable risky equity is equivalent to the return on an equivalent riskless asset plus a beta systematic and undiversifiable risk measure multiplied by the market risk's return premium. In essence, a higher-risk asset requires a higher return. In Markowitz's model, one could strike a balance between risk and return. Depending on the risk appetite of an investor, the optimal or best-case returns can be obtained through the efficient frontier. Should the investor require a higher level of returns, he or she would have to face a higher level of risk. Markowitz's work carried over to finding combinations of individual projects or assets in a portfolio that would provide the best bang for the buck, striking an elegant balance between risk and return. In order to better understand this balance, also known as risk adjustment
in modern risk analysis language, risks must first be measured and understood. The following section illustrates how risk can be measured.
THE STATISTICS OF RISK

The study of statistics refers to the collection, presentation, analysis, and utilization of numerical data to infer and make decisions in the face of uncertainty, where the actual population data is unknown. There are two branches in the study of statistics: descriptive statistics, where data is summarized and described, and inferential statistics, where the population is generalized through a small random sample, such that the sample becomes useful for making predictions or decisions when the population characteristics are unknown. A sample can be defined as a subset of the population being measured, whereas the population can be defined as all possible observations of interest of a variable. For instance, if one is interested in the voting practices of all U.S. registered voters, the entire pool of a hundred million registered voters is considered the population, whereas a small survey of one thousand registered voters taken from several small towns across the nation is the sample. The calculated characteristics of the sample (e.g., mean, median, standard deviation) are termed statistics, while parameters imply that the entire population has been surveyed and the results tabulated. Thus, in decision making, the statistic is of vital importance, seeing that sometimes the entire population is yet unknown (e.g., who are all your customers, what is the total market share, etc.) or it is very difficult to obtain all relevant information on the population, seeing that it would be too time- or resource-consuming. In inferential statistics, the usual steps undertaken include:
Designing the experiment—this phase includes designing the ways to collect all possible and relevant data.
Collection of sample data—data is gathered and tabulated.
Analysis of data—statistical analysis is performed.
Estimation or prediction—inferences are made based on the statistics obtained.
Hypothesis testing—decisions are tested against the data to see the outcomes.
Goodness-of-fit—actual data is compared to historical data to see how accurate, valid, and reliable the inference is.
Decision making—decisions are made based on the outcome of the inference.
Measuring the Center of the Distribution—The First Moment

The first moment of a distribution measures the expected rate of return on a particular project. It measures the location of the project's scenarios and possible outcomes on average. The common statistics for the first moment include the mean (average), median (center of a distribution), and mode (most commonly occurring value). Figure F.1 illustrates the first moment—where, in this case, the first moment of this distribution is measured by the mean (µ) or average value.
FIGURE F.1 First moment. Two distributions with equal spread (σ₁ = σ₂) but different central locations (µ₁ ≠ µ₂); skew = 0 and kurtosis = 0 for both.
Measuring the Spread of the Distribution—The Second Moment

The second moment measures the spread of a distribution, which is a measure of risk. The spread or width of a distribution measures the variability of a variable, that is, the potential that the variable can fall into different regions of the distribution—in other words, the potential scenarios of outcomes. Figure F.2 illustrates two distributions with identical first moments (identical means) but very different second moments or risks. The visualization becomes clearer in Figure F.3. As an example, suppose there are two stocks, and the first stock's movements (illustrated by the dotted line), with the smaller fluctuation, are compared against the second stock's movements (illustrated by the darker line), with a much higher price fluctuation. Clearly an investor would view the stock with the wilder fluctuation as riskier because the outcomes of the riskier stock are relatively more unknown than those of the less risky stock. The vertical axis in Figure F.3 measures the stock prices; thus, the riskier stock has a wider range of potential outcomes. This range is translated into a distribution's width (the horizontal axis) in Figure F.2, where the wider distribution represents the riskier asset. Hence, the width or spread of a distribution measures a variable's risks.

Notice that in Figure F.2, both distributions have identical first moments or central tendencies, but the distributions are clearly very different. This difference in distributional width is measurable. Mathematically and statistically, the width or risk of a variable can be measured through several different statistics, including the range, standard deviation (σ), variance, coefficient of variation, volatility, and percentiles.

FIGURE F.2 Second moment. Two distributions with identical means (µ₁ = µ₂) but different spreads (σ₁ ≠ σ₂); skew = 0, kurtosis = 0.

FIGURE F.3 Stock price fluctuations. Two stock price paths over time, one with mild fluctuation and one with wild fluctuation.
Measuring the Skew of the Distribution—The Third Moment

The third moment measures a distribution's skewness, that is, how the distribution is pulled to one side or the other. Figure F.4 illustrates a negative or left skew (the tail of the distribution points to the left) and Figure F.5 illustrates a positive or right skew (the tail of the distribution points to the right). The mean is always skewed toward the tail of the distribution, while the median remains constant. Another way of seeing this is that the mean moves, but the standard deviation, variance, or width may still remain constant. If the third moment is not considered, then looking only at the expected returns (e.g., mean or median) and risk (standard deviation), a positively skewed project might be incorrectly chosen! For example, if the horizontal axis represents the net revenues of a project, then clearly a left or negatively skewed distribution might be preferred, as there is a higher probability of greater returns (Figure F.4) compared to a higher probability of lower-level returns (Figure F.5). Thus, in a skewed distribution, the median is a better measure of returns; the medians for both Figures F.4 and F.5 are identical, risks are identical, and, hence, a
FIGURE F.4 Third moment (left skew). σ₁ = σ₂, skew < 0, kurtosis = 0; the mean shifts toward the left tail (µ₁ ≠ µ₂).
FIGURE F.5 Third moment (right skew). σ₁ = σ₂, skew > 0, kurtosis = 0; the mean shifts toward the right tail (µ₁ ≠ µ₂).
project with a negatively skewed distribution of net profits is a better choice. Failure to account for a project’s distributional skewness may mean that the incorrect project may be chosen (e.g., two projects may have identical first and second moments, that is, they both have identical returns and risk profiles, but their distributional skews may be very different).
Measuring the Catastrophic Tail Events of the Distribution—The Fourth Moment

The fourth moment, or kurtosis, measures the peakedness of a distribution. Figure F.6 illustrates this effect. The background (denoted by the dotted line) is a normal distribution with an excess kurtosis of 0. The new distribution has a higher kurtosis; thus the area under the curve is thicker at the tails, with less area in the central body.

FIGURE F.6 Fourth moment. Two distributions with σ₁ = σ₂, µ₁ = µ₂, and skew = 0, but different kurtosis; the higher-kurtosis distribution has fatter tails.

This condition has major impacts on risk analysis because, for the two distributions in Figure F.6, the first three moments (mean, standard deviation, and skewness) can be identical, but the fourth moment (kurtosis) is different. This condition means that, although the returns and risks are identical, the probabilities of extreme and catastrophic events (potential large losses or large gains) occurring are higher for a high-kurtosis distribution (e.g., stock market returns are leptokurtic or have high kurtosis). Ignoring a project's returns' kurtosis may be detrimental. Note that sometimes a normal kurtosis is denoted as 3.0, but in this book we use the measure of excess kurtosis, henceforth simply known as kurtosis. In other words, a kurtosis of 3.5
is also known as an excess kurtosis of 0.5, indicating that the distribution has 0.5 additional kurtosis above the normal distribution. The use of excess kurtosis is more prevalent in academic literature and is, hence, used here. Finally, the normalization of kurtosis to a base of 0 makes for easier interpretation of the statistic (e.g., a positive kurtosis indicates fatter-tailed distributions while negative kurtosis indicates thinner-tailed distributions).
Most distributions can be defined up to four moments. The first moment describes the distribution’s location or central tendency (expected returns), the second moment describes its width or spread (risks), the third moment its directional skew (most probable events), and the fourth moment its peakedness or thickness in the tails (catastrophic losses or gains). All four moments should be calculated and interpreted to provide a more comprehensive view of the project under analysis.
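In code, all four moments can be computed in a few lines. A sketch using NumPy and SciPy on a hypothetical set of simulated project outcomes (both libraries are assumed to be available):

```python
import numpy as np
from scipy import stats  # assumption: SciPy is installed

rng = np.random.default_rng(7)
outcomes = rng.lognormal(mean=4.0, sigma=0.5, size=10_000)  # hypothetical simulated NPVs

print("1st moment (mean):           ", outcomes.mean())
print("2nd moment (std. deviation): ", outcomes.std(ddof=1))
print("3rd moment (skewness):       ", stats.skew(outcomes))
print("4th moment (excess kurtosis):", stats.kurtosis(outcomes))  # normal distribution = 0
```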
THE MEASUREMENTS OF RISK

There are multiple ways to measure risk in projects. This section summarizes some of the more common measures of risk and lists their potential benefits and pitfalls. The measures include:
Probability of Occurrence. This approach is simplistic and yet effective. As an example, there is a 10 percent probability that a project will not break even (it will return a negative net present value indicating losses) within the next 5 years. Further, suppose two similar projects have identical implementation costs and expected returns. Based on a single-point estimate, management should be indifferent between them. However, if risk analysis such as Monte Carlo simulation is performed, the first project might reveal a 70 percent probability of losses compared to only a 5 percent probability of losses on the second project. Clearly, the second project is better when risks are analyzed.

Standard Deviation and Variance. Standard deviation is a measure of the average of each data point's deviation from the mean. This is the most popular measure of risk, where a higher standard deviation implies a wider distributional width and, thus, carries a higher risk. The drawback of this measure is that both the upside and downside variations are included in the computation of the standard deviation. Some analysts define risks as the potential losses or downside; thus, standard deviation and variance will penalize upswings as well as downsides.

Semi-Standard Deviation. The semi-standard deviation only measures the standard deviation of the downside risks and ignores the upside fluctuations. Modifications of the semi-standard deviation include calculating only the values below the mean, or values below a threshold (e.g., negative profits or negative cash flows). This provides a better picture of downside risk but is more difficult to estimate.
Volatility. The concept of volatility is widely used in the applications of real options and can be defined briefly as a measure of uncertainty and risks. Volatility can be estimated using multiple methods, including simulation of the uncertain variables impacting a particular project and estimating the standard deviation of the resulting asset's logarithmic returns over time. This concept is more difficult to define and estimate but is more powerful than most other risk measures in that this single value incorporates all sources of uncertainty rolled into one value.

Beta. Beta is another common measure of risk in the investment finance arena. Beta can be defined simply as the undiversifiable, systematic risk of a financial asset. This concept is made famous through the CAPM, where a higher beta means a higher risk, which in turn requires a higher expected return on the asset.

Coefficient of Variation. The coefficient of variation is simply defined as the ratio of standard deviation to the mean, which means that the risks are common-sized. For example, the distribution of a group of students' heights (measured in meters) can be compared to the distribution of the students' weights (measured in kilograms). This measure of risk or dispersion is applicable when the variables' estimates, measures, magnitudes, or units differ.

Value at Risk. Value at Risk (VaR) was made famous by J. P. Morgan in the mid-1990s through the introduction of its RiskMetrics approach, and has thus far been sanctioned by several bank governing bodies around the world. Briefly, it measures the amount of capital reserves at risk given a particular holding period at a particular probability of loss. This measurement can be modified to risk applications by stating, for example, the amount of potential losses a certain percent of the time during the period of the economic life of the project—clearly, a project with a smaller VaR is better.

Worst-Case Scenario and Regret. Another simple measure is the value of the worst-case scenario given catastrophic losses. Another definition is regret. That is, if a decision is made to pursue a particular project, but the project becomes unprofitable and suffers a loss, the level of regret is simply the difference between the actual losses compared to doing nothing at all.

Risk-Adjusted Return on Capital. Risk-adjusted return on capital (RAROC) takes the ratio of the difference between the fiftieth percentile (median) return and the fifth percentile return on a project to its standard deviation. This approach is used mostly by banks to estimate returns subject to their risks by measuring only the potential downside effects and ignoring the positive upswings.
The following details the computations of some of these risk measures and is worthy of review before proceeding through the book.
COMPUTING RISK

This section illustrates how some of the more common measures of risk are computed. Each risk measurement has its own computations and uses. For example, certain risk measures are applicable only to time-series data (e.g., volatility), while others are applicable to both cross-sectional and time-series data (e.g.,
variance, standard deviation, and covariance), and still others require a consistent holding period (e.g., Value at Risk) or a market comparable or benchmark (e.g., the beta coefficient).
Probability of Occurrence This approach is simplistic yet effective. The probability of success or failure can be determined in several ways. The first is through management expectations and assumptions, also known as expert opinion, based on historical occurrences or the experience of the expert. Another approach is simply to gather available historical or comparable data, industry averages, academic research, or other third-party sources indicating the historical probabilities of success or failure (e.g., a pharmaceutical R&D project's probability of technical success for various drug indications can be obtained from external research consulting groups). Finally, Monte Carlo simulation can be run on a model with multiple interacting input assumptions, the output of interest (e.g., net present value, gross margin, tolerance ratios, and development success rates) can be captured as a simulation forecast, and the relevant probabilities can be obtained, such as the probability of breaking even, probability of failure, probability of making a profit, and so forth. See Chapter 5 for step-by-step instructions on running and interpreting simulations and probabilities.
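As a quick illustration of this simulation approach outside of the book's Risk Simulator workflow, the following minimal Python sketch estimates a probability of loss from a toy NPV model; the discount rate, cost, and cash-flow distribution below are illustrative assumptions, not values from the text.

    # Estimate the probability that a project's NPV is negative by simulation.
    # All model parameters here are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(seed=42)
    trials = 100_000
    discount_rate = 0.10
    initial_cost = 1_000.0

    # Simulate five years of uncertain annual cash flows (normal assumption).
    cash_flows = rng.normal(loc=300.0, scale=150.0, size=(trials, 5))
    years = np.arange(1, 6)
    npv = (cash_flows / (1 + discount_rate) ** years).sum(axis=1) - initial_cost

    print("Probability of losses (NPV < 0):", (npv < 0).mean())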
Standard Deviation and Variance Standard deviation is a measure of the average of each data point's deviation from the mean. A higher standard deviation or variance implies a wider distributional width and, thus, a higher risk. The standard deviation can be measured in terms of the population or the sample; both forms are shown below for illustration, where we define x_i as the individual data points, µ as the population mean, x̄ as the sample mean, N as the population size, and n as the sample size. Population standard deviation:
\sigma = \sqrt{\frac{\sum_{i=1}^{N} (x_i - \mu)^2}{N}}

and the population variance is simply the square of the standard deviation, or σ². Alternatively, use Excel's STDEVP and VARP functions for the population standard deviation and variance, respectively. Sample standard deviation:

s = \sqrt{\frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n - 1}}
and the sample variance is similarly the square of the standard deviation, or s². Alternatively, use Excel's STDEV and VAR functions for the sample standard deviation and variance, respectively. Figure F.7 shows the step-by-step computations. The drawbacks of this measure are that both the upside and downside variations are included in the computation of the standard deviation, and that it depends on the units of measurement (e.g., values of x in thousands of dollars versus millions of dollars are not comparable). Some analysts define risk as the potential losses or downside; thus, standard deviation and variance penalize upswings as well as downsides. An alternative is the semi-standard deviation.

    X         X - Mean    Square of (X - Mean)
  -10.50       -9.07           82.2908
   12.25       13.68          187.1033
  -11.50      -10.07          101.4337
   13.25       14.68          215.4605
  -14.65      -13.22          174.8062
   15.65       17.08          291.6776
  -14.50      -13.07          170.8622
  Sum: -10.00    Mean: -1.43

Population Standard Deviation and Variance
  Sum of Square (X - Mean):                                          1,223.6343
  Variance = Sum of Square (X - Mean)/N:                               174.8049
  Using Excel's VARP function:                                         174.8049
  Standard Deviation = Square Root of (Sum of Square (X - Mean)/N):     13.2214
  Using Excel's STDEVP function:                                        13.2214

Sample Standard Deviation and Variance
  Sum of Square (X - Mean):                                                1,223.6343
  Variance = Sum of Square (X - Mean)/(N - 1):                               203.9390
  Using Excel's VAR function:                                                203.9390
  Standard Deviation = Square Root of (Sum of Square (X - Mean)/(N - 1)):     14.2807
  Using Excel's STDEV function:                                               14.2807

FIGURE F.7 Standard deviation and variance computation
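For readers who want to verify Figure F.7 outside of Excel, a minimal Python sketch using the standard library reproduces the same results (statistics.pstdev and pvariance correspond to STDEVP and VARP, and stdev and variance to STDEV and VAR):

    # Reproduces the Figure F.7 computations with Python's statistics module.
    import statistics

    x = [-10.50, 12.25, -11.50, 13.25, -14.65, 15.65, -14.50]
    print(statistics.pvariance(x))  # population variance  ~174.8049
    print(statistics.pstdev(x))     # population std dev   ~13.2214
    print(statistics.variance(x))   # sample variance      ~203.9390
    print(statistics.stdev(x))      # sample std dev       ~14.2807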
Semi-Standard Deviation The semi-standard deviation only measures the standard deviation of the downside risks and ignores the upside fluctuations. Modifications of the semi-standard deviation include calculating only the values below the mean, or values below a threshold (e.g., negative profits or negative cash flows). This approach provides a better picture of downside risk but is more difficult to estimate. Figure F.8 shows how a sample semi-standard deviation and semi-variance are computed; note that the computation must be performed manually.
    X         X - Mean    Square of (X - Mean)
  -10.50        2.29           5.2327
   12.25       Ignore         (Ignore the positive values)
  -11.50        1.29           1.6577
   13.25       Ignore         (Ignore the positive values)
  -14.65       -1.86           3.4689
   15.65       Ignore         (Ignore the positive values)
  -14.50       -1.71           2.9327
  Sum: -51.1500    Mean: -12.7875
(Only the four downside values enter the sum, the mean, and the deviations, so N = 4.)

Population Standard Deviation and Variance
  Sum of Square (X - Mean):                                          13.2919
  Variance = Sum of Square (X - Mean)/N:                              3.3230
  Using Excel's VARP function:                                        3.3230
  Standard Deviation = Square Root of (Sum of Square (X - Mean)/N):   1.8229
  Using Excel's STDEVP function:                                      1.8229

Sample Standard Deviation and Variance
  Sum of Square (X - Mean):                                                13.2919
  Variance = Sum of Square (X - Mean)/(N - 1):                              4.4306
  Using Excel's VAR function:                                               4.4306
  Standard Deviation = Square Root of (Sum of Square (X - Mean)/(N - 1)):   2.1049
  Using Excel's STDEV function:                                             2.1049

FIGURE F.8 Semi-standard deviation and semi-variance computation
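A minimal Python sketch of the manual computation in Figure F.8, assuming the downside threshold is zero (only the negative values are kept, matching the figure):

    # Semi-standard deviation: keep only the downside observations, then
    # compute the standard deviation of that subsample around its own mean.
    import statistics

    x = [-10.50, 12.25, -11.50, 13.25, -14.65, 15.65, -14.50]
    downside = [v for v in x if v < 0]    # ignore the upside fluctuations
    print(statistics.pstdev(downside))    # population semi-deviation ~1.8229
    print(statistics.stdev(downside))     # sample semi-deviation     ~2.1049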
Volatility The concept of volatility is widely used in the applications of real options and can be defined briefly as a measure of uncertainty and risks. Volatility can be estimated using multiple methods, including simulation of the uncertain variables impacting a particular project and estimating the standard deviation of the resulting asset's logarithmic returns over time. This concept is more difficult to define and estimate but is more powerful than most other risk measures in that this single value incorporates all sources of uncertainty rolled into one value. Figure F.9 illustrates the computation of an annualized volatility. Volatility is typically computed for time-series data only (i.e., data that follow a time series, such as stock prices, the price of oil, interest rates, and so forth). The first step is to determine the relative returns from period to period, take their natural logarithms (ln), and then compute the sample standard deviation of these logged values. The result is the periodic volatility. Then, annualize the volatility by multiplying this periodic volatility by the square root of the number of periods in a year (e.g., 1 if annual data, 4 if quarterly data, and 12 if monthly data are used).
Months     X      Relative Returns   LN(Relative Returns)   Square of (LN Relative Returns - Average)
   0     10.50
   1     12.25         1.17               0.1542                      0.0101
   2     11.50         0.94              -0.0632                      0.0137
   3     13.25         1.15               0.1417                      0.0077
   4     14.65         1.11               0.1004                      0.0022
   5     15.65         1.07               0.0660                      0.0001
   6     14.50         0.93              -0.0763                      0.0169
Sum                                       0.3228
Average                                   0.0538

Sample Standard Deviation and Variance
  Sum of Square (LN Relative Returns - Average):                                        0.0507
  Volatility = Square Root of (Sum of Square (LN Relative Returns - Average)/(N - 1)):  10.07%
  Using Excel's STDEV function on LN(Relative Returns):                                 10.07%
  Annualized Volatility (Periodic Volatility × Square Root (Periods in a Year)):        34.89%
FIGURE F.9 Volatility computation
See Chapter 166, Volatility Computations, for details on obtaining volatility risk measures from various approaches (e.g., GARCH, volatility to probability, logarithmic returns, implied volatility, and others).
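A minimal Python sketch of the Figure F.9 procedure (log relative returns, sample standard deviation, then annualization by the square root of the number of periods per year):

    # Reproduces the Figure F.9 volatility calculation.
    import math
    import statistics

    prices = [10.50, 12.25, 11.50, 13.25, 14.65, 15.65, 14.50]  # monthly data
    log_returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]

    periodic_vol = statistics.stdev(log_returns)   # ~10.07% per month
    annual_vol = periodic_vol * math.sqrt(12)      # ~34.89% per year
    print(periodic_vol, annual_vol)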
Beta Beta is another common measure of risk in the investment finance arena. Beta can be defined simply as the undiversifiable, systematic risk of a financial asset. This concept is made famous through the CAPM, where a higher beta means a higher risk, which in turn requires a higher expected return on the asset. The beta coefficient measures the relative movements of one asset value against a comparable benchmark or market portfolio; that is, we define the beta coefficient as:
\beta = \frac{\mathrm{Cov}(x, m)}{\mathrm{Var}(m)} = \frac{\rho_{x,m}\,\sigma_x\,\sigma_m}{\sigma_m^2}
where Cov(x, m) is the population covariance between the asset x and the market or comparable benchmark m, and Var(m) is the population variance of m; both can be computed in Excel using the COVAR and VARP functions, in which case the computed beta is the population beta. In contrast, the sample beta coefficient is computed using the correlation coefficient between x and m, ρ_{x,m}, together with the sample standard deviations of x and m, denoted s_x and s_m (σ_x and σ_m denote the corresponding population standard deviations).
A beta of 1.0 implies that the relative movements or risk of x are identical to the relative movements of the benchmark (see Example 1 in Figure F.10, where the asset x is simply one unit less than the market asset m, but both fluctuate at the same levels). Similarly, a beta of 0.5 implies that the relative movements or risk of x are half of the relative movements of the benchmark (see Example 2 in Figure F.10, where the asset x fluctuates at half the level of the market m). Therefore, beta is a powerful measure, but it requires a comparable benchmark against which to gauge its fluctuations.
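A minimal Python sketch of the population beta computation in Example 2 of Figure F.10, using NumPy's covariance and variance in place of Excel's COVAR and VARP:

    # Population beta: Cov(x, m) / Var(m), with ddof=0 for population values.
    import numpy as np

    x = np.array([10.50, 12.25, 11.50, 13.25, 14.65, 15.65, 14.50])
    m = 2 * x  # market benchmark with twice the fluctuations of x

    beta = np.cov(x, m, ddof=0)[0, 1] / np.var(m, ddof=0)
    print(beta)  # 0.5: x moves half as much as the benchmark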
Coefficient of Variation The coefficient of variation (CV) is simply defined as the ratio of the standard deviation to the mean, which means that the risks are common-sized. For example, a distribution of a group of students' heights (measured in meters) can be compared to the distribution of the students' weights (measured in kilograms). This measure of risk or dispersion is applicable when the variables' estimates, measures, magnitudes, or units differ. For example, in the computations in Figure F.7, the CV is -9.25 for the population or -9.99 for the sample. The CV is useful as a measure of risk per unit of return; when inverted, it can be used as a measure of bang for the buck, or returns per unit of risk. Thus, in portfolio optimization, one would be interested in minimizing the CV or maximizing the inverse of the CV.
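A minimal Python sketch of the CV computation on the Figure F.7 data:

    # Coefficient of variation: risk per unit of mean.
    import statistics

    x = [-10.50, 12.25, -11.50, 13.25, -14.65, 15.65, -14.50]
    mean = statistics.mean(x)             # ~ -1.43
    print(statistics.pstdev(x) / mean)    # population CV ~ -9.25
    print(statistics.stdev(x) / mean)     # sample CV     ~ -9.99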
Value at Risk Value at Risk (VaR) measures the amount of capital reserves at risk given a particular holding period at a particular probability of loss. This measurement can be modified for risk applications by stating, for example, the amount of potential losses a certain percent of the time during the period of the economic life of the project; clearly, a project with a smaller VaR is better. VaR has a holding-period requirement, typically one year or one month. It also has a percentile requirement, for example, a 99.9 percent one-tail confidence. There are also modifications for daily risk measures such as DEaR, or Daily Earnings at Risk. The VaR or DEaR can be determined very easily using Risk Simulator: create your risk model, run a simulation, look at the forecast chart, and enter 99.9 percent as the right-tail probability of the distribution or 0.1 percent as the left-tail probability of the distribution, then read the VaR or DEaR directly off the forecast chart.
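The book reads VaR directly off Risk Simulator's forecast chart; a minimal percentile-based Python sketch of the same idea follows, where the portfolio value and the return distribution are illustrative assumptions:

    # Percentile-based VaR from a simulated distribution of ending values.
    import numpy as np

    rng = np.random.default_rng(seed=1)
    portfolio_value = 1_000_000.0
    annual_returns = rng.normal(loc=0.08, scale=0.20, size=100_000)  # assumed
    ending_values = portfolio_value * np.exp(annual_returns)

    # 99.9% VaR over a one-year holding period: the loss exceeded only
    # 0.1 percent of the time (the 0.1th percentile of the left tail).
    worst_value = np.percentile(ending_values, 0.1)
    var_999 = portfolio_value - worst_value
    print(f"1-year 99.9% VaR: {var_999:,.0f}")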
Worst-Case Scenario and Regret Another simple measure is the value of the worst-case scenario given catastrophic losses. A related measure is regret; that is, if a decision is made to pursue a particular project but the project becomes unprofitable and suffers a loss, the level of regret is the difference between the actual losses and the payoff of doing nothing at all. This analysis is very similar to the VaR but is not time dependent. For instance, a financial return on investment model can be created and a simulation run. The 5 percent worst-case scenario can then be read directly from the forecast chart in Risk Simulator.
Example 1: Similar fluctuations with the market

Months                 0      1      2      3      4      5      6
X                    10.50  12.25  11.50  13.25  14.65  15.65  14.50
Market Comparable M  11.50  13.25  12.50  14.25  15.65  16.65  15.50

Population Beta
  Covariance population using Excel's COVAR:                     2.9827
  Variance of M using Excel's VARP:                              2.9827
  Population Beta (Covariance population (X, M)/Variance (M)):   1.0000

Sample Beta
  Correlation between X and M using Excel's CORREL:                                1.0000
  Standard deviation of X using Excel's STDEV:                                     1.8654
  Standard deviation of M using Excel's STDEV:                                     1.8654
  Beta Coefficient (Correlation X and M * Stdev X * Stdev M)/(Stdev M * Stdev M):  1.0000

Example 2: Half the fluctuations of the market

Months                 0      1      2      3      4      5      6
X                    10.50  12.25  11.50  13.25  14.65  15.65  14.50
Market Comparable M  21.00  24.50  23.00  26.50  29.30  31.30  29.00

Population Beta
  Covariance population using Excel's COVAR:                     5.9653
  Variance of M using Excel's VARP:                             11.9306
  Population Beta (Covariance population (X, M)/Variance (M)):   0.5000

Sample Beta
  Correlation between X and M using Excel's CORREL:                                1.0000
  Standard deviation of X using Excel's STDEV:                                     1.8654
  Standard deviation of M using Excel's STDEV:                                     3.7308
  Beta Coefficient (Correlation X and M * Stdev X * Stdev M)/(Stdev M * Stdev M):  0.5000

FIGURE F.10 Beta coefficient computation
Risk-Adjusted Return on Capital Risk-adjusted return on capital (RAROC) takes the ratio of the difference between the fiftieth percentile (median) return P50 and the fifth percentile return P5 on a project to its standard deviation σ, written as:
\mathrm{RAROC} = \frac{P_{50} - P_{5}}{\sigma}
This approach is used mostly by banks to estimate returns subject to their risks by measuring only the potential downside effects, truncating the distribution at the worst-case 5 percent of the time, and ignoring the positive upswings, while at the same time common-sizing to the risk measure of standard deviation. Thus, RAROC can be seen as a measure that combines standard deviation, CV, semi-standard deviation, and worst-case scenario analysis. This measure is useful when applied with Monte Carlo simulation, where the percentiles and standard deviation measurements required can be obtained through the forecast chart's statistics view in Risk Simulator.
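A minimal Python sketch of RAROC computed from a simulated return distribution (the simulated returns are an illustrative assumption; in the book, the percentiles come from Risk Simulator's forecast statistics):

    # RAROC = (P50 - P5) / standard deviation, from a simulated distribution.
    import numpy as np

    rng = np.random.default_rng(seed=7)
    returns = rng.normal(loc=0.12, scale=0.25, size=100_000)  # assumed returns

    p50 = np.percentile(returns, 50)
    p5 = np.percentile(returns, 5)
    raroc = (p50 - p5) / returns.std(ddof=1)
    print(raroc)  # ~1.645 for a normal distribution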
APPENDIX G
Mathematical Structures of Stochastic Processes
Throughout the book, we discuss using stochastic processes for establishing simulation structures, for risk-neutralizing revenue and cost, for forecasting and obtaining an evolution of pricing structures, as well as for modeling and valuing exotic options. This appendix sheds some light on the underpinnings of a stochastic process and what it means. A stochastic process is nothing but a mathematically defined equation that can create a series of outcomes over time, outcomes that are not deterministic in nature. That is, it is an equation or a process that does not follow any simple discernible rule such as: price will increase X percent every year, or revenues will increase by this factor of X plus Y percent. A stochastic process is by definition nondeterministic and not static, and one can plug different numbers into a stochastic process equation and obtain different results every time. Instead of dealing with a single reality, a stochastic process models the randomness and indeterminacy of the future outcomes. The current initial value is known, but the future paths are unknown; however, certain paths are more probable than others. Therefore, to model a stochastic process, probability distributions and simulation techniques are required. For instance, the path of a stock price is stochastic in nature, and one cannot reliably predict the stock price path with any certainty. However, the price evolution over time is enveloped in a process that generates these prices. The process is fixed and predetermined, but the outcomes are not. Hence, by stochastic simulation, we create multiple pathways of prices, obtain a statistical sampling of these simulations, and make inferences on the potential pathways that the actual price may take, given the nature and parameters of the stochastic process used to generate the time series. The interesting thing about stochastic process simulation is that historical data are not necessarily required; that is, the model does not have to fit any set of historical data. To run a stochastic process forecast, either compute the expected returns and the volatility from historical data, estimate them using comparable external data, or make assumptions about these values based on expert judgment and expectations. Three basic stochastic processes are discussed in this appendix, including the geometric Brownian motion, which is the most common and prevalently used process due to its simplicity and wide-ranging applications. The mean-reversion and jump-diffusion processes are also discussed. Regardless of the process used, the idea is to simulate multiple paths or evolutions thousands of times (see Figure G.1), where certain paths are more prevalent than others. This means that at the time period
of interest (i.e., the specific forecast year, month, day, or other period), there will be thousands of values; these values are plotted in a histogram and their statistical properties are determined.

FIGURE G.1 Simulating multiple paths in a stochastic process
SUMMARY MATHEMATICAL CHARACTERISTICS OF BROWNIAN MOTION (RANDOM WALK)

Assume a process X, where X = [X_t : t ≥ 0], if and only if X_t is continuous, the starting point is X_0 = 0, X is normally distributed with mean zero and variance one, or X ∈ N(0, 1), and each increment in time is independent of each previous increment and is itself normally distributed with mean zero and variance a, such that X_{t+a} - X_t ∈ N(0, a). Then the process dX = αX dt + σX dZ follows a geometric Brownian motion, where α is a drift parameter, σ is the volatility measure, and dZ = ε_t √(δt) is the Wiener process, such that ln[dX/X] ∈ N(µ, σ), or X and dX are lognormally distributed. If the value of the process at time zero is X(0) = X_0, then the expected value of the process X at any time t is E[X(t)] = X_0 e^{αt}, and the variance of the process X at time t is V[X(t)] = X_0^2 e^{2αt}(e^{σ²t} - 1). In the continuous case where there is a drift parameter α, the expected value becomes

E\left[\int_0^\infty X(t)e^{-rt}\,dt\right] = \int_0^\infty X_0 e^{-(r-\alpha)t}\,dt = \frac{X_0}{r-\alpha}

Stated in another, more applicable format, the Brownian motion or random walk process takes the form

\frac{\delta X}{X} = \alpha(\delta t) + \sigma\varepsilon\sqrt{\delta t}

for regular options simulation when multiple time steps are simulated, or, in a more generic version,

\frac{\delta X}{X} = \left(\alpha - \frac{\sigma^2}{2}\right)\delta t + \sigma\varepsilon\sqrt{\delta t}

for a geometric process with fewer time steps.
For an exponential version, we simply take the exponentials; as an example, we have

\frac{\delta X}{X} = \exp\left[\alpha(\delta t) + \sigma\varepsilon\sqrt{\delta t}\right]

where we define X as the variable's value, δX as the change in the variable's value from one step to the next, α as the annualized growth or drift rate, σ as the annualized volatility, and ε as a random variable drawn from a standard normal N(0, 1) distribution. Figure G.2 illustrates a sample forecast path of a random walk Brownian motion process. Notice that in this example the drift or growth rate is a positive 5 percent, which means the evolution trends upward most of the time, with fluctuations around this trend due to the positive annualized volatility; the higher the volatility, the wider the fluctuations around the trend. To estimate the parameters from a set of time-series data, set the drift rate α to the average of the natural logarithms of the relative returns, ln(X_t/X_{t-1}), and set σ to the standard deviation of all the ln(X_t/X_{t-1}) values.
FIGURE G.2 Sample price path of a random walk Brownian motion process
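A minimal Python sketch of the multiple-time-step simulation form δX/X = α(δt) + σε√(δt); the drift, volatility, step size, and number of paths are illustrative assumptions:

    # Simulate many discrete random walk (Brownian motion) paths.
    import numpy as np

    rng = np.random.default_rng(seed=3)
    alpha, sigma = 0.05, 0.25        # assumed annualized drift and volatility
    dt, steps, paths = 1 / 12, 120, 1000
    x = np.full(paths, 100.0)        # starting value X0 for every path

    for _ in range(steps):
        eps = rng.standard_normal(paths)
        x = x * (1 + alpha * dt + sigma * eps * np.sqrt(dt))

    # At the horizon, the simulated values form the forecast distribution.
    print(x.mean(), np.percentile(x, [5, 50, 95]))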
SUMMARY MATHEMATICAL CHARACTERISTICS OF MEAN-REVERSION PROCESSES

If a stochastic process has a long-run attractor, such as a long-run production cost or a long-run steady-state inflationary price level, then a mean-reversion process is more likely. The process reverts to a long-run average X̄ such that the expected value is

E[X_t] = \bar{X} + (X_0 - \bar{X})e^{-\eta t}

and the variance is

V[X_t - \bar{X}] = \frac{\sigma^2}{2\eta}\left(1 - e^{-2\eta t}\right)

The special circumstance that becomes useful is the limiting case, when the time change becomes instantaneous or dt → 0; we then have the condition

X_t - X_{t-1} = \bar{X}(1 - e^{-\eta}) + X_{t-1}(e^{-\eta} - 1) + \varepsilon_t

which is a first-order autoregressive process, and η can be tested econometrically in a unit root context. Stated in another, more applicable format, the following describes the mathematical structure of a mean-reverting process with drift:

\frac{\delta X}{X} = \eta\left(\bar{X}e^{\alpha(\delta t)} - X\right)\delta t + \alpha(\delta t) + \sigma\varepsilon\sqrt{\delta t}

To obtain the rate of reversion and the long-term rate from historical data points, run a regression of the form

Y_t - Y_{t-1} = \beta_0 + \beta_1 Y_{t-1} + \varepsilon

from which η = -ln[1 + β_1] and X̄ = -β_0/β_1, where we further define η as the rate of reversion to the mean, X̄ as the long-term value the process reverts to, Y as the historical data series, β_0 as the intercept coefficient in the regression analysis, and β_1 as the slope coefficient in the regression analysis. Figure G.3 illustrates a sample evolution path of a mean-reverting process.
FIGURE G.3 Sample price path of a mean-reverting process
Notice that the fluctuations are not as wild as in the random walk process but are more tempered, fluctuating around the long-term mean value ($120 in this example).
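A minimal Python sketch of the calibration regression described above, run on synthetic data generated from the first-order autoregressive form of the process (all parameter values are illustrative assumptions):

    # Generate a synthetic mean-reverting series from the AR(1) form:
    # X_t = Xbar*(1 - exp(-eta)) + X_{t-1}*exp(-eta) + noise,
    # then recover eta and Xbar by regression, as described in the text.
    import numpy as np

    rng = np.random.default_rng(seed=5)
    eta_true, xbar_true, noise = 0.3, 120.0, 5.0   # assumed parameters
    phi = np.exp(-eta_true)
    y = np.empty(500)
    y[0] = 100.0
    for t in range(1, len(y)):
        y[t] = xbar_true * (1 - phi) + phi * y[t - 1] + noise * rng.standard_normal()

    # Regress Y_t - Y_{t-1} on Y_{t-1}.
    dy, lag = np.diff(y), y[:-1]
    beta1, beta0 = np.polyfit(lag, dy, 1)   # slope first, then intercept
    eta_hat = -np.log(1 + beta1)            # eta  = -ln(1 + beta1)
    xbar_hat = -beta0 / beta1               # Xbar = -beta0/beta1
    print(eta_hat, xbar_hat)                # estimates near 0.3 and 120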
SUMMARY MATHEMATICAL CHARACTERISTICS OF JUMP-DIFFUSION PROCESSES

Start-up ventures, research and development initiatives, and oil and electricity prices usually follow a jump-diffusion process. Business operations may be status quo for a few months or years, and then a product or initiative becomes highly successful and takes off; or, when there is a terrorist attack or war breaks out, oil prices jump immediately. An initial public offering of equities, oil price jumps, and the price of electricity are textbook examples of this. Assuming that the probability of the jumps follows a Poisson distribution, we have a process dX = f(X, t)dt + g(X, t)dq, where the functions f and g are known and where the probability process is
dq = \begin{cases} 0 & \text{with probability } 1 - \lambda\,dt \\ \mu & \text{with probability } \lambda\,dt \end{cases}
A jump-diffusion process is similar to a random walk process except that there is a probability of a jump at any point in time. The occurrences of such jumps are completely random, but their probability and magnitude are governed by the process itself. In fact, these three processes can be combined into a mixed process. An example of a mixed mean-reverting, jump-diffusion, random walk stochastic process is

\frac{\delta X}{X} = \eta\left(\bar{X}\exp(\alpha(\delta t)) - X\right)\delta t + \alpha(\delta t) + \sigma\varepsilon\sqrt{\delta t} + \theta F(\lambda)(\delta t)

where we further define θ as the jump size of S, F(λ) as the inverse of the Poisson cumulative distribution function, and λ as the jump rate of S. Figure G.4 illustrates a sample path evolution of a jump-diffusion process. Notice that there are sharp edges, or jumps in value, from one period to the next; the higher the jump rate and jump size, the sharper these jumps. The jump size can be found by computing the ratio of the postjump to the prejump levels, and the jump rate can be imputed from past historical data. The other parameters are found the same way as in the other processes. For computational details and examples, see Chapter 89: Forecasting—Stochastic Processes, Brownian Motion, Forecast Distribution at Horizon, Jump Diffusion, and Mean-Reversion, or use the Modeling Toolkit examples Forecasting: Brownian Motion Stochastic Process, Forecasting: Jump-Diffusion Stochastic Process, Forecasting: Mean-Reverting Stochastic Process, and Forecasting: Stochastic Processes.
FIGURE G.4 Sample price path of a jump-diffusion process
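A minimal Python sketch of a jump-diffusion path: the random walk term plus Poisson-timed jumps of a fixed proportional size (all parameter values are illustrative assumptions):

    # Simulate one jump-diffusion path: diffusion plus Poisson jumps.
    import numpy as np

    rng = np.random.default_rng(seed=11)
    alpha, sigma = 0.05, 0.20          # assumed drift and volatility
    jump_rate, jump_size = 1.0, 0.25   # ~1 jump/year, +25% per jump (assumed)
    dt, steps = 1 / 252, 252           # daily steps over one year

    x = 100.0
    path = [x]
    for _ in range(steps):
        diffusion = alpha * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        jumps = rng.poisson(jump_rate * dt)   # 0 almost always, occasionally 1
        x *= (1 + diffusion + jump_size * jumps)
        path.append(x)

    print(path[-1])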
Glossary of Input Variables and Parameters in the Modeling Toolkit Software
Each of the inputs used in the Modeling Toolkit functions is listed here. Typically, most inputs are single-point estimates; that is, a single value such as 10.50, with the exception of the input variables listed with "Series" in parentheses.
A This is the first input variable that determines the shape of the beta and gamma functions, and is required to compute the Incomplete Beta and Incomplete Gamma values. The Incomplete Beta function is a generalization of the beta function that replaces the definite integral of the beta function with an indefinite integral, and is a mathematical expression used to compute a variety of probability distributions such as the gamma and beta distributions. The same can be said about the Incomplete Gamma function. This input is used exclusively in the B2MathIncompleteBeta, B2MathIncompleteGammaP, and B2MathIncompleteGammaQ functions, and the parameter is a positive value.

Above Below This input variable is used in the partial floating lookback options where the strike price is floating at the Above Below ratio, which has to be a positive value, and is greater than or equal to 1 for a call and less than or equal to 1 for a put.

Accruals This is the amount in notes accruals, a subsection of current liabilities in the balance sheet. This variable is typically zero or a positive dollar or currency amount.

Additional Cost This is the amount in additional operating cost used in the B2CreditAcceptanceCost function to determine if a specific credit should be accepted or rejected. This variable is typically a dollar or currency amount, and the amount can be zero or positive.

Alpha Alpha is used in several places and has various definitions. In the first instance, alpha is the shape parameter in several distributions such as the beta, gamma, Gumbel, logistic, and Weibull distributions. It is also used in the Forward Call Option, where if Alpha < 1, then a call option starts (1 – Alpha)% in the money (a put option will be the same amount out of the money), or if Alpha > 1, then the call starts (Alpha – 1)% out of the money (a put option will be the same amount in the money). Finally, alpha is also used as the alpha error level, or Type I error, also known as the significance level in a hypothesis test. It measures the probability of not having the true population mean included in the confidence interval of the sample; that is, it computes the probability of rejecting a true hypothesis. 1 – Alpha
is of course the confidence interval, or the probability that the true population mean resides in the sample confidence interval, and is used in several Six Sigma models. Regardless of use, this parameter has to be a positive value.

Amortization This is the amount in amortization in the financial income statement of a firm, and is used to compute the cash flow to equity for both a levered and unlevered firm. This amount is typically zero or positive.

Amounts (Series) This is a series of numbers (typically listed in a single column with multiple rows) indicating the dollar or currency amounts invested in a specific asset class, used to compute the total portfolio's Value at Risk and used only in the B2VaRCorrelationMethod function. These parameters have to be positive values and arranged in a column with multiple rows.

Arithmetic Mean This is the simple average used in the lognormal distribution. We differentiate this from the geometric or harmonic means, as this arithmetic mean or simple average is the one used as an input parameter in the lognormal distribution. This parameter has to be a positive value, as the lognormal distribution takes on only positive values.

Arithmetic Standard Deviation This is a simple population standard deviation that is used in the lognormal distribution. You can use Excel's STDEVP to compute this value from a series of data points. This parameter has to be a positive value.

Arrival Rate This is the rate of arrival on average to a queue in a specific time period (e.g., the average number of people arriving at a restaurant per day or per hour), and typically follows a Poisson distribution. This parameter has to be a positive value.

Asset 1 and Asset 2 These are the first and second assets in a two-asset exotic option or exchange of asset options. Typically, the first asset (Asset 1) is the payoff asset, whereas the second asset (Asset 2) is some sort of benchmark asset. This is not to be confused with PVAsset, which is the present value of the asset used in a real options analysis. These parameters must be positive values.

Asset Allocation (Series) These are a series of percentage allocations of assets in a portfolio and must sum to 100%, and this series is used to compute a portfolio's total risk and return levels. These parameters are arranged in a single column with multiple rows and can take on zero or positive values, but the sum of these values must equal 100%.

Asset Turnover This is the total asset turnover financial ratio, or equivalent to annual total sales divided by total assets, used to compute return on equity or return on asset ratios. It has to be a positive value.

Asset Volatility This is the internal asset volatility (not to be confused with regular volatility in an options model, where we compute it using external equity values) used in determining probabilities of default and distance to default on risky debt (e.g., Merton models); it has to be a positive value. This value can only be determined through optimization, either using Risk Simulator to solve for a multiple simultaneous equation function or using the B2ProbabilityDefaultMertonImputedAssetVolatility function.
Average Lead This is the average lead time in days required in order to receive an order that is placed. This parameter is typically a positive value, and is used in the economic order quantity models.

Average Measurement (Series) This is a series of the average measurements per sample subgroup in a Six Sigma environment to determine the upper and lower control limits for a control chart (e.g., in an experiment, 5 measurements are taken of a production output, the experiment is repeated 10 different times with 5 samples taken each time, and the 10 averages of the 5 samples are computed). These values are typically zero or positive, and are arranged in a single column with multiple rows.

Average Price This is the average of historically observed stock prices during a specific lookback period, used to determine the value of Asian options. This parameter has to be positive.

B This is the second input variable for the scale of the beta or gamma functions, and is required to compute the Incomplete Beta and Incomplete Gamma values. The Incomplete Beta function is a generalization of the beta function that replaces the definite integral of the beta function with an indefinite integral, and is a mathematical expression used to compute a variety of probability distributions such as the gamma and beta distributions. The same can be said about the Incomplete Gamma function. This input is used exclusively in the B2MathIncompleteBeta, B2MathIncompleteGammaP, and B2MathIncompleteGammaQ functions, and the parameter is a positive value.

Barrier This is the stock price barrier (it can be an upper or lower barrier) for certain exotic barrier and binary options where, if the barrier is breached within the lifetime of the option, the option either comes into the money or goes out of the money, or an asset or cash is exchanged. This parameter is a positive value.

Base This is the power value for determining and calibrating the width of the credit tables. Typically, it ranges between 1 and 4 and has to be a positive value.

Baseline DPU This is the average number of defects per unit in a Six Sigma process, and is used to determine the number of trials required to obtain a specific error boundary and significance level based on this average DPU. This parameter has to be a positive value.

Batch Cost This is the total dollar or currency value of the cost to manufacture a batch of products each time the production line is run. This parameter is a positive value.

Benchmark Prices (Series) This is a series of benchmark prices or levels arranged in a single column with multiple rows, such as the market Standard & Poor's 500, to be used as a benchmark against another equity price level in order to determine the Sharpe ratio.

Best Case This is the best-case scenario value in dollars or currency, used in concert with the Expected Value and Percentile value to determine the volatility of the process or project. This value is typically positive and has to exceed the expected value.
Beta This parameter is used in several places and denotes different things. When used in the beta, gamma, Gumbel, logistic, and Weibull distributions, it is used to denote the scale of the distribution. When used in the capital asset pricing model (CAPM), it is used to denote the beta relative risk (covariance between a stock's returns and market returns divided by the variance of the market returns). Finally, beta is also used as the beta error or Type II error, measuring the probability of accepting a false hypothesis, or the probability of not being able to detect the standard deviation's changes. 1 – Beta is the power of the test, and this parameter is used in statistical sampling and sample size determination in the Six Sigma models. Regardless, this parameter has to be a positive value.

Beta 0, 1, and 2 These are mathematical parameters in a yield curve construction when applying the Bliss and Nelson-Siegel models for forecasting interest rates. The exact values of these parameters need to be calibrated with optimization, but are either zero or positive values.

Beta Levered This is the relative risk beta level of a company that is levered or has debt, and can be used to determine the equivalent level of an unlevered company's beta. This parameter has to be a positive value.

Beta Unlevered This is the relative risk beta level of a company that is unlevered or has zero debt, and can be used to determine the equivalent level of a levered company's beta with debt. This parameter has to be a positive value.

Bond Maturity This is the maturity of a bond, measured in years, and has to be a positive value.
Bond Price This is the market price of the bond in dollars or other currency units, and has to be a positive value.

Bond Yield This is the bond's yield to maturity (that is, the internal rate of return on the bond when held to maturity), and has to be a positive value. These could be applied to corporate bonds or Treasury zero coupon bonds.

Buy Cap Rate This is the capitalization rate computed by (net operating income/sale price) at the time of purchase of a property, and is typically a positive value, used in the valuation of real estate properties.

BV Asset This is the book value of assets in a company, including all short-term and long-term assets.

BV Debt and BV Liabilities This is the book value of debt or all liabilities in a company, including all short-term and long-term debt or liabilities, and has to be a positive value.

BV Per Share This is the book value price of a share of stock, typically recorded at the initial public offering price available through the company's balance sheet, and has to be a positive value.

Calendar Ratio This ratio is a positive value and is used in pricing an option with a Trading Day Correction, which looks at a typical option and corrects it for the varying volatilities. Specifically, volatility tends to be higher on trading days than on nontrading days. The Trading Days Ratio is simply the number of trading days
left until maturity divided by the total number of trading days per year (typically between 250 and 252), and the Calendar Days Ratio is the number of calendar days left until maturity divided by the total number of days per year (365).

Callable Price This is the amount that, when a bond is called, the bondholder will be paid, and is typically higher than the par value of the bond. This parameter requires a positive value.

Callable Step This is the step number on a binomial lattice representing the time period when a bond can be called, and this parameter is a positive integer. For instance, in a 10-year bond where the bond is callable starting on the fifth anniversary, the callable step is 50 in a 100-step lattice model.

Call Maturity This is the maturity of the call option in years, and is used in the complex chooser option (i.e., the exotic option where the holder can decide to make it a call or a put, and each option has its own maturity and strike values), and must be a positive value.

Call Strike This is the strike price of the call option in dollars or currency, and is used in the complex chooser option (i.e., the exotic option where the holder can decide to make it a call or a put, and each option has its own maturity and strike values), and must be a positive value. Sometimes this variable has different suffixes (e.g., Call Strike Sell Low, Call Strike Buy High, and so forth) whenever there might be more than one call option in the portfolio of option strategies; these suffixes represent whether this particular call is bought or sold, and whether the strike price is higher or lower than that of the other call option.

Call Value This is the value of a call option, and is used in the put-call parity model, whereby the value of a corresponding put can be determined given the price of the call with similar option parameters; this parameter has to be a positive value. Sometimes this variable has different suffixes (e.g., Call Value Sell Low, Call Value Buy High, and so forth) whenever there might be more than one call option in the portfolio of option strategies; these suffixes represent whether this particular call is bought or sold, and whether the premium paid for the option or the option's value is higher or lower than that of the other call option.

Cap This is the interest rate cap (ceiling) in an interest rate cap derivative, and has to be a positive value. The valuation of the cap is done through computing the value of each of its caplets and summing them up for the price of the derivative.

Capacity This is the maximum capacity level, and is used in forecasting using the S-curve model (where the capacity is the maximum demand or load the market or environment can hold), as well as in the economic order quantity (batch production) model; it has to be a positive value.

Capital Charge This is the amount of invested capital multiplied by the weighted average cost of capital or hurdle rate or required rate of return. This value is used to compute the economic profit of a project, and is a positive value.

Capital Expenditures This is used to compute the cash flow to the firm and the cash flow to equity for a firm. Capital expenditures are deducted from the net cash flow to a firm as an expenditure, and this input parameter can be zero or a positive value.
Cash This variable is used in several places. The first and most prominent is the amount of money that is paid when a binary or barrier option comes into the money; it is also used to denote the amount of cash available as a current asset on a balance sheet. This parameter is zero or positive.

Cash Dividend This is the dividend rate or dividend yield, in percent, and is typically either zero or positive. This parameter is not to be confused with the Cash Dividends series, which is a dollar or currency unit amount, and which can also be zero or positive. This variable is used many times in exotic and real options models.

Cash Dividends (Series) This is a series of cash dividends in dollars or currency units, which come as lump sum payments of dividends on the underlying stock of an option, and can be zero or positive values. This input variable is used in the Generalized Black-Scholes model with cash dividends, and the timing of these cash dividends (Dividend Times) is also listed as a series in a single column with multiple rows.

Cash Flows (Series) This is a series of cash flows used for a variety of models, including the computation of volatility (using the logarithmic cash flow returns approach) and bond models (bond pricing, convexity, and duration computations); each cash flow value must be a positive number, arranged in a column with multiple rows.

Channels This is the number of channels available in a queuing model; for instance, the number of customer service or point of sale cash registers available in a McDonald's fast-food restaurant, where patrons can obtain service. This parameter is a positive integer.

Channels Busy This is the number of channels that are currently busy and serving customers at any given moment. This parameter can be zero or a positive integer.

Choose Time or Chooser Time This is the time available for the holder of a complex chooser option whereby the option holder can choose to make the option a call or a put, with different maturities and strike prices. This parameter is a positive value.

Column The column number in a lattice; for instance, if there is a 20-step lattice for 10 years, then the column number for the third year is the sixth step in the lattice and the column is set to 6, corresponding to the step in the lattice.

Columnwise This variable is used in the changing risk-free and changing volatility option model, where the default is 1, indicating that the data (risk-free rates and volatilities) are arranged in a column. This parameter is either a 1 (values are listed in a column) or a 0 (values are listed in a row).

Common Equity This is the total common equity listed in the balance sheet of a company, and is used in financial ratios analysis to determine the return on equity as well as other profitability and efficiency measures; this parameter is a positive value. This value is different from total equity, which also includes other forms such as preferred equity.

Compounding This is the number of compounding periods per year for the European Swaptions (payer and receiver) and requires a positive integer (e.g., set it as 365 for daily compounding, 12 for monthly compounding, and so forth).
Contract Factor This is the contraction factor used in a real option to contract, and this value is computed as the after-contracting net present value divided by the existing base-case net present value (stated another way, this value is 1 – X, where X is the fraction that is forgone if contraction occurs, or the portion that is shared with an alliance or joint venture partner or outsourcing outfit); the parameter has to be between 0 and 1, noninclusive.

Conversion Date This is the number of days in the future when the convertible bond can be converted into an equivalent value of equity.

Corporate Bond Yield This is the yield of a risky debt or a risky corporate bond in percent, and is used to compute the implied probability of default of a risky debt given a comparable zero coupon risk-free bond with similar maturity. This input has to be a positive value.

Correlation This variable is used in multiple places, including exotic options with multiple underlying assets (e.g., exchange of assets, two-asset options, foreign exchange, and futures or commodity options) and the bivariate normal distribution, where we combine two correlated normal distributions.

Correlations (Series) This is an n × n correlation matrix and is used to value the portfolio Value at Risk where the individual components of the portfolio are correlated with one another.

Cost, Cost 1, and Cost 2 This is a dollar or currency amount corresponding to the cost to execute a particular project or option, and has to be a positive value. This variable is used most frequently in real options models. When there are multiple costs (Cost 1 and Cost 2), this implies several underlying assets and their respective costs or strike prices.

Cost of Debt This is the cost of debt before tax in percent, used to compute the weighted average cost of capital for a project or firm, and is typically a zero or positive value.

Cost of Equity This is the cost of equity before tax in percent, used to compute the weighted average cost of capital for a project or firm, and is typically a zero or positive value.

Cost of Funds This is the cost of obtaining additional funds, in percent, used in determining credit acceptance levels; this parameter can be zero or a positive value.

Cost of Losing a Unit This is the monetary dollar or currency amount lost or forgone if one unit of sales is lost when there is an insufficient number of channels, used in the queuing models to determine the optimal number of channels to have available; it can be zero or a positive value.

Cost of Order This is a dollar or currency amount of the cost of placing an order for additional inventory, used in the economic order quantity models to determine the optimal quantity of inventory to order and to have on hand.

Cost of Preferred Equity This is the before-tax cost of preferred equity in percent, used to compute the cost of funds using the weighted average cost of capital model, and is either zero or a positive value.
Cost to Add Channel This is the monetary dollar or currency amount required to add another channel, used in the queuing models to determine the optimal number of channels to have available; it is a positive value.

Coupon and Coupons (Series) This is the coupon payment in dollars or currency of a debt or callable debt, and is used in the options adjusted spread model to determine the required spreads for a risky and callable bond. For Coupons, it is a time series of cash coupon payments at specific times.

Coupon Rate This is the coupon payment per year, represented in percent, and is used in various debt-based options and credit options where the underlying is a coupon-paying bond or debt; this value can be zero or positive.

Covariances (Series) This is the n × n variance-covariance matrix required to compute the portfolio returns and risk levels given each individual asset's allocation (see Asset Allocation); these values can be negative, zero, or positive. The Variance-Covariance Matrix tool in the Modeling Toolkit can be used to compute this matrix given the raw data of each asset's historical values.

Credit Exposures This is the number of credit or debt lines that exist in a portfolio, and has to be a positive integer.

Credit Spread This is the percentage spread difference between a risky debt or security and the risk-free rate with comparable maturity, and is typically a positive value.

Cum Amount This is a dollar or currency amount, used in a Time Switch option, where the holder receives the Accumulated (Cum) Amount × Time Steps each time the asset price exceeds the strike price for a call option (or falls below the strike price for a put option).

Currency Units This input parameter is a positive value and is used in a Foreign Takeover option with a foreign exchange element, which means that if a successful takeover ensues (if the value of the foreign firm denominated in foreign currency is less than the foreign currency units required), then the option holder has the right to purchase the number of foreign currency units at the predetermined strike price (denominated in exchange rates of the domestic currency to the foreign currency) at the expiration date of the option.

Current Asset This is the sum of cash, accounts receivable, and inventories on a balance sheet (that is, the short-term liquid assets), and has to be a positive value.

Current Price This is the price level of a variable at the current time. This known value has to be positive, and is used for forecasting future price levels.

Current Yield This is the current spot interest rate or yield, used to price risky debt with callable and embedded option features, and has to be a positive value.

Custom Risk-Free (Series) This is a series of risk-free rates with the relevant times of occurrence; that is, there are two columns with multiple rows, where the first column is the time in years (positive values) and the second column lists the risk-free rates (each value has to be a positive percentage), and both columns have multiple
rows. This variable is used in the custom option models where risk-free rates and volatilities are allowed to change over time.

Custom Volatility (Series) This is a series of annualized volatilities with the relevant times of occurrence; that is, there are two columns with multiple rows, where the first column is the time in years (positive values) and the second column lists the volatilities (each value has to be a positive percentage), and both columns have multiple rows. This variable is used in the custom option models where risk-free rates and volatilities are allowed to change over time.

CY Reversion This is the rate of mean reversion of the convenience yield of a futures and commodities contract, and has to be zero or a positive value. The convenience yield is simply the rate differential between a nonarbitrage futures and spot price and a real-life fair market value of the futures price, and can be computed using the B2ConvenienceYield function. With the raw data or computed convenience yields, the mean reversion rate can be calibrated using Risk Simulator's statistical analysis tool.

CY Volatility This is the annualized volatility of the convenience yield of a futures and commodities contract, and has to be a positive value. The convenience yield is simply the rate differential between a nonarbitrage futures and spot price and a real-life fair market value of the futures price, and can be computed using the B2ConvenienceYield function. The volatility can be computed using various approaches as discussed in the Volatility definition.

Daily Volatilities (Series) This is a series of daily volatilities of various asset classes (arranged in a column with multiple rows), used in computing the portfolio Value at Risk, where each volatility is typically small but has to be a positive value. This can also be computed using annualized volatilities and dividing them by the square root of the number of trading days per year.

Days Per Year This is the number of days per year used to compute days sales outstanding, and is typically set to 365 or 360. The parameter has to be a positive integer.

Debt Maturity This is the maturity period, measured in years, of the debt; typically this is the maturity of a corporate bond, and it is a positive value, used in the asset-equity parity models to determine the market value of assets and market value of debt, based on the book value of debt and book value of assets as well as the equity volatility.

Defaults This is the number of credit or debt defaults within some specified period, and can be zero or a positive integer.

Default Probability This is the probability of default, set between 0% and 100%, used to compute the credit risk shortfall value; it can be computed using the Merton probability of default models, as well as other probability of default models in the Modeling Toolkit.

Defective Units (Series) This is the series of numbers of defective units in Six Sigma models, used to compute the upper and lower control limits for quality control charts; the numbers are typically zero or positive integers, arranged in a column with multiple rows.
Defects This is a single value indicative of the number of defects in a process for Six Sigma quality control, used to determine items such as process capability (Cpk), defects per million opportunities (DPMO), and defects per unit (DPU). This parameter is either zero or a positive integer.

Delta Delta is a precision measure used in Six Sigma models. Specifically, the Delta Precision is the accuracy or precision with which the standard deviation may be estimated. For instance, a 0.10% Delta with 5% Alpha for 2 tails means that the estimated mean is plus or minus 0.10%, at a 90% (1 – 2 × Alpha) confidence level.

Deltas (Series) This is a series of delta measures, where the delta is defined here as a sensitivity measure of an option. Specifically, it is the instantaneous change of the option value with an instantaneous change in the stock price. You can use the B2CallDelta function to compute this input, which typically consists of positive values arranged in a column with multiple rows.

Demand This is the level of demand for a particular manufactured product, used to determine the optimal economic order quantity or the optimal level of inventory to have on hand, and has to be a positive integer.

Depreciation This is the level of depreciation, measured in dollars or currency levels, as a noncash expense add-back to obtain the cash flows available to equity and cash flows available to the firm.

DF This is the degrees of freedom input used in the chi-square and t-distributions. The higher this value, the more closely these distributions approach the normal or Gaussian distribution. This input parameter is a positive integer, and is typically larger than 1. You can use Risk Simulator's distributional fitting tool to fit your existing data to obtain the best estimate of DF. Alternatively, the distributional analysis tool can also be used to see the effects of higher and lower DF values.

DF Denominator This is the degrees of freedom of the denominator used in the F-distribution. This input parameter is a positive integer, and is typically larger than 1. You can use Risk Simulator's distributional fitting tool to fit your existing data to obtain the best estimate of DF. Alternatively, the distributional analysis tool can also be used to see the effects of higher and lower DF values.

DF Numerator This is the degrees of freedom of the numerator used in the F-distribution. This input parameter is a positive integer, and is typically larger than 1. You can use Risk Simulator's distributional fitting tool to fit your existing data to obtain the best estimate of DF. Alternatively, the distributional analysis tool can also be used to see the effects of higher and lower DF values.

Discount Rate This is the discount rate used to determine the price-to-earnings multiple by first using this input to value the future stock price. This parameter is a positive value, and in the case of the PE Ratio model it needs to be higher than the growth rate. Sometimes the weighted average cost of capital is used in its place for simplicity.

Dividend, Dividend Rate, Dividend 1 and 2 This is the dividend rate or dividend yield, in percent, and is typically either zero or positive. This parameter is not to be confused with Cash Dividend, which is a dollar or currency unit amount and can also be zero or positive. This variable is used many times in exotic and real
options models. Dividend 1 and Dividend 2 are simply the dividend yields on the two underlying assets in a two-asset option.

Dividend Times (Series) This is a series of times in years when the cash dividends in dollars or currency are paid on the underlying stock of an option, and can be zero or positive values. This input variable is used in the Generalized Black-Scholes model with cash dividends, and the timing of these cash dividends is listed as a series in a single column with multiple rows.

Domestic RF This is the domestic risk-free rate used in foreign or takeover options that require the inputs of a domestic and a foreign risk-free rate; in this case it has to be a positive value.

Down This is the down step size used in an asymmetrical state option pricing model, and needs to be a value between 0 and 1. This value should be carefully calibrated to the option's maturity and the number of lattice steps, to denote the down step size per lattice step.

DSO This is days sales outstanding, or the average accounts receivable divided by the average sales per day, used to compute the profitability of issuing new credit to a corporation. This input variable can be computed using the B2RatiosDaysSalesOutstanding function, and the parameter has to be a positive value.

DT This is the time between steps; that is, suppose a bond or an option has a maturity of 10 years and a 100-step lattice is used. DT is 0.1, or 0.1 years will elapse with every lattice step taken. This parameter has to be a positive value, and is used in the B2BDT lattice functions.

Duration This variable is typically computed using a B2BondDuration function, but as an input it represents the conversion factor used in converting a spread or interest rate differential into a dollar or currency amount, and is used in several debt-based options. This input has to be a positive value, and in some cases is set to 1 in order to determine the debt-based option's value in percentage terms.

EBIT Earnings before interest and taxes (EBIT) is used in several financial ratios analysis models. EBIT is also sometimes called operating income, and can be a negative or positive value.

Ending Plot This variable is used in the options trading strategies (e.g., straddles, strangles, bull spreads, and so forth), representing the last value to plot for the terminal stock price (the x-axis on an option payoff chart); it has to be higher than the Starting Plot value, and is a positive input.

EPS Earnings per share (EPS) is net income divided by the number of shares outstanding; EPS is used in several financial ratios analysis models, and can take on either negative or positive values.

Equity Correlation This is the correlation coefficient between two equity stock prices (not returns), and can be between -1 and +1 (inclusive), including 0.

Equity Multiplier The equity multiplier is the ratio of total assets to the total equity of the company, indicating the amount of increase in the ability of the existing equity to generate the available total assets, and has to be a positive value.
Equity Price or Share Price This is the same as stock price per share, and has to be a positive value.
Equity Value or Total Equity This is the same as total equity in a firm, computed as the number of shares outstanding times the market share price, and can be either zero or a positive value.

Equity Volatility This is the volatility of stock prices, not to be confused with the volatility of internal assets. The term Volatility is used interchangeably with Equity Volatility, but this term is used in models that require both equity volatility and some other volatility (e.g., asset volatility or foreign exchange rate volatility), and this value is typically positive.

Exchange Rate This is the foreign exchange rate from one currency to another, and is the spot rate for domestic currency to foreign currency; it has to be a positive value.

Exercise Multiple This is the suboptimal exercise multiple ratio, computed as the historical average stock price at which an option of similar type and class, held by a similar group of people, was executed, divided by the strike price of the option. This multiple has to be greater than 1. This input variable is used in valuing employee stock options with suboptimal exercise behaviors.

Expand Factor This is the expansion factor for real options models of options to expand, and has to be a value greater than 1.0, computed as the total expanded net present value (base case plus the expanded case) divided by the base case net present value.

Expected Value This is the expected value or mean value of a project's net present value, used to determine a rough estimate of the annualized implied volatility of a project using the management approach (volatility to probability approach), and is typically a positive value.

Face Value This is the face value of a bond, in dollars or currency, and has to be a positive value. This face value is the redeemable value at the maturity of the bond (typically, this value is $1,000 or $10,000).

First Period This input variable is used in a spread option, where the maturity of a spread option is divided into two periods (from time zero to this first period, and from the first period to maturity) and the spread option pays the difference between the maximum values of these two periods. This input parameter has to be greater than zero and less than the maturity of the spread option.

First Variable This is the first variable used in a pentanomial lattice model to value exotic or real options problems. In the pentanomial lattice, two binomial lattices (a binomial lattice models two outcomes, up or down, evolved through the entire lattice) are combined to create a single rainbow lattice with two underlying variables multiplied together, creating five possible outcomes (UP1 and UP2, UP1 and DOWN2, Unchanged 1 and Unchanged 2, DOWN1 and UP2, and DOWN1 and DOWN2). This input parameter has to be a positive value.

Fixed FX Rate This input variable is used in valuing Quanto options, which are traded on exchanges around the world (also known as foreign equity options). The options are denominated in a currency other than that of the underlying asset. The option has an expanding or contracting coverage of the foreign exchange value of the underlying asset, based on the fixed exchange rate (domestic currency to foreign currency), which has to be a positive value.
Floor This is the interest rate floor, an interest rate derivative; it has to be a positive value. The floor is valued by computing the value of each of its floorlets and summing them to determine the price of the derivative.

Foreign Exchange Volatility or Forex Volatility This is the annualized volatility of foreign exchange rates, typically computed using the annualized logarithmic relative returns (use the B2Volatility function to compute this volatility based on historical exchange rates), and has to be a positive value.

Foreign Rate or Foreign RF This is the foreign risk-free rate, used in foreign exchange or foreign equity options and valuation models, and has to be a positive value.

Foreign Value This is the value of a foreign firm denominated in foreign currency, used in valuing a takeover option, and this value has to be a positive number.

Forward CY Correlation This variable is sometimes truncated to "ForCYCorrel." It is the linear correlation between forward rates and convenience yields, and is used in valuing commodity options. Correlations have to be between –1 and +1 (typically noninclusive).

Forward Days This is the positive integer representing the number of days into the future where there is a corresponding forward rate that is applicable.

Forward Price This is the prearranged price of a contract set today for delivery in the future, and is sometimes also used interchangeably for the future price of an asset or commodity that may not be prearranged but is known with certainty or is the expected price in the future.

Forward Rate This is the forward rate in a commodity option, and has to be a positive value.

Forward Reversion Rate or For-Reversion This input variable is used in valuing commodity options. It computes the values of commodity-based European call and put options, where the convenience yield and forward rates are assumed to be mean-reverting and each has its own volatilities and cross-correlations, creating a complex multifactor model with interrelationships among the variables. The forward reversion rate is the rate of mean reversion of the forward rate, and is typically a small positive value; it can be determined and calibrated using Risk Simulator's statistical analysis tool.

Forward Time This is the time in the future when a forward start option begins to become active, and this input parameter has to be a positive value greater than zero and less than the maturity of the option.

Forward Volatility or For-Volatility This input variable is used in valuing commodity options. It computes the values of commodity-based European call and put options, where the convenience yield and forward rates are assumed to be mean-reverting and each has its own volatilities and cross-correlations, creating a complex multifactor model with interrelationships among the variables. The forward volatility is the annualized volatility of forward rates and prices, and has to be a positive value, typically computed using the annualized logarithmic relative returns of historical forward prices (use the B2Volatility function to compute this volatility based on historical prices).
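Several of the volatility entries above reference annualized logarithmic relative returns. Here is a minimal sketch of that computation, with hypothetical weekly exchange rates, standing in for the B2Volatility function:

```python
import numpy as np

# Annualized volatility from logarithmic relative returns; prices are hypothetical.
prices = np.array([1.30, 1.32, 1.29, 1.33, 1.35, 1.34])  # e.g., weekly FX rates
log_returns = np.log(prices[1:] / prices[:-1])
periods_per_year = 52                                     # weekly data
annualized_vol = log_returns.std(ddof=1) * np.sqrt(periods_per_year)
print(f"annualized volatility = {annualized_vol:.2%}")
```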
Free Cash Flow This is the free cash flow available to the firm, and can be computed as the net income generated by the firm with all the modifications of noncash expense add-backs as well as capital expenditure reductions, or can be computed using the three B2RatiosCashFlow models.

Future Price This is the price in the future of any variable that is either known in advance or forecasted. This value is not the price of a futures contract, and is typically a positive value.

Future Returns This is the returns of any variable that is either known in advance or forecasted. This value is not the returns on a futures contract, and can be positive or negative in value.

Futures, Futures Price, and Futures 1 or Futures 2 This is the price of the futures contract (when a model involves two futures contracts, each carries its own numerical suffix, as in the futures spread options computations), and has to be a positive value.

Futures Maturity This is the maturity of the futures contract, measured in years, and has to be a positive value.

Granularities This input parameter has to be a positive integer value and is used in the computation of finite differences in obtaining the value of an option. Great care has to be taken to calibrate this input, using alternate closed-form solutions.

Gross Rent This is the dollar or currency amount of annualized gross rent, and can be zero or a positive value; it is used in property valuation models.

Growth Rate This positive percentage value is used in various locations and signifies the annualized average growth of some variable. In the financial ratios analysis, this would be the growth rate of dividends (and this value must be less than the discount rate used in the model). In the Merton probability of default models, this parameter is the annualized growth rate of assets, and in the S-curve forecast it is the growth rate of a population or market approaching its saturation level.

Holding Cost This is the zero or positive dollar or currency cost of holding on to an additional unit of inventory, used in the economic order quantity models to determine the optimal level of inventories to hold.

Horizon This is a positive value representing some time period denominated in years, and is used in forecasting future values of some variable.

Horizon Days This is a positive integer value representing the number of holding days over which to compute a Value at Risk, typically between 1 and 10 days, and calibrated to how long it will take on average for the bank or company to liquidate its assets to cover any extreme and catastrophic losses or to move out of a loss portfolio.
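The Demand and Holding Cost entries feed the economic order quantity models. Below is a minimal sketch of the classic EOQ formula; note that the per-order setup cost is an assumed extra input not defined in this Glossary:

```python
from math import sqrt

# Classic economic-order-quantity sketch; order_cost is a hypothetical input.
demand = 12_000        # annual demand, a positive integer
order_cost = 80.0      # cost per order placed (assumed for illustration)
holding_cost = 2.5     # cost of holding one unit per year

eoq = sqrt(2 * demand * order_cost / holding_cost)
print(f"optimal order quantity = {eoq:.0f} units")  # about 877
```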
Inflation This is the annualized rate of inflation, measured as a percentage, and is typically a positive value, although zero and negative values may occur but are rare.

Interest Lattice This refers to the lattice that is developed for the underlying interest rates, modeled from a yield curve and its spot volatilities over time, and is used in pricing interest-sensitive derivatives.

Interest Paid This is the dollar or currency amount of interest paid per year, and is either zero or a positive value.

Interest Rate This is the percentage interest paid per year, and is typically zero or a positive value.

Interest Rates (Series) This is a series of annualized interest rates or discount rates in percent, in a column with multiple rows, used in computing a project's net present value or the price of a bond (given a corresponding series of cash flows).

Interest Volatility This is the annualized volatility of interest rates, in percent, and has to be a positive value. See the definition of Volatility in this Glossary for details on some of the techniques used in computing volatility.

Inventory This is the amount of inventories in dollars or currency, and can be determined from a company's balance sheet; it is typically a positive number but can sometimes take on a zero value.

Invested Capital This is the dollar or currency amount of invested capital, and is typically a positive value, used to compute the capital charge and economic capital of a project or firm.

Investment This is the initial lump sum investment dollar or currency amount, used to compute the internal rate of return (IRR) of a project, and this value is a positive number (although it is used as a negative value in the model, enter the value as positive).

Jump Rate This variable is used in a Jump Diffusion option, which is similar to a regular option except that instead of assuming that the underlying asset follows a lognormal Brownian Motion process, the process here follows a Poisson Jump Diffusion process; it is used in the B2ROJumpDiffusion models. That is, stock or asset prices follow jumps, and these jumps occur several times per year (observed from history). Cumulatively, these jumps explain a certain percentage of the total volatility of the asset. The jump rate can be determined from historical data or calibrated using Risk Simulator's statistical analysis tool.

Jump Size Similar to the Jump Rate, the Jump Size is used to determine the size of a jump in a Jump Diffusion option model. Typically, this value is greater than 1, to indicate how large the jump is relative to the previous period, and is used in the B2ROJumpDiffusion models.

Jumps Per Year An alternative input to the Jump Size is the number of jumps per year, as it is easier to calibrate the total number of jumps per year based on expectations or historical data; this input is a positive integer used in the B2MertonJumpDiffusion models.
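For the Investment entry above, here is a minimal sketch of the IRR computation: the investment is entered as a positive number and applied as a negative flow, per the convention described, and the rate that zeroes the NPV is found with a root search. All figures are hypothetical:

```python
from scipy.optimize import brentq

investment = 1_000.0                         # entered positive, applied negative
cash_flows = [300.0, 400.0, 500.0, 200.0]    # end-of-year inflows (hypothetical)

def npv(rate):
    # Net present value of the project at a given discount rate.
    pv = -investment
    for t, cf in enumerate(cash_flows, start=1):
        pv += cf / (1 + rate) ** t
    return pv

irr = brentq(npv, -0.99, 10.0)   # bracket the root and solve NPV(rate) = 0
print(f"IRR = {irr:.2%}")
```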
Known X and Known Y Values These are the historical or comparable data available and observable, needed to use the cubic spline model (to both interpolate missing values and extrapolate or forecast beyond the sample data set), which is usually applied in yield curve and interest rate term structure construction.

Kurtosis This is the fourth moment of a distribution, measuring the distribution's peakedness and extreme values. An excess kurtosis of 0 is a normal distribution with "normal" peaks and extreme values, and this parameter can take on positive, zero, or negative values.

Lambda, Lambda 1, and Lambda 2 Lambda is the mean or average value used in a Poisson distribution (an event occurring on average during a specified time period or area) and an exponential distribution (the average rate of occurrence), and is also used in calibrating the yield curve models. Regardless of the use, lambda has to be a positive value.

Last Return This input is used in the exponentially weighted moving average (EWMA) volatility forecast, representing the last period's return; it can be periodic or annualized, and can take on positive or negative values. If entering a periodic return, make sure to set the Periodicity input in the EWMA function to 1 to obtain a periodic volatility forecast, or to the correct periodicity value to obtain the annualized volatility forecast. Conversely, if entering an annualized return, set periodicity equal to 1 to obtain the annualized volatility forecast.

Last Volatility This input is used in the EWMA volatility forecast, representing the last period's volatility; it can be periodic or annualized, and can take on only positive values. If entering a periodic volatility, make sure to set the Periodicity input in the EWMA function to 1 to obtain a periodic volatility forecast, or to the correct periodicity value to obtain the annualized volatility forecast. Conversely, if entering an annualized volatility, set periodicity equal to 1 to obtain the annualized volatility forecast.

Likely This is the most likely or mode value in a triangular distribution, and can take on any value, but has to be greater than or equal to the minimum and less than or equal to the maximum value inputs in the distribution.

Loan Value Ratio This is a positive percentage ratio of the amount of loan required to purchase a real estate investment to the value of the real estate.

Location This is the location parameter in the Pareto distribution, also used as the starting point or minimum of the distribution, and is sometimes also called the Beta parameter in the Pareto distribution; it can only take on a positive value.

Long Term Level This is the long-term level to which the underlying variable will revert in the long run; it is used in mean-reverting option models, where the underlying variable is stochastically changing but reverts to some long-term mean rate, which has to be a positive value.

Long Term Rate This is similar to the long-term level, but the parameter here is a percent interest rate, a long-term rate to which the underlying interest rate process reverts over time.

Lookback Length This input variable is used in a floating strike partial lookback option, where at expiration the payoff on the call option is being able to purchase the underlying asset at the minimum observed price from inception to the end of the lookback time. Conversely, the put will allow the option holder to sell at the maximum observed asset price from inception to the end of the lookback time.
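The Last Return and Last Volatility entries are the two inputs of the EWMA volatility forecast. Here is a minimal sketch of the one-step EWMA update, assuming the common RiskMetrics lambda of 0.94 (an assumption for illustration):

```python
from math import sqrt

lam = 0.94               # decay factor (RiskMetrics convention, assumed here)
last_volatility = 0.02   # last period's (e.g., daily) volatility
last_return = -0.015     # last period's return
periodicity = 252        # periods per year used to annualize

# EWMA variance update: blend last variance with last squared return.
ewma_var = lam * last_volatility**2 + (1 - lam) * last_return**2
periodic_vol = sqrt(ewma_var)
annualized_vol = periodic_vol * sqrt(periodicity)
print(f"periodic = {periodic_vol:.4f}, annualized = {annualized_vol:.2%}")
```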
Lookback Start This input variable is used in fixed strike lookback options, where the strike price is predetermined, such that at expiration the payoff on the call option is the maximum observed asset price less the strike price, observed between the Lookback Start period and the maturity of the option. Conversely, the put pays the strike price less the minimum observed asset price over the same window.

Lost Sales Cost This is the dollar or currency amount of a lost sale, typically zero or a positive value, and is used in the economic order quantity models to determine the optimal levels of inventory to hold or levels of production to have.

Lower Barrier This is the lower barrier stock price in a double barrier or graduated barrier option, where this barrier is typically lower than the existing stock price and lower than the upper barrier level; it must be a positive value.

Lower Delta This is the instantaneous option delta (a Greek sensitivity measure that can be computed using the B2CallDelta or B2PutDelta functions) of the percentage change in option value given an instantaneous change in the stock price, for the lower barrier stock price level. This value is typically set at zero or a positive value.

Lower Strike This is the lower strike price (a positive value) in a Supershare option, which is traded or embedded in supershare funds and is related to a Down and Out, Up and Out double barrier option, where the option has value only if the stock or asset price is between the upper and lower barriers; at expiration, it provides a payoff equivalent to the stock or asset price divided by the lower strike price.

Lower Value This input variable is used in the B2DT lattices for computing option adjusted spreads in debt with convertible or callable options, and represents the value that is one cell adjacent to the right and directly below the current value in a lattice. All values in a lattice, including this input, must be positive.

LSL This is the lower specification level of a Six Sigma measured process—that is, the prespecified value that is the lowest obtainable, or a value the process should not fall below.

Marginal Cost This is the additional dollar or currency cost to the bank or credit-granting institution of approving one extra credit application, and is used to determine if a credit should be approved; this parameter is typically a positive value.

Marginal Profit This is the additional dollar or currency profit to the bank or credit-granting institution of approving one extra credit application, and is used to determine if a credit should be approved; this parameter is typically a positive value.

Market Price Risk This input variable is used in mean-reverting option models as well as in the CIR, Merton, and Vasicek models of risky debt, where the underlying interest rate process is also assumed to be mean-reverting. The market price of risk is synonymous with the Sharpe ratio, or bang for the buck—that is, the expected return of a risky asset less the risk-free rate, all divided by the standard deviation of the excess returns.
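For the Market Price Risk entry above, here is a minimal sketch of the Sharpe-ratio arithmetic, with hypothetical figures:

```python
# Market price of risk (Sharpe ratio): excess return over the risk-free rate
# divided by the volatility of the excess returns. Figures are hypothetical.
expected_return = 0.11    # annualized expected return on the risky asset
risk_free = 0.04          # annualized risk-free rate
excess_volatility = 0.20  # annualized standard deviation of excess returns

sharpe = (expected_return - risk_free) / excess_volatility
print(f"market price of risk = {sharpe:.2f}")   # 0.35
```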
Market Return This is the positive percentage annualized expected rate of return on the market, where a typical index such as the Standard & Poor's 500 is used as a proxy for the market.

Market Volatility This input variable is the annualized volatility of a market index, used to model the probability of default for both public and private companies using an index, a group of comparables, or the market, assuming that the company's asset and debt book values are known, as well as the asset's annualized volatility. Based on this volatility and the correlation of the company's assets to the market, we can determine the probability of default.

Matrix A and Matrix B (Series) This is simply an n × m matrix where n and m can be any positive integers, and is used for matrix math and matrix manipulations.

Maturity This is the period until a certain contract, project, or option matures, measured in years, and has to be a positive value.

Maturity Bought This input variable is the maturity, measured in years (a positive value), of a call option that is bought in a Delta-Gamma hedge that provides a hedge against larger changes in the underlying stock or asset value. This is done by buying some equity shares and a call option, which are funded by borrowing some amount of money and selling a call option at a different strike price. The net amount is a zero sum game, making this hedge costless.

Maturity Extend This is the maturity, in years, of the extended maturity in a writer extendible option, and has to be a positive value.

Maturity Sold This input variable is the maturity, measured in years, of a call option that is sold in a Delta-Gamma hedge that provides a hedge against larger changes in the underlying stock or asset value. This is done by buying some equity shares and a call option, which are funded by borrowing some amount of money and selling a call option at a different strike price. The net amount is a zero sum game, making this hedge costless.

Maximum or Max This is the maximum value of a distribution (e.g., in a discrete uniform, triangular, or uniform distribution), indicating the highest attainable value; it can be positive or negative, and either integer (in the discrete uniform distribution) or continuous (in the triangular and uniform distributions).

Mean This is the arithmetic mean used in distributions (e.g., logistic, lognormal, and normal distributions) as well as the average level in a Six Sigma process. This value can be positive or negative depending on the distribution (it must be positive for the lognormal distribution, for example, but can be negative in the normal distribution), and is typically positive when applied in Six Sigma.

Mean Reverting Rate This is the rate of reversion of an underlying variable (typically interest rates, inflation rates, or some other commodity prices) to a long-run level. This parameter is either zero or positive, and the higher the value, the faster the variable's value reverts to the long-run mean. Use Risk Simulator's statistical analysis tool to determine this rate based on historical data.

Measurement Range (Series) In each sampling group in a Six Sigma process, several measurements are taken, and the range (maximum value less the minimum value) is determined. This experiment is replicated multiple times through various sampling groups. The measurement range is hence a series of values (one value for each statistical sampling or experiment subgroup) arranged in a column with multiple rows, where each row represents a group. The range has to be a positive value and is typically a positive integer, and the results are used to determine the central line, as well as the upper and lower control limits, for quality control charts in Six Sigma.
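For the Measurement Range series above, here is a minimal sketch of the X-bar and R control limits; A2, D3, and D4 are the textbook Shewhart constants for subgroups of five measurements (an assumption here, and the data are hypothetical):

```python
import numpy as np

# One mean and one range per sampling subgroup (hypothetical data).
subgroup_means = np.array([10.2, 9.9, 10.1, 10.0, 10.3, 9.8])
subgroup_ranges = np.array([0.5, 0.4, 0.6, 0.3, 0.5, 0.4])
A2, D3, D4 = 0.577, 0.0, 2.114   # standard constants for subgroup size n = 5

x_bar, r_bar = subgroup_means.mean(), subgroup_ranges.mean()
print(f"X-bar chart: CL={x_bar:.2f}, UCL={x_bar + A2*r_bar:.2f}, "
      f"LCL={x_bar - A2*r_bar:.2f}")
print(f"R chart: CL={r_bar:.2f}, UCL={D4*r_bar:.2f}, LCL={D3*r_bar:.2f}")
```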
Minimum or Min This is the minimum value of a distribution (e.g., in a discrete uniform, triangular, or uniform distribution), indicating the lowest attainable value; it can be positive or negative, and either integer (in the discrete uniform distribution) or continuous (in the triangular and uniform distributions).

MV Debt This is the market value of risky debt, and can be priced using the Asset-Equity Parity models, using book values of debt and equity and applying the equity volatility in the market. Typically, this value differs from the book value of debt, depending on the market volatility and internal asset values, but is always zero or a positive value.

MV Equity This is the total market value of equity, computed by multiplying the number of outstanding shares by the market price of a share of the company's stock, and is a positive value.

MV Preferred Equity This is the total market value of preferred equity, computed by multiplying the number of outstanding shares by the market price of a share of the company's preferred stock, and is a positive value.

Net Fixed Asset This is the total net fixed assets (gross fixed long-term assets less any accumulated depreciation), and is a positive value, obtained from a company's balance sheet.

Net Income This is the net income after taxes, in dollar or currency amounts, and can be either positive or negative.

New Debt Issue This is the amount of new debt issued to raise additional capital, and is either zero or positive.

Nominal CF This is the nominal cash flow amount, including inflation, and can be negative or positive. Nominal cash flow is the real cash flow level plus inflation adjustments.

Nominal Rate This is the quoted or nominal interest rate, which is equivalent to the real rate of interest plus the inflation rate, and as such is typically higher than either the real interest rate or the inflation rate; it must be a positive value.

Nonpayment Probability This is the probability that a debt holder will be unable to make a payment and will default a single time. Sometimes the probability of default can be used in its place, but in most cases the single nonpayment probability is higher than the complete default probability.

NOPAT Net operating profits after taxes (NOPAT) is typically computed as net revenues less any operating expenses and less applicable taxes, making this value typically higher than net income, which also accounts for items such as depreciation and interest payments. This parameter can be positive or negative.
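For the Nominal Rate entry above, here is a minimal sketch showing both the additive form this Glossary uses and the exact compounded (Fisher) version, with hypothetical rates:

```python
# Nominal rate from a real rate and inflation. The Glossary's additive form is
# an approximation; the exact relation compounds the two.
real_rate = 0.03
inflation = 0.02

nominal_approx = real_rate + inflation                   # 5.00%
nominal_exact = (1 + real_rate) * (1 + inflation) - 1    # 5.06%
print(f"approximate = {nominal_approx:.2%}, exact = {nominal_exact:.2%}")
```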
Notes or Notes Payable The amount in dollars or currency for notes payable, a form of short-term current liability, is typically zero or a positive value.

Notional This is a positive dollar amount indicating the underlying contractual amount (e.g., in a swap).

Observed Max This is the observed maximum stock price in the past for a lookback Asian option, and this parameter has to be a positive amount larger than the observed minimum value.

Observed Min This is the observed minimum stock price in the past for a lookback Asian option, and this parameter has to be a positive amount smaller than the observed maximum value.

Old Value This is the previous period's value or old value, used in computing the S-curve forecast, and must be a positive value.

Operating Expenses The dollar or currency amount of total operating expenses (other than direct expenses or cost of goods sold, but including items like sales and general administrative expenses) has to be a positive value.

Option Maturity This is the maturity of an option measured in years, and has to be a positive value; the longer the maturity, holding everything else constant, the higher the value of the option.

Option Strike This is the contractual strike price of an option measured in dollars or currency levels, and has to be a positive value. Holding everything else constant, a higher strike price means a lower call option value and a higher put option value.

Option Value This is the value of an option, and has to be either zero or a positive value. The option value is never negative, and can be computed through a variety of methods including closed-form models (e.g., Black-Scholes and American approximation models); lattices (binomial, trinomial, quadranomial, and pentanomial lattices); simulation; and analytical techniques (variance reduction, finite differences, and iterative processes).

Other Assets The value of any short-term indirect or intangible assets is usually a zero or positive value.

Payables The amount in dollars or currency values for accounts payable, a form of short-term current liability, is typically zero or a positive value.

Payment Probability This is used to compute the cost of rejecting a good credit, by accounting for the chance that payment will be received each time it is due, and is a positive percentage value between 0% and 100%.

Percentile This parameter has to be a positive value between 0% and 100%, and is used in Value at Risk computations and implied volatility computations. In VaR analysis, this value is typically 95%, 99%, or 99.9%, whereas it has to be lower than 50% for the worst-case scenario volatility model and higher than 50% for the best-case scenario volatility model.
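The Percentile entry and the earlier Horizon Days entry combine in a parametric Value at Risk computation. Here is a minimal sketch under a normality assumption, with hypothetical portfolio figures:

```python
from math import sqrt
from scipy.stats import norm

portfolio_value = 10_000_000   # hypothetical portfolio size
daily_volatility = 0.012       # hypothetical daily return volatility
percentile = 0.99              # typically 95%, 99%, or 99.9%
horizon_days = 10              # typically between 1 and 10 days

# Scale daily volatility to the holding horizon and apply the normal quantile.
z = norm.ppf(percentile)
var = portfolio_value * daily_volatility * sqrt(horizon_days) * z
print(f"{percentile:.0%} {horizon_days}-day VaR = {var:,.0f}")
```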
Periodicity Periodicity in the context of barrier options means how often during the life of the option the asset or stock value will be monitored to see if it breaches a barrier. As an example, entering 1 means annual monitoring; 12 implies monthly monitoring; 52, weekly; 252, daily trading; 365, daily calendar; and 1,000,000, continuous monitoring. In the application of GARCH volatility forecasts, if weekly stock price data is used, enter 52 for periodicity (250 for the number of trading days per year if daily data is used, and 12 for monthly data). Regardless of the application, this parameter is a positive integer.

Periodic Rate This is the interest rate per period, and is used to compute the implied rate of return on an annuity; this value has to be a positive percent.

Periods This refers to a positive integer value representing the number of payment periods in an annuity, and is used to compute the equivalent annuity payment based on the periodic rate.

Population This is used in the hypergeometric discrete distribution, indicating the population size. Clearly this positive integer value has to be larger than the population successes and is at least 2. The total number of items or elements (the population size) is a fixed number, a finite population; the population size must be less than or equal to 1,750; the sample size (the number of trials) represents a portion of the population; and the known initial probability of success in the population changes after each trial.

Population Success or Pop Success This is used in the hypergeometric discrete distribution, indicating the number of successes of a trait in a population. Clearly this positive integer value has to be smaller than the population size. The hypergeometric distribution is a distribution where the actual trials change the probability for each subsequent trial; these are called trials without replacement. For example, suppose a box of manufactured parts is known to contain some defective parts. You choose a part from the box, find it is defective, and remove the part from the box. If you choose another part from the box, the probability that it is defective is somewhat lower than for the first part because you have removed a defective part. If you had replaced the defective part, the probabilities would have remained the same, and the process would have satisfied the conditions for a binomial distribution. The total number of items or elements (the population size) is a fixed number, a finite population; the population size must be less than or equal to 1,750; the sample size (the number of trials) represents a portion of the population; and the known initial probability of success in the population changes after each trial.

PPE This is the dollar or currency value of plant, property, and equipment, and is either zero or positive.

Preferred Dividend This is the dollar or currency amount of total dividends paid to preferred stocks (dividends per share multiplied by the number of outstanding shares), and is a positive value.

Preferred Stock This is the price of a preferred stock per share multiplied by the number of preferred shares outstanding, and has to be a positive value.

Previous Value This is the value of some variable in the previous period, used in forecasting time-series data. This has to be a positive value.
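For the Population and Population Success entries above, here is a minimal sketch of the sampling-without-replacement example using the hypergeometric distribution, with hypothetical counts:

```python
from scipy.stats import hypergeom

# Defective-parts example: draw from a finite box without replacement.
population = 50       # total parts in the box (must not exceed 1,750 per the text)
pop_success = 5       # defective parts in the box
sample_size = 10      # parts drawn without replacement

dist = hypergeom(population, pop_success, sample_size)
print(f"P(exactly 1 defective in the sample) = {dist.pmf(1):.4f}")
```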
Price and CY Correlation This is the correlation between bond price returns and convenience yields, used in the computation of commodity options, and can take on any value between –1 and +1, inclusive.

Price and Forward Correlation This is the correlation between bond price returns and future price returns, used in the computation of commodity options, and can take on any value between –1 and +1, inclusive.

Price Improvement This is the percentage of the price of a real estate property that went to improvements, and is used to compute the depreciation on the property.

Price Lattice This is the price lattice of an interest-based derivative (e.g., a bond option) where the underlying is the term structure of interest rates with its own volatilities.

Principal Repaid This is the dollar or currency amount indicating the value of the debt principal repaid, and is used to compute the adjusted cash flow to equity of a levered firm.

Probability This is a probability value between 0% and 100%, used in the inverse cumulative distribution function (ICDF) of any distribution, which, given a probability level and the relevant distributional parameters, returns the X value of the distribution. For instance, in tossing a coin two times, using the binomial distribution (trials is set to 2 and the probability of success, in this case obtaining heads in the coin toss, is set to 50%), the ICDF of a 25% probability parameter will return an X value of 0. That is, the probability of getting no heads (X of zero) is exactly 25%.

Profit Margin This is the percentage of net income to total sales, and is typically a positive value, although zero and negative values are possible.

Proportion This is the proportion of defects in a Six Sigma model, used to determine the requisite sample size needed to reach the desired Type I and Type II errors; this value is between 0 and 1, inclusive.

Put Maturity This is the maturity of the put option, measured in years, and this parameter is a positive value.

Put Strike This is the contractual strike price for the put option, and has to be a positive value. Sometimes this variable has different suffixes (e.g., Put Strike Sell Low, Put Strike Buy High, and so forth) whenever there might be more than one put option in the portfolio of option strategies; these suffixes indicate whether this particular put is bought or sold, and whether the strike price is higher or lower than that of the other put option.

Put Value This is the fair market value of the put option, and sometimes the theoretical price of a put option is used in its place when market information is unavailable. This parameter requires a positive input. Sometimes this variable has different suffixes (e.g., Put Value Sell Low, Put Value Buy High, and so forth) whenever there might be more than one put option in the portfolio of option strategies; these suffixes indicate whether this particular put is bought or sold, and whether the premium paid for this put option or the option value is higher or lower than that of the other put option.
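The Probability entry's coin-toss example can be reproduced directly. Here is a minimal sketch of the binomial ICDF:

```python
from scipy.stats import binom

# Two coin tosses with a 50% chance of heads; the 25% probability level
# maps back to X = 0, since P(no heads) is exactly 25%.
x = binom.ppf(0.25, n=2, p=0.5)
print(x)   # 0.0
```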
PV Asset or Present Value of the Asset This is the ubiquitous input in all real options models, and is the sum of the present values of all net benefits from a real options project or its underlying asset. Sometimes the net present value is used as a proxy, but typically the implementation cost is separated from the PV Asset value, such that PV Asset less any implementation cost, if executed immediately, equals the net present value of the project. The PV Asset input has to be a positive value.

Quantities (Series) This is a series of positive integers indicating the number of a specific class of options in a portfolio, used to compute the Value at Risk of a portfolio of options; these values are typically arranged in a column with multiple rows.

Quantity 1 and Quantity 2 These are positive integers indicating the amount of the first asset that is exchanged for the second asset in an asset exchange option with two correlated underlying assets.

Random This value replaces the Probability value when used to obtain the inverse cumulative distribution function (ICDF) of a probability distribution for the purposes of running a simulation. This variable is between 0 and 1, inclusive, and is from a continuous uniform distribution. By choosing a random value between 0 and 1 with equal probability of any continuous value between these two numbers, we obtain a probability value between 0% and 100%, and when mapped against the ICDF of a specific distribution, it will return the relevant X value from that distribution. When repeated multiple times, this yields a simulation of multiple trials or outcomes from that specific distribution. You can use Excel's RAND() function for this input.

Rate of Return This is the annualized percentage required rate of return on equity, used to compute the price-to-earnings ratio.

Real Cash Flow This is the real cash flow level after adjusting for and deducting inflation. Specifically, the real cash flow plus inflation is the nominal cash flow.

Real Rate This is the real rate of return or real interest rate after inflation adjustments; in other words, the real rate of return plus the inflation rate is the nominal rate of return.

Receivables The dollar or currency amount of accounts receivable, a short-term or current asset from the balance sheet, is usually a positive value or zero.

Recovery Period This is the recovery period, in number of years, for determining the depreciation of real estate investments.

Recovery Rate This is the rate of recovery used to determine the credit risk shortfall—that is, of the credit that defaults, the proportion that is recoverable.

Remaining Time This is the amount of time remaining, in years, in an Asian option model.

Return on Asset This is the return on a project or an asset, computed by taking net income after taxes and dividing it by total assets; this parameter value can be positive or negative.
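For the Random entry above, here is a minimal sketch of inverse-transform sampling: uniform draws on [0, 1] mapped through a distribution's ICDF simulate outcomes from that distribution (the normal distribution below is a hypothetical choice):

```python
import numpy as np
from scipy.stats import norm

# Uniform random values play the role of the Random input (Excel's RAND()).
rng = np.random.default_rng(42)
uniforms = rng.uniform(0.0, 1.0, size=10_000)

# Mapping each uniform through the ICDF simulates draws from N(100, 15).
samples = norm.ppf(uniforms, loc=100, scale=15)
print(f"mean = {samples.mean():.1f}, sd = {samples.std(ddof=1):.1f}")
```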
Returns (Series) These are the percentage returns on various assets in a portfolio, arranged in a column with multiple rows; they can be both negative and positive, and are used to compute the portfolio's weighted average returns.

Revenues This is the dollar or currency amount of net revenues per year.
Risk-Free Rate and Risk-Free 0 This is the annualized risk-free rate of government securities comparable in maturity to the underlying asset under analysis (e.g., the risk-free rate with the same maturity as the option), and has to be positive. Risk-Free 0 is the default variable for a changing risk-free rate option model, where if the risk-free series is left blank, this single rate is used throughout the maturity of the option.

ROIC This is the return on invested capital (ROIC), and can be computed using the B2RatiosROIC function, using net operating profit after taxes, working capital, and assets used. This value can be negative or positive.

Row This is the row number in a lattice, and starts from 0 at the top or first row.
Sales This is the annual total sales of the company in dollar or currency values, and is a positive number. Sales Growth is a related variable measuring the percentage change in sales between two periods, whereas Sales Increase is the same difference denominated in currency amounts.

Salvage This is the positive salvage value, in dollars or currency, received when an option is abandoned; the holder of the abandonment option will receive this amount.

Sample Size This is the positive integer value of the sample size in each subgroup, used in the computation of a Six Sigma quality control chart and its control limits.

Savings This is the positive dollar or currency value of savings when the option to contract is executed—that is, the amount of money saved.

Second Variable This is the second underlying variable used in a pentanomial lattice, where the underlying asset lattice is the product of the first and second variables; this input parameter has to be positive.

Service Rate This parameter measures the average rate of service per period (typically per day or per hour)—that is, on average, how many people will be serviced in a queue in a period (e.g., per hour or per day). This value has to be positive.

Shape This is the second input assumption in the Pareto distribution, determining the shape of the distribution, and is a positive value.

Share Price or Equity Price This is the current share or stock price per share at the time of valuation, used in a variety of options models, and has to be a positive dollar or currency value.

Shares This is the number of outstanding shares of a stock, and is a positive integer.
Sigma This is the standard deviation measuring variation within a process, used in Six Sigma quality control models. This parameter has to be a positive value.
Sigma Service Rate This is the standard deviation measuring variation within the service rate, used in Six Sigma process and quality control models. This value has to be positive.

Single Interest This is the interest rate used in computing a bond's convexity and duration models, the second- and first-level sensitivities, respectively. This input parameter has to be a positive value.

Single Period This is the period, in years or months, used to interpolate a missing value within a range of values; it is applied in the B2LinearInterpolation model (used together with the Time Periods series and the corresponding Values series).

Skewness This is the third moment or measure of skew in a distribution. This input parameter is used in an Alternate Distribution option model, where the underlying distribution of the asset returns is assumed to be skewed and to have some kurtosis. This value can be either positive or negative.

S Max This is the observed maximum stock price in the past in an extreme spread option, where such options have their maturities divided into two segments, starting from time zero to the First Time Period (first segment) and from the First Time Period to Maturity (second segment). An extreme spread call option pays the difference between the maximum asset value of the second segment and the maximum value of the first segment. Conversely, the put pays the difference between the minimum of the second segment's asset value and the minimum of the first segment's asset value. A reverse call pays the minimum from the first segment less the minimum of the second segment, whereas a reverse put pays the maximum of the first segment less the maximum of the second segment. This variable is the observed maximum stock value in the observable past.

S Min This is the observed minimum stock price in the past in an extreme spread option, similar to the S Max variable described previously.

Spot FX Rate This is the input in a currency option that is the current or spot exchange rate, computed as the ratio of the domestic currency to the foreign currency; it has to be a positive value.

Spot Price The spot price is the same as the existing or current stock price, and is a positive value. We use this definition to differentiate the spot price from average or future price levels, and this parameter has to be positive.

Spot Rate, Spot Rate 1, and Spot Rate 2 This is the input in an exotic currency forward option that is the current or spot interest rate, and has to be a positive value.

Spot Volatility This is the annualized volatility of the commodity option's spot price returns, as measured by the zero bond price level, and this value has to be positive.

Spread Certain types of debt come with an option-embedded provision; for instance, a bond might be callable if the market price exceeds a certain value (when prevailing interest rates drop, making it more profitable for the issuing company to call the debt and reissue new bonds at the lower rate), or there may be a prepayment allowance on mortgages or lines of credit and debt. This input is the option adjusted spread (i.e., the additional premium that should be charged on the option provision). This value is computed using an optimization or internal search algorithm.
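For the Spread entry above, here is a minimal sketch of the kind of internal search the text describes: a root search for the constant spread over the base yields that reprices a bond to its observed market price. All cash flows, yields, and prices are hypothetical:

```python
from scipy.optimize import brentq

cash_flows = [60.0, 60.0, 1_060.0]      # annual coupons plus redemption
base_yields = [0.045, 0.048, 0.050]     # base spot yields per period
market_price = 1_005.0                  # observed bond price

def price_given_spread(spread):
    # Discount each cash flow at its base yield plus the candidate spread.
    return sum(cf / (1 + y + spread) ** t
               for t, (cf, y) in enumerate(zip(cash_flows, base_yields), start=1))

# Search for the spread that makes the model price match the market price.
spread = brentq(lambda s: price_given_spread(s) - market_price, -0.05, 0.20)
print(f"implied spread = {spread:.4%}")
```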
Standard Deviation The standard deviation or sigma is the second moment of a distribution, and can be defined as the average dispersion of all values about the central mean; it is an input into the normal distribution. The higher the sigma level, the wider the spread and the higher the risk or uncertainty. When applied as a normal distribution's parameter, it is the standard deviation of the population and has to be a positive value (there is no point in using a normal distribution with a sigma of zero, which is nothing but a single point estimate, where all points in the distribution fall exactly at the mean, generating a vertical line).

Standard Deviation of Demand This is the measure of the variability of demand as used in the determination of economic order quantity, and this value is either zero or positive.

Standard Deviation of Lead Time This is the measure of the variability of the lead time it takes to obtain the inventory or product after it is ordered, as used in the determination of economic order quantity, and this value is either zero or positive.

Starting Plot This variable is used in the options trading strategies (e.g., straddles, strangles, bull spreads, and so forth), representing the first value to plot for the terminal stock price (the x-axis on an option payoff chart); it has to be lower than the Ending Plot value, and is a positive input.

Steps This is a positive integer value (at least 5, and typically between 100 and 1,000) denoting the total number of steps in a lattice; the higher the number of steps, the higher the level of precision but the longer the computational time.

Stock This is the current stock price per share at the time of valuation, used in a variety of options models, and has to be a positive dollar or currency value.

Stock Index This is the stock index level, measured at the time of valuation, and must be a positive value; it is used in index options computations.

Stock Prices (Series) This is a list of stock prices over time, as used in the GARCH volatility model (B2GARCH) or the computation of the Sharpe ratio (B2SharpeRatio), listed in chronological order (e.g., Jan, Feb, Mar, and so forth) in a single column with multiple rows; contrast this with stock prices at valuation dates for various options in a portfolio, used to compute the portfolio's Value at Risk (B2VarOptions).

Stock Volatility This is the same as Equity Volatility or simply Volatility described in this Glossary (and used interchangeably), but this definition is used when multiple volatilities are required in a model, in order to reduce any confusion.

Strike, Strike 1, and Strike 2 The strike price in an option is the contractually prespecified price at which the underlying asset (typically a stock) can be bought (call) or sold (put). Holding everything else constant, a higher (lower) strike price means a lower (higher) call option value and a higher (lower) put option value. This input parameter has to be a positive value, and in some rare cases it can be set very close to zero for a costless strike option. Strike 1 and Strike 2 are used when referring to exotic option inputs with two underlying assets (e.g., exchange options or a 3D binomial model).
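The Steps entry, together with the earlier DT definition, calibrates a lattice. Here is a minimal sketch of one common calibration, the Cox-Ross-Rubinstein convention (an assumption here; the asymmetrical-state models described under the Up and Down entries calibrate their step sizes differently):

```python
from math import exp, sqrt

# A 10-year option on a 100-step lattice gives DT = 0.1, as in the DT entry.
maturity, steps = 10.0, 100
volatility, risk_free = 0.25, 0.05   # hypothetical inputs

dt = maturity / steps                 # time elapsing per lattice step
up = exp(volatility * sqrt(dt))       # up step size (greater than 1)
down = 1.0 / up                       # down step size (between 0 and 1)
p = (exp(risk_free * dt) - down) / (up - down)   # risk-neutral probability
print(f"DT={dt}, up={up:.4f}, down={down:.4f}, p={p:.4f}")
```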
Strike Bought This is the positive dollar or currency strike price of an option (usually a call) purchased in a Delta-Gamma hedge that provides a hedge against larger changes in the underlying stock or asset value. This is done by buying some equity shares and a call option, which are funded by borrowing some amount of money and selling a call option at a different strike price.

Strike Extend This is the positive value of the new strike price in a writer extendible option, which acts as an insurance policy in case the option becomes worthless at maturity. Specifically, the call or put option can be automatically extended beyond the initial maturity date to an extended date with a new extended strike price, assuming that at maturity the option is out of the money and worthless. This extendibility provides a safety net of time for the holder of the option.

Strike FX Rate This is the positive dollar or currency value of the contractual strike price denominated in exchange rates (domestic currency to foreign currency) for a foreign exchange option.

Strike Rate This is the positive percentage value of the contractual strike price in a swaption (option to swap) or a futures option.

Strike Sold This is the positive dollar or currency strike price of an option (usually a call) sold in a Delta-Gamma hedge that provides a hedge against larger changes in the underlying stock or asset value. This is done by buying some equity shares and a call option, which are funded by borrowing some amount of money and selling a call option at a different strike price.

Successes This is the number of successes in the negative binomial distribution, which is useful for modeling the distribution of the number of additional trials required on top of the number of successful occurrences required. For instance, in order to close a total of 10 sales opportunities, how many extra sales calls would you need to make above 10 calls, given some probability of success in each call? The x-axis of the distribution shows the number of additional calls required, or the number of failed calls. The number of trials is not fixed; the trials continue until the required number of successes is reached, and the probability of success is the same from trial to trial. The successes input parameter has to be a positive integer less than 8,000.

Success Probability This is a probability percent, between 0% and 100% inclusive, for the probability of an event occurring, and is used in various discrete probability distributions such as the binomial distribution.

Tails This is the number of tails in a distribution for hypothesis testing, as applied in Six Sigma models to determine the adequate sample size for specific Type I and Type II errors. This parameter can only be either 1 or 2.

Tax Rate This is the corporate tax rate in percent, and has to be a positive value.

Tenure This is the maturity of a swaption (option to swap).
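For the Successes entry above, here is a minimal sketch of the sales-call example using the negative binomial distribution, with a hypothetical 25% per-call success probability:

```python
from scipy.stats import nbinom

# Number of failed calls before the 10th successful sale.
successes, p = 10, 0.25
dist = nbinom(successes, p)

print(f"P(exactly 20 failed calls) = {dist.pmf(20):.4f}")
print(f"expected extra calls = {dist.mean():.0f}")   # n*(1-p)/p = 30
```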
This Category This is the category index number (a positive integer—1, 2, 3, and so forth), used to compute the relative width of the credit rating table.
Time, Time 1, and Time 2 The Time variable is in years (a positive value) and indicates the specific time period at which to forecast the interest rate level using the various yield curve models, whereas Time 1 and Time 2 are the years of two different spot rates, used to impute the forward rate between these two periods.

Time Interval or DT This is the positive time step input used in a time switch option, where the holder of the option receives the Accumulated Amount × Time Steps each time the asset price exceeds the strike price for a call option (or falls below the strike price for a put option). The time step is how often the asset price is checked as to whether the strike threshold has been breached (typically, for a one-year option with 252 trading days, set DT as 1/252).

Time Periods (Series) This is a series of positive time periods in years, arranged in a column with multiple rows, concurrent with another column of values, so that any missing values within the range of the time periods can be interpolated using the B2LinearInterpolation and B2CubicSpline models. The time periods do not have to be linearly and sequentially increasing.

Timing (Series) This is a series of positive time periods in years, arranged in a column with multiple rows, concurrent with another column of cash flows, so that the present value or price of a bond or some other present value computation can be done. Typically, the timing in years is linearly increasing.

Total Asset This is the total assets in a company, including all short-term and long-term assets, and can be determined from the company's balance sheets. Typically, this parameter is a positive value, and is used in financial ratios analysis.

Total Capital This is the total dollar or currency amount of capital invested, used to compute the economic value added of a project.

Total Category This is a positive integer value determining the number of credit rating categories required (e.g., AAA, AA, A, and so forth). Typically, this value is between 3 and 12.

Total Debt This is the total debt in a company, including all short-term and long-term debt, and can be determined from the company's balance sheets. Typically, this parameter is zero or a positive value, and is used in financial ratios analysis.

Total Equity or Equity Value This is the total common equity in a company, and can be determined from the company's balance sheets. Typically, this parameter is zero or a positive value.

Total Liability This is the total liabilities in a company, including all short-term and long-term liabilities, and can be determined from the company's balance sheets. Typically, this parameter is zero or a positive value, and is used in financial ratios analysis.

Trading Ratio This is the number of trading days left until maturity divided by the number of trading days in a year (typically around 250 days), and is used to compute the plain-vanilla option value after adjusting for the number of trading days left; it is typically a positive value.

Trials This value is used in several places. For a probability distribution, it denotes the number of trials or events (e.g., in a binomial distribution where a coin is tossed 10 times, the number of trials in this case is 10); elsewhere it denotes the number of simulation trials and iterations to complete in order to compute the value of an option using the simulation approach. Regardless, this parameter has to be a positive integer.
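For the Time 1 and Time 2 entry above, here is a minimal sketch of imputing the forward rate between two spot rates, assuming annual compounding (hypothetical rates):

```python
# Forward rate between years t1 and t2 implied by their spot rates,
# under annual compounding.
t1, t2 = 2.0, 5.0
r1, r2 = 0.040, 0.048    # spot rates for maturities t1 and t2

forward = ((1 + r2) ** t2 / (1 + r1) ** t1) ** (1 / (t2 - t1)) - 1
print(f"implied forward rate between years {t1:.0f} and {t2:.0f} = {forward:.3%}")
```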
Units This is the positive integer value denoting the number of units sampled in a Six Sigma quality control study, to determine the number of defects and the proportion of defects.

Units Fulfilled This zero or positive integer input variable is used in the Time Switch option model, where the holder receives the Accumulated Amount × Time Steps each time the asset price exceeds the strike price for a call option within the maturity period (or falls below the strike price for a put option). Sometimes the option has already accumulated past amounts (or, as agreed to in the option, a minimum guaranteed payment), as measured by the number of time units fulfilled (which is typically set at zero).

Unlevered Cost of Equity This is the cost of equity in an unlevered firm with no debt, used to compute the weighted average cost of capital for a company, and has to be a positive value.

Up This is the up step size used in an asymmetrical state option pricing model, and needs to be a value greater than 1. This value should be carefully calibrated to the option's maturity and the number of lattice steps, to denote the up step size per lattice step.

Upper Barrier This is the upper barrier stock price in a double barrier or graduated barrier option, where this barrier is typically higher than the existing stock price and higher than the lower barrier level; it must be a positive value.

Upper Delta This is the instantaneous option delta (a Greek sensitivity measure that can be computed using the B2CallDelta or B2PutDelta functions) of the percentage change in option value given an instantaneous change in the stock price, for the upper barrier stock price level. This value is typically set at zero or a positive value.

Upper Strike This is the upper strike price (a positive value) in a Supershare option, which is traded or embedded in supershare funds and is related to a Down and Out, Up and Out double barrier option, where the option has value only if the stock or asset price is between the upper and lower barriers, and at expiration provides a payoff equivalent to the stock or asset price divided by the lower strike price.

Upper Value This input variable is used in the B2DT lattices for computing option adjusted spreads in debt with convertible or callable options, and represents the value that is one cell adjacent to the right and directly above the current value in a lattice. All values in a lattice, including this input, must be positive.

USL This is the upper specification level of a Six Sigma measured process—that is, the prespecified value that is the highest obtainable, or a value that the process should not exceed.

Vacancy Factor and Collection Factor This is the percentage (between 0% and 100%) of full occupancy represented by vacancies or noncollectible rent, and is used in the valuation of real estate properties.
Values (Series)  This is a series of values or numbers, either negative or positive, arranged in a column with multiple rows, to be used in concert with the Time Periods variable, where any missing values can be interpolated and internally fitted to a linear model. As an example, suppose the following series of time periods and values exists (Time 1 = 10, Time 2 = 20, Time 5 = 50); we can then use the B2LinearInterpolation and B2CubicSpline models to determine the missing value(s).

Vesting Year  This is the number of years or partial years during which the option is still in the vesting period and cannot be exercised. This vesting period can range from zero to the maturity of the option (the former being a no-vesting American option, whereas the latter reverts to a European option); if the value is somewhere in between, the option becomes a Bermudan option with blackout and vesting periods.

Volatilities (Series)  This is a series of annualized volatilities (see the definition of Volatility for more details) arranged in a row with multiple columns going across, for use in the valuation of risky debt and callable bonds or bond spreads. Each value in the series must be positive.

Volatility  This is the annualized volatility of equity or stock prices; it has to be a positive value and can be computed in various ways, for example, exponentially weighted moving average (EWMA), generalized autoregressive conditional heteroskedasticity (GARCH), logarithmic relative returns, and so forth. Review the volatility examples and models in the Modeling Toolkit for details on these methodologies.

Volatility 0, 1, 2  These volatility variables are computed exactly as discussed in the Volatility definition; the difference is that Volatility 0 is the default volatility used in a customized option model with changing volatilities (that is, if the changing volatilities input is left empty, Volatility 0 will be used as the single repeated volatility in the model), whereas Volatility 1 and Volatility 2 are the volatilities of the first and second underlying assets in a multiple asset option model. These values have to be positive.

Volatility FX or Volatility Foreign Exchange Rate  This is the annualized volatility of foreign exchange rates (see the Volatility definition for the applicable estimation methods), and this value has to be positive.

Volatility Ratio  This variable is used in the Merton jump-diffusion models, where the ratio is the percentage of volatility that can be explained by the jumps; it is typically a positive value not exceeding 1.

WACC  The weighted average cost of capital (WACC) is the average cost of capital from common equity, debt (after tax), and preferred equity, each weighted by the amount of capital obtained from that source. It has to be a positive value, and when used in perpetual firm continuity values with growth rates, WACC has to be greater than the growth rate parameter.

Warrants  This is a positive integer denoting the total number of warrants issued by the company.

Working Capital  This is also known as the net working capital of a company; it can be determined from the company’s balance sheet and is typically a positive dollar or currency value (zero is a rare but possible occurrence).
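The Values (Series) example above can be completed by hand; the following minimal Python sketch (an illustration of the linear case only; B2LinearInterpolation and B2CubicSpline are the Toolkit’s prebuilt equivalents) imputes the missing values at Times 3 and 4.

def linear_interpolation(times, values, t):
    # Impute the value at time t from the two known points that bracket it.
    points = list(zip(times, values))
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    raise ValueError("t lies outside the range of known time periods")

times, values = [1, 2, 5], [10, 20, 50]
print(linear_interpolation(times, values, 3))  # 30.0
print(linear_interpolation(times, values, 4))  # 40.0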
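As one illustration of the Volatility definition above, the logarithmic relative returns approach can be sketched as follows; this is a simplified stand-in for the Toolkit’s prebuilt volatility models (EWMA and GARCH are alternative methods).

from math import log, sqrt

def annualized_volatility(prices, periods_per_year=252):
    # Sample standard deviation of log(P_t / P_t-1), scaled to an annual basis.
    rets = [log(b / a) for a, b in zip(prices, prices[1:])]
    mean = sum(rets) / len(rets)
    var = sum((r - mean) ** 2 for r in rets) / (len(rets) - 1)
    return sqrt(var) * sqrt(periods_per_year)

print(annualized_volatility([100, 102, 101, 103, 104]))  # daily closes, annualized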
Worst Case  This is the worst-case scenario’s dollar or currency value of a project or asset within a one-year time frame, and it is used in the implied volatility (volatility to probability) estimation. When used together with the Best Case and Expected Value input parameters, this worst-case value has to be less than both of those inputs.

X  This is the ubiquitous random variable X, used in multiple locations. When used in probability distributions, it denotes the X value on the x-axis of the probability distribution, that is, the specific outcome of the distribution (e.g., in tossing a coin 10 times, where the probability of getting heads is 50%, we can compute the exact probability of getting exactly four heads, in which case X = 4). X is typically a positive value (continuous values in continuous distributions, and discrete positive values, including zero, in discrete probability distributions).

Z1 and Z2  These are the standard normal z-scores used in a bivariate normal distribution. These values can be either negative or positive.

Zero Bond Price  This is the price of a zero coupon bond, used in the valuation of callable and risky debt and for pricing commodity options; this parameter has to be a positive value.

Zero Yields  This is the yield of a zero coupon bond, used in the valuation of callable and risky debt; this parameter has to be a positive value.
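The coin-toss example in the X definition can be verified directly with the standard binomial probability computation sketched below (shown only to make the example concrete).

from math import comb

def binomial_probability(x, n, p):
    # P(X = x) in n independent trials, each with success probability p.
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

print(binomial_probability(4, 10, 0.5))  # ~0.2051: exactly four heads in ten tosses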
About the DVD
This DVD contains:

1. Risk Simulator trial version (30-day trial license and 10-day pre-installed license).
2. Real Options SLS trial version (30-day license information provided below).
3. Employee Stock Options Valuation Toolkit (3-day license information provided below).
4. Modeling Toolkit trial version (30-day license information provided below).
5. Sample getting started modeling videos viewable from your personal computer.
6. Sample case studies and models.
7. Several brochures and manuals on our Certified in Risk Management (CRM designation) program, Training DVD set, live training seminars, software, and other pertinent information.

Also, do visit our web site at www.realoptionsvaluation.com to obtain the latest free Excel models, training schedules, case studies, and so forth. Please follow the instructions below to run the trial software programs.
MODELING VIDEOS

If you have an older version of Windows Media Player, you might hear only voices without any video when running some of the enclosed sample video clips. To fix this problem, simply install the "Video CODEC.exe" file located in the DVD's "Required Software" folder. Restart the videos and they should work.
RISK SIMULATOR (INTERNATIONAL VERSION: ENGLISH, CHINESE, JAPANESE, SPANISH)

FULL & TRIAL VERSION DOWNLOAD: You can download the Risk Simulator (RS) software from the web site to obtain the latest version, or use the version on your DVD by running the file Software Install—Risk Simulator.exe. This is a full version of the software but will expire in 10 days, during which you can purchase a license to permanently unlock the software. If you are using Windows Vista and Excel 2007, please see the Windows Vista note below before installing the software or the software license. In addition, there is an extended 30-day trial license on the DVD; see the instructions below for details on how to license the software. To obtain the extended trial period, install RS and then start Excel, click on Risk Simulator | License | Install License, and point to the location of the extension license file in the DVD folder "Instructions and Licenses"; restart Excel and you are now
temporarily licensed. If you are running Windows Vista, please see the note below before installing the license. To permanently unlock the software, install RS and then purchase a license from the web site and e-mail us your Hardware ID (after installing the software, start Excel, click on Risk Simulator, License, and e-mail
[email protected] the 11- to 20-digit Hardware ID located on the bottom left of the splash screen). We will then e-mail you a permanent license file. Save this file to your hard drive, start Excel, click on Risk Simulator, License, Install License and point to the location of this license file, restart Excel and you are now permanently licensed. If you are running Windows Vista, please see the note below before installing the license.
System Requirements, FAQ, and Additional Resources
Requires Windows Vista/XP; Excel XP/2003/2007; 512 MB RAM; 100 MB hard drive space; administrative rights; and .NET Framework 1.1 and 2.0. Download Microsoft .NET Framework 1.1, or run the file Dot NET 1.1.exe from the DVD (required to install Risk Simulator if your system does not already have it).
Please view the FAQ on the web site if you have any questions about system requirements or problems installing the software.

Windows Vista  This software works with Vista and Excel 2007. Please follow the instructions below for installing the software as well as for installing the license in Vista. You need to first turn off User Account Control before installing the software or license file for Risk Simulator: (i) In Windows Vista, click on Start | Control Panel | Classic View | User Accounts | Turn User Account Control On or Off, uncheck Use User Account Control (UAC), and then reboot the computer. You can now install the software per the instructions above, or install the permanent license that you have purchased by starting Excel, clicking on Risk Simulator | License | Install License, and browsing to the license file you received when you purchased the software. (ii) When restarting the computer, you will get a message that UAC is turned off. You can turn this message off by going to Control Panel | Security Center | Change the Way Security Center Alerts Me | Don't Notify Me and Don't Display the Icon. Alternatively, you can turn UAC back on after completing the license installation.
REAL OPTIONS SLS (WITH SUPER SPEED) (INTERNATIONAL VERSION: ENGLISH, CHINESE, JAPANESE, SPANISH)

The FULL/TRIAL version of Real Options SLS is now available. Download the software from the web site (or browse the DVD and run Real Options.exe). Make sure you install .NET Framework 2.0 before installing this software (see below).

Trial Version Info  You can test this software on a 30-day trial basis, after which you can purchase the software at www.realoptionsvaluation.com.
Full Version Info  If you are purchasing or have already purchased the software, simply download and install it. You will need TWO license keys for the software. For the first, when prompted after the 30-day trial period when you start the SLS Functions, e-mail us the Hardware Fingerprint at
[email protected] and we will send you your first permanent unlock code. Next, for the second license, you have 30 days to click on Install License on the main software screen and e-mail us the Hardware ID. Once you receive the license file, click on Buy or Install License, then click on Activate and browse to the license file we sent you. Please note that the Hardware ID and Hardware Fingerprint are different.
System Requirements, FAQ, and Additional Resources
Requires Windows Vista/XP; Excel XP/2003/2007; 512 MB RAM; 50 MB hard drive space; administrative rights; and .NET Framework 2.0. Download Microsoft .NET Framework 2.0 (required for Real Options SLS if your system does not already have it). Version 2.0 should be installed together with 1.1 (they work side by side). This file is also available in the DVD's "Required Software" folder.
Please view the FAQ if you have any questions about system requirements or problems installing the software. Installation Instructions for Real Options SLS can be found on the web site.
EMPLOYEE STOCK OPTIONS VALUATION TOOLKIT

FULL & TRIAL VERSION DOWNLOAD: Download the Employee Stock Options Toolkit from the web site. A license key is required to install. Once you have purchased the full license, simply e-mail us your Fingerprint ID (at installation, or at the end of your trial period when you start the software, you will be asked for a key, and the fingerprint is provided to you then). We suggest writing down this Fingerprint ID. Send the fingerprint to
[email protected] and we will send you the permanent license key within a few hours of your purchase. After 3 days, simply start the software and you will be asked for a permanent license key. Enter the key we e-mailed you to permanently unlock the software. Use the following trial license key when prompted for your free 3-day trial:
Name: Temporary Key
License: 656B-994F-F942-952F
System Requirements, FAQ, and Additional Resources

Requires Windows Vista/XP; Excel XP/2003/2007; 256 MB RAM; 10 MB hard drive space; and administrative rights. Please view the FAQ if you have any questions about system requirements or problems installing the software.
MODELING TOOLKIT

The Modeling Toolkit has over 800 models and functions, as well as over 300 Excel and SLS modeling templates, for use within the Excel environment as well as in the standalone SLS. Please see the MANUALS folder on the DVD for a detailed list of the models and functions. This software is introduced in detail in this book, Advanced Analytical Models (Johnathan Mun, Wiley, 2008).

FULL & TRIAL VERSION DOWNLOAD: Download the Modeling Toolkit from the web site or install the software from the DVD. A license key is required to install. Once you have purchased the full license, simply e-mail us your Fingerprint ID (at installation, or at the end of your trial period when you start the software, you will be asked for a key, and the fingerprint is provided to you then). We suggest writing down this Fingerprint ID. Send the fingerprint to
[email protected] and we will send you the permanent license key within a few hours of your purchase. After 30 days, simply start the software and you will be asked for a permanent license key. Enter the key we e-mailed you to permanently unlock the software. Use the following trial license key when prompted for your free 30-day trial:
Name: 30 Day Trial
Key: 4C55-0BA2-420E-CA84
System Requirements, FAQ, and Additional Resources

Requires Windows Vista/XP; Excel XP/2003/2007; 512 MB RAM; 75 MB hard drive space; and administrative rights. Please view the FAQ if you have any questions about system requirements or problems installing the software.
CUSTOMER CARE

If you have trouble with the DVD, please call the Wiley Product Technical Support phone number at (800) 762-2974. Outside the United States, call 1 (317) 572-3994. You can also contact Wiley Product Technical Support at http://support.wiley.com. John Wiley & Sons will provide technical support only for installation and other general quality control items. For technical support on the applications themselves, consult the program's vendor at admin@realoptionsvaluation.com or visit www.realoptionsvaluation.com. To place additional orders or to request information about other Wiley products, please call (877) 762-2974.
About the Author
Dr. Johnathan C. Mun is the founder and CEO of Real Options Valuation, Inc. (ROV), a consulting, training, and software development firm specializing in strategic real options, financial valuation, Monte Carlo simulation, stochastic forecasting, optimization, Basel II, FAS123, and risk analysis located in northern Silicon Valley, California. ROV has partners around the world, including Beijing, Chicago, Hong Kong, Mexico City, New York, Port Harcourt of Nigeria, Shanghai, Singapore, Zurich, and other locations. ROV also has a local office in Shanghai. He is also the chairman of the International Institute of Professional Education and Research (IIPER), an accredited global organization providing the Certified in Risk Management (CRM) designation, among others, staffed by professors from named universities from around the world. He is the creator of the Real Options SLS Super Lattice Solver software, Risk Simulator software, Modeling Toolkit software, and Employee Stock Options Valuation software showcased in this book, as well as the risk analysis Training DVD. He holds public seminars on risk analysis and CRM programs. He has authored nine other books published by John Wiley & Sons, including Banker's Handbook on Credit Risk (2008); Modeling Risk: Applying Monte Carlo Simulation, Real Options, Optimization, and Forecasting (2006); Real Options Analysis: Tools and Techniques, First and Second Editions (2003 and 2005); Real Options Analysis Course: Business Cases (2003); Applied Risk Analysis: Moving Beyond Uncertainty (2003); and Valuing Employee Stock Options (2004). His books and software are being taught and used by faculty and students at top universities around the world, including Beijing University, Bern Institute in Germany, Chung-Ang University in South Korea, Georgetown University, ITESM in Mexico, Massachusetts Institute of Technology, U.S. Naval Postgraduate School, New York University, Stockholm University in Sweden, University of the Andes in Chile, University of Chile, University of Pennsylvania Wharton School, University of York in the United Kingdom, and Edinburgh University in Scotland, among others. Dr. Mun is also currently a finance and economics professor and has taught courses in financial management, investments, real options, economics, and statistics at the undergraduate and the graduate MBA and Ph.D. levels. He teaches and has taught at universities all over the world, from the U.S. Naval Postgraduate School (Monterey, California) and University of Applied Sciences (Switzerland and Germany) as full professor, to Golden Gate University, San Francisco State University, and St. Mary's College (California), and has chaired many graduate research MBA thesis and Ph.D. dissertation committees. He also teaches weeklong risk analysis, real options analysis, and risk analysis for managers' public courses where participants can obtain the CRM designation upon completion. He is a senior fellow at the Magellan Center and sits on the board of standards at the American Academy
of Financial Management. He was formerly the Vice President of Analytics at Decisioneering, Inc., where he headed the development of options and financial analytics software products, analytical consulting, training, and technical support, and where he was the creator of the Real Options Analysis Toolkit software, the older and much less powerful predecessor of the Real Options Super Lattice software introduced in this book. Prior to joining Decisioneering, he was a Consulting Manager and Financial Economist in the Valuation Services and Global Financial Services practice of KPMG Consulting and a Manager with the Economic Consulting Services practice at KPMG LLP. He has extensive experience in econometric modeling, financial analysis, real options, economic analysis, and statistics. During his tenure at Real Options Valuation, Inc., Decisioneering, and KPMG Consulting, he taught and consulted on a variety of real options, risk analysis, financial forecasting, project management, and financial valuation issues for over 100 multinational firms (former and current clients include 3M, Airbus, Boeing, BP, Chevron Texaco, Financial Accounting Standards Board, Fujitsu, GE, Microsoft, Motorola, Pfizer, Timken, U.S. Department of Defense, U.S. Navy, Veritas, and many others). His experience prior to joining KPMG included being department head of financial planning and analysis at Viking Inc. of FedEx, performing financial forecasting, economic analysis, and market research. Prior to that, he did financial planning and freelance financial consulting work. Dr. Mun received his Ph.D. in Finance and Economics from Lehigh University, where his research and academic interests were in the areas of investment finance, econometric modeling, financial options, corporate finance, and microeconomic theory. He also has an MBA in business administration, an MS in management science, and a BS in Biology and Physics. He is Certified in Financial Risk Management, Certified in Financial Consulting, and Certified in Risk Management. He is a member of the American Mensa, Phi Beta Kappa Honor Society, and Golden Key Honor Society as well as several other professional organizations, including the Eastern and Southern Finance Associations, American Economic Association, and Global Association of Risk Professionals. In addition, he has written many academic articles published in the Journal of the Advances in Quantitative Accounting and Finance, the Global Finance Journal, the International Financial Review, the Journal of Financial Analysis, the Journal of Applied Financial Economics, the Journal of International Financial Markets, Institutions and Money, the Financial Engineering News, and the Journal of the Society of Petroleum Engineers.
Index
Abandonment options, 739–748, 763–767 Absolute returns, 89 Accrual(s): on basket of assets, 178–179 instruments, 405–406 range, 226–227 Acquisitions, 801–809, 821. See also Buy decisions AIC (Akaike Information Criterion), 277 Allocation, portfolio: banking and, 313–314 continuous, 356–361 investment, 372 stochastic, 400–404 ALM (asset liability management), 349–354 Alpha errors, 624 American closed-form approximations, 225, 424, 697–699, 753, 761–762 American options: abandonment, 739–740, 742–743 call options, 180–187, 424–425, 697–699, 726, 733–735, 737, 738 chooser options and, 764, 770–771 contraction, 750–751 on debt, 422–423 dividends and, 186–187, 734–738 double and exotic barrier, 731–733 dual variable rainbow options and, 767–769 employee stock options and, 715–717 exchange asset options and, 203 exotic options and, 178, 731–733 expansion, 756–758 on foreign exchange, 182–183 futures contracts and, 212 on index futures, 184–185 jump-diffusion options and, 774 lower barrier, 725–727 mean-reversion options and, 777, 779 multiple assets competing options and, 781 perpetual options and, 225 plain vanilla options and, 424–431 put options, 422, 424, 697, 733, 736–738, 748 range accruals and, 226 in Real Options SLS, 697–699, 715–717 with sensitivities, 180–181 simultaneous compound options and, 791 upper barrier, 728–730 Amortizations, 141–144, 534–537
Analytics: Central Limit Theorem, 79–88 Flaw of Averages, 88–93 lottery numbers, winning, 84–88 Mathematical Integration Approximation Model, 93–96 projectile motion, 96–99 regression diagnostics, 100–109 Ships in the Night, 109–111 statistical analysis, 111–122 weighting of ratios, 123–124 ANOVA (analysis of variance), 610–611, 618–623 Approximation models, 93–96, 180 Approximations, 93–96, 186, 225. See also American closed-form approximations ARIMA (autoregressive integrated moving average), 74–75, 276–282 Arithmetic averages, 188–189 Asian lookback options, 188–190 Assets: accruals on basket of, 178–179 allocation optimization model, 356–361 asset-equity parity model, 137–138 asset liability management (ALM), 349–354 asset or nothing options, 190–191 benchmark, 235–236 debt analysis and, 147–148 exchange assets options, 203 market value of, 137–138 multiple, 779–781 reference, 236 two-asset options, 212, 222–223, 233–236 volatility of, 137–138, 222 Assumptions: defining, 9–12 management, 666, 681 optimization and, 62 violations of, 46, 101, 240 Audit worksheets, 700, 702 Auto-ARIMA, 75, 278–281 Autocorrelations. See also Correlations diagnostic tools and, 47–49, 51–52 forecasting and, 241, 243, 244, 277 regression diagnostics and, 103–105 statistical analysis and, 113, 120–121 Autoregressive integrated moving average (ARIMA), 75, 276–282 Average options, 188–189
Averages: analytics and, 88–93 arithmetic, 188–189 autoregressive integrated moving, 75, 276–282 exponentially weighted moving, 664–665 geometric, 88–90, 189–190 harmonic, 90–91 skewed, 91–93 Backcasting, 133 Backward induction, 153, 156, 158 Bandwidth requirements, 329 Banking, 293–320, 634–643 break-even inventory and, 637–639 classified loan borrowing base and, 634–636 default probabilities in, 294–301 economic capital and value at risk and, 303–312 firm in financial distress and, 640–641 hurdle and discount rates and, 318–320 loss given default and, 301–303 optimization and, 315–319 portfolio allocation and, 313–315 pricing loan fees model and, 642–643 queuing models and, 354–356 Barrier options: binary digital instruments and, 405 double, 200–201, 230, 731–733 exotic, 191–192, 732 gap options and, 212 graduated, 213–214 lower, 725–727, 818 two-asset, 233–234 upper, 728–730 Basel II Accord, 293, 301, 652, 653, 657 Baseline growth, 283 Basic Econometrics, 75 Basic simulation model, 510–516 Bayesian analysis (Bayes’ Theorem), 166–168 Bearish spread positions, 417 Benchmark assets, 235–236 Benchmarks, 697–700 Benyovszky, Mark A., 329 Bermudan options: abandonment, 744–746 chooser options and, 765 contraction, 751–754 dividends and, 186, 734–738 employee stock options and, 716–718 exotic options and, 178 expansion, 760 plain vanilla options and, 429–431 with sensitivities, 180–181 Bernoulli distributions, 97 Beta distributions, 83, 446, 518 Beta errors, 624 BIM (Bliss interpolation model), 674–675 Binary decision variables, 62 Binary digital instruments, 405–406 Binary digital options, 193–194
Binomial distributions, 54–56, 81–84, 548–552, 571–573 Binomial lattices: closed-form model versus, 821–826 employee stock options and, 719 exotic options and, 180–182, 186, 217 modified, 407–413, 420–423 multinomial lattices versus, 705, 708, 709, 768, 795–798 plain vanilla options and, 424–431 Biotech, 287–292 Bivariate linear regression, 395–400 Black-Derman-Toy methodology, 407 Blackout Steps, 428–429, 701–703, 717, 723–724 Black-Scholes closed-form model, 697–699, 719, 821–823 Black-Scholes option pricing model. See also Generalized Black-Scholes-Merton model exotic options and, 180, 217, 237, 737–738 plain vanilla options and, 424 tornado and sensitivity charts and, 27–28 Bliss interpolation model (BIM), 674–675 Bond(s): call options, 149–150 debt analysis and, 138–140 inverse floater, 407–413 value of, 129–130, 151–152 yields and spreads, 432–434 Bootstrap simulations, 35–40, 522–525, 587–589 Borrowing base loans, 634–636 Box-Jenkins ARIMA, 75, 277–278 Break-even inventory, 637–639 Brownian motion, 49, 51, 106, 216, 245 Bull spread positions, 417 Buy decisions. See also Acquisitions build versus, 801–809 lease versus, 631–633 Calendar-days ratios, 232 Call and put collar strategy, 224–225 Call options. See also American options; Bermudan options; European options bonds, 149–150 definition of, 211 dividends and, 734–735 mean-reversion and, 777–779 plain vanilla, 424–431 simple, 795–797 Capability measures (Cpk), 627–631 Capital: asset liability management and, 349–354 economic, 303–315 value at risk and, 653–661 CAPM (capital asset pricing model), 320 Case studies. See Real Options Strategic Case Studies Cash flow. See also Discounted cash flow model matching, 350 model, 481–486, 667 returns, 664–668, 681
Credit analysis. See also Debt analysis; Loans credit default swaps and credit spread options, 127–128 credit premium, 125–126
Decision variables. See also Variables continuous, 62, 64–70, 400–404 integers, 62 mixed, 62 optimization and, 60–63 Default probabilities: banking and, 294–301 bond yield and spreads and, 432–434 credit analysis and, 125–126, 133, 135 empirical models of, 294, 299–301, 432–436 external options model (public company), 437–440 loss given default, 301–303 Merton models and, 441–442 structural models of, 294–300, 432–434, 441 Defective proportion units (DPU), 627 Deferment options, 810–813 Delphi method, 33, 334, 339, 342, 380 Delta, call, 492–493 Delta-gamma hedges, 474–478 Delta hedges, 214, 478 Delta options portfolio, 651–652 Delta precision, 624 Demand curves, 538–541 Descriptive statistics, 112–114 Design of experiments, 625–626 Deterministic, 59, 62, 274, 373–374, 642 Diagnostic tools, forecasting and regression, 42, 44–52, 100–109, 239–247, 270 Digital instruments, 405–406 Discounted cash flow model: abandonment options and, 739 contraction options and, 748 expansion options and, 756 net present value and, 344–346 sensitivity analysis and, 496–497 simulations and, 23–24, 542–545 valuations and, 640 volatility and, 664 Discount rates, 318–320 Discrete integer variables, 70–73, 364–368 Discrete project selection, 362–366 Discrete uniform distribution, 79–81, 86 Distributional analysis, 54–56, 58–61, 521–523 Distributional fitting: data and, 531–534 multiple variable, 682–686 simulations and, 33–35, 36 statistical analysis and, 113–116 value at risk and, 304, 306, 653–655 Distributions: Bernoulli, 97 Beta, 83, 446, 518 binomial, 54–56, 81–84, 548–552, 571–573 custom, 76 discrete uniform, 79–81, 86 exponential, 354–356 Gumbel maximum, 337 hypergeometric, 83
normal, 79–86, 574–576, 577 outcomes, 79–81 Poisson, 83, 176–177, 342, 354–356, 573–576 skewed, 217 skewness, 36–38 triangular, 446 Distributive lag analysis, 47–48, 104–105, 243–244 Dividends: call options and, 734–735 exotic options and, 186–187, 201–202 expansion options and, 756–757, 759 put options and, 736–738 uneven payment options, 237 Domestic currency, 205–206, 207 Double barrier options, 200–201, 230, 731–733 DPU (defective proportion units), 627 Dual variable rainbow options, 767–769 Duration, 145–146, 349–351, 470–471 Dynamic optimization: continuous portfolio allocation and, 359–360 discrete project selection and, 364 industry applications and, 316 military portfolio and, 377, 380 simulations and, 63–64 stochastic portfolio allocation and, 403–404 Dynamic versus static perturbations, 25–26, 29–33 Econometrics, 75, 248–253 Economic capital, 303–314 Economic order quantity, 169–172 Efficient frontier: of generation, 322–328 integrated risk analysis and, 470–472 military portfolio and, 380–385 optimization procedure, 63–64, 71, 360, 365 Elasticity, 386–390, 538–541 Electric/utility, 321–329 Embedded options, 350–351, 420 Employee stock options, 715–724 American call options and, 715–716 Bermudan call options with vesting and, 716–718 blackouts and, 723–724 European call options and, 719–720 forfeiture and, 723–724 suboptimal exercise and, 720–722, 723–724 valuation toolkit, 717–718, 721–722 vesting and, 716–718, 723–724 Epidemics, 546–548 Equity, 137–138 Errors. See also Normality of errors alpha/beta, 624 estimates of, 284, 286 mean absolute percent, 284, 286 mean-squared, 284 sphericity of, 104, 243, 286 ESO (Employee Stock Options) Valuation Toolkit, 717–718, 721–722
European options: abandonment, 743–745 chooser options and, 764, 770–773 contraction, 751–753 on debt, 422–423 dividends and, 186, 201–202, 734–738 double and exotic barrier, 731–733 dual variable rainbow options and, 767–769 employee stock options and, 719–720 exchange asset options and, 203 exotic options and, 178, 198, 217, 236 expansion, 756–760 futures contracts and, 212 inverse gamma options and, 215 jump-diffusion options and, 774 lower barrier, 725–727 mean-reversion options and, 777 multiple assets competing options and, 781 plain vanilla options and, 424–431 range accruals and, 226 in Real Options SLS, 697–699 with sensitivities, 180–181 upper barrier, 728–730 EVII (expected value of imperfect information), 161, 163–164 EVPI (expected value of perfect information), 164 EWMA (exponentially weighted moving average) models, 664, 665, 668 Excel, 1–2, 4, 6, 8, 19, 24, 42, 63, 67, 71, 74, 90, 93, 96, 163, 315, 344, 358, 364, 382, 402, 410, 414, 424, 426–427, 450, 455, 463, 465, 515, 526, 558, 562, 625, 695–697, 700, 709–714 Exchange assets options, 203 Exchange rates. See Foreign exchange Exotic options: accruals on basket of assets, 178–179 Asian lookback options, 188–190 asset or nothing options, 190–191 barrier options, 191–192, 200–201, 213–214, 725–733 basic call options, 734–735 basic put options, 736–738 binary digital options, 193–194 cash or nothing options, 195 chooser options, 196–197, 228, 763–767, 770–773 commodity options, 198 currency options, 199–200 with dividends, 186–187, 201–202, 237 exchange assets options, 203 foreign equity options, 207–208 foreign exchange options, 182–183, 199–200, 205–206 foreign takeover options, 209 forward start options, 210 futures options, 184–185, 211–212, 229 gap options, 212–213 index options, 184–185, 214–215 inverse gamma options, 215–216
jump-diffusion options, 216–217 leptokurtic and skewed options, 217–218 lookback options, 188–190, 218–222 option collar, 224–225 options on options, 223–224 perpetual options, 225 range accruals, 226–227 with sensitivities, 180–181 supershare options, 230 time switch options, 231 trading-day corrections, 232 two asset options, 222–223, 233–236 writer extendible options, 238 Expansion options, 756–767, 804–806 Expected utility analysis, 172–173 Expected values, 156, 161, 163–164 Experiments, design of, 625–626 Exponential distribution, 354–356 Exponential growth, 254–256, 264–266 Exponentially weighted moving average (EWMA) models, 664–665 External debt ratings, 127–132 External options model (public company), 437–440 Extrapolation, 77, 118, 258–263, 271–272, 684–689 Extreme spreads options, 204–205 Fairway options, 226–227, 405–406 Farm-outs, 810–813 Federal Reserve Bank, 139 Financial distress, 640–641 Financial instruments, embedded options in, 350–351 Financial statements, valuation and, 637–639, 644 Fixed income investments, 472–473 Fixed strike options, 218–220 Fixed versus floating rates, 479–480 Flaw of Averages, 88–93 Floating exchange rate, 487 Floating strike options, 220–222 Floating versus fixed rates, 479–480 Forecast(s): charts, 14–18, 511–513 correlations and, 246–253 defining, 12–13 diagnostic tools, 42, 44–52, 100–109, 239–247, 270 econometric, 248–253 exponential J-growth curves and, 254–256 integrated risk analysis and, 460–462 interpretation of results, 13–16 linear interpolation and, 258–263 logistic S-growth curves and, 264–266 manual computations, 257–258 market share and, 267–268 Markov chains and, 267–268 module, functions of, 3 multiple regression, 248–253, 269–270
nonlinear extrapolation and, 258–263, 271–272 optimization and, 62 statistics, 14 stochastic processes and, 77–78, 245–246, 273–275 techniques, 73–78 time-series analysis and, 257–258, 283–286 time-series ARIMA and, 276–282 Foreign equity options, 205–208 Foreign exchange: cash flow model, 481–486 hedging exposure, 487–491 options, 182–183, 199–200, 205–206 Foreign takeover options, 209 Forfeiture, 719–720 Forward rates, 674 Forward start options, 210 Free cash flow, 345, 463, 465, 784 Friedman’s test, 610–612 F-tests, two-variable, 606–607 Futures, 184–185, 211–212, 229 Gamma, call, 492–493. See also Delta-gamma hedges; Inverse gamma options GANTT chart analysis, 446–453 Gap analysis, 349–350 Gap options, 212–213 GARCH (generalized autoregressive conditional heteroskedasticity), 76, 665, 668–672, 673 Garman-Kohlhagen model, 182, 199 Generalized Black-Scholes-Merton model, 180, 184, 214–215, 232, 738. See also Black-Scholes option pricing model Geometric averages, 88–90, 189–190 Glantz, Morton, 634, 640, 642 Goodness of fit tests, 608–609 Graduated barrier options, 213–214 Greeks, 491–495 Growth: baseline, 283 exponential, 254–256, 264–266 Gumbel Maximum Distribution, 337 Harmonic averages, 90–91 Harvest model, 390–394 Hedges. See Risk hedges Heteroskedasticity, 44, 46, 100–101, 239–240, 242. See also GARCH High-tech manufacturing, 801–809 Histogram (tab), 511 Historical data: distributional fitting and, 33 elasticity and, 386–390, 538 forecasting and, 276, 283 success rates and, 517–518 value at risk and, 653–654 volatility and, 668
Hull-White models, 149 Hurdle rates, 319–320 Hypergeometric distributions, 83 Hypotheses tests: advanced techniques, 590–623 ANOVA, 619–623 bootstrap simulation as, 36–37 chi-square tests, 608–611 classical, 40–42 confidence intervals and, 580–583 design of experiments and, 623–626 in empirical simulation, 580–587 Friedman’s test, 610–612 Kruskal-Wallis test, 612–614 Lilliefors test, 613–615 nonparametric methodologies, 607–618 one-variable, 591–597 runs test, 616–617 sample size determination and, 623–626 statistical analysis and, 113, 116 in theoretical situations, 576, 578–582 two-variable, 597–607 types of, 591–594 Wilcoxon signed-rank test, 616, 618 ICDF (inverse cumulative distribution function), 54–56, 579 Implied volatility, 663 Index futures, 184–185 Index options, 214–215 Industry applications: banking, 293–320 biotech, 287–292 electric/utility, 321–329 information security intrusion risk management, 329–348 insurance asset liability management model, 349–354 inventory, 169–172, 174–175, 366–371, 637–638 manufacturing, 170–172, 287–288, 801–809 oil and gas industry, 810–813 pensions, 354 Industry comparables, 442 Infectious diseases, 546–548 Inflation, 556, 673 Information, value of, 810–817 Information security intrusion risk management, 329–348 attack models, 332–339 attack scenarios, 339–342 environmental details, 331–332 financial impact, 344–346 investment decisions, 346–348 Inlicensing drug deal structuring, 289–290 Input assumptions. See Assumptions Installation, software, 4–5 Insurance asset liability management model, 349–354 Integers decision variables, 62
Integrated risk analysis: forecasting and, 460–462 Monte Carlo simulation and, 463–464 optimization and, 465, 468–472 Real Options analysis and, 463, 465–468 Intellectual property, 742, 745–747, 771, 804 Interest payments, 141–144, 534–537 Interest rates: debt analysis and, 145 floating versus fixed, 479–480 inverse floater bonds and, 407–413 mean-reverting, 138–140, 147–152, 673, 690 premiums on, 125–126, 131–132 risk and, 472–473 term structure of, 673, 674–675, 684, 690–691 volatility of, 139, 422–423, 679–680 Intermediate Node Equations, 425–430, 698–699, 703 Internal credit risk rating model, 133–134 Internal optimization model, 169 Internal rate of return, 146, 344, 463, 542, 562 Internal ratings-based approach, 293 Interpolation, 258–263, 674–677, 684–685 Inventory, 169–172, 174–175, 366–371, 637–638 Inverse cumulative distribution function (ICDF), 54–56, 579 Inverse floater bonds, 407–413 Inverse gamma options, 215–216 Investments: fixed income, 472–473 information security and, 346–348 portfolio allocation, 372 return on, 123–124, 456–459, 542–545 simulations and, 542–545 staged-gate, 292, 802, 815, 818 J Curves, 254–256 J-S Curves, 76. See also S-growth curves Jump-diffusion, 106, 216–217, 245, 774–776 Kendall’s tau, 19 KMV, 295, 437 Kruskal-Wallis test, 612–614 Kurtosis, 217 Kusiatin, Uriel, 287, 291 Languages, 696 Lattice Maker module, 714–715 Lattices. See also Binomial lattices; Real Options Super Lattice Solver (SLS) multinomial, 695, 705, 707–709, 768, 795–797 options-adjusted spreads, 420–421 pentanomial, 709, 767–769 quadranomial, 709, 774–776 trinomial, 705, 708–709, 777–779, 795–797 Law of Large Numbers, 40, 624 Lease versus buy valuation, 631–633 Left-tailed hypotheses tests, 591, 594 Leptokurtic options, 217–218 Liabilities, 349–354
Lilliefors test, 613–615 Linear interpolation, 258–263 Linear optimization, 63 Linear regression, 103–104, 240–241, 394–399 Linear trend detection, 119 Loans, 634–639, 642–643. See also Credit analysis Logarithmic cash flow returns approach, 664–666, 681 Logarithmic present value returns, 665, 666–669, 681 Logistic S-growth curves, 76, 264–266 Lookback options, 188–190, 218–222 Loss given default, 301–303 Lottery numbers, winning, 84–88 Lower barrier options, 725–727, 818 MAD (mean absolute deviation), 284 Management assumptions and guesses, 666, 681 Manufacturing, 170–172, 287–288, 801–809 MAPE (mean absolute percent error), 284, 286 Market research, 159–164, 167, 802–803 Market share, 267–268 Market-traded instruments, 432 Market uncertainties, 810, 814 Market values: of assets, 137–138 of debt, 147–148 of interest rate risk, 139 Markov chains, 77, 267–268 Markowitz efficient frontier optimization procedure, 63–64, 70, 362, 367 Mathematical Integration Approximation Model, 93–96 Maturity, 76, 125, 127, 129, 130, 141, 145–146, 149, 178, 180, 183, 186, 190, 191, 193, 196, 197, 200–203, 204, 213, 218, 223–224, 226, 232, 234, 238, 264–265, 295, 405, 407, 414, 417, 420, 424, 426, 430, 439, 487, 535, 652, 673, 679–680, 690, 697, 699, 705, 715, 716, 719, 732, 737–739, 742, 743, 748, 756, 763, 771, 782, 821–823 Matrix: regret, 165 variance-covariance, 474–475 (see also Covariance) Maximin analysis, 164–165, 168, 222–223 Maximum likelihood estimation (MLE), 76–77, 436 McKinsey discounted cash flow model, 640 Mean: hypotheses tests and, 583, 594, 596–598, 599, 600, 602 median versus, 92 Mean absolute deviation (MAD), 284 Mean absolute percent error (MAPE), 284, 286 Mean-reversion: forecasting/regression diagnostics and, 106, 245 of interest rates, 138–140, 147–152, 673, 690 options, 777–779 trinomial lattices and, 795
Mean-squared error (MSE), 284 Median, 92 Media streaming, 329 Meeting, probability of, 109–111 Merton models. See also Generalized Black-Scholes-Merton model of debt analysis, 147–148 internal options (private company), 441 market options (industry comparable), 442 MG1 single arbitrary queuing model, 177, 355 M/G/k blocked queuing model, 177, 355 Micronumerosity, 46, 101, 240, 242 Military portfolio, 379–385 Minimax analysis, 165, 168, 222–223 Mixed decision variables, 62 MLE (maximum likelihood estimation), 77, 436 Modeling Risk: Applying Monte Carlo Simulation, Real Options Analysis, Stochastic Forecasting, and Portfolio Optimization (Mun), 2, 12, 248, 269, 274, 315, 360, 365, 511 Modeling toolkit software, 1–2 Modified binomial lattices, 407–413, 420–421 Money, time value of, 562–570 Monitoring periodicities, 191–192, 200, 233–234 Monte Carlo simulations: asset liability management and, 350 banking and, 304, 306–312, 316 basic simulation model and, 510–513 biotech industry and, 288, 289, 291 continuous portfolio allocation and, 359–360 correlations and, 20–22, 529–530 decision analysis and, 156–158, 161–163, 168, 171, 178 discrete project selection and, 364 information security and, 331, 333, 342, 346 integrated risk analysis and, 463–464 investment decisions and capital budgeting and, 542 optimization and, 62–63 queuing models and, 357 retirement funding and, 557–558 Risk Simulator and, 2–8 running, 6–16 surgical success rates and, 518–520 valuation model and, 644 value at risk and, 306–312, 647–650, 653 Moody’s, 133, 295, 437 Mortgages, 141–144 MSE (mean-squared error), 284 Multicollinearity, 51, 106, 108, 245–246 Multidimensional simulations, 552–555 Multinomial Lattice Solver, 695, 705, 707, 768, 795–797 Multiple Asset or Multiple Phased module, 695, 704–705, 706, 707 Multiple assets competing options, 779–781 Multiple-phased complex sequential compound options, 787–789, 790 Multiple-phased sequential compound options, 786–787, 788
Multiple-phased simultaneous compound options, 793–794 Multiple regression modeling, 248–253, 269–270 Multiple variable distributional fitting, 682–684, 685, 686 Multivariate regression, 77 Mutual exclusivity of options, 781–782 Negative binomial distributions, 548–552 Nelson-Siegel (NS) interpolation model, 676–677 Nested combinatorial options, 781–782, 823 Net present value and discounted cash flow analysis, 344–346 Nonlinear extrapolation, 77, 118, 258–263, 271–272 Nonlinearity, 46–47, 241–242 Nonlinear optimization, 63 Nonlinear rank correlation charts, 503 Nonlinear tornado and sensitivity charts, 503–509 Nonparametric bootstrap simulations, 587–589 Nonparametric correlations, 19 Nonparametric hypotheses tests, 607–618 Nonstationarity, 120, 122 Normal distributions, 79–86, 574–576, 577 Normality of errors: forecasting/regression diagnostics and, 47, 49, 104–106, 243, 245 statistical analysis and, 113, 116–117, 119 NS (Nelson-Siegel) interpolation model, 676–677 Objectives, 62–63 Oil and gas industry, 810–811 One-variable tests: T-tests, 591–594, 595 Wilcoxon signed-rank test, 616–618 Z-tests, 594, 596–597 Operational risk, 356–358, 653–656 Opportunity costs, 165 Optimal pricing with elasticity, 386–390 Optimal trigger values, 741, 759, 814–817 Optimization. See also Dynamic optimization; Static optimization asset allocation optimization model, 358–363 banking and, 315–319 with continuous decision variables, 64–70 continuous portfolio allocation, 356–361 with discrete integer variables, 70–73 discrete project selection, 362–366 examples of, 58–62 harvest model, 390–394 integrated risk analysis and, 465, 468–472 internal optimization model, 169 inventory optimization, 366–371 investment portfolio allocation, 372 Markowitz efficient frontier procedure, 63–64, 70, 362, 367 methods of, 63–64 military portfolio and efficient frontier, 380–385 module, functions of, 3
optimal pricing with elasticity, 386–390 ordinary least squares, 394–399 portfolio, 56–70 simulation-optimization, 63, 315, 359, 360, 364 stochastic, 63–64, 313–315, 400, 404 stochastic portfolio allocation, 400–404 terminology of, 60–66 value at risk and, 317–319, 647–650 Options. See also American options; Barrier options; Bermudan options; Call options; Customized options; Employee stock options; European options; Exotic options; Put options; Real Options entries abandonment, 739–748, 763–767 Asian lookback, 188–190 asset or nothing, 190–191 average, 188–189 barrier, 158, 168, 191–192, 200–201, 213–214, 233–234, 725–727, 728–730, 731–733, 818 binary digital, 193–194 cash or nothing, 195 chooser, 196–197, 228, 763–767, 770–773 collar, 224–225 commodity, 198 compound, 223–224 contraction, 748–755, 763–767, 803–804, 806, 808 contract versus futures contract, 212 credit spread, 131–132 currency, 199–200, 205–207, 487–488 on debt, 422–423 deferment, 810–813 delta portfolio, 651–652 dual variable rainbow, 767–769 embedded, 350–351, 420 exchange assets, 203 expansion, 756–767, 804, 806 extreme spreads, 204–205 fairway, 226–227, 405–406 fixed strike, 218–220 floating strike, 220–222 foreign equity, 205–208 foreign exchange, 182–183, 199–200, 205–206 foreign takeover, 209 forward start, 210 futures, 184–185, 211–212, 229 gap, 212–213 index, 214–215 inverse gamma, 215–216
1009 payoff values and, 414–416 perpetual, 225 plain vanilla, 424–431 quanto, 208 with sensitivity, 180–181 sequential compound, 781–791, 815, 817 simultaneous compound, 791–794 skewed, 217–218 supershare, 230 switching, 817–820 time switch, 231 two-asset, 212, 222–223, 233–236 uneven dividend payments, 237 writer extendible, 238 Options-adjusted spreads lattices, 420–421 Options analysis: binary digital instruments, 405–406 on debt, 422–423 inverse floater bond, 407–413 options-adjusted spreads lattices, 420–421 plain vanilla options, 424–431 trading strategies, 413–419 Ordinary least squares, 394–400 Outcomes distribution, 79–81 Outcomes probabilities, 153, 159–161 Outliers, 46, 92, 103, 240–242 Output forecasts. See Forecast(s) Pairwise correlations, 525–530 Parametric correlations, 19 Path-dependent/path-independent options, 781–782 Payoff values: decision analysis and, 153, 157, 159–161, 168, 172–173 exotic options and, 218–222 options and, 414–416 PDF (probability density functions), 54–56, 573 Pearson’s correlation coefficient, 19 Pearson’s product moment correlations, 51–52, 108–109, 246 Pentanomial lattices, 709, 767–769 Periodicities, monitoring, 191–192, 200, 233–234 Perpetual options, 225 PERT (program evaluation review technique), 446–453 Perturbations: dynamic versus static, 25, 29–32 sensitivity analysis and, 496–498, 504–505 Pharmaceutical development, 814–817 Plain vanilla options, 424–431 PMF (probability mass functions), 54–56, 673 Poisson distributions: Central Limit Theorem and, 83 industry applications and, 342 queuing models and, 176–177, 354–356 Six Sigma quality control and, 573–574, 575, 576 Poisson jump-diffusion process, 216 Population variance, 610
Portfolio allocation: banking and, 313–314 continuous, 356–361 investment, 372 stochastic, 399–404 Portfolio efficient frontier, 382, 385–386, 470–472 Portfolio optimization. See Optimization Portfolio risk return profiles, 474–476 Precedents, 496–498, 504–505 Precision control, 18–22 Preferences, run, 515 Prices. See also Black-Scholes option pricing model capital asset pricing model, 320 credit risk analysis and, 129–130 debt analysis and, 138–140, 147–148, 151–152 elasticity and, 386–390 of loan fees model, 642–643 quantity and, relationship between, 539–540 strike, 218–224 Private companies, 441 Probability. See also Default probabilities of meeting, 109–111 outcomes, 153, 159–161 statistical, 571–576 steady state, 268 to volatility, 666, 668 Probability density functions (PDF), 54–56, 573 Probability mass functions (PMF), 54–56, 573 Process Capability Index (Cpk), 627 Profiles: portfolio risk return, 474–476 risk, 163–164 simulation, 6, 8–9, 515–516 Profit cost analysis, 135–136 Program evaluation review technique (PERT), 446–453 Projectile motion, 96–99 Project management: cost estimation model, 443–446 critical path analysis, 446–453 project timing, 453–455 Proportions, 596–597, 602–606 Protective put positions, 417, 821–826 Public companies, 437–440 Purchase. See Acquisitions; Buy decisions Put options: American, dividends and, 736–738 call and put collar strategy, 224–225 debt analysis and, 149–150 definition of, 211 mean-reversion and, 777–779 plain vanilla, 424–431 protective, 417, 821–826 put on call options, 223 simple, 795–799 Quadranomial lattices, 709, 774–776 Qualitative forecasting, 72 Quality control. See Six Sigma quality control
Quantitative forecasting, 74 Quanto options, 208 Queuing models, 176–178, 354–356 Rainbow options, 763–765 Random walk, 49, 51, 106, 216, 245 Range accruals, 226–227 Rank correlation chart, 31–32, 503 Ratios: calendar-days, 232 return to risk, 129–130, 356–361, 400–401, 470–472 Sharpe, 62, 63, 66, 73–74 trading-days, 232 weighting of, 123–124 Real estate, commercial, 456–459 Real Options analysis, 158, 168, 353, 459, 463, 465–468 Real Options Analysis: Tools and Techniques, 2nd Edition (Mun), 425, 562, 681, 696–698 Real Options Analysis Toolkit, 697 Real Options Strategic Case Studies: build or buy decision, 801–809 deferment options, 810 farm-outs, 810–813 optimal trigger values, 814–817 switching options, 817–820 value of information, 810–813, 814–817 warrant valuation, 821–826 Real Options Super Lattice Solver (SLS): abandonment options and, 739–748 American options and, 697–699, 715–717 Bermudan options and, 716–718 chooser options and, 763–767, 770–773 contraction options and, 748–755 dual variable rainbow options and, 767–769 European options and, 697–699 exotic options and, 178–182, 186, 226, 770–773 expansion options and, 756–762 forecast module of, 3 integrated risk analysis and, 463, 465 introduction to, 694–715 jump-diffusion options and, 774–776 Lattice Maker module of, 714–715 mean-reversion options and, 777–779 Multinomial Lattice Solver and, 695, 705, 708–709, 768, 795–798 Multiple Asset or Multiple Phased module of, 695, 704–705, 706, 707 multiple assets competing options and, 779–781 optimization module of, 3 plain vanilla call and put options and, 424–431 Risk Simulator and, 4 sequential compound options and, 781–791 simple call and put options and, 795–798 simulation module of, 2–3 simultaneous compound options and, 791–794 Single Asset and Single Phased module of, 695, 697–704
Ind
JWBK121-Mun
March 17, 2008
12:45
Char Count= 0
Index SLS Excel Functions module of, 696, 712–714 SLS Excel Solution module of, 695, 709–711 Recruitment budget, 548–555 Reference assets, 236 Regression: bivariate, 394–400 diagnostic tool, 42, 44–52, 100–109, 239–247, 270 multiple, modeling, 248–253, 269–270 multivariate, 77 Regret matrix, 165 Relative returns, 89–90 Retirement funding, 556–559 Return(s): on investments (ROI), 123–124, 456–459, 542–545 logarithmic cash flow, 664–666, 681 logarithmic present value, 665, 666–667, 681 relative and absolute, 89–90 to risk ratio, 129–130, 356–361, 400–401, 470–472 (see also Sharpe ratio) risk return profiles, 474–476 Rho, call, 492–494 Right-tail capital requirements, 657–661 Right-tailed hypotheses tests, 591 Risk: analysis, 460–473, 474–476, 803–809, 810 asset-liability, 349–350 capital analysis, 293 debt analysis and, 138–140, 147–148, 151–152 information security intrusion risk management, 329–348 operational, 354–356, 653–656 preferences, 172 profile, 163–164 return profiles, 474–476 returns to risk ratio, 129–130, 356–361, 400–401, 470–472 (see also Sharpe ratio) tolerance levels, 347–348 Risk-free rate volatility, 676–685, 708 Risk hedges: delta-gamma hedges, 477–478 delta hedges, 214, 478 fixed versus floating rates, 479–480 foreign exchange cash flow model, 477–482 foreign exchange exposure, 487–491 Risk-neutral, 690, 775, 795 Risk Simulator, introduction to, 2–7 RMSE (root mean-squared error), 284 ROI (return on investment), 123–124, 456–459, 542–545 Roulette wheel, 560–561 Runs test, 616–617 Salvage values, 735–744 Sample size determination, 623–626 Scenario analysis, commercial real estate, 458–459 Scholes. See Black-Scholes closed-form model; Black-Scholes option pricing model; Generalized Black-Scholes-Merton model
SC (Schwarz Criterion), 277
Seasonality, 272, 283
Seasonal lending trial balance analysis, 637–639
Security, information intrusion risk management, 329–348
Seed values, 513–514
Sensitivity:
  analysis, 25, 29–33, 505
  charts, 496, 501–505, 508–509
  debt sensitivity models, 145–146
  Greeks, 491–495
  options with, 180–181
  tables, 25, 27
  tornado analysis and, 25, 496–501, 503–507
Sequential compound options, 781–791, 815, 818
Serial correlations, 276–277
S-growth curves, 76, 264–266
Sharpe ratio, 62, 63, 66, 74
Ships in the Night, 109–111
Sigma sample, 624
Simple call and put options, 795–797
Simple chooser options, 196, 228
Simple put options, 791–793
Simulation-optimization, 63, 315, 359–360, 364
Simulations:
  basic simulation model, 510–516
  correlation and, 525–530
  data fitting, 531–534
  debt repayment and amortization, 534–537
  demand curve and elasticity estimation, 538–541
  infectious diseases, 546–548
  investment decisions and capital budgeting, 542–545
  module, 2–3
  multidimensional, 552–555
  profile, 6, 8–9, 515–516
  recruitment budget, 548–556
  reports, 42–43
  retirement funding with VBA macros, 556–559
  roulette wheel, 560–561
  surgical success rates, 517–525
  time value of money, 562–570
Simultaneous compound options, 791–794
Single Asset and Single Phased module, 695, 697–704
Six Sigma quality control:
  capability measures, 627–631
  hypotheses tests (advanced techniques), 590–623
  hypotheses tests in empirical simulations, 583–587
  hypotheses tests in theoretical situations, 576, 578–582
  nonparametric bootstrap simulations, 587–589
  sample size determination and design of experiments, 623–626
  statistical probabilities and, 571–576
Skewed averages, 91–93
Skewed distributions, 217
Skewed options, 217–218
Skewness distributions, 36–38
SLS Excel Functions module, 696, 712–714
SLS Excel Solution module, 695, 709–712
Software requirements, 3–6
Spearman’s rank correlation (Spearman’s R), 19, 22, 52, 108–109, 247
Specification levels, 627–628
Sphericity of errors, 104, 243
Spider charts, 25, 26–28, 33, 496–501, 504–506
Spline extrapolation, 77–78, 258–263
Spot curves, 472–473
Spot rates, 678
Spot yields, 684
Spreads:
  bearish/bull, 417
  credit, 125–132, 434
  extreme spreads options, 204–205
  on futures options, 229
Staged-gate investment process, 292, 802, 815, 817
Standard deviations, 529, 580–583, 624
Standard & Poor’s 500, 214, 236
Static covariance method, 661–663
Static optimization:
  continuous portfolio allocation, 358–359
  discrete project selection and, 364
  military portfolio and, 382
  simulations and, 63–64
  stochastic portfolio allocation and, 401–402
Static versus dynamic perturbations, 25–26, 29–32
Stationarity, 106, 120, 122
Statistical analysis tools, 52–57, 111–122
Statistical capability measures (Cpk), 627–628
Statistical confidence intervals, 580–583
Statistical probabilities, 571–576
Statistics:
  descriptive, 112–113, 114
  forecast, 14
  tab, 511, 515
  Theil’s U, 286
Steady state probability, 268
Stochastic optimization:
  banking and, 315
  continuous portfolio allocation and, 360
  description of, 63–64
  discrete project selection and, 364
  stochastic portfolio allocation and, 400, 404
Stochastic portfolio allocation, 400–404
Stochastic processes:
  forecasting/regression diagnostics and, 49–50, 78, 106–107, 245–246, 273–275
  statistical analysis and, 113, 120, 122
Straddle positions, 417
Strangle positions, 417
Strategy trees, 802, 803, 811, 815, 818
Strike prices, 218–224
Suboptimal exercise, 720–724
Success rates, surgical, 517–525
Super Lattice Solver (SLS). See Real Options Super Lattice Solver (SLS)
Supershare options, 230
Surgical success rates, 517–525
Switching options, 817–820
Terminal Node Equations, 425–430, 698–699, 702–703
Theil’s U statistic, 286
Theta, call, 492, 494
Time horizon, 303, 344
Time-series analysis, 78, 113, 119–122, 257–258, 283–286
Time-series ARIMA, 276–282
Time-series data:
  analytics and, 90–91
  diagnostic tools and, 47, 49–52
  extrapolation and interpolation of, 684–689
  forecasting and, 241, 245, 258–263, 276–282
  regression diagnostics and, 103–104, 106
  volatility and, 668
Time switch options, 231
Time value of money, 562–570
Tornado analysis, 22–33, 458, 496, 503–506
Trading-days ratio, 232
Trading strategies, 413–419
Traveling financial planner, 58–62
Trend analysis, 113, 118, 119, 283
Trial balances, 637–638
Trials, 8, 9, 12–13, 22, 34, 37, 54, 58, 59, 60, 63, 81, 85, 87, 267, 315, 316, 374, 402, 514, 515, 518, 521, 523, 525, 548–549, 552, 558, 560, 571–572, 573, 588
Trial version of software, 1–2
Triangular distribution, 446
Trigger values, 741, 759, 814–817
Trinomial lattices, 705, 708–709, 774–779, 795–797
Truncation, 12
T-tests, 591–594, 595, 597–601
Two asset options, 212, 222–223, 233–236
Two-phased sequential compound options, 783–786, 815, 817
Two-phased simultaneous compound options, 791, 793
Two-tailed hypotheses tests, 591
Two-variable tests:
  F-tests, 606–607
  T-tests, 597–601
  Wilcoxon signed rank tests, 616, 618
  Z-tests, 602–606
Uncertainty. See also Monte Carlo simulations
  debt ratings and spread under, 131–132
  industry applications and, 334
  market, 810, 814
  optimization under, 59, 62
  private, 810
Underlying asset, 127, 139, 178, 182, 184, 188, 189, 190, 191, 193, 200, 205, 206, 208, 211–212, 214, 216, 217, 219, 220, 221, 222, 226, 229, 233, 235–236, 237, 295, 405, 417, 422, 424, 439, 477–478, 491–493, 695, 697, 704, 705, 708, 709, 719, 731, 737, 739, 767–769, 774–775, 777–778, 779–781, 786, 795
Uneven dividend payments options, 237
Unit capability measures, 627–628
Upper barrier options, 728–730
U.S. Treasury securities, 680–689
Utility analysis, 172–173, 321–329
Valuation. See also Payoff values
  of break-even inventory, 637–639
  of buy versus lease, 631–633
  of classified loan borrowing base, 634–636
  of convertible warrants, 821–826
  of debt, 129–132, 147–148, 151–152
  ESO Valuation Toolkit, 717–718, 721–722
  expected values and, 156, 161, 163–164
  of firm in financial distress, 640–641
  of information, 810–817
  market values and, 137–139, 147–148
  optimal trigger values and, 741, 759, 814–817
  of pricing loan fees model, 642–643
  salvage, 739–748
  seed, 513–514
  of time value of money, 562–570
  valuation model, 644–646
Valuation lattice, 704–705, 709, 712, 781, 782
Value at Risk, 303–312
  economic capital and, 313–314
  foreign exchange exposure and, 487, 489
  Monte Carlo simulations and, 306–312, 647–650, 653
  operational and credit risk and, 653–656
  optimization and, 316, 647–650
  options delta portfolio and, 651–652
  right-tail capital requirements and, 657–661
  static covariance method and, 661–663
  structural models of, 304–306
Valuing Employee Stock Options (Mun), 717
Variables. See also Decision variables; One-variable tests; Two-variable tests
  custom, 703
  discrete integer, 70–73, 362–366
  distributional fitting and, 33–35, 682–684, 685, 686
  dual, rainbow options, 767–769
Variance-covariance matrix, 474–475. See also Covariance
Variance Inflation Factor (VIF), 51, 108, 246–247
Variance(s):
  analysis of (ANOVA), 610, 611, 618–623
  charts, 32
  hypotheses tests and, 584, 598–601, 606, 607
  population, 610
Variation, percent explained, 503
Vasicek models (Oldrich Vasicek), 149–152, 690–691
VBA (Visual Basic for Applications), 556–560
Vega, call, 492, 494–495
Vesting, 716–718, 723–724, 821–826
VIF (Variance Inflation Factor), 51, 108, 246–247
Violations of assumptions, 46, 101, 240
Visual Basic for Applications (VBA), 556–560
Volatility:
  of assets, 137–138, 222
  barrier options and, 726
  computations, 664–672
  EWMA, 664, 665, 668
  GARCH, 664, 665, 668–672, 681
  implied, 663
  of interest rates, 139, 422–423, 679, 680
  inverse floater bonds and, 407, 410
  logarithmic cash flow returns approach, 664–666, 681
  logarithmic present value returns approach, 665, 666–669, 681, 759
  management assumption approach, 664–666, 681
  to probability, 666, 668
  Real Options SLS and, 712
  risk-free rate, 680–689, 712
  sensitivity analysis and, 505
  simulations and, 542–545
  value at risk and, 661
Warrants, valuation of, 821–826
Weighted least squares, 436
Weighting of ratios, 123–124
Wilcoxon signed-rank test, 616, 618
Wong, Victor, 349
Writer extendible options, 238
Xi, call, 492–495
Yield curves:
  asset liability management and, 349–354
  Cox-Ingersoll-Ross model, 673–674
  curve interpolation, 674–677
  debt analysis and, 138–140, 146, 151–152
  forward rates from spot rates, 678
  term structure of volatility, 679–680
  U.S. Treasury risk-free rates and cubic spline curves, 680–689
  Vasicek model, 690–691
Z-scores, 580
Z-tests, 436, 594, 596, 597, 602, 606
For more information about the DVD, see the About the DVD section on page 995.

CUSTOMER NOTE: IF THIS BOOK IS ACCOMPANIED BY SOFTWARE, PLEASE READ THE FOLLOWING BEFORE OPENING THE PACKAGE

This software contains files to help you utilize the models described in the accompanying book. By opening the package, you are agreeing to be bound by the following agreement:

This software product is protected by copyright and all rights are reserved by the author, John Wiley & Sons, Inc., or their licensors. You are licensed to use this software on a single computer. Copying the software to another medium or format for use on a single computer does not violate the U.S. Copyright Law. Copying the software for any other purpose is a violation of the U.S. Copyright Law.

This software product is sold as is without warranty of any kind, either express or implied, including but not limited to the implied warranty of merchantability and fitness for a particular purpose. Neither Wiley nor its dealers or distributors assumes any liability for any alleged or actual damages arising from the use of or the inability to use this software. (Some states do not allow the exclusion of implied warranties, so the exclusion may not apply to you.)