International Series in Operations Research & Management Science Volume 162
Series Editor Frederick S. Hillier Stanford University, CA, USA Special Editorial Consultant Camille C. Price Stephen F. Austin State University, TX, USA
For further volumes: http://www.springer.com/series/6161
Ahti Salo Jeffrey Keisler Alec Morton Editors
Portfolio Decision Analysis Improved Methods for Resource Allocation
Editors

Ahti Salo
Aalto University School of Science, Systems Analysis Laboratory, P.O. Box 11100, 00076 Aalto, Finland

Jeffrey Keisler
Department of Management Science and Information Systems, University of Massachusetts Boston, 100 Morrissey Boulevard, Boston, MA 02125, USA

Alec Morton
Department of Management, London School of Economics, Houghton Street, London WC2A 2AE, United Kingdom
ISSN 0884-8289 ISBN 978-1-4419-9942-9 e-ISBN 978-1-4419-9943-6 DOI 10.1007/978-1-4419-9943-6 Springer New York Dordrecht Heidelberg London Library of Congress Control Number: 2011932226 © Springer Science+Business Media, LLC 2011 All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights. Printed on acid-free paper Springer is part of Springer Science+Business Media (www.springer.com)
Foreword
Resource allocation problems are ubiquitous in business and government organizations. Stated quite simply, we typically have more good ideas for projects and programs than funds, capacity, or time to pursue them. These projects and programs require significant initial investments in the present, with the anticipation of future benefits. This necessitates balancing the promised return on investment against the risk that the benefits do not materialize. An added complication is that organizations often have complex and poorly articulated objectives and lack a consistent methodology for determining how well alternative investments measure up against those objectives. The field of decision analysis (or DA) has recognized the ubiquity of resource allocation problems for three or four decades. The uncertainty of future benefits clearly calls for application of standard approaches for modelling uncertainties – such as decision trees, influence diagrams, and Monte Carlo simulation – and for using utility functions to model decision makers' risk preference. The problem of many objectives is an opportunity to apply multi-attribute utility or value models – or similar techniques – to decompose a difficult multidimensional problem into a series of relatively simple judgments about one-dimensional preferences and relative trade-off weights. Application of these models in real-world organizations also requires state-of-the-art techniques and processes for eliciting probabilities and preferences, performing sensitivity analyses, and presenting clear and compelling results and recommendations. As such, these problems clearly fall within the mainstream of DA methods and practice. However, in contrast to traditional DA, it is important to recognize that resource allocation is a portfolio problem, where the decision makers must choose the best subset of possible projects or investments subject to resource constraints. Several other disciplines have also focused on the portfolio aspects of resource allocation problems, particularly operations research (OR) and finance. OR techniques such as mathematical programming are directly relevant to finding optimal portfolios of projects subject to resource constraints. However, mathematical optimization is not typically considered part of routine DA practice, and combining the best of
decision analytic models with mathematical programming has received only limited attention. DA practitioners commonly solve portfolio problems with heuristic approaches, such as prioritization based on benefit–cost ratios, despite foundational work done by Donald Keefer, Craig Kirkwood, and their colleagues in the 1970s. In finance, the considerable body of work on applications of OR to financial portfolios is relevant to understanding trade-offs of risk and return. However, finance and DA use different approaches and make different assumptions for assessing and modelling risk preferences.

Portfolio DA also poses significant practical challenges. In large organizations, hundreds or thousands of projects and programs may be considered for inclusion in a portfolio. Assessing multiple evaluation criteria across hundreds or thousands of projects can be daunting. If probability assessments are required, then good DA practice typically emphasizes careful and rigorous assessment and modelling of the uncertainties. This amounts to building hundreds of models and assessing probability distributions for many thousands of uncertain parameters. In my own experience working on resource allocation in hospitals, healthcare systems, and large manufacturing firms, these organizations sometimes struggle to develop even straightforward deterministic financial models for their projects. Conducting a full-blown decision analysis for each project is simply infeasible. Stated quite simply, the most critical scarce resources are the resources required to obtain and evaluate data, assess probabilities, construct models, and evaluate alternative portfolios of projects. Decision makers typically start with limited information about each project and portfolio, and need guidance about which projects to analyze, and at what level of detail.

Given the ubiquity of portfolio problems, the rich intersection of multiple relevant disciplines, and the very real practical challenges associated with these problems, one would naturally expect to see decision analysts rise to the challenge. The applied side of the decision analysis profession appears to have responded – particularly in oil and gas production, pharmaceuticals, and military applications. A number of vendors have also responded by providing commercial software tools specifically designed to support portfolio DA. Despite the apparent success of portfolio DA in practice, there has been surprisingly limited attention to portfolio theory and methods in academic DA. With only a few notable exceptions, introductory DA textbooks either provide no discussion of portfolio methods and problems or, at best, a cursory overview of a few applications. Nor have the theory and methods of portfolio DA received more than limited attention in the research literature.

In recent years, in part to address this relative neglect, Ahti Salo, Jeffrey Keisler, and Alec Morton organized a series of invited sessions at various national and international conferences. These sessions brought together researchers and practitioners with a shared interest in portfolio decision analysis. I believe these sessions succeeded in encouraging academic-practitioner communication, and perhaps have persuaded more academic decision analysts to focus their attention on portfolio problems, issues, and challenges. This volume, Portfolio Decision Analysis: Improved Methods for Resource Allocation, is the next logical step in Salo, Keisler, and Morton's efforts to foster a
rich and productive interaction of theory, method, and application in portfolio DA. I commend and applaud the editors' objectives, and I am pleased to see that they have succeeded in attracting a diverse collection of contributions from a distinguished set of authors. The editors do an excellent job of providing an overview of these contributions in their introductory chapter. I simply conclude by encouraging you to take up where these authors leave off, either by furthering the development of new theory and methods, or by putting these theories and methods to the test in real organizations struggling to make the best use of scarce resources.

Don N. Kleinmuntz, Ph.D.
Executive Vice President
Strata Decision Technology, L.L.C.
Champaign, IL, USA
Acknowledgments
This book is the result of the collective efforts of many people. Chapter authors have contributed in the most direct and tangible way, but there are many others who deserve mention, not least for having convinced us of the need for a dedicated book on Portfolio Decision Analysis and for having provided us with the intellectual foundation and the financial means to bring such a book to fruition. In particular, we wish to thank the panelists in the series of panel discussions that we organized at the INFORMS and EURO conferences in 2008–2010: José Figueira, Don Kleinmuntz, Jack Kloeber, Jim Matheson, Larry Phillips, Martin Schilling, Theo Stewart, Christian Stummer, and Alexis Tsoukiàs (in alphabetical order). These panelists presented thought-provoking ideas and engaged prominent scholars and practitioners like Carlos Bana e Costa, Robert Dyson, Ron Howard, and Carl Spetzler in lively discussion. We are pleased to note that echoes of these discussions – which were thus enabled by EURO and INFORMS conference organizers – are reflected in many places in this book.

The three of us are indebted to our educators who instilled in us a deep and abiding interest in decision analysis and who have mentored us in our professional lives. In particular, Ahti Salo wishes to thank Raimo P. Hämäläinen for an excellent research environment and numerous occasions for fruitful collaboration; Alec Morton acknowledges the support and guidance of Larry Phillips and Carlos Bana e Costa, who introduced him to this area, and thanks them for many long and insightful discussions. Jeffrey Keisler thanks his former employers and colleagues at General Motors Corp., Argonne National Laboratory and Strategic Decisions Group for opportunities to work on, experiment with, and inquire about portfolio applications. He also thanks the many other decision analysis friends and colleagues (including a substantial number of the contributors to this book) who have been engaged in this discussion over the years. Jeffrey also thanks Elana Elstein for her support during the development of this book.

We have received financial support from several sources. The Academy of Finland provided the grant which allowed Ahti Salo to devote a substantial share of his time to the development of this book, while the Game Changers project made it
possible for him to work on the book together with Alec Morton in December 2010 at IIASA (International Institute for Applied Systems Analysis). The Suntory and Toyota International Centres for Economics and Related Disciplines (STICERD) at the London School of Economics provided a grant which allowed us to work together in Helsinki and London in summer 2010. Finally, we wish to express our most sincere gratitude to the authors with whom it has been our pleasure to work and who have succeeded in writing 15 chapters which, taken together, push the frontiers of research and practice in this fascinating subfield of decision analysis still further.
Contents

Part I Preliminaries

1 An Invitation to Portfolio Decision Analysis
Ahti Salo, Jeffrey Keisler, and Alec Morton

2 Portfolio Decision Quality
Jeffrey Keisler

3 The Royal Navy's Type 45 Story: A Case Study
Lawrence D. Phillips

Part II Methodology

4 Valuation of Risky Projects and Illiquid Investments Using Portfolio Selection Models
Janne Gustafsson, Bert De Reyck, Zeger Degraeve, and Ahti Salo

5 Interactive Multicriteria Methods in Portfolio Decision Analysis
Nikolaos Argyris, José Rui Figueira, and Alec Morton

6 Empirically Investigating the Portfolio Management Process: Findings from a Large Pharmaceutical Company
Jeffrey S. Stonebraker and Jeffrey Keisler

7 Behavioural Issues in Portfolio Decision Analysis
Barbara Fasolo, Alec Morton, and Detlof von Winterfeldt

8 A Framework for Innovation Management
Gonzalo Arévalo and David Ríos Insua

9 An Experimental Comparison of Two Interactive Visualization Methods for Multicriteria Portfolio Selection
Elmar Kiesling, Johannes Gettinger, Christian Stummer, and Rudolf Vetschera

Part III Applications

10 An Application of Constrained Multicriteria Sorting to Student Selection
Julie Stal-Le Cardinal, Vincent Mousseau, and Jun Zheng

11 A Resource Allocation Model for R&D Investments: A Case Study in Telecommunication Standardization
Antti Toppila, Juuso Liesiö, and Ahti Salo

12 Resource Allocation in Local Government with Facilitated Portfolio Decision Analysis
Gilberto Montibeller and L. Alberto Franco

13 Current and Cutting Edge Methods of Portfolio Decision Analysis in Pharmaceutical R&D
Jack Kloeber

14 Portfolio Decision Analysis: Lessons from Military Applications
Roger Chapman Burk and Gregory S. Parnell

15 Portfolio Decision Analysis for Population Health
Mara Airoldi and Alec Morton

About the Editors
About the Authors
Index
Contributors
Mara Airoldi, Department of Management, London School of Economics and Political Science, London, UK

Gonzalo Arévalo, International RTD Projects Department, Carlos III National Health Institute, Ministry of Science and Innovation, Madrid, Spain

Nikolaos Argyris, Management Science Group, Department of Management, London School of Economics and Political Science, London, UK

Roger Chapman Burk, Department of Systems Engineering, US Military Academy, West Point, NY 10996, USA

Zeger Degraeve, Department of Management Science & Operations, London Business School, Sussex Place, Regent's Park, London NW1 4SA, UK

Bert De Reyck, Management Science & Innovation, University College London, Gower Street, London WC1E 6BT, UK; Department of Management Science & Operations, London Business School, Sussex Place, Regent's Park, London NW1 4SA, UK

Barbara Fasolo, Department of Management, London School of Economics and Political Science, London, UK

José Rui Figueira, CEG-IST, Instituto Superior Técnico, Universidade Técnica de Lisboa, Lisbon, Portugal; LORIA, Département de Génie Industriel, École des Mines de Nancy, Nancy, France

L. Alberto Franco, Warwick Business School, Coventry, UK

Johannes Gettinger, Institute of Management Science, Vienna University of Technology, Theresianumgasse 27, 1040 Vienna, Austria

Janne Gustafsson, Ilmarinen Mutual Pension Insurance Company, Porkkalankatu 1, 00018 Ilmarinen, Helsinki, Finland

Jeffrey Keisler, Management Science and Information Systems Department, College of Management, University of Massachusetts Boston, Boston, MA, USA

Elmar Kiesling, Department of Business Administration, University of Vienna, Bruenner Str. 72, 1210 Vienna, Austria

Jack Kloeber, Kromite, LLC, Boulder, CO, USA

Juuso Liesiö, Systems Analysis Laboratory, Aalto University School of Science, P.O. Box 11100, 00076 Aalto, Finland

Gilberto Montibeller, Management Science Group, Department of Management, London School of Economics and Political Science, London, UK

Alec Morton, Management Science Group, Department of Management, London School of Economics and Political Science, London, UK

Vincent Mousseau, Laboratoire Génie Industriel, École Centrale Paris, Grande Voie des Vignes, 92295 Châtenay-Malabry Cedex, France

Gregory S. Parnell, Department of Systems Engineering, US Military Academy, West Point, NY 10996, USA

Lawrence D. Phillips, Management Science Group, Department of Management, London School of Economics and Political Science, London, UK

David Ríos Insua, Royal Academy of Sciences, Valverde 22, 28004 Madrid, Spain

Ahti Salo, Systems Analysis Laboratory, Aalto University School of Science, P.O. Box 11100, 00076 Aalto, Finland

Julie Stal-Le Cardinal, Laboratoire Génie Industriel, Grande Voie des Vignes, 92295 Châtenay-Malabry Cedex, France

Jeffrey S. Stonebraker, Department of Business Management, College of Management, North Carolina State University, Campus Box 7229, Raleigh, NC 27695-7229, USA

Christian Stummer, Faculty of Business Administration and Economics, Universitaetsstr. 25, 33615 Bielefeld, Germany

Antti Toppila, Systems Analysis Laboratory, Aalto University School of Science, P.O. Box 11100, 00076 Aalto, Finland

Rudolf Vetschera, Department of Business Administration, University of Vienna, Bruenner Str. 72, 1210 Vienna, Austria

Detlof von Winterfeldt, International Institute for Applied Systems Analysis, Laxenburg, Austria

Jun Zheng, Laboratoire Génie Industriel, Grande Voie des Vignes, 92295 Châtenay-Malabry Cedex, France
Part I
Preliminaries
Chapter 1
An Invitation to Portfolio Decision Analysis Ahti Salo, Jeffrey Keisler, and Alec Morton
Abstract Portfolio Decision Analysis (PDA) – the application of decision analysis to the problem of selecting a subset or portfolio from a large set of alternatives – accounts for a significant share, perhaps the greater part, of decision analysis consulting. PDA has a sound theoretical and methodological basis, and its ability to contribute to better resource allocation decisions has been demonstrated in numerous applications. This book pulls together some of the rich and diverse efforts as a starting point for treating PDA as a promising and distinct area of study and application. In this introductory chapter, we first describe what we mean by PDA. We then sketch the historical development of some key ideas, outline the contributions contained in the chapters and, finally, offer personal perspectives on future work in this sub-field of decision analysis that merits growing attention.
1.1 What Is Portfolio Decision Analysis?

Practically all organizations and individuals have goals that they seek to attain by allocating resources to actions. Industrial firms, for example, undertake research and development (R&D) projects, expecting that these projects will allow them to introduce new products that generate growing profits. Municipalities allocate public funds to initiatives that deliver social and educational services to their citizens. Regulatory bodies attempt to mitigate harmful consequences of human activity by imposing alternative policy measures which contribute to objectives such as safety and sustainability. Even many individual decisions can be viewed analogously. For instance, university students need to consider what academic courses and recreational pursuits to engage in, recognizing that time is a limited
resource when aspiring to complete one's studies successfully and on schedule while having a rewarding social life. Decision problems such as these are seemingly different. Yet from a methodological point of view, they share so many similarities that it is instructive to consider them together. Indeed, all the above examples involve one or several decision makers who are faced with alternative courses of action which, if implemented, consume resources and enable consequences. The availability of resources is typically limited by constraints, while the desirability of consequences depends on preferences concerning the attainment of multiple objectives. Furthermore, the decision may affect several stakeholders who are impacted by it even if they are not responsible for it. There can be uncertainties as well: for instance, at the time of decision making it may be impossible to determine what consequences the actions will lead to or how many resources they will consume. These, in short, are the key concepts that characterize decision contexts where the aim is to select a subset of several actions so as to contribute to the realization of consequences that are aligned with the decision maker's preferences. They are also key parts of the following definition of Portfolio Decision Analysis:
By Portfolio Decision Analysis (PDA) we mean a body of theory, methods, and practice which seeks to help decision makers make informed multiple selections from a discrete set of alternatives through mathematical modeling that accounts for relevant constraints, preferences, and uncertainties.
A few introductory observations about this definition are in order. To begin with, theory can be viewed as the foundation of PDA in that it postulates axioms that characterize rational decision making and enable the development of functional representations for modeling such decisions. Methods build on theory by providing practicable approaches that are compatible with these axioms and help implement decision processes that seek to contribute to improved decision quality (see Keisler, Chapter 2). Practice consists of applications where these methods are deployed to address real decision problems that involve decision makers and possibly even other stakeholders (see, e.g. Salo and Hämäläinen 2010). Thus, applications build on decision models that capture the salient problem characteristics, integrate relevant factual and subjective information, and synthesize this information into recommendations about what subset of alternatives (or portfolio) should be selected. In general, PDA follows the tradition of decision analysis (and, more broadly, of operations research) in that it seeks to improve decision making by using mathematical models in the development of decision recommendations. In effect, the breadth of application domains where PDA has already been applied, combined with the strategic nature of many resource allocation decisions, suggests that PDA is one of the most important branches of decision analysis (see, e.g. Kleinmuntz 2007).
This notwithstanding, PDA does not seem to have received comparable attention in the literature. In part, this may be due to its many connections to other areas. For example, a considerable share of applied research is scattered across specialized journals that are not regularly read by methodologically oriented researchers or practicing decision analysts.

PDA differs somewhat from the standard decision analysis paradigm in its focus on portfolio choice (as opposed to the choice of a single alternative from a set, such as choosing a single site to drill for oil). In this setting, it is possible to put forth analytical arguments as to why the pooling of several single choice problems into a more encompassing portfolio choice problem can be beneficial. First, the solution to the portfolio problem will be at least as good, because the combination of single choice problems, when considered together, constitutes a portfolio problem where there is a constraint to choose one alternative from each single choice problem. Thus, when considering these single choice problems together, the removal of these (possibly redundant) single choice constraints may lead to a better solution. Second, if the single choice problems are interconnected – for instance, due to the consumption of shared resources or interactions among alternatives in different subsets – the portfolio frame may provide a more realistic problem representation and consequently better decision recommendations. Although these remarks do not account for concerns such as computational complexity or possible delays in formulating portfolio problems, they suggest that some problems that have been addressed by looking at independent single choice sub-problems may benefit from re-formulation as portfolio problems.
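To make the pooling argument concrete, the selection problem can be written in symbols. The formulation below is our own illustrative sketch, not one drawn from any particular chapter: z_j indicates whether alternative j is selected, v_j is its value, c_j its resource consumption, and B the available budget.

```latex
% A minimal portfolio selection model over n candidate alternatives:
\max_{z \in \{0,1\}^n} \; \sum_{j=1}^{n} v_j z_j
\qquad \text{subject to} \qquad \sum_{j=1}^{n} c_j z_j \le B .
% Pooling m single choice problems corresponds to adding the constraints
% \sum_{j \in J_k} z_j = 1 for k = 1, \dots, m, where J_k contains the
% alternatives of the k-th problem. Dropping these (possibly redundant)
% constraints can only increase the optimal value, which is the first
% argument above.
```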
As a rule, PDA problems involve more alternatives than single choice problems. From the viewpoint of problem structuring, a key question in PDA is therefore what alternatives can be meaningfully analyzed as belonging to the "same" portfolio. While PDA methods do not impose inherent constraints on what alternatives can be analyzed together, there are nevertheless considerations which suggest that some alternatives can be more meaningfully treated as a portfolio. This is the case, for instance, when the alternatives consume resources from the same shared pool; when the alternatives are of the same "size" (measured, e.g. in terms of cost, or the characteristics of anticipated consequences); when the future performance of alternatives is contingent on decisions about what other alternatives are selected; or when the consideration of alternatives together as part of the same portfolio seems justified by shared responsibilities in organizational decision making.

The fact that there are more alternatives in portfolio choice also suggests that the stakes may be higher than in single choice problems, as measured, for example, by the resources that are committed or by the significance of the economic, environmental or societal impacts of consequences. As a result, the adoption of a systematic PDA approach may lead to particularly substantial improvements in the attainment of desired consequences. But apart from the actual decision recommendations, there are other rationales that can be put forth in favor of PDA-assisted decision processes. For example, PDA enhances the transparency of decision making, because the structure of the decision process can be communicated to stakeholders and the process leaves an auditable trail of the evaluation of alternatives with regard to the relevant criteria. This, in turn, is likely to enhance the efficiency of later implementation phases and the accountability of decision makers.

Overall, the aim of this book is to consolidate and strengthen PDA as a vibrant sub-discipline of decision analysis that merits the growing attention of researchers and practitioners alike. Towards this end, we offer 15 chapters that present new theoretical and methodological advances, describe high-impact case studies, and analyze "best practices" in resource allocation in different application domains. In this introductory chapter, we draw attention to key developments in the evolution of PDA. We then summarize the key contributions in each chapter and, finally, outline avenues for future work, convinced that there are exciting opportunities for theoretical, methodological and practical work.
1.2 Evolution of Portfolio Decision Analysis

In this section, we discuss developments which have contributed to the emergence of PDA over the past few decades, based on our reading of the scientific literature and personal conversations with some key contributors. Our presentation is intended as neither a systematic historical account nor a comprehensive literature review but, rather, as a narrative that will give the reader a reasonable sense of how what we call PDA has evolved. By way of its origins, PDA can be largely viewed as a subfield of decision analysis (DA). We will therefore give most of our attention to developments in DA, particularly from the viewpoint of activities in the USA. However, we start by briefly discussing interfaces to neighboring fields where related quantitative methods for resource allocation have also been developed.
1.2.1 Financial Portfolio Optimization

As a term, "portfolio" is often associated with finance and, in particular, with optimization models that provide recommendations for making investments into market-tradable assets that are characterized by expected return and risk. The evolution of such models began with the celebrated Markowitz mean-variance model (Markowitz 1952) and continued with the development of the Capital Asset Pricing Model (Sharpe 1964). Already in the 1960s, optimization techniques were becoming increasingly important in forming investment portfolios that would exhibit desired risk and return characteristics. Unlike PDA models, however, models for financial portfolio optimization typically consider tradable assets whose past market performance provides some guidance for estimating the risk-return characteristics of assets. Another difference is that the decision variables in financial optimization models are usually continuous so that assets can be purchased or sold in any given quantities, whereas PDA models
are built for selection problems which usually involve binary choices. These differences notwithstanding, advances in financial optimization are relevant to PDA. For example, in many cases financial optimization models have spearheaded the adoption of risk measures that are applicable also in PDA (see, e.g. Artzner et al. 1999).
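For reference, one common statement of the mean-variance model is the following; the notation (weights w, expected returns mu, covariance matrix Sigma, risk-aversion parameter lambda) is ours, and this is a textbook-style sketch rather than Markowitz's original formulation.

```latex
% Mean-variance portfolio selection in one common form:
\max_{w \ge 0} \; \mu^{\top} w \;-\; \lambda\, w^{\top} \Sigma\, w
\qquad \text{subject to} \qquad \mathbf{1}^{\top} w = 1 .
```

The contrast with PDA is visible in the decision variables: here w is a vector of continuous holdings, whereas the PDA formulations discussed in this chapter select projects with binary variables, which makes the selection problem combinatorial.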
1.2.2 Capital Budgeting Models

While financial optimization deals with portfolios of tradeable assets, capital budgeting is typically concerned with intra-organizational investment, most notably the allocation of resources to real assets (Lorie and Savage 1955; Brealey and Myers 2004). Capital budgeting models have early antecedents in the optimization models that were formulated for resource allocation and asset deployment during the Second World War (see, e.g. Mintzberg 1994). Related models were then adopted in the corporate world. The main responsibility would often rest with a centralized corporate finance function that would establish fixed top-down budgets for divisions or departments which, in turn, would take detailed decisions on what activities to pursue with these budgets. Technically, some archetypal capital budgeting models can be formulated as linear programming problems (Asher 1962), even if the explicit modeling of problem characteristics such as nonlinearities may call for more general mathematical programming approaches. Capital budgeting and its variants received plenty of attention in finance and operations research in the 1960s and 1970s (see, e.g. Weingartner 1966, 1967). For example, program budgeting sought to elaborate what resources would be used by the elements of a large public sector program and how program objectives would be affected by these elements, in order to improve the transparency of planning processes and the efficiency of resource allocation (Haveman and Margolis 1970). Marginal analysis, in turn, focused on the comparative analysis of marginal costs and marginal benefits as a step towards finding the best allocation of resources across activities, in recognition of broader concerns that are not necessarily under the direct control of the decision maker (Fox 1966; see also Loch and Kavadias 2002). At present, formalized approaches to capital budgeting are practically pervasive in corporations and public organizations. Yet it is our impression that it is still rare to find capital budgeting models that make systematic use of all the advanced features of decision modeling – such as probabilities, multiple criteria, or dynamic modeling with real options – that belong to the modern PDA toolkit.
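To illustrate the classical formulation, the sketch below solves a small Lorie-Savage-style instance with per-period budgets by exhaustive search. The project data are hypothetical, and realistic instances would be handled with (mixed) integer programming in the spirit of Weingartner (1966, 1967).

```python
from itertools import product

# Hypothetical data: (name, NPV, outlay in period 1, outlay in period 2).
projects = [("A", 14, 12, 3), ("B", 17, 54, 7), ("C", 17, 6, 6),
            ("D", 15, 6, 2), ("E", 40, 30, 35)]
budgets = (50, 20)  # capital available in each period

def best_portfolio(projects, budgets):
    """Maximize total NPV subject to per-period capital budgets.

    Exhaustive search over the 2^n accept/reject vectors; fine for
    small n, but larger instances call for integer programming.
    """
    best_z, best_npv = None, float("-inf")
    for z in product((0, 1), repeat=len(projects)):
        outlay = [sum(zi * p[2 + t] for zi, p in zip(z, projects))
                  for t in range(len(budgets))]
        if all(o <= b for o, b in zip(outlay, budgets)):
            npv = sum(zi * p[1] for zi, p in zip(z, projects))
            if npv > best_npv:
                best_z, best_npv = z, npv
    chosen = [p[0] for zi, p in zip(best_z, projects) if zi]
    return chosen, best_npv

print(best_portfolio(projects, budgets))  # (['A', 'C', 'D'], 46)
```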
1.2.3 Quantitative Models for Project Selection

Project selection was recognized as an important decision problem in operations research by the late 1950s (Mottley and Newton 1959). Methods for this problem help
determine which combinations of proposed projects or activities an organization should fund in order to maximize the attainment of its objectives (see, e.g. Heidenberger and Stummer 1999 for a good review). Between the mid-1960s and the mid-1970s, these methods received plenty of attention in the literature. Growing demand for these methods was generated by large centralized R&D departments of major corporations where R&D managers had to decide which project proposals to fund. Helping to meet this early demand were the efforts of consulting firms such as Arthur D. Little (see Roussel et al. 1991, for a review). By the mid-1970s, the literature on project selection methods had grown to the point where various multicriteria and multiobjective approaches (Lee and Lerro 1974; Schwartz and Vertinsky 1977), probabilistic approaches, and even approaches that account for project interactions were being proposed (see Aaker and Tyebjee 1978; Heidenberger and Stummer 1999; Henriksen and Traynor 1999). These approaches typically led to mathematical optimization formulations where the desired results were captured by an objective function that was to be maximized subject to budgetary and other resource constraints. The scope of these approaches was gradually extended so that the projects would be assessed not only with regard to financial return but also in view of objectives such as strategic fit and innovativeness (see, e.g. Souder 1973).

Research on project selection methods then slowed markedly in the mid-1970s. One plausible cause is that organizations became less centralized, with less formal planning cycles. It has also been suggested that the methods became excessively complex and time consuming, even to the point where their practical utility was being undermined (cf. Shane and Ulrich 2004). Experiences from this period should serve as a reminder that while a high degree of mathematical sophistication makes it possible to consider complex project portfolios elaborated at a great level of detail, the successful uptake of these methods is still crucially dependent on securing an appropriate fit with actual organizational decision-making practices.
1.2.4 Decision Analysis

Although linkages to finance, capital budgeting and project selection are relevant, the origins and philosophy of PDA stem from the field of decision analysis. Prior to 1960, Ramsey (1978), de Finetti (1937, 1964), and Savage (1954) had built a mathematical theory of decision under uncertainty based on subjective probability. Edwards (1954, 1961) then outlined a research agenda for exploring the descriptive accuracy of subjective expected utility theory, but there was yet no decision analysis in the sense of a managerial technology based on these concepts. In this setting, Pratt et al. (1965) made key steps towards the development of an advanced statistical decision theory with a stronger orientation towards prospective applications than the philosophical discussions and theoretic system-building of early Bayesian statisticians. In this context, von Neumann–Morgenstern utility functions (von Neumann and Morgenstern 1947) were increasingly seen as a viable tool for helping decision makers clarify their attitudes to risk.
As an applied field, decision analysis was established in the 1960s (Raiffa and Schlaifer 1961; Howard 1966; Raiffa 1968). It was quickly recognized as essential particularly in capital-intensive industries in the USA. For example, oil wildcatters' drilling decisions (Grayson 1960) served as a testbed for decision tree analysis, resulting in some of the most canonical pedagogical examples of high-impact applications. By the end of the 1960s, decision analysis had gained prominence in management education at leading US research universities (e.g. Raiffa 1968). For instance, Howard began to develop a Stanford-based group to expand the uptake of DA in addressing important decision problems (see, e.g. Howard and Matheson 1984).

Multi-attribute utility theory (MAUT) was formulated by Keeney and Raiffa (1976), who built on work on utility theory (e.g. Fishburn 1964, 1970) and measurement theory (Krantz et al. 1971). MAUT and its riskless sister theory, multiattribute value theory (MAVT), were subsequently applied to a growing range of public policy problems where the scoring of alternatives with regard to multiple attributes contributed to more transparent and systematic decision making. The application of DA also expanded through the Stanford group's development of the DA cycle (Howard 1968; see also Keeney 1982), an organizational decision support process designed to facilitate the analysis of decisions. Contemporaneous advances included tornado diagrams, which show the sensitivity of DA results to uncertainties, and influence diagrams, which are simple yet powerful graphical tools for representing decision models and for structuring the elicitation of probability assessments (Howard 1988).

A more interactive approach to decision analysis, decision conferencing, evolved from consulting activities where relatively simple quantitative models were employed to facilitate interactive decision processes. For example, the firm Decisions and Designs applied decision analytic methods with particular attention to the process of modeling (see Buede 1979). In the 1970s, this group started to work with the US Marine Corps using a PDA approach called Balanced Beam, which was to last for several decades. Beginning in the late 1970s, Cam Peterson, his colleagues and later also Larry Phillips began to expand their work on decision conferencing with multiple parties (Phillips and Bana e Costa 2007).

An alternative family of methods has been developed largely by francophone researchers under the heading "decision aid" (Roy 1985, 1991). The general philosophy of these methods differs somewhat from the above approaches, in that considerable emphasis is placed on the constructive nature of the interaction between the analyst and the decision maker, as well as the social context of the decision. Although similar ideas also surface in the decision analysis tradition (Phillips 1984), they are arguably even more prominent in the decision aid tradition, most notably in the so-called "outranking" approaches that employ different procedures in the aggregation of criterion scores. Many of the assumptions of classical decision theory – such as complete comparability and the ability to fully discriminate between alternatives – are not taken for granted by proponents of outranking approaches (see, e.g. Bouyssou and Vansnick 1986; Brans and Vincke 1985).
Overall, the theoretical and axiomatic basis of outranking approaches has a more discrete and ordinal flavor than that of value/utility-based approaches.
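For reference, the additive model that underlies most of the MAUT/MAVT scoring mentioned earlier in this section takes the following standard form (in our notation): alternative a is evaluated on m attributes, each single-attribute value function v_i is scaled to [0, 1], and the w_i are swing weights reflecting trade-offs.

```latex
V(a) = \sum_{i=1}^{m} w_i\, v_i(a_i),
\qquad \sum_{i=1}^{m} w_i = 1, \quad w_i \ge 0 .
```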
Another decision analytic tradition which departs from the axiomatic foundation of MAUT/MAVT has grown around the Analytic Hierarchy Process (AHP; see, e.g. Saaty 2005), which, however, has been criticized for methodological shortcomings such as the possibility of rank reversals, where the introduction or removal of an alternative may change the relative rankings of other alternatives (see, e.g. Salo and Hämäläinen 1997 and references therein). Despite such shortcomings, the AHP has won popularity among practitioners, and it has been applied to various PDA problems such as the selection of R&D projects (Ruusunen and Hämäläinen 1989).
1.2.5 From Decision Analysis to Portfolio Decision Analysis

As DA methods matured, they were increasingly applied to portfolio problems. As early as the 1960s, Friend and Jessop (1969) were experimenting with the use of formal methods in coordinating planning decisions within local government, and by the mid-to-late 1970s there was much practical work which could readily fall under our definition of PDA. Several methods for project selection, for instance, incorporated systematic approaches to quantification, such as project trees, which resembled the more formal DA methods deployed by decision analysts.

In particular, the DA group at the Stanford Research Institute (which later morphed into several consulting firms, most notably Strategic Decisions Group, or SDG) expanded its portfolio activities. Specifically, several companies (for examples, see Howard and Matheson 1984) requested consultants to examine their entire R&D plan once they had conducted analyses of isolated R&D projects and strategic decisions. In these analyses, the cost, the probability of success (in some cases as the combined probability of clearing multiple technical hurdles), and the market value given success (based on forecasts of market share, price, etc.) were first assessed for each project using DA methods. Combining these analyses at the portfolio level offered several useful outputs. For instance, after the expected net present values (ENPVs) had been calculated for each project, the projects could be sorted in order of "bang-for-the-buck" (calculated as the ENPV of a project divided by its cost) in order to establish the efficient frontier. For the given budget, the highest value portfolio could then be identified as the intersection of the budget line with this frontier. This portfolio typically contained a substantially better set of projects than those the company would have selected without the analysis. These kinds of successful DA engagements were conducted at major companies such as Eastman-Kodak (Rzasa et al. 1990; Clemen and Kwit 2001) and Du Pont (Krumm and Rolle 1992).
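The "bang-for-the-buck" procedure just described is simple enough to sketch in a few lines of code. The data below are hypothetical, and the ratio rule is a heuristic: with indivisible projects, it can miss the true optimum when the budget line does not fall exactly on a frontier point.

```python
# Hypothetical inputs: (project, cost, ENPV), where each ENPV is the
# probability-weighted net present value assessed with DA methods.
projects = [("P1", 10, 30), ("P2", 25, 50), ("P3", 5, 20),
            ("P4", 40, 60), ("P5", 15, 33)]
budget = 55

# Rank projects by "bang-for-the-buck": ENPV per unit of cost.
ranked = sorted(projects, key=lambda p: p[2] / p[1], reverse=True)

# Fund projects down the ranking while the budget lasts; the cumulative
# (cost, value) points trace the efficient frontier, and the last point
# reached within the budget identifies the recommended portfolio.
portfolio, spent, value = [], 0, 0.0
frontier = [(0, 0.0)]
for name, cost, enpv in ranked:
    if spent + cost <= budget:
        portfolio.append(name)
        spent += cost
        value += enpv
        frontier.append((spent, value))

print(portfolio, spent, value)  # ['P3', 'P1', 'P5', 'P2'] 55 133.0
```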
Graphical structuring tools, too, have been essential in project and portfolio management. For example, while financial portfolios are often shown with plots of risk versus return for potential investments, the BCG matrix (Henderson 1970) prompts business managers to view their sets of products holistically by classifying them into quadrants of dogs, question marks, stars and cash cows, based on their current status and potential for growth and profitability (and companies were advised to prune their businesses to achieve a healthy mix). One related result from early PDA efforts is an analogous risk-return matrix (Owen 1984) where projects are placed into quadrants that reflect value and probability of success. When using such a matrix, decision makers, most notably portfolio managers, are encouraged to build a balanced portfolio with some "bread and butter" projects (high probability, not very high value) and some "oysters" (low probability but high potential value) so that there would be a stream of "pearls" (successful projects that were revealed as oysters were developed), and to eliminate "white elephants" (projects with low prospects for success and low potential value).

Researchers and consultants have also been developing dedicated decision analysis-based tools for portfolio choice problems. The DESIGN software was developed by Decisions and Designs (which later evolved into the Equity software), with a view to supporting interactive modeling in a decision conferencing setting (von Winterfeldt and Edwards 1986, pp. 397–399). By using project-level costs and benefits as inputs, the software produced what is now regarded as a classic display showing the efficient portfolios in a cost–benefit space. According to Larry Phillips (personal communication), the original motivation for DESIGN was not portfolio choice but negotiation. In this frame, "costs" and "benefits" were viewed not as different objectives of a single decision maker, but rather as the differing, but not completely opposed, objectives of different parties.

In the late 1970s, Keefer (1978) and Keefer and Kirkwood (1978) formulated resource allocation decisions as non-linear optimization problems where the objective function could be assessed as a multi-attribute utility function. Such combinations of formal optimization and decision analytic preference elicitation called for methodological advances, including the use of mixed integer programming in solving problems with multiple decision trees containing uncertainties. Also, approaches for three-point approximations were developed in response to the need to build optimization models where the probability distributions could be estimated at each point (Keefer and Bodily 1983). In effect, these developments foreshadow current PDA efforts to combine practical methods for assessing uncertainties and for eliciting preferences when deriving inputs for optimization models.
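Among the schemes examined by Keefer and Bodily (1983), the extended Pearson-Tukey approximation is perhaps the best known: a continuous distribution is replaced by its 5th, 50th and 95th percentiles with weights 0.185, 0.630 and 0.185. A minimal sketch, with hypothetical percentile assessments, follows.

```python
# Extended Pearson-Tukey approximation: a continuous distribution is
# summarized by its 5th, 50th and 95th percentiles, which receive the
# weights 0.185, 0.630 and 0.185, respectively.
PT_WEIGHTS = (0.185, 0.630, 0.185)

def pearson_tukey_mean(p5: float, p50: float, p95: float) -> float:
    """Approximate the mean of a distribution from three percentiles."""
    return sum(w * x for w, x in zip(PT_WEIGHTS, (p5, p50, p95)))

# Hypothetical elicited percentiles for one project's NPV (in $ million);
# each project's distribution can be discretized this way before the
# portfolio-level optimization.
print(pearson_tukey_mean(p5=-4.0, p50=10.0, p95=35.0))  # ~12.035
```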
At the same time, PDA approaches were applied to guide the development of major governmental project and R&D portfolios by practitioners in organizations such as Woodward Clyde, Inc. (Golabi et al. 1981) and US National Laboratories (Peerenboom et al. 1989), in a line of work that continues through today (see, for instance, Chapter 14 by Burk and Parnell in this book). In the 1980s and 1990s, various approaches to product portfolio management were developed, motivated by project management problems but largely uninformed by the formal discipline of decision analysis. These approaches employ various means of scoring projects heuristically (e.g. innovativeness, risk, profitability) as a step in providing guidance to the management of project lifecycles. Clients of DA firms, internal experts applying DA, and DA consultants were exposed to these approaches, while specialists in these approaches were in turn exposed to DA. This has led to a mingling of ideas which is not easy to track in detail. This notwithstanding, Cooper et al. (2001) provides an excellent comparison of many of these approaches which lie outside the mainstream of decision analysis.

In the 1990s, largely but not exclusively in the USA, the portfolio approach gained prominence at several major corporations. Together with this development, decision analysis consulting grew, and portfolio consultancy became a major part of its business. Portfolio projects were especially successful in the pharmaceutical industry (e.g. Sharpe and Keelin 1998) and the oil and gas industry (e.g. Skaf 1999; Walls et al. 1995) – perhaps because these industries make large capital outlays, are exposed to uncertain returns, have clearly staged decisions and face large numbers of comparable investment opportunities with a clear separation between the financial and the operational sides of the business. Many, if not most, large companies in these industries have formalized some sort of DA portfolio planning processes. Equally important, large numbers of people from these industries have received formal DA training. Thanks to this breadth of practice, scholars engaged in consulting activities began to codify principles for high-quality R&D decision-making based on the use of DA methods (Matheson and Matheson 1998).

Portfolio approaches have evolved with advances in software. For example, when spreadsheets and presentation tools became more widespread and user-friendly, it became possible to harness them in building companywide portfolio processes that were integrated with company project/finance databases. Indeed, companies such as General Motors (Kusnic and Owen 1992; Bordley et al. 1999; Barabba et al. 2002) established DA approaches to portfolio management as part of formal planning processes in their R&D, product groups, and even business units (Bodily and Allen 1999). When these DA groups gained expertise, they tended to develop and manage their processes with less outside help. Subsequently, these practitioners have formed networks such as the Decision Analysis Affinity Group (DAAG). Various smaller consulting firms have also sprouted, some with variations of the SDG type of approach, with a focus on project portfolio management. The resulting proliferation of decentralized activity has spawned innovations at the firm level and also conferences and books about lessons learned (e.g. Pennypacker and Retna 2009). Thus, practitioner-level knowledge sharing proceeds apace, with some, but not necessarily close, connections to the scholarly research community.

In the 1990s and 2000s, there has been an expansion of multicriteria decision-making approaches to portfolio problems in both public and private spheres. For example, Phillips and Bana e Costa have championed the application of PDA methods to significant resource allocation problems in the UK and Portuguese public sectors (see Phillips and Bana e Costa 2007, for a description of their approach and experiences). There has also been a trend towards processes that seek to inform decision makers and help them reduce the set of candidate solutions to a manageable size, partly based on the notion that a "unique" optimum solution is not necessarily best aligned with complex multi-stakeholder processes (Liesiö et al. 2007; Vilkkumaa et al. forthcoming).
Advances in computer and software technology have been important drivers, too, because increasing computing power
and visualization technology have permitted the near real-time presentation of what the model assumptions signify in terms of recommended portfolio decisions. Indeed, recent portfolio management systems (see, e.g. Portfolio Navigator of SmartOrg, Inc.) and integrated enterprise resource management systems collate project data and offer both standard and query-based portfolio graphs and reports.

After the turn of the millennium, we have seen a proliferation of PDA approaches that build on many of the above streams. Recent advances by Gustafsson and Salo (2005), Stummer and Heidenberger (2003), Stummer et al. (2009), Kavadias and Loch (2004) and Liesiö et al. (2007), for instance, exemplify new approaches that combine preference assessment techniques, multicriteria methods, decision trees, optimization algorithms and interactive software tools for engaging stakeholders in organizational decision-making processes. The nature of applied work seems to be evolving as well. Some companies in the pharmaceutical industry, for instance, that have employed simpler productivity indices to rank projects are now modeling project-level interactions in more detail and using advanced structured optimization methods. PDA approaches are also finding uses in a growing range of application domains, such as the selection of nature reserves (see, e.g. Bryan 2010).

In conclusion, PDA has well-established roots that go back to the very origins of operations research. It is based on sound approaches for problem structuring, preference elicitation, assessment of alternatives, characterization of uncertainties and engagement of stakeholders. Recent advances in "hard" and "soft" methodologies, together with improved computer technology and software tools, have fostered the adoption of PDA approaches so that many organizations are now explicitly thinking in terms of portfolios. PDA methods have gained ground especially in capital-intensive industries such as pharmaceuticals and energy, but they have also penetrated many other industries and public organizations where decision-making activities can benefit from participatory processes (cf. Hämäläinen 2003, 2004) through which the stakeholders' expertise is systematically brought to bear on the decisions.
1.3 Contributions in this Book

The 15 chapters in this book give an overview of the range of perspectives which can be brought to bear on PDA. These chapters are structured in three parts: Preliminaries (of which this "invitation" is the first chapter); Theory; and Applications. The chapters in Preliminaries expand on this chapter and give a stronger sense of what PDA is. The chapters in Theory present new theoretical advances in PDA, while the part on Applications illustrates the diversity and commonality of PDA practice across problem domains. We emphasize that there are important synergies between the three parts: the part on Theory, for example, presents advances that are relevant for future applied work, while the chapters in Applications demonstrate the diversity of decision contexts where PDA can be deployed.
1.3.1 Preliminaries

In Chapter 2, Keisler tackles the critical question: What is a "good" decision – and a good decision analysis – in a portfolio context? One perspective is provided by the celebrated decision quality chain of Matheson and Matheson (1998): Keisler argues that the dimensions of decision quality proposed by Matheson and Matheson – sound decision framing, diverse and well-specified alternatives, meaningful and accurate information, clear and operationally useful statements of value, logical synthesis, and an orientation towards implementation – provide a useful frame for elaborating the distinctive features of portfolio decisions. Keisler also reminds us that all modeling is a simplification of reality, and distinguishes four different levels of modeling detail in portfolio decisions. Costs and values may be observed more or less accurately; projects may be more or less well-specified; multiple objectives may be taken into account completely, incompletely or not at all; and synergies between projects may be considered to a varying degree of detail. Keisler also summarizes a body of evidence, based on simulation models, which sheds light on the likely loss in value from assuming away a particular aspect of modeling complexity. Thus, his work offers new perspectives on what constitutes a suitably detailed or even "requisite" model (see also Phillips 1984).

In Chapter 3, Phillips provides a detailed account of a successful high-profile application of PDA to the design of a warship for the UK's Royal Navy. His story begins with the breakdown of the Horizon project, a three-nation collaboration to produce a new warship, and the decision by the UK to strike out on its own. Despite this inauspicious start, the author helped the Ministry of Defence to make critical design decisions and arrive at an agreed design concept in only 15 months. A critical element of the multicriteria approach was the use of "decision conferences," facilitated modeling meetings with the representation and active engagement of key players and experts. With its rich detail, this chapter invites careful study by all those who are interested in the instructive details of "how to do it." Phillips also argues for the importance of being attentive to the social process, and of the role that PDA can play in helping members of an organization to construct preferences as a step towards agreeing on how to go forward.
1.3.2 Theory

The part on theory begins with the chapter "Valuation of Risky Projects and Illiquid Investments Using Portfolio Selection Models," where Gustafsson, De Reyck, Degraeve and Salo present an approach to the valuation of projects in a setting where the decision maker is able to invest in lumpy projects (which cannot necessarily be traded in the market) and financial assets (which can be traded). Starting from the assumption that future uncertainties can be captured through scenario trees and that alternative actions for the investment projects can
be explicated for these scenarios, their approach essentially generalizes standard financial methods for project valuation, most notably option pricing and "contingent claims analysis" (the authors call their method "contingent portfolio analysis"), resulting in an approach that is consistent with the standard decision analysis approach to investment under uncertainty. Gustafsson et al. also show that an important property of option pricing methodology, the equality of break-even buying and selling prices, generalizes to their environment. Overall, the approach is a significant and practically relevant extension of existing theory which benefits from the rigor of a DA framework.

In Chapter 5, Argyris, Figueira and Morton discuss multicriteria approaches. Building on the themes of Keisler, they discuss what constitutes a "good" DA process. They underscore the dangers of simply assuming, as is common in practice, that value functions over criteria are linear. While endorsing the view that analysis should be an interactive process, they point out that the term "interactive" is somewhat loose and, in fact, compatible with a range of different practices of varying degrees of interactivity. They then present two formal optimization models which can produce multicriteria non-dominated solutions based on limited preference information. They also discuss alternative uses of these models in an interactive setting.

A natural question about any problem-helping technique is "What does practice actually look like?" In Chapter 6, Stonebraker and Keisler provide intriguing insights into this question in the context of pharmaceutical drug development. In effect, drug development is a highly cost-intensive activity with clear-cut binary outcomes – either it leads to a marketable product or not – and thus one would expect DA methods to be particularly well received in this environment. This is indeed the case, and the (unnamed) company in question has built up an impressive database of its decision analyses. Yet Stonebraker and Keisler's analysis shows that there is considerable variability in the structure of the underlying DA models even in the same organization. This raises the question of whether such differences are warranted, while it also points back to themes in Chapter 2: How can analyses of such differences be exploited in order to improve practice?

Applying PDA in practice is essentially a human endeavor. This realization – which we endorse emphatically – implies that psychological and organizational issues are central. In Chapter 7, Fasolo, Morton and von Winterfeldt bring this perspective to bear on portfolio decision making. Taking their cue from the celebrated work of Tversky and Kahneman on heuristics and biases, they observe that much of decision modeling builds on normative assumptions that are made when seeking to guide decision behavior. In particular, they discuss how the (often implicit) assumptions underlying this normative stance can be violated behaviorally, drawing on both laboratory-based research on individual decision making and field experience of supporting organizational resource allocation decisions. One of their observations is that the portfolio frame itself can serve as a debiasing device, because it forces decision makers to think of decisions more broadly.

In Chapter 8, Arévalo and Ríos Insua turn their attention to the technology underpinning portfolio management in the context of innovation management.
They survey web-based tools for innovation management, and outline an IT system, called SKITES, for managing a portfolio of innovations either within an organization or in a collaborative network. SKITES users may have a sponsoring or facilitative role, or they may propose or assess innovative projects. The proposed system is based around a workflow in which proposals are sought, screened, evaluated and managed through to the delivery of promised benefits. This chapter serves to remind us of the potential that technology can have in ensuring that the decision-making process unfolds in an orderly manner, and that the players involved have access to the tools they need to help them structure choices and make thoughtful decisions.

Chapter 9, the last chapter in the Theory section, draws explicitly on psychological theory (like Fasolo, Morton and von Winterfeldt’s chapter) and, like Arévalo and Rios Insua’s, is concerned with the information technology which underpins PDA. Specifically, the authors, Kiesling, Gettinger, Stummer and Vetschera, are interested in the human–computer interface aspects of PDA software. They report an experiment comparing how users respond to two different information displays in a multicriteria setting – a classical display, namely the parallel coordinates plot, and a more innovative “heatmap” display. Their chapter presents a compelling case that the choice of display makes a striking difference both to how users experience the system and to their actual behavior in exploring the solution space. They conclude, echoing a theme that arises often in this book, that the most appropriate choice of information display may depend on task and user characteristics – that there may be no all-purpose “best” solution.
1.3.3 Applications

In Chapter 10, Le Cardinal, Mousseau and Zheng present an application of an outranking method to the selection of students for an educational program. This setting differs slightly from the typical PDA set-up: decisions are not simply yes/no, because the students are assigned to four categories – definitely yes, possibly yes, possibly no and definitely no; moreover, the constraints are not monetary but pertain to demands such as gender balance. Such a problem can be modeled as a sorting problem in which the set of objects is to be partitioned into ordered classes, each with different action implications. They employ the ELECTRE TRI method, a member of the ELECTRE family, combined with a mathematical programming model to arrive at an overall portfolio. Because the direct elicitation of preference parameters can be difficult, they allow the decision maker to provide judgments that characterize attractive solutions and place restrictions on the distribution of the items across the categories. Importantly, this chapter reminds us that in many cases one may wish to take a coordinated system of decisions that are more complex (and thus more interesting) than simply “do” or “don’t do”, and that the need for portfolio balance must be taken into account.
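To give a flavor of the sorting logic, the sketch below assigns candidates to ordered categories by comparing their criterion scores against category boundary profiles. It is a deliberately simplified, concordance-only variant of ELECTRE TRI’s pessimistic assignment rule – it omits the discrimination and veto thresholds of the full method – and all weights, profiles and scores are invented for illustration rather than taken from the chapter.

```python
# Simplified ELECTRE TRI-style sorting: assign each candidate to an ordered
# category by checking whether a weighted majority of criteria meet the
# category's lower boundary profile. Concordance-only; no vetoes.

CRITERIA_WEIGHTS = [0.4, 0.35, 0.25]      # e.g., grades, motivation, fit (illustrative)
BOUNDARIES = {                            # lower profile of each category, best first
    "definitely yes": [16, 14, 15],
    "possibly yes":   [13, 11, 12],
    "possibly no":    [10,  8,  9],
}
CUTTING_LEVEL = 0.6                       # required weighted majority

def outranks(scores, profile):
    """True if the weighted majority of criteria meet the profile."""
    support = sum(w for s, p, w in zip(scores, profile, CRITERIA_WEIGHTS) if s >= p)
    return support >= CUTTING_LEVEL

def assign(scores):
    # Pessimistic assignment: walk the boundaries from best to worst and stop
    # at the first profile the candidate outranks.
    for category, profile in BOUNDARIES.items():
        if outranks(scores, profile):
            return category
    return "definitely no"

for student, scores in {"A": [17, 15, 16], "B": [14, 10, 13], "C": [9, 7, 8]}.items():
    print(student, "->", assign(scores))   # A -> definitely yes, B -> possibly yes, ...
```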
Toppila et al. in Chapter 11 present a case study which describes the development of a PDA model for helping a high-technology telecommunication company make investments in its standardization activities. The model explicitly recognizes the uncertainties associated with these standardization investments and admits incomplete information about model parameters. Building on these inputs, the decision model helps determine which standardization activities should be either strengthened or weakened, based on a “core index” metric computed from the set of all non-dominated resource allocations. Another feature of the decision model is that it explicitly captures interaction terms, thus distinguishing it from, for example, the model of Phillips (Chapter 3) and contrasting with the advice of Phillips and Bana e Costa (2007), who recommend handling interactions outside the formal model.

In Chapter 12, Montibeller and Franco discuss the implementation of PDA in local government in the UK. They emphasize that local authorities have historically tended to budget in an incremental fashion, and describe an environment in which transparency and accessibility are more important than technical sophistication. Their approach – of which they provide a number of case studies – blends ideas about structuring drawn from the British “Problem Structuring Methods” tradition with the decision conferencing approach already foreshadowed in Chapter 3 by Phillips. They close by presenting some ideas for the development of a standardized toolkit – in the broadest sense, comprising both software and process templates – for practitioners of PDA in this domain.

Chapter 13 by Kloeber deals with the practice of PDA in the pharmaceutical sector. He describes an industry with high costs, long development cycles, and opportunities to earn huge sums of money – or fail utterly. In such an environment, a company’s fortunes are critically dependent on the quality of decisions made about R&D investment and the management of the R&D portfolio. Based on extensive consulting experience, Kloeber describes tools for both project-level and portfolio-level analysis, and discusses the role of both probabilistic and multicriteria models.

Burk and Parnell survey PDA activity in the military area in Chapter 14. They point out that in this decision context there is no single dimension of value, and consequently most of the work they survey uses multicriteria methods. Specifically, they present a comprehensive, structured literature review and describe six case studies in detail using a comparative framework which, in terms of its approach, may be useful for other authors looking to perform a structured review of practice. Drawing on this review, they discuss different modeling choices and levels of information (gold, silver, platinum and combined) in model development. Burk and Parnell are not bound to any particular modeling approach but, rather, report experiences from different approaches in different settings. Their chapter thus serves as a reminder of the importance of broadmindedness in modeling and the pragmatic use of whatever tools seem most appropriate for the problem at hand.

In the final chapter, Airoldi and Morton begin with an overview of PDA practice in public sector healthcare systems, especially in settings where the implementing body is a health authority that is responsible for a geographically defined population.
They draw attention to two indigenous traditions in healthcare, starting with the Generalized Cost Effectiveness tradition and continuing with an approach
derived from Program Budgeting and Marginal Analysis which incorporates certain multicriteria ideas. By way of contrast, they then present a case study of their own, using a decision conferencing approach that has some similarities with the work of Phillips (Chapter 3) and Montibeller and Franco (Chapter 12), but draws on ideas in health economics in scaling up benefits to the population level. The population aspect features significantly in Airoldi and Morton’s ideas for future work, and they describe concerns about the nature, timing and distribution of benefits and costs which are not explicitly incorporated in their process. They also speculate about whether models that handle these important and decision-relevant aspects could be developed without losing the accessibility which comes from the use of simpler models. The chapter therefore serves as a reminder that to be genuinely valuable and useful, PDA methods have to be adapted and contextualized to meet the specific challenges of a given domain.
1.4 Perspectives for the Future

Taken together, the above chapters highlight the diversity of contexts in which PDA can be deployed. Combined with the messages in our storyline on the evolution of PDA, they also suggest prospective topics for future work. We structure our open-ended discussion of these topics under three headings. We begin with “Embedding PDA in organizational decision making.” We then proceed by addressing topics in relation to “Extending PDA theory, methods and tools” and “Expanding the knowledge base.”
1.4.1 Embedding PDA in Organizational Decision Making

As pointed out by the then President of INFORMS, Don N. Kleinmuntz, in a panel discussion on PDA at the EURO XXIV meeting in 2009, the real value of PDA is realized when an organization institutionalizes the use of PDA in its planning processes. There are highly encouraging examples of such institutionalization: for instance, companies such as Chevron have built a strong competitive advantage by making systematic use of PDA methods (Bickel 2010). Yet such institutionalizations are not very common and, at times, when institutionalization has taken place, the early focus on portfolio decisions may have been diluted by data management or other implementation issues. Seen from this perspective, we need to know more about what factors contribute to successful institutionalization. Here, lessons from the broad literature on the diffusion of innovations (Rogers 2005) can offer insights. Further guidance can be obtained from reflective studies of attempted institutionalizations, along the lines of Morton et al. (2011), who have examined such attempts in two areas of the UK public sector. Furthermore, since there exists a rich knowledge base of “one-shot” PDA interventions, it is of interest to study how uses of PDA can be
integrated in organizational decision-making processes on a more recurrent basis. Thus, topics for future activities under this broad heading include the following:

• Transcending levels of organizational decision making: Resource allocation decisions often involve decision makers at different levels of the organization with differentiated roles and responsibilities, which leads to the question of how PDA activities at different levels can best be interlinked. For instance, PDA activities may focus on strategic and long-term perspectives that provide “top-down” guidance for operational and medium-term activities (see, e.g. Brummer et al. 2008). But one can also build “bottom-up” processes where individual departments first carry out their own PDA processes to generate inputs that are taken forward to higher levels of decision making within corporate management teams or executive boards (see, e.g. Phillips and Bana e Costa 2007). Such processes can be complementary, which means that designing the process most appropriate for a given organization calls for careful consideration.

• Interlinking organizations with PDA methods: In many sectors, the boundaries between organizations and their environment are becoming more porous. For example, the competitiveness of industrial firms is increasingly dependent on how effectively they collaborate in broader innovation networks. In parallel with this development, demands for transparency and accountability in policy making have fostered the development of methods which explicitly capture the interests of different stakeholder groups. Hence, PDA methods with group decision support capabilities for inter-organizational planning seem highly promising (see, e.g. Vilkkumaa et al. forthcoming). We therefore believe that more work is needed to explore which PDA approaches best suit problem domains that involve several organizations.

• Re-using PDA models and processes: Deploying a similar PDA process from one year to the next may offer substantial benefits: for instance, the costs of training are likely to be lower, and software tools can be re-used effectively. But there may be drawbacks as well. For instance, strict adherence to a “standardized” PDA process may stifle creativity; or, in some settings, it can be harmful if the characteristics of the decision context change so that the initial PDA formulation is no longer valid, say due to the emergence of additional decision criteria. As a result, there is a need to understand when PDA models can really be re-used on a recurrent basis. This stands in contrast to most reported case studies, which typically assume that the PDA model is built from scratch.

• Facilitating the diffusion of PDA models across problem domains: Even though the specifics of PDA models vary from one planning context to another, there are nevertheless archetypal models – such as the multi-attribute capital budgeting model – which can be deployed across numerous problem contexts subject to minor variations. This suggests that when PDA models are being developed, it may be useful to extract the salient modeling features of specific applications as a step towards promoting the diffusion of PDA approaches by way of “transplanting” models from one context to another. Yet, in such model transplantation, concerns of model validity merit close attention, because a model that is valid in one problem context may not be so in another.
1.4.2 Extending PDA Theory, Methods and Tools

Embedding PDA in organizational decision making calls for adequate theory, methods and tools. Over the years, the literature on PDA methods has grown so that there are now several tested and well-founded approaches for addressing most of the concerns that are encountered in PDA problems (including uncertainties, constraints, preferences, and interactions). The relevant tool set, too, has become broader and spans tools that range from the relatively simple (e.g. scoring templates) to the quite complicated (e.g. dedicated PDA tools with integrated optimization algorithms). As an overall remit for extending PDA methods and tools, we believe that the advancement of PDA benefits from an appropriate balance of theory, methods and practice. This means, for instance, that the development of exceedingly complex mathematical PDA models whose complexity is not motivated by the challenges of real problems may be counter-productive. In effect, the modest uptake of some of the most sophisticated R&D planning models can be seen as an indication of the risk of developing models that are impractical for one reason or another; hence, requisite models (Phillips 1984) may prove more useful than the results of more onerous and comprehensive modeling efforts. There are also important tradeoffs even in the framing of PDA processes. That is, while truly comprehensive formulations make it possible to provide more “systemic” recommendations that span more alternatives and even organizational units, such formulations can be more difficult to build, for instance, due to difficulties of establishing criteria that can be meaningfully applied to a broader set of alternatives. All in all, we believe there are many promising avenues for advancing the frontiers of theory, methods and tools:

• Theoretical development: Although PDA inherits much of its axiomatic machinery from the wider field of decision analysis, interpreting that theory in terms of the particular models used in PDA still requires additional work. A recent example of such an apparent gap in theory is the baseline problem documented in Clemen and Smith (2009) and Morton (2010), where certain plausible procedures for setting the zero point for value measurement can lead to rank reversals when the option set is changed. Moreover, while axiomatizations of the portfolio choice problem exist (Fishburn 1992), these axioms seem to require strong additive separability assumptions between projects. Thus, a further direction for theoretical research would be to investigate what sorts of relaxations of these separability assumptions might be possible (see Liesiö 2011).

• Advances in IT and software tools: Progress in ICT, mathematical programming and interactive decision support tools has been an essential enabler of the adoption of PDA approaches: for instance, the computing power of modern PCs makes it possible to solve large portfolio problems, while visualization tools help communicate salient data properties and decision recommendations. Advances in ICT also offer untapped potential for approaching new kinds of problems that
have not yet been addressed with PDA methods. For example, PDA methods can be deployed in collaborative innovation processes for eliciting and synthesizing information from stakeholder groups. A related use of PDA methods is participatory budget formation, where stakeholders have shared interests in resource allocation decisions and the priority-setting process must fulfill demanding quality standards, for instance, due to accountability requirements (Rios Insua et al. 2008; Danielson et al. 2008).

• Interfacing PDA tools with other IT systems: Many PDA software tools are standalone programs (see Lourenço et al. 2008 for a review of currently available tools). Yet portfolio decisions often rely on data that may reside in other IT systems. This would suggest that a greater degree of integration with other IT systems can be helpful in designing decision processes that are linked to sources of information and, moreover, generate information that can be harnessed in other contexts. Such aspirations need to be tempered by the recognition that the development of integrated software solutions and IT systems can be costly and time consuming. Moreover, it is possible that radical changes in organizational structures and processes lead to different requirements for decision support, so that existing IT systems lose some of their relevance. These caveats notwithstanding, the identification of viable opportunities for tool integration can expand the use of PDA tools.

• Harnessing incomplete information: Typically, the estimation of parameters for a PDA model requires a large number of judgments, which can be time consuming and cognitively burdensome to elicit. There exist some tools which reduce the elicitation effort by providing recommendations based on incomplete information (Liesiö et al. 2007, 2008; Argyris et al. 2011), or which help decision makers understand how sensitive the recommendations are with respect to model parameters (Lourenço et al. 2011); a small sketch of this idea follows this list. Nevertheless, there are still significant opportunities for tool development, for instance, by exploring how sensitive the results are to assumptions concerning the underlying representation of value or by transferring ideas from the reasonably well-developed multi-criteria context to the probabilistic context (see, e.g. Liesiö and Salo 2008).

• Building models for the design of PDA processes: Most PDA models help make choices from the set of available alternatives. But analytical models can also assist in the design of PDA-assisted decision support processes. For example, by capturing the salient characteristics of the PDA problem (e.g. number of alternatives, uncertainty of model parameters), it is possible to evaluate alternative PDA process designs ex ante and to guide decisions about screening thresholds or strategies for acquiring information about the alternatives. Indeed, as demonstrated by Keisler (2004), the combination of simulation and optimization approaches for evaluating PDA processes holds much promise in terms of improving the appropriate use of PDA methods.
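To make the idea of recommendations under incomplete information concrete, the sketch below enumerates the feasible portfolios of a toy two-criterion problem and retains those that are optimal for at least one admissible criterion weight; the share of these potentially optimal portfolios containing each project is then reported as a core index. This is only in the spirit of the robust portfolio modeling of Liesiö et al. and the core index of Toppila et al. (Chapter 11) – a grid scan over weights rather than their exact algorithms – and all project data are invented for illustration.

```python
from itertools import combinations

# Illustrative projects: (name, cost, score on criterion 1, score on criterion 2).
projects = [("a", 3, 7, 2), ("b", 4, 4, 8), ("c", 2, 5, 5), ("d", 3, 2, 9)]
BUDGET = 7
# Incomplete preference information: the weight on criterion 1 is only known
# to lie in [0.3, 0.7]; we scan a grid of admissible weights.
weights = [0.3 + 0.05 * i for i in range(9)]

def feasible_portfolios():
    for r in range(len(projects) + 1):
        for combo in combinations(projects, r):
            if sum(p[1] for p in combo) <= BUDGET:
                yield combo

# A portfolio is potentially optimal if it is best for some admissible weight.
potentially_optimal = set()
for w in weights:
    best = max(feasible_portfolios(),
               key=lambda combo: sum(w * p[2] + (1 - w) * p[3] for p in combo))
    potentially_optimal.add(tuple(sorted(p[0] for p in best)))

# Core index: share of potentially optimal portfolios that include each project.
for name, *_ in projects:
    share = sum(name in po for po in potentially_optimal) / len(potentially_optimal)
    print(f"project {name}: core index {share:.2f}")
```

With these invented numbers, project b appears in every potentially optimal portfolio (core index 1.0) and can be recommended even before the weight is pinned down, which is precisely the kind of conclusion such tools aim to support.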
1.4.3 Expanding the PDA Knowledge Base

This third heading covers topics on which work is needed to improve knowledge of how successful PDA-assisted planning processes can best be enabled. These topics have a strong element of empirical research aimed at better understanding the use of PDA methods and tools to improve resource allocation processes:

• Pursuing behavioral research: Decision analysis – more than any other subfield of operations research – has a close relationship with psychology and with behavioral decision theory in particular. From the perspective of psychology, PDA problems have several specific features. First, decision makers may have to consider very large numbers of alternatives, which may pose challenges (for visualization, for instance). Second, portfolio decisions are often taken in planning contexts which involve stakeholders from around the organization. These decisions are therefore social undertakings, which means that social psychological issues (such as what constitutes a persuasive argument) are important. Third, PDA problems may give rise to behavioral biases beyond those considered in the earlier literature on single choice problems. There is consequently a need for studies on when such biases are likely to occur, what impacts these biases have on decision quality, and how they can best be avoided. More generally, there is a need for knowledge about how decisions are shaped by the choice of PDA methodologies and what methods can be expected to work “best” in specific decision contexts.

• Enabling successful facilitation: Facilitation is often a significant part of PDA interventions, especially when the PDA activity takes place in workshops or “decision conferences” (Phillips and Phillips 1993). The existing literature on facilitation – much of which is associated with the discipline of Organizational Development – helps understand these interventions by addressing processual concerns such as the management of power, politics and personalities. In the PDA context, there is an additional challenge in that modeling becomes central in guiding content issues (Eden 1990). There is therefore a need to better understand the role of facilitation skills in PDA (e.g. Ackermann 1996). Such an understanding is vital because becoming an effective facilitator is not easy, and facilitation skills are not taught in most operations research/management science programs.

• Reflective analyses of real PDA-assisted processes: To better understand the preconditions for the successful deployment of PDA methods, there is a need to build evidence from reflective analyses of real case studies. Here, relevant perspectives (see, e.g. Hämäläinen 2004) extend well beyond the choice of PDA methods to the broader characterization of the decision context, including questions about how the PDA process is framed, what roles the participants enact, and what the “status” of the recommendations is as an input to subsequent decisions. Even though the enormous variety of decision problems and the plurality of decision-making cultures may thwart attempts at generalization, these kinds of analyses – which are likely to benefit from the use of systematic frameworks that
resemble those proposed for MCDA methods (see, e.g. Montibeller 2007) – may generate valuable insights into what methods work best as a function of problem characteristics.

• Organizational learning through evaluation: In organizations, PDA interventions are substantial undertakings which, as a rule, should be subjected to appropriate evaluations. These evaluations can serve two purposes. First, they help the organization understand what opportunities for improvement there are and how a re-designed PDA process could constitute an improvement over past practice. Second, if there is a well-established evaluation culture, consistent evaluations serve to build a database of “what works, where, and when.” One attendant challenge for research is to develop practical evaluation frameworks for PDA, for instance, along the lines discussed by Schilling et al. (2007).

• Strengthening skill sets: In PDA, skills for appropriate framing and scoping are pivotal because they lay the foundation for the analysis and co-determine how effectively PDA will inform organizational decision making (see, e.g. Spetzler 2007). Furthermore, skills that pertain to modeling – such as the choice of functional representation or the specification of model parameters – are crucial, too, because even seemingly innocuous modeling assumptions (such as the choice of baselines; Clemen and Smith 2009) may have repercussions on recommendations. As a result, those in charge of PDA processes need to master a broad range of skills, including social skills that help organizations articulate their objectives and technical skills that are needed to make well-founded choices among alternative approaches.

In summary, PDA has reached its current position through an evolutionary path, enabled by the fruitful interplay of theory, methods and practice. At present, organizations are arguably faced with growing challenges in their decision making. In many business environments, for instance, firms must reach decisions more quickly in order to reap profits under accelerated product life cycles, while the complexity of the issues at stake may make it imperative to involve more experts and stakeholders. Similarly, in the public sphere, legitimate demands for accountability and the efficiency of resource allocations impose ever more stringent quality requirements on decision processes. In this setting, PDA is well positioned to address these kinds of challenges, thanks to the concerted efforts of researchers and practitioners who pursue opportunities for theoretical and methodological work and, by doing so, help organizations improve their decision making.
References

Aaker DA, Tyebjee TT (1978) A model for the selection of interdependent R&D projects. IEEE Trans Eng Manage EM-25:30–36
Ackermann F (1996) Participants’ perceptions of the role of facilitators using a GDSS. Group Decis Negot 5:93–112
Argyris N, Figueira JR, Morton A (2011) A new approach for solving multi-objective binary optimisation problems, with an application to the knapsack problem. J Global Optim 49:213–235
Artzner P, Delbaen F, Eber J-M, Heath D (1999) Coherent measures of risk. Math Finance 9(3):203–228
Asher DT (1962) A linear programming model for the allocation of R and D efforts. IRE Trans Eng Manage EM-9:154–157
Barabba V, Huber C, Cooke F, Pudar N, Smith J, Paich M (2002) A multimethod approach for creating new business models: the General Motors OnStar project. Interfaces 32(1):20–34
Bickel E (2010) Decision analysis practice award. Decis Anal Today 29(3):8–9
Bodily SE, Allen MS (1999) A dialogue process for choosing value-creating strategies. Interfaces 29(6):16–28
Bordley RF, Beltramo M, Blumenfeld D (1999) Consolidating distribution centers can reduce lost sales. Int J Prod Econ 58(1):57–61
Bouyssou D, Vansnick J-C (1986) Noncompensatory and generalized noncompensatory preference structures. Theory Decis 21:251–266
Brans JP, Vincke P (1985) A preference ranking organisation method: (The PROMETHEE method for multiple criteria decision-making). Manage Sci 31(6):647–656
Brealey RA, Myers S (2004) Principles of corporate finance, 7th edn. McGraw-Hill, New York
Brummer V, Könnölä T, Salo A (2008) Foresight within ERA-NETs: experiences from the preparation of an international research program. Technol Forecast Soc Change 75(4):483–495
Bryan BA (2010) Development and application of a model for robust, cost-effective investment in natural capital and ecosystem services. Biol Conserv 143(7):1737–1750
Buede DM (1979) Decision analysis: engineering science or clinical art? Decisions and Designs, McLean
Clemen RT, Kwit RC (2001) The value of decision analysis at Eastman Kodak company. Interfaces 31(5):74–92
Clemen RT, Smith JE (2009) On the choice of baselines in multiattribute portfolio analysis: a cautionary note. Decis Anal 6(4):256–262
Cooper RJ, Edgett S, Kleinschmidt E (2001) Portfolio management for new products, 2nd edn. Perseus, Cambridge
Danielson M, Ekenberg L, Ekengren A, Hökby T, Lidén J (2008) Decision support process for participatory democracy. J Multi-Criteria Decis Anal 15(1–2):15–30
de Finetti B (1964) Foresight: its logical laws, its subjective sources (translation of the article La Prévision: ses lois logiques, ses sources subjectives, Annales de l’Institut Henri Poincaré, 1937). In: Kyburg HE, Smokler HE (eds) Studies in subjective probability. Wiley, New York
Eden C (1990) The unfolding nature of group decision support: two dimensions of skill. In: Eden C, Radford J (eds) Tackling strategic problems: the role of group decision support. Sage, London
Edwards W (1954) The theory of decision making. Psychol Bull 51(4):380–417
Edwards W (1961) Behavioral decision theory. Ann Rev Psychol 12:473–498
Fishburn PC (1964) Decision and value theory. Wiley, New York
Fishburn PC (1970) Utility theory for decision making. Wiley, New York
Fishburn PC (1992) Utility as an additive set function. Math Oper Res 17(4):910–920
Fox B (1966) Discrete optimization via marginal analysis. Manage Sci 13(3):210–216
Friend JK, Jessop WN (1969) Local government and strategic choice. Tavistock, London
Golabi K, Kirkwood CW, Sicherman A (1981) Selecting a portfolio of solar energy projects using multiattribute preference theory. Manage Sci 27:174–189
Grayson CJ (1960) Decisions under uncertainty: drilling decisions by oil and gas operators. Graduate School of Business, Harvard University, Boston
Gustafsson J, Salo A (2005) Contingent portfolio programming for the management of risky projects. Oper Res 53(6):946–956
Hämäläinen RP (2003) Decisionarium – aiding decisions, negotiating and collecting opinions on the Web. J Multi-Criteria Decis Anal 12(2–3):101–110
Hämäläinen RP (2004) Reversing the perspective on the applications of decision analysis. Decis Anal 1(1):26–31
Haveman RH, Margolis J (eds) (1970) Public expenditures and policy analysis. Markham, Chicago
Heidenberger K, Stummer C (1999) Research and development project selection and resource allocation: a review of quantitative approaches. Int J Manage Rev 1(2):197–224
Henderson BD (1970) Perspectives on the product portfolio. Boston Consulting Group, Boston
Henriksen A, Traynor A (1999) A practical R&D project-selection scoring tool. IEEE Trans Eng Manage 46(2):158–170
Howard RA (1966) Decision analysis: applied decision theory. In: Proceedings of the fourth international conference on operational research. Wiley-Interscience, New York, pp 55–71
Howard RA (1968) The foundations of decision analysis. IEEE Trans Syst Man Cybern SSC-4:211–219
Howard RA (1988) Decision analysis: practice and promise. Manage Sci 34(6):679–695
Howard RA, Matheson JE (eds) (1984) Readings on the principles and applications of decision analysis. Strategic Decisions Group, Menlo Park
Kavadias S, Loch CH (2004) Project selection under uncertainty: dynamically allocating resources to maximize value. Kluwer, Dordrecht
Keefer DL (1978) Allocation planning for R&D with uncertainty and multiple objectives. IEEE Trans Eng Manage EM-25:8–14
Keefer DL, Bodily SE (1983) Three-point approximations for continuous random variables. Manage Sci 29(5):595–603
Keefer DL, Kirkwood CW (1978) A multiobjective decision analysis: budget planning for product engineering. J Oper Res Soc 29:435–442
Keeney RL (1982) Decision analysis: an overview. Oper Res 30(5):803–838
Keeney RL, Raiffa H (1976) Decisions with multiple objectives: preferences and value trade-offs. Wiley, New York
Keisler J (2004) Value of information in portfolio decision analysis. Decis Anal 1(3):177–189
Kleinmuntz DN (2007) Resource allocation decisions. In: Edwards W, Miles RF, von Winterfeldt D (eds) Advances in decision analysis. Cambridge University Press, Cambridge
Krantz DH, Luce RD, Suppes P, Tversky A (1971) Foundations of measurement, vol. I. Academic, New York
Krumm FV, Rolle CF (1992) Management and application of decision analysis in Du Pont. Interfaces 22(6):84–93
Kusnic MW, Owen D (1992) The unifying vision process: value beyond traditional decision analysis in multiple-decision-maker environments. Interfaces 22(6):150–166
Lee SM, Lerro AJ (1974) Capital budgeting for multiple objectives. Financ Manage 3(1):51–53
Liesiö J (2011) Measurable multiattribute value functions for project portfolio selection and resource allocation. Aalto University, Systems Analysis Laboratory, manuscript
Liesiö J, Mild P, Salo A (2007) Preference programming for robust portfolio modeling and project selection. Eur J Oper Res 181(3):1488–1505
Liesiö J, Mild P, Salo A (2008) Robust portfolio modeling with incomplete cost information and project interdependencies. Eur J Oper Res 190(3):679–695
Liesiö J, Salo A (2008) Scenario-based portfolio selection of investment projects with incomplete probability and utility information. Helsinki University of Technology, Systems Analysis Laboratory Research Reports, E23
Loch CH, Kavadias S (2002) Dynamic portfolio selection of NPD programs using marginal returns. Manage Sci 48(10):1227–1241
Lorie JH, Savage LJ (1955) Three problems in rationing capital. J Business 28(4):229–239
Lourenço JC, Bana e Costa C, Morton A (2008) Software packages for multi-criteria resource allocation. In: Proceedings of the international management conference, Estoril, Portugal
Lourenço JC, Bana e Costa C, Morton A (2011) PROBE: a multicriteria decision support system for portfolio robustness evaluation (revised version). LSE Operational Research Group Working Paper Series, LSEOR 09.108. LSE, London
Markowitz HM (1952) Portfolio selection. J Finance 7(1):77–91
Matheson D, Matheson JE (1998) The Smart Organization: creating value through strategic R&D. Harvard Business School Press, Boston
Mintzberg H (1994) The rise and fall of strategic planning: reconceiving the roles for planning, plans, planners. Prentice-Hall, Glasgow
Montibeller G (2007) Action-researching MCDA interventions. In: Shaw D (ed) Keynote papers, 49th British Operational Research Conference (OR 49), 4–6 Sep, University of Edinburgh. The OR Society, Birmingham
Morton A (2010) On the choice of baselines in portfolio decision analysis. LSE Management Science Group working paper, LSEOR 10.128. LSE, London
Morton A, Bird D, Jones A, White M (2011) Decision conferencing for science prioritisation in the UK public sector: a dual case study. J Oper Res Soc 62:50–59
Mottley CM, Newton RD (1959) The selection of projects for industrial research. Oper Res 7(6):740–751
Owen DL (1984) Selecting projects to obtain a balanced research portfolio. In: Howard RA, Matheson JE (eds) Readings on the principles and applications of decision analysis. Strategic Decisions Group, Menlo Park
Peerenboom JP, Buehring WA, Joseph TW (1989) Selecting a portfolio of environmental programs for a synthetic fuels facility. Oper Res 37(5):689–699
Pennypacker J, Retna S (eds) (2009) Project portfolio management: a view from the trenches. Wiley, New York
Phillips LD (1984) A theory of requisite decision models. Acta Psychol 56(1–3):29–48
Phillips LD, Bana e Costa CA (2007) Transparent prioritisation, budgeting and resource allocation with multi-criteria decision analysis and decision conferencing. Ann Oper Res 154(1):51–68
Phillips LD, Phillips MC (1993) Facilitated work groups: theory and practice. J Oper Res Soc 44(3):533–549
Pratt JW, Raiffa H, Schlaifer R (1965) Introduction to statistical decision theory. McGraw-Hill, New York
Raiffa H (1968) Decision analysis. Addison-Wesley, Reading
Raiffa H, Schlaifer R (1961) Applied statistical decision theory. Harvard Business School, Boston
Ramsey FP (1978) Truth and probability. In: Mellor DH (ed) Foundations: essays in philosophy, logic, mathematics and economics. Routledge & Kegan Paul, London
Rios Insua D, Kersten GE, Rios J, Grima C (2008) Towards decision support for participatory democracy. In: Burstein F, Holsapple CW (eds) Handbook on decision support systems 2. Springer, Berlin, pp 651–685
Rogers EM (2005) Diffusion of innovations, 5th edn. Free Press, New York
Roussel PA, Saad KN, Erickson TJ (1991) Third generation R&D: managing the link to corporate strategy. Harvard Business School Press, Boston
Roy B (1985) Méthodologie multicritère d’aide à la décision. Economica, Paris
Roy B (1991) The outranking approach and the foundations of ELECTRE methods. Theory Decis 31:49–73
Ruusunen J, Hämäläinen RP (1989) Project selection by an integrated decision aid. In: Golden BL, Wasil EA, Harker PT (eds) The analytic hierarchy process: applications and studies. Springer, New York, pp 101–121
Rzasa P, Faulkner TW, Sousa NL (1990) Analyzing R&D portfolios at Eastman Kodak. Res Technol Manage 33:27–32
Saaty TL (2005) The analytic hierarchy and analytic network processes for the measurement of intangible criteria and for decision-making. In: Figueira J, Greco S, Ehrgott M (eds) Multiple criteria decision analysis: state of the art surveys. Kluwer, Boston, pp 345–408
Salo A, Hämäläinen RP (1997) On the measurement of preferences in the analytic hierarchy process. J Multi-Criteria Decis Anal 6(6):309–319
Salo A, Hämäläinen RP (2010) Multicriteria decision analysis in group decision processes. In: Kilgour DM, Eden C (eds) Handbook of group decision and negotiation. Springer, New York, pp 269–283
Savage LJ (1954) The foundations of statistics. Wiley, New York
Schilling M, Oeser N, Schaub C (2007) How effective are decision analyses? Assessing decision process and group alignment effects. Decis Anal 4(4):227–242
Schwartz SL, Vertinsky I (1977) Multi-attribute investment decisions: a study of R&D project selection. Manage Sci 24(3):285–301
Shane SA, Ulrich KT (2004) Technological innovation, product development, and entrepreneurship in management science. Manage Sci 50(2):133–144
Sharpe WF (1964) Capital asset prices: a theory of market equilibrium under conditions of risk. J Finance 19:425–442
Sharpe P, Keelin T (1998) How SmithKline Beecham makes better resource-allocation decisions. Harvard Business Rev 76(2):45–46
Skaf MA (1999) Portfolio management in an upstream oil and gas organization. Interfaces 29(6):84–104
Souder WE (1973) Analytical effectiveness of mathematical models for R&D project selection. Manage Sci 19(8):907–923
Spetzler CS (2007) Building decision competency in organizations. In: Edwards W, Miles RF, von Winterfeldt D (eds) Advances in decision analysis. Cambridge University Press, Cambridge
Stummer C, Heidenberger K (2003) Interactive R&D portfolio analysis with project interdependencies and time profiles of multiple objectives. IEEE Trans Eng Manage 50:175–183
Stummer C, Kiesling E, Gutjahr WJ (2009) A multicriteria decision support system for competence-driven project portfolio selection. Int J Inform Technol Decis Making 8(2):379–401
Tversky A, Kahneman D (1974) Judgment under uncertainty: heuristics and biases. Science 185:1124–1131
Vilkkumaa E, Salo A, Liesiö J (forthcoming) Multicriteria portfolio modeling for the development of shared action agendas. Group Decis Negot
von Neumann J, Morgenstern O (1947) Theory of games and economic behavior. Princeton University Press, Princeton
von Winterfeldt D, Edwards W (1986) Decision analysis and behavioral research. Cambridge University Press, Cambridge
Walls MR, Morahan GT, Dyer SD (1995) Decision analysis of exploration opportunities in the onshore US at Phillips Petroleum Company. Interfaces 25(6):39–56
Weingartner HM (1966) Capital budgeting of interrelated projects: survey and synthesis. Manage Sci 12(7):485–516
Weingartner HM (1967) Mathematical programming and the analysis of capital budgeting problems. Markham, Chicago
Chapter 2
Portfolio Decision Quality
Jeffrey Keisler
Abstract The decision quality framework has been useful for integrating decision analytic techniques into decision processes in a way that adds value. This framework extends to the specific context of portfolio decisions, where decision quality is determined at both the project level and the portfolio level, as well as in the interaction between these two levels. A common heuristic says that the perfect amount of decision quality is the level at which the additional cost of improving an aspect of the decision is equal to the additional value of that improvement. A review of several models that simulate portfolio decision-making approaches illustrates how this value added depends on characteristics of the portfolio decisions, as does the cost of the approaches.
2.1 Introduction

The nature of portfolio decisions suggests particularly useful interpretations for some of the elements of decision quality (DQ). Because of the complexity of these problems, portfolio decision makers stand to benefit greatly from applying DQ. This chapter aims to facilitate such an application by characterizing the role of DA in portfolios; describing elements of portfolio decision quality (PDQ); defining levels of achievement on different dimensions of DQ; relating such achievement to value added (using a value-of-information analogy); and considering the drivers of cost in conducting portfolio decision analysis (PDA). It is helpful to start by characterizing the role of DA in portfolio problems, particularly in contrast to the role of optimization algorithms. Portfolio optimization algorithms range from simple to quite complex, and may require extensive data
inputs. DA methods largely serve to elicit and structure these often subjective inputs, thereby improving the corresponding aspects of DQ.

A portfolio decision process should frame the decision problem by defining what is in the portfolio under consideration and what can be considered separately, who is to decide, and what resources are available for allocation. At their simplest, portfolio alternatives are choices about which proposed activities are to be supported, e.g., through funding. But there may also be richer alternatives at the project level, or different possible funding levels, as well as portfolio-level choices that affect multiple entities. For each project, information may be needed about the likelihood of outcomes and costs associated with investments – and for portfolio decisions especially, the resulting estimates should be consistent and transparent across projects. The appropriate measure of the value of the portfolio may be as simple as the sum of the project expected monetary values, or as complex as a multiattribute utility function of individual project and aggregate portfolio performance; this involves balancing tradeoffs among different attributes as well as between risk and return, short term and long term, small and large, etc. Decisions that logically synthesize information, alternatives and values across the portfolio can be especially complex, as they must comprehend many possible interactions between projects. Implementation of portfolio decisions requires careful scheduling and balancing to assure that sufficient resources and results will be available when projects need them.

In practice, it has been common to calculate the value added by PDA as the difference between the value of the (optimal) portfolio ultimately funded and the value of the “momentum” portfolio that would have been funded without the analysis. This idea can be refined to understand the value added by specific analysis that focuses on any of the DQ elements. This is similar to an older idea of calculating the value of analysis as the value of information “revealed” to the decision maker by the analysis, and it suggests specific, conceptually illuminating, structural models for the value of different improvements to portfolio decision quality. It is productive to think in terms of four steps in the different dimensions of DQ: no information, information about the characteristics of the portfolio in general, partial specific information about some aspects of the portfolio in a particular situation, or complete information about those aspects. These steps often correspond (naturally, as it were) to discrete choices about formal steps to be used in the portfolio decision process. Thus, it is possible to discuss rather detailed questions about the value added by PDA efforts with clarity.

This chapter synthesizes some of my past and current research (some mathematical, some simulation, some empirical) where this common theme has emerged. Without going to the level of detail of the primary research reports, this chapter describes how various PDQ elements can be structured for viewing through this lens. A survey of results shows that no one step is clearly most valuable. Rather, for each step, there are conditions (quantitative characteristics of the portfolio) that make quality relatively more or less valuable (perhaps justifying a low-cost pre-analysis or an organizational diagnosis step assessing those conditions).
Several related queries help to clarify how PDQ could apply in practical settings. We consider (briefly) how choices about analysis affect the cost of analysis, e.g., number of variables, number of assessments, number of meetings, etc. Because these choices go hand-in-hand with DQ levels that can be valued, this provides a basis for planning to achieve DQ. We review (briefly) case data from several organizations to understand where things stand in current practice and which elements of DQ might need reinforcement. Finally, in order to illustrate how the PDQ framework facilitates discussion of decision processes, we review several applications that describe approaches taken. The chapter concludes with a PDQ-driven agenda to motivate future research. A richer set of decision process elements could be modeled in order to deepen the understanding of what drives PDQ. Efforts to find more effective analytic techniques can be focused on the aspects of portfolio decisions that have the highest potential value from increased quality. Alternatively, in areas where lower quality may suffice, research can focus on finding techniques that are simpler to apply. The relative value of PDQ depends on the situation, and it may be that simple and efficient approaches ought to be refined for one area in one application setting, while more comprehensive approaches could be developed for another setting.
2.2 Decision Portfolios

A physical portfolio is essentially a binder or folder in which some related documents are carried together, a meaning which arises from the Latin roots port (carry) and folio (leaf or sheet). Likewise, an investment portfolio is a set of individual investments which a person or a firm considers as a group, while a project portfolio is a set of projects considered as a group. PDA applies decision analysis (DA) to make decisions about portfolios of projects, assets, opportunities, or other objects. In this context, we can speak more generally of (a) portfolio (of) decisions. We define a portfolio decision as a set of decisions that we choose to consider together as a group. I emphasize the word choose because the portfolio is an artificial construct – an element is a member of the portfolio only because the person considering the portfolio deems it so. I emphasize the word decisions because PDA methods are applied to decisions and not to projects or anything else. A portfolio of project decisions starts with a portfolio of projects and (often) maps each project simply to the decision of whether or not to fund it. Likewise, other portfolios of concern may be mapped to portfolios of decisions. Considering these decisions in concert is harder than considering them separately. There are simply more facts to integrate and there are coordination costs. Therefore, decisions are (or ought to be) considered as a portfolio only when there is some benefit in doing so. That benefit is typically that the best choice in one decision depends on the status of the other decisions.
Brown 1978). This approach is hard to apply in general. With portfolio decisions, there are reasons why it is not so hopeless. The ultimate value resulting from a portfolio decision is a function of the analytic strategy used (the steps taken to improve decision quality in different dimensions) and certain portfolio characteristics. I have used an approach that has been dubbed optimization simulation (Nissinen 2007) to model the value added by the process. This approach works if there is an appropriate way to specify the steps taken and to quantify the portfolio characteristics. I am not sure if it is applicable as a precise tool to plan efforts for a specific portfolio decision, but it is able to generate results for simulated portfolios, and these give insight that may guide practice.
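A toy version of such an optimization simulation is sketched below: it draws random project portfolios, selects projects by ranking on noisy value estimates, and reports the true value captured as estimate quality varies. The difference between rows approximates the value added by sharper analysis over a cruder “momentum” selection. The distributions, portfolio size and ranking rule are illustrative assumptions, not the models from the primary research reports.

```python
import random

random.seed(1)

def simulate(noise_sd, n_projects=20, budget=10, trials=2000):
    """Average true value captured when selecting on noisy value estimates.

    Each project costs one unit; the budget funds half of them. noise_sd
    controls estimate quality: 0 is perfect information, larger is cruder.
    """
    total = 0.0
    for _ in range(trials):
        true_vals = [random.lognormvariate(0, 1) for _ in range(n_projects)]
        estimates = [v + random.gauss(0, noise_sd) for v in true_vals]
        chosen = sorted(range(n_projects), key=lambda i: -estimates[i])[:budget]
        total += sum(true_vals[i] for i in chosen)
    return total / trials

# Value added by better analysis = value of the portfolio chosen with sharper
# estimates minus the value of a portfolio chosen with crude ones.
for sd in [0.0, 0.5, 1.0, 2.0]:
    print(f"estimate noise {sd:.1f}: mean portfolio value {simulate(sd):.2f}")
```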
2.4 Portfolio Decision Quality

We now discuss how the decision quality framework applies in the context of portfolio decisions. Note that this is a perspective piece, and so the classifications that follow are somewhat arbitrary (as may be the classifications of the original decision quality framework), but they are intended as a starting point for focusing discussion on the portfolio decision process itself.
2.4.1 Framing

In DA practice, “framing” (drawing rather loosely on Tversky and Kahneman’s (1981) work on framing effects) typically refers to understanding what is to be decided and why. In many decisions (single or portfolio), before even discussing facts, it is necessary to have in mind the right set of stakeholders, as it is their views that drive the rest of the process. There are various methods for doing so (e.g., McDaniels et al. 1999), and because portfolios involve multiple decisions, it is typical for there to be multiple stakeholders associated with them. In portfolio settings, a first step in framing is to determine what the portfolio of decisions is. Sometimes the decisions are simple fund/do not fund decisions for a set of candidate projects. In other situations, it is not as clear. For example, there might be a set of physical assets, the disposition of each of which has several dimensions that can be influenced by a range of levers. Then the portfolio might be viewed in terms of the levers (what mix of labor investment, capital improvements, new construction, outsourcing, and closure should be applied), the dimensions of disposition (how much should efforts be directed toward achieving efficiency, effectiveness, quality, profitability, growth, and societal benefit), the assets themselves (how much effort should be applied to each site), or richer combinations of these.
Once the presenting portfolio problem has been mapped to a family of decisions, a key issue in framing is to determine what is in and out of the portfolio of decisions. The portfolio is an artificial construct, and decisions are considered together in a portfolio when the decision maker deems it so. There is extra cost in considering decisions as a portfolio rather than as a series of decentralized and one-off decisions. Thus, the decisions that should be joined in a portfolio are the ones where there is enough benefit from considering them together to offset the additional cost (note: Cooper et al. 2001 discuss the related idea of strategic buckets, elements of which are considered together). Such benefit arises when the optimal choice on one decision depends on the status of other decisions or their outcomes.

A common tool for framing decisions is the decision hierarchy (Howard 2007), which characterizes decisions as already decided policy, downstream tactics (which can be anticipated without being decided now), out of scope (and not linked to each other in an important way at this level), or as the strategic decisions in the current context. Portfolios themselves may also form a hierarchy, e.g., a company has a portfolio of business units, each of which has a portfolio of projects (e.g., Manganelli and Hagen 2003). The decision frame, like a window frame, determines what is in view and what is not – what alternatives could reasonably be considered, what information is relevant, what values are fundamental for the context, etc.

In the standard project selection problem, the most basic interaction between decisions arises from their drawing on the same set of constrained resources. Thus, with a vector of decision variables X = (x_1, ..., x_n) (e.g., the amount of funding for projects 1 to n, or simply an indicator of whether or not to fund each project), the decision problem is max_X V(X) s.t. C(X) ≤ B (illustrated in the sketch at the end of this subsection). The first part of the framing problem is to determine the elements of X, and the related question of identifying the constraints B. The set of all project proposals can be divided into clusters in terms of timing (e.g., should all proposals received within a given year be considered in concert, or should proposals be considered quarterly?), department or division (should manufacturing investments compete for funds alongside marketing investments?), geography, level, or other dimensions. Montibeller et al. (2009) give some attention to this grouping problem. Even if there are no interactions between projects other than competition for resources, the larger the set of projects considered within a portfolio, the less likely it is that productive projects in one group (often a business unit or department) will go without resources while unproductive projects elsewhere (often in another business unit or department) obtain them.

Related to the bounding of decisions in the portfolio problem is defining the constraints. Of course, a portfolio consisting of all proposals received in a 3-month period ought to involve allocating a smaller budget than a larger portfolio of proposals received over a longer time. But sometimes the budget has to be an explicit choice. For example, Sharpe and Keelin (1998) describe their success with the portfolio at SmithKline Beecham, and in particular, how they persuaded management to increase the budget for the portfolio when the analysis showed that there were worthwhile projects that could not be funded. In this case, the
higher budget required the company to align its R&D strategy with its financial strategy for engaging in capital markets, essentially allowing the portfolio of possible investments to spread over a greater time span. In other cases, allowing the portfolio budget to vary could imply that R&D projects are competing for funds with investments in other areas, so that the relevant portfolio contains a wider range of functions. We see here that portfolio decisions involve distinctions that are not salient in general. Mapping from a portfolio of issues to a portfolio of decisions (what is to be decided) frames at the level of the individual projects, although this also necessarily drives how analysis proceeds at the portfolio level. Scoping frames explicitly at the portfolio level, but it automatically affects the primary question at the project level of whether a proposal is even under consideration. Bounding the solution space in terms of budget/resource constraints frames at the portfolio level, and has little bearing on efforts at the project level.
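For concreteness, here is a minimal brute-force sketch of the selection model max_X V(X) s.t. C(X) ≤ B framed above, with binary funding decisions and additive value and cost; the project values, costs and budget are invented for illustration, and exhaustive enumeration is only sensible for small n.

```python
from itertools import product

# Brute-force solution of max_X V(X) s.t. C(X) <= B with binary decisions x_i
# and additive value and cost. All figures are illustrative; enumerating 2**n
# portfolios is only practical for small n.
values = [12, 9, 7, 5, 4]   # v_i: value contributed if project i is funded
costs = [6, 5, 4, 3, 2]     # c_i: cost of funding project i
B = 10                      # budget

best_x, best_v = None, float("-inf")
for x in product([0, 1], repeat=len(values)):  # each x_i in {0, 1}
    cost = sum(c * xi for c, xi in zip(costs, x))
    value = sum(v * xi for v, xi in zip(values, x))
    if cost <= B and value > best_v:
        best_x, best_v = x, value

funded = [i + 1 for i, xi in enumerate(best_x) if xi]
print("fund projects:", funded, "value:", best_v)  # fund projects: [1, 3] value: 19
```

With these illustrative numbers, simply funding projects in order of value per unit cost would pick projects 1 and 5 and capture only 16 units of value, so even without interactions the resource constraint already makes the portfolio view pay off.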
2.4.2 Alternatives

In the DQ framework, the quality of alternatives influences the quality of the decision because if the best alternatives are not under consideration, they will simply not be selected. High-quality alternatives are said to be well specified (so that they can be evaluated correctly), feasible (so their analysis is not a waste of time), and creative (so that surprising potential sources of value will not be overlooked). In a portfolio, there are alternatives defined at the project level (essentially, these are mutually exclusive choices with respect to one object in the portfolio) and at the portfolio level (e.g., the power set of projects). At the project level, the simplest alternatives are simply “select” or “don’t select.” A richer set of alternatives may contain different funding levels and a specification of what the project would be at each funding level; although these variations take time to prepare, richer variation at the project level allows for a better mix at the portfolio level. A related issue is that the level of detail with which projects are defined determines what may be recombined at the portfolio level. For example, if a company is developing two closely related products, it may or may not be possible to consider portfolio alternatives including one but not the other product, depending on whether they are defined as related projects (with detail assembled for each) or as a single project. At the portfolio level, one may consider as the set of alternatives the set of all feasible combinations of project-level decisions. Here, we get into issues of what is computationally tractable, as well as practical for incorporating needed human judgments. How alternatives are defined (and compared) at the portfolio level affects what must be characterized at the project level. Where human input is needed, high-quality portfolio-level alternatives may be organized with respect to events (following Poland 1999), objectives (following value-focused thinking, Keeney 1996), resources (e.g., strategy tables, as in Spradlin and Kutoloski 1999), or constraints, or via interactive decision support tools (e.g., visualization methods such as heatmaps, as in Kiesling et al. 2011).
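The combinatorics just described can be made tangible with a small sketch: when each project offers several funding levels, the portfolio-level alternatives are the cross-product of project-level choices, and their number grows as m^n. The levels, costs and values below are invented for illustration.

```python
from itertools import product

# Richer project-level alternatives: each project can be funded at 0%, 50%,
# or 100%, with an illustrative (cost, value) pair per level.
levels = {
    "alpha": [(0, 0), (2, 3), (4, 5)],
    "beta":  [(0, 0), (3, 4), (5, 8)],
    "gamma": [(0, 0), (1, 1), (3, 4)],
}
B = 7

# Portfolio-level alternatives are all combinations of project-level choices;
# with m levels and n projects there are m**n of them, which is why
# computational tractability becomes a concern as portfolios grow.
feasible = []
for choice in product(*levels.values()):
    cost = sum(c for c, _ in choice)
    if cost <= B:
        value = sum(v for _, v in choice)
        feasible.append((value, cost, choice))

value, cost, choice = max(feasible)
print(f"best mix: value {value}, cost {cost}")
for name, (c, v) in zip(levels, choice):
    print(f"  {name}: cost {c}, value {v}")   # alpha at 50%, beta at 100%, gamma at 0%
```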
2.4.3 Information

The quality of information about the state of the world (especially as it pertains to the value of alternatives) is driven by its completeness, precision, and accuracy. High-quality information about what is likely to happen enables estimates that closely predict the actual value of alternatives. In portfolio decisions, the quality of information at the project level largely has the same drivers. But rather than feeding a go/no-go decision about the project in isolation, this information feeds choices about what is to be funded within a portfolio, and this can mean that less detailed estimates of project value are needed. On the other hand, because projects may compete for the same resources, project-level information about costs may be more significant in feeding the portfolio decision. Furthermore, as has been noted, consistency across projects is important – a consistent bias may have less impact on the quality of the portfolio decision (and its implementation) than a smaller bias that is less consistent. At the level of the portfolio, interactions between projects are important – synergies and dissynergies, dynamic dependencies/sequencing, and correlations may all make the value of the portfolio differ from the value of its components considered in isolation. When these characteristics are present, the search for an optimal portfolio is not so simple as ranking projects in order of productivity index to generate the efficient frontier. Therefore, it is important not only to collect this portfolio information appropriately (and perhaps iteratively), but also to structure it so as to facilitate later operations.
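The point about consistency can be illustrated with a small simulation (the values, noise level, and portfolio size are all hypothetical): a large but uniform bias leaves the funding ranking, and hence the portfolio, unchanged, while smaller inconsistent errors degrade it:

```python
import random

random.seed(1)
true_values = [random.uniform(5, 15) for _ in range(30)]
k = 10   # equal-cost projects: fund the top 10 by estimated value

def portfolio_value(estimates):
    chosen = sorted(range(30), key=lambda i: -estimates[i])[:k]
    return sum(true_values[i] for i in chosen)

# Large but uniform bias: every estimate is 40% high, so the ranking is unchanged.
consistent = [v * 1.4 for v in true_values]
# Smaller but erratic errors: unbiased noise scrambles the ranking.
noisy = [v + random.gauss(0, 3) for v in true_values]

print(portfolio_value(true_values))  # best attainable
print(portfolio_value(consistent))   # identical to the best: same projects funded
print(portfolio_value(noisy))        # lower: some wrong projects funded
```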
2.4.4 Values

In decision analysis and other prescriptive approaches to decision making, decision makers seek to select the most preferred option. Much effort may go into identifying preferences and values in order to facilitate that selection. In valuing a portfolio of projects, since the options are different possible portfolios, it helps to construct a value function that captures preferences and represents them in a form amenable to making the necessary comparisons, i.e., the options that have higher values ought to be the ones the decision maker prefers. If project value is additive, then high quality on values requires mainly that the value function include the right attributes and the right weights for each project. If the value function is additive across attributes and across projects, then the best portfolio decision follows from setting the right project-level value function and applying it across the set of projects. A more complex value function, e.g., a nonlinear multi-attribute utility function over the sum of project-level contributions, requires that the projects be characterized and measured in the right terms, but the hard judgments must be made at the portfolio level. Finally, with portfolio decisions there are often more stakeholders affected by the set of projects, whose values must therefore be integrated. This is an area within PDA where there are numerous common approaches.
Of particular note, at the portfolio level there is often a range of interacting objectives (or constraints). Rather than undertaking the sometimes prohibitive task of formally structuring a utility function for them all, the portfolio manager may strive for "balance" (see Cooper et al. 2001, the "SDG grid" in Owen 1984 and elsewhere, or Farquhar and Rao 1976), which is commonly done through 2 × 2 matrices showing how much of one characteristic and how much of another each project has. This could include balance between risk and return, across risks (i.e., diversification), between benefits and costs, between short term and long term, between internal and external measures, between one set of stakeholders and another (i.e., fairness), across resources of different types used, balance over time, or other characteristics. Balance grids are easier to work with conceptually than are convex multi-attribute utility functions with interaction terms. With such grids, and with projects mapped onto them, the decision maker can then envision the effect of putting in or pulling out individual projects, thus directly relating decisions about individual projects to portfolio-level value. Other interactive approaches utilizing feedback or questions based on an existing portfolio model may also help to identify values and interactions between them, e.g., Argyris et al. (2011).
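A minimal sketch of the balance-grid idea, with hypothetical projects and cutoffs: projects are bucketed into a 2 × 2 risk–return grid so the decision maker can see at a glance where the portfolio is crowded or empty:

```python
# Hypothetical projects: name -> (risk score 0-1, expected return).
projects = {
    "P1": (0.2, 8), "P2": (0.7, 14), "P3": (0.8, 5),
    "P4": (0.3, 12), "P5": (0.6, 9),
}

def cell(risk, ret, risk_cut=0.5, return_cut=10):
    """Assign a project to one of the four cells of the balance grid."""
    row = "high risk" if risk >= risk_cut else "low risk"
    col = "high return" if ret >= return_cut else "low return"
    return (row, col)

grid = {}
for name, (risk, ret) in projects.items():
    grid.setdefault(cell(risk, ret), []).append(name)

for bucket, names in sorted(grid.items()):
    print(bucket, names)
# A balanced portfolio avoids crowding one cell, e.g., all "high risk, low return".
```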
2.4.5 Logical Synthesis

In a standard DA setting, the logic element of DQ means the assurance that information, values, and alternatives will be properly combined to identify the course of action most consistent with the decision maker's preferences. Standard DA utilizes devices such as decision trees, probability distributions, and utility functions to ensure that the decision maker's actions and beliefs conform to normative axioms. Certainly, much of this still applies at the project level within portfolios. Detailed inputs at the project level ought to be logically synthesized to obtain value scores that are then incorporated in the portfolio-level decision. For example, in the classic SDG-style PDA, decision trees at the project level identify the ENPV (expected net present value) and cost of each project. If there is minimal interaction between projects, the portfolio-level synthesis is simply to rank projects by productivity index and fund until the budget is exhausted. However, interactions – as mentioned previously – can include synergies and dissynergies, as well as logical constraints: under some circumstances some combinations of activities may be impossible, while in other cases certain activities may be possible only if other activities are also undertaken. In simpler cases, it may still be possible to capture this primarily with simple spreadsheet-level calculations based on the DA-derived inputs, but often optimization and mathematical programming techniques are required; in their absence, it is unlikely that an unaided decision maker would be able to approach the best portfolio. When such techniques are to be used, it is especially important to have coherence between the algorithms to be deployed and the project-level inputs. Furthermore, because optimization models often require simplifying assumptions (as do all models to some extent), this element of quality
may require a feedback loop in which managers review the results of the model and refine it where necessary. Such feedback is commonly prescribed in decision modeling (see Fasolo et al. 2011).
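To see how ranking by productivity index can miss the best portfolio once interactions enter, here is a sketch with hypothetical projects and one assumed synergy, comparing the greedy ranking against an exhaustive optimum:

```python
from itertools import product

projects = [("A", 6, 9), ("B", 5, 7), ("C", 4, 5)]   # (name, cost, value)
budget = 9
synergy = ("B", "C", 4)   # hypothetical bonus value if B and C are both funded

def portfolio_value(funded):
    value = sum(v for n, _, v in projects if n in funded)
    if synergy[0] in funded and synergy[1] in funded:
        value += synergy[2]
    return value

# Greedy: fund in descending order of productivity index (value / cost).
greedy, spent = set(), 0
for name, cost, value in sorted(projects, key=lambda p: -p[2] / p[1]):
    if spent + cost <= budget:
        greedy.add(name)
        spent += cost

# Exact: enumerate every subset and keep the best feasible one.
feasible = []
for flags in product([0, 1], repeat=len(projects)):
    chosen = {n for (n, c, v), f in zip(projects, flags) if f}
    if sum(c for n, c, v in projects if n in chosen) <= budget:
        feasible.append(chosen)
best = max(feasible, key=portfolio_value)

print(greedy, portfolio_value(greedy))   # {'A'}: value 9
print(best, portfolio_value(best))       # {'B', 'C'}: value 16 with the synergy
```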
2.4.6 Commitment and Implementation

Producing the desired results once portfolio decisions are made (which SDG calls Value Delivery) requires effort at the border of project and portfolio management. At the portfolio level, resources must be obtained and distributed to projects as planned. The portfolio plan itself, with more precise timing, targets, and resource requirements, must be translated back into detailed project plans, as the initial specifications for all individual approved projects must be organized into a consistent and coherent set of activities for the organization. Project managers monitor progress and adapt plans when the status changes. Sets of projects may affect each other's execution, e.g., one project might need to precede another, or it may be impossible to execute simultaneously two projects that require the same resource. In this case, in addition to orchestrating a multiproject plan, the portfolio manager must monitor the fleet of projects and make adjustments to keep them in concordance over time with respect to resource use and product release. One important event that can occur at the project level is failure. When projects really do fail, the portfolio is better off if they are quickly abandoned. At the project level, this requires incentives not to hide failure and to move on (as embodied in Johnson & Johnson's value of "Freedom to Fail"; Bessant 2003). All these information flows between projects and between project and portfolio managers benefit if the portfolio decision process has organizational legitimacy – if it is transparent and perceived as fair (Matheson and Matheson 1998).
2.4.7 Interacting Levels of Analysis

Thus, PDQ is determined at the project level, the portfolio level, and in both directions of the interface between those two levels, as shown in this partial listing (Table 2.1). Since the required level of each element of decision quality depends on the value added by that element and its costs (we can think of this in terms of bounded rationality), it would help to have a way to measure the value and cost. The cost of efforts to create decision quality is a subject that has not been much studied, and we shall consider it only in abstract terms in this chapter, but it is not so difficult to think about: if specific efforts are contemplated, the main cost of those efforts is the time of the individuals involved, and there are many areas of management in which methods are applied to quantify such costs, e.g., software development cost estimation. Value depends on the information and decisions involved, and perhaps other aspects of the context, which makes it difficult to judge intuitively. We now explore how models of the portfolio decision process can be used to gain insight about the way specific efforts affect portfolio value.
Table 2.1 Some determinants of portfolio decision quality

| Element | Portfolio level | Interchange between project and portfolio levels | Project level |
|---|---|---|---|
| Framing | Resources and budgets (bounding) | Which projects are in which portfolio (scoping) | Mapping issues to decisions |
| Alternatives | Subsets of candidate projects, portfolio strategies | Projects suitably decomposed | Well-specified plans for multiple funding levels |
| Information | Specifying synergies, dynamic dependency, correlations between projects | Consistency | Probability distribution over outcomes |
| Values | Utility function, balance | Summary statistics | Attributes and measures |
| Logical synthesis | Optimization | Dealing with dependencies between projects | Decision tree |
| Implementation | Alignment, monitoring and correcting | Ensuring resource availability | Project management, incentives, buy-in, etc. |
2.5 Valuing Portfolio Decision Quality

2.5.1 Four Discrete Levels of Portfolio Decision Quality

It can be useful in some of these cases to think about four possible levels of information being brought to bear on the decision (Fig. 2.2). The first level is "ignorance" – not that decision makers have no information, but rather that their decisions take a perspective other than that of using the parametric information they do have. For example, in highly politicized situations, information about project value may have little to do with funding. The value of the portfolio under such a process can be used as a baseline for measuring improvement, and the process is modeled as if it selects projects at random. The second level is "analog" – portfolio-level information, but not information about specific projects. In this case, a decision rule can be developed that takes into account the characteristics of the average project – or even of the distribution of projects. In the pharmaceutical industry, for example, it is common to use "portfolio-wide averages
The implication is that portfolio managers should focus first on creating a culture that supports discipline in sticking to prioritization, and only focus on more precise estimates where there is relatively large uncertainty. Under some circumstances, reasonable shortcuts can yield most of the value added by analysis at a proportionally lower cost than the brute-force approach in which all projects are analyzed. Specifically, if organizational conditions allow a well-defined threshold productivity level to be identified before analysis, a project can be funded or not merely on the basis of whether its value-to-cost ratio exceeds the threshold. A modified threshold rule works well if there is large variance among the value-to-cost ratios of different projects: apply triage and fund projects that exceed the productivity threshold by a certain amount, do not fund projects that fall a certain amount below it, and analyze the rest of the projects so that they can be funded if, after analysis, they are shown to exceed the threshold. This model considers the general question of improving estimates. The three other models essentially take it as a starting point and consider choices about which bases of the value estimate to improve or, alternatively, what more complex portfolio value measures should be built on the general value estimates. Decision analysts who saw these results found them interesting, but also stressed that analysts do more than obtain precise value estimates. For example, they make the alternatives better – in particular by identifying a range of alternatives for different funding levels.
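A minimal sketch of the triage rule described above; the threshold, band, noise level, and number of projects are all hypothetical:

```python
import random

random.seed(2)
THRESHOLD, BAND = 1.0, 0.25   # hypothetical productivity cutoff and triage band

funded, analyzed = 0, 0
for _ in range(20):
    true_ratio = random.uniform(0.2, 2.0)          # true value-to-cost ratio
    rough = true_ratio + random.gauss(0, 0.2)      # noisy first-cut estimate
    if rough >= THRESHOLD + BAND:
        funded += 1                                # clearly above: fund outright
    elif rough <= THRESHOLD - BAND:
        pass                                       # clearly below: reject outright
    else:
        analyzed += 1                              # borderline: pay for full analysis
        if true_ratio > THRESHOLD:                 # analysis reveals the true ratio
            funded += 1

print(f"funded {funded} projects; full analysis needed on only {analyzed} of 20")
```

Only the borderline projects incur the cost of detailed analysis, which is where the savings over the brute-force approach come from.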
2.5.3 Modeling the Value of Higher Quality Alternatives

The next model (Keisler 2011) compares different tactics for soliciting alternatives for various budget levels for each potential investment. In this model, C_i denotes the cost of project i, and each project is assumed to have an underlying value trajectory (colloquially called a buyup curve):

V_i(C_i) = r_i [1 − exp(−k_i C_i / C_i,max)] / [1 − exp(−k_i)].

Project-level parameter values for the return (r_i) and curvature (k_i) are drawn from random distributions. Funding decisions to maximize E[Σ_i V_i(C_i)] subject to Σ_i C_i ≤ B are compared for analytic strategies that result in stronger or weaker information states about k and r. As before, in a baseline situation projects are selected at random; after a first-cut analysis, projects are funded based on their productivity parameter, also essentially as before. At the other extreme is the gold-standard analysis, in which a full continuum of funding levels and corresponding project values is determined for each project. In between are strategies where a small number of intermediate alternatives are defined for each project, as well as "haircut" strategies that trim each project's funding from its requested level, with the rationale that projects tend to have decreasing returns to scale. If returns to scale are not decreasing, the optimal
portfolio allocates projects either 0 or 100% of their requested funds. Otherwise, the relative value added by different analytic strategies varies with the distribution of returns to scale across projects. This study showed that the value added by generating project-level alternatives can be as high as 67% of the value added by getting precise value estimates for the original simple projects as in the first model, as seen in Fig. 2.4. Most of the additional value can be obtained by generating just one or two intermediate funding-level alternatives per project. In some circumstances, similar gains are possible from applying a formula that utilizes project-specific information about k along with a portfolio-wide estimate of r, for a sophisticated type of haircut (layered). The results were sensitive to various parameters. For example, at high budget levels where most of the funding requested by projects is available, there is little benefit to having additional alternatives between full and no funding.

Fig. 2.4 Portfolio value as a function of budget when projects may be funded at intermediate levels. The curves compare six strategies: continuous project funding levels; funding proportional to a project's productivity index at full funding; four possible funding levels per project; yes-no funding decisions; each project receiving the same portion of its request (haircuts); and funding projects at random
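A sketch of the buyup curve and of the haircut strategy's rationale, with hypothetical parameter distributions: when V is concave (decreasing returns to scale), trimming every request retains most of the value:

```python
import math
import random

def buyup_value(c, c_max, r, k):
    """Value trajectory V(C) = r [1 - exp(-k C / C_max)] / [1 - exp(-k)]."""
    return r * (1 - math.exp(-k * c / c_max)) / (1 - math.exp(-k))

random.seed(3)
projects = [
    {"c_max": random.uniform(50, 150),
     "r": random.uniform(80, 160),      # return at full funding
     "k": random.uniform(0.5, 3.0)}     # curvature: higher = faster saturation
    for _ in range(10)
]
budget = 0.6 * sum(p["c_max"] for p in projects)

# Haircut strategy: trim every project to the same fraction of its request.
fraction = budget / sum(p["c_max"] for p in projects)
haircut_value = sum(
    buyup_value(fraction * p["c_max"], p["c_max"], p["r"], p["k"])
    for p in projects
)
full_value = sum(p["r"] for p in projects)   # value if everything were funded
print(f"haircut at {fraction:.0%} of requests retains "
      f"{haircut_value / full_value:.0%} of full-funding value")
```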
2.5.4 Modeling the Value of Higher Quality on Values

Another PDA technique is scoring projects on multiple attributes, computing total project value as a weighted average of these scores (e.g., Kleinmuntz and Kleinmuntz 1999); weights themselves may be derived from views of multiple
could share the cost of that element if both were pursued, i.e., they have cost synergy. Alternatively, projects can have value synergy, e.g., while each product has a target market, new markets can be pursued only if multiple products are present (e.g., peanut butter and chocolate). In these cases where portfolio cost and value are not actually the sum of individual project cost and value, PDA can add value by identifying such synergies prior to funding decisions. A decentralized PDA will not identify synergies, but where the process has a specific step built in to identify synergies, it will most likely identify those that exist. This is not trivial, as the elements from which potential synergies emerge are not labeled as such – in the example above, it would be necessary to identify the peanut butter cup market, and this would require creative interaction involving the chocolate and peanut butter product teams. The model in Keisler (2005) compares analytic strategies that evaluate all synergies, cost synergies only, or value synergies only against the simplest strategy that considers no synergies and, again, a baseline in which projects are funded or not at random. (This model does not include possible dissynergies, e.g., in weapon selection problems where, if an enemy is going to be killed by one weapon, there is no added value in another weapon that is also capable of killing the enemy.) In this model, each project can require successful completion of one or more atomic cost elements, where S_ik = 1 or 0 depending on whether or not cost element k is required to complete project i. Similarly, R_ij = 1 or 0 depending on whether project i is required to achieve value element j. The jth value element is worth V_j, and the kth cost element costs C_k. With F_i = 1 if project i is funded and 0 otherwise, portfolio value is Σ_j V_j Π_i F_i^{R_ij}, and portfolio cost is Σ_k C_k max_i F_i S_ik. In the simulation, V and C are drawn from known distributions, and R_ij and S_ik have randomly generated values of 0 or 1 with known probability. The relative value added by strategies that comprehend synergies compared to myopic strategies depends on the munificence of the environment (Lawrence and Lorsch 1967). At low levels of actual synergy, value added is small because little is worth funding, and above a saturation point, value added is small because many projects are already worth funding. At a sweet spot in the middle, completely considering synergies can increase portfolio value by more than 100% and in some cases over 300%. In certain cases, comprehension of the possibility of synergies is a substantial improvement over the case where each project is evaluated myopically. Figure 2.6 shows results from a less extreme cluster of projects simulated in this study, where cross-project cost synergies were assumed to be present in 30% of the cost elements in the cluster, and cross-project value synergies were assumed to be present in 10% of the value elements. In this example, the value of identifying synergies is substantial, but far more so if cost and value synergies are both identified. Most important in determining the value added at each level are the prevalence of synergies between value elements and cost elements of different projects, as well as the relative size of value elements to cost elements (and thus the likelihood that projects will merit funding even in isolation).
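A minimal sketch of the model's portfolio arithmetic; the matrices R and S, the element values, and the funding vector are randomly generated placeholders rather than data from the study:

```python
import random

random.seed(4)
n_proj, n_val, n_cost = 5, 7, 8

# R[i][j] = 1 if project i is required to achieve value element j;
# S[i][k] = 1 if cost element k is required to complete project i.
R = [[int(random.random() < 0.3) for _ in range(n_val)] for _ in range(n_proj)]
S = [[int(random.random() < 0.3) for _ in range(n_cost)] for _ in range(n_proj)]
V = [random.uniform(10, 30) for _ in range(n_val)]
C = [random.uniform(5, 15) for _ in range(n_cost)]

def evaluate(F):
    """F[i] = 1 if project i is funded, 0 otherwise."""
    # A value element pays off only if every project it requires is funded
    # (this is the product term: elements requiring no project pay off trivially).
    value = sum(V[j] for j in range(n_val)
                if all(F[i] for i in range(n_proj) if R[i][j]))
    # A cost element is incurred once if any funded project needs it,
    # which is where cost synergy comes from (the max term).
    cost = sum(C[k] for k in range(n_cost)
               if any(F[i] and S[i][k] for i in range(n_proj)))
    return value, cost

print(evaluate([1, 1, 0, 1, 0]))
```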
understanding what drives the cost of a process. Assessment costs ought to vary systematically with the number of assessments that must be done of each type. This way of thinking about decision process costs is similar to the way that computation times for algorithms are estimated – considering how often various steps are deployed as a function of the dimensions of the problem and then accounting for the cost of each step (in simpler decision contexts, psychologists have similarly modeled the cost of computation; Payne et al. 2003). Beyond assessments, there are organizational hurdles to implementing some approaches successfully. The treatment of both issues below is speculative, but in practice, consultants use similar methods to define the budget and scope for their engagements.

Estimating assessment costs

A: number of projects analyzed
B: cost of analysis per project
C: number of alternatives generated per project
D: cost per alternative generated/evaluated
E: number of attributes in value function
F: cost of assessing an attribute's weight
G: cost of scoring a project on an attribute
H: projects per cluster
I: number of value elements
J: number of cost elements
K: cost per synergy possibility checked

Analysis for estimation of project productivity: A × B. For example, with a portfolio of 30 (= A) projects for which productivity must be estimated at a cost of $5,000 (= B) of analysis per project, the cost of analysis would be $150,000.

Analysis for estimation of intermediate alternatives: A × C × D. For example, with a portfolio of 30 (= A) projects, each having a total of 3 (= C) nonzero funding levels whose cases must be detailed at a cost of $10,000 (= D) per case, the cost of analysis would be $900,000.

Analysis involving use of multiple criteria: E × F + A × E × G. For example, with a portfolio of 30 (= A) projects being evaluated on 8 (= E) attributes at a cost of $5,000 (= F) per attribute to judge its importance (e.g., translate to a dollar scale), and $1,000 (= G; e.g., the cost of 1 h of a facilitated group meeting with five managers) to score a single attribute on a single project, the cost of analysis would be $40,000 + $240,000 = $280,000.
Analysis to identify synergy: (A/H) × H! × (I + J) × K. For example, with a portfolio of 30 (= A) projects divided into clusters of size 5 (= H), searching for potential synergies among 7 (= I) value elements and 8 (= J) cost elements at an average cost of $50 (= K) per synergy possibility checked (some short interviews, checked with some brief spreadsheet analysis), the cost of analysis would be 6 × 5! × (7 + 8) × $50 = $540,000.

These costs apply only to the parts of the analysis that are done at the most complete level. Frugal strategies replace some of the multiplicative cost factors here with one-time characterizations, and it is also possible to omit some of the analysis entirely. Excluded are certain fixed costs of analysis.
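The four formulas, collected into a sketch using the parameter values from the worked examples above:

```python
from math import factorial

# Parameter values taken from the worked examples in the text.
A = 30      # projects analyzed
B = 5_000   # cost of productivity analysis per project ($)
C = 3       # nonzero funding levels detailed per project
D = 10_000  # cost per alternative generated/evaluated ($)
E = 8       # attributes in the value function
F = 5_000   # cost of assessing an attribute's weight ($)
G = 1_000   # cost of scoring one project on one attribute ($)
H = 5       # projects per cluster
I = 7       # value elements per cluster
J = 8       # cost elements per cluster
K = 50      # cost per synergy possibility checked ($)

print("productivity:   $", A * B)                                  # 150,000
print("alternatives:   $", A * C * D)                              # 900,000
print("multi-criteria: $", E * F + A * E * G)                      # 280,000
print("synergy:        $", (A // H) * factorial(H) * (I + J) * K)  # 540,000
```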
2.7 Observations from Practice and from Other Research

Both the PDQ framework and the associated value-of-analysis-as-value-of-information approach have, in my experience, been useful lenses for examining organizations' portfolio decision processes. At one company (Stonebraker and Keisler 2011), this framework facilitated analysis of the process used at a major pharmaceutical corporation. In general, the data showed (at a coarse level) that the organization was putting more effort in where the value added was higher. We were able to identify specific areas in which it may have spent too much or too little effort developing analytic detail. This led to discussion about the reasons for such inconsistencies – which were intentional in some cases but not in others – and whether the organization could improve its effectiveness in managing those subportfolios. At another major company (Keisler and Townson 2006), analysis of the data from a recent round of portfolio planning revealed that adding more alternatives at the project level would add substantial value to some of the portfolio by allowing important tasks on relatively low-priority projects to go ahead, and that this would be an important consideration for management of certain specific portions of the portfolio. We were able to identify some simple steps to gain some of this potential value with minimal disruption. At this same company, we also considered the measures used for project evaluation and were able to find a simpler set of criteria that could deliver the same value as the existing approach – or better, if some of the measures were better specified. In another application (described at the end of Keisler 2008), a quick look at the portfolio characteristics showed the way to a satisfactory approach, involving a modest amount of detail on criteria and weights, that successfully recovered from an unwieldy design for a portfolio decision support tool. Finally, another PDA effort at a major pharmaceutical company covered multiple business units and involved many projects and products with a substantial
amount of interaction in terms of value and cost; it looked as if it would be very challenging to handle such a volume of high-quality analysis. At the outset of the project, the engagement manager and I discussed the value of identifying potential synergies in PDA. In setting up the effort, the analysis team divided up to consider subportfolios within which synergies were considered most likely. The resulting engagement went efficiently and was considered a strong success.
2.8 Research Agenda

Within this framework, we can discuss certain broad directions for research. Only a few pieces of the longer list of decision quality elements have been modeled. More such simulation and optimization models could enrich our understanding of what drives the value of PDQ. Additionally, the description here of the elements of PDQ can be fleshed out and used to organize lessons about practices that have been successful in various situations. As we consider where to develop more effective analytic techniques, we can focus on aspects of portfolio decisions where the value from increased quality would be highest – that is, as it becomes easier to make a decision process perform at a high level, the "100% quality level" will be higher, as will the total value added by the decision process. Alternatively, in areas where the value added by greater precision and the like is minimal, research can focus on finding techniques – shortcuts – that are simpler to apply. The relative value of PDQ depends on the situation, and it may be that simple and efficient approaches ought to be refined for one application setting, while more comprehensive approaches could be developed for another.
2.9 Conclusions

Interpreting DQ in the context of portfolios allows us to use it for the same purposes as in other decision contexts. We aim to improve decisions as much as is practical by ensuring that all the different aspects of the decision are adequately considered. But we also recognize – especially with portfolio decisions – that many parts of the decision process are themselves costly due to the number of elements involved, e.g., eliciting probability judgments. Resources for decision making (as opposed to resources allocated as a result of the decision) are generally quite limited, and time may be limited as well. Therefore, we can use the DQ checklist to test whether the resources applied to the decision process match the requirements of the situation. To the extent we can think in concrete terms of the drivers of the value added by a decision process, we can use the DQ framework more skillfully. To the extent that portfolio decisions have common characteristics that are not shared by other classes of decisions, more detailed descriptions of their DQ elements help ensure that attention goes to the right parts of the process. As organizations formally integrate
PDA into their planning processes, choices such as what is to be centralized and decentralized, and what is to be done in parallel should relate in a clear way to the overhead cost of the process and to the levels of quality and hence the value added by the process.
References

Argyris N, Figueira JR, Morton A (2011) Interactive multicriteria methods in portfolio decision analysis. In: Salo A, Keisler J, Morton A (eds) Advances in portfolio decision analysis. Springer, New York
Bessant J (2003) Challenges in innovation management. In: Shavinina LV (ed) The international handbook on innovation. Elsevier, Oxford, pp 761-774
Cooper RG, Edgett SJ, Kleinschmidt EJ (2001) Portfolio management for new products, 2nd edn. Perseus, Cambridge
Farquhar PH, Rao VR (1976) A balance model for evaluating subsets of multiattributed items. Manage Sci 22(5):528-539
Fasolo B, Morton A, von Winterfeldt D (2011) Behavioural issues in portfolio decision analysis. In: Salo A, Keisler J, Morton A (eds) Advances in portfolio decision analysis. Springer, New York
Howard RA (1988) Decision analysis: practice and promise. Manage Sci 34(6):679-695
Howard RA (2007) The foundations of decision analysis revisited. In: Edwards W, Miles RF, von Winterfeldt D (eds) Advances in decision analysis: from foundations to applications. Cambridge University Press, Cambridge, pp 32-56
Keeney RL (1996) Value-focused thinking. Harvard University Press, Boston
Keisler J (2004) Value of information in portfolio decision analysis. Decis Anal 1(3):177-189
Keisler J (2005) When to consider synergies in project portfolio decision analysis. UMass Boston College of Management Working Paper
Keisler J (2008) The value of assessing weights in multi-criteria portfolio decision analysis. J Multi-Criteria Decis Anal 15(5-6):111-123
Keisler J (2011) The value of refining buyup alternatives in portfolio decision analysis. UMass Boston College of Management Working Paper UMBCMWP 1048
Keisler J, Townson D (2006) Granular portfolio analysis at Pfizer. Presented at the POMS Conference, Boston
Kiesling E, Gettinger J, Stummer C, Vetschera R (2011) An experimental comparison of two interactive visualization methods for multi-criteria portfolio selection. In: Salo A, Keisler J, Morton A (eds) Advances in portfolio decision analysis. Springer, New York
Kleinmuntz CE, Kleinmuntz DN (1999) A strategic approach to allocating capital in healthcare organizations. Healthcare Finan Manage 53(4):52-58
Lawrence PR, Lorsch JW (1967) Organization and environment. Harvard Business School Press, Boston
Manganelli R, Hagen BW (2003) Solving the corporate value enigma. American Management Association, New York
Matheson D, Matheson JE (1998) The smart organization. Harvard Business School Press, Boston
McDaniels TL, Gregory RS, Fields D (1999) Democratizing risk management: successful public involvement in local water management decisions. Risk Anal 19(3):497-510
McNamee P, Celona J (2001) Decision analysis for the practitioner, 4th edn. SmartOrg, Menlo Park
Montibeller G, Franco LA, Lord E, Iglesias A (2009) Structuring resource allocation decisions: a framework for building multi-criteria portfolio models with area-grouped options. Eur J Oper Res 199(3):846-856
Nissinen J (2007) The impact of evaluation information in project portfolio selection. Master's thesis, Helsinki University of Technology
Owen DL (1984) Selecting projects to obtain a balanced research portfolio. In: Howard RA, Matheson JE (eds) Readings on the principles and applications of decision analysis. Strategic Decisions Group, Menlo Park
Payne JW, Bettman JR, Johnson EJ (2003) The adaptive decision maker. Cambridge University Press, Cambridge
Poland WB (1999) Simple probabilistic evaluation of portfolio strategies. Interfaces 29:75-83
Sharpe PT, Keelin T (1998) How SmithKline Beecham makes better resource-allocation decisions. Harvard Business Rev 76(2):45-57
Spradlin CT, Kutoloski DM (1999) Action-oriented portfolio management. Res Technol Manage 42(2):26-32
Stonebraker J, Keisler J (2011) Characterizing the analytical process used in evaluating the commercial potential of drug development projects in the portfolio for a large pharmaceutical company. In: Salo A, Keisler J, Morton A (eds) Advances in portfolio decision analysis. Springer, New York
Tversky A, Kahneman D (1981) The framing of decisions and the psychology of choice. Science 211(4481):453-458
Watson SR, Brown RV (1978) The valuation of decision analysis. J R Stat Soc A 141:69-78
Chapter 3
The Royal Navy's Type 45 Story: A Case Study

Lawrence D. Phillips
Abstract This chapter presents systems engineering as portfolio analysis carried out with multiple stakeholders who hold different perspectives about the system elements, and where conflicting objectives must be accommodated in deciding what is affordable. A case study of the United Kingdom's Royal Navy Type 45 destroyer shows how a portfolio approach traded off time, cost and performance to arrive at a plausible way forward. The combination of technical system modelling with group processes that engaged all stakeholders enabled a solution to be agreed within only 15 months. This set a record for major equipment procurement in the Ministry of Defence, and saved the contractor 2 years of design work.
3.1 Introduction

Deciding what combination of capabilities for a new ship is affordable, within a budget and delivered in time, provides a near-perfect example of how a portfolio approach to prioritisation and resource allocation can apply to systems engineering and provide an efficient path for procurement. An example is the United Kingdom's Type 45 Daring Class destroyer (Fig. 3.1), a major project that was commended as "innovative" by a National Audit Office report, which went on to say that "...the Department used a Capability Cost Trade-Off Model to determine the optimum affordable capability" (National Audit Office 2002). The Type 45 destroyer was the first major project undertaken under the new Smart Procurement strategy, introduced by the UK's Defence Secretary in 1997, and now known as Smart Acquisition. This approach allowed trade-offs of cost against capability to be considered prior to the design phase, and included through-life
costing of the system. With this approach, the operational requirements ceased to be a straitjacket, making it possible to meet new priorities, accommodate emerging requirements and solve outstanding design problems. Risks were thereby reduced, and it was hoped that uncertainty about costs would be reduced. This chapter explains how this new approach to procurement strategy emerged from a combination of a social process that engaged all the key players in a succession of 1-to-3-day meetings, and a prioritisation model based on multi-criteria decision analysis (MCDA). Three goals governed this approach: (1) to determine the best combination of capabilities that would be affordable (the "solution"), thereby ensuring (2) that the first-of-class ship could be delivered by 2006, and (3) to align the key players to the solution so that subsequent revisions would be minimised.

Fig. 3.1 HMS Daring, the first in class of the United Kingdom's Type 45 destroyers, commissioned 23 July 2009. Copyright © Brian Burnell, made available under Creative Commons Attribution-Share Alike 3.0 Unported license
3.2 Background

The precursor to Type 45 was the Horizon project, a joint effort by Italy, France and the UK that began in 1992 to design a frigate. Differing requirements of the countries soon surfaced, and despite compromises, a more formal structure was required to handle disagreements. An International Joint Venture Company (IJVC) was established to bring each country's contractors together with key naval and civilian personnel in an attempt to agree a plausible and affordable solution.
In 1997, I was commissioned to facilitate meetings of the combined teams, using a structured process of deliberative discourse (Renn 1999), decision conferencing (Phillips 2007) and MCDA modelling (Keeney and Raiffa 1976) to help the group achieve an affordable solution. Three challenging decision conferences of 3-4 days each, carried out in late 1997 and early 1998, with much work between sessions by the ten teams formed to provide support for each of the ship's functions, worked through the many differences of opinion and resulted in a possible solution. At the final decision conference, participants agreed that the decision conference provided a sound foundation for the development of an affordable ship. Despite this progress, in April 1999 the UK withdrew from the project, as it became apparent that the partners differed in their assessments of the level and complexity of the threat, and because the UK wanted to use the technology of the Sampson radar system. The French and Italians continued their collaboration, and the UK's subsequent independent development of the Type 45 adopted the Horizon project's Principal Anti-Air Missile System (PAAMS) and some aspects of the hull design.
3.3 Method

This section explains our approach as process consultants concerned to develop a helping relationship with a client (Schein 1999) – building trust and enabling the client to develop and own the model, thereby helping them achieve their objectives, while also developing a shared understanding of the issues among team members, creating a sense of common purpose, and building commitment to the way forward. We begin by describing our first two meetings, then the kick-off meeting and the workshops. An explanation of the MCDA model and of decision conferencing follows. We describe the decision conferences leading to the solution, and the subsequent specialised decision conferences that resolved a few remaining issues about the ship's propulsion system.
3.3.1 The Initial Meetings and Workshops

From our first meeting on 21 July 1999 with the shipbuilder BAE Systems (which had taken over the Horizon project's UK contractor, Marconi), the Royal Navy customer and the naval architect, Chris Lloyd, it was made clear that the Type 45 must be capable of fitting different systems in the future, so that an upgrade path would have to be accommodated by the ship's design. We showed the model developed in the Horizon project, which demonstrated how solution trade-offs were made. The naval architect said this is how naval architects think – trading off one design solution against another. This persuaded the leader of the 10-week study
period to buy in to the approach. During the meeting, we were apprised of the structure of the study period; the directors of the six areas being studied (commercial, programme, warship design, combat systems, integrated logistics support and habitability); and the many tasks being pursued in each of those areas. At a second meeting 8 days later, we were introduced to participants representing the operational requirements, the procurement authority and industry. We showed the Horizon approach, explained that the final model represented the collective views of all participants, that the model provided insight, not "the right answer," and that it helped the group to achieve their objectives, even if the individual paths differed. Chris Lloyd suggested updated capability solutions to be considered in the revised Horizon model, and he outlined the basic systems, called the "Sump," that were essential (such as the hull, internal structure, electrical and fire-fighting systems) – the starting point for adding systems that would make the ship operational by adding the value needed to realise mission objectives. This kind of initial meeting, with just a few senior key players, is a typical start for a project of this type. Its purpose, from our perspective, is threefold: first, to ensure that the senior decision makers understand and are committed to the sociotechnical process that we will be applying; second, to plan the roll-out of the project; and third, to begin the process of structuring the problem. Participants at these two initial meetings agreed to engage all the leaders of the Integrated Project Teams at a kick-off meeting on 9 August. At this meeting, participants began the process of modifying the structure of the Horizon model to make it suitable for the UK. They agreed definitions of costs and benefit criteria, reviewed the definitions of the ship's main functions, agreed the constituents of the Sump, and began defining new capability solutions for the functions, avoiding use of the phrase "system solutions" because that implied the purchase of a particular system, which at this stage would be premature. Instead, "capability" indicated what performance was needed, not how it would be achieved, thereby avoiding any commitment to a particular system that could deliver the capability. Another 1-day meeting a week later reviewed the previous week's work, made some changes and completed the structure, recognising that subsequent work would require further changes. At this stage, the structure of the model covered 116 capability options distributed very unevenly across 25 functional areas, for a total number of possible whole-ship capability solutions just short of 6 × 10¹⁵. Two costs and five benefits contributed further complexity to the task of finding an affordable whole-ship solution. With a "good enough" structure of functions, capability options, costs and criteria agreed, we facilitated a series of workshops, one for each functional area, in the last week of August. The purpose was to further refine the structure, receive cost data and score the solution options against the criteria. Each workshop was attended by about 20 people, including the Warship Integration civilian, the relevant Team Leader, operational requirements staff, operational analysts, specialists, military and procurement support staff, and contractors knowledgeable about the function under consideration. A final merge meeting was intended to assess trade-off weights and bring everything together in September.
However, this was not what happened.
3.3.2 The Prioritisation Model

For a ship, once the most basic functions are agreed, the key functions that determine the ship's capabilities to achieve missions are defined as separate areas for considering solutions. Many solutions were considered for each function, from modest-performing, low-cost solutions to high-performing, costly solutions. It is, of course, possible to establish a requirement for each function, and then send separate teams away to provide specific solutions that meet the requirements. This approach, however, inevitably imposes the Commons Dilemma: individually optimal solutions rarely, if ever, provide an overall best use of the available resource (Hardin 1968). So, which options constitute the best overall portfolio of solutions so as to generate the most effective warship for the available budget? The answer requires considering trade-offs between solution options, but some solutions meet objectives of performance, time and cost better than others, so we adopted multi-criteria modelling to enable those solution options to be traded off through the exercise of military judgement. The basic principle for prioritising the solutions is embodied in the priority index, PI:

PI = Risk-adjusted benefit / Cost
MCDA is applied to combine all benefits into one preference value for each solution. The preference value is then multiplied by the probability that the capability option will deliver the expected performance. That product is divided by the project's total forward cost to obtain the project's priority index. The overall preference value for each solution requires two types of judgement. The first is a preference value representing the value of the capability option's performance on each criterion, while the second is a set of criterion weights, which are scale constants that ensure the units of value on all criterion scales are equal. For the Type 45 project, the preference values were mostly determined by direct judgements of a group of specialists, civilian and military, knowledgeable about the capability options and the military requirements for them. Often, scores were assessed directly onto preference scales as judged by the military participants, occasionally assisted by operational analysts who had studied performance and effectiveness. At other times, preferences were judged to be simply linear, inverse linear or non-linear value functions related to measurable performance. For example, a specialist group concerned with accommodation assessed an S-shaped value function relating preference values to square metres of deck space for accommodation: low preference scores for cramped accommodation, improving with more square metres, but then levelling off as the provision becomes too hotel-like. In all cases, the capability option or combination of capability options within a function associated with the highest performance was assigned a score of 100, with the least well-performing capability option given a zero. All other capability options
were scored on that 0-100 scale. Participants were reminded that a preference value of zero did not mean that there was no value associated with that capability option, just as 0 °C does not represent the absence of temperature. Consistency checks comparing ratios of differences of preference values helped to ensure the realism and consistency of the preference scaling process. Participants were frequently reminded that a preference judgement associated with a capability option represented the added value the option provides to achieving the criterion's objective. In one case, an officer asked, "What if a capability option gives me more performance than I need?" to which our answer was, "Then it doesn't add value and its preference value should be no more than represents what you need." This was an important idea for the group, as many found it difficult initially to distinguish between performance and the value of that performance. Operational analysts present at the meetings could provide information about performance and technical effectiveness, but none had ever studied the value of that performance and effectiveness to the accomplishment of missions. The MCDA approach showed that while an understanding of performance, and possibly effectiveness, is required for scoring capability options, it is not sufficient without military judgement about the value that performance brings in a military context. The second judgement concerns the scale constants associated with each criterion scale: paired comparisons of swing weights were applied to all the capability options in one function compared to all the capability options in another function. Swing weights represent the comparability of preference units on one criterion compared to another, just as 9 °F equates to 5 °C of temperature. As such, they represent trade-offs between criterion scales, and we put a two-part trade-off question to participants: "First, consider the difference in performance between the lowest and highest performing capability options in this function compared to that function. Now, how much do you care about those differences?" By asking this question repeatedly between pairs of criterion scales, the largest difference that matters is identified and assigned an arbitrary weight of 100 points. Each of the other scales is then compared to that one and assigned a weight equal to or less than 100. It is, of course, a matter for military judgement to assess how much a difference in performance matters to the accomplishment of missions, and the idea of "caring" about a performance difference was quickly grasped by the experienced military participants. Preference values and weights were combined by computer software, Equity¹, which is programmed to calculate the overall preference value, v_i, for each capability option:

v_i = Σ_j w_j v_ij
¹ Developed initially by the Decision Analysis Unit at the London School of Economics, Equity is now available from Catalyze Limited, www.catalyze.co.uk. Equity is used to prioritise options across many areas to create cost-effective portfolios that can inform the processes of budgeting and resource allocation.
The preference value of capability option i on criterion j is given by v_ij, and the weight associated with the criterion is w_j. Multiplying the value by the weight and summing over the criteria gives the overall value. Details about the Equity calculation are given by Phillips and Bana e Costa (2007). Two types of weights are required by Equity. One set compares the scales for a given criterion from one function to the next, known as within-criterion weights. The method of assessing these weights leaves at least one function's scale with a weight of 100. The second set compares those 100-weighted function scales across the benefit criteria, called across-criteria weights. The within-criterion weights ensure that all units of added value are equivalent from one function to the next but only within each criterion, while the across-criteria weights equate the units of added value across the criteria, thereby ensuring that all benefit scales throughout the entire model are based on the same unit of added value. Equity provides a natural structure for classifying possible capability solutions: as options within functions. Twenty-five functions defined areas of required capability, including Propulsion (power generation facility), Stealth (signature reduction), Vulnerability (resistance to shock, blast, fragments, and other threats), Accommodation, Helicopter Facilities, Underwater Detection, and so forth. Capability options for each of these functions spanned the whole range of possibilities, from below the minimum deployable capability to well above requirements. The initial structure of the model, as displayed by Equity, is shown in Fig. 3.2.

Fig. 3.2 The initial structure of the Equity model. Functions are shown in the left column. The adjacent rows show the capability options. Note that for some functions the first option is "None," whereas for others it is the least-performing, lowest-cost capability option. The options in many areas, such as Propulsion, are mutually exclusive: one and only one will be chosen for inclusion in the portfolio. For other functions, options prefixed with a "+" sign are cumulative: any number of those options might be included, or even none if that is shown in column 1
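Equity itself is proprietary software; the following is only a toy illustration of the arithmetic described above – the options, criteria, weights, scores, delivery probabilities, and costs are all invented. Each option's weighted preference value is risk-adjusted and divided by cost to give its priority index:

```python
# Hypothetical capability options: 0-100 preference scores on each criterion,
# probability of delivering as expected, and forward cost (arbitrary units).
criteria_weights = {"performance": 100, "growth": 40, "time": 25}
total_w = sum(criteria_weights.values())

options = {
    "option A": ({"performance": 100, "growth": 60, "time": 80}, 0.7, 90),
    "option B": ({"performance": 70, "growth": 90, "time": 100}, 0.9, 60),
}

for name, (scores, p_deliver, cost) in options.items():
    # Weighted sum of preference values, v_i = sum_j w_j v_ij (normalized here).
    v = sum(criteria_weights[c] * s for c, s in scores.items()) / total_w
    pi = p_deliver * v / cost   # priority index: risk-adjusted benefit per unit cost
    print(f"{name}: value {v:.1f}, PI {pi:.3f}")
```

Note that the lower-scoring option B can earn the higher priority index once risk and cost are taken into account, which is exactly the behaviour the PI is designed to capture.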
3.3.3 The Decision Conferences

Scoring the options and weighting the criteria in August raised many questions that required modifications to the Sump, clarification of assumptions, improved definitions of the functions and better precision in defining the capability options. It was clear by the middle of August that a "preliminary" decision conference (held without the attendance of any senior officers) would be required "in order to weed out the areas requiring more work by the industry/project/customer team and start the pricing process, before a further conference ... that will begin to firm up on the design." That wording of the 24 August invitation to the preliminary decision conference proved to be prophetic. It took one further decision conference, several workshops and a shorter decision conference in 2002 to realise a finally agreed affordable solution. The preliminary decision conference, held over 3 days – 2, 3 and 6 September (the break was a weekend) – spent the first day-and-a-half reviewing and revising options, costs and scores for the capability options in all functions. A social process, the "nominal group technique" (Delbecq et al. 1974), was applied to assessing weights in the afternoon. First, paired comparisons enabled the entire team to identify, for the Performance criterion, the function whose capability options would provide the most added preference value from performance; the selected function was given a weight of 100 points. Next, another function was chosen, and all team members
were asked to privately write down their judged weight, compared to the 100, for the added value of that function's capability options. The facilitator then quickly constructed on a flip chart a frequency distribution for the full range of weights by asking for a show of hands from all those assigning a weight greater than 100, equal to 100, in the 90s, the 80s, and so on down until all views had been obtained. The members assigning the highest and lowest weights were then asked for their reasons, followed by discussion about the weights by the whole group. After a brief discussion, the final judgement was made by the senior naval officers representing those responsible for the acquisition of the ship and the eventual users. Initially, this process was difficult for the group, but as it enabled everyone to express a view and minimised anchoring phenomena – the first person to speak can bias the entire group (Roxburgh 2003; Tversky and Kahneman 1974) – it was adopted as standard procedure, and became easier. A first run of the model and examination of the benefit-cost curves for each function showed some disparities that were the subject of much discussion the next morning, provoking further changes and additions. By the end of the day, the group felt that the model was now plausible, but still needed further work, especially on costs. The next decision conference engaged senior officers, who sat at a separate table and provided the final judgements when disagreements, especially about weights, arose.
3.4 Results

This section explains the cost and benefit criteria against which the capability options were evaluated in the preliminary decision conference, the results obtained with that model, the further development of the model over the next decision conference, the subsequent workshops, a follow-through workshop, and the final results.
3.4.1 Cost and Benefit Criteria

Two costs were represented in the Equity model:

1. Unit Production Cost (UPC). The incremental total cost (recurring for every ship) of each capability solution, including the cost of installation and setting to work, for one first-of-class ship.
2. Non-Recurring Cost (NRC). This includes the cost of developing each solution, system engineering costs, facility development and production, documentation, the initial logistics package, initial training, and first-of-class trials.

The Equity model took a weighted sum of these costs. The weight on UPC was taken as 90% of the stated figure; this represented the effect of learning across the
whole class. A weight of 9% was used for NRC to spread the development costs across a class of 12 ships, thereby not penalising the first ship by requiring it to take the full NRC. It is important to note that a substantial portion of the whole-ship cost is hidden from the Equity model in the Sump. Five benefit criteria were included in the model. The first two captured the extent to which a capability solution is expected to add value, and the remaining three acted as penalties or dis-benefits.

1. Performance. The added value of a capability option was assessed against subcriteria associated uniquely with each function. With this definition, the common performance criterion could be defined differently for each function.
2. Growth Potential. The extent to which a capability option provides potential to grow to provide a better solution with more capability. (This was considered to be a radical criterion, for it would be possible to trade out capability to gain more growth potential.)
3. Time to Deliver Solution. Score 100 if the option supports at least a 90% chance of a ship in-service date (ISD) of September 2007; score 0 if the option has a 50% or less probability of meeting that date. This analysis was done for the first-of-class only.
4. Risk. The probability that the solution will deliver the expected performance, at the expected cost. A logarithmic proper scoring rule was applied to convert the probabilities into a negative penalty score (Bernardo and Smith 1994, Sect. 2.7.2, "The Utility of a Probability Distribution"), thereby providing proportionately more negative penalty scores for low probabilities.
5. Logistic Cost. This includes fuel, ammunition, maintenance, training, manning and lifetime support over a 25-year ship life. Evaluations were derived from rating models related to each function's performance criteria, with the total rating scores converted to preference-scale judgements – the higher the cost, the lower the preference.

These last three penalty criteria made it possible for their effects to reduce the positive benefits so substantially that the prioritisation index could be very low – even negative, if the negative score from the Risk criterion exceeded the benefits. As the MoD had been accused by the National Audit Office of failing to control risk on previous major projects, the idea that risks could outweigh benefits was appealing to the Type 45 team.
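The chapter does not give the exact transformation used for the Risk criterion; the following sketch, with a hypothetical scale factor, shows the general behaviour of a logarithmic penalty – zero at p = 1 and disproportionately negative as the probability of delivery falls:

```python
import math

def risk_penalty(p_deliver, scale=20.0):
    """Illustrative log-score penalty (scale factor is hypothetical):
    0 when p = 1, increasingly negative as p falls toward 0."""
    return scale * math.log(p_deliver)   # log(p) <= 0 for p in (0, 1]

for p in (1.0, 0.9, 0.5, 0.2):
    print(f"p = {p:.1f} -> penalty {risk_penalty(p):+.1f}")
```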
3.4.2 Results of the Preliminary Decision Conference On completion of the scoring and double-weighting assessments, the model was complete and Equity’s calculations left every capability option characterised by a
² See Section 2.7.2, "The Utility of a Probability Distribution."
Fig. 3.3 The prioritisation triangle (vertical axis: risk-adjusted benefit; horizontal axis: cost; the slope of the hypotenuse indicates value for money)
Fig. 3.4 Benefits versus costs for the Accommodation options
single cost and single risk-adjusted benefit, and, therefore, a priority score, shown graphically in Fig. 3.3 – the higher the ratio of benefit to cost, the steeper the slope, and the better the value for money. We started the process of examining results by looking at the triangles for each function to see if the priorities there made sense. For example, the Accommodation function is shown in Fig. 3.4. There, each triangle represents an additional cost and benefit relative to the previous triangle, on which it depends for its added value – they are not separate, independent projects whose triangles can be arranged in order of declining benefit to yield a wholly concave function. In addition, the starting point, option 1, is a bare-minimum standard of accommodation. Although the Accommodation benefit–cost function looked about right to the team, many of the others revealed inconsistencies. Some were very jagged: more cost delivered less benefit in many cases. In others the penalties outweighed the benefits, giving negative slopes.
Still others showed how decisions about the first-of-class might be affected by the September 2007 ISD. Curves that were mostly convex instead of concave suggested that the options might be improved for that function. These curves stimulated several suggestions about how the options should be revised, which demonstrates the importance of being able to show these graphs separately for each function. One of the obstacles to obtaining better cost data was a vicious circle: team members who were potential contractors were reluctant to provide costs for the options until they received an exact specification for a system from the Royal Navy. But the naval attendees could not do that without a better idea of what a specification might cost. Each side, in the absence of better guidance from the other, felt they would be guessing. To overcome the stalemate the naval officers pointed out that systems were not being costed – capabilities were. The facilitators pointed out that the Equity model is not sensitive to precise costs, that "ball-park" estimates are sufficient. And the armed forces specialists in costs asked that only representative, realistic costs be presented by the contractors. The contractors realised that if they gave costs that were too high, that could call into question the viability of the project, and could lead to its cancellation. If the contractors gave costs that were too low, they might eventually be held to them. Everyone knew that costs had to be squeezed, now and in the future. Realism was encouraged. Equity next combined the triangles across all the functions into one overall prioritisation, which enabled the group for the first time to see whole-ship solutions. They discovered that a high proportion of high-priority solutions generated benefits out of proportion to their costs. It also showed that many capability solutions cannot be justified on the basis of the benefit they generate for the cost; more beneficial solutions exist for the same or less cost. Overall, it was clear that too much capability was required, too little money was available, and too little time was left to deliver the first ship. The group proposed a possible whole-ship solution, i.e. a portfolio of proposed options, but the model showed that better solutions at no additional cost were available. However, the better solutions often went beyond requirements for some functions while not going far enough in others, so they were unacceptable to the group. Participants then introduced constraints on the model to prevent unacceptable combinations, and an affordable whole-ship solution emerged for ships later in the class. Further constraints and a reduction in cost provided a ball-park view of a first-of-class ship. The model indicated how this solution could be substantially improved with some additional funding. Sensitivity analyses conducted by changing weights and scores showed that the same solutions resulted for many of the functions, so the solutions for these functions would be largely unaffected by new information. This finding helped the group to suggest functions for which further study of the options, and their costs and benefits, would be warranted.
Fig. 3.5 The position of the Class solution at £152 million
3.4.3 Results of the Decision Conference Key flag officers attended this final decision conference, the purposes of which were to update the Equity model and examine possible affordable solutions for the Class and first-of-class. Participants began by reviewing capability solutions for the same 25 ship functions considered in the preliminary decision conference. Updates provided before the start of the decision conference by the Integrated Project Team included a substantially expanded Sump and additional assumptions, several new or redefined options, and revisions of costs and benefits. The decision conference began with a review of these changes and additions, with participants ensuring that this new material was realistic and consistent across all functions. The group then re-assessed the trade-offs between the functions as new within-criterion weights, but these did not signal the need for a change to the across-criteria weights. In examining the results, the group started with the Class model (which placed zero weight on the Time criterion), and then examined a plausible best solution at the cost target. The affordable portfolio, at point F for a budget of £152 million (recall that this is not the whole-ship cost, only the cost of the capability options), is shown in Fig. 3.5. The options included in that portfolio are shown to the left of the solid vertical lines in Fig. 3.6. Next, the group repeated this process by including the Time criterion to find a best solution for the first-of-class at its cost target of £280 million. The resulting
Fig. 3.6 The Class solution at £152 million: all capability options to the left of the solid vertical lines
portfolio was wholly unsatisfactory to the group, as it precluded migration paths to the Class solution for some of the functions. This proved to be a major turning point for the group as they began an intensive process of using the model to develop compatible first-of-class and Class solutions. They began by blocking off high-priority options in the first-of-class model that limited the migration path, caused incompatible combinations, were clearly unacceptable or were thought to provide unnecessary extra benefit. These changes caused a reconsideration of the Class model, so several iterations had to be completed before consistency was achieved between the two models. By the fifth iteration, plausible solutions emerged that were affordable. Figure 3.7 gives a clear graphical picture of the constraints that led to the affordable solution. The result of eliminating these solutions from the portfolio can be seen in Fig. 3.8. The possible portfolios are shown on the upper surface of the inner figure, and the Class solution lies at the knee of the inner curve. Immediately after the decision conference, a smaller number of key players carried out numerous sensitivity analyses, changing some within-criterion and across-criteria weights, to identify configurations that would improve on the first-of-class solutions. Their efforts confirmed an uncomfortable truth: under the current budget, the 12 ships would have to go to sea without either a gun or sonar. Keeping the budget fixed, the ten lowest-priority capability options would have to be forgone to pay for the gun (Fig. 3.9). In a November briefing, the Royal Navy's Commander-in-Chief said that this was not acceptable, though he was clear that the other recommendations about the Type 45 configuration could be used as the basis for agreeing next steps. By January 2000, new information had come to light suggesting that instead of the gas turbine propulsion system that had been agreed at the second decision conference, a fully integrated electrical propulsion system could now be considered. To test the feasibility of this type of system, 23 participants from the MoD and industry gathered for a third decision conference, on 25–26 January, to evaluate five propulsion options: (1) a low-risk option that provided a benchmark, (2) a variation on the system used in the Type 23 frigate, (3) the gas turbine system from the previous decision conference, (4) a variation on the gas turbine system and (5) the electric propulsion technology. These options were evaluated against five costs, eight benefit criteria, and two risk criteria, with cost estimates provided by a Rolls Royce/Alstom industry report, preference values for the benefit criteria assessed by participants on 0–100 scales as in the Equity model, and probabilities of success judged by the group for the two risk scales. These data were input to the Hiview computer program³, which converted cost data to preference values by applying an inverse linear transformation, and a logarithmic scoring rule transformed the probabilities to negative preference values. Swing-weights assessed by participants equated the units of added value on
³ Hiview is used for evaluating options on many criteria. Like Equity, it is a commercial product available from Catalyze Limited, www.catalyze.co.uk.
Fig. 3.7 The Class solution at the end of the decision conference. The lightly shaded capability solutions to the left of the solid vertical lines were blocked out of the whole-ship portfolio if they hindered the migration path, or were clearly unacceptable for other reasons. The lightly shaded solutions to the right of the vertical lines were blocked if they provided low or unnecessary added value. The up arrow indicates the next priority, the down arrow the previous one
Fig. 3.8 The position of the Class solution at £152 million, shown at point F, with constraints imposed on the model. The outer curve is the original, unconstrained result
all scales. The overall weighted scores revealed a surprise: the electrical propulsion system was both lowest in overall cost and best in overall risk-adjusted benefit. This conclusion withstood numerous sensitivity analyses on the weights of the criteria. In January 2002, a fourth 2-day facilitated decision conference of 23 participants from the MoD and BAE Systems reviewed priorities in light of new information, as work was soon to begin on building the first three ships. All the functions were reviewed, scores changed as required, and new weights assessed. BAE Systems made further slight refinements after the meeting, and on re-running the model found that very little had changed. At a half-day follow-through meeting in mid-February, the author helped the four BAE Systems participants to use the model to develop a short-term plan.
3.5 Discussion From the UK's withdrawal from the Horizon Project in April 1999, which signalled the "Initial Gate" for the planning start of the Type 45, through to the "Main Gate" approval by the MoD and subsequent announcement by the UK's Defence Secretary in July 2000, just 15 months passed, a record for a major equipment programme
Fig. 3.9 Forcing the 155 mm gun into the affordable portfolio (right-most option in MCGS) causes Equity to trade out ten lowest-priority options (darker shading) in order to fund the gun
in the MoD. Why did this happen so quickly? Of course, the ship configuration from the Horizon project provided a head start, but a major reason is that all the key players were aligned at the end of the decision conferencing process. As a result, few changes were made during the subsequent design process, and the built ships show pretty much the same configuration as was agreed at the end of the decision conferencing process. We believe that five features contributed to the success of the project. First, all the key perspectives on the capabilities of an affordable ship were represented by participants in the decision conferences – people were not just brought into meetings as they were needed. At first, we met considerable resistance to the idea of meetings that lasted 2 or 3 days, and it was necessary for senior military staff to mandate attendance. Very quickly, though, participants discovered the value of interacting with others as it became evident that decisions about one system often created knock-on consequences elsewhere, and as new insights developed that affected the ship's capabilities. Second, it was possible to keep working meetings of 20–25 people on track and focussed through the efforts of the facilitators – the author and two Royal Naval personnel, one of whom supported the computer modelling, while the other worked with the author in facilitating the group. The discussion moved from one part of the MCDA model to another, enabling the group to maintain focus on one piece of the overall picture at a time. Third, the modelling enabled military judgement to be made explicit, as in the case of the S-shaped value function for accommodation space. Indeed, the accommodation team argued during the first two decision conferences about the importance of good accommodation to morale, which led to the development of the value function. (A judgement later confirmed by the glowing pride of a seaman as he showed his personal space on the Daring to the interviewer in a national network TV programme broadcast in 2010.) Fourth, working together and building trust enabled the group to circumvent the potential impasse between being specific about requirements and costs. Initially, industry asked for detailed requirements to develop realistic costs, but the MoD needed to know what level of performance industry could provide before being too specific about requirements. By exploring capabilities rather than specific system requirements, and approximate costs for the capabilities, it became possible to break this stalemate. Fifth, the senior Navy staff effectively used the MCDA model to "steer" the answer to better and more realistic solutions. After seeing the incompatibility between the first-of-class and Class solutions, they tried out different judgements, looked at the results, made new judgements, saw those results, shaping and reshaping their judgements until they arrived at solutions that were consistent and realistic. Indeed, all participants were constructing their preferences throughout the process (Fischhoff 2006; Lichtenstein and Slovic 2006), learning from each other, gaining new perspectives, trying out novel ideas, developing a shared understanding of the issues and gaining a higher-level view of the whole technical system. Everyone recognised there was no single "best" solution that would be revealed by the MCDA
model; rather, they saw the model as a way to test ideas, to try out different capability solutions before having to commit to purchasing specific systems, and to find the collectively best solution possible. We had not anticipated the extensive re-working that characterised this exploration of an affordable configuration. Although the project began with a thorough study to gain agreement about the Sump, this was revised over and over again at successive workshops and decision conferences. The 25 functional areas of required ship capability were defined early in the project, but by the third decision conference had been reduced to 18. Objectives were agreed early and translated into four criteria, the most important one being performance, which was defined differently for each functional capability area, and these continued to be refined during the project. Logistic costs proved a particularly difficult challenge because there were, of course, no data available for such a new type of ship. As the developing MCDA model only required relative scores for the benefits, a separate group developed a point-scoring model that saw revisions at each successive decision conference. Again, the value of working as a group became evident, as new ideas developed. It is clear to us that we were engaged in a socio-technical system. The many workshops and decision conferences engaged all the key players, and the group processes helped them to construct coherent preferences that contributed to an overall integrated solution. Each participant spoke from his or her own area of expertise, but was subject to instant peer review by the whole group. Many inputs were changed as this process of judging and reviewing took place, a process that helped the group construct new, robust solutions before they had to be tested in reality. The model played a role similar to that of an architect's model (Gregory et al. 1993): representing a to-be-constructed future, which could be modified, shaped and improved by examining the overall result, eliminating inconsistencies, improving sub-system designs, preventing over-designing for excess capability, and keeping costs and risks under control.
3.6 Conclusions The original plan for 12 ships was reduced to eight ships in 2003, but delays and rising costs no doubt contributed to the decision in June 2008 to further scale back the programme to six ships. HMS Daring was launched on 1 February 2006 and commissioned on 23 July 2009, after extensive sea trials, and the second ship, HMS Dauntless, was commissioned on 3 June 2010. At this writing, the next four ships were either under construction at BAE Systems Surface Fleet, fitting out, or in sea trials. Perhaps the best people to provide an overall view of this project are those who were directly involved and responsible for delivering a solution. I asked four key participants to provide their views about the decision conferencing/MCDA modelling process; their roles in the Type 45 project are in parentheses.
Rear Admiral Philip Greenish (from 1997 to 2000, Director of Operational Requirements, Sea Systems, and then, following reorganization, Director of Equipment Capability, Above Water Battlespace) We were faced with a very urgent need to define an affordable capability for the Type 45 destroyer when the UK withdrew from the Horizon Project. We were unlikely to have sufficient funding to provide the full Horizon capability – at least in the first of class – and we did not have time for traditional approaches to trading cost and capability. With some limited experience in the use of decision conferencing and multi-criteria decision analysis, we embarked on what could best be termed a voyage of exploration and discovery. The result was extraordinary: consensus was achieved across all aspects of capability with a rapidity that was scarcely believable for such a complex scenario and with such difficult constraints. Furthermore, the decisions made [were] then sustained. It was widely agreed by all participants that there had been a thorough, comprehensive and open debate which had been supported by an appropriate methodology that everyone had bought in to. It is a pleasure to see the first ships of the class at sea looking pretty much as we had envisaged.
Rear Admiral Richard Leaman (Assistant Director Above Water Battlespace – responsible for requirements definition for the T45, Future Carrier and Future Escort) When initially introduced, this process was sorely needed, and ground-breaking thinking for the MoD. For the first time, highly complex decisions on cost-capability trade-offs could be made in a consensual, analytical and structured way. Whilst not everybody agreed with the final results, all the key stakeholders had been given the chance to argue their case, and then had no option but to accept the broad conclusions that emerged. A great tool and great process; I have used it successfully several times since – in a $500 million complex NATO transformation programme, all the way to a £60 million National charity budget. Skilled facilitation is critical – I would recommend the reader chooses a contractor very carefully.
Captain Malcolm Cree (Requirements Desk Officer, Equipment Capability Customer, Above Water Battlespace Directorate) I took over desk-level MoD responsibility for reconciling the "requirements" (performance), time and cost conundrum of the Type 45 Destroyer project from the moment the UK pulled out of the tri-national Horizon Frigate programme. I was faced with a new programme, a very well developed and over-ambitious set of requirements, an extremely demanding timeline and a budget that was insufficient to deliver the capability required in the time allowed. However, the MoD had just introduced "Smart Procurement," and Type 45 was the first major project that would face approval under this new regime. I was initially sceptical about using externally facilitated Decision Conferencing to help decipher the conundrum but I rapidly became a fervent supporter and helped to bring the key decision makers in the MoD on side with the process. The critical factor was that everyone involved – the IPT, MoD, Fleet HQ and Industry – had a common interest in succeeding, because failure to solve the problem would have seriously threatened the entire project. The MCDA approach drove all parties, who would not normally have engaged in an open, honest debate, to recognise the viewpoints of others and realise what was vital, important, desirable and superfluous, enhancing understanding of the issues, generating a sense of common purpose and eventually a commitment to the way forward. The Prime Contractor was obliged to produce realistic capability solutions and costs; over-estimating would have made the project unaffordable, under-estimating would have laid them open to serious criticism down-stream. I suspect that they became as intellectually and socially committed to the process as the MoD participants.
I was able to argue the case for longer term "growth paths" and for including capabilities that were of minimal cost if "designed in" but would have cost much more to "add in" later (such as the requirement to carry an embarked military force of up to 60 troops and strengthen the flight deck to enable a Chinook to land on it). Several capabilities that formerly would have been regarded as unaffordable "nice to have" options became highly cost-effective realities. We were also able to justify building in "growth potential," such as space for additional (and longer) missile silos for a range of possible weapons – the common sense of including this "future-proofing" at the design stage would simply not have been possible without the rigour of MCDA and the inclusion of "soft" criteria. The end result made "selling" the plan to senior decision makers, and the production of a convincing Main Gate Business Case, much simpler. The project sailed through Main Gate and won a commendation from the Board and its Chairman (the Chief Scientific Advisor). I am convinced that this would not have been possible with a traditional procurement approach. The use of MCDA and Decision Conferencing did not stop there; with the support of several senior officers, I was able to roll out the process across the whole Above Water Battlespace Directorate, enabling the prioritisation of all the current and future projects. The process was then applied across the whole of the central Defence procurement budget. The former worked well because fundamentally we all shared a desire to achieve the best from the money available for the Royal Navy, even to the point of accepting that projects individuals had worked on for years were not the best use of resources after all. The latter also worked but was poorly applied in many areas and fell foul of inter-service rivalry and the lack of robust, independent facilitation. Decision Conferencing supported by Equity and Hiview, with experienced independent facilitation, for Type 45 and later work in the MoD enabled complex problems to be solved and encouraged real teamwork. Some of the results were undermined by subsequent MoD savings (6 ships instead of 12) but when I first embarked and went to sea in HMS DARING, I was able to point out to the Captain all the things that had only been made possible by a process, a model, a decision analyst and a group of sufficiently enlightened people with a common aim. I have used MCDA and Decision Conferencing many times since, from prioritising savings and enhancement measures for the Fleet Command to deciding the optimum Base Port arrangements for the Navy's submarines and mine countermeasures vessels. The steps of MCDA have much in common with the military "appreciation" or "campaign planning" process (that we tend to ignore when faced with a budget or a programme instead of an enemy!). When applied with care, MCDA is the most helpful process I have come across in many years of appointments in which prioritisation and decision making were the key requirements. I only wish it worked at home!
Chris Lloyd (Type 45 Design Authority & Engineering Director, BAE Systems, 1999–2003) Traditional conceptual design trade-off studies can take many months to laboriously iterate through numerous design solutions and costing exercises. One of the great benefits of Decision Conferencing was its ability to mix and match the major capability elements and achieve a near instant assessment of the relative performance and cost. In this respect, the sensitivity analysis was crucial to build confidence that simple models were suitable for even the most complex problems. The other key element was the concept of agreeing a "target cost" and focussing on what could be afforded rather than fixing the scope and commercially negotiating the "best" price. Whilst common in commercial sectors, this was, and still is, a novelty in Defence. Another revelation of Decision Conferencing to the Type 45 Programme participants was how it dealt with the subjectivity of capability requirements, prioritisation and arriving at shared conclusions. This "social" process of joint workshops with all leading stakeholders was fundamental to its success. In my 25 years in naval procurement, it was the first and
only time that I have seen Flag Officers openly debating the relative merits of radars and electronic warfare systems with senior procurement managers, technical specialists and ship designers. The exchange of knowledge and thus convergence on a jointly understood solution was phenomenal. Much is made of Integrated Project Teams (IPT) and Customer/Industry collaboration in Defence, but this is one of the few occasions where I have seen an approach that actually steers the participants towards a conclusion in days, not weeks or years. Acknowledgements I am grateful to Malcolm Cree and the editors for helpful suggestions that improved the manuscript. I also belatedly thank the many participants in the decision conferences, particularly Commander Dean Molyneaux and Lieutenant Commander Bill Biggs, who assisted in facilitating the decision conferences, Commodore Philip Greenish, who ably led participants through the tangled web of conflicting objectives, Brigadier General Keith Prentiss, who continued to exert pressure on costs, and Captain Joe Kidd, Captain Joe Gass and Commander Malcolm Cree, who championed the process from the start and saw it through to the final propulsion decision conference.
References

Bernardo JM, Smith AFM (1994) Bayesian theory. Wiley, Chichester
Delbecq A, Van de Ven A, Gustafson D (1974) Group techniques for program planning. Scott Foresman, Glenview
Fischhoff B (2006) Constructing preferences from labile values. In: Lichtenstein S, Slovic P (eds) The construction of preference. Cambridge University Press, New York
Gregory R, Lichtenstein S, Slovic P (1993) Valuing environmental resources: a constructive approach. J Risk Uncertain 7(2):177–197
Hardin G (1968) The tragedy of the commons. Science 162:1243–1248
Keeney RL, Raiffa H (1976) Decisions with multiple objectives: preferences and value tradeoffs. Wiley, New York
Lichtenstein S, Slovic P (eds) (2006) The construction of preference. Cambridge University Press, New York
National Audit Office (2002) Ministry of Defence major project reports. The Stationery Office, London
Phillips LD (2007) Decision conferencing. In: Edwards W, Miles RF, von Winterfeldt D (eds) Advances in decision analysis: from foundations to applications. Cambridge University Press, Cambridge
Phillips LD, Bana e Costa CA (2007) Transparent prioritisation, budgeting and resource allocation with multi-criteria decision analysis and decision conferencing. Ann Oper Res 154(1):51–68
Renn O (1999) A model for an analytic-deliberative process in risk management. Environ Sci Technol 33(18):3049–3055
Roxburgh C (2003) Hidden flaws in strategic thinking. McKinsey Quarterly (2)
Schein EH (1999) Process consultation revisited: building the helping relationship. Addison-Wesley, Reading
Tversky A, Kahneman D (1974) Judgment under uncertainty: heuristics and biases. Science 185:1124–1131
Part II
Methodology
Chapter 4
Valuation of Risky Projects and Illiquid Investments Using Portfolio Selection Models

Janne Gustafsson, Bert De Reyck, Zeger Degraeve, and Ahti Salo
Abstract We develop a portfolio selection framework for the valuation of projects and other illiquid investments for an investor who can invest in a portfolio of private, illiquid investment opportunities as well as in securities in financial markets, but who cannot necessarily replicate project cash flows using financial instruments. We demonstrate how project values can be computed using an inverse optimization procedure and prove several general analytical properties for project values. We also provide an illustrative example on the modeling and pricing of multiperiod projects that are characterized by managerial flexibility.
4.1 Introduction Project valuation and selection has attracted plenty of attention among researchers and practitioners over the past few decades. Suggested methods for this purpose include (1) discounted cash flow analysis (DCF, see e.g. Brealey and Myers 2000) to account for the time value of money, (2) project portfolio optimization (see Luenberger 1998) to account for limited resources and several competing projects, and (3) options pricing analysis, which has focused on the recognition of the managerial flexibility embedded in projects (Dixit and Pindyck 1994; Trigeorgis 1996). Despite the research efforts that have been made to address challenges in project valuation, traditional methods tend to suffer from shortcomings which limit their practical use and theoretical relevance. For example, DCF analysis does not specify how the discount rate should be derived so as to properly account for (a) the time value of money, (b) risk adjustment implied by (1) the investor's risk aversion (if any) and (2) the risk of the project and its impact on the aggregate risk faced
by the investor through diversification and correlations; and (c) opportunity costs imposed by alternative investment opportunities such as financial instruments and competing project opportunities. Traditionally, it is proposed that a project's cash flows should be discounted at the rate of return of a publicly traded security that is equivalent in risk to the project (Brealey and Myers 2000). However, the definition of what constitutes an "equivalent" risk is problematic; in effect, unless a publicly traded instrument (or a trading strategy of publicly traded instruments) exactly replicates the project cash flows in all future states of nature, it is questionable whether a true equivalence exists. (Also, it is not clear whether such a discount rate would account for anything other than the opportunity costs implied by securities.) Likewise, traditional options pricing analysis requires that the project's cash flows are replicated with financial instruments. Still, most private projects and other similar illiquid investments are, by their nature, independent of the fluctuations of the prices of publicly traded securities. Thus, the requirement of the existence of a replicating portfolio for a private project seems unsatisfactory as a theoretical assumption. In this chapter, we approach the valuation of private investment opportunities through portfolio optimization models. We examine a setting where an investor can invest both in market-traded, infinitely divisible assets and in lumpy, nonmarket-traded assets, so that the opportunity costs of both classes of investments can be accounted for. Examples of such lumpy assets include corporate projects, but in general the analysis extends to any nontraded, all-or-nothing-type investments. Market-traded assets include relevant assets that are available to the investor, such as equities, bonds, and the risk-free asset, which provides a risk-free return from one period to the next. In particular, we demonstrate how portfolio optimization models can be used to determine the value of each nontraded lumpy asset within the portfolio. We also show that the resulting values are consistent with options pricing analysis in the special case that a replicating portfolio (or trading strategy) exists for such a private investment opportunity. Our procedure for the valuation of a project resembles traditional net present value (NPV) analysis in that we determine what amount of money at present the investor equally prefers to the project, given all the alternative investment opportunities. Because this equivalence can be determined in two similar but different ways (where the choice of the appropriate way depends on the setting being modeled), we present two pricing concepts: the breakeven selling price (BSP) and the breakeven buying price (BBP), which have been used frequently in decision analytic approaches to investment decision making and even other settings (Luenberger 1998; Raiffa 1968; Smith and Nau 1995). Regarding the investor's preferences, we seek to keep the treatment quite generic by allowing the use of virtually any rational preference model. The preference model merely influences the objective function of the portfolio model and possibly introduces some risk constraints, which do not alter the core of the valuation procedure, on condition that the preference model makes it possible to determine when two investment portfolios are equally preferred.
In particular, we show that the required portfolio models can be formulated for both expected utility maximizers and mean-risk optimizers.
To illustrate how the project valuation procedure can be implemented in a multiperiod setting with projects that are characterized by varying degrees of managerial flexibility, we use contingent portfolio programming (CPP, see Gustafsson and Salo 2005) to formulate a multiperiod project-security portfolio selection model. This approach uses project-specific decision trees to capture real options that are embedded in projects. Furthermore, we provide a numerical example that parallels the example in Gustafsson and Salo (2005). This example suggests that while there is a wide array of issues to be addressed in multiperiod settings, it is still possible to deal with them all with the help of models that remain practically tractable. Relevant applications of our valuation methodology include, for instance, pharmaceutical development projects, which are characterized by large nonmarket-related risks and which are often illiquid due to several factors (see Chapter 13 in this book). For example, if the project is very specialized, the expertise to evaluate it may be available only within the company, in which case it may be difficult for outsiders to verify the accuracy and completeness of evaluation information. Also, details concerning pharmaceutical compounds may involve corporate secrets that can be delivered only to trusted third parties, which may lower the number of potential buyers and decrease the liquidity of the project. This chapter is structured as follows. Section 4.2 introduces the basic structure of integrated project-security portfolio selection models and discusses different formulations of portfolio selection problems. In Section 4.3, we introduce our valuation concepts and examine their properties. Section 4.4 discusses the valuation of investment opportunities such as real options embedded in the project. Section 4.5 gives an example of the framework in a multiperiod setting, which allows us to compare the results with a similar example in Gustafsson and Salo (2005). In Section 4.6, we summarize our findings and discuss implications of our results.
4.2 Integrated Portfolio Selection Models for Projects and Securities 4.2.1 Basic Model Structure We consider investment opportunities in two categories: (1) securities, which can be bought and sold in any quantities, and (2) projects, which are lumpy, all-or-nothing-type investments. From a technical point of view, the main difference between these two types of investments is that the projects' decision variables are binary, while those of the securities are continuous. Another difference is that the cost, or price, of securities is determined by a market equilibrium model, such as the Capital Asset Pricing Model (CAPM, see Lintner 1965; Sharpe 1964), while the investment cost of a project is an endogenous property of the project.
Portfolio selection models can be formulated either in terms of rates of return and portfolio weights, as in Markowitz-type formulations, or by specifying a budget constraint, expressing the initial wealth level, subject to which the investor's terminal wealth is maximized. The latter approach is more appropriate for project portfolio selection, because the investor is often limited by a budget constraint and it is natural to characterize projects in terms of cash flows rather than in terms of portfolio weights and returns.
4.2.2 Types of Preference Models Early portfolio selection formulations (see, e.g., Markowitz 1952) were bi-criteria decision problems minimizing risk while setting a target for expected return. Later, the mean-variance model was formulated in terms of expected utility theory (EUT) using a quadratic utility function. However, there are no similar utility functions for most other risk measures, including the widely used absolute deviation (Konno and Yamazaki 1991). In effect, due to the lexicographic nature of bi-criteria decision problems, most mean-risk models cannot be represented by a real-valued functional, thus being distinct from the usual preference functional models such as the expected utility model. Therefore, we distinguish between two classes of preference models: (1) preference functional models, such as the expected utility model, and (2) bi-criteria optimization models or mean-risk models. For example, for EUT, the preference functional is $U[X] = E[u(X)]$, where $u(\cdot)$ is the investor's von Neumann–Morgenstern utility function. Many other kinds of preference functional models, such as Choquet-expected utility models, have also been proposed. In addition to preference functional models, mean-risk models have been widely used in the literature. These models are important, because much of the modern portfolio theory, including the CAPM, is based on a mean-risk model, namely the Markowitz mean-variance model (Markowitz 1952). Table 4.1 describes three possible formulations for mean-risk models: risk minimization, where risk is minimized for a given level of expectation (Luenberger 1998); expected value maximization, where expectation is maximized for a given level of risk (Eppen et al. 1989); and the additive formulation, where the weighted sum of mean and risk is maximized (Yu 1985) and which is effectively a preference functional model. The models employed by Sharpe (1970) and Ogryczak and Ruszczynski (1999) are special cases of the additive model. The latter, in particular, is important because it can represent the investor's certainty equivalent. In Table 4.1, $\rho$ is the investor's risk measure, $\mu$ is the minimum level for expectation, and $R$ is the maximum level for risk. The parameters $\lambda_i$ are trade-off coefficients. In our setting, the sole requirement for the applicable preference model is that it can uniquely identify equally preferable portfolios. By construction, models based on preference functionals have this property. Also, risk minimization and expected value maximization models can be employed if we define that equal preference
Table 4.1 Formulations of mean-risk preference models

  Formulation                        Objective                                    Constraints
  Risk minimization                  $\min \rho[X]$                               $E[X] \ge \mu$
  Expected value maximization        $\max E[X]$                                  $\rho[X] \le R$
  General additive                   $\max \lambda_1 E[X] - \lambda_2 \rho[X]$    –
  Sharpe (1970)                      $\max E[X] - \lambda \rho[X]$                –
  Ogryczak and Ruszczynski (1999)    $\max E[X] - \lambda \rho[X]$                –
prevails whenever the objective function values are equal and the applicable constraints are satisfied (but not necessarily binding). In general, there is no particular reason to favor any one of these models, because the choice of the appropriate model may depend on the setting. For example, in settings involving the CAPM, the mean-variance model may be more appropriate, while decision theorists may prefer to opt for the expected utility model.
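To illustrate the distinction numerically, the sketch below evaluates the mean-risk formulations of Table 4.1 on a small discrete scenario set. The payoffs, probabilities, risk measure (absolute deviation), and trade-off coefficients are all invented for illustration; they are not prescribed by the chapter.

    import numpy as np

    # Hypothetical portfolio payoffs X and scenario probabilities (illustrative only)
    payoffs = np.array([120.0, 100.0, 80.0])
    probs = np.array([0.3, 0.5, 0.2])

    mean = float(probs @ payoffs)
    # Absolute deviation as the risk measure rho[X] (Konno and Yamazaki 1991)
    risk = float(probs @ np.abs(payoffs - mean))

    lam1, lam2 = 1.0, 0.5                # illustrative trade-off coefficients
    additive = lam1 * mean - lam2 * risk  # general additive objective of Table 4.1
    R_max = 15.0                          # illustrative risk budget
    ev_max_feasible = risk <= R_max       # constraint of the EV-maximization form

    print(f"E[X] = {mean:.2f}, rho[X] = {risk:.2f}")
    print(f"additive objective = {additive:.2f}; risk constraint satisfied: {ev_max_feasible}")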
4.2.3 Single-Period Example Under Expected Utility A single-period portfolio model under expected utility can be formulated as follows. Let there be n risky securities, a risk-free asset (labeled as the 0th security), and m projects. Let the price of asset i at time 0 be $S_i^0$ and let the corresponding (random) price at time 1 be $\tilde{S}_i^1$. The price of the risk-free asset is 1 at time 0 and $1 + r_f$ at period 1, where $r_f$ is the risk-free interest rate. The amounts of securities in the portfolio are denoted by $x_i$, $i = 0,\dots,n$. The investment cost of project k at time 0 is $C_k^0$ and the (random) cash flow at time 1 is $\tilde{C}_k^1$. The binary variable $z_k$ indicates whether project k is started or not. The investor's budget is b. We can then formulate the model using utility function u as follows:

1. Maximize utility at time 1:
$$\max_{x,z} \; E\!\left[ u\!\left( \sum_{i=0}^{n} \tilde{S}_i^1 x_i + \sum_{k=1}^{m} \tilde{C}_k^1 z_k \right) \right]$$
subject to
2. the budget constraint at time 0:
$$\sum_{i=0}^{n} S_i^0 x_i + \sum_{k=1}^{m} C_k^0 z_k \le b,$$
3. binary variables for projects: $z_k \in \{0,1\}$, $k = 1,\dots,m$,
4. continuous variables for securities: $x_i$ free, $i = 0,\dots,n$.

In typical settings, the budget constraint could be formulated as an equality, because in the presence of a risk-free asset all of the budget will normally be expended at the optimum. In this model and throughout the chapter, it is assumed that there are no transaction costs or taxes on capital gains, and that the investor is able to borrow and lend at the risk-free interest rate without limit.
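As a rough numerical illustration of how this model can be solved, the sketch below enumerates the binary project decisions and optimizes the continuous security holdings for each combination under an exponential utility. All numbers (prices, cash flows, risk tolerance) are invented, and the risk-free holding is eliminated through the budget equality; this is a sketch under those assumptions, not the chapter's implementation.

    from itertools import product

    import numpy as np
    from scipy.optimize import minimize

    rf, b, tol = 0.05, 100.0, 50.0             # risk-free rate, budget, risk tolerance
    probs = np.array([0.25, 0.50, 0.25])       # three scenarios (hypothetical)
    S0 = np.array([10.0])                      # time-0 price of one risky security
    S1 = np.array([[14.0], [10.5], [8.0]])     # its time-1 price in each scenario
    C0 = np.array([20.0, 15.0])                # project investment costs
    C1 = np.array([[40.0, 10.0],               # project time-1 cash flows per scenario
                   [25.0, 20.0],
                   [5.0, 25.0]])

    def neg_expected_utility(x, z):
        # Budget met with equality: the risk-free asset absorbs leftover cash
        cash = b - S0 @ x - C0 @ z
        wealth = cash * (1 + rf) + S1 @ x + C1 @ z   # terminal wealth per scenario
        # Minimizing E[exp(-W/tol)] maximizes E[u(W)] for u(w) = -exp(-w/tol)
        return float(probs @ np.exp(-wealth / tol))

    # Enumerate binary project choices; optimize security holdings for each
    results = []
    for z in product([0, 1], repeat=len(C0)):
        res = minimize(neg_expected_utility, np.zeros(len(S0)), args=(np.array(z),))
        results.append((res.fun, z, res.x))
    best_fun, z_opt, x_opt = min(results, key=lambda t: t[0])
    print("optimal projects:", z_opt, "risky holdings:", x_opt.round(3))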
4.3 Valuation of Projects and Illiquid Investments 4.3.1 Breakeven Buying and Selling Prices Because we consider projects as illiquid, nontradable investment opportunities, there is no market price that can be used to value the project. In such a setting, it is reasonable to define the value of the project as the cash amount at present that is equally preferred to the project. In a portfolio context, this can be interpreted so that the investor is indifferent between the following two portfolios: (A1) a portfolio with the project and (A2) a portfolio without the project and additional cash equal to the value of the project. Alternatively, however, we may define the value of a project as the indifference between the following two portfolios: (B1) a portfolio without the project and (B2) a portfolio with the project and a reduction in available cash equal to the value of the project. The project values obtained in these two ways will not, in general, be the same. Analogous to (Luenberger 1998; Raiffa 1968; Smith and Nau 1995), we refer to the first value as the "breakeven selling price," as the portfolio comparison can be understood as a selling process, and the second type of value as the "breakeven buying price." A central element in BSP and BBP is the determination of equal preference for two different portfolios, which holds in many portfolio optimization models when the optimal values for the objective function match for the two portfolios. This works straightforwardly under preference functional models where the investor is, by definition, indifferent between two portfolios with equal utility scores, but the situation becomes slightly more complicated for mean-risk models, where equal preference might be regarded to hold only when two values – mean and risk – are equal for the two portfolios. However, as we discussed before, if the risks are modeled as constraints, the investor can be said to be indifferent if the expectations of the two portfolios are equal and they both satisfy the risk constraints. Thus, we can establish equal preference by comparing the optimal objective function values also in this case. Table 4.2 describes the four portfolio selection settings and, in particular, the necessary modifications to the base portfolio selection model for the calculation of breakeven prices. Here, the base portfolio selection model is simply the model that is appropriate for the setting being modeled; for example, it can be the simple one-period project-security model given in Section 4.2.3 or the complex multiperiod model described in Section 4.5. All that is required of the underlying portfolio selection problem is that it makes it possible to establish equal preference between two portfolios (in this case through the objective function value) and that it has a parameter that describes the initial budget. In Table 4.2, $v_j^s$ and $v_j^b$ are unknown modifications to the budget in Problem 2 such that the optimal objective function value in Problem 2 matches that of Problem 1. The aim of our valuation methodology is to determine the values of these unknown parameters.
Table 4.2 Definitions of the value of project j

A. Breakeven selling price
  Definition: $v_j^s$ such that $W_s^+ = W_s^-$
  Problem 1 (A1): mandatory investment in the project. Optimal objective function value: $W_s^+$. Budget at time 0: $b_0$
  Problem 2 (A2): project excluded from the portfolio (investment in the project is prohibited). Optimal objective function value: $W_s^-$. Budget at time 0: $b_0 + v_j^s$

B. Breakeven buying price
  Definition: $v_j^b$ such that $W_b^+ = W_b^-$
  Problem 1 (B1): project excluded from the portfolio (investment in the project is prohibited). Optimal objective function value: $W_b^-$. Budget at time 0: $b_0$
  Problem 2 (B2): mandatory investment in the project. Optimal objective function value: $W_b^+$. Budget at time 0: $b_0 - v_j^b$
4.3.2 Inverse Optimization Procedure Finding a BSP and BBP is an inverse optimization problem (see e.g. Ahuja and Orlin 2001): one has to find for what budget the optimal value of Problem 2 matches a certain desired value (the optimal value of Problem 1). Indeed, in an inverse optimization problem, the challenge is to find the values for a set of parameters, typically a subset of all model parameters, that yield the desired optimal solution. Inverse optimization problems can broadly be classified into two groups: (a) finding an optimal value for the objective function and (b) finding a solution vector. The problem of finding a BSP or BBP falls within the first class. In principle, the task of finding a BSP is equivalent to finding a root of the function $f^s(v_j^s) = W_s^-(v_j^s) - W_s^+$, where $W_s^+$ is the optimal value of Problem 1 and $W_s^-(v_j^s)$ is the corresponding optimal value of Problem 2 as a function of the parameter $v_j^s$. Similarly, the BBP can be obtained by finding the root of the function $f^b(v_j^b) = W_b^- - W_b^+(v_j^b)$. In typical portfolio selection problems where the investor exhibits normal risk preferences and a risk-free asset is available, these functions are normally increasing with respect to their parameters. To solve such root-finding problems, we can use any of the usual root-finding algorithms (see, e.g. Belegundu and Chandrupatla 1999) such as the bisection method, the secant method, and the false position method. These methods do not require knowledge of the functions' derivatives, which are typically not known. If the first derivatives are known, or when approximated numerically, we can also use the Newton–Raphson method. Solution of BSP and BBP in more complex settings with discontinuous or nonincreasing functions may require more sophisticated methods. It may be noted that, in some extreme, possibly unrealistic settings, equal preference cannot be established for any budget amount, which would imply that the BSP and BBP would not exist.
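A minimal sketch of this root-finding step is given below. It assumes a user-supplied function optimal_value(budget, force_project, exclude_project) that returns the optimal objective value of the base portfolio model (e.g., that of Section 4.2.3); the function name, its interface, and the search bracket are assumptions made for illustration, not part of the chapter's formulation.

    from scipy.optimize import brentq

    def breakeven_selling_price(optimal_value, b0, lo=0.0, hi=1_000.0):
        # W_s^+ : optimal value with mandatory investment in project j (Problem A1)
        w_plus = optimal_value(budget=b0, force_project=True, exclude_project=False)

        # f(v) = W_s^-(b0 + v) - W_s^+ is increasing in v whenever the objective
        # strictly increases with the budget, so a bracketing method applies;
        # [lo, hi] must straddle the root (f changes sign over the bracket).
        def f(v):
            return optimal_value(budget=b0 + v, force_project=False,
                                 exclude_project=True) - w_plus

        return brentq(f, lo, hi)   # Brent's method: bracketing, derivative-free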
4.3.3 General Analytical Properties 4.3.3.1 Sequential Consistency Breakeven selling and buying prices are not, in general, equal to each other. While this discrepancy is accepted as a general property of risk preferences in EUT (Raiffa 1968), it may also seem to contradict the rationality of these valuation concepts. It can be argued that if the investor were willing to sell a project at a lower price than at which he/she would be prepared to buy it, the investor would create an arbitrage opportunity and lose an infinite amount of money when another investor repeatedly bought the project at its selling price and sold it back at the buying price. In a reverse situation where the investor’s selling price for a project is greater than the respective buying price, the investor would be irrational in the sense that he/she would not take advantage of an arbitrage opportunity – if such an opportunity existed – where it would be possible to buy the project repeatedly at the investor’s buying price and to sell it at a slightly higher price below the investor’s BSP. However, these arguments ignore the fact that the breakeven prices are affected by the budget and that therefore these prices may change after obtaining the project’s selling price and after paying its buying price. Indeed, it can be shown that in a sequential setting where the investor first sells the project, adds the selling price to the budget, and then buys the project back, the investor’s selling price and the respective (sequential) buying price are always equal to each other. This observation is formalized as the following proposition. The proof is given in the Appendix. It is assumed in this proof and throughout Section 4.3.3 that the objective function is continuous and strictly increasing with respect to the investor’s budget (money available at present). Thus, an increase in the budget will always result in an increase in the objective function value. Proposition 1. A project’s breakeven selling (buying) price and its sequential breakeven buying (selling) price are equal to each other.
4.3.3.2 Consistency with Contingent Claims Analysis Option pricing analysis, or contingent claims analysis (CCA; see, e.g., Brealey and Myers 2000; Luenberger 1998), can be applied to value projects whenever the cash flows of a project can be replicated using financial instruments. According to CCA, the value of project j is given by the market price of the replicating portfolio (the portfolio required to initiate a replicating trading strategy) less the investment cost of the project:
$$v_j^{\mathrm{CCA}} = I_j - I_j^0,$$
where $I_j^0$ is the time-0 investment cost of the project and $I_j$ is the cash needed to initiate the replicating trading strategy. A replicating trading strategy is a trading strategy using financial instruments that exactly replicates the cash flows of the project in each future state of nature.
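As a worked illustration with invented numbers: consider a one-period market with a zero risk-free rate and a stock priced at 100 that moves to 120 in the up state or 90 in the down state, and a project with investment cost $I_j^0 = 30$ that pays 60 (up) or 30 (down). A replicating position of $\alpha$ shares and $\beta$ units of the risk-free asset must satisfy

$$120\,\alpha + \beta = 60, \qquad 90\,\alpha + \beta = 30,$$

so $\alpha = 1$ and $\beta = -60$. The replication costs $I_j = 100\,\alpha + \beta = 40$, and the project's value is $v_j^{\mathrm{CCA}} = I_j - I_j^0 = 40 - 30 = 10$.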
It is straightforward to show that, when CCA is applicable, i.e., if there exists a replicating trading strategy, then the breakeven buying and selling prices are equal to each other and yield the same result $v_j^{\mathrm{CCA}}$ as CCA (cf. Smith and Nau 1995). This follows from the fact that a portfolio consisting of (A) the project and (B) a shorted replicating portfolio is equal to getting a sure cash flow now and zero cash flows in all future states of nature. Therefore, whenever a replicating portfolio exists, the project can effectively be reduced to obtaining a sure amount of money at present. To illustrate this, suppose first that $v_j^{\mathrm{CCA}}$ is positive. Then, any rational investor will invest in the project (and hence its value must be positive for any such investor), since it is possible to make money for sure, amounting to $v_j^{\mathrm{CCA}}$, by investing in the project and shorting the replicating portfolio. Furthermore, any rational investor will start the project even when he/she is forced to pay a sum $v_j^b$ less than $v_j^{\mathrm{CCA}}$ to gain a license to invest in the project, because it is now possible to gain $v_j^{\mathrm{CCA}} - v_j^b$ for sure. On the other hand, if $v_j^b$ is greater than $v_j^{\mathrm{CCA}}$, the investor will not start the project, because the replicating portfolio makes it possible to obtain the project cash flows at a lower cost. A similar reasoning applies to BSPs. These observations are formalized in Proposition 2. The proof is straightforward and is given in the Appendix. Due to the consistency with CCA, the breakeven prices can be regarded as a generalization of CCA to incomplete markets. Proposition 2. If there is a replicating trading strategy for a project, the breakeven selling price and breakeven buying price are equal to each other and yield the same result as CCA.
4.3.3.3 Sequential Additivity The BBP and BSP for a project depend on what other assets are in the portfolio. The value obtained from breakeven prices is, in general, an added value, which is determined relative to the situation without the project. When there are no other projects in the portfolio, or when we remove them from the model before determining the value of the project, we speak of the isolated value of a project. We define the respective values for a set of projects as the joint added value and joint value. Figure 4.1 illustrates the relationship between these concepts. Isolated project values are, in general, non-additive; they do not sum up to the value of the project portfolio composed of the same projects. However, in a sequential setting where the investor buys the projects one after the other at the prevailing buying price at each time, the obtained project values do add up to the joint value of the project portfolio. These prices are the projects’ added values in a sequential buying process, where the budget is reduced by the buying price after each step. We refer to these values as sequential added values. This sequential additivity property holds regardless of the order in which the projects are bought. Individual projects can, however, acquire different added values depending on the sequence in which they are bought. These observations are formalized in the following proposition. The proof is in the Appendix.
Fig. 4.1 Different types of valuations for projects (a 2×2 scheme: valuing a single project vs. a portfolio of projects, against no other projects vs. additional projects in the portfolio, giving the isolated value, added value, joint value, and joint added value)
Proposition 3. The breakeven buying (selling) prices of sequentially bought (sold) projects add up to the breakeven buying (selling) price of the portfolio of the projects regardless of the order in which the projects are bought (sold).
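The sequential buying process behind Proposition 3 can be expressed as a short loop. The sketch below assumes a hypothetical solver breakeven_buying_price(budget, owned, candidate) that returns the BBP of a candidate project given the current budget and the projects already held; the name and interface are illustrative, not the chapter's notation.

    def sequential_added_values(breakeven_buying_price, b0, projects):
        budget, owned, values = b0, [], {}
        for j in projects:
            v = breakeven_buying_price(budget, owned, j)  # prevailing buying price
            values[j] = v
            budget -= v          # pay the price before valuing the next project
            owned.append(j)      # project j is now held in the portfolio
        # The values sum to the joint BBP of the whole set in any order,
        # although individual projects may receive order-dependent values.
        return values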
4.4 Valuation of Opportunities 4.4.1 Investment Opportunities When valuing a project, we can either value an already started project or an opportunity to start a project. The difference is that, although the value of a started project can be negative, that of an opportunity to start a project is always nonnegative, because a rational investor does not start a project with a negative value. While BSP and BBP are appropriate for valuing started projects, new valuation concepts are needed for valuing opportunities. Since an opportunity entails the right but not the obligation to take an action, we need selling and buying prices that rely on the comparison of settings where the investor can and cannot invest in the project, instead of does and does not. The lowest price at which the investor would be willing to sell an opportunity to start a project can be obtained from the definition of the BSP by removing the requirement to invest in the project in Problem A1. We define this price as the opportunity selling price (OSP) of the project. Likewise, the opportunity buying price (OBP) of a project can be obtained by removing the investment requirement in Problem B2. It is the highest price that the investor is willing to pay for a license to start the project. Opportunity selling and buying prices have a lower bound of zero; it is also straightforward to show that the opportunity prices can be computed by taking a maximum of 0 and the respective breakeven price. Table 4.3 gives a summary of opportunity selling and buying prices.
Table 4.3 Definitions of a value of an opportunity

Definition:
  A. Opportunity selling price: $v^s$ such that $W_s^+ = W_s^-$
  B. Opportunity buying price: $v^b$ such that $W_b^+ = W_b^-$
Problem 1:
  Problem A1: no alteration to the base portfolio model; optimal objective function value $W_s^+$; budget at time 0: $b_0$
  Problem B1: opportunity is excluded from the portfolio; optimal objective function value $W_b^-$; budget at time 0: $b_0$
Problem 2:
  Problem A2: opportunity is excluded from the portfolio; optimal objective function value $W_s^-$; budget at time 0: $b_0 + v^s$
  Problem B2: no alteration to the base portfolio model; optimal objective function value $W_b^+$; budget at time 0: $b_0 - v^b$
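Since the opportunity prices are simply the respective breakeven prices floored at zero, they are trivial to compute once the breakeven prices are known. A minimal sketch follows; the numeric values are hypothetical.

```python
def opportunity_price(breakeven_price: float) -> float:
    """OSP/OBP equal the respective breakeven price floored at zero,
    since an investor is never obliged to exercise an opportunity."""
    return max(0.0, breakeven_price)

# hypothetical breakeven selling prices for two projects ($ million)
for bsp in (4.36, -1.20):
    print(bsp, "->", opportunity_price(bsp))
```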
4.4.2 Real Options and Managerial Flexibility

Opportunity buying and selling prices can also be used to value real options (Brosch 2008; Trigeorgis 1996) contained in the project portfolio. These options result from the managerial flexibility to adapt later decisions to unexpected future developments. Typical examples include possibilities to expand production when markets are up, to abandon a project under bad market conditions, and to switch operations to alternative production facilities. Real options can be valued in much the same way as opportunities to start projects. However, instead of comparing portfolio selection problems with and without the possibility to start a project, we compare portfolio selection problems with and without the real option. This can typically be implemented by preventing the investor from taking a particular action (e.g., expanding production) when the real option is not present. Since breakeven prices are consistent with CCA, opportunity prices also have this property, and can thus be regarded as a generalization of the standard CCA real option valuation procedure to incomplete markets.
4.4.3 Opportunity to Sell the Project to a Third Party Investor

From the perspective of finance theory, an important application of the real options concept is the management's ability to sell the project to a third party investor (or to the market). Indeed, a valuation where the option to sell the project to a third party investor is accounted for, in addition to the opportunity costs implied by other investment opportunities, can be regarded as a holistic valuation that fully accounts for both private and market factors that influence the value of the project. Projects where options to sell are important include pharmaceutical development projects, where the rights to develop compounds further can be sold to bigger companies after a certain stage in clinical trials is reached. The possibility to sell the project to the market is effectively an American put option embedded in the project. When selling the project is a relevant management
option, the related selling decisions typically need to be implemented as a part of the project's decision tree, which necessitates the use of an approach where projects are modeled through decision trees, such as CPP (Gustafsson and Salo 2005). That is, at each state, in addition to any other options available to the firm, the firm can opt to sell the project at the highest price being offered by any third party investor. The offer price may depend on several factors, including who the investor is, but if market-implied pricing is used, then it may be possible to compute the offer price by using standard market pricing techniques, such as the market-implied risk-neutral probability distribution. Like any other real option, the opportunity to sell the project can increase but cannot decrease the value of the project. Such an American put option also sets a lower bound for the value of the project.
4.5 Implementation of a Multiperiod Project Valuation Model

4.5.1 Framework

We develop an illustrative multiperiod model using the CPP framework (Gustafsson and Salo 2005). In CPP, uncertainties are modeled using a state tree, representing the structure of future states of nature, as depicted in the leftmost chart in Fig. 4.2. The state tree need not be binomial or symmetric; it may also take the form of a multinomial tree with different probability distributions in its branches. In each nonterminal state, securities can be bought and sold in any, possibly fractional, quantities. The framework includes budget balance constraints that allow the transfer of cash from one time period to the next, adding interest while doing so, so that the accumulated cash and the impact of earlier cash flows can be measured in the terminal states. Projects are modeled using decision trees that span the state tree. The two rightmost charts in Fig. 4.2 describe how project decisions, when combined with the state tree, lead to project-specific decision trees. The specific feature of these decision trees is that the chance nodes are shared by all projects, since they are generated using the common state tree. Security trading is implemented through state-specific trading variables, which are similar to the ones used in financial models of stochastic programming (e.g., Mulvey et al. 2000) and in Smith and Nau's method (Smith and Nau 1995). As in the single-period portfolio selection model in Section 4.2.3, the investor seeks either to maximize the utility of the terminal wealth level, or the expectation of the terminal wealth level subject to a risk constraint.
4.5.2 Model Components

The two main components of the model are (a) states and (b) the investor's investment decisions, which imply the cash flow structure of the model.
each decision point $d$ are bound by the restriction that only one $z_a$, $a \in A_d$, can be equal to one. The state in which the action at decision point $d$ is chosen is denoted by $\omega(d)$. For a project $k$, the vector of all action variables $z_a$ relating to the project, denoted by $z_k$, is called the project management strategy of $k$. The vector of all action variables of all projects, denoted by $z$, is the project portfolio management strategy. We call the pair $(x, z)$, composed of all trading and action variables, the aggregate portfolio management strategy.
4.5.2.3 Cash Flows and Cash Surpluses

Let $CF_k^p(z_k, \omega)$ be the cash flow of project $k$ in state $\omega$ with project management strategy $z_k$. When $C_a(\omega)$ is the cash flow in state $\omega$ implied by action $a$, this cash flow is given by

$$CF_k^p(z_k, \omega) = \sum_{d \in D_k:\ \omega(d) \in B^*(\omega)}\ \sum_{a \in A_d} C_a(\omega)\, z_a,$$

where the restriction in the summation over the decision points guarantees that actions yield cash flows only in the prevailing state and in the future states that can be reached from the prevailing state. The set $B^*(\omega)$ is defined as $B^*(\omega) = \{\omega' \in \Omega \mid \exists\, k \ge 0 \text{ such that } B^k(\omega) = \omega'\}$, where $B^n(\omega) = B(B^{n-1}(\omega))$ is the $n$th predecessor of $\omega$ ($B^0(\omega) = \omega$). The cash flows from security $i$ in state $\omega \in \Omega$ are given by

$$CF_i^s(x_i, \omega) = \begin{cases} -S_i(\omega)\, x_{i,\omega} & \text{if } \omega = \omega_0 \\ S_i(\omega)\, x_{i,B(\omega)} & \text{if } \omega \in \Omega_T \\ S_i(\omega)\,\big(x_{i,B(\omega)} - x_{i,\omega}\big) & \text{if } \omega \neq \omega_0 \wedge \omega \notin \Omega_T \end{cases}$$
Thus, the aggregate cash flow $CF(x, z; \omega)$ in state $\omega \in \Omega$, obtained by summing up the cash flows of all projects and securities, is

$$CF(x, z; \omega) = \sum_{i=1}^{n} CF_i^s(x_i, \omega) + \sum_{k=1}^{m} CF_k^p(z_k, \omega)$$

$$= \begin{cases} \displaystyle \sum_{i=1}^{n} -S_i(\omega)\, x_{i,\omega} + \sum_{k=1}^{m} \sum_{d \in D_k:\ \omega(d) \in B^*(\omega)}\ \sum_{a \in A_d} C_a(\omega)\, z_a, & \text{if } \omega = \omega_0 \\[6pt] \displaystyle \sum_{i=1}^{n} S_i(\omega)\, x_{i,B(\omega)} + \sum_{k=1}^{m} \sum_{d \in D_k:\ \omega(d) \in B^*(\omega)}\ \sum_{a \in A_d} C_a(\omega)\, z_a, & \text{if } \omega \in \Omega_T \\[6pt] \displaystyle \sum_{i=1}^{n} S_i(\omega)\,\big(x_{i,B(\omega)} - x_{i,\omega}\big) + \sum_{k=1}^{m} \sum_{d \in D_k:\ \omega(d) \in B^*(\omega)}\ \sum_{a \in A_d} C_a(\omega)\, z_a, & \text{if } \omega \neq \omega_0 \wedge \omega \notin \Omega_T \end{cases}$$
Together with the initial budget in each state, cash flows define the cash surpluses that would result in state $\omega \in \Omega$ if the investor chose portfolio management strategy $(x, z)$. Assuming that excess cash is invested in the risk-free asset, the cash surplus in state $\omega \in \Omega$ is given by

$$CS_\omega = \begin{cases} b(\omega) + CF(x, z; \omega) & \text{if } \omega = \omega_0, \\ b(\omega) + CF(x, z; \omega) + (1 + r_{B(\omega) \to \omega})\, CS_{B(\omega)} & \text{if } \omega \neq \omega_0, \end{cases}$$

where $b(\omega)$ is the initial budget in state $\omega \in \Omega$ and $r_{B(\omega) \to \omega}$ is the short rate at which cash accrues interest from state $B(\omega)$ to $\omega$. The cash surplus in a terminal state is the investor's terminal wealth level in that state.
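The recursion above is easy to mechanise. The sketch below uses a hypothetical two-period state tree, hypothetical budgets and aggregate cash flows, and a flat 8% short rate (all assumptions, not the chapter's data); it computes the cash surplus in every state by walking back to the root and prints the terminal wealth levels.

```python
# A minimal sketch of the cash-surplus recursion:
#   CS_w = b(w) + CF(w)                              for the root w0
#   CS_w = b(w) + CF(w) + (1 + r) * CS_parent(w)     otherwise
parent = {"w0": None, "w1u": "w0", "w1d": "w0",
          "w1u1u": "w1u", "w1u1d": "w1u", "w1d1u": "w1d", "w1d1d": "w1d"}
budget = {s: 0.0 for s in parent}; budget["w0"] = 9.0
cash_flow = {"w0": -3.0, "w1u": 1.0, "w1d": -2.0,       # hypothetical CF(x,z;w)
             "w1u1u": 20.0, "w1u1d": 2.5, "w1d1u": 5.0, "w1d1d": 0.0}
r = 0.08  # short rate, assumed constant across all branches here

surplus = {}
def cs(state):
    if state not in surplus:
        p = parent[state]
        base = budget[state] + cash_flow[state]
        surplus[state] = base if p is None else base + (1 + r) * cs(p)
    return surplus[state]

terminal = [s for s in parent if not any(parent[t] == s for t in parent)]
for s in sorted(terminal):
    print(s, round(cs(s), 3))   # terminal wealth level in each terminal state
```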
4.5.3 Optimization Model

When using a preference functional $U$, the objective function for the model can be written as a function of the cash surplus variables in the last time period, i.e.,

$$\max_{x, z, CS} U(CS_T),$$

where $CS_T$ denotes the vector of cash surplus variables in period $T$. Under the risk-constrained mean-risk model, the objective is to maximize the expectation of the investor's terminal wealth level

$$\max_{x, z, CS} \sum_{\omega \in \Omega_T} p(\omega)\, CS_\omega.$$
Three types of constraints are imposed on the model: (a) budget constraints, (b) decision consistency constraints, and (c) risk constraints (in the case of risk-constrained models). The formulation of a multiperiod portfolio selection model under both a preference functional and a mean-risk model is given in Table 4.4.
4.5.3.1 Budget Constraints

Budget constraints ensure that there is a nonnegative amount of cash in each state. They can be implemented using continuous cash surplus variables $CS_\omega$, which measure the amount of cash in state $\omega$. These variables lead to the budget constraints

$$CF(x, z; \omega_0) - CS_{\omega_0} = -b(\omega_0)$$
$$CF(x, z; \omega) + (1 + r_{B(\omega) \to \omega})\, CS_{B(\omega)} - CS_\omega = -b(\omega), \quad \forall \omega \in \Omega \setminus \{\omega_0\}$$
Table 4.4 Multi-period models

Objective function:
  Preference functional model: $\max_{x,z,CS} U(CS_T)$
  Mean-risk model: $\max_{x,z,CS} \sum_{\omega \in \Omega_T} p(\omega)\, CS_\omega$
Budget constraints (both models):
  $CF(x, z; \omega_0) - CS_{\omega_0} = -b(\omega_0)$
  $CF(x, z; \omega) + (1 + r_{B(\omega) \to \omega})\, CS_{B(\omega)} - CS_\omega = -b(\omega), \ \forall \omega \in \Omega \setminus \{\omega_0\}$
Decision consistency constraints (both models):
  $\sum_{a \in A_{d_k^0}} z_a = 1, \ k = 1, \dots, m$
  $\sum_{a \in A_d} z_a = z_{a_p(d)}, \ \forall d \in D_k \setminus \{d_k^0\}, \ k = 1, \dots, m$
Risk constraints (mean-risk model only):
  $CS_\omega - \theta(CS_T) + \Delta_\omega^- - \Delta_\omega^+ = 0, \ \forall \omega \in \Omega_T$
  $\rho(CS_T) \le R$
Variables:
  $z_a \in \{0, 1\}, \ \forall a \in A_d, \ \forall d \in D_k, \ k = 1, \dots, m$
  $x_{i,\omega}$ free, $\forall \omega \in \Omega, \ i = 1, \dots, n$
  $CS_\omega$ free, $\forall \omega \in \Omega$
  Mean-risk model only: $\Delta_\omega^- \ge 0, \ \Delta_\omega^+ \ge 0, \ \forall \omega \in \Omega_T$
Note that if $CS_\omega$ is negative, the investor borrows money at the risk-free interest rate to cover a funding shortage. Thus, $CS_\omega$ can also be regarded as a trading variable for the risk-free asset.

4.5.3.2 Decision Consistency Constraints

Decision consistency constraints ensure the logical consistency of the projects' decision trees. They require that (a) at each decision point reached, only one action is selected, and that (b) at each decision point that is not reached, no action is taken. Decision consistency constraints can be written as

$$\sum_{a \in A_{d_k^0}} z_a = 1, \quad k = 1, \dots, m$$
$$\sum_{a \in A_d} z_a = z_{a_p(d)}, \quad \forall d \in D_k \setminus \{d_k^0\}, \quad k = 1, \dots, m,$$

where the first constraint ensures that one action is selected at the first decision point, and the second implements the above requirements for the other decision points.
4.5.3.3 Risk Constraints

A risk-constrained model includes one or more risk constraints. We focus on the single-constraint case. When $\rho$ denotes the risk measure and $R$ the risk tolerance, a risk constraint can be expressed as

$$\rho(CS_T) \le R.$$

In addition to variance (V), several other risk measures have been proposed. These include semivariance (Markowitz 1959), absolute deviation (Konno and Yamazaki 1991), lower semi-absolute deviation (Ogryczak and Ruszczynski 1999), and their fixed target value counterparts (Fishburn 1977). Semivariance (SV), absolute deviation (AD), and lower semi-absolute deviation (LSAD) are defined as

$$\mathrm{SV}: \quad \bar{\sigma}_X^2 = \int_{-\infty}^{\mu_X} (x - \mu_X)^2\, dF_X(x), \qquad \mathrm{AD}: \quad \delta_X = \int_{-\infty}^{\infty} |x - \mu_X|\, dF_X(x), \quad \text{and}$$

$$\mathrm{LSAD}: \quad \bar{\delta}_X = \int_{-\infty}^{\mu_X} |x - \mu_X|\, dF_X(x) = \int_{-\infty}^{\mu_X} (\mu_X - x)\, dF_X(x),$$
where $\mu_X$ is the mean of the random variable $X$ and $F_X$ is the cumulative distribution function of $X$. The fixed target value statistics are obtained by replacing $\mu_X$ by an appropriate constant target value $\tau$. All these measures can be formulated in an optimization program by introducing deviation constraints. In general, deviation constraints are expressed as

$$CS_\omega - \theta(CS_T) + \Delta_\omega^- - \Delta_\omega^+ = 0, \quad \forall \omega \in \Omega_T,$$

where $\theta(CS_T)$ is a function that defines the target value from which the deviations are calculated, and $\Delta_\omega^+$ and $\Delta_\omega^-$ are nonnegative deviation variables which measure how much the cash surplus in state $\omega \in \Omega_T$ differs from the target value. For example, when the target value is the mean of the terminal wealth level, the deviation constraints are written as

$$CS_\omega - \sum_{\omega' \in \Omega_T} p(\omega')\, CS_{\omega'} + \Delta_\omega^- - \Delta_\omega^+ = 0, \quad \forall \omega \in \Omega_T.$$
With the help of these deviation variables, some common dispersion statistics can now be written as follows:

$$\mathrm{AD}: \quad \sum_{\omega \in \Omega_T} p(\omega)\, (\Delta_\omega^+ + \Delta_\omega^-)$$

$$\mathrm{LSAD}: \quad \sum_{\omega \in \Omega_T} p(\omega)\, \Delta_\omega^-$$

$$\mathrm{V}: \quad \sum_{\omega \in \Omega_T} p(\omega)\, (\Delta_\omega^+ + \Delta_\omega^-)^2$$

$$\mathrm{SV}: \quad \sum_{\omega \in \Omega_T} p(\omega)\, (\Delta_\omega^-)^2$$
The respective fixed-target value statistics can be obtained with the deviation constraints

$$CS_\omega - \tau + \Delta_\omega^- - \Delta_\omega^+ = 0, \quad \forall \omega \in \Omega_T,$$

where $\tau$ is the fixed target level. Expected downside risk (EDR), for example, can then be obtained from the sum $\sum_{\omega \in \Omega_T} p(\omega)\, \Delta_\omega^-$.
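As a quick sanity check on these definitions, the sketch below computes the scenario-based statistics directly from a hypothetical vector of terminal cash surpluses and probabilities (all numbers illustrative); at an optimum of the optimization program, the deviation variables take exactly these values.

```python
import numpy as np

# Hypothetical data: four terminal states with equal probability.
p = np.array([0.25, 0.25, 0.25, 0.25])       # state probabilities p(w)
cs = np.array([12.0, 3.0, -1.0, 8.0])         # terminal cash surpluses CS_w

mean = p @ cs
d_minus = np.maximum(mean - cs, 0.0)          # downside deviations Delta^-
d_plus = np.maximum(cs - mean, 0.0)           # upside deviations Delta^+

AD   = p @ (d_plus + d_minus)                 # absolute deviation
LSAD = p @ d_minus                            # lower semi-absolute deviation
V    = p @ (d_plus + d_minus) ** 2            # variance
SV   = p @ d_minus ** 2                       # semivariance

tau = 5.0                                     # hypothetical fixed target level
EDR = p @ np.maximum(tau - cs, 0.0)           # expected downside risk
print(AD, LSAD, V, SV, EDR)
```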
4.5.3.4 Other Constraints

Other constraints can also be modeled, including short selling limitations, upper bounds on the number of shares bought, and credit limit constraints (Markowitz 1987). For the sake of simplicity, however, we assume in the following sections that there are no such additional constraints in the model.
4.5.4 Example

We next illustrate project valuation in a multiperiod setting with an example similar to the one in Gustafsson and Salo (2005). In this setting, the investor can invest in projects A and B in two stages, as illustrated in Fig. 4.3. At time 0, he/she can start either one or both of the projects. If a project is started, he/she can make a further investment at time 1. If the investment is made, the project generates a positive cash flow at time 2; otherwise, the project is terminated with no further cash flows. In the spirit of the CAPM, it is assumed that the investor can also borrow and lend money at a risk-free interest rate, in this case 8%, and invest in the equity market portfolio. The investor is able to buy and short the market portfolio and the risk-free asset in any quantities. The initial budget is $9 million. The investor is a mean-LSAD optimizer with a risk (LSAD) tolerance of R = $5 million. Here, we use a risk-constrained model instead of a preference functional model as in Gustafsson and Salo (2005), because otherwise the optimal strategy would be unbounded, with the investor investing an infinite amount in the market portfolio and financing this by going short in the risk-free asset, or vice versa, depending on the value of the mean-LSAD model's risk aversion parameter. Uncertainties are captured through a state tree where uncertainties are divided into market and private uncertainties (see Figs. 4.3 and 4.4). The price of the market portfolio is entirely determined by the prevailing market state, while the projects'
Based on Figs. 4.5 and 4.6, the budget constraints can now be written as:

$-1 z_{ASY} - 2 z_{BSY} - x_{\omega_0} - CS_{\omega_0} = -9$
$-3 z_{ACY1u} - 2 z_{BCY1u} + 1.24 x_{\omega_0} - 1.24 x_{\omega_{1u}} + 1.08\, CS_{\omega_0} - CS_{\omega_{1u}} = 0$
$-3 z_{ACY2u} - 2 z_{BCY2u} + 1.24 x_{\omega_0} - 1.24 x_{\omega_{2u}} + 1.08\, CS_{\omega_0} - CS_{\omega_{2u}} = 0$
$-3 z_{ACY1d} - 2 z_{BCY1d} + 1 x_{\omega_0} - 1 x_{\omega_{1d}} + 1.08\, CS_{\omega_0} - CS_{\omega_{1d}} = 0$
$-3 z_{ACY2d} - 2 z_{BCY2d} + 1 x_{\omega_0} - 1 x_{\omega_{2d}} + 1.08\, CS_{\omega_0} - CS_{\omega_{2d}} = 0$
$20 z_{ACY1u} + 2.5 z_{BCY1u} + 1.5376 x_{\omega_{1u}} + 1.08\, CS_{\omega_{1u}} - CS_{\omega_{1u1u}} = 0$
$20 z_{ACY1u} + 2.5 z_{BCY1u} + 1.24 x_{\omega_{1u}} + 1.08\, CS_{\omega_{1u}} - CS_{\omega_{1u1d}} = 0$
$20 z_{ACY1d} + 2.5 z_{BCY1d} + 1.24 x_{\omega_{1d}} + 1.08\, CS_{\omega_{1d}} - CS_{\omega_{1d1u}} = 0$
$20 z_{ACY1d} + 2.5 z_{BCY1d} + 1 x_{\omega_{1d}} + 1.08\, CS_{\omega_{1d}} - CS_{\omega_{1d1d}} = 0$
$10 z_{ACY1u} + 1 z_{BCY1u} + 1.5376 x_{\omega_{1u}} + 1.08\, CS_{\omega_{1u}} - CS_{\omega_{1u2u}} = 0$
$10 z_{ACY1u} + 1 z_{BCY1u} + 1.24 x_{\omega_{1u}} + 1.08\, CS_{\omega_{1u}} - CS_{\omega_{1u2d}} = 0$
$10 z_{ACY1d} + 1 z_{BCY1d} + 1.24 x_{\omega_{1d}} + 1.08\, CS_{\omega_{1d}} - CS_{\omega_{1d2u}} = 0$
$10 z_{ACY1d} + 1 z_{BCY1d} + 1 x_{\omega_{1d}} + 1.08\, CS_{\omega_{1d}} - CS_{\omega_{1d2d}} = 0$
$5 z_{ACY2u} + 25 z_{BCY2u} + 1.5376 x_{\omega_{2u}} + 1.08\, CS_{\omega_{2u}} - CS_{\omega_{2u1u}} = 0$
$5 z_{ACY2u} + 25 z_{BCY2u} + 1.24 x_{\omega_{2u}} + 1.08\, CS_{\omega_{2u}} - CS_{\omega_{2u1d}} = 0$
$5 z_{ACY2d} + 25 z_{BCY2d} + 1.24 x_{\omega_{2d}} + 1.08\, CS_{\omega_{2d}} - CS_{\omega_{2d1u}} = 0$
$5 z_{ACY2d} + 25 z_{BCY2d} + 1 x_{\omega_{2d}} + 1.08\, CS_{\omega_{2d}} - CS_{\omega_{2d1d}} = 0$
$10 z_{BCY2u} + 1.5376 x_{\omega_{2u}} + 1.08\, CS_{\omega_{2u}} - CS_{\omega_{2u2u}} = 0$
$10 z_{BCY2u} + 1.24 x_{\omega_{2u}} + 1.08\, CS_{\omega_{2u}} - CS_{\omega_{2u2d}} = 0$
$10 z_{BCY2d} + 1.24 x_{\omega_{2d}} + 1.08\, CS_{\omega_{2d}} - CS_{\omega_{2d2u}} = 0$
$10 z_{BCY2d} + 1 x_{\omega_{2d}} + 1.08\, CS_{\omega_{2d}} - CS_{\omega_{2d2d}} = 0$

For each terminal state $\omega_T \in \Omega_T$, there is a deviation constraint $CS_{\omega_T} - EV + \Delta_{\omega_T}^- - \Delta_{\omega_T}^+ = 0$, where $EV$ is the expected cash balance over all terminal states, viz.

$$EV = \sum_{\omega_T \in \Omega_T} p_{\omega_T}\, CS_{\omega_T}.$$
Table 4.5 Investments in securities ($ million)

State    $x_M$      $CS$
0        71.37      −64.37
1u       16.40      −4.35
1d       105.46     −106.61
2u       27.28      −16.85
2d       98.71      −98.86
In addition, the following decision consistency constraints apply:

$z_{ASY} + z_{ASN} = 1 \qquad z_{BSY} + z_{BSN} = 1$
$z_{ACY1u} + z_{ACN1u} = z_{ASY} \qquad z_{BCY1u} + z_{BCN1u} = z_{BSY}$
$z_{ACY2u} + z_{ACN2u} = z_{ASY} \qquad z_{BCY2u} + z_{BCN2u} = z_{BSY}$
$z_{ACY1d} + z_{ACN1d} = z_{ASY} \qquad z_{BCY1d} + z_{BCN1d} = z_{BSY}$
$z_{ACY2d} + z_{ACN2d} = z_{ASY} \qquad z_{BCY2d} + z_{BCN2d} = z_{BSY}$

The risk constraint is now

$$\sum_{\omega_T \in \Omega_T} p_{\omega_T}\, \Delta_{\omega_T}^- \le 5,$$

and the objective function is

$$\text{Maximize} \quad EV = \sum_{\omega_T \in \Omega_T} p_{\omega_T}\, CS_{\omega_T}.$$
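The full two-period model above has 21 budget constraints, but the mechanics can be seen in a much smaller instance. The sketch below builds a one-period analogue with PuLP; all project and market figures are hypothetical and chosen only to mirror the structure (one binary start decision, free trading and cash surplus variables, mean-LSAD deviation constraints, and an LSAD risk constraint). It illustrates the formulation, not the chapter's numbers.

```python
import pulp

p = {"u": 0.5, "d": 0.5}                       # state probabilities
S1 = {"u": 1.24, "d": 1.00}                    # market price at time 1 (price 1 now)
proj_cf = {"u": 10.0, "d": 2.0}                # hypothetical project payoffs
cost, b0, rf, R = 1.0, 9.0, 0.08, 5.0          # start cost, budget, rate, tolerance

m = pulp.LpProblem("mean_LSAD_example", pulp.LpMaximize)
zA = pulp.LpVariable("zA", cat="Binary")       # start the project?
x = pulp.LpVariable("x")                       # market portfolio units (free)
CS0 = pulp.LpVariable("CS0")                   # time-0 cash surplus (free)
CS = {s: pulp.LpVariable(f"CS_{s}") for s in p}
dm = {s: pulp.LpVariable(f"dm_{s}", lowBound=0) for s in p}  # downside deviation
dp = {s: pulp.LpVariable(f"dp_{s}", lowBound=0) for s in p}  # upside deviation

EV = pulp.lpSum(p[s] * CS[s] for s in p)
m += EV                                        # objective: expected terminal wealth
m += -cost * zA - x - CS0 == -b0               # time-0 budget constraint
for s in p:                                    # terminal budget constraints
    m += proj_cf[s] * zA + S1[s] * x + (1 + rf) * CS0 - CS[s] == 0
    m += CS[s] - EV + dm[s] - dp[s] == 0       # deviation constraint
m += pulp.lpSum(p[s] * dm[s] for s in p) <= R  # LSAD risk constraint

m.solve(pulp.PULP_CBC_CMD(msg=0))
print("start project:", zA.value(), "x:", round(x.value(), 2),
      "EV:", round(pulp.value(EV), 2))
```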
In this portfolio selection problem, the optimal strategy is to start both projects; project A is terminated at time 1 if private state 2 occurs and project B if private state 1 occurs, i.e., the variables $z_{ASY}, z_{ACY1u}, z_{ACY1d}, z_{BSY}, z_{BCY2u}, z_{BCY2d}$ are one and all other action variables are zero. The optimal amounts invested in the market portfolio and the risk-free asset are given in columns 2 and 3 of Table 4.5, respectively. There is an expected cash balance of EV = $25.63 million and an LSAD of $5.00 million at time 2. The portfolio has its least value, $0.32 million, in state 1d2d. It is worth noting that the value of the portfolio can be negative in some terminal states at higher risk levels, because the states' cash surplus variables are not restricted to nonnegative values. Thus, the investor will borrow money at the risk-free rate and invest it in the market portfolio, and may hence default on his/her loan obligations if the market does not go up in either of the time periods and the project portfolio performs poorly. Breakeven selling and buying prices for projects A and B, as well as for the entire project portfolio, are given in the last row of Table 4.6. These prices are here equal, and we therefore record them in a single cell. (This is a property of the employed preference model.) For the sake of comparison, we also give the terminal wealth levels when the investor does and does not invest in the project/portfolio being valued, denoted by $W^+$ and $W^-$, respectively. The portfolio value differs from the value of $5.85 million obtained in Gustafsson and Salo (2005), where the investor did not have the possibility to invest in the market portfolio.
Table 4.6 Project values ($ million)

                       Portfolio    A        B
$W^+$                  25.63        25.63    25.63
$W^-$                  17.16        22.17    20.54
$W^+ - W^-$            8.47         3.46     5.09
$v = v^b = v^s$        7.26         2.97     4.36
There are two reasons for this difference. First, due to the possibility to invest limitless amounts in the market portfolio, we use a risk-constrained preference model with R = $5.00 million, whereas the example in Gustafsson and Salo (2005) used a preference functional model with a risk aversion parameter of 0.5. With different preference models, the risk adjustments also differ. Second, because here it is possible to invest in the market portfolio, the optimal portfolio mix is likely to be different from the setting where investments in market-traded securities are not possible, and thus the project portfolio is also likely to obtain a value different from the one obtained in Gustafsson and Salo (2005).
4.6 Summary and Conclusions

In this chapter, we have considered the valuation of private projects in a setting where an investor can invest in a portfolio of projects as well as securities traded in financial markets, but where the replication of project cash flows with financial securities may not be possible. Specifically, we have developed a valuation procedure based on the concepts of breakeven selling and buying prices. This inverse optimization procedure requires solving portfolio selection problems with and without the project that is being valued and determining the lump sum that makes the investor indifferent between the two settings. We have also offered analytical results concerning the properties of breakeven prices. Our results show that the breakeven prices are, in general, consistent valuation measures in that they exhibit sequential additivity and consistency; they are also consistent with CCA. Quite importantly, the proposed methodology overcomes several deficiencies in earlier approaches to the valuation of projects and other illiquid investments. That is, the methodology accounts systematically for the time value of money, explicates the investor's risk preferences, and captures the projects' risk characteristics and their impacts on aggregate risks at the portfolio level. The methodology also accounts for the opportunity costs of alternative investment opportunities, which are explicitly included in the portfolio selection model. Overall, the methodology constitutes a new, complete, and theoretically well-founded approach to the valuation of non-market-traded investments. Furthermore, we have shown that it is possible to include real options and managerial flexibility in the projects by modeling them as decision trees in the CPP framework (Gustafsson and Salo 2005). We have also shown how such real
options can be valued using the concepts of opportunity buying and selling prices, and demonstrated that the resulting real option values are consistent with CCA. Indeed, since the present framework does not require the existence of a replicating trading strategy, a key implication is that the proposed methodology makes it possible to generalize CCA to the valuation of private projects in incomplete markets. This work suggests several avenues for further research. In particular, it is of interest to investigate settings where the investor can opt to sell the project to the market (or to a third party investor), as the resulting valuations would then holistically account for the impact that markets can have on the value of the project (i.e., implicit market pricing and opportunity costs). Analysis of specific preference models such as the mean-variance model also seems appealing, not least because the CAPM (Lintner 1965; Sharpe 1964) is based on such preferences. More work is also needed to facilitate the use of the methodology in practice and to link it to existing theories of pricing.
Appendix

Proof of Proposition 1

Let us prove the proposition first for the BSP and the sequential buying price. Let the BSP for the project be $v_j^s$. Then, based on Table 4.2, $v_j^s$ will be defined by the portfolio setting in the middle column of Table A.1. Next, we can observe that Problem B1 in determining the sequential buying price is the same as Problem A2 for the BSP, wherefore the optimal objective function values will also be the same, i.e., $W_b^- = W_s^-$. Since by definition of breakeven prices

Table A.1 Definition of the sequential buying price value of project j

Definition:
  A. Breakeven selling price: $v_j^s$ such that $W_s^+ = W_s^-$
  B. Sequential buying price: $v_j^b$ such that $W_b^+ = W_b^-$
Problem 1:
  Problem A1: mandatory investment in the project; optimal objective function value $W_s^+$; budget at time 0: $b_0$
  Problem B1: project is excluded from the portfolio (investment in the project is prohibited); optimal objective function value $W_b^-$; budget at time 0: $b_0 + v_j^s$
Problem 2:
  Problem A2: project is excluded from the portfolio (investment in the project is prohibited); optimal objective function value $W_s^-$; budget at time 0: $b_0 + v_j^s$
  Problem B2: mandatory investment in the project; optimal objective function value $W_b^+$; budget at time 0: $b_0 + v_j^s - v_j^b$
we have $W_s^+ = W_s^-$ and $W_b^+ = W_b^-$, it follows that we also have $W_s^+ = W_b^+$. Since Problem A1 and Problem B2 are otherwise the same, except that the first has the budget $b_0$ and the second $b_0 + v_j^s - v_j^b$, and because the optimal objective function value is strictly increasing with respect to the budget, it follows that $b_0 + v_j^s - v_j^b$ must be equal to $b_0$ to get $W_s^+ = W_b^+$, and therefore $v_j^b = v_j^s$. The proposition for the BBP and the respective sequential selling price is proven similarly.
Proof of Proposition 2

A replicating trading strategy for a project is a trading strategy that produces exactly the same cash flows in all future states of nature as the project. Thus, by definition, starting the project and shorting the replicating trading strategy will lead to a situation where the cash flows net each other out in each state of $\Omega$ except at time 0. At time 0, the cash flow will be $I_k - I_k^0$, where $I_k^0$ is the time-0 investment cost of the project and $I_k$ is the cash needed to initiate the replicating trading strategy. Therefore, a setting where the investor starts the project and shorts the replicating trading strategy will be exactly the same as a case where the project is not included in the portfolio and the time-0 budget is increased by $I_k - I_k^0$ (and hence the objective function values are the same). Therefore, the investor's BSP for the project is, by the definition of the BSP, $I_k - I_k^0$. The proposition for the BBP can be proven similarly.
Proof of Proposition 3

Let us begin with the BBP and a setting where the portfolio does not include projects and the budget is $b_0$. Suppose that there are $n$ projects that the investor buys sequentially. Let us denote the optimal value for this problem by $W_{b,1}^-$. Suppose then that the investor buys a project, indexed by 1, at his or her BBP, $v_1^b$. Let us denote the resulting optimal value for the problem by $W_{b,1}^+$. By definition of the BBP, $W_{b,1}^+ = W_{b,1}^-$. Suppose then that the investor buys another project, indexed by 2, at his or her BBP, $v_2^b$. The initial budget is now $b_0 - v_1^b$, and after the second project is bought, it is $b_0 - v_1^b - v_2^b$. Since the second project's Problem 1 and the first project's Problem 2 are the same, the optimal values for these two problems are the same, i.e., $W_{b,2}^-$ is equal to $W_{b,1}^+$. Add then the rest of the projects in the same manner, as illustrated in Table A.2. The resulting budget in the last optimization problem, which includes all the projects, is $b_0 - v_1^b - v_2^b - \cdots - v_n^b$. Because Problem 2 of each project (except for the last) in the sequence is always Problem 1 of the next project, and because by the definition of the BBP the objective function values in Problems 1 and 2 are equal for each project, we have $W_{b,n}^+ = W_{b,n}^- = W_{b,n-1}^+ = W_{b,n-1}^- = W_{b,n-2}^+ = \cdots = W_{b,1}^-$. Therefore, by the definition of the BBP, the BBP for the portfolio including all the
Table A.2 Buying prices in a sequential buying process

First project: P1 has optimal value $W_{b,1}^-$ and budget at time 0 of $b_0$; P2 has optimal value $W_{b,1}^+$ and budget $b_0 - v_1^b$.
Second project: P1 has optimal value $W_{b,2}^- = W_{b,1}^+$ and budget $b_0 - v_1^b$; P2 has optimal value $W_{b,2}^+$ and budget $b_0 - v_1^b - v_2^b$.
Third project: P1 has optimal value $W_{b,3}^- = W_{b,2}^+$ and budget $b_0 - v_1^b - v_2^b$; P2 has optimal value $W_{b,3}^+$ and budget $b_0 - v_1^b - v_2^b - v_3^b$.
...
nth project: P1 has optimal value $W_{b,n}^- = W_{b,n-1}^+$ and budget $b_0 - v_1^b - v_2^b - \cdots - v_{n-1}^b$; P2 has optimal value $W_{b,n}^+$ and budget $b_0 - v_1^b - v_2^b - \cdots - v_n^b$.
projects must be $v_{ptf}^b = v_1^b + v_2^b + \cdots + v_n^b$. By re-indexing the projects and using the above procedure, we can change the order in which the projects are added to the portfolio. In doing so, the projects can obtain different values, but they still sum up to the same joint value of the portfolio. Similar logic proves the proposition for BSPs.
References

Ahuja RK, Orlin JB (2001) Inverse optimization. Oper Res 49(5):771–783
Belegundu AD, Chandrupatla TR (1999) Optimization concepts and applications in engineering. Prentice Hall, New York
Brealey R, Myers S (2000) Principles of corporate finance. McGraw-Hill, New York
Brosch R (2008) Portfolios of real options. Lecture notes in economics and mathematical systems, vol 611. Springer, Berlin
Clemen RT (1996) Making hard decisions – an introduction to decision analysis. Duxbury, Pacific Grove
Dixit AK, Pindyck RS (1994) Investment under uncertainty. Princeton University Press, Princeton
Eppen GD, Martin RK, Schrage L (1989) A scenario based approach to capacity planning. Oper Res 37(4):517–527
Fishburn PC (1977) Mean-risk analysis with risk associated with below-target returns. Am Econ Rev 67(2):116–126
French S (1986) Decision theory – an introduction to the mathematics of rationality. Ellis Horwood, Chichester
Gustafsson J, Salo A (2005) Contingent portfolio programming for the management of risky projects. Oper Res 53(6):946–956
Konno H, Yamazaki H (1991) Mean-absolute deviation portfolio optimization and its applications to the Tokyo stock market. Manage Sci 37(5):519–531
Lintner J (1965) The valuation of risk assets and the selection of risky investments in stock portfolios and capital budgets. Rev Econ Stat 47(1):13–37
Luenberger DG (1998) Investment science. Oxford University Press, New York
Markowitz HM (1952) Portfolio selection. J Finance 7(1):77–91
Markowitz HM (1959) Portfolio selection: efficient diversification of investments. Cowles Foundation, Yale
Markowitz HM (1987) Mean-variance analysis in portfolio choice and capital markets. Frank J. Fabozzi Associates, New Hope
Mulvey JM, Gould G, Morgan C (2000) An asset and liability management model for Towers Perrin-Tillinghast. Interfaces 30(1):96–114
Ogryczak W, Ruszczynski A (1999) From stochastic dominance to mean-risk models: semideviations as risk measures. Eur J Oper Res 116(1):33–50
Raiffa H (1968) Decision analysis – introductory lectures on choices under uncertainty. Addison-Wesley, Reading
Sharpe WF (1964) Capital asset prices: a theory of market equilibrium under conditions of risk. J Finance 19(3):425–442
Sharpe WF (1970) Portfolio theory and capital markets. McGraw-Hill, New York
Smith JE, Nau RF (1995) Valuing risky projects: option pricing theory and decision analysis. Manage Sci 41(5):795–816
Trigeorgis L (1996) Real options: managerial flexibility and strategy in resource allocation. MIT Press, Cambridge, MA
Yu P-L (1985) Multiple-criteria decision making: concepts, techniques, and extensions. Plenum, New York
Chapter 5
Interactive Multicriteria Methods in Portfolio Decision Analysis

Nikolaos Argyris, José Rui Figueira, and Alec Morton
Abstract Decision Analysis is a constructive, learning process. This is particularly true of Portfolio Decision Analysis (PDA) where the number of elicitation judgements is typically very large and the alternatives under consideration are a combinatorial set and so cannot be listed and examined explicitly. Consequently, PDA is to some extent an interactive process. In this chapter we discuss what form that interactivity might take, discussing first of all how the process of asking for judgements should be staged and managed, and secondly what assumptions should be made about the substantive value models which the analyst assumes. To make the discussion concrete, we present two interactive procedures based on extended dominance concepts which assume linear additive and concave piecewise-linear additive value functions, respectively.
5.1 Introduction

Recent years have seen Portfolio Decision Analysis (PDA) grow in practical importance and in the attention which it has received from Decision Analysis scholars and practitioners. The characteristic feature of PDA is the application of Decision Analysis concepts of preference and uncertainty modelling to the construction of portfolios of organisational activities. As such PDA represents the application of management science techniques to one of the most significant problems which an organisation faces, that of arriving at a systematic and "rational" allocation of internal funds and workforce. Numerous examples of practical applications exist in military procurement (Austin and Mitchell 2008; Ewing et al. 2006), in the management of R&D in pharmaceutical and high-technology companies as well as
in the government sector (Phillips and Bana e Costa 2007; Lindstedt et al. 2008; Morton et al. 2011), in planning healthcare provision (Kleinmuntz 2007), in prioritising maintenance funds for public infrastructure such as roads and flood defences (Mild and Salo 2009), and the like. Supporting the resource allocation process is not a new ambition in Operations Research. Indeed, the very term "programming" in mathematical programming is borrowed from the military planning problems which were the original inspiration for the seminal work of Dantzig in the late 1940s. Viewed from a mathematical programming perspective, the tools of decision analysis provide a systematic way to specify an objective function in situations where values are contested and decision makers face significant uncertainties, as is typical in substantial applications. This is an important aspect of the analysis process, as solutions are often highly sensitive to the way in which objectives are specified and operationalised mathematically, and decision makers (DMs) cannot be expected to simply write down or draw out objective functions without support. Taking this theme somewhat further, a common view (which we share) is that preference elicitation is a constructive process (Slovic and Lichtenstein 2006) whereby the DM reflects, deliberates, learns, and comes to form a relatively stable set of values (Phillips 1984; Roy 1993). Often this involves a reconciliation of alternative perspectives, for example, the "gut feel" ranking of options and the modelled ranking which comes from the disaggregate judgements assembled in the decision model. Learning takes place in all choice situations; however, this is particularly true in PDA (as compared to single choice where only one object is to be selected) as the set of projects which the DM has to assess may be projects with which she is not deeply familiar, and the number of elicitation judgements is typically very large. Moreover, the set of possible portfolios is typically so large that it cannot be contemplated directly and instead has to be defined implicitly. If one accepts this view, it seems clear that any approach to PDA should be (in some sense) interactive. Further, anyone who wishes to design a tool or process for PDA has to confront two critical questions which we focus on in this chapter.

• What assumptions are to be made substantively about the form of the value function (if any) which is to be used to characterise the DM's preferences?
• How should the process of asking for judgements from the DM be staged and managed, so as to support learning and ensure that the time available for elicitation is used most productively?

In this chapter, drawing on ideas from the sister discipline of multiobjective programming, we discuss these two questions, focusing simultaneously on questions of computational tractability and efficient use of elicited judgement. An important idea in this chapter is the use of extended dominance concepts to guide interactive search. Because these concepts have been developed and extensively discussed in the multiobjective programming literature, we focus particularly on multicriteria or multiobjective PDA rather than probabilistic PDA. However, there is a strong formal analogy between the additive value model $\sum_l \lambda_l v_l(\cdot)$ and the expected utility
model $\sum_l p_l u(\cdot)$, and the reader who is more interested in choice under uncertainty than under conflicting objectives may find the chapter retains a semblance of comprehensibility if she reads "probability" for "criterion weight", "utility" for "value", and so forth throughout. The reader who is interested in finding out more about multiobjective programming is referred to Steuer (1986), Miettinen (1999), or Ehrgott (2005) for further information. The chapter proceeds as follows. First, we address the two questions above. In Section 5.2, we discuss assumptions relating to the form of the value function; in Section 5.3, we discuss questions relating to the staging of the interactive procedure. Section 5.4 presents our own interactive procedure, which we think is promising in terms of bridging the gap between the decision analysis and multiobjective optimisation literatures. Section 5.5 presents some numerical examples of this procedure. Section 5.6 concludes.
5.2 Assumptions About Value Functions

In this section, we will discuss the implications of assumptions which the analyst and the DM together make about the value functions which characterise the DM's preferences. To do this, we briefly outline some formal concepts and notation which will underpin our analysis. We suppose that there is a DM who can undertake a number of projects, subject to resource constraints, for example, on cash or manpower. For expositional simplicity we will frame our discussion in terms of a single DM, while recognising that resource allocation decisions are often – indeed typically – made collectively. This DM has a number of criteria or objectives which she wishes to achieve, that is to say "maximise". Formal notation is presented in Table 5.1. Again for expositional simplicity, we will suppose that projects are not linked by logical constraints (for example, one project is not a superset of another project). Such constraints pose no essential computational difficulties from the point of view of the methods we will discuss, but would complicate notation as we would have to define criterion functions $f$ over a subset of $\{0,1\}^N$, as some combinations of projects might be logically impossible and could have no criterion value associated with them. One way to look at this problem and data is as a multiobjective programme:

$$\text{"max" } f(x) = (f_1(x), f_2(x), \dots, f_l(x), \dots, f_P(x))$$
$$\text{subject to: } \sum_{j \in J} w_{ji}\, x_j \le W_i, \quad i \in I,$$
$$x_j \in \{0, 1\}, \quad j \in J. \tag{5.1}$$
Specifically, this is a multidimensional, multiobjective knapsack problem (Gomes da Silva et al. 2004; Mavrotas et al. 2009).
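To make (5.1) concrete, the sketch below enumerates a tiny instance with hypothetical data: four candidate projects, two criteria, and a single resource constraint. Because the feasible set is small here, the non-dominated points can be found by direct pairwise comparison; in realistic instances the portfolio set is too large to enumerate, which is precisely why the implicit, constraint-based definition matters.

```python
from itertools import product

# Hypothetical instance: 4 projects, 2 criteria, 1 resource.
benefit = [(6, 1), (4, 3), (3, 5), (2, 2)]   # (f1, f2) contribution per project
cost, W = [3, 4, 5, 2], 9                    # resource use and budget

portfolios = [x for x in product((0, 1), repeat=4)
              if sum(c * xi for c, xi in zip(cost, x)) <= W]

def f(x):  # criterion vector of a portfolio
    return tuple(sum(b[l] * xi for b, xi in zip(benefit, x)) for l in range(2))

def dominates(a, b):  # Pareto dominance in criterion space
    return all(ai >= bi for ai, bi in zip(a, b)) and a != b

efficient = [x for x in portfolios
             if not any(dominates(f(y), f(x)) for y in portfolios)]
for x in efficient:
    print(x, f(x))
```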
Table 5.1 Main notation

Projects: $J = \{1, 2, \dots, j, \dots, N\}$ denotes an index set of indivisible projects.
Portfolios: $X = \{x^1, x^2, \dots, x^j, \dots, x^N\} \subseteq \{0,1\}^N$ denotes a set of feasible portfolios, where a portfolio is a set containing one or more projects. This set may be too large to list explicitly.
Resources: $G = \{g_1, g_2, \dots, g_i, \dots, g_M\}$ denotes a set of non-decreasing resource functions. The set of constraints $g_i(x) \le W_i$, for all $i \in I$, are resource constraints, where the right-hand side corresponds to the available amount of a given resource. Resources provide an alternative, implicit way to define $X$.
Criteria: $F = \{f_1, f_2, \dots, f_l, \dots, f_P\}$ denotes a set of non-decreasing criterion functions. The criterion functions $f_l$ from $F$ are such that $f_l: \{0,1\}^N \mapsto \mathbb{R}_+$ and $f = (f_1, f_2, \dots, f_l, \dots, f_P)$.
Another way to look at this problem is through the lens of decision analysis. In this case, the focus of interest is a value function $V(\cdot)$, that is to say, a functional which maps the criterion space (i.e., the whole of the space of $P$-dimensional vectors of non-negative real numbers, in which $f(X)$ is contained) into $[0, 1]$ and represents the DM's preferences in the sense that the DM prefers $z'$ to $z''$ iff $V(z') > V(z'')$. To see the relationship between these complementary perspectives, we define the following extended dominance concepts. Given a set of value functions $\mathcal{V}$, we define:

Definition 1. (V-efficient portfolios and V-non-dominated points). A feasible portfolio $x' \in X$ is called V-efficient iff, for some $V \in \mathcal{V}$, there is no other feasible portfolio $x'' \in X$ such that $V(f(x'')) > V(f(x'))$. In this case, $f(x')$ is called a V-non-dominated point. Any portfolio which is not V-efficient is V-inefficient, and any feasible point which is not V-non-dominated is V-dominated.

Definition 2. (V-supported portfolios and points). A feasible portfolio $x' \in X$ is called a V-efficient supported portfolio iff, for some $V \in \mathcal{V}$, there is no convex combination $z = \sum_{k=1}^K \lambda_k f(x^k)$ (with $\sum_{k=1}^K \lambda_k = 1$ and $\lambda_k \ge 0\ \forall k$) such that $V(z) > V(f(x'))$. In this case, $f(x')$ is called a V-non-dominated supported point. Any efficient portfolios or non-dominated points which are not supported are called unsupported.

Where $\mathcal{V}$ is the set of all possible monotonic increasing functionals, these extended dominance concepts collapse to standard multiobjective definitions of supported and unsupported dominance and efficiency. It is also well known that when $\mathcal{V}$ is the set of linear additive functions $\mathcal{LA} = \{V(\cdot): V(z) = \sum_{l=1}^P \lambda_l z_l \text{ for some } \lambda_l \in \mathbb{R}_+\ \forall l\}$, then all efficient portfolios/non-dominated points are supported and the two definitions coincide. The implications of restricting oneself to a linear additive value model are worth exploring in some depth, as this is a model form which is commonly used in practice. As is well known, the use of linear additive value models has considerable advantages in terms of both conceptual simplicity and computational
tractability. For example, once the DM has specified weights, an attractive approach is to prioritise projects in the order of the ratio of their weighted contribution on each criterion to their resource cost (Phillips and Bana e Costa 2007). However, if the linear additive model is not a correct characterisation of the DM's preferences, it can be rather misleading. For example, suppose the analyst (male) asks the DM (female) for "swing weights" for two benefit dimensions ("revenue" and "social welfare", for example) using the following approach; he asks her whether she would prefer all the revenue gains of doing all the projects or the social welfare gains of doing all the projects, and when she concludes in favour of the former, he asks her how great the value of the said revenue gains is as a fraction of the said social welfare gains. Normalising gives a pair of scaling factors, $\lambda_{revenue}$ and $\lambda_{welfare}$, with $\lambda_{revenue} + \lambda_{welfare} = 1$. In the linear value model, these "weights" are effectively extrapolated across the whole of the criterion space: an obvious danger is that there could be some curvature in the organisation's value functions over aggregate levels of revenue and social welfare, and a consequent curvature in the isopreference curves (see MacCrimmon and Wehrung 1988 for some empirical evidence on this point). If the isopreference curvature is rather slight in the neighbourhood of the points chosen as the basis for the weighting question, it may not be evident to DM or analyst. We illustrate this danger with an example, depicted in Fig. 5.1. Suppose that revenue can be delivered at a lower cost than social welfare, in the sense that if the organisation devoted only part of its money budget to delivering revenue, it could deliver almost all the revenue which it would achieve from implementing all its candidate projects; on the other hand, if it devoted all its budget to delivering social welfare, it would generate only a fraction of the social welfare which it would gain from implementing all the projects on the table. The (extended convex hull of the) feasible set is thus the grey area in the figure and the implied isopreference curves of the linear additive model are the dashed lines. However, suppose the DM attaches diminishing value to revenue (for example, in the case of a not-for-profit organisation which has to pay salaries but really exists to undertake more altruistic activities). In this case, the isopreference curves which would be a better representation of the DM's preferences (referred to for
[Fig. 5.1 Isopreference lines for an organisation trading off revenue (horizontal axis) and social welfare (vertical axis)]
only the most provisional preference information at the first pass, and using that to narrow the space of possible attractive portfolios (for example, using triage rules) before returning to firm up key preference judgements (Liesiö et al. 2007). Still other analysts might perform mini-Portfolio Decision Analyses within different areas (e.g., budget categories) of an organisation with a view to building an overall global value function which synthesises and subsumes the value functions across multiple areas (Phillips and Bana e Costa 2007).

Nature of preference information sought? The preference information sought may differ in terms of precision (Are point values required or are interval or qualitative assessments adequate?), comprehensiveness (Do all parameters of the value model have to be assessed, at least provisionally, before proceeding to a subsequent stage?), and whether preference intensities or only ordinal preferences are admitted. But different analysts may also differ in terms of the objects over which preferences are sought: Are preferences sought over the portfolios in X directly (as is the case in the Balanced Beam procedure of Kuskey et al. 1981) or are they sought over the criterion space, with the construction of specific portfolios coming rather late in the process (Keeney 1992; Keeney and McDaniels 1992, 1999)? This has significant consequences for the construction of a value function: If preferences are elicited over the (discrete) portfolio space, the solvability properties necessary for the identification of a unique value function will not generally hold, whereas if questioning takes place in the criterion space, the analyst may be able to ask for indifference judgements which would allow the identification of a unique value function.

Definition of attractive portfolios and termination criterion? The general aim of the analysis is that the DM learns enough that she can home in on a small number of "attractive" portfolios. But is attractiveness defined by the model (for example, portfolios which are efficient with respect to some possible value function) or should they rather be portfolios which are attractive for some other reason (for example, the holistic evaluation of the DM or of the DM's boss)? The former suggests that the model structure can be taken for granted, and the latter that the model structure itself is always tentative. Similarly, if there are multiple attractive portfolios, or the portfolios which the value model suggests are attractive are not so regarded by the DM, does this suggest that the model needs refinement, or might one terminate the analysis at this stage if the DM has a sense of having adequately explored the problem (Phillips 1984)?

To summarise, in designing a PDA procedure one typically has several design elements or parameters to play with. There is disappointingly little in the literature in the way of frameworks to guide this design process (as is the case in multicriteria decision analysis and the multiobjective programming literature more generally – see, for example, Vanderpooten (1989) or Gardiner and Steuer (1994)). It seems clear that such design choices should be made on the basis of both efficiency considerations, based on analytic and computational work, and psychological considerations, based on empirical investigation of DMs' responses to these methods. However, such research is thin on the ground and where it does exist gives ambiguous guidance. For
example, in an interactive single-choice multicriteria setting, Stewart (1993) finds that obtaining indifference statements is highly effective in narrowing down the space of possible value functions. One can always elicit an indifference statement if one accepts intensity-of-preference judgements by asking for a bisection point in the criterion space. However, such questions are also more cognitively challenging for DMs than simple questions of ordinal preference, and so it is not possible to infer from this a context-free recommendation that bisection questions should always be used in all circumstances.
5.4 An Interactive Scheme

In Sections 5.2 and 5.3, we have discussed two critical questions arising in the design of a PDA process: the question of what the analyst can assume about the DM's value function and the question of how the interactive procedure itself should be staged and managed. Possible answers to these questions are linked, and that linkage is heavily dependent on the existence of supporting software and computing technology. In this section, to illustrate this point, we outline an interactive scheme and present the mathematical and computational machinery needed to make this scheme operational. The scheme uses the sort of extended dominance concept outlined in Section 5.2. It thus differs from multiobjective approaches based on a free search concept, in which assumptions about value functions are not made explicit (Vanderpooten and Vincke 1989; Vanderpooten 1989), or from approaches such as the Zionts–Wallenius procedure (Zionts and Wallenius 1976, 1983), where qualitative assumptions (such as quasiconcavity) are made about value functions but the implications of a DM's judgements are used to direct local search rather than to characterise the set of possible value functions. Extended dominance is made concrete in our scheme in the following way. Judgements of ordinal preference $\succsim$ between pairs of alternatives or intensity of preference $\succsim^*$ between four-tuples of alternatives are elicited from DMs, and these judgements are translated into constraints on the set of possible value functions. As usual, $\sim$ and $\sim^*$ denote the symmetric parts of these preference relations. Our scheme relies on three particular assumptions about the problem structure:

1. Value functions are additive and concave, and the partial value functions are monotonically increasing. Concavity is often a plausible feature of value and utility functions. Empirically, value functions are likely to be concave if there is some sort of satiation effect, whereby benefits give less marginal value when there are already large quantities, although non-concavities may exist if there is, for example, a target level of performance (see MacCrimmon and Wehrung 1988). An interesting subcase is where value functions are linear – as we have mentioned above, this is a common assumption in practice, and it makes our scheme particularly tractable.
2. Criterion functions are additive, that is to say $f_l(x) = c_l x$, where $c_l$ is a vector of project-level benefits. This assumption is not as restrictive as it might appear – see, for example, Liesiö et al. (2008) for some ways to handle criterion-level interactions between projects within a similar framework.
3. Resource functions are linear additive, that is to say $g_i(x) = w_i x$, where $w_i$ is a vector of project-level costs (which again is not an especially restrictive assumption).

We do not necessarily recommend the use of the procedures we propose for all PDA problems, or indeed for any specific one. Nor are they in any way typical of the vast literature on interactive multiobjective methods (for overviews see, e.g., Vanderpooten 1989; Gardiner and Steuer 1994; Korhonen 2004; Branke et al. 2008). Rather, we present them here from the point of view of illustrating what is possible. Using as it does extended dominance, this scheme has the attractive property (from a Decision Analysis point of view) of being based on an explicit characterisation of the set of possible value functions derived from elicited information, and so can be related to approaches which are popular in the Decision Analysis community. A program flowchart representing the logic of the interactive scheme is shown in Fig. 5.3. Stage 1 involves identifying criteria, projects, and resources, and thus feasible portfolios; Stage 2 involves the identification of the reference set and the expression of preferences by the DM. In Stage 3, a preference disaggregation procedure is used to test for possible compatible value functions, and perhaps to generate such value functions if they exist and incompatibilities if they do not. In Stage 4, some sort of portfolio optimisation model is solved and the results passed back to the DM for consideration, and depending on whether the model is judged requisite, the process either cycles back to an earlier stage or terminates. To make this scheme practical, we need machinery for steps 3 and 4 in particular, which can "find compatible value functions", "identify inconsistencies", and "solve portfolio optimisation models". In Section 5.4.1, we present two formulations which involve representing the space of possible value functions as a polyhedron and discuss how these formulations might help.
5.4.1 Formulations

We suppose that the possible portfolios range in value, in each dimension $j$ of the criterion space, from $\underline{z}_j$ to $\bar{z}_j$, with the respective vectors written $\underline{z}$ and $\bar{z}$. We suppose that the DM has expressed preferences $\succsim$ or preference differences $\succsim^*$ over some portfolios corresponding to a "reference set" $R$ of points in criterion space. "$r^1 \succsim r^2$" is read as "$r^1$ is at least as preferred as $r^2$" and "$(r^1 - r^2) \succsim^* (r^3 - r^4)$" is read as "$r^1$ is preferred to $r^2$ at least as intensely as $r^3$ is preferred to $r^4$". If $r^1 \succsim r^2$ and $r^2 \succsim r^1$, then $r^1$ and $r^2$ are indifferent, $r^1 \sim r^2$, and the corresponding relation $\sim^*$ is defined in the obvious way. We suppose for presentational convenience that the DM has judged $\bar{z} \succsim r \succsim \underline{z}$ for all $r \in R$.
[Fig. 5.3 Program flow of the interactive scheme. Stage 1 (structuring): benefits, resources, and projects define the feasible portfolios. Stage 2 (elicitation of preference information): construct a set of reference portfolios and express preferences over them. Stage 3 (construction of value functions): test whether compatible value functions exist; if so, find compatible value functions, otherwise identify inconsistencies. Stage 4 (identification of attractive portfolios): solve portfolio optimisation models; if the model is judged requisite, terminate (END), otherwise cycle back.]
Bounding the value function so that all portfolios are evaluated in the $[0, 1]$ interval and translating the DM's expressed preference judgements into constraints, we can arrive at a space of possible value functions, as shown in (5.2). (Some writers also propose modelling a weak preference relation by "padding" the constraints with a very small number $\varepsilon$, but there is in our view no theoretic reason for any particular choice of the value of $\varepsilon$.)

$(\Pi 1)$ $U(r^1) \ge U(r^2) \quad \forall (r^1, r^2) \in R \times R: r^1 \succsim r^2$
$(\Pi 2)$ $U(r^1) = U(r^2) \quad \forall (r^1, r^2) \in R \times R: r^1 \sim r^2$
$(\Pi 3)$ $U(r^1) - U(r^2) \ge U(r^3) - U(r^4) \quad \forall (r^1, r^2, r^3, r^4) \in R \times R \times R \times R: (r^1 - r^2) \succsim^* (r^3 - r^4)$
$(\Pi 4)$ $U(r^1) - U(r^2) = U(r^3) - U(r^4) \quad \forall (r^1, r^2, r^3, r^4) \in R \times R \times R \times R: (r^1 - r^2) \sim^* (r^3 - r^4)$
$(\Pi 5)$ Set "1": $U(\bar{z}) = 1$
$(\Pi 6)$ Set "0": $U(\underline{z}) = 0$.   (5.2)
The intended interpretation of (5.2) is as follows: U(r) is the value or utility assigned to point r. The first two constraints (Π1–Π2) impose that whenever r^1 is preferred to (indifferent to) r^2, the value assigned to r^1 is at least as great as (equal to) the value assigned to r^2 (this is what it means for a value function to represent a system of preferences). The second two constraints (Π3–Π4) impose similar restrictions on values for statements involving preference intensities. The third pair of constraints (Π5–Π6) bound the value scores over the feasible criterion values to the [0, 1] interval. In other words, (5.2) provides us with the space of possible value assignments which respect the DM's expressed preferences.

However, if we are to perform an optimisation, it is not enough to know the value assignments to these portfolios only; we have to know something about value functions over the whole of the criterion space. One simple way to extend the value function from these assignments is to introduce a vector of non-negative variables (λ_1, λ_2, …, λ_l, …, λ_P) and impose an additional constraint U(r^k) = Σ_{l∈F} λ_l r_l^k for each r^k ∈ R. This requires that the value function is of a linear additive form (U ∈ LA), which will lead to easier and more convenient formulations. However, it is plausible that we may wish to use more complex value models. One approach is to borrow ideas from preference disaggregation (Figueira et al. 2009). Our presentation here draws on the UTA approach (Jacquet-Lagrèze and Siskos 1982, 2001). We show how it is possible to use such ideas to impose a value function of the concave piecewise-linear additive form (U ∈ CPA). Introduce variables u_l(f_l(r_l^k)) for each r^k ∈ R and impose the constraint U(f(r^k)) = Σ_{l∈F} u_l(f_l(r_l^k)) (thus rather than weighting the criteria, we are transforming them by utility functions). Define z̲_l := z_l^0 and z̄_l := z_l^{L_l}. Then each range [z_l^0, z_l^{L_l}] (= [z̲_l, z̄_l]), for all l ∈ F, can be divided into L_l subintervals:

[z_l^0, z_l^1], [z_l^1, z_l^2], …, [z_l^{q−1}, z_l^q], …, [z_l^{L_l−1}, z_l^{L_l}]   ∀l ∈ F,   (5.3)

where

z_l^q = z_l^0 + (q / L_l)(z̄_l − z̲_l),   q = 0, 1, …, L_l, and ∀l ∈ F.   (5.4)

The value u_l(·) of a criterion level r_l^k, i.e. u_l(r_l^k), can be determined by linear interpolation, as follows, for r_l^k ∈ [z_l^q, z_l^{q+1}], for all l ∈ F:

u_l(r_l^k) = u_l(z_l^q) + ((r_l^k − z_l^q) / (z_l^{q+1} − z_l^q)) (u_l(z_l^{q+1}) − u_l(z_l^q)).   (5.5)

The piecewise-linear function is thus defined through the determination of the value of the breakpoints (i.e. the u_l(z_l^q)). Increasing the number of breakpoints enlarges the space of possible value functions and so makes more accurate assessments possible; however, it will increase the size of the associated mathematical programmes and will have computational implications.
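To make the interpolation concrete, the following is a minimal Python sketch of the breakpoint grid (5.4) and the interpolation rule (5.5); the function names and the example numbers are ours, chosen purely for illustration.

    def breakpoints(z_lo, z_hi, L):
        # Equally spaced grid z^0, ..., z^L over [z_lo, z_hi], as in (5.4).
        return [z_lo + q * (z_hi - z_lo) / L for q in range(L + 1)]

    def partial_value(r, grid, u_bp):
        # Linear interpolation of u_l between adjacent breakpoints, as in (5.5).
        for q in range(len(grid) - 1):
            if grid[q] <= r <= grid[q + 1]:
                t = (r - grid[q]) / (grid[q + 1] - grid[q])
                return u_bp[q] + t * (u_bp[q + 1] - u_bp[q])
        raise ValueError("criterion level outside its range")

    grid = breakpoints(0.0, 90.0, 3)   # [0.0, 30.0, 60.0, 90.0]
    u_bp = [0.0, 0.55, 0.85, 1.0]      # increasing (cf. C3 below) and concave (cf. C4)
    print(partial_value(45.0, grid, u_bp))   # approx. 0.7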
In the LA formulation we impose monotonicity by requiring that the weights are non-negative; in this new environment, we achieve this by imposing the family of constraints u_l(z_l^{q+1}) − u_l(z_l^q) ≥ 0, ∀l ∈ F, ∀q = 0, …, L_l − 1. If we were just concerned with feasible value functions this would be enough. However, we are specifically interested in value functions which we can use in an optimisation setting, and to achieve this we will impose concavity through an additional set of constraints

(u_l(z_l^{q+2}) − u_l(z_l^{q+1})) / (z_l^{q+2} − z_l^{q+1}) ≤ (u_l(z_l^{q+1}) − u_l(z_l^q)) / (z_l^{q+1} − z_l^q)   (5.6)

∀l ∈ F, ∀q = 0, …, L_l − 2. To summarise, we have the following polyhedra of possible value functions in the LA and CPA cases, as shown in (5.7) and (5.8).

Linear additive case:

(Π1)  U(r^1) ≥ U(r^2)   ∀(r^1, r^2) ∈ R × R : r^1 ≽ r^2
(Π2)  U(r^1) = U(r^2)   ∀(r^1, r^2) ∈ R × R : r^1 ∼ r^2
(Π3)  U(r^1) − U(r^2) ≥ U(r^3) − U(r^4)   ∀(r^1, r^2, r^3, r^4) ∈ R × R × R × R : (r^1 → r^2) ≽ (r^3 → r^4)
(Π4)  U(r^1) − U(r^2) = U(r^3) − U(r^4)   ∀(r^1, r^2, r^3, r^4) ∈ R × R × R × R : (r^1 → r^2) ∼ (r^3 → r^4)
(Π5)  U(z̄) = 1
(Π6)  U(z̲) = 0
(L1)  U(r^k) = Σ_{l∈F} λ_l r_l^k   ∀r^k ∈ R
(L2)  λ_l ≥ 0   ∀l ∈ F
(5.7)

Concave piecewise-linear additive case:

(Π1)–(Π6) as in (5.7)
(C1)  U(r^k) = Σ_{l∈F} u_l(r_l^k)   ∀r^k ∈ R
(C2)  u_l(r_l^k) = u_l(z_l^q) + ((r_l^k − z_l^q) / (z_l^{q+1} − z_l^q)) (u_l(z_l^{q+1}) − u_l(z_l^q))   ∀r^k ∈ R, ∀l ∈ F
(C3)  u_l(z_l^{q+1}) − u_l(z_l^q) ≥ 0   ∀l ∈ F, ∀q = 0, …, L_l − 1
(C4)  (u_l(z_l^{q+2}) − u_l(z_l^{q+1})) / (z_l^{q+2} − z_l^{q+1}) ≤ (u_l(z_l^{q+1}) − u_l(z_l^q)) / (z_l^{q+1} − z_l^q)   ∀l ∈ F, ∀q = 0, …, L_l − 2
(5.8)
The interpretation of these systems of inequalities is straightforward. Constraints (Π1–Π6), which are shared by both systems, are the same as in (5.2) and ensure that utility assignments are compatible with stated preferences. In the linear additive case, they are supplemented by constraints L1–L2, which ensure that the value function can be written as a positive weighted sum of the criterion values. In the concave piecewise-linear additive case the additional constraints are the C constraints: C1 ensures that the value of a point can be written as the sum of its partial criterion values; C2 ensures that these criterion values are piecewise-linear interpolations of the values at the adjacent grid points; C3 ensures that the partial value functions are increasing; and C4 that they are concave.
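As a concrete illustration, the following sketch (in Python, using the open-source PuLP modelling library) assembles the LA polyhedron (5.7) for an invented two-criterion reference set and recovers one compatible weight vector; all point values and preference judgements are ours, not taken from the chapter.

    from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpStatus

    # Invented two-criterion reference data; ideal point (1, 1), anti-ideal (0, 0).
    R = {"r1": (0.8, 0.2), "r2": (0.3, 0.7), "r3": (0.5, 0.5)}
    prefs = [("r1", "r2")]     # assumed judgement: r1 is at least as preferred as r2
    indiff = [("r2", "r3")]    # assumed judgement: r2 is indifferent to r3

    lam = [LpVariable(f"lam_{l}", lowBound=0) for l in range(2)]   # (L2)

    def U(pt):                 # (L1): linear additive value of a criterion point
        return lpSum(lam[l] * pt[l] for l in range(2))

    m = LpProblem("LA_polyhedron", LpMaximize)
    m += lam[0]                              # any objective selects one member
    for a, b in prefs:
        m += U(R[a]) >= U(R[b])              # (Pi1)
    for a, b in indiff:
        m += U(R[a]) == U(R[b])              # (Pi2)
    m += U((1.0, 1.0)) == 1                  # (Pi5)
    m += U((0.0, 0.0)) == 0                  # (Pi6)
    m.solve()
    print(LpStatus[m.status], [v.value() for v in lam])   # Optimal, [0.5, 0.5]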
5.4.2 Use of Formulations

We can use these polyhedra to address various questions of interest:

1. Do value functions compatible with the DM's expressed preference information exist?
2. If they do not exist, can we suggest to the DM preference judgements which she may wish to revise?
3. If value functions do exist, what would a representative sample of them look like?
4. What efficient portfolios might we wish to present back to the DM to elicit further preferences?

Question 1 can easily be addressed using a penalty idea. We replace constraints Π1–Π4 in the LA or CPA polyhedra with the following "soft" constraints Π′1–Π′4:

(Π′1)  U(r^1) + σ⁺(r^1, r^2) ≥ U(r^2)   ∀(r^1, r^2) ∈ R × R : r^1 ≽ r^2
(Π′2)  U(r^1) + σ⁺(r^1, r^2) − σ⁻(r^1, r^2) = U(r^2)   ∀(r^1, r^2) ∈ R × R : r^1 ∼ r^2
(Π′3)  U(r^1) − U(r^2) + σ⁺(r^1, r^2, r^3, r^4) ≥ U(r^3) − U(r^4)   ∀(r^1, r^2, r^3, r^4) ∈ R × R × R × R : (r^1 → r^2) ≽ (r^3 → r^4)
(Π′4)  U(r^1) − U(r^2) + σ⁺(r^1, r^2, r^3, r^4) − σ⁻(r^1, r^2, r^3, r^4) = U(r^3) − U(r^4)   ∀(r^1, r^2, r^3, r^4) ∈ R × R × R × R : (r^1 → r^2) ∼ (r^3 → r^4)
(5.9)

These constraints Π′1–Π′4 contain slack terms σ⁺ and σ⁻ which measure the deviation of a solution from feasibility with respect to Π1–Π4. To obtain a solution which is feasible with respect to Π′1–Π′4, we require the slack terms to be non-negative and minimise their sum. If the value of the resulting programme is 0, there is a feasible assignment of values; otherwise not. In the latter case, we have found that the preferences expressed by the DM are collectively not consistent with any possible value function (this is not uncommon – see Korhonen et al. 1990). It is possible to identify candidate sets of judgements which the DM may wish to relax by formulating and solving a sequence of integer programmes, based on the ideas of Mousseau et al. (2003), thus supplying an answer to question 2.
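The slack-minimisation test is easy to prototype. In the sketch below (again PuLP, with invented data) the point r2 dominates r1, yet the simulated DM states that r1 is at least as preferred as r2, so no monotone LA value function is compatible and the minimised total slack comes out strictly positive.

    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

    R = {"r1": (0.8, 0.2), "r2": (0.9, 0.3)}   # r2 dominates r1 on both criteria
    prefs = [("r1", "r2")]                     # ...yet the DM asserts r1 over r2

    lam = [LpVariable(f"lam_{l}", lowBound=0) for l in range(2)]
    sigma = {p: LpVariable(f"sigma_{p[0]}_{p[1]}", lowBound=0) for p in prefs}

    def U(pt):
        return lpSum(lam[l] * pt[l] for l in range(2))

    m = LpProblem("consistency_check", LpMinimize)
    m += lpSum(sigma.values())                   # minimise total deviation
    for a, b in prefs:
        m += U(R[a]) + sigma[(a, b)] >= U(R[b])  # soft version of (Pi1)
    m += U((1.0, 1.0)) == 1                      # (Pi5)
    m += U((0.0, 0.0)) == 0                      # (Pi6)
    m.solve()
    print(value(m.objective))   # 0.1 > 0: the judgement is inconsistent with LA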
First we solve the following integer programme, where M is a large real number:

min  Σ_{(r^1, r^2) ∈ R×R : r^1 ≽ r^2} δ(r^1, r^2)
   + Σ_{(r^1, r^2) ∈ R×R : r^1 ∼ r^2} (δ⁺(r^1, r^2) + δ⁻(r^1, r^2))
   + Σ_{(r^1, r^2, r^3, r^4) : (r^1 → r^2) ≽ (r^3 → r^4)} δ(r^1, r^2, r^3, r^4)
   + Σ_{(r^1, r^2, r^3, r^4) : (r^1 → r^2) ∼ (r^3 → r^4)} (δ⁺(r^1, r^2, r^3, r^4) + δ⁻(r^1, r^2, r^3, r^4))

subject to:

(Π″1)  U(r^1) + M δ(r^1, r^2) ≥ U(r^2)   ∀(r^1, r^2) ∈ R × R : r^1 ≽ r^2
(Π″2)  U(r^1) + M δ⁺(r^1, r^2) ≥ U(r^2) and U(r^2) + M δ⁻(r^1, r^2) ≥ U(r^1)   ∀(r^1, r^2) ∈ R × R : r^1 ∼ r^2
(Π″3)  U(r^1) − U(r^2) + M δ(r^1, r^2, r^3, r^4) ≥ U(r^3) − U(r^4)   ∀(r^1, r^2, r^3, r^4) ∈ R × R × R × R : (r^1 → r^2) ≽ (r^3 → r^4)
(Π″4)  U(r^1) − U(r^2) + M δ⁺(r^1, r^2, r^3, r^4) ≥ U(r^3) − U(r^4) and U(r^3) − U(r^4) + M δ⁻(r^1, r^2, r^3, r^4) ≥ U(r^1) − U(r^2)   ∀(r^1, r^2, r^3, r^4) ∈ R × R × R × R : (r^1 → r^2) ∼ (r^3 → r^4)
(Π5)  U(z̄) = 1
(Π6)  U(z̲) = 0
(1)  δ(r^1, r^2) ∈ {0, 1}   ∀(r^1, r^2) ∈ R × R : r^1 ≽ r^2
(2)  δ⁺(r^1, r^2), δ⁻(r^1, r^2) ∈ {0, 1}   ∀(r^1, r^2) ∈ R × R : r^1 ∼ r^2
(3)  δ(r^1, r^2, r^3, r^4) ∈ {0, 1}   ∀(r^1, r^2, r^3, r^4) ∈ R × R × R × R : (r^1 → r^2) ≽ (r^3 → r^4)
(4)  δ⁺(r^1, r^2, r^3, r^4), δ⁻(r^1, r^2, r^3, r^4) ∈ {0, 1}   ∀(r^1, r^2, r^3, r^4) ∈ R × R × R × R : (r^1 → r^2) ∼ (r^3 → r^4)
(5.10)

This uses the same idea as formulation (5.9). The difference is that in this formulation the δs are binary variables. We multiply these δs by a large number M, and this is used as the slack in the inequalities Π″1–Π″4, which govern whether value assignments represent preferences (which they must do if all δs = 0). The mathematical programme minimises the number of violated constraints (the sum of indicator variables of violated constraints). If it has been shown that there are no feasible value functions in LA or CPA, but the objective function of (5.10) is 0 at optimality, the DM's preference judgements do not violate normative principles (e.g. transitivity) but they are inconsistent with linear additivity/concave additivity. In this case, it might be advisable to begin a discussion with the DM to re-establish whether these structural assumptions are appropriate, perhaps with a view to restructuring the value model. If the objective function is not 0 at optimality, it may be possible to restore feasibility if the DM is prepared to reconsider some of her preference or indifference statements.
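A toy version of (5.10) can be sketched as follows. Here the values U(r) are free variables (no additivity is imposed) with the [0, 1] bounds standing in for Π5–Π6, and a single invented judgement is reused from the previous sketch. The optimum is 0, illustrating the case in which judgements are normatively consistent even though (as the earlier sketch showed) they clash with the LA structure.

    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

    names = ["r1", "r2"]
    prefs = [("r1", "r2")]        # the same assumed judgement as above
    M = 100.0                     # big-M constant

    U = {k: LpVariable(f"U_{k}", lowBound=0, upBound=1) for k in names}
    delta = {p: LpVariable(f"delta_{p[0]}_{p[1]}", cat="Binary") for p in prefs}

    m = LpProblem("min_relaxations", LpMinimize)
    m += lpSum(delta.values())                    # number of relaxed judgements
    for a, b in prefs:
        m += U[a] + M * delta[(a, b)] >= U[b]     # (Pi''1)
    m.solve()
    print(int(value(m.objective)))   # 0: normatively consistent, though not LA-consistent
    # If the optimum were positive, a cut of the form (5.11) over the deltas equal
    # to 1 would be appended and the programme re-solved, enumerating further
    # candidate sets of judgements to relax.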
Denote the index set of the δs (δ⁺s, δ⁻s) which take the value 1 in the optimal solution of (5.10) as S_1 (S_1⁺, S_1⁻). Then append the constraint

Σ_{δ(r^1, r^2) ∈ S_1} δ(r^1, r^2) + Σ_{δ⁺(r^1, r^2) ∈ S_1⁺} δ⁺(r^1, r^2) + Σ_{δ⁻(r^1, r^2) ∈ S_1⁻} δ⁻(r^1, r^2)
+ Σ_{δ(r^1, r^2, r^3, r^4) ∈ S_1} δ(r^1, r^2, r^3, r^4) + Σ_{δ⁺(r^1, r^2, r^3, r^4) ∈ S_1⁺} δ⁺(r^1, r^2, r^3, r^4) + Σ_{δ⁻(r^1, r^2, r^3, r^4) ∈ S_1⁻} δ⁻(r^1, r^2, r^3, r^4)
≤ |S_1| + |S_1⁺| + |S_1⁻| − 1
(5.11)
to (5.10) and resolve. This constraint ensures that when we solve our modified version of (5.10) we will find a new set of infeasible constraints. Denote the index set of the δs (δ⁺s, δ⁻s) which take the value 1 in the optimal solution to this new programme as S_2 (S_2⁺, S_2⁻), define a new constraint corresponding to (5.11), and iterate until the resulting mathematical programme is infeasible. This procedure will generate a succession of sets of preference and indifference statements which may be relaxed to restore feasibility.

Turning now to question 3, and assuming for the moment that we obtain a positive indication that there is indeed a set of feasible value functions, a natural next question might be how to find a representative set – perhaps a sophisticated DM might be able to choose between value functions directly. An idea which has been expressed in the literature, though in an MOLP rather than a combinatorial context (Jacquet-Lagrèze et al. 1987; Stewart 1987), is to obtain a set of value functions by finding the feasible value functions which maximise λ_l or u_l(z̄_l) for each l ∈ F. In the CPA case, however, it would seem to make sense also to consider "most concave" and "least concave" value functions, perhaps obtained by a procedure such as maximising/minimising the utility of some central point in z, as this could make a significant difference to the character of the eventual solution: intuitively, the "more concave" a utility function is, the more evenly distributed the preferred solution vector will be in the criterion space.

Once a number of suitable sets of λ_l s, or of u_l(r_l^k)s and u_l(z_l^q)s, have been found, any member of that set can be used as an objective function in a combinatorial programme, and solving that programme will result in an efficient solution. In the LA case, given a vector λ° of weights, one can form

max U = Σ_{l∈F} λ_l° c^l x

(where c^l x = Σ_{j∈J} c_j^l x_j is the criterion-l score of portfolio x)

subject to:
(O1)  Σ_{j∈J} w_ij x_j ≤ W_i   ∀i ∈ I
(O2)  x_j ∈ {0, 1}   ∀j ∈ J
(5.12)
O1 is a constraint on feasible solutions to ensure that their aggregate cost falls within some budget, and O2 ensures that projects are either done or not done, i.e. they cannot be fractionally done.
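A runnable miniature of (5.12) might look as follows; the three-project data, the single budget row, and the weight vector are all invented, and PuLP's default solver is assumed to be available.

    from pulp import LpProblem, LpMaximize, LpVariable, lpSum

    weights = [0.5, 0.5]                  # a lambda vector compatible with (5.7)
    c = [[45, 30, 20], [10, 40, 35]]      # c[l][j]: contribution of project j to criterion l
    cost, budget = [130, 100, 120], 250   # one resource row for (O1)

    x = [LpVariable(f"x_{j}", cat="Binary") for j in range(3)]   # (O2)
    m = LpProblem("LA_portfolio", LpMaximize)
    m += lpSum(weights[l] * c[l][j] * x[j] for l in range(2) for j in range(3))
    m += lpSum(cost[j] * x[j] for j in range(3)) <= budget       # (O1)
    m.solve()
    print([int(v.value()) for v in x])    # an optimal 0-1 portfolio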
The CPA case is somewhat more involved, as we have to linearise the partial value functions. To do this, let (z_l^{q−1}, u_l(z_l^{q−1})) and (z_l^q, u_l(z_l^q)) denote the two points that allow us to define piece q of the piecewise-linear function u_l(·) with respect to criterion l ∈ F. This piece has a slope s_l^q = (u_l(z_l^q) − u_l(z_l^{q−1})) / (z_l^q − z_l^{q−1}), and since the function is concave the following relation holds:

s_l^1 ≥ s_l^2 ≥ … ≥ s_l^q ≥ … ≥ s_l^{L_l}.

Let d_l^q = z_l^q − z_l^{q−1} denote the difference between two consecutive criterion levels, and ẑ_l^q denote a feasible value within the range [z_l^{q−1}, z_l^q]. Thus, ẑ_l^q can be considered as a new decision variable such that 0 ≤ ẑ_l^q ≤ d_l^q, ∀l ∈ F, ∀q = 1, 2, …, L_l; the single argument z_l of each function u_l(z_l) becomes z_l = Σ_{q=1}^{L_l} ẑ_l^q, and this in turn has to equal Σ_{j∈J} c_j^l x_j to ensure that there is a feasible point in the decision space. Now, we can define the linear value function u_l(z_l) = Σ_{q=1}^{L_l} s_l^q ẑ_l^q, ∀l ∈ F. So, by adding constraints D1 and D2, which impose these conditions, given some value function U(f(x)), we can find the optimal portfolio for each of the extremal value functions by solving a new mixed {0, 1} LP model:

max U = Σ_{l∈F} Σ_{q=1}^{L_l} s_l^q ẑ_l^q

subject to:
(O1)  Σ_{j∈J} w_ij x_j ≤ W_i   ∀i ∈ I
(O2)  x_j ∈ {0, 1}   ∀j ∈ J
(D1)  Σ_{q=1}^{L_l} ẑ_l^q = Σ_{j∈J} c_j^l x_j   ∀l ∈ F
(D2)  0 ≤ ẑ_l^q ≤ d_l^q   ∀l ∈ F, ∀q = 1, 2, …, L_l
(5.13)
Given the way we built the concave functions, it is easy to see that the solution will be the value of the optimal portfolio according to the value function consistent with the u_l(r_l^k)s and u_l(z_l^q)s. Because of the concavity of the function, for a given criterion level ẑ_l^{q*} ∈ [0, d_l^{q*}], it is also easy to see that ẑ_l^q = d_l^q for all q = 1, …, q* − 1 and ẑ_l^q = 0 for all q = q* + 1, …, L_l: the segments fill up greedily in order of decreasing slope. The approach sketched above has given us one way of tackling question 4: in the context of an interactive procedure, one could use the above method to find a number of very different possible value functions; use these value functions to identify (hopefully also very different) portfolios, each of which is efficient with respect to some value function; and present these portfolios back to the DM to elicit new judgements.
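The segment-variable construction in (5.13) is compact enough to prototype directly. The sketch below uses a single criterion, invented slopes and segment widths, and exhibits the greedy fill-in that the concavity guarantees.

    from pulp import LpProblem, LpMaximize, LpVariable, lpSum

    slopes = [0.02, 0.01, 0.005]          # s^q, decreasing because u_l is concave
    widths = [30.0, 30.0, 30.0]           # d^q, the segment lengths
    contrib, cost, budget = [45, 30, 20], [130, 100, 120], 250

    x = [LpVariable(f"x_{j}", cat="Binary") for j in range(3)]           # (O2)
    zhat = [LpVariable(f"zhat_{q}", lowBound=0, upBound=widths[q])       # (D2)
            for q in range(3)]

    m = LpProblem("CPA_portfolio", LpMaximize)
    m += lpSum(slopes[q] * zhat[q] for q in range(3))        # linearised u_l(z_l)
    m += lpSum(zhat) == lpSum(contrib[j] * x[j] for j in range(3))       # (D1)
    m += lpSum(cost[j] * x[j] for j in range(3)) <= budget               # (O1)
    m.solve()
    print([int(v.value()) for v in x], [v.value() for v in zhat])
    # Because the slopes decrease, the solver fills zhat_0 before zhat_1 before
    # zhat_2 - the greedy property noted in the text.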
Suppose, however, our DM is particularly demanding, and wants not just some but all efficient portfolios. Consider first of all the LA case. In this case, we do not have a determinate λ°. To comply with the DM's request, we could form a programme, parametric in λ, of the following form:

max U(λ) = Σ_{l∈F} λ_l c^l x

subject to:
(Π1)  U(r^1) ≥ U(r^2)   ∀(r^1, r^2) ∈ R × R : r^1 ≽ r^2
(Π2)  U(r^1) = U(r^2)   ∀(r^1, r^2) ∈ R × R : r^1 ∼ r^2
(Π3)  U(r^1) − U(r^2) ≥ U(r^3) − U(r^4)   ∀(r^1, r^2, r^3, r^4) ∈ R × R × R × R : (r^1 → r^2) ≽ (r^3 → r^4)
(Π4)  U(r^1) − U(r^2) = U(r^3) − U(r^4)   ∀(r^1, r^2, r^3, r^4) ∈ R × R × R × R : (r^1 → r^2) ∼ (r^3 → r^4)
(Π5)  U(z̄) = 1
(Π6)  U(z̲) = 0
(L1)  U(r^k) = Σ_{l∈F} λ_l r_l^k   ∀r^k ∈ R
(L2)  λ_l ≥ 0   ∀l ∈ F
(O1)  Σ_{j∈J} w_ij x_j ≤ W_i   ∀i ∈ I
(O2)  x_j ∈ {0, 1}   ∀j ∈ J
(5.14)
We have seen all the constraints of this programme before: the reader is referred back to (5.2), (5.7), and (5.12) for an interpretation. It is not immediately obvious how one should solve this parametric programme. One strategy, explored in detail in Argyris et al. (2011), is to treat the programme as an optimisation not just in x but also in λ. Because of the Boolean nature of the x vector, it is possible to substitute a third set of variables α_jl for the product term λ_l x_j, as long as a suitable set of constraints is added. Solving this problem yields one efficient solution; if constraints are then appended to "cut off" this efficient solution and the programme is resolved, one obtains another efficient solution, and it is possible to show that all efficient solutions can be obtained in this way. The CPA case can be dealt with in a completely analogous way.

Another possibility is that the DM wants to identify portfolios which are "as different as possible" from some incumbent portfolio x° which she has in mind. In this case, we can solve a programme with the constraint set of (5.14) but in which we maximise Σ_{l∈F} λ_l c^l x − Σ_{l∈F} λ_l c^l x°. Argyris et al. (2011) present the formulation in full for the LA case and describe a procedure whereby a DM interacts with a Decision Support System. In this procedure, at each iteration, the Decision Support System solves this programme to obtain a portfolio x*, and the DM expresses a preference between her incumbent portfolio and x*. The more preferred of these two portfolios becomes the new incumbent, and the Decision Support System resolves the programme updated with the new preference information. Argyris et al. show that if the DM knows her value function, the procedure will eventually recover her optimal portfolio. These results can also be generalised to the CPA case.
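The substitution of α_jl for λ_l x_j is a standard exact linearisation for the product of a binary and a bounded continuous variable; a minimal sketch (our constraint names, with λ assumed to lie in [0, 1]) is given below. Argyris et al. (2011) give the full formulation.

    from pulp import LpProblem, LpMinimize, LpVariable

    lam = LpVariable("lam", lowBound=0, upBound=1)   # lambda_l, assumed in [0, 1]
    x = LpVariable("x", cat="Binary")                # x_j
    a = LpVariable("a", lowBound=0)                  # stands in for lam * x

    m = LpProblem("linearise_product", LpMinimize)
    m += a                      # dummy objective; the point is the constraint set
    m += a <= lam               # a can never exceed lam
    m += a <= x                 # x = 0 forces a = 0
    m += a >= lam + x - 1       # x = 1 forces a = lam
    # Together with a >= 0, these bounds make a equal lam * x at every feasible
    # 0-1 point, so the bilinear objective terms of (5.14) become linear.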
5.5 Example

Finally, we present for illustrative purposes an example of how our formulations might work. The data we use are based on work with a Primary Care Trust (a body which combines the roles of local health authority and health insurer in England) reported in Airoldi et al. (2011). The DM has a cash surplus of £1m per annum and is interested in thinking through how these funds might be spent. There are 21 candidate packages, and the two criteria in this example are health benefit and inequality reduction. The data for this example are shown in Table 5.2.

We start by considering the linear additive case, where V = LA. Using the enumeration procedure described earlier we can identify all V-efficient supported portfolios and the associated points in the criterion space. It turns out that there is only a handful of these in this example, specifically the five portfolios listed in Table 5.3. This provides a very considerable reduction in problem complexity, as we have effectively transformed an instance of a PDA problem into a typical instance of a single choice problem. Figure 5.4 plots all supported points in the criterion space.

It is particularly interesting that the single portfolio that has the greatest impact in reducing inequalities seems to be relatively unattractive in terms of overall health benefits. Similarly, the group of portfolios that offer higher health benefits seem relatively unattractive in terms of reducing inequalities. Looking back at the portfolio compositions and data, there is a simple explanation for this.
Table 5.2 Data for numerical example

Project    Cost    Health benefit    Inequality reduction
1           130         45                 75
2           650      4,500                100
3           100          2.5               25
4           600         19.2               50
5           300         12                  5
6           300         16                 50
7           760         90                 25
8            50          6                  0
9            75        100                 60
10           50         72                  8
11          150         18.9               80
12          120         18                 56
13          300         16.2               76
14          300         50.4               40
15          100         43.2               49
16           60         45                 17.5
17          160         67.5               35
18           80          9                 35
19          600        101.25              70
20          480         72                 56
21          140         36                 28
Table 5.3 Supported portfolios and points in the criterion space

Portfolio    Projects                             Cost    Health     Inequality reduction
r^1          (1, 3, 9, 11, 12, 15, 16, 17, 18)     975      349.1         432.5
r^4          (1, 2, 9, 12)                         975    4,663.0         291.0
r^3          (1, 2, 9, 10, 18)                     985    4,726.0         278.0
r^2          (1, 2, 9, 10, 16)                     965    4,762.0         260.5
r^5          (2, 8, 9, 10, 15, 16)                 985    4,766.2         234.5
[Fig. 5.4 Non-dominated portfolios for the healthcare example: the supported points r^1–r^5 of Table 5.3 plotted with health benefit (0–5,000) on the horizontal axis and inequality reduction (0–500) on the vertical axis; the unsupported portfolio r^6, introduced below, is marked with a star.]
Project 2 delivers very considerable health benefits – more than an order of magnitude greater than any other project.

We now illustrate the interactive procedure based on an LA value function, which was discussed in Section 5.4. For the purposes of "simulating" a DM, we shall assume that her preference ordering is represented by the LA value function u*(z) = z_1 + 2z_2, where z_1, z_2 are the aggregate health benefit and inequality reduction criterion scores, respectively. To initialise the procedure we need to identify an efficient supported portfolio. This can easily be done by maximising one of the criteria over the feasible portfolio set. Choosing to maximise the reduction in inequalities leads to the identification of supported portfolio r^1. We then initialise R := {r^1} and the incumbent portfolio r° := r^1. We proceed to identify an alternative portfolio which maximises the difference in value from the incumbent with respect to a compatible value function (any LA value function at this first iteration). This leads to the identification of r^5 as the alternative portfolio. At this point the DM is asked to state her preferences with respect to r^1 and r^5. Since u*(r^5) > u*(r^1), she states that r^5 ≽ r^1. We incorporate this information in the characterisation of all compatible value functions by introducing the constraint U(r^5) ≥ U(r^1) in the instance of the formulation of Argyris et al. (2011), set the incumbent portfolio r° := r^5, and then resolve.
[Fig. 5.5 Two value functions compatible with expressed preference information: for each of the two value functions, the partial value functions for health benefit (over the range 0–5,340.15) and for inequality reduction (over the range 0–940.5) are plotted, each rising from 0 to 1 over its criterion range.]
This leads to the identification of r^4, which is preferred to r^5, so that we set r° := r^4, introduce the additional constraint U(r^4) ≥ U(r^5), and repeat the process. In the next two iterations, we set r° := r^3 and then r° := r^2, progressively adding the constraints U(r^3) ≥ U(r^4) and U(r^2) ≥ U(r^3). Resolving at this point, we identify r^2 again, with an optimal objective function value equal to zero. From this we establish that there exists no portfolio which could possibly be preferred to r^2 on the basis of any compatible value function. It is immediate to verify that this is indeed the case by using the value function u*(z) which simulates the DM.

Suppose, on the other hand, the DM has structurally quite different preferences, of the concave piecewise-linear additive (CPA) type. In particular, she faces a government target requiring her to achieve a certain reduction in health inequalities and she is intent on hitting this target. (Suppose this target is around the 310 mark, illustrated in the figure by the dashed line.) She does not only care about inequalities, but when her total score falls below the inequality target, her preferences are largely driven by the existence of the target: achieving the target, she says, is worth about 15 times as much as achieving the health benefit of doing all the interventions, for a total health benefit of 5,340 points. However, once she has achieved the target, she is not particularly concerned about further inequality reductions: in the neighbourhood of r^1, a unit of health inequality reduction is worth between 2 and 5 points of health benefit. If we were to find value functions which respect these expressed preferences (with three linear pieces in each criterion), we might find feasible partial value functions as shown in Fig. 5.5. Although rather hard to distinguish by visual inspection, these two value functions do yield different solutions. Optimising value function 2 returns us r^1, but value function 1 returns
us a point we have not previously encountered, r^6 – the portfolio of projects (1, 9, 10, 11, 12, 15, 16, 17, 21). This portfolio yields 445.6 units of health benefit and 408.5 units of reduction in health inequalities, and costs £985,000. It is shown in Fig. 5.4 marked with a star. Portfolio r^6 is thus an unsupported solution, which we could not have found if we had restricted ourselves to value functions of the linear additive type.
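Given the simulated value function u* and the Table 5.3 points, a few lines of Python confirm the LA part of this example; this check is ours, not part of the original analysis.

    # Criterion points from Table 5.3: (health benefit, inequality reduction).
    points = {
        "r1": (349.1, 432.5), "r4": (4663.0, 291.0), "r3": (4726.0, 278.0),
        "r2": (4762.0, 260.5), "r5": (4766.2, 234.5),
    }
    u_star = lambda z: z[0] + 2 * z[1]
    for name in sorted(points, key=lambda k: u_star(points[k])):
        print(name, u_star(points[name]))
    # Ascending order r1, r5, r4, r3, r2 matches the sequence of incumbents,
    # with r2 the simulated DM's optimal supported portfolio.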
5.6 Conclusion

This chapter has been predicated on a belief that incorporating interactive elements in PDA can often be highly beneficial. In our exploration of the form which that interactivity should take, we have discussed two interlinked questions: first, how should the process of asking for judgements be staged and managed, and second, what assumptions might an analyst want to make about the structure of a DM's value functions. We have presented and contrasted two interactive procedures for the case where value functions are linear additive or concave piecewise-linear additive. In the former case, it is relatively easy both to identify feasible value functions and to build formulations which integrate the weight and decision spaces (like (5.14)), but the disadvantage is that it imposes a strong and possibly unrealistic assumption on the DM's preferences. In the latter case, on the other hand, identifying value functions and solving the integrated formulation is computationally more complex, but holds out the possibility of representing a richer set of value functions.

One way to look at PDA is that it represents an attempt to broaden out the traditional decision analysis paradigm beyond the "single choice" setting, to a setting where the set of alternatives may be very large and only available implicitly (in particular because they are members of a combinatorial set defined by a system of constraints). An implication is that alongside the activities associated with performing a decision analysis – assessment of utility or value functions, elicitation of criterion weights or subjective probabilities, and so on – there is a need to select a small number of attractive alternatives from this feasible set, in other words to solve some sort of optimisation problem. Thus two (interlinked) questions arise:

• How can the elicitation best be structured (or what assumptions have to be made about the DM's beliefs and preferences) in order that the underlying optimisation problem becomes tractable?
• How can the optimisation problem best be exploited to produce results which can guide the elicitation procedure?

Neither of these important questions falls squarely within the boundaries of any one of the conventional OR/MS subfields, and therein lies the excitement and challenge. Because of this linkage between elicitation and optimisation, opportunities for advancing PDA seem to us to depend heavily on the availability of suitable software
(although it is quite possible to undertake Portfolio Decision Analyses with rather minimal technology – see, e.g. Kirkwood 1996). Several commercialised PDA software packages exist (Lourenço et al. 2008), and experimental software with some interactive features is also available and documented in the research literature (Liesiö et al. 2007, 2008; Lourenço et al. 2011). However, many apparently rather simple tasks, such as the enumeration of all efficient unsupported solutions given some set of preference information, are in fact extremely technically challenging. As a result, we can expect PDA software to go through multiple iterations and, by the time it stabilises, to incorporate some rather sophisticated algorithms. Building usable technology is itself an interactive, learning process.

In this chapter, we have tried to highlight that in designing a PDA procedure one typically has several design elements or parameters to play with – indeed, there are combinatorially many ways in which a PDA could be conducted (when one takes into account that choices have to be made about the structure of the value function; the nature of the questioning of the decision maker, whether cardinal or ordinal, and whether in the decision or criterion space; the intensity of interactivity; and so on). There is disappointingly little in the literature in the way of frameworks to guide this design process, and we hope that this chapter may represent a small step towards remedying this. The focus of this chapter has been on what is technically possible, but in the long run, we would hope that a broader suite of research work (analytic, numerical or empirical) may actually try to answer some of the questions about what works best, where, and why.

Acknowledgements The authors gratefully acknowledge the support of the RAMS grant from the Council of Rectors of Portuguese Universities and the British Council under the Treaty of Windsor programme, and COST Action Research grant IC0602 on Algorithmic Decision Theory. José Rui Figueira also acknowledges a RENOIR research grant from FCT (PTDC/GES/73853/2006) and financial support from LORIA (project: Multiple Criteria in ROAD). Alec Morton wishes to thank the Instituto Superior Técnico at the Technical University of Lisbon for the hospitality shown to him over several visits. We are grateful to Mara Airoldi and Jenifer Smith for permission to use the data on which our worked example is based. Thanks to Nuno Gonçalves for proofreading an earlier version and to Ahti Salo and Jeff Keisler for helpful comments.
References

Airoldi M, Morton A, Smith J, Bevan G (2011) Healthcare prioritisation at the local level: a socio-technical approach. Sympose Working Paper 7. LSE, London
Argyris N, Figueira JR, Morton A (2011) A new approach for solving multi-objective binary optimisation problems, with an application to the Knapsack problem. J Global Optim 49:213–235
Austin J, Mitchell IM (2008) Bringing value focused thinking to bear on equipment procurement. Mil Oper Res 13:33–46
Branke J, Deb K, Miettinen K, Słowiński R (eds) (2008) Multiobjective optimisation. Lecture notes in computer science: theoretical computer science and general issues, vol 5252. Springer, Berlin
Ehrgott M (2005) Multicriteria optimisation. Springer, Berlin
Ewing PL, Tarantino W, Parnell GS (2006) Use of decision analysis in the army base realignment and closure (BRAC) 2005 military value analysis. Decis Anal 3:33–49
Figueira JR, Greco S, Słowiński R (2009) Building a set of additive value functions representing a reference preorder and intensities of preference: GRIP method. Eur J Oper Res 195:460–486
Gardiner LR, Steuer RE (1994) Unified interactive multiple-objective programming. Eur J Oper Res 74:391–406
Gomes da Silva C, Clímaco J, Figueira J (2004) A scatter search method for the bi-criteria multi-dimensional {0,1}-knapsack problem using surrogate relaxation. J Math Model Algorithms 3:183–208
Jacquet-Lagrèze E, Siskos J (1982) Assessing a set of additive utility functions for multi-criteria decision-making, the UTA method. Eur J Oper Res 10:151–184
Jacquet-Lagrèze E, Siskos Y (2001) Preference disaggregation: 20 years of MCDA experience. Eur J Oper Res 130:233–245
Jacquet-Lagrèze E, Meziani R, Słowiński R (1987) MOLP with an interactive assessment of a piecewise linear utility function. Eur J Oper Res 31:350–357
Keeney RL (1992) Value focused thinking: a path to creative decision-making. Harvard University Press, Cambridge
Keeney RL, McDaniels TL (1992) Value-focused thinking about strategic decisions at BC Hydro. Interfaces 22:94–109
Keeney RL, McDaniels TL (1999) Identifying and structuring values to guide integrated resource planning at BC Gas. Oper Res 47:651–662
Kirkwood C (1996) Strategic decision making: multiobjective decision analysis with spreadsheets. Duxbury, Belmont, CA
Kleinmuntz DN (2007) Resource allocation decisions. In: Edwards W, Miles RF, von Winterfeldt D (eds) Advances in decision analysis. CUP, Cambridge
Korhonen P (2004) Interactive methods. In: Figueira J, Greco S, Ehrgott M (eds) Multiple criteria decision analysis: state of the art surveys. Springer, Berlin
Korhonen P, Moskowitz H, Wallenius J (1990) Choice behavior in interactive multiple-criteria decision making. Ann Oper Res 23(1):161–179
Kuskey KP, Waslow KA, Buede DM (1981) Decision analytic support of the United States Marine Corps' program development: a guide to the methodology. Decisions and Designs, Inc., McLean, VA
Liesiö J, Mild P, Salo A (2007) Preference programming for robust portfolio modeling and project selection. Eur J Oper Res 181:1488–1505
Liesiö J, Mild P, Salo A (2008) Robust portfolio modeling with incomplete cost information and project interdependencies. Eur J Oper Res 190(3):679–695
Lindstedt M, Liesiö J, Salo A (2008) Participatory development of a strategic product portfolio in a telecommunication company. Int J Technol Manag 42:250–266
Lourenço J, Bana e Costa C, Morton A (2008) Software packages for multi-criteria resource allocation. In: International engineering management conference, Europe, Estoril, Portugal
Lourenço J, Bana e Costa C, Morton A (2011) PROBE: a multicriteria decision support system for portfolio robustness evaluation (revised version). LSE Operational Research Group Working Paper Series, LSEOR 09.108. LSE, London
MacCrimmon K, Wehrung D (1988) Taking risks. Macmillan, Glasgow
Mavrotas G, Figueira JR, Florios K (2009) Solving the bi-objective multidimensional knapsack problem exploiting the concept of core. Appl Math Comput 215(7):2502–2514
Miettinen K (1999) Nonlinear multiobjective optimisation. Kluwer, Dordrecht
Mild P, Salo A (2009) Combining a multiattribute value function with an optimization model: an application to dynamic resource allocation for infrastructure maintenance. Decis Anal 6(3):139–152
Morton A, Bird D, Jones A, White M (2011) Decision conferencing for science prioritisation in the UK public sector: a dual case study. J Oper Res Soc 62:50–59
Mousseau V, Figueira J, Dias L, Gomes da Silva C, Clímaco J (2003) Resolving inconsistencies among constraints on the parameters of an MCDA model. Eur J Oper Res 147:72–93
Phillips LD (1984) A theory of requisite decision models. Acta Psychol 56:29–48 Phillips LD, Bana e Costa C (2007) Transparent prioritisation, budgeting and resource allocation with multi-criteria decision analysis and decision conferencing. Ann Oper Res 154:51–68 Roy B (1993) Decision science or decision-aid science? Eur J Oper Res 66:184–203 Slovic P, Lichtenstein S (eds) (2006) The construction of preference. CUP, Cambridge Steuer RE (1986) Multiple criteria optimisation: theory, computation and application. Wiley, New York Stewart TJ (1987) An interactive multiple objective linear programming method based on piecewise linear additive value functions. IEEE Trans Syst Man Cybern SMC-17:799–805 Stewart TJ (1993) Use of piecewise linear value functions in interactive multi-criteria decision support: a Monte Carlo study. Manag Sci 39:1369–1381 Vanderpooten D (1989) The interactive approach in MCDA: a technical framework and some basic conceptions. Math Comput Model 12:1213–1220 Vanderpooten D, Vincke P (1989) Description and analysis of some representative interactive multicriteria procedures. Math Comput Model 12:1221–1238 Zionts S, Wallenius J (1976) Interactive programming method for solving multiple criteria problem. Manag Sci 22(6):652–663 Zionts S, Wallenius J (1983) An interactive multiple objective linear-programming method for a class of underlying non-linear utility-functions. Manag Sci 29(5):519–529
Chapter 6
Empirically Investigating the Portfolio Management Process: Findings from a Large Pharmaceutical Company

Jeffrey S. Stonebraker and Jeffrey Keisler
Abstract This exploratory study analyzes cross-sectional project data from the enterprise information system at a large pharmaceutical company in order to gain insight into the company's portfolio decision process and the determinants of the ways in which decision analytic tools were applied in practice. Statistical measures based on the economic parameters describe individual projects and various partitions of the company's portfolio of projects. Other statistics, such as the number of scenarios, describe aspects of the structured decision process. The study found significant differences across project groups, and the results suggest reasons for these differences. More generally, obtainable empirical data proved to be useful for studying patterns in the use of portfolio decision analysis.
6.1 Introduction

6.1.1 Research Motivation

Applications of decision analysis are often more complicated than merely conducting probability or utility assessments and then performing calculations. Numerous variations on the textbook approaches exist in practice. These variations arise as practitioners attempt to adapt to their clients' needs. Doing so is as much an art as a science. Practitioners have by this time developed those elements of this art that can be identified simply by reflecting on client needs and project experience. To build knowledge that goes beyond a collection of lessons learned about the
application of decision analysis, it will be useful to explicitly consider empirical data on what happens when decision analysis is applied. The empirical study of portfolio decision analysis could be especially fruitful. Every portfolio creates coherent data about a large number of projects. Portfolio decision analysis itself involves complexities that drive variation in application – both organizational and analytic, as discussed elsewhere in this book. We were inspired to seek empirical data because of questions we had that were not answered elsewhere. To what extent do portfolios vary? To what extent does practice vary? And to what extent does practice align with portfolio characteristics? We obtained access to raw data from a large pharmaceutical company (henceforth referred to as LPC) where portfolio decision analysis was widely used. This chapter presents exploratory work based on a rich case study using archival portfolio planning data. We characterized the portfolios and processes used and the relationships between them. As practitioners and scholars, we found the results to be informative and in some ways surprising. More importantly, this effort holds insights for how empirical work can be a worthwhile direction for future portfolio decision analysis research.
6.1.2 Overview

The analytical process and methods used by pharmaceutical companies to evaluate the commercial potential of the drug development projects in their portfolios may be a contributing factor in lower-than-expected returns. Over the last three decades, less than a third of new drugs launched into the marketplace have returned their research and development (R&D) investment (Grabowski et al. 2002; Grabowski and Vernon 1994; Grabowski and Vernon 1990). Credible estimates that account for the uncertainty involved in evaluating the commercial potential of drug development projects are often lacking and difficult to assess (Evans 1996; Steven 2002; Stonebraker 2002). For example, less than half of pharmaceutical firms in the 1990s used economic evaluations to aid in go/no-go decisions during development (DiMasi et al. 2001). Furthermore, decisions to terminate development of new drugs for primarily economic reasons have occurred very late in the drug development process (DiMasi 2001). Decision processes and drivers of decision quality in pharmaceutical PDA are well characterized in Chapter 13 of this volume (Kloeber 2011).

Portfolio management typically focuses on characterizing the performance indicators of the portfolio, that is, maximizing the value of the portfolio, balancing the portfolio, and aligning the portfolio with the company's strategy (see, e.g., Cooper et al. 1998), but little on the analytical process. The analytical process is how project-level information is obtained and then used to make go/no-go decisions for each project in the portfolio. Key pieces of project-level information include the following: probabilities of technical success, development cost and time, and
commercial potential. Drug development is, due to its regulatory nature, a stage-gate process. Thus, it is customary to characterize risk in terms of the probabilities of technical success for each remaining stage of development. Many pharmaceutical companies have a rich, historical database of past projects which they use to estimate probabilities of technical success and development cost and timing. The other key piece of project-level information needed for portfolio management is an estimate of the commercial potential of a drug development project. This is often the weakest piece of information for portfolio management, especially in cases where the company has limited experience with which to gauge the market response.

This chapter focuses on the analytical process used for evaluating the commercial potential of the drug development projects in a portfolio at LPC. We extract project-level information from the company's enterprise information system, explore this information quantitatively, and compare the characteristics of the analytical process through a decision-quality lens (Matheson and Menke 1994; Matheson and Matheson 1998). In the next section, we provide background on the decision context – drug development and applications of decision analysis in drug development. Section 6.3 presents our hypotheses, while Section 6.4 describes the methods we used to characterize the analytical process of evaluating the commercial potential of the drug development projects in a pharmaceutical organization's portfolio. Section 6.5 describes the results, Section 6.6 presents a discussion of our findings, and Section 6.7 offers concluding remarks.
6.2 Background

6.2.1 Decision Context of Drug Development

The long timelines and enormous investments from preclinical through clinical testing to regulatory approval to product launch, as well as the staggering odds against technical success, render drug development a challenging undertaking. Typically, it takes 10–15 years to successfully complete the R&D of a new drug, during which time approximately $1.2 billion (DiMasi and Grabowski 2007) is spent, including the cost of failures, since a drug's success cannot be predetermined. Of those drugs that begin clinical testing, approximately 20% are approved by regulatory agencies (e.g., the US Food and Drug Administration – FDA) for launch, with success rates varying by therapeutic disease area from 12% for respiratory diseases to 28% for anti-infective disorders (DiMasi 2001).

Even if drug development is successful as defined by regulatory approval for market launch, there are no guarantees of commercial success – judged by payback of the R&D investment once the drug is launched into the market. In fact, since the mid-1970s, approximately one out of three drugs launched into the marketplace returned its R&D investment (Grabowski et al. 2002; Grabowski and Vernon 1994,
1990). Lower returns on new drug introductions have been consistent for the past three decades, while R&D investments and life-cycle sales have increased in recent years, development times have slightly decreased (DiMasi 2002), and attrition rates have remained stable (DiMasi 2001).

Drug development decision making is characterized by a sequence of decision points or gates (Cooper et al. 1998) at which a drug development project is either terminated due to some undesirable feature or progressed to the next development stage, with resources allocated accordingly. The development stages – Preclinical, Phase 1, Phase 2, and Phase 3 – are designed to gather information prior to the next decision point. At each decision point, information collected on the technical feasibility and commercial potential of the drug development project is evaluated, and these results are used in deciding whether development should continue. Technical feasibility includes the time and costs needed to develop the drug and the likelihood of the drug succeeding in each stage of development. Commercial potential focuses on the uncertainty of internal factors (e.g., production costs, marketing costs) as well as on the uncertainty of external factors (e.g., the size of the market, the drug's potential share of that market, and the drug's price) that affect a drug's economic viability over its product life cycle (Bauer and Fischer 2000).
6.2.2 Drug Development Decision Analysis

For several decades, decision analysis has proven to be of tremendous value in resolving complex business decisions (Keefer et al. 2004; Corner and Kirkwood 1991), especially in the high-risk industry of drug development. Decision analysis has been used in the pharmaceutical industry for choosing among research drug candidates (Loch and Bode-Greuel 2001), deciding whether to continue development for a new drug (Beccue 2001; Johnson and Petty 2003; Pallay and Berry 1999; Poland 2004; Poland and Wada 2001; Stonebraker 2002; Viswanathan and Bayney 2004), deciding whether to pursue a joint venture to develop a new drug (Thomas 1985), planning production capacity for marketed drugs (Abt et al. 1979), assessing new technologies (Peakman and Bonduelle 2000), and evaluating the drug development projects in a portfolio (Evans 1996; Sharpe and Keelin 1998; Spradlin and Kutoloski 1999; Steven 2002).

During a drug development decision analysis, information is collected from experts on the key issues or parameters affecting the technical feasibility and commercial potential of the drug development project; a financial model is then constructed to describe relationships among the relevant model parameters. To be most effective, the application of decision analysis to a drug development project requires a structured interaction between a cross-functional drug development team and its senior management, with periodic exchanges of key deliverables (Bodily and Allen 1999). These teams typically have expertise in preclinical research, clinical testing, medicine, regulatory affairs, manufacturing, sales and marketing, finance, project management, and decision analysis. Instead of using
life years or quality-adjusted life years to describe outcomes, as is commonly done in clinical decision analysis (e.g., Cantor 2004), drug development decision analysis uses net present value (NPV) as the financial outcome measure. NPV is a standard financial method for evaluating projects; it is the present value of free, future cash flows, with each cash inflow (including sales revenue) and cash outflow (including cost) discounted back to its present value and then summed. Pharmaceutical companies often use a small set of common model parameters across the projects in a portfolio (Hess 1993). In our instance, the most influential model parameters for NPV are analyzed probabilistically using a decision tree to evaluate the technical feasibility and commercial potential of the drug development project and to determine a probability distribution of NPV and an expected NPV. Drug development decision analyses are essential for building the support needed to improve the quality of decisions within the pharmaceutical organization.
6.3 Conjectures

Besides the three objectives that typically characterize the performance indicators of the portfolio, there is a fourth objective that is needed for portfolio management. The three performance objectives are the following (Cooper et al. 1998): (1) maximizing the total value of the portfolio (e.g., expected NPV, expected return on investment), (2) balancing the portfolio (e.g., across therapeutic disease areas and development stages), and (3) aligning the portfolio with the company's strategy (e.g., resource allocation). The fourth objective, the focus of this chapter, characterizes the analytical process, that is, how the company ensures a credible evaluation of the technical feasibility and commercial potential of a drug development project. The analytical process must be consistent, comprehensive, transparent, and realistic. It must be consistent between projects in the portfolio (interproject) and within a project from evaluation to evaluation (intraproject). The analytical process must be comprehensive in that it includes all important, decision-relevant considerations. Throughout the organization the process must be transparent, clearly communicating assumptions and relationships. Finally, the analytical process should be realistic, that is, the process should embrace uncertainty.

The portfolios of large pharmaceutical companies consist of many drug development projects in different therapeutic disease areas and in different stages of development, with far too many projects competing for scarce resources. For each drug development project in the portfolio, the technical feasibility and commercial potential are evaluated. Given the analytical effort required to evaluate the projects in their portfolios, large pharmaceutical companies often streamline their analytical process (Evans 1996; Stonebraker 2002). Many large pharmaceutical companies have extensive historical databases of project-level information on the time and costs to complete each stage of development as well as the probabilities of technical success for each stage of development. This information is also well documented through industry
benchmark studies and the literature (see, e.g., The Tufts Center for the Study of Drug Development). Instead of using the well-proven probability elicitation methods from decision analysis (Keeney and von Winterfeldt 1991; Merkhofer 1987; Morgan and Henrion 1990; Shephard and Kirkwood 1994; Spetzler and Staël von Holstein 1975; Wallsten and Budescu 1983) to estimate project-specific probabilities of technical success, large pharmaceutical companies often use historical or industry benchmarks for their probabilities of technical success (Stonebraker 2002). Since attrition rates remained relatively stable for a long period of time starting in the 1970s (DiMasi 2001), these benchmarks are good approximations of project-specific probabilities of technical success.

Pharmaceutical companies have difficulty in forecasting credible estimates of the commercial potential of a drug development project. Unlike development costs, time, and probabilities of technical success, there are rarely historical databases and industry benchmark studies that catalog commercial potential, especially for novel therapies for unmet medical needs (Evans 1996; Steven 2002; Stonebraker 2002). Across the many drug development projects in the portfolio of a large pharmaceutical company, there should be a consistent level of analytical rigor when evaluating the commercial potential within a project (intraproject) and between projects in the portfolio (interproject). On the other hand, a one-size-fits-all approach to project evaluation does not allow for more streamlined analysis in areas where this might be suitable. As we search for interesting relationships, the tension between these two views leads us to consider two specific hypotheses about the processes used at LPC. Note that our analysis here only tests the hypotheses with respect to this specific case; testing whether the hypotheses hold more generally would require data from additional cases.

Hypothesis 1. The analytical process used in evaluating the commercial potential of drug development projects differs significantly across the portfolio by therapeutic disease area.

Hypothesis 2. The analytical process used in evaluating the commercial potential of drug development projects differs significantly across the portfolio by development stage.
6.4 Approach

We collected drug development project-level information from LPC's enterprise information system. This project-level information is a cross-section of the drug development projects in LPC's portfolio. The portfolio today for this company is quite different from the portfolio when the information was collected, given the attrition rates of its drug development projects, in-licensing deals, etc.

To make comparisons between different groups of projects, we calculated summary statistics for them. We used the mean, standard deviation (SD), and coefficient of variation (CV) to describe the distribution of the number of commercial scenarios
used to evaluate the commercial potential of each drug development project in the portfolio by therapeutic disease area and development stage. We also used the mean, SD, and CV to describe the distribution of the expected commercial potential of each drug development project in the portfolio by therapeutic disease area and development stage. The CV is the SD expressed as a percent of the mean and is useful for comparing the amount of variation in dissimilar data sets.

An analysis of variance (ANOVA) compared the means of the number of commercial scenarios between therapeutic disease areas and stages of development. ANOVA was also used to compare the means of each project's expected commercial potential by therapeutic disease area and development stage. Finally, ANOVA was used to compare the means of each project's expected commercial potential by the number of commercial scenarios used to evaluate the commercial potential of the drug development project. To accomplish this, we grouped the drug development projects into three categories – Category 1: one commercial scenario (the deterministic case), Category 2: two to six commercial scenarios, and Category 3: seven to nine commercial scenarios.

Correlation analyses were performed on a project-by-project basis among therapeutic disease areas and development stages to examine the associations between the number of commercial scenarios and expected commercial potential. For all correlation analyses, the strength of the association between two variables was assessed by the correlation coefficient (R). P < 0.05 was considered statistically significant.
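For readers who wish to replicate this style of analysis, a minimal Python sketch of the summary statistics and a one-way ANOVA follows; the group values are invented stand-ins for scenario counts in two disease areas, not LPC data.

    import numpy as np
    from scipy import stats

    # Invented numbers of commercial scenarios per project in two areas.
    area_a = np.array([1, 2, 4, 4, 6, 1, 3])
    area_b = np.array([7, 7, 5, 9, 7, 5])

    for name, g in [("A", area_a), ("B", area_b)]:
        mean, sd = g.mean(), g.std(ddof=1)
        print(name, round(mean, 2), round(sd, 2), f"CV = {100 * sd / mean:.0f}%")

    f_stat, p_val = stats.f_oneway(area_a, area_b)   # H0: equal group means
    print(f"F = {f_stat:.2f}, p = {p_val:.4f}")      # p < 0.05 -> significant difference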
6.5 Results

6.5.1 Description of Data

We obtained project-level information from LPC's enterprise information system on the 223 drug development projects in its portfolio. Each drug development project was classified as belonging to one of six therapeutic disease areas (indexed as A, B, C, D, E, and F) and into one of four development stages (Preclinical, Phase 1, Phase 2, and Phase 3). The enterprise information system provided project-level information on costs by development stage, probabilities of technical success for each of the remaining development stages, timing (regional market launch dates), and NPVs by commercial scenario.

When evaluating the commercial potential of each drug development project in its portfolio, LPC considers up to nine possible commercial scenarios. The company defines a commercial scenario by two factors – product profile and market environment. The company describes the product profile using the product attributes of the drug development project – namely, its efficacy, safety/tolerability, dosage and administration, patient convenience, and treatment cost (Tiggemann et al. 1998). For each project, the company uses these attributes to define up to three outcomes for the product profile – an upside outcome, a most-likely outcome,
and a downside outcome. For example, an upside outcome could be described as having better efficacy and safety than the standard of care, with the remaining attributes being equivalent to the standard of care. The most-likely and downside outcomes would be described in a similar fashion. The company describes the market environment using various parameters, such as patient population, percent of patients treated, price per treatment, class share of market, number of products in class, order of market entry in class, product share within class, year of launch, and patent expiry. The company defines up to three outcomes for the market environment – an upside outcome, a most-likely outcome, and a downside outcome. LPC combines these outcomes with the outcomes for the product profile, resulting in the nine potential commercial scenarios shown in Fig. 6.1.

[Fig. 6.1 Commercial scenarios for drug development projects: a tree crossing the three product-profile outcomes (upside, most-likely, downside) with the three market-environment outcomes to give scenarios 1–9, each path ending in a probability p_i and a net present value NPV_i.]

For each scenario, that is, along each path of the tree in Fig. 6.1, there is an NPV and a probability. The NPV is determined in a separate, off-line market model based on settings of the marketing parameters (e.g., patient population, percent of patients treated, etc.), and the probabilities of the upside, most-likely, and downside outcomes for the product profile as well as the market environment are estimated by the drug development project team. The expected commercial potential of a drug development project is the expectation over the NPVs and probabilities of the commercial scenarios considered in Fig. 6.1 and is given by the following equation:

ENPV = Σ_{i=1}^{n} NPV_i p_i
where ENPV is the expected net present value, NPV_i is the net present value of commercial scenario i, p_i is the probability of commercial scenario i, and n is the number of commercial scenarios, ranging from one to nine. Table 6.1 shows the number of drug development projects by therapeutic disease area and stage of development obtained from LPC's enterprise information system.
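The ENPV expectation is a one-line computation; in the sketch below the scenario probabilities and NPVs are invented (and the probabilities sum to one).

    # (p_i, NPV_i in $m) for a hypothetical project with three scenarios.
    scenarios = [(0.20, 950.0), (0.50, 400.0), (0.30, -120.0)]
    enpv = sum(p * npv for p, npv in scenarios)
    print(enpv)   # approx. 354.0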
Table 6.1 Portfolio information obtained from LPC's enterprise information system: number of drug development projects by therapeutic disease area and development stage

                          Disease area
Development stage     A     B     C     D     E     F    Total
Phase 3               5     4     7     9    11    11     47
Phase 2              14     4    12    16    17     7     70
Phase 1               7     2    11    17     7     8     52
Preclinical           5    10    13    12     4    10     54
Total                31    20    43    54    39    36    223
There were six therapeutic disease areas, labeled A, B, C, D, E, and F, and a total of 223 drug development projects in the portfolio. Therapeutic area A has a total of 31 drug development projects, consisting of 5 Phase 3 projects, 14 Phase 2 projects, 7 Phase 1 projects, and 5 preclinical projects. Therapeutic disease area D has the most drug development projects (54), whereas B has the fewest (20). Thus, there are differences in the maturity of the therapeutic disease areas, although these are not drastic. The portfolio also has more drug development projects in Phase 2 (70) than in any other development stage. These numbers indicate that the portfolio is not balanced across therapeutic disease areas or stages of development. The number of drug development projects ranged from 20 to 54 across therapeutic disease areas. While balance across therapeutic disease areas is not a priori required as good business practice, the relative balance may influence the character of the analytic process in each area. Given typical pharmaceutical attrition rates, a balanced portfolio would have the largest number of drug development projects in preclinical, followed by Phase 1, Phase 2, and Phase 3; instead, Phase 2 is the most heavily populated stage. As we consider the analytical processes used, we note that this lack of balance suggests different groups of projects may differ in ways that affect how they should be managed. Table 6.2 shows the number of commercial scenarios used in evaluating the commercial potential of a drug development project by therapeutic disease area and stage of development. For example, of the five Phase 3 projects in therapeutic disease area A, one project used one commercial scenario to evaluate its commercial potential, one project used two scenarios, two projects each used four scenarios, and one project used six scenarios. For the entire portfolio of 223 drug development projects, most projects (70) used only one commercial scenario to evaluate their commercial potential. On the other hand, only six projects used all nine commercial scenarios. The number of commercial scenarios used in the evaluation of commercial potential varied considerably across therapeutic disease areas. Table 6.3 presents the mean, SD, and CV of the number of commercial scenarios per drug development project by therapeutic disease area, together with ANOVA results.
Table 6.2 Portfolio information obtained from LPC's enterprise information system: number of commercial scenarios used in evaluating the commercial potential of a drug development project, by therapeutic disease area and stage of development

                          # of commercial scenarios evaluated
Phase         Area      1    2    3    4    5    6    7    8    9   Total
Phase 3       A         1    1    0    2    0    1    0    0    0       5
              B         3    0    1    0    0    0    0    0    0       4
              C         5    0    1    0    0    1    0    0    0       7
              D         5    3    0    0    0    0    0    0    1       9
              E         8    1    2    0    0    0    0    0    0      11
              F         8    1    2    0    0    0    0    0    0      11
              All      30    6    6    2    0    2    0    0    1      47
Phase 2       A         0    2    2    0    1    1    8    0    0      14
              B         1    0    1    0    2    0    0    0    0       4
              C         5    1    0    0    1    0    5    0    0      12
              D         2    1    0    0    3    1    7    0    2      16
              E         4    1   11    0    0    1    0    0    0      17
              F         4    1    0    0    0    1    0    0    1       7
              All      16    6   14    0    7    4   20    0    3      70
Phase 1       A         0    0    0    0    1    1    5    0    0       7
              B         0    0    0    0    0    0    2    0    0       2
              C         0    0    0    0    5    0    6    0    0      11
              D         0    0    1    0    3    0   13    0    0      17
              E         3    0    0    0    1    3    0    0    0       7
              F         5    0    0    0    1    0    1    0    1       8
              All       8    0    1    0   11    4   27    0    1      52
Preclinical   A         0    1    0    0    0    0    4    0    0       5
              B         1    0    0    0    6    0    3    0    0      10
              C         6    0    0    0    3    0    4    0    0      13
              D         1    1    2    0    1    0    7    0    0      12
              E         0    0    0    0    1    2    1    0    0       4
              F         8    0    0    0    1    0    0    0    1      10
              All      16    2    2    0   12    2   19    0    1      54
Total                  70   14   23    2   30   12   66    0    6     223
The number of commercial scenarios used by therapeutic disease area A to evaluate the commercial potential of its drug development projects was the highest at 5.48 ± 2.05 (mean ± SD), whereas the lowest number of commercial scenarios was 2.36 ± 2.55 (mean ± SD) for therapeutic disease area F. The number of commercial scenarios used for evaluating the commercial potential of drug development projects was significantly different between therapeutic disease areas except for the following comparisons: therapeutic disease areas A and B, A and D, B and C, B and D, and E and F. The expected commercial potential varied considerably across therapeutic disease areas. Table 6.4 presents the mean, SD, and CV of each drug development project's expected commercial potential by therapeutic disease area, together with ANOVA results.
Table 6.3 Statistical analysis of the number of commercial scenarios used to evaluate the commercial potential of drug development projects by therapeutic disease area (P compares therapeutic disease areas)

Therapeutic                                        P
disease area   Mean   SD     CV (%)   N       B      C      D      E       F
A              5.48   2.05    37      31     0.06   0.02   0.65   <0.01   <0.01
B              4.30   2.27    53      20            0.79   0.14    0.01    0.01
C              4.12   2.66    65      43                   0.03    0.02   <0.01
D              5.26   2.53    48      54                          <0.01   <0.01
E              2.85   1.91    67      39                                   0.35
F              2.36   2.55   108      36
All            4.09   2.62    64     223

P compares the mean number of commercial scenarios across therapeutic disease areas using an analysis of variance (ANOVA); N is the number of drug development projects in a therapeutic disease area. SD standard deviation, CV coefficient of variation
Table 6.4 Statistical analysis of the expected commercial potential ($ million NPV) by therapeutic disease area (P compares therapeutic disease areas)

Therapeutic                                          P
disease area   Mean    SD     CV (%)   N       B       C       D       E       F
A              2,413   2,293    95     31     <0.01   <0.01   0.17    <0.01   <0.01
B                429     438   102     20             0.09    <0.01    1.00    0.04
C                830     984   119     43                     <0.01    0.03    0.61
D              1,804   1,750    97     54                             <0.01    0.01
E                430     515   120     39                                      0.01
F                945   1,019   108     36
All            1,199   1,530   128    223

P compares the mean expected commercial potential across therapeutic disease areas using an analysis of variance (ANOVA); N is the number of drug development projects in a therapeutic disease area. SD standard deviation, CV coefficient of variation
The expected commercial potential ($ million NPV) of therapeutic disease area A had the highest value at 2,413 ± 2,293 (mean ± SD), whereas the lowest expected commercial potential ($ million NPV) was 429 ± 438 (mean ± SD) for therapeutic disease area B. At both extremes, we note that the within-area standard deviation of expected commercial potential is about as large as the mean. Therapeutic disease areas A and D offer the most value to the portfolio whereas therapeutic disease areas B and E offer the least value. Furthermore, areas A and D have relatively high proportions of early stage projects. For the company, this provides useful perspective on the nature of their different groups of projects. The differences between areas are useful to us as clues to differences between the analytic processes used in the different areas. Within each therapeutic disease area, there is substantial variability in the expected commercial potential among drug development projects. The expected
Table 6.5 Correlation of the number of commercial scenarios used to evaluate the commercial potential of drug development projects and the expected commercial potential ($ million NPV) by therapeutic disease area

Therapeutic disease area   Correlation      P
A                          0.57           <0.01
B                          0.26            0.26
C                          0.46           <0.01
D                          0.32            0.02
E                          0.52           <0.01
F                          0.08            0.64
Table 6.6 Statistical analysis of the number of commercial scenarios used to evaluate the commercial potential of drug development projects by development stage (P compares development stages)

                                                      P
Development stage   Mean   SD     CV (%)   N      Phase 1   Phase 2   Phase 3
Preclinical         4.44   2.60    59      54      0.02      0.65      <0.01
Phase 1             5.54   2.19    40      52                <0.01     <0.01
Phase 2             4.23   2.57    61      70                          <0.01
Phase 3             1.89   1.64    87      47
All                 4.09   2.62    64     223

P compares the mean number of commercial scenarios across development stages using an analysis of variance (ANOVA); N is the number of drug development projects in a development stage. SD standard deviation, CV coefficient of variation
commercial potential between therapeutic disease areas was significantly different except for the following comparisons: therapeutic disease areas A and D, B and C, B and E, and C and F. Table 6.5 provides correlations of the number of commercial scenarios used to evaluate the commercial potential of drug development projects and the expected commercial potential by therapeutic disease area. As the number of commercial scenarios increased, the expected commercial potential increased, with the strongest correlations in therapeutic disease area A (R = 0.57, P < 0.01) and therapeutic disease area E (R = 0.52, P < 0.01). The correlations for therapeutic disease areas C and D were weaker (though significant), and those for therapeutic disease areas B and F were not significant. The number of commercial scenarios used in the evaluation of commercial potential varied considerably across development stages. Table 6.6 presents the mean, SD, and CV of the number of commercial scenarios per drug development project by stage of development, together with ANOVA results. The number of commercial scenarios used to evaluate the commercial potential of Phase 1 projects was the highest at 5.54 ± 2.19 (mean ± SD), whereas the lowest number of commercial scenarios was 1.89 ± 1.64 (mean ± SD) for Phase 3 projects.
Table 6.7 Statistical analysis of the expected commercial potential ($ million NPV) by development stage (P compares development stages)

                                                        P
Development stage   Mean    SD     CV (%)   N      Phase 1   Phase 2   Phase 3
Preclinical         1,274   1,278   100     54      0.05      0.85      <0.01
Phase 1             1,898   1,936   102     52                0.04      <0.01
Phase 2             1,225   1,561   127     70                          <0.01
Phase 3               299     456   153     47
All                 1,199   1,530   128    223

P compares the mean expected commercial potential across development stages using an analysis of variance (ANOVA); N is the number of drug development projects in a development stage. SD standard deviation, CV coefficient of variation

Table 6.8 Correlation of the number of commercial scenarios used to evaluate the commercial potential of drug development projects and the expected commercial potential ($ million NPV) by development stage

Development stage   Correlation      P
Preclinical         0.31            0.02
Phase 1             0.16            0.25
Phase 2             0.44           <0.01
Phase 3             0.55           <0.01
The number of commercial scenarios used for evaluating the commercial potential of drug development projects was significantly different between development stages except for preclinical and Phase 2. The expected commercial potential varied considerably across stages of development. Table 6.7 presents the mean, SD, and CV of each drug development project's expected commercial potential by development stage, together with ANOVA results. The expected commercial potential ($ million NPV) had the highest value at 1,898 ± 1,936 (mean ± SD) for Phase 1 projects, whereas the lowest expected commercial potential ($ million NPV) was 299 ± 456 (mean ± SD) for Phase 3 projects. The expected commercial potential for drug development projects in Phase 3 was significantly lower than for projects in earlier stages of development. The expected commercial potential between development stages was significantly different except for preclinical and Phase 2. While project phase relates to commercial potential, it also relates to the number of scenarios considered – although not as directly as one might expect. To make sense of this, we consider how phase and commercial potential together relate to the number of scenarios considered. Table 6.8 provides correlations of the number of commercial scenarios used to evaluate the commercial potential of drug development projects and the expected commercial potential by development stage. As the number of commercial scenarios increased, the expected commercial potential increased, with the strongest correlation for drug development projects in Phase 3 (R = 0.55, P < 0.01). The correlations for projects in preclinical (R = 0.31, P = 0.02) and Phase 2 (R = 0.44, P < 0.01) were weaker, and the correlation for Phase 1 projects was not significant.
Table 6.9 Statistical analysis of the expected commercial potential ($ million NPV) by the number of commercial scenarios used to evaluate the commercial potential of drug development projects (P compares categories)

                                                                      P
Commercial-scenario category           Mean    SD     CV (%)   N     2–6     7–9
Category 1 (one scenario)                473     789   167     70    <0.01   <0.01
Category 2 (two to six scenarios)      1,030   1,379   134     81            <0.01
Category 3 (seven to nine scenarios)   2,094   1,789    85     72
All                                    1,199   1,530   128    223

P compares the mean expected commercial potential across the commercial-scenario categories using an analysis of variance (ANOVA); N is the number of drug development projects in a commercial-scenario category. SD standard deviation, CV coefficient of variation
The expected commercial potential varied considerably with the number of commercial scenarios used in evaluating the commercial potential of drug development projects. Table 6.9 presents the mean, SD, and CV of the expected commercial potential of drug development projects across commercial-scenario categories, together with ANOVA results. The expected commercial potential ($ million NPV) had the highest value at 2,094 ± 1,789 (mean ± SD) for projects that were evaluated using seven to nine commercial scenarios, whereas the lowest expected commercial potential ($ million NPV) was 473 ± 789 (mean ± SD) for projects that were evaluated with one commercial scenario. The expected commercial potential was significantly different across all commercial-scenario categories.
6.6 Discussion

There is support for Hypothesis 1. The analytical process – the number of commercial scenarios – used in evaluating the commercial potential of drug development projects is significantly different across the therapeutic disease areas of the portfolio. There are, however, a few exceptions where there is consistency in evaluating the commercial potential across therapeutic disease areas. In fact, there appear to be two different evaluation approaches in use in the portfolio. Therapeutic disease areas A, B, C, and D use significantly more commercial scenarios when evaluating the commercial potential of their drug development projects than therapeutic disease areas E and F (mean of 4.85 versus 2.61, P < 0.01). There is strong support for Hypothesis 2. The analytical process – the number of commercial scenarios – used in evaluating the commercial potential of drug development projects is significantly different across the development stages of the portfolio. There is, however, one exception: there is consistency across drug development projects in preclinical and Phase 2. Drug development projects in
Phase 3 use the fewest commercial scenarios of any stage of development, illustrating that pharmaceutical companies are not using economic evaluations especially for late-stage drug development projects (DiMasi et al. 2001; DiMasi 2001). As drug development projects move closer to market, LPC tends to use fewer commercial scenarios to account for the uncertainty associated with the commercial potential of its projects. This seemingly violates the sunk-cost principle of decision analysis. For example, the organization has already sunk substantial resources into a Phase 3 project and has, in essence, already made the decision to submit it to regulatory agencies. Therefore, only a few commercial scenarios are considered when evaluating the commercial potential of a Phase 3 project. Another possible explanation is that LPC may not be embracing uncertainty, since it considered only a few (at most nine) commercial scenarios when evaluating the commercial potential of its drug development projects. But does the analytical process used by LPC appropriately embrace uncertainty when evaluating the commercial potential of its drug development projects? The commercial potential of a drug development project is uncertain, with a range of possible commercial scenarios. Hence, how many commercial scenarios are necessary to ensure intra-project consistency and inter-project consistency? Using one commercial scenario to represent the commercial potential of a drug development project will result in inter-project and intra-project inconsistencies. For example, how can LPC ensure that the "right" single commercial scenario is consistently picked for a project from evaluation to evaluation and from project to project? The danger is that in many commercial forecasting problems single-point estimates (such as averages or base-case values) are inadequate to reach sound conclusions (Abt et al. 1979). Single-point estimates are fraught with the difficulties of balancing pessimism against optimism. Similar to the flaw of averages (Savage 2002), lumping factors together into three outcomes (downside, most-likely, and upside) can produce significant errors. No single-point estimate – however obtained – can reliably and repeatedly balance risk and opportunity. With a limited number of commercial scenarios, as in our case, it is very difficult to define the right set of scenarios that truly spans the range of interest for the commercial potential. True probabilistic forecasts are the only sensible and fiduciary-responsible way to evaluate projects in a highly uncertain environment. When we embrace uncertainty, our forecasts of the factors influencing commercial potential are no longer expressed with a single commercial scenario or a limited number of them, but rather as ranges or probability distributions.
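The flaw-of-averages point lends itself to a small simulation. The sketch below uses entirely invented numbers and a stylised option-like launch rule of our own (not LPC's process): because the payoff is a nonlinear function of the uncertain input, evaluating the project at the average input – a single base-case scenario – misstates the expected value.

```python
import random

random.seed(1)

LAUNCH_COST = 400.0  # $ million; invented threshold for a stylised example

def payoff(peak_sales):
    # Option-like payoff: the drug is launched only if peak sales
    # cover the launch cost, so the payoff is nonlinear in sales.
    return max(peak_sales - LAUNCH_COST, 0.0)

# Uncertain peak sales with a mean of roughly $400 million.
samples = [random.lognormvariate(5.8, 0.6) for _ in range(100_000)]

base_case = payoff(sum(samples) / len(samples))            # f(E[X])
expected = sum(payoff(s) for s in samples) / len(samples)  # E[f(X)]

print(f"payoff at average sales (base case): {base_case:8.1f}")
print(f"average payoff over the scenarios  : {expected:8.1f}")
# The base case is near zero while the expected payoff is substantial:
# f(E[X]) differs from E[f(X)] whenever f is nonlinear over the range
# of the uncertainty.
```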
6.7 Conclusions

This exercise of analyzing a company's archival portfolio planning data produced specific insights about the company itself, initial insights about portfolios and processes more generally, and lessons about research on the application of
portfolio decision analysis methods. Portfolio decision analysis efforts often provide insight about the nature of the organization's portfolio, such as its overall strength or its balance. But it is also possible to understand how the organization is approaching its portfolio decisions, and to use this information to identify subtle decision process improvements. For example, in this case we found that some projects were evaluated with substantially fewer scenarios than others that had comparable uncertainty. On seeing this pattern, we realized that in one disease area, when compounds advanced through a relatively early milestone, the company switched them from being classified as development projects, which were managed by one group, to commercial projects, which were managed by a group more focused on implementation. A simple recommendation to improve the process without overhauling it would be to delay the point at which these compounds are switched. Treating the company as a representative case for its industry or, in some ways, for a range of industries using portfolio methods, we found several issues that merit further research. In this case, the analytic process and methods used in the portfolio practice of this pharmaceutical company were clearly not consistent across therapeutic disease areas. In some ways, our findings in this case support the theory that the value of analysis behaves like the value of information: more analysis occurs where such analysis is of highest value. In other ways, there seem to be institutional drivers for analytic practice. Such drivers include organizational momentum, as well as structure – the treatment of projects depends on their business classification as well as the numerical profile of the project. While experienced practitioners might find this unsurprising, documenting that this is in fact occurring is worthwhile. Furthermore, the fact that it is occurring suggests that more useful research could be conducted to understand what exactly is occurring and why. Finally, as scholars interested in advancing the field of decision analysis, we found that there is indeed a rich and usable pool of empirical data available from the archives of companies that use portfolio planning methods. The data were, of course, structured for the convenience of the company's planners and not for our convenience in analyzing them. But without a great deal of difficulty, we were able to manipulate them and extract useful material for study. We expect the situation would be the same at comparable organizations. By gathering data from organizations that use portfolio decision analysis, we can find useful patterns about how practice matches needs, and how it might better do so. In this case, we took an exploratory look at available cross-sectional data from a single company. We were already aware of many cases presented as success stories, examples of implementation, and lessons learned. It is a different endeavor to treat these cases as natural experiments in which we can measure various results in order to identify patterns. In the long term, this type of work will expand into a more structured – and practical – theory about the drivers of effective implementation of portfolio decision analysis methods, which are inherently organizational. There are several streams of research in this book: mathematical and applied work to develop new methods (Gustafsson et al. 2011), laboratory-type empirical studies of assessment (Fasolo et al. 2011), empirical tests of specific methodologies (Kiesling et al.
2011; Argyris et al. 2011), and textured reviews of portfolio
management experience (Kloeber 2011). Empirical study, starting with exploratory work as in this chapter, and using statistical methods to move toward formal work, will complement other work in the larger program of building knowledge about portfolio decision analysis practice.
References

Abt R, Borja M, Menke MM, Pezier JP (1979) The dangerous quest for certainty in market forecasting. Long Range Plan 12:52–62
Argyris N, Figueira JR, Morton A (2011) Interactive multicriteria methods in portfolio decision analysis. In: Salo A, Keisler J, Morton A (eds) Advances in portfolio decision analysis. Springer, New York
Bauer HH, Fischer M (2000) Product life cycle patterns for pharmaceuticals and their impact on R&D profitability of late mover products. Int Bus Rev 9:703–725
Beccue P (2001) Choosing a development strategy for a new product at Amgen. Practice abstracts. Interfaces 31(5):62–64
Bodily SE, Allen MS (1999) A dialogue process for choosing value-creating strategies. Interfaces 29(6):16–28
Cantor SB (2004) Clinical applications in the decision analysis literature. Decis Anal 1:23–25
Cooper RG, Edgett SJ, Kleinschmidt EJ (1998) Portfolio management for new products. Addison-Wesley, Reading
Corner JL, Kirkwood CW (1991) Decision analysis applications in the operations research literature, 1970–1989. Oper Res 39:206–219
DiMasi JA (2001) Risks in new drug development: approval success rates for investigational drugs. Clin Pharmacol Ther 69:297–307
DiMasi JA (2002) The value of improving the productivity of the drug development process – faster times and better decisions. Pharmacoeconomics 20(suppl 3):1–10
DiMasi JA, Caglarcan E, Wood-Armany M (2001) Emerging role of pharmacoeconomics in the research and development decision-making process. Pharmacoeconomics 19(7):753–766
DiMasi JA, Grabowski HG (2007) The cost of biopharmaceutical R&D: is biotech different? Manage Decis Econ 28:469–479
Evans P (1996) Streamlining formal portfolio management. Scrip Magazine February:25–28
Fasolo B, Morton A, von Winterfeldt D (2011) Behavioural issues in portfolio decision analysis. In: Salo A, Keisler J, Morton A (eds) Advances in portfolio decision analysis. Springer, New York
Grabowski HG, Vernon JM (1990) A new look at the returns and risks to pharmaceutical R&D. Manage Sci 36:804–821
Grabowski HG, Vernon JM (1994) Returns to R&D on new drug introductions in the 1980s. J Health Econ 13:383–406
Grabowski HG, Vernon JM, DiMasi JA (2002) Returns on research and development for 1990s new drug introductions. Pharmacoeconomics 20(suppl 3):11–29
Gustafsson J, De Reyck B, Degraeve Z, Salo A (2011) Valuation of risky projects and other illiquid investments using portfolio selection models. In: Salo A, Keisler J, Morton A (eds) Advances in portfolio decision analysis. Springer, New York
Hess SW (1993) Swinging on the branch of a tree: project selection applications. Interfaces 23(6):5–12
Johnson E, Petty N (2003) Apimoxin development strategy analysis. Practice abstracts. Interfaces 33(3):57–59
Keefer DL, Kirkwood CW, Corner JL (2004) Perspectives on decision analysis applications. Decis Anal 1:5–24
Keeney RL, von Winterfeldt D (1991) Eliciting probabilities from experts in complex technical problems. IEEE Trans Eng Manage 38:191–201
Kiesling E, Gettinger J, Stummer C, Vetschera R (2011) An experimental comparison of two interactive visualization methods for multi-criteria portfolio selection. In: Salo A, Keisler J, Morton A (eds) Advances in portfolio decision analysis. Springer, New York
Kloeber J (2011) Methods that work in portfolio decision analysis. In: Salo A, Keisler J, Morton A (eds) Advances in portfolio decision analysis. Springer, New York
Loch CH, Bode-Greuel K (2001) Evaluating growth options as sources of value for pharmaceutical research projects. R&D Manage 31(2):231–248
Matheson D, Matheson JE (1998) The smart organization – creating value through strategic R&D. Harvard Business School Press, Boston
Matheson JE, Menke MM (1994) Using decision quality principles to balance your R&D portfolio. Res Technol Manage 37(3):38–43
Merkhofer MW (1987) Quantifying judgmental uncertainty: methodology, experiences, and insights. IEEE Trans Syst Man Cybern 17:741–752
Morgan MG, Henrion M (1990) Uncertainty: a guide to dealing with uncertainty in quantitative risk and policy analysis. Cambridge University Press, Cambridge
Pallay A, Berry SM (1999) A decision analysis for an end of Phase II go/stop decision. Drug Inform J 33:821–833
Peakman T, Bonduelle Y (2000) Steering a course through the technology maze. Drug Discov Today 5(8):337–343
Poland WB (2004) Balancing drug safety and efficacy for a go/no-go decision. Practice abstracts. Interfaces 34(2):113–116
Poland WB, Wada R (2001) Combining drug-disease and economic modeling to inform drug development decisions. Drug Discov Today 6:1165–1170
Savage SL (2002) The flaw of averages – decisions based on average numbers are wrong on average. Harvard Bus Rev 80(November):20–21
Sharpe P, Keelin T (1998) How SmithKline Beecham makes better resource-allocation decisions. Harvard Bus Rev 76(March–April):45–57
Shephard GG, Kirkwood CW (1994) Managing the judgmental probability elicitation process: a case study of analyst/manager interaction. IEEE Trans Eng Manage 41:414–425
Spetzler CS, Staël von Holstein C-AS (1975) Probability encoding in decision analysis. Manage Sci 22:340–352
Spradlin CT, Kutoloski DM (1999) Action-oriented portfolio management. Res Technol Manage 42(2):26–32
Steven SE (2002) Focused portfolio measures to support decision making throughout the pipeline. Drug Inform J 36:623–630
Stonebraker JS (2002) How Bayer makes decisions to develop new drugs. Interfaces 32(6):77–90
Thomas H (1985) Decision analysis and strategic management, research and development: a comparison between applications in electronics and ethical pharmaceuticals. R&D Manage 15(1):3–21
Tiggemann RF, Dworaczyk DA, Sabel H (1998) Project portfolio management: a powerful strategic weapon in pharmaceutical drug development. Drug Inform J 32:813–824
Viswanathan V, Bayney R (2004) Decision analysis to evaluate proof-of-principle trial design for a new drug candidate. Practice abstracts. Interfaces 34(3):206–207
Wallsten TS, Budescu DV (1983) Encoding subjective probabilities: a psychological and psychometric review. Manage Sci 29:151–173
Chapter 7
Behavioural Issues in Portfolio Decision Analysis Barbara Fasolo, Alec Morton, and Detlof von Winterfeldt
Abstract The aim of this chapter is to review some behavioural issues in portfolio choice and resource allocation decisions, with a focus on their relevance to Portfolio Decision Analysis. We survey some of the behavioural literature on the most common heuristics and biases that arise in and can interfere with resource allocation processes. The common idea behind this behavioural literature is that of cognitive or motivational failure as an explanation for violations of normative models. We then reflect on the relevance of this literature by drawing on the authors' personal experiences as decision makers or decision analysts in real-world resource allocation settings. We argue that justifiability can also be a reason for normative violations. We conclude by discussing ways in which an analyst might approach debiasing.
7.1 Introduction

Over the past several years much work has been done on developing models and procedures to help decision makers (DMs) tackle portfolio choice and resource allocation problems. The theory, models and tools for cost-effective solutions to resource allocation are by now well developed (e.g. Phillips and Bana e Costa 2007; Kleinmuntz et al. 2007). There have also been many successful applications of these tools and some unsuccessful ones (for an interesting story of failure, see Jenni et al. 1995). Understanding how people naturally make decisions seems important if we are to avoid (or at least minimise the frequency of) failures in the future. Yet surprisingly little work has been done on understanding how people naturally and intuitively approach the task of allocating resources, and the biases that
[email protected]
A. Salo et al. (eds.), Portfolio Decision Analysis Improved Methods for Resource Allocation, International Series in Operations Research & Management Science 162, DOI 10.1007/978-1-4419-9943-6 7, © Springer Science+Business Media, LLC 2011
149
150
B. Fasolo et al.
result from the institutional, legal, or political environment in which the resource allocation takes place. This contrasts with the more classical single choice decision analysis paradigm (in which a single item is selected from a set), in which a considerable body of behavioural research exists (von Winterfeldt and Edwards 1986; von Winterfeldt 1999; Morton and Fasolo 2009). A central idea in behavioural decision making is that most unaided choices are based on heuristics, which can then lead to systematic biases when decision behaviour is compared with normative models or principles (Tversky and Kahneman 1974). We distinguish individual biases, which are due to cognitive or motivational factors operating at the level of individual participants in a resource allocation process, from organisational biases, which arise from the political dynamics of organisational decision making. Organisational and individual biases are often interlinked (Coff and Laverty 2001) as we point out in Section 7.5 below. Our chapter proceeds as follows. We outline (in Section 7.2) a formal framework for Portfolio Decision Analysis which provides a lens through which unaided resource allocation and portfolio decisions can be interpreted. A systematic departure from this formal framework could then be considered a “bias.” In Section 7.3, we describe relevant individual biases from laboratory work, and in Section 7.4, we describe experiences of organisational biases in decision analysis and decision making practice. In Section 7.5, we discuss the relationship between the formal, individual and organisational perspectives of the previous three sections. Section 7.6 concludes by discussing what can be done to control such biases.
7.2 Formal Framework

The models which underpin Portfolio Decision Analysis take a variety of forms, but a relatively generic model might be as follows. There are n projects which can be undertaken, with index set I = {1, …, i, …, n}. x_i is a decision variable which takes the value 1 if project i is to be done and 0 if it is not, and x = (x_1, …, x_i, …, x_n) is a decision vector, representing a portfolio. Portfolios are mapped by some function f(·) into a space of consequences (e.g. lives saved, illness averted, opportunities for consumption of goods and services provided), which may be multidimensional or random, and these consequences are in turn evaluated by a function v(·). v(·) will be referred to as a value function, but could be either a risky utility, elicited using questions about indifferences between lotteries, or a riskless value, elicited through questions about preference intensity. Certain axiom systems capture cancellation-like conditions under which consequence or value functions are additive over projects (Fishburn 1992), but these axioms have little normative appeal in general and a priori, and may plausibly not hold for real-world problems, where the benefits or costs of a collection of projects may not be the same as the sum of the benefits or costs if the individual projects were done on their own (Keisler 2005).
A feature of the situation facing the DM is that not all projects can be done: x must be selected from some feasible set X, hopefully "optimally" in the sense that the value of the consequences of the selected x* is (non-strictly) greater than the value of the consequences of any other feasible point in X (v(f(x*)) ≥ v(f(x)) for all x in X). Often X will be defined by a budget constraint, which is to say that associated with each portfolio there is a (single- or multi-dimensional) cost function c(·) and X is defined by c(x) ≤ B, where B is some budget. In such cases where projects are associated with particular organisational subunits (i.e. I can be partitioned into subsets of projects I_j which "belong" to particular subunits j), a choice of x will have the interpretation of a resource allocation among competing departments (as, e.g. in Phillips and Bana e Costa 2007). An alternative way to think about resource allocation is to suppose that the DM is faced with allocating a limited amount of money or other resource B across m organisational units j. Let an allocation of B be denoted by a decision vector y = (y_1, …, y_j, …, y_m) from a set Y. Define m "production functions" f_j(·) over each y_j representing the benefits that the organisational unit produces with its allocation, and let v(·) represent a value function over these different products. Depending on the current allocation, the marginal value generated by a unit of spend may vary greatly across the organisation, and hence one way to move towards a more rational allocation is to redeploy resources from areas of low cost-effectiveness to areas of higher cost-effectiveness (if production and value functions are concave, then a series of incremental changes guided by this principle will eventually lead to optimality). In this alternative model, the "projects" which the organisational subunits undertake are, so to speak, implicit, and a strong separability assumption is made about the products of different organisational units: thus this model may be more appropriate in environments where coordination between departments is not particularly important and the centre takes a relatively "hands-off" role with respect to operations (e.g. a university), whereas the original model may be more appropriate in environments where different departments must act in concert to deliver organisational goals (e.g. different military services with a common mission) and so the centre wishes to have a high degree of visibility on what departments are actually spending their money and deploying resources on. These models – generic as they are – are useful in highlighting how DMs could go astray. A DM could erroneously assess f(·) or f_j(·), e.g. optimistically over-assessing the benefits which flow from a particular project or from allocating funds to a particular department, focussing on a single dimension of benefits where multiple benefits are relevant, or failing to adequately account for uncertainty about whether benefits will in fact be realised. A related error might be to mis-specify X or Y – e.g. in the fixed budget case a DM might underestimate c(·) or overestimate B. An alternative (but subtler) error might lie in the specification of v(·). Of course, the DM is in some sense the canonical authority on v(·), as it is a representation of his or her preferences, but some choices of v(·) might look odd – e.g.
if v(·) is a von Neumann–Morgenstern utility function, the DM may be excessively risk averse considering the scale of possible consequences (Kahneman and Lovallo 1993). Finally, a DM might make entirely adequate assessments of all functions and parameters, but fail to find the optimal x or y through a defective search strategy. Of course in naturalistic
settings where no decision analysis model is being used it may be impossible to definitively identify exactly where and how dysfunction enters into decision making, but having such a frame may be a useful diagnostic aid.
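To connect this framework to computation: under the simplifying assumptions that value is additive over projects (which, as noted above, need not hold) and that X is defined by a single budget constraint, finding x* is a 0–1 knapsack problem. A minimal brute-force sketch, with invented project values and costs:

```python
from itertools import product

# Hypothetical projects: name -> (value v_i, cost c_i); additivity of
# value and cost over projects is assumed for this sketch.
projects = {"P1": (10, 4), "P2": (7, 3), "P3": (8, 5), "P4": (4, 2)}
BUDGET = 9

best_x, best_value = None, float("-inf")
# Enumerate every decision vector x in {0,1}^n and keep the feasible
# portfolio of highest value, so that v(f(x*)) >= v(f(x)) for all x in X.
for x in product([0, 1], repeat=len(projects)):
    value = sum(xi * v for xi, (v, _) in zip(x, projects.values()))
    cost = sum(xi * c for xi, (_, c) in zip(x, projects.values()))
    if cost <= BUDGET and value > best_value:
        best_x, best_value = x, value

chosen = [name for name, xi in zip(projects, best_x) if xi]
print(f"optimal portfolio: {chosen}, value = {best_value}")  # P1, P2, P4
```

For realistic n, exhaustive enumeration of the 2^n portfolios gives way to integer programming, but the logic is the same: search the feasible set X for the x that maximises v(f(x)).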
7.3 Individual Biases: Cognitive or Motivational Failure as Meta-Explanation

In this section, we review the main experimental evidence on individual heuristics and biases in the behavioural literature which we see as relevant to Portfolio Decision Analysis. Here, normative violations are explained by individual cognitive or motivational failure. We group studies into four categories: those dealing with suboptimisation, with partition dependence, with scope insensitivity, and with various forms of status quo bias. We focus here on heuristics and biases in portfolio decisions and resource allocation; readers interested in knowing more about the general topic of heuristics and biases are referred to Gilovich et al. (2002).
7.3.1 Suboptimisation

The first experimental approaches to "intuitive" resource allocation consisted in assigning very small resource allocation problems to DMs and measuring how far the participants' performance was from the optimal solution obtained by Linear Programming or Integer Programming. For instance, Langholtz et al. (2002) asked Coast Guard personnel to schedule two helicopters or boats to maximise the number of hours they could patrol an area. Results showed that naïve DMs can achieve at least 90% of the optimal solution. However, these studies disregarded entirely the strategies which subjects used to approach the task. Attention to process came with research which used verbal protocols (e.g. Langholtz et al. 1997). In this study, participants were given a fixed budget (e.g. $75 and 15 hours per week) and asked to choose their weekly combination of meals at home (costing $2.50 and taking 1 hour of cooking and preparation time) and meals in a restaurant (costing $5.00 and taking 30 minutes of cooking and preparation time). The goal was to use all the resources, within a specified daily constraint (minimum two meals, maximum four). The results showed that college students not trained in OR techniques could find an allocation close to the optimal Integer Programming solution, but on average 5–10% of resources were lost. A few participants tended to compute daily average hours; others consumed resources and checked as they went along the task. Strictly speaking, such studies focus on heuristics rather than biases (and indeed could well be considered studies of problem solving rather than of decision making as such). However, this study was among the first to
mention a bias towards equal distribution of resources, or “equality.” Such heuristics have been dubbed “maximum entropy heuristics,” and the associated bias, “partition dependence.”
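Using the task parameters reported above, a small search confirms that the weekly resources can be exhausted exactly. This is a sketch under our own reading of the task: the daily meal limits are aggregated to weekly bounds, and leftover time is weighted against leftover money at an arbitrary rate.

```python
# Task parameters as reported: $75 and 15 hours per week; a home meal
# costs $2.50 and 1 hour, a restaurant meal $5.00 and 30 minutes;
# 2-4 meals per day over 7 days, aggregated here to 14-28 meals a week.
BUDGET, HOURS = 75.0, 15.0
HOME_COST, HOME_TIME = 2.50, 1.0
REST_COST, REST_TIME = 5.00, 0.5

best = None
for home in range(29):
    for rest in range(29 - home):
        if not 14 <= home + rest <= 28:
            continue
        cost = home * HOME_COST + rest * REST_COST
        time = home * HOME_TIME + rest * REST_TIME
        if cost > BUDGET or time > HOURS:
            continue
        # Minimise leftover resources (time weighted at $10/hour,
        # an arbitrary choice for this illustration).
        leftover = (BUDGET - cost) + 10.0 * (HOURS - time)
        if best is None or leftover < best[0]:
            best = (leftover, home, rest)

_, home, rest = best
print(f"{home} home meals + {rest} restaurant meals")
```

The search returns 10 home meals and 10 restaurant meals, which spend exactly $75 and 15 hours, leaving nothing on the table – the kind of exact allocation against which the 5–10% resource loss of the participants can be judged.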
7.3.2 Partition Dependence

A recent direct empirical demonstration of maximum entropy heuristics as a powerful cognitive tendency, and of partition dependence as a resulting bias affecting resource allocations, comes from Fox et al. (2005). By "partition dependence," these authors mean a specific kind of framing effect: the sensitivity of resource allocations to the way in which projects are grouped. Bardolet et al. (2011) demonstrate the effect in a study where MBA students took the role of the manager in charge of capital allocation in a large corporation with three main product divisions: Home Care, Beauty Care, and Health Care. Each has a different number of geographical business units (Home Care in the USA, Europe and Latin America; Beauty Care in the USA and Europe; and Health Care only in the USA). Participants were provided with a brief description of the company's divisions and business units and data about past and future performance, and were randomly assigned to one of two conditions. In the first group, participants were told that the firm applies centralised decision making, meaning that participants were to allocate the available capital among the six geographic units. The second group was instead told that the firm applies decentralised decision making, implying that participants had to allocate the available capital among the three product divisions (Home Care, Beauty Care, and Health Care). The study showed dramatic partition dependence. The Health Care division was allocated 20% of the capital (roughly 100% divided by the six subdivisions) in the first (centralised) condition and 33% of the capital (100% divided by three divisions) in the second, although all participants saw the same information. The dependence of resource allocation on the partition should alert DMs and facilitators of resource allocation processes to the importance of reflecting on the appropriateness of the partition applied to the problem. DMs in real settings (where decisions have significant consequences) are not immune to partition dependence, and its manifestation might be overinvestment in poorly performing divisions and underinvestment in well performing divisions (Scharfstein and Stein 2000). Partition dependence is, however, less likely to occur when DMs have high expertise. For instance, Huberman and Jiang (2006) studied how over 500,000 participants chose to allocate their contributions (up to 25% of their income) to different 401(k) plans. The choice was overwhelming, with up to 59 plans to allocate funds to. Due to the overabundance of choice, participants in fact focussed on three or four plans and then allocated the resources equally across this small subset. However, the tendency to allocate resources equally decreased when participants had higher expertise.
Table 7.1 An illustration of the embedding effect and scope insensitivity (adapted from Kahneman and Knetsch 1992)

What is the most you would be willing to pay each year...        Sample 1   Sample 2   Sample 3
To go into a special fund to improve environmental services
  [defined to include disaster preparedness]?                     $135.91
To improve disaster preparedness [defined to include
  availability of rescue equipment and personnel]?                 $29.06    $151.60
To improve availability of rescue equipment and personnel?         $14.12     $74.65    $122.64
The partition dependence bias is related to the more general class of value-tree-induced weighting biases such as the splitting bias (Jacobi and Hobbs 2007), whereby elicitees start with an equal allocation of weight among attributes in each tree partition and allocate relatively more weight to those higher-level attributes having the most sub-attributes. The maximum entropy heuristic which leads to partition dependence is closely related to a heuristic referred to in the behavioural literature as "naïve diversification" (Benartzi and Thaler 2001). In their studies, they found that, given the task of allocating their funds to several personal saving options, people simply tend to allocate 1/n of their funds to each of n investment options regardless of the nature of these investments.
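The 1/n rule behind both partition dependence and naïve diversification is easy to state computationally. A toy sketch (the division names follow the Bardolet et al. study; the pure 1/n rule is an idealisation, since participants' actual allocations were only roughly equal):

```python
def naive_allocation(units):
    # Maximum entropy heuristic: spread the budget equally over
    # whatever units the problem frame happens to present.
    return {u: 1 / len(units) for u in units}

centralised = ["HomeCare-US", "HomeCare-EU", "HomeCare-LA",
               "BeautyCare-US", "BeautyCare-EU", "HealthCare-US"]
decentralised = ["HomeCare", "BeautyCare", "HealthCare"]

share_six = naive_allocation(centralised)["HealthCare-US"]
share_three = naive_allocation(decentralised)["HealthCare"]
# Health Care's share depends only on how the units are partitioned:
print(f"six units: {share_six:.1%}, three divisions: {share_three:.1%}")
```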
7.3.3 Scope Insensitivity

Normally, one would assume that projects of greater scope carry larger benefits and value than those with a very small scope. Often, however, DMs' intuitive value judgments do not display this scope sensitivity – a phenomenon which was first found and researched in relation to economic willingness to pay (Kahneman and Knetsch 1992). An interesting lesson from this classic study was the importance of the "embedding effect" – people's willingness to pay does not seem to vary linearly with the size of the project evaluated. This is particularly the case when evaluations of projects of different size are made by different people. For example, in Kahneman and Knetsch's study (see Table 7.1), Sample 1 was willing on average to pay $135.91 each year into a special fund to improve environmental services, and Sample 3 was willing to pay a similar amount of money for a much smaller project (aimed just at improving the availability of rescue equipment and personnel). However, Sample 1 gave more scope-sensitive answers when the varying scope of the projects assessed was made explicit. Scope insensitivity could be due to a cognitive limitation, including the inability to visualise large benefits, or to aggregate appropriately the value of several
constituent projects. A different explanation offered by Kahneman and Knetsch is affect. The mere act of making a donation to improve the environment makes us feel good (the "moral satisfaction" or "warm glow" hypothesis). While affect is crucial to good judgment, this research shows that affect (and therefore motivation to donate) does not scale up (we do not feel 100 times more satisfied when donating to save 100 birds rather than 1 bird). More recently, this explanation has been further refined into the over-reliance on feelings rather than calculation when making the assessment (Hsee and Rottenstreich 2004). In their studies, participants' willingness to pay for quantities of different scope (e.g. save one or four pandas) was examined in two different conditions: affect-poor (pandas were presented as large black dots) or affect-rich (colour copies of cute pandas were given). The interesting result of this panda study was that there is no scope insensitivity in the affect-poor scenario (resulting in a strictly increasing value function). On the contrary, in the affect-rich scenario the value function had an initial rise and then was flat, insensitive to further increases in scope.
7.3.4 Status Quo Bias

The status quo bias is the strong tendency to persist with current projects or devote more resources to them than warranted, even if the projects are not performing up to standards or are worse than competing ones (Samuelson and Zeckhauser 1988). This bias has been explained by a host of important psychological frailties, including loss aversion (quitting implies the certain loss of the previous investment, while the status quo implies a small chance of a large gain), or the inability to compute opportunity costs (Northcraft and Neale 1986). The role of the status quo and reference dependence is well documented and modelled within the framework of Prospect Theory (Bromiley 2009). DMs find it hard to see the status quo as an action that consumes resources which could be used elsewhere, unless the opportunity costs are explicitly highlighted. The bias towards projects already started is particularly strong when these projects are close to completion. A demonstration is offered by Conlon and Garland (1993), who asked participants to imagine being the president of a large company faced with the decision of how to allocate the $10 million budgeted for sonar scrambling material for submarines. Participants saw different sunk cost information (one million vs. five million vs. nine million already spent), and different information regarding the time to the project's completion (the engineering department reports that the project is 10% vs. 50% vs. 90% complete). While a lot of behavioural research has highlighted the prevalence of the sunk cost effect, in this experiment expected completion time, not sunk cost, determined the probability that the expenditure of the next million was authorised for a project already started. The closer the expected completion date, the more willing participants were to invest the last million.
7.4 Organisational Biases: Justifiability as Meta-Explanation

The literature reviewed so far focussed on individual-level psychological biases, which have been studied in the laboratory and manifest themselves because of cognitive or motivational limitations of single individuals. However, decisions in organisations, even where there is a centralised decision maker, are typically made through the interactions of multiple players. The dynamics of these interactions – particularly the need to argue for, defend and justify decisions – lead to distinctive "organisational" biases in decision making. Organisational biases are acknowledged as important (e.g. Coff and Laverty 2001) but have been less on the radar of empirical researchers. Here, we aim to fill this gap by turning to the organisational heuristics and biases we see as most relevant to resource allocation practice. As evidence from the laboratory is rather hard to come by, we rely primarily on case experiences, partly from the authors' own practice and partly from the small available literature. While we cannot exclude that organisational biases might arise from organisational weakness or laziness, we argue that most of the organisational heuristics and biases we review can arise from a personal need of the allocator to justify the allocation decision to others. Justifiability is a very important "meta-objective" that decision makers juggle against other process objectives, such as the need for accuracy (Payne et al. 1993). Psychological research shows that convincing rationales are attractive, as they allow decisions to be easily explained to others and DMs can feel confident in their decision (Shafir et al. 1993). Further, good reasons are effective "weapons of influence" and persuasion (Cialdini 2007). For example, in a recent salary increase allocation, one of us asked his program leaders and department heads to propose salary increases for the most exceptional candidates only. Altogether about 15% of the staff were recommended under this rule. It turned out that 10% of these had recently been assigned additional duty or an increase in grade level of employment without any additional compensation. It was thus easy to accept the salary increase proposals for these cases and put the other ones on hold, even though it may not have been a very cost-effective allocation. But this line of reasoning leads to the question: what constitutes a convincing justification? In this section, we outline five such arguments, based on equalisation, anchoring, minimum requirements, appeal to champions, and demonstrable benefits.
7.4.1 Equalisation Argument

When resources are allocated over a given set of n programs or activities and when there is little rationale for a differential allocation, the initial starting point (as discussed above when describing the maximum entropy heuristic) is often to allocate the resources equally by giving each program or activity a share of 1/n of the total resources. This is an appealing argument on a "fairness" basis, as it allows
the DM to justify the decision as "fair" to the recipients of the resources, especially when no individual unit can claim superiority in terms of effectiveness in achieving organisational objectives. When one of the authors was appointed as director of an international research organisation, this was exactly the situation he faced. There were 14 research programs, some small, some large, some, on the surface, more productive than others, some in need of change and repair, yet all received the same amount of base funding from a central pool. It was unclear how this equal funding rule had been established, except that it was considered fair and acceptable by the previous director and the program leaders. In the absence of sound cost-effectiveness arguments, there was much resistance to changing this allocation scheme. Of course, this scheme also led to some perverse incentives. For example, it was in the interest of a program leader to split his or her program, thus obtaining a larger share of the overall allocation (2/(n+1) instead of 1/n, where n is the number of programs; see the derivation at the end of this subsection). This did in fact happen for a few years, leading to a proliferation of programs.

Hierarchical organisations often use an equalising approach at the higher levels of the organisation, with less tendency to do so at lower levels, where performance is relatively easy to measure and quantify. When developing a resource allocation model and process for the US Army Corps of Engineers' Construction Engineering Research Laboratory, Edwards and von Winterfeldt (in Edwards et al. 1988) found that division directors differentiated quite strongly between projects within their division, but generally felt that a close-to-equal allocation across divisions was appropriate. Their preference was to receive a similar share and then allocate this share within their divisions according to their own rules. Here too, it was easier to provide cost-effectiveness arguments for an unequal allocation across projects within a division and much harder to find these arguments for allocations across divisions. The solution of comparing all projects on the same level, ignoring division boundaries, was initially resisted and only overcome by a strong intervention of the leadership of the institution.

Anyone who has been involved in salary-setting processes has seen a related version of this argument. Division leaders often propose to give everyone the same raise, starting with a "raise pool" allocated from the top. One of the authors served as a department chair or dean of faculty at several institutions and as such was involved as a DM in several processes of assigning salary raises to faculty members. Even though he tried to avoid an equal allocation of raises, he was always cognizant of the fact that the tendency was to provide a very flat distribution of raises, e.g. 3–4% across the board.
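The splitting incentive noted above follows directly from the equal-division rule. A leader of one of n programs receives a share of 1/n; after splitting that program in two, the two parts together receive 2/(n+1), and

\[ \frac{2}{n+1} > \frac{1}{n} \iff 2n > n+1 \iff n > 1, \]

so splitting strictly increases the combined share whenever there is more than one program.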
7.4.2 Anchoring Argument

Given the need to justify resource allocations, it is only natural that many resource allocations are anchored on last year's or other historical allocations (justified and approved at the time!) and adjusted in terms of percentages. In a major
multi-billion resource allocation project for the US Department of Energy (DOE), the allocation was to numerous environmental restoration projects across 12 sites owned and operated by the DOE. These sites had various environmental restoration projects, mostly to remove and isolate radioactive materials from nuclear weapons production. While the overall funding from the DOE headquarters was clearly aimed at these environmental restoration projects, the funds were actually allocated in bulk to the individual sites. Site managers had a strong incentive to maintain their share of environmental restoration funds and argued for an annual increase to make up for inflation and other factors. While the prioritisation system would have allocated the funds quite differently, it could not compete with the simple rule of giving each site a similar increase in funding and letting them allocate their funds to the restoration projects on their own site. In the end, the formal allocation model proposed by the DOE was discarded and the previous system of making incremental adjustments was continued.
7.4.3 Minimum Requirement Argument

A different case is the minimum requirement argument, which anchors allocations on very extreme, often cost-ineffective or impossible allocations. Efforts to implement a "zero-based budgeting" system have often failed because the management units that were recipients of the budgets argued successfully that they needed a minimum budget to survive. This was also the case in a budget allocation effort for the US National Nuclear Security Administration (NNSA) that one of us developed in 2002. This effort was to prioritise the budget of the NNSA for its program offices at multiple sites in the USA. As with the environmental restoration prioritisation system discussed above, the program offices at each site argued that they must have a minimum staff in several areas (legal, human resources, etc.) to conduct their support functions and that only activities above the minimum staffing level could be subject to prioritisation. A similar argument was made by program managers involved in the US DOE's waste management activities. In particular, program managers argued that without a minimum staff they could not maintain the ability to function and keep the waste management operations going. A variant on this argument is to base the rationale for protecting expenditure on legal grounds. In many cases, the organisations that want to prioritise projects and activities also have legal agreements to meet certain requirements. In most contexts one would give high priority to these projects and activities or even force them to be at the top of the list. However, in some cases, there simply is not enough money to meet all the legal requirements simultaneously. This was the case in both of the DOE prioritisations referenced above – for environmental restoration and for waste management. In both cases, the DOE sites had entered into so-called Tri-Party agreements, which involved the site managers, the environmental regulators of the state they were located in, and the Indian Tribes and Nations (many of the sites were located on former Indian territory). These Tri-Party
agreements often promised levels of performance and goals that were simply not achievable in a cost-effective way. Typically, a strict interpretation of a Tri-Party agreement would have required three to five times the overall budget that the DOE had available in any given year. When trying to prioritise among the environmental restoration and waste management projects and activities, lawyers representing the stakeholders in the Tri-Party agreements argued that there was no need to prioritise at all – only to fulfil the legal requirements. This in turn led to a stalemate, the eventual cancellation of the prioritisation activities, and a return to the old budgeting practices dominated by equalising and anchoring heuristics.
7.4.4 Champion Argument

It helps to have a powerful champion for a project or activity when it comes to resource allocation, and this fact can itself shape how decisions are made. In fact, many experienced facilitators of resource allocation processes have some kind of placeholder or euphemistic dimension for the strength of organisational support for a particular project or activity. Edwards and von Winterfeldt (in Edwards et al. 1988) encountered this bias head-on when prioritising projects for the Construction Engineering Research Laboratory (CERL) of the US Corps of Engineers. When eliciting the prioritisation criteria from program managers, the importance of considering the level of support for a particular project was stated repeatedly. This sentiment was so strong that they built it into the prioritisation system as “the number of stars of the officer supporting the project.” Championing has been found to have some positive effect on the performance of projects requiring creativity and innovation (Howell and Shea 2001), but it can be risky when used to make decisions. At best, it leads to double counting the benefits that a project already has. At worst, it introduces a highly subjective factor into the prioritisation calculus, as lamented by Sharpe and Keelin (1998) in their account of SmithKline Beecham’s old resource allocation model. In our CERL example, a project to design better field toilets for high-ranking officers received a high priority, most likely for these reasons. On the positive side, this was a very successful resource allocation model which remained in use for over 15 years, possibly because of this kind of flexibility.
7.4.5 Demonstrable Benefits Argument

Some projects and activities have benefits that can be more easily argued for and demonstrated, and they tend to receive higher scores in resource allocation systems. Projects with short-term and certain returns have more easily demonstrated benefits than those with very long-term and uncertain returns. This argument reflects the three organisational biases identified in Coff and Laverty (2001), namely
the bias towards tangible assets, the bias towards certainty, and the bias towards sequential processing and short time horizons arising from the annual nature of the investment cycle. In one case, we built a resource allocation system to prioritise health and safety projects and activities of the Health and Safety Division of the Los Alamos National Laboratory (Otway et al. 1992). There were several projects that had immediate and tangible benefits – e.g. buying another ambulance. The proponents of this project argued that it would save at least one life every 5 years. On the other hand, projects that removed radioactive materials with similar benefits (elimination of one cancer fatality per 5 years) were scored lower, because the benefits were both uncertain and distant in time.
Another domain where the demonstrable benefits argument features prominently is healthcare resource allocation. A frequent complaint heard from public health professionals is that highly beneficial preventive activities (e.g. smoking cessation, subsidy of exercise programmes) are often given second place to cost-ineffective acute treatments. For example, in the case of stroke, relatively simple calculations show the high relative cost-effectiveness of preventive activity as opposed to highly visible treatments such as thrombolysis, which have traditionally been the policy priority (see Airoldi et al. 2008; Airoldi and Morton 2011). It is plausible that this bias emerges because of a genuine assessment difficulty: the causal chain between (say) promoting reduced salt consumption and avoided strokes has several links and operates over a period of several years, and the individuals who avoid strokes by reducing salt consumption cannot be identified ex ante.
7.5 Discussion: Formal, Individual and Organisational Perspectives

In this section, we revisit the relationship between the three previous sections. We have outlined a formal model, which can be used to illustrate how portfolio decision making can go astray, some behavioural literature based on laboratory evidence, and observations from practical experience. We suggest possible relationships between these three perspectives, and invite further research to address each relationship in detail.
7.5.1 Formal Perspective Versus Individual Perspective

The formal structure of the underpinning model provides a helpful structure for interpreting the individual-level psychological phenomena and decision biases which we have just reviewed. Suboptimisation, for instance, represents a relatively unsubtle cognitive failure to correctly calculate x∗. This does not by itself seem enormously surprising: presented with what is essentially a mathematical problem
in which they have no specific training, most people will fail to identify the correct solution (although they can, by trial and error as well as by using shortcuts, come quite close). The scope insensitivity experiments show a failure to consistently assess v(·) across the whole spectrum of projects of different scope, but an interesting question which naturally follows is why DMs do not seem to have similar difficulty consistently assessing the cost function c(·). The Bardolet et al. example of partition dependence relates to the partitioning of the set of projects I into subsets I_j, but it is rather hard to interpret within our framework, as participants were simply asked to make a resource allocation, without any explicit assessment of values or consequences. However, one might infer from their experiment that values, consequences, or (most probably) both exhibit some sort of flattening across a partition. The $10 million budget allocation task on which the Conlon and Garland experiment is based also involves a direct allocation, and so it is hard from a decision analysis standpoint to tease out exactly what is happening, but the great influence of a close-to-completion timeline suggests that people tend to consider being “close to deadline” a benefit in itself, so it might be a failure to properly assess v(·).
7.5.2 Formal Perspective Versus Organisational Perspective

Just as for the individual biases, interpreting the organisational biases through the lens of formal principles leads to interesting reflections. The Minimum Requirement argument could be interpreted within the formal model as the placing of excessive restrictions on the feasible set X or Y. But of course it might happen that one is given a task which is in fact impossible as defined (e.g. “set an exam paper such that all marks will be higher than the average”), as in the DOE example cited above, and the only possible response other than failure is to seek out the significant stakeholders and persuade them to accept a task redefinition. The Champion and Demonstrable Benefits arguments are both clearly political, in the sense that decisions are judged not only on the basis of their direct consequences but also on the prospects for decision implementability and the fall-out of the decision for the DM personally. All DMs take cognizance of their own personal interests at some level at least some of the time, of course, and in some settings even the most idealistic DM would be well advised to watch his back, so it hardly seems fair to deem DMs who pay attention to such issues “irrational.” Nevertheless, we would regard organisations where these concerns dominate decision making and eclipse all other concerns as seriously dysfunctional.
7.5.3 Individual Perspective Versus Organisational Perspective

We see a tight connection between the organisational-arguments and individual-biases perspectives. Each of the organisational arguments we presented is more likely to “work” (i.e. is appealing and can inform the resource allocation process)
the more it can reflect the thinking process of the recipient of the argument and allocation, where this process can be biased in the ways indicated. Consider the case of the equalising argument. Offering an equal allocation could well be favoured by allocators because, as shown by a wealth of social psychological studies, allocations are instinctively evaluated by recipients on the basis of the fairness of the resulting distribution (for one of the first analyses, see Walster et al. 1978) as well as of the fairness of the process used when making allocation decisions. Consistency across people was found to be the most important criterion for discerning a fair allocation procedure, whereas consistency across time was felt to be less important (Barrett-Howard and Tyler 1986). This might then explain why DMs are often concerned about the fairness of the allocation, and may resort to an equalisation argument. This tendency towards equal allocations is a deeply ingrained cognitive strategy, which is employed whenever trying to allocate a scarce resource across a small number of different categories. Research in consumer decision making and distributive justice has found similar evidence for an equality heuristic (Messick 1993; Roch et al. 2000). Similarly, the Anchoring argument “bets” on one of the most powerful and pervasive heuristics, “anchoring-and-adjustment” (for a detailed description, see Gilovich et al. 2002). The Minimum Requirement argument is reminiscent of a mechanism operating at an individual level, the resistance to trading off certain “protected values” against others, particularly economic values (Baron and Spranca 1997). The Champion argument might work because of the persuasiveness of the source endorsing the allocation (Petty and Cacioppo 1981), whereas the Demonstrable Benefits argument could exploit a range of individual biases, from the preference for easy-to-evaluate attributes (Hsee 1996) to impatience and the preference for immediate gains over larger, later, and uncertain gains (Thaler and Shefrin 1981).
7.6 Conclusion

In this chapter, we have reviewed salient behavioural issues in Portfolio Decision Analysis. The focus of our attention has been on bringing together normative, individual and organisational psychological perspectives, and on drawing not only on laboratory research, but also on craft knowledge and experience from the field. We have aimed to be as comprehensive as we can, but our review remains a personal view of the available literature, which we hope will stimulate further thought and research.
As an applied discipline, decision analysis seeks to improve decision making. It is rather humbling to realise that we have spent the entire chapter discussing problems, with barely a mention of possible actions which a decision maker could take to debias. In a sense, the portfolio frame is itself a debiasing device, as it encourages decision makers to “bracket broadly” (Read et al. 1999), simultaneously considering decisions about multiple projects rather than making decisions about individual projects on a one-off sequential basis. Moreover, there are debiasing procedures
which are specific to portfolio decision analysis models: e.g. the structure of the portfolio model may make it possible to use the “forcing in” of a specific project into an optimal portfolio (with the consequent “forcing out” of other marginal projects so that a budget constraint is observed) as a consistency checking device (Phillips and Bana e Costa 2007). However, as far as individual-level biases are concerned, we have little to add to the good generic advice which is a staple of the debiasing literature: use explicit problem representations (such as decision analysis models) which make assumptions transparent and explicit; consider alternative problem representations to avoid becoming trapped by any one representation; deliberately seek out multiple perspectives; and elicit, re-elicit, and perform sensitivity analysis (Fischhoff 1982; Larrick 2004).
What we have called organisational biases are harder to guard against. Some organisations may have poor decision habits, and embedding new ways of thinking about and approaching decisions can be demanding for even the most skilful decision analyst (Matheson and Matheson 2007). Yet there do exist organisations which have successfully embedded Portfolio Decision Analysis within their planning cycles (Phillips and Bana e Costa 2007; Morton et al. 2011). As this chapter has highlighted, there are interesting and under-researched questions about how to improve the thinking process surrounding resource allocation, and we hope this chapter encourages researchers to exploit these research opportunities.
References

Airoldi M, Bevan G, Morton A, Oliveira M, Smith J (2008) Estimating the health gains and cost impact of selected interventions to reduce stroke mortality and morbidity in England. Health Foundation, London
Airoldi M, Morton A (2011) Portfolio decision analysis for population health. In: Salo A, Keisler J, Morton A (eds) Portfolio decision analysis: improved methods for resource allocation. Springer, New York
Bardolet D, Fox CR, Lovallo D (2011) Corporate capital allocation: a behavioral perspective. Strateg Manage J (in press)
Baron J, Spranca M (1997) Protected values. Organ Behav Hum Decis Process 70:1–16
Barrett-Howard E, Tyler TR (1986) Procedural justice as a criterion in allocation decisions. J Pers Soc Psychol 50:296–304
Benartzi S, Thaler RH (2001) Naive diversification strategies in defined contribution saving plans. Am Econ Rev 91:79–98
Bromiley P (2009) A prospect theory model of resource allocation. Decis Anal 6:124–138
Cialdini R (2007) Influence: the psychology of persuasion. Harper Business, New York
Coff RW, Laverty KJ (2001) Roadblocks to competitive advantage: how institutional constraints and decision biases hinder investments in strategic assets. J High Technol Manage Res 12:1–24
Conlon D, Garland H (1993) The role of project completion information in resource allocation decisions. Acad Manage J 36:402–413
Edwards W, von Winterfeldt D, Moody D (1988) Simplicity in decision analysis: an example and a discussion. In: Bell DE, Raiffa H, Tversky A (eds) Decision analysis: descriptive, normative, and prescriptive aspects. CUP, New York
Fischhoff B (1982) Debiasing. In: Kahneman D, Slovic P, Tversky A (eds) Judgment under uncertainty: heuristics and biases. CUP, Cambridge
Fishburn PC (1992) Utility as an additive set function. Math Oper Res 17:910–920
Fox CR, Bardolet D, Lieb D (2005) Partition dependence in decision analysis, resource allocation and consumer choice. In: Zwick R, Rapoport A (eds) Experimental business research, vol. 3: marketing, accounting, and cognitive perspectives. Kluwer, Norwell/Dordrecht
Gilovich T, Griffin DW, Kahneman D (2002) Heuristics and biases: the psychology of intuitive judgment. CUP, New York
Howell JM, Shea CM (2001) Individual differences, environmental scanning, innovation framing and champion behaviour: key predictors of project performance. J Prod Innov Manage 18:15–27
Hsee CK (1996) The evaluability hypothesis: an explanation for preference reversals between joint and separate evaluations of alternatives. Organ Behav Hum Decis Process 67:247–257
Hsee CK, Rottenstreich Y (2004) Music, pandas and muggers: on the affective psychology of value. J Exp Psychol Gen 133:23–30
Huberman G, Jiang W (2006) Offering versus choice in 401(k) plans: equity exposure and number of funds. J Finance 61(2):763–801
Jacobi SK, Hobbs BF (2007) Quantifying and mitigating the splitting bias and other value tree-induced weighting biases. Decis Anal 4:194–210
Jenni KE, Merkhofer MW, Williams C (1995) The rise and fall of a risk-based priority system: lessons from DOE’s Environmental Restoration Priority System. Risk Anal 15:397–410
Kahneman D, Knetsch JL (1992) Valuing public goods: the purchase of moral satisfaction. J Environ Econ Manage 22:57–70
Kahneman D, Lovallo D (1993) Timid choices and bold forecasts: a cognitive perspective on risk taking. Manage Sci 39(1):17–31
Keisler J (2005) When to consider synergies in project portfolio decisions. College of Management Working Paper, 27
Kleinmuntz DN (2007) Resource allocation decisions. In: Edwards W, Miles RF, von Winterfeldt D (eds) Advances in decision analysis. CUP, Cambridge
Langholtz H, Ball C, Sopchak B, Auble J (1997) Resource-allocation behavior in complex but commonplace tasks. Organ Behav Hum Decis Process 70(3):249–266
Langholtz HJ, Marty AT, Ball CT, Nolan EC (2002) Resource-allocation behavior. Kluwer, Norwell
Larrick RP (2004) Debiasing. In: Koehler DJ, Harvey N (eds) Blackwell handbook of judgement and decision making. Blackwell, London
Matheson D, Matheson JE (2007) From decision analysis to the decision organization. In: Edwards W, Miles RF, von Winterfeldt D (eds) Advances in decision analysis. CUP, Cambridge
Messick D (1993) Equality as a decision heuristic. In: Mellers BA, Baron J (eds) Psychological perspectives on justice. CUP, Cambridge
Morton A, Fasolo B (2009) Behavioural decision theory for multi-criteria decision analysis: a guided tour. J Oper Res Soc 60:268–275
Morton A, Bird D, Jones A, White M (2011) Decision conferencing for science prioritisation in the UK public sector: a dual case study. J Oper Res Soc 62:50–59
Northcraft GB, Neale MA (1986) Opportunity costs and the framing of resource allocation decisions. Organ Behav Hum Decis Process 37(3):348–356
Otway HJ, Pucket JM, von Winterfeldt D (1992) The priorization of environment, safety, and health activities. Technical Report. Los Alamos National Laboratory, Los Alamos
Payne JW, Bettman JR, Johnson EJ (1993) The adaptive decision maker. CUP, Cambridge
Petty RE, Cacioppo JT (1981) Attitudes and persuasion: classic and contemporary approaches. Wm. C. Brown, Dubuque
Phillips LD, Bana e Costa CA (2007) Transparent prioritisation, budgeting and resource allocation with multi-criteria decision analysis and decision conferencing. Ann Oper Res 154:51–68
Read D, Loewenstein G, Rabin M (1999) Choice bracketing. J Risk Uncertain 19(1–3):171–197
Roch S, Samuelson C, Allison S, Dent J (2000) Cognitive load and the equality heuristic: a two stage model of resource overconsumption in small groups. Organ Behav Hum Decis Process 83:185–212
Samuelson W, Zeckhauser RJ (1988) Status quo bias in decision making. J Risk Uncertain 1:7–59
Scharfstein D, Stein J (2000) The dark side of internal capital markets: divisional rent-seeking and inefficient investment. J Finance 55(6):2537–2564
Shafir E, Simonson I, Tversky A (1993) Reason-based choice. Cognition 49(2):11–36
Sharpe P, Keelin T (1998) How SmithKline Beecham makes better resource-allocation decisions. Harvard Business Rev 76:45–57
Thaler RH, Shefrin HM (1981) An economic theory of self-control. J Polit Econ 89(2):392–406
Tversky A, Kahneman D (1974) Judgment under uncertainty: heuristics and biases. Science 185:1124–1131
von Winterfeldt D (1999) On the relevance of behavioral research for decision analysis. In: Shanteau J, Mellers B, Schum D (eds) Decision science and technology. Kluwer, Boston
von Winterfeldt D, Edwards W (1986) Decision analysis and behavioral research. CUP, Cambridge
Walster E, Walster GW, Berscheid E (1978) Equity: theory and research. Allyn & Bacon, Boston
Chapter 8
A Framework for Innovation Management

Gonzalo Arévalo and David Ríos Insua
Abstract Over the last 3 years a niche has emerged for Web-based tools to support innovation management processes. However, these tools usually focus on a limited part of the innovation management cycle. Moreover, in spite of the inherent portfolio nature of many decisions in innovation management, such tools tend to lack group decision support capabilities, except for simple mechanisms using discussion fora and voting systems. We describe a flexible framework based on collaborative decision analysis to support innovation management processes and outline a Web-based architecture implementing such a framework.
8.1 Introduction

Over the last 3 years, a market niche for Web-based innovation management tools has emerged. This is probably related to the rising concern among CEOs that innovation is a key factor for companies to stay ahead of their competitors. Indeed, as Townsend et al. (2008) indicate, innovation is a strategic priority for 93% of senior business executives.
This interest is well grounded in statistical data. For example, taking a macroeconomic approach to assess the importance of innovation in the development of an economy, the European Innovation Scoreboard (Pro Inno Europe 2009) shows that countries that are more focused on innovation, such as Sweden and Germany, have suffered less in terms of their unemployment rate in the aftermath of the current financial crisis than less innovative countries, such as Greece and Spain. Similarly, at
the enterprise sector level, based on the EUROSTAT (2010) New Cronos database, the motor industry faced a reduction of almost 20% in employment in the period 2007–2009, whereas the computing, consulting, and related services sector had a reduction of only 0.1%. Similar conclusions hold even for longer periods. Thus, in a globalized economy, there is a need to innovate to increase flexibility and provide new services and products.
Building on the recent success of the Web 2.0 and cloud computing paradigms, several new innovation management tools are being deployed over the Web. However, these tools support just a few phases of the innovation management process. Moreover, in spite of the many portfolio decisions in this application area, they provide few group decision support capabilities, typically based only on discussion fora or simple voting mechanisms.
This chapter proposes a flexible framework for managing innovation processes and outlines how such a framework could be implemented through the Web to support innovation. Our methodology is based on collaborative decision analysis and resource allocation procedures to select a portfolio of potentially innovative projects. It is inspired by our experience in relevant consulting work and incorporates, from a decision analytic perspective, best practices as described in Luecke (2009). It is flexible in the sense that it may fit and can be adapted to several organizational cultures.
The chapter is structured as follows. We first review some innovation management tools, emphasizing their decision support facilities and corresponding strengths and weaknesses. We then describe SKITES, a general framework to support all phases of the innovation management process, drawing heavily on recent e-participation and Web-based collaborative decision support methodologies, see Burstein and Holsapple (2008). We pay special attention to decision analytic issues. We conclude by discussing how such a framework could be implemented in a generic architecture to support interactions over the Web, to scale up a fairly complex procedure.
8.2 Web-Based Innovation Management Process Tools

The increasing relevance of innovation has led to the emergence of a vision of innovation as a creative and collaborative activity that needs to be managed proactively within organizations, see Howells (2005). Shane and Ulrich (2004) provide a literature review of relevant research issues. Typically, innovation management processes include phases in which (1) projects are proposed; (2) proposals are filtered; (3) a portfolio of projects is chosen; and (4) implementation is monitored. The need to manage such highly relevant processes appropriately has spawned a new market of Web-based innovation management tools.
8.2.1 GDS Features of Some Innovation Management Platforms

Here, we briefly outline the key features of six innovation management platforms which benefit from the popularity of Web 2.0; see French et al. (2010) for an introduction to such technologies. These platforms are currently running initiatives with varying levels of penetration in the innovation management market. We emphasize their group decision support (GDS) facilities and indicate which phases of an innovation process are supported. The first two tools provide marketplaces for matching innovation supply and demand. Beyond this, the third and fourth tools incorporate voting mechanisms and simple filters and ratings for decision support. The fifth one supports innovation management processes comprehensively. The last one is a system for benchmarking innovation in organizations. We do not claim our list to be complete. Rather, the systems described here span all key activities in innovation management processes. They have also had reasonable success in terms of growth in customers and investment funding.
Innoget (http://www.innoget.com/) is an open innovation portal connecting companies with a network of scientists, labs, and R&D-oriented companies. It allows users to publish problems and receive proposals, solicit technologies that meet their demands, and offer their products. No formal decision support is provided. The company seeking to launch innovative projects receives feedback from research experts, who may engage in a research collaboration. Some important firms such as Orange or Leche Pascual have used it to search for innovative projects.
Innocentive (http://www.innocentive.com/) is a platform that connects companies, academic institutions, and public sector organizations with a network of researchers who earn prizes (and reputation) when solving proposed challenges. It may be seen as an open challenge marketplace with incentives, which are mainly in the form of prizes, although other rewards such as grants or collaborations may also be given. This tool is quite simple in the sense that it is just a matching mechanism. There is no evaluation of innovative projects, nor is any follow-up supported. Thus, Innocentive just covers the gap between innovation producers and demanders.
ideas4all (http://en.ideas4all.com/) is a recent start-up which serves as a platform in which persons and organizations propose ideas that receive support via a voting system, based on usefulness and reasonableness. Users can also pose problems in search of assistance or support. Thus, ideas4all is essentially an open idea marketplace supported by a simple voting mechanism, not focused on innovation. Any kind of idea can be proposed, and almost any person can act as evaluator. Neither the voting mechanism nor the evaluation criteria match the specific needs of innovation markets.
Qmarkets (http://www.qmarkets.net/) is a company which builds on collective wisdom to provide several services. Specifically, its product (Idea Management)
supports the implementation of a four-stage innovation process (submit, interact, evaluate, decide), helping a company to find new products that match its strategic objectives. Nevertheless, Qmarkets does not cover the implementation and follow-up phases. As for decision support, it provides voting tools and a proprietary evaluation system based on filters and ratings. The system is meant to be used internally within an organization, not allowing external participants.
Accept (http://www.accept360.com/) is an innovation management system combining modules for idea generation, portfolio management, and product development based on best practices in innovation processes and business intelligence. Accept supports the idea generation, selection, and execution phases. It includes several voting mechanisms and multicriteria value functions with fixed criteria that are adapted to the circumstances of each partner or market, to support portfolio selection. The system does not support the follow-up phase. Accept is more a software service than an innovation consultancy service supporting the entire innovation process.
IMP³rove (https://www.improve-innovation.eu/) started from a project funded by the European Commission. Its most relevant decision support tool facilitates the benchmarking of an organization in terms of innovation, based on five criteria (innovation strategy, organization and culture, innovation life cycle processes, enabling factors, innovation results). This helps the company in comparing its results with those of competitors in the same sector. This tool is oriented mainly toward small and medium-sized enterprises (SMEs). The resulting evaluation, which resembles the European Foundation for Quality Management (EFQM) model for quality assessment, suggests improvements in an organization’s innovation process.
8.2.2 Discussion

The above tools are backed by expertise in innovation management and seek to promote and support innovation within organizations. However, these tools incorporate fairly simplistic methodologies and mechanisms for group decision support and resource allocation, mainly through facilitating discussions and voting protocols. A few tools use more sophisticated mechanisms based on group value functions, but these tend to rely on criteria and/or weights that are fixed across organizations. Moreover, they are based on a fixed innovation management process that cannot really be adapted to each organizational culture: the organization needs to adapt to the tool, rather than the tool adapting to the organization.
This background sets the stage for our framework, which supports such processes from the generation of innovative projects, to their filtering, evaluation, and selection, to the follow-up of their execution. This framework is intended to be flexible and adaptive, to embrace various organizational cultures and different enterprise sizes, including public and private ones, SMEs and big firms, and innovation
networks, among others, with different innovation cultures, based on different criteria, business models, competitive advantages, target markets, sizes, or business environments. Because there are several portfolio decisions to be taken in innovation management, the framework should provide appropriate GDS methodologies for assessing projects. This assessment could combine standard indicators with less well-known ones specifically designed for the evaluation of innovation projects.
Our framework is based on best practices in innovation management, see Luecke (2009), which we augment by adding collaborative decision analysis tools, as in Raiffa et al. (2002). Moreover, we design such a framework to make it implementable through the Web to better support distributed decision making and facilitate its application at a broader scale. In doing this, we draw on recent developments and debates in the field of e-participation, see Ríos Insua et al. (2008), French et al. (2007), and Ríos Insua and French (2010), and the tradition in GDS systems, see Burstein and Holsapple (2008). We also draw on portfolio resource allocation methods, see Vilkkumaa et al. (2011) and Kleinmuntz (2007) for relevant pointers, as well as other chapters in this volume.
8.3 SKITES: A Framework for Innovation Management

As described here, SKITES (Sharing Knowledge and Information Towards Economic Success) reflects an open approach to innovation for sustainable growth. We emphasize the decision support aspects of the framework for making choices about proposals for innovative projects. SKITES is structured along the following phases:
1. Innovative projects are generated and proposed.
2. They are filtered and documented.
3. They are chosen for implementation.
4. They are followed up with a view toward project management and gathering data to support future innovation rounds.
As we shall see, decision analysis methods are core to this approach in phases 1, where we aim at screening projects, and 2, where we need to allocate the available resources to a portfolio of projects. We distinguish four roles within SKITES:
• Organization. This refers to the organization (company or public body) which sets up the innovation process, according to specified rules.
• Proposers. These are the individuals or teams that respond to a call for proposals issued by the organization, proposing innovative products or services.
• Assessors. These are experts whose role is to evaluate and manage the innovation process and decide which proposals are to be implemented. The size, composition, and involvement of this group may vary from one organization
to another. They will be accountable for the final portfolio of projects chosen. This group might be formed by experts from the organization, by external advisors or, even, by the whole set of constituents, in line with recent e-participation experiences, see Lavin and Ríos Insua (2010).
• Facilitators. These will be experts engaged in SKITES. They will have a sound background in innovation management and a twofold role: on the one hand, to assist proposers and experts with difficulties encountered using this framework; on the other hand, to revise the information supplied by the proposers, looking for coherence and consistency.
We consider two different operation modes for SKITES:
• Closed. In this case, the organization restricts the innovation process to designated members only. This is typical of large organizations with sufficient human resources to deal with their innovation challenges. However, the open innovation paradigm is starting to gain importance, see, e.g., Chesbrough et al. (2008), and more organizations are adopting such an approach to reach more disruptive innovations.
• Open. Conceptually, see Herzog (2008), Open Innovation is defined as the use of external and internal resources to accelerate internal innovation and, at the same time, the use of external pathways to market for internal knowledge. In this case, an organization releases its demands for innovative products, projects, or services by proposing challenges for specific markets. This will be typical of small organizations which may be too small to innovate effectively on their own. This may also be the case for public bodies, which must strive for transparency, fairness, and publicity when funding projects. Many sources of open innovation can be identified, mainly based on licensing, joint agreements, venture capital, and spin-offs.
Methodologically, both innovation modes are handled in the same fashion, the only difference being the inclusion of external participants. This entails the need to develop appropriate security mechanisms to allow individuals to take part in innovation processes as their permissions indicate.
8.3.1 Phase 0: Generation of Innovation Projects

In this phase, innovative projects are generated for later detailed evaluation. Although Kleinmuntz (2007) suggests that there is always an abundance of proposals among which we need to allocate our limited resources, this may not be the case, which is why an organization might be interested in an open innovation approach.
The systematic generation of innovative project proposals may be pursued with informal and more formal tools. Among the informal ones, brainstorming is the most popular approach. The nominal group technique is an evolution of brainstorming; however, to avoid underperformance of less confident participants, the collection of ideas is done in a systematic way through a written procedure. Other informal sources for innovative projects that can be considered are ideas
from customers who are lead users of products, and idea contests through a call linked to a specific subject or area. More formal approaches are based on checklists (like PESTEL, SWOT, or PROACT) or rich picture diagrams, as described in French et al. (2010). TRIZ, which is a problem-solving, analysis, and forecasting tool based on patterns of invention in the global patent literature, may be used to generate innovative project proposals in a formal manner, see Altshuller (1999). Value-focused thinking (Keeney 1997) is also relevant in the creation of alternatives. Luecke (2009) and Shane and Ulrich (2004) present further pointers to this important topic.
The proposals generated should be described using the same format to facilitate the comparability of projects. We focus here on decision-making aspects to facilitate the evolution of ideas toward innovative projects. For that purpose, rough estimates of the required indicators are needed. Innovative projects may then be discussed among proposers and filtered through a voting system. As a consequence of such a debate, innovative projects may evolve and/or be eliminated from later phases, for example, if they do not receive sufficient votes from the pool of potential voters, as in, e.g., ideas4all.
During this generation phase, it may be interesting to include a first project filter based on a self-assessment by the proposers, especially in those cases in which there is a large number of project proposals. This self-assessment serves proposers as a reflective exercise about their proposals. This filter could be based, e.g., on a Rough Cut Analysis, see Luecke (2009), which uses three key questions:
1. Does the proposed innovation fit the strategy of the company?
2. Does the proposer have sufficient technical competence to make it work?
3. Does the company have sufficient business competence to make it successful?
Luecke (2009) includes categorical answers to the above questions, which may be difficult to answer. Thus, we suggest a simpler answer format, which details the previous questions and uses responses based on five-point Likert items or yes–no answers as required:
• Strategic fit.
  1. Score (from 1 to 5) the technical fit of the innovative project to the organization.
  2. Indicate whether it is more suitable for the organization to launch the project on its own or to license it to a third party.
  3. Score (from 1 to 5) how feasible it is, in case of success, that this innovative project opens up new markets.
• Technical competence.
  1. Determine whether it is feasible to develop the innovative project with the current staff.
  2. Determine whether it is feasible to launch the project given the organization’s current workload.
  3. Estimate, if so, the percentage of extra personnel effort needed to develop the project.
• Business competence.
  1. Score (from 1 to 5) the perception of the increment in marketing effort needed to launch the project.
  2. Score (from 1 to 5) the perception of the extent to which the consumption of current products/services would be negatively affected by the potentially new product or service.
  3. Score (from 1 to 5) the perception of the effort necessary to train staff.
After such an assessment, each project a is evaluated with regard to r screening criteria (a_1, a_2, ..., a_r). We could build a simple weighted value function, see, e.g., French et al. (2010), v(a) = Σ_j w_j a_j, for the organization to filter innovative projects based on such criteria, retaining only projects above a threshold value. Alternatively, we could use minimum thresholds b_j to retain only those projects which are sufficiently good on all relevant criteria, that is, such that a_j ≥ b_j for all j. Note that we could use both filters in combination, i.e., retain proposals with a high enough overall value and high enough criteria evaluations.
Such self-assessments carry twofold risks. First, some proposers could overestimate the performance of the projects they are proposing in order to pass this stage. Note, however, that these projects could be detected and excluded later on, when the assessors evaluate proposals. Second, other proposers could underestimate the performance of their projects, mainly because of their inexperience in areas such as strategy, marketing, technology, and business competence. Yet a key element for innovative projects would be the engagement and enthusiasm of the proposers. Therefore, we would expect from them at least an appropriate concept of and expectations about their innovative projects. It may be the case that the proposer feels unable to answer the pertinent questions. Thus, we should open communication channels with relevant actors within the organization in the case of a closed innovation process, and supply external advice within open innovation processes. This would be supported by the facilitators.
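As an illustration of the two screening filters just described, here is a minimal Python sketch; it is not part of SKITES itself, and the project names, criterion scores, weights w_j, thresholds b_j, and cutoff value are hypothetical.

```python
# Sketch of the Phase 0 screening rule: weighted value v(a) = sum_j w_j * a_j
# combined with per-criterion minimum thresholds a_j >= b_j.

def screen(projects, weights, thresholds, min_value):
    """Return the projects passing both the value and the threshold filters.

    projects   -- dict mapping project name to a list of criterion scores a_j
    weights    -- weights w_j, one per criterion
    thresholds -- minimum acceptable score b_j per criterion
    min_value  -- minimum acceptable overall value v(a)
    """
    retained = {}
    for name, scores in projects.items():
        value = sum(w * a for w, a in zip(weights, scores))
        meets_all = all(a >= b for a, b in zip(scores, thresholds))
        if value >= min_value and meets_all:
            retained[name] = value
    return retained

# Three hypothetical proposals scored 1-5 on strategic fit, technical
# competence, and business competence.
proposals = {"A": [4, 5, 3], "B": [5, 2, 4], "C": [3, 3, 3]}
print(screen(proposals, [0.5, 0.3, 0.2], thresholds=[3, 3, 2], min_value=3.0))
# {'A': 4.1, 'C': 3.0} -- B fails the technical-competence threshold.
```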
8.3.2 Phase 1: Project Filtering

After Phase 0, an initial portfolio of innovation projects is available. These need to be documented by the proposers with a prebusiness plan, with indicators, the novelty of the innovative project, and other relevant information which will facilitate project comparison. The information gathered may also be used as an initial guide for project management, if the corresponding project is eventually launched.
Clearly, the indicators chosen may vary among organizations. For example, objectives and evaluation criteria will usually differ between the private and public sectors. We briefly discuss some of the most relevant ones. From the financial side, the following are well known:
• The Net Present Value (NPV) is defined as the sum of the present values of the individual cash flows. It is usually measured on an annual basis, but it can also be calculated on a monthly basis.
• The Internal Rate of Return (IRR) is the rate used in capital budgeting to measure and compare the profitability of investments. It is an indicator of the efficiency, quality, or yield of an investment.
• The Payback Period is the time needed to recoup the initial investment.
These financial indicators are widely used to assess traditional investment projects. However, they are not necessarily helpful when applied in isolation to the innovation sector, as illustrated by Christensen et al. (2008) or Aven (2010). The use of IRR, NPV, and Payback frequently causes decision makers to underestimate the actual returns and benefits of innovation projects, as they tend to focus on the difficulty of foreseeing future cash flows, in comparison to similar measures for incremental projects. Thus, routine projects tend to get the green light more often than really innovative ones. To mitigate this shortcoming of classical financial indicators with respect to innovation projects, we could use, on a complementary basis, the following concepts:
• Discovery-Driven Planning. This method, proposed by McGrath and MacMillan (1995), starts at the end, estimating the minimum profit level that makes innovative projects acceptable. Then, the price of the innovative product or service is calculated, together with the implied level of sales. We then answer whether we are capable of reaching such a level of sales.
• The R–W–W Method. This method is based on a practical approach developed by Day (2007) which draws on three categorical questions:
  1. Is it Real? Is there really such a need in the market?
  2. Can we Win? Would the product or service be competitive?
  3. Is this innovation Worthy? This question is concerned with the strategic fit of the proposal and whether it has potential from the financial point of view.
Apart from financial indicators, innovation projects are frequently evaluated also with indicators pertaining to human resources, such as the percentage of staff working on research and development activities and the percentage of staff with a PhD. Other relevant indicators refer to information about competitors, state-of-the-art products, market targets, associated technologies, as well as required resources, expected sales, and funding. These indicators also supplement the weaknesses of traditional financial indicators where innovation is concerned. Moreover, as innovation is strongly linked to human capital, these indicators could be used to monitor the project. Again, they should be collected through templates to facilitate project comparison.
Once the above data are entered, they are checked for consistency. This analysis will be based on automatic controls and validations, such as, for instance, whether there is proportionality between staff and expected revenues, comparisons between cash flows and sales, and so on. Nevertheless, the process will be accessible as well to the facilitators in charge of this stage.
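The three classical indicators above are easy to state precisely in code. The following hedged Python sketch computes them for a hypothetical cash-flow stream; the bisection-based IRR assumes the stream changes sign exactly once (an initial outlay followed by positive returns).

```python
# Sketch of NPV, IRR, and payback period for a cash-flow stream,
# where period 0 carries the (negative) initial investment.

def npv(rate, cash_flows):
    """Net Present Value: sum of discounted cash flows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal Rate of Return via bisection: the rate at which NPV = 0.
    Assumes a single sign change, so NPV decreases in the rate."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback_period(cash_flows):
    """Number of periods until cumulative cash flow turns non-negative."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return t
    return None  # investment never recouped

# Hypothetical project: invest 100 now, then receive 30, 40, 50, 60.
flows = [-100, 30, 40, 50, 60]
print(round(npv(0.10, flows), 2))   # NPV at a 10% discount rate
print(round(irr(flows), 4))         # IRR
print(payback_period(flows))        # payback in whole periods (here 3)
```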
176
G. Ar´evalo and D. R´ıos Insua
After this initial checking, projects are scored, now by the assessors, from 1 to 5 on three general topics in relation to the rough cut analysis conducted by proposers during phase 0:
• Strategic/potential impact
• Operational impact
• Difficulty of entering the market, in terms of competitors
If the scores are sufficiently high, based on a threshold system and/or a multicriteria value function, then a full study of the prebusiness plan is launched. Note that this is similar, for example, to the standard proposal screening procedure in research, technical development, and innovation projects funded by the European Commission, where it is necessary to overcome a minimum threshold for each criterion as well as a minimum value for the sum of them. The proposals identified in this phase are deemed to have sufficient potential and opportunities to enter the market, and the proposers will be asked for more details regarding costs, financial sources, and an in-depth analysis of the project opportunities, covering the full business plan.
8.3.3 Phase 2: Project Selection

We enter now the phase of selecting projects for implementation. The decision needs to take into account the scarcity of resources (financial, human, materials, etc.). Methodologically, we need to allocate several resources among several projects, subject to one or more resource constraints. This resource allocation process needs to, somehow, maximize the satisfaction of the selecting group. This may be done in several ways, as specified below. Moreover, the group will select the projects based both on current and on future opportunities. Thus, some good projects could be withheld and delayed for later implementation.
From a technical point of view, there is a group of n assessors that has to decide how to allocate resources, say a budget b and an amount d of personnel. There is a set of q potential projects, X = {a_1, ..., a_q}. Project a_i has an estimated cost c_i, employs d_i persons, and is evaluated with respect to m criteria, with values x_i^j, j = 1, ..., m. For simplicity, we assume that we have a sufficiently precise estimate of each project's cost and features, i.e., we do not deem uncertainty relevant. Alternatively, we would have probability distributions over such features, which would be treated as outlined below. We represent this information through a table:

Project | Cost | H.Res. | Criteria
a_1     | c_1  | d_1    | (x_1^1, ..., x_1^m)
...     | ...  | ...    | ...
a_q     | c_q  | d_q    | (x_q^1, ..., x_q^m)
Assume that the total cost of the proposed projects is greater than b and/or the total amount of work effort required is greater than d. Otherwise, all projects could be started. In addition to resource constraints, there may exist other constraints that restrict portfolios. Typical constraints would be: the maximum budget allocated to one topic will be e euros; we shall support at most f projects of a given type; or, we can implement a certain project only if another project is implemented. A feasible portfolio will be a subset of projects, defined by the corresponding subset of indices F ⊆ I = {1, 2, ..., q}, which satisfies

Σ_{i∈F} c_i ≤ b   and   Σ_{i∈F} d_i ≤ d,

and other possible constraints. In case of uncertainty about the project features, we would have stochastic constraints that could be handled, for example, by requiring that each constraint is satisfied with a sufficiently high probability. The set of feasible portfolios will be designated A = {F^1, F^2, ..., F^s}.
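Under illustrative data, the feasible set A can be made concrete with a brute-force sketch such as the following; the costs, staffing figures, and limits are hypothetical, and side constraints would enter as extra predicates in the filter.

```python
from itertools import combinations

# Sketch of enumerating the feasible set A = {F^1, ..., F^s}: every subset F
# of the project indices whose total cost respects the budget b and whose
# total staffing respects the personnel limit d. Figures are hypothetical.

costs = [40, 55, 30, 25]   # c_i, one entry per project
staff = [3, 5, 2, 2]       # d_i
b, d = 100, 8              # available budget and personnel

q = len(costs)
feasible = [
    F
    for r in range(q + 1)
    for F in combinations(range(q), r)
    if sum(costs[i] for i in F) <= b and sum(staff[i] for i in F) <= d
]
print(feasible)  # e.g. (0, 2, 3) is feasible: cost 95 <= 100, staff 7 <= 8
```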
The allocation process may be undertaken in several ways. Many of the tools described in Section 8.2 introduce only voting mechanisms to support such decisions. A classical approach is based on maximizing the NPV, assuming that the group members agree on such a criterion, as described in detail by Kleinmuntz (2007). Vilkkumaa et al. (2011) provide a framework which assumes a group value function, aggregating the multicriteria value functions of the participants. Possibly incomplete information is obtained about the weights and values to identify potentially interesting portfolios. Additional information is solicited in case there are no clear-cut recommendations. If no additional information is actually available, voting and bargaining mechanisms are introduced. As an example of the variety of approaches regarding group portfolio resource allocation decisions, within the related problem of participatory budget formation, Alfaro et al. (2010) describe numerous procedures which differ in the stages involved and the group decision tasks employed at those stages. Thus, the allocation process depends essentially on the organization: the framework should be able to support various basic group decision-making tasks, including voting systems, negotiation methods, arbitration, and group value functions. Efremov and Ríos Insua (2010) describe these and other collaborative decision analysis methodologies with a view toward implementing them through the Web. In SKITES, we emphasize the following flexible approach to allocating resources by a group:
1. Individual problem exploration. At this stage, we elicit the participants’ preferences about the consequences of the projects, e.g., in terms of their utility or value functions, depending on whether uncertainty is deemed relevant or not. We focus on the latter case; otherwise, we would substitute values by expected utilities.
Assume, therefore, that each assessor’s preferences are represented through a multiattribute value function v_j, j = 1, ..., n, which he aims at maximizing, see, e.g., French (1986). Therefore, we may associate with an innovation management process a matrix of valuation entries v_i^j, the value that assessor j gives to project i:

Projects | Cost | HR  | Assessor 1 | ... | Assessor j | ... | Assessor n
a_1      | c_1  | d_1 | v_1^1      | ... | v_1^j      | ... | v_1^n
...      | ...  | ... | ...        |     | ...        |     | ...
a_i      | c_i  | d_i | v_i^1      | ... | v_i^j      | ... | v_i^n
...      | ...  | ... | ...        |     | ...        |     | ...
a_q      | c_q  | d_q | v_q^1      | ... | v_q^j      | ... | v_q^n
To simplify matters, we shall assume that the value given by the jth assessor to a feasible portfolio F will be the sum of the values of the projects in F, that is,

v_j(F) = Σ_{i∈F} v_i^j,   j = 1, ..., n.
Theoretical assumptions underpinning such an additivity assumption are discussed in, e.g., Golabi (1987) and Golabi et al. (1981). The assessors may use this information to determine their preferred portfolios and the reasons for their choices. The preferred feasible portfolio F*_j for assessor j will be that giving him the maximum value. Should there be just the maximum budget constraint, F*_j would be obtained through a knapsack problem, see Martello and Toth (1990):

max_{F⊆I} Σ_{i∈F} v_i^j   s.t.   Σ_{i∈F} c_i ≤ b.
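The following is a hedged Python sketch of this per-assessor optimisation, reusing the kind of brute-force enumeration shown earlier; at realistic problem sizes, integer programming or constraint programming would replace the enumeration. All project values v_i^j are hypothetical.

```python
from itertools import combinations

# Sketch of computing each assessor's preferred portfolio F*_j: maximise
# the additive value sum_{i in F} v_i^j over the feasible portfolios.
# Brute force is used purely for clarity; all figures are hypothetical.

costs = [40, 55, 30, 25]            # c_i
staff = [3, 5, 2, 2]                # d_i
b, d = 100, 8                       # budget and personnel limits
values = [[10, 14, 7, 6],           # v_i^1: assessor 1's project values
          [6, 8, 12, 11]]           # v_i^2: assessor 2's project values

def feasible_portfolios():
    q = len(costs)
    for r in range(q + 1):
        for F in combinations(range(q), r):
            if sum(costs[i] for i in F) <= b and sum(staff[i] for i in F) <= d:
                yield F

for j, v in enumerate(values, start=1):
    best = max(feasible_portfolios(), key=lambda F: sum(v[i] for i in F))
    print(f"Assessor {j} prefers projects {best}")
# Here assessor 1 prefers (0, 1) while assessor 2 prefers (0, 2, 3), so the
# conflict-resolution stage discussed below is needed.
```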
In general, there will be other constraints and we must use general implicit enumeration algorithms to compute the participants’ optimal portfolios, like those based on constraint logic programming, see Marriott and Stuckey (1988). For smaller problems, integer programming and combinatorial optimization techniques might be sufficient. Logically, if all assessors prefer the same optimal portfolio, that would be the group decision. However, typically, various individuals will obtain different optimal portfolios, since their preferences may represent a wide variety of conflicting interests. Consequently, an agreement should be sought as a joint decision. We may view this phase of the innovation management process as a
negotiation table, see Rios and Ríos Insua (2009), which shows the value given by each assessor to each feasible portfolio:

Feasible portfolios           | Assessor 1 | ... | Assessor j | ... | Assessor n
F^1                           | v_1(F^1)   | ... | v_j(F^1)   | ... | v_n(F^1)
...                           | ...        |     | ...        |     | ...
F^s                           | v_1(F^s)   | ... | v_j(F^s)   | ... | v_n(F^s)
Individual optimal portfolios | F*_1       | ... | F*_j       | ... | F*_n
To start with, we could compute the set of nondominated portfolios. Based on the previous table, we associate a score vector with each feasible portfolio F, v(F) = (v_1(F), ..., v_n(F)), from which a dominance relation between portfolios may be defined in a standard way. Dominated portfolios may be removed from the previous table, retaining only the nondominated ones (a sketch of this screening step appears at the end of this subsection). Relatively efficient methods to determine the whole set of nondominated portfolios may be used, see Vilkkumaa et al. (2011) or Rios and Ríos Insua (2008). If this set is very diverse, we still need to manage the conflict. Note, however, that if some projects (“core projects”) are contained in all nondominated portfolios, these will be uncontroversial, thus reducing the problem. See Liesiö et al. (2007) for developments around the concept of core projects.
2. Conflict resolution. When several assessors have very different optimal portfolios, we shall need specific methodologies to reach a reasonable group choice. Some of the potentially usable approaches are:
• Arbitration. If we know the assessors’ preferences, an arbitration approach can be based on an algorithm which computes an arbitrated solution according to some equitable criterion, see Thomson (1994). To do this, we need to describe the resource allocation problem in terms of a value set and a disagreement point. The set A of feasible portfolios will be transformed into the assessors’ value set S = {(v_1, ..., v_n) : ∃ F ∈ A s.t. v_i = v_i(F), i = 1, ..., n}. The disagreement point is a vector d ∈ ℝ^n whose jth coordinate represents the value that the jth assessor would give to an initial reference portfolio to be improved. The vector d could be related to the values associated with implementing no project, or with those projects in the core. Such a d is related to the baseline scores whose choice is discussed in Clemen and Smith (2009). Thus, we represent the resource allocation problem as a pair (S, d), where S is a finite but potentially large set. The problem consists of trying to reach a consensus over the set P(S, d) of nondominated assessors’ values which are better than the disagreement point d. An arbitration resource allocation solution concept is a rule associating with each resource allocation problem (S, d) one portfolio in A, based on the selection of a point in P(S, d). Among the various arbitration concepts, for reasons outlined in Rios and Ríos Insua (2010), we favor the balanced increments and the balanced concession solution concepts.
A shortcoming of the arbitration approach is that these solutions could be seen as imposed. An advantage is the possibility of mitigating the complexity due to the presence of a potentially large pool of assessors discussing advantages and disadvantages of portfolios. Note that group value and utility functions may be superseded within arbitration schemes.
• Negotiation. Instead of arbitration, we could use negotiation. Though there are various generic schemes, negotiations consist of processes in which portfolios are offered iteratively, until one of them is accepted by a reasonable percentage of assessors; otherwise, no offered portfolio is globally accepted. Because of the potential discrepancies in preferences, we allow assessors to discuss portfolios. Kersten (2008) provides a comprehensive review of negotiation methods. Rather than using a formal negotiation method, we could allow assessors to post portfolio offers and debate them through a discussion forum. In such a way, they would interact and share knowledge as they propose portfolios. They could receive analytical aid through several indices to evaluate posted offers. Assessors could be allowed to vote in favor of or against offers. The offer with the highest level of acceptance among assessors could be considered an agreement, if this level is sufficiently high. Otherwise, no offered portfolio will be globally accepted through negotiation.
• Voting. We could directly move on to voting, but this might have the shortcoming that it does not sufficiently motivate deliberation among assessors. Again, we could appeal to numerous voting schemes (Brams and Fishburn 2002). For reasons outlined in Brams and Fishburn (2007), we tend to favor approval voting.
We may tailor these three approaches in several ways, to address the requirements of various organizational styles. Among several possibilities, we could directly implement an arbitration scheme. Or, we could implement a negotiation scheme and, if negotiations end up in a deadlock, resolve it through arbitration or through voting. Or we could move directly toward voting.
3. Postsettlement. If the outcome of the conflict resolution is reached through negotiation or voting, it could be the case that it is dominated in a Pareto sense: there would be portfolios which are better for all the assessors. Therefore, the assessors should try to improve it in a negotiated manner, through a negotiation scheme designed to converge to a nondominated portfolio which is better than the outcome obtained previously. One example of such a method is in Rios and Ríos Insua (2010), which combines both balanced increment and balanced concession movements in a single negotiation algorithm.
Note that the information obtained at the exploration phase would be useful not only for computing the assessors’ preferred resource allocations among projects, but also for evaluating portfolios offered during the negotiation phase, for voting in a better informed fashion and, finally, for checking whether the negotiated or voted outcome is dominated and, consequently, starting stage 3. One possible comment is that assessors may be reluctant to reveal their preferences. We assume in this design
8 A Framework for Innovation Management
181
that they will provide this information to a secure and trusted intermediary, in a framework that is called FOTID (full, open, and truthful intermediary disclosure), see R´ıos Insua et al. (2008). Such intermediary could be a secure Web server, in line with recent e-participation developments, see R´ıos Insua and French (2010).
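As announced above, the following is a minimal computational sketch of the dominance and arbitration constructs (nondominated portfolios, core projects, and the set P(S, d)). It assumes portfolios are represented as frozensets of project identifiers and that all assessor values are to be maximized; every function name here is illustrative, not part of any cited method.

def dominates(u, v):
    # u weakly improves on v in every coordinate and strictly in at
    # least one (all assessor values are to be maximized)
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def nondominated(portfolios, score):
    # keep the feasible portfolios whose score vectors
    # v(F) = (v_1(F), ..., v_n(F)) are Pareto-nondominated
    scored = [(F, score(F)) for F in portfolios]
    return [F for F, vF in scored
            if not any(dominates(vG, vF) for G, vG in scored if G is not F)]

def core_projects(portfolios):
    # projects contained in every portfolio; applied to the nondominated
    # set, this yields the "core projects"
    sets = [set(F) for F in portfolios]
    return set.intersection(*sets) if sets else set()

def pareto_frontier_above(S, d):
    # the set P(S, d): value vectors in S that are at least as good as the
    # disagreement point d in every coordinate and are themselves nondominated
    C = [v for v in S if all(vi >= di for vi, di in zip(v, d))]
    return [v for v in C if not any(dominates(u, v) for u in C if u is not v)]

An arbitration rule would then select a single point of P(S, d), for instance via the balanced increments path of Rios and Ríos Insua (2010), and implement a portfolio attaining it.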
8.3.4 Phase 3: Follow-Up of the Selected Portfolio

The aim of this phase is to follow the evolution of the selected portfolio with a double objective:
• Identify deviations (positive or negative) with respect to the original project schedule and resource consumption plan
• Gather information and experience to be considered in future phases of filtering and selection of innovative projects

To this end, once a project is selected, several indicators will be defined in relation to resource consumption. All these indicators will be integrated in the business plan, and they will guide the first phases of the project. The integration of each indicator will be analyzed, so that indicators that are not relevant for tracking a project once it has been selected will be discarded. The indicator system should be flexible enough to allow for the inclusion of new indicators when deemed relevant. Therefore, SKITES would provide support for:
• Defining, according to the specificities of innovative projects, the information to be gathered during the follow-up phase. As the process is based on collective common knowledge and on a continuous learning process, this information will evolve continuously.
• Providing a flexible, user-friendly, and adaptive system with templates to facilitate the consistent collection of information, thus simplifying comparisons. Automatic validations of the gathered information should be implemented, allowing for checks of data quality and coherence.
• Setting up alarms, based on the stored information and appropriate prediction models, not only when deviations have occurred, but also when deviations are foreseen (a minimal sketch of such an alarm rule follows this list).
• Building cost estimators, indicators, market projections, etc. This kind of information is valuable, especially in view of future innovation rounds.
• Facilitating the comparison of innovative projects (blind benchmarking), identifying synergies, niches, and possible clusters to enter a market in a better position.
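As one possible illustration of such an alarm rule — not part of SKITES itself; the class, its fields, the 10% tolerance, and the naive one-step forecast are all hypothetical — consider:

from dataclasses import dataclass, field

@dataclass
class Indicator:
    # hypothetical follow-up indicator comparing plan against observations
    name: str
    planned: list                 # planned value per period
    tolerance: float = 0.10      # relative deviation that triggers an alarm
    observed: list = field(default_factory=list)

    def record(self, value):
        self.observed.append(value)

    def alarms(self):
        # yield (period, kind, relative deviation) for realized deviations
        # and, via a naive linear extrapolation, for foreseen ones
        for t, obs in enumerate(self.observed):
            dev = (obs - self.planned[t]) / self.planned[t]
            if abs(dev) > self.tolerance:
                yield t, "realized", dev
        if len(self.observed) >= 2 and len(self.planned) > len(self.observed):
            t = len(self.observed)
            forecast = 2 * self.observed[-1] - self.observed[-2]
            dev = (forecast - self.planned[t]) / self.planned[t]
            if abs(dev) > self.tolerance:
                yield t, "foreseen", dev

cost = Indicator("resource consumption", planned=[100, 100, 120, 120])
cost.record(98)
cost.record(131)             # 31% over plan: realized alarm for period 1
print(list(cost.alarms()))   # also flags a foreseen overrun for period 2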
8.3.5 Discussion

SKITES is based on some recent consulting projects and is motivated by the need to scale such a framework to more and bigger groups over the Web. We could ask whether SKITES, as a more formal decision framework, may actually be more of a burden than a solution in innovation management, since innovation is a creative activity per se and formal tools may deter creativity. However, recall that creative work very frequently proceeds from a deliberate methodological approach aimed at generating new knowledge, and only a few inventions are the fruit of spontaneous research activity; see Clamen (2003). From the user's perspective, the development of a more encompassing approach does not necessarily entail a much more sophisticated system. We consider SKITES more powerful than other available tools because it incorporates relevant collaborative decision analysis methodologies. The proposed approach profits from the knowledge of the group as a whole, as a framework in which members provide their opinions, share them, and reach a better solution based on knowledge sharing among diverse participants.

As cogently argued in Salo and Kakola (2005), timeliness may impose intrinsic constraints within innovation processes. This entails that the organization must adapt the scheme to its culture and the time available, by choosing the appropriate stages and allocating the appropriate time to each of them. As an example, an organization requiring a fast decision process can simplify the above scheme by choosing just a "debate and vote" resource allocation process. In a similar fashion, there may be many different decision-making styles and levels of analytical sophistication among the assessors. We could conceive an alternative framework: phase (1) would allow the assessors to manipulate the problem to better understand it and the implications of their judgments, possibly based on less sophisticated methods such as goal programming or simply debating with other assessors; phase (2) would entail the construction and manipulation of the problem by the group, allowing sophisticated negotiation methods using value functions as well as simple methods based on debating the pros and cons of options in a forum and voting on them; phase (3) would then entail exploring whether the outcome may be improved. Indeed, by potentially adapting to numerous collaborative schemes, SKITES may adapt to varied organizational innovation styles. Notwithstanding this, if cognitively and timewise possible, we would support an implementation in which assessors' value functions are elicited and, if conflict arises, they negotiate, supported by a formal negotiation method iterating toward a nondominated outcome.
8.4 Conclusion

Innovation is critical for competitive success. Building on the successes of Web 2.0, there is an emerging market for Web-based innovation management tools. These tools seek to facilitate the management of innovation processes within organizations. However, they tend to focus on just some parts of the process, and they impose fairly rigid innovation management processes to which an organization must conform. Quite importantly for the theme of this book, they tend to oversimplify the methods by which the portfolio of projects is chosen.
We have described SKITES, a flexible framework that supports innovation management processes, with special emphasis on the embedded GDS problems related to choosing innovation projects. We believe that organizations could benefit from adopting such a framework, introducing a more transparent, fair, and cost-efficient system. Given its potential, we have developed a Web-based architecture supporting SKITES and implemented a Java prototype of it. Web-based architectures are especially appropriate in distributed organizations, where firms work globally with employees, delegations, and departments distributed among different countries. Our architecture allows an organization to define its own innovation management process from the basic SKITES scheme. It is based on a Service Oriented Architecture (SOA) bus supporting several databases and services, and it would be connected to the corresponding enterprise software platform if one exists. SMEs might not have such an enterprise platform, and they could use SKITES, e.g., through a cloud computing environment. The databases supported refer to innovation indicators and to participants, in their four roles. The services supported so far include brainstorming, a discussion forum, preference modeling, a simple negotiation system, voting, and innovation management process definition. As has happened with many other aspects of our lives, we believe that innovation management may benefit from the use of flexible, comprehensive, well-founded solutions implemented through the Web.

Acknowledgements This work was supported by grants MICINN-eCOLABORA and RIESGOS-CM. We are grateful to Felipe Garcia for introducing us to this research topic and market niche. Discussions with the editors greatly enhanced this chapter and our vision of innovation management.
References

Alfaro C, Gomez J, Rios J (2010) From participatory budgets to e-participatory budgets. In: Ríos Insua D, French S (eds) e-Democracy: a group decision and negotiation perspective. Springer, New York
Altshuller G (1999) The innovation algorithm. Technical Innovation Center, Worcester
Aven T (2010) Misconceptions of risk. Wiley, Chichester
Brams S, Fishburn P (2002) Voting procedures. In: Arrow K, Sen A, Suzumura K (eds) Handbook of social choice and welfare, vol 1. North Holland, Amsterdam
Brams S, Fishburn P (2007) Approval voting. Springer, New York
Burstein C, Holsapple C (2008) Handbook on decision support systems, vol 2. Springer, New York
Chesbrough H, Vanhaverbeke W, West J (2008) Open innovation: researching a new paradigm. Oxford University Press, Oxford
Christensen C, Kaufman S, Shih W (2008) Innovation killers: how financial tools destroy your capacity to do new things. Harv Bus Rev 86:98–105
Clamen A (2003) Reducing the time from basic research to innovation in the chemical sciences, vol 2. USA National Research Council, Washington, DC, pp 18–27
Clemen RT, Smith JE (2009) On the choice of baselines in multi-attribute portfolio analysis. Decis Anal 6(4):256–262
Day G (2007) Is it real? Can we win? Is it worth doing? Managing risk and reward in an innovation portfolio. Harv Bus Rev 85(12):3–17
Efremov R, Ríos Insua D (2010) Collaborative decision analysis and e-participation. In: Ríos Insua D, French S (eds) e-Democracy: a group decision and negotiation perspective. Springer, New York
EUROSTAT New Cronos Data Base (2010) http://epp.eurostat.ec.europa.eu/portal/page/portal/statistics/themes
French S (1986) Decision theory. Ellis Horwood, Chichester
French S (2010) The internet and the web. In: Ríos Insua D, French S (eds) e-Democracy: a group decision and negotiation perspective. Springer, New York
French S, Ríos Insua D, Ruggeri F (2007) e-Participation and decision analysis. Decis Anal 4:211–226
French S, Maule J, Papamichail N (2010) Decision behaviour, analysis and support. Cambridge University Press, Cambridge
Golabi K (1987) Selecting a group of dissimilar projects for funding. IEEE Trans Eng Manag 34:138–145
Golabi K, Kirkwood C, Sicherman A (1981) Selecting a portfolio of solar energy projects using multiattribute preference theory. Manag Sci 27:174–189
Herzog P (2008) Open and closed innovation: different cultures for different strategies. Gabler, Wiesbaden
Howells J (2005) The management of innovation and technology. Sage, London
Keeney R (1997) Value-focused thinking. Harvard University Press, Cambridge, MA
Kersten G (2008) Negotiation and e-negotiation. Springer, New York
Kleinmuntz D (2007) Resource allocation decisions. In: Edwards W, Miles RF, von Winterfeldt D (eds) Advances in decision analysis. Cambridge University Press, Cambridge, MA
Lavin JM, Ríos Insua D (2010) Participatory processes and instruments. In: Ríos Insua D, French S (eds) e-Democracy: a group decision and negotiation perspective. Springer, New York
Liesio J, Mild P, Salo A (2007) Preference programming for robust portfolio modeling and project selection. Eur J Oper Res 181(3):1488–1505
Luecke R (2009) The innovator's toolkit. Harvard Business Press, Cambridge, MA
Marriott K, Stuckey PJ (1998) Programming with constraints: an introduction. MIT Press, Cambridge, MA
Martello S, Toth P (1990) Knapsack problems: algorithms and computer implementations. Wiley, Chichester
McGrath RG, MacMillan IC (1995) Discovery driven planning. Harv Bus Rev 73(7):44–54
Pro Inno Europe (2009) European Innovation Scoreboard (EIS) 2009. European Commission, Brussels
Raiffa H, Richardson J, Metcalfe D (2002) Negotiation analysis. Harvard University Press, Cambridge, MA
Rios J, Ríos Insua D (2008) A methodology for participatory budget formation. J Oper Res Soc 59:203–212
Rios J, Ríos Insua D (2009) Supporting negotiations over influence diagrams. Decis Anal 6:153–171
Rios J, Ríos Insua D (2010) Balanced increment and concession methods for negotiation support. RACSAM 104:41–56
Ríos Insua D, French S (eds) (2010) e-Democracy: a group decision and negotiation approach. Springer, New York
Ríos Insua D, Kersten G, Rios J, Grima C (2008) Decision support and participatory democracy. Int J E Bus Res 6:161–191
Salo A, Kakola T (2005) Groupware support for requirements management in new product development. J Organ Comput Electron Commerce 15(4):253–284
Shane S, Ulrich K (2004) Technological innovation and product development and entrepreneurship in management science. Manag Sci 50(2):133–144
Thomson W (1994) Cooperative models of bargaining. In: Aumann R, Hart S (eds) Handbook of game theory, vol 2. Elsevier, Amsterdam, pp 1238–1277
Townsend C, Radjou N, Andrews C, Liu J (2008) The rise of innovation management tools. Forrester Research, Cambridge, MA
Vilkkumaa E, Salo A, Liesio J (2011) Multicriteria portfolio modeling for the development of shared action agendas. Group Decis Negot (to appear)
Chapter 9
An Experimental Comparison of Two Interactive Visualization Methods for Multicriteria Portfolio Selection Elmar Kiesling, Johannes Gettinger, Christian Stummer, and Rudolf Vetschera
Abstract We compare two visualization methods for interactive portfolio selection: heatmaps and parallel coordinates. To this end, we conducted an experiment to analyze differences in terms of subjective user evaluations and in terms of objective measures referring to effort, convergence, and the structure of the search process. Results indicate that subjects who used the parallel coordinates visualization found the method easier to use, perceived the selection process as being less effortful, and experienced less decisional conflict than subjects who used the heatmap visualization. Concerning objective measures, we did not find significant differences in the time taken to complete the selection task. However, we found that subjects who used parallel coordinates engaged in a more exploratory approach when investigating the space of efficient portfolios. Finally, the experiments clearly showed that decision-making styles play an important role in users’ attitude toward the visualization method. Our findings suggest that the choice of visualization method has a considerable impact on both the users’ subjective experiences when using a decision support system for portfolio selection, and on their objective performance.
9.1 Introduction Portfolio selection problems, in particular those involving multiple evaluation criteria, place a significant cognitive burden on the decision maker (DM). Decision support systems (DSS) employing interactive visualization methods that allow the DM to progress iteratively toward a most preferred solution can alleviate this burden. Such systems do not require extensive a priori preference information, but
allow the DM to gradually learn about his/her implicit preferences. This process requires a constant exchange of information between the DM and the DSS. While research on the presentation of information to users has a long tradition in the area of management information systems (MIS), the design of suitable representations and interaction mechanisms for DSS is less well understood. This is particularly true for multicriteria portfolio selection in which users have to deal with a potentially large number of discrete decision alternatives from the set of (Pareto) efficient portfolios. This chapter extends the stream of experimental research on behavioral aspects of interactive multiple-criteria decision making (e.g., Korhonen et al. 1990, 1997) by comparing two graphical methods that allow the DM to freely explore the solution space by changing upper and lower bounds for each criterion. To this end, we implemented interactive heatmaps, which are tables in which cells are colored according to the value taken by a variable, and interactive parallel coordinate plots, in which criteria are depicted on separate axes while portfolios are represented by profile lines. These two methods make it possible to perform different manipulations on the information presented and the bounds on each criterion: a heatmap allows the DM to sort portfolios by each criterion, and/or to set upper and lower bounds for any criterion via a context menu. In parallel coordinate plots, by contrast, the mechanism for setting the upper or lower bounds for criteria is based on dragging bars to mark acceptable intervals and indicates which portfolios will be eliminated during dragging operations. We conducted experiments with 96 business administration students at the University of Vienna using a portfolio selection problem that was familiar to them, namely, that of selecting courses for the forthcoming semester. The two visualization approaches are compared by means of subjective measures such as user satisfaction or understanding of the problem, as well as by objective measures referring to effort, convergence, and the process structure. The remainder of this chapter proceeds as follows: First, we introduce the interactive methods used in the experiments and motivate our research questions (cf. Sections 9.2 and 9.3, respectively). Section 9.4 then outlines the experimental design, which includes descriptions of participants, the underlying portfolio selection problem, and the procedure for performing the experiments. Section 9.5 describes the measures that are used in Section 9.6 to analyze the results. Finally, Section 9.7 concludes with a summary and an outlook on further research.
9.2 Visualization Methods There are various techniques for visualizing multidimensional data. A large group of these approaches aims at representing n-dimensional points as dissimilar objects such as faces (Chernoff 1973), houses (Korhonen 1991), trees (Kleiner and Hartigan 1981), or glyphs (Anderson 1960). Andrews plots (Andrews 1972), by contrast, represent each alternative as a line defined by a trigonometric function parametrized with the point values in the various dimensions, whereas interactive decision maps (Lotov et al. 1997, 2004; Lotov and Miettinen 2008) depict several efficiency
frontiers of two criteria depending on the value of a third criterion. For our experiments, we chose to compare two methods that turned out to be particularly promising in the context of portfolio selection (cf. the application case described by Stummer et al. 2009), namely interactive heatmaps and interactive parallel coordinate plots.

Interactive heatmaps were adopted from visualization methods developed by specialists in data mining (Lotov and Miettinen 2008); they are essentially matrices in which the cells are colored according to the cell value (Cook et al. 2007). Compared to tables, this representation provides a higher information density and makes it easier to identify patterns such as correlations and trade-offs between criteria. The term arises from the red-to-blue color scale that is often used. So far, this visualization method has primarily been applied in molecular biology and clinical applications (e.g., for the purpose of DNA sequence visualization; cf. Eisen et al. 1998; Cook et al. 2007; Gehlenborg et al. 2005; Kibbey and Calvet 2005; Podowski 2006). More recently, however, its use in multicriteria optimization, as a means for visualizing the Pareto frontier or an approximation thereof, was proposed by Pyrke et al. (2007) and Lotov and Miettinen (2008). In portfolio decision analysis, heatmaps are not only convenient for identifying clusters of alternatives but can also be used for selecting a single alternative (Lotov and Miettinen 2008).

In our experiment, each row in the matrix represents a portfolio and each column represents a criterion. Different cell color shadings are used to represent the value of a criterion for a particular portfolio. We used a trichromatic mapping in which poor criterion values, i.e., low (high) values in criteria to be maximized (minimized), were represented by shades of red, medium values by shades of yellow, and good values by shades of green. This mapping corresponds to the intuitive "stop light" color scheme and was easily understood by subjects. An example is provided in Fig. 9.1.

[Fig. 9.1 Heatmap visualization (screen capture)]

To narrow down interesting alternatives by reducing the set of admissible portfolios, the DSS provides mechanisms for filtering out undesirable solutions by specifying upper and/or lower bounds for criterion values. In the software implementation of our experiment, this can easily be achieved by right-clicking a cell and selecting the entries "Set as minimum" or "Set as maximum," respectively, from the context menu. Additionally, subjects can sort portfolios by ascending or descending criterion values by clicking the column title labels, which may reveal interesting patterns and clusterings. Finally, it is possible to remove previously imposed constraints for a single criterion or for all criteria by clicking on "Reset criterion" or "Reset all."

The second method used in our experiments, namely interactive parallel coordinate plots, represents a fundamentally different visualization approach. In parallel coordinate plots (cf. Inselberg 1985; Inselberg and Dimsdale 1990; Inselberg 2009), criteria are represented on separate axes that are laid out in parallel. Portfolios are depicted as profile lines that connect the points marking the values achieved in each criterion on the respective axis. The profile lines of all admissible portfolios are superimposed for ease of comparison. This representation emphasizes geometric interpretability and provides a good overview of the distribution of values, given that the number of efficient portfolios remains moderate. Patterns such as positive or negative correlations can be easily identified in criteria laid out next to each other. The mechanism for setting upper or lower bounds for criteria is based on dragging bars to mark the aspired intervals. During dragging operations, the software indicates which portfolios will be eliminated, thus providing the DM with immediate feedback about trade-offs between criteria. For an example see Fig. 9.2.

[Fig. 9.2 Parallel coordinates visualization (screen capture)]
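Both visualizations ultimately drive the same underlying operation, namely filtering the set of efficient portfolios by per-criterion bounds, plus (for the heatmap) a value-to-color mapping. The following is a minimal sketch under stated assumptions: criterion names and values are illustrative, and the red–yellow–green interpolation is one plausible reading of the "stop light" scheme, not the authors' exact implementation.

def admissible(portfolios, lower=None, upper=None):
    # filter efficient portfolios by per-criterion aspiration bounds;
    # portfolios: list of dicts mapping criterion name -> value,
    # lower/upper: dicts of bounds (omitted criteria are unconstrained)
    lower, upper = lower or {}, upper or {}
    return [p for p in portfolios
            if all(p[c] >= b for c, b in lower.items())
            and all(p[c] <= b for c, b in upper.items())]

def stoplight_color(value, lo, hi):
    # trichromatic mapping for a criterion to be maximized:
    # poor -> red, medium -> yellow, good -> green (RGB triples)
    x = 0.0 if hi == lo else (value - lo) / (hi - lo)
    if x < 0.5:                           # red (255,0,0) to yellow (255,255,0)
        return (255, int(510 * x), 0)
    return (int(510 * (1 - x)), 255, 0)   # yellow to green (0,255,0)

# usage: "Set as minimum" on ECTS keeps the 30- and 32-point schedules
schedules = [{"ects": 30, "spare": 52}, {"ects": 24, "spare": 60}, {"ects": 32, "spare": 48}]
print(admissible(schedules, lower={"ects": 28}))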
9.3 Research Questions Previous research on user acceptance of information technology suggests that users trade off the perceived effort of using a technology and that technology’s perceived usefulness and accuracy (Davis 1989; Venkatesh and Davis 2000). Recent research has employed this cost–benefit framework also successfully for multicriteria DSS (Aloysius et al. 2006). One important factor in decision support that influences the perceived effort is the DM’s familiarity with the support provided. Empirical research suggests that it influences the use of a system in several respects. First, users experienced with the system carry out better planning for their decision-making process and are more focused on the set of decision aids. Experience allows them to refer to stable heuristics based on past experiences rather than requiring considerable
effort (Lusk and Kersnick 1979). DMs that perceive tasks to be of higher effort typically also perceive their results as being less accurate (Aloysius et al. 2006). As there is a positive correlation between decision quality and decision confidence (Kamis and Stohr 2006), those DMs who are familiar with a solution method exhibit a higher degree of confidence in the final solution (Buchanan 1994). However, prior experience also presents a source of possible bias since experiences with similar prior decision formats may not be suited for other tasks (Lusk and Kersnick 1979; Vessey 1991). Note that the concept of parallel coordinates and its variations is well known among DMs, while heatmaps are a relatively new visualization technique and have up to now been only narrowly used (Gehlenborg et al. 2005; Podowski 2006; Auman et al. 2007; Pyrke et al. 2007). At the very beginning of the portfolio selection process, DMs face a vast number of efficient portfolios. There is evidence in the literature that DMs apply noncompensatory strategies in choice modes with a high number of alternatives. Noncompensatory strategies include elimination-by-aspect or lexicographic rules, but do not involve the assessment of trade-offs among attributes (Kottemann and Davis 1991; Buchanan 1994; Korhonen et al. 1997). They are typically used to eliminate many alternatives in the early stages of the decision-making process requiring less effort by the DM but resulting also in less accurate decisions (Buchanan 1994). As heatmaps enable the visualization of high-density information, we expect that heatmaps provide better support for a noncompensatory strategy. Moreover, heatmaps allow users to visually discover relationships in large and complex data
sets (Kibbey and Calvet 2005; Podowski 2006; Auman et al. 2007). Consequently, subjects who are supported by heatmaps should have a good overview of the initially presented information (Kottemann and Davis 1991). When the number of displayed alternatives is reduced, the need to deal with high information density becomes increasingly irrelevant. In this stage, users typically follow a compensatory strategy (Kottemann and Davis 1991; Korhonen et al. 1997). Compensatory strategies involve explicit assessments of the trade-offs between alternatives over the entire range of objective values. Therefore, compensatory strategies require more effort and time by the DM, but in turn result in more accurate decisions than noncompensatory strategies (Buchanan 1994; Korhonen et al. 1997). Compensatory strategies and the use of explicit trade-offs often require DMs to confront the conflict, while in noncompensatory strategies the DM does not have to engage in challenging computational and emotional processes. The difficulties of explicit trade-offs are shown to be sources of decisional conflict – a DM’s negative affective state facing decisions involving risk – and lower post-decisional confidence (Aloysius et al. 2006; Koedoot et al. 2001; Kottemann and Davis 1991). Parallel coordinates support trade-off tasks via their geometric interpretability with a moderate number of alternatives to be compared (Larkin and Simon 1987; Bierstaker and Brody 2001; Zhang et al. 2008), while users of heatmaps have to refer to the numerical values mapped to color (Cook et al. 2007). Research Question 1 Does the provision of DMs with either interactive heatmaps or interactive parallel coordinate plots have an impact on users’ perception of the quality and effort of the portfolio selection process?
The methods used in the experiments implement an unstructured decision process that gives the DM the possibility to freely explore the solution space. DMs can easily define upper and lower bounds for the maximization and minimization of criteria, thereby excluding portfolios that do not achieve the aspired value in the selected criterion. Due to their characteristics, heatmaps provide a good overview of the entire set of portfolios. Furthermore, our implementation of heatmaps allows the DM to sort portfolios according to the values in distinct criteria. This helps the DM get an initial feeling for the characteristics of the presented portfolios. In contrast to heatmaps, parallel coordinate plots perform best when there is a moderate number of portfolios that need to be visualized. Empirical research has shown that interpretation of graphs is strongly affected by an increase in visual complexity (Coll et al. 1994; Meyer et al. 1997; Swink and Speier 1999). The interactive parallel coordinate plots used in this study enable subjects to observe which portfolios are about to be eliminated while a bound is being modified. Compared to heatmaps, the dragging mechanism used in the parallel coordinate representation allows for a more effortless modification of bounds, which favors a more exploratory approach. Regarding time, familiarity of users with the visualization techniques implemented in the DSS generally leads to tasks being solved more quickly (Coll et al. 1991; Meyer et al. 1997; Lee et al. 2008). Research Question 2 Does the provision of DMs with either interactive heatmaps or interactive parallel coordinate plots influence the duration and structure of the portfolio selection process and the effort exerted by users?
Increasing the number of criteria and portfolios increases the level of task complexity, which is determined by the number of subtasks that need to be performed to solve the problem. An increase in task complexity leads to an increase in the cognitive effort that needs to be invested in the decision-making process (Wood 1986; Campbell 1988; Smelcer and Carmel 1997). Literature and prior empirical studies show that an increase in task complexity results in an increase in decision time and/or a decrease in decision accuracy or decision quality and, as a consequence, in users' confidence (Dickson et al. 1986; Coll et al. 1994; Bystroem and Jaervelin 1995; Smelcer and Carmel 1997; Swink and Speier 1999; Mennecke et al. 2000; Borthick et al. 2001; De et al. 2001). An increase in perceived effort typically leads to an increase in decisional conflict. Decisional conflict, in turn, is negatively related to the perceived accuracy of decisions and preferences for a DSS. Furthermore, an increase in perceived effort has a negative influence on the user's attitudes toward the implemented decision support (Chu and Spires 2000; Aloysius et al. 2006). However, effort is positively related to decision quality, which has a positive effect on decision confidence, which in turn has a positive impact on perceived usefulness (Kamis and Stohr 2006). In parallel coordinate plots, the number of portfolios that can be visualized is limited by the size of the screen space. Therefore, the larger the number of admissible portfolios, the more profile lines overlap and cross on the axes. Consequently, an increase in the number of visualized portfolios leads to an increase in information density and visual complexity. This in turn makes it more difficult for the DM to observe individual values as well as relationships between distinct portfolios. In heatmaps, portfolios are visualized in tabular form, which is less sensitive to an increase in the number of visualized portfolios. Therefore, increasing the number of admissible portfolios does not lead to an increase in visual complexity. Consequently, we expect an increase in the number of admissible portfolios to have a stronger effect when using parallel coordinates. However, due to the fact that subjects have to scroll more to observe all portfolios when using heatmaps, we expect them to need more steps and time to choose a portfolio in the more complex treatment. These differences should be reflected in the subjective as well as objective measures of the process. Research Question 3 Does the level of problem complexity have an impact on subjective and objective measures of the portfolio selection process and outcome?
The impact of personal characteristics on decision making constitutes another interesting field of research (cf. Loo 2000; Spicer and Sadler-Smith 2005). Thunholm (2004) has pointed out that the evaluation of individual decision-making styles makes it possible to better understand the entire decision-making process. In general, decision-making style refers to the way individuals process information in order to solve problems in several fields (Gambetti et al. 2008). Decision style has been defined as a "learned habitual response pattern exhibited by an individual when confronted with a decision situation" (Scott and Bruce 1995, p. 820). This definition was expanded by the observation that individual decision-making styles are also based on cognitive abilities (Thunholm 2004). As cognitive
abilities are stable and not easily changed, a DSS needs to adapt to the needs of individual DMs (Thunholm 2004, 2008). Scott and Bruce (1995) define five behavioral dimensions based on DMs' self-evaluation: (1) a rational style, described as a logical and structured approach toward decision making; (2) an intuitive style, characterized by a tendency to rely on premonitions and feelings; (3) a dependent style, where individuals search for advice and guidance from others before making important decisions; (4) an avoidant style, characterized by attempts to postpone or avoid making decisions; and (5) a spontaneous style, characterized by a feeling of immediacy and impulsive decisions. The five-factor model was confirmed in several follow-up studies (e.g., Loo 2000; Thunholm 2004; Spicer and Sadler-Smith 2005; Gambetti et al. 2008; Thunholm 2008; Baiocco et al. 2009). These studies concluded that even though an individual might have a predominant style, decision styles are not mutually exclusive, i.e., DMs use a combination of different styles when making important decisions. While gender has been found to have no influence on the preferred decision-making style (Loo 2000; Spicer and Sadler-Smith 2005; Baiocco et al. 2009), research on the technology acceptance model (TAM) has shown that women differ in their perception but not in their use of e-mail (Gefen and Straub 1997). Furthermore, while women consider usefulness, perceived ease of use, and social norms, men are more driven by their perception of usefulness in the decision-making process (Venkatesh and Morris 2000). Research Question 4 Do the individual characteristics of a DM have an impact on subjective and objective criteria of the portfolio selection process and outcome?
9.4 Experimental Design In order to investigate the research questions presented above, we conducted a controlled experiment that adopted a between-subject approach. Two factors were systematically varied to create the test conditions, namely visualization method (i.e., heatmap vs. coordinates) and problem complexity (i.e., simple vs. complex). The latter was manipulated by posing two different portfolio selection problems that differed in the number of criteria used and in the number of efficient portfolios.
9.4.1 Participants Subjects were recruited from various classes in the undergraduate and graduate business administration programs at the University of Vienna, Austria. As an incentive for participation, a lottery was held in which 12 MP3 music players were distributed among the participating students. The 96 subjects were assigned to one of 14 groups based on the time slots for which they indicated availability when registering for the experiment. All subjects in a group solved the same problem
Table 9.1 Sample composition and treatments

                     Simple problem        Complex problem
Mode/participants    m     f     Total     m     f     Total
Heatmap              14    9     23        16    12    28
Coordinates          11    10    21        14    10    24
under the same treatment conditions. Table 9.1 provides an overview of the sample composition and the distribution across treatments. All subjects were proficient in the use of personal computers. While participation in the experiment was voluntary, it was pointed out to subjects that the “diligent execution” of all tasks was a necessary requirement for entering the lottery drawings. All students did appear to take the exercise seriously.
9.4.2 Apparatus Both visualization methods were implemented in C# on Windows. The program automatically recorded and time-stamped each action performed by a subject (i.e., imposition, modification or release of constraints, sorting, selection of a portfolio, indication of completion). During experiments, the program was simultaneously run on 15 identical computers in the computer lab in which the experiment was performed.
9.4.3 Portfolio Selection Problem In order to provide a realistic setting, we used a portfolio selection problem with which participants could readily identify, namely the selection of courses for the forthcoming semester. Each participating student had to complete the task of selecting a class schedule that best corresponds to his/her individual preferences. Note that at Austrian universities, students are not provided with a ready-made schedule, but rather are free to set theirs up individually. The subjects were therefore familiar with the portfolio selection problem used in the experiments and exhibited a high level of problem identification. The following three criteria were used to characterize each alternative in the “simple problem” treatment: total number of ECTS (European Credit Transfer System) points (maximize), total remaining spare time per week (maximize), and average evaluation scores in previous semesters of courses in the schedule (maximize). The amount of spare time was calculated based on 14 available hours per day net of the time of required attendance at the university, which was assumed to start at the beginning of the first course and last until the end of the last course each day. By maximizing the total remaining spare time, subjects implicitly seek to block time slots for classes within a day (e.g., they prefer having one course
at nine and the second at eleven and no class for the rest of the day instead of having the second class not until seven in the evening) as well as within the week (e.g., having all courses scheduled within 4 days with no classes on the remaining day is typically preferred to dispersing classes over all days of the week). Note that there might be differing preferences (e.g., by students who are part-time or full-time employed), but subjects were instructed to maximize all three objectives. In the "complex problem" treatment, four additional criteria were used: average evaluation score of course lecturers in previous semesters (maximize), percentage of students who passed the course that had the lowest pass rate of all courses in the portfolio in previous semesters (maximize), prospective average number of students in class (minimize), and average grade obtained by students in past courses (maximize).

Sets of efficient class schedules (i.e., the set of (Pareto) efficient portfolios) for both problem instances were calculated using actual data on 31 bachelor-level courses offered at the University of Vienna. Efficient schedules were identified by completely enumerating all 2^31 > 2 x 10^9 course combinations and conducting pairwise dominance checks while accounting for the following feasibility constraints: (1) courses cannot overlap, (2) if multiple courses of the same type are available, only one of them can be chosen, and (3) the maximum total number of ECTS points of all courses in a schedule is limited to 50 (with 30 ECTS points being the regular workload per semester). Solving both problem instances resulted in a set of 331 nondominated portfolios for the three-objective problem and a set of 2,614 nondominated portfolios for the seven-objective problem. While the complete solution set was used for the "simple problem" treatment in the experiments, the set used in the "complex problem" treatment was limited to 999 randomly selected portfolios, because an increase by 668 alternatives already considerably increases the level of problem complexity without affecting the responsiveness of the interactive visualizations in any noticeable way.
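As an illustration of the feasibility checks and the spare-time criterion just described, here is a hedged sketch. The course data layout (a "meetings" list of (day, start_hour, end_hour) triples, a "type" label, and an "ects" count) is assumed for exposition only, and enumerating all 2^31 subsets as in the chapter would of course call for a compiled implementation rather than a Python fragment like this.

def overlaps(a, b):
    # two meetings (day, start, end) collide when they share a day
    # and their time intervals intersect
    return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

def feasible(schedule, courses, max_ects=50):
    # the three feasibility constraints: (1) no overlapping courses,
    # (2) at most one course per course type, (3) at most 50 ECTS points
    chosen = [courses[i] for i in schedule]
    meetings = [m for c in chosen for m in c["meetings"]]
    if any(overlaps(a, b) for k, a in enumerate(meetings) for b in meetings[k + 1:]):
        return False
    types = [c["type"] for c in chosen]
    if len(types) != len(set(types)):
        return False
    return sum(c["ects"] for c in chosen) <= max_ects

def spare_time(schedule, courses, hours_per_day=14, days=5):
    # weekly spare time: 14 available hours per day minus, on each day
    # with classes, the span from the first course's start to the last one's end
    total = hours_per_day * days
    for day in range(days):
        todays = [m for i in schedule for m in courses[i]["meetings"] if m[0] == day]
        if todays:
            total -= max(m[2] for m in todays) - min(m[1] for m in todays)
    return total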
9.4.4 Procedure Our experiment consisted of a scripted verbal introduction, a training session, a scripted explanation of the problem setting, the actual portfolio selection exercise, and an online survey to be filled out immediately after completion of the exercise. The total time for a complete session was about 45 minutes. At the beginning of a session, the scripted verbal introduction briefly explained portfolio selection problems in general terms and demonstrated the visualization method and interaction mechanisms used in the respective treatment. (To ensure comprehensibility of all explanations, we conducted a thorough pretest that involved five subjects.) Then, a training session that used a simple, generic problem instance that involved 15 randomly generated nondominated portfolios had to be completed by each participant within 5 minutes. The number of criteria (labeled with capital letters “A”–“C” and “A”–“G,” respectively) was the same as in the actual exercise. Before the experimental exercise commenced, the class schedule selection task was
explained to participants. In order to ensure uniformity and control across groups and sessions, questions were generally not entertained. However, a written summary detailing the criteria was available to all subjects during the experiment. In the exercise, subjects had to narrow down the set of admissible alternatives and finally indicate their most preferred option. They could then terminate the process and proceed to the survey. A maximum time limit of 15 minutes was allowed for the task and shown to users as a countdown on screen. Finally, a ten-page online survey was used to collect demographic information and elicit subjective outcome measures.
9.5 Measurement We used both subjective and objective measures to obtain a broad picture of the impact of the two visualization methods studied.
9.5.1 Subjective Measures

Subjective measures capture users' evaluations of the system, the decision process, and its outcomes. All measures are based on established scales from the literature. Concerning the evaluation of the system, we followed the well-established TAM developed by Davis (1989), which has evolved into one of the most widely used models in the evaluation of user attitudes toward information systems. It distinguishes two important factors that determine attitudes toward (and eventually the use of) an information system: perceived ease of use (PercEase) and perceived usefulness (PercUse) of the system. The subjective evaluation of the decision process was also measured along two dimensions. The first dimension is perceived effort (PercEff), which indicates how difficult or easy users found it to perform the selection task using the interactive visualization provided to them. The second dimension is perceived decisional conflict (DecConf). This measure is even more strongly related to the subjects' feelings during the process; items included in this measure refer to anxiety, conflict, and the notion of tenseness during the process. For both measures, we applied scales developed by Aloysius et al. (2006). Finally, we used perceived accuracy (PercAcc) to evaluate the outcome of the process (also based on a scale by Aloysius et al. (2006)).
9.5.2 Objective Measures Objective measures describe properties of the search process that can be determined from the process logs recorded by the software. These measures not only provide
198
E. Kiesling et al.
objective data on the effort involved in the task, but also enable us to identify structural differences in the search processes performed using different visualizations. While it is difficult to objectively measure the actual effort a DM spent in solving a given problem, we approximate it by two indicators: • Steps: The total number of filtering steps that a subject performed during the entire process. • Total time (TotTime): The total time between the first and last interactions with the system, measured in seconds. In order to indicate whether the search process converged smoothly toward the most preferred portfolio, or whether the DM frequently backtracked from the search path, we considered the absolute number of reversals (RevAbs). This measure counts the number of filtering steps that led to an increase, rather than a decrease, in the number of admissible portfolios. A high value of RevAbs thus indicates a rather erratic search process. The speed at which portfolios are eliminated in different stages of the search process is also an important process characteristic. Subjects could first try to cut down the number of admissible portfolios quite radically, and then spend more time working with the remaining “good” portfolios, or they might try to retain a broad view of possible portfolios for a long time and converge quickly to the solution only in late stages of the process. To capture such differences in behavior, we divided the entire search process into three phases of equal length (in clock time), and separately considered the first and the third phase. This leads to the measures Avg1 and Avg3, which represent the average number of admissible portfolios in the first and last third of the process, represented as fractions of the total number of possible portfolios. If no steps took place exactly at the boundaries between phases, the first step beyond the boundaries was also included in the calculation of averages. This was necessary since in some experiments, the first step that actually led to a reduction in the number of admissible portfolios was performed after a third of the total time had passed, and consequently, in these experiments Avg1 would otherwise be equal to the total number of portfolios.
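The sketch below shows one simplified reading of how these measures could be computed from the time-stamped process logs. The log format is an assumption for exposition, and the phase averages are taken over the steps recorded in each third (with the chapter's boundary rule) rather than being time-weighted.

def process_measures(log, total_portfolios):
    # objective process measures from a chronological log of filtering steps,
    # where log is a list of (timestamp_seconds, n_admissible) pairs
    steps = len(log)
    t0, t1 = log[0][0], log[-1][0]
    tot_time = t1 - t0

    # RevAbs: filtering steps that increased the number of admissible portfolios
    rev_abs = sum(1 for (_, a), (_, b) in zip(log, log[1:]) if b > a)

    def phase_avg(lo, hi):
        # average admissible count over the steps falling into [lo, hi];
        # if none fall inside, include the first step beyond the boundary
        vals = [n for t, n in log if lo <= t <= hi]
        if not vals:
            vals = [next((n for t, n in log if t > hi), log[-1][1])]
        return sum(vals) / len(vals) / total_portfolios

    third = tot_time / 3
    return {
        "Steps": steps,
        "TotTime": tot_time,
        "RevAbs": rev_abs,
        "Avg1": phase_avg(t0, t0 + third),
        "Avg3": phase_avg(t1 - third, t1),
    }

# e.g., a session over 331 efficient portfolios (illustrative numbers);
# the 210 -> 250 step counts as one reversal
log = [(0, 331), (40, 210), (90, 250), (200, 120), (420, 45), (540, 12)]
print(process_measures(log, total_portfolios=331))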
9.6 Results 9.6.1 Construct Validity and Descriptive Statistics The psychometric constructs used in our research questions were all measured using multi-item scales. Although well-established scales were used for all constructs, we nevertheless tested their validity by calculating Cronbach Alpha values for all constructs as shown in Table 9.2. The Alpha values for almost all constructs are well above the recommended threshold of 0.7 (Hair 2010). The only construct that does not reach the threshold is
Table 9.2 Cronbach alpha values for constructs

Construct                       Sample size   Number of items   Alpha
Subjective outcome measures
  Perceived usefulness          96            5                 0.9297
  Perceived ease of use         96            6                 0.9521
  Decisional conflict           96            3                 0.7625
  Perceived effort              96            2                 0.6506
  Perceived accuracy            96            3                 0.8297
Decision styles
  Rational                      96            5                 0.8255
  Intuitive                     96            5                 0.7905
  Dependent                     96            5                 0.8504
  Avoiding                      96            5                 0.9284
  Spontaneous                   96            5                 0.8320
perceived effort, whose Alpha value 0.65 is also very close to the threshold and can still be considered acceptable for exploratory studies. We therefore use the constructs as specified for our further analysis. Table 9.3 gives an overview of descriptive statistics for the outcome variables we are considering, grouped by our main factors.
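For reference, scale reliabilities of the kind reported in Table 9.2 follow the standard Cronbach alpha formula; a minimal sketch, assuming a (subjects x items) response matrix:

import numpy as np

def cronbach_alpha(items):
    # alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum() / X.sum(axis=1).var(ddof=1))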
9.6.2 Subjective Outcome Measures

We first analyze the subjective outcome measures. For all our analyses, we use a linear regression model incorporating both the experimental factors and personal characteristics of our subjects as explanatory variables. The main results of these models are shown in Table 9.4. In performing the regression analysis, we used a dummy variable for the problem with seven criteria rather than the actual number of criteria, since we are only interested in the effect of complexity and not in an effect of adding each single criterion to the problem. Only a few significant effects were uncovered by this analysis. As could be expected, users felt more comfortable with the more familiar coordinate-based visualization than with heatmaps. Users in the parallel coordinate treatment both reported significantly greater ease of use of the system and experienced significantly less decisional conflict in their decision-making processes. They also perceived the effort involved in using the interactive parallel coordinate visualization to be lower than that of using the heatmap, although this relationship is only weakly significant. Thus we find some, but not very strong, support for Research Question 1. However, as Fig. 9.3 clearly shows, the impact of view type diminishes in more complex problems, and even vanishes in the case of decisional conflict. This is also picked up in the regression analysis by the significant interaction term between view and criteria in the regression for decisional conflict. For perceived ease of use and perceived effort, this interaction term is not significant.
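A regression of this form could be specified, for instance, as sketched below; the data file and all column names are illustrative assumptions, not the authors' actual dataset.

import pandas as pd
import statsmodels.formula.api as smf

# one row per subject; "view" and "criteria" encode the two experimental factors
df = pd.read_csv("experiment.csv")
df["coord"] = (df["view"] == "coordinates").astype(int)  # view dummy
df["seven"] = (df["criteria"] == 7).astype(int)          # complexity dummy

model = smf.ols(
    "DecConf ~ coord + seven + coord:seven + gender"
    " + rational + intuitive + dependent + avoiding + spontaneous",
    data=df,
).fit()
print(model.summary())  # coefficients and t-values as reported in Table 9.4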
Table 9.3 Descriptive statistics for variables (mean and standard deviation, by number of criteria)

Variable   View    3 criteria:           7 criteria:
                   Mean (SD)             Mean (SD)
PercUse    Heat    21.8261 (7.9810)      21.8214 (8.1924)
           Coord   23.4286 (8.3281)      22.4167 (6.6784)
PercEase   Heat    24.6957 (10.5977)     28.2857 (10.2048)
           Coord   31.5238 (12.8437)     31.2500 (9.8522)
DecConf    Heat    12.1304 (4.3307)      9.8214 (4.6352)
           Coord   7.5714 (3.7759)       10.1250 (4.5713)
PercEff    Heat    7.8261 (2.5522)       7.1429 (2.4753)
           Coord   5.8095 (3.4003)       6.5417 (2.6040)
PercAcc    Heat    11.5217 (4.1655)      12.5000 (3.7466)
           Coord   12.0952 (4.7739)      11.6667 (4.7335)
Steps      Heat    14.8696 (14.6855)     16.7857 (9.9195)
           Coord   52.7143 (34.1909)     64.8750 (45.6340)
TotTime    Heat    568.1304 (223.9980)   669.6429 (194.9127)
           Coord   450.8571 (150.3647)   548.5833 (267.2672)
RevAbs     Heat    4.3043 (5.5304)       3.6071 (2.4546)
           Coord   11.6667 (9.1724)      10.2500 (8.7190)
RevRel     Heat    0.1562 (0.1232)       0.1617 (0.1002)
           Coord   0.1997 (0.0751)       0.1365 (0.0700)
Avg1       Heat    0.5625 (0.1697)       0.4446 (0.2015)
           Coord   0.3160 (0.1665)       0.3188 (0.2007)
Avg3       Heat    0.2155 (0.2238)       0.1646 (0.1990)
           Coord   0.0347 (0.0432)       0.0066 (0.0089)
PathLen    Heat    5.0040 (6.3142)       4.3735 (3.3991)
           Coord   2.7439 (1.5491)       2.0353 (1.9858)
Table 9.4 Regression analysis – subjective outcome variables (cells: coefficient, with t-value in parentheses; significance stars as in the original)

Variable          PercUse             PercEase            DecConf              PercEff             PercAcc
(Intercept)       2.6515 (0.4826)     0.0784 (0.0098)     12.2935*** (3.7295)  9.9349*** (4.6448)  8.0964* (2.3203)
View              3.0459 (1.4467)     8.4951** (2.7804)   4.5012*** (3.5635)   2.0852* (2.5440)    0.8674 (0.6487)
Criteria          0.3545 (0.1805)     2.7228 (0.9554)     2.0310 (1.7237)      0.3044 (0.3981)     0.6290 (0.5043)
Gender            1.0258 (0.7013)     3.3545 (1.5804)     0.7173 (0.8174)      0.2125 (0.3731)     1.5939 (1.7158)
Rational DS       0.5964*** (4.5057)  0.7462*** (3.8850)  0.1866* (2.3499)     0.0788 (1.5289)     0.1338 (1.5921)
Intuitive DS      0.2105 (1.4470)     0.0826 (0.3912)     0.0291 (0.3337)      0.0157 (0.2767)     0.0500 (0.5410)
Dependent DS      0.1783 (1.5007)     0.2028 (1.1766)     0.0859 (1.2046)      0.0324 (0.7008)     0.0543 (0.7198)
Avoiding DS       0.0280 (0.2596)     0.0594 (0.3789)     0.0293 (0.4524)      0.1001* (2.3820)    0.0348 (0.5072)
Spontaneous DS    0.0991 (0.6319)     0.1401 (0.6152)     0.1077 (1.1444)      0.0610 (0.9980)     0.0837 (0.8404)
View x Criteria   2.4464 (0.8602)     5.7447 (1.3919)     5.4738** (3.2079)    1.5343 (1.3856)     1.6906 (0.9359)
R²                0.2226              0.1942              0.2026               0.1044              0.0168
[Fig. 9.3 Distribution of ease of use and decisional conflict for the treatment groups (boxplots of Ease of Use and Decisional Conflict for Heat/3, Coord/3, Heat/7, Coord/7)]
All these relationships in some way concern the difficulties that users experienced in using the system. The subjective quality of results, as measured by the perceived usefulness of the system and the perceived accuracy of results, was not affected by either of the two main factors analyzed in our study.
Table 9.5 Regression analysis – objective outcome variables (cells: coefficient, with t-value in parentheses; significance stars as in the original)

Variable          Steps                TotTime              RevAbs             Avg1                Avg3
(Intercept)       15.1927 (0.6295)     385.4339* (2.3571)   7.1584 (1.2864)    0.5203*** (3.4422)  0.1430 (1.1549)
View              37.2515*** (4.0276)  116.6639 (1.8618)    7.0584** (3.3099)  0.2249*** (3.8832)  0.1894*** (3.9911)
Criteria          4.2691 (0.4948)      137.6545* (2.3550)   0.2368 (0.1191)    0.1373* (2.5416)    0.0460 (1.0381)
Gender            3.7594 (0.5851)      118.0890** (2.7126)  1.9398 (1.3093)    0.0147 (0.3646)     0.0288 (0.8724)
Rational DS       0.0520 (0.0894)      5.2872 (1.3421)      0.0086 (0.0641)    0.0025 (0.6792)     0.0033 (1.0906)
Intuitive DS      0.4193 (0.6561)      7.8104 (1.8037)      0.0524 (0.3555)    0.0072 (1.7987)     0.0046 (1.3952)
Dependent DS      0.4623 (0.8858)      8.0221* (2.2690)     0.1737 (1.4435)    0.0047 (1.4397)     0.0043 (1.6012)
Avoiding DS       0.5160 (1.0880)      7.2165* (2.2461)     0.0864 (0.7904)    0.0016 (0.5255)     0.0006 (0.2416)
Spontaneous DS    0.6701 (0.9722)      8.1022 (1.7351)      0.1404 (0.8833)    0.0039 (0.9000)     0.0010 (0.2839)
View x Criteria   9.5208 (0.7620)      31.4676 (0.3718)     0.9777 (0.3394)    0.1065 (1.3604)     0.0154 (0.2402)
R²                0.3210               0.1893               0.1731             0.1991              0.2350
Contrary to our expectations, problem complexity as measured by the number of criteria did not have any impact on our subjective measures. Thus, Research Question 3 has to be answered negatively for this group of outcome measures. Regarding Research Question 4, which concerned user characteristics, we noticed that the users’ decision-making style has a strong impact on their subjective evaluation of the decision support system. Users who scored high on the rational style found the system both more useful and easier to use, and also experienced less decisional conflict. In contrast, users who scored high on the avoiding decision style indicated a greater perceived effort.
9.6.3 Objective Outcome Measures

Table 9.5 gives an overview of the regression analyses performed for the objective outcome variables measuring effort, solution time, and process structure. As an objective measure of effort, we use the number of steps involving filtering by the user. The regression analysis indicates that the different methods led to significantly different efforts: users of the parallel coordinate visualization performed almost four times as many filter changes as users of the heatmap display. While this result can partly be explained by the fact that users of the heatmap display performed other operations (like sorting, which was not available in the coordinate view), it is still an important indicator of the actual cognitive effort involved in solving the problem using different graphical representations.

Contrary to our expectations, problem complexity did not have a significant impact on the number of steps performed. Although the mean values shown in Table 9.3 indicate that in both views a slightly larger number of steps was actually performed for the more complex problem, this difference is not statistically significant. As Fig. 9.4 suggests, this lack of significance might be due to the considerably larger variance in the number of steps used for more complex problems.

[Fig. 9.4 Distribution of the number of filtering steps used for the treatment groups]

The converse is true for the total time taken to solve the problem. Here the regression analysis indicates that problem complexity has a significant impact, while the view type does not. As could be expected, subjects on average took longer to solve the more complex problems than the simpler ones. This finding suggests that subjects have an explicit or implicit "time budget" for completing the task which is determined by the task characteristics (and not the tool characteristics). However, while the method used did not show a significant effect in the regression analysis, a t-test indicates that average times using parallel coordinates (502.98 s) are significantly shorter than the times taken using the heatmap (623.8627 s, t = 2.7042, p = 0.00817). Total time was also the only outcome variable in which the gender of the user had a significant impact: according to the regression analysis, it took women about 2 min more to arrive at a solution than men. A very small, but statistically significant, influence is also related to decision styles, where the dependent style reduces and the avoiding style increases the time until arriving at a decision.

Concerning the structure of the decision process, RevAbs measures the number of "backward steps" in the decision process by counting the number of filtering operations which led to an increase, rather than a decrease, in the number of admissible portfolios. Figure 9.5 shows the considerable variation in this indicator. The left part of this figure represents the time path of one of the 12 experiments exhibiting an entirely smooth convergence to the most preferred solution (RevAbs = 0), while the right part shows one of the most extreme cases, with RevAbs = 31.
[Fig. 9.5 Examples of sessions with low and high values of RevAbs (number of admissible portfolios over time)]
[Fig. 9.6 Distribution of RevAbs across experimental conditions for male and female users]
The regression analysis shown in Table 9.5 indicates that view type is the only variable that has a significant impact on the number of backward steps. Interestingly, this effect seems to be quite different for male and female users, as shown in Fig. 9.6: the difference between the parallel coordinate method and the heatmap method is much more marked for male users than for female users. However, a regression analysis including the interaction term between these two variables did not show a significant interaction.

Another important characteristic of the decision process is the rate and time structure of convergence. To analyze this property, we consider the average number of admissible portfolios in the first (Avg1) and last (Avg3) third of the entire process. As Table 9.5 shows, both variables are significantly influenced by the choice of method. Users of the parallel coordinate method have already achieved a
significantly larger reduction in the number of admissible portfolios in the first third of the process, and they maintain this lead in the last third. Surprisingly, there is also a larger reduction in the earlier stages of the process when the number of criteria is larger. This difference between the two display modes can clearly be observed in Fig. 9.7, which shows the relative number of admissible portfolios (standardized by the total number of portfolios) in each tenth of the entire process. In the parallel coordinate mode, after only 10% of total time there were no more cases in which more than 80% of the portfolios were still admissible, and typically less than 20% of the portfolios remained admissible after half the process. In contrast, in the heatmap display, users reverted to a situation in which all portfolios were admissible even in later stages of the process.

Fig. 9.7 Relative number of admissible portfolios over time for both view modes

Summarizing the results of this section, we find much stronger support for Research Question 2 (impact on the duration and structure of the portfolio selection process and the effort exerted by users) than for Research Question 1 (impact on users' perception of the quality and effort of the portfolio selection process). Research Question 3 (impact of problem complexity) was only weakly confirmed. Regarding Research Question 4 (impact of individual characteristics of the DM), we found a weaker impact on objective outcome measures than on the subjective measures.
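The convergence measures used above are straightforward to reproduce from the same session logs. The sketch below (again our own illustration under assumed log formats, not the authors' code) computes Avg1 and Avg3 as well as the per-tenth shares of admissible portfolios plotted in Fig. 9.7:

def convergence_profile(admissible_counts, total_portfolios, bins=10):
    # Avg1/Avg3: mean number of admissible portfolios in the first and
    # last third of the session; shares: fraction of portfolios still
    # admissible in each tenth of the session (cf. Fig. 9.7).
    n = len(admissible_counts)
    third = max(1, n // 3)
    avg1 = sum(admissible_counts[:third]) / third
    avg3 = sum(admissible_counts[-third:]) / third
    shares = []
    for t in range(bins):
        lo = n * t // bins
        hi = max(lo + 1, n * (t + 1) // bins)
        seg = admissible_counts[lo:hi]
        shares.append(sum(seg) / len(seg) / total_portfolios)
    return avg1, avg3, shares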
9.7 Conclusions

For our study, we conducted experiments with two visual interactive methods that aim at supporting DMs in multicriteria portfolio selection. Both methods make it possible to explore the space of (Pareto) efficient portfolios by setting and/or
manipulating aspiration levels for a number of objectives and, thus, do not require a rigorous a priori preference elicitation process. The results show that the choice of visualization method indeed has a considerable impact on both the users' subjective experiences when using the system and their objective performance.

With respect to subjective measures, parallel coordinate plots turned out to be superior to heatmaps: subjects not only rated parallel coordinate plots as easier to use, but they also experienced less decisional conflict and considered the effort involved to be lower than in the case of heatmaps. Contrary to our expectations, problem complexity did not have a significant impact on any of the subjective measures (perceived usefulness, ease of use, decisional conflict, and perceived accuracy). However, differences in the subjective perception of the visualization methods in the less complex task were reduced by an increase in problem complexity.

Concerning objective measures, we did not find any strong relationship between problem representation and the total time required for problem solving, although subjects who used the parallel coordinate method ran through a considerably larger number of filtering steps and reduced the number of admissible portfolios much faster than subjects using the heatmap method. In addition, our results suggest that user characteristics have an impact on subjective evaluation and, thus, on the acceptance of decision support methods. This highlights the necessity of establishing the right fit between problem representation, support tools, and the users' cognitive style, which constitutes an important task for DSS designers, to which studies like ours can contribute.

Although we have been able to identify some interesting effects of the different visualization types, our study has several limitations. We only studied one decision problem in one population of users. Since our results indicate a strong influence of user characteristics, it is obvious that generalization of these results to other problems and other user groups will require more experiments. At a more general level, we also considered only one particular type of decision support for multiobjective portfolio problems: a free search in the set of nondominated portfolios using thresholds on criteria. Other approaches, which use different ways of specifying user preferences, such as attribute weights, will require different types of interaction with users and different problem representations. These approaches would require an entirely different set of experiments, although some of the methods we have developed here (e.g., our approach to analyzing the decision process) could also be used to study these questions.

Apart from these more remote topics, our results also raise several more closely related topics for future studies. On the one hand, we have so far only established that the two methods we studied cause different behavior. But it is not yet clear which features of the two methods cause the larger number of reversals when using heatmaps, or the faster convergence when using coordinate plots. Experiments with a wider range of different plots will be needed to precisely identify the characteristics of display formats and the cognitive processes that lead to such differences.
Furthermore, while we clearly observe a more direct convergence path for the parallel coordinates view type, this does not yet allow us to conclude that this view type is “better.” While we can say that subjects make faster progress
in converging toward a final solution, the less monotonic and more indirect convergence path observed for the heatmap view type may be interpreted to mean that users conduct a more exhaustive exploration of the solution space and gain a better overview and understanding of its structure. It would therefore be interesting to evaluate subjects' overview of the solution space after the treatment in order to address the question of which of the view types leads to a better understanding of its structure. On the other hand, the fact that one decision process is different from another does not answer the important question of which process (and which visualization method) leads to better outcomes. Evaluating the quality of solutions to multicriteria decision problems is a difficult task. If users were able to directly indicate which solution is better, they would not need elaborate decision support tools. However, this question must ultimately be answered in order to make firm recommendations on the choice of visualization methods.
Part III
Applications
Chapter 10
An Application of Constrained Multicriteria Sorting to Student Selection

Julie Stal-Le Cardinal, Vincent Mousseau, and Jun Zheng
Abstract This research considers decision situations in which a portfolio of individuals is to be selected. Criteria relative to the quality of individuals conflict with constraints on the composition of the group. This work applies to student selection problems, in which criteria on individual students conflict with constraints relative to the student portfolio as a whole. In this chapter, we propose an innovative methodology to support such a decision. In a first step, the Electre Tri method sorts students into ordered predefined categories according to how each student individually fulfills the selection requirements; in a second step, a mathematical programming formulation combines the results of the first step with the requirements on the student group to select a group of students who are individually good and satisfactory as a group. The methodology identifies a portfolio of students that is a good compromise between group constraints and the Electre Tri student classification. Although this new methodology is applied in the context of student selection, it can be widely applied in other multicriteria portfolio decision contexts.
10.1 Introduction

Organizations' staffing problems are important as they represent a strategic issue dealing with the management of competencies. Consider, for instance, a consultancy that needs to recruit collaborators for the coming years. These collaborators will be assigned to missions (technical, managerial, and strategic). Candidates are evaluated on their individual abilities to fulfill the requirements of the various clients. However, the firm wishes to achieve social/ethnic as well as gender diversity.
V. Mousseau (✉) Laboratoire Génie Industriel, Ecole Centrale Paris, Grande Voie des Vignes, 92295 Châtenay-Malabry Cedex, France e-mail:
[email protected] A. Salo et al. (eds.), Portfolio Decision Analysis Improved Methods for Resource Allocation, International Series in Operations Research & Management Science 162, DOI 10.1007/978-1-4419-9943-6 10, © Springer Science+Business Media, LLC 2011
213
Consider a project leader who has decomposed his/her project into several subtasks in a work breakdown structure and who needs to assign resources to each subtask. (S)he can staff each project task with a number of collaborators or can even outsource all or parts of subtasks. For a given task, each collaborator/subcontractor has his/her technical and behavioral competencies to fulfill the given task goals. However, the assignment of the project team must satisfy certain constraints: each possible collaborator/subcontractor will be available only at certain times, and positive or negative synergies can appear among collaborators. These examples show the diversity of staffing problems which, however, share common structural aspects:

• Candidates are evaluated individually on (often qualitative) criteria related to their qualities and competencies,
• The performance of the staffing solution (group of individuals assigned to positions) is related not only to the quality of individuals but also to constraints/criteria concerning the overall group.

This chapter focuses on models related to the selection of individuals, in which the composition of the group imposes constraints on the selection of individuals. We address a real-world example dealing with student selection at Ecole Centrale Paris, France, and use this case study to discuss the usability of the proposed methodology for more general recruitment/staffing problems. The goal of this selection is to choose candidates among students having similar competencies, i.e., trained in the same school. The main concern for the decision maker involved in this selection is to find students who best fulfill the requirements of the courses, based on their behavior, competencies, and abilities. The proposed methodology is based on:

• The Electre Tri method (see Figueira et al. 2005; Mousseau et al. 2000) for evaluating how well each student fits the selection requirements,
• The introduction of constraints on the categories (i.e., evaluation levels) used in Electre Tri to model constraints concerning the overall group (such as gender balance in the group).

Our contribution is an innovative two-stage methodology to support such staffing decisions. In a first step, the Electre Tri method sorts students into ordered predefined categories according to how each student individually fulfills the selection requirements. On the basis of the individual evaluations, a second step of the analysis combines the results of the first step with the requirements to select a group of students who are individually good and satisfactory as a group. This second step involves a mathematical programming formulation and identifies portfolios of students which are a good compromise between group constraints and the Electre Tri student classification. Although this new methodology is applied in the context of student selection, it has a wide range of applicability for portfolio analysis and can be applied in other multicriteria portfolio decision contexts.

The chapter is organized as follows. Section 10.2 reviews the literature on multiple criteria student management problems. We describe the context of our case study in Section 10.3. The proposed methodology is presented in Section 10.4. In Section 10.5, the implementation of the methodology is explained using
data from a real-world application. Insights from the implementation on student selection and, more generally, on staffing problems are provided in Section 10.6. The final section provides conclusions and ideas for further research.
10.2 Literature on Multiple Criteria Student Selection

The management of students in universities has long been an issue for academic institutions: Which students should enter an academic program? How to compose groups of students to form teams for projects? How to assign students to academic majors? Student selection problems can be schematically divided into two different types of decision situations. The first type of problem concerns the partition of the set of students. Depending on the situation, the partition of students can refer to groups required for team projects, or to the distribution of students among majors according to their preferences; see Reeves and Hickman (1992), Weitz and Jelassi (1992), Bafail and Moreb (1993), Saber and Ghosh (2001), Miyaji et al. (1988). The second problem refers to the selection of students for entering an academic program or a particular major, in which case the issue is not to partition a set but, rather, to identify the students with the highest merit to enter the program; such questions are related to recruitment issues, see Kuncel et al. (2001), Yeh (2003), Leyva Lopez (2005).

According to the problem type, the information used as input to solve the question can vary. The type of information considered can be:

• Students' preferences, for the assignment to majors or the composition of groups (e.g., Miyaji et al. 1988; Reeves and Hickman 1992; Saber and Ghosh 2001); the preferences in such cases correspond to an order on majors, or wishes on the group formation.
• Criteria or constraints on the group formation, such as gender issues, similarity of performance among groups, or diversity within groups (e.g., Miyaji et al. 1988; Reeves and Hickman 1992; Weitz and Jelassi 1992; Saber and Ghosh 2001).
• Students' individual performance, GPA, and predictive success attributes (see for instance Kuncel et al. 2001; Yeh 2003; Leyva Lopez 2005).

Earlier approaches give recommendations based on different multiple criteria decision models and diverse methodologies. In particular, Montibeller et al. (2009) and Philips and Bana e Costa (2007) make use of problem structuring methodologies. Some papers involve a ranking of students/alternatives to select the k best ones (e.g., Leyva Lopez 2005), yet no approaches exist, to our knowledge, which consider these problems as multicriteria sorting problems with additional constraints on categories (which is the case for the methodology proposed in this work, see Mousseau et al. 2003a). In terms of solution methods, there is an extensive literature on portfolio decision analysis in which mathematical programming formulations are frequently involved (e.g., Miyaji et al. 1988; Polyashuk 2006; Montibeller et al. 2009; Liesiö et al. 2008;
Ghasemzadeh and Archer 2000; Archer and Ghasemzadeh 1999). In the aggregation of criteria, methodologies use either a synthesis criterion (e.g., multiple attribute value theory, Philips and Bana e Costa 2007) or outranking-based methods (e.g., Leyva Lopez 2005, see also Figueira et al. 2005). Depending on the aggregation procedure, the criteria can be quantitative or ordinal. Moreover, many models require a complete specification of preferences to induce recommendations, whereas a limited number of papers allow an imprecise specification of preferences (e.g., Liesiö et al. 2008; Yeh 2003; Keisler 2008).

Our approach extends earlier research by outlining a two-stage methodology for student admission to an academic program (the Industrial Engineering major). In the first stage, each student is evaluated individually to determine whether (s)he fulfills the admission requirements. This first stage is modeled as a multicriteria sorting model (in which criteria evaluate students' individual performance) and involves the Electre Tri outranking method (see Mousseau et al. 2000). The second stage, which involves a mathematical programming formulation, identifies subsets of students that best fit the selection goals of the decision maker, and sets priorities among incompatible requirements on the group formation. We do not provide an extensive presentation of Electre Tri in this chapter, but the interested reader may find a detailed presentation of outranking methods in Roy (1991) and Figueira et al. (2005).
10.3 Case Study Description

10.3.1 Context

In France, entry to most engineering schools (the so-called “grandes écoles”) requires 2 years of preparatory studies followed by a competitive examination. During these 2 years, students acquire a high level of knowledge mainly in mathematics, physics, and chemistry. The competitive examination is a selection process in which each engineering school selects its own admitted students. As one of these engineering schools, Ecole Centrale Paris (ECP) selects its students based on nationwide competitive examinations after the 2 years of preparatory studies: 350 places (for more than 10,000 candidates) are available through a specific entrance examination. The other places are filled through different methods of selection: 150 places are available to university transfer students and foreign students who meet specific selection requirements.

ECP is an institution of higher education whose principal mission is to prepare highly qualified, general engineers for professions in industry and research. The educational program of ECP is based upon an integrated multidisciplinary approach that combines basic scientific and technical education with an emphasis on economic, social, and human realities in industry.

The students study 3 years at ECP. With respect to international standards, the first year (ECP1) at ECP corresponds to the final year of their undergraduate
studies; the two remaining years (ECP2 and ECP3) correspond to graduate studies. At the end of ECP2, students choose a major for the year ECP3. We are here concerned with the selection of students for admission to one of the majors proposed in ECP3 (see Fig. 10.1).

Fig. 10.1 Organization of the studies at ECP

For their final year (ECP3), the students have to choose a major among nine. In addition to their major, students have to choose a “professional track” among six professionally oriented sets of courses: entrepreneurship (E), design of innovative systems (DIS), operations management (OM), international project management (IPM), research (R), and strategy and finance (SF). During ECP3, students follow courses proposed by their major and other courses corresponding to their professional tracks (ECP3 is composed of alternating periods devoted to the courses of majors and the courses of professional tracks). Thus, students acquire both knowledge and competency in engineering to become adapted to an ever-changing job market, fluctuating growth sectors, and the emergence of new fields of activity. The nine majors proposed at ECP are:

• Industrial Engineering major (IE)
• Applied mathematics major
• Sustainable civil engineering major
• Energy major
• Environment and Biotechnologies major
• Mechanical and aerospace engineering major
• Physics and applications major
• Information sciences major
• Advanced systems major
Students admitted to the IE major must make an additional choice among four possible streams of courses. These streams correspond to “subspecializations” within the IE major (product/service design, production/industrialization, supply chain, management). The choice of a stream defines a choice of specific elective courses. This is important for the selection process because the minimum number of students required to open a course is ten, which implies that at least ten students per stream must be selected.
10.3.2 Case Description

Each year, a decision is to be made concerning which students should be admitted to the Industrial Engineering (IE) major. The limit established by the dean of studies is 50 students per major, and the number of students who apply for the IE major always exceeds the available places. Two persons are in charge of this selection process: the head and vice-head of the IE major. They work collaboratively on this problem and decide which students will enter the IE major (these two stakeholders have very similar views and agree on which type of student should be selected). Therefore, in the following, the decision maker (DM) refers to this group of two persons.

The timeline of the selection process is the same each year. Each student is required to specify a preference order on three majors. Each major receives the applications of students who have selected this major as a first choice. Applications of the students who are not admitted to their first-choice major are forwarded to the head of their second-choice major, and eventually to their third choice. Concerning the IE major, the DM receives the files of all students applying to the major as a first choice at the end of April. This application includes a curriculum vitae, the GPAs obtained during the first two years at ECP (ECP1 and ECP2), and a personal motivation letter. In addition, this file also includes a choice of professional track, and the second and third choices of major. For students applying to the IE major, an additional choice is required: the choice of a stream (one of the four subspecializations). Before interviewing students, the DM examines all applications; individual interviews are planned at the beginning of May. After 3 or 4 days of interviews, the DM makes a first decision, which consists of a list of 50 accepted students, complemented with a waiting list of 10 students (the remaining students are redirected to their second-choice major). In May, a few accepted students might resign from the IE major and make it possible for students on the waiting list to be admitted. At the end of May, this progressive withdrawal process stops.

Until 2009, the admission of students to the IE major was based on an ad hoc selection process that was not thoroughly formalized and did not incorporate any formal decision support, but rather proceeded through an intensive interaction with the students and multiple exchanges and discussions between the head and vice-head of the IE major. The DM analyzed the students' results and letters of motivation before having an individual interview with each of them. To assess whether a given student fulfills the different requirements or not, the DM considered the following aspects:

• Motivation to follow courses in the IE major
• Professional project in relation to his/her previous industrial experiences
• Maturity and personality
• Knowledge about Industrial Engineering
• GPAs during the first two years at ECP
After each interview, each student was classified into one of the three following categories:

• The student should be accepted in the IE major
• The student should be placed on a waiting list
• The student should not be accepted in the IE major

Apart from the five requirements mentioned above, the DM also considered additional issues: to balance gender in the set of selected students, an additional advantage was implicitly assigned to female students (less than 20% of candidates are women). Similarly, the DM sought to build a balanced student group with respect to the choice of professional track; this was done by giving a penalty to students who chose a frequently demanded professional track. At the end of the interview process, the DM decided on the 50 selected students and drew up a waiting list. Very often, rejected students requested a second interview or wanted to know why they had been rejected.
10.3.3 Stakes Involved in the Student Selection Process

In order to reach a balanced distribution of students among the nine majors, the dean of studies imposes a maximum of 50 students per major. For some of the majors, this constraint is not an issue, as each year no more than 20 students apply. As the IE major is the most popular major, each year more than 70 students apply to it. For the DM, this constraint is crucial in many respects:

• The decision is to be explained to the rejected students. The DM must be able to make explicit the reasons why a student is not selected.
• The DM considers that rejected students are not “bad” students but rather students who do not fit the IE major orientation.
• The DM needs to justify the decision process and the result of the selection to the dean of studies and the heads of other majors. Namely, the DM wants to avoid the selection being viewed as “the IE major admits all students with high GPAs.”
• The DM makes the selection decision not only on the basis of how well each student individually fits the IE major but also with respect to the coherence of the group of students as a whole (gender balance, balance among professional tracks, etc.).
• The minimum number of students required to open a course is ten. Because some courses are restricted to students who are assigned to one of the four streams, an important aspect of the selection process is to try to admit at least ten students per stream.

Considering these issues, the DM needs decision support tools to be introduced in the student selection process. The introduction of such tools should lead to a gain in efficiency by reducing the time spent by the DM, improve the transparency of the selection process (namely for the students and the dean of studies), and support a systematic yearly selection process.
10.4 A Methodology for Admitting Students to the IE Major at ECP

The objective of the DM is to have a systematic selection process that can also help him justify the final decision concerning the selection of students in the IE major. In addition to this main purpose, one of the aims of this chapter is also to investigate the introduction of such a tool in student selection processes and, in a broader sense, into selection processes generally. The analyst, who supported the DM in designing a decision support methodology, is a specialist in multicriteria decision analysis. The proposed methodology was developed through an intensive interaction between the DM and the analyst.

The structuring phase consisted of several working sessions. In the first stage, the DM and analyst worked on a common understanding of the existing process (decision to be made, criteria to consider, stakeholders involved in the decision process, constraints of the process, etc.). Simultaneously, key data from former years were gathered. At this point, application files from 2009 were analyzed. Then, the process consisted of discussions between the DM and the analyst to identify, make explicit, and justify the criteria on which the selection of students is based. This led the DM and analyst to distinguish between two kinds of criteria that characterize each student:

• Individual criteria. Relative to the quality of the application (compliance with the IE major requirements); these criteria are used in the model that evaluates students individually, see Section 10.4.1.
• Group criteria. Factors to consider at a group level (gender balance, even distribution among professional tracks, etc.), see Section 10.4.2.

This distinction justifies the elaboration of a two-level methodology. In a first step, each student application is evaluated individually, i.e., without considering the factors that are relevant at the group level (gender, professional track, etc.). This results in a list of students who individually deserve to be admitted to the IE major based on their individual merits. However, this group of students may not be satisfactory for the DM (unbalanced in gender or among professional tracks). The second step establishes which students should be admitted in order to select students who are individually good, on the one hand, and to form a group that is collectively satisfactory, on the other. Moreover, this second step should provide insights for the DM on how to integrate and balance these two aspects.
10.4.1 Evaluating Students Individually

The available information has been structured to evaluate students on six criteria: the first two criteria correspond to the student's GPAs in the first and second years of study (in France, grades are given on a 0–20 scale, 20 being the best possible grade); the last four criteria (motivation, professional project, maturity/personality, knowledge
about IE) are qualitatively evaluated on a 5-level scale (5 being the best evaluation). They are defined as follows (the precise description of the criteria and evaluation scales is provided in Appendix 1):

• Motivation. Perceived motivation of the student with respect to the choice of the IE major, as judged by the DM through the interview and by reading the cover letter.
• Professional career plans. The ability of the student to articulate his/her future professional project with his/her previous achievements (courses, etc.). (S)he bases his/her choice to come to IE in a logical and consistent way on what (s)he has done previously and on his/her employment plans and prospects. The coherence of the choice of major with the professional track is considered here.
• Maturity/personality. Maturity and breadth of view with respect to both professional and broader societal matters.
• General knowledge of industrial engineering and its career opportunities. The ability to define what Industrial Engineering is, in particular knowledge of the contents of the IE major at ECP and its various outcomes.

On the basis of these six criteria, the DM first assigns each student to one of the four categories defined hereafter, according to whether or not they fulfill the requirements for entering the IE major:

• C1: Students who do not meet the requirements to enter the IE major at all, and consequently cannot be accepted.
• C2: Students who do not really meet the requirements to enter the IE major.
• C3: Students who correspond fairly well to the requirements to enter the IE major. These students could be admitted or placed on the waiting list.
• C4: Students who correspond fully to the requirements to enter the IE major, and therefore should be admitted.

In this first analysis, each application is examined individually to assess how well each student fulfills the requirements for admission to the IE major. The choice of four categories was made by the analyst and the DM as a good compromise between the discriminatory power of the model and its complexity.
10.4.2 Considerations Relative to the Group of Students to be Admitted

The classification stemming from step 1 (individual evaluation) is made independently for each student, without any consideration of the number of available positions in the IE major. Hence, it remains insufficient for the DM to define which students to select. It should be emphasized that when the number of “sufficiently good” students (C3 ∪ C4) is less than the number of positions (50), the methodology makes it possible to decide either to admit fewer than 50 students, or to consider admitting some students assigned to C2. On the contrary, if the number
of “sufficiently good” students exceeds 50, the DM has some flexibility to choose among them a subset of 50 students that is balanced in terms of gender and other considerations relative to the group of admitted students. A way to integrate this limitation on the number of admitted students is to impose that the number of alternatives (students) assigned to these categories is less than or equal to 50. Moreover, the DM takes other considerations into account, which give additional constraints on the group of students to be selected:

• The DM wishes to have a good balance in terms of gender.
• The students in the IE major choose one of the four existing “streams” (product/service design, production/industrialization, supply chain, management), which correspond to specific courses. Therefore, the number of students in each stream should not be too low; a course is opened only if it contains at least ten students.
• During their last year of study, in addition to their major, students are assigned to one of six “professional tracks” (professionally oriented sets of courses). The DM wants to have a group of students well distributed among the professional tracks, in order for the group of students to be representative of the various jobs available to industrial engineers on the labor market.

The above considerations can be formulated in the model through the addition of constraints on the number of students assigned to C3 or C4 who exhibit a specific property. For instance, a good balance among the professional tracks can be formulated by a constraint stating that the number of students assigned to C3 or C4 who chose a specific professional track should not exceed 20. Obviously, constraints concerning the balance (in gender, streams, and professional tracks) might not match the group of students assigned to C3 or C4, and might not be compatible with the student evaluation model. In order to appreciate how these constraints conflict, we identify various ways to relax them. These relaxations characterize sets of incompatible constraints from which the DM can get insights into the conflicting aspects of the selection problem. This supports him/her in finding a reasonable compromise between arguments relative to the quality of the selected students and the desired quality of the group of selected students as a whole.
10.4.3 Interest of the Proposed Methodology

The introduction of the proposed methodology in the actual selection process improves this decision process as perceived by the DM and other stakeholders. Specifying a structured process based on a formal method of selection is a real advantage for many reasons:

• It is easier for the DM to justify the decisions: criteria are explicit, the student evaluation model can be explained, and it is calibrated based on past examples
(see Section 10.5.1.2). When implemented in the formal process, the result is similar to the “informal” result (the decision of the DM).
• All stakeholders involved in the selection process (the DM, students, dean of studies, heads of other majors) have a better understanding of the decision process, with less perceived arbitrariness.
• For nonselected students who want explicit explanations of why they were refused, it is easier for the DM to show them the result of the decision through this formal process.

These elements contribute to the acceptability of the selection process and its results by the stakeholders. Another important advantage of the proposed methodology is related to the fact that it is decomposed into two “phases” and fits the process used by the DM to select students. In the first phase, application files are studied, students are interviewed, and each student is individually evaluated by the DM (see Section 10.5.1). In the second phase, once each student is evaluated (classified in one of the four categories), arguments relative to the group of selected students are considered. The DM then conducts an analysis of how considerations about each student individually conflict with arguments related to the set of students as a whole (see Section 10.5.2). Such an analysis makes it possible to define a final set of selected students. Hence, the student evaluation tool is intended to be used during the interview, while the tool which analyzes how to balance group issues with the students' individual qualities should be used after the interview process.
10.5 Models Involved in the Selection Process

In this section, we present how the proposed methodology can be implemented. The implementation is described using the data set corresponding to the student selection process which took place in 2009. The full data set (76 applicants in 2009) is provided in Appendix 2.
10.5.1 Sorting Students Using Electre Tri

To evaluate the quality of student applications individually, we assign each student to one of the four categories (C4 ≻ C3 ≻ C2 ≻ C1) using the Electre Tri method (see Figueira et al. 2005; Mousseau et al. 2000). In fact, we use a simplified version of the Electre Tri method, avoiding indifference and preference thresholds on criteria and using the pessimistic assignment rule (this simplified version has been axiomatized by Bouyssou and Marchant 2007a,b), which we will call Electre TriBM. In what follows, we provide an introduction to the Electre TriBM method, based on the actual case study, i.e., assigning each student applying to the IE major to one of the four categories. We choose the Electre Tri method to assign
the students to categories, because this method is very well adapted to problems in which ordinal criteria (i.e., “qualitative” criteria for which differences of evaluations have no meaning; evaluations can be interpreted in terms of order only) are involved.
10.5.1.1 An Introduction to the Electre Tri Method

Electre TriBM is a multiple criteria assignment method that assigns alternatives to predefined ordered categories. In order to illustrate how Electre TriBM assigns alternatives to categories, let us consider the following example (see Fig. 10.2): a set of alternatives A = {x, y, ...} evaluated on three criteria g_1, g_2, g_3 (each having a [0, 10] scale) are to be assigned to a category C1, C2, or C3 (C1 being the least preferred category). The multicriteria frontier between C1 and C2 (C2 and C3, respectively) is defined by b_1 = (5, 5, 5) (b_2 = (7, 7, 7), respectively). Electre TriBM assigns an alternative x to the highest category C_h for which x is at least as good as the lower frontier of C_h, that is, to C3 if x is at least as good as b_2, otherwise to C2 if x is at least as good as b_1, otherwise to C1.

Fig. 10.2 Definition of categories using limit profiles

The criteria are equally important (w_j = 1/3, j = 1, ..., 3) and the majority level is set to λ = 0.5, which means that any pair of criteria is sufficient to form a majority. Veto evaluations v_{jh}, j = 1, 2, 3, h = 2, 3, correspond to evaluations on criterion g_j which preclude the assignment to category C_h. These veto evaluations are defined as v_{j3} = 3, v_{j2} = 1, j = 1, 2, 3, which imply that any alternative having an evaluation lower than 1 on a criterion cannot be assigned to C2. Similarly, any alternative having an evaluation lower than 3 on a criterion cannot be assigned to C3.

Consider two alternatives x = (8, 6, 4) and y = (8, 6, 0). We first compare x (y, respectively) to the frontier b_2. For x (y, respectively) to be considered at least as good as b_2, x (y, respectively) should be at least as good as b_2 on a majority of criteria (a set of criteria for which the sum of weights exceeds the majority level λ). This is not the case, as x (y, respectively) is at least as good as b_2 on the first criterion only. Therefore, x (y, respectively) cannot be assigned to category C3. x and y are then compared to b_1: both x and y are at least as good as b_1 on a majority of criteria (the first two criteria). In addition to this majority condition, for x (y, respectively) to be considered at least as good as b_1, it should not have an evaluation below the veto evaluation v_{j2}. This condition holds for x but not for y. Hence x is at least as good as b_1, while y is not; therefore, x is assigned to C2, while y is assigned to C1.
From this illustrative example, we can specify the Electre TriBM assignment procedure in our student evaluation context as follows. In order to assign a student a ∈ A to a category, Electre TriBM compares the student to each category limit b ∈ B, hence building an outranking relation S ⊆ (A × B) ∪ (B × A) (see Roy 1991) which validates or invalidates, for each a ∈ A and b ∈ B, the assertion a S b (and b S a), whose meaning is “a is at least as good as b.” In order to validate the assertion a S b (or b S a), two conditions should be verified:

• Concordance. For an outranking a S b (or b S a) to be valid, a “sufficient” majority of criteria should be in favor of this assertion.
• Nondiscordance. When the concordance condition holds, none of the criteria in the minority should oppose the assertion a S b (or b S a) in a “too strong way.”

The process by which Electre TriBM assigns a student to a category can be formulated as follows (the process is described here in the context of four categories, but can easily be generalized):

• In order to be assigned to C4 (the best category), a student's evaluations must be at least as good as b_3 (the limit between C3 and C4) on a “majority” of criteria, and must not include, on any criterion, a “veto evaluation” which precludes the assignment of this student to C4; otherwise
• To be assigned to C3 (the second best category), a student's evaluations must be at least as good as b_2 on a “majority” of criteria, and must not include, on any criterion, a “veto evaluation” which precludes the assignment of this student to C3; otherwise
• To be assigned to C2, a student's evaluations must be at least as good as b_1 on a “majority” of criteria, and must not include, on any criterion, a “veto evaluation” which precludes the assignment of this student to C2; otherwise
• The student is assigned to C1.

In the above process, the notion of majority is defined by taking into account the relative importance of criteria; hence, a set of importance coefficients (w_1, w_2, ..., w_6) that sum to 1 and a majority threshold λ ∈ [0.5, 1] are used to determine whether or not a student is at least as good as a profile on a majority of criteria: Σ_{j: g_j(a) ≥ g_j(b)} w_j ≥ λ. Moreover, the notion of veto evaluation corresponds to an evaluation on a criterion g_j which is sufficiently bad to forbid a student from being assigned to a category C_h. Electre TriBM uses a set of veto thresholds (v_1(b_h), ..., v_j(b_h), ..., v_6(b_h)), h ∈ {1, 2, 3}, such that the veto evaluation v_{jh} is equal to g_j(b_h) − v_j(b_h). Note that the weights w_j do not represent trade-offs but are used here to determine whether a subset of criteria constitutes (or not) a majority. Hence, the information conveyed by the weights can be interpreted as a “voting power” associated with each criterion, whereas a subset of criteria is considered as a majority (a winning coalition of criteria) when the sum of the weights of these criteria exceeds the majority threshold λ. It should also be highlighted that these “winning coalitions” of criteria play the same role when comparing an alternative to the different profiles separating any two consecutive categories.
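A compact implementation makes this pessimistic assignment rule concrete. The sketch below is our own illustration, not the authors' code; it treats an evaluation at or below the veto evaluation as blocking (one possible reading, consistent with the veto statements of Section 10.5.1.2) and reproduces the toy example above, where x = (8, 6, 4) is assigned to C2 and y = (8, 6, 0) to C1:

def electre_tri_bm(a, frontiers, vetoes, w, lam):
    # Pessimistic Electre Tri-BM assignment.
    # frontiers: [b1, ..., bk], the ascending category limits.
    # vetoes:    same length; vetoes[h][j] is the veto evaluation on
    #            criterion j for the category directly above frontiers[h]
    #            (None = no veto on that criterion).
    # Returns the index of the assigned category, 1 (worst) to k+1 (best).
    m = len(a)
    for h in range(len(frontiers) - 1, -1, -1):   # compare to b_k, ..., b_1
        b, v = frontiers[h], vetoes[h]
        concordance = sum(w[j] for j in range(m) if a[j] >= b[j])
        vetoed = any(v[j] is not None and a[j] <= v[j] for j in range(m))
        if concordance >= lam and not vetoed:
            return h + 2      # a outranks frontiers[h]: category h+2 (1-based)
    return 1

# Toy example from the text: three criteria, equal weights, lambda = 0.5.
frontiers = [(5, 5, 5), (7, 7, 7)]
vetoes = [(1, 1, 1), (3, 3, 3)]        # v_{j2} = 1, v_{j3} = 3
w = (1 / 3, 1 / 3, 1 / 3)
assert electre_tri_bm((8, 6, 4), frontiers, vetoes, w, 0.5) == 2   # x -> C2
assert electre_tri_bm((8, 6, 0), frontiers, vetoes, w, 0.5) == 1   # y -> C1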
Table 10.1 Parameter values

Frontiers          g1   g2   g3   g4   g5   g6
g_j(b_1)           10   10   2    2    2    2
g_j(b_2)           11   11   3    3    3    3
g_j(b_3)           13   13   4    4    4    4

Veto evaluations   g1   g2   g3   g4   g5   g6
v_{j2}             –    –    –    –    –    –
v_{j3}             –    –    1    1    1    1
v_{j4}             –    –    2    1    1    2
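For illustration, the Table 10.1 parameters can be written down as inputs for the assignment sketch of Section 10.5.1.1 (our own encoding, not part of the original study; None marks the absence of a veto on a criterion):

# Frontiers b1, b2, b3 over the six criteria g1..g6 (Table 10.1).
frontiers_ecp = [
    (10, 10, 2, 2, 2, 2),   # b1: limit between C1 and C2
    (11, 11, 3, 3, 3, 3),   # b2: limit between C2 and C3
    (13, 13, 4, 4, 4, 4),   # b3: limit between C3 and C4
]
# Veto evaluations v_{j2}, v_{j3}, v_{j4}; None = no veto on that criterion.
vetoes_ecp = [
    (None, None, None, None, None, None),
    (None, None, 1, 1, 1, 1),
    (None, None, 2, 1, 1, 2),
]

Together with the weights and majority level elicited in Section 10.5.1.2, these data suffice to assign every applicant to one of the four categories.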
In our case study, the four ordered categories are C4 ≻ C3 ≻ C2 ≻ C1 (where ≻ denotes the order on categories), and six criteria g_1, g_2, g_3, g_4, g_5, and g_6 are considered. The assignment of a student a ∈ A results from the comparison of a with the three profiles defining the limits of the categories, B = {b_1, b_2, b_3}, where b_{h−1} = (g_1(b_{h−1}), g_2(b_{h−1}), g_3(b_{h−1}), g_4(b_{h−1}), g_5(b_{h−1}), g_6(b_{h−1})) is an evaluation vector representing the frontier between categories C_{h−1} and C_h, h = 2, ..., 4.
10.5.1.2 Elicitation of the Electre Tri Student Evaluation Model

In order to define an Electre TriBM model for our student selection problem, the preference-related parameters should be elicited:

• Category limits (g_j(b), j = 1, ..., 6, b ∈ B)
• The relative importance of criteria:
  – The weight vector w and the majority level λ used in the concordance condition
  – The veto evaluations v_{jh}, j = 1, ..., 6, h = 2, 3, 4, used in the nondiscordance condition

So as to set the values for these parameters, interviews were conducted with the DM. The category limits (g_j(b), j = 1, ..., 6, b ∈ {b_1, b_2, b_3}) were induced directly from these discussions. For instance, the DM was able to state that if a student a had an evaluation on criterion g_1 greater than or equal to 13 (g_1: GPA in the first year of study at ECP, which in France ranges over [0, 20]), then his/her first-year GPA would “vote” for (contribute to the assignment to) category C4. The values of g_j(b_h) are provided in Table 10.1.

Furthermore, during the interviews, the DM made the statements given below. These statements are relevant to define the “veto evaluations” v_{jh}:

1. A student having a score less than 3 on the criterion “Motivation” (g_3) cannot be assigned to category C4.
2. A score of 1 on one of the four last criteria prevents a student from being admitted (i.e., the student can be assigned neither to category C3 nor to C4).
3. A student evaluated 1 or 2 on the criterion “general knowledge on Industrial Engineering” (g_6) cannot be assigned to category C4.

At this stage of the elicitation process, the frontiers separating consecutive categories and the veto evaluations are determined (see Table 10.1). The parameters
which remain to be elicited are the weights w_j, j = 1, ..., 6, and the majority level λ. The DM was not comfortable with providing precise figures for these parameters, but was confident about his judgments on the assignment of some students, statements which indirectly (the limits of categories and veto thresholds being fixed) provide information about the weights w_j, j = 1, ..., 6, and the majority level λ.

A relatively large body of literature has emerged which proposes methodologies to elicit DMs' preferences indirectly for the Electre Tri method. Instead of asking the DM to provide precise values for the parameters, these methods use holistic preference information, which requires less effort from the DM. Several authors have proposed methodologies to infer preference parameters from assignment examples provided by the DM (Mousseau and Slowinski 1998; Dias and Climaco 1999; Dias et al. 2002; Mousseau and Dias 2004). These methodologies infer the preference parameters that best match the DM's holistic preference information, i.e., the assignment examples. This is how we proceeded to elicit the importance of criteria indirectly with the DM. From his experience and expertise, the DM was confident in his judgment concerning five given students. In other words, he managed to provide the five assignment examples given below (we denote the subset of these five students by A* ⊆ A):

1. a10 and a61 should be assigned to category C3 (a10 → C3, a61 → C3)
2. a22 should be assigned to category C2 (a22 → C2)
3. a59 and a68 should be assigned to category C4 (a59 → C4, a68 → C4)

These assignment examples induce linear constraints on the weights and majority level λ (see Mousseau and Slowinski 1998). For instance, a68 → C4 implies that Σ_{j: g_j(a68) ≥ g_j(b_3)} w_j ≥ λ. Moreover, the DM believed that the second criterion (second-year grade) should not be more important than the third (motivation), fourth (professional career plan), or fifth (maturity/personality) criterion, which led to the following constraints: w_2 ≤ w_3, w_2 ≤ w_4, and w_2 ≤ w_5. From this indirect information concerning the weights and majority level (assignment examples and additional constraints on weights), we inferred a weight vector and cutting level using the inference algorithm proposed by Mousseau and Slowinski (1998). The inference program (see Appendix 3) minimizes an error function subject to constraints which represent the assignment examples (and the way these examples are used in Electre Tri). The inferred values were w = (0.16, 0.17, 0.21, 0.21, 0.21, 0.05) and λ = 0.54.

This completes the elicitation of the model and allows us to compute the assignment of each student to a category (C4, C3, C2, or C1). From this model, the numbers of students assigned to C4, C3, C2, and C1 are 29, 27, 4, and 16, respectively (all results are provided in Appendix 2). From this analysis, it appears that 56 students meet the requirements to be admitted to the IE major (C4 and C3). Among these 56 students, 18 are girls (38 are boys). The distribution among the four streams (product/service design, production/industrialization, supply chain, management) is 19, 14, 12, and 11, and the distribution among the six professional tracks is (0, 13, 26, 0, 8, 9).
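The inference program of Appendix 3 is not reproduced here, but its structure can be sketched as a linear program: with the frontiers and vetoes fixed, each assignment example constrains which coalitions of criteria must (or must not) reach the majority level. The sketch below is ours, written with scipy; since the students' evaluations of Appendix 2 are not reproduced in this chapter, the coalition sets in examples are hypothetical placeholders, and maximizing a slack eps stands in for the error function of the actual algorithm:

import numpy as np
from scipy.optimize import linprog

M = 6  # criteria; decision vector z = [w_1, ..., w_6, lam, eps]

# Hypothetical coalitions (the real ones follow from Appendix 2): for each
# assignment example, the first set is {j : g_j(a) >= g_j(b_low)} for the
# frontier the student must outrank, the second is the same set for the
# frontier (s)he must NOT outrank (None for the top category C4).
examples = [
    ({0, 1, 2, 3, 4, 5}, {0, 2, 5}),   # a10 -> C3: outranks b2, not b3
    ({0, 1, 2, 3, 4, 5}, {1, 3, 5}),   # a61 -> C3
    ({0, 2, 3, 4, 5}, {0, 5}),         # a22 -> C2: outranks b1, not b2
    ({0, 1, 2, 3, 4, 5}, None),        # a59 -> C4: outranks b3
    ({0, 1, 2, 3, 4}, None),           # a68 -> C4
]

A_ub, b_ub = [], []
def coalition(J, outrank):
    # outrank:      sum_{j in J} w_j >= lam + eps
    # not outrank:  sum_{j in J} w_j <= lam - eps
    r = np.zeros(M + 2)
    r[list(J)] = -1.0 if outrank else 1.0
    r[M], r[M + 1] = (1.0, 1.0) if outrank else (-1.0, 1.0)
    A_ub.append(r); b_ub.append(0.0)

for J_low, J_high in examples:
    coalition(J_low, outrank=True)
    if J_high is not None:
        coalition(J_high, outrank=False)
for k in (2, 3, 4):                     # w_2 <= w_3, w_2 <= w_4, w_2 <= w_5
    r = np.zeros(M + 2); r[1], r[k] = 1.0, -1.0
    A_ub.append(r); b_ub.append(0.0)

res = linprog(c=np.r_[np.zeros(M + 1), -1.0],      # maximize the slack eps
              A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=[np.r_[np.ones(M), 0.0, 0.0]], b_eq=[1.0],  # weights sum to 1
              bounds=[(0, 1)] * M + [(0.5, 1.0), (0.0, 0.5)])
w, lam = res.x[:M], res.x[M]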
Although this first model provides interesting insights to the DM (which students deserve to be selected, based on their evaluations), the results do not fully match the specificities of the decision situation. One important characteristic of the decision to be made is that the maximum number of students to be selected is 50. Obviously, the proposed model does not account for the size of categories $C_3$ and $C_4$ (it only assesses each student individually and assigns each of them to a category). With this model, the number of students assigned to categories $C_4$ and $C_3$ may exceed 50 (as is the case in our data set), in which case the DM should be supported in choosing the 50 students to be selected out of those assigned to $C_4$ and $C_3$ (56 in our data set). Moreover, the DM wants to evaluate the group of students to be selected as a whole on various aspects (see Section 10.5.2). For instance, there is a gender issue, and the DM desires a reasonable balance concerning "streams." A good balance among the "professional tracks" is also viewed by the DM as highly desirable. The second phase of the methodology aims at integrating such constraints into the selection of the students.
10.5.2 Modeling Constraints to Design the Group of Selected Students

In this section, we propose a methodology to select a group of students that is individually as good as possible and, moreover, conforms collectively as much as possible to the constraints specified by the DM on the group to be admitted. In order to do so, we pursue the analysis of the data for the year 2009. In this case, the question amounts to deciding which students to select out of the 56 who were judged as fulfilling the IE major requirements.
10.5.2.1 Defining the Group Constraints

From the first step of the analysis, it follows that 56 students reasonably deserve to be admitted to the IE major. Among these 56 students, 29 are assigned to category $C_4$ (students who fully correspond to the requirements to enter the IE major and therefore should be admitted), and 27 to category $C_3$ (students who correspond fairly well to the requirements to enter the IE major; these students could be admitted or placed on the waiting list). In the following, all students assigned to category $C_4$ are unconditionally admitted, while those assigned to $C_3$ are considered for admission depending on the constraints on the group. In order to specify constraints relative to the group of selected students, we proceed as follows. Let us consider one binary variable $x_i$, $i = 1,\ldots,27$, for each student assigned to $C_3$. These variables are defined by $x_i = 1$ when student $i$ (the $i$th student in the list of 27 students assigned to $C_3$) is admitted, and $x_i = 0$ otherwise.
The first type of constraint refers to the number of actually admitted students. The objective of the DM is to limit the number of admitted students to 50; as the 29 students assigned to $C_4$ are all admitted, this can be formulated by the constraint $\sum_{i=1}^{27} x_i \le 21$. If the DM considers relaxing this constraint, the successive relaxations take the form $\sum_{i=1}^{27} x_i \le 22$, $\sum_{i=1}^{27} x_i \le 23$, ..., $\sum_{i=1}^{27} x_i \le 27$. Note that the last relaxation amounts to not considering size constraints at all.

The second type of constraint is related to the distribution of admitted students among the four "streams." Let us define $S$ as the $4 \times 27$ matrix of stream choices by the students assigned to $C_3$ ($s_{ij} = 1$ if the $i$th student assigned to $C_3$ chooses stream $j$, $s_{ij} = 0$ otherwise). As each course should have at least ten students, the number of students in each stream should be at least ten. Such a constraint can be formulated as $\sum_{i=1}^{27} s_{ij} x_i + n_j(C_4) \ge 10$, $j = 1,\ldots,4$, where $n_j(C_4)$ denotes the number of students assigned to $C_4$ (and therefore admitted) who chose stream $j$. If the DM considers relaxing these constraints concerning the streams, the successive relaxations take the form $\sum_{i=1}^{27} s_{ij} x_i + n_j(C_4) \ge 9$, $\sum_{i=1}^{27} s_{ij} x_i + n_j(C_4) \ge 8$, etc., for $j = 1,\ldots,4$.

The third type of constraint concerns the distribution of admitted students among the six "professional tracks." Let us define $PT$ as the $6 \times 27$ matrix of professional track choices by the students assigned to $C_3$ ($pt_{ij} = 1$ if the $i$th student assigned to $C_3$ chooses professional track $j$, $pt_{ij} = 0$ otherwise). The goal of the DM is to select students well spread among professional tracks and therefore to limit the number of admitted students in a given professional track to at most 20. Such a constraint can be formulated as $\sum_{i=1}^{27} pt_{ij} x_i + m_j(C_4) \le 20$, $j = 1,\ldots,6$, where $m_j(C_4)$ denotes the number of students assigned to $C_4$ (and therefore admitted) who chose professional track $j$. If the DM considers relaxing these constraints concerning the professional tracks, the successive relaxations take the form $\sum_{i=1}^{27} pt_{ij} x_i + m_j(C_4) \le 21$, $\sum_{i=1}^{27} pt_{ij} x_i + m_j(C_4) \le 22$, etc., for $j = 1,\ldots,6$.

The fourth type of constraint is related to the issue of gender. The DM considers that a good balance between girls and boys obtains when the number of accepted girls is in the interval [20,30]. Let us define $gen_i = 1$ if the $i$th student assigned to $C_3$ is a girl, and $gen_i = 0$ otherwise. This constraint can be formulated as $\sum_{i=1}^{27} gen_i x_i \ge 20$ and $\sum_{i=1}^{27} (1 - gen_i) x_i \ge 20$. The successive relaxations take the form $\sum_{i=1}^{27} gen_i x_i \ge 19$ and $\sum_{i=1}^{27} (1 - gen_i) x_i \ge 19$, $\sum_{i=1}^{27} gen_i x_i \ge 18$ and $\sum_{i=1}^{27} (1 - gen_i) x_i \ge 18$, etc.

Obviously, not all of the above constraints are compatible. The issue for the DM is then to identify the alternative ways to relax the constraints so as to make the problem feasible, and to choose the best solution among these. A sketch of how these constraint families and their relaxations can be encoded is given below.
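The following sketch, under the assumption that the student data are available as simple Python lists, shows one way to encode the four constraint families together with their ordered relaxations. The $C_4$ stream and track counts used here are placeholders, not the case data.

# Each constraint family is a list of (coeffs, sense, rhs) triples over the
# 27 binary variables x_i, ordered from the original constraint to its
# successive relaxations. The data below are placeholders.
n = 27
s = [[0] * n for _ in range(4)]    # s[j][i] = 1 if C_3 student i chose stream j
pt = [[0] * n for _ in range(6)]   # pt[j][i] = 1 if C_3 student i chose track j
gen = [0] * n                      # gen[i] = 1 if C_3 student i is a girl
n_C4 = [8, 7, 7, 7]                # stream counts among the 29 admitted C_4 students
m_C4 = [0, 8, 12, 0, 4, 5]         # track counts among the 29 admitted C_4 students

families = []
# Size: sum_i x_i <= 21, relaxed to <= 22, ..., <= 27.
families.append([([1] * n, "<=", b) for b in range(21, 28)])
# Streams: sum_i s_ij x_i >= 10 - n_j(C_4), relaxed one unit at a time.
for j in range(4):
    families.append([(s[j], ">=", b - n_C4[j]) for b in (10, 9, 8)])
# Professional tracks: sum_i pt_ij x_i <= 20 - m_j(C_4), then 21, 22, ...
for j in range(6):
    families.append([(pt[j], "<=", b - m_C4[j]) for b in (20, 21, 22)])
# Gender: at least 20 girls and at least 20 boys, relaxed one unit at a time.
families.append([(gen, ">=", b) for b in (20, 19, 18)])
families.append([([1 - g for g in gen], ">=", b) for b in (20, 19, 18)])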
10.5.2.2 Identifying "Best Compromises" Among the List of Constraints

In order to help the DM select a "best compromise" among incompatible requirements, all possible minimal sets of constraint relaxations yielding a feasible set of constraints have been computed.
First of all, note that all constraints considered are linear and can be written as $\sum_{i=1}^{27} \alpha_i x_i \ge \beta$. Note also that the number of relaxations of a constraint (which are also linear) is finite (and even small). In the following, we consider the (infeasible) set of constraints defined in the previous section together with all their respective relaxations. Obviously, as long as the initial constraints hold, the relaxed constraints are redundant. Let us suppose that the number of constraints and relaxed constraints equals $p$. The $k$th constraint can be written as

$$\sum_{i=1}^{27} \alpha_{ik} x_i \ge \beta_k. \qquad (10.1)$$
Let us now define $y_k$ ($k = 1,\ldots,p$), $p$ new binary variables (one for each constraint and relaxed constraint), and rewrite the $k$th constraint as

$$\sum_{i=1}^{27} \alpha_{ik} x_i + M y_k \ge \beta_k, \quad \text{where } M \text{ is an arbitrarily large positive value.} \qquad (10.2)$$
It is clear that when $y_k = 0$, (10.2) is equivalent to (10.1), but when $y_k = 1$, (10.2) is always verified, as if (10.1) were "deleted." Moreover, it should be noted that when the $k$th constraint is deleted ($y_k = 1$), one of its relaxed constraints, initially redundant, becomes active. We consider the objective function $z = \sum_{k=1}^{p} y_k$, which should be minimized subject to the set of $p$ constraints defined as in (10.2), and define the following mathematical program:

$$\min z = \sum_{k=1}^{p} y_k \quad \text{s.t.} \quad \sum_{i=1}^{27} \alpha_{ik} x_i + M y_k \ge \beta_k, \quad k = 1,\ldots,p. \qquad (10.3)$$
The optimal solution of this mathematical program identifies the smallest set of constraints and relaxed constraints whose deletion leads to a feasible set of constraints. Let us denote by $S^*$ the set of indices of these constraints ($k$ such that $y_k = 1$). To find alternative ways of reaching feasibility, we add to the above mathematical program a constraint that excludes this optimal solution: $\sum_{k \in S^*} y_k \le \mathrm{card}(S^*) - 1$, where $\mathrm{card}(S^*)$ denotes the cardinality of the set $S^*$. Solving this second mathematical program leads to a new way to relax the constraints. This iterative process continues until the resulting mathematical program is infeasible, which means that no other way to relax the constraints exists. The algorithm described here is very similar to the one used in Mousseau et al. (2003b) and Mousseau et al. (2006) to resolve inconsistencies among a set of inconsistent preference statements.
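A minimal sketch of this iterative search is given below, assuming the flattened list of constraints and relaxations from the previous sketch. PuLP with its bundled CBC solver stands in here for the CPLEX setup reported in Section 10.5.2.3; the function names are our own.

import pulp

def minimal_relaxation_sets(constraints, n=27, big_m=1000):
    """Enumerate all minimal sets of constraint deletions, as in (10.3).

    `constraints` is the flattened list of (coeffs, sense, rhs) triples
    (original constraints and all their relaxations pooled together)."""
    x = [pulp.LpVariable("x%d" % i, cat=pulp.LpBinary) for i in range(n)]
    y = [pulp.LpVariable("y%d" % k, cat=pulp.LpBinary)
         for k in range(len(constraints))]
    prob = pulp.LpProblem("relaxation", pulp.LpMinimize)
    prob += pulp.lpSum(y)                        # min z = sum_k y_k
    for k, (coeffs, sense, rhs) in enumerate(constraints):
        lhs = pulp.lpSum(c * xi for c, xi in zip(coeffs, x))
        if sense == ">=":
            prob += lhs + big_m * y[k] >= rhs    # y_k = 1 "deletes" constraint k
        else:
            prob += lhs - big_m * y[k] <= rhs
    solutions = []
    while prob.solve(pulp.PULP_CBC_CMD(msg=False)) == pulp.LpStatusOptimal:
        deleted = [k for k in range(len(y)) if y[k].value() > 0.5]
        solutions.append(deleted)
        # Exclusion cut: forbid rediscovering the same deletion set.
        prob += pulp.lpSum(y[k] for k in deleted) <= len(deleted) - 1
    return solutions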
10.5.2.3 Results on the 2009 Data Set

In order to identify the possible "compromises" among the list of constraints for the set of 2009 applicants, we applied the above algorithm. The mathematical program
to be solved at each iteration (to identify one minimal set of constraint relaxations) had 95 binary variables (27 $x_i$ variables, one for each student assigned to $C_3$, and 68 $y_k$ variables, one for each constraint or relaxation). The algorithm was implemented using CPLEX v.11 and run on an Intel Core Duo CPU at 3 GHz with 2 GB RAM; the CPU time was 0.76 s. The limited computing time (for 95 binary variables) makes it conceivable to consider even larger data sets. On the given data, the algorithm ran four iterations, that is to say, four alternative minimal sets of constraint relaxations were identified. These four sets correspond to the only four ways to resolve the infeasibility of the constraints on the group of selected students (total number of admitted students, number of students in each stream and professional track, and gender balance). These solutions are described in the following table:
       #Students (girls/boys)   #Stream (1/2/3/4)   #Prof. track (E/SF/IPM/OM)
Sol1   50 (15/35)               (16/14/10/10)       (13/20/8/9)
Sol2   50 (16/34)               (17/13/10/10)       (12/21/8/9)
Sol3   50 (17/33)               (16/13/11/10)       (11/22/8/9)
Sol4   50 (18/32)               (16/13/11/10)       (12/23/7/8)
To interpret the solutions stemming from the algorithm, one should consider the four ways to relax the constraints concerning the group of selected students:

• Accepting more than 50 students
• Accepting a worse gender balance
• Accepting a worse balance among streams
• Accepting a worse balance among professional tracks
In the case of this data set, the interpretation of these four solutions is straightforward. It appears clearly in the four solutions that the total number of admitted students (50) and the minimum number of students per stream (10) fully conform to the DM's wishes. Moreover, degrading the solution on these two aspects does not open any possibility of improving the situation on the constraints concerning gender or professional tracks. The only way to resolve the infeasibility of the set of constraints involves the gender balance and the balance among professional tracks. If the DM wishes to admit the maximum number of girls (18), he will have to accept 23 students in the second professional track (Sol4); conversely, if he wants to limit the number of admitted students in the second professional track to 20, this will limit the number of admitted girls to 15 (Sol1). There also exist two intermediate solutions (Sol2 and Sol3) that provide reasonable compromises between these two issues. Finally, considering these four solutions, the DM chose to admit 18 girls (Sol4). The list of selected students is provided in Appendix 2.
10.6 Insights from the Model Implementation of Staffing Problems

The first tangible result concerns the set of selected students stemming from the use of the methodology. In this chapter, we provide results for the 2009 data set, but the decision process makes it possible to repeat the methodology each year in a consistent manner. Moreover, the annual use of this methodology guarantees that the selection policy remains consistent over time. The development of the methodology has also established criteria for the evaluation of students, together with precise evaluation scales, and their use in the decision process. The methodology was successfully adopted by the DM in 2010 and is now considered a standard tool for the student selection process. A key success factor in the adoption of the methodology has been the strong involvement of the DM in its elaboration.

Second, the development of the methodology produced results that are interesting in their own right. The four categories in Electre Tri, along with their semantics and multicriteria frontiers, help the DM clarify the requirements to enter the IE major. Setting values for criteria weights and veto thresholds forced the DM to consider the relative importance he attaches to the criteria, and the result of this preference elicitation process makes explicit how he compares criteria in terms of importance.

Another insight for the DM is linked to the data and the structure of the decision process. Indeed, the implementation of the model shows that there is some conflict between group and individual issues. For instance, it is not always possible to admit 50 students who satisfy all the constraints concerning gender balance, balance among streams, and balance among professional tracks. This insight highlights that the DM may have to relax constraints to make his decision, and the methodology forces him to make these choices explicit.

At the process level, the use of a decision model makes the process more transparent, because it is based on a well-known and sound methodology. The results support the DM in providing explanations about the selection process to the various stakeholders (students, heads of other majors, and the dean of studies). Yet there are still some controversial aspects with respect to the constraint relaxation in the second phase of our methodology. Indeed, why should one constraint be relaxed and not the others? Are there any priorities among constraints? Here, the methodology forces the DM to make explicit choices about which constraints to relax, whereas an informal process would make it possible for him to select students without acknowledging the underlying compromises among constraints. It might then be useful to support the DM in the selection of the constraints to be relaxed, instead of merely identifying the possible relaxations among which to choose. Another aspect of the decision process which could need decision support is the definition of the waiting list (a ranking procedure for the students assigned to $C_3$ who are not admitted could be a reasonable solution).

Methodologically, the proposed selection process puts a strong focus on the individual evaluation of students (Electre Tri method at step 1) and considers group constraints only in a second step. An advantage of this formulation is that if the
number of students assigned to $C_3$ and $C_4$ does not reach the quota (50), the DM is recommended to admit fewer than 50 students (only those who fulfill the admission requirements). This is one of the characteristics of our approach, whereas standard portfolio decision analysis methodologies would rather select the best group of exactly 50 students.

We believe that the proposed methodology has a wider range of applicability. It can obviously be applied to student selection problems in other academic institutions. It should be noted that the methodology is also relevant if the alternatives to be selected are not students but projects, investments, etc. In fact, the proposed methodology does not rely on any specificities of student evaluation, but rather uses Electre Tri, a generic multicriteria evaluation procedure. Moreover, the second phase of the methodology builds portfolios considering constraints involving attributes (gender, professional track, etc.), and this second phase can be described independently of the semantics of these attributes.
10.7 Conclusion

The case studied is related to the selection of students within a single academic institution (ECP in Paris). Selected students are not assigned to different tasks but enter together the same activity field (the IE major). Moreover, selected students are not directly compared with each other, but rather to norms which define admissibility; hence, the methodology proceeds through an overall evaluation, without any ranking. This individual evaluation is complemented, in a second phase, by the introduction of constraints on the set of selected students.

In the first phase, students are evaluated individually with an Electre Tri model; the choice of this outranking-based noncompensatory model is motivated by the qualitative nature of the criteria. The preference parameters are elicited indirectly from preference statements provided by the DM and based on former student selection processes. The set of constraints on the group of selected students provided by the DM cannot be fully respected; the second phase therefore consists in identifying alternative sets of constraint relaxations which lead to feasible solutions. The DM should then choose among these possibilities.

A direct extension of this research work is to support the elaboration of the waiting list. This could be performed by introducing a ranking model applied to the students who fit the admission requirements but are not admitted. Moreover, an interesting line of work to be pursued could be to integrate constraints on category size directly into the Electre Tri model and to infer such a model from assignment examples. Such an approach would be of particular interest if the profiles defining the categories are considered as variables in the inference process, that is, inferred from assignment examples rather than directly elicited from the DM.

In addition to the case study concerning student selection, this chapter's contribution lies in the proposed approach, which could be applied more generally. The model implementation is relevant to student selection, but could be extended
to other contexts (in other universities) and to other types of selection problems in which the alternatives represent, for example, the investment projects of a corporate firm; the projects to be financed should be individually good, but also balanced across sectors, fairly distributed among the various entities of the group, etc. More generally, this model could be applied to any other selection problem involving individual evaluation (to check admissibility requirements) and constraints on the group of selected alternatives.

Acknowledgments The authors are thankful to the editors for their fruitful comments and remarks on the initial draft, which greatly improved the final version of this chapter.
Appendix 1: Definition of the Qualitative Evaluation Criteria

• Motivation: Perceived motivation of the student in the choice of the IE major, as judged by the DM through the interview and by reading the cover letter:

1. "I come to IE because it is the only non-technical option at ECP; I don't know how to find my way"; sloppy letter, graphically and in terms of content
2. She/he doesn't know exactly why (s)he wants the IE major; cover letter correctly written but not revealing a particularly strong motivation
3. Motivated student who looks inspired by the offer of the major; however, (s)he could consider other options
4. Motivated student, able to project herself/himself into the future (future employability and the academic year) and who clearly expresses how the IE major corresponds to his/her expectations
5. Highly motivated student, saying that (s)he is willing to invest in the "life of the major" (students' delegate or other responsibility) and showing in her/his letter and interview that the choice of the IE major is the natural continuation of his/her courses and enables her/him to formulate a clear career plan

• Professional Career Plan: Ability of the student to articulate his/her future professional project with his/her previous achievements (courses, etc.). The evaluation takes into account the logic, consistency, and variety of what (s)he has done previously, the reasons for his/her choice to come to IE, and his/her projects and employability. The coherence of the choice of major with the professional track is considered here:

1. Student unable to express, or who has, no career plans; has never visited a factory; unaware of what constitutes a factory and what it means to work in one
2. Professional career plan still unclear, despite the training and professional experiences
3. Professional career plan starting to be worked on while not yet specific enough, but (s)he is able to provide some elements of where (s)he wants to move to
4. The student is clear in expressing her/his projects, as proved by internships, but still hesitates between different career paths (which are clearly specified)
5. The professional career plan is clear and well defined; (s)he has done a series of courses of various sorts and other experiences that are part of this logic

• Maturity/Personality: Maturity and openness of the student that brought her/him to focus on Industrial Engineering in a large sense and, beyond it, on general societal concerns:

1. Student whose maturity is not asserted, who justifies her/his answer by default, or very vaguely or even evasively
2. Student still fairly young at heart: "I'm hesitating about what I want to do, but I have some ideas on certain types of jobs, so I want to test the idea through this major"; (s)he has not done internships and wants to be admitted just to get a clearer picture of IE
3. Student who remains a bit unsure about his/her career choice but realizes (s)he must move forward on this issue, and who has already forged a few elements of views in his/her professional experiences
4. Mature, dynamic student; the IE major year should still allow her/him to mature further and arrive on the job market very positively
5. Mature student who shows dynamism, is able to express her/his choices for her/his projects clearly, and shows a cultural openness beyond the strict academic requirements at ECP

• General knowledge of Industrial Engineering and its career opportunities: Ability to define what industrial engineering is, and in particular knowledge of the contents of the Industrial Engineering major at ECP and its various outcomes:

1. Student cannot describe what Industrial Engineering is, knows nothing of the contents of the major, and does not know the jobs of Industrial Engineering
2. Student able to express a veneer of knowledge about Industrial Engineering, but unable to go beyond generalities
3. Student with some knowledge of Industrial Engineering and its jobs in general, but without a detailed vision of these elements; (s)he doesn't necessarily know the contents of the major
4. Student well aware of what Industrial Engineering may represent, but whose vision may still remain vague about the contents of the option and/or its outcomes
5. Student very aware of what the field of Industrial Engineering constitutes, having ascertained precisely the content of the option; (s)he can even specify the choice of electives and knows in detail the various opportunities in terms of jobs
Appendix 2: Evaluation of 2009 Students

Student  g1(ai)  g2(ai)  g3  g4  g5  g6  Gender  Prof. track  Stream  Category  Accepted
a1       11.87   12.66   4   4   4   4   M       E            1       C4        1
a2       13.82   13.82   4   4   4   3   M       SF           2       C4        1
a3       15.66   15.66   5   4   4   5   F       SF           1       C4        1
a4       13.77   13.77   4   4   4   4   M       SF           4       C4        1
a5       13.72   13.72   1   1   2   1   M       SF           3       C1        0
a6       12.8    12.8    5   4   5   4   F       SF           3       C4        1
a7       14.15   14.15   4   4   4   3   M       E            2       C4        1
a8       13.96   13.96   3   2   2   2   M       SF           2       C2        0
a9       12.87   12.87   1   0   1   1   M       E            4       C1        0
a10      12.44   12.65   3   2   3   2   M       E            1       C3        0
a11      13.43   13.43   3   3   2   1   M       IPM          4       C1        0
a12      12.63   12.63   3   3   4   3   F       SF           3       C3        1
a13      11.5    12.31   3   3   3   3   M       IPM          2       C3        0
a14      12.64   12.64   2   3   2   2   M       SF           4       C2        0
a15      13.63   13.63   4   5   5   4   M       OM           1       C4        1
a16      13.38   13.87   3   4   4   3   M       OM           1       C3        0
a17      13.12   13.05   5   4   5   3   M       SF           1       C4        1
a18      13.68   13.68   5   5   5   5   M       OM           3       C4        1
a19      12.87   12.87   3   3   2   2   F       SF           4       C3        1
a20      13.37   13.37   3   3   4   4   M       IPM          1       C3        1
a21      13.45   13.45   2   1   1   2   M       SF           3       C1        0
a22      12.42   12.42   2   2   2   2   M       SF           2       C2        0
a23      13.33   13.33   4   3   4   3   M       SF           1       C4        1
a24      13.18   13.18   5   4   5   5   M       OM           2       C4        1
a25      12.68   12.68   3   2   3   3   M       SF           3       C3        0
a26      14.08   14.08   4   5   4   3   M       SF           4       C4        1
a27      12.96   12.96   3   3   3   2   M       SF           4       C3        0
a28      12.73   12.73   4   4   5   2   F       SF           1       C3        1
a29      12.59   12.59   3   3   4   3   M       E            2       C3        1
a30      12.68   13.39   5   4   4   4   M       SF           4       C4        1
a31      14.32   14.29   4   5   3   3   M       E            3       C4        1
a32      13.52   14.32   4   4   5   4   M       E            3       C4        1
a33      13.08   13.08   2   2   2   1   M       OM           4       C1        0
a34      13.54   13.54   3   3   4   3   F       SF           2       C3        1
a35      13.36   13.36   4   4   4   3   M       OM           3       C4        1
a36      13.78   13.78   4   4   4   3   F       SF           3       C4        1
a37      12.17   12.17   3   3   3   3   M       SF           1       C3        0
a38      11.55   11.55   1   1   1   1   M       E            4       C1        0
a39      12.81   12.81   1   1   1   1   F       SF           2       C1        0
a40      12.78   12.87   4   3   4   4   F       SF           3       C3        1
a41      13.24   13.24   1   2   1   1   M       OM           2       C1        0
a42      12.42   12.67   2   3   3   4   M       IPM          1       C3        1
a43      14.16   14.16   4   4   4   4   F       SF           1       C4        1
a44      12.57   12.57   4   4   5   3   M       OM           2       C4        1
a45      12.86   12.86   2   1   1   1   M       IPM          4       C1        0
a46      12.73   12.73   3   3   4   3   M       E            3       C3        1
a47      12.75   12.75   3   3   4   4   F       SF           1       C3        1
a48      12.93   12.87   3   4   5   3   M       IPM          2       C3        1
a49      13.54   13.54   3   3   4   3   F       SF           4       C3        1
a50      12.65   12.65   3   4   4   4   M       E            2       C3        1
a51      12.61   12.31   1   1   2   1   M       E            3       C1        0
a52      11.01   12.03   3   3   3   3   M       E            3       C3        1
a53      12.21   14.11   4   4   4   3   M       IPM          1       C4        1
a54      11.43   12.41   2   3   2   3   F       SF           4       C3        1
a55      13.02   14.12   4   4   4   5   F       SF           4       C4        1
a56      10.94   12.24   3   2   1   4   M       SF           3       C1        0
a57      11.87   12.77   3   3   4   3   M       E            2       C3        1
a58      12.87   14.24   3   4   4   3   M       IPM          1       C3        1
a59      12.40   13.84   5   5   5   4   M       OM           1       C4        1
a60      12.43   13.43   4   3   3   3   F       SF           4       C3        1
a61      11.51   13.28   4   4   3   3   M       E            4       C4        1
a62      11.93   13.43   4   4   4   3   M       SF           3       C4        1
a63      12.54   13.77   1   2   2   1   M       IPM          1       C1        0
a64      11.84   13.64   4   4   4   4   F       SF           2       C4        1
a65      13.54   13.01   1   1   1   2   M       IPM          4       C1        0
a66      11.45   13.03   3   4   4   3   F       OM           2       C3        1
a67      11.71   12.11   2   2   2   1   M       IPM          2       C1        0
a68      12.18   13.51   2   2   3   3   F       SF           1       C3        1
a69      12.53   14.77   4   4   5   4   F       IPM          1       C4        1
a70      11.43   12.53   2   2   1   2   M       SF           2       C1        0
a71      12.33   13.15   2   2   2   2   M       E            4       C2        0
a72      13.72   13.72   2   1   1   1   M       IPM          3       C1        0
a73      13.82   13.91   2   3   4   5   M       E            1       C4        1
a74      12.52   12.56   3   4   2   3   M       E            4       C3        1
a75      12.96   13.54   5   4   4   4   M       OM           2       C4        1
a76      10.71   12.37   3   2   1   4   M       IPM          2       C1        0
Appendix 3: Mathematical Program to Infer Electre Tri Weights

$$\max \alpha \qquad (10.4)$$

subject to

$$\alpha \le x_i, \quad i \in \{59, 68, 10, 61\} \qquad (10.5)$$
$$\alpha \le y_i, \quad i \in \{10, 61, 22\} \qquad (10.6)$$
$$\sum_{j :\, g_j(a_i) \ge g_j(b_3)} w_j - x_i = \lambda, \quad \text{for } i \in \{59, 68\} \qquad (10.7)$$
$$\sum_{j :\, g_j(a_i) \ge g_j(b_2)} w_j - x_i = \lambda, \quad \text{for } i \in \{10, 61\} \qquad (10.8)$$
$$\sum_{j :\, g_j(a_i) \ge g_j(b_3)} w_j + y_i = \lambda, \quad \text{for } i \in \{10, 61\} \qquad (10.9)$$
$$\sum_{j :\, g_j(a_{22}) \ge g_j(b_1)} w_j - x_{22} = \lambda \qquad (10.10)$$
$$\sum_{j :\, g_j(a_{22}) \ge g_j(b_2)} w_j + y_{22} = \lambda \qquad (10.11)$$
$$\sum_{j=1}^{6} w_j = 1 \qquad (10.12)$$
$$w_j \in [0, 0.5], \quad \forall j \in \{1, 2, \ldots, 6\} \qquad (10.13)$$
$$w_2 \le w_j, \quad \forall j \in \{3, 4, 5\} \qquad (10.14)$$
$$\lambda \in [0.5, 1] \qquad (10.15)$$
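The program can be written down directly with an LP modeler. In the sketch below, the coalitions, i.e., the sets of criteria on which each reference student reaches the corresponding profile, are placeholders, since they depend on the Table 10.1 limits; only the structure (10.4)-(10.15) is taken from the text, and PuLP is our own choice of tool.

import pulp

# Placeholder coalitions: criterion indices j (0-based) with g_j(a_i) >= g_j(b_h).
coal_b3 = {59: [1, 2, 3, 4, 5], 68: [1, 2, 3, 5], 10: [2, 3], 61: [3, 4]}
coal_b2 = {10: [0, 1, 2, 3, 4], 61: [1, 2, 3, 4, 5], 22: [0, 1]}
coal_b1 = {22: [0, 1, 2, 3, 4, 5]}

prob = pulp.LpProblem("infer_weights", pulp.LpMaximize)
w = [pulp.LpVariable("w%d" % (j + 1), lowBound=0, upBound=0.5) for j in range(6)]
lam = pulp.LpVariable("lam", lowBound=0.5, upBound=1)           # (10.13), (10.15)
x = {i: pulp.LpVariable("x%d" % i) for i in (59, 68, 10, 61, 22)}
y = {i: pulp.LpVariable("y%d" % i) for i in (10, 61, 22)}
alpha = pulp.LpVariable("alpha")

prob += alpha                                                   # (10.4)
for i in (59, 68, 10, 61):
    prob += alpha <= x[i]                                       # (10.5)
for i in (10, 61, 22):
    prob += alpha <= y[i]                                       # (10.6)
for i in (59, 68):                                              # (10.7): a_i -> C_4
    prob += pulp.lpSum(w[j] for j in coal_b3[i]) - x[i] == lam
for i in (10, 61):                                              # (10.8)-(10.9): a_i -> C_3
    prob += pulp.lpSum(w[j] for j in coal_b2[i]) - x[i] == lam
    prob += pulp.lpSum(w[j] for j in coal_b3[i]) + y[i] == lam
prob += pulp.lpSum(w[j] for j in coal_b1[22]) - x[22] == lam    # (10.10)
prob += pulp.lpSum(w[j] for j in coal_b2[22]) + y[22] == lam    # (10.11)
prob += pulp.lpSum(w) == 1                                      # (10.12)
for j in (2, 3, 4):
    prob += w[1] <= w[j]                                        # (10.14): w_2 <= w_3, w_4, w_5
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([v.value() for v in w], lam.value())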
References

Archer N, Ghasemzadeh F (1999) An integrated framework for project portfolio selection. Int J Proj Manag 17(4):73–88
Bafail A, Moreb A (1993) Optimal allocation of students to different departments in an engineering college. Comput Ind Eng 25(1–4):295–298
Bouyssou D, Marchant T (2007a) An axiomatic approach to noncompensatory sorting methods in MCDM, I: The case of two categories. Eur J Oper Res 178(1):217–245
Bouyssou D, Marchant T (2007b) An axiomatic approach to noncompensatory sorting methods in MCDM, II: More than two categories. Eur J Oper Res 178(1):246–276
Dias L, Climaco J (1999) On computing ELECTRE's credibility indices under partial information. J Multicrit Decis Anal 8(2):74–92
Dias L, Mousseau V, Figueira J, Clímaco J (2002) An aggregation/disaggregation approach to obtain robust conclusions with ELECTRE TRI. Eur J Oper Res 138(2):332–348
Figueira J, Mousseau V, Roy B (2005) ELECTRE methods. In: Figueira J, Greco S, Ehrgott M (eds) Multiple criteria decision analysis: state of the art surveys. Springer, Boston, Dordrecht, London, pp 133–162
Ghasemzadeh F, Archer NP (2000) Project portfolio selection through decision support. Decis Support Syst 29(1):73–88
Keisler J (2008) The value of assessing weights in multi-criteria portfolio decision analysis. J Multicrit Decis Anal 15(5–6):111–123
Kuncel N, Hezlett L, Ones D (2001) Assigning students to academic majors. Psychol Bull 127(1):162–181
Leyva Lopez JC (2005) Multicriteria decision aid application to a student selection problem. Pesquisa Operacional 25(1):45–68
Liesiö J, Mild P, Salo A (2008) Robust portfolio modeling with incomplete cost information and project interdependencies. Eur J Oper Res 190(3):679–695
Miyaji I, Ohno K, Mine H (1988) Solution method for partitioning students into groups. Eur J Oper Res 33(1):82–90
Montibeller G, Franco LA, Lord E, Iglesias A (2009) Structuring resource allocation decisions: a framework for building multi-criteria portfolio models with area-grouped options. Eur J Oper Res 199(3):846–856
Mousseau V, Dias L (2004) Valued outranking relations in ELECTRE providing manageable disaggregation procedures. Eur J Oper Res 156(2):467–482
Mousseau V, Slowinski R (1998) Inferring an ELECTRE TRI model from assignment examples. J Global Optim 12(2):157–174
Mousseau V, Slowinski R, Zielniewicz P (2000) A user-oriented implementation of the ELECTRE TRI method integrating preference elicitation support. Comput Oper Res 27(7–8):757–777
Mousseau V, Dias L, Figueira J (2003a) On the notion of category size in multiple criteria sorting models. DIMACS research report 2003-02, Rutgers University
Mousseau V, Dias L, Figueira J, Gomes C, Clímaco J (2003b) Resolving inconsistencies among constraints on the parameters of an MCDA model. Eur J Oper Res 147(1):72–93
Mousseau V, Dias L, Figueira J (2006) Dealing with inconsistent judgments in multiple criteria sorting models. 4OR 4(3):145–158
Phillips L, Bana e Costa C (2007) Transparent prioritisation, budgeting and resource allocation with multi-criteria decision analysis and decision conferencing. Ann Oper Res 154:51–68
Polyashuk A (2006) Formulation of portfolio selection problem with multiple criteria. J Multicrit Decis Anal 13(2–3):135–145
Reeves G, Hickman E (1992) Assigning MBA students to field study project teams: a multicriteria approach. Interfaces 22(5):52–58
Roy B (1991) The outranking approach and the foundations of ELECTRE methods. Theory Decis 31:49–73
Saber H, Ghosh J (2001) Assigning students to academic majors. Omega 29(6):513–523
Weitz R, Jelassi M (1992) Assigning students to groups: a multi-criteria decision support system approach. Decis Sci 23(3):746–757
Yeh H (2003) The selection of multiattribute decision making methods for scholarship student selection. Int J Sel Assess 11(4):289–296
Chapter 11
A Resource Allocation Model for R&D Investments: A Case Study in Telecommunication Standardization

Antti Toppila, Juuso Liesiö, and Ahti Salo
Abstract Industrial firms need to adjust their R&D activities in response to changing perceptions about the business relevance and success probabilities of these activities. In this chapter, we present a decision model for guiding the allocation of resources to a portfolio of R&D activities. In our model, the dynamic structure of the decision problem is captured by decision trees, and interval estimates are employed to describe uncertainties about the sales parameters. Possible interactions among the activities – such as synergy and cannibalization effects – are accounted for by approximating their impact. We also describe how this model was deployed in a major telecommunication company and how the company has adopted the model into regular and extensive operational use when allocating resources to standardization activities.
11.1 Introduction

High-technology companies invest extensively in research and development (R&D) to enhance their competitiveness through technological advances. In practice, much of R&D is carried out in projects that differ in scope, relevance, novelty, phase, risk, and return (Henriksen and Traynor 1999; Brummer et al. 2011). Large firms, in particular, launch, continue, and discontinue projects with the aim of building and maintaining a project portfolio that satisfies resource constraints and responds to relevant business objectives. Often, the resulting R&D portfolio selection problem is complex enough to benefit from the use of formal decision modeling. This is one
of the reasons why numerous resource allocation models have been proposed for R&D portfolio selection (see, e.g., Kleinmuntz 2007; Henriksen and Traynor 1999). In this chapter, we present a decision model for allocating resources to standardization activities in a large telecommunication company. By definition, standardization activities contribute to the establishment of standards which provide benefits such as compatibility and interoperability. These activities resemble traditional R&D investments, and thus resource allocation models for R&D portfolios are, in principle, applicable. Yet there are important differences as well. First, standardization activities may take longer to execute than most R&D projects and, as a result, the resource allocation problem becomes effectively that of adjusting the overall portfolio rather than that of selecting new projects. Second, completed standardization activities may generate business impacts only after a considerable delay so that these impacts can be quite uncertain. Third, these impacts may crucially be dependent on what other standardization activities have been successful. Motivated by these observations, our decision model differs from traditional R&D portfolio selection models in that it responds to the need (1) to adjust the activity portfolio over extended periods of time, (2) to capture uncertainties concerning later developments after the completion of standardization activities, and (3) to account for interactions among the activities.

In the methodologically oriented literature on resource allocation, decision models for the selection of R&D portfolios are often deployed as one-shot interventions (e.g., Lindstedt et al. 2008; Santiago and Vakili 2005; Henriksen and Traynor 1999). But when the projects are of different duration so that new project proposals arise before earlier projects are completed, these models can also be used for adjusting portfolios by evaluating ongoing and prospective projects together before taking decisions about which projects are launched, continued, or killed. In many cases, such selection models can be redeployed with little extra effort by updating the relevant model parameters (Salo and Liesiö 2006). Yet, when applying such models with updated parameters, it is necessary to estimate nonlinear changes in the rates of resource consumption and technological progress, to ensure that the marginal costs of further technological progress are correctly assessed and that the resources can be deployed where they are likely to have the greatest positive impact.

R&D project selection models typically involve parameters about the projects' expected resource consumption levels, future cash flows, and success probabilities. These parameters are usually elicited from experts, because the uniqueness of different projects offers only limited possibilities for making use of other sources of information. However, a challenge in R&D management is that it may be difficult or impossible for the experts to provide point estimates that they would feel confident about, given the large uncertainties that characterize R&D activities and their impacts. In principle, uncertainties about these parameters could be described with probability distributions that are assessed with standard elicitation techniques (see, e.g., Hora 2007).
Yet a complication of such an approach is that the experts may not be able to provide reliable estimates of such distributions, particularly if the R&D projects generate impacts after a long delay or if these impacts are complicated (e.g., Salo and Hämäläinen 1992, 2010). In consequence, it may be
useful to characterize uncertainties with the help of confidence intervals and to apply Preference Programming methods, which have been deployed successfully to address uncertainties without making specific assumptions about probability distributions (e.g., Salo and Hämäläinen 1992; Arbel 1989; Liesiö et al. 2007; Könnölä et al. 2007; Brummer et al. 2011).

A further challenge in the management of R&D portfolios is that there may exist interactions such as synergy or cannibalization effects among the projects (Kleinmuntz 2007; Stummer and Heidenberger 2003; Liesiö et al. 2008; Fox et al. 1984). For instance, two projects may be related to substitute technologies, in which case the successful completion of both projects would offer less value than the sum of their individual values. Often, it is sufficient to account for interactions by judgmental tradeoffs based on expert opinion (Phillips and Bana e Costa 2007). However, when the number of project dependencies or interactions grows, such judgments become increasingly difficult to assess (Fox et al. 1984). It may therefore be relevant to build an approximative model of these interactions to better understand their impacts at the portfolio level.

In relation to these challenges, our decision model for allocating resources to standardization activities is novel in that it combines three methodological features, most notably (1) the recurrent use of the model, where the impacts of alternative resource levels are systematically assessed, (2) the modeling of uncertainties through confidence intervals and Preference Programming methods, and (3) the approximative modeling of interactions among the activities. These features are discussed in the context of a case study. The methodological results of the case study are now in full operational use.

The rest of this chapter is structured as follows. Section 11.2 reviews resource allocation models. Section 11.3 describes the context of our case study, i.e., telecommunication standardization, and presents the resource allocation model. Section 11.4 concludes.
11.2 Resource Allocation Models in R&D Management

There exists a wide range of resource allocation models for R&D project selection (for an overview and references see, e.g., Henriksen and Traynor 1999; Kleinmuntz 2007). Many of these models are variants of the capital budgeting model in that they (1) consider R&D projects that are either funded at the proposed funding level or discarded, (2) impose a budget constraint that limits the selection of projects, and (3) provide recommendations as to which are the "best" projects to fund (for details, see Kleinmuntz 2007). Some of these assumptions have been relaxed. For instance, in their resource allocation model for SmithKline Beecham, a large pharmaceutical company, Sharpe and Keelin (1998) relax the assumption that projects can either be fully funded or discarded; instead, they consider four alternative variants of each project, namely (1) the current plan,
(2) a buy-up option, (3) a buy-down option, and (4) a liquidation plan. They then identify projects that yield only relatively small benefits compared to their cost and derive recommendations as to how resources should be reallocated to other projects. They report that the evaluation of these alternatives encouraged creative decisions and improved the understanding of which parts of each project offered most value to the company.

Santiago and Vakili (2005) present a model where the R&D process is divided into a development phase and a commercialization phase. They argue that because of asymmetrical information, the development phase cannot easily attract funds from outside the company, while funds can be raised from the market during the commercialization phase. They model this information asymmetry as a staged development process where the commercialization of each R&D project is contingent on the success of the development phase. Thus, they effectively have two budget constraints, of which the first is a hard constraint on the consumption of development resources, followed by a second which is induced by the market on commercialization resources. These budget constraints are related to each other through contingency constraints.

The elicitation of model parameters by experts may be prone to errors also because vested interests may cause some projects to be unfairly evaluated (e.g., Salo and Liesiö 2006; Sharpe and Keelin 1998). Such tendencies can be partly mitigated by eliciting incomplete information about numerical assessments, because it may be easier to agree on reasonable bounds of interval-valued statements or to obtain ordinal information than to elicit point estimates for probabilities or other numerical parameters. That is, even if it is not possible to reach a consensus on what the probability of a given scenario is, it may be possible to accept statements such as "the probability of scenario 1 is less than 0.1" or "the probability of scenario 1 is greater than that of scenario 2."

Robust Portfolio Modeling (RPM; Liesiö et al. 2007, 2008) is a framework for portfolio selection under incomplete information. In RPM, uncertainties about parameters are modeled by forming feasible sets that contain all the plausible parameter values (e.g., defined as confidence intervals about the uncertain parameters). The novelty of the RPM approach is that it determines all the nondominated portfolios, i.e., those portfolios which are not dominated by any other portfolio that would yield higher or equal value for all the parameter values contained in the feasible sets. The RPM approach then examines these nondominated portfolios and determines which projects are contained in all, some, or none of the nondominated portfolios, which results in a classification of projects into core, borderline, and exterior projects, respectively. This approach has been employed successfully in several case studies involving considerable uncertainties, such as Könnölä et al. (2007), Lindstedt et al. (2008), and Brummer et al. (2011). For example, Könnölä et al. (2007) report a case study on the screening of innovation ideas where the ratings of several respondents were analyzed by identifying those ideas that were among the best ones (core) based on the use of mean ratings (consensus-oriented approach) and those which were more controversial in that they were brought to the fore when using the variances of the respondents' ratings (dissensus-oriented approach).
R&D projects often have interactions or interdependencies (Phillips and Bana e Costa 2007; Stummer and Heidenberger 2003; Liesiö et al. 2008; Fox et al. 1984). These interactions can be categorized into (1) cost or resource utilization interactions; (2) outcome, probability, or technical interactions; and (3) benefit, payoff, or effect interactions (Fox et al. 1984). The easiest interactions to model are often those that relate to costs or to the utilization of resources, because costs are usually known in advance and the cause of the interaction is therefore known. For instance, if two projects make use of the same piece of equipment, this could be modeled as a cost-saving interaction. In contrast, interactions that pertain to outcomes, success probabilities, or technical matters can be more difficult to model. In principle, such interactions can be captured through joint distributions that are determined with the help of copulas, for instance (Clemen and Reilly 1999). In the presence of benefit, payoff, or impact interactions, the value of a project depends on which other projects are selected. For instance, if two projects are related to the same technology, they may compete for the same customers. These interactions have not received much attention in the literature (Stummer and Heidenberger 2003; Fox et al. 1984), possibly because it is often difficult to specify them in detail or to separate their impacts from those of the actual projects, meaning that the exact specification and assessment of all significant interactions would be tedious (see Fox et al. 1984). Moreover, authors such as Phillips and Bana e Costa (2007) argue that interactions do not affect decision recommendations very often and hence only the strongest interactions need to be accounted for, at least in a first approximation. But when the number of interactions grows, it becomes harder to determine which interactions are significant, especially if the impact of one interaction is enhanced by the presence of another. Thus, in the presence of multiple interactions, it may be helpful to build an approximate model of the interactions to better understand their overall impact on the resource allocation decision.
11.3 Resource Allocation Model for Standardization in Telecommunications

Participation in technical standardization provides many benefits to companies, such as technical compatibility and interoperability for meeting consumer expectations (see, e.g., Glimstedt 2001). The outcome of technical standardization is a (technical) standard, defined as "... a recording of one or more solutions to one or more problems of matching persons, objects, processes or any combination thereof, and which is intended for common and repeated use in any technical field" (Lea and Hall 2004). In telecommunication, there are hundreds of standardization activities that companies can participate in. The benefits and costs of standardization activities differ, and it is not viable to participate in all such activities to secure benefits from standardization. The selection of which standards a company should participate in is essentially an R&D portfolio selection problem.
Compared to traditional R&D portfolio selection, there are additional modeling challenges:

1. Standards are often interrelated. For instance, standards may relate to substitute technologies, which means that two competing standards may not coexist for very long.
2. The benefits of standardization do not accrue immediately and may impact sales only after several years. The benefits are contingent on the success of the standard and also on how well the company can exploit the opportunities of standardization.

This section describes the development of a standardization resource allocation model for a large telecommunication company. The company is involved in hundreds of standardization activities and has a dedicated standardization unit. Apart from the modeling challenges, the design and development of the decision model involved organizational issues. Specifically, because standards are related to different technologies, the required expertise for elicitation was distributed across different parts of the company: technical standardization experts had the best technical know-how of the standards, whereas managers and road-mappers were best aware of market conditions and strategic considerations. For this reason, the assessment of model parameters needed input from numerous experts, including managers with limited possibilities to take part in lengthy face-to-face meetings. The experts were also geographically distributed, which constrained the opportunities for gathering expertise through decision conferencing in face-to-face meetings. These challenges implied that a requisite model should be simple in the sense that the parameters would need to be correctly and coherently elicited with minimal training and with limited possibilities to get support in understanding the decision model.
11.3.1 Rationales for Standardization

As a part of our case study, the company's standardization experts identified three value-adding objectives: (1) maximization of revenues, (2) alignment of the standardization portfolio with the company strategy, and (3) diversification of risks. These objectives and the means to achieve them are summarized in Table 11.1.

In revenue maximization, standardization can be seen as a support activity for product sales. The consensus was that the impacts of standardization on expected sales were mainly driven by widely adopted technologies, that is, architectures (product and/or process) that become widely accepted as industry standards (also known as dominant designs; see, e.g., Anderson and Tushman 1990). In this setting, standardization contributes to the development of widely adopted technologies (Koski and Kretschmer 2007). Because the emergence of widely adopted technologies has an impact on the survival of the firm (Suarez and Utterback 1995) and yields revenues to the holder of proprietary rights to the technology by establishing a natural monopoly (Schilling 2008), the value of standardization can be approximated using this link to future sales.
Table 11.1 Value-adding objectives in standardization and means for achieving them

Objectives                                         Means: Standardization            Means: Development
Maximize revenues and reduce market uncertainty    Achieve compatibility             Pursue widely adopted technologies
Align portfolio with company strategy              Promote own technology            Increase use of company's technology
Diversify risks                                    Participate in key technologies   Acquire complementary assets
For creating a balanced portfolio, a company must also consider objectives that cannot be expressed in terms of future sales. Proprietary technologies are often made available to other parties so that they can introduce products that comply with a standard, which can lead to a bandwagon effect where one technology quickly becomes adopted by the industry (Farrell and Saloner 1985). As a consequence, early standardization is critical to competitiveness in network markets (Edquist and Hommen 1998). Thus, the company needs to consider standardization from the viewpoint of promoting its technologies. Wider diffusion of company technologies also improves the utilization of internal know-how.

Standardization can also be used in risk management: by participating in the development of key technologies, the company increases its chances of acquiring rights to technologies or products that benefit from bandwagon effects. However, there are opportunity costs in allowing other participating parties to enter the same markets.
11.3.2 Mathematical Development of a Resource Allocation Model

We assume that there are standardization activities $j = 1,\ldots,m$, each associated with a specific set of features or technical properties that can be implemented in a product. If the implementation of the standardization activity is successful, a widely adopted technology emerges. This technology, in turn, will eventually create (an increase in) sales $S_j$ through the products that are based on it. Each activity $j$ can be funded by committing $r_j^s \ge 0$ resources to standardization and $r_j^d \ge 0$ resources to development. For this resource allocation, the probability of succeeding in standardization is $p_j^s(r_j^s)$, while the probability of successfully developing a widely adopted technology is $p_j^{d+}(r_j^d)$ if standardization is successful and $p_j^{d-}(r_j^d)$ if standardization is unsuccessful. The probability of creating a widely adopted technology can be positive even if standardization fails, because widely adopted technologies can emerge through de facto standards, for instance. Standardization activity $j$ generates sales if and only if the corresponding widely adopted technology is established, which occurs with total probability $P(r_j^s, r_j^d) = p_j^s(r_j^s)\,p_j^{d+}(r_j^d) + (1 - p_j^s(r_j^s))\,p_j^{d-}(r_j^d)$. The corresponding decision tree is shown in Fig. 11.1.
[Fig. 11.1 Decision tree for a standardization activity. A decision node commits $(r_j^s, r_j^d)$ resources; an uncertainty node resolves whether standardization is successful (probability $p_j^s(r_j^s)$); a second uncertainty node resolves whether the technology becomes widely accepted (probability $p_j^{d+}(r_j^d)$ after successful standardization, $p_j^{d-}(r_j^d)$ otherwise); the value node yields sales $S_j$ if the technology is widely accepted and no sales otherwise.]
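A minimal sketch of the activity-level arithmetic behind Fig. 11.1, with illustrative numbers that are not case data:

def total_probability(p_s, p_d_plus, p_d_minus):
    """P(r_s, r_d) = p_s * p_d+ + (1 - p_s) * p_d-  (Fig. 11.1)."""
    return p_s * p_d_plus + (1.0 - p_s) * p_d_minus

def expected_sales(p_s, p_d_plus, p_d_minus, sales):
    """Expected sales of one activity: sales accrue only upon wide adoption."""
    return total_probability(p_s, p_d_plus, p_d_minus) * sales

# Standardization succeeds with probability 0.6; the technology is widely
# adopted with probability 0.5 if it does and 0.1 otherwise; sales are 100.
print(total_probability(0.6, 0.5, 0.1))    # 0.34
print(expected_sales(0.6, 0.5, 0.1, 100))  # 34.0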
We discretize the choices concerning the amount of resources allocated to standardization activities in order to limit the elicitation and computational effort when dealing with tens or hundreds of activities. For each activity $j$, we define 16 different variants by restricting the choice of standardization and development funding levels $r_j^s$ and $r_j^d$ to $L = \{-100\%, -50\%, \pm 0\%, +100\%\}$ of the current plan. Thus, standardization funding level $i \in L$ and development funding level $k \in L$ imply that the amount of resources committed to standardization is $r_{ji}^s$ and the amount committed to development is $r_{jk}^d$. As a result of this discretization, we can analyze the impact of significant increases or decreases in funding with only 12 probability assessments per activity, that is, 4 assessments of the standardization success probability at the different funding levels and 8 assessments of the probability of developing a widely adopted technology conditioned on the success/failure of the standardization of the activity. The use of a relative scale in the discretization is motivated by the wish to emphasize that the assessments are changes relative to the current plan. The expected sales resulting from the portfolio, that is,

$$ES = \sum_{j=1}^{m} P(r_j^s, r_j^d)\, S_j, \qquad (11.1)$$
is maximized with respect to the standardization and development budget constraints $\sum_{j=1}^{m} r_j^s \le B_s$ and $\sum_{j=1}^{m} r_j^d \le B_d$, where $B_s$ and $B_d$ are the respective budgets. For the time being, we assume that budgets are not interchangeable (meaning that standardization resources cannot be allocated to development and vice versa). This reflects the practical limitation that short-term resources in standardization (e.g., employees) cannot be easily transferred to development and vice versa.

We formulate the optimization model for solving the optimal resource allocation by introducing binary variables $x_{ik}^j$, which indicate that activity $j$ has standardization level $i \in L$ and development level $k \in L$. Using these variables, the resource allocation for each standardization activity can be expressed as $r_j^s = \sum_{i \in L} \sum_{k \in L} r_{ji}^s x_{ik}^j$ and $r_j^d = \sum_{i \in L} \sum_{k \in L} r_{jk}^d x_{ik}^j$, along with the constraints

$$\sum_{i \in L} \sum_{k \in L} x_{ik}^j = 1 \quad \text{for all } j = 1,\ldots,m, \qquad (11.2)$$

which enforce that exactly one resource level is indicated for each activity. The success probabilities are given by $P(r_{ji}^s, r_{jk}^d) = p_j^s(r_{ji}^s)\, p_j^{d+}(r_{jk}^d) + (1 - p_j^s(r_{ji}^s))\, p_j^{d-}(r_{jk}^d)$. The resource allocation that maximizes expected sales is the solution to the mixed integer linear program (MILP)
$$\begin{aligned}
\max_x \quad & \sum_{j=1}^{m} \sum_{i \in L} \sum_{k \in L} P(r_{ji}^s, r_{jk}^d)\, S_j\, x_{ik}^j \\
\text{subject to} \quad & \sum_{j=1}^{m} \sum_{i \in L} \sum_{k \in L} r_{ji}^s\, x_{ik}^j \le B_s \\
& \sum_{j=1}^{m} \sum_{i \in L} \sum_{k \in L} r_{jk}^d\, x_{ik}^j \le B_d \\
& \sum_{i \in L} \sum_{k \in L} x_{ik}^j = 1, \quad j = 1,\ldots,m \\
& x_{ik}^j \in \{0, 1\}, \quad i \in L,\ k \in L,\ j = 1,\ldots,m. \qquad (11.3)
\end{aligned}$$
Although the model has only budget constraints, it is possible to introduce additional linear constraints to account for other considerations such as the mutual exclusivity of activities. These additional constraints can be included in the above MILP using standard modeling techniques. The portfolios that satisfy the constraints in (11.3) and the possible additional constraints form the set of feasible portfolios, which is denoted by $\Pi_f$. A sketch of how the MILP can be set up is given below.
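As an illustration, the following sketch sets up MILP (11.3) with PuLP (an assumption about tooling; any MILP solver would do) for two hypothetical activities with placeholder resources, probabilities, and sales.

import pulp

levels = range(4)   # indices of L = {-100%, -50%, +-0%, +100%}
m = 2               # two placeholder activities
S = [100.0, 60.0]                                   # point-estimate sales S_j
rs = [[0, 5, 10, 20], [0, 3, 6, 12]]                # r^s_{ji} at each level
rd = [[0, 4, 8, 16], [0, 2, 4, 8]]                  # r^d_{jk} at each level
ps = [[0.0, 0.3, 0.5, 0.7], [0.0, 0.2, 0.4, 0.6]]   # p^s_j at each level
pdp = [[0.1, 0.3, 0.5, 0.8], [0.1, 0.2, 0.4, 0.7]]  # p^{d+}_j at each level
pdm = [[0.0, 0.1, 0.1, 0.2], [0.0, 0.0, 0.1, 0.2]]  # p^{d-}_j at each level
Bs, Bd = 16, 12                                     # budgets

def P(j, i, k):
    """Total success probability of variant (i, k) of activity j."""
    return ps[j][i] * pdp[j][k] + (1 - ps[j][i]) * pdm[j][k]

prob = pulp.LpProblem("portfolio", pulp.LpMaximize)
x = {(j, i, k): pulp.LpVariable("x_%d_%d_%d" % (j, i, k), cat=pulp.LpBinary)
     for j in range(m) for i in levels for k in levels}
prob += pulp.lpSum(P(j, i, k) * S[j] * x[j, i, k]       # expected sales (11.1)
                   for j in range(m) for i in levels for k in levels)
prob += pulp.lpSum(rs[j][i] * x[j, i, k] for j in range(m)
                   for i in levels for k in levels) <= Bs
prob += pulp.lpSum(rd[j][k] * x[j, i, k] for j in range(m)
                   for i in levels for k in levels) <= Bd
for j in range(m):                                      # one variant per activity (11.2)
    prob += pulp.lpSum(x[j, i, k] for i in levels for k in levels) == 1
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([(j, i, k) for (j, i, k), v in x.items() if v.value() > 0.5])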
11.3.3 Modeling Sales Uncertainty

The sales $S_j$ of each widely adopted technology were relatively difficult for the experts to assess. Although the decision tree defined scenarios on which the assessment could be conditioned, the standardization experts felt they were unable to specify a full probability distribution. However, they were able to provide plausible upper and lower bounds for the sales. This approach was also believed to be transparent in the sense that these assessments could be debated more easily than probability distributions. Thus, let the sales interval be $\underline{S}_j \le S_j \le \overline{S}_j$, where $\underline{S}_j$ and $\overline{S}_j$ are the assessed lower and upper bounds of sales, respectively. The set of sales vectors is $\mathcal{S} = [\underline{S}_1, \overline{S}_1] \times \cdots \times [\underline{S}_m, \overline{S}_m]$.
where $ES_S(\pi')$ is the expected sales of portfolio $\pi'$ for sales vector $S$. The set of potentially optimal portfolios can be used to provide guidance for resource allocation decisions. Following RPM (Liesiö et al. 2007), we define the core index CI of a resource allocation $(r_j^s, r_j^d)$ for activity $j$ as the share of potentially optimal portfolios that contain that allocation for the activity, that is,

$$CI(r_j^s, r_j^d) = \frac{\Bigl| \bigl\{ \pi = \bigl( (r_1^{s\prime}, r_1^{d\prime}), \dots, (r_m^{s\prime}, r_m^{d\prime}) \bigr) \in \Pi_{PO} \;\bigm|\; (r_j^{s\prime}, r_j^{d\prime}) = (r_j^s, r_j^d) \bigr\} \Bigr|}{\bigl| \Pi_{PO} \bigr|},$$

where $|\cdot|$ denotes the number of elements in a set. If a resource allocation has a core index of 1 (a core allocation), it is included in all potentially optimal portfolios and can therefore be recommended. Conversely, if a feasible allocation with core index 1 were not included in the funding portfolio, then for all sales estimates there would exist some other portfolio yielding higher expected sales. A parallel argument applies to allocations with core index 0 (exterior allocations), which should not be included in the final portfolio. Decision recommendations for allocations with core indices between 0 and 1 (borderline allocations) depend on the sales estimates within the set of possible sales, so a decisive recommendation cannot be given. In large problems it may not be possible to compute all potentially optimal portfolios with a brute-force algorithm, because checking all $16^m$ possible portfolios
would soon become computationally intractable. A subset of potentially optimal portfolios can be found with Monte Carlo simulation by generating uniformly distributed sales vectors from $\mathcal{S}$ and solving the optimization problem (11.3) for each generated vector. Although the subset found by Monte Carlo simulation does not necessarily contain all potentially optimal portfolios, we use this set for approximating the core indices.
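A minimal sketch of this Monte Carlo approximation is given below; it assumes a function solve_milp(S) (for instance, a wrapper around the PuLP model above) that returns the optimal portfolio for a given sales vector as a tuple of (i, k) level-index pairs, one per activity. The sales bounds are hypothetical.

```python
# A minimal sketch of the Monte Carlo approximation of core indices.
import random
from collections import Counter

S_lo = [80.0, 150.0, 120.0]    # assessed lower bounds on sales (hypothetical)
S_hi = [120.0, 350.0, 240.0]   # assessed upper bounds on sales (hypothetical)

def approximate_core_indices(solve_milp, n_samples=1000):
    m = len(S_lo)
    portfolios = set()             # subset of the potentially optimal portfolios
    for _ in range(n_samples):
        S = tuple(random.uniform(S_lo[j], S_hi[j]) for j in range(m))
        portfolios.add(solve_milp(S))
    counts = Counter()             # appearances of each allocation (j, i, k)
    for pf in portfolios:
        for j, (i, k) in enumerate(pf):
            counts[(j, i, k)] += 1
    # Core index: share of the found portfolios that contain the allocation.
    return {alloc: c / len(portfolios) for alloc, c in counts.items()}
```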
11.3.4 Visualization of Results

The resource allocation recommendations were visualized as a matrix, as shown in Fig. 11.2, where each cell represents the core index of the corresponding resource allocation decision. For instance, the upper right corner of the matrix represents the decision to increase both standardization and development resources by 100%. The core index of this decision is 0.19, meaning that this action is included in 19% of the potentially optimal portfolios. The shading is based on a linear gray scale, where white corresponds to a core index of 0 and black to a core index of 1.

Fig. 11.2 Example of a decision recommendation, which illustrates the core indices in a matrix (rows: development funding level; columns: standardization funding level)

Development \ Standardization | −100% | −50% | ±0% | +100%
+100% | 0 | 0 | 0.28 | 0.19
±0% | 0 | 0.11 | 0.10 | 0.26
−50% | 0 | 0 | 0 | 0
−100% | 0 | 0 | 0 | 0

If the sales parameters were specified as point estimates, there would be only one optimal portfolio; thus only one cell would have a core index of 1 (shaded black), and the other cells would have a core index of 0 (shaded white). The decision recommendations can be further visualized by representing the ranges within which the expected sales vary when resources are decreased or increased. Figure 11.3 shows these ranges for six activities and the expected sales under the current funding plan. This gives an overview of the upside and downside potential of the activities. For instance, activity 1 is almost at the top of its range with current funding, so it is unlikely to benefit from additional resources. Activity 3, in contrast, has plenty of potential if additional resources are provided.
Fig. 11.3 Example ranges for the activities' (1–6) expected sales when funding is varied from −100% to +100% compared to the current plan. The current funding level is denoted with a dot
11.3.5 A Model for Binary Activity Interactions

The success or failure of a standardization activity or a group of activities may have a positive or negative effect on other standardization activities. These effects are called interactions. Although an interaction can pertain to several standards, a first approximation is to consider pairwise interactions. We capture these pairwise interactions by describing the magnitude of influence caused by the success or failure of a dependent activity with a reward or penalty.

Continuing with problem (11.3), let activities $j_1$ and $j_2$ have allocations of standardization and development resources defined by the binary selection variables $x$ and indices $(i, k, \hat{\imath}, \hat{k})$, such that activity $j_1$ uses $r_{j_1}^{s_i} x_{ik}^{j_1}$ for standardization and $r_{j_1}^{d_k} x_{ik}^{j_1}$ for development, and activity $j_2$ uses $r_{j_2}^{s_{\hat{\imath}}} x_{\hat{\imath}\hat{k}}^{j_2}$ for standardization and $r_{j_2}^{d_{\hat{k}}} x_{\hat{\imath}\hat{k}}^{j_2}$ for development. We define interaction variables

$$I_{ik\hat{\imath}\hat{k}}^{j_1 j_2} = \begin{cases} 1, & \text{if } x_{ik}^{j_1} = x_{\hat{\imath}\hat{k}}^{j_2} = 1 \\ 0, & \text{otherwise} \end{cases}$$

to indicate that a combination of standardization resource allocations is active. Constraint (11.2) ensures that a unique interaction is indicated for every resource allocation. We model the impact of an interaction as follows: The expected sales of activity $j_1$ without interactions is $P(r_{j_1}^s, r_{j_1}^d)\, S_{j_1}$. Let $P_I$ be the probability of the interaction occurring and $a \ne 0$ the relative change in the expected value of activity $j_1$ if the interaction occurs. Then, the expected impact of the interaction on portfolio expected sales is $a\, P(r_{j_1}^s, r_{j_1}^d)\, S_{j_1} P_I$. We modeled only pairwise interactions, that is,
activity $j_2$ impacts activity $j_1$, and the occurrence of the interaction depends only on either the success or the failure of the impacting activity. Thus, we modeled interactions as follows:

• If the success of activity $j_2$ has an impact on the probability of success of activity $j_1$, we add $I_{ik\hat{\imath}\hat{k}}^{j_1 j_2}$ to the objective function of problem (11.3) with the coefficient
$$a_{j_1 j_2}\, P(r_{j_1}^{s_i}, r_{j_1}^{d_k})\, S_{j_1}\, P(r_{j_2}^{s_{\hat{\imath}}}, r_{j_2}^{d_{\hat{k}}}),$$
where $a_{j_1 j_2}$ is the constant describing the strength and direction of the interaction ($a_{j_1 j_2} > 0$ for a positive impact, $a_{j_1 j_2} < 0$ for a negative impact).

• If the failure of activity $j_2$ has an impact on the probability of success of activity $j_1$, we add $I_{ik\hat{\imath}\hat{k}}^{j_1 j_2}$ to the objective function with the coefficient
$$a_{j_1 j_2}\, P(r_{j_1}^{s_i}, r_{j_1}^{d_k})\, S_{j_1} \bigl( 1 - P(r_{j_2}^{s_{\hat{\imath}}}, r_{j_2}^{d_{\hat{k}}}) \bigr).$$
Because the sales gain or loss is proportional to the expected sales of activity $j_1$ (i.e., $P(r_{j_1}^{s_i}, r_{j_1}^{d_k})\, S_{j_1}$), the interaction is realized in full or not at all in the cases where the invoking activity always succeeds or always fails, respectively. Thus an expert assessing the constant $a_{j_1 j_2}$ can interpret it as the relative increase or decrease in sales that occurs if the interaction is present. In our case, the strengths of interactions were assessed on the qualitative scale "critical"–"very critical"–"extremely critical," which was converted to numerical values such that $a_{j_1 j_2} \in \{\pm 0.3, \pm 0.6, \pm 0.9\}$. However, these penalties/rewards are nonprobabilistic because they cannot be interpreted as the absolute increase in expected value that the success of a beneficial activity would give. To guarantee that an interaction variable $I$ equals 1 only when the associated funding levels are selected, we introduce the following constraints:

• A positive interaction term ($a > 0$) should be allowed a nonzero value only if
$$2 I_{ik\hat{\imath}\hat{k}}^{j_1 j_2} \le x_{ik}^{j_1} + x_{\hat{\imath}\hat{k}}^{j_2}.$$

• A negative interaction term ($a < 0$) is allowed to be zero only if
$$2 I_{ik\hat{\imath}\hat{k}}^{j_1 j_2} \ge x_{ik}^{j_1} + x_{\hat{\imath}\hat{k}}^{j_2} - 1.$$

At the optimum these constraints hold as equalities, because the interaction term has a positive coefficient in the objective function (to be maximized) in the first case and a negative coefficient in the latter case. Suppose that there are four sets of interactions defined by triplets $(j_1, j_2, a_{j_1 j_2})$, where each triplet prescribes the magnitude of interaction $a_{j_1 j_2}$ imposed on activity $j_1$ by activity $j_2$:

• $A(+,+)$ is the set of interactions which enhance $j_1$ when $j_2$ succeeds.
• $A(+,-)$ is the set of interactions which enhance $j_1$ when $j_2$ fails.
• $A(-,+)$ is the set of interactions which weaken $j_1$ when $j_2$ succeeds.
• $A(-,-)$ is the set of interactions which weaken $j_1$ when $j_2$ fails.
Then the complete heuristic model extended from (11.3) is

$$\begin{aligned}
\max_{x,I} \quad & \sum_{j=1}^{m} \sum_{i \in L} \sum_{k \in L} P(r_j^{s_i}, r_j^{d_k})\, S_j\, x_{ik}^{j} \\
& + \sum_{(j_1, j_2, a_{j_1 j_2}) \in A(+,+) \cup A(-,+)} \; \sum_{i \in L} \sum_{k \in L} \sum_{\hat{\imath} \in L} \sum_{\hat{k} \in L} a_{j_1 j_2}\, P(r_{j_1}^{s_i}, r_{j_1}^{d_k})\, S_{j_1}\, P(r_{j_2}^{s_{\hat{\imath}}}, r_{j_2}^{d_{\hat{k}}})\, I_{ik\hat{\imath}\hat{k}}^{j_1 j_2} \\
& + \sum_{(j_1, j_2, a_{j_1 j_2}) \in A(+,-) \cup A(-,-)} \; \sum_{i \in L} \sum_{k \in L} \sum_{\hat{\imath} \in L} \sum_{\hat{k} \in L} a_{j_1 j_2}\, P(r_{j_1}^{s_i}, r_{j_1}^{d_k})\, S_{j_1} \bigl( 1 - P(r_{j_2}^{s_{\hat{\imath}}}, r_{j_2}^{d_{\hat{k}}}) \bigr)\, I_{ik\hat{\imath}\hat{k}}^{j_1 j_2} \qquad (11.4)
\end{aligned}$$

subject to

$$2 I_{ik\hat{\imath}\hat{k}}^{j_1 j_2} \le x_{ik}^{j_1} + x_{\hat{\imath}\hat{k}}^{j_2}, \quad i, k, \hat{\imath}, \hat{k} \in L,\; (j_1, j_2, a_{j_1 j_2}) \in A(+,+) \cup A(+,-)$$
$$2 I_{ik\hat{\imath}\hat{k}}^{j_1 j_2} \ge x_{ik}^{j_1} + x_{\hat{\imath}\hat{k}}^{j_2} - 1, \quad i, k, \hat{\imath}, \hat{k} \in L,\; (j_1, j_2, a_{j_1 j_2}) \in A(-,+) \cup A(-,-)$$
$$\sum_{j=1}^{m} \sum_{i \in L} \sum_{k \in L} r_j^{s_i} x_{ik}^{j} \le B_s$$
$$\sum_{j=1}^{m} \sum_{i \in L} \sum_{k \in L} r_j^{d_k} x_{ik}^{j} \le B_d$$
$$\sum_{i \in L} \sum_{k \in L} x_{ik}^{j} = 1, \quad j = 1, \dots, m$$
$$x_{ik}^{j},\; I_{ik\hat{\imath}\hat{k}}^{j_1 j_2} \in \{0,1\}, \quad i, k, \hat{\imath}, \hat{k} \in L,\; j, j_1, j_2 = 1, \dots, m.$$
There are $|L|^4$ binary variables for each interaction, where $|L|$ denotes the number of resource allocation levels. Because $|L|$ was 4 in our case, each interaction can be captured with 256 binary variables. Because the optimum of problem (11.4) needs to be solved a large number of times in the Monte Carlo simulation, and because the growing number of binary variables increases the computation time, we reduced the number of variables as follows: We assume that the interaction depends only on the standardization funding and that the standardization of activities $j_1$ and $j_2$ is done with the current/planned development budgets. Then, the interaction variables $I_{ik\hat{\imath}\hat{k}}^{j_1 j_2}$ and $I_{i\ell\hat{\imath}\ell}^{j_1 j_2}$ (as well as their coefficients in the objective function) are equal, where $\ell = \pm 0\%$ denotes the index of the current/planned standardization and development
budgets, and they can be collapsed into a single variable. This reduces the number of interaction variables to $|L|^2 = 16$. With the reduced number of interaction variables, the impact of interactions can be visualized with a core index matrix as in Fig. 11.2. For interactions, the columns of the matrix represent the core indices of the standardization levels of the impacted standard and the rows the core indices of the standardization levels of the impacting standard (or vice versa). The impact of the interaction can then be visualized by comparing this matrix computed with and without interactions.
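The following sketch illustrates how one success-conditioned interaction term and its linearization constraints could be added to the earlier PuLP model; it assumes the names prob, x, idx, S, and p_success from the MILP sketch above, the pair of activities and the strength a are hypothetical, and the case study's reduction to $|L|^2$ variables is omitted for brevity.

```python
# A minimal sketch of adding one pairwise interaction from (11.4) to the
# earlier PuLP model; activities j1, j2 and strength a are hypothetical.
import pulp

j1, j2, a = 0, 1, 0.3     # success of j2 enhances j1 by 30% (a > 0)

# One binary interaction variable per combination of funding-level choices.
I = pulp.LpVariable.dicts("I", (idx, idx, idx, idx), cat=pulp.LpBinary)

terms = []
for i in idx:
    for k in idx:
        for ih in idx:           # \hat{i}: j2's standardization level
            for kh in idx:       # \hat{k}: j2's development level
                var = I[i][k][ih][kh]
                # Success-conditioned coefficient: a * P(j1) * S_{j1} * P(j2).
                terms.append(a * p_success(j1, i, k) * S[j1]
                             * p_success(j2, ih, kh) * var)
                if a > 0:
                    # Positive term: I may be 1 only if both allocations chosen.
                    prob += 2 * var <= x[j1][i][k] + x[j2][ih][kh]
                else:
                    # Negative term: I is forced to 1 when both are chosen.
                    prob += 2 * var >= x[j1][i][k] + x[j2][ih][kh] - 1

prob.setObjective(prob.objective + pulp.lpSum(terms))
```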
11.3.6 Experiences of Using the Model

The model was first used with 100 activities from several different areas of technical standardization. Data were gathered with a spreadsheet questionnaire that was supplemented with background material, including a general description of the model. The respondents were asked to provide assessments of the model parameters, namely sales intervals, standardization and development costs, and standardization and development success probabilities. There were numerous interactions; computational tractability was therefore ensured by accounting only for the strong ones. This resulted in nine interactions on the scale "critical"–"very critical"–"extremely critical," which were converted to numerical values. The results were presented to the experts by showing a matrix of core indices for each activity. The results were computed with and without interactions, and the corresponding core index matrices were presented together with a visualization of the alternative data. The recommendations were either strong, in the sense that only a few allocations had positive core indices, or weak, in the sense that many allocations had a positive core index. If all the positive core indices for an activity suggested increased standardization funding, the corresponding standardization activity was considered a candidate for receiving more funding. Suggestions for decreased funding were interpreted analogously. Interactions did not have a significant impact on the decision recommendations, because increased funding was typically recommended for the same activities regardless of whether interactions were accounted for. This was, in part, because positive interactions tended to impact those activities for which the model recommended additional resources in the absence of interactions, whereas negative interactions impacted those activities for which the model recommended reduced resources in the absence of interactions. Thus, the modeling of interactions was not as important as their potential impact would have suggested. The elicitation of information about future sales was challenging and often resulted in wide and overlapping intervals. This was because the additional sales would not materialize in the near future and were also affected by external uncertainties such as market size and intensity of competition. As a result, most
activities had positive core indices for several allocations. However, this was in fact a strength of the model, because the recommendations still gave direction to the overall resource reallocation. In this setting, the fact that there was no unique optimal decision recommendation resulted in increased confidence in the model, because several different allocations could indeed be justified. Structuring the model as a decision tree helped explain the results. In comparison with more general resource allocation models (such as capital budgeting models), a closer inspection of standardization as a means for developing widely adopted technologies gave insights, especially when seeking to build a consensus based on the estimates of several experts with different fields of expertise. In particular, without explicating how an activity brings value to the company, it would have been difficult to explain to a standardization engineer why the value of a technically superior standard may be low for the company. In summary, the modeling approach was sound and transparent enough to guide the resource allocation decisions. Furthermore, the model was considered general enough to also support resource allocation decisions in other areas of technological standardization. The company currently has the model in extensive operational use for allocating resources to activities in all essential areas of telecommunication standardization.
11.4 Conclusions

In this chapter, we have presented a resource allocation model for standardization activities in a major telecommunication company. The resource allocation problem was structured with decision trees, and Preference Programming methods were applied to model the uncertain future sales associated with standardization activities. The model was first piloted with 100 standardization activities and, based on initial favorable experiences, it was subsequently adopted into operational use. Currently, it is employed in support of resource allocation decisions in all areas of telecommunication standardization that the company is active in. The strengths of the model – which have partly facilitated its uptake – include features such as parsimony with relatively few parameters, computational tractability, and ease of extending the model to the analysis of new activities. Although the model has been developed in the context of standardization, it holds promise when allocating resources to other R&D activities as well.

Acknowledgments The authors acknowledge the support of the Finnish Funding Agency for Technology and Innovation (Tekes). The authors thank Timo Ali-Vehmas, Ville Brummer, Kari Lang, Tuomo Jorri, Anssi Kostiainen, Matti Alkula, and Juho Laatu for their contributions to the development and deployment of the model. Computations of the case study were carried out with Xpress-MP software provided by FICO through their Academic Partner Program.
References

Anderson P, Tushman ML (1990) Technological discontinuities and dominant designs: a cyclical model of technological change. Adm Sci Q 35(4):604–633
Arbel A (1989) Approximate articulation of preference and priority derivation. Eur J Oper Res 43(3):317–326
Brummer V, Salo A, Nissinen J, Liesiö J (2011) A methodology for the identification of prospective collaboration networks in international R&D programs. Int J Technol Manag 54(4):369–389
Clemen R, Reilly T (1999) Correlations and copulas for decision and risk analysis. Manag Sci 45(2):208–224
Edquist C, Hommen L (1998) The ISE policy statement – the innovation policy implications of the innovation systems and European integration. Technical report, University of Linköping
Farrell J, Saloner G (1985) Standardization, compatibility, and innovation. RAND J Econ 16(1):70–83
Fox G, Baker N, Bryant J (1984) Models for R and D project selection in the presence of project interactions. Manag Sci 30(7):890–902
Glimstedt H (2001) Competitive dynamics of technological standardization: the case of third generation cellular communications. Ind Innov 8(1):49–78
Henriksen A, Traynor A (1999) A practical R&D project-selection scoring tool. IEEE Trans Eng Manag 46(2):158–170
Hora S (2007) Eliciting probabilities from experts. In: Edwards W, Miles RJ, von Winterfeldt D (eds) Advances in decision analysis. Cambridge University Press, New York
Kleinmuntz D (2007) Resource allocation decisions. In: Edwards W, Miles R, von Winterfeldt D (eds) Advances in decision analysis. Cambridge University Press, New York, pp 400–418
Könnölä T, Brummer V, Salo A (2007) Diversity in foresight: insights from the fostering of innovation ideas. Technol Forecast Soc Change 74(5):608–626
Koski H, Kretschmer T (2007) Innovation and dominant design in mobile telephony. Ind Innov 14(3):305–324
Lea G, Hall P (2004) Standards and intellectual property rights: an economic and legal perspective. Inf Econ Policy 16(1):67–89
Liesiö J, Mild P, Salo A (2007) Preference programming for robust portfolio modeling and project selection. Eur J Oper Res 181(3):1488–1505
Liesiö J, Mild P, Salo A (2008) Robust portfolio modeling with incomplete cost information and project interdependencies. Eur J Oper Res 190(3):679–695
Lindstedt M, Liesiö J, Salo A (2008) Participatory development of a strategic product portfolio in a telecommunications company. Int J Technol Manag 42(3):250–266
Phillips L, Bana e Costa C (2007) Transparent prioritisation, budgeting and resource allocation with multi-criteria decision analysis and decision conferencing. Ann Oper Res 154(1):51–68
Salo A, Hämäläinen R (1992) Preference assessment by imprecise ratio statements. Oper Res 40(6):1053–1061
Salo A, Hämäläinen R (2010) Preference programming – multicriteria weighting models under incomplete information. In: Zopounidis C, Pardalos P (eds) Handbook of multicriteria analysis. Applied Optimization, vol 103. Springer, Berlin
Salo A, Liesiö J (2006) A case study in participatory priority-setting for a Scandinavian research program. Int J Inf Technol Decis Mak 5(1):65–88
Santiago L, Vakili P (2005) Optimal project selection and budget allocation for R&D portfolios. In: Anderson T, Daim T, Kocaoglu D, Milosevic D, Weber C (eds) Technology management: a unifying discipline for melting the boundaries. Portland State University, Dept. of Engineering and Technology Management, pp 275–281
Schilling M (2008) Strategic management of technological innovation, 2nd edn. McGraw-Hill, Singapore
Sharpe P, Keelin T (1998) How SmithKline Beecham makes better resource-allocation decisions. Harv Bus Rev 76(2):45–58
Stummer C, Heidenberger K (2003) Interactive R&D portfolio analysis with project interdependencies and time profiles of multiple objectives. IEEE Trans Eng Manag 50(2):175–183
Suarez F, Utterback J (1995) Dominant designs and the survival of firms. Strateg Manag J 16(6):415–430
Chapter 12
Resource Allocation in Local Government with Facilitated Portfolio Decision Analysis

Gilberto Montibeller and L. Alberto Franco
Abstract Resource allocation in local government poses several challenges for public managers, such as substantial pressures for more efficiency in public spending, frequent changes in levels of income for public organisations, steadily growing demand for public services, and higher public expectations and increased scrutiny. In order to tackle these challenges effectively, public managers and policy makers are focusing on changing old ways of working, with a view to creating and delivering public value in an increasingly "wicked" context. One area of concern relates to improving decision-making processes and the accountability of decisions, particularly within the context of resource allocation in local government. In this chapter, we review our experience of using facilitated portfolio decision analysis to help local government teams assess the value of a range of public services or projects. Our discussion is focused primarily on the British local government context, and illustrated with several case studies drawn from our own research and practical interventions. The approach and experience discussed here, however, can be easily translated to similar contexts in other countries.
12.1 Introduction

The environment within which public managers operate is typically complex, uncertain and contested. It is complex due to the high number of interconnected areas they have to tackle as part of their everyday work. It is uncertain because the local, regional and national agendas that provide direction to their work will be influenced, in varying and unpredictable degrees, by political dynamics,
environmental changes, and technological developments. Finally, it is a contested environment due to the existence of multiple and conflicting views and interests about the extent and shape of public issues needing attention and the different ways to address them.

Working under the conditions described above is indeed a common feature in the life of most public managers. In the UK, these conditions are currently being exacerbated by a number of recent trends. These include substantial pressures for more efficiency in public spending, frequent changes in levels of income for public organisations, steadily growing demand for public services, and higher public expectations and increased scrutiny. Consequently, public managers are currently faced with unparalleled challenges which they are expected to deal with effectively. In order to meet these challenges, policy makers and public managers are focusing on changing old ways of working in order to create and deliver public value in an increasingly "wicked" context. For example, it is now widely recognised that in order to provide services that meet the needs and aspirations of the public, a joined-up approach to service delivery is required. This has taken the form of different partnerships and collaborations such as Local Strategic Partnerships, Crime and Disorder Reduction Partnerships, and Local and Multi-Area Agreements, among others.

Another current concern of policy makers and public managers relates to improving resource allocation processes and the accountability of budgetary decisions. Within the context of local government, for example, budgetary decisions tend to be highly political, often complicated by the sheer volume of options available, all requiring a rationale accountable to the public. It seems quite likely that there will be a growing clamour to ensure that budgetary decisions are sound and legitimate for those affected. It is not uncommon for managers in the local government budgetary process to adopt a narrow view of the budget, one that is guided by the perspective provided by their own budget area. Managers are expected to try to increase their budgets and, when asked to find savings, to seek "making the best use possible of the resource that is available to them." Thus, budgetary decisions are looked at in departmental silos rather than as a portfolio. Furthermore, when global budget cuts or increases are required, the tendency has been to decide on across-the-board percentage decreases or increases, with little attempt to develop a system of priorities. This approach to budgeting typically leads to inefficiencies, as value is infrequently measured across different service areas. The other type of resource allocation decision that is frequent in local government is the funding of new initiatives such as a new service or project. In many ways this is similar to the budgetary process, with similar challenges. Two main differences are noticeable, however. First, the budgetary decision is whether or not to fund a new initiative, rather than whether to increase, maintain or decrease expenditure associated with an ongoing service or project. The second distinction is the nature of the set of criteria employed to assess the value of the new initiative, which may be distinct from the one used when assessing ongoing services or activities.

This chapter presents an alternative approach to resource allocation in local government for both budgetary decisions and funding decisions concerning new
activities or projects. Specifically, the chapter reviews our experience of using facilitated portfolio decision analysis (PDA) to help local government teams assess the value of a range of public services. Our discussion below will be focused on the British local government context, as it is the domain in which we have undertaken most of our practical interventions. We believe, however, that many of the issues discussed here are relevant for resource allocation in a local government context in general, and that the use of PDA can be easily translated to similar contexts in other countries (see, for example, Bana e Costa and Carvalho Oliveira 2002).

The remainder of the chapter is structured in the following way. The next section introduces the British local government context and the issues associated with resource allocation, particularly those related to budgetary decisions. Our discussion will identify "incrementalism" as the traditional approach to budgeting in local government as well as the wider public sector in the UK and Ireland. Next, we present a resource allocation framework intended to counter incrementalism and serve as an effective and efficient facilitative mechanism for budgetary and investment decisions about public services. The framework will be illustrated with case studies drawn from our own research and practical interventions. In the final section, we discuss how to implement the proposed framework, and outline directions for further research.
12.2 Resource Allocation in Local Government

With local expenditure comprising 25% of total public expenditure and 10% of national income, UK local government is an economically important sector. In local government the annual budget cycle has historically determined policies, plans and objectives, and acted as the principal method by which the sector implemented organisational and structural accountability (Goddard 2004). Given local government's major role in the delivery of public services in areas such as education, social services and environmental well-being, the setting of the budget is perhaps the single most significant process and decision made by a local authority during the course of the year. It implies formulating and making key choices between different possibilities for expenditure: from creating new items of expenditure to cutting, freezing or increasing current ones. As resources are always limited, the budget process forces a local authority to prioritise among the various projects and activities it wishes to initiate, continue, reduce or close down.

With few exceptions (e.g. Boyne et al. 2000), research has consistently shown that the traditional approach to budgeting in local government, as well as the wider public sector in the UK and Ireland, has been incremental (McCarthy and Lane 2009; Midwinter 2005; Seal 2003). Although the notion of incrementalism has been defined in many ways (Berry 1990), here we conceptualise incrementalism from two perspectives: as an output and as a process. The former considers a budget as incremental when departures from the expenditure base in the previous year represent small or regular changes (Davis et al. 1966; Wildavsky 1964; Wildavsky and Caiden 2003); the latter views the process of setting the budget as incremental
if little systematic analysis is employed, few alternative options are evaluated, and the alternatives that are considered do not differ significantly from the status quo (Braybrook and Lindblom 1963; Lindblom 1959). It is argued that such behaviour is the result of a number of conservative influences including habit, history and tradition (Seal 2003). These, combined with a lack of resources, lead policy makers and public managers to adopt simple "rules of thumb" in order to reduce the technical and political complexity of budgetary decisions (Boyne et al. 2000). Examples of such rules are "protect the base" (the continued funding of existing services) and "follow central guidelines" (follow central government's prescription of how to allocate money). Overall, these rules avoid radical changes and lead to stability of budget decisions.

The problems caused by the inherent continuity and stability of incremental budgeting became particularly salient as the economic environment of British local authorities showed signs of turbulence from the mid-1970s onwards. This made reform of the budget process a political priority, and led to a search for new modes of organisational control in the public sector (Hoggett 1996), and to a shift in focus from a concentration on inputs to one which linked inputs with outputs and outcomes. During the period 1979–1997, policy reforms under the umbrella term of "New Public Management" resulted in some changes in local government through mechanisms such as competitive tendering for public services, centrally imposed "caps" on local expenditure, and the creation of quasi-markets in education, social services and housing. Compulsory tendering and the capping of local government expenditure were later abolished in the period 1997–2009 by the New Labour government. Yet, the government still retained reserve powers to control local expenditure and required local authorities to deliver services based on the new principles of Best Value, the promotion of public–private partnerships, and the scrutiny of performance through nationally set performance indicators and audits (Hood 1995; Seal 2003).

Research indicates that the intended reforms have been successfully implemented in some cases, and incrementalism is beginning to be challenged. For example, Seal and Ball (2004) report on the adoption, in two British local authorities, of budgetary processes that disrupted the application of incremental routines. Similarly, the work of Pinch (1995) and that of Bolton and Leach (2002) identify particular interventions designed to counter the influence of incremental practices in British local authorities. Incrementalism has, however, survived. Hood (1995, p. 106) found that the reforms had not created a complete collapse of previous models of public administration. Within British local authorities, Seal and Ball (2004) found that incrementalism has continued despite the intended reforms. Some authors argue that the apparent tenacity of such an approach is due to the lack of alignment between the intended reforms and the embedded cultural norms, rules, and routines within the organisation (see, for example, Ter Bogt 2008a, b). When this occurs, countermeasures to avoid incremental budgeting are likely to be confronted by organisational inertia (Seal 2003). In other words, effective change will require more complex interventions than simply changing "old" rules for "new" ones.
From a decision analytic perspective, it follows that any form of support designed to move away from incremental budgeting practices must take close account of both cultural aspects and other institutionalised patterns of organisational thinking and action. Within the local government context, at the very least, this means acknowledging that resource allocation is both a technical and a sociopolitical process (Wildavsky 1964). Furthermore, it should be recognised that public managers, together with the citizens they serve and the politicians they report to, are not only interested in the economic efficiency and effectiveness of public agencies, but also focus on non-economic aspects such as, for example, equality, accountability, responsiveness and equity (Wilson 1989). Consequently, support for resource allocation must take the form of embedding a new set of decision rules and routines that enables public managers to consider both economic and sociopolitical aspects: that is, a decision support framework that provides a form of "social rationality" (Ter Bogt 2008a) comprising both economic and sociopolitical concerns, and one that signifies a departure from incrementalism. In the following sections we introduce and discuss one such framework.
12.3 A Decision Framework for Resource Allocation

As the previous section highlighted, the process of resource allocation in local government is challenging from both a technical and a social perspective. It is technically difficult because it comprises the assessment of a large number of services or projects that need to be funded, the pursuit of multiple and often conflicting objectives in delivering such services or projects, and the definition of difficult trade-offs that need to be made in allocating limited budgets. The other facet of this type of decision has received less attention in the decision analytic literature. As also highlighted in the previous section, the process of resource allocation in local government is socially complex because of the multiplicity of stakeholders involved in the decision; the pressures for service delivery from active stakeholder groups representing different segments of the local community; and the complex participation patterns in the decision, which require interactions with, and a mandate from, elected members. Therefore, any form of decision support intended to promote a better allocation of resources must be designed to address both the technical and the social challenges just discussed (Cherns 1987; Phillips 2007).

In terms of supporting the technical side of such decision problems, the right tools are the ones researched in PDA. In particular, given the omnipresence of multiple objectives and intangibles in this context, multi-criteria portfolio analysis models have often been used for prioritising projects or services. The use of Decision Analysis provides a clear framework for thinking about resource allocation decisions and a common language (Howard 2004) which is both precise and helpful in allowing public managers to express preferences and trade-offs.
For supporting the social side of resource allocation model building, the most appropriate mode of working seems to be facilitated modelling (Franco and Montibeller 2010a), such as decision conferencing, where portfolio decision analytic models are built by, and analysed with, the group of decision makers (see Phillips 2007; Phillips and Bana e Costa 2007; Phillips 2011). The notion of facilitated decision modelling is not new in the UK local government context. Friend and his colleagues developed an interactive decision approach to the analysis of interconnectedness in options and the management (rather than modelling) of different types of uncertainty back in the 1970s. The approach, known as Strategic Choice (Friend and Hickling 2005), is firmly rooted in the British soft operational research tradition and has influenced the development of the intervention framework discussed below. We will refer to this type of intervention as facilitated portfolio decision analysis.

Given the socio-technical nature of these decision problems, a first step of any intervention is to help public managers define the decision problem they are dealing with, identifying which type of resource allocation is needed (i.e. funding new initiatives, or prioritising services or projects when budget levels are changed), and then to design the process of engagement with key stakeholders. These steps are then followed by an assessment of the value and costs of each service or project, and a PDA. We discuss each of these steps in turn below.
12.3.1 Defining the Decision Problem

In many instances, managers in local government have a compartmentalised view of the organisation, with a focus on their own department or division. This may lead to a failure to understand the repercussions of actions, or the impacts that running (or not running) services or projects may cause. A powerful tool for understanding different perspectives on resource allocation decisions is the use of causal maps (Eden 2004). These maps are networks of ideas or concepts, with links denoting either perceived causality (a cause at the beginning of an arrow which leads to an effect at the end of the arrow) or perceived influence (a means at the beginning of an arrow which influences an end at the end of the arrow). For further details about the structural aspects of maps see Montibeller and Belton (2006); for a review of these tools in a decision analytic context see Franco and Montibeller (2010c).

For example, Fig. 12.1 shows a causal map built for a group of managers from Coventry City Council, the group responsible for the internal auditing and planning department, which was underperforming at that time. (The number associated with every concept simply indicates the order in which it was entered into the map.) Our task was to help them clarify the areas that needed improvement in the department, identify possible actions for improvement in every area, and prioritise such options. The map was created using facilitated modelling, with participants entering concepts and arrows into the Group Explorer networked workstation system (www.phrontis.com) running alongside the Decision Explorer mapping software
(www.banxia.com).

Fig. 12.1 A causal map for discovering areas for improvement

This allows entered concepts to be projected "on-the-spot" onto a large public screen, thus helping to inform the discussion and enabling the sharing of different perspectives. The process was guided by the two authors, who were working as decision analysts and group facilitators. The map, which has a means-end structure, displays the means available to group members at the bottom and the ends they wish to pursue at the top. After exploring the complex interrelationships between means and ends, the group then agreed on the areas for improvement they wanted to tackle, four of which are shown in Fig. 12.1 (concepts 27, 33, 36 and 43). This type of representation can also be used for identifying objectives (Bana e Costa et al. 1999; Belton et al. 1997; Montibeller and Belton 2006), as well as options in a portfolio model (Montibeller et al. 2009). In this case study, for instance, we used the map to brainstorm actions that could be implemented to improve every area. For example, one of the areas identified in the causal map (but not shown in Fig. 12.1) was that the group itself should be more effectively used by the other divisions of the council. Several ideas were suggested for improving this area, as shown in Fig. 12.2, which were then grouped and labelled by the group members with our support. The concepts with a square border became options to be evaluated in a portfolio decision model.
12.3.2 Defining the Type of Resource Allocation Evaluation

From a portfolio decision analytic point of view, two main resource allocation problems are dealt with by managers in a local government context. The first relates to the allocation of resources to fund new initiatives, such as new projects or services; the second is linked to the prioritisation of strategic budgets, entailing decisions on which ongoing services will be funded, and to what extent. These two decision problems are formally represented next.
12.3.2.1 Allocation of Resources to New Initiatives

The first type of problem is the allocation of resources to new projects or services that a local authority wishes to fund. It can be easily formulated as a traditional portfolio problem. Let $A$ be the set of $N$ new projects or services to be funded, with $A = \{a_1, a_2, \dots, a_N\}$. Every $i$th project is assessed by an additive multi-criteria model, which provides its overall value $V(a_i)$ as a weighted sum of $K$ partial performances $v_j(a_i)$ and their respective weights $w_j$, i.e., $V(a_i) = \sum_{j=1}^{K} w_j v_j(a_i)$. For every project or service an implementation cost, $c_i$, is also estimated – see Keeney and Gregory (2005) for guidelines on how to define attributes to assess partial performances; Keeney (2002) for a discussion about value trade-offs and weights in multi-criteria models; and Kirkwood (1997) for details on how to elicit value functions.
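A minimal sketch of this additive value model follows; the weights and partial scores are illustrative placeholders rather than elicited values.

```python
# A minimal sketch of the additive value model V(a_i) = sum_j w_j * v_j(a_i);
# all inputs are hypothetical.

def overall_value(weights, partial_scores):
    """Weighted sum of partial performances for one project."""
    return sum(w * v for w, v in zip(weights, partial_scores))

w = [0.5, 0.3, 0.2]            # criterion weights, summing to 1
v_a1 = [80.0, 60.0, 100.0]     # partial value scores of project a_1 per criterion
print(overall_value(w, v_a1))  # 0.5*80 + 0.3*60 + 0.2*100 = 78.0
```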
Fig. 12.2 Ideas for the team to be used more effectively by the organisation
Fig. 12.4 An area-grouped portfolio decision problem
the assessment of options, enabling every department to estimate the performance of projects or services. Another way of grouping projects is to create new areas that bring together these new services, thus avoiding each department having "to fight for its own corner" – see Montibeller et al. (2009) for a discussion on structuring portfolio models with areas. For example, for the Coventry City Council project described above, we built a portfolio model as shown in Fig. 12.4, using the Equity software (www.catalyze.co.uk), where at the bottom of every column there is the name of the area and the blocks represent the possible actions for improvement. In another project, we helped a team drawn from the Library Learning and Culture division of Warwickshire County Council to explore new sources of income generation. The division is divided into six areas (libraries, museum service, county record office, adult and community learning, heritage education service and county arts service), and its aims are to inspire learning and imagination for people of all ages living in the region. Although the majority of the services provided by the division are free of charge, there is a small range of services for which there is a small fee (e.g. renting CDs and DVDs from the library). The challenge was to explore new ideas for generating extra income without increasing the current level of charges for this small range of services. The assumption was that specific sources of funding would be targeted to finance the best ideas. As in the case of the Coventry City Council project, we made use of the Decision Explorer and Group Explorer software packages. This time, however, we did not use them to support the definition of the decision problem. Rather, we used them to support the structuring and derivation of evaluation criteria. Following Keeney's (1992) value-focused thinking approach, we constructed a means-ends network map with members of the division team in a workshop format, and employed it to guide the brainstorming of new ideas for income generation. Using Group Explorer's anonymous input tool, workshop participants were able to produce 203 potential
ideas for generating additional income for the division. Given such a large number of ideas, they were subsequently clustered and further developed after the workshop and in several meetings with the team. The ideas were grouped in a bottom-up fashion by splitting them into 12 broad income generation "themes"; then an initial analysis was conducted to identify the particular division area where each idea could be implemented. Finally, a manageable set containing the most interesting ideas was chosen at a subsequent meeting with the division head and the finance manager. It is worth noting that in this case (Warwickshire County Council) we opted for identifying income generation themes emerging from the option set, instead of taking the division's six areas as a basis for clustering the ideas. The rationale for this was to reduce the potential for tension among the heads and staff of the different division areas, who otherwise could have ended up fighting for their new ideas to be placed in their own areas. We built the portfolio model similarly to the one built for Coventry City Council, using the Equity software to support the evaluation.
12.3.2.2 Strategic Budgeting

Another type of decision problem, which is currently very pressing in local government, relates to budget cutbacks and thus the prioritisation of strategic budgets. We use the term "strategic budget" here to differentiate it from the budgeting exercise that is typically conducted within this context. We see this as a top-down yet participatory decision process, with a focus on assessing individual ongoing services or projects, where the budget is centrally analysed and the portfolio of services is optimised given the budget available. The formulation is similar to the one we presented in the previous section, but now we suggest modelling it as a continuous knapsack problem (for details, see Martello and Toth 1990), with a variable $y_i$ associated with each $i$th service, which indicates the proportion of the service that should be kept within the new budget $B$:

$$\max \sum_{i=1}^{N} V(a_i)\, y_i \qquad (12.2)$$
$$\text{s.t.} \quad \sum_{i=1}^{N} y_i c_i \le B$$
$$\text{with } 0 \le y_i \le 1, \quad i = 1, 2, \dots, N.$$

For dealing with this type of evaluation, analysts usually employ optimisation, as these are large decision problems, often with dependencies between services. An illustration of this type of approach is the analysis we conducted for a local authority in the English Midlands, which was concerned with the prospect of a cut of
up to 40% to their £14.2 million budget (its name cannot be disclosed and some of the data has been disguised for confidentiality reasons). There were seven areas of direct expenditure, as shown in Table 12.1 (column (a) shows the base budget for each area), and their initial approach was to cut proportionally across all the areas. Instead, we built a model as formulated in (12.2) to support their decision and reduce the value losses. Notice that strategic budgeting problems could also be formulated as (12.1). There are some advantages, in our view, in formulating the problem as (12.2). First, it allows the analyst to include a lower bound on each $y_i$, if required for political or social reasons, such as concerns about implementation or fairness, respectively. Second, if the options being considered are relatively large, i.e., organisational units that perform a large number of services, then turning off one of them could prove to be impossible in practice. Formulating the problem as a continuous knapsack problem will instead just reduce the budget of the least efficient one. There are also some disadvantages in using (12.2). One is that it is not always possible to reduce the budget of a service without destroying it. We have assumed that there is a linear relationship between the value and cost of a service, but this may not always hold (a cut of 20% may generate a fall of 50% in its value, for example). In this case, a more realistic mapping between cost and value would be required, as well as a cut-off point for the service. Regarding the issue of large options discussed in the previous paragraph, using (12.2) will not always alleviate it if the budget cut is severe enough that a whole option is turned off. An alternative solution to this issue is to use (12.2) but decompose the large options into much smaller services.
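The continuous knapsack in (12.2) can be solved greedily by funding services in decreasing order of value per unit cost; the sketch below illustrates this, with hypothetical values and costs rather than the council's data.

```python
# A minimal sketch of the continuous knapsack formulation (12.2), solved
# greedily by value/cost ratio (optimal for the continuous relaxation);
# the values, costs, and budget below are illustrative placeholders.

def allocate_budget(values, costs, budget):
    """Return fractions y_i in [0, 1] maximizing total value within the budget."""
    n = len(values)
    y = [0.0] * n
    remaining = budget
    # Fund services in decreasing order of value per unit cost.
    for i in sorted(range(n), key=lambda i: values[i] / costs[i], reverse=True):
        y[i] = min(1.0, remaining / costs[i])
        remaining -= y[i] * costs[i]
    return y

values = [125, 45, 3, 1000, 301]               # overall values V(a_i)
costs = [92600, 99600, 1900, 442900, 133100]   # service costs c_i
y = allocate_budget(values, costs, budget=0.8 * sum(costs))  # 20% budget cut
print([round(f, 2) for f in y])
```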
12.3.3 Designing the Engagement with Key Stakeholders

Within the context of resource allocation in local government, attention to stakeholders is needed to assess and enhance the political feasibility of budgetary decisions. Consideration of stakeholders is also important to satisfy those involved in, or affected by, the decision that the intervention has followed rational, fair and legitimate procedures. This does not imply that all possible stakeholders should be satisfied by or involved in the resource allocation process – only that the key stakeholders must be. Of course the choice of which stakeholders are "key" and what level of participation is appropriate should be the result of careful consideration by the analysts and their sponsors, and is likely to include a sample of public managers, citizens and politicians. For example, in an intervention with the Teenage Pregnancy Strategy Group from the London Borough of Newham (Franco and Lord 2011), we worked closely with a multi-stakeholder group comprising senior managers from the borough, as well as representatives from the National Health Service, the children's rights and sex education group, faith groups, and the young parents forum, among others. For useful examples of engagement with stakeholder groups in decision analytic interventions, see Gregory and Keeney (1994) and Gregory and Wellman (2001).
Table 12.1 Initial budget and optimised budget (20% overall cut) for a local authority in the English Midlands

Areas | (a) Base budget 2010/2011 [£] | (b) Overall value full budget | (c) Cost new budget [£] | (d) Overall value new budget | (e) Original budget (%) | (f) Original value (%)
Local business and enterprise | 304,500 | 535 | 274,000 | 521 | 90.0 | 97.4
Corporate and customer services | 4,357,800 | 5,607 | 2,991,320 | 4,745 | 68.6 | 84.6
Finance | 1,880,200 | 3,065 | 1,880,200 | 3,065 | 100.0 | 100.0
Housing infrastructure and planning | 720,000 | 1,097 | 720,000 | 1,097 | 100.0 | 100.0
Environmental service delivery | 4,252,000 | 5,264 | 3,400,800 | 4,879 | 80.0 | 92.7
Community engagement, cohesion and wellbeing | 1,195,000 | 1,677 | 880,900 | 1,485 | 73.7 | 88.5
Community safety and enforcement | 1,490,900 | 2,391 | 1,213,100 | 2,265 | 81.4 | 94.8
Total | 14,200,400 | 19,634 | 11,360,320 | 18,056 | 80.0 | 92.0
In the case of complex organisational settings it may be useful to consider formal means to identify and manage stakeholders. Several problem structuring tools for stakeholder analysis are available in the literature (e.g. Bryson 2004; Parnell et al. 2008; Rosenhead and Mingers 2001). The most widely used techniques include the power-interest grid, star diagram, and stakeholder influence map (Eden and Ackermann 1998); and the stakeholder-issue interrelation diagram and problem-frame stakeholder maps (Bryson 2004). Stakeholder analyses should be designed to gain needed information, build political acceptance, and address some important questions about legitimacy, representation and credibility (Bryson 2004). Once the required engagement with stakeholders is scoped, the next stage in the intervention process is to articulate value-for-money concerns and conduct the PDA, which we discuss in the following sections.
12.3.4 Assessing Public Value

The concept of "public value" is critical for managers working in the public sector (Benington and Moore 2010). Yet, paradoxically, what is meant by "public value" is usually unclear. This is not surprising, as "value" is a concept to which multiple meanings can be attached (Rohan 2000). Furthermore, the notion of "public value" extends beyond economic considerations, and also encompasses social, political, cultural and environmental dimensions of value (Benington 2010). Therefore, public managers would benefit immensely from using a value-focused thinking framework (Keeney 1992) and being able to clearly define and agree on a set of public value dimensions with which to assess the services they provide or the initiatives they wish to support. The definition of the public value criteria $v_j(\cdot)$ must reflect the mission, strategic objectives and priorities of the local government agency (which, in our experience, are specified in the current strategic plan) and, at the same time, must allow the evaluation of the possible benefits of running every service (see, for example, Barcus and Montibeller 2008; Gregory and Keeney 1994; Keeney 1992). Such a value framework is shown in Fig. 12.5. While we believe these public value criteria have to be tailor-made for every organisation, to reflect both the specific decision context and the nature of the services being considered (Barcus and Montibeller 2008; Keeney 1992), the process of defining them should be more standardised. For problems concerning the funding of new initiatives (see Section 12.3.2.1), the other key parameter is the estimate of the implementation cost, $c_i$, for each $i$th service or project. This is traditionally calculated as the discounted future implementation cost of the investment. Care should be taken in setting the discount rate, to avoid excessively penalising projects that have heavy set-up costs but may provide high public value. In a similar way, it is also possible to discount value, albeit not common in practice (see Montibeller and Franco 2011). In some cases, less precise estimates are sufficient to build a requisite model (Phillips 1984). For example, in the Coventry City Council project described earlier,
Table 12.2 Calculating value and the optimal portfolio for a local authority in the English Midlands (20% overall budget cut)

Direct expenditure | (a) Base budget 2009/2010 (£) | (b) Value multiplier | (c) Value service | (d) Normalised value | (e) Run service? | (f) Value services running | (g) Cost service running
Enforcement | 92,600 | 3.00 | 277,800 | 125 | 1.00 | 125 | 92,600
Car parks administration | 99,600 | 1.00 | 99,600 | 45 | 0.00 | 0 | 0
Safer communities LAA fund | 1,900 | 4.00 | 7,600 | 3 | 1.00 | 3 | 1,900
Environmental health | 442,900 | 5.00 | 2,214,500 | 1,000 | 1.00 | 1,000 | 442,900
Planning enforcement | 133,100 | 5.00 | 665,500 | 301 | 1.00 | 301 | 133,100
CCTV | 37,300 | 5.00 | 186,500 | 84 | 1.00 | 84 | 37,300
Community safety | 74,300 | 5.00 | 371,500 | 168 | 1.00 | 168 | 74,300
Licensing team | 94,300 | 1.00 | 94,300 | 43 | 0.00 | 0 | 0
Improvement grants | 421,000 | 3.00 | 1,263,000 | 570 | 1.00 | 570 | 421,000
Home improvement agency | 10,000 | 3.00 | 30,000 | 14 | 1.00 | 14 | 10,000
Control centre | 83,900 | 1.00 | 83,900 | 38 | 0.00 | 0 | 0
Total | 1,490,900 | | | 2,391 | | 2,265 | 1,213,100
12.3.5 Conducting a Portfolio Decision Analysis

Once a decision model is developed, either for the funding of new initiatives (model (12.1) above) or for strategic budgeting (model (12.2) above), it is possible to conduct a PDA by solving the model and providing recommendations. Such portfolio models can help in identifying the options which provide the highest value-for-money, as well as a portfolio of options that is Pareto efficient in the use of public resources. As we mentioned at the beginning of this section, while the technical aspects of a PDA are critical for the success of an intervention, equally crucial is the management of the resource allocation process, including dealing with the group dynamics and negotiation involved in the decision. Goddard (1997), for example, has empirically shown that the relationship between groups has a significant impact on the budget planning process. Furthermore, Jacobsen's (2006) study of over 1,000 local politicians and managers identified that individuals act to defend the budget for their area of responsibility, thus decreasing the economic rationality of the budgetary process. Effective management of the resource allocation process is required to ensure that the economic and sociopolitical dimensions of rationality are not overlooked, thus challenging the influence of incrementalist routines. The need to manage process has led us to use facilitated PDA extensively as an intervention approach, which we locate within a broader family of facilitated decision modelling methods (Franco and Montibeller 2010a). This requires a process by which portfolio decision models are jointly developed with a group of stakeholders, with the assistance of computer support, and in which insights about the decision situation being modelled result from model-based analysis produced "on-the-spot" (Franco and Montibeller 2010b). For example, in the Coventry City Council project, once the effort and benefits were assessed by the group (via a discussion about the scores and agreement on a joint score), the Equity software prioritised the actions in terms of value for money, using value/cost ratios, as shown in Fig. 12.6.

Fig. 12.6 A prioritisation of new projects using facilitated portfolio decision analysis (PDA)

These outputs were then used by the group to discuss how the high value-for-money options could be implemented (the highest one was to set up regular meetings with the Head of Service, as shown in the fifth row of Fig. 12.6, with a ratio of 2.62). A similar approach was used for the London Borough of Newham project. The resulting model was used to help the group identify the highest value portfolio at various budgets. Participants were immediately surprised to see some of their flagship projects replaced by more, smaller projects. In particular, projects within the "clinical services" area appeared to be very expensive and were not picked up as efficient by the portfolio model. These results were counterintuitive to many of the participants, as some of these projects were considered to be at the core of the group's strategy. The political consequences of actually adopting the more efficient portfolios were then raised, and Equity was used to explore possible adjustments which would make certain portfolios more politically feasible. Furthermore, post-intervention interviews with participants indicated that our intervention approach
Fig. 12.6 A prioritisation of new projects using facilitated portfolio decision analysis (PDA)
was able to deliver a participatory yet efficient resource allocation process. We interpreted this outcome as evidence of the ability of the process to meet the participants' needs for social rationality (for details, see Franco and Lord 2011).

For the case of the English Midlands council budgetary decision, we built a large Excel spreadsheet and optimised its value (model (12.2)) using the Premium Solver add-in (www.solver.com). For example, for the budget area Community Safety & Enforcement, Table 12.2, column (e) shows which services should run (1.00) and which ones should be discontinued (0.00) for an overall budget reduction of 20%. The impact of these cuts on every expenditure area is shown in Table 12.1, in terms of the new budget (column c), new value (column d), and the % reductions in budget (column e) and value (column f). Notice that this optimal portfolio would represent a loss of only 8% in value (100% minus the total loss shown in the last cell of column f), against a 20% loss for across-the-board cuts (colloquially known as "salami-slicing" or taking a "haircut" – see Keisler 2011), assuming that value loss is a linear function of such a cut. By experimenting with the model, decision makers could see the impact of budget cuts on the % value loss when using PDA versus a linear cut, as shown in Fig. 12.7 (the value losses for a 20% budget cut are highlighted in the graph).
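To make the mechanics of such a model concrete, here is a minimal sketch of a continuous knapsack in the spirit of model (12.2), with invented service values and costs; scipy's linear programming solver stands in for the Premium Solver add-in used in the actual intervention.

```python
# A minimal sketch of a continuous knapsack for strategic budgeting, in the
# spirit of model (12.2). Service values and costs are invented; the actual
# Excel/Premium Solver model from the case is not reproduced here.
from scipy.optimize import linprog

values = [420, 310, 250, 180, 90]   # value of running each service in full
costs = [900, 650, 500, 400, 250]   # annual running cost of each service

budget = 0.8 * sum(costs)           # a 20% overall budget cut

# Maximise total value (linprog minimises, hence the negation) subject to the
# budget, with x_i in [0, 1] = fraction of service i that is retained.
res = linprog(c=[-v for v in values],
              A_ub=[costs], b_ub=[budget],
              bounds=[(0, 1)] * len(values),
              method="highs")

optimised = -res.fun
across_the_board = 0.8 * sum(values)  # a linear 20% cut loses 20% of value
print(f"Optimised retained value:        {optimised:.0f}")
print(f"Across-the-board retained value: {across_the_board:.0f}")
```

Because the relaxation is continuous, the optimum simply retains services in decreasing value/cost order, which is why the optimised value loss in Fig. 12.7 sits below the linear cut at every budget level.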
12.4 Conclusions and Directions for Future Work

In this chapter we argued that resource allocation in local governments is both an important field of application for PDA, and a decision environment that has its
Fig. 12.7 Value loss in linear cuts versus PDA for a local authority in the English Midlands (value loss [%] plotted against budget cut [%], comparing an across-the-board cut with a PDA-optimised cut)
own characteristics and challenges. In this context, we suggested an intervention framework for conducting PDAs. Such a framework is intended to provide a new set of decision rules and routines to assist managers in deciding what services to provide or what initiatives to support, and thus to contribute to the generation of public value. Drawing on the British problem structuring tradition (Rosenhead and Mingers 2001), the framework emphasises the importance of ensuring the legitimacy and political feasibility of budgetary decisions through problem structuring and the involvement of key stakeholders in building portfolio decision analytic models. It also stresses the need to identify the type of decision problem, and recognises value measurement as a critical element in developing sound value-based decision models, as well as the effective management of decision processes, hence our use of a facilitated version of PDA.

How could the intervention framework presented here be effectively implemented? Introducing change into any organisation can be difficult, but particularly so when dealing with budgetary processes in local government. As we noted in Section 12.2, incremental budgeting practices are difficult to let go of, and thus we posit that significant "decision analytic capability" needs to be developed in local government through appropriate training, apprenticeships, or knowledge transfer partnerships between academia and the public sector. Otherwise a shift to a PDA approach is likely to be fraught with difficulties.

We believe there are several areas for further development in the field of PDA. First, the issue of how to deploy facilitated portfolio analysis for optimisation models has to be investigated, which is linked with the availability of software. Decision conferencing has traditionally deployed user-friendly software for benefit/cost ratio analysis (Phillips 2007; Phillips and Bana e Costa 2007) and there are recent
developments of specialised software for portfolio optimisation (e.g. Lourenço et al. 2011). Our view is that the creation of tailor-made spreadsheets with embedded optimisation models, and with graphical and interactive modelling interfaces, may be the way forward, given the low implementation cost they represent for public (and also private) sector organisations.

The second area for further work is in developing process protocols to define public criteria. This calls for developments in stakeholder analysis techniques, including processes to identify who provides which information in these decision contexts. In our experience, discerning public value dimensions is especially difficult, and thus any agreed public value criteria will have to measure the performance of services unambiguously, both to avoid favouritism and to allow the justification of decisions. In this sense, the use of direct ratings, where assessments of performance and judgements are combined, should be avoided. The distinction between facts and values, for example via value functions, is also beneficial in defining who will provide the inputs for specific parts of the model.

The third area of research is extending PDA for the support of strategic budgeting. We have suggested that this may require a change in the formulation from a binary knapsack problem, making the decision variables continuous in the range [0,1] and reducing the value loss when budget cuts are imposed. Of course, using this model will provide quite different results than an across-the-board cut, which may create concerns about fairness. On the other hand, it provides a benchmark against which other (non-Pareto efficient) solutions may be tried and compared. We have argued that, in this context, a continuous knapsack formulation has some advantages over the traditional binary variables employed in most PDA models (which were typically developed for new projects). But this is still an open issue for debate and empirical validation. There remain, therefore, several technical and social issues associated with this type of analysis.
References

Bana e Costa CA, Carvalho Oliveira R (2002) Assigning priorities for maintenance, repair and refurbishment in managing a municipal housing stock. Eur J Oper Res 138:380–391
Bana e Costa CA, Ensslin L, Correa EC, Vansnick JC (1999) Decision support systems in action: integrated application in a multi-criteria decision aid process. Eur J Oper Res 113(2):315–335
Barcus A, Montibeller G (2008) Supporting the allocation of software development work in distributed teams with multi-criteria decision analysis. Omega 36(3):464–475
Belton V, Ackermann F, Shepherd I (1997) Integrated support from problem structuring through to alternative evaluation using COPE and VISA. J Multi-Criteria Decis Anal 6(3):115–130
Belton V, Stewart TJ (2002) Multiple criteria decision analysis: an integrated approach. Kluwer, Dordrecht
Benington J (2010) From public choice to public value. In: Benington J, Moore MH (eds) Public value: theory and practice. Palgrave Macmillan, Glasgow
Benington J, Moore MH (eds) (2010) Public value: theory and practice. Palgrave Macmillan, Glasgow
Berry WD (1990) The confusing case of budgetary incrementalism: too many meanings for a single concept. J Polit 52(1):167–196
Bolton N, Leach S (2002) Strategic planning in local government: a study of organisational impact and effectiveness. Local Government Stud 28(4):1–21
Boyne G, Ashworth R, Powell M (2000) Testing the limits of incrementalism: an empirical analysis of expenditure decisions by English local authorities 1981–1996. Public Admin 78(1):51–73
Braybrooke D, Lindblom CE (1963) A strategy of decision: policy evaluation as a social process. Free Press of Glencoe, New York
Bryson JM (2004) What to do when stakeholders matter: stakeholder identification and analysis techniques. Public Manage Rev 6(1):21–53
Cherns A (1987) Principles of sociotechnical design revisited. Hum Relat 40(3):153–161
Davis O, Dempster M, Wildavsky A (1966) A theory of the budgetary process. Am Polit Sci Rev 60:529–547
Eden C (2004) Analyzing cognitive maps to help structure issues or problems. Eur J Oper Res 159(3):673–686
Eden C, Ackermann F (1998) Strategy making: the journey of strategic management. Sage, London
Franco LA, Lord E (2011) Understanding multi-methodology: evaluating the perceived impact of mixing methods for group budgetary decisions. Omega 39(3):362–372
Franco LA, Montibeller G (2010a) Facilitated modelling in operational research (invited review). Eur J Oper Res 205(3):489–500
Franco LA, Montibeller G (2010b) 'On-the-spot' modelling and analysis: the facilitated modelling approach. In: Cochran JJ, Cox Jr LA, Keskinocak P, Kharoufeh JP, Smith JC (eds) Wiley encyclopedia of operations research and management science. Wiley, New York
Franco LA, Montibeller G (2010c) Problem structuring for multi-criteria decision analysis interventions. In: Cochran JJ, Cox Jr LA, Keskinocak P, Kharoufeh JP, Smith JC (eds) Wiley encyclopedia of operations research and management science. Wiley, New York
Friend J, Hickling A (2005) Planning under pressure: the strategic choice approach, 3rd edn. Elsevier, Amsterdam
Goddard A (1997) Organisational culture and budgetary control in a UK local government organisation. Account Bus Res 27(2):111–123
Goddard A (2004) Budgetary practices and accountability habitus: a grounded theory. Account Audit Account J 17(4):546–577
Gregory R, Keeney RL (1994) Creating policy alternatives using stakeholder values. Manage Sci 40(8):1035–1048
Gregory R, Wellman K (2001) Bringing stakeholder values into environmental policy choices: a community-based estuary case study. Ecol Econ 39(1):37–52
Hoggett P (1996) New modes of control in the public service. Public Admin 74(1):9–31
Hood C (1995) The new public management in the 1980s: variations on a theme. Account Organ Soc 20(2/3):93–109
Howard RA (2004) Speaking of decisions: precise decision language. Decis Anal 1(2):71–78
Jacobsen D (2006) Public sector growth: comparing politicians' and administrators' spending preferences. Public Admin 84(1):185–204
Keeney RL (1992) Value-focused thinking: a path to creative decision-making. Harvard University Press, Cambridge
Keeney RL (2002) Common mistakes in making value trade-offs. Oper Res 50(6):935–945
Keeney RL, Gregory RS (2005) Selecting attributes to measure the achievement of objectives. Oper Res 53(1):1–11
Keeney RL, Raiffa H (1993) Decisions with multiple objectives: preferences and value trade-offs, 2nd edn. CUP, Cambridge
Keisler J (2011) Portfolio decision quality. In: Salo A, Keisler J, Morton A (eds) Portfolio decision analysis: improved methods for resource allocation. Springer, New York
Kirkwood CW (1997) Strategic decision making: multiobjective decision analysis with spreadsheets. Duxbury, Belmont
Kleinmuntz DN (2007) Resource allocation decisions. In: Edwards W, Miles RF, Von Winterfeldt D (eds) Advances in decision analysis: from foundations to applications. CUP, Cambridge
Lindblom CE (1959) The science of muddling through. Public Admin Rev 19:79–88
Lourenço JC, Bana e Costa C, Morton A (2011) PROBE: a multicriteria decision support system for portfolio robustness evaluation (revised version). LSE Operational Research Group Working Paper Series, LSEOR 09.108. LSE, London
Martello S, Toth P (1990) Knapsack problems: algorithms and computer implementations. Wiley, Chichester
McCarthy G, Lane A (2009) One step beyond: is the public sector ready to let go of budgeting? J Finance Manage Public Serv 8(2):23–50
Midwinter W (2005) Budgetary scrutiny in the Scottish Parliament: an adviser's view. Financ Account Manage 21(1):13–32
Montibeller G, Belton V (2006) Causal maps and the evaluation of decision options: a review. J Oper Res Soc 57(7):779–791
Montibeller G, Franco LA (2011) Raising the bar: strategic multi-criteria decision analysis. J Oper Res Soc 62:855–867
Montibeller G, Franco LA, Lord E, Iglesias A (2009) Structuring resource allocation decisions: a framework for building multi-criteria portfolio models with area-grouped projects. Eur J Oper Res 199(3):846–856
Parnell GS, Driscoll PJ, Henderson DL (eds) (2008) Decision making for systems engineering and management. Wiley, Hoboken
Phillips L (1984) A theory of requisite decision models. Acta Psychol 56(1–3):29–48
Phillips LD (2007) Decision conferencing. In: Edwards W, Miles RF, Von Winterfeldt D (eds) Advances in decision analysis: from foundations to applications. CUP, Cambridge
Phillips LD (2011) The Royal Navy's Type 45 story: a case study. In: Salo A, Keisler J, Morton A (eds) Portfolio decision analysis: improved methods for resource allocation. Springer, New York
Phillips LD, Bana e Costa CA (2007) Transparent prioritisation, budgeting, and resource allocation with multi-criteria decision analysis and decision conferencing. Ann Oper Res 154(1):51–68
Pinch P (1995) Governing urban finance: changing budgetary strategies in British local government. Environ Plan A 27(6):965–983
Rohan MJ (2000) A rose by any name? The values construct. Person Soc Psychol Rev 4(3):255–277
Rosenhead J, Mingers J (eds) (2001) Rational analysis for a problematic world revisited: problem structuring methods for complexity, uncertainty and conflict. Wiley, Chichester
Seal W (2003) Modernity, modernisation and the deinstitutionalisation of incremental budgeting in local government. Financ Account Manage 19(2):93–116
Seal W, Ball A (2004) Re-inventing budgeting: the impact of third way modernisation on local government budgeting. Research Summaries Executive Series (CIMA) 2(10):1–13
Ter Bogt H (2008a) Management accounting change and new public management in local government: a reassessment of ambitions and results – an institutionalist approach to accounting change in the Dutch public sector. Financ Account Manage 24(3):209–241
Ter Bogt H (2008b) Recent and future management changes in local government: continuing focus on rationality and efficiency? Financ Account Manage 24(1):31–57
Wildavsky A (1964) The politics of the budgetary process. Little Brown, Boston
Wildavsky A, Caiden N (2003) New politics of the budgetary process, 5th edn. Longman, Harlow
Wilson JQ (1989) Bureaucracy: what government agencies do and why they do it. Basic Books, New York
Chapter 13
Current and Cutting Edge Methods of Portfolio Decision Analysis in Pharmaceutical R&D

Jack Kloeber
Abstract The R&D, regulatory, and marketing processes in the pharmaceutical industry have distinct differences in timing, risk, opportunity, compliance, and cost structure compared to most other industries, making portions of portfolio decision analysis (PDA) significantly different. We will cover these differences and how PDA methods need to adapt to be most useful. We will address concepts of project valuation, portfolio prioritization, portfolio uncertainty, portfolio balance, and portfolio optimization. One goal of this chapter is to introduce readers to concepts and methods which may not be used or even considered in their organizations, as well as to put the methods currently being used into context. We will present a methodology growing in popularity within PDA of discovery and early development portfolios – multiple objective decision analysis (MODA). A major theme throughout this chapter is a focus on portfolio objectives, rather than project goals, values, and risks. Many companies still focus their analytical efforts on project-level valuation and prioritization, missing chances to help the organization meet portfolio-level goals, fill company-level gaps, and help leaders conduct portfolio-level risk management. We explore the idea that subportfolios exist, and that different PDA methods may be more successfully implemented in one subportfolio than in others. Finally, maximizing value, managing risk, and aligning decision making with company values require an integrated portfolio management process, which we address at the end of the chapter.
J. Kloeber () Kromite, LLC, Boulder, CO, USA e-mail:
[email protected]
A. Salo et al. (eds.), Portfolio Decision Analysis: Improved Methods for Resource Allocation, International Series in Operations Research & Management Science 162, DOI 10.1007/978-1-4419-9943-6_13, © Springer Science+Business Media, LLC 2011
13.1 Introduction

The R&D, marketing, and sales processes in the pharmaceutical industry have distinct differences in timing, risk, opportunity, and cost structure compared to most other commercial companies, making portions of portfolio decision analysis (PDA) significantly different. We will discuss these industry characteristics and which methods are being used to solve the toughest problems. We will address concepts of portfolio prioritization and portfolio optimization, two popular methods of approaching key portfolio-level decisions, and the question of when each should be used and for which purpose. We will also present a multiple objective approach to PDA for discovery and early development portfolios, and compare this approach to the more classic, financially oriented analysis. Finally, we will discuss how portfolio-level measures are needed for optimization, but do not always coincide intuitively with traditionally gathered and reported project-level information, due to marketplace, medical, and scientific dependencies during development. The types of projects being evaluated in discovery (programs), early development (compounds), and late development (projects) make consolidation across the entire pipeline problematic.

The first half of this chapter explores the characteristics of a pharmaceutical company's R&D portfolio, the types of subportfolios that exist, and the various methods that are being successfully implemented in some of the largest and most successful pharmaceutical companies to maximize value, manage risk (inherent in all R&D), and align decision making with company objectives and values. The second half of the chapter addresses methods which, after thoughtful reflection and years of practical experience and observation, we believe should be used much more frequently to help the pharmaceutical analyst in his/her role. Some of the improved methods were developed specifically for this industry, while others are taken from industries such as oil and gas, finance, and government.
13.2 Background of PDA in the Pharmaceutical Industry

The pharmaceutical industry, primarily in the USA, has been using project-level decision analysis for over 30 years. Decision analysis should, therefore, be fairly mature, widespread, and well understood in this industry. However, even though methods were introduced and DA processes were installed in several large pharmaceutical companies as early as 1985, many other companies either failed in their attempts to internalize decision analysis concepts, such as Decision Quality, or never attempted to introduce these concepts. Portfolio decision analysis requires another layer of complexity, one that seems counter to the culture within pharmaceutical companies, since most activities are oriented toward individual projects, and not the entire R&D portfolio. Most companies in this industry have undergone significant changes in scope, make-up, organization, expertise,
Fig. 13.1 Survey question by Insight Pharma Reports, a subsidiary of CHI: "How does your organization measure portfolio management performance?" (percent of respondents choosing each measure: accuracy of long-range budget forecasts; accuracy of long-range capacity forecasts; accuracy of project timelines; number of NDAs submitted; number of NDAs accepted; revenue from new therapies)
and alliances. Currently, many mid-sized and a few large pharmaceutical companies outsource their requirements for project-level DA and PDA. Internal analytical organizations continue to analyze and recommend portfolio actions to management, and can be found in finance, commercial, R&D, or strategy groups. The methods and processes vary greatly between internal organizations and between external consulting companies. Some Contract Research Organizations (CROs) are also assisting pharmaceutical companies in decision making, decision analysis, and portfolio-level support and management. Companies such as Quintiles, Parexel, i3, PPD, and ICON have decided to offer pharmaceutical companies the chance to outsource much of their portfolio-level analysis as well as project-level analysis.

A recent study conducted by Insight Pharma Reports is included below to illustrate how PDA organizations are viewed, what their responsibilities are perceived to be, and how well they are believed to be performing. Figure 13.1 contains the results from a 2009 survey (Insight Pharma Reports Survey 2009, presented by Clear Trial LLC, CHI Portfolio Management Conference, November 2009). It shows the percent of respondent companies which use each type of measure to assess the performance of a Portfolio Management function. The reader will notice an unfortunate absence of key portfolio concerns and associated metrics, such as:
• reducing portfolio risk
• managing portfolio risk
• identifying and achieving portfolio-level objectives
• improving the quality of the project portfolio
The accuracy of timing or cost estimates for an individual project is completely different from reducing the total cost of a portfolio of projects or decreasing the average cycle time of a portfolio of projects. The difference is important, and one that will be emphasized throughout this chapter. To successfully hit the many project milestones on time and on target is a natural measure of good planning and good execution of a project – in other words, good project management, not necessarily good portfolio management. The measures from the survey should serve both as an introduction and as a source of insight into current portfolio management practices:

Accuracy of long-term budgets – As discussed above, this measure is actually a test of the descriptive ability, and subsequent planning and execution abilities, of the project management organization. However, it is being used as a measure of the proficiency of the Portfolio Management organization. The closer the actual budgetary requirements were to the forecasted requirements, the better the portfolio management organization performed on this measure.

Accuracy of long-range capacity forecasts – The forecasts are of required capacity, as opposed to forecasts of actual capacity, which is a much easier task. While this type of accuracy is again a test of the descriptive ability of the Portfolio Management organization, it is much more integrated and important than budget accuracy. An inaccurate forecast could cause too much or too little capacity to be built. There are a few famous instances where a $300M to $500M plant was built to satisfy forecasted capacity, only to see the blockbuster drug not be approved by the FDA. Too little capacity has caused millions of prescriptions to go unfilled and finally be rewritten for a competitor drug. Accurate capacity forecasting requires aligning the sales, marketing, and manufacturing organizations, and normally involves correct timing across multiple drugs.

Accuracy of project timelines – This measure is a test of the project team's ability to accurately assess the times to reach certain project milestones (or its desire to do so, since there are motivational biases which could pressure teams to project longer or shorter timelines for their projects). Normally the completion or beginning of a phase is forecasted, as well as the beginning and completion of certain critical studies within the project's development plan. Presumably, this metric is included because management views timely achievement of project-level milestones, or at least the accurate projection of achievement dates, as part of Portfolio Management's responsibilities.

Note: For all three of the measures mentioned above, management is certainly not waiting 5–10 years to assess the forecasts against the actual revenues, costs, or milestone achievements. At best, the organization may be checking the accuracy of short-term forecasts of cost, revenue, and project timing, where the uncertainty is lowest, the variability is lowest, and portfolio management insights and recommendations have the least impact.

Number of New Drug Applications Submitted – (Known as NDAs, these are very large, complex submission packages submitted to the FDA after scientific and
clinical studies have been completed. They represent the culmination of many years of research for one compound, in one disease.) While the number of NDAs submitted is an important measure for the organization, the total number of NDAs submitted in 1 or even 2 years has very little to do with Portfolio Management efficiency, proficiency, or experience, and much more to do with how many projects were already in Phase 3 and in a position to progress through the submission preparation process. A more relevant objective might be how the total number of submissions compares to the total number of submissions needed for the company to reach its portfolio objective of number of successful launches or total revenue flow in the short-term future.

Number of New Drug Applications Accepted – While NDAs represent years of research, some NDAs are not accepted for review due to incomplete information, incorrect submissions, or inappropriate submissions. A high acceptance-to-submission ratio indicates an organization which is aligned with FDA guidelines and procedures and an integrated clinical development process which designs the clinical trials properly. Acceptance does not imply FDA approval. While estimates in the early 2000s were at 95% for approval given acceptance, recent estimates of approval percentage range from 70 to 85% depending on the therapeutic area.

Revenue from New Therapies – This is the sum of revenues, in a given year, from company projects which represent new therapies for patients. Often, "a new therapy" is loosely defined as any drug which recently received a regulatory approval (USA, EU, Japan, Canada, emerging markets, other countries) and fits at least one of the following categories:
1. Gives patients a new form of drug delivery (e.g., an oral form instead of an intravenous form)
2. Gives patients a new dose regimen (e.g., the new drug can be taken once a week instead of once a day)
3. Gives patients a better safety profile (e.g., lower likelihood of a serious adverse event related to the drug)
4. Gives patients a better side effect profile (e.g., less probability of, or less severe, nausea)
5. Gives patients better efficacy (e.g., Lipitor lowers LDL (the "bad" cholesterol) better than Pravachol)
6. Gives patients a different mechanism of action (e.g., a completely different type of drug which works on a different part of the cell in treating or curing the same disease)

This measure can be dramatically affected by the range of years included in "recent," the quality and quantity of the branded product's marketing team, the competitive landscape, and the technical and regulatory success or failure of key products in the pipeline which could have had recent regulatory approvals. Because of these other major factors influencing this measure, it is an extremely rough proxy for a Portfolio Management group's success or effectiveness.

In general, this survey is a good indicator of what Portfolio Management groups are being asked to accomplish. Unfortunately, neither the list of measures nor
the degree to which they are being used by the pharmaceutical companies is very promising. Portfolio Management organizations, normally staffed by talented, analytical, and knowledgeable managers, are not being challenged to fully analyze the toughest questions, to confirm the key assumptions, and to accurately model the impacts of very critical decisions at the project and portfolio level. They are often not being asked to assess, communicate, or help manage the large risks present in today's pharmaceutical industry. The Portfolio Management function should exist to add value to the portfolio of new drugs through better insight, improved decision making, and improved alternatives. All of the tools, methods, organizational discussions, and process flows in the rest of this chapter are intended to help the Portfolio Management organization bring clarity to tough decisions and add value.
Purpose of the survey: To assess shifts in the demands on the Portfolio Management function and how companies are addressing those shifts.
• Survey conducted by Insight Pharma Reports (sister company to CHI)
  – Phase 1: Quantitative (trends snapshot)
  – Phase 2: Qualitative (best practices)
• Survey conducted October 7–29, 2009, among senior managers in portfolio management, corporate strategy, and clinical operations
• A total of 94 responses

Sample respondents for the above survey: Actelion, Amgen, Ariad, AstraZeneca, Bayer Healthcare, Biothera, Boehringer Ingelheim, Emergent Biosolutions, Fresenius, Johnson & Johnson, Merck Serono, Novartis, NovImmune, Pfizer, Roche, Solvay, SuperGen, UCB.
13.3 Pharmaceutical Industry Portfolio Management Characteristics

In this section, we will briefly discuss the differences between the pharmaceutical industry and other industries which use PDA methods for analysis and decision making. We structure our comparison using the four pillars of project and portfolio
analysis and evaluation – timing, risk, value, and cost. These four elements seem to dictate so much of what is discussed, measured, announced, and used as rationale for R&D decisions.
13.3.1 Four Pillars of Analysis and Evaluation

1. Timing: From discovery to launch, the R&D involved in producing an ethical drug can take between 6 and 13 years. The US patent on a new chemical compound lasts 20 years. Once a successfully marketed drug goes "off patent," revenue drops by close to 90% within a year. This large drop is due to competition – the many generic companies waiting to take advantage of the work done by the original IP owner. With no R&D expenses to recoup and almost no marketing, 10% of the original price still allows them a reasonable margin. Many industries operate on a much more condensed cycle. Consumer health products, for example, normally move from concept to launch in 3 months to 2 years – with a market life of 3–5 years. Financial portfolio managers may hold a stock for as little as two fiscal quarters.

2. Risk: Risk in development is due to safety, drug–drug interaction, side effects (convenience, tolerability), dosing, and efficacy (does the drug significantly help the patient?). After research is complete and a potential new drug enters the development stage, there is still only a one-in-ten chance that the drug will be launched. Of those launched, only one out of three drugs is profitable. This is a far cry from other industries, which could not survive with such low chances of success from concept to launch. Many industries discount the technical risk aspect completely and assume that any project which is funded will launch, given a good funding decision and good project management. Many industries are more concerned with the engineering and eventual market success than with the increasingly intense testing of the science and medicinal value associated with a compound.

3. Cost: Despite the recession and healthcare reform legislation, pharma and biotech companies are continuing to invest heavily in R&D. In 2009, US companies spent $65.3 billion, an increase of more than $1.5 billion from 2008 [1]. While cost estimates vary widely, depending on how the study was conducted, most experts agree that it costs nearly $1B to develop a non-orphan drug from discovery to approval. If failures are included, the expected cost per successful new drug increases dramatically, to $1.5B–$1.8B. Another source estimates the average fully capitalized resource cost (including research on abandoned drugs) to be $1.4 billion in 2003 [2]. While new product costs in other industries are extremely variable ($10,000 for small consumer products to $10B for large capital expenditures such as off-shore drilling installations), costs running above $1B per product certainly put the pharmaceutical industry in the upper tier of new product development costs.

4. Value or Opportunity: High costs (>$1B), low success rates (<5%), and long lead times (>7 years) all point toward an industry which should be avoided by investors. However, the human suffering from diseases which are potentially preventable or treatable by pharmaceuticals, as well as the potential revenues, are staggering. Table 13.1 shows the financial heights to which these drugs have brought their companies. These are dollar amounts for a single year, and most drugs have 5–13 years of market exclusivity. The diseases these drugs address range from depression to schizophrenia to asthma to important cardiovascular problems and diseases.

[1] Drugs in Development: From Pipeline to Market 2010, PharmaLive, July 2010.
[2] K.S. Holland and B. Bátiz-Lazo, 2004 Case Study.

Table 13.1 Top ten drugs by worldwide sales in 2009

Position  Product/brand    2009 US sales  Disease
1         Lipitor          $12.9B         High cholesterol
2         Plavix           $5.9B          Blood thinner
3         Nexium           $5.7B          Heartburn
4         Advair/Seretide  $5.6B          Asthma
5         Zocor            $5.3B          High cholesterol
6         Norvasc          $5.0B          High blood pressure
7         Zyprexa          $4.7B          Schizophrenia
8         Risperdal        $4.0B          Schizophrenia
9         Prevacid         $4.0B          Heartburn
10        Effexor          $3.8B          Depression

Source: Matthew Herper and Peter Kang, "The World's Ten Best-Selling Drugs," Forbes.com, 22 March 2006, www.forbes.com
13.3.2 Adding Value to the Pharmaceutical Project Portfolio by Improving Decision Quality

Let us begin by formally recognizing the elements of quality decision making, as first developed by Matheson and Matheson (1998), Howard (1988), and McNamee and Celona (1990). These (project) DQ elements are (1) Frame, (2) Values, (3) Alternatives, (4) Information, (5) Logic, and (6) Commitment. Keisler (2011) proposes a modification of these elements to directly address Portfolio Decision Quality (PDQ): (1) Frame, (2) Values, (3) Alternatives, (4) Information, (5) Logical Synthesis, and (6) Implementation. In Table 13.2, each element of PDQ is reviewed in terms of pharmaceutical R&D decision making by identifying a range of quality achievement levels. The low- and high-quality descriptions for each PDQ element may also shed light on the types of methods and tools which are needed to achieve high-quality decision making.
Table 13.2 Comparison of levels across each (Portfolio) Decision Quality element

Frame
Low level: The Target Product Profile (TPP) contains the clinical safety, efficacy, and tolerability standards which should be achieved to pass regulatory hurdles. In low-quality organizations, the TPP is allowed to move based upon the changing perception of the candidate drug's potential. Major assumptions include: (1) meets TPP hurdles – trial is a success – project moves forward; (2) fails to meet TPP hurdles – trial is a failure – project stops; (3) the Go/No-Go is not really a decision, but an outcome.
High level: The project development decision is initially seen as a risky investment with many possible outcomes depending upon the development plan, the science, and the patients. The development phases are a structure for testing the new medicine, but the trial design, timing, trial size, dose, and desired performance levels all affect the probability and the magnitude of the outcomes. The start of each phase is seen as another rich decision, with more information available, several different development plans (including delay, sell, or stop), and more cost at risk.

Values
Low level: Peak sales or risk-adjusted net present value is used to value the alternatives. The No-Go alternative is usually valued at $0 risk-adjusted NPV. There is usually an unstated level of NPV which will cause the decision team to move forward with the project, if it meets the TPP criteria. Other objectives beyond NPV are either tracked separately or ignored in the valuation.
High level: Values are challenged and investigated. Portfolio values are considered. Risk-adjusted NPV may not be the only driver of value. The correct value, or values, is used to bring clarity to the decision at hand.

Alternatives
Low level: A cross-functional team develops a single "best" plan which trades off risk, cost, timing, and market performance. Dialogue between decision makers is infrequent or nonexistent until the full plan is posed as "the plan" in a Go/No-Go environment. The only alternative plan shown is the No-Go option, meaning a full stop to development.
High level: The cross-functional development team is challenged to explore all development plans to (1) lower the monetary risk, (2) lower the overall trial cost, (3) decrease the time to market launch and better meet the patients' needs, and (4) decrease the probability of unforeseen problems after launch (safety, tolerability, efficacy).

Information
Low level: The single development plan is developed with base-case values – single point estimates of the most likely outcome for all uncertainties. Assumptions are documented for the base case. Probabilities are not used in calculations or in trade-off discussions.
High level: All development plans are adequately investigated. Uncertainty is included in the investigation, quantified, and communicated to management. Key uncertainty drivers are identified, as well as risk management/mitigation strategies.

Logic
Low level: The economic logic is well known and agreed upon (P&L). Several (many) financial measures are calculated and shown, usually compared to the Stop Development option. Decision makers are free to combine or ignore any of the financial measures as they see fit. Non-financial measures are usually not quantified but are treated as satisfying constraints.
High level: The economic logic is well known and agreed upon (P&L). Financial and non-financial measures have been agreed upon. A structured method is applied to arrive at the recommended approach, combining all decision makers' key values.

Implementation
Low level: Often, decisions are delayed for more information, usually due to a desire to look at other alternative development plans. If a decision is reached, it is revisited some months later as the budget gets tighter. Several new tests are suggested and, after discussion, decided on. Sometimes these test results are not relevant or useful for future decision making.
High level: The clarity of the decision leads to implementation of the decision. Decision teams have adequate documentation and analysis to explain and defend decisions when they are revisited. Value-of-information analyses have been carried out and the future trials and tests have also been decided.
The consequences of low-level decision quality include significant long-term loss of value within a company's portfolio. Mistakes such as allowing TPP standards to move, creating only a single project development plan, ignoring uncertainty, measuring values which are not aligned with the company's strategies, or inaction or inability to make tough decisions can all cause value to be missed, mis-measured, or unrealized.
13.3.3 R&D Project Stage-Gate Process

Many companies have struggled to establish a successful portfolio analysis capability as well as a working Portfolio Management process. In the last 20 years, nearly all pharmaceutical companies have instituted a project-oriented stage-gate approach to their R&D processes, providing for Go/No-Go decision making at key milestones in the R&D journey. This stage-gate process, while providing a method for quality control and for managing risky development spend, creates tension between project-level decision making and portfolio-level decision making. Typically, the portfolio process is a cyclical one, every quarter or year, ensuring alignment with budget and company objectives. The stage-gate process, however, is driven not by time but by the progress of the experimental drug through R&D hurdles (see Spilker
Fig. 13.2 Stage-gate process in pharmaceutical new product development
2009, Chapter 10, Fig. 10.1). A successful PDA process must be one which ties the stage-gate process to the budget and corporate objectives, without delaying the drug's progress or ignoring portfolio-level imperatives. The challenge is to make strategically aligned decisions at each stage of a project, while adhering to the constraints and reacting to the opportunities of the cyclical business processes (Figs. 13.2 and 13.3).

While an uncertainty tree is a popular way to communicate the risks, and the process is designed to reduce risk by gathering information about the compound using increasingly tough hurdles and expensive means, the tree does not communicate the decisions that must be made before starting each next phase of development. Usually the results from a trial are not obvious and require management to consider:
1. What was just learned?
2. What is the risk, cost, and potential value of the recommended path forward?
3. Are there other paths which would add more value to the project and to the portfolio?

Not including these decision points in the modeling of the project's development path can lead to a lack of good alternatives, missed opportunities to change or sell
Let us investigate the many methods which are at the PDA group's disposal and how they are being used to accomplish the primary objective of maximizing value with manageable risks.

EIRR: EIRR is the Expected Internal Rate of Return. The EIRR is calculated as the Internal Rate of Return (IRR) for the expected cash flow. The expected cash flow is the actual statistical mean of each year's cash flow. Although many financial executives (CFOs, Directors of Finance Operations) are accustomed to working with IRR and therefore relate very quickly to EIRR, the measure, by itself, gives no indication of the size of the opportunity, the magnitude of the upfront R&D cost, or the amount of risk associated with the development cost. This measure should not be used as a sole measure or prioritization method for R&D projects.

ENPV: The ENPV is the Expected Net Present Value, the mathematical probability-weighted average of all possible NPV outcomes, including the negative NPV outcomes if the project fails during development or in registration. This measure does give a better indication of the size of the opportunity, and it takes the technical risk into account as well as the costs of R&D. However, the costs and risks are hidden in the overall metric and more transparency into them may be needed. This is the most frequently used financial measure within the industry for the long-term value of new drug candidates.

NPV (post-launch): This measure includes all cash flows after approval of the drug by the regulatory agency: the NPV of cash flows, discounted at the company's approved discount rate, the minimum internal rate of return (MIRR), or the weighted average cost of capital (WACC). While this measure lacks any R&D costs or R&D risk, that is exactly its attractiveness. It is a very pure measure of the commercial opportunity, with the time value of money helping to differentiate between opportunities now vs. opportunities 10 years from now. If the commercial uncertainties are modeled and included, this measure should be the mean NPV (post-launch).

PI: This measure is normally called the productivity index or the productivity ratio. We define this measure as the ratio of the ENPV over the EDC (Expected Development Cost). ENPV has been defined above. The EDC is calculated as the risk-adjusted average net present cost (NPC) of development. The EDC takes technical risk at each stage into account as well as the time value of money. This is a form of return on investment, where the investment is the risk-adjusted development cost. Most large companies have some version of this measure for each project, although most do not keep or track this measure for their portfolio. This ratio has some of the same weaknesses as the EIRR: it hides the actual size of the opportunity as well as the riskiness and magnitude of the development costs.

Clinical Proof of Concept (POC) is usually tested in Phase 2. Before the proof, at the end of Phase 1, the organization has large uncertainty about whether the drug actually has an impact on the disease. The larger costs begin after POC, with manufacturing and much larger clinical trials. This is the key milestone in risk reduction, if the test meets the clinical efficacy hurdles. It is also the key milestone
Table 13.3a Sample projects with measures

Project  Disease            Phase  PTRS (%)  NPV    ENPV  Long term       Short term      Productivity
                                                          clin cost ($M)  clin cost ($M)  index
Proj 1   High cholesterol   PC        8      1,120    45       132             15             0.34
Proj 2   Chronic pain       2b       42        290   101       350            179             0.29
Proj 3   Osteoarthritis     2a       20      1,216   190       431            167             0.44
Proj 4   Dry eye            1        17        750    89       354             36             0.25
Proj 5   Pancreatic cancer  PC        5        353     2       132              9             0.02
Proj 6   HIV/AIDS           1        20        556    79       106             19             0.74
Proj 7   Antibiotic         PC       10        178    10        38             10             0.26
Proj 8   Depression         2a       15      1,120   110       191             45             0.58
Proj 10  Schizophrenia      1         9        862    22        66             10             0.33
Proj 12  Lung cancer        1        11      1,120    40       179             20             0.21
Proj 15  Anxiety            1        14        422    12       461             93             0.03
Proj 17  Asthma             1        11        616    31       142             79             0.22
Proj 21  Dry eye            PC        3        360   -10        92             15            -0.11
Proj 23  Migraine           1         7        256    -2       101             30            -0.03
Proj 27  Antibiotic         3        64        375   222       225            129             0.99
for cost reduction, if the test does not meet the hurdles, since the expenses related to large clinical trials and manufacturing development are avoided. Because of this significant milestone, many risk-adjusted measures vary dramatically before vs. after POC. Comparing any one of these measures between projects on opposite sides of POC can lead to misleading valuation of a project and to incorrect decision making. Many companies develop internal hurdles for progression of a project based upon these measures. We have found that the internal hurdles are usually good guidelines for further investigation into a project's risk and value, but not refined or sensitive enough to be valuable Go/No-Go criteria.

Below is a table of projects (Tables 13.3a–13.3d) and a few commonly calculated measures. A Portfolio Management organization may be asked to prioritize these projects for resource allocation purposes, or possibly to identify candidates for stopping or out-licensing. It is quite difficult to identify a correct prioritization from Table 13.3a, since there is no clear pattern. Ordering by the various measures lends some insight into the differences between the projects and is a common method used in the industry. In Table 13.4, one can start to identify projects which appear to be less risky, less costly, and more valuable due to the color coding of projects into top (green cells), middle (yellow cells), and bottom (red cells) thirds on each measure. This type of descriptive analysis usually leads to a great deal of discussion, some insight at the project level, but very little insight at the portfolio level. This technique is simple to implement and objective, but unfortunately can lead to a decision context which
Table 13.3b Order by PTRS

Project  Disease            PTRS (%)
Proj 27  Antibiotic         64
Proj 2   Chronic pain       42
Proj 3   Osteoarthritis     20
Proj 6   HIV/AIDS           20
Proj 4   Dry eye            17
Proj 8   Depression         15
Proj 15  Anxiety            14
Proj 12  Lung cancer        11
Proj 17  Asthma             11
Proj 7   Antibiotic         10
Proj 10  Schizophrenia      9
Proj 1   High cholesterol   8
Proj 23  Migraine           7
Proj 5   Pancreatic cancer  5
Proj 21  Dry eye            3

Table 13.3c Order by ENPV

Project  Disease            ENPV
Proj 27  Antibiotic         222.0
Proj 3   Osteoarthritis     190.0
Proj 8   Depression         110.0
Proj 2   Chronic pain       101.0
Proj 4   Dry eye            89.0
Proj 6   HIV/AIDS           79.0
Proj 1   High cholesterol   45.0
Proj 12  Lung cancer        40.0
Proj 17  Asthma             31.0
Proj 10  Schizophrenia      22.0
Proj 15  Anxiety            12.0
Proj 7   Antibiotic         10.0
Proj 5   Pancreatic cancer  2.0
Proj 23  Migraine           -2.0
Proj 21  Dry eye            -10.0
lacks clarity, which often leads to poor decision making. The tabular approach also has other weaknesses:
1. Many of the measures are extremely dependent – indeed, are functions of other measures: ENPV contains both the probabilities from PTRS and the value and costs from NPV; NPV includes short-term cost, long-term cost, and revenues; and the productivity index is a function of ENPV and risk-adjusted long-term cost.
2. The number of a project's measures in the top third (white cells) should communicate the relative value of the project. Yet analysts are often asked by decision makers to include many different measures of interest, without regard to how the measures overlap in purpose with other measures (e.g., NPV vs. ENPV) or
Table 13.3d Order by productivity index

Project  Disease            Productivity index
Proj 27  Antibiotic         0.99
Proj 6   HIV/AIDS           0.74
Proj 8   Depression         0.58
Proj 3   Osteoarthritis     0.44
Proj 1   High cholesterol   0.34
Proj 10  Schizophrenia      0.33
Proj 2   Chronic pain       0.29
Proj 7   Antibiotic         0.26
Proj 4   Dry eye            0.25
Proj 17  Asthma             0.22
Proj 12  Lung cancer        0.21
Proj 15  Anxiety            0.03
Proj 5   Pancreatic cancer  0.02
Proj 23  Migraine           -0.03
Proj 21  Dry eye            -0.11
Table 13.4 Top, middle, and bottom thirds identified for the key measures
the importance of the measure (e.g., next year's cost vs. 10-year cost), yet the number of white cells will shape the opinion. For example, it would be easy to find another measure for which Project 8 is in the bottom third. Should this change the perceived value of that project?
3. The categorization may be inappropriate. The change in category (e.g., moving from the top third to the middle third) may be insignificant for one measure yet very significant for another measure.
4. None of these measures is compared within phases, yet the phase of development can have a large impact on the appropriate risk, cost, and even NPV (due to delay in launch) for a project.
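To make the first weakness concrete, the sketch below derives PTRS, EDC, ENPV, and the productivity index from the same phase-level inputs, which is why the measures move together. All phase costs, probabilities, and the post-launch payoff are invented, and discounting is omitted for brevity.

```python
# Sketch: PTRS, EDC, ENPV, and PI computed from one set of phase-level inputs.
# Numbers are illustrative only; discounting is ignored for brevity.
phases = [                      # (phase, cost $M, probability of passing)
    ("Phase 1", 15, 0.60),
    ("Phase 2", 60, 0.40),
    ("Phase 3", 150, 0.65),
    ("Registration", 5, 0.90),
]
npv_post_launch = 1_200         # assumed mean post-launch NPV ($M)

edc = 0.0                       # expected (risk-adjusted) development cost
p_reach = 1.0                   # probability of reaching the current phase
for name, cost, p_pass in phases:
    edc += p_reach * cost       # a phase's cost is incurred only if reached
    p_reach *= p_pass           # success probability accumulates across phases

ptrs = p_reach
enpv = ptrs * npv_post_launch - edc   # failures contribute their sunk costs
pi = enpv / edc                       # productivity index = ENPV / EDC
print(f"PTRS={ptrs:.1%}  EDC=${edc:.0f}M  ENPV=${enpv:.0f}M  PI={pi:.2f}")
```

Changing any single phase probability moves PTRS, EDC, ENPV, and PI at once, so tabulating all four side by side adds far less independent information than the number of columns suggests.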
Despite the deficiencies of such comparisons, many companies spend a significant amount of time discussing prioritization using just such tables. Other organizations use graphs of two or three of these measures in a simple scatter plot or bubble chart. These graphs can add more insight, and coloring the bubbles by phase can allow for better visual within-phase comparison. Missing from both of these methods – tabular and graphical – are budgetary and resource constraints (long term and short term). If there are multiple constraints, a simple prioritization by any measure can be very misleading. In general, portfolio-level objectives such as total ENPV, expected revenue in a given year, or number of launches in a year are difficult to optimize or even view using such tables. Balance, or diversity, is also not addressed, but it is often an important aspect of an R&D portfolio with regard to phases (having the right number of projects in each phase to accomplish objectives); disease areas (to align with company strategy); and technology (e.g., level of innovation, biologics vs. chemical molecules, drug delivery devices).
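The sketch below, using invented numbers loosely in the spirit of Table 13.3a, shows how a greedy funding pass in decreasing ENPV/cost order can diverge from the truly optimal 0/1 knapsack once a budget constraint binds.

```python
# Sketch: ranking-based funding vs. exhaustive 0/1 knapsack under one budget.
# Project figures are invented for illustration.
from itertools import combinations

projects = {                 # name: (ENPV $M, short-term clinical cost $M)
    "A": (190, 167), "B": (110, 45), "C": (101, 179),
    "D": (89, 36), "E": (79, 19), "F": (45, 15),
}
budget = 240

# Greedy funding in decreasing ENPV/cost order (the prioritisation approach).
ranked = sorted(projects, key=lambda p: projects[p][0] / projects[p][1],
                reverse=True)
greedy, spent = [], 0
for p in ranked:
    if spent + projects[p][1] <= budget:
        greedy.append(p)
        spent += projects[p][1]

# Exhaustive search over all feasible subsets (fine at this size).
best = max((s for r in range(len(projects) + 1)
            for s in combinations(projects, r)
            if sum(projects[p][1] for p in s) <= budget),
           key=lambda s: sum(projects[p][0] for p in s))

print("Greedy :", greedy, sum(projects[p][0] for p in greedy))
print("Optimal:", sorted(best), sum(projects[p][0] for p in best))
```

On these illustrative numbers the greedy pass funds the four best ratios, leaves over half the budget unspent, and forgoes the large project A, while the optimal portfolio funds A and delivers roughly 25% more ENPV; with several simultaneous constraints the divergence is typically larger.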
13.5 Portfolio Level Thinking

One of the weakest areas within pharma Portfolio Management, and an area where the tools in PDA may help, is with regard to the portfolio itself. Because most people in senior management have risen through the levels of sales, marketing, project management, clinical operations, basic science, manufacturing, animal studies, statistics, etc., they have focused primarily on the molecule, the individual drug, and its potential and liabilities. Rarely have they been asked to consider the portfolio of projects together, as an entity. Therefore, many strategic and portfolio-level meetings digress into project-level information – costs, risks, opportunities, schedules, and partners. It is always more comfortable for a manager to bring the issues back into his/her original area of expertise. PDA groups (Portfolio Management, Analytics, Project and Portfolio Management) have a responsibility to help lift the discussion to the next level – the portfolio level. If all measures presented are project-level measures, then the discussion will always be around projects. If portfolio-level measures, tied to portfolio-level objectives (and similarly tied to strategic objectives), are calculated and displayed as part of the analysis, the conversation will naturally focus on the portfolio.

Many measures which have meaning at the project level also exist and maintain their meaningfulness at the portfolio level, e.g., ENPV. Other measures do not exist at the portfolio level, such as launch date, and others only exist at the portfolio level, such as balance. In Table 13.5 below, we break down many measures being used by companies today in a prioritization/optimization environment and indicate which project measures are meaningful at the portfolio level, which are not, and how they would be calculated. We hope that by simply adding some key portfolio-level measures into the monthly, semi-annual, or annual portfolio discussions, an analyst will raise the level
Table 13.5 Comparing project and portfolio-level measures

NPV (project-level relevance: high). Project: non-risk-adjusted value. Portfolio: without project risk, this summation of non-risk-adjusted value is unrealistic.
ENPV (high). Project: risk-adjusted value. Portfolio: sum, if independent.
Total development cost (high). Project: non-risk-adjusted, non-discounted. Portfolio: without project risk, this summation is an overestimate of cost.
EDC (high). Project: risk-adjusted development cost. Portfolio: sum, if independent.
NPV post-launch (high). Project: development cost not included. Portfolio: without project risk, summation is an overestimation of value.
Peak year revenue (high). Project: revenue at peak of sales. Portfolio: since each project peaks in different years, this measure is meaningless.
Loss of patent (high). Project: length of marketing exclusivity. Portfolio: average length of marketing exclusivity may be helpful.
Productivity index (high). Project: ENPV/EDC. Portfolio: sum is meaningless; total portfolio ENPV/total portfolio EDC is meaningful.
Gap year revenue (high). Project: revenue in the company's identified gap year. Portfolio: without risk, summation is an overestimation.
Risk-adjusted gap year revenue (high). Project: PTRS × gap year revenue. Portfolio: summation is meaningful, if independent.
Launch date (high). Project: year of launch. Portfolio: not meaningful; the sum of risk-adjusted launches each year is helpful.
Project phase (high). Project: earliest phase NOT complete. Portfolio: not meaningful; the sum of projects in each phase is meaningful.
PTRS (high). Project: probability of technical and regulatory success. Portfolio: not meaningful – only meaningful at the project level.
TA balance. Project: which TA. Portfolio: sum of number of projects, ENPV, or another portfolio metric in each TA.
Molecule type balance. Project: which molecule type. Portfolio: sum of number of projects, ENPV, or another portfolio metric in each molecule type.
Phase balance. Project: which phase. Portfolio: sum of number of projects, ENPV, or another portfolio metric in each phase.
Innovation (high). Project: level of innovation. Portfolio: average level of innovation may be meaningful.
Unmet medical need (high). Project: UMN of the target disease. Portfolio: average portfolio UMN, or the balance of high/medium/low UMN, may be useful.
Optimal ENPV (project: N/A; portfolio-level relevance: high). Portfolio measure; deterministic results should then be simulated to obtain an uncertainty band.
Optimal gap year revenue (N/A; high). Portfolio measure; the deterministic method should be simulated to obtain an uncertainty band.
Probability of achieving objective (N/A; high). Portfolio measure; choose the portfolio which yields the highest probability of achieving the objective.
of discussion, will allow strategy to be more directly tied to portfolio decisions, will bring insight into the overall value, cost, and risk of the portfolio, and will allow for tracking of progress at the portfolio level. Table 13.5 should be valuable in assessing the types of measures being collected within a PDA organization and whether they are being aggregated appropriately or being used for the appropriate level of analysis, i.e., portfolio- or project-level analysis. We will now address several of the portfolio measures from Table 13.5, expanding upon the comments contained within it.

1. Portfolio Balance. Portfolio balance implies a correct level of resource allocation and a correct number of projects of certain types in the portfolio. Portfolio balance charts such as the one in Fig. 13.4 can drive tremendously valuable discussions. Why is Anti-Infectives (AI) more costly than Dermatology, compared to the expected net present value it will achieve? Central Nervous System (CNS) projects are not affecting the 2010 budget in proportion to the huge short-term revenue they are forecasted to bring in. Can we learn from CNS and from Dermatology? Are we doing something wrong in the way we develop AI drugs, and do we want to stay in AI long term? These are the questions which should be discussed, and they will be if analysis is presented at the portfolio level.
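As a small illustration of lifting project measures to the portfolio level, the sketch below aggregates ENPV and near-term budget by therapeutic area (TA) so that the shares can be compared in a balance discussion of the kind shown in Fig. 13.4; all figures are invented.

```python
# Sketch: aggregating project-level ENPV and near-term budget to TA level.
# All project figures are invented for illustration.
from collections import defaultdict

projects = [                  # (TA, ENPV $M, near-term budget $M)
    ("Anti-Infectives", 45, 60), ("Anti-Infectives", 30, 55),
    ("CNS", 120, 40), ("CNS", 95, 35),
    ("Dermatology", 70, 20),
]

enpv_by_ta, budget_by_ta = defaultdict(float), defaultdict(float)
for ta, enpv, budget in projects:
    enpv_by_ta[ta] += enpv        # ENPV sums across projects (if independent)
    budget_by_ta[ta] += budget

total_enpv = sum(enpv_by_ta.values())
total_budget = sum(budget_by_ta.values())
for ta in enpv_by_ta:
    print(f"{ta:15s} ENPV share {enpv_by_ta[ta] / total_enpv:5.1%}  "
          f"budget share {budget_by_ta[ta] / total_budget:5.1%}")
```

A TA whose budget share far exceeds its ENPV share (Anti-Infectives in this toy data) is exactly the kind of imbalance that should trigger the portfolio-level questions above.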
Fig. 13.6 Examining the gap year and the probability of achieving CAGR goal
stochastic optimization, a fairly sophisticated technique, but much more available in 2011 than just 5–10 years ago. When the analysis is completed over increasing funding levels, a very insightful graph can be constructed, allowing the decision makers to trade off budget against probability of achieving the strategic goal.
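To make the idea concrete, here is a minimal sketch in Python, with entirely hypothetical project data and an invented gap-year revenue goal: for each funding level it searches for the feasible portfolio that maximizes the Monte Carlo-estimated probability of meeting the goal, tracing out the budget-versus-probability curve described above. A real application would use a proper stochastic optimizer rather than exhaustive enumeration.

    import itertools
    import random

    # Hypothetical projects: (name, expected development cost, PTRS,
    # gap-year revenue if technically successful). All figures invented.
    projects = [
        ("A", 120, 0.25, 400), ("B", 80, 0.40, 250),
        ("C", 150, 0.60, 300), ("D", 60, 0.15, 500),
        ("E", 100, 0.35, 350),
    ]
    REVENUE_GOAL = 600   # hypothetical strategic gap-year revenue target
    N_SIMS = 10_000

    def prob_goal_met(portfolio):
        """Monte Carlo estimate of P(total gap-year revenue >= goal)."""
        hits = 0
        for _ in range(N_SIMS):
            revenue = sum(rev for _, _, ptrs, rev in portfolio
                          if random.random() < ptrs)
            hits += revenue >= REVENUE_GOAL
        return hits / N_SIMS

    def best_portfolio(budget):
        """Exhaustive search (fine for a handful of projects) for the
        affordable portfolio with the highest probability of meeting the goal."""
        best, best_p = (), 0.0
        for r in range(len(projects) + 1):
            for combo in itertools.combinations(projects, r):
                if sum(cost for _, cost, _, _ in combo) <= budget:
                    p = prob_goal_met(combo)
                    if p > best_p:
                        best, best_p = combo, p
        return best, best_p

    # Trace out the budget vs. probability-of-achieving-goal curve.
    for budget in (200, 300, 400, 500):
        picks, p = best_portfolio(budget)
        print(budget, [name for name, *_ in picks], round(p, 3))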
13.6 Treatment of Commercial Uncertainties

Commercial uncertainties (often called commercial risks) are usually identified by market researchers and marketing professionals. The main uncertainties fall consistently into the same categories.
• Unexpected competitors: Drug development is very risky for competitors as well. It is not surprising that some launches are unexpected, while other drug candidates thought less likely to succeed do reach the market. Gardasil was a surprise in its technical success and its speed to market, whereas the dual PPAR class of drugs was a very popular mechanism with great hopes of being effective against diabetes; only two are currently on the market, far fewer than most expected.
• The exact approved label is a target for the project team during drug development, but the European and US regulatory authorities have the final word, and often that translates into a more restricted patient population or a black box warning.
• The time to reach the market has two levels of uncertainty. Uncertainty in the length of development can cause problems with manufacturing, distribution, and cost. A late launch can also have a tremendous impact on sales, since the order of entry to market, within a class of drugs, strongly influences market share.
• Post-launch medical risks, usually due to adverse events in patients, have increasingly been in the news, and cause unplanned and unexpected impacts such as restricted labels, a hold on use of the drug, or being pulled from the market altogether. Examples of this type of uncertainty are Vioxx (pulled from the market), Tysabri (pulled but allowed to return to market), and Avandia (under investigation).
• The intellectual property owned by pharmaceutical companies is protected by several types of patents. But these patents, in the USA as well as in Europe, are constantly being challenged in court. A loss of 2 years on a 20-year patent can cause a loss of 25% of revenue, since most drugs have only 8–10 years of market exclusivity due to the long drug development process. Recent patent fights have involved Plavix® (Sanofi-Aventis and B-MS), Lipitor® (Pfizer), Risperdal® (J&J), and Concerta® (McNeil).
An organization must first identify the commercial uncertainties relevant to each drug candidate and the drivers of those uncertainties. For example, to assert that the market share of the new drug candidate is variable is only a start, and trying to assess the uncertainty without finding the key drivers is futile. One key driver may be whether a potential competitor succeeds in launching its product. This probability can be assessed, as can its effect on the product's market share. Missing a key launch window may be another driver of market share uncertainty, since the first-in-class product traditionally garners a larger share of the market even after otherwise equal products come on the market. The project manager or leader is usually the expert who will be able to estimate the probability of making or missing the launch window. Identifying the key drivers gives management the means to understand and, hopefully, manage the uncertainty. Assessment of the key uncertainty drivers requires rigorous elicitation techniques to help mitigate bias where it is identified. Using consistent language and confidence intervals during the assessment allows for future modeling and comparisons. Confidence bands of 80 or 90% can be understood by most people and can be easily communicated to managers (a small worked example follows the list below). While commercial uncertainties are quite large compared to those in other industries, the variability in the quality of uncertainty modeling is surprising. Many companies do not demand a rigorous treatment of the uncertainties and lose the chance to perform insightful risk assessment and defensive and opportunistic risk management. We estimate the prevalence of these deficiencies as follows:
• Less than half of pharmaceutical companies use forecasting models which allow for rigorous modeling of uncertainty.
• Less than half of the companies that have rigorous models capable of handling uncertainty actually conduct rigorous uncertainty assessment.
• Less than 25% of companies have or use valuation (P&L) models that rigorously account for uncertainty in costs or revenues.
• Less than 10% of companies use models which account for uncertainty in timing.
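As one concrete illustration of turning elicited percentiles into something a simulation can use, the sketch below applies the Swanson-Megill 30-40-30 discrete approximation to a hypothetical 10th/50th/90th percentile assessment. This is an assumption-laden stand-in for whatever elicitation encoding an organization actually uses, not a method this chapter prescribes.

    import random

    # Hypothetical elicited 10th/50th/90th percentiles for peak-year revenue ($M).
    p10, p50, p90 = 150.0, 400.0, 900.0

    def swanson_megill_sample():
        """Swanson-Megill 30-40-30 discrete approximation: sample the 10th,
        50th, and 90th percentiles with probabilities 0.3, 0.4, and 0.3."""
        u = random.random()
        if u < 0.30:
            return p10
        if u < 0.70:
            return p50
        return p90

    samples = [swanson_megill_sample() for _ in range(10_000)]
    print(f"mean peak revenue: ${sum(samples) / len(samples):.0f}M")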
Table 13.6 Example hurdles for four common measures

                                              EIRR     ENPV    NPV post-launch    PI
Therapeutic area within core strategy
  Pre-POC                                     13.3%    $89     $811               1.6
  Post-POC                                    18%      $300    $1,000             3
Therapeutic area not within core strategy
  Pre-POC                                     16.2%    $131    $1,112             2.8
  Post-POC                                    25%      $500    $1,500             6
The four project measures shown would be calculated for every project in the R&D portfolio as well as for each business development candidate (a potential for being licensed in from another company to be developed by this pharmaceutical company). The project measures would be compared to the appropriate hurdles for the category of the project (i.e., whether the project's targeted diseases are within the stated core therapeutic areas or outside of those areas, and whether the project is currently Pre-Clinical Proof of Concept (Pre-POC) or not). The Portfolio Management group would have worked with senior decision makers to establish the appropriate hurdles for each measure and category. The hurdles shown in the table were recommended for use as initial hurdles for entry into the portfolio for business development candidates, or for continuation into the next development phase for internal pipeline projects. Table 13.6 shows that the hurdles are significantly different for Pre-POC compared to Post-POC projects. How exactly would a company set such hurdles? In Figs. 13.9–13.12, there are four histograms built from historical data of projects being measured on these four measures during their development. From these distributions, and the company's desire to increase value in the pipeline and reinforce the core therapeutic areas (TAs) without completely barring large opportunities for drugs focused on non-core TAs, the hurdles are derived. Hurdles for projects in core TAs ($811M, the median) and non-core TAs ($1.12B, the 75th percentile) are identified. This is an example of choosing hurdles. It is clear that management is trying to aggressively improve the value of the portfolio by choosing very high hurdles for continuance. It should be emphasized that hurdles for entrance into a portfolio through purchasing the license from another company may be different from hurdles for internally owned and discovered compounds. The reason the median value of $811M NPV is chosen as a hurdle in this case is that management does not want to spend valuable budget dollars on a project which would, if successful, actually lower the median value of the portfolio. Similarly, a non-core TA project has to be significantly better in value (75th percentile) to warrant bringing in a project that is not aligned with strategy. Hurdles for projects in core TAs (13.3%, the median) and non-core TAs (16.2%, the 75th percentile) are similarly identified for EIRR, the expected internal rate of return.
Similar historical data can be used for Post-POC hurdles. Following these hurdles ties the decisions to quality standards for return on investment and total value. Setting different hurdles allows the decision makers to keep the same simple process yet differentiate between significantly different categories – e.g., biologics vs. small molecules, or Pre-POC vs. Post-POC projects. Finally, the previously stated yet unmeasured strategic focus is quantified and can be adjusted. Projects focused on strategically critical disease areas (in this example) are allowed to stay in or enter the portfolio with lower financial measures (easier hurdles) than projects in nonstrategic disease areas. Adjusting the hurdles can slowly adjust the balance between different categories and can slowly implement strategies which are easy to categorize but difficult to measure and implement.
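To illustrate the mechanics of category-dependent hurdles, the sketch below (Python) encodes the Table 13.6 figures and screens a made-up candidate project; the data structure and field names are purely illustrative.

    # Hurdle table keyed by (core TA?, Pre-POC?), mirroring Table 13.6.
    # Tuple order: (EIRR, ENPV $M, NPV post-launch $M, PI).
    HURDLES = {
        (True,  True):  (0.133, 89,  811,  1.6),   # core TA, Pre-POC
        (True,  False): (0.180, 300, 1000, 3.0),   # core TA, Post-POC
        (False, True):  (0.162, 131, 1112, 2.8),   # non-core TA, Pre-POC
        (False, False): (0.250, 500, 1500, 6.0),   # non-core TA, Post-POC
    }

    def passes_hurdles(project):
        """True if the project clears every hurdle for its category."""
        eirr_h, enpv_h, npv_h, pi_h = HURDLES[(project["core_ta"], project["pre_poc"])]
        return (project["eirr"] >= eirr_h and project["enpv"] >= enpv_h
                and project["npv_post_launch"] >= npv_h and project["pi"] >= pi_h)

    # A made-up non-core, Pre-POC business development candidate:
    candidate = {"core_ta": False, "pre_poc": True,
                 "eirr": 0.21, "enpv": 160, "npv_post_launch": 1250, "pi": 3.1}
    print(passes_hurdles(candidate))   # True: clears all four non-core Pre-POC hurdles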
13.8 Using a Multiple Objective Decision Analysis Approach to Prioritize Drug Development Projects

Organizations in completely different industries have faced difficult decisions such as:
• prioritizing bioterrorism threats (U.S. Department of Homeland Security, 2008)
• identifying the best military base closure candidates (U.S. Congress, 2005)
• funding new technologies for radioactive waste remediation (Grelk et al., 1998)
• prioritizing information technology projects within a medical devices company
In each case, the decision maker was asked to prioritize projects while achieving multiple, sometimes conflicting, objectives. This book contains a large number of chapters focused on, or containing references to, multiple objective/multiple criteria/multiple attribute methods for portfolio analysis. The pharmaceutical industry has also benefited from such methodologies. The multiple objective approaches presented to management vary widely, and each method can be implemented expertly or poorly. Very few analysts within the pharmaceutical industry have received formal education regarding the assumptions, pitfalls, benefits, and consistency of these methods, which causes the methods to appear either extremely simplistic or just another "black box" lacking transparency and defensibility. Instead of attempting in this one chapter to help the analyst select which multiple objective/multiple criteria approach is appropriate for his or her organization and analytical problem, we will present a case study of one method – Value-Focused Thinking. We will emphasize its use, the way in which management is involved, and some of the insights and results of the analysis. In over 18 years of using this approach, the results achieved in most organizations after properly implementing this method are as good as or better than those shown below. There are other methods for prioritizing projects with multiple conflicting objectives, including SAW (Simple Additive Weighting), AHP (Analytical Hierarchy Process), ANP
(Analytical Network Process), TOPSIS, ELECTRE, and outranking methods. The reader will find other multiple objective methods referenced in other chapters throughout this book. Many comparative studies have been published, including Buede and Maxwell (1995), Belton and Stewart (2001), and Figueira et al. (2005). Below is a case study indicating how a MODA (Multiple Objective Decision Analysis) approach can add insight and consistency to a prioritization effort within a drug R&D pipeline (see Fig. 13.13). We refer the reader to Keeney (1992) for a full description and defense of Value-Focused Thinking. The focus of this discussion is on the types of objectives that have been used in the pharmaceutical industry and some of the types of analysis which have helped decision makers prioritize early development compounds and projects. Many quite experienced decision makers, faced with the task of prioritizing projects or assets for the purpose of funding, cutting, or otherwise allocating resources, move directly to prioritizing the alternatives at hand by determining which project is adding the least or the most value to the organization. Ralph Keeney, however, says to first look at the values of the organization. Keeney affirms that "Values are what we care about … [they] should be the driving force for our decision making … [and] the basis for the time and effort we spend thinking about decisions." Once the company develops a structured view of its own objectives, projects or assets can be compared against these objectives to reveal value gaps. New projects or hybrid projects are often developed that help eliminate the value gaps. The final analysis is one of transparency and relevancy, bringing needed clarity to the decision maker.
13.9 Pharmaceutical Case Study

The management team was trying to prioritize their lead compounds, all in stages prior to acceptance into development. They knew that they had to focus on a few compounds but were unable to agree upon the prioritization. We proposed using a multiple objective approach to the prioritization. They had tried a multicriteria approach in the past, but not for this problem. Most were skeptical, but they agreed to try the approach. The steps taken by the team to build a quantification model were:
• Define objectives
• Organize an objectives hierarchy
• Define measures
• Define value functions
• Assign weights
The internal decision analyst facilitated the team of experts and managers in building a suitable model for the prioritization. After two three-hour-long discussions
Fig. 13.14 Example of an objectives hierarchy, weights, and a measure
surrounding the objectives of the Oncology therapeutic area, the team structured the objectives into a suitable hierarchy, with the top level being the key objectives that were relevant for this decision (see Fig. 13.14). With the objectives identified, the work of finding metrics that would align with these objectives began. The functional experts (oncologists, biologists, and marketing experts) assisted in developing measures which aligned best with the objectives. Keeney and Raiffa (1993) have developed a mathematically sound and defensible method for combining the performance scores into one overall number. The value number communicates the value of the program to the organization on a consistent scale. For example, if your objective were to fill an apparent project launch gap in 2018, a measure should be developed which takes this objective into account. Such a measure was "Launch Timing" (see Fig. 13.14). The difference in time from the 2018 gap year became the measure. Metrics must be tied to a key objective to show how they add value to the company. There may be a certain level of achievement in a metric which adds a disproportionate amount of value. Using the same example and Fig. 13.14, earlier launches are logically of greater value – an earlier launch leads to longer market exclusivity and a sooner start to the revenue stream produced by the new
value to the organization significantly (upper-left in Fig. 13.15). Project teams often have to make decisions about which indication (disease) should be tested and developed first, if the drug has the potential to help in more than one area. This sequencing of development alternatives is a common and important class of decisions that development leaders face. Multiple objective value can be deterministic or stochastic. Explicitly modeling uncertainty when significant uncertainty exists in the decision makers' minds is the correct approach. We differentiate between risk (the probability of technical failure due to safety, efficacy, manufacturing, or regulatory reasons) and uncertainty, which we define as variability in the estimates of value in each measure. The uncertainty of an estimate of a level of attainment in one measure propagates through the value function, the weights, and the aggregation to yield a distribution of final multiple objective value. We recommend selecting the most influential and variable measures for uncertainty inclusion; we select the measures by estimating the typical range of value within a measure and multiplying it by the weight of the measure. The measures with the potential for the largest swings in value should be assessed for uncertainty for every project or strategy being evaluated. It is very common to have discussions about risk vs. reward at this stage, based on either graphically or mathematically comparing technical risk with expected multiple objective value. Projects should be compared only with projects in a similar stage of development. Using multiobjective valuation with objectives they had chosen, and measures which reflected their preferences, the four decision makers in our example felt comfortable with the risk vs. reward view. Those with low value (left side of chart) and low POS (lower half of chart) should be closely examined (upper-right bubble chart in Fig. 13.15). It was painfully clear that one of the programs, with both low value and low probability of success, would be hard to support as a top priority. One of the key objectives within Novelty is First in Class. The measure associated with this objective categorized each program into first in class, second in class, or third or higher in class. The order of arrival in the marketplace is often used as a proxy for the innovation or novelty of the drug and of the company. From the bottom-left graph in Fig. 13.15, one can assess how balanced the portfolio is in its short-term spend, long-term spend (EDC), and overall projected financial value (ENPV) with respect to Novelty (First in Class). Portfolio optimization aims to maximize portfolio value within R&D constraints. As seen in the bottom-right graph in Fig. 13.15, an optimal value portfolio can be calculated for any budget level. From the multiple objective perspective, this means maximizing the total multiple objective value of the portfolio. Within the case study, management was able to look at the drug discovery and early development programs in a new light. Clarity for the decision makers was increased. The decision makers were confident that the valuation results consistently captured the key areas of concern, leading to valuable insight and appropriate action.
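The sketch below illustrates the mechanics just described: an additive multiattribute value model in the spirit of Keeney and Raiffa, with a piecewise-linear single-measure value function for launch timing and Monte Carlo propagation of uncertainty in that one high weight-times-swing measure. All weights, value-function shapes, and scenario probabilities are invented for illustration; they are not the case study's actual numbers.

    import random

    # Invented swing weights for three objectives (they must sum to 1).
    WEIGHTS = {"launch_timing": 0.40, "novelty": 0.35, "unmet_need": 0.25}

    def launch_timing_value(years_late):
        """Illustrative piecewise-linear value function: launching in the gap
        year scores 100; each year of delay loses 25 points, floored at 0."""
        return max(0.0, 100.0 - 25.0 * years_late)

    def total_value(scores):
        """Additive aggregation of 0-100 single-measure values."""
        return sum(WEIGHTS[m] * v for m, v in scores.items())

    def simulate(n=10_000):
        """Propagate uncertainty in the high weight-times-swing measure
        (launch timing) to a distribution of total multiobjective value."""
        vals = sorted(
            total_value({"launch_timing": launch_timing_value(random.choice([0, 1, 2])),
                         "novelty": 70.0,       # deterministic point estimates
                         "unmet_need": 55.0})
            for _ in range(n))
        return vals[n // 10], vals[n // 2], vals[9 * n // 10]

    print("10th/50th/90th percentiles of total value:", simulate())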
13.10 Portfolio-Level Thinking and Phase Balance

One of the weakest areas within pharmaceutical PDA is the level of thinking. So much effort and analysis is at the project level that too little effort is dedicated to thinking at the portfolio level. Because most in senior management have risen through the levels of sales, marketing, project management, clinical operations, basic science, manufacturing, animal studies, statistics, etc., they have focused primarily on the molecule – the individual drug – and its potential and its liabilities. Rarely have they been asked to consider the portfolio of projects together, as an entity. Therefore, many strategic and portfolio-level meetings digress into project-level issues, including the validity of data on costs, risks, opportunities, schedules, and partners. It is always more comfortable to discuss issues in your original area and level of expertise than to look across the entire portfolio and check for portfolio impact, portfolio goals, or portfolio gaps. By simply adding some key portfolio-level measures into the monthly, semiannual, or annual portfolio discussions, the level of discussion will rise to the portfolio or strategic level and will allow strategy to be more directly tied to portfolio decisions. The PDA leader can track the progress of the portfolio and bring insight into the overall portfolio value, cost, and risk.
1. Portfolio Balance was discussed earlier in this chapter and is being addressed by many large pharmaceutical companies. The major problem with modeling portfolio balance is that describing balance is quite straightforward given the measures and graphics discussed in this chapter, but prescribing the right balance is much more difficult. The first and most important step is to develop consistent measures aligned with the areas of balance considered most relevant for that company. Consistently tracking, reporting, graphing, and discussing should lead to an organizational understanding of the right balance. A type of portfolio balance not discussed in the earlier sections of the chapter is Phase Balance – sometimes called pipeline flow balance. Because of the risk and timing of projects in different phases, it is possible to estimate the number of projects, at any one time, needed to reach the company's project launch objectives and supporting R&D progression objectives. Many companies use the annual progression method. This method uses the probability-of-success benchmark for each phase to estimate how many projects per year need to succeed in each phase to provide a sustainable portfolio at the company's desired launch rate. For instance, with a desired launch rate of 2, the company would need 2 projects to succeed annually in the registration phase. The company would need 2/0.94 = 2.13 projects to succeed in Phase 3 and enter into the registration phase each year. Similarly, the company would need 2/(0.94 × 0.69) = 3.08 projects to succeed in Phase 2 each year and enter into Phase 3. The R&D organization does the same calculations for each phase within the pipeline, and these are often set as organizational and even personal goals for the year. These goals push each team to meet milestones but do not answer the question: if every team meets its goals, will we meet our strategic
Table 13.7 Actual projects vs. goals of projects to sustain two launches per year

                   PC      Phase 1   Phase 2   Phase 3   Reg      Total
Anti-infectives     2        1         1         2        1         7
Cardiovascular      2        4         2         1        0         9
CNS                 1        2         0         0        0         3
GI                  1        1         2         2        1         7
IMID/pulmonary      3        4         4         0        0        11
Metabolic           3        7         6         3        0        19
Oncology            4        0         0         2        0         6
Virology            3        2         1         0        1         7
WHC/urology         4        7         1         0        0        12
Phase totals       23       28        17        10        3        81
Months duration    10.5     16.8      36.6      36       15       104.4
Phase goals        22.1     23.3      29.0       9.2      2.7       2 (launches/yr)
                   in PC    in Ph 1   in Ph 2   in Ph 3   in Reg
POS for phase      0.66     0.57      0.32      0.69      0.94      0.12
goals? Often the answer is "no." This type of calculation does not directly help the organization gauge the pipeline on a daily basis; it is an annual look at the number of successes within a phase per year. A less used but just as simple an approach is the Work In Progress approach. One works backwards from the portfolio-level annual objective of number of launches (Table 13.7). Using either benchmark times and risks or a combination of estimated and benchmark times and risks (by phase), the portfolio organization calculates how many projects should be ongoing at any time during the year. This measure identifies the gaps and the overages, by phase and by therapeutic area. It is not tied to a calendar or fiscal year and can be used to drive business development efforts by disease, therapeutic area, and phase. For example, if the goal is 2 launches per year, a steady-state pipeline should have (2/0.94) × (15 months/12 months) = 2.66 ≈ 2.7 projects in the registration phase at any point. Similarly, to support 2.7 projects in the registration phase, the pipeline should have (2/(0.94 × 0.69)) × (36 months/12 months) = 9.25 ≈ 9.2 projects in Phase 3 at any given time. Using similar mathematics, the in-phase numbers can be calculated for each phase, and the sample pipeline in Table 13.7 can be compared against these "inventory" goals, which should provide the strategic launch goal of 2 per year. One does not have to wait for the end of the year to see the gaps and the opportunities – and to take action. Regarding the example in Table 13.7, it is clear that the company is doing an excellent job of filling the pipeline in order to achieve two new drug launches a year – except for Phase 2. A significant gap exists in Phase 2, with 29 projects needed and only 17 Phase 2 projects currently in the pipeline. Identifying this gap focuses the discussion where it should be: Why or how did this gap occur? What actions can we take to fill this gap in the near future?
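The Work In Progress arithmetic is easy to script. The sketch below (Python) reproduces the in-phase "inventory" goals from the phase POS and duration benchmarks echoed in Table 13.7, matching the 2.7, 9.2, and 29 figures above within rounding; the 2-per-year launch goal is the example's.

    # Work In Progress "inventory" goals, following the chapter's arithmetic.
    # POS and duration benchmarks echo Table 13.7; 2/year is the example's goal.
    LAUNCH_GOAL = 2.0   # desired launches per year

    phases = [   # (name, probability of success, duration in months)
        ("Registration", 0.94, 15),
        ("Phase 3",      0.69, 36),
        ("Phase 2",      0.32, 36.6),
        ("Phase 1",      0.57, 16.8),
        ("Preclinical",  0.66, 10.5),
    ]

    annual_entrants = LAUNCH_GOAL
    for name, pos, months in phases:
        # Annual entrants needed into this phase to sustain the downstream flow...
        annual_entrants /= pos
        # ...and the steady-state number of projects in the phase at any time.
        in_phase = annual_entrants * months / 12
        print(f"{name:>12}: {in_phase:5.1f} projects in phase at any time")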
investigated the main reasons for the uncertainty. The reasons, shown to the right of the project, should help management identify ways of improving the expected revenues, one project at a time. For instance, Compound 2C, an anticancer agent for multiple myeloma, could be first in its class if it can launch before the competition. Perhaps management, now seeing the critical part this project plays in the portfolio, can take action to ensure the timeliness of the project.
13.12 Treating the R&D Portfolio as Four Different Portfolios

The portfolio of a pharmaceutical company includes the discovery portfolio, the early development portfolio, the late development portfolio, and the life cycle management portfolio. All are included in the R&D effort. Each type of portfolio has different objectives, different costs, risks, and outputs, and even different "projects" (Table 13.8). Therefore, each portfolio may need a different method for analyzing the building blocks of its projects – time, cost, risk, and reward – which, in turn, will feed valuable information into the portfolio-level analysis (Table 13.9).
1. The discovery portfolio deals with programs where many possible chemical compounds or biological molecules could be produced from one program. The goal is to develop a chemical (or grow a biological molecule) which has the right scientific characteristics to indicate possible clinical success in at least one disease. Often, a backup molecule that is very close in structure to the "lead molecule" is developed. There is often a tension between resourcing the backup to develop faster and overtake the lead, and resourcing the lead to reduce time to market. A secondary compound is also often developed – a molecule with a different mechanism of action but designed to treat the same disease. Secondary and backup compounds are additional "shots on goal." Dependencies between the compound structures make estimating risks very challenging. Discovery portfolio decision making: Most discovery decision making is driven entirely by scientific tests and scientific reasoning. At the cell target selection level, a multiple objective model can help the scientist include strategy or the market in a consistent way in prioritizing which target to use to start a program. Single objective decision analysis using decision trees (or influence diagrams) is clearly called for in working with backup and secondary compound decisions.
2. The early development portfolio addresses individual compounds which need to be tested for actual effectiveness and safety concerns. A single compound may be effective for treating over 20 different diseases. Often the specific targeted disease has not been decided upon when entering this stage. One of the critical decisions in early development is the choice of which diseases to pursue with this chemical or biologic molecule. Before leaving early development, a project – a compound with a development program targeted at a specific disease – must have passed a battery of safety and efficacy (effectiveness) hurdles, both in the test tube, in
Table 13.8 Characteristics of the four portfolios in pharmaceutical R&D

Late discovery
  Elements of portfolio: Molecules and programs
  Time in portfolio (years): 2
  Cost: $5–10 M
  Probability of succeeding to next stage: 25%
  Reward: Extremely hard to estimate market, competition, pricing – little known about molecule characteristics
Early development
  Elements of portfolio: Molecules with multiple indications
  Time in portfolio (years): 2–4
  Cost: $20–150 M
  Probability of succeeding to next stage: 25%
  Reward: For each disease, 5–9 years before launch; competition is difficult; efficacy, safety liabilities, and side effects are unknown and are large revenue drivers
Late development
  Elements of portfolio: Molecules and a lead indication
  Time in portfolio (years): 3–5
  Cost: $50–400 M
  Probability of succeeding to next stage: 40%
  Reward: Characteristics better known; easier to estimate marketplace, pricing, and regulations
Life cycle management
  Elements of portfolio: Molecules and a follow-on indication
  Time in portfolio (years): 2–4
  Cost: $50–400 M
  Probability of succeeding to next stage: 60%
  Reward: Market pricing may be set by initial indication; commercial market is much better known; molecule characteristics already studied
Table 13.9 Proposed analytical approaches for the three main portfolios within a pharmaceutical R&D pipeline

Discovery portfolio (element: program; phases included: early and late discovery, up to optimized lead compound)
– Decision at portfolio level: level of resources (talent) to each program. Recommended analytics: model expected throughput for all programs; align resource allocation with strategic guidance/objectives. Frequency: build model once; update as needed; review resource allocation quarterly.
– Decision at portfolio level: backup and secondary compound support. Recommended analytics: develop risk, cost, and quality models for backups and secondary compounds; use strategic objectives to guide policy. Frequency: build model as new programs are created; run model and adjust resources semiannually.

Early development portfolio (element: compound; phases included: from optimized lead to proof of concept in humans, usually Phase 2)
– Decision at portfolio level: resources and budget based on compound development strategy selection. Recommended analytics: use a Multiple Objective Decision Analysis approach; NPV/ENPV is too uncertain to base key decisions on; prioritization and optimization needed. Frequency: update in real time; full review annually; review strategic alignment every 2 years.

Full development portfolio (element: indication; phases included: from POC success to registration, usually Phase 3 and registration)
– Decision at portfolio level: sequencing of indications for one compound. Recommended analytics: Multiple Objective Decision Analysis approach, ensuring dependencies are modeled. Frequency: usually a one-time modeling effort; revisit upon resolution of uncertainties.
– Decision at portfolio level: resources and budget based on indication development strategy. Recommended analytics: focus on financial portfolio analysis; optimization is useful if multiple strategies per project are available, the budget is inflexible, or an in-licensed project is being decided on; risk tolerance is key since very large investments must be made at this point. Frequency: update monthly; full review semiannually.
animals, and finally in humans. Showing efficacy in humans at a safe dose is considered proof of concept and marks the end of early development. Early development portfolio decision making: The wide variability and hard-to-quantify market acceptance and development issues make this the ideal portfolio for MODA, as discussed in the case study above. Activities such as consistent risk assessment and cost, time, and value tradeoffs help portfolio- and project-level decision making.
3. The late stage development portfolio consists entirely of experimental molecules which have proven themselves to be initially safe and efficacious in humans. The longest and most expensive hurdle remains the registration trials, in which companies test the molecules in a certain disease with a larger patient population to show safety and efficacy with both statistical and medical significance. Late stage development portfolio decision making: The dramatic reduction of unknowns due to testing, the increase in cost-at-risk, and the ability to financially evaluate the costs and rewards lead us to recommend single objective decision analysis, both for project-level stage-gate decision making and within a portfolio context. Portfolio objectives for this stage are financial in nature; achieving a level of revenue in a difficult year, fitting large and "lumpy" costs into a constrained budget, and maximizing long-term value for the stockholder all support a financial method.
4. The life cycle management portfolio is comprised of projects (molecule + disease). Each of these molecules has already been statistically and medically shown to help patients in at least one other disease and has been approved by the appropriate regulatory agencies. Where, in the early development portfolio, the lead disease is chosen out of a number of potential diseases, in the life cycle portfolio other diseases are chosen for testing for a molecule which is already being manufactured, marketed, and used in patients. The risk is much lower once proof of concept has been reached for this follow-on targeted disease. The four pillars of analysis of R&D projects are Risk, Cost, Timing, and Reward (or Value). As laid out in Table 13.8, each portion of the R&D pipeline has different general characteristics on the four pillars, which will help in determining what type of PDA tools will be most successful in adding clarity to the decisions at hand (Table 13.9).
13.13 Risk and Uncertainty in Pharmaceutical R&D Portfolios

The main characteristic of pharmaceutical R&D portfolios, one that almost everyone would recognize, is risk. We will focus on scientific and medical risk, as they typically play a more prominent role in the total uncertainty of a pharmaceutical
Fig. 13.21 A decision tree approach used by the Bristol-Myers Squibb Co. (presented at the Argyle Pharmaceuticals Leadership Forum, NYC, June 22, 2010)
within the company. Typically, a company will conduct a strategic planning exercise during the first quarter of the year, when external influences are examined and compared to last year's assumptions. Given agreement on the impact of external influences, strategic objectives are reevaluated and often new ones are established. With modified strategic objectives, the plan for reaching these objectives has to be modified. An assessment of the internal pipeline is conducted, with costs, risks, timing, and value all being analyzed. A strategic portfolio review then occurs, identifying the gaps between internal pipeline productivity and the new strategic goals. The gaps are normally the drivers for setting business development goals, adjusting the R&D budget (lower to achieve better bottom-line results, higher to achieve better pipeline production), and generally driving the short-term business plan for about 1–3 years in the future (see Fig. 13.21). As shown in Figs. 13.2 and 13.3, there are decision nodes missing from the typical representation of the project-level decision tree. Placing the decision nodes before and after the resolution of the appropriate uncertainties makes the representation more complex and more realistic. Additionally, if the actual alternatives are not modeled or represented, the organization will have a tendency to ignore alternatives such as out-licensing, changing the trial design, or being more aggressive with the development plan, and will continue to decide between only the Go/No-Go alternatives for the single momentum development alternative. One large pharmaceutical company, the Bristol-Myers Squibb Co., has developed and is implementing a more decision-oriented process for the stage-gate development of a project. In Fig. 13.21, we see how B-MS strategically placed the key
decision nodes within the development process. Although there are no uncertainty nodes in this company view of the process, the key uncertainties being resolved before each decision are outlined below. The research engine, which, in the case of B-MS, includes discovery, preclinical work, Phase 1, and Phase 2 POC (proof of concept), is followed by a significant decision involving the commitment of huge development resources. The decision is placed at this point, immediately following the clinical proof of concept, because the most useful information about the efficacy of the drug candidate that can be obtained with minimum resources will have just become available. A large part of the safety information will already have been gathered in Phase 1 and in the preclinical tests. After the decision regarding who will develop the drug, formulation issues must be solved, enough product must be produced, and Phase 2b (dosing) trials begin. The manufacturing decision shown in the center of Fig. 13.21 occurs only after the formulation issues have been solved and the correct doses are known. This decision involves not only manufacturing issues, which are very capital intensive, but also a decision to move forward with the costliest trials, Phase 3 (up to 10,000 patients). After Phase 3 trials are completed, the next key decision is who will commercialize the new drug (i.e., be responsible for sales and marketing). The tradeoff is the cost and effort of training and placing a competitive sales force vs. giving a percentage of the sales to a commercialization partner. This is the right time for the decision because the company can demand much more for commercialization once the efficacy and safety are known (Phase 3 results). The uncertainty inherent in the marketplace is also greatly reduced – as the market launch date grows near, the competitive landscape falls into place. This company is using the foundations of decision analysis to optimize its process:
1. Assess and use the value of information.
2. Make reasonable tradeoffs between cost and risk reduction through testing.
3. Wait to make a decision until it has to be made, so that (a) the maximum amount of information has been gained and (b) the company can keep downstream options open as long as possible.
The processes in Figs. 13.2, 13.3, and 13.21 are completely focused on the development and decision making for one R&D project. Within a portfolio there are tens, scores, or even hundreds of these projects progressing through the stage-gate process. Whether a company follows the process in Fig. 13.2 or in Fig. 13.21, decisions must be made to accommodate each project's extremely critical timeline, not an annual or semiannual business cycle. This presents a tension between the normal business cycles of strategic planning and business planning and the project development within portfolio planning. Monthly decision-focused governance meetings can provide the link between the two types of processes – project-oriented and business-oriented. The Portfolio Analysis organization can provide the appropriate information and analytical link.
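Foundation (1) is easy to make concrete. The toy calculation below (Python, with entirely hypothetical costs and payoffs) compares committing to Phase 3 before versus after a proof-of-concept readout that, for simplicity, is assumed to resolve technical uncertainty perfectly; the difference is the expected value of perfect information (EVPI) on technical success.

    # Toy value-of-information calculation for foundation (1). A go/no-go
    # commitment to Phase 3 is made before vs. after a POC readout that is
    # assumed, for simplicity, to resolve technical uncertainty perfectly.
    # All figures are hypothetical ($M).
    P_SUCCESS = 0.40      # probability the compound is truly efficacious
    PHASE3_COST = 200.0   # cost of the registration program
    PAYOFF = 900.0        # NPV of launch if the compound succeeds

    # Decide now, without the POC information:
    ev_go = P_SUCCESS * PAYOFF - PHASE3_COST            # 0.4*900 - 200 = 160
    ev_without_info = max(ev_go, 0.0)

    # Decide after the readout: commit only on a success signal.
    ev_with_info = P_SUCCESS * (PAYOFF - PHASE3_COST)   # 0.4*700 = 280

    evpi = ev_with_info - ev_without_info               # 120
    print(ev_without_info, ev_with_info, evpi)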
Fig. 13.22 Portfolio planning, business planning, and strategic planning cycles
The portfolio analysis organization supports the business planning and long-range planning processes with descriptive analytics ("Where are we now?" "What kind of portfolio balance do we have?") as well as prescriptive analytics ("Where should we place our limited resources to meet our strategic objectives?"). Armed with this knowledge and the resultant guidance of the company's leaders, the portfolio analysis group can relate the individual project decisions at hand to the accomplishment of the strategic objectives. The governance committees should have clarity not only about the value and risk of a project under several development alternatives, but also about how the project fits into the company's portfolio (balance, cost, timing, innovation, revenues), as well as which key decisions (and resource commitments) are approaching and vying for the same resources. Figure 13.22 is one view of the interaction and timing of an integrated portfolio management process. The portfolio analysis group can also serve as the quality control group for many of the measures used for project-level performance, including cost estimates, technical risk assessment (the likelihood of a project meeting technical hurdles during development), timing of milestones, uncertainty assessments, and valuation.
13.15 Summary

In this chapter, we have reviewed many tools and methods currently being used by internal PDA experts, managers within the R&D organization, and PDA-oriented consultants. The sophistication and understanding of PDA in pharmaceuticals have been steadily rising for the past 10 years. However, current practices still leave questions unanswered due to gaps in process, organizational structure, or the availability of adequately sophisticated methods. In the second section of this chapter, we addressed many of these gaps through slight modifications of current tools, or by bringing in methods from other industries which have been used successfully in PDA. These methods are intended to help the PDA professional add insight, bring clarity to the decisions at hand, and improve the productivity and success of the companies using them within the pharmaceutical industry.
References

Belton V, Stewart T (2001) Multiple criteria decision analysis. Kluwer Academic Publishers
Buede D, Maxwell D (1995) Rank disagreement: a comparison of multi-criteria methodologies. J Multi-Criteria Decis Anal 4(1–2):1–21
Figueira J, Greco S, Ehrgott M (eds) (2005) Multiple criteria decision analysis: state of the art surveys. Springer, New York
Grelk B, Kloeber J, Jackson J, Parnell G, Deckro R (1998) Making the CERCLA criteria analysis of remedial alternatives more objective. Remediation 8(2):87–105
Howard R (1988) Decision analysis: practice and promise. Manage Sci 34(6):679–695
Keeney R (1992) Value focused thinking: a path to creative decision-making. Harvard University Press, Cambridge
Keeney R, Raiffa H (1993) Decisions with multiple objectives. Cambridge University Press, Cambridge
Keisler J (2004) Value of information in portfolio decision analysis. Decis Anal 1(3):177–189
Keisler J (2011) Portfolio decision quality. In: Salo A, Keisler J, Morton A (eds) Portfolio decision analysis: improved methods for resource allocation. Springer, New York
Matheson D, Matheson JE (1998) The smart organization. Harvard Business School Press, Boston
McNamee P, Celona J (1990) Decision analysis with Supertree, 2nd edn. The Scientific Press, South San Francisco
Parnell GS, Bennett GE, Engelbrecht JA, Szafranski R (2002) Improving resource allocation within the national reconnaissance office. Interfaces 32(3):77–90
Salo A, Keisler J, Morton A (2011) An invitation to portfolio decision analysis. In: Salo A, Keisler J, Morton A (eds) Portfolio decision analysis: improved methods for resource allocation. Springer, New York
Sharpe P, Keelin T (1998) How SmithKline Beecham makes better resource-allocation decisions. Harv Bus Rev, March–April
Spilker B (2009) Guide to drug development: a comprehensive review and assessment. Lippincott, Williams and Wilkins, Philadelphia
Spradlin T, Kutoloski D (1999) Action-oriented portfolio management. Res Technol Manage 42(2):26–32
U.S. Congress (2005) Congressional Base Realignment and Closure Act
U.S. Department of Homeland Security (2008) Bioterrorism risk assessment: a call for change. Committee on methodological improvements to the Department of Homeland Security's biological agent risk analysis. National Research Council, National Academies Press
Chapter 14
Portfolio Decision Analysis: Lessons from Military Applications Roger Chapman Burk and Gregory S. Parnell
Abstract We review the use of portfolio decision analysis in military applications, such as weapon systems, types of forces, installations, and military research and development projects. We start by comparing military and commercial portfolio problems in general, discussing the distinguishing characteristics of the military decision environment: hostile and adaptive adversaries, a public decision process with multiple stakeholders, and high system complexity. We then list and summarize 24 military DA applications published from 1992 to 2010. We find that the most widespread and prominent feature of these applications is the careful modeling of value from multiple objectives. Mathematical optimization is not as common, but it can be important when a large number of interdependencies or side constraints makes it hard to find good feasible candidate portfolios. Quantitative methods of accounting for risk are surprisingly rare, considering the high level of uncertainty in the military environment. We analyze six of the applications in more detail, looking at how they model portfolio value calculation, swing weight assessment, constraints and dependencies, and uncertainty and risk. An appendix provides a recommended procedure for portfolio decision analysis based on the authors' experience and the applications reviewed.
14.1 Introduction This chapter reviews the practice of portfolio decision analysis in military applications. Most of the applications are US military applications. Many of the problems faced by military organizations are similar to those of civil and even commercial organizations: resource allocation, research and development (R&D), system
R.C. Burk
Department of Systems Engineering, US Military Academy, West Point, NY 10996, USA
e-mail: [email protected]
acquisition, logistics, personnel management, training, maintenance, communications, and so forth. The complexity of these problems often requires a decision analysis (DA) approach to find an implementable high-quality solution. Sometimes DA is needed not so much because of the complexity of the problem as because of the need for an objective and transparent rationale for the decision, to give all stakeholders confidence that their needs are being considered. In either case, we believe the techniques of portfolio decision analysis can lead to provably better results. In this chapter, we focus primarily on portfolio problems that are characteristic of, or unique to, the military, such as portfolios of weapon systems, types of forces, installations, or military R&D portfolios. In some military applications, the portfolio may be called the project list, the program, the budget, the system, the architecture, the system of systems, or the enterprise. We show how portfolio decision analysis has been used for such problems, and we summarize best practices in the US military. Section 14.2 compares military and commercial portfolio problems, highlighting both their similarities and their differences. Section 14.3 reviews published research on military portfolio decision analysis. Section 14.4 presents and compares six illustrative case studies to show the breadth of techniques used in military portfolio analysis, and evaluates this work by describing how each addressed the four key technical issues the analyst has to face when modeling a military portfolio problem. Section 14.5 summarizes and concludes the chapter. An appendix provides a synthesis of best military portfolio DA practices.
14.2 Comparison of Military Versus Commercial Portfolio Analysis Portfolio decision analysis in military and commercial applications has some similarities but some significant differences. Table 14.1 provides a comparison of portfolio decision applications in a military organization with their counterparts in companies in two of the industries that make significant use of decision analysis: petrochemicals and pharmaceuticals. There are many similarities: long time horizons for product development and marketing; high acquisition and lifecycle costs; and significant technical and schedule risk. However, the following characteristics of military applications make them unique: Hostile and adaptive adversaries. Oil drilling may take place in a hostile environment, but at least that environment merely operates according to the laws of nature, rather than actively seeking new and ingenious ways to destroy the rig, kill the drillers, and thwart the oil company’s plans, and it is not funded by a malign foreign power. This presence of a hostile and adaptive adversary adds an element of game theory and scenario building to the decision process, and portfolio selection has to consider both the deterrent effect and military effectiveness should deterrence fail.
Table 14.1 Comparison of portfolio application areas

Illustrative organizational objectives
  Petrochemicals: Increase shareholder value; provide energy for the nation and customers
  Pharmaceuticals: Increase shareholder value; improve health and quality of life
  Military: Provide defense capabilities for the national command authority to achieve national objectives; minimize casualties; reduce collateral damage; be cost effective
Major uncertainties
  Petrochemicals: Existence of resources at particular locations
  Pharmaceuticals: Causes of diseases; efficacy of competitor drugs; prevalence of future diseases
  Military: Future national, regional, and terrorist threats to national interests
Technology development uncertainties
  Petrochemicals: Effectiveness and efficiency of drilling and processing technologies; impact of drilling operations and products on the environment
  Pharmaceuticals: Efficacy of drugs; unwanted side effects of new drugs
  Military: Technology readiness to develop and produce future systems; R&D test failures in potential operational environments
Schedule uncertainties
  Petrochemicals: State, national, and international approvals to drill
  Pharmaceuticals: Success of trials; FDA approvals; international approvals
  Military: Testing success; acquisition approvals; Congressional funding authorizations
Cost uncertainties
  Petrochemicals: Energy demand, energy prices, technology development problems, environmental protection requirements, and schedule slips
  Pharmaceuticals: Products from competition, technical and schedule changes
  Military: Changes by adversaries, immature technologies, and schedule changes
Operating environment
  Petrochemicals: Natural environment
  Pharmaceuticals: Pharmaceutical laboratories; natural environment
  Military: Hostile natural and adversarial environment
(continued)
It also puts a premium on secrecy and security. The hostile environment makes human factors especially important, since levels of personal risk are often much higher than would be acceptable in civil applications. For safety reasons, it is also often difficult to test military systems (e.g., nuclear weapons) in realistic operational conditions. Defense decision process. The military decision process is subject to scrutiny by higher levels of command in the agency or service, by the Joint Staff, by the Department of Defense, by the Office of Management and Budget, by the
Table 14.1 (continued)

Strategic partnerships
  Petrochemicals: Mergers and acquisitions
  Pharmaceuticals: Mergers and acquisitions
  Military: Purchase of foreign systems; foreign military sales to offset costs and support strategic objectives
Intraorganizational resource competition
  Petrochemicals: Divisions
  Pharmaceuticals: Divisions
  Military: Services and defense agencies
Portfolio decision reviews
  Petrochemicals: Corporate and Board of Directors
  Pharmaceuticals: Corporate and Board of Directors
  Military: Military hierarchy, defense agency, Department of Defense, Office of Management and Budget, Congress
Government Accountability Office, by Congressional Committees, by the Office of the President, and by public advocacy organizations. Major system acquisition must follow legal processes and contracting procedures, including multiple formal reviews. Legal restrictions on the use of government funds must be respected, and the traditions and legally defined roles of the military services can have an effect on the decision. There is no equivalent to net present value (NPV) as the major decision criterion for defense decisions, since there is no revenue stream (though there is certainly a cost stream). Instead, there is a large and diverse set of stakeholders, each with a different idea of what military capabilities the system must provide. This leads to the need for a multiple objective approach, often with a large number of incommensurable and conflicting objectives. System complexity. Military systems are typically not only highly complex but also closely interconnected into enterprise architectures (supersystems) of even greater complexity, and they operate in every physical environment: land, sea, air, space, and cyberspace. These complex systems, many costing billions of dollars, take 5–15 years to develop and may be used (with modifications) for decades. The competition with an adaptive adversary puts a premium on having the latest technology, which increases both complexity and technical risk.
14.3 Review of Military Portfolio Decision Analysis We based our review of the military decision analysis literature on the literature search the second author conducted for a publication for the Military Operations Research Society (Parnell, 2007) and added more recent publications. Table 14.2
Table 14.2 Military portfolio decision applications
No. | Authors and year | Client (U.S. unless otherwise stated) | Military application | Analysis techniques
1 | Austin and Mitchell (2008) | UK Ministry of Defense | Equipment procurement | Decision conferencing; multiple objective value analysis; benefit vs. cost
2 | Baker et al. (2000) | Air Force Academy | Allocate resources to buy equipment | Multiple objective value analysis; sensitivity analysis
3 | Bresnick et al. (1997) | Joint Staff | Decide force mix for reconnaissance | Multiple objective value function; lifecycle costing
4 | Brown, Dell, and Newman (2004) | N/A (modeling paper) | Weapon system procurement | Optimization
5 | Brown, Dell, Holtz, and Newman (2003) | Air Force Space Command | Inform space systems investment planning | Multiple objective value analysis; optimization
6 | Buckshaw et al. (2005) | Joint Staff J6 | Insight into options for defense against different information network attacks | Multiple objective value analysis; risk analysis; attack trees; optimization; cost-benefit analysis
7 | Buede and Bresnick (1992) | Marine Corps and Army | Acquire and allocate resources | Multiple objective value analysis; resource-allocation techniques; decision conferencing
8 | Burk and Parnell (1997); Jackson et al. (1997); Rayno et al. (1997); Parnell et al. (1998); Parnell et al. (1999); Geis et al. (to appear) | Air Force | Prioritize future military system concepts and supporting technologies | Multiple objective value analysis; scenarios; Value-Focused Thinking to generate new alternatives
9 | Chang et al. (2010) | N/A (algorithms paper) | Weapon system procurement | Mean-variance optimization with multiple measures of effectiveness
10 | Davis et al. (2008) | Office of the Secretary of Defense | Prompt global strike systems; ballistic missile defense systems | Pareto analysis; scenarios; multiple attributes; cost vs. effectiveness; presentation methods
11 | Ewing et al. (2006) | Army | Base Realignment and Closure of Army installations | Multiple objective value analysis; optimization
12 | Greiner et al. (2003) | Air Force | Command, Control, Communications, Computer, and Intelligence (C4I) projects | Analytical Hierarchy Process (AHP); optimization; iteration with decision maker
13 | Hamill et al. (2002) | Defense Advanced Research Projects Agency and Air Force Institute of Technology | Reduce risk to DoD systems | Multiple objective value; risk analysis; value of information; optimization
14 | Jurk et al. (2004) | Force Protection Battlelab, Air Force | Rate innovative force protection ideas and concepts | Multiple objective value analysis
15 | Klimack and Kloeber (2000) | Army Training and Doctrine Command | Allocate time resources and validate basic combat training | Multiple objective value analysis
16 | Leinart et al. (2002) | National Air Intelligence Center | Analyze networks for command, control, communications, and computers | Graph theory; multiple objective value analysis
17 | Lindberg (2008) | Army | Critical infrastructure repair projects | Data Envelopment Analysis (DEA); probability of success; Pareto analysis
18 | Parnell et al. (2001) | National Reconnaissance Office | Allocate resources for research and development projects | Multiple objective value analysis; Pareto analysis; risk analysis
19 | Parnell et al. (2002) | National Reconnaissance Office | Allocate resources for operational support projects and services | Multiple objective value analysis; optimization
20 | Parnell et al. (2004) | Air Force Research Laboratory | Allocate resources for space system research and development projects | Multiple objective value analysis; Monte Carlo risk analysis; optimization; Value-Focused Thinking to generate new alternatives
21 | Stafira et al. (1997) | Air Force | Evaluate future systems to support counterproliferation of weapons of mass destruction | Influence diagram; decision tree; two-attribute utility function; value of control concepts; sensitivity analysis
22 | Trainor et al. (2007) | Army | Evaluation of the number of regions for the Army Installation Agency | Multiple objective value analysis; value versus manpower; Value-Focused Thinking to generate new alternatives
23 | Walmsley and Hearn (2004) | British Army | Armored combat support vehicles | Optimization
24 | Woodaman et al. (2010) | Joint Improvised Explosive Device (IED) Defeat Organization | Valuating counter-IED solutions | Multiple objective value analysis; technical risk analysis; time discounting
Table 14.2 summarizes the analyses we have found from 1992 to 2010. The great majority are from the USA; only three items in the table (1,9,23) are from other countries. The applications are quite diverse, including future concept analysis (8,12,14,21), research and development project selection (18,20), force mix (3,10), weapon system and equipment procurement (1,2,4,9,23), procurement and launch scheduling of space systems (5), communications and information (6,12,16), projects and services (7,15,17,19,24), and installations (11,22). By far the most common technique is multiple objective value analysis, which is prominent in most of the papers in the table. Optimization is the next most common technique, generally some form of binary programming to solve a knapsack-type problem: finding the best portfolio possible within a given budget. It is featured in almost half of the table items (4,5,6,9,11,12,13,19,20,23), though it is the major thrust in only three (4,9,23). The paper by Brown et al. (2004) is particularly useful: it provides a survey of formulations for this kind of optimization.
To address global uncertainties, two sets of studies use scenario analysis (8,10); others use probability trees (17,21) or risk analysis (6,13,18,20,24) to address technical or other uncertainties. Project cost figures in almost all the papers in the table; cost or cost-versus-benefit analysis is pre-eminent in three of them (3,6,10), and Pareto analysis (ordering the projects by their ratio of value to cost) is a major feature in five others (1,10,17,18,19). Four papers put special emphasis on interactions with the client (1,7,10,12), though others make some mention of it. Sensitivity analysis is used in many papers and is a major feature in two (2,21). Several other techniques appear only once in Table 14.2: attack trees (6), resource allocation (7), the Analytical Hierarchy Process (12), value of information (13), graph theory (16), Data Envelopment Analysis (17), Monte Carlo simulation (20), influence diagrams (21), and time discounting (24).

Table 14.2 thus represents two decades of military portfolio decision analysis publications. To generalize about this literature, it seems clear that the most important feature of military portfolio analysis is careful modeling of value from multiple objectives. Mathematical optimization is secondary, though it can be important, especially when interdependencies among the projects or external constraints such as space launch schedules make it hard to find good candidate portfolios by cost-benefit or Pareto ranking (Brown et al. 2003). Considering the many uncertainties in the military environment, it is perhaps surprising that explicit scenario analysis and other methods of accounting for uncertainty are not more commonly applied. Since cost overruns continue to bedevil defense system acquisitions (Meier 2009), we believe probabilistic modeling of cost could be a fruitful area for future military portfolio decision analysis research.
14.4 Six Case Studies in Military Portfolio Analysis

To illustrate the mainstream of military portfolio decision analysis, Table 14.3 provides more detail on six of the studies listed in Table 14.2. These six portfolio decision analysis case studies are very diverse. Their problem domains range from future system concepts to R&D to current systems and services to installations. All of the studies had multiple stakeholders at multiple levels of DoD and Congress. Some of the studies have been presented at the highest levels of the services or agency, DoD, and Presidential Commissions (BRAC). Some of the studies analyzed known alternatives, while others made explicit use of Value-Focused Thinking (Keeney 1992) to develop new alternatives. The constraints range from none to extremely complex space constellation and launch facility constraints. In three cases uncertainty and risk are not explicitly modeled, sometimes because the analysis was being done for one budget year at a time (Marine Corps, NRO OSO). Long-range studies (future concepts and R&D) and multiyear budget studies were more likely to model uncertainty explicitly. Optimization and Pareto ranking of value for money were the two methods of calculating portfolio value.
Table 14.3 Six military portfolio decision analysis case studies

Marine Corps Program Budget (Table 14.2 reference 7; published 1992). Problem domain: program prioritization for budget development. Decision maker(s): Marine Corps budget managers, Navy, and Congress. Stakeholders: Joint Commanders, Marine Corps leaders, Marines and their families. Projects evaluated: Marine Corps programs. Value model: Balance Beam technique used to holistically assess value. Project development: Marines. Constraints: budget. Uncertainty: not modeled. Portfolio value calculation: sum of project value. Usage: over 20 years.

Air Force 2025 Study (Table 14.2 reference 8; published 1998). Problem domain: future space system concept development and evaluation. Decision maker(s): Air Force Chief of Staff. Stakeholders: users of AF space system capabilities, Air Staff, DoD, and Congress. Projects evaluated: future air and space system concepts. Value model: project value with about 140 value measures. Project development: Air Force officers with Value-Focused Thinking (VFT). Constraints: not considered. Uncertainty: four scenarios with different weights. Portfolio value calculation: not calculated. Usage: methodology used on three major studies.

NRO Operational Support Office (OSO) Program Budget (Table 14.2 reference 19; published 2002). Problem domain: use of national reconnaissance products and services. Decision maker(s): Director, Operational Support Office. Stakeholders: users of national reconnaissance products in Joint Commands and all services. Projects evaluated: products, product support, training, liaisons, software application development. Value model: project value with 11 value measures. Project development: existing programs with some VFT. Constraints: budget and must-funds. Uncertainty: one project risk measure, but not used in value model calculations. Portfolio value calculation: optimization model to maximize the sum of project value. Usage: several years.

Air Force Space Command Investment (Table 14.2 reference 5; published 2003a). Problem domain: space and cyberspace system investment planning. Decision maker(s): Air Force Space Command, Air Force, and DoD. Stakeholders: users of AF space and cyberspace system capabilities, Air Staff, DoD, and Congress. Projects evaluated: space and cyberspace investment programs. Value model: portfolio value with 67 value measures. Project development: existing programs and future concepts. Constraints: budget, space system constellation, and space launch infrastructure. Uncertainty: not modeled. Portfolio value calculation: optimization model to ensure a feasible launch schedule, minimize cost, and maximize value over time. Usage: over 10 years.

Air Force Space Technology R&D (Table 14.2 reference 20; published 2004). Problem domain: space system R&D project evaluation. Decision maker(s): Commander, Air Force Research Laboratory, and Directorates. Stakeholders: Air Force Laboratory directors, acquisition program managers, and operational commands. Projects evaluated: space R&D projects. Value model: portfolio value with about 50 value measures. Project development: existing projects with a few new VFT projects. Constraints: multiyear budgets. Uncertainty: Monte Carlo simulation using triangular distributions on uncertain project scores. Portfolio value calculation: optimization model to evaluate the value of each set of projects. Usage: several years.

Army Base Realignment and Closure (BRAC) 2005 (Table 14.2 reference 11; published 2006). Problem domain: base realignment and closure (BRAC). Decision maker(s): Army, DoD, and the BRAC 2005 Presidential Commission. Stakeholders: communities, States, Congress, Major Commands, and Army. Projects evaluated: Army installations. Value model: installation value with 40 value measures. Project development: alternatives to close installations and move units, generated by Army leaders. Constraints: many portfolio capability requirements. Uncertainty: a key factor in the portfolio capability requirement studies. Portfolio value calculation: optimization model to ensure a feasible portfolio (and maximize the sum of installation value). Usage: one-time use for a multiyear study; may be used again.

a Updated to current practice (Tedeschi 2010, private communication).
There are four technical issues illustrated in these cases that deserve special discussion for military portfolio decision analysis: portfolio value calculation; swing weighting; constraints and dependencies; and uncertainty and risk.

The first issue is the portfolio value calculation. The most common quantitative value model for military decision analysis is the additive value model (Parnell 2007), but for portfolios there is the additional issue of how the value of individual projects is aggregated into a portfolio value. Three approaches are common: a project value model with no explicit calculation of portfolio value; portfolio value as the sum of project values calculated with a project value model; and a nonadditive portfolio value model. In the first approach, the project value model assesses the value of each individual project, and the results are used to rank the projects for the decision makers without necessarily defining the portfolio. The project value is plotted versus cost or technical difficulty, and decision makers use these insights to select promising projects to pursue. This approach is especially useful when decision-making authority is decentralized and the selection of projects to fund will not be made by one organization. Studies using this approach include Air Force 2025 in Table 14.3 (also Burk and Parnell 1997; Geis et al. to appear). In the second approach, the value of the portfolio is taken to be the sum of the project values of all projects in the portfolio. This is the easiest approach, but it assumes that the project values are additive. An example of this approach was the BRAC study in Table 14.3. This approach can be used with or without constraints. In the third approach, a value model defines the value of the portfolio, and each feasible set of projects, rather than the individual projects, is scored on the value measures. This approach is much harder to explain and to solve as a linear model. An example of a portfolio value model was the AFRL Space Technology R&D study in Table 14.3 (Parnell et al. 2004), which assessed the value of the total space technology portfolio. The scoring algorithm for each set of projects was the maximum (minimum) score of the component projects for an increasing (decreasing) value function. This portfolio value function was reasonable for a set of research projects all trying to achieve the same capability, and its special structure made it practical to use mathematical optimization to identify the best portfolios under budget constraints.

The second major technical issue is swing weight assessment in additive value models. Swing weighting was used in the majority of the publications in our literature survey. However, a pervasive issue in military decision analysis is the inappropriate use of importance weighting (Dillon-Merrill et al. 2008); in our experience this is due to a lack of understanding of the mathematics of multiple objective decision analysis. The Swing Weight Matrix (Parnell 2007) is an important technique to help decision analysts assess swing weights and explain the logic of the assessment to senior decision makers. It is described in the best-practices appendix below.

The third major technical issue is constraints and project dependencies. We group these two together because project constraints can be used to model project dependencies in an optimization model (Kirkwood 1997). If there is only one constraint, i.e., a budget, and the project values and costs are independent of whatever other projects are also in the portfolio, then the best modeling approach may be to rank the projects by value per cost without mathematical optimization.
If optimization is used, this is the famous knapsack problem with one constraint. Optimization can also be used to determine the maximum-value portfolio for a number of budget levels. However, if there are multiple constraints, and the values and/or costs of a project can vary based on what other projects are included, then an optimization model may be the best tool for calculating the maximum-value portfolio. Two examples of portfolio value models with large numbers of constraints are the Air Force Space Command Investment model (Brown et al. 2003) and the BRAC 2005 model (Ewing et al. 2006).

The last major technical issue is how to handle uncertainty and risk. For uncertainty, the most common military decision analysis techniques are scenarios, Monte Carlo simulation, and decision trees. For long planning horizons, scenario analysis can be used to assess the potential risks and opportunities under significantly different international security, economic, military, and technological assumptions. The Air Force has used scenario analysis for several studies on future system concepts, including the Air Force 2025 study in Table 14.3 (Parnell et al. 1998), as well as Spacecast 2020 (Burk and Parnell 1997) and Blue Horizons 2030 (Geis et al. to appear). Scenario analysis is especially useful if the scenarios span the strategic planning space. Monte Carlo analysis was used in the Air Force Space Technology R&D project to model the uncertainty about the performance of system concepts, identify risk drivers for projects, and assess the portfolio risk (Parnell et al. 2004). Monte Carlo analysis is especially useful for a large number of independent uncertainties. Decision trees were not used in the six case studies in Table 14.3, but Parnell et al. (2011) show how decision trees can be used with multiobjective decision analysis to assess uncertainties about technology performance and potential actions of adversaries. Decision trees are most useful for a small number of dependent uncertainties. The Monte Carlo and decision tree techniques are typically used for shorter planning horizons than scenario analysis.

As has been discussed in the introduction to this book, portfolio decision analysis is a social–technical process. Portfolio decision analysis for military applications is an especially challenging social–technical process due to the large number of decision makers and stakeholders. There are many opportunities to significantly improve military decision making (Parnell 2007). However, there are many pitfalls to avoid (Dillon-Merrill et al. 2008). The appendix to this chapter contains a synthesis of the most effective portfolio decision analysis techniques we have found. It is applicable to all areas of portfolio DA, not just military.
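The value-per-cost ranking and knapsack framing discussed above are simple enough to sketch in a few lines of code. The following Python fragment is a minimal illustration of the "rank by value per cost until the budget is exhausted" heuristic; the project names, values, costs, and budget are hypothetical and are not drawn from any of the cited studies.

```python
# Minimal sketch: ranking projects by value per cost under a single
# budget constraint. All project data are hypothetical illustrations.

projects = [
    # (name, project value on a 0-100 scale, cost in $M)
    ("Alpha", 80, 40),
    ("Bravo", 65, 20),
    ("Charlie", 50, 25),
    ("Delta", 30, 5),
    ("Echo", 20, 15),
]
budget = 60.0

# Order by "bang for buck": value per unit cost, highest first.
ranked = sorted(projects, key=lambda p: p[1] / p[2], reverse=True)

portfolio, spent = [], 0.0
for name, value, cost in ranked:
    if spent + cost <= budget:      # fund the project if it still fits
        portfolio.append(name)
        spent += cost

total_value = sum(v for n, v, c in projects if n in portfolio)
print("Funded:", portfolio, "cost:", spent, "value:", total_value)
```

As the text notes, this heuristic can cut off midway through an expensive project and ignores interdependencies, which is when binary programming becomes attractive.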
14.5 Conclusion

We obtained several insights from our research and from our professional experience as military decision analysts and continuing education instructors. The first and most important insight is that there are many portfolio decision analysis applications in the decision analysis literature, and many more in practice that are not published.
Second, almost all of the articles we reviewed, and most of the unpublished applications, use multiple objective decision analysis. None of the military application publications we found used single-objective decision analysis (e.g., NPV). Third, the vast majority of the articles use multiple objective value, which assesses returns to scale, instead of multiattribute utility, which uses lotteries to assess risk preference as well as returns to scale. Fourth, surprisingly few military applications explicitly model uncertainty using any of the three techniques presented in this chapter (scenarios, Monte Carlo simulation, and decision trees). We have identified the key decision analysis modeling decisions and have presented the alternative techniques we found in the literature review for these modeling decisions. Based on our literature review and our experience as military decision analysts, we have identified the best practices and proposed a portfolio decision analysis procedure that incorporates them (Appendix).
Appendix: A Recommended Procedure for Portfolio Decision Analysis

Portfolio decision analysis is a social–technical process. Portfolio decision analysis for military applications is an especially challenging social–technical process due to the large number of decision makers and stakeholders. There are many opportunities to significantly improve military decision making (Parnell 2007). However, there are many pitfalls to avoid (Dillon-Merrill et al. 2008). This appendix contains a synthesis of the most effective portfolio decision analysis techniques we have found. We believe the techniques are applicable to other areas of portfolio DA, not just military.
A.1 Portfolio Decision Analysis Marketing

The largest challenge in portfolio decision analysis is obtaining senior leadership approval and key stakeholder buy-in to use decision analysis. Several successful marketing approaches have been used; in consulting, the best marketing technique is to have previously done good work. The first case, and the easiest, is when the portfolio analysis is a continuation of similar analysis in the same organization; many portfolio analyses have been repeated for multiple years. The second case is when the decision maker, or a key champion in the organization, identifies a decision-making problem and seeks help from the decision analyst. The third case, and the most challenging, is when the decision analyst approaches the decision maker and convinces him or her that there is a problem that portfolio decision analysis will help solve. Once the portfolio decision analysis has been approved, there are several social and technical decisions to be made.
A.2 Key Social Decisions

The most important decision is how to engage the decision makers and key stakeholders. The two most successful approaches are the Dialog Decision Process (Spetzler 2007), called Interim Progress Reviews (IPRs) in military applications, and the Decision Conference (Phillips 2007). The IPR process involves regular interaction with the decision makers to ensure that the portfolio decision analysis will meet their needs; in this process, the decision analysts work with stakeholders between meetings with the decision makers. The decision conference approach gathers the key decision makers and stakeholders in one location for an extended period (hours or days) to perform and validate the portfolio analysis. In large organizations, several decision conferences can be conducted, and a "merge" decision conference then integrates the portfolio lists of the subordinate units. A second issue is the time available to perform the analysis, including the total calendar time and the availability of decision makers and key stakeholders. A third important issue is how to overcome resistance to change from individuals who are comfortable with the current portfolio decision process or concerned that a new process might reduce their resources.
A.3 Key Technical Decisions

The most important technical decisions the analyst must make in deciding how to model a given problem are:
• Identifying basic data: the decision makers (which may be a committee, another group, or several individuals in a hierarchical organization); the set of possible alternatives1 with their costs (acquisition and life cycle) and possible benefits; and the constraints on funding (research and development, acquisition, and operations and support) and on other resources that limit portfolio selection. Finding all these things is often far from trivial, but they must be found in order to perform portfolio analysis.
• Modeling value to the stakeholders, including how it is to be elicited, modeled, and verified.
• Developing screening criteria or requirements to ensure that all alternatives are acceptable.
• Ensuring that swing weights are understood and used.
• Identifying the programmatic constraints that ensure the analysis provides a balanced portfolio meeting the needs of customers across the key mission areas.
• Identifying improved alternatives and new alternatives.
1 Our experience is that it is surprisingly difficult to obtain a complete list of the alternatives, especially in organizations with projects at many classification levels.
• Modeling major uncertainties affecting the problem and the alternatives.
• Assessing interactions among the alternatives and the possible need for optimization software to find good feasible portfolios.
• Presenting results to the decision maker, including enough sensitivity analysis to make the results convincing.
A.4 Problem Formulation

We formulate the problem as follows: out of a set of alternatives, each with a cost and some potential beneficial results, we want to select the subset (the "portfolio") that is best overall, subject to budget limit(s) and programmatic constraints. This problem is to be distinguished from the allied problems of alternative evaluation (in which there is no portfolio) and of financial portfolio selection (with a single beneficial result, i.e., money). When a portfolio problem is hard enough to require decision analysis, it is generally for one or more of four reasons: (1) more alternatives than the budget can fund; (2) multiple and conflicting objectives; (3) major uncertainties; and (4) interactions among the alternatives. We discuss practical approaches to these difficulties in some detail. Other common difficulties, which we deal with more briefly, include identifying the alternatives; uncertain alternative costs; an uncertain overall budget; access to decision makers and key stakeholders; desires or constraints on completion time; multiple time periods; and multiple resource constraints.

For the problem of multiple and conflicting objectives, the approach of Value-Focused Thinking (Keeney 1992) recommends focusing on the values or objectives that the portfolio is supposed to fulfill, rather than on the alternative projects. Using multiobjective decision analysis, we recommend developing an additive value model, which provides an objective, transparent, and logical structure for giving a numerical measure of overall value for each project and for the portfolio as a whole. Such a model is made up of five parts: a value hierarchy, which describes and organizes the benefits desired; measures that quantify each benefit; ranges for each of the measures, from worst acceptable (or available) to best possible (or available); value functions that describe how value accumulates as one goes from low to high scores on each measure; and swing weights that show the relative value of full-range swings in each of the different measures. The value model must be based on values carefully elicited from the decision maker(s) and other important stakeholders. Measures can be direct (best) or proxy, and they can be natural (best) or constructed, depending on the time and data available. The resulting value model is usually an interval scale, meaning that only intervals have meaning, not absolute numbers or ratios of scores. The required assumption of the additive value model is that the value measures have mutual preferential independence: the assessment of the value function on one value measure does not depend on the scores of the other value measures. For further detail see Keeney and Raiffa (1976) or Kirkwood (1997).
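As a minimal sketch of the additive value model just described, the following Python fragment scores a hypothetical project on two value measures using piecewise-linear value functions over stated ranges and normalized swing weights. The measure names, ranges, weights, and scores are illustrative assumptions, not elements of any model in this chapter.

```python
# Sketch of an additive value model: piecewise-linear value functions
# over each measure's range, combined with normalized swing weights.
# All measures, breakpoints, weights, and scores are hypothetical.

def value_function(score, points):
    """Piecewise-linear interpolation through (score, value) breakpoints."""
    x0, v0 = points[0]
    if score <= x0:
        return v0
    for x1, v1 in points[1:]:
        if score <= x1:
            return v0 + (v1 - v0) * (score - x0) / (x1 - x0)
        x0, v0 = x1, v1
    return points[-1][1]

# Two value measures, each mapped onto a 0-100 value scale.
measures = {
    "coverage":   [(0, 0), (50, 60), (100, 100)],  # % covered, diminishing returns
    "timeliness": [(1, 100), (24, 0)],             # hours to deliver, fewer is better
}
weights = {"coverage": 0.7, "timeliness": 0.3}     # swing weights, already sum to 1

def project_value(scores):
    """Weighted sum of single-measure values: the additive value model."""
    return sum(weights[m] * value_function(scores[m], measures[m])
               for m in measures)

print(project_value({"coverage": 75, "timeliness": 12}))  # about 71.7
```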
A.5 Definition of Value

Our research found that military applications use value instead of utility, so we begin with the definition of value. Our definition of value is returns to scale on a measure of importance (Kirkwood 1997). Military value measures can have many names: outcome measures, performance measures, key performance parameters, effectiveness measures, efficiency measures, evaluation measures, measures of merit, and process measures, to name a few. For business decisions, value is typically measured in dollars (see the function on the left of Fig. A.1) and NPV. Each element of a cash flow has a value curve; for example, the number of units sold can be used to calculate revenue, and if there is no quantity discount the curve is linear. NPV is the value of a cash flow discounted to the present time. NPV is very useful for private companies and public organizations that can convert benefits to dollars. Normalized value instead uses a value function (shown on the right side of Fig. A.1) to convert the level on a measure to a normalized value. The most common total value calculation is the additive value model, which is the weighted sum of the value on each value measure (Parnell 2007). Normalized value is very useful for private and public decisions where the benefits cannot be converted to dollars.
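A short sketch contrasting the two yardsticks just described, dollar-denominated NPV and a normalized value function, may make the distinction concrete. The cash flow, discount rate, and value-function range below are invented for illustration.

```python
# Sketch: dollar-denominated NPV versus a normalized (unitless) value.
# The cash flow, discount rate, and measure range are all illustrative.

def npv(cash_flows, rate):
    """Discount a cash flow (year 0 first) back to the present."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

print(npv([-100.0, 40.0, 40.0, 40.0], rate=0.08))   # about 3.1 (dollars)

def normalized_value(score, worst, best):
    """Linear value function mapping [worst, best] onto [0, 1]."""
    return (score - worst) / (best - worst)

print(normalized_value(75, worst=0, best=100))      # 0.75 (unitless)
```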
A.6 Qualitative Value Model Development

For portfolio value models, we have found it very useful to first identify the functions that the portfolio (system, system of systems, or architecture) must perform (Parnell et al. 2011). For each function, we then identify the objectives we want to achieve for that function. Finally, for each objective we identify the value measures that can be used to assess the potential to achieve the objectives.
Fig. A.1 Definition of value. (Note: this figure uses linear curves, but in general value functions may well be nonlinear.)
In each application, we use the client's preferred terminology. For example, in military decision analysis, functions can be called missions, capabilities, activities, tasks, or other terms. Likewise, objectives can be called criteria, evaluation considerations, or other terms. Finally, value measures can be called any of the previously mentioned terms. The credibility of the qualitative value model is very important in portfolio decision analysis, since there are typically many stakeholders and multiple decision maker reviews. Parnell and his colleagues have developed four structured techniques for qualitative value modeling that improve model credibility, named the platinum, gold, silver, and combined standards.

Platinum standard. Decision analysts should strive to talk directly to the senior leaders who make and influence the decisions. A platinum standard value model is based on information from interviews with senior decision makers and key stakeholders. Affinity diagrams (Parnell et al. 2011) can be used to group the functions and objectives into logical mutually exclusive and collectively exhaustive categories. For example, interviews with senior decision makers and stakeholders were used to develop the Army's 2005 BRAC value model (Ewing et al. 2006).

Gold standard. When we cannot get direct access to senior decision makers, we look for alternative approaches. One alternative is to use a document approved by senior decision makers. A gold standard value model is developed from an approved vision, policy, strategy, doctrine, architecture, or capability planning document. For example, in the SPACECAST 2020 study, we used US Space Doctrine as the model's foundation. Many military acquisition programs use acquisition documents as a gold standard, since these documents define system functions and key performance parameters. Often the gold standard document supplies many of the functions and objectives and some of the value measures; if value measures are missing, we work with stakeholders to identify appropriate value measures for each objective.

Silver standard. Sometimes the gold standard documents are not adequate (not current or not complete) and we are not able to interview a significant number of senior decision makers and key stakeholders. As an alternative, the silver standard value model uses data from the many stakeholder representatives. Again, we use affinity diagrams to group the functions and objectives into mutually exclusive and collectively exhaustive categories. For example, inputs from about 200 stakeholder representatives were used to develop the Air Force 2025 value model (Parnell et al. 1998). This technique has the advantage of developing new functions and objectives that are not included in the existing gold standard documents. For example, at the time of the study the Air Force vision was Global Reach, Global Power; the Air Force 2025 value model identified the function Global Awareness (later changed to Global Vigilance), which was subsequently added to the qualitative value hierarchy.

Combined standard. Since it is difficult to interview all senior decision makers, and key documents are often not sufficient to completely specify a value model, the most common technique is the combined standard. We combine the
results of a review of several gold standard documents, findings from interviews with some senior decision makers and key stakeholders, and data from multiple meetings with stakeholder representatives. This technique was used to develop a space technology value model for the Air Force Research Laboratory Space Technology R&D portfolio (Parnell et al. 2001).

As a decision analyst, your first choice should always be to interview the senior decision makers and key stakeholders. In preparation for these interviews, you should research potential gold standard documents and talk to stakeholder representatives. However, changes in the environment and leadership may mean that a potential gold standard document no longer reflects leadership values; if you are going to use a gold standard document, you should confirm with senior leaders that it still reflects their values. We recommend that you interview the senior decision makers and stakeholders to ensure you understand their values and objectives, and that you supplement these interviews with focus group meetings with stakeholder representatives.
A.7 Swing Weighting

The swing weight matrix is a best practice for decision analysis (Parnell 2007). Its key concept is to define what we mean, in the decision context, by the importance of a value measure and by the impact of its range of variation on the decision. The idea of the swing weight matrix is straightforward: a measure that is very important to the decision should be weighted higher than a measure that is less important, and a measure that differentiates between alternatives, that is, one whose value measure range varies significantly, is weighted more than a measure that does not differentiate between alternatives. The first step is to create a matrix (Fig. A.2) in which the top defines the value measure importance and the left side represents the impact of the variation on the decision. The levels of importance and variation should be thought of as constructed scales that have sufficient clarity to allow the stakeholders to uniquely place every value measure in one of the cells. A measure that is very important to the decision and has a large variation in its scale goes in the upper left of the matrix (the cell labeled A).2 A value measure that has low importance and small variation in its scale goes in the lower right of the matrix (the cell labeled E). Since many individuals may participate in the assessment of weights, it is important to ensure consistency of the weights assigned. To illustrate the use of the swing weight matrix in a military decision analysis problem, in Fig. A.2 we have defined importance in terms of three mission levels: mission critical, mission enabling, and mission enhancing. In addition, the impact of the value measure range is the capability gap from the lowest value level to the ideal level.
2 Many people like to place very important, high variation in the upper right corner, or cell C3.
Fig. A.2 The elements of the swing weight matrix. Columns give the importance of the value measure to the decision; rows give the capability gap (the impact of the range of variation of the value measure on the decision):

Capability gap | Mission Critical (High) | Mission Enabling (Medium) | Mission Enhancing (Low)
Large          | A                       | B2                        | C3
Medium         | B1                      | C2                        | D2
Low            | C1                      | D1                        | E
It is easy to understand that a mission critical measure with a large capability gap (A) should be weighted more than a mission critical measure with a medium capability gap (B1). It is harder to trade off the weights between a mission critical measure with a low capability gap (C1) and a mission enabling measure with a large capability gap (B2). Weights should descend in magnitude as we move along the diagonal from the top left to the bottom right of the swing weight matrix. Multiple measures can be placed in the same cell, with the same or different weights. If we let the letters A through E represent the diagonals in the matrix, then A is the highest weighted cell, B the next highest weighted diagonal, then C, then D, and then E. For the swing weights in the cells of Fig. A.2 to be consistent, they need to meet the following relationships. If we denote by $i$ the label of a cell in the swing weight matrix and by $f_i$ the non-normalized swing weight of the value measures in that cell, then the following strict inequalities among the non-normalized swing weights must hold:

$f_A > f_i$ for all $i$ in all other cells
$f_{B1} > f_{C1}, f_{C2}, f_{D1}, f_{D2}, f_E$
$f_{B2} > f_{C2}, f_{C3}, f_{D1}, f_{D2}, f_E$
$f_{C1} > f_{D1}, f_E$
$f_{C2} > f_{D1}, f_{D2}, f_E$
$f_{C3} > f_{D2}, f_E$
$f_{D1} > f_E$
$f_{D2} > f_E$

In general, no other specific relationships hold. Once all the value measures are placed in the cells of the matrix, we can use any swing weight technique to obtain the non-normalized weights, as long as we follow the consistency rules cited above. In assigning weights, the stakeholders need to assess their tradeoffs between the level of importance and the impact on the decision of the level of variation in the measure scale. Two common approaches are the value increment approach (Kirkwood 1997) and the balance beam approach (Watson and Buede 1987).
Using the value increment approach, we assign the measure in cell A (the upper left-hand corner) an arbitrary large non-normalized swing weight, for example, 100 ($f_A = 100$). Next, we assign the lowest weighted measure, in cell E (the lower right-hand corner), an appropriately small swing weight, for example, 1. This means the swing weight of measure A is 100 times that of measure E. Since we need to differentiate the weights sufficiently, it is important to consider what the maximum in cell A should be; sometimes 1,000 is chosen instead of 100, and of course $f_E$ can be a number other than 1. If we use 100 and 1, the weights span two orders of magnitude; if we use 1,000 and 1, they span three. Non-normalized swing weights can then be assigned to all the other value measures relative to $f_A$ by descending through the very important measures, then through the important measures, then through the less important measures. We can normalize the weights so that they sum to 1 using

$$w_i = \frac{f_i}{\sum_{i=1}^{n} f_i},$$

where $f_i$ is the non-normalized swing weight assessed for the $i$th value measure, $i = 1$ to $n$ for the number of value measures, and $w_i$ is the normalized swing weight.
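A minimal sketch of this normalization, assuming one value measure per cell (in practice several measures may share a cell) and an illustrative, not prescribed, set of non-normalized weights, with the consistency inequalities listed above checked explicitly:

```python
# Sketch: non-normalized swing weights by matrix cell (illustrative
# values), checked against the consistency inequalities from the text,
# then normalized so the weights sum to 1.

f = {"A": 100, "B1": 70, "B2": 60, "C1": 40, "C2": 35, "C3": 30,
     "D1": 15, "D2": 10, "E": 5}

# Each cell must strictly dominate the cells listed for it.
dominates = {
    "A":  [c for c in f if c != "A"],
    "B1": ["C1", "C2", "D1", "D2", "E"],
    "B2": ["C2", "C3", "D1", "D2", "E"],
    "C1": ["D1", "E"],
    "C2": ["D1", "D2", "E"],
    "C3": ["D2", "E"],
    "D1": ["E"],
    "D2": ["E"],
}
for cell, lower in dominates.items():
    assert all(f[cell] > f[c] for c in lower), f"inconsistent weight in {cell}"

total = sum(f.values())
w = {cell: fi / total for cell, fi in f.items()}   # w_i = f_i / sum(f_i)
print(w)
```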
A.8 Modeling of Uncertainty

The decision analyst's basic tool for modeling decisions under uncertainty is the decision tree (DT), which is commonly used and readily understood. However, DTs are more complex with multiattribute problems, and when they are used for decision recommendation, the use of expected value alone assumes risk neutrality on the part of the decision maker, which frequently does not hold. If risk seeking or risk aversion is significant, there are two possible approaches. The first is to produce a risk profile (using Monte Carlo simulation or the decision tree) for each alternative and for the portfolio, and let the DM apply his or her own subjective risk attitude. The second is to use multiattribute utility theory (MAUT) to develop a model that incorporates the DM's risk attitude as well as his or her values; this is difficult, time consuming, and complex, since lottery questions must be used for each value measure, and we have not found it to be commonly used.

For large portfolio problems, we believe it is best to proceed as follows. For overarching uncertainties caused by long time horizons, e.g., the future geopolitical situation, generate different versions of the value model with different swing weights reflecting the differing importance of measures in different scenarios, find "optimal portfolios" for each, and select the programs that are in all of the portfolios plus some additional programs as hedges for each scenario.
For risks in project outcome, elicit probabilities and outcomes for three or four cases of each risky alternative (e.g., complete failure, poor outcome, most likely outcome, and best outcome), find several good portfolios based on the most likely outcomes, use Monte Carlo simulation to generate distributions of combined output value for each portfolio, and make the final choice based on the distributions. One of the most important challenges is how to deal with systemic uncertainty (which may affect many alternatives) as opposed to uncertainty about an alternative's scores on a value measure due to uncertainty about how the technology will develop. It may be necessary to elicit subjective probabilities of future events (geopolitical, economic, or technical) from subject matter experts. This should be done with great care and caution, because of experts' poor record in making such estimates.
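A minimal sketch of the Monte Carlo step described above, using triangular distributions on uncertain project values for a hypothetical fixed portfolio (the AFRL study cited earlier used triangular distributions on project scores; the projects and parameters here are invented):

```python
# Sketch: Monte Carlo risk profile of a fixed portfolio. Each project's
# value is uncertain, modeled with a triangular (low, mode, high)
# distribution; portfolio value is taken as the sum of project values.
# All projects and parameters are hypothetical.
import random
import statistics

portfolio = {          # name: (low, most likely, high) project value
    "Alpha":   (40, 60, 90),
    "Bravo":   (10, 30, 40),
    "Charlie": (20, 25, 55),
}

def draw_portfolio_value(rng):
    # random.triangular takes (low, high, mode)
    return sum(rng.triangular(lo, hi, mode)
               for lo, mode, hi in portfolio.values())

rng = random.Random(42)                     # fixed seed for reproducibility
samples = sorted(draw_portfolio_value(rng) for _ in range(10_000))
print("mean:", statistics.fmean(samples))
print("5th-95th percentile:", samples[500], samples[9500])
```

The resulting distribution is the risk profile the DM can inspect directly, rather than relying on a single expected value.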
A.9 Project Interactions

The most straightforward way to assemble the portfolio is to accept projects in order of decreasing value divided by cost until the budget is exhausted. Often this simple approach is best. However, it may result in the budget cutting off midway through an expensive project. Also, there may be logical constraints (e.g., alternatives A and B are mutually exclusive), interactions in cost (e.g., the cost of C and D together is less than the cost of C by itself plus the cost of D by itself), interactions in value (e.g., the value of E and F together is different from the value of E by itself plus the value of F by itself), or probabilistic covariance in outcomes. All these problems can be addressed by formulating a suitable binary optimization program, which can be solved using Excel's Solver or other commercial software. However, this approach should be used with caution. The math program can quickly become too large to really understand or to explain to senior decision makers and stakeholders. The resulting optimal portfolios can be fragile, in the sense that they can change drastically with only a small change in the data (for instance, a little additional budget can result in an alternative being deleted from the portfolio, which is very hard to explain to that alternative's proponent). Finally, if the problem is very large (hundreds of alternatives), it can take a long time (hours or days) to solve.
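For small problems, the binary program described above can be sketched without solver software by enumerating all project subsets; real applications would use Excel's Solver or a mixed-integer programming package instead. The data below, the mutual-exclusivity constraint, and the value interaction are hypothetical instances of the kinds of interactions listed in the previous paragraph.

```python
# Sketch: exhaustive search over all project subsets (feasible only for
# small n; larger problems need binary programming solvers). Illustrates
# a budget limit, a mutual-exclusivity constraint, and a value synergy.
from itertools import combinations

value = {"A": 80, "B": 65, "C": 50, "D": 30}
cost = {"A": 40, "B": 20, "C": 25, "D": 5}
budget = 60

def portfolio_value(subset):
    v = sum(value[p] for p in subset)
    if "C" in subset and "D" in subset:
        v += 10                 # C and D together are worth more than apart
    return v

def feasible(subset):
    if sum(cost[p] for p in subset) > budget:
        return False
    return not ("A" in subset and "B" in subset)   # A, B mutually exclusive

best = max(
    (frozenset(s) for r in range(len(value) + 1)
     for s in combinations(value, r) if feasible(s)),
    key=portfolio_value,
)
print(sorted(best), portfolio_value(best))   # ['B', 'C', 'D'] 155
```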
A.10 Summary: A 15-Step Procedure for Portfolio Decision Analysis

While every portfolio decision analysis will be different, we recommend you consider the following steps, which make use of the best practices described in this chapter.

1. Interview key decision makers and stakeholders to ensure that you fully understand the social (organizational and political) and the technical issues.
2. Select the Dialog Decision Process or the Decision Conferencing approach for interacting with the key decision makers and stakeholders.
3. Gather basic information about the problem: key documents, list of existing alternatives, alternative costs, budget constraints, programmatic constraints, uncertainties, interactions.
4. Determine the major modeling issues: multiple and conflicting objectives, uncertain events, interactions, or something else? Concentrate effort in the area(s) of major issues.
5. Decide which approach to use to develop the qualitative value model: platinum, gold, silver, or combined.
6. Develop the value model using Value-Focused Thinking by interacting with stakeholder representatives.
7. Vet the qualitative value model with key decision makers and stakeholders.
8. Develop the quantitative value model (usually an additive value model).
9. Use the swing weight matrix to assess the weights assigned to each value measure.
10. Compare the set of alternatives to the values and look for gaps and opportunities to identify better alternatives.
11. Develop new alternatives to fill the gaps and take advantage of opportunities.
12. Assess major risks (systemic and alternative) and uncertainties. For multiple risks over time, construct a decision tree and elicit outcomes and probabilities for representative discrete cases. For longer time horizons, develop scenarios that span the strategic planning space and assess different versions of the value model with different swing weights.
13. Assess interaction problems and the number of constraints. If minor, select a portfolio using a "bang for buck" heuristic (value per dollar) and adjust it manually. With more constraints, use an optimization model.
14. Perform a sensitivity analysis on important modeling assumptions and uncertainties.
15. Display the results graphically in a way that highlights the value of the portfolio and the remaining value gaps.
References

Austin J, Mitchell IM (2008) Bringing value focused thinking to bear on equipment procurement. Mil Oper Res 13(2):33–46
Baker SF, Green SG, Lowe JK, Francis VE (2000) A value-focused approach for laboratory equipment purchases. Mil Oper Res 5(4):43–56
Bresnick TA, Buede DM, Pisani AA, Smith LL, Wood BB (1997) Airborne and space-borne reconnaissance force mixes: a decision analysis approach. Mil Oper Res 3(4):65–78
Brown GG, Dell RF, Holtz H, Newman AM (2003) How US Air Force Space Command optimizes long-term investment in space systems. Interfaces 33(4):1–14
Brown GG, Dell RF, Newman AM (2004) Optimizing military capital planning. Interfaces 34(6):415–425
Buckshaw DL, Parnell GS, Unkenholz WL, Parks DL, Wallner JM, Saydjari OS (2005) Mission oriented risk and design analysis of critical information systems. Mil Oper Res 10(2):19–38
Buede DM, Bresnick TA (1992) Applications of decision analysis to the military systems acquisition process. Interfaces 22(6):110–125
Burk RC, Deschapelles C, Doty K, Gayek JE, Gurlitz T (2002) Performance analysis in the selection of imagery intelligence satellites. Mil Oper Res 7(2):45–60
Burk RC, Parnell GS (1997) Evaluating future space systems and technologies. Interfaces 27(3):60–73
Chang T-J, Yang S-C, Lin T-L, Chang K-J (2010) Heuristics approach for portfolio selection with military investment assets. J Chung Cheng Inst Technol 39(1):97–112
Davis PK, Shaver RD, Beck J (2008) Portfolio-analysis methods for assessing capability options. RAND National Defense Research Institute, Santa Monica
Department of Defense (DoD) (2006) Risk management guide for DoD acquisition, 6th edn., version 1.0. Defense Acquisition University, Fort Belvoir
Dillon-Merrill RL, Parnell GS, Buckshaw DL, Hensley WR, Caswell DJ (2008) Avoiding common pitfalls in decision support frameworks for Department of Defense analyses. Mil Oper Res 13(2):19–31
Ewing PL Jr, Tarantino W, Parnell GS (2006) Use of decision analysis in the Army Base Realignment and Closure (BRAC) 2005 military value analysis. Decis Anal 3:33–49
Geis JP II, Parnell GS, Newton H, Bresnick TA (to appear) Blue horizons study assesses future capabilities and technologies for the United States Air Force. Interfaces
Greiner MA, Fowler JW, Shunk DL, Carlyle WM, McNutt RT (2003) A hybrid approach using the Analytic Hierarchy Process and integer programming to screen weapon systems projects. IEEE Trans Eng Manage 50:192–203
Haertling KP, Deckro RF, Jackson JA (1999) Implementing information warfare in the weapon targeting process. Mil Oper Res 4(1):51–65
Hamill TJ, Deckro RF, Kloeber JM, Kelso TS (2002) Risk management and the value of information in a defense computer system. Mil Oper Res 7(2):61–82
Jackson JA, Parnell GS, Jones BL, Lehmkuhl LJ, Conley H, Andrew J (1997) Air Force 2025 operational analysis. Mil Oper Res 3(4):5–21
Jurk DM, Chambal SP, Thal AM Jr (2004) Using value-focused thinking to select innovative force protection ideas. Mil Oper Res 9(3):17–30
Keeney RL (1992) Value-focused thinking: a path to creative decisionmaking. Harvard University Press, Cambridge
Keeney RL, Raiffa H (1976) Decision making with multiple objectives: preferences and value tradeoffs. Wiley, New York
Kirkwood CW (1997) Strategic decision making: multiobjective decision analysis with spreadsheets. Duxbury, Pacific Grove
Klimack WK, Kloeber JM Jr (2000) A multi-attribute preference theory assessment of US Army basic combat training program of instruction. Technical Report 2000–02, Center for Modeling, Simulation and Analysis, Air Force Institute of Technology, Wright-Patterson Air Force Base
Leinart JA, Deckro RF, Kloeber JM Jr, Jackson JA (2002) A network disruption modeling tool. Mil Oper Res 7(1):69–77
Lindberg TJ (2008) The critical infrastructure portfolio selection model. Master of Military Art and Science thesis, US Army Command and General Staff College, Fort Leavenworth
Meier SR (2009) Causal inferences on the cost overruns and schedule delays of large-scale US federal defense and intelligence acquisition programs. Proj Manage J 41(1):28–39
Parnell GS (2007) Value-focused thinking. In: Loerch A, Rainey L (eds) Methods for conducting military operational analysis. Military Operations Research Society, Alexandria
Parnell GS, Bennett GE, Engelbrecht JA, Szafranski R (2002) Improving customer support resource allocation within the National Reconnaissance Office. Interfaces 32(3):77–90
Parnell GS, Burk RC, Schulman A, Westphal D, Kwan L, Blackhurst J, Verret P, Karasopoulos H (2004) Air Force Research Laboratory space technology value model: creating capabilities for future customers. Mil Oper Res 9(1):5–18
Parnell GS, Conley HW, Jackson JA, Lehmkuhl LJ, Andrew JM (1998) Foundations 2025: a framework for evaluating future air and space forces. Manage Sci 44:1336–1350
Parnell GS, Driscoll PJ, Henderson DL (eds) (2011) Decision making for systems engineering and management, 2nd edn. Wiley Series in Systems Engineering, Andrew P. Sage (ed), Wiley, New York
Parnell GS, Gimeno BI, Westphal D, Engelbrecht JA, Szafranski R (2001) Multiple perspective R&D portfolio analysis for the National Reconnaissance Office's technology enterprise. Mil Oper Res 6(3):19–34
Parnell GS, Jackson JA, Burk RC, Lehmkuhl LJ, Engelbrecht JA Jr (1999) R&D concept decision analysis: using alternate futures for sensitivity analysis. J Multi-Crit Decis Anal 8:119–127
Phillips LD (2007) Decision conferencing. In: Edwards W, Miles R, von Winterfeldt D (eds) Advances in decision analysis. Cambridge University Press, UK
Rayno B, Parnell GS, Burk RC, Woodruff BW (1997) A methodology to assess the utility of future space systems. J Multi-Crit Decis Anal 6:344–354
Spetzler C (2007) Building decision competency. In: Edwards W, Miles R, von Winterfeldt D (eds) Advances in decision analysis. Cambridge University Press, UK
Stafira S Jr, Parnell GS, Moore JT (1997) A method for evaluating military systems in a counterproliferation role. Manage Sci 43:1420–1430
Trainor TE, Parnell GS, Kwinn B, Brence J, Tollefson E, Downes P (2007) The US Army uses decision analysis in designing its installation regions. Interfaces 37(3):253–264
Walmsley NS, Hearn P (2004) Balance of investment in armoured combat support vehicles: an application of mixed integer programming. J Oper Res Soc 55:403–412
Watson SR, Buede DM (1987) Decision synthesis: the principles and practice of decision analysis. Cambridge University Press, Cambridge, UK
Woodaman RFA, Loerch AG, Laskey KB (2010) A decision analytic approach for measuring the value of counter-IED solutions at the Joint Improvised Explosive Device Defeat Organization. GMU-AFCEA Symposium: critical issues in C4I. Available via Internet. http://c4i.gmu.edu/events/reviews/2010/papers/Woodaman Valuation of Counter-IED Solutions.pdf
Chapter 15
Portfolio Decision Analysis for Population Health

Mara Airoldi and Alec Morton
Abstract In this chapter, we discuss the application of Multi-Criteria Portfolio Decision Analysis in healthcare. We consider the problem of allocating a limited budget to healthcare for a defined population, where the healthcare planner needs to take into account both the state of ill-health of the population, and the costs and benefits of providing different healthcare interventions. To date, two techniques have been applied widely to combine these two perspectives: Generalized Cost Effectiveness Analysis and Program Budgeting and Marginal Analysis. We describe these two approaches and present a case study to illustrate how a simple, formal Multi-Criteria Portfolio Decision Analysis model can help structure this sort of resource allocation problem. The case study highlights challenges for the research community around the use of disease models, capturing preferences relating to health inequalities, unrelated future costs, the appropriate balance between acute and preventive interventions, and the quality of death.
M. Airoldi, Department of Management, London School of Economics and Political Science, London, UK; e-mail: [email protected]

15.1 Background

Many public sector planners have responsibility for defined populations, and work in environments where key dimensions of performance are hard to measure, values are contested, and decisions have to be negotiated between stakeholders within and beyond the organization, and with the general population. This is particularly true of healthcare. Despite the existence of a vast medical evidence base, interpreting that evidence in the context of a particular population is not straightforward; tradeoffs between different sorts of treatments for different sorts of patients inevitably arise;
and the professional status of healthcare workers and the intense public interest mean that making decisions unilaterally behind closed doors is not regarded as acceptable. In this chapter, we outline the challenges of applying Portfolio Decision Analysis to maintain and improve population health. In some healthcare systems, the scope of such planning will be limited to public health interventions, as primary and acute care will be delivered by organizations (such as insurers) which do not have responsibility for a defined population, but which compete for business in a market. In other systems, such as the English National Health Service, with which we are particularly familiar, almost all healthcare is delivered (at the time of writing) by geographically defined health authorities called "Primary Care Trusts."

By way of background, healthcare planning is dominated by two communities of analytic professionals, which represent two different perspectives on the meaning of "healthcare need," labeled "humanitarian" and "realist" (Acheson 1978). A difficulty in healthcare planning is integrating these two perspectives.

On one hand, public health analysts tend to take a "humanitarian" perspective. In this perspective, the focus of analysis is the identification and measurement of existing suffering. The analysis usually takes the form of "needs assessment," a snapshot of the health status of the population under investigation in terms of disease prevalence and mortality rates. Needs assessment is often very revealing: psychiatric morbidity surveys reveal very substantial untreated mental illness, and rising rates of obesity in the UK herald higher rates of diabetes, heart disease, and so on in later life. At the same time, however, needs assessment is not enough by itself, as many conditions (e.g., chronic conditions) generate substantial morbidity but little can practically be done about them.

On the other hand, health economists take a "realist" perspective, focusing on what could be done to improve health and on whether the cost of doing so is affordable. In this perspective, choice between particular interventions, such as surgical procedures or pharmaceuticals, takes centre stage. Health economists assess the value and affordability of an intervention through "incremental cost effectiveness analysis" (e.g., Gold et al. 1996; Drummond et al. 2005; Williams 1985): each intervention is assessed against the next best alternative in terms of its additional costs and the additional benefits it is expected to generate for the average patient. Benefits are usually not expressed in monetary terms but using metrics such as the Quality Adjusted Life Year or "QALY" (Williams 1985). These metrics are calculated as the product of life duration expressed in years and a quality of life weight on a scale ranging from 0 to 1, where 0 corresponds to a quality of life equivalent to being dead and 1 to that of "full health." Thus the unit of the QALY metric, one QALY, represents the equivalent of a year spent in full health. In informing resource allocation decisions, health economists estimate the "incremental cost-effectiveness ratio" of treating an additional patient, i.e., the ratio between the additional costs and the additional QALYs of an intervention
compared to the next best alternative, and recommend the funding of all interventions below some critical ratio. The approach proposed by health economists has a normative basis, in that the critical ratio can be interpreted as the Lagrange multiplier associated with the budget constraint in some implied optimization problem. However, unless one knows the extent of disease in the population, one has no idea of the cost coefficients in the budget constraint, and thus of what the critical ratio should be.

The remainder of the chapter is organized as follows. First, we review two techniques which have received particular attention in the literature, one of which has emphasized the need for explicit Portfolio Decision Analysis and the other the need to include multiple criteria and to engage key stakeholders. Second, we present a case study to illustrate how Multi-Criteria Portfolio Decision Analysis, broadly in the spirit of Phillips and Bana e Costa (2007) or Phillips (2011), can further the development of techniques to support healthcare planners. Finally, we offer some reflections on the particular challenges of doing Portfolio Decision Analysis in a population health context.
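Before turning to these techniques, a concrete illustration of the QALY and ICER arithmetic just described may be helpful. The following Python fragment evaluates a hypothetical intervention against its next best alternative; all figures, including the critical ratio, are invented for illustration.

```python
# Sketch: QALYs and the incremental cost-effectiveness ratio (ICER)
# for a hypothetical intervention versus the next best alternative.
# All numbers are invented for illustration.

def qalys(years, quality_weight):
    """QALYs = life-years times a 0-1 quality-of-life weight."""
    return years * quality_weight

# Next best alternative: 10 years at quality 0.6.
# New intervention: 12 years at quality 0.7.
q_old, q_new = qalys(10, 0.6), qalys(12, 0.7)
cost_old, cost_new = 20_000.0, 50_000.0

icer = (cost_new - cost_old) / (q_new - q_old)   # cost per QALY gained
print(f"QALYs gained: {q_new - q_old:.1f}, ICER: {icer:,.0f} per QALY")

threshold = 30_000.0                             # hypothetical critical ratio
print("fund" if icer <= threshold else "do not fund")
```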
15.2 Existing Techniques

The challenge for the working health planner is to draw on available information about the healthcare needs of the population and about cost-effectiveness, and to synthesize this information in order to establish a programme of activity which will give greatest value for the locality for which she is responsible. Various approaches have been developed to help decision makers with this decision, and two have received particular attention: Generalized Cost Effectiveness Analysis (GCEA) and Programme Budgeting and Marginal Analysis.

GCEA is the tool which the World Health Organization recommends for planning population health (Hutubessy et al. 2003; Tan-Torres Edejer et al. 2003). As with the cost-per-QALY tool of economists, interventions are assessed against an alternative in terms of the ratio of added costs and benefits. The alternative intervention is a counterfactual, usually corresponding to what would happen in the absence of the investigated intervention, and costs and benefits are assessed explicitly through a simulation model. Benefits are measured by the Disability Adjusted Life Year (DALY), a measure substantially similar to the QALY, although with a negative sense: the DALY is a "bad," whereas the QALY is a good. Airoldi and Morton (2009) have shown that the way the DALY is currently computed makes it problematic as a benefit measure (e.g., increases in length of lifetime may increase DALYs). On the other hand, Morton (2010) has argued that a suitably corrected version of the DALY could be interpreted as a metric with similarities to the well-established poverty metric in the domain of income. A further distinction between the canonical health economics approach, which stresses incremental cost-effectiveness, and the GCEA approach is that in the latter
benefits and costs are estimated at the population level rather than for the average patient, and the alternative uses of limited resources are taken into account, making the objective function and the budget constraint explicit. In this respect, GCEA stands as a direct descendant of the Global Burden of Disease studies pioneered by the World Health Organization (Murray and Lopez 1996), which can in turn trace their lineage to the techniques of public health needs assessment. Its proponents highlight that GCEA, although highly informative in its own right, needs to be integrated with further analysis to take into account other concerns of the health planner. In particular, while the framework explicitly models the objective of maximizing population health within limited resources, health planners will also be interested in achieving other goals, such as health equity and system responsiveness (Hutubessy et al. 2003).

A competitor approach, which aims to take the multiple concerns of the health planner into account explicitly, is Program Budgeting and Marginal Analysis (PBMA). PBMA is an economics-inspired approach to public sector planning designed to aid local decision makers (Mitton and Donaldson 2004). PBMA uses the principles of opportunity cost and marginal analysis: the "opportunity cost" of providing an intervention is the value of the best available alternative use of the same resources, and "marginal analysis" consists of focusing on the additional costs and benefits associated with a proposed change in service provision (rather than the costs and benefits of the resulting overall portfolio of healthcare services).

PBMA covers a variety of different practices, with a similar process but different evaluation procedures. In general, the process is led by a facilitator and consists of several steps: determine the aim and scope of the analysis, identify where resources are currently spent, form a panel of decision makers, determine locally relevant criteria for decision making, identify options for investment and disinvestment, assess options against the agreed criteria, and validate results and recommend resource re-allocation (Mitton et al. 2003). The evaluation procedure recommended in PBMA is multi-criteria decision analysis, but the set of criteria and the form of the value function differ between case studies to reflect the preferences of the local stakeholders. In some cases, the composite concept of cost effectiveness is used as a criterion in a multi-criteria value function (Mitton et al. 2003). In other cases, the cost effectiveness of an intervention is derived by calculating the ratio of the multi-criteria benefit score to the cost of each intervention (Robson et al. 2008; Kemp and Fordham 2008; Baughan and Ferguson 2008; Wilson et al. 2006). In some cases, the concepts of need and of effectiveness are entered as separate criteria (e.g. Wilson et al. 2006); in others they seem to be integrated in a population health impact criterion (e.g. Robson et al. 2008). PBMA practitioners recognize the need to provide more guidance on the criteria and the shape of the value function. Peacock et al. (2007), for instance, propose a Multi-Attribute Utility function with three criteria, and Wilson et al. (2008) report and reflect on current practices for assessing the value of interventions and arriving at a priority ordering. Criteria used in selected articles are summarized in Table 15.1.
Table 15.1 Multiple criteria and value function reported in PBMA exercises

Source: Mitton et al. (2003)
Criteria: Access/capacity; Appropriateness; Sustainability/cost-effectiveness; System integration; Clinical/population health effectiveness
Value function: Additive scores. Interventions ranked by aggregate benefit scores; funding allocated in this order until the constraint is met

Source: Robson et al. (2008) (similar lists in Kemp and Fordham (2008) and Baughan and Ferguson (2008))
Criteria: Better outcomes (sub-criteria: contributes to local action plan; meeting outcomes for 'Every Child Matters'; impact on current and future need); Increased participation (sub-criteria: user centered; user involvement; feedback; community consultation); Improved working together (sub-criteria: mental health service delivery; appropriate service partners; common assessment framework; appropriate workforce; recruitment; knowledge and expertise; supervision and support; local children services workforce strategy); Improved quality of services (sub-criteria: experience; risk management; location of service; professional standards; social marketing)
Value function: Additive (using swing weights); prioritisation based on benefit–cost ratio

Source: Peacock et al. (2007)
Criteria: Individual health; Community health (i.e. community ownership and control of the programme, and long-term sustainability); Equity (i.e. accessible and addressing the need of the most disadvantaged groups)
Value function: Ad hoc (the score for individual health is multiplied by a weighted sum of the other two criteria)

Source: Wilson et al. (2006)
Criteria: Access and equity; Effectiveness; Local and national priorities; Need; Prevention; Process; Quality of life
Value function: Additive; prioritization based on benefit–cost ratio
Although PBMA practice does not draw on the Multi-Criteria Portfolio Decision Analysis literature, PBMA proponents envisage developing their techniques by drawing more systematically on models to assess the benefits and costs of interventions from the health economics and Multi-criteria Decision Analysis literatures (Mitton and Donaldson 2004). In summary, whilst GCEA highlights the need to formalize the resource allocation problem, PBMA highlights the need to consider multiple criteria and to engage key stakeholders in the process.
15.3 Case Study

Over the last few years we have used Multi-Criteria Portfolio Decision Analysis, broadly in the spirit of Phillips and Bana e Costa (2007), to support a number of local Primary Care Trusts in England. Primary Care Trusts are organizations responsible for purchasing healthcare services on behalf of their local populations. At the time of writing, there are 152 Primary Care Trusts in England, with an average population of 330,000 people each. The problem of selecting which services to purchase, and at what scale, is a portfolio allocation problem. Primary Care Trusts purchase services from local providers such as hospitals, ambulance services, and community care services. The purchase of services is formalized in separate contracts between the Primary Care Trust and each healthcare provider, to plan the provision of the anticipated type and volume of care for the local population in the forthcoming year. These services are funded from general taxation and offered "free of charge"; the available resources are exogenously determined according to the healthcare need of each Primary Care Trust using a formula which takes into account factors such as the size of the population, its demographic characteristics, and its socio-economic deprivation (Department of Health 2008). The available resources are typically insufficient to provide all the services that would benefit the local population, and Primary Care Trusts are responsible for deciding which interventions should be funded.

In this section, we describe the use of Multi-Criteria Portfolio Decision Analysis to support the allocation of resources in a public health programme called "Staying Healthy" in a Primary Care Trust in central London. The key aim of the programme is to maintain health through disease prevention and the promotion of healthy lifestyles. We start by explaining the problem faced by the Staying Healthy board and by formulating the underlying Multi-Criteria Portfolio Decision Analysis problem. We then describe how we engaged local stakeholders in identifying options for resource allocation, and helped them to express their objectives operationally, to assess the options against the criteria, and to interpret the results of the Portfolio Decision Analysis model.
15.3.1 Framing the Problem

The Staying Healthy Board is responsible for a wide range of activities to keep the population healthy by reducing the prevalence of risk factors such as high blood pressure, obesity and smoking. The underlying logic of the programme is that by reducing the prevalence of risk factors in a population, the incidence of diseases such as circulatory diseases or cancers can be reduced, and hence the number of premature deaths and the amount of preventable ill health lowered.

The information available to plan interventions in the Staying Healthy programme is predominantly the current health state of the population in terms
of risk factor prevalence, disease incidence, disease prevalence and associated mortality statistics. We used Multi-Criteria Portfolio Decision Analysis in a series of interactive workshops or "Decision Conferences" (Phillips 2007) to help the Staying Healthy board integrate the current information about the health state of the population with the expected benefits of potential interventions.

To present the decision model underpinning the workshops formally, we use the following notation:

• $I$: set of healthcare interventions, indexed by $i = 1, \dots, n$
• $G$: set of healthcare intervention groups, indexed by $g = 1, \dots, q$
• $A$: set of attributes or criteria, indexed by $a = 1, \dots, m$

We use the notation "$i \in g$" to mean "intervention $i$ falls within group $g$." The parameters of our model are as follows:

• $B$: available budget
• $c_i$: cost of intervention $i$
• $s_{ai}$: added value in terms of attribute $a$ which would be generated by implementing intervention $i \in g$. Scores were elicited from participants by attribute $a$, one group $g$ at a time
• $v_i = \sum_a w_{ag} s_{ai}$: value of intervention $i$, for $i \in g$
• $w_a^{\mathrm{within}\,g}$: within-criteria weight of criterion $a$ for each group $g$
• $w_a^{\mathrm{across}}$: across-criteria weight of criterion $a$
• $w_{ag} = w_a^{\mathrm{within}\,g} \, w_a^{\mathrm{across}}$: weight of attribute or criterion $a$ for scaling interventions in group $g$

The decision variable is $x = (x_1, \dots, x_i, \dots, x_n)$, a vector of $n$ elements $x_i \in [0, 1]$ representing the extent to which the proposed intervention $i$ is funded, with 0 representing no funding and 1 representing full funding. The implied optimization model is hence:

$$\max_x \; \sum_i v_i x_i \quad \text{subject to} \quad \sum_i c_i x_i \le B \qquad (15.1)$$
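As a minimal sketch of how problem (15.1) can be solved once the parameters have been elicited, the following uses an off-the-shelf linear programming routine; all costs, values and the budget are invented for illustration and are not the case-study figures.

```python
# Solve (15.1): maximize sum(v_i * x_i) subject to sum(c_i * x_i) <= B, 0 <= x_i <= 1.
# Every number below is hypothetical, chosen only to make the example run.
import numpy as np
from scipy.optimize import linprog

v = np.array([30.0, 70.0, 100.0, 45.0])   # v_i: aggregated benefit scores
c = np.array([40.0, 120.0, 250.0, 90.0])  # c_i: intervention costs (say, in £000s)
B = 300.0                                 # B: available budget

# linprog minimizes, so negate the benefits; bounds encode x_i in [0, 1].
res = linprog(-v, A_ub=c.reshape(1, -1), b_ub=[B], bounds=[(0.0, 1.0)] * len(v))
print("funding levels x:", np.round(res.x, 2))   # e.g. [1.0, 1.0, 0.2, 1.0]
print("total benefit:", round(-res.fun, 1))      # e.g. 165.0
```

Because the model allows fractional funding, the optimal solution funds interventions in decreasing order of their benefit-to-cost ratio, with at most one intervention funded partially – a point we return to in Section 15.3.6.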
In the rest of the section, we describe how members of the Staying Healthy board and a group of local stakeholders of the Primary Care Trust used this framework to inform priority setting.
15.3.2 Planning the Workshops

The facilitators and two directors in the area of Public Health discussed the problem faced by the Staying Healthy board and agreed to explore its investment priorities with a focus on reducing cardiovascular diseases (CVD), that is, circulatory diseases which can damage the brain (e.g. stroke) or the heart (e.g. coronary heart disease). The facilitators described the process of Decision
Conferencing, which is a set of working meetings led by an impartial facilitator to build a formal model of the problem and explore the solution space (Phillips and Bana e Costa 2007), and proposed to formalize the problem through a Multi-Criteria Portfolio Decision Analysis model. The directors decided to engage members of the Staying Healthy Board and key stakeholders (including family doctors, health visitors and patient representatives) to populate the model during three half-day meetings at roughly 10-day intervals. Participation in these workshops ranged from six to ten people.

In the first meeting, participants framed the decision problem by stating their fundamental objectives and by generating a list of alternatives to achieve them (i.e. the set of criteria $A$ and the set of interventions $I$, grouped in thematic areas $G$), which we describe in Section 15.3.3. In the second meeting, they scored each intervention against the agreed criteria, which included cost (i.e. the parameters $c_i$ and $s_{ai}$). Finally, in the last meeting, participants assessed the tradeoffs across the different criteria to generate an overall benefit score for each intervention (i.e. the two sets of weights $w_a^{\mathrm{within}\,g}$ and $w_a^{\mathrm{across}}$), explored the results produced through a Multi-Criteria Portfolio Decision Analysis software package,¹ and engaged in a discussion to inform priority setting.
15.3.3 The Strategic Decision Frame: Objectives and Alternatives

Following the value-focused thinking framework (Keeney 1996), we engaged participants in defining the decision problem, starting with the identification of their values and articulating them as fundamental objectives. In keeping with Keeney, we distinguished "fundamental objectives," i.e. the ends that participants valued in the context of allocating resources to health promotion and prevention activities, from "means objectives," i.e. the methods of achieving those ends. Participants initially identified their objectives with the three overall aims articulated by the Board of the Primary Care Trust:

• to reduce the prevalence of key risk factors for premature mortality
• to reduce premature mortality
• to reduce inequalities in the prevalence of risk factors and in premature mortality

For each objective, participants discussed the reasons for its importance, in order to distinguish means objectives (e.g. reducing the prevalence of risk factors) from end objectives (e.g. reducing premature mortality). The discussion led to the identification of additional end objectives and to their operational definition. The final list of end objectives was as follows:
¹ We used "Equity," a software package developed at the LSE and currently maintained by Catalyze Ltd (www.catalyze.co.uk).
• Reducing Premature Mortality $(a_1)$: the extent to which an intervention reduces premature CVD mortality in the medium run (5–10 years) and the long run (10+ years).
• Improving Individual Quality of Life $(a_2)$: the extent to which an intervention improves an individual's overall well-being (as defined by the individual).
• Improving Social Quality of Life $(a_3)$: the extent to which an intervention improves an individual's overall opportunities (in employment, education, etc.) and their engagement in social life.
• Reducing Health Inequalities $(a_4)$: the extent to which an intervention reduces the unjustifiable and avoidable gaps in health status among different social groups in the local population.

With these end objectives in mind, we invited participants to suggest well-defined interventions which the Staying Healthy Board should finance. An intervention was considered "well-defined" if, in principle, it was possible to answer the following questions:

1. How much does it cost to provide the intervention?
2. How many people benefit from it?
3. How exactly do people benefit from it?

Each participant worked individually and listed five to six interventions, writing each proposal on a separate piece of paper. Participants then revised the list of interventions, eliminating duplicates and asking for clarification of the details of the interventions, and clustered them into homogeneous areas. The clustering facilitated the assessment of the extent to which each intervention contributed to achieving the stated objectives, because clusters of interventions could be associated with a risk factor and with the associated epidemiological and clinical literature on effectiveness. The final list of interventions for prioritization consisted of 20 interventions grouped in the following six areas: Smoking, Physical activity, Blood pressure (pharmacological interventions), Statin (i.e. pharmacological interventions to tackle hypercholesterolemia), Diet and Alcohol. Due to time constraints, Diet and Alcohol were later excluded from the formal analysis. The list of the 14 interventions included in the model is summarized in Fig. 15.1 and Table 15.2. For each area, participants also defined a baseline of care (i.e. current care).
15.3.4 Scoring

The scoring process aimed at eliciting the parameters $s_{ai}$ of the Multi-Criteria Portfolio Decision Analysis model. These assessments were done through direct rating, one area $g$ at a time and one criterion $a$ at a time. The scores should reflect the "added value" of the intervention in achieving the considered objective or criterion, where a score of 0 was assigned by default to current care (the interventions labeled "do nothing" in each area). For example, the group first considered all
Fig. 15.1 The final model showing the interventions in each of the six areas
the interventions in the Smoking area. We asked the group to identify the one intervention that had the greatest impact on Reducing health inequalities in the local population. After some discussion, the group agreed it was "Tobacco control," so that intervention was assigned a preference value of 100. Next, the group compared each of the other interventions in the Smoking area against this benchmark and judged the relative value each intervention would contribute to Reducing health inequalities. Thus, when participants argued that "Cessation Levels 2 & 3" would provide about 40% as much value as "Tobacco control," a score of 40 was assigned. In all cases, the ratios of the numbers reflected the ratios of participants' relative strengths of preference for the two interventions. The group arrived at these relative judgments through discussion and by consulting the available evidence.

Consistency checks were particularly useful for revising the group's assessments. For example, if interventions A, B and C scored 100, 20 and 80, respectively, then participants were asked whether B and C together created the same value for the criterion under investigation as project A alone. If not, revisions were made to the scores until consistency was obtained. Thus, each benefit score gave the relative added value, attributable to Reducing health inequalities, of commissioning that intervention. The process was applied to the projects for all four benefit criteria. Participants were encouraged to assess scores representing the value associated uniquely with each criterion, thus avoiding double counting.

The assessment of the value of each intervention in reducing premature mortality drew on participants' knowledge of the clinical and epidemiological literature, and the model enabled them to translate that knowledge to their concrete problem. Similarly, for some interventions participants could draw on published models which estimate the impact of interventions in improving the quality of life of patients (measured in QALYs).
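The consistency check lends itself to a small mechanical illustration. Below is a minimal sketch using the hypothetical A/B/C scores from the example above; any pair of interventions whose scores sum to a third intervention's score triggers the question put to the group.

```python
# Additive consistency check on ratio-scale scores (hypothetical example from the text).
from itertools import combinations

scores = {"A": 100, "B": 20, "C": 80}

# Whenever two interventions' scores sum to a third's, the group should confirm that
# the pair jointly creates as much value as that intervention alone; if not, the
# scores are revised until consistency is obtained.
for p, q in combinations(scores, 2):
    for r in scores.keys() - {p, q}:
        if scores[p] + scores[q] == scores[r]:
            print(f"Check with the group: do {p} and {q} together match {r} in value?")
```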
Table 15.2 Description of interventions (i) by group (g)

Area (g): Smoking
  Do nothing – Current care with no additional effort by the Primary Care Trust
  Brief – Brief interventions by a range of practitioners (GPs, practice nurses, pharmacists, other clinicians)
  Cessation "Levels 2 & 3" – Intervention in the community, including Nicotine Replacement Therapy and counseling through smoking cessation clinics
  Pregnancy – Smoking cessation in pregnancy
  Tobacco control – Home and workplace interventions to promote smoking cessation

Area (g): Physical activity
  Do nothing – "G-Pack" currently provided, which includes assessment, advice and follow-up
  Brief – Opportunistic interventions in primary care and community services
  Level 2 – Targeted intervention consisting of fostering motivation, goal-setting, follow-up, and coaching
  Level 3 – Intensive 10-week programme post-diagnosis
  Workplace – Health promotion activities within the workplace environment
  Physical environment – Influencing transport, urban planning, buildings, children

Area (g): Blood pressure
  Do nothing – Current care with no additional effort by the Primary Care Trust
  Better Detection – Opportunistic screening (everybody visiting a GP), receiving current care in terms of monitoring and prescribing
  Prescribing – Following good practice for those currently detected
  Better Monitoring – Better monitoring of those currently detected

Area (g): Statin
  Do nothing – Current care with no additional effort by the Primary Care Trust
  Primary prevention (High Risk patients) – Identifying people at high risk (without disease)
  Secondary prevention – Treating people with disease, with different strategies: higher versus lower intensity treatment with statins; titration strategy

In discussing the value of interventions, participants also clarified how each intervention would be implemented in practice and were thereby able to estimate its costs. The working definition of the cost of each intervention was "the cost of providing an intervention to a pre-specified population group (i.e. the ongoing cost)."
Table 15.3 Normalizing scores on reduction in premature mortality for the area "Physical activity"

Intervention            Assessed scores   Cumulative scores   Normalized scores
Do nothing              0                 0                   0
Brief                   70                70                  23
Level 2                 50                120                 39
Level 3                 30                150                 48
Workplace               60                210                 68
Physical environment    100               310                 100
The overall available budget $B$ was never explicitly defined, although the limitedness of resources was clearly a constraint, for two reasons. First, even though the Staying Healthy programme had a set budget constraint, the exercise covered only a subset of the activities it funds; second, the aim of the exercise was to explore a systematic assessment of options which could be used to formulate business cases for negotiating additional resources. The parameter $B$ therefore remained a variable defining different funding-constraint scenarios.
15.3.5 Weighting the Criteria

Total benefits cannot be calculated until the units of benefit from one area to the next, and from one criterion to the next, are equated. This was accomplished in three steps. First, the benefit scores on a given criterion for a particular area were added and normalized so that the resulting scale extends from 0, representing "do nothing," to 100, representing most preferred. Take the scores on Reducing premature mortality for interventions in the area "Physical activity" as an example. The first column of numbers in Table 15.3 gives the assessed scores for the reduction in premature mortality; the second column gives the cumulative sum of the scores; and the third shows the normalization, which results in a preference value scale. On this scale, 100 represents the total achievable reduction in premature mortality associated with all the interventions in the area, assuming they are successful. This normalization process was carried out on each of the benefit scales for every area. Thus, input scores associated with different criteria were all converted to cumulative preference (or value) scales. However, the benefit scores for different criteria were always assessed relative to a different, arbitrary 100 for each scale; thus, weights for these criteria have to be judged.

Two types of weights are required within this model. One set compares the scales for a given criterion from one area to the next: the within-criterion weights. The other set compares the benefit criteria to each other: the across-criteria weights. These weights reflect the trade-offs in values between the
Table 15.4 Within- and across-criterion weights

                          Cost      Health       Premature   Individual   Social
                          ongoing   inequality   mortality   QoL          QoL
Within-criteria weights
  Smoking                 –         100          100         90           20
  Physical activity       –         50           25          100          100
  BP                      –         70           75          50           15
  Statin                  –         10           55          20           2
  Diet                    –         0            0           0            0
  Alcohol                 –         0            0           0            0
Across-criteria weights   100       100          100         50           50
Reduction in premature mortality by doing all the interventions in Smoking. They also felt as strong a preference for the potential improvements in individual and social quality of life that would be achieved by doing all the interventions in the Physical activity area. However, they thought these were about half as valuable as the Reduction in health inequalities and premature mortality from the Smoking area. Thus, across-criteria weights of 100 were assigned both to Reducing health inequalities and to Reducing premature mortality, and across-criteria weights of 50 were assigned to Improving quality of life from both the individual and the social perspective.
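The arithmetic of the normalization and weighting steps can be sketched compactly. The snippet below reproduces the "Physical activity" normalization of Table 15.3 and combines one pair of weights from Table 15.4; treating $w_{ag}$ as the product of the two weights rescaled to [0, 1] is our simplifying assumption for illustration, not a prescription from the case study.

```python
# Normalize within-area scores (the "Physical activity" column of Table 15.3).
assessed = {"Do nothing": 0, "Brief": 70, "Level 2": 50,
            "Level 3": 30, "Workplace": 60, "Physical environment": 100}

total, running, normalized = sum(assessed.values()), 0, {}
for name, score in assessed.items():        # dicts keep insertion order (Python 3.7+)
    running += score                        # cumulative score
    normalized[name] = round(100 * running / total)
print(normalized)  # {'Do nothing': 0, 'Brief': 23, ..., 'Physical environment': 100}

# Combine the two weight levels for one (area, criterion) pair, e.g. Physical
# activity / Premature mortality in Table 15.4: within-criterion weight 25,
# across-criteria weight 100. Rescaling both to [0, 1] is an assumption.
w_within, w_across = 25, 100
w_ag = (w_within / 100) * (w_across / 100)
print(w_ag)  # 0.25
```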
15.3.6 Results

The weighting procedure illustrated above allowed the group to arrive at an overall aggregated value for each intervention, combining its expected contributions to reducing mortality, reducing health inequalities and improving quality of life. We then divided the benefit score by the estimated cost of implementing the intervention to assess its value-for-money. We can think of each of the 14 assessed interventions as a triangle with the benefit score on the vertical side and the cost necessary to generate those benefits on the horizontal side. Thus, the slope of the hypotenuse signals value-for-money: the steeper the slope, the better the value-for-money. Some writers have criticized the use of the value-for-money ordering (Kleinmuntz 2007), as it is not an optimizing algorithm for problem (15.1). However, we find that it captures simply and directly the critical insight that projects which deliver high benefits are not a good buy if they simultaneously cost a great deal of money, and that it allows nontechnical people to understand and "own" the results of the model.

To illustrate the impact of the approach, in this section we first report a graph with the value-for-money triangles for each area separately; we then combine all triangles in a single graph to facilitate a comparison of the value contributed by interventions in different areas.

An examination of the ordered benefit–cost curve for each area is instructive, for the curves often give an overall view of the areas that is not obvious from looking at the individual interventions. This curve is a partial efficient frontier, with interventions ordered by their efficiency score, which is represented by the steepness of the hypotenuse of the associated value-for-money triangle. The partial efficient frontier for the "Smoking" area is reported in Fig. 15.3.

Fig. 15.3 The partial efficient frontier for the "Smoking" area

The partial efficient frontier in Fig. 15.3 shows that targeted interventions for smoking pregnant women (i.e. "Pregnancy") offer very good value-for-money. This is a small-scale intervention (hence the small triangle, almost completely hidden by the circles numbered "1" and "2") which seems to offer good benefits on the criteria at a relatively low cost. Looking back at the group assessments, although the intervention has a very small benefit in Reducing premature mortality, it has a relatively large impact on Improving quality of life from the individual's perspective and some impact on Reducing health inequalities, for a very small cost (£40,000) compared to other interventions in this area. The next intervention in terms of value-for-money is "Tobacco control" (i.e. the triangle between points "2" and "3" in the graph), which contributes a large benefit for a relatively low cost. In fact, according to the group's judgments, "Tobacco control" contributes the greatest benefit in the Smoking area: the vertical side of the associated triangle is the longest of the four triangles in this graph.

The model combined all the interventions into one curve of cumulative benefit versus cumulative cost, shown in Fig. 15.4. The shaded area represents the convex hull of the cost and value points of all possible combinations of interventions (360 possible portfolios).
Fig. 15.4 The efficient frontier, showing the 14 interventions in value-for-money order
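The arithmetic behind Figs. 15.3 and 15.4 can be sketched as follows; the intervention names come from the Smoking area, but the costs and benefit scores are invented (chosen only so that the ordering matches the narrative above) and are not the workshop's assessments. With fractional funding, as in (15.1), this ratio ordering is optimal; with all-or-nothing funding it is only a heuristic, which is why the ordering is best read as a communication device rather than an optimizer.

```python
# Value-for-money ordering and the cumulative benefit-cost curve (cf. Figs. 15.3-15.4).
interventions = [
    # (name, cost in £000s, aggregated benefit score) -- hypothetical figures
    ("Pregnancy", 40, 30),
    ("Tobacco control", 150, 95),
    ("Cessation Levels 2 & 3", 220, 60),
    ("Brief", 90, 20),
]

# Sort by the slope of each value-for-money triangle: benefit per unit of cost.
ordered = sorted(interventions, key=lambda t: t[2] / t[1], reverse=True)

# Accumulate cost and benefit down the ordering; each step traces one triangle's
# hypotenuse along the (partial) efficient frontier.
cum_cost = cum_benefit = 0.0
for name, cost, benefit in ordered:
    cum_cost += cost
    cum_benefit += benefit
    print(f"{name:24s} slope={benefit / cost:4.2f}  "
          f"cum. cost=£{cum_cost:4.0f}k  cum. benefit={cum_benefit:4.0f}")
```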
15.4 Discussion

The systematic process we used is similar to that of PBMA, and the feedback from the director of Public Health who promoted the exercise is in line with the findings of PBMA practitioners (Kemp et al. 2008): the process enabled participants to quantify the relative value-for-money of potential interventions by focusing on their added value and by making their opportunity costs explicit; assessment was possible even though the available information was incomplete; and the model facilitated a structured discussion between key stakeholders with both clinical and managerial backgrounds and perspectives. The underlying model, however, drew more systematically on the normative basis of Portfolio Decision Analysis by making the objective function and the nature of the budgetary constraint explicit, a feature emphasized by the GCEA technique advocated by the World Health Organization. Beyond GCEA, we also included multiple criteria explicitly, using Multi-Criteria Decision Analysis techniques, and engaged local stakeholders to articulate their mental models, to contribute their specialist knowledge and to confront key tradeoffs openly.

Though based on a technically simple model, these decision conferences and the models developed in them have been well received by the sponsoring organizations. In this sense, our experience has been positive and our methods have been "successful." However, the limited scale of the intervention described in the case study and the sheer complexity of healthcare mean that many issues have been left unexplored, which we briefly review here.
15.4.1 Use of Evidence and Disease Modelling

The case described above uses direct assessment of health benefits, drawing on expert knowledge informed by the clinical literature. It should be emphasized that, when making judgments about the extent of a health benefit, it is not generally possible to "read off" the health benefits from a clinical trial, despite the availability of clinical studies and meta-analyses, as local populations may have particular characteristics which differ from those of the trial populations (e.g. subjects in clinical trials are normally healthier than the typical patient).

One approach to achieving this local contextualization is to use or develop formal disease models. Such models exist for most common conditions, and are often of considerable sophistication. In these models, which use Markov chains, System Dynamics, or Discrete Event Simulation, a cohort of patients flows through a system of disease states over a period of time, under differing scenarios and treatment programmes. Such models are themselves based on judgment, but on judgments at a more disaggregated level, and at least some of the disaggregated judgments can be directly validated. We have developed such models ourselves (e.g. Airoldi et al. 2008b), but doing so is highly time consuming. In a time- and resource-limited environment where one is charged with making assessments of multiple interventions for different conditions, building a disease model for each is simply not practical.

How problematic this reliance on direct expert judgment is depends on the quality of that judgment relative to the quality of the projections of a disease model; ultimately there is a cost–benefit tradeoff to be made about whether the improvement in the quality of the assessment of an outcome that comes from a disease model is worth the additional investment. We do not have a particularly good sense of how far this is the case. Certainly, initial assessments by workshop participants of the scale of the benefits of particular programmes could vary massively. However, this could be viewed either as a cause for concern or as a healthy acknowledgement of uncertainty on matters which are the subject of continuing scientific dispute. More information or evidence on this point would be very useful in guiding practice.
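To indicate what the simplest such disease model looks like, here is a toy Markov cohort sketch; the three states and every transition probability are hypothetical, not estimates for any real condition.

```python
# A toy Markov cohort model: a cohort moves yearly between three states
# (Well, Sick, Dead) under a hypothetical transition matrix. Rows sum to 1.
import numpy as np

P = np.array([
    [0.90, 0.08, 0.02],   # from Well: stay well, fall sick, die
    [0.20, 0.70, 0.10],   # from Sick: recover, stay sick, die
    [0.00, 0.00, 1.00],   # Dead is an absorbing state
])

cohort = np.array([1000.0, 0.0, 0.0])   # 1,000 people start in the Well state
for year in range(1, 11):
    cohort = cohort @ P                 # advance the cohort one year
    print(f"year {year:2d}: well={cohort[0]:7.1f}  sick={cohort[1]:6.1f}  dead={cohort[2]:6.1f}")
```

A treatment programme would be represented by a second matrix with modified transition probabilities, and the value of the programme read off as the difference between the two cohorts' trajectories.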
15.4.2 Health Inequalities

One of the fundamental findings of public health is that there is, globally, a strong and persistent "socio-economic gradient" in health, whereby the lower socio-economic classes experience worse health, on a variety of dimensions, and overall, as measured on metrics such as Life Expectancy and Disability-Free Life Expectancy. In the UK, and in particular in England, the recently deposed Labour government took strong but ultimately ineffectual action to reduce this gap, by allocating additional funds to the so-called Spearhead Primary Care Trusts with
more deprived populations. A persistent suspicion is that much of these funds did not reach the more deprived populations for which they were intended, and that the beneficiaries of this expenditure were the better-off people who happened to live in the more deprived areas. We are not aware of compelling evidence for this view, but it is consistent with what we know about healthcare consumption: the vocal middle classes access and consume more healthcare than the remainder of the population, despite their generally better health. The difficulty of tracing the ultimate beneficiaries of these funds illustrates that most Primary Care Trusts – whatever they may or may not have done – did not have a transparent system for deciding how resources should be allocated across different subgroups within their populations.

The experience with the Primary Care Trust described above illustrates how difficult building such a system is: "inequality" evoked very different things for different people, and sometimes for the same people at different times. Thus, the inequality criterion in our model represented, for the group, a composite of (at least) socio-economic, racial, and gender inequality. In ongoing work (Morton and Airoldi 2010), we are attempting to develop a clearer and more transparent framework for assessing the value of healthcare interventions taking into account aversion to inequality, with a view to informing decisions about, for instance, the targeting of screening programmes at particular subpopulations. A difficulty in designing such an approach is that while people often feel strongly that health inequalities are unjust and should be tackled, they simultaneously reject any idea that one person's need for healthcare be weighted more heavily than another person's. It is not at all obvious how such conflicting moral intuitions are to be reconciled.
15.4.3 Unrelated Future Costs

An issue which surfaces from time to time in the health economics literature is the role of so-called unrelated future costs (Garber et al. 1996). These are the healthcare costs which accrue as a result of (for example) saving the life of someone who then goes on to incur further healthcare expenditures which, had he died, would have been avoided. The most prominent example of this issue is in the context of smoking: lung cancer is a quick and cheap way to die, and as a result, those who die of the disease save the public purse considerable sums of money in long-term care late in life, pensions and so on. Preventing such a death does incur costs to the system further down the line.

From the point of view of a Primary Care Trust, these costs do not loom large, and we do not take them into account in our modelling. In a way this makes sense: Primary Care Trusts have responsibility for an annual budget which is set by the Department of Health, and this budget is determined based on the morbidity of their population. Thus, from the point of view of the individual Primary Care Trust, if they manage to keep more sick people alive, they will receive more funds (assuming
the budget is exogenously determined). From the point of view of the system as a whole, however, this looks to us like a bias which promotes longer, sicker lives at the expense of shorter, healthier ones, by underestimating the cost of the former. This offends our economic and moral sensibilities, but how to introduce these so-called unrelated future costs into models suitable for use by Primary Care Trust decision makers is a problem to which we do not yet have a solution.
15.4.4 Acute Versus Preventive Interventions

In the case described in this chapter, the decision makers were interested in considering activities which could ameliorate a range of different diseases. This is in contrast to the sort of situation in which economic evaluation is most frequently used in healthcare. Structured appraisal methods are often applied in decision contexts where the choice is between different treatments for the same or similar conditions. However, restricting the use of analysis in this way means that the large-scale, context-setting decisions are often made on the basis of unanalyzed intuitions and gut feel. The challenge, then, is to develop methods and procedures which can help decision makers compare treatments which may be very heterogeneous in terms of their aims and the character of their beneficiaries – consider, for example, comparing hip replacement and gender reassignment surgery.

Probably the most striking distinction is between acute and preventive interventions. Acute interventions are typically developed in the hospital setting – for example, surgical procedures such as coronary artery bypass surgery. Preventive interventions, on the other hand, may take the form of public information campaigns or the provision of services to help people in the community take better care of their health – such as services which help people give up smoking. Acute and preventive interventions differ in terms of the nature of the knowledge base underlying claims about their effectiveness. The effectiveness of acute interventions is typically studied through randomized controlled trials, while the evidence on the effectiveness of preventive interventions (e.g. smoking cessation services) is of a different nature and is often considered to be weaker. Another, and possibly more important, difference is that because the beneficiaries of acute interventions are named individuals, whereas the beneficiaries of preventive interventions are statistical individuals, there is a natural constituency to advocate for the greater uptake of acute interventions. Thus, both epistemological and political factors seem to favour investment in acute interventions.

An example which highlights the issues involved is that of national policy on stroke. According to modelling studies which we worked on in a separate but related piece of work (Airoldi et al. 2011), preventive interventions for stroke offer excellent opportunities for improving population health at a moderate cost, whereas acute interventions such as stroke clinics and thrombolysis, although they may substantially reduce the disability associated with stroke for those unfortunate enough to experience it, nevertheless cannot compare in quantitative terms to the
effectiveness of the preventive interventions. Yet national policy continues to stress implementation of stroke clinics and thrombolysis to the almost complete exclusion of preventive interventions. Accordingly, a challenge for portfolio decision analysis in the healthcare arena is to provide decision makers with frameworks for reflection on the appropriate balance between acute and preventive parts of their portfolios.
15.4.5 The Good Death

In the case described in this chapter, we used an evaluation scheme which depended heavily on the concept of "health." Some interventions, however, have no or negligible impact on health but are primarily intended to ease the process of dying (palliative care being the most obvious example). Comparing interventions which improve health with interventions designed to improve the quality of death seems to be something which people find conceptually difficult. Part of the reason may be that there is a lack of empirical evidence (and a lack of standardised evaluation schemes) on what constitutes a "good death," and such evidence as there is suggests that tastes may differ substantially within the population (e.g. while most people would prefer a painless death, some may prefer a death which is sudden, whereas others may prefer to have adequate notice to set their affairs in order and say goodbye to loved ones). Moreover, we suspect that people's preferences over different sorts of deaths are likely to be relatively labile, as the issue is one which most people are probably not given to thinking about deeply and frequently (different forms of bad health, on the other hand, are relatively familiar, and someone who has experienced both a broken arm and a migraine can relatively easily say which is the more unpleasant experience). Thus, we conclude that there is a need to develop decision analysis techniques which can compare life-improving and death-improving interventions.
15.5 Conclusion

In this chapter, we have outlined the challenge faced by the healthcare planner, who needs to combine information about the current health status of a defined population with information about the costs and effectiveness of possible interventions to improve the level of health and its distribution. Existing techniques tend to focus either on the process which the healthcare planner could follow to confront key tradeoffs (PBMA) or on the explicit modelling of the underlying disease to estimate an intervention's impact on the local health economy (GCEA). In the case study which we presented, we used the socio-technical approach of Decision Conferencing (Phillips and Bana e Costa 2007) to engage a group of key stakeholders in a Primary Care Trust in England and build a model for their resource allocation problem based on Multi-Criteria Portfolio Decision Analysis, learning
from and building on both these traditions. Our engagement with the National Health Service continues, and we are refining our methods to deal with some of the challenges outlined in the previous section.

The approach that we used could be described as low-tech and participative, in the sense that the methods which we used are not fundamentally technically innovative. Rather, we have used our case study to draw attention to some of the deep and complex issues which beset decision making in this area – ambiguous or incomplete evidence, aversion to inequality, costs associated with the prolongation of life, difficulties in balancing acute and preventive treatments, difficulties in balancing life- and death-enhancing interventions. When the issues are so complex, theoretical development is not an optional extra: using population health metrics such as the DALY as a basis for prioritisation (Airoldi and Morton 2009) or weighting health states to model inequality aversion (Østerdal 2003) without a proper understanding of what one is doing can lead to very odd results. Yet theory by itself is not enough. Despite the long history of operations research in healthcare, and the vast database of medical, epidemiological, and health economic evidence at their disposal, we find that Primary Care Trusts make limited use of structured techniques which could help them think systematically and quantitatively about the big questions of what they get for the money they spend. This is a disheartening state of affairs, particularly considering the stresses which healthcare systems will face in the coming years, as funds become tighter and people live longer. But it is also a reminder of the importance of usability and accessibility to decision makers operating in a challenging environment. Decision analysts – with their respect for theory and their preoccupation with producing tools which actually work – are perhaps uniquely well placed to play a role in pushing forward practice in this important and fascinating area.

Acknowledgements The authors wish to thank Hiten Dodhia and Lynda Jessopp and the participants in the three Decision Conferences held in 2008. They are also grateful to Gwyn Bevan for helpful discussions and to the Health Foundation for financial support.
References

Acheson R (1978) The definition and identification of need for health care. J Epidemiol Commun Health 32:10–15
Airoldi M, Bevan G, Morton A, Oliveira M, Smith J (2008b) Facing poor glucose control in type 1 diabetes: requisite models for strategic commissioning. Healthcare Manage Sci 11:89–110
Airoldi M, Bevan G, Morton A, Oliveira M, Smith J (2011) Estimating the health gains and cost impact of selected interventions to reduce stroke mortality and morbidity in England. Priority Setting for Population Health WP series, LSE Department of Management (available at http://www2.lse.ac.uk/management/research/initiatives/sympose/publications.aspx)
Airoldi M, Morton A (2009) Adjusting life for quality or disability: stylistic difference or substantial dispute? Health Econ 18:1237–1247
Baughan S, Ferguson B (2008) Road testing programme budgeting and marginal analysis (PBMA) in three English regions: Hull Teaching Primary Care Trust diabetes pilot project. Yorkshire and Humber Public Health Observatory, York
Department of Health (2008) Resource allocation: weighted capitation formula, 6th edn. Department of Health, London
Drummond MF, Sculpher MJ, Torrance GW, O'Brien BJ, Stoddart GL (2005) Methods for the economic evaluation of health care programmes. OUP, Oxford
Garber AM, Weinstein MC, Torrance GW, Kamlet MS (1996) Theoretical foundations of cost-effectiveness analysis. In: Gold MR, Siegel JE, Russell LB, Weinstein MC (eds) Cost-effectiveness in health and medicine. OUP, Oxford
Gold MR, Siegel JE, Russell LB, Weinstein MC (1996) Cost-effectiveness in health and medicine. Oxford University Press, New York
Hutubessy R, Chisholm D, Edejer TT, WHO-CHOICE (2003) Generalized cost-effectiveness analysis for national-level priority-setting in the health sector. Cost Eff Resour Alloc 1(1):8
Keeney RL (1996) Value-focused thinking: identifying decision opportunities and creating alternatives. Eur J Oper Res 92:537–549
Kemp L, Fordham R (2008) Road testing programme budgeting and marginal analysis (PBMA) in three English regions: Norfolk mental health PBMA pilot project. Yorkshire and Humber Public Health Observatory, York
Kemp L, Fordham R, Robson A, Bate A, Donaldson C, Baughan S, Ferguson B, Brambleby P (2008) Road testing programme budgeting and marginal analysis (PBMA) in three English regions: Hull (diabetes), Newcastle (CAMHS), Norfolk (mental health). Yorkshire and Humber Public Health Observatory, York
Kleinmuntz DN (2007) Resource allocation decisions. In: Edwards W, Miles RF, von Winterfeldt D (eds) Advances in decision analysis. CUP, Cambridge
Mitton C, Donaldson C (2004) Priority setting toolkit: a guide to the use of economics in healthcare decision making. BMJ Books, London
Mitton C, Patten S, Waldner H, Donaldson C (2003) Priority setting in health authorities: a novel approach to a historical activity. Soc Sci Med 57:1653–1663
Morton A (2010) Bridging the gap: health equality and the deficit framing of health. Health Econ 19:1497–1501
Morton A, Airoldi M (2010) Incorporating health inequalities considerations in PCT priority setting. Working paper LSEOR 10.122. London School of Economics Operational Research Group, London
Murray CJL, Lopez AD (eds) (1996) The global burden of disease. Harvard School of Public Health, Boston
Østerdal LP (2003) A note on cost-value analysis. Health Econ 12:247–250
Peacock SJ, Richardson JRJ, Carter R, Edwards D (2007) Priority setting in health care using multi-attribute utility theory and programme budgeting and marginal analysis (PBMA). Soc Sci Med 64:897–910
Phillips LD (2007) Decision conferencing. In: Edwards W, Miles RF, von Winterfeldt D (eds) Advances in decision analysis: from foundations to applications. CUP, Cambridge
Phillips LD (2011) The Royal Navy's Type 45 story: a case study. In: Salo A, Keisler J, Morton A (eds) Portfolio decision analysis: improved methods for resource allocation. Springer, New York
Phillips L, Bana e Costa C (2007) Transparent prioritisation, budgeting and resource allocation with multi-criteria decision analysis and decision conferencing. Ann Oper Res 154:51–68
Robson A, Bate A, Donaldson C (2008) Road testing programme budgeting and marginal analysis (PBMA) in three English regions: Newcastle CAMHS PBMA pilot project. Yorkshire and Humber Public Health Observatory, York
Tan-Torres Edejer T, Baltussen R, Adam T, Hutubessy R, Acharya A, Evans D, Murray CJL (2003) Making choices in health: WHO guide to cost-effectiveness analysis. World Health Organization, Geneva
Williams A (1985) Economics of coronary artery bypass grafting. Br Med J 291:326–329
Wilson EC, Peacock SJ, Ruta D (2008) Priority setting in practice: what is the best way to compare costs and benefits? Health Econ 18:467–478
Wilson EC, Rees J, Fordham RJ (2006) Developing a prioritisation framework in an English Primary Care Trust (available at http://www.resource-allocation.com/content/4/1/3)
About the Editors
Ahti Salo is professor of systems analysis at the Systems Analysis Laboratory of the Aalto University School of Science. He obtained his M.Sc. and Dr.Tech. degrees at the Helsinki University of Technology in 1987 and 1992, respectively. He has worked as a senior researcher at the Technical Research Centre of Finland (VTT) and Nokia Research Center, and as a visiting professor at the London Business School, Université Paris-Dauphine and the University of Vienna. His research interests include portfolio decision analysis, risk management, efficiency analysis, and technology foresight. He has written well over 100 publications, including papers in leading journals such as Management Science, Operations Research, and European Journal of Operational Research, and is currently an associate editor of Decision Analysis. He has directed numerous high-impact decision support processes such as the national foresight project FinnSight 2015. In 2010, he became the President of the Finnish Operations Research Society, served on the jury of the EURO Doctoral Dissertation Award, and began his work as the European and Middle East Representative on the INFORMS International Activities Committee.
Jeffrey Keisler is associate professor of Management Science and Information Systems at the University of Massachusetts Boston. He earned a Ph.D. in decision sciences and an S.M. in engineering sciences at Harvard as well as an MBA at the University of Chicago and a B.S. from the University of Wisconsin. He has over 50 assorted publications including articles in journals such as Decision Analysis, Interfaces, and Risk Analysis. His research interests include decision analysis applications, portfolio decision analysis, decision processes, value of information, and other decision analytic methods. He was previously a decision analyst at General Motors, Argonne National Laboratory and Strategic Decisions Group. He is editor of Decision Analysis Today and an associate editor for Interfaces. He is President of the Specialty Group on Decision Analysis and Risk within the Society for Risk Analysis. Most recently, he became Vice-President/President-Elect of the INFORMS Decision Analysis Society.
Dr. Alec Morton is lecturer in Operational Research in the Department of Management at the London School of Economics and Political Science. He teaches courses in decision analysis, simulation, and statistics, and his research interests are in the application of decision analysis to planning problems, especially in
healthcare; in Multi-criteria Decision Analysis and Multi-objective Optimization; in the normative foundations of health economics, and in games of attack and defence. He is a graduate of the Universities of Manchester and Strathclyde, and before joining the LSE, worked at Singapore Airlines and the National University of Singapore.
About the Authors
Mara Airoldi is a Research Officer and PhD candidate in the Department of Management at the London School of Economics. Her research interests include both methodological and application issues in the area of resource allocation, with a focus on health care policies. Her methodological interests revolve around the normative foundation of resource allocation, which in the case of health care include the definition of objectives such as maximizing the health of a definite population and reducing health inequalities. Mara is particularly interested in bridging the gap between normative approaches and practice. To bridge this gap, her applied work aims at developing decision-analytic tools which employ normative principles, build on evidence and can be used in practice. She has a degree in economics from Bocconi University in Milan where she specialized in Economics of Public Choice and an M.Sc. in decision science from the LSE.
Gonzalo Arévalo is a part-time lecturer of statistics and operations research at the Rey Juan Carlos University, and an advisor on international RTD and innovation funding mechanisms for the Spanish Ministry of Science and Innovation, based at the Carlos III Health Institute. Previously, he was Technology Transfer Officer at URJC. He has participated in more than 20 regional, national, and international projects related to RTD and innovation. He is also an evaluator of RTD and innovation policies for the Spanish Ministry of Science and Innovation, the Belgian Federal Science Policy Office, and the European Commission. Currently, he is working on his Ph.D. thesis on decision analysis methods to support innovation, and studying for a Master's degree in Direction & Management of RTD and Innovation. He recently cofounded SKITES to apply the methodology proposed in his contribution to this volume.
Dr. Nikolaos Argyris is an LSE Fellow in Operational Research in the Department of Management at the London School of Economics and Political Science, where he teaches courses in Optimization Modelling. His research interests include Multi-Objective Optimization, Interactive Multi-Criteria Decision Making, Utility Theory, and Efficiency Analysis. He has been involved in applications of Operational Research methods to the efficiency analysis of schools in England, as well as to resource allocation in the English National Health Service. Nikolaos holds a Ph.D. in Operational Research from the LSE.
Dr. Roger Chapman Burk is an associate professor in the Department of Systems Engineering at the United States Military Academy at West Point. He teaches courses in operations research, decision analysis, and systems engineering. His research interests include R&D portfolio optimization and modelling of space and unmanned aircraft systems. He retired from the Air Force in 1995 and worked in industry for 5 years before coming to West Point. He has an M.S. in Space Operations from the Air Force Institute of Technology and a Ph.D. in Operations Research from the University of North Carolina at Chapel Hill.
Zeger Degraeve first studied at the Universities of Ghent and Leuven in Belgium and then gained his PhD from the Graduate School of Business at the University of Chicago. He was Professor of Management Science in the Department of Applied Economics at the Katholieke Universiteit Leuven in Belgium, before joining London Business School in 1999. Since March 2002 he has served as Associate Dean of the Executive MBA and Executive MBA-Global Programmes, and he has also served as Deputy Dean (Programmes) and a member of the School’s Management Committee. He established London Business School’s new centre in Dubai
and in July 2008, he was appointed the inaugural Sheikh Mohammed bin Rashid Al Maktoum Professor of Innovation at London Business School. Degraeve provides consultancy support across a range of issues including: operations management, logistics and supply chain management, environmental planning and purchasing strategy development. Recent consultancy clients include the European Commission, Andersen Business Consulting, National Economic Research Associates (NERA) and McKinsey and Co. Degraeve’s research has won several awards, including: the Association of the European Operational Research Societies’ (EURO) Prize for Best Applied Paper and the Chairman’s Award for the Best Applied Contributed Paper by INFORMS (The Institute For Operations Research and Management Sciences). He has published over 50 articles on decision making, optimization, scenario analysis and risk assessment for leading journals, including: Management Science, Operations Research, INFORMS Journal on Computing, European Journal of Operational Research, Interfaces and Harvard Business Review.
Bert De Reyck is professor and chair of the Department of Management Science and Innovation at University College London (UCL). He is also an adjunct professor at London Business School. Before joining UCL, Bert held positions at the Kellogg School of Management at Northwestern University and the Rotterdam School of Management. Bert’s research and teaching activities focus on strategic decision making, project management, project portfolio management, and risk management. Among others, Bert has worked with Pfizer, Merck, Roche, Novartis, Sanofi-Aventis, Shire, Unilever, PwC, Dunlop Aerospace, Eaton Aerospace, EuroControl, Lloyds, the New York Transit Authority, and Regal Petroleum.
Dr. Barbara Fasolo is a tenured lecturer in decision sciences in the Department of Management at the London School of Economics and an expert in behavioural decision science. An economist by training (B.Sc., Bocconi University, Milan), she took an M.Sc. in decision sciences from the LSE and a Ph.D. in psychology from the University of Colorado at Boulder. Prior to joining the LSE, she worked for 2 years as a post-doctoral researcher at the Adaptive Behaviour and Cognition Group of the Max Planck Institute for Human Development in Berlin, Germany. Dr. Fasolo’s research focuses on decision behaviour and its implications for policy making, managerial practice, and personal development.
José Rui Figueira is a full professor at Nancy’s School of Mines, Nancy, France, a researcher at the LORIA laboratory, and an associate member at CEG-IST, the Center for Management Studies of Instituto Superior Técnico. Professor Figueira has also taught and done research at the University of Évora, the University of Coimbra, and the Technical University of Lisbon. He obtained his Ph.D. in 1996 in the field of Operations Research from the University of Paris-Dauphine and his HDR at the same University
in 2009 in the field of Multiple Criteria in Operations Research and Decision Aiding. Professor Figueira’s current research interests are in decision analysis, integer programming, network flows, and multiple criteria decision aiding. He currently serves as editor of the Newsletter of the European Working Group on Multiple Criteria Decision Aiding and as one of the coordinators of this group. He is also the book section editor of the European Journal of Operational Research and an associate editor of the Journal of Multi-Criteria Decision Analysis and of the International Journal of Multi-Criteria Decision Making. He was recently elected a member of the Executive Committee of the International Society for MCDM.
Dr. L. Alberto Franco is an associate professor of Operational Research and Systems at Warwick Business School, United Kingdom. He has a background in civil engineering, and holds an M.Sc. in Operational Research from Lancaster University and a Ph.D. in Operational Research from the London School of Economics and Political Science. At Warwick he lectures on managing complexity and decision making at postgraduate level, and his main research interests focus on understanding the role and impact of facilitated dialogue and analysis on the effectiveness of management teams. He has extensive practical experience in the use of problem structuring and decision analysis approaches to support strategic issue management, value-focused thinking, and inter-organisational collaboration in the private, public, and not-for-profit sectors.
Johannes Gettinger is a research assistant at the Vienna University of Technology, Austria. He holds a master’s degree in international business administration from his studies at the University of Vienna and the University of Bologna. His current research is funded by the Austrian Science Fund (FWF) and focuses on conflict resolution, in particular electronically mediated negotiation, negotiation and decision support systems, and post-settlement negotiations.
Janne Gustafsson, Dr.Sc. (Tech.), is a Portfolio Manager at Ilmarinen, a Finnish pension fund, focusing on hedge fund style investments. Prior to joining Ilmarinen, Dr. Gustafsson worked for 6 years at Cheyne Capital Management, a London-based hedge fund, which he joined from Monte Paschi Asset Management in Milan. Dr. Gustafsson studied at Helsinki University of Technology in Finland, now part of Aalto University, and at the London Business School. His research interests include the valuation of private investment opportunities, asset management, portfolio selection models, and decision making under risk and uncertainty. Key parts of Dr. Gustafsson’s research have been published in Operations Research, a leading academic journal.
Elmar Kiesling is a junior researcher and lecturer at the University of Vienna, Austria. He holds a master’s degree in international business administration and pursues a Ph.D. in management at the University of Vienna. His research interests include multi-criteria decision support systems, agent-based modelling, and information security management.
Don N. Kleinmuntz, Ph.D., is cofounder at Strata Decision Technology LLC (SDT). He has been named a Fellow of the Institute for Operations Research and the Management Sciences (INFORMS, 2007) and of the Association for Psychological Science (1997). Don is an active member of INFORMS, and served as President in 2009. Dr. Kleinmuntz was a founding editor-in-chief of Decision Analysis (2001–2006) and an associate editor of Management Science (1995–2001).
Jack Kloeber is a retired US Army Lieutenant Colonel with a Ph.D. in decision analysis and years of experience in R&D portfolio management, decision analysis, modelling and simulation, technology selection, and strategy development. Jack taught mathematics at West Point for 3 years and graduate-level decision analysis, systems simulation, and technology selection for 6 years at the Air Force Institute of Technology. His work has supported decisions for various superfund sites within the Department of Energy, technology organizations within the Department of Defense, and many global pharmaceutical developers and manufacturers. Jack led the Portfolio Management group for Bristol-Myers Squibb in the early 2000s and, more recently, led the Portfolio Management group for J&J Pharmaceutical Services, coordinating the portfolio management efforts across multiple R&D and marketing operating companies. He received his Ph.D. in economic decision analysis from the Georgia Institute of Technology and his master’s degree in industrial engineering from Lehigh University. Jack is currently a senior partner at Kromite LLC.
Dr. Juuso Liesiö is a teaching research scientist in the Systems Analysis Laboratory at Aalto University. His research interests include portfolio decision analysis, robust resource allocation models and decision-support systems. In these fields, he has
developed novel modelling approaches, algorithms and computer software, and collaborated with several companies and public organizations in applying portfolio decision support models in practice. He teaches courses in decision analysis and optimization.
Dr. Gilberto Montibeller is a tenured lecturer in management sciences in the Department of Management at the London School of Economics. With a first degree in electrical engineering (UFSC, Brazil, 1993), he started his career as an executive at British American Tobacco. Moving back to academia, he was awarded a master’s degree (UFSC, 1996) and a Ph.D. in production engineering (UFSC, 2000). He then continued his studies as a post-doctoral fellow in management science at the University of Strathclyde. Dr. Montibeller is area editor of the Journal of Multi-Criteria Decision Analysis. His research interests encompass strategic decision making, decision making under multiple objectives and uncertainty, and the structuring of decision analytic models. He was awarded the Wiley Prize in Applied Decision Analysis for his research on combining scenario planning and multicriteria decision analysis. An expert in decision analysis, Dr. Montibeller has applied it extensively over the past 15 years, consulting for both private and public organisations in Europe and South America.
Vincent Mousseau obtained a Ph.D. in computer science/operations research at the University of Paris-Dauphine in 1993, and went on to complete his HDR in 2003. He served as an assistant professor at the University of Paris-Dauphine from 1994 to 2008, and has since served as Professor at École Centrale Paris in the Industrial Engineering laboratory. Within the Industrial Engineering laboratory, he heads the research team “Decision Aid for Production/Distribution.” His research is in the field of operations research and decision aiding; more specifically, it focuses on preference modelling and elicitation. He is very active in this research community and, for instance, organized an international doctoral school in multiple criteria decision aid in 2010 (http://www.gi.ecp.fr/mcda-ss). Professor Mousseau has published many articles in international journals such as the European Journal of Operational Research, Computers and OR, Annals of OR, Decision Support Systems, Naval Research Logistics, and Information Sciences. He is involved in the Decision Deck project (http://www.decisiondeck.org) as president of the consortium.
Dr. Gregory S. Parnell is a professor of systems engineering at the United States Military Academy at West Point. He teaches decision analysis, systems engineering, and operations research. Dr. Parnell is also Chairman of the Board and a senior principal with Innovative Decisions Inc., a leading decision analysis consulting firm. His research and consulting involve strategic planning, research and development (R&D) portfolio analysis, and resource allocation decision making. He has served as President of the Decision Analysis Society of the Institute for Operations Research and the Management Sciences (INFORMS) and as President of the Military Operations Research Society. Dr. Parnell received his Ph.D. from Stanford University.
Larry Phillips is a visiting professor at the London School of Economics and Political Science, the Director of the Decision Capability Group in the LSE’s Department of Management and Director of Facilitations Ltd. He specializes in helping decision makers to analyse complex issues involving uncertainty, risk, and multiple, conflicting objectives and often works with groups of key players using a problem-solving process called decision conferencing. He previously worked at Brunel University and has a Ph.D. in psychology from the University of Michigan, where his adviser was Ward Edwards, and a B.Sc. in engineering from Cornell.
In 2005, he was awarded the Ramsey Medal of the Decision Analysis Society of INFORMS, in recognition of his distinguished contributions to the field of decision analysis.
David Ríos Insua is a member of the Royal Academy of Sciences, Spain. Previously, he has researched and/or taught at Manchester University, Leeds University, Duke University, Purdue University, the University of Paris-Dauphine, CNR-IMATI (Italy), SAMSI (USA), and IIASA (Austria). He has published 15 books and 90 refereed papers in his areas of interest, which include decision analysis, risk analysis, Bayesian inference, group decision making, and their applications to water management, energy planning, security, and ICT. He currently leads the Risk Analysis research program of the region of Madrid and formerly led the National Science Foundation program on Risk Analysis, Extreme Events, and Decision Theory at SAMSI. He is a scientific advisor of AIsoy Robotics, which develops decision-theoretic emotional robots, and recently cofounded SKITES to apply the methodology proposed in his contribution to this volume.
Julie Stal-Le Cardinal is an associate professor at the École Centrale Paris. Her teaching activities concern the domains of innovation, project management, leadership and personal development, and design methodology. In parallel with these activities, Julie Stal-Le Cardinal works as a consultant in project management. Her research activities and national and international scientific collaborations concern knowledge and decisions in design, actors’ choices, and collaborative decisions in projects, as well as organization and company modeling with applications in healthcare.
Jeff Stonebraker is currently an assistant professor at North Carolina State University’s Poole College of Management with research and teaching interests in decision analysis, commercial forecasting/modelling, and hemophilia. His research is published in Operations Research, Interfaces, IEEE Transactions on Engineering Management, and Haemophilia. Prior to this academic post, Jeff was a decision analyst in the biotechnology and pharmaceutical industries at Bayer Biological Products and GlaxoSmithKline and a consultant with Applied Decision Analysis. Jeff is a retired US Air Force Major with two tours of duty at the Air Force Academy teaching decision analysis, operations research, and mathematics. Jeff received the
Ph.D. degree in management science from Arizona State University, the M.S. degree in engineering-economic systems from Stanford University, and the B.S. degree in electrical engineering from the University of South Florida.
Dr. Christian Stummer holds the chair of innovation and technology management at Bielefeld University in Germany. His research interests comprise (1) multiobjective portfolio analysis, the provision of proper (interactive) decision support systems, and their application to decision problems ranging from R&D management to IT (security) management and healthcare management, and (2) agent-based simulations of innovation diffusion processes. Before his current appointment, he served as an associate professor at the University of Vienna, as the head of a research group at the Electronic Commerce Competence Center in Vienna, and as a visiting professor at the University of Texas at San Antonio.
Antti Toppila, M.Sc., is a doctoral student in the Systems Analysis Laboratory at the Aalto University School of Science. His research interests are in analysing the value of information in portfolio selection problems, modelling uncertainties with incomplete information, and applications of efficiency analysis to risk management and portfolio selection problems. He obtained his M.Sc. degree from the Helsinki University of Technology in 2009 and is currently working on his doctoral dissertation.
Rudolf Vetschera is a professor of organization and planning at the School of Business, Economics and Statistics, University of Vienna, Austria. He holds a Ph.D. in economics and social sciences from the University of Vienna. Before his current position, he was a full professor of Business Administration at the University of Konstanz, Germany. He has published three books and more than 80 papers in refereed journals and edited volumes. His main research area lies at the intersection of organization, decision theory, and information systems, in particular negotiations, decisions under incomplete information, and the impact of information technology on decision making and organizations.
Detlof von Winterfeldt is the director of the International Institute for Applied Systems Analysis (IIASA) in Laxenburg, Austria. He is on leave from the University of Southern California (USC), where he is a Professor of Industrial and Systems Engineering and a Professor of Public Policy and Management. Concurrently with his term at IIASA, he is visiting the London School of Economics and Political Science as a Centennial Professor in the Management Science Group of the Department of Management. For the past 30 years, he has been active in teaching, research, university administration, and consulting. He has taught courses in statistics, decision analysis, risk analysis, systems analysis, research design, and behavioural decision research. His research interests are in the foundation
and practice of decision and risk analysis as applied to the areas of technology development, environmental risks, natural hazards, and terrorism. In 2000, he received the Ramsey Medal for distinguished contributions to decision analysis from the Decision Analysis Society of INFORMS. In 2009, he received the Gold Medal from the International Society for Multi-criteria Decision Making for advancing the field.
Jun Zheng is currently a Ph.D. candidate under the supervision of Professor Vincent Mousseau in the Laboratoire Génie Industriel (LGI) at École Centrale Paris (ECP), France. Her research interests focus mainly on preference elicitation and preference modelling in multiple criteria decision aid. The objective of her Ph.D. thesis is to implement elicitation tools that support the multiple criteria decision aid process and improve the solution of a large class of problems in operations management. She is financially supported by the State Scholarship Fund of the China Scholarship Council.
Index
A
Absolute deviation (AV), 82, 95
Absolute number of reversals, 198
Across criteria weight, 365, 370–372
Across criteria weights, 59
Acute vs. preventive interventions, 377–378
Admissible portfolios, 189, 193, 198, 203–206
Adversaries, 334–335, 345
AHP. See Analytic hierarchy process
Alternatives, 30, 32, 34–37, 39, 42–43, 47, 48
Analysis of variance (ANOVA), 137, 139–144
Analytical process, 132, 133, 135, 136, 139, 144, 145
Analytic hierarchy process (AHP), 10, 338
Anchoring argument, 156–158, 162
ANOVA. See Analysis of variance
Arbitration, 177, 179–180
Aspiration level, 206
AV. See Absolute deviation

B
Balance, 36, 39, 139, 145, 146
Balance constraints, 214, 222, 231, 232
BBP. See Breakeven buying price
BCG matrix, 10
Behavioral/Behavioural research, 22, 150, 155
Benchmarks, 136
Benefit-cost curve, 61, 373
Biases, 149, 150, 152–163
Borderline project, 244
Bounding, 34, 35, 39
Breakeven buying price (BBP), 80, 84, 85, 87, 88, 103
Breakeven selling price (BSP), 80, 84–88, 102–104
Budget, 34, 35, 37, 39, 42, 43, 47
Budget constraint, 82, 83, 93–94, 99, 151, 163, 243

C
CAGR. See Combined average growth rate
Cannibalization effects, 243
Capital asset pricing model (CAPM), 81–83, 96, 102
Capital budgeting, 7, 8, 20, 243, 256
Category limits, 225, 226
Causal maps, 264
CCA. See Contingent claims analysis
CI. See Core index
Class solution, 65–69, 71
Clinical proof of concept (CPC), 295, 308, 329
Coefficient of variation (CV), 136, 137, 139–144
Cognitive factors, 150
Collaborative decision analysis, 168, 171, 177, 182
Combined average growth rate (CAGR), 304, 319, 320
Commercialization phase, 244
Commercial potential, 132–145
Commercial scenario, 136–145
Compensatory strategy, 192
Concave piecewise linear additive form, 117
Concordance, 225, 226
Consensus-oriented approach, 244
Consistency check, 58
Constraint logic programming, 178
Constructive view of preference, 108
Contingent claims analysis (CCA), 86, 87, 89, 101, 102
Contingent portfolio programming (CPP), 81, 90, 101
Convergence, 188, 203, 204, 206, 207
Core index (CI), 250
Core project, 244
Cost-benefit analysis, 11
Cost of analysis, 31, 42, 46–48
Cost reduction, 296
CPC. See Clinical proof of concept
CPP. See Contingent portfolio programming
Criterion function, 109, 110, 115
Criterion weight, 224, 225, 227
Critical project, 268
Culture, 42
CV. See Coefficient of variation

D
DAAG. See Decision analysis affinity group
DALY. See Disability-adjusted life year
Data envelopment analysis (DEA), 338, 340
Decision aid, 9
Decisional conflict, 192, 193, 197, 199, 201, 202, 206
Decision analysis, 3–24
Decision analysis affinity group (DAAG), 12
Decision conference, 9, 11, 17, 18, 55, 59–61, 71–75, 347
Decision confidence, 191, 193
Decision consistency constraint, 93, 94, 100
Decision explorer, 264, 269
Decision hierarchy, 34
Decision maker (DM), 214, 216, 218–223, 226–229, 231–234
Decision-making style, 193, 194, 202
Decision portfolio, 29–50
Decision quality, 29–50, 132, 133
Decision tree, 9, 11, 13, 339, 345, 346, 353, 355
Demonstrable benefits argument, 156, 159–162
Dialog decision process, 347, 354
Disability-adjusted life year (DALY), 361, 379
Disability-free life expectancy, 375
Discounted cash flow analysis, 79
Discovery-driven planning, 175
Disease modelling, 375
Dissensus-oriented approach, 244
DM. See Decision maker
Drug development, 132–145
Drug development stages, 133–140, 142–145

E
EDC. See Expected development cost
EDR. See Expected downside risk
Efficient frontier, 315
EFQM. See European Foundation Quality Management
EIRR. See Expected internal rate of return
Electre Tri, 214, 216, 223–228, 232, 233, 239
Empirical data, 132, 146
ENPV. See Expected net present value
Enterprise information, 133, 136–140
Equalisation argument, 156–157, 162
Equity (software), 58
European Foundation Quality Management (EFQM), 170
EUT. See Expected utility theory
Expected development cost (EDC), 295, 300, 316
Expected downside risk (EDR), 96
Expected internal rate of return (EIRR), 295, 306, 308–310
Expected net present value (ENPV), 10, 295–302, 308, 310, 316, 323
Expected sales, 246, 248–253
Expected utility theory (EUT), 82, 86
Expected value maximization, 82, 83
Exterior project, 244

F
Facilitated modeling, 264
Facilitated PDA, 276
Failure, 38
Financial portfolio optimization, 6–7
First-of-class solution, 67, 71
Flaw of averages, 145
Formal framework, 150–152
Frame, 30–35, 48, 49
Fundamental objectives, 366
Funding level, 243, 248, 251, 252

G
GDS. See Group decision support
Generalized cost effectiveness analysis (GCEA), 361–363, 374, 378
Go/No-Go decision, 292
Good death, the, 378
Graph theory, 338
Group constraints, 214, 228–229, 232
Group decision support (GDS), 168–171, 183
Group Explorer, 264, 269
Group level criteria, 220

H
Haircut, 277
Health economics, 361, 363, 376
Health economists, 360, 361
Health inequalities, 359, 367, 368, 372, 375–376
Heatmap, 188–195, 199, 202–207
Historical data, 133, 135, 136
Horizon, 54–56, 69, 71, 73
Humanitarian vs. realist perspective, 360

I
Implementation, 30, 32, 36, 38, 39
Imprecise specification of preferences, 216
Incomplete information, 17, 21–22
Incrementalism, 261–263
Indifference threshold, 223
Influence diagram, 9, 339
Information, 29, 30, 32, 34, 36–43, 46, 48
Information asymmetry, 244
Innovation management, 167–183, 244
Innovation management platform, 169–170
Institutional drivers, 146
Interacting objectives, 37
Interaction mechanism, 188, 196
Interaction variable, 253
Interactive heatmap, 188, 189, 192
Interactive parallel coordinate plot, 188, 189, 192
Interactive visualization, 187–207
Interim progress review (IPR), 346
Internal rate of return (IRR), 175
Inverse optimization problem, 85
Isopreference line, 111, 112

J
Justifiability, 156–160

K
Knapsack problem, 109, 178, 268, 270, 271, 279

L
Levels of decision quality, 38–39
Life expectancy, 375
Linear additive model, 111, 112
Local government, 259–279
Local government budgetary process, 260
Logical synthesis, 37–39
Long range goals, 303–304
Lower semi-absolute deviation (LSAD), 95, 96, 100

M
Majority level, 224, 226, 227
Managerial flexibility, 79, 81, 89, 101
Marginal analysis, 7, 18
Marginal value, 151
Market environment, 137, 138
MAUT. See Multi-attribute utility theory
MAVT. See Multi-attribute value theory
MCDA. See Multicriteria decision analysis
Mean objectives, 366
Mean-risk preference models, 82, 83
Military applications, 333–355
Minimum internal rate of return (MIRR), 295
Minimum requirement argument, 156, 158–159, 161, 162
MODA. See Multiobjective decision analysis
Momentum portfolio, 30
Monte Carlo simulation, 251, 254
Motivational factors, 150
Multi-attribute utility theory (MAUT), 9, 10, 353
Multi-attribute value theory (MAVT), 9, 10
Multicriteria decision analysis (MCDA), 54, 55, 57, 58, 71–74
Multicriteria portfolio decision analysis, 361, 363–367, 378
Multicriteria sorting, 213–239
Multiobjective decision analysis (MODA), 313, 325
Multiobjective programme, 109
Multiperiod project valuation, 90–101
Multiple objective value analysis, 337–339

N
Natural monopoly, 246
NDA. See New drug application
Negotiation, 177, 179, 180, 182, 183
Net present value (NPV), 80, 135, 137, 138, 141–144, 174, 175, 177, 291, 295, 296, 298, 299, 301, 306, 308, 309, 319, 323, 331
Network markets, 247
New drug application (NDA), 286–287
New Public Management, 262
Nominal group technique, 172
Non-compensatory strategy, 191, 192
Nondiscordance, 225, 226
Nondominated portfolio, 244
NPV. See Net present value

O
Opportunity selling price (OSP), 88, 89
Optimization, 29, 33, 37, 39, 40, 49
Options pricing, 79, 80
Organizational learning, 23
Organizational objectives, 335
OSP. See Opportunity selling price
Outranking approaches, 9
Outranking methods, 216

P
Parallel coordinate plot, 188, 189, 192, 193, 206
Pareto analysis, 338, 340
Pareto frontier, 189
Partition dependence, 152–154, 161
Payback period, 175
PDA. See Portfolio decision analysis
PDQ. See Portfolio decision quality
Peak year revenue, 300
Perceived accuracy, 193, 197, 199, 201, 206
Perceived decisional conflict, 197
Perceived ease of use, 194, 197, 199
Perceived effort, 190, 193, 197, 199, 202
Perceived usefulness, 190, 193, 197, 199, 201, 206
Performance objectives, 135
Pessimistic assignment rule, 223
Petrochemicals, 334–336
Pharmaceutical industry, 134
Pharmaceuticals, 293, 294, 328, 334–336
PI. See Productivity index
POC. See Proof of concept
Point estimate, 145
Portfolio balance, 301–302, 317–319, 330
Portfolio decision analysis (PDA), 3–24
Portfolio decision quality (PDQ), 290, 291
Portfolio decision review, 336
Portfolio information, 139, 140
Portfolio management, 131–147, 285–288, 292, 294–295, 299, 303, 307, 308, 317, 321, 325, 327–330
Portfolio tradeoff, 315
Postsettlement, 180
Potentially optimal portfolio, 250, 251
Preference disaggregation, 115, 117
Preference elicitation, 108, 206
Preference intensity, 150
Preference programming, 243, 256
Preference threshold, 223
Primary Care Trust, 360, 364–366, 369, 375–379
Prioritisation triangle, 63
Priority index, 57
Probability of technical and regulatory success (PTRS), 296, 297, 300, 327
Problem complexity, 193, 194, 196, 202, 203, 205, 206
Problem structuring methodologies, 215
Productivity index (PI), 295, 298, 300, 306, 308, 310
Product portfolio management, 12
Product profile, 137, 138
Program budgeting, 7, 18
Program budgeting and marginal analysis (PBMA), 362, 363, 374, 378
Project dependencies, 344
Project filtering, 174–176
Project interactions, 8, 245, 354
Project level information, 132, 133, 135–137
Project selection, 7–8, 10, 34, 176–181
Project valuation, 79, 81, 90–101
Proof of concept (POC), 295, 308, 323, 325, 329
Psychometric constructs, 198
PTRS. See Probability of technical and regulatory success
Public health, 360, 362, 364, 365, 374, 375
Public value, 260, 273–275, 278, 279

Q
Quality-adjusted life year (QALY), 360, 361, 368
Qualitative value model development, 349–351

R
Rank reversal, 10, 21
Real options, 81, 89, 101
Required capacity, 286
Research and development (R&D), 241–256, 294–299
Research and development (R&D) investment, 132–134
Resource allocation, 149–153, 156–161, 163, 259–279
Resource constraints, 109, 110
Resource function, 110, 115
Risk constraint, 80, 84, 90, 93–96, 100
Risk measures, 7
Risk reduction, 295, 329
Risk-return matrix, 11
Robust portfolio modeling (RPM), 244
Royal Navy, 53–75
R–W–W method, 175

S
Scope insensitivity, 152, 154–155, 161
Scoping, 35, 39
SD. See Standard deviation
SDG. See Strategic decisions group
SDG grid, 37
Semivariance (SV), 95, 96
Sequential additivity, 87–88, 101
Sequential consistency, 86
SEUT. See Subjective expected utility theory
Sharing knowledge and information towards economic success (SKITES), 168, 171–183
Small and medium-sized enterprise (SME), 170, 183
Sociotechnical, 56, 72, 264
Staffing problem, 213–215, 232–233
Stage-gate process, 292–294, 329
Stakeholder involvement, 22
Stakeholders, 263, 264, 271–273, 276, 278
Standard deviation (SD), 136, 137, 139–144
Standardization, 241–256
State tree, 90, 91, 96, 97
Statistical analysis, 141–144
Statistical decision theory, 8
Status quo bias, 152, 155
Strategic buckets, 34
Strategic choice, 264
Strategic decisions group (SDG), 10, 12
Strategic fit, 8
Strategic partnership, 336
Student classification, 214
Student selection, 213–239
Subjective expected utility theory (SEUT), 8
Subjective probability, 8
Suboptimisation, 152–153, 160
Substitute technologies, 243, 246
Sump, 56, 59, 62, 65, 72
SV. See Semivariance
Swing weighting, 344, 351–353
Synergy, 45–48
Synergy effects, 243
System complexity, 336

T
TA. See Therapeutic area
TAM. See Technology acceptance model
Target product profile (TPP), 291, 292
Task complexity, 193
Technical success (probability of), 132, 133, 135–137
Technology acceptance model (TAM), 194, 197
Technology management, 247
Therapeutic area (TA), 133, 135–142, 144, 146, 287, 302–303, 308–310, 314, 318, 326, 327
Threshold, 41, 42
Timing, 34, 38
Tornado diagram, 9
TPP. See Target product profile
Triage, 42
TRIZ, 173
Type 45, 53–75

U
Uncertainty, 132, 134, 135, 145, 146
Unmet medical need (UMN), 301, 315
Unrelated future costs, 376–377
User acceptance, 190
UTA, 117
Utility function, 8, 11

V
Value added by analysis, 41, 42
Value/cost ratio, 268, 276
Value-focused thinking, 35, 173, 337, 338, 340, 342, 348, 354, 366
Value for money triangle, 372, 373
Value function, 57, 71
Value of information, 29, 30, 48
Value scales, 370
V-efficient portfolios, 110
V-efficient supported portfolios, 110, 124
Veto evaluation, 224–226
Visual complexity, 192, 193
V-nondominated points, 110
V-nondominated supported points, 110
Von Neumann-Morgenstern utility function, 151
Voting, 168–170, 173, 177, 180, 182, 183

W
WACC. See Weighted average cost of capital
Wealth level, 82, 90, 93, 95, 100
Weapon systems, 334
Web 2.0, 168, 169, 182
Weighted average cost of capital (WACC), 295
Widely adopted technology, 247, 248, 250
Within criteria weight, 365, 371, 372
Within criterion weights, 59, 65
Work breakdown structure, 214
World Health Organization, 361, 374

Z
Zionts-Wallenius procedure, 114