Risk Management in Financial Institutions Formulating Value Propositions
Jürgen van Grinsven helps financial institutions solve their complex risk management problems. In his opinion, solutions for risk management need to be effective and efficient, and lead to satisfaction when implemented in the business. This book helps (senior) risk managers formulate value propositions. Why? Paramount to their success is the ability to identify, assess, formulate and communicate value propositions to their stakeholders.
Editor in Chief
Dr. ing. Jürgen H.M. van Grinsven
Editorial board
Michael Bozanic MBA
Drs. Philip Gardiner
Joop Rabou RA RE
Drs. Gert Jan Sikking
Advisors
Prof. dr. Fred de Koning RA RE
Drs. Patrick Oliemeulen CAIA FRM RV
Cover design and Photo
Drs. Wilfred Geerlings (wilfredgeerlings.com)
Assistant
Ir. Henk de Vries
Guidance and questions before you read this book

Are you able to formulate and communicate value propositions to your stakeholders?

Life-cycle needs are continuously evolving: is your organization ready for tailored client needs?

The foundations of risk management in financial services have been in place for years. However, as concepts, tools, and approaches are changing, is your organization benefitting from these developments?

Risk managers are scrambling to understand the intricacies of Basel II and Solvency II. In their desire to understand market, credit, and insurance risks, have your risk managers relegated operational risk to an afterthought?

The most important job of senior risk managers today is to identify, formulate, assess, deliver and communicate value propositions to their stakeholders. How do you formulate yours?

1. Introduction to risk management
2. Risks in client relationships
3. Risks in the building blocks
4. Risks in running the business
5. Formulating value propositions
© Copyright 2010 by Dr. Jürgen H.M. van Grinsven and IOS Press. All rights reserved. ISBN 978-1-60750-087-2 (print). ISBN 978-1-60750-475-7 (online). First edition. Editor in Chief: Dr. ing. Jürgen H.M. van Grinsven Cover design and photo: Drs. Wilfred Geerlings Assistant: Ir. Henk de Vries Dr. ing. Jürgen H.M. van Grinsven www.jurgenvangrinsven.com Tel: +31.6.15.586.586
Published by IOS Press under the imprint Delft University Press. Publisher: IOS Press BV, Nieuwe Hemweg 6b, 1013 BG Amsterdam, The Netherlands. Tel: +31-20-688 33 55. Fax: +31-20-687 00 19. Email: [email protected]. www.iospress.nl www.dupress.nl

Legal notice: The publisher is not responsible for the use which might be made of the following information. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the written permission of the author. Printed in The Netherlands.
Key words: Risk management, financial institutions, value propositions, financial risk management, nonfinancial risk management, operational risk, integrity, compliance, credit risk, key risk indicators, data scaling, modeling operational risk, losses, hedge funds, Basel II, Solvency II.
Table of contents

1. INTRODUCTION TO RISK MANAGEMENT ... 1
1.1. Risk management in financial institutions ... 2
1.1.1. Importance of risk management ... 2
1.1.2. Implementation of risk management ... 3
1.1.3. Summary ... 5
1.1.4. References ... 6
2. RISKS IN CLIENT RELATIONSHIPS ... 7
2.1. Client life cycle management ... 8
2.1.1. Know your client ... 8
2.1.2. The process ... 10
2.1.3. Summary ... 12
2.1.4. References ... 13
2.2. Integrity management after the credit crunch ... 14
2.2.1. Integrity and ethics ... 14
2.2.2. Effectively managing integrity norms ... 15
2.2.3. Summary ... 19
2.2.4. References ... 20
2.3. Compliance in control ... 22
2.3.1. Regulatory pressure ... 22
2.3.2. Dealing with complexity: four best practices ... 23
2.3.3. Structured methodology ... 25
2.3.4. Summary ... 26
2.3.5. References ... 27
3. RISKS IN THE BUILDING BLOCKS ... 29
3.1. The basics of credit risk management ... 30
3.1.1. Credit risk defined ... 30
3.1.2. Problems in managing credit risk ... 31
3.1.3. Elements of credit risk management ... 32
3.1.4. Assessment of credit worthiness ... 33
3.1.5. Monitoring the condition of individual credits ... 34
3.1.6. Credit risk modeling ... 34
3.1.7. Summary ... 36
3.1.8. References ... 36
3.2. New developments in measuring credit risk ... 38
3.2.1. Economic concepts ... 38
3.2.2. Basel II: improving credit risk measurement ... 40
3.2.3. Summary ... 42
3.2.4. References ... 43
3.3. Risk indicators ... 44
3.3.1. Risk indicators defined ... 44
3.3.2. Risk indicator characteristics ... 45
3.3.3. Risk indicator definition process ... 48
3.3.4. Summary ... 51
3.3.5. References ... 51
3.4. Combining probability distributions ... 52
3.4.1. Loss distribution, operational risk and expert judgment ... 53
3.4.2. Combining probability distributions ... 54
3.4.3. Improved effectiveness, efficiency and satisfaction ... 56
3.4.4. Summary ... 57
3.4.5. References ... 58
3.5. Data scaling for modeling operational risk ... 59
3.5.1. Loss distribution and statistics ... 60
3.5.2. Experimental results ... 62
3.5.3. Calculating the value-at-operational risk ... 63
3.5.4. Summary ... 65
3.5.5. References ... 66
3.6. Characteristics of diversified hedge fund portfolios ... 67
3.6.1. Hedge funds ... 67
3.6.2. Description and performances hedge fund strategies ... 69
3.6.3. Asymmetric hedge fund returns ... 71
3.6.4. Optimal hedge fund portfolios ... 72
3.6.5. Results ... 73
3.6.6. Summary ... 77
3.6.7. References ... 78
4. RISKS IN RUNNING THE BUSINESS ... 81
4.1. Putting operational risk into context ... 82
4.1.1. Operational risk management and scenario planning ... 82
4.1.2. Scenario planning to support operational risk management ... 83
4.1.3. Scenario planning to support integrated risk management ... 85
4.1.4. Summary ... 87
4.1.5. References ... 87
4.2. Solvency II: dealing with operational risk ... 89
4.2.1. Solvency II framework ... 89
4.2.2. SII attention points ... 90
4.2.3. SII expected benefits ... 91
4.2.4. Operational risk ... 92
4.2.5. Examples of large incidents and failures ... 93
4.2.6. Difficulties and challenges in insurers' operational risk management ... 94
4.2.7. Summary ... 95
4.2.8. References ... 96
4.3. Operational risk management as shared business process ... 97
4.3.1. Shared service center and operational risk ... 97
4.3.2. Business case: a large insurance firm ... 100
4.3.3. From drawing board to implementation ... 101
4.3.4. Make conscious choices ... 101
4.3.5. Summary ... 102
4.3.6. References ... 103
4.4. Controlling operational risk in workflow management ... 104
4.4.1. Workflow management ... 104
4.4.2. Multiple expert's judgment approach ... 105
4.4.3. Business case: semi-autonomous organization ... 107
4.4.4. Summary ... 108
4.4.5. References ... 108
4.5. A comprehensive approach to control operational risk ... 110
4.5.1. Risk capital ... 110
4.5.2. The approach ... 111
4.5.3. Discussion ... 114
4.5.4. Summary ... 115
4.5.5. References ... 115
4.6. Operational losses: much more than a tail only ... 116
4.6.1. Basel II compliance ... 116
4.6.2. Loss management for business as usual ... 116
4.6.3. Summary ... 121
4.6.4. References ... 121
4.7. Improving operational risk management ... 122
4.7.1. Difficulties and challenges in operational risk management ... 122
4.7.2. A way of working for scenario analysis ... 123
4.7.3. Summary ... 127
4.7.4. References ... 127
4.8. Group support systems for operational risk management ... 129
4.8.1. Expert judgment and group support systems ... 129
4.8.2. Business case: Dutch financial institution ... 130
4.8.3. Summary ... 132
4.8.4. References ... 132
5. FORMULATING VALUE PROPOSITIONS ... 135
5.1. Definition of a value proposition ... 135
5.2. The client 'drives' the value proposition ... 136
5.3. Role of the senior risk manager ... 137
5.4. Process of formulating value propositions ... 138
5.4.1. Understanding the benefits for the stakeholders ... 139
5.4.2. Formulate the value proposition ... 139
5.4.3. Deliver the value proposition ... 141
5.5. Summary ... 142
5.6. References ... 142
AUTHORS INDEX ... 143
CURRICULUM VITAE ... 145
Preface and Acknowledgements

Risk management has been in place in financial institutions for hundreds of years. However, nowadays many senior risk managers are confronted with questions about the added value of risk management. Moreover, many of them have the feeling they have to 'defend' their work in the institution.
For senior risk managers it becomes increasingly important to formulate value propositions. With this book we aim to help them to identify, assess, formulate and communicate value propositions to their stakeholders.
This book is the result of a comprehensive compilation of blind-peer-reviewed chapters. The chapters are written by principal risk management researchers and by risk management practitioners from leading financial institutions. The entire writing process took us approximately two years (from the 'call for chapters' to arriving at the research results, relevant content, tables, figures, illustrations and cover design). The chapters are written in such a way that they help the senior risk manager gain more insight into the concepts, methods and tools of risk management. Moreover, we are confident that this supports the senior risk manager in formulating value propositions.

Many people and several institutions supported me in achieving the content required for this book. Special thanks to Ir. Henk de Vries, Drs. Wilfred Geerlings, Drs. Philip Gardiner (NIBC), Michael Bozanic MBA (Fortis Insurance), Joop Rabou RA RE (Rabobank), Drs. Gert Jan Sikking (PGGM), Prof. dr. Fred de Koning RA RE (Nyenrode) and Drs. Patrick Oliemeulen CAIA FRM RV (Insignia) for their valuable insights and texts, for their friendship, and for the constructive discussions we had. Further, I would like to thank the contributors from ABN Amro and ING. Last, but certainly not least, I want to thank my family for their love and support.
Jürgen van Grinsven, Hedel, January 2010
List of abbreviations

BIS II     New Capital Accord
BL         Business Line
BU         Business Unit
CDD        Customer Due Diligence
DNB        De Nederlandsche Bank
EAD        Exposure At Default
EC         Economic Capital
FI         Financial Institution
GSS        Group Support System
IS         Information System(s)
KRI        Key Risk Indicator
KYC        Know Your Client
LGD        Loss Given Default
Loss data  Recorded losses using a number of properties
MDR        Mean Downside Risk
MEEA       Multiple Expert Elicitation and Assessment
MV         Mean Variance
OR         Operational Risk
ORM        Operational Risk Management
PD         Probability of Default
SPB        Shared Business Process
VaR        Value at Risk
WfM        Workflow Management
#          Number
Are you able to formulate and communicate value propositions to your stakeholders?
M. Bozanic
1. Introduction to risk management
Michael Bozanic MBA
Dr. ing. Jürgen van Grinsven
Risk management has been in place in financial institutions for hundreds of years. The five major decision variables (liquidity, credit, interest rate, cost, capital) have been considered in one degree or another for quite some time. However, our approach to managing these interactions has changed considerably. New approaches and techniques have been developed based on technological developments, greater access to data, and a deeper understanding of the relationships between the variables. These developments have come at a good time, as stakeholders such as clients, employees, suppliers and regulators are becoming more informed and more demanding in terms of transparency and accountability.
Risk managers are under pressure to compete in highly competitive markets while solidly honoring their obligations and navigating their businesses safely into the future. Paramount to their success is the ability to identify, assess, formulate, and communicate value propositions to those same stakeholders. It is with this objective in mind, to assist senior risk managers in composing value propositions, that this book exists. There are many value drivers for the success of a financial institution. When reading through the chapters in this book, we are confident you will find many insightful ideas, concepts, and methods to help shape or reshape your value propositions. Some ideas you may find unique, others rather complementary to your current approach to constructing value propositions.
The primary objective of this book is to support senior risk managers in further developing and effectively communicating viable, coherent, and sound value propositions to their stakeholders. We define a value proposition as a clear, concise series of realistic statements based on an analysis and quantified review of the benefits, costs, risks and value that can be delivered to stakeholders.
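The definition above has a quantitative core: benefits, costs, and risks that can, at least in part, be scored. As a minimal illustrative sketch of how one statement in such a quantified review might be structured (the field names, the example figures, and the simple expected-net-value formula are assumptions for demonstration, not a method prescribed by this book):

```python
from dataclasses import dataclass

@dataclass
class ValuePropositionStatement:
    """One statement in a value proposition, with a quantified review."""
    statement: str
    benefit: float           # expected benefit to the stakeholder (e.g. in k EUR)
    cost: float              # cost of delivering that benefit (same unit)
    risk_probability: float  # chance the benefit fails to materialize, 0..1

    def expected_net_value(self) -> float:
        # Benefit weighted by its chance of materializing, minus cost.
        return self.benefit * (1.0 - self.risk_probability) - self.cost

vp = ValuePropositionStatement(
    statement="Scenario analysis reduces expected operational losses",
    benefit=500.0, cost=120.0, risk_probability=0.2,
)
print(vp.expected_net_value())  # 280.0
```

Even a back-of-the-envelope structure like this forces the realism the definition asks for: each statement must name its benefit, its cost, and the risk that it does not materialize.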
Chapter 1: introduction to risk management
1.1. Risk management in financial institutions
Dr. ing. Jürgen van Grinsven
Prof. dr. Ben Ale
Drs. Marc Leipoldt
Recent scandals and losses indicate that pro-active risk management is not yet a common practice in Financial Institutions (FIs). The corrective impulse from risk-related losses is only short-lived. Reports as an instrument to control risk often turn into documents to comply with internal rules and external regulation. The effects of this are visible to the conscientious risk manager: the next incident is just around the corner. In retrospect, this incident will show clear resemblances to previous risks. However, with good instruments we can learn from previous occurrences. The question is: do we want to invest in managing risks, or do we assume that 'it will not happen to us'? In this chapter we first discuss the importance of risk management in financial institutions. Then, we present an overview of publicized losses and relate them to managerial aspects of risk management in financial institutions. Finally, we discuss implementation aspects and address the issues that increase the likelihood of a successful implementation of risk management in the business.
1.1.1. Importance of risk management
Risk management supports decision-makers in making informed decisions based on a systematic assessment of risks in a financial institution and its context (Cumming & Hirtle, 2001; Grinsven, Janssen & Houtzager, 2005). Despite lessons from the past, risk and risk-taking behavior seem to shift towards the operational level. In 1995 the unauthorized trading of a bank employee was one of the main causes of the collapse of Barings Bank. This was possible due to insufficient segregation of duties and because management did not fully understand what happened in Singapore. In 2003 SMI management suffered a loss of $1.5 million. The cause of this loss was remarkably common: a secretary committed fraud with cheques. During the investigation it became clear that insufficient segregation of duties and wrongly designed procedures were asking for trouble. More recently, in 2009 the DSB bank collapsed, and in 2008 several insurance firms suffered losses. These examples are not isolated incidents; they materialize with some regularity in both large and small financial institutions. Although we would expect that financial institutions have learned from these debacles, publicized losses
indicate that, in general, too little has been learned from the past; see Table 1. Noticing the high-profile events, it is not surprising that the financial service sector is increasingly aware of the commercial significance of risk management. Three reasons might underpin this: the subprime crisis, the New Basel Accord, and the fact that directors and managers increasingly face personal liabilities (Young, Blacker et al., 1999; Grinsven, 2009).

Year  Organization            Loss
2009  DSB Bank                Collapse of the DSB bank (total loss to be confirmed)
2008  Delta Lloyd             300 million EUR by dissemination of incorrect information
2008  Nationale Nederlanden   365 million EUR by dissemination of incorrect information
2008  Fortis ASR              750 million EUR by dissemination of incorrect information
2008  Société Générale        4.9 billion EUR by suspicion of deceptive/fraudulent transactions
2003  SMI management          1.5 million USD by fraud with cheques
2002  Allied Irish Bank       750 million USD by unauthorized trading
2002  DBS bank                750 million USD by unauthorized trading
2002  WorldCom                7.6 billion USD by inflated profits and concealed losses
2002  Bank of America         12.6 million USD by illegal transactions/fraud
2001  Enron                   250 billion USD
2001  Asia Pacific Breweries  116 million USD by cheating banks
2000  Semblog                 18.5 million USD by false invoices, bogus accounting entries
1995  Barings Bank            1.4 billion USD by unauthorized trading
1994  Metallgesellschaft      1.5 billion on oil futures

Table 1: publicized losses (Grinsven, 2007; Grinsven, 2008)
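The recurring pattern in Table 1, with similar causes reappearing across years, is exactly what a structured loss register makes visible, and is one of the 'good instruments' for learning from previous occurrences mentioned above. As a minimal sketch, assuming a simple record layout (the cause labels are simplified for grouping and the function is illustrative, not an instrument from the book), a few of the dollar-denominated rows from Table 1 can be aggregated by cause:

```python
from collections import defaultdict

# Illustrative loss register: (year, organization, loss in millions USD, cause).
# Figures follow Table 1; the cause labels are simplified for grouping.
LOSS_EVENTS = [
    (1995, "Barings Bank", 1400.0, "unauthorized trading"),
    (2002, "Allied Irish Bank", 750.0, "unauthorized trading"),
    (2002, "DBS bank", 750.0, "unauthorized trading"),
    (2002, "Bank of America", 12.6, "illegal transactions/fraud"),
    (2003, "SMI management", 1.5, "fraud with cheques"),
]

def total_loss_by_cause(events):
    """Sum recorded losses per cause category."""
    totals = defaultdict(float)
    for _year, _org, loss, cause in events:
        totals[cause] += loss
    return dict(totals)

print(total_loss_by_cause(LOSS_EVENTS))
```

Even this toy aggregation shows unauthorized trading alone accounting for billions across a decade, which is the kind of evidence a risk manager can put in front of (top) management.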
To summarize: by the end of the 1990s, many financial institutions increasingly focused their efforts on risk management. This was mainly motivated by costly catastrophes such as Barings and Sumitomo (see above) and by the New Basel Accord (Grinsven, 2009).
1.1.2. Implementation of risk management
When financial institutions decide that active management of risk is important, they should consider the best way to implement this in their daily business. Both the positive and negative effects of implementation have to be taken into account, because frequently goals are not achieved or budgets are wasted on inappropriate projects. Further, active management of risk can fail for several reasons: risk losses are not reduced, resulting in higher costs, and/or savings from synergy with other projects (e.g. compliance) are lower than expected, for example due to high coordination costs. Moreover, the internal service delivery can be disappointing: the transition to the new situation does not go well, or the budget is too low to really have a positive effect.
For risk management to function properly, one should think about the desired results in advance. Deliberate choices have to be made by the management. Risk managers should formulate value propositions: the planning and control of risk management must comply with specific requirements enforced by internal rules and external regulation. This can lead to a shift of influence in the internal organization. In practice, there is also a tendency among business managers to keep information about risks superficial, including information about potential losses. This burdens the risk manager with the problem: how can I make good risk management and its consequences visible to (top) management?

Way of thinking
A way of thinking helps to capitalize on new chances in the market and to complete a successful integration in the business. The way of thinking should reflect the senior risk manager's view on risk management, provide (risk) managers in the business with the underlying structure, set the overall tone, delineate how the risk manager thinks the major decision variables (liquidity, credit, interest rate, cost, capital) should be interpreted, and provide design guidelines on which risk management is based. See Grinsven (2009) for an elaboration of the way of thinking.

Design guidelines
The way of thinking incorporates at least four important design guidelines. These guidelines 'guide' the way of working in a FI. The starting point for the design guidelines is that they give (top) management a sufficient, reliable, robust and accurate assessment of the risks and the mitigation measures that have to be taken.
The first guideline states that the approach must satisfy the prevailing laws, legislation and risk management standards.
The second guideline states that the approach must be based on procedural rationality: decision makers (managers) must be able to make decisions as rationally as possible. This can only be accomplished when there is a sharply defined risk management process to which different parties can commit themselves. Transparency in financial risk is an example of this, since decision makers cannot rely only on the 'math' behind risks.
The third guideline states that the approach must guarantee a shared view of the results so that control measures can be implemented effectively in the business. One must realize that a strong basis of support is needed among the employees working in the financial institution.
The fourth guideline states that the approach must be flexible in use and able to be put into daily practice.
Way of working
The way of working uses the guidelines as 'guidance'. The way of working describes the responsibilities, process, systems, people and activities that need to be executed for a successful implementation in the business. In this chapter we present only several aspects, because it would lead too far to elaborate on all of them. For an elaborate overview and detailed description we refer to Grinsven (2009), or see chapter 4.7 in this book for a more detailed example.

Responsibility and accountability
Too often, responsibility is spread over multiple persons so that no individual can be held responsible. In most cases there is actually only one person who is responsible. Therefore, that person should be held accountable for the decision which is taken. Who is responsible is accountable. This is incorporated in the governance structure of the organization. The three lines of defense have to be designed carefully. Realization is the key word.

People, processes, systems
It must be clear what risk management exactly entails in each business unit: which processes and systems are being used, which activities are carried out, which systems are being used at what time by which employees, and which control measures are available. The risk manager responsible for the integration of various risk management functions must have the knowledge to do so, be able to collaborate with other business functions, and not be scared to criticize existing organizational patterns.
1.1.3. Summary
Financial institutions are increasingly focusing on risk management. However, for risk management to function properly, one should think about the desired results in advance. This can be improved by (1) a way of thinking that includes a number of guidelines that help to formulate value propositions, and (2) a clear way of working, including the governance, responsibilities, accountabilities, people, processes and systems. These aspects help to formulate the added value of risk management and support a systematic approach to risk management that capitalizes on new opportunities in the market.
Chapter 1: introduction to risk management
1.1.4. References
Cumming, C. & Hirtle, B. (2001). The challenges of risk management in diversified financial companies. Federal Reserve Bank of New York Economic Policy Review.
Grinsven, J.H.M. van, Janssen, M. & Houtzager, M. (2005). Operationeel risico management als shared business process. IT Monitor, August. (In Dutch).
Grinsven, J.H.M. van, Ale, B. & Leipoldt, M. (2006). Ons overkomt dat niet: Risicomanagement bij financiële instellingen. Finance Incorporated, 6, 19-21. (In Dutch).
Grinsven, J.H.M. van (2008). Ons overkomt dat niet: Integraal risicomanagement & compliance nog niet vanzelfsprekend. Bank en Effectenbedrijf, pp. 4-6. (In Dutch).
Grinsven, J.H.M. van (2009). Improving operational risk management. IOS Press, pp. 240.
Young, B., Blacker, K., Cruz, M., King, J., Lau, D., Quick, J., Smallman, C. & Toft, B. (1999). Understanding operational risk: A consideration of main issues and underlying assumptions. Operational Risk Research Forum.
Life-cycle needs are continuously evolving. Is your organization ready for tailored client needs?
M. Bozanic
2. Risks in client relationships
Joop Rabou RA RE
Dr. ing. Jürgen van Grinsven
Financial Institutions (FI) have come into existence to service the evolving financial needs of clients. Mainly motivated by the credit crunch, market pressure, e-commerce and decentralization, many financial institutions increasingly focus their efforts on their clients. As such, there is a strong need for further developing and strengthening client relations.
As financial institutions increasingly recognize that the quality of client relations is important, they actively implement this focus in their daily business. Both the management of risks (such as integrity, credit, operational and reputation risk) and optimal client advisory, with initiative and foresight, have to be taken into account when aiming at service across the client's full life-cycle.
Nowadays, client relation management goes hand in hand with risk management, preferably incorporated in the strategic policies and operating procedures. Good risk management is embedded in the daily work processes of the institution and can help formulate value propositions in anticipation of client needs, thereby further strengthening client relations.
In this chapter we provide senior risk managers with insight into the risks regarding the development, governance and monitoring of client relationships. With these insights senior risk managers can formulate value propositions for their stakeholders.
Chapter 2: risks in client relationships
2.1. Client life cycle management
Ing. Patrick Abas
In 2001, the Basel Committee on Banking Supervision published the Customer Due Diligence for Banks report. Its aim was to provide a framework for banking supervisors on customer identification and Know Your Client (KYC) to combat the funding of terrorist activities and money laundering. The report was recognized at the international conference of banking supervisors in September 2002 as the agreed Customer Due Diligence (CDD) standard (De Koker, 2006). In its General Guide to Account Opening and Customer Identification, the Basel Committee on Banking Supervision presented further details on customer identification (Basel, 2003).
In May 2003 De Nederlandsche Bank (DNB) and the Netherlands Bankers' Association issued a joint commentary on the report. In 2004 an additional memorandum on customer identification was sent to Dutch financial institutions. Both reports form the framework which financial institutions can use to formulate their own 'know your customer' policies (NVB and DNB, 2004). The purpose of such a policy is to recognize and minimize risks for financial institutions when dealing with clients. This is based on the principle of KYC. Because of this policy, financial institutions can avoid, or at least minimize, involvement in e.g. money laundering, fraud and terrorist financing.
In this chapter we will discuss the translation of the CDD policy into the management of clients by financial institutions. First, we will provide some background information on KYC and the various processes involved. Second, we will discuss the different steps and processes in the management of clients and risks involved.
2.1.1. Know your client
Know Your Client focuses mainly on combating money laundering. The Financial Action Task Force is an intergovernmental body that combats the misuse of the financial system to launder money or finance terrorism. For this purpose it develops and promotes policies at an international level. Furthermore, the task force cooperates with other international bodies to combat money laundering. Money laundering is defined as “the processing of criminal proceeds
in order to disguise their illegal origin” (Basel, 2001; FATF, 2003). Money laundering sanctions are bans imposed by one country on another, based on specific factual findings. The sanctions advise any financial institution to avoid doing business with the sanctioned country. For example: when country A imposes a sanction on country B, no business deals should be carried out in the native currency of A with B. Major sanctions are imposed by the European Union (EU) and by the Office of Foreign Assets Control (OFAC) of the United States of America (USA). These sanctions play an important role in the client risk assessment and require all staff to be aware of the sanctions imposed by the EU and OFAC. The policies should be strictly followed when dealing with clients all over the world. Moreover, all necessary precautions should be taken when dealing with such clients. Figure 1 presents an overview of several processes in the client life cycle which financial institutions can use to manage their clients and assess the risks involved.
[Figure 1 depicts the client life cycle: the Client Acceptance Process (CAP) with CDD/WID checks against a 'don't do business' (DDB) list; the ongoing periodic review process; incident based risk re-assessment; transaction banking as a dynamic black box screening against the EU, OFAC and PEP lists and producing suspicious activity reports (SAR); and the Client Termination Process (CTP) leading to exit. Reputation risk (RR) is the common concern throughout.]

Figure 1: overview client life cycle processes (Basel, 2001)
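The currency rule in the example above (country A sanctions country B, so no deals in A's native currency with B) can be sketched as a simple screening check. The regime names, list contents and currency mappings below are illustrative assumptions, not actual EU or OFAC data.

```python
# Hypothetical sanction screen: a regime that sanctions a country bans
# transactions in its native currency with counterparties in that country.
# All data below is illustrative only.

SANCTIONS = {
    "EU": {"CountryB"},                 # countries sanctioned by the EU
    "OFAC": {"CountryB", "CountryC"},   # countries sanctioned by OFAC
}
NATIVE_CURRENCY = {"EU": "EUR", "OFAC": "USD"}

def violates_sanctions(counterparty_country: str, currency: str) -> list:
    """Return the regimes whose sanction rule this deal would break."""
    hits = []
    for regime, banned in SANCTIONS.items():
        if counterparty_country in banned and currency == NATIVE_CURRENCY[regime]:
            hits.append(regime)
    return hits

# A EUR deal with a counterparty in CountryB breaks the EU rule:
print(violates_sanctions("CountryB", "EUR"))   # ['EU']
print(violates_sanctions("CountryB", "GBP"))   # []
```

In practice such checks run inside the 'dynamic black box' of transaction banking, against the full EU, OFAC and PEP lists plus any institution-specific lists.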
The Client Acceptance Process (CAP) is an ongoing, standard, risk-based approach to CDD, with the prime aim of minimizing the Reputation Risk (RR) of the financial institution when considering whether to take on a new client. A financial institution can conduct a client review or risk re-assessment. This can trigger the exit process when the client relationship is deemed unacceptable as a result of the review or risk re-assessment. However, a decision to
close the client relationship may also be based purely on commercial grounds. The Client Termination Process (CTP) is initiated at the client's own request, either by ending the contract or by the natural death of the client. Periodic review (an ongoing standard process) is separate from risk re-assessment. During a periodic review, clients are scheduled for regular review based on their profile. Client or market events can also trigger more immediate reviews. A client's risk environment has to be assessed in order to monitor their businesses. It helps the bank to decide whether or not to continue the business with the client. It also helps the bank to identify any hidden factors that reveal the client's risk nature, to update its database with recent information, and to protect its reputation and business license in the respective regions. In addition, risk re-assessment helps other business units within the bank to facilitate further business relations with the client. Risk re-assessment includes gathering detailed information on the client, the client's business dealings, domicile, contacts and management. Transaction banking can be considered a dynamic black box in which all transactions are filtered and scanned against national and international sanction and freeze lists. Examples of the standard lists currently in use are the EU name and country list, the OFAC name and country list, and the Politically Exposed Persons (PEP) list. In addition to these lists, individual financial institutions can impose additional scan and sanction lists.
2.1.2. The process
During the client acceptance process, the law on identification of provision of services (in the Netherlands: the WID) is an instrument in the fight against money laundering. Financial institutions must establish and document the client's identity before providing a financial service. The client's identity must be established not only when entering into a long-term relationship with that customer, but also when providing certain one-off, occasional financial services. The WID also has the objective of combating tax and other fraud. As with the disclosure of unusual transactions act, as part of its supervisory duties De Nederlandsche Bank assesses and enforces the adequacy of institutions' procedures and measures that focus on combating money laundering (DNB, 2008).
During the client acceptance process the client is identified by using legal documents. The client must provide sufficient evidence for the relationship banker to accept the client. If the client is considered a liability and/or poses a reputation risk, the client acceptance process will trigger the exit process. Consequently, if the customer due diligence research delivers an
unacceptably high risk, or the client is listed on a 'don't do business' list, this will also trigger the exit process. If the result of the client acceptance process is positive, the client (with an acceptable risk rating) is 'handed over' to transaction banking. Transaction banking serves two major purposes: first, to recognize and prevent terrorist financing, and second, to recognize all suspicious transactions and report them. Reporting of these suspicious transactions is completed by filing a suspicious activity report (SAR), as required by anti-money laundering (AML) legislation.
[Figure 2 depicts transaction banking as a dynamic black box with concentric layers: a low risk rating gives the largest field of operation (FOP), an increased rating a smaller one, and a high rating the smallest. Transactions are screened against the EU, OFAC and PEP lists, and suspicious activity produces a SAR flagging reputation risk (RR).]

Figure 2: transaction banking layers (Basel, 2001)
Ideally, transaction banking should consist of layers which represent the client's risk rating. The lower the rating, the larger the field of operation (FOP) will be, and vice versa. For example, a client with a high risk rating will be monitored more stringently than a client with a low risk rating. In a normal situation the client will live throughout his/her financial life within the boundaries of transaction banking. Any suspicious client behavior will result in a suspicious activity report (SAR). This report will be analyzed, and the analysis can trigger the execution of the risk re-assessment process. The risk re-assessment process is triggered by incidents such as suspicious transactions, bad press and rumors. See Figure 2.
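The layered idea, that a higher risk rating means a smaller field of operation and more stringent monitoring, can be sketched as follows. The rating labels, thresholds and alert rule are illustrative assumptions, not a calibrated monitoring model.

```python
# Sketch of layered transaction monitoring: each risk rating maps to a
# field of operation (FOP) and an alert threshold. Values are illustrative.

MONITORING = {
    "low":       {"fop": "large",  "alert_threshold": 100_000},
    "increased": {"fop": "medium", "alert_threshold": 25_000},
    "high":      {"fop": "small",  "alert_threshold": 5_000},
}

def needs_sar(risk_rating: str, amount: float) -> bool:
    """File a suspicious activity report when a transaction exceeds the
    alert threshold that belongs to the client's risk layer."""
    return amount > MONITORING[risk_rating]["alert_threshold"]

print(needs_sar("low", 50_000))    # False: well inside a large FOP
print(needs_sar("high", 50_000))   # True: a SAR is filed, which may in
                                   # turn trigger risk re-assessment
```

The same transaction thus passes or alerts depending solely on the client's layer, which is exactly the inverse relation between rating and FOP described above.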
Risk re-assessment is a process based on the same approach as the client due diligence process. The goal of risk re-assessment is to provide supporting evidence for the client being scrutinized using a predefined client risk assessment. The evidence found can be used to make an
adjustment in the client's risk rating. If, after adjustment, the risk rating is within the set boundaries, the client can do business as usual. In case the evidence results in an unacceptable risk rating, the 'client' is handed over to the exit process. This results in ending the relationship with the client. A regular review is based on the risk rating resulting from the client acceptance process. The risk rating determines the frequency of the client's review. For example, if the rating is low, the review frequency can be three years. If the rating is high, the review will be more frequent, for example yearly. The review process will differ between retail and wholesale clients. Where the retail process can be relatively simple, the wholesale process proves to be more difficult. The actual process of reviewing is similar to the risk re-assessment process; the difference lies solely in the fact that the risk re-assessment process is incident triggered. The exit process can be triggered by the following (sub)processes:
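The rating-driven review schedule above can be sketched in a few lines. The interval for a low rating (three years) and a high rating (one year) comes from the text; the 'increased' interval is an illustrative assumption.

```python
from datetime import date, timedelta

# Sketch of rating-driven periodic review scheduling. The "increased"
# interval is an assumption; low (3 years) and high (1 year) follow the text.
REVIEW_INTERVAL_YEARS = {"low": 3, "increased": 2, "high": 1}

def next_review(last_review: date, risk_rating: str) -> date:
    """Schedule the next periodic review from the client's risk rating;
    an incident (SAR, bad press) would trigger re-assessment earlier."""
    years = REVIEW_INTERVAL_YEARS[risk_rating]
    return last_review + timedelta(days=365 * years)

print(next_review(date(2009, 1, 1), "low"))   # 2012-01-01
```

An incident based review bypasses this schedule entirely, which is the difference between periodic review and risk re-assessment noted above.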
Client Acceptance Process. If, after careful consideration, the result of the client scan is negative (high risk and/or reputation risk), the exit process is executed.
Risk Re-Assessment (Incident Based Review, IBR). One of the client's actions/transactions (a SAR, bad press, etc.) triggers the risk re-assessment process and a subsequent investigation. If the result of this investigation is negative, the exit process is executed. During risk re-assessment the exit process will also be executed for any unidentifiable client and/or where the KYC/risk profile is unacceptable.
Review. If during a periodic review evidence is found which makes the client a liability, this constitutes an unacceptable risk factor and the evidence will trigger the exit process. This process can also be triggered as a consequence of any one of the following: (a) clean-up (as a result of the CTP process); (b) product/revenue review; (c) strategic exits.
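The exit triggers listed above can be sketched as one decision function. The trigger names come from the text; the boolean inputs are assumptions about how an institution might represent the outcome of each (sub)process.

```python
# Sketch of the exit decision: any negative outcome from the client
# acceptance process (CAP), an incident based review (IBR), or a periodic
# review hands the client over to the exit process.

def should_exit(cap_negative: bool,
                ibr_negative: bool,
                review_negative: bool) -> bool:
    """Return True when any (sub)process triggers the exit process."""
    return any([cap_negative, ibr_negative, review_negative])

print(should_exit(False, True, False))  # True: the IBR result was negative
```

A real implementation would of course record which trigger fired and route the case through the institution's governance, but the any-of-three structure is the heart of the process.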
2.1.3. Summary
The lack of a customer due diligence policy may result in reputation, operational, legal and concentration risks for the financial institution. Therefore, financial institutions have to formulate policies and procedures to assess the risks of a particular client or product and implement measures to mitigate the identified risks. The identification and assessment of client risk during the client acceptance process, risk re-assessment and periodic review enables a financial institution to continuously review the risk involved in the management of its clients and take appropriate measures when necessary.
2.1.4. References
Basel (2001). Basel Committee on Banking Supervision. Customer Due Diligence for Banks.
Basel (2003). Basel Committee on Banking Supervision. General Guide to Account Opening and Customer Identification.
DNB (2008). De Nederlandsche Bank, Http://www.dnb.nl (the location of this content may change).
FATF (2003). The Forty Recommendations. Financial Action Task Force on Money Laundering.
Koker, L. de (2006). Money laundering control and suppression of financing of terrorism. Journal of Financial Crime, 13(1), 26-50.
NVB, DNB (2004). Additional Memorandum on Customer Identification. March.
2.2. Integrity management after the credit crunch
Dr. Sylvie Bleker-van Eyk
Compliance and integrity are inextricably linked. However, when managing the risks within financial institutions, the emphasis is put on the management of compliance risks. The in-depth analysis of integrity risk and, as a result, the management of integrity risks remain a poor substitute for addressing what has proven to be a key risk factor during the credit crunch. The main question is how to manage integrity risks effectively. Integrity is conceived as a behavioral component of risk management and is therefore deemed difficult to manage effectively. We disagree: integrity may well be a less rational component within risk management, but it most certainly can be managed. Managing behavioral aspects requires creativity and the ability to think out of the box.
In this chapter we discuss the increasing importance of integrity as a result of the credit crunch. First, we discuss the different norms that can be distinguished in integrity policies. Second, we focus on the normative force of integrity norms. We then present how integrity risks can be managed effectively and the importance of supervision when managing integrity. Finally, we argue that regaining trust within the financial market will depend on our ability to demonstrate effective integrity risk management.
2.2.1. Integrity and ethics
Integrity is difficult to define. Black's Law Dictionary defines integrity as “soundness or moral principle and character, as shown by one person dealing with others in the making and performance of contracts, and fidelity and honesty in the discharge of trusts” (Nolan & Nolan-Haley, 1990). Ethics and integrity are often used as synonyms. Huberts defines integrity as the quality of acting in accordance with the values, norms and rules accepted by the organization and the public (Huberts, 2001). He refers to public morals. This is essential, because morals may change and evolve, therewith affecting the values and rules we act upon (WRR, 2003). Huberts states that “ethics refer to the collection of values and norms, functioning as standards or ‘yardsticks’ for assessing the integrity of one’s conduct. Ethics are a set of principles that provide a framework for acting. The moral nature of these principles refers to what is judged as right, just, or good (conduct). Values are principles or standards of behavior that should have a
certain weight in the choice of action (what is good to do or bad to omit doing). Norms state what is morally correct behavior in a certain situation. Values and norms guide the choice of action and provide a moral basis for justifying or evaluating what we do. Integrity is acting within the framework of moral values and norms (ethics)” (Huberts, 2001). Managing integrity is managing the different relationships the financial institution (FI) maintains: the different frameworks within which it acts. With regard to integrity management we distinguish a multitude of relationships. The core values remain the same within every framework. Nevertheless, each relationship requires a slightly different approach due to the extent of the enforceability of the requested and/or expected behavior of the participants involved. We mention: the relationship between the employees; the relationship between the FI and its employees; the relationship between the FI and its shareholders; the relationship between the FI and its direct stakeholders (agents, vendors, suppliers, customers, etc.); the relationship between the FI and governmental and supervisory authorities; the relationship between the competitors in the market; and the relationship between the FI and other indirect stakeholders (neighboring communities, pressure groups, etc.). The question is how the norms regulating the above-mentioned relationships can be made as effective as possible.
2.2.2. Effectively managing integrity norms
Normative force of integrity norms
An important aspect of the effectiveness of norms is their normative force. Legal character implies legal consequences. In principle, legally non-binding instruments or norms will not entail legal consequences, while legally binding norms may be enforced through legal action. In particular cases, legally non-binding norms may also have indirect legal effect, for instance within national courts ("Batco I", 1978; "Batco II", 1979). Both legally binding and legally non-binding norms may have normative force, which can be defined as the effect of a norm directed at influencing behavior. Instruments which are legally non-binding may contain standards of behavior which have normative effect. The addressee may feel obliged to act accordingly, not because he perceives the norm as legally binding, but because he perceives it as a moral obligation. Through its normative force, a standard of behavior turns into a norm (van Eyk, 1995). We distinguish three main categories of norms.
First, policies, in which an objective is laid down. Sometimes, these objectives may be of a concrete nature, but more often policies are rather abstract. Examples of such policies are the many corporate values statements. Consensus regarding the policy concerns solely its
objective and not the means by which this should be realized. The normative force of policies is, therefore, correspondingly vague.
The second category is formed by principles. These norms are generally highly abstract and serve as a guideline. Principles do not aim at an objective as such, but at the quality of behavior. Principles are `norms of aspiration' and their normative force lies predominantly in the sphere of prohibition. Most general corporate Codes of Conduct contain principles.
The third category of norms is rules. Rules are precisely formulated and demand a specific behavior. A rule ideally lays down who is obliged to behave in a certain manner and who is the beneficiary of such behavior. Often, the general principles laid down in codes of conduct are dealt with explicitly in sub-codes defining specific behavior, such as the sub-codes on insider trading, corruption, competition or gifts & entertainment. Normative force is the willingness of the addressee to accept the norm as a guideline for his behavior, and is influenced by internal as well as external factors (Dijk, 1987; van Eyk, 1995), such as: the way in which the norm has been created and formulated; the internal consistency of the system of norms (repetition in diverse corporate documents and expressions); the consensus on the values underlying the norms; the authority or power of the person or corporate organ setting the norm; the self-interest involved in complying with the norm and the expectation that compliance with the norm will be enforced by sanctions; and effective (internal and external) supervision of compliance.
Managing integrity risk
The first step in managing integrity is to pinpoint the integrity risks a financial institution encounters in its daily operations. Normally, in enterprise risk management, risks are identified by means of a (strategic) risk assessment. With regard to compliance and integrity issues, we hold the opinion that such risk assessments do not offer sufficient insight. Compliance (especially within financial institutions) is directly linked to the 'license to operate'. The Board has a helicopter view over the organization, and the danger with compliance and integrity may, at first sight, be hardly visible. Erosion of compliance and integrity (dishonest behavior and eventually fraud) is best noticeable at the micro level. Consequently, it is preferable to pair the risk assessment with a compliance and integrity survey of the shop floor (Bleker, Claassen, & Zevenhuizen, 2008).
When analyzing integrity issues, the discussion is almost always limited to the opportunities leading to possible dishonest behavior and fraud, which results in further limiting opportunities. This focus is misleading. Contrary to the saying, opportunity as such does not make a thief. It is not the simple existence of an opportunity that will persuade a person to misuse that opportunity. Opportunity is but one of the three elements constituting an integrity breach (Albrecht, Albrecht, & Albrecht, 2002). The second element is pressure. Pressure comes from outside the FI as well as from inside. Outside pressures are, for instance, financial difficulties, addictions and criminal connections. Inside pressures can also constitute great danger, as the recent credit crunch has proven. Targets that are set too tight and a bonus system that allows for huge bonuses can easily compromise the employee, leading to catastrophic consequences. Rationalization constitutes the third element. This element is the most difficult one to tackle, because it is predominantly behavioral. Rationalization is the key element, because it pulls the trigger when an employee rationalizes why he could and should utilize a given opportunity, due to the pressure he experiences. By dissecting integrity into these three elements we can, to a large extent, strip 'integrity' of its behavioral aspects and couple integrity with hard controls (infra Figure 3). By analyzing the existing procedures and processes within the FI in relation to the specified integrity risks, the blanks become visible and the voids can be filled by adapting existing measures or adopting new ones in order to become 'more in control' with regard to integrity issues (Bleker-van Eyk & Zevenhuizen, 2008).
[Figure 3 depicts the three elements of an integrity breach together with the controls that address each. Pressure: adapt targets, adapt the bonus system, addiction aid, welfare officer, competence management (HRM). Rationalization: tone at the top and peers, web-based learning, competence management (HRM), workshops, training. Opportunity: security measures, camera surveillance, log-in codes, physical safety measures. At the center sits the behaviour of employees and the organization, addressed by preventive, detective and encouraging controls.]

Figure 3: pressure, rationalization and opportunity (also see Grinsven, Bleker-van Eyk, 2009)
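The three-element view lends itself to a simple multiplicative sketch: an integrity breach requires opportunity, pressure and rationalization together, so removing any one element removes the risk. The 0-1 scores below are illustrative assumptions, not a calibrated model.

```python
# Sketch of the three-element integrity breach model: opportunity,
# pressure and rationalization must all be present. Scores are illustrative.

def breach_risk(opportunity: float, pressure: float,
                rationalization: float) -> float:
    """Multiplicative combination: setting any element to 0 removes the
    risk, which is why controls target each element separately (Figure 3)."""
    return opportunity * pressure * rationalization

# Strong preventive controls keep opportunity low, so even a pressured
# employee who rationalizes poses little residual risk:
print(breach_risk(opportunity=0.1, pressure=0.9, rationalization=0.8))  # roughly 0.072
```

The multiplicative form captures why control programs that only "further limit opportunities" are incomplete: lowering pressure (targets, bonus design) or rationalization (tone at the top, training) reduces the same product.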
Supervision
As discussed supra, supervision is one of the key elements to improve the normative force of behavioral norms. Supervision can be divided into two main categories: internal and external supervision. Supervision is crucial to the success of integrity policies. In general, when the FI wholeheartedly adheres to the values underlying the integrity norms and the tone at the top is consistent with these values, they will become part of the DNA of the financial institution.
Nevertheless, embedding integrity in the DNA does not prevent us from contracting a disease; check-ups will be necessary. When the perception of the addressees of the norm is that the financial institution does not 'walk the talk', compliance with the integrity norms may be seriously hampered by the element of rationalization. Internal supervision starts with the set-up of a monitoring system that monitors the effectiveness of the norms in daily practice (with adaptation when necessary), compliance with these norms, and the follow-up in case of non-compliance. As the behavioral norms are translated into procedures and incorporated in the processes of the organization, internal audit can run checks on compliance with the norms and detect possible flaws within the system. The more rigorous the external supervision of a FI's integrity, the greater the necessity of internal supervision and, therewith, the normative force of the norms.
The credit crunch has taught us that trust is a main issue. Trust is used as a synonym for integrity. During the credit crunch, the main focus was on integrity issues that had a negative, in this case catastrophic, effect on the entire economic and monetary system. Board members of financial institutions left the sinking ship, taking millions of dollars in bonuses with them. Managers took high risks with dodgy financial products to meet personal targets and increase bonuses. While confidence slips out through the backdoor, panic makes its grand entrance! The tone at the top of some institutions has put the system in peril. According to the Group of Thirty: “As financial market turmoil spreads across the globe, regulators, supervisors, policymakers, and the public at large have been questioning the effectiveness of financial supervision and whether changes to existing supervisory models are needed. Such a re-assessment process is not a new phenomenon. History has shown that financial market disruptions have often been followed by regulatory reforms.” (Group of Thirty, 2008). In its report the Group of Thirty describes the four main approaches to regulatory supervision:
1. The Institutional Approach, where the firm's legal status (for example, an entity registered as a bank, a broker-dealer, or an insurance company) determines which regulator is tasked with overseeing its activity, both from a safety and soundness and a business conduct perspective.
2. The Functional Approach, where the supervisory oversight is determined by the business that is being transacted by the entity. Each type of business may have its own functional regulator responsible for both safety and soundness oversight of the entity and business conduct regulation.
3. The Integrated Approach, which has one single universal regulator that conducts both safety and soundness oversight and conduct-of-business regulation for all the sectors of the financial services business.
4. The Twin Peaks Approach, which is based on the principle of regulation by objective and refers to a separation of regulatory functions between two regulators: one that performs the safety and soundness supervision function and another that focuses on conduct-of-business regulation (e.g. the Netherlands).

As a result of the credit crunch, the discussions regarding the reform of the supervisory mechanisms will go full speed ahead. From March 2007 up to March 2008, the United States Department of the Treasury conducted research on the reform of the regulatory system, resulting in a “Blueprint of a Modernized Financial Regulatory Structure” (The Department of the Treasury, 2008). The flaws in the supervisory system of the United States were already obvious, due to the growing instability of the US financial markets. The Blueprint presents a conceptual model based on an objectives-based regulatory approach, the Twin Peaks approach, with a distinct regulator for each of three objectives: (1) market stability regulation, (2) safety and soundness regulation associated with government guarantees, and (3) business conduct regulation.
It is obvious that the supervision of business conduct will entail a stronger emphasis on preserving integrity within all the relationships mentioned supra in the introductory paragraph.
2.2.3. Summary
Increasing normative force by 'walking the talk', adopting 'harder' controls and increasing supervision are prerequisites to achieving integrity within the financial institution in particular and the financial markets in general. The credit crunch clearly demonstrated that trust is essential to the operation of the market. Goodwill is a glass of water that one fills drop by drop. One shock
can make the glass fall and spill all trust. It will take time and effort to restore the spilled trust. With regard to integrity, two general approaches can be distinguished: the American approach is rule based, the European approach is principle based. As a result of the latest financial crisis, major reforms will take place, and the protection of integrity will play a predominant part in these reforms. The author strongly believes that in Europe, too, a strong shift towards a more rule-based approach will be noticeable. Confidence will not be restored easily. With regard to business ethics, at the end of the last century stakeholders shifted from “tell me” to “show me”. In the decade to come, we will most certainly see a shift towards “prove me”. Financial institutions will have to prove that they are in control, including with regard to their integrity policy. Normative force will have to be increased and controls will harden until confidence is regained.
2.2.4. References
Albrecht, W.S., Albrecht, C.O., & Albrecht, C.O. (2002). Fraud Examination. Thomson South-Western, Mason, OH.
Batco I, 71 (Court of Appeal Amsterdam 1978).
Batco II, 217 (Court of Appeal Amsterdam 1979).
Bleker, S.C., Claassen, L., & Zevenhuizen, H. (2008). Een framework voor integraal risicomanagement. Standaard Uitgeverij.
Dijk, P. van (1987). Normative Force and Effectiveness of International Norms. German Yearbook of International Law, 9-35.
Grinsven, J.H.M. van, & Bleker-van Eyk, S. (2009). Controller kan fraude beter managen. Finance & Control, August, pp. 8-12.
Group of Thirty (2008). The Structure of Financial Supervision: Approaches and Challenges in a Global Marketplace. Washington.
Huberts, W.J.C. (2001). National Integrity Systems: Country Study Report for Transparency International, The Netherlands.
Nolan, J.R., & Nolan-Haley, J.M. (1990). Black's Law Dictionary. St. Paul, MN: West Publishing Co.
Bleker-van Eyk, S.C., & Zevenhuizen, H. (2008). Audit Magazine, pp. 23-27. Institute of Internal Auditors the Netherlands.
The Department of the Treasury (2008). Blueprint of a Modernized Financial Regulatory Structure. Washington.
Eyk, S.C. van (1995). The OECD declaration and decisions concerning multinational enterprises: An attempt to tame the shrew. Ars Aequi Libri.
WRR (2003). Waarden, normen en de last van het gedrag. Wetenschappelijke Raad voor het Regeringsbeleid.
2.3. Compliance in control

Drs. Arnoud Hassink, Marc Morgenland, Drs. Bas van Tongeren
The world around us changes fast. Regulators try to keep pace with these changes by continuously issuing new regulations and guidelines and by intensifying supervision. In addition to external regulations, many financial institutions issue internal rules and codes of conduct. Acting according to both external and internal rules is referred to as “compliance”. Compliance risk is “the risk of legal or regulatory sanctions, material financial loss, or loss to reputation a financial institution may suffer as a result of its failure to comply with laws, regulations, rules, related self-regulatory organization standards, and codes of conduct applicable to its activities” (Basel, 2005).
Financial institutions are subject to an ever increasing number of rules and guidelines. In this chapter, we argue for an integral, principle-based approach to compliance. In our view, this is the only way for financial institutions to cope effectively with a complex regulatory environment. We present four best practices for dealing with the complexity of rules and regulations. Further, we present a structured methodology for organizing the compliance function in a financial institution.
2.3.1. Regulatory pressure

The advent of many new laws and regulations is dictated by affairs that took place in the past and had an impact on consumer confidence. An example is the debacle with share lease products in The Netherlands. Buyers of these investment products were unaware that they were borrowing money to buy shares. The monthly amounts they paid were not invested, but were merely interest payments. The return on the leased shares was supposed to be sufficient to repay the loan and return a nice profit. As share prices dropped, the investors were stuck with debts they had to repay. This affair led to lawsuits against the financial institution that offered these products and eventually to the withdrawal of this institution from the Dutch market. This example shows that market imperfections call for stricter regulation. Large financial institutions are particularly confronted with regulatory pressure because
confidence in these institutions is critical for economic stability (Llewellyn, 1999). See Table 2 for examples of regulatory pressure.

Regulation | Cause | Initiator | Aimed at
Basel I; Basel II | Fall of the Herstatt Bank (Germany); developments in risk management | Central banks and large banks in Europe | Banks
Corporate governance: Sarbanes-Oxley | Downfall of Enron | Senatorial commission (U.S.) | All companies quoted on the U.S. stock exchange
Local governance codes, e.g. Tabaksblat (NL) | Compromise on bill to abolish anti-takeover constructions | AEX and, later, the Dutch government | All companies quoted on the Dutch stock exchange
IAS/IFRS | Globalization requires a uniform set of reporting rules | Accountants, supported by the European Commission | Companies in Europe
Patriot Act | Terrorism | U.S. government | All companies worldwide

Table 2: examples of regulatory pressure (based on references)
2.3.2. Dealing with complexity: four best practices

Increasing globalization, issues with the corporate governance of complex institutions, continuously changing views on what constitutes sound operational management, changing laws and regulations, the ongoing evolution of products, and the determination of governments and regulators to fight money laundering, terrorist financing and other illegal financial transactions have created complex situations for financial institutions that operate internationally (Schilder, 2006; Garretsen, Groeneveld, Van Ees and De Haas, 1999). From the literature and interviews with compliance officers, we have derived four best practices to deal with this complexity.

Be proactive
The first practice is to be proactive. A proactive attitude can protect financial institutions from problems. To be able to anticipate future legislation, the financial institution should assess to what extent it is sensitive to changes in rules. This sensitivity depends on internal factors (product characteristics, distribution channels, customer groups) and external factors (market developments, political developments). A compliance check must be part of the decision process when developing and implementing new products or entering new markets. Further, a sound system for risk assessment is an important requirement for a proactive financial institution, as is having flexible information systems to respond to new information needs of stakeholders.
Build on principles
The second practice is to build on principles, not on rules alone. Two approaches to compliance are possible. The first is to check whether the financial institution acts in conformity with rules and regulations from a rule-based perspective (as if it were a police task). The second is to advise in the field of compliance from a principle-based approach, which puts values like integrity, carefulness and transparency first. To protect the financial institution from the risk of non-compliance, a rule-based approach is imperative. But responsible conduct cannot be achieved solely by applying the rules (Michaelson, 2006). Principles give guidance in situations where the rules are not clear. Many companies choose a combination of both approaches, taking the official rules and laws as the lower limit (Bikker & Huijser, 2001).

Stimulate a compliance culture
The third practice is to stimulate a compliance culture. When the corporate culture is compliance oriented, there is less need for enforcement. This culture must be supported by everyone in the financial institution and must be embedded in all processes and reward systems (Visser, 2006). The institution must weigh the interests of its diverse stakeholders. Next to making a profit (in the interest of shareholders), the interests of customers must be taken into account. A customer must be informed about the risks and uncertainties of a financial product. If the customer willingly accepts these risks and uncertainties, he cannot hold the seller accountable for possible losses on charges of misrepresentation.

Benefit from compliance
The fourth practice is to benefit from compliance. Financial institutions worry about the rising costs of compliance. Up to the highest organizational level, employees are concerned with compliance issues and risk management. Managers can be held personally liable for breaches of the rules.
As a consequence, large amounts of money are invested in internal controls, reporting lines and risk models. Decisions are postponed until all legal details are worked out. Service quality and product development may be endangered. ING Group announced that it would increase its compliance staff to about one percent of total head count (Marel, 2006). Ultimately, customers have to pay for this.
Still, a well-functioning compliance function can add value and lead to a competitive edge (Podpiera, 2004; Vries, 2005). An efficient compliance function has many benefits. First, it may
prevent the institution from reputation damage and penalties by the regulators. The loss of market value that is thus avoided very often outweighs the costs. Second, it may lead to a lower capital charge under Basel II and Solvency II because of good risk management practice. Third, it may provide insight into the overlap between different rules, which may prevent the institution from doing double work when implementing new rules and regulations. Finally, it may lead to more efficient and effective processes. In many cases, an “in control” statement is demanded; this encourages institutions to assess, improve and describe their processes.
2.3.3. Structured methodology

In large financial institutions, the danger of a fragmented approach to compliance and duplication of effort is imminent. To avoid this, a structured, integral approach to compliance is necessary that brings the risk management, internal audit, legal affairs, human resources and compliance departments together to accomplish a compliance-driven culture and institution. A structured method for organizing the compliance function in a financial institution has six elements.

Survey
The first element is a survey, which consists of two kinds of questions. Questions about the stakeholders: who are the main stakeholders of the institution? What products and services does the institution offer, and what laws are applicable? Further, questions about the organizational culture: what are the shared values? How open is the communication? Is there room for sound criticism? Do employees talk to each other about their conduct? The answers to these questions determine whether a rule-based or a principle-based approach should be chosen.

Risk analysis
The second element is risk analysis. By conducting a risk analysis, the main compliance risks are mapped and prioritized. The risk analysis consists of two steps: determine the main compliance risks and quantify them. Using interviews and self risk assessments, the financial institution's existing processes and procedures are investigated. The risks of non-compliance include informing clients incorrectly, not safeguarding clients from unacceptable risks, insider trading, money laundering and chasing after commissions. For each risk, an assessment is made of its probability and impact. The impact of an offence cannot always be expressed in terms of money: a relatively small fine by a regulator may lead to big media impact, resulting in reputation damage.
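The two-step risk analysis described above can be sketched as follows. This is a minimal illustration only: the risk names, probabilities and impact scores are hypothetical assumptions, not data from this chapter, and a real assessment would also capture non-monetary impact such as reputation damage.

```python
# Illustrative sketch of the two-step compliance risk analysis:
# (1) list the main compliance risks, (2) quantify and prioritize them.
# All risk names and scores below are hypothetical examples.

def prioritize(risks):
    """Rank risks by expected impact (probability x impact), highest first."""
    return sorted(risks, key=lambda r: r["probability"] * r["impact"],
                  reverse=True)

risks = [
    {"name": "informing clients incorrectly", "probability": 0.20, "impact": 5},
    {"name": "insider trading",               "probability": 0.02, "impact": 9},
    {"name": "money laundering",              "probability": 0.05, "impact": 10},
]

for r in prioritize(risks):
    print(f'{r["name"]}: score {r["probability"] * r["impact"]:.2f}')
```

A low-probability, high-impact risk such as money laundering can still outrank a frequent but minor one, which is why both dimensions are scored.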
Change
The third element is change. Measures are taken to reduce or control the risks. Risk and compliance cannot operate in a silo; the controls and measurements must be implemented in the existing systems and procedures (Rasmussen, 2005). Further, compliance should move from task orientation to process orientation.

Organize
The fourth element is organize. The compliance function must have instruments to check whether the company is in compliance and whether the measures taken are effective. Key compliance indicators can be used, such as the number of customer complaints or the number of times limits were exceeded. The compliance officer must have unrestricted access to data in the institution and the opportunity to question employees.

Risk management
The fifth element is risk management, which includes a number of activities: measurement and registration of the scores on compliance indicators; registration and reporting of incidents; corrective actions through the implementation of new procedures, guidelines and internal controls; awareness actions and education; and discouragement actions and sanctions.

Reporting
The sixth element is reporting. Periodically, the compliance function should report on: the progress of compliance-related activities; possible breaches and incidents and the measures that have been taken; the effect of measures taken; planned investigations by the regulating supervisor; matters that may cause reputation damage; and developments in the field of legislation and their expected impact on the company.
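The monitoring of key compliance indicators mentioned under "organize" could be sketched as a simple threshold check. The indicator names and limits below are illustrative assumptions, not prescribed values.

```python
# Hypothetical sketch of checking key compliance indicators against
# thresholds; indicator names and limits are illustrative assumptions.

THRESHOLDS = {
    "customer_complaints": 25,   # max complaints per quarter (assumed)
    "limit_breaches": 3,         # max limit exceedings per quarter (assumed)
}

def flag_indicators(observed):
    """Return the indicators whose observed value exceeds the threshold."""
    return {name: value for name, value in observed.items()
            if value > THRESHOLDS.get(name, float("inf"))}

observed = {"customer_complaints": 31, "limit_breaches": 2}
print(flag_indicators(observed))  # -> {'customer_complaints': 31}
```

Flagged indicators would then feed the reporting element: breaches, measures taken and their effect.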
An effective enterprise-wide compliance-risk management program is flexible enough to respond to change, and it is tailored to the financial institution's corporate strategies, business activities, and external environment (Olson, 2006). Compliance should no longer be treated as a project but as an ongoing activity.
2.3.4. Summary

There is an evident danger of overregulation in the financial services sector. Dealing with a highly complex regulatory environment is currently one of the biggest challenges for financial institutions. Just following the existing rules is not enough to prevent future damage from
compliance issues. Therefore, financial institutions should proactively follow developments in relevant internal and external factors. They should stimulate a compliance culture and build on principles. A structured approach to compliance helps to take calculated risks and benefit from compliance.
2.3.5. References

Basel Committee on Banking Supervision (2005). Compliance and the compliance function in banks. Bank for International Settlements, April (available at www.bis.org).
Bikker, J.A., & Huijser, A.P. (2001). (R)evolutie in het toezicht op banken. ESB, vol. 86, issue 4295, 16 February.
The Economist (2005). Sarbanes-Oxley: A price worth paying? The Economist, 19 May 2005.
Garretsen, H., Groeneveld, J.M., van Ees, H., & de Haas, R. (1999). Financiële fragiliteit en crises. Economisch Statistische Berichten, vol. 84, issue 4194, 12 March.
Llewellyn, D. (1999). The Economic Rationale for Financial Regulation. London: The Financial Services Authority.
Marel, G. van der (2006). ING voert controle sterk op; AFM komt met boete en ingreep na reeks incidenten. Het Financieele Dagblad, 3 March.
Michaelson, C. (2006). Compliance and the Illusion of Ethical Progress. Journal of Business Ethics, vol. 66, issue 2, pp. 241-251.
Olson, M.W. (2006). Enterprise-Wide Compliance-Risk Management. Speech at the Fiduciary and Investment Risk Management Association's Twentieth Anniversary Training Conference, Washington D.C., 10 April.
Podpiera, R. (2004). Does Compliance with Basel Core Principles Bring Any Measurable Benefits? IMF Working Paper, Monetary and Financial Systems Department, November.
Rasmussen, M. (2005). Trends 2005: Risk and Compliance Management. Forrester Research.
Rogier, L.J.J. (2006). Preventieve bestuurlijke rechtshandhaving. Oratie, Erasmus Universiteit Rotterdam, 16 March.
Schilder, A. (2006). Banks and the compliance challenge. Speech by dr. A. Schilder, director of De Nederlandsche Bank (Dutch central bank), Asian Bankers Summit, Bangkok, 16 March (available at www.dnb.nl).
Schneck, O. von (2002). Wo liegt eigentlich Basel II? Reutlingen: European School of Business (available at http://www.esb-reutlingen.de/).
Tabaksblat, M., et al. (2003). De Nederlandse corporate governance code: Beginselen van deugdelijk ondernemingsbestuur en best practice bepalingen. Den Haag, 10 March.
Visser, E.T. (2006). Neem eigen bedrijfscultuur serieus. Het Financieele Dagblad, 7 September.
Vries, B. de (2005). Corporate Governance in Europe. Special Issue 2005/23, Research Paper, pp. 1-4. Rabobank Economic Research Department, Rabobank, Netherlands, October.
The foundations of good management in financial services have been in place for years. However, as concepts, tools, and approaches are changing, is your organization benefiting from these developments?
M. Bozanic
3. Risks in the building blocks

Joop Rabou RA RE, Drs. Gert Jan Sikking
The activities that financial institutions perform can be broadly divided into two distinct sets. One set of activities is associated with ‘building’ the institution; the other is associated with ‘running’ it. In this chapter we present several sections that focus on risks in the building blocks, i.e. the risks associated with building an institution.
The current credit crunch has highlighted the importance of sound risk management within the financial institutions industry. The prudence with which the risk policy is executed determines to a large extent the risk profile of the financial institution as well as its capital ratios. Risk management within financial institutions requires clear risk management principles and policies for interest rate, market, liquidity and currency risk, as well as credit risk at a portfolio level. It also requires a rigorous framework of limits and controls to manage the risks and support the value proposition of the institution.
This chapter covers the basics of credit risk management and the ongoing developments in this area. Topics covered are the acceptance policy for wholesale clients based on the know-your-customer principle and the use of credit risk models in credit risk management. Probability of Default (PD), Exposure at Default (EAD) and Loss Given Default (LGD) are important Basel II parameters that are extensively used as part of credit risk management frameworks within financial institutions. In addition, you will find a section focusing on risk indicators, combining probability distributions, data scaling and the characteristics of diversified hedge fund portfolios.
3.1. The basics of credit risk management

Drs. Arnoud Hassink
Credit risk is the most important risk in the banking business. Surprisingly, credit risk management has not always received the attention it deserves. Sometimes, commercial interests prevailed over risk management. This chapter starts with a definition of credit risk. Then, problems in managing credit risk and the elements of sound credit risk management are treated. Finally, an overview of the process of credit risk management and recent developments in credit risk modeling are presented.
3.1.1. Credit risk defined

The main function of a bank is to borrow money and lend it to companies, governments and private persons. A bank functions as an intermediary between those who deposit money and those who need money to finance their activities. In doing so, banks bridge differences in currency, timing, duration and amount. Banks are specialized in the business of profitably managing risks (Hempel & Simonson, 1999). By their expertise and size, banks are better able to estimate and manage risks than parties who do not make their living out of it. The banks’ risks can be divided into three groups: credit risk, market risk and operational risk. One of the most important categories of risk a bank faces is credit risk. Figure 4 provides an overview of the structure of a bank’s credit risk. The credit risk of a bank is determined not only by the characteristics of its portfolio (portfolio risk), but also by the way the bank has organized its credit-granting process (transaction risk).
[Figure 4: credit risk at a bank — credit risk divides into transaction risk (selection risk, underwriting risk, operations risk) and portfolio risk (intrinsic risk, concentration risk).]
Transaction risk has three components: selection risk, underwriting risk and operations risk. Selection risk is the risk that a bank accepts bad risks because it has not put enough effort into investigating the client’s creditworthiness. Underwriting risk is the risk that a bank has not safeguarded itself against (the consequences of) payment problems and defaults; a bank can secure itself by demanding collateral and by reinsuring its business. Operations risk is the risk that revenues are missed because of errors in recording loan transactions.
Portfolio risk is composed of intrinsic risk and concentration risk. Intrinsic risk is determined by factors that are unique to specific borrowers or industries, such as clientele, solvency and earning power. Concentration risk occurs when a bank’s credit portfolio is concentrated in particular groups of borrowers, industries or regions. Transaction risk and portfolio risk cannot be viewed in isolation: high portfolio risk may be the consequence of non-compliance with credit policies (i.e. selection risk).
Transaction risk as described is the risk of errors or non-compliance with credit policies; it can also be defined as “the operational risk of the credit-granting process”. Credit risk in a narrow sense can then be defined as “the risk that a bank’s result and equity are affected by a weakening creditworthiness of a debtor”. This definition is broader than the widely used “risk that a debtor cannot meet his commitments”. A reason to introduce a new definition is the fact that a bank runs credit risk and may lose money even when a client does not fail. For example, a company that is downgraded by credit rating agencies is perceived as less creditworthy even if it still pays interest. If this company applied for credit today, the bank would set different conditions. A lower credit rating is also a signal that future interest payments may be endangered. Managing credit risk involves both the prevention of problems and limiting the consequences of deteriorating creditworthiness of clients. Moreover, credit risk must be managed on an individual basis as well as on a portfolio basis.
3.1.2. Problems in managing credit risk

Banks have not always given credit risk management the attention it needs. Some causes for this are (Hempel & Simonson, 1999; Ter Haar & Van der Linden, 1991): many banks aimed at volume growth and lost track of credit quality. Account managers were responsible for both commercial activities and credit control. Staff were not sufficiently qualified for credit risk assessment. Credit approval did not function well: the higher the loan, the more people were involved in the approval process, and the more people had already approved, the less the inclination to scrutinize the loan application. Credit control was not taken
seriously. There was a lack of good systems for signaling payment problems, and the characteristics of the portfolio as a whole were unknown. In a broader sense, banks did not have enough understanding of economic or other circumstances that can lead to deterioration in the credit standing of a bank's counterparties (Basel Committee, 2000). The 2007 credit crunch that started in the U.S. subprime mortgage market demonstrates the interdependence between factors like interest rates, unemployment and creditworthiness (Job, 2007). Some argue that the incentive systems within banks spurred employees to take more risk than is responsible (Rajan, 2008). If banks do not properly manage their credit risk, this may affect their own creditworthiness.
3.1.3. Elements of credit risk management

The goal of credit risk management is to maximize a bank's risk-adjusted rate of return by maintaining credit risk exposure within acceptable parameters (Basel Committee, 2000). Credit risk management starts with clear credit policies. These must be propagated by senior management and translated into guidelines and procedures.
Credit policies should contain the following elements (Hempel & Simonson, 1999): objectives, e.g. target groups of customers and growth targets; strategy, e.g. the desired composition of the credit portfolio and the bank’s tolerance for risk; credit limits at the level of individual borrowers and counterparties, and groups of connected counterparties (industries); acceptance criteria, desired and undesired loan types, and acceptable collateral; and procedures for bookkeeping and revision. Next to clear policies and detailed procedures, expertise is essential. Professional and independent credit analysis departments with knowledge of the relevant industries are needed, as is a specialized department for monitoring problem loans. Training must be provided to keep up with recent developments and techniques.
A bank must have a good credit information system that produces information on (Ter Haar & Van der Linden, 1991): loans that bear a potential risk and need extra attention — based on financial ratios, industry-specific information and limit breaches, timely intervention is possible; the extent and quality of risk spread across quality classes, industries and countries; and the overall quality of the credit portfolio and its development. To assess whether the risks are acceptable, a measure for credit risk is needed. With the use of credit risk models, credit risk can be expressed as a number. The Basel Capital Accord of 1988 and the
new Basel Accord (Basel II) have stimulated the development of tools to measure, compare and manage credit risk.
Traditional instruments to control credit risk are acceptance criteria, collateral, diversification, loan syndication, credit insurance and reinsurance, securitization, and covenants: clauses that enforce financial restructuring. Next to these instruments, increasing use is made of derivative instruments, credit derivatives, to control credit risk, especially concentration risk (Kasapi, 1999; Scholtens, 1997).
3.1.4. Assessment of creditworthiness

An important means to limit credit risk has always been “selection at the gate”. In the credit application process, the credit department gathers information to assess the creditworthiness of the applicant and judges whether the application matches the credit policies. A private person or a company can run into difficulty for a multitude of reasons. Some clients, however, stand up to the bad weather while others go bankrupt. This is why a bank is interested in the capacity of a client to recover from financial distress. Indicators for this capacity are liquidity and solvency.
The time and effort a credit department must spend investigating creditworthiness depends on the size of the possible losses that occur when the wrong decision is taken. This is determined by the amount and complexity of a loan and the extent to which the loan is guaranteed or secured. The following data are used in the investigation: financial data provided by the client himself; the registration of outstanding debts (in The Netherlands, private loans are recorded centrally by the Bureau voor Krediet Registratie, BKR); the opinions of credit agencies like Dun & Bradstreet, Moody’s and Standard & Poor’s (in the case of companies and governments); appraisal values of collateral; and, further, earlier experiences with the client.
Based on these sources, an image is drawn of the creditworthiness of the client. The “5 C’s” can be used to score the different aspects (Moyer, McGuigan & Kretlow, 2001). The first is character: is the client willing to meet his obligations? Earlier experience plays an important role in answering this question. The second is the client’s capacity to meet his obligations. This is judged based on the current liquidity position and an estimation of the client’s future cash flows. The third is capital: this refers to the property of a client. When it concerns companies,
solvency is used as a measure. The fourth is collateral: does the applicant have possessions that may serve as collateral? Collateral may be liquidated in case of payment problems; therefore it is also called a “secondary source of repayment”. The fifth is conditions: an estimate is made of the effect that macro-economic circumstances have on the capacity to repay the loan. Almost all banks use internal scoring systems to assess the creditworthiness of clients. Using credit scores, banks can determine how much they should lend and on what conditions. Credit scores also form the basis for credit ratings, which are used to model credit risk.
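An internal scoring system over the 5 C's could be sketched as a weighted average, as below. The weights, sub-scores and acceptance cut-off are illustrative assumptions, not an actual bank's model.

```python
# Hypothetical sketch of an internal scoring system based on the "5 C's".
# Weights, sub-scores (1-10) and the acceptance cut-off are illustrative
# assumptions, not an actual bank's scoring model.

WEIGHTS = {"character": 0.25, "capacity": 0.30, "capital": 0.20,
           "collateral": 0.15, "conditions": 0.10}

def credit_score(scores):
    """Weighted average of the five sub-scores (each on a 1-10 scale)."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

applicant = {"character": 8, "capacity": 6, "capital": 7,
             "collateral": 5, "conditions": 6}

score = credit_score(applicant)
print(f"score = {score:.2f}, accept = {score >= 6.0}")  # -> score = 6.55, accept = True
```

In practice such a score would then be mapped to an internal rating class, which in turn feeds the credit risk models discussed below.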
3.1.5. Monitoring the condition of individual credits

The condition of individual loans must be monitored continuously. If there is relevant information about a change in creditworthiness, this must be passed on to the department responsible for assigning internal risk ratings to the credit. The value of any underlying collateral and guarantees must also be monitored. Such monitoring assists the bank in making necessary changes to contractual arrangements as well as maintaining adequate reserves for credit losses. In assigning responsibilities, bank management should recognize the potential for conflicts of interest, especially for personnel who are judged and rewarded on indicators such as loan volume, portfolio quality or short-term profitability (Basel Committee, 2000).
3.1.6. Credit risk modeling

To be able to manage credit risk on a portfolio level, a bank must measure portfolio risk. Popular tools for measuring credit risk are CreditMetrics, CreditPortfolioView, CreditRisk+ and Credit Portfolio Manager. The focus of these systems is on downside outcomes: payment problems, failure, et cetera. Measures of risk therefore tend to focus on the likelihood of losses, rather than on characterizing the entire distribution of possible future outcomes (Lowe, 2002). All these models try to measure the potential loss that a portfolio of credit exposures could suffer, with a predetermined confidence level, within a specified time horizon, commonly a year.
Building blocks for measuring credit risk are (Crouhy, Galai & Mark, 2000): a system for rating loans based on the probability of the borrower defaulting; assumptions about the correlation of default probabilities (PD, Probability of Default) across borrowers; assumptions about the loss incurred in the case of default (LGD, Loss Given Default); and assumptions regarding the correlation between PD and LGD. Each of these elements can also be found in
the Internal Ratings Based (IRB) Approach of the Basel Committee for calculating regulatory capital. Combined with the Exposure at Default (EAD) and the maturity of a loan, the Expected Loss (EL) can be calculated. In most systems, PD has two elements: a system for rating individual borrowers according to their creditworthiness, and a transition matrix which details how borrowers are expected to migrate, on average, to different rating classes over a given time horizon. With these two elements, it is possible to calculate the Value at Risk for each borrower over the relevant horizon.
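The expected-loss calculation mentioned above reduces, for a single exposure, to the product of the three Basel II parameters. The parameter values in this sketch are illustrative assumptions, not real portfolio data.

```python
# Expected Loss for a single exposure: EL = PD x LGD x EAD.
# The parameter values below are illustrative, not real portfolio data.

def expected_loss(pd, lgd, ead):
    """Expected loss in currency units over the chosen horizon (e.g. one year)."""
    return pd * lgd * ead

# A 1,000,000 unsecured loan with an assumed one-year PD of 2% and an LGD
# of 50% (the Basel II foundation-approach value for unsecured loans):
el = expected_loss(pd=0.02, lgd=0.50, ead=1_000_000)
print(el)  # -> 10000.0
```

Expected loss is priced into the margin; it is the unexpected loss around this figure that the capital requirement is meant to cover.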
There are several methods for determining credit ratings (Basel Committee, 2001): statistical-based, where a score is calculated from statistical data and a rating is assigned; constrained expert judgment, where a scoring model is used but a credit expert determines the rating; full expert judgment, where a credit analyst determines the rating using qualitative and quantitative data; and, further, external ratings obtained from credit rating agencies. Some rating systems rely on market-based information.
In these systems, PD is a decreasing function of the firm’s equity price and an increasing function of the firm’s leverage and the volatility of the equity price. Ratings provided by the major credit rating agencies are not conditioned on market prices but use information such as industry outlook, competitive advantage and management quality. The PDs across borrowers may be correlated: when an industry is in financial distress, many companies may find it hard to pay the interest on their borrowings. Loss given default (LGD) is an important variable in credit risk modeling. If a bank were guaranteed to receive all monies owed even in the event of a borrower defaulting, credit risk would be zero regardless of the probability of default (Lowe, 2002). However, loans are not always secured. In the foundation approach of the Basel Committee, LGD is set at 50% for unsecured loans. Credit risk models generally treat PD and LGD as independent variables, yet in times of economic downturn average default rates are higher and average asset values tend to be lower, resulting in higher LGDs. Credit risk models help banks to estimate their risk. Banks that have sophisticated models in place are rewarded with lower capital requirements under the Basel II Capital Accord. It is dangerous to depend on models only, and it therefore remains to be seen how well the models will do over economic cycles.
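The rating transition matrix introduced earlier in this section can be used to derive multi-period default probabilities by repeated matrix multiplication. The three-state matrix below (investment grade, speculative grade, default) is a hypothetical example, not agency migration data.

```python
# Hypothetical one-year rating transition matrix over three states:
# investment grade (IG), speculative grade (SG) and default (D).
# Rows sum to 1; default is absorbing. Values are illustrative only.

def mat_mult(a, b):
    """Multiply two square matrices given as lists of rows."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

T = [
    [0.95, 0.04, 0.01],  # IG -> IG, SG, D
    [0.10, 0.80, 0.10],  # SG -> IG, SG, D
    [0.00, 0.00, 1.00],  # D is absorbing
]

# Two-year transition probabilities: T squared.
T2 = mat_mult(T, T)
print(f"two-year PD from IG: {T2[0][2]:.4f}")
print(f"two-year PD from SG: {T2[1][2]:.4f}")
```

Note that the two-year PD from investment grade exceeds twice the one-year PD, because a downgrade to speculative grade in year one raises the default probability in year two.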
Chapter 3: risks in the building blocks
3.1.7. Summary Credit risk is defined as “the risk that the result and the equity of a bank are affected by a weakening creditworthiness of a creditor”. Credit risk must not only be managed at the level of individual loans, but also at portfolio level. Effective credit risk management needs good credit policies, expertise, information systems, control instruments and a measure for credit risk. The introduction of the Basel Capital Adequacy Standards has re-positioned credit risk management and has stimulated the development of tools to measure, compare and manage credit risk. Credit risk is expressed as a function of Probability of Default (PD), Loss Given Default (LGD), Exposure at Default (EAD) and their correlations.
3.1.8. References
Basel Committee on Banking Supervision (1999). Credit Risk Modelling: Current Practices and Applications. Bank for International Settlements, Basel, April, pp. 1-60.
Basel Committee on Banking Supervision (2000). Principles for the Management of Credit Risk. Bank for International Settlements, Basel.
Basel Committee on Banking Supervision (2001). The Internal Ratings-Based Approach, Consultative Document. Bank for International Settlements, Basel.
Crouhy, M., D. Galai & R. Mark (2000). A comparative analysis of current credit risk models. Journal of Banking and Finance, January, pp. 57-117.
Haar, H. ter & G. van der Linden (1991). Bank Management: Performance, planning en control. NIBE, Amsterdam.
Hempel, G. & D. Simonson (1999). Bank Management: Text and Cases, 5th edition. John Wiley & Sons, New York.
Job, I. (2007). Yesterday the dotcom bubble, today the subprime crisis, and tomorrow… Eco News, No. 69, September, Crédit Agricole S.A. Economic Research Department, Paris.
Jones, D. & J. Mingo (1998). Industry Practices in Credit Risk Modelling and Internal Capital Allocations: Implications for a Models-Based Regulatory Capital Standard. FRBNY Economic Policy Review, Federal Reserve Bank of New York.
Kasapi, A. (1999). Mastering Credit Derivatives, 1st edition. Prentice Hall.
Lowe, P. (2002). Credit risk measurement and procyclicality. BIS Working Paper No. 116, Bank for International Settlements, Basel, September, pp. 1-17.
Moyer, R., J. McGuigan & W. Kretlow (2001). Contemporary Financial Management, 8th edition. South-Western College Publishing, Cincinnati.
Rajan, R. (2008). Bankers' pay is deeply flawed. The Financial Times, January 9, London.
3.2. New developments in measuring credit risk Drs. Koen Munniksma Dr. Menno Dobber
In recent years the need for professional skills in the modeling and management of credit risk has increased rapidly, and credit risk modeling has become an important topic in finance and banking. Credit risk is the possibility of a loss arising when those who owe money to the bank do not fulfill their obligations. While in the past most interest was in assessing the individual creditworthiness of an obligor, more recently the focus has shifted to modeling the risk inherent in the entire banking portfolio. This shift in focus is caused in large part by the change in the regulatory environment of the banking industry. Banks need to retain capital as a buffer against unexpected losses on their credit portfolio. The level of capital that needs to be retained is determined by the central banks.
Since January 2007 a new capital accord, the Basel II Capital Accord, has been operative. The Basel II Capital Accord is the successor of the Basel I Accord. The capital accords are named after the place where the Bank for International Settlements (BIS) has its offices: Basel, Switzerland. The BIS supplies banks and other financial institutions with recommendations on how to manage capital. The influence and reputation of the Basel Committee on Banking Supervision (BCBS) is such that its recommendations are considered worldwide as “best practice”.
In this chapter we discuss credit risk measurement under the Basel II Capital Accord. First, the economic concepts behind credit risk measurement under the Basel II Capital Accord are discussed. Second, the improvements made in the Basel II Capital Accord are described in more detail.
3.2.1. Economic concepts The economic concepts behind the Basel II Capital Accord are most easily explained by means of Figure 5. The curve in Figure 5 represents the probability density function of a portfolio loss: it describes the likelihood of losses of a certain magnitude.
Figure 5: likelihood of losses
Expected Loss (EL) is the mean of the loss distribution. The EL is defined as the average level of credit losses a financial institution can reasonably expect to experience. Note that, in contrast to a normal distribution, the mean is not at the centre of the distribution but lies to the right of the peak. That occurs because the typical loss distribution of a credit portfolio is asymmetric: it has a long right-hand tail. Financial institutions view EL as a cost component of doing business. EL is managed by financial institutions in a number of ways, including through the pricing of credit exposures and through provisioning.
Unexpected Loss (UL) is the standard deviation of the loss distribution. In contrast to EL, UL is a risk associated with being in the business, rather than a cost of doing business. One of the functions of bank capital is to provide a buffer to protect a bank’s debt holders against losses that exceed expected levels. Banks often have an incentive to minimize the capital they hold, since reducing capital frees up economic resources that can be directed to profitable investments. However, if a bank holds less capital, the chance that it will not be able to meet its own debt obligations increases. Economic Capital (EC) is a risk measure that can be viewed as the amount of capital the bank needs to retain to cope with unexpected loan losses. The riskier the assets, the more capital is required to support them.
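A minimal Monte Carlo sketch of these two quantities, assuming a homogeneous portfolio with independent defaults (all parameters are invented for illustration):

```python
import random
from statistics import mean, stdev

random.seed(42)  # reproducible illustration

def simulate_portfolio_losses(n_loans=500, pd_=0.02, lgd=0.45,
                              ead=10_000, n_trials=5_000):
    """Monte Carlo losses for a homogeneous portfolio with independent defaults."""
    losses = []
    for _ in range(n_trials):
        defaults = sum(1 for _ in range(n_loans) if random.random() < pd_)
        losses.append(defaults * lgd * ead)
    return losses

losses = simulate_portfolio_losses()
el = mean(losses)   # Expected Loss: the mean of the loss distribution
ul = stdev(losses)  # Unexpected Loss: its standard deviation
```

With these parameters EL comes out near 500 × 0.02 × 0.45 × 10,000 = EUR 45,000; a real portfolio model would add default correlations, which fatten the right-hand tail described above.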
The main difference between EC and Regulatory Capital (RC) is that RC is the minimum capital required by the regulator (Basel II), whereas EC is the capital level bank shareholders would choose in the absence of capital regulation. The target insolvency rate of a financial institution determines the amount of EC a bank needs to retain. Many large commercial banks use a target insolvency rate of 0.03%. The target insolvency rate is directly linked to the credit rating of the financial institution: looking at historical one-year default rates, the probability of default for a financial institution rated AA is 0.03%.
The capital requirements described in the Basel II Capital Accord focus on the frequency of bank insolvencies arising from credit losses that supervisors are willing to accept. By means of a stochastic credit portfolio model it is possible to estimate the amount of loss which will be exceeded with a predefined probability. The area under the right-hand side of the curve is the likelihood that a bank will not be able to meet its own credit obligations from its profits and capital. The percentile or confidence level is then defined as 100% minus this likelihood, and the corresponding threshold is called the Value at Risk (VaR). VaR is one of the most widely used and accepted financial risk measures in the banking world. However, researchers have extensively criticized the use of VaR as a measure of risk (Artzner, Delbaen, Eber and Heath, 1999).
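Given a simulated loss distribution, the VaR at a given confidence level is simply an empirical quantile; a target insolvency rate of 0.03% corresponds to the 99.97% level. A sketch with invented portfolio parameters:

```python
import random

random.seed(7)

# Simulated one-year portfolio losses (independent defaults, invented numbers:
# 500 loans, 2% PD, EUR 4,500 loss per default, 2,000 trials)
losses = [sum(4_500 for _ in range(500) if random.random() < 0.02)
          for _ in range(2_000)]

def value_at_risk(losses, confidence=0.9997):
    """Empirical loss quantile: the threshold exceeded with probability 1 - confidence."""
    ordered = sorted(losses)
    idx = min(int(confidence * len(ordered)), len(ordered) - 1)
    return ordered[idx]

var_9997 = value_at_risk(losses)                 # 99.97% ~ AA target insolvency rate
var_99 = value_at_risk(losses, confidence=0.99)  # a looser confidence level
```

Note that estimating a 99.97% quantile reliably needs far more than 2,000 trials; the sketch only shows the mechanics.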
The Capital Requirements function used for the derivation of regulatory capital under the Basel II Capital Accord is based on a specific model developed by the BCBS. An important restriction was made to fit supervisory needs: the capital required for any given loan should only depend on the risk of that loan and must not depend on the portfolio it is added to. In other words, the model underlying the Basel Capital Requirements function should be portfolio invariant. Gordy (2003) has shown that essentially only so-called Asymptotic Single Risk Factor (ASRF) models are portfolio invariant. ASRF models are derived from “traditional” credit portfolio models by the law of large numbers: when a portfolio consists of a large number of relatively small exposures, idiosyncratic risks associated with individual exposures tend to cancel one another out, and only systematic risks that affect many exposures have an effect on portfolio losses. The systematic risks that affect all borrowers to a certain degree (e.g. industry and region risks) are modeled in the ASRF model by a single systematic risk factor. Banks are encouraged to use other credit risk models to fit their internal risk management needs.
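The resulting supervisory formula for corporate exposures can be written down directly. The sketch below follows the published risk-weight function but, for brevity, omits the maturity adjustment and the firm-size adjustment; inputs are illustrative:

```python
from math import exp, sqrt
from statistics import NormalDist

N = NormalDist().cdf        # standard normal CDF
N_inv = NormalDist().inv_cdf  # its inverse

def irb_capital_requirement(pd_: float, lgd: float) -> float:
    """Capital as a fraction of EAD per the Basel II corporate risk-weight
    function, without the maturity and firm-size adjustments."""
    # Supervisory asset correlation, moving from 0.24 towards 0.12 as PD rises
    w = (1 - exp(-50 * pd_)) / (1 - exp(-50))
    r = 0.12 * w + 0.24 * (1 - w)
    # PD conditional on the 99.9th percentile of the single systematic factor
    cond_pd = N((N_inv(pd_) + sqrt(r) * N_inv(0.999)) / sqrt(1 - r))
    return lgd * (cond_pd - pd_)  # capital covers UL; EL is priced/provisioned

k = irb_capital_requirement(pd_=0.01, lgd=0.45)  # roughly 6% of EAD
```

The single systematic factor enters only through the conditional PD, which is exactly why the charge for a loan does not depend on the rest of the portfolio.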
3.2.2. Basel II: improving credit risk measurement The Basel I Capital Accord represented the first step toward risk-based capital adequacy requirements. It helped to stabilize the declining trend in banks’ solvency ratios;
however, it suffered from several problems that became increasingly evident over time. The Basel I Capital Accord dates from 1988 and since then the financial world has changed dramatically. It has been criticized on three main grounds. The first point of criticism is that the Basel I Capital Accord provides inconsistent treatment of credit risks. For example, a loan to a relatively risky bank in the Netherlands requires less regulatory capital than a loan to a relatively less risky corporation. The second point of criticism is that the Basel I Capital Accord does not measure risk on a portfolio basis: it does not take account of diversification or concentration and there is no provision for short positions. For example, the amount of regulatory capital on a mortgage loan in a portfolio consisting only of mortgages is the same as when the mortgage loan is part of a portfolio consisting of a variety of loan products. The last point of criticism is that the Basel I Capital Accord provides no regulatory relief as models and management of capital improve. In January 2001 the BCBS released its proposal for a new Accord and in November 2005 the Basel II Capital Accord was published. The new Capital Accord attempts to improve on the Basel I Capital Accord on several points. The first improvement is that under the Basel II Capital Accord banks are granted greater flexibility to determine the appropriate level of capital to be held in reserve according to their risk exposure. Secondly, the Basel II Capital Accord focuses on enhancing the stability and reliability of the international financial system. Furthermore, the Basel II Capital Accord stimulates the improvement of risk management. The way banks can improve their credit risk management is not prescribed in the Basel II Capital Accord.
The accord, however, defines three methods from which banks can choose for organizing their credit risk management: the Standardized Approach (SA), the Foundation Internal Ratings Based Approach (F-IRB) and the Advanced Internal Ratings Based Approach (A-IRB).

Approach                    Complexity   Accuracy   Capital Charge
The Standardized Approach   Low          Low        High
Foundation IRB Approach     Medium       Medium     Medium
Advanced IRB Approach       High         High       Low

Table 3: high-level summary of the Basel approaches
In Table 3, a high-level summary is presented of the differences between the three approaches. There is a big difference between the standardized approach and the two IRB approaches. The SA relies on ratings from external credit rating agencies (e.g. Moody’s, S&P and Fitch) and is essentially an extension of the Basel I system; it is the approach that banks must use at a minimum. The IRB approaches make use of internal credit models. Banks can only use these two approaches if their data management is of a high standard: a data history of five years for various products and clients is a requirement. The migration from the SA to the IRB approaches is complex, demands a lot of effort from staff and requires more computer hardware capacity. However, the increase in labor intensity caused by an IRB approach is accompanied by considerable savings in capital charges (BCBS, 2006).
3.2.3. Summary The Basel II Capital Accord is based on some of the same risk measurement concepts used in the internal credit risk models that the more sophisticated banks use. However, embedded in its goal of achieving tractability and uniformity are a number of supervisory parameters and simplifying assumptions. As a consequence, the structure of the Basel II Capital Accord differs from the internal structure of banks’ credit risk models. It is important to look for cases in which the Basel II measures are more conservative than a bank’s credit risk model, as these might create incentives for the bank to engage in new forms of regulatory capital arbitrage.
One of the most important assumptions of the Basel II Capital Accord is that the bank’s portfolio is well diversified and does not contain any significant concentrations of individual borrowers. Given this assumption, the credit risk model embedded in the Basel II Capital Accord is not designed to be sensitive to variations in concentration risk across banks. The underlying risk of a bank’s portfolio can therefore not be fully reflected in the regulatory capital calculation. Take for example a portfolio that is highly concentrated in a particular geographic market: the correlation between the assets in such a portfolio is almost certainly higher than the correlation values provided by Basel II. Although the BCBS had proposed a granularity adjustment to address single-borrower concentrations, it later dropped this proposal. The credit risk model of Basel II is therefore insensitive to single-borrower concentrations.
3.2.4. References
Artzner, P., F. Delbaen, J.-M. Eber & D. Heath (1999). Coherent measures of risk. Mathematical Finance, 9(3), pp. 203-228.
Basel Committee on Banking Supervision (1988). International convergence of capital measurement and capital standards. Bank for International Settlements.
Basel Committee on Banking Supervision (2003). Basel II: The New Basel Capital Accord – Second Consultative Paper. Bank for International Settlements.
Basel Committee on Banking Supervision (2003). Basel II: The New Basel Capital Accord – Third Consultative Paper. Bank for International Settlements.
Basel Committee on Banking Supervision (2005). An Explanatory Note on the Basel II IRB Risk Weight Functions. Bank for International Settlements.
Basel Committee on Banking Supervision (2006). Results of the fifth quantitative impact study. Bank for International Settlements.
Basel Committee on Banking Supervision (2006). History of the Basel Committee and its Membership. Bank for International Settlements.
Basel Committee on Banking Supervision (2006). Studies on credit risk concentration. Bank for International Settlements.
Elizalde, A. & R. Repullo (2004). Economic and Regulatory Capital: What is the difference? CEMFI Working Paper No. 0422.
Gordy, M. (2003). A risk-factor model foundation for ratings-based bank capital rules. Journal of Financial Intermediation, 12(3), pp. 199-232.
Jacobson, T., J. Lindé & K. Roszbach (2004). Credit risk versus capital requirements under Basel II: are SME loans and retail credit really different? Sveriges Riksbank Working Paper Series No. 162.
Merton, R. (1974). On the pricing of corporate debt: the risk structure of interest rates. Journal of Finance, 29(2), pp. 449-470.
Vasicek, O. (2002). Loan portfolio value. Risk Magazine, pp. 160-162.
3.3. Risk indicators Dr. Gerrit Jan van den Brink RA
Financial institutions (FIs) have recently focused more strongly on operational risk (Brink, 2002; Grinsven, 2009). Operational risk is the risk of a loss caused by failures or inadequacies in four risk cause categories: people, systems, processes and external factors (Risk Management Association, 2000). The regulatory requirements published in the Basel Committee’s International Convergence of Capital Measurement and Capital Standards (Basel Committee on Banking Supervision, 2005) and the Sound Practices for the Management and Supervision of Operational Risk (Basel Committee on Banking Supervision, 2003) have contributed to this new focus. The Sound Practices in particular (which are valid for all banks regardless of the approach they choose for the regulatory capital calculation) prescribe the identification and assessment of operational risks. One of the ways to identify risk is the implementation of risk indicators.
Risk management requires a future-oriented focus. In an ideal world a FI would have an empty loss database and self-assessment results showing an insignificant operational risk profile. The implementation of risk indicators is a prerequisite to move in the direction of that ideal world. In this chapter the process of risk indicator definition is described. After presenting the targets of risk indicators and their characteristics, the process of finding the right risk indicators is described.
3.3.1. Risk indicators defined Risk indicators can be defined as follows: risk indicators are parameters which focus on business processes or process bundles to predict upcoming changes in the operational risk profile of those business processes or process bundles. The most important word in the definition is the verb “predict”. Risk indicators focus on the future and are therefore essential instruments for the FI’s risk management. The time window available for reactions is critical: the earlier a change in the risk profile is detected, the better. The longer the time window for reaction, the bigger the chance to prevent any damage. A causal analysis is therefore a key condition for defining valid risk indicators. If a risk indicator only captures operational risk events, the risk is already manifest and in most cases only a short time period to react remains.
Damage can then not be prevented in most cases. Risk indicators should achieve the following targets: operational risk events should be prevented and unfavorable trends should be detected in time. The prevention of operational risk events can be effectively supported by an IT application. The system executes periodic measurements and checks whether the predefined thresholds have been exceeded. If a threshold has been exceeded, the staff responsible for the affected process automatically receive a message, enabling them to start remedial actions. Such solutions are more effective than those sending messages to risk controllers: especially when a quick response is needed, the message has to go to the people who can take immediate action. Some risks, however, cause an event over a long time period, developing slowly but steadily. The risk indicator still moves in the “green zone” but its values get a bit worse time after time. Such an unfavorable trend can indicate a need for action. Staff motivation is an example: if the motivation index gets worse and worse, it is time to react, even if the values are still below the threshold. A considerable amount of time elapses before the trust of staff is regained, since decreasing motivation is closely linked to mistrust and frustration. It should be taken into account that reactions to such trends take considerable time before they become effective; the time window for reaction “closes” in this case while the indicator is still in the “green zone”.
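The green-zone trend detection described here can be sketched as a simple check running alongside the hard threshold (indicator name, window size and values are invented for illustration):

```python
def trend_alert(values, threshold, window=4):
    """Flag a steadily worsening indicator even while every recent value is
    still below the hard threshold (i.e. still in the 'green zone')."""
    recent = values[-window:]
    still_green = all(v < threshold for v in recent)
    worsening = all(a < b for a, b in zip(recent, recent[1:]))
    return still_green and worsening

# A motivation-style index where higher readings are worse, hard threshold 100:
history = [40, 42, 47, 55, 63]
print(trend_alert(history, threshold=100))  # True: green zone, but deteriorating
```

In practice a smoothed trend (e.g. a moving average slope) would be less noisy than requiring a strictly monotone run, but the idea is the same: act before the threshold is ever touched.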
3.3.2. Risk indicator characteristics The achievement of the two described targets is determined by the characteristics of the various risk indicators. Some characteristics may seem commonplace, but experience with the definition of risk indicators shows that it is important to keep them in mind. The following characteristics of risk indicators can be identified: (a) risk indicators should be measured on a regular basis; (b) risk indicators should reflect the risk; (c) risk indicators need thresholds; (d) management should be informed after thresholds have been exceeded, in order to take action; (e) risk indicators should be measured on a timely basis; (f) risk indicators should detect changes in the risk profile before operational risk events become manifest; (g) risk indicators should be measured efficiently.
Measurement on a regular basis is necessary to prevent operational risk events and to detect unfavorable trends. A trend cannot be recognized in time if a number of measurement results are not available. The frequency, however, needs to be questioned: how often should a measurement be executed? There is no direct answer to this question, but the following approach may help to create a clearer picture: both the remaining time to react and the expected damage amount play an important role. If the time window to react after receipt of the threshold-excess message is rather narrow, the measurement frequency should be increased. The exact increase of the measurement frequency is determined by the expected loss and its standard deviation: if these values are high, the need for more frequent measurements becomes even higher.
The measurement frequency also determines the effectiveness of a risk indicator, since the risk awareness of the responsible manager can cause undesirable behavior. If the manager thinks that the measurement frequency is too low, he will not trust the messages from the risk indicator system; instead he will build a shadow system to monitor the risk more frequently. If he thinks that the measurement frequency is too high, he will start to neglect the messages from the risk indicator system, since he considers the monitoring too finely tuned; if a real problem occurs, he may then detect it too late to execute corrective actions. The characteristic that a risk indicator should reflect the risk seems unnecessary to state. However, when the risk indicators of FIs are reviewed, it appears that this characteristic is not always present.
For example, the following risk indicators in the financial industry can be mentioned: transaction volumes, IT network traffic and the number of open items on nostro accounts. Such risk indicators are often implemented because the measurements are already taking place and therefore no additional cost occurs. But when the reflection of risk is checked, the vulnerability of these indicators becomes apparent. The indicator “transaction volume” is used for performance measurement and cost allocation purposes. But what does it say about risk? If its value moves from 10,000 to 20,000, does this mean that the risk has increased? The question cannot be answered from these values alone. However, a risk increase could be possible based on these values if the transaction processing capacity is completely exhausted. If the usage of the transaction processing capacity is measured, an indicator has been found which is able to measure a change in the risk profile.
The same approach can be taken for the risk indicator “IT network traffic”. The number of open items on nostro accounts is also often used as a risk indicator. The risk to be monitored is the possibility that a FI has already credited a customer or counterparty account internally although the amount was not credited to its nostro account. This causes at least an interest expense, and in case of fraud the full amount may be lost. However, the number of open items does not measure this risk. If the open items are just one day old, no risk is manifest; if the open items are, for example, 30 days old, interest losses have already occurred and the monetary value may also be lost due to fraudulent actions. To measure the risk, the aging of the open nostro items is essential. The monetary volume should be aged as well, since an open item of EUR 50 is of totally different importance than an open item of EUR 1 billion.
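An aging of open nostro items by both count and monetary value, as argued above, might look like the sketch below (dates, amounts and bucket boundaries are invented for illustration):

```python
from datetime import date

# Illustrative open nostro items: (value_date, amount in EUR)
open_items = [
    (date(2024, 3, 1), 50.0),
    (date(2024, 2, 1), 1_000_000.0),
    (date(2024, 1, 5), 250_000.0),
]

def age_open_items(items, today, buckets=(7, 30)):
    """Bucket both the count and the monetary value of open items by age in days."""
    result = {f"<= {b}d": {"count": 0, "value": 0.0} for b in buckets}
    result[f"> {buckets[-1]}d"] = {"count": 0, "value": 0.0}
    for value_date, amount in items:
        age = (today - value_date).days
        for b in buckets:
            if age <= b:
                key = f"<= {b}d"
                break
        else:  # older than the last bucket boundary
            key = f"> {buckets[-1]}d"
        result[key]["count"] += 1
        result[key]["value"] += amount
    return result

aging = age_open_items(open_items, today=date(2024, 3, 5))
```

Here the single fresh EUR 50 item is harmless, while EUR 1.25 million sits in the over-30-days bucket — exactly the distinction a bare item count would miss.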
The need for thresholds results from the requirement to detect changes in the risk profile early. The effectiveness of a risk indicator depends on the implementation of this requirement. Thresholds should fit the risk appetite of the responsible process managers. If a manager thinks that a threshold is too low, he will not pay enough attention to warnings issued by the risk indicator; this may cause an operational risk event and subsequent damage which could have been avoided. If the manager thinks that the thresholds are too high, he will not trust the warning capability and will build his own monitoring instruments.
The threshold level is also determined by the available time to react. For example, if database capacity needs to be enlarged, it takes time before such an action can be completed. A database expansion cannot be executed during business hours, so a full day of production needs to be considered as well. If an elapsed time of four hours for a database expansion is taken into account, it should be considered how much capacity is used per hour. These values determine at which capacity usage level a threshold should be set in order to avoid damage. A timely measurement is also essential to stay within the reaction window. The importance is depicted in Figure 6.
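The database example boils down to a small formula: the alert threshold must leave at least the capacity that will be consumed during the expansion lead time. A sketch with invented numbers (in practice the lead time would also include waiting for the next maintenance window, as noted above):

```python
def capacity_threshold(total_capacity_gb: float, usage_per_hour_gb: float,
                       expansion_hours: float, margin_hours: float = 1.0) -> float:
    """Usage level at which an expansion must be triggered so that it
    completes before capacity runs out."""
    headroom = usage_per_hour_gb * (expansion_hours + margin_hours)
    return total_capacity_gb - headroom

# A 500 GB database growing ~10 GB/hour; expansion takes 4 hours (+1 hour margin):
print(capacity_threshold(500, 10, 4))  # 450.0 -> alert once 450 GB are in use
```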
Figure 6: example threshold level (the chain over time from risk driver to event to effect, illustrated by a fire: a smoke detector picks up the risk driver — smoke circulation — before the fire occurs, allowing prevention; after the event, only damage control limits the loss of assets)
The reaction time window ends shortly before the risk event occurs. Sometimes days remain to react, but in some cases only a few minutes are left. Actions should therefore be aligned to the remaining reaction time. The characteristic of a timely indication of changes in the risk profile of a financial institution is hard to guarantee. Especially if the elapsed time between the manifestation of the risk driver and the subsequent effect is short, changes in such risk drivers need to be identified at the earliest possible point in time. For example, changes in market volatility are messengers announcing an increase in transaction volumes. The last characteristic concerns the measurement efficiency of risk indicators. Risk indicators cause periodic expenses, since they need to be measured and the results sometimes flow into the risk indicator system through IT interfaces. The expenses for risk indicators should not exceed the potential savings in standard risk cost and capital cost; if they do, accepting the risk is better from an economic point of view.
3.3.3. Risk indicator definition process The successful implementation of risk indicators depends on fulfilling some conditions. Before a risk indicator project is started, meeting these conditions should be considered. The following three points should be investigated. First, does a control culture exist in the involved organizational units? Second, are monitoring procedures already implemented in the unit? Third, does a process description exist, are losses caused by risk events collected, and are self-assessments executed?
The control culture decides the success of the implementation of risk indicators. Culture is difficult to measure, but based on the following questions a picture can be drawn: is transparency promoted in the financial institution? Is a clear strategy available which also includes the risk appetite and tolerance of senior management? Is the implementation of the strategy monitored? Are errors seen as a moment to learn, or do involved staff have to take disciplinary consequences into account when errors are discussed (see also chapter 1)? Are remediation actions to solve weaknesses seriously planned and is their implementation monitored? This question list is definitely not exhaustive, but the answers give an initial picture of the robustness of the control culture. This culture needs to be “lived” on all organizational levels, since the higher levels always set an example for the lower ones. If an existing monitoring procedure is in place, it is a first indication of future-oriented management.
Management then obviously understands that undesired changes in the measured values require counteractions. Moreover, it is helpful if the risk indicator system can build on existing escalation procedures. An analysis of loss data and self-assessment results, together with risk capital values and process descriptions, shows the vulnerability of processes or systems. If losses with the same causes occur frequently in a process, a structural problem may be the issue. If this problem cannot be solved, monitoring by means of risk indicators can improve the situation. Insufficient process quality can be detected by means of self-assessments. If the quality issue is caused by an inherent problem, risk indicators can again be a solution. The risk capital can be broken down to processes and organizational units, showing the more risky areas in which the most serious weaknesses can be expected. In all cases risk indicators should not be implemented automatically, since solving the problem itself is still the most effective solution. Risk controllers especially are tempted to quickly implement a risk indicator in such cases. It should be remembered that risk indicators cause initial and periodic expenses.
The definition of risk indicators needs to be carefully prepared. This process starts with the selection of the workshop participants, who should have a sound understanding of the objects to be monitored. Central risk management or controlling should almost always be excluded from defining the risk indicators centrally, since they are not familiar with the details of all processes. The process managers’ experience regarding the processes’ inherent risks should not be underestimated: it is an important source for the definition of risk indicators. The end-to-end process view is important during the selection of the risk indicators, since the risk drivers should be identified as early as possible in the process. This requires a holistic perspective of both the workshop participants and the moderator.
The question often arises whether each organizational unit needs to handle the definition process on its own. Would it not be better if a central organization defined and maintained a central library of possible risk indicators from which each organizational unit could draw? Moreover, if risk indicators are not defined centrally, how could consistency be guaranteed with aggregation possibilities in mind? A uniform definition is also essential for benchmarking among organizational units. Currently, various financial institutions are building a common risk indicator library with the future target of benchmarking externally. It looks as if all arguments are in favor of a central library of uniformly defined risk indicators, from which the organizational units just select the ones fitting their needs. However, the downside of such an approach is the missing investigation of the inherent risks: risk drivers may not be detected and will then not be monitored. In Figure 7 the definition process is depicted.

Figure 7: risk indicator definition process, in six steps:
1. Identify the risk categories to be monitored (based on occurred losses, existing risk profiles and the process owner’s experience).
2. Identify the risk drivers (the causal drivers for an operational risk event).
3. Transform the risk drivers into measures (objective, efficient and preferably quantitative).
4. Define the measure as a risk indicator in the system; define alert and critical levels as thresholds; define the staff members to be informed when levels are exceeded.
5. Collect and analyse data (collect and approve RI values and comments).
6. Follow up (determine preventive/mitigating actions and follow up on their effectiveness).
At the start, the object to be monitored is determined. It can be one specific internal control measure (like the nostro account reconciliation), a process or an organizational unit. The selection is based on existing risk profiles. As described before, the risk drivers are determined in the second step and need to be made measurable in the third step. Measurability determines the success of a risk indicator. A quantitative measurement is to be preferred over a qualitative assessment, since it can mostly be derived from existing data. Moreover, a quantitative measurement is almost always more objective. The first three steps are the most important ones. In step four, the risk indicator and its attributes have to be defined. In the fifth step the monitoring phase starts, in which values are collected, compared to the thresholds and regularly analyzed by a risk controller. It is also important that warning signals are followed up, in order to ensure that the actual risk profile is aligned with the risk appetite and tolerance. Once this is achieved, the targets of measurement by means of risk indicators have been met.
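The threshold part of steps four and five can be sketched as a small routine; the indicator name and the alert/critical levels are hypothetical examples, not prescriptions from the chapter:

```python
# Minimal sketch of risk-indicator threshold monitoring (steps 4 and 5).
# Indicator name and threshold levels are hypothetical examples.

def classify_ri(value, alert_level, critical_level):
    """Compare a collected RI value against its alert and critical thresholds."""
    if value >= critical_level:
        return "critical"   # inform staff, trigger mitigating actions
    if value >= alert_level:
        return "alert"      # warning signal to be followed up
    return "normal"

# Example: number of failed nostro reconciliations per week.
ri_definition = {"name": "failed_reconciliations", "alert": 5, "critical": 10}

collected_values = [2, 6, 11]
statuses = [classify_ri(v, ri_definition["alert"], ri_definition["critical"])
            for v in collected_values]
print(statuses)  # ['normal', 'alert', 'critical']
```

In practice the "inform staff" step would be a notification to the owners defined for the indicator; here it is only a comment.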
3.3.4. Summary
Risk indicators are essential instruments for pro-active operational risk management. They should help to predict changes in the company's operational risk profile. Risk indicators have two targets: the prevention of operational risk events and the timely detection of unfavorable trends. A risk indicator needs certain characteristics in order to meet those targets; the remaining reaction time plays an important role. A first indication for the definition of risk indicators can be found in actual process descriptions, loss and self-assessment data, and risk capital numbers.
3.3.5. References
Basel Committee on Banking Supervision (2003). Sound Practices for the Management and Supervision of Operational Risk. www.bis.org
Basel Committee on Banking Supervision (2005). International Convergence of Capital Measurement and Capital Standards. www.bis.org
Brink, G. J. van den (2002). Operational Risk: The New Challenge for Banks. New York, Palgrave.
Grinsven, J.H.M. van (2009). Improving Operational Risk Management. IOS Press.
Grinsven, J.H.M. van and De Vries, H. (2009). Key Risk Indicators. Finance & Control, February, pp. 29-32.
Risk Management Association (2000). "Operational Risk: The Next Frontier." The Journal of Lending & Credit Risk Management (March): 38-44.
3.4. Combining probability distributions
Dr. ing. Jürgen van Grinsven
Dr. ir. Louis Goossens
The major difficulties and challenges which Financial Institutions (FI's) face are closely related to the identification and estimation of their level of exposure to Operational Risk (OR). FI's can use internal and external loss data as input to construct a probability distribution and thereby estimate their exposure to OR. Although internal loss data is considered the most important source of information, it is generally insufficient: there is a lack of internal loss data, and the internal data often has poor quality. To overcome these problems, FI's can supplement their internal loss data with external loss data. However, using external loss data raises a number of methodological issues such as reliability, consistency and aggregation.
Multiple expert judgment can be extremely important for constructing a probability distribution and helps to overcome the problems with internal and external loss data. Moreover, multiple expert judgment can be utilized in situations in which costs, technical difficulties, internal and external data issues, regulatory requirements or the uniqueness of the situation make it impossible to construct a probability distribution otherwise. However, no common method exists to combine probability distributions derived from multiple expert judgment. Current methods are often too complex and lack the flexibility to address the need for efficiency, effectiveness and satisfaction in practical use.
In this chapter we first discuss the literature background on the loss distribution, operational risk and expert judgment. Then, we propose a method for combining probability distributions. The distributions are derived in a risk self assessment with multiple experts. The method combines a mathematical and a behavioral aggregation method, and aims to improve the effectiveness, efficiency and satisfaction associated with its practical use. We applied this method to two business cases and present the results in this chapter. Finally, the main research issues as well as future developments are discussed.
3.4.1. Loss distribution, operational risk and expert judgment
Risk for a financial institution is defined in terms of earnings volatility. Earnings volatility creates the potential for loss, which in turn needs to be funded. It is this potential for loss that imposes a need for financial institutions to hold capital that will enable them to absorb losses. Losses can be divided into expected losses, unexpected losses and catastrophic losses, see Figure 8. This has significant implications for Operational Risk Management (ORM).

Figure 8: loss distribution (frequency versus impact; expected, unexpected and catastrophic losses, with the confidence level separating unexpected from catastrophic losses)
Expected Losses (EL), resulting from e.g. data entry errors or a branch robbery and the OR associated with them, can be reduced by setting up internal control procedures. The costs of such procedures are accounted for in the operations budget.

Unexpected Losses (UL), resulting from e.g. rogue trading, have to be covered using capital. The aim of capital is to absorb unexpected swings in earnings and ensure the capacity of a financial institution to continue to operate up to a particular confidence level.

Catastrophic Losses (CL), e.g. the bankruptcy of Barings Bank, are losses in excess of the budgeted loss plus the capital of a financial institution. FI's usually attempt to transfer this type of loss through insurance, which can help mitigate the resulting absolute losses for the FI.
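The decomposition into EL, UL and the catastrophic region can be illustrated numerically; in this sketch the simulated lognormal losses and all parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated weekly aggregate losses (arbitrary lognormal parameters).
losses = rng.lognormal(mean=10.0, sigma=1.0, size=100_000)

confidence = 0.999                            # confidence level for capital
expected_loss = losses.mean()                 # EL: covered by the operations budget
quantile = np.quantile(losses, confidence)
unexpected_loss = quantile - expected_loss    # UL: covered by capital
# Losses beyond the quantile are catastrophic: to be transferred, e.g. insured.
catastrophic_share = (losses > quantile).mean()

print(f"EL = {expected_loss:,.0f}, UL = {unexpected_loss:,.0f}, "
      f"P(catastrophic) = {catastrophic_share:.4f}")
```

The catastrophic share comes out near 1 − confidence by construction; in practice the tail beyond the quantile is exactly the part insurance is meant to absorb.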
Operational Risk (OR) can be defined as the risk of direct or indirect loss resulting from inadequate or failed internal processes, people and systems or from external events (RMA, 2000). Figure 9 illustrates that a loss is caused by an operational event, which in turn is caused by four different factors: processes, people, systems or external events.

Figure 9: causal model of operational risk (processes, people, systems and external events lead to an event, which leads to a loss)
Expert judgment is defined as: "the degree of belief, based on knowledge and experience, that an expert makes in responding to certain questions about a subject" (Clemen & Winkler, 1999; Cooke & Goossens, 2004). Experts usually assess the identified operational risks by evaluating their frequency of occurrence and the impact associated with the possible loss. Although it is sometimes reasonable to provide a decision-maker with individual expert assessment results, it is often necessary to aggregate the experts' assessments into a single one. This is founded on the fundamental principle that underlies the use of multiple experts: a set of experts can provide more information than a single expert. Moreover, consulting multiple experts can be viewed as increasing the sample size for providing the input to estimate a financial institution's exposure to operational risk. But how does one get the best assessment (aggregated results) from multiple experts?
3.4.2. Combining probability distributions
When eliciting multiple experts, e.g. in a risk self assessment, each expert assesses a number of operational risks. This results in a single (loss) distribution from each expert (such as presented in Figure 8). We are then confronted with the challenge of combining these single (loss) distributions. Following Grinsven (2009), we propose to combine mathematical and behavioral aggregation methods to aggregate the experts' results. Due to the restricted number of pages in this chapter we present only a short description of this method; more details can be found in Grinsven (2009).
Mathematical aggregation methods
In mathematical aggregation methods, a single 'combined' assessment is constructed per variable by applying procedures or analytical models that operate on the experts' individual assessments. The methods range from simple, e.g. the equally weighted average, to more sophisticated, e.g. Bayesian methods.
Several principles exist that can be used for mathematical aggregation methods, all of which are aimed at reducing inconsistency and bias. First, use simple aggregation methods because they perform better than more complex methods (Winkler & Poses, 1993; Clemen & Winkler, 1999; Armstrong, 2001a). Second, use equal weights unless you have strong evidence to support unequal weighting of experts' estimations (Armstrong, 2001c). Finally, according to Armstrong (2001a) it is important to match the aggregation method to the current situation in the financial institution.

Behavioral aggregation methods
Behavioral aggregation methods require experts, who have to make an estimate from partial or incomplete knowledge, to interact in some fashion (Clemen and Winkler, 1999; Cooke and Goossens, 2004). Possibilities include face-to-face 'manual' group meetings or 'computer supported' group meetings. Emphasis is sometimes placed on attempting to reach agreement or consensus, or just on sharing information (Goossens & Cooke, 2001).
Several principles exist that can be used to facilitate behavioral aggregation methods, all of which are aimed at reducing inconsistency and bias, more specifically overconfidence. First, use experienced facilitators to guide the experts through the assessment (Clemen & Winkler, 1999). Second, structure the experts' interaction. Specific procedures exist that can be used to structure and facilitate the expert interaction, e.g. the Delphi method, the Nominal Group Technique and the expert information technique. Third, use group interaction only when needed, e.g. to discuss relevant information (Clemen & Winkler, 1999). Fourth, when interaction is needed, use a devil's advocate to challenge the experts in the risk assessment with additional factual information; this needs to be prepared before the risk self assessment (RSA). Fifth, enable experts to share information: when information is shared, it is expected that the better arguments and information will carry more weight in influencing the group, while redundant information will be discounted (Clemen & Winkler, 1999).
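The simplest mathematical aggregation, the equally weighted average (linear opinion pool), can be sketched as follows; the expert distributions over loss buckets are invented for illustration:

```python
import numpy as np

# Each row: one expert's probability distribution over the same loss buckets
# (e.g. <10k, 10k-100k, >100k euro). Numbers are illustrative only.
expert_assessments = np.array([
    [0.70, 0.25, 0.05],
    [0.60, 0.30, 0.10],
    [0.80, 0.15, 0.05],
])

# Equally weighted means method: average the distributions bucket by bucket.
combined = expert_assessments.mean(axis=0)
print(combined)  # pooled distribution; still sums to 1
```

Because each expert's row sums to one, the equally weighted pool is again a valid probability distribution; unequal weights would only be justified by strong evidence about expert performance, in line with the principles above.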
3.4.3. Improved effectiveness, efficiency and satisfaction
In this section we present the results of the two cases conducted at two business units of a large Dutch FI. The combination method applied in the test cases uses both mathematical and behavioral methods: experts' estimations were combined using the equally weighted means method, and the Nominal Group Technique was used to reach consensus. The results are structured in terms of effectiveness, efficiency and satisfaction. In each session the participants were asked to rate the novel situation in comparison to the contemporary situation. Rating took place on a seven-point scale, with 1 being most negative, 4 being equal and 7 being most positive. See Grinsven (2009) for a detailed description of the method used.

Effectiveness
To measure the effectiveness of the method, the participants were asked to respond to four statements. Table 4 presents an overview of the statements and the aggregated results derived from the assessments of the individual experts.

Statement | Case 1: μ, σ, impr. | Case 2: μ, σ, impr.
The GSS session was more effective than a manual session | 5.24, 1.01, Yes | 5.01, 0.88, Yes
The GSS session helped the group to generate the most important ideas and alternatives | 5.24, 1.07, Yes | 4.84, 1.18, Yes
The GSS session increased the quality of outcomes of the session | 5.32, 0.72, Yes | 4.77, 1.07, Yes
The outcomes of the sessions met my expectations | 5.09, 1.04, Yes | 4.50, 1.10, Yes

Table 4: effectiveness (μ = average, σ = standard deviation, impr = improved yes/no)
Table 4 indicates that in both test cases the average response of the experts to the four statements is > 4. Therefore we can conclude that the effectiveness of the method has improved compared to the contemporary situation.

Efficiency
To measure the efficiency of the method, the participants were asked to respond to three statements. Table 5 gives an overview of the statements and the aggregated results derived from the assessments of the individual experts.

Statement | Case 1: μ, σ, impr. | Case 2: μ, σ, impr.
The available time has been used well | 4.84, 1.29, Yes | 5.10, 1.00, Yes
The agenda has been executed efficiently | 4.93, 1.23, Yes | 5.09, 0.86, Yes
Much time was spent in proportion to the result | 3.82, 1.56, No | 4.18, 1.47, No

Table 5: efficiency (μ = average, σ = standard deviation, impr = improved yes/no)
Table 5 indicates that in both test cases the average response of the experts to the first two statements is > 4, while the time spent in proportion to the result did not improve. On balance we conclude that the efficiency of the method has improved compared to the contemporary situation.

Satisfaction
To measure the satisfaction with the method, the participants were asked to respond to five statements. Table 6 gives an overview of the statements and the aggregated results derived from the assessments of the individual experts.

Statement | Case 1: μ, σ, impr. | Case 2: μ, σ, impr.
I feel satisfied with the way in which today's meeting was conducted | 4.79, 1.20, Yes | 4.79, 0.98, Yes
I feel good about today's meeting process | 4.76, 1.30, Yes | 4.88, 0.95, Yes
I found the progress of today's session pleasant | 5.06, 1.18, Yes | 4.50, 0.93, Yes
I feel satisfied with the procedures used in today's meeting | 4.71, 1.45, Yes | 5.12, 0.88, Yes
I feel satisfied about the way we carried out the activities in today's meeting | 4.68, 1.32, Yes | 4.88, 1.01, Yes

Table 6: satisfaction (μ = average, σ = standard deviation, impr = improved yes/no)

Table 6 indicates that in both test cases the average response of the experts to the five statements is > 4. Therefore we can conclude that the satisfaction with the method has improved compared to the contemporary situation.
3.4.4. Summary
Our research results indicate that our method improves the combining of probability distributions. Our method combines mathematical and behavioral aggregation methods to construct a single loss distribution. We applied our method to two business cases in a large Dutch financial institution. We tested our method on three questions: (a) is our method more effective than the method previously used in the financial institution? (b) is our method more efficient than the method previously used? (c) does our method lead to more satisfaction when implemented in the business than the method previously used? Our research results indicate that our method improves effectiveness, efficiency and satisfaction when implemented in practice. We recommend further research into the combination of probability distributions, in particular into methods which can contribute to the aggregation when the standard deviation of a particular risk is high. In sum, our method can be used to provide more efficient, more effective and more satisfactory results to management.
3.4.5. References
Armstrong, J. S. (2001a). Selecting Forecasting Methods. In: J. S. Armstrong (ed.), Principles of Forecasting: A Handbook for Researchers and Practitioners. Boston/Dordrecht/London, Kluwer Academic Publishers: 364-403.
Armstrong, J. S. (2001c). Combining Forecasts. In: J. S. Armstrong (ed.), Principles of Forecasting: A Handbook for Researchers and Practitioners. Boston/Dordrecht/London, Kluwer Academic Publishers.
Chappelle, A., Crama, Y., et al. (2004). Basel II and Operational Risk: Implications for Risk Measurement and Management in the Financial Sector. J. Smets, National Bank of Belgium.
Clemen, R. T. and Winkler, R. L. (1999). "Combining Probability Distributions From Experts in Risk Analysis." Risk Analysis 19(2): 187-203.
Cooke, R. M. and Goossens, L. H. J. (2004). "Expert judgement elicitation for risk assessments of critical infrastructures." Journal of Risk Research 7(6): 643-656.
Goossens, L. H. J. and Cooke, R. M. (2001). Expert Judgment Elicitation in Risk Assessment. Delft University of Technology research report.
Grinsven, J.H.M. van (2009). Improving Operational Risk Management. IOS Press.
RMA (2000). "Operational Risk: The Next Frontier." The Journal of Lending & Credit Risk Management (March): 38-44.
Winkler, R. L. and Poses, R. M. (1993). "Evaluating and combining physicians' probabilities of survival in an intensive care unit." Management Science 39.
3.5. Data scaling for modeling operational risk
Dr. ir. Jan van den Berg
Heru S. Na
Lourenco C. Miranda
Drs. Marc Leipoldt
In probabilistic terms, Operational Risk (OR) modeling is about modeling loss distributions. The loss distribution portrays loss events, which are defined as incidents where monetary losses occurred. Having fitted the loss distribution, the goal is to use it to make inferences about the future behaviour of losses according to the risk profile of the financial institution (FI). In many cases, a type of Value-at-Risk calculation is applied to calculate the capital charge for OR. A future risk profile is usually estimated using internally experienced loss information of several event categories. This information may be readily found in the general ledger of the FI or in a specific repository of losses built for that purpose. The latter is the result of the gathering effort made by the various Business Lines (BLs) of the FI. Although some progress has been made (Office, 2005), generally speaking, financial institutions still do not have enough information available regarding the OR losses that occurred on their premises to enable meaningful forecasting. This conclusion is underpinned by the fact that, according to the Advanced Measurement Approach (AMA) of the Basel II framework (Basel, 2004), it is required to compose a loss distribution for each combination of eight BLs and seven OR loss event types (Alderweireld, 2006). In addition, the Basel Committee prescribes that OR calculations be based on an observation period of internal loss data of at least five years, and that assessments of the regulatory capital covering OR be based on a 99.9% confidence interval. Taking all these requirements into account, it is not surprising that there still exists a considerable shortage of data for the development of reliable OR models. So, something has to be done.
In this chapter, we apply direct scaling of variables according to a power law in order to enable comparison of OR data from different banks and different BLs. Suitable scaling variables may be related to FI (BL)-specific factors such as size (measured by exposure indicators like gross revenue, transaction volumes, number of employees, etc.) and the risk control environment (Shih, 2001). We analyze the distributions of the aggregate loss and the frequency of OR losses per unit of time, and of the severity of OR losses per loss event. We also show the outcomes of some of the tests we have performed and present a straightforward procedure for calculating the Value-at-OR. Finally, we discuss our findings from an OR management perspective.
3.5.1. Loss distribution and statistics
Using the so-called 'Loss Distribution Approach' (Chapelle, 2005), frequency and severity distributions of OR loss data can be combined (convoluted) in order to produce a loss distribution describing the distribution of the aggregate loss per period (Alexander, 2003). Here, the frequency distribution describes the number of OR loss events per unit of time, and the severity distribution represents the distribution of loss amounts per single OR loss event. By running a numerical simulation, the distribution of the sum of OR losses per unit of time is calculated, giving a typical distribution of OR losses as presented in Figure 10.
Figure 10: typical loss distribution
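This convolution is typically carried out by Monte Carlo simulation. A minimal sketch, assuming a Poisson frequency and a lognormal severity with invented parameters (the chapter itself does not prescribe these choices):

```python
import numpy as np

rng = np.random.default_rng(0)

n_periods = 50_000             # simulated weeks
freq_lambda = 3.0              # expected number of OR loss events per week
sev_mu, sev_sigma = 8.0, 1.5   # lognormal severity parameters (illustrative)

# For each simulated week: draw the number of events, then sum their severities.
counts = rng.poisson(freq_lambda, size=n_periods)
aggregate_losses = np.array([
    rng.lognormal(sev_mu, sev_sigma, size=k).sum() for k in counts
])

# The resulting empirical distribution plays the role of Figure 10.
print(f"mean weekly loss = {aggregate_losses.mean():,.0f}, "
      f"99.9% quantile = {np.quantile(aggregate_losses, 0.999):,.0f}")
```

Weeks with zero events contribute a zero loss, which is why the aggregate distribution has mass near the origin and a heavy right tail.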
This distribution can be used to calculate the Value-at-Risk for OR. We now discuss the statistical properties of the aggregate loss (stochastic) variable. A financial institution is usually divided into various Business Lines (BLs) b, (b = 1, …, B), each of which can, for modeling purposes, be viewed as an independent FI. The stochastic variable L_b describing the aggregate loss of BL b per time unit can be thought of as being caused by two components, namely the 'common component' R^com and the 'idiosyncratic component' r_b^idio (Shih, 2001):

    L_b = u(r_b^idio, R^com).     (1)
In our approach, the common component R^com is assumed to be stochastic and refers to the statistical influence on OR losses caused by general factors such as macroeconomic, geopolitical and cultural environments, general human nature, and more. The idiosyncratic component r_b^idio is assumed to be deterministic and refers to OR due to more specific factors such as the size and exposure towards operational risk of BL b (e.g. gross income). As a consequence, the effect of R^com on the probability distribution of L_b is thought to be common to all Business Lines, while the effect of r_b^idio is specific to the BL for which we try to compensate.
In order to implement a suitable compensation method, we need to make assumptions about the precise effect of the idiosyncratic component on the distribution of the stochastic variable L_b. The first assumption made here is that the total effect of R^com and r_b^idio can be decomposed, where r_b^idio is an indicator of the size of BL b. This indicator is denoted as s_b and will be represented by the gross income of the particular BL. Next to this, we assume that L_b scales with the gross income per period for a particular BL according to a power law. This results in the following equation that is supposed to hold:
    L_b = u(r_b^idio, R^com) = g(r_b^idio) · h(R^com) = (s_b)^λ · h(R^com),     (2)
where λ > 0 is a universal exponent, i.e., a number that is equal for all BLs b. Equation (2) suggests that the larger the BL, the larger the aggregate loss suffered. The proportion between the losses of different BLs is given by the scaling factor (s_b)^λ. Some thinking reveals that h(R^com) actually represents the aggregate loss L_st per period of the 'standard' BL with size s_st = 1. We then observe that equation (2) can compactly be written as

    L_b = (s_b)^λ · L_st     or, equivalently,     L_st = L_b · (s_b)^(-λ).     (3)
So, having available a set of OR loss sample data from several BLs (i.e., data from several probability distributions of L_b) and rescaling them using the right equation in (3) (i.e., multiplying the sample values originating from BL b by (s_b)^(-λ), b = 1, 2, …), we may assume that the complete set of rescaled data originates from just one distribution, namely that of the stochastic variable L_st. Vice versa, according to the left equation in (3), data corresponding to the distribution of L_b are found by multiplying all samples from L_st by the rescaling factor (s_b)^λ. Given the above-introduced modeling assumptions, we can derive several mathematical properties; the most important ones for this paper are given here. Using standard transformation properties of stochastic variables, the next scaling formulas are a direct consequence of the assumptions made above:
    μ_{L_b} = (s_b)^λ · μ_{L_st}     and     σ_{L_b} = (s_b)^λ · σ_{L_st}.     (4)

In other words, the scaling of the stochastic variables L_b also holds for their mean μ and standard deviation σ.
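The rescaling and pooling implied by the right equation in (3) can be sketched in a few lines; λ, the sizes and the lognormal samples are illustrative placeholders, not the chapter's data:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 1.0   # universal exponent λ (placeholder value)

# Loss samples per business line, each a scaled version of a 'standard' BL:
# L_b = (s_b)^λ · L_st, with L_st here taken lognormal(0, 1) for illustration.
sizes = {"BL_small": 2.0, "BL_large": 8.0}   # gross income s_b (illustrative)
samples = {b: s**lam * rng.lognormal(0.0, 1.0, 5_000)
           for b, s in sizes.items()}

# Right equation of (3): multiply each sample by (s_b)^(-λ) ...
rescaled = np.concatenate([samples[b] * sizes[b]**(-lam) for b in sizes])
# ... so all rescaled data can be treated as draws from the single L_st.
print(f"pooled standard-BL sample size: {rescaled.size}")
```

The payoff is exactly the one described in the text: data from several BLs, once rescaled, form one larger sample for the standard distribution.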
The left equation in (4) can be considered as a non-linear regression line through the data pairs (s_b, μ_{L_b}), where for each s_b the value μ_{L_b} equals the mathematical expectation of L_b. A linearized equation is obtained by taking the logarithm and by using the notations l_μ = ln(μ_{L_b}), s = ln(s_b), and i_μ = ln(μ_{L_st}), yielding

    l_μ = ln(μ_{L_b}) = λ·ln(s_b) + ln(μ_{L_st}) = λ·s + i_μ.     (5)

A similar analysis can be performed with respect to the right equation in (4): using the notations l_σ = ln(σ_{L_b}), s = ln(s_b), and i_σ = ln(σ_{L_st}), the next linear equation is obtained:

    l_σ = ln(σ_{L_b}) = λ·ln(s_b) + ln(σ_{L_st}) = λ·s + i_σ.     (6)

Equations (5) and (6) describe the hypotheses that the data pairs (s, l_μ), respectively (s, l_σ), lie on a linear regression line with the same gradient parameter λ. These hypotheses have been tested, some results of which are described in the next section.
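A least-squares estimate of the gradient parameter λ from (5) can be sketched on synthetic data; the 'true' λ of 1.05, the noise level and the size range are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
true_lambda, intercept = 1.05, -2.0   # invented 'true' parameters

# Synthetic (size, mean loss) pairs per BL, obeying ln μ = λ·ln s + i + noise.
s = np.exp(rng.uniform(6, 14, size=30))            # gross income per BL
mu = np.exp(true_lambda * np.log(s) + intercept
            + rng.normal(0, 0.2, size=30))         # mean aggregate loss

# Linear regression of l_μ = ln(μ) on s' = ln(s), as in equation (5).
lam_hat, i_hat = np.polyfit(np.log(s), np.log(mu), deg=1)
print(f"estimated λ = {lam_hat:.3f}, intercept = {i_hat:.3f}")
```

With real BU/BL data the same fit produces the gradient and R² values reported in the experimental results; here the estimate simply recovers the λ that generated the data.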
3.5.2. Experimental results
For the size variable s_b of each Business Line (BL) or Business Unit (BU) b, we use the daily gross income data of the year 2003, aggregated and divided by the number of weeks in a year to obtain the gross income per week. Loss data concerning the aggregate loss, frequency, and severity of OR losses have been made available by ABN-AMRO. Internal loss data are given per Business Unit, while the external loss data are given per Business Line (for example, BL Corporate Finance and BL Retail Banking), the latter according to the Basel Lines of Business categorization (Cole, 2001). For reasons of simplicity we have chosen to use these data directly instead of trying to translate the internal bank data per BU into data per BL. For each BL/BU, the OR losses per week have been calculated.
We start by showing the linear regression results with respect to the aggregate loss distribution based on equation (5), using both the internal data per BU and the external data per BL: see Figure 11. The data concern average aggregate OR losses per week for each BU and BL. From the left figure, we observe that the values of the gradient parameter λ of the two regression lines are quite close, namely 1.0205 and 1.1877. If we combine the data (right figure), we find a similar regression line where λ = 1.0606.

Figure 11: regression results. Left: regression with the means of amounts of OR losses, per BU and BL (external BL: l_μ = 1.0205·s − 2.3695, R² = 0.8914; internal BU: l_μ = 1.1877·s − 3.6506, R² = 0.5961). Right: combined data (l_μ = 1.0606·s − 2.6921, R² = 0.8099)
We performed the same experiments, but this time based on equation (6). The values found for the gradient parameter λ were respectively 0.8052 and 1.4482, and for the combination 1.0667. The latter is very close to the value found for equation (5) when using the combined data set. We also performed several statistical tests. In case of using the combination of internal and external data, the values of the gradient parameters are both significant (at significance levels of 99.99% and 99.95%, respectively) and, as observed already, they are almost equal. These results suggest the existence of a power law as hypothesized.

We also performed many experiments with respect to the frequency distribution and the severity distribution of OR losses. For the frequency case, we have concluded that there exists a universal power-law relationship between the frequency of operational loss per week and the size and exposure towards OR per week of the combination of external BLs and internal BUs, although the results are somewhat less significant. For the severity distribution, a power law does not seem to hold.
3.5.3. Calculating the value-at-operational risk
Since we have concluded that a power law holds for aggregate OR losses, we can apply a Value-at-OR (VaOR) calculation. To do so, the data from all BLs and BUs are first rescaled to data from the standard distribution using the scaling coefficient (s_b)^(-λ) according to the right equation of (3), with λ = 1.0637, the average value found. A histogram of these rescaled data is shown in the left part of Figure 12.

Figure 12: approximation of the probability density function of L_st found by data rescaling (left), and sorted standardized aggregate losses per week, with VaOR values at 95%, 99%, and 99.9% confidence level (right)

Having all together N sample values and choosing a confidence level α, the VaOR of the aggregate losses of the standard BU/BL is given by the j-th sorted loss, where j = α·N. Here, this VaOR has been calculated for three confidence levels. The resulting values are summarized in Table 7. E.g., for a time horizon of one week, the aggregate loss of the standard BU/BL at confidence level 99.9% equals 0.00363 (for reasons of confidentiality, the precise meaning of this number is not explained here).
Confidence level α (%) | j | VaOR (per week)
95 | 705 | 0.00019
99 | 735 | 0.00067
99.9 | 741 | 0.00363

Table 7: the VaOR for the standard BU/BL at three confidence levels
To find the VaOR of the aggregate losses of another BU/BL, we can apply scaling in a similar way as in equations (5) and (6), since these scaling equations also appear to hold for the VaOR. Knowing the VaOR of a BU/BL, it is easy to calculate the so-called OR capital charge according to the Basel Committee's general procedure (Alderweireld, 2006).
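The order-statistic VaOR calculation (j = α·N) and the scaling back to a BU/BL of a given size can be sketched as follows; the loss sample is a placeholder, with N = 742 chosen to be consistent with the j-values in Table 7:

```python
import numpy as np

rng = np.random.default_rng(3)
lam = 1.0637                      # average gradient estimate from the chapter

# Placeholder: rescaled ('standard BL') weekly aggregate losses, sorted.
standard_losses = np.sort(rng.lognormal(-8.0, 1.0, size=742))

def vaor(sorted_losses, alpha):
    """VaOR = j-th sorted loss with j = α·N (1-based index)."""
    j = round(alpha * len(sorted_losses))
    return sorted_losses[j - 1]

for alpha in (0.95, 0.99, 0.999):
    print(f"standard VaOR at {alpha:.1%}: {vaor(standard_losses, alpha):.6f}")

# Left equation of (3): scale the standard VaOR up to a BU/BL of size s_b.
s_b = 5.0                                       # illustrative gross income
vaor_bu = s_b**lam * vaor(standard_losses, 0.999)
print(f"VaOR for a BU/BL of size {s_b}: {vaor_bu:.6f}")
```

With N = 742, α = 0.95, 0.99 and 0.999 give j = 705, 735 and 741, matching the j-column of Table 7; the loss values themselves are simulated here and do not reproduce the confidential ones.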
3.5.4. Summary
The above findings support the belief that there exists a power-law relationship between the aggregate loss per week and the size and exposure towards operational risk per week. However, although the mean and standard deviation of OR losses scale in this way, it cannot be guaranteed that the full probability density function of operational losses scales in the same way.
Even allowing for the limited data available for these experiments, we think that the results provide interesting discussion points for the OR Management (ORM) practice. Especially from a capital adequacy perspective, the outcome is encouraging and opens up avenues for further research. The finding that the outlined relation holds for the aggregate loss amounts per period, but not for the severity per event and only to a limited extent for the number of losses per period, points in a clear direction in ORM terms. The lack of a relation between size and severity is a generally accepted rule. In fact, extreme losses have been incurred at small units within banks (e.g. Barings), while being large by no means exempts a bank from incurring extreme events. This part of the finding is therefore well within expectation.
From a theoretical point of view, one would expect that the variability of the aggregate amount is then accounted for by the frequency information. The fact that the equation holds much more strongly for aggregate amounts than it does for frequency may be confusing at first sight. We believe, however, that the explanation should be sought in the looseness with which loss recording is done. In the technical analysis, we assume the losses to have been accurately recorded. In practice, however, it is well known that in the ORM discipline, debate over the way in which losses are recorded (is this one event of 10 million, or is it 10 events of 1 million?) can lead to fierce political debates. In that sense, we trust the aggregate loss amounts per period to be a far more accurate representation, and far more comparable between BUs, BLs and firms, than either frequency or severity data by itself. This argument strongly supports the results found.
Extensions to this research are wide open. Most importantly, however, it seems necessary to further validate the outcomes using new, larger data sets. If the outcomes of new experiments turn out to be in line with what we have found, the application of data scaling for the calculation of VaOR and other risk measures may be standardized according to the Advanced Measurement Approach (AMA) of the Basel II framework (Basel, 2004). For more details of the research results reported here, we refer to Na (2005) and Na (2006).
3.5.5. References
Alderweireld, T., Garcia, J., and Léonard, L. (2006). A Practical Operational Risk Scenario Analysis Quantification. Risk Magazine, February, pp. 92-94.
Alexander, C. (2003). Operational Risk: Regulation, Analysis and Management. Prentice Hall.
Basel Committee on Banking Supervision (2004). International Convergence of Capital Measurement and Capital Standards: A Revised Framework. Tech. Rep., Bank for International Settlements, Basel, Switzerland, June.
Chapelle, A., Crama, Y., Hübner, G., and Peters, J.-P. (2005). Measuring and Managing Operational Risk in the Financial Sector: An Integrated Framework. Tech. Rep., Social Science Research Network, February.
Cole, R., et al. (2001). Working Paper on the Regulatory Treatment of Operational Risk. Tech. Rep., Basel Committee on Banking Supervision, Basel, Switzerland.
Na, H.S., van den Berg, J., Couto Miranda, L., and Leipoldt, M. (2005). Data Scaling for Operational Risk Modeling. Technical Report ERS-2005-092-LIS, Erasmus Research Institute of Management (ERIM), Erasmus University Rotterdam.
Na, H.S., van den Berg, J., Couto Miranda, L., and Leipoldt, M. (2006). An Econometric Model to Scale Operational Losses. Journal of Operational Risk, 1(2), Summer.
Office of the Comptroller of the Currency et al. (2005). Results of the 2004 Loss Data Collection Exercise for Operational Risk. Tech. Rep., May.
Shih, J. (2001). On the Use of External Data for Operational Risk Quantification. Risk Architecture, Citigroup.
Chapter 3: risks in the building blocks
3.6. Characteristics of diversified hedge fund portfolios Drs. Paul de Jong
Investors planning to invest in hedge funds face various difficult questions. In which hedge fund strategy am I going to invest? Do I choose an individual fund or a diversified portfolio of hedge funds? Several hedge fund strategies realize returns which are not normally distributed. Therefore, the choice of optimization technique matters when constructing a diversified hedge fund portfolio. Does the investor choose the classic framework, mean variance (MV) analysis, which is theoretically appropriate when returns are normally distributed? Or does the investor choose an optimization technique such as mean downside risk (MDR) analysis, where risk is defined as downside risk?
The central question in this chapter is: “In which proportions do the allocations of the MV construction technique differ from those of the MDR construction technique across the various hedge fund strategies?” To answer this question we first provide some background information on hedge funds. Secondly, we describe several hedge fund strategies and their performance. Thirdly, we describe the asymmetry of hedge fund returns. Fourthly, we compare the use of MV and MDR analysis for the construction of optimal hedge fund portfolios. Finally, we present our conclusions.
3.6.1. Hedge funds A frequently heard argument for investing in hedge funds is that the funds produce returns similar to those of a representative market index, while the risk (standard deviation) is comparable with bonds. Through these characteristics the risk/return relationship, or “Sharpe ratio”, is exceptionally favorable for hedge funds. The Sharpe ratio is defined as the excess return of the fund divided by the standard deviation of the fund. Brooks and Kat (2002) demonstrate that hedge fund strategies produce more downside risk than upside risk. The MV construction technique is then not applicable because it does not incorporate the non-normally distributed returns of the hedge funds into the framework. The hedge funds which are subject to downside risk frequently show high Sharpe ratios. When an investor creates hedge fund portfolios using the MV construction technique, there is therefore a risk of over-allocation to the strategies which are subject to downside risk. An investor will not prefer this technique if he wants to avoid poor results. It becomes obvious that an investor needs to incorporate the non-normally distributed returns when constructing hedge fund portfolios in order to minimize downside surprises.
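The Sharpe ratio defined above can be computed directly from a return series. A minimal sketch follows; the function name and sample figures are ours, not from the chapter.

```python
import numpy as np

def sharpe_ratio(returns, risk_free=0.0):
    """Per-period Sharpe ratio: mean excess return over its standard deviation.

    Note the limitation discussed in the text: this measure ignores skewness
    and kurtosis, so it can flatter strategies that carry downside risk.
    """
    excess = np.asarray(returns, dtype=float) - risk_free
    return excess.mean() / excess.std(ddof=1)
```

For monthly returns of 2%, 0% and 4% with a zero risk-free rate, the mean excess return (2%) equals the sample standard deviation (2%), giving a Sharpe ratio of 1.0.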
In recent years more investors have allocated investments to hedge funds. These investment decisions were the result of the assumption that hedge funds produce positive returns in bear markets as well as in bull markets. Because of this allocation strategy the hedge fund sector has grown strongly in the past ten years. In 1990 there were 2,000 hedge funds operating with an invested amount of $70 billion. At the end of 2003 there were 8,000 hedge funds active with an invested amount summing up to $750 billion (Van Hedge Fund Advisors, 2004). The first hedge fund was founded by Alfred Jones in 1949 with the objective of hedging market risk. The strategy of Jones's hedge fund can be characterized as typically market-neutral: by buying undervalued equities (long) and selling overvalued equities (short), the market risk is eliminated.
Hedge funds have a different structure than normal investment funds, which are widely available to the public. Because of this, hedge funds do not have to comply with certain regulations, for example the Securities Exchange Act. Therefore, hedge funds are able to carry out flexible strategies. In contrast to ordinary investment funds, hedge funds can use short selling, leverage and derivatives, and perform arbitrage strategies in several markets. This flexibility offers hedge funds the possibility to hedge the market risk of their investments. The popularity of the hedge fund sector in recent years has resulted in many academic studies of the value of hedge fund investments. Anson (2002) and Edwards and Caglayan (2001) show that hedge funds offer diversification possibilities and moreover perform better in bear markets than long-only equity investments. Because most hedge fund strategies are not perfectly correlated with market indices, they give the appearance of being market-neutral investments.
More recent academic research disputes the assumption that hedge fund investments are market neutral. See for example Asness, Krail and Liew (2001), Fung and Hsieh (2002) and Ennis and Sebastian (2003). These researchers show through several regression models that hedge fund investments are far from market neutral. Moreover, the alpha of several hedge fund strategies decreases when returns are corrected for stale prices. Alpha is the excess return that remains when returns are corrected for exposures to different types of systematic risk factors, for example the stock and bond markets. Furthermore, there are academic studies that have researched the characteristics of optimal hedge fund portfolios created by several optimization techniques. Krokhmal, Uryasev and Zrazhevsky (2002) show that standard optimization techniques, which assume that returns are normally distributed, lead to riskier portfolios than would be the case when we take into account the non-linearities of the return distributions of hedge funds. Brooks and Kat (2002) show that hedge fund returns are not normally distributed; therefore, traditional performance measures like the Sharpe ratio and Jensen's alpha are not applicable when evaluating hedge fund returns. The research of McFall Lamm (2003) shows that large negative returns can be limited by using alternative optimization techniques. Agarwal and Naik (2004) show that a large part of the stock-related hedge fund strategies show payoffs similar to short positions in put options on market indices. As a consequence these hedge funds show significant left-tail risk, which is not taken into account when using the mean variance framework. These academic studies indicate that when constructing optimal hedge fund portfolios we must take into account the non-normally distributed returns of several strategies. In this chapter optimal portfolios are presented using the mean variance and mean downside risk construction techniques.
3.6.2. Description and performances of hedge fund strategies In Figure 13 the cumulative performances of the nine CSFB/Tremont hedge fund strategies are displayed for the period January 1994 through August 2003. Figure 13 shows that the various hedge fund strategies result in different returns. All hedge fund strategies, with the exception of managed futures and dedicated short bias, performed badly during the Russian bond crisis and the fall of the Long-Term Capital Management fund in August 1998. The nine hedge fund strategies from the CSFB/Tremont database are briefly described below.
[Figure not reproduced: cumulative index values ($, scale 50 to 350) per month for Managed Futures, Global Macro, Emerging Markets, Long/Short Equity, Fixed-Income Arbitrage, Equity Market Neutral, Event-Driven, Convertible Arbitrage and Dedicated Short Bias.]
Figure 13: hedge fund performances Jan 1994 – August 2003 (CSFB/Tremont)
The dedicated short bias funds take short positions in shares and derivatives; as the name implies, these are short sellers. The emerging market funds invest in equities and bonds in emerging markets. These funds performed badly during the Asia crisis (July 1997 up to the end of 1998). The equity market-neutral funds take both long and short positions in equities. The portfolio is constructed in such a way that there is no exposure to the equity market. The market-neutral funds take advantage of inefficiencies in the equity markets, and they show stable returns over the entire period examined. The managed futures funds invest in financial, commodity and currency markets. The managers of these funds are frequently called Commodity Trading Advisors (CTAs). The long/short funds invest mainly in the equity markets. The managers of these funds concentrate on stock picking, and these funds do not aim to be market neutral. They performed very well during the bull market (until March 2000) but badly during the bear market (2001 and 2002). The event-driven funds are active in the equity markets. The fund managers base their strategies on the occurrence of certain events, such as mergers, acquisitions and reorganizations. The global macro funds look for fundamental inefficiencies in the markets which will result in substantial movements in market prices, for example movements in exchange rates and equity prices. These funds performed reasonably well during the bear market (2001 and 2002). The fixed-income arbitrage funds try to profit from price anomalies between closely related fixed-income products. The convertible arbitrage funds try to profit from mispricings in convertible bonds; for example, these funds take an exposure to the volatility or credit risk of a bond. The equity exposure is hedged through a short position in the underlying equity.
3.6.3. Asymmetric Hedge Fund Returns In Figure 14 the returns of the aggregate hedge fund index and the nine sub-indices are presented for the period January 1994 up to August 2003.

Portfolio                  | Monthly Return | Std. Deviation | Skewness | Kurtosis | Minimum | Maximum | Jarque-Bera Test**
Aggregate Hedge Fund Index |  0.89%         | 2.49%          |  0.10    |  1.60*   |  -7.55% |  8.53%  |    9.67
Convertible Arbitrage      |  0.83%         | 1.40%          | -1.53*   |  3.87    |  -4.68% |  3.57%  |   48.92
Event-Driven               |  0.90%         | 1.77%          | -3.39*   | 22.18*   | -11.77% |  3.68%  | 2000.23
Equity Market-Neutral      |  0.85%         | 0.90%          |  0.20    |  0.15*   |  -1.15% |  3.26%  |   40.03
Fixed-Income Arbitrage     |  0.55%         | 1.16%          | -3.20*   | 16.02*   |  -6.96% |  2.02%  | 1017.32
Long/Short Equity          |  0.98%         | 3.22%          |  0.24    |  3.20    | -11.43% | 13.01%  |    1.31
Emerging Markets           |  0.65%         | 5.20%          | -0.54*   |  3.53    | -23.03% | 16.42%  |    7.00
Global Macro               |  1.19%         | 3.55%          | -0.03    |  1.85    | -11.55% | 10.60%  |    6.41
Managed Futures            |  0.61%         | 3.53%          |  0.03    |  0.56*   |  -9.35% |  9.95%  |   28.79
Dedicated Short Bias       | -0.02%         | 5.23%          |  0.90*   |  2.10*   |  -8.69% | 22.71%  |   19.58

* Differs significantly from the normal distribution with 95% reliability
** Critical value of the J-B test is 9.21 for a 99% reliability interval (J-B > 9.21: returns not normally distributed)
Figure 14: hedge fund returns CSFB/Tremont indices January 1994 – August 2003
Figure 14 indicates that most of the hedge fund strategy returns differ significantly from the normal distribution. The normal distribution is perfectly symmetric: 50 percent of the probability lies above the mean. Accordingly, the skewness and the kurtosis, which derive from the third and fourth moments of the distribution, are equal to 0 and 3 respectively. If the mean and standard deviation of a normal distribution are known, then the likelihood of every point in the distribution is also known. In the case of a negative skewness and a kurtosis greater than 3, the mean and variance do not provide enough information to determine the probability that a return lies within a given range of values. A negative skewness implies that large negative returns are more likely than large positive returns. This will not be preferred by investors who are risk averse. A kurtosis value greater than 3 implies that the distribution is peaked and shows fat tails: returns are more grouped around the mean and, moreover, there are more returns in the extreme tails. The normality of the return distributions is tested statistically through the Jarque-Bera (J-B) test. Based upon the J-B test we can conclude that no less than 7 of the 10 hedge fund return distributions are non-normally distributed. As a result of the non-normally distributed hedge fund returns, the standard deviation alone is not a suitable measure of risk; relying on it will lead to overconfidence in the returns of the different hedge fund strategies. Moreover, it implies that standard optimization techniques are not sufficient. When investors construct hedge fund portfolios they should take into account the non-normally distributed returns. By doing so, large irregular returns can be avoided.
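The moment calculations and the Jarque-Bera test used in Figure 14 can be sketched as follows. This is a simplified illustration using standard sample moments; the chapter's exact estimators may differ.

```python
import numpy as np

def moments_and_jb(returns):
    """Sample skewness, kurtosis and the Jarque-Bera statistic.

    JB = T/6 * (S**2 + (K - 3)**2 / 4); values above 9.21 reject
    normality at the 99% level, as in the note under Figure 14.
    """
    x = np.asarray(returns, dtype=float)
    t = x.size
    z = x - x.mean()
    var = np.mean(z ** 2)
    skew = np.mean(z ** 3) / var ** 1.5
    kurt = np.mean(z ** 4) / var ** 2      # equals 3 for a normal distribution
    jb = t / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)
    return skew, kurt, jb
```

A perfectly symmetric sample has skewness exactly zero, while a strongly right-skewed sample (e.g. exponential draws) produces a JB statistic far above the 9.21 critical value.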
3.6.4. Optimal Hedge Fund Portfolios Monthly hedge fund return data for the period January 1994 through August 2003 is used from the CSFB/Tremont database. The CSFB/Tremont indices are value-weighted net-of-fee indices. The nine sub-indices from the database, which are representative of the several hedge fund strategies, are used for the empirical research. Optimal hedge fund portfolios are created using two construction techniques, namely mean variance (MV) and mean downside risk (MDR) analysis. MV is the classic Markowitz (1959) approach. MDR is a downside-risk investment approach, in which downside deviations are calculated relative to a minimum acceptable return. In this empirical research the minimum acceptable return is set to the risk-free interest rate R_f. The technique best suited to creating optimal portfolios is the one which takes into account the non-normally distributed returns of the several hedge fund strategies. MV is the traditional approach to the investment choice problem. It can be expressed in terms of the expected returns and the variance of a portfolio of assets. Efficient portfolios are created by choosing a combination of assets for which the variance is minimized at a certain return level; a combination is then chosen which is consistent with the risk tolerance of the investor. MDR is an optimization technique in which only returns below a certain target return contribute to risk. This differs from the variance, which takes all return deviations relative to the expected return into account when measuring risk. The MDR optimization framework can be expressed as follows (Harlow, 1991):
Select the weight vector X by which MDR_n(R_f; X) is minimized, where

    MDR_n(R_f; X) = (1/T) * Σ_{t=1..T} [ max(R_f − R_{p,t}, 0) ]^n,  with n = 2    (1)

    R_p = Σ_j x_j E(R_j)    (2)

    Σ_j x_j = 1,  x_j ≥ 0    (3)

Here T is the number of return observations. R_f is the target return, in this case equal to the risk-free interest rate. R_p is the return on hedge fund portfolio P (R_{p,t} in period t). X = (x_1, …, x_J) is the vector of the weights of the several hedge fund strategies, and E(R_j) is the expected return on hedge fund strategy j.
Equation (1) is subject to two constraints: the combined weights (X) must sum to 100%, and the weights must be non-negative. These constraints rule out the possibility of short positions.
Summarized: an investor with a target return R_f determines the allocation weights to the J hedge fund strategies so as to reach the efficient point within the MDR efficient set. The MDR optimization problem cannot be solved analytically. This is because the downside risk of a combination of returns cannot be expressed as a function of the moments of the individual return distributions. This differs from the variance: the variance of a portfolio is the weighted sum of the individual variances and covariances, whereas the MDR cannot be expressed in this way. Nevertheless, we create efficient portfolios using a numerical method. For more relevant MDR literature see Brouwer (1997), Harlow and Roa (1989), Harlow (1991) and Sortino and Van der Meer (1991).
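The numerical method mentioned above can be sketched with a standard constrained optimizer. The code below is our illustrative implementation of equations (1) to (3), not the authors' code: an equality constraint on the expected portfolio return pins down one point on the efficient set, and SciPy's SLSQP solver enforces full investment and no short positions.

```python
import numpy as np
from scipy.optimize import minimize

def mean_downside_risk(weights, returns, target, n=2):
    """Equation (1): average n-th power shortfall below the target return."""
    port = returns @ weights                      # portfolio return per period
    shortfall = np.maximum(target - port, 0.0)    # only sub-target returns count
    return np.mean(shortfall ** n)

def mdr_portfolio(returns, target, required_mean):
    """One MDR-efficient point: minimize downside risk subject to full
    investment, no shorting, and a required expected portfolio return."""
    t_obs, n_funds = returns.shape
    mu = returns.mean(axis=0)
    constraints = [
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},          # weights sum to 100%
        {"type": "eq", "fun": lambda w: mu @ w - required_mean},  # hit the target mean
    ]
    bounds = [(0.0, 1.0)] * n_funds                               # no short positions
    w0 = np.full(n_funds, 1.0 / n_funds)
    result = minimize(mean_downside_risk, w0, args=(returns, target),
                      method="SLSQP", bounds=bounds, constraints=constraints)
    return result.x
```

Sweeping required_mean over a grid of return levels traces out the MDR efficient set, mirroring the construction of the twenty optimal portfolios in the results.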
3.6.5. Results Using the two optimization techniques, optimal MV and MDR portfolios have been created. Twenty optimal portfolios have been taken from the sample. For each of the two optimization techniques the twenty optimal portfolios produce equal returns but minimize the respective risk measure (standard deviation or downside risk). The return of the efficient portfolios varies between 8.88% and 11.16% per year. In Figure 15 the allocation patterns of the several hedge fund strategies are displayed for the twenty optimal MV and MDR portfolios. The allocation patterns of the two optimization techniques show a great many similarities. Both techniques allocate a large part to the equity market-neutral strategy. Equity market-neutral is a robust strategy: both the lower- and higher-return portfolios allocate heavily to this particular strategy. The lower-return portfolios allocate heavily to the convertible arbitrage, event-driven, fixed-income arbitrage, long/short equity and market-neutral hedge fund strategies. It should be noted that the MDR construction technique allocates a small part to the managed futures and emerging markets strategies. The more risky MDR portfolios allocate to only two strategies, namely equity market-neutral and global macro. At the same time the more risky MV portfolios also allocate a small part to the event-driven strategy.
Figure 15: allocation patterns
Figure 16: allocations optimal MV and MDR portfolios
The main difference between the MV and MDR allocations displayed in Figure 16 is that the MDR technique allocates on average less to the event driven and convertible arbitrage strategies. This is not surprising since these two strategies are negatively skewed with significant excess kurtosis. The MDR optimization takes into account the downside risk and thereby allocates less to these two strategies. MDR optimization also allocates more to the long/short equity, managed futures and dedicated short bias strategies. Large positive returns dominate in the return distributions of these strategies (positive skewness). Composition differences between the MV and MDR portfolios are illustrated in Figure 17. Figure 17 presents the average allocation to the various hedge fund styles for the twenty efficient portfolios, generated with the two optimization techniques.
74
Chapter 3: risks in the building blocks
The descriptives for the return distributions are also presented in Figure 17. The MV optimization technique allocates almost 17% to arbitrage strategies with a negative skewness in the return distribution (convertible, event-driven and fixed-income arbitrage). The MDR optimization technique allocates only 8% to the arbitrage strategies. Instead, MDR allocates more to strategies with a positive skewness in the return distribution, in particular equity market-neutral, long/short equity, managed futures and dedicated short bias. As a consequence the efficient MDR portfolios show a more positive skewness than the efficient MV portfolios. Figure 17 indicates that the portfolios produced by both optimization techniques have positive skewness. This indicates that, through diversification across several hedge fund strategies, the negative asymmetric characteristics of the individual hedge fund strategies partially disappear. Looking at the minimum and maximum returns, we conclude that the MDR portfolios perform better than the MV portfolios.

Hedge fund style       | MDR     | MV
Convertible Arbitrage  |   1.64% |   5.96%
Event-Driven           |   4.81% |   8.26%
Equity Market-Neutral  |  72.18% |  71.61%
Fixed Income Arbitrage |   1.83% |   2.61%
Long/Short Equity      |   3.72% |   1.42%
Emerging Markets       |   0.54% |   0.00%
Global Macro           |   8.89% |   6.09%
Managed Futures        |   1.15% |   0.00%
Dedicated Short Bias   |   5.22% |   4.05%
Total                  | 100.00% | 100.00%

Descriptives               | MDR    | MV
Monthly Return             |  0.84% |  0.84%
Monthly Standard Deviation |  0.84% |  0.80%
Kurtosis                   |  0.79  |  0.74
Skewness                   |  0.53  |  0.23
Minimum                    | -0.96% | -1.43%
Maximum                    |  3.51% |  3.25%

Figure 17: statistics for the two optimization techniques
From the results presented above it becomes clear that the return characteristics improve when the MDR optimization technique is used. All this is achieved without loss of return, since the two techniques generate identical returns for each portfolio. The return distributions of the efficient MV and MDR portfolios are displayed in Figure 18.
Figure 18: return distributions MV and MDR portfolios
When we study the results more closely we observe that, in the return distributions of the MDR portfolios, large negative returns occur less frequently than in the MV portfolios (the left tail is less thick). Moreover, we see that the return distributions of the MDR portfolios are less peaked than those of the MV portfolios (returns are relatively less grouped around the average). The right tail of the MDR return distribution is thicker than the right tail of the MV return distribution (large positive returns occur more often). Figure 19 shows the statistics of the twenty efficient portfolios created with the MV and MDR optimization techniques.
         Mean Variance Optimization                        | Mean Downside Risk Optimization
P  | Return | Sigma | MDR Risk | Skew  | Kur   | Min    | Max   | Sigma | MDR Risk | Skew | Kur   | Min    | Max
1  |  8.88% | 2.24% | 1.87%    | -0.05 | -0.17 | -0.94% | 2.37% | 2.35% | 1.59%    | 0.26 | -0.24 | -0.65% | 2.64%
2  |  9.00% | 2.27% | 1.87%    | -0.02 | -0.14 | -0.94% | 2.39% | 2.36% | 1.59%    | 0.24 | -0.18 | -0.71% | 2.65%
3  |  9.12% | 2.30% | 1.91%    | -0.01 | -0.09 | -0.94% | 2.42% | 2.45% | 1.63%    | 0.33 | -0.04 | -0.77% | 2.79%
4  |  9.24% | 2.33% | 1.91%    | -0.01 | -0.03 | -0.94% | 2.45% | 2.46% | 1.63%    | 0.31 |  0.03 | -0.74% | 2.78%
5  |  9.36% | 2.38% | 1.94%    | -0.02 |  0.04 | -0.98% | 2.51% | 2.50% | 1.66%    | 0.40 |  0.20 | -0.86% | 2.92%
6  |  9.48% | 2.43% | 2.01%    | -0.02 |  0.11 | -1.07% | 2.58% | 2.58% | 1.70%    | 0.37 |  0.23 | -0.79% | 2.95%
7  |  9.60% | 2.48% | 2.08%    | -0.01 |  0.20 | -1.15% | 2.67% | 2.60% | 1.73%    | 0.37 |  0.32 | -0.77% | 3.04%
8  |  9.72% | 2.54% | 2.18%    |  0.03 |  0.33 | -1.23% | 2.79% | 2.67% | 1.84%    | 0.41 |  0.41 | -0.80% | 3.14%
9  |  9.84% | 2.60% | 2.29%    |  0.07 |  0.46 | -1.32% | 2.90% | 2.74% | 1.87%    | 0.47 |  0.60 | -0.90% | 3.30%
10 |  9.96% | 2.67% | 2.36%    |  0.10 |  0.56 | -1.40% | 3.02% | 2.81% | 1.91%    | 0.49 |  0.68 | -0.87% | 3.38%
11 | 10.08% | 2.76% | 2.42%    |  0.13 |  0.64 | -1.48% | 3.16% | 3.01% | 2.04%    | 0.62 |  0.83 | -0.99% | 3.59%
12 | 10.20% | 2.85% | 2.46%    |  0.16 |  0.70 | -1.56% | 3.29% | 3.04% | 2.11%    | 0.57 |  0.88 | -0.95% | 3.67%
13 | 10.32% | 2.94% | 2.53%    |  0.19 |  0.74 | -1.64% | 3.43% | 3.08% | 2.15%    | 0.45 |  0.89 | -1.17% | 3.76%
14 | 10.44% | 3.05% | 2.60%    |  0.21 |  0.76 | -1.72% | 3.57% | 3.19% | 2.29%    | 0.54 |  1.01 | -1.17% | 3.89%
15 | 10.56% | 3.16% | 2.67%    |  0.23 |  0.80 | -1.80% | 3.72% | 3.26% | 2.42%    | 0.50 |  1.03 | -1.47% | 4.00%
16 | 10.68% | 3.29% | 2.70%    |  0.32 |  1.02 | -1.91% | 3.94% | 3.36% | 2.49%    | 0.48 |  1.04 | -1.55% | 4.07%
17 | 10.80% | 3.44% | 2.74%    |  0.40 |  1.19 | -2.01% | 4.15% | 3.46% | 2.67%    | 0.47 |  1.10 | -1.73% | 4.19%
18 | 10.92% | 3.62% | 2.81%    |  0.47 |  1.32 | -2.12% | 4.37% | 3.64% | 2.77%    | 0.53 |  1.18 | -1.79% | 4.40%
19 | 11.04% | 3.82% | 2.98%    |  0.51 |  1.34 | -2.20% | 4.58% | 3.85% | 2.88%    | 0.60 |  1.19 | -1.72% | 4.62%
20 | 11.16% | 4.05% | 3.26%    |  0.53 |  1.35 | -2.29% | 4.79% | 4.05% | 3.15%    | 0.60 |  1.21 | -1.92% | 4.84%

Figure 19: MV versus MDR portfolio descriptives
All MDR portfolios are more positively skewed than the corresponding MV portfolios. Studying the minimum and maximum returns, we note that the MDR portfolios perform better than the MV portfolios in all cases. Therefore, the central element when constructing efficient hedge fund portfolios is how risk is defined by the investor. By applying the two construction techniques, the negative skewness and excess kurtosis in the return distributions of the individual hedge funds change into positive skewness and significantly lower kurtosis.
3.6.6. Summary In this chapter we have demonstrated that downside risk can be explicitly taken into account through the use of the MDR construction technique. The two construction techniques show that a diversified multi-strategy hedge fund portfolio is attractive: none of the individual hedge fund strategies shows a better risk/return relationship than the optimal hedge fund portfolios created with the two construction techniques. Equity market-neutral is a robust strategy; both techniques allocate a significant part to it. The results show that the riskier hedge fund portfolios allocate more to the global macro hedge fund strategy, while the less risky portfolios allocate more to the equity market-neutral hedge fund strategy.
The MDR optimization technique only takes into account returns that are smaller than a certain target return. Because of this, large negative returns can be avoided as much as possible. The central element when constructing efficient hedge fund portfolios is how risk is defined by the investor. Investors who are averse to large negative surprises will be better off using the MDR construction technique when creating efficient hedge fund portfolios. The MDR construction technique allocates more than the MV technique to hedge fund strategies with less downside risk, in particular equity market-neutral, long/short equity and global macro. As a consequence the return distributions of the MDR portfolios show more positive skewness than those of the MV portfolios.
The results indicate that it really does matter which construction technique is used when creating diversified hedge fund portfolios. This is especially true when an investor wants to limit downside risk. From this point of view an investor can construct more efficient hedge fund portfolios using the MDR construction technique. This technique partially resolves the negative properties in the return distributions of the individual hedge fund strategies.
3.6.7. References
Agarwal, V., and Naik, N. (2004). Risk and Portfolio Decisions Involving Hedge Funds. The Review of Financial Studies, 17, pp. 63-98.
Anson, M. (2002). Symmetric Performance Measures and Asymmetric Trading Strategies. The Journal of Alternative Investments, 5, pp. 81-85.
Asness, C., Krail, R., and Liew, J. (2001). Do Hedge Funds Hedge? The Journal of Portfolio Management, 28, pp. 6-19.
Brooks, C., and Kat, H. (2002). The Statistical Properties of Hedge Fund Index Returns and Their Implications for Investors. The Journal of Alternative Investments, 5, pp. 26-44.
Brouwer, F. (1997). Applications of the Mean Downside Risk Investment Model. Ph.D. dissertation, Amsterdam: Vrije Universiteit.
Edwards, F., and Caglayan, O. (2001). Hedge Funds and Commodity Fund Investments in Bull and Bear Markets. The Journal of Portfolio Management, 27, pp. 97-107.
Ennis, R., and Sebastian, M. (2003). A Critical Look at the Case for Hedge Funds. The Journal of Portfolio Management, pp. 103-112.
Fung, W., and Hsieh, D. (2000). Performance Characteristics of Hedge Funds and Commodity Funds: Natural vs Spurious Biases. The Journal of Financial and Quantitative Analysis, 35, pp. 291-307.
Fung, W., and Hsieh, D. (2002). Hedge-Fund Benchmarks: Information Content and Biases. Financial Analyst Journal, 58, pp. 22-34.
Harlow, W. (1991). Asset Allocation in a Downside-Risk Framework. Financial Analyst Journal, 47, pp. 28-40.
Harlow, W., and Roa, K. (1989). Asset Pricing in a Generalized Mean-Lower Partial Moment Framework: Theory and Evidence. The Journal of Financial and Quantitative Analysis, 24(3), pp. 285-311.
Krokhmal, P., Uryasev, S., and Zrazhevsky, G. (2002). Risk Management for Hedge Fund Portfolios. The Journal of Alternative Investments, 5, pp. 10-30.
Markowitz, H. (1959). Portfolio Selection: Efficient Diversification of Investments. New York: Wiley.
McFall Lamm, R. (2003). Asymmetric Returns and Optimal Hedge Fund Portfolios. The Journal of Alternative Investments, 8, pp. 9-21.
Sortino, F., and van der Meer, R. (1991). Downside Risk. The Journal of Portfolio Management, 17, pp. 27-31.
Van Hedge Fund Advisors (2004). Financial Advisors: Their Need for Hedge Funds Accelerates. The Solution. Nashville, TN.
Risk managers are scrambling to understand the intricacies of Basel II and Solvency II. In their desire to understand market, credit, and insurance risks, have your risk managers relegated operational risk to an afterthought? How does your organization approach operational risk and formulate value propositions from this risk type?
M. Bozanic
4. Risks in running the business
Drs. Philip Gardiner
Drs. Gert Jan Sikking
In chapter two we argued that the activities that financial institutions perform can be broadly divided into two distinct sets. One set of activities is associated with ‘building’ the institution. The other set is associated with ‘running’ the institution, which is the focus of this chapter; these activities concern governance, management and operations. In this chapter we present a series of sections on non-financial risk: its identification, measurement and control. These are the risks associated with ‘running’ an institution. Within a financial institution there are a number of departments whose activities focus directly on aspects of non-financial risk exposure through participation in the overall operational risk and control framework. These departments, Operational Risk, Compliance, Legal, Human Resources, and Information Security, all perform specific risk control activities that add to the efficiency of ‘running’ the institution's business.
The following sections discuss specific approaches to controlling non-financial risk. They examine ways of approaching the control process and how to establish an institution-wide support system. Techniques for the identification and examination of risk are presented, along with methods to use the data gathered, provide predictive models and assess the institution's ability to respond in periods of stress. Taken as a whole, these activities enhance the operational risk and control framework and reduce the likelihood and impact of unexpected direct and indirect loss resulting from events. The reduction of the likelihood and impact of non-financial risks has a positive effect on profitability, and the resulting reduction of unexpected loss aids the institution's solvency improvement over time.
Chapter 4: risks in running the business
4.1. Putting operational risk into context Drs. Remco Bloemkolk
Operational Risk Management (ORM) increasingly supports senior decision makers in making informed decisions based on a systematic assessment of operational risk (Cumming and Hirtle, 2001; Cruz, 2002; Van Grinsven, 2009). ORM is to a significant degree driven by regulations (e.g. Basel II of the Bank for International Settlements, Solvency II currently being developed by the European Commission, the Swiss Solvency Test and the Individual Capital Assessments of the Financial Services Authority). As a result of regulatory changes, operational risk management has increasingly become a strategic issue and is more frequently being used to achieve competitive advantage (Clausing, 1994; Cruz, Coleman et al., 1998). Financial institutions use various tools and methods to comply with developing regulations. Currently, we see the following developments in practice: first, a more forward-looking operational risk management relying on a hybrid of loss data and expert judgement; second, a more integrated risk management and closer alignment of strategic planning and risk management. There is a need to support these developments by putting risk into context using scenario planning.
In this chapter we discuss how scenario planning can be used to support these developments. First, we provide background information regarding operational risk management and scenario planning. Second, we describe how scenario planning can support more forward-looking operational risk management. Finally, we show how scenario planning can support the development of more integrated risk management that is informed by the strategic objectives of the financial institution and key uncertainties in its business environment.
4.1.1. Operational risk management and scenario planning

Operational risk management can be defined as the systematic identification, assessment, monitoring, mitigation and reporting of operational risk (Grinsven, 2009). Business decisions in financial institutions are made under risk and uncertainty (Murphy & Winkler, 1977; Turban, Aronson et al., 2001; Butcher, 2006; Grinsven, 2009). Risk is an adverse event that, in contrast to uncertainty, is quantifiable in terms of probability and severity (Knight, 1971; Turban, Aronson et al., 2001; Cleary & Malleret, 2006). Scenario planning is used for strategy development in
many sectors including oil & gas, pharmaceuticals and financial services. Scenario planning is a method that some financial institutions use to make flexible long-term strategic plans that take into account strategic risks (Butcher, 2006). It is a method that has its origins in military intelligence (Meyer, 1987; Schwartz, 1991; Godet, 1993; Van der Heijden, 1996; Ralston, 2006).
The aim of scenario planning is to create a decision-making process that produces strategic options based on business environment scenarios. Such scenarios are frameworks for structuring management perceptions about alternative future environments in which their decisions might be played out (Van der Heijden, 1996; Butcher, 2006; Ralston, 2006; Cleary & Malleret, 2006). They help to mitigate the uncertainty regarding the future development of the business environment and provide insight into potential risks, opportunities and their potentially non-linear interrelationships, including those that may produce "Black Swans": unpredictable, low-frequency, high-severity events (Taleb, 2007). The insights these scenarios facilitate should not be confused with foresight. They facilitate open-minded reasoning about alternative ways the business environment may develop, taking its uncertainties as fundamental starting points.
Scenario analysis to describe a risk event is different from scenario planning. Financial institutions frequently use scenario analysis for risk identification and assessment purposes to supplement external and internal loss data. It may be seen as the identification and assessment of the root cause, severity and probability of a specific potential adverse event. Scenario analysis is used for assessing exposure to (extreme) risks for which insufficient historical data are available. Scenario analysis by itself does not ensure that the financial institution identifies the relevant risks or that they are assessed correctly (Karow, 2000; Harmantzis, 2003; Tripp, Bradley et al., 2004).
4.1.2. Scenario planning to support operational risk management

Scenario planning includes a number of activities: identification of key trends and driving forces, prioritizing the identified uncertainties, determination of early warning indicators, and mitigation and reporting (Van der Heijden, 1996; Ralston, 2006; Cleary & Malleret, 2006; Nekkers, 2007). Each scenario planning activity can support risk management activities. Here, we discuss the scenario planning activities and indicate how they can support risk management.
The first activity during scenario planning is the identification of key social, political, technological and economic trends, the driving forces for the financial institution and their interrelationships. It also includes the identification of uncertainties regarding the driving forces that may introduce new trends or change known trends. Examples of such uncertainties are: the impact of product innovation on the stability of financial markets, the impact of the emergence of Asia and of ageing in mature economies on financial services and its regulators, or the impact of the speedy adoption of new information technologies by consumers. This first activity should address a number of questions. Typical questions that can be asked during risk identification are: which risks are associated with the various trends and driving forces that are relevant for the organization? How are they correlated, if they can be correlated at all? How is the correlation between operational risks and other types of risk influenced by uncertainties?
The second activity in scenario planning is prioritizing the identified uncertainties. After prioritization, it is determined for two or three key uncertainties how the driving forces may interact with each other over a relevant business planning horizon. The result is a set of business environment scenarios describing the potential interplay of driving forces. These activities can support risk management during risk assessment. Typical questions that can be asked during risk assessment are: what are the probability and severity of existing risks, and how do they differ between scenarios? How are risks for the financial institution influenced by uncertainties, and how are these risks related to each other? What are the potential new and emerging risks in each scenario? What is the risk appetite and tolerance of the financial institution in each scenario?
The third activity in scenario planning is to determine scenario early warning indicators: indicators, related to the development of trends and driving forces, that signal that the business environment is developing in the direction of one of the scenarios. Examples of early warning indicators are: rapid growth of new financial markets in combination with a lowering of credit standards, increasingly harmonized or disharmonized financial services regulation, industry consolidation and youth culture. This third activity can support risk management in monitoring risks. Typical questions that can be asked during risk monitoring
are: how are the frequency and severity of key risks developing? Can they be related to early warning indicators of a particular business environment scenario? What does that mean for the risk appetite and tolerance of the financial institution?
The fourth activity in scenario planning is mitigation and reporting. During this activity it is determined what the strategic options to capture future opportunities are, and which option will be implemented and drive the strategy. Further, based on the perceived potential development of the business environment, it is determined how the financial institution can be prepared for alternative opportunities and options. This activity can support risk management during mitigation and reporting. Typical questions that can be asked during risk mitigation and reporting are: which risk controls need to be strengthened, and which can be reduced or eliminated, to execute the chosen strategic option? Which risk controls are needed to control new and emerging risks? With which parties do we need to engage in dialogue to mitigate the potential impact of new and emerging risks while reaping their opportunities? These activities indicate that scenario planning supports operational risk management by providing a dynamic, business environment driven context for the identification, assessment, monitoring and mitigation of operational risk. It indicates how risks may be related to opportunities presented by the business environment under varying circumstances. In this manner operational risk and its development may be associated with new and emerging risks in the business environment and the strategic objectives of the financial institution.
4.1.3. Scenario planning to support integrated risk management

As mentioned, part of leading practice is the development of further integrated risk management that is aligned with the strategic objectives of the financial institution. Leading practice furthers integrated risk management by integrating various policies into one policy house, different risk management systems into one risk management system, and risk management departments into one risk management organization. Another way to further integrate risk management and align it with the strategic objectives of the financial institution is by integrating the different perspectives on its business environment. Scenario planning supports this by providing a dynamic, business environment driven context for the identification, assessment, monitoring and mitigation of all risk types. The scenario planning activities described in the previous section support management, risk management and internal audit in understanding the business environment in an integrated way.
They do so because they take fundamental uncertainty as a starting point. Integrating the different perspectives (e.g. management, risk management and internal audit) on the potential development of the business environment further enhances the understanding of risks in relation to business opportunities. In this way it improves the flexibility and the ability to run the business under varying business environment circumstances. Figure 20 and Figure 21 provide an overview of how integrating perspectives on the business environment may contribute to further integrated risk management that is aligned with the strategic objectives of the financial institution.

Figure 20: isolated perspectives on the business environment (strategic/business, market, insurance, credit and operational risk are each viewed against their own, separate environment outlook)

Figure 21: integrated perspectives on the business environment (strategic/business, market, insurance, credit and operational risk are viewed against common business environment scenarios based on the perspectives of management, risk management and internal audit)
The business environment scenarios mentioned in Figure 21 capture in an integrated way the current outlook on the business environment and possible alternatives based on the potential uncertain interplay between driving forces. They indicate how existing, emerging and new risks may develop in relation to the opportunities presented by the business environment under varying external circumstances.
4.1.4. Summary

Incorporating scenario planning into risk management facilitates the ability to put operational and other risks into context. It thereby increases the understanding of which risks actually matter given the circumstances, supports the estimation of frequency and severity when loss data are not available, and supports the understanding of how operational and other risks may be related to each other and to opportunities presented by the business environment under varying circumstances.
[Note: Drs. Remco Bloemkolk has written this chapter in his personal capacity.]
4.1.5. References

Basel Committee on Banking Supervision (2006). Observed Range of Practice in Key Elements of Advanced Measurement Approaches (AMA). Bank for International Settlements.
Butcher, J., Turner, N., & Drenth, G. (2006). Navigating in the Midst of More Uncertainty and Risk. Journal of Applied Corporate Finance, 18(4).
Clemen, R. T., & Winkler, R. L. (1999). Combining Probability Distributions from Experts in Risk Analysis. Risk Analysis, 19(2).
Cruz, M. (2002). Modelling, Measuring and Hedging of Operational Risk. Wiley Finance.
Cumming, C., & Hirtle, B. (2001). The Challenge of Risk Management in Diversified Financial Companies. Federal Reserve Bank of New York Economic Policy Review.
Clausing, D. (1994). Total Quality Development: A Step-by-Step Guide to World-Class Concurrent Engineering. New York: ASME Press.
Cruz, M., Coleman, R., & Salkin, G. (1998). Modelling and Measuring of Operational Risk. Journal of Risk, 1, pp. 63-72.
Cleary, S. M., & Malleret, T. D. (2006). Resilience to Risk. Human & Rousseau.
Culp, C. L. (2002). The Art of Risk Management: Alternative Risk Transfer, Capital Structure, and the Convergence of Insurance and Capital Markets. Wiley Finance.
Fischer, E. (1995). Evaluating Public Policy. Wadsworth Publishing.
Folpmers, M., & Hofman, A. (2004). Risicocalculatie met driehoeken. Finance & Control, February.
Godet, M. (1993). From Anticipation to Action: A Handbook of Strategic Prospective. Unesco.
Grinsven, J. H. M. van (2009). Improving Operational Risk Management. IOS Press.
Harmantzis, F. (2003). Operational Risk Management: Risky Business. OR/MS Today, 30, pp. 30-36.
Heijden, K. van der (1996). Scenarios: The Art of Strategic Conversation. John Wiley & Sons.
Karow, C. (2000). Operational Risk: Ignore It at Your Peril. Ernst & Young Cross Currents, 2, pp. 13-17.
Knight, F. H. (1971). Risk, Uncertainty and Profit. Augustus M. Kelley.
Meyer, H. (1987). Real World Intelligence. Weidenfeld & Nicolson.
Murphy, A. H., & Winkler, R. L. (1977). Reliability of subjective probability forecasts of precipitation and temperature. Applied Statistics, 26, pp. 41-47.
Nekkers, J. (2007). Wijzer in de toekomst: Werken met toekomstscenario's. Business Contact.
Ralston, B., & Wilson, I. (2006). The Scenario Planning Handbook: A Practitioner's Guide to Developing Strategies in Uncertain Times. Thomson.
Schwartz, P. (1991). The Art of the Long View. Doubleday.
Stever, R., & Wilcox, J. A. (2007). Regulatory Discretion and Banks' Pursuit of Safety in Similarity. BIS Working Papers, 235. Bank for International Settlements.
Taleb, N. N. (2007). The Black Swan: The Impact of the Highly Improbable. New York: Random House.
Tripp, M. H., Bradley, H. L., Devitt, R., Orros, G. C., Overton, G. L., Pryor, L. M., et al. (2004). Quantifying Operational Risk in General Insurance Companies. Institute of Actuaries.
Turban, E., Aronson, J. E., & Bolloju, N. (2001). Decision Support Systems and Intelligent Systems (6th ed.). New Jersey: Prentice Hall.
Yankelovich, D. (2006). Profit with Honour. Yale University Press.
4.2. Solvency II: dealing with operational risk
Dr. ing. Jürgen van Grinsven
Drs. Remco Bloemkolk
While a more principle-based approach may allow for operational risk management and measurement solutions that best fit the insurance company, its strategic direction and its risk profile, there is a risk that operational risk receives less priority than it deserves. It may for instance be argued that existing insurance risk models already cover operational risk, or that the capital requirement for operational risk is smaller than that for other risks and therefore needs less priority. In addition, operational risk costs could be seen as part of the claim costs. However, one can only draw lessons and understand the financial impact of mistakes when the true costs of operational risk are clear. Operational risk management and measurement in insurance companies become more effective when the costs of operational risk can be distinguished from the total costs. Although insurers have historically focused on understanding and managing investment and underwriting risk, recent developments in operational risk management, guidelines by the rating agencies and the forthcoming Solvency II regime increase insurers' focus on operational risk. Inevitably, insurers have to decide on their approach to managing operational risk.
4.2.1. Solvency II framework

The Solvency II framework consists of three pillars, each covering a different aspect of the economic risks facing insurers, see Figure 22. This three-pillar approach aims to align risk management and risk measurement. The first pillar relates to the quantitative requirement for insurers to understand the nature of their risk exposure. As such, insurers need to hold sufficient regulatory capital to ensure that (with a 99.5% probability over a one-year period) they are protected against adverse events. The second pillar deals with the qualitative aspects and sets out requirements for the governance and risk management of insurers. The third pillar focuses on disclosure and transparency requirements by seeking to harmonize reporting and provide insight into insurers' risk and return profiles.
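The Pillar 1 calibration (99.5% confidence over a one-year period) can be made concrete with a small simulation. The sketch below is purely illustrative: the lognormal loss distribution and its parameters are assumptions, not figures from this chapter. It estimates the capital needed to withstand all but the worst 0.5% of simulated one-year losses:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical one-year aggregate loss distribution (EUR millions).
# A lognormal is a common illustrative choice for skewed loss data.
losses = rng.lognormal(mean=3.0, sigma=0.8, size=100_000)

expected_loss = losses.mean()
# Pillar 1 style requirement: survive a 1-in-200-year loss,
# i.e. the 99.5th percentile of the one-year loss distribution.
var_995 = np.percentile(losses, 99.5)

# Capital is often quoted as the unexpected loss above the mean.
capital = var_995 - expected_loss
print(f"expected loss: {expected_loss:.1f}m, 99.5% VaR: {var_995:.1f}m, "
      f"capital: {capital:.1f}m")
```

The same percentile logic applies whether the loss distribution comes from a standard formula calibration or from an insurer's internal model.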
Figure 22: Solvency II framework (Grinsven & Bloemkolk, 2009). The framework covers underwriting, investment, credit, liquidity and operational risk through three pillars: Pillar 1, minimum standards (quantitative requirements, implementation); Pillar 2, supervisory review (qualitative requirements, control); and Pillar 3, market discipline (disclosure and transparency requirements, disclosure).
4.2.2. SII attention points

Solvency II (SII) is the updated set of regulatory requirements for insurance companies operating in the European Union. It revises the existing capital adequacy regime and is expected to come into force in 2012. In Table 8 we have summarized a number of attention points in SII. The SII framework is relatively principle based, i.e. less prescriptive than Basel II. The introduction of a Solvency Capital Requirement (SCR) in addition to a Minimum Capital Requirement (MCR) allows for earlier regulatory intervention. Understanding operational risk is increasingly important for insurance companies, in particular the relationship between risk management and measurement. To calculate regulatory capital, insurers can use a standard formula or an internal model. Although the standard formula allows diversification, it does not allow for any diversification benefits between operational risk and other types of risk. This may encourage insurance companies to treat operational risk as a separate problem, unrelated to the chosen business mix or direction. Furthermore, the standard formula may not be sufficiently conservative. Operational risk calculations based on internal models will need to pass the following seven tests: data, statistical quality, calibration, validation, profit and loss attribution, documentation and use. Although insurers who seek greater insight into their operational risk can choose the internal model approach, SII may not provide sufficient incentive for using it. The challenge for many insurers will be to develop an internal model that appropriately reflects the business and risk management decisions of management. This is more important than having a model that is technically very sophisticated.
The scope of Solvency II: European Union.
Principle based or rule based: relatively principle based.
Capital requirements: solvency capital requirement and minimum capital requirement.
Risk management / measurement: emphasizes the relationship between risk management and measurement.
Diversification: diversification between operational risk and other risk types is allowed.
Approach to calculate regulatory capital: standard model or internal model approach.
Partial use of internal models: allowed, depending on where internal models add the most value.

Table 8: Solvency II attention points
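The diversification point above can be illustrated numerically. In a standard-formula style aggregation, the quantifiable risk modules are combined through a correlation matrix, while the operational risk charge is added on top and therefore earns no diversification benefit. The capital figures and correlations below are hypothetical, chosen only to show the mechanics:

```python
import numpy as np

# Hypothetical stand-alone capital charges (EUR millions).
scr = np.array([100.0, 80.0, 60.0])           # market, credit, insurance
corr = np.array([[1.00, 0.50, 0.25],
                 [0.50, 1.00, 0.25],
                 [0.25, 0.25, 1.00]])         # assumed correlations
scr_op = 30.0                                 # operational risk charge

# Correlated aggregation of the diversifiable modules: sqrt(s' C s).
bscr = float(np.sqrt(scr @ corr @ scr))

# Operational risk is added without any diversification benefit.
total_no_div = bscr + scr_op

# For contrast: if operational risk could diversify at, say, 0.25
# correlation with the other modules, the total would be lower.
scr4 = np.append(scr, scr_op)
corr4 = np.pad(corr, ((0, 1), (0, 1)), constant_values=0.25)
corr4[3, 3] = 1.0
total_with_div = float(np.sqrt(scr4 @ corr4 @ scr4))

print(f"BSCR: {bscr:.1f}m, without op diversification: {total_no_div:.1f}m, "
      f"with: {total_with_div:.1f}m")
```

Because simple addition is always at least as large as correlated aggregation, treating operational risk as a stand-alone add-on produces a higher total requirement, which is the incentive problem the text describes.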
4.2.3. SII expected benefits

Solvency II has a number of expected benefits, both for insurers and for consumers. Although the most obvious benefit seems to be preventing catastrophic losses, other less obvious benefits which are considered important are summarized in Table 9. From an insurer's perspective, SII can help to better meet the long-term expectations of policyholders. Although there are some open issues, SII can enable better insight into risk and cost drivers through a stronger relationship between risk management and measurement. From a consumer perspective it can be argued that reducing the risk of failure is important to guarantee the insurance policy (also see the section with examples of large incidents and failures). Further, when insurers are able to improve their risk management, this can lead to a more transparent range of products, better matching the consumer's needs, and a decrease of costs.

Expected benefits for the insurer:
- can better meet the long-term expectations of policyholders;
- allows principle-based internal risk and capital assessment models;
- enhanced insight into risk and cost drivers through a closer relationship between risk management and measurement;
- increased confidence in the financial stability of the insurer;
- provides supervisors with early warning so that they can intervene promptly if capital falls below the required level.

Expected benefits for the consumer:
- reduced risk of failure or default by an insurer;
- risk more accurately reflected in the costs of insurance and investment contracts;
- more transparent products;
- better match between products and individual requirements.

Table 9: Solvency II expected benefits (Grinsven & Bloemkolk, 2009)
These expected benefits make SII an increasingly important issue for insurers. Not surprisingly, solvency has evolved into an academic discipline of its own, and much of its literature is aimed at the quantitative requirements. Despite the progress made in SII, the next section indicates that insurers will also encounter a number of difficulties and challenges in operational risk before they can realize these expected benefits.
4.2.4. Operational risk

Over the past few decades many insurers have capitalized on the market and have developed new business services for their clients. On the other hand, the operational risks that these insurers face have become more complex, more potentially devastating and more difficult to anticipate. Although operational risk may not require the largest percentage of the solvency capital requirement, it is possibly the largest threat to the solvency of insurers. This is because it is a relatively new risk, and it has been identified as a separate risk category in Solvency II. Moreover, the Committee of European Insurance and Occupational Pensions Supervisors (CEIOPS) argues that operational risk is an area that requires special attention. Operational risk is defined as the capital charge for 'the risk of loss arising from inadequate or failed internal processes, people, systems or external events'. This definition is based on the underlying causes of such risks and seeks to identify why an operational risk loss happened, see Figure 23. It also indicates that operational risk losses result from complex and non-linear interactions between risk and business processes.

Figure 23: dimensions of operational risk (Grinsven, 2009). A loss results from an event rooted in processes, people, systems or external events.
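The cause-and-event view in Figure 23 is commonly quantified with a frequency-severity (loss distribution) approach: model how often operational events occur and how severe each one is, then combine the two by simulation. The sketch below is a generic illustration with assumed Poisson frequency and lognormal severity parameters, not a calibration from this chapter:

```python
import numpy as np

rng = np.random.default_rng(seed=7)

n_years = 50_000            # simulated one-year periods
lam = 12.0                  # assumed mean number of loss events per year
mu, sigma = 10.0, 1.5       # assumed lognormal severity parameters (EUR)

annual_losses = np.empty(n_years)
counts = rng.poisson(lam, size=n_years)    # frequency: events per year
for i, n in enumerate(counts):
    # Severity: size of each individual event, summed over the year
    # (years with zero events contribute a zero loss).
    annual_losses[i] = rng.lognormal(mu, sigma, size=n).sum()

print(f"expected annual loss: {annual_losses.mean():,.0f}")
print(f"99.5th percentile:    {np.percentile(annual_losses, 99.5):,.0f}")
```

In practice the frequency and severity parameters would be estimated from internal and external loss data, supplemented by expert judgement where data are scarce.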
Several studies in different countries have attributed insurance company failures to under-reserving, under-pricing, under-supervised delegation of underwriting authority, rapid expansion into unfamiliar markets, reckless management, abuse of reinsurance, shortcomings in internal controls and a lack of segregation of duties. Unbundling operational risk from other risk types in risk management and risk measurement can help prevent future incidents and failures. This holds true for smaller and larger losses. Often, larger losses are the cumulative effect of a
number of smaller losses. In other words, they can be the result of the bad practices that flourish in excellent economic circumstances, when the emphasis is on managing the business rather than operational risks.
4.2.5. Examples of large incidents and failures

Insurance company failures in which operational risk played a significant role include:
The near-collapse of the Equitable Life Assurance Society in the UK, which resulted from a culture of manipulation and concealment, where the insurer failed to communicate details of its finances to policyholders or regulators.
The failure of HIH Insurance, which resulted from the dissemination of false information, money being obtained by false or misleading statements, and intentional dishonesty.
American International Group (AIG) and Marsh, where the CEOs were forced from office following allegations of bid-rigging. Bid rigging, which involves two or more competitors arranging non-competitive bids, is illegal in most countries.
Delta Lloyd, ASR verzekeringen, SNS Reaal and Nationale Nederlanden (the Netherlands) agreed to compensate holders of unit-linked insurance policies for the lack of transparency in the product cost structures. For more details see www.veb.net.
The above examples illustrate that such losses are not isolated incidents in the insurance industry. If not properly managed, they can occur with some regularity. Given these high-profile events, insurers need to be increasingly aware of the commercial significance of operational risk. The large loss events mentioned above can be classified into operational risk categories. This is not an easy task, because operational risk losses result from complex and non-linear interactions between risk and business processes. Table 10 presents several examples of operational risk categories and insurer exposure.

Internal fraud: employee theft, claim fabrication.
External fraud: claim fraud, falsifying application information.
Employment practices and workplace safety: repetitive stress, discrimination.
Clients, products and business practices: client privacy, bad faith, redlining.
Damage to physical assets: physical damage to own offices, own automobile fleets.
Business disruption and system failures: processing centre downtime, system interruptions.

Table 10: operational risk categories and insurer exposure (Grinsven & Bloemkolk, 2009)
The forthcoming Solvency II regime will present a number of difficulties and challenges for the operational risk management activities of insurers.
4.2.6. Difficulties and challenges in insurers' operational risk management

Insurers have not historically gathered operational risk data across their range of activities. As a result, the major difficulties and challenges that insurers face are closely related to the identification and estimation of the level of exposure to operational risk. A distinction can be made between internal and external loss data, risk self-assessment, and supporting techniques, tools and governance. See Table 11 for an overview.

Loss data:
- lack of internal loss data;
- quality of internal loss data;
- applicability of internal loss data;
- aggregation of internal loss data;
- reliability of external loss data;
- consistency of external loss data;
- applicability of external loss data;
- aggregation of external loss data.

Risk self-assessment:
- the risk self-assessment process is labor-intensive;
- static view of risk self-assessments;
- inconsistent use of risk self-assessments;
- quality of results;
- subjectivity of results;
- assessments are only refreshed annually;
- approaches tend to focus on expected losses;
- low-frequency, high-impact assessments can be arbitrary, resulting in significant over- or understatement of solvency and economic capital requirements.

Techniques, tools and governance:
- biases of interviewees are not understood;
- chasing changing loss data;
- techniques and tools are not shared in the insurance firm;
- techniques have a bad fit with tools;
- coordination of large data volumes;
- linkage between qualitative approaches and the scenario analysis used;
- governance of the risk department versus the actuarial department;
- key risk indicators do not link back to the causal factors identified.

Table 11: difficulties and challenges concerning operational risk at insurers
Loss data form the basis for measuring operational risk. Although internal loss data are considered the most important source of information, they are generally insufficient because of the scarcity and often poor quality of such data. Insurers can overcome these problems by supplementing their internal loss data with external loss data from consortia such as ORX and ORIC and from several commercial database providers. Using external loss data raises a number of methodological issues, including problems of reliability, consistency and aggregation. Consequently, insurers need to develop a documented, disciplined approach and improve the quality of their data and data-gathering techniques.
Risk self-assessment (scenario analysis) can be an extremely useful way to overcome the problems of internal and external loss data. It can be used in situations in which it is impossible to construct a probability distribution, whether for reasons of cost or because of technical difficulties, internal and external data issues, regulatory requirements or the uniqueness of a situation. It also enables insurers to capture risks that relate, for example, to new technology and products as these risks are not likely to be captured by historic loss data. However, current scenario analysis methods are often too complex, not used consistently throughout a group and do not take adequate account of the insurer’s strategic direction, business environment and appetite for risk.
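One common way to make such a self-assessment usable for measurement is to translate expert answers (how often per year, a typical loss, a worst case in roughly one event out of twenty) into frequency and severity parameters. The conversion below is a simplified illustration with hypothetical expert inputs; real programmes would also correct for the expert biases mentioned in Table 11:

```python
import math

# Hypothetical expert answers for one scenario.
events_per_year = 4.0        # "about four times a year"
typical_loss = 50_000.0      # median loss per event (EUR)
worst_in_20 = 400_000.0      # loss exceeded roughly 1 in 20 events

# Fit a lognormal severity: the median fixes mu,
# the 95th percentile fixes sigma.
z95 = 1.6449                 # 95th percentile of the standard normal
mu = math.log(typical_loss)
sigma = (math.log(worst_in_20) - mu) / z95

# Expected loss per event for a lognormal, then per year.
mean_severity = math.exp(mu + 0.5 * sigma ** 2)
expected_annual_loss = events_per_year * mean_severity
print(f"sigma = {sigma:.2f}, expected annual loss = {expected_annual_loss:,.0f}")
```

The fitted parameters can then feed the same frequency-severity simulation used for loss-data-based estimates, which is one way to combine expert judgement with internal and external loss data.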
The techniques and tools that insurers use to support risk self-assessments are often ineffective, inefficient and not successfully implemented. Research indicates that 19.5% of current practices are not shared within the group, while 22% of respondents are dissatisfied, and 11% very dissatisfied, with the quality of their information technology support services. Another question that can be raised concerns the governance of risk management: how, for example, are the risk and actuarial departments aligned?
4.2.7. Summary

In this chapter we discussed operational risk in the context of Solvency II. Operational risk is possibly the largest threat to insurers. This is because operational risk losses result from complex and non-linear interactions between risk and business processes. Unbundling operational risk from the other types of risk in risk management and risk measurement can help prevent future failures for insurers.
SII is on track to put greater emphasis on the link between risk management and risk measurement of operational risk. We have addressed the most important difficulties and challenges in operational risk management: loss data, risk self-assessment, techniques, tools and governance. Those insurers able to mount an effective response to these major difficulties and challenges can be expected to achieve a significant competitive advantage.
4.2.8. References

CEIOPS, consultation and QIS studies, www.ceiops.eu
De Nederlandsche Bank, subject Solvency II, www.dnb.nl
Grinsven, J. H. M. van (2009). Improving Operational Risk Management. IOS Press.
Grinsven, J. H. M. van, & Bloemkolk, R. (2009). Solvency II: Dealing with Operational Risk. FSI Magazine, April 2009.
Grinsven, J. H. M. van, & Bloemkolk, R. (2009). Solvency II: Understanding Operational Risk. Bank en Effectenbedrijf, June 2009.
VEB, acties verliespolis, see for details: www.veb.net
4.3. Operational risk management as shared business process
Dr. ir. Marijn Janssen
Dr. ing. Jürgen van Grinsven
Ir. Henk de Vries
In many financial institutions, decision-making concerning Operational Risk Management (ORM) is laborious and challenging. This is because the departments of the financial institution are not always working together, which leads to different points of view and fragmented responsibilities. Structuring ORM as a Shared Business Process (SBP) can potentially be used as a solution to overcome this fragmentation of responsibilities. However, the implementation of a SBP is often cumbersome and not easy to realize. As a consequence, the structuring of ORM as a shared business process often remains on the drawing board.
In this chapter we discuss how a Shared Business Process can improve ORM. First, we describe how a SBP can support ORM. Second, we present a business case describing the implementation of a SBP at a large insurance firm in the Netherlands. Third, we discuss the problems that the implementation of a SBP can bring about and how these problems can be dealt with.
4.3.1. Shared service center and operational risk
Managers are always seeking to improve their company structure for better alignment with the strategic objectives. In the 1960s and 1970s this resulted in more centralized organizational structures to profit from economies of scale and scope; an example is the huge centralized data center of that era. In the 1980s organizations decentralized to overcome the disadvantages of centralization: a decentralized organizational structure made it possible to react more quickly to changes and to become more customer-oriented. Over the last couple of years a new, hybrid concept has been introduced: the Shared Service Center (SSC). The SSC tries to capture the advantages of both centralization and decentralization (Janssen and Joha, 2004). ‘A shared service center is a semi-autonomous unit within an organization delivering specific services to operational units of the organization on a basis of pre-agreed conditions and price’ (Bergeron, 2003). The aim of a SSC is to remain close to the user and react quickly to changes, while at the same time benefiting from economies of scale and scope. There are several characteristics that
make a SSC unique. First, there are several users (often different departments). Second, there are pre-defined agreements concerning what the SSC should deliver and at what price. Third, the SSC has a certain amount of autonomy. Fourth, the instructing parties (departments or business units) and the SSC are usually part of the same organization. There are three shapes of SSCs (Opheij and Willems, 2004):
1. Shared Back Offices: departments are concentrated in a SSC, e.g. a shared product administration or financial administration;
2. Shared Support Processes: supporting services, e.g. ICT and HRM;
3. Shared Business Processes: business processes carried out by several departments, e.g. the settlement of claims at an insurance company.
Operational Risk Management can be defined as the identification and mitigation of operational risks in an organization and its surroundings (Grinsven, 2009). An operational risk can be defined as the risk of direct or indirect loss resulting from inadequate or failed internal processes, people and systems or from external events (RMA, 2000). For example, an operational risk for a bank is the risk of unauthorized trading. Such risks can result in catastrophic losses. At the end of the 1990s many financial institutions became aware of the need for ORM. On the one hand this need was caused by expensive catastrophes, such as Barings and Société Générale. On the other hand it was caused by new legislation such as Basel II, which requires that ORM be implemented as a separate function within organizations (Grinsven, 2009). The introduction of ORM often resulted in ineffective structuring of the ORM function, as each department implemented ORM from its own perspective. As a result, management could receive contradictory risk assessments and recommendations from each department, as depicted in Figure 24.
Figure 24: each business function (quality improvement, actuarial, fraud coordination team, corporate audit services, operational risk management, compliance, service center claims and business process management) separately reports to management (Grinsven, 2009).
This has a number of negative consequences:
1. It is ineffective. For management it is difficult to decide on and implement policies when departments provide contradictory recommendations. Which one is right? Which recommendation should be followed? Moreover, this results in lobbying, as departments try to get priority from management in order to receive as many resources as possible.
2. It is inefficient. Similar work is performed by different departments, resulting in unwanted duplication of effort and re-work. Moreover, due to the fragmentation it is not possible to profit from economies of scale.
3. It leads to dissatisfaction. Departments can perceive that ‘management’ is not listening to them, that their input is disregarded, that reports are not read and that their opinion is not taken seriously.
The current situation in many financial institutions is characterized by a hierarchical structure organized around business functions. The different departments act autonomously and do not communicate with each other, which results in fragmentation of tasks and responsibilities. Moreover, due to this fragmentation it is difficult to obtain an integrated view on ORM. A logical solution to this problem is the structuring of ORM as a shared service center. By concentrating ORM tasks and responsibilities within a SSC, an integrated view on ORM can become reality, as depicted in Figure 25.
Figure 25: a shared report to the management through the SSC, which bundles quality improvement, actuarial, compliance, service center claims, business process management, operational risk management, corporate audit services and the fraud coordination team.
4.3.2. Business case: a large insurance firm
At a large insurance firm in the Netherlands the management was confronted with different and contradictory recommendations. Eight different departments of this financial institution approached ORM in four primary processes (making offers, accepting, mutating and damage handling) from their own perspective. Consequently, the management received eight different reports, resulting in unclear decision-making processes and assignment of resources. Reports from certain departments received more attention than those of other departments. The amount of attention a department received was an important criterion for the assignment of resources: the more attention (priority), the more resources. Therefore, each department tried to get as much attention as possible. In times of economic recession this is amplified further by ever decreasing budgets. A Shared Business Process (SBP) concerns the unbundling and concentration of business processes in an autonomous business unit. In these processes confidential information concerning operational risks often plays an important role, and companies want to retain this information in-house. A SSC as SBP can provide economies of scale while at the same time retaining experience within the organization and ensuring a high level of information security (e.g. Janssen and Joha, 2006). A SBP for operational risk management was introduced at the large insurance firm. Within the ORM-SBP the knowledge and the formerly fragmented responsibilities were concentrated in the SSC, as schematically depicted in Figure 25. This ORM-SBP ensures that one recommendation is presented to the management.
4.3.3. From drawing board to implementation
Once managers of financial institutions have decided that an ORM-SBP is necessary, they need to think about the way it can be implemented. They have to take into account pitfalls that keep the objectives from being achieved. The implementation of an ORM-SBP can fail for several reasons. First, the costs can be higher than expected. Second, the savings due to economies of scale can be lower than expected. Third, the service provisioning can be disappointing: the transition to the new situation with an ORM-SBP can (partly) fail or the budget might be too low. Finally, it might result in an additional layer of red tape. Often these disappointments are caused by the political behavior of stakeholders. Departments fear that their existing influence will decrease when an ORM-SBP is positioned between them and the management. Through the implementation the power balance might shift as:
Department managers might fear losing part of their autonomy and having to share it with other departments.
Employees have to work in a more result focused environment.
Managers of service centers get more autonomy and resources. However, they have to make service agreements and realize these service levels.
Middle managers see a decline in the number of managers and fear for their own careers.
In short, the balance of power in the financial institution might be perceived to change which often leads to resistance. This results in the delay or blocking of the implementation by certain departments.
4.3.4. Make conscious choices
To accomplish the objectives it is important to define a strategy to deal with resistance, as the shift in the balance of power can lead to strong counterforces. The following elements need to be decided on. First, one has to decide about the positioning of the ORM-SBP in the financial institution. It can be positioned as a central service unit, a service partner or an autonomous organization. In the first alternative the emphasis is on product control, in the second on cooperation and in the third on its own profit. The case described in this chapter was positioned as a service partner. Second, it must be clear what the service contains: which clients and suppliers are there, which services are being offered, and which processes, employees and systems are being used? The organization of the ORM-SBP must be enterprising, client- and result-focused. The employees must be able to cooperate well and the people in control must be able to put existing
patterns up for discussion. Third, a systematic approach to ORM is necessary to be able to capitalize on new opportunities in the market and to realize a successful implementation of the ORM-SBP. The starting point is that the approach must give a sufficient, reliable, robust and accurate assessment of the operational risk and of the mitigation measures to be taken. In addition, the approach must comply with four important design guidelines:
The approach must comply with the current legislation and standards in the field of ORM.
The approach must be based on procedural rationality: decision makers (managers) must be able to make their decisions as rationally as possible. This can only be accomplished when there is a sharply defined ORM process to which the different stakeholders are able to commit themselves.
The approach must guarantee a shared view of the results, so that control measures can be implemented effectively. This requires a sufficient knowledge base among the employees and stakeholders in the organization.
The approach must be flexible to use and practically applicable.
4.3.5. Summary
A Shared Business Process can solve the problem of the fragmented allocation of tasks and functions for Operational Risk Management. A SBP bundles and concentrates the primary business processes in an autonomous business unit, responsibilities are concentrated, and the management is provided with one uniform recommendation concerning ORM.
The implementation of a SBP has its own problems, often caused by the political behaviour of stakeholders in the financial institution.
To overcome these counterforces and for an ORM-SBP to be successful, different choices need to be made in advance. First, the position of the ORM-SBP in the organization has to be determined. Second, it must be clear what the services of the ORM-SBP contain. Third, a systematic approach is necessary to be able to capitalize on new opportunities in the market and to realize a successful implementation of the ORM-SBP.
4.3.6. References
Bergeron, B. (2003). Essentials of Shared Services. John Wiley & Sons.
Grinsven, J.H.M. van (2009). Improving Operational Risk Management. IOS Press.
Janssen, M. and Joha, A. (2004). De onzekere belofte van het shared service center. Informatie, July-August 2004.
Janssen, M. and Joha, A. (2006). Motives for Establishing Shared Service Centers in Public Administrations. International Journal of Information Management, Vol. 26, No. 2, pp. 102-116.
Opheij, W. and Willems, F. (2004). Shared Service Centers: balanceren tussen pracht en macht. Holland Management Review, No. 95, May-June 2004, pp. 31-45.
4.4. Controlling operational risk in workflow management
Dr. ing. Jürgen van Grinsven, Dr. ir. Marijn Janssen
Workflow Management (WfM) systems are aimed at the automation of business processes performed by human beings. Over the past decades there has been a rapid increase in the use of such systems, which has resulted in major benefits, including higher efficiency and cost reductions (Allen, 2000). However, some disadvantageous effects appeared as processes became more rigid due to the reduction of the scope of control by humans. Although the allocation of responsibilities has become clearer, the ability to anticipate and react to risk events has become more difficult, as business logic is stored in rigid workflow systems and responsibilities for the total end-to-end process are not always clear (Gortmaker, Janssen and Wagenaar, 2005). As a result, the operational risks that financial institutions face have become more severe and more difficult to anticipate. As a consequence, there is a need to strengthen the human element in WfM systems in order to deal systematically with operational risks.
This chapter presents a multiple experts’ judgment approach to identify and control operational risks in workflow management systems. First, we discuss workflow systems and the multiple experts’ judgment approach. Then we present a business case and show how typical operational risks can be identified in workflow systems.
4.4.1. Workflow management
Coordinating events, artifacts and people consumes most of a manager’s time. In the past decade all kinds of principles and systems have been developed to automate the management of workflow. The Workflow Management Coalition (WfMC) defines workflow as “The automation of a business process, in whole or part, during which documents, information or tasks are passed on from one participant to another for action, according to a set of procedural rules” (Lawrence, 1997). In essence, WfM systems are aimed at the automation of the coordination function: the management of the dependencies between sequences of tasks performed by human beings. A WfM system is an automated system that manages the coordination process by assigning the work to humans, passing it on to the next worker and
tracking its progress (Allen, 2000). Work is distributed to persons based on an explicit model of the organizational structures in place and the capabilities of the workers (Lawrence, 1997).
The main benefits of workflow management systems are cost savings and work improvements, which result from better managing the workflow. Other claimed advantages are that it helps managers to focus on staff and business issues rather than the routine assignment of tasks (Allen, 2000; Stark, 1997; WFMC, 2004), that procedures are formally documented and followed exactly, meeting all business and regulatory requirements (Hales, 1997), that the best persons can perform the important tasks first, and that parallel processing reduces lead times (Stark, 1997; WFMC, 2004). Improved service and employee satisfaction are sometimes claimed as well, but those are harder to measure.
WfM systems need to balance two conflicting needs: the need for control and the need to provide sufficient flexibility to allow for changes during their execution (Narendra, 2004). Major disadvantages are the inflexibility of workflow processes and the reduction of the creativity and span of control of humans. Traditionally, WfM is almost completely focused on controlling the flow and provides little support for adapting to changing circumstances or major events (Klein, Dellarocas & Bernstein, 2000; Narendra, 2004). Due to the pre-defined sequence of activities of a workflow, cases are always handled in the same way. This might be very efficient; however, it does not contribute to the fast identification and handling of problems, exceptions and operational risks.
4.4.2. Multiple experts’ judgment approach
There is a need for identifying, assessing and controlling operational risks in workflow management systems. An operational risk can be defined as the risk of direct or indirect loss resulting from inadequate or failed internal processes, people and systems or from external events. Expert judgment can be utilized to create a better understanding and quantification of the exposure to operational risk. Clemen and Winkler define expert judgment as “the degree of belief, based on knowledge and experience, that an expert makes in responding to certain questions about a subject” (Clemen and Winkler, 1999).
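The combination of several experts’ judgments can be sketched with a linear opinion pool, one of the combination schemes discussed by Clemen and Winkler (1999). The probabilities, weights and the unauthorized-trading example below are illustrative assumptions, not figures from a real assessment:

```python
# Sketch: combining experts' probability estimates with a linear opinion
# pool (a weighted average), as discussed by Clemen and Winkler (1999).

def linear_opinion_pool(estimates, weights=None):
    """Combine expert probability estimates as a weighted average."""
    if weights is None:  # default: equal weights for all experts
        weights = [1.0 / len(estimates)] * len(estimates)
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(w * p for w, p in zip(weights, estimates))

# Three experts judge the probability of an unauthorized-trading event
# occurring next year; in the second call the third expert is judged
# more reliable and receives a higher weight (illustrative numbers).
p_equal = linear_opinion_pool([0.02, 0.05, 0.03])
p_weighted = linear_opinion_pool([0.02, 0.05, 0.03], weights=[0.25, 0.25, 0.5])
print(round(p_equal, 4))     # 0.0333
print(round(p_weighted, 4))  # 0.0325
```

Weighted pools allow a decision maker to express differing confidence in the experts; the equal-weight pool is the common default when no calibration information is available.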
In order to have a consistent and comparable assessment across different businesses and branches, or in research over time, it is important to have a well-structured and systematic approach. Especially with multiple experts, the approach needs clear steps in which the assessment is performed. The Multiple Expert Elicitation and Assessment (MEEA) approach developed by Van Grinsven (2009) consists of the five phases proposed by Cooke and Goossens (2000). Figure 26 shows the five phases as sequential steps.
Figure 26: expert judgment process (Grinsven, 2009) — from input to output: preparation, risk identification, risk assessment, risk mitigation, reporting.
The preparation phase is used to provide the frame for the experts, ensuring that all the most important activities are considered prior to the expert judgment exercise (Cooke & Goossens, 2000). The risk identification phase is used to identify the operational risks, taking into account that the most important issues should be considered. The risk assessment phase aims to provide a decision-maker with results derived from the experts. Experts usually assess the identified risks by evaluating the frequency of occurrence and the impact associated with the possible loss (Keil, Wallace et al., 2000; Weatherall and Hailstones, 2002). Risk mitigation involves identifying alternative and more effective control measures which aim to minimize the frequency of occurrence and/or the impact of the operational risks (Keil, Wallace et al., 2000; Jaafari, 2001). In the reporting phase all the relevant information regarding the problem and the data derived from the experts is written down in a formal report and presented to the decision makers and the experts (Goossens and Cooke, 2000).
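The risk assessment phase can be illustrated with a minimal sketch in which frequency and impact estimates from multiple experts are combined into an expected annual loss. The risk, the estimates and the simple averaging scheme are illustrative assumptions, not part of the MEEA specification:

```python
# Sketch of the risk assessment phase: each expert judges the yearly
# frequency of occurrence and the impact (loss size per event) of an
# identified operational risk; the estimates are averaged and combined
# into an expected annual loss. Figures are illustrative.
from statistics import mean

def expected_annual_loss(frequencies, impacts):
    """Mean yearly frequency times mean impact per event (in euros)."""
    return mean(frequencies) * mean(impacts)

# Hypothetical risk: "claims paid twice due to a workflow rollback".
freq = [4, 6, 5]                   # events per year, one estimate per expert
impact = [10_000, 14_000, 12_000]  # euros per event, one estimate per expert
print(expected_annual_loss(freq, impact))  # 60000.0
```

In practice more refined aggregation (e.g. performance-based weighting of experts) would replace the plain averages, but the frequency-times-impact structure is the same.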
4.4.3. Business case: semi-autonomous organization
In this section we demonstrate how the MEEA approach can be used to identify, assess and control operational risks in a WfM system. We used the MEEA approach to identify WfM problems in a supply chain consisting of a number of (semi-)autonomous organizations. The organizations in the supply chain are geographically dispersed and can make their own decisions concerning the management of the supply chain. The supply chain is supported by a WfM system. The allocation of capacity is managed by predefined procedures and strict rules, which leave little room for exceptions. The system guards the lead times and pushes tasks to single resources, often administrative employees. The system is used by 25 organizations, each with its own customized version operating independently of the systems of the other organizations. As such, we are dealing with a fragmented landscape in which the various organizations have little interaction with each other.
Using MEEA the system was analyzed and seven categories of typical problems were found, each containing multiple operational risk events:
1. Rollback. WfM offers the option of rollback. Rollbacks are often necessary because more time is needed to collect information or because no experts are available. Rollbacks can cause misunderstandings and confusion, for example when summonses are sent by post.
2. Selectivity and availability. Because WfM cannot check the agendas or availability of persons, work can be offered to employees who are unavailable or unsuited.
3. Delegation. Most of the work is automatically pushed into the workbasket of an employee. It is not possible to remove it and put it into another employee’s workbasket (when it is not processed quickly enough or when that particular person is unavailable).
4. Information provision. Decisions are often based on discussions between experts. Often the background and the argumentation of these discussions are not written down coherently. There is a need for mechanisms that provide a memory function.
5. Involvement of wrong persons. The wrong experts can become involved, e.g. experts with expertise in a different field.
6. Sequence of execution of tasks. The sequence of execution is determined by rules and constraints. Trade-offs must be made, but this is not supported by the current WfM.
7. Priority and alerting. Priority is supported by workflow systems and priorities can, in theory, be assigned by the system. In practice this is difficult and can result in a high number of tasks with a high priority.
WfM systems need to balance two conflicting needs: the need for control and the need to provide sufficient flexibility to allow for changes during their execution. In our business case we found that the workflow systems were primarily focused on controlling the workflows and provided hardly any flexibility or mechanisms to react to unforeseen events and circumstances. We also found that risk assessments should be performed regularly to find the operational risks and weak links in the tasks currently automated using WfM systems. Adding risk assessment as a standard, recurring activity has the advantage that risks can be identified at an early stage. By making use of experts who are involved in the workflow, operational and tactical knowledge can be mobilized. Experts can meet periodically to identify the possible risks within the seven risk categories (see above). Combining the robust properties of conventional WfM systems with the added flexibility of using MEEA to identify previously unknown bottlenecks or events, and to find directions on how to deal with them, can prevent the risks from materializing. Furthermore, MEEA can be used to identify controls which mitigate risks.
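Such a recurring identification exercise could be supported by a simple risk register organized around the seven categories. The register structure and the recorded events below are a sketch under our own assumptions, not an artifact from the case:

```python
# Sketch: a risk register keyed on the seven operational risk categories
# found in the business case; recorded events are illustrative.
from collections import defaultdict

CATEGORIES = [
    "rollback", "selectivity and availability", "delegation",
    "information provision", "involvement of wrong persons",
    "sequence of execution of tasks", "priority and alerting",
]

register = defaultdict(list)  # category -> list of identified risk events

def record_risk(category, description):
    """File a risk event under one of the seven fixed categories."""
    if category not in CATEGORIES:
        raise ValueError(f"unknown category: {category}")
    register[category].append(description)

# Entries a periodic expert session might produce (illustrative):
record_risk("rollback", "summons sent by post while the case was rolled back")
record_risk("delegation", "task stuck in the workbasket of an absent employee")
print(sum(len(events) for events in register.values()))  # 2
```

Fixing the category list keeps assessments comparable across sessions, which is exactly the consistency the MEEA approach asks for.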
4.4.4. Summary
In this chapter we identified categories of operational risks that arise from the use of a rigid workflow management system. We identified seven categories of typical operational risk problems: rollback, selectivity and availability, delegation, information provision, involvement of wrong persons, sequence of execution of tasks, and priority and alerting. We proposed to regularly identify and assess the operational risks of workflow systems using multiple experts’ judgment. In this way risks can be identified at an early stage and actions to mitigate them can be identified. We adopted the MEEA approach, which has its roots in the financial and logistics sectors. Our case study shows that MEEA is also suited to identifying operational risks in workflow management.
4.4.5. References
Allen, R. (2000). Workflow: An Introduction. The Workflow Handbook 2001: 15-38.
Clemen, R.T. and Winkler, R.L. (1999). Combining Probability Distributions From Experts in Risk Analysis. Risk Analysis 19(2): 187-203.
Cooke, R.M. and Goossens, L.H.J. (2000). Procedures Guide for Structured Expert Judgement in Accident Consequence Modeling. Radiation Protection Dosimetry 90(3): 303-309.
Gortmaker, J., Janssen, M. and Wagenaar, R.W. (2005). Accountability of Electronic Cross-agency Service-Delivery Processes. Dexa EGOV05, 4th International Conference on Electronic Government.
Grinsven, J.H.M. van (2009). Improving Operational Risk Management. IOS Press.
Hales, K. (1997). Workflow in Context. In P. Lawrence (Ed.), Workflow Handbook 1997. Chichester: John Wiley & Sons.
Jaafari, A. (2001). Management of risks, uncertainties and opportunities on projects: time for a fundamental shift. International Journal of Project Management 19: 89-101.
Keil, M., Wallace, L., et al. (2000). An investigation of risk perception and risk propensity on the decision to continue a software development project. The Journal of Systems and Software 53: 145-157.
Klein, M., Dellarocas, C. and Bernstein, A. (2000). Introduction to the special issue on adaptive workflow systems. Computer Supported Cooperative Work (CSCW), 9, pp. 265-267.
Lawrence, P. (1997). Workflow Handbook 1997. Workflow Management Coalition, John Wiley & Sons, New York.
Narendra, N. (2004). Flexible support and management of adaptive workflow processes. Information Systems Frontiers, 5(3): 247-262.
Plesums, C. (2002). Introduction to Workflow. In L. Fischer (Ed.), The Workflow Handbook 2002: 19-38. Lighthouse Point, FL: Future Strategies Inc.
Stark, H. (1997). Understanding Workflow. In P. Lawrence (Ed.), Workflow Handbook 1997. Chichester: John Wiley & Sons.
Weatherall, A. and Hailstones, F. (2002). Risk Identification and Analysis using a Group Support System (GSS). Proceedings of the 35th Hawaii International Conference on System Sciences, Hawaii.
WFMC (2004). About the WFMC - Introduction to the Workflow Management Coalition. Retrieved 12/07, 2004, from http://www.wfmc.org/about.htm
4.5. A comprehensive approach to control operational risk
Dr. Menno Dobber
Over the last several years, financial institutions (FIs) have realized the importance of holding enough capital to cover operational losses, especially for years with extremely high losses. Operational risks (OR) are, according to the commonly used definition, the risks of loss resulting from inadequate or failed internal processes, people and systems, or from external events. Operational losses (OL) do not occur as often as losses from other types of risk, e.g. credit and market risk. However, their impact can be dramatic. For this reason the capital needed for OR is often many times higher than that for the other types of risk. OL are inevitable; every FI has them. It is, however, important to control those losses and to decrease them as far as possible. As a consequence of this need to control and reduce losses, the Basel Committee defines rules for the control of OR in the Basel II Accord. Financial institutions have to develop sophisticated OR measurement methods and are motivated to decrease their OR. Key questions that arise are: (1) how to measure OR?, (2) how to predict OL?, and (3) how to decrease OR as much as possible at minimum cost?
In this chapter, we present a comprehensive view on controlling OR that answers these questions. The approach shows how (1) the OR can be measured, (2) the quality of the FI can be measured, and (3) the relation between OR and the internal organization can be derived. Furthermore, this knowledge can be used to (4) predict future OL, and (5) obtain the most profitable improvement in the internal organization in terms of a decrease in OL. First, we provide some background information. Second, we present and discuss the approach. Third, we present conclusions and future research.
4.5.1. Risk capital
The minimum amount of capital needed to cover the risks in a certain percentage of the cases is called Economic Capital (EC) (Matten, 2000). Financial institutions should aim to hold risk capital of an amount equal to at least the economic capital. This risk capital can easily be extracted from the FI in order to cover the operational losses. The risk strength of a financial institution is determined by a comparison of the risks, represented by the EC, and the risk capital. The aim of rating agencies, such as S&P and Moody’s, is to measure and report on this strength for all financial institutions. A decrease in the OR, and consequently the EC, has the following main advantages: (1) the OL are lower, (2) the institution can hold less risk capital, which means a decrease in costs, and (3) rating agencies will report on the increased strength, which leads to a better reputation in the market. In the next section, we present an approach to decrease the OR.
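The EC definition above — the smallest amount that covers the yearly loss in a given percentage of cases — can be read off an empirical loss distribution as a quantile. The loss figures and the confidence level in this sketch are illustrative assumptions:

```python
# Sketch: economic capital as an empirical quantile of yearly losses.
import math

def economic_capital(yearly_losses, confidence_pct=99):
    """Smallest amount covering the yearly loss in confidence_pct% of years."""
    ordered = sorted(yearly_losses)
    # index of the confidence_pct-th empirical percentile
    idx = max(math.ceil(confidence_pct * len(ordered) / 100) - 1, 0)
    return ordered[idx]

# Ten illustrative yearly operational losses in EUR million; note the
# single extreme year, typical of operational loss data.
losses = [1.0, 1.2, 1.5, 1.8, 2.0, 2.1, 2.2, 2.5, 3.0, 40.0]
print(economic_capital(losses, confidence_pct=90))   # 3.0
print(economic_capital(losses, confidence_pct=100))  # 40.0
```

The example also shows why the confidence level matters so much for OR: a single extreme year dominates the capital figure once the confidence level includes it.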
4.5.2. The approach
Main idea. In general, the higher the quality of the organization, the lower the operational risks, which in turn lead to lower and fewer operational losses. Consequently, the FI can hold less risk capital. This means that if a FI aims to improve the control of the OR and decrease its OL, internal process analyses have to determine the most desirable improvements in the organization, and the level of the EC needs to be adjusted to the quality of the internal organization. As discussed above, it is important for FIs to maintain the interaction between the internal organization and the risk strength. This is represented by Figure 27.
Figure 27: aim: interaction between the internal organization and the financial strength (risk capital versus economic capital).
To achieve an effective interaction, it is necessary to (1) quantify the quality of the internal organization, (2) quantify the risks and their probabilities, and (3) identify the relation between these aspects. Quantifying the quality of the internal organization is difficult because it is not directly measurable. Quantifying the risks is also a challenge due to the lack of operational loss data: operational losses do not occur very often and financial institutions have only begun to measure them in the last couple of years. Moreover, identifying the relations between the aspects presented above is hard and needs sophisticated mathematical tools.
Seven-step plan. Having reviewed the concept behind the approach, we now present a comprehensive view on controlling the OR in FIs. The approach shows methods which are effective in tackling each of the different challenges. Moreover, we show how the knowledge obtained by tackling these challenges can be used to predict the future OL and to identify the most effective improvements in the internal organization to decrease the OR. Figure 28 illustrates the concepts of this approach.
Figure 28: flow diagram of the operational losses — the reality (processes, losses) is captured in a description of the reality (process description, loss database), quantified (risk indicators, quantified losses), and related (correlation between elements).
As can be seen in Figure 28, there is a direct relation between the size of the losses and the risks in the organizational processes. The first step of the approach is to describe the processes in the FI. This is a representation of reality, which means that it cannot show every element of the real processes; it is important that the key elements of the processes are described. In general, it is not possible to quantify many operational losses directly. For example, a computer that does not work for a while leads to an operational loss, but the direct loss is hard to quantify. As a consequence the need arises to set up a loss database, which contains many details about the losses (e.g. size of the losses, department, and date). With a loss database in place it is possible to analyze the losses. Subsequently, analyses of the causes can be performed by relating the process description and the loss database. This knowledge opens up the possibility to improve the weakest parts of the FI’s organizational processes in an operational-risk sense.
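The loss-database step can be sketched with a minimal record holding the fields the text mentions (date, size, department, process). The field names and sample records are illustrative assumptions, not a prescribed schema:

```python
# Sketch: a minimal loss-database record; field names are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class LossRecord:
    occurred: date
    size_eur: float          # estimated when not directly quantifiable
    department: str
    process: str             # lowest-level process from the description
    near_loss: bool = False  # a loss that was narrowly avoided

# Two illustrative records:
db = [
    LossRecord(date(2008, 3, 2), 25_000.0, "claims", "damage handling"),
    LossRecord(date(2008, 7, 9), 4_000.0, "sales", "making offers"),
]
total = sum(r.size_eur for r in db if not r.near_loss)
print(total)  # 29000.0
```

Linking each record to the lowest process level is what later makes the correlation between risk indicators and losses possible.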
The key steps to control the operational losses are as follows:
1. Describe the processes
2. Define and quantify key risk indicators (KRIs) within processes
3. Construct a loss database
4. Quantify operational losses in the FI
5. Correlate KRIs and operational losses
6. Predict operational losses
7. Identify points for improvement and improve processes in the FI
The first step in controlling operational losses is to describe all processes within the financial institution: first make a general description, then add more detail to it. The more detailed the description, the better the causes of the operational losses can be located. The second step is to define and quantify the key risk indicators (KRIs) within the processes (BCBS, 2001a; Scandizzo, 2005). The third step is to construct a loss database by collecting all data about historical operational losses (Hoffman, 2002; BCBS, 2002). A key question is which data are relevant for the database. In general, the date of the loss, its size, and the type of process where the loss is reported need to be collected. It is essential to link the loss to the lowest possible level of the process, according to the process description. Furthermore, one needs to define what other relevant data have to be stored. If a loss cannot be quantified easily, it is necessary to store well-motivated estimates of the loss size in the database. Another constraint is that the definitions used need to be unique, clear and consistent. For example, for some FIs operational losses are by definition above 1,000 euros. Another possibility is to register near losses: losses that were not suffered, but came very close to being suffered. The fourth step is to quantify the operational losses in the FI. In this step the FI needs to obtain insight from the collected information. One aspect is a quantification of the expected operational losses that have been suffered and will be suffered in the future. Another aspect is to derive the size of the unexpected operational losses, which can be quantified by computing the economic capital (EC). The Loss Distribution Approach, whose concepts are presented in the Basel II accord (BCBS, 2001c-e), has proven to be very useful in gaining insight into the distribution of the unexpected losses.
This approach is based on a Monte Carlo simulation which uses fitted distributions for the frequency and the severity of operational losses (de Fontnouvelle, DeJesus-Rueff, Jordan & Rosengren, 2003). The fifth step is to link KRIs and operational losses. If the data collection period is long enough and the dataset is sufficiently large, correlation analysis, linear regression (Jammalamadaka, 2003), and neural networks are possible methods to gain insight into the relations between the losses and the KRIs. The sixth step is the prediction of operational losses: the relations derived in the fifth step are applied to predictions of the future KRI values to compute expectations of the future losses. The seventh step is to identify points for improvement and improve the processes in the FI. Estimates of the future losses and the KRIs provide insight into the impact of a process improvement on the operational losses.
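The Monte Carlo simulation underlying the Loss Distribution Approach can be sketched as follows. This is a minimal illustration, not the accord's prescription: the Poisson frequency, lognormal severity, the parameter values and the 99.9% confidence level are assumptions chosen for the example.

```python
import math
import random

def poisson(rng, lam):
    """Draw the number of loss events in one year (Knuth's method)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_annual_losses(lam, mu, sigma, n_years=20_000, seed=42):
    """Loss Distribution Approach: the annual loss is the sum of a
    Poisson(lam) number of lognormal(mu, sigma) severities."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_years):
        n_events = poisson(rng, lam)
        totals.append(sum(rng.lognormvariate(mu, sigma) for _ in range(n_events)))
    return sorted(totals)

# Illustrative parameters: about 25 losses per year, median severity exp(8).
losses = simulate_annual_losses(lam=25, mu=8.0, sigma=1.2)
expected_loss = sum(losses) / len(losses)     # expected annual loss
var_999 = losses[int(0.999 * len(losses))]    # 99.9% quantile of annual loss
economic_capital = var_999 - expected_loss    # buffer for unexpected losses
print(f"EL = {expected_loss:,.0f}  VaR(99.9%) = {var_999:,.0f}  EC = {economic_capital:,.0f}")
```

In practice the frequency and severity distributions are fitted to the loss database constructed in step three, rather than assumed as here.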
4.5.3. Discussion
As intended, improving a process leads to lower losses. This means that the historical data are no longer representative for the future. Therefore, it is necessary to (1) go through all the steps described above and analyze what exactly has changed in the processes and the risks, and (2) analyze the changes in the values of the old and the new KRIs and estimate the operational losses for the new situation. It is possible to convert the database by using the model defined in step five and the new values of the KRIs. This is an ongoing process which continuously improves the process organization. A key factor for the success of this approach is that departments register every operational loss; otherwise, the insights into the losses are not correct. Measures or rules have to ensure that all departments are honest about their risks. The Basel Committee has the key task of evaluating the whole process of controlling the operational risks: it has to decide whether the institutions improve their processes significantly enough that the operational risks decrease. The two motivations to decrease operational risks that are defined in the Basel II accord still exist in this new approach. The first motivation, that process improvements lead to a decrease of the economic capital, remains an essential part of operational risk control. The second motivation is that institutions decrease the amount of losses and consequently the costs. In this new approach, those two motivations can be seen as logical results of the risk control and have a direct one-to-one relation. In the approach presented, the internal organization constitutes a key factor which forms the basis of the operational risk analysis. It is important that the institution focuses on decreasing the risks by improving the internal organization. The definitions of the several types of operational risks have to be clear and uniform.
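The model from step five can be as simple as a univariate regression of monthly losses on a KRI. A minimal sketch; the KRI, the observations and the forecast value are all invented for illustration:

```python
def fit_linear(kri, losses):
    """Ordinary least squares for one KRI: loss = a + b * kri (step five)."""
    n = len(kri)
    mx, my = sum(kri) / n, sum(losses) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(kri, losses))
         / sum((x - mx) ** 2 for x in kri))
    return my - b * mx, b

# Hypothetical monthly observations: a KRI value (e.g. number of failed
# transactions) against the operational loss recorded in that month.
kri = [12, 18, 9, 25, 14, 30, 11, 22]
monthly_loss = [30_000, 52_000, 21_000, 71_000, 39_000, 88_000, 27_000, 60_000]

a, b = fit_linear(kri, monthly_loss)
forecast_kri = 20                      # predicted KRI value for next month
forecast_loss = a + b * forecast_kri   # step six: predicted operational loss
print(f"loss = {a:,.0f} + {b:,.0f} * KRI; forecast at KRI=20: {forecast_loss:,.0f}")
```

After a process improvement, refitting on post-change data (or on KRI values adjusted for the new situation) gives the converted loss estimates the discussion refers to.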
4.5.4. Summary
In this chapter, we presented an approach to control operational risks. The approach is practical and was developed by taking into account recent complications in controlling operational risks. The approach shows how (1) the OR can be measured, (2) the quality of the organization can be measured, and (3) the relation between the OR and the internal organization can be derived. Furthermore, this knowledge can be used to (4) predict future OL, and (5) identify the most profitable improvement in the internal organization in terms of a decrease in OL.
4.5.5. References
Basel Committee on Banking Supervision (2001a). Consultative Document: Operational Risk, Basel.
Basel Committee on Banking Supervision (2001b). Consultative Document: Overview of the New Basel Capital Accord, Basel.
Basel Committee on Banking Supervision (2001c). Consultative Document: The New Basel Capital Accord, Basel.
Basel Committee on Banking Supervision (2001d). The New Basel Capital Accord: An Explanatory Note, Basel.
Basel Committee on Banking Supervision (2001e). Update on the New Basel Capital Accord, Basel.
Basel Committee on Banking Supervision (2002). The Quantitative Impact Study for Operational Risk: Overview of Individual Loss Data and Lessons Learned, Basel.
De Fontnouvelle, P., DeJesus-Rueff, V., Jordan, J., & Rosengren, E. (2003). Using Loss Data to Quantify Operational Risk. Federal Reserve Bank of Boston.
Hoffman, D.G. (2002). Managing Operational Risk: 20 Firmwide Best Practice Strategies. Wiley.
Jammalamadaka, S.R. (2003). Introduction to Linear Regression Analysis. The American Statistician, 57(1), 67.
Matten, C. (2000). Managing Bank Capital: Capital Allocation and Performance Measurement. Wiley.
Scandizzo, S. (2005). Risk Mapping and Key Risk Indicators in Operational Risk Management. Economic Notes, 34(2), 231-256.
4.6. Operational losses: much more than a tail only Peter Leijten
This chapter is about operational losses and how insight into these losses can be extremely valuable for a financial institution. It describes a best practice approach for the implementation of loss recording and reporting. It does not address the modeling of operational risk, because plenty of information on that subject is already available. For those less familiar with operational risk management: an operational loss represents (potential) financial or reputation damage resulting from failing people, systems or processes, or from external threats. Examples for a bank are bank robbery, delayed or unexecuted transactions, advice inconsistent with a customer's risk profile, lost documentation, incomplete contracts, and fraud.
4.6.1. Basel II compliance
Most financial institutions collect operational loss data to model extreme operational risk events. These events are usually found in the tails of the loss distributions calculated by means of Monte Carlo simulations. The effort of collecting data and modeling operational risk is often driven by the goal of becoming Basel II compliant. The more advanced the operational risk program of a FI is, the less regulatory capital needs to be set aside as a buffer against extreme events. The operational risk management requirements to become Basel II compliant are set by the BIS and require approval from the regulator. The objective of the Basel II accord is to stimulate the development of advanced risk management methods. Banks have the economic need to keep the regulatory capital as low as possible and at the same time need to prove to investors and rating agencies that they have robust risk management practices in place. Although the focus on the extreme event or the reduction of capital is understandable, a side effect is that a FI often forgets to fully leverage the power of insight into losses. This insight can really boost risk awareness and the self-learning capacity of a FI and at the same time dramatically reduce the costs resulting from failures.
4.6.2. Loss management for business as usual
Who needs to record losses
When setting up loss registration you first have to consider what type of FI you need to facilitate. There is a difference between an FI that offers low frequency, high value and tailor-made financial products and an organization that offers high frequency, low value, standardized and highly automated services. In the first case you can assign the responsibility for registering losses to the ORM unit, simply because the frequency is low and most losses will have a considerable impact and will require detailed and immediate analysis. In the latter case the number of losses can be considerable and probably recurring. In this situation it is best to make the operational business units responsible for registering losses in the loss database. When you have to build up your loss management from the ground up, there is one other important consideration: a large part of the information you need is probably already in your complaints management system. Consider extending your complaints management system with an interface to your operational risk management system to avoid manual duplication of readily available information. There are different approaches to distinguish losses from other financial transactions. Some banks use a centralized approach: they detect losses by validating general ledger entries to see if some need to be flagged as operational losses. We believe this method has only one major advantage and many drawbacks. It usually requires limited effort to implement, but it is very likely that many operational losses will remain undetected and that information will be incomplete. In other words, the advancement of insight into your FI's operational risk will remain limited. The decentralized detection and recording approach demands much more effort from the ORM unit. It requires that almost every department understands what operational losses are and how to record them. The simple fact that the business is recognizing and recording operational losses is the best starting point to improve and sustain risk awareness and management at the operational level.
Aggregate these losses and you will be able to present an operational risk heat map for your organization and manage operational risk at tactical (product) and strategic (company) levels. We advise combining the recording of losses with the financial booking of the loss. This way you add an additional quality and integrity layer to your loss recording process, simply because financial accounting controls are put in place. It will help you to avoid discussions about the correctness of the loss data and about who is responsible for this correctness. What about legal claims or potential regulator fines? These need to be recorded in your loss management system before a financial transaction, if any, occurs. Once it is clear whether there will be any financial impact, simply follow the regular procedure and make sure no double recording can happen. In all cases it should always remain the responsibility of the business, not the ORM unit, to record all losses in a timely, complete and correct manner. Below you find a high-level loss management process flow. We would like to put some emphasis on the third step: the assignment of the loss should be tied to acceptance of the loss by the loss-bearing unit. This confirmation that the responsibility for the loss is accepted adds to the quality of information as well as to sustained risk awareness at the operational levels in your FI. See Figure 29 for the process flow.
Figure 29: process flow of loss management. The flow consists of the steps forecast; detect (error, financial damage); register (complaint, damage, legal claim); assign (operational loss and costs to the loss-bearing unit); account (pay the customer, transfer the costs to the department); report and analyze (operational losses); and mitigate (risks, improve or add controls).
Who needs to pay the loss
Losses should be accounted for per business line, as prescribed by the BIS. The losses are input for capital calculations, and subsequently capital is assigned to profit centers. However, you also want to make visible who is responsible for the losses that occur. In general it is best to assign the losses to the department that is causing them. The occurrence of some operational losses is acceptable as part of the design of a product or process, for instance losses that result from skimming a debit card carrying information on a magnetic stripe. These losses should be assigned to the product owner. Other losses result from wrongful execution of procedures; these should be assigned to the department that caused the non-adherence to the procedure. Sometimes losses are caused by service providers. It is best to assign these losses to the department that owns the contract with that service provider. No matter what, when customers have to be paid, it speaks for itself that the loss management process should never delay the payment.
What needs to be recorded
All operational losses above a certain threshold need to be recorded. Although one can decide to minimize the administrative effort by increasing the threshold, it is better to start with a low threshold and optimize it after several years of experience. In our experience a threshold of 1,000 euros works well, and we believe this is a sensible threshold for any FI. The idea behind this is that most errors with a relatively low financial impact can also result in incidents with a high financial impact. For modeling and benchmarking purposes a FI can become a member of ORX or ORIC. These organizations work with thresholds of 20,000 dollars and 10,000 pounds respectively, so a threshold of 1,000 euros, pounds or dollars will fit. When setting a loss threshold, the ORX or ORIC thresholds should not be considered a guideline for an organization, because their purpose is totally different, as we will present later. What attributes does a loss record have? You should take the minimal requirements from the BIS and ORX or ORIC. We have outlined these requirements in Table 12: Reference ID number; Business Line (Level 2) Code; Event Category (Level 2) Code; Country (ISO Code); Date of Occurrence; Date of Discovery; Date of Recognition; Gross Loss Amount; Credit-related; Direct Recovery; Indirect Recovery; Related event Ref ID.
Table 12: requirements
This list is an example that demonstrates why loss management has different purposes. When loss management is limited to these attributes, it can only be used for benchmarking. When you want to use the information for internal analysis, to reduce losses or to review controls, a variety of attributes is missing. For internal analysis and reporting you also need to record (a) the type of loss, i.e. operational loss, operational loss component of a credit loss, or (legal) claim, (b) a descriptive text outlining what has happened and why, (c) which department needs to take this loss in its books, (d) which department caused this loss (more explanation later on), (e) the loss event category, i.e. what event occurred (it is very easy to make fundamental mistakes here), (f) the loss cause category, i.e. what caused the event that resulted in the loss, and (g) the product or service, such as mortgages, credit cards or loans (usually requiring more hierarchies).
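The benchmarking minimum and the additional internal-analysis attributes above can be combined into a single record structure. A sketch; the field names and defaults are our own choices, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class LossRecord:
    # Minimal benchmarking attributes (cf. Table 12)
    ref_id: str
    business_line_code: str           # Business Line (Level 2)
    event_category_code: str          # Event Category (Level 2)
    country_iso: str
    date_of_occurrence: date
    date_of_discovery: date
    date_of_recognition: date
    gross_loss_amount: float
    credit_related: bool = False
    direct_recovery: float = 0.0
    indirect_recovery: float = 0.0
    related_event_ref_id: Optional[str] = None
    # Additional attributes for internal analysis (items a to g above)
    loss_type: str = "operational"    # or "credit component", "legal claim"
    description: str = ""             # what happened and why
    bearing_department: str = ""      # takes the loss in its books
    causing_department: str = ""      # caused the loss
    loss_cause_category: str = ""     # what caused the event
    product: str = ""                 # e.g. mortgages, credit card, loan

    def net_loss(self) -> float:
        """Gross loss minus direct and indirect recoveries."""
        return self.gross_loss_amount - self.direct_recovery - self.indirect_recovery
```

Keeping the benchmarking fields and the internal fields in one record makes it possible to report either view from the same database.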
This all looks pretty straightforward, but there are generic issues that probably require some additional effort to ensure alignment with other supporting processes. It is recommended to align your product and services list with financial reporting and complaints management. This will allow you to more easily incorporate operational losses into your cost reporting and to correlate operational losses with customer complaints. Furthermore, we advise ensuring (for reporting purposes) that you will be able to aggregate losses at each hierarchical level in your organization and at product and product group level (see chapters 3.4 and 3.5). When you intend to apply for Basel II compliance, it is mandatory to use the standard loss categories. Unfortunately, when you limit the loss recording to those categories, your loss analysis and reporting capabilities will be severely impacted. The problem with these categories is that they are often too vague and, when used in aggregated loss reporting, do not allow a manager to set priorities or take action. A potential solution is to connect these mandatory categories to your business-defined categories, but not show them in the loss management process.
When do losses need to be recorded
Losses should be recorded (and booked) within a short period after detection. The larger the loss, the more difficult this will be, but a limit of 20 days should be doable. For large (potential) losses you can decide to design a same-day emergency reporting process to inform senior management more quickly. For operational losses that are part of a credit loss there is a difficulty: most of these credit loss components, with the exception of fraud, usually emerge at the moment a loan defaults. The best approach is to record them at the earliest possible time, i.e. just after detection.
Where do losses need to be recorded
Preferably, losses are recorded in a central loss database and, of course, in the general ledger.
This way it is easy to report and to get central oversight of operational loss trends. Centralized storage is also beneficial for operational risk modeling, forecasting and Basel II compliance. It also allows you to analyze losses and determine where concentrations of losses can be found.
What quality controls could be implemented
One important control is a frequent sign-off by management stating the full coverage, completeness and correctness of the recorded losses. A review by the ORM department of the correctness of the recording should also be considered, preferably combined with the analysis of the important losses. Ensuring the quality is important for modeling, reporting and analysis.
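One such quality control, reconciling the loss database against the general ledger, can be sketched as follows. The representation of both sources as simple id-to-amount mappings is a hypothetical simplification:

```python
def reconcile(loss_db, ledger):
    """Compare loss database entries with general ledger loss bookings.
    Both inputs are dicts mapping a reference id to a booked amount."""
    missing_in_ledger = {k: v for k, v in loss_db.items() if k not in ledger}
    missing_in_db = {k: v for k, v in ledger.items() if k not in loss_db}
    amount_mismatch = {k: (loss_db[k], ledger[k])
                       for k in loss_db.keys() & ledger.keys()
                       if loss_db[k] != ledger[k]}
    return missing_in_ledger, missing_in_db, amount_mismatch

# Invented example data: one loss never booked, one booking never
# registered as a loss, and one amount difference to investigate.
loss_db = {"L-001": 2500.0, "L-002": 14000.0, "L-003": 800.0}
ledger = {"L-001": 2500.0, "L-002": 14500.0, "L-004": 3200.0}
not_booked, not_registered, mismatched = reconcile(loss_db, ledger)
```

Each of the three result sets is a concrete follow-up list for the review, which makes the management sign-off verifiable rather than declarative.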
Combining this review with the reconciliation of the loss database against the general ledger also adds to the quality and the trustworthiness of the information.
What to do with the loss information
The added value of the operational loss information has already been outlined in various places in this overview. We suggest ensuring frequent (monthly) reporting at all levels of your FI. This enables management to develop risk awareness and to actively manage operational risks or failing controls. We recommend using loss information to create a set of Key Risk Indicators (KRIs) or as input for a risk heat map (see chapter 3.3). Usually loss recording is required to model operational risk as part of a Basel II program. Using losses to measure the quality of execution in your FI, by incorporating loss thresholds in performance contracts, takes it one step further. Losses are backward looking, but by analyzing losses you can also develop operational risk indicators; these signal an increased level of risk that requires management attention. Loss information can also be used to review your insurance strategy.
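The heat-map input mentioned above can be derived by aggregating loss records per department and event category into frequency and severity totals. A minimal sketch with invented records:

```python
from collections import defaultdict

def loss_summary(records):
    """Aggregate (department, category, amount) loss records into a
    count and a total per department/category cell of a heat map."""
    cells = defaultdict(lambda: {"count": 0, "total": 0.0})
    for dept, category, amount in records:
        cell = cells[(dept, category)]
        cell["count"] += 1
        cell["total"] += amount
    return dict(cells)

# Hypothetical recorded losses: (department, event category, amount).
records = [
    ("Payments", "Execution error", 1200.0),
    ("Payments", "Execution error", 900.0),
    ("Payments", "External fraud", 15000.0),
    ("Mortgages", "Documentation", 4300.0),
]
for (dept, cat), cell in sorted(loss_summary(records).items()):
    print(f"{dept:10s} {cat:16s} n={cell['count']} total={cell['total']:,.0f}")
```

The same aggregation, run per month, also yields the loss-based KRIs the text mentions: a rising count or total in one cell is a forward signal for management attention.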
4.6.3. Summary
This chapter outlined the added value of an operational loss management process in a business as usual context. The process will improve the learning capability of a FI. It can also be a very effective instrument to reduce costs. We have outlined many examples of how to take advantage of operational loss information in a business as usual environment. A quote from a senior manager of a large bank illustrates this: "Show me the losses and show me the audit findings so I can see whether or not we are in control".
Note: Peter Leijten has written this chapter in his personal capacity.
4.6.4. References
Basel Committee on Banking Supervision (2001a). Consultative Document: Operational Risk, Basel.
Basel Committee on Banking Supervision (2001b). Consultative Document: Overview of the New Basel Capital Accord, Basel.
Basel Committee on Banking Supervision (2001c). Consultative Document: The New Basel Capital Accord, Basel.
4.7. Improving operational risk management Dr. ing. Jürgen van Grinsven
Operational Risk Management (ORM) supports decision-makers in making informed decisions based on a systematic assessment of operational risk (Brink, 2001; Cumming & Hirtle, 2001; Brink, 2002; Cruz, 2002). Operational Risk (OR) is defined as the risk of direct or indirect loss resulting from inadequate or failed internal processes, people and systems or from external events (RMA, 2000). OR has become more complex, devastating and difficult to anticipate. As such, Financial Institutions (FIs) use scenario analysis for identifying and estimating the level of uncertainty surrounding OR, see e.g. Muermann & Oktem (2002) and Ramadurai, Beck et al. (2004).
In this chapter we first discuss the difficulties and challenges concerning ORM in financial institutions. Then, we briefly present an approach for scenario-analysis. Finally, we present in more detail a way of working with scenario-analysis. This approach to scenario-analysis aims to improve effectiveness, efficiency and satisfaction with the results.
4.7.1. Difficulties and challenges in operational risk management
By the end of the 1990s many FIs increasingly focused their efforts on ORM. This was primarily motivated by the volatility of today's marketplace, costly catastrophes, e.g. Metallgesellschaft, Barings, Daiwa, Sumitomo, Enron and WorldCom, and regulatory-driven reforms such as the new Basel accord (Brink, 2002; Grinsven, 2009). Three additional dynamics that drive this change in focus are decentralization, market pressure and e-commerce (Connolly, 1996; CFSAN & Nutricion, 2002; Karow, 2002; BCBS, 2003a).
However, ORM is not yet a common practice in the business processes of FIs. The impulse derived from suffered losses, caused by operational risks, is only short-lived (Grinsven, Ale et al., 2006). Moreover, few methods and tools are available to identify, quantify and manage operational risk. The major difficulties and challenges that FIs face with regard to ORM are closely related to the identification and estimation of the level of exposure to operational risk (Young, Blacker et al., 1999; Carol, 2000). Following Grinsven (2009), a distinction can be made
between difficulties with loss data (internal and external), scenario analysis, and techniques and tools. Table 13 summarizes these difficulties and challenges.
Loss data: lack of internal loss data; poor quality of internal loss data; reliability of external loss data is too low; consistency of external loss data; aggregation of external loss data is often difficult.
Scenario analysis: results are subjective; results are of poor quality; inconsistent use of risk self assessments; static view of risk self assessments; the risk self assessment process is labor intensive.
Techniques and tools: biases of interviewees are not understood; chasing changing loss data; techniques and tools are not shared in the organization; techniques have a bad fit with the tools; coordination of large data volumes is difficult.
Table 13: Overview of difficulties and challenges in ORM (Grinsven, 2009).
Loss data form the basis for the measurement of OR (Cruz, Coleman et al., 1998; Brown, Jordan et al., 2002; Hoffman, 2002; Ramadurai, Beck et al., 2004). To compensate for the lack or poor quality of internal loss data, FIs often make use of external loss data. Scenario analysis (i.e. expert judgment) is often used to overcome the methodological shortcomings of internal and external loss data. However, the process, techniques and tools used to support these scenario analyses are often ineffective, inefficient and unsuccessfully implemented in the FI. GAIN (2004) indicates that 19.5% of current practices are not shared in the institution. Moreover, the study indicates that 22% of practitioners are dissatisfied and 11% are very dissatisfied with the quality of their organization's information technology services. In short: there is a need for a structured approach for ORM scenario-analysis.
4.7.2. A way of working for scenario analysis
Scenario-analysis can be defined as an instrument to establish the risk profile of a FI. An effective and efficient approach to scenario-analysis consists of a way of thinking, a way of working, a way of modeling and a way of controlling (Grinsven, 2009). In this chapter we elaborate on the way of working and only briefly discuss modeling and controlling. The way of thinking, way of controlling and way of modeling are extensively discussed in Grinsven (2009).
The way of thinking reflects our view on ORM and provides an underlying structure. It sets the overall tone, delineates how we think that the specific elements should be interpreted, and provides the design guidelines on which a scenario-analysis is based. The way of modeling describes the modeling techniques used to construct models in the methodology. The way of controlling describes how the way of working and the models we use are controlled in order to improve operational risk management. The way of working (which is the focus of this chapter) describes the process, activities and steps that need to be executed in the five phases of ORM: preparation, risk identification, risk assessment, risk mitigation, and reporting.
Scenario-analysis can be divided into the phases preparation, scenario analysis (consisting of risk identification, risk assessment and risk mitigation) and reporting. The way of working describes the process, activities and steps that need to be executed in those phases. The phases are embedded in a FI's vision, goals, strategy, monitoring and control environment. Figure 30 presents an overview of the way of working.
Figure 30: way of working in scenario-analysis. The figure shows the preparation, scenario and reporting phases, fed by internal and external data, embedded in the FI's vision, goals and strategy and its monitoring and control environment.
Preparation phase
The preparation phase provides the framework for the scenario-analysis and takes into account the most important activities prior to the identification, assessment, mitigation, and reporting of operational risks. First, the context, relevant problems and goals of the particular scenario-analysis are determined. This can be done by studying organizational documents about the financial institution, fault trees with risk overviews and computer programs, and by conducting interviews with several subject matter experts and managers. Once the context has been determined, the rationale for selecting the subject matter experts has to be provided. In this step, a list of predefined criteria can be used; important criteria are relevant knowledge and experience, gender, internal politics and commitment to the method. Then, the experts who are taking part in the scenario-analysis can be identified and selected. Although the problem owner has an important voice in selecting the experts, it is suggested that this is done in cooperation with the facilitator of the scenario-analysis. Although it is sometimes reasonable to provide a decision maker with the individual assessment results of the scenario-analysis, it is often necessary to aggregate the assessments into one document. Before doing so, it is suggested to assess the experts. The assessment of experts aims to weigh the performance of each selected expert in order to aggregate his or her assessment of operational risk more accurately into one combined assessment. When all experts are selected, their roles in the scenario-analysis are defined. The definition of roles aims to counteract overconfidence, groupthink and conflicts. Moreover, it enables us to add new tasks to existing roles. Essential roles in this phase are those of the manager, the initiator, the expert and the facilitator. A next step is to choose the right method in combination with supporting tools. A method can consist of an explicitly designed process of sequential, interrelated activities. Supporting tools can consist of a support system to facilitate communication between the experts. Following Clemen & Winkler (1999), we suggest using simple methods and tools because this increases the experts' understanding, eases implementation, and reduces both the frequency of mistakes and the expense. Finally, it is recommended to try out the most critical elements of the scenario-analysis. This provides insight into the process and increases the chance of achieving the specified goals. Moreover, the experts are trained in the method of the scenario-analysis, the tools, the goals, the context and ORM.
Risk identification phase
The risk identification phase aims to provide a reliable information base, which is important for an accurate estimation of the frequency and impact of operational risk. The first step is to identify the operational risk events. We suggest doing this anonymously. During the identification we advocate that the facilitator uses cues (trigger words) to help the experts with the identification of events. When the events are identified, the causes of these events can be explored in a similar manner and framed together with an event. To help the experts in framing, the facilitator can make use of cues to clearly define the operational risks, for example "the risk is that… through which…". A clearly defined operational risk is important for measurement; see the risk assessment phase below. Next, the operational risks are categorized into a number of predefined impact categories. Categorization is recommended because it provides a frame of reference for the risk assessment and mitigation phases and is useful for regulatory reporting purposes, for example the categorization within the Basel II framework (BCBS, 1998; Brink, 2002). Finally, a gap analysis needs to be performed. In this step the identified OR are compared with the relevant internal and external loss data. Often this activity is conducted by a subgroup of experts who possess specific substantive knowledge for the analysis at hand. This analysis can lead, for example, to the identification of an extra set of OR for certain impact categories.
Risk assessment phase
The risk assessment phase aims to provide the management with an accurate quantification of the identified OR and control measures, in terms of the frequency and the impact associated with the potential loss. The first step is to assess the absolute level of exposure to OR. In this step, the experts disregard the existing control measures in the FI; as such, a frame of reference is constructed. The second step is the assessment of the managed level of exposure to OR, in which the experts take into account the existing control measures. Each expert makes an individual assessment of both the absolute and the managed level of exposure to OR. Next, the results are aggregated. The aim of this step is to provide a FI with the input to measure its exposure to OR. Aggregation of the results can be accomplished using a mathematical or a behavioral aggregation method. We recommend combining these aggregation methods to reduce errors and achieve more accurate results. Further, we advocate using simple mathematical methods, e.g. for the assessment of frequency and impact, because they usually perform better than more complex methods (Clemen & Winkler, 1999; Hulet & Preston, 2000). Behavioral aggregation methods often require interaction between the experts; this interaction needs to be designed and structured carefully.
Moreover, when there is uncertainty about the relevant knowledge of the experts, we recommend using an equal-weighted average rule to combine the individual results. Risk mitigation phase The risk mitigation phase aims to mitigate the OR that, after assessment, still have an unacceptable level of frequency and/or impact. For OR that are not sufficiently managed by the existing control measures, alternative control measures are identified. Each OR to be mitigated needs its own specific set of control measures. Existing control frameworks, databases and benchmark studies can be used for this. We advise applying mitigating measures whose cost bears a reasonable relationship to the expected losses. Similar to the risk assessment phase, the
experts assess both the absolute and the managed level of exposure to OR. Finally, the results are aggregated in a similar fashion to the risk assessment phase. Reporting phase The reporting phase aims to provide the management, regulators, initiators and experts with relevant information regarding the scenario-analysis. All relevant information and data derived from the scenario-analysis are formally documented. It is recommended that the facilitator be familiar with the most recent reporting guidelines. The experts have invested their valuable time and intellectual effort in the scenario-analysis. We advise discussing both the results and the final report with the experts to leverage the experience gained, facilitate learning and maintain continuity.
4.7.3. Summary Scenario-analysis is often used to overcome the methodological shortcomings of internal and external loss data in financial institutions. However, the processes, techniques and tools used to support scenario-analysis are often ineffective, inefficient and unsuccessfully implemented in the FI. In this chapter we presented a structured way of working for scenario-analysis to overcome these problems: it enables financial institutions to operate with scarce data and to understand OR with a view to reducing it, while simultaneously reducing economic capital within the Basel II regulations. Moreover, this way of working makes it possible to explore complex future scenarios and possible business opportunities, and to protect financial institutions from catastrophic losses. The way of working presented in this chapter has been successfully implemented at several large financial institutions.
4.7.4. References
BCBS. (2003a). Sound Practices for the Management and Supervision of Operational Risk. Basel Committee Publications No. 96. Bank for International Settlements.
Brink, G. J. v. d. (2002). Operational Risk: The New Challenge for Banks. New York: Palgrave.
Brown, M., Jordan, J. S., & Rosengren, E. (2002). Quantification of Operational Risk, 239-248.
Alexander, C. (2000). Bayesian Methods for Measuring Operational Risk. Reading, UK: The University of Reading.
CFSAN, Center for Food Safety and Applied Nutrition. (2002). Initiation and Conduct of All 'Major' Risk Assessments within a Risk Analysis Framework. U.S. Food and Drug Administration.
Clemen, R. T., & Winkler, R. L. (1999). Combining Probability Distributions From Experts in Risk Analysis. Risk Analysis, 19(2), 187-203.
Connolly, J. M. (1996). Taking a Strategic Look at Risk. Marsh & McLennan Companies.
Cruz, M., Coleman, R., & Salkin, G. (1998). Modeling and Measuring Operational Risk. Journal of Risk, 1, 63-72.
GAIN. (2004). COSO Impact on Internal Auditing.
Grinsven, J.H.M. v., Ale, B., & Leipoldt, M. (2006). Ons overkomt dat niet: Risicomanagement bij financiële instellingen. Finance Incorporated, 6, 19-21.
Grinsven, J.H.M. v. (2009). Improving Operational Risk Management. IOS Press.
Hoffman, D. D. (2002). Managing Operational Risk: 20 Firmwide Best Practice Strategies (1st ed.). Wiley.
Hulet, D. T., & Preston, J. Y. (2000). Garbage In, Garbage Out? Collect Better Data for Your Risk Assessment. Proceedings of the Project Management Institute Annual Seminars & Symposium, Houston, Texas, USA.
Ramadurai, K., Beck, T., Scott, G., Olson, K., & Spring, D. (2004). Operational Risk Management & Basel II Implementation: Survey Results. New York: Fitch Ratings Ltd.
RMA. (2000). Operational Risk: The Next Frontier. The Journal of Lending & Credit Risk Management (March), 38-44.
Young, B., Blacker, K., Cruz, M., King, J., Lau, D., Quick, J., et al. (1999). Understanding Operational Risk: A Consideration of Main Issues and Underlying Assumptions. Operational Risk Research Forum.
4.8. Group support systems for operational risk management Dr. ing. Jürgen van Grinsven Ir. Henk de Vries
Over the past years, various Group Support Systems (GSS) have been used to support Operational Risk Management (ORM) (Brink, 2002). ORM supports decision-makers in making informed decisions based on a systematic assessment of operational risks (Cumming and Hirtle, 2001; Brink, 2003). Financial Institutions (FI) often use loss data and expert judgment to estimate their exposure to operational risk (Cruz, 2002). Expert judgment is usually elicited either from experts individually, often referred to as individual self-assessments, or group-wise with more than one expert, often referred to as group-facilitated self-assessments (Grinsven, 2007). While individual self-assessments are currently the leading practice, the trend is towards group-facilitated self-assessments. There is a need to support these group-facilitated assessments using Group Support Systems.
In this chapter we discuss how GSS can be used to support expert judgment activities to improve ORM. First, we describe how a GSS can support multiple expert judgment activities. Second, we present a case study describing the application of a specific GSS, GroupSystems, to ORM in a financial institution.
4.8.1. Expert judgment and group support systems Expert judgment is defined as the degree of belief that a risk occurs, based on the knowledge and experience an expert draws on when responding to questions about a subject (Clemen & Winkler, 1999; Cooke & Goossens, 2004). Expert judgment is increasingly advocated in FIs for identifying and estimating the level of uncertainty about Operational Risk (OR) (Grinsven, 2007). Moreover, expert judgment can be used to incorporate forward-looking activities in ORM. Group Support Systems can be used for the combined purposes of process improvement and knowledge sharing (Kock & Davison, 2003). A GSS can be seen as an electronic technology that supports a common collection of tasks in ORM such as idea generation, organization and communication. GSS aim to improve (collaborative) group work (Vogel, Nunamaker et al., 1990; Nunamaker, Dennis et al., 1991). Effectiveness and efficiency gains can be achieved by applying GSS to structure multiple experts' exchange of ideas,
opinions and preferences. There are several GSS tools available to support multiple experts in collaborative group work (Austin et al., 2006). Examples are GroupSystems, Facilitate, WebIQ, Meetingworks and Grouputer. Grinsven (2007) presents an overview of GSS tools, based on GroupSystems, that can be used to support experts in the ORM phases; see Table 14 for examples.

Preparation: Provides the framework for the experts, taking into account the most important activities prior to the identification, assessment, mitigation, and reporting of operational risks. GSS tools: Categorizer, Electronic Brainstorming, Group Outliner.
Risk identification: Aims to provide a reliable information base to enable an accurate estimation of the frequency and impact of OR in the risk assessment phase. GSS tools: Electronic Brainstorming, Group Outliner, Vote.
Risk assessment: Aims for an accurate quantification of the frequency of occurrence and the impact associated with the potential loss of the identified OR and existing control measures. GSS tools: Alternative Analysis, Vote.
Risk mitigation: Aims to mitigate those OR that, after assessment, still have an unacceptable level of frequency and/or impact. GSS tools: Alternative Analysis, Topic Commenter.
Reporting: Aims to provide stakeholders such as the manager, initiator and experts with the relevant information regarding the ORM exercise. GSS tools: Group Outliner.
Table 14: examples of GSS tools (Grinsven, 2007)
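Table 14's phase-to-tool mapping can be captured as a simple lookup structure. The tool and phase names come from the chapter; the code itself is only an illustrative sketch:

```python
# Mapping of ORM phases to GroupSystems tools, following Table 14.
GSS_TOOLS = {
    "preparation":         ["Categorizer", "Electronic Brainstorming", "Group Outliner"],
    "risk identification": ["Electronic Brainstorming", "Group Outliner", "Vote"],
    "risk assessment":     ["Alternative Analysis", "Vote"],
    "risk mitigation":     ["Alternative Analysis", "Topic Commenter"],
    "reporting":           ["Group Outliner"],
}

def tools_for(phase):
    """Return the GSS tools suggested for an ORM phase."""
    return GSS_TOOLS[phase.lower()]

print(tools_for("Risk assessment"))  # ['Alternative Analysis', 'Vote']
```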
4.8.2. Business case: Dutch financial institution The ORM process consists of five main phases: preparation, risk identification, risk assessment, risk mitigation and reporting. These phases can be viewed as an IPO model (Input, Processing, Output), resulting in an accurate estimate of the exposure to OR as final output. In this section we present a case study describing the application of GroupSystems to each phase of the ORM process at a large Dutch FI (Grinsven, 2007). Preparation phase The activities of the preparation phase can be divided into the sub-activities: determining the context and objectives, identifying, selecting and assessing the experts, choosing the method and tools, trying out the ORM exercise and training the experts (Goossens & Cooke, 2001; Grinsven, 2007). To determine the context and objectives, we used the Electronic Brainstorming tool to consider the business process under investigation and determine the scope. Moreover, we learned that the Electronic Brainstorming, Group Outliner and Categorizer tools from
GroupSystems can be used by the facilitator to support multiple experts in clearly defining the scope, context and objectives of the ORM exercise. Using the Topic Commenter, we identified and selected the final group of experts for the ORM exercise. The experts themselves were not assessed. We tried out the exercise with two managers from the Dutch FI and tested whether the GSS was helpful for the particular exercise. We trained the experts in using the GSS by presenting and practicing several practical examples with them. Risk identification phase The activities of the risk identification phase can be divided into identifying the OR, categorizing the OR and performing a gap analysis. One of the objectives of this phase is to arrive at a comprehensive and reliable identification of the OR, to reduce the likelihood that an unidentified operational risk becomes a potential threat to the FI. Member status, internal politics, fear of reprisal and groupthink can make the outcome of the risk identification less reliable (Grinsven, 2007). Electronic Brainstorming was used to identify the events. Then, the Categorizer was used to define the most important OR. For this, we used a group-facilitated workshop. Using Electronic Brainstorming combined with the Vote tool, we supported the experts in performing a gap analysis. In the case study, the experts appreciated the possibility to identify OR events anonymously. Risk assessment phase The activities of the risk assessment phase can be divided into the following sub-activities: assessing the OR and aggregating the results. Grinsven (2007) advises ensuring that the experts assess the OR individually, to minimize inconsistency and bias; see also e.g. Clemen & Winkler (1999). We used the Alternative Analysis tool from GroupSystems to enable the experts to assess the OR individually and anonymously. Then, using the Multi Criteria tool, we calculated the results by aggregating the individual expert assessments.
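The case study's use of the standard deviation across individual assessments to structure the expert interaction can be sketched as follows; the scores and the discussion threshold are hypothetical illustrations:

```python
# Flag operational risks where expert assessments diverge, using the sample
# standard deviation; diverging risks are discussed before aggregation.
from statistics import mean, stdev

# Hypothetical frequency scores (1-5 scale) from four experts per risk.
scores = {
    "system outage":    [4, 4, 3, 4],
    "external fraud":   [1, 5, 2, 4],   # strong disagreement
    "data entry error": [3, 3, 3, 3],
}

THRESHOLD = 1.0  # assumed cut-off for triggering a discussion

def needs_discussion(assessments):
    """True when the spread of expert scores exceeds the threshold."""
    return stdev(assessments) > THRESHOLD

for risk, s in scores.items():
    if needs_discussion(s):
        print(f"{risk}: discuss rationales (stdev {stdev(s):.2f})")
    else:
        print(f"{risk}: accept mean {mean(s):.2f}")
```

Risks flagged this way are the ones where experts explain the rationales behind their assessments before the results are combined.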
The standard deviation function helped us to structure the interactions between the experts. In these interactions the experts provided the rationales behind their assessments. We learned that the GSS tools helped us prevent the results from being influenced by groupthink and the fear of reprisal. Risk mitigation phase The activities of the risk mitigation phase can be divided into three sub-activities: identifying alternative control measures, re-assessing the residual operational risk and aggregating the results. The methods and tools that can be used in this phase are almost identical to those of the risk assessment phase. However, a slightly more structured method was used to identify alternative control
measures. At the Dutch FI we used the Topic Commenter tool to support this activity. Experts could elaborate on or improve the existing control measures and provide examples for each of them. Then, we used the Alternative Analysis tool to anonymously re-assess the frequency and impact of the OR. Finally, we calculated and aggregated the results using the standard deviation function combined with a group-facilitated session. Reporting phase The activities of the reporting phase can be divided into the sub-activities: documenting the results and providing feedback to the experts. Documenting the results needs to follow regulatory reporting standards. This was done by an employee of the Dutch FI, using the output from the GSS as input for the report. The intermediate results were presented to the experts immediately after the ORM sessions. Following Grinsven (2007), a highly structured process was used to present these results, enabling the experts to leverage the experience gained and to maintain business continuity. We facilitated a manual workshop to provide feedback to the experts. Moreover, at the Dutch FI we made sure the final report complied with the relevant regulatory reporting standards. Future research should investigate applying GSS to provide structured feedback.
4.8.3. Summary Expert judgment is extremely important for ORM when loss data alone does not provide a sufficient, robust and satisfactory identification and estimation of the FI's exposure to OR. The case study indicates that a GSS can be used to support expert judgment in every phase of the ORM process. GroupSystems can be used in each ORM phase to support experts in achieving more effective, efficient and satisfying results. GSS can help in gathering and processing information about operational risk. Moreover, GSS has the potential to improve the ORM process by minimizing inconsistency and bias and by reducing groupthink. However, more research needs to be done to find out which GSS packages besides GroupSystems are suitable to support the ORM process.
4.8.4. References
Austin, T., Drakos, N., & Mann, J. (2006). Web Conferencing Amplifies Dysfunctional Meeting Practices. No. G00138101, Gartner, Inc.
Brink, G. J. v. d. (2002). Operational Risk: The New Challenge for Banks. New York: Palgrave.
Brink, G. J. v. d. (2003). The Implementation of an Advanced Measurement Approach within Dresdner Bank Group. IIR Conference Basel II: Best Practices in Risk Management and Measurement, Amsterdam, Dresdner Bank Group.
Clemen, R. T., & Winkler, R. L. (1999). Combining Probability Distributions From Experts in Risk Analysis. Risk Analysis, 19(2), 187-203.
Cooke, R. M., & Goossens, L.H.J. (2000). Procedures Guide for Structured Expert Judgement. Brussels-Luxembourg: European Commission.
Cooke, R. M., & Goossens, L.H.J. (2004). Expert Judgement Elicitation for Risk Assessments of Critical Infrastructures. Journal of Risk Research, 7(6), 643-156.
Cumming, C., & Hirtle, B. (2001). The Challenges of Risk Management in Diversified Financial Companies. Federal Reserve Bank of New York Economic Policy Review.
Cruz, M. (2002). Modeling, Measuring and Hedging Operational Risk. Wiley Finance.
Goossens, L.H.J., & Cooke, R.M. (2001). Expert Judgement Elicitation in Risk Assessment. In Assessment and Management of Environmental Risks. Kluwer Academic Publishers.
Grinsven, J.H.M. v. (2007). Improving Operational Risk Management. Ph.D. dissertation, Delft University of Technology, Faculty of Technology, Policy and Management, the Netherlands.
Kock, N., & Davison, R. (2003). Can Lean Media Support Knowledge Sharing? Investigating a Hidden Advantage of Process Improvement. IEEE Transactions on Engineering Management, 50(2), 151-163.
Nunamaker, J. F., Dennis, A. R., et al. (1991). Electronic Meeting Systems to Support Group Work. Communications of the ACM, 34(7), 40-61.
Vogel, D., Nunamaker, J. F., Martz, W.B., Grohowski, R., & McGoff, C. (1990). Electronic Meeting Systems Experience at IBM. Journal of Management Information Systems, 6(3), 25-43.
The most important job of senior risk managers today is to identify, formulate, assess, deliver and communicate value propositions to their stakeholders.
J. van Grinsven
5. Formulating value propositions Dr. ing. Jürgen van Grinsven
We observe that senior risk managers are under pressure. They must participate in a highly competitive environment while solidly honoring their professional obligations and navigating their business safely toward the future. Paramount to their success is the ability to identify, formulate, assess, deliver and communicate value propositions to their stakeholders. In this chapter we reflect upon the previous chapters, present an outlook and provide guidance for senior risk managers to formulate value propositions.
5.1. Definition of a value proposition There are many definitions of the concept of a value proposition that illustrate its importance to the business. Most definitions stress the importance of the client but do not address the concept of risk (Lanning and Phillips, 1991). From chapters one, three and four we learned about the importance of risk management and value, and how to quantify credit and operational risk costs in financial institutions. From chapter two we learned that the aim of financial institutions is to service the evolving financial needs of their stakeholders (e.g. clients, employees, suppliers, regulators), and that in response to the credit crunch, market pressure, e-commerce and decentralization many financial institutions increasingly focus their efforts on their clients. Taking this information into consideration, we define a value proposition as: a clear, concise series of realistic statements based on an analysis and quantified review of the benefits, costs, risks and value that can be delivered to stakeholders. This definition can be used to position value to a number of stakeholder classes:
Clients (internal and external)
Suppliers / strategic business partners
Employees
Regulators
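The "quantified review of benefits, costs, risks and value" in this definition can be illustrated with a toy calculation; every figure below is hypothetical:

```python
# Toy quantified review behind a value proposition:
# net value = benefits - costs - expected risk cost.
benefits = 500_000.0      # projected extra revenue, EUR/year (hypothetical)
costs = 200_000.0         # delivery and operating costs, EUR/year
risk_probability = 0.125  # assumed chance the proposition fails to deliver
risk_impact = 800_000.0   # assumed loss if it fails, EUR

expected_risk_cost = risk_probability * risk_impact  # 100,000 EUR/year
net_value = benefits - costs - expected_risk_cost
print(net_value)  # 200000.0 -> a positive net value supports the proposition
```

The point of such a sketch is only that each element of the definition is made explicit and quantifiable before the proposition is communicated to a stakeholder.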
Chapter 5: formulating value propositions
5.2. The client ‘drives’ the value proposition Formulating value propositions is increasingly important for the economic activities and economic development of financial institutions. Over time, propositions aimed at added value generate more profit than propositions aimed at improving effectiveness and/or efficiency, or than having no propositions at all; see Figure 31. Nowadays, both small and large institutions realize the importance of the value proposition.

Figure 31: importance of value propositions (based on publication list). The figure plots profit (€) over time for three curves: propositions aimed at added value, propositions aimed at improvement of efficiency/effectiveness, and no propositions.
Yet, despite the growing attention, there is remarkably little agreement between financial institutions and their clients as to what constitutes a value proposition (Grinsven and Ros, 2009). Moreover, the current credit crunch indicates that there is strong distrust between clients and financial institutions. Clients believe that FIs want to extract money from them, want to sell them more of the existing products and services, and want to cross-sell them new products and services. There are several causes of and constraints on this viewpoint (Grinsven and Ros, 2009; Maister, 2003; Kambil et al., 1996). Financial institutions are constrained by laws and legislation, processes and budgets, and motivated by various factors such as managerial approval, personal satisfaction and long-term functionality. Further, the value they want to deliver is not specified in sufficient detail. Fortunately, there appears to be consensus between clients and financial institutions about the idea that value propositions will deliver positive
results. In Table 15 we summarize a number of positive results from the clients' and financial institutions' perspective.

Clients' and financial institutions' perspective on value propositions:
Increased revenue
Decreased costs
Improved operational efficiency
Faster time to market
Increased market share
Improved customer retention
Leads are generated
Decreased employee turnover
Increased value
Table 15: results of value propositions (Maister, 2003; Kambil et al., 1996)
Following Grinsven and Ros (2009), we argue that the success of a financial institution goes in tandem with the collaborative participation of its clients. Client relationship management is characterized by risk management and can help to formulate value propositions. In our view, the clients drive the value propositions of a financial institution. Each client is unique, and for a value proposition to be effective it has to be tailored to the client's specific needs. These needs can be learned through e.g. good client risk management; see chapter two. A value proposition focuses on understanding and meeting these requirements. It is not about telling the client what you offer; it is about why your offer is the best choice. For the financial institution, this requires shifting perspectives. A value proposition operates within the constraints mentioned above, makes full use of risk management and meets the requirements of clients. Financial institutions should focus on the following questions: how can we create and deliver value to our clients? How do we ensure our proposition is relevant and attractive? And how do we ensure that the experience of our clients is consistently positive? As we argued before, a good starting point might be the senior risk manager.
5.3. Role of the senior risk manager One of the lessons learned from the credit crunch is that the role of the senior risk manager cannot remain the same in a highly competitive and globalized environment (Grinsven and Ros, 2009). Their role will become increasingly important for the financial institution. However, due to the recent emphasis on ‘added value’, many senior risk managers have to change the way they think about how they deliver value to their stakeholders. The existence of a value proposition is a prerequisite for the diffusion and penetration of risk management. Senior risk managers who want to make a financial institution succeed face a challenging task. They have to recognize that formulating value propositions is not an isolated activity. Rather, it is about learning to fully understand and decisively act on the specific demands that the financial institution and its stakeholders (e.g. internal clients, external clients, board of directors, risk committee, analysts, employees, suppliers, rating agencies, regulators) most value. This includes full transparency to the key
stakeholders. An increasingly important objective for the senior risk managers is assuring the key stakeholders that their risk management is effective, efficient and leads to satisfaction when implemented in the business. Moreover, the role of the senior risk manager is to develop value propositions for the financial institution and the (key) stakeholders. In Table 16 we have outlined several key stakeholders* and examples of information that can be provided to them.

Board of directors: periodic reports and updates (assurance) on the major risks, as well as the review of risk management policies and the effectiveness of the internal control environment.
Risk committee: risk information for directing all credit, market and operational risk management activities; also insurance, security, audit and compliance information.
Employees: the added value of risk management to the employee and the business they are working in, e.g. risk awareness, security and privacy of information.
Regulators: assurance about sound (risk) management practices, about the financial institution's compliance with the regulatory requirements, and about the availability of, and liquidity behind, risk transfer products.
Analysts: risk information about risk exposure(s) to develop investment opinions/strategies.
Rating agencies: risk information about risk exposure(s) to develop their rating opinions.
Table 16: key stakeholders and their information
* Note that we do not mention the Chief Risk Officer (CRO) here because the CRO is usually the head of the senior risk managers and can also be viewed as a senior risk manager.
5.4. Process of formulating value propositions An important aim of a value proposition is to be superior to, and more profitable than, propositions aimed at improving efficiency or effectiveness (see Figure 31). The latter are usually proposed to improve processes or applications. To be superior, a value proposition must be precise and specific. Moreover, the proposition must be actionable, allow trade-offs, and enable the setting of priorities throughout the business functions, processes and relevant resources. But how do you formulate such a value proposition? What are the essential ingredients? In this section we describe the process of formulating value propositions from the viewpoint of the senior risk manager. This process entails:
Understanding the benefits for the stakeholders;
Formulating the value proposition; and
Delivering the value proposition.
5.4.1. Understanding the benefits for the stakeholders Understanding the benefits for the stakeholders is crucial to enable the formulation of value propositions. Since there are typically several different stakeholders (who might benefit in peripheral ways), it is often useful to identify the top beneficiary stakeholder (see section 5.3). The main idea behind this approach is to make a work breakdown of the tangible benefits and then address the stakeholders appropriately. An essential ingredient in understanding the benefits to the stakeholders is knowledge. Getting the required knowledge is not about making assumptions but about listening, interviewing and making sure that you are dealing with the essential information, i.e. decision criteria (Maister, 2003). Only then can you formulate value propositions for the stakeholders. How can we do this? Directly asking the stakeholders often does not reveal the aspects from which they would benefit most. Frequently, they give a broad description of features, attributes and experiences. Moreover, they regularly propose many ideas, including thoughts that are not actionable, profitable or beneficial. Senior risk managers need skills to choose between these ideas. A better way to understand stakeholder benefits is to use semi-structured interviews or to experience the stakeholder benefits yourself. Other means are user groups, reversed seminars, client meetings and client feedback (Maister, 2003). Helpful questions can be: who really cares about this idea? What do the stakeholders care about? What would I want as an end result? Is this more valuable than ‘other’ solutions? Who to talk with? In financial institutions there can be a large number of stakeholders. Do you need to talk with all of them to understand the benefits? Of course not; a more realistic approach is to select and talk to representatives of different groups of stakeholders.
Their views can be complemented with those of other stakeholders from these groups. When no views on the benefits emerge beyond those you have already heard, you have probably gathered enough input; see section 5.4.2.
5.4.2. Formulate the value proposition A value proposition should be formulated as a clear, concise series of realistic statements based on an analysis and quantified review of the benefits, costs, risks and value that can be delivered to stakeholders. An example of a value proposition, based on chapter 4.8, is: ‘using a group support system in the risk identification phase leads to a 20% higher number of identified risks’. When you read carefully through chapters 1-4 you will discover the value that is embedded in these chapters. In each chapter the ‘who cares’ question is answered. Each chapter
starts with answering that question. The chapter then carefully builds towards the conclusion. Moreover, in each chapter numerous arguments are mentioned in favor of using e.g. an approach, methodology, process or tool. We summarize several examples of (short) value propositions in Table 17. These value propositions are derived from the previous chapters. For each we refer to the corresponding chapter and mention the key stakeholder, the benefits and a short value proposition. We refer to the previous chapters for the value propositions embedded in these chapters.

Chapter 2.1. Key stakeholder: board of directors. Benefits: combat money laundering. Value proposition: minimizing reputation risk and compliance with regulators.
Chapter 2.2. Key stakeholder: risk committee. Benefits: directing integrity. Value proposition: a 40% improvement of being ‘in control’ for the risk committee.
Chapter 2.3. Key stakeholders: board of directors, risk committee, regulators. Benefits: effectively deal with a complex set of laws and regulation. Value proposition: implementing an integral, principle-based approach to compliance in the financial institution.
Chapter 3.2. Key stakeholders: board of directors, regulators. Benefits: balancing economic capital (EC) and regulatory capital (RC). Value proposition: improved capital management and compliance with regulations, with which we can reduce the total EC and RC costs by 27%.
Chapter 3.3. Key stakeholder: risk committee. Benefits: proactive operational risk management. Value proposition: prevention of operational risk and timely detection of unfavorable trends, which leads to a reduction of the operational costs by 8%.
Chapter 4.2. Key stakeholders: board of directors, regulators. Benefits: manage operational risk within Solvency II. Value proposition: unbundling operational risk from other risk types helps to prevent future catastrophic failures.
Chapter 4.5. Key stakeholders: risk committee, rating agency. Benefits: predict and decrease operational risk losses. Value proposition: implementing a practical approach to effectively and efficiently control operational risks, which reduces the coordination costs in the daily operations.
Table 17: examples of value propositions (based on chapters 2, 3 and 4)
As a senior risk manager you can use each chapter in this book to find valuable arguments to formulate your value proposition(s). Moreover, you will find many insightful ideas, concepts and methods to help shape or reshape your value propositions. Assess your value proposition Once you have roughly formulated a value proposition, we recommend assessing it to confirm that it is distinctive and appropriate. The best way to assess your value proposition is by asking others. Although it is best to ask the clients (they are the target), we recommend first asking your peer senior risk managers, then a number of senior risk colleagues from other business lines (risk
council) and finally several board members or other influencers closely related to them. This is less risky than directly asking the FI’s clients. This process helps to shape, re-shape and sharpen your value proposition. Questions that you might expect from these stakeholders are related to the problem that will be solved, the strategic fit of the proposition with the financial institution, the competences within the institution to deliver the proposition, benefits for the clients and/or other stakeholders. Figure 32 visualizes the assessment process of your value proposition.
Figure 32: assess your value proposition. Roughly shaped value propositions are refined into propositions aimed at added value through successive review by peer senior risk managers, the risk council and the board.
In Table 18 we provide several examples of questions which you can use to assess your value proposition. These questions can also be expected from the peer senior risk managers, the risk council and the board. In the table we distinguish between questions related to the stakeholder, competitors and the delivery.

Stakeholder: Are the stakeholders clearly identified? Are the stakeholder benefits explicit, clear, unambiguous, specific and quantifiable? Did we choose the best value proposition? Is our proposition clear and simple?
Competitors: Is the proposition efficient compared to competitors? Is the proposition superior compared to competitors? Is our proposition viable compared to competitors?
Delivery: Can we deliver the proposition? Do we have the skills? Can we deliver at a cost that returns an adequate profit? Are there any risks in delivering the proposition?
Table 18: questions which you can use to assess your value proposition
5.4.3. Deliver the value proposition

When you have chosen the proposition, it needs to be delivered (Kambil et al., 1996; Lanning and Phillips, 1991; Lanning and Michaels, 1988). Effective communication is crucial for delivering the message and persuading the stakeholders (Perloff, 1993). Consequently, a senior risk manager needs to be a skilled communicator. To communicate effectively, the senior risk manager must know who his stakeholders are, be aware of their expectations and (political) agendas, understand how they measure those expectations, know the benchmark(s) he is measured against and understand what he is actually accomplishing to satisfy stakeholder needs (Perloff, 1993; Grunig, 1992). Creativity in communication is important because research shows that stakeholders react more positively to new stimuli that differ from those previously received (Maister, 2003; Perloff, 1993). By taking new and unique approaches in your communication, you are more likely to attract and hold your stakeholders' attention. Further, being relevant underscores the importance of delivering the value proposition in a way that is meaningful and important to the stakeholders.
5.5. Summary

In this chapter we presented an outlook and provided guidance for senior risk managers on formulating value propositions. We defined a value proposition and argued that the client ‘drives’ the value proposition. We discussed the role of the senior risk manager and presented the process of formulating value propositions: from understanding the client to formulating, assessing and delivering a value proposition.
5.6. References

Grinsven, Jürgen van (2009). Improving Operational Risk Management. IOS Press.
Grinsven, Jürgen van, and Ros, Gert Jan (2009). Lessen en uitdagingen in het financiële systeem [Lessons and challenges in the financial system]. Bank en Effectenbedrijf, October issue.
Grunig, James E. (1992). Excellence in Public Relations and Communication Management. New Jersey.
Kambil, A., Ginsberg, A. and Bloch, M. (1996). "Re-inventing Value Propositions". NYU Centre for Research on Information Systems Working Paper IS-96-21, New York University.
Lanning, M. and Michaels, E. (1988). A Business is a Value Delivery System. McKinsey Staff Paper.
Lanning, M. and Phillips, L. (1991). Building Market-Focused Organisations. White Paper.
Maister, David H. (2003). Management van professionele organisaties [Management of professional organizations]. Academic Service.
Perloff, Richard M. (1993). The Dynamics of Persuasion. Hillsdale, New Jersey.
Authors index

Dr. ing. Jürgen van Grinsven: Director at Deloitte (Enterprise Risk Services); lecturer at the Nivra-Nyenrode University; director at Van Grinsven Consulting.
Prof. dr. Fred de Koning RA RE: Professor at Nyenrode, School of Accountancy & Controlling.
Drs. Patrick Oliemeulen CAIA FRM RV: Director and senior risk advisor at Insignia Advisory.
Michael Bozanic MBA: Head of Operational Risk Management Control at Fortis Insurance, Group Risk.
Drs. Philip Gardiner: Operational Risk Manager at NIBC Bank NV, The Hague.
Joop Rabou RA RE: Manager risk management, directorate particulieren, at Rabobank Nederland.
Drs. Gert Jan Sikking: Advisor to the CEO and secretary of the Board of PGGM Investments.
Ir. Henk de Vries: Delft University of Technology.
Prof. dr. Ben Ale: Professor at Delft University of Technology, section safety science.
Dr. ir. Louis Goossens: Associate professor at Delft University of Technology, section safety science.
Dr. ir. Marijn Janssen: Associate professor at Delft University of Technology, section ICT.
Dr. ir. Jan van den Berg: Associate professor at Delft University of Technology, section ICT.
Dr. Gerrit Jan van den Brink RA: Partner at ValueData7 GmbH; former head of operational risk at Dresdner Bank.
Dr. Menno Dobber: ABN Amro; former Ph.D. researcher at the Vrije Universiteit Amsterdam.
Peter Leijten: Head of ORM products and services, ABN Amro N.V., Group Risk Management.
Drs. Koen Munniksma: ABN Amro bank, Capital Management Group.
Heru S. Na: ABN Amro N.V., Group Risk Management.
Lourenco C. Miranda: ABN Amro, Brazilian Risk Management.
Drs. Marc Leipoldt: Managing director at Global Risk Advisory Services.
Dr. Sylvie Bleker: Senior manager at Deloitte Enterprise Risk Services.
Drs. Remco Bloemkolk: Senior analyst at ING, Corporate Risk Management.
Ing. Patrick Abas: Senior business consultant at Abas Consultancy FZE, Dubai.
Drs. Arnoud Hassink: Senior controller at Rabobank Global Financial Markets.
Drs. Paul de Jong: Senior consultant at ConQuaestor B.V.
Marc Morgenland: Project Manager at ConQuaestor B.V.
Drs. Bas van Tongeren: Consultant at Björnsen Consulting.
Curriculum vitae

Dr. ing. Jürgen H.M. van Grinsven holds a Ph.D. in Operational Risk Management, an MSc degree (drs.) in the social sciences, a bachelor's degree (ing.) in technology management and a bachelor's degree in electronics.
Jürgen works in the professional services industry as a management consultant for a wide range of national and international organizations. He has worked on numerous projects at major banks, insurance companies and pension funds, including ABN Amro, ING, Postbank, Nationale Nederlanden, RVS verzekeringen, PGGM, Achmea, Achmea Avero, Interpolis, Cintrus Achmea, Staalbankiers, Philips Pensioenfonds and Banca di Roma.
Further, Jürgen is a frequent speaker and trainer at public events such as those of PRMIA and Euroforum. He also teaches risk management courses at Nyenrode University, Delft University of Technology and the Haagsche Hogeschool.
His research has been published in books, book chapters and articles, and has been presented at international conferences in, among other countries, the Netherlands, the United States, Croatia, Turkey, Germany and France.
Please visit www.jurgenvangrinsven.com, where you can find a recent CV, books, articles and contact details.