Contributions to Management Science
For further volumes: http://www.springer.com/series/1505
Tatjana Samsonowa
Industrial Research Performance Management: Key Performance Indicators in the ICT Industry
Tatjana Samsonowa SAP AG SAP Research Dietmar-Hopp-Allee 16 69190 Walldorf Germany
ISSN 1431-1941
ISBN 978-3-7908-2761-3
e-ISBN 978-3-7908-2762-0
DOI 10.1007/978-3-7908-2762-0
Springer Heidelberg Dordrecht London New York
Library of Congress Control Number: 2011940419
© Springer-Verlag Berlin Heidelberg 2012
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.
The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
Printed on acid-free paper
Physica-Verlag is a brand of Springer-Verlag Berlin Heidelberg
Springer-Verlag is a part of Springer Science+Business Media (www.springer.com)
Managing the Performance of Industrial Research in the ICT Industry: Assessing Research Goals with Suitable Key Performance Indicators

Dem Fachbereich Rechts- und Wirtschaftswissenschaften an der Technischen Universität Darmstadt zur Erlangung des Grades eines Doctor rerum politicarum (Dr. rer. pol.) eingereichte Dissertation

vorgelegt von Dipl.-Betriebswirtin (FH) Tatjana Samsonowa aus Arkalyk

Erstgutachter: Prof. Dr. Peter Buxmann
Zweitgutachter: Prof. Dr. Volker Caspari
Tag der Einreichung: Darmstadt, 2. November 2010
Tag der mündlichen Prüfung: 14. Januar 2011
Acknowledgement
This thesis would never have been written without the support of many, many people. I hope to be forgiven for not mentioning all the names here.

I am truly grateful to the members of the supervisory team of the Information Systems department of the Darmstadt University of Technology, Professor Dr. Peter Buxmann and Professor Dr. Volker Caspari: Your commitment was outstanding! I feel privileged that I was able to work with you and to learn so much from your expertise and guidance!

As my thesis was created within SAP Research's PhD program, I owe huge gratitude to many SAP colleagues, of whom I want to mention at least two: Dr. Joachim Schaper and Dr. Wolfgang Gerteis. Joachim, you gave me the courage to start such an endeavor in the first place and you restored my confidence more than once in the process. Besides your constructive feedback and your inspiration, I knew that I could always count on you. You have been a true mentor during the entire time! Wolfgang, you supported me in critical phases of my research. You gave me both merciless feedback and constructive ideas. With your patience and dedication you helped me many times to get unstuck and to find my path again. I was very fortunate to have such a tough sparring partner and at the same time a trusted advisor like you!

A big THANK YOU! to all others who helped me to complete my research and to make it a success: the students who supported me, especially in collecting the data for my case studies: Your energy, your good mood, and your hard and dedicated work were invaluable! The interviewees from the case study companies: Your willingness to share relevant information was crucial to support the major findings of my thesis. And all the participants of my survey: A cordial Thank You for spending your precious time on questionnaires and follow-up discussions and providing me with such a valuable data collection!

Last but not least, I want to express my deepest gratitude to the most important people in my life: my parents Emma and Erhard and my sister Lydia. You have always been there for me with your whole heart during the entire time. I dedicate this work to you! I love you! You are the light of my life!

Truly, Tatjana
Contents
1 Introduction ..... 1
  1.1 Background and Problem Definition ..... 2
  1.2 Research Questions and Objectives ..... 4
  1.3 Dissertation Structure ..... 5
2 Performance Management ..... 9
  2.1 Introduction and Fundamentals ..... 9
    2.1.1 Invention and Innovation ..... 10
    2.1.2 Innovation Management ..... 14
    2.1.3 Research and Development ..... 15
  2.2 Performance Management – Basic Terms and Definitions ..... 22
    2.2.1 Performance ..... 22
    2.2.2 Goals and Goal Setting ..... 26
    2.2.3 Measures, Metrics, and Indicators ..... 27
    2.2.4 Performance Management ..... 32
    2.2.5 Performance Measurement ..... 37
  2.3 Summary ..... 41
    2.3.1 Purposes of Performance Management/Measurement ..... 42
    2.3.2 The Information and Communication Technologies (ICT) Sector ..... 48
    2.3.3 The Software Industry ..... 50
3 State-of-the-Art in Performance Management ..... 53
  3.1 Performance Management Systems ..... 55
    3.1.1 Generic Approaches: Schools of Thought ..... 55
    3.1.2 R&D Performance Measurement Systems ..... 58
    3.1.3 Research (Only) Performance Measurement Systems ..... 63
    3.1.4 Requirements of a Performance Management System ..... 67
    3.1.5 Characteristics of Research Compared to Development ..... 74
  3.2 Organizational Goals ..... 83
    3.2.1 Generic Organizational Goals ..... 83
    3.2.2 R&D Organizational Goals ..... 91
    3.2.3 Research (Only) Organizational Goals ..... 93
  3.3 Key Performance Indicators (KPIs) ..... 98
    3.3.1 Generic Key Performance Indicators ..... 98
    3.3.2 R&D Key Performance Indicators ..... 101
    3.3.3 Research (Only) Key Performance Indicators ..... 105
  3.4 Conclusions and Research Gaps ..... 108
4 Performance Management: Analysis Approach ..... 113
  4.1 Case Studies ..... 114
    4.1.1 Design and Preparation ..... 114
    4.1.2 Execution ..... 116
    4.1.3 Example Case Study "SAP Research" ..... 118
    4.1.4 Initial Findings and Further Approach ..... 127
  4.2 Performance Clusters ..... 129
    4.2.1 Approach and Definition ..... 129
    4.2.2 Cluster: Technology Transfer ..... 132
    4.2.3 Cluster: Future Business Opportunities ..... 135
    4.2.4 Cluster: Research Portfolio Management ..... 137
    4.2.5 Cluster: Intellectual Property Creation ..... 140
    4.2.6 Cluster: Operational Excellence ..... 142
    4.2.7 Cluster: Talent Pool ..... 143
    4.2.8 Cluster: Image ..... 145
    4.2.9 Cluster: Publications ..... 146
    4.2.10 Cluster: Presence in Scientific Community ..... 148
    4.2.11 Cluster: Collaboration with Academia ..... 149
    4.2.12 Cluster: Collaboration with Partners and Customers ..... 151
    4.2.13 Performance Cluster Overview ..... 152
  4.3 Performance Cluster Analysis ..... 157
  4.4 Goal Decomposition Analysis ..... 158
  4.5 Summary ..... 171
5 Survey Findings ..... 173
  5.1 Introduction ..... 173
  5.2 Development of the Survey ..... 173
  5.3 Sample Design ..... 175
  5.4 Analysis/Description of the Results ..... 177
    5.4.1 General Data: Organizational Context and Research Environment ..... 177
    5.4.2 Performance Measurement Data: Organizational Goals and KPIs ..... 197
  5.5 Conclusion ..... 221
6 Towards a Systematic Performance Management Approach for Industrial Research in the ICT Industry ..... 223
  6.1 Introduction ..... 223
  6.2 Research Performance Management System ..... 224
    6.2.1 Suggested Model for a Research PMgS – Levels and Relations ..... 225
    6.2.2 Recommendations: Process Options to Design a PMgS ..... 230
  6.3 Review of Requirements Against the Created PMgS ..... 235
  6.4 Conclusions ..... 241
7 Conclusions ..... 243
  7.1 Summary of Work ..... 243
  7.2 Explicit Answers to the Research Questions ..... 248
    7.2.1 Research Question 1 ..... 248
    7.2.2 Research Question 2 ..... 250
    7.2.3 Research Question 3 ..... 251
    7.2.4 Research Question 4 ..... 252
  7.3 Limitations ..... 252
  7.4 Future Research ..... 254
  7.5 Concluding Remarks ..... 256
Appendix A Content and Media Sector as Part of the ICT ..... 257
Appendix B Key Performance Indicators ..... 259
Appendix C List of Interviews ..... 297
Appendix D Case Studies ..... 303
  D.1 ABB ..... 303
  D.2 EMC2 ..... 311
  D.3 IBM ..... 321
  D.4 Intel ..... 330
  D.5 Microsoft ..... 340
  D.6 Philips Research ..... 348
  D.7 Deutsche Telekom ..... 360
Appendix E ..... 371
  E.1 Online Questionnaire (English) ..... 371
  E.2 Online Questionnaire (Russian) ..... 403
List of Abbreviations and Acronyms ..... 439
List of Tables ..... 443
List of Figures ..... 447
Bibliography ..... 451
Eidesstattliche Erklärung ..... 461
Chapter 1
Introduction
Abstract This chapter serves as an introduction to the dissertation. It outlines the relevant disciplines that embed the research objectives of this work. The central research question is presented to frame the scope of the work. Furthermore, research questions are formulated and finally the research structure is presented. Keywords Background and problem definition in performance management in industrial research environments • Introduction to performance management of industrial research
Industrial research is often seen as an incubator for new technological knowledge. Companies engage in research in order to gain a competitive advantage over their rivals. Sustainable long-term market success cannot be achieved without the right investment in innovation. Technical progress (inventions) and subsequent innovations are the focal point for growth, both in employment and prosperity.1 It is recognized that new technological knowledge can be gained through the planned, systematic use of scarce resources. Only a small portion of technological progress is based on chance discoveries; the rest is "ordered, financed, produced, and paid for."2 This means that it is an economic question how to most efficiently organize and manage the unit that is in charge of generating inventions within a company. Furthermore, economic managerial analysis of the performance and efficiency of the production of such knowledge is required.3 This work addresses the performance management of industrial research departments within the ICT industry.4
1 Solow (1956), pp. 65–94.
2 Machlup (1959), p. 117.
3 Schätzle (1965).
4 ICT – The Information and Communication Technologies.
An organization's ability to manage its performance, that is, to define, measure, analyze and improve it, is of outstanding importance for its future development. The measurement of operational issues is an essential foundation for assessing past events, which in turn form the starting point for subsequent management decisions. Changes in both the external environment and the inner world of businesses, such as increased global competition and the emphasis on new management concepts like "open innovation", have changed the need for information provision and processing to support management. In order to successfully face progressive globalization and increasingly intense competitive rivalry, an organization's performance needs to be permanently monitored in all relevant areas. A number of proposals for so-called Performance Measurement Systems (PMS) are presented in the literature. A PMS holistically monitors and communicates the objectives of an organization and the degree to which they are achieved.
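To make this idea concrete, the following minimal sketch (our illustration, not a system described in this work; all KPI names and figures are hypothetical) shows the core data a PMS revolves around: an objective with a target value, a measured actual, and the resulting degree of achievement.

# Minimal sketch of the core PMS idea: KPIs with targets, measured
# actuals, and a degree-of-achievement ratio. Illustrative only.
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    target: float
    actual: float

    @property
    def achievement(self) -> float:
        # Degree of achievement as actual/target (1.0 means target met).
        return self.actual / self.target if self.target else 0.0

kpis = [
    KPI("invention disclosures", target=100, actual=120),
    KPI("peer-reviewed publications", target=40, actual=28),
]
for k in kpis:
    print(f"{k.name}: {k.achievement:.0%} of target")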
1.1 Background and Problem Definition
Schumpeter hypothesized that innovation, company size and market concentration are positively related. He also suggested that innovation and economic growth are linked. However, to date, neither hypothesis has been confirmed.5 The pace and extent of technological progress depend not only on research or R&D (the rate of invention), but also on the initial application of new knowledge (the rate of innovation) and on the general spread of new methods (the rate of imitation). It is the company's decision, translated into its strategy and based on its capabilities, how to act in the business environment and what to stress in its business model: invention, innovation or imitation.

A typical problem for growing industrial organizations in the ICT sector is to introduce strategic technology research to ensure the long-term success of the company. During the creation of a corporate research function, the question arises as to how to assess this function in both the short and the long term. As the literature study will show, there is no dedicated work available that investigates, in detail, the particular situation that research organizations face. Moreover, studies typically look at the R&D organization by amalgamating research and development into one function. This aggregation represents a real problem for managing and measuring the performance of the two functions, especially in large companies where they operate separately and their activities are fundamentally different.6 Brown and Svenson analyzed how to measure R&D productivity and concluded: "Research and
5 Caspari (2001).
6 Seiler (1965), Leifer and Triskari (1987), pp. 71–78, Brown and Svenson (1998), pp. 30–35, Karlsson et al. (2004), p. 180.
development clearly perform different functions and produce quite different outputs".7 Assessing performance in industrial R&D is difficult, but it is even more difficult for the research function alone. As Loch and Tapper mention, measurement problems in a research group (as opposed to development) are the more severe.8

One difficulty in assessing performance is that performance management systems often apply inappropriate performance measures: "That what we measure has nothing to do with what we are doing."9 This is also the reason why, in practice, many scientists and engineers in research organizations do not accept, or are negative about, measuring performance. It seems obvious that performance management systems need to be related to the activities that are actually conducted in the research organization.

Another difficulty with the assessment of performance in research is that it is mostly knowledge work that needs to be assessed, and it is often not possible to quantify knowledge. And even if it is possible to express the output/outcome of knowledge work in measuring units, it is extremely difficult to assess its value. For example, one such output could be an invention, which a researcher produces. However, the outcome of that output might only appear years later in the form of a patent. The process of converting the invention (output) into a patent (outcome) is uncertain and long. Typically, the details of such an invention would be passed internally to the Intellectual Property department, which would draft a patent application. This application would then be submitted to a patent office, for example the European Patent Office. After approximately 2–3 years, the patent application will eventually be granted or rejected.

Looking at this from the performance measurement perspective, the production of the invention disclosure itself might only take some weeks or months and so fits in the annual performance measurement cycle of the organization. Ironically, the number of invention disclosures is a very popular key performance indicator (KPI) for industrial research organizations, yet this KPI gives little clue as to the number of patents that will finally be granted to the organization. Of course, the organization could assume a certain acceptance rate for patent applications, but this means that the KPI (number of invention disclosures produced) is only as good as the accuracy of the acceptance rate used. This rate is likely to vary over time, and not all patents are as valuable as others.

Summarizing, we have exemplified some of the problems of performance measurement in industrial research environments: (1) since performance measurement systems are designed for annual evaluation cycles, they normally capture outputs and not outcomes; (2) output KPIs are not necessarily good indicators for
7 Brown and Svenson (1998), p. 34.
8 Loch and Tapper (2002), p. 185.
9 Brent and Pretorius (2009), and statements found in a survey by Kerssens-van Drongelen (2001).
the time-lagged outcomes; (3) in order to predict or estimate expected outcomes, output KPIs need to be combined with additional assumptions.

Building on these issues, a major mismatch in the practice of industrial research departments is concealed by the fact that research goals often target long-term outcomes. However, they are set annually, and the measurement process to assess goal achievement is usually annual as well. Therefore, the KPIs that are used are typically designed to measure outputs rather than outcomes. This is the most fundamental problem for performance measurement in industrial research organizations, and the central concern of this work's main research question: How to assess the performance of an industrial research organization?

This work is a contribution to the "Performance Management of Industrial Research Organizations" and to systematically analyzing the various capabilities of research organizations, their goal-setting, as well as their setting of performance measures.
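To illustrate the output-versus-outcome problem numerically, the short sketch below (our illustration; the disclosure count, filing rate, grant rates and lag are all assumed figures, not data from this work) estimates how many granted patents one year's invention disclosures might eventually yield, and how strongly the estimate depends on the assumed acceptance rate.

# Illustrative sketch: estimating a time-lagged outcome (granted patents)
# from an output KPI (invention disclosures). All rates are assumptions.

def expected_grants(disclosures: int, filing_rate: float, grant_rate: float) -> float:
    # Expected number of patents eventually granted from one year's disclosures.
    return disclosures * filing_rate * grant_rate

disclosures = 120      # output KPI captured in the annual measurement cycle
filing_rate = 0.5      # assumed share of disclosures turned into applications
lag_years = 3          # approximate examination time before grant or rejection

# The same output KPI implies very different outcomes depending on the
# assumed grant rate -- the accuracy problem described above.
for grant_rate in (0.3, 0.5, 0.7):
    outcome = expected_grants(disclosures, filing_rate, grant_rate)
    print(f"grant rate {grant_rate:.0%}: ~{outcome:.0f} patents, roughly {lag_years} years later")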
1.2 Research Questions and Objectives
By subdividing the central research question introduced above, it is the goal of the thesis to understand in more detail:

RQ1: What is the common practice with regard to performance management of industrial applied research organizations? Are there common or similar organizational goals, and are there common or similar key performance indicators?

RQ2: Is there a generic performance management approach (PM approach) that describes the common practice of performance management of industrial applied research organizations? How are goals and KPIs reflected in this approach?

RQ3: How are the peculiarities of industrial applied research [specifically, and the requirements of performance management systems in general] considered in the developed PM approach?

RQ4: Can any recommendations be derived from the suggested approach to advise on the implementation and further development of a performance management system in an industrial applied research organization?

Based on the research questions, the following research objectives can be derived:

• List the goals that companies pursue within industrial applied research departments;
• Create a catalogue of the KPIs that are applied to monitor the performance of the respective goal;
• Examine whether there are industry best practices with regard to monitoring the performance of a respective goal, as well as whether there are dependencies between the KPIs identified before;
• Design and build a performance management system that facilitates performance management for industrial research departments;
• Develop recommendations for practitioners on how to set up performance management in industrial research.
1.3 Dissertation Structure
The dissertation is divided into seven chapters. Each chapter is now introduced. The overall design and structure of the dissertation are summarized in Fig. 1.1.

Chapter 1 serves as an introduction to the dissertation. It outlines the relevant disciplines that embed the research objectives of this work. The central research question is presented to frame the scope of the work. Furthermore, research questions are formulated and finally the research structure is presented.

Chapter 2 introduces basic terms and definitions. First, the nature and characteristics of invention and innovation are examined. The review shows that the commercial success of an innovation does not depend only on the quality of inventions generated in research and/or development, and that both invention and innovation represent components of a process of managing innovation that extends over the entire value chain of a company. Furthermore, the concepts of Innovation Management, Technology Management and R&D Management are distinguished in order to understand in detail the differences between the research and development functions. The following questions are answered: What is meant by research and by development; what are the roles and tasks of the units; what kind of technological knowledge do they produce; what is the purpose of Performance Management; what are its components; and what are industrial research departments?

Chapter 3 investigates the state of the art in the performance management literature. In this context, performance management systems, organizational goals and key performance indicators are examined at three levels of granularity: generic (literature reporting generic approaches covering one or more functions of a company), R&D (focusing on the specifics of the R&D function), and the research function only. The insights in this chapter enable us to formulate the research gaps and refine the primary question with four research sub-questions.

Chapter 4 provides an overview of the methodology and introduces the step-by-step development of a performance management system via a series of in-depth case studies. The analysis approach aims at developing a model for performance management systems built on the basic components found in practice, such as organizational department goals and individual KPIs. The model is extended with additional components that allow us to compare the various performance management systems and to identify relationships between these components. Performance clusters (i.e. groups of KPIs), which represent one significant component of our developed model, are introduced. Furthermore, the goals are analyzed for the similarity of their sub-components. Altogether, Chap. 4 describes a preliminary approach on how to deal with the individual elements of performance management, that is, with KPIs, KPI classes, performance clusters and finally organizational department goals.
Fig. 1.1 Outline of the dissertation (overview of Chapters 1–7 and Appendices A–E with their main sections)
Chapter 5 presents the findings from the large quantitative survey, which tests the findings from the literature review and case studies. The qualitative case studies, as well as the results of the survey and their comparison, form the core of this chapter. The evaluation reveals that industrial research organizations seem to have a set of shared organizational goals and a set of shared KPIs, which can be summarized in KPI classes. In sum, the survey results endorse the relationships between KPIs and KPI classes, performance clusters and organizational research goals. This chapter is therefore dedicated to the examination of the qualitative results from the quantitative perspective.

Chapter 6 synthesizes all our findings into a holistic model and suggests a new systematic way to assess the organizational performance of an industrial research department. This chapter responds to the overall research question and is dedicated to our performance management model. Its five levels, and their relationships, are introduced step by step. Furthermore, recommendations on how to design a Performance Management System (PMgS) are provided, which take into consideration the four elements of performance management: planning, measurement, analysis and review/improvement. Lastly, the general requirements for the design of a PMgS that take into account the specific peculiarities of industrial research organizations are revisited and our PMgS is assessed against them.

Chapter 7 is the final chapter. It clarifies the findings of the dissertation and reflects on the research questions. The chapter first summarizes the key results of the work and then provides explicit answers to the research question and sub-questions. Furthermore, it discusses the limitations of our work and suggests areas for future research.
Chapter 2
Performance Management
Abstract This chapter introduces basic terms and definitions. First, the nature and characteristics of invention and innovation are examined. The review shows that the commercial success of an innovation does not depend only on the quality of inventions generated in research and/or development, and that both invention and innovation represent components of a process of managing innovation that extends over the entire value chain of a company. Furthermore, the concepts of Innovation Management, Technology Management and R&D Management are distinguished in order to understand in detail the differences between the research and development functions. The following questions are answered: What is meant by research and by development; what are the roles and tasks of the units; what kind of technological knowledge do they produce; what is the purpose of Performance Management; what are its components; and what are industrial research departments?

Keywords Goals and goal setting • Innovation management • Invention versus innovation • Key performance indicators (KPI) • Measures, metrics, and indicators • Performance management and measurement • Research and development (R&D)
2.1 Introduction and Fundamentals
In this chapter, the formal foundations of our study will be developed. The discipline, nature, and area of our research interest will be outlined and the main definitions in the field of performance management will be introduced.
2.1.1 Invention and Innovation
Private companies and the public sector1 are seeking ways to increase their profits or stimulate the economy as a whole. Inventions and innovation have become key to improved global competitiveness in many business sectors and are seen as a means of achieving economic wealth. To be able to understand the dynamics of these phenomena, the definitions of innovation and invention will be analyzed, and the differences elaborated upon. Understanding the difference between the two phenomena helps us to comprehend the nature of each individually. This in turn helps us monitor and identify their emergence and to better match them to the place (environment) and time (phase) when they occur. The ability to do this is important because, in this thesis, we will separate R&D, which is commonly seen as one single function, into its two departments, research and development, and examine only the former. The major goal of a research department is to investigate future trends and to foster innovation within a company. In general, the output of research departments is represented by inventions, and it is of utmost importance for companies to transform their inventions into innovations. There are a variety of definitions of the term "innovation" in the literature, but a generally accepted standard definition does not exist.2 From the Latin translation of the original term innovatio, the elements of originality, newness and novelty can be derived. The different approaches to defining the concept of innovation all refer to the change and the uniqueness of a state or process.3 Schumpeter,4 who coined the term 'innovation' in the field of economics in 1911, gives five pragmatic examples5 of innovations:
1 There are many national, European and international studies about the nature of inventions and innovations by, e.g., OECD, NIST and various private consulting companies. Both innovations and inventions are seen as success factors for technological change, which strongly influences economic growth. Innovative economies typically exhibit the following characteristics: higher rates of economic growth, greater growth in employment, higher productivity, greater investment in people and capital and greater capacity for the economy to attract and retain highly-qualified people.
2 Brockhoff (1994), pp. 27–28, Specht and Beckmann (1996), p. 15, Hauschildt (1997), p. 3, Geschka (1983), p. 823.
3 Hauschildt (1997), p. 25, Pleschak and Sabisch (1996), p. 1.
4 Translated from Schumpeter (1964): ". . . so bedeutet Innovation 'die neue und andersartige Kombination dieser (vorhandenen) Dinge und Kräfte' – the 'new and different combination of existing things and forces'".
5 Translated from Schumpeter (1964): "Fünf Fälle der Innovation: (1) Herstellung eines neuen Produkts oder einer neuen Produktqualität; (2) Einführung einer neuen, in einem Industriezweig noch unbekannten Produktionsmethode, die jedoch nicht auf einer neuen Erfindung beruhen muss; (3) Erschließung eines neuen Absatzmarktes, auf dem ein Industriezweig noch nicht eingeführt war, egal, ob dieser Markt schon vorher existierte oder nicht; (4) Erschließung einer neuen Bezugsquelle von Rohstoffen oder Halbfabrikaten; (5) Durchführung einer
• Production of a new product or new product quality;
• Introduction of a new production method in an industry sector, which, however, does not need to be based on a new invention;
• Development of a new sales market for an industry sector, regardless of whether this market previously existed or not;
• Tapping into a new source of raw materials or semi-finished products;
• Implementation of a re-organization, such as the establishment or breaking-up of a monopoly.

Therefore, from our view, innovation in business can be seen as the implementation of new technical, business-related, organizational, or societal solutions in companies. Innovation aims to attain company goals in new ways. An innovation can therefore be considered the logical successor to an invention, which we define, for the context of this thesis, as the first technical realization of a solution to an existing problem as a result of R&D activities. An invention becomes an innovation once it is introduced to the market or is used in the production process. The commercial success of an innovation requires scientific, technological, organizational, financial and commercial activities.

The term "innovation" (Table 2.1) implies the creation of something new. In this regard it is often equated with the term "invention". Fagerberg (2003) states: "An important distinction is normally made between invention and innovation. Invention is the first occurrence of an idea for a new product or process while innovation is the first attempt to carry it out in practice".6 He acknowledges that the distinction is sometimes unclear because in some cases (e.g. biotechnology) innovation and invention are closely linked. An important characteristic, however, is the considerable time-lag between the two. He specifies other features, such as the different requirements for working out ideas and carrying them out in practice, along with different locations: inventions may be carried out anywhere, while innovations occur mostly in firms in the commercial sphere. Furthermore, he lists other requirements, such as production knowledge, skills and facilities, market knowledge, a well-functioning distribution system, and sufficient financial resources, which a firm has to combine in order to turn an invention into an innovation.
Neuorganisation wie Schaffung einer Monopolstellung oder Durchbrechen einer Monopolstellung".
6 Fagerberg (2004), p. 4.
Table 2.1 Definitions and comparisons of the terms "invention" and "innovation"

Fagerberg (2003) (a)
  Invention: "Invention is the first occurrence of an idea for a new product or process"
  Innovation: "Innovation is the first commercialization of the idea"

Norman (1994) (b)
  Invention: "An invention is a new manmade device or process. A new device which qualifies as an invention may take such forms as a new physical product, a new biological life form, or a new piece of software. A process, on the other hand, is a chemical, physical, or biological chain of events that produces a product or service"
  Innovation: "Innovation . . . (is) a better way of doing things. Individuals and institutions innovate in all their goal-directed behavior, which is defined as an effort of an individual or an institution at achieving performance as measured by a criterion, whether objective or subjective. With respect to such goal-directed behavior, a formal definition of an innovation is the creation or implementation of a new alternative which achieves higher performance as measured by the respective criterion"

Specht and Beckmann (1996) (c)
  Invention: "Invention means as much as a discovery/creation of something new. This is the technical implementation of new scientific findings, or new combinations of existing scientific findings, or a combination of the two"
  Innovation: "The term innovation can be interpreted in a narrow and broad sense. In the narrow sense it affects the market launch of a new product or the start of a new production process. In the broad sense it is the entire innovation process of invention and innovation" (d)

(a) Fagerberg (2004), p. 4
(b) Norman (1994), pp. 4–5
(c) Specht and Beckmann (1996), p. 15
(d) Translated from Specht and Beckmann (1996), p. 15: "Der Begriff Innovation kann eng und weit interpretiert werden. Im engeren Sinne betrifft er die Markteinführung eines neuen Produkts oder das Anfahren eines neuen Produktionsprozesses. Im weiteren Sinne wird unter Innovation der gesamte Prozess der Invention und Innovation i.e.S. verstanden. Wenn von Innovationsprozess gesprochen wird, so ist meist Innovation i.w.S. gemeint"
Table 2.2 summarizes the main characteristics that distinguish "invention" and "innovation". To summarize and highlight the difference between the definitions of invention and innovation,7 one can draw the following conclusions:

I-1. An innovation does not directly require an invention;
I-2. If it does occur, an invention will happen before an innovation;
I-3. An invention becomes an innovation only once it is successfully deployed in a commercial context;
I-4. Several inventions can contribute to a single innovation.

To illustrate the difference between invention and innovation, three examples are given. A current example of a successful innovation is the iPod. The iPod is a
7 Gurel (2007a).
Table 2.2 The main differences between "invention" and "innovation"

Order: Invention is the predecessor (occurs first, in the form of an idea); innovation is the successor (first attempt to implement the idea)
Nature: Invention often concerns a single product or process; innovation often involves a combination of products and processes
Time-lag: 5–20+ years between the two
Location: Inventions may occur anywhere (universities, research organizations, R&D departments); innovations most likely occur in commercial firms
Skills: Inventor skills are narrow, deep, domain-specific; innovator skills are broad, entrepreneurial
portable media player designed by Apple that was launched in 2001. The iPod, as an MP3 player, was not invented by Apple, and such devices previously existed. It is, however, the mixture of the iPod's design, the functionality of the iTunes software, and the iTunes store that in combination made the iPod an innovation.8 This example illustrates that an innovation may be based on several existing (quite old) inventions: in this case it was the MP3 format from Fraunhofer in 1991, the portable audio device from Kane Kramer in 1979, and the hard drive as a storage technology that were incorporated into a single device.

Conversely, there are several examples of great inventions where the inventors failed to successfully deploy their invention in a commercial context (cf. I-3). In 1947, a group of scientists at the AT&T laboratories created the world's first transistor. The invention was patented, but the organization was not able to find an application for the new device. They did an outstanding job with the invention but failed to develop the innovation. Precisely for this reason, in 1952, AT&T decided to license out the transistor. For a mere $25,000, companies like Texas Instruments, Sony, and IBM acquired a technology that would produce enormous revenue in the future.9

Xerox is another company to have witnessed their inventions being turned into innovations by other firms. Many of the achievements of modern computer technology can be traced back to the famous Palo Alto Research Center (PARC). At PARC, the first personal computer was developed (years before Apple or IBM), in addition to the first graphical-oriented monitor, word-processing software, a workstation, a laser printer, a local area network, a hand-held mouse, the concept of the laptop (Alan Kay's Dynabook), one of the first computer games, and many more. With the exception of the laser printer, Xerox management did not recognize the potential of PARC's inventions. The success was reserved for other companies like Apple and Microsoft, and for employees leaving PARC to found their own companies to market their inventions. The most prominent examples are Bob Metcalfe, founder of 3Com, who marketed Ethernet, and John Warnock, founder of Adobe, who marketed his invention "Interpress" under the name "Postscript". In the end, Xerox profited from almost none of these breakthrough inventions.10
8 Gurel (2007b).
9 Cohen et al. (2000).
10 Smith and Alexander (1999).
Examples of this kind from industry were important catalysts for the change in management thinking and brought the importance of both invention and innovation to the attention of managers. In this way, innovation management gained in importance in modern management as described in the following section.
2.1.2 Innovation Management
From our examination of invention and innovation, it seems obvious that the commercial success of an innovation does not solely depend on the quality of the invention in research and/or development. Rather, the effective management of all components of the complicated process of managing innovation, which embraces all functions of the entire company, plays a role in determining the success of the innovation. The interdisciplinary and multifunctional management of innovation focuses on improving the competitiveness and effectiveness of firms.11 Innovation management covers all of the tasks that are required to create technology know-how and to transform this know-how into marketable innovations.12 Innovation management is often a part of corporate strategy and may refer to products, services, manufacturing processes, organizational structures, management processes, etc. In addition, the development and commercialization of non-technological change processes is within the remit of innovation management. In summary, innovation management deals with the design of processes and functions that are tailored to the creation and commercialization of innovative things.13

As mentioned in the previous chapter when defining the terms invention and innovation, the contemporary German literature on innovation management suggests a distinction between innovation management in a broader sense and in a narrower sense (see Table 2.1). While innovation management in a narrower sense deals with the implementation and diffusion of innovation, innovation management in the broader sense embraces the whole process from the early fuzzy front end14 phases to the subsequent recycling and withdrawal from the market. Products are often initiated within research and development departments, especially in an industrial
11 Tidd et al. (2005).
12 Hauschildt (1997), pp. 25–27.
13 Specht and Beckmann (1996), p. 15, Uhlmann (1978), p. 1, Little (1997), p. 1.
14 The term "Fuzzy Front End (FFE)" is used in various sources to describe the early phases of the innovation process; examples are: Khurana and Rosenthal (1998): ". . . we define the front end to include product strategy formulation and communication, opportunity identification and assessment, idea generation, product definition, project planning, and executive reviews". Kim and Wilemon (2002): "Thus, we define the FFE as the period between when an opportunity is first considered and when an idea is judged ready for development." Reinertsen (1994): "We call the time between when you could have started development and when you actually do, the 'fuzzy front-end'."
Fig. 2.1 Innovation management, technology management and R&D management15 (Source: Diagram based on Brockhoff 1994 and Betz 1998)
environment. Therefore, one of the tasks of innovation management in the broader sense, and the specific task of R&D management, is to determine the need for technology know-how and to foster its creation. Figure 2.1 illustrates the building blocks of the disciplines of Innovation Management, Technology Management and R&D Management, which the literature often addresses separately. Following these considerations, the next chapter focuses on the early phases of the innovation process and discusses the (rather blurred) borders between research and development. This leads us to shape the term "Industrial Research".
2.1.3 Research and Development
Research and Development (R&D) is a widely-used term. However, its content is ambiguous and the use of the term itself is not uniform. In everyday language, especially within the industrial context, the term "R&D" can be applied in two ways: (1) R&D is an organizational unit; (2) R&D describes a set of activities16 that are conducted in the early phases of the
15 Adapted from Brockhoff (1994), p. 51, Betz (1998), p. 27.
16 O'Donnell and Duffy (2005) consider an activity to be a fundamental element of a process, p. 18.
innovation process. These two interpretations are related in the sense that the organizational unit in general conducts part of the activities described by R&D.17 The spectrum of activities covered by the term "R&D" according to the general literature is much broader than the specific R&D activities conducted by a specific R&D department. Therefore, two different R&D organizations can only be considered comparable if their corresponding sets of R&D activities are fully juxtaposed. For the purpose of this thesis, we use the term "R&D" whenever we refer to activities, and the term "R&D department" or "R&D organization" when referring to an organizational unit.

We will now take a closer look at definitions of R&D in the literature. According to Specht and Beckmann, R&D is defined as "activities and processes that lead to new tangible and/or intangible artifacts".18 Since it is essential for this thesis to separate research from development,19 the word "new" in this definition requires further attention. Brockhoff as well as Specht and Beckmann discuss two dimensions of "newness"20:

1. Content dimension: what is new? Two aspects can be examined for the purpose of diagnosing newness: the newness of the very fact and the degree of newness.
2. Subjective dimension: new for whom? The subjective dimension considers the opinion of four possible subjects: (1) any individual or expert, (2) a manager (from a business-oriented point of view), (3) a manager (from the eco-system perspective, including customers and partners), and (4) groups of experts (often expressed through national patent offices).

Hauschildt adds two extra dimensions21 for the whole innovation process:

3. Process-related dimension: where (in the process) does the "new" start and where does it end? Since Hauschildt refers to the overall innovation process, this dimension is of less relevance to us.
4. Normative dimension: does "new" equal "successful"? The normative dimension asks whether the "new" is also successful. Some literature suggests using the term "innovation" for products or processes which allow for an "improvement" to the status quo. Along with this, the goal system of a user is referred to and, as a consequence, the degree of goal
17 This view is also shared by Porter (1985) in his work on corporate strategy. He considers organizations as a bundle of activities to deliver products and services. In every company, activities are therefore the means by which the work gets done and performance is accomplished.
18 Specht and Beckmann (1996), p. 16.
19 For the purposes of assessing the performance of industrial research departments, the question of separation between research and especially development will be further discussed in Sect. 3.1.5.
20 Brockhoff (1994), p. 35, Specht and Beckmann (1996), p. 16.
21 Hauschildt (1997), pp. 7–23.
achievement after deploying the innovation should be higher than prior to the innovation. Hauschildt discusses that it is assumed that a goal system exists in which the goals can be articulated in a way that (1) they can be recognized by third parties, and (2) they can be generalized. In addition, it is imperative that (3) from the extent of the "goal achievement", an assessment of the "achieved improvement" can be derived. Furthermore, it is assumed in this context that (4) it is possible to agree upon a uniform success measure and that all evaluators arrive at a similar assessment in terms of such a success measure. Hauschildt discusses the consequences of these four assumptions in detail and concludes that it is not possible to use this dimension to characterize innovation. The relevance of this dimension will be addressed later in the context of performance management (see Sect. 3.2).

Brockhoff's two dimensions are supported by the Frascati Manual22 (FM). According to the FM, research and experimental development, abbreviated as "R&ED",23 comprise creative work undertaken on a systematic basis in order to both increase the amount of knowledge, including knowledge of man, culture and society, and to extend the use of this knowledge to devise new applications. "New" in the context of the FM is the ". . . appreciable element of novelty and the resolution of scientific and/or technological uncertainty, i.e., when the solution to a problem is not readily apparent to someone familiar with the basic stock of common knowledge and techniques for the area concerned".24 Within the first part of this definition, the content dimension is reflected (appreciable element . . . technological uncertainty), and the subjective dimension is substantiated in the second part of the clause (someone familiar . . . the area concerned).

Using this definition, the FM makes finer distinctions and identifies within the term "R&ED" the following three activities: basic research, applied research and experimental development, as shown in Table 2.3. The FM does not provide a name for phases comprising activities beyond experimental development. We therefore introduce the term product development, which comprises activities that are necessary after ED but before the beginning of the production phase.
22 The Frascati Manual is a Proposed Standard Practice for Surveys on Research and Experimental Development, published in 2002 by the Organization for Economic Co-operation and Development (OECD), dealing with the measurement of scientific and technological activities. The Frascati Manual is based on experience gained from collecting R&D statistics in OECD member countries. It has been developed over the last 40 years on the concept of science and technology indicators and developed a series of methodological manuals known as the "Frascati Family", which includes manuals on: R&D (Frascati Manual), innovation (Oslo Manual), human resources (Canberra Manual), technological balance of payments and patents as science and technology indicators.
23 The Frascati Manual's definition of development (the "D" in R&D) does not include product development: instead it only includes experimental development. In order to distinguish between the two definitions of development, R&ED is used wherever a reference to the Frascati Manual is made.
24 OECD (2002), p. 34, clause 84.
Table 2.3 Definitions of R&ED in the Frascati Manuala

Basic research (BR): Experimental or theoretical work undertaken primarily to acquire new knowledge of the underlying foundation of phenomena and observable facts, without any particular application or use in view. Focus: the creation of knowledge in general.

Applied research (AR): Also original investigation undertaken in order to acquire new knowledge. It is, however, directed primarily towards a specific practical aim or objective. Focus: the creation of marketable knowledge.

Experimental development (ED): Systematic work, drawing on knowledge gained from research and practical experience, that is directed to produce new materials, products and devices; to install new processes, systems and services; or to substantially improve those already produced or installed. Focus: the development or improvement of new products, processes, systems or services.

Beyond experimental developmentb: The FM does not provide detail or definitions of activities beyond ED. However, the FM does explore activities that are on the borderline of or beyond ED. Examples of these are:
– Prototypes: include in R&D, as long as the primary objective is to make further improvements.
– Pilot plants: include in R&D, as long as the primary purpose is R&D.
– Industrial design and drawing: divide; include design required during R&D, exclude design for the production process.
– Industrial engineering and tooling up: divide; include “feedback” R&D and tooling up/industrial engineering associated with the development of new products and new processes, exclude for production processes.
– Trial production: divide; include if production implies full-scale testing and subsequent further design and engineering, exclude all other associated activities.
– Routine tests: exclude, even if undertaken by R&D staff.
– Data collection: exclude, except when an integral part of R&D.
– Public inspection: exclude (control, enforcement of standards, regulations).

Source: OECD (2002)
a OECD (2002), Frascati Manual, p. 41, clause 110
b The FM does not provide a name here; it only discusses activities beyond ED
The FM uses a strictly functional approach for these definitions, i.e., “the nature of the R&D activity of the performing unit, rather than its principal (economic) activity, is examined”.25 In this context, principal activity refers to the organizational role of the performing unit within the innovation lifecycle. Consequently, the FM covers both “formal” R&ED in R&D units and “informal or occasional” R&ED in other units.26 This is in line with the approach that we have chosen for the term “R&D”.

A logical temporal sequence for research (basic and applied) and experimental development is assumed. The definition of ED implies the extrapolation of research (basic and applied) results within this phase: “a . . . work, drawing on existing knowledge gained from research. . .”. The FM itself acknowledges that “there are many conceptual and operational problems associated with these categories”.27 The phases of knowledge gain are intertwined, partially influence each other, and cannot be considered independently.28 The distinction between these phases is criticized because of their fuzzy demarcation. The Illinois Institute of Technology Research explicitly illustrated this fact in an empirical study on the overlapping of activities between phases by analyzing five important innovations.29 We would like to point out that this phenomenon is also apparent for the later product development and production phases.30 Nonetheless, Brockhoff31 states that the problem in distinguishing the terms (i.e., basic research, applied research and experimental development) leads to a more applicable structure either by eliminating borders or by an even more subtle differentiation, i.e., by creating new borders. Although this approach sounds promising at first glance, the context-based classification of activities stated in the FM provides a counter-argument to its feasibility.

Since this thesis focuses on research in an industrial environment, it is necessary to put the term ‘industrial research’ into context.32
25 OECD (2002), Frascati Manual, p. 76, clause 236.
26 OECD (2002), Frascati Manual, p. 17, clause 14.
27 OECD (2002), Frascati Manual, p. 79, clause 251; note also that the FM acknowledges that possibly the greatest source of error in measuring R&D is the difficulty of locating the cut-off point between experimental development and the related activities required to realize an innovation.
28 Schätzle (1965), p. 19, Hauber (2002), p. 24.
29 Illinois Institute of Technology Research (1968), p. 15, Brockhoff (1994), p. 41.
30 Brockhoff (1994), p. 40.
31 Brockhoff (1994), p. 39.
32 Depending on the business sector, discipline and individual organizational setup of companies, there are often overlaps of different phases that are affiliated to the research function, and there are also different names for these phases. A widely-used term in the literature that describes the phase beginning somewhere within industrial research, sometimes starting before the “grey zone” and sometimes after, and continuing with product development, is “New Product Development (NPD)”. Another name is “Advanced Development (AD)”, or just “Development (D)”. These phases are beyond the scope of this work; the separation from development, however, will be discussed in detail.
The role and dynamics of industrial research are acknowledged, for example, by the European Commission as one of the important facilitating forces behind innovation. The European Commission carefully monitors a variety of research activities in the European Union and pays special attention to statistics and figures such as research investment33 by top European firms.

As discussed above, the borders between the phases are blurred; in terms of this thesis, we must focus our attention on the borders of industrial research in order to identify its activities. This problem has often been discussed in the literature. In order to better understand the core questions – “Where does industrial research start and end?” and “Where does development start?” – we review the literature on the definition of the term “industrial research” in the following sections.

The term ‘industrial research’ became popular in the American literature in the 1960s and 1970s. Chorafas identifies industrial laboratories as one of seven types of research establishment. Noting the difficulty of an exact separation of pure research from applied research and development, he says34: pure research consists of the exploration of something previously unknown in order to scientifically formulate physical or technical singularities; applied research consists of the transformation of the discoveries of pure research into practical products. Chorafas goes on to describe development as the continuous improvement of these products up to the highest degree of perfection and the determination of the optimal production run. He argues that the difference between industrial research and other research establishments is manifested in the intent of the work rather than the working methods. He also notes that industrial research departments are established to tackle certain research areas and to seek solutions that are industrially exploitable.35

Schätzle takes up these considerations and states that exponents of business economics and engineers who manage research activities within companies favor the term ‘industrial research’ over ‘research and development’. He concludes that the term ‘industrial research’ is identical to the term R&D, provided that the research is aimed at technology research and the activities are conducted by industrial companies.36
33 The European Commission collects and analyzes policy-relevant information on corporate R&D through its ‘industrial research and innovation monitoring and analysis activities’ (IRIM) at the Joint Research Centre’s Institute for Prospective Technological Studies (JRC-IPTS), in co-operation with DG Research. The EU Industrial R&D Investment Scoreboard, the EU Survey on Business Trends in R&D, the Digest of Industrial R&D and the Industrial R&D Economic and Policy Analysis are some of the products of this work (for more information on these reports see http://iri.jrc.ec.europa.eu). EC-JRC (2008), p. 5.
34 In the German original: “Reine Forschung bestehe in der Erforschung des bisher Unbekannten, um entdeckte physikalische oder technische Eigenheiten im naturwissenschaftlichen Sinne zu formulieren. Angewandte Forschung dagegen bestehe in der Umgestaltung der Entdeckungen der reinen Forschung in brauchbare Produkte. Ihr folge die Entwicklung mit einer stetigen Verbesserung dieser Produkte bis zu einem hohen Grad der Vollkommenheit und der Bestimmung des günstigsten Herstellungsganges.”
35 Chorafas (1963), p. 19.
36 Schätzle (1965), pp. 11–12.
Bruggmann, for his part, defines industrial research in a narrow sense as technology research, and in a broad sense as also including business-oriented research related to technology research.37 The European Union provides a more precise definition of industrial research: the planned research or critical investigation aimed at the acquisition of new knowledge and skills for developing new products, processes or services, or for bringing about a significant improvement in existing products, processes or services. It comprises the creation of components of complex systems, which is necessary for the industrial research, notably for generic technology validation, to the exclusion of prototypes as covered by “experimental development”.

These definitions of industrial research are quite inconsistent: they range from basic research to product development in the broadest definition, down to applied research only in the narrowest. Additionally, the implied sequence and separation,38 as defined, for example, by the FM, rarely exist in practice; it is generally agreed that the borders between the phases are rather blurred and somewhat overlapping. The execution of these three types of R&D can happen within the same department by essentially the same staff.39 Hence, mapping individual activities to a dedicated phase can be a very difficult task and depends strongly on the context in which the corresponding activities are conducted.40 Furthermore, recent trends such as open innovation and shortened innovation life cycles confirm this observation. Figure 2.2 summarizes the discussion above and integrates the terms ‘research and development’ (R&D), R&ED according to the FM, and our view of industrial research.
Fig. 2.2 Scope of activities in industrial research (Source: The figure was derived from the literature review by the author)
37 Bruggmann (1957), p. 4f.
38 Refer to Schätzle (1965), p. 21, OECD (2002), pp. 77–79, clauses 240–250.
39 OECD (2002), Frascati Manual, p. 79, clause 251.
40 The FM discusses this in terms of project contexts, OECD (2002), p. 34, clause 85.
Our view, which is that industrial research in the narrower sense encompasses applied research and experimental development, is based on the initial ideas of the FM, which we subsequently validated with the case studies (cf. Appendix D). Due to the blurring of the different phases, industrial research in the broader sense may, however, also cover activities starting at the end of basic research and ending within the product development phase.41 In the remainder of this thesis, we will always use the term ‘industrial research’ in its broader sense.

As a final remark in this section, we would like to point out that industrial research sometimes tends to cultivate self-perpetuating dynamics within an organization. This is due to the very different nature of a research department compared to other company departments. A research department that conducts publicly funded research projects, for example, has to nourish its own eco-system. This eco-system, according to Beck and Völker, includes external groups such as academia, industrial partners, political bodies and the media, as well as internal units: foremost the development department, but also the communication and marketing departments, top management, etc.42
2.2 Performance Management – Basic Terms and Definitions

2.2.1 Performance
The subject of this study is performance measurement; therefore, the term ‘performance’ must first be defined. A review of the relevant literature shows that no uniform definition of the term ‘performance’ exists. Management literature, in particular, offers many proposals for how to measure performance without precisely defining it first.43 Originally, the term “performance” meant a play or piece of music; according to Wettstein’s44 citing of Andersen and Fagerhaug, “. . . performance is believed to have originated in the fifteenth century to mean a play or exhibition of some type”. The Oxford English Dictionary defines performance as: “Performance. The action of performing, or something performed. . . The carrying out of a command, duty, purpose, promise, etc.; execution, discharge, fulfillment. Often antithetical to promise. . . The accomplishment, execution, carrying out, working out of anything ordered or undertaken; the doing of any action or work; working, action (personal
41 Product development comprises activities that are beyond ED activities. However, they are necessary ahead of the production phase.
42 Beck and Völker (2009), p. 34.
43 Krause (2005), p. 17.
44 Wettstein (2002), p. 15.
or mechanical); spec. the capabilities of a machine or device, now esp. those of a motor vehicle or aircraft measured under test and expressed in a specification. . . The observable or measurable behaviour of a person or animal in a particular, usu. experimental, situation. . . The action of performing a ceremony, play, part in a play, piece of music, etc. . . .”

The dictionary definition shows that the term has kept its original meaning but has gained additional ones. “Accomplishment”, “efficiency”, “capability”, and “satisfaction” are listed as synonyms. The definitions encompass many different aspects; this is reinforced by the often-cited statement of Meyer and Gupta that “there is a massive disagreement as to what performance is and that the proliferation of performance measures has led to the paradox of performance, i.e. that organizational control is maintained by not knowing exactly what performance is”.45

In the performance measurement literature we encountered many different definitions. Different fields use different definitions in different contexts: production management, for example, accentuates the activity; the organizational context focuses on fast and cost-optimal processes; economics sees performance as productivity; business studies often translate performance into monetary value; management accounting sees performance as the output of a company in financial terms; change management defines performance as generating results and emphasizes stakeholders, e.g., shareholders, customers and personnel. Table 2.4 shows some relevant definitions, ordered by publication date.

Performance is not an absolute but a relative measure of success. Hauber46 reports that performance can be assessed against: (a) set objectives (planned/actual comparison, planned/will-be comparison), (b) other defined periods (intertemporal comparison), and (c) an object of comparison (competitive comparison/benchmarking). A comprehensive overview of the different facets of the term “performance” can be found in the work of Krause.47

All the definitions above share one common characteristic: they all relate to two terms, effectiveness and efficiency. These two terms are well defined in the literature, the most common citation being Drucker’s: “Effectiveness is the foundation of success – efficiency is the minimum condition for survival after success has been achieved. Efficiency is concerned with doing things right, effectiveness is doing the right things”.48 In colloquial language, however, these terms are misused as synonyms for profitability or goal-oriented behaviour. The definitions of efficiency and effectiveness implicitly presume the existence of a pre-defined goal, as both can only be evaluated against a goal. As a consequence, all definitions examined above are similar with regard to the existence of
45 Meyer and Gupta (1994), p. 309.
46 Hauber (2002), p. 54.
47 Krause (2005), pp. 17–22.
48 Drucker (1974), p. 45.
Table 2.4 Definitions of the term “performance”

Venkatraman and Ramanujam (1986): “Performance is the time test of any strategy”a

Cordero (1989): “Effectiveness (i.e. measuring output to determine if they help accomplish objectives)”; “Efficiency (i.e. measuring resources to determine whether minimum amounts are used in the production of these outputs)”b

Lebas (1995): “Performance is about deploying and managing well the components of the causal model that leads to the timely attainment of stated objectives within constraints specific to the firm and to the situation”c

Neely et al. (1995): “Efficiency and effectiveness of purposeful action”d

Rolstadas (1998): “A complex interrelationship between seven performance criteria: effectiveness, efficiency, quality, productivity, quality of work life, innovation, profitability/budget-ability”e

Dwight (1999): “The level to which a goal is attained”f

Hoffmann (1999): “The term ‘performance’ describes an evaluated contribution to the attainment of organizational goals. This contribution can be generated by individuals and groups of employees within the organization, as well as by external groups, e.g., suppliers”g

Andersen and Fagerhaug (2002): “We believe it is sufficient to have reached a point where performance has replaced productivity and is generally accepted to cover a wide range of aspects of an organization – from the old productivity to the ability to innovate, to attract the best employees, to maintain an environmentally sound outfit, or to conduct business in an ethical manner”h

Grüning (2002): Performance is understood as the ability of a company to achieve goals, i.e. meet expectations, and is therefore influenced by results in a wider sense,i but also by the corresponding goal settingj

Hauber (2002): “The term ‘performance’ describes the contribution of specific systems (organizational units of differing sizes, employees, and processes) to attain and validate the goals of a company”k

Wettstein (2002): “Performance can be understood as the degree of stakeholder satisfaction”l

EFQM (2003): “Performance is the level of attainment achieved by an individual, team, organization or process”m

Krause (2005): “Performance refers to the degree of the achievement of objectives or the potentially-possible accomplishment regarding the important characteristics of an organization for the relevant stakeholders. Performance is therefore principally specified through a multidimensional set of criteria. The source of the performance is the actions of players in the business processes”n

a Venkatraman and Ramanujam (1986), p. 802
b Cordero (1989), p. 185; note that Cordero hypothesizes that overall performance is a function of both technical performance and commercial performance
c Lebas (1995), p. 29
d Neely et al. (1995), pp. 80–116
e Rolstadas (1998), pp. 989–999
f Dwight (1999), pp. 258–275
g Translated from Hoffmann (1999), p. 33: “Unter Performance/Leistung wird der bewertete Beitrag zur Erreichung der Ziele einer Organisation verstanden. Dieser Beitrag kann von Individuen und Gruppen von Mitarbeitern innerhalb der Organisation sowie von externen Gruppen (z.B. Lieferanten) erbracht werden”
h Andersen and Fagerhaug (2002), cited from Wettstein (2002), p. 17
i The formulation “in a wider sense” emphasizes the fact that “results” do not only refer to periodical revenue figures
j Translated from Grüning (2002), p. 5: “Performance wird hier als die Fähigkeit eines Unternehmens verstanden, Ziele zu erreichen, also Erwartungen zu erfüllen und ist somit sowohl von Ergebnissen im weiteren Sinne, wie aber ebenso durch die entsprechende Zielstellung beeinflusst”
k Translated from Hauber (2002), p. 54: “Unter Performance wird der Beitrag spezifischer Systeme (Organisationseinheiten unterschiedlicher Größe, Mitarbeiter, Prozesse) verstanden, die Ziele des Unternehmens zu erreichen und zu überprüfen”
l Translated from Wettstein (2002), p. 17: “Performance kann aufgefasst werden als Grad der Zufriedenheit der relevanten Anspruchsgruppen”
m EFQM (2003)
n Translated from Krause (2005), pp. 17–21: “Performance bezeichnet den Grad der Zielerreichung oder der potenziell möglichen Leistung bezüglich der für die relevanten Stakeholder wichtigen Merkmale einer Organisation. Performance wird deshalb erst durch ein multidimensionales Set von Kriterien präzisiert. Die Quelle der Performance sind die Handlungen der Akteure in den Geschäftsprozessen”
one or several goals of which the degree of attainment can be determined. Grüning, for example, defines performance as the ability of a company to achieve its goals (cf. Fig. 2.3): performance depends, on the one hand, on the results (over- or under-performance) and, on the other hand, on the goal setting.49
Fig. 2.3 Performance as goal attainment (Source: Grüning 2002)
Effectiveness and efficiency in this context can therefore be understood in an abstract sense as performance measures that need to be appropriately quantified to evaluate goal attainment: effectiveness as an indicator of the degree of goal attainment, and efficiency as an indicator of the resources that were consumed to reach that level of achievement. For an overall evaluation of performance, the relative importance of each aspect should be appropriately considered. In this thesis, the term “performance” is used as the level/degree of goal achievement of an organization/department rather than of individuals. Individual work performance is mainly addressed in the area of applied psychology.
49 Grüning (2002), p. 5.
2.2.2 Goals and Goal Setting
The way in which the term “performance” has been defined in the previous section immediately raises the following question: what are “goals” and/or “organizational goals”? In organizations, a number of individuals simultaneously work on different activities with different, or at least slightly different, interests. In order to bundle these interests and direct them in a strategic direction for the overall organization, an instrument is required. A means to achieve this alignment are explicitly formulated and jointly-accepted goals, e.g., management by objectives.50 Within this context, how this alignment process takes place is of major significance.

We initially revisit definitions of the term “goal” found in the literature. The Merriam-Webster dictionary51 defines the term “goal” as the end towards which effort is directed; synonyms are “objective”, “aim”, and “intent”. According to Nagel, within the business-organizational context, “goal” should be used when that which is strived for is relevant for action and has a direct connection with the concrete problem and its solution.52 Dörner points out that individuals’ goals can be contradictory due to their differing interests.53 He concludes that the major purpose of goal setting is the discovery and issue-related handling of conflicts. Hamel cites the first German-language investigation, by Schwantag in 1951.54 The defining characteristics of a “goal” include: reference to the future, assignment of a positive valence, and determination of a state, event, process, and effect.

Within organizational psychology, authors define goals as:

• What the individual is consciously trying to do55;
• Where levels of performance sought appear to be common elements in attempts to motivate performance; success is associated with goal achievement and failure with performance below the goal level56;
• What an individual is trying to accomplish; it is the objective or aim of an action57;
• A target state or condition the organization wants to achieve.58
50 Nagel (1992), p. 2626.
51 Merriam-Webster’s 11th Collegiate Dictionary (2004) [goal].
52 Nagel (1992), p. 2627.
53 Dörner et al. (1983), pp. 37–38.
54 Schwantag (1951), cited from Hamel (1992), p. 2634.
55 Locke (1968), p. 159.
56 Frost and Mahoney (1976), p. 328.
57 Locke et al. (1981), p. 126.
58 Griffin (1990), p. 161.
For this thesis we use the term “goal” as per the definition of Hamel59: A goal is an envisaged and intended future state, an anticipated vision of the impact of actions.
Hamel adds that, in contrast to (pure) forecasts, goals have the character of action-guiding targets; the intent of attainment or completion is logically included within the goal.60 Nagel61 deals with goal setting in the context of the problem-solving process. He develops a hierarchical (top-down) approach for goal setting and describes goal setting as a process in which goals are cascaded and refined six times across seven levels. Specht and Beckmann also support the process view. They argue that within the phases of problem recognition and the evaluation of alternative solutions, goal creation should be seen as a process across a period of time and not as an act that occurs at a specific point in time.62 Following Specht and Beckmann,63 we define the goal setting process as:

A systematic reduction of complexity, which can be realized, on the one hand, by the initial decomposition of the goal followed by subsequent structuring in a goal system and, on the other hand, through the iterative involvement of goal creation in the problem-solving process.
2.2.3 Measures, Metrics, and Indicators
The performance measurement literature relies on a variety of definitions to describe the metrics that are applied to assess goal attainment in organizations. In this section we analyze the different terms and select which term to use in this thesis. The following terms, inter alia, have been found in the performance measurement literature: “performance metrics”, “performance criteria”, “performance measures”, “performance indicators”, “key result indicators”, “critical success factors”, “key success indicators”, “indexes”, “strategic measures” and “success measures”.
59 Translated from Hamel (1992), p. 2634: Als Ziel kann man folglich “einen vorgestellten und gewollten zukünftigen Vorgang oder Zustand, eine antizipierte Vorstellung der Wirkung unseres Handelns” verstehen. Ziele weisen im Gegensatz zu (reinen) Prognosen den Charakter von handlungssteuernden Vorgaben auf; im Ziel ist die Erreichungs- oder Erfüllungsabsicht definitionslogisch enthalten, also Bidlingmaier (1964).
60 Hamel (1992), p. 2634.
61 Nagel (1992), p. 2627.
62 Specht and Beckmann (1996), p. 18, p. 125.
63 Derived from Specht and Beckmann (1996), p. 125: “Der Zielbildungsprozeß kann als eine systematische Komplexitätsreduzierung, die zum einen durch Zielzerlegung mit anschließender Strukturierung in einem Zielsystem und zum anderen durch iterative Einbindung der Zielbildung in den kognitiven Problem-Lösungs-Prozeß realisiert werden kann, verstanden werden”.
Krause notes that the use of the terms “performance measures”, “performance metrics”, “performance indicators”, and “key performance indicators” has gained in importance recently.64 In order to assess things (e.g., activities, products, services), adequate measurement instruments are required. For this thesis, the definitions of, and differences between, the following terms will be elaborated: measure, metric, performance indicator and key performance indicator (Table 2.5).
Table 2.5 Key terms in Performance Measurement
Terms to be defined in this chapter: measure, metric, performance indicator, key performance indicator
The Merriam-Webster Dictionary65 describes the term “measure” as:
(a) A fixed or suitable limit (1a (3));
(b) The dimensions, capacity, or amount of something ascertained by measuring (1b);
(c) An estimate of what is to be expected (1c);
(d) A measured quantity (1d (1));
(e) Amount, degree (1d (2));
(f) A standard or unit of measurement. . . (2b);
(g) A basis or standard of comparison (6).

The complete dictionary entry contains additional aspects to those listed above; we have selected only those that are most relevant for our context.66 Our selection already hints at many different facets and suggests that the exact meaning of the word depends heavily on the context in which it is used, as well as on subjective interpretation. Within our context, definitions (d) and (f) match best, as they both suggest an “indication of a quantity”. For our work we use the definition closest to (d) and define a measure as “a quantifying value”.
64 Krause (2005), p. 21.
65 Merriam-Webster’s 11th Collegiate Dictionary (2004) [measure].
66 For example, music, dance or instruments were not considered in our selection.
Geisler67 provides the following definition for metrics: they “may be used generically to describe a system of measurement that includes: (1) the item or object that is being measured; (2) units to be measured, also referred to as ‘standard units’, and (3) value of a unit as compared to other units of reference”. A comparison of the terms “measure” and “metric” suggests that the major difference between them is that a metric embodies additional information about the referent. A metric puts a measure into a certain context (e.g. the distance between two points in a two-dimensional plane), which is given by an item or an object or a set of those; it defines a unit of measure (e.g. meter) and a reference unit (the definition of 1 m).

Within the context of performance measurement there is often no single adequate metric that allows us to determine the degree of goal achievement exactly. Geisler states that “in the social, managerial, and behavioral environments and sciences, the phenomenon under consideration is much less precise. In most instances the phenomenon of interest is in the form of a process, or at least as a set of events”.68 According to Gladen,69 numbers that try to picture complex issues in a simple manner have, in a broader sense, more or less the character of indicators. He states that indicators in the narrower sense are not obtained through the consolidation of quantitative information, and he defines indicators as “auxiliary metrics, whose characteristics or changes allow some conclusions on the characteristics or changes of another measure which is considered important”.70 He adds that indicators are needed for facts or parameters that are not directly measurable or observable, and mentions that their validity is lower than that of the original facts. Following this explanation, we define a performance indicator in the context of organizational performance measurement as follows:

A performance indicator is an auxiliary metric that partially reflects the performance of an organizational unit.
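Geisler’s three-part reading of a metric can be made tangible with a small data structure. The sketch below is a hedged illustration under our own assumptions; the class and field names are hypothetical and are not taken from Geisler or Gladen.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    # A metric embeds a bare measure in a context:
    measured_object: str  # (1) the item or object being measured
    unit: str             # (2) the standard unit of measurement
    reference: str        # (3) the reference against which a unit is defined
    value: float          # the measure itself: "a quantifying value"

# The bare measure 42.5 only becomes interpretable within its context:
lead_time = Metric(measured_object="research project lead time",
                   unit="days", reference="calendar days", value=42.5)
print(lead_time)
```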
The following definitions of “performance indicator” have been found in the literature (Table 2.6):
67 Geisler (2000), p. 34.
68 Geisler (2000), p. 35.
69 Gladen (2005), p. 14.
70 Translated from Gladen (2005), p. 14: “(Indikatoren) Sie sind Ersatzgrößen, deren Ausprägung oder Veränderung den Schluss auf die Ausprägung oder Veränderung einer anderen als wichtig erachteten Größe zulassen”.
Table 2.6 Definitions of the term “performance indicator”

Ahaus (1994): “Description of a subject, measurement scale and a measurement procedure. A performance indicator is the operationalization of a non-measurable goal”a

Neely et al. (1995): “A performance measure can be defined as a metric used to quantify the efficiency and/or effectiveness of an action”b

Kerklaan et al. (1996): “An instrument to measure a predefined part of the performance of a process in order to monitor the development of this performance. A complete indicator consists of a measure, a norm, a measurement instrument and a registration technique”c

Kerssens-van Drongelen (2001): “A performance indicator is a variable which indicates the effectiveness and/or efficiency of a process, system or part of a system when compared with a reference value”d

Gladen (2005): “. . . (in a broader sense) are the quantitative information which have been prepared for the specific needs of business analysis and management”; “. . . are alternate parameters, whose characteristic or variation allow the inferring of the characteristic or variation of another parameter which is considered to be important”

a Ahaus (1994), p. 143, cited from Kerssens-van Drongelen (2001), p. 74
b Neely et al. (1995), p. 80. In our interpretation, this definition of Neely et al. is more closely related to the concept of performance indicators
c Kerklaan et al. (1996), p. 208, cited from Kerssens-van Drongelen (2001), p. 74
d Kerssens-van Drongelen (2001), p. 81
We can conclude from the above definitions that performance cannot be seen as something absolute; it is difficult to capture and quantify performance precisely – thus indicators for performance are needed. Since, following Gladen’s definition, indicators are auxiliary values that provide approximations, it is obvious that in general performance cannot be sufficiently quantified by means of one single indicator: sound statements need a set of performance indicators.71 Arguing that the information to be reported to upper management should be presented in a reduced form, Gladen distinguishes information reduction through consolidation (qualitative and quantitative) from reduction through selection. With reduction through selection he suggests the existence of key performance indicators and in this way justifies their use.72 Some definitions found in the literature are listed below (Table 2.7):
71 Bösch (2007), pp. 104–105.
72 Gladen discusses six types of consolidation adopted from Birk (1991): Informationsentlastung durch Verdichtung und Informationsentlastung durch Selektion, Gladen (2005), p. 13. This view will be further analyzed in Chap. 3.
Table 2.7 Definitions of the term “key performance indicator”

Dransfield et al. (1999): “Tactical measures, or key performance indicators, are a set of enterprise-level measures that collectively capture the overall performance of the enterprise and act as predictors of future success, that is, of future values of the success measures”a

Hauber (2002): “Performance measures provide information in a quantified and condensed form about the performance of organizational units, employees and processes and are, therefore, an important basis of information for managers to use when supervising a company”b

Meyer (2002): “. . . drivers of financial performance, that is non-financial measures describing internal processes, products, and customers, at the level of the entire firm or its business units (Meyer also calls them aggregate measures)”c

Gladen (2005): “(in a narrower sense) are measures, which are deliberately/intentionally/willfully heavily compacted to absolute and relative numbers so as to be able to report in a concentrated form about numerically ascertainable facts/data”d

Parmenter (2007): “Key performance indicators represent a set of measures focusing on those aspects of organizational performance that are the most critical for the current and future success of the organization”e

a Dransfield et al. (1999), pp. 99–150. Furthermore, the authors distinguish between “strategic measures” defining performance on an investment level and “operational measures” on the work processes of the enterprise
b Translated from Hauber (2002), p. 54: “Performance Measures sind Kennzahlen, die in quantifizierter und verdichteter Form Auskunft über die Performance von Organisationseinheiten, Mitarbeitern oder Prozessen geben, und daher für das Management eine wichtige Informationsbasis zur Unternehmenssteuerung sind”. In our interpretation, this definition of Hauber is more closely related to the concept of key performance indicators
c Meyer (2002), p. 9, uses a similar definition to Hauber’s; the “non-financial” measures term expresses his motivation for performance indicators, and the notion of a holistic view (“level of the entire firm”) demonstrates the character of key performance indicators. Meyer also notes that information about performance is obscured by aggregate performance measures: the aggregation conceals the sources where the performance is poor and where it is excellent; they are lumped together and in the end do not indicate where to place corrective actions
d Translated from Gladen (2005), pp. 11–12: “Kennzahlen im ‘weiteren Sinne’: Das sind quantitative Informationen, die für die spezifischen Bedürfnisse der Unternehmensanalyse und -steuerung aufbereitet worden sind”. Gladen subsumes indicators under this category. “Kennzahlen im ‘engeren Sinne’: Diese sind Maßgrößen, die willentlich stark verdichtet werden zu absoluten und relativen Zahlen, um mit ihnen in einer konzentrierten Form über einen zahlenmäßig erfassbaren Sachverhalt berichten zu können”
e Parmenter (2007), p. 3. Parmenter distinguishes between three types of performance measures: (1) key result indicators (KRIs), describing how you have done in a given perspective; (2) performance indicators (PIs), telling you what to do; and (3) key performance indicators (KPIs), suggesting what you should do to increase performance. His concept will be discussed further in Sect. 3.3.1
The common ground shared by these definitions is that in the “last” step, the focus is on aspects that are deemed critical to the organization. This leads us back to the earlier idea that a reasonably complete impression of the overall performance of
an organizational unit typically requires a set of Key Performance Indicators (KPIs).73 Building on the terms “measure”, “metric” and “performance indicator” presented above, we define the term “key performance indicator” in the following way:

Key performance indicators are a set of performance indicators, selected or defined upfront by management, that strongly reflect the critical factors of particular interest for the performance of an organizational unit.
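One conceivable way to render this definition – and Gladen’s “reduction through selection” – in code is sketched below. All names, fields and the selection criterion are hypothetical assumptions for illustration, not a prescribed model.

```python
from dataclasses import dataclass

@dataclass
class PerformanceIndicator:
    name: str
    value: float    # an auxiliary metric: a partial view of performance
    critical: bool  # flagged upfront by management as critical?

def select_kpis(indicators: list) -> list:
    # KPIs are the management-selected subset of performance indicators
    # deemed most representative/critical (reduction through selection).
    return [pi for pi in indicators if pi.critical]

indicators = [PerformanceIndicator("publications", 36, True),
              PerformanceIndicator("patents filed", 7, True),
              PerformanceIndicator("travel cost per head", 4200.0, False)]
print([pi.name for pi in select_kpis(indicators)])
# -> ['publications', 'patents filed']
```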
Below is a summary of the terms defined in this chapter (Table 2.8):
Table 2.8 Definitions: measure, metric, PI and KPI

Measure: A quantifying value

Metric: A metric puts a measure into a certain context. The context is given by an item or an object or a set of items or objects. It defines a unit of measure and a reference unit

Performance indicator: A performance indicator is an auxiliary metric that partially reflects the performance of an organizational unit

Key performance indicator: Key performance indicators are a set of performance indicators that are selected upfront and agreed on by management to be the most representative and/or critical performance indicators. A key performance indicator is an element of this set
2.2.4 Performance Management
Simply knowing the level of attainment or performance does not improve the performance itself: performance has to be actively managed. Performance management and performance measurement are sometimes mistaken for each other. Klingebiel states that the literature is inadequate on the conceptual, contextual and definitional differences between performance management and performance measurement.74 Lebas claims that “management could hardly exist without measurement”. He argues that performance management and performance measurement are closely intertwined and therefore inseparable. His clarification regarding their dimensions – “performance management precedes and follows performance measurement, in a virtuous spiral and performance management creates the context for measurement” – suggests that performance measurement is a
73 Note that all the definitions that we found use the term in the plural, indicating “a set of KPIs”.
74 Klingebiel (1999), p. 9.
part of performance management.75 The distinction between the terms is essential for this thesis because they differ in the functions they cover, and the distinction therefore helps us to define the scope of the thesis. Many different definitions of performance management exist in the literature. To provide a consistent view on performance management, the definition of the performance management system will also be reviewed. The interpretation of the term “system” is often ambiguous; therefore, before studying “performance management” and “performance management system”, we briefly analyze the definition of “system”. A system can be defined as a uniformly-ordered whole; system concepts provide a useful way to describe many organizational phenomena, including the information system and features of applications and development processes:

Organized, purposeful structure regarded as a ‘whole’ consisting of interrelated and interdependent elements (components, entities, factors, members, parts etc.). These elements continually influence one another (directly or indirectly) to maintain their activity and the existence of the system, in order to achieve the common purpose (the goal) of the system.76

On the one hand, the system can be interpreted as the interplay of all integrated components and their interdependencies; on the other hand, it is often interpreted as an information system that monitors the overall performance. The following definitions of the terms “performance management” and “performance management system” have been found in the literature; performance management system definitions are listed at the end, and the order is by publication date (Table 2.9).
On the one hand, the system can be interpreted as the interplay of all integrated components and their interdependencies; on the other hand it is often interpreted as an information system that monitors the overall performance. The following definitions for the terms “performance management” and “performance management system” have been found in the literature. Performance management system definitions will be mentioned at the end. The order is by publication date (Table 2.9). The definitions suggest that performance management encompasses all “management” activities: planning, organizing, co-ordinating, leading, controlling, staffing and motivating. This supports the fact that management is the larger domain and includes performance measurement as a component. Wettstein77 draws on the general concept of corporate management78 deteruhli defines management as the entirety of institutions, promined by R€uhli.79 R€ cesses and tools that provide a basis for will-formation (planning and decision) and will-enforcement (instructions and control) in the context of problem-solving by a community (with complex interpersonal relationships). From the four constitutive elements of planning, decision, instructions/order and control, Wettstein assigns control and partly planning to performance measurement, while decision and instructions/order he clearly assigns to the concept of management. This approach is also supported by Brunner who limits performance measurement to “measurement/assessment” by means of performance indicators. He also highlights
75 Lebas (1995), p. 23.
76 Business Dictionary (2009).
77 Wettstein (2002), p. 27.
78 Rühli (1985), p. 30, follows Gutenberg (1976) in his definitions and uses the following terms synonymously: Führung = Leitung = Management.
79 Rühli (1985), p. 28.
Table 2.9 Definitions of the term “performance management”

Lebas (1995): “A philosophy which is supported by performance measurement. Achieving congruence as to the definition of the parameters of performance and the causal model(s) that lead to it is one of the essential functions of (performance) management”a

Brunner (1999): “A company-wide management system which transforms the process of the operationalization of company strategies and objectives into a permanent management system. The achievement of objectives (of the relevant stakeholders) is supported by the combination of strategies, strategic initiatives and the planning, controlling and monitoring of the relevant management quantities”b

Gomez et al. (2002): “An approach to connect value-based strategic planning (financial value) with a measurable strategic implementation in order to resolve currently-existing deficits in strategic management and to point to new ways towards value-based corporate governance”c

Hoffmann (2000): “Includes techniques which enable managers, in coherence with the overall company objectives, to plan, guide and improve the performance of their employees”d

Hauber (2002): “Performance management is the process of planning, managing and controlling quantified variables that refer to the resources (inputs) and their transformation (throughput) in the performance (outputs) of a company’s specific systems”e

Cokins (2004): “The process of managing an organization’s strategy through a fully integrated system of business improvement methodologies, metrics, processes, software tools and systems that manage the performance of an organization”f

Krause (2005): “Performance management encompasses all activities that are aimed at the optimization of stakeholder benefits through the constant improvement of the players’ professional competence and social skills, and at the same time, that minimize the financial, physical, temporal, emotional and social effort”g

Definition of the term “performance management system”

Krause (2005): “A performance management system (PMS) is a management system based upon indicators that support the tasks aimed at optimizing the benefits to an organization’s stakeholders. Therefore, effective PMSs must represent the correlation between performance goals, goal achievement indicators, success-critical value-added activities and techniques for the improvement of the performance on all levels and along the entire value chain of an organization”h

a Lebas (1995), p. 34
b Translated from Brunner (1999), p. 11: “Performance Management ist ein unternehmensweites Managementsystem, das den Prozess zur Operationalisierung der Unternehmensstrategien und -ziele in ein permanentes Führungssystem überführt. Durch die Verknüpfung von Strategien, strategischen Initiativen und der Planung, Steuerung und Kontrolle der relevanten Steuerungsgrößen wird die Zielerreichung (der relevanten Anspruchsgruppen) unterstützt”
c Translated from Gomez et al. (2002), p. 426: “Performance Management ist ein Ansatz, der die wertorientierte Strategieplanung (finanzieller Wert) mit einer messbaren Strategieimplementierung verbindet, um dadurch heute bestehende Defizite im strategischen Management zu überwinden und neue Wege zu einer wertbewussten Unternehmensführung zu weisen”
d Translated from Hoffmann (2000), p. 29: “Performance Management beinhaltet Techniken, mit denen Manager in Abstimmung mit den übergeordneten Unternehmenszielen die Performance ihrer Mitarbeiter planen, lenken und verbessern können”
e Translated from Hauber (2002), p. 56: “Unter Performance Management wird der Prozess der Planung, Steuerung und Kontrolle quantifizierter Größen verstanden, die sich auf die Ressourcen (Input) und deren Transformation (Throughput) in Leistungen (Output) von spezifischen Systemen eines Unternehmens beziehen”
f Cokins (2004), cited from Krause (2005), p. 38
g Translated from Krause (2005), p. 39: “Performance Management umfasst alle Aktivitäten, die unter ständiger Aktualisierung der Fach- und Sozialkompetenz der Akteure auf die Optimierung des Stakeholder-Nutzens gerichtet sind und dabei gleichzeitig den finanziellen, materiellen, zeitlichen, emotionalen und sozialen Aufwand minimieren”
h Translated from Krause (2005), pp. 17–21: “Ein Performance Managementsystem (PMS) ist ein indikatorenbasiertes Managementsystem zur Unterstützung der Aufgaben bei der Optimierung des Stakeholder-Nutzens einer Organisation. Daher müssen effektive PMS den Zusammenhang zwischen Performance-Zielen, Indikatoren für die Zielerreichung, erfolgskritischen Wertschöpfungsaktivitäten und Maßnahmen zur Verbesserung der Performance über alle Ebenen und entlang der gesamten Wertschöpfungskette einer Organisation abbilden”
“management” and thus the planning, management and control of performance within the performance management term.

Performance management can also be explained in terms of management cybernetics.80 Like planning, completion and control, the processes of feed-forward and feed-back belong to performance management. Both are information loops which contribute to the target state of the system.

In order to return from an imbalance to a target state, the system has to communicate information about its output during the feed-back process.81 This triggers counteraction until the state of the system matches the set goals; such a control mechanism leads to a system that always returns to the target state. The control mechanism compares the current values with the set goals and, in the case of a difference, takes corrective action until the current values match the goals.82 The advantage of a feed-back information loop is that management, as the control authority, does not require much information in order to match the system and goals: the system can be seen as a “black box” and control activities can be confined to optimizing input and output relationships. The disadvantage lies in the time lag, and consequently in the fact that corrective actions can occur only after the result or output is known. The goals can usually be achieved only after additional loops and therefore with considerable time delay.

In the feed-forward process, the system requires information about the anticipated deviation from goals before the result or output occurs. Corrections can be triggered at an early stage until the result complies with the defined targets. The main advantage of a feed-forward process, compared to feed-back, is that imbalances are detected at an early stage, not after a deviation has been identified: corrective action is triggered by the early anticipation of negative factors rather than by feed-back of observed results. The disadvantage of feed-forward processes is that management does not have comprehensive information; the precondition for feed-forward control is knowledge of the relationships within the system and their inherent cause-and-effect structure.

According to Grüning,83 the performance measurement system translates the cybernetic process control for the multidimensional goal
80 “Management cybernetics is the concrete application of natural cybernetic laws to all types of organizations and institutions created by human beings, and to the interactions within them and between them. It is a theory based on natural laws. It addresses the issues that every individual who wants to influence an organization in any way must learn to resolve. This theory is not restricted to the actions of top managers. Every member of an organization and every person who to a greater or lesser extent communicates or interacts with it is involved in the considerations”, Beer (1959). http://en.wikipedia.org/wiki/Management_cybernetics.
81 Herder-Dorneich (1993), pp. 47–48.
82 Beer (1962), p. 131, Gomez (1981), pp. 246–247 (five steps to design cybernetic process control), Hauber (2002), p. 57.
83 Grüning (2002), p. 9.
system at all levels, from the strategic via the tactical to the operational level84 (cf. Fig. 2.4). This view is shared by Dransfield, Fischer and Vogel.85

Fig. 2.4 Performance level model (Source: Bredrup 1995)
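The feed-back mechanism described above can be caricatured in a few lines of code. This is a deliberately simplified sketch under our own assumptions – a single scalar goal and a hypothetical corrective rule – and not a model drawn from Beer, Grüning or Bredrup.

```python
def feedback_control(current, goal, correct, tolerance=0.01, max_loops=100):
    # Feed-back: compare the current value with the set goal only AFTER
    # each output, then take corrective action; the time lag over several
    # loops is the characteristic disadvantage of this control mode.
    for _ in range(max_loops):
        deviation = goal - current
        if abs(deviation) <= tolerance:
            break  # system has returned to its target state
        current = correct(current, deviation)
    return current

# Hypothetical rule: each corrective action recovers half the remaining gap.
result = feedback_control(current=60.0, goal=100.0,
                          correct=lambda c, d: c + 0.5 * d)
print(round(result, 2))  # converges towards 100.0 only after several loops
```

A feed-forward variant would, by contrast, act on an anticipated deviation before the output occurs, which presupposes knowledge of the system’s cause-and-effect relationships.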
Having concluded our outline and discussion of management concepts, we can now synthesize the constitutive elements of performance management. All of the definitions we have reviewed share at least four elements: planning, measurement, analysis and review/improvement (Fig. 2.5).
Fig. 2.5 The elements of Performance Management (Source: The figure was derived from the literature review by the author)
84 Bredrup (1995), p. 174. Some parts of the literature state that the operational and tactical levels are identical, see Grüning (2002), p. 9.
85 Dransfield et al. (1999) distinguish three basic zones of measurement, arguing that the strategic level contains external measures of success. They pool the tactical and operational levels together, stating that these mostly comprise internal measures.
• Planning includes the general planning of strategy: defining goals and escorting them through the entire goal-setting process, defining the to-be state or nominal values for later comparison with the actually achieved values, defining key performance indicators, and deciding on timeframes for the planned strategy (short term, long term, etc.).
• The measurement element includes the determination of the current status. Sometimes pure data collection is associated with this activity. This element is not explicitly mentioned in some definitions, e.g., in Hoffmann and Krause. In our opinion, however, the measurement element is implicitly included in their definitions, because they subsequently refer to the analysis element, which requires an “as-is” state. If necessary, the measurement element can include breaking the KPIs down into the PIs that are actually measured. It can also work in the other direction, consolidating the PIs back into KPIs.
• Analysis includes the activities that go beyond pure measurement: evaluating, interpreting, projecting and forecasting from the current situation, determining the deviation from objectives, and analyzing the effects of corrective actions resulting from interdependencies between goals and actions with “what if” scenarios. Within the analysis element, not only are deviations from goal attainment detected; information is also provided on “what happens if” the priorities (of goals or indicators) are changed.
• Review/improvement concentrates on the identification of concrete activities to implement the conclusions drawn from the analyses. Some examples of short-term decisions are periodic rewards, the identification of necessary training, or corrective actions such as budget cuts, travel restrictions or resource reassignments. Longer-term examples include the adjustment and reformulation of organizational goals and KPIs between periodical performance management cycles.

The performance management cycle can be seen as applied both in a long cycle (e.g. for a single goal-setting period) and in a shorter cycle, when assessing goal achievement intermediately and taking corrective actions to improve goal achievement for the overall period; a minimal sketch of this cycle follows below.
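The four shared elements can be read as a loop. The sketch below is one possible rendering under our own assumptions – the function names, the KPI dictionary and the cycle granularity are hypothetical – rather than a prescribed implementation.

```python
def performance_management_cycle(goals, measure, improve, periods=4):
    # Plan (the 'goals' argument) -> measure -> analyze -> review/improve,
    # repeated in shorter intermediate cycles within one goal-setting period.
    for _ in range(periods):
        as_is = {kpi: measure(kpi) for kpi in goals}                  # measurement
        deviations = {kpi: goals[kpi] - as_is[kpi] for kpi in goals}  # analysis
        goals = improve(goals, deviations)                            # review/improvement
    return goals

# Hypothetical usage with one KPI, a measurement stub and a trivial review
# rule that leaves the goals unchanged (corrective action happens elsewhere).
goals = {"patents_filed": 10}
final = performance_management_cycle(goals,
                                     measure=lambda kpi: 7,
                                     improve=lambda g, dev: g)
print(final)  # {'patents_filed': 10}
```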
2.2.5 Performance Measurement
In this section we analyze various definitions of performance measurement and of the performance measurement system found in the literature with respect to the elements identified above. Since the late 1980s, the problem of measuring work results and work performance has been addressed within the English-language management accounting literature under the title “Performance Measurement”.86 Although the term
86 Bösch (2007), p. 103.
“Performance Measurement” is used frequently, its definition is incomplete.87 In their literature review, Neely et al. write: “Performance measurement is a topic which is often discussed but rarely defined”.88 In the following section we list and discuss the definitions of the terms “performance measurement” and “performance measurement system” found in the literature. Table 2.10 contains the definitions of “performance measurement”, followed by those of “performance measurement system”; the order is chronological.

Table 2.10 Definitions of “performance measurement” and “performance measurement system”

Definition of the term “performance measurement”

Anthony et al. (1989): “Performance measurement is the key to effective management supervision and control of people in organizations. But it is also an effective tool for guiding the direction of organizational subunits. The aim of performance measures is to minimize losses and to reward quality performance by comparing actual with desired performance”a

Sink and Tuttle (1989): “Performance Measurement is relative measurement. In order to interpret performance measurement data, one must have something with which to compare the measures. Commonly used alternatives are standards, goals, or baselines”b

Emmanuel et al. (1990): “A vital part of the control process, and one with which accounting is particularly concerned, is the measurement of actual performance so that it may be compared with what is desired, expected or hoped for. However, it is important to stress that performance measurement is but one stage in the overall control process; it is also necessary to set standards, and to take appropriate action to ensure that such standards are attained”c

Carter et al. (1995): “If there is a unifying theme to performance measurement, then it lies in the genuflection to the perspectives of economy, efficiency, and effectiveness, and the production of measures of input, output and outcome”d

Gleich (1997): “Performance measurement can be defined as the development and deployment of (often several) quantifiable measurements of various dimensions (e.g., cost, time, quality, innovation, customer satisfaction) which are applied to assess the effectiveness and efficiency of the performance and performance potential of different objects within the enterprise (organizational units of various sizes, employees, and processes)”e

Kerssens-van Drongelen and Cook (1997): “The acquisition and analysis of information about the actual attainment of company objectives and plans, and about factors that may influence this attainment”f

Evangelidis (1992): “The process of determining how successful organizations or individuals have been in attaining their objectives”g

Neely et al. (1995): “Performance measurement can be defined as the process of quantifying the efficiency and effectiveness of action”h

Sinclair and Zairi (1995): “The measurement (as a process) of performance at all levels within an organization”i

Hauber (2002): “Performance measurement involves the process of quantifying and evaluating the goal achievement of organizational units, employees and processes”j

Wettstein (2002): “The term ‘performance measurement’ . . . encompasses the measuring, analyzing and communicating of the performance as well as the planning of actions and measures”k

(continued)
87 88
Sometimes the literature uses the term “Performance Reporting”, Ramin and Fey (1998), p. 287. Neely et al. (1995), pp. 80–116.
2.2 Performance Management – Basic Terms and Definitions
39
Table 2.10 (continued) Source
Definition
Definition of the term “Performance Measurement System” Neely et al. (1995) “A performance measurement system can be defined as the set of metrics used to quantify both the efficiency and effectiveness of actions”l Gleich (1997) “A Performance Measurement System is a planning and control concept, containing monetary indicators which are aligned to the objectives of a company. These indicators pertain, using measurements and ratios to complement each other, to all of the company’s success and performance-relevant levels that influence its longterm financial viability. The indicators are designed to integrate the needs of stakeholders and are focused on continuous improvement and flexibility”m Simons (1999) “Performance measurement systems: information systems that managers use to track the implementation of business strategy by comparing actual results against strategic goals and objectives. A performance measurement system typically comprises systematic methods of setting business goals together with periodic feedback reports” Hauber (2002) “A performance measurement system is a coordinated set of performance measures of various dimensions which are interrelated, and thus, as a whole, provide complete information about the goal achievement of various entities within a company”n Wettstein (2002) “A performance measurement system (PMS) communicates the operational strategy and monitors the overall performance of an organization at all levels. The PMS supports effective communication of the performance to all stakeholders, provides managers with both operational and strategic decision support, gathers knowledge about the organization and simplifies the organizational learning process. To achieve this goal, the PMS defines processes and makes use of appropriate information systems”o Gr€ uning (2002) “A performance measurement system is a system for measuring and managing business performance that is multi-dimensional, characterized by mutual interdependence and integrates strategic and operational aspects, based on a cybernetic process with elements of organizational learning”p Baum et al. (2004) “Performance measurement systems are used for the measurement and management of aspects of the company’s success and its determining factors. These aspects are multidimensional and, by reciprocal interdependencies, are identified as both strategic and operational”q a
Anthony et al. (1989), p. 142 Sink and Tuttle (1989), p. 60 c Emmanuel et al. (1990), p. 31 d Carter et al. (1995), p. 35 e Translated from Gleich (1997), p 112: “Als Performance Measurement kann der Aufbau und Einsatz meist mehrerer quantifizierbarer Maßgr€ oßen verschiedenster Dimensionen (z.B. Kosten, Zeit, Qualit€at, Innovationsf€ahigkeit, Kundenzufriedenheit) verstanden werden, die zur Beurteilung der Effektivit€at und Effizienz der Leistung und Leistungspotenziale unterschiedlichster Objekte im Unternehmen (Organisationseinheiten unterschiedlichster Gr€oße, Mitarbeiter, Prozesse) herangezogen werden” f Kerssens-van Drongelen and Cook (1997), p. 346 g Evangelidis (1992), p. 45 h Neely et al. (1995), p. 80 i Sinclair and Zairi (1995), pp. 145–168 j Translated from Hauber (2002), p. 29: “Performance measurement umfasst den Prozess der Quantifizierung und Evaluierung der Zielerreichung von Organisationseinheiten, Mitarbeitern oder Prozessen” k Translated from Wettstein (2002), p. 19: “Unter dem Begriff Performance Measurement wird . . . das Messen, Analysieren und Kommunizieren der Performance sowie das Planen von Aktionen und Maßnahmen verstanden” l Neely et al. (1995), p. 81 (continued) b
40
2 Performance Management
Table 2.10 (continued) Source
Definition
m
Translated from Gleich (1997), pp. 114–115: “Ein Performance Measurement System ist ein Planungs- und Steuerungskonzept, das monet€are Kennzahlen beinhaltet, die auf allen erfolgs- und leistungsrelevanten Unternehmensebenen mit den Einflußgr€oßen der langfristigen finanziellen Leistungsf€ahigkeit des Unternehmens komplementiert sind, so daß die Maßgr€oßen bzw. Kennzahlen einander erg€anzen, auf die strategische Zielsetzung des Unternehmens ausgerichtet sind, die Anspr€uche der Stakeholder integrieren und auf kontinuierliche Verbesserung und Flexibilit€at ausgerichtet sind” n Translated from Hauber (2002), p. 58: “Ein Performance Measurement System ist die geordnete Gesamtheit von Leistungsmaßen verschiedenster Dimensionen, die in einer Beziehung zueinander stehen und so als Gesamtheit € uber die Zielerreichung unterschiedlichster Objekte im Unternehmen vollst€andig informieren” o Translated from Wettstein (2002), p. 24: “Ein Performance-Measurement-System (PMS) kommuniziert die operationalisierte Strategie und € uberwacht die ganzheitliche Performance einer Organisation auf s€amtlichen Ebenen. Das PMS unterst€utzt die effektive Kommunikation der Performance mit allen Stakeholdern, bietet Managern sowohl operative als auch strategische Entscheidungsunterst€ utzung, sammelt Wissen der Organisation und vereinfacht das organisationelle Lernen. Um dieses Ziel zu erreichen, definiert das PMS geeignete Prozesse und bedient sich geeigneter Informationssysteme” p Translated from Gr€ uning (2002), p. 10: “Ein Performance Measurement-System ist ein System zur Messung und Lenkung der mehrdimensionalen, durch wechselseitige Interdependenzen gekennzeichneten, strategische und operative Aspekte integrierenden Unternehmensperformance auf Basis eines kybernetischen Prozesses mit Elementen organisationalen Lernens” q Translated from Baum et al. (2004), p. 49: “Performance Measurement Systeme dienen der Messung und Lenkung der mehrdimensionalen, durch wechselseitige Interdependenzen gekennzeichneten strategischen und operativen Aspekte des Unternehmenserfolgs und seiner Einflussgr€oßen”
Common to many authors’89 definitions are the attainment of objectives and the process element. Kerssens-van Drongelen and Cook highlight the analysis aspect of measurement: they stress the acquisition and analysis of information, while Wettstein additionally incorporates the communication of results and the planning of further actions and measures. This suggests that performance measurement is more than a mere review and illustration of past achievements. A further important element appears to be the interpretation of results for improvement, i.e. the derivation and demonstration of actions and action opportunities. In order to consistently derive the key elements of performance measurement, the definitions are analyzed on the basis of the four elements extracted from performance management. Table 2.11 shows a particular emphasis on two of these elements: measurement and analysis.
89 Here: Evangelidis, Neely et al., Sinclair, and Hauber.
Table 2.11 Comparison of Performance Measurement (system) definitions
Constitutive elements mentioned in the definitions, in the column order: Planning – Measurement – Analysis – Review/improvement

Performance Measurement
Anthony et al. (1989): (✓) ✓ ✓ ✓
Sink and Tuttle (1989): (✓) ✓ (✓)
Emmanuel et al. (1990): ✓ ✓ ✓
Carter et al. (1995): ✓
Gleich (1997): ✓ ✓ (✓)
Kerssens-van Drongelen and Cook (1997): ✓ ✓
Evangelidis (1992): (✓) ✓
Neely et al. (1995): ✓ (✓)
Sinclair and Zairi (1995): ✓
Hauber (2002): ✓ ✓
Wettstein (2002): ✓ ✓ ✓

Performance Measurement system
Neely et al. (1995): ✓
Gleich (1997): ✓ ✓ ✓ ✓
Simons (1999): ✓ ✓ ✓ ✓
Hauber (2002): ✓ ✓
Wettstein (2002): ✓ ✓ ✓ ✓
Baum et al. (2004): (✓) ✓ ✓ ✓

✓ component is explicitly mentioned in the definition; (✓) component is not explicitly mentioned in the cited source, but its presence is implied by the definition.
We therefore include the following aspects in our definition of performance measurement (see Table 2.11) – a process in which:
• Relevant data for performance indicators (as-is data) is collected; and
• The collected data is evaluated, analyzed and interpreted (comparison of as-is with to-be).
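To make these two aspects concrete, the following minimal Python sketch models the collection and as-is/to-be comparison step. It is our own illustration: the KPI names, the target values, and the simplifying assumption that higher values are always better are hypothetical and not part of any of the cited definitions.

```python
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    to_be: float   # planned target value (to-be)
    as_is: float   # measured actual value (as-is)

    def deviation(self) -> float:
        """Relative deviation of the actual value from the target value."""
        return (self.as_is - self.to_be) / self.to_be

def review(kpis):
    """Evaluate, analyze and interpret the collected as-is data."""
    for kpi in kpis:
        # Simplifying assumption: higher is better for every KPI shown here.
        status = "on track" if kpi.as_is >= kpi.to_be else "corrective action needed"
        print(f"{kpi.name}: as-is={kpi.as_is}, to-be={kpi.to_be}, "
              f"deviation={kpi.deviation():+.1%} -> {status}")

# Hypothetical research KPIs, for illustration only
review([
    KPI("accepted publications", to_be=20, as_is=17),
    KPI("patent filings", to_be=5, as_is=6),
])
```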
2.3 Summary
Figure 2.6 summarizes all terms that have been introduced in the context of performance management. The overall performance management cycle is based on four constitutive elements. Within the planning element, organizational goals are derived from the company strategy and are operationalized through KPIs. The performance measurement part concentrates on collecting adequate performance indicators and consolidating them into KPIs for the analysis of the current situation and for prediction. The review and improvement element assesses performance achievements by comparing as-is KPIs with to-be KPIs, and draws conclusions on short-term corrective actions as well as on the adjustment of goals and KPIs for the next management cycle.
Fig. 2.6 Performance Management: putting the terms together (Source: The figure was derived from the literature review by the author)
2.3.1 Purposes of Performance Management/Measurement
According to Geisler,90 measurement plays an important role in everyday life: “Measuring the objects and the events in the world around us is not only a scientific necessity, but also the means to make sense of the complexity of natural phenomena. We continually live by measures of our surroundings. We measure the passage of time, the temperatures in our climate, our economic situations, and everything else with which we make contact.” Whether or not to measure the performance of researchers91 is heavily discussed in both theory and practice. The claim that rigorous measurement kills creativity is probably the most prominent argument against measuring. Creativity is seen as an indispensable feature in the search for new ideas, new products, and innovations. This search for newness covers a significant part of the activity of industrial research organizations and should therefore be adequately reflected in a PMS for research organizations.
90 Geisler (2000), p. xi.
91 In conducting our case studies we realized that some companies that were pre-selected did not measure performance within their research department. The desire not to constrain the freedom of researchers seems to be a major argument for not measuring their performance. These cases were taken into account; however, they were not further considered for our case studies.
There are a number of contributions to the literature focusing on purposes for measuring performance, as outlined in Table 2.13 below. According to Sink and Tuttle,92 “The most important, and perhaps the only really valid, reason for measuring performance . . . is to support and enhance improvement”. In order to improve, information about the status quo is required, which in the end allows us to demonstrate that an improvement has actually been achieved. According to Specht and Beckmann,93 this information is also necessary for decision-making by management and has to be available in the right place at the right time, in the right quality and quantity, as well as at the lowest possible cost. According to the literature, there are six major categories of reasons for performance management: two of them – communication and alignment – are strongly related to the context of goal setting. Another two have their background in the evaluation of goal attainment – determining the status quo and predicting attainment for a specific period. The final two have their origin in psychology and focus on motivation – on the individual as well as on the organizational level. Table 2.12 illustrates these six categories and provides the structure which is used in the remainder of the chapter to classify the purposes of performance management as found in the literature.

Table 2.12 Purposes of Performance Management in industrial research
Categories of purposes of Performance Management:
1. Goal setting: (a) Communication; (b) Alignment
2. Evaluating goal achievement: (a) Status quo; (b) Prediction
3. Motivation: (a) Organizational; (b) Personal
Setting goals is a prerequisite for the assessment of actual performance. Communication and alignment are two purposes within the goal-setting category. Communication ensures that company goals are carried into all parts of the organization. According to our definition of the goal-setting process (cf. Sect. 2.2.2), the goals formulated at higher levels have to be decomposed, cascaded and communicated to lower levels. Alignment provides assurance that the sum of all cascaded goals reflects the overall goal at the next higher level. During the goal-setting process, communication and alignment take place iteratively: feedback is received, checked and given on whether the cascaded goals meet the crucial characteristics discussed by Locke.94 Such information can be used to revise the performance indicators and potentially adjust the cascaded goals. These two
92 Sink and Tuttle (1989), p. 141.
93 Note that Specht and Beckmann (1996), p. 332, discuss the control of R&D and its fundamentals. The analysis and integration of R&D control into performance measurement will be discussed in Sect. 3.2.2.
94 (1) Each goal has to be clear so that people can carry out appropriate actions to reach the goal; (2) goals should not be too easy to attain, but at the same time, they must be attainable; and (3) goals should be accepted by the receivers; goals that are not accepted may have negative impacts on performance. Locke (1968), p. 125, Tankoonsombut (1998), p. 12.
purposes primarily reflect the planning element of the performance management cycle, as the goal setting takes place within this element.

Our definition of a goal – an envisaged and intended future state; an anticipated vision of the impact of actions – directly motivates category 2, as it requires determining the status quo (2a), i.e. providing the current performance level compared to the level aimed for, and being able to predict (2b) the future achievable state, especially with regard to supporting decision-making on corrective actions. Status quo evaluation and prediction cover the measurement and the analysis elements of the performance management cycle, respectively.

Not only business economics but also other disciplines such as “Organizational and Occupational Psychology” and “Organizational Behavior and Human Performance” examine the performance of individuals in an organizational context. A great number of studies from applied psychology examining different behaviors and their impact on performance show that the motivation of employees affects their individual performance and, consequently, organizational performance. It is therefore not surprising that individual motivation is considered to be one of the most effective tools for governing performance. By personal motivation we mean rewards, career planning, etc.; this represents category 3b. The demonstration of the contribution of subordinate goals to superior goals is often used as a means to justify the existence of an organizational unit. This especially occurs when, instead of a clear quantitative (economic) outcome, an organizational unit’s output can only be vaguely estimated. This aspect (existential justification) represents the organizational facet (3a) of the motivation category. The motivational purposes mainly refer to the review/improvement element of the performance management cycle. Figure 2.7 illustrates the relationships between the constitutive elements of performance management and the described purposes.

Fig. 2.7 Constitutive elements of PMS and their relationships (Source: The figure was derived from the literature review by the author)

In the following, by way of example, we discuss three sources that elaborate on purposes for performance management and map their findings to our categories. There are a variety of other sources, which are not further discussed here; the interested reader is referred to Table 2.13, where we list additional sources and indicate the purposes.

Landy and Farr95 focus their research on the work performance of individuals. They state that the prediction of performance plays a major role in all personnel decisions and many other types of organizational decisions. They list three kinds of purposes for which performance information may be collected and used: administrative purposes, guidance and counseling purposes, and research purposes.
1. Within the administrative part they mention promotion, lateral transfer, demotion and retention decisions; merit compensation decisions; training program assignments; and the establishment of scores for selection procedures.
95 Landy and Farr (1983), pp. 3–4.
These purposes are primarily reflected within our “motivation” category, as the reasons mentioned mainly concern the implications for decision-making related to the performance of individuals.
2. Landy and Farr identify the improvement of job satisfaction and work motivation as general purposes of guidance and counseling performance information. This information is required regarding the current individual performance level as well as for probable, and possible, future job assignments in the organization. In their opinion, guidance and counseling purposes may include supervisory feedback to subordinate personnel regarding their relative and absolute strengths and weaknesses. They also mention career planning and preparation in this context. These purposes are likewise related to individual motivation.
3. The third purpose is obtaining performance information as part of various projects assessing human-resource-related procedures and initiatives. Examples are the validation of selection procedures, the evaluation of training programs, and the evaluation of motivation- and satisfaction-oriented interventions such as compensation plans and job enrichment programs. The purposes96 described in this part best fit the “status quo” aspect of our second category.
96 See Table 2.13.
Butler97 identifies nine possible reasons for measuring: (1) improve organizational performance, (2) assist in decision making, (3) provide visibility of results, (4) improve understanding, (5) compare absolute performance, (6) improve motivation, (7) improve communication, (8) improve individual performance, and (9) retention of control. Reasons 4 and 7 cover the communication aspect (1a) of the “goal setting” category. Reason 9 in our context is partially related to the alignment aspect (1b) of “goal setting” and partially to organizational motivation (3a). Reasons 3 and 5 relate to the status quo category (2a), whereas reason 2 is reflected by the prediction aspect (2b) within our categories. Finally, reasons 1, 6 and 8 deal with motivational aspects, where 1 refers to the organizational (3a) and the latter two to the personal (3b) category.

Kerssens-van Drongelen98 explores purposes and translates them into functions by developing a taxonomy of measurement systems. She identifies seven measurement system functions:
1. Provide insight into (expected) deviations from objectives/environmental factors to support the diagnosis by a manager as to whether, and if so which, steering measures to apply
2. Fuelling learning to improve the predictive model
3. Alignment and communication of objectives
4. Supporting decision making on performance-based rewards
5. Provide insight into (expected) deviations from objectives/environmental factors to support the diagnosis by subordinates as to whether, and if so which, steering measures to apply
6. Justification of existence, decision and performance
7. Motivating people through feedback
Functions 1 and 5 match our status quo (2a) category as they detect deviations from objectives, whereas function 2 fits the prediction (2b) aspect. Function 3 reflects the “goal setting” category (1a and 1b); function 6 corresponds with the organizational (3a), and functions 4 and 7 with the individual (3b) motivation category. These mappings are summarized, together with further sources, in Table 2.13; a small illustrative sketch of such a mapping follows below.

97 Butler (1994), p. 21, see Table 2.13.
98 Kerssens-van Drongelen (2001), p. 46, see Table 2.13.
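As an illustration of how an author-specific purpose list can be mapped onto the six categories of Table 2.12, the following sketch encodes Butler’s nine reasons exactly as classified in the text above. The enum and dictionary names are our own illustrative choices, not part of Butler’s or our formal framework.

```python
from enum import Enum

class Purpose(Enum):
    COMMUNICATION = "1a"   # goal setting: communication
    ALIGNMENT = "1b"       # goal setting: alignment
    STATUS_QUO = "2a"      # evaluating goal achievement: status quo
    PREDICTION = "2b"      # evaluating goal achievement: prediction
    ORGANIZATIONAL = "3a"  # motivation: organizational
    PERSONAL = "3b"        # motivation: personal

# Butler's nine reasons for measuring, mapped to our categories as in the text;
# reason 9 maps to two categories (alignment and organizational motivation).
butler_1994 = {
    1: [Purpose.ORGANIZATIONAL],
    2: [Purpose.PREDICTION],
    3: [Purpose.STATUS_QUO],
    4: [Purpose.COMMUNICATION],
    5: [Purpose.STATUS_QUO],
    6: [Purpose.PERSONAL],
    7: [Purpose.COMMUNICATION],
    8: [Purpose.PERSONAL],
    9: [Purpose.ALIGNMENT, Purpose.ORGANIZATIONAL],
}

# Invert the mapping to obtain one row of Table 2.13: category -> reason numbers
row = {p: [r for r, ps in butler_1994.items() if p in ps] for p in Purpose}
for purpose, reasons in row.items():
    if reasons:
        print(f"{purpose.value} {purpose.name}: {reasons}")
```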
Table 2.13 Comparison of purposes for Performance Management
Categories of purposes of Performance Measurement: 1. Goal setting – (a) Communication, (b) Alignment; 2. Evaluating goal achievement – (a) Status quo, (b) Prediction; 3. Motivation – (a) Organizational, (b) Personal. Numbers in parentheses refer to the purposes as numbered by the respective author.

Landy and Farr (1983)a: 2a ✓ (3); 3b ✓ (1, 2)
Sink and Tuttle (1989)b: 2a ✓; 3a ✓; 3b ✓
Butler (1994)c: 1a ✓ (4, 7); 1b ✓ (9); 2a ✓ (3, 5); 2b ✓ (2); 3a ✓ (1, 9); 3b ✓ (6, 8)
Bonsdorff and Andersin (1995)d: 1a ✓ (2, 3); 2a ✓ (4, 5); 3a ✓ (6); 3b ✓ (1)
Dhavale (1996)e: 3b ✓
Kaplan and Norton (1996)f: 1a ✓ (2); 1b ✓ (2); 3b ✓ (1)
Kerssens-van Drongelen (2001)g: 1a ✓ (3); 1b ✓ (3); 2a ✓ (1, 5); 2b ✓ (2); 3a ✓ (6); 3b ✓ (4, 7)
Loch and Tapper (2002)h: 1b ✓ (1); 2a ✓ (2, 3); 3a ✓ (4); 3b ✓ (2)
Godener and Söderquist (2004)i: 1a ✓ (1); 2a ✓ (2); 3a ✓ (3, 5); 3b ✓ (4, 5)
Schreyer (2007)j: 1a ✓ (1, 3, 7); 1b ✓ (1, 2, 3, 4); 2a ✓ (4, 5); 2b ✓ (2); 3a ✓ (8); 3b ✓ (6)

a Landy and Farr (1983), pp. 3–4
b Sink and Tuttle (1989), p. 1. “The most important, and perhaps the only really valid, reason for measuring performance . . . is to support and enhance improvement”
c Butler (1994), p. 21
d Bonsdorff and Andersin (1995), p. 67
e Dhavale (1996), p. 50: “Performance Measurements, evaluation systems, and reward systems are indispensable management tools. They can help motivate employees to work toward fulfilling the organization’s strategic perspectives. By contrast, poorly designed or poorly implemented performance measurement systems encourage dysfunctional and suboptimal behavior throughout an organization”
f Kaplan and Norton (1996), p. 147. “The objective of any measurement system should be to (1) motivate all managers and employees to (2) implement successfully the business unit’s strategy”
g Kerssens-van Drongelen (2001), p. 46
h Loch and Tapper (2002), p. 186, highlight four main functions of R&D performance measurement: (1) alignment and prioritization; (2) evaluation and incentives; (3) operational control, and (4) learning and improvement. Furthermore the authors state that the fundamental purpose of performance measurement is to encourage behavior that achieves the goals of the organization
i Godener and Söderquist (2004), p. 197. (1) communicating objectives, agreements and rules; (2) defining corrective actions based on diagnosis and control; (3) allocating resources; (4) deciding on individual promotions, salary increases and other incentives; (5) learning/continuous improvement
j Schreyer (2007), p. 74: (1) operationalization of the company’s strategy, (2) identification of and focusing on success factors, (3) visualization of the interrelations, (4) planning and controlling of the employment of resources, (5) performance evaluation, (6) employee motivation, (7) communication processes, (8) learning effects
2.3.2 The Information and Communication Technologies (ICT) Sector
In comparison with some other industries mentioned, such as pharmaceuticals, chemicals, petrol and gas, steel and some sectors of electronics, the ICT sector is relatively young and very fast growing.99 In an interim report, “Benchmarking national and regional policies in support of the competitiveness of the ICT sector in the EU”,100 the authors report that the ICT sector overtook the traditional pillars of the European economy, such as the pharmaceutical and the automotive industries, in terms of annual turnover, which in 2004 was about EUR 200 billion in Europe and about EUR 1,000 billion worldwide. Although the history of this sector dates back to the 1950s,101 a definition of the ICT sector remains a challenge. Looking at the relevant sources, it is apparent that a generally agreed-upon definition of the ICT industry does not exist. The different interpretations of the term are due to the fact that the sector is widespread, is subject to constant technological change, and is embracing and integrating more and more domains by delivering cross-domain solutions. ICT includes all hardware and software necessary for transmitting, processing and utilizing digital data in any possible form. Data in this sense include texts, sounds and images.102 Currently, three of several attempts to create a standardized form of accounting and measurement of scientific and technological activity within and across countries have succeeded: those of the OECD103 and UNESCO, the U.S. National Science Board, and Japan’s National Institute of Science and Technology Policy (NISTEP). There are a few different standardization communities that have classified
99 For example: at the end of 2008, from close to zero only 10 years earlier, mobile phones in the developing world had reached an estimated average 49.5% penetration rate, ITU (2009), p. 1.
100 Friedewald et al. (2004), p. 7.
101 In particular, discussions about the role of research and development in high technology started around 1950 within the context of the technological and economic competitiveness of the ICT-producing sector, Friedewald et al. (2004), p. 7.
102 The OECD defines the ICT sector as a combination of manufacturing and service industries that capture, transmit and display data and information electronically.
103 The Organization for Economic Co-operation and Development (OECD) is an international organization of 30 countries that accept the principles of representative democracy and free-market economics. It originated in 1948, led by Robert Marjolin of France, to help administer the Marshall Plan for the reconstruction of Europe after World War II. Later, its membership was extended to non-European states. In 1961, it was reformed into the Organization for Economic Co-operation and Development by the Convention on the Organization for Economic Co-operation and Development.
the ICT sector, with their various standards (SIC,104 ISIC,105 CPC,106 NACE,107 and NAICS108). The definitions are revised on an irregular basis. After reviewing two of these standards, ISIC and NACE, we decided to use the most current definition, provided by the OECD in 2007, for this thesis. It is based on the UN International Standard Industrial Classification of all Economic Activities (ISIC4). The standard applies the following general principle (definition) to identify economic activities (industries) related to ICT:
The production (goods and services) of a candidate industry must primarily be intended to fulfill or enable the function of information processing and communication by electronic means, including transmission and display.109
The list of ICT industries (ISIC Rev. 4) that meet this condition is provided in Table 2.14 below. Regarding our exploration and definition of ICT, it is necessary to exclude some of the subsectors. We are examining industrial research departments and their activities, and it is clear that these departments will not engage in all of the subsectors listed. For example, ICT-related research and development activities are not relevant to the subsector of ICT services called “Repair of computers and communication equipment”. For this reason we exclude the following subsectors:
• 9511 Repair of computers and peripheral equipment, and
• 9512 Repair of communication equipment.
104 The Standard Industrial Classification (abbreviated SIC) is a United States government system for classifying industries by a four-digit code.
105 ISIC is the United Nations International Standard Industrial Classification of all economic activities. This classification is the international standard for the classification of productive economic activities. Its main purpose is to provide a standard set of economic activities so that entities can be classified according to the activity they carry out. The hierarchical structure of the classification comprises: Tabulation Categories – one-letter alpha codes A to Q; Divisions – two-digit codes; Groups – three-digit codes; Classes – four-digit codes. The third revision of ISIC is used in the 1993 SNA.
106 The Central Product Classification (CPC) of the United Nations Statistics Division constitutes a complete product classification covering goods and services. It was intended to serve as an international standard for assembling and tabulating all kinds of data requiring product detail, including industrial production, national accounts, service industries, domestic and foreign commodity trade, international trade in services, balance of payments, consumption and price statistics, CPC (2002).
107 NACE stands for the Statistical Classification of Economic Activities in the European Community and is a European industry standard classification system consisting of a six-digit code. NACE is equivalent to the SIC and NAICS systems.
108 The North American Industry Classification System (NAICS) is used by business and government to classify and measure economic activity in Canada, Mexico and the United States. It has largely replaced the older SIC system; however, certain government departments and agencies, such as the U.S. Securities and Exchange Commission (SEC), still use the SIC codes. The NAICS numbering system is a six-digit code.
109 OECD (2007), p. 15.
Table 2.14 ICT sector according to ISIC Rev. 4
Definition of the ICT sectora (codes and sectors)
ICT manufacturing industries
2610 Manufacture of electronic components and boards
2620 Manufacture of computers and peripheral equipment
2630 Manufacture of communication equipment
2640 Manufacture of consumer electronics
2680 Manufacture of magnetic and optical media
ICT trade industries
4651 Wholesale of computers, computer peripheral equipment and software
4652 Wholesale of electronic and telecommunications equipment and parts
ICT services industries
5820 Software publishing
61 Telecommunications
6110 Wired telecommunications activities
6120 Wireless telecommunications activities
6130 Satellite telecommunications activities
6190 Other telecommunications activities
62 Computer programming, consultancy and related activities
6201 Computer programming activities
6202 Computer consultancy and computer facilities management activities
6209 Other information technology and computer service activities
631 Data processing, hosting and related activities; web portals
6311 Data processing, hosting and related activities
6312 Web portals
951 Repair of computers and communication equipment
9511 Repair of computers and peripheral equipment
9512 Repair of communication equipment
Source: OECD (2007)
a Note that there is a growing recognition of the inclusion of the “content industries” in the ICT definition. “Content industries” create and distribute content (e.g. text, audio, video), particularly those that create and distribute content to a wide audience. The definition of the content and media sector is provided in Appendix A
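A compact way to make this scope operational – for instance, when screening company records by industry code – is to encode Table 2.14 as a lookup structure. The sketch below is our own illustration; the function name and data structure are hypothetical, while the codes and the exclusion of the two repair subsectors follow the text above.

```python
# Four-digit ISIC Rev. 4 classes of the ICT sector (Table 2.14, OECD 2007)
ICT_SECTOR = {
    "2610": "Manufacture of electronic components and boards",
    "2620": "Manufacture of computers and peripheral equipment",
    "2630": "Manufacture of communication equipment",
    "2640": "Manufacture of consumer electronics",
    "2680": "Manufacture of magnetic and optical media",
    "4651": "Wholesale of computers, computer peripheral equipment and software",
    "4652": "Wholesale of electronic and telecommunications equipment and parts",
    "5820": "Software publishing",
    "6110": "Wired telecommunications activities",
    "6120": "Wireless telecommunications activities",
    "6130": "Satellite telecommunications activities",
    "6190": "Other telecommunications activities",
    "6201": "Computer programming activities",
    "6202": "Computer consultancy and computer facilities management activities",
    "6209": "Other information technology and computer service activities",
    "6311": "Data processing, hosting and related activities",
    "6312": "Web portals",
    "9511": "Repair of computers and peripheral equipment",
    "9512": "Repair of communication equipment",
}

# Repair subsectors excluded for this thesis' scope (no research relevance)
EXCLUDED = {"9511", "9512"}

def in_scope(isic_code: str) -> bool:
    """True if a four-digit ISIC Rev. 4 code falls within our ICT definition."""
    return isic_code in ICT_SECTOR and isic_code not in EXCLUDED

print(in_scope("6201"))  # True: computer programming activities
print(in_scope("9511"))  # False: excluded repair subsector
```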
2.3.3 The Software Industry
Having defined the ICT sector as a whole, we now position the software industry within it. The European Commission has initiated extensive studies to evaluate the contribution of information and communication technologies to economies, particularly monitoring the software industry. The software industry is described as “one of Europe’s best vectors of growth in innovation and competitiveness”.110
110 Truffle 100 (2006), p. 1.
For a good overview of the economic principles, strategies and future trends among software providers and the software industry, see Buxmann et al.111 In its definition of software (SW), BITKOM112 distinguishes between system software and standard applications:
• System software: system infrastructure software includes system management SW, network management, security SW, storage SW, server ware, and system-level SW; application tools include data management SW, middleware, development tools, etc.
• Standard applications: consumer, content, collaboration, and enterprise applications.
Independently of software, BITKOM defines IT services as:
• Consulting: planning/design of IT systems, including IT-related business consulting.
• Implementation: services such as procurement, configuration, installation, testing and management, development of customized solutions, IT education and training.
• Operations management: management of components of the IT infrastructure for customers, including help desk services, asset management services, systems management, network management, software update management, application service providing, web hosting, facilities management, and back-up/archiving/business recovery services.
• Support services: maintenance, telephone support, etc.

On a multi-company level, the definition of “software industry” is somewhat difficult. First of all, this is because software can be interpreted both as (a) a good and as (b) a service. Secondly, there is a discussion about the software publishing and broadcasting industries, in particular whether these industries, or components thereof, should be classified in the ICT services grouping or in the content and media grouping. The OECD states that software publishing (ISIC 5820) covers at least two distinct components: the publishing of productivity software and the publishing of multimedia software (i.e. games). Because the latter type of software is designed to inform, educate or entertain and has more in common with other types of content products such as newspapers, television programs, films or musical recordings, this industry should ideally be classified in the content and media sector according to some members of the board.113 However, due to the fact that ISIC recognizes only one software publishing industry that produces both types of software, this industry was included in ICT services. According to the OECD, productivity software, which is designed to facilitate information processing, is classified more appropriately with technology-centric services such as telecommunications or hosting services (ISIC 62/631).
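Purely to illustrate the structure of this taxonomy, it can be written as a nested mapping. The nesting below is our own rendering of BITKOM’s categories as described above, not an official schema.

```python
# Our own illustrative rendering of BITKOM's software / IT-services taxonomy
BITKOM_TAXONOMY = {
    "software": {
        "system software": [
            "system infrastructure software",  # system management, network
                                               # management, security, storage,
                                               # server ware, system-level SW
            "application tools",               # data management SW, middleware,
                                               # development tools
        ],
        "standard applications": [
            "consumer", "content", "collaboration", "enterprise applications",
        ],
    },
    "IT services": [
        "consulting", "implementation", "operations management", "support services",
    ],
}

# Example lookup: the four top-level IT-service categories
print(BITKOM_TAXONOMY["IT services"])
```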
111 Buxmann et al. (2008).
112 BITKOM is a German association for information technology, telecommunications and new media; the name stands for Bundesverband Informationswirtschaft, Telekommunikation und neue Medien e.V.
113 The definition is based upon an agreement between the members of the “Working Party on Indicators for the Information Society” (WPIIS).
To summarize, according to the latest revision, ISIC4, the ICT sector comprises ICT manufacturing industries, ICT trade industries and ICT services industries (including ICT repair industries). Regarding the definition of the software industry, it can be concluded that the software-producing sector is essentially represented by ISIC 62, Computer programming, consultancy and related activities.
Chapter 3
State-of-the-Art in Performance Management
Abstract This chapter investigates the state-of-the-art in the performance management literature. In this context, performance management systems, organizational goals and key performance indicators are examined at three levels of granularity: generic (literature reporting on generic approaches covering one or more functions of a company), R&D (focusing on the specifics of the R&D function), and the research function only. The insights in this chapter enable us to formulate the research gaps and to refine the primary question with four research subquestions.

Keywords Characteristics of research compared to development • Key performance indicators • Organizational goals • Performance management systems (PMS) • Requirements catalogue of a PMS
Having already defined the relevant terms for the context of the thesis and explored reasons for managing performance, this chapter presents an overview of the relevant literature. As shown earlier in Sect. 2.2, there is no consistent definition or conceptualization of performance management and performance measurement in the literature. Therefore, we do not make a clear distinction between authors that position their work in performance management and those that position themselves in performance measurement.1 Moreover, our focus is on those specific parts of the authors’ contributions that are relevant for this thesis. Due to our focus on performance management within industrial applied research departments in the ICT sector, this literature review starts by exploring general
1 See our discussion of definitions in Sects. 2.2.4 and 2.2.5, where we concluded that performance measurement forms part of performance management.
literature on performance management across industries rather than literature specific to individual organizational functions. The focus is subsequently narrowed to our specific research area: performance management in industrial research organizations in the ICT sector. Altogether in this chapter, literature dealing with the following aspects is examined:
• The holistic view on performance management systems (PMgSs) and their major components: organizational goals and KPIs;
• The application of PMgSs in different organizational functions: in the entire organization (generic); in the R&D function as one single entity; and in the research department as an entity separate from the development department;
• The affiliation to industry, i.e. non-sector-specific and industry-sector-specific.

In both literature and practice, multiple interpretations of performance management exist and are difficult to classify. Therefore, Sect. 3.1 presents a framework based on existing schools of thought. This framework allows for a categorization of performance management and measurement concepts at an abstract level. Within this framework, the most well-known performance management approaches are shown. The literature review in this section is conducted with the following focus: generic PMgSs, R&D PMgSs and, finally, research PMgSs.

Having looked at the schools of thought, it is essential to understand the necessary characteristics of a system in order for it to have value. Section 3.1.4 elaborates on the more detailed requirements for a performance management system. Here we analyze four recent requirement catalogues and condense them into a single set of requirements. This section is crucial for a basic understanding of the constitutive elements of a performance management system.

This work measures industrial applied research departments. Therefore, Sect. 3.1.5 separates “research” and “development”, which the literature has traditionally treated as a single entity. Furthermore, this section identifies the main characteristics of industrial research and derives additional requirements for a PMgS tuned to a specific organization.

Organizational goals as a major component of a PMgS are discussed in Sect. 3.2. The literature review in this section presents an overview of different existing goal systems and also highlights the goal-setting process within organizations. The organizational goals are inspected from three different perspectives: the generic company-wide perspective, the R&D function, and the research function (separately from development). The same approach is used in Sect. 3.3 when reporting on the state-of-the-art of key performance indicators. Section 3.4 presents our conclusions on this review and identifies the research gap. This provides a basis for the central research questions to be formulated, which reflect the general intent of our research.
3.1 Performance Management Systems

3.1.1 Generic Approaches: Schools of Thought
We begin our literature review by exploring the question “what are the theoretical underpinnings for performance measurement concepts and how can these be classified?” The analyzed literature on performance management often presents isolated approaches that have different foci and that are difficult to sort into a uniform classification scheme. Only one recent contribution (Krause 2006) that suggests a uniform classification scheme based on different “schools of thought” has been found. Krause classifies the different schools of thought according to the major focus of the respective approach: (1) financing/funding, (2) business strategy, (3) business processes, and (4) employees. Please note that concrete performance management approaches cannot always be mapped to a single category, as they may address aspects of different schools of thought. The four schools of thought will now be discussed briefly.
1. The guiding principle of the finance-focused school of thought is “the most serious potential problem is that we are running out of money”. Krause places in this category approaches in which financing plays a central management role and all other aspects are aligned accordingly. Approaches of this category are: Beyond/Better Budgeting (Hope 2003), Value Based Management (Brunner 1999), Shareholder Value (Rappaport 1998), and individual classification systems (GE, PIMS). Criticism of budget-oriented approaches is given for the following reasons:
• The lack of flexibility of budgets with regard to changing business conditions;
• The lacking connection of budgets to the corporate strategy;
• The dysfunctional and creativity-inhibiting “command-and-control” principle;
• The short-term effectiveness in the achievement of financial targets by focusing on an annual quota.
2. The alignment of the strategy of a company throughout its departments is the most important component within the strategy school of thought. Krause expresses its guiding principle as “the capital importance that we know where we want to go and all pull together”. The Balanced Scorecard (Kaplan 1992, 1993, 1996a, b, 2000, 2002, 2004), benchmarking concepts (Camp 1989; Siebert 1998), the Strategic Measurement and Analysis Reporting Technique (SMART), the Performance Pyramid (Lynch and Cross 1995), and Ratios au Tableau de Bord (Lauzel and Cibert 1959) are all mainly oriented towards this school of thought.
3. Business process management and control is the third school of thought portrayed by Krause. The most renowned approaches which he identifies in
this category are again the Ratios au Tableau de Bord, which is deemed to be the precursor of the Balanced Scorecard; the Activity Based Profitability Analysis (ABPA), which is part of the Activity Based Costing group (Meyer 2002); Six Sigma (Harry and Schroeder 2000); and Total Quality Management (TQM) (Malorny 1996; Wolter 1997). Both Six Sigma and TQM overlap with the employee-oriented framework.
4. Approaches which focus on employee contributions to performance are based on self-assessment by employees and encompass the quality award concepts and intellectual capital approaches. “We must above all bring our employees to use their knowledge within the teams” is the guiding principle of the employee-focused school of thought. Specifically, the European Foundation for Quality Management (EFQM) Award, the Malcolm Baldrige Award in the U.S. and the quality award of the Ludwig Erhard Stiftung in Germany are mapped into this last category. The goal of these approaches is the continuous improvement of existing organizational structures through self-adaptation (Fig. 3.1).

Critical appraisal of the schools of thought can be drawn from Klingebiel’s2 research, in which he discusses the integration problems within the coordination tasks of a performance management system. As a coordination task he understands the goal-appropriate, content-based and time-based alignment of interdependent decisions within existing corporate structures.3 Klingebiel states that for a
Fig. 3.1 Four schools of thought (Source: Figure based on Krause 2006). The figure allocates the framework approaches – Beyond/Better Budgeting, individual classification systems (GE, PIMS), benchmarking, Value Based Management/Shareholder Value, SMART/Performance Pyramid, Balanced Scorecard, Ratios au Tableau de Bord, process management, Activity Based Costing, Six Sigma, TQM, EFQM/self-evaluation, and Intellectual Capital – to the four schools (Financing, Strategy, Processes, Employees), each with its guiding principle, e.g. for the process school: “We have clear and reasonable goals. The crucial point is their realization in optimized processes.”
2 Klingebiel (2000).
3 Klingebiel (2000), p. 298.
competitive advantage of a company the correct horizontal integration (between functions, products or regions) and vertical integration (between different company levels) are of utmost importance. In his view, the interplay of the following factors determines the competitive advantage of a company: (1) structures, (2) processes, (3) systems, and (4) cultures. These should be reflected in the design of a performance measurement system. According to Klingebiel, the correct mix of these factors depends on the individual situation of a company. In summary, Klingebiel in his approach embraces at least three of the four schools of thought (Table 3.1).

Table 3.1 Comparison of factors determining PMSs: Krause and Klingebiel
Four schools of thought (Krause) | Klingebiel
• Business strategy (2nd in the framework) | • Organizational structures (1st mentioned)
• Processes (3rd) | • Processes (2nd)
• Employees (4th) | • Cultures (4th)
Klingebiel does not mention financial aspects, which are prominently represented as the first school of thought in Krause’s framework. However, he additionally refers to systems (3rd), which in turn are not considered in Krause’s schools of thought. From this comparison it is clear that the “schools of thought” are a useful construct for theoretical classification even if they are intermingled in practice. As Klingebiel states, this demonstrates that a performance management system cannot be a pure form of one of the schools of thought. The four schools of thought proposed by Krause allow for the classification of performance management and measurement approaches according to their focus and, therefore, provide our research with a framework of possible directions when trying to understand and sort the recent literature contributions.

A great number of the literature contributions discussed in this section highlight the relevance and importance of the topic of performance management and measurement systems in organizations today. The literature reports that performance management encompasses many different aspects: it ranges from generic suggestions for holistic approaches to corporate performance management4 to very specific strategic recommendations on how to develop or implement performance management in practice, along with dashboards, tools and supporting guidelines.5 On the other hand, many academic contributions have been published with different foci on, for example, specific functions or industries such as:
• Marketing (Reinicke 2004);
• Procurement (Bode 2008);
• Healthcare (Bevan);
• Automotive (Lampl 2009);
• Public management (Halligan et al. 2010), etc.

4 Krause (2006), Oehler (2006), Nyiri (2007), just to name some of the recent ones.
5 Jetter (2004), Cokins (2009).
Generic, well-known performance management concepts are reported by authors such as:
• Kaplan and Norton (1992);
• Lynch and Cross (1995);
• Gleich (2001);
• Gladen (2002);
• Krause (2006).
Since these literature sources attempt to address as many elements of PMgS6 as possible, they should, according to our definition (see Sects. 2.2.4 and 2.2.5), be assigned to performance management approaches. Contributions from Klingebiel (1999, 2000), Wettstein (2002) and Hauber (2002), by contrast, are considered by their authors to be performance measurement approaches, although a clear differentiation from performance management is hard to see.
3.1.2 R&D Performance Measurement Systems
In this section, we move along the organizational-functions dimension from generic approaches to specific issues addressing the R&D function. Before reviewing the performance measurement literature, a brief discussion of the R&D function is given to help us understand its specifics in comparison to other functions within an organization and, consequently, the implications for performance management. As discussed, a pure form of performance management does not exist in general, and this is even more pronounced in R&D. In the following review, the difficulties of R&D management and measurement are discussed, and these show that the pure form of performance management is not applicable. As follows from Sect. 2.2.5, where the term “R&D” was defined, R&D within a company can have the function of an incubator for new or future business opportunities.7 The significance of R&D for industrial organizations is often described as one of the most important factors affecting future profitability.8 With the four schools of thought in mind – (1) financing/funding, (2) business strategy, (3) business processes, and (4) employees – we explore their applicability to R&D.
6 Four constitutive elements of a PMgS have been defined in Sect. 2.2.5: planning, measurement, analysis and review.
7 The R&D function justifies its existence within organizations due to the very dynamic markets with constant newcomers and, thus, potential competitors (increased levels of competition), continually-improving technologies (rapidly-increasing production processes and methods) and the growing transparency and fast-changing preferences of customers, Hauber (2002), pp. 26–34.
8 BCG (2003), p. 5.
R&D is generally meant to prepare a company for the “next generation”, be it the next generation of products, services, customers, competitors, partners or employees. However, how do management and measurement in the R&D function differ from those in other internal functions? The nature of experimentation, which is inherent to the R&D function, explains this best. Researchers have to “try it out”; they do not know what the result will be and might even have little idea about the desired result and its effect on the planned result. This experimental modus operandi shows where the schools of thought fail in R&D; for example, the business processes school of thought is very difficult to apply due to the non-repetitive nature of the experiment. This uncertainty makes it difficult to plan, manage and measure the activities. The financing school of thought is hard to apply to R&D in the ICT sector, because outputs lack tangibility and perceptions about their monetary value are fuzzy. The quantity of outputs as such, which is often an important criterion for other departments, is of little importance for assessing R&D. A precondition for the calculation of profitability, which sets inputs in relation to outputs, is the quantifiability of the outputs. The following obstacles to the quantifiability of R&D outputs are reported in the literature9:
• Mapping difficulties: although expenditure on R&D is directly linked to future earnings, it is practically impossible to illustrate in detail how each Euro spent on R&D contributes to the company’s income.
• Measuring the output: an appropriate methodology for measuring and evaluating R&D output does not yet exist. The output of an R&D project may be a development in the form of hardware or software, a patent or an engineering drawing. An R&D output could also be the knowledge that a desired solution is not feasible. This output cannot always be assigned to a subsequent product.
• Periodical accounting: the typical timing of R&D projects’ results means a delay between the expense and income dates, and within the classical, period-based cost accounting system, no direct revenue can be assigned to current expenditure.10
• Linkages with other functional areas: the comparison of the costs of an R&D project and the associated cash flows of the product only allows limited statements about the value-added contribution of R&D. Success within the value-added R&D phase is a necessary, but not sufficient, condition for business success. Assigning economic success to indirect operational areas is an unsolved problem in industrial economics.
9 Hauber (2002), p. 41. Note that the author acknowledges that his work applies predominantly to development (D); nevertheless, he uses the term “R&D” because, according to Hauber, on the one hand, the basic philosophy also applies to research, and on the other hand, a strict distinction between research and development is not appropriate. Hauber (2002), p. 24.
10 This problem is addressed in the literature under the term “time lag”. Schainblatt also observes that the time lag between fundamental discoveries and new industries can be from 15 to 50 years.
Another characteristic of R&D is that it is “often a shared work for which it is quite difficult to assign a meaningful output to any one individual”.11 For over 90 years, practitioners and academics have devoted their thoughts to the measurement of work performance, yet the question of measuring R&D productivity remains very controversial. In 1988, Brown and Svenson found that the major problem of R&D performance measurement is its acceptance. In their study, they state some of the reasons why the assessment of R&D is difficult:
• Many scientists and engineers think that it is impossible to effectively measure R&D productivity. In fact, the very act of measurement is thought to discourage creativity and motivation among high-level professionals.
• Many feel that management should just have faith that R&D is a good investment without trying to measure it.
• Scientists and engineers feel negatively about measuring R&D performance for fear that such systems may expose their own inadequacies and lack of productivity.
• Many attempts to measure R&D performance have resulted in dismal failure. These failures have led many to believe that measurement systems in general do not work.12
Kerssens-van Drongelen states three major problems of R&D measurement:
• It is difficult to accurately isolate R&D’s contribution to company performance from the other business activities, because it is always the intertwined efforts that eventually result in outcomes in the marketplace.
• There is a problem of matching specific R&D inputs (in terms of money or man-hours) and intermediate outputs (research findings, new technologies, materials, etc.) with final outputs (including new or improved products and processes).
• The time lag between R&D efforts and their pay-offs in the marketplace represents the third major problem.13
Previous approaches to performance management in R&D have a common defect: they deal only with selected aspects when trying to assess the performance of R&D. A similar observation is made by Hauber (2002): “None of the models fully succeed in encapsulating R&D performance as none of them take all relevant factors into consideration”.14 Kerssens-van Drongelen also notes that there is no confirmed theoretical model for a performance measurement system. The question of the characteristics of the various dimensions of performance measurement is especially important in order to understand all of the influencing factors. Questions such as “what is the measured object?”, “what kinds of measurement levels are
11
Schainblatt (1982), p. 10. Brown and Svenson (1998), pp. 31–32: following five reasons of failure have been identified: too much emphasis on internal measurement; too much focus on behavior; measuring outputs of questionable value to the organization; measurement system is too complicated, and measurement system is too subjective. 13 Kerssens-van Drongelen (2001), pp. 9–11. 14 Hauber (2002), p. 119, Gaiser and Servatius (1990), p. 13, Zeidler (1986), p. 321. 12
Questions such as “what is the measured object?”, “what kinds of measurement levels are there?” and “what are the (content) areas for measurement?” are important and have to be clarified before going further in setting up a performance measurement system and deciding on performance measures. A widespread view is that process improvements are the only levers for increasing performance.15 Few articles were found in the performance management literature that summarize the range of performance measurement dimensions. Often, individual aspects are singled out and discussed in isolation without considering the overall relationship. Contributions focusing on isolated objects can be found, and discussions distinguishing possible organizational levels are especially prevalent. For example, Ranftl (1977), p. 70, reports on five possible R&D organizational levels: an individual, a group, a major design, an entire program, and a larger organizational segment; and Griffin and Page (1993) discuss a single product, a program, and the total firm. Patterson (1983), Robb (1991) and Szakonyi (1994) focus on an R&D department, while Kerssens-van Drongelen (2001), p. 55, gives an overview of measurement subjects, including combined functional and departmental views. Hauber (2002), p. 158, develops three performance measurement layers: the R&D department, product projects, and projects without a concrete link to the market. Lynch and Cross (1995) presented a four-level pyramid, which links strategy and operations by translating strategic objectives from the top down and measures from the bottom up. These four levels are: corporate, business units, business operating systems, and departments and work centers. An attempt to summarize the different dimensions of R&D performance analysis has been undertaken recently by Ojanen and Vuola (2006).16 The authors present five basic dimensions of R&D performance analysis, which are given in Table 3.2.

Table 3.2 The basic dimensions of R&D performance analysis
Measurement perspectives (FOR WHOM?): Customer; Internal; Financial, shareholders; Other stakeholders; Learning; Others
The purpose of measurement (WHY?): Strategic control; Justification of existence; Benchmarking; Resource allocation; Development of activities/problem areas; Motivation, rewarding; Others
Measurement level (WHERE?): Industry; Network; Company SBU/department; Process; Project, team; Individual
R&D type (WHAT?): Basic research; Exploratory research; Applied research; Product development; Product improvements (incremental)
Process phase (WHEN?): Input; In-process; Output; Outcome
Source: Ojanen and Vuola (2006)

15 Hauber (2002), p. 119; Hronec (1993).
16 Ojanen and Vuola (2006), pp. 279–290.
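Read as a classification scheme, the table implies that any concrete indicator can be tagged along all five dimensions. A minimal sketch of such tagging (Python; the enum and field names, and the example indicator, are our own illustrative choices, not part of Ojanen and Vuola’s framework):

from dataclasses import dataclass
from enum import Enum, auto

class Perspective(Enum):          # FOR WHOM?
    CUSTOMER = auto(); INTERNAL = auto(); FINANCIAL = auto()
    OTHER_STAKEHOLDERS = auto(); LEARNING = auto()

class Purpose(Enum):              # WHY?
    STRATEGIC_CONTROL = auto(); JUSTIFICATION_OF_EXISTENCE = auto()
    BENCHMARKING = auto(); RESOURCE_ALLOCATION = auto(); MOTIVATION_REWARDING = auto()

class Level(Enum):                # WHERE?
    INDUSTRY = auto(); NETWORK = auto(); COMPANY = auto(); SBU_DEPARTMENT = auto()
    PROCESS = auto(); PROJECT_TEAM = auto(); INDIVIDUAL = auto()

class RnDType(Enum):              # WHAT?
    BASIC = auto(); EXPLORATORY = auto(); APPLIED = auto()
    PRODUCT_DEVELOPMENT = auto(); INCREMENTAL_IMPROVEMENT = auto()

class Phase(Enum):                # WHEN?
    INPUT = auto(); IN_PROCESS = auto(); OUTPUT = auto(); OUTCOME = auto()

@dataclass
class IndicatorTag:
    """One concrete indicator, classified along all five dimensions."""
    name: str
    perspective: Perspective
    purpose: Purpose
    level: Level
    rd_type: RnDType
    phase: Phase

# Example: a publication count is an output-phase indicator of exploratory
# research, used to benchmark a department for external stakeholders.
publications = IndicatorTag(
    "peer-reviewed publications per year",
    Perspective.OTHER_STAKEHOLDERS, Purpose.BENCHMARKING,
    Level.SBU_DEPARTMENT, RnDType.EXPLORATORY, Phase.OUTPUT,
)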
The first dimension addresses different measurement perspectives on strategy and objectives. Ojanen and Vuola list the following possible stakeholder groups: customer, internal, financial shareholders, other stakeholders, learning, and others. Beck and Völker (2009)17 present a research and R&D-specific stakeholder analysis. The second dimension represents the purpose of R&D performance analysis. Strategic control, justification of existence, benchmarking, resource allocation, development of activities/problem areas, motivation, rewarding, and others are assigned to this category.18 The third dimension distinguishes different organizational levels of R&D performance analysis. The following levels have been identified: industry, network, company SBU/department, process, project, team, and individual. The type of R&D is the fourth dimension suggested. A distinction has been made between basic research, exploratory research, applied research, product development, and product improvements (incremental).19 The last dimension is focused on the phase of the R&D process to be measured. Input, in-process, output and outcome are the proposed characteristics for this dimension. This set of dimensions is one possible way to structure an overview of the different measurement perspectives. However, some of the examples are misarranged: “learning” in the first dimension, for instance, describes a purpose of measurement and thus actually belongs to the second dimension. The dimensions-table concept presented by Ojanen and Vuola helps to structure the multi-dimensional, multi-criteria and multi-person task of measure selection, which is often difficult to execute effectively. Specifically, the understanding and recognition of correct combinations of measurement dimensions is of value in selecting the appropriate R&D indicators. Many of the problems related to R&D outlined at the beginning of this section are caused by the time lag, which belongs to the fifth dimension: the process phase as per Ojanen and Vuola. Two recent publications on R&D performance measurement which design a performance measurement system are: a thesis from the University of Twente by Kerssens-van Drongelen (2001) and a thesis from the University of Mainz by Hauber (2002). While Kerssens-van Drongelen’s work is rather theoretical, Hauber’s research originated in the automotive industry and is therefore practice-oriented.
17 E.g., externally: academia, industrial partners, political bodies and media; and internally: the development department, as well as the communication and marketing departments and top management. Beck and Völker (2009), p. 34.
18 We presented an overview of purposes of performance management in Sect. 2.2.5, pp. 37–43.
19 A detailed differentiation between types of R&D activities is provided in the definition Sect. 2.1.3, pp. 8–15.
In her research, Kerssens-van Drongelen develops a Performance measurement system Systematic Design Approach (PSDA). In her work she specifically enhances taxonomies of measurement system functions, R&D measurement subjects, strategic R&D control models, R&D work types, and measurement methods and metrics. She also proposes tools and guidelines for the development of measurement systems in R&D environments. The reader is referred to her work for the detailed analysis and relevant literature sources, as it does not make sense to repeat it in our work. Hauber proposes a performance measurement concept with three layers. The first layer is based on a balanced scorecard with four perspectives: finance, customers, innovation and knowledge, and employees, representing the management mechanism for R&D managers. The second layer represents a subsystem for product projects.20 Two processes are central to this layer: the first process is decision-making in terms of the start, design and goal-setting of each project. This process concerns portfolio selection and strategy finding, i.e. effectiveness. The second process is system control in terms of the management and control of product projects. The quality of this process demonstrates the efficiency. Finally, the third layer considers projects without a clear link to the market. The important difference between projects in the second and the third layer is that third-layer projects have no connection with products, whereas second-layer projects do. According to Hauber, performance measurement concepts often ignore the fact that the choice of the right strategy in the context of project selection is an important factor in the achievement of corporate goals. He states that another weakness of performance measurement concepts is that these objects remain unlinked. Hauber concludes that the isolated assessment of an individual R&D project without consideration of other projects or the total R&D organization is not sufficient to capture the entire performance. It is worth mentioning that it is still rather difficult to identify the instruments or concepts in the performance measurement system which can help to capture the entire performance of the department.
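Hauber’s layering can be summarized schematically; the following sketch (Python; the type and field names are our own illustrative choices, not Hauber’s terminology) shows which object each layer controls:

from dataclasses import dataclass, field

@dataclass
class Layer1Scorecard:
    """Layer 1: a balanced scorecard for R&D management."""
    perspectives: tuple = ("finance", "customers",
                           "innovation and knowledge", "employees")

@dataclass
class ProductProject:
    """Layer 2 object: a project with a concrete link to the market."""
    name: str
    in_portfolio: bool = False   # process 1: selection and goal-setting -> effectiveness
    on_track: bool = True        # process 2: management and control -> efficiency

@dataclass
class NonMarketProject:
    """Layer 3 object: a project without a concrete link to the market."""
    name: str

@dataclass
class ThreeLayerPMS:
    scorecard: Layer1Scorecard = field(default_factory=Layer1Scorecard)
    product_projects: list = field(default_factory=list)
    non_market_projects: list = field(default_factory=list)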
3.1.3 Research (Only) Performance Measurement Systems
The approaches discussed above focus on R&D as a unified entity, i.e. R&D is seen as one single function. In practice, however, especially in large corporations, research and development operate as two different organizational units.21
20 Note that even though Hauber calls his work “R&D performance measurement”, he acknowledges the fact that his work primarily focuses on the development part. Hauber (2002), p. 24. As such, the second layer of his proposed PMsS, which focuses on product projects, is typical for the development function but might not necessarily find an equivalent type of projects in research only.
21 Seiler (1965), Mansfield et al. (1971), Leifer and Triskari (1987), pp. 71–78, Brown and Svenson (1998), pp. 30–35.
Therefore, the aforementioned approaches are of limited help to practitioners for developing and implementing performance management within their particular organizations (either research or development).22 Research and development units have fundamentally different purposes. This can be seen in their different activities and leads to a need for different performance indicators. The differences between the activities of the two units, and the implications for keeping performance measurement relevant to the context of each unit, will be shown below. For example, a company that manufactures products differs in its overall company strategy from one that offers services. Correspondingly, their goals differ and therefore so do their KPIs. Each component is adjusted to the context-dependent environment of the company. Thus, it is obvious that performance measurement systems differ depending on whether they are designed to monitor a research or a development organization. In addition, the concrete content, goals and KPIs may vary depending on the industrial sector; they are tightly connected to the strategy of the company and thus have to reflect a fit to its current situation.23 In our review of the performance management and measurement literature, no sources could be found reporting a holistic performance measurement system for research. Two case studies were found reporting on the experiences of setting up performance measurement in research. Loch and Tapper (2002) report on a performance measurement system for an industrial research group at a medium-sized diamond producer. Mettänen’s case study (2005) focuses on the design of a performance measurement system in a non-profit research institute. Loch and Tapper present a case study where the authors implemented a performance measurement system for a research group (rather than development) because research was especially difficult to measure, thus providing a higher test hurdle. In a first step, the circumstances for acceptance were created. In the next step, the content of the research performance was cascaded from the company’s business strategy. This is regarded in our research work as an important element of performance management; therefore, special focus has been attached to cascading the strategy. Unfortunately, the description has been simplified in order to focus on exposition and to preserve the confidentiality of the company. During this process, the types of outputs were identified, a set of performance measures was formulated (this part will be discussed later in Sect. 3.3.3), and a collection of programs (a portfolio) was defined. Both contributions will be discussed in depth in Sect. 3.3 in the context of performance measures. Table 3.3 summarizes the most relevant approaches of performance management and measurement, together with their organizational function and industry affiliation. The mapping to the schools of thought was made by considering the focus of each approach. Not every approach has been discussed in detail, nor has every approach been mentioned within this section.
22 Samsonowa et al. (2009), p. 158.
23 There is no one set of measures that will remain definitive over time. Performance measures, as with the organization itself, should be flexible to change (cf. Driva et al. 2000).
Table 3.3 Summary of relevant performance management and measurement concepts
(Columns: Approach | Source | Function(a) | Industry affiliation(b); each approach is additionally assigned to one or more schools of thought: financing, strategy, processes, employees)

Beyond Budgeting | Hope and Fraser (2003) | General | Non-sector specific
Value Based Management | Brunner (1999) | General | Non-sector specific
Shareholder Value | Rappaport (1998) | General | Non-sector specific
Classification systems (GE, PIMS) | Anthony et al. (1989) | General | Non-sector specific
Ratios au Tableau de Bord | Lauzel and Cibert (1959) | General | Non-sector specific
Balanced Scorecard (BSC) | Kaplan and Norton (1992) | General | Non-sector specific
Integrated Performance Measurement | Klingebiel (2000) | General | Non-sector specific
Activity Based Profitability Analysis (ABPA) | Meyer (2002) | General | Non-sector specific
Total Quality Management | Malorny (1996) | General | Non-sector specific
Six-Sigma | Harry and Schroeder (2000) | General | Non-sector specific
European Foundation for Quality Management Award | EFQM (1996) | General | Non-sector specific
Malcolm Baldrige Award | Malorny (1996) | General | Non-sector specific
Performance Management: Eine Stakeholder-Nutzen-orientierte und Geschäftsprozessbasierte Methode | Krause (2005) | General | Non-sector specific
Measuring R&D Effectiveness | Szakonyi (1994) | R&D | Non-sector specific
Performance Measurement in Research and Development | Hauber (2002) | R&D | Non-sector specific
Systematic Design of R&D Performance Measurement Systems | Kerssens-van Drongelen (2001) | R&D | Non-sector specific
Measuring R&D productivity: complementing the picture by focusing on research activities | Karlsson et al. (2004) | R&D | Aerospace
New strategic goals and organizational solutions in large R&D labs | Cesaroni et al. (2004) | R&D | Automotive, telecommunication
Exploring the differences in performance measurement between research and development: evidence from a multiple case study | Chiesa and Frattini (2007) | R&D | Aerospace, cosmetics, pharma-biotech, chemicals
Implementing a strategy-driven performance measurement system for an applied research group | Loch and Tapper (2002) | R only | Diamond producer
Design and implementation of a performance measurement system for a research organization | Mettänen (2005) | R only | Work efficiency non-profit organization

Note that this list is not exhaustive, nor is it a complete listing of all sources found in the relevant literature. The approaches are not listed in any particular order. This table presents an intended categorization of literature sources regarding the function, industry affiliation and the school of thought.
(a) General – applies to the entire company or cross-department; R&D – Research and Development as a single entity; R only – Research department
(b) Non-sector specific / Industry-sector specific
Some relevant approaches will be discussed in subsequent sections as they address specific components of either organizational goals or performance indicators. In the next step we analyze the necessary characteristics of a performance management system. Normally, a requirements analysis describes aspects that are relevant for determining the needs or conditions to be met by a new or altered product. An example of this is taking into account conflicting stakeholder requirements.24
3.1.4 Requirements of a Performance Management System
Most differences between performance management systems can be explained by their initial definition.25 The initial definition determines who (stakeholders)26 are provided with what kind of information.27 This fact influences the design of processes and interfaces within a PMgS and highlights the need for a thorough requirements analysis that considers all intricacies and specifics for a performance management and measurement system. Many isolated literature sources can be found reporting on different requirements, which constitute possible complementary elements of performance measurement systems. In the following, we analyze the four most recent sources28 that present requirement catalogues to get an understanding of the components that are crucial for a performance management and measurement system. • A comprehensive performance measurement study by Klingebiel from 2000;
24 Laplante (2009), p. 13.
25 Requirement – something essential to the existence or occurrence of something else, Merriam-Webster’s 11th Collegiate Dictionary (2004) [requirement]. A requirement is a singular documented need of what a particular product or service should be or perform. It is a statement that identifies a necessary attribute, capability, characteristic, or quality of a system in order for it to have value and utility to a user, Wikipedia, 17.02.2010 [http://en.wikipedia.org/wiki/Requirement]. Requirements try to describe the result; in our research the result is a performance management system. In this chapter we describe what the PMgS should do.
26 Since each and every organization has its unique ecosystem, it also has its own definition of stakeholder groups. For a listing of possible stakeholders in a typical organization see Hahn (1994), p. 62.
27 Since the interpretation of performance management varies greatly (see Sects. 2.2.4 and 2.2.5), it especially differs in the definition given by different stakeholder groups, which are to be provided with information. As such, the design of interfaces also differs.
28 We are aware of the fact that the four studies we have chosen for the analysis of the PMS requirements focus on different units of analysis. This is not a disadvantage for our examination as they study essentially the same phenomenon. Moreover, the examination of the same phenomenon in different units of analysis and on different abstraction levels helps to produce a comprehensive view.
• A dissertation about performance measurement with a focus on IT-design by Wettstein from 2002;
• A study on performance measurement in R&D by Hauber from 2002;
• A study encompassing a holistic performance management approach by Krause from 2005.
Klingebiel analyzes and compares relevant conceptual contributions to performance measurement and examines which internal and external company (stakeholder) information needs are relevant. An important part of the study is the requirements framework for a performance measurement system. According to Klingebiel, the framework of a performance measurement system should be oriented to the following requirement: to use patterns of thought which help to identify complex operational dependencies in their multifaceted interfaces and to design optimal processes.29 In his requirement catalogue (cf. Table 3.4) for performance measurement systems, Klingebiel30 differentiates between eleven requirements for the structure of a PMS and twelve requirements for the selected indicators. Wettstein31 introduces a catalogue with 45 requirements for a PMsS. Since Wettstein focuses on the integration of a PMsS with an IT-system, he distinguishes two parts: (1) requirements for the PMsS as such, and (2) requirements for IT-systems supporting the PMsS. He classifies the requirements of the first part into five groups: performance indicators, goals, organization, processes, and operating efficiency. Within the second part, Wettstein distinguishes between considered entities, functional, non-functional, and other requirements32 (Table 3.5). Hauber focuses on performance measurement in research and development and distinguishes, on the one hand, between purposes, and on the other hand between requirements, which are split into formal and contextual requirements33 (Table 3.6). Krause examines performance management concepts and analyzes requirements for a management method, which he sorts into four clusters. The suggested requirement clusters, with 20 requirements in total, are: overall concept, knowledge base, procedure model, and software support34 (Table 3.7). When comparing these four requirement catalogues, it becomes evident that Klingebiel, Hauber and Krause suggest requirements that focus on a generic management approach, while Wettstein presents requirements on a more detailed level.
29 Klingebiel (2000), p. 298.
30 Klingebiel (2000), pp. 34–35.
31 Translated from Wettstein (2002), p. 105.
32 Please note that we have considered in our classifications the requirements which are business-oriented (WET01–WET17); the remaining requirements WET18–WET45 describe the IT-system requirements and were omitted, since the software integration is out of the scope of this work.
33 Hauber (2002), p. 109.
34 Krause (2005), p. 109.
Table 3.4 Catalogue of requirements for a “Performance Measurement System” according to Klingebiel
The performance measurement structures
KLI01 Require a company-specific adaptation with a tight dependence on strategic direction to fulfill the delivery of information
KLI02 Are an integral part of an aligned and self-consistent information supply of a company’s different organizational units and levels based on performance indicators
KLI03 Enable the deduction of actions to improve the current performance situation (no scoreboard) by means of the structure of performance indicators
KLI04 Are targeted at continuous improvement, which, in individual cases, could also be an abrupt improvement, especially with regards to the elimination of non-value-added activities
KLI05 Are targeted at the coordinated balance of relevant stakeholders’ interests
KLI06 Build a system of performance indicators that are oriented according to specified areas of responsibility
KLI07 Consider the independent information needs of decentralized units
KLI08 Feature, in private sector companies, a system-oriented, enterprise-value-oriented holistic architecture
KLI09 Are targeted at strategic information demand according to the value chain
KLI10 Incorporate constantly changing information needs in terms of lifecycle
KLI11 Have an evolutionary system focus
The selected performance indicators
KLI12 Focus on the most important aspects and, therefore, are few in number
KLI13 Build a consistent and complementary arrangement in terms of content with monetary/non-monetary orientation
KLI14 Are primarily aimed at improving future performance (e.g. emphasis on innovation and learning)
KLI15 Have an intentionally high stability throughout the reporting
KLI16 Derive the rhythm of their reporting mainly from the corporate level
KLI17 Support the evidence of recent methodological approaches’ success through a high process orientation
KLI18 Are characterized by good communicability, which strengthens self-motivation
KLI19 Are coupled with a parallel system of incentives and support personal motivation
KLI20 Take into account the competitive requirements of cost, time and quality
KLI21 Represent the current/expected performance in the core competencies
KLI22 Also include, after critical examination, increasingly recent financial-oriented target values (e.g. EVA,(a) shareholder value) and their value drivers
KLI23 Consider behavior-related aspects during the selection process of performance indicators
Source: Translated from Klingebiel (2000), pp. 34–35. In order to assure an accurate mapping we classified each requirement by numbering it sequentially after the first three letters of the author’s name. This procedure was applied to all four sources under consideration.
(a) Economic Value Added: the value of an activity that is left over after subtracting from it the cost of executing that activity.
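The footnote’s informal definition of EVA corresponds to a standard formulation; for concreteness (our addition, not part of Klingebiel’s catalogue):

\[
\mathrm{EVA} = \mathrm{NOPAT} - \mathrm{WACC} \times C_{\text{invested}}
\]

where NOPAT is the net operating profit after taxes, WACC the weighted average cost of capital, and \(C_{\text{invested}}\) the capital tied up in the activity. A unit earning a NOPAT of 12 on invested capital of 100 at an 8% cost of capital thus yields \(\mathrm{EVA} = 12 - 0.08 \times 100 = 4\): the activity earns more than the cost of executing it.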
Although Wettstein’s requirements do not consider the specifics of companies in a particular sector or in a specific function, his concept considers organizational goals and links them to performance indicators. Even though Hauber designed an approach for the specific R&D function, his requirements are nevertheless of quite a generic nature.
Table 3.5 Requirements for a “Performance Measurement System” and for an “IT-System for a Performance Measurement System” according to Wettstein
Performance indicators
WET01 Consideration of
• Financial and non-financial
• Leading and lagging
• Internal and external performance indicators
WET02 Structuring according to impact, execution and premise indicators
WET03 Indicators are operationalized goals
WET04 Set target (target value of a performance indicator)
Goals
WET05 Are operationalized by means of indicators
WET06 Are derived from the strategy
WET07 Only those goals are considered which have a stakeholder
Organization (embedding, structure, involved parties)
WET08 Goals are connected with organization and employees
WET09 Individual structure can be established
WET10 PMS can be integrated into existing management systems
WET11 Administration of performance-relevant data is carried out centrally
Processes
WET12 For the creation of a PMS
WET13 For the acquisition of company-internal and external data
WET14 Stakeholder communication of the performance-relevant data
WET15 For the development of the ex ante causal relationships and for their ex post review
WET16 For the maintenance (review) of the PMS
Operating efficiency
WET17 Verifiability of the cost-benefit situation of a PMS
Considered entities
WET18 Performance indicators
WET19 Goals
WET20 Company organization structure
WET21 Employees
WET22 Stakeholder
WET23 Goal targets (required values) of performance indicators for a certain point in time or period
WET24 Causal relationships
Functional requirements: Data acquisition
WET25 Manual entry of data
WET26 Internal and external data sources
WET27 Appropriate mechanisms for the automatic generation of performance-relevant data from operational systems
WET28 Coupling with external sources
Data storage
WET29 Centralized, uniform data structure
WET30 Storage of performance indicators at different granularity levels
WET31 Set goal targets
WET32 Long-term storage of performance data
WET33 Performance data should be available to other management systems within the company
Data analysis
WET34 Tools for analyzing the cause-effect chains
WET35 The system enables the calculation of trends
WET36 The information system calculates performance gaps between actual and target values
Data communication
WET37 Free navigation in the data structure
WET38 Stakeholder-oriented communication of performance-relevant data
WET39 Representation of the performance gap
Non-functional requirements
WET40 Security concept, high data security
WET41 Based on a standardized technology (e.g. Internet)
WET42 Good response times
WET43 Good expandability and scalability
WET44 High availability and short recovery time
Other requirements
WET45 Far-reaching independence from the applied performance measurement approach
Source: Translated from Wettstein (2002)

Table 3.6 Requirements for a “Performance Measurement System” according to Hauber
Purposes of performance measurement systems
HAU01 Creation of transparency
HAU02 Decision support
HAU03 Cybernetic model
HAU04 Motivation
Formal requirements
HAU05 Simplicity and operability
HAU06 Comparability
HAU07 Objectivity of performance indicators to influence the results, and not the behavior
HAU08 Reliability to avoid dysfunctional impact
HAU09 Acceptability by affected people
Contextual requirements
HAU10 Relevance, focus on KEY performance indicators, congruency with corporate goals
HAU11 Focus on the future in terms of planning effectiveness (enabling early warning)
HAU12 Multiple dimensions, consideration of not just a single goal but rather a bundle of goals of a department and their performance indicators
HAU13 Multiple levels, consideration of the corporate levels of a company
Source: Translated from Hauber (2002)
Examples include creation of transparency, decision support and motivation among the purposes; simplicity, operability and comparability among the formal requirements; and the focus on key performance indicators and the consideration of multiple levels among the contextual requirements. These listed requirements are not function-specific and can be applied in a generic approach. Generally, performance measurement systems should support individual organizations in their individual environments. This often requires a definition or adjustment of context-related artifacts.
Table 3.7 Requirements for a “Performance Management Method” according to Krause
Overall concept
KRA01 Holistic approach
KRA02 Solution integration
KRA03 Openness
Knowledge base
KRA04 Represent content and structure of existing PMS
KRA05 Provide existing techniques and templates
KRA06 Indicator catalogue
KRA07 Reference models
Procedure model
KRA08 Task-related PMS development
KRA09 Method to capture actual profile and to develop target profile
KRA10 Method for cause and effect net transparency
KRA11 Method for process-oriented indicator definition
KRA12 Indicator data model
KRA13 Creation of PMS triangular structure
KRA14 User participation via reverse flow principle
Software support
KRA15 Knowledge management in terms of experience backup
KRA16 Knowledge use
KRA17 Development of new PMS
KRA18 Alignment of PMS content and structure
KRA19 Analysis about PMS content and structure
KRA20 Representation of the value chain
Source: Translated from Krause (2005)
For example, within his knowledge base cluster, Krause subsumes, among others, the provisioning of techniques and templates, indicator catalogues, and reference models. Such information, which sometimes may simply not be available and has to be generated, is needed in order to populate a general performance framework and make it work for a specific organization or department. By analyzing the four requirement catalogues, a synthesis can be made. Within all approaches the following four main areas can be identified:
1. The organizational structure, design and embedding of a performance management system;
2. Processes required within a PMgS;
3. Goals;
4. Performance indicators.
Each area is described by a number of different requirements, summarized from Klingebiel, Wettstein, Hauber, and Krause (Table 3.8). The first area is concerned with the overall performance management characteristics: the holistic approach, adherence to company strategy, integration with the overall existing management systems, balance of relevant stakeholders, and the existence of templates, techniques and catalogues. The second area is process-related, indicating the importance of and need for well-defined processes.
Table 3.8 Our classification of requirements for a “Performance Management System”
(Condensed requirements, synthesized from the four literature sources: Klingebiel (KLI), Wettstein (WET), Hauber (HAU) and Krause (KRA))

PMS embedding and design
General characteristics: a PMS needs to be
• derived from and aligned with the overall company strategy,
• built on a holistic approach,
• open and flexible for potential changes/enhancements.
Specific PMS components: a PMS needs to include
• reference models,
• an indicator catalogue and data model,
• existing best practices, techniques and templates.
Model to capture and analyze cause and effect relationships: this model needs to allow for
• identifying measures/potential for continuous improvement,
• coordinating and balancing the interests of relevant stakeholders,
• reviewing the effectiveness of the PMS itself,
• evaluating the efficiency of the PMS itself (including cost-benefit analysis).
The PMS needs to be coupled with a reward system.
The PMS needs to be integrated with
• the information demand according to the value chain,
• existing management systems,
• existing organizational structure(s) (e.g. decentralized units),
• existing tasks and business processes,
• the existing IT infrastructure.

PMS processes are required for
• capturing company-internal and external data,
• communicating performance data to stakeholders,
• capturing the as-is profile and developing the to-be profile,
• ensuring user participation,
• creating, reviewing and evolving the PMS itself (including goals and performance indicators),
• capturing and analyzing cause and effect relationships.

Goals have to
• be derived from the strategy and reflect core competencies,
• be operationalized by means of indicators,
• have a stakeholder,
• be tied to organizations and employees to ensure responsibility.

Performance indicators
• are operationalized goals,
• focus on the most important aspects/critical success factors,
• measure the current/expected performance of core competencies,
• are low in number and comparable,
• refer to results and not behaviors,
• have good communicability,
• are continuously part of the periodical reporting of the department/company,
• consider measuring financial values (cost, EVA, shareholder value) as well as non-financial values (time, quality),
• consider leading and lagging, internal and external, and impact, execution and premise indicators,
• allow actions to be derived to improve current performance,
• strengthen self-motivation and guide behavior.
The third area describes the requirements for organizational goals; and finally, the fourth area summarizes the requirements for performance indicators, describing their qualities within performance management systems. When allocating the different requirements from the four catalogues, each mapped source has been sorted to the principal item in its category. Table 3.8 gives, on the one hand, a comprehensive overview of the fundamental components (i.e. four areas) that a performance management system should have. On the other hand, it describes the qualities and characteristics of these components by means of the collected requirements.
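The recurring core of this synthesis – goals derived from strategy and operationalized by a small number of indicators with explicit targets and stakeholders – can be sketched as a simple data model (Python; all names are our own illustrative choices, not taken from any of the four catalogues):

from dataclasses import dataclass, field

@dataclass
class Indicator:
    """An operationalized goal (cf. WET03/WET05): a metric plus a target value."""
    name: str
    unit: str
    target: float                 # goal target / required value (cf. WET04)
    actual: float | None = None

    def gap(self) -> float | None:
        """Performance gap between actual and target (cf. WET36)."""
        return None if self.actual is None else self.actual - self.target

@dataclass
class Goal:
    """A goal derived from strategy, owned by a stakeholder (cf. WET06/WET07)."""
    description: str
    stakeholder: str
    responsible_unit: str         # tie to organization and employees (cf. WET08)
    indicators: list = field(default_factory=list)

# A hypothetical research-department goal, operationalized by two indicators:
visibility = Goal(
    description="Increase scientific visibility",
    stakeholder="top management",
    responsible_unit="research department",
    indicators=[
        Indicator("peer-reviewed publications", "count/year", target=12, actual=9),
        Indicator("conference keynotes", "count/year", target=2),
    ],
)
print([i.gap() for i in visibility.indicators])   # [-3, None]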
3.1.5 Characteristics of Research Compared to Development
After having analyzed the requirements of performance management, we will examine specific requirements for R&D in the next step and then for research separately from development as per the focus of this work. First of all we assess the specifics of industrial research reported in the literature in order to derive and formulate requirements specifically for this function. In this context we
consequently discuss the current state-of-the-art in the literature regarding the separation of these two functions. The Research and Development functions within an organization have traditionally been treated as one single function called R&D. For assessment purposes, these functions need to be separated, as will be shown in the following discussion. By assessing a research department’s performance we are better able to understand the relationship between the individual components that are assessed and how they impact the organization as a whole. This requires both knowledge about research activities that directly influence the department’s performance and knowledge about factors that affect its ability to produce significant results. The issue of assessing performance, especially for R&D in industry, is a difficult one, and it becomes even more complicated when moving to the very beginning of the value chain within a company, which often starts in its research department. The need to treat research and development separately was identified and proposed long ago. For example, Brown and Svenson35 analyzed the reasons for the failure of R&D measurement and evaluation systems, identified five reasons, and derived from them six recommendations36 for the design of a workable PMS. Their sixth recommendation is to separate the evaluation of research and development. They argue that “Research and development clearly perform different functions and produce quite different outputs”. The authors define “primary research output” as information or knowledge relevant to the business of a company, and “development output” as enhancements, or completely new products and processes. The authors suggest that research output is an input for development and vice versa. Because of these major differences, the authors strongly recommend tracking different KPIs for research and development. The authors suggest some examples of contrasting measures,37 however they do not mention goals or objectives related to these measures. The authors explicitly emphasize that the functions must be considered separately: “. . .research and technology development need their own measures, which are different from the measures of product development”. The managerial implications of the Leifer and Triskari study suggest that both units should not be managed or controlled in the same way. The differences stem from their contextual, organizational and communication characteristics. The results suggest different orientations inherent in performing research versus development activities. “These results suggest that . . . research and development units should not be lumped together either in research or in the management literature”.38
35 Brown and Svenson (1998), pp. 30–35.
36 The following six recommendations have been made on how to design a PMgS: (1) focus on external vs. internal measurement; (2) focus on measuring outcomes and outputs, not behavior; (3) measure only valuable accomplishments/outputs; (4) make the measurement system simple; (5) make the measurement system objective; and (6) separate R&D evaluation.
37 The term “contrasting” refers to the different functions for which the measures are relevant.
38 Leifer and Triskari (1987), p. 76.
Surprisingly, this suggestion of separation has only recently been followed by researchers. Loch and Tapper (2002), Karlsson et al. (2004), Mettänen (2005), and Chiesa and Frattini (2007) have all investigated the differences between the departments, or in particular the research function, within an industrial environment. Karlsson et al.39 attempt to define the areas of research and development to demonstrate why their performance should not be jointly measured. They observe that research in industrial practice is most often separated from development organizationally. The authors cite empirical studies by Seiler (1965) and Mansfield et al. (1971), noting that both show identical results: 58% of the companies studied had separated research from development. Mansfield’s hypothesis was that the size of expenditure was the major explanation for the cases where research is not separated from development: for example, when the share of either research or development is smaller than 26% or larger than 76%, companies tend not to separate the two.40 With their empirical investigation, Chiesa and Frattini stress the need to differentiate performance measurement systems, when designing them, according to the type of measured activity. The authors, however, do not investigate the concrete characteristics, in terms of contextual goals and performance indicators, of both types of units. The individual assessment of research departments was, for a long time, regarded as problematic, since creativity is seen as a major part of the work of researchers.41 Accordingly, KPIs assessing research activities are a highly controversial topic of discussion.42 To better understand the particularities of industrial research, the following extracts the characteristics of industrial research departments from the literature. The first characteristic identified is the nature of the activity realized in research. Leifer and Triskari43 analyzed the main differences between research and development and derived the main disparities from the tasks and activities of research and development units.44 The authors note that research tasks consist of technology expansion and are primarily concerned with expanding scientific knowledge and assessing its feasibility. In this sense, technology expansion places an emphasis on the inventive process, that is, the expansion of a technology base. Development tasks, on the other hand, involve the application of the technology base to operational requirements with the intent of bringing a new (or modified) product into existence. Research tasks are aimed at the generation or expansion of knowledge, whereas development tasks have as their objective the physical output of new products or processes.45 Similar to Leifer and Triskari’s approach, Karlsson et al. suggest the activities of both functions to be the artifacts to examine:
39 Karlsson et al. (2004), pp. 179–186.
40 Karlsson et al. (2004), p. 180.
41 Refer to Schainblatt (1982), p. 10; Brown and Svenson (1998), pp. 30–35.
42 The examination of KPIs within industrial research (only) will be addressed in Sect. 3.3.3.
“It is our belief that research activities, according to these variables,46 are very different from development activities, making it hard to measure R&D as a unified entity”.47 Another very important and defining feature of the research function is the time lag between when outputs are produced in research and the outcome, which represents the collective efforts of many different units of a company at the end of the value chain. This feature, in our opinion, applies to both the research and the development function. Loch and Tapper48 state: “. . .success can be assessed only after long delays, or it accrues to other units of the organization”. A very generic and comprehensive illustration of the performance measurement process can be found in Brown and Svenson.49 They describe a model for R&D (see Fig. 3.2) in which they introduce the R&D laboratory as a system within the macrosystem of the entire organization.
Fig. 3.2 The R&D laboratory as a system (Source: Brown and Svenson 1998). The figure depicts five elements in sequence: (1) inputs (people, ideas, equipment, facilities, funds, information, specific requests); (2) the processing system, i.e. the R&D lab with its activities of researching, developing, testing and reporting results; (3) outputs (patents, products, processes, publications, facts/knowledge); (4) the receiving system (marketing, business planning, manufacturing, engineering, operations); and (5) outcomes (cost reduction, sales improvement, product improvements, capital avoidance). Three loops are attached: (A) in-process measurement and feedback, (B) output measurement and feedback, and (C) outcome measurement and feedback.
The model consists of two components: the processing system and the receiving system. The R&D lab receives inputs and transforms them into outputs, which are taken up by the receiving system (consisting of other units of the same organization) that in turn produces outcomes. The actions of transforming are called activities. We refer to inputs, activities, outputs and outcomes as measured artifact types since they represent the types of things that are measured. The distinction between output and outcome is the important aspect of this model.
46 The authors refer to four factors: time, originality, organization and knowledge depth.
47 Karlsson et al. (2004), p. 180.
48 Loch and Tapper (2002), p. 185.
49 Brown and Svenson (1998), p. 31.
In their model they also introduce measurement and feedback loops related to activities, outputs and outcomes. This illustration (Fig. 3.2) figuratively shows the time lag between A (in-process measurement), B (output measurement), and C (outcome measurement). As Brown and Svenson did not apply their model to a specific industry sector, they could not give any indication as to the timing of the feedback loops.
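To make the artifact types and the three feedback loops concrete, the lab-as-a-system pipeline can be sketched as follows (Python; a minimal illustration of Brown and Svenson’s systems view, with all names our own):

from dataclasses import dataclass, field

@dataclass
class RnDLab:
    """The processing system: transforms inputs into outputs via activities."""
    inputs: list = field(default_factory=lambda: ["people", "ideas", "funds"])
    activities: list = field(default_factory=lambda: ["researching", "testing"])
    outputs: list = field(default_factory=list)      # patents, publications, ...

@dataclass
class ReceivingSystem:
    """Other units (marketing, manufacturing, ...) that turn outputs into outcomes."""
    outcomes: list = field(default_factory=list)     # cost reduction, sales, ...

def measurement_points(lab: RnDLab, receiver: ReceivingSystem) -> dict:
    """The three feedback loops of Fig. 3.2: what can be measured when.
    Loop A is available while work is ongoing, loop B once outputs exist,
    and loop C only after the receiving system has produced outcomes --
    which is where the time lag enters."""
    return {
        "A (in-process)": lab.activities,
        "B (output)": lab.outputs,
        "C (outcome)": receiver.outcomes,
    }

lab = RnDLab()
lab.outputs.append("patent application")
receiver = ReceivingSystem()          # outcomes still empty: the time lag
print(measurement_points(lab, receiver))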
The next characteristic, which has unique legitimacy for the research function, is the fact that the outputs of research are not always positive. Hauber50 distinguished between the following six types of outputs of R&D51:
• Output-type 1: “product innovation”
• Output-type 2: “shelve product” (non-launched technology)
• Output-type 3: “support services”
• Output-type 4: “technical know-how”
• Output-type 5: “competencies/expertise”
• Output-type 6: “know-how”
Research that attempts to find solutions for a particular problem sometimes fails for unavoidable reasons. This can be called a negative research output: something that should have worked does not. According to Prechelt,52 due to the current publication climate such negative results are usually camouflaged as positive results by non-evaluating or mis-evaluating the research or by redefining the problem to fit the solution. He indicates that such publication behavior hampers progress by suppressing valuable insights, producing spurious understanding and misleading further research efforts. Positive as well as negative outputs need to be accepted as equally valid. One prominently discussed characteristic in the literature is the uncertainty factor in research. It is uncertain whether the research outcome will result in economically successful products or services. Feltham and Xie53 highlight in their work the uncontrollability and un-observability and name three factors: first, the actions and strategies implemented are not directly observable, so direct compensation for the input into the firm is not possible; second, the full consequences of managers’ actions are not observable, because the impact of those actions extends beyond the manager’s subunit and tenure; and third, uncontrollable events influence the consequences that are observed. Loch and Tapper54 state the same facts as Feltham and Xie: “. . .effort levels may not be observable, project success is uncertain, influenced by uncontrollable factors. . .”
50 Hauber (2002), p. 207.
51 Note that not all R&D output types are also applicable to research. In research, instead of types 1 and 2, outputs could be concepts or feasibility studies.
52 Prechelt (1997).
53 Feltham and Xie (1994), pp. 429–453.
54 Loch and Tapper (2002), p. 185.
A slightly different aspect of uncertainty (not the results but rather the environment) was analyzed by Leifer and Triskari.55 They operationalized uncertainty using the environmental uncertainty component measures of complexity/dynamism and predictability/controllability. Different stakeholders and different communication methods are the next characteristic of the industrial research function identified in the literature. Beck and Völker mention that, compared to other functions, research seems to interact with different stakeholders.56 Depending on the type of projects,57 research departments might, unlike other units, interact with external parties like academia, industrial partners, political bodies and media. Also notable is the interaction with internal units, foremost with the development department, as well as with the communication and marketing departments and top management. Heterogeneous stakeholder groups require special communication of research topics.58 Leifer and Triskari report surprising findings: development units coordinate more with other organizational units than research units do, and members of development units process more information to and from people outside their own unit than do members of research units.59 This can presumably only be explained by the military environment and timeframe of the study. It should be mentioned that contemporary research suggests that the organization and execution of research projects is influenced by new trends such as open innovation. The next research characteristic is originality. It is not meant to describe the nature of the topics and problems dealt with, as by definition research tackles new and unsolved problems. Rather, Karlsson et al.60 use “originality” to illustrate the differences in terms of continuity/discontinuity. They cite Stahl and Steger61 concerning the fact that research often makes discontinuous “jumps” in knowledge, which results in solutions that can be called innovations. “The innovations are not aimed at a specific product, but merely display the possibilities within a technology area.”62 Development, on the other hand, is a more continuous evolution of existing ideas. The next characteristic identified is knowledge depth. Karlsson et al.63 state that the knowledge regarding the specific area of interest has to be deeper in the case of research.
55 Leifer and Triskari (1987), p. 73; note especially their second hypothesis. On both these measures, as well as on an overall measure of environmental uncertainty, there was no statistical difference between the research and development units, meaning that both groups perceived the environment similarly. The authors, however, did not examine the uncertainty of the output artifacts of each unit.
56 Beck and Völker (2009), pp. 28–35; Amelingmeyer (2005), p. 353.
57 For example, publicly funded projects, depending on their form, can include different disciplines like academia, industrial companies, standard bodies, etc., and can comprise more than 20 different organizations.
58 Beck and Völker (2009), p. 35.
59 Note that the sixth hypothesis is not commented on here as it was not relevant to our research.
General and specific competences are distinguished by the knowledge depth aspect. The need for specialists, with highly specialized knowledge, is greater in research, and the need for generalists, with cross-functional understanding, is greater in development. Structure and processes have been identified as a further characteristic of research. Karlsson et al. examine how the types of activities in research and development are organized. They conclude that research projects are often conducted in semi-controlled chaos in a uni-disciplinary environment. Development, on the other hand, is often conducted in project form and in a multi-disciplinary environment. Leifer and Triskari examined, in their third hypothesis, whether inter-unit dependence for development units is higher than for research units.64 The authors found that inter-unit dependence in research units was significantly greater than in development units. Furthermore, they found that research units were significantly more centralized than development units and that there was partial support for the hypothesis that development units are more loosely structured. The authors do acknowledge, however, that the military setting of the research is of fundamental importance. Chiesa and Frattini analyzed and compared the structure of the PMS in research and development departments. Their empirical investigation shows that the design of the structure of a performance measurement system is significantly influenced by the type of activity it is applied to. They found that in research activities, typical control objects are single researchers, research projects and the scientific or technical department. Conversely, development departments do not apply individual and department measures, but tend to apply teams and projects as controlled objects. Furthermore, the authors stress that research is generally structured according to an input-oriented approach favoring high specialization in knowledge production, whereas development is organized mainly on the basis of output-oriented solutions. The authors conclude that in terms of structure the departmental dimension is more relevant for research, while the team dimension is more important for development. In their investigation, the authors notice a widespread use of matrix structures that reconcile the advantages of input- and output-oriented structures; therefore the project dimension is critical in both the research and the development function. The above-mentioned characteristics, especially the time lag and the output vagueness of research, mean that the research function has to be seen as an investment without tangible results. A key feature of the inventive process is that the result will not be known ahead of the output. It is often difficult to define, describe, and specify the artifact which is expected. In a commercial environment “the view of what the expected output is from a firm’s research activities varies from company to company, as well as within a company over a period of time, much due to market developments and changing customer demands.”65
64 Inter-unit dependence was operationalized with questions pertaining to the extent that work units depend on others to accomplish work objectives and the extent to which others depend on them.
65 Karlsson et al. (2004), p. 180.
Table 3.9 Main characteristics of industrial research and requirements
(Function: "R and D" – the requirement applies to Research and Development as a single entity; "R only" – the requirement applies to the research department)

Characteristic: Task types (routine vs. exceptions)
Description: Generally the tasks of researchers are different from development because the researcher is more oriented toward theoretical knowledge, whereas the developer is repeating validated methods in order to engineer the product.
Requirement: The PMgS needs to consider the nature of research tasks (more non-routine, more exceptions, more complexity).
Function: R only
Source: Leifer and Triskari (1987)

Characteristic: Time-lag (output vs. outcome)
Description: In research there is in general a significant time lag between the output of research and the outcome resulting from this output. The steps turning the output into the outcome are generally not under the control of research.
Requirement: The PMgS has to reflect the fact that there is a significant time lag between the output of research and the outcome resulting from this output.
Function: R only
Source: Hauber (2002), Prechelt (1997)

Characteristic: Originality
Description: Generally, research has the potential to generate disruptive "jumps" in knowledge resulting in innovations, whereas development is more a continuous evolution of existing ideas.
Requirement: The PMgS has to cope with the irregular generation of outputs.
Function: R and D
Source: Brown and Svenson (1998), Karlsson et al. (2004), Chiesa and Frattini (2007)

Characteristic: Validity of positive as well as negative outputs
Description: In general the research outputs can be positive or negative, and in both cases they represent valid research results.
Requirement: The PMgS has to consider both results without penalizing the negative findings.
Function: R only
Source: Leifer and Triskari (1987), Karlsson et al. (2004)

Characteristic: Output uncertainty (nature of output – complexity/dynamism/predictability/controllability/secrecy)
Description: Generally the research outcome is uncertain to result in economically successful products or services. The risks are more commercial than technical.
Requirement: The PMgS has to reflect the fact that, due to the uncertain conversion of research outputs into successful products, the outcome alone cannot represent the basis for conducting the performance assessment.
Function: R only

Characteristic: Investment per se
Description: From the time-lag characteristic and the difficulty of precisely describing the research output it follows that there needs to be a clear agreement that the research function is seen as an investment without tangible results.
Requirement: The PMgS has to consider the timeframes of research activities; initially the performance can only be assessed upon activities or outputs, not outcomes.
Function: R only
Source: Karlsson et al. (2004)

Characteristic: Definition of the output
Description: It is generally difficult to define in advance, and therefore to measure, the "expected output" due to its uniqueness and lack of repeatability.
Requirement: The PMgS has to identify benchmarking values for the output.
Function: R only
Source: Chiesa and Frattini (2007)

Characteristic: Quantifiability of the output
Description: The results of industrial research departments have no immediate commercial value; it is generally difficult to directly quantify the monetary value of research output.
Requirement: The PMgS has to identify reasonable indicators for the monetary value of research output.
Function: R only
Source: Karlsson et al. (2004)

Characteristic: Knowledge depth
Description: The profile of a typical researcher usually requires a specific competence (specialized knowledge) within an area, which takes a long time to acquire, whereas knowledge from a wide range of areas is required for the profile of a developer (general competence).
Requirement: The PMgS has to consider the different skill sets of people working within research.
Function: R only
Source: Beck and Völker (2009)

Characteristic: Different stakeholders and different communication methods
Description: The research department, depending on the type of research projects it conducts, has a variety of stakeholders: e.g. academia, industrial partners, political bodies, media and internal units such as the development department, communication and marketing departments, and top management being the most important.
Requirement: The PMgS has to consider the interactions with different stakeholders.
Function: R only
Source: Loch and Tapper (2002), Ojanen and Vuola (2006)
from research activities is a reflection on the definition of research at the company. The authors observe that if there is no clear definition at hand, there will be differences in opinion regarding what the expected output is, thus making the measurement very difficult. Table 3.9 summarizes the main characteristics of research. Every characteristic is then translated into a specific requirement for a PMgS; furthermore, an indication is provided whether the requirement applies to the function R only or to R&D together. In this section we have shown general performance management approaches, summarized within the four schools of thought suggested by Krause, and have elaborated a catalogue of fundamental requirements for a performance management system. The examination of each aspect of a performance management approach is not possible within our investigation and is beyond the scope of our work. For example, we exclude the investigation into the technical integration and process of a performance management concept. To stress the context-dependent aspects, the subsequent sections focus on the critical components of performance management as identified in Sect. 2.3 and analyze the nature of organizational goals in more detail, and thereafter the performance indicators to assess the achievement of the goals.
3.2 Organizational Goals
3.2.1 Generic Organizational Goals
In Sect. 2.2.2 we defined the term “goal” as an envisaged and intended future action or state, an anticipated vision of the impact of our actions66; we also clarified the role of goal setting in problem solving processes. Furthermore, we found evidence that goal creation should be seen as a process across a period of time and not as an act that occurs at a specific point in time.67 In this section we will specifically analyze the literature on the nature of generic and non-sector-specific organizational goals. It is difficult, on a generic level, to find evidence on concrete goals due to the uniqueness of the operational environment and circumstances of each organization. The literature at this level refers to goal setting, principles, systems and dimensions. Hamel (1992) noticed that organizations, due to their complexity, do not have one specific goal, but rather a bundle of interrelated goals. As such he suggested the presence of an organizational goal structure or system.68 All activities of an organization are directed to ultimately realize the future state desired in the goal
66 Schwantag (1951) cited from Hamel (1992), p. 2635; Bidlingmaier (1964) cited from Hamel (1992), p. 2635; Hamel (1992), p. 2635; Nagel (1992), p. 2626; Specht and Beckmann (1996), p. 18; Hauschildt (1997).
67 Specht and Beckmann (1996), p. 18 and p. 125.
68 Hamel (1992), p. 2635.
structure. Furthermore, Hamel underlines that the goal itself is a complex issue that consists of several interactive elements and hence has an internal structure. Therefore he directs his goal system analysis towards an intra-goal structure (cf. Table 3.10) as well as an inter-goal structure (Fig. 3.5). He also mentions the "supra-goal" structure, arguing that it is necessary to examine the relationship between the corporate goal system and the corporate eco-system.

Table 3.10 Intra-goal structure of corporate goals
Corporate goal
– Substantive goal: Product 1, Product 2, Product 3, Product 4, …, Product n
– Formal goal:
  – Product related: quality, specifications, reusability, dimensions
  – Organization related: optional (social); mandatory (profitability, liquidity, legal compliance)
Source: Hamel (1992)
For the analysis of corporate goals, Hamel takes the following aspects into consideration: obligatory and optional elements of a goal system, internal and external relations of a goal system, elements of individual goals and their relationships, and static and dynamic components of a goal system. For our work we will confine our investigation to the comprehension of the goal system and will examine in detail the intra- and inter-goal structures. Each organization has a corporate goal, which reflects the totality of all activities. This corporate goal is, however, regarding its structure and mission, globally articulated and is not appropriate for directly controlling specific actions. It must instead be "broken down" to allow a direct change from an "existing" into a "desired" state through a more precise and concrete definition.69 Goals are then derived regularly over several corporate levels by way of a "goal hierarchy" (Granger 1964) or "means-ends chain" (Heinen 1966). Hamel distinguishes between two types of goals: corporate goals70 and decision goals71, which represent two opposite ends of the goal system, where decision goals are at the lowest level of the cascading chain. First, corporate goals are "substantive"72 or "formal." Substantive goals can be focused on one single "product" or "service," or a "product family" or "range of services." Formal goals can be either product-related goals or organization-related goals.
69 Hamel (1992), p. 2638.
70 "Unternehmensziele", Hamel (1992), p. 2638.
71 "Entscheidungsziele", Hauschildt (1970), p. 551, Hamel (1992), p. 2638, Hauschildt (1997), pp. 269–272.
72 "Sachziel", Kosiol (1966), or "Leistungsziel", Schmidt (1969).
Second, according to Hauschildt73 and Hamel,74 decision goals represent guidelines for economic activity for the smallest unit: the task. They apply at all organizational levels and generally require very specific goal definitions in order to enable a direct transformation of the "existing" reality into the "desired" reality. As every executable action results from a previous decision-making process, the analysis of intra-goal structures can also be focused on decision goals (Table 3.11).

Table 3.11 Elements of the decision goal
Decision goal
– Object
– Specification:
  – Properties: P1, P2, …, Pn
  – Standard: S1, S2, …, Sn
  – Function: F1, F2, …, Fn
Source: Hamel (1992)
Following Hauschildt, decision goals describe a desired state intended for realization that allows the actors to develop a specific behavior. This requires an appropriate goal articulation, which depends on a sufficient number of elements to provoke unambiguous action sequences. Both Hamel and Hauschildt list five dimensions which represent the elements of the decision goal. While Hamel discusses a fifth dimension, "goal complexity," Hauschildt instead places a different dimension, "time reference," and discusses "goal complexity" on a general goal-system level (Table 3.12). We will follow Hauschildt's approach and discuss the issue of goal complexity on a more detailed level, as it represents an important aspect for organizational goal setting. Corporate actions are generally determined by more than one goal. This phenomenon of "multiple" goals is not problematic as long as the individual goals are complementary, i.e. as long as the positive fulfillment of one goal also positively fulfills the other. Conflicting goals pose a prioritization problem: how to proceed if an alternative is presented by which a sub-goal can only be accomplished to the detriment of others? In this case priorities are to be set within a goal system; Hauschildt75 defines a goal system as the purposive order of several goals, including conflicting goals (a minimal sketch of such a goal system follows below).
• Each complementary goal can be decomposed into a series of subordinate goals via a multiple-level tree structure. Each subordinate goal contributes via a "means-end" relationship to the achievement of superior goals.
• Conflicting goals require a different order form. Two concepts can be used to systematically handle conflicting goals in theory and practice: the benefit and the constraint concepts.76
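To make the notion of a goal system more concrete, the following minimal sketch (our own Python illustration, not taken from the cited literature; all goal names and attainment values are hypothetical) models a means-end goal hierarchy in which superior-goal attainment rolls up from subordinate goals:

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """A node in a means-end goal hierarchy ("goal hierarchy" / "means-ends chain")."""
    name: str
    attainment: float = 0.0            # degree of attainment in [0, 1]; set for leaf goals
    subgoals: list["Goal"] = field(default_factory=list)

    def rolled_up_attainment(self) -> float:
        """Attainment of a superior goal as the mean attainment of its subordinate goals."""
        if not self.subgoals:
            return self.attainment
        return sum(g.rolled_up_attainment() for g in self.subgoals) / len(self.subgoals)

# Hypothetical cascade: a corporate goal broken down over two levels.
corporate = Goal("Increase profitability", subgoals=[
    Goal("Division: reduce unit cost", subgoals=[
        Goal("Decision goal: cut material waste", attainment=0.8),
        Goal("Decision goal: automate one test step", attainment=0.5),
    ]),
    Goal("Division: grow license revenue", attainment=0.6),
])

print(f"{corporate.name}: {corporate.rolled_up_attainment():.2f}")  # prints 0.62
```

The unweighted mean covers only the simplest, fully complementary case; a real goal system would add weights, or the benefit and constraint concepts mentioned above, to handle conflicting goals.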
73 Hauschildt (1997), p. 269.
74 Hamel (1992), p. 2638.
75 Hauschildt (1997), p. 271.
76 Thommen and Achleitner (2003), p. 112. See also details for value benefit analysis: Weber (1992), pp. 1435–1448; Hauschildt (1997), p. 271.
Table 3.12 Dimensions of the decision goal according to Hauschildt
Goal object: Identifies a facet (not the entirety) of the reality, which is competently differentiated by the decision-maker. A goal object is represented by the decision field, or rather the problem for which the goal is defined (a)
Goal specifications: Goal objects feature individual specifications, each of which has to be separately defined. Other literature provides related terms: goal contents, goal variables, goal attributes, decision criteria (b)
Goal norm: The decision maker chooses the alternatives to be realized according to their contribution to goal achievement. This contribution has to be quantifiable. What is needed is a goal norm or standard, i.e. a rule that specifies how to dimension and quantify the goal specification
Goal function: Each alternative which is to be considered in the decision-making process provides a certain goal attainment contribution. The rule by which the decision maker specifies which goal achievement contribution he aims for is called the goal function
Time reference: Goals always reach into the future. It is necessary to determine the end date or timeframe in which the goal has to be achieved. Within this dimension, goals are to be defined as shorter-term or longer-term goals
Source: Hauschildt (1997)
(a) Hauschildt observes that in classical business economics the goal object applies to the entire company. He contradicts the implicit assumption that goals defined for the entire company apply to the individual decision fields and states that for every problem a specific goal has to be developed. However, problem-specific goals (decision goals) must remain compatible with the organizational goal
(b) For example, the technical goal variables are often described as content goals and refer on the one hand to the constructiveness of a product or service (construction goal), and on the other hand to the utilization aspects (utilization goal). Economic variables are the target profit and turnover from a product or a service
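Read as a schema, Hauschildt's five dimensions amount to a record type for a well-formed decision goal. The sketch below is our own Python illustration (all field values and the goal-function encoding are hypothetical simplifications, not part of Hauschildt's work):

```python
from dataclasses import dataclass
from datetime import date
from typing import Callable

@dataclass
class DecisionGoal:
    """A decision goal along Hauschildt's five dimensions (cf. Table 3.12)."""
    goal_object: str                          # facet of reality / decision field
    specifications: list[str]                 # goal contents, variables, attributes
    norm: str                                 # rule for dimensioning and quantifying
    goal_function: Callable[[float], float]   # scores an alternative's contribution
    due: date                                 # time reference: end date or timeframe

# Hypothetical example: a research group's publication goal for one year.
goal = DecisionGoal(
    goal_object="research group output",
    specifications=["peer-reviewed publications"],
    norm="count of accepted papers per calendar year",
    goal_function=lambda papers: min(papers / 6.0, 1.0),  # target of 6 papers, capped at 1
    due=date(2011, 12, 31),
)

print(round(goal.goal_function(4), 2))  # an alternative yielding 4 papers scores 0.67
```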
Hauschildt suggests that the goal ordering axiom is goal clarity: goals are to be clearly and operationally defined in marked dimensions. He postulates the norm of goal clarity as imperative for successful management. Figure 3.3 shows the range from two goals being mutually (totally) exclusive (goal opposition) to two goals being fully complementary, so that the achievement of one goal inevitably leads to the achievement of the second goal (full goal complementarity). Less extreme cases are goal competition, where the attainment of the first goal negatively impacts the attainment of the second one; goal neutrality, where goals are independent; and partial goal complementarity, where the attainment of the first goal raises the likelihood of the attainment of the second goal (Fig. 3.4). The complexity of decision goals is due to the multiple properties, norms and functions of the goal characteristics (cf. Table 3.11). The number of characteristics is considered by Hamel to define "goal complexity". Hamel also points out that goal complexity could be a function of the formulation of goal characteristics and their conceptual content, and is therefore not absolute. In terms of inter-goal structure, Hamel in Fig. 3.5 specifies the relationship between the individual goal levels (corporate goal, division goal, decision goal) as a self-contained unit, in which on the corporate level content and formal goal are integrated into a decision goal.
Fig. 3.3 Degrees of compatibility between two goals (Source: Thommen and Achleitner 2003). The figure shows a scale of complementarity relationships:
– Goal opposition (−−, very negative): achieving one goal means complete failure in achieving the other goal.
– Goal competition (−, negative): achieving one goal has a negative impact on achieving the other goal.
– Goal neutrality (0, none): achieving one goal has no impact on achieving the other goal.
– Partial goal complementarity (+, positive): achieving one goal has a positive impact on achieving the other goal.
– Full goal complementarity (++, very positive): achieving one goal necessitates the achievement of the other goal.
The figure additionally distinguishes instrumental and preference relations between goals.
Fig. 3.4 Analysis need in building a goal system (Source: Heinen 1991, p. 16). The goal system is analyzed along goal dimensions (content, extent, time) and goal relations (competition, neutrality, complementarity), distinguishing main and secondary goals as well as superior and subordinate goals.
The characteristics of decision goals that foster the best possible performance have received much attention from organizational psychologists. It is not our intention to examine this in detail, although the effects on goal-setting itself do merit attention. Organizational psychology looks at the goals’ characteristics from a slightly different angle. In his goal-setting theory77 in industrial-organizational
77 As mentioned already in Sect. 2.2, other disciplines such as Organizational and Occupational Psychology or Organizational Behavior and Human Performance examined individuals' performance in an organizational context and therefore had to deal particularly with goal setting. Goal-setting theory was approached by Edwin Locke in the mid-1960s, who developed this theory upon Aristotle's speculation that purpose can cause action. Locke researched the impact that goals have on individual task performance, and he continued researching goal setting for 30 years.
Fig. 3.5 Inter-goal structure between corporate and decision goals (Source: Hamel 1992). The corporate goal comprises the substantive goal (Product 1 … Product n) and the formal goal (product related: quality, specifications, reusability, dimensions; organization related: optional – social; mandatory – profitability, liquidity, legal compliance). It cascades via division goals down to the decision goal with its object and specification (properties P1 … Pn, standards S1 … Sn, functions F1 … Fn).
psychology, Locke78 analyzes the characteristics of goals and their effects on task performance. According to Locke, hard and specific goals can lead to higher performance.79 He argues that 'goal setting is most likely to improve task performance when the goals are specific and sufficiently challenging, the subjects have sufficient ability . . . and assigned goals are accepted by the individual.'80 Locke defines difficulty of goals as the degree of proficiency or level of performance sought, and specificity (or clarity) as the degree of quantitative precision with which the aim is specified. Three aspects of Locke's statement are outlined in the literature81:
• To achieve performance improvement, the content of each goal has to be clear so that people can carry out appropriate actions to attain the goal. If the content is not clear and it is misunderstood, there might be confusion and frustration about what needs to be done.
78 Locke (1968), p. 161.
79 This hypothesis was also proven by other authors: Champion and Lord (1982), p. 267; Yukl and Latham (1978); Locke and Latham (1990).
80 Locke et al. (1981), p. 125.
81 Tankoonsombut (1998), p. 12.
• Goals should not be too easy to attain, but at the same time, they must be attainable. Setting easy goals may have no effect because people do not have to exert increased effort to reach the goals. Locke82 examines 12 studies on the relationship between goals and task performance. The outcome from the studies reveals that hard goals produce a higher level of performance than easy goals do. He claims, "The results are unequivocal: the harder the goal the higher the level of performance." Endorsement is provided by other authors: 'Of course, goals should not be so easily attainable that employees see them as a joke or an insult'; 'Goals must be challenging but within reach'.83
• Goals that are not accepted may have negative impacts on task performance. The individual rejects the goal and stops trying for that goal, such as when he is assigned a goal which is perceived as impossible.84 According to Tankoonsombut,85 employee involvement in the goal-setting process may lead to greater goal acceptance or facilitate greater goal commitment than if goals had been unilaterally assigned.
The importance of goals and feedback in an employee's work has also been stated by Drucker. The effects of feedback on organizational performance have been widely studied,86 and the positive effects of feedback on task performance have been widely recognized as essential for learning, motivation, satisfaction and performance. Six categories of feedback sources are suggested87: (1) formal rewards, (2) informal assignments, (3) supervisors, (4) co-workers, (5) comparisons of work to that of others (benchmarking) and (6) information one receives from the task itself (task feedback). Tankoonsombut88 arranges these in two groups from the employees' perspective: the first group includes feedback sources that are intrinsic to, under the control of, or psychologically closer to the individual; the second group includes feedback sources that are external to, beyond the control of, or psychologically further away from the individual. Feedback from self-comparison of one's work to that of others (benchmarking) and from the task itself falls into the first category, while feedback from the organization (i.e. formal rewards and informal assignments), supervisors and co-workers falls into the second category. It is more difficult to find evidence in the literature reporting on goal dimensions of a PMS than on concrete articulated organizational goals. Similarly, it is difficult with regard to classifications or any other organization or systemization attempts
82 Locke (1968), p. 162.
83 Griffin (1990), p. 170.
84 Locke (1968), p. 162.
85 Tankoonsombut (1998), p. 13.
86 Locke (1968), Greller and Herold (1975), Ilgen et al. (1979), Earley et al. (1990), Busby (1997).
87 Greller (1980), cited from Tankoonsombut (1998), p. 11.
88 Tankoonsombut (1998), p. 31.
other than the dimension of performance measures. For example, Hahn suggests three categories of goals89:
• Substantive goals (product/service program with specific quality goals);
• Value goals/financial goals (increase of economic value of the organization, dividend continuity, liquidity);
• Social goals/human goals (employees, society, environment).
Even in conjunction with performance measurement, only very few articles report about concrete organizational goals and goal-setting processes. These will be discussed later. Nevertheless, experiences with performance measurement interventions and their consequences are reported. Contrary to the literature examined above, the PMS literature reports an absence of consideration of organizational goals and a lack of continuous systematic breakdown of goals (Table 3.13).

Table 3.13 Problem areas in performance management reported in the literature
Ittner (1998): Difficulties with goal cascading to lower levels
Gleich (2001): Poor (or loose) coordination of strategic and operational goals and indicators; alignment between strategy and goals is not sufficiently pronounced
Frigo (1999): The link between goals and indicators is practically nonexistent
Brunner (1999): No formulated (corporate) strategy exists; strategic goals are not quantified (not specific); employees at lower levels, customers or suppliers are not involved in the goal-setting process
Tieke (1999): Goal attainment control is of central importance for 70% of companies; strategic goals are rarely quantified/operationalized
Steinle (2001): Quantification of strategic goals is reported to be problematic
Schneiderman (1999): Goal values are negotiated and not based on stakeholder requirements or the fundamental borders of processes and skills of the organization
Zimmermann (2000): Main problem area – definition of strategic goals
Kueng (2001): In spite of ISO-9000 certification only 36% of companies defined goals and measures for process management
Bourne (2000): Goal commitment of top management was observed to positively influence performance measurement (patriarchally structured family businesses were more successful than business units of big enterprises); additional initiatives of holding companies: changed strategies lead to irrelevant goals and metrics as a negative factor
Töpfer (2002): Problem areas reported (more pronounced in medium-sized businesses in comparison to big companies): coupling of goal attainment with financial incentives, no explicit coupling of strategic goals with the budgeting process
Habermann (2002): Problem areas: two-thirds of surveyed companies had no clearly formulated strategy, 77% had difficulties with the definition of adequate measures, too few strategic measures, too many operative measures, 69% had difficulties with the definition of target values
89 Hahn (1994), p. 62.
These statements indicate that major problems exist on both the tactical and operational levels. Quantification, breakdown and cascading are activities that deal with the formulation of goals. This translates to the coupling of goals with metrics and to the alignment with strategy, and includes aspects of communication. The complete absence of goals is rarely reported; this should not be interpreted to mean that those firms do not pursue organizational goals. The assumption can be made, however, that in this case again the communication aspects are to be improved.
3.2.2 R&D Organizational Goals
Only a few studies describe the concrete goals of R&D departments. Examples that have been analyzed in detail are based in the following industrial sectors: automotive (Centro Ricerche Fiat (CRF), Italy), telecommunication (Telecom Italia Lab (TiLab), Italy), and aerospace (Volvo Aero Corporation, Sweden). These studies do not focus on analyzing organizational goals per se, but rather on the implications for goal setting after organizational changes in managerial procedures and strategies. These studies suggest that organizational goal setting for R&D departments within the context of reorganization requires concomitant analysis of organizational strategy, goals and performance-related measures. The request for more accountability of R&D departments is reported as being the trigger for organizational changes. The shift from cost centers to autonomous business units with the goal of producing cash flows in order to justify expenses is reported for CRF and TiLab90 (Cesaroni et al. 2004). In the case of TiLab, the origin of the organizational changes was telecommunication deregulation. New competition on the domestic telecommunication markets and new opportunities on foreign markets changed the role of R&D departments. As such, the exploitation of R&D activities became a priority.91 The authors report that the goal of TiLab in 2001 was to '. . . create the ability to integrate new realities through the creation of autonomous units, partnership with external subjects and financial participation in new entrepreneurial activities.' Furthermore, the following goals are mentioned:
• Creation of spin-off companies and incubators;
• Development of specialized competencies for the marketing of inventions;
• Development of new ideas, technologies or business opportunities;
• Technological transfer;
• Partnerships or acquisitions;
• Creation of competencies for organizational "Skill Building".

90 Cesaroni et al. (2004), pp. 45–56.
91 Cesaroni et al. (2004), p. 47.
Historically, CRF had been an independent center since its inception in 1976. The reorganization of CRF was a reaction to the automotive industry crisis in 1993, when exploration and exploitation strategies were reinforced. Major restructuring took place in 1998, when CRF responded to the objectives of technology transfer and exploration of long-term research in designing the organization. Within the exploration strategy two major goals were identified:
• Research transfer: 'to develop products, processes and technologies to be effectively transferred to the clients', and
• Participation in publicly-funded research programs: 'to create an international research network to be activated when CRF intends to take part in research programs promoted by the European Union or other governmental authorities'.
The exploitation strategy was dominated by the technology transfer strategy 'to search for "proper" clients' and 'to develop "proper" products', knowledge transfer and the transfer of human resources. All three are strongly interconnected. The most complete form of technology transfer is the transfer of human resources, since the knowledge of the researchers is transferred as well.92
For Volvo Aero Corporation, the authors reported that the company operated under great pressure. From being a functionally-oriented company, dependent to a large extent on nationally-funded military projects, it had evolved into a business-oriented, leading manufacturer in the aerospace market. The authors report that an internal distinction between research and development is made in terms of levels of validation. The validation process contains nine steps: 1–3 basic research, 4–6 demonstration, and 7–9 development (development starts with a decision to take a product to full-scale production).93 The authors observe that in 1996, all research programs at Volvo Aero were structured into product development research and process development research. Product-related properties, such as performance, environment, lifecycle cost, safety and availability, combine to form product development research. On the other hand, increased efficiency of the internal company processes, such as product development, production and product support, is at the core of process development research. The following goals of the technology programs during the 1970s are reported:
• Military. Develop technologies to achieve increased performance of military engines, while at the same time developing competent engineers.
• Commercial. Develop spin-off products from the military competence, such as hydraulics, heaters for cars, trucks, and boats, vehicle components for trucks, etc.94
• Space propulsion. Early investments were made to develop possibilities to compete in this area.
92 Cesaroni et al. (2004), p. 51.
93 Karlsson et al. (2004), p. 182.
94 These were alternative products because Volvo Aero saw a future decrease in military projects.
During the 1980s, priority was given to the development of the product: 'to develop product-related technologies for commercial aircraft engines as well as for military engines'. In 1990, a reorganization into business areas took place: Vehicle Components, Land and Marine Gas Turbines, Space Propulsion, Commercial Engines, and Military Engines. This shows that the research programs became even more business-oriented. The current goals of the research programs are as follows:
• Decrease cost;
• Decrease lead-time;
• Increase quality;
• Increase flexibility;
• Increase performance of the products (present or future);
• Increase efficiency of the company.
The studies presented above analyze organizational goals for different purposes. The first two studies highlight the shift of industrial R&D departments from a cost center situation to acting like a profit center. The third study explores factors which affect a performance measurement system. While the first two studies address R&D departments as a whole, the third study seeks evidence on how to separate the research and the development departments and recommends separating them when measuring performance. When analyzing the goals, the following key terms reflect the broader aims of the goals: spin-offs, incubators, inventions, new ideas, new technologies, new business opportunities, new realities, technological transfer, partnerships, acquisitions, people skills, publicly-funded research, and international networks. A conclusion that can be drawn from these studies is that the individual context in which any given research department operates is so specific as to invalidate the comparison of research goals across organizations on a standardized basis.
3.2.3 Research (Only) Organizational Goals
Companies engage in research for a number of reasons; their objectives may include the development of innovative products or services, investigation of new markets, increased efficiency and/or quality, or the creation of capabilities to learn about and to implement change. Identifying motivations of companies for engaging in research helps to examine forces that drive their research activities, such as competition or opportunities for entering new markets. The output of a research department could be an invention, whereas the outcome from the perspective of a company could be an innovation. Therefore, the primary role of a research department is to help a company analyze new markets or seek new technologies, thereby feeding the company with inventions that lead to innovations. Understanding the exact role of a research department within the company helps it formulate its organizational goals. The formulation of organizational goals for a research department is a very difficult task due to the nature of research. ‘To gain increased
understanding of a phenomenon or to search for new elements of technology is the goal of a research department' (Karlsson et al. 2004, p. 182; Asimov 1962). 'It further entails greater uncertainty of outcome and higher technological risks' (Karlsson et al. 2004; Mansfield et al. 1971). The authors also observe that the customers for these kinds of results are often internal to the company. A very useful study that contributed greatly to the understanding of the distinction between research and development, and especially the elaboration of organizational goals of these units, is by Mesthene and Clintock (1962). Their document "The Nature of Research Goals: Some Necessary Definitions" was prepared for the United States Air Force Project, RAND.95 Mesthene and Clintock examined the nature of three types of scientific research: academic research, mission-oriented research, and technological development. These were examined in two types of organization: academic organizations (universities and university-like institutions) and mission-oriented organizations (industries, the military services and other government agencies such as NASA,96 AEC97 and NIH98). They observed that the objective of industrial research was not merely to add to knowledge, but to contribute to the production of a useful item.99 The objective of academic research was to contribute to the growth of knowledge about the world. This difference, in their opinion, implies an important role for research management in a mission-oriented organization. Accordingly, university administrators have no significant research management function beyond that which is implicit in their commitment to support research in the major intellectual disciplines. All research planning and guidance is done by the research scientists themselves. "But the criterion for evaluation and funding research in a mission-oriented organization is more than simply the availability of a competent and balanced research staff. Research must be evaluated and funded instead in the light of its contribution to the extra-scientific goals of the organization."100 The authors specify different kinds of decisions to be made by management:
• What lines of research to encourage;
• What projects to discontinue;
• What kinds of scientists to hire;
• How to exploit the results of research for organizational ends;
• How to modify organizational objectives in ways calculated to take advantage of research findings.
From this listing the authors conclude that in mission-oriented research, management becomes a major and necessary function.
95 RAND – the name is derived from a contraction of the term research and development.
96 NASA stands for National Aeronautics and Space Administration, USA.
97 AEC, abbreviation for Atomic Energy Commission, USA.
98 NIH is the abbreviation of National Institutes of Health, USA.
99 As an example, the authors use the production of a refrigerator or a bomber, noting that the objective for which research is supported by an army or a firm is the production of a useful item.
100 Mesthene and Clintock (1962), p. 2.
The authors state that mission-oriented organizations find it easier to formulate goals for development units than for research programs. The reason, in their opinion, lies in a difference between scientific research and technological development. First of all, they state that although both research programs and development programs produce knowledge, each produces a different kind of knowledge. Research, in their view, seeks to find out how nature operates and whether its principles can be translated into laboratory models. Development, on the other hand, aims at a specific useful device. It seeks information about how laboratory models and other existing mechanisms can be modified or combined into pieces of equipment that can accomplish a particular practical objective. Mesthene and Clintock distinguish three different kinds of goals:
1. End items (these are the goals of development);
2. Purely intellectual goals;
3. Useful research goals.
The authors use the term "end item" to express a specific hardware101 item with which development ends. "Purely intellectual goals" are at the other end of the R&D spectrum and are wholly within the context of a scientific inquiry. 'Contrary to what is sometimes loosely believed, the most fundamental research is neither goalless, nor directed merely at very vague, general goals. All science, no matter how basic, seeks specific answers to specific questions.'102 The authors state that this kind of research is often motivated simply by curiosity; though very specific, the goals of the research may have no known or intended reference to practical use. The last category represents "useful research goals".103 These are located between end items and intellectual goals. The term refers to the goals of research undertaken in the expectation of a practical payoff – a use, or "application" – beyond itself. The notion of "use" differentiates useful research goals from intellectual goals. The authors define useful research goals as '. . . intellectual goals with the addition of an extra-scientific or extra-intellectual use-relevance'.104 Although "use" distinguishes useful research goals from academic research, "use" never takes the form of an end item as in development. The authors, examining the use-dimension, observe that the research process itself provides no basis for the distinction, which is one reason why it is so difficult to distinguish between
101 In our opinion, it is also possible to apply this term in the software industry with a slight modification. "Hardware" implies a degree of product maturity. The authors give examples of hardware: a bomber, a submarine, an engine, a drug, etc. In the case of software providers, we think executable software could also be valid as an end item.
102 Mesthene and Clintock (1962), p. 2.
103 The authors admit that the term chosen is awkward, but it must suffice for want of a better one.
104 Note that this definition and distinction is made from the manager's perspective. The authors note that scientists draw a different distinction, one made within intellectual goals on the basis of their apparent usefulness.
intellectual goals and useful research goals. Thus they analyze the product of research activities, stating that this (product) takes the form of true general propositions, which perform a dual function. The propositions both describe some natural operation or behavior, and predict some future event, asserting that certain things will happen in the future if certain conditions are met. This predictive aspect is built into scientific (research) propositions. The authors note that the term "built into" is important, meaning logically contained in; logically necessary. The authors claim that the process of achieving intellectual goals is identical with the process of achieving useful research goals. They define the use-dimension of useful research goals as 'something added to its intellectual component'.105 Furthermore, they state that any attempt to carefully formulate a useful research goal must start with the relevant intellectual goal and then go on to incorporate the use-relevance dimension. They pose two further questions: (1) at which point in the research organization is use-relevance introduced into research goals, and (2) what is the source of this additional element? To answer the second question we reiterate that, according to the authors, the research process is not a source of use-relevance, but rather the extra-scientific purpose of men. They state that the raw materials from which useful research goals are derived are questions that arise from attempts to solve problems that are not primarily intellectual or purely scientific in nature. Regarding the first question, the authors argue that the use-relevance (based on the answer to the second question) is not always pertinent to the formulation of research goals. They outline three cases when research goals can acquire use-relevance:
1. Before research starts: in the case of research undertaken with the specific expectation of a useful payoff. Useful research goals contain a use-dimension before the research is started.
2. During the course of the research: in the case when research is initiated in search of purely intellectual goals but then takes some new direction before the original goals are achieved. The authors distinguish two reasons for that occurrence:
(a) Revealing previously unsuspected possibilities that result in a replacement of the original intellectual goals with a new set of intellectual goals; and
(b) Consciously changing orientation because intellectual goals are converted into useful research goals.
105 The authors claim that all research, whether academic or mission-oriented, aims at least at intellectual goals. "This is necessary so long as it is to remain respectable research, because it is the intellectual ends of an inquiry that provides its rationale" (Mesthene and Clintock 1962, p. 13).
3. After the completion of the research: the use-possibilities of research findings may either not appear at all, or appear only after the research is completed. The practical implication of the findings appears only after the intellectual goals have been achieved.
The authors summarize that in the first case the goals remain only intellectual. In the other cases, the use-relevance serves to convert intellectual goals into useful research goals. They observe that if the useful research goals precede the initiation of a project, as in the classic case of mission-oriented research, they can be said to have stimulated it. If, in the other case, the use-relevance is added at some time during the course of a project, the research may take a different direction. In both cases non-scientific, practical considerations stimulate and guide research.
Table 3.14 Summary of different goal dimensions addressed in the literature
Goals – Goal object:
– Product-specific: Karlsson et al. (2004)
– Organizational: Asimov (1962), Mansfield et al. (1971), Heinen (1991), Hahn (1994), Cesaroni et al. (2004), Karlsson et al. (2004)
– Individual: Locke et al. (1981), Locke and Latham (1990)
– Operative (decision goal): Kosiol (1966), Schmidt (1969), Hamel (1992), Nagel (1992), Hahn (1994), Hauschildt (1997), Habermann (2002)
– Economic: Hahn (1994)
Characteristic:
– Compatibility: Mesthene and Clintock (1962), Thommen and Achleitner (2003)
– Complexity: Habermann (2002), Töpfer (2002)
– Achievability: Locke et al. (1981), Locke and Latham (1990), Griffin (1990)
– Specificity: Asimov (1962), Locke et al. (1981), Locke and Latham (1990), Champion and Lord (1982), Griffin (1990), Cesaroni et al. (2004)
– Activity-based: Cesaroni et al. (2004), Karlsson et al. (2004)
Goal-setting process: Hamel (1992), Nagel (1992), Hauschildt (1997), Töpfer (2002)
Feedback: Locke (1968), Greller and Herold (1975), Latham and Yukl (1975), Ilgen et al. (1979), Greller (1980), Earley et al. (1990), Busby (1997), Tankoonsombut (1998)
Goal-description: Schwantag (1951), Bidlingmaier (1964), Mesthene and Clintock (1962), Kosiol (1966), Schmidt (1969), Locke (1968), Griffin (1990), Heinen (1991), Specht and Beckmann (1996)
Operative problems: Bourne (2000), Brunner (1999), Frigo (1999), Habermann (2002), Tieke (1999), Töpfer (2002), Kueng (2001), Zimmermann (2000), Schneiderman (1999), Steinle (2001), Ittner (1998), Gleich (2001)
Table 3.14 gives an overview of the relevant organizational-goals literature on the generic level that embraces the entire company, and on the specific company levels: the R&D function and research only. Very few contributions discussing the individual strategies of companies can be found. It is often possible to deepen the understanding of a company's strategy by looking up the mission and vision statements that are usually provided on its web pages. However, the translation of the strategy into practical organizational goals remains an under-investigated area in the literature.
3.3 Key Performance Indicators (KPIs)
3.3.1 Generic Key Performance Indicators
In the following section we examine non-sector-specific literature addressing performance measures, performance indicators, and KPIs.106 Our attention will especially be given to the following questions:
1. What kinds of performance measures/indicators are listed in the literature?
2. In which context are they measured? E.g. how are they organized in terms of structure; what is the frequency of measurement; which artifact is being measured (input, output, outcome or activities); what, if any, logic is being applied?
As we reported in the section on PMS approaches, the balanced scorecard is perhaps the most used analysis framework. It is a general framework with four dimensions and therefore does not specifically detail performance measures. An alternative framework to the balanced scorecard is Parmenter's107 framework, which includes two additional perspectives. He provides a database listing 342 measures108 with an indication of possible application. This listing is a collection of different measures with no specific industry focus. The list makes a distinction in the following aspects: frequency of measure, balanced scorecard perspective, applicable balanced scorecard teams, applicable sectors and the strategic objective. Regarding the frequency of measure, the range is reported according to specific time frames such as: 24/7, daily, weekly, quarterly, monthly, two, three or four times a year. It is also reported following events such as: when audits are performed, after an employee survey, after the performance review round, after every staff survey. Parmenter organizes all measures within the following six perspectives:
• Financial (utilization of assets, optimization of working capital);
• Customer (increase customer satisfaction, targeting customers that generate the most profit);
• Environment/community (supporting local businesses, linking with future employees, community leadership);
• Internal (delivery in full and on time, optimizing technology, effective relationships with key stakeholders);
106 The respective definitions are provided in Sect. 2.2.3.
107 Parmenter (2007).
108 Parmenter (2007), pp. 203–231; the database also contains metrics for R&D, which will be examined in detail in the following section. On account of minor relevance the table with 342 measures is not placed in this section, but can be found in Appendix B.
• Employee satisfaction (positive company culture, retention of key staff, increased recognition);
• Learning and growth (empowerment, increasing expertise, and adaptability).
The listing assigns each measure to at least one of the following teams: sales and marketing, production, quality assurance, accounting, sales and back office, IT-communications, IT help desk, call centers, service delivery teams, production control, dispatch, project teams, service teams, human resources, operations, public relations, stock control, research and development, payroll, procurement, design, planning, training. Furthermore, Parmenter distinguishes in his list the following applicable sectors: all private sectors, all sectors where customers retain a sum of money until satisfactory completion, all sectors that dispatch goods, and other sectors like manufacturing, service, charity, construction, banking, insurance, the critical services sector, retail, professional service firms and tertiary. The last aspect in Parmenter's listing is the strategic objective. The following objectives are distinguished: increase profitability, provide efficient operations, retain customers, minimize negative comments in the marketplace, establish long-term relationships with profitable customers, increase sales, generate more reliable products, improve employee satisfaction and productivity, create a desirable workplace, make employees happy, make customers happy (which makes shareholders happy), work well with the community and environment, reduce environmental impact, create positive public perception, maintain supplier relationships, create innovation, retain a skilled and experienced workforce, provide a healthy and safe work environment. Parmenter introduces his concept with the following three parameters109: KRIs, KPIs, and PIs, but this distinction is not available within his database. He notices that KRIs have often been mistaken for KPIs and that KRIs provide information that is ideal for the Board. He argues that KRIs typically cover a longer period of time than KPIs. He also suggests separating KRIs from other measures, which in his concept would separate performance measures into those impacting governance and those impacting management. Furthermore, a concrete recommendation is made to have a governance report with a dashboard format, consisting of up to 10
109 “Key result indicators (KRIs) give a clear picture whether you are traveling in the right direction. They do not, however, tell one what is needed to do to improve these results; Key performance indicators (KPIs) represent a set of measures focusing on those aspects of organizational performance that are the most critical for the current and future success of the organization.” Parmenter does not really define performance indicators (PIs) as he only mentions: “In between KRIs and the true KPIs are numerous performance indicators. These complement the KPIs and are shown with them on the scorecard for the organization and the scorecard for each division, department, and team.” Parmenter (2007), pp. 3–7.
measures providing high-level KRIs for the Board and a balanced scorecard comprising up to 20 measures (Parmenter suggests a mixture of KPIs and PIs) for management. Altogether, Parmenter introduces a 10/80/10 rule and argues that an organization very rarely needs more than 10 KRIs, up to 80 PIs and 10 KPIs.110 For our specific question, which asks which performance indicators are best suited to assess organizational goals in an industrial research department, Parmenter's database is of little use. From the sector-specific perspective, we do not find indications in the database of the ICT sector or references to the industrial research function. A good crossover from the generic literature on performance measures to the literature addressing measures in R&D is presented by the work of Geisler.111 Geisler, in his book, summarizes metrics for science and technology (S&T). Therefore, his work cannot be completely allocated to generic functions, because S&T is only a part of the company's value chain. Furthermore, his work cannot be precisely allocated to R&D, because the area he covers is broader, addressing the whole innovation continuum. In our context the larger part of his work is allocated to the generic function with a gradual transition to R&D. Geisler summarizes 52 metrics within the following eight categories112:
Input/investments in S&T; Economic/financial metrics; Commercial and business metrics; Bibliometric metrics; Patents; Peer review metrics; Organizational, strategic and managerial metrics; Stages of outcomes.
Some of the metrics are well-suited to assess the performance of industrial research. For example, measures from the "patents" category are typical metrics to assess the inventiveness of an organization. Bibliometric metrics are often used to assess publication intensity, and peer-review metrics are used for content that is very domain-specific and that only a small number of individuals (peers) can judge. Economic or financial metrics are more difficult to apply and are less common in practice when assessing the performance of industrial research departments.
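To illustrate how such patent and bibliometric categories can be turned into concrete quantitative indicators, here is a small sketch (our own Python example with invented numbers; it is not taken from Geisler's metric lists):

```python
# Hypothetical yearly raw data for an industrial research group.
raw = {
    "researchers_fte": 25,
    "peer_reviewed_papers": 18,     # bibliometric category
    "patent_disclosures": 9,        # patents category
    "patents_granted": 3,
}

def per_capita(count: int, fte: int) -> float:
    """Normalize an output count by research headcount (full-time equivalents)."""
    return count / fte

kpis = {
    "papers_per_fte": per_capita(raw["peer_reviewed_papers"], raw["researchers_fte"]),
    "disclosures_per_fte": per_capita(raw["patent_disclosures"], raw["researchers_fte"]),
    "grant_ratio": raw["patents_granted"] / raw["patent_disclosures"],
}

for name, value in kpis.items():
    print(f"{name}: {value:.2f}")
```

Normalizing counts by headcount keeps such indicators comparable across differently sized research groups.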
110 Parmenter (2007), p. 8.
111 Geisler (2000).
112 Geisler (2000), pp. 80–86. Please note that the author emphasizes that each category contains illustrative measures. The lists are not exhaustive and are not a complete listing of all measures found in the relevant literatures. Measures are not listed in any particular order. See Appendix B.
3.3.2 R&D Key Performance Indicators
In the following we review literature reporting on performance measures for R&D. Interest in the topic of performance measurement in R&D has increased in the last few decades, but the topic is not new at all. Legitimate interest for the “right” KPIs has been shown, in particular, by managers of R&D laboratories. This is especially the case when there is an increased demand for contributions or value of contributions from R&D and a need to make the impact of results more visible to the overall organization. Therefore, the call for transparency is connected to budget allocation procedures, which often results in a search for the “right” KPIs that justify R&D’s existence. Articles reporting these issues can be traced to the 1970s. One of the most comprehensive studies for evaluation of R&D productivity was published by Ranftl (1977) within the Hughes Aircraft Company. In this study Ranftl analyzed typical characteristics for a high-productive research and development department. The two-stage study,113 which was conducted between 1973 and 1975, identified three levels as starting points for the measurement and improvement of R&D productivity: Productivity of the organization, productivity of the R&D manager and productivity of the employees. The results of the study are presented in the form of checklists (to examine the typical criteria of a productive R&D process) and profiles (to describe a typical profile of a respective unit). Since then, based on his findings,114 many studies have been conducted to find appropriate metrics to assess R&D’s productivity. Schainblatt115 lists more than 40 different kinds of measures, indicators, ratios, etc., from seven in-depth case studies. Brown and Svenson116 present 20 measures distinguishing research and technology development measures and pure product development measures. Brown and Gobeli117 report on 23 measurements collected in a field study and suggest a list of the ten most important indicators covering the primary areas of importance. Tipping and Zeffren118 present 33 metrics within their approach called Technology Value Pyramid, by associating them with five managerial aspects. In their empirical investigations, Chiesa and Frattini found that there is a pronounced specialization in the use of different performance indicators along the R&D process.119 In their
113 The following characteristics are attached to this study: more than 2,300 interviews with top R&D managers, interviews in 59 enterprises of the investment and consumer goods industry, interviews with 28 consulting companies, various seminars and a large-scale literature search.
114 An extract of the indicators suggested by Ranftl can be found in Appendix B.
115 Schainblatt (1982), pp. 10–18. Note that we summarized only relevant measures for R&D and left out measures reported for the engineering units; we also tried to avoid double entries.
116 Brown and Svenson (1998), pp. 30–35. The detailed list of measures suggested by Brown and Svenson is provided in Appendix B.
117 Brown and Gobeli (1992), p. 330. The detailed list of measures suggested by Brown and Gobeli is provided in Appendix B.
118 Tipping and Zeffren (1995), pp. 32–63. The detailed list of 33 measures suggested by Tipping and Zeffren is provided in Appendix B.
119 Chiesa and Frattini (2007), p. 294. The full list of performance indicators collected by Chiesa and Frattini is provided in Appendix B.
study, the authors found a tendency for the use of quantitative objective metrics in development, and qualitative subjective measures in research. The authors draw particular attention to the specific degree of uncertainty and complexity that characterizes the activities of the two departments. They argue that this translates into the availability of quantitative data to be used to build precise and objective indicators. According to the authors, the uncertainty and the distance in time of research results make it impossible to use numeric metrics that are independent from experts' opinion. Their empirical investigation does not provide evidence of the use of quantitative subjective indicators; nevertheless the authors argue that these represent a sort of hybrid solution between quantitative objective and qualitative subjective metrics that can be used in the development of PMSs when it would be too costly to systematically measure objective indicators, or in applied research when uncertainty levels are lower and there is a specific need for numeric information (e.g. for supporting formal project curtailment decisions). Due to the very broad range of activities covered by the R&D function, a great variety of metrics is to be found. Almost all listings distinguish between qualitative and quantitative metrics. Regarding the categorization perspective, the following taxonomy of performance measurement methods has been suggested by Kerssens-van Drongelen120 (Table 3.15).

Table 3.15 Summary of measurement methods adapted from Kerssens-van Drongelen
Taxonomy of performance measurement methods:
• Subjective, qualitative methods
• Semi-objective, qualitative methods
• Subjective, semi-quantitative methods:
  – Simple: single item/judgmental conversion or averaging of scores for several items into an aggregate metric
  – Sophisticated: conversion of scores for several items into an aggregate metric using sophisticated formulae or techniques
• Semi-objective, semi-quantitative methods:
  – Simple: single item/judgmental conversion or averaging of scores for several items into an aggregate metric
  – Sophisticated: conversion of scores for several items into an aggregate metric using sophisticated formulae or techniques
• Subjective, quantitative methods: financial; non-financial
• Semi-objective, quantitative methods: financial; non-financial
• Objective, quantitative methods: financial; non-financial
Source: Kerssens-van Drongelen (2001)
120 Kerssens-van Drongelen (2001), p. 80.
From the content perspective, many authors cite people-based metrics. These include a focus on skill and training aspects, motivation, morale etc. Furthermore, the intellectual property aspect is often mentioned, with metrics such as granted patents, patent applications, cost of inventions, counts of patent disclosures, etc. Publications, papers and books are also present in many listings. Technology transfer to manufacturing or other units and strategic alignment are also often named. Another observation, which is more structural than content-related, is the fact that many authors try to assign each metric to specific content areas. These areas summarize the metrics and often represent important areas for the companies evaluated. For example, Tipping et al. (1995) call these areas Managerial Factors121:
121 Tipping et al. (1995), pp. 32–63.
122 Mertins (1998), cited from Krause; note that Krause adapted the structure in the course of his work and therefore the naming has been changed.
Fig. 3.6 Tetrahedron model adapted from Krause, linking critical success factors, performance indicators, business processes and performance improvement projects (Source: Krause 2005)
• The first object type is the critical success factor (CSF).123 The concept of the CSF goes back to Daniel in the early 1960s and Rockart at the end of the 1970s.124 It is based on the assumption that a limited number of individual variables exist that make a significant contribution to a company’s success. “Critical success factors . . . are, for any business, the limited number of areas in which results, if they are satisfactory, will ensure successful competitive performance for the organization. They are the few key areas where ‘things must go right’ for the business to flourish. If results in these areas are not adequate, the organization’s efforts for the period will be less than desired. As a result, the critical success factors are areas of activity that should receive constant and careful attention from management. The current status of performance in each area should be continually measured, and that information should be made available. . . ., critical success factors support the attainment of organizational goals.”125
• The second object type in Krause and Mertins’s model is the business process. According to Krause, business processes represent sources of added value and hence of performance. Krause argues that precise knowledge of these processes is the basis for the identification of CSFs.
123 Note that a critical success factor (CSF) is not a key performance indicator (KPI). Critical success factors are elements that are vital for a strategy to be successful. KPIs are a set of performance indicators that quantify management objectives and enable the measurement of strategic performance. A critical success factor, following Rockart, is what drives the company forward.
124 Daniel (1961), Rockart (1979).
125 Rockart (1979), p. 85.
• The third object type is the performance improvement project. The authors argue that performance improvement projects impact the efficiency and effectiveness of business processes.
• Performance indicators represent the fourth object type of the model. The justification for including this type in the model is its management support function.
Although Krause mentions that CSFs are linked to goals by the performance indicators in his model, the goals are omitted from both the description and the illustration. They were, however, part of Mertins’s original model. Business processes and performance improvement projects are driven by these goals, and the measured data is consolidated through performance indicators and can be used to assess goal attainment.
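To make the object structure model more tangible, the four object types and their links can be represented as simple data structures. The following sketch is our own illustrative reading of the Krause and Mertins model, not an implementation from the literature; all identifiers are invented.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PerformanceIndicator:
    """Fourth object type: quantifies goal attainment (management support)."""
    name: str  # e.g. "number of granted patents"

@dataclass
class CriticalSuccessFactor:
    """First object type: a limited area in which results must be satisfactory."""
    name: str
    # In the model, CSFs are linked to goals via performance indicators.
    indicators: List[PerformanceIndicator] = field(default_factory=list)

@dataclass
class BusinessProcess:
    """Second object type: a source of added value; precise knowledge of it
    grounds the identification of CSFs."""
    name: str
    success_factors: List[CriticalSuccessFactor] = field(default_factory=list)

@dataclass
class PerformanceImprovementProject:
    """Third object type: impacts the efficiency and effectiveness of
    business processes."""
    name: str
    targeted_processes: List[BusinessProcess] = field(default_factory=list)
```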
3.3.3 Research (Only) Key Performance Indicators
Because companies have, for a long time, regarded research and development as one single function, very few literature sources were found on performance indicators that can be applied to a research department alone. Two studies were found that report performance measures applied in practice within research departments only (one within an industrial research group and one within a non-profit research organization). Mettänen126 designed and implemented a performance measurement system in the TTS Institute (Work Efficiency Institute, in Finnish: Työtehoseura). The study focuses on research activities only. The TTS Institute’s research activity involves three research departments: agriculture, forestry and home economics. Mettänen reports ten success factors allocated to all three departments, subsumed under financial and non-financial categories. It is striking that the number of non-financial success factors outweighs that of the financial ones (Table 3.16). Altogether, 24 measures were defined to be applied within a performance measurement system of the TTS Institute. The author stresses the difficulty of collecting data for the chosen measures due to special characteristics such as the importance of intellectual capital: different collection methods are required for intellectual capital measures than for financial measures. Questionnaires are often deployed to gather such data and reflect the subjective assessments of the interviewees. The performance measures listed in Table 3.17, attributed to the success factors above, were finally defined; Mettänen reports that the calculation is done separately for each department (Table 3.17).
Another case study reporting a concrete example from industry is that of Loch and Tapper.127 They applied a strategy-driven performance measurement system for the process technology research group at GemStone, a medium-sized diamond producer. The authors developed 28 performance measures (see Table 3.18).
126 Mettänen (2005), pp. 178–188.
127 Loch and Tapper (2002), pp. 185–198.
Table 3.16 Case organization’s success factors divided into financial and non-financial factors
Financial success factors:
• Growth
• Increasing profitability
• Good productivity
Non-financial success factors related to the employees:
• Increasing academic competency
• Increasing competencies
• Employee welfare
Non-financial success factors related to external relationships:
• Positive publicity
• Internationalization
• Customer satisfaction, users
• Customer satisfaction, financers
• Finding new financers and keeping old financers
Non-financial success factors related to internal structure:
• Effective distribution of work
• Effective project management
• Availability of equipment
• Sharing knowledge between departments
Source: Mettänen (2005)
Table 3.17 Measures of the case organization (success factor: measure/measures)
• Growth: increase in turnover (%); increase in working years (number)
• Increasing profitability: profitability of projects on average (coefficient); annual profit (€)
• Positive publicity: number of appearances in media (number)
• Internationalization: number of employees having attended an international occasion (number)
• Customer satisfaction, users: percentage of satisfied customers (%)
• Finding new financers and keeping old financers: percentage of financing other than government subsidy (%); percentage of private financing (%); percentage of new financers (%)
• Customer satisfaction, financers: percentage of satisfied customers (%)
• Effective distribution of work: percentage of employees who work from 50% to 80% of the theoretical work time on projects (%)
• Effective project management: mean ratio, calculated based on project evaluation form (number)
• Availability of equipment: annual investment in equipment (€)
• Sharing knowledge between departments: number of joint projects between departments (number)
• Good productivity: amount of financing per employee on average (€); number of publications per employee (number)
• Increasing academic competence: number of credits for postgraduate studies (number); number of postgraduate degrees (number); number of other degrees (number)
• Increasing competencies: percentage of employees who have achieved personal development goals (%); investments in education (€)
• Employee welfare: employee satisfaction index (questionnaire)
Source: Mettänen (2005)
Table 3.18 Performance measures
New technologies & breakthrough concepts
Output measures (group level):
• # of significant innovations delivered
• Impact of the technologies delivered (qualitative estimation by customer, follow-up to learn)
• Innovation score
• Market potential of innovations in $
• # of presentations to external customers
Output measures (project level):
• Level of prototype maturity (e.g. # of major technology revisions after hand-over to development)
• Quality of documentation to development
Process measures (project level):
• Conscientious management of uncertainty (e.g. risk reduction assessment)
• Professional documentation, peer review
• Professional schedule and budget planning and control
Customer support
Output measures (group level):
• Customer satisfaction index (by customer)
• Response time to queries
• % of support requests fulfilled
• # of training sessions signed off by customer and delivered
• # of problem analysis reports requested and delivered
Output measures (project level):
• Request fulfillment
• Response time
• Successful study completion
• Clear go/no-go decision
Process measures (project level):
• Quality of interaction with the technical services requestor
• Communication within research and with customers
Knowledge repository and external reputation
Output measures (group level):
• # of requested handbooks published and delivered
• Quality of research program homepage in intranet
• # of external publications, patents, and their impact
Output measures (project level):
• Clarity and quality of conclusions in technology assessments (e.g. by peer review)
• Completeness of literature surveys
• Documentation
Process measures (project level):
• “Project management” of writing handbooks and assessments
• Use of external knowledge, cooperation with partners
Source: Loch and Tapper (2002)
These measures are distributed, on the one hand, between the group level and the project level and, on the other hand, between three types of research output: new technologies and breakthrough concepts, customer support, and knowledge repository and external reputation.
3.4 Conclusions and Research Gaps
This chapter started by explaining the terms that are crucial for a general understanding of this work. Altogether, it provided an overview of the relevant streams of writing concerned with performance management and measurement in general, in R&D, and in industrial research departments. This includes non-sector-specific as well as industry-sector-specific findings. Particular emphasis was placed on the ICT sector, with a focus on the software industry. Thereafter, Sect. 2.2 dealt with the state-of-the-art of the performance management concept in general, exploring performance measurement and its constitutive elements in particular. Sect. 2.3 discussed the purposes of performance management: we identified and compared them, consolidated them into three main categories, and finally associated the categories with the performance management cycle. Section 3.1.4 listed a requirements catalogue for a performance management system. Section 3.1.5 summarized the main characteristics of an industrial research department and its requirements in terms of performance assessment. Thereafter, the nature of organizational goals and key performance indicators was investigated in Sects. 3.2 and 3.3.
As the literature review makes clear, performance measurement in a creative environment, such as a research department, is seen by the majority of contemporary researchers as a matter of fact128 and as necessary as the management function itself: “The time has come to lay aside the old excuses for not measuring and base-lining R&D effectiveness, and to do it anyway – because it is the right thing to do. Direct measurements can be made. Do they involve uncertainty? Yes. Are they somewhat subjective? Of course. Are they valuable? Absolutely!”129; “It’s one thing to recognize that all measurements of research are imperfect. But it’s quite another thing to say that because they are imperfect, we shouldn’t measure at all.”130 Nevertheless, some companies still hold the belief that performance measurement damages the creativity of research output.131
128 Kerssens-van Drongelen has in particular shown in her work the need for performance measurement; see Kerssens-van Drongelen (2001), pp. 1–10.
129 Philip Francis (1992), Vice President and Chief Technical Officer of the Square D company, cited from Kerssens-van Drongelen (2001).
130 Walter Robb (1991), Senior Vice President for Corporate Research and Development at General Electric, cited from Kerssens-van Drongelen (2001).
131 One of our analyzed companies argues against the measurement of research.
In general, the review illustrates that performance measurement is a central challenge for research departments in industry.132 It shows that there are several approaches to measuring performance in industrial R&D,133 but few approaches that measure the research environment independently of the development environment.134 Despite the established understanding that measuring research is important and worthwhile, there is a lack of scientific contributions that address the performance management of research departments only. The following research questions always imply the context of the ICT sector without this being mentioned explicitly. Consequently, the research question for the present investigation is:
RQ1: What is the common practice in regard to performance management of industrial applied research organizations? Are there common or similar organizational goals, and are there common or similar key performance indicators?
The investigation into this question should produce valuable information for the research inquiry in general. Moreover, these findings should provide the basis for more detailed research on the applicability and favorability of performance management practice in industrial research departments. As the literature review reveals, the application of performance measurement in industrial research is still largely unexplored. Furthermore, the reviewed literature shows that there are fundamental differences between the research function and the development function within industrial organizations.135 As such, the application of an R&D measurement system to the research function alone would only have limited validity. Loch and Tapper (2002), Karlsson et al. (2004) and Mettänen (2005), who investigated the performance measurement phenomenon within different industrial sectors, confirm this conclusion. Thus, there is a need to further investigate whether a generic approach exists that describes the common performance management practice of industrial research departments:
RQ2: Is there a generic performance management approach (PM approach) to describe the common practice of performance management of industrial applied research organizations? How are goals and KPIs reflected in this approach?
This work investigates the organizational performance of industrial research departments. Therefore, the most important issue is to understand the characteristics of industrial research departments that substantiate organizational peculiarities, and how the performance management system reflects these peculiarities. In this context the literature review provides a limited theoretical base.
132 For example, Loch and Tapper (2002), p. 185.
133 As summarized in Table 3.3.
134 Mettänen (2005), Loch and Tapper (2002).
135 Refer to Sects. 3.1.4 and 3.1.5.
In our related analysis we presented a detailed overview of the specific characteristics of research in general, and industrial research in particular,136 namely:
• The nature of activities pursued in research137;
• The time lag between outputs that are produced in research and their resulting outcomes138;
• The fact that valid outputs of research are not necessarily positive139;
• The uncertainty factor (uncontrollability, unobservability, unpredictability) in research activities140;
• The different stakeholders and different communication methods141 compared to development and production units;
• The originality of research work in terms of discontinuous “jumps” in knowledge142;
• The knowledge depth and highly specialized knowledge of research staff vs. more cross-functional expertise143;
• The different organization of research in terms of structure and processes with regard to formalization and centralization.144
One feature that reflects the very nature of the inventive process is that the result of research activities is not known ahead of time. It is often difficult to define, describe and specify the artifact or knowledge that is expected as an output. In a commercial environment “the view of what the expected output is from a firm’s research activities varies from company to company, as well as within a company over a period of time, due largely to market developments and changing customer demands”.145 The authors also note that the question of “expected output” from research activities reflects the company’s definition of research. They observe that if there is no clear definition at hand, there will be differences of opinion regarding what the expected output is, making measurement very difficult. Managing so many particularities, and the complexity emanating from them, requires a comprehensive approach that is flexible enough to accommodate all necessary conditions, such as the possibility to track the transformation of research outputs into outcomes for the company, which typically requires a longer observation period than a single measurement cycle.
136 In Sect. 3.1.5.
137 Leifer and Triskari (1987), Karlsson et al. (2004).
138 Feltham and Xie (1994), Brown and Svenson (1998), Loch and Tapper (2002), Chiesa et al. (2007).
139 Prechelt (1997).
140 Feltham and Xie (1994), Leifer and Triskari (1987), Loch and Tapper (2002).
141 Leifer and Triskari (1987), Amelingmeyer (2005), Beck and Völker (2009).
142 Stahl and Steger (1977), Karlsson et al. (2004).
143 Karlsson et al. (2004).
144 Leifer and Triskari (1987), Chiesa et al. (2007).
145 Karlsson et al. (2004).
When organizational performance is assessed, it means nothing other than the accumulated contribution of employees to the organization’s goals. Employees’ actions are anchored within their organizational activities. Activities are thus the mechanism by which work gets done and performance is accomplished. If the purpose of measurement is to manage the performance of an organization, then the key to effective measurement is to understand the corresponding activities. The reviewed literature shows that the activities that people execute can be used as a basis for composing an organizational function. Therefore, orienting performance management around activities might be a promising way to tackle RQ2. As this area lacks longitudinal analyses that study the performance management phenomenon, the approach has to consider the peculiarities of the studied unit of analysis and has to be closely related to the following question:
RQ3: How are the peculiarities of industrial applied research [specifically, and the requirements of performance management systems in general] considered in the developed PM approach?
Although existing frameworks146 are undoubtedly valuable, their adoption is often constrained by the fact that they are simply frameworks. They suggest areas in which measures of performance might be useful, but provide little guidance on how the appropriate measures can be identified, introduced and ultimately used to manage the business. For any performance measurement framework to be of practical value, the process of populating the framework has to be understood. The answers to these three research questions should produce novel insights concerning the state-of-the-art of performance measurement in industrial research departments. Thus, the research findings of this inquiry should be valuable for the scientific community as well as for practitioners. In order to achieve high practical relevance, there is a need to develop guidelines, in the form of policy recommendations, on how to design, implement and/or further develop a performance management system in practice. In this context the following additional research question is of major interest:
RQ4: Can any recommendations be derived from the suggested approach to advise on the implementation and further development of a performance management system in an industrial research organization?
The answers to each of the four research questions will be provided in the following chapters: in Chap. 4 we introduce the step-by-step development of a performance management system via a series of in-depth case studies, thereby answering RQ2 and RQ3. Chapter 5 describes the results of a large quantitative survey on the developed approach, thereby validating it. Chapter 6 describes recommendations that can be derived from the developed approach, including the goal-setting process and the performance management process.
146 E.g. balanced scorecard, ABC, etc.
Chapter 4
Performance Management: Analysis Approach
Abstract This chapter provides an overview of the methodology and introduces the step-by-step development of a performance management system via a series of in-depth case studies. The analysis approach aims at developing a model for performance management systems built on the basic components found in practice, such as organizational department goals and individual KPIs. The model is extended with additional components that allow us to compare the various performance management systems and to identify relationships between these components. Performance clusters (i.e. groups of KPIs), which represent one significant component of our developed model, are introduced. Furthermore, the goals are analyzed for the similarity of their sub-components. Altogether, Chap. 4 describes a preliminary approach to dealing with the individual elements of performance management, that is, with KPIs, KPI classes, performance clusters and finally organizational department goals.
Keywords Case studies • Collaboration with academia • Collaboration with partners and customers • Design • Future business opportunities • Image • Intellectual property (IP) creation • Interviews • Operational excellence • Performance clusters • Performance management analysis approach • Portfolio management • Presence in scientific community • Publications • Talent pool • Technology transfer
In this chapter we introduce our approach to developing a model for performance management systems. It is built on basic components found in practice (such as organizational department goals and individual KPIs) as well as on additional components that allow us to compare the various performance management systems and to identify relationships between these components. The first part of the chapter describes the approach in general; the latter part applies the approach to a number of case studies. In this way we develop a methodological base for a large quantitative survey. A mixed-method research approach has been chosen for this work because of the sensitivity of the topic and the lack of extant secondary data available. The area is
often considered to be sensitive for companies because performance management includes planning, assessment, analysis and review of departments and internal goals. A mixed-method approach means that both qualitative and quantitative methods of data collection were used.1 Collis and Hussey state that this approach is known as triangulation.2 The following four types3 of triangulation can be distinguished: data triangulation; investigator triangulation; methodological triangulation; and triangulation of theories. Within this work methodological triangulation was applied. Morse classifies methodological triangulation as simultaneous or sequential.4 For this work, sequential triangulation was applied, including: 1. Extensive literature research which has consistently accompanied all major stages of this investigation; 2. Case study research5 that allowed for the data collection and validation of the relevance of the research area; and 3. Survey research where the findings were eventually validated via a large quantitative survey. The literature review that was presented in Chap. 2 forms a basis for the development of a catalogue of semi-structured questions. These were used for the preparation of the case studies. In the remainder of this chapter we describe the analysis of existing Performance Measurement Systems and develop a model for performance measurement in industrial research.
4.1 Case Studies
4.1.1 Design and Preparation
This work required a framework to analyze the sample of industrial research companies. Such a framework could not be found in the literature. The following
1 Easterby-Smith et al. (2002), Kasper (2006).
2 Collis and Hussey (2003) argue that the application of a mixture of approaches is quite usual when conducting business research. “The use of different research approaches, methods and techniques in the same study is known as triangulation and can overcome the potential bias and sterility of a single-method approach” (Collis and Hussey 2003, p. 78). The main advantage of triangulation is the greater validity and reliability of the research results (Denzin 1978).
3 Easterby-Smith et al. (2002).
4 “Simultaneous triangulation is the use of the qualitative and quantitative methods at the same time. In this case, there is limited interaction between the two datasets during the data collection, but the findings complement one another at the end of the study. Sequential triangulation is used if the results of one method are essential for planning the next method. The qualitative method is completed before the quantitative method is implemented or vice versa” (Morse 1991, p. 120).
5 Yin (2006).
Fig. 4.1 Framework: research department and its eco-system
model6 (cf. Fig. 4.1) is based on Brown and Svenson’s graph and represents the interactions and flows, in and out, of an industrial research organization. This systemic view shows a research department as a processing system and other parts of a company as the receiving system. Furthermore, inputs, outputs and outcomes are exemplified, and the measurement aspect is highlighted with measurement timeframes. This graph, while useful in its simplicity, is a caricature of reality, and the granularity of the processes is excessively simplified. Although it is incomplete, the graph has the benefit of presenting the environment around a research department within a company at a high but appropriate level of abstraction. This framework enables us to establish a shared basis of understanding with interviewees about the unit of analysis, the research department, and to make the research environment with its inputs, activities, outputs and outcomes more explicit. As a next step it was necessary to understand whether performance measurement existed and was used in practice. Thereafter, we had to identify the companies that could be used to study the topic. In accordance with the focus of our research, we selected the candidate companies for the interviews according to the following criteria:
6 Samsonowa et al. (2009), p. 162.
• Global companies in the ICT sector and similar sectors listed on a stock exchange;
• With a dedicated research organization; and
• With a high or medium R&D intensity rate (ratio of R&D spending to total sales) according to the Scoreboard of the European Union.7
According to these criteria, the following 13 companies were selected: ABB, Deutsche Telekom AG, EMC, Google, Hewlett Packard, IBM, Intel, Microsoft, Nokia, Philips, SAP, Siemens, and Sun Microsystems. Twenty first-round interviews8 with representatives from the research departments of these companies were conducted. The selection criteria were primarily a certain understanding of, and need for, organizational performance assessment, and the existence of a functional performance measurement system within the research department. In order to be able to compare the companies it is necessary to collect the same relevant data within each company and to process this data in a common manner. Two matrices (cf. Table 4.1 for the outline) were developed to capture the essence of performance measurement within each case study: a matrix comprising research goals and subgoals (cf. Table 4.2) and their weightings, as well as a matrix (the PMS matrix compilation) containing performance measures, their weights and their mappings to the research goals (cf. Table 4.4). These matrices were used to capture the contribution of individual KPIs to the goal achievement of the entire department. In summation, they represent the performance measurement system as a set of weighted department goals, with each goal being assessed by a collection of weighted KPIs. By means of these matrices we were able to collect comparable data for the case study companies. The subgoals in this context were a methodological tool to develop the performance matrix in cases where companies had an underdeveloped performance management system, characterized, for example, by unclear KPIs, ill-defined goal assessment, and/or an inexplicit relationship between goals and KPIs. Subgoals helped to break down organizational research goals, in an intermediate step, into their measurable constitutive parts, with which the KPIs could easily be associated.
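The aggregation logic behind these matrices can be stated compactly. The following is a minimal formalization in our own notation, assuming that each KPI k is reported as a normalized attainment score a_k between 0 and 1:

```latex
P \;=\; \sum_{g=1}^{G} \frac{w_g}{100} \sum_{k=1}^{K} \frac{v_{g,k}}{100}\, a_k,
\qquad
\sum_{g=1}^{G} w_g = 100,
\qquad
\sum_{k=1}^{K} v_{g,k} = 100 \ \text{for each goal } g,
```

where $w_g$ denotes the weight of goal g (cf. Table 4.2) and $v_{g,k}$ the weight with which KPI k contributes to goal g (cf. Table 4.4); P is then the department's overall degree of goal attainment.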
4.1.2 Execution
After the first round of interviews, which provided the first insights into performance management practices, we selected a set of companies with which to test our research questions.
7 The EU industrial R&D investment SCOREBOARD groups companies into four sectors of R&D intensity: high (above 5%), medium (2%–5%), low (1%–2%) and very low (less than 1%).
8 The detailed list of the first-round interviews is in Appendix C.
We conducted a series of in-depth case studies to investigate how performance measurement in industrial research organizations is reflected in practice. The empirical investigation employed a multiple case study methodology,9 as it allows for in-depth cross-examination of each case in its own complex environment and natural setting. In this section we describe the organization and structure of the case studies and summarize our first findings. To select candidates for the in-depth case studies we added a fourth criterion to the three selection criteria used to choose candidates for the first interviews: the existence of performance management within the company. The case studies were conducted as semi-structured expert interviews with several iterations. The following companies were selected: ABB, EMC, Deutsche Telekom, IBM, Intel, Microsoft, Philips and SAP. The interviewees were the heads of the research departments as well as additional people, such as managers in charge of controlling research or heads of project management offices responsible for collecting, processing and editing the performance measurement data for review meetings with decision-makers. To capture the data within companies as comprehensively as possible, three people from the research department of each company were interviewed.10 Each case study is divided into three parts, as shown in Table 4.1: the first part addresses the organizational context and research environment; the second part deals with the management of performance in the research unit; and the third part captures the peculiarities of each case that were discovered during the case study.
Table 4.1 Case study outline
1. Organizational context and research environment:
(a) Company profile
(b) R&D characteristics
(c) Research organization structure
(d) Research process
(e) Methods and tools
2. Performance measurement:
(a) Research goals/subgoals
(b) Research KPIs
(c) Mapping goals and subgoals/KPIs
3. Conclusions
Part 1 contains five subsections (1.a–1.e). Section 1.a starts with a brief introduction of the company, with key facts and figures such as size, locations, sector, market presence, number of employees, products and services, and investment in R&D and in research only.
9 Eisenhardt (1989), Yin (2006).
10 Information about all interviews, including names and positions of interviewees, interview dates and methods, is summarized in Appendix C.
Section 1.b describes the research department in terms of inputs, activities and outputs, listing them by way of a diagram; this provides a basis for understanding the relationships between inputs, activities and outputs as well as the goals and KPIs related to them. Section 1.c addresses the embedding of the research department within the overall structure of the company, as well as the internal structure of the organization. Section 1.d describes how research is conducted, more specifically, the detailed sequence of activities carried out in the research organization. Finally, Section 1.e describes tools that support the research process, the production of outputs, as well as collaboration with external scientific communities and internal networks. The core of the empirical investigation is Part 2, “Performance Measurement”. The focus in Section 2.a is on the kinds of organizational goals that industrial research departments have and the way they are prioritized from an internal research view and from a corporate view. Sections 2.b and 2.c explore the performance indicators that industrial research organizations use to measure and monitor the achievement of their goals, as well as capturing the relationships and contribution of each subgoal or KPI to research and corporate goals. Peculiarities and key observations from each case study are summarized in Part 3. In the following section we present just one case study to illustrate the type of data that was collected. The remaining seven case studies are available in Appendix D.
4.1.3 Example Case Study “SAP Research”
4.1.3.1 Company Profile
SAP11 is the world’s leading provider of business software. Based on the SAP NetWeaver Platform, SAP provides its customers with a wide range of business solutions. SAP defines business software as comprising enterprise resource planning (ERP) and related applications such as supply chain management (SCM), customer relationship management (CRM), product life-cycle management (PLM) and supplier relationship management (SRM). These five major enterprise applications form the SAP Business Suite, SAP’s core product, which is supplemented by further product offerings and solutions for Small and Midsize Enterprises (SME). SAP is headquartered in Walldorf, Germany and employs about 43,800 people worldwide (as of December 31, 2007), of which more than 14,700 are located in Germany. The company is represented by its employees in more than 50 countries, serving over 47,000 customers in more than 120 countries. The shareholder structure12 of SAP is characterized by a wide distribution of share ownership: approximately 27.2% of a total of 1,246.6 million shares are in free float.
11 Internet source: http://www.sap.com/about/index.epx.
12 Internet source: http://www.sap.com/about/investor/stock/shareholders/index.epx.
Approximately 28.8% of the shares are held by the three founders of SAP, their trusts and holding companies (without family members). The company itself holds about 3.9% of the shares. In 2007 SAP achieved total revenues of €10.25 billion, which led to a profit of €1.94 billion. Approximately 14% (€1.46 billion) of the revenues were spent on R&D activities.13
4.1.3.2 R&D Characteristics
Figure 4.2, derived from Brown and Svenson14 and Samsonowa, Buxmann and Gerteis,15 shows the environment as well as the main inputs, activities, outputs and outcomes of SAP’s research organization.
Fig. 4.2 Definition of inputs and outputs of SAP Research (Source: Samsonowa et al. 2009)
SAP Research was established in 1999 and is mainly focused on the execution of collaborative research activities. It distinguishes the following four types of research project:
13 Internet source: http://www.sap.com/germany/about/press/archive/press_show.epx?ID=4266.
14 Brown and Svenson (1998), pp. 30–35.
15 Samsonowa et al. (2009), p. 162.
University grant projects: Research activities are conducted within the scope of a grant project by an academic partner of SAP Research at universities. The content of grant projects is defined in collaboration with SAP Research.
Co-innovation research projects: The content is defined jointly by SAP Research, a project consortium and the funding organization (e.g. the European Union).
Transfer projects: Transfer projects ensure cutting-edge technology in SAP products and allow research outputs to be transformed into products. Transfer projects are completely, or at least to a certain extent, funded by SAP business units.
Seed-funded projects: Seed-funded projects are set up primarily to identify future trends and to enable breakthrough innovation. The content is solely defined by SAP Research in order to build competencies in areas that go beyond today’s businesses and have the potential to result in competitive advantage in the future.
Funding characteristics (as of 2007): The ratio between internal and external funding at SAP Research in the EMEA and Australia regions is approximately 55:45; the Americas region is entirely internally funded. Internal (55): 82% of internal funding is corporate-financed (a fixed budget allocated to the research department); 18% is financed by business or development units. External (45): external contractual research (including publicly-funded projects).
4.1.3.3 Research Organization Structure
The research organization of SAP is a corporate entity, which is directly accountable to the Executive Board. The Head of SAP Research is a direct subordinate of the Executive Board member who is in charge of the department of Research & Breakthrough Innovation (R&BI) (Fig. 4.3).
Fig. 4.3 Organizational structure of SAP (Source: SAP Annual Report 2007)
SAP Research comprises Research Centers and Campus-based Engineering Centers, as well as the corporate venturing group: SAP Inspire (Fig. 4.4).
Fig. 4.4 Organizational structure of SAP Research (Source: SAP Research Report 2007)
SAP Inspire collects ideas from internal and external entrepreneurs (employees, customers, etc.), evaluates them, and selects those with the most promising growth opportunities in order to bring them to commercialization along the whole innovation process within a time-frame of about 18 months. There are two different types of organizational setup for research locations: Campus-based Engineering Centers (CECs) are located close to universities, thereby leveraging the talent and expertise of academia; SAP-Labs-based Research Centers (SRCs) are co-located with SAP development labs, which allows for the close alignment of research and development (Fig. 4.5).
Fig. 4.5 Distribution of SAP research locations (Source: SAP Research Report 2007)
Six research programs (cf. Fig. 4.6) form the pool of technology expertise in SAP Research. They are distributed across the above-mentioned locations, with at least two research programs being conducted at each location. Each research program is led by a Research Program Manager (RPM) who is responsible for the technology vision of the research program and its implementation through university grant projects, publicly-funded projects as well as transfer projects.
Fig. 4.6 Research programs at SAP research (Source: SAP Research Report 2007)
SAP Research is supported by three shared functions: the Research Portfolio Office (RPO), which deals with portfolio management and internal business development; the Project Management Office (PMO), which is responsible for the administrative management of publicly-funded projects; and Communications, which supports all kinds of communication, media and events of SAP Research.
4.1.3.4 Research Process
Starting with the identification of emerging trends and technologies, the organization develops a strategic roadmap to build expertise in certain research topics, with a time horizon of 3–5 years until the first major results become market-relevant. Research topics are dealt with in the phases shown in Fig. 4.7, where a dedicated research project spans one or two phases. There is no formal transition between the phases other than the production of the outputs associated with each phase. Within the co-innovation projects, as well as the transfer projects, a very rigid milestone-based approach with concrete deliverables and results is used. Project progress and results are assessed by SAP-external reviewers and/or the receiving organization. The idea selection process of SAP Inspire follows a funnel model: submitted ideas have to pass several quality gates before they can proceed to the next phase of the innovation process and be realized within a prototyping project (Fig. 4.8).
Fig. 4.7 Research process at SAP Research: research topics move from qualified idea via demonstrator, prototype and customer pilot to production or spin-off, spanning the invent, validate and transfer phases of collaborative research (Source: SAP Research Report 2007, p. 29)
Fig. 4.8 Idea selection process at SAP Inspire (Source: SAP Research Report 2007)
4.1.3.5 Methods and Tools
SAP Research has developed the PMO Suite, which supports the whole innovation process from idea generation for project proposals to project close and transfer. The following tools are available under the umbrella of the PMO Suite: INNOVATION: The InnoSuite manages the proposal process for publicly-funded projects. This includes initial proposal ideas, initial planning and the selection of the execution location. PARTNER: The Partner Database keeps track of project relationships with partners in collaborative research projects. PLANNING: The CAPA tool supports the planning and management of resources in projects. Records about the progress of the projects are kept, comparisons of target and actual performance are drawn, and administrative project information can be shared depending on user roles.
INFORMATION: The SAP Research Net (SRN) provides information about every single researcher, his or her area of expertise, and the projects he or she has been engaged in. Furthermore, every research project is listed with information about the scope of the project, related projects and the relevant stakeholders of the project. Thus, communication between researchers is simplified and enhanced, and transaction costs can be reduced. WIKI: The SAP Research Wiki is the information-sharing tool for the SAP-internal audience. PUBLICATION: SAP Research uses BibSonomy16 to manage and keep records of the publications of researchers.
4.1.3.6 Research Goals
As the global technology research unit of SAP, SAP Research’s goal is to contribute significantly to SAP’s product portfolio and to extend SAP’s leading position by identifying and shaping emerging IT trends through applied research. The research activities are focused on strategically important business areas of the company and are aimed at leveraging the entrepreneurial and inventive talent within SAP.
Table 4.2 Relative weighting of SAP corporate research goals
1. Intellectual property (research view 10, corporate view 10): Create and secure IP to ensure commercially exploitable benefits, to maximize competitive advantage and to protect innovation leadership.
2. Alignment & transfer (research view 30, corporate view 40): Give input to Business Units/Development to drive incremental innovation into SAP’s existing product portfolio and influence future products.
3. Image (research view 15, corporate view 5): Create a positive image (internal & external) to ensure successful interactions with all partners.
4. Operational excellence (research view 5, corporate view 15): Ensure efficiency and effectiveness in what is an obligation in today’s business processes.
5. Technology creation (research view 30, corporate view 20): Achievement of technical goals within a project or research field.
6. Talent (research view 10, corporate view 10): Discover, develop and retain talented resources for SAP.
Each column of weightings sums to 100.
Source: Compiled based on interviews with representatives of SAP Research (The detailed list of SAP Research interviewees is in Appendix C)
4.1.3.7 Research KPIs
The following table (Table 4.3) shows the list of key performance indicators used by SAP Research together with the metrics that are used to evaluate them.
16 A social bookmark and publication sharing system.
Table 4.3 Subgoals and KPIs used to assess SAP Research (subgoal: KPI)
• Transfer efforts into business units: # of person days delivered to BUs
• Rate of significant contributions to business units: subjective evaluation by the BU manager of the significance of the transferred research results for development
• Product/service scenarios selected for board & CTO consideration: # of products/scenarios with a positive rating by the Advisory and Investment Board
• Vision research program in place and reviewed: research program paper of the program manager, which receives the subjective approval of the superior manager
• Research program implementation: extent to which projects meet the vision of the research program (subjective evaluation)
• Ratio of key technology trends uncovered by SAP Research 3 years ago that are considered strategic for today’s product development: %
• EU projects delivered on time and budget: %
• Sponsored research contract results disseminated within SAP: # of projects conducted at universities with a fixed grant from SAP Research
• Money provided from outside vs. total budget: %
• Quality IDFs submitted to SAP IP Office: # of filed IDFs
• Patents granted to SAP Research vs. SAP all: %
• Different idea submitters: # of different people who submit proposals for research projects
• Ideas/business cases that pass the first quality gate (Steering Committee) and get a positive rating by the community: # of proposals achieving a positive subjective rating by the committee and by SAP employees
• Business cases approved by SAP Board: # of proposals approved by the Executive Board
• Stimulation of innovative talent: no metric in place
• Attract external senior scientists to join SAP Research: # of external researchers joining SAP Research
• Research top talent leaving to other SAP areas: # of SAP researchers
• Image of SAP Research within SAP: no metric in place
• External image of SAP Research: no metric in place
• Scientific publications: # of publications
• Publications: # of publications in newspapers, customer magazines, etc.
• Partner and customer events/workshops: # of events where demonstrators or prototypes are shown
• Company fair contributions: # of internal events where awareness within the BUs can be raised
• Quality of patent filings: ratio of # patents filed vs. # patents granted
• Ph.D. theses completed: #
• Guest researchers from universities: #
Source: Compiled based on interviews with representatives of SAP Research (The detailed list of SAP Research interviewees is in Appendix C)
4.1.3.8 Mapping Goals and KPIs
The following table (Table 4.4) shows the mapping of subgoals to goals and their relative weighting that has been developed with SAP during a series of interviews.
Table 4.4 Mapping KPIs to goals and their relative weighting
Goals (columns): Intellectual Property; Alignment & Transfer; Image; Operational Excellence; Technology Creation; Talent. Each KPI (row) is assigned a relative weight under the goal(s) to which it contributes, and the weights within each goal column sum to 100.
KPIs (rows): Transfer efforts into business units; Rate of significant contributions to business units; Product/service scenarios selected for board & CTO; Vision research programme in place and reviewed; Research program implementation; Technologies uncovered 3 years ago that are strategic now; EU projects delivered on time and budget; Sponsored research results disseminated within SAP; Money provided from outside vs. total budget; Quality IDFs submitted to SAP IP Office; Patents granted to SAP Research vs. SAP all; Different idea submitters; Ideas passing first quality gate with a positive rating; Business cases approved by SAP Board; Stimulation of innovative talent; Attract external senior scientists to join SAP Research; Research top talent leaving to other SAP areas; Image of SAP Research within SAP; External image of SAP Research; Scientific publications; Publications; Partner and customer events/workshops; Company fair contributions; Quality of patent filings; PhD theses completed; Guest researchers from universities.
For example, the Intellectual Property column distributes its weight between “Quality IDFs submitted to SAP IP Office” (30) and “Patents granted to SAP Research vs. SAP all” (70).
Source: Compiled based on interviews with representatives of SAP Research (The detailed list of SAP Research interviewees is in Appendix C)
In this regard, the table actually shows the subjective assessment of which goals the KPIs are likely to affect.
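To illustrate how such a compilation can be evaluated, the following sketch computes goal-level and department-level scores from the two matrices. The structure mirrors Tables 4.2 and 4.4, but only the Intellectual Property split (30/70) is taken from the table; the remaining weights and all attainment scores are hypothetical placeholders, and only two goal columns are populated in this toy example.

```python
# Goal weights (research view, cf. Table 4.2); they sum to 100.
goal_weights = {
    "Intellectual property": 10, "Alignment & transfer": 30, "Image": 15,
    "Operational excellence": 5, "Technology creation": 30, "Talent": 10,
}

# Per goal: KPI -> weight (each column sums to 100, cf. Table 4.4).
# Only the Intellectual Property split is documented; Talent is invented.
kpi_weights = {
    "Intellectual property": {
        "Quality IDFs submitted to IP office": 30,
        "Patents granted vs. company total": 70,
    },
    "Talent": {
        "External senior scientists attracted": 50,   # hypothetical
        "Top talent moving to other company areas": 50,  # hypothetical
    },
}

# Hypothetical normalized attainment scores (0.0 = missed, 1.0 = met).
attainment = {
    "Quality IDFs submitted to IP office": 0.8,
    "Patents granted vs. company total": 0.6,
    "External senior scientists attracted": 1.0,
    "Top talent moving to other company areas": 0.5,
}

def goal_score(goal: str) -> float:
    """Weighted average of KPI attainment for one goal (0..1)."""
    return sum(w / 100 * attainment[k]
               for k, w in kpi_weights.get(goal, {}).items())

def department_score() -> float:
    """Goal scores aggregated with the goal weights (0..1). Goals without
    a populated KPI column contribute zero in this toy example."""
    return sum(w / 100 * goal_score(g) for g, w in goal_weights.items())

print(f"Department score: {department_score():.3f}")
```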
4.1.3.9 Conclusions
Most of the KPIs are clearly assigned to well-defined research subgoals in a formalized framework. The subgoals and KPIs are developed in an iterative process between the research department and the top management and serve the strategic adjustment of the research activities. Where it makes sense, subgoals are further broken down and become part of personal target agreements with individual researchers. Other observations:
• PMS not mature, but advancing as the organization grows;
• No financial evaluation of research results;
• Mainly objective measurement criteria;
• Image is set as a goal, but no definite KPI has been defined;
• In general, 4–5 KPIs are used to measure a single goal.
4.1.4 Initial Findings and Further Approach
When conducting the case studies, two kinds of companies were found: (1) companies with sophisticated performance measurement that could easily answer all our questions related to Part 2 of the case study, and (2) companies where performance measurement was weakly developed. The latter cases can be characterized as follows: although goals exist, their priorities are not defined, the goals are not really reflected in the KPIs, and/or no weightings are available. If priorities of research goals were missing, the interviewees were first asked for an estimated weighting of the research goals according to their perceived importance. Secondly, the interviewees were asked about the KPIs that they were monitoring and how they would relate them to goals. As mentioned earlier, we used the concept of subgoals to develop this mapping, which resulted in the PMS matrix compilation. The mappings in our case studies do not necessarily reflect the concrete performance management system in place for the research department, but rather indicate how performance is perceived and measured.
When examining the case studies with respect to the feedback loops of our theoretical framework, it becomes evident that the feedback loops implemented with respect to performance evaluation only partially match. The feedback loops that are introduced in the theoretical framework (four loops coming back from activities, two from outputs and outcomes) are in reality only formally implemented through performance measurement cycles. The difficulty of holistic measurement is that the performance measurement cycles, including the internal reward cycles, are executed on an annual basis. Therefore, companies generally tend to only reflect
annual and/or shorter-term (mostly quarterly) measurement cycles. This annual assessment is a consequence of the nature of the financial year and the necessity for the organizational unit to plan its budget and the allocation of its resources adequately. In the case of activities monitored on a quarterly basis, and outputs that can be measured annually, there is no problem. However, the real value-added outcomes from research, such as additional license revenues or brand value, are only quantifiable after a significant time delay and therefore cannot be measured within a single annual performance measurement cycle. Nevertheless, it is common practice to measure these outputs with KPIs, even though the KPIs may only reflect the longer-term outcomes to a very limited extent. The outcomes are also difficult to measure because the value of an isolated research output within an aggregate company outcome is difficult to assess. This is particularly true where large and complex software systems are produced and the research results may ultimately represent only a small, yet still critical, part of the overall system.
One hundred and sixty-four different KPIs were collected from the eight case studies, as well as 45 organizational goals, of which 30 were unique. These numbers by themselves hint at great complexity and indicate that a direct comparison of performance measurement between companies with contrasting research goals and KPIs is not possible at this level of granularity. To cope with this situation (164 different KPIs), an abstraction of the KPIs was needed to permit a harmonization of KPIs according to the framework which we introduced in Fig. 4.1. Such an abstraction could, for example, be similar to the four categories of the balanced scorecard; it needs to reflect the activities of a research department and mirror the content perspective whose performance it ultimately attempts to assess. The information gained from the case studies served two purposes: (1) the development of an activity-oriented performance management system, and (2) the preparation of a large quantitative survey to validate the findings gained from the qualitative analysis. Altogether, four major analysis steps were carried out:
1. Performance Cluster Definition: an in-depth analysis of the performance data collected in the case studies, deriving a set of generic research activities based on a grouping of KPIs and their relationship to goals. As a result, 11 performance clusters representing the performance-related research activities were identified and validated;
2. KPI Consolidation: the KPIs of each performance cluster were classified into KPI classes;
3. Cluster Analysis: the weights of the performance clusters were calculated and compared across the case studies (a minimal sketch of this calculation follows below);
4. Goal Spectral Analysis: the relationships between individual goals, as well as between goals and KPIs of research departments, were analyzed in terms of performance cluster contributions.
In the following section, we present the results arising from the four analysis steps.
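As a compact illustration of step 3 above: a cluster's weight within one company's PMS can be read off the two matrices by summing, over the cluster's KPIs, the product of the goal weight and the KPI-to-goal weight. The following sketch is our own; the cluster membership and all numbers are invented.

```python
def cluster_weight(cluster_kpis, goal_weights, kpi_weights):
    """Relative weight (0..1) of one performance cluster in a PMS.

    goal_weights: {goal: weight}, summing to 100 (cf. Table 4.2).
    kpi_weights:  {goal: {kpi: weight}}, each column summing to 100.
    cluster_kpis: set of KPI names belonging to the cluster.
    """
    return sum(
        gw / 100 * kw / 100
        for goal, gw in goal_weights.items()
        for kpi, kw in kpi_weights.get(goal, {}).items()
        if kpi in cluster_kpis
    )

# Hypothetical example: an "IP creation" cluster in a PMS where the
# Intellectual Property goal carries 10 of 100 weight points.
goals = {"Intellectual property": 10, "Technology creation": 90}
kpis = {"Intellectual property": {"IDFs filed": 30, "patents granted": 70}}
print(cluster_weight({"IDFs filed", "patents granted"}, goals, kpis))  # 0.1
```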
4.2 Performance Clusters
Initially we assumed that a direct comparison of the performance management systems of different companies would be possible by directly comparing the collected research goals and KPIs. However, this turned out to be extremely difficult for two reasons:
1. Although the goals of different companies appear similar at first glance, when exploring the KPIs that were used to measure these goals, it became apparent that the interpretation of the goals by each company is markedly different. Likewise, the opposite case exists: although goals initially sounded very different, the companies seemed to be evaluating similar things from the perspective of KPIs.
2. A direct comparison at the KPI level is simply impossible due to the sheer number of different KPIs.
4.2.1 Approach and Definition
In order to cope with this situation we used a clustering approach that allowed for a harmonization of KPIs according to our framework introduced above (cf. Fig. 4.1). The harmonization approach draws upon a grouping of KPIs from a content perspective that especially reflects the activities that are being conducted, or are needed, in order to contribute to the achievement of goals. The following four steps describe the evolution of the performance clustering process.
Step one: Clustering of KPIs. The initial set of clusters was identified by applying the following procedure to the KPIs collected in the case studies:
• The KPIs were analyzed with respect to their similarity regarding the artifact classes (i.e. inputs, outputs, activities and outcomes) that are measured.
• The assignment to clusters was done by answering the following two questions: first, what is being measured (artifact), and second, why is it being measured (goal/subgoal)? KPIs addressing similar artifacts (what?) in a similar context (why?) were assigned to the same cluster. In general, there was a clear match between one KPI and one cluster, with a few exceptions:
– In some cases the same KPI was assigned to multiple clusters due to the different contexts in which it was measured (the same KPI contributed to different goals);
– In other cases the same KPI was split into its constituent parts, reflecting the different artifacts that were measured; the KPI constituents were then assigned to clusters.
Step two: Cluster validation. The clusters were validated by a literature review. In particular, it was based on the OECD Frascati family of manuals elaborating the concept of science and technology
indicators, and on the work of Chesbrough17 on the open innovation concept. The result of this analysis was a set of typical research activities related to each cluster.
Step three: Identification of the underlying process
From our observations, based on our framework (cf. Fig. 4.1) of a research department and its environment, and from the literature review, we realized that the activities summarized in the identified clusters can be organized in an underlying process. We examined each performance cluster to refine its activities by either decomposing the identified processes into activities, or by composing processes from the set of identified activities. The underlying cluster process can embrace the entire ecosystem of a research department and thereby even cross the boundaries of the company. Typical processes in this regard are: portfolio management, project management, partnerships/networking management, knowledge management, intellectual property management, idea management, and technology asset management.
Step four: KPI consolidation
As a consequence of the identified underlying cluster processes, we revisited the original groups of KPIs and abstracted from the concrete KPIs to generic KPI classes that are organized according to the underlying cluster processes. By abstracting from the concrete company-specific processes of the case studies to more general processes, we were able to consolidate the KPIs within the performance clusters by formulating more generic KPIs. To derive more generic KPIs we analyzed the measurability of activities within each performance cluster: what is measurable about the activities (e.g. writing papers: number, quality, etc.), which of them reflect a process, or a part of a process, and if so, which parts of the process are measured? Depending on the process, in some cases the whole activity chain has to be assessed, i.e. also the parts that go beyond the research department, in order to attain the outcome and to be able to assess it. As a result, we were able to derive 37 KPI classes (see definition below) from the original 160 KPIs.
As a result of the creation process described above, we are now able to define the term performance cluster:
A performance cluster comprises a collection of typical research activities as well as a set of specific KPI classes that relate to a common underlying process.
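The grouping logic of steps one and four can be pictured in a few lines of Python. This is only a sketch under the assumption that each KPI can be tagged with a measured artifact ("what") and a measurement context ("why"); the KPI names and tags below are invented for illustration, not taken from the case studies.

```python
# Hypothetical sketch of the clustering step: KPIs are grouped by the measured
# artifact ("what") and the measurement context ("why"); KPIs sharing both
# dimensions end up in the same cluster. All entries are illustrative.
from collections import defaultdict

kpis = [
    {"name": "number of papers published", "what": "publication", "why": "disseminate results"},
    {"name": "number of best paper awards", "what": "publication", "why": "disseminate results"},
    {"name": "number of patents granted",   "what": "patent",      "why": "protect inventions"},
]

clusters = defaultdict(list)
for kpi in kpis:
    # KPIs addressing a similar artifact in a similar context share a cluster.
    clusters[(kpi["what"], kpi["why"])].append(kpi["name"])

for (what, why), members in clusters.items():
    print(f"cluster({what} / {why}): {members}")
```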
The underlying performance cluster processes can be characterized by the following properties: • Company internal vs. company external focus; • Formal vs. informal processes; and • Time pattern. Some of the underlying processes of the performance clusters are strongly linked to internal processes. The internal focus allows for an easier assessment of their performance, since measurement can be organized alongside a formalized and fairly transparent company-wide process with well-defined steps. External
17 Chesbrough (2003), Chesbrough et al. (2007).
processes are often less formalized, or their individual steps are less transparent, with regard to objective assessment. The degree of formalization becomes apparent in the way in which the processes of an organization, or the steps of process formation, are defined. In the case of informal/latent processes there is a lack of transparency, and as a consequence, the possibilities to assess the progress of the processes are limited. Another characteristic of the underlying process is the time lag between the execution of the activities and the manifestation of their results. From the performance management context, the following timing patterns are conceivable: short-term consideration is within one performance management cycle; long-term consideration is at least one cycle and often more; non-termed is a continuous occurrence and is therefore not limited time-wise, nor is it bound to the performance management cycle. Finally, we define a KPI class as follows:
A KPI class summarizes a set of KPIs with at least one common denominator in order to assess certain activities of a performance cluster.
The common denominator could, for example, be the artifact class in terms of input, output, process and outcome, or the generic artifact property regarding quality or quantity. Table 4.5 lists the 11 performance clusters, which were identified via the creation process described above.

Table 4.5 Performance clusters (a)
No | Underlying process | Cluster name | Acronym
1 | Transfer research results | Technology Transfer | TT
2 | Identify future business opportunities | Future Business Opportunities | FBO
3 | Define and manage the research portfolio | Research Portfolio Management | RPM
4 | Create intellectual property | Intellectual Property Creation | IPC
5 | Operate effectively and efficiently | Operational Excellence | OpEx
6 | Develop a talent pool | Talent Pool | TP
7 | Create a thought leadership image | Image | IMG
8 | Publish research results | Publications | PUB
9 | Show presence in the scientific community | Presence in Scientific Community | PSC
10 | Collaborate with academia | Collaboration with Academia | CA
11 | Collaborate with partners and customers | Collaboration with Partners and Customers | CPC
(a) In the remainder of this document we omit the term 'performance' when referring to concrete performance clusters (e.g., Technology Transfer cluster or TT cluster instead of Technology Transfer performance cluster)
In the following sections we describe each performance cluster in detail. The performance clusters are characterized, according to our definition, from the following perspectives:
• The underlying process which predominates the performance cluster;
• Corresponding activities;
• A set of KPI classes with the relevant artifact classes and properties; • Examples of concrete KPIs with concrete artifacts and properties; • References to literature that validate the cluster.
4.2.2 Cluster: Technology Transfer18
The Technology Transfer (TT) cluster comprises all activities that are relevant to handing over research results to other groups of a company, typically development or production units. The transferred results must have a (commercial) exploitation potential, and the recipient must have an interest and/or a plan in place for how to exploit them. Exploitation is in general deemed to be commercial and has two facets: cost saving and revenue generation. Therefore, the important aspect of the TT cluster is to capture and assess the output and/or outcome of research and transfer activities. Output includes the tangible artifacts that are transferred, their refinement during the transfer process, as well as all activities that help to convey the knowledge upon which the artifact is built in order to ensure further (product) development by the receiving unit. In this regard we consider code a tangible result in the software industry. The handover may happen in different ways, ranging from explicitly-defined and formalized processes to simple informal knowledge exchange. The more formalized the process, the easier it is to define and measure process parameters besides the above-mentioned output and outcome parameters. Since the focus is clearly on output and outcome, the details of the underlying process are of less importance. As mentioned above, the term "transfer" implies that two parties (sender and receiver) are involved. The findings from our case studies show that, from the organizational perspective, the activities around technology transfer are commonly organized within projects. The advantage of the "project" form is its formal character, which forces both parties to define the transferred artifact and to estimate its value from each perspective. The assessment of the technology transfer will depend on how the projects are finally organized. Ultimately, all companies will organize the details of this process differently, depending upon their specific needs and organizational settings. As such, the concrete design of the process is less relevant. Of greater relevance are the process characteristics which can be measured, or which ideally would be measured, in terms of inputs, specific gates, outputs or outcomes. For the TT process to succeed, a number of generic steps need to be followed. Figure 4.9 depicts the steps involved in such a process. While each concrete case (transfer project) is different, some of the key steps and questions involved in bringing new technologies forward through the value chain of the company can be uniformly presented. These key steps are:
18 Here we consider intra-organizational technology transfer within a company – between departments, not between different organizations or institutions.
Fig. 4.9 Generic Technology Transfer process19: (1) identification of transfer potential, (2) business case, (3) terms & conditions, (4) sign-off, (5) implementation, (6) closing
• Identification of a need, or enthusiasm, for a new idea/concept/technology (research output) in, e.g., the business or development unit;
• Design of a pro-forma business case;
• Definition of terms and conditions (resources/funding);
• Sign-off (agreement);
• Implementation of the transfer object (execution/monitoring);
• Closing/transfer completion and cross-charge or acknowledgment/product launch.
Typically, technology transfer20 in an industrial research environment can originate from two different impulses: technology push21 and technology pull. Depending on the motivation, the process is accordingly different. In the case of technology push (where scientific and technological knowledge and skills can be applied to invent a new product or process), the situation typically is as follows: the technology might be ready from the technical point of view, and some example use cases identified, but a concrete business scenario is still needed to test the technology in real life. The latter case is different: there is a concrete request (recognition of a need or a potential for an invention) for a specific technology. Depending on the concrete case, the process may encompass steps 1–6 in the case of technology push, whereas in the case of technology pull, the process will most likely start directly at step 3. In the case of technology transfer, the process typically starts towards the end of the value chain of research (with the completion of a result to be transferred), and reaches beyond the borders of the research department, ending with further development of the artifact by the receiving unit. Using the outlined underlying process and analyzing the original KPIs that were assigned to the TT performance cluster from the case studies, we derived five KPI classes that together cover all KPIs (with some of the original KPIs being split into two or more parts and the parts assigned accordingly):
• Transfer volume: volume of the technology transferred to development or other (business) units;
• Transfer result quality: quality of the research results transferred;
19 Edvinsson and Malone (1997), Pham (2008), p. 92.
20 Taschler and Chappelow (1997), pp. 29–34.
21 Sometimes also referred to as "idea push".
• Transfer process quality: quality of the transfer process or transfer activities;
• Transfer significance: significance of the transferred research results for the receiving unit; and
• Economic transfer value: economic transfer value of the transfer activity or transferred research results.
The generic TT process and the mapping to KPI classes associated with this performance cluster are shown in Fig. 4.10.

Fig. 4.10 Generic Technology Transfer process (steps as in Fig. 4.9) mapped with KPI classes: (1) transfer volume, (2) transfer result quality, (3) transfer process quality, (4) transfer significance, (5) economic transfer value
Examples of typical technology transfer KPIs are:
• Transfer efforts from research into business units, measured by the number of person days delivered to BUs;
• Number of component deliveries;
• Significance of research projects, measured by the investment of the business units in research projects, e.g. in terms of resources dedicated to jointly-conducted research projects;
• Time spent by researchers in the development or business units;
• Net present value (NPV) of the transferred artifact.
Many literature sources highlight the importance of technology transfer22 for research organizations. This issue, in particular in the industrial research domain, is intensively studied and well-understood. The most famous example is probably the book on high-profile failures at Xerox PARC.23 Nowadays, many companies have a dedicated "Office for Technology Transfer" to identify research
22 Tipping et al. (1995), pp. 32–63, especially their fourth managerial factor IWB, which stands for "Integration with Business"; Schmoch et al. (2000), Sommerlatte (2006), Geschka (1988).
23 Smith and Alexander (1999).
output that has commercial potential and to develop strategies to exploit it. To sum up, there are many processes and cross-divisional activities within companies that address technology transfer. These aim to seamlessly support innovation by reducing barriers for the diffusion of new technologies between departments.
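As an illustration of the "economic transfer value" KPI class mentioned among the examples above, the following minimal Python sketch computes the net present value of a transferred artifact. The cash flows and the 10% discount rate are invented for illustration; a real transfer project would use company-specific figures.

```python
# Minimal sketch of the "economic transfer value" KPI class: the net present
# value (NPV) of a transferred artifact. All figures are invented.
def npv(rate: float, cash_flows: list[float]) -> float:
    """Discount a series of yearly cash flows (year 0 first) to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Year 0: transfer cost; years 1-3: additional license revenues enabled.
print(round(npv(0.10, [-200_000, 90_000, 120_000, 150_000]), 2))
```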
4.2.3 Cluster: Future Business Opportunities
The Future Business Opportunities (FBO) cluster summarizes activities around the variety of research output and outcome, from fuzzy ideas to concrete business plans, all of which have the potential to lead to new income streams. While the TT cluster focuses on the concrete transfer process of research results into the development phase, the FBO cluster deals with the evolution of ideas within the overall innovation process and the contribution of the research department to this process. Both the need for and the importance of innovation, and the fact that innovation takes place on the market and is closely connected to commercial success, are undisputed. Therefore, the FBO performance cluster is fundamental to innovation management, especially in industrial research. It encompasses aspects of idea generation and incubation, as well as new business development. It also includes facets of entrepreneurship in response to identified opportunities.24 The Oslo Manual25 covers the subject of development and diffusion of new technologies, addressing technological innovation data. The manual has become the reference work for numerous large-scale surveys that examine the nature and impacts of innovation in the business sector. The manual shows that these activities are essential for research departments and therefore endorses the existence of this performance cluster. Figure 4.11 represents a generic process26 underlying the FBO cluster. The names of the stages and their demarcations, and to a minor extent even their sequence, might differ slightly from company to company; however, all stages are necessary in principle:
• Idea generation: this stage can happen in multiple ways, as there are many different techniques and methodologies to support it (see especially Geschka 2006 for details).
• Idea screening and evaluation regarding attractiveness, potential business impact and compatibility with the objectives of a company.
• Concept generation and feasibility studies: this stage develops a concept for the implementation of an idea and evaluates the feasibility of this concept.
24 Heuser (2006), pp. 271–290.
25 OECD (2005), Oslo Manual.
26 Scholl (2006), pp. 163–194, Heuser (2006), pp. 271–290, Geschka (2006), pp. 217–248.
• Finding sponsorship and piloting: with the sponsorship of an internal or external investor, this stage develops a pilot in order to validate the concept and its feasibility in a concrete business scenario.
• Implementation and incubation: the transfer of the pilot results to the development organization and its further development into a final product/service takes place during this stage.
• Commercialization: this stage comprises a go-to-market strategy and the product/service roll-out.

Fig. 4.11 Generic process of the Future Business Opportunities cluster: (1) idea generation, (2) idea screening & evaluation, (3) concept generation / feasibility, (4) finding sponsorship / piloting, (5) implementation / incubation, (6) commercialization
This process is often visualized as a funnel, or a series of funnels, which implies that there are several stages to pass, with each stage having specific evaluation criteria. The number of ideas is reduced stage by stage, and thereby the most promising ideas are developed into real business opportunities. It is important to mention that the departments and people involved may change several times during the process. This is especially true if the level of maturity of an idea from the technological perspective is quite low. Because of these characteristics, the FBO cluster belongs to the group of fairly formalized, company-internal processes. The duration needed for an idea to pass through the various stages may span several performance management cycles and therefore exhibits a long-term pattern. This is particularly true for ideas with a low level of technological maturity and therefore a high likelihood that the research department is involved. The three KPI classes of the FBO cluster are arranged along this process (see Fig. 4.11):
• Intensity of input into the FBO process;
• (Weighted) # of ideas moved to a certain or the next phase of the FBO process; and
• (Achieved) business impact of an idea in terms of its economic value.
The three KPI classes comprise measures for the input (1), process (2), and output/outcome (3) of the FBO process. Using the throughput as the process measure seems more appropriate than measuring the mean "production" time, as the variance of the production times may be quite high due to the aforementioned different technology maturity levels. Therefore, ensuring and assessing a continuous stream of ideas seems to be the more appropriate choice. The generic FBO process and the mapping to KPI classes associated with this performance cluster are shown in Fig. 4.12.
Fig. 4.12 Generic process of the Future Business Opportunities cluster mapped with KPI classes: input – intensity of input into the innovation process; throughput – # of ideas moved to a certain phase of the innovation process, or (weighted) # of ideas moved to the next phase; output/outcome – (achieved) business impact of an idea in terms of its economic value
The following KPI examples have been collected in the case studies and are summarized within the FBO cluster:
• Number of ideas;
• Number of ideas per researcher;
• Number of ideas with business case developed;
• Number of ideas with demonstrator developed;
• Number of ideas handed over to development;
• Number of start-ups created in research;
• Number of products/business scenarios with a positive rating by the Advisory and Investment Board;
• % of key technology trends uncovered 3 years ago that are considered strategic for today's product development.
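The throughput KPI class can be illustrated with a small Python sketch of the funnel. The stage names follow Fig. 4.11, but the idea counts and the weighting scheme (later stages weighted more heavily) are purely illustrative assumptions.

```python
# Hypothetical sketch of the FBO throughput KPI class: (weighted) number of
# ideas moved along the funnel of Fig. 4.11. Counts and weights are invented.
stages = ["generated", "screened", "concept", "piloted", "incubated", "commercialized"]
ideas_per_stage = [120, 40, 18, 7, 3, 1]   # ideas reaching each stage
stage_weights = [1, 2, 4, 8, 16, 32]       # assumed: later stages count more

# Conversion rate between consecutive funnel stages.
for stage, (ideas, ideas_next) in zip(stages, zip(ideas_per_stage, ideas_per_stage[1:])):
    print(f"{stage} -> next stage: {ideas_next / ideas:.0%} conversion")

weighted_throughput = sum(n * w for n, w in zip(ideas_per_stage, stage_weights))
print("weighted throughput:", weighted_throughput)
```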
4.2.4 Cluster: Research Portfolio Management
The Research Portfolio Management (RPM) cluster characterizes a certain maturity level of the research department, which is reflected in the extent to which the portfolio of the research department is organized and/or managed. The RPM cluster encompasses all kinds of activities: from structuring and organizing the content areas that a research department is active in, and describing the visions of these areas, through to developing and maintaining concrete implementation plans to achieve these visions. The activities of the RPM performance cluster are partially classified as management activities according to the Frascati Manual (FM). Generally, for the in-house (not governmental) management activities, a distinction is made between: ". . . direct support for R&D by persons such as R&D managers closely associated with individual projects, who are included in both the personnel and expenditure series, and persons such as financial directors, whose support is indirect or auxiliary and who are included in the expenditure series only as an element of overheads".27 The argument used by the FM to include these activities in R&D is as follows: ". . .when these contribute directly to R&D projects and are undertaken
27 OECD (2002), Frascati Manual, p. 19, clause 26.
exclusively for R&D, then they are part of R&D".28 The FM lists ". . . the R&D manager who plans and supervises the scientific and technical aspects of the project or the person who produces the interim and final reports containing the results of the project"29 as a typical example of such activities. Furthermore, a clear overlap with our performance cluster RPM is given by Tipping et al.'s (1995) second Managerial Factor30: the Portfolio Assessment (PA). This factor is identified as communicating the total R&D program across various dimensions of interest, including time horizon, level of risk, core competency exploitation, and new/old business. The RPM is a dynamic decision process within which the number of active research areas, programs or projects is constantly being revised. This includes new research areas, programs or projects being evaluated, selected, and prioritized, as well as existing ones being accelerated, killed, or re-prioritized, for example by allocating and reallocating resources. "The portfolio decision process is characterized by uncertain and changing information, dynamic opportunities, multiple goals and strategic considerations, interdependences among projects, and multiple decision makers and locations".31 The following five generic steps represent a simplified portfolio management process within an industrial research department (Fig. 4.13).
Fig. 4.13 Generic Research Portfolio Management process: (1) identification of research areas/programs – a portfolio, (2) elaboration and assessment of a vision for a research area/program, (3) elaboration and validation of a strategic roadmap, (4) execution and implementation of the determined portfolio, (5) evaluation of the implementation roadmaps
• Identification: this process step includes trend scouting and an initial valuation of relevance to the business of a company.
• Elaboration and assessment of the vision tackles the next granularity level after a research area is determined. At this stage an individual area is defined and agreed upon in terms of a longer-term vision.
• Elaboration and validation of a strategic roadmap: this step involves activities to define a concrete roadmap that is going to be implemented to achieve the vision agreed upon in the previous step. It also contains a consolidation of the individual roadmaps into an overall research portfolio.
• Execution and implementation of the determined portfolio.
28 OECD (2002), Frascati Manual, p. 91, clause 293.
29 OECD (2002), Frascati Manual, pp. 45–46, clause 132.
30 Altogether five Managerial Factors have been identified by Tipping et al. (1995), pp. 32–63; compare Sect. 3.3.2, pp. 107–108.
31 Cooper et al. (2001), p. 3.
• Evaluation and readjustment of the implementation roadmaps: this step reflects the resulting portfolio at a certain point in time, as it is an on-going (live) process, which also includes the closing and phasing out of research areas.
Within these five steps, which are presented at a high level here, numerous activities represent the detailed, lower-level decision-making process: optimal allocation of resources, selecting the right projects and killing poor ones, deciding on spending in the right areas, eliminating redundancies, aligning projects with an overall strategy, managing gaps in the existing portfolio, and planning the next periodic review of the total portfolio. The RPM cluster can be assessed with the following four KPI classes:
• Structure and quality of the research (i.e. certain technology areas);
• Visions related to the individual parts of the research portfolio and their quality;
• Roadmaps to achieve the visions and their quality; and
• Implementation of the roadmaps and the quality of contributions from research projects.
These four KPI classes sequentially assess the progress within the process. The generic RPM process and the mapping to KPI classes associated with this performance cluster are shown in Fig. 4.14.

Fig. 4.14 Generic Research Portfolio Management process mapped with KPI classes: structuring – structure and quality of the research; vision – visions related to the individual parts of the research portfolio and their quality; roadmap – roadmaps to achieve the visions and their quality; implementation – implementation of the roadmaps and the quality of contributions from research projects
The following KPI examples, summarized within the RPM cluster, have been collected in the case studies:
• Vision paper for Research Programs available from the Program Manager and approved by the superior manager;
• Extent to which projects contribute to the vision of a Research Program (subjective evaluation);
• External subjective evaluations by recognized experts in the research field: "normal" accomplishment (solved an unsolved problem / generated visibility around his/her research);
• Percentage of the maximum score in the Project Portfolio Score Sheet, averaged over all program areas (creating and maintaining a well-balanced research project portfolio);
• Subjective, qualitative evaluation by the project lead and program manager;
• Subjective evaluation of the structure of the research work by internal or external reviewers (simplest way: the structure is in place or not); • Subjective evaluation of the vision documents of the individual research programs by internal or external reviewers (simplest way: vision documents are available or not); • Subjective evaluation of the roadmap to reach the vision by internal or external reviewers (simplest way: the roadmap document is in place or not); • Subjective evaluation of the scientific accomplishments, technical achievements, or technical impact by internal or external reviewers.
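One of the RPM examples above, the percentage of the maximum score in the Project Portfolio Score Sheet averaged over all program areas, reduces to a simple computation, sketched below. The maximum score, the program areas, and the individual scores are invented for the sketch.

```python
# Hypothetical sketch of the KPI "percentage of maximum score in the Project
# Portfolio Score Sheet, averaged over all program areas". Data is invented.
MAX_SCORE = 50  # assumed maximum per score sheet

program_area_scores = {"Area A": 41, "Area B": 35, "Area C": 47}

# Per-area share of the maximum score, then the average over all areas.
shares = [score / MAX_SCORE for score in program_area_scores.values()]
print(f"portfolio KPI: {sum(shares) / len(shares):.0%} of maximum score")
```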
4.2.5 Cluster: Intellectual Property Creation
The Intellectual Property (IP) Creation cluster comprises KPIs that address research output and outcome in the form of ideas and inventions that are (1) created by researchers and that the company is trying to protect for a certain period of time; or (2) published by researchers in order to prevent others from protecting the idea or invention. The ideas and inventions are generally captured in invention disclosures. Depending on the IP strategy of the company, these might, for example, be filed as patents or made public as defensive publications. In paragraph 75, the Frascati Manual states that patent work directly connected with R&D projects is part of R&D. For example, preparing initial invention disclosures together with the supporting technical documentation used as a starting point for the filing is acknowledged as an activity conducted by the research department. This Frascati statement justifies the activities being grouped in this performance cluster. The manual also discusses the fact that there are several stages in the protection of IP, and that statistics considering these stages are more meaningful than statistics merely reflecting granted patents, cf. Annex 7, paragraphs 5–7 and 9. This affirms the underlying process of the cluster as well as the KPI classes which were identified. In addition, the Patent Statistics Manual32 deals with the measurement of scientific and technical activities. The fact that the OECD dedicated specific extra work to patents provides additional justification for subsuming the activities related to IP creation in a performance cluster of their own. The intellectual property creation process is most likely one of the most formalized processes in industry that secure competitive advantage. This results from the fact that the process has to be closely aligned with legislation regarding intellectual property protection, and in turn with the modus operandi of patent offices and other related authorities. The most typical and relevant stages of the process are depicted in Fig. 4.15 below:
• Invention Disclosure Form (IDF): a written document which is presented to the internal legal or IP department. This document contains information that is often
32 OECD (2009), Patent Statistics Manual.
confidential. In general it is a brief description of the invention, which might be a new process, a device, a technology, a new application, or an improvement to an existing product or process.
• First filing: the IP department typically drafts a patent application (first filing). This is then submitted to a national33 or international34 patent office.
• Patent: the patent office eventually decides whether the patent will be granted or the application will be rejected.

Fig. 4.15 Generic Intellectual Property Creation process: (1) invention disclosure form, (2) first filing, (3) patent granted / defensive publication
Our IP performance cluster has five KPI classes, which can be divided into two groups. The first summarizes quantitative KPI classes such as: • The volume of potentially protectable inventions submitted into the IP pipeline; • Volume of IDFs passing a certain intermediate stage in the external protection or publication process; • Volume of patents granted and/or defensive publications. The second group addresses qualitative KPI classes: • Quality of granted patents; • Alignment of research activities with the IP strategy of the company (Fig. 4.16).
Fig. 4.16 Generic Intellectual Property Creation process mapped with KPI classes – quantitative: volume of potentially protectable inventions submitted into the IP pipeline (1); volume of first filings out of the IP pipeline (2); volume of patents granted and/or defensive publications (3) – qualitative: alignment of research activities with the IP strategy of the company; quality of granted patents and/or defensive publications
33 For example: DPMA – Deutsches Patent- und Markenamt; INPI – Institut national de la propriété industrielle in France; USPTO – United States Patent and Trademark Office.
34 For example: WIPO – World Intellectual Property Organization; EPO – European Patent Office.
The following KPI examples have been collected in the case studies:
• Number of invention disclosures;
• Number of first filings, defensive publications, trade secrets;
• Number of patents granted to the company;
• Economic value of the granted patent;
• Number of submitted inventions addressing the IP strategy.
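The quantitative KPI classes of this cluster line up with the pipeline of Fig. 4.15, as the following minimal Python sketch shows. All volumes are invented, and the pass-through ratios are merely one plausible way to relate consecutive stages.

```python
# Hypothetical sketch of the quantitative IPC KPI classes along the pipeline
# of Fig. 4.15: IDFs submitted -> first filings -> patents granted or
# defensive publications. All counts are invented.
pipeline = {
    "invention disclosures": 80,
    "first filings": 35,
    "patents granted / defensive publications": 12,
}

stages = list(pipeline.items())
for (name_a, vol_a), (name_b, vol_b) in zip(stages, stages[1:]):
    print(f"{name_a} ({vol_a}) -> {name_b} ({vol_b}): {vol_b / vol_a:.0%} pass-through")
```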
4.2.6 Cluster: Operational Excellence
The Operational Excellence (OpEx) cluster deals with the efficiency of process execution in an organization. It contains a set of auxiliary processes that support the operational structure of an organization as a whole, or its constituent parts. The first three KPI classes in the cluster:
• Adherence to budget;
• Adherence to timelines; and
• Quality of the risk management in place
need to be allocated to the operational units so they can easily be consolidated again for monitoring purposes. In fact, as consolidated figures they represent the actual status of the research organization in terms of operational achievements. As a consequence, the KPIs belonging to these classes are quite often collected and evaluated several times per performance management cycle. The KPI class
• Quality of project management
is in fact a consolidation of the first three, in the sense that compliance with agreed deadlines is an important factor for the quality of a project. Additionally, it also deals with other project management tasks such as resource planning, conflict resolution, etc. The KPI class
• General risk sharing
is a partial answer to dealing with the uncertainty of research. This is predominantly done by sharing the investment in research undertakings. Several studies35 show that the perceived quality of the working environment has an impact on the creativity of people and teams. Therefore, the KPI class
• Quality of the working environment
is an important operational factor, in particular within a research department. From a KPI perspective, this is supported by our case studies, where the assessment of the working environment was present several times.
35 Amabile et al. (2003), p. 1157, Woodman et al. (1993), p. 295, Amabile and Conti (1994).
Based on these descriptions it becomes evident that the auxiliary processes of this cluster are mainly internal processes. Furthermore, they need to be fairly formal, especially as far as their breakdown and consolidation are concerned, and therefore the related KPIs are quite often evaluated several times within a single performance management cycle. The following examples of KPIs subsumed in the OpEx cluster have been collected in the case studies:
• Difference between the planned budget for each line item and the actual budget measured (least deviation is rated best);
• Successful (on-time) termination at a particular phase/gate;
• Data compliance at an agreed deadline;
• Scoring in the employee satisfaction survey;
• Percentage by which the potential damage to the bottom line has been lowered by adequate measures;
• Volume of third-party investment (in €), number of participations in externally-funded programs.
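The first KPI example above, budget adherence per line item, reduces to a simple deviation calculation, sketched below with invented line items and figures.

```python
# Hypothetical sketch of the "adherence to budget" KPI class: deviation of the
# actual from the planned budget per line item (least deviation rated best).
planned = {"personnel": 1_200_000, "travel": 80_000, "equipment": 150_000}
actual = {"personnel": 1_260_000, "travel": 65_000, "equipment": 149_000}

for item in planned:
    deviation = (actual[item] - planned[item]) / planned[item]
    print(f"{item}: {deviation:+.1%} deviation from plan")
```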
4.2.7 Cluster: Talent Pool
The Talent Pool (TP) cluster summarizes activities that refer to recruiting people into the research department, their continuous development inside the department, and eventually their transfer into other parts of the organization or back into its eco-system. The Frascati Manual devotes the following paragraphs to the measurement and classification of R&D resources: 68, 92, 96–100, 149, 300–303, 306–309, 313 and annex 7: 41–43. The sheer number of paragraphs shows how important this matter is. Thus, the OECD saw the necessity to produce an extra document addressing human resources aspects within R&D: the Canberra Manual.36 In order to obtain a complete picture of both the supply of and demand for human resources in science and technology, the manual is based on two dimensions: qualification and occupation. The manual states that highly-skilled human resources are essential for the development and diffusion of knowledge and that they constitute the crucial link between technological progress and economic growth, social development and environmental well-being. The manual argues that the lead times to train and to develop S&T skills are long and the costs high. The fact that the manual explicitly focuses on these topics highlights the importance of the TP cluster. As a consequence, it is legitimate to assess a research department upon metrics that deal with activities to train, retrain and utilize these expensive and often rare skills. Thus, our proposed cluster is justified. The processes within the Talent Pool cluster are all people-related processes. Historically, labor law enforces many formal processes with which all companies have to comply. These processes embrace, for example, the employment, deployment
36 OECD (1995), Canberra Manual.
and development of employees as well as the termination of work contracts. This explains the pronounced tendency toward formalized processes in this cluster. On a generic level, the following process stages can be identified (Fig. 4.17):

Fig. 4.17 Generic Talent Pool process: (1) inflow, (2) development, (3) outflow
• The Inflow stage contains the initial contact between the company and a potential employee, the assessment of the personality profile, and face-to-face interviews to attract talent.
• Development comprises the steps necessary to develop, deploy, manage and evaluate talent, such as 360° feedback, leadership diagnostics, coaching, mentoring, exposure to upper management, or other special incentives/opportunities.
• Outflow covers the movement, versus the retention, of talent from the research department into the other departments of a company. Quite often, along with an idea/business opportunity being transferred or further developed, people move along the value chain and thereby accompany the evolution from idea into a product or service.
Within the Talent Pool cluster we identified four KPI classes:
• Volume/quality of people hired into the research organization;
• Volume/quality of people leaving the research organization to move to other parts of the company/ecosystem;
• Volume/quality of development measures undertaken; and
• Quality of the people in the research department.

Fig. 4.18 Generic Talent Pool process mapped with KPI classes – quantitative: volume of people hired into the research organization (inflow); volume of development measures undertaken (development); volume of people leaving the research organization into other parts of the company/ecosystem (outflow) – qualitative: quality of people hired into the research organization; quality of development measures undertaken and quality of the people in the research department; quality of people leaving the research organization into other parts of the company/ecosystem
As can be inferred from the names of the KPI classes, these could be further subdivided into two groups with the focus on quality and quantity. We explicitly did
not do this, as we wished to avoid creating too many KPI classes in this performance cluster (Fig. 4.18). The following KPI examples, which are relevant for the TP cluster, have been collected in the case studies:
• Number of externally-attracted senior scientists who match the profile for which they were hired;
• Number of top talents moving to other parts of the company;
• Number of job rotations, number of PhDs finished;
• Subjective assessments through partners or stakeholders, team track records, 360° feedback.
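A minimal sketch of the quantitative Talent Pool KPI classes might normalize inflow, outflow, and development volumes by headcount. The normalization by headcount and all figures are assumptions for illustration, not a method from the case studies.

```python
# Hypothetical sketch of the quantitative Talent Pool KPI classes: volumes of
# inflow, development measures, and outflow per cycle. Figures are invented.
headcount_start = 200
hired = 25                 # inflow into the research organization
moved_internally = 12      # outflow into other parts of the company
development_measures = 310 # e.g. trainings, job rotations, mentoring sessions

print(f"inflow rate:  {hired / headcount_start:.1%}")
print(f"outflow rate: {moved_internally / headcount_start:.1%}")
print(f"development measures per head: {development_measures / headcount_start:.2f}")
```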
4.2.8 Cluster: Image
The Image (IMG) cluster comprises all activities that contribute to the reputation of an industrial research department. In order to attract the best talent, the creation of a strong positive image is at least as important for an industrial research department as it is for the corporation. In fact, a research department may also contribute greatly to the image of a company by way of perceptions about its innovativeness. It is also worth mentioning that the image of a research department has to consider not only people external to the company but also people within the company. Clusters such as TT and FBO might also contribute greatly to the goal achievement of the department, and therefore its internal perception becomes a key success factor. Assessing reputation, however, is a difficult and costly process.37 As an example of image assessment on the corporate level, consider Fortune magazine, which publishes an annual ranking of the World's Most Admired Companies.38 This is a well-established measure of corporate image. Image and reputation are soft factors that need to be managed carefully. In practice, departments like "public relations" or "communication" take up the role of creating and managing the corporate image. Although the underlying process of the IMG cluster is considered latent, less formalized and to a large extent external, there are some explicit parts of an image "process"39, such as:
• Identifying stakeholder groups (partners, customers, government institutions, specific research communities, media, financial institutions or public organizations) and determining their importance for the research department;
• Choosing target audiences for certain image campaigns;
• Defining a profile of a desirable research image via visual, verbal or behavioral elements;
37 Klein (1999).
38 Fortune Magazine (2009).
39 Kotler (1996).
• Analyzing the present image, comparing it to the desirable image, identifying image gaps and generating a plan for how to fill the gaps and thereby improve the image. These internal steps are supposed to trigger the external part of the “process”. As a consequence of these considerations the KPI classes of the IMG cluster aim to capture the quality of both: • Internal visibility: internal perception or internal recognitions; • External visibility: external perception or external recognitions. The artifacts relevant for these KPI classes are more likely output artifacts for internal perception, and more likely outcome artifacts for external perception. The following examples of KPIs have been collected in our case studies: • Number of visits by key decision makers, subjective evaluation of collaborations with internal groups (BUs, PDs, LOBs); • Number of contributions to internal conferences or exhibitions; • Number of awards, speeches, invited keynotes, medals, prizes, fairs and exhibitions.
4.2.9 Cluster: Publications
The dissemination and communication of the progress of research work is an integral part of research as such. There are many forms of communicating this progress. One of these has already been described within the IP cluster. Another typical form is the publication, which is preceded by the researcher's actual scientific investigation. The Publications (PUB) cluster comprises activities that produce research output in the form of scientific and non-scientific publications, including journals, books, proceedings, etc. Journals typically refer to peer-reviewed publications devoted to disseminating new research results and developments within specific disciplines, sub-disciplines or fields of study. These include original articles, research letters, research papers, and review articles. There are different types of books, such as monographs, which are relatively short books or treatises on a single scholarly subject written by one or more specialists. Chapters are major divisions of a book, each complete in itself but related in theme to the division preceding or following it. Edited works are collections of scholarly contributions written by different authors on a related theme. Proceedings typically refer to a published record of a conference, congress, symposium or other meeting whose purpose is to disseminate new research results and new developments within specific disciplines, sub-disciplines, or fields of study. The Frascati Manual identifies, in paragraphs 69 and 149, the publication of research findings in scientific journals as an R&D activity. Furthermore, in annex 7, paragraphs 21, 24 and 25 of the manual define bibliometrics as the generic term for
data on publications. It suggests that bibliometrics was originally limited to collecting data on the number of scientific articles and other publications, classified by author and/or by institution, field of science, country, etc., in order to construct simple "productivity" indicators for academic research. Subsequently, more sophisticated and multidimensional techniques based on citations were developed. The manual proposes the use of more sensitive measures of research quality using citation indexes and co-citation analyses. The visible part of a publication process is the submission to a journal or a conference. This stage, by nature, extends beyond the company boundaries and operates externally with the organizers of scientific conferences. Traditionally, such a journal or conference has a strict review process, often involving multiple blind peer reviews. The process of peer review has been well described by Meadows (1998).40 When a researcher submits his/her paper to the editorial office of a journal, it is first scanned by the editor, who may reject it either because it is "out of scope" (not dealing with the right subject matter for that journal) or because it is clearly of such low quality that it cannot be considered at all. Papers that pass this first hurdle are then sent to experts in the field (usually two), who are generally asked to classify the paper as publishable immediately, publishable with amendments and improvements, or not publishable.41 The peer review process is formalized and well-organized once it enters the journal's or association's boundaries. Within the industrial research department this process is, on the one hand, a concomitant phenomenon, because it is in the nature of researchers to publish the findings and insights of their own research. On the other hand, the volume and quality of publications represent the department's ability to originate new concepts/solutions. This cluster thus captures the quality of the research department itself. In our classification (cf. Table 4.6) the PUB cluster shows the following characteristics: external, formalized and non-termed. Non-termed applies to the PUB cluster since there is no synchronization with the performance management cycle, notwithstanding the fact that the external part of the publication cycle in general takes less than a year. The property of being formalized refers strongly to the external part of the process. The internal part depends heavily on the status of the research work and the availability of publishable results. The property "external" results from the fact that the overall process is mainly driven by external opportunities (conferences, journals, etc.) to publish research results. Two KPI classes can be distinguished within the PUB cluster:
• Quantity of publications;
• Quality of publications.
40 Meadows (1998), pp. 177–194.
41 Rowland (2002).
The first KPI class is a quantitative count of published papers, for example: number of first or co-authorships of journal and conference papers, books, white papers, etc. The second KPI class captures only those publications that are most relevant for the corresponding department, depending on the discipline and perceived quality standards. Examples for the second KPI class are the following KPIs: number of best paper awards, papers published in "A" journals, etc. Although we did not specifically search for concrete evidence, choosing the quality KPI class as opposed to the quantity KPI class might indicate a higher organizational maturity level. The following KPI examples have been collected from our case studies:
• Number of first or co-authorships of journal and conference papers, books, white papers, technical reports, etc.;
• Number of best paper awards, papers at selected conferences.
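The two PUB KPI classes can be pictured as two different aggregations over the same publication records, as in the following sketch. The venue-rating field and the rule that quality counts only "A"-rated outlets are illustrative assumptions.

```python
# Hypothetical sketch of the two PUB KPI classes: a quantity count over all
# publications and a quality count over the most relevant outlets only.
papers = [
    {"title": "P1", "venue_rating": "A", "best_paper": True},
    {"title": "P2", "venue_rating": "B", "best_paper": False},
    {"title": "P3", "venue_rating": "A", "best_paper": False},
]

quantity = len(papers)                                   # all publications
quality = sum(p["venue_rating"] == "A" for p in papers)  # assumed: 'A' journals only
awards = sum(p["best_paper"] for p in papers)
print(f"quantity: {quantity}, 'A' papers: {quality}, best paper awards: {awards}")
```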
4.2.10 Cluster: Presence in Scientific Community
The Presence in Scientific Community (PSC) cluster comprises KPIs measuring the activities of researchers in the scientific community, such as invited talks, keynotes, panels, and chairmanship and editorship of conferences or scientific events. Paragraph 149 of the Frascati Manual addresses the presence of R&D in activities like the organization of scientific conferences or involvement in scientific reviews, and gives evidence for such a cluster. It is not disputed that the participation of researchers in scientific communities belongs to the typical activities of researchers by nature. According to Chorafas,42 affiliation with scientific societies or recognition within the scientific community are personal factors that need to be considered when assessing researcher performance. The Frascati Manual defines R&D activities for measurement purposes. It analyzes a whole range of R&D activities by classifying them and explicitly listing exclusions from R&D-related activities. The manual furthermore defines criteria for distinguishing the boundaries of R&D and related activities, and discusses the problems at the border between the two. The presence of R&D activities is identified in several paragraphs, especially paragraph 149,43 which identifies activities such as: "The publication of research findings in scientific journals, organization of scientific conferences or involvement in scientific reviews". The first part of the clause addresses the Publications cluster, and the second part explicitly mentions presence in the scientific community. When only considering the activities of industrial research, participation in such communities is by nature more pronounced in research than in development.
42 Chorafas (1963), p. 67.
43 OECD (2002), Frascati Manual, clause 149, p. 49.
This is explained by the fact that the findings of research projects need to be published and presented in the scientific community, which is often the only possibility to get qualified feedback from peers in the same area. Another reason is that this function accumulates expertise, which is then requested in an editorial or review capacity for publications, for conference program committee membership, to chair sessions, or to give talks, keynotes, presentations, etc. These activities are reflected in the two KPI classes of the PSC cluster:
• Participation in scientific events (excluding publications);
• Participation in advisory boards or related bodies, honorary titles or memberships, or visiting scholarships.
The KPI classes distinguish the seniority of the people working in the research department who engage in different scientific communities. The first KPI class summarizes junior researchers' involvement in scientific events, excluding publications. Examples of KPIs summarized in this KPI class are: number of invited talks or keynotes, chairs in workshops or conferences, participation in program committees or journal reviews, and editorships. The second KPI class addresses senior researchers' involvement, such as participation in advisory boards or related bodies, honorary titles or memberships, or visiting scholarships. KPI examples for this class include: number of invitations to governmental, professional, industrial or academic advisory boards or standardization committees, number of doctorates, fellowships, and presidencies of scientific or technical societies. Again, the selection of the first or the second class can indicate the "maturity" of scientific expertise within a research department. The following KPI examples have been collected in the case studies:
• Number of invited talks or keynotes, chairs in workshops or conferences, participation in program committees or journal reviews, editorships;
• Number of invitations to governmental/professional/industrial/academic advisory boards or standardization committees, number of doctorates, fellowships, presidencies of societies, invitations for research sabbatical positions.
4.2.11 Cluster: Collaboration with Academia
There are two clusters in our clustering concept that address collaboration aspects. These are related to each other; however, they address different players in the ecosystem of a research department: academia on the one hand, and business partners and customers on the other. Collaborative research happens in many ways and is more common in some disciplines than in others. Working with others on a research project can have several benefits, but there can be drawbacks as well. When done in the right spirit, collaborative research can result in more reliable and powerful results that come to publication faster than they would if the research were
done independently.44 Researchers can pool their knowledge and critique each other's work before starting the publication process. The KPIs of the Collaboration with Academia (CA) cluster evaluate the intensity or volume, as well as the quality, of input from, and activities related to, collaboration with academic partners. Input refers to output produced by academia and taken up by the research organization. Activities comprise joint work as well as actions trying to influence the roadmap of academic research. Collaboration between industry and academia is crucial to deliver a shared vision for the future of technology and education. Industry is, in general, committed to deepening its relationship with academia, because only by working together will the next generations of technologies be created. The connections of an industrial research department with academia can result in advantages for both sides. The nature of this linkage has a strong educational character, and the spectrum of educational profiles involved in the collaboration is very broad. The collaboration typically involves postgraduate and postdoctoral research workers, but also other senior academic scientists. It can take the form of short-term interaction via workshops, internships providing access to a real environment for studies and theses, longer-term co-sponsored studentships, or the integration of academic staff into research projects. For students, such collaboration can result in higher qualifications, where eventually the students can take positions of responsibility in academia or industry, or simply become part of the scientific staff team. For the research department, collaboration can lead to the development and promotion of new methods or access to state-of-the-art information. The benefits for an industrial research department of collaborating with academia are:
• Brand and recruitment: better visibility (sponsored initiatives, access to a real business environment to study, company presentations, etc.);
• Student contacts and development of expertise: cooperation in teaching (learning tasks, entrepreneurial projects, theses, case studies, internships);
• Development of expertise and competitiveness: shared expertise and networking (expert consultation, events, seminars on current research results, supplementary and management training, information services);
• Innovative operational models and new inventions: cooperation in research (restricted and longer-term research ventures, support functions for inventions and research activities, promotion of entrepreneurship).
The drawbacks of collaborative research should not be underestimated. In the event of a successful invention, it is often difficult to disentangle the ownership of the intellectual property rights to the invention. The CA cluster encompasses two KPI classes:
44 Fine (2010).
• Intensity of collaboration;
• Quality of collaboration.
The intensity KPI class assesses the volume of involvement, e.g. investment at universities in relation to the total budget, or the number of guest researchers from universities hosted within research. The quality KPI class can, in general, only capture subjective opinions. These are the personal opinions of the people involved in the collaborative projects. The following KPI examples have been collected in our case studies:
• Investment at universities in relation to total budget, number of guest researchers from universities (sabbaticals, etc.);
• Subjective evaluation of each university project by the supervising professor via a questionnaire.
4.2.12 Cluster: Collaboration with Partners and Customers
The Collaboration with Partners and Customers (CPC) cluster addresses joint activities between the research organization and partners and customers, as well as the output resulting from these activities. The KPIs reflect the proximity to partners and customers (intensity of collaboration), as well as the quality of the activities. This proximity reflects how well aligned with partners, and how engaged with customers, the industrial research department is. The motivation for the CA cluster is entirely applicable to this cluster as well. The clusters are "somewhat" related; nevertheless, they address different linkages in terms of the manuals and therefore justify their separate existence. The differences lie especially in the nature of the conducted research: while the CA cluster focuses on basic research tackling theories or concepts, the CPC cluster points at applied research addressing, for example, prototyping in real business contexts. In analogy to the CA cluster, this cluster also consists of two KPI classes:
• Intensity of collaboration;
• Quality of collaboration.
The intensity KPI class assesses the volume of involvement with partners. This KPI class is a quantitative count of collaborative projects. Example KPIs include the number of projects involving an external stakeholder versus the total number of projects within a department, or the number of joint research results like showcases, prototypes, etc. Again, analogous to the second KPI class of the CA cluster, the quality KPI class of the CPC cluster captures subjective assessments by customers or partners. The following KPI examples have been collected in the case studies:
• Number of projects in which an external customer and/or business partner is involved versus the total number of projects; number of joint research results like showcases, prototypes or demos;
• Subjective assessment by customers and/or partners by means of a survey.
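The intensity KPI class shared by the CA and CPC clusters is essentially a ratio, as the following minimal sketch shows; the project counts are invented for illustration.

```python
# Hypothetical sketch of the "intensity of collaboration" KPI class: share of
# projects involving an external stakeholder. Figures are invented.
total_projects = 42
with_external_stakeholder = 17  # customer, partner or university involved

print(f"collaboration intensity: {with_external_stakeholder / total_projects:.0%}")
```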
4.2.13 Performance Cluster Overview

To conclude, this section introduced in-depth descriptions of the performance clusters, which form the spectrum of activities found in industrial research organizations. Table 4.6 outlines three properties that allow for a categorization of the performance clusters. Each property is discussed and thoroughly described within its respective performance cluster in the paragraphs above.

Table 4.6 Performance cluster properties
Performance cluster | Company internal vs. company external focus | Formal vs. informal processes | Time pattern
1 Technology Transfer | I | F | L
2 Future Business Opportunities | I | F | L
3 Research Portfolio Management | I | F | L
4 Intellectual Property Creation | I | F | L
5 Talent Pool | I | F/I | L/N
6 Operational Excellence | I | F | S
7 Image | I/E | I | N
8 Publications | E | F | N
9 Presence in Scientific Community | E | F/I | N
10 Collaboration with Academia | E | F/I | N
11 Collaboration with Partners and Customers | E | F/I | N
I internal, E external; F formal, I informal; L long-term, S short-term, N non-termed (not limited time-wise)
The 11 performance clusters can be classified with the help of Table 4.6. The first classification property concerns the internal or external focus: clusters 1–6 are linked to internal company processes, whereas clusters 8–11 extend beyond the boundaries of a company. The processes of the IMG cluster have both components, internal and external. Depending on the process, in some cases the whole activity chain has to be assessed, i.e. also the parts that go beyond the research department, in order to attain the outcome and to be able to assess it. This is particularly true for the following clusters: Technology Transfer, Future Business Opportunities, Technical Achievements and Intellectual Property Creation.45
45 Especially in the case of intellectual property, this might depend on the overall organizational setup of the individual company.
The second property is the degree of formalization of the processes (i.e., explicitly formalized processes versus more informal or latent processes). The processes of clusters 1–4, 6 and 8 are more formal, while the remaining clusters cannot be unambiguously assigned to one property. For example, a performance cluster with explicitly formalized processes is the Intellectual Property Creation cluster, while examples of clusters with less explicit processes are Publications and Image. The processes of the Operational Excellence cluster are highly formalized: Operational Excellence refers, for example, to the fact that a large share of research activities is embedded in a project context. It therefore embraces KPIs for assessing the entirety of all processes within a project context as well as ancillary processes carried out within a research department.

The time pattern can be seen quite clearly within the performance cluster processes. The processes of clusters 1–4 are long-term, whereas those of clusters 7–11 are more continuous and therefore not limited time-wise. The processes within performance cluster 6 are an exception and are always considered to be short-term.

Altogether, the performance clusters differ significantly from each other regarding their sharpness and the number of different KPIs that were collected in the case studies. For example, the nature of the Technology Transfer cluster allows for a variety of different KPIs for the measured object, whereas the Publications cluster mainly uses the number and quality of publications. The Publications and Presence in the Scientific Community clusters are related to each other, but have been separated into two different clusters because we were able to unambiguously assign different KPIs to each one. The following table (Table 4.7) shows the resulting list of 37 KPI classes, their cluster affiliation, and examples.

Table 4.7 Catalogue of KPI classes with corresponding properties and examples
Nr | KPI class description | Artifact classes | Artifact property | Examples

Cluster Technology Transfer
1. Transfer volume: volume of technology transfer activities to development or other (business) units | Op, P | # | # Person days, amount of services charged to other departments (€)
2. Transfer result quality: quality of the research results transferred | Op | Q | Subjective evaluation by the receiving unit via questionnaire
3. Transfer process quality: quality of the transfer process or transfer activities | P | Q | Subjective evaluation of the collaboration with the receiving unit via questionnaire
4. Transfer significance: significance of the transferred research results for the receiving unit | Op, Oc | Q | Subjective evaluation of the importance of the transfer for the development/business of the receiving unit
5. Economic transfer value: economic value of the transfer activity or transferred research results | Op, Oc | # | Equivalent opportunity cost of using an external expert service, revenue generated by the transferred artifact

Cluster Future Business Opportunities
6. Input: intensity of input into the innovation process | I, P | # | # Ideas, # ideas per researcher
7. Innovation process: (weighted) # of ideas moved to a certain or next phase of the innovation process | P | # | # Ideas with a business case developed + 2 × # ideas with demonstrator developed + 3 × # ideas handed over to development
8. Outcome: (achieved) business impact of an idea in terms of its economic value | Oc | # | $, €

Cluster Research Portfolio Management
9. Structuring: structure and quality of the research portfolio (i.e., certain technology areas) | P | Q | Subjective evaluation of the research portfolio by internal or external reviewers (simplest way: portfolio management is in place or not)
10. Vision: visions related to the individual parts of the research portfolio and their quality | P | Q | Subjective evaluation of the vision documents of the individual research programs by internal or external reviewers (simplest way: vision documents are available or not)
11. Roadmap: roadmaps to achieve the visions and their quality | P, Op | Q | Subjective evaluation of the roadmap by internal or external reviewers (simplest way: in place or not)
12. Implementation: implementation of the roadmaps and the quality of contributions from research projects | P, Op | Q | Subjective evaluation by internal or external reviewers of the scientific accomplishments, technical achievements, or technical impact

Cluster Intellectual Property Creation
13. Input volume: volume of potentially protectable inventions submitted into the IP pipeline | P, Op | # | # Invention disclosures
14. Output volume: volume of first filings from the IP pipeline | P, Op | # | # First filings, defensive publications, trade secrets
15. Outcome volume: volume of patents granted | Op, Oc | # | # Patents granted to the company
16. Input quality: quality of granted patents | Oc | Q | Economic value of the granted patent
17. Outcome quality: alignment of research activities with the IP strategy of the company | Oc | Q | # Submitted inventions addressing the IP strategy

Cluster Operational Excellence
18. Budget: adherence to budget | P | # | Difference of the planned budget of each line item to the actual budget (less deviation is rated better)
19. Timeliness: adherence to timelines (phases, gates) | P | Q | Successful (on-time) termination at a particular phase/gate
20. Risk management: quality of risk management | Oc | # | % by which the potential damage to the bottom line has been lowered by adequate measures
21. Project management quality: quality of project management | P | Q | Data compliance at agreed deadlines
22. General risk sharing: volume of (external) investments in the research organization | I | # | Volume of third-party investment (in €), # participations in externally-funded programs (beyond seed funding)
23. Working environment: quality of the working environment | I | Q | Scoring in employee satisfaction survey

Cluster Talent Pool
24. Inflow: volume/quality of people hired into the research organization | I | #, Q | # Externally-attracted senior scientists with skill-sets compatible to the profiles required
25. Outflow: volume/quality of people leaving the research organization to move to other parts of the company/ecosystem | Oc | #, Q | # Top talents moving to other parts of the company
26. Talent development: volume/quality of development measures | P | #, Q | # Job rotations, # PhDs finished, training undertaken
27. Talent assessment: quality of the people in the research department | Op | Q | Subjective assessments through partners or stakeholders, team track records, 360° feedback

Cluster Image
28. Internal visibility: internal perception or internal recognition | Op | Q | # Visits by key decision makers, subjective evaluation of collaboration with internal groups, # contributions to internal conferences or exhibitions
29. External visibility: external perception or external recognition | Oc | Q | # Awards, speeches, keynotes, medals, prizes, fairs and exhibitions

Cluster Publications
30. Quantity of publications: volume of publications | Op | # | # Journal & conference papers, book contributions, # white papers, # technical reports
31. Quality of publications: quality of publications | Op | Q | # Best paper awards, papers at selected conferences

Cluster Presence in Scientific Community
32. Junior researcher involvement: participation in scientific events (excluding publications) | Op | Q | # Invited talks or keynotes, chairs at workshops or conferences, participation in program committees or journal reviews, editorships
33. Senior researcher involvement: participation in advisory boards or related bodies, honorary titles, memberships or visiting scholarships | Op | Q | # Invitations to governmental/professional/industrial/academic advisory boards or standardization committees, # doctorates, fellowships, presidencies in scientific/technical societies

Cluster Collaboration with Academia
34. Intensity: volume of collaboration with academia | Op | # | Investment at universities in relation to total budget, # guest researchers from universities (sabbaticals)
35. Quality: quality of collaboration with academia | Op | Q | Subjective evaluation of each university project by the supervising professor via a questionnaire

Cluster Collaboration with Partners and Customers
36. Intensity: volume of collaboration with partners and customers | Op, Oc | # | # Projects in which an external customer and/or business partner is involved vs. total # projects, # joint research results such as showcases and prototypes
37. Quality: quality of collaboration with partners and customers | Op, Oc | Q | Subjective assessment by customers and/or partners via a survey

Artifact classes: P process, I input, Op output, Oc outcome. Artifact properties: Q quality, # quantity.
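The innovation-process KPI class (Nr. 7) is the only class in the catalogue whose example carries an explicit weighting formula. A minimal sketch of how such a weighted idea count could be computed is shown below; the phase weights 1, 2 and 3 follow the example in Table 4.7, while the function and variable names are our own illustration:

def weighted_idea_count(n_business_case: int, n_demonstrator: int, n_handover: int) -> int:
    """Weighted innovation-process KPI (class 7 in Table 4.7): ideas are
    counted with a higher weight the further they have moved through
    the innovation process."""
    return 1 * n_business_case + 2 * n_demonstrator + 3 * n_handover

# Example: 10 ideas with a business case, 4 with a demonstrator,
# and 2 handed over to development yield 10 + 8 + 6 = 24.
print(weighted_idea_count(10, 4, 2))  # 24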
4.3 Performance Cluster Analysis
A performance cluster is a set of semantically related subgoals/KPIs. Creating these clusters has two main benefits. We can (1) determine the importance of each performance cluster based on the cumulative weight of its subgoals/KPIs, and thereby rank the clusters per company, and (2) compare the weights and the rankings between the companies. Figure 4.19 depicts the cluster comparison for all eight case study companies. Clusters at the top are those with the highest weights and therefore the greatest importance.
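The ranking itself is a straightforward aggregation. The following sketch illustrates the idea under the assumption that each KPI of a company carries a relative weight and is assigned to exactly one performance cluster; all data and names are hypothetical:

from collections import defaultdict

# Hypothetical company data: (performance cluster, KPI weight) pairs.
company_kpis = [
    ("TT", 0.30), ("TT", 0.10),  # Technology Transfer KPIs
    ("IP", 0.25),                # Intellectual Property Creation
    ("FBO", 0.20),               # Future Business Opportunities
    ("PUB", 0.10), ("OpEx", 0.05),
]

def rank_clusters(kpis):
    """Rank performance clusters by the cumulative weight of their KPIs."""
    totals = defaultdict(float)
    for cluster, weight in kpis:
        totals[cluster] += weight
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

print(rank_clusters(company_kpis))
# [('TT', 0.4), ('IP', 0.25), ('FBO', 0.2), ('PUB', 0.1), ('OpEx', 0.05)]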
[Fig. 4.19 Cluster comparison from case studies: per-company ranking (C1–C8) of the performance clusters by cumulative weight, with the most important clusters listed at the top]
Based on this analysis we conclude that certain performance clusters have the same, or similar, importance for the organizations in question. For example, the Technology Transfer cluster is present in all companies examined, and in the cases of C1, C2, C3, C4, C5 and C8 the TT cluster is ranked first or second. In C6 it is ranked third, and in C7 the TT cluster shares the fifth position with the Publications cluster. The Intellectual Property cluster is also omnipresent and is ranked fourth or fifth, with two exceptions: C2, and C6, where it shares the second position with the Operational Excellence cluster. The Future Business Opportunities (FBO) cluster is present in all but two companies and, where present, is always ranked in the top four. To summarize, ranking the clusters allows us to compare the number, presence and importance of performance clusters between the companies. The minimum number of clusters used is four (in C2), while the maximum is ten, which is true for C5, C7 and C8.
4.4 Goal Decomposition Analysis
In the previous section we analyzed the KPI dimension and the relative importance of the performance clusters. However, because the goal of this research is to build a comprehensive methodology, we now analyze the contribution of the performance clusters to individual goals. Whereas in the previous section we defined performance clusters by grouping semantically similar KPIs, our objective in this section is to determine whether companies measure similar sets of activities when similar goals are to be achieved. If we were able to provide a recommendation such as: if you have an organizational goal "X", then a certain set of activities is necessary to achieve this goal,
which are in turn most appropriately measured by a set of KPIs reflecting the corresponding set of performance clusters, it would greatly help practitioners build performance management systems. To make this example more concrete: if you have the organizational goal "Create intellectual property", then the activities summarized by the IP cluster are necessary to achieve this goal, and therefore a set of KPIs grouped in the IP cluster is most appropriate to assess it. To be able to provide such guidelines we need to examine the goal dimension. We have to evaluate whether performance clusters are reasonable mediators between goals and KPIs. If this is the case, then we are in essence defining relations between individual goals, as well as between goals and KPIs, that are expressed by performance clusters. Within each case study the relationships between goals and KPIs appear to be independent of one another; however, in order to derive a general view across different companies, we are specifically interested in:
1. Similarities between the goals of the different research departments;
2. Relationships between the KPIs applied to similar goals; and
3. General connections between goals and KPIs.
In the case studies, each company provided us with a set of organizational goals for its research department. These are, as such, not directly comparable across companies. As mentioned in Sect. 3.1.4, within the case studies we collected 164 different KPIs and 45 goals. Some of these 45 goals are present in multiple cases: there are only 30 unique goals. These numbers show that the results of the case studies are difficult to interpret at first glance, where the lack of comparability is due to: (1) goals sounding similar or identical, yet having heterogeneous KPIs, making it likely that they actually mean different things; (2) the sheer number of different KPIs being difficult to handle without any deeper analysis of the KPIs themselves. Therefore, recognizing general interdependencies between KPIs is difficult and, as a result, the same is true for general connections between goals and KPIs. The situation is even more complex when comparing research goals only. The difficulty lies in the interpretation of language, and the challenge in this analysis step is to find an approach to systematically compare research goals. During the formation of our performance clusters, we identified which KPIs were associated with each cluster. Now, to compare the goals, we performed what we call a goal decomposition analysis: we decomposed each goal into the performance clusters that contribute to the assessment of the goal (Fig. 4.20). Using the notion of the performance cluster composition, we can rephrase the recommendation from the beginning of this section as follows: if you have an organizational goal with specific characteristics, choose a set of KPIs that results in the corresponding performance cluster composition. In order to make such statements we need to be able to compare organizational goals. Expressing this in terms of performance clusters, we would therefore ideally like to show: if goals are similar, then with a high likelihood they have a similar cluster composition. In turn, this would require (a) a method to determine the extent
[Fig. 4.20 Goal decomposition example: a research goal decomposed into the eleven performance clusters TT, FBO, RPM, IP, OpEx, TP, IMG, PUB, PSC, CA and CPC, each contributing with a certain weight]
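A cluster composition can be computed mechanically from a goal's KPI weights once each KPI is assigned to a performance cluster. The following minimal sketch illustrates this decomposition step; the KPI names and weights are hypothetical:

from collections import defaultdict

def cluster_composition(goal_kpis, kpi_to_cluster):
    """Aggregate a goal's KPI weights (summing to 1.0) per performance
    cluster, yielding the goal's cluster composition."""
    composition = defaultdict(float)
    for kpi, weight in goal_kpis.items():
        composition[kpi_to_cluster[kpi]] += weight
    return dict(composition)

kpi_to_cluster = {"transfer volume": "TT", "patents granted": "IP",
                  "journal papers": "PUB"}
goal_top_results = {"transfer volume": 0.77, "patents granted": 0.16,
                    "journal papers": 0.07}
print(cluster_composition(goal_top_results, kpi_to_cluster))
# {'TT': 0.77, 'IP': 0.16, 'PUB': 0.07}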
to which two goals are interpreted similarly, and (b) a similarity measure for performance cluster compositions. It is obvious that we are not able to succeed with (a) in a general manner.46 As a consequence, we will proceed with our analysis the other way round and try to find reasonable similarity measures, defining the similarity of goals based on the similarity of clusters:
DEF_G: Two goals are similar if they have similar performance cluster compositions.
If two cluster compositions have a "good" measure of similarity, there is a high likelihood that goals that are similar based on our definition would also be subjectively interpreted as being similar. As a result, we have reduced our problem to finding a similarity measure for performance cluster compositions. When tackling this question we realized that we could define quite sophisticated47 similarity measures, such as geometric distance; however, from a pragmatic perspective, the problem of determining their quality remains open, as it is still bound to the evaluation of the semantic similarity of goals. We reduced this problem again by using a heuristic for the similarity measure that we call the "similarity indicator" (SI). The similarity indicator reduces the similarity measure to a binary value and in this way abstracts from the "degree" to which cluster compositions are similar. The greater the correlation between the semantic
46 Semantic technologies would be an approach to go in this direction. However, they fail due to the fact that, for a general approach, they need extensible ontologies, and their pragmatic applicability is not (yet) given.
47 Other ways to define the similarity of performance cluster spectra could be: the extent of deviation between clusters expressed in a threshold value, or defining the similarity relationship on a fine-granular level taking into account not just one, but two or more maximum-weight clusters, etc.
judgment of similarity and the result of the similarity indicator, the better the indicator. In the remainder of this section we define two different similarity indicators and evaluate them by applying them to the goals of our case studies. Following our definition that a cluster composition (cf. Fig. 4.20) is the decomposition of a goal into the performance clusters that contribute to the goal assessment with a certain weight, and our initial definition of the similarity of goals, we further refine definition DEF_G using a specific similarity indicator48 to define similarity between performance cluster compositions as follows:
SI-1: Two performance cluster compositions are similar if they have the same cluster with the highest weight.
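Stated operationally, SI-1 only compares the dominant cluster of two compositions. A minimal sketch follows, with compositions as weight dictionaries; ties between clusters are resolved arbitrarily here, a case the definition leaves open, and all data is hypothetical:

def dominant_cluster(composition):
    """Cluster carrying the highest weight in a composition."""
    return max(composition, key=composition.get)

def si_1(comp_a, comp_b):
    """SI-1: similar iff the same cluster has the highest weight in both."""
    return dominant_cluster(comp_a) == dominant_cluster(comp_b)

# Both hypothetical goals are dominated by the TT cluster -> similar.
goal_a = {"TT": 0.77, "IP": 0.16, "PUB": 0.05, "OpEx": 0.02}
goal_b = {"TT": 0.60, "FBO": 0.30, "TP": 0.10}
print(si_1(goal_a, goal_b))  # True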
The relationship between similar goals would then also implicitly define a relationship between goals and performance clusters, in the sense that for goals with a similar semantic interpretation there is, to a certain extent, a typical cluster composition. In the following we show examples of similar cluster compositions together with the titles of the goals to which they are related. When applying this to our case studies we also have to consider that the selection of KPIs for some goals might not be an ideal one in some cases. This could result in cluster compositions appearing as different even though the corresponding goals are semantically similar. In the following we show the considerations discussed above by applying the similarity indicator SI-1 to the goals collected in our case studies. Although we provide our own semantic interpretation of the goals for an evaluation of the usefulness of the indicator, we leave it to the readers' own discretion to judge from their own semantic perspective whether the individual goals that are indicated as being similar are in fact similar, obviously solely based on the phrasing of the titles of the goals. The following overview comprises all goals that are assigned by SI-1, i.e. also those that from our subjective interpretation are not assigned appropriately (Fig. 4.21). Figure 4.21 shows those goals where the TT cluster (dark green) has the highest weight. For example, the TT cluster contributes to the goal of company 2, "Scientific advisor", with 100%. The same is true for company 3 with 100% for both goals listed. The TT cluster contributes to the assessment of the goal "Top Results" of company 1 with 77%, the IP cluster with 16%, Publications (PUB) with 5%, and Operational Excellence (OpEx) with 2%. The "Alignment and Transfer" goal of company 8 shows a TT cluster contribution of 60%, Future Business Opportunities (FBO) of 30% and Talent Pool (TP) of 10%, etc. This example illustrates all aspects of the discussion in the previous paragraph: for the first six goals, the TT cluster contributes at least 50% to the assessment of the goals.
48 A similarity indicator is a function that compares two performance cluster compositions and provides a yes-or-no decision.
[Fig. 4.21 Technology Transfer-oriented goals with SI-1 indicator: cluster compositions of "Scientific advisor to the development" (company 2), "Alignment & Transfer" and "Advanced Technology Solutions" (company 3), "Top Results" (company 1), "Alignment and Transfer" (companies 8 and 6), "xPG Influence", "IT Strategy" and "Enterprise Architecture" (company 5), and "Technology innovation and leadership" (company 4)]
Transfer”. Additionally the “Scientific advisor to the development” goal can also be interpreted as a way of defining an “Alignment & Transfer” goal as can the “Top Results” goal of company 1 when looking at its high level description. The “xProduct Group influence” contains the notion of “transfer”: knowledge transfer or transfer of concrete artifacts. The first seven goals, from our semantic interpretation, are well-assigned to the group of “Technology Transfer-oriented goals”. The last three goals of the figure might be less related to the first seven goals in terms of semantic interpretation although SI-1 does indicate this. Out of the remaining 35 goals in our case studies, there are no additional goals that from our semantic interpretation would belong to the group of “Technology Transfer-oriented goals”. In summary SI-1 indicates technology transfer-oriented goals correctly 93% of the time. Figure 4.22 shows a selection of goals that are similar based on SI-1, since for all the goals the IPC cluster (marked in red) contributes to their assessment with at least 60%; in companies 2, 6 and 8 by 100%. IP
[Fig. 4.22 Intellectual Property Creation-oriented goals with SI-1 indicator: cluster compositions of "IPR generation" (company 2), "Intellectual Property" (companies 6 and 5), "Intellectual Property Creation" (company 8), and "Protect Technology" (companies 1 and 3)]
With respect to their titles, all six goals can definitely be interpreted as being similar. In our case studies we identified an additional goal, "Scientific excellence" of company 2, which from our semantic interpretation belongs to the group of "Intellectual Property-oriented goals"; however, SI-1 did not detect this. In this case, out of the remaining 38 goals of our case studies, there is one additional goal that we would assign to the group of "Intellectual Property-oriented goals". With this, SI-1 identifies the intellectual property creation-oriented goals with 98% accuracy (Fig. 4.23).
[Fig. 4.23 Operational Excellence-oriented goals with SI-1 indicator: cluster compositions of "Excellence" (company 1), "Operational Excellence" (companies 3, 6, 8 and 7), "People" and "Strategic partnerships" (company 7), and "Collaborative Research" and "Growth" (company 6)]
Figure 4.23 shows goals that are related to operational excellence regarding their cluster composition, as the OpEx cluster delivers the highest contribution in terms of its weight. It is also interesting to mention that the Collaboration with Partners and Customers (CPC) cluster contributes in a significant way to the goal assessment; when present, which is the case in five of nine goals, its contribution is the second highest. This at least shows that more fine-grained criteria for the indicators could increase their quality. For the last two goals, the contribution of the OpEx cluster is below 50%, which might be less than expected; however, the cluster still represents the highest contribution to the assessment of the goal. With this, we believe our semantic assignment of the nine goals to the group of "Operational Excellence-oriented goals" is good. Of the remaining 36 goals in our case studies there are no additional goals that should belong to this group. As such, and in summary, SI-1 assigns operational excellence-oriented goals 100% correctly. In Fig. 4.24 we selected goals where we expected a high contribution of the Talent Pool (TP) cluster, marked in orange. This is true for three of the four goals shown, where the TP cluster contributes at least 50% to the goal achievement. In the case of company 5, a very diverse cluster composition is visible, which might indicate that either non-appropriate KPIs are used, or the goal title is misleading.
[Fig. 4.24 People Development-oriented goals with SI-1 indicator: cluster compositions of "Talent" (company 6), "Top people" and "Market savvy" (company 1), and "People" (company 5)]
In the case of the "Market savvy" goal, a strong relation to the TP cluster would not necessarily be expected at first glance. However, when analyzing the cluster composition, it becomes evident that this goal is actually very much a TP-oriented goal, since "market savvy" has as a central requirement that people be educated with in-depth market knowledge. Furthermore, the high contribution of the TP cluster suggests that there is an expectation to transfer the insights from this knowledge, and advances related to new technology solutions, into the development organization. Out of the remaining 41 goals in our case studies there are no additional goals that from our semantic interpretation should belong to the group of "People Development-oriented goals". In summary, SI-1 indicates people development-oriented goals 95% correctly.
[Fig. 4.25 Idea Generation-oriented goals with SI-1 indicator: cluster compositions of "Top ideas" (company 1), "New opportunities", "Technology evaluation" and "Proof of concept development" (company 3), "Business impact" and "Technology innovation and leadership" (company 4), "Technology creation" (company 8), and "People" (company 5)]
Figure 4.25 shows a selection of goals that are similar because the FBO cluster, marked in yellow, dominates the contribution to the assessment of the goals. It is interesting to note that the RPM cluster seems to play an important role in the assessment of idea generation-oriented goals.
opportunities” with 100%; to the goals of company 4: “Business impact” with 70% and to the assessment of both goals of company 3: “Technology evaluation” and “Proof of concept development” with almost 60%. The contribution of the RPM cluster is also prominently represented in assessing the following goals: company 3: “Technology evaluation” and “Proof of concept development” with more than 25%; company 8: “Technology creation” with 50%; as well as company 4: “Technology innovation and leadership” with 20%. Common terms in the headings of the goals are as follows: ideas, opportunities, technology, concepts. The last goal of the Fig. 4.30 might be less related to the first seven goals in terms of semantic interpretation although SI-1 would indicate this to be the case. Out of the remaining 38 goals in our case studies there are no additional goals that from our semantic interpretation should belong to the group of the “Idea Generation-oriented goals”. In summary SI-1 correctly indicates 97% of technology transfer-oriented goals. IMG FBO
[company 1] 360 visibility
TT RPM
[company 8] Image
OpEx CA
[company 4] Thought leadership
IP
[company 5] Image
TP PUB PSC
0%
10% 20% 30% 40% 50% 60% 70% 80% 90% 100%
CPC
Fig. 4.26 Image-oriented goals with SI-1 indicator
Figure 4.26 shows four goals from four different companies where the IMG cluster marked in purple has the highest weight. The IMG cluster contributes, for example, to the goal of company 1 “360 visibility” with 70%, 60% for company 8, and 57% for company 4. The contribution in the case of company 5 is 25%, which is still the highest amongst all clusters in the cluster composition. In addition, when looking at the headline of the goals and their semantic interpretation, one can conclude that the goals are in fact similar. In our case studies we identified an additional goal “Strategic partnerships” of company 7, which from our semantic interpretation belongs to the group of “Imageoriented goals”. However the SI-1 did not interpret this goal as such. As such, out of the remaining 40 goals in our case studies, there is one additional goal that we would assign to the group “Image-oriented goals”. With this, the SI-1 identifies, with 98% accuracy, the image-oriented goals. In sum, these examples demonstrate that the similarity indicator is a reasonable and pragmatic way to compare goals based on their cluster composition.
Nevertheless, we want to reaffirm this analysis by additionally applying a slightly different second similarity indicator:
SI-2: Two performance cluster compositions are similar if they have the same cluster with the highest average weight of the KPIs assigned to that cluster.
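SI-2 differs from SI-1 in that it works on the KPI level: the dominant cluster is the one whose assigned KPIs carry the highest average weight, not the highest total weight. A minimal sketch with hypothetical data, constructed so that the two indicators would disagree:

from collections import defaultdict

def dominant_cluster_by_avg(goal_kpis, kpi_to_cluster):
    """Cluster whose assigned KPIs have the highest average weight."""
    sums, counts = defaultdict(float), defaultdict(int)
    for kpi, weight in goal_kpis.items():
        cluster = kpi_to_cluster[kpi]
        sums[cluster] += weight
        counts[cluster] += 1
    return max(sums, key=lambda c: sums[c] / counts[c])

kpi_to_cluster = {"k1": "TT", "k2": "TT", "k3": "TT", "k4": "IP"}
goal = {"k1": 0.2, "k2": 0.2, "k3": 0.2, "k4": 0.4}

# Total weights: TT 0.6 vs. IP 0.4 -> SI-1 would pick TT as dominant.
# Average weights: TT 0.2 vs. IP 0.4 -> SI-2 picks IP instead.
print(dominant_cluster_by_avg(goal, kpi_to_cluster))  # IP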
As with SI-1, we present the following groups of goals that should be considered similar, now based on SI-2. Figure 4.27 lists goals where the highest average weight of KPIs is given by the TT cluster (dark green). For the first five goals (from company 2, company 3 with two goals, and companies 6 and 1), the average weight of the KPIs assigned to the TT cluster is exactly 50%. The contribution of KPIs from the TT cluster to the next two goals, which are from company 4, is 40%.
[Fig. 4.27 Technology Transfer-oriented goals with SI-2 indicator: cluster compositions of "Scientific advisor to the development" (company 2), "Alignment & Transfer" and "Advanced Technology Solutions" (company 3), "Give input to Business Units by the transfer of research results" and "Image" (company 6), "Market savvy" and "Top results" (company 1), "Contribute to critical standards" and "Technology innovation and leadership" (company 4), "Alignment and Transfer" (company 8), and "IT strategy" and "People" (company 5)]
Seven goals that are indicated by SI-2 are also, according to our semantic interpretation, contained within the "Technology Transfer-oriented goals" group. However, five goals are incorrectly assigned (Market savvy, Image, Top results, IT strategy and People). Of the remaining 33 goals from our case studies, there are no additional goals that should belong to the group of "Technology Transfer-oriented goals" according to our semantic interpretation. In summary, SI-2 assigns 88% of technology transfer-oriented goals correctly. From the semantic perspective, the similarity becomes at least partially evident when looking at the goals' names: 8 of the 12 goals share terms such as alignment, technology, transfer or standards. When comparing the results of SI-1 with those of SI-2, the following can be observed: (1) seven of the goals that are correctly indicated as being similar by SI-2 are also indicated as being similar by SI-1; (2) of the three goals that our semantic interpretation suggests are incorrectly assigned by SI-1, one is also incorrectly assigned by SI-2.
[Fig. 4.28 Intellectual Property Creation-oriented goals with SI-2 indicator: cluster compositions of "Protect technology" (companies 3 and 1), "Intellectual property creation" (company 8), "IPR generation" and "Scientific excellence" (company 2), "Intellectual property" (companies 6 and 5), "Value creation" (company 7), and "People" (company 5)]
Figure 4.28 shows the goals that SI-2 assigns to the IPC cluster (red). For the first six of the nine goals, the title supports the similarity indicator result. The "Scientific excellence" goal of company 2 might indicate that the IP protection strategy is driven by defensive publications, for the following reasons: (1) there are two goals where the IPC cluster plays an important role, and (2) one of those, the goal "Scientific excellence", also includes the PUB cluster to a significant degree. Similar considerations are also valid for company 1. The last two goals are examples where SI-2 does not deliver the expected results, if we consider the names of the goals and assume that the KPIs are well-chosen for those goals. Seven goals that are indicated by SI-2 are also, according to our semantic interpretation, well-assigned to the "Intellectual Property Creation-oriented goals" group. Two goals are falsely assigned (Value creation and People). Of the remaining 36 goals from our case studies there are no additional goals which, from our semantic interpretation, should belong to the "Intellectual Property Creation-oriented goals" group. In summary, SI-2 assigns 93% of IP creation-oriented goals correctly. When comparing the results of SI-1 with those of SI-2, the following can be observed: (1) six of the goals that are correctly indicated as being similar by SI-2 are also indicated as similar by SI-1; (2) one goal (Scientific excellence) that was not detected by SI-1 was correctly identified by SI-2. Figure 4.29 shows the selection of goals that SI-2 assigns to the OpEx cluster (beige). Five out of eight case studies have a goal entitled "(Operational) Excellence", and these are all assigned as being similar by SI-2. Companies 6 and 7 have three and four such goals respectively. This might indicate that those companies have organizational goals that are too focused on operational excellence and are therefore less appropriate reflections of the actual activities in the research departments. Alternatively, it could mean that process measures are preferred over KPIs that assess output or outcome.
[Fig. 4.29 Operational Excellence-oriented goals with SI-2 indicator: cluster compositions of "Excellence" (company 1), "Operational excellence" (companies 6, 3, 8 and 7), "Growth" and "Collaboration with partners and customers" (company 6), "People", "Strategic partnerships" and "Value creation" (company 7), and "People" (company 5)]
Nine goals that are indicated by SI-2 are also, according to our semantic interpretation, correctly assigned to the "Operational Excellence-oriented goals" group. Two goals are false positives (Value creation and People of company 5). Of the remaining 34 goals from our case studies there are no additional goals which, from our semantic interpretation, should belong to the "Operational Excellence-oriented goals" group. In summary, SI-2 indicates 95% of operational excellence-oriented goals correctly. When comparing the results of SI-1 with those of SI-2, the following can be observed: nine of the goals that are correctly indicated as being similar by SI-2 are also indicated as being similar by SI-1.
[Fig. 4.30 People Development-oriented goals with SI-2 indicator: cluster compositions of "Top people" and "Market savvy" (company 1), "Talent" (company 6), "Value creation" and "Operational excellence" (company 7), and "People" (company 5)]
Figure 4.30 shows the selection of goals that SI-2 assigns to the TP cluster (orange). SI-2 correctly identifies three goals, which our semantic interpretation also assigned to the "People Development-oriented goals" group.
However, three goals are false positives (Value creation, Operational excellence, and People of company 5).49 Of the remaining 39 goals from our case studies there are no additional goals that, according to our semantic interpretation, should belong to the "People Development-oriented goals" group. In summary, SI-2 assigns people development-oriented goals correctly 93% of the time. When comparing the results of SI-1 with those of SI-2, the following can be observed: (1) three of the goals that are correctly indicated as being similar by SI-2 are also indicated as being similar by SI-1; (2) one goal that, according to our semantic interpretation, is incorrectly assigned by SI-1 is also incorrectly assigned by SI-2.
[Fig. 4.31 Idea Generation-oriented goals with SI-2 indicator: cluster compositions of "Technology evaluation" and "New opportunities" (company 3), "Top ideas" (company 1), "Technology creation" (company 8), "Technology innovation and leadership" and "Business impact" (company 4), "xPG Influence" and "People" (company 5), and "Value creation" and "Operational excellence" (company 7)]
Figure 4.31 shows the goals that SI-2 assigns to the FBO cluster (yellow). It is interesting to note that companies 4, 5, 7 and 8 each have two goals related to idea generation. For the first four goals, the average weight of the KPIs assigned to the FBO cluster is 50% or higher. SI-2 identifies six goals that our semantic interpretation also assigned to the "Idea Generation-oriented goals" group. Three goals are false positives (xPG Influence, Value creation and Operational excellence). Of the remaining 36 goals from our case studies, there are no additional goals that our semantic interpretation suggests should belong to the "Idea Generation-oriented goals" group. In summary, SI-2 predicts 91% of idea generation-oriented goals correctly. When comparing the results of SI-1 with those of SI-2, the following can be observed: (1) six of the goals that are correctly indicated as similar by SI-2 are also indicated as similar by SI-1; (2) one goal (People) which, according to our semantic interpretation, is incorrectly assigned by SI-1, is also incorrectly assigned by SI-2.
49 Please see our explanations for the misleading title of this goal in our comments on the SI-1 indicator.
[Fig. 4.32 Image-oriented goals with SI-2 indicator: cluster compositions of "360° visibility" (company 1), "Thought leadership" (company 4), "Strategic partnerships" (company 7), and "Image" (companies 8 and 5)]
Figure 4.32 shows the goals assigned by SI-2 to the IMG cluster (purple). Five goals that are identified by SI-2 are also identified by our semantic interpretation as belonging to the "Image-oriented goals" group. Of the remaining 40 goals from our case studies there are no additional goals which, according to our semantic interpretation, should belong to the "Image-oriented goals" group. In summary, SI-2 assigns all the image-oriented goals correctly. When comparing the results of SI-1 with those of SI-2, the following can be observed: (1) four of the goals that are correctly indicated as being similar by SI-2 are also indicated as being similar by SI-1; (2) one goal (Strategic partnerships) was not detected by SI-1, but was correctly identified by SI-2 (Table 4.8).

Table 4.8 Accuracy comparison of SI-1 and SI-2 for six goal groups
No | Goal group | Accuracy of SI-1 (%) | Accuracy of SI-2 (%)
1. | Technology Transfer-oriented goals | 93 | 88
2. | Intellectual Property Creation-oriented goals | 98 | 93
3. | Operational Excellence-oriented goals | 100 | 95
4. | People Development-oriented goals | 95 | 93
5. | Idea Generation-oriented goals | 97 | 91
6. | Image-oriented goals | 98 | 100
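The accuracy figures in Table 4.8 appear consistent with counting, over all 45 collected goals, how often an indicator's in/out decision for a goal group matches our semantic interpretation. A sketch under that assumption follows; the goal identifiers are hypothetical:

def indicator_accuracy(indicated: set, semantic: set, total_goals: int = 45) -> float:
    """Share of goals whose in/out assignment to a goal group by the
    indicator matches the semantic interpretation; errors are the
    false positives plus the false negatives (symmetric difference)."""
    errors = len(indicated ^ semantic)
    return (total_goals - errors) / total_goals

# Example (TT group, SI-1): ten goals indicated, of which the last
# three are judged dissimilar semantically -> (45 - 3) / 45 = 93%.
indicated = {f"g{i}" for i in range(1, 11)}
semantic = {f"g{i}" for i in range(1, 8)}
print(round(indicator_accuracy(indicated, semantic) * 100))  # 93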
The performance of SI-2 is comparable to that of SI-1.50 The composition analysis and similarity indicators applied to our case study data deliver, in a pragmatic way,
50 Regarding the analysis results, we would have expected that in each goal there is at least one dominant performance cluster present in the cluster spectrum.
reasonable results on the goal similarities. Turning this around, it can provide a simple way to formulate recommendations for practitioners. This is of course our underlying motivation, as we stated at the outset of this section. Additionally, we can conclude that the goal composition analysis seems to be a valid approach to compare organizational goals. The findings of this analysis enable us to construct recommendations for practitioners. More precisely, for a specific organizational research goal, we can suggest (1) the set of activities that might contribute to the achievement of the goal and (2) a set of KPIs to assess the results of these activities and hence the organizational goal. With this method, having implicitly affirmed our assumption that performance clusters are a suitable means of building the connection between goals and KPIs, we substantiated our performance clusters with a deep analysis of activities, as documented above. These analysis steps, like others described in this chapter, will eventually be applied to the data of a large quantitative survey in order to validate the developed procedures. Altogether, although our data from the case studies is less detailed than that which can be obtained in a large quantitative survey, we have nevertheless observed and described some patterns that are a partial answer to research question 1. They also help us generate a questionnaire for the survey.
4.5 Summary
In this chapter we introduced our analysis approach. We presented this method using the findings from in-depth case studies comprising more than 70 personal expert interviews. The data revealed the need for the formation of performance clusters. Initially, KPIs were summarized from the content perspective by grouping similar KPIs into performance clusters. The performance clusters we created allow for an analysis of organizational goals by decomposing their assessment via these performance clusters. The association of organizational processes within performance clusters yields KPI classes. Altogether, performance clusters, KPI classes and the final KPIs build the foundation for the measurement part of a performance management system. Based on these analysis steps we may conclude that we have succeeded in developing a preliminary approach to deal with the individual elements of performance management, i.e. KPIs, KPI classes, performance clusters and, finally, organizational department goals. These elements, among others, will be analyzed on a large scale by the quantitative survey. The design of the questionnaire, the execution of the enquiry, and the findings are presented in the next chapter.
Chapter 5
Survey Findings
Abstract This chapter presents the findings from the large quantitative survey, which tests the findings from the literature review and case studies. The comparison of the qualitative case study results with the results of the survey forms the core of this chapter. The evaluation reveals that industrial research organizations seem to have a set of shared organizational goals and a set of shared KPIs, which can be summarized in KPI classes. In sum, the survey results endorse the relationships between KPIs and KPI classes, performance clusters and organizational research goals. This chapter is therefore dedicated to the examination of the qualitative results from the quantitative perspective. Keywords Analysis • Description of the results • Development of the survey • Sample design • Definition of the unit of analysis • Large quantitative survey • Statistics • Comparison of the results of quantitative survey with the qualitative analysis of the case studies
5.1 Introduction
In this section, we describe the development of our large quantitative survey and its findings.
5.2 Development of the Survey
The questionnaire1 is based on the findings of the case studies. As previously presented, the KPIs and goals have been classified, consolidated and condensed from individual facts into generic groups on a more abstract level. This format
1 See Appendix E.
enabled us to compose a questionnaire to gather information on a large scale with which to test and verify the findings of the case studies. The case studies essentially collated the five facts directly into one matrix.2 The survey also collated the five facts into one matrix, but rather than having the respondents complete a matrix, the questionnaire was administered via an online survey. The five facts are:
(a) Organizational research department goals
(b) Goal weightings in percent within the entire goal system (all goals = 100%)
(c) KPIs
(d) KPI weightings in percent within each goal (one goal = 100%)
(e) Mapping of KPIs contributing to a goal
Goals (a) have been expressed as eight generic goals, and KPIs (c) as 37 KPI classes. The weightings (b and d), in percent, are collected via a scale. The mapping (e) has been generated as a pre-selection based on the evaluation of case study data. The aggregation of goals and the mapping preparations are explained below; the consolidation of KPIs into KPI classes was presented in detail in Sect. 4.2.1. As previously presented, the case studies identified 45 organizational research department goals, 30 of which were different. Eight generic organizational goals for industrial research departments were formulated for the questionnaire (Table 5.1):

Table 5.1 Eight generic organizational goals in industrial research
1. Alignment with and transfer to internal development and other (business) units
2. Create and protect intellectual property
3. Improve the internal and external image of the research department and/or the company
4. Generate and evaluate future business opportunities
5. Recruit and develop excellent talent
6. Achieve a high standard of operational excellence
7. Establish and maintain strategic partnerships and/or collaborative research
8. Drive technology innovation and technology leadership
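Since the survey reuses the matrix structure of the case studies, the consistency conditions on facts (a)-(e) can be stated compactly: goal weights must sum to 100% across the goal system (b), and KPI weights must sum to 100% within each goal (d). A minimal validation sketch with hypothetical entries:

goals = {  # fact (a) with weightings (b)
    "Create and protect intellectual property": 0.4,
    "Achieve a high standard of operational excellence": 0.6,
}
kpis_per_goal = {  # facts (c)-(e): KPIs, their weightings, and the mapping
    "Create and protect intellectual property":
        {"# first filings": 0.7, "# patents granted": 0.3},
    "Achieve a high standard of operational excellence":
        {"budget adherence": 1.0},
}

assert abs(sum(goals.values()) - 1.0) < 1e-9  # all goals = 100%
for goal, kpis in kpis_per_goal.items():
    assert goal in goals  # every mapped KPI set belongs to a goal
    assert abs(sum(kpis.values()) - 1.0) < 1e-9  # one goal = 100%
print("PMS matrix is consistent")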
This list of eight generic goals is the result of a semi-objective, qualitative estimation resulting from iterative discussions with the case study companies. The people selected for the process of sorting, classifying and formulating the goals held senior management positions and have experience in defining and cascading goals in their daily activity.
2 Compare the developed PMS matrix compilation presented earlier.
Table 5.2 represents the findings of the case studies from the cross-case analysis mapping of goals and KPI classes.

Table 5.2 Mapping goals and KPIs
Rows (performance clusters, with the number of KPI classes per cluster): Technology Transfer (5), Future Business Opportunities (3), Technical Achievements (4), Intellectual Property (5), Operational Excellence (6), Talent Pool (4), Image (2), Publications (2), Presence in Scientific Community (2), Collaboration with Academia (2), Collaboration with Partners and Customers (2); 37 KPI classes in total.
Columns (the eight generic organizational research goals 1-8): an "x" marks each performance cluster whose KPI classes contribute to a goal; in total, 15, 15, 16, 13, 17, 17, 14 and 17 KPI classes are mapped to goals 1-8, respectively.
This mapping table was used to build the questionnaire3 by pre-assigning KPI classes to the eight generic organizational research goals. Respondents could additionally specify KPIs that were used in their company but were not identified among the presented KPI classes.
5.3 Sample Design

A large quantitative survey was used as the research approach. The target population for the online survey was as follows:
• Economically significant enterprises operating in the ICT industry;
• Located worldwide, covering especially Europe, the CIS, and the USA;
• With a formal research department in the organizational structure.
The sample-design process consists of the following steps:
1. The ICT industry was defined according to the ISIC classification.4
2. The chosen ISIC-based codes were translated into the SIC.5
3 See Appendix E.
4 Refer to the earlier definition of ISIC.
5 The reason for this translation step was that the available data is still organized according to the old standard (SIC).
3. The sample was drawn from a total population of 2,161 companies. As a basis, the Schonfeld & Associates database6 was used. This database was amended and refined with data from other economic areas.7
4. The initial list of 2,161 companies was reduced to 497 companies using the R&D ratio8 as a filter. This implies that companies reporting R&D spending would have a research department.
5. An appropriate contact person9 was identified to complete the questionnaire in each company.
6. The identified person was first contacted by phone. At the initial point of contact, the contact person was asked a series of questions to ensure:
• that the company has a research department;
• that the contact person is involved in the decision-making process regarding research organization activities.
7. A person who showed interest in the survey and met the screening requirements received an invitation to participate in the survey by e-mail (Table 5.3).
Table 5.3 Definition of the unit of analysis
Selection criteria | Definition
ICT sector(a) | ICT manufacturing industries; ICT trade industries; ICT services industries
Department | A dedicated research department within the overall organization (it can even be an independent legal entity); the research department must have its own organizational goals
Person | A person in charge of the research department (Head of Research); alternatively, COOs (Chief Operating Officers), persons responsible for controlling activities within the research department, or somebody who is in charge of monitoring the achievement of goals and is able to propose countermeasures in case of deviations
(a) The industry-based definition of the Information and Communication Technology (ICT) sector is based on the International Standard Industry Classification (ISIC) Revision 4. The classification framework of ICT originates from industries that primarily produce ICT goods and services.
6 Schonfeld & Associates Database: R&D Ratios & Budgets 2008. Note that this database contains worldwide data organized by industry sector. 7 EU: 2008 EU Industrial R&D Investment Scoreboard and Truffle 100 representing the top 100 European software vendors; CIS – ICT congress, China. 8 As a percentage of sales. 9 Different sources were used: Onesourse Database representing the largest business database, contact lists provided by the EU, contacts extracted by internet search, etc.
In total, 108 companies participated in the survey, two of which were excluded as strong outliers, leaving 106. This corresponds to a target response rate of 21.73% (108 of the 497 companies contacted). The criteria for non-response were:
• No appropriate research department (too few research activities, mostly R&D, etc.);
• The form was returned, but the key questions were not answered;
• Strong inconsistencies in the data.
5.4 Analysis/Description of the Results

5.4.1 General Data: Organizational Context and Research Environment
To portray the sample composition of the participating companies in our investigation, we first used a frequency distribution, which shows that 106 companies responded to the survey. Figure 5.1 shows that 57 companies had headquarters in Europe or the USA and 49 companies were headquartered in the CIS. With such a clear divide in the sample, the ensuing analysis was done separately for the two groups where necessary.
[Fig. 5.1 Location of company headquarters: overview; Europe and USA: 57; CIS: 49; N = 106]
Figure 5.2 shows that the Europe and USA group is dominated by Germany (18 responses), followed by the USA with eight, France with seven, and the UK with six responses. Figures 5.3-5.5 illustrate the distribution of the companies across the ICT subsectors. This information has to be interpreted carefully because the question allowed multiple answers. The investigation focused on the IT/ICT sector, which has three major subsectors: Manufacturing, Trade, and Services. More than half of the participating companies operate in one subsector, 13.2% are present in two subsectors, and 11.3% in three. It is interesting to note that almost 10% belong to five or more subsectors.
[Fig. 5.2 Location of company headquarters: detailed overview; DE: 18; US: 8; FR: 7; UK: 6; ES: 4; FI: 3; NL: 3; IT: 2; CH: 2; IL: 1; IE: 1; DK: 1; BE: 1; other European countries: 6%; CIS: 49; N = 106]
[Fig. 5.3 Simultaneous presence in multiple sectors; 1 subsector: 52.8%; 2 subsectors: 13.2%; 3 subsectors: 11.3%; 4 subsectors: 13.2%; 5-7 subsectors: 9.4%]
[Fig. 5.4 Multi-sector distribution of participating companies; Services: 73.6%; Manufacturing: 48.1%; Trade: 30.2%; Other sector: 11.3%; N = 106]
[Fig. 5.5 Detailed distribution of the ICT service subsectors; Computer programming and related services: 71.8%; ICT consultancy services: 33.3%; Data processing, hosting and related services: 30.8%; Telecommunication services: 30.8%; Other services: 9.0%; N = 78]
The simultaneous presence in multiple sectors explains the distribution in Fig. 5.4. The analysis shows that the majority of the companies are engaged in the Services subsector (78 companies, representing 73.6%), followed by Manufacturing (51 companies, 48.1%) and Trade (32 companies, 30.2%); 11.3% of companies are in other subsectors.10 As the services part plays an important role within the ICT sector and could be collected on a more granular level, Fig. 5.5 gives more detailed information. Within the Services subsector,11 the largest segment is computer programming and related services with 71.8%. ICT consultancy services is second with 33.3%, followed by data processing, hosting and related services as well as telecommunication services, both represented with the same intensity at 30.8%. We assume that the size of the company plays an important role because it is typically bigger companies that have research departments. Therefore, the following analysis focuses on:
• Size of the entire company;
• Size of the development department;
• Size of the research department only.
Figure 5.6 shows that two main groups of companies dominate our sample when it comes to company size:
• Thirty-six companies (33.9%) employ 500 or fewer people;
• Twenty-eight companies (26%) employ more than 20,000 people.
These two groups represent the smallest and biggest companies on the spectrum. The size of the development department shows a distribution that is similar to the size of the entire company (cf. Fig. 5.7).12
10 For example: applied IT for health and well-being applications, IT for electrical equipment, IT for energy, IT for chemistry/manufacturing, IT for industrial automation devices, etc.
11 Please note that N = 78, focusing on the Services part only.
12 It has to be noted that three companies found it difficult to state this number, which reduced the overall number of 106 respondents to 103.
Fig. 5.6 Size of entire company (≤ 500 employees: 36 companies; > 20,000 employees: 28; the intermediate groups 501–2,000, 2,001–5,000 and 5,001–20,000 account for the remaining 42 companies; N = 106)
Fig. 5.7 Size of development department (≤ 50 employees: 34 companies; ≥ 2,000 employees: 27; the intermediate groups 51–200, 201–500 and 501–2,000 account for the remaining 42 companies; N = 103)
Figure 5.8 shows the distribution of research department sizes. Thirty-one of the surveyed companies employ ten or fewer people in their research department, and 22 companies employ 11–50 people. Twenty-one companies have more than 500 research department workers.13
Fig. 5.8 Size of the research department (≤ 10: 31 companies; 11–50: 22; > 500: 21; the 51–100 and 101–500 groups account for the remaining 29 companies; N = 103)
13 Again, three companies had difficulty indicating the number of employees within the research department (N = 103).
The breakdowns of the sizes of research departments and development departments are very similar. On average, research departments are approximately 20–25% of the size of the development departments within the participating companies. For example, 68% of the companies with a research department of ten or fewer people also have a development department of 50 or fewer people. Concerning the ratio of the size of the research department to the size of the entire company, the following observations can be made. For the smallest group, a research department of ten or fewer people corresponds to about 2% of the size of the entire company (500 or fewer employees); for the biggest group, a research department of more than 500 co-workers corresponds to about 2.5% of the size of the entire company (more than 20,000 employees). For most companies, the bigger the company, the bigger the research department. The organizational goals of the research departments are one of the major foci of this study; therefore, the approaches to companies' goal-setting processes are of interest. Figure 5.9 reveals that most companies' goal-setting processes are carried out top-down (51.9%), followed by the middle-out approach (38.5%). Only a few companies (slightly less than 10%) use a bottom-up approach when defining their organizational goals.
Fig. 5.9 Goal-setting approach (top-down: 51.9%; middle-out: 38.5%; bottom-up: 9.6%; survey question: "Which of the following approaches best describes your company's goal-setting process?"; N = 104)
The breakdown within the individual groups can be seen when the size of the entire company is added to this analysis (cf. Fig. 5.10). In all groups except the first (up to 500 people), the share of top-down goal-setting is at least 50%; in the third and fifth groups, the use of top-down approaches reaches about 60%. It is worth mentioning that the bottom-up goal-setting approach is rather unusual in industrial research, and is entirely absent in the second and third groups.
Fig. 5.10 Cross tabulation: goal-setting process with company size (share of top-down goal-setting by size group: up to 500 employees: 42%; 501–2,000: 50%; 2,001–5,000: 63%; 5,001–20,000: 55%; > 20,000: 60%; the remainder in each group splits between middle-out and bottom-up; N = 104)
Figure 5.11 shows that companies from Europe and the USA tend towards top-down approaches (59.3%), whereas CIS companies prefer the middle-out approach (49%). Within both groups, only a minority (9.3% for Europe and the USA and 10.2% for the CIS, respectively) organize their goal-setting process bottom-up.
Fig. 5.11 Goal-setting approach, Europe and USA vs. CIS (Europe and USA: top-down 62%, middle-out 29%, bottom-up 9%; CIS: top-down 41%, middle-out 49%, bottom-up 10%)
In practice, each level in an organization tries to achieve its own goals, and these in turn contribute to the overall company success. Usually, the company goals are broken down throughout all hierarchical levels. A hierarchical level is normally represented by the managers that are responsible for the goal achievement on that level. These managers must refine and cascade the goals down to individual teams.
The number of management levels between the executive board and the head of research reflects how many times the organizational company goals have to be broken down before reaching the research department. The results show that there can be up to six levels between the executive board and the research department. Five out of the 102 companies14 indicated no levels, meaning that the head of the research department sits on the executive board. The modal answer, given by 41 companies, was a separation of two management levels; altogether, 86 companies report between one and three levels of separation. The case studies indicated that some companies review their organizational goals and their KPIs more often than others. To gain a deeper understanding of organizational goal definition within research departments, we asked respondents how often organizational goals are reconsidered in their companies. In practice, the achievement of organizational goals is often assessed using performance measures; respondents were therefore also asked to indicate the frequencies with which KPIs are used to assess their goals. Figure 5.12 shows the frequencies of reconsideration of goals on the left and of KPIs on the right.
Fig. 5.12 Frequency of reconsideration of goals (left) and KPIs (right) (goals: more frequently than once a year 29%, once a year 60%, less frequently 11% in total; KPIs: more frequently than once a year 24%, once a year 50%, less frequently 26% in total; answer categories ranged from "more frequently than once a year" to "less frequently than once every three years"; N = 98)
The dominant category for the reconsideration of goals as well as of KPIs is the "once a year" frequency (60% for goals, 50% for KPIs). This confirms the companies' orientation towards analysis driven by fiscal periods.15 The second largest group comprises companies that review their goals (29%) and KPIs (24%) more frequently than once a year. Few companies review their goals (11%) less frequently than once a year, while 26% do so for their KPIs. The frequency of KPI reconsideration is thus very similar to the frequency of organizational goal reconsideration; only the last three categories show any notable variation.
14 Note that four companies had difficulty indicating the number of management levels between the head of the research department and the executive board. The respondents were asked not to count the head of research, but to include the executive board in their number.
15 N = 106; eight cases were removed from consideration: one respondent did not know about the reconsideration of goals, four provided data on goals but not on the reconsideration of KPIs, and three provided no data for either question.
It would therefore appear that the reconsideration of goals and the reconsideration of KPIs are interconnected within a performance management process that occurs annually. Cross tabulation of these two sets of answers reveals a clear relationship. Table 5.4 shows that companies do indeed seem to have inter-related organizational processes for reviewing their organizational goals and performance measures.16 Of the 28 companies that review their goals more frequently than once a year, one half (14) simultaneously review their performance measures; a further 43% of these companies review their KPIs at least once a year. In line with the above, the predominant category is the reconsideration of both goals and KPIs "once a year" (61%). Companies generally seem to reconsider their goals more frequently than their KPIs. This implies that the controlling factors within the companies are actually the goals, and it is therefore these that need to be adjusted most frequently. The KPIs, by contrast, are reconsidered less frequently, which implies that they are fairly stable and can be used over longer periods. Having established how and how often goals and KPIs are reviewed, we asked respondents to indicate their satisfaction with their current performance measurement. Figure 5.13 shows that about half of the respondents are satisfied with their current goals and KPIs and would not change them. Of the roughly 50% who would make changes, about half would select new goals/KPIs and half would keep the goals/KPIs but give them new priorities or weights. The cross tabulation in Fig. 5.14 illustrates that companies that are satisfied with their current goals are also satisfied with their KPIs (75.5%). The same applies to the group preferring to select new goals: they would also prefer to select new KPIs (71.4%). This confirms the relationship between goals and KPIs. The use of KPIs in research departments is a highly controversial issue because of the high degree of intangibility, uncertainty, uncontrollability and unpredictability of research outputs and outcomes.17 As such, it is useful to understand the nature of the KPIs and whether they are mostly quantitative or qualitative. The literature tends to conclude that quantitative KPIs are less viable for research output and that qualitative KPIs should be used.18 We therefore asked about the processes, methods and tools that the companies use to collect qualitative data. One third of the survey participants stated that the balance between qualitative and quantitative measures was good. Nearly 10% think that more weight should be placed on quantitative measures, whereas almost 60% think that more weight should be placed on qualitative measures. Regarding an adequate set-up to collect qualitative data, nearly 66% have means in place, while 34% have no tools or methods to capture qualitative data. This is shown in Figs. 5.15 and 5.16.
16 Using Crosstabs' nominal-by-nominal and ordinal-by-ordinal measures, we found a statistically significant positive association between the frequencies of reconsideration of goals and of KPIs.
17 See our discussion in previous chapters.
18 See our discussion in previous chapters.
Table 5.4 Reconsideration of goals versus reconsideration of KPIs (count; % within the goal)
Reconsideration of goals | Reconsideration of KPIs: more frequently than once a year | once a year | once every 2 years | once every 3 years | less frequently than once every 3 years | Total
More frequently than once a year | 14 (50%) | 12 (43%) | 1 (4%) | 1 (4%) | 0 (0%) | 28 (100%)
Once a year | 8 (14%) | 36 (61%) | 6 (10%) | 4 (7%) | 5 (8%) | 59 (100%)
Once every 2 years | 1 (20%) | 0 (0%) | 3 (60%) | 1 (20%) | 0 (0%) | 5 (100%)
Once every 3 years | 0 (0%) | 0 (0%) | 0 (0%) | 1 (50%) | 1 (50%) | 2 (100%)
Less frequently than once every 3 years | 0 (0%) | 1 (25%) | 0 (0%) | 0 (0%) | 3 (75%) | 4 (100%)
Total | 23 (24%) | 49 (50%) | 10 (10%) | 7 (7%) | 9 (9%) | 98 (100%)
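Footnote 16 refers to the nominal-by-nominal and ordinal-by-ordinal association measures of an SPSS-style Crosstabs procedure. As a minimal sketch of how such a cross tabulation and its association tests can be reproduced, the following Python fragment uses pandas and SciPy; the ten answer pairs are invented for illustration, and the chi-square test and Spearman correlation merely stand in for the measures named in the footnote.

```python
import pandas as pd
from scipy.stats import chi2_contingency, spearmanr

# Hypothetical per-company answers, coded 0..4 from "more frequently than
# once a year" (0) to "less frequently than once every 3 years" (4).
df = pd.DataFrame({
    "goals": [0, 0, 1, 1, 1, 2, 4, 1, 0, 3],
    "kpis":  [0, 1, 1, 1, 2, 2, 4, 1, 1, 3],
})

counts = pd.crosstab(df["goals"], df["kpis"])                      # cell counts
row_pct = pd.crosstab(df["goals"], df["kpis"], normalize="index")  # "% within the goal"

chi2, p_nom, dof, _ = chi2_contingency(counts)   # nominal-by-nominal association
rho, p_ord = spearmanr(df["goals"], df["kpis"])  # ordinal-by-ordinal association

print(counts, (row_pct * 100).round(0), sep="\n\n")
print(f"chi2 = {chi2:.2f} (p = {p_nom:.3f}), Spearman rho = {rho:.2f} (p = {p_ord:.3f})")
```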
Fig. 5.13 Opportunity to change goals or KPIs ("No changes, I am happy with goals/KPIs as they are": goals 51%, KPIs 48%; the remainder splits between "Keep exactly the same goals/KPIs, but give them other priorities" and "Select new goals/KPIs" at roughly a quarter each; N = 96)
Fig. 5.14 Cross tabulation: opportunity to change goals with the opportunity to change KPIs (of the companies happy with their goals, 75.5% are also happy with their KPIs; of those preferring to select new goals, 71.4% would also select new KPIs; N = 96)
Via a cross tabulation we look into each group of Fig. 5.15 to understand the availability of processes, methods and tools to collect qualitative data (cf. Fig. 5.17). The majority (85%) of the participants who stated that their measures are well-balanced also have an adequate set-up to collect qualitative measures; only 15% reported a good balance but no such set-up. An interesting group of 58% stated that the ratio of measures is not well-balanced and that more weight should be placed on qualitative measures; just over half of them (52%) do not have adequate means to gather qualitative KPIs. Additionally, to capture the 'gut feeling' of the survey participants, we asked for written statements on:
Fig. 5.15 Ratio of qualitative/quantitative measures currently used ("YES, they are well-balanced": 33%; "NO, I think more weight should be placed on quantitative measures": 9%; "NO, I think more weight should be placed on qualitative measures": 58%; survey question: "Do you think that the ratio of qualitative and quantitative measures in your performance measurement is adequate?"; N = 90)
• Satisfaction with the research organization's current performance measurement: hypothetical opportunities to change goals and KPIs, to change their weights, or to leave everything as it is;
• The ratio of qualitative and quantitative measures in their performance measurement;
• Whether companies have methods and tools available to collect qualitative measures.
Additional comments that we feel are worthwhile are reported below:
• "We are in the process of implementing a 'stage/gate' methodology for new product releases. Such a methodology will incorporate clear measurements (quality, cost and timeline)."
• "I am aware that there is a need to continuously reflect about optimizing the evaluations. Qualitative assessments are very subjective and depend on the person's experience doing the assessment."
• "We are in the implementation phase right now of new performance evaluation tools."
• "The goal-setting process is well developed. The KPI process, however, is not yet well adapted to the specific nature of research objectives; this is to be further developed in the coming years."
Fig. 5.16 Availability of means to collect qualitative data (yes: 66%; no: 34%; survey question: "Do you think that you have an adequate set-up (processes, methods, tools) to collect the appropriate qualitative data?"; N = 91)
Fig. 5.17 Cross tabulation of Figs. 5.15 (ratio) and 5.16 (availability): adequate set-up in place per group (well-balanced group: 85% yes, 15% no; more-quantitative group: 50% yes, 50% no; more-qualitative group: 48% yes, 52% no; N = 80)
• "Goals of course may differ from year to year depending on the situation of the company and the organization. So even if I feel that the goals are rational, I anticipate their change over time."
• "Our research program is relatively new and small enough to be managed mostly through qualitative assessments rather than formal measurements. Over time, the formality will become more important. This survey is a good illustration of some of the tools we could use."
• "Buy in of top management on the complete range of KPIs could be improved. Some parts of management show interest in parts of the KPIs only."
• "It is very difficult to find the right balance between qualitative and quantitative measures. Also, qualitative measures tend to suffer from a lack of objectivity in the process."
• "A mix of quantitative measures to track operational processes and some specific research KPIs are needed, e.g. patents, along with a qualitative assessment of impact. The complexity comes from the combination of these measures, which cannot be 'formulaic'."
Whether or not to measure the performance of industrial research organizations has been a hotly discussed topic in both academic and practitioner circles. Our literature review revealed that the latest practitioner thinking19 holds that measurement is necessary but that implementation is generally weak.20 We therefore asked our respondents to select the statement that best reflects their opinion. The data analysis reveals that altogether 96% of the survey participants (cf. Fig. 5.18) confirm the view we gathered via the literature review.
Fig. 5.18 Necessity of performance measurement in industrial research (57%: "Performance measurement in industrial research organizations is an obvious need and is as necessary as management itself."; 39%: "Performance measurement in industrial research organizations is necessary but the measured results should not influence management decisions too heavily."; 4%: "Performance measurement in industrial research organizations negatively affects creativity; it is either too costly or not necessary at all."; survey question: "Please select the statement that best matches your opinion."; N = 99)
19 Compare the work of Kerssens-van Drongelen (1999, 2001).
20 Francis (1992), Robb (1991), Gupta and Wilemon (1996).
While 57% are of the opinion that performance measurement is as necessary as management itself, 39% endorse this statement but suggest that management decisions should not be influenced too heavily by the measured results. It is interesting to note that only 4% of the respondents believe that performance measurement is a negative influence or unnecessary. Again, respondents could comment freely on this question. The following comments are worth sharing:
• "I think there are two kinds of industrial research organizations; one looks more to the future and the other is more product-oriented. My response applies to a more product-oriented industrial organization."
• "The performance measurement that most positively contributes should be applied to the environment itself – one that is research-funded and facilitated so that the results progress naturally."
• "Basic research is done more and more outside companies. Companies nowadays focus more on applied research and development."
• "The questions are very relevant. The real question is 'how to measure the impact of your research organization on the company', and this can be thoroughly and lengthily debated and discussed. Therefore it is best to have some kind of performance measurement installed; but then again, it may stifle your research capacity!"
• "Performance measurement is part of the story; it provides data but rarely the whole picture. Judgment and insight are a key part of the process and, if excluded, result in less of an impact on those areas that can't be measured, i.e., an impact on the organization ahead of product revenue, often by many years."
• "Just is 'just' an indication if you like numbers, but there is more beyond which has impact for a 'successful story'."
Following the general discussion about performance measurement in the entire research department, we asked participants about the assessment process for individual employee performance. The majority of our sample (cf. Fig. 5.19) have such a process in place (91%), while 9% do not. Because there is a strong opinion, especially from an organizational and occupational psychology perspective, that personal incentives improve individual performance,21 we surveyed the degree to which companies use variable salaries (bonuses etc.) as incentives to achieve goals. Figure 5.20 shows each company (100 companies answered this question) as one dot in the graph. It clearly demonstrates a respondent preference for rounded numbers: the reported shares cluster at intervals of five. The most distinctive plateaus are:
• 0%: companies 1–12;
• 5%: companies 21–31;
• 10%: companies 37–55;
• 15%: companies 57–65;
• 20%: companies 66–85.
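The clustering of answers at rounded values can be quantified directly. The following minimal Python sketch computes the share of responses sitting on multiples of five; the `responses` list is an invented stand-in for the survey's 100 raw answers.

```python
from collections import Counter

# Invented stand-in for the 100 reported variable-pay shares (in %).
responses = ([0] * 12 + [5] * 11 + [10] * 19 + [15] * 9 + [20] * 20
             + [25, 30, 30, 35, 40, 45, 50, 50]
             + [3, 7, 12, 18, 22, 33, 47] * 3)

counts = Counter(responses)
heaped = sum(n for value, n in counts.items() if value % 5 == 0)
print(f"{heaped / len(responses):.0%} of answers sit on multiples of five")
```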
21 Compare Landy and Farr (1983).
Fig. 5.19 Assessment process for individual employee performance (yes: 91%; no: 9%; survey question: "Do you have an individual employee performance assessment process in place?"; N = 102)
Fig. 5.20 Variable part of the salary tied to goal achievements (each of the 100 responding companies plotted by the approximate share of the variable payments within its organization that are directly related to goal achievements; N = 100)
Staff in 12 companies have no variable part in their salaries, i.e., there is no connection between remuneration and goal achievement in these companies. The maximum appears to be at the 50% mark, and only two companies indicated this number. As shown in Fig. 5.21, the two research departments that indicated a 50% variable part in their salaries come from different size groups: one belongs to the smallest group of up to ten people (in blue), the other to the larger group of up to 500 people (in red).
Fig. 5.21 Variable part of the salary, by research department size (companies color-coded by research department size: up to 10, 11–50, 51–500, and > 500 employees; N = 100)
Figure 5.21 also shows how research departments of different sizes handle the variable part of salaries. The color-coded distribution shows that each group contains departments of different sizes, which indicates that our sample is well-proportioned.
Fig. 5.22 Spending on collaboration with academia (each of the 92 responding companies plotted by the approximate share of its research budget spent on collaboration with universities; N = 92)
Basic research is done more and more outside companies. Companies nowadays focus more on applied research and development.22
Consistent with this statement, our case study research revealed that companies increasingly collaborate with universities23 to integrate basic research and to build applications from the results for business-relevant scenarios. Looking beyond company boundaries, we asked the respondents to indicate the approximate share of their research budget spent on university collaboration. The analysis (cf. Fig. 5.22) shows that almost 30 of the surveyed companies spend none of their research budget on collaboration with universities. Figure 5.22 highlights only one company of the 92 that invests 50% of its budget in university collaboration. One possible interpretation is that basic research conducted at universities represents an important source for the internal research of a company in which only applied research is conducted. However, the
22 This statement is part of the comments from our large quantitative survey.
23 This thought is also substantiated by the open innovation notion strongly promoted by Chesbrough (2003) and Chesbrough et al. (2007).
majority of the surveyed companies (67%) seem to spend only a small amount on collaboration with academia: the modal group is 0–5%, followed by the 6–10% group, which contains 17% of the companies. Figure 5.23 color-codes the sizes of the research departments. Again, it can be seen that there are no differences in spending on university collaboration between research departments of different sizes.
Fig. 5.23 Spending on collaboration with academia, by research department size (companies color-coded by research department size: up to 10, 11–50, 51–500, and > 500 employees; N = 92)
The amount of money spent on collaboration with academia is often interpreted, especially by practitioners, as an indicator for the intensity and quality of collaboration and is often related to the research department’s or company’s ability to establish open innovation platforms. Generally, due to the lack of other KPIs, this indicator gives a good idea of the intensity of collaboration, when considering that behind this budget are joint projects involving both parties. However, another indicator could be the researcher’s informal networks, which often cannot be made fully transparent to the company, but do affect a research department’s capability to integrate basic research.
It is often assumed that research departments represent a pool where new products, technologies, solutions, concepts or simply business opportunities are developed. Companies strive to protect new inventions that could lead to competitive advantage, and the number of granted patents is one of the most commonly used indicators for assessing the innovative capability of a company. Certainly, patents are filed not only by research departments but by other functions, such as development, as well. In order to understand how research departments perform in this regard, we asked the survey participants to indicate the approximate percentage of granted patents coming from research compared to the rest of the company. The data analysis shows the following picture: generally, the distribution is continuous, ranging from no patents up to all of the company's patents originating in the research department. It is interesting to note that 21 of the 84 companies (25%) have no patents originating from their research departments. At the other end of the scale, there are ten companies (12%) where the research department files all of the company's patents. Other similar-sized groups generate 50%, 60% and 80% of the company's patents (Fig. 5.24).
Fig. 5.24 Patents originating within research departments (each of the 84 responding companies plotted by the approximate percentage of granted patents coming from research compared to the rest of the company; N = 84)
Figure 5.25 shows a color-coded distribution of research department sizes against the share of the company's patents they generate. It can be seen that there are no differences between research departments of different sizes with regard to patent generation.
Fig. 5.25 Patents originating within research departments, by research department size (companies color-coded by research department size: up to 10, 11–50, 51–500, and > 500 employees; N = 84)
Corporate funding allows the research department to work on long-term topics that could be of strategic relevance for the business of the company. Other internal funding might come from internal business units as a cross charge for the technology transfer from research. Public funding comprises funds acquired from national or European funding bodies, e.g., the BMBF or BMWi in Germany, or the European Union at the European level. Customer or partner funding often takes the form of joint initiatives on dedicated topics or technologies. Figure 5.26 shows the sources of funding for the research departments of the participating companies. Internal funding dominates the budget of industrial research departments: corporate funding represents the biggest share (65%) of participants' budgets, followed by other internal funding with 15%. The external part is split almost equally into public funding (10%) and customer or partner funding (9%).
Fig. 5.26 Funding source distribution (corporate funding 65%; other internal funding 15%; public funding 10%; customer/partner funding 9%; other external sources 1%; survey question: "Please indicate the approximate percentage of funding sources for your department"; N = 96)
5.4.2 Performance Measurement Data: Organizational Goals and KPIs
5.4.2.1 Goals
One of the main areas that we want to examine within this work is the importance of pre-set organizational goals and KPIs within the performance management system. Within the questionnaire, the respondents were given eight mission statements24 for industrial research that might be reflected in the organizational goals of their research organization. The respondents were asked to select those goals that, in the current measurement period, best reflect their organizational research goals, and to indicate their importance for overall achievement. The following collection of statements was provided:
1. Alignment with and transfer to internal development and other (business) units
2. Create and protect intellectual property
3. Improve the internal and external image of the research department and/or company
4. Generate and evaluate future business opportunities
5. Recruit and develop excellent talent
6. Achieve a high standard of operational excellence
7. Establish and maintain strategic partnerships and/or collaborative research
8. Drive technology innovation and technology leadership
24 The development of these mission statements was discussed earlier at the beginning of this chapter.
Furthermore, the respondents were given the possibility to add other goals that are currently in use and could not be subsumed under the eight above. In fact, no goal was named that could not be mapped to the original eight. The following are examples of the additional goals collected, together with an indication of our placement (Table 5.5):
Table 5.5 Examples of additional goals named in the survey
Additional goal named | Match and placement to generic goal
Transfer of project results | Moved to 1
Develop competence for company strategy | Moved to 8
Level of public funding received | Moved to 6
Produce new products | Moved to 4
Move talents into the business | Moved to 5
Identify emerging technologies | Moved to 4
Adherence to scheduling | Moved to 6
Scientific output (publications) | Moved to 3
Maintain the economic status of the R&D division | Moved to 6
The analysis of the data on organizational goals shows that the most popular one is "Generate and evaluate future business opportunities", which is consistent with the overall mission of research. This goal (goal 4 in Table 5.6) was selected by 101 companies, while the least selected goal (goal 3), "Improve the internal and external image of the research department and/or the company", was selected by 82 companies.
Table 5.6 Frequency of use of the organizational goals
Goal number | Description | Frequency of use
Goal 1 | Alignment with and transfer to internal development and other (business) units | 94
Goal 2 | Create and protect intellectual property | 96
Goal 3 | Improve the internal and external image of the research department and/or the company | 82
Goal 4 | Generate and evaluate future business opportunities | 101
Goal 5 | Recruit and develop talent | 89
Goal 6 | Achieve a high standard of operational excellence | 90
Goal 7 | Establish and maintain strategic partnerships and/or collaborative research | 92
Goal 8 | Drive technology innovation and technology leadership | 91
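Counts such as those in Table 5.6 can be derived mechanically from the multi-select survey answers. The following minimal Python sketch shows one way to do this with pandas, using a three-company toy sample in place of the real data.

```python
import pandas as pd

# One row per company; True means the goal is part of its goal system.
answers = pd.DataFrame([
    {"goal_1": True,  "goal_2": True,  "goal_4": True},
    {"goal_1": False, "goal_2": True,  "goal_4": True},
    {"goal_1": True,  "goal_2": False, "goal_4": True},
])

frequency_of_use = answers.sum()  # number of companies using each goal
print(frequency_of_use.sort_values(ascending=False))
```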
In terms of the number of organizational goals currently used in the companies’ research departments, our analysis shows that the lowest number of goals a research
organization strives for is two.25 Regarding the largest number of goals used, eight seems quite realistic, as the majority of the sample (54%) indicated that all eight goals are currently in place (cf. Fig. 5.27).
Fig. 5.27 Number of goals used in research departments (2 goals: 1%; 3 goals: 3%; 4 goals: 4%; 5 goals: 9%; 6 goals: 14%; 7 goals: 15%; 8 goals: 54%; x-axis: the number of goals presently assessed; N = 106)
Furthermore, the respondents were asked to express the importance of each previously selected goal. Assuming that the entire goal system would add up to 100%, the respondents were asked to assign a weight to each goal. This weighting is used later to understand the importance of individual KPIs within the entire performance management system.
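The text does not spell out the combination rule, but one plausible reading is that a KPI's overall importance is the product of its goal's weight and the KPI's weight within that goal, accumulated over all goals that use it. The following Python sketch illustrates this under that assumption; all names and numbers are hypothetical.

```python
# Goal weights as stated by a respondent (the goal system sums to 100%).
goal_weights = {"goal_1": 0.30, "goal_2": 0.20, "goal_4": 0.50}

# KPI weightings within each goal (each goal's KPI weights also sum to 1).
kpi_weights = {
    "goal_1": {"KPI_2": 0.5, "KPI_1": 0.3, "KPI_4": 0.2},
    "goal_2": {"KPI_15": 0.6, "KPI_13": 0.4},
    "goal_4": {"KPI_8": 0.7, "KPI_1": 0.3},
}

overall: dict[str, float] = {}
for goal, g_w in goal_weights.items():
    for kpi, k_w in kpi_weights[goal].items():
        # A KPI used for several goals accumulates importance across them.
        overall[kpi] = overall.get(kpi, 0.0) + g_w * k_w

for kpi, w in sorted(overall.items(), key=lambda kv: -kv[1]):
    print(f"{kpi}: {w:.2f}")
```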
5.4.2.2 KPIs
It is easier to analyze goals within a performance management system than the indicators that assess goal achievement. That is because:
• it is common practice in organizations to prioritize goals; and
• as a consequence of prioritization, there is always only a limited number of goals within a goal system.
25 Within our case studies, the lowest number of research goals in a single company was three.
Both reasons also apply, but only to a limited extent, to KPIs. It is generally difficult to prioritize them, and there is no general rule on how many KPIs should assess a single goal. We asked companies to select the KPIs that best correspond with the KPIs actually used in their research organizations. This selection of KPIs and their mapping to the previously selected goals was the dominating part of the questionnaire. As previously mentioned, 37 KPI classes had been identified; for the sake of simplicity, the KPI classes were named as KPIs in the questionnaire, with examples for each KPI class provided to ease comprehension. To keep the questionnaire feasible to fill out, not all 37 KPIs were listed per goal; rather, only a pre-selected number of KPIs were listed. This pre-selection was based on the qualitative analysis from the case studies and is consequently verified in this quantitative survey. As introduced above, for the purpose of the survey we provided normalized goals. These goals were generated in such a way that we would expect certain performance clusters to dominate certain goals, a fact to be verified against the data collected in the survey. Even though we provided only a selection (varying from 12 to 17) of the 37 KPIs to assess each of the eight goals, the analysis of the collected survey data shows a tendency for respondents to select more KPIs per goal in the online survey than in the personal interviews of the case studies. The qualitative data suggests that almost 34% of all goals across all case studies are assessed with two KPIs and 22% of goals with three KPIs. The quantitative data, however, has two peaks of 10% at 13 and 17 KPIs per goal (Fig. 5.28). This effect can be explained by the fact that the participants of the survey often answered from the perspective of which KPI they think is important to assess a goal, and not which is effectively
Fig. 5.28 Comparison of the number of selected KPIs per goal, survey vs. case studies (x-axis: maximum possible number of KPIs per goal, 1–17; y-axis: frequency of selected KPIs per goal)
used in their organization.26 This tendency will be analyzed with the two similarity indicators (SI1 and SI2) introduced in Chap. 3. In earlier chapters, we concluded that both similarity indicators provided equivalent, reasonable results. From the performance management perspective, the tendency to assign more KPIs to a single goal (as observed in the online survey) automatically means that the importance of each KPI, expressed in its relative weighting, is distributed more evenly. Therefore, for the comparison calculation, the average weight of a KPI within a cluster (SI2) has a higher relevancy for the analysis of the survey data than the sum of weightings within a cluster (SI1). To counterbalance this tendency, we apply SI2 in the analysis. We have already introduced performance clusters as content areas reflecting typical research activities; in the survey, however, the respondents had no indication of their existence. The mapping table presented at the beginning of this chapter shows that altogether not more than 17 KPIs were given to the respondents for selection. Again, just as for the goals, the respondents had the possibility to name their own KPIs currently used to assess their organizational goals if these were missing from the pre-selection. Table 5.7 shows KPIs that were named as additional ones; these were examined and sorted into one of the available KPI classes where there was a clear match:
Table 5.7 Examples of additional KPIs named in the survey
Additional KPI named | Match to KPI class (number)
Transfer of people to the business | 25. Outflow: volume/quality of people leaving the research organization to move to other parts of the company/ecosystem
Number of open applications for a job | 24. Inflow: volume/quality of people hired into the research organization
External awards; external recognition; major awards | 29. External visibility: external perception or external recognition
Number of business initiatives transferred; number of proposals developed | 7. Innovation process: (weighted) # of ideas moved to a certain or the next phase of the innovation process
Completed actions on management agenda; on-time delivery (OTD); % of progress reports published on time | 19. Timeliness: adherence to timelines (phases, gates)
Time sheet compliance | 21. Project management quality: quality of project management
Customer demos | 36. Intensity: volume of collaboration with partners and customers
Professional recognition | 28. Internal visibility: internal perception or internal recognitions
Expenditure on (introductory) training | 26. Talent development: volume/quality of development measures undertaken
Amount of customer complaints | 37. Quality: quality of collaboration with partners and customers
26 Note that the questionnaire pointed out in each section that those KPIs should be selected which best correspond with those effectively used in the research organization.
5.4.2.3 KPIs Assessing Goal 1
The goal "Alignment with and transfer to internal development and other (business) units" (referred to as goal 1) has been selected by 94 companies as being present in their goal system in the current measurement period. The remaining 11 companies stated that they do not assess this goal. KPI usage and frequency is shown in Table 5.8.
Table 5.8 Frequency of use of KPIs assessing goal 1 (Alignment with and transfer to internal development and other (business) units)
KPI assessing the ... | KPI | Used % | Not used %
... quality of the research results transferred | KPI_2 | 85.1 | 14.9
... volume of technology transfer activities to development or other (business) units | KPI_1 | 83.0 | 17.0
... significance of the transferred research results for the receiving unit | KPI_4 | 80.9 | 19.1
... alignment of research activities with the IP strategy of the company | KPI_17 | 75.5 | 24.5
... (achieved) business impact of an idea in terms of its economic value | KPI_8 | 74.5 | 25.5
... economic value of the transfer activity or transferred research results | KPI_5 | 73.4 | 26.6
... quality of the transfer process or transfer activities | KPI_3 | 71.3 | 28.7
... quality of collaboration with partners and customers | KPI_33 | 68.1 | 31.9
... volume of potentially protectable inventions submitted into the IP pipeline | KPI_13 | 67.0 | 33.0
... volume of patents granted | KPI_15 | 66.0 | 34.0
... volume of collaboration with partners and customers | KPI_32 | 64.9 | 35.1
... intensity of input into the innovation process | KPI_6 | 60.6 | 39.4
... volume of first filings out of the IP pipeline | KPI_14 | 59.6 | 40.4
... quality of granted patents | KPI_16 | 56.4 | 43.6
... (weighted) # of ideas moved to a certain or the next phase of the innovation process | KPI_7 | 54.3 | 45.7
The three least used KPIs are:
• volume of first filings out of the IP pipeline (KPI_14);
• quality of granted patents (KPI_16);
• (weighted) # of ideas moved to a certain or the next phase of the innovation process (KPI_7).
Less than 60% of the companies use these three KPIs to assess the above-mentioned goal.
The three most used KPIs are:
• quality of the research results transferred (KPI_2);
• volume of technology transfer activities to development or other (business) units (KPI_1);
• significance of the transferred research results for the receiving unit (KPI_4).
These three KPIs are used by more than 80% of companies to assess goal 1 (85.1% for KPI_2, 83% for KPI_1 and 80.9% for KPI_4). Furthermore, all three KPIs belong to the performance cluster "Technology Transfer".27 In Chap. 3 we stated that if we could determine the importance of the KPIs, we would be able to establish the rank of the respective performance cluster for each company. The ranking of performance clusters is a prerequisite for the comparison between companies, and from that comparison, statements regarding the presence and importance of performance clusters become possible. In the following, we would like to understand which results the data analysis of our quantitative survey delivers; in a second step we then compare these findings with the data resulting from the case studies to examine their consistency. Two aspects of the survey data are examined:
A. the presence of performance clusters to assess the respective goal, derived from the assignment of the KPI classes selected in the questionnaire; and
B. the importance, expressed in the ranking of performance clusters, that results from the weightings of the selected KPI classes.
As explained above, in addition to the pre-assigned KPI classes provided in the questionnaire, respondents had the possibility to add their own KPI classes in case they felt that important KPI classes were missing. This aspect is important, as the assignment of KPI classes translates directly into the presence of performance clusters. Note that the performance cluster concept was not visible within the questionnaire; that is, the respondents did not know which KPI class is assigned to which performance cluster. In the following, in order to show the presence of performance clusters per goal, we examine the usage frequencies of the pre-assigned KPI classes and performance clusters. The additionally named KPI classes/performance clusters are mentioned individually when examining each goal. For goal 1, "Alignment with and transfer to internal development and other (business) units", the average weighting of the used KPI classes is shown in Table 5.9. To determine the ranking of the performance clusters used to assess goal 1, we calculated averages across all companies (94 companies use goal 1) of the average weightings of selected KPIs per cluster (Table 5.9). The reasons for choosing this method are twofold:
27 The performance cluster concept has been introduced in Sect. 3.2 and analyzed in Sect. 3.3.
Table 5.9 Selected performance clusters in goal 1 according to SI2 (survey)
Performance cluster | TT | IP | FBO | CPC | TP
Average across companies of average weightings of selected KPIs per cluster | 2.22 | 1.86 | 1.75 | 1.46 | 0.02
Ranking | 1 | 2 | 3 | 4 | 5
• Every cluster has a varying number of KPIs, which also vary in how difficult they are to collect; and
• not every KPI in a cluster is effectively used to assess the goal (it might be the case, for example, that only two out of six KPIs are eventually used). An unselected KPI (marked as not used in the questionnaire) is not necessarily less important; rather, the company may simply not collect it or have no means to do so.
Therefore, we calculate the average not across all KPIs, but only across all effectively used KPIs. In the next step, we compare these survey results with the findings of the case studies in terms of presence and importance (rankings) of performance clusters. In the following, we present all of the goals from our case studies that have the TT performance cluster ranked highest. It is left to the reader's own discretion to decide, from a semantic perspective, whether the individual goals taken from the case studies are similar to the generic goal 1 "Alignment with and transfer to internal development and other (business) units" from the survey.
Table 5.10 Goals with TT cluster ranked highest (case study); weightings with ranks in parentheses
Case study | Name of goal | TT | IP | FBO | CPC | TP
C_1 | Top results (project results have a significant impact on business) | 77% (1) | 16% (2) | 5% (3) | 2% (4) | –
C_2 | Scientific advisor to the development | 100% (1) | – | – | – | –
C_3 | Alignment and transfer | 100% (1) | – | – | – | –
C_3 | Advanced technology solutions (in key areas of the company's strategy; codify the technical strategy) | 100% (1) | – | – | – | –
C_6 | Transfer (input to development by the transfer of research results to apply them in the company's products) | 50% (1) | 25% (2) | – | 25% (2) | –
C_8 | Alignment and transfer (provide input to development to drive incremental innovation into the company's existing product portfolio) | 60% (1) | – | 30% (2) | – | 10% (3)
(The PUB and OPEX columns are empty for these goals.)
It should be pointed out that company 3 (C_3) has two organizational goals related to the generic goal 1; in both cases the company assesses these goals with KPI classes that all come from the performance cluster TT. When comparing the findings for goal 1 from the case studies with the survey results, we can conclude that there is a strong consistency between both analyses. As Table 5.9 shows, only one additional KPI was added in the survey, which from the content perspective is related to the performance cluster TP. This conforms with the selection of KPI classes in the case studies; e.g., company 8 (C_8) uses KPI classes from the TP cluster (with a weighting of 10%) to assess goal 1. If we disregard the lowest three performance clusters (PUB, OPEX and TP) and focus on the first four (TT, IP, FBO and CPC), a clear consistency with the survey results can be observed. Table 5.10 shows that the KPI classes assigned to the performance cluster TT dominate all other KPI classes in terms of ranking. The IP cluster is present in two companies (C_1 and C_6) and is in second place whenever present. FBO is also second in C_8. There is no discrepancy between the rankings of IP and FBO: although both are ranked second, they do not occur in one combination. The CPC cluster is present in C_6 and also ranked second. To remind the reader of the KPI classes that are associated with the TT cluster, which is ranked first, we specify them here:
Transfer volume: KPIs measuring the volume of technology transfer activities (e.g. person days, x-charges (€))
Transfer result quality: KPIs assessing the quality of the artifacts transferred (e.g. subjective evaluation by the receiving unit)
Transfer process quality: KPIs assessing the quality of the transfer process (e.g. subjective evaluation by the receiving unit)
Transfer significance: KPIs assessing the significance of the transferred artifacts for the receiving unit (e.g. quality of the strategic alignment between research and development)
Economic transfer value: KPIs measuring the economic value of the transfer activity or transferred artifact (e.g. for an activity, in the case of knowledge transfer, the cost that would have occurred by hiring an external consultant; for an artifact, the generated revenue)
Summarizing the examination of goal 1: the four clusters TT, IP, FBO and CPC are the most popular performance clusters used to assess this goal. Furthermore, the ranking of performance clusters in the survey affirms the ranking from the case studies, at least for the first two clusters, which are the most dominant when assessing this goal.
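To make the calculation behind Tables 5.9 and 5.12 concrete, here is a minimal Python sketch of an SI2-style cluster ranking as we read it from the text: per company, average the weightings of the KPIs effectively used in each cluster, then average these values across companies and rank them. The KPI-to-cluster mapping and the weights are toy data, and excluding (rather than zero-filling) clusters a company does not use is an assumption.

```python
import pandas as pd

kpi_to_cluster = {"KPI_1": "TT", "KPI_2": "TT", "KPI_15": "IP",
                  "KPI_13": "IP", "KPI_8": "FBO"}

# Per-company KPI weightings for one goal; a missing KPI means "not used".
companies = [
    {"KPI_1": 3, "KPI_2": 2, "KPI_15": 2},
    {"KPI_2": 4, "KPI_13": 1, "KPI_8": 2},
    {"KPI_1": 2, "KPI_8": 3},
]

per_company = []
for weights in companies:
    s = pd.Series(weights, dtype=float)
    by_cluster = s.groupby(s.index.map(kpi_to_cluster))
    per_company.append(by_cluster.mean())  # SI2: mean over *used* KPIs only
    # SI1 would take by_cluster.sum() instead of the mean.

# Average across companies (unused clusters stay NaN and are skipped),
# then sort to obtain the ranking, highest value first.
si2 = pd.concat(per_company, axis=1).mean(axis=1)
print(si2.sort_values(ascending=False))
```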
5.4.2.4 KPIs Assessing Goal 2
The goal “Create and protect intellectual property” (referred to as goal 2) has been selected by 96 of the 103 companies as being present in their goal system in the current measurement period. KPI frequency for goal 2 can be observed in Table 5.11.
Table 5.11 Frequency of use of KPIs assessing goal 2 (Create and protect intellectual property)
KPI assessing the ... | KPI | Used % | Not used %
... volume of patents granted | KPI_15 | 77.1 | 22.9
... volume of potentially protectable inventions submitted into the IP pipeline | KPI_13 | 76.0 | 24.0
... alignment of research activities with the IP strategy of the company | KPI_17 | 72.9 | 27.1
... volume of first filings out of the IP pipeline | KPI_14 | 71.9 | 28.1
... (achieved) business impact of an idea in terms of its economic value | KPI_8 | 70.8 | 29.2
... economic value of the transfer activity or transferred research results | KPI_5 | 69.8 | 30.2
... volume of technology transfer activities to development or other (business) units | KPI_1 | 68.8 | 31.3
... quality of granted patents | KPI_16 | 66.7 | 33.3
... volume of publications | KPI_30 | 66.7 | 33.3
... significance of the transferred research results for the receiving unit | KPI_4 | 65.6 | 34.4
... quality of the research results transferred | KPI_2 | 64.6 | 35.4
... quality of the transfer process or transfer activities | KPI_3 | 61.5 | 38.5
... quality of publications | KPI_31 | 57.3 | 42.7
... intensity of input into the innovation process | KPI_6 | 55.2 | 44.8
... (weighted) # of ideas moved to a certain or the next phase of the innovation process | KPI_7 | 46.9 | 53.1
The three KPIs with the maximum values in the table above are:
• volume of patents granted (KPI_15);
• volume of potentially protectable inventions submitted into the IP pipeline (KPI_13);
• alignment of research activities with the IP strategy of the company (KPI_17).
These three most frequently used KPIs all belong to one performance cluster, IP. We examined goal 2 by abstracting the KPIs to the level of performance clusters, as we did for goal 1. One additional KPI was added; it is related to the performance cluster PSC. This conforms to the findings from the case studies shown below (Table 5.12).
Table 5.12 Selected performance clusters in goal 2 according to SI2 (survey)
Performance cluster | IP | TT | FBO | PUB | PSC
Average across companies of average weightings of selected KPIs per cluster | 2.09 | 1.90 | 1.72 | 0.75 | 0.01
Ranking | 1 | 2 | 3 | 4 | 5
In the next step, shown in Table 5.13, we compare the survey ranking result with the findings of the case studies. Note that C_2 has two organizational goals related to the generic goal 2; in both cases the company assesses these only with KPI classes from the performance cluster IP. To remind the reader of the KPI classes that are associated with the IP cluster, we specify them here:
Input volume: KPIs measuring the volume of potentially protectable inventions submitted into the IP pipeline (e.g. number of invention disclosures)
Output volume: KPIs measuring the volume of first filings out of the IP pipeline (e.g. number of first filings, defensive publications, trade secrets)
Outcome volume: KPIs measuring the volume of patents granted (e.g. number of patents granted to the company)
Input quality: KPIs measuring the alignment of research activities with the IP strategy of the company (e.g. number of submitted inventions addressing the IP strategy)
Outcome quality: KPIs measuring the quality of granted patents (e.g. economic value of the granted patent)
The three clusters RPM, CPC and PSC can be neglected for this analysis because of their low importance, so we concentrate on the first four performance clusters: IP, TT, FBO and PUB. As shown in Table 5.13, the KPI classes from the IP performance cluster are represented most strongly when assessing goal 2. When comparing this case study finding with the survey results, we can conclude that the data is consistent: the 96 companies with this goal stated that they predominantly use KPI classes assigned to the IP cluster, which is confirmed by the calculation in Table 5.12. Summarizing the examination of the generic goal 2, "Create and protect intellectual property": the quantitative results reflect our findings from the qualitative analysis in terms of presence and importance of KPI classes and performance clusters. The most popular KPI classes come from the performance clusters IP and TT.
Table 5.13 Goals with IP cluster ranked highest (case study); weightings with ranks in parentheses
Case study | Name of goal | IP | Further clusters
C_1 | Adequate protection of own technology and awareness of the IP of others | 70% (1) | one further cluster at 30% (2)
C_2 | Protect strategic areas via patents and trade marks | 100% (1) | –
C_2 | To become well known in the scientific community on an international basis | 100% (1) | –
C_3 | Identify patentable technology and move it through the patent process | 60% (1) | TT: 40% (2)
C_5 | Intellectual property | 70% (1) | six further clusters at 10% (2), 9% (3), 4% (4), 3% (5), 3% (6) and 2% (7)
C_6 | Create IP, which is a precondition for the usage of research results in future products, to ensure commercially exploitable benefits | 100% (1) | –
C_8 | Create and secure IP to ensure commercially exploitable benefits and maximize competitive advantage | 100% (1) | –
5.4.2.5 KPIs Assessing Goal 3
Goal 3, "Improve the internal and external image of the research department and/or the company", is probably the fuzziest and therefore the most difficult goal to assess. Only 82 of the 106 companies stated that they have this goal, which makes it the least used of all the goals. The usage frequency of the KPIs selected to assess this goal is shown in Table 5.14. The first and third most frequently used KPIs belong to the performance cluster IMG. Table 5.15 shows that the KPI classes associated with the IMG performance cluster are ranked first, followed by TT second, IP third, PUB fourth and PSC last. When analyzing the case study data, we can clearly see that KPI classes from the IMG cluster are prevalent for assessing goal 3. The case study goals that ranked the IMG cluster highest are presented in Table 5.16, which shows that the KPI classes from the IMG cluster have the highest weighting.
Table 5.14 Frequency of use of KPIs assessing goal 3 (Improve the internal and external image of the research department and/or the company)
KPI assessing the ... | KPI | Used % | Not used %
... external perception or external recognitions | KPI_29 | 87.8 | 12.2
... volume of publications | KPI_30 | 78.0 | 22.0
... internal perception or internal recognitions | KPI_28 | 76.8 | 23.2
... participation in scientific events beyond publications | KPI_36 | 74.4 | 25.6
... quality of publications | KPI_31 | 72.0 | 28.0
... volume of technology transfer activities to development or other (business) units | KPI_1 | 72.0 | 28.0
... quality of the research results transferred | KPI_2 | 72.0 | 28.0
... participation in advisory boards or related bodies, honorary titles or memberships, or visiting scholarships | KPI_37 | 70.7 | 29.3
... significance of the transferred research results for the receiving unit | KPI_4 | 69.5 | 30.5
... economic value of the transfer activity or transferred research results | KPI_5 | 64.6 | 35.4
... alignment of research activities with the IP strategy of the company | KPI_17 | 63.4 | 36.6
... quality of the transfer process or transfer activities | KPI_3 | 61.0 | 39.0
... volume of patents granted | KPI_15 | 57.3 | 42.7
... volume of potentially protectable inventions submitted into the IP pipeline | KPI_13 | 51.2 | 48.8
... volume of first filings out of the IP pipeline | KPI_14 | 48.8 | 51.2
... quality of granted patents | KPI_16 | 45.1 | 54.9
Table 5.15 Selected performance clusters in goal 3 according to SI2 (survey)
Performance cluster | IMG | TT | IP | PUB | PSC
Average across companies of average weightings of selected KPIs per cluster | 1.96 | 1.94 | 1.65 | 1.54 | 1.50
Ranking | 1 | 2 | 3 | 4 | 5
The KPI classes assigned to the IMG cluster are:
Internal visibility: KPIs assessing the internal perception or measuring internal recognitions (e.g. # of visits of key decision makers, subjective evaluation of collaborations with internal groups (BUs, PDs, LOBs), number of contributions to internal conferences or showcases)
External visibility: KPIs assessing the external perception or measuring external recognitions (e.g. # of awards, speeches, invited keynotes, medals, prizes, fairs and exhibitions)
There are slight discrepancies between the survey and case study results in terms of the TT, IP and PUB clusters. Three companies (C_1, C_3 and C_8) ranked the PUB cluster second, while the survey shows PUB fourth and TT second. This is explained by the fact that companies assess internal image generation with the KPI classes related to technology transfer.
Table 5.16 Goals with IMG cluster ranked highest (case study); weightings with ranks in parentheses
Case study | Name of goal | IMG | Further clusters
C_1 | Visibility inside and outside the company | 70% (1) | PUB: 30% (2)
C_3 | Being the best in and coining the scientific and technology community in selected areas | 57% (1) | PUB: 18% (2); TT: 15% (3); IP: 10% (4)
C_5 | Robust university engagements including joint company and university research centers | 25% (1) | further weights of 20% (2), 10% (4) and 4% (5), among others, across TT, IP, PUB and PSC
C_8 | Create a positive image (internal & external) to ensure successful interactions with all partners | 60% (1) | PUB: 20% (2); remaining weight on further clusters
Regarding the IP cluster, the ranking in the case studies was lower (fourth) than in the survey (third). This shows that many companies use KPI classes from the IP cluster, such as the number of patents, to assess their image generation. Altogether, the discrepancies are minimal and are visible only in the rankings. In terms of the presence of performance clusters, the data from both sources is consistent.
5.4.2.6 KPIs Assessing Goal 4
The preparation of the next generation of the company's portfolio of products/services is anchored in the organizational goal “Generate and evaluate future business opportunities”, referred to as goal 4. It stands to reason that this goal was the most popular one: it was selected as being in use by 101 of the 106 companies. The KPI classes used to assess this goal are shown in Table 5.17. As demonstrated there, the three KPIs with the highest usage rate are:
• (Achieved) business impact of an idea in terms of its economic value
• Volume of technology transfer activities to development or other (business) units
• Significance of the transferred research results for the receiving unit
Table 5.17 Frequency of use of KPIs assessing goal 4
Goal 4: Generate and evaluate future business opportunities

KPI assessing the ... | KPI | Used % | Not used %
... (achieved) business impact of an idea in terms of its economic value | KPI_8 | 83.2 | 16.8
... volume of technology transfer activities to development or other (business) units | KPI_1 | 77.2 | 22.8
... significance of the transferred research results for the receiving unit | KPI_4 | 74.3 | 25.7
... economic value of the transfer activity or transferred research results | KPI_5 | 73.3 | 26.7
... intensity of input into the innovation process | KPI_6 | 67.3 | 32.7
... quality of the research results transferred | KPI_2 | 66.3 | 33.7
... (weighted) # of ideas moved to a certain or the next phase of the innovation process | KPI_7 | 65.3 | 34.7
... alignment of research activities with the IP strategy of the company | KPI_17 | 62.4 | 37.6
... quality of the transfer process or transfer activities | KPI_3 | 56.4 | 43.6
... volume of patents granted | KPI_15 | 55.4 | 44.6
... quality of granted patents | KPI_16 | 51.5 | 48.5
... volume of potentially protectable inventions submitted into IP pipeline | KPI_13 | 50.5 | 49.5
... volume of first filings out of the IP pipeline | KPI_14 | 49.5 | 50.5
Table 5.18 Selected performance clusters in goal 4 according to SI2 (survey)

Goal 4 performance clusters | FBO | TT | IP
Average across companies of average weightings of selected KPIs per cluster | 2.10 | 2.03 | 1.62
Ranking | 1 | 2 | 3
The first KPI is associated with the performance cluster FBO, the second and third with TT. When analyzing the survey data on the performance cluster level, we obtain the cluster ranking shown in Table 5.18. There is only a small gap between the values for the FBO performance cluster (Ø 2.10), ranked first, and the TT cluster (Ø 2.03), ranked second. The performance cluster IP is ranked third at a greater distance. To compare this (survey) ranking with the case studies, we present the case study goals that ranked the FBO cluster highest (Table 5.19). Concerning the ranking, and in contrast to the other goals, we note that only the first (FBO) performance cluster has a ranking consistent between the case study and survey results. Regarding the presence of performance clusters, the TT and IP clusters in the case studies conform with the survey results. Furthermore, the performance clusters OPEX and RPM are prominently represented in the case studies, but not in the survey, where no additional KPIs from these clusters were suggested.
Table 5.19 Goals with FBO cluster ranked highest (case study)

Case study | Name of goal
C_1 | Continuous contribution of top ideas to assure a full and valuable R&D project pipeline (FBO 100%, rank 1)
C_3 | Identify and explore emerging technologies that are disruptive to EMC and/or their competitors
C_3 | Proof of concept (PoC) development
C_3 | Technology evaluation
C_4 | Create assets and values that boost company's business
C_8 | Technology creation
For the FBO performance cluster we specify the KPI classes:

Input: KPIs measuring the input intensity into the innovation process (e.g. # of ideas, # of ideas per researcher)
Throughput: KPIs measuring the # of ideas moved to a certain phase of the innovation process, or the (weighted) sum of ideas moved to the next phase
Outcome: KPIs measuring the (achieved) business impact of an idea by its economic value ($, €)
To conclude, we established that the FBO cluster contributes very strongly and consistently to goal 4 “Generate and evaluate future business opportunities”. Furthermore, we established that the TT and IP clusters also contribute consistently, but with less strength.
5.4.2.7 KPIs Assessing Goal 5
The goal “Recruit and develop talent” (goal 5) was selected as being in use by 89 of the 106 companies. Table 5.20 shows the survey results regarding the KPI classes which survey participants selected as being used to assess this goal. Looking at the three most popular KPIs across the participating companies, we find that the first two are associated with the TP performance cluster; the third belongs to OPEX. Table 5.21 shows the performance cluster ranking resulting from the survey. In the survey, two companies (of the 89) added additional KPIs, which are related to the performance cluster IMG.
Table 5.20 Frequency of use of KPIs assessing goal 5
Goal 5: Recruit and develop talent

KPI assessing the ... | KPI | Used % | Not used %
... volume/quality of people hired into the research organization | KPI_24 | 87.6 | 12.4
... quality of the people in the research department | KPI_27 | 86.5 | 13.5
... quality of project management | KPI_20 | 83.1 | 16.9
... quality of the working environment | KPI_21 | 83.1 | 16.9
... adherence to budget | KPI_18 | 80.9 | 19.1
... adherence to timelines (phases, gates) | KPI_19 | 80.9 | 19.1
... volume/quality of people leaving the research organization to move to other parts of the company/ecosystem | KPI_25 | 74.2 | 25.8
... quality of the research results transferred | KPI_2 | 73.0 | 27.0
... quality of collaboration with partners and customers | KPI_33 | 69.7 | 30.3
... volume/quality of development measures undertaken | KPI_26 | 68.5 | 31.5
... economic value of the transfer activity or transferred research results | KPI_5 | 67.4 | 32.6
... volume of collaboration with partners and customers | KPI_32 | 65.2 | 34.8
... volume of technology transfer activities to development or other (business) units | KPI_1 | 65.2 | 34.8
... significance of the transferred research results for the receiving unit | KPI_4 | 65.2 | 34.8
... quality of the risk management in place | KPI_22 | 59.6 | 40.4
... volume of (external) investments into the research organization (beyond seed funding) | KPI_23 | 59.6 | 40.4
... quality of the transfer process or transfer activities | KPI_3 | 58.4 | 41.6
Table 5.21 Selected performance clusters in goal 5 according to SI2 (survey)

Goal 5 performance clusters | TP | OPEX | TT | CPC | IMG
Average across companies of average weightings of selected KPIs per cluster | 2.15 | 2.03 | 1.83 | 1.66 | 0.07
Ranking | 1 | 2 | 3 | 4 | 5
To compare this (survey) ranking with the case studies, Table 5.22 presents the case study goals that ranked the TP cluster highest. For the most important cluster (TP), the ranking data of the survey corresponds with the ranking of the case studies. The TT performance cluster is present in both the survey and the case studies, although with different rankings. There are discrepancies in the remaining performance clusters OPEX, CPC, IMG and CA. It should be noted, though, that in the survey the OPEX cluster was ranked second, which can be explained by the fact that companies assess their talent pool using measurement criteria that are closely connected to people aspects, such as budget, timelines, project/risk management qualities, etc.
Table 5.22 Goals with TP cluster ranked highest (case study)

Case study | Name of goal | TP weighting | Ranking
C_1 | Top people: best-suited people for defined profiles | 70% | 1
C_1 | Market savvy | 50% | 1
C_8 | Discover, develop and retain talented resources for the company | 90% | 1
The following KPI classes are associated with the TP cluster:

Inflow: KPIs measuring the volume/quality of people hired into the research organization (e.g. number of externally attracted senior scientists, matching of the skill set of the hired people with the profiles they are hired for)
Outflow: KPIs measuring the volume/quality of people leaving the research organization into other parts of the company/ecosystem (e.g. number of top talents leaving to other parts of the company)
Talent development: KPIs measuring the volume/quality of development measures undertaken (e.g. number of job rotations, number of PhDs finished)
Talent assessment: KPIs assessing the quality of the people in the research department (e.g. subjective assessments through partners or stakeholders, team track records, 360° feedback)
In conclusion, there is positive evidence that the KPI classes associated with the TP performance cluster are used when assessing goal 5 “Recruit and develop talent”. The presence of the KPI classes associated with the TT performance cluster can also be detected when assessing this goal.
5.4.2.8 KPIs Assessing Goal 6
Goal 6 is “Achieve a high standard of operational excellence”. Ninety of the 106 companies selected this goal as being present in their goal system in the current measurement period. Table 5.23 shows the usage frequency of KPIs for goal 6. The KPIs selected by survey participants as most frequently used to assess goal 6 are:
• Adherence to budget
• Adherence to timelines (phases, gates)
• Quality of project management
Table 5.23 Frequency of use of KPIs assessing goal 6
Goal 6: Achieve a high standard of operational excellence

KPI assessing the ... | KPI | Used % | Not used %
... adherence to budget | KPI_18 | 92.2 | 7.8
... adherence to timelines (phases, gates) | KPI_19 | 90.0 | 10.0
... quality of project management | KPI_20 | 84.4 | 15.6
... quality of the people in the research department | KPI_27 | 73.3 | 26.7
... quality of the risk management in place | KPI_22 | 71.1 | 28.9
... volume of collaboration with partners and customers | KPI_32 | 71.1 | 28.9
... quality of the working environment | KPI_21 | 70.0 | 30.0
... quality of collaboration with partners and customers | KPI_33 | 66.7 | 33.3
... (achieved) business impact of an idea in terms of its economic value | KPI_8 | 66.7 | 33.3
... volume/quality of people hired into the research organization | KPI_24 | 63.3 | 36.7
... volume/quality of people leaving the research organization to move to other parts of the company/ecosystem | KPI_25 | 62.2 | 37.8
... volume/quality of development measures undertaken | KPI_26 | 61.1 | 38.9
... volume of (external) investments into the research organization (beyond seed funding) | KPI_23 | 58.9 | 41.1
... intensity of input into the innovation process | KPI_6 | 58.9 | 41.1
... (weighted) # of ideas moved to a certain or the next phase of the innovation process | KPI_7 | 56.7 | 43.3
... volume of collaboration with academia | KPI_34 | 52.2 | 47.8
... quality of collaboration with academia | KPI_35 | 48.9 | 51.1
These three KPIs are associated with the performance cluster OPEX. To understand which KPI classes, in terms of performance clusters, are used when assessing goal 6, we calculate the average weightings of KPIs across all companies with goal 6 (Table 5.24).

Table 5.24 Selected performance clusters in goal 6 according to SI2 (survey)

Goal 6 performance clusters | OPEX | FBO | TP | CPC | CA
Average across companies of average weightings of selected KPIs per cluster | 2.15 | 1.75 | 1.63 | 1.58 | 1.11
Ranking | 1 | 2 | 3 | 4 | 5
When analyzing the survey data on the performance cluster level we obtain the following cluster ranking results: KPI classes from five performance clusters are used to assess goal 6. Ranking of the performance clusters shows OPEX first, FBO second, followed by TP (third), CPC (fourth) and CA (fifth).
To compare this (survey) ranking with the case studies, Table 5.25 presents the case study goals that ranked the OPEX cluster highest.

Table 5.25 Goals with OPEX cluster ranked highest (case study)

Case study | Name of goal | OPEX weighting | Ranking
C_1 | Professional execution of projects | 100% | 1
C_3 | Operational excellence | 90% | 1
C_6 | Collaborative Research | 50% | 1
C_6 | Ensure efficiency, effectiveness and success of the conducted research | 50% | 1
C_6 | Growth | 50% | 1
C_7 | Create and maintain an exciting environment to attract international top talent; be a source of highly skilled people | 83% | 1
C_7 | Strategic Partnerships | 60% | 1
C_7 | Ensure & maintain efficiency and effectiveness (project management, customer relationship management) | 44% | 1
C_8 | Ensure efficiency and effectiveness, which is an obligation in today's business processes | 45% | 1
Nine organizational goals have been identified within the case studies that use KPI classes associated with the OPEX performance cluster with the highest weighting. Of note is that some companies had multiple goals (C_6 and C_7 each with three goals) where KPI classes associated with the OPEX cluster had the highest weight. Both the case studies and the survey provide a consistent view on the ranking and presence of KPI classes within the performance cluster OPEX when
assessing goal 6. Here we specify the KPI classes that belong to the OPEX performance cluster:

Budget: KPIs measuring the adherence to budget (e.g. difference of planned budget of each line item to actual budget [least deviation is rated to be the best])
Timeliness: KPIs assessing the adherence to timelines (phases, gates) (e.g. successful [in-time] termination at a particular phase/gate)
Project mgt quality: KPIs assessing the quality of project management (e.g. data compliance at an agreed deadline)
Working environment: KPIs measuring the quality of the working environment (e.g. scoring in the employee satisfaction survey)
Risk mgmt: KPIs assessing the quality of the risk management in place (e.g. percentage to which the potential damage has been lowered by adequate measures against the bottom-line)
General risk sharing: KPIs assessing the volume of external investments into research (e.g. volume of external investment [in €], # of participations in externally funded programs)
For the remaining performance clusters the ranking cannot be confirmed when comparing the survey and case study results. In terms of presence, however, performance clusters from the survey are consistent with the data of case studies.
5.4.2.9 KPIs Assessing Goal 7
The next organizational goal that we examine is goal 7 “Establish and maintain strategic partnerships and/or collaborative research”. Ninety-two of the 106 companies selected this goal as currently being assessed. Table 5.26 shows the KPIs that survey respondents selected as being used most frequently to assess goal 7. These are:
• Volume of collaboration with partners and customers
• Quality of collaboration with partners and customers
• Adherence to budget
The first two KPIs are associated with the performance cluster CPC, the third with OPEX. Table 5.27 shows which KPI classes, in terms of performance clusters, are used when assessing goal 7: according to the survey, CPC is ranked first, OPEX second, followed by IMG, CA and PSC in third, fourth and fifth position. Table 5.28 shows the case study goals that are assessed using KPI classes belonging to the CPC performance cluster; three such goals, each ranking the CPC cluster first, have been identified within the case studies. When comparing both rankings, the data shows consistency for the first three clusters regarding their ranking, and therefore their presence, when assessing goal 7: CPC ranked first, OPEX second, IMG third.
Table 5.26 Frequency of use of KPIs assessing goal 7
Goal 7: Establish and maintain strategic partnerships and/or collaborative research

KPI assessing the ... | KPI | Used % | Not used %
... volume of collaboration with partners and customers | KPI_32 | 85.9 | 14.1
... quality of collaboration with partners and customers | KPI_33 | 84.8 | 15.2
... adherence to budget | KPI_18 | 78.3 | 21.7
... quality of project management | KPI_20 | 77.2 | 22.8
... external perception or external recognitions | KPI_29 | 75.0 | 25.0
... adherence to timelines (phases, gates) | KPI_19 | 73.9 | 26.1
... volume of collaboration with academia | KPI_34 | 69.6 | 30.4
... quality of collaboration with academia | KPI_35 | 66.3 | 33.7
... internal perception or internal recognitions | KPI_28 | 66.3 | 33.7
... participation in scientific events beyond publications | KPI_36 | 65.2 | 34.8
... volume of (external) investments into the research organization (beyond seed funding) | KPI_23 | 63.0 | 37.0
... quality of the working environment | KPI_21 | 62.0 | 38.0
... participation in advisory boards or related bodies, honorary titles or memberships, or visiting scholarships | KPI_37 | 60.9 | 39.1
... quality of the risk management in place | KPI_22 | 57.6 | 42.4
Table 5.27 Selected performance clusters in goal 7 according to SI2 (survey)

Goal 7 performance clusters | CPC | OPEX | IMG | CA | PSC
Average across companies of average weightings of selected KPIs per cluster | 2.09 | 1.93 | 1.70 | 1.51 | 1.38
Ranking | 1 | 2 | 3 | 4 | 5
Table 5.28 Goals with CPC cluster ranked highest (case study)

Case study | Name of goal | CPC weighting | Ranking
C_5 | Industry Influence | 45% | 1
C_5 | Collaborative Research | 50% | 1
C_6 | Create reputation (internal & external) to establish company and the research center as a valued collaboration partner | 67% | 1
The following KPI classes are associated with the CPC cluster (OPEX and IMG have already been listed above):

Intensity: KPIs assessing the volume of collaboration with partners and customers (e.g. number of projects vs. total in which an external customer and/or partner in a business relation is involved, number of joint research results like showcases, prototypes)
Quality: KPIs assessing the quality of collaboration with partners and customers (e.g. subjective assessment by customers and/or partners by means of a survey)
It is curious to note that the performance cluster CA, which is ranked fourth based on the survey results, could not be confirmed when comparing with the case study results. From the content perspective this cluster is very close to the CPC cluster, as it addresses the collaboration with universities. Therefore the survey result is reasonable and should be taken into consideration when choosing KPI classes to assess the collaboration aspect within the goal.
5.4.2.10 KPIs Assessing Goal 8
The last of the eight generic organizational goals provided in the survey is goal 8 “Drive technology innovation and technology leadership”. This goal was selected by 91 companies as in use. Table 5.29 shows the usage frequency of KPIs for goal 8. The top three KPIs used to assess goal 8 are:
• Quality of the research results transferred
• (Achieved) business impact of an idea in terms of its economic value
• Roadmaps to achieve the visions and their quality
At first glance this selection can be perceived as ambiguous, as the KPIs belong to three different clusters: the first KPI is associated with the cluster TT, the second with the FBO cluster and the third with the cluster RPM. To understand which KPI classes, in terms of performance clusters, were selected in the survey to assess goal 8, we calculate the average weightings of KPIs across all companies which currently use goal 8 (Table 5.30). Analyzing the survey data, the following cluster ranking can be seen when abstracting from the KPI level to the performance cluster level: the TT cluster is ranked first, RPM second, followed at a very small distance by the FBO cluster in third place; the IP and IMG clusters conclude the ranking in fourth and fifth place. To compare this (survey) ranking with the case studies, we need to understand which goals in the case studies ranked these clusters highest (Table 5.31). Four organizational goals could be identified within the case studies which ranked the KPI classes associated with the TT and RPM clusters highest. The KPI classes associated with the TT cluster were listed in the evaluation of goal 1; here we list the KPI classes that are associated with the RPM cluster:
Table 5.29 Frequency of use of KPIs assessing goal 8
Goal 8: Drive technology innovation and technology leadership

KPI assessing the ... | KPI | Used % | Not used %
... quality of the research results transferred | KPI_2 | 83.5 | 16.5
... (achieved) business impact of an idea in terms of its economic value | KPI_8 | 82.4 | 17.6
... roadmaps to achieve the visions and their quality | KPI_11 | 82.4 | 17.6
... significance of the transferred research results for the receiving unit | KPI_4 | 80.2 | 19.8
... implementations of the roadmaps and the quality of contributions from research projects | KPI_12 | 80.2 | 19.8
... economic value of the transfer activity or transferred research results | KPI_5 | 78.0 | 22.0
... volume of technology transfer activities to development or other (business) units | KPI_1 | 76.9 | 23.1
... structure and quality of the research (i.e., certain technology areas) | KPI_9 | 75.8 | 24.2
... alignment of research activities with the IP strategy of the company | KPI_17 | 70.3 | 29.7
... visions related to the individual parts of the research portfolio and their quality | KPI_10 | 69.2 | 30.8
... quality of the transfer process or transfer activities | KPI_3 | 64.8 | 35.2
... volume of potentially protectable inventions submitted into IP pipeline | KPI_13 | 64.8 | 35.2
... quality of granted patents | KPI_16 | 64.8 | 35.2
... volume of patents granted | KPI_15 | 63.7 | 36.3
... intensity of input into the innovation process | KPI_6 | 62.6 | 37.4
... volume of first filings out of the IP pipeline | KPI_14 | 57.1 | 42.9
... (weighted) # of ideas moved to a certain or the next phase of the innovation process | KPI_7 | 51.6 | 48.4
Table 5.30 Selected performance clusters in goal 8 according to SI2 (survey)

Goal 8 performance clusters | TT | RPM | FBO | IP | IMG
Average across companies of average weightings of selected KPIs per cluster | 2.26 | 2.05 | 2.04 | 1.68 | 0.03
Ranking | 1 | 2 | 3 | 4 | 5

The KPI classes associated with the RPM cluster are:

Structuring: KPIs assessing the quality of the structure in terms of content (e.g. regarding certain technology areas) by internal and/or external reviewers (worst case/simplest way: the structure is in place or is not)
Vision: KPIs assessing the quality of the vision by internal or external reviewers (worst case/simplest way: in place or not in place)
Roadmap: KPIs assessing the quality of the roadmap (e.g. a semi-ordered list of problems that need to be solved in order to reach the vision) by internal or external reviewers (worst case/simplest way: in place or not in place)
Table 5.31 Goals with TT and RPM cluster ranked highest (case study)

Case study | Name of goal
C_3 | Contribute to critical standards (RPM 60%, rank 1; TT 40%, rank 2)
C_4 | Being the first on the market with technology innovations, and being the best in the field
C_6 | Growth
C_8 | Technology creation
Implementation: KPIs assessing the quality of contributions of projects or activities with respect to the research roadmap (e.g. scientific accomplishments, technical achievements, technical impact) by internal or external reviewers
The ranking of the case studies is reflected in the ranking of the survey. It can be concluded that all three performance clusters TT, RPM and FBO seem to be important for assessing goal 8. The remaining performance clusters, IP and IMG, could not be confirmed in terms of ranking. However, from the content perspective it makes sense that the technology innovation and technology leadership aspects within the goal are assessed with KPI classes from the intellectual property (IP) and image (IMG) clusters.
5.5 Conclusion
Due to people's tendency to assign more KPIs per goal in the survey, SI-1 is less convenient for the analysis of survey data. We therefore accounted for this effect by applying SI-2, which compensates for two shortcomings of the survey:
• The higher number of KPIs per goal compared to the case study data; and
• The fact that the absolute weighting required for the analysis with SI-1 is only of limited or partial significance.
Therefore, we can conclude that both similarity indicators SI-1 and SI-2, which provide us with similarity measures of performance cluster spectra, give us enough evidence to state that our definition28 of the similarity of goals is a valid approach to compare them. In this chapter we examined the data gathered from the large quantitative survey and compared it with the findings obtained from the qualitative case study analyses. From that, we first were able to find a set of common organizational goals used in industrial applied research organizations.
28 See Sect. 3.4, DEF_G.
With this, we are answering the first part of the second sentence in RQ1 posed in Chap. 2: Are there common or similar organizational goals? Both the qualitative case studies and the quantitative survey affirm that industrial research organizations seem to have the following set of common organizational goals:
• Alignment with and transfer to internal development and other (business) units
• Create and protect intellectual property
• Improve the internal and external image of the research department and/or the company
• Generate and evaluate future business opportunities
• Recruit and develop excellent talent
• Achieve a high standard of operational excellence
• Establish and maintain strategic partnerships and/or collaborative research
• Drive technology innovation and technology leadership
Second, we were able to find evidence of a set of 37 KPI classes29 that are used to assess organizational goals. With that, we are answering the second part of the second sentence in RQ1. With these quantitative results we are able to confirm the findings derived from the qualitative analysis of our case studies, which strongly endorse the performance cluster concept and KPI classes presented in Chap. 3. The relationship between KPIs summarized in KPI classes, performance clusters and the eight generic organizational research goals has been shown with regard to:
• Which KPI classes are used to assess a particular research goal,
• Which KPIs are used more frequently,
• What is the presence and ranking of the performance clusters to which the KPI classes are associated.
The ranking reflects the average weighting in terms of importance of an individual KPI for the assessment of a goal. In the next chapter we synthesize all our findings into a holistic model and suggest a new systematic way to assess the organizational performance of an industrial research department.
29 Please refer to Table 3.7 in Chap. 3.
Chapter 6
Towards a Systematic Performance Management Approach for Industrial Research in the ICT Industry
Abstract This chapter synthesizes all our findings into a holistic model and suggests a new systematic way to assess the organizational performance of an industrial research department. This chapter responds to the overall research question and is dedicated to our performance management model. Its five levels, and their relationships, are introduced step by step. Furthermore, recommendations on how to design a Performance Management System (PMgS) are provided, which take into consideration the four elements of performance management: planning, measurement, analysis and review/improvement. Lastly, the general requirements for the design of a PMgS that take into account the specific peculiarities of industrial research organizations are revisited, and our PMgS is assessed against them. Keywords Research Performance Management System (PMgS) • Performance Measurement System (PMsS) • A Five-level Performance Management Model • KPI level • Activity Level • Goal Level • Recommendations • Process options to design a PMgS • Guidelines for the development of a PMgS • Top-down goal-setting process • Middle-out goal-setting process • Requirements catalogue for the introduced PMgS
6.1 Introduction
In the previous two chapters we explored the individual components of our research: goals and KPIs. We built categories to classify each of these components: performance clusters and KPI classes. Based on our case study findings, as well as on the findings of the large quantitative survey, we established that these categories could be helpful in organizing, designing or reviewing a performance management system for an industrial research department. However, practitioners dealing with performance management in organizations would benefit further from our findings if we are able to derive clear guidelines on how the individual components fit
together, what the dependencies and relationships between each of the components are, and how to proceed in a given situation. In this section we combine the individual parts of our analysis into a holistic, activity-based view of performance management. As a result, we obtain a five-level performance management model addressing different hierarchical levels of a company and representing different levels of granularity in terms of breaking down goals at the company level or KPIs at the department level. Furthermore, we develop guidelines for practitioners to assess options and derive research KPIs from the company goals, considering top-down and middle-out approaches. Finally, this chapter evaluates our model against the list of requirements for PMgS presented in Sect. 2.1.4. The main result of this synthesis chapter is a practical PMgS process that provides a (partial) solution for an established performance management need and pays heed to the context in which it is applied.
6.2 Research Performance Management System
The definition of “performance”, given in Chap. 2, hints at the idea of future value being reflected in the capabilities of an organization. Goals are defined by managers as statements about the future; KPIs, however, capture information about the past. The contradiction that KPI data related to the past is used to assess future needs has to be resolved in any managerial system.1 Lebas states that the accumulated data about the past only becomes information when it is used in a decision-making model or when it is transformed into some kind of predictive parameter to be used in decision-making.2 Zammuto highlights that evaluations of past performance are used to direct current and future performance and argues that the performance management systems employed today do not provide managers with the quality of information necessary to guide present and future organizational action. Both authors highlight the problems associated with using past data to assess the future. Zammuto claims that it is the goal-based approach that is the primary evaluative approach employed by organizations today. The goal-based approach typically represents one set of preferences for performance: usually those of organizational managers. He accentuates that preferences of other relevant groups are not taken into consideration. Furthermore, the author states that with respect to the impact of performance, the attainment of managerial goals is not the only relevant set of preferences. According to Zammuto, organizational performance is judged in many ways by the organization’s reference groups regardless of whether these judgments are
1 Zammuto (1982), p. 17; Lebas (1995), p. 26.
2 Lebas (1995), p. 25.
represented in any formal evaluation framework. As reference groups, he identifies concrete linkages between an organization and the larger society. The research environment, in particular, combines different groups that collaborate on topics ignoring organizational boundaries. Their judgments of performance can be central to the concept of justification of research itself because it is from those judgments that justification is often derived. Zammuto furthermore argues that if such judgments are not included in the evaluation systems, the preferences of these reference groups will be ignored in directing current and future organizational performance. “Perceived constituent preferences will be taken into account to the degree that managers feel they are important in attaining managerial goals. This might be sufficient if performance were taking place in a static social environment and constituent preferences were predictable and stable over time. But ... society has undergone a number of fundamental changes which are reflected in qualitatively shifting constituent preferences for performance. Hence, a major problem with goal-based evaluation is that they employ a static model of organizational performance when performance takes place in a dynamic environment”.3

The second problem that Zammuto broaches is the fact that current evaluative systems do not assess performance in the context in which it occurs.4 He states that without an understanding of the social and environmental constraints under which an organization is operating, it is not possible to obtain an accurate assessment of the impact of performance. “Social and environmental constraints act to limit what is possible in terms of performance at any given moment. Performance which may have a desirable impact at one point of time may, at another, have undesirable effects because of changes in the context in which it has occurred”.5

Lebas reminds us of the problem of extrapolation of past data into the future. He states that such information, based on past data, is only relevant as much as it helps the user understand the potential for success in the future. The extrapolation requires stability of the relations that created the data in the first place, stable causal models for example. In our case, however, the stability itself is a contradiction considering the nature and environment of a research department.
6.2.1 Suggested Model for a Research PMgS – Levels and Relations
The performance management model we are suggesting and presenting here considers the issues discussed above and especially the list of requirements compiled in Chap. 2.
3 Zammuto (1982), p. 18.
4 Compare our discussion highlighting the importance of “context” for performance management systems especially in regard to industrial applied research departments in Chap. 2.
5 Zammuto (1982), p. 19.
The described model can be seen as comprising the relations between the input, activities, outputs and outcomes of a research department. Although the causality of these artifacts is difficult to verify and highly controversial, the emergence of the outputs can nevertheless be tracked within the presented model via the performance cluster concept. In the following we describe the five levels (cf. Fig. 6.1) and the relations between them:

Fig. 6.1 A five-level Performance Management Model (from top to bottom: Company Goals, Research Goals, Performance Clusters, KPI Classes, Concrete KPIs)
1. Level 1 represents the company goals;
2. Level 2 embodies the research goals, i.e. the goals of the research department;
3. Level 3 consists of the performance clusters, reflecting sets of activities that are typical for industrial research organizations;
4. Level 4 comprises KPI classes abstracted from sets of concrete KPIs;
5. Level 5 consists of concrete KPIs.
Performance measurement can be likened to a continuous weighing-up of a defined organizational business goal on the one side and the degree of its achievement on the other side. Fundamentally, goals and the goal-setting process can be viewed at three different levels, as shown in Sect. 2.2.4: the strategic level (where to go?), the tactical level (what to do?) and the operational level (how to do it?). These levels can be mapped to the five levels of performance management as follows: levels 1 and 2 embody the strategic level. In order to achieve a goal on the strategic level, the right mix of activities needs to be in place at the tactical level. The tactical level is represented by our level 3: performance clusters reflecting the main research activities. Levels 4 and 5 contain the operational level, which consists of concrete KPIs categorized within KPI classes.
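To make the relations between the levels tangible, the hierarchy can be written down as a simple linked data model. This is only an illustrative sketch; the class and attribute names are ours and not part of the model's formal definition:

```python
from dataclasses import dataclass, field

@dataclass
class KPI:                        # level 5: a concrete, measurable indicator
    name: str
    target: float
    actual: float = 0.0

@dataclass
class KPIClass:                   # level 4: abstraction over concrete KPIs
    name: str                     # e.g. "Budget", "Timeliness"
    kpis: list = field(default_factory=list)

@dataclass
class PerformanceCluster:         # level 3: typical research activity area
    name: str                     # e.g. "TT", "IP", "IMG", "OPEX"
    weighting: float              # share within the department's activity mix
    kpi_classes: list = field(default_factory=list)

@dataclass
class ResearchGoal:               # level 2: goal of the research department
    name: str
    clusters: list = field(default_factory=list)

@dataclass
class CompanyGoal:                # level 1: company goal, cascaded downwards
    name: str
    research_goals: list = field(default_factory=list)
```

Cascading then means populating a CompanyGoal with ResearchGoal objects, choosing a cluster mix for each goal, and descending to KPI classes and concrete KPIs, which is exactly the path the design guidelines in Sect. 6.2.2 walk along.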
When considering the negotiation process of goal-setting in a company, the strategic goals often represent broad, long-term goals defining the vision of the entire function. In practice, these goals are typically given by management. The strategic goals can often be cross-functional and therefore require the creation of focus teams or multiple initiatives to move the goals forward. Tactical goals are more middle ground, short-term (typically with a 1-year horizon in practice), covering more specific projects that assist the attainment of the strategic goals. Operational goals are then translated into a series of concrete day-to-day activities to achieve the tactical objectives. For example, in the research department the question arises: with which research activities is it possible to attain a certain goal? For the performance measurement of the research department this answer should already deliver a rough indicator in terms of “what” can be achieved (Fig. 6.2).

Fig. 6.2 Strategic/goal level of PMgS (Company Goals, Research Goals)
Company goals (level 1) need to be broken down into department goals (level 2). Assuming the existence of company goals, the formation of research goals is a rather difficult task. Depending on the nature of the research, long-term projects that might last for several years (and therefore span several performance management cycles), coupled with the uncertainty of the inventive process, make goal-setting more difficult. In practice, this cascading and translation process is often “negotiated” as a bilateral agreement between two perspectives: within the global perspective, company-level goals are broken down to the local (i.e. department) level, which receives these cascaded goals. Within the global perspective (level 1) a manager has to ensure that the contributions of the individual departments support the achievement of the overall company goals as efficiently as possible. The manager of a research department (level 2) in turn has to take the local perspective and negotiate the goals in a way that assures that the expected contributions of his department can be achieved by means of the current or potential future activities (Fig. 6.3).

Fig. 6.3 Interface between goals and activity level of PMgS (Company Goals, Research Goals, Performance Clusters)

As a result, the relations between research goals and performance clusters (level 3) can be described as follows: The research goals can only be achieved by pursuing a certain mix of activities, i.e. it is necessary to have the “right” mix of activities in place or to develop the department towards that mix to attain the goals. As performance clusters are activity-based, they represent a catalogue summarizing various typical research activities in an industrial setup. It is conceivable that to work towards
a new or modified goal might require new or different activities. For example, if a research department in the past did not have the goal to create intellectual property, then most likely an explicit process to support IP creation will not be in place for this department, and the activities related to that process might not yet be part of the activity mix (Fig. 6.4).
Fig. 6.4 Interface between KPI (PMsS) and activity levels of PMgS (Performance Clusters, KPI Classes)
With the activity mix defined, the question of which criteria should be used to assess the activities needs to be answered (level 4). KPI classes are a means to derive concrete KPIs based on the properties of the classes: the choice of the artifact class that is being assessed, i.e. input, process, output or outcome, and the artifact property regarding the assessment of quality and quantities. In this regard, the data collected during the measurement process needs to be appropriate to assess the quality. Furthermore, the data may be purely quantitative, where certain quantities inherently contain certain quality aspects.6 In this context, it is worth mentioning that the selection of artifact classes, or artifact properties, may be heavily constrained by the (company) processes in place as far as the efficient collection of the actual data is concerned (Fig. 6.5).
6 See Table 3.7 (Catalogue of KPI classes with corresponding properties and examples) for a full classification of qualitative vs. quantitative properties of KPIs.
Fig. 6.5 KPI level forming PMsS as part of PMgS (KPI Classes, Concrete KPIs)
Finally, level 5 requires turning the abstract artifact classes into concrete artifacts and defining concrete measures as part of the KPIs. There might be several options here, and again the concrete artifacts, as well as the measured properties, may be influenced by the processes in place for reasons of efficiency. Correctly formulated organizational goals, which any performing department attempts to achieve, should contain targets to be reached, elements of time at which the target or milestones towards it are to be reached, and rules about a preference ordering of the ways to get there. These three elements indicate that organizational goals and the definition of performance rest on the definition of a causal model linking inputs and outcomes through selected causal relationships. The performance clusters, level 3 within our five-level performance management model, represent the important link connecting the two ends: organizational goals and performance measures. Within each performance cluster the underlying processes have been identified.7 These processes have been specified as to whether they are company-internal processes or whether they go beyond an organization's boundaries. Further distinction is possible in terms of the formality, latency and length of the processes. Understanding the processes underlying performance is the only way to define the measures that lead to actions. Appropriate corrective actions can only be identified if it is clear which steps of the processes are problematic. Comprehension of the processes not only facilitates the identification of measures, and therefore of corrective actions, it also allows for a clear deployment of strategy in all performance clusters within the respective context. Level 3 (cf. Fig. 6.1) not only provides the context for each organizational research goal and an array of possible performance measures, depending on the aspects which need to be stressed in the specific organization's situation; it also considers the characteristic environment of an industrial research department, including all reference groups with which such a department interacts. This helps to answer the key issues raised by Zammuto: the problem of assessing performance out of context and of not including judgments of external groups.
7 Compare Sect. 3.2, where generic processes have been identified for each performance cluster.
In this subsection we presented our PMgS by outlining each of the five levels and describing their links and interrelations. Overall, it seems to be a useful framework for designing a PMgS. In the following section we will give an outline of the main issues and general principles in each starting situation of this process and indicate when and how the performance management components (e.g. performance cluster, KPI classes) that we have developed can be applied.
6.2.2 Recommendations: Process Options to Design a PMgS
As defined in Chap. 2, performance measurement is a part of performance management and is ideally integrated into the overall business management function. In Sect. 2.2.4 we reduced performance management to its four constitutive elements: planning, measurement, analysis and review/improvement. In doing so we established that performance measurement embodies two of the four elements: measurement and analysis. Furthermore, we examined the purposes of performance measurement and the relationship between them and the elements. Table 6.1 summarizes the elements, purposes and activities to be pursued for each element and indicates which of the PMgS levels presented above are to be consulted.

Table 6.1 Elements of PM, purposes and activities combined with the levels of PMgS

Planning (purpose: 1. Goal setting, (a) communication, (b) alignment; PMgS levels: all 5)
• Define goals and refine them through the entire goal-setting process
• Define to-be state or nominal values for later comparisons with actually achieved values
• Define KPIs
• Decide on timeframes for planned strategy (short-term, long-term)

Measurement (purpose: 2. Evaluating goal achievement, (a) status quo; PMgS levels: 4 and 5)
• Determine the current status by collecting relevant data
• Break down KPIs into PIs, or consolidate PIs back to KPIs

Analysis (purpose: 2. Evaluating goal achievement, (b) prediction; PMgS levels: 2, 3, 4 and 5)
• Evaluate, interpret, project and forecast from the current situation
• Determine the deviations from objectives with “what if” scenarios
• Analyze the effects of corrective actions resulting from interdependencies between goals and actions

Review/improvement (purpose: 3. Motivation, (a) organizational, (b) personal; PMgS levels: 2 and 3)
• Identify concrete activities to implement conclusions drawn from analyses
• Example short-term: periodic rewards or identification of necessary training or corrective actions such as budget cuts, travel restrictions or resource reassignments
• Example long-term: adjust and reformulate organizational goals and KPIs between periodical performance management cycles
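Read as a loop, Table 6.1 describes one closed management cycle per performance period. The following is a minimal sketch of this control flow; the element names and level annotations follow the table, while the function itself is purely illustrative:

```python
from enum import Enum

class PMElement(Enum):
    PLANNING = "define goals, to-be values and KPIs (levels 1-5)"
    MEASUREMENT = "collect status-quo data for the KPIs (levels 4 and 5)"
    ANALYSIS = "interpret, forecast and run what-if scenarios (levels 2-5)"
    REVIEW_IMPROVEMENT = "derive corrective actions, adjust goals (levels 2 and 3)"

def management_cycle(periods: int) -> None:
    """Run the four PM elements once per performance period."""
    for period in range(1, periods + 1):
        for element in PMElement:   # Enum preserves definition order
            print(f"Period {period}: {element.name.lower()} - {element.value}")

management_cycle(periods=2)
```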
In the first phase it is necessary to become familiar with the current situation (i.e. where performance management starts and what it can be based on) and to gain a clear picture of whether the entire performance management is to be reviewed with all its elements (Table 6.1) or just one or a combination of the elements. We now describe the process to set up a PMgS. Two possible starting situations in regard to performance management are conceivable: first, that the organization already has a PMgS or individual measurement procedures in place; or second, that the organization has no performance management in place yet. The second situation is possible for newly-formed research departments or after a re-organization. The foundations for performance management are laid in the planning stage. It is the most difficult part of performance management and therefore requires the concentrated attention of a manager. This explains why, in the following process description, our focus is on this element. The activities for the remaining elements are briefly summarized in Table 6.1. We now describe the guidelines for the development of a PMgS considering three different starting situations and indicating on which level each activity takes place. Tables 6.2 and 6.3 show the starting situation for the case when a PMgS is not in place. We distinguish between (1) top-down planning of performance management in Table 6.2 and (2) middle-out planning, presented in Table 6.3. In the first case, the assumption is made that the department goals are defined and cannot be changed in the course of the performance period. Our quantitative survey showed that slightly more than 50% of the respondents use the top-down approach in their research departments.8 Goal agreement is a bilateral process, and when management wants to make changes, the foundations for the change (new processes, time and resources) should be facilitated top-down. The second possible case allows for an adjustment of department goals within the planning phase: management does not insist on initially-defined goals and is open to discuss the assimilation from both the top down and the middle out. The top shows what is desired in terms of performance, and the middle shows what is achievable with the given resources. Theoretically we can name this approach a middle-out goal-setting approach, because the impulse for “achievable” performance is generated on the activity level, which is equivalent to our performance clusters (in the middle), and leads to an adjustment of the goals (at the top). The middle-out goal-setting process exists in practice; indeed, 9.6% of the respondents said they use this management practice (Table 6.3).
8 See Fig. 4.9: Goal-setting approach in Sect. 4.4.1.
Table 6.2 Guidelines for the development of a PMgS, top-down goal-setting process (PMgS is not in place, department goals cannot be changed)

Levels 1, 2: Break down company goals into preliminary research department goals.
Result: research department goals are available as a clear definition and precise formulation with priorities (for example in the form of weightings).

Level 3: Define the required activity mix for the research department goals. Compare the required activity mix with the status quo(a) and identify discrepancies:
(a) No discrepancies exist.
(b) Discrepancies exist:
(i) Change priorities of activities of the status quo.
(ii) Identify development measures for the department to achieve the adapted activity mix.(b)
Result: targeted research activity mix with which the goals can be achieved, and potential development measures to achieve the targeted mix.

Level 4: Identify KPI classes that are most appropriate to assess goal achievement, based on the activity mix (= performance cluster mix) and on the KPI classes for the activities.
Result: a set of KPI classes for each performance cluster.

Level 5: Identify concrete KPIs for each KPI class that is going to be used to assess the performance cluster:
(a) KPIs can be identified for all KPI classes.
(b) KPIs cannot be identified for all KPI classes:(c)
(i) None of the KPI classes for a certain performance cluster provides an appropriate KPI → change the cluster mix (go to step 3).
(ii) There are KPI classes for a certain performance cluster that do not provide an appropriate KPI → change the KPI class (go to step 5).
Result: KPIs identified for each KPI class.

(a) We assume this interaction within the goal cascading process is between both perspectives. The local perspective often has a good overview of the existing activity mix and a good estimation of which goals within this mix are achievable. The ideal situation is established once the actual set-up fits the expectations, meaning that the currently available activity mix meets the needs of the cascaded goals.
(b) The changed activity mix might in turn need new and/or additional skills in order to be able to achieve the goal. The development measure could be to recruit additional people or to provide existing people with training, etc.
(c) The reason for this case could be that the data cannot be collected, for example because there is no process in place to collect the KPIs.

Table 6.3 Guidelines for the development of a PMgS, middle-out goal-setting process (PMgS is not in place, department goals can be adjusted)

Level 3: Identify the status quo(a) of the activity mix of the research department (status-quo mix).

Levels 1, 2: Break company goals down into preliminary research department goals (target mix).
Result: research department goals are available as a clear definition and precise formulation with priorities (for example in the form of weightings).

Level 3: Define the required activity mix for the research department goals. Compare the required activity mix with the status quo and identify discrepancies:
(a) No discrepancies exist.
(b) Discrepancies exist; possible resolution mechanisms:
(i) Change goals and/or their priorities. If goals and/or priorities of goals are changed (target mix is changed) → go to step 2.
(ii) Change priorities of activities (status quo is changed). If priorities of activities are changed → identify development measures for the department to achieve the adapted activity mix.
Result: targeted research activity mix with which the goals can be achieved, and potential development measures to achieve the targeted mix.

Level 4: Identify KPI classes that are most appropriate to assess goal achievement, based on the activity mix (= performance cluster mix) and on the KPI classes for the activities.
Result: a set of KPI classes for each performance cluster.

Level 5: Identify concrete KPIs for each KPI class that is going to be used to assess the performance cluster:
(a) KPIs can be identified for all KPI classes.
(b) KPIs cannot be identified for all KPI classes:
(i) None of the KPI classes for a certain performance cluster provides an appropriate KPI → change the cluster mix (go to step 3).
(ii) There are KPI classes for a certain performance cluster that do not provide an appropriate KPI → change the KPI class (go to step 5).
Result: KPIs identified for each KPI class.

(a) For example, this can be done based on the roles within the department. One possibility could be to identify how many people are engaged in these roles and then calculate the percentage of time people spend on a particular type of activity. In this way, the status quo of the available activity mix for the department can be identified.
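The “go to step …” feedback edges in Tables 6.2 and 6.3 make the design process an iterative loop rather than a straight sequence. The sketch below expresses the top-down variant of Table 6.2 as control flow; the function parameters stand in for managerial decisions, and all names are illustrative:

```python
def design_pmgs_top_down(goals, status_quo_mix, required_mix_for,
                         kpi_classes_for, concrete_kpi_for):
    """Sketch of the top-down process of Table 6.2 (illustrative).

    goals            -- cascaded department goals (cannot be changed here)
    status_quo_mix   -- dict: performance cluster -> current activity share
    required_mix_for -- callable(goals) -> dict: cluster -> target share
    kpi_classes_for  -- callable(cluster) -> list of KPI class names
    concrete_kpi_for -- callable(kpi_class) -> concrete KPI or None
    """
    # Level 3: derive the target activity mix and spot discrepancies.
    target_mix = required_mix_for(goals)
    development_measures = {
        cluster: share
        for cluster, share in target_mix.items()
        if status_quo_mix.get(cluster, 0.0) < share
    }
    # Levels 4 and 5: pick KPI classes per cluster and instantiate KPIs;
    # classes without a usable concrete KPI are dropped ("change KPI class").
    kpis = {}
    for cluster in target_mix:
        for kpi_class in kpi_classes_for(cluster):
            kpi = concrete_kpi_for(kpi_class)
            if kpi is not None:
                kpis[kpi_class] = kpi
    return target_mix, development_measures, kpis
```

In the middle-out variant of Table 6.3 the same loop would additionally be allowed to feed discrepancies back into the goals themselves (step 2) instead of only into development measures.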
From the two process approaches described in Tables 6.2 and 6.3 it is obvious that the internal set-up of a company plays a major role when (re-)designing a performance management system for the department. Many combinations are conceivable: for example, there may be the possibility to adjust goals either during the performance period or at least during the planning phase, or the goals are simply given by management and the only way to accomplish them is to adjust the activities and/or performance indicators. The above dichotomy, a PMgS is in place or is not in place, is intentionally oversimplified. Since the situation is usually more complicated, and there are always some aspects of some kind of assessment in place (for example, each researcher knows how many patents or publications he/she has generated), we need to review this as well. Therefore, when facing the situation that in principle one can state that a PMgS is in place, the process could look as indicated in Table 6.4. In sum, it is not possible to say whether it is easier to review an already existing PMgS or to design a new one from scratch. Each situation has its own advantages and disadvantages. When, for example, some parts of performance management are in place, it is very likely that there are processes in place to support the PM. In the case where a department is started from scratch, it would make sense to start with level 3 of our model, which embodies the typical research areas and reflects the inherent activities of each.
Table 6.4 Guidelines for the development of a PMgS (PMgS is already in place)

Level 3: Identify the status quo of the activity mix of the research department (status-quo mix).

Levels 1, 2: Research department goals (target mix) are available. If not, our assumption is that the PMgS is not in place, and the recommendation is to proceed with the process described in one of the two tables above.
Result: research department goals are available as a clear definition and precise formulation with priorities (for example in the form of weightings).

Level 3: Define the required activity mix for the research department goals. Compare the required activity mix with the status quo and identify discrepancies:
(a) No discrepancies exist.
(b) Discrepancies exist; possible resolution mechanisms:
(i) Change goals and/or their priorities. If goals and/or priorities of goals are changed (target mix is changed) → go to step 2.
(ii) Change priorities of activities (status quo is changed). If priorities of activities are changed → identify development measures for the department to achieve an adapted activity mix.
Result: targeted research activity mix with which the goals can be achieved, and potential development measures to achieve the targeted mix.

Level 4: Identify KPI classes that are most appropriate to assess goal achievement, based on the activity mix (= performance cluster mix) and on the KPI classes for the activities.
Result: a set of KPI classes for each performance cluster.

Level 5: Identify concrete KPIs for each KPI class that are going to be used to assess the performance cluster:
(a) KPIs can be identified for all KPI classes.
(b) KPIs cannot be identified for all KPI classes:
(i) None of the KPI classes for a certain performance cluster provides an appropriate KPI → change the cluster mix (go to step 3).
(ii) There are KPI classes for a certain performance cluster that do not provide an appropriate KPI → change the KPI class (go to step 5).
Result: KPIs identified for each KPI class.
These recommendations on design procedures, presented in Tables 6.2–6.4, cover the initial planning element of performance management. The second element and the next step is measurement. This stage aims to evaluate the degree of goal achievement, but before that can happen, the actual data has to be collected. This data is defined in the planning stage in the form of KPI classes and concrete KPIs. The collection and monitoring of data can be scheduled monthly, quarterly or on a continuous basis, according to the requirements of the respective organization. This activity results in the provision of data for the current status. Based on the status-quo data, the analysis can be initiated: the collected data now needs to be interpreted and evaluated. Consequently, deviations from the set/defined goals can be determined, corrective actions initiated and "what if" scenarios developed. The effects of corrective actions resulting from interdependencies between goals and actions must be analyzed in parallel.
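As a minimal illustration of this measurement-and-analysis step, deviations can be computed once target and actual values exist for each KPI. All numbers below are invented, not data from our case studies.

```python
# Toy illustration of the measurement and analysis elements: compare the
# collected status-quo data with planned targets and flag deviations.
targets = {"patents filed": 12, "papers published": 8, "transfers completed": 3}
actuals = {"patents filed": 9, "papers published": 10, "transfers completed": 1}

for kpi, target in targets.items():
    achievement = actuals.get(kpi, 0) / target  # degree of goal achievement
    status = "on track" if achievement >= 1.0 else "consider corrective action"
    print(f"{kpi}: {achievement:.0%} of target -> {status}")
```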
The final stage we outlined is the review/improvement element, which can be seen as a driver for organizational and/or personal motivation. In this stage, activities need to be identified to instantiate the conclusions drawn from the analysis stage. The realization of the necessary corrective measures will either safeguard goal achievement or, to the contrary, leave the final results showing deviations. Some companies might tie the personal rewards of researchers to the department's performance; some of our case study interviews revealed that when staff bonuses depend on overall goal achievement, the motivation of people to pull together towards a defined goal is strongest. In practice, the planning, measurement, analysis and review/improvement elements are often considered as four distinct activities. However, within a performance management cycle that adds the time dimension, they merge into one activity: managing the performance. To conclude, we strongly recommend that companies consider the planning, measurement, analysis and review/improvement elements as one activity with distinct sequential orientations and treat them as complementary activities within a management cycle. The cycle, depending on the organization, should be between one and three years.
6.3 Review of Requirements Against the Created PMgS
In Chap. 3 we derived from the literature a catalogue of requirements for a PMgS. It should be noted that this requirement catalogue was derived from literature addressing R&D as a whole; we therefore do not expect to meet the requirements from this catalogue fully, since our focus is directed at the research department only. Within the following four main areas9 we tried to identify the necessary attributes, characteristics and qualities that give a PMgS maximum value and utility:
1. The organizational structure, design and embedding of a performance management system;
2. The processes required within a PMgS;
3. Goals;
4. Performance indicators.
Furthermore, we examined the literature for the specific features of an industrial research department and extracted the following nine specific requirements (SR) (Table 6.5):
9 Similar clusters of requirements have been identified by Wijnen et al. (1988): functional requirements (demands imposed on the system to ensure fulfillment of the desired function(s)), user demands (additional demands from the user), constraints (demands induced by characteristics of the system context), design constraints (object limits that have to be taken into account).
Table 6.5 Requirements considering the specifics of industrial research
SR 1: The PMgS needs to consider the nature of research tasks (more non-routine, more exceptions, more complexity)
SR 2: The PMgS has to reflect the fact that there is a significant time-lag between the output of research and the outcome resulting from this output; furthermore, the PMgS has to cope with the irregular generation of outputs
SR 3: The PMgS has to consider all results without penalizing the negative findings
SR 4: The PMgS has to reflect the fact that, due to the uncertain conversion of research outcome into successful products, the outcome alone cannot represent the basis for conducting the performance assessment
SR 5: The PMgS has to consider the timeframes of research activities; initially the performance can only be assessed upon activities or outputs, not outcomes
SR 6: The PMgS has to consider the interactions with different stakeholders
SR 7: The PMgS has to consider the different skill sets of people working within research
SR 8: The PMgS has to identify reasonable indicators for the monetary value of research output
SR 9: The PMgS has to identify benchmarking values for the output
These requirements should especially be taken into consideration when designing a PMgS for an industrial research department. In the previous chapters we did not discuss the basic assumption that a PMgS should be economically feasible, i.e. that its benefits should exceed its costs. Kerssens-van Drongelen (2001) noted, citing Emmanuel et al. (1990), that in practice the fulfillment of this economic feasibility requirement is often difficult to demonstrate for performance measurement systems. Our own research10 shows that managers consider it important "to have something which is not over-complicated and assures simple handling and acceptance by employees". Furthermore, in practice it will be necessary during the design process to define the required state of each requirement, in terms of its importance and whether it is a 'must' requirement or a 'nice to have' requirement.

We now reflect on the extent to which the requirements collected in the literature are met by our PMgS, as we aim for a comprehensive concept that can be applied in practice. We stress, however, that the technical aspects of the integration and implementation of a PMgS, as well as the transformation of the concept into a software application, are out of the scope of our work. It is therefore not our aim to meet all requirements collected in the literature. Table 6.6 contains the requirements list and indicates which of them are covered by our PMgS. The requirements11 that we derived from the literature addressing industrial research departments were our constant point of reference when designing the PMgS. Specifically, the design of performance clusters and KPI classes considered all the specifics necessary for an industrial research PMgS.
10 From interviews conducted for the case studies.
11 See Sect. 3.1.5 Characteristics of research compared to development.
Table 6.6 Comparison of derived requirements and the developed PMgS

PMS embedding and design

• General characteristics: A PMgS needs to be derived from and aligned with the overall company strategy, built on a holistic approach, and open and flexible for potential changes/enhancements.
✓ These requirements are met at levels 1 and 2: company goals (level 1) are broken down into department goals (level 2). The model represents a holistic approach, which is flexible and open for enhancement, for example through the selection of performance clusters (additional ones can be added).

• Specific components: A PMgS needs to include reference models, an indicator catalogue and data model, and existing best practices, techniques and templates.
✓ The five-level model is introduced in Sect. 5.1.2. In particular, level 3 and level 5 include catalogues with categorized data.

• A model to capture and analyze cause-and-effect relationships: this model needs to allow for identifying measures/potential for continuous improvement, coordinating and balancing the interests of stakeholders, reviewing the effectiveness of the PMgS, and evaluating the efficiency of the PMgS (including the cost-benefit situation).
✓ The introduction of performance clusters includes the detailed description of the processes of each cluster. The individual process steps can indicate possible cause-effect relationships. The processes furthermore allow for the identification of problems and improvement actions where necessary. Moreover, they show who the stakeholders are and where the boundaries exist.

• The PMgS needs to be coupled with a reward system.
– Beyond the scope of our work.

• The PMgS needs to integrate with the information demand according to the value chain, existing management systems, existing organizational structure(s) (e.g. decentralized units), existing tasks and business processes, and the existing IT infrastructure.
(✓) The scope of this thesis is a performance management system for industrial research departments; the integration aspect was beyond our scope. To deal with it, we would have had to study the environment much more precisely than was possible in this research. Therefore, integration into the existing IT infrastructure and existing management systems is not met by our PMgS. The information demand by the value chain, the existing organizational structure, and existing tasks and business processes have been considered while designing the PMgS.

PMgS processes are required for...

• Capturing company-internal and external data.
✓ In the description of the performance cluster processes we indicated whether the process steps have an internal or external focus. Accordingly, KPI classes reflect KPIs capturing both internal and external data.

• Communicating performance-relevant data to stakeholders.
✓ The communication of PMgS data is only possible once the data is available, i.e. in our four elements of the PM, after the third element has been accomplished (data has been collected for measurement purposes and the status quo is determined); the data is then available not as a conglomeration but in a sorted and classified form.

• Capturing the as-is profile and developing the to-be profile.
✓ Our developed PMgS on its third level is represented by performance clusters, which contain generic processes. These are mapped to KPI classes.

• Ensuring user participation.
✓ Our model is activity-based, and every researcher activity directly influences the performance of the department. This interlinking ensures the direct impact of researchers.

• Creating, reviewing and evolving the PMgS (including goals and performance indicators).
✓ The PMgS consists in principle of goals and KPIs. How up to date the PMgS is depends on the frequency of the company's reviewing of the relevant artifacts: goals, performance clusters, KPI classes, KPIs.

• Capturing and analyzing cause-and-effect relationships.
✓ Cause-and-effect analysis requires a fine-granular process level, as the effects can only be identified if it is clear where in the process chain the cause is located.

Goals have to...

• Be derived from the strategy and reflect core competencies.
✓ Core competencies are reflected within the performance clusters.

• Be operationalized by means of indicators.
✓ There is a direct link to concrete KPIs.

• Have a stakeholder.
✓ As soon as organizational performance is connected with personal performance, this requirement is met.

• Be tied to organizations and employees to ensure responsibility.
✓ The first level ensures the link to the organization, since the department goals are derived from the company goals. The third, fourth and fifth levels ensure the link to employees, since employee performance is assessed upon the agreed KPIs.

Performance indicators...

• Are operationalized goals.
✓ With two condensation levels (third and fourth), the KPIs are linked to goals.

• Focus on the most important aspects/critical success factors.
✓ The KPI class catalogue indicates the type of artifact, and with it the focus can be determined.

• Measure the current/expected performance of core competencies.
✓ The activity-based concept allows for analysis of the status quo. The forecast can be based on status-quo data and then be extrapolated.

• Are low in number/comparable.
✓ With the creation of KPI classes we ensure comparability. As most of the performance clusters have two KPI classes and a maximum of six, clarity is provided.

• Refer to results and not behaviors.
✓ This requirement is met as far as the quantitative KPIs are concerned. Qualitative KPIs often refer to behaviors, which is in the nature of things (i.e. qualitative indicators).

• Have good communicability.
✓ Communicability is facilitated through the clear structure starting with the previously defined goals, performance clusters, KPI classes and KPIs. Nevertheless, it can be challenging to communicate qualitative data in a crisp way.

• Are continuously part of the periodical reporting of the department/company.
✓ In our framework "research department and its eco-system" (Fig. 3.1) we indicate four possible measurement cycles for output/outcome measurement: quarterly, annually, 2+ years, 3+ years.

• Consider measuring financial values (cost, EVA, shareholder value) and non-financial values (time, quality).
✓ A common measure in research seems to be quantified input in terms of costs. Other financial values are very controversial; non-financial values are more appropriate. The KPI class catalogue reflects these aspects.

• Consider measuring the following perspectives: leading and lagging; internal and external performance indicators; impact, execution and premise indicators.
✓ The impact indicators can only be the output or outcome indicators, which are all included in the KPI class catalogue. As to the execution indicators: the KPI classes are all mapped to the individual steps (where possible) of the processes of the performance clusters, which enables progress to be determined. Furthermore, the internal and external views are combined in the performance clusters and reflected in the described processes; the KPI classes are designed accordingly and represent either the internal or the external perspective. With regard to leading or lagging: most KPIs represent data about the past. The internal or external perception of research, publications, concepts transferred, patents etc. all represent results of activities that have already been invested in. Therefore, the leading perspective can, in industrial research, be reflected in organizational goals, which are formulated as future statements.

• Permit actions to be derived to improve current performance.
✓ The availability of data for KPIs allows for an as-is assessment followed by comparison with the to-be state. This takes place in the second and third elements of the PM.

• Strengthen self-motivation and guide behavior.
✓ Clear definition and communication of KPIs, as well as the possibility for employees to track their own performance, allows employees to adapt their own behavior. The prerequisite is that rewards are directly connected with the achievement of the defined performance, which in turn fosters the self-motivation of the employees.
So, for example, the nature of research tasks captured in SR1 has been considered by the fact that the data which is now available as level 5 was originally collected within case studies, and thus represents typical industrial research KPIs reflecting the nature of research tasks. The time-lag (SR2) has to be considered mainly when planning (the first element of) the performance management cycle; it is difficult to include within a PMgS technically, unless the system indicates, for example, whether the considered result is an output from the first, second or third year of conducting research. The same applies to "negative" results from the outputs (SR3): since the result cannot be known ahead of an inventive process, finding out that something does not work, or is not suitable, should be treated the same as a "positive" outcome, and during the planning phase managers must be aware of the issue of not penalizing "negative" results. Our PMgS allows for the consideration of SR4 and SR5: although the final outcome does not represent genuine direct performance, we can capture the performance clusters' activities, which contribute to the final outcome. SR7, where the different skill sets of people employed in research are in focus, deserves mention because one of the 11 clusters (TP, Talent Pool) is specifically dedicated to people and their development within the research department. Perhaps the most difficult aspect is capturing the interaction with different stakeholders (SR6). The distinctive role of interactions in research is reflected in all clusters except the two that focus on research-department-internal activities: Research Portfolio Management and Operational Excellence. For example, within the Intellectual Property cluster the interaction takes place first with the internal IP department and then with the external patent offices. The Technology Transfer cluster is specifically dedicated to delivering research results
into the overall company; here the interaction happens internally with all possible receiving units. The interaction within the clusters Publications, Presence in Scientific Community, and Collaboration with Partners and Customers is self-evident.
6.4 Conclusions
In this chapter we have outlined the performance management system (PMgS) that we propose for industrial research departments. It is obvious, and a matter of course, that each company will have to specify its own requirements according to its specific economic and organizational situation. The requirements catalogue and the specific requirements can provide a basis to start with; they will need to be refined and updated, and a requirements specification process should be considered that allows for the individual company context.

Regarding the developed guidelines for the development of a PMgS, we have sufficient confidence in those described in Tables 6.3 and 6.4. These cover, first, the situation where a department has yet to establish a performance management system and, second, the situation where a PMgS is already established and needs to be reviewed. We have less confidence in the guidelines developed in Table 6.2, where the goals are rigidly pre-set and management is not prepared to discuss and adjust the goal-setting. In this case it should be clear that, at least on the execution layer, the capabilities need to be considered.

In addition to the guidelines for the process development of a PMgS, we have also developed tools which facilitate the design, selection and prioritization processes in the context of an industrial research department: the performance clusters. Within each cluster, generic processes have been identified, and each process step (where possible) has been assigned to a KPI class. Moreover, a catalogue of 37 KPI classes has been generated, which considers the variety of capabilities of a company in terms of whether quality, quantity, intensity, etc. is feasible to assess.

In this chapter we have provided our answer to RQ4 and delivered our recommendations on how to develop a performance management system in different starting situations. It is assumed that the suggested framework represents a simple, comprehensive model which can be generically implemented in every research organization; the adjustment for specifics such as industry or environment is facilitated by the lists and catalogues provided with the PMgS model. The extraction and implementation of the specifics of industrial applied research in the form of specific requirements answers RQ3. These are inherent within the created PMgS model, since the insights are based on data gathered in case studies of industrial research departments.

Furthermore, it is expected that not only industrial research departments can use the proposed model, but also independent research organizations or research departments within universities, since the model is flexible for content adjustments
on the activity levels in the performance clusters. For example, in comparison to an industrial research department, an independent research organization could be assumed to put more priority on the generation of intellectual property (IP) than on generating new/future business opportunities (FBO), while intra-organizational technology transfer (TT) is completely absent, replaced by inter-organizational technology transfer. A research department at a university might prioritize the performance clusters presence in scientific community (PSC) and publications (PUB) over other performance clusters.
Chapter 7
Conclusions
Abstract This is the final chapter. It clarifies the findings of the dissertation and reflects on the research questions. The chapter first summarizes the key results of the work and then provides explicit answers to the research question and sub-questions. Furthermore, it discusses the limitations of our work and suggests areas for future research.

Keywords 11 performance clusters • Catalogue of 37 KPI classes • 8 typical organizational goals of research departments • Framework: research department and its eco-system • Summary of work
This final chapter is structured as follows: Sect. 7.1 summarizes the work, Sect. 7.2 provides explicit answers to the research questions, Sect. 7.3 describes the limitations of our research, Sect. 7.4 identifies future areas for research, and concluding remarks are provided in Sect. 7.5.
7.1 Summary of Work
Assessing the performance of scientific research has been discussed extensively in science for more than 30 years. Early literature discussed the question of whether or not to assess scientific activities within organizations at all; strong objections existed that measurement would harm the vital creativity of researchers. The literature then moved on to address measurement procedures and relevant performance measures. Subsequently, the narrow focus on performance in industrial R&D evolved into a concentration on the broader Performance Management Systems (PMgS) that have now been under discussion for more than 10 years. As both primary and secondary research reveals, such approaches have been widely implemented in practice, but often insufficiently. PMgS developed specifically for industrial research are scant and mostly unsatisfactory. Previous approaches represent only fragmented parts and therefore
only partially meet the needs of industrial research organizations. A major issue is that there is no comprehensive and detailed process model that describes how a PMgS can be built systematically.

Therefore, Chap. 1 explained the inception of this thesis, which was stimulated by the lack of an appropriate concept for managing the performance of an industrial research organization. The background and problem definition were outlined, and the research gap was indicated. Furthermore, the research questions were presented and the structure of the thesis was outlined.

Chapter 2 provides the reader with an introduction to the general concepts and definitions of the three areas that converge in this work: (1) performance management; (2) the industrial research organization; and (3) the ICT industry. Existing approaches primarily consider performance measures, while organizational goals are often neglected, which leads to inconsistencies when trying to assess organizational performance. Furthermore, because of the lack of methods and tools to collect the data, especially under time pressure, companies resort to quantitative measures and neglect qualitative data. These circumstances result in an incomplete picture of performance. We therefore established a basis with a framework building on Brown and Svenson's graph, which makes the research environment with its inputs, activities, outputs and outcomes more explicit and also indicates the conceivable measurement cycles (Fig. 7.1; see also the sketch below):
• Quarterly for the IN-PROCESS measurement
• Annually and/or 2+ years for the OUTPUT measurement
• 3+ years for the OUTCOME measurement
Fig. 7.1 Framework: research department and its eco-system
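These cycles can also be written down as a simple lookup; a minimal sketch in which the data structure is our own rendering and only the cycle labels come from the framework above:

```python
# The measurement cycles of Fig. 7.1 as a simple lookup table.
MEASUREMENT_CYCLES = {
    "in-process": ["quarterly"],
    "output": ["annually", "2+ years"],
    "outcome": ["3+ years"],
}

def cycles_for(measurement_type):
    """Return the conceivable measurement cycles for a measurement type."""
    return MEASUREMENT_CYCLES[measurement_type]

print(cycles_for("output"))  # ['annually', '2+ years']
```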
Another problem area is that performance management systems are often seen as a pure measurement activity and are not integrated into the management cycle with the pertinent strategy definition expressed in organizational goals. There is therefore no monitoring or evaluation of activities with the potential for suggesting and implementing corrective actions. In some cases the collection of research KPIs and their evaluation is traditionally independent from planning activities. These shortcomings often arise because the necessary processes are not holistically defined within the organizations.

Four constitutive elements of performance management (cf. Fig. 7.2) have been extracted from the definitions: planning, measurement, analysis and review/improvement. Furthermore, the purposes of performance management and measurement addressed in the literature:
1. Goal setting: (a) communication, (b) alignment
2. Evaluating goal achievement: (a) status quo, (b) prediction
3. Motivation: (a) organizational, (b) personal
have been classified and mapped to the four key elements of performance management.
Fig. 7.2 Constitutive elements of PMS and their relationships
In Chap. 3 we reviewed the literature, analyzing the trajectory from generic to specific performance management approaches: generic approaches applicable to the entire organization, approaches applicable to the R&D department only, and finally literature reporting on performance management approaches for industrial research departments. In the same manner the literature was examined with regard to the two major components of performance management: organizational goals and performance measures. Each of the components was analyzed accordingly on the generic, R&D and research department level. One of the main results has been captured in a catalogue of requirements for a performance management system. Furthermore, the nature and specifics of an industrial research organization have been analyzed and specific requirements formulated. Finally, based on the theoretical overview, research gaps and research questions have been formulated. The research questions and explicit answers are discussed in the next section.

Chapter 4 provides the procedural method whereby the elements of performance management found in practice are analyzed. These elements were studied in detail, and the types of interactions that exist between them were examined. Since a very limited number of publications was available, qualitative case study research was conducted: more than 70 interviews with senior decision-makers at eight different ICT companies. The case studies gave insights into the current practice of performance measurement and provided data on organizational research goals, performance measures, processes, tools and methods. A database of 164 different key performance indicators and 30 organizational research goals was generated. This data was abstracted, classified, categorized and condensed, eventually resulting in 11 performance clusters representing typical content areas for industrial research departments. Furthermore, a catalogue of 37 KPI classes with concrete KPI examples was produced. Since the performance clusters form an important component of our model, the clusters were analyzed with regard to the process(es) and typical activities performed in each cluster. Processes were validated against the literature, and individual process steps were mapped (where possible) to the KPI classes.

We faced a problem regarding the semantic interpretation of the stated goals: in short, research managers sometimes call the same thing by different names and different things by the same name. To check this, we analyzed the relationship between goals and performance clusters using two similarity algorithms. We deconstructed both the goals and the performance clusters into their subcomponents; the algorithms then let us look for similarities between the subcomponents and thereby understand whether the structure and nomenclature of the groups and clusters was coherent.

Chapter 5 describes our quantitative survey of 497 ICT companies, which verified our preliminary findings from the literature review and the qualitative case studies. On the whole the survey data affirmed our assumptions and findings with regard to performance management in general and to the content components of a performance management system. For example, 96% of respondents believe performance measurement is as necessary as management itself, while only 4% think that performance measurement negatively affects creativity, is too costly, or is not necessary at all. The biggest part of the survey dealt with the content components of a performance management system: goals and performance measures. The survey confirmed the set of eight organizational goals, which originated from the case studies. The most popular goal was "Generate and evaluate future business opportunities", which is consistent with the overall mission of research departments. The survey also revealed 37 KPI classes currently used by companies to assess their organizational goals.
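The subcomponent comparison summarized above for Chap. 4 can be illustrated with a simple token-overlap measure. The thesis does not name the two similarity algorithms that were used, so the Jaccard coefficient below is only a stand-in showing the idea of deconstructing goals and clusters and scoring their similarity:

```python
# Illustration only: score the overlap between the subcomponents of a goal
# and of a performance cluster with the Jaccard coefficient (a stand-in for
# the two algorithms actually used in Chap. 4).
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

goal_parts = {"create", "protect", "intellectual", "property"}
cluster_parts = {"intellectual", "property", "creation"}

print(f"similarity: {jaccard(goal_parts, cluster_parts):.2f}")  # 0.40: related naming
```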
The survey furthermore endorsed the performance cluster concept we introduced in Chap. 4.

Chapter 6 is the synthesis of all findings from throughout this work. The result is a performance management model with five levels for an industrial research department (cf. Fig. 7.3), which is described in Sect. 6.2.1.
Fig. 7.3 A five-level Performance Management Model: Company Goals (level 1), Research Goals (level 2), Performance Clusters (level 3), KPI Classes (level 4), Concrete KPIs (level 5)
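The five levels of Fig. 7.3 can be pictured as a nested structure. The sketch below is our rendering with invented example entries; it is not a normative schema:

```python
# The five-level model as a nested data structure (example entries invented).
from dataclasses import dataclass, field

@dataclass
class KPIClass:
    name: str                                 # level 4
    kpis: list = field(default_factory=list)  # level 5: concrete KPIs

@dataclass
class PerformanceCluster:
    name: str                                 # level 3
    kpi_classes: list = field(default_factory=list)

@dataclass
class ResearchGoal:
    name: str                                 # level 2
    company_goal: str                         # level 1 goal it derives from
    weight: float                             # relative importance
    clusters: list = field(default_factory=list)

goal = ResearchGoal(
    name="Create and protect intellectual property",
    company_goal="Secure long-term competitiveness",  # hypothetical level-1 goal
    weight=0.2,
    clusters=[PerformanceCluster(
        "Intellectual Property Creation (IP)",
        [KPIClass("Output volume", ["first filings per year"])])],
)
print(goal.clusters[0].kpi_classes[0].kpis)  # ['first filings per year']
```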
We provide recommendations in Sect. 6.2.2 on how to design a performance management system. Guidelines are given for three different starting situations for companies and indicate which PMgS level is to be involved in each step. This model meets the requirements for a PMgS which we specified at the outset of this work.
7.2 Explicit Answers to the Research Questions
The overarching goal of the work was the investigation of the practices of performance management within industrial research organizations. The corresponding research question is: How to assess the performance of an industrial research organization? The research question is subdivided into further questions:

RQ1: What is the common practice in regard to performance management of industrial applied research organizations? Are there common or similar organizational goals, and are there common or similar key performance indicators?

RQ2: Is there a generic performance management approach (PM approach) to describe the common practice of performance management of industrial applied research organizations? How are goals and KPIs reflected in this approach?

RQ3: How are the peculiarities of industrial applied research [specifically, and the requirements of performance management systems in general] considered in the developed PM approach?

RQ4: Can any recommendations be derived from the suggested approach to advise on the implementation and further development of a performance management system in an industrial applied research organization?
7.2.1 Research Question 1
We found that Research and Development functions have traditionally been treated as one single function: R&D. However, development departments and research functions have fundamentally different characteristics in regard to their performance, stemming from the different activities each conducts. The nature of industrial research departments, which we outlined in Sect. 3.5.1, makes it difficult to apply one common performance management practice for industrial research organizations. The difficulty lies in finding appropriate measures for the intangible and "one-off-natured" outputs of research departments. However, many companies agree that certain commonalities exist. For example, research produces highly specialized knowledge, so companies often use performance measures to assess the contribution of this knowledge: they refer to this as intellectual property (IP). Another common area is the search for possibilities to strengthen the company's business, such as looking at new trends and ideas and transforming them into new technologies or new business models. We found that the normal financial year is not always applicable to research, because typically a research project in the ICT industry lasts between 2 and 5 years. Our research found no common practices in handling or mitigating this time lag.
With regard to the second sub-question of RQ1, we identified the following eight organizational goals of industrial research organizations, under which most other goals could be subsumed:
1. Alignment with and transfer to internal development and other (business) units
2. Create and protect intellectual property
3. Improve the internal and external image of the research department and/or the company
4. Generate and evaluate future business opportunities
5. Recruit and develop excellent talent
6. Achieve a high standard of operational excellence
7. Establish and maintain strategic partnerships and/or collaborative research
8. Drive technology innovation and technology leadership

The second half of this sub-question (common key performance indicators across organizations) required several intermediate steps of analysis. We conclude that there is commonality, but only on a level of abstraction higher than the individual KPIs: the KPI class level. Thirty-seven KPI classes were identified (the complete catalogue comprising the performance clusters, KPI classes, artifact classes and properties, as well as concrete KPI examples, is provided in Table 4.7):
1. Transfer volume: volume of technology transfer activities to development or other (business) units
2. Transfer result quality: quality of the research results transferred
3. Transfer process quality: quality of the transfer process or transfer activities
4. Transfer significance: significance of the transferred research results for the receiving unit
5. Economic transfer value: economic value of the transfer activity or transferred research results
6. Input: intensity of input into the innovation process
7. Innovation process: (weighted) number of ideas moved to a certain or the next phase of the innovation process
8. Outcome: (achieved) business impact of an idea in terms of its economic value
9. Structuring: structure and quality of the research (i.e. certain technology areas)
10. Vision: visions related to the individual parts of the research portfolio and their quality
11. Roadmap: roadmaps to achieve the visions and their quality
12. Implementation: implementations of the roadmaps and the quality of contributions from research projects
13. Input volume: volume of potentially protectable inventions submitted to the IP pipeline
14. Output volume: volume of first filings out of the IP pipeline
15. Outcome volume: volume of patents granted
16. Input quality: quality of granted patents
17. Outcome quality: alignment of research activities with the IP strategy of the company
18. Budget: adherence to budget
19. Timeliness: adherence to timelines (phases, gates)
20. Risk management: quality of the risk management in place
21. Project management quality: quality of project management
22. General risk sharing: volume of (external) investments into the research organization (beyond seed funding)
23. Working environment: quality of the working environment
24. Inflow: volume/quality of people hired into the research organization
25. Outflow: volume/quality of people leaving the research organization to move to other parts of the company/ecosystem
26. Talent development: volume/quality of development measures undertaken
27. Talent assessment: quality of the people in the research department
28. Internal visibility: internal perception or internal recognition
29. External visibility: external perception or external recognition
30. Quantity of publications: volume of publications
31. Quality of publications: quality of publications
32. Junior level/researcher involvement: participation in scientific events (excluding publications)
33. Senior level/researcher involvement: participation in advisory boards or related bodies, honorary titles or memberships or visiting scholarships
34. Intensity: volume of collaboration with academia
35. Quality: quality of collaboration with academia
36. Intensity: volume of collaboration with partners and customers
37. Quality: quality of collaboration with partners and customers
7.2.2 Research Question 2
We found no shared performance management approaches that could describe the common practice of performance management of industrial applied research organizations. However, the R&D literature does provide some partial answers to performance measurement, which is one part of performance management. There are four elements of performance management: planning, measurement, analysis and review/improvement. Performance measurement encompasses only the measurement and analysis elements. This is why these concepts, by definition, do not address organizational goals: they do not include the planning element. Consequently, the lack of R&D performance management systems made the initial idea of simply applying an existing R&D approach to research departments impossible.

As we found no shared performance management systems for R&D, the response to the second sub-question of RQ2 is that no statements can be made on how goals or key performance indicators are managed. However, this stimulated the generation of a performance management model for industrial research, as presented in Chap. 6. Since the goal dimension and the performance measures dimension represent the most important components of the model, our Performance Management System (PMgS) explicitly focuses on these. The organizational research department goals are reflected in our PMgS on level 2, and the KPIs are represented on levels 4 and 5.
7.2.3 Research Question 3
To understand how the developed PM approach takes account of the peculiarities of industrial applied research, recall that Sect. 3.1.5 outlined the characteristics of research departments compared to development. Section 6.3 provides a summary where we elaborated on the extent to which the requirement catalogue on performance management derived from the literature conforms with our generated PMgS. Furthermore, by examining the literature for the specifics of research departments, we augmented this catalogue with specific requirements considering the peculiarities of research. Our findings suggest that the peculiarities are, for the most part, accounted for in our PMgS, but some of them cannot be implemented in a system; these peculiarities do, however, require managerial attention when evaluating performance.

Section 6.2 discusses the development of our five-level PMgS, which is based on our data. All components that provide context to an industrial research organization should consider these specific peculiarities; with the exception of the organizational company goals level, all levels must involve these specifics. Level 2 comprises the decomposed organizational research department goals; these were derived from qualitative case studies and verified in a large quantitative survey. Altogether, eight research goals were introduced above. Level 3 represents performance clusters. Performance clusters are based on a collection of typical research activities that relate to a common underlying process. Eleven performance clusters are introduced:
1. Technology Transfer (TT)
2. Future Business Opportunities (FBO)
3. Research Portfolio Management (RPM)
4. Intellectual Property Creation (IP)
5. Operational Excellence (OpEx)
6. Talent Pool (TP)
7. Image (IMG)
8. Publications (PUB)
9. Presence in Scientific Community (PSC)
10. Collaboration with Academia (CA)
11. Collaboration with Partners and Customers (CPC)
The specifics of industrial research are particularly visible at the performance cluster level. For example, generating intellectual property, transferring research results, being present in the scientific community, publishing research results, and collaborating with academia as well as with partners and customers are all activities that can be distinguished as research-specific.
Level 4 consists of 37 KPI classes, each mapped to the corresponding performance cluster. Level 5 encompasses the concrete KPIs; within the case studies, 164 KPIs were collected. The catalogue of performance clusters, KPI classes and KPIs is provided in Table 4.7.
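To make the mapping from level 3 to level 5 concrete, here is a small slice of such a catalogue rendered as a lookup table. Only cluster-to-class assignments that are obvious from the class names are shown, and the concrete KPI strings are invented examples; the authoritative catalogue is Table 4.7:

```python
# A small illustrative slice of the catalogue: performance cluster (level 3)
# -> KPI classes (level 4) -> example concrete KPIs (level 5, invented).
CATALOGUE_SLICE = {
    "Technology Transfer (TT)": {
        "Transfer volume": "number of transfers to receiving units per year",
        "Transfer result quality": "satisfaction score of the receiving unit",
    },
    "Publications (PUB)": {
        "Quantity of publications": "accepted peer-reviewed papers per year",
        "Quality of publications": "share of papers in top-ranked venues",
    },
    "Collaboration with Academia (CA)": {
        "Intensity": "number of joint projects with universities",
        "Quality": "share of collaborations renewed after completion",
    },
}

for cluster, classes in CATALOGUE_SLICE.items():
    for kpi_class, example_kpi in classes.items():
        print(f"{cluster} / {kpi_class}: {example_kpi}")
```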
7.2.4 Research Question 4
The core of this work is the design, development and construction of a performance management system that serves industrial applied research organizations. Our main objective was to recommend how to set up goals and select KPIs. In Sect. 6.2.2 we provide the reader with recommendations and guidelines on process options to develop a PMgS, distinguishing the different starting situations and contexts in which companies operate.

Our recommendation on content is as follows. Within the planning phase, when analyzing company goals (level 1) and deriving research goals (level 2) from them, the eight generic research goals (which represent typical organizational research department goals) can be taken as a basis and compared against during the goal-setting process. In addition, we recommend prioritizing research goals by allocating them relative importance weights. Furthermore, attention should be paid to Sect. 4.4, which discusses the goal decomposition analysis.

To select the most appropriate KPIs (level 5), several intermediate steps need to be taken. The activity mix with which it will be possible to achieve the defined goals needs to be determined. To do this, we recommend assessing the performance clusters (level 3) with their corresponding processes. This comparison delivers the basis for deciding on KPI classes (level 4). We recommend balancing quality and quantity KPI classes to ensure a comprehensive PMgS. The KPI classes eventually provide examples of concrete KPIs, thereby closing the loop of performance planning. As for the remaining elements of performance management (measurement, analysis, review/improvement), it is difficult to make concrete recommendations, since these elements are typically strongly dependent on the company context.
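A minimal sketch of the recommended weighting step, with invented weights and achievement figures, shows how relative importance weights could feed an aggregated goal-achievement summary:

```python
# Toy illustration: prioritize research goals with relative importance
# weights and aggregate weighted goal achievement (all numbers invented).
weights = {
    "Generate and evaluate future business opportunities": 0.25,
    "Create and protect intellectual property": 0.20,
    "Drive technology innovation and technology leadership": 0.15,
}
achievement = {  # degree of goal achievement from the measurement element
    "Generate and evaluate future business opportunities": 0.8,
    "Create and protect intellectual property": 1.1,
    "Drive technology innovation and technology leadership": 0.6,
}
assert sum(weights.values()) <= 1.0  # remaining weight: the other five goals

weighted_total = sum(w * achievement[g] for g, w in weights.items())
print(f"weighted achievement across these goals: {weighted_total:.2f}")  # 0.51
```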
7.3 Limitations
It is important that the reader is aware that the data was collected at a certain point in time under certain circumstances, which may have changed in the meantime. As mentioned in Chap. 4, a mixed-method approach was chosen as most appropriate for our research. Our research questions were initially of a very exploratory nature. As they were coupled with an explanatory and descriptive starting point of "what", "how" and "why", a combination of case studies and a survey was chosen to complement the findings from the literature review.1 That is, extensive case study research was needed as a preparatory phase before starting the survey. Accordingly, the exploratory character had implications for the analysis of the survey: the collected quantitative data was used for descriptive purposes, as well as for validating and verifying the findings from the case studies.

With regard to the above-mentioned research methods, the reader should be aware of several limitations. The qualitative research was based on eight big ICT companies that may not be representative of the entire population. Furthermore, the research depended on the cooperation of the interview partners; their accurate and complete answers, along with the material they provided, played a crucial role in the work. If case studies are based only on interviews and are not enriched with company documents and reports, key informant bias can influence the research. A major advantage of key informant interviews is the first-hand knowledge about the topic of interest; the disadvantages are that they are susceptible to interviewer biases and that it is difficult to prove the validity of the findings. The following measures were therefore taken in conducting our case studies:
1. At least three different people representing different perspectives on the studied phenomenon were interviewed
2. The interviews combined personal face-to-face interviews as well as phone and video conferences
3. The interviews were split into two rounds and spread over the course of 1½ years, covering one complete performance management cycle, resulting in 9–12 interviews per company (equivalent to 18–20 h of audio data per company)
4. The interviews were enriched by the documents provided by the companies and by independently available reports (e.g. data about the patent rank between companies)

Even though these precautionary measures were taken to avoid key informant bias, it cannot be completely precluded. Moreover, in some cases problems occurred due to the confidentiality of data and aspects of non-disclosure, making it difficult to get all the data required. It should also be noted that the participants had varied backgrounds and therefore answered the questions from their own "world view" and interpretation.

The methodological limitations associated with the quantitative survey mainly result from our sample (n = 106, 21.73% response rate), which may not be representative of the entire population of 497 identified companies. We should also note that the case studies have a strong influence on the quantitative survey. Nor can it be guaranteed that the translation of the survey (which was administered in English and Russian) was perfect, as some questions may have posed conceptual problems and other difficulties for respondents (Carrasco 2003; Kasper 2006).
1 Yin (2006), Kasper (2006).
As is apparent from Chap. 5, the results and conclusions of the survey are based solely on descriptive analysis. Further limitations result from the fact that our investigation is the first concerned with the design of a performance management system for an industrial research department, and that only a very limited number of related studies has been published. Moreover, the existing studies are difficult to compare, since different industries were studied. Given the exploratory character of the study, the reader has to be aware that the recommendations presented in this work are indicative: the study describes some basic relationships that are fundamental for the establishment of a performance management concept. Moreover, the reader is asked to consider that the methodological limitations may limit the validity of the findings and restrict the population to which the findings can be applied. Nevertheless, this thesis has made substantial scientific contributions to the research areas 'innovation management', 'R&D management' and 'management of industrial research organizations', as outlined above.
7.4 Future Research
The results of this dissertation are a platform for future research. The first area to be tackled should be one that we saw as important and interesting but just beyond the scope of this work: the maturity of a research organization. Every organization operates in a different environment under different circumstances and therefore requires suggestions and recommendations that pay attention to its situation. Understanding the notion of maturity would help a great deal in understanding this "contingency" imperative. In Chap. 6 we recommended different approaches that consider different starting situations for a research department: starting from scratch, undergoing reorganization, or operating completely without performance management. In addition to the aforementioned need to understand maturity, further differences such as volume, regions, locations and distribution need to be better understood, since these factors play an important role when managing performance. For example, if an organization is young and has yet to conduct a full cycle of a research project, this situation is very likely to be reflected in its KPIs, because the collection of some of them is simply not feasible. Other maturity aspects are, for example, a well-aligned portfolio, long-established processes and entrenched structures that provide more organizational efficiency. These aspects are reflected in the performance management. But "Performance that is effective today is equally likely to be ineffective tomorrow as preferences and constraints change. The goal of the effective organization is, continually, to become effective rather than be effective. The journey is, in this case, more important than the destination".2
2 Zammuto (1982), p. 161.
Therefore, the question is how to bring a well-established organization to the next level. The initial thought that accompanied our research in this direction was to enhance the PMgS with a maturity model. There are many examples of maturity models in the literature and in practice. The most well-known is the Capability Maturity Model (CMM) developed at Carnegie Mellon University; another example is the Capability Maturity Framework (CMF) to manage IT, published by Curley in 2005. The CMM was originally developed as a tool for objectively assessing the ability of government contractors' processes to perform a contracted software project; it is based on the process maturity framework first described in the 1989 book "Managing the Software Process" by Watts Humphrey. In order to give more valuable recommendations to research departments, we see the necessity to differentiate between the different states they are in. Our idea is therefore to apply the PMgS and enhance it with a CMM or CMF. Considering our PMgS, the assessment of maturity is reasonable especially for levels 2 (goals), 3 (performance clusters) and 4 (KPI classes). It would be very interesting to see how the maturity of level 2 (organizational research goals) could be assessed; it is conceivable to connect the goals' decomposition analysis with the maturity model. Level 3 (performance clusters) is the level at which it is most challenging to assess maturity; however, it is also the most interesting one. This level has many aspects that must be considered, the most important of which is, from our perspective, the performance cluster processes. The maturity of processes could be assessed, for example, via a Business Process Maturity Model (BPMM).3 Starting with processes, a connection can be made to level 4 (KPI classes), and the maturity of the examined entity assessed in terms of both quality and quantity. It is easier to collect quantitative KPIs, and much more difficult to assess the qualitative value of the KPIs: it is easy to assess IP Creation with a simple KPI such as "we have to produce three patents", but it is very difficult to estimate the real value in terms of changing the business and its effect on the financial side. Our final suggestion is that further investigation is needed to establish the extent to which the PMgS and its components (performance clusters and KPI classes) can be applied to other business functions, such as Development, within the same organization. The main focus should be on how the PMgSs could be connected through complementary department goals, performance clusters, KPI classes and KPIs on both sides. For example, if there is a research goal requesting technology transfer, how are other units stimulated to take advantage of this goal? Another aspect for consideration is how applicable our model is to independent research organizations, such as the Fraunhofer-Gesellschaft or the Max-Planck-Gesellschaft, or even to academic research. Although some generalization seems possible, the components may have to be adjusted as outlined in
3 Fisher (2004).
the conclusions of Sect. 6.3. Finally, the applicability to industrial sectors other than ICT is worth investigating.
7.5 Concluding Remarks
In this final chapter we have reflected upon our research project and, as stated at the beginning, indicated what we have contributed to current insights. Generally, having found a blank page to start with in regard to performance management of industrial research organizations, our contributions are not only enhancements of established concepts but rather responses to calls often raised in the literature. We have presented one way to systematically organize performance management for industrial research departments. It is sincerely hoped that our findings will be well received by the academic community as well as by practitioners, and will serve as a basis for further theoretical enhancements and applied practical developments to improve the efficiency and effectiveness of managing industrial research organizations.

In conclusion, the novel research insights introduced in this work lay the foundation for the establishment and further development of performance management systems for industrial research departments. The main strength of the presented PMgS is its simplicity. It is also strong in its capability to include external input: partners and customers can select KPIs, which allows the inclusion of external assessment. The results of this research have made an initial contribution to the literature on innovation management, R&D management and the management of industrial research. Moreover, as discussed in Chap. 6, it is expected that the research results can be extended to adjacent departments or transferred to other corporate functions with corresponding contextual adjustments, and thus enable systematically effective performance management.
Appendix A Content and Media Sector as Part of the ICT
Definition of the content and media sector (likely to become part of the ICT definition in the future): content and media industries are engaged in the production, publishing and/or the electronic distribution of content products. The following general principle (definition) is used for the identification of content or media products: "Content corresponds to an organized message intended for human beings published in mass communication media and related media activities. The value of such a product to the consumer does not lie in its tangible qualities but in its information, educational, cultural or entertainment content". The list of industries (ISIC Rev. 4) that meet this condition is provided in the table below.

Table A.1 Content and media part of the ICT sector according to ISIC Rev. 4 (codes and denotations)
581 Publishing of books, periodicals and other publishing activities
5811 Book publishing
5812 Publishing of directories and mailing lists
5813 Publishing of newspapers, journals and periodicals
5819 Other publishing activities
591 Motion picture, video and television programme activities
5911 Motion picture, video and television programme production activities
5912 Motion picture, video and television programme post-production activities
5913 Motion picture, video and television programme distribution activities
5919 Motion picture projection activities
592 Sound recording and music publishing activities
60 Programming and broadcasting activities
601 Radio broadcasting
602 Television programming and broadcasting activities
639 Other information service activities
6391 News agency activities
6399 Other information service activities n.e.c.
Source: OECD (2007)
Appendix B Key Performance Indicators
Table B.1 Performance measurement database, Parmenter
Name of measure | Frequency of measure | BSC perspective | BSC teams | Applicable sectors | Strategic objective
Number of stock outs | Weekly | Customer satisfaction | Production | All private sector | Efficient operations
Brand image index (%) based on market research | Monthly | Customer satisfaction | Sales and Marketing | All private sector | Retention of customers/minimize negative comment in the marketplace
Cost of quality correction (rework, rejects, warranties, returns and allowances, inspection labor and equipment, complaint processing costs) | Weekly | Customer satisfaction | Quality assurance (QA) team | All private sector | Efficient operations
Dollar revenue gained from top customers in the week | Weekly | Customer satisfaction | Sales | All private sector | Increase profitability
Percentage of customers with key attributes (ones that generate most profit) | Quarterly | Customer satisfaction | Sales and Marketing | All private sector | Efficient operations
Percentage of successful/unsuccessful tenders | Monthly | Customer satisfaction | Sales | All private sector | Increase profitability
Actual delivery date versus promised date | Weekly | Customer satisfaction | Production | All private sector | Efficient operations
Average customer size by category (category A being the top 20% of customers) | Quarterly | Customer satisfaction | Sales and Marketing | All private sector | Increase profitability
Average time from customer enquiry to sales team response | Weekly | Customer satisfaction | Sales and Marketing | All private sector | Efficient operations
Average time to resolve complaints, to get credits for product quality problems, etc. | Weekly | Customer satisfaction | Sales and Marketing | All private sector | Increase profitability
Customer acquisition (rate business unit attracts or wins new customers or business) Customer loyalty index (percentage of customer retention within customer categories) Number of customer service initial inquiries to follow-up Customers lost (number of percentage) Number of defect goods on installation (dead on arrival, including those that occur within the first 90 days of operation) Sales of goods and services taken up by key customers – top 10% to 20% of customers Direct communications to key customers in month (average number of contacts made with the key customers) Market share (proportion of business in a given market) Number of customer referrals Number of incidents where senior management needed to instigate the remedial action Number of proactive visits to top 10% of customers Customer satisfaction
Customer satisfaction
Customer satisfaction Customer satisfaction Customer satisfaction
Customer satisfaction
Customer satisfaction
Customer satisfaction Customer satisfaction Customer satisfaction
Customer satisfaction
Monthly
Quarterly
Weekly
Weekly/Monthly
Weekly
Monthly
Monthly
Quarterly
Monthly Monthly
Monthly
Sales and Marketing All private sector
Sales and Marketing All private sector Sales and Marketing All private sector
Sales and Marketing All private sector
Sales and Marketing All private sector
Sales and Marketing All private sector
Sales and Marketing, All private sector QA Team
Sales and Marketing All private sector
Sales and Marketing All private sector
Sales and Marketing All private sector
Sales and Marketing All private sector
(continued)
Increase profitability
Increase profitability Efficient operations
Increase profitability
Increase profitability
Increase profitability
More reliable products
Increase profitability
Increasing sales
Long-term relationship with profitable customers
Increase profitability
Appendix B Key Performance Indicators 261
Customer satisfaction
Customer satisfaction Customer satisfaction Customer satisfaction
When audits performed
Quarterly
Monthly
Periodically
Quality problems detected during product audits in the field
Customer satisfaction of top 10% of customers Sales closed as a percentage of total sales proposals Service expense per customer category
Sales and Marketing All private sector
Sales and Marketing All private sector
Sales and Marketing All private sector
Sales and Marketing, All private sector QA team
All private sector
Sales
Customer satisfaction
Sales and Marketing All private sector
All private sector
Customer satisfaction
Customer satisfaction
Monthly
Accounting
All private sector
Order frequency (number of orders Weekly coming in per day/week) Orders canceled by reason (up to Weekly five categories)
Customer satisfaction
Weekly
Project teams
All private sector
Applicable sectors
Sales and back office All private sector teams entering orders Sales and Marketing All private sector
Customer satisfaction
Monthly
Applicable BSC teams Sales
Customer satisfaction
Customer satisfaction
BSC perspective
Quarterly
Frequency of measure
Weekly
Order entry error rate
Number of client relationships producing significant net profit (over $X million) Number of contacts with customer during project and post-project wrap-up (major projects only) Number of credits/returns from key customers Number of visits made to core customers in a week
Table B.1 (continued) Name of measure
Efficient operations
Increase profitability
Long-term relationship with profitable customers Long-term relationship with profitable customers Increase profitability
Efficient operations
Long-term relationship with profitable customers Efficient operations
Increase profitability
Increase profitability
Increase profitability
Strategic objective
262 Appendix B Key Performance Indicators
Daily
Weekly
Three to four times a year and in some cases continuously Two times a year
Complaints not resolved on first call
Credit request processing time
Key customer satisfaction
In-house customer satisfaction percentage
Daily
Customer satisfaction
Sales and Marketing, All sectors service delivery teams All teams All sectors
Customer satisfaction
All sectors
Accounting
Sales and Marketing All sectors
Customer satisfaction
Customer satisfaction
Customer satisfaction
All sectors
All sectors
Sales and Marketing All sectors
IT communications
Daily and in some cases Customer satisfaction 24/7
Daily and in some cases Customer satisfaction 24/7
Complaints not resolved in two hours
Calls answered first time (not having to be transferred to another party) Calls on hold longer than xx seconds
All sectors
Sales and Marketing All private sector
Information technology (IT) communications IT help desk, call centers
Customer satisfaction
Customer satisfaction
Time elapsed since repeat business Weekly with category A customers (top 20% or top 10% customers) Abandon rate – caller gives up Weekly
Improve employee satisfaction and productivity (continued)
Minimizing negative comment in the marketplace Retention of customers/ minimizing negative comment in the marketplace Retention of customers/ minimizing negative comment in the marketplace Retention of customers/ minimizing negative comment in the marketplace Retention of customers/ increase sales
Minimize negative comment in the marketplace Efficient operations
Increase profitability
Appendix B Key Performance Indicators 263
Service requests outstanding (faults, works requests) at month end
Orders shipped, which are complete and on time (delivery in full on time) The mean time between QA failures Actual client projects on time (percent of total) and cost versus budget (percent of budget) Time people waited in line
Late projects by manager (a list for internal projects and a list for client projects) Number of validations to contract by type Number of Quality Service Guarantees issued (refund for poor service) Number of outstanding retention installments (monitoring closeout)
Table B.1 (continued) Name of measure
Customer satisfaction
Customer satisfaction
Daily
Quarterly
Service teams
Sales
Project teams
Customer satisfaction
Monthly
Service
Service
Service
Manufacturing
Production
All sectors
Customer satisfaction
Customer satisfaction
Monthly
QA team
Daily
Customer satisfaction
Weekly/monthly
Sales and Marketing All sectors
All sectors
Applicable sectors
Daily and in some cases Customer satisfaction 24/7
Customer satisfaction
Monthly
Applicable BSC teams Project teams
Sales and All sectors where Accounting team customers retain a sum of money until satisfactory completion Production control, All sectors who dispatch, etc. dispatch goods
Customer satisfaction
BSC perspective
Weekly
Frequency of measure
Retention of customers/ minimizing negative comment in the marketplace Retention of customers/ minimizing negative comment in the marketplace Retention of customers/ minimizing negative comment in the marketplace
Efficient operations
Efficient operations
Increase profitability
Increase profitability
Efficient operations
Efficient operations
Strategic objective
264 Appendix B Key Performance Indicators
All sectors
All sectors
All teams
HR
HR
HR
Employee satisfaction
Weekly
Number of employees who have received recognition in last week, two weeks, month
Every employee survey Employee satisfaction Empowerment index, number of (three to four times a staff and managers who say year) they are empowered (from staff survey) Length of service of staff who have Monthly Employee satisfaction left
All sectors
All sectors
All sectors
Every employee survey Employee satisfaction (three to four times a year)
HR
Employee satisfaction per survey
Employee satisfaction
All sectors
All sectors
All sectors
Monthly
HR, all teams
Human Resources (HR) HR
Employee complaint resolution timelines and effectiveness
Employee satisfaction
Monthly
Employee satisfaction
Employee (Other)
Quarterly
Sales and Marketing Service
Monthly
Customer satisfaction
Monthly
Analysis of absenteeism
Surrender ratio of equipment or service (where service of equipment is on a Monthly contract) Number of applicants for employment at the company Percentage of staff working flexible hours
Happy employees, make happy customers, which make happy shareholders Happy employees, make happy customers, which make happy shareholders Happy employees, make happy customers, which make happy shareholders Happy employees, make happy customers, which make happy shareholders Happy employees, make happy customers, which make happy shareholders Happy employees, make happy customers, which make happy shareholders Happy employees, make happy customers, which make happy shareholders (continued)
Desirable place to work
Long-term relationship with profitable customers
Appendix B Key Performance Indicators 265
HR
Periodic survey of local Environment Community/environmental community’s satisfaction index from external perception of survey company
Employee satisfaction
HR
Satisfaction with a balanced Every employee survey Employee satisfaction working and nonworking life (three to four times a (from staff survey) year) Attendance numbers for social club Quarterly Employee satisfaction functions
Monthly
All sectors
HR
Every employee survey Employee satisfaction (three to four times a year)
Recruitment rating (survey on all new employees)
Staff turnover by type (resignations, end of contract, temporary staff, terminations)
All sectors
HR
Employee satisfaction
Quarterly
Public Relations
HR, all teams
All sectors
All sectors
All sectors
All sectors
All sectors
Number of potential recruits that come from employee referrals
HR
All sectors
Applicable sectors
Employee satisfaction
Applicable BSC teams HR
Number of recognition events and Weekly awards to staff planned for next four weeks, next eight weeks
BSC perspective Employee satisfaction
Frequency of measure
Number of days working overseas Quarterly on jobs
Table B.1 (continued) Name of measure
Happy employees, make happy customers, which make happy shareholders Happy employees, make happy customers, which make happy shareholders Working well with the community and environment
Happy employees, make happy customers, which make happy shareholders Happy employees, make happy customers, which make happy shareholders Happy employees, make happy customers, which make happy shareholders Happy employees, make happy customers, which make happy shareholders Efficient operations
Strategic objective
266 Appendix B Key Performance Indicators
Environment
Weekly
Percentage of local residents in total workforce Entries to environment/community awards to be completed in next three months Number of environmental complaints received in a week Number of external charity volunteers trained by company staff
Quarterly
Weekly
Monthly
Quarterly
Dollars donated to the community Quarterly
Production
Production
Environment/ Community Environment
Environment/ Community Environment/ Community Environment/ Community
Operations
PR
Operations
Public Relations (PR) PR
HR
Environment
Weekly
Production
Environment/ Community
Environment
Weekly
Production
Production
Environment
Weekly
Operations Production
Environment
Environment Environment
Monthly Weekly
Water consumption and/or Weekly discharge per production unit (or by per employee, or per sales dollar) Number of employees involved in Quarterly community activities
Volunteer retention/recruitment Emissions from production into the environment (number) Energy consumed per unit, BTU/ sales Percentage of recycled material used as raw material input Percentage of waste generated/ recycled Waste and scrap produced
Charity
All sectors
All sectors
All sectors
All sectors
All sectors
Manufacturing
Manufacturing
Manufacturing
Manufacturing
Manufacturing
Charity Manufacturing
Positive public perception Working well with the community and environment (continued)
Working well with the community and environment Positive public perception Positive public perception Positive public perception
Efficient operations Reducing environmental impact Reducing environmental impact Reducing environmental impact Reducing environmental impact Reducing environmental impact Reducing environmental impact
Appendix B Key Performance Indicators 267
Frequency of measure
Quarterly
Number of sponsorship projects undertaken by company Number of students recruited for holiday work Percentage of current projects that are environmentally friendly Percentage complete to percentage billed by job Percentage of category A customers covered by partnership projects Percentage of customers paying cash up front on commencement of project Percentage of profitability per major project Percentage of sales that have arisen from cross-selling among business units Percentage of successful tenders Financial
Financial
Financial Financial
Financial
Monthly
Quarterly
Monthly
Monthly
All private sector
Construction
All sectors
All sectors
All sectors
All private sector
All private sector
Sales and Marketing All private sector
Sales and Marketing All private sector
Accounting
Accounting
Increase profitability
Efficient operations
Increase profitability
Increase profitability
Increase profitability
Positive public perception Positive public perception Positive public perception Positive public perception Reducing environmental impact Increase profitability
Positive public perception
All sectors
All sectors
Strategic objective
Applicable sectors
Sales and Marketing All private sector
Operations
PR
PR
PR
PR
PR
Environment/ Community Environment/ Community Environment/ Community Environment/ Community Environment/ Community Financial
Environment/ Community
Applicable BSC teams PR
BSC perspective
Quarterly
Monthly
Monthly
Quarterly
Monthly
Number of photos in paper
Number of firms employees Quarterly involved in up-skilling local community organizations Number of media coverage events Monthly
Table B.1 (continued) Name of measure
268 Appendix B Key Performance Indicators
Days in inventory Days sales in receivables Dealer profitability Dealer satisfaction survey Economic value added per employee ($) Gross margin by business Indirect expenses as a percentage of sales Marketing expense per customer ($) Net income by business New business – by occurrence type (e.g., referrals, promotional drive, prospecting, website, etc.)
Percentage of top ten customers’ business Accounts receivable turnover Average number of days spent as stock in hand Bad debt percentage to turnover Cash flow ($) Cash-to-cash cycle – length of time from cash out to cash in Contribution to revenue, or contribution margin (%) Customer and product-line profitability Days in accounts payable
Financial Financial Financial
Monthly Monthly
Financial
Quarterly
Quarterly
Financial
Quarterly
Financial Financial
Financial
Monthly
Monthly Monthly
Financial Financial Financial
Monthly Monthly Monthly
Financial Financial Financial Financial Financial
Financial Financial
Monthly Monthly
Monthly Monthly Monthly Monthly Monthly
Financial
Quarterly
All private sector
All private sector All private sector All private sector
All private sector All private sector
All private sector
All private sector All private sector
All private sector All private sector All private sector All private sector All private sector
All private sector
Accounting All private sector Sales and Marketing All private sector
Marketing
Accounting Accounting
Accounting Accounting Sales and Marketing Sales and Marketing Accounting
Accounting
Sales and Marketing All private sector
Accounting
Accounting Accounting Accounting
Accounting Stock control
Sales and Marketing All private sector
(continued)
Increase profitability Increase profitability
Efficient operations
Increase profitability Efficient operations
Maintaining supplier relationships Increase profitability Efficient operations Increase profitability Efficient operations Increase profitability
Efficient operations
Increase profitability
Increase profitability Increase profitability Increase profitability
Efficient operations Increase profitability
Increase profitability
Appendix B Key Performance Indicators 269
Number of profitable customers Number of projects with all progress payments paid Number of winning tenders that have created losses Percentage revenues from new products or service Profits from new products or business operations ($) Profit/employee ($) Return on capital employed Return on net asset value Return on equity Revenues/employee (%) Revenues/total assets (%) Sales by manager Sales growth rate by market segment Credit rating Debt-to-equity ratio Investment in development of new markets ($) IT expense as a percentage of total administrative expense People/headquarters costs Percentage unprofitable customers Progress on major IS CAPEX projects
Table B.1 (continued) Name of measure Financial Financial Financial Financial Financial Financial Financial Financial Financial Financial Financial Financial Financial Financial Financial Financial Financial Financial Financial Financial
Monthly
Quarterly
Monthly
Monthly Monthly Monthly Monthly Monthly Monthly Monthly Quarterly
Monthly Monthly Quarterly
Quarterly
Monthly Monthly Monthly
BSC perspective
Quarterly Quarterly
Frequency of measure
All private sector
IT, Accounting, or Finance Accounting Sales and Marketing IT, Accounting, or Finance
Accounting Accounting Sales
Accounting Accounting Accounting Accounting Accounting Accounting Sales and Marketing Sales and Marketing
All sectors All sectors All sectors
All sectors
All sectors All sectors All sectors
All private sector All private sector All private sector All private sector All private sector All private sector All private sector All private sector
Sales and Marketing All private sector
Sales and Marketing All private sector
Operations
Applicable Applicable sectors BSC teams Sales and Marketing All private sector Accounting All private sector
Efficient operations Increase profitability Efficient operations
Efficient operations
Increase profitability Increase profitability Innovation
Increase profitability Efficient operations Efficient operations Increase profitability Increase profitability Increase profitability Increase profitability Increase profitability
Increase profitability
Increase sales
Increase profitability
Increase profitability Efficient operations
Strategic objective
270 Appendix B Key Performance Indicators
Research and development (R&D) Sales
Production
Financial Financial Financial Financial Financial Internal process Internal process Internal process Internal process
Internal process
Internal process
Monthly Monthly
Quarterly
Weekly
Weekly
Quarterly
Monthly
Weekly
Quarterly
Weekly Changes to orders after initial placement – controllable and uncontrollable Excess inventory – anything above Monthly normal requirements
Service
Service
Insurance
Banking Banking
All sectors All sectors Banking
All sectors
Sales
Operations
All private sector
(continued)
Increase profitability
Efficient operations
Innovation
All private sector
All private sector
Efficient operations
Efficient operations
Increase profitability
Increase profitability
Increase profitability
Increase profitability
Increase profitability Increase profitability
Increase profitability Efficient operations Increase profitability
Efficient operations
All private sector
All private sector
Sales and Marketing All private sector
Accounting
All service teams
Accounting
Operations Operations
Accounting Operations Operations
Financial Financial Financial
Monthly Monthly Quarterly
IT
Financial
Monthly
Teams expenditure profile for year to date (tracks actual and expected against planned expenditure profile for year) Total assets/employee ($) Value of work in progress ($) Average cost of maintaining a customer account ($) Value of mortgage offers ($) Value of personal loan advances ($) Administrative expense as a percentage of gross premium Percentage chargeable work/ nonrecoverable Budgeted time against actual time on weekly basis Percentage of brand dominance in market Percentage of invoices processed within the week Accuracy and completeness of specifications for orders Average age of company patents (number)
Appendix B Key Performance Indicators 271
Inventory items above/below target limits Manufacturing cycle effectiveness ¼ processing/ throughput time Manufacturing process quality measures re-work (how many items make it through the process without being reworked at any stage) % Merchandise availability measure of inventory turns on selected key items Number of improvements made to existing products Number of profitable new products (over $XX and greater than X% gross margin) Outage hours per month Patents filed and issued that have been incorporated into products Percentage of products for which the first design of a device fully met the customer’s functional specification Potential revenue in sales pipeline Pricing accuracy Product-development cycle time of major new projects
Table B.1 (continued) Name of measure BSC perspective Internal process Internal process
Internal process
Internal process
Internal process Internal process
Internal process Internal process Internal process
Internal process Internal process Internal process
Frequency of measure
Monthly
Weekly
Weekly
Monthly
Monthly
Quarterly
Monthly Monthly
Monthly
Weekly Weekly Quarterly
All private sector
All private sector All private sector
All private sector
All private sector
All private sector
All private sector
All private sector
All private sector
Applicable sectors
Sales and Marketing All private sector Sales All private sector R&D All private sector
Production
Production R&D
R&D
R&D
Operations
Operations
Operations
Applicable BSC teams Production
Increase profitability Efficient operations Increase profitability
Efficient operations
Efficient operations Efficient operations
Increase profitability
Innovation
Increase profitability
Increase profitability
Increase profitability
Efficient operations
Strategic objective
272 Appendix B Key Performance Indicators
Production amount that passes to next stage in production Production cycle time (time in each stage) Quality problems due to equipment failure Queue production time ratio Ratio of new products (less than X years old) to full company catalog (%) R&D percentage of sales from propriety products Research and Development time to develop next generation of products Sales to selling costs ratio Timeliness and accuracy of price quotations and requests for samples Dollars saved by employee suggestions Percentage completed timesheets by deadline Percentage of hours spent on R&D Percentage of payments (nonpayroll) right amount paid made on time Percentage of payments (payroll) right amount paid made on time
Internal process Internal process Internal process Internal process Internal process
Internal process Internal process
Internal process Internal process
Internal process Internal process Internal process Internal process
Internal process
Weekly
Monthly
Monthly
Monthly Quarterly
Quarterly
Quarterly
Monthly Weekly
Quarterly
Monthly
Quarterly Monthly
Monthly
All private sector
All private sector
All private sector All private sector
All private sector
All private sector
All private sector
Payroll
R&D Accounting
All teams
Accounting
All sectors
All sectors All sectors
All sectors
All sectors
Sales and Marketing All private sector Sales All private sector
R&D
R&D
Production Operations
Production
Operations
Operations
(continued)
Efficient operations
Innovation Efficient operations
Efficient operations
Efficient operations
Increase profitability Efficient operations
Increase profitability
Increase profitability
Efficient operations Innovation
Efficient operations
Increase profitability
Increase profitability
Appendix B Key Performance Indicators 273
Percentage of payments made by direct credit Percentage of requests for help fixed by Help Desk during the first phone call Percentage of sales invoices issued on time Percentage of time program developers have spent on programming (tracking nonproductive time) Percentage spent of this year’s technology capital expenditure Accidents per 100,000 hours worked Accounting system downtime (8 A.M. to 6 P.M.) Adherence to schedule – tasks being performed on time Asset utilization rates of major machines Availability of Human Resources system Average mainframe response time Back-to-work programs for staff who have been absent for more than three weeks Backup every night this month
Table B.1 (continued) Name of measure Internal process Internal process
Internal process Internal process
Internal process Internal process Internal process Internal process Internal process Internal process Internal process Internal process
Internal process
Monthly
Monthly
Monthly
Monthly
Monthly
Monthly
Monthly
Monthly
Monthly
Monthly Monthly
Weekly
BSC perspective
Monthly
Frequency of measure
All sectors
All sectors
All sectors
Applicable sectors
IT
IT HR
HR
Operations
Production
Accounting
HR
IT
All sectors
All sectors All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All teams, especially All sectors IT
Sales
IT
Applicable BSC teams Accounting
Efficient operations
Efficient operations Efficient operations
Efficient operations
Efficient operations
Efficient operations
Efficient operations
Efficient operations
Efficient operations
Efficient operations
Efficient operations
Efficient operations
Efficient operations
Strategic objective
274 Appendix B Key Performance Indicators
Date of last backup tested at remote site Billing accuracy Business development expense/ administrative expense Completion of projects on time and budget (% or $ of total projects) Current users of xxx system Employees on self-managing teams Faults or service requests closed in month Initiatives underway based on satisfaction survey Investment in research ($) IT capacity Key work carried out by contractors Last update of intranet page Lost time injury frequency (graph) Managers accessing the general ledger (time) Manual transaction to automated electronic transaction ratio Median patent age in key products Monthly finance report to the report to CEO Monthly report to budget holders Number of accounts payable invoices paid late
Internal process Internal process Internal process Internal process Internal process Internal process Internal process Internal process Internal process Internal process Internal process Internal process Internal process Internal process Internal process Internal process Internal process Internal process Internal process
Quarterly
Monthly Monthly
Monthly
Monthly Quarterly
Monthly
Monthly
Quarterly Monthly Monthly
Monthly Weekly Monthly
Monthly
Quarterly Monthly
Monthly Monthly
R&D Accounting or Finance, HR, IT Accounting Accounting
All teams
All teams HR Accounting
R&D IT HR
All teams
IT
IT HR
All teams
Accounting Accounting
IT
All sectors All sectors
All sectors All sectors
All sectors
All sectors All sectors All sectors
All sectors All sectors All sectors
All sectors
All sectors
All sectors All sectors
All sectors
All sectors All sectors
All sectors
(continued)
Efficient operations Efficient operations
Innovation Efficient operations
Efficient operations
Efficient operations Efficient operations Efficient operations
Innovation Efficient operations Efficient operations
Efficient operations
Efficient operations
Skilled and experienced workforce Efficient operations Efficient operations
Efficient operations Efficient operations
Efficient operations
Appendix B Key Performance Indicators 275
Number of customer calls in test week (e.g., third week of month) Number of innovations introduced in last 3, 6, 9, 12 months Number of managers accessing the general ledger Number of strategic supply relationships Number of systems that have been integrated with other company systems Number of progress payments due that have not yet been invoiced Number/percentage of projects completed on time/budget Number of management team meetings last week (or number of management meetings planned for next five days) Number of IT contractors as a percentage of IT employees Number of critical assets in a catastrophic state Number of employees Number of post-project reviews/ debriefs Number of staff trained in first aid
Table B.1 (continued) Name of measure Internal process
Internal process Internal process Internal process Internal process
Internal process Internal process Internal process
Internal process Internal process Internal process Internal process Internal process
Quarterly
Monthly
Monthly
Quarterly
Monthly
Monthly
Monthly
Quarterly
Monthly
Monthly Quarterly
Quarterly
BSC perspective
Monthly
Frequency of measure
HR
HR Operations
Operations
IT, HR
All teams
Operations
Operations
IT
Procurement
Accounting
Operations
Applicable BSC teams Accounting
All sectors
All sectors All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
Applicable sectors
Healthy and safe work environment
Efficient operations Efficient operations
Efficient operations
Efficient operations
Efficient operations
Efficient operations
Efficient operations
Efficient operations
Efficient operations
Efficient operations
Innovation
Efficient operations
Strategic objective
276 Appendix B Key Performance Indicators
Number of times schedule slipped in month Orders and reports shipped by express services Number of overdue reports/ documents Policy and procedures sections updated this month Percent of operational purchases from certified vendors Percent of positive feedback from employees after attending meetings (every meeting rated via intranet) Product changes to correct design deficiencies R&D as a percentage of sales R&D resources/total resources Resolution of queries in same day Response time to inquiries and special requests Safety measures – accidents, days lost by reason Service requests (faults, works requests) logged Slow-moving and obsolete inventory Staff who have attended the stress management course Staff with > 30 days leave owing
Internal process Internal process Internal process Internal process Internal process Internal process
Internal process Internal process Internal process Internal process Internal process Internal process Internal process Internal process Internal process Internal process
Monthly
Monthly
Weekly
Monthly
Monthly
Monthly
Quarterly
Quarterly Monthly Monthly Monthly
Monthly
Monthly
Quarterly
Monthly
Monthly
HR
HR
Production
IT
HR, all teams
Research IT All teams All teams
Design
All teams
Operations
All teams
All teams
All teams
All teams
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors All sectors All sectors All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
Efficient operations (continued)
Efficient operations
Increase profitability
Healthy and safe work environment Efficient operations
Innovation Efficient operations Efficient operations Efficient operations
Efficient operations
Maintain supplier relationships Efficient operations
Efficient operations
Efficient operations
Efficient operations
Efficient operations
Appendix B Key Performance Indicators 277
Stakeholder feedback (on activities, working style, and communication) Number of succession plans for key positions Suppliers on the accounts payable ledger Time spent on quality improvement activities Units/labor hour and labor dollar for direct, indirect, and total labor costs Visits to managers planned next week Conversion rate of home-buying certificates to mortgage offers Value of retail investment receipts (gross) dollars Funds raised Consent return rate for rework and resubmission (numbers and dollars) Measure cost of obtaining planning consents Measure number of projects that do not need special consents (client ¼ cost benefit) Timeliness of resource consents processing
Table B.1 (continued) Name of measure BSC perspective
Internal process Internal process Internal process Internal process
Internal process Internal process Internal process Internal process Internal process
Internal process Internal process
Internal process
Quarterly
Monthly
Monthly
Monthly
Monthly
Monthly
Monthly
Monthly Monthly
Quarterly
Quarterly
Quarterly
When survey performed Internal process
Frequency of measure
Planning
Planning
Planning
Operations Planning
Accounting
Branches
All teams
Production
Ail teams
Accounting
HR
Applicable BSC teams PR
Construction
Construction
Construction
Charity Construction
Banking
Banking
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
Applicable sectors
Efficient operations
Efficient operations
Efficient operations
Efficient operations Efficient operations
Increase profitability
Efficient operations
Efficient operations
Efficient operations
Efficient operations
Efficient operations
Efficient operations
Efficient operations
Strategic objective
278 Appendix B Key Performance Indicators
Claims frequency Claims severity Insurance premiums received from new product launches Accomplishment of quality improvement implementation milestones Design cycle time Downtime due to different types of equipment failure Engineering changes to off-theshelf system by reason Engineering changes after design completion Improvement in productivity (%) Number of improvements to products in month Inventory system accuracy rates Inventory turnover (number) Late items as a percentage of average daily production Process part-per-million defect rates Processes made foolproof Processes under statistical control with sufficient capability
Turnaround time of resource consent applications (days elapsed) Emergency response time Internal process Internal process Internal process Internal process
Internal process Internal process Internal process Internal process Internal process Internal process Internal process Internal process Internal process Internal process Internal process Internal process
Monthly
Monthly Weekly
Monthly
Monthly
Monthly Monthly
Monthly Monthly Weekly
Monthly
Quarterly Quarterly
Internal process
Weekly
Monthly Monthly Quarterly
Internal process
Monthly
Construction
Production Production
Production
Production Production Production
Production R&D
R&D
R&D Production control, IT R&D
QA
Manufacturing Manufacturing
Manufacturing
Manufacturing Manufacturing Manufacturing
Manufacturing Manufacturing
Manufacturing
Manufacturing
Manufacturing Manufacturing
Manufacturing
Critical services sector Accounting Insurance Accounting Insurance Sales and Marketing Insurance
Operations
Planning
(continued)
Increase profitability Increase profitability
Efficient operations
Increase profitability Increase profitability Efficient operations
Increase profitability Increase profitability
Increase profitability
Increase profitability
Increase profitability Efficient operations
Efficient operations
Increase profitability Efficient operations Increase profitability
Efficient operations
Efficient operations
Appendix B Key Performance Indicators 279
Production schedule delays because of material shortages Production set-up/changeover time Quality problems attributable to design Reduction of parts count on products Service factor – percent of orders filled Technical support costs/unit sold (quality of product and clarity of instructions) Time lost due to schedule changes or deviations from schedule Total value of finished products/ total production costs Unplanned versus planned maintenance Waste – all forms: scrap, rejects, underutilized capacity, idle time, downtime, excess production, etc. Waste caused by maintenance tests Yield – net good product produced Service calls or complaints per unit sold Space productivity sales or production per square foot
Table B.1 (continued) Name of measure BSC perspective Internal process Internal process Internal process Internal process Internal process Internal process
Internal process Internal process Internal process Internal process
Internal process Internal process Internal process Internal process
Frequency of measure
Weekly
Monthly Monthly
Quarterly
Monthly
Weekly
Monthly
Weekly
Monthly
Weekly
Monthly Weekly Weekly
Monthly
Manufacturing Manufacturing Retail
Manufacturing
Manufacturing
Manufacturing
Manufacturing
Manufacturing
Manufacturing
Manufacturing
Manufacturing Manufacturing
Manufacturing
Applicable sectors
Sales and marketing Retail
Production Production QA
Production
Production
Operations
Production
QA
Production
R&D
Production R&D
Applicable BSC teams Production
Increase profitability
Increase profitability Increase profitability Increase profitability
Increase profitability
Efficient operations
Increase profitability
Efficient operations
Efficient operations
Increase profitability
Increase profitability
Increase profitability Increase profitability
Increase profitability
Strategic objective
280 Appendix B Key Performance Indicators
Number of leads generated by agents Reporting errors (e.g., time charged to closed/wrong jobs) Percentage of bids or proposals accepted Percentage of customer-facing employees having on-line access to information about customers (effective communication of accurate information to employee) Percentage of employees below age of X Percentage of employees who have interacted with customers Percentage of employees with tertiary education Percentage of managers with satisfactory IT literacy Percentage of new staff (less than three employees) who have had post-employment interview Percentage of performance reviews completed on time Percentage of rising stars with mentors Percentage of staff performance reviews completed
Internal process Internal process Internal process Learning and growth
Learning and growth Learning and growth Learning and growth Learning and growth Learning and growth
Learning and growth Learning and growth Learning and growth
Monthly
Monthly
Weekly
Monthly
Quarterly
Monthly
Quarterly
Monthly
Monthly
Monthly
Quarterly
Monthly
HR, all teams
HR
All teams
HR
IT
HR
HR
HR
IT
Sales
All teams
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
Service, manufacturing All sectors
Service
Sales and marketing Service
Skilled and experienced workforce Skilled and experienced workforce (continued)
Efficient operations
Skilled and experienced workforce Skilled and experienced workforce Skilled and experienced workforce Skilled and experienced workforce Skilled and experienced workforce
Efficient operations
Increase profitability
Efficient operations
Increase profitability
Appendix B Key Performance Indicators 281
Percentage of contractors to total staff Annual rolling average of days training by key team Average employee years of service with company Closing the skills matrix gap progress Competence development expense/payroll cost Employees certified for skilled job functions or positions Employees complying with their development plan Employees terminated for performance, other problems Employees that have improved skills during last six months Employees with delegated spending authority Increase in average grade level of reading and math skills of employees Investment in new product support and training ($) Leadership index (based on responses from a section in the employee survey)
Table B.1 (continued) Name of measure
HR HR
Sales HR
Learning and growth Learning and growth Learning and growth Learning and growth Learning and growth Learning and growth Learning and growth Learning and growth Learning and growth Learning and growth
Learning and growth Learning and growth
Monthly
Quarterly
After the performance review round Quarterly
Quarterly
Quarterly
Monthly
Six monthly
Quarterly
Quarterly
Quarterly
Every employee survey (three to four times a year)
HR, Training
HR
HR, training
HR
HR, training
HR
HR
All teams
Learning and growth
Quarterly
Applicable BSC teams HR
BSC perspective
Frequency of measure
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
Applicable sectors
Skilled and experienced workforce Skilled and experienced workforce
Skilled and experienced workforce Skilled and experienced workforce Skilled and experienced workforce Skilled and experienced workforce Skilled and experienced workforce Skilled and experienced workforce
Skilled and experienced workforce Efficient operations
Skilled and experienced workforce Skilled and experienced workforce Desirable place to work
Strategic objective
282 Appendix B Key Performance Indicators
Number of staff who have agreed to development plans Number of teams with a balanced scorecard (BSC) – rollout of a BSC system Number of training hours in both external/internal courses Number of full time temporary employees (contractors over three months)
Leadership initiatives targeted to rising stars Managers who have had performance management training Motivation index (based on responses from a section in the employee survey) Needs assessment gap – required versus actual skills for positions Number of cumulative work experience (years) in current management team Number of current users of X system Number of in-house training courses Number of initiatives implemented from the staff survey Number of internal promotions HR
HR, training
HR HR
Learning and growth
Learning and growth Learning and growth
Learning and growth Learning and growth Learning and growth Learning and growth
Every employee survey (three to four times a year) Quarterly
Quarterly
Quarterly
Quarterly
Weekly after the employee survey Monthly
Balanced scorecard team
Learning and growth
Learning and growth Learning and growth
Monthly
Quarterly
HR
HR
HR
Learning and growth
After the performance review round Monthly
HR
IT, HR
HR
HR
Learning and growth
Monthly
HR
Learning and growth
Monthly
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
(continued)
Skilled and experienced workforce Skilled and experienced workforce
Skilled and experienced workforce Skilled and experienced workforce Skilled and experienced workforce Skilled and experienced workforce Skilled and experienced workforce Efficient operations
Skilled and experienced workforce Skilled and experienced workforce
Efficient operations
Skilled and experienced workforce Skilled and experienced workforce
Appendix B Key Performance Indicators 283
Number of internal applications for job applications closed in month Number of level 1 and 2 managers who were promoted internally Number of mentoring meetings by each high performer (rising star) Number of new staff (less than three months) who attended an induction program Number of staff who have attended an induction within four weeks of starting Percentage of managers who are women Participation in team meetings (% of total team) Percentage of cross-trained personnel Number post project reviews undertaken to ascertain lessons learned Re-skilled employees percentage of workforce requiring reskilling Staff trained to use X system
Table B.1 (continued) Name of measure Learning and growth
Learning and growth Learning and growth
Learning and growth
Learning and growth
Learning and growth Learning and growth Learning and growth Learning and growth
Learning and growth
Learning and growth
Quarterly
Quarterly
Monthly
Monthly
Quarterly
Monthly
Quarterly
Monthly
From stuff survey
Quarterly
BSC perspective
Monthly
Frequency of measure
IT, HR
HR
Production control, training Accounting
All teams
HR
HR
HR
HR
HR
Applicable BSC teams HR
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
Applicable sectors
Skilled and experienced workforce
Efficient operations
Skilled and experienced workforce Skilled and experienced workforce Skilled and experienced workforce Skilled and experienced workforce
Skilled and experienced workforce
Skilled and experienced workforce
Skilled and experienced workforce Skilled and experienced workforce
Skilled and experienced workforce
Strategic objective
284 Appendix B Key Performance Indicators
HR HR
All teams
R&D
Learning and growth
Learning and growth Learning and growth Learning and growth Learning and growth
Learning and growth Learning and growth Learning and growth Learning and growth
Learning and growth
Learning and growth
Quarterly
Quarterly
Monthly
Quarterly
Monthly
Quarterly
Quarterly Quarterly
Training needs outstanding
Turnover of female staff Turnover of staff by ethnicity
Number of staff who are aware of Monthly new initiative Monthly Number of teams who have undertaken internal user satisfaction surveys Quarterly Percentage of staff meeting continuing professional development requirements Number of research papers Quarterly generated Source: Parmenter, 2007, pp. 203–231
Learning and growth
Learning and growth
Quarterly
HR HR
HR
HR
All teams, HR
HR
HR
HR
HR
HR
Learning and growth
Quarterly
HR
Staff who have verbal feedback about performance every month Number of succession plans for key positions Number of suggested improvements from employees by department Suggestions made to suggestions implemented ratio Time in training [days/year] (number) Total hours employees spend in mentoring Training days this month
Learning and growth
Monthly
Staff training attendance
Tertiary
Professional service firms
All sectors
All sectors
All sectors All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
All sectors
Skilled and experienced workforce
Skilled and experienced workforce
Skilled and experienced workforce
Skilled and experienced workforce Skilled and experienced workforce Skilled and experienced workforce Skilled and experienced workforce Desirable place to work Skilled and experienced workforce Efficient operations
Innovation
Skilled and experienced workforce Skilled and experienced workforce Skilled and experienced workforce Skilled and experienced workforce
Appendix B Key Performance Indicators 285
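For readers who wish to work with such a measurement database electronically, the record structure implied by the table's column headers can be captured directly. The following minimal Python sketch is illustrative only; the attribute values in the example record are assumptions chosen for demonstration, not associations taken from Parmenter:

from dataclasses import dataclass
from typing import List

@dataclass
class Measure:
    """One row of a Parmenter-style performance measurement database.

    Field names mirror the column headers of Table B.1.
    """
    name: str
    frequency: str               # e.g. "Daily", "Weekly", "Monthly", "Quarterly"
    bsc_perspective: str         # e.g. "Customer satisfaction", "Financial"
    applicable_teams: List[str]  # e.g. ["Sales and Marketing"]
    applicable_sectors: str      # e.g. "All private sector"
    strategic_objective: str     # e.g. "Increase profitability"

# Illustrative entry (attribute values are assumptions for demonstration):
example = Measure(
    name="Number of customer referrals",
    frequency="Monthly",
    bsc_perspective="Customer satisfaction",
    applicable_teams=["Sales and Marketing"],
    applicable_sectors="All private sector",
    strategic_objective="Increasing sales",
)

# A database is then simply a list of such records, which can be
# filtered by perspective when assembling a balanced scorecard:
database = [example]
customer_measures = [m for m in database
                     if m.bsc_perspective == "Customer satisfaction"]
print(customer_measures[0].name)

Storing the measures this way makes it straightforward to filter the pool by perspective, team, sector, or strategic objective when assembling a scorecard for a specific unit.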
Table B.2 Pool of available metrics, Geisler

Input/investment in S&T
• Expenditures for each stage of R&D/I.
• Expenditures per time frame, for one time period or over several time periods.
• Distribution by categories of expenditures: personnel, equipment, etc.
• Source of funding (business unit or central lab in the industrial company).
• Comparison of expenditures, per item category: by competitors, industry averages, and sector averages.
• Expenditures by discipline, technology, and scientist and engineer.
• Expenditures related to a product line or other commercial unit of reference (such as customer or market).

Economic/financial metrics
• Cost savings (including ratio of savings in cost of goods sold).
• Return on investment.
• Return on assets.
• Payback of investment.
• Economic measures, such as the price differential per unit attributable to quality characteristics derived from science and technology, computed as the differential minus the cost per unit to which the quality characteristics are attributable, multiplied by the sales volume of the given unit (formalized in the sketch below).
• "Dollarization": profit/cost of R&D employees.
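The economic measure above can be written as a worked formula. The symbols are ours, introduced only for illustration; Geisler gives the computation in prose:

\[ V_{\mathrm{S\&T}} = (\Delta p - c_q)\cdot Q \]

where \(\Delta p\) is the price differential per unit attributable to S&T-derived quality characteristics, \(c_q\) is the cost per unit of providing those characteristics, and \(Q\) is the sales volume of the unit in question. For example, a research-derived feature that supports a price premium of 2.00 per unit, costs 0.80 per unit to provide, and ships in 500,000 units would be credited with (2.00 − 0.80) × 500,000 = 600,000 of economic value.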
Commercial and business metrics
• New sales ratio (sales revenue from products and product or process enhancements that can be attributed to commercialization of S&T outcomes in a given year).
• Projected sales and income (from S&T projects in progress or in the innovation pipeline, by project or by categories of products and processes).
• Profit ratio (profits as a percentage of sales that can be attributed to S&T outcomes that had been commercialized and integrated into the new and existing products and processes that generated the profits).
• Market share ratio (relative market share per product category or unit that can be appropriated to the contribution of S&T outcomes to the product sales that have generated the market share).
• Customer satisfaction (overall ratings of the company and its products, and specific ratings of the competitive features of products or services that can be attributed to S&T outcomes incorporated in them; this measure may also be in the form of ratios of expenditures for S&T, and specific S&T outcomes, such as patents).
• Interactions with customers (internal contacts of scientists and engineers with their corporate customers, such as marketing and production, and external contacts by these S&Es with the corporate customers, such as S&Es of the client organization, as well as managers – technical and commercial – of the client organization).
• Regulatory compliance (contribution to the compliance of products and services with regulations that can be attributed to the outcomes from S&T incorporated into those products and services – including issues of safety, health, and ethics).
• Quality and reliability (contributions of the outcomes of S&T to the level of quality and reliability of the products and services sold by the organization that are considered acceptable by customers and regulators).
• Time-to-market (response time) differential to match or surpass competitors' new products and processes. (This measures the improvement [reduction] in the time needed by the organization to introduce new products or services, as competitive weapons, that can be attributed to S&T outcomes. This measure can also be in the form of ratios of the reduction in time-to-market to the costs of and investments in S&T, or per specific S&T outcomes.)
• Proprietary sales and revenues ratio (the portion of sales of products and services, as well as revenues from licenses and similar income categories, that is protected by patents and other instruments of trade secrets and can be attributed to those patents that offer specific protection of product characteristics providing exclusive features for the organization over its competitors).

Bibliometric metrics
• Publications (includes scientific papers, technical reports, articles in scientific journals, book chapters, and proceedings of conferences and symposia). (These measures can only be in the form of ratios to the investments in S&T that have generated them, or to selected expenditures by category of type of industry and academic discipline – all in a given time period.)
• Citation analysis (includes counts of citations of scientific and technical articles, as one measure of impact on the scientific community and of the quality of the scientific effort that generated the publications cited). (This measure may also be in the form of ratios to investments in S&T, or by academic discipline and type of industry or sector in which the scientific effort has been conducted.)
• Co-word analysis and Database Tomography (DT). (These are measures of analyses performed on large databases of S&T bibliographical outcomes, in a form of data mining. DT, for example, provides a roadmap to topical areas found in relevant literatures. Key word analysis also assists in obtaining a measure of topical coherence, as well as topical lineage, which offers a measure of thematic history, thus helping to identify the path of development of a field of S&T and a specific thematic area. Such measures are also helpful in a more accurate attribution of the origins and path of evolution of scientific breakthroughs, for both individuals and organizations.)
• Special presentations and honors (including keynote presentations at scientific conferences and symposia and other ad hoc contributions to the literature).

Patents
• Count of patents (produced by the S&T unit and per S&E in the unit and in the organization). (This measure may also be in the form of a ratio of the number of patents in a given time frame per expenditures for S&T – namely, a measure of the cost per patent by type of industry, so as to account for different patenting practices.)
• Relevant or current patents (the percent of patents that are current and provide the organization with competitive proprietary advantages; also in the form of a ratio of these current patents to the total number of patents produced over a time period, classified by industry to account for different patenting practices).
• Comparative patent standards (ratios of the unit's and organization's pool of relevant and useful patents to the benchmarks in the industry, for the key technological areas in which the organization is competing).
• Cost of patents (considering the time lag from the application of investment in S&T to the point of registering a patent, this is a measure of the cost of a patent; also to be considered are the issues invoked by identifying the link between investments and the patents that emerge, and differences in quality and competitive impacts of the various patents).

Peer review metrics
• Internal evaluation (subjective rating of the S&T unit, its activities and its outcomes, by other people and units in the organization, such as marketing and production). (This type of measure may be in the form of written evaluations and a ranking scale on an instrument that measures the judgment of respondents, ad hoc or in a periodical manner.)
• External evaluation (subjective evaluation of the S&T unit, its activities, its outcomes, and its overall quality – by a panel of experts). (This measure may be in the form of an invited effort requested by the S&T unit or its organization from external S&T experts, consultants, and other knowledgeable people in the community, or in the form of routine evaluation, as part of an ongoing assessment of S&T.)
• Targeted reviews (including specific panel evaluations of any outcome from S&T, such as a specific scientific paper, project, or program; also includes specific judgmental assessment of a product, a patent, and an individual scientist or engineer). (This may be considered a measure of quality, as viewed by expert reviewers.)

Organizational, strategic, and managerial metrics
• Project management: internal cycle time. (This is a measure of the period between starting a given S&T project and transferring an outcome to the downstream unit within the organization. It may serve as a measure of internal efficiency in the company.)
• Project management: external or commercial cycle time (includes a measure of the time period between the start of the S&T project and the ultimate sale of a product or service to an external customer).
• Existence of project champion (includes a measure of the number or portion of current S&T projects which have an identifiable "champion" in the form of a manager from outside the S&T/R&D unit).
• Projects with interfunctional teams. (Includes measures of the number of projects that employ teams composed of people from units across the organization and outside the S&T/R&D unit. Such "hybrid" teams may, for example, have representatives from marketing, production, and finance. Another measure here may be the point in the project lifetime at which such teams are formed. It is widely assumed that the earlier such a team is established, the greater the probability of the commercial success of the project.)
• Evaluation of the scientific and technical capabilities of the S&T unit (and, by extension, of the total organization). (This is a measure of external evaluation, primarily by the various customers, of how capable the firm and its S&T unit are of meeting the scientific and technological challenges of their markets. The measure may also be used to evaluate publicly funded laboratories and agencies, such as NASA, CDC, EDA, and even academic institutions.)
• Project progress and success (includes measures of the progress of S&T projects in terms of the objectives and milestones that were established over a given period of time; also includes measures of the number or percent of projects that exhibited technical success and have done so on time and on budget).
• Evaluation of projects and programs. (This is a financial measure of the degree to which S&T projects and clusters of projects – programs – have had technical and commercial success. Measures include the cost per technically successful project and the cost per commercially successful project. Differences between these measures usually indicate problems in the downstream flow of S&T outcomes. Another type of measure includes averages of cost per project. Similar measures may be applied to programs and to the entire S&T portfolio of projects and programs.)
• Ownership, support, and funding of projects and programs. (These measures include the percent of projects supported and funded by other units in the organization that are directly related to a product line or similar commercial entity in the organization. These measures also provide the distribution of projects and programs by source of organization. They may indicate over- or under-reliance on, or relation to, given units and functions that support S&T.)
• Human relations measures of S&T personnel (including such measures as morale of S&T personnel and satisfaction with their work).
• Relation of S&T to strategic objectives. (These are measures of the degree to which S&T objectives are related to the strategic objectives of the organization and are current with any changes in the organization's strategy. Differences or discrepancies between the two sets of objectives may lead to problems in S&T performance and its relevance to where the organization is heading.)
Appendix B Key Performance Indicators
289
• Benchmarking project and program performance. (These are measures that relate financial, economic, and project management metrics to benchmarks that are standards or averages in the industry, as well as benchmarks that are established in view of the performance of the “best practices” in the industry and sector. Additional measures in this set are the extent to which these benchmarks influence the strategic direction of both S&T and the total organization.)
Stages of outcomes
• Immediate outputs (measures of the proximal or direct outputs from the S&T/R&D activity, such as bibliometric measures).
• Intermediate outputs (outputs of the organizations and entities that have received the immediate outputs, transformed them, and are providing the transformed outputs to other entities in society and the economy).
• Pre-ultimate outputs (measures of the products and services that are generated by those social and economic entities that had received and transformed the intermediate outputs).
• Ultimate outputs (measures of the things of value to the economy and the society that were impacted by the pre-ultimate outputs).
• Index of leading indicators (a weighted combination of core and organization-specific measures). (For each stage of outputs – immediate to ultimate – an index of leading indicators is constructed. The index offers a quantitative appraisal of the value of S&T at each of the stages of its flow downstream in the innovation process.)
• Value indices for leading indicators (measures of the value of each index, at each stage in the downstream flow of the innovation continuum). (Value indices are computed by subtracting the value of each leading index from the index that succeeds it. For example, the value index for an organization in the pre-ultimate stage minus the value in the intermediate stage would measure the added value to the continuum from S&T. Net value is computed by comparison with the costs of S&T and transformation at each stage.)
• Portion of S&T at each stage. (These are measures of the role that S&T has in each of the stages, for each recipient/transforming organization. The outputs from these organizations – products, services, processes, methods, etc. – are possible, in part, because of S&T absorbed and adopted by the organization. These measures offer a look at the size and value of the S&T contribution, for each output, as well as in toto.)
Source: Geisler, 2000, pp. 80–86

Table B.3 Pool of metrics, Ranftl
Predominantly qualitative indicators(a)
• The ability to win competitive proposals
• Product performance throughout its life cycle
• The R&D organization's image in the eyes of the customer
• The degree of professionalism in getting work done
• The quality and usefulness of ideas generated
• The ability to respond to peak demands and emergencies
• The tone of the organization – employee motivation and morale
Predominantly quantitative indicators
• Sales per employee
• Profit per employee
• Profits generated per R&D dollar spent
• Percent of proposals won
• Dollar value of proposals won versus dollars spent on bidding expenses
• Drafting time per average drawing
• Time per document for processing a drawing through an engineering drawing release activity
Source: Ranftl, 1977, pp. 25–29
(a) Examples of R&D productivity indicators (as identified by case study participants)
Table B.4 Pool of metrics, Schainblatt
Quantitative indicators
• Published written output
• Patents and prototypes
• Reports and algorithms
Qualitative indicators
• General contribution to science and technology
• Recognition accorded to the unit
• Social effectiveness of the unit
• Training effectiveness of the unit
• Administrative effectiveness of the unit
• R&D effectiveness of the unit
• Applications effectiveness of the unit
Science panels judging basic research
• Science resources
• Impact of discoveries
• Indices of literature citation to measure external impact and perceptions
• Analysis of patents
• Value ratings for research projects
Management of developmental research
• Number of analytical tests performed per professional
• Number of reactors operated per professional
• Number of laboratory technicians per professional
• Number of pilot plant non-professionals per professional
Quantitative algorithm for “program value”
• Potential annual benefit(a)
• Probability of commercialization(b)
• Competitive technical status(c)
• Comprehensiveness of the R&D program(d)
Calculation
• Line 1: Estimated annual new or protected sales for the complete product
• Line 2: OR – annual cost improvements
• Line 3: Potential annual benefit = (assumed % of line 1 that represents average incremental pretax income, plus 100% of line 2)
• Line 4: Probability of commercialization
• Line 5: Competitive technical status
• Line 6: Comprehensiveness
• Program value = line 3 × line 4 × line 5 × line 6
• Total program value is the sum over all businesses or products
• A discount factor can be added depending on the number of years to potential annual benefit
Other indicators(e):
• Number of patent disclosures
• Publications produced
• Honors and awards received by R&D staff
• Staff elected to national academies
• Government committees staff are asked to serve on
• Number of patent applications per professional (and per employee)(f)
• Costs to support R&D staff per number of cases of goods produced
Source: Schainblatt, 1982, pp. 15–17
(a) 1. A totally new product and business may result from an R&D program, generating income through new sales. 2. R&D programs may allow new features to be added to existing products and so protect, and possibly extend, existing sales. 3. R&D programs may also be effective in producing cost improvements.
(b) (1.0) Transfer of technology is underway or covered in strategic plans of the business. (0.7) Technology is closely related to business interests and the businesses have some investment in the technology. (0.4) Low interest level or negative view in existing businesses. (0.1) No business home yet identified for program output.
(c) (1.0) The program has had continued historic scientific and technical leadership and is ahead of competitive activities. (0.8) The scientific and technical approach is believed to be superior. (0.5) The approach is equally effective as parallel competitive R&D. (0.3) Other approaches are receiving heavy support but the approach in question has specific advantages. (0.1) The work is a backup or alternative solution where competitive approaches being pursued elsewhere are likely to succeed.
(d) (1.0) All cost reductions result directly from the R&D program, or the R&D program comprehensively addresses the product used for calculation of potential annual benefit. (0.3) The program comprehensively addresses the principal component or major technical area of the product used to calculate potential annual benefit. (0.1) The program addresses one of several major components or key technical areas critical to the product used to calculate potential annual benefit. (0.01) The program is targeted to a general area of opportunity, with only a vague connection between the R&D program and the total benefits claimed as potential annual benefit.
(e) Note that the author warns that while records of these indicators are kept, no attempt is made to systematically relate them to input indicators. Schainblatt also notes that some companies view these indicators as “image enhancements”, but not as outputs to be related to inputs.
(f) A patent attorney appraises each patent on a scale from one to five, and the company vice-president in charge of the laboratory grades the relevance of each patent to the company's business, also on a scale from one to five.
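Read as a computation, Schainblatt's program-value algorithm simply multiplies the estimated benefit by the three judgment factors above. The following sketch illustrates that arithmetic; the function and variable names are our own, and the discounting scheme is just one simple possibility, since the source only notes that a discount "can be added depending on the number of years to potential annual benefit".

```python
# Illustrative sketch of Schainblatt's "program value" calculation (Table B.4).
# Names and the discounting scheme are assumptions for illustration only.

def program_value(new_or_protected_sales, annual_cost_improvements,
                  incremental_income_share, p_commercialization,
                  competitive_status, comprehensiveness,
                  years_to_benefit=0, discount_rate=0.0):
    # Potential annual benefit = assumed % of line 1 (sales) that represents
    # average incremental pretax income, plus 100% of line 2 (cost improvements).
    potential_annual_benefit = (incremental_income_share * new_or_protected_sales
                                + annual_cost_improvements)
    # Program value = line 3 x line 4 x line 5 x line 6.
    value = (potential_annual_benefit * p_commercialization
             * competitive_status * comprehensiveness)
    # Optional discount for the number of years until the benefit materializes.
    return value / (1.0 + discount_rate) ** years_to_benefit

# Example: $10m protected sales at a 20% income share plus $1m cost savings,
# technology covered in business plans (1.0), superior approach (0.8),
# addressing the principal component (0.3), benefit expected in 3 years.
v = program_value(10_000_000, 1_000_000, 0.20, 1.0, 0.8, 0.3,
                  years_to_benefit=3, discount_rate=0.10)
print(f"Program value: ${v:,.0f}")
```

Total program value would then be the sum of this figure over all businesses or products, as the table notes.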
Table B.5 Classification of R&D productivity measurements, Brown and Gobeli
Inputs/resources
• Acquiring R&D personnel when needed
• Sufficient number of R&D personnel
• Training and experience of R&D personnel
• Utilization of personnel skills
• Adequate R&D budget levels
Processing system
Project management issues
• Ability to agree upon and meet requirements
• Timeliness in meeting project milestones
• Timeliness in meeting design completions
• Communication of status of projects
• Effectiveness of plans provided
• Responsiveness to added projects
• Prioritizing activities and projects
General R&D characteristics
• Degree of innovation
• Number of design changes before release
• Cost of developing a new project
Specific outputs
• Quality of hardware released
• Quality of software released
• Number of new products introduced
• Number of design changes after release
R&D division results/outcomes
• Contributions to Division revenue goals
• Contributions to Division profitability goals
• Contributions to gains in market share
Source: Brown & Gobeli, 1992, p. 330
Table B.6 Ten R&D productivity indicators, Brown and Gobeli
Resources
• % key skill areas learned by R&D personnel
Project management
• % technical specifications met or exceeded, averaged across completions
• % completion dates met or exceeded
People management
• % fully satisfactory or above R&D personnel resigning per year
Planning
• Number of engineering change orders due to specification changes before product release
New technology study and development
• Number of patents/total number of R&D employees
Outputs
• Number of complaints per product per year, averaged across projects, with a three-month rolling average
Division results/outcomes
• % sales from products released within the last three years
• Score on annual R&D scorecard survey, completed by marketing and operations
• Annual sales/total R&D budget
Source: Brown and Gobeli, 1992, p. 330
Table B.7 Metrics associated with five managerial factors, Tipping et al.
1. Financial return/VC:
a. New Sales Ratio
b. Cost Savings Ratio (CSR)
c. R&D Yield
d. R&D Return
2. Projected value of the R&D pipeline/VC, PA
3. Comparative manufacturing cost/VC, AVT
4. Product quality and reliability/VC, AVT
a. Customer or Consumer Evaluation
b. Reliability/Defect Rate Assessment
5. Gross profit margin/VC, AVT
6. Market share/VC, AVT
a. Direct Market Share
b. Related Market Share
7. Strategic alignment/PA, AVT, IWB
a. Corporate and Business Unit
b. Goal Coverage
8. Distribution of technology investment/PA, AVT
9. Number of ways technology is exploited/PA, AVT
10. Number of project definitions having business/marketing approval/IWB
11. Use of project milestone system/IWB, PRD
12. Percent funding by the business/IWB
13. Technology transfer to manufacturing/IWB, PRD
14. Use of cross-functional teams/IWB, PRD
15. Rating of product technology benefits/AVT, VC
a. Customer Rating
b. Economic Evaluation
c. Market Share Evaluation
16. Response time to competitive moves/AVT, PRD
17. Current investment in technology/AVT
18. Quality of personnel/AVT, PRD
a. Internal Customer Rating
b. External Customer Rating
c. External Recognition
d. Published Works
19. Development cycle time/AVT, PRD
a. Market Cycle Time
b. Project Management Cycle Time
20. Customer rating of technical capability/AVT
21. Number and quality of patents/AVT, PRD
a. Percent Useful
b. Value Ratio
c. Retention Percent
d. Cost of Invention
22. Sales protected by proprietary position/AVT, VC
a. % Patent Protected Sales
b. % Proprietary Sales
23. Peer evaluation/AVT, PRD
a. External
b. Internal
24. Customer satisfaction/AVT, PRD
a. External
b. Internal
25. Development pipeline milestones achieved/PRD
a. % of Project Milestones Achieved
b. Performance Level at Each Milestone
26. Customer contact time/PRD
27. Preservation of technical output/PRD
28. Efficiency of internal technical processes/PA, PRD
a. Project Assessment
b. Portfolio Assessment
29. Employee morale/PRD
30. Goal clarity/PRD
31. Project ownership/empowerment/PRD
32. Management support/PRD
33. Project championship/PRD
Key to the five managerial factors: VC Value Creation, PA Portfolio Assessment, AVT Asset Value of Technology, IWB Integration with Business, PRD Practice of R&D Processes to Support Innovation.
Source: Tipping et al., 1995, pp. 22–39
Table B.8 Measuring R&D productivity, Brown and Svenson
Product development
• Product life cycle, net present cash flow to development cost
• Percent of products developed in the last five years
• Percent of net income from products developed in the last five years
• Product development cycle time
• Mean Time to Failure
• Mean Time to Repair
• Final product cost/operating cost/total development cost
Research and technology development
• Percent of new technology content in new products
• Transfer rate of new knowledge and technology into product development
• Competitive value of technology innovations and research data
• Net income from sales and technology
Examples of output measures(a) (the number of)
• Research proposals written
• Papers published
• Designs produced
• Products designed
• Presentations made
• Patents received
• Awards won
• Projects completed
• Books written
Source: Brown and Svenson, 1998, pp. 32–33
(a) Note that the authors stress that without a measure of the quality and value of these outputs, the measurement system will drive the wrong behavior
Table B.9 Indicators of companies engaged in new product development (NPD) activities, Chiesa and Frattini
Company A
• Costs for internal working hours
• Direct material costs
• Costs for the acquisition of external services
• Internal activities completion date
• No. of approved drawings/month
• No. of stipulated purchasing contracts/month
• No. of components built/week
Company B
• Cost Performance Index (a numerical indicator that evaluates the efficiency of the NPD project in terms of costs and resource consumption)
• Schedule Performance Index (a numerical indicator that determines the efficiency of the project in terms of punctuality)
Company C
• Technical and quantitative characteristics of the new cosmetic (in terms of toxicity, tolerability, and chemical, physical and organoleptic properties)
• Time needed to pass the different phases of the development process
• Costs for achieving different levels of development completion
Company D
• Man-hours dedicated to the development project
• Costs required for designing a product component
• % correct drawings delivered
• Time needed to complete the design of a product component
Source: Chiesa and Frattini, 2007a, pp. 192–196
Table B.10 Indicators of companies engaged in basic and applied research activity, Chiesa et al.
Company E (subjective evaluation of)
• Market potential of novel identified molecules
• International relevance of technologies acquired during one year
• Degree of satisfaction of collaborations' objectives
• Level of external reputation
Company F (subjective evaluation of the)
• Capability to achieve the established goals (expressed in terms of the quality level of the output and respect of the budget)
• Non-financial scoring methods are used for evaluating the contribution to value of the platforms
Company G (subjective evaluation of the)
• Capability to achieve the established quality level of the output
• Market attractiveness of novel identified molecules
• Percentage of respected milestones
• Non-financial methods are used to evaluate the profitability of the research projects
Company H (subjective evaluation of)
• The quality level and the contribution to value of research projects
• Yes/no evaluation of respect of budgets
Source: Chiesa et al., 2007b, pp. 292–293
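Company B's two indices in Table B.9 correspond, in standard earned-value-management terms, to the ratio of earned value to actual cost and to planned value respectively. The sketch below shows that standard reading; the source does not give Company B's exact formulas, so the variable names and figures are ours.

```python
# Earned-value reading of Company B's indices (Table B.9). The source does
# not state the company's exact formulas; this is the standard EVM version.

def cost_performance_index(earned_value, actual_cost):
    # CPI > 1.0: the project has delivered more value than it has cost so far.
    return earned_value / actual_cost

def schedule_performance_index(earned_value, planned_value):
    # SPI > 1.0: the project is ahead of its plan.
    return earned_value / planned_value

ev, ac, pv = 120_000, 100_000, 150_000
print(f"CPI = {cost_performance_index(ev, ac):.2f}")    # 1.20 -> cost-efficient
print(f"SPI = {schedule_performance_index(ev, pv):.2f}")  # 0.80 -> behind plan
```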
Appendix C List of Interviews
Table C.1 Phase 1: Exploration of performance measurement practices in selected companies in order to get first insights

Company | Name | Position | Place | Date
ABB | Nils Leffler, Ph.D. | Group Senior Vice President | Helsinki, Finland | Nov 21, 2006
Deutsche Telekom AG | Dr. Hagen Hultzsch | Former CEO responsible for R&D | Berlin, Germany | Jul 7, 2007
EMC2 | Douglas Shawn | Senior Director, Advanced Technology Solutions, CTO office | Phone interview | May 21, 2007
Google | Peter Norvig, Ph.D. | Director of Research | Mountain View, USA | Jan 23, 2007
Google | Dr. Bradley Chen | Manages performance analysis tools projects at Google | Mountain View, USA | Apr 24, 2007
Hewlett Packard | Kumar Goswami, Ph.D. | Director Systems and Services Automation, Hewlett-Packard Laboratories | Palo Alto, USA | Jan 23, 2007
Hewlett Packard | Rich Friedrich | Director Internet Systems and Storage Laboratory, Hewlett-Packard Laboratories | Palo Alto, USA | Apr 26, 2007
IBM | Liba Svobodova, Ph.D. | Technical Assistant to Dr. Matthias Kaiserswerth | Rüschlikon, Switzerland | Apr 26, 2007
IBM | Inderpal Narang | Distinguished Engineer, Member IBM Academy of Technology, Computer Science Department | San Jose, USA | Apr 24, 2007
Intel | Gregg Wyant | Chief Architect; General Manager/Director Strategy, Architecture and Innovation | Walldorf, Germany | Apr 24, 2007
Microsoft | Dr. Tony Hey | Director Research Councils | Eltville (IRF 2007) | Mar 18, 2008
Nokia | Jan Bosch, Ph.D. | Professor of Software Engineering, Nokia Research Center | Helsinki, Finland | Nov 22, 2006
Philips | Prof. Dr. Emile Aarts | Vice President Philips Research, Scientific Program Manager | Eindhoven, Netherlands (video interview) | May 03, 2007
SAP | Dr. Joachim Schaper | Vice President, EMEA | Walldorf, Germany | Jul 01, 2008
SAP | Krish Mantripragada | Vice President Suite Solution, AUTO ID | Palo Alto, USA | Nov 26, 2007
SAP | Amar Singh | Suite Solution, Responsive Supply | Phone interview | Mar 03, 2007
Siemens | Mathias Schanze | Corporate Technology, Chief Technology Office – Innovation Strategy | Phone interview | Apr 18, 2007
Siemens | Prof. Dr. Helmut Raffler | Head of Technical Division, Siemens CT, CI | Phone interview | Apr 18, 2007
Siemens | Mathias Schanze | Corporate Technology, Chief Technology Office – Innovation Strategy | Phone interview | Apr 23, 2007
Siemens | Prof. Dr. Helmut Raffler and Mathias Schanze | Head of Technical Division, Siemens CT, CI; Corporate Technology, Chief Technology Office – Innovation Strategy | Munich, Germany | Oct 31, 2008
Sun Microsystems | Roger Meike | Senior Research Director | Menlo Park, USA | Jul 25, 2008
Sun Microsystems | Dr. Jeff Rulifson | Director Sun Labs Europe | Phone interview | Jun 26, 2008
Sun Microsystems | Dr. Jeff Rulifson | Director Sun Labs Europe | Phone interview | Apr 17, 2007
Table C.2 Phase 2: Exploration of performance measurement practices on a broad scale (in-depth case studies)

Company | Name | Position | Place | Date
ABB | Nils Leffler, Ph.D. | Group Senior Vice President | Brussels, Belgium | Mar 06, 2007
ABB | Lisa Egli | Quality Manager | Oerlikon, Switzerland | May 29, 2007
ABB | Lisa Egli | Quality Manager | Phone interview | Feb 04, 2008
ABB | Lisa Egli | Quality Manager | Dättwil, Switzerland | Mar 14, 2008
ABB | Otto Preiss | Head of Research, Dättwil | Dättwil, Switzerland | Mar 14, 2008
Deutsche Telekom AG | Peter Möckel | Executive Vice President, Managing Director DT AG Laboratories | Berlin, Germany | Feb 25, 2008
Deutsche Telekom AG | Klaus-Jürgen Buß | CFO DT AG Laboratories | Phone interview | Sep 19, 2007
Deutsche Telekom AG | Klaus-Jürgen Buß | CFO DT AG Laboratories | Phone interview | Jun 23, 2007
Deutsche Telekom AG | Dr. Oliver Faber | Managing Director Services | Phone interview | Apr 24, 2007
Deutsche Telekom AG | Dr. Oliver Faber | Managing Director Services | Phone interview | Jan 24, 2007
EMC2 | Douglas Shawn | Senior Director, Advanced Technology Solutions, CTO office | Phone interview | Jan 24, 2007
EMC2 | Douglas Shawn | Senior Director, Advanced Technology Solutions, CTO office | Phone interview | Jul 29, 2008
EMC2 | Douglas Shawn | Senior Director, Advanced Technology Solutions, CTO office | Phone interview | May 30, 2008
IBM | Liba Svobodova, Ph.D. and Dorothea Wiesmann | Technical Assistant to Dr. Matthias Kaiserswerth | Rüschlikon, Switzerland | Apr 03, 2008
IBM | Dr. Erich Rütsche | Manager Business Development & Relations, IBM Research | Rüschlikon, Switzerland | Dec 05, 2008
IBM | Dr. Matthias Kaiserswerth | Head of Research, Zurich Lab | Rüschlikon, Switzerland | Dec 03, 2008
IBM | Malvina Nisman | Researcher, responsible for PM at IBM Research | Phone interview | Oct 22, 2008
Intel | Gregg Wyant | Chief Architect; General Manager/Director Strategy, Architecture and Innovation | Folsom, USA | May 14, 2008
Intel | Gregg Wyant | Chief Architect; General Manager/Director Strategy, Architecture and Innovation | Walldorf, Germany | Nov 22, 2007
Intel | Prof. Dr. Martin Curley | Director IT Innovation & Research, Intel IT Strategy, Architecture & Innovation | Phone interview | Dec 04, 2007
Intel | Gregg Wyant | Chief Architect; General Manager/Director Strategy, Architecture and Innovation | Phone interview | Nov 19, 2007
Intel | Gregg Wyant | Chief Architect; General Manager/Director Strategy, Architecture and Innovation | Phone interview | Nov 15, 2007
Microsoft | Dr. Götz Brasche | Program Director, Microsoft – European Microsoft Research Center | Aachen, Germany | Aug 31, 2007
Microsoft | Dr. Götz Brasche | Program Director, Microsoft – European Microsoft Research Center | Brussels, Belgium | Nov 26, 2007
Microsoft | Dr. Götz Brasche | Program Director, Microsoft – European Microsoft Research Center | Phone interview | Jul 23, 2007
Microsoft | Dr. Götz Brasche | Program Director, Microsoft – European Microsoft Research Center | Phone interview | Jul 04, 2007
Philips | Prof. Dr. Emile Aarts | Vice President Philips Research, Scientific Program Manager | Phone interview | Jun 11, 2007
Philips | Dr. Ferrie Aalders | Senior Director, Research Excellence Manager | – | Jun 11, 2008
Philips | Dr. Ferrie Aalders | Senior Director, Research Excellence Manager | – | Feb 22, 2008
SAP | Dr. Wolfgang Gerteis | Chief EMEA Officer, SAP Research Walldorf | Karlsruhe, Germany | Dec 07, 2007
SAP | Dr. Stephan Haller | Senior Researcher, SAP Research, Switzerland | Phone interview | Dec 17, 2007
SAP | Thomas Widenka | Vice President, RPO (Research Portfolio Office) | Walldorf, Germany | Jul 04, 2007
SAP | Simone Perlmann | Research Associate, Business Development | Walldorf, Germany | Feb 13, 2008
SAP | Peter Bittner | SAP IP EU Portfolio Development | Walldorf, Germany | Jun 03, 2008
SAP | Burkhard Neidecker-Lutz | Chief Technology Officer, SAP Research | SAP Research Summit, Dresden | Dec 10, 2007
Appendix D Case Studies
D.1 ABB
Status: 2007
D.1.1 Company Profile
ABB is a leader in power and automation technologies that enable utility and industry customers to improve their performance while lowering environmental impact. The ABB Group operates in around 100 countries and employs more than 110,000 people. In the 2007 financial year, the group achieved total revenue of almost US$29 billion.1 Approximately 4% ($1.173 billion) of revenue was spent on R&D activities. About 10% of the R&D expenditure was used for research activities in the corporate research organization. Worldwide, nearly 6,000 scientists and technology experts work in the R&D organization of ABB.2 The company's collaborations with about 70 universities enrich the work with scientific advice. ABB has expanded its R&D base in India and China, where about 40% of ABB's R&D resources are now located.
1 Internet source: http://www.abb.com
2 Kasper (2006), p. 257.
D.1.2 R&D Characteristics
Figure D.1, derived from Brown and Svenson3 and Samsonowa, Buxmann and Gerteis4, shows the environment as well as the main inputs, activities, outputs and outcomes of the ABB Corporate Research organization.

Fig. D.1 Definition of inputs and outputs of ABB corporate research (Source: Samsonowa, Buxmann and Gerteis (2009)) [Figure: flow diagram from inputs (people, ideas, equipment, facilities, funds, information, specific requests) through the research processing system to outputs (design rules for products, demonstrations/prototypes, technical notes/papers, pilots, test beds, service/support, publications) and outcomes (sales improvement, cost reduction, product improvements, capital avoidance), with development and other units (marketing, business development, sales, consulting, eco-system, operations) as receiving systems and in-process (quarterly), output (annually and 2+ years) and outcome (3+ years) measurement points]
Project activities form the core of research and development at ABB. The R&D efforts of ABB comprise the order-related development – which is rather distributed5 and financed by customer projects and system development in the divisions and/or business units – as well as the actual research activities of the central research unit. The company distinguishes the following types of R&D projects:
• Technology development projects
• Product development projects
• Contract development projects
• So-called Fuzzy Front End activities, comprising basic and applied research in fields in which the company has little or no knowledge (e.g., scouting, pre-studies); correspondingly, no concrete fields of application are defined at the beginning of such projects.
The Fuzzy Front End activities, as well as the technology platform development, are conducted by the central research unit, whereas the product development and contract development projects are carried out in the divisions and business units.
Funding Characteristics: The information stated here applies to the research part of R&D only. The ratio of internal to external funding is approximately 90:10.
Internal (90):
• 70% of the internal funding is corporate-financed (a fixed budget allocated to the research department);
• 20% is in-house contractual research (financed by business or development units).
External (10):
• Approximately 10% of the funds that ABB budgets towards research come from externally contracted research, which includes publicly (nationally or European) funded projects.
3 Brown and Svenson (1998), pp. 30–35.
4 Samsonowa, Buxmann and Gerteis (2009), p. 162.
5 Kasper (2006), p. 257.
D.1.3 Research Organization Structure

R&D at ABB is organized into five development divisions (Power Products, Power Systems, Automation Products, Process Automation, and Robotics), according to the company's core business, and into two research divisions: (a) Automation Technologies and (b) Power Technologies. R&D is headed by the Chief Technology Officer, who reports directly to the CEO of ABB; one of his main responsibilities is to interlink research and development. The development divisions are headed by a Division Technology Manager; the research divisions are led by the Head Global Lab Power and the Head Global Lab Automation. Figure D.2 highlights the structure and intra-organizational location of the research department at ABB. Each research division consists of various programs that are distributed across seven local labs in various countries. Figure D.3 shows the details of the program structure, as well as the worldwide distribution of the labs.
Fig. D.2 ABB technology core team (Source: R&D @ ABB, standard presentation, November 2008) [Figure: organization chart with the Chief Technology Officer above the Division Technology Managers for Automation Products, Power Systems, Power Products, Process Automation and Robotics, Controlling, and the Heads of the Global Labs Power and Automation]

Fig. D.3 Global lab and program structure of ABB corporate research (Source: R&D @ ABB, standard presentation, November 2008) [Figure: ABB Corporate Research split into Automation Technologies programs (Mechatronics & Robotics Automation; Control & Optimization; Sensors & Signal Processing; Industrial Communications; Industrial Software Systems) and Power Technologies programs (Materials & Transformation; Power T&D Application; Power Electronics; Switching), distributed across local labs in Dättwil, Västerås/Oslo, Ladenburg, Krakow, Raleigh/NC, Bangalore and Beijing/Shanghai]
Each program is headed by a Program Manager who is responsible for the technical as well as the organizational aspects of his/her program and the corresponding projects. Each location is managed by a Lab Manager who has personnel and resource responsibilities.
D.1.4 Research Process

The organization of ABB's research process follows the so-called Stage-Gate Model.6 The current version of the ABB Gate Model has been implemented in all ABB business divisions and in Corporate Research.
6 A Stage-Gate System is a conceptual and operational road map for moving a new-product project from idea to launch. Stage-Gate divides the effort into distinct stages separated by management decision gates. Cross-functional teams must successfully complete a prescribed set of related cross-functional tasks in each stage prior to obtaining management approval to proceed to the next stage of product development. Cooper, Robert G.: The seven principles of the latest Stage-Gate® method add up to a streamlined, new-product idea-to-launch process. In: Marketing Management, March/April 2006, pp. 18–24.
The ABB Gate Model aims at optimally supporting business decisions to ensure investment in the most promising projects. The model applies to all projects, with the exception of consultation projects, pre-studies, scouting projects, and cooperation projects with universities or other external partners within specific budget limitations. The model encompasses eight well-defined gates, at each of which a decision is made about continuing or stopping the evaluated project. The decision process at each gate involves all relevant functions of ABB, and each gate has an associated catalogue of questions to be answered in preparation for the gate decision meeting. There are four possible decisions when reaching a gate (Table D.1).

Table D.1 Possible decisions in the ABB gate model
Decision | Description
Go | Proceed to the next phase
Go, with action items | Proceed to the next phase and take special care of the raised action items
Redo | Go back to the previous phase and solve the issues found during the gate review
Terminate | Stop the project
Source: ABB Gate Model (TPS), Training 2.0 presentation, April 2007
The ABB Gate Model (Fig. D.4) is applied to all R&D divisions to ensure the seamless interoperability and smooth handover within the overall innovation process. This could, for example, mean that a research project would start with G0 and end at G5 by transferring the results into a business group that would then apply its specific gates G0 to G7 itself.
Fig. D.4 The ABB gate model (Source: ABB gate model (TPS), Training 2.0 presentation, April 2007 [The example is based on interviews conducted with ABB's staff. The detailed list of ABB interviewees is in Appendix C]) [Figure: the eight gates – G0 Agreement to start; G1 Agreement on scope; G2 Agreement on start of project execution; G3 Confirm execution; G4 Agreement on readiness for introduction; G5 Release/handover; G6 Close project; G7 Capture return on investment]
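The four gate decisions of Table D.1 can be read as a small state machine over the gates of Fig. D.4: Go and "Go, with action items" move the project forward, Redo sends it back one phase, and Terminate stops it. The following sketch, with names of our own choosing, is an illustrative restatement of that logic, not ABB's internal tooling:

```python
# Illustrative state machine for the ABB Gate Model decisions (Table D.1).
# Gate names follow Fig. D.4; everything else is our own illustration.

GATES = ["G0", "G1", "G2", "G3", "G4", "G5", "G6", "G7"]

def next_gate(current: int, decision: str, action_items=None):
    """Return the next gate index, or None if the project is terminated."""
    if decision == "terminate":
        return None
    if decision == "redo":
        return max(current - 1, 0)            # back to the previous phase
    if decision in ("go", "go_with_actions"):
        if decision == "go_with_actions" and action_items:
            print("Track action items:", action_items)
        return min(current + 1, len(GATES) - 1)
    raise ValueError(f"unknown decision: {decision}")

g = next_gate(GATES.index("G2"), "go_with_actions", ["update business case"])
print("Proceed to", GATES[g])
```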
D.1.5 Methods and Tools

ABB Corporate Research uses the following tools and methods during the research process:
• ABB Stage-Gate Model – a tool for facilitating decision making regarding the continuation of a project, for allocating budget, and for monitoring the status of the project.
• ePIPE – a project database that documents the exact state and progress of each project according to the Stage-Gate Model mentioned above.
• Net Present Value – a standard method for the financial appraisal of projects.
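Since gate decisions rest on a project's net present value (see the NPV thresholds among the KPIs in Table D.3 below), the standard NPV calculation is worth recalling. The sketch below is the generic textbook computation, not ABB's internal tool, and the cash-flow figures are invented for illustration:

```python
# Generic net present value (NPV): discounted future cash flows minus the
# initial investment. Illustrative only; the figures are invented.

def npv(rate, cashflows):
    # cashflows[0] is the (negative) investment at t=0.
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

# A hypothetical research project: 2.0m invested now, returns over four years.
project = [-2_000_000, 400_000, 900_000, 1_200_000, 1_500_000]
print(f"NPV at 10%: {npv(0.10, project):,.0f}")
```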
D.1.6 Research Goals
The three major goals of ABB’s research labs are based on three basic elements: top people, top ideas, and top results. The goal with the highest priority – to get top people into the organization – follows the assumption that such persons have the best potential to generate top ideas and are subsequently able to turn these ideas into top results. The additional goals are interrelated with these three goals: top results contribute to high visibility, external visibility helps to acquire external know-how (look outside), which paves the way to a better understanding of customer needs (market savvy). The transfer of ideas into the final results requires a professional execution of research projects (excellence). Finally, high-profile scientific work plays an important role in protecting the intellectual property (protect technology).
Table D.2 Relative weighting of ABB corporate research goals

Name | Description | Weighting (research lab view) | Weighting (corporate view)
1. Top results | Project results have a significant impact on business | 15 | 25
2. Top people | Best-suited people for defined profiles | 20 | 15
3. Top ideas | Continuous contribution of top ideas to assure a full and valuable R&D project pipeline | 15 | 8
4. 360° visibility | Visibility inside and outside the company | 8 | 10
5. Look outside | Best utilization of external know-how | 8 | 8
6. Market savvy | Expert knowledge and detailed understanding of internal and external customer needs | 8 | 8
7. Excellence | Professional execution of projects | 15 | 8
8. Protect technology | Adequate protection of own technology and awareness of the IP of others | 11 | 18
∑ = 100 | ∑ = 100
Source: Compiled based on interviews with representatives of ABB (The detailed list of ABB interviewees is in Appendix C)
D.1.7 Research KPIs
The following table lists the key performance indicators (KPIs) used by ABB, together with the metrics used to evaluate them.

Table D.3 Indicators and metrics used to measure ABB Corporate Research
Subgoal | KPI/metric
Highly visible external activities/recognitions | # (awards, speeches, invited keynotes, etc.)
Significant results | Ratio of the amount invested (people and capital) in projects that lead to significant results to total investment. Classifying a result as "significant" requires receiving acknowledgement from the business unit confirming its vital necessity, as well as passing Gate 5 (Agreement on product/solution release) with an NPV of 8 or greater
Incremental results | Ratio of the amount invested (people and capital) in projects that lead to incremental results to total investment. Classifying a result as "incremental" requires receiving acknowledgement from the business unit confirming its necessity, as well as passing Gate 5 (Agreement on product/solution release) with an NPV of 5 or greater
Supported results | Ratio of the amount invested (people and capital) in projects that lead to support results to total investment, e.g., know-how support after the transfer, expertise for specific topics or problems
IDFs and first filings | #
Publications | #
Adherence to budget | % difference of the planned budget of each line item to the actual budget (the least deviation is rated best)
ePIPE data quality | Subjective evaluation by the Research Program Manager with binary response options (yes/no, good/bad)
NPV of significant results | Sum of the NPVs of the projects categorized as significant
Transfer rate to business (refers to people) | % target achievement of transfers
Match of the skill set of hired people with the profiles they are hired for | %
Ideas that end up in a project | #
Creativity of researchers | # of ideas to # of researchers
Internal visibility | # of visits from key decision makers
Quality of collaboration (refers to joint work with universities; perception of each professor supervising a project) | Subjective evaluation of each university project by the supervising university professor via a questionnaire
Investment at universities in relation to total budget | %
Research customer feedback from internal business units (overall satisfaction with the project result) | Average rating of projects (1–5)
Job rotations | #
IP strategy addressed in projects | # of IP workshops for projects in relation to all projects
Source: Compiled based on interviews with representatives of ABB (The detailed list of ABB interviewees is in Appendix C)
D.1.8 Mapping Goals and KPIs
The following table shows the mapping of KPIs to goals (how goals are assessed using the listed KPIs) and their relative weighting. This mapping reflects the research lab's view.

Table D.4 Mapping KPIs to goals and their relative weighting at ABB corporate research (each goal is listed with the KPIs that feed it and their weights within that goal; the weights for each goal sum to 100)

Top results:
• Significant results 33 • Incremental results 22 • Support results 11 • IDFs and first filings 16 • Publications 5 • Adherence to budget 1 • ePIPE data quality 1 • NPV of significant results 11
Top people:
• Transfer rate to business 30 • Matching of the skill set of hired people with the profiles they are hired for 70
Top ideas:
• Ideas that end up in a project 40 • Creativity of researchers 60
360° visibility:
• Highly visible external activities/recognitions 30 • Internal visibility (visits from key decision makers) 70
Look outside:
• Quality of collaboration (questionnaire) 20 • Investment at universities in relation to total budget 80
Market savvy:
• Quality of collaboration (questionnaire) 50 • Research customer feedback from internal business units (overall satisfaction with the project result) 50
Excellence:
• Adherence to budget 40 • ePIPE data quality 30 • Job rotations 30
Protect technology:
• IP strategy addressed in projects 100
Source: Compiled based on interviews with representatives of ABB (The detailed list of ABB interviewees is in Appendix C)
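Tables D.2 and D.4 combine naturally into a two-stage weighted score: each goal is assessed as the weighted sum of its KPI values, and the lab overall as the weighted sum of goal scores. The sketch below shows that aggregation under the assumption that every KPI has already been normalized to a 0–100 scale; the case study does not describe the normalization, so the sample values are purely illustrative.

```python
# Two-stage weighted scoring over the ABB tables: KPI -> goal (Table D.4),
# goal -> overall (Table D.2, research lab view). KPI values are assumed to
# be pre-normalized to 0-100; the sample values are invented.

goal_weights = {"Top people": 20, "Top ideas": 15}  # excerpt of Table D.2

# KPI weights within each goal (excerpt of Table D.4; weights sum to 100).
kpi_weights = {
    "Top people": {"Transfer rate to business": 30,
                   "Matching of skill set of hired people": 70},
    "Top ideas": {"Ideas that end up in a project": 40,
                  "Creativity of researchers": 60},
}

kpi_values = {"Transfer rate to business": 80,
              "Matching of skill set of hired people": 90,
              "Ideas that end up in a project": 60,
              "Creativity of researchers": 70}

def goal_score(goal):
    # Weighted sum of the goal's KPI values, scaled back to 0-100.
    return sum(kpi_values[k] * w for k, w in kpi_weights[goal].items()) / 100.0

for goal in kpi_weights:
    print(goal, goal_score(goal))

# Overall score: goal scores weighted by Table D.2 (covered goals only).
overall = (sum(goal_score(g) * goal_weights[g] for g in kpi_weights)
           / sum(goal_weights[g] for g in kpi_weights))
print("Overall (covered goals only):", round(overall, 1))
```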
D.1.9 Conclusions

In the examined department, a very pronounced measurement system was in place. Clear research objectives with coherent weightings from the department and company perspectives are available. There is currently a very good understanding of the concrete assignment of KPIs to the research goals of the department.
• The goals seem to be interrelated.
• A high correlation is probable between the following goals:
  – Top people and top results
  – Top results and high visibility
• There are mainly one or two KPIs used to measure a single goal.
D.2 EMC2
Status: 2007
D.2.1 Company Profile
EMC Corporation (EMC2) is one of the world's leading manufacturers of solutions for information management and storage. The company produces a range of enterprise storage products, from disk arrays to data and information management software. EMC employs more than 37,000 people worldwide, about 40 percent of whom work outside the U.S. The company is represented by approximately 400 sales offices and partners in nearly 70 countries around the world. Its corporate headquarters is located in Hopkinton, Massachusetts. The company achieved total revenues of more than 13 billion dollars during the financial year 2007.7 Approximately 11.5% (about $1.5 billion) of the revenues was spent on R&D activities. In July 2006 EMC opened a new R&D office in Shanghai, China, to leverage the burgeoning Chinese labor pool and facilitate a further expansion into the Chinese market. On June 7, 2007, EMC announced that it would invest $160 million in Singapore to set up a new development laboratory.8 A series of acquisitions and partnerships helped EMC to grow into the largest provider of data storage solutions worldwide.
D.2.2 R&D Characteristics
Figure D.5, derived from Brown and Svenson9 and Samsonowa, Buxmann and Gerteis10, shows the environment as well as the main inputs, activities, outputs and outcomes of EMC's research organization. EMC distinguishes the following four types of R&D projects:
• University research partnership projects
• Advanced technology research projects
• Product development projects
• Product enhancement projects
The research projects of the first two types are managed centrally by the two units "EMC Innovation Network" and "Advanced Technology Solution Group". In contrast, the product development and product enhancement projects are conducted collaboratively with business units.
7 Internet source: http://www.emc.com/about/emc-at-glance/corporate-profile/index.htm
8 Internet source: http://en.wikipedia.org/wiki/EMC_Corporation
9 Brown and Svenson (1998), pp. 30–35.
10 Samsonowa, Buxmann and Gerteis (2009), p. 162.
Fig. D.5 Definition of inputs and outputs of ATSG at EMC (Source: Samsonowa, Buxmann and Gerteis (2009)) [Figure: flow diagram from inputs (people, ideas, equipment, facilities, funds, information, specific requests) through the research processing system to outputs (market analyses, demonstrations/prototypes/PoCs, technical notes/white papers, pilots, business cases, bakeoffs, benchmarkings, service/support, publications/presentations, descriptive technologies) and outcomes (sales improvement, cost reduction, product improvements, capital avoidance), with development and other units as receiving systems and in-process (quarterly), output (annually and 2+ years) and outcome (3+ years) measurement points]
Funding Characteristics: The ATSG organization is largely corporate-funded; the budget is allocated out of the office of the CTO. Some additions to the budget come from business or development units, based on incremental project results that are transferred into those groups.
D.2.3 Research Organization Structure

The research organization at EMC is situated within the Chief Technology Office, currently headed by EMC's Senior Vice President and Chief Technology Officer. There are three research divisions within the office of the CTO:
(a) EMC University Collaboration. This group focuses on basic, open, collaborative research, typically funding joint research projects with universities (mainly university research partnership projects). The group is headed by the Director of the EMC Innovation Network.
(b) The Research Organization carries out medium-term (3–5 years) and long-term (5–10 years) research projects. It is headed by the same Director of the EMC Innovation Network.
Fig. D.6 Organizational structure of EMC (Source: Presentation Office of the CTO, Review 2007 and Planning 2008) [Figure: organization chart with the Office of the CEO above Finance & Administration, the Office of the CTO, Human Resources & Processes, Product, Production, Global Services & Support, and Customer Solutions & Operations; the Office of the CTO comprises Research and Breakthrough Innovation (Development)]

Fig. D.7 Research organization at EMC (Source: Presentation Office of the CTO, Review 2007 and Planning 2008) [Figure: the Chief Technology Officer overseeing University Collaboration, the Research organization, and the ATSG]
(c) The Advanced Technology Solutions Group (ATSG), headed by a senior director, works across all business units of the company, focusing on strategic research initiatives for the corporation (Fig. D.7). In the following, the case study focuses on the Advanced Technology Solutions Group to reflect the characteristics of applied research. The ATSG builds out technology-leveraged solutions to codify EMC's technical strategy, delivering in demonstrable strategic increments and in this way contributing to the identification, evaluation and incubation of key emerging technologies with the potential to evolve or disrupt EMC's product portfolio (Fig. D.8).
Fig. D.8 Distribution of EMC research locations (Source: The example is based on interviews conducted with EMC's staff [The detailed list of EMC interviewees is in Appendix C]) [Figure: world map showing Headquarters Hopkinton; Silicon Valley 2001; Bangalore 2004; Beijing 2006; Shanghai 2006; St. Petersburg 2007; Israel 2007; Conway 2007]
D.2.4 Research Process

The research process at EMC's ATSG is guided by the so-called "Agile Four Phases Model". The phases model supports the project work and is used for decision making regarding the continuation of a project; each phase represents a new stage of the project, and project execution follows iterative agile development. The Four Phases Model describes a maturing process that starts with capturing business opportunities in Phase 0 and proceeds to a project implementation plan in Phase I. Phase II is the hands-on phase in which the project is implemented and technical solutions are developed. Final delivery or closure of the project is completed in Phase III. After each phase, an "executive sign-off" has to be obtained from an executive (business and technical) sponsor in order to proceed to the next phase. Figure D.9 shows EMC's Four Phases Model with its required input documents, its main activities, and the outputs to be delivered. A detailed plan is available for each phase that describes exactly its entry criteria, tasks/verifications, exit criteria, and the allocated roles and responsibilities. The research work at EMC is executed in the so-called "Scrum process".11 A project runs through a series of iterations called "sprints", each of which is typically 2–4 weeks in length. A set of the prioritized requirements is selected in a sprint. Products are designed, coded, and tested during the sprints.
11 Sutherland (2004), Schwaber (2004).
Fig. D.9 The four phases model at EMC's ATSG (Source: The example is based on interviews conducted with EMC's staff [The detailed list of EMC interviewees is in Appendix C]) [Figure: the four phases – Phase 0: Opportunity Captured, Phase I: Planning, Phase II: Agile Development, Phase III: Delivery/Closure – each concluded by an executive sign-off; for each phase the figure lists the required documents (high-level proposal, business case, use case document and scope understanding presentation, working PoC/software), the main activities (shaping the business case, solution identification, use case preparation, high-level architecture, technology proposition, trade-off and selection, team selection and knowledge transition, resource and project planning, task allocation, low-level design, coding, implementation and testing, functionality evaluation, collaboration via Wiki), and the outputs moved to the next phase (business case; use case document and scope presentation; working functionalities/software and PoC post-mortem review; productizing, showcase and new-market plans)]
For the duration of a sprint, the team members meet daily to review the progress; at the end of a sprint they demonstrate what they have built. After every sprint, the team conducts a "retrospective" to help it improve the process (Fig. D.10).
Fig. D.10 Project execution in sprints at EMC's ATSG (Source: ATSG Engagement Model, April 2007) [Figure: the Scrum process – a prioritized feature list, ranked by priority, is expanded into iteration tasks by the team and worked off in 1–4 week cycles with daily stand-ups (what was done yesterday, what is being done today, any blocks in the way), yielding a potentially shippable product plus a product review and retrospective; the team chooses how many iterations are needed before the product is released]
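The gating logic of the Four Phases Model is simple enough to restate as code: a project advances one phase at a time, only with executive sign-off, and is closed after final delivery. The sketch below is an illustrative restatement of the process description above, not EMC tooling; all names are ours.

```python
# Illustrative restatement of EMC's Four Phases Model gating: a project
# advances one phase at a time and only with executive sign-off.

PHASES = ["Phase 0: Opportunity Captured", "Phase I: Planning",
          "Phase II: Agile Development", "Phase III: Delivery/Closure"]

class Project:
    def __init__(self, name):
        self.name = name
        self.phase = 0          # index into PHASES
        self.closed = False

    def advance(self, executive_sign_off: bool) -> bool:
        if self.closed or not executive_sign_off:
            return False        # no sign-off, no phase transition
        if self.phase < len(PHASES) - 1:
            self.phase += 1
        else:
            self.closed = True  # final delivery/closure completed
        return True

p = Project("hypothetical storage PoC")
p.advance(executive_sign_off=True)   # Phase 0 -> Phase I
print(PHASES[p.phase])
```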
D.2.5 Methods and Tools
The scientists of EMC use the following tools to promote innovation along the entire innovation process chain:

Table D.5 Tools used to support research activities at EMC
Phase 0: Home-grown tool for proposal creation
Phase 1: Wiki pages are used for sharing Proof of Concept (PoC) details
Phase 2: Wiki pages are used for sharing PoC details; XPlanner project management tool; ATSG eRoom; CVS – concurrent versioning system; Bugzilla for bug tracking
Phase 3: Dashboard reporting tool
Tools spanning the complete research process: Open Innovation(a); the KAIZEN(b) framework; Scrum(c) rules; templates for use case specs, business cases, MRDs (Market Requirement Documents) and SWOT analyses; collaboration tools such as blogs and the Communicator IM pilot; PETS (Project Engagement Tracking System); RSS Aggregator; ENDL Web Server; XPlanner; Autonomy
Source: Compiled based on interviews with representatives of EMC (The detailed list of EMC interviewees is in Appendix C)
(a) Chesbrough (2003)
(b) Masaaki and Heymanns (1999)
(c) Sutherland (2004), Schwaber (2004)
D.2.6 Research Goals
The goals of EMC’s research organization are closely linked to EMC’s mission: “Identify and act upon opportunities to accelerate EMC’s growth via Advanced Technology Solutions and emerging technologies to foster and drive disruptive innovation for EMC”.12
12 Internet source: http://www.emc.com/about/emc-at-glance/corporate-profile/index.htm
Table D.6 Relative weighting of EMC's ATSG research goals

Name | Description | Weighting (research view) | Weighting (corporate view)
1. New opportunities | Identify and explore emerging technologies that are disruptive to EMC and/or their competitors | 20 | 30
2. Advanced Technology Solutions | Build out technology-leveraged solutions in key areas of the EMC strategy to codify the technical strategy in the form of proofs of concept or prototypes | 25 | 20
3. Alignment & Transfer | Collaboration with other business units | 10 | 10
4. Technology evaluation | Evaluation and incubation of emerging technologies (due diligence, prototypes, integration and showcases) | 20 | 20
5. Proof of concept (PoC) development | Evangelize process-driven innovation; an agile, iterative innovation model making ideas real | 5 | 10
6. Protect technology | Identify patentable technology and move it through the patent process | 5 | 10
7. Operational Excellence | Ensure high effectiveness and efficiency of processes | 15 | 0
∑ = 100 | ∑ = 100
Source: Compiled based on interviews with representatives of EMC (The detailed list of EMC interviewees is in Appendix C)
D.2.7 Research KPIs
The following table lists the key performance indicators used by EMC, together with the metrics used to evaluate them.

Table D.7 Indicators and metrics used to measure EMC's ATSG
Subgoal | KPI/metric
Portfolio opportunity management | Subjective evaluation by the Research Program Manager with binary response options (yes/no, good/bad)
Due diligence | yes/no
Component deliveries | #
Opportunities moved to the next phase | #
Component consumption | # of times used in business units
Other BU involvement into project proposal and then recipient | Ratio, # of projects
Projects that were adopted by a BU | # of projects
High-level business case developed | yes/no
Use case defined | yes/no
Estimated effort of prioritized functionality broken down into demonstrable increments | person days
Executive commitment (executive sign-off in the four phases model) | yes/no
Defined exit strategy (plan for transfer into other units) | yes/no
Opportunities evaluated for protecting technology | #
Consumer or interested party identified | yes/no
Customer feedback | positive/negative evaluations
Projects (successfully) terminated at each phase | #
Source: Compiled based on interviews with representatives of EMC (The detailed list of EMC interviewees is in Appendix C)
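Because so many of these indicators are binary (yes/no) rather than numeric, a project's KPI record is essentially a checklist plus a few counters. A minimal sketch of such a record, with invented field names and values:

```python
# Minimal representation of an EMC-style KPI record: binary criteria plus
# counters. Field names and values are invented for illustration.

kpi_record = {
    "due_diligence": True,                   # yes/no
    "high_level_business_case": True,        # yes/no
    "use_case_defined": False,               # yes/no
    "executive_commitment": True,            # yes/no
    "defined_exit_strategy": False,          # yes/no
    "component_deliveries": 3,               # count
    "opportunities_moved_to_next_phase": 1,  # count
}

binary = {k: v for k, v in kpi_record.items() if isinstance(v, bool)}
fulfilled = sum(binary.values())
print(f"{fulfilled}/{len(binary)} binary criteria met")
```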
D.2.8 Mapping Goals and KPIs
The following table shows the mapping of KPIs to goals and their relative weighting, as developed with EMC2 during a series of interviews. The table thus reflects a subjective assessment of which goals the listed KPIs are likely to affect.
Table D.8 Mapping KPIs to goals and their relative weighting at EMC's ATSG (each goal is listed with the KPIs that feed it and their weights within that goal; the weights for each goal sum to 100)

New opportunities:
• Due diligence 30 • Opportunities moved to the next phase 70
Advanced Technology Solutions:
• Component deliveries 60 • Component consumption 40
Alignment & Transfer:
• Other BUs' involvement into project proposal and then recipient 30 • Projects that were adopted by a BU 70
Technology evaluation:
• Portfolio opportunity management 10 • Due diligence 60 • Opportunities moved to the next phase 30
PoC development:
• Opportunities moved to the next phase 25 • High-level business case developed 15 • Use case defined 30 • Estimated effort, prioritized functionality broken down into demonstrable increments 5 • Executive commitment (executive sign-off in the four phases model) 15 • Defined exit strategy (plan for transfer into other units) 10
Protect technology:
• Opportunities evaluated for protecting technology 100
Operational Excellence:
• Consumer or interested party identified 10 • Customer feedback 30 • Projects successfully terminated at each phase 60
Source: Compiled based on interviews with representatives of EMC (The detailed list of EMC interviewees is in Appendix C)
D.2.9 Conclusions
Clear research objectives with coherent weightings from the department and company perspectives are available. There was no concrete assignment of KPIs to the research goals of the department.
• Strong emphasis on internal exploitation of results:
  – KPIs are very much business oriented
  – Mainly subjective evaluation
  – Almost half of the KPIs used are yes/no criteria
• There are mainly one or two KPIs used to measure a single goal.
D.3 IBM
Status: 2008
D.3.1 Company Profile
The International Business Machines Corporation (IBM) is a globally integrated technology and business company with its headquarters located in Armonk, USA. IBM offers computer software and hardware as well as technology and business consulting services to its customers, and provides financing and asset management services to companies selling or acquiring IT-related products and services.13 As of December 31, 2007, the company operates in 170 countries around the world and employs 386,558 people, including the employees at its wholly owned subsidiaries. With 121,000 employees, the U.S. is the largest single country of employment, implying that approximately 69% of the workforce is employed outside the U.S. Within the last years, IBM expanded its businesses especially in the BRIC countries (Brazil, Russia, India and China), where employment totals approximately 98,000 (74,000 employees are located in India).14,15 In the financial year 2007, IBM achieved total revenues of $98.8 billion and a net income of $10.4 billion. Research, development and engineering expense was $6.2 billion, of which $5.4 billion (5.5% of revenues) was spent on scientific research and the application of scientific advances; the remainder was expense for product-related engineering.
13 Internet source: http://www.ibm.com/investor/company/index.phtml
14 Form 10-K, p. 6: ftp://ftp.software.ibm.com/annualreport/2007/2007_ibm_10k.pdf
15 Annual Report 2007, p. 50: ftp://ftp.software.ibm.com/annualreport/2007/2007_ibm_annual.pdf
With the Service Division excluded, the company’s investment in R&D was approximately 15 percent of its combined hardware and software revenue.16,17,18
D.3.2 R&D Characteristics
The portfolio of IBM Research is characterized by broad diversification, as research is conducted in the areas of:
• Chemistry
• Computer Science & Engineering
• Electrical Engineering
• Materials Science
• Mathematical Sciences
• Physics
• Services Science, Management, & Engineering
• Systems19
Figure D.11 is derived from Brown and Svenson20 and shows the main outputs of IBM Research. A special characteristic of IBM Research is the generation and exploitation of Intellectual Property (IP). In 2007, IBM was awarded more U.S. patents than any other company – the 15th year in a row in which IBM received the most U.S. patent grants (see Fig. D.12). As a result, IBM annually achieves IP income of approximately $1 billion.21 IBM Research conducts basic as well as applied research. The time horizon of the investment in research projects is generally between 12 and 36 months (>50% of investment). Some projects have a shorter time to market (~15%), and the remaining projects aim at an implementation of the research results in 3–5 years (~15%) or in more than 5 years (~10%).
16 Internet source: http://www.ibm.com/investor/company/index.phtml
17 Internet source: http://www.ibm.com/annualreport/2007/note_p.shtml
18 Annual Report 2007, p. 21: ftp://ftp.software.ibm.com/annualreport/2007/2007_ibm_annual.pdf
19 Internet source: http://www.research.ibm.com/areas.shtml
20 Brown and Svenson (1998), pp. 30–35.
21 Annual Report 2007, p. 21: ftp://ftp.software.ibm.com/annualreport/2007/2007_ibm_annual.pdf
[Fig. D.11 Definition of inputs and outputs of IBM Research (Source: Samsonowa, Buxmann and Gerteis (2009)): inputs (talents, scientific results, global trends, market/client requirements; funding from corporate sources, brands, governments and IP income) feed the research processing system (researching, publishing, prototyping, developing, testing), whose outputs (technology demonstrations/prototypes/specifications, product or solution components, IP, scientific publications/presentations, GTO, IP income, advice) flow to development as the receiving system and to other receiving units (marketing, business development, sales, consulting, eco-system, operations), producing outcomes (product and services innovation, improved marketing and branding, additional services and product revenue, cost reduction through IP income and process improvements); measurement occurs in-process (quarterly), on outputs (annually and after 2+ years) and on outcomes (after 3+ years).]
[Fig. D.12 U.S. patents 2007: IBM (#1) 3,125; Intel (#5) 1,864; Microsoft (#6) 1,637; HP (#9) 1,469 (Source: IBM Press release, April 2010 [Internet source: Corporations Go Public With Eco-Friendly Patents http://www-03.ibm.com/press/us/en/pressrelease/23280.wss, accessed April 2010])]
D.3.3 Research Organization Structure
IBM Research is, by its own statement, the largest IT research organization in the world, with more than 3,000 employees at eight research locations in the U.S., Europe and Asia.22,23 Although every location has its own focus areas, research themes and projects are not restricted exclusively to individual sites; there are quite a few cross-site projects (Fig. D.13).
[Fig. D.13 IBM Research locations: Watson 1945, Almaden 1955, Zurich 1956, Haifa 1972, Tokyo 1982, Austin 1995, Beijing 1995, Delhi 1998 (Source: IBM Research: Structure, Strategy, Vision, Management [IBM Research presentation, September 2008])]
IBM Research is an independent corporate entity, interconnected and interacting with all other IBM divisions and collaborating with external partners (clients, academia, governments, the scientific community, industrial partners and even competitors), which is seen as an important success factor for Research. A critical aspect of the research activities is the transfer of results to development. To meet this challenge, so-called "Joint Programs" were established. These are programs in which a substantial part of the workforce consists of researchers working at research sites but funded by a product group, supplemented by developers working jointly with them. By establishing links to development early in the program and by working on a shared agenda, the effectiveness of research can be assured. The idea of a shared agenda is expanded to also include customers in research activities in the context of first-of-a-kind projects and emerging-business projects.
22 Internet source: http://www.research.ibm.com/about/index.shtml
23 Internet source: http://www.research.ibm.com/worldwide/index.shtml
First of a Kind: Once IBM Research feels a technology has reached a level where practical benefit can be achieved, it partners with a leading-edge client that is prepared to explore the technology in a real-world situation.
Emerging Business Opportunities: Following a practical demonstration through a first-of-a-kind project, IBM works with a broader range of clients to examine the common needs for the technology and identify strategies to bring the technology to market.
The involvement of customers and of IBM's business units in the research activities via Joint Programs is the reason why approximately one third of the research headcount is externally funded and two thirds are core funded (cf. Fig. D.14).

[Fig. D.14 Budget situation at IBM Research: Joint Programs 40%, Core Funded 25%, Core Matching Joint Programs 25%, Others 10% (Source: IBM Research: Structure, Strategy, Vision, Management [IBM Research presentation, September 2008])]
D.3.4 Research Process
The planning and controlling process takes place in an annual cycle: it starts from a strategy-building activity and results in a resource plan aligned with the identified strategic directions (continued and new), also considering the redirection of resources (projects) from fields of decreasing strategic importance. Projects are continued depending on their strategic fit (as defined by the strategy) and according to technical merits. Continuing and new projects are defined and documented through project descriptions with technical and market milestones in the Strategy Management System database (SMS) (Fig. D.15). Project goals are cascaded into individual goals and documented in the Personal Business Commitment database (PBC). Assessments are conducted at mid-year and at year end, at various levels: personal, departmental, and strategic initiative (extending over multiple groups at different locations).
[Fig. D.15 Overview of planning and controlling research activities at IBM Research: an annual cycle from Spring (strategy: Global Technology Outlook; environment, vision and strategy; focus items) through Fall (plan creation: prioritization, technical plan, resources; high-level financial plan; resource allocation; milestones) and December (measurements: goals, accomplishments, relationship with IBM businesses, patents, external recognition, financials; awards, variable pay) to January (goal setting: Personal Business Commitments), supported by ongoing research activities, empowered researchers, relationship managers and supporting business processes (Source: IBM Research: Structure, Strategy, Vision, Management [IBM Research presentation, September 2008])]
D.3.5 Methods and Tools
Strategy building tools:
• Strategy meeting;
• Science & technology scan (scientific journals, conferences etc.) and a market scan (market research documents) to identify contributions for the strategy meeting;
• Deep dives with internal and external experts to identify new strategic directions.
Planning tools:
• SMS: database to document annual project descriptions with resources and milestones;
• PBC: database to document and assess personal goals.
Other methods and tools:
• Pet projects (10% of working time is dedicated to those);
• Wikicentral: wiki server to facilitate easy collaboration across geographies;
• Lotus Notes team rooms for documentation;
• On-site workshops and scientific conferences to preserve the scientific and technical vitality of research employees.
D.3.6 Research Goals
The goals of IBM Research are:
• To be vital to IBM's future success;
• To be famous for science and technology.
These goals are reflected in the mission and the strategy of the research organization, which is continually adapted to the realities of the market (see Fig. D.16). To fulfill its mission, IBM relies on a technical vision that covers all of its business areas and follows the strategy.

[Fig. D.16 Mission and strategy statements of IBM Research: "Through our technology innovation and leadership, we drive IBM's success": provide differentiating technology to our products, services and client offerings; establish IBM's image as a thought leader in the external world of science and technology; contribute to the creation of critical standards. "We use our fundamental research disciplines to create assets and values that boost IBM's business. We use various strategies to couple our assets and values with IBM products and services." (Source: IBM, Annual Report 2009 [Internet source: http://www.esr.cri.nz/SiteCollectionDocuments/ESR/Corporate/PDF/ESRAnnualReport2009.pdf])]
Table D.9 Relative weighting of IBM Research goals

Name | Description | Research view | Corporate view
1. Technology innovation and leadership | Being the first on the market with technology innovations, and being the best in the field | 50 | 60
2. Contribute to critical standards | Actively contribute to mission-critical industry standards like IEEE, OASIS, W3C | 20 | 10
3. Thought leadership | Being the best in, and coining, the scientific and technology community in selected areas | 10 | 5
4. Business impact | Create assets and values that boost IBM's business. This can take two forms: 1. create business impact directly from research together with external companies (the customer was predominantly won through the engagement of IBM Research), or 2. create business impact through research in services or products through transfers (the product or service only exists as a result of the efforts of IBM Research) | 20 | 25
∑ | | 100 | 100
Source: Compiled based on interviews with representatives of IBM (The detailed list of IBM interviewees is in Appendix C)
For IBM, research means helping its business grow on a shorter timescale and exploring new opportunities for the future. Applied and exploratory research therefore have to be in balance. To be indispensable to the company, the main criterion for the research organization is the successful transfer of research findings into IBM's business units.
D.3.7 Research KPIs
The following table shows the list of key performance indicators used by IBM Research together with the metrics that are used to evaluate them.
Table D.10 Indicators and metrics used to measure IBM Research

Subgoals | KPIs
IBM accomplishments | a. "normal" accomplishment ($10 million incremental revenue or $1 million cost avoidance); b. "outstanding" accomplishment ($100 million incremental revenue or $10 million cost avoidance); c. "extraordinary" accomplishment ($1 billion incremental revenue or $100 million cost avoidance)
Scientific accomplishments | External subjective evaluations by recognized experts in the research field: a. "normal" accomplishment (solved an unsolved problem and generated visibility around the research); b. "outstanding" accomplishment (newly opened a research field); c. "extraordinary" accomplishment (Nobel Prize)
Partnership assessment – technical achievements/technical impact | Subjective evaluation from partners (1–5)
Partnership assessment – shared research/business unit strategy | Subjective evaluation from partners (1–5)
Partnership assessment – strategy execution | Subjective evaluation from partners (1–5)
Partnership assessment – working beyond the plan horizon | Subjective evaluation from partners (1–5)
Partnership assessment – people/relationships | Subjective evaluation from partners (1–5)
IDFs | #
Patent filings | #
Patents | # of granted patents
External recognition | # of: member of a major honorary academy; major award/medal/prize; best paper award; award to IBM/other awards; fellow of a sci/tech society; president of a sci/tech society; honorary doctorate; editor; editor in chief; associate editor/editorial board; guest editor; conference/symposium chair; conference/symposium organization; keynote/plenary speaker; external government/professional/industry panel; university advisory board; standards panel; visiting scholar
Financials | % difference of the planned budget of each line item to the actual budget (the least deviation is rated best)
Source: Compiled based on interviews with representatives of IBM (The detailed list of IBM interviewees is in Appendix C)
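The accomplishment tiers and the "Financials" metric in Table D.10 translate directly into simple computations. The following Python sketch is illustrative only: the dollar thresholds are those listed above, while the function names and example figures are our own, not IBM tooling.

```python
# Illustrative sketch: classify an accomplishment by the tiers of Table D.10
# and compute the "Financials" KPI as percentage deviation from plan.

def accomplishment_tier(incremental_revenue=0.0, cost_avoidance=0.0):
    """Return the tier implied by Table D.10 (amounts in US dollars)."""
    tiers = [("extraordinary", 1e9, 100e6),
             ("outstanding", 100e6, 10e6),
             ("normal", 10e6, 1e6)]
    for label, revenue_min, avoidance_min in tiers:
        if incremental_revenue >= revenue_min or cost_avoidance >= avoidance_min:
            return label
    return "below threshold"

def budget_deviation_pct(planned, actual):
    """% difference of planned to actual budget; the least deviation rates best."""
    return abs(actual - planned) / planned * 100.0

print(accomplishment_tier(incremental_revenue=150e6))        # -> "outstanding"
print(round(budget_deviation_pct(2_000_000, 2_150_000), 1))  # -> 7.5
```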
D.3.8 Mapping Goals and KPIs
The following table shows the mapping of KPIs to goals and their relative weighting, which was developed during an interview with a responsible manager from IBM Research. The table therefore reflects a subjective assessment of which goals the listed KPIs are likely to affect.
Table D.11 Mapping KPIs to goals and their relative weighting at IBM Research

KPIs | Technology innovation and leadership | Contribute to critical standards | Thought leadership | Business impact
IBM accomplishments (impact on IBM business; tiers a, b, c as above) | 40 | 40 | – | 70
Scientific accomplishments (impact on the scientific topic; same tiers a, b, c) | – | 40 | 40 | –
Partnership assessment – technical achievements/technical impact | 20 | 20 | – | 5
Partnership assessment – shared research/business unit strategy | – | – | – | 2
Partnership assessment – strategy execution | – | – | – | 2
Partnership assessment – people/relationships | – | – | – | 2
IDFs | 10 | – | – | –
Patent filings | – | – | – | –
Patents (granted) | 10 | – | – | –
External recognition | – | – | 50 | –
Financials | – | – | – | 9
∑ | 100 | 100 | 100 | 100
Source: Compiled based on interviews with representatives of IBM (The detailed list of IBM interviewees is in Appendix C)
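To illustrate how such a mapping table turns KPI results into goal-level scores, consider the following sketch. The KPI weights for the "Contribute to critical standards" goal are taken from the reconstructed Table D.11; the achievement levels are invented for the example, and the computation itself is our illustration rather than IBM's documented procedure.

```python
# Illustrative sketch: a goal's score is the weighted average of the degree of
# achievement (0.0-1.0) of the KPIs mapped to it. Weights per goal sum to 100,
# as in Table D.11; the achievement levels below are hypothetical.

weights = {
    "Contribute to critical standards": {
        "IBM accomplishments": 40,
        "Scientific accomplishments": 40,
        "Partnership assessment - technical achievements": 20,
    },
}

achievement = {  # hypothetical measured achievement per KPI
    "IBM accomplishments": 0.7,
    "Scientific accomplishments": 0.9,
    "Partnership assessment - technical achievements": 0.8,
}

for goal, kpi_weights in weights.items():
    assert sum(kpi_weights.values()) == 100  # convention used in the case studies
    score = sum(w * achievement[k] for k, w in kpi_weights.items()) / 100
    print(f"{goal}: {score:.2f}")  # -> 0.80
```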
D.3.9 Conclusions
The research organization of IBM features some characteristics of a profit center. The financial exploitation of research results is a vital part of the research activities; research outputs can even be distributed to competitors if the IBM business units have no capacity for, or are not willing to adopt, the research results. The protection of Intellectual Property is thus a prerequisite for the generation of license income and, in combination with the creation of publications, the criterion for the success of a research project at IBM.
D.4 Intel

Status: 2008

D.4.1 Company Profile
Intel Corporation is, by its own account, the world's largest semiconductor chip maker based on revenue. Besides microprocessors and chipsets, Intel offers motherboards, flash memory and storage products, wired and wireless internet connectivity and communication infrastructure products, as well as software products and services.24,25 As of January 2008, Intel had approximately 86,300 employees worldwide, with more than 50% of them located in the U.S. After restructuring the company in combination with headcount reductions, the number of employees had fallen considerably from approximately 99,900 as of December 31, 2005.26 In 2007, Intel achieved revenue of $38.33 billion and a net income of $6.98 billion. Intel's expenditures for research and development were $5.8 billion, which means that in 2007 more than 15% of revenues were reinvested in R&D activities.27,28
D.4.2 R&D Characteristics
In-house R&D at Intel is conducted in three different organizations: the Corporate Technology Group, the Technology Manufacturing Group and IT Research. The present case study refers to the latter. Figure D.17 is derived from Brown and Svenson29 and shows the main outputs of Intel IT Research. Intel pursues an open research approach: especially when the internal development of a new technology would be cost-prohibitive for Intel on its own and/or a whole industry is affected, collaboration with academia and participation in industry consortia and government grants are intended. The research activities within IT Research are classified into four different types:
• Research: Applied and exploratory academic and internal research focused on game-changing technologies and methods, pursued to the point where it can either enter development or significantly modify Intel or industry roadmaps and plans.
• Concept Cars: Rapid development of mockups, prototypes, etc. that demonstrate the expertise in a certain area, in order to understand usage models and the application of technology while avoiding the heavy lifting of building enterprise-worthy solutions.
• Technology Development: Roadmap-driven development to disposition new technology and upstream research results, to update roadmaps and architectures, and to drive enterprise-class solutions into the early stages of the Product Life Cycle (PLC).
• Pathfinding: Best-path determination or solving a key unknown, in order to accelerate programs and influence the direction of capability roadmaps, business practices, and architecture opportunities.

[Fig. D.17 Definition of inputs and outputs of Intel IT Research (Source: Samsonowa, Buxmann and Gerteis (2009)): inputs feed the research processing system (researching, pre-developing, testing, reporting results), whose outputs (market analyses, IDFs/patents, future trends, technology and market validations; demonstrations/prototypes/PoCs, technical notes/white papers, publications/presentations, pilots, business cases, roadmap influence, service/support, training material, proof points, influence on standards, talent, living labs, descriptive technologies) flow to development and other receiving units, producing outcomes (sales improvement, cost reduction, product improvements, capital avoidance, opening up of new markets, improved IT capabilities and operations); measurement occurs in-process (quarterly), on outputs (annually and after 2+ years) and on outcomes (after 3+ years).]

24 Form 10-K, p. 1: http://media.corporate-ir.net/media_files/irol/10/101302/2007annualReport/common/pdfs/intel_2007_form_10-K.pdf
25 Internet source: http://techresearch.intel.com/articles/index.html?iid=subhdr+tech_research
26 Form 10-K, p. S.8: http://media.corporate-ir.net/media_files/irol/10/101302/2007annualReport/common/pdfs/intel_2007_form_10-K.pdf
27 Internet source: http://www.intc.com/phoenix.zhtml?c=101302&p=irol-fundHighlightsA
28 Earnings Release Q4 2007: http://media.corporate-ir.net/media_files/irol/10/101302/2007Q4_Intel_Earnings_Release_1245_update.pdf
29 Brown and Svenson (1998), pp. 30–35.
D.4.3 Organization Structure

Intel Corporate Technology Group

The Corporate Technology Group (CTG), headed by the Chief Technology Officer, focuses on research, including technology advancement in areas such as communications, systems and processor technologies, as well as technology policies and standards bodies. The high-level organization of Intel's Corporate Technology Group is shown in Fig. D.18.
[Fig. D.18 Organizational structure of the Corporate Technology Group under the Intel CTO: Intel Research, Communications Tech Lab, System Tech Lab, Microprocessor Tech Lab, and Technology Policy and Standards (Source: Design based on interviews conducted with Intel's staff [The detailed list of Intel interviewees is in Appendix C])]
Intel Research Organization Structure

Intel Research focuses primarily on research initiatives in the areas of architecture & silicon, platform technology and eco-technology innovation, in addition to exploratory research. Exploratory research at Intel advances the state of the art through world-class technical expertise, open collaboration and university ties, as well as multi-disciplinary teams. The Intel Research organization is depicted in Fig. D.19 below.
[Fig. D.19 Organizational structure of Intel IT Research: People and Practices Research, Intel Research Network of Labs, Research Escalation & Tech Transfer, and External Programs (Source: Design based on interviews conducted with Intel's staff [The detailed list of Intel interviewees is in Appendix C])]
Intel Research accomplishes its research goals through a University Grant Program as well as an active worldwide network of labs, including Seattle, Santa Clara, Berkeley, Israel, Beijing and Bangalore.
IT Research and Technology Development

IT Research and Technology Development (IT R&TD) within Intel is specifically focused on accelerating the adoption of innovative IT capabilities through applied research and technology development. While IT R&TD is part of the Strategy, Architecture and Innovation function, it collaborates extensively with internal organizations, including the Corporate Technology Group (CTG), Intel Research, the Technology Manufacturing Group (TMG) and the platform groups, as well as externally with other research institutions and universities worldwide.
IT R&TD Locations

Taking advantage of Intel's global presence, enabling collaboration with leading research institutions worldwide and closely monitoring emerging-market technology trends, IT R&TD is located in key geographic locations including the Americas, China, Israel and India. Figure D.20 highlights the regions and the associated focus areas for IT R&TD.
[Fig. D.20 Location and focus areas of IT Research at Intel: project areas include Enterprise Collab, Design Collab, Employee Prod, B2B Integration, Business, Biz Intelligence, Data Center and Infrastructure, split between Research and Pathfinding/Technology Development (Source: Design based on interviews conducted with Intel's staff [The detailed list of Intel interviewees is in Appendix C])]
IT R&TD at Intel is managed through a Research and Technology Development Council. The mission of the IT R&TD Council is to accelerate Intel's competitive advantage through breakthrough exploratory and applied research and development in information technology capabilities and systems. The vision is to be the industry leader and role model for information technology research and development worldwide. The R&TD Council structure is shown in Fig. D.21.
[Fig. D.21 Organizational structure of IT Research at Intel: the R&TD Council spans Intel Research and the committees Collaboration & Productivity Innovation, Business Agility, and Infrastructure & Operations (Source: Design based on interviews conducted with Intel's staff [The detailed list of Intel interviewees is in Appendix C])]
The strategic focus of research activities is determined by the R&TD Council, which oversees the whole research process. It consists of a combination of people from the business groups, the IT department and principal engineers. The Council sets the strategic priorities and investment guidelines within the three R&D themes:
• Infrastructure & Operations
• Business Agility
• Collaboration & Productivity Innovation.
The research portfolio of each theme is managed by a committee, which allocates the investments within the research area and sets up a research agenda. In case of interdisciplinary overlap, the committees can initiate a cross-committee workgroup to maximize the return of complementary research themes.
D.4.4 Research Process
The IT Research roadmap is clearly defined in a step-by-step product lifecycle (PLC) model. The early stages of the PLC (Research, Pathfinding and Technology Development) represent those activities that typically follow the different maturity levels of a technology. As an exception, opportunistic pathfinding projects leave out some steps of the lifecycle in order to drive a result from Research or Pathfinding to rapid adoption. In this context, the overall results of the research organization can also be deployed externally to generate licensing income (Fig. D.22).
[Fig. D.22 Product lifecycle model at Intel: inputs (innovation ideas, internal IT R&D, academic R&D, broad Intel IT R&D) move through Research, Pathfinding & TD, Architecture, Engineering and Operations along the Intel PLC path, yielding patents/IP, proof points, licensing, design influence wins, IT production deployments and rapid adoption (Source: Intel IT Research, An Overview IT Research and Development [Intel IT Research presentation, June 2008])]
D.4.5 Methods and Tools
With the Research Value Index (RVI), a mechanism is in place that conforms to a set of quality gates. Every research project has to be reviewed by the R&TD Council prior to the funding decision. The Council applies eleven different RVI criteria that add up to a final score on which a project proposal is valued: the higher the score, the higher the funding probability for the proposal. If a project is approved for funding, the RVI is calculated for the duration of the project, and the actual performance is periodically measured against the expected RVI on the basis of the same eleven criteria that were applied for the funding decision. If the actual RVI score falls below a certain threshold at a review (20 percent below the calculated RVI), the project lead has to explain the deviation; if necessary, the strategic or technical focus of the project is adjusted to changing environmental influences or, in the worst case, the project is cancelled. The weightings applied to the eleven criteria depend on the priorities and the type of project (Research, Pathfinding or Technology Development).
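The RVI gate logic lends itself to a compact sketch. The criterion names and weights below are hypothetical placeholders (the case study does not publish Intel's actual weightings); only the eleven-criteria weighted score and the 20-percent review threshold follow the description above.

```python
# Illustrative sketch of an RVI-style quality gate (criterion names and weights
# are hypothetical; the 20% threshold follows the case-study description).

def rvi_score(ratings, weights):
    """Weighted sum of the eleven criterion ratings (e.g. each rated 1-5)."""
    assert set(ratings) == set(weights)
    return sum(weights[c] * ratings[c] for c in ratings)

def review(expected_rvi, actual_rvi, threshold=0.20):
    """Flag a project whose actual RVI falls more than 20% below expectation."""
    if actual_rvi < (1 - threshold) * expected_rvi:
        return "explain deviation: adjust focus or cancel project"
    return "continue funding"

weights = {"customer alignment": 0.15, "IT roadmap alignment": 0.10,
           "platform alignment": 0.10, "portfolio fit": 0.05,
           "business impact": 0.15, "transfer potential": 0.10,
           "harvesting breadth": 0.05, "industry influence": 0.10,
           "innovation level": 0.10, "risk": 0.05, "team track record": 0.05}

expected = rvi_score({c: 4 for c in weights}, weights)  # funding-decision score
actual = rvi_score({c: 3 for c in weights}, weights)    # periodic re-measurement
print(review(expected, actual))  # 3 is 25% below 4 -> triggers a review
```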
D.4.6 Research Goals
These goals apply to the Intel IT R&TD organization. Intel measures itself on six values: risk-taking, discipline, results-orientation, customer-orientation, a great place to work, and quality. The influence factors are aligned to those values and follow the approach of "Management by Objectives",30 which applies to individual employees.
30 Drucker (2007).
The personal goals of the researchers are derived from the research goals (influence factors) and are therefore linked with the corporate goals of Intel Corporation.

Table D.12 Relative weighting of Intel IT Research goals

Name | Description | IT view | Corporate view
External influence
Industry influence | Influence the IT industry through BKMs in specific areas including Collaboration and Productivity, Infrastructure Operations and Business Agility | 10 | 20
Image | Robust university engagements including joint Intel and university research centers, journal and conference publications | 10 | 20
Intellectual property | Create and file Invention Disclosure Forms to create IP for Intel | 10 | 10
Internal influence
IT strategy | Influence IT strategy with key contributions to Strategic Long Range Planning (SLRP) and T-SLRP | 20 | 10
Enterprise architecture | Ensure close alignment and research influence on application and IT architecture, including exploration and proofs of concept of 3-year goal-state architectures (Blue Books) | 20 | 10
xPG influence | Collaboration with Intel's cross product groups for inputs on key features of upcoming products, including joint Pathfinding with CTG | 20 | 20
People | Create an environment for thought leadership to foster innovation and retain key talent | 10 | 10
∑ | | 100 | 100
Source: Compiled based on interviews with representatives of Intel (The detailed list of Intel interviewees is in Appendix C)
D.4.7 Research KPIs
The following table shows the list of key performance indicators used by Intel IT R&TD. The RVI indicators are used to evaluate research and technology development projects.
Table D.13 Indicators and metrics used to measure Intel IT Research

Subgoals | KPIs
RVI indicators:
Alignment with customer needs, objectives and roadmaps | Semi-quantitative, subjective evaluation
Alignment with IT strategic roadmaps | Semi-quantitative, subjective evaluation
Alignment with platform groups on strategic positioning and product direction | Semi-quantitative, subjective evaluation
Fit of investment in relation to the portfolio mix | Semi-quantitative, subjective evaluation
Business impact as measured by value dials or financial valuation | Semi-quantitative, subjective evaluation
Expected technology/knowledge transfer to the next phase of the lifecycle or roadmap influence | Semi-quantitative, subjective evaluation
Breadth of harvesting opportunity/impact | Semi-quantitative, subjective evaluation
Industry and ecosystem influence | Semi-quantitative, subjective evaluation
Level of innovation or breakthrough opportunity | Semi-quantitative, subjective evaluation
Technological, resource and planning risk | Semi-quantitative, subjective evaluation
Stakeholder commitment and team track record | Semi-quantitative, subjective evaluation
R&TD indicators:
Number of invention disclosure forms (IDFs) | # submitted
Number of IDFs approved for patent filing, publishing or trade secrets | # approved by the review committee
R&TD proof points | # of objective evidences for Intel's research activities, e.g. novel IT architecture, usage model, stimulation of new Intel platform market growth, significant Intel visibility or recognition
R&TD committee alignment & transfer | # of joint- or co-funded committee projects, intra-committee transfers
Industry leadership | External publications, keynotes, chairmanships, industry standards influence and external showcases
Intel leadership | Intel internal conferences, showcases, internal publications, chairmanships, roadshows, training material
Source: Compiled based on interviews with representatives of Intel (The detailed list of Intel interviewees is in Appendix C)
D.4.8 Mapping Goals and KPIs
The following table shows the mapping of KPIs to goals and their relative weighting. Each KPI's relative weighting was developed in coordination with the Research Council members.
Table D.14 Mapping KPIs to goals and their relative weighting at Intel IT Research

KPIs | Industry influence | Image | Intellectual property | IT strategy | Enterprise architecture | xPG influence | People
Alignment with customer needs, objectives and roadmaps | – | – | – | 15 | 25 | 15 | 10
Alignment with IT strategic roadmap | – | – | – | 25 | 15 | 15 | 10
Alignment with platform groups on strategic positioning and product direction | – | – | – | 15 | 15 | 25 | 10
Fit of investment in relation to the portfolio mix | – | – | – | 10 | 10 | – | –
Business impact as measured by value dials or financial valuation | – | – | – | 10 | 10 | 10 | –
Expected technology/knowledge transfer to the next phase of the lifecycle or roadmap influence | – | – | – | 10 | 10 | – | –
Breadth of harvesting opportunity/impact | 10 | – | – | – | – | – | –
Industry and ecosystem influence | 30 | – | – | – | – | – | –
Level of innovation or breakthrough opportunity | 10 | 10 | – | – | – | – | –
Technological, resource and planning risk | – | – | – | 5 | 5 | – | –
Stakeholder commitment and team track record | – | – | – | 10 | – | – | –
Number of invention disclosure forms (IDFs) submitted | 10 | – | 25 | – | – | – | –
Number of IDFs approved for patent filing, publishing or trade secrets | 10 | 10 | 25 | – | – | – | –
R&TD proof points | 10 | 20 | 10 | – | – | 10 | –
R&TD committee alignment & transfer | – | – | – | – | – | – | –
Industry leadership | 20 | 15 | 10 | – | – | – | –
Intel leadership | – | – | – | – | – | 15 | –
∑ | 100 | 100 | 100 | 100 | 100 | 100 | 100
Source: Compiled based on interviews with representatives of Intel (The detailed list of Intel interviewees is in Appendix C)
D.4.9 Conclusions
The measurement of the performance of a project is based on the subjective assessment of the members of the R&TD Council. However, in order to make a reliable comparison of research projects possible, the factors influencing the assessment are kept uniform:
• By applying different weightings, research projects of all types are normalized and therefore made comparable.
• External and internal projects are assessed in the same way.
• The same criteria are applied to research projects of all types.
• Continuity is maintained among the members of the R&TD Council as the decision body.
D.5 Microsoft

Status: 2007

D.5.1 Company Profile
Microsoft Corporation31,32 is a multinational computer hardware and software company divided into three core business divisions:
• Microsoft Platforms & Services Division: includes the Windows Business Group, the Server & Tools Group, and the Online Services Group (e.g. Windows Vista, MS SQL Server).
• Microsoft Business Division: includes the Information Worker Group, the Microsoft Business Solutions Group, and the Unified Communications Group (e.g. Microsoft Office system, Microsoft Dynamics business solutions).
• Microsoft Entertainment & Devices Division: includes the Home & Entertainment Group and the Mobile & Embedded Devices Group (e.g. Xbox 360, Zune, Windows Mobile).
The firm was founded in 1975, is based in Redmond, USA, and is the worldwide leader in software, services and solutions. Today, Microsoft has subsidiary offices in 100 countries around the world and employs 78,565 people worldwide, of whom 47,645 work inside the USA (as of June 30, 2007) and approximately 31,000 in product research and development.
31 Internet source: http://www.microsoft.com/presspass/inside_ms.mspx
32 Internet source: http://www.microsoft.com/msft/reports/ar07/staticversion/10k_fr_bus_06.html
In the fiscal year 2007 (ended June 30, 2007), Microsoft generated revenues of $52.12 billion and a net income of $14.01 billion. The expenses for research and development added up to $7.12 billion, about 14% of revenue.33
D.5.2 R&D Characteristics
Research at Microsoft is conducted in several divisions within the company. The present case study refers to the research activities of Microsoft Research (MSR) and the European Microsoft Innovation Center (EMIC), located in Aachen, Germany. As the performance measurement process at MSR is restricted to counting scientific publications, the main focus of the case study in the following is on the procedures of the EMIC. Figure D.23 is derived from Brown and Svenson34 and shows the main outputs of the EMIC.

[Fig. D.23 Definition of inputs and outputs of the EMIC (Source: Samsonowa, Buxmann and Gerteis (2009)): inputs (people, ideas, equipment, facilities, funds, information, specific requests) feed the research processing system (researching, pre-developing, testing, reporting results), whose outputs (market analyses, IDFs, patent filings, patents, technology and market validations, demonstrations/prototypes/PoCs, technical notes/white papers, pilots, concepts, business cases, service/support, publications/presentations) flow to development and other receiving units, producing outcomes (sales improvement, cost reduction, product improvements, capital avoidance); measurement occurs in-process (quarterly), on outputs (annually and after 2+ years) and on outcomes (after 3+ years).]
33 Internet source: http://www.microsoft.com/msft/earnings/FY07/earn_rel_q4_07.mspx
34 Brown and Svenson (1998), pp. 30–35.
MSR was founded in 1991 with the aim of doing long-range basic research modeled on academic research institutions, while giving researchers the ability to apply the results of their research to the real world via Microsoft's products. In contrast, the EMIC,35 founded in 2003, is focused on collaborative applied research and the creation of advanced technology that could reach the market within three to six years. There are two types of projects:
• Consortium projects, which are funded by the European Commission and the German Ministry. Typically such projects have a duration of 24–48 months. These projects are steered by a strictly defined roadmap with clearly assigned timelines and deliverables for every partnering company. The timely progress and the success of the project are ensured by regular reviews by the European Commission, which are important milestones and can be considered quality gates. The purpose of these projects is to enable the in-flow and out-flow of Microsoft and partner technologies and to seed market adoption of these technologies.
• Internal research projects within the single research areas. These projects either precede or run in parallel to the collaborative projects to ensure proper preparation and a continuous project pipeline. Within these projects, the internal technology transfer takes place.
EMIC is fully funded by headquarters, and there are no strict measures with regard to a required external funding rate. EMIC's core mission is to engage in collaborative research projects of the European Framework Program.
D.5.3 Research Organization Structure
MSR and the EMIC can also be differentiated by organizational structure: the two departments are fully autonomous from each other and from the rest of the company, and have different responsible managers (Fig. D.24). The Director of the EMIC reports directly to the chief research and strategy officer (CRSO), who in turn reports to the CEO of Microsoft Corporation (Fig. D.25). The EMIC is unique within Microsoft in its focus on applied research and its contribution to European Commission and other public-sector research programs. Thus, it is a standalone entity with exclusive responsibility to the CRSO (Fig. D.26). Three research areas are tackled at the European Microsoft Innovation Center (Fig. D.27).
35 Internet source: http://www.microsoft.com/emic/default.mspx; http://www.microsoft.com/emic/mission.mspx
[Fig. D.24 Organizational structure of MSR and EMIC: the Office of the CEO oversees Human Resources & Processes, Finance & Administration, the Office of the CSO, the Product Divisions and the COO; Research and Incubation report to the Office of the CSO (Source: EMIC Overview, June 2008 [Internet source: www.microsoft.com/emic])]
[Fig. D.25 Organizational structure of EMIC: the Director leads a Chief Program Manager, a Chief Architect, a Chief Research Manager, and Research Area Managers for "home", "enterprise security", "enterprise workflows", "software verification" and "mobile embedded" (Source: EMIC Overview, June 2008 [Internet source: www.microsoft.com/emic])]
[Fig. D.26 Distribution of MSR research and incubation labs: Redmond 1991, Cambridge 1997, Asia 1998, Silicon Valley 2001, EMIC 2003, India 2005, Cairo 2006, New England 2008, divided into research labs and incubation labs (Source: Design based on interviews conducted with representatives of Microsoft [The detailed list of Microsoft interviewees is in Appendix C])]
[Fig. D.27 Research areas at EMIC: Mobile & Embedded (sensor and context management, mobile peer-to-peer, network awareness, mapping and location framework, distributed embedded systems), Home (HomeFx, the programming framework for the connected home; recommender systems) and Enterprise (web services security and privacy, service level agreements, workflow, qualified signatures) (Source: EMIC Overview, June 2008 [Internet source: www.microsoft.com/emic])]
D.5.4 Research Process
The strategic direction is defined by the responsible managers. That means research projects are initiated and reviewed by a few authorized individuals; their subjective opinion is the only quality control for a project, and they decide whether to stop a project, to proceed, or to adjust its direction (Fig. D.28).
[Fig. D.28 EMIC's position within the R&D rhythm of Microsoft: a timeline from pure research (Microsoft Research Cambridge, universities; 15 to 10 years before product ships) via applied research/advanced technologies (EMIC, industry partners; about 5 years) to development and deployment (Center of Software Development Localisation Dublin, ISVs; 0 years/product ships) (Source: EMIC Overview, June 2008 [Internet source: www.microsoft.com/emic])]
D.5.5 Methods and Tools
• Research Roadmap (updated several times a year)
• Project reviews (post mortem)
D.5.6 Research Goals
The EMIC has a clearly defined mission:
• To contribute knowledge to the European technology base by participating with universities, research institutes and companies in collaborative research projects;
• To learn from European areas of excellence and expertise and transfer the results to society as a whole through products and technologies;
• To foster partnerships and relationships with European industry and academia.
The goals of the EMIC are influenced by the specific situation of a newly founded organization as well as by the fact that the organization is located in Europe, where it strongly benefits from the well-established Framework Program of the European Commission. This results in goals related to growth through publicly funded projects as well as to fostering partnerships with European industry and academia.
Table D.15 Relative weighting of EMIC goals

Name | Description | Research view | Corporate view
1. Intellectual property | Create and secure IP, which is a precondition for the usage of research results in future products to ensure commercially exploitable benefits | 25 | 20
2. Alignment & transfer | Give input to Business Units/Development by transferring research results to apply them in Microsoft's products | 25 | 25
3. Image | Create reputation (internal & external) to establish Microsoft and the EMIC as a valued collaboration partner | 10 | 15
4. Operational excellence | Ensure efficiency, effectiveness and success of the conducted research | 10 | 5
5. Growth | Growth strategy through the acquisition of new projects and the hiring of new personnel to reach a critical size | 10 | 10
6. Collaborative research | Foster partnerships and relationships with European industry and academia | 20 | 25
∑ | | 100 | 100
Source: Compiled based on interviews with representatives of Microsoft (The detailed list of Microsoft interviewees is in Appendix C)
D.5.7 Research KPIs
The following table shows the list of key performance indicators used by the EMIC together with the metrics that are used to evaluate them.

Table D.16 Indicators and metrics used to measure EMIC

Subgoals | KPIs
IDFs | #
Patent filings | #
Patents | #
Successful technology transfers | # of effective transfers that are validated by the Business Unit
Partner satisfaction of/with external partners (e.g. EU-project partners) | Internal and subjective evaluation of the cooperation with partners, used for decision-making concerning potential strategic partnerships
Successful projects (in-time, in-budget) | %
Partnership assessment | Subjective, qualitative evaluation of transfers by the responsible manager of the respective Business Unit, used for future transfer improvements
Achievement of technical goals | Subjective, qualitative evaluation by the project lead and program manager
Newly acquired projects | #
Source: Compiled based on interviews with representatives of Microsoft (The detailed list of Microsoft interviewees is in Appendix C)
D.5.8 Mapping Goals and KPIs
The following table shows the mapping of KPIs to goals based on a series of interviews with EMIC staff, since there is no explicit mapping in place at EMIC. In this regard, the table shows a subjective assessment of which of the listed KPIs are adequate means to measure the achievement of the corresponding goals. For the weighting of the KPIs we have assumed an equal contribution, in agreement with the EMIC staff.
Table D.17 Mapping KPIs to goals and their relative weighting at EMIC

KPIs | Intellectual Property | Alignment & Transfer | Image | Operational Excellence | Growth | Collaborative Research
IDFs | 33.3 | – | 8.33 | – | – | –
Patent filings | 33.3 | – | 8.33 | – | – | –
Patents | 33.3 | – | 8.33 | – | – | –
Successful technology transfers | – | 50 | 25 | – | – | –
Partner satisfaction of/with external partners (e.g. EU-project partners) | – | – | 25 | – | – | 33.3
Successful projects (in-time, in-budget) | – | – | 25 | 50 | 50 | –
Partnership assessment | – | 50 | – | – | – | 33.3
Achievement of technical goals | – | – | – | 50 | – | –
Newly acquired projects | – | – | – | – | 50 | 33.3
∑ | 100 | 100 | 100 | 100 | 100 | 100
Source: Compiled based on interviews with representatives of Microsoft (The detailed list of Microsoft interviewees is in Appendix C)
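The equal-contribution assumption behind Table D.17 can be made concrete with a short sketch. The goal-to-KPI mapping below follows the reconstruction above for the goals whose KPIs contribute evenly (Image is omitted, since its KPIs contribute unevenly); the function and variable names are our own illustration, not an EMIC artifact.

```python
# Illustrative sketch: under the equal-contribution assumption, each goal's
# total weight of 100 is split evenly across the KPIs mapped to that goal.
# The mapping mirrors Table D.17 (Image omitted: its KPIs contribute unevenly).

goal_to_kpis = {
    "Intellectual Property": ["IDFs", "Patent filings", "Patents"],
    "Alignment & Transfer": ["Successful technology transfers",
                             "Partnership assessment"],
    "Operational Excellence": ["Successful projects (in-time, in-budget)",
                               "Achievement of technical goals"],
    "Growth": ["Successful projects (in-time, in-budget)",
               "Newly acquired projects"],
    "Collaborative Research": ["Partner satisfaction", "Partnership assessment",
                               "Newly acquired projects"],
}

for goal, kpis in goal_to_kpis.items():
    share = round(100.0 / len(kpis), 1)  # 33.3 for three KPIs, 50.0 for two
    print(f"{goal}: " + ", ".join(f"{kpi} = {share}" for kpi in kpis))
```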
D.5.9 Conclusions
As long as the EMIC is still in a growth phase on its way to critical size, setting explicit targets is not considered reasonable, since a constant research environment is the basis for meaningful comparisons that could serve as decision guidance. Performance measurement is used to gain insights into the processes of collaborative research activities and to communicate the critical objectives to the employees in order to raise awareness of the evaluation criteria. The transfer of research results to the product units is assessed to be the critical factor for the performance of the EMIC.
The responsible managers of the EMIC do not consider a rigid performance measurement system reasonable, as the freedom of researchers is regarded as a requirement for the effective generation of top research results and the development of innovations.
Further observations:
• No financial evaluation of research results;
• Many subjective influencing factors in the measurement criteria;
• The transfer of research results is the most important KPI.
D.6 Philips Research

Status: 2008

D.6.1 Company Profile
Royal Philips Electronics is one of the world's biggest electronics companies and Europe's largest. The company was founded in 1891 in Eindhoven, the Netherlands, and is a global leader in healthcare, lifestyle and technology-based products and service solutions. As of January 1, 2008, the company was restructured and is now organized in three market sectors, formerly called product divisions (PDs): Healthcare, Lighting and Consumer Lifestyle (the latter including the former Consumer Electronics and Domestic Appliances PDs). Each market sector is responsible for the management of its businesses worldwide.36 The market sectors are further structured into business units (BUs) (Fig. D.29). At the end of 2007, Philips had approximately 100 production sites in 29 countries, sales and service outlets in approximately 150 countries, and some 123,800 employees.
[Fig. D.29 Company structure of Philips 2008: the market sectors Healthcare, Lighting and Consumer Lifestyle, alongside Group Management & Services and Innovation & Emerging Businesses (Source: Philips Annual Report 2007 [Internet source: http://www.philips.com/about/investor/financialresults/annualreports/index.page])]
36 Philips Annual Report 2007, p. 62: http://www.philips.com/about/investor/financialresults/annualreports/index.page
Excluding discontinued operations (MedQuist in 2007 and Semiconductors in 2006), the total number of employees of the Philips Group was 118,098 at the end of 2007, compared to 115,092 at the end of 2006.37 In the financial year 2007, Philips achieved sales of €26.793 billion and a net income of €4.168 billion. €1.629 billion, or 6.1% of sales, was invested in research and development. Philips' strong innovation pipeline contributed significantly to the company's sales growth in 2007, as 56% of group sales came from newly introduced products.38 Philips owns 60,000 patents.
D.6.2 R&D Characteristics
Figure D.30 is derived from Brown and Svenson39 and shows the main outputs of Philips' research organization.

[Fig. D.30 Definition of inputs and outputs of Philips corporate research (Source: Samsonowa, Buxmann and Gerteis (2009)): inputs (people, ideas, equipment, facilities, funds, information, specific requests, research results from universities) feed the research processing system (researching, pre-developing, testing, reporting results), whose outputs (technical reports, IDFs, patent filings, patents, scientific publications, business initiatives/ventures, technology transfer through prototypes, demonstrations and know-how such as training, people) flow to development and other receiving units, producing outcomes (sales improvement, cost reduction, product improvements, capital avoidance, opening up of new markets, IP position, newly generated businesses, income from spin-offs and technology licensing, contributions to the sustainability of the company and society at large, contributions to the scientific community and science); measurement occurs in-process (quarterly), on outputs (annually and after 2+ years) and on outcomes (after 3+ years).]
37 Philips Annual Report 2007, p. 46–47: http://www.philips.com/about/investor/financialresults/annualreports/index.page
38 Philips Annual Report 2007, p. 44–45: http://www.philips.com/about/investor/financialresults/annualreports/index.page
39 Brown and Svenson (1998), pp. 30–35.
Approximately 70–75% of the transferred outputs go to the development units, addressing new products in existing markets; 25–30% are transferred to business development, targeting new products in new markets. The research organization of Philips distinguishes between five types of research projects, depending on the funding model and the scope of the particular project. The typical duration of a project is 1,000 days, and every project is evaluated regularly. During this process, Philips Research stops one third of all projects, one third is continued, and the last third is recast into new projects with a different emphasis or a different positioning.
• Corporate research projects: The funding of corporate research projects is negotiated with the board on an annual basis. It is aimed at projects within the strategic scope of the company which contribute to the growth and the image building of the company.
• Contract research projects: These projects are financed by the product divisions, and the outcome of a project is transferred to the respective business unit.
• IP&S research projects: Intellectual Property & Standards (IP&S) is the licensing department of Philips. It funds those projects which are aimed at generating IP in strategic areas and in new domains which ought to be protected.
• Incubator research projects: Concept ideas that promise to become a business and are technologically relevant for Philips, but cannot be transferred to the business units, are pursued by the corporate venturing organizations, the incubators. Reasons why the market sectors do not adopt such research results may be that the research is aimed too far out into the future or is not in line with the brand and strategy. The board has additional funds for incubator development; the income generated by potential incubator spin-outs goes directly to the board.
• Third-party funded projects: Philips Research is also involved in externally funded research projects (e.g. EU projects). If the content of, for example, a publicly funded project conforms with the strategic direction of Philips' research programs, the research organization participates in the respective project, although the portion of those research activities in relation to the entire research portfolio is rather small.
Figure D.31 shows the allocation of the research budget depending on the source of funding. The percentages of the various sources change slightly over time; the graphic therefore only indicates the order of magnitude of each source.
[Fig. D.31 Research funding at Philips: Contract 35%, Public 35%, Corporate 10%, IP&S 10%, Incubator 10% (Source: Design based on interviews with representatives of Philips [The detailed list of Philips interviewees is in Appendix C])]
D.6.3 Research Organization Structure
The research organization of Philips is part of Corporate Technologies, which supports Philips' operating divisions in turning innovations into advanced products and in the development of new markets. The CTO has the corporate responsibility for Research and Corporate Technologies as a whole and manages the enabling of technologies across the company. In turn, Corporate Technologies is grouped under the sector Innovation & Emerging Businesses, through which Philips invests in projects that are not currently part of the operating divisions, but which will lead to additional organic growth or value creation in the future (Fig. D.32).40

[Fig. D.32 Organizational structure of Philips: the Board of Management and Group Management Committee under the Supervisory Board; the sectors Lighting, Healthcare, Consumer Lifestyle and Group Management & Services; Innovation & Emerging Businesses comprises Corporate Investment, Philips Design and Corporate Technology, the latter including The Incubators, Research, Intellectual Property & Standards (IP&S) and Philips Applied Technologies (Source: Philips: Open Innovation @ Philips Research [Philips Research Presentation, February 2008])]
Founded in 1914, Philips Research employs approximately 1,800 people in its main laboratories spread across six countries. The annual research budget of Philips Research is slightly less than 1% of Philips' annual sales, which amounted to EUR 27 billion in 2007 (total R&D efforts of Philips amount to approximately 6.1% of sales). Roughly two-thirds of the corporate research work is geared to the activities of the sectors of Philips, with contractual agreements about programs and costs. The remainder is research of a more exploratory nature (Fig. D.33).41

40 Annual Report 2007, p. 44, p. 63, p. 89–90, p. 114.
[Fig. D.33 Worldwide locations of Philips Research: Eindhoven (healthcare, lifestyle, technology), Aachen (lighting, healthcare), Hamburg (healthcare), London (new business), Briarcliff, NY (clinic sites, healthcare), Shanghai (emerging markets, network) and Bangalore (emerging markets) (Source: Locations Philips Research: Open Innovation @ Philips Research [Presentation, February 2008; http://www.research.philips.com/profile/locations/index.html])]
Philips Research conducts research in three main areas, organized within the research programs Healthcare (~500 FTEs), Lifestyle (~300 FTEs) and Technology (~300 FTEs). Each is managed by a Research Program Board, which reviews the progress of the projects and may stop a project or change its strategic direction.42
• Healthcare: Building on research of hospital-based care, Philips Research has begun to explore complete care cycles to seamlessly integrate preventive healthcare measures, self-care, and clinical care.
• Lifestyle: Research in the lifestyle area aims at giving people the possibility to balance wellness and work by means of simple solutions. For instance, Philips Research investigates how people can optimize their everyday experiences wherever they are and also explores ways to enable people to always look their best, with minimum effort.
• Technology: Research activities in this area include fields such as solid-state light sources and systems, system-in-package, unobtrusive sensors and actuators, flexible electronics, efficient and renewable energy systems, devices for drug delivery and molecular diagnostics, communication technologies and wireless hospital solutions.
41 Internet source about Philips Research: http://www.research.philips.com/profile/about.html
42 Internet source Philips Research Programs: http://www.research.philips.com/profile/programs.html
By adopting Open Innovation as a paradigm, Philips Research develops ecosystems in which industrial, start-up and academic partners combine their strengths and stimulate synergies to create a competitive business advantage for each party. To meet the requirements of a global, multidisciplinary research environment, Philips supports Open Innovation Centers and innovation intermediaries for sharing facilities, services and expertise. Every researcher at Philips Research is allowed to spend 10% of his or her time on anything he or she wants to do; the only obligation is that this 10%-activity has to be somehow related to the work within Philips Research. These activities often result in new project proposals.
D.6.4 Research Process
The research process at Philips is set up as a funneling model with a market sector funnel, an Incubator funnel and an IP&S funnel. At a very early stage of a project, a decision is made as to which funnel the project should be part of, depending on its scope. For instance, if a project is primarily defined to protect intellectual property rights, it is allocated to the IP&S funnel, and the relevant KPI is the amount of IP that has been generated. In line with the research programs, Philips has established three corporate venturing organizations (the Healthcare, Lifestyle and Technology Incubators) to identify new growth opportunities for Philips and to help business teams transform ideas into new business by matching unmet market needs with a unique value proposition. The necessary capabilities can be sourced internally or acquired externally, e.g. in the start-up community. These initiatives are governed by boards which are chaired by a member of the Board of Management.43 The process of venturing projects within the Philips Incubators is organized as a stage-gate process according to the Bell-Mason approach44 (cf. Fig. D.34). The Bell-Mason Diagnostic is a rule-based tool designed to characterize the status of a high information-technology start-up at each stage of growth. There are 12 different dimensions that affect a start-up and that should grow with the increasing maturity of a venturing project. When the project matures and moves from one phase to the next, potential deficiencies can be identified by comparing the actual status of the relevant dimensions with the predefined ideal status. If the project is not meeting the expectations, it can be ramped up to keep it on track by rebalancing the dimensions, the efforts can be reduced, or the project is stopped.
43 Annual Report 2007, p. 90.
44 Bell and Mason (1991), pp. 621–624.
Fig. D.34 Bell-Mason approach at Philips (Source: Philips, New Business Incubation [Philips presentation, September 2007])
does not meet expectations, it can be ramped up to get back on track by rebalancing the dimensions, the efforts can be reduced, or the project can be stopped.
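To make this gate review concrete, the following minimal sketch compares the actual status of each dimension with the predefined ideal profile of the current stage and flags deficiencies. The dimension names, scale, and scores are hypothetical placeholders, not actual Bell-Mason or Philips data.

```python
# Sketch of a Bell-Mason-style gate check: compare the actual maturity of each
# dimension against the ideal profile predefined for the current stage and
# flag deficiencies. Dimension names and scores are illustrative only.

def gate_check(actual: dict, ideal: dict) -> dict:
    """Return per-dimension gaps (ideal - actual); positive gaps are deficiencies."""
    return {dim: ideal[dim] - actual.get(dim, 0) for dim in ideal}

# Hypothetical ideal profile for one stage of growth (scale 0-5).
ideal_stage = {"business_plan": 3, "marketing": 2, "team": 3, "financing": 2}
actual = {"business_plan": 3, "marketing": 1, "team": 2, "financing": 2}

gaps = gate_check(actual, ideal_stage)
deficient = [dim for dim, gap in gaps.items() if gap > 0]
print(deficient)  # ['marketing', 'team'] -> rebalance, reduce efforts, or stop
```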
D.6.5 Methods and Tools
• Open Innovation, in cooperation with external partners: companies, institutes, universities.45
• Research Portal: the portal2research is the single entry point for the Research organization and its customers. It provides access to Research process support tools and outputs, is used in all Research laboratories, and provides search functions for researchers and customers. Among its functions are:
  – Long range technical objectives (LRTOs), i.e. strategy documents;
  – Project database (with allocated resources and respective descriptions);
  – Output statistics (e.g. for Invention Disclosures (IDs), patent filings, manuscripts);
  – Progress reports;
  – Conference papers.
• Portal 2e-Library: provides access to many sources with full-text journals and databases.
• A total quality approach (continuous improvement), based on the EFQM model.
45 Aalders (2006).
• Process Survey Tools (PST) for supporting processes46 [EFQM reference to the PST tool].
• Balanced Scorecard to collect KPIs at various levels of the organization.
D.6.6 Research Goals
Philips Research envisages a future in which we bring Sense & Simplicity to people by meaningful innovations in health and well-being.47
Following this vision statement, Philips Research clearly defined and communicated a mission statement48: Philips Research, in close cooperation with its technology partners within Philips, enables profitable growth for Philips by timely bringing innovations to the Philips market sectors.
To fulfill the mission, a series of goals and objectives have been defined. Table D.18 Goals of Philips research Name
Description
Weighting Research view
1. Value creation
Explore the unknown to create new technologies to grow Philips; translate global trends in innovation into directions to help shape the strategy of Philips 2. Strategic Leverage impact through open innovation; build and sustain partnerships a strong partnership network 3. Operational Ensure & maintain efficiency and effectiveness (project excellence management, customer relationship management, etc.) 4. People Create and maintain an exciting environment to attract international top-talent; be a source of highly skilled people for Philips
Corporate view
36
100
20
0
36
0
8
0
∑ ¼ 100
∑ ¼ 100
Source: Compiled based on interviews with representatives of Philips (The detailed list of Philips interviewees is in Appendix C)
The weighting of the goals is implicitly given by the number of KPIs contributing to each goal, as all KPIs have equal weight with respect to the overall goal achievement. This is further detailed in the section "Mapping Goals and KPIs". A weighting of the research goals from the corporate view is not possible because the management board focuses on the content Philips Research is working on rather than on the goals described above. As a consequence, the full weight of 100 was assigned to goal 1, Value Creation.
46 Internet source: http://excellenceone.efqm.org/Default.aspx
47 Philips Research Vision Statement: http://www.research.philips.com/profile/vision.html
48 Philips Research Mission Statement: http://www.research.philips.com/profile/mission.html
D.6.7 Research KPIs
Philips Research distinguishes between two types of processes:
• Strategic processes (with an annual business cycle);
• Operational processes (with a quarterly cycle).
The output and effectiveness of the strategic processes are evaluated annually by the top management team of research, the Research Management Team (RMT), whereas the evaluation of the operational processes is done quarterly by the Operations Management Team (OMT) to enable quicker adjustments. Both teams use a Business Balanced Scorecard to collect the relevant KPIs and their results. Each KPI is owned by a member of the corresponding team, who decides on follow-up actions if the performance does not meet the targets. The KPIs are linked to the strategic or operational Research level as shown in the following tables.
RMT (Annual Cycle)
Table D.19 Indicators and metrics used to measure Philips strategic processes (subgoal: KPI)
• Business creation (to grow Philips): # opportunities with average score >4 + 2 * # of lab ventures started + 4 * # of ventures transferred to incubators.
• Innovating to improve Philips global competitive position: # of present business successes with a financial impact of at least 25 M€ p.a., resulting from research transfers over the last 5–10 years.
• Building and sustaining a strong IPR position: # of patent filings per researcher per year.
• Source of innovation talent for Philips: % of total outflow from research (graduates and up) into other parts of Philips and the Philips innovation ecosystem (transfers of employees).
• Forming strategic partnerships with our PDs and LOBs: average score on the Strategic Partnership PST over all LOBs, PDs and all questions.
• Customer intimacy: % of projects in which an external customer, partner or supplier, in a business relationship with a PD/BU, is involved.
• Creating a coherent outward-looking vision of the future: % of LRTOs evaluated as "Green" in Portal2Research.
• Creating and maintaining a well-balanced research project portfolio: % of maximum score in the Project Portfolio Score Sheet averaged over all Program Areas.
• Customer relationship management: implementation of the CRM process for the five Philips PDs as measured by the CRM PST.
• Leveraging capabilities and international presence: % of projects in a laboratory with partners where a partner is an academic institute, government agency, or industry.
• Participation in publicly funded programs: % of publicly funded + additionally funded FTEs normalized to the number of Research directs.
Source: Compiled based on interviews with representatives of Philips (The detailed list of Philips interviewees is in Appendix C)
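The business-creation indicator at the top of Table D.19 is a weighted composite of three counts. The following minimal sketch reproduces its arithmetic; the input values are illustrative:

```python
# Composite KPI "Business creation (to grow Philips)" from Table D.19:
# opportunities with an average score above 4 count once, lab ventures started
# count twice, and ventures transferred to incubators count four times.
def business_creation(opportunities_above_4: int,
                      lab_ventures_started: int,
                      ventures_to_incubators: int) -> int:
    return (opportunities_above_4
            + 2 * lab_ventures_started
            + 4 * ventures_to_incubators)

print(business_creation(6, 2, 1))  # illustrative inputs: 6 + 2*2 + 4*1 = 14
```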
OMT (Quarterly Cycle)
Table D.20 Indicators and metrics used to measure Philips operational processes (subgoal: KPI)
• Innovating to improve the Philips global competitive position: # transfers registered in the Projects database (ytd); # (technical) reports per researcher.
• Building and sustaining a strong IPR position: # invention disclosures.
• Supporting the reputation of Philips as an innovative electronics company: # external scientific publications (2 * invited papers + manuscripts) for external journals and conferences per researcher.
• Source of innovation talent for Philips and its innovation eco-system: transfer of employees (graduates), % of total outflow (ytd) into other parts of Philips and into other parts of the eco-system.
• Maintaining a healthy financial position: income of Research [M€].
• Traffic light feedback received: % of projects for which traffic light feedback has been received.
• Traffic light scores: % of Greens in traffic light scores (progress, communication, organization or staffing).
• Project reporting: % of projects with an up-to-date progress report.
• Maintaining a healthy financial position: forecasted EBIT vs. AOP at year's end [M€]; actual EBIT vs. forecast [M€] (year to date).
• Project management maturity: score on a Project Management Maturity PST.
• Excellence management: completion level of Vital Few actions of the OMT.
• Competence development: % of new starters attending the four introductory courses within their first year (basic introduction, IP, intercultural management, project management).
• Ensuring engagement by offering a creative and exciting environment: Engagement Index [EEI, % fav.] across all of Philips; percentage scored in the Creative Climate Survey (specific to research); EES follow-up, % of groups with action(s) finished (research only).
• Ensuring a safe and trusted working environment: % of planned Health and Safety actions executed.
Source: Compiled based on interviews with representatives of Philips (The detailed list of Philips interviewees is in Appendix C)
As already mentioned above, Philips Research differentiates between strategic and operational process evaluations, i.e. RMT and OMT. Philips Research uses a balanced scorecard to accommodate the KPIs. Every KPI that meets its target is assigned two points, a KPI that reaches at least 50% of its target gets one point, and a KPI below that gets zero points. The overall performance is then calculated by relating the total points achieved to the maximum possible score, i.e. the total number of KPIs multiplied by two. The following targets are used, upon which consequent actions are taken:
Table D.21 Performance target matrix and possible consequences (score: decision)
• above 90%: goal is not stretching enough
• 80–90%: good
• 70–80%: reasonable
• above 60%: satisfactory
• below 60%: poor performance
Source: Compiled based on interviews with representatives of Philips (The detailed list of Philips interviewees is in Appendix C)
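The following minimal sketch combines the scoring rule described above with the decision bands of Table D.21; the KPI achievement ratios are illustrative:

```python
# Philips balanced-scorecard scoring: 2 points if a KPI meets its target,
# 1 point if it reaches at least 50% of the target, 0 points otherwise.
# The overall score relates total points to the maximum (2 points per KPI).
def kpi_points(achievement: float) -> int:  # achievement = actual / target
    if achievement >= 1.0:
        return 2
    if achievement >= 0.5:
        return 1
    return 0

def overall_score(achievements):
    return sum(kpi_points(a) for a in achievements) / (2 * len(achievements))

def classify(score: float) -> str:  # decision bands from Table D.21
    if score > 0.9:
        return "goal is not stretching enough"
    if score > 0.8:
        return "good"
    if score > 0.7:
        return "reasonable"
    if score > 0.6:
        return "satisfactory"
    return "poor performance"

achievements = [1.1, 0.9, 0.6, 1.0, 0.3]   # illustrative actual/target ratios
score = overall_score(achievements)        # (2+1+1+2+0) / 10 = 0.6
print(score, classify(score))              # 0.6 poor performance
```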
Because the RMT and the OMT are two different management teams with different cycles and different responsibilities, the two constructs are handled separately in two different balanced scorecards according to their requirements and are not summed up to a single performance value.
D.6.8 Mapping Goals and KPIs
The following table shows the mapping of KPIs to goals and their relative weighting, as developed with Philips during a series of interviews. The table thus reflects a subjective assessment of which goals the listed KPIs are likely to affect. In order to keep the designed methodology consistent, we chose the following approach after consulting Philips: all KPIs in both tables (RMT and OMT) are counted equally with respect to their weight regarding the overall goal achievement. Consequently, the weight of each goal reflects the number of KPIs contributing to it.

RMT and OMT (Merged)
Table D.22 Mapping KPIs to goals for Philips strategic and operational processes (each KPI is listed under the goal it contributes to, with its weight in % of that goal)
Value creation (9 KPIs, 11.11 each):
• Business creation (to grow Philips)
• Innovating to improve Philips global competitive position (RMT)
• Building and sustaining a strong IPR position (RMT)
• Creating a coherent outward-looking vision of the future
• Creating and maintaining a well-balanced research project portfolio
• Innovating to improve the Philips global competitive position (OMT)
• Building and sustaining a strong IPR position (OMT)
• Supporting the reputation of Philips as an innovative electronics company
• Maintaining a healthy financial position
Strategic partnerships (5 KPIs, 20 each):
• Forming strategic partnerships with our PDs and LOBs
• Customer intimacy
• Traffic light feedback received (% of projects)
• Traffic light scores
• Project reporting
Operational excellence (9 KPIs, 11.11 each):
• Source of innovation talent for Philips (RMT)
• Customer relationship management
• Leveraging capabilities and international presence
• Participation in publicly funded programs
• Source of innovation talent for Philips and its innovation eco-system (OMT)
• Maintaining a healthy financial position (OMT)
• Project management maturity
• Excellence management
• Competence development
People (2 KPIs, 50 each):
• Ensuring engagement by offering a creative environment
• Ensuring a safe and trusted working environment
∑ = 100 for each goal column.
Source: Compiled based on interviews with representatives of Philips (The detailed list of Philips interviewees is in Appendix C)
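The 11.11, 20, and 50 weights in Table D.22 follow directly from this equal-weighting rule: a KPI's weight within its goal is 100 divided by the number of KPIs mapped to that goal, and each goal's research-view weight in Table D.18 is that goal's share of all 25 KPIs. A minimal sketch of both computations:

```python
# Equal weighting of KPIs at Philips Research: per-KPI weight within a goal
# and the resulting implicit goal weights (cf. Tables D.18 and D.22).
kpis_per_goal = {"Value creation": 9, "Strategic partnerships": 5,
                 "Operational excellence": 9, "People": 2}
total = sum(kpis_per_goal.values())  # 25 KPIs in the merged RMT/OMT mapping

kpi_weight = {g: round(100 / n, 2) for g, n in kpis_per_goal.items()}
goal_weight = {g: round(100 * n / total) for g, n in kpis_per_goal.items()}

print(kpi_weight)   # {'Value creation': 11.11, 'Strategic partnerships': 20.0,
                    #  'Operational excellence': 11.11, 'People': 50.0}
print(goal_weight)  # {'Value creation': 36, 'Strategic partnerships': 20,
                    #  'Operational excellence': 36, 'People': 8}
```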
D.6.9 Conclusions
A very sophisticated performance assessment model is in place, with a mix of qualitative and quantitative performance measures:
• Use of a balanced scorecard.
• Some KPIs are collected but not represented in the balanced scorecard, because their results change only marginally or because they are too subjective.
D.7 Deutsche Telekom
Status: 2007
D.7.1 Company Profile
Deutsche Telekom is one of the world's leading telecommunications companies, headquartered in Bonn, Germany. It offers its customers the entire spectrum of IT and telecommunication services from a single source: fixed-line telephony, mobile telephony, broadband, internet services, and IT/network services. As one of Europe's largest providers of ICT services, the company is present in the most important markets in Europe, Asia, and America. Deutsche Telekom has more than 260,000 employees. Some 89,000 of these employees work outside Germany in more than 50 offices worldwide.49 Deutsche Telekom was founded in 1996 when the former state-owned monopoly Deutsche Bundespost was privatized. As of 2005, the German government still holds a 15.7% stake in the company directly and another 14% through the KfW Bankengruppe; 4.5% of the company is owned by the Blackstone Group, a private equity firm, and the remaining 63.91% of the shares are in free float (Streubesitz). During its fiscal year 2007 the company had total revenues of almost €62.5 billion.50 Approximately 0.8% of revenues were spent on R&D activities, of which about 10% were invested in research activities.51
49 As of December 31, 2007.
50 Internet source: http://de.wikipedia.org/wiki/Deutsche_Telekom.
51 Information is based on the interview with Peter Möckel, Executive Vice President, Managing Director, Berlin, 24.02.2008.
D.7.2 R&D Characteristics
Figure D.35 is derived from Samsonowa, Buxmann and Gerteis52 and shows the inputs and main outputs of Deutsche Telekom Laboratories, in the following referred to as T-Laboratories or T-Labs.

[Fig. D.35 Matching of activities and outputs from Strategic Research (SR) and Innovation Development (ID) at T-Labs: inputs (people, ideas, equipment, facilities, funds, information, specific requests) feed the research processing system (researching, pre-developing, testing, reporting results), which is subject to quarterly in-process measurement. Its outputs (publications, IDFs/patents; statistics/studies, analyses, invention disclosures, facts, technical reports, white papers) are measured annually and after 2+ years, and flow to development as the receiving system (testing R-outputs, developing, testing, reporting results) and to other receiving units (marketing, business development, sales, consulting, eco-system, operations, etc.). The resulting outcomes (sales improvement, cost reduction, product improvements, capital avoidance, opening up of new markets) are measured after 3+ years. The matching of activities and outputs is based on interviews with the representatives of the Deutsche Telekom Laboratories, see Appendix C. Source: Samsonowa, Buxmann and Gerteis (2009)]
Project activities form the core of research and development at the Deutsche Telekom Laboratories. The aim of every project is to develop innovative products and services for customers of Deutsche Telekom. The company distinguishes between the following types of R&D projects:
• Strategic research laboratory projects;
• Innovation development laboratory projects;
• Projects with the DAI Laboratory for Agent Technologies at Technische Universität Berlin.
52 Samsonowa, Buxmann and Gerteis (2009), p. 162.
Funding Characteristics
Deutsche Telekom Laboratories is to a large extent financed internally through a fixed budget allocated to the department. The ratio between internal and external funding is roughly 95 to 5 percent (Fig. D.36).53 Internal funding (95%) consists of seed funding, i.e. the fixed budget allocated to the research department; external funding (5%) stems from external contractual research (publicly funded projects).

[Fig. D.36 Budget allocation at Deutsche Telekom Laboratories: 95% internal funding, 5% external funding. Source: Design based on interviews with representatives of Deutsche Telekom AG (The detailed list of Deutsche Telekom AG interviewees is in Appendix C)]
D.7.3 Research Organization Structure
The R&D organization of Deutsche Telekom AG, called Deutsche Telekom Laboratories, is responsible as a central function for all three lines of business: T-Home, T-Mobile, and T-Systems. The T-Laboratories are situated within a cross-sectional function group called Group Headquarters and Shared Services (GHS). The head of R&D reports to the Group Product and Innovation Officer, who in turn reports directly to the board (Fig. D.37).

[Fig. D.37 Organizational structure of Deutsche Telekom: the Board oversees GHS, which contains R&D (T-Labs), alongside the lines of business T-Home, T-Mobile and T-Systems. Source: Design based on interviews with representatives of Deutsche Telekom AG (The detailed list of Deutsche Telekom AG interviewees is in Appendix C)]
53 Based on interviews conducted with representatives of Deutsche Telekom AG.
The T-Laboratories have their German research base in Berlin. In 2006, the first subsidiary abroad was opened in Israel at Ben-Gurion University. In total, about 150 experts and researchers work at the T-Laboratories: 25 Telekom employees, a staff of 65 postdoctoral employees, and around 60 postgraduates, research students, and students.54 The T-Laboratories have two major functions that can be reflected as an interdisciplinary matrix structure (Fig. D.38) in which market-oriented development and strategic research are closely interlinked.
• The Strategic Research Laboratory (SR) is split into four groups (the rows of the matrix) implemented as associated institutes55 of the Technical University of Berlin. The four professorial research areas of the Strategic Research Laboratory conduct basic and technological research with a long-term orientation.
• The Innovation Development Laboratory (ID) is split into nine major groups that represent project fields, serve as the interface to the lines of business, and form the advanced product development group. The Innovation Development Laboratory is charged with conducting market-oriented research and development.

[Fig. D.38 Interdisciplinary matrix structure of Telekom Laboratories: the rows are the Strategic Research areas (IN and Management of Distributed Systems; Security in Telecommunications; Usability; Service-centric Networking), the columns the Innovation Development project fields (Customer Behavior, Needs & Markets; Exploration / Technology Scouting; Infrastructure Development; Intuitive Usability; Inherent Security; Broadband & Wireless Access; Integrated Communication; Multi-Access Service Framework; Overarching AAA). Source: Deutsche Telekom Laboratories, Turning Ideas into Innovation (Deutsche Telekom Laboratories, Annual Report 2005/2006)]
54 Internet source: http://www.laboratories.telekom.com/ipws/Deutsch/LabsGeneral/Ueberuns/Pages/default.aspx
55 Annual Report 2007, p. 43: Intensive cooperation with the Technische Universität Berlin. The interplay has been firmly established since the institution was set up in 2005 and currently forms a living structure. Telekom Laboratories forms part of the Product & Innovation unit at Deutsche Telekom from an organizational perspective. At the same time, it is a private scientific institution affiliated to Technische Universität Berlin.
Knowledge transfer between research and development typically happens at the intersections of the rows and columns within this matrix. Market-oriented development projects ensure a quick impact on the product portfolio by involving the research experts. As mentioned above, the Deutsche Telekom Laboratories have four research areas:
• Intelligent Networks and Management of Distributed Systems;
• Security in Telecommunications;
• Usability;
• Service-centric Networking.
They represent the scientific pool which is then absorbed by the specific projects within the ID (cf. Fig. D.38).
D.7.4 Research Process
The research process follows a traditional development funnel model56 consisting of four major phases: generation of ideas, selection, execution, and finally commercialization. After each phase the projects are evaluated and selected again (Fig. D.39).
Fig. D.39 Research Process Telekom Laboratories (Source: Deutsche Telekom Laboratories, Turning Ideas into Innovation [Deutsche Telekom Laboratories, Annual Report 2005/2006])
56 On the classical development funnel, see also Wheelwright and Clark (1992), Revolutionizing Product Development, The Free Press, New York.
[Fig. D.40 3-Stage-Gate selection process at T-Labs, comprising three phases, each closed by a gate:
Phase 1, proposal one-pager generation (Gate G1). Activities: create R&D project ideas (the creation of R&D project ideas is open); document the idea as a one-pager proposal; assess proposals regarding strategic alignment and outcomes; approve proposals from SBUs during the gate 1 dialog.
Phase 2, project outline generation (Gate G2). Activities: detail proposals and provide estimates for project cost; align the project with the existing R&D program and budget; designate the R&D project owner and program management in gate 2; proposals which passed G1 have to be sent to the members of the R&D panel by the gatekeepers.
Phase 3, project planning (Gate G3). Activities: align the proposal with SBUs; provide support confirmation from SBUs; confirm feasibility (time, budget, deliverables); negotiate with contractors (the contractual obligation must normally be restricted to a maximum of 6 months, after which a new assignment is necessary); bids of contractors are accepted only after a Go decision at gate 3.
Source: Compiled based on interviews with representatives of Deutsche Telekom Laboratories (The detailed list of Deutsche Telekom AG interviewees is in Appendix C)]
Altogether, the evaluation and selection process is accommodated within a three-stage-gate process (Fig. D.40).
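A minimal sketch of this funnel logic; the gate labels follow Fig. D.40, and the boolean decisions stand in for the actual gate reviews:

```python
# T-Labs 3-stage-gate funnel: a proposal advances only while each gate's
# go/no-go decision is positive. Decisions here are illustrative stand-ins.
GATES = ["G1 (one-pager approved)", "G2 (outline approved)", "G3 (plan approved)"]

def run_funnel(proposal: str, decisions) -> str:
    for gate, go in zip(GATES, decisions):
        if not go:
            return f"{proposal}: stopped at {gate}"
    return f"{proposal}: approved for execution"

print(run_funnel("Project A", [True, True, False]))  # stopped at G3
```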
D.7.5 Methods and Tools
Deutsche Telekom Laboratories use 13 methods and tools to promote innovation all the way along the innovation process chain:
• Tools that encourage creativity for the development, communication and refinement of ideas.
• Strategic clarification at an early stage, so that potential innovations and disruptions can be identified early on.
• Open innovation processes, so that innovations may be derived from external sources.
• Use of global R&D networks to ensure that our own innovation projects really are innovations.
• Close cooperation with business units to guarantee customer focus at an early stage of the process.
• Stage-Gate® Process as a tool for the identification and encouragement of the most important innovation activities.
• Two paths of innovation, planned innovation and serendipity, so that not just planned but also disruptive innovations are allowed to occur.
• Support for the transfer of innovations, to facilitate rapid commercialization.
• Drawing on the services of leading international researchers, so as to make the best use of their skills and informal networks.
• Project field coordinators, to coordinate innovation activities from the angle of the customer.
• Functional module approach to encourage the rapid and flexible development and reuse of project results in different products.
• Post-mortem project calculation, so that R&D activities are subject to ongoing controls.
• Venturing activities, for the commercialization of radical innovations and innovations in non-core areas.

Table D.23 Decision criteria for the 3-Stage-Gate-Process at T-Labs
Gate 1. Activities: discussing project ideas in peer reviews; presenting project ideas to the Innovation Development team/executives; evaluating project ideas; taking the go/no-go decision on continuation of the project specification. Decision criteria: fit with R&D strategy; state of the art; compelling idea including business logic and a back-of-envelope calculation; need for T-Labs involvement.
Gate 2. Activities: presenting results of the phase and outlines to DT-Laboratories executives; evaluating results; if necessary, prioritization of selected partners; taking the go/no-go decision on continuation of the project specification. Decision criteria: quality of the revised proposal; clear view on technology management alternatives (e.g. IPR, standardization, exclusivity, quasi-standard, etc.); first outcomes within 2 years.
Gate 3. Activities: presenting the complete project proposal (incl. outline and plan) to the Inno Board respectively the R&D Council; evaluating the project proposal; taking the go/no-go decision on execution of the project. Decision criteria: compliance with DTAG's product roadmap; expected benefit and cost for DTAG; project outcome and budget.
Source: Compiled based on interviews with representatives of Deutsche Telekom Laboratories (The detailed list of Deutsche Telekom AG interviewees is in Appendix C)
D.7.6 Strategic Research Goals
Innovation is one of the keys to making Deutsche Telekom a profitable enterprise on a long-term basis, and R&D plays a crucial role in this respect. The R&D group functions as a supplement to the innovations of the company's suppliers and strengthens the innovative impetus within Deutsche Telekom. The R&D department thus works in five ways to create value for the enterprise:
• Sales are boosted through the development of new products and the improvement of already existing products, services, markets and business models, so as to achieve premium prices.
• Efficiency is enhanced by process innovations and the introduction of new products and services which cut operating costs.
• The useful life of network technologies, successful products and services is extended, with positive implications for customer loyalty.
• Uncertainty is reduced by early clarification on technological and market levels, as well as through the exploration of future customer requirements and consumer trends.
• The image of the group is enhanced by publicity that emphasizes, both internally and externally, Deutsche Telekom's innovative strength and therefore its ability to remain competitive in the long term.
The goals stated below apply to the research part (SR) of the T-Laboratories only. They were derived together with the Managing Director of the Deutsche Telekom Laboratories.

Table D.24 Relative weighting of Telekom Laboratories goals (weighting: research view / corporate view)
1. Scientific excellence: To become well known in the scientific community on an international basis. Weighting: 40 / 35.
2. Scientific advisor to the development: Up to 30% of researcher time can be spent on technology development activities within the ID. Weighting: 25 / 25.
3. IPR generation: Protect strategic areas via patents and trademarks. Weighting: 35 / 40.
Column sums: ∑ = 100 (research view), ∑ = 100 (corporate view).
Source: Compiled based on interviews with representatives of Deutsche Telekom Laboratories (The detailed list of Deutsche Telekom AG interviewees is in Appendix C)
D.7.7 Strategic Research KPIs
The following table shows the list of key performance indicators used by the T-Laboratories, together with the metrics used to evaluate them.

Table D.25 Indicators and metrics used to measure Telekom Laboratories (subgoal: metric)
• Journal & conference papers as first & co-author: #
• Book/-contribution as first & co-author: #
• Invited talks: #
• Chair a workshop: #
• Technical program committee member & review: #
• Journal review: #
• Editorship: #
• IDFs: #
• Patents: #
• White papers: #
• Technical reports: #
• Innovation development projects: # of development projects with researchers participating
• Time spent in the development unit: %
Source: Compiled based on interviews with representatives of Deutsche Telekom Laboratories (The detailed list of Deutsche Telekom AG interviewees is in Appendix C)
D.7.8 Mapping Goals and KPIs
The following table shows the mapping of subgoals to goals and their relative weighting, as developed with the Telekom Laboratories during a series of interviews.

Table D.26 Mapping KPIs to goals and their relative weighting at Telekom Laboratories (weights in % per goal column: Scientific excellence / Scientific advisor to the development / IP generation)
• Journal & conference papers as first & co-author: 10 / - / -
• Book/-contribution as first & co-author: 10 / - / -
• Invited talks: 5 / - / -
• Chair a workshop: 5 / - / -
• Technical program committee member & review: 5 / - / -
• Journal review: 5 / - / -
• Editorship: 10 / - / -
• Invention disclosures: 20 / - / 50
• Patents: 20 / - / 50
• White papers: 5 / - / -
• Technical reports: 5 / - / -
• Innovation development projects: - / 60 / -
• Time spent within innovation development projects in %: - / 40 / -
∑ 100 / 100 / 100
Source: Compiled based on interviews with representatives of Deutsche Telekom Laboratories (The detailed list of Deutsche Telekom AG interviewees is in Appendix C)
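Given such a mapping, the achievement of a goal can be computed as the weighted sum of the normalized results of its KPIs. A minimal sketch using the IP generation column of Table D.26; the KPI results are illustrative, not Telekom data:

```python
# Weighted goal achievement from Table D.26: each KPI result, normalized to
# 0..1 against its target and capped at 1, contributes its column weight.
ip_generation_weights = {"invention disclosures": 50, "patents": 50}
kpi_results = {"invention disclosures": 0.8, "patents": 1.0}  # actual / target

goal_score = sum(w * min(kpi_results[k], 1.0)
                 for k, w in ip_generation_weights.items()) / 100
print(goal_score)  # 0.5 * 0.8 + 0.5 * 1.0 = 0.9
```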
D.7.9 Conclusions
Current observations:
• New at the KPI level: activities in different committees.
• The output of research (SR) is not necessarily transferred to development (ID); inputs and outputs are regarded as autonomous. Not every SR project leads to an ID activity and, vice versa, not every ID activity must contain an SR contribution; nevertheless, constant interaction between ID and SR is intended and certainly typical for T-Labs. The SR colleagues take on the role of scientific advisors.
• The KPI of Innovation Development can be put in a nutshell as follows: deliver results which help to increase the sales expectations of the operative business or to reduce the cost base. This is covered by the method "Value Tracking".
Appendix E
E.1 Online Questionnaire (English)
Questionnaire: Performance Measurement in Research Departments in ICT Organizations
Thank you for participating in our survey. As a participant you will receive the survey results as soon as the data analysis is completed. Please enter your password to start the survey:
Performance Measurement in Research Departments in ICT Organizations
E.1.1 Survey Objective
With this survey, we aim to gain a comprehensive overview of which research goals are used by research departments, what their priorities are, and how they are assessed using research KPIs. The survey comprises the following sections: Part A collects information on the status quo of your department's currently used performance assessment. Part B collects more specific information to give us further insight into if and how certain factors influence the selection of goals and KPIs. Part C provides you the opportunity to share your personal opinion on how the performance measurement of your organization could be improved.
E.1.2 Additional Information
Effort – to complete the survey you will need approximately 20 minutes.
Data Security & Confidentiality – All data will be stored safely in protected systems and only for the duration of the study. All data are treated with absolute confidentiality. Basic information about your company is collected in order to categorize responses by industry sector and by company size. Following this categorization, all company-specific information will be deleted. No inferences can or will be drawn in relation to you or your company.
Questions – for further information, or should you have any questions, please do not hesitate to contact Ms. Tatjana Samsonowa via e-mail, tatjana.samsonowa@is.tu-darmstadt.de, or phone, +49 151 168 10 225.
Response time – please complete our questionnaire within 30 days. We thank you for your valuable input.
E.1.3 Approach: Part A
Your responses to Part A will result in a matrix comprising the set of organizational goals on which your research organization is currently measured and the KPIs that are used for the measurement, together with the importance of the goals and the importance of the KPIs. This matrix will be made available to you within the report upon the completion of our study.

[Illustrative matrix: the goals (GOAL_1 … GOAL_n) form the columns, classified as "not used", "used – less important", "used – important", or "used – very important"; the KPIs (KPI_1 … KPI_n) form the rows, with + / ++ / +++ entries indicating the importance of each KPI for each goal.]
You will be guided through Part A in two steps. Here we ask for information on the status quo of the performance assessment that is currently applied in your research organization.
In Step 1, select the current goals of your research organization and indicate their importance on the scale: not assessed / low priority / medium priority / high priority.
In Step 2, for each goal selected in Step 1, choose the KPIs that are currently used to assess the achievement of the goals and indicate their importance on the same scale: not assessed / low priority / medium priority / high priority.

[Illustrative screens: in Step 1 the goals (GOAL_1 … GOAL_n) are rated; in Step 2 the KPIs (KPI_1 … KPI_n) are rated per goal.]
E.1.4 A. Status quo: Research objectives, goals and KPIs
Below you find a collection of mission statements for industrial research that might be reflected in the organizational goals of your research organization. Please select those goals that, in the current measurement period, best reflect the organizational goals of your research organization by indicating their importance according to the overall achievement. If a goal is not assessed by your research organization, please check the first box (not assessed). Each goal is rated as: not assessed / assessed – low priority / assessed – medium priority / assessed – high priority.
• Alignment with and transfer to internal development and other (business) units
• Create and protect intellectual property
• Improve the internal and external image of the research department and/or the company
• Generate and evaluate future business opportunities
• Recruit and develop excellent talent
• Achieve a high standard of operational excellence
• Establish and maintain strategic partnerships and/or collaborative research
• Drive technology innovation and technology leadership
• Other goal: (three free-text entries)
Remark: For those goals that are assessed to measure your performance, we will present in the following a collection of possible KPIs. Please select the KPIs that best correspond with the KPIs that are used for your research organization. If they are not relevant, please check the first box (not used). Note: Some KPIs APPEAR REPEATEDLY for different goals. The reason for this is that some KPIs are used to measure different goals. For example, "# of patents" could be used to assess both intellectual property and image. Please ONLY select the KPIs if relevant for the goal currently under consideration.
At the beginning of the survey you selected the following goal: “Alignment with and transfer to internal development and other (business) units”. Please select those KPIs that best correspond with the KPIs that are used for your department by indicating their importance regarding specific goal achievement. Note: Please select only the KPIs that are relevant for this specific goal. If you find KPIs that are not relevant for this goal but for another goal, please check the first box (not used).
Goal: "Alignment with and transfer to internal development and other (business) units" KPI assessing the ... ... volume of technology transfer activities to development or other (business) units Example(s): # person days, amount of services charged to other departments ( )
... quality of the research results transferred Example(s): subjective evaluation by the receiving unit via questionnaire
... significance of the transferred research results for the receiving unit Example(s): subjective evaluation of the importance of the transfer for the development/business of the receiving unit
... quality of the transfer process or transfer activities Example(s): subjective evaluation of the collaboration with the receiving unit via questionnaire
... economic value of the transfer activity or transferred research results Example(s): equivalent opportunity cost of using an external expert service, revenue generated by the transferred artifact
... intensity of input into the innovation process Example(s): # ideas, # ideas per researcher
... (weighted) # of ideas moved to a certain or the next phase of the innovation process Example(s): # ideas with a business case developed + 2x # ideas with demonstrator developed + 3x # ideas handed over to development
... (achieved) business impact of an idea in terms of its economic value
Example(s): $, €
... volume of potentially protectable inventions submitted into IP pipeline Example(s): # invention disclosures
... volume of first filings out of the IP pipeline Example(s): # first filings, defensive publications, trade secrets
Goal: "Alignment with and transfer to internal development and other (business) units" KPI assessing the ... ... alignment of research activities with the IP strategy of the company Example(s): # submitted inventions addressing the IP strategy
... volume of patents granted Example(s): # patents granted to the company
... quality of granted patents Example(s): economic value of the granted patent
... volume of collaboration with partners and customers Example(s): # projects in which an external customer and/or partner in a business relation is involved vs. total projects, # joint research results like showcases, prototypes, demos
... quality of collaboration with partners and customers Example(s): subjective assessment by customers and/or partners via a survey
Other KPI:
Other KPI:
At the beginning of the survey you selected the following goal: “Create and protect intellectual property”. Please select those KPIs that best correspond with the KPIs that are used for your department by indicating their importance regarding specific goal achievement. Note: Please select only the KPIs that are relevant for this specific goal. If you find KPIs that are not relevant for this goal but for another goal, please check the first box (not used).
Goal: "Create and protect intellectual property" KPI assessing the ... ... volume of potentially protectable inventions submitted into the IP pipeline Example(s): # invention disclosures
... volume of first filings out of the IP pipeline
Example(s): # first filings, defensive publications, trade secrets
... volume of patents granted Example(s): # patents granted to the company
... quality of granted patents Example(s): economic value of the granted patent
... alignment of research activities with the IP strategy of the company Example(s): # submitted inventions addressing the IP strategy
... intensity of input into the innovation process Example(s): # ideas, # ideas per researcher
... (weighted) # of ideas moved to a certain or the next phase of the innovation process Example(s): # ideas with a business case developed + 2x # ideas with demonstrator developed + 3x # ideas handed over to development
... (achieved) business impact of an idea in terms of its economic value
Example(s): $, €
... volume of publications Example(s): # journal & conference papers, book contributions, # white papers, # technical reports
... quality of publications Example(s): # best paper awards, papers at selected conferences
... volume of technology transfer activities to development or other (business) units
Example(s): # person days, amount of services charged to other departments (€)
... quality of the research results transferred Example(s): subjective evaluation by the receiving unit via questionnaire
... quality of the transfer process or transfer activities Example(s): subjective evaluation of the collaboration with the receiving unit via questionnaire
... significance of the transferred research results for the receiving unit Example(s): subjective evaluation of the importance of the transfer for the development/business of the receiving unit
... economic value of the transfer activity or transferred research results Example(s): equivalent opportunity cost of using an external expert service, revenue generated by the transferred artifact
Other KPI:
Other KPI:
At the beginning of the survey you selected the following goal: “Improve the internal and external image of the research department and/or the company”. Please select those KPIs that best correspond with the KPIs that are used for your department by indicating their importance regarding specific goal achievement. Note: Please select only the KPIs that are relevant for this specific goal. If you find KPIs that are not relevant for this goal but for another goal, please check the first box (not used).
Goal: "Improve the internal and external image of the research department and/or the company" Not used KPI assessing the ... ... internal perception or internal recognitions Example(s): # visits by key decision makers, subjective evaluation of collaborations with internal groups, # contributions to internal conferences or exhibitions
... external perception or external recognitions Example(s): awards, speeches, keynotes, medals, prizes, fairs and exhibitions
... volume of publications Example(s): # journal & conference papers, book contributions, # white papers, # technical reports
... quality of publications Example(s): # best paper awards, papers at selected conferences
... participation in scientific events beyond publications Example(s): # invited talks or keynotes, chairs in workshops or conferences, participations in program committees or journal reviews, editorships
... participation in advisory boards or related bodies, honorary titles or memberships, or visiting scholarships Example(s): # invitations to governmental / professional / industrial / academic advisory boards or standardization committees, # doctorates, fellowships, presidentships in sci/tech society, invitations for sabbaticals
... volume of potentially protectable inventions submitted into IP pipeline Example(s): # invention disclosures
... volume of first filings out of the IP pipeline Example(s): # first filings, defensive publications, trade secrets
... volume of patents granted Example(s): # patents granted to the company
... quality of granted patents Example(s): economic value of the granted patent
Goal: "Improve the internal and external image of the research department and/or the company" Not used KPI assessing the ... ... alignment of research activities with the IP strategy of the company Example(s): # submitted inventions addressing the IP strategy
... quality of the research results transferred
Example(s): subjective evaluation by the receiving unit via questionnaire
... quality of the transfer process or transfer activities Example(s): subjective evaluation of the collaboration with the receiving unit via questionnaire
... significance of the transferred research results for the receiving unit Example(s): subjective evaluation of the importance of the transfer for the development/business of the receiving unit
... economic value of the transfer activity or transferred research results Example(s): equivalent opportunity cost of using an external expert service, revenue generated by the transferred artifact
Other KPI:
Other KPI:
At the beginning of the survey you selected the following goal: “Generate and evaluate future business opportunities”. Please select those KPIs that best correspond with the KPIs that are used for your department by indicating their importance regarding specific goal achievement. Note: Please select only the KPIs that are relevant for this specific goal. If you find KPIs that are not relevant for this goal but for another goal, please check the first box (not used).
Goal: "Generate and evaluate future business opportunities" KPI assessing the ... ... intensity of input into the innovation process Example(s): # ideas, # ideas per researcher
... (weighted) # of ideas moved to a certain or the next phase of the innovation process Example(s): # ideas with a business case developed + 2x # ideas with demonstrator developed + 3x # ideas handed over to development
... (achieved) business impact of an idea in terms of its economic value
Example(s): $, €
... volume of technology transfer activities to development or other (business) units
Example(s): # person days, amount of services charged to other departments (€)
... quality of the research results transferred Example(s): subjective evaluation by the receiving unit via questionnaire
... quality of the transfer process or transfer activities Example(s): subjective evaluation of the collaboration with the receiving unit via questionnaire
... significance of the transferred research results for the receiving unit Example(s): subjective evaluation of the importance of the transfer for the development/business of the receiving unit
... economic value of the transfer activity or transferred research results Example(s): equivalent opportunity cost of using an external expert service, revenue generated by the transferred artifact
... volume of potentially protectable inventions submitted into IP pipeline Example(s): # invention disclosures
... volume of first filings out of the IP pipeline Example(s): # first filings, defensive publications, trade secrets
... volume of patents granted Example(s): # patents granted to the company
... quality of granted patents
Example(s): economic value of the granted patent
... alignment of research activities with the IP strategy of the company Example(s): # submitted inventions addressing the IP strategy
Other KPI:
Other KPI:
At the beginning of the survey you selected the following goal: “Recruit and develop excellent talent”. Please select those KPIs that best correspond with the KPIs that are used for your department by indicating their importance regarding specific goal achievement. Note: Please select only the KPIs that are relevant for this specific goal. If you find KPIs that are not relevant for this goal but for another goal, please check the first box (not used).
Goal: "Recruit and develop excellent talent" KPI assessing the ... ... volume/quality of people hired into the research organization Example(s): number of externally - attracted senior scientists, matching of the skill set of the hired people with the profiles they are hired for
... volume/quality of people leaving the research organization to move to other parts of the company / ecosystem Example(s): number of top talents moving to other parts of the company
... volume/quality of development measures undertaken Example(s): # job rotations, # PhDs finished
... quality of the people in the research department Example(s): subjective assessments through partners or stakeholders, team track records, 360° feedback
... adherence to budget Example(s): difference of planned budget of each line item to actual budget measured (least deviation is rated to be the best).
... adherence to timelines (phases, gates)
Example(s): successful (on-time) termination at a particular phase / gate
... quality of project management Example(s): data compliance at an agreed deadline
... quality of the working environment Example(s): scoring in employee satisfaction survey
... quality of the risk management in place
Example(s): % by which the potential damage has been lowered by adequate measures against the bottom line
... volume of (external) investments into the research organization (beyond seed funding)
Example(s): volume of third-party investment (in €), # of participations in externally funded programs
... volume of collaboration with partners and customers
Example(s): # projects in which an external customer and/or partner in a business relation is involved vs. total projects, # joint research results like showcases, prototypes, demos
... quality of collaboration with partners and customers Example(s): subjective assessment by customers and/or partners via a survey
... volume of technology transfer activities to development or other (business) units
Example(s): # person days, amount of services charged to other departments (€)
... quality of the transfer process or transfer activities Example(s): subjective evaluation of the collaboration with the receiving unit via questionnaire
... significance of the transferred research results for the receiving unit Example(s): subjective evaluation of the importance of the transfer for the development/business of the receiving unit
... economic value of the transfer activity or transferred research results Example(s): equivalent opportunity cost of using an external expert service, revenue generated by the transferred artifact
Other KPI:
Other KPI:
At the beginning of the survey you selected the following goal: “Achieve a high standard of operational excellence”. Please select those KPIs that best correspond with the KPIs that are used for your department by indicating their importance regarding specific goal achievement. Note: Please select only the KPIs that are relevant for this specific goal. If you find KPIs that are not relevant for this goal but for another goal, please check the first box (not used).
Goal: "Achieve a high standard of operational excellence" KPI assessing the ... ... adherence to budget Example(s): difference of planned budget of each line item to actual budget measured (least deviation is rated to be the best)
... adherence to timelines (phases, gates) Example(s): successful (on-time) termination at a particular phase / gate
... quality of project management Example(s): data compliance at an agreed deadline
... quality of the working environment Example(s): scoring in employee satisfaction survey
... quality of the risk management in place Example(s): % by which the potential damage has been lowered by adequate measures against the bottom-line
... volume of (external) investments into the research organization (beyond seed funding)
Example(s): volume of third-party investment (in €), # of participations in externally funded programs
... volume/quality of people hired into the research organization Example(s): number of externally-attracted senior scientists, matching of the skill set of the hired people with the profiles they are hired for
... volume/quality of people leaving the research organization to move to other parts of the company / ecosystem Example(s): number of top talents moving to other parts of the company
... volume/quality of development measures undertaken Example(s): # job rotations, # PhDs finished
... quality of the people in the research department Example(s): subjective assessments through partners or stakeholders, team track records, 360° feedback
... volume of collaboration with academia Example(s): investment at universities in relation to total budget, # guest researchers from universities (sabbaticals)
Goal: "Achieve a high standard of operational excellence" KPI assessing the ... ... quality of collaboration with academia Example(s): subjective evaluation for each university project from the supervising professor via a questionnaire
... volume of collaboration with partners and customers Example(s): # projects in which an external customer and/or partner in a business relation is involved vs. total projects, # joint research results like showcases, prototypes, demos
... quality of collaboration with partners and customers Example(s): subjective assessment by customers and/or partners via a survey
... intensity of input into the innovation process Example(s): # ideas, # ideas per researcher
... (weighted) # of ideas moved to a certain or the next phase of the innovation process Example(s): # ideas with a business case developed + 2x # ideas with demonstrator developed + 3x # ideas handed over to development
... (achieved) business impact of an idea in terms of its economic value
Example(s): $, €
Other KPI:
Other KPI:
At the beginning of the survey you selected the following goal: “Establish and maintain strategic partnerships and/or collaborative research”. Please select those KPIs that best correspond with the KPIs that are used for your department by indicating their importance regarding specific goal achievement. Note: Please select only the KPIs that are relevant for this specific goal. If you find KPIs that are not relevant for this goal but for another goal, please check the first box (not used).
Goal: "Establish and maintain strategic partnerships and/or collaborative research" KPI assessing the ... ... volume of collaboration with partners and customers Example(s): # projects in which an external customer and/or partner in a business relation is involved vs. total projects, # joint research results like showcases, prototypes, demos
... quality of collaboration with partners and customers Example(s): subjective assessment by customers and/or partners via a survey
... volume of collaboration with academia Example(s): investment at universities in relation to total budget, # guest researchers from universities (sabbaticals)
... quality of collaboration with academia Example(s): subjective evaluation for each university project from the supervising professor via a questionnaire
... participation in scientific events beyond publications Example(s): # invited talks or keynotes, chairs in workshops or conferences, participations in program committees or journal reviews, editorships
... participation in advisory boards or related bodies, honorary titles or memberships, or visiting scholarships Example(s): # invitations to governmental / professional / industrial / academic advisory boards or standardization committees, # doctorates, fellowships, presidentships in sci/tech society, invitations for sabbaticals
... internal perception or internal recognitions Example(s): # visits by key decision makers, subjective evaluation of collaborations with internal groups, # contributions to internal conferences or exhibitions
... external perception or external recognitions Example(s): awards, speeches, keynotes, medals, prizes, fairs and exhibitions
... adherence to budget Example(s): difference of planned budget of each line item to actual budget measured (least deviation is rated to be the best)
... adherence to timelines (phases, gates) Example(s): successful (on-time) termination at a particular phase / gate
Goal: "Establish and maintain strategic partnerships and/or collaborative research" KPI assessing the ... ... quality of project management Example(s): data compliance at an agreed deadline
... quality of the working environment Example(s): scoring in employee satisfaction survey
... quality of the risk management in place Example(s): % by which the potential damage has been lowered by adequate measures against the bottom-line
... volume of (external) investments into the research organization (beyond seed funding)
Example(s): volume of third-party investment (in €), # of participations in externally funded programs
Other KPI:
Other KPI:
At the beginning of the survey you selected the following goal: “Drive technology innovation and technology leadership”. Please select those KPIs that best correspond with the KPIs that are used for your department by indicating their importance regarding specific goal achievement. Note: Please select only the KPIs that are relevant for this specific goal. If you find KPIs that are not relevant for this goal but for another goal, please check the first box (not used).
Goal: "Drive technology innovation and technology leadership". KPI assessing the ... ... volume of technology transfer activities to development or other (business) units Example(s): # person days, amount of services charged to other departments ( ))
... quality of the research results transferred Example(s): subjective evaluation by the receiving unit via questionnaire
... quality of the transfer process or transfer activities Example(s): subjective evaluation of the collaboration with the receiving unit via questionnaire
... significance of the transferred research results for the receiving unit Example(s): subjective evaluation of the importance of the transfer for the development/business of the receiving unit
... economic value of the transfer activity or transferred research results Example(s): equivalent opportunity cost of using an external expert service, revenue generated by the transferred artifact
... intensity of input into the innovation process Example(s): # ideas, # ideas per researcher
... (weighted) # of ideas moved to a certain or the next phase of the innovation process Example(s): # ideas with a business case developed + 2x # ideas with demonstrator developed + 3x # ideas handed over to development (see the illustrative sketch after this KPI catalog)
... (achieved) business impact of an idea in terms of its economic value Example(s): value in $ or €
... structure and quality of the research (i.e., certain technology areas) Example(s): subjective evaluation of the research portfolio by internal or external reviewers (simplest way - portfolio management is in place or not)
... visions related to the individual parts of the research portfolio and their quality Example(s): subjective evaluation of the vision documents of the individual research programs by internal or external reviewers (simplest way: vision documents are available or not)
(Rating scale: Not used / Used – less important / Used – important / Used – very important)
KPI assessing the ... ... roadmaps to achieve the visions and their quality Example(s): subjective evaluation of the roadmap by internal or external reviewers (simplest way - in place or not)
... implementations of the roadmaps and the quality of contributions from research projects Example(s): subjective evaluation of the scientific accomplishments, technical achievements, or technical impact by internal or external reviewers
... volume of potentially protectable inventions submitted into IP pipeline Example(s): # invention disclosures
... volume of first filings out of the IP pipeline Example(s): # first filings, defensive publications, trade secrets
... volume of patents granted Example(s): # patents granted to the company
... quality of granted patents Example(s): economic value of the granted patent
... alignment of research activities with the IP strategy of the company Example(s): # submitted inventions addressing the IP strategy
Other KPI:
Other KPI:
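The weighted idea-count KPI in the catalog above is simply a weighted sum over the phases of the innovation process. A minimal sketch using the 1/2/3 weights from the example; the phase labels are hypothetical names introduced for illustration:

```python
# Minimal sketch of the weighted idea-pipeline KPI from the example:
# business case = 1, demonstrator = 2, handed over to development = 3.
PHASE_WEIGHTS = {"business_case": 1, "demonstrator": 2, "handed_over": 3}

def weighted_idea_score(counts: dict[str, int]) -> int:
    """counts maps each innovation phase to the number of ideas that reached it."""
    return sum(PHASE_WEIGHTS[phase] * n for phase, n in counts.items())

# 10 ideas with a business case, 4 with a demonstrator, 2 handed over:
# 10*1 + 4*2 + 2*3 = 24
print(weighted_idea_score({"business_case": 10, "demonstrator": 4, "handed_over": 2}))
```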
At the beginning of the survey you entered the following additional goal: Please select those KPIs that best correspond with the KPIs that are used for your department by indicating their importance regarding specific goal achievement. Note: Please select only the KPIs that are relevant for this specific goal. If you find KPIs that are not relevant for this goal but for another goal, please check the first box (not used).
Goal: "Drive technology innovation and technology leadership". KPI assessing the ... ... volume of technology transfer activities to development or other (business) units Example(s): # person days, amount of services charged to other departments ( ))
... quality of the research results transferred Example(s): subjective evaluation by the receiving unit via questionnaire
... quality of the transfer process or transfer activities Example(s): subjective evaluation of the collaboration with the receiving unit via questionnaire
... significance of the transferred research results for the receiving unit Example(s): subjective evaluation of the importance of the transfer for the development/business of the receiving unit
... economic value of the transfer activity or transferred research results Example(s): equivalent opportunity cost of using an external expert service, revenue generated by the transferred artifact
... intensity of input into the innovation process Example(s): # ideas, # ideas per researcher
... (weighted) # of ideas moved to a certain or the next phase of the innovation process Example(s): # ideas with a business case developed + 2x # ideas with demonstrator developed + 3x # ideas handed over to development
... (achieved) business impact of an idea in terms of its economic value Example(s): value in $ or €
... structure and quality of the research (i.e., certain technology areas) Example(s): subjective evaluation of the research portfolio by internal or external reviewers (simplest way - portfolio management is in place or not)
... visions related to the individual parts of the research portfolio and their quality Example(s): subjective evaluation of the vision documents of the individual research programs by internal or external reviewers (simplest way: vision documents are available or not)
(Rating scale: Not used / Used – less important / Used – important / Used – very important)
Goal: "Drive technology innovation and technology leadership". KPI assessing the ... ... roadmaps to achieve the visions and their quality Example(s): subjective evaluation of the roadmap by internal or external reviewers (simplest way - in place or not)
... implementations of the roadmaps and the quality of contributions from research projects Example(s): subjective evaluation of the scientific accomplishments, technical achievements, or technical impact by internal or external reviewers
... volume of potentially protectable inventions submitted into IP pipeline Example(s): # invention disclosures
... volume of first filings out of the IP pipeline Example(s): # first filings, defensive publications, trade secrets
... volume of patents granted Example(s): # patents granted to the company
... quality of granted patents Example(s): economic value of the granted patent
... alignment of research activities with the IP strategy of the company Example(s): # submitted inventions addressing the IP strategy
... adherence to budget Example(s): difference of planned budget of each line item to actual budget measured (least deviation is rated to be the best).
... adherence to timelines (phases, gates) Example(s): successful (on-time) termination at a particular phase / gate
... quality of project management Example(s): data compliance at an agreed deadline
... quality of the working environment Example(s): scoring in employee satisfaction survey
... quality of the risk management in place Example(s): % by which the potential damage has been lowered by adequate measures against the bottom-line
... volume of (external) investments into the research organization (beyond seed funding) Example(s): volume of third-party investment (in €), # participations in externally-funded programs
... volume/quality of people hired into the research organization Example(s): # externally-attracted senior scientists, matching of the skill set of the hired people with the profiles they are hired for
... volume/quality of people leaving the research organization to move to other parts of the company / ecosystem Example(s): # top talents moving to other parts of the company
... volume/quality of development measures undertaken Example(s): # job rotations, # PhDs finished
KPI assessing the ... ... quality of the people in the research department Example(s): subjective assessments through partners or stakeholders, team track records, 360° feedback
... internal perception or internal recognitions Example(s): # visits by key decision makers, subjective evaluation of collaborations with internal groups, # contributions to internal conferences or exhibitions
... external perception or external recognitions Example(s): # awards, speeches, keynotes, medals, prizes, fairs and exhibitions
... volume of publications Example(s): # journal & conference papers, book contributions, # white papers, # technical reports
... quality of publications Example(s): # best paper awards, papers at selected conferences
... volume of collaboration with partners and customers Example(s): # projects in which an external customer and/or partner in a business relation is involved vs. total projects, # joint research results like showcases, prototypes, demos
... quality of collaboration with partners and customers Example(s): subjective assessment by customers and/or partners via a survey
... volume of collaboration with academia Example(s): investment at universities in relation to total budget, # guest researchers from universities (sabbaticals)
... quality of collaboration with academia Example(s): subjective evaluation for each university project from the supervising professor via a questionnaire
... participation in scientific events beyond publications Example(s): # invited talks or keynotes, chairs in workshops or conferences, participations in program committees or journal reviews, editorships
... participation in advisory boards or related bodies, honorary titles or memberships, or visiting scholarships Example(s): # invitations to governmental / professional / industrial / academic advisory boards or standardization committees, # doctorates, fellowships, presidentships in sci/tech society, invitations for sabbaticals
Other KPI:
Other KPI:
E.1.5 B. Additional Information
Which of the following approaches best describes your company's goal-setting process?
Top-down
Bottom-up
Middle-out
I don't know
Please indicate the number of management levels between the head of the research department and the executive board. (Do not count the head of research; however, please include the executive board in your number.) Number of management levels:
How often does the research department reconsider the organizational goals? Please choose the most appropriate answer.
More frequently than once a year
Once a year
Once every two years
Once every three years
Less frequently than once every three years
I don't know
How often does the research department reconsider the KPIs used to assess the goals? Please choose the most appropriate answer.
More frequently than once a year
Once a year
Once every two years
Once every three years
Less frequently than once every three years
I don't know
How many employees are working in your company's research organization(s)?
Up to 10
11–50
51–100
101–500
More than 500
I don't know
How many employees are working in your company's development organization(s)?
Up to 50
51–200
201–500
501–2,000
More than 2,000
I don't know
How many employees are working in your entire company?
Up to 500
501–2,000
2,001–5,000
5,001–20,000
More than 20,000
I don't know
Do you have an individual employee performance assessment process in place?
Yes
No
I don't know
Please indicate the approximate share of the variable payments within your organization that are directly related to goal achievements: %
Please indicate the approximate share of the research budget that is spent on collaboration with universities: %
Please indicate the approximate percentage of funding sources for your department (a validation sketch follows this question block):
% corporate funding
% other internal funding (e.g., from business units)
% public funding (e.g., from governments)
% customer/partner funding
% other external sources
Total (100%)
Please indicate the approximate percentage of the granted patents coming from research compared to the rest of the company: %
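Since the funding-source question requires the five shares to total 100%, a survey tool would normally validate the sum before accepting the answer. A minimal sketch of such a check; the tolerance is an assumption, not something the questionnaire specifies:

```python
# Minimal sketch: check that the five reported funding shares sum to 100%.
def shares_are_complete(shares: list[float], tol: float = 0.5) -> bool:
    """True if the funding shares add up to 100% within a small tolerance."""
    return abs(sum(shares) - 100.0) <= tol

# corporate, other internal, public, customer/partner, other external
print(shares_are_complete([60.0, 10.0, 15.0, 10.0, 5.0]))  # True
```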
E.1.6 Your personal opinion regarding the "appropriateness" of your performance measurement
Are you satisfied with your research organization's current performance measurement?
If you had the opportunity to change the goals, WHAT would you change?
Select new goals
Keep exactly the same goals, but give them other priorities (weights)
No changes, I am happy with the goals as they are
If you had the opportunity to change the KPIs, WHAT would you change?
Select new KPIs
Keep exactly the same KPIs, but give them other priorities (weights)
No changes, I am happy with the KPIs as they are
Do you think that the ratio of qualitative and quantitative measures in your performance measurement is adequate?
YES, they are well-balanced
NO, I think more weight should be placed on quantitative measures
NO, I think more weight should be placed on qualitative measures
I don't know
Do you think that you have an adequate set-up (processes, methods, tools) to collect the appropriate qualitative data?
Yes
No
I don't know
Please feel free to comment on any of the above three questions:
The question of measuring or not measuring the organizational performance of industrial research organizations is heavily discussed in both theory and practice. Please select the statement that best matches your opinion:
Performance measurement in industrial research organizations is an obvious need and is as necessary as management itself.
Performance measurement in industrial research organizations is necessary, but the measured results should not influence management decisions too heavily.
Performance measurement in industrial research organizations negatively affects creativity; it is either too costly or not necessary at all.
Please feel free to comment on the above:
Sector classification: Please indicate in which subsector of the Information Technology (IT)/Information and Communication Technology (ICT) industry your organization is active (multiple selections are possible):
IT/ICT Manufacturing
IT/ICT Trade
IT/ICT Services: Telecommunication services
IT/ICT Services: Computer programming and related services (e.g., software provider)
IT/ICT Services: Data processing, hosting and related services
IT/ICT Services: ICT consultancy services
IT/ICT Services: Other services
Other sector:
Please insert your contact information:
Company
Contact person
Address
Email
Your responses have been transmitted. Thank you very much for your participation.
E.2 Online Questionnaire (Russian)
Study:
Performance Assessment of Corporate Research Departments in the Information and Communication Technology Industry
Thank you for taking part in this survey. As a participant in our study, you will receive its results as soon as the analysis of the data is completed. Please enter your password to start the survey:
Performance Assessment of Corporate Research Departments in the Information and Communication Technology Industry
E.2.1 KEY TERMS
Performance assessment – the analysis of how effectively a department achieves the goals set for it. Within this study, particular attention is paid to analyzing the performance of research activities as part of an organization's overall R&D activities.
Research goals – the set of R&D goals against which the performance of the research department is assessed.
Key Performance Indicators (KPIs) of the research department – the set of key performance indicators (metrics) on the basis of which the achievement of the goals set for the research department is evaluated.
E.2.2 GOALS OF THE STUDY
The results of this study will provide a complete picture of:
– which research goals Russian companies set for themselves;
– which of these goals have priority;
– which key performance indicators are used to assess the achievement of the R&D goals that have been set.
The questionnaire consists of the following sections: Section A contains questions about the system currently used in your company to assess the performance of research (R&D) activities. Section B contains questions that provide more detailed information on whether (and, if so, how) certain factors influence the choice of research (R&D) goals and key performance indicators.
In Section C you can share your personal opinion of the goals and performance indicators currently used by your department and, where necessary, suggest ways to improve them.
E.2.3 ADDITIONAL INFORMATION
Answering the questionnaire will take about 30 minutes.
Confidentiality: We regularly conduct empirical research projects and guarantee the full confidentiality of our studies. In order to classify your company by line of business (sector), size, and other factors, we need general information about your company's profile. This information will not be used for any purpose other than classifying your organization according to your answers, and it will be deleted once this process is completed. No conclusions about you or your company can be drawn from the results of the study. All information resulting from the study will be presented in aggregated form only. We thank you for your valuable contribution to this study!
E.2.4 METHODOLOGY IN SECTION A
Your answers to the questions in Part A will be compiled into a matrix consisting of the set of organizational goals your organization currently assesses and the set of Key Performance Indicators (KPIs) used to measure them, together with the importance assigned to these goals and KPIs. This matrix will be included in the report that will be sent to you as soon as the analysis of the data is completed.
[Example matrix: rows KPI_1 ... KPI_n; columns Goal_1 ... Goal_n, each goal marked "Not assessed" or "Assessed – low/medium/high importance"; cells rated +, ++, or +++]
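To make the compilation concrete, the following is a minimal sketch of how individual answers could be folded into the goals-by-KPIs matrix described above; the rating codes and the goal and KPI names are illustrative, not the study's actual analysis code:

```python
# Minimal sketch: compile (goal, KPI) -> importance answers into the matrix.
RATINGS = {0: "not assessed", 1: "low importance", 2: "medium importance", 3: "high importance"}

answers = {  # (goal, KPI) -> importance rating 0..3
    ("Goal_1", "KPI_1"): 3,
    ("Goal_1", "KPI_2"): 1,
    ("Goal_2", "KPI_1"): 0,
}

matrix: dict[str, dict[str, str]] = {}
for (goal, kpi), rating in answers.items():
    matrix.setdefault(goal, {})[kpi] = RATINGS[rating]

print(matrix)  # e.g. {'Goal_1': {'KPI_1': 'high importance', ...}, ...}
```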
Part A of the questionnaire is divided into two stages. First, you will be asked questions about the performance assessment system currently in use in your research department.
E.2.5 A. Goals, objectives, and performance indicators of research (R&D) activities
This section presents a set of statements that could reflect the main goals and objectives of your organization's research department. Please select the goals that currently best reflect the organizational research goals of your company's R&D department. If your company has a dedicated research department, please answer the questionnaire with respect to the performance of that department; if not, please assess the performance of research activities within your company's combined R&D department. Please indicate whether these goals are used in assessing the performance of the research department and, if so, how you would rate their importance. If a goal is not used, please check the first box (not used).
Note: For the goals you selected for measuring the results of research (R&D) activities, a set of possible Key Performance Indicators (KPIs) will be presented in the next part of the questionnaire.
At the beginning of the survey you selected the following goal: "Alignment of work and transferability of research results to the development department and other (business) units." Please select the performance indicators that best reflect those your company uses to assess research/R&D performance. For the indicators used, please indicate their importance for the company.
At the beginning of the survey you selected the following goal: "Creation and protection of intellectual property." Please select the performance indicators that best reflect those your company uses to assess research/R&D performance. For the indicators used, please indicate their importance for the company.
At the beginning of the survey you selected the following goal: "Strengthening the image of the R&D department and of the company itself, both inside and outside the company." Please select the performance indicators that best reflect those your company uses to assess research/R&D performance. For the indicators used, please indicate their importance for the company.
At the beginning of the survey you selected the following goal: "Creation and evaluation of new business opportunities." Please select the performance indicators that best reflect those your company uses to assess research/R&D performance. For the indicators used, please indicate their importance for the company.
At the beginning of the survey you selected the following goal: "Hiring and developing first-class, talented employees." Please select the performance indicators that best reflect those your company uses to assess research/R&D performance. For the indicators used, please indicate their importance for the company.
At the beginning of the survey you selected the following goal: "Achieving a high degree of operational excellence." Please select the performance indicators that best reflect those your company uses to assess research/R&D performance. For the indicators used, please indicate their importance for the company.
At the beginning of the survey you selected the following goal: "Establishing and maintaining strategic partnerships and/or joint research projects with partners." Please select the performance indicators that best reflect those your company uses to assess research/R&D performance. For the indicators used, please indicate their importance for the company.
At the beginning of the survey you selected the following goal: "Creating and developing technological innovation and technology leadership." Please select the performance indicators that best reflect those your company uses to assess research/R&D performance. For the indicators used, please indicate their importance for the company.
At the beginning of the survey you selected the following goal: Please select the performance indicators that best reflect those your company uses to assess research/R&D performance. For the indicators used, please indicate their importance for the company.
E.2.6 B. Additional Information
Which of the listed approaches best describes the goal-setting process?
Top-down
Bottom-up
Several levels are involved (middle-out)
I don't know
Please indicate the number of management levels between the head of the research (R&D) department and the executive board. (Please note that the level of the head of the department is not counted, while the executive board is.) Number of management levels:
How often do you reconsider your organizational goals? Please choose the most appropriate answer.
More frequently than once a year
Once a year
Once every two years
Once every three years
Less frequently than once every three years
I don't know
How often do you reconsider the key performance indicators for research (R&D) activities on the basis of which these goals are assessed? Please choose the most appropriate answer.
More frequently than once a year
Once a year
Once every two years
Once every three years
Less frequently than once every three years
I don't know
How many employees work in your company's research department?
Up to 10
11–50
51–100
101–500
More than 500
I don't know
How many employees work in your company's development department?
Up to 50
51–200
201–500
501–2,000
More than 2,000
I don't know
How many employees does your entire company have?
Up to 500
501–2,000
2,001–5,000
5,001–20,000
More than 20,000
I don't know
Does your company have an individual performance assessment process for each employee?
Yes
No
I don't know
Please indicate the approximate share of costs accounted for by individual payments to employees as rewards for achieving the goals set: %
Please indicate the approximate share of the research budget that your company spends on collaboration with universities: %
Please indicate the approximate percentage distribution of the funding sources of your company's research department:
% company budget
% other internal funding (e.g., from the budget of another department)
% public funds (e.g., received from the government)
% funds received from customers or partners
% other external sources
Total (100%)
Please indicate the share (in percent) of patents granted to your department compared with the rest of the company: %
E.2.7 C. Your personal opinion on the appropriateness of your performance measurement
Are you satisfied with the current performance measurement system in your company?
If you had the opportunity to change the goals, WHAT would you change?
Select other goals
Keep the same goals, but give them different weights
No changes, I am completely satisfied with the current goals
If you had the opportunity to change the Key Performance Indicators (KPIs), WHAT would you change?
Select other KPIs
Keep the same KPIs, but give them different weights
No changes, I am completely satisfied with the current KPIs
Do you think the balance of quantitative and qualitative indicators in your company's performance measurement system is adequate?
YES, the indicators are well balanced
NO, I think more weight should be given to quantitative indicators
NO, I think more weight should be given to qualitative indicators
I don't know
Do you think you have an adequate set-up (processes, methods, tools) to collect appropriate qualitative information?
Yes
No
I don't know
Please feel free to comment on any of the above questions:
The question of whether or not to measure the performance of research departments is actively discussed in both theory and practice. Please select the statement that best matches your views:
Measuring the performance of research departments is an obvious necessity; it is as necessary as the management process itself.
Measuring the performance of research departments is necessary, but the results obtained should not influence management decisions too heavily.
Measuring the performance of research departments negatively affects creativity and is too costly, and is therefore unnecessary.
Please feel free to comment on this question:
Please indicate in which sector of the information and communication technology (ICT) industry your company operates (multiple answers possible):
ICT manufacturing
ICT trade
Telecommunication services
Programming and related services
Data processing, data storage and related services
ICT consultancy
Other ICT services
Other
Please enter your contact information:
Company
Contact person
Address
Email
Your responses have been transmitted. Thank you very much for your participation.
List of Abbreviations and Acronyms
3Com – Computers, Communication and Compatibility
3M – Minnesota Mining & Manufacturing Company
ABPA – Activity Based Profitability Analysis
AD – Advanced Development
AEC – Atomic Energy Commission
AR – Applied Research
AT&T – American Telephone and Telegraph
AVT – Asset Value of Technology
BITKOM – Bundesverband Informationswirtschaft, Telekommunikation und neue Medien e.V.
BMBF – Bundesministerium für Bildung und Forschung, Federal Ministry of Education and Research
BMWI – Bundesministerium für Wirtschaft und Technologie, Federal Ministry of Economics and Technology
BR – Basic Research
BSC – Balanced Scorecard
BU – Business Units
Bzw. – Beziehungsweise, respectively
CA – Collaboration with Academia
CAPA – Capacity planning tool
CEC – Campus-based Engineering Centers
CEO – Chief Executive Officer
Cf. – from Latin: confer, compare
CIS – Commonwealth of Independent States
COO – Chief Operating Officer
CPC – Collaboration with Partners and Customers
CRF – Centro Ricerche Fiat
CRM – Customer Relationship Management
CSF – Critical Success Factor
CTO – Chief Technology Officer
D – Development
DEF_G – Definition of the Goal (similarity definition)
DG-Research – Directorate-General Research
DPMA – Deutsches Patent- und Markenamt, German Patent and Brand Office
e.V. – Eingetragener Verein, registered association
EC – European Commission
EC-JRC – Joint Research Centre (JRC) of the European Commission (EC)
ED – Experimental Development
EFQM – European Foundation for Quality Management
EMC2 – EMC Corporation
EMEA – Europe, Middle East and Africa
EPO – European Patent Office
ERP – Enterprise Resource Planning
esp. – especially
et al. – and others
EU – European Union
EUR – Euro
EVA – Economic Value Added
F – Function
FBO – Future Business Opportunities
FFE – Fuzzy Front End
FM – Frascati Manual
GE – General Electric Company
HAU – Hauber
HP – Hewlett-Packard Company
i. e. S. – im engeren Sinne, in a narrower sense
i. w. S. – im weiteren Sinne, in a broader sense
i.e. – Latin: id est (that is; that means; in other words)
IBM – International Business Machines Corporation
ICT – Information and Communication Technology
IDF – Invention Disclosure Form
IMG – Image
INPI – Institut national de la propriété industrielle, France
IP – Intellectual Property
IPC – Intellectual Property Creation
IRIM – Industrial research and innovation monitoring and analysis activities
ISIC – International Standard Industrial Classification
ISO – International Organization for Standardization
IT – Information Technology
IIT – Illinois Institute of Technology
IWB – Integration with Business
JRC-IPTS – Joint Research Centre's Institute for Prospective Technological Studies
KEF – Kritischer Erfolgsfaktor, Critical Success Factor
KLI – Klingebiel
KPI – Key Performance Indicator(s)
KRA – Krause
KRI – Key Result Indicator
LOB – Line of Business
MBO – Management by Objectives
Mgmt – Management
MSE – Management Systems Engineering
NACE – Statistical Classification of Economic Activities in the European Community
NAICS – North American Industry Classification System
NASA – National Aeronautics and Space Administration
NIH – National Institutes of Health, USA
NIST – National Institute of Standards and Technology
NISTEP – National Institute of Science and Technology Policy, Japan
NPD – New Product Development
NPV – Net Present Value
OECD – Organisation for Economic Co-operation and Development
OpEx – Operational Excellence
OTD – On-Time Delivery
P – Properties
PA – Portfolio Assessment
PARC – Palo Alto Research Center
PD – Product Development
PhD – Doctor of Philosophy
PI – Performance Indicator
PIMS – Production Information Management System
PLM – Product Life-cycle Management
PM – Performance Management
PMgS – Performance Management System
PMO – Project Management Office
PMS – Performance Management System/Performance Measurement System
PMsS – Performance Measurement System
PoC – Proof of Concept
PRD – Practice of R&D Processes
PSC – Presence in Scientific Community
PSDA – Systematic Design Approach
PUB – Publication
QPM – Quantum Performance Measurement
R – Research Department
R&BI – Research & Breakthrough Innovation
R&D – Research and Development
R&ED – Research and Experimental Development
RAND – Research and Development, RAND Corporation
RD&E – Research, Development and Engineering
Rev. – Revision
RPM – Research Program Manager
RPO – Research Portfolio Office
RQ – Research Question
S – Standard
S&T – Science and Technology
SBU – Strategic Business Unit
SCI – Science
SCM – Supply Chain Management
SEC – Securities and Exchange Commission
SI – Similarity Indicator
SIC – Standard Industrial Classification
SMART – Specific, Measurable, Attainable, Realistic and Timely
SME – Small and Midsize Enterprises
SNA – System of National Accounts
spec. – Specification
SRC – SAP-Labs based Research Center
SRM – Supplier Relationship Management
SRN – SAP Research Net
SW – Software
TiLab – Telecom Italia Lab
TP – Talent Pool
TQM – Total Quality Management
TT – Technology Transfer
TTS Institute – Work Efficiency Institute, Finland
U.S. – United States
UN – United Nations
UNESCO – United Nations Educational, Scientific and Cultural Organization
USPTO – United States Patent and Trademark Office
usu. – Usually
VC – Value Creation
vs. – Versus
WET – Wettstein
WIPO – World Intellectual Property Organization
WPIIS – Working Party on Indicators for the Information Society
List of Tables
Table 2.1 Definitions and comparisons of the terms "invention" and "innovation"
Table 2.2 The main differences between "invention" and "innovation"
Table 2.3 Definitions of R&ED in the Frascati Manual
Table 2.4 Definitions of the term "performance"
Table 2.5 Key terms in Performance Measurement
Table 2.6 Definition of the term "Performance Indicator"
Table 2.7 Definition of the term "Key Performance Indicator"
Table 2.8 Definitions: measure, metric, PI and KPI
Table 2.9 Definitions of the term "Performance Management"
Table 2.10 Definitions of "Performance Measurement" and "Performance Measurement system"
Table 2.11 Comparison of Performance Measurement (system) definitions
Table 2.12 Purposes of Performance Management in industrial research
Table 2.13 Comparison of purposes for Performance Management
Table 2.14 ICT sector according to ISIC Rev. 4
Table 3.1 Comparison of factors determining PMSs: Krause and Klingebiel
Table 3.2 The basic dimensions of R&D performance analysis
Table 3.3 Summary of relevant performance management and measurement concepts
Table 3.4 Catalogue of requirements for a "Performance Measurement System" according to Klingebiel
Table 3.5 Requirements for a "Performance Measurement System" and for "IT-System for a Performance Measurement System" according to Wettstein
Table 3.6 Requirements for a "Performance Measurement System" according to Hauber
Table 3.7 Requirements for a "Performance Management Method" according to Krause
Table 3.8 Our classification of requirements for a "Performance Management System"
Table 3.9 Main characteristics of industrial research and requirements
Table 3.10 Intra-goal structure of corporate goals
Table 3.11 Elements of the decision goal
Table 3.12 Dimensions of the decision goal according to Hauschildt
Table 3.13 Problem areas in performance management reported in the literature
Table 3.14 Summary of different goal dimensions addressed in literature
Table 3.15 Summary of measurement methods adapted from Kerssens-van Drongelen
Table 3.16 Case organization's success factors divided into financial and non-financial factors
Table 3.17 Measures of the case organization
Table 3.18 Performance measures
Table 4.1 Case study outline
Table 4.2 Relative weighting of SAP corporate research goals
Table 4.3 Subgoals and KPIs used to assess SAP Research
Table 4.4 Mapping KPIs to goals and their relative weighting
Table 4.5 Performance clusters
Table 4.6 Performance cluster properties
Table 4.7 Catalogue of KPI classes with corresponding properties and examples
Table 4.8 Accuracy comparison of SI-1 and SI-2 for six goal groups
Table 5.1 Eight generic organizational goals in industrial research
Table 5.2 Mapping goals and KPIs
Table 5.3 Definition of the unit of analysis
Table 5.4 Reconsideration goals versus Reconsideration KPIs
Table 5.5 Examples of additional goals named in the survey
Table 5.6 Frequency of use of KPIs assessing goal 1
Table 5.7 Examples of additional KPIs named in the survey
Table 5.8 Frequency of use of KPIs assessing goal 1
Table 5.9 Selected performance clusters in goal 1 according to SI2 (survey)
Table 5.10 Goals with TT cluster ranked highest
Table 5.11 Frequency of use of KPIs assessing goal 2
Table 5.12 Selected performance clusters in goal 2 according to SI2 (survey)
Table 5.13 Goals with IP cluster ranked highest (case study)
Table 5.14 Frequency of use of KPIs assessing goal 3
Table 5.15 Selected performance clusters in goal 3 according to SI2 (survey)
Table 5.16 Goals with IMG cluster ranked highest (case study)
Table 5.17 Frequency of use of KPIs assessing goal 4
Table 5.18 Selected performance clusters in goal 4 according to SI2 (survey)
Table 5.19 Goals with FBO cluster ranked highest (case study)
Table 5.20 Frequency of use of KPIs assessing goal 5
Table 5.21 Selected performance clusters in goal 5 according to SI2 (survey)
Table 5.22 Goals with TP cluster ranked highest (case study)
Table 5.23 Frequency of use of KPIs assessing goal 6
Table 5.24 Selected performance clusters in goal 6 according to SI2 (survey)
Table 5.25 Goals with OPEX cluster ranked highest (case study)
Table 5.26 Frequency of use of KPIs assessing goal 7
Table 5.27 Selected performance clusters in goal 7 according to SI2 (survey)
Table 5.28 Goals with CPC cluster ranked highest (case study)
Table 5.29 Frequency of use of KPIs assessing goal 8
Table 5.30 Selected performance clusters in goal 8 according to SI2 (survey)
Table 5.31 Goals with TT and RPM cluster ranked highest (case study)
Table 6.1 Elements of PM, purposes and activities combined with the levels of PMgS
Table 6.2 Guidelines for the development of a PMgS, top-down goal-setting process (PMgS is not in place, department goals cannot be changed)
Table 6.3 Guidelines for the development of a PMgS, middle-out goal-setting process (PMgS is not in place, department goals can be adjusted)
Table 6.4 Guidelines for the development of a PMgS (PMgS is already in place)
Table 6.5 Requirements considering the specifics of industrial research
Table 6.6 Comparison of derived requirements and the developed PMgS
List of Figures
Fig. 1.1 Outline of the dissertation
Fig. 2.1 Innovation management, technology management and R&D management
Fig. 2.2 Scope of activities in industrial research
Fig. 2.3 Performance as goal attainment
Fig. 2.4 Performance level model
Fig. 2.5 The elements of Performance Management
Fig. 2.6 Performance Management: putting the terms together
Fig. 2.7 Constitutive elements of PMS and their relationships
Fig. 3.1 Four schools of thought
Fig. 3.2 The R&D Laboratory as a system
Fig. 3.3 Degrees of compatibility between two goals
Fig. 3.4 Analysis need in building a goal system
Fig. 3.5 Inter-goal-structure between corporate and decision goals
Fig. 3.6 Tetrahedron model adapted from Krause
Fig. 4.1 Framework: research department and its eco-system
Fig. 4.2 Definition of inputs and outputs of SAP Research
Fig. 4.3 Organizational structure of SAP
Fig. 4.4 Organizational structure of SAP Research
Fig. 4.5 Distribution of SAP research locations
Fig. 4.6 Research programs at SAP research
Fig. 4.7 Research process at SAP research
Fig. 4.8 Idea selection process at SAP Inspire
Fig. 4.9 Generic Technology Transfer process
Fig. 4.10 Generic technology transfer process mapped with KPI classes
Fig. 4.11 Generic process of the future business opportunities cluster
Fig. 4.12 Generic process of the future business opportunity cluster with KPI classes
Fig. 4.13 Generic Research Portfolio Management process
Fig. 4.14 Generic Research Portfolio Management process mapped with KPI classes
Fig. 4.15 Generic Intellectual Property Creation process
Fig. 4.16 Generic Intellectual Property Creation process mapped with KPI classes
Fig. 4.17 Generic Talent Pool process
Fig. 4.18 Generic Talent Pool process mapped with KPI classes
Fig. 4.19 Cluster comparison from case studies
Fig. 4.20 Goal decomposition example
Fig. 4.21 Technology Transfer-oriented goals with SI-1 indicator
Fig. 4.22 Intellectual Property Creation-oriented goals with SI-1 indicator
Fig. 4.23 Operational Excellence-oriented goals with SI-1 indicator
Fig. 4.24 People Development-oriented goals with SI-1 indicator
Fig. 4.25 Idea Generation-oriented goals with SI-1 indicator
Fig. 4.26 Image-oriented goals with SI-1 indicator
Fig. 4.27 Technology Transfer-oriented goals with SI-2 indicator
Fig. 4.28 Intellectual Property Creation-oriented goals with SI-2 indicator
Fig. 4.29 Operational Excellence-oriented goals with SI-2 indicator
Fig. 4.30 People Development-oriented goals with SI-2 indicator
Fig. 4.31 Idea Generation-oriented goals with SI-2 indicator
Fig. 4.32 Image-oriented goals with SI-2 indicator
Fig. 5.1 Location of company headquarters: overview
Fig. 5.2 Location of company headquarters: detailed overview
Fig. 5.3 Simultaneous presence in multiple sectors
Fig. 5.4 Multi-sector distribution of participating companies
Fig. 5.5 Detailed distribution of the ICT service subsectors
Fig. 5.6 Size of entire company
Fig. 5.7 Size of development department
Fig. 5.8 The size of research department
Fig. 5.9 Goal-setting approach
Fig. 5.10 Cross tabulation – goal-setting process with the company size
Fig. 5.11 Goal-setting approach Europe and USA vs. CIS
Fig. 5.12 Frequency of reconsideration of goals and KPIs
Fig. 5.13 Opportunity to change goals or KPIs
Fig. 5.14 Cross tabulation – opportunity to change goals with the opportunity to change KPIs
Fig. 5.15 Ratio of qualitative/quantitative measures currently used
Fig. 5.16 Availability of means to collect qualitative data
Fig. 5.17 Cross tabulation of Figs. 5.15 (ratio) and 5.16 (availability)
Fig. 5.18 Necessity of performance measurement in industrial research
Fig. 5.19 Assessment process for individual employee performance
Fig. 5.20 Variable part of the salary to goal achievements
Fig. 5.21 Variable part of the salary with the research department size
Fig. 5.22 Spending on collaboration with academia
Fig. 5.23 Spending on collaboration with academia with the research department size
Fig. 5.24 Patents originating within research departments
Fig. 5.25 Patents originated within research departments with its sizes
Fig. 5.26 Funding source distribution
Fig. 5.27 Number of goals used in research departments
Fig. 5.28 Comparison of number of selected KPIs per goal
Fig. 6.1 A five-level Performance Management Model
Fig. 6.2 Strategic/goal level of PMgS
Fig. 6.3 Interface between goals and activity level of PMgS
Fig. 6.4 Interface between KPI (PMsS) and activity levels of PMgS
Fig. 6.5 KPI level forming PMsS as part of PMgS
Fig. 7.1 Framework: research department and its eco-system
Fig. 7.2 Constitutive elements of PMS and their relationships
Fig. 7.3 A five-level Performance Management Model
Bibliography
Aalders AF (2006) Philips research, setting up a creative environment for R&D. EIRMA Learning Group "Managing Creativity", Paris, 24 November
Ahaus CTB (1994) Bevoegdheidsverdeling en Organisatie. Dissertation, University of Groningen
Amabile TM, Conti R (1994) Environmental determinants of work motivation, creativity, and innovation: the case of R&D downsizing. Paper presented at the technological oversights and foresights conference, Stern School of Business, New York University
Amabile TM et al (2003) Assessing the work environment for creativity. Acad Manage J 39(5):1157
Amelingmeyer J (2005) Marketing für F&E Einheiten im Unternehmen. In: Amelingmeyer J, Harland PE (eds) Technologiemanagement & Marketing. DUV, Wiesbaden, pp 347–366
Andersen B, Fagerhaug T (2002) Performance measurement explained. ASQ Quality Press, Milwaukee
Anthony RN, Dearden J, Bedford D (1989) Management control systems, 6th edn. Richard D. Irwin, Homewood
Asimov M (1962) Introduction to design. Prentice Hall, Englewood Cliffs
Baum H-G, Coenenberg AG, Günther T (2004) Strategisches Controlling, 3rd edn. Stuttgart
Beck C, Völker R (2009) Konzepte in der industriellen Forschungskommunikation technologieintensiver Unternehmen. Wissenschaftsmanagement 1:28–35
Beer S (1959) Cybernetics and management. English University Press, London
Beer S (1962) Control systems. In: Eddison R, Pennycuick K, Rivett B (eds) Operations research in management. London
Bell CG, Mason HB (1991) A method to diagnose high tech venture. IEEE Press, Los Alamitos
Betz F (1997) Managing technological innovation: competitive advantage from change. Wiley, New York
Betz F (1998) Managing technological innovation, 1st edn. Wiley, New York
Bidlingmaier J (1964) Unternehmerziele und Unternehmerstrategien. Wiesbaden
Bode J (2008) Performance Management im Beschaffungsbereich: Konzept-Messung-Management. Grin Verlag, München
Bösch D (2007) Controlling im betrieblichen Innovationssystem – Entwicklung einer Innovationscontrolling-Konzeption mit besonderem Fokus auf dem Performance Measurement. Hamburg
Bourne M, Mills J, Wilcox M, Neely A, Platts K (2000) Designing, implementing and updating performance measurement systems. Int J Oper Prod Manage 20(7):754–771
Bredrup H (1995) Performance measurement. In: Rolstadas A (ed) Performance management: a business process benchmarking approach. London, p 174
Brent AC, Pretorius MW (2009) An investigation into behaviours in and performances of a R&D operating unit. In: Proceedings of the XX international society for professional innovation management (ISPIM) conference and the R&D management conference 2009: the future of innovation, Vienna, 21–24 June
Wiley, New York Cokins G (2009) Performance management: integrating strategy execution, methodologies, risk, and analytics, vol 21. Wiley, Hoboken Collis J, Hussey R (2003) Business research. A practical guide for undergraduate and postgraduate students. Palgrave Macmillan, New York
Cooper RG, Edgett SJ, Kleinschmidt EJ (2001) Portfolio management for new product development: results of an industry practices study. R&D Manage (Industrial Research Institute, Inc) 31(4):3
Cordero R (1989) The measurement of innovation performance in the firm: an overview. Res Policy 19:185–192
Daniel DR (1961) Management information crisis. Harvard Bus Rev 39(5):111–121
Denzin NK (1978) The research act. A theoretical introduction to sociological methods. McGraw-Hill, New York
Dhavale DG (1996) Problems with existing manufacturing performance measures. J Cost Manage 9(4):50
Dörner D, Kreuzig HW, Reither F (1983) Lohhausen. Vom Umgang mit Unbestimmtheit und Komplexität. Hans Huber, Bern
Dransfield SB, Fisher NI, Vogel NJ (1999) Using statistics and statistical thinking to improve organizational performance, with discussion and authors' reply. Int Stat Rev 67(2):99–150
Driva H, Pawar KS, Menon U (2000) Measuring product development performance in manufacturing organisations. Int J Prod Econ 63:147–159
Drucker PF (1974) Management: tasks, responsibilities, practices. Harper & Row, London
Drucker PF (2007) The effective executive. Elsevier/Butterworth-Heinemann, Oxford/Burlington
Dwight R (1999) Searching for real maintenance performance measures. J Qual Mainten Eng 5(3):258–275
Earley PC et al (1990) Impact of process and outcome feedback on the relation of goal setting to task performance: 1969–1980. Psychol Bull 90:125–152
Easterby-Smith M, Thorpe R, Lowe A (2002) Management research: an introduction. Sage, London
Edvinsson L, Malone MS (1997) Intellectual capital: realizing your company's true value by finding its hidden brainpower. Harper Business, New York
Eisenhardt KM (1989) Building theories from case study research. Acad Manage Rev 14(4):532–550
Emmanuel C, Otley D, Merchant K (1990) Accounting for management control. Chapman & Hall, London
European Commission – Joint Research Centre (EC-JRC) (2008) Monitoring industrial research: role and dynamics of corporate R&D, JRC42738. http://iri.jrc.ec.europa.eu/concord-2007/summary.pdf, p 5
European Foundation for Quality Management (2003) Excellence einführen, Leitfaden. E.F.Q.M., Brussels
Evangelidis K (1992) Performance measured performance gained. The Treasurer, February, p 45
Fagerberg J (2003) Schumpeter and the revival of evolutionary economics: an appraisal of the literature. J Evol Econ 13(2):125–159
Fagerberg J (2004) Innovation – a guide to the literature. In: Fagerberg J, Mowery DC, Nelson R (eds) The Oxford handbook of innovation, p 4
Feltham GA, Xie J (1994) Performance measure congruity and diversity in multi-task principal/agent relations. Acc Rev 69(3):429–453
Fine M (2010) Collaborative research. http://www.wisegeek.com. Accessed 12 August 2010
Fisher DM (2004) The business process maturity model. A practical approach for identifying opportunities for optimization. Available online at http://issuu.com/d. . ./consulting_-_business_process_management_practical
Fortune Magazine (2009) World's most admired companies. http://money.cnn.com/magazines/fortune/mostadmired/2009/full_list/. Accessed April 2010
Francis PH (1992) Putting quality into the R&D process. Res Technol Manage 35(4):16–23
Frenzel P, Schroth C, Samsonowa T (2007) The enterprise interoperability center – an institutional framework facilitating enterprise interoperability. In: Proceedings of the 15th European conference on information systems (ECIS 2007), St. Gallen, p 12
Friedewald M, Kimpeler S, Hawkins R, Poel M, Lengrand L, Chatrie I (2004) Benchmarking national and regional policies in support of the competitiveness of the ICT sector in the EU. Final report prepared for European Commission, Directorate-General Enterprises, D4. ISI, Karlsruhe
Frigo ML, Krumwiede KR (1999) Balanced scorecards: a rising trend in strategic performance measurement. J Strateg Perform Meas 1:42–48
Frost PJ, Mahoney TA (1976) Goal setting and the task process: an interactive influence on individual performance. Organ Behav Hum Perform 17:328–350
Gaiser B, Servatius HG (1990) Mehr Transparenz für die Forschung und Entwicklung: Fahrplan für ein F&E-Controllingsystem. Controlling 2(3):13
Geisler E (2000) The metrics of science and technology. Quorum Books, Westport
Geschka H (1983) Innovationsmanagement. Management Enzyklopädie, vol 4, Landsberg am Lech, pp 823–837
Geschka H (1988) Innerbetrieblicher Technologie-Transfer – eine Chance. Köln
Geschka H (2006) Kreativitätstechniken und Methoden der Ideenbewertung. In: Sommerlatte T, Beyer G, Seidel G (eds) Innovationskultur und Ideenmanagement, Strategien und praktische Ansätze für mehr Wachstum. Symposion, Düsseldorf, pp 217–249
Gladen W (2002) Performance Measurement als Methode der Unternehmenssteuerung. HMD-Praxis der Wirtschaftsinformatik 227:5–16
Gladen W (2005) Performance measurement, 3rd edn. Wiesbaden, Verlag TÜV Rheinland, Köln
Gleich R (1997) Performance measurement. Die Betriebswirtschaft 1997(1):112, 114–115
Gleich R (2001) Das System des Performance Measurement: Theoretisches Grundkonzept, Entwicklungs- und Anwendungsstand. München
Godener B, Soderquist KE (2004) Use and impact of performance measurement results in R&D and NPD: an exploratory study. R&D Manage 34:197
Gomez P (1981) Modelle und Methoden des systemorientierten Managements – eine Einführung. P. Haupt, Bern
Gomez P, Fasnacht D, Waldispühl R, Wasserer C (2002) Komplexe IT-Projekte ganzheitlich führen. Verlag Paul Haupt, Bern
Granger C (1964) The hierarchy of objectives. Harvard Bus Rev 42:63–74
Greller MM (1980) Evaluation of feedback sources as a function of role and organizational level. J Appl Psychol 65(1):24–27
Greller MM, Herold DM (1975) Sources of feedback: a preliminary investigation. Organ Behav Hum Perform 13:244–256
Griffin RW (1990) Management, 3rd edn. Houghton Mifflin Company, Dallas
Griffin A, Page A (1993) An interim report on measuring product development success and failure. J Prod Innovat Manage 10(4):291–308
Grüning M (2002) Performance-Measurement-Systeme – Messung und Steuerung von Unternehmensleistung. Dissertation, Technische Universität Dresden
Gupta AK, Wilemon D (1996) Changing patterns in industrial R&D management. J Prod Innovat Manage 13:497–511
Gurel O (2007a) Innovation and invention – similar words, different concepts. ipFrontline.com, Magazine of intellectual property and technology, article from 11 October 2007. http://www.ipfrontline.com/depts/article.asp?id=16295&deptid=5. Accessed 13 June 2009
Gurel O (2007b) Innovation vs. invention: knowing the difference makes a difference. WTN News article from 18 September 2007. http://wistechnology.com/articles/4184. Accessed 14 May 2009
Habermann S, Wieser A (2002) Mangel an innovativen Führungssystemen im deutschen Mittelstand – Balanced Scorecard Anwendungserfahrungen. München
Hahn D (1994) PuK, Controllingkonzepte: Planung und Kontrolle, Planungs- und Kontrollsysteme, Planungs- und Kontrollrechnung. Gabler, Wiesbaden
Halligan J, Bouckaert G, Van Dooren W (2010) Performance management in the public sector. Routledge, London
Hamel W (1992) Zielsysteme. In: Frese E (ed) Handwörterbuch der Organisation, 3rd edn. Stuttgart, pp 2634–2638
Hauber R (2002) Performance measurement in der Forschung und Entwicklung. Gabler Verlag, Wiesbaden, pp 24–119
Hauschildt J (1970) Zur Artikulation von Unternehmenszielen. ZfbF 22:551
Hauschildt J (1997) Innovationsmanagement, 2nd edn. München
Heinen E (1966) Das Zielsystem der Unternehmung. Gabler
Heinen E (1991) Industriebetriebslehre: Entscheidungen im Industriebetrieb, 9th edn. Gabler, Wiesbaden
Herder-Dorneich P (1993) Ökonomische Systemtheorie: Eine kurzgefaßte Hinführung. Nomos Verlag, Baden-Baden
Heuser L (2006) Ideenmanagement und Corporate Venturing – Fallbeispiel SAP. In: Sommerlatte T, Beyer G, Seidel G (eds) Innovationskultur und Ideenmanagement, Strategien und praktische Ansätze für mehr Wachstum. Symposion, Düsseldorf
Hoffmann O (1999) Performance management: Systeme und Implementierungsansätze. Bern
Hoffmann O (2000) Performance management, 2nd edn. Haupt, Bern
Hope J, Fraser R (2003) Beyond budgeting: how managers can break free from the annual performance trap. Harvard Business Press, Boston
Hronec SM (1993) Vital signs: using quality, time and cost performance measurements to chart your company's future. Amacom, American Management Association, New York
Harry M, Schroeder R (2000) Six Sigma: Prozesse optimieren, Null-Fehler-Qualität schaffen, Rendite radikal steigern. Campus, Frankfurt am Main
Ilgen DR, Fisher CD, Taylor MS (1979) Consequences of individual feedback on behavior in organizations. J Appl Psychol 64(4):349–371
Illinois Institute of Technology Research Institute (1968) Technology in retrospect and critical events in science, vol 1, p 15
International Telecommunication Union (ITU) (2009) Measuring the information society – the ICT development index, p 1
Ittner CD, Larcker DF (1998) Innovations in performance measurement: trends and research implications. J Manage Account Res 10:205–238
Jetter W (2004) Performance management: Strategien umsetzen, Ziele realisieren, Mitarbeiter fördern, 2nd edn. Schäffer-Poeschel, Stuttgart
Kaplan RS, Norton DP (1992) The balanced scorecard: measures that drive performance. Harvard Bus Rev 70(1):71–79
Kaplan RS, Norton DP (1996) The balanced scorecard – translating strategy into action. Harvard Business School Press, Boston
Karlsson M, Trygg L, Elfström B-O (2004) Measuring R&D productivity: complementing the picture by focusing on research activities. Technovation 24:179–186
Kasper E (2006) Internal research & development markets. Springer Berlin/Heidelberg GmbH & Co. KG
Kerklaan LAFM, Kingma J, van Kleef FPJ (1996) De cockpit van de organisatie. Kluwer Bedrijfswetenschappen, Deventer
Kerssens-van Drongelen IC (1999) Systematic design of R&D performance measurement systems. Doctoral dissertation, University of Twente, Enschede, The Netherlands
Kerssens-van Drongelen IC (2001) Systematic design of R&D performance measurement systems. Ipskamp, Enschede
Kerssens-van Drongelen IC, Cook A (1997) Design principles for the development of measurement systems for research and development processes. R&D Manage 27(4):346
Khurana A, Rosenthal SR (1998) Towards holistic "Front Ends" in new product development. J Prod Innovat Manage 15:57–74
Kim J, Wilemon D (2002) Focusing the fuzzy front-end in new product development. R&D Manage 32:269–279
Klein P (1999) Measure what matters – corporate image. Communication World, October; find articles at BNET. http://www.bnet.com/. Accessed March 2010
Klingebiel N (1999) Performance measurement – Grundlagen, Ansätze, Fallstudien. Gabler, Wiesbaden
Klingebiel N (2000) Integriertes Performance Measurement. Dt. Univ.-Verl., Wiesbaden
Kosiol E (1966) Die Unternehmung als wirtschaftliches Aktionszentrum. Rowohlt, Reinbek bei Hamburg
Kotler P (1996) Marketing management: analysis, planning, implementation, and control, 9th edn. Prentice Hall College Div, Upper Saddle River
Krause O (2005) Performance Measurement – Eine Stakeholder-Nutzen-orientierte und Geschäftsprozess-basierte Methode. Dissertation, Technische Universität Berlin
Krause O (2006) Performance Management: Eine Stakeholder-Nutzen-orientierte und Geschäftsprozess-basierte Methode. DUV
Kueng P, Krang A (2001) Performance-Measurement-Systeme im Dienstleistungsspektrum. Das Denken in Wirkungsketten ist noch wenig verbreitet – Ergebnisse einer Studie. IO Manage 70(1/2):56–63
Landy FJ, Farr JL (1983) The measurement of work performance – methods, theory, and applications. Academic, San Diego
Laplante P (2009) Requirements engineering for software and systems, 1st edn. CRC Press, Redmond
Latham GP, Yukl GA (1975) A review of research on the application of goal setting in organizations. Acad Manage J 18(4):824–845
Lauzel P, Cibert A (1959) Des ratios au tableau de bord. Paris
Lebas M (1995) Performance measurement and performance management. Int J Prod Econ 41(9):23–35
Leifer R, Triskari T (1987) Research versus development: differences and similarities. IEEE Trans Eng Manage 34(2):71–78
Little AD (1997) Management von Innovation und Wachstum. Gabler, Wiesbaden
Loch CH, Tapper UAS (2002) Implementing a strategy-driven performance measurement system for an applied research group. J Prod Innovat Manage 19(3):185–198
Locke EA (1968) Toward a theory of task motivation and incentives. Organ Behav Hum Perform 3(2):125, 159, 161–162
Locke EA, Latham GP (1990) A theory of goal setting and task performance. Prentice-Hall, Englewood Cliffs
Locke EA, Shaw KN, Saari LM, Latham GP (1981) Goal setting and task performance: 1969–1980. Psychol Bull 90:125–126
Lynch R, Cross K (1995) Measure up!: yardsticks for continuous improvement. Wiley, New York
Malorny C (1996) TQM umsetzen: der Weg zur Business Excellence. Schäffer-Poeschel, Stuttgart
Mansfield E, Rapoport J, Schnee J, Wagner S, Hamburger M (1971) Research and innovation in the modern corporation. Norton, New York
Masaaki I, Heymanns B (1999) Gemba Kaizen – collaborating for change. Berrett-Koehler Publishers, San Francisco
Meadows AJ (1998) Communicating research. Academic, San Diego
Merriam-Webster's 11th Collegiate Dictionary (2004) Version 3.1
Mertins K, Krause O (1998) Perspectives for executive information, decision support and information management in the extended enterprise. In: Schönsleben P, Büchel A (eds) Organizing the extended enterprise. Chapman & Hall, London
Mesthene EG, Clintock S (1962) The nature of research goals: some necessary definitions. The RAND Corporation, Santa Monica
Mettänen P (2005) Design and implementation of a performance measurement system for a research organization. Prod Plan Control 16(2):178–188
Meyer MW (2002) Rethinking performance measurement – beyond the balanced scorecard. Cambridge University Press, Cambridge
Meyer MW, Gupta V (1994) The performance paradox. Res Organ Behav 16:309–369
Morse J (1991) Approaches to qualitative-quantitative methodological triangulation. Nurs Res 40(1):120
Nagel P (1992) Techniken der Zielformulierung. In: Frese E (ed) Handwörterbuch der Organisation, 3rd edn. Stuttgart
Neely A, Gregory M, Platts K (1995) Performance measurement system design – a literature review and research agenda. Int J Oper Prod Manage 15(4):80–116
Norman AL (1994) Informational society: an economic theory of discovery, invention and innovation. Kluwer Academic Publishers, Boston/Dordrecht/London
Nyiri A (2007) Corporate Performance Management. Ein ganzheitlicher Ansatz zur Gestaltung der Unternehmenssteuerung. facultas.wuv
O'Donnell FJ, Duffy AHB (2005) Design performance. Springer, London
OECD (1995) Canberra manual: manual on the measurement of human resources devoted to S&T, the measurement of scientific and technological activities. OECD, Paris
OECD (2002) Frascati manual: proposed standard practice for surveys on research and experimental development, 6th edn. OECD, Paris
OECD (2005) Oslo manual: guidelines for collecting and interpreting innovation data, the measurement of scientific and technological activities, 3rd edn. OECD, Paris
OECD (2007) Information economy – sector definitions based on the International Standard Industrial Classification (ISIC 4), DSTI/ICCP/IIS(2006)2/FINAL, p 15
OECD (2009) OECD patent statistics manual. OECD, Paris
Oehler K (2006) Corporate Performance Management: Mit Business-Intelligence-Werkzeugen. Hanser, München
Ojanen V, Vuola O (2006) Coping with the multiple dimensions of R&D performance analysis. Int J Technol Manage 33(2–3):279–290
Parmenter D (2007) Key performance indicators – developing, implementing, and using winning KPIs. Wiley, New Jersey
Patterson WC (1983) Evaluating R&D performance at Alcoa labs. Res Manage 26(2):23–27
Pham TBN (2008) Organizational structure and knowledge transfer in Vietnam's IT companies. J Econ Develop, forthcoming in September 2008
Pleschak F, Sabisch H (1996) Innovationsmanagement. Schäffer-Poeschel Verlag, Stuttgart
Porter ME (1985) Competitive advantage. Free Press, New York
Prechelt L (1997) Why we need an explicit forum for negative results. J Univers Comput Sci 3(9):1–10
Ramin KP, Fey G (1998) Die internationale Rechnungslegung und ihr Informationsgehalt für den Anleger. In: Müller M, Leven F-J (eds) Shareholder Value Reporting, Veränderte Anforderungen an die Berichterstattung börsennotierter Unternehmen. Wien, p 287
Ranftl RM (1977) Improving R&D productivity – a study program and its application. Res Manage 20(1):25–29
Rappaport A (1998) Creating shareholder value. The Free Press, New York
Reinertsen D (1994) Streamlining the fuzzy front-end. World Class Design Manuf 1(5):4–8
Robb WL (1991) How good is our research. Res Technol Manage 34(2):16–21
Rockart JF (1979) Chief executives define their own data needs. Harvard Bus Rev 57(2):85
Rolstadas A (1998) Enterprise performance measurement. Int J Oper Prod Manage 18(9/10):989–999
Roussel PA, Saad KN, Erickson TJ (1991) Third generation R&D: managing the link to corporate strategy. Harvard Business School Press, Boston
Rowland F (2002) Peer review of electronic journals. http://elpub.scix.net/data/works/att/0313.content.pdf. Accessed 2 October 2010
Rühli E (1985) Unternehmungsführung und Unternehmungspolitik 1. Bern, Stuttgart
Samsonowa T (2008) Measuring research objectives by adequate key performance indicators. EIASM, 18th European doctoral summer school on technology management – innovation at crossroads, Leuven
Samsonowa T, Gerteis W (2009) Utilizing a cluster approach to measure performance in industrial research organizations. In: Proceedings of the XX international society for professional innovation management (ISPIM) conference: the future of innovation, Vienna, 21–24 June
Samsonowa T, Schroth C (2007) The EIC – a consensus-centric approach for cross-organizational e-Business standards diffusion. In: Gonçalves RJ (ed) Enterprise interoperability II: new challenges and approaches. Springer, London, p 127
Samsonowa T, Buxmann P, Gerteis W (2009) Defining KPI sets for industrial research organizations – a performance measurement approach. Int J Innovat Manage 13(3):157–176
Samsonowa T, Gerteis W, Buxmann P (2010) Towards a systematic performance management system for industrial research organizations in the ICT sector. In: Proceedings of the XXI international society for professional innovation management (ISPIM) conference: the dynamics of innovation, Bilbao, 6–9 June
Schainblatt AH (1982) How companies measure the productivity of engineers and scientists. Res Manage 25(3):10–18
Schätzle G (1965) Forschung und Entwicklung als unternehmerische Aufgabe. Dissertation, Köln/Opladen
Schmidt G (1969) Product-Innovation und Organisation. Dissertation, Gießen
Schmoch U, Licht G, Reinhard M (2000) Wissens- und Technologietransfer in Deutschland. Fraunhofer IRB, München
Schneiderman AM (1999) Why balanced scorecards fail. J Strateg Perform Meas, special edition:6–11
Scholl W (2006) Evolutionäres Ideenmanagement. In: Sommerlatte T, Beyer G, Seidel G (eds) Innovationskultur und Ideenmanagement, Strategien und praktische Ansätze für mehr Wachstum. Symposion, Düsseldorf, pp 163–194
Schreyer M (2007) Entwicklung und Implementierung von Performance Measurement Systemen. Wiesbaden
Schumpeter JA (1964) Theorie der wirtschaftlichen Entwicklung, 6th edn. Duncker & Humblot, Berlin
Schwaber K (2007) Agile project management with Scrum. Microsoft Press, Unterschleißheim
Schwantag C (1951) Der Wirtschaftsprozeß im Handelsbetriebe. ZfB 21:338–353
Seiler R (1965) Improving the effectiveness of research and development. McGraw-Hill, New York
Siebert G (1998) Prozeß-Benchmarking – Methode zum branchenunabhängigen Vergleich von Prozessen. Berichte aus dem PTZ, Dissertation, TU Berlin
Sinclair D, Zairi M (1995) Effective process management through performance measurement: part III – an integrated model of total quality-based performance measurement. Bus Process Manage J 1(3):50–65
Sinclair D, Zairi M (2000) Performance measurement: a critical analysis of the literature with respect to total quality management. Int J Manage Rev 2:145–168
Sink DS, Tuttle TC (1989) Planning and measurement in your organization of the future. Industrial Engineering and Management Press, Norcross
Simons R, Dávila A, Kaplan R (2000) Performance measurement & control systems for implementing strategy: text & cases. Prentice Hall, New Jersey
Smith DK, Alexander RC (1999) Fumbling the future: how Xerox invented, then ignored the first personal computer. iUniverse.com Inc., Lincoln
Solow RM (1956) A contribution to the theory of economic growth. Quart J Econ 70(1):65–94
Sommerlatte T (2006) Warum Innovationskultur und Ideenmanagement so wichtig sind. In: Sommerlatte T, Beyer G, Seidel G (eds) Innovationskultur und Ideenmanagement, Strategien und praktische Ansätze für mehr Wachstum. Symposion, Düsseldorf, pp 13–26
Souder WE, Nashar AS, Padmanabhan V (1990) A guide to the best technology transfer processes. J Technol Transf Winter–Spring:5–16
Specht G, Beckmann C (1996) F&E-Management. Schäffer-Poeschel Verlag, Stuttgart
Stahl MJ, Steger JA (1977) Measuring innovation and productivity – a peer rating approach. Res Manage 20(1):35–38
Steinle C, Thiem H, Lange M (2001) Die Balanced Scorecard als Instrument zur Umsetzung von Strategien, Praxiserfahrungen und Gestaltungshinweise. Control Mag 26(1):29–37
Sutherland J (2004–10) Agile development: lessons learned from the first Scrum (PDF). http://www.scrumalliance.org/resources/35. Accessed 2 December 2010
Szakonyi R (1994) Measuring R&D effectiveness – I. Res Technol Manage 37(2):27–32
Tankoonsombut K (1998) Investigation of the effects of feedback and goal setting on knowledge work performance in the distributed work environment. Dissertation, Virginia Polytechnic Institute
Taschler DR, Chappelow CC (1997) Intra-company technology transfer in a multinational firm. J Technol Transf 22(1):29–34
The Boston Consulting Group (2003) The world class innovation. R&D management: rules for success in manufacturing industry
Thommen JP, Achleitner AK (2003) Allgemeine Betriebswirtschaftslehre: umfassende Einführung aus managementorientierter Sicht, 3rd edn. Gabler, Wiesbaden
Tieke R, Landgraf F (1999) Neue Instrumente für neue Sicht. Deutsche Unternehmen haben Verbesserungsbedarf bei den derzeitigen Steuerungsgrößen. In Report 3(4):10–11
Tidd J, Bessant J, Pavitt K (2005) Managing innovation: integrating technological, market and organizational change, 3rd edn. Wiley, London
Tipping JW, Zeffren E, Fusfeld AR (1995) Assessing the value of your technology. Res Technol Manage 38(5):22–63
Töpfer AH, Lindstädt G, Förster K (2002) Balanced Score Card – Hoher Nutzen trotz zu langer Einführungszeit. Controlling 14(2):79–84
Truffle 100 (2006) Ranking of the top 100 European software vendors, p 1
Truffle 100 (2009) Ranking of the top 100 European software vendors, p 9
Uhlmann L (1978) Der Innovationsprozeß in westeuropäischen Industrieländern. Duncker & Humblot, Berlin
United Nations Statistical Commission (2002) Central Product Classification (CPC)
Venkatraman N, Ramanujam V (1986) Measurement of business performance in strategy research: a comparison of approaches. Acad Manage Rev 11(4):801–814
von Bonsdorff C, Andersin HE (1995) Supporting the business process management paradigm by means of performance measurements. In: Proceedings of the CE96 conference "Concurrent engineering: a global perspective", August, McLean
Weber M (1992) Nutzwertanalyse. In: Frese E (ed) Handwörterbuch der Organisation, 3rd edn. Schäffer-Poeschel Verlag, Stuttgart, pp 1435–1447
Wettstein T (2002) Gesamtheitliches Performance Measurement – Vorgehensmodell und informationstechnische Ausgestaltung. Dissertation, Universität Freiburg, Schweiz
Wheelwright SC, Clark KB (1992) Revolutionizing product development. Free Press, New York
Wijnen G, Renes W, Storm R (1988) Projectmatig werken. Spectrum, Utrecht
Wolter O (1997) Entwicklung und praktische Erprobung eines Kennzahlensystems für das Total Quality Management. Dissertation, Techn. Univ. Berlin; IPK, Berlin
Woodman RW, Sawyer JE, Griffin RW (1993) Toward a theory of organizational creativity. Acad Manage Rev 18:293–321
Yin RK (2006) Case study research: design and methods. Sage, London
Yukl GA, Latham GP (1978) Interrelationships among employee participation, individual differences, goal difficulty, goal acceptance, goal instrumentality, and performance. Pers Psychol 31:305–323
Zammuto RF (1982) Assessing organizational effectiveness. State University of New York Press, Albany
Zeidler G (1986) Herausforderung der Hochtechnologie an das Management. In: Staudt E (ed) Das Management von Innovationen. Frankfurt a.M., p 321
Zimmermann G, Jöhnk T (2000) Erfahrungen der Unternehmenspraxis mit der Balanced Scorecard. Ein empirisches Schlaglicht. Controlling 12(12):601–606
Eidesstattliche Erklärung (Statutory Declaration)
I hereby declare in lieu of an oath that I have written the present thesis, Managing the Performance of Industrial Research in the ICT Industry – Assessing Research Goals with Suitable Key Performance Indicators, independently and using only the literature cited.